A Resistance Distance-Based Approach for Optimal
Leader Selection in Noisy Consensus Networks
arXiv:1708.06873v1 [math.OC] 23 Aug 2017
Stacy Patterson, Member, IEEE, Yuhao Yi, and Zhongzhi Zhang
Abstract—We study the performance of leader-follower noisy
consensus networks, and in particular, the relationship between
this performance and the locations of the leader nodes. Two types
of dynamics are considered: (1) noise-free leaders, in which leaders
dictate the trajectory exactly and followers are subject to external
disturbances, and (2) noise-corrupted leaders, in which both
leaders and followers are subject to external perturbations. We
measure the performance of a network by its coherence, an H2
norm that quantifies how closely the followers track the leaders’
trajectory. For both dynamics, we show a relationship between
the coherence and resistance distances in an electrical network.
Using this relationship, we derive closed-form expressions for
coherence as a function of the locations of the leaders. Further, we
give analytical solutions to the optimal leader selection problem
for several special classes of graphs.
I. INTRODUCTION
Consensus problems are an important class of problems in
networked and multi-agent systems. The consensus model has
been used to study a wide range of applications, including
opinion dynamics in social networks [1], information fusion in
sensor networks [2], formation control [3], and load balancing
in distributed computing systems [4]. Over the past decades,
much research effort has been devoted to analysis of the convergence behavior and robustness of consensus networks and
to the derivation of relationships between system performance
and graph theoretic properties.
A type of consensus problem that has received attention in
recent years is leader-follower consensus [5], [6], [7], [8],
[9], [10], [11]. In leader-follower systems, a subset of nodes
are leaders that track an external signal. The leaders, in
essence, dictate the desired trajectory of the network. The
remaining nodes are followers that update their states based
on relative information exchanges with neighbors. Leader-follower dynamics can be used to model formation control
where, due to bandwidth limitations, only a small subset
of agents can be controlled by a system operator [12]. In
addition, leader-follower systems can also be used to model
agreement dynamics in social networks in which some subset
of participants exhibit degrees of stubbornness [13]. Leader-follower dynamics have also been applied to the problem of
distributed sensor localization [14]. In leader-follower systems,
S. Patterson∗ is with the Department of Computer Science, Rensselaer
Polytechnic Institute, Troy, NY 12180. Email: [email protected], Phone: 518-276-2054 (* corresponding author)
Y. Yi and Z. Zhang are with the Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan
University, Shanghai 200433, China. Email: [email protected],
[email protected].
the system performance depends on the network topology and
the locations of the leaders. This dependence naturally leads
to the question of how to select the leaders so as to optimize
performance for a given topology.
We study the performance of leader-follower networks
where nodes are governed by consensus dynamics and are also
subject to stochastic external disturbances. We consider two
types of dynamics. In the first, referred to as noise-free leaders,
leaders are not subject to disturbances and thus track the
external signal exactly. In the second dynamics, called noise-corrupted leaders, all nodes are subject to external perturbations. As in many works on noisy consensus networks [5],
[15], [11], [10], we quantify the system performance by an
H2 norm that captures the steady-state variance of the node
states. We call this the coherence of the network. Coherence is
related to the spectrum of the Laplacian matrix of the network;
however, it is not always straightforward to relate this spectrum
to the network topology and locations of leaders.
In this work, we develop relationships between the steady-state
variance for a given leader set and resistance distances in a
corresponding electrical network. A similar approach was used
to study the performance of a single noise-free leader [12];
we generalize this notion to an arbitrary number of noise-free
leaders. Further, we develop a novel resistance-distance based
approach to study coherence in networks with an arbitrary
number of noise-corrupted leaders. We use this resistance
distance-based approach to analyze the coherence for different
network topologies. In special
classes of graphs, we can relate the resistance distance to graph
distance, which gives us the optimal leader locations in terms
of the graph distances between leaders. We also derive closed-form expressions for the optimal single noise-free and noise-corrupted leaders in weighted graphs, the optimal k noise-free leaders in cycles and paths, the optimal two noise-free leaders in trees, and the optimal two noise-corrupted leaders in cycles.
The leader selection problem for noise-free leaders was first
posed in [5]. This problem can be solved by an exhaustive
search over all subsets of nodes of size k, but this proves
computationally intractable for large graphs and large k.
Several works have proposed polynomial-time approximation
algorithms for the k-leader selection problem in noise-free
leader-follower systems [14], [7], [15], [8]. In particular, we
note that the solution presented in [15] yields a leader set
whose performance is within a provable bound of optimal.
With respect to analysis for the noise-free leader selection
problem, the recent work by Lin [9] gives asymptotic scalings
of the steady-state variance in directed lattice graphs for a
single noise-free leader, based on the graph distance from the
leader. Our recent work [11] gives polynomial-time algorithms
for optimal k-leader selection in weighted, undirected cycles
and path graphs. The leader-selection problem for noise-corrupted leaders was first posed by Lin et al. [7], who also
gave heuristic-based bounds and algorithms for its solution.
In addition, other performance measures have been considered
for the leader selection problem including controllability [16],
[17] and convergence rate [6], [11].
The recent works by Fitch and Leonard [10], [17] study the
optimal leader selection problem for noise-free and noise-corrupted leaders. These works also relate the steady-state
variance to a graph theoretic concept, in this case, graph centrality. The authors define centrality measures that capture the
performance of a given leader set. They then use this analysis
to identify the optimal leader sets for various classes of graphs.
We note that this work identifies the optimal single leader for
noise-free and noise-corrupted graphs under slightly stronger
assumptions than we make in our approach. In addition, [10]
identifies the optimal k noise-free leaders in cycles, under
the restriction that the number of nodes in the cycle is a
multiple of k. We address cycles with an arbitrary number of
nodes and provide a closed-form expression for the resulting
steady-state variance for any leader set based on the graph
distance between leaders. We view our proposed approach as
complementary to that in [10]; for some classes of networks,
analysis is more straightforward under the resistance distance
interpretation. Thus, our work expands the classes of networks
that have known analytical solutions. A preliminary version
of our work appeared in [18]. This earlier work gave analysis
for noise-free leader selection in cycle and path graphs only,
using the related concept of commute times of random walks
rather than resistance distance. Our resistance-distance based
approach greatly simplifies the analysis and presentation.
The remainder of this paper is organized as follows. Section II
describes the system model and dynamics, and it formalizes the leader selection problems. Section III describes the
relationship between the system performance and resistance
distance for both noise-free and noise-corrupted leaders.
This section also presents analysis of resistance distance for
“building blocks”, i.e., components of graphs, that will be
used to analyze specific graph topologies. Section IV gives
closed-form solutions for the leader selection problem for
various classes of graphs. In Section V, we compare the
asymptotic behavior of coherence in leader-free and leader-follower consensus networks, and in Section VI, we give an
algorithm and a numerical example for increasing the size of a
binary tree while maintaining the optimality of the two noise-free leaders. Finally, we conclude in Section VII.
II. SYSTEM MODEL AND PROBLEM FORMULATION
We consider a network of n agents, modeled by an undirected,
connected graph G = (V, E, W ), where V is the set of agents,
also called nodes, and E is the set of edges. The weight of edge
(i, j), denoted by wij , corresponds to the (i, j)th component
of the symmetric weighted adjacency matrix W . We let D
denote the diagonal matrix of weighted node degrees, with diagonal entries d_{ii} = \sum_{j \in V} w_{ij}. The matrix L = D - W is
thus the weighted Laplacian matrix of the graph G.
Each node i ∈ V has a scalar-valued state xi . The objective
is for all node states to track an external signal x ∈ R. Some
subset of nodes F ⊂ V are followers that update their states
using noisy consensus dynamics, i.e.,
    \dot{x}_i = -\sum_{j \in N(i)} w_{ij} (x_i - x_j) + d_i,    (1)
where N(i) denotes the neighbor set of node i, and d_i is a zero-mean, unit variance, white stochastic noise process. The remaining set of nodes S = V \ F are leaders; leader nodes have access to x.
We write the state of the system as x^T = [x_l^T \; x_f^T], where x_l are the leader states and x_f are the follower states. We can then decompose the Laplacian of G as:
    L = \begin{bmatrix} L_{ll} & L_{lf} \\ L_{fl} & L_{ff} \end{bmatrix}.
A. Noise-Free Leader Dynamics
We consider two types of leader dynamics. In the first, called
noise-free leaders, leader states are dictated solely by x.
Without loss of generality, we assume x = 0 [5], so leader
nodes update their states as:
ẋi = −κi (xi − x) = −κi xi ,
where κi ∈ R+ is the weight node i gives to the external
signal, sometimes referred to as the degree of stubbornness
of node i. The dynamics of the follower nodes can then be
written as:
    \dot{x}_f = -L_{ff} x_f + d_f,    (2)
where L_{ff} is the principal submatrix of the Laplacian corresponding to the follower nodes, and d_f is the vector of noise
processes for the followers.
We quantify the performance of the system for a given leader
set S by its coherence, i.e., the total steady-state variance of
the follower nodes. This value is related to Lf f as follows [5],
    R_{NF}(S) = \lim_{t \to \infty} \sum_{i \in V \setminus S} \mathbf{E}\left[x_i(t)^2\right] = \frac{1}{2} \mathrm{tr}\left((L_{ff})^{-1}\right).    (3)
Note that L_{ff} is positive definite for any S \neq \emptyset [5], and thus,
RN F (S) is well defined. The total variance depends on the
choice of leader nodes.
The noise-free leader selection problem is to identify the leader set S of size at most k, such that R_{NF}(S) is as small as possible, i.e.,
    minimize  R_{NF}(S)
    subject to  |S| \leq k.    (4)
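As an illustration of (3) and of the search problem (4), the following short Python sketch (ours, not from the paper; it assumes a small unweighted example graph and uses numpy) evaluates R_NF(S) as half the trace of the inverse follower-follower Laplacian block and finds the best leader set of size k by exhaustive enumeration.

import itertools
import numpy as np

def laplacian(A):
    """Weighted graph Laplacian L = D - W."""
    return np.diag(A.sum(axis=1)) - A

def r_nf(A, leaders):
    """Noise-free coherence (3): half the trace of the inverse of the
    follower-follower block of the Laplacian."""
    n = A.shape[0]
    followers = [i for i in range(n) if i not in leaders]
    Lff = laplacian(A)[np.ix_(followers, followers)]
    return 0.5 * np.trace(np.linalg.inv(Lff))

def best_leaders(A, k):
    """Exhaustive solution of problem (4); only feasible for small graphs."""
    n = A.shape[0]
    return min(itertools.combinations(range(n), k), key=lambda S: r_nf(A, set(S)))

# Example: a 6-node path graph with unit edge weights.
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
S = best_leaders(A, 2)
print(S, r_nf(A, set(S)))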
B. Noise-Corrupted Leader Dynamics
[Fig. 1: Augmented cycle graph with noise-corrupted leader nodes 1 and i.]
We also consider dynamics with noise-corrupted leaders. In this case, the leader nodes update their states using both consensus dynamics and the external signal, and the leader states are also subject to external disturbances. We again assume, without loss of generality, that x is 0. The dynamics for leader node i are then:
    \dot{x}_i = -\sum_{j \in N(i)} w_{ij}(x_i - x_j) - \kappa_i x_i + d_i,
where κi is the degree of stubbornness of node i, i.e., the
weight that it gives to its own state. The dynamics of the
entire system can be written as:
    \dot{x} = -(L + D_\kappa D_S)x + d,    (5)
where d is a vector of zero-mean white noise processes that
affect all nodes, Dκ is the diagonal matrix of degrees of
stubbornness, and DS is a diagonal (0,1) matrix with its (i, i)th
entry equal to 1 if node i is a leader and 0 otherwise. We note
that if S \neq \emptyset, then L + D_\kappa D_S is positive definite [19].
As with noise-free leaders, we define the performance of the
system for a given set of leaders S in terms of the total steady-state variance, which is given by [7],
    R_{NC}(S) = \frac{1}{2} \mathrm{tr}\left((L + D_\kappa D_S)^{-1}\right).    (6)
The noise-corrupted leader selection problem is to identify the
set of at most k leaders that minimizes this variance, i.e.,
    minimize  R_{NC}(S)
    subject to  |S| \leq k.    (7)
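A companion sketch (again illustrative only; unit stubbornness κ_i = 1 is assumed for every leader) evaluates the noise-corrupted coherence (6) and solves (7) by the same exhaustive enumeration.

import itertools
import numpy as np

def r_nc(A, leaders, kappa=1.0):
    """Noise-corrupted coherence (6): 0.5 * tr((L + D_kappa D_S)^{-1}),
    assuming every leader uses the same stubbornness kappa."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    Ds = np.zeros((n, n))
    for i in leaders:
        Ds[i, i] = kappa
    return 0.5 * np.trace(np.linalg.inv(L + Ds))

def best_nc_leaders(A, k):
    """Exhaustive solution of problem (7) for small graphs."""
    n = A.shape[0]
    return min(itertools.combinations(range(n), k), key=lambda S: r_nc(A, set(S)))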
III. RELATIONSHIP TO RESISTANCE DISTANCE

For a graph G = (V, E, W), consider an electrical network with V the set of nodes and E the set of edges, where each edge (i, j) has resistance 1/w_{ij}. The resistance distance between two nodes i and j, denoted r(i, j), is the potential difference between i and j when a unit current is applied between them. Let L_j denote the Laplacian matrix of G where the row and column of node j have been removed. It has been shown that [20],
    r(i, j) = L_j^{-1}(i, i),    (8)
i.e., r(i, j) is given by the (i, i)th component of L_j^{-1}.
We now show how the performance measures R_{NF}(S) and R_{NC}(S) can be expressed in terms of resistance distances.

A. Noise-Free Leaders

For a single noise-free leader v, it follows directly from (8) that the total steady-state variance is determined by the resistance distances from all follower nodes to leader node v,
    R_{NF}(\{v\}) = \frac{1}{2} \sum_{i \in V \setminus \{v\}} r(i, v).    (9)
This relationship can be generalized to multiple noise-free leaders. In this case, the resistance distance r(i, S) is the potential difference between follower node i and the leader set S with unit current.

Proposition 1. The resistance distance r(i, S) from a node i \in V \setminus S to a leader set S \neq \emptyset is related to L_{ff} as:
    r(i, S) = L_{ff}^{-1}(i, i).

Proof: Let B \in \mathbb{R}^{|E| \times |V|} be the incidence matrix of G. For each edge e = (i, j) \in E, a direction is assigned arbitrarily. B(e, i) = 1 if node i is the tail of edge e, B(e, i) = -1 if node i is the head of edge e, and B(e, i) = 0 otherwise. A resistance r is assigned to each edge e = (i, j) such that r(e) = 1/w_{ij}. Let K \in \mathbb{R}^{|E| \times |E|} be a diagonal matrix with K(e, e) = r(e). It is easy to verify that B^T K^{-1} B = L. Let i \in \mathbb{R}^{|E|} represent the current across all edges, and let v \in \mathbb{R}^{|V|} represent the voltages at all vertices. By Kirchhoff's law, B^T i = c, where c \in \mathbb{R}^{|V|} denotes the external currents injected at all vertices, and by Ohm's law, Ki = Bv. It follows that,
    Lv = c.
Let v_j = 0 for all leaders j \in S, and thus v^T = [0 \; v_f^T], where v_f denotes the voltages for the follower nodes. Let c_i = 1 for follower i and c_k = 0 for followers k \neq i. Expanding Lv = c, we obtain,
    \begin{bmatrix} L_{ll} & L_{lf} \\ L_{fl} & L_{ff} \end{bmatrix} \begin{bmatrix} 0 \\ v_f \end{bmatrix} = \begin{bmatrix} c_l \\ e_i \end{bmatrix},
where e_i is the canonical basis vector. Therefore, L_{ff} v_f = e_i. Since L_{ff} is positive definite, and thus invertible, we have
    r(i, S) = v_i = L_{ff}^{-1}(i, i).

The coherence for a set of noise-free leaders is given in the following theorem, which follows directly from Proposition 1 and (3).

Theorem 2. Let G be a network with noise-free leader dynamics, and let S be the set of leaders. The coherence of G is:
    R_{NF}(S) = \frac{1}{2} \sum_{i \in V \setminus S} r(i, S).
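To make Proposition 1 and Theorem 2 concrete, the following sketch (our own numerical check, not part of the paper) computes r(i, S) in two ways on a small unweighted cycle: from the diagonal of L_ff^{-1}, and as the effective resistance to a supernode obtained by contracting the leader set, via the pseudoinverse of the contracted Laplacian. The two agree, and half their sum is the coherence of Theorem 2.

import numpy as np

def contract_leaders(A, leaders):
    """Merge the leader set S into a single supernode (last index).
    Edges from a follower to different leaders are summed."""
    n = A.shape[0]
    followers = [i for i in range(n) if i not in leaders]
    m = len(followers)
    B = np.zeros((m + 1, m + 1))
    for a, i in enumerate(followers):
        for b, j in enumerate(followers):
            B[a, b] = A[i, j]
        B[a, m] = B[m, a] = sum(A[i, j] for j in leaders)
    return B, followers

def resistance_to_leaders(A, leaders):
    """r(i, S) for every follower i, via the pseudoinverse of the contracted
    Laplacian (standard two-point effective resistance formula)."""
    B, followers = contract_leaders(A, leaders)
    L = np.diag(B.sum(axis=1)) - B
    Lp = np.linalg.pinv(L)
    s = B.shape[0] - 1
    return {i: Lp[a, a] + Lp[s, s] - 2 * Lp[a, s] for a, i in enumerate(followers)}

# Check against Proposition 1 / Theorem 2 on a small cycle.
n, S = 7, {0, 3}
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
followers = [i for i in range(n) if i not in S]
Lff_inv = np.linalg.inv(L[np.ix_(followers, followers)])
r = resistance_to_leaders(A, S)
assert np.allclose([r[i] for i in followers], np.diag(Lff_inv))
print(0.5 * sum(r.values()))   # coherence R_NF(S), per Theorem 2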
B. Noise-Corrupted Leaders
For the case of noise-corrupted leaders, we obtain our expression for the coherence by constructing an augmented network.
Let G = (V, E, W) be an undirected weighted graph, and let S ⊆ V be a set of noise-corrupted leaders. We form the augmented graph \bar{G} from G by adding a single node s to G and creating an edge from each node i \in S to s, with edge weight \kappa_i. An example is shown in Fig. 1 for an n-node cycle. The noise-corrupted leaders are nodes 1 and i. We let r(u, v) denote the resistance distance between nodes u and v in \bar{G}.
The relationship between resistance distances in \bar{G} and the coherence with a set S of noise-corrupted leaders is given in the following theorem.

Theorem 3. Let G = (V, E, W) be a network with noise-corrupted leader dynamics, and let S be the set of leaders. Let \bar{G} = (\bar{V}, \bar{E}, \bar{W}) be the corresponding augmented graph. Then, the coherence of G is:
    R_{NC}(S) = \frac{1}{2} \sum_{i \in V} r(i, s).

Proof: Let L be the weighted Laplacian of G, and let \bar{L} be the weighted Laplacian of \bar{G}. We denote by \bar{L}_s the matrix formed from \bar{L} by removing the row and column corresponding to node s. We first note that, by the construction of \bar{G}, \bar{L}_s = L + D_S D_\kappa. By (8), for any node i \in V,
    r(i, s) = (L + D_S D_\kappa)^{-1}(i, i),
from which we obtain:
    \sum_{i \in V} r(i, s) = \mathrm{tr}\left((L + D_S D_\kappa)^{-1}\right) = 2 R_{NC}(S),
where the last equality follows from (6).

C. Useful Results on Resistance Distance

We conclude this section by stating some useful results on resistance distance.

Proposition 4. Let S ⊆ V, and for a node i \in V, let U_i ⊆ S be the set of nodes in S for which there is a path from i to some j \in U_i that does not traverse any other element in S. Then r(i, S) = r(i, U_i).
This proposition follows directly from the definition of resistance distance.

Lemma 5 ([20] Thm. D). Consider an undirected connected graph G = (V, E, W), and let d_{uv} denote the graph distance between u, v \in V, i.e., the sum of the edge weights along the shortest path between u and v. Then, r(u, v) \leq d_{uv}, with equality if and only if there is a single path between u and v.

Lemma 6. Consider a weighted, undirected graph G = (V, E, W), partitioned into two components A = (V_A, E_A, W_A) and B = (V_B, E_B, W_B) that share only a single vertex {x}. Let S ⊆ V_B. Then for any u \in V_A,
    r(u, S) = r(u, x) + r(x, S).
Lemma 6 is a generalization of Lemma E from [20].

Lemma 7. Consider a weighted, undirected path graph, with end vertices x and y. Let u be a vertex on the path. For any vertices i, j on the path, let d_{ij} denote their graph distance. Then,
    r(u, \{x, y\}) = d_{ux} - \frac{d_{ux}^2}{d_{xy}} = d_{uy} - \frac{d_{uy}^2}{d_{xy}}.    (10)

Proof: We start by assigning 0 voltage to x and y. Then, we impose a unit external current at u, which flows out from {x, y}. By definition, r(u, {x, y}) = v_u, as defined in the proof of Proposition 1. It follows that,
    r(u, \{x, y\}) = \frac{1}{\frac{1}{r(u,x)} + \frac{1}{r(u,y)}} = \frac{1}{\frac{1}{d_{ux}} + \frac{1}{d_{uy}}}    (11)
    = d_{ux} - \frac{d_{ux}^2}{d_{xy}} = d_{uy} - \frac{d_{uy}^2}{d_{xy}},    (12)
where the second equality follows from Lemma 5, and (12) is obtained from (11) by applying the equality d_{ux} + d_{uy} = d_{xy}.

Theorem 8 ([21], Thm. 2.1). Let G' = (V, E', W') be the graph formed by adding edge (i, j) to the connected, undirected graph G = (V, E, W), with edge weight w_{ij}. For p, q \in V, let r(p, q) denote their resistance distance in G, and let r'(p, q) denote their resistance distance in G'. Then,
    r'(p, q) = r(p, q) - \frac{w_{ij}\left[r(p, i) + r(q, j) - r(p, j) - r(q, i)\right]^2}{4\left(1 + w_{ij}\, r(i, j)\right)}.

IV. LEADER SELECTION ANALYSIS

In this section, we use the resistance distance based formulations for coherence to provide closed-form solutions to the leader selection problems for several classes of networks.
We first consider the case of a single leader v. For the noise-free case,
    R_{NF}(\{v\}) = \frac{1}{2} \sum_{u \in V \setminus \{v\}} r(u, v).    (13)
The expression (13) shows that the optimal single noise-free leader is the node with minimal total resistance distance to all other nodes. As shown in [17], this corresponds to the node with maximal information centrality.
In the noise-corrupted case,
    R_{NC}(\{v\}) = \frac{1}{2} \sum_{u \in V} r(u, s) = \frac{1}{2}\left( \sum_{u \in V \setminus \{v\}} r(u, v) + \frac{|V|}{\kappa_v} \right),    (14)
where the last equality follows from Lemma 6. If all nodes exhibit the same degree of stubbornness, then the optimal noise-free leader and the optimal noise-corrupted leader coincide. However, if nodes exhibit different degrees of stubbornness, the single best leader may differ for the two dynamics.
We next explore the leader selection problems for k > 1 leaders. For the remainder of this section, we restrict our study to networks where all edge weights and all degrees of stubbornness, \kappa_i, are equal to 1.
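Before turning to specific graph classes, the augmented-graph characterization of Theorem 3 is easy to verify numerically. The sketch below (illustrative only; unit weights and κ = 1 assumed) appends the node s, computes r(i, s) from the pseudoinverse of the augmented Laplacian, and compares (1/2) Σ_i r(i, s) against the direct trace formula (6).

import numpy as np

def augmented_graph(A, leaders, kappa=1.0):
    """Append node s and connect it to every leader with weight kappa."""
    n = A.shape[0]
    Ab = np.zeros((n + 1, n + 1))
    Ab[:n, :n] = A
    for i in leaders:
        Ab[i, n] = Ab[n, i] = kappa
    return Ab

def coherence_nc_via_resistance(A, leaders, kappa=1.0):
    """Theorem 3: R_NC(S) = 0.5 * sum_i r(i, s) in the augmented graph."""
    Ab = augmented_graph(A, leaders, kappa)
    L = np.diag(Ab.sum(axis=1)) - Ab
    Lp = np.linalg.pinv(L)
    s = Ab.shape[0] - 1
    return 0.5 * sum(Lp[i, i] + Lp[s, s] - 2 * Lp[i, s] for i in range(A.shape[0]))

# Cross-check against (6) on a small cycle with leaders {0, 3}.
n, S, kappa = 8, {0, 3}, 1.0
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
Ds = np.diag([kappa if i in S else 0.0 for i in range(n)])
direct = 0.5 * np.trace(np.linalg.inv(L + Ds))
print(direct, coherence_nc_via_resistance(A, S, kappa))   # the two values agree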
A. k Noise-Free Leaders in a Cycle

Consider a cycle of n nodes, identified by 1, 2, ..., n in a clockwise direction. We use the notation x ≺ y to mean that node x precedes node y on the ring, clockwise, and d_{xy} denotes the graph distance between nodes x and y where x ≺ y. For example, for n = 5, where x = 4 and y = 2, x ≺ y and d_{xy} = 3.

Theorem 9. Let G be an n-node cycle with k noise-free leaders, with n written n = k\ell + q, where \ell and q are integers with 0 \leq q < k. Let S = {s_1, ..., s_k} be the leaders and let c be a k-vector of graph distances between adjacent leaders, i.e., c_i is the distance between leader s_i and leader s_{i+1}, for i = 1 ... k-1, and c_k is the distance between leader s_k and leader s_1. Then:
1) The coherence of G is
    R_{NF}(S) = \frac{1}{12}\left(c^T c - k\right).
2) S is an optimal solution to the k-leader selection problem if and only if c \in C, where
    C = \left\{ d \;\middle|\; d_i \in \{\ell, \ell+1\},\; i = 1 \ldots k,\; \sum_{i=1}^{k} d_i = n \right\}.

Proof: We first find the total resistance distance to S for all nodes j with s_i ≺ j ≺ s_{i+1},
    \sum_{s_i \prec j \prec s_{i+1}} r(j, S) = \sum_{s_i \prec j \prec s_{i+1}} r(j, \{s_i, s_{i+1}\})    (15)
    = \sum_{\ell=1}^{c_i - 1} \left( \ell - \frac{\ell^2}{c_i} \right)    (16)
    = \frac{1}{6}\left(c_i^2 - 1\right),    (17)
where (15) follows from Proposition 4, and (16) follows from (15) by Lemma 7. Applying Proposition 1, we obtain,
    R_{NF}(S) = \frac{1}{2} \sum_{i=1}^{k} \frac{1}{6}\left(c_i^2 - 1\right) = \frac{1}{12}\left(c^T c - k\right).
With this, we can express problem (4) as an integer quadratic program:
    minimize  \frac{1}{12}\left(c^T c - k\right)    (18)
    subject to  \mathbf{1}^T c = n    (19)
                c_i \in \{1, \ldots, n-1\}.    (20)
If k divides n, then it is straightforward to verify that c^* = \frac{n}{k}\mathbf{1} is a solution to the above problem. In this case, \ell = n/k.
For n = k\ell + q, with q > 0, assume that c is a solution to (4) but c \notin C. Then there exists some component c_i such that c_i = \ell + 1 + x for some integer x > 0 and some component c_j such that c_j = \ell - y for some integer y > 0. Let c' be such that c'_i = c_i - 1 and c'_j = c_j + 1 and c'_m = c_m for all m \neq i, m \neq j. Clearly, \frac{1}{12}\left(c^T c - k\right) > \frac{1}{12}\left((c')^T c' - k\right), which contradicts our assumption that c is a solution to (4).

B. k Noise-Free Leaders in a Path

Consider a path graph with n nodes, identified by 1, 2, ..., n. Let d_{uv} denote the graph distance between nodes u and v.

Theorem 10. Let G be a path graph with n nodes, and let S = {s_1, ..., s_k} be a set of k noise-free leaders. Let c be a (k+1)-vector, where c_1 = (s_1 - 1) and c_{k+1} = n - s_k. Let c_i = s_i - s_{i-1}, for i = 2, ..., k. Then,
1) The coherence of G is:
    R_{NF}(S) = \frac{1}{4}\left(c_1^2 + c_{k+1}^2 + c_1 + c_{k+1}\right) + \frac{1}{12} \sum_{i=2}^{k} \left(c_i^2 - 1\right).    (21)
2) Let n be such that, for the optimal leader configuration, it holds that c_1 + c_{k+1} = a, where 2 divides a, and b = (n-1) - a, where (k-1) divides b. Then, the optimal solution to the k-leader selection problem is:
    c_1 = c_{k+1} = \mathrm{round}\left( \frac{2(n-1) - 3(k-1)}{6(k-1) + 4} \right)    (22)
    c_i = \frac{1}{k-1}\left( (n-1) - 2c_1 \right), \quad i = 2 \ldots k.    (23)

Proof: We first find the total resistance distance to S for all nodes u with 1 \leq u < s_1:
    \sum_{u=1}^{s_1 - 1} r(u, S) = \frac{c_1(c_1 + 1)}{2},    (24)
where the first equality follows from Proposition 4 and Lemma 5. Similarly, the total resistance distance to S for all nodes v > s_k is:
    \sum_{v=s_k + 1}^{n} r(v, S) = \frac{c_{k+1}(c_{k+1} + 1)}{2}.    (25)
The total resistance distances to S for all nodes u between s_i and s_{i+1} can be obtained in a similar fashion to (15)-(17),
    \sum_{u=s_i + 1}^{s_{i+1} - 1} r(u, S) = \frac{1}{6}\left(c_{i+1}^2 - 1\right).    (26)
Combining (24), (25), and (26) with Proposition 1, we obtain,
    R_{NF}(S) = \frac{1}{2} \sum_{u \in V \setminus S} r(u, S) = \frac{1}{4}\left(c_1^2 + c_{k+1}^2 + c_1 + c_{k+1}\right) + \frac{1}{12} \sum_{i=2}^{k} \left(c_i^2 - 1\right).
To find the optimal leader locations, we must solve the optimization problem,
    minimize  c^T P c + r^T c - \frac{k-1}{12}
    subject to  \mathbf{1}^T c = n - 1    (27)
                c_i \in \{1, \ldots, n-1\},
where P is the (k+1) \times (k+1) diagonal matrix with diagonal components [\frac{1}{4}, \frac{1}{12}, \ldots, \frac{1}{12}, \frac{1}{4}], and r is a (k+1)-vector with r_1 = r_{k+1} = \frac{1}{4}, and all other entries equal to 0.
Let c^* be a solution to (27). Using a similar argument to that in the proof of Theorem 9, we can conclude c^*_1 = c^*_{k+1} = a/2 for some even integer a, i.e., that leaders s_1 and s_k should each be the same distance from their respective ends of the path. Similarly, c^*_i = b/(k-1) for i = 2 \ldots k, i.e., the leaders s_2, ..., s_{k-1} should be equidistant.
Let q = a/2, so that the optimal leader placement has c^*_1 = c^*_{k+1} = q and c^*_i = \frac{1}{k-1}\left((n-1) - 2q\right), for i = 2 \ldots k. Then, we can reframe (27) as minimize_{q \in \{1, \ldots, n\}} C(q), where,
    C(q) = \frac{1}{2}\left(q^2 + q\right) + \frac{k-1}{12}\left( \left(\frac{(n-1) - 2q}{k-1}\right)^2 - 1 \right).
Relaxing the integer constraint, the value of q that minimizes C(q) is,
    q^* = \frac{2(n-1) - 3(k-1)}{6(k-1) + 4}.
Since C(q) is quadratic, the optimal integer value for q is round(q^*). The values for c_i, i = 1 \ldots k+1, in (22)-(23) follow from the definition of q above.
While the restriction that c_1 = c_{k+1} and c_i = c_{i+1}, i = 2 \ldots k, does not hold for all network sizes, it can be shown experimentally to hold for many. An example is a path graph with n = 40 and k = 4, where c_1 = c_5 = 3 and c_2 = c_3 = c_4 = 11.

C. Two Noise-Free Leaders in Trees

We next consider the 2-leader selection problem in rooted, undirected M-ary trees. An M-ary tree is a rooted tree where each node has at most M children. A perfect M-ary tree is an M-ary tree in which all non-leaf nodes have exactly M children and all leaves are in the same level. Let r denote the root node of the tree, and let h denote its height. We number the levels of the tree starting with the root, as 0, 1, 2, ..., h. The root of the tree is at level 0, and the leaves of a perfect M-ary tree of height h are at level h. We use lev(x) to denote the level of a node.
We begin with the following lemma, which gives general guidance for the optimal location of two leader nodes.

Lemma 11. Consider a perfect M-ary tree T = (V, E). Let x, y \in V, x \neq y be such that their lowest common ancestor is a node of level \ell > 0. Then, there exist y, z \in V, y \neq z, with lowest common ancestor r such that R_{NF}(\{x, y\}) > R_{NF}(\{y, z\}).

The proof of this lemma is given in Appendix A. Lemma 11 tells us that the optimal 2-leader set will not have two nodes in the same subtree of a child of r.
We denote these two leaders by x and y, and assume their lowest common ancestor is r. Without loss of generality, we assume lev(x) \leq lev(y). We denote the graph distances between x and y, x and r, and y and r by d_{xy}, d_{xr} and d_{yr}, respectively. To study the coherence of this system, we decompose the tree into three subgraphs: (1) the subtree of T rooted at y, denoted T_y = (V_y, E_y), (2) the subtree of T rooted at x, excluding those nodes in T_y, denoted by T_x = (V_x, E_x), and (3) the induced subgraph of T consisting of nodes V - (V_x \cup V_y) \cup \{x, y\}, which is denoted by G_{xy} = (V_{xy}, E_{xy}). Note that by Proposition 4, for u \in V_x, it holds that r(u, S) = r(u, x). Similarly, for u \in V_y, we have r(u, S) = r(u, y). We can therefore decompose R_{NF}(S) as,
    R_{NF}(\{x, y\}) = \frac{1}{2}\left( \sum_{u \in V_x} r(u, x) + \sum_{u \in V_y} r(u, y) + \sum_{u \in V_{xy}} r(u, \{x, y\}) \right)    (28)
    = \frac{1}{2}\left( \sum_{u \in V_x} d_{ux} + \sum_{u \in V_y} d_{uy} + \sum_{u \in V_{xy}} r(u, \{x, y\}) \right),    (29)
where (29) follows from (28) by Lemma 5.
With this decomposition, we can apply the building blocks described in Section III-C to identify the optimal 2 noise-free leaders in M-ary trees for various values of M. We begin with M = 2.

Theorem 12. For the noise-free 2-leader selection problem in a perfect binary tree with height h \geq 4, the optimal leaders are such that d_{xy} = 4 and d_{xr} = 2, and the resulting coherence is:
    R_{NF}(S) = \frac{n+1}{2}\left( \log_2(n+1) - \frac{25}{8} \right) + \frac{7}{2}.    (30)

The proof of Theorem 12 is given in Appendix B.
It is interesting to note that the optimal leader locations are independent of the height of the tree. This independence of the height also holds for M > 2, as shown in the following theorems. Proofs are given in Appendix B.

Theorem 13. For the noise-free 2-leader selection problem in a perfect ternary tree T(3) with height h \geq 4, the optimal leaders are such that d_{xy} = 2 and d_{xr} = 1, and the resulting coherence is:
    R_{NF}(S) = \frac{2n+1}{4}\left( \log_3(2n+1) - 2 \right) + 1.    (31)

Theorem 14. For the noise-free 2-leader selection problem in a perfect M-ary tree T, with M \geq 4 and h \geq 4, the optimal leaders are such that d_{xy} = 1 and d_{xr} = 0, and the resulting coherence is:
    R_{NF}(S) = \frac{1}{2}\left(n + \frac{1}{M-1}\right) \log_M(nM - n + 1) - \frac{n(M^2 + M - 1)}{2M(M-1)} + \frac{1}{2M}.    (32)
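Theorems 12-14 can be spot-checked by brute force on a small tree. The sketch below (our own check, numpy only) builds a perfect binary tree of height 4, searches all leader pairs, and reports the best pair together with the value predicted by (30); by Theorem 12 the best pair should have d_xy = 4 and d_xr = d_yr = 2.

import itertools
import numpy as np

def perfect_binary_tree(h):
    """Adjacency matrix of a perfect binary tree of height h (root = node 0,
    heap numbering: children of v are 2v+1 and 2v+2)."""
    n = 2 ** (h + 1) - 1
    A = np.zeros((n, n))
    for child in range(1, n):
        parent = (child - 1) // 2
        A[child, parent] = A[parent, child] = 1.0
    return A

def r_nf(A, leaders):
    F = [i for i in range(A.shape[0]) if i not in leaders]
    L = np.diag(A.sum(axis=1)) - A
    return 0.5 * np.trace(np.linalg.inv(L[np.ix_(F, F)]))

h = 4
A = perfect_binary_tree(h)
n = A.shape[0]
best = min(itertools.combinations(range(n), 2), key=lambda S: r_nf(A, set(S)))
depth = lambda v: int(np.floor(np.log2(v + 1)))   # level of node v, root at level 0
print("best pair:", best, "coherence:", r_nf(A, set(best)))
print("levels of the two leaders:", depth(best[0]), depth(best[1]))
print("formula (30):", (n + 1) / 2 * (np.log2(n + 1) - 25 / 8) + 7 / 2)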
D. Two Noise-Corrupted Leaders in a Cycle

Consider an n-node cycle with nodes labeled {1, 2, ..., n}. We use Theorem 8 to determine the coherence of the graph as a function of the graph distance between nodes 1 and i.

Theorem 15. In an n-node cycle with two noise-corrupted leaders, where n is even, the coherence is minimized when the leaders are at distance n/2 apart, and the resulting coherence is:
    R_{NC}(S) = \frac{n^3 + 16n^2 + 44n - 16}{24(n + 8)}.    (33)
Proof: Without loss of generality, we assume node 1 and node i are noise-corrupted leaders. Let the graph G be the augmented graph shown in Fig. 1, omitting edge (i, s). By Lemma 7, for arbitrary nodes u, v \in \{1, 2, ..., n\}, their resistance distance in G is:
    r(u, v) = \frac{|u - v|(n - |u - v|)}{n},
and the resistance distance from a node u \in \{1, 2, ..., n\} to s is:
    r(u, s) = r(u, 1) + 1 = \frac{(u - 1)(n - (u - 1))}{n} + 1.
Let G' be the graph formed from G by the addition of edge (i, s). Then, for a node u \in \{1, 2, ..., n\}, the resistance distance from u to s in G' is:
    r'(u, s) = r(u, s) - \frac{\left[r(u, i) + r(s, s) - r(u, s) - r(i, s)\right]^2}{4\left(1 + r(i, s)\right)}
    = \frac{(u - 1)(n - (u - 1))}{n} + 1 - \frac{\left[\frac{|u - i|(n - |u - i|)}{n} - \frac{(u - 1)(n - (u - 1))}{n} - \frac{(i - 1)(n - (i - 1))}{n} - 2\right]^2}{4\left(1 + \frac{(i - 1)(n - (i - 1))}{n} + 1\right)}.
By Theorem 3, summing over all nodes u, we obtain:
    R_{NC}(S) = \frac{1}{2} \sum_{u=1}^{n} r'(u, s) = \frac{1}{12}\left(n^2 + 6n - 1\right)
    - \frac{1}{12n\left(2 + \frac{(i-1)(n-(i-1))}{n}\right)} \left[ 2i^4 - 4i^3(n + 2) + i^2(2n^2 + 6n + 11) + i(2n^2 + n - 6) + 2n^2 - 3n + 1 \right].    (34)
We note that this function is continuous over the interval [1, n]. We then take the derivative with respect to i:
    \frac{\partial}{\partial i} R_{NC}(S) = \frac{1}{6\left(-i^2 + i(n + 2) + n - 1\right)^2} \left[ 2i^5 - 5i^4(n + 2) + 4i^3(n^2 + 3n + 5) - i^2(n^3 + 6n + 20) - 2i(n^3 + 3n^2 + n - 5) + n^2 + n - 2 \right].
The derivative has five roots. Of these, only i = (n + 2)/2 lies in the interval [1, n]. Further, it is a minimum of R_{NC}(S). For even n, we substitute this value of i into (34) to obtain (33).
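Theorem 15 can also be checked directly. The following sketch (illustrative only; unit edge weights and κ = 1) sweeps the position i of the second leader around an even cycle, evaluates (6), and compares the minimum with the closed form (33).

import numpy as np

def r_nc_cycle(n, i):
    """R_NC({1, i}) for an n-node unit cycle with kappa = 1 (nodes 1-indexed)."""
    A = np.zeros((n, n))
    for u in range(n):
        A[u, (u + 1) % n] = A[(u + 1) % n, u] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    Ds = np.zeros((n, n))
    Ds[0, 0] = Ds[i - 1, i - 1] = 1.0
    return 0.5 * np.trace(np.linalg.inv(L + Ds))

n = 12
values = {i: r_nc_cycle(n, i) for i in range(2, n + 1)}
i_best = min(values, key=values.get)
closed_form = (n**3 + 16 * n**2 + 44 * n - 16) / (24 * (n + 8))
print(i_best, values[i_best], closed_form)   # i_best - 1 == n/2, values agree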
Algorithm 1 Algorithm to add nodes to a perfect binary tree
of height h, while maintaining optimality of the 2 leaders.
Input: Th , with optimal 2 leaders x and y
while there is a new node u to add do
if last level of left or right subtree of x is not filled then
Add node u to level h+1 of subtree of x with fewer leaves,
breaking ties arbitrarily.
else if last level of left or right subtree of y is not filled then
Add node u to level h+1 of subtree of y with fewer leaves,
breaking ties arbitrarily.
else
Add node u as leaf on level h + 1, in any remaining
location.
end if
if level h + 1 is filled then
h←h+1
end if
end while
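The following Python rendering of Algorithm 1 is a rough sketch under one reading of its tie-breaking rule (we simply take the first open slot in a leader's subtree rather than balancing between the leader's two child subtrees); the node numbering and helper functions are ours, not the authors'.

import math

def descendants(children, v):
    """All nodes in the subtree rooted at v, including v itself."""
    stack, out = [v], []
    while stack:
        u = stack.pop()
        out.append(u)
        stack.extend(children[u])
    return out

def open_slot(children, levels, root, h):
    """A node at level h in root's subtree that still has fewer than 2 children.
    (Algorithm 1 additionally prefers the leader's child subtree with fewer
    leaves; this sketch just takes the first open slot.)"""
    for u in descendants(children, root):
        if levels[u] == h and len(children[u]) < 2:
            return u
    return None

def grow(children, levels, x, y, h, num_new):
    """Sketch of Algorithm 1: add num_new nodes at level h+1, preferring the
    subtrees of leader x, then leader y, then any remaining location."""
    next_id = max(children) + 1
    for u in range(next_id, next_id + num_new):
        p = open_slot(children, levels, x, h)
        if p is None:
            p = open_slot(children, levels, y, h)
        if p is None:                      # both leader subtrees are full
            p = open_slot(children, levels, 0, h)
        children[p].append(u)
        children[u] = []
        levels[u] = levels[p] + 1
        if sum(1 for v in children if levels[v] == h + 1) == 2 ** (h + 1):
            h += 1                         # level h+1 is now filled
    return h

# Start from a perfect binary tree of height 4 (heap numbering, root = 0),
# with leaders x = 3 and y = 5, which satisfy d_xr = d_yr = 2 (Theorem 12).
h = 4
n0 = 2 ** (h + 1) - 1
children = {v: ([2 * v + 1, 2 * v + 2] if 2 * v + 1 < n0 else []) for v in range(n0)}
levels = {v: int(math.floor(math.log2(v + 1))) for v in range(n0)}
h = grow(children, levels, x=3, y=5, h=h, num_new=n0 + 1)   # grow to height 5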
V. COMPARISON TO COHERENCE IN LEADER-FREE NETWORKS
Network coherence has also been studied in graphs without
leaders. In this setting, every node behaves as a follower, using
the dynamics in (1). Coherence is measured as the total steady-state variance of the deviation from the average of all node states,
    V = \lim_{t \to \infty} \sum_{i=1}^{n} \mathbf{E}\left[ \left( x_i(t) - \frac{1}{n} \sum_{j=1}^{n} x_j(t) \right)^2 \right].
It has been shown that, for a network with a single noise-free
leader, i.e., |S| = 1 [5],
    R_{NF}(S) \geq V.
In some sense, this means that adding a single leader increases
the disorder of the network.
In a leader-free cycle graph, it has been shown that the
coherence V scales as O(n^2) [3]. In a cycle with k noise-free leaders, where the leaders are located optimally, by Theorem 9,
    R_{NF}(S) = \frac{1}{12}\left( \left(\frac{n}{k}\right)^2 \mathbf{1}^T \mathbf{1} - k \right) = \frac{n^2}{12k} - \frac{k}{12}.
Thus, for a fixed leader set size k, the coherence R_{NF}(S) also scales as O(n^2). Similarly, for the optimal two noise-corrupted leaders in a cycle, R_{NC}(S) scales as O(n^2). This shows that, in the limit of large n, in cycle networks, the disorder of the network is similar for leader-free and leader-follower consensus networks.
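The O(n^2) scaling discussed above is easy to observe numerically. The sketch below (ours; unit weights, with k dividing n) places k equally spaced noise-free leaders on an n-node cycle and compares the computed coherence with n^2/(12k) - k/12 from Theorem 9.

import numpy as np

def cycle_adjacency(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

def r_nf(A, leaders):
    F = [i for i in range(A.shape[0]) if i not in leaders]
    L = np.diag(A.sum(axis=1)) - A
    return 0.5 * np.trace(np.linalg.inv(L[np.ix_(F, F)]))

for n, k in [(60, 3), (120, 3), (240, 3)]:
    S = {j * (n // k) for j in range(k)}      # equally spaced leaders (k divides n)
    print(n, r_nf(cycle_adjacency(n), S), n**2 / (12 * k) - k / 12)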
VI. NUMERICAL EXAMPLE
Theorem 12 applies to the noise-free leader selection problem
in a perfect binary tree. We now present an algorithm that
can be used to grow the tree by adding nodes in a way that
does not change the location of the optimal two noise-free
leaders. Pseudocode for this tree-growing process is given in
Algorithm 1. The algorithm is initialized with a perfect binary
[Fig. 2: Coherence for two noise-free leaders, as a perfect binary tree of height 5 is grown into a perfect binary tree of height 6 using Algorithm 1.]
tree of height h ≥ 4, with the optimal leader set {x̂, ŷ}, with
dx̂r = 2 and dŷr = 2. In each iteration, a node is added in a
location dictated by the algorithm.
The analysis of this algorithm remains an open question. However, in example executions, the algorithm is able to grow a tree from height h to height h + 1 without impacting the optimality of the leader nodes x and y. In Fig. 2, we show such an execution. The algorithm is initialized with a perfect binary tree of height h = 5, with 63 nodes. Nodes are added
according to the algorithm, until the tree is a perfect binary
tree of height h = 6, with 127 nodes. The figure shows the
coherence for every pair of leader nodes such that dxr ≤ 3 and
dyr ≤ 3, in log scale. The coherence for the leader set {x̂, ŷ}
is shown in red, while the coherence for each other leader set
is shown in blue. As the figure shows, the coherence for {x̂, ŷ}
is the smallest throughout the execution of the algorithm.
VII. CONCLUSION
We have investigated the performance of leader-follower consensus networks under two types of leader dynamics, noise-free leaders and noise-corrupted leaders. For both leader dynamics, we have developed a characterization of the system performance in terms of resistance distances in electrical networks. With this characterization, we have derived closed-form expressions for network coherence in terms of the leader
locations. We have also identified the optimal leader locations
in several special classes of networks.
In future work, we plan to extend our analysis to study
coherence in general leader-follower networks. We also plan to
develop a similar mathematical framework to study coherence
in second-order systems.
APPENDIX A
PROOF OF LEMMA 11
Proof: Let x and y be the optimal two noise-free leaders
in a perfect binary tree T . Without loss of generality, let
[Fig. 3: Arrangement of nodes in a perfect binary tree with two possible leader sets {x, y} and {x, y'}.]
lev(x) ≤ lev(y). Assume, for contradiction, that the lowest
common ancestor of x and y is a node B that is a descendant
of the root r. Let A be the parent of B, and let D and E be
children of B. Let x be a member of the node set consisting of
node B and the nodes in the subtree rooted at D. We denote
this node set by SD . Let y be a node in the subtree rooted at
E. We denote this node set by SE . Let SB denote the node
set in the subtree rooted at B, excluding B and the nodes in
SD and SE . This arrangement is shown in Fig. 3.
Let C be another child of A, as shown in the figure, and let
F and G be children of C, with the node sets of the trees
rooted at F and G denoted by SF and SG , respectively. Let SC
denote the node set of the subtree rooted at C, excluding C and
the nodes in SF and SG . We will prove that, for a node y 0 in
the subtree rooted at G that is in the same location with respect
to G that y is with respect to E, R_NF({x, y}) > R_NF({x, y'}).
Let P (x, y) denote the vertices along the path between x and
y, and let P (x, y 0 ) denote the vertices along the path between
x and y 0 . Consider a pair of vertices u ∈ SD and k ∈ SF ,
where k is at the same location relative to F (in the subtree
rooted at F ) that u is relative to D (in the subtree rooted at
D). Denote the vertex on P (x, y) that is the nearest to u by
p. We find the sum of the resistance distances of u and k to
the respective leader sets {x, y} and {x, y 0 }. By Lemmas 5
and 7,
Ru,k
0
Ru,k
=
r(u, {x, y}) + r(k, {x, y})
=
dup + r(p, {x, y}) + dkB + r(B, {x, y}) ,
= r(u, {x, y 0 }) + r(k, {x, y 0 })
= dup + r(p, {x, y 0 }) + dkC + r(C, {x, y 0 }) .
Noting that dBy = dCy0 and dxy0 = dxy + 2, and applying
Since dCy0 = dBy ,
Lemma 7, we have:
0
Ru,k − Ru,k
=(dup − dup ) + (dkB − dkC )
0
Ri,j − Ri,j
=(diB − diB ) + (djB − djC )
0
+ (r(p, {x, y} − r(p, {x, y }))
+ r(B, {x, y}) − r(B, {x, y 0 })
+ (r(B, {x, y}) − r(C, {x, y 0 }))
=0 + 2 +
+
dpx −
dBy −
d2By
dxy
d2px
d2px
− (dpx −
)
dxy
dxy0
!
d2Cy0
− (dCy0 −
)
dxy0
+ r(B, {x, y}) − r(C, {x, y 0 })
d2
d2
=0 + 2 + dBx − Bx − (dBx − Bx )
dxy
dxy0
!
2
2
dBy
dCy0
+ dBy −
− (dCy0 −
)
dxy
dxy0
!
2(d2xp + d2By )
.
=2−
dxy (dxy + 2)
Further, since dxp ≤ dxB and dxB + dBy = dxy , we obtain:
0
Ru,k − Ru,k
≥
4(dxB dBy + dxy )
> 0.
dxy (dxy + 2)
Next, consider a pair of vertices v ∈ SE and ` ∈ SG , where
` is at the same location relative to G that v is relative to E.
Denote the vertex on P (x, y) that is the nearest to v by q, and
denote the vertex on P (x, y 0 ) that is nearest to ` by m The
sum of the resistance distances from v and ` to the respective
leader sets are (again, by Lemmas 5 and 7),
Rv,`
0
Rv,`
=
r(v, {x, y}) + r(`, {x, y})
=
dvq + r(q, {x, y}) + d`B + r(B, {x, y})
0
0
=
r(v, {x, y }) + r(`, {x, y })
=
dvB + r(B, {x, y 0 }) + d`m + r(m, {x, y 0 }) .
We note that dvq = d`m , dqy = dmy0 and dxy0 = dxy + 2.
Then,
0
Rv,` − Rv,`
=(dvq − dlm ) + (dlB − dvB )
+ r(q, {x, y}) − r(m, {x, y 0 })
+ r(B, {x, y}) − r(B, {x, y 0 })
!
d2my0
d2qy
− (dmy0 −
)
=0 + 2 + dqy −
dxy
dxy0
d2xB
d2xB
+ dxB −
− (dxB −
)
dxy
dxy0
2(d2xB + d2qy )
=2 −
.
dxy (dxy + 2)
0
Since dqy ≤ dBy and dxB +dBy = dxy , similar to Ru,k −Ru,k
,
0
we obtain Rv,` − Rv,` > 0.
For a pair of vertices i ∈ SB ∪ {B} and j ∈ SC ∪ {C}, where
j is at the same location relative to C that i is relative to B,
define
Ri,j = r(i, {x, y}) + r(j, {x, y})
= diB + r(B, {x, y}) + djB + r(B, {x, y})
0
Ri,j
= r(i, {x, y 0 }) + r(j, {x, y 0 })
= diB + r(B, {x, y 0 }) + djC + r(C, {x, y 0 }).
=2 −
2(d2xB + d2By )
.
dxy (dxy + 2)
Since dxB + dBy = dxy , it follows that
0
Ri,j − Ri,j
=
4(dxB dBy + dxy )
> 0.
dxy (dxy + 2)
Finally, we consider a vertex t that is not in the subtree rooted
at B nor the subtree rooted at C. In this case,
Rt = r(t, {x, y}) = dtB + r(B, {x, y})
Rt0 = r(t, {x, y 0 }) = dtA + r(A, {x, y 0 }).
It follows that
Rt − Rt0 = 2 −
(dxB + 1)2
d2xB
.
+
dxy
dxy + 2
Recall that dxy = dxB + dBy . Thus, Rt − Rt0 > 0.
Since |SD | = |SE | = |SF | = |SG | and |SB ∪ {B}| =
|SC ∪ {C}|, by grouping vertices into pairs, we have shown
that,
n
n
1X
1X
r(i, {x, y}) >
r(i, {x, y 0 }).
2 i=1
2 i=1
This contradicts our assumption that {x, y} is the optimal
leader set.
APPENDIX B
PROOF OF THEOREMS 12, 13, AND 14
We first define a quantity Ω(S) as:
    \Omega(S) = \sum_{i \in V \setminus S} r(i, S) = 2 R_{NF}(S),
and note that a set S that is a minimizer of Ω(·) is also a minimizer of R_{NF}(·).
We next present a lemma that gives Ω(·) of a perfect M-ary tree with two noise-free leaders.
Lemma 16. Let T be a perfect M -ary tree with height h. Let
x and y be its two noise-free leaders, and assume that the
lowest common ancestor of x and y is the root of T . Then,
M h+1 + 1
d2
Ω({x, y}) =
dxr − xr
M −1
dxy
2
M +1
h+1
+M
+
(M dxr −dxy + M −dxr )
(M − 1)2
(M − 1)3 dy
2(M + 1)
3
h
−
−
+ M h+1
M −1
(M − 1)2
(M − 1)3 dxy
dxy
M
+
+
.
(35)
M −1
(M − 1)2
Proof: Recall that in (29), we decomposed the coherence into
three terms: the coherence in the subtree rooted at x, the
coherence in the subtree rooted at y, and the coherence at
the remaining nodes. We can also divide Ω into three parts as
X
X
X
Ω({x, y}) =
dux +
duy +
r(u, {x, y}).
u∈Vx
u∈Vy
u∈Vxy
To simplify the first sum in (36), we first consider the subtrees
rooted at nodes on the path from x to r, denoted by P (x, r)
(excluding r):
X
R(Tj ) =
j∈P (x,r)
j6=x,r
((M − 1)h − M − 1) · M h
dxr − 1
+
M −1
(M − 1)2
+
((M − 1)dxr − (M − 1)h + 2) · M h−dxr +1
.
(M − 1)2
(37)
A similar expression can be obtained for the subtrees rooted
at nodes on the path from r to y, substituting dxr with dyr .
To simplify the second sum in (36), we also first consider the
subtrees rooted on nodes on the path P (x, r), which is:
X
|Tj |rj ({x, y}) = M h+1
j∈P (x,r)
j6=x,y
M +1
1
+
(M − 1)2
(M − 1)3 dxy
((M − 1)dxr − M )((M − 1)(dxy − dxr ) + M )
(M − 1)3 dxy
h+1
M
.
−
(M − 1)3 dxy
+ Mh
We let Ty denote the subtree rooted at y, Tx denote the subtree
rooted at x, excluding those nodes in Ty . The remaining
subgraph is denoted by Gxy .
We consider two cases: (1) x is not the root of T , and (2) x
is the root of T .
By Lemma 5, the resistance distance of a node i in Tx (or Ty )
to the leader set depends
P only on the resistance distance to x
(or y). Let R(Tx ) = i∈Tx r(i, x). The height of the subtree
rooted at x is hx = h − dxr , where dxr is the graph distance
between x and r. At each level i in Tx there are M i nodes,
each at distance i from x. Thus,
Phx
hx
R(Tx ) = i=1
M i · i = (MM
+1 .
−1)2 (M hx − hx − 1)M
A similar expression can be obtained for R(Ty ).
We next consider Gxy . We can think of this subgraph as a
path graph connecting nodes x and y, denoted by P (x, y),
with each node in the path the root of its own subtree. For
any node j on the path between x and y, r(j, {x, y}) is given
by Lemma 7. For any node v in the subtree Tj , its resistance
distance to {x, y} is
r(v, {x, y}) = dvj + r(j, {x, y}).
(38)
As before, a similar expression can be obtained for the
subtrees rooted at nodes on the path from r to y, substituting
dxr with dyr .
The above sums (37) and (38), and their corresponding sums
for P (r, y) account for the subtrees rooted at two children of
r, one containing leader x and one containing leader y. For
each of the remaining M − 2 children of r, the total resistance
distance to x and y from the subtree rooted at child v is
R(Tv ) =
((h − 1)M − h)M h + M M h − 1
+
(M − 1)2
M −1
dxr −
d2xr
+1 .
dxy
Combining all of these sums and including r(r, {x, y}),
we obtain R(Gxy ). Substituting the expressions for R(Tx ),
R(Ty ), R(Gxy ), and the equality dyr = dxy − dxr into (29)
leads to (35).
Case 2: x is the root. In this case, R(Ty ) is the same as in
Case 1, but R(Tx ) is now
R(Tx ) =
h
X
M h (M h − h − 1) + 1
(M − 1)M i−1 · i =
. (39)
M −1
i=1
From this, we obtain,
R(Gxy ) = 16 (dxy 2 − 1) +
P
j∈P (x,y)
j6=x,y
(R(Tj ) + (|Tj | − 1)r(j, {x, y})) .
The first term is the total resistance distance for nodes on the
path from x to y. For the summation terms, first we compute
the total resistance distance from nodes in the subtree rooted
at j to j. Then, for each node in the subtree, excluding j, we
add the resistance distance from j to {x, y}. An equivalent
expression is:
R(Gxy ) =
X
(R(Tj ) + |Tj |r(j, {x, y}))
X
j∈P (x,y)
j6=x,y
R(Tj ) +
X
j∈P (x,y)
j6=x,y
|Tj |r(j, {x, y}).
dxy
M
Ω({r, y}) =
+
M −1
(M − 1)2
2
M +1
+ M h+1
+
(M −dxy + 1)
(M − 1)2
(M − 1)3 dy
2(M + 1)
3
h
+ M h+1
−
−
,
M −1
(M − 1)2
(M − 1)3 dxy
(40)
which is equal to (35) given dxr = 0.
j∈P (x,y)
j6=x,y
=
For R(Gxy ), we only need to consider the path from root to y by
using (38). Combining (39) and (38), we obtain
(36)
Thus, we conclude that in a perfect M -ary tree, (35) holds for
any leader set {x, y} where their lowest common ancestor is
the root.
A. Proof of Theorem 12
B. Proof of Theorem 13
Proof: We first simplify (35) in Lemma 16 for M = 2,
6
Ω({x, y}) = 2 + 2h (2h − 6) + dxy − 2h+1 · dxy
d2
+ 2h+1 + 1
dxr − xr
dxy
3
h+1
−dxr
−(dxy −dxr )
+2
2
+2
+2 .
dxy
Proof: Based on Lemma 16, we derive Ω({x, y}) for a perfect
ternary tree with height h, where x and y have the root as their
lowest common ancestor,
(41)
For a given dxy and h, we treat Ω as a continuous function with
argument dxr . We derive expressions for its first and second
derivative:
∂Ω
2dxr
= 2h+1 + 1
1−
∂dxr
dxy
3
+ 2h+1 2dxr −dxy − 2−dxr
+ 2 · ln 2 ,
(42)
dxy
∂2Ω
2
= 2h+1 + 1
−
2
∂ dxr
dxy
3
+ 2h+1 2−dxr + 2dxr −dxy
+ 2 · (ln 2)2
dxy
(43)
2
dxy
3
≥ 2h+1 + 1
−
+ 2h+2− 2
+ 2 (ln 2)2 .
dxy
dxy
(44)
From (42), we observe that Ω has an extremum at dxr =
dxy /2. For dxy ≤ 5 and h ≥ 4, (44) is strictly positive, thus
Ω is convex with respect to dxr . This means that dxr = dxy /2
is a minimizer for the given dxy .
For h ≥ 4, and dxy ≤ 5, we examine the potential integer
minimizers dxy = 5, dxr = 2; dxy = 4, dxr = 2; dxy = 3,
dxr = 1; dxy = 2, dxr = 1; and dxy = 1, dxr = 1. By
comparing them in Ω({x, y}) in (41), we find the minimum
is always attained at dxy = 4, dxr = 2.
2
∂Ω
For dxy ≥ 6 and h ≥ 4, by checking ∂d
and ∂∂2 dΩxr , we
xr
observe that Ω has two minima. Because of the symmetry
of the function, these two minima must have the same value,
and so we only need to study the solution where dxr ≤ dxy /2.
Since dxy ≥ 6 and h ≥ 4, we have:
∂Ω
∂dxr
dxr =0
= 2h+1 + 1
3
+ 2h+1
+ 2 ln 2 2−dxy − 1 < 0 ,
dxy
and
∂Ω
∂dxr
dxr =2
4
= 2h+1 + 1
1−
dxy
3
1
+ 2h+1
+ 2 ln 2 22−dxy −
> 0.
dxy
4
Therefore, an integer minimizer of Ω is attained in the set
dxr ∈ {0, 1, 2}. It is readily verified that for h ≥ 4 and
dxy ≥ 6,
Ω|dxy ≥6,dxr =k ≥ Ω|dxy =4,dxr =2 ,
for k ∈ {0, 1, 2}. This implies that dxy = 4, dxr = 2 is the
integer solution that minimizes Ω for all h ≥ 4. We obtain the
expression for RN F in (30) by substituting dxy = 4, dxr = 2
and n = 2h+1 − 1 into (41) and applying Ω = 2RN F .
d2
3h+1 + 1
dxr − xr
2
dxy
3h+1
1
(3dxr −dxy + 3−dxr )
+
1+
2
dxy
3h+1
dxy
3
2
3
+
+
h− −
+ .
2
2
dxy
2
4
Ω({x, y}) =
(45)
For a given dxy , we find its first and second derivative,
∂Ω
3h+1 + 1
2dxr
=
1−
∂dxr
2
dxy
1
h+1
3
+
· 3dxr −dxy − 3−dxr
(46)
+ 1 · ln 3
2
dxy
∂2Ω
1
= (3h+1 + 1) −
∂ 2 dxr
dxy
1
3h+1 dxr +dxy
+ 3−dxr
3
+
+ 1 · (ln 3)2
2
dxy
dxy
1
1
+ 3h+1− 2
+ 1 · (ln 3)2 . (47)
≥ (3h+1 + 1) −
dxy
dxy
Similar to the proof of Theorem 12, we obtain that dxy = 2,
dxr = 1 is the optimal integer solution for any h ≥ 4 and
dxy ≤ 2. By enumerating all dxy , dxr , given dxy ≤ 4, we
can verify that dxy = 2, dxr=1 is the optimal solution for any
h ≥ 4 and dxy ≤ 4.
2
∂Ω
As shown by ∂d
and ∂∂2 dΩxr , for a given dxy , Ω has two
xr
minima. Because of the symmetry of (45), we only need to
study the minimum that satisfies dxr ≤ dxy . For any given
dxy ≥ 5, h ≥ 4,
∂Ω
=
∂dxr dxr =0
1
3h+1 + 1
3h+1 −dxy
+
3
−1
+ 1 (ln 3) < 0
2
2
dxy
∂Ω
=
∂dxr dxr =2
h+1
3
+1
4
3h+1 2−dxy
1 1
1−
+
3
−
(
+ 1)(ln 3) > 0.
2
dxy
2
9 dxy
Thus, the optimal real-valued dxr lies in (0, 2). By evaluating
(45) for k ∈ {0, 1, 2}, it can be verified that,
Ω|dxy ≥5,dxr =k ≥ Ω|dxy =2,dxr =1 .
Thus, we have shown that dxy = 2, dxr = 1 is the global
integer minimizer for all h ≥ 4 in perfect ternary trees. We
obtain (31) by substituting dxr = 1, dxy = 2 into (41) and
applying n = (3h+1 − 1)/2 and Ω = 2RN F .
C. Proof of Theorem 14
Proof: We start by calculating
∂Ω
∂dxr
and
∂2Ω
∂ 2 dxr .,
∂Ω
M h+1 + 1
2dxr
+ (ln M )M h+1
=
1−
∂dxr
M −1
dxy
(M
M +1
dxr −dxy
−dxr
(M
−M
)
+
(M − 1)3 dy
∂2Ω
M h+1 + 1
2
=
+ (ln M )2 M h+1
1−
2
∂ dxr
M −1
dxy
(M
M +1
dxr −dxy
−dxr
(M
+M
).
+
(M − 1)3 dy
2
− 1)2
(48)
2
− 1)2
(49)
From (48) and (49), we observe that Ω has a minimum that
d
dxy
satisfies dxr ≤ xy
2 . Since Ω is symmetric about dxr = 2 ,
d
we only consider potential integer minimizers with dxr ≤ xy
2 .
Further,
∂Ω
∂dxr
Mh
1
2
M
+
)(M − 1)2 (1 −
)
(M − 1)3
Mh
dxy
dxr =1
M +1
+ 2(M − 1) +
(50)
)(M 2−dxy − 1 ln(M ) ,
dxy
=
∂Ω
and observe that ∂d
|dxy =2,dxr =1 = 0. For dxy ≥ 3, we can
xr
lower bound (50) by
∂Ω
∂dxr
Mh
2
>
M (M − 1)2 (1 −
)
(M − 1)3
dxy
dxr =1
M +1
) ln(M ) .
− (2(M − 1) +
dxy
(51)
For dxy ≥ 3, the bound (51) is positive for M = 4 and h ≥ 4,
and it is increasing in M , dxy and h. Thus, for all dxy ≥ 2,
the integer minimizer of dxr will be either 0 or 1. Further, for
dxy = 1, the only potential solution that satisfies dxr ≤ dxy /2
is dxr = 0.
We propose that the optimal integer solution is dxy = 1,
dxr = 0, and we validate its optimality by comparing with
dxy ≥ 2 and dxr ∈ {0, 1}. For dxy ≥ 2,
Ω|dxr =0 − Ω|dxy =1,dxr =0 =
1
(M − 1)2 dxy (dxy − 1)
(M − 1)3 dxy
h+1−dxy
((2dxy + 1)M − 2dxy + 1)
+ M (dxy (M − 1)2 − M (M + 1)) ,
+M
h
which is positive for M ≥ 4. In addition,
Ω|dxr =1 − Ω|dxy =1,dxr =0 =
1
(M − 1)2 (d2xy − 1)
3
(M − 1) dxy
+ M h+2−dxy ((2dxy + 1)M − 2dxy + 1)
+ M h (dxy (M − 1)3 − M (M 2 + 2)) ,
is also positive for dxy ≥ 2 and M ≥ 4. Therefore, the
optimal leader set {x, y} is such that dxy = 1, dxr = 0 when
M ≥ 4 and h ≥ 4. Then, (32) is obtained by substituting
dxr = 0, dxy = 1, n = (M h+1 − 1)/(M − 1) into (35) and
using the fact that Ω = 2RN F .
REFERENCES
[1] M. H. DeGroot, "Reaching a consensus," J. Amer. Statist. Assoc., vol. 69, no. 345, pp. 118–121, 1974.
[2] L. Xiao, S. Boyd, and S. Lall, "A scheme for robust distributed sensor
fusion based on average consensus,” in Proc. 4th Int. Sym. Information
processing in sensor networks, 2005, p. 9.
[3] B. Bamieh, M. Jovanovic, P. Mitra, and S. Patterson, “Coherence in
large-scale networks: Dimension-dependent limitations of local feedback,” IEEE Trans. Autom. Control, vol. 57, no. 9, pp. 2235–2249, Sep
2012.
[4] G. Cybenko, “Dynamic load balancing for distributed memory multiprocessors,” J. Parallel Distrib. Comput., vol. 7, no. 2, pp. 279–301, Oct.
1989.
[5] S. Patterson and B. Bamieh, “Leader selection for optimal network
coherence,” in Proc. 49th IEEE Conf. Decision and Control, 2010, pp.
2692–2697.
[6] A. Clark, B. Alomair, L. Bushnell, and R. Poovendran, “Minimizing
convergence error in multi-agent systems via leader selection: A supermodular optimization approach,” IEEE Trans. Autom. Control, vol. 59,
no. 6, pp. 1480–1494, Jun 2014.
[7] F. Lin, M. Fardad, and M. Jovanovic, “Algorithms for leader selection in
stochastically forced consensus networks,” IEEE Trans. Autom. Control,
vol. 59, no. 7, pp. 1789–1802, Jul 2014.
[8] S. Patterson, “In-network leader selection for acyclic graphs,” in Proc.
American Control Conference, 2015, pp. 329–334.
[9] F. Lin, “Performance of leader-follower multi-agent systems in
directed networks,” arXiv:1606.02269, 2016. [Online]. Available:
https://arxiv.org/abs/1606.02269
[10] K. Fitch and N. Leonard, “Joint centrality distinguishes optimal leaders
in noisy networks,” IEEE Trans. Control Netw. Syst., vol. 3, no. 4, pp.
366–378, 2016.
[11] S. Patterson, N. McGlohon, and K. Dyagilev, “Optimal k-leader selection
for coherence and convergence rate in one-dimensional networks,” IEEE
Trans. Control Netw. Syst., vol. PP, no. 99, pp. 1–1, 2016.
[12] P. Barooah and J. Hespanha, “Graph effective resistance and distributed
control: Spectral properties and applications,” in Proc. 45th IEEE Conf.
Decision and Control, Dec 2006, pp. 3479–3485.
[13] L. Vassio, F. Fagnani, P. Frasca, and A. Ozdaglar, “Message passing
optimization of harmonic influence centrality,” IEEE Trans. Control
Netw. Syst., vol. 1, no. 1, pp. 109–120, March 2014.
[14] P. Barooah and J. Hespanha, “Error scaling laws for linear optimal estimation from relative measurements,” IEEE Trans. Inf. Theory, vol. 55,
no. 12, pp. 5661–5673, Dec 2009.
[15] A. Clark, L. Bushnell, and R. Poovendran, “A supermodular optimization
framework for leader selection under link noise in linear multi-agent
systems,” IEEE Trans. Autom. Control, vol. 59, no. 2, pp. 283–296, Feb
2014.
[16] A. Olshevsky, “Minimum input selection for structural controllability,”
in Proc. American Control Conference, 2015, pp. 2218–2223.
[17] K. Fitch and N. Leonard, “Optimal leader selection for controllability
and robustness in multi-agent networks,” in Proc. European Control
Conference, 2016.
[18] S. Patterson, “Optimizing coherence in 1-d noisy consensus networks
with noise-free leaders,” in Proc. American Control Conference, 2017.
[19] A. Rahmani, M. Ji, M. Mesbahi, and M. Egerstedt, “Controllability
of multi-agent systems from a graph-theoretic perspective,” SIAM J.
Control Optim., vol. 48, no. 1, pp. 162–186, 2009.
[20] D. Klein and M. Randic, “Resistance distance,” J. Math. Chem., no. 1,
pp. 81–95, 1993.
[21] Y. Yang and D. J. Klein, “A recursion formula for resistance distances
and its applications,” Discrete Appl. Math., vol. 161, no. 16-17, pp.
2702–2715, Nov. 2013.
arXiv:1709.05703v1 [] 17 Sep 2017
AI Programmer: Autonomously Creating
Software Programs Using Genetic Algorithms
Kory Becker
Bloomberg LP
[email protected]
Justin Gottschlich
Intel Labs
[email protected]
Abstract
In this paper, we present the first-of-its-kind machine learning (ML) system, called AI Programmer, that can automatically generate full software programs requiring only minimal human guidance. At its core, AI Programmer uses genetic algorithms (GA) coupled with a tightly constrained programming language that minimizes the overhead of its ML search space. Part of AI Programmer's novelty stems from (i) its unique system design, including an embedded, hand-crafted interpreter for efficiency and security and (ii) its augmentation of GAs to include instruction-gene randomization bindings and programming language-specific genome construction and elimination techniques. We provide a detailed examination of AI Programmer's system design, several examples detailing how the system works, and experimental data demonstrating its software generation capabilities and performance using only mainstream CPUs.

Keywords Genetic algorithm, program synthesis, genetic programming, evolutionary computation, artificial intelligence, machine learning, programming languages, code generation and optimization

1. Introduction

Since the invention of the computer, having the ability to correctly and efficiently develop software programs has been a principal challenge [6]. To help address this, countless breakthroughs have been made in the field of software development. Some of these include safety and flexibility advances in static, dynamic, and gradual type systems [5, 34]; simplification, safety, and robustness advances using automatic memory management and garbage collection systems [7, 15]; generality and specificity progress in both general-purpose and domain specific languages [33, 38]; and, of course, a plethora of tools aimed at assisting programmers in nearly every way [12, 13, 28].

Yet, simultaneous advances in hardware innovation have occurred with similar frequency, such as increasingly performant general purpose multi-core CPUs with advanced hardware extensions [22, 41], low power system-on-chip (SoC) edge compute devices [17], high-performance pluggable coprocessors with near supercomputing performance of yesteryear [14], wide data-parallel graphics processing units (GPUs) [26], and application specific integrated circuits (ASICs) for deep neural networks and computer vision [4, 37], to name a few.

While such hardware advances continue to broaden and deepen the space of what is computationally tractable, they have the fracturing side-effect of complicating and exacerbating the tension between the ease of developing software and the ability for humans to write maximally efficient code. In this paper we explore an alternative approach to traditional human-driven software development; one that autonomously creates software programs using genetic algorithms (GAs) requiring only minimal human guidance.

1.1 The Evolution of Programming Languages

Over the last several decades, programming languages (PLs) have followed a steady path of providing higher-level programming abstractions, aimed at reducing the challenge of human-driven software development [2]. To this end, PLs, in general, have proliferated toward a design goal of simplifying human use. Although this trend is natural in an era where humans perform the majority of software development, as we will show, it is suboptimal in an environment where programming is performed predominantly by machines.

The ability for computers to automatically create their own software programs has been a long-standing goal of artificial intelligence [31]. By largely removing humans from the time-intensive and error-prone process of software development and replacing them with artificial intelligence, computer software has the potential to be generated in a more streamlined, correct, and optimized fashion [8, 20].

This paper makes the following technical contributions:
1. We present AI Programmer the first-of-its-kind software
generation framework, which constructs programs using genetic algorithms with novel enhancements coupled
with a minimalistic programming language.
2. We present several critical observations, including an embedded interpreter and simulator solution, for security and optimization of ML-generated software.
3. We provide empirical results demonstrating the efficacy and efficiency of AI Programmer across several of its fully generated software programs on commodity hardware.
2. Background
In this section we provide a brief synopsis of the challenges
in using traditional programming languages for machine-based program generation. We also provide a brief introduction to genetic algorithms, the ML technique used by AI
Programmer.
2.1 Programming Language Density
Most of today’s programming languages were designed for
human use [32]. We refer to such languages as human-intended PLs (HIPLs). Although HIPLs are useful when humans perform the majority of programming and debugging,
their design is usually counter to what is needed and appropriate for ML-based PLs (MLPLs).
HIPLs often introduce unnecessary complexity and overhead for ML program generators due, in part, to the large
number of language identifiers they include. The greater the
number of legal language identifiers, the greater the ML
computational search space. Moreover, type systems compound the challenge of creating legal programs because variable type bindings are intentionally restrictive to protect
against human error, yet provide limited value for ML program generators [30]. For these reasons, we chose to couple AI Programmer with a non-traditional programming language that is both constrained (i.e., using only eight identifiers) and typeless. We discuss this in more detail in Section 3.
Some instructions within any PL may be potentially
harmful and, if used in conjunction with an ML-based program
generator, may cause irreversible damage. AI Programmer
has specific measures in place to prevent the occurrence of
such events. We discuss them in more detail in Section 3.4.
3. The Design of AI Programmer
In this section we provide a high-level overview of the AI Programmer software architecture. The AI Programmer's system design is shown in Figure 1.
3.1 Programming Language Selection and Challenges
We chose a typeless programming language that contains
only eight instructions to drive AI Programmer’s software
generation [25]. We briefly discuss the advantages of this
approach and the modifications to the language that were
required to integrate it into a GA solution.
Table 1. AI Programmer Instruction Set and Gene Map
Instr   Gene Range       Operation
>       (0, 0.125]       Increment the pointer
<       (0.125, 0.25]    Decrement the pointer
+       (0.25, 0.375]    Increment the byte at the pointer
-       (0.375, 0.5]     Decrement the byte at the pointer
.       (0.5, 0.625]     Output the byte at the pointer
,       (0.625, 0.75]    Input a byte and store it at the ptr
[       (0.75, 0.875]    Jump to matching ] if current 0
]       (0.875, 1.0]     Jump back to matching [ unless 0
Turing Completeness. AI Programmer’s programming
language, listed in Table 1, is Turing complete. A Turing
complete programming language is theoretically capable of
completing any (single taped Turing machine) programming
task given an unlimited amount of time and memory [40].
In essence, a programming language with this characteristic
is capable of implementing a vast number of programming problems. Likewise, programs generated with AI Programmer are theoretically capable of expressing all tasks that one might want to accomplish with computers.
2.2 Programming with Genetic Algorithms
A genetic algorithm (GA) is a type of artificial intelligence, modeled after biological evolution, that begins with
no knowledge of the subject, aside from an encoding of
genes that represent a set of instructions or actions [9]. In
the concept of GA-driven computer programming, a series
of programming instructions are selected at random to serve
as an initial chain of DNA. The complete genome is executed as a program, with the resulting fitness score calculated according to how well the program can solve a given task. This is performed with a sufficiently large population size. Those that have the best fitness are mated together to produce offspring.
Each generation of programs receives extra diversity from evolutionary techniques including roulette selection, crossover, and mutation [24]. The process is repeated at each epoch, with each child generation hopefully producing more favorable results than its parents' generation, until a target solution is found. Through this process, applying GAs to computer programming automation enacts a survival-of-the-fittest model for computer program generation [23]. A deeper examination of these GA principles is provided in Section 3.
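To make the description above concrete, the following is a minimal sketch of such a generate-evaluate-select loop. It is illustrative only: the names Genome, Decode, Run, RouletteSelect, Crossover, Mutate, fitnessTest, populationSize, genomeLength, and targetFitness are hypothetical placeholders, not AI Programmer's actual API.

var random = new Random();
var population = Enumerable.Range(0, populationSize)
                           .Select(_ => Genome.Random(genomeLength, random))
                           .ToList();
while (true)
{
    // Decode each genome into a program, execute it, and score it with the fitness test.
    var scored = population
        .Select(g => (Genome: g, Score: fitnessTest.Score(Run(Decode(g)))))
        .OrderByDescending(x => x.Score)
        .ToList();
    if (scored[0].Score >= targetFitness)
        break;                                   // a program meeting the target fitness was found
    // Roulette selection, crossover, and mutation produce the next generation.
    population = Enumerable.Range(0, populationSize)
        .Select(_ => Mutate(Crossover(RouletteSelect(scored, random),
                                      RouletteSelect(scored, random), random), random))
        .ToList();
}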
GA Engine and Uniform Gene Distributions. AI Programmer’s genetic algorithm engine represents each generated program’s instructions as an array of floating point
values, which, when considered as a unit, is its genome.
Each individual location within a given genome is called a
Figure 1. The AI Programmer Software Architecture.
+-+-+>-<[++++>+++++<+<>++]>[-[---.--[[-.++++
[+++..].]]]]
gene. Each gene within a program’s genome corresponds to
a single instruction from Table 1.
AI Programmer binds a gene value range to each of its instructions across a continuous uniform distribution (or rectangular distribution) [3] (see Table 1), where each instruction’s gene range is equal in size to each of the others. This
was done so each instruction would have an equally random probability of being chosen at any location in a gene
sequence when randomization was needed. 1
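As a concrete illustration of this mapping, the following sketch converts a gene value in (0, 1] to one of the eight instructions using the equal-width ranges of Table 1; the method name and clamping are our own choices and the snippet is not taken from the AI Programmer source.

static char DecodeGene(double gene)
{
    // Eight instructions, each owning a 0.125-wide slice of (0, 1], per Table 1.
    char[] instructions = { '>', '<', '+', '-', '.', ',', '[', ']' };
    int index = (int)Math.Ceiling(gene / 0.125) - 1;   // (0, 0.125] -> 0, ..., (0.875, 1.0] -> 7
    if (index < 0) index = 0;
    if (index > 7) index = 7;
    return instructions[index];
}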
Figure 2. A generated program that outputs “hello”.
space needs to be constrained. As AI Programmer is intended for general purpose developers, limiting the programming instruction set to eight instructions enables the engine
to execute in reasonable times on commodity hardware (see
Section 5).
Simplified Instruction Set. Each of AI Programmer’s instructions manipulate a memory “tape” of byte values, ranging from 0-255. The language works by applying increment
and decrement operations to the current memory cell, while
shifting the memory cell up and down the tape, as instructed
by the program. The values at the current memory pointer
can be input from the user or output to the terminal. Primitive looping instructions also exist (e.g., ‘[’ and ‘]’), offering
a complete instruction set for creating software. An example
program is shown in Figure 2.
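For illustration, a bare-bones interpreter for this instruction set can be written as below. This is a simplified sketch under our own assumptions (fixed tape size, naive bracket scanning, no pointer-bounds or exception handling), not the embedded interpreter described in Section 3.4.

static string Run(string program, byte[] input, int maxInstructions)
{
    var tape = new byte[30000];                    // memory "tape" of byte cells (0-255)
    int ptr = 0, ip = 0, inPos = 0, executed = 0;
    var output = new System.Text.StringBuilder();
    while (ip < program.Length && executed++ < maxInstructions)
    {
        switch (program[ip])
        {
            case '>': ptr++; break;                // move the pointer up the tape
            case '<': ptr--; break;                // move the pointer down the tape
            case '+': tape[ptr]++; break;          // increment the current byte (wraps at 255)
            case '-': tape[ptr]--; break;          // decrement the current byte
            case '.': output.Append((char)tape[ptr]); break;
            case ',': tape[ptr] = inPos < input.Length ? input[inPos++] : (byte)0; break;
            case '[':
                if (tape[ptr] == 0)                // jump forward past the matching ']'
                    for (int d = 1; d > 0; ) { ip++; if (program[ip] == '[') d++; else if (program[ip] == ']') d--; }
                break;
            case ']':
                if (tape[ptr] != 0)                // jump back to just after the matching '['
                    for (int d = 1; d > 0; ) { ip--; if (program[ip] == ']') d++; else if (program[ip] == '[') d--; }
                break;
        }
        ip++;
    }
    return output.ToString();
}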
The simplified instruction set reduces the search space
in which a target program code can be found. As computational devices improve in speed, larger problem spaces can
be searched. However, on less powerful devices, the search
3.2 Genomes and Generations
To generate a software program using genetic algorithms,
one must first create a genome. A genome is a set of genes
that are grouped together as a single unit. For AI Programmer, the genome is encoded as an array of floating point values, with fixed value ranges per unique instruction ranging
between 0 and 1, as shown in the Gene Range column of
Table 1.
Once a genome is created, it is converted to a corresponding program, executed, and the resulting program is assigned
a fitness score based on the program’s output. The closer a
generated program comes to solving the provided task, the
greater its fitness score and, the more likely it is to continue to the next evolutionary generation. At each generation
epoch, AI Programmer utilizes roulette selection, along with
1 We did not examine the impact of weighted ranges for different programs, but note that it may be of interest as future work.
crossover and mutation, to create child programs that contain
slight random perturbations, and potentially better, genomes
than their parents for solving the target task.
diately removed from the pool of genomes. However, programs that succeed are carried forward to produce child programs.3
Constructing a Genome Figure 3 demonstrates an example of constructing a genome from an array of floating point
values. Each value range maps to a specific instruction in
the programming language. Initially, these values are random (see the Random Gene Sequencer in Figure 1), resulting in generated programs that either won’t function properly, throw errors, or simply fail 2 . However, one or two are
bound to run and execute, at a minimum, some number of
valid instructions. The more successful a program is at executing, the more likely it is to continue on and produce offspring with code that achieves more successful results.
Figure 5. Programs are weighted by fitness, with the most
successful used for child program generation.
3.3 The Fitness Test
To use GAs, a fitness test is needed to determine how well a
generated solution performs. In the context of AI Programmer, this can involve scoring the byte-level output of the
generated program, inspecting the generated program’s internal state, or even analysis of intermediate state changes
of the program throughout its execution. The score of the
fitness test is calculated by analyzing these characteristics,
and many others, and then comparing them against a userdefined target.
This concept is similar to test-driven development. When
all unit tests pass, a program may be considered to be functionally correct. Likewise, a fitness test for a GA can be considered as a set of unit tests. In the case of AI Programmer, a
fitness test typically contains a suite of tests for varying scenarios, which guide the genome selection, preserving only
programs that evaluate well on the test suite. Further details
about the construction of AI Programmer fitness tests are described in Section 4.
Figure 3. Decoding a genome as a program.
Crossover and Mutation To create offspring, a parent
genome contributes part of its genes to the child, a process
called crossover, as shown in Figure 4. In addition to inheriting programming instructions from its parent, each child
can also experience mutation, which is the process of adding
controlled but random perturbations to specific genes. This modifies the value of a particular gene, changing the corresponding programming instruction and, thus, the overall program.
3.4 AI Programmer's Sandboxed Interpreter
Once a program has been generated, it must be executed so
it can be evaluated against human-created fitness tests. However, the execution of ML-generated programs may include
potential security risks as well as performance degradations.
Because of this and the need for complex fitness tests (Section 3.3), we developed our own interpreter. This interpreter
is sandboxed within the AI Programmer system to provide
a secure, efficient, and GA-appropriate execution environment. We explain the challenges and benefits of this system
in the following subsections.
Figure 4. An example of crossover and mutation. The child
genome inherits the first 5 instructions from its parent. One
instruction is mutated.
Crossover copies forward potentially beneficial parts of
the parent, while mutation offers differing behaviors of instruction combinations, which may or may not end up making the child programs more successful.
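A minimal sketch of these two operators on floating point genomes is shown below; it assumes equal-length parents, and the 2% mutation rate and helper names are illustrative choices of ours rather than AI Programmer's actual parameters.

static double[] Crossover(double[] parentA, double[] parentB, Random rng)
{
    int cut = rng.Next(1, parentA.Length);                       // single crossover point
    var child = new double[parentA.Length];
    Array.Copy(parentA, child, cut);                             // genes before the cut come from parent A
    Array.Copy(parentB, cut, child, cut, parentA.Length - cut);  // the rest come from parent B
    return child;
}

static void Mutate(double[] genome, Random rng, double mutationRate = 0.02)
{
    for (int i = 0; i < genome.Length; i++)
        if (rng.NextDouble() < mutationRate)
            genome[i] = 1.0 - rng.NextDouble();                  // re-randomize into (0, 1], changing the instruction
}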
3.4.1 Execution of Generated Programs in a Controlled Environment
As generated programs are executed to evaluate their fitness,
the results can often be undesirable and potentially dangerous. Consider a program generated with I/O instructions, allowing for the modification of files on disk. A generated program could potentially overwrite critical system files, ren-
Survival of the Fittest Executable programs are ranked according to how well they have performed. As shown in Figure 5, a particular program that has failed is often imme-
2 Most initial programs in the gene pool fail immediately upon being executed. Others may result in endless loops. It is due to these reasons that exception handling and maximum iteration limits are imposed on the interpreter.
3 In Figure 5, the bottom program is a valid running program that takes one byte for input, increments it, and then displays it twice as output.
dering the entire machine inoperable. Likewise, a program
generated with instructions to support networking could inadvertently flood a computer network (e.g., denial of service attack [42]) or replicate itself across machines (e.g.,
worm [1]).
Normally, these types of behaviors are malicious, yet ML
software generators happen upon these situations in an attempt to satisfy fitness goals. By executing programs within
our own secure interpreter, which includes instruction-level
protection checks, AI Programmer can provide the additional security measures that are needed to prevent ML generated software from causing harmful behaviors. Non-ML
generating interpreters and compilers do not generally include these types of checks because management of such
issues is not usually within scope for HIPLs [39].
AI Programmer consists of a modular framework, designed
in C# .NET. It includes an engine for running genetic algorithms, an encoder and decoder for genomes, a sandboxed
interpreter for simulated program execution, and a compiler
to transform code into binary executables. While the initial
design of AI Programmer uses C#, it is important to note that
the principles employed by it are not bound to C#.
AI Programmer’s software framework for fitness test construction is extensible and was developed so users can devise a myriad of customized fitness suites, which eventually
guide the system’s GA generation and evolution of software
programs.
Specifying Requirements of a Program To generate a program, AI Programmer must be provided with the requirements for the desired input and output of the target program.
For example, if a program should prompt a user for a numerical input and then subsequently output a line of text,
this must be specified in the form of training data to AI Programmer. The following subsections detail the step-by-step
process of how a program is specified and generated with AI
Programmer.
3.4.2 Termination of Infinite Loops
Automatically generated software has the potential to create
infinite loops. This can occur from unsatisfied loop termination constraints or unexpected looping instructions. In our
experiments, this type of behavior often arises in early program generations due to the GA maximizing the goal fitness
score at the cost of program execution time. As a result, unterminated programs have the potential to halt the generation
process, resulting in a failure of further program evaluations.
In an attempt to mitigate this, one can add fitness constraints to prefer programs with fewer instruction counts
over larger ones. However, the generation of infinite loops,
especially in early generations of programs, remains a possibility. AI Programmer’s interpreter includes a customizable
maximum instruction count per execution. Programs that exceed the instruction count are terminated. A fitness penalty
can then be applied, reducing the likelihood of future generations of programs carrying forward the infinite loop constructs. With this addition, AI Programmer is guaranteed to
terminate all infinite loops.
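Inside the interpreter loop, such a guard can be as simple as the fragment below; the variable names are illustrative placeholders, and the penalty itself is applied later by the fitness test.

if (++executedInstructions > maxInstructionCount)
{
    wasTerminatedEarly = true;   // surfaced to the fitness test, which can subtract a penalty
    break;                       // abandon this program; evolution continues with its siblings
}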
4. Using AI Programmer
Creating Your First Program To begin, a user creates a
C# class within the AI Programmer project, inheriting from
the FitnessBase base class. This class includes all the necessary requirements for specifying a solution to be built by
AI Programmer, including fitness scoring functionality, program termination rules, and program generation capabilities.
Specifying a Target Fitness Score Next, the user indicates
a target fitness score which is specified in the constructor
of the class as shown in Figure 6. The score is typically
based upon characteristics of the desired program. For example, if the target program is intended to output a string,
such as “Hello World”, the fitness score might be the number of characters in the string (i.e., 11). However, since AI
Programmer generates programming code at the byte level,
the fitness score should account for incremental differences
in output characters. In this case, one should multiply the number of target characters in the output by 256, resulting in 2816 (i.e., 11 * 256), and use that as the resulting target fitness score.
3.4.3 Simulation of Complex Instructions
Optimizing program execution is a principle concern for
ML program generators. This is because such systems may
generate and execute dozens to millions of programs before
one with a high enough fitness score is found. While simple
operations, such as add, load, store, jmp, may take a
single clock cycle to complete, more complex operations
can require many. Examples include disk I/O, networking,
and peripheral device access. These types of operations can
significantly increase program execution time, as they often
rely on accessing services or devices with increased latency.
AI Programmer can simulate the execution of these complex instructions. In doing so, the GA-based programs it generates can execute more efficiently, while still retaining the
ability to check the program’s fitness goals. Moreover, such
simulation protects the devices themselves from overuse by
the plethora of programs that may attempt to access them
during exploratory evolution of GA program generation.
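As an illustration of this idea, a hypothetical extended instruction for writing a byte to a file could be simulated inside the interpreter loop as follows; the 'w' opcode and the simulatedFileOutput buffer are our own placeholders and are not part of the instruction set in Table 1.

case 'w':                                    // hypothetical "write byte to file" instruction
    simulatedFileOutput.Add(tape[ptr]);      // record the intended write in memory; no disk I/O occurs
    break;                                   // the fitness test can later inspect simulatedFileOutput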
public StringFitness(GA ga, int maxIterCount)
: base(ga, maxIterCount)
{
_targetStr = "Hello World";
_targetFit = _targetStr.Length * 256;
}
Figure 6. Example target fitness score for “Hello World.”
AI Programmer is designed to continue execution, generating incrementally better programs that satisfy the fitness
conditions, until the current fitness score reaches its target.
Specifying Fitness Conditions Next, the user must specify the rules that are used to score each of the generated
programs (i.e., the fitness test). At each generation epoch,
AI Programmer will favor programs that have fitness scores
that are closer to the target fitness score. Therefore, careful crafting of the fitness conditions is required so fitness
scores accurately represent the desired goal of the program.
In the “Hello World” example, the user must specify that
the output of the program should match the target string. To
achieve this, one can add to the fitness score according to
how close each character in the generated output string is to
the target string. In particular, the fitness test can simply loop
over the characters in the string ”Hello World”, and compare
each one against the characters produced in the output of
the generated program, adding or subtracting accordingly,
as shown in Figure 7.
program, rather than the generation of a program that only
solves the exact training examples provided.
int val;
if (Int32.TryParse(_console.ToString(), out val)) {
Fit += 256 - Math.Abs(val -(input1 + input2));
}
Figure 8. Calculating the fitness of adding two numbers.
Programmatic Sequences of Action Because different
programs require different sequences of actions (e.g., requesting input, outputting a result, etc.), AI Programmer
provides users with a mechanism to specify the necessary
programmatic sequence of actions within the fitness method.
Programmatic sequences can be provided in the form of a
simple state machine within the fitness check method. When
the generated program executes a command to request input
from the user, a bonus score can be applied to the fitness if
it is executed at the correct time in the sequence of actions.
Likewise, when data is output, one can add or subtract from
the fitness score according to the time the action is executed.
It is important for users to account for programmatic sequence bonuses when they are generating the initial target
fitness. Doing so will ensure the generated solution will satisfy all required constraints, including sequences of events,
before returning a viable solution program.
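A minimal sketch of such a state machine inside a fitness method is shown below; the executionTrace of recorded ',' and '.' events, the bonus values, and the Fit field are illustrative assumptions rather than the framework's actual members.

int step = 0;                                          // 0 = expect input, 1 = expect output, 2 = done
foreach (char action in executionTrace)                // interpreter events, in execution order
{
    if (step == 0 && action == ',') { Fit += 10; step = 1; }       // input requested first: bonus
    else if (step == 1 && action == '.') { Fit += 10; step = 2; }  // output produced next: bonus
    else Fit -= 1;                                                 // out-of-sequence action: small penalty
}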
for (int i = 0; i < _targetStr.Length; i++)
if (_console.Length > i)
Fit += 256-Math.Abs(_console[i]-_targetStr[i]);
Figure 7. Adding and subtracting the fitness score based
upon program output.
After assigning a fitness score to each generated program within the current pool, a check is made to determine
whether the target fitness score has been achieved by any of
the generated programs. If so, AI Programmer halts and returns the solution program. Otherwise, it continues with the
next generation of programs.
5. Results
Using AI Programmer, we were able to generate numerous
complete software programs. A complete listing of these
programs, their associated program generation time, and the
total number of evolutionary generations used to build them
are shown in Table 2. It is important to note that the number of evolutionary generations is not equivalent to the total
computational time to generate a program. This is due, in
large part, to varying genome size and fitness function computation, which is unique to each program.
Even though genetic algorithms are embarrassingly parallel and AI Programmer utilizes task-level parallelism for
each generation’s genome construction and fitness test evaluation, we limited our experimental study to commodity hardware only. All experiments were run on an Intel Quad-Core
i7 CPU, 2.7 GHz, with 16 GB of RAM and an x64-based processor, utilizing up to 4 threads for the parallelism described above. We constrained our experiments in this manner to demonstrate the efficacy of AI Programmer for real-world autonomous software development.
For the remainder of this section, we highlight the details
of some of the programs listed in Table 2 and discuss novel
aspects that emerged when generating them.
Specifying Conditions for Variable Output Previously, we
presented an example on how to train AI Programmer to find
an exact string. However, for more complex scenarios, such
as variable outputs or calculated values, a series of training
examples may be required. In such scenarios, training data
can be created to serve as an initial set of examples to base
the fitness score upon. Thereafter, AI Programmer can be
guided with an evolutionary goal to generalize from the
training data and provide correct results for new data.
As an example of variable output, consider the generation
of a program to output the summation of two numbers. The
target fitness score for this program would be the desired
output, which, in this case, would be one byte, multiplied by
the range of potential values (256). To construct the target
fitness score, we can simply multiply the target fitness for a
single result by the number of training examples. Therefore,
in this case, the target fitness is trainingCount * 256.
After specifying the target fitness score, we can implement the actual fitness check for adding two numbers by
looping over each training set combination (consisting of
two numbers), inputting those values to our program, and
checking the output for the correct sum. An example of this
is shown in Figure 8. By providing varying training input
values we can help foster the generalization of the solution
5.1 Greetings
“Hello World” is usually one of the first programs human
programmers create when they begin learning programming.
-><[>-<+++]->>++++[++++++++++++++++++<+]>.---.
+-+++++++..+++.+>+<><+[+><><>+++++++++.+-<-+++
+[++[.--------.+++.------],.-----]]
Table 2. AI Programmer Results
Name                          Duration (s)    Generations
hi                            52              5,700
Hi!                           7,644           1,219,400
hello                         1,713           252,000
hello world                   7,702           580,900
reddit                        1,362           195,000
Keep Calm Keep Coding         944             21,400
I love all humans             36,000          6,057,200
hello {user}                  1,793           42,800
Addition                      2,698           92,400
Subtraction                   4,305           177,900
Multiply x2                   6,353           242,000
Multiply x3                   5,165           87,200
XOR                           2,095           146,400
Fibonacci                     21,862          151,900
If/then conditionals          8,313           46,200
cats are evil                 10,209          814,400
Bottles of Beer on the Wall   2,957           61,400
Reverse string                49              2,600
CSV parse                     173             9,000
Extract in quotes             6,478           212,100
Extract in quotes 2           9,996           188,400
Trim left of quote            9,030           341,700
XML to JSON                   6,866           820,900
Warning countdown             48              900
Figure 10. Generated program: “hello world”
mans,” which was successfully generated after 6,057,200
generations. It consists of the code shown in Figure 11. The
fitness method for this example includes a check on the output string length to ensure an exact matching output, without
extraneous text.
To ensure an exact output string, the fitness score includes
not just a check on the output characters, but also a check
on the length of the string. In this case, the target fitness
included an additional 10 points, of which a percentage of
this amount is added to the resulting fitness, depending on
how close the length of the output string matches the length
of the target. This forces the generation of a program that
outputs the exact target string, without extraneous output
instructions, as the generation process will not halt until the
target fitness is reached, of which, 10 points comprise having
the correct output length.
+[>+<+++]+>------------.+<+++++++++++++++++++
++++++++++++.>+++++++++++++++++++++++++++++++
+++.+++.+++++++.-----------------.--<.>--.+++
++++++++..---<.>-.+++++++++++++.--------.-----------.+++++++++++++.+++++.
Figure 11. Generated program: “I love all humans”
As such, we found it fitting to guide AI Programmer to learn
some basic greetings for its early programs. Rather than
starting with “Hello World”, we first had AI Programmer
create a more simplistic program that simply output “hi.” It
was successfully generated after 5,700 generations and the generated
code is shown in Figure 9.
// Assigning the target fitness.
_targetFitness = _targetString.Length * 256;
_targetFitness += 10;
...
// Calculating the fitness length bonus.
Fitness += 10 * ((_targetString.Length -
    Math.Abs(_console.Length - _targetString.Length)) /
    _targetString.Length);

+[+++++-+>++>++-++++++<<]>++.[+.]-.,-#>>]<]
Figure 9. Generated program: “hi”
The generated program fulfilled its requirement to output
the target text, but interestingly included subsequent random
characters, which contained parsing errors, including nonmatching brackets. However, AI Programmer’s interpreter
computes results until the program fails. In this manner, the
syntax error (which is later on in the code, after a solution
is reached) does not negatively impact its fitness score, and
thus offers a working solution. In fact, the generated code
can be executed in almost any third-party interpreter as a
valid working program (provided, warnings are ignored).
Next, we guided AI Programmer to generate the famous
“hello world” output which was successfully constructed
after 580,900 generations and consists of the code shown in
Figure 10.
Figure 12. A percentage of 10 points is added to the fitness,
according to how exact the length of the output is to the
target.
5.2 Input-Output Computations
We next guided AI Programmer to generate programs that
perform computations based on user input. In such programs, the user provides some input and the computer program then generates the appropriate output.
Reversing a String AI Programmer was able to generate
the program to reverse any string after only 2,600 generations. The generated code is shown in Figure 13.
“I love all humans” As a humorous aside, we asked AI
Programmer to create the program to output “I love all hu-
+->,>,[>+,],,,,-<[.+<]
branching, successful program generation required more advanced techniques within the fitness function.
In particular, a check was needed to examine the interpreter’s memory register (i.e., current data pointer via shift
operations), where the distinct number of memory registers
being used by the program was counted, providing a bonus
to fitness to favor more memory register usage over less.
This aided in inspiring diversity amongst child programs.
Additionally, the instruction pointer used for each print command was recorded and weighed against the fitness score. A
penalty was applied for reuse of the same print command.
This helped to foster diversity and achieve a successful if-then result.
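In code, such checks might look like the following sketch; the interpreter properties DistinctCellsTouched and PrintInstructionPointers, and the weights, are hypothetical names we use for illustration, not the embedded interpreter's actual interface.

Fit += 2 * interpreter.DistinctCellsTouched;           // reward programs that use more memory registers

int reusedPrints = interpreter.PrintInstructionPointers
                              .GroupBy(ip => ip)
                              .Count(g => g.Count() > 1);
Fit -= 5 * reusedPrints;                               // penalize reusing the same '.' instruction for output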
Figure 13. Generated program for reversing a string.
When executed, the program prompts the user for input. The
user then types one character at a time until a value of “0” is
entered. A novelty of this program is that it is required to take
variable size input first before performing the majority of
its program logic. However, the program’s internal memory
state must manage the variable input, as the program must
read all input first to locate the final character entered, which
is the first character in the reversed string. The genetic algorithm was able to produce this logic automatically, based
upon the fitness method.
Addition and Subtraction AI Programmer was able to
generate programs for addition after 92,400 generations
(Figure 14) and subtraction after 177,900 generations (Figure 15).
We noticed that the program generation time increased significantly as the length of the target output increased. Furthermore, the need to extend AI Programmer beyond the basic instruction set was deemed a necessity if we were to have
it produce programs with more interesting features, such as
file I/O and networking capabilities.
As such, we extended AI Programmer to use an extended
programming instruction set, which reduced code generation time and improved code compression due to an increased range of instruction specificity (i.e., fewer instructions to achieve the same result). However, a disadvantage
of utilizing the extended instruction set is that the generated
programs would be difficult to test in standard interpreters.
As the extended instruction set for AI Programmer deviated
from the traditional programming language, standard interpreters would no longer be able to run the produced code. In
our case, AI Programmer’s internally developed interpreter
was modified to support the extended instruction set, so this
was not a practical obstacle.
,>,-[-<+>]<+.
Figure 14. Generated program for performing addition.
,-->,-[-<->]<+.
Figure 15. Generated program for performing subtraction.
If-Then Conditionals with User Input Generating programs involving more complex programming logic, such as
the ability to perform if-then decisions and actions, requires
a more advanced type of fitness function. However, as described in Section 3.4, AI Programmer’s embedded interpreter provides significantly more access to program state
than just its output, which is essential for generating a large
variety of more complex programs.
For example, AI Programmer was able to produce a program which prompts the user for input (e.g., 1, 2 or 3) and
outputs text based on which value was entered, similar to selecting an option from a menu. By entering the value “1”,
the program would output “hi”. Entering “2” resulted in the program output of “z”. Entering “3” resulted in the output
“bye”. The program was generated in 446,200 generations.
The produced code was notably larger than previously
generated programs, containing 650 instructions (although
not all instructions are needed). The larger code was required, as the conditional branches are contained within individual blocks of the code.
6. Optimizing Program Generation
6.1 Extended Instruction Set
Several extensions of the programming language used by AI
Programmer exist, which are suitable to decrease program
generation time. Specifically, the speed-enhancing extension
set, Extended Type III [10], offers several programming instructions that aid generation. These instructions include the
ability to immediately set the value of a particular cell to
a multiple of 16, also called “fast cell initializers”. This aids
in allowing a generated program to quickly reach displayable
ASCII range characters for output, thus, decreasing the number of individual increment programming instructions that
would normally be required.
In addition to key instructions taken from Extended Type
III, we added several new instructions to support calling
functions from within a program, allowing for increasingly
complex programs to be generated.
5.3 Complexity in Fitness Functions
As the complexity of the target program grows, so too does
the fitness function. After all, the fitness function needs to
guide the engine in determining how well a particular child
program matches the targeted solution. For conditionals and
Fibonacci Sequence With these extensions in place, AI
Programmer was able to generate a program to output the
,>,$[!>--$<<a>>]4]+,,-[-<+>]<+.$@
Figure 16. Generated program to output the Fibonacci sequence from two starting input values.
Fibonacci sequence up to 233 (see footnote 4), which was generated in
approximately six hours. The program prompts the user for
input of the two starting values in the sequence. It then outputs the next digits in the Fibonacci sequence. The generated
code for this, using the extended instruction set, is shown in
Figure 16.
Advancing Complexity The ability of the GA to generate a
program for solving the Fibonacci sequence was a profound
advancement. The solution program contains several distinct
programming tasks, including prompting the user for input
of two numbers at the beginning of execution, calculating
the addition of values, determining the correct mathematical
sequence, outputting the result, and looping to repeat the
process for each value in the sequence.
This combination of tasks, spreading across a range of
programming abilities, might typically be given to human
programmers in order to evaluate their programming proficiency. The capability of the GA to automatically generate
this type of program demonstrates the potential for future
expansion of the system.
7.2 Different Approaches in Program Generation
AI Programmer has similarities to a program synthesis technique called sketching in that each approach attempts to automatically generate software by using some human guidance. However, the similarities between the two approaches
end there. On one hand, sketching is a program synthesis
technique where a programmer provides only a minimalistic
outline of an implementation and the compiler generates the
remaining code [35, 36]. On the other hand, AI Programmer requires no partial implementation, but instead requires human developers to design fitness tests which guide the evolutionary algorithm for the entire program construction.
Another slightly related work is verified lifting [16]. Verified lifting aims to lift algorithms written in one language
and place a formally verified equivalent in another language.
The benefits of verified lifting are highly practical, especially
when considering the need for such systems as real software
systems often migrate from one programming language to
another. However, verified lifting and AI Programmer are
only loosely similar in that both systems perform automatic
code generation, but do not possess any other similarities in
their approaches.
7. Related Work
Genetic programming has previously been applied in some
restricted cases. A key limitation in their broader application has been in the computational density of the search
space involved in program generation, which exponentially
increases as programs grow in complexity [18]. AI Programmer provides a novel mitigation of this inefficiency by using
a minimalistic programming language, exploiting the natural parallelism of GAs, simulating complex instructions, and
embedding an optimized interpreter for fast execution and
fast-failure of defunct programs.
7.3 Slow Acceptance of Genetic Algorithms
The potential capabilities of automated program generation
using machine learning techniques have been considered for
some time. Yet, these approaches have encountered obstacles inhibiting their practical application. Part of those obstacles was a lack of computational power and of efficient data movement. Advances in these fields have produced recent
breakthroughs leading to the democratization of machine
learning, especially in the area of deep learning, which requires complex neural networks and a large amount of training data [11].
Still, other challenges remain in automated programming,
as explained by O'Neill et al. [27], who describe the slow
growth of genetic programming, despite the successes that it
has achieved in various real-world domains. To the best of
our knowledge, AI Programmer is the first end-to-end GA
Somewhat related to our work, is the use of program synthesis driven by genetic programming in hardware-based
niche fields. Koza et al. used an automated process for creating analog circuits, involving genetically evolved designs
with evolutionary computation to produce circuit components that typically require human-level intelligence to construct [19]. In addition, they used human constructed fitness
methods to guide their circuit design. Although applied in
different domains, the high-level machine learning approach
of Koza et al.’s system is similar to AI Programmer.
One of the key components of our research is the usage of a minimalistic programming language to limit the computational complexity of generated programs. This approach has been found useful in other areas of genetic programming, including the simulation of artificial life, as described in Ling's work [21]. In a simulation library based on genetic algorithms and biological hierarchy, the system, called Ragaraja, uses biological concepts to form an esoteric programming language, consisting of a set of 3-character instructions. In this manner, the system is able to simplify the genetic algorithm generation and mutation process by limiting the number of possible instruction combinations. Although applied in a completely different domain, the effects of this approach are similar to AI Programmer, specifically for optimizing the generation time and limiting the complexity of generated solutions to a constrained set of instructions.
4 255 is the max value for a byte, with the next Fibonacci sequence value being 377.
8. Conclusion and Future Work
Traditional human-based computer programming is approaching a dramatic shift. With increasingly complex software and hardware advances and the growing challenges
integrating the two, the craft of software development will
inevitably surpass the capabilities of humans. As that time
approaches, it will be necessary to have some form of automatic software generation to assist humans in software development beyond what exists today (e.g., compilers, higherlevel programming languages, etc.).
The results presented in this paper provide early evidence that machine learning techniques, specifically genetic algorithms, may offer a partial solution for
automatic program generation. We showed that fully functional programs can indeed be automatically generated, provided they are supplied with some human guidance in the
way of input parameters and training data. While the initial
set of programs generated by AI Programmer are similar in
complexity to those that a novice human programmer might write, the range of generated programs need not be restricted by traditional limits
such as human time or human intellect. Instead, they are simply a function of fitness test complexity and computational
resources.
In addition to correctness, efficient implementation of fitness methods is imperative to the practical application of
AI Programmer. This is because each generated program is
checked against the fitness method every time a new program is evaluated. An important open area of future work is
the deep examination of how to implement fitness methods
as efficiently as possible while still retaining a high degree
of correctness. One possible solution is to build superoptimizers specifically for fitness test optimizations [29].
Another important open area in ML-based program generation is the need for specifically crafted programming languages that have strong alignment with the constraints of
ML computation. The current programming languages we
use today, for humans, are ill-suited for ML-based program
generation. The approach we use for typical programming language creation needs to be abandoned and rethought when
considering a future of ML-driven program generation. Only
once this is done, can we begin to envision a new future of
computer software development, driven by artificial intelligence based systems, with human creativity and design guiding the way.
References
[5] L. Cardelli and P. Wegner. On understanding types, data
abstraction, and polymorphism. ACM Comput. Surv., 17(4):
471–523, Dec. 1985. ISSN 0360-0300. URL http://doi.
acm.org/10.1145/6041.6042.
[6] T. H. Cormen, C. Stein, R. L. Rivest, and C. E. Leiserson.
Introduction to Algorithms. McGraw-Hill Higher Education,
2nd edition, 2001. ISBN 0070131511.
[7] E. W. Dijkstra, L. Lamport, A. J. Martin, C. S. Scholten, and
E. F. M. Steffens. On-the-fly garbage collection: An exercise
in cooperation. Commun. ACM, 21(11):966–975, Nov. 1978.
ISSN 0001-0782. URL http://doi.acm.org/10.1145/
359642.359655.
[8] E. D. Dolan and J. J. Moré. Benchmarking optimization software with performance profiles. Mathematical programming,
91(2):201–213, 2002.
[9] P. Domingos. The Master Algorithm: How the Quest for the
Ultimate Learning Machine Will Remake Our World. 2015.
[10] Esolangs.org. Extended type iii. URL https://goo.gl/
9bS2gF.
[11] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning.
MIT Press, 2016. URL http://www.deeplearningbook.
org. Book in preparation for MIT Press.
[12] J. E. Gottschlich, M. P. Herlihy, G. A. Pokam, and J. G.
Siek. Visualizing transactional memory. In Proceedings of
the 21st International Conference on Parallel Architectures
and Compilation Techniques, PACT ’12, pages 159–170, New
York, NY, USA, 2012. ACM. ISBN 978-1-4503-1182-3. URL
http://doi.acm.org/10.1145/2370816.2370842.
[13] S. L. Graham, P. B. Kessler, and M. K. Mckusick. Gprof: A
call graph execution profiler. SIGPLAN Not., 17(6):120–126,
June 1982. ISSN 0362-1340. URL http://doi.acm.org/
10.1145/872726.806987.
[14] J. Jeffers and J. Reinders. Intel Xeon Phi Coprocessor High
Performance Programming. Morgan Kaufmann Publishers
Inc., San Francisco, CA, USA, 1st edition, 2013. ISBN
9780124104143, 9780124104945.
[15] R. Jones and R. Lins. Garbage Collection: Algorithms for
Automatic Dynamic Memory Management. John Wiley &
Sons, Inc., New York, NY, USA, 1996. ISBN 0-471-94148-4.
[16] S. Kamil, A. Cheung, S. Itzhaky, and A. Solar-Lezama. Verified lifting of stencil computations. In Proceedings of the
37th ACM SIGPLAN Conference on Programming Language
Design and Implementation, PLDI ’16, pages 711–726, New
York, NY, USA, 2016. ACM. ISBN 978-1-4503-4261-2. doi:
10.1145/2908080.2908117. URL http://doi.acm.org/
10.1145/2908080.2908117.
[1] Computer worm. URL https://en.wikipedia.org/wiki/Computer_worm.
[2] The 2017 top programming languages. URL https://spectrum.ieee.org/computing/software/the-2017-top-programming-languages.
[3] Uniform distribution. URL https://en.wikipedia.org/wiki/Uniform_distribution_(continuous).
[4] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. A. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zhang. TensorFlow: A system for large-scale machine learning. CoRR, abs/1605.08695, 2016. URL http://arxiv.org/abs/1605.08695.
[17] M. Keating, D. Flynn, R. Aitken, A. Gibbons, and K. Shi. Low
Power Methodology Manual: For System-on-Chip Design.
Springer Publishing Company, Incorporated, 2007. ISBN
0387718184, 9780387718187.
[33] M. L. Scott. Programming Language Pragmatics, Third Edition. Morgan Kaufmann Publishers Inc., San Francisco, CA,
USA, 3rd edition, 2009. ISBN 0123745144, 9780123745149.
[18] J. R. Koza. Human-competitive results produced by genetic
programming. Genetic Programming and Evolvable Machines, 11(3-4):251–284, 2010.
[34] J. G. Siek and W. Taha. Gradual typing for functional languages. In IN SCHEME AND FUNCTIONAL PROGRAMMING WORKSHOP, pages 81–92, 2006.
[19] J. R. Koza, F. H. Bennett, D. Andre, M. A. Keane, and F. Dunlap. Automated synthesis of analog electrical circuits by
means of genetic programming. IEEE Transactions on evolutionary computation, 1(2):109–128, 1997.
[35] A. Solar-Lezama, R. Rabbah, R. Bodı́k, and K. Ebcioğlu. Programming by sketching for bit-streaming programs. SIGPLAN Not., 40(6):281–294, June 2005. ISSN 0362-1340. doi:
10.1145/1064978.1065045. URL http://doi.acm.org/
10.1145/1064978.1065045.
[20] L. Lamport. Proving the correctness of multiprocess programs. IEEE transactions on software engineering, (2):125–
143, 1977.
[36] A. Solar-Lezama, L. Tancau, R. Bodik, S. Seshia, and
V. Saraswat. Combinatorial sketching for finite programs.
SIGOPS Oper. Syst. Rev., 40(5):404–415, Oct. 2006. ISSN
0163-5980. doi: 10.1145/1168917.1168907. URL http:
//doi.acm.org/10.1145/1168917.1168907.
[21] M. H. Ling. An artificial life simulation library based on
genetic algorithm, 3-character genetic code and biological
hierarchy. The Python Papers, 7:5, 2012.
[37] G. P. Stein, G. Hayun, E. Rushinek, and A. Shashua. A
computer vision system on a chip: a case study from the
automotive domain. 2012 IEEE Computer Society Conference
on Computer Vision and Pattern Recognition Workshops, 00:
130, 2005. ISSN 1063-6919.
[22] F. McKeen, I. Alexandrovich, I. Anati, D. Caspi, S. Johnson,
R. Leslie-Hurd, and C. Rozas. Intel® software guard extensions (intel® sgx) support for dynamic memory management inside an enclave. In Proceedings of the Hardware
and Architectural Support for Security and Privacy 2016,
HASP 2016, pages 10:1–10:9, New York, NY, USA, 2016.
ACM. ISBN 978-1-4503-4769-3. URL http://doi.acm.
org/10.1145/2948618.2954331.
[38] A. K. Sujeeth, K. J. Brown, H. Lee, T. Rompf, H. Chafi,
M. Odersky, and K. Olukotun. Delite: A compiler architecture for performance-oriented embedded domain-specific languages. ACM Trans. Embed. Comput. Syst., 13(4s):134:1–
134:25, Apr. 2014. ISSN 1539-9087. URL http://doi.
acm.org/10.1145/2584665.
[23] Z. Michalewicz. Genetic Algorithms Plus Data Structures
Equals Evolution Programs. Springer-Verlag New York, Inc.,
Secaucus, NJ, USA, 2nd edition, 1994. ISBN 0387580905.
[39] L. Torczon and K. Cooper. Engineering A Compiler. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2nd
edition, 2011. ISBN 012088478X.
[24] M. Mitchell. An Introduction to Genetic Algorithms. MIT
Press, Cambridge, MA, USA, 1998. ISBN 0262631857.
[25] U. Müller. Esoteric programming. URL https://goo.gl/
GnzaQe.
[40] A. Turing. Turing completeness. URL https://en.
wikipedia.org/wiki/Turing_completeness.
[26] H. Nguyen. Gpu Gems 3. Addison-Wesley Professional, first
edition, 2007. ISBN 9780321545428.
[41] R. M. Yoo, C. J. Hughes, K. Lai, and R. Rajwar. Performance
evaluation of intel® transactional synchronization extensions for high-performance computing. In Proceedings of the
International Conference on High Performance Computing,
Networking, Storage and Analysis, SC ’13, pages 19:1–19:11,
New York, NY, USA, 2013. ACM. ISBN 978-1-4503-2378-9.
URL http://doi.acm.org/10.1145/2503210.2503232.
[27] M. O’Neill, L. Vanneschi, S. Gustafson, and W. Banzhaf.
Open issues in genetic programming. Genetic Programming
and Evolvable Machines, 11(3-4):339–363, 2010.
[28] H. Patil, C. Pereira, M. Stallcup, G. Lueck, and J. Cownie.
Pinplay: A framework for deterministic replay and reproducible analysis of parallel programs. In Proceedings of
the 8th Annual IEEE/ACM International Symposium on Code
Generation and Optimization, CGO ’10, pages 2–11, New
York, NY, USA, 2010. ACM. ISBN 978-1-60558-635-9. URL
http://doi.acm.org/10.1145/1772954.1772958.
[42] S. T. Zargar, J. Joshi, and D. Tipper. A survey of defense
mechanisms against distributed denial of service (ddos) flooding attacks. IEEE Communications Surveys and Tutorials, 15
(4):2046–2069, 2013. URL http://dblp.uni-trier.de/
db/journals/comsur/comsur15.html#ZargarJT13.
[29] P. M. Phothilimthana, A. Thakur, R. Bodik, and D. Dhurjati.
Scaling up superoptimization. SIGPLAN Not., 51(4):297–310,
Mar. 2016. ISSN 0362-1340. doi: 10.1145/2954679.2872387.
URL http://doi.acm.org/10.1145/2954679.2872387.
[30] B. C. Pierce. Types and Programming Languages. The MIT
Press, 1st edition, 2002. ISBN 0262162091, 9780262162098.
[31] S. J. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Pearson Education, 2 edition, 2003. ISBN
0137903952.
[32] M. L. Scott. Programming Language Pragmatics. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2000. ISBN 1-55860-442-1.
Fractality of Massive Graphs: Scalable Analysis
with Sketch-Based Box-Covering Algorithm∗
Takuya Akiba†, Kenko Nakamura‡, Taro Takaguchi§
† Preferred Networks, Inc.   ‡ Recruit Communications Co., Ltd.   § National Institute of Information and Communications Technology
arXiv:1609.07994v1 [] 26 Sep 2016
[email protected], [email protected], [email protected]
Abstract—Analysis and modeling of networked objects are
fundamental pieces of modern data mining. Most real-world
networks, from biological to social ones, are known to have
common structural properties. These properties allow us to
model the growth processes of networks and to develop useful
algorithms. One remarkable example is the fractality of networks,
which suggests the self-similar organization of global network
structure. To determine the fractality of a network, we need
to solve the so-called box-covering problem, where preceding
algorithms are not feasible for large-scale networks. The lack
of an efficient algorithm prevents us from investigating the
fractal nature of large-scale networks. To overcome this issue,
we propose a new box-covering algorithm based on recently
emerging sketching techniques. We theoretically show that it
works in near-linear time with a guarantee of solution accuracy.
In experiments, we have confirmed that the algorithm enables us
to study the fractality of million-scale networks for the first time.
We have observed that its outputs are sufficiently accurate and
that its time and space requirements are orders of magnitude
smaller than those of previous algorithms.
I. INTRODUCTION
Graph representation of real-world systems, such as social relationship, biological reactions, and hyperlink structure,
gives us a strong tool to analyze and control these complex
objects [21]. For the last two decades, we have witnessed
the spark of network science that unveils common structural
properties across a variety of real networks. We can exploit
these frequently observed properties to model the generation
processes of real networked systems [20] and to develop
graph algorithms that are applicable to various objects [19].
A notable example of such properties is the scale-free property [3], [7], which manifests a power-law scaling in the vertex
degree distribution and existence of well-connected vertices
(often called hubs). The scale-free property, existence of hubs
especially, underlies efficient performance of practical graph
algorithms on realistic networks [1], [17].
Although the scale-free property inspires us to design better
network models and algorithms, it is purely based on the
local property of networks, i.e., the vertex degree. Real-world
networks should possess other common properties beyond the
local level. As a remarkable example of such non-local properties, the fractality of complex networks was found in network
science [27], [14]. The fractality of a network suggests that
∗ This work was done while all authors were at the National Institute of Informatics. A shorter version of this paper appeared in the proceedings of
ICDM 2016 [2].
the network shows a self-similar structure; if we replace
groups of adjacent vertices with supervertices, the resultant
network holds a similar structure to the original network
(see Section II-C for its formal definition). The fractality of
networks gives us unique insights into modeling of growth
processes of real-world networks [28]. In addition, fractal and
non-fractal networks, even with the same degree distribution,
indicate striking differences in facility of spreading [25] and
vulnerability against failure [16]. Aside from theoretical studies, the fractality provides us with useful information about
network topology. Examples include the backbone structure of
networks [15] and the hierarchical organization of functional
modules in the Internet [8], metabolic [28] and brain [13]
networks, to name a few.
Determination of the fractality of a network is based on the
so-called box-covering problem [27] (also see Section II-C).
We locally cover a group of adjacent vertices with a box such
that all vertices in a box are within a given distance from
each other, and then we count the number of boxes we use to
cover the whole network. In principle, we have to minimize
the number of boxes that cover the network, which is known
to be an NP-hard problem (see [26] and references therein).
Although different heuristic algorithms are proposed in the
previous work (e.g., [26], [24]), they are still not so efficient
as to be able to process networks with millions of vertices. This
limitation leaves the fractal nature of large-scale networks far
from our understanding.
Contributions: The main contribution of the present study
is to propose a new type of box-covering algorithm that is
much more scalable than previous algorithms. In general,
previous algorithms first explicitly instantiate all boxes and
then reduce the box cover problem to the famous set cover
problem. This approach requires quadratic Θ(n2 ) space for
representing neighbor sets and is obviously infeasible for
large-scale networks with millions of vertices. In contrast, the
central idea underlying the proposed method is to solve the
problem in the sketch space. That is, we do not explicitly
instantiate neighbor sets; instead, we construct and use the
bottom-k min-hash sketch representation [9], [10] of boxes.
Technically, we introduce several new concepts and algorithms. First, to make the sketch-based approach feasible,
we introduce a slightly relaxed problem called (1 − ϵ)-BoxCover. We also define a key subproblem called the (1 − ϵ)-SetCover problem. The proposed box-cover algorithm consists of two parts. First, we generate min-hash sketches of all boxes to reduce the (1 − ϵ)-BoxCover problem to the (1 − ϵ)-SetCover problem. Our sketch generation
algorithm does not require explicit instantiation of actual boxes
and is efficient in terms of both time and space. Second, we
apply our efficient sketch-space set-cover algorithm to obtain
the final result. Our sketch-space set-cover algorithm is based
on a greedy approach, but is carefully designed with event-driven data structure operations to achieve near-linear time
complexity.
We theoretically guarantee both the scalability and the
solution quality of the proposed box-cover algorithm. Specifically, for a given trade-off parameter k and radius parameter
ℓ, it works in O((n + m)k log k min{ℓ, log n}) time and O(nk + m) space. The produced result is a solution of (1 − ϵ)-BoxCover within a factor 1 + 2ϵ ln n of the optimum for BoxCover for ϵ ≥ 2√(5(ln n)/k), with a high probability that asymptotically approaches 1.
In experiments, we have confirmed the practicability of
the proposed method. First, we observed that its outputs are
quite close to those of previous algorithms and are sufficiently
accurate to recognize networks with ground-truth fractality.
Second, the time and space requirements are orders of magnitude smaller than those of previous algorithms, resulting in
the capability of handling large-scale networks with tens of millions of vertices and edges. Finally, we applied our algorithm to a real-world million-scale network and accomplished its
fractality analysis for the first time.
TABLE I
FREQUENTLY USED NOTATIONS.
Notation      Description
(In the context of the box cover problem)
G = (V, E)    The graph.
n, m          The numbers of vertices and edges in G.
Nδ(v)         The vertices with distance at most δ from v.
(In the context of the set cover problem)
{Sp}p∈P       The set family.
n             The number of elements and collections.
(Bottom-k min-hash sketch)
k             The trade-off parameter of min-hash sketches.
ri            The rank of an item i.
S̃             The min-hash sketch of set S.
C̃(S̃)          The estimated cardinality of set S.
set of items. We first assign a random rank value ri ∼ U(0, 1) to each item i ∈ X, where U(0, 1) is the uniform distribution on (0, 1). Let S be a subset of X. For an integer k ≥ 1, the bottom-k min-hash sketch of S is defined as S̃, where i ∈ S̃ ⇐⇒ ri ≤ k-th{rj | j ∈ S}. In other words, S̃ is the set of vertices with the k smallest rank values. We define S̃ = S if |S| < k.
For a set S ⊆ X, the threshold rank τ(S) is defined as follows. If |S| ≥ k, τ(S) = k-th{ri | i ∈ S}. Otherwise, τ(S) = (k − 1)/|S|. Note that τ(S) = τ(S̃). Using the sketch S̃, we estimate the cardinality |S| as C̃(S) = (k − 1)/τ(S̃). Its relative error is theoretically bounded as follows.
Organization: The remainder of this paper is organized as
follows. We describe the definitions and notations in Section II.
In Section III, we present our algorithm for sketch-space
SetCover. We explain our sketch construction algorithm to complete the proposed method for BoxCover in Section IV.
In Section V, we present a few empirical techniques to further
improve the proposed method. We explain the experimental
evaluation of the proposed method in Section VI. We conclude
in Section VII.
Lemma 1 (Bottom-k cardinality estimator [9], [10]): The
e
cardinality estimation C(S)
is an unbiased estimator√of |S|,
and has a coefficient of variation (CV)1 of at most 1/ k − 2.
II. P RELIMINARIES
In addition, our algorithms heavily rely on the mergeability
of min-hash sketches. Suppose S1 , S2 ⊆ X and S3 = S1 ∪ S2 .
Then, since Se3 ⊆ Se1 ∪ Se2 , Se3 can be obtained only from Se1
and Se2 . We denote this procedure as Merge-and-Purify (e.g.,
Se3 = Merge-and-Purify(Se1 , Se2 )).
For simplicity, we assume that ri is unique for i ∈ X,
and sometimes identify i with ri . In particular, we use the
comparison between elements such as i < j for i, j ∈ X,
where we actually compare ri and rj . We also define k-th(S)
as the element with the k-th smallest rank in S ⊆ X.
A. Notations

We focus on networks that are modeled as undirected unweighted graphs. Let G = (V, E) be a graph, where V and E are the vertex set and edge set, respectively. We use n and m to denote |V| and |E|, respectively. For d ≥ 0 and v ∈ V, we define Nd(v) as the set of vertices with distance at most d from v. We call Nd(v) the d-neighbor. When d = 1, we sometimes omit the subscript, i.e., N(v) = N1(v). We also define Nd(S) for a set S ⊆ V as Nd(S) = ∪_{v∈S} Nd(v). In other words, Nd(S) represents the set of vertices with distance at most d from at least one vertex in S. The notations we frequently use hereafter are summarized in Table I.
B. Bottom-k Min-Hash Sketch

In this subsection, we review the bottom-k min-hash sketch and its cardinality estimator [9], [10]. Let X denote the ground set of items. We first assign a random rank value ri ∼ U(0, 1) to each item i ∈ X, where U(0, 1) is the uniform distribution on (0, 1). Let S be a subset of X. For an integer k ≥ 1, the bottom-k min-hash sketch of S is defined as the set S̃ such that i ∈ S̃ ⟺ ri ≤ k-th{rj | j ∈ S}. In other words, S̃ is the set of items of S with the k smallest rank values. We define S̃ = S if |S| < k.

For a set S ⊆ X, the threshold rank τ(S) is defined as follows. If |S| ≥ k, τ(S) = k-th{ri | i ∈ S}. Otherwise, τ(S) = (k − 1)/|S|. Note that τ(S) = τ(S̃). Using sketch S̃, we estimate the cardinality |S| as C̃(S̃) = (k − 1)/τ(S̃). Its relative error is theoretically bounded as follows.

Lemma 1 (Bottom-k cardinality estimator [9], [10]): The cardinality estimate C̃(S̃) is an unbiased estimator of |S| and has a coefficient of variation (CV, the ratio of the standard deviation to the mean) of at most 1/√(k − 2).

The following corollary can be obtained by applying Chernoff bounds [11].

Corollary 2: For ε > 0 and c > 1, by setting k ≥ (2 + c)ε⁻² ln |X|, the probability of the estimate having a relative error larger than ε is at most 1/|X|^c.

In addition, our algorithms heavily rely on the mergeability of min-hash sketches. Suppose S1, S2 ⊆ X and S3 = S1 ∪ S2. Then, since S̃3 ⊆ S̃1 ∪ S̃2, S̃3 can be obtained from S̃1 and S̃2 alone. We denote this procedure as Merge-and-Purify (e.g., S̃3 = Merge-and-Purify(S̃1, S̃2)).

For simplicity, we assume that ri is unique for each i ∈ X, and we sometimes identify i with ri. In particular, we use comparisons between elements such as i < j for i, j ∈ X, where we actually compare ri and rj. We also define k-th(S) as the element with the k-th smallest rank in S ⊆ X.
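To make these definitions concrete, the following short Python sketch (ours, not the authors' C++ implementation; all names are illustrative) builds bottom-k min-hash sketches, estimates cardinalities with C̃, and merges two sketches with Merge-and-Purify.

```python
import random

k = 4
rank = {}  # shared rank assignment r_i ~ U(0, 1)

def r(item):
    # Assign each item a rank once, on first use.
    if item not in rank:
        rank[item] = random.random()
    return rank[item]

def sketch(items):
    """Bottom-k min-hash sketch: the k items of S with the smallest ranks (S itself if |S| < k)."""
    return sorted(items, key=r)[:k]

def estimate(sk):
    """Cardinality estimate C~(S~) = (k - 1) / tau(S~); exact when the sketch holds the whole set."""
    if len(sk) < k:
        return len(sk)            # tau(S) = (k - 1)/|S| gives back |S| exactly
    return (k - 1) / r(sk[k - 1])

def merge_and_purify(sk1, sk2):
    """Sketch of S1 union S2, computed from the two sketches alone."""
    return sorted(set(sk1) | set(sk2), key=r)[:k]

s1, s2 = sketch(range(0, 50)), sketch(range(30, 80))
merged = merge_and_purify(s1, s2)
print(round(estimate(merged)), "estimates", len(set(range(0, 80))))  # |S1 union S2| = 80
```

With small k the estimate is noisy, which is exactly the trade-off that the parameter k controls.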
C. Problem Definition

1) Graph Fractality: The fractality of a network [27] is a generalization of the fractality of a geometric object in Euclidean space [12]. A standard way to determine the fractality of a geometric object is to use the so-called box-counting
method: we tile the object with cubes of a fixed length and count the number of cubes needed. If the number of cubes follows a power-law function of the cube length, the object is said to be fractal. A fractal object is self-similar: we observe similar structures when we zoom in and out.

The idea of the box-counting method has been generalized to analyze the fractality of networks [27]. The box-covering method for a network works by covering the network with boxes of finite length ℓ, where a box refers to a subset of vertices in which all vertices are within distance ℓ. For example, a box with ℓ = 1 is a set of nodes that are all adjacent to each other. If the number of boxes of length ℓ needed to cover the whole network, denoted by b(ℓ), follows a power-law function of ℓ, b(ℓ) ∝ ℓ^{−d}, the network is said to be fractal. The exponent d is called the fractal dimension. As can be seen, b(ℓ) crucially depends on how we place the boxes. In principle, we have to place the boxes so that b(ℓ) is minimized in order to assess the precise scaling. However, this box-covering problem is NP-hard, which is why we propose a new approximation algorithm for it in the rest of this paper.
After computing b(ℓ) for a network, we want to decide whether the network is fractal or not. A typical indicator of a non-fractal network is an exponential form, b(ℓ) ∝ exp(cℓ), where c is a constant factor [29]. Therefore, comparing the fits of the obtained b(ℓ) to power-law and exponential functions enables us to determine the fractality of the network. Figure 1 illustrates the comparison for the (3, 3, 7)-flower, a network model with ground-truth fractality [23] (see Section VI-A). Since b(ℓ) is closer to the power-law function than to the exponential function in this case, the fitting procedure correctly indicates the fractality of this network model.

Fig. 1. Fitting of b(ℓ) to the power-law (∝ ℓ^{−1.6}) and exponential (∝ e^{−0.3ℓ}) functions for a synthetic fractal network.
It should be noted that the fractality of a network suggests
its self-similarity. Let us aggregate the vertices in a box into a
supervertex and then aggregate the edges spanning two boxes
into a superedge. Then we obtain a coarse-grained version
of the original network. If the original network is fractal,
the vertex degree distributions of the original and coarse-grained networks are (statistically) the same [27]. Note that
the fractality and self-similarity of a scale-free network are
not equivalent, and a non-fractal scale-free network can be
self-similar under certain conditions [18].
2) Box Cover: As we described in the previous section, the fractality of graphs is analyzed by solving the box-covering problem. The problem has two slightly different versions: the diameter version [27] and the radius version [26]. It has been empirically shown that these two versions yield a negligible difference in the results. In this study, we focus on the radius version, which is defined as follows.

Problem 1 (BOX COVER): In the BOX COVER problem, given a graph G and a radius limit ℓ > 0, the objective is to find a set S ⊆ V of the minimum size such that Nℓ(S) = V.

The size of such a minimum set S is equal to b(ℓ) discussed in Section II-C1. In this study, we consider a slightly relaxed variant of the BOX COVER problem, named (1 − ε)-BOX COVER. The (1 − ε)-BOX COVER problem is defined as follows.

Problem 2 ((1 − ε)-BOX COVER): In the (1 − ε)-BOX COVER problem, we are given a graph G, a radius limit ℓ > 0, and an error tolerance parameter ε > 0. The objective is to find a set S ⊆ V of the minimum size such that |Nℓ(S)| ≥ (1 − ε)n.
3) Set Cover: The BOX COVER problem is a special case of the SET COVER problem, which is defined as follows.

Problem 3 (SET COVER): In the SET COVER problem, we are given a set family {Sp}p∈P. The objective is to find a set R ⊆ P of the minimum size such that ∪_{p∈R} Sp = ∪_{p∈P} Sp.

The proposed box-covering algorithm deals with a slightly different version of SET COVER, named (1 − ε)-SET COVER with sketched input, as a key subproblem, which is defined as follows.

Problem 4 ((1 − ε)-SET COVER with sketched input): In the sketched-input version of the (1 − ε)-SET COVER problem, we are given the min-hash sketches {S̃p}p∈P of a set family {Sp}p∈P and an error tolerance parameter ε > 0. The objective is to find a set R ⊆ P of the minimum size such that |∪_{p∈R} Sp| ≥ (1 − ε)|∪_{p∈P} Sp|.

We first design an efficient approximation algorithm for (1 − ε)-SET COVER (Section III). We then propose a new box-covering algorithm using it (Section IV).
III. SET COVER IN SKETCH SPACE

In this section, we design an efficient approximation algorithm for the sketched-input version of (1 − ε)-SET COVER (Problem 4). We call each p ∈ P a collection and each i ∈ Sp an element. Because of the connection to the BOX COVER problem, we assume that the numbers of collections and elements are equal; we denote both by n, that is, |P| = |∪_{p∈P} Sp| = n. For R ⊆ P, we define SR = ∪_{p∈R} Sp. Moreover, for simplicity, we denote C̃(SR) by C̃(R), which can be calculated from the merged min-hash sketch S̃R.

We first explain the basic greedy algorithm, which runs in O(n²k) time, and then present its theoretical solution guarantee. Finally, we propose an efficient greedy algorithm, which runs in O(nk log n) time and produces exactly the same solution as the basic algorithm.
Algorithm 1 Select-Greedily-Naive({S̃p}p∈P)
1: R ← {}, S̃R ← {}
2: while R ≠ P and C̃(R) < (1 − ε/2)n do
3:   p ← argmax{C̃(R ∪ {p}) | p ∈ P \ R}
4:   R ← R ∪ {p}, S̃R ← Merge-and-Purify(S̃R, S̃p)
5: return R
A. Basic Greedy Algorithm

Our basic greedy algorithm Select-Greedily-Naive is described as Algorithm 1. We start with an empty set R = {}. In each iteration, we calculate C̃(R ∪ {p}) for every p ∈ P \ R, select the p that maximizes the estimated cardinality, and add it to R. We repeat this until C̃(R) reaches at least (1 − ε/2)n, and the resulting R is the solution.
To calculate C̃(R ∪ {p}), together with R we maintain the merged min-hash sketch S̃R, so that S̃R always corresponds to the min-hash sketch of SR. To this end, we use the merge operation of min-hash sketches. Let us assume that the items in a min-hash sketch are stored in ascending order of their ranks. Then, merging two min-hash sketches can be done in O(k) time as in the merge sort algorithm; we just need to pick the k distinct items with the lowest ranks from the two min-hash sketches.
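As an illustration of this procedure, the following self-contained Python sketch of Select-Greedily-Naive (an illustrative rendering, not the authors' implementation) uses the estimator and merge operation defined in Section II-B.

```python
import random

def select_greedily_naive(sets, k, eps, seed=0):
    """sets: dict collection -> iterable of elements; returns the selected collections R."""
    rng = random.Random(seed)
    universe = {i for s in sets.values() for i in s}
    rank = {i: rng.random() for i in universe}
    n = len(universe)
    sk = {p: sorted(s, key=rank.get)[:k] for p, s in sets.items()}  # bottom-k sketches

    def merge(a, b):                       # Merge-and-Purify
        return sorted(set(a) | set(b), key=rank.get)[:k]

    def est(a):                            # C~ computed from a (merged) sketch
        return len(a) if len(a) < k else (k - 1) / rank[a[k - 1]]

    R, sk_R = set(), []
    while R != set(sets) and est(sk_R) < (1 - eps / 2) * n:
        # Recompute the estimated gain of every remaining collection: O(nk) per iteration.
        p = max((q for q in sets if q not in R),
                key=lambda q: est(merge(sk_R, sk[q])))
        R.add(p)
        sk_R = merge(sk_R, sk[p])
    return R
```

Each iteration touches every remaining sketch, which is exactly the cost that the event-driven algorithm of Section III-C is designed to avoid.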
The complexity analysis of the algorithm is as follows.

Lemma 3: Algorithm Select-Greedily-Naive runs in O(n²k) time and O(nk) space.

Proof Sketch. The algorithm always terminates since, even in the worst case, R equals P after the n-th iteration. Each iteration takes O(nk) time, and the number of iterations is at most n. Therefore, the time complexity is O(n²k).
B. Theoretical Solution Guarantee

We can guarantee the quality of the solution produced by the above algorithm as follows.

Lemma 4: For ε ≥ 2√(5(ln n)/k), algorithm Select-Greedily-Naive produces a solution of (1 − ε)-SET COVER within a factor 1 + 2 ln n of the optimum for SET COVER with a probability of at least 1 − 1/n.

In other words, with a high probability that asymptotically approaches 1, R is a solution of (1 − ε)-SET COVER and |R| ≤ (1 + 2 ln n)|R*|, where R is the output of algorithm Select-Greedily-Naive and R* is the optimum solution of SET COVER (with the same set family as the input).
Proof. Let R be the output of algorithm Select-Greedily-Naive, and let R* be the optimum solution to SET COVER. Let R0 = ∅ and let Ri ⊆ P be the currently selected sets after the i-th iteration of algorithm Select-Greedily-Naive. Let C(R) = |∪_{p∈R} Sp|. From Corollary 2, for a set R ⊆ P, the probability of C̃(R) having a relative error larger than ε/2 is at most 1/n³.

At the i-th iteration, there is some collection p such that

    n − C(Ri ∪ {p}) ≤ (1 − 1/|R*|)(n − C(Ri)),    (1)

since otherwise there would not be any solution R* of that size to SET COVER. During the i-th iteration, there are at most n new sets to be examined, and thus the union bound implies that the relative error between C and C̃ is at most ε/2 with a probability of at least 1 − 1/n². Therefore, with that probability,

    C̃(Ri+1) ≥ C̃(Ri ∪ {p}) ≥ (1 − ε/2) C(Ri ∪ {p}).    (2)

From Inequalities (1) and (2), with some calculation, we have

    (1 − ε/2)n − C̃(Ri+1) ≤ (1 − 1/|R*|)((1 − ε/2)n − C̃(Ri)),

with a probability of at least 1 − 1/n². As the number of iterations is at most n, by applying the union bound over all iterations, we obtain

    (1 − ε/2)n − C̃(Ri) ≤ (1 − 1/|R*|)^i n < e^{−i/|R*|} n,

with a probability of at least 1 − 1/n. If i is at least 2|R*| ln n, the left-hand side becomes strictly less than 1/n, which is smaller than the resolution of C̃. Therefore, the number of iterations is at most ⌈2|R*| ln n⌉, and thus |R| ≤ ⌈2|R*| ln n⌉ ≤ (1 + 2 ln n)|R*|. Moreover, C(R) ≥ (1 − ε/2)C̃(R) ≥ (1 − ε)n, and thus R is a solution to the (1 − ε)-SET COVER problem.
C. Near-Linear Time Greedy Algorithm

Algorithm Select-Greedily-Naive takes quadratic time, which is unacceptable for large-scale set families. Therefore, we design an efficient greedy algorithm Select-Greedily-Fast, which produces exactly the same output as algorithm Select-Greedily-Naive but runs in O(nk log n) time. As the input size is O(nk), this algorithm runs in near-linear time.

The behavior of Select-Greedily-Fast at a high level is the same as that of Select-Greedily-Naive. That is, we start with an empty set R = {}, and, at each iteration, we add the p ∈ P \ R with the maximum gain in C̃ to R. The central idea underlying the speed-up is to classify the state of each p ∈ P at each iteration into two types and to manage them differently in order to reduce reevaluations of the gain. To this end, we closely examine the relation between the sketches S̃p and S̃R.
Types and Variables: Let us assume that we are in the main loop of the greedy algorithm. Here, we have a currently incomplete solution R ⊂ P. Let p ∈ P \ R. We say that p belongs to type A if the k-th element of S̃R∪p is in Sp, i.e., k-th(S̃R∪p) ∈ Sp. Otherwise, p is type B. Note that S̃R∪p = Merge-and-Purify(S̃R, S̃p).

We define ap = |S̃R∪p ∩ S̃p|, bp = |S̃R∪p ∩ S̃R|, and cp = |S̃R ∩ S̃p|. Please note that, if p is a type-A collection, C̃(R ∪ {p}) is determined by k-th(S̃R∪p) = ap-th(S̃p). Similarly, if p is a type-B collection, C̃(R ∪ {p}) is determined by k-th(S̃R∪p) = bp-th(S̃R).
Algorithm 2 Select-Greedily-Fast({S̃p}p∈P)
 1: // Initialization
 2: R ← {}, S̃R ← {}
 3: QA ← an empty min-queue (key: ranks, value: collections)
 4: QB ← an empty min-queue (key: integers, value: collections)
 5: for j = 1, 2, . . . , k do
 6:   T(j) ← a binary search tree (key: ranks, value: collections)
 7: Ii ← {p ∈ P | i ∈ S̃p}
 8: for all p ∈ P do
 9:   Insert (p, k-th{ri | i ∈ S̃p}) to QA and T(1)
10:   ap ← k, bp ← 0
11: // Main loop
12: while R ≠ P and C̃(R) < (1 − ε/2)n do
13:   // Selection
14:   p ← argmax{C̃(R ∪ {p}) | p is at the top of QA or QB}
15:   R ← R ∪ {p}
16:   Remove p from QA, QB and T
17:   S̃′R ← Merge-and-Purify(S̃R, S̃p)
18:   ∆ ← S̃′R \ S̃R, S̃R ← S̃′R
19:   // Notifying events of type 3
20:   for all i ∈ ∆ do
21:     for all p ∈ Ii do
22:       if p ∈ QB then
23:         Move p from T(bp) to T(bp + 1)
24:       else if ap-th(S̃p) = i then
25:         while ap-th(S̃p) = i do ap ← ap − 1
26:         Update p's key in T(bp + 1) to ap-th(S̃p)
27:         Remove p from QA and Insert (p, bp + 1) to QB
28:       else
29:         Move p from T(bp + 1) to T(bp + 2)
30:       bp ← bp + 1
31:   // Notifying events of types 1 and 2-1
32:   for j = 1, 2, . . . , k do
33:     P′ ← Retrieve those with keys ≥ j-th(S̃R) from T(j)
34:     for all p ∈ P′ do
35:       r ← j-th(S̃R)
36:       Remove p from QA, QB and T(j)
37:       if p ∈ QA then
38:         ap ← ap − 1, bp ← bp + 1
39:       if ap-th(S̃p) ∈ S̃R then
40:         bp ← bp − 1, r ← (j − 1)-th(S̃R)
41:         while ap-th(S̃p) ∈ S̃R do ap ← ap − 1
42:       if ap-th(S̃p) > r then
43:         Insert (p, ap-th(S̃p)) to QA and T(bp + 1)
44:       else
45:         Insert (p, bp) to QB
46:         Insert (p, ap-th(S̃p)) to T(bp)
47: return R
Events to be Captured: Suppose that we have decided to adopt a new collection and R is about to be updated to R′ (i.e., R′ = R ∪ {p0} for some collection p0 ∈ P). Let us first assume that a single element appears in the merged sketch, i.e., S̃R′ \ S̃R = {i}. Let p ∈ P \ R′. In the following, we examine and classify the events where the evaluation of p is updated, i.e., C̃(R ∪ {p}) ≠ C̃(R′ ∪ {p}) (types 1 and 2), or cp is updated (type 3).

Type 1: We assume that i ∉ S̃p and p is type A. From the definition, τ(S̃R∪p) = ap-th(S̃p), and bp-th(S̃R) ≤ ap-th(S̃p) < (bp + 1)-th(S̃R). Therefore, τ(S̃R∪p) ≠ τ(S̃R′∪p) if and only if (bp + 1)-th(S̃R′) ≠ (bp + 1)-th(S̃R) and (bp + 1)-th(S̃R′) < ap-th(S̃p). We say that a type-1 event happens to p when this condition holds.

Type 2: Similarly, we assume that i ∉ S̃p and p is type B. From the definition, τ(S̃R∪p) = bp-th(S̃R) and ap-th(S̃p) < bp-th(S̃R). Thus τ(S̃R∪p) ≠ τ(S̃R′∪p) if and only if bp-th(S̃R) ≠ bp-th(S̃R′). There are two cases: bp-th(S̃R′) ≤ ap-th(S̃p) (type 2-1), after which p becomes type A, or bp-th(S̃R′) > ap-th(S̃p) (type 2-2), after which p still belongs to type B.

Type 3: If i ∈ S̃p, then cp will be incremented.
The following lemma is the key to the efficiency of our
algorithm.
Lemma 5: For each p ∈ P , throughout the algorithm execution, events of type 1, type 2-1, or type 3 occur at most 3k
times in total.
Proof Sketch. We use the progress indicator Φ = k − ap +
bp + cp . Initially, ap = k and bp = cp = 0; hence Φ = 0. For
each event occurrence, Φ increases by at least one. As ap ≥ 0
and bp , cp ≤ k, Φ ≤ 3k.
Please note that events of type 2-2 are not considered in
the above lemma, and, indeed, they happen Θ(n) times in
the worst case for each collection. Therefore, we design the
algorithm so that we do not need to capture type-2-2 events.
Finding the Maximum Gain: To adopt a new collection in each iteration, we need to efficiently find the collection that gives the maximum gain. We clarify the ordering relation within each type.

Type A: Let p, q ∈ P \ R be type-A collections. Then C̃(R ∪ {p}) ≥ C̃(R ∪ {q}) if and only if ap-th(S̃p) ≤ aq-th(S̃q).

Type B: Let p, q ∈ P \ R be type-B collections. Then C̃(R ∪ {p}) ≥ C̃(R ∪ {q}) if and only if bp-th(S̃R) ≤ bq-th(S̃R), which is equivalent to bp ≤ bq.
Data Structures: We use the following data structures to notify collections about an event occurrence.

Type 1: For each type-A collection p, as we observed above, p wants to be notified about a type-1 event when (bp + 1)-th(S̃R) becomes smaller than ap-th(S̃p). Therefore, for j = 1, 2, . . . , k, we prepare a binary search tree T(j), where values are collections and keys are ranks (i.e., collections are managed in the ascending order of ranks in each tree). For each type-A collection p, we put p in T(bp + 1) with key ap-th(S̃p). Then, when j-th(S̃R) is updated to a new value, we retrieve from T(j) the collections with keys larger than or equal to the new value and notify them about an event.

Type 2-1: Similarly, for each type-B collection p, p wants to be notified about a type-2-1 event when bp-th(S̃R) becomes smaller than or equal to ap-th(S̃p). Thus, we store p in T(bp) and set its key to ap-th(S̃p). Then, when j-th(S̃R) is updated to a new value, we retrieve those in T(j) with keys larger than or equal to the new value and notify them about an event.

Type 3: To capture type-3 events, an inverted index suffices. That is, for each i ∈ X, we precompute Ii = {p ∈ P | i ∈ S̃p}. When i enters S̃R, we notify the collections in Ii.

Moreover, we also need data structures to find the collection with the maximum gain, as follows.

Type A: Type-A collections are managed in a minimum-oriented priority queue, where the key of a collection p is ap-th(S̃p).

Type B: Type-B collections are managed in another minimum-oriented priority queue, where the key of a collection p is bp.
Overall Set-Cover Algorithm: The overall algorithm of Select-Greedily-Fast is described as Algorithm 2. In each iteration, we adopt the new collection with the maximum gain, which can be identified by comparing the top elements of the two priority queues. Then, we process events to update the variables and data structures. At the beginning of Section III-C, we assumed that a single element appears in the new sketch. When more than one element comes into the new sketch, we simply process each of them separately. See Algorithm 2 for the details of the update procedure. The algorithm's complexity and solution quality are guaranteed as follows.
Lemma 6: Algorithm Select-Greedily-Fast runs in O(nk log n) time and O(nk) space.

Proof Sketch. Each data structure operation takes O(log n) time and, from Lemma 5, such operations happen at most 3k times for each collection.

Lemma 7: Algorithm Select-Greedily-Fast produces the same solution as algorithm Select-Greedily-Naive.

Proof Sketch. Both algorithms choose the collection with the maximum gain in each iteration.
IV. SKETCH-BASED BOX COVERING

In this section, we complete our sketch-based box-covering algorithm for the (1 − ε)-BOX COVER problem (Problem 2). We first propose an efficient algorithm to construct min-hash sketches representing the ℓ-neighbors, and then present and analyze the overall box-covering algorithm.

A. Sketch Generation

For v ∈ V, we denote the min-hash sketch of Nℓ(v) as Ñℓ(v). Here, we construct Ñℓ(v) for all vertices v ∈ V to reduce the (1 − ε)-BOX COVER problem to the (1 − ε)-SET COVER problem (Problem 4). Our sketch construction algorithm Build-Sketches is described as Algorithm 3.

It receives a graph G and a radius parameter ℓ. Each vertex v manages a tentative min-hash sketch Xv. Initially, Xv only includes the vertex itself, i.e., Xv = {v}, which corresponds to Ñ0(v). Then, we repeat the following procedure ℓ times so that, after the i-th iteration, Xv = Ñi(v). This algorithm has a similar flavor to algorithms for approximate neighborhood functions and all-distances sketches [22], [5], [10].

Algorithm 3 Build-Sketches(G, ℓ)
1: Xv ← {v} for all v ∈ V
2: for ℓ times do
3:   for all v ∈ V in the increasing order of rv do
4:     Av ← {u ∈ V | v was added to Xu in the last iteration}
5:     for all w ∈ N(Av) do
6:       Xw ← Merge-and-Purify(Xw, {v})
7:   if Xv was not modified for any v ∈ V then
8:     break
9: return {Xv}v∈V

In each iteration, for each vertex, we essentially merge the sketches of its neighbors into its sketch in a message-passing-like manner. Two speed-up techniques are employed here to avoid unnecessary insertion checks. For v ∈ V, let Av be the set of vertices into whose sketches v was added in the last iteration. First, for each v ∈ V, we try to insert v only into the sketches of the neighbors of Av, as v cannot be inserted into any other vertex's sketch. Second, we conduct the procedure above in the increasing order of ranks, since this decreases unnecessary insertions. We prove the correctness and complexity of the algorithm as follows.

Lemma 8: In algorithm Build-Sketches, after the i-th iteration, Xv = Ñi(v) for all v ∈ V.

Proof Sketch. We prove the lemma by mathematical induction on i. Since {v} = Ñ0(v), it is true for i = 0. Now we assume that it holds for i and prove that it also holds for i + 1. Let B = {u ∈ V | (v, u) ∈ E}. Since N_{i+1}(v) = {v} ∪ (∪_{u∈B} Ni(u)) and {v} ∈ Ni(v) ⊆ N_{i+1}(v), Ñ_{i+1}(v) can be obtained by merging Ñi(u) for all u ∈ B ∪ {v}.

Corollary 9: Algorithm Build-Sketches computes Ñℓ(v) for all v ∈ V.

Lemma 10: Algorithm Build-Sketches runs in O((n + m)k log k min{ℓ, log n}) expected time and O(nk + m) space.
Proof Sketch. In addition to the graph, the algorithm stores a sketch of size k for each vertex, and hence it works in O(nk + m) space. Each insertion trial takes O(log k) time (Line 6). Therefore, it suffices to prove that the number of traversed edges is O((n + m)kℓ) and O((n + m)k log n). The former bound is easier: in each iteration, the number of last inserted elements in each sketch is at most k, and thus we traverse each edge at most k times per iteration.

For the latter bound, we count the expected number of vertices that are ever inserted into Xv for a vertex v ∈ V. The vertex that is i-th to arrive at v is inserted into Xv with a probability of min{1, k/i}, and thus the expected number is at most

    ∑_{i=1}^{n} min{1, k/i} = k + k(H(n) − H(k)) = O(k log n),

where H(i) is the i-th harmonic number. Therefore, each edge is traversed at most O(k log n) times in total.
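For intuition, the following is a simplified Python rendering of Build-Sketches that omits the rank-ordering and Av-pruning optimizations described above; the graph representation and names are ours.

```python
import random

def build_sketches(adj, ell, k, seed=0):
    """adj: dict v -> list of neighbors; returns dict v -> bottom-k sketch of N_ell(v)."""
    rng = random.Random(seed)
    rank = {v: rng.random() for v in adj}
    X = {v: [v] for v in adj}                        # X_v = sketch of N_0(v) = {v}
    for _ in range(ell):
        new_X, changed = {}, False
        for v in adj:
            merged = set(X[v])                       # Merge-and-Purify v's sketch with its neighbors'
            for u in adj[v]:
                merged.update(X[u])
            new_X[v] = sorted(merged, key=rank.get)[:k]
            changed |= (new_X[v] != X[v])
        X = new_X
        if not changed:                              # early termination, as in Lines 7-8
            break
    return X
```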
B. Overall Box-Cover Algorithm

The overall box-covering algorithm Sketch-Box-Cover is as follows. We first construct the min-hash sketches using algorithm Build-Sketches and then solve the set cover problem in the sketch space using algorithm Select-Greedily-Fast. The guarantees on performance and accuracy are immediate from the previous lemmas and corollaries, as below.

Theorem 11 (Scalability guarantee): Algorithm Sketch-Box-Cover works in O((n + m)k log k min{ℓ, log n}) time and O(nk + m) space.

Theorem 12 (Solution accuracy guarantee): With a probability of at least 1 − 1/n, for ε ≥ 2√(5(ln n)/k), algorithm Sketch-Box-Cover produces a solution to the (1 − ε)-BOX COVER problem within a factor 1 + 2 ln n of the optimum for the BOX COVER problem.

Assuming k is a constant, the time and space complexities are near-linear. Similarly, given a constant ε, the time and space complexities are still near-linear, since it suffices to set k = ⌈20ε⁻² ln n⌉. In practice, as seen in our experiments, the algorithm produces solutions much closer to the optimum than this approximation ratio suggests, with much smaller k.
V. P RACTICAL I MPROVEMENT
In this section, we propose techniques to improve the
practicality of the proposed method.
Exact Coverage Management: For the termination condition in the greedy selection algorithm (i.e., Line 12 in Algorithm 2), when applied to the box cover problem, we propose to use the exact coverage C(R) instead of the estimated coverage C̃(R). This technique makes the results more stable. We can efficiently manage the exact coverage as follows.
First, we prepare an array δ, and initialize it as δ[v] = ∞
for all v ∈ V . After selecting a vertex v in each iteration, we
conduct a pruned breadth-first search (BFS) from v. Suppose
we are visiting vertex u with distance d in this BFS. If
δ[u] ≤ d, then we prune this BFS, i.e., we do not traverse the
edges from u. Otherwise, we set δ[u] = d and continue the
search. We do not visit vertices with a distance larger than `.
The number of covered vertices is the number of non-infinity
values in array δ. Since the value of δ[u] changes at most `+1
times, each vertex or edge is visited O(`) times. Therefore, the
total time consumption of this process throughout all iterations
is O((n + m)`).
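The pruned BFS described above can be sketched in Python as follows (an illustration of the stated rules; the array δ follows the text, while the deque-based traversal and names are ours).

```python
from collections import deque

def update_coverage(adj, delta, v, ell):
    """Pruned BFS from a newly selected center v; delta[u] is the best distance seen so far."""
    if delta.get(v, float("inf")) <= 0:
        return
    delta[v] = 0
    queue = deque([(v, 0)])
    while queue:
        u, d = queue.popleft()
        if d == ell:
            continue                                  # never search beyond distance ell
        for w in adj[u]:
            if delta.get(w, float("inf")) <= d + 1:
                continue                              # prune: w is already covered at least as closely
            delta[w] = d + 1
            queue.append((w, d + 1))

# The exact coverage is the number of vertices u with delta[u] < infinity.
```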
Multi-Pass Execution: On the basis of the above exact coverage management technique, we sometimes detect that, even while the estimated coverage is saturated (i.e., C̃(R) = C̃(P)), the actual coverage is below the specified threshold. In that case, to choose more vertices, we propose to repeat the algorithm from sketch construction until the actual coverage becomes higher than the threshold.
In the i-th pass, we only care for vertices that are not
covered by the previous passes. This can be easily realized
by modifying the algorithm Build-Sketches so that, at Line
1, we set Xv = ∅ for already covered vertices. For accurate
results, node ranks should be reassigned for each pass.
Exact Neighborhood: To further improve the accuracy, we propose to combine our sketch-based algorithm with a non-sketch-based algorithm. For a very small radius parameter ℓ, the neighborhood Nℓ(v) is sometimes much smaller than k. Moreover, even for a larger ℓ, when using the above multi-pass execution technique, the remaining neighborhoods may become small in later passes. In these cases, the sketching approach has little advantage. Therefore, we detect such circumstances and switch to a non-sketch-based greedy algorithm. Interestingly, this switching can be done seamlessly. If |Ñℓ(v)| ≤ k, then Ñℓ(v) = Nℓ(v). Therefore, under such circumstances, the output of algorithm Build-Sketches can be immediately given to the non-sketch-based greedy algorithm.
The proposed overall procedure is as follows. We specify
a parameter α. We start by constructing the “sketches” with
algorithm Build-Sketches, but, at first, we apply the algorithm
as if k = ∞, i.e., we do not conduct purification on the min-hash sketches. During the construction, if the total number of
elements in all min-hash sketches exceeds αnk at some point,
then we conduct purification on all the min-hash sketches,
continue the construction with the actual k value, and pass
the resulting sketches to the sketch-based greedy algorithm.
Otherwise, we apply the non-sketch-based greedy algorithm
to the resulting “sketches,” which are actually exact neighborhood sets. Assuming parameter α is a constant, the total time
and space complexity remain the same.
Exact Box Covering: Together with the preceding three techniques, to make the results even more reliable, we propose to use our algorithm for solving the original BOX COVER problem (Problem 1) rather than the (1 − ε)-BOX COVER problem (Problem 2). In other words, we recommend setting ε = 0 to ensure that all vertices are completely covered. As we will see in the experiments, even with this seemingly extreme threshold, thanks to the above techniques, both the running time and the solution quality remain reasonable.
VI. E XPERIMENTS
In this section, we present our experimental results to verify
the performance of the algorithm. Specifically, we compare our
algorithm with other preceding algorithms in terms of accuracy
and computation time.
We mainly focus on model networks, instead of empirical ones, in order to validate the results of our algorithm against ground-truth theoretical solutions and to investigate the scalability of the algorithm for various network sizes. However, we also demonstrate the practicality of our algorithm by applying it to a real million-scale web graph. On the basis of the result, we reveal the fractality of such a large-scale real graph for the first time.
A. Setup
Environment: Experiments were conducted on a Linux server with an Intel Xeon X5650 (2.67 GHz) and 96 GB of main memory. Algorithms were implemented in C++ and compiled with gcc 4.8.4 using the -O3 option.

Fig. 2. Average approximation ratio to the theoretical solutions for various k and α (legend: (3, 0, 6)-SHM, (3, 3, 7)-flower, (2, 0, 8)-SHM).
Algorithms: For comparison, we used a naive algorithm named greedy coloring (GC) and three advanced and popular algorithms, named maximum excluded mass burning (MEMB), minimal value burning (MVB), and compact box burning (CBB). GC, MEMB, and CBB were introduced in [26], and MVB in [24].
Network Models: We used two network models with ground-truth fractality: the (u, v)-flower [23] and the Song-Havlin-Makse (SHM) [28] model. These models have power-law degree distributions, the representative characteristic of complex networks. Both models can be either fractal or non-fractal, depending on the structural parameter values. We refer to them as the (u, v, g)-flower and the (c, e, g)-SHM model to indicate the parameter settings. The common parameter g (g = 1, 2, 3, . . .) determines the network size n: n = ((w − 2)/(w − 1))w^g + w/(w − 1), where w ≡ u + v, for the (u, v, g)-flower, and n = (2c + 1)^g n_0 for the (c, e, g)-SHM model. In addition to the flower and SHM models, we considered the Barabási-Albert (BA) network model [3] as one of the most famous models of complex networks. The BA model is not fractal [27]. We refer to this model as (c, t)-BA, where c is the number of edges that a new node brings and t sets the network size as n = 125 × 2^t.
Fractality Decision Procedure: After running the box-covering algorithms, we determined whether the obtained b(ℓ) indicates fractality or not. This was done by fitting the b(ℓ) curve with a power-law function (i.e., fractal) and an exponential function (i.e., non-fractal) using the optimize.leastsq function in the SciPy package for Python. We used the parameters estimated by fitting the curves to linearized models as the initial values for the nonlinear fitting. The key quantity was the ratio between the residual error of the fit to a power-law function and that of the fit to an exponential function, denoted by r_fit. If −log10 r_fit is positive (i.e., r_fit < 1), the network was deemed fractal. Otherwise, it was deemed non-fractal. This procedure of fitting and comparison follows that used in [29].
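A minimal version of this fitting-and-comparison step might look as follows (a sketch using scipy.optimize.curve_fit rather than the exact leastsq setup of the paper; the initial guesses are ad hoc).

```python
import numpy as np
from scipy.optimize import curve_fit

def fractality_score(ell, b):
    """Return -log10(r_fit); positive values indicate a better power-law fit (fractal)."""
    ell, b = np.asarray(ell, dtype=float), np.asarray(b, dtype=float)
    power = lambda x, c, d: c * x ** (-d)
    expo = lambda x, c, a: c * np.exp(-a * x)
    p_opt, _ = curve_fit(power, ell, b, p0=(b[0], 1.0), maxfev=10000)
    e_opt, _ = curve_fit(expo, ell, b, p0=(b[0], 0.1), maxfev=10000)
    r_power = np.sum((b - power(ell, *p_opt)) ** 2)   # residual of the power-law fit
    r_exp = np.sum((b - expo(ell, *e_opt)) ** 2)      # residual of the exponential fit
    return -np.log10(r_power / r_exp)
```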
B. Parameter Settings

First of all, we have to decide the parameter values of our algorithm: ε (error tolerance), k (sketch size), and α (exact neighborhood switch threshold). In principle, the accuracy of the results as well as the running time increases with k and α, and decreases with ε. As we discussed in Section V, we fixed ε = 0. To choose k and α, we plotted the average approximation ratio of our results to the theoretical solutions for several fractal network models as a function of k and α in Figure 2. The average approximation ratio is defined by ρ ≡ ⟨b_sketch(ℓ)/b_theory(ℓ)⟩_ℓ, where ⟨·⟩_ℓ is the average over ℓ. To compute b_sketch(ℓ), we executed the algorithm ten times and took the average of the resulting b(ℓ) over the ten runs.
In the left panel of Figure 2, we varied 2^4 ≤ k ≤ 2^10 while fixing α = 1. The ρ values were affected only slightly by k for the SHM models and tended to decrease with k for the flower network. On the basis of the results, we decided to use k = 2^7 throughout the following experiments. In the right panel of Figure 2, we varied 2^{−3} ≤ α ≤ 2^3 while fixing k = 2^7. The ρ values were almost constant regardless of the α values for all of the three networks considered. Therefore, taking into account the running time, we decided to use α = 1 throughout the following experiments. It is worth noting that Figure 2 also demonstrates the high accuracy and robustness of our algorithm over a broad range of parameter values.
C. Accuracy and Scalability
Table II summarizes the main results of this paper and shows
the comparison of our algorithm (Sketch) with other preceding
algorithms for fractal and non-fractal network models with
various sizes. We evaluated the performance of algorithms by
two measures. The first was the accuracy given by − log10 rfit
(Section VI-A). If this measure took a positive (negative) value
for a fractal (non-fractal) network, the algorithm correctly
distinguished the fractality of the network. The second was
computation time in seconds.
Discrimination Ability: As we can see in Table II, the sketch algorithm perfectly distinguishes between the fractal and non-fractal networks, as the other algorithms do (except for CBB on the (3, 4, 5)-flower). The proposed algorithm shows its advantage in computation time: it is generally faster than the other algorithms and is able to handle large networks on which the other algorithms do not terminate. Although MEMB is faster than Sketch for some relatively small network models, this result is expected because the actual neighborhood sets are not significantly larger than the sketch sizes in these networks. In summary, (i) the sketch algorithm correctly detected the fractality of network models with around ten times smaller computation time than the fastest previous algorithm. In addition, (ii) the algorithm was able to deal with networks with millions of nodes within acceptable computation time (under one day), whereas the other algorithms could not in our machine environment.
Time and Memory Consumption: The proposed algorithm is scalable not only in computation time but also in memory usage. In Figure 3, the computation time (seconds) and memory usage (KB) of the five algorithms are plotted as functions of the number of vertices. We use (2, 2, g)-flower (3 ≤ g ≤ 11) and (2, t)-BA (0 ≤ t ≤ 15) networks as examples of a fractal and a non-fractal network, respectively.
TABLE II
RUNNING TIME IN SECONDS (Time) AND THE RELATIVE ERROR RATIO OF A POWER-LAW FUNCTION, −log10 r_fit (Fit). DNF MEANS THAT IT DID NOT FINISH IN ONE DAY OR RAN OUT OF MEMORY.

Columns: Graph Model, |V|, |E|, then (Time, Fit) for each of Sketch, MEMB [26], GC [26], MVB [24], and CBB [26].

Networks with ground-truth fractality (“Fit” values are expected to be positive.)
(2,
(2,
(2,
(2,
(2,
(2,
(2,
(2,
(2,
(3,
(3,
(3,
(3,
(2,
(2,
(2,
(3,
2,
2,
2,
2,
3,
3,
3,
4,
4,
3,
3,
4,
4,
0,
0,
0,
0,
4)-flower
7)-flower
10)-flower
11)-flower
6)-flower
7)-flower
8)-flower
6)-flower
7)-flower
6)-flower
7)-flower
5)-flower
7)-flower
6)-SHM
7)-SHM
8)-SHM
6)-SHM
172
10,924
699,052
2,796,204
11,720
58,595
292,970
37,326
223,950
37,326
223,950
14,007
686,287
12,501
62,501
312,501
67,229
256
16,384
1,048,576
4,194,304
15,625
78,125
390,625
46,656
279,936
46,656
279,936
16,807
823,543
12,500
62,500
312,500
67,228
0
15
8,628
62,138
26
286
2,913
138
1,526
148
1,779
34
8,873
33
224
2,728
207
0.8
2.5
3.5
4.0
1.2
1.1
1.0
1.1
1.0
1.1
1.2
0.7
0.8
1.2
1.2
1.1
1.0
0
10
DNF
DNF
14
377
DNF
121
DNF
101
DNF
16
DNF
8
206
DNF
190
1.0
3.4
—
—
1.1
1.0
—
1.0
—
1.3
—
0.9
—
1.1
1.1
—
0.9
0
228
DNF
DNF
146
8,538
DNF
2,422
DNF
10,751
DNF
560
DNF
872
48,116
DNF
21,726
28.7
27.7
—
—
0.1
0.1
—
1.7
—
2.1
—
0.1
—
1.1
1.1
—
0.9
199
DNF
DNF
DNF
DNF
DNF
DNF
DNF
DNF
DNF
DNF
DNF
DNF
32
1,126
DNF
628
1.0
—
—
—
—
—
—
—
—
—
—
—
—
1.1
1.1
—
0.9
0
122
DNF
DNF
5,593
DNF
DNF
2,559
DNF
1,284
61,562
3,380
DNF
325
6,579
DNF
4,623
28.0
27.4
—
—
0.5
—
—
0.6
—
1.4
1.5
-0.4
—
0.7
0.9
—
0.9
-5.4
-6.2
-7.0
-4.8
-6.0
—
-1.8
-1.8
-1.9
-0.4
-0.4
-0.3
-0.3
-0.6
-0.6
-0.6
-0.6
—
—
364
3,610
DNF
DNF
DNF
DNF
DNF
DNF
DNF
126
8,615
25
2,070
54
DNF
DNF
DNF
DNF
DNF
-2.2
-2.6
—
—
—
—
—
—
—
-3.5
-4.9
-2.6
-4.2
-0.5
—
—
—
—
—
21,833
DNF
DNF
1,862
61,953
DNF
3,781
DNF
DNF
7,129
DNF
1,224
DNF
0
404
DNF
DNF
DNF
DNF
-2.6
—
—
-3.3
-4.4
—
-1.9
—
—
-2.5
—
-2.9
—
-0.3
-0.1
—
—
—
—
Networks with ground-truth non-fractality (“Fit” values are expected to be negative.)
466
1,774
20
123
4,195
23
223
1,678
31
390
12
210
0
1
17
377
6,474
36,125
-2.9
-3.8
-4.6
-3.0
-4.7
-6.0
-1.0
-0.8
-0.7
-3.5
-4.9
-2.6
-4.1
-0.9
-2.7
-1.3
-1.5
-1.4
-1.4
Fig. 3. Scalability of computation time (top) and memory usage (bottom) for (2, 2, g)-flower (left) and (2, t)-BA (right) networks.
197
1,641
DNF
16
280
DNF
20
548
DNF
32
1,397
8
580
0
0
76
3,535
DNF
DNF
-2.9
-3.8
—
-2.7
-3.2
—
-0.6
-0.7
—
-3.5
-4.9
-2.6
-4.2
-0.9
-2.0
-1.3
-1.5
—
—
105
104
103
102
101
100 0
10
286
2,999
38,278
44
826
DNF
53
1,598
67,866
433
17,703
97
9,504
0
2
154
12,457
DNF
DNF
theory
0.20
0.15
0.10
CV
59,049
177,147
531,441
16,384
65,536
1,048,576
15,625
78,125
390,625
31,104
186,624
16,384
131,072
497
3,997
31,997
255,997
2,047,997
8,191,997
104
10-2
memory (KB)
29,526
88,575
265,722
10,924
43,692
699,052
11,720
58,595
292,970
24,885
149,301
14,045
112,349
250
2,000
16,000
128,000
1,024,000
4,096,000
b ( `)
2, 10)-flower
2, 11)-flower
2, 12)-flower
3, 7)-flower
3, 8)-flower
3, 9)-flower
4, 6)-flower
4, 7)-flower
4, 8)-flower
1, 6)-SHM
1, 7)-SHM
1, 5)-SHM
1, 6)-SHM
1)-BA
4)-BA
7)-BA
10)-BA
13)-BA
15)-BA
memory (KB)
time (sec)
(1,
(1,
(1,
(1,
(1,
(1,
(1,
(1,
(1,
(2,
(2,
(3,
(3,
(2,
(2,
(2,
(2,
(2,
(2,
Fig. 4. Results of different runs for the (3, 3, 7)-flower. (Left) b(ℓ) and (right) the CV of b(ℓ) as a function of ℓ.
The symbols corresponding to an algorithm are not shown if the algorithm did not stop within 24 hours or could not execute owing to memory shortage. The performance of the proposed algorithm is comparable to or worse than that of some other algorithms when the network is relatively small (i.e., n < 10^4). However, the algorithm is orders of magnitude faster than the other algorithms for large networks. Also, it achieves such high speed with incomparably smaller memory usage than MEMB, the second fastest algorithm.
Robustness over Randomness: The sketch algorithm accurately recovers the theoretically predicted b(ℓ) for fractal network
models, and the results are robust over different execution runs.
The left panel of Figure 4 shows b(`) of ten different runs
of the proposed algorithm on (3, 3, 7)-flower. The b(`) values
follow well the theoretical solution, which is indicated by the
solid line. As we can clearly observe, the fluctuation in the b(`)
values due to the randomness is very small. The consistency
over different runs is captured by the CV of b(`) (i.e., the ratio
of the standard deviation of b(`) to its average over ten runs)
as a function of ` (right panel of Figure 4). The CV values
tend to increase with `. This tendency can be explained by
the following two factors. First, the b(`) value takes a positive
integer value and monotonically decreases with ` by definition.
Thus, even a change of ±1 in b(`) might cause a large CV
value if ` is large. Second, our algorithm intrinsically fluctuates
more when ` is larger. This could be because the sizes of the
solutions become smaller for larger `, and hence the algorithm
gets a little more sensitive to estimation errors. Nevertheless,
it should be noted that the variance of our algorithm was
considerably small even for large ` (i.e., CV ∼ 0.19 at most).
This magnitude of variance would have little impact on the
estimation of fractality.
D. Application to Real Large Network
In closing this section, we applied the sketch algorithm to a
large-scale real graph to show the scalability of the proposed
algorithm with an empirical instance. The results also gave
us some insight on the fractality of large-scale real-world
networks, which is beyond the reach of previous algorithms.
As a representative instance of a real-world large graph, we
considered the in-2004 network [6], [4], which is a crawled
web graph of 1, 382, 908 vertices and 16, 917, 053 edges. We
discarded the direction of the edges (i.e., hyperlinks) to make
the network undirected. The algorithm took 11.7 hours in total.
The resulting b(ℓ) of the sketch algorithm and the fitting curves are shown in Figure 5. We omitted the three points with the smallest ℓ values from the fitting because empirical networks would not show perfect fractality, contrary to well-designed network models. A large part of the points fall on the line of the fitted power-law function, and indeed, our fractality decision procedure yielded −log10 r_fit = 0.79, which suggests the fractality of the in-2004 network. It is worth mentioning that the fractality of this network was unveiled for the first time thanks to our algorithm.

Fig. 5. Results for a real web graph (fitted curves ∝ ℓ^{−3.4} and ∝ e^{−0.3ℓ}).
VII. CONCLUSIONS

Fractality is an interesting property that appears in some classes of real networks. In the present study, we designed a new box-covering algorithm, which is useful for analyzing the fractality of large-scale networks. In theory, we have shown desirable guarantees on scalability and solution quality. In the experiments, we confirmed that the algorithm's outputs are sufficiently accurate and that it can handle large networks with millions of vertices and edges. We hope that our method enables further exploration of graph fractality and its applications such as graph coarsening.
Repeatability: Our implementation of the proposed and previous box-cover algorithms is available at http://git.io/fractality.
It also contains the generators of the synthetic network models,
and thus the results in this paper can be perfectly replicated.
We hope that our public software will enable further exploration of graph fractality and its applications.
Acknowledgment: This work was supported by JSPS KAKENHI (No. 15H06828), JST, ERATO, Kawarabayashi Large
Graph Project, and JST, PRESTO. Web graph data was downloaded from http://law.di.unimi.it/datasets.php. T.T. thanks
K. Takemoto for valuable discussions.
R EFERENCES
[1] T. Akiba, Y. Iwata, and Y. Yoshida. Fast exact shortest-path distance
queries on large networks by pruned landmark labeling. In SIGMOD,
page 349, 2013.
[2] T. Akiba, K. Nakamura, and T. Takaguchi. Fractality of massive graphs:
Scalable analysis with sketch-based box-covering algorithm. In ICDM,
2016. to appear.
[3] A.-L. Barabási and R. Albert. Emergence of Scaling in Random
Networks. Science, 286(5439):509–512, 1999.
[4] P. Boldi, M. Rosa, M. Santini, and S. Vigna. Layered label propagation: A multiresolution coordinate-free ordering for compressing social
networks. In WWW, pages 587–596, 2011.
[5] P. Boldi, M. Rosa, and S. Vigna. HyperANF: Approximating the
neighbourhood function of very large graphs on a budget. In WWW,
pages 625–634, 2011.
[6] P. Boldi and S. Vigna. The WebGraph framework I: Compression
techniques. In WWW, pages 595–601, 2004.
[7] G. Caldarelli. Scale-Free Networks. Oxford University Press, 2007.
[8] S. Carmi, S. Havlin, S. Kirkpatrick, Y. Shavitt, and E. Shir. A model
of Internet topology using k-shell decomposition. Proc. Natl. Acad. Sci.
USA, 104(27):11150–11154, 2007.
[9] E. Cohen. Size-estimation framework with applications to transitive
closure and reachability. J. Comput. Syst. Sci., 55(3):441–453, 1997.
[10] E. Cohen. All-distances sketches, revisited: HIP estimators for massive
graphs analysis. IEEE TKDE, 27(9):2320–2334, 2015.
[11] E. Cohen, D. Delling, T. Pajor, and R. F. Werneck. Sketch-based
influence maximization and computation: Scaling up with guarantees.
In CIKM, pages 629–638, 2014.
[12] K. Falconer. Fractal Geometry: Mathematical Foundations and Applications, Second Edition. Wiley-Blackwell, 2003.
[13] L. K. Gallos, H. A. Makse, and M. Sigman. A small world of weak ties
provides optimal global integration of self-similar modules in functional
brain networks. Proc. Natl. Acad. Sci. USA, 109(8):2825–2830, 2012.
[14] L. K. Gallos, C. Song, and H. A. Makse. A review of fractality and
self-similarity in complex networks. Physica A, 386:686–691, 2007.
[15] K.-I. Goh, G. Salvi, B. Kahng, and D. Kim. Skeleton and Fractal Scaling
in Complex Networks. Phys. Rev. Lett., 96(1):018701, 2006.
[16] T. Hasegawa and K. Nemoto. Hierarchical scale-free network is fragile
against random failure. Phys. Rev. E, 88(6):062807, 2013.
[17] D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of
influence through a social network. In KDD, pages 137–146, 2003.
[18] J. S. Kim, K.-I. Goh, B. Kahng, and D. Kim. Fractality and selfsimilarity in scale-free networks. New J. Phys., 9(6):177, 2007.
[19] J. Kleinberg. The small-world phenomenon. In STOC, pages 163–170,
2000.
[20] J. Leskovec, J. Kleinberg, and C. Faloutsos. Graphs over Time :
Densification Laws, Shrinking Diameters and Possible Explanations. In
KDD, pages 177–187, 2005.
[21] M. E. J. Newman. Networks: an Introduction. Oxford University Press,
2010.
[22] C. R. Palmer, P. B. Gibbons, and C. Faloutsos. ANF: A fast and scalable
tool for data mining in massive graphs. In KDD, pages 81–90, 2002.
[23] H. D. Rozenfeld, S. Havlin, and D. Ben-Avraham. Fractal and transfractal recursive scale-free nets. New J. Phys., 9(6):175, 2007.
[24] C. M. Schneider, T. A. Kesselring, J. S. Andrade, and H. J. Herrmann.
Box-covering algorithm for fractal dimension of complex networks.
Phys. Rev. E, 86(1):016707, 2012.
[25] M. Á. Serrano, D. Krioukov, and M. Boguñá. Percolation in Self-Similar
Networks. Phys. Rev. Lett., 106(4):048701, 2011.
[26] C. Song, L. K. Gallos, S. Havlin, and H. A. Makse. How to calculate
the fractal dimension of a complex network: the box covering algorithm.
J. Stat. Mech., 2007(03):P03006, 2007.
[27] C. Song, S. Havlin, and H. A. Makse. Self-similarity of complex
networks. Nature, 433(7024):392–395, 2005.
[28] C. Song, S. Havlin, and H. A. Makse. Origins of fractality in the growth
of complex networks. Nat. Phys., 2(4):275–281, 2006.
[29] K. Takemoto. Metabolic networks are almost nonfractal: A comprehensive evaluation. Phys. Rev. E, 90(2):022802, 2014.
Gibbs Sampling with
Low-Power Spiking Digital Neurons
Srinjoy Das† + , Bruno Umbria Pedroni⊥ , Paul Merolla‡ , John Arthur‡ , Andrew S. Cassidy‡ , Bryan L. Jackson‡
Dharmendra Modha‡ , Gert Cauwenberghs⊥ + , Ken Kreutz-Delgado† +
Email: {s2das, bpedroni, kreutz, gert}@ucsd.edu, {pameroll, arthurjo, andrewca, bryanlj, dmodha}@us.ibm.com
arXiv:1503.07793v2 [] 27 Mar 2015

† ECE, ⊥ BioEng., and + Inst. for Neural Computation, UC San Diego, La Jolla, CA 92093
‡ IBM Research Almaden, San Jose, CA 95120
Abstract—Restricted Boltzmann Machines and Deep Belief
Networks have been successfully used in a wide variety of applications including image classification and speech recognition.
Inference and learning in these algorithms uses a Markov Chain
Monte Carlo procedure called Gibbs sampling. A sigmoidal
function forms the kernel of this sampler which can be realized
from the firing statistics of noisy integrate-and-fire neurons on
a neuromorphic VLSI substrate. This paper demonstrates such
an implementation on an array of digital spiking neurons with
stochastic leak and threshold properties for inference tasks and
presents some key performance metrics for such a hardware-based sampler in both the generative and discriminative contexts.
I. INTRODUCTION AND BACKGROUND
Restricted Boltzmann Machines (RBMs) and Deep Belief
Networks (DBNs) (Fig. 1) are stochastic neural networks
that have been used for a wide variety of generative and
discriminative tasks like image classification, sequence completion, motion synthesis and speech recognition. An RBM is
a stochastic neural network consisting of two symmetrically
interconnected layers composed of neuron-like units — a set
of visible units v and a set of hidden units h. For an RBM
there are no interconnections within a layer. Both inference
and learning in this model use a Markov Chain Monte Carlo
(MCMC) procedure called Gibbs Sampling [1] where each
neuron is sampled based on its total input from other connected
neurons with a sigmoidal activation function. DBNs consist of
one visible layer and multiple layers of hidden units. Learning
in a DBN can be done in a layer-by-layer manner on each
RBM with this Gibbs sampling procedure [2]. Following this
approach the values of the neurons in each layer of a DBN can
be inferred by using the same stochastic sampling procedure.
Neuromorphic computing is an area of Very Large Scale
Integrated Circuit (VLSI) design inspired by the architecture
and function of the brain. Such systems which have been
realized with both analog [3] and digital [4] circuit elements
consist of massively parallel arrays of interconnected spiking
neurons modeled on the basis of neurons and synapses present
in biological neural substrates. In contrast to the traditional von
Neumann computing paradigm, memory and computation are
tightly coupled in this architecture. The principal benefits are
extremely energy efficient computation by spiking neurons in
a highly concurrent fashion.
Fig. 1: (a) Restricted Boltzmann Machine with 4 visible and 3 hidden units. (b) Deep Belief Network with 3 hidden layers.
The majority of RBMs and DBNs described in the literature
currently operate on standard platforms like high performance
CPU (Central Processing Unit) or GPU (Graphical Processing
Unit) and are deployable on cloud and related infrastructures.
However, for ultra low-power, realtime realizations of these
algorithms, hardware substrates provided by neuromorphic
VLSI are naturally amenable to the use of sampling methods for probabilistic computation in the context of high-dimensional real-world data. In this paper we propose an
MCMC sampling scheme for RBMs and DBNs using the
stochastic leak and threshold properties of digital spiking
neurons on a neuromorphic VLSI substrate. Such a framework
has significant potential for enabling applications which will
benefit from realtime, energy-efficient realizations, such as the
Internet of Things and Brain Computer Interfaces.
II. INFERENCE ON SPIKING SUBSTRATES
The RBM captures a probabilistic generative model of the
input data based on the Boltzmann distribution as below [1]:
    p(v, h) = exp(−E(v, h)) / Σ_{v,h} exp(−E(v, h)),
    where E(v, h) = −v^T W h − b_v^T v − b_h^T h.        (1)
Here p denotes the Boltzmann probability distribution and
E is an energy function of v and h where v denotes the state
of the visible units which are driven by the input data and
h represents the state of the hidden units. W represents the
weight between v and h, and bv , bh represent the biases of
v and h respectively. A necessary and sufficient condition for
sampling from the Boltzmann distribution given by the above
equation is to sample each neuron with a sigmoid probability
law as a function of the activities of all other connected
neurons [5]. This is given below:

    P(x_i = 1 | x_j, j ≠ i) = 1 / (1 + e^{−(Σ_j w_ij x_j + b_i)}).        (2)
This rule forms the kernel of the Gibbs sampling MCMC
procedure for an RBM. Here wij is the weight between
neurons xi , xj and bi denotes the bias of neuron xi . This
equivalence between probability laws at the unit and ensemble
levels is exactly realizable for substrates where explicit synchronization is provided and all samples from the underlying
primitives representing neurons are collected in discrete steps.
This allows the RBM to do alternating parallel sampling to
generate statistics for the MCMC inference process.
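For reference, one alternating Gibbs sweep based on Eqn. (2) can be written in a few lines of NumPy (an illustrative sketch; W, bv, bh and the binary unit states are assumed to be given).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sweep(v, W, bv, bh, rng=None):
    """Sample h given v, then v given h: one step of block Gibbs sampling in an RBM."""
    rng = rng or np.random.default_rng()
    p_h = sigmoid(v @ W + bh)                    # P(h_j = 1 | v), Eqn. (2)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    p_v = sigmoid(h @ W.T + bv)                  # P(v_i = 1 | h)
    v_new = (rng.random(p_v.shape) < p_v).astype(float)
    return v_new, h
```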
The TrueNorth neurosynaptic processor [4] is an array of
4096 cores. Each core consists of a crossbar with 256 axons
and 256 neurons. Communication between neurons occurs
only with all-or-none spikes realized with digital circuitry.
In the interconnection network, the spikes are generated with
asynchronous digital logic, however all spikes are explicitly
aligned at a single clock edge for use in the next step of
computation. On this substrate, the stochastic and dynamical
properties of the neurons coupled with these discrete synchronization steps can be used to generate the sigmoid probability
law used for sampling from the Boltzmann distribution of
interest. A Gibbs sampler for inference in RBMs and DBNs
can thus be constructed on such a substrate for the conditions
outlined in Eqns. (1), (2).
III. DIGITAL GIBBS SAMPLER
TrueNorth [4] is composed of digital integrate-and-fire
neurons (I&F) with both stochastic and deterministic leak
and threshold properties. The dynamical equations for the
membrane potential Vj (t) for neuron j at time t in this case
are shown below [6]:
    V_j(t) = V_j(t − 1) + Σ_{i=0}^{N−1} x_i(t) s_i
    V_j(t) = V_j(t) − λ_j                                      (3)
    If V_j(t) ≥ α_j, SPIKE and set V_j(t) = R_j.

Here λ_j and α_j represent the leak and threshold values, respectively, of neuron j, which can be stochastic or deterministic, x_i represents the input from N other neurons, s_i represents the synaptic weight between neurons i and j, and R_j represents the reset value for neuron j.
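In code, one time step of Eqn. (3) amounts to the following (a behavioral sketch, not the TrueNorth circuit; stochastic leak and threshold values would be passed in as pre-sampled arguments).

```python
def neuron_step(V, inputs, weights, leak, threshold, reset=0):
    """One discrete time step of the digital I&F neuron of Eqn. (3): integrate, leak, threshold."""
    V += sum(x * s for x, s in zip(inputs, weights))   # synaptic integration
    V -= leak                                          # (possibly stochastic) leak
    if V >= threshold:                                 # (possibly stochastic) threshold
        return reset, 1                                # spike and reset membrane potential
    return V, 0
```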
Using similar I&F neurons with stochastic leak and threshold an algorithm for realizing the sigmoidal sampling rule
(Eqn. 2) to perform MCMC sampling in RBMs is given below:
repeat
    V ← V + leak · B(0.5), where B is Bernoulli
    Vt_rand ← Vt + floor(U(0, 2^TM − 1))
    spiked(V ≥ Vt_rand) = 1
until Tw steps

Each neuron in the RBM is mapped to one digital neuron in this sampling scheme. The algorithm uses four parameters: the number of discrete time steps Tw for sampling, the fixed threshold value Vt, the number of bits TM allowed for the stochastic threshold variation (a uniformly distributed discrete random variable), and the value of the leak. For the sampled neuron, Vt_rand and V denote its threshold and membrane potential, respectively. After integration, the sampled value of a neuron is set to 1 if it spikes in any of the allowed Tw sampling intervals. Note that TM and leak are both positive. This method uses the underlying substrate's dynamical (integration) and stochastic properties, along with spike synchronization at fixed time steps, to generate a sigmoid probability curve. Random threshold and leak values are realized with on-chip pseudo random number generators (PRNGs).

Fig. 2: Noisy and smooth realizations of the digital sigmoid using 1000 samples per v-value.
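A behavioral Python model of this sampling rule (ours; parameter names follow the text) is given below; sweeping v_input and averaging over many trials reproduces sigmoid-like activation curves of the kind shown in Fig. 2.

```python
import random

def digital_sample(v_input, Tw, Vt, TM, leak, rng=random):
    """Return 1 if the neuron spikes within Tw steps, given integrated synaptic input v_input."""
    V = v_input                                        # membrane potential after integration
    for _ in range(Tw):
        V += leak * (rng.random() < 0.5)               # stochastic leak, Bernoulli(0.5)
        Vt_rand = Vt + rng.randrange(2 ** TM)          # stochastic threshold in [Vt, Vt + 2^TM - 1]
        if V >= Vt_rand:
            return 1
    return 0

def activation_probability(v_input, params, trials=1000):
    """Empirical spiking probability; approximates a scaled sigmoid (cf. Eqn. (4))."""
    return sum(digital_sample(v_input, *params) for _ in range(trials)) / trials
```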
IV. WEIGHT AND BIAS SCALING
On a substrate of digital neurons and synapses, weight
and bias values have to be quantized in accordance with
the finite precision available in hardware. To increase the
dynamic range, a multiplicative factor (scale > 1) is applied to
the weights obtained after offline training of the RBM/DBN
and before they are mapped to the hardware for applicable
inference tasks (classification, pattern completion and others).
Therefore the four sigmoid parameters (T w, V t, T M and
leak) should be chosen so that for any neuron i there is a
smooth mapping between the values of vi = Σj wij xj + bi
versus the activation probability P (vi ) and a majority of the
inputs are not mapped to the portion of the curve which
saturates to 0 or 1. This is illustrated in Fig. (2), where
parameter values of (4,0,3,1) provide an insufficient range
for smooth realization of the function. In contrast parameter
values of (4,100,8,90) provide a larger dynamic range, where
the original weight and bias values have been multiplied by a
scale factor (in this case scale = 50) for the mapping. This is
equivalent to scaling the ideal sigmoid by this same factor:
    P_scaled(v) = 1 / (1 + e^{−v/scale}).        (4)
V. EFFECT OF STOCHASTIC PARAMETERS
A heuristic motivation for the sigmoid realization using the
stochastic leak and threshold over multiple sampling intervals
is provided in Figs. (3a, 3b). In the first case Fig. (3a) only
the stochastic leak is initially applied to sample the inputs
which causes the transfer curve to split into 3 regions: input
< 10 where the neuron will never spike as the membrane
potential is always below threshold, input > 10 and < 100
where the spiking probability is 0.5 (since the leak of 90 can occur with probability 0.5), and input > 100, where the neuron is guaranteed to spike since the threshold is reached. As the stochastic threshold is applied, the curve becomes piecewise-linear, which approximates a sigmoid over multiple sampling intervals Tw. Similarly, when only the stochastic threshold is initially applied (Fig. 3b), the probability of activation is proportional to the difference between the input and the fixed threshold. As the stochastic leak is applied over multiple Tw sampling intervals, the curve also attains a sigmoidal shape.

Fig. 3: Noisy threshold and leak for sigmoid realization. (a) Effect of noisy threshold. (b) Effect of noisy leak.
Index    (Tw, Vt, TM, leak)    scale
P1       (1, -130, 8, 0)       50
P2       (1, -80, 8, 102)      50
P3       (1, -20, 8, 200)      75
P4       (1, -100, 9, 300)     120
P5       (16, 50, 9, 15)       30
P6       (16, 100, 10, 30)     50
P7       (16, 633, 8, 90)      100
TABLE I: Digital neuron parameters
VI. PERFORMANCE METRICS
Different realizations of the sigmoidal approximation with
varying complexity are possible for performing inference in
RBMs/DBNs using Gibbs sampling. This can be studied in
the context of both discriminative and generative tasks.
Fig. 4: Test data
Fig. 5: Test data with salt noise
A. CLASSIFICATION PERFORMANCE
For classification we study the performance of the digital
sampler on the MNIST dataset which consists of 28x28
grayscale images of handwritten digits [0-9] (Fig. 4) and
their corresponding labels. The RBM with 784 visible, 500
hidden and 10 label neurons is trained offline on such a set
of 5000 digits (training data). Classification performance is
then tested for this RBM with the same number of visible,
hidden and label units using the digital sampler with varying parameterizations of Tw, Vt, TM, and leak (refer to Table I) on a set of 1000 labeled digits (test data) which are similar to those seen by the RBM during training. Appropriate scale factors are applied to the weights and biases as outlined in Section IV before the digital sigmoid is used for sampling. The classification results are shown in Fig. 6. It is noticeable that there is no significant difference in classification performance irrespective of the sampler complexity or the scale factor used for the digital realization.
Fig. 6: Classification accuracy for test data without noise
B. GENERALIZATION PERFORMANCE
The classification performance of different realizations of
the digital sampler can also be studied with noisy versions
of the test data. Two types of noise are introduced in the
test dataset: salt noise which corrupts randomly chosen pixels
to white and salt and pepper noise where randomly chosen
pixels can be turned white or dark. In both cases the level of
noise (pixel corruption) is controllable with a noise f actor.
This type of data (refer Fig. 5) is very different from those
that were used for training the RBM/DBN, so this provides
a measure of the generalization performance of the sampler.
The classification results for an RBM with 500 hidden units
are shown in Fig. 7. The sampler resolution (scale factor)
has a significant effect on classification performance for noisy
versions of test data as compared to the original test data
itself. The samplers with the highest scale factor (P4, P7) have
significantly fewer errors for the noisy data sets.
Fig. 7: Classification accuracy for test data with noise
C. GENERATIVE MODEL PERFORMANCE
Inference performed on an RBM in hardware for generative
tasks like pattern completion depends on the quality of the
MCMC samples. That is, how closely these samples reflect the
learned probability distribution. This can be characterized by
the Kullback-Leibler (KL) divergence metric, which is a similarity measure between any two probability distributions. MCMC
sampling with the ideal and various digital realizations of the
sampler can be used to generate the probability distributions
and these can be compared versus the exact distribution given
by Eq. (1). Since it is difficult to calculate this distribution for
high-dimensional inputs like MNIST (784 visible neurons),
this is done for an RBM with 3 visible and 2 hidden neurons
with pre-defined weights and biases. The results are shown
in Table II. The KL-divergence increases for approximate
realizations of the digital sampler (leak = 0) indicating a lower
quality of the sampled generative model in this case.
KL divergence (1e+05 Gibbs iterations)    Trial-1    Trial-2     Trial-3
exact vs ideal sampler                    6.2e-05    6.06e-05    5.71e-05
exact vs digital(1,-130,8,0)              0.0218     0.1090      0.0259
exact vs digital(1,-80,8,102)             0.0091     0.0330      0.0083
TABLE II: KL divergence for various digital sampler realizations
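A hedged sketch of the comparison described above: it builds the empirical distribution over visible states from collected Gibbs samples and evaluates the KL divergence against the exact distribution of Eq. (1), which must be computed separately. The function names and the small-probability guard eps are our assumptions.

import numpy as np
from itertools import product

def empirical_visible_distribution(samples, n_visible=3):
    # Histogram of visible states (tuples of 0/1) collected from Gibbs sampling.
    states = list(product([0, 1], repeat=n_visible))
    counts = {s: 0 for s in states}
    for s in samples:
        counts[tuple(int(x) for x in s)] += 1
    return np.array([counts[s] / float(len(samples)) for s in states])

def kl_divergence(p_exact, q_sampled, eps=1e-12):
    # KL(p_exact || q_sampled); eps guards against empty histogram bins.
    p = np.clip(np.asarray(p_exact, float), eps, None)
    q = np.clip(np.asarray(q_sampled, float), eps, None)
    return float(np.sum(p * np.log(p / q)))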
VII. TRUENORTH SIGMOID CHARACTERIZATION
We implemented the digital sigmoid algorithm discussed
here on TrueNorth as shown in Fig. 8 using supported parameter values. Deterministic spike inputs are driven on axons
d−K through dK (K=100) every Tw+2 (Tw=16) cycles and
are integrated on the connected neurons n−K through nK .
The synaptic weights are set between −100 and 100 in steps of 1 for these connections (indicated by dots on the crossbar). Each such neuron is configured with a stochastic threshold α realized by Vt = 50 and TM = 9, and with the leak value λ set to 0. A single neuron n_leak generates a stochastic leak of 0 or 1 for all the data neurons. This is multiplied by a factor of 15 by the weight on the connected axons and is the value of leak used in the algorithm. The number of sampling intervals Tw for a single application of the stimulus (spike input) is set to 16. Spiking probabilities obtained with TrueNorth neurons are compared versus the ideal sampler in Fig. 9(a). A snapshot of the firing pattern of the characterized data neurons n−K through nK is shown in Fig. 9(b).
Fig. 8: (a) TrueNorth processor. (b) Sigmoid generation circuit with TrueNorth neurons and synapses.
Fig. 9: (a) Ideal sigmoid vs. TrueNorth realization. (b) Firing pattern showing how synaptic weight values ranging from −100 to 100 are used with the respective connected axons to activate data neurons 0 through 200.
VIII. CONCLUSIONS
We have demonstrated that approximate realizations of a
Gibbs sampler on a digital neuromorphic substrate are feasible
for classification with low system latency (Tw = 1). However,
for generative tasks it may be necessary to use a form of the
sampler that replicates the sigmoid more accurately via the
use of a nonzero stochastic leak. Our proposed method of
realization of the sigmoidal function with low-power, digital
integrate-and-fire neurons is well suited for Gibbs sampling in
RBMs and DBNs with parallel arrays of visible and hidden
neurons in contrast to hardware implementations of sigmoids
on standard Von Neumann computing platforms [7], [8].
Another advantage of our proposed implementation is that
the required noise generation mechanism uses on-chip PRNG
circuits which are easier to realize as compared to sampling
from I&F neurons with Gaussian noise [9]. Given these advantages, such a sampler is well suited for a wide range of inference tasks in practical low-power, real-time applications, which is the subject of ongoing investigation.
ACKNOWLEDGMENTS
The authors would like to thank the team members of
the Brain-Inspired Computing group at IBM Almaden for
supporting this project.
R EFERENCES
[1] S. Haykin, Neural Networks and Learning Machines (3rd Edition).
Prentice Hall, 2008.
[2] G. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for
deep belief nets,” Neural computation, vol. 18, no. 7, pp. 1527–1554,
2006.
[3] G. Indiveri, B. Linares-Barranco, T. J. Hamilton, A. Van Schaik,
R. Etienne-Cummings, T. Delbruck, S.-C. Liu, P. Dudek, P. Häfliger,
S. Renaud et al., “Neuromorphic silicon neuron circuits,” Frontiers in
neuroscience, vol. 5, 2011.
[4] P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada,
F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura et al., “A
million spiking-neuron integrated circuit with a scalable communication
network and interface,” Science, vol. 345, no. 6197, pp. 668–673, 2014.
[5] R. Rojas, Neural networks: a systematic introduction. Springer, 1996.
[6] A. S. Cassidy, P. Merolla, J. V. Arthur, S. K. Esser, B. Jackson, R. Alvarez-Icaza, P. Datta, J. Sawada, T. M. Wong, V. Feldman, A. Amir, D. Ben-Dayan Rubin, E. McQuinn, W. P. Risk, and D. S. Modha, “Cognitive computing building block: A versatile and efficient digital neuron model for neurosynaptic cores,” in International Joint Conference on Neural Networks (IJCNN). IEEE, 2013.
[7] M. Tommiska, “Efficient digital implementation of the sigmoid function
for reprogrammable logic,” in Computers and Digital Techniques, IEE
Proceedings-, vol. 150, no. 6. IET, 2003, pp. 403–411.
[8] A. Tisan, S. Oniga, D. MIC, and A. Buchman, “Digital implementation of
the sigmoid function for fpga circuits,” ACTA TECHNICA NAPOCENSIS
Electronics and Telecommunications, vol. 50, no. 2, p. 6, 2009.
[9] E. Neftci, S. Das, B. Pedroni, K. Kreutz-Delgado, and G. Cauwenberghs,
“Event-driven contrastive divergence for spiking neuromorphic systems,”
Frontiers in Neuroscience, vol. 7, p. 272, 2013.
| 9 |
Mechanism Design via Dantzig-Wolfe
Decomposition
Salman Fadaei
⋆⋆ This work was done while the author was a graduate student in the Department of Informatics, TU München, Munich, Germany.
[email protected]
arXiv:1508.04250v2 [cs.GT] 15 Aug 2016
Abstract. In random allocation rules, typically first an optimal fractional point is calculated via solving a linear program. The calculated
point represents a fractional assignment of objects or more generally
packages of objects to agents. In order to implement an expected assignment, the mechanism designer must decompose the fractional point
into integer solutions, each satisfying underlying constraints. The resulting convex combination can then be viewed as a probability distribution
over feasible assignments out of which a random assignment can be sampled. This approach has been successfully employed in combinatorial
optimization as well as mechanism design with or without money.
In this paper, we show that both finding the optimal fractional point as
well as its decomposition into integer solutions can be done at once. We
propose an appropriate linear program which provides the desired solution. We show that the linear program can be solved via Dantzig-Wolfe
decomposition. Dantzig-Wolfe decomposition is a direct implementation
of the revised simplex method which is well known to be highly efficient in
practice. We also show how to use the Benders decomposition as an alternative method to solve the problem. The proposed method can also find a decomposition into integer solutions when the fractional point is already available, perhaps as the outcome of an algorithm other than linear programming. The resulting convex decomposition in this case is tight in terms of the number of integer points according to Carathéodory's theorem.
Keywords: Mechanism design, Random allocation, Convex decomposition, Dantzig-Wolfe decomposition, Benders decomposition
1 Introduction
The technique of finding a fractional solution and decomposing it into polynomially many integer points has been successfully employed in many problems. For a
usage of the technique in combinatorial optimization, for instance, see Carr and
Vempala [4]. In mechanism design with bidders who have quasi-linear valuations, the framework presented by Lavi and Swamy for designing truthful and
approximate mechanisms strongly relies on this technique [13,12]. Perhaps the
best connection between linear programming and algorithmic mechanism design
has been established by this framework. Finally, for applications in mechanism
design without money see e.g. Budish et al. [3], and Nguyen et al. [14].
Typically in such applications, first a fractional optimal point is calculated,
and in a second step, the point is represented as a convex combination of integer
points usually using the ellipsoid method. A subroutine or an approximation
algorithm which returns an integer point with respect to any cost vector is employed to construct the separation oracle for the ellipsoid method. A separation
oracle, in the ellipsoid method, states if a given point is feasible, or in case it
is not feasible, the oracle returns a violated constraint. A natural question that
arises here is to ask if the two steps, optimization as well as convex decomposition, can be done at once without employing the ellipsoid method?
1.1 Results and Techniques
We propose an appropriate linear program for finding an optimal fractional point
and its decomposition into integer points. We show how to use the DantzigWolfe decomposition which is based on the revised simplex to solve the linear
program. More specifically, we show that finding a convex combination of integer
points whose value is maximum is indeed equivalent to solving a linear program
using the Dantzig-Wolfe decomposition. The proposed method will improve the
connection between linear programming and algorithmic mechanism design.
Dantzig-Wolfe (DW) decomposition comprises a master problem and a subproblem. DW decomposition proceeds in iterations by solving the two problems
in each iteration until the subproblem is not able to find any point which can
contribute to the objective value of the master problem [2]. Since we are interested in integer points, we run a subroutine or an approximation algorithm
which returns integer solutions as the subproblem. DW decomposition has been
previously used for optimizing over a discrete set using branch and cut to obtain integer solutions [6]. However, here we assume the existence of a subroutine
which returns an approximate integer solution of good quality and prove that
the algorithm ends in optimality. To the best of our knowledge, this usage of
DW decomposition in mechanism design has not been introduced before.
Dantzig-Wolfe decomposition is a variant of the revised simplex algorithm.
A computational evaluation of the Dantzig-Wolfe decomposition has been done
in [15]. The study shows DW decomposition has a high performance, especially
when a reasonable block structure can be found.
The convex combination calculated by our method is tight in terms of the
number of integer solutions according to the Carathéodory’s theorem provided
that the number of constraints representing the underlying polytope is less than
the dimension of the polytope. We explain this fact further in the following.
Theorem 1 (Carathéodory). Given a polytope in Rn , any point in the polytope is a convex combination of at most n + 1 vertices of the polytope.
By standard polyhedra theory, the number of nonzero variables in an extreme
point is upper bounded by the number of constraints in the underlying linear
program (see e.g. [5]). The proposed algorithm produces a convex combination
of at most m + 1 integer points, where m is the number of constraints, and
thus the solution is tight in this sense. It is very common that the number
of constraints m is less than the number of variables n. For example, in the
relaxations of combinatorial auctions, this is usually the case because the bidders
may obtain any package of items (for which one decision variable is needed) and
there are exponentially many packages of items. Thus, given that m < n, the
number of integer solutions will be at most n + 1 which is tight according to the
Carathéodory’s theorem.
Sometimes, a fractional point - not necessarily an optimum - is calculated via
other methods rather than linear programming. For example, a greedy algorithm
might be used to find a fractional point. This is because truthfulness 1 can be
guaranteed via the greedy algorithm, but directly solving the linear program
cannot assure truthfulness (see e.g. [8]). Our method can find a decomposition
into integer solutions for such readily-present fractional points.
We also show how to apply the Benders decomposition to the problem. Benders decomposition is known to be the dual of the Dantzig-Wolfe decomposition
technique [2]. We observe that sometimes working with the Benders decomposition has advantages over the DW decomposition. We discuss these advantages
further in a separate section of the paper.
1.2 Related Literature
Prior to this work, there have been other attempts to replace the ellipsoid in
finding convex decompositions. An alternative method is given by Kraft et al.
[11]. The main component of that work is an algorithm which is based on a simple
geometric idea that computes a convex combination within an arbitrarily small
distance ǫ > 0 to the fractional point. Our proposed method has advantages
over the result in [11]. First, the size of the convex decomposition (number of
integer solutions) is strictly smaller than the size of the convex decomposition
produced by the method in [11]. The size of the convex decomposition in [11]
might be as large as O(s^3 ǫ^(−2)), where s is the number of nonzero components of
the fractional point, and ǫ > 0. Our solution will have a size of at most s + 1.
Second, our decomposition is exact and does not suffer from an ǫ > 0 compromise in the solution. However, we provide no theoretical upper bound on
the number of iterations, and the proposed method relies on the performance of
Dantzig-Wolfe decomposition in practice.
Elbassioni et al. present an alternative method for finding a convex decomposition of a given fractional point [9]. Their method relies on the multiplicative
weights update method which is a general technique for solving packing and
covering problems [1,10]. While the algorithm presented in [9] has a theoretical
upper bound on the number of iterations, the algorithm is inferior to the presented method here in two aspects. First, their convex decomposition might have
1
Truthfulness is a desired property in algorithmic mechanism design, and assures that
no bidder would benefit from reporting false valuations.
a size (the number of integer solutions) of s(⌈ǫ^(−2) ln s⌉ + 1), s being the number
of nonzero components of the fractional point, and ǫ > 0. As mentioned earlier,
our solution will have a size of at most s + 1. Second, their convex decomposition
can be as precise as 1/(1 + 4ǫ) times the fractional solution (for some ǫ > 0) at the expense of increasing runtime, while our convex decomposition is exact.
1.3 Structure
In Section 2, we formally introduce the setting of the problem. In Section 3,
we provide a short summary of DW decomposition technique. In Section 4, we
establish our main result and show how the DW principle can be applied to our
setting. Section 5 is devoted to the Benders decomposition applied to our setting.
Section 6 discusses two applications of the adapted DW principle. Finally, in
Section 7, we provide a numerical example for the adapted DW technique.
2 Setting
Consider a finite set of integer points in Z^n_+. Let Q denote the convex hull of all these points; that is, Q defines a polytope with integral extreme points. Let P = { x ∈ R^n | Ax ≤ b, x ≥ 0 } denote a polytope, where A is an m by n matrix and b an m-dimensional column vector. A subroutine A, for any cost function c ∈ R^n, returns an integer point X ∈ Q such that cX ≥ cx∗, where x∗ = arg max { cx | x ∈ P }. Equivalently, we say subroutine A will return, for any cost vector c, an integer point X ∈ Q such that cX ≥ cx for any x in P. Let I denote the index set for integer points in Q. The set of integer points in Q is therefore { Xj }_{j∈I}.
Usually, subroutine A only accepts non-negative cost vectors. Examples are
approximation algorithms for NP-hard optimization problems. For instance, the
approximation algorithm provided for the knapsack problem works with nonnegative profits of items. However, in our setting, we expect A to work with any
arbitrary cost vector. In such cases, an assumption that Q is a packing polytope
is required: if x ∈ Q and y ≤ x then y ∈ Q. See Lavi and Swamy for more
information [13].
In this paper, we address the following problem. Given a cost vector c ≥ 0, find values λ∗j ≥ 0, j ∈ I, such that i) Σ_{j∈I} λ∗j = 1, ii) |{λ∗j | j ∈ I, λ∗j > 0}| is polynomial in m and n, and iii) Σ_{j∈I} λ∗j Xj = x∗, where x∗ = arg max { cx | x ∈ P }.
Using the ellipsoid method, it can be shown that every point in P can be
written as a convex combination of the extreme points in Q [4,13]. In this work,
aside from answering the question above, we give an alternative proof for this
fact.
3 Summary of Dantzig-Wolfe Decomposition
Dantzig-Wolfe decomposition belongs to column generation techniques. We shall
here briefly go over the Dantzig-Wolfe Decomposition. For a detailed explanation
of the method we refer the reader to [2]. Consider the following linear program.
Minimize
cx
subject to Ax = b
x∈X
where X is a bounded polyhedron of special structure, A is an m × n matrix, c is an n-dimensional vector, and b is an m-dimensional vector.
Since X is a bounded polyhedron, any point x ∈ X can be represented as a convex combination of a finite number of extreme points of X. Let us denote these points by x1, x2, . . . , xl, and substitute x with its convex combination of extreme points; then the aforementioned LP can be transformed into the following program in which the variables are λ1, λ2, . . . , λl.
    Minimize    Σ_{j=1}^{l} (cxj) λj                        (1)
    subject to  Σ_{j=1}^{l} (Axj) λj = b                    (2)
                Σ_{j=1}^{l} λj = 1,                         (3)
                λj ≥ 0      j = 1, 2, . . . , l             (4)
The linear program (1) − (4) is called the master problem and the program
which finds an appropriate x ∈ X in each iteration is called the subproblem.
Since the number of extreme points of set X is exponentially many, we follow
the idea of column generation to find an appropriate extreme point in each iteration.
The information is passed back and forth between the master problem and the
subproblem as follows. In each iteration a different cost coefficient is passed down
by the master problem to the subproblem and the subproblem finds an extreme
point xk , and sends it to the master problem.
Dantzig-Wolfe decomposition is an implementation of the revised simplex
method. Let vector w and α denote the dual variables corresponding to equations
(2) and (3), respectively. We first need an initial solution to generate the simplex
tableau. Suppose we have a basic feasible solution λ = (λB , λN ) to system
(2)−(4), where λB and λN denote the basic and nonbasic variables, respectively.
The initial (m + 1) × (m + 1) basis inverse B−1 hence will be known. The cost for
each basic variable λj is in fact ĉj = cxj. Therefore, we get (w, α) = ĉB B^(−1), where ĉB is the cost vector of the basic variables. Denoting b̄ = B^(−1) (b, 1)^T, we see the revised simplex tableau in Table 1.

                   BASIS INVERSE     RHS
    row zero       (w, α)            ĉB b̄
    rows 1..m+1    B^(−1)            b̄
Table 1. Simplex tableau. RHS stands for right-hand side.
The revised simplex proceeds by improving the current solution via finding
an entering and a leaving variable. In other words, the set of basic and nonbasic
variables exchange one element. When such an exchange is not possible then the
current solution is optimal. The entering variable is in fact a variable λk associated with an extreme point xk for which zk − ĉk > 0, where zk = (w, α) (Axk, 1)^T and ĉk = cxk.
We observe that zk − ĉk = (wA − c)xk + α denotes the value of point xk
with respect to current costs wA − c and dual variable α. In order to find such
a point, we solve the following subproblem which gives us the required index or
tells that the current solution is optimal when the maximum value is zero.
Maximize
(wA − c)x + α
subject to
x∈X
Notice that the objective function contains a constant and therefore it can be
replaced by (wA−c)x. Assuming that xk is the optimal solution to the program
above, the revised simplex method goes on as follows. If zk − ĉk = 0, then the algorithm stops and the last solution to the master problem is an optimal solution of the overall problem.
If zk − ĉk > 0, the master problem proceeds as follows. Let yk = B^(−1) (Axk, 1)^T; the entering column then will be (zk − ĉk, yk)^T. In order to find the leaving column, let index r be determined as follows:

    b̄r / yrk  =  min_{1 ≤ i ≤ m+1} { b̄i / yik : yik > 0 }.
We pivot at yrk which will update the dual variables, the basis inverse, and the
right-hand side. More specifically, pivoting on yrk can be stated as follows.
1. Divide row r by yrk .
2. For i = 1, . . . , m and i ≠ r, update the ith row by adding to it −yik times
the new rth row.
3. Update row zero by adding to it zk − ĉk times the new rth row.
After pivoting, the column λk is deleted and the algorithm repeats.
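A minimal Python/NumPy sketch of this pivot, assuming the tableau is stored as an (m+2)-row array (row zero plus the basis-inverse rows, with the RHS as the last column). Treating row zero like the other rows reproduces the numbers in the numerical example of the appendix; that sign convention, and all names, are our assumptions.

import numpy as np

def pivot(tableau, entering_col, r):
    # `tableau`: (m+2) x (m+2) array holding row zero, the basis inverse and the RHS
    # column; `entering_col` stacks z_k - c_k on top of y_k; `r` is the leaving row.
    t = np.array(tableau, dtype=float)
    col = np.asarray(entering_col, dtype=float)
    t[r] = t[r] / col[r]                  # step 1: divide the pivot row by y_rk
    for i in range(t.shape[0]):
        if i != r:
            t[i] = t[i] - col[i] * t[r]   # eliminate the entering column elsewhere
    return t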
An important observation about the DW principle is as follows. The DW
principle expects the Subproblem to return any point x ∈ X where (wA −
c)x + α > 0 and it does not impose any other specific requirement on the
selected point. We shall use this observation in our adaptation of the method.
Another observation is that, in each iteration, the master program finds the
best solution using known extreme points. This is done in an organized manner
as described above.
4 Applying Dantzig-Wolfe Decomposition
A wide range of combinatorial optimization problems can be formulated using the integer program max { cx | x ∈ P and x integer }. Recall, P = { x ∈ R^n | Ax ≤ b, x ≥ 0 }. For example, in combinatorial auctions, c denotes the accumulated valuations of the players, and the integer program therefore expresses the welfare maximization objective subject to feasibility constraints encoded as P.
Usually, using the simplex method or other standard linear programming techniques, a relaxed linear program of the integer program above is first solved:

    Maximize    cx                                          (5)
    subject to  Ax ≤ b                                      (6)
                x ≥ 0                                       (7)
Notice, constraints (6) and (7) together are equivalent to x ∈ P . Next, the
solution is rounded to an integer solution at the expense of a value loss or slightly
violating the constraints. Given subroutine A, we wish to find a solution to the
linear program above as well as a convex decomposition of it into integer points.
We wish to achieve both goals at once. Recall, subroutine A returns for any cost
vector c an integer point X ∈ Q such that cX ≥ cx for any x in P .
Generally speaking, Dantzig-Wolfe principle uses the fact that if we relax
some constraints and obtain a simpler polyhedra then the solution to the original problem can be written as a convex combination of extreme points of the
simpler polyhedra. The simplicity refers to the fact that the extreme points of
the new polyhedra can be found more easily than those of the original problem. While Dantzig-Wolfe principle is useful when the underlying constraints
are decomposable into simpler regions, we use it in a slightly different manner
by looking at Q as the polyhedra over which we can efficiently optimize. Recall,
Q denotes the convex hull of a finite set of integer points. To apply the idea of
Dantzig-Wolfe decomposition to the problem (5) − (7), we add a new constraint
to the program.
    x ∈ Q                                                   (8)

We will show that an optimal solution to problem (5)−(8) is also an optimal solution to (5)−(7), and thus adding the new constraint is harmless. We represent x ∈ Q as a convex combination of extreme points of Q. Recall, I denotes the index set for integer points in Q, and { Xj }_{j∈I} is the set of integer points in Q.
Substituting x with its convex combination of extreme points of Q, program (5)−(8) can be transformed into the following program, where the variables are { λj }_{j∈I}:

    Maximize    Σ_{j∈I} (cXj) λj                            (9)
    subject to  Σ_{j∈I} (AXj) λj + s = b                    (10)
                Σ_{j∈I} λj = 1                              (11)
                λj ≥ 0      ∀j ∈ I                          (12)
                s ≥ 0.                                      (13)
The linear program (9)−(13) is the master problem. Notice, we have added slack variables s ∈ R^m_+ to convert the inequalities into equalities as needed by the DW principle (see Section 3). The DW subproblem is defined below:

    max_{j∈I} (c + wA)Xj                                    (14)
While our problem is a maximization problem, the procedure in Section 3 is presented for a minimization problem; thus we substitute −c for c in the objective function of the subproblem. That means, we change the objective function of the master problem from max Σ_{j∈I} (cXj) λj to min Σ_{j∈I} (−cXj) λj, and apply the theory provided in Section 3.
We assume 0 ∈ Q thus the initial basic solution is simply defined by letting
λ0 , the variable corresponding to point X = 0, equal 1 (λ0 = 1), and letting
s = b.2 Assuming that program (14) can be efficiently solved for any cost vector,
then we are exactly following the DW principle, and therefore, we can successfully
solve the overall problem as DW principle does this. However, program (14) is
an integer program and solving it may not be computationally tractable.
To address this issue, we propose using subroutine A to approximate program
(14). Let (w̄, ᾱ) be the last dual variables calculated by the master program. In
each iteration, we call subroutine A with current cost vector c + w̄A to find a
point Xk ∈ Q to pass to the master problem. This substitution seemingly comes
at the expense of stopping at a local optimum, as explained in the following.
As long as the algorithm continues by using the points returned by subroutine
A, we are exactly running DW principle. Recall the important observation that
in DW principle, the subproblem need not be completely optimized and any
point Xk with (c + w̄A)Xk + ᾱ > 0 suffices to proceed. However, there might
be an iteration in which there exists a point X′ ∈ Q for which we have (c +
w̄A)X′ + ᾱ > 0, but for the integer point Xk returned by subroutine A, we have
(c + w̄A)Xk + ᾱ ≤ 0. Therefore, DW stops at a suboptimal point. Nevertheless,
below, we argue that this cannot happen. That means as long as DW has not
reached the optimum to problem (5) − (7), subroutine A, given the current cost
vector (c + w̄A), returns a point Xk with (c + w̄A)Xk + ᾱ > 0.
Let x∗ denote the optimal solution to program (5) − (7). It is instructive to
see what the master step would do if the subproblem passes the point x∗ , rather
than an integer point, to the master problem. While we do not know such an
optimal point, but we know that such a point exists and this suffices for our
reasoning. We argue that the master step will set λx∗ = 1 and λj = 0 for all
other λj ’s which are currently in the base. In other words, the master program
returns the best possible convex combination which is in fact λx∗ = 1. This is
discussed in the following observation.
Observation 1 Let x∗ denote the optimal solution to program (5) − (7). If
supposedly the subproblem in any iteration passes x∗ to the master problem,
then the master step will set λx∗ = 1 and λj = 0 for all other λj ’s which are
currently in the base.
Proof. First, we observe that if the first subproblem (right after the initialization) passes x∗ to the master problem, then the master step will set λx∗ = 1
and λ0 = 0. Clearly, in the solution s needs to be evaluated accordingly. Second,
by looking more closely at what simplex does in each iteration, we observe that
in any further iteration, if the subproblem passes x∗ to the master problem, the
master step will set λx∗ = 1 and λj = 0 for all other λj ’s.
The simplex method, in each iteration, performs a set of row operations on
the constraints when it pivots (see pivoting steps in Section 3). The constraints,
in any iteration, are thus the initial constraints after a series of row operations.
This will certify that the aforementioned solution (λx∗ = 1) will be feasible in
any further iteration. If the subproblem passes x∗ to the master problem, our
entering variable will be λx∗ . The simplex algorithm then increases the entering
variable λx∗ as much as one basic variable gets zero. However, as discussed, the
solution λx∗ = 1 is feasible, and it is possible to increase λx∗ up to 1 and set
all other λj ’s to zero. The algorithm will behave as such to produce the highest
increase in the objective value, the desired conclusion.
Theorem 2. If the subproblem (14) calls subroutine A to return an integer point
in each iteration, the DW principle never stops until it gets to an optimal solution
to problem (5) − (7).
Proof. Let x∗ denote the optimal solution to program (5)−(7). Assume the algorithm stops at a suboptimal point: Σ_{j∈I} (cXj)λj < cx∗. Let (w̄, ᾱ) be
the last dual variables calculated by the master program. Let Xk be the point
returned by subroutine A, given cost vector (c + w̄A), in the last iteration. We
must have (c + w̄A)Xk + ᾱ ≤ 0 because DW has stopped.
If supposedly the subproblem in the last iteration passes x∗ to the master
problem, according to Observation 1, the master step will increase λx∗ up to
1 and set all other λj ’s to zero. Since we assumed the algorithm has stopped
at a suboptimal point, by setting λx∗ = 1 the objective value will increase. If
entering λx∗ improves the objective value, we must have (c + w̄A)x∗ + ᾱ > 0
from the theory of DW principle provided in Section 3: if (c + w̄A)x∗ + ᾱ ≤ 0
then entering variable λx∗ cannot improve the objective value.
By the property of the subroutine, we have (c + w̄A)Xk ≥ (c + w̄A)x∗ .
Thus, in the last iteration, we must have (c + w̄A)Xk + ᾱ > 0. This contradicts
our assumption that (c + w̄A)Xk + ᾱ ≤ 0. Consequently, as long as we have not
reached the optimum, the subroutine returns a point Xk with (c + w̄A)Xk + ᾱ >
0. This completes the proof.
We draw the conclusion that substituting program (14) with subroutine A is
harmless. Therefore, we have shown that finding a convex decomposition of maximum value is indeed equivalent to solving a linear program via DW principle.
Let us call the method integer DW.
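The following Python sketch outlines the integer DW loop under the stated assumptions (0 ∈ Q and b ≥ 0). Rather than maintaining the revised-simplex tableau explicitly, it solves the restricted master (9)−(13) with SciPy's HiGHS-based linprog and reads the duals (w, α) from the solver's ineqlin.marginals and eqlin.marginals fields (available in SciPy 1.7+); subroutine A is passed in as a callable. This is an illustrative implementation choice, not the paper's way of carrying out the master step, and all names are our assumptions.

import numpy as np
from scipy.optimize import linprog

def integer_dw(c, A, b, subroutine, max_iters=1000, tol=1e-9):
    # Column generation with an integer oracle as the pricing step.  `subroutine(cost)`
    # must return an integer point X in Q with cost.X >= cost.x for every x in P.
    columns = [np.zeros_like(c, dtype=float)]          # assume the point 0 is in Q
    for _ in range(max_iters):
        # Restricted master over the columns generated so far (program (9)-(13)).
        obj = np.array([c @ X for X in columns])
        A_ub = np.column_stack([A @ X for X in columns])
        res = linprog(-obj, A_ub=A_ub, b_ub=b,
                      A_eq=np.ones((1, len(columns))), b_eq=[1.0],
                      bounds=[(0, None)] * len(columns), method="highs")
        w = res.ineqlin.marginals            # duals of the <= rows (non-positive)
        alpha = res.eqlin.marginals[0]       # dual of the convexity constraint
        X = subroutine(c + w @ A)            # pricing: maximize (c + wA)X over Q
        if (c + w @ A) @ X + alpha <= tol:   # no improving column: master is optimal
            return columns, res.x
        columns.append(np.asarray(X, dtype=float))
    return columns, res.x

A call looks like columns, lam = integer_dw(c, A, b, oracle); the convex decomposition is then x∗ = Σ_j lam[j]·columns[j].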
5 Benders Decomposition
It is known that the Dantzig-Wolfe Decomposition has an equivalent decomposition technique namely Benders decomposition [2]. Benders decomposition is a
row generation technique in contrast with the Dantzig-Wolfe column generation
procedure. Sometimes, working with Benders decomposition has advantages over
Dantzig-Wolfe decomposition. We explain how to apply the Benders algorithm
to our problem. Later, we discuss the advantages of the method.
Recall, polytope Q is a bounded polyhedron. Hence, there exist a matrix D ∈ R^(m'×n) and a vector d ∈ R^(m') such that Q = { x ∈ R^n | Dx ≤ d, x ≥ 0 }. We add constraint x ∈ Q to program (5)−(7) and work with the new program. We will see that adding this constraint has no influence on the region of feasible solutions to program (5)−(7). Following the standard procedure [2], we can write the Benders decomposition for this new program. The Benders master problem will be as follows:
    Maximize    z                                           (15)
    subject to  z ≤ wb − (c + wA)Xj      ∀j ∈ I             (16)
                w ≤ 0                                       (17)
                z unrestricted.                             (18)
The variables of the master problem are z and w. Variable vector w is the
vector of dual variables associated to the constraints (6). The Benders master
problem has exponentially many constraints, thus it is inconvenient to solve
directly. Hence, we maintain only a few of the constraints (16). Assuming 0 ∈ Q,
we start with only one constraint: z ≤ wb − (c + wA)0 = wb. Notice, we can
use any Xj ∈ Q to start with. We solve the master problem and let (z̄, w̄) be
the solution. The value of z̄ is an upper bound on the optimal value to the
master problem. If (z̄, w̄) satisfies constraints (16) for all j ∈ I, then (z̄, w̄) is
optimal for the master problem. We can check constraints (16) by examining if
z̄ ≤ w̄b − maxj∈I (c + w̄A)Xj .
Thus, the Benders subproblem will be the following:

    max_{j∈I} (c + w̄A)Xj                                    (19)
Note that the Benders subproblem is also the subproblem solved by the
Dantzig-Wolfe decomposition. Furthermore, the Benders master problem is the
dual to the Dantzig-Wolfe master problem (9) − (13). The Benders subproblem,
in each iteration, is solved by calling subroutine A with cost vector c+ w̄A. If the
subproblem returns Xk that violates the constraints (16): z̄ > w̄b− (c + w̄A)Xk ,
we can generate the new constraint z ≤ wb − (c + wA)Xk , and add it to
the current master program, and reoptimize. We repeat this process until the
solution returned by subroutine A does not violate the constraints. We claim
that at this iteration, the value of z̄ is the optimal value to the master problem.
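A sketch of this cut loop, again using SciPy's linprog for the master over the variables (z, w) and a callable oracle for subroutine A; the cut z ≤ wb − (c + wA)Xj is rewritten as z + w·(AXj − b) ≤ −cXj. The names and the starting cut X0 = 0 are assumptions of the sketch.

import numpy as np
from scipy.optimize import linprog

def benders(c, A, b, subroutine, max_iters=1000, tol=1e-9):
    # Variables are (z, w); each generated X_j yields the cut z + w.(A X_j - b) <= -c.X_j.
    m = len(b)
    cuts = [np.zeros_like(c, dtype=float)]            # start from X_0 = 0, i.e. z <= w b
    bounds = [(None, None)] + [(None, 0)] * m         # z free, w <= 0
    obj = np.concatenate(([-1.0], np.zeros(m)))       # maximize z
    for _ in range(max_iters):
        A_ub = np.array([np.concatenate(([1.0], A @ X - b)) for X in cuts])
        b_ub = np.array([-(c @ X) for X in cuts])
        res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        z_bar, w_bar = res.x[0], res.x[1:]
        X = subroutine(c + w_bar @ A)                 # look for a violated cut
        if z_bar <= w_bar @ b - (c + w_bar @ A) @ X + tol:
            return z_bar, w_bar, cuts                 # no violation: z_bar is optimal
        cuts.append(np.asarray(X, dtype=float))
    return z_bar, w_bar, cuts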
The Benders decomposition provides a more concise proof that the decomposition technique, in combination with subroutine A, works correctly.
Theorem 3. If the subproblem (19) calls subroutine A to return an integer point
in each iteration, the Benders algorithm never stops until it reaches an optimal
solution to problem (5) − (7).
Proof. Let x∗ denote an optimal solution to problem (5) − (7). Let z ∗ denote
an optimal value of the Benders master problem. We have z ∗ = (−c)x∗ by
the construction of the Benders master problem, and the duality theorem. Let
z̄ be the final solution to the master problem and Xk the solution returned
by subroutine A when the Benders algorithm stops. Since the algorithm stops,
we must have z̄ ≤ w̄b − (c + w̄A)Xk . We always have z ∗ ≤ z̄ because z̄ is
an upper bound on the optimal solution to the master problem. Assume z̄ is
not optimal: z̄ > z ∗ . Remember, by the definition of subroutine A, we have
(c + w̄A)Xk ≥ (c + w̄A)x∗ . Therefore,
    Ax∗ ≤ b                                  since x∗ is a solution to problem (5)−(7)
⇒  w̄Ax∗ ≥ w̄b                                 since w̄ ≤ 0
⇒  −cx∗ ≥ w̄b − cx∗ − w̄Ax∗
⇒  −cx∗ ≥ w̄b − (c + w̄A)Xk                    since (c + w̄A)Xk ≥ (c + w̄A)x∗
⇒  z̄ > w̄b − (c + w̄A)Xk                       since z̄ > z∗ = −cx∗
But, this contradicts z̄ ≤ w̄b − (c + w̄A)Xk . The contradiction arises from
the assumption that z̄ is not optimal. Thus, when the algorithm stops, we have
the optimal solution, the desired conclusion.
After solving the Benders master problem, we can use the provided integer
solutions and solve the restricted primal to obtain a convex decomposition.
Another advantage of the Benders decomposition arises from the fact that in
each iteration, we optimize an LP in the master step. Solving an LP is sometimes
more convenient than the implementation of the pivoting steps done in each
iteration in the master step of the DW principle.
5.1 Polynomial Runtime Using Ellipsoid
It is instructive to note that Theorem 3 implies that if the integer solution returned by subroutine A does not violate constraints (16), then the current master
solution is optimal. Exploiting this fact, we can use the ellipsoid method to solve
problem (15) − (18) to certify a polynomial runtime which might be of theoretical interest. To use the ellipsoid method, we need to implement a separation
oracle. Recall that a separation oracle, given a solution, either confirms that it
is a feasible solution, or returns the constraint violated by the solution. Using
subroutine A as the separation oracle, as long as we find a violated constraint,
we cut the current ellipsoid and continue. When the subroutine A cannot return
a violating constraint, according to Theorem 3, the algorithm has reached the
optimum.
6 Application of the Method in Mechanism Design
6.1 The Framework Proposed by Lavi and Swamy
Let X = { x ∈ R^n | Ax ≤ b, x ≥ 0 } denote the underlying polytope of a linear program, and x∗ denote an optimal solution to the program with respect to some cost vector. The maximum ratio between the value of an integer program and its relaxation, with respect to all cost vectors, is called the integrality gap of the relaxation. Assuming that the integrality gap of X is β ≥ 1, and that a β integrality-gap-verifier is given, Lavi and Swamy propose a method to decompose the scaled-down fractional solution x∗/β into a convex combination of integer solutions [13]. A β integrality-gap-verifier is an algorithm that, given any cost vector, returns an integer solution whose value is at least 1/β times the optimal relaxed solution.
This decomposition technique was originally observed by Carr and Vempala
[4], and later adapted by Lavi and Swamy to mechanism design problems provided that the underlying polytope of the relaxation of the problem has the
packing property [13]. The approach requires only a polynomial number of calls
to the integrality-gap-verifier with respect to the number of positive components
in x∗ . Yet, the approach strongly relies on the ellipsoid method, and hence it is
more of theoretical importance than of practical use.
In order to view the LS framework in our setting, the integrality-gap-verifier is used as subroutine A and X/β = { x | βx ∈ X } is treated as P in our setting introduced in Section 2. This way, the integer DW finds the maximum value
in X/β as well as its decomposition into integer points, both in one step. This
improves upon other implementations of the LS framework which require two
steps to find the convex decomposition [13,11,9].
It is instructive to note that solving program (9) − (13), essentially defines a
Maximal-In-Distributional-Range (MIDR) allocation rule. An MIDR algorithm
fixes a set of distributions over feasible solutions (the distributional range) independently of the valuations reported by the self-interested players, and outputs
a random sample from the distribution that maximizes expected (reported) welfare [7]. Here, we optimize over a range which is independent of bidder’s private
information. The range is in fact the feasible region of the program: all probability distributions over integer solutions which satisfy constraints (10) − (13).
The range is obviously independent of bidders’ valuations.
6.2 Existing Fractional Point
Sometimes a fractional point x∗ ∈ Q is present, and we wish to find a convex
decomposition of x∗ into extreme points of Q. This can happen when we use
other methods to find a fractional point rather than linear programming. Recall,
we assume that Q satisfies the packing property.
For this case, we can use the integer DW as follows. Define P = { x ∈ R^n | x ≤ x∗, x ≥ 0 } and let c = x∗. Now, apply the integer DW. All
arguments follow accordingly, assuming that a subroutine A with the following
property is available. Subroutine A will return for any cost vector c an integer
point X ∈ Q such that cX ≥ cx∗ . Because the number of constraints in P is at
most n, the resulting convex decomposition in this case is tight in terms of the
number of integer points, according to the Carathéodory’s theorem.
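In code, this case reduces to a thin wrapper around the integer DW sketch from Section 4, with c = x∗ and the box polytope {x ≤ x∗}; the required oracle property (cX ≥ cx∗ for any cost vector c) is assumed of the supplied subroutine, and the names are ours.

import numpy as np

def decompose_fractional_point(x_star, subroutine):
    # Reuse the integer DW sketch from Section 4 with c = x* and P = {x : x <= x*, x >= 0};
    # `subroutine` is assumed to return X in Q with c.X >= c.x* for any cost vector c.
    x_star = np.asarray(x_star, dtype=float)
    return integer_dw(x_star, np.eye(len(x_star)), x_star, subroutine)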
7 Numerical Example for Integer DW
In this section, we apply the integer DW to an instance of multi-unit auctions
to see how the method works. We relegate the details to the appendix.
References
1. Arora, S., Hazan, E., Kale, S.: The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing 8(1), 121–164 (2012)
2. Bazaraa, M.S., Jarvis, J.J., Sherali, H.D.: Linear programming and network flows.
John Wiley & Sons (2011)
3. Budish, E., Che, Y.K., Kojima, F., Milgrom, P.: Designing random allocation mechanisms: Theory and applications. The American Economic Review 103(2), 585–623
(2013)
4. Carr, R., Vempala, S.: Randomized metarounding. In: Proceedings of the thirty-second annual ACM symposium on Theory of computing. pp. 58–62. ACM (2000)
5. Chvátal, V.: Linear programming. WH Freeman and Company, New York (1983)
6. Desrosiers, J., Lübbecke, M.E.: A primer in column generation. Springer (2005)
7. Dobzinski, S., Dughmi, S.: On the power of randomization in algorithmic mechanism design. In: Foundations of Computer Science, 2009. FOCS’09. 50th Annual
IEEE Symposium on. pp. 505–514. IEEE (2009)
8. Dughmi, S., Ghosh, A.: Truthful assignment without money. In: Proceedings of
the 11th ACM conference on Electronic commerce. pp. 325–334. ACM (2010)
9. Elbassioni, K., Mehlhorn, K., Ramezani, F.: Towards more practical linear
programming-based techniques for algorithmic mechanism design. In: Algorithmic
Game Theory, pp. 98–109. Springer (2015)
10. Khandekar, R.: Lagrangian relaxation based algorithms for convex programming
problems. Ph.D. thesis, Indian Institute of Technology Delhi (2004)
11. Kraft, D., Fadaei, S., Bichler, M.: Fast convex decomposition for truthful social
welfare approximation. In: Web and Internet Economics, pp. 120–132. Springer
(2014)
12. Lavi, R., Swamy, C.: Truthful and near-optimal mechanism design via linear programming. In: Foundations of Computer Science, 2005. FOCS 2005. 46th Annual
IEEE Symposium on. pp. 595–604. IEEE (2005)
13. Lavi, R., Swamy, C.: Truthful and near-optimal mechanism design via linear programming. Journal of the ACM (JACM) 58(6), 25 (2011)
14. Nguyen, T., Peivandi, A., Vohra, R.: Assignment problems with complementarities.
Tech. rep., Mimeo., February (2015)
15. Tebboth, J.R.: A computational study of Dantzig-Wolfe decomposition. Ph.D. thesis, University of Buckingham (2001)
A Numerical Example for Integer DW
In this section, we focus on applying the integer DW to an instance of multi-unit
auctions. In multi-unit auctions, there is a set of m identical items and a set of
players. Each player i has a valuation for any number of items denoted by vi (j)
for getting j items where 1 ≤ j ≤ m. The goal is to maximize social welfare by
distributing items among bidders.
The LP relaxation for this class of problems is as follows. Let xij denote whether j units are assigned to bidder i.

    Maximize    Σ_{i,j} vi(j) xij                           (MU-P)
    subject to  Σ_j xij ≤ 1          for each player i      (20)
                Σ_{i,j} j · xij ≤ m                         (21)
                0 ≤ xij ≤ 1          for each i, j          (22)
Lavi and Swamy present a greedy algorithm which returns for any valuation
v an integer solution that is at least as good as half of the optimal fractional
solution to MU-P with respect to v [13]. Thus, we have a 2 integrality-gap-verifier
algorithm for MU-P. This greedy algorithm will serve as the subroutine in the
integer DW, and is called A.
We give a short example to demonstrate the proposed convex decomposition
method. Suppose a simple multi-unit auction with 3 players and 4 identical items.
The following valuation vectors vi (j) are given for each player i and quantity j:
            j     1  2  3  4
    v1(j) = (     6  6  6  6  )
    v2(j) = (     1  4  4  6  )
    v3(j) = (     0  1  1  1  )
We can reproduce program (9)−(13) for this instance as follows. Let I denote
the index set of integer points which satisfy inequalities (20) − (22).
Let c = (6 6 6 6 1 4 4 6 0 1 1 1), and

        ( 1 1 1 1 0 0 0 0 0 0 0 0 )            ( 0.5 )
    A = ( 0 0 0 0 1 1 1 1 0 0 0 0 ) ,      b = ( 0.5 )
        ( 0 0 0 0 0 0 0 0 1 1 1 1 )            ( 0.5 )
        ( 1 2 3 4 1 2 3 4 1 2 3 4 )            (  2  )

Notice the integrality gap has been reflected in defining b.
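For concreteness, the instance can be set up as below and fed to the integer DW sketch from Section 4. The brute-force oracle enumerates all integer assignments of this toy instance and is only a stand-in for the greedy 2 integrality-gap-verifier of Lavi and Swamy; the variable names and the expected outcome noted in the final comments are assumptions.

import numpy as np
from itertools import product

values = np.array([[6, 6, 6, 6],     # v1(1..4)
                   [1, 4, 4, 6],     # v2(1..4)
                   [0, 1, 1, 1]])    # v3(1..4)
c = values.flatten()                               # cost vector over the x_ij
A = np.zeros((4, 12))
for i in range(3):
    A[i, 4 * i:4 * i + 4] = 1                      # each player takes at most one bundle
A[3, :] = np.tile([1, 2, 3, 4], 3)                 # at most m = 4 units in total
b = np.array([0.5, 0.5, 0.5, 2.0])                 # right-hand side scaled by the gap

def brute_force_oracle(cost):
    # Enumerate every integer assignment of this toy instance (0..4 units per player,
    # at most 4 units overall) and return the best one for the given cost vector.
    best, best_val = np.zeros(12), 0.0
    for alloc in product(range(5), repeat=3):
        if sum(alloc) > 4:
            continue
        X = np.zeros(12)
        for i, j in enumerate(alloc):
            if j > 0:
                X[4 * i + (j - 1)] = 1
        if cost @ X > best_val:
            best, best_val = X, cost @ X
    return best

# columns, lam = integer_dw(c, A, b, brute_force_oracle)   # sketch from Section 4
# x_star = sum(l * X for l, X in zip(lam, columns))        # expected: x11, x22, x24 = .5, .25, .25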
Initialization Step
Let the starting basis consist of s and λ0 where X0 = 0 is the starting integer
point. Therefore, the first simplex tableau is as the following.
            BASIS INVERSE             RHS
 z          0    0    0    0    0      0
 s1         1    0    0    0    0     .5
 s2         0    1    0    0    0     .5
 s3         0    0    1    0    0     .5
 s4         0    0    0    1    0      2
 λ0         0    0    0    0    1      1
Iteration 1
SUBPROBLEM. From the simplex tableau, we have w = (0, 0, 0, 0) and α = 0. As a result, wA + c = c. The subproblem therefore is max_{j∈I} cXj. Subroutine A returns X such that X11 = X22 = 1 and all other entries of X are zero. The objective of the point with respect to the current cost is z − ĉ = 10 > 0. Let us call this point X1.
MASTER PROBLEM. AX1 = (1, 1, 0, 3)^T. Then y1 = B^(−1) (AX1, 1)^T = (1, 1, 0, 3, 1)^T.
Now, we insert the column into the foregoing tableau and pivot. Variable s1
leaves the basis and λ1 enters the basis.
            BASIS INVERSE             RHS    λ1
 z          0    0    0    0    0      0     10
 s1         1    0    0    0    0     .5      1
 s2         0    1    0    0    0     .5      1
 s3         0    0    1    0    0     .5      0
 s4         0    0    0    1    0      2      3
 λ0         0    0    0    0    1      1      1
After pivoting we obtain the following tableau.
            BASIS INVERSE             RHS
 z        −10    0    0    0    0     −5
 λ1         1    0    0    0    0     .5
 s2        −1    1    0    0    0      0
 s3         0    0    1    0    0     .5
 s4        −3    0    0    1    0     .5
 λ0        −1    0    0    0    1     .5
The best-known feasible solution of the overall problem is given by λ0 X0 +
λ1 X1 = 0.5X0 + 0.5X1 . The current objective value is 5.
Iteration 2
SUBPROBLEM. From the simplex tableau, we have w = (−10, 0, 0, 0) and α = 0. As a result, wA + c = (−4 −4 −4 −4 1 4 4 6 0 1 1 1). The subproblem therefore is max_{j∈I} (wA + c)Xj. Subroutine A returns X such that X24 = 1 and all other entries of X are zero. The objective of the point with respect to the current cost is z − ĉ = 6 > 0. Let us call this point X2.
MASTER PROBLEM. AX2 = (0, 1, 0, 4)^T. Then y2 = B^(−1) (AX2, 1)^T = (0, 1, 0, 4, 1)^T.
Now, we insert the column into the foregoing tableau and pivot. Variable s2
leaves the basis and λ2 enters the basis.
            BASIS INVERSE             RHS    λ2
 z        −10    0    0    0    0     −5      6
 λ1         1    0    0    0    0     .5      0
 s2        −1    1    0    0    0      0      1
 s3         0    0    1    0    0     .5      0
 s4        −3    0    0    1    0     .5      4
 λ0        −1    0    0    0    1     .5      1
After pivoting we obtain the following tableau.
            BASIS INVERSE             RHS
 z         −4   −6    0    0    0     −5
 λ1         1    0    0    0    0     .5
 λ2        −1    1    0    0    0      0
 s3         0    0    1    0    0     .5
 s4         1   −4    0    1    0     .5
 λ0         0   −1    0    0    1     .5
The best-known feasible solution of the overall problem is given by λ0 X0 +
λ1 X1 = 0.5X0 + 0.5X1 . The current objective value is 5.
Iteration 3
SUBPROBLEM. From the simplex tableau, we have w = (−4, −6, 0, 0) and α = 0. As a result, wA + c = (2 2 2 2 −5 −2 −2 0 0 1 1 1). The subproblem therefore is max_{j∈I} (wA + c)Xj. Subroutine A returns X such that X11 = 1, X32 = 1 and all other entries of X are zero. The objective of the point with respect to the current cost is z − ĉ = 3 > 0. Let us call this point X3.
MASTER PROBLEM. AX3 = (1, 0, 1, 3)^T. Then y3 = B^(−1) (AX3, 1)^T = (1, −1, 1, 4, 1)^T.
Now, we insert the column into the foregoing tableau and pivot. Variable s4
leaves the basis and λ3 enters the basis.
            BASIS INVERSE             RHS    λ3
 z         −4   −6    0    0    0     −5      3
 λ1         1    0    0    0    0     .5      1
 λ2        −1    1    0    0    0      0     −1
 s3         0    0    1    0    0     .5      1
 s4         1   −4    0    1    0     .5      4
 λ0         0   −1    0    0    1     .5      1
After pivoting we obtain the following tableau.
            BASIS INVERSE                   RHS
 z      −4.75   −3    0  −.75    0      −5.375
 λ1       .75    1    0  −.25    0        .375
 λ2      −.75    0    0   .25    0        .125
 s3      −.25    1    1  −.25    0        .375
 λ3       .25   −1    0   .25    0        .125
 λ0      −.25    0    0  −.25    1        .375
The best-known feasible solution of the overall problem is given by λ0 X0 +λ1 X1 +
λ2 X2 + λ3 X3 = 0.375X0 + 0.375X1 + 0.125X2 + 0.125X3 . The current objective
value is 5.375.
Iteration 4
SUBPROBLEM. From the simplex tableau, we have w = (−4.75, −3, 0, −.75) and α = 0. As a result, wA + c = (.5 −.25 −1 −1.75 −2.75 −.5 −1.25 0 −.75 −.5 −1.25 −2). The subproblem therefore is max_{j∈I} (wA + c)Xj. Subroutine A returns X such that X11 = 1 and all other entries of X are zero. The objective of the point with respect to the current cost is z − ĉ = 0.5 > 0. Let us call this point X4.
MASTER PROBLEM. AX4 = (1, 0, 0, 1)^T. Then y4 = B^(−1) (AX4, 1)^T = (.5, −.5, −.5, .5, .5)^T.
Now, we insert the column into the foregoing tableau and pivot. Variable λ3
leaves the basis and λ4 enters the basis.
            BASIS INVERSE                   RHS      λ4
 z      −4.75   −3    0  −.75    0      −5.375      .5
 λ1       .75    1    0  −.25    0        .375      .5
 λ2      −.75    0    0   .25    0        .125     −.5
 s3      −.25    1    1  −.25    0        .375     −.5
 λ3       .25   −1    0   .25    0        .125      .5
 λ0      −.25    0    0  −.25    1        .375      .5
After pivoting we obtain the following tableau.
            BASIS INVERSE             RHS
 z         −5   −2    0   −1    0    −5.5
 λ1        .5    2    0  −.5    0     .25
 λ2       −.5   −1    0   .5    0     .25
 s3         0    0    1    0    0      .5
 λ4        .5   −2    0   .5    0     .25
 λ0       −.5    1    0  −.5    1     .25
The best-known feasible solution of the overall problem is given by λ0 X0 +λ1 X1 +
λ2 X2 + λ4 X4 = 0.25X0 + 0.25X1 + 0.25X2 + 0.25X4 . The current objective value
is 5.5.
Iteration 5
SUBPROBLEM. From the simplex tableau, we have w = (−5, −2, 0, −1) and α = 0. As a result, wA + c = (−6 −7 −8 −9 −3 −4 −5 −6 −1 −2 −3 −4). The subproblem therefore is max_{j∈I} (wA + c)Xj. Subroutine A returns X = 0. The objective of the point with respect to the current cost is z − ĉ = 0. Therefore, the algorithm terminates. Our final solution is as follows:

x∗ = (x11, x22, x24)^T = 0.25 (0, 0, 0)^T + 0.25 (1, 1, 0)^T + 0.25 (0, 0, 1)^T + 0.25 (1, 0, 0)^T = (0.5, 0.25, 0.25)^T.
A simple examination shows that x∗ is in fact one half (scaled down by the
integrality gap) of the optimal solution to MU-P for our instance.
| 8 |
Structuring quantum effects: superoperators as arrows
Juliana K. Vizzotto1∗        Thorsten Altenkirch2        Amr Sabry1
1 Department of Computer Science, Indiana University, USA
2 School of Computer Science and IT, The University of Nottingham, UK
∗ Permanent address: Institute of Informatics, Porto Alegre, Brazil
arXiv:quant-ph/0501151v1 25 Jan 2005
January 31, 2018
Abstract
We show that the model of quantum computation based on density matrices and superoperators can be decomposed into a pure classical (functional) part and an effectful part modeling
probabilities and measurement. The effectful part can be modeled using a generalization of
monads called arrows. We express the resulting executable model of quantum computing in the
programming language Haskell using its special syntax for arrow computations. The embedding in Haskell is however not perfect: a faithful model of quantum computing requires type
capabilities which are not directly expressible in Haskell.
1 Introduction
A newcomer to the field of quantum computing is immediately overwhelmed with many apparent
differences with classical computing that suggest that quantum computing might require radically
new semantic models and programming languages. In some sense this is true for two reasons:
(1) quantum computing is based on a kind of parallelism caused by the non-local wave character
of quantum information which is qualitatively different from the classical notion of parallelism,
and (2) quantum computing has a peculiar notion of observation in which the observed part of
the quantum state and every other part that is entangled with it immediately lose their wave
character. Interestingly none of the other differences that are often cited between quantum and
classical computing are actually relevant semantically. For example, even though we do not often
think of classical computation as “reversible,” it is just as reversible as quantum computing. Both
can be compiled or explained in terms of reversible circuits [2], but in neither model should the user
be required to reason about reversibility.
The two properties of quantum computing discussed above certainly go beyond “pure” classical programming and it has been suspected earlier that they might correspond to some notion
of computational effect. Following Moggi’s influential paper [9], computational effects like assignments, exceptions, non-determinism, etc. could all be modeled using the categorical construction of
a monad. This construction has been internalized in the programming language Haskell as a tool to
elegantly express computational effects within the context of a pure functional language. Since the
work of Moggi, several natural notions of computational effects were discovered which could only
be expressed as generalizations of monads. Of particular importance to us, is the generalization of
monads known as arrows [7] which is also internalized in the programming language Haskell.
In an early paper, Mu and Bird (2001) showed that quantum parallelism is almost a monad. We
expand and build on this observation as follows. First the traditional model of quantum computing
cannot even express measurements, so we use a known more general model using density matrices
and superoperators. After expressing this model in Haskell, we establish that the superoperators
used to express all quantum computations and measurements are indeed an instance of the concept
of arrows (with a small caveat). In particular the construction clarifies the crucial need for some
form of linear typing: arrow computations must be required to use every quantum value or else the
computations produce results that are inconsistent with quantum mechanics.
In summary, our construction relates “exotic” quantum features to well-understood semantic
constructions and programming languages. We hope it will serve as a useful tool to further understand the nature and structure of quantum computation. The remainder of the paper is organized
as follows. Section 2 presents the traditional model of quantum computing and its implementation
in Haskell, focusing on the possibility of structuring the effects using monads. Section 3 discusses
the limitations of the traditional model as a complete model of quantum computation which should
include measurement. Section 4 introduces a more general model of quantum based on density
matrices and superoperators. Our main result is discussed in Section 5 where we show that general quantum computations including measurement can be structured using the generalization of
monads called arrows. Section 6 gives two complete examples implementing a Toffoli circuit and
the teleportation experiment: both examples use the arrow notation to express the structure of the
computation elegantly. Section 7 discusses the limitations of our model and its connection to the
functional quantum programming language QML [2]. Section 8 concludes. Appendix A explains the
basics of the Haskell notation used in the paper, and the next two appendices present the proofs
that are omitted from the main body of the paper.
2 The Traditional Model of Quantum Computing
We present the traditional model of quantum computing in this section.
2.1 Vectors
A finite set a can be represented in Haskell as an instance of the class Basis below. Given such
a set a representing observable (classical) values, a pure quantum value is a vector a → C which
associates each basis element with a complex probability amplitude. The basis elements must be
distinguishable from each other which explains the constraint Eq a on the type of elements below:
class Eq a => Basis a where basis :: [a]
type PA = Complex Double
type Vec a = a → PA
The type constructor Vec is technically not a monad: it corresponds to a Kleisli structure [3]. Yet
as noted by Mu and Bird (2001), the probabilities introduced by vector spaces constitute a computational effect which can be structured using a slight generalization of monads in Haskell [9]. From
a programming perspective, a monad is represented using a type constructor for computations m
and two functions: return :: a → m a and ≫= :: m a → (a → m b) → m b. The operation ≫=
(pronounced “bind”) specifies how to sequence computations and return specifies how to terminate
computations:
return :: Basis a => a → Vec a
return a b = if a≡b then 1 else 0
(>>=) :: Basis a => Vec a → (a → Vec b) → Vec b
va >>= f = λ b → sum [ (va a) * (f a b) | a ∈ basis]
Because of the additional constraint that our computations must be over specified bases whose
elements must be comparable, the types of our operations are more restricted than strictly desired
for a monad. However return and ≫= satisfy the three monad laws.
Proposition 2.1 Vector spaces satisfy the required equations for monads.
Proof. See Appendix B.
✷
Vector spaces have additional properties abstracted in the Haskell class MonadPlus. Instances
of this class support two additional methods: mzero and mplus which provide a “zero” computation
and an operation to “add” computations:
mzero :: Vec a
mzero = const 0
mplus :: Vec a → Vec a → Vec a
mplus v_1 v_2 a = v_1 a + v_2 a
mminus :: Vec a → Vec a → Vec a
mminus v_1 v_2 a = v_1 a − v_2 a
For convenience, it is also possible to define various kinds of products over vectors: the scalar
product $*, the tensor product <*>, and the dot product <.>:
($*) :: PA → Vec a → Vec a
pa $* v = λa → pa * v a
(<*>) :: Vec a → Vec b → Vec (a,b)
v1 <*> v2 = λ (a,b) → v1 a * v2 b
(<.>) :: Basis a => Vec a → Vec a → PA
v1 <.> v2 = sum (map (λa → conjugate (v1 a) * (v2 a)) basis)
Examples of vectors over the set of booleans may be defined as follows:
instance Basis Bool where basis = [False,True]
qFalse,qTrue,qFT,qFmT :: Vec Bool
qFalse = return False
qTrue = return True
qFT = (1 / sqrt 2) $* (qFalse ‘mplus‘ qTrue)
qFmT = (1 / sqrt 2) $* (qFalse ‘mminus‘ qTrue)
The first two are unit vectors corresponding to basis elements; the last two represent states which are
in equal superpositions of False and True. In the Dirac notation, these vectors would respectively be
written as |False⟩, |True⟩, (1/√2)(|False⟩ + |True⟩), and (1/√2)(|False⟩ − |True⟩).
Vectors over several values can be easily described using the tensor product on vectors or the
Cartesian product on the underlying bases:
instance (Basis a, Basis b) => Basis (a, b) where
basis = [(a, b) | a ∈ basis, b ∈ basis ]
p1,p2,p3,epr :: Vec (Bool,Bool)
p1 = qFT <*> qFalse
p2 = qFalse <*> qFT
p3 = qFT <*> qFT
epr (False,False) = 1 / sqrt 2
epr (True,True) = 1 / sqrt 2
In contrast to the first three vectors, the last vector describes an entangled quantum state which
cannot be separated into the product of independent quantum states. The name of the vector “epr ”
refers to the initials of Einstein, Podolsky, and Rosen who used such a vector in a thought experiment
to demonstrate some strange consequences of quantum mechanics [5].
2.2
Linear Operators
Given two base sets A and B, a linear operator f ∈ A ⊸ B is a function mapping vectors over A to
vectors over B. We represent such operators as functions mapping values to vectors, which is similar
to the representation used by Karczmarczuk (2003):
type Lin a b = a → Vec b
fun2lin :: (Basis a, Basis b) => (a → b) → Lin a b
fun2lin f a = return (f a)
The function fun2lin converts a regular function to a linear operator. For example, the quantum
version of the boolean negation is:
qnot :: Lin Bool Bool
qnot = fun2lin ¬
Linear operations can also be defined directly, for example:
phase :: Lin Bool Bool
phase False = return False
phase True = (0 :+ 1) $* (return True)
hadamard :: Lin Bool Bool
hadamard False = qFT
hadamard True = qFmT
The definition of a linear operation specifies its action on one individual element of the basis.
To apply a linear operation f to a vector v , we use the bind operation to calculate v ≫= f . For
example, (qFT ≫= hadamard ) applies the operation hadamard to the vector qFT ; a short calculation
shows that the result is the vector qFalse.
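As a quick sanity check of this claim, the following small helper is ours (hypothetical, not part of the paper's library) and assumes the declarations above are in scope; it simply tabulates a vector over Bool on its basis so the calculation can be replayed:
-- Hypothetical helper: list the amplitude of each basis element.
amplitudes :: Vec Bool → [(Bool, PA)]
amplitudes v = [ (b, v b) | b ← basis ]
Evaluating amplitudes (qFT >>= hadamard) gives amplitude 1 for False and 0 for True (up to rounding), i.e. the vector qFalse, as claimed.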
It is possible to write higher-order functions which consume linear operators and produce new
linear operators. An important example of such functions produces the so-called controlled operations:
controlled :: Basis a => Lin a a → Lin (Bool,a) (Bool,a)
controlled f (b1,b2) = (return b1) <*> (if b1 then f b2 else return b2)
The linear operator f is transformed to a new linear operator controlled by a quantum boolean value.
The modified operator returns a pair whose first component is the input control value. The second
input is passed to f only if the control value is true, and is otherwise left unchanged. For example,
(qFT <*> qFalse) ≫= (controlled qnot ) applies the familiar controlled-not gate to a vector over two
values: the control value is a superposition of False and True and the data value is False. As one
may calculate, the result of this application is the epr vector.
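The same style of check applies here; this is our own hedged sketch, assuming the definitions above are in scope:
-- Hypothetical check: controlled-not applied to qFT ⊗ qFalse.
cnotExample :: Vec (Bool,Bool)
cnotExample = (qFT <*> qFalse) >>= controlled qnot
The entries cnotExample (False,False) and cnotExample (True,True) both equal 1 / sqrt 2 while the other two entries are 0, which is exactly the epr vector defined earlier.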
Linear operations can be combined and transformed in several ways which we list below. The
function >*< produces the linear operator corresponding to the outer product of two vectors. The
functions linplus and lintens are the functions corresponding to the sum and tensor product on
vectors. Finally, the function o composes two linear operators.
adjoint :: Lin a b → Lin b a
adjoint f b a = conjugate (f a b)
(>*<) :: Basis a => Vec a → Vec a → Lin a a
(v1 >*< v2) a1 a2 = v1 a1 * conjugate (v2 a2)
linplus :: (Basis a, Basis b) => Lin a b → Lin a b → Lin a b
linplus f g a = f a ‘mplus‘ g a
lintens :: (Basis a, Basis b, Basis c, Basis d) =>
Lin a b → Lin c d → Lin (a,c) (b,d)
lintens f g (a,c) = f a <*> g c
o :: (Basis a, Basis b, Basis c) => Lin a b → Lin b c → Lin a c
o f g a = (f a >>= g)
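As a small illustration of these combinators, the following sketch is ours (assuming the definitions above): composing qnot with itself via o yields a linear operator that behaves as the identity on the basis.
-- Hypothetical example: double negation acts as the identity.
notNot :: Lin Bool Bool
notNot = qnot `o` qnot
Indeed notNot False False == 1 and notNot False True == 0, and symmetrically for the input True.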
2.3
Example: A Toffoli Circuit
[Circuit diagram omitted: a Toffoli circuit on three qubits built from the gates H, V, VT and Not, where controlled versions are indicated by bullets connecting a gate to the controlling wire.]
The circuit diagram uses the de-facto standard notation for specifying quantum computations. Each
line carries one quantum bit (qubit ); we refer to the three qubits in the circuit as top, middle, and
bottom. The values flow from left to right in steps corresponding to the alignment of the boxes which
represent quantum gates. The gates labeled H , V , VT , and Not represent the quantum operations
hadamard , phase, adjoint phase, and qnot respectively. Gates connected via a bullet to another wire
are controlled operations.
In general all three qubits in the circuit may be entangled and hence the state vector representing
them cannot be separated into individual state vectors. This means that, despite the appearance to
the contrary, it is not possible to operate on any of the lines individually. Instead the circuit defines
a linear operation on the entire state:
toffoli :: Lin (Bool,Bool,Bool) (Bool,Bool,Bool)
toffoli (top,middle,bottom) =
let cnot = controlled qnot
cphase = controlled phase
caphase = controlled (adjoint phase)
in hadamard bottom >>= λ b1 →
cphase (middle,b1) >>= λ (m1,b2) →
cnot (top,m1) >>= λ (t1,m2) →
caphase (m2,b2) >>= λ (m3,b3) →
cnot (t1,m3) >>= λ (t2,m4) →
cphase (t2,b3) >>= λ (t3,b4) →
hadamard b4 >>= λ b5 →
return (t3,m4,b5)
3
Measurement
The use of monads to structure the probability effects reveals an elegant underlying structure for
quantum computations. This structure can be studied in the context of category theory and exploited
in the design of a calculus for quantum computation [14, 15, 13, 2].
Unfortunately, in the traditional model of quantum computing we have used so far, it is difficult
or impossible to deal formally with another class of quantum effects, including measurements, decoherence, and noise. We first give one example where such effects are critical, and then discuss various
approaches in the literature for dealing with them.
3.1
Teleportation
The idea of teleportation is to disintegrate an object in one place making a perfect replica of it
somewhere else. Indeed quantum teleportation [4] enables the transmission, using a classical communication channel, of an unknown quantum state via a previously shared epr pair.
In the following diagram, Alice and Bob initially have access to one of the qubits of an entangled
epr pair, and Alice aims to teleport an unknown qubit q to Bob:
[Circuit diagram omitted: Alice and Bob share an EPR pair; on Alice's side the unknown qubit q and her half of the pair pass through a Not and an H gate and are measured, producing the classical bits m1 and m2; on Bob's side his half of the pair passes through Not and Z gates controlled by m2 and m1.]
The calculation proceeds as follows. First Alice interacts with the unknown qubit q and her half
of the epr state. Then Alice performs a measurement collapsing her quantum state and getting two
classical bits m1 and m2 that she transmits to Bob using a classical channel of communication.
Upon receiving the two classical bits of information, Bob interacts with his half of the epr state
with gates controlled by the classical bits. The circuit in the figure can be shown to re-create the
quantum state q which existed at Alice’s site before the experiment.
Our main interest in this circuit is that it is naturally expressed using a sequence of operations on
quantum values which include a non-unitary measurement in the middle. Using the model developed
in the previous section, it is not possible to describe this algorithm as stated. In the next section,
we briefly discuss several possible ways to deal with this problem.
3.2
Dealing with Measurement
The literature includes several approaches to the problem of measurement. We characterize such
approaches in three broad categories: deferring measurements, using classical control with pointers
and side-effects, and using density matrices and superoperators. We discuss the first two approaches
in the remainder of this section, and expand on the latter approach in the next section.
3.2.1
Deferring measurements:
The first approach (used for example by Mu and Bird (2001), Van Tonder (2003; 2004), and Karczmarczuk (2003)) relies on the principle of deferred measurement [8]. This principle can be used to
transform computations to always defer measurements to the end. Using this idea one can focus
entirely on managing the probability effects and perform the measurements outside the formalism.
The drawback of this approach is clear: programs that interleave quantum operations with measurements cannot be expressed naturally. For example, transforming the teleportation circuit above
to defer the measurements until after Bob’s computation completely changes the character of the
experiment, because no classical information is transmitted from Alice to Bob.
3.2.2
Classical Control and Side-effects:
In general, this category of models is based on the so-called QRAM (quantum random access machine) model of Knill (1996), which is summarized by the slogan “quantum data, classical control” [12]. In this context, a quantum computer can be seen as a classical computer with a quantum
device attached to it. The classical control sends instructions for the quantum machine to execute
unitary operations and measurements. A measurement collapses the quantum (probabilistic) computation and forces it to produce a classical (deterministic) result. In fact, the situation is even more
complicated: measuring part of a quantum state collapses not only the measured part but any other
part of the global state with which it is entangled. The most common approach to computationally
realize this hybrid architecture is via manipulating what are effectively pointers to a global shared
quantum state as the following examples show:
• In the flowchart notation for the language introduced by Selinger (2004), the state is represented by a collection of variables that can each be assigned once. An operation can only be
applied to an initial group of the variables (and is implicitly composed with the identity on
the remaining variables). If the variables are not in the desired order, they must be permuted
first. Thus the first few steps of the toffoli circuit are:
input q1,q2,q3 : qubit
q1, q2, q3 : qubit
permute φ 1
q3, q2, q1: qubit
q3, q2, q1 *= H x Id
q3, q2, q1: qubit
permute φ 2
q2, q3, q1 : qubit
q2, q3, q1 *= cV x Id
..
.
• In the procedural language QCL [10] a quantum register is realized using pointers to the
complete state. Operations on a register map to operations on the state as follows. If we have
an m-qubit register r which points to an n-qubit state, then an operation U on the register is
realized using:
U (r) = Π†r (U × I(n − m)) Πr
The operation U is composed with the identity on the remaining number of qubits of the state.
The operator Πr is an arbitrary reordering operator and Π†r is its inverse. After re-ordering,
the lifted U composed with the identity is applied, and the result is permuted back to the
original order.
• Jan Skibiński (2001) produced an early Haskell simulator of a quantum computer. The simulator maintains quantum registers and allows operations to act on specific qubits using what are
essentially pointers. To apply an operation to the third, fifth, and seventh qubits on a quantum
register, some low-level calculations depending on the indices and size of the register are used
to produce a lifted operation composed with several identity operations that acts on the entire
register.
• Valiron et al. (2004) develop a functional quantum programming language based on the
original work of Selinger (2004). The representation of quantum data in their calculus uses an
external n-qubit state Q. Programs may contain free variables which are essentially pointers
to the quantum state.
• In our previous work [11] we introduced virtual values to hide the management of pointers to
the global state. Using virtual values the code for the toffoli example is essentially identical to
the one presented earlier, except for the need to manually generate the adaptors which mediate
between the virtual value and the global state.
The use of pointers and sharing to model the side-effect of measurement is in some sense adequate. However, by doing so, we completely lose the monadic structure and the direct connections
to categorical semantics.
4
Density Matrices and Superoperators
Fortunately the usual model of quantum computing can be generalized to solve the problem of
modeling measurements in a better way. In the generalized model, the state of the computation
is represented using a density matrix and the operations are represented using superoperators [1].
Using these notions, the projections necessary to express measurements become expressible within
the model. We review this model in this section.
4.1
Density Matrices
Intuitively, density matrices can be understood as a statistical perspective on the state vector. In the
density matrix formalism, a quantum state that was previously modeled by a vector v is now modeled
by its outer product.
type Dens a = Vec (a,a)
pureD :: Basis a => Vec a → Dens a
pureD v = lin2vec (v >*< v)
lin2vec :: (a → Vec b) → Vec (a,b)
lin2vec = uncurry
The function pureD embeds a state vector in its density matrix representation. For convenience,
we uncurry the arguments to the density matrix so that it looks more like a “matrix.” For example,
the density matrices corresponding to the vectors qFalse, qTrue, and qFT can be visually represented
as follows:
qFalse:  1 0      qTrue:  0 0      qFT:  1/2 1/2
         0 0              0 1            1/2 1/2
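To inspect such density matrices concretely, one can use a small helper of our own (hypothetical, not part of the paper's code; it assumes the definitions above are in scope):
-- Hypothetical helper: enumerate the entries of a density matrix over Bool.
densEntries :: Dens Bool → [((Bool, Bool), PA)]
densEntries d = [ ((a, b), d (a, b)) | a ← basis, b ← basis ]
For instance, densEntries (pureD qFT) lists four entries, each equal to 1/2, matching the third matrix above.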
The appeal of density matrices is that they can represent states other than the pure ones above.
In particular if we perform a measurement on the state represented by qFT , we should get False
with probability 1/2 or True with probability 1/2. This information which cannot be expressed
using vectors, can be represented by the following density matrix:
1/2 0      0  0       1/2  0
 0  0  +   0 1/2   =   0  1/2
Such a density matrix represents a mixed state which corresponds to the sum (and then normalization) of the density matrices for the two results of the observation. If we further calculate with
the result of measuring qFT by, for example, applying the hadamard operation, we get one of the
two vectors qFT or qFmT , each with probability 1/2. Because all operations on vectors are linear,
we can express this step as follows:
    1/2  0         1/2 0        0  0        1/2  0
H    0  1/2   =  H  0  0   +  H 0 1/2   =    0  1/2
As the calculation shows, the application of hadamard has no effect on the density matrix, and indeed
there is no observable difference between the two configurations before and after the application of
hadamard . The density matrix representation loses exactly the information in the state vectors that
is not observable [12] and hence is a better representation from a semantic perspective.
4.2
Superoperators
Operations mapping density matrices to density matrices are called superoperators:
type Super a b = (a,a) → Dens b
lin2super :: (Basis a, Basis b) => Lin a b → Super a b
lin2super f (a1,a2) = (f a1) <*> (dual (adjoint f) a2)
where dual f a b = f b a
The function lin2super constructs a superoperator from a linear operator on vectors. To understand
the basic idea, consider the density matrix resulting from the application of f to |v⟩. This corresponds
to the outer product of the vector f |v⟩ with itself, which applies f to |v⟩ and the adjoint of f to
the “dual vector.”
4.3
Tracing and Measurement
In contrast to the situation with the traditional model of quantum computing, it is possible to
define a superoperator which “forgets”, projects, or traces out part of a quantum state as well as a
superoperator which measures part of a quantum state:
trL :: (Basis a, Basis b) => Super (a,b) b
trL ((a1,b1),(a2,b2)) = if a1 ≡ a2 then return (b1,b2) else mzero
meas :: Basis a => Super a (a,a)
meas (a1,a2) = if a1 ≡ a2 then return ((a1,a1),(a1,a1)) else mzero
For example, the sequence:
pureD qFT >>= meas >>= trL
first performs a measurement on the pure density matrix representing the vector qFT . This measurement produces a vector with two components: the first is the resulting collapsed quantum state
and the second is the classical observed value. The last operation forgets about the collapsed quantum state and returns the result of the classical measurement. As explained earlier the resulting
density matrix is:
1/2  0
 0  1/2
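The same result can be reproduced by evaluating the pipeline directly; the following is a hedged sketch of ours, assuming the definitions of this and the previous sections are in scope:
-- Hypothetical check: measure qFT and keep only the classical outcome.
measured :: Dens Bool
measured = pureD qFT >>= meas >>= trL
Here measured (False,False) and measured (True,True) both equal 1/2, and the two off-diagonal entries are 0, as displayed above.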
5
Superoperators as Arrows
By moving to density matrices and superoperators, it becomes possible to express both the original
computations as well as measurements in the same formalism. One might hope that the original
monadic structure of quantum computations is preserved, but it appears that this is not the case.
The best we can do is to prove that the new model of computation fits within a generalization of
monads called arrows.
5.1
Arrows
The application of a superoperator to a density matrix can still be achieved with the monadic bind
operation, instantiated to the following type:
>>= :: Dens a → ((a,a) → Dens b) → Dens b
This type, however, does not correspond to the required type, as computations now consume
multiple input values. This observation is reminiscent of Hughes’s motivation for generalizing monads
to arrows [7]. Indeed, in addition to defining a notion of procedure which may perform computational
effects, arrows may have a static component independent of the input, or may accept more than one
input.
In Haskell, the arrow interface is defined using the following class declaration:
class Arrow a where
arr :: (b → c) → a b c
(>>>) :: a b c → a c d → a b d
first :: a b c → a (b,d) (c,d)
In other words, to be an arrow, a type a must support the three operations arr, ≫, and first with
the given types. The operations must satisfy the following equations:
arr id ≫ f = f
f ≫ arr id = f
(f ≫ g) ≫ h = f ≫ (g ≫ h)
arr (g . f ) = arr f ≫ arr g
first (arr f ) = arr (f × id)
first (f ≫ g) = first f ≫ first g
first f ≫ arr (id × g) = arr (id × g) ≫ first f
first f ≫ arr fst = arr fst ≫ f
first (first f ) ≫ arr assoc = arr assoc ≫ first f
where the functions × and assoc are defined as follows:
(f × g) (a, b) = (f a, g b)
assoc ((a, b), c) = (a, (b, c))
Graphically the functions associated with the arrow type are the following:
[Diagrams omitted: (a) arr f drawn as a box f taking a wire b to a wire c; (b) f ≫ g drawn as the boxes f and g joined in sequence, taking b through c to d; (c) first f drawn as the box f acting on the wire b (producing c) while a second wire d passes through unchanged.]
The function arr allows us to introduce “pure” arrows which are simple functions from their
inputs to their outputs. The function ≫ is similar to ≫=: it composes two computations. The
function first is the critical one for our purposes: it allows us to apply an arrow to a component of
the global quantum state. The equations above ensure that these operations are always well-defined
even with arbitrary permutations and change of associativity.
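For intuition it may help to note that, with the class exactly as declared above, ordinary functions already form an arrow; this standard instance (our own remark, not specific to quantum computing) is the degenerate case with no computational effects:
-- Ordinary functions as arrows: arr is the identity embedding, >>> is
-- left-to-right composition, and first acts on the first component of a pair.
instance Arrow (→) where
  arr f          = f
  f >>> g        = g . f
  first f (b, d) = (f b, d)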
5.2
Superoperators are Arrows (with Eq constraint)
Just as the probability effect associated with vectors is not strictly a monad because of the Basis
constraint, the type Super is not strictly an arrow as the following types include the additional
constraint requiring the elements to be comparable:
arr :: (Basis b, Basis c) => (b → c) → Super b c
arr f = fun2lin (λ (b1,b2) → (f b1, f b2))
(>>>) :: (Basis b, Basis c, Basis d) =>
Super b c → Super c d → Super b d
(>>>) = o
first :: (Basis b, Basis c, Basis d) => Super b c → Super (b,d) (c,d)
first f ((b1,d1),(b2,d2)) = permute ((f (b1,b2)) <*> (return (d1,d2)))
where permute v ((b1,b2),(d1,d2)) = v ((b1,d1),(b2,d2))
The function arr constructs a superoperator from a pure function by applying the function to both
the vector and its dual. The composition of arrows is simply the composition of linear operators.
The function first applies the superoperator f to the first component (and its dual) and leaves the
second component unchanged. The definition calculates each part separately and then permutes the
results to match the required type.
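As a small usage sketch (ours, assuming the definitions above are in scope), the pure arrow built from boolean negation behaves as expected on a pure density matrix:
-- Hypothetical check: arr applied to negation (written ¬ in the listings
-- above, i.e. Haskell's not) maps the density matrix of qTrue to that of qFalse.
flipD :: Dens Bool
flipD = pureD qTrue >>= arr not
The entry flipD (False,False) equals 1 and every other entry is 0, i.e. flipD coincides with pureD qFalse.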
Proposition 5.1 Superoperators satisfy the required equations for arrows.
Proof. See Appendix C.
✷
The proposition implies that we can use the arrow combinators to structure our computations.
For instance, the first few steps of the Toffoli circuit of Section 2.3 would now look like:
toffoli :: Super (Bool,Bool,Bool) (Bool,Bool,Bool)
toffoli =
let hadS = lin2super hadamard
cphaseS = lin2super (controlled phase)
cnotS = lin2super (controlled qnot)
in arr (λ (a0, b0, c0) → (c0, (a0, b0))) >>>
(first hadS >>> arr (λ (c1, (a0, b0)) → ((b0, c1), a0))) >>>
(first cphaseS >>> arr (λ ((b1, c2), a0) → ((a0, b1), c2))) >>>
(first cnotS >>> arr (λ ((a1, b2), c2) → ((b2, c2), a1))) >>> ...
Clearly this notation is awkward as it forces us to explicitly manipulate the entire state and to
manually permute the values. However, all the tedious code can be generated automatically as we
explain next.
5.3
A Better Notation for Arrows
Following Haskell’s monadic do-notation, Paterson (2001) presented an extension to Haskell with
an improved syntax for writing computations using arrows. We concentrate only on the explanation
of new forms which we use in our examples. Here is a simple example to illustrate the notation:
e1 :: Super (Bool,a) (Bool,a)
e1 = proc (a,b) → do
r ← lin2super hadamard ≺ a
returnA ≺ (r,b)
The do-notation simply sequences the actions in its body. The function returnA is the equivalent
for arrows of the monadic function return. The two additional keywords are:
• the arrow abstraction proc which constructs an arrow instead of a regular function.
• the arrow application ≺ which feeds the value of an expression into an arrow.
Paterson (2001) shows that the above notation is general enough to express arrow computations
and implemented a preprocessor which translates the new syntax to regular Haskell. In the case of
e1 above, the translation to Haskell produces the following code:
e2 :: Super (Bool,a) (Bool,a)
e2 = first (lin2super hadamard)
As the example shows, the output of the preprocessor is quite optimized.
5.4
Superoperators are (probably) not monads
Arrows are more general than monads. In particular, they include notions of computation that
consume multiple inputs as well as computations with static components, independent of the input.
Due to this general aspect of arrows, there are some subclasses of them which turn out to be
equivalent to monads. More precisely, arrow types which support the following app function are just
as expressive as monads.
class Arrow a => ArrowApply a where
app :: a (a b c, b) c
In other words, for superoperators to be monads, we would have to define a superoperator of type:
Super (Super b c, b) c
which in our case would require Super b c to be an instance of Basis. Unfortunately there is no
straightforward way to view the space of superoperators as a finite set of observables.
6
Examples Revisited: Toffoli and Teleportation
Using arrows and the notation introduced by Paterson, we can express both of our examples elegantly.
6.1
Toffoli
The code mirrors the structure of the circuit and the structure of the monadic computation expressed
earlier:
toffoli :: Super (Bool,Bool,Bool) (Bool,Bool,Bool)
toffoli = let hadS = lin2super hadamard
cnotS = lin2super (controlled qnot)
cphaseS = lin2super (controlled phase)
caphaseS = lin2super (controlled (adjoint phase))
in proc (a0,b0,c0) → do
c1 ← hadS ≺ c0
(b1,c2) ← cphaseS ≺ (b0,c1)
(a1,b2) ← cnotS ≺ (a0,b1)
(b3,c3) ← caphaseS ≺ (b2,c2)
(a2,b4) ← cnotS ≺ (a1,b3)
(a3,c4) ← cphaseS ≺ (a2,c3)
c5 ← hadS ≺ c4
returnA ≺ (a3,b4,c5)
6.2
Teleportation
We use the machinery we have developed to faithfully express the circuit presented in Section 3.1.
We break the algorithm into two individual procedures, alice and bob. Besides the use of the arrows
notation to express the action of superoperators on specific qubits, we incorporate the measurement
in Alice’s procedure, and trace out the irrelevant qubits from the answer returned by Bob.
alice :: Super (Bool,Bool) (Bool,Bool)
alice = proc (eprL,q) → do
(q1,e1) ← (lin2super (controlled qnot)) ≺ (q,eprL)
q2 ← (lin2super hadamard) ≺ q1
((q3,e2),(m1,m2)) ← meas ≺ (q2,e1)
(m1’,m2’) ← trL ≺ ((q3,e2),(m1,m2))
returnA ≺ (m1’,m2’)
bob :: Super (Bool,Bool,Bool) Bool
bob = proc (eprR,m1,m2) → do
(m2’,e1) ← (lin2super (controlled qnot)) ≺ (m2,eprR)
(m1’,e2) ← (lin2super (controlled z)) ≺ (m1,e1)
q’ ← trL ≺ ((m1’,m2’),e2)
returnA ≺ q’
teleport :: Super (Bool,Bool,Bool) Bool
teleport = proc (eprL,eprR,q) → do
(m1,m2) ← alice ≺ (eprL,q)
q’ ← bob ≺ (eprR,m1,m2)
returnA ≺ q’
7
Linear Typing: QML
The category of superoperators is considered to be an adequate model of non-reversible quantum
computation [12]. Our construction presented so far seems to suggest that this category corresponds
to a functional language with arrows, and hence that we can accurately express quantum computation
in such a framework. But as we explain below, this is not quite the whole story.
First consider the well-known “non-cloning” property of quantum states [8]. The arrow notation
allows us to reuse variables more than once, and we are free to define the following operator:
copy :: Super Bool (Bool, Bool)
copy = arr (λ x → (x,x))
But can this superoperator be used to clone a qubit? The answer, as explained in Section 1.3.5 of
the classic book on quantum computing [8], is no. The superoperator copy can be used to copy
classical information encoded in quantum data, but when applied to an arbitrary quantum state
such as qFT , the superoperator does not make two copies of the state qFT but rather it
produces the epr state, which is the correct and desired behavior. Thus, in this aspect the semantics
of arrows is coherent with quantum computation, i.e., the use of variables more than once models
sharing, not cloning.
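This behaviour can be observed directly; the following is our own hedged check, assuming the definitions above are in scope:
-- Hypothetical check: "copying" the superposition qFT yields the epr density
-- matrix, not two independent copies of qFT.
copied :: Dens (Bool, Bool)
copied = pureD qFT >>= copy
For example, copied ((False,False),(True,True)) equals 1/2, exactly as for pureD epr, whereas the separable state pureD (qFT <*> qFT) has value 1/4 at that entry.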
In contrast, in our model there is nothing to prevent the definition of:
weaken :: Super (Bool,Bool) Bool
weaken = arr (λ (x,y) → y)
This operator is however not physically realizable. Applying weaken to epr gives qFT . Physically
forgetting about x corresponds to a measurement: if we measure the left qubit of epr we should get
qFalse or qTrue or the mixed state of both measurements, but never qFT .
Therefore, our use of Haskell as a vehicle for expressing the ideas finally hits a major obstacle:
arrow computations must be required to use every value that is introduced. Instead of attempting to
continue working within Haskell, a better approach might be to now consider a functional quantum
language like QML whose type system is designed to explicitly control weakening and decoherence,
and to express the separation of values and arrow computations in that framework.
In more detail, QML [2] is a functional quantum programming language which addresses this
problem by using a type system based on strict linear logic: contraction is permitted and modelled
by copy while weakening has to be explicit and is translated by a partial trace. QML also features two
case operators: a classical case operator which measures a qubit and returns the appropriate branch
and a quantum case operator which avoids measurement but requires that the branches return results
in orthogonal subspaces.
QML programs can be compiled to quantum circuits, using the category of finite quantum
computation FQC — Grattage’s QML compiler [6] is based on this semantics. An irreversible
computation can be modelled by a reversible circuit, allowing additional heap qubits, which are
initialized to predefined values at the beginning of the computation, and by disposing of, i.e. measuring,
qubits at the end of the computation. To any FQC morphism we can assign a superoperator and
indeed every superoperator can be represented this way.
Alternatively, we can interpret QML programs directly as superoperators, giving rise to a constructive denotational semantics exploiting the library of arrow combinators developed here. We
hope to exploit this semantics to further analyze QML and to develop high level reasoning principles
for QML programs.
8
Conclusion
We have argued that a realistic model for quantum computations should accommodate both unitary
operations and measurements, and we have shown that such general quantum computations can
be modeled using arrows. This is an extension of the previously known observation that one can
model pure quantum probabilities using monads. Establishing such connections between quantum
computations and monads and arrows enables elegant embeddings in current classical languages,
and exposes connections to well-understood concepts from the semantics of (classical) programming
languages. We have demonstrated the use of arrows to model elegantly two examples in Haskell,
including the teleportation experiment which interleaves measurements with unitary operations.
Acknowledgments
We would like to thank Antônio Carlos da Rocha Costa and Jonathan Grattage for extensive discussions and feedback.
A
A Haskell Primer
We use Haskell as a precise mathematical (and executable) notation.
It is useful to think of a Haskell type as representing a mathematical set. Haskell includes several
built-in types that we use: the type Bool whose only two elements are False and True; the type
Complex Double whose elements are complex numbers written a:+b where both a and b are elements
of the type Double which approximates the real numbers. Given two types a and b, the type (a, b)
is the type of ordered pairs whose elements are of the respective types; the type a → b is the type
of functions mapping elements of a to elements of b; and the type [a] is the type of sequences (lists)
whose elements are of type a. For convenience, we often use the keyword type to introduce a new
type abbreviation. For example:
type PA = Complex Double
introduces the new type PA as an abbreviation of the more verbose Complex Double. A family of
types that supports related operations can be grouped in a Haskell class. Individual types can then
be made an instance of the class, and arbitrary code can require that a certain type be a member
of a given class.
The syntax of Haskell expressions is usually self-explanatory except perhaps for the following
points. A function can be written in at least two ways. Both the following definitions define a
function which squares its argument:
sq n = n * n
sq’ = λ n → n * n
A function f can be applied to every element of a list using map or using list comprehensions. If xs
is the list [1, 2, 3, 4], then both the following:
map sq xs
[ sq x | x ← xs ]
evaluate to [1, 4, 9, 16].
Usually, a function f is applied to an argument a, by writing f a. If the function expects two
arguments, it can either be applied to both at once f (a, b) or one at a time f a b depending on its
type. When convenient the function symbol can be placed between the arguments using back quotes
a ‘f ‘ b.
B
Proof of Monad Laws for Vectors.
Proof of Proposition 2.1: The definitions of return and ≫= satisfy the three monad laws:
• First monad law: (return x ) ≫= f = f x
(return x ) ≫= f = λ b . sum [ return x a ∗ f a b | a ← basis]
                 = λ b . sum [ (if x == a then 1 else 0) ∗ f a b | a ← basis]
                 = λ b . f x b
                 = f x
• Second monad law: m ≫= return = m
m ≫= return = λ b . sum [ m a ∗ return a b | a ← basis]
            = λ b . sum [ m a ∗ (if a == b then 1 else 0) | a ← basis]
            = λ b . m b
            = m
• Third monad law: (m ≫= f ) ≫= g = m ≫= (λ x . f x ≫= g)
(m ≫= f ) ≫= g = (λ b . sum [ m a ∗ f a b | a ← basis]) ≫= g
               = λ c . sum [ (sum [ m a ∗ f a b | a ← basis]) ∗ g b c | b ← basis]
               = λ c . sum [ m a ∗ f a b ∗ g b c | a ← basis, b ← basis]

m ≫= (λ x . f x ≫= g) = λ c . sum [ m a ∗ (f a ≫= g) c | a ← basis]
                       = λ c . sum [ m a ∗ (sum [ f a b ∗ g b c | b ← basis]) | a ← basis]
                       = λ c . sum [ m a ∗ f a b ∗ g b c | a ← basis, b ← basis]
C
Proof of Arrow Laws for Superoperators
Proof of Proposition 5.1:
• First arrow equation: arr id ≫ f = f .
arr id ≫ f = fun2lin (λ (a1, a2) . (id a1, id a2)) ‘o‘ f    (by arr and ≫)
           = fun2lin id ‘o‘ f                               (by simplification)
           = return ‘o‘ f                                   (by fun2lin)
           = λ a . return a ≫= f                            (by ‘o‘)
           = λ a . f a                                      (by monad law 1.)
           = f
• Second arrow equation: f ≫ arr id = f .
f ≫ arr id = f ‘o‘ fun2lin (λ (b1, b2) . (id b1, id b2)) (by arr and ≫)
= f ‘o‘ fun2lin id
(by simplification)
= f ‘o‘ return
(by fun2lin)
= λ a . f a ≫= return
(by o)
= λa .f a
(by monad law 2.)
= f
• Third arrow equation: (f ≫ g) ≫ h = f ≫ (g ≫ h).
(f ≫ g) ≫ h = (f ‘o‘ g) ‘o‘ h
(by ≫)
= λ b . (λ a . f a ≫= g) b ≫= h (by o)
= λ b . (f b ≫= g) ≫= h
(by β)
f ≫ (g ≫ h ) = f ‘o‘ (g ‘o‘ h)
(by ≫=)
= λ a . f a ≫= (λ b . g b ≫= h) (by o)
= λ a . (f a ≫= g) ≫= h
(by monad law 3.)
• Fourth arrow equation: arr (g . f ) = arr f ≫ arr g.
arr (g . f ) = fun2lin (λ (b1, b2) .((g . f ) b1, (g . f ) b2)) (by arr )
= return .(λ (b1, b2) . ((g . f ) b1, (g . f ) b2)) (by fun2lin)
= λ (b1, b2) . return ((g . f ) b1, (g . f ) b2) (simplification)
arr f ≫ arr g = fun2lin (λ (b1, b2) . (f b1, f b2)) ‘o‘ fun2lin(λ (b1, b2) . (g b1, g b2))
(by ≫= and arr )
= return . (λ (b1, b2) . (f b1, f b2)) ‘o‘ return .(λ (b1, b2) . (g b1, g b2))
(by fun2lin)
= λ (b1, b2) . return (f b1, f b2) ≫= λ (b1, b2) . return (g b1, g b2))
(by o)
= λ (b1, b2) . (λ (b1, b2) . return (g b1, g b2)) (f b1, f b2)
(by monad law 1.)
= λ (b1, b2) . return ((g . f ) b1, (g . f ) b2) (by β)
• Fifth arrow equation: first (arr f ) = arr (f × id ).
first (arr f ) = first (fun2lin (λ (b1, b2) . (f b1, f b2)))                        (by arr )
               = first (return . (λ (b1, b2) . (f b1, f b2)))                       (by fun2lin)
               = first (λ (b1, b2) . return (f b1, f b2))                           (by simplification)
               = λ ((b1, d 1), (b2, d 2)) . λ ((x , y), (w , z )) . return (f b1, f b2) (x , w ) ∗
                 return (d 1, d 2) (y, z )                                          (by first )
               = λ ((b1, d 1), (b2, d 2)) . λ ((x , y), (w , z )) .
                 if ((f b1, f b2), (d 1, d 2)) == ((x , w ), (y, z )) then 1 else 0 (by return)

arr (f × id ) = fun2lin (λ ((b1, d 1), (b2, d 2)) . ((f b1, d 1), (f b2, d 2)))     (by arr )
              = return . (λ ((b1, d 1), (b2, d 2)) . ((f b1, d 1), (f b2, d 2)))    (by fun2lin)
              = λ ((b1, d 1), (b2, d 2)) . return ((f b1, d 1), (f b2, d 2))
              = λ ((b1, d 1), (b2, d 2)) . λ ((x , y), (w , z )) .
                if ((f b1, d 1), (f b2, d 2)) == ((x , y), (w , z )) then 1 else 0  (by return)
• Sixth arrow equation: first (f ≫ g) = first f ≫ first g. In the following proofs assume:
ad 1 ((b1, d 1), (b2, d 2)) = (b1, b2) and ad 2 ((b1, d 1), (b2, d 2)) = (d 1, d 2) .
first (f ‘o‘ g) = first (λ a . f a ≫= g)
(by ‘o‘)
= λ b . λ ((x , y), (w , z )) . (f (ad 1 b) ≫= g) (x , w ) ∗ return (ad 2 b) (y, z )
(by first )
= λ b . λ ((x , y), (w , z )) . (λ c . sum [(f (ad 1 b)) a ∗ g a c | a ← basis]) (x , w )
∗ return (ad 2 b) (y, z ) (by ≫=)
= λ b . λ ((x , y), (w , z )) . sum[(f (ad 1 b)) a ∗ g a (x , w ) | a ← basis] ∗
return (ad 2 b) (y, z ) (by β)
first f ‘o‘ first g = λ a . first f a ≫= λ b . first g b
(by ‘o‘)
= λ a . λ ((x , y), (w , z )) . f (ad 1 a) (x , w ) ∗ return (ad 2 a) (y, z ) ≫=
λ b . λ ((x , y), (w , z )) . g (ad 1 b) (x , w ) ∗ return (ad 2 b) (y, z )
(by first )
= λ a . λ ((x , y), (w , z )) . sum [ f (ad 1 a) (m, o) ∗ return (ad 2 a) (n, p) ∗
(λ ((x , y), (w , z )) . g (m, o) (x , w ) ∗ return (n, p) (y, z ))((x , y), (w , z )) |
((m, n), (o, p)) ← basis] (by ≫=)
= λ a . λ ((x , y), (w , z )) . sum [ f (ad 1 a) (m, o) ∗ return (ad 2 a) (n, p) ∗
g (m, o) (x , w ) ∗ return (n, p) (y, z ) |((m, n), (o, p)) ← basis]
= λ a . λ ((x , y), (w , z )) . sum [ f (ad 1 a) a1 ∗ g a1 (x , w ) ∗
return (ad 2 a) a2 ∗ return a2 (y, z ) | a1 ← basis , a2 ← basis]
(by simplification)
= λ a . λ ((x , y), (w , z )) . sum [ f (ad 1 a) a1 ∗ g a1 (x , w ) | a1 ← basis ]
∗ return (ad 2 a) (y, z ) (by simplification)
• Seventh arrow equation: first f ≫ arr (id × g) = arr (id × g) ≫ first f .
lhs = first f ‘o‘ arr (id × g)
lhs = λ ((a1, b1), (a2, b2)) . first f ((a1, b1), (a2, b2)) ≫=
fun2lin (λ ((a, b), (c, d )) . ((a, g b), (c, g d ))) (by ‘o‘ and arr )
= λ ((a1, b1), (a2, b2)) . first f ((a1, b1), (a2, b2)) ≫=
λ ((a, b), (c, d )) . return ((a, g b), (c, g d )) (by fun2lin)
= λ ((a1, b1), (a2, b2)) . λ ((x , y), (w , z )) . f (a1, a2) (x , w ) ∗ return (b1, b2) (y, z ) ≫=
λ ((a, b), (c, d )) . return ((a, g b), (c, g d ))
(by first )
= λ ((a1, b1), (a2, b2)) . λ c . sum [ f (a1, a2) (m, o) ∗ return (b1, b2) (n, p) ∗
return ((m, g n), (o, g p)) c | ((m, n), (o, p)) ← basis] (by ≫=)
= λ ((a1, b1), (a2, b2)) . λ ((x , y), (w , z )) . sum [ f (a1, a2) (m, o) ∗ return (b1, b2) (n, p) ∗
return ((m, g n), (o, g p)) ((x , y), (w , z ))| ((m, n), (o, p)) ← basis]
(by simplification)
= λ ((a1, b1), (a2, b2)) . λ ((x , y), (w , z )) . sum [ f (a1, a2) (m, o) ∗
[if (b1, b2) == (n, p) then 1 else 0] ∗
[(if (m, g n), (o, g p)) == ((x , y), (w , z )) then 1 else 0] | ((m, n), (o, p)) ← basis]
(by return)
= λ ((a1, b1), (a2, b2)) .λ ((x , y), (w , z )) . if (g b1, g b2) == (y, z )
then f (a1, a2) (x , w ) else 0
rhs = arr (id × g) ‘o‘ first f
rhs = λ ((a1, b1), (a2, b2)) . fun2lin (λ ((a, b), (c, d )) . ((a, g b), (c, g d )))
((a1, b1), (a2, b2)) ≫= first f (by ‘o‘ and arr )
= λ ((a1, b1), (a2, b2)) . return ((a1, g b1), (a2, g b2)) ≫= first f
(by fun2lin)
= λ ((a1, b1), (a2, b2)) . first f ((a1, g b1), (a2, g b2)) (by monad law 1.)
= λ ((a1, b1), (a2, b2)) .λ ((x , y), (w , z )) . f (a1, a2) (x , w ) ∗ return (g b1, g b2) (y, z )
(by first )
= λ ((a1, b1), (a2, b2)) .λ ((x , y), (w , z )) . f (a1, a2) (x , w ) ∗
[if (g b1, g b2) == (y, z ) then 1 else 0] (by return)
• Eighth arrow equation: first f ≫ arr fst = arr fst ≫ f .
lhs = first f ‘o‘ arr (λ (a, b) . a)
lhs = λ ((a1, b1), (a2, b2)) . first f ((a1, b1), (a2, b2)) ≫= arr λ (a, b) . a (by o)
= λ ((a1, b1), (a2, b2)) . first f ((a1, b1), (a2, b2)) ≫= λ ((a, b), (c, d )) . return (a, c)
(by arr )
= λ ((a1, b1), (a2, b2)) . λ ((x , y), (w , z )) . f (a1, a2) (x , w ) ∗
return (b1, b2) (y, z ) ≫= λ ((a, b), (c, d )) . return (a, c) (by first )
= λ ((a1, b1), (a2, b2)) . λ (c1, c2) . sum [ f (a1, a2) (m, o) ∗ return (b1, b2) (n, p)∗
return (m, o) (c1, c2) | ((m, n), (o, p)) ← basis] (by ≫=)
= λ ((a1, b1), (a2, b2)) . λ (c1, c2) . sum [ f (a1, a2) (m, o) ∗
[if (b1, b2) == (n, p) then 1 else 0] ∗
[if (m, o) == (c1, c2) then 1 else 0] | ((m, n), (o, p)) ← basis] (by return)
= λ ((a1, b1), (a2, b2)) . λ (c1, c2) . f (a1, a2) (c1, c2) (by simplification)
rhs = arr fst ‘o‘ f
rhs = λ ((a, b), (c, d )) . return (a, c) ‘o‘ f (by arr )
= λ ((a1, b1), (a2, b2)) . (λ ((a, b), (c, d )) . return (a, c)) ((a1, b1), (a2, b2)) ≫= f
(by o)
= λ ((a1, b1), (a2, b2)) . f (a1, a2) (by monad law 1.)
= λ ((a1, b1), (a2, b2)) . λ (c1, c2) . f (a1, a2) (c1, c2)
• Ninth arrow equation: first (first f ) ≫ arr assoc = arr assoc ≫ first f
lhs = λ (((a1, b1), c1), ((a2, b2), c2)) . first (first f ) (((a1, b1), c1), ((a2, b2), c2)) ≫=
      arr (λ ((a, b), c) . (a, (b, c)))
lhs = λ (((a1, b1), c1), ((a2, b2), c2)) . first (λ b . λ ((x , y), (w , z )) .f (ad 1 b) (x , w ) ∗
return (ad 2 b) (y, z )) (((a1, b1), c1), ((a2, b2), c2)) ≫=
λ (((a1, b1), c1), ((a2, b2), c2)) . return ((a1, (b1, c1)), (a2, (b2, c2)))
(by first )
= λ (((a1, b1), c1), ((a2, b2), c2)) . λ ((m1, n1), p1) ((m2, n2), p2) .
(λ b . λ ((x , y), (w , z )) . f (ad 1 b) (x , w ) ∗ return (ad 2 b) (y, z )) ((a1, b1), (a2, b2))
((m1, n1), (m2, n2)) ∗ return (c1, c2) (p1, p2) ≫= λ (((a1, b1), c1), ((a2, b2), c2)) .
return ((a1, (b1, c1)), (a2, (b2, c2)))
(by first )
= λ (((a1, b1), c1), ((a2, b2), c2)) . λ ((m1, n1), p1) ((m2, n2), p2) .
f (a1, a2) (m1, m2) ∗ return (b1, b2) (n1, n2) ∗ return (c1, c2) (p1, p2) ≫=
λ (((a1, b1), c1), ((a2, b2), c2)) . return ((a1, (b1, c1)), (a2, (b2, c2)))
(by β)
= λ (((a1, b1), c1), ((a2, b2), c2)) . λ ((x 1, (y1, z 1)), (x 2, (y2, z 2))) .
sum [ f (a1, a2) (m1, m2) ∗ return (b1, b2) (n1, n2) ∗ return (c1, c2) (p1, p2) ∗
return ((m1, n1), p1) ((m2, n2), p2) ((x 1, (y1, z 1)), (x 2, (y2, z 2))) |
((m1, n1), p1) ((m2, n2), p2) ← basis]
(by ≫=)
= λ (((a1, b1), c1), ((a2, b2), c2)) . λ ((x 1, (y1, z 1)), (x 2, (y2, z 2))) .
sum [ f (a1, a2) (m1, m2) ∗ [if (b1, b2) == (n1, n2) then 1 else 0] ∗
[if (c1, c2) == (p1, p2)then 1 else 0] ∗
[if ((m1, n1), p1) ((m2, n2), p2) == ((x 1, (y1, z 1)), (x 2, (y2, z 2))) then 1 else 0]|
((m1, n1), p1) ((m2, n2), p2) ← basis]
(by return)
= λ (((a1, b1), c1), ((a2, b2), c2)) . λ ((x 1, (y1, z 1)), (x 2, (y2, z 2))) . f (a1, a2) (x 1, x 2) ∗
return ((b1, c1), (b2, c2)) ((y1, z 1), (y2, z 2))
rhs = λ (((a1, b1), c1), ((a2, b2), c2)) . return ((a1, (b1, c1)), (a2, (b2, c2))) ‘o‘ first f
rhs = λ (((a1, b1), c1), ((a2, b2), c2)) . return ((a1, (b1, c1)), (a2, (b2, c2))) ≫= first f
(by o)
= λ (((a1, b1), c1), ((a2, b2), c2)) . first f ((a1, (b1, c1)), (a2, (b2, c2)))
(by monad law 1.)
= λ (((a1, b1), c1), ((a2, b2), c2)) . λ ((x 1, (y1, z 1)), (x 2, (y2, z 2))) .
f (a1, a2) (x 1, x 2) ∗ return ((b1, c1), (b2, c2)) ((y1, z 1), (y2, z 2)) (by first )
References
[1] Dorit Aharonov, Alexei Kitaev, and Noam Nisan. Quantum circuits with mixed states. In
Proceedings of the thirtieth annual ACM symposium on Theory of computing, pages 20–30.
ACM Press, 1998.
[2] Thorsten Altenkirch and Jonathan Grattage. A functional quantum programming language.
quant-ph/0409065, November 2004.
[3] Thorsten Altenkirch and Bernhard Reus. Monadic presentations of lambda terms using generalized inductive types. In Computer Science Logic, 1999.
[4] C Bennett, G Brassard, C Crepeau, R Jozsa, A Peres, and W Wootters. Teleporting an unknown
quantum state via dual classical and EPR channels. Phys Rev Lett, pages 1895–1899, 1993.
[5] A. Einstein, B. Podolsky, and N. Rosen. Can quantum-mechanical description of physical reality
be considered complete? Phys. Rev., 47:777–780, 1935.
[6] Jonathan Grattage and Thorsten Altenkirch. A compiler for a functional quantum programming
language. submitted for publication, January 2005.
[7] John Hughes. Generalising monads to arrows. Science of Computer Programming, 37:67–111,
May 2000.
[8] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.
[9] E. Moggi. Computational lambda-calculus and monads. In Proceedings of the Fourth Annual
Symposium on Logic in computer science, pages 14–23. IEEE Press, 1989.
[10] Bernhard Ömer. A procedural formalism for quantum computing. Master’s thesis, Department
of Theoretical Physics, Technical University of Vienna, 1998.
[11] Amr Sabry. Modeling quantum computing in Haskell. In Proceedings of the ACM SIGPLAN
workshop on Haskell, pages 39–49. ACM Press, 2003.
[12] Peter Selinger. Towards a quantum programming language. Mathematical Structures in Computer Science, 14(4):527–586, 2004.
[13] Benoit Valiron. Quantum typing. CoRR, cs.LO/0404056, 2004.
[14] André van Tonder. Quantum computation, categorical semantics and linear logic. CoRR,
quant-ph/0312174, 2003.
[15] Andre van Tonder. A lambda calculus for quantum computation. SIAM Journal on Computing,
33(5):1109–1135, 2004.
On flat submaps of maps of non-positive curvature
arXiv:1702.08205v1 [] 27 Feb 2017
A.Yu. Olshanskii, M.V. Sapir∗
Abstract
We prove that for every r > 0 if a non-positively curved (p, q)-map M contains no flat
submaps of radius r, then the area of M does not exceed Crn for some constant C. This
strengthens a theorem of Ivanov and Schupp. We show that an infinite (p, q)-map which
tessellates the plane is quasi-isometric to the Euclidean plane if and only if the map contains
only finitely many non-flat vertices and faces. We also generalize Ivanov and Schupp’s result
to a much larger class of maps, namely to maps with angle functions.
Contents
1 Introduction
2 Large flat submaps of (p, q)-maps
  2.1 A lemma about curvatures
  2.2 Weakly exterior faces and the interior of a (p, q)-map
  2.3 Adjustment
  2.4 Connecting non-flat vertices and faces with the boundary
  2.5 Cutting the map along its connecting subgraph and the proof of Theorem 2.8
3 (p, q)-maps that are quasi-isometric to R2
  3.1 The “if” part of Theorem 3.1
  3.2 The “only if” part of Theorem 3.1
4 Maps with angle functions
1
Introduction
Recall that a map is a finite, connected and simply-connected 2-complex embedded in the
Euclidean plane. So its 1-skeleton is a finite, connected plane graph. The cells of dimensions
0, 1 and 2 are called vertices, edges and faces, respectively. Every edge e has an orientation; so
it starts at the vertex e− and ends at e+ , and (e−1 )− = e+ , (e−1 )+ = e− for the inverse edge
e−1 , which has the same support as e. The degree d(o) of a vertex o is the number of oriented
edges e with e− = o. In particular, every loop e (an edge which connects a vertex o with itself)
together with e−1 contributes 2 to the degree of o.
If a closed path q = e1 . . . ek is the boundary of a face Π, then the degree d(Π) of Π is the
length |q| = k. In particular, if both e and e−1 occur in the boundary path of a face, they
∗ Both authors were supported in part by the NSF grant DMS 1418506. The first author is also supported by
RFFI grant 15-01-05823
contribute 2 to the degree of that face. Similarly one defines the perimeter |∂M | of a map M as
the length of a closed boundary path of M .
A submap N of a map M is the subcomplex bounded by a closed curve which can be made
simple by an arbitrary small transformation. So either N is a map or it can be turned into a
map after such small transformation.
In group theory, maps appear most often as van Kampen diagrams. Many algebraic and
geometric results about groups (say, the small cancelation theory, and construction of various
groups with “extreme properties” such as Tarski monsters [7, 8]) are obtained by establishing
combinatorial properties of corresponding maps and their submaps. A typical example of such
a statement: The area (i.e., the number of faces) of every reduced van Kampen diagram over
a finite group presentation is at most linear in terms of its perimeter if and only if the group
is hyperbolic. In other words, hyperbolic groups are precisely the finitely presented groups
with linear Dehn functions. One of the recurrent features of van Kampen diagrams is the existence of
“special” submaps in every van Kampen diagram of large area. For example, in the proof of
the upper bound of the Dehn function of a group constructed in [10] using an S-machine, it is
proved that if the area of a reduced diagram is large enough, then up to a homotopy which does
not change the area very much, the area is “concentrated” in a few special subdiagrams called
“discs” (these are the subdiagrams simulating the work of the S-machine).
A remarkable result of this kind was proved by Ivanov and Schupp in [6]. Recall that an
edge e of a map M is called exterior if it belongs to a boundary path of M . A face Π of M is
called exterior if its boundary ∂Π has a common edge with ∂(M ). An exterior vertex is one of
the vertices of the boundary path. Non-exterior faces, vertices and edges are called interior.
A map M is called a (p, q)-map if every interior face Π in M has degree at least p and the
degree of every interior vertex is at least q. Note that if a group presentation P = ⟨X | R⟩ satisfies
the small cancelation condition C(p) − T (q), then every reduced van Kampen diagram over P
is a (p, q)-map if we ignore all interior vertices of degree 2 (as in [7, 6]). It is well known (see
[7]) that if 1/p + 1/q is smaller than 1/2 (i.e., the curvature of the presentation is negative), then the
group is hyperbolic and its Dehn function is linear. The case when 1/p + 1/q > 1/2 is not interesting.
Indeed, by a result of Gol’berg [4, 8], every group can be given by a presentation satisfying C(5)
and T (3) and by a presentation satisfying C(3) and T (5) and hence by a presentation satisfying
C(p) and T (q) for every p ≥ 3, q ≥ 3 with 1/p + 1/q > 1/2 (see [4, 8]). If 1/p + 1/q = 1/2 (i.e., the
“curvature” is non-positive, and so (p, q) is either (3, 6), (4, 4) or (6, 3)), then the group has at
most quadratic Dehn function [7, Theorem V.6.2].
A submap of a (p, q)-map is called flat if each of its faces is flat, i.e. has degree p, and each
interior vertex is flat, i.e. has degree q. The radius of a map is the maximal distance from a
vertex to the boundary of the map.
Ivanov and Schupp [6] proved that if a (p, q)-map M , 1/p + 1/q = 1/2, has no flat submaps of radius
r (they call flat submaps regular ), then the area of the map is linear in terms of its perimeter
with the multiplicative constant depending on r. More precisely, Ivanov and Schupp proved the
following
Theorem 1.1 (Ivanov, Schupp, [6]). Let M be a finite (p, q)-map with perimeter n such that
the maximal distance from a vertex in M to a boundary vertex or to a non-flat vertex or face is
r. Then the area of M does not exceed L(r)n, where L(r) is some exponential function in r.
Theorem 1.1 implies that if a group presentation P = ⟨X | R⟩ satisfies conditions C(p) and
T (q), 1/p + 1/q = 1/2, and the radius of every flat van Kampen diagram over P does not exceed a
certain constant, then the Dehn function of the group given by P is linear, hence the group is
hyperbolic. Using this, Ivanov and Schupp proved hyperbolicity of many 1-related groups.
In this paper, we strengthen Theorem 1.1 in two ways. First, we replace the exponential
upper bound for L(r) by a linear upper bound. Second, we extend Theorem 1.1 to a much larger
class of maps called “maps with angle functions”.
Let us call a submap of a (p, q)-map simple if it is bounded by a simple closed curve. Note that
the closure of the union of faces from a non-simple submap may not be simply connected (Figure
1) while the closure of the union of faces from a simple submap is always simply connected.
Figure 1: The thick path ABCDEF BAGHIJA is simple up to an arbitrary small deformation.
It bounds a non-simple submap.
Let p = 4, 3 or 6. Let S^p be the usual tessellation of the plane by p-gons. Then for every
n ≥ 0 the standard map S_n^p is constructed as follows. By definition S_0^p is a vertex o in T . If
the submap S_n^p of T is constructed, then S_{n+1}^p is the (closure of the) union of all faces having a
common vertex with S_n^p . Then S_n^p is a simple (p, q)-map. For example, S_n^4 is the 2n × 2n-square
tessellated by unit squares, S_n^3 is a regular hexagon with side length n tessellated by
equilateral triangles with side length 1, and S_n^6 can be viewed as the weak dual¹ to the (3, 6)-map
constructed just as S_{n+1}^3 , only starting with a triangle face instead of a vertex.
Remark 1.2. Every simple (p, q)-map, 1/p + 1/q = 1/2, of radius r, contains a simple submap M ′
which is isomorphic to S_n^p for n = O(r). The submap M ′ can be obtained in a similar manner
as S_n^p . Pick a vertex o in M at distance r from the boundary of M . This is the submap M0 . If
Mi is already constructed, then Mi+1 is obtained from Mi by adding all faces having a vertex
in common with Mi . The process continues until one of the vertices in Mi is exterior. In that
case we set M ′ = Mi . Of course it should be explained why Mi+1 is indeed a simple standard
submap provided Mi is already a simple standard submap. It is not as obvious as it seems. The
explanation uses Lemma 2.2 below; it is given in Remark 2.4.
Our main result is the following
Theorem 1.3 (See Theorem 2.1 below). If a (p, q)-map M does not contain flat simple submaps
of radius ≥ r, then the area of M is at most crn for some constant c.
¹ Recall that if M is a map, then the weak dual map M̄ is obtained by putting a vertex in every (bounded)
face, and for every edge shared by two faces of M , connect the two vertices from these faces by an edge crossing
that edge. Thus the vertices of M̄ correspond to faces of M , edges of M̄ correspond to interior edges of M , faces
of M̄ correspond to interior vertices of M .
Note that the statement of Theorem 2.1 is non-trivial even for the van Kampen diagrams
over the standard presentation of Z2 although there is a significantly easier proof in this case.
Theorem 2.1 is applicable to van Kampen diagrams over any C(p) − T (q)-presentations with
1/p + 1/q = 1/2, say, the standard presentations of 2-dimensional Right Angled Artin groups or the
fundamental groups of alternating knots.
Theorem 2.1 is proved in Section 2. The plan of the proof is the following. First for every
map M and every two (real) numbers p, q with 1/p + 1/q = 1/2 we define curvature of faces and
vertices of M as numbers proportional to the excessive degrees, and show that the sum of all
curvatures is equal to p. Then we assume that p, q are positive integers (so (p, q) is (3, 6), (4, 4)
or (6, 3)) and note that by a simple transformation of the map, we can assume that all faces
of M have degrees between p and 2p − 1, the perimeter of the map after this transformation
increases by a factor of ≤ p − 1 and the set of (simple) flat submaps does not change. The
key “contraction” Lemma 2.6 says that the perimeter of the interior M 0 which is the union of
all faces of M having no boundary vertices of M , is “substantially smaller” than the perimeter
of M . From this, we deduce, first, that the area of M is O(Rn), where R is the radius of M .
Second, we deduce that one can cut M along paths of linear in n total length so that in the
resulting map M̃ all non-flat vertices and faces are on the boundary. Then the radius R̃ of M̃
is less than r + p. Hence Area(M ) = Area(M̃ ) = O(R̃ñ) = O(rn).
In Section 3 we consider infinite maps on the plane, i.e. tessellations of R2 . An infinite map
is called proper if its support, i.e. the union of all faces, edges and vertices is the whole plane
R2 and every disc in R2 intersects only a finite number of faces, edges and vertices of the map.
Our main result is the following:
Theorem 1.4 (See Theorem 3.1). Let M be a proper (p, q)-map with 1/p + 1/q = 1/2. Then the
1-skeleton of M with its path metric is quasi-isometric to the Euclidean plane if and only if M
has only finitely many non-flat vertices and faces.
Our proof of Theorem 1.4 proceeds as follows. Suppose that a proper map M has finite
number non-flat vertices and faces. Then we modify it in a finite sequence of steps. At each
step we reduce the number of non-flat faces and vertices while keeping the map quasi-isometric
to M . As a result we get a proper map M ′ with at most one non-flat vertex and no non-flat
faces. Such a map is naturally subdivided by infinite paths emanating from the non-flat vertex
into a finite number of convex infinite submaps, each of which is quasi-isometric to a quadrant
of the Euclidean plane. Combining the corresponding quasi-isometries, we get a quasi-isometry
between M ′ and R2 . The main tool in the proof is the notion of infinite corridor, that is an
infinite sequence of faces in M where each two consecutive faces share an edge. This gives the “if”
part of the theorem.
For the “only if” part, we prove that if a proper map M has infinitely many non-flat vertices
or faces, then for every constant c it contains an infinite c-separated set S of vertices which
has a super-quadratic growth function (that is, the function that, for every n, gives the number of
vertices from S at distance ≤ n from a given vertex is super-quadratic). This cannot happen if M
was quasi-isometric to R2 . The key tool in proving this part of the theorem is the “contraction”
Lemma 2.6 from the proof of Theorem 2.1. We construct a sequence of submaps N (r) such that
the boundaries of N (r) contain large c-separated sets of vertices. In order to prove this property
of N (r) we use winding numbers of piece-wise geodesic paths in M passing through vertices of
∂(N (r)) around a vertex which is deep inside N (r).
Bruce Kleiner and Michah Sageev explained to us that the “only if” part of Theorem 1.4
can be deduced from Theorem 4.1 of their paper [1] (joint with Mladen Bestvina). If we view
every face of a proper (p, q)-map M as a regular Euclidean n-gon, then the map M turns
into a CAT(0) 2-complex M ′ which is quasi-isometric to the original map. Then Part 1 of [1,
Theorem 4.1] implies that M has a locally finite second homology class whose support S is
locally isometric to the Euclidean plane outside some finite ball. It remains to notice that the
only such homology class is (up to a scalar multiple) the fundamental class of M . Hence M
is locally flat outside a finite ball. Bruce Kleiner also explained how to deduce the “if” part
of Theorem 1.4 using Riemannian geometry. First we “smooth out” the CAT(0) 2-complex
M which is locally flat outside a finite ball to obtain (using the Cartan-Hadamard theorem) a
2-dimensional Riemannian manifold M ′ with the same property and which is quasi-isometric to
the map M . Then we use the Rauch comparison theorem to establish a bi-Lipschitz equivalence
between M ′ and the Euclidean plane.
Note that our proof of Theorem 1.4 is completely self-contained, short and uses only basic
graph theory.
In Section 4 we consider the class of maps with angle functions. Let o be a vertex on the
boundary of a face Π of a map M . A corner of Π at o is the pair of two consecutive oriented
edges e and f of ∂(Π) with e+ = f− = o. (f −1 and e−1 define the same corner.) An angle
function assigns a non-negative number (angle) to each corner of each cell. Then the curvature
of an interior vertex o is 2π minus the sum of all angles at o. The curvature of an exterior vertex
is defined in a similar way. The curvature of a face of perimeter d is the sum of angles of corners
of this face minus the sum of angles of a Euclidean d-gon (that is π(d − 2)). A map with an
angle function is called flat if all its faces and interior vertices are of curvature 0. A map with
an angle function is called a (δ, b)-map, δ > 0, b > 0 if the curvature of every non-flat vertex and
face does not exceed −δ and the degree of every vertex and face does not exceed b.
The class of (δ, b)-maps is very large. By Fáry’s theorem (see [11, 2]), every finite planar
graph M without double edges and loops can be drawn on the Euclidean plane using only
straight line segments for edges. The proof from [2] shows that if M is a plane map, then one
can assume that the graph with straight edges is isomorphic to M as a 2-complex. For a map
with straight edges, we can assign to each corner its Euclidean angle, making the map flat.
Note that many authors considered van Kampen diagrams as maps with angle functions.
Some of the earliest implementations of this idea are in the papers [3] by Steve Gersten, [9] by
Steve Pride and [5] by Jim Howie.
It is easy to see that a (finitely presented) group G has a (finite) presentation P = ⟨X | R⟩
such that every van Kampen diagram over P can be assigned an angle function which makes
the diagram a flat map if and only if one can find a finite generating set of the group which
does not contain involutions. Such a finite generating set exists if and only if the group is not
an extension of an Abelian group A by the automorphism of order 2 which takes every element
of A to its inverse.
We will show in Remark 4.4 that every (p, q)-map can be transformed into a (δ, b)-map with an
angle function without decreasing the area, or increasing the perimeter or the set of flat vertices
and faces. Thus the following theorem is a generalization of Ivanov and Schupp’s Theorem 1.1
to a much wider class of maps.
Theorem 1.5. Suppose that M is a (δ, b)-map of perimeter n and the distance of every vertex of
M to a boundary vertex or to a non-flat vertex or face of M is at most r. Then Area(M ) ≤ Ln,
where L is exponential in r.
The key part of the (very short) proof of Theorem 1.5 (see Section 4) is Lemma 4.2 which
shows that in every non-positively curved map with an angle function, the sum of curvatures of
all faces and interior vertices exceeds π(2 − n) where n is the perimeter of the map.
2
Large flat submaps of (p, q)-maps
Although every edge has an orientation, when counting the numbers of edges (or faces) in a map,
we take usually any pair (e, e−1 ) as one non-oriented edge (e.g., E is the number of non-oriented
edges, when we apply Euler’s formula). The boundary path p of a map or a face is considered
up to cyclic permutations and taking inverse paths p−1 .
The number of faces in M is called the area of M , denoted Area(M ).
Here is a precise formulation of our main result.
Theorem 2.1. Let p, q be positive integers with 1/p + 1/q = 1/2 and C = (3/2)(p − 1)(q + 1), and let M
be a (p, q)-map of perimeter n. Then the area of M does not exceed C(r + p)n, provided M contains
no simple flat submaps of radius greater than r.
2.1
A lemma about curvatures
Given a pair (p, q) of arbitrary (possibly negative) real numbers with 1/p + 1/q = 1/2, the curvature
of a face Π in a map M is defined as curvp,q (Π) = p − d(Π). Let o be a vertex in M . Then
let µ(o) be the number of times the boundary path x goes through the vertex o. For example,
the multiplicity µ(o) is 1 if the closed path x passes through o only once, and µ(o) = 0 if o is
an interior vertex. The curvature curvp,q (o) of a vertex o is defined as (p/q)(q − d(o)) − µ(o). Let
Iv = Iv (M ) be the sum of curvatures of all vertices of M , and let If = If (M ) be the sum of
curvatures of all faces of M .
The following lemma follows from Theorem 3.1 of [7, Chapter V]; we include the proof for
completeness and because it is significantly easier than in [7].
Lemma 2.2. For an arbitrary map M and arbitrary real p, q with 1/p + 1/q = 1/2, we have Iv + If = p.
Proof. The statement is obvious for a map consisting of one vertex. So assume that M has
more than one vertex, hence it has no vertices of degree 0. Let us assign weight 1 to every
non-oriented edge of the map M . Then the sum of all weights is the number E of edges in M .
Now let us make each edge give 1/q of its weight to each of its vertices (if it has only one vertex,
the edge gives it 2/q) and 1/p to each of the (at most two) faces containing that edge. Thus the
sum of weights of all vertices of M is equal to ∑_o (1/q) d(o), where o runs over all vertices of M .
The sum of weights of all faces is ∑_Π (1/p) d(Π), where the sum runs over all faces of M .
By the assumption 1/p + 1/q = 1/2, every edge separating two faces becomes completely weightless
(it gives 1/p to each of the faces and 1/q to each of its vertices). For the same reason, an edge e
of the boundary path x = ∂(M ) becomes of weight 1/p if it lies on the boundary of a face, or 2/p
otherwise. In the latter case, the non-oriented edge e occurs in the path x twice (with different
orientations). Therefore after the redistribution of weights, the sum of weights of all edges in M
is equal to (1/p) n where n is the perimeter of the map.
Thus, the total weight is equal to
E = ∑_o (1/q) d(o) + ∑_Π (1/p) d(Π) + (1/p) n.
Since E − V − F = −1 by Euler’s formula, where V and F are the numbers of vertices and faces in
M respectively, we have:
−1 = ∑_o (1/q) d(o) + ∑_Π (1/p) d(Π) + (1/p) n − V − F = ∑_o ((1/q) d(o) − 1) + ∑_Π ((1/p) d(Π) − 1) + (1/p) n.   (2.1)
Notice also that n = ∑_o µ(o) where o runs over all vertices of M (indeed, µ(o) = 0 for all
interior vertices and µ(o) is the number of times the boundary path passes through o). Therefore
we can rewrite (2.1) as follows:
−1 = ∑_o ((1/q) d(o) − 1 + (1/p) µ(o)) + ∑_Π ((1/p) d(Π) − 1).
Since the first of these sums is −(1/p) Iv and the second sum is −(1/p) If , we deduce that Iv + If = p.
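As a quick sanity check (our own illustration, not part of the argument), the identity Iv + If = p can be verified mechanically on the standard square-grid maps with p = q = 4; the following Python sketch computes all (4, 4)-curvatures of the k × k square tessellated by unit squares:

    # Illustrative verification of Lemma 2.2 (Iv + If = p) on k x k square grids, p = q = 4.
    def grid_curvature_sum(k, p=4, q=4):
        def degree(i, j):                      # vertex degree in the grid map
            return (i > 0) + (i < k) + (j > 0) + (j < k)
        def multiplicity(i, j):                # boundary path passes once through each boundary vertex
            return 1 if (i in (0, k) or j in (0, k)) else 0
        I_v = sum((p / q) * (q - degree(i, j)) - multiplicity(i, j)
                  for i in range(k + 1) for j in range(k + 1))
        I_f = (p - 4) * k * k                  # every face is a quadrangle, so its curvature p - 4 = 0
        return I_v + I_f

    for k in range(1, 6):
        assert abs(grid_curvature_sum(k) - 4) < 1e-9
    print("Lemma 2.2 checked on square grids: Iv + If = 4")

Only the four corner vertices (degree 2, multiplicity 1) have non-zero curvature, each contributing 1, so the total is 4 = p.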
Remark 2.3. A result similar to Lemma 2.2 is true for maps on arbitrary surfaces S with
boundary. It is easy to see that in that case the right hand side is equal to pχ(S) where χ(S) is
the Euler characteristic of the smallest subsurface of S containing the map.
Remark 2.4. Let us use Lemma 2.2 to complete the proof from Remark 1.2 about standard
submaps of simple flat maps. Here we consider the case p = q = 4 only, leaving the other two
cases to the reader as an exercise. Let, as in Remark 1.2, M be a simple flat map of radius r.
We set M0 to be a vertex o at distance r from the boundary of M , and assume that for i ≥ 0,
the submap Mi is constructed and this submap is simple and isomorphic to the standard map
Si4 which is the n × n-square (where n = 2i) tessellated by unit squares. Counting the difference
between the degrees of the vertices from ∂Mi in M and in Mi , we obtain exactly 4n + 4 oriented
edges e1 , ..., e4n+4 with (ej )− ∈ ∂(Mi ) and (ej )+ ∉ Mi . We claim that no two of the edges ej , ek ,
j 6= k, are mutually inverse and no two of the vertices (ej )+ , (ek )+ coincide.
Indeed, suppose that e_j = e_k^{-1} . Then this edge and a subpath of ∂Mi bound a submap N
having no faces from Mi . All exterior vertices of N , except for (ej )± , have degree at least 3 in
N because the degrees of these vertices in Mi are at most 3 while their degrees in M are 4.
Figure 2: The case when e_j = e_k^{-1} .
Therefore the (4, 4)-curvature of every face and vertex in the flat map N , except for (ej )± , are
non-positive, while the curvature of each of (ej )± is at most 1. Hence the sum If + Iv of all
these curvatures for N is at most 2 < 4, which contradicts Lemma 2.2.
In the case when (ej )+ = (ek )+ , we consider the submap N , without faces from Mi , bounded by
ej , e_k^{-1} and by a subpath of ∂Mi . It has at most 3 vertices of positive curvature (equal to 1),
namely, (ej )− , (ej )+ = (ek )+ and (ek )− .
Now if we assume that edges e1 , ..., e4n+4 are enumerated clockwise, we see that for each j,
ej , ej+1 , j = 1, ..., 4n + 4 (where j + 1 is 1 if j = 4n + 4) must belong to the same face Πj which
shares a vertex with ∂(Mi ) and every face sharing a vertex with ∂(Mi ) is one of the Πj . Each
pair of consecutive faces Πj , Πj+1 share exactly one edge ej+1 . Hence Mi+1 is isomorphic to
the (n + 2) × (n + 2)-square tessellated by unit squares. The boundary of Mi+1 is simple since
otherwise a part of this boundary bounds a flat submap N with at most one exterior vertex of
positive curvature contrary to Lemma 2.2 for N since 1 < 4.
2.2
Weakly exterior faces and the interior of a (p, q)-map.
In this section, p and q are positive integers satisfying 1/p + 1/q = 1/2.
A face (edge) in a map M is called strongly interior if it does not share a vertex with ∂(M ),
otherwise it is called weakly exterior.
Lemma 2.5. Let M be a (p, q)-map, o1 , ..., om be its exterior vertices (counted counterclockwise).
Then the number of weakly exterior faces does not exceed ∑_i d(oi ) − 2m.
Proof. Induction on the number of faces in M . If M has a cut vertex, then M is a union of two
submaps M1 and M2 intersecting by a vertex, and it is easy to see that the statement for M
follows from the statements for M1 and M2 . Thus we can assume that the boundary path of
M is simple. Then every exterior vertex oj belongs to d(oj ) − 1 weakly exterior faces, and two
vertices oj , oj+1 (addition modulo m) belong to one face. So the sum ∑_j (d(oj ) − 1) = ∑_j d(oj ) − m
overcounts weakly exterior faces by at least m. The statement of the lemma then follows.
By the interior M 0 of M we mean the union of all strongly interior faces of M , their vertices
and edges. Note that M 0 may be empty. It may also be not connected in which case, it coincides
with the union of its maximal simple submaps M10 , M20 , ... It follows that every edge of ∂Mi0
belongs to a face of Mi0 and to a weakly exterior face of M .
Hence the intersection of different submaps is either empty or consists of one vertex. Let us
call these submaps the components of M 0 . Thus the boundary paths y_1^0 , y_2^0 , . . . of the components
M_1^0 , M_2^0 , ... are simple (see Fig. 1). Below we denote by y the union of these boundaries and set
|y| = ∑_i |y_i^0 |.
In the next lemma, we induct on the type of a map M . By definition, the type τ = τ (Π)
of a face Π in M is the number of interior (non-oriented) edges in ∂Π. If m = Area(M ) and
(τ1 , τ2 , ..., τm ) is the m-tuple of the types of all the faces of M with τ1 ≥ τ2 ≥ ..., then τ (M ) is
the infinite string (τ1 , . . . , τm , −1, −1, . . . ). We order types lexicographically: τ (M ) ≥ τ (M ′ ) =
(τ1′ , τ2′ , . . . ) if τ1 > τ1′ or τ1 = τ1′ but τ2 > τ2′ , and so on. For example, a map with only one face
has the type (0, −1, −1, . . . ). The set of types is obviously well ordered.
Lemma 2.6. Assume that a (p, q)-map M has at least one face. Also assume that the degree of
every face of M is at least p. Let x be the boundary path of M and let y be defined as above. Then
|x| − |y| ≥ −If − 2Ivi + p   (2.2)
where If is the sum of the (p, q)-curvatures of all faces and Ivi is the sum of (p, q)-curvatures of
all interior vertices of M .
Figure 3: The interior of a map and its components
Proof. Let us denote −If − 2Ivi by J = J(M ). Thus we need to prove that |x| − |y| ≥ J + p.
Step 1. The statement of the lemma is true if M has only one face Π, because we have
|x| = d(Π) ≥ p, |y| = 0, If = p − d(Π) and Ivi = 0. Since the smallest type of a (p, q)-map that
has faces is the type (0, −1, −1, ...) of a map consisting of one face (recall that we assume that
(p, q)-maps do not have vertices of degree 1), this gives the base of induction and we can assume
that
(U1 ) The area of M is greater than 1.
Step 2. Suppose that M can be cut into two maps M1 and M2 with smaller numbers of
faces by a path of length at most 1. Defining parameters x(j), y(j), If (j), Ivi (j) of the map Mj ,
j = 1, 2, in the natural way (x(j) is the boundary path of Mj , etc.), we have:
• |x(1)| + |x(2)| ≤ |x| + 2,
• |y| = |y(1)| + |y(2)|,
• If = If (1) + If (2) and Ivi = Ivi (1) + Ivi (2) since no interior vertex of M became exterior
after the cutting.
Since p ≥ 2, the statement of the lemma follows from inequalities |x(j)| − |y(j)| ≥ −If (j) −
2Ivi (j) + p (where j = 1, 2), which hold, since τ (Mj ) < τ (M ) because Area(Mj ) < Area(M ).
Thus, we may assume further that
(U2 ) M has no cutting paths of length ≤ 1, hence, in particular, the boundary path
x is simple.
Step 3. Assume there is an exterior vertex o of degree d > 3. Let e1 , e2 , ..., ed be all edges
ending in o, so that e1 and e2 are on the boundary path of a face Π1 , e2 and e3 are on the
boundary path of a face Π2 , etc. Suppose (e1 )− = o′ . Then let us split the edge e2 into two
edges by a new vertex o′′ in the middle of e2 , and replace e1 with a new edge e′1 going from
o′ to o′′ . Note that this transformation does not change the type of M . Indeed, since e1 is an
exterior edge, the only faces that are changed by this transformation are Π1 , Π2 , but the number
of interior edges on ∂(Πj ), j = 1, 2, does not change (one of the two edges which are parts of the
exterior edge e2 is exterior in the new map and one is interior, and the new edge e′1 is exterior
as was e1 ), hence τ (Πj ), j = 1, 2, does not change, and the type of the map does not change as
well. Also this transformation does not change the set of interior vertices of the map and their
degrees. The degree of Π2 increases by 1 (because of the new vertex o′′ ). Thus both −If and |x|
increase by 1 and Ivi does not change. Hence the value of |x| − J does not change. The degree
of the vertex o decreases by 1 and the degree of the new vertex o′′ is 3. Therefore by doing this
transformation, we will eventually get a map with the degrees of all exterior vertices at most 3
with the same path y and the difference |x| − J as for M .
Figure 4: Step 3.
So we continue the proof under the additional assumption
(U3 ) The degree of every exterior vertex is at most 3.
Remark 2.7. Note that (U3 ) implies that every weakly exterior face of M is exterior, i.e., every
face which shares a vertex with ∂(M ) also shares an edge with ∂(M ).
Step 4. Assume there is a vertex o of degree 2 on an exterior face Π. Let us join two edges
incident with o into one edge and remove the vertex o. This does not change τ (M ) because
only exterior edges and vertices are affected. Then the boundary y of the interior of the map
does not change, the contribution of Π to J decreases by 1, contributions of all other faces and
vertices remain the same, and |x| also decreases by 1, hence |x| − |y| − J will not change. Hence
(U4 ) We can remove vertices of degree 2 on x (joining pairs of edges that share these
vertices), provided the property d(Π) ≥ p is preserved, and we can split edges of x
by new vertices of degree 2 without changing |x| − |y| − J.
Step 5. Suppose an exterior face Π of M has boundary path of the form uw, where u is a
maximal subpath of the boundary path ∂Π contained as a subpath in the boundary path x of
M . Note that |w| ≥ 2, because we excluded cutting paths of length ≤ 1 by (U2 ) and M has
more than one face by (U1 ). Also note that |u| > 0 by Remark 2.7.
Suppose that w has an exterior vertex o which is not equal to the end vertices of w. Then we
add a vertex o′ of degree 2 on u (using (U4 ) ) and connect o and o′ by a new edge g cutting up
Π into two faces of degrees d1 and d2 , where d1 + d2 − 3 = d = d(Π). For the new map M ′ (with
parameters x′ , y′ , etc.) we have |x′ | = |x| + 1, y′ = y. Instead of the face Π of curvature p − d,
we have two faces with curvatures p − d1 and p − d2 . Hence If − If′ = 3 − p. Since the degrees
of interior vertices were preserved, we have J ′ − J = 3 − p and (|x′ | − J ′ ) − (|x| − J) = p − 2.
According to (U4 ), the same difference p − 2 holds for the map with additional vertices of degree 2 on
u. So we may assume that d1 , d2 ≥ p.
Cutting along g, we obtain new maps M1 and M2 with τ (Mj ) < τ (M ) (j = 1, 2), since Π is
subdivided into two faces with τ (Πj ) < τ (Π), j = 1, 2, where τ (Πj ) is computed in Mj . For the
parameters xj , yj , Jj , etc. of the maps Mj , j = 1, 2, we have |x1 | + |x2 | = |x| + 3, |y1 | + |y2 | = |y|
and J1 + J2 = J ′ = J + 3 − p. So, by induction on the type, we obtain
|x| − |y| − J − p = (|x1 | − |y1 | − J1 − p) + (|x2 | − |y2 | − J2 − p) − 3 + 3 ≥ 0 + 0 − 3 + 3 = 0,
as desired. Thus, we may assume that
(U5 ) For every exterior face Π as above, the path w has no exterior vertices except
its end vertices.
Figure 5: Step 5.
Step 6. Properties (U1 )-(U5 ) imply the following property
(U6 ) For every weakly exterior face Π of M , we have ∂(Π) = euf v where u = u(Π) is
the subpath of the boundary path of M , |u(Π)| > 0, and e, f are edges with exactly
one exterior vertex while v = v(Π) has no exterior vertices.
Using the notation of Property (U6 ), suppose now that |v(Π)| > 1, i.e., v(Π) = v′ v′′ with
|v′ |, |v′′ | > 0. Let o be the last vertex of v′ . Then we add a new vertex o′ of degree 2 on an edge
of u (subdividing that edge into two edges) and add a new edge t connecting o and o′ . As a
result, the face Π of degree d = d(Π) is subdivided into two faces: a face Π′ of degree d′ and a
face Π′′ of degree d′′ , where d′ + d′′ = d + 3. Let M ′ be the new map with parameters x′ , y′ , If′ , J ′ ,
etc. We have τ (M ′ ) < τ (M ) since τ (Π′ ), τ (Π′′ ) < τ (Π) by (U6 ).
By (U4 ), we can add new vertices of degree 2 to ∂(Π′ ), ∂(Π′′ ) so we can assume that d′ =
d(Π′ ) ≥ p, d′′ = d(Π′′ ) ≥ p. The contributions of Π′ and Π′′ to If′ are p−d′ and p−d′′ , respectively,
while the contribution of Π to If was p−d. So If −If′ = p−(d′ +d′′ −3)+(d′ −p)+(d′′ −p) = 3−p.
The contribution of the interior vertex o to 2Ivi is greater than its contribution to 2(Ivi )′ by 2p/q
since this vertex is incident to the new edge t, i.e., 2Ivi − 2(Ivi )′ = 2p/q. Thus J − J ′ = p − 3 − 2p/q =
−1 because 1/p + 1/q = 1/2. However we also have |x| − |x′ | = −1 since one edge is subdivided by the
vertex o′ , and so |x′ | − J ′ = |x| − J. Thus we can assume,
(U ) M satisfies (U6 ) and for every exterior face Π, |v(Π)| ≤ 1.
Figure 6: Step 6.
Step 7. Now we consider the cases p = 3, 4, 6 separately. Since |v(Π)| ≤ 1 by (U ), we have
|u(Π)| ≥ p − 3, and by (U4 ), one may assume that every exterior face has degree p if p > 3 and
has degree 4 if p = 3.
Since every exterior vertex has degree 2 or 3 (by (U3 )), the difference |x| − |y| is not smaller
than the sum S of |u(Π)| − |v(Π)| for all exterior faces Π.
Case p = 4, q = 4. In this case the degree of every exterior face is 4. Then a vertex of
degree 2 can occur only on the path u(Π) for some exterior face Π with |v(Π)| = 0. Let N be
the number of vertices of degree 2 on ∂(M ). Then the contribution of the face containing that
vertex to the sum S is 2 and S ≥ 2N . The contribution of faces Π with |u(Π)| = |v(Π)| = 1 is
0. The sum Ivb of curvatures of exterior vertices is
((4/4)(4 − 2) − 1) N + ((4/4)(4 − 3) − 1)(|x| − N ) = N.
By Lemma 2.2, Ivi + Ivb + If = 4. Thus N + Ivi + If = 4, hence 2N + 2(Ivi + If ) = 8, and 2N ≥ J + 8
because J = −2Ivi − If and If ≤ 0 by the assumption of the lemma. Therefore
|x| − |y| ≥ J + 8 ≥ J + p.
Case p = 6, q = 3. Now every exterior face of M has degree 6. Let Ni be the number of
weakly exterior faces Π with |v(Π)| = i, i = 0, 1. Then |u(Π)| is 4 or 3, respectively, and so
|x| ≥ 4N0 + 3N1 and |y| ≤ N1 . So |x| − |y| ≥ 4N0 + 2N1 .
Every exterior face has either 3 or 2 vertices of degree 2. It is easy to compute now that the
sum Ivb of curvatures of the vertices of x is
(3N0 + 2N1 )((6/3)(3 − 2) − 1) + (|x| − 3N0 − 2N1 )((6/3)(3 − 3) − 1) = −|x| + 6N0 + 4N1 = 2N0 + N1 .
Since by Lemma 2.2, Ivb + Ivi + If = 6, we have 2N0 + N1 + Ivi + If = 6. Hence J + p < 12 + J ≤
2(−Ivi − If + 6) = 4N0 + 2N1 ≤ |x| − |y|, as required.
Case p = 3, q = 6. Let N1 be the number of exterior faces Π of degree 4 with |u(Π)| = 2,
|v(Π)| = 0, and N0 be the number of exterior faces Π of degree 4 with |u(Π)| = |v(Π)| = 1.
Then we have |x| ≥ 2N1 + N0 , |y| ≤ N0 , and Ivb = N1 + (1/2)(|x| − N1 ) = (1/2)(|x| + N1 ), because the
curvature of an exterior vertex of degree 2 (respectively, 3) is (1/2)(6 − 2) − 1 = 1 (respectively, 1/2).
Since Ivb + Ivi + If = 3 by Lemma 2.2, we get |x| = 2Ivb − N1 = 2(−Ivi − If + 3) − N1 . Note that
−If ≥ N0 + N1 since each quadrangle contributes −1 to the sum If . Hence
|x| = 2(−Ivi − If + 3) − N1 = J − If + 6 − N1 ≥ J + N0 + N1 + 6 − N1 = J + N0 + 6
Therefore |x| − |y| ≥ J + N0 + 6 − N0 ≥ J + 6 > J + p, as desired.
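For intuition (again our own check, under the assumption that the k × k square grid is a flat (4, 4)-map as in Remark 2.4), the inequality of Lemma 2.6 can be tested on these grids, where J = −If − 2Ivi = 0 and the claim reduces to |x| − |y| ≥ p:

    # Illustrative check of Lemma 2.6 on flat k x k square grids (p = q = 4).
    # All faces and interior vertices are flat, so J = -If - 2*Ivi = 0.
    def perimeters(k):
        x = 4 * k                       # boundary of the k x k grid
        y = 4 * (k - 2) if k >= 3 else 0   # interior M0 = union of faces with no boundary vertices
        return x, y

    p = 4
    for k in range(1, 10):
        x, y = perimeters(k)
        assert x - y >= 0 + p           # J = 0 for a flat map
    print("Lemma 2.6 holds with J = 0 on flat square grids")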
2.3
Adjustment
Note that it is enough to prove Theorem 2.1 for simple maps. Let M be a simple (p, q)-map,
1/p + 1/q = 1/2. Every face of M of degree ≥ 2p can be subdivided by diagonals into faces of degrees
p + 1, ..., 2p − 1. Every vertex o of degree d ≥ 2q can be replaced by two (nearby) vertices o′ , o′′
connected by an edge such that d(o′ ) + d(o′′ ) = d(o) + 2 and d(o′ ) = q + 1. It can be done so
that if o is an exterior vertex, then exactly one of the vertices o′ , o′′ is exterior in the resulting
map. Each such transformation increases the total number of vertices and faces of negative
(p, q)-curvature but it does not increase the number and the curvatures of the vertices and faces
having non-negative curvature. Since the sum If + Iv of (p, q)-curvatures of all vertices and
faces cannot exceed p by Lemma 2.2, any sequence of subdivisions of vertices and faces as above
terminates. Clearly, if M is a (p, q)-map, then the new map M ′ is again a (p, q)-map, it has
non-smaller area than M , the same perimeter, and the set of (simple) flat submaps in M ′ is a
subset of the set of (simple) flat submaps of M because the subdivisions do not introduce new
flat vertices or faces. The map M ′ satisfies the following condition.
(B) The degree of every face (every vertex) of the simple (p, q)-map M ′ is less than 2p (resp.
2q).
If we have a (p, q)-map M with condition (B), then one can construct a new map M ′
satisfying condition (B), where all faces (not just interior ones) have degrees at least p. Namely,
one subsequently cuts out the exterior faces of degree less than p (and also the edges containing
the vertices of degree 1 if such edges appear). The perimeter of M ′ is at most (p − 1)n, where n
is the perimeter of M , Area(M ′ ) ≥ Area(M ) − n, and the maps M and M ′ have the same flat
submaps.
The map M ′ is a (p, q)-map satisfying the following additional condition
(D) The degree of every face of M ′ is ≥ p.
Let us call a (p, q)-map with the additional assumptions (B) and (D) a {p, q}-map. It is
easy to see that Theorem 2.1 follows from
Theorem 2.8. The area of an arbitrary simple {p, q}-map M of perimeter n does not exceed (3q/2 + 1)(r + p)n,
provided M contains no simple flat submaps of radius greater than r.
Indeed, let M ′ be the {p, q}-map obtained from a (p, q)-map M satisfying the assumption
of Theorem 2.1 and condition (B) after removing some exterior faces. By Theorem 2.8, we
have Area(M ′ ) ≤ (3q/2 + 1)(r + p)(p − 1)n. Hence Area(M ) ≤ (3q/2 + 1)(r + p)(p − 1)n + n ≤
(3/2)(q + 1)(r + p)(p − 1)n.
2.4
Connecting non-flat vertices and faces with the boundary
Note that all non-flat faces and all interior non-flat vertices of a {p,q}-map M have negative
curvatures.
If M contains non-flat faces or vertices, there exists a subgraph Γ of the 1-skeleton of M ,
such that every non-flat face or vertex of M can be connected with ∂M by a path in Γ. We
will assume that Γ is chosen with the minimal number of edges D(M ). Then Γ will be called a
connecting subgraph of M .
Remark 2.9. The minimality of Γ implies that every vertex of Γ can be connected in Γ with
∂M by a unique reduced path. It follows that Γ is a forest, where every maximal subtree has
exactly one vertex on the boundary ∂M .
We shall use the notation from Section 2.2. Thus x is the boundary path of M , y is the union
of the boundary paths of components of M 0 , etc.
Lemma 2.10. We have D(M ) ≤ (p − 1)|x|.
Proof. We shall induct on the area m of M . The statement is true if m = 1 since in this case
D(M ) = 0.
Let D0 = D(M 0 ) and let Γ0 be the corresponding connecting subgraph of the interior M 0 .
By the induction hypothesis, D0 ≤ (p − 1)|y|.
Let A0 (resp., A) be the number of non-flat faces and non-flat interior vertices in M 0 (in M ). By
Remark 2.9, Γ0 has at most A0 vertices on ∂M 0 , and so one needs at most A0 paths z1 , z2 , . . . ,
to connect them with ∂M . Besides there are A − A0 non-flat faces and interior vertices in M
which are not counted in A0 . One can connect them with ∂M adding at most A − A0 connecting
paths y1 , y2 , . . . to obtain a graph Γ′ connecting all non-flat faces and interior vertices of M with
∂(M ). The lengths |zi | and |yj | cannot exceed a half of the maximum of the perimeters of faces,
and so |zi |, |yj | ≤ p − 1 by Property (B). Therefore D(M ) ≤ D0 + A(p − 1).
Since every non-flat face (resp. non-flat interior vertex) has curvature at most −1 (respectively, at most −1/2), we have A ≤ −If − 2Ivi = J ≤ |x| − |y| − p by Lemma 2.6. Therefore
D(M ) ≤ D0 + J(p − 1) ≤ (p − 1)|y| + J(p − 1) ≤ (p − 1)(|y| + (|x| − |y| − p)) < (p − 1)|x|.
2.5
Cutting the map along its connecting subgraph and the proof of Theorem 2.8
As before, p, q are positive integers with 1/p + 1/q = 1/2.
Lemma 2.11. Let M be a {p, q}-map of radius 0 (i.e., all vertices of M are exterior) and
perimeter n > 0. Then Area(M ) ≤ q(n − 2)/(2p).
Proof. Induction on the number of faces in M . The statement is easy to check if M contains
only one face. If M contains more than one face, then M has a cut vertex or cut edge. In each
of the two cases, the cut vertex or the cut edge separates M into two submaps M1 and M2 with
perimeters n1 , n2 such that n1 + n2 ≤ n + 2. Therefore
Area(M ) = Area(M1 ) + Area(M2 ) ≤ q(n1 − 2)/(2p) + q(n2 − 2)/(2p) ≤ q(n − 2)/(2p).
Lemma 2.12. Let M be a {p, q}-map. Then the sum Ivb of curvatures of exterior vertices
satisfies Ivb ≥ p.
Proof. Indeed, we have Ivb + Ivi + If = p by Lemma 2.2. Since Ivi ≤ 0 and If ≤ 0, we have
Ivb ≥ p.
Lemma 2.13. Let M be a {p, q}-map with perimeter n > 0. Then the number N of weakly
exterior faces of M is at most (q/p)n − q.
Proof. Let o1 , ..., om (m ≤ n) be the exterior vertices of M . Then
Ivb = ∑_j ((p/q)(q − d(oj )) − µ(oj )) = −(p/q) ∑_j d(oj ) + pm − n
since ∑_j µ(oj ) = n. Therefore we have
∑_j d(oj ) = (q/p)(−Ivb + pm − n) ≤ −q + qm − (q/p)n
by Lemma 2.12. By Lemma 2.5, the number N of weakly exterior faces in M is at most
∑_j d(oj ) − 2m. Therefore
N ≤ −q + (q − 2)m − (q/p)n ≤ −q + (q − 2 − q/p)n = (q/p)n − q
since 1/p + 1/q = 1/2.
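As an illustration of Lemma 2.13 (ours), on the k × k square grids with k ≥ 2 the bound is attained with equality: the weakly exterior faces are exactly the 4k − 4 unit squares touching the boundary, and (q/p)n − q = 4k − 4 for p = q = 4, n = 4k:

    # Illustrative check of Lemma 2.13 on k x k square grids (p = q = 4), k >= 2.
    def weakly_exterior_faces(k):
        return k * k - max(k - 2, 0) ** 2   # all faces minus the strongly interior ones

    p = q = 4
    for k in range(2, 12):
        n = 4 * k                            # perimeter of the k x k grid
        assert weakly_exterior_faces(k) <= (q / p) * n - q
    print("Lemma 2.13 bound (attained with equality) on square grids with k >= 2")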
Lemma 2.14. If the radius of a {p, q}-map M is at most r − 1 and its perimeter is n, then the
area of M does not exceed (q/p)rn.
Proof. If r = 1 then this follows from Lemma 2.11, so let r > 1. Let M 0 be the interior
of M . Its boundary y is the union of the boundaries of the components M_1^0 , M_2^0 , . . . having
radii ≤ r − 2. So one may assume by induction on r that Area(M 0 ) < (q/p)(r − 1)|y|, which
does not exceed (q/p)(r − 1)(n − p) by Lemma 2.6. It follows from Lemma 2.13 that
Area(M ) < (q/p)(r − 1)(n − p) + (q/p)n − q < (q/p)rn.
Proof of Theorem 2.8. Let Γ be a connecting subgraph of M . Let e be a non-oriented
edge of Γ with one vertex on ∂M . Then cutting M along e, one obtains a {p, q}-map M1 with
perimeter |x| + 2, where all non-flat faces and vertices are connected with ∂M1 by paths in the
graph Γ1 , where Γ1 is obtained from Γ by removing the edge e. We can continue cutting this
way along the edges of Γ, until we obtain a {p, q}-map M̄ of perimeter |x| + 2D(M ) ≤ (2p − 1)|x|
(by Lemma 2.10), where every vertex and every face of non-zero curvature is (weakly) exterior. Thus
every component of the interior M̄ 0 of M̄ is a simple flat map.
Figure 7: Cutting the map M along the edges of Γ.
If M̄ 0 is empty, then the radius r̄ of M̄ is at most r − 1 since by (B), the degree of every
exterior face in M̄ is less than 2p. Hence by Lemma 2.14 for M̄ , we have
Area(M ) = Area(M̄ ) ≤ (q/p) r (2p − 1)n ≤ (3q/2 + 1)(r + p)n
since (q/p)(2p − 1) = 3q/2 + 1 by the equality 1/p + 1/q = 1/2, and the theorem is proved.
If M̄ 0 is not empty, then again by (B), it has a component N of radius r̄ 0 ≥ r̄ − p + 1.
The map N is a simple flat submap of M . Hence its radius r̄ 0 does not exceed r. Therefore
r̄ − p + 1 ≤ r and r̄ ≤ r + p − 1. By Lemma 2.14,
Area(M ) = Area(M̄ ) ≤ (q/p)(2p − 1)(r + p − 1 + 1)n = (3q/2 + 1)(r + p)n.
3
(p, q)-maps that are quasi-isometric to R2
Recall that a metric space X with distance function distX is (L, K)-quasi-isometric to a metric
space Y with distance function distY , where L > 1, K > 0, if there exists a mapping φ from
X to Y such that Y coincides with a tubular neighborhood of φ(X) and for every two points
o1 , o2 of X we have
−K + (1/L) distX (o1 , o2 ) < distY (φ(o1 ), φ(o2 )) < K + L distX (o1 , o2 ).
Two metric spaces X and Y are quasi-isometric if there is an (L, K)-quasi-isometry X → Y for
some L and K. This relation is reflexive, symmetric and transitive.
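To illustrate the definition (our example, not from the text), the identity embedding of the square grid Z2 with its path metric into the Euclidean plane is an (L, K)-quasi-isometry with L = √2 and, say, K = 1, because ‖v‖2 ≤ ‖v‖1 ≤ √2 ‖v‖2 and R2 is the 1-neighborhood of Z2 ; the following sketch spot-checks the two inequalities on random pairs of grid points:

    # Illustrative numeric check (not from the paper): Z^2 with the path (l1) metric is
    # (sqrt(2), 1)-quasi-isometric to the Euclidean plane via the identity map.
    import math, random

    L, K = math.sqrt(2), 1.0
    random.seed(0)
    for _ in range(10000):
        o1 = (random.randint(-50, 50), random.randint(-50, 50))
        o2 = (random.randint(-50, 50), random.randint(-50, 50))
        d_X = abs(o1[0] - o2[0]) + abs(o1[1] - o2[1])     # path metric in the grid
        d_Y = math.hypot(o1[0] - o2[0], o1[1] - o2[1])    # Euclidean distance
        assert -K + d_X / L < d_Y < K + L * d_X
    print("sampled quasi-isometry inequalities hold for Z^2 -> R^2")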
In this section we consider infinite planar maps. Here such a map M is called proper if the
support of M is the whole plane R2 , and every disc on R2 intersects finitely many faces, edges
and vertices of M . The metric on M is the combinatorial path metric on its 1-skeleton.
Theorem 3.1. Let M be a proper (p, q)-map, where the positive integers p and q satisfy 1/p + 1/q = 1/2.
Then the 1-skeleton of M is quasi-isometric to the Euclidean plane if and only if M has a finite
number of non-flat vertices and faces.
We will provide a proof for the case p = q = 4; the other two cases are left for the reader
(see Remark 3.10).
3.1
The “if” part of Theorem 3.1
A corridor B of M is a finite sequence of faces, where any two consecutive faces share a gluing
boundary edge, and two gluing edges of a face are not adjacent in the boundary path of the
face. In detail: a corridor is a sequence
(e0 , Π1 , e1 , Π2 , . . . , et−1 , Πt , et ),
where e_{i−1} and e_i^{-1} are non-adjacent edges in the boundary of the face Πi for i = 1, . . . , t.
Figure 8: A corridor
The boundary of B has the form e0 q e_t^{-1} (q′ )^{-1} , where the sides q and q′ consist of non-gluing
edges of the faces Π1 , . . . , Πt .
Lemma 3.2. In the above notation, no vertex of a (4, 4)-map is passed by a side q or by q′
twice.
Proof. Assume that a corridor B is a counter-example. Then we may assume that q is a simple
closed path bounding a submap N of minimal possible area. So N contains no faces from B.
Every vertex of q, except for the initial (= terminal) vertex o has degree at least 4 in M , and
so its degree in N is at least 3, as it follows from the definition of a corridor: two gluing edges
of a face in a corridor are not adjacent in the boundary path of the face. Thus, the only vertex
that can give a positive contribution to the sum If + Iv from Lemma 2.2 for the map N is o.
But this contribution is at most 1, and so we have If + Iv ≤ 1, contrary to Lemma 2.2.
Figure 9: A corridor touching itself
Lemma 3.2 allows us to extend an arbitrary corridor infinitely in both directions:
(. . . , e−1 , Π0 , e0 , Π1 , e1 , Π2 , . . . , et−1 , Πt , et , Πt+1 . . . ),
its sides are infinite simple paths subdividing the plane into two parts (because the map M is
proper); these are infinite corridors. One can also consider semi-infinite corridors of the form
(e0 , Π1 , e1 , Π2 , . . . , et−1 , Πt , et , . . . ).
Lemma 3.3. Let M be the map from Theorem 3.1 with a finite set of non-flat faces and vertices.
Then the 1-skeleton of M is quasi-isometric to the 1-skeleton of a map with finitely many non-flat
vertices and without non-flat faces.
Proof. Consider an infinite corridor B = (. . . , e0 , Π1 , e1 , Π2 , . . . , et−1 , Πt , et , . . . ) containing a non-flat face Πt . Since the number of non-flat faces in M is finite, we can assume that all the faces
Πt+1 , . . . are flat.
Let p, p′ be the two sides of B so that each ei connects a vertex oi on p with a vertex o′i on p′ .
Figure 10: Modifying non-flat faces in a corridor
Since d = d(Πt ) ≥ 5, without loss of generality we can assume that the subpath w of p
connecting ot−1 and ot has length at least 2, so it can be decomposed as w = vu, where u is one
edge connecting o′′ with ot and |v| > 0. Then we modify the faces in B as follows: replace the
gluing edge et by a new gluing edge ft connecting o′′ and o′t , and replace every gluing edge es
with s > t by the new gluing edge fs connecting os−1 with o′s (see Figure 10). Then the degree
of Πt decreases by 1 and the degrees of all other faces are preserved. To complete the proof by
induction, it suffices to notice that the 1-skeleton of the new map M ′ is quasi-isometric to the 1-skeleton
of M , since the distances between the vertices cannot increase or decrease by more than a factor of two
when passing from M to M ′ .
Lemma 3.4. Let B = (. . . , e0 , Π1 , e1 , Π2 , . . . , et−1 , Πt , et , . . . ) be an infinite corridor in M , where
every face Πi has degree 4, and so Πi has the boundary of the form ei−1 fi (ei )−1 gi , where fi and
gi are edges. Excising the faces of B from M and identifying the edges fi and gi (for every
−∞ < i < ∞), one obtains a new map M ′ . We claim that M ′ is a (4, 4)-map whose 1-skeleton
is quasi-isometric to the 1-skeleton of M .
Proof. Every end vertex of ei has degree ≥ 4 in M . So the same must be true in M ′ . Hence M ′
is a (4, 4)-map.
Figure 11: Collapsing gluing edges of a corridor
If two vertices can be connected in M ′ by a path p of length m, then their preimages in M
can be connected in M by a path q of length at most 2m + 1 since q can be constructed from the
edges of p and the gluing edges of B. Conversely, the distance between two vertices in M ′ does
not exceed the distance between them in M . The quasi-isometry of the 1-skeletons follows.
Proof of the “if” part of Theorem 3.1. Suppose that M has finitely many non-flat vertices and
faces. By Lemma 3.3, we may assume that M has no non-flat faces, and so every face has degree
4. Let two distinct non-flat vertices o and o′ be connected by a shortest path p. Then we choose
an edge e on p and consider an infinite corridor B, where e is one of the gluing edges of B. Then
the transformation M → M ′ from Lemma 3.4 decreases the sum of distances between non-flat
vertices and replaces M by a quasi-isometric map M ′ . So after a number of such transformations,
we shall have a (4, 4)-map without non-flat faces and with at most one non-flat vertex o. We
will use the same notation M for it.
Let us enumerate the edges e1 , . . . , ek with initial vertex o in clockwise order; so o lies on the
boundaries ei fi gi e_{i+1}^{-1} (indices are taken modulo k) of k quadrangles Π1 , . . . , Πk . Consider the
semi-infinite corridors B1 = (e1 , Π1 , g1 , Π′1 , . . . ) and C1 = (e2 , Π1 , f1 , . . . ) starting with the face
Π1 . They define semi-infinite sides q1 and q′1 starting at o. Since M is proper, the paths q1 , q′1
bound a submap Q1 of the plane.
There is a semi-infinite corridor C1′ starting with the second edge of q′1 and the face Π′1 . This
corridor has to share the whole side with C1 since it is made of quadrangles and all the vertices,
except for o, have degree 4. Similarly, the semi-infinite corridor C1′′ starting with the third edge
of q′1 is glued up to C1′ along the whole side, and so on.
Therefore Q1 with its path metric is isometric to a standard quadrant of the square grid Z2 .
Figure 12: Representing a map as a union of several quadrants of Z2 .
Similarly we have quadrants Q2 , . . . , Qk , where each Qi is bounded by semi-infinite paths qi
and q′i , and as above, we have q′i = qi+1 (indices modulo k). The 1-skeleton of every submap Qi
is quasi-isometric to a quadrant on the Euclidean plane R2 with the Euclidean metric which, in
turn, is quasi-isometric to a part Si of the plane bounded by two rays with common origin and
angle 2π/k so that the union of all Si is the whole plane R2 (use polar coordinates). Combining
all these quasi-isometries and using the fact that a quadrant of R2 is convex, we get a quasiisometry between the 1-skeleton of M and R2 .
3.2
The “only if” part of Theorem 3.1
Let M be a proper (4, 4)-map having infinitely many non-flat vertices and faces. By contradiction, suppose that M with its path metric distM is (L, K)-quasi-isometric (L > 1, K > 0) to R2
If distM (o1 , o2 ) > c for some c ≥ KL, then distR2 (φ(o1 ), φ(o2 )) > c0 = (c − KL)/L and so the
growth of every c-separated set2 S of vertices of M is at most quadratic, that is the function
γS,o (r) = |{o′ ∈ V ′ | distM (o, o′ ) ≤ r}| is at most quadratic in r.
The number of vertices in a submap M ′ of M will be denoted by area(M ′ ) (recall that
Area(M ′ ) denotes the number of faces in M ′ ).
We start with the following well known
Lemma 3.5. (See Theorem 6.2 in [7, Chapter V]. Also it immediately follows from Lemmas
2.13 and 2.6 by induction on n.) If N is a (4, 4)-map with perimeter n, then area(N ) ≤ kn2 for
some constant k > 0.
Now we are going to modify faces of high degree.
2 A set of points S in a metric space X is called c-separated if distX (o1 , o2 ) > c for every two distinct points
o1 , o2 ∈ S.
Lemma 3.6. There exists a map M ′ on the plane
1. with the same set V of vertices as M ,
2. with an infinite set of non-flat vertices and faces,
3. for a marked vertex o ∈ V and every o′ ∈ V , distM (o, o′ ) = distM ′ (o, o′ ),
4. for arbitrary vertices o′ , o′′ , we have distM ′ (o′ , o′′ ) ≤ distM (o′ , o′′ ),
5. the degrees of all faces are at most 6.
Proof. Let Π be a face with d = d(Π) ≥ 7, and vertices o1 , ..., od in the clockwise order. Consider
the difference f (i) = dist(oi , o) − dist(oi+j , o) (indices modulo d). It is non-negative if oi is the
farthest vertex from o among o1 , ..., od , and it is non-positive if oi is the closest one. Since
|dist(om , o) − dist(om+1 , o)| ≤ 1, we have |f (m) − f (m + 1)| ≤ 2 for any m, and so there is i such
that |f (i)| = |dist(oi , o) − dist(oi+j , o)| ≤ 1.
Let us connect vertices oi , oi+j by a new, diagonal edge e inside the cell Π, so that Π is
divided into two new cells Π′ , Π′′ both of degrees at least 4 and at least one of degree at least
5. This operation does not introduce any new vertices. Let M ′ be the new map on the plane.
It is clear that Properties 1 and 4 of the lemma hold.
Let us show that distance from every vertex o′ to o in M ′ is the same as in M (Property
3). Let g be a geodesic in M ′ connecting o′ and o. If g does not contain the new edge e, then
the distance between o′ and o did not change. So suppose that g contains e. Without loss of
generality we can assume that the vertices of g in the natural order are o′ , ..., oi , oi+j , ..., o. Since
g is a geodesic, e appears in g only once, and distM ′ (oi+j , o) = distM (oi+j , o), and distM ′ (oi , o) =
distM (oi+j , o) + 1. By the choice of the pair (oi , oi+j ), f (i) ∈ {0, 1, −1}. Since distM (oi , o) ≥
distM ′ (oi , o) we can deduce that distM (oi , o) = distM ′ (oi , o), so there exists a geodesic g ′ in M ′
connecting o′ and o and avoiding e. Hence distM ′ (o′ , o) = distM (o′ , o).
This implies that we can cut all faces of degree ≥ 7 by diagonals into several parts so that
the resulting map on R2 satisfies all five properties of the lemma.
Lemma 3.6 implies that for the map M ′ the growth function γS,o of every c-separated set
S of vertices with respect to vertex o is at most quadratic if c is large enough (because every
c-separated set S of vertices in M ′ is c-separated in M by Property 4, and the functions γS,o (r)
for M and M ′ are the same by Property 3). To obtain a contradiction with this quadratic
growth, Lemma 3.6 allows us to assume from now on that the degrees of all faces in M are at
most 6.
Lemma 3.7. For every r > 0 there exists a simple submap N = N (r) of M containing the
vertex o, such that distM (o, ∂(N )) ≥ r and the maximal distance (in M ) from o to an exterior
vertex of N is at most r + 2.
Proof. Let N be the smallest (with respect to the length of the boundary) submap of M containing all faces Π of M with distM (o, Π) ≤ r − 1. Such a submap N exists since M is locally
finite. We claim that the boundary path of N has no cut points. Indeed, suppose o′ is a cut point on
∂(N ) subdividing N into two submaps N1 , N2 containing faces, with N1 ∩ N2 = {o′ } and o ∈ N1 .
Suppose that N2 contains a face Π at distance (in M ) at most r − 1 from o. Let g be a geodesic
connecting o and Π in M . Then every vertex on g is at distance (in M ) at most r − 1 from o.
Hence every face of M having a common vertex with g is in N . Thus g is a path in the interior
of N . Since g connects a vertex in N1 with a vertex in N2 , g must contain o′ . Hence o′ is an
interior vertex of N , a contradiction.
Since the boundary path ∂N contains no cut points and has minimal length, it is simple.
There are no vertices o′ ∈ ∂(N ) at distance (in M ) at most r − 1 from o. Indeed, otherwise
every face of M containing o′ would be at distance ≤ r − 1 from o, and would be contained in
N , hence o′ would be an interior vertex of N , a contradiction.
Suppose that N contains an exterior vertex o′ at distance (in M ) at least r + 3 from o. Then
o′ belongs to an exterior face Π of N . Since d(Π) ≤ 6, we have distM (o, Π) ≥ r. Therefore if
we remove Π from N together with the longest subpath of ∂(Π) containing o′ and contained in
∂(N ), we get a smaller submap N ′ of N containing all faces of M at distance ≤ r − 1 from o, a
contradiction. Hence dist(o, o′ ) ≤ r + 2 for every o′ ∈ ∂(N ).
Let Φ(r) be the number of vertices o′ of M with distM (o′ , o) ≤ r.
Lemma 3.8. The function Φ is super-quadratic, i.e., limr→∞ Φ(r)/r^2 = ∞.
Proof. Let us denote by φ(r) the minimum of the numbers of vertices on the boundaries of the
finite submaps Q with the property that Q is simple and distM (o, ∂Q) ≥ r. For any Q with this
property, let N be the component of the interior Q0 containing o. Since every (exterior) face of
Q has degree at most 6, the boundary yN of N satisfies inequality |yN | ≥ φ(r − 3). If x is the
boundary path of Q, then by Lemma 2.6, |x| ≥ |yN | + J + 4, where J = −If (Q) − 2Ivi (Q). Since
J is not less than the number K = K(Q) of non-flat faces in Q plus the number of non-flat vertices interior in Q,
we have |x| > |yN | + K ≥ φ(r − 3) + K, and so φ(r) > φ(r − 3) + K.
If a non-flat face Π (or vertex o′ ) lies in M at a distance ≤ r − 1 from o, then it belongs
to Q (resp., it is interior in Q). Indeed, if Π is not in Q or o′ is not an interior vertex of Q,
then any path connecting Π or o′ with o has to intersect ∂Q, which contradicts the property
distM (o, ∂Q) ≥ r. Hence K ≥ ψ(r − 1), where ψ(r − 1) is the number of non-flat vertices and
faces of M at the distance ≤ r − 1 from o. Therefore φ(r) > φ(r − 3) + ψ(r − 1).
Since ψ(r) → ∞ as r → ∞, we have φ(r)/r ≥ (1/r) ∑_{0≤i≤r/6} ψ(r − 1 − 3i) ≥ ψ(⌊r/2⌋ − 1)/6 → ∞.
Since the boundaries of the maps N (r) and N (r ′ ) from Lemma 3.7 do not intersect if |r−r ′ | ≥
3, we have
Φ(r)/r^2 ≥ (1/r^2) ∑_{0≤i≤r/6} φ(r − 2 − 3i) ≥ (1/(6r)) φ([r/2] − 2) → ∞.
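The following toy computation (ours) illustrates the mechanism of the proof: any function satisfying a recursion of the form φ(r) > φ(r − 3) + ψ(r − 1) with an unbounded nondecreasing ψ (here an arbitrary sample choice) has φ(r)/r → ∞:

    # Toy numerical illustration (ours) of the recursion used in the proof of Lemma 3.8.
    def psi(r):
        return r // 10 + 1            # any unbounded nondecreasing sample choice

    phi = {0: 1, 1: 1, 2: 1}
    for r in range(3, 2001):
        phi[r] = phi[r - 3] + psi(r - 1) + 1   # satisfies phi(r) > phi(r-3) + psi(r-1)
    for r in (30, 300, 1500, 2000):
        print(r, phi[r] / r)          # the ratio grows without bound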
Lemma 3.9. For an arbitrary integer c ≥ 1, there is a super-linear function α(r) = αc (r)
such that the boundary of every submap N (r) satisfying the condition of Lemma 3.7 contains a
c-separated set of vertices S with |S| ≥ α(r).
Proof. We may assume that r > c. The simple boundary path x of N (r) has winding number
±1 around the vertex o, and it is a sequence of edges, i.e., of subpaths of length 1 ≤ c. Therefore
there is the smallest t ≥ 1 and the vertices o1 , . . . , ot of x such that distM (oi , oi+1 ) ≤ c (indices
modulo t) and the geodesic paths zi = oi − oi+1 (in M ) form a product z = z1 . . . zt with non-zero
winding number around o. (Self-intersections are allowed for z, but no zi goes through o since
|zi | ≤ c < r ≤ distM (o, oi ), and so the winding number is well defined.)
Suppose there is a pair of distinct vertices oi , oj , where i < j and i − j ≠ ±1 (mod t),
with dist(oi , oj ) ≤ c. Then a geodesic path z̄ = oi − oj defines two new closed paths
z′ = z̄ zj . . . zt z1 . . . zi−1 and z′′ = zi . . . zj−1 z̄^{-1} with numbers of factors less than t. Since one of the
paths z′ and z′′ has nonzero winding number with respect to o, this contradicts the minimality in
the choice of t. Hence dist(oi , oj ) > c, and so one can choose the set S of cardinality ≥ (t − 1)/2.
Assume now that t < (r^2 Φ(r − c/2))^{1/4} , where Φ(r) is defined before Lemma 3.8. Note that o
belongs to a submap L bounded by a simple closed path w, which is a product of some pieces of
the paths zi . Therefore |w| ≤ |z| ≤ tc, and by Lemma 3.5, area(L) ≤ k(ct)^2 ≤ Dr √(Φ(r − c/2)),
where D = kc^2 .
On the other hand, distM (o′ , o) ≥ r − c/2 for every vertex o′ of w since |zi | ≤ c for every
i. Therefore area(L) ≥ Φ(r − c/2) by Lemma 3.8. We obtain the inequality Φ(r − c/2) ≤
Dr √(Φ(r − c/2)), which can hold only for finitely many values of r since the function Φ is
super-quadratic by Lemma 3.8. Thus t = t(r) ≥ (r^2 Φ(r − c/2))^{1/4} for every r ≥ r0 , and since
|S| ≥ (1/2)(t − 1), one can define α(r) to be equal to (1/2)((r^2 Φ(r − c/2))^{1/4} − 1) if r ≥ r0 and α(r) = 1
if r < r0 .
Now we can prove that for any given integer c ≥ 1, the map M contains an infinite c-separated
set S of vertices which grows super-quadratically with respect to the vertex o, which would give
the desired contradiction. The boundary ∂N (r) has a c-separated subset Sr with at least α(r)
vertices. Since the distance between ∂N (r) and ∂N (r ′ ) is greater than c for r − r ′ ≥ c + 3, the
union S(r) = Sr ∪ Sr−(c+3) ∪ Sr−2(c+3) ∪ . . . is c-separated and
|S(r)|/r^2 ≥ (1/r^2) ∑_{0≤i≤r/(2(c+3))} α(r − i(c + 3)) ≥ (1/(2(c + 3)r)) min_{0≤i≤r/(2(c+3))} α(r − i(c + 3)) → ∞
as r → ∞ since r − i(c + 3) ≥ r/2 and the function α is super-linear by Lemma 3.9.
Remark 3.10. The proof of Theorem 3.1 for (4, 4)-maps can be easily adapted for (6, 3)- and
(3, 6)-maps. The “only if” part needs virtually no modification.
To prove the “if” part for (3, 6)-maps by contradiction, again one can use Lemma 3.6 to
uniformly bound the degrees of all faces. Then one can subdivide non-flat faces by diagonals
and obtain a quasi-isometric (3, 6)-map M ′ , where all the faces have degree 3. If two distinct
triangles of M ′ share an edge e, we say that they form a diamond with the hidden edge e. We
can view diamonds as new faces, and build analogs of corridors made of diamonds, where the
gluing edges of a diamond are not adjacent. The additional requirement is that the hidden edges
of neighbor diamonds in a corridor have no common vertices (see Fig. 13). The vertices on sides
q and q′ of a corridor B have degrees at most 4 in B. Then the statement of Lemma 3.2 holds
since every exterior vertex (except for one) of the submap N should have degree ≥ 4. Hence one
obtains the notions of infinite and semi-infinite corridors. Lemma 3.4 reduces the task
to a map M with a single non-flat vertex, and the rest of the proof of Theorem 3.1 is as above:
the quadrangles ei fi gi e_{i+1}^{-1} should be replaced by diamonds and the corridors B, C1 , C1′ , C1′′ ,...
are now built from diamonds. If now one erases the hidden edges of all these diamonds in the
quadrants Q1 , . . . , then the obtained quadrants Q′i are (4, 4)-maps quasi-isometric to the Qi . So
our task is reduced to the case of (4, 4)-maps.
Figure 13: Corridor and hidden edges in (3, 6)-maps
The case of a (6, 3)-map M can easily be reduced to (3, 6). To this end, one bounds the
degrees of faces as above, then chooses a new vertex inside every face Π and connects it with
the vertices of ∂Π. The resulting map is a (3, 6)-map which is quasi-isometric to M and has
finitely many non-flat vertices and faces.
4
Maps with angle functions
Let M be a map with an angle function (for the definition, see Section 1).
For every face Π (vertex o) we denote by ΣΠ (resp. Σo ) the sum of the angles of the corners
of Π (resp. corners at o). Note that if there are no corners at a vertex o, then Σo = 0. We
define the curvature curv(Π) of a face Π with degree d = d(Π) as ΣΠ − π(d − 2). The curvature
curv(o) of a vertex o is defined as (2 − µ(o))π − Σo , where, as before, µ(o) is the multiplicity of
o in the boundary path of M .
We denote by If (by Iv ) the sum of curvatures of the faces (vertices) of a finite map M . The
following discrete analog of the Gauss–Bonnet formula is well known, but we include its proof here
anyway.
Lemma 4.1. Let a map M with angle function have at least one edge. Then If + Iv = 2π.
Proof. Let V, E and F be the numbers of vertices, non-oriented edges and faces in M and n be
the perimeter of M . It was observed in the proof of Lemma 2.2 that n = ∑_o µ(o) (the sum
over all vertices in M ). Since ∑_Π d(Π) (the sum over all faces in M ) is equal to the number
of exterior edges of the faces in M plus twice the number of the interior edges in M , we have
2E = ∑_Π d(Π) + n = ∑_Π d(Π) + ∑_o µ(o). Hence
If + Iv = ∑_Π ((2 − d(Π))π + ΣΠ ) + ∑_o ((2 − µ(o))π − Σo )
= ∑_Π 2π + ∑_o 2π − π (∑_Π d(Π) + ∑_o µ(o)) + (∑_Π ΣΠ − ∑_o Σo )
= 2πF + 2πV − 2πE + 0 = 2π.
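As an independent sanity check (ours), Lemma 4.1 can be verified on the k × k square grid equipped with the Euclidean angle function that assigns π/2 to every corner of every unit square; only the four boundary corners of the grid carry non-zero curvature (π/2 each):

    # Illustrative check of Lemma 4.1 (If + Iv = 2*pi) on k x k square grids with pi/2 angles.
    import math

    def gauss_bonnet_grid(k):
        I_f = 0.0                                   # each face: 4*(pi/2) - pi*(4 - 2) = 0
        I_v = 0.0
        for i in range(k + 1):
            for j in range(k + 1):
                corners = sum(1 for a in (i - 1, i) for b in (j - 1, j)
                              if 0 <= a < k and 0 <= b < k)    # faces containing this vertex
                mu = 1 if (i in (0, k) or j in (0, k)) else 0  # boundary multiplicity
                I_v += (2 - mu) * math.pi - corners * (math.pi / 2)
        return I_f + I_v

    for k in range(1, 6):
        assert abs(gauss_bonnet_grid(k) - 2 * math.pi) < 1e-9
    print("Lemma 4.1 verified on square grids: If + Iv = 2*pi")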
In the next lemma, Ivi (resp., Ivb ) is the sum of the curvatures of interior (exterior) vertices
in M .
Lemma 4.2. Let M be a map of perimeter n ≥ 1 with angle function. Assume that the
curvatures of the faces and of the interior vertices of a map M are non-positive. Then nπ ≥
−If − Ivi + 2π.
Proof. On the one hand, it follows from the definition that
Ivb = ∑_{o∈∂M} ((2 − µ(o))π − Σo ) ≤ 2nπ − nπ − ∑_{o∈∂M} Σo ≤ nπ.
On the other hand, Lemma 4.1 gives us Ivb + Ivi + If = 2π. Therefore we have nπ + If + Ivi ≥ 2π,
as required.
Recall that a map M is called a (δ, b)-map for some δ > 0 and a natural number b > 0 if
(1) the curvature of every non-flat vertex or face does not exceed −δ and
(2) the degree of every face and of every vertex in M is at most b.
We denote by B(d, o) the ball of radius d centered at o in a graph G, i.e., the set of vertices
o′ of G such that dist(o′ , o) ≤ d.
The following lemma is well known and obvious.
Lemma 4.3. The inequality |B(d, o)| ≤ b^d + 1 holds for any graph where the degrees of all vertices
are at most b (hence for (δ, b)-maps).
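A quick check of Lemma 4.3 (ours) on the square grid, where all vertex degrees equal b = 4 and the ball of radius d contains 2d^2 + 2d + 1 vertices:

    # Illustrative check of Lemma 4.3 on the square grid Z^2 (maximum degree b = 4).
    b = 4
    for d in range(1, 12):
        ball = 2 * d * d + 2 * d + 1      # size of the l1-ball of radius d in Z^2
        assert ball <= b ** d + 1
    print("|B(d, o)| <= b**d + 1 holds on the grid for d = 1..11")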
Proof of Theorem 1.5. Let V be the set of vertices of M which are either exterior or non-flat or
belong to a non-flat face. From the (δ, b)-condition and Lemma 4.2, one deduces that
|V| ≤ n + δ^{-1} (−Ivi ) + δ^{-1} b(−If ) ≤ nπ + δ^{-1} bnπ = (δ^{-1} b + 1)nπ.   (4.3)
For an arbitrary vertex o ∈ V, we consider the ball B(o, r). By the assumption of the theorem,
every vertex o′ of M belongs to one of these balls. Therefore by Lemma 4.3, area(M ) ≤
|V| × (b^r + 1) ≤ (1 + δ^{-1} b)nπ(b^r + 1). Since every vertex belongs to the boundaries of at most b
faces, the inequality Area(M ) ≤ Ln follows, provided L ≥ πb(1 + δ^{-1} b)(b^r + 1).
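To make the dependence on r concrete (a numerical illustration of ours, with b = 11 as in Remark 4.4 below and an arbitrary sample value δ = 0.1), the admissible constant L(r) = πb(1 + δ^{-1}b)(b^r + 1) from the proof can be tabulated; it is exponential in r, as stated in Theorem 1.5:

    # Our numerical illustration of the constant from the proof of Theorem 1.5.
    import math

    def L_const(r, b=11, delta=0.1):      # b = 11 from Remark 4.4; delta is a sample value
        return math.pi * b * (1 + b / delta) * (b ** r + 1)

    for r in range(1, 6):
        print(r, f"{L_const(r):.3e}")      # grows roughly like b**r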
Remark 4.4. Theorem 1.5 generalizes Theorem 1.1. Indeed, it is enough to establish Theorem
1.1 for simple maps. As it was explained in Subsection 2.3, every simple (p, q)-map M can be
modified so that the new (p, q)-map M ′ satisfies Condition (B) from Section 2.3. Condition (B)
implies Condition (2) above with b ≥ 11. Moreover the area of M ′ is not smaller than the area
of M , the perimeter of M ′ is the same as the perimeter of M and the maximal distance from a
vertex to an exterior vertex or non-flat vertex or face in M ′ does not exceed that for M . The
(p, q)-map M ′ can be naturally viewed as a map with an angle function which assigns to every
corner of a d-gon face the angle π(d − 2)/d. Again, Condition (B) implies Condition (1) above with
δ = π/21. It remains to note that the function L(r) in Theorem 1.5 is exponential, as in Theorem
1.1.
References
[1] Mladen Bestvina, Bruce Kleiner, Michah Sageev, Quasiflats in CAT(0) 2-complexes. Algebr.
Geom. Topol. 16 (2016), no. 5, 2663–2676.
[2] István Fáry, On straight-line representation of planar graphs. Acta Sci Math. (Szeged),
11(1948), 229–233.
[3] S. M. Gersten, Branched coverings of 2-complexes and diagrammatic reducibility. Trans.
Amer. Math. Soc. 303 (1987), no. 2, 689–706.
[4] A. I. Gol’berg, The impossibility of strengthening certain results of Greendlinger and Lyndon. Uspekhi Mat. Nauk 33 (1978), no. 6(204), 201–202.
[5] James Howie, The quotient of a free product of groups by a single high-powered relator. I.
Pictures. Fifth and higher powers. Proc. London Math. Soc. (3) 59 (1989), no. 3, 507–540.
[6] S. V. Ivanov, P. E. Schupp, On the hyperbolicity of small cancellation groups and one-relator
groups. Trans. Amer. Math. Soc. 350 (1998), no. 5, 1851–1894.
[7] Roger Lyndon and Paul Schupp. Combinatorial group theory. Springer-Verlag, 1977.
[8] A. Yu. Ol’shanskii. The geometry of defining relations in groups, Nauka, Moscow, 1989.
[9] Stephen J. Pride, Star-complexes, and the dependence problems for hyperbolic complexes.
Glasgow Math. J. 30 (1988), no. 2, 155–170.
[10] Mark Sapir, Jean-Camille Birget, Eliyahu Rips, Isoperimetric and isodiametric functions
of groups. Ann. of Math. (2) 156 (2002), 2, 345–466.
[11] Klaus Wagner, Bemerkungen zum Vierfarbenproblem. Jahresbericht der Deutschen
Mathematiker-Vereinigung, 46 (1936), 26–32.
Alexander Yu. Ol’shanskii
Department of Mathematics
Vanderbilt University
[email protected]
and
Department of Higher Algebra, MEHMAT
Moscow State University
Mark V. Sapir
Department of Mathematics
Vanderbilt University
[email protected]
| 4 |
IEEE TRANSACTIONS ON TRANSPORTATION ELECTRIFICATION, VOL. 3, NO. 3, SEPTEMBER 2017
Modeling and Real-Time Scheduling of DC
Platform Supply Vessel for Fuel
Efficient Operation
Kuntal Satpathi, Student Member, IEEE, VSK Murthy Balijepalli, Member, IEEE,
and Abhisek Ukil, Senior Member, IEEE
Abstract— DC marine architecture integrated with variable
speed diesel generators (DGs) has garnered the attention of
the researchers primarily because of its ability to deliver
fuel efficient operation. This paper aims at modeling and
autonomously performing real-time load scheduling of a dc platform
supply vessel (PSV) with the objective of minimizing specific fuel oil
consumption (SFOC) for better fuel efficiency. Focus has been on
the modeling of various components and control routines, which
are envisaged to be an integral part of dc PSVs. Integration
with a photovoltaic-based energy storage system (ESS) has been
considered as an option to cater for short-time load transients.
In this context, this paper proposes a real-time transient simulation scheme, which comprises optimized generation scheduling
of generators and the ESS using a dc optimal power flow algorithm.
This framework considers real dynamics of dc PSV during
various marine operations with possible contingency scenarios,
such as outage of generation systems, abrupt load changes,
and unavailability of ESS. The proposed modeling and control
routines with real-time transient simulation scheme have been
validated utilizing the real-time marine simulation platform. The
results indicate that the coordinated treatment of renewable-based ESS with DGs operating at optimized speed yields better
fuel savings. This has been observed in improved SFOC operating
trajectory for critical marine missions. Furthermore, SFOC
minimization at multiple suboptimal points with its treatment
in the real-time marine system is also highlighted.
Index Terms— DC power flow, dc shipboard power system,
platform supply vessel (PSV), real-time simulation.
NOMENCLATURE
P_mech    Mechanical power at diesel engine (DE) shaft.
P_load    Load demand.
u_F    Fuel injection input signal.
k_pm    Fuel injection system gain.
τ_pm    Fuel injection time constant.
t_d    Dead-time of DE.
J    DE rotor inertia moment.
ω_G    DE and generator angular speed.
k_loss    DE rotational loss.
T_G    Torque produced by the generator.
p    Number of poles of generator.
i_Ms, i_Ts    M-T axis current of wound rotor synchronous generator (WRSG) at stator flux reference frame (SFRF).
λ_Ms, λ_Ts    M-T axis flux of WRSG at SFRF.
T_T    Thrust developed by the thrusters.
τ_T    Torque developed by the thrusters.
d_P    Propeller diameter.
ω_P    Speed of the propeller.
P_T    Power developed by the thrusters.
C_T    Thrust coefficient.
C_τ    Torque coefficient.
SOC    State of charge.

Manuscript received March 5, 2017; revised June 2, 2017; accepted August 19, 2017. Date of publication August 24, 2017; date of current version September 15, 2017. This work was supported by the National Research Foundation Singapore through the Corporate Laboratory@University Scheme. (Corresponding author: Kuntal Satpathi.)
The authors are with the School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore 639798 (e-mail: [email protected]; [email protected]; [email protected]).
Digital Object Identifier 10.1109/TTE.2017.2744180
I. INTRODUCTION
PLATFORM supply vessel (PSV) plays a major role in the marine industry because of its ability to perform cruising
and dynamic positioning (DP) operation [1]. Development of
marine integrated power systems has enabled the marine loads, especially the propulsion systems, to be powered from common generation units [2], [3]. This resulted in the reduction of
the number of installed prime movers and offered designers
flexibility to place the generation system at any suitable
location. The future operation of the marine vessels depends on
the International Maritime Organization’s air pollution requirements [4], [5]. To comply with the requirements, it is pertinent
to develop fuel efficient marine vessels, hence limiting the
exhaust gas emissions. In the conventional ac marine vessels,
shaft electric machines have been proposed to minimize the
fuel consumption [6]. DC marine systems are also proposed as
they can operate with increased fuel efficiency as compared
with the corresponding ac marine vessels [7], [8]. The lack
of critical phase and frequency synchronizing parameters
allows the interfaced DEs to run at variable speeds depending
on total load demand, thus optimizing the specific fuel oil
consumption (SFOC) and increasing the fuel efficiency [9].
Other advantages include the ease of integration with the
energy storage systems (ESSs) [10]–[12], space and weight
reductions [13], and lower losses as compared with the corresponding ac systems.
As analogous to the land-based multiterminal dc systems,
dc marine vessels are envisaged to have a two-layer control
system [14], [15]. The primary control system comprises
the dc bus voltage control [16] and active/reactive power
control [17] with suitable protection and fault management
algorithms [18]. The secondary control system comprises
the load forecasting, power flow algorithms, which will be
executed for a given state and requirements of the marine
missions [8], [19]. The secondary control system also helps
in optimized power flow by curbing out unintended power
consumption, hence minimizing the risk of blackout condition.
It also aids in proper coordination of the load and generation
system, which reduces the oversizing requirement of the
generation systems.
The generation scheduling for commercial land-based power
systems is usually done beforehand [8], which has been
extensively studied for land-based microgrids [19] and electric
vehicles [20]. Unlike land-based systems, the load profile
of marine vessels is continuously changing owing to variable propulsion load demands. Thus, the generation output
of marine vessels might be optimized for various operating scenarios. The optimized operation of the generation
systems in the marine vessel is a relatively new topic with
a limited number of research attempts made in this area.
Zahedi et al. [21] proposed the operation of DE at minimum
SFOC with the help of integrated ESS where the authors
have considered only the steady-state analysis to arrive at a
single point of minimized SFOC. Furthermore, much work
on SFOC has been reported in the domain of ac marine
vessels, such as shaft generator system integrated with the
diesel generators (DGs) for possible minimization in fuel
consumption [6], [22], agent-based real-time load management
to control the loads [23], and stochastic approaches [15], [24].
One of the key research gaps in the past research studies has
been the lack of modeling and implementation of realistic
marine loads and marine missions. Although researchers have proposed various marine operating scenarios and SFOC minimization by considering a mixed integer linear programming approach for unit commitment [25], such approaches are NP-hard (nondeterministic polynomial) and may not be the preferred choice for a real-time scheduling system, which is
the prime focus of this paper. Hence, this paper presents
modeling and control of the various components of marine
vessels incorporated in a real-time transient simulation scheme
with dc optimal power flow (OPF) algorithms utilizing reduced
bus-bar model for the scheduling of generation system. This
paper considers SFOC minimization at multiple suboptimal
points [26] as well to incorporate the real-time dynamics
of the dc shipboard systems. Furthermore, the performance
of dc OPF-based secondary control algorithm and SFOC
optimization with an option of ESS has been demonstrated
for various marine missions.
This paper considers PSV as an example of marine vessel,
which performs cruising operation for the logistics and DP
operation to support the offshore supply vessels. Apart from
the DGs, this paper also considers photovoltaic (PV)-based
ESS [10], [27] as a part of generation systems of dc PSV.
The focus is given on the integrated and automated generation
operation by incorporating dc OPF-based algorithms in the
proposed optimization framework to minimize the SFOC of
DGs with the help of scheduling of ESS. The traditional
analysis by offline simulations of the larger and more complex emerging dc marine systems is expected to consume a longer time to generate results. This approach becomes quite cumbersome for multiple test study scenarios [28].
Thus, in this paper, the transient simulation framework for the
entire dc shipboard system has been developed by utilizing
the advantages of the real-time simulation platform. The
study of the dynamics of the full shipboard power system
along with the interaction of PV/ESS with the generation
systems during contingencies has been carried out with the
proposed approach. The real-time optimal power scheduling
for generators and ESSs has been carried out for various
marine missions under different contingencies, such as sudden
load changes and network faults, to prove the efficacy of the
proposed approaches. The trajectory of the SFOC of the DEs
during the various contingencies has been studied, which could
effectively be realized by the real-time simulation platform.
Moreover, this method could be useful to validate the hardware
design, converter control, and protection algorithms of the
future dc marine vessels. Hence, the smarter generation system
of the future dc PSV is proposed, which should be able to
operate autonomously, encompassing both the primary and the
secondary control system.
The rest of this paper is organized as follows. Section II
covers the modeling of dc PSV, which comprises the generation system and various marine loads. Section III covers the operating structure of the dc PSV considered
in this paper. The transient simulation scheme with dc OPF
algorithm handling ac/dc systems to minimize SFOC has
been implemented in Section IV. The optimized generation
scheduling applied with various marine operational scenarios
is discussed in Section V. The real-time load scheduling is
achieved using OPAL-RT OP5600-based simulator and this
paper is concluded in Section VI.
II. MODELING OF DC PLATFORM SUPPLY VESSEL
The bus-breaker model of the representative dc PSV is
shown in Fig. 1. The model is coherent with commercially available PSVs [29], [30], with a two-bus architecture
to increase the reliability and survivability of the vessel.
The representative vessel comprises four DEs coupled with the synchronous generators (P_DG) and one PV-based ESS (P_ESS). The total generation capacity can be illustrated as per (1). The generators are interfaced with two-level voltage source converters (2L-VSC) acting as active front end rectifiers, and the PV-based ESS is integrated via a dc/dc converter for dc bus voltage and active/reactive power control:

P_Gen = {P_DGn, P_ESS | n = 1 : 4}.    (1)
The nominal bus voltage of dc PSV is set at 1500 Vdc which
according to the IEEE Std 1709-2010 falls under medium
voltage dc shipboard architecture [31]. As compared with
the land-based power systems, marine vessels have loads
pertaining to different marine missions. Variable frequency
propulsion system (L_propulsion) comprises main propulsion (MP) systems to cater for the cruising loads (L_CL);
tunnel thrusters (TTs) and retractable thrusters (RTs) to cater
Fig. 1. Bus breaker model of representative dc PSV.
Fig. 2. Control loop diagram for the speed regulation of the DE.
for the DP load (LDP ) [1]. The total connected propulsion
load is illustrated as L propulsion = {LC L , LDP }. The fixed
frequency hotel loads are required for air conditioning/lighting
systems, cranes/winches, small hotel motors, and so on. The
hotel loads are classified into high-power (LHLhigh ) and lowpower (LHLlow ) loads. The miscellaneous loads (Lmisc ) for
radar and pulsed load operation are also considered as it may
form the integral part of modern PSVs. The hotel loads can
be illustrated as L hotel = {LHLhigh , LHLlow , Lmisc } and the total
load L total can be expressed as follows:
L total = {LC L , LDP , LHLhigh , LHLlow , Lmisc }.
(2)
Fig. 3. (a) BSFC chart for representative 2000-kW DE [34] and (b) corresponding curve-fit of DE speed for optimized SFOC for various loading conditions.
The power rating of the components and converter systems is shown in Fig. 1. A derating factor of 125% has been used for the selection of the converters, cables, and bus bars to cater for the short time overload demands. The modeling and control of the various components are discussed in Sections II-A to II-C.

A. Generation System Modeling

1) Diesel Engine: DEs are used as prime movers for the synchronous generators in dc PSV [3]. The prime-mover model comprises the fuel injection system, a dead time (t_d) representing the elapsed time until a cylinder produces torque, and the inertia of the rotating parts. The dead-time approximation of the prime-mover is realized by an exponential delay and the transfer function is given in (3) [32], [33]. The differential equation governing the active power flow through the DE is shown in (4):

h_pm(s) = P_mech(s) / u_F(s) = [k_pm / (τ_pm s + 1)] e^(−t_d s)    (3)

J (dω_G/dt) + k_loss ω_G = (P_mech − P_load) / ω_G.    (4)

Thus, the DE controls the synchronous generator by adjusting the mechanical power output. By linearization of the power flow equation (4) around the operating point ω_G = ω_Go, the transfer function reduces to

h_r(s) = Δω_G(s) / ΔP = (1/ω_Go) / (J s + 2 k_loss)    (5)

where ΔP = P_mech − P_load. The complete block diagram for DE speed control is shown in Fig. 2. In the dc PSV, the DE should be able to operate at optimized SFOC by running at optimized speed. Fig. 3(a) represents the brake specific fuel consumption (BSFC) of the representative DE operating at different powers for different operating speeds [34]. The cost function of the optimized DE speed (C(ω)) in terms of DG power output (P_DG) is calculated to understand the operating points of the DE. This is achieved with the help of curve-fitting techniques and the cost function C(ω) is derived as shown in the following:

C(ω) = A0 + A1 P_DG + A2 (P_DG)^2 + A3 (P_DG)^3 + A4 (P_DG)^4 + A5 (P_DG)^5    (6)

where A0 = 720.93, A1 = 1.2591, A2 = −0.00292, A3 = 3.8104 × 10^−6, A4 = −2.1716 × 10^−9, and A5 = 4.5206 × 10^−13. The actual DE speed for various power requirements and the curve-fit version is shown in Fig. 3(b). The same cost function has been considered in the optimization framework presented in Section IV.

Fig. 4. Complete control loop of the DG interfaced with 2L-VSC.

TABLE I
LOAD PRIORITIES FOR DIFFERENT MARINE MISSIONS
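To make the use of the fitted cost function (6) concrete, the following Python sketch evaluates it for a few DG power outputs using the coefficients quoted above; the helper name and the sample loadings are illustrative assumptions, and the result is read as the optimized speed set point suggested by Fig. 3(b).

# Sketch: evaluating the curve-fit cost function C(omega) of (6).
# Coefficients A0..A5 are those reported above; the helper name and the
# sample loadings are illustrative assumptions.
A = [720.93, 1.2591, -0.00292, 3.8104e-6, -2.1716e-9, 4.5206e-13]

def optimized_speed(p_dg_kw):
    """Optimized DE speed set point (rpm, per Fig. 3(b)) for a DG output in kW."""
    return sum(a * p_dg_kw ** k for k, a in enumerate(A))

for p in (400.0, 1000.0, 1875.0):  # example loadings in kW (assumed)
    print(f"P_DG = {p:7.1f} kW -> speed set point ~ {optimized_speed(p):7.1f} rpm")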
2) Diesel Generator–Rectifier Control: DG includes full
M − T model of WRSG at stator flux reference frame (SFRF)
with both the field and damper windings [35]. The output
torque of the shipboard DG is maintained by independently
controlling torque producing current, i T s and machine flux,
λ Ms , which is achieved by maintaining i Ms = 0 [16], [36].
With reference to [16], in this paper, the M-axis flux control is
done by implementing flux estimation-based method in which
the flux of the machine is estimated and suitably controlled for
varying operating conditions [16], [36]. The dc bus voltage and
line current control is realized by implementing PI regulators having bandwidths of 100 Hz and 1000 Hz, respectively; such a range is suitable for simulating in real-time operations
as well. The switching frequency of the VSC is chosen to
be 5 kHz and the VSC is modeled in both the average and
detailed switching models for comparative studies in the real-time simulation environment. The plant transfer function for
dc bus voltage control is calculated by equalizing the input
and output power flow while neglecting the line losses which
is shown in (7). The plant transfer function for current control
is calculated from the leakage inductance (L s ) and stator
resistance (Rs ) of the interfaced WRSG. The plant transfer
functions for voltage control loop (G V ) and current control
loop (G I ) are shown in (8). The combined control loop
representation of the DG system interfaced with 2L-VSC is
shown in Fig. 4.

P_DG = T_G · ω_G ⇒ (3p/4) i_Ts λ_Ms ω_r = C (dv_dc/dt) v_dc + v_dc I_L    (7)

G_V = (3 p λ_Ms ω_r) / [4 (s C V_dc + I_L)],    G_I = 1 / (s L_s + R_s).    (8)
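The cascaded structure of Fig. 4 can be sketched in a few lines of code. The snippet below implements a discrete PI regulator with an outer dc-bus-voltage loop feeding an inner current loop; the gains, limits, and sample time are illustrative assumptions, not tuned values from the paper.

# Sketch: cascaded PI control of the dc bus voltage (outer loop) and the
# torque-producing current i_Ts (inner loop), as in Fig. 4. Gains, limits,
# and the sample time are assumed for illustration only.
class PI:
    def __init__(self, kp, ki, ts, limit):
        self.kp, self.ki, self.ts, self.limit = kp, ki, ts, limit
        self.integral = 0.0

    def update(self, error):
        self.integral += self.ki * error * self.ts
        # simple anti-windup clamp on the integrator and the output
        self.integral = max(-self.limit, min(self.limit, self.integral))
        return max(-self.limit, min(self.limit, self.kp * error + self.integral))

TS = 1e-4                                                 # control step, s (assumed)
voltage_loop = PI(kp=2.0, ki=50.0, ts=TS, limit=500.0)    # targets ~100 Hz bandwidth
current_loop = PI(kp=5.0, ki=200.0, ts=TS, limit=1.0)     # targets ~1 kHz bandwidth

def control_step(v_dc_ref, v_dc, i_ts):
    """Outer loop yields the i_Ts reference; inner loop yields the converter command."""
    i_ts_ref = voltage_loop.update(v_dc_ref - v_dc)
    return current_loop.update(i_ts_ref - i_ts)

print(control_step(v_dc_ref=1500.0, v_dc=1480.0, i_ts=120.0))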
B. Marine Loads
The dc PSV comprises propulsion systems, thruster systems, hotel loads, and miscellaneous loads such as pulsed loads
to undertake different marine missions. Prioritization of the
operation of these loads is dependent on the marine missions
undertaken by the vessel [37] and a sample priority table
considered in this paper is shown in Table I. Modeling of these
loads is required to understand the power consumption pattern,
which would eventually be necessary for the scheduling of
generation sources. The modeling and control of different
loads are discussed in Section II-B2 and II-B3.
1) Propulsion Systems: In the PSVs, the propulsion systems (L propulsion) are the main consumers of energy, which
undertake cruising and DP operation. The power requirement
during the cruising operation (LC L ) is dependent on the
operating speed of the PSV (ω P ) as shown in the following:
L_CL ∝ ω_P^3.    (9)
The power requirement during DP operation (LDP ) carried
out by PSVs is primarily dependent on the environmental
forces and desired coordinate locations [38]. The sea current,
wind velocity, surge, and sway of the vessels have to be
balanced by the thrust produced by the thruster systems in
order to maintain the desired coordinates. The generalized
schematic of the DP system is shown in Fig. 5 [39]. The thrust
production of the propeller is dependent on the speed (ω P ),
propeller geometry (α), and hydrodynamic quantities (β). The
thrust (T ) and the torque (τ ) developed by the thrusters for
speed (ω P ) and diameter of the propeller (d P ) are given as
follows [38], [39]:
T_T = g_T(n, α, β) = C_T ρ d_P^4 ω_P^2    (10a)
τ_T = g_τ(n, α, β) = C_τ ρ d_P^5 ω_P^2    (10b)
TABLE II
TOTAL PV INSTALLED CAPACITY USING SUNPOWER 305 SOLAR PANEL
In this paper, the pulsed load duration is selected to be 20 ms. The pulsed power load can be illustrated by the following:

L_pulse = (1/T) ∫_{t1}^{t2} P_o dt.    (12)
Fig. 5. Schematic of DP of PSV [38], [39].
where C_T and C_τ are determined by open-water tests for submerged vessels and are dependent on propeller advance velocity. In this paper, C_τ = 0.56, density of water ρ = 997 kg/m^3, and d_P = 3.5 m are considered. The power consumed by the thrusters of the DP system is shown in the following:

L_DP = 2π n τ = C_τ ρ d_P^5 ω_P^3.    (11)
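The propeller relations (10a), (10b), and (11) map directly to code. The sketch below uses the C_τ, ρ, and d_P values quoted in the text, while C_T and the sample propeller speeds are assumptions for illustration.

import math

# Sketch: thrust, torque, and DP power demand from (10a), (10b), and (11).
# C_tau, rho, and d_P follow the text; C_T and the sample speeds are assumed.
C_T, C_TAU = 0.50, 0.56      # thrust / torque coefficients (C_T assumed)
RHO = 997.0                  # water density, kg/m^3
D_P = 3.5                    # propeller diameter, m

def thruster_loads(omega_p):
    """Return (thrust N, torque N*m, power W) for propeller speed omega_p in rev/s."""
    thrust = C_T * RHO * D_P ** 4 * omega_p ** 2       # (10a)
    torque = C_TAU * RHO * D_P ** 5 * omega_p ** 2     # (10b)
    power = 2.0 * math.pi * omega_p * torque           # (11), first equality
    return thrust, torque, power

for n in (0.4, 0.6, 0.8):    # example propeller speeds, rev/s (assumed)
    t, q, p = thruster_loads(n)
    print(f"n = {n:.1f} rev/s: thrust {t/1e3:6.1f} kN, torque {q/1e3:6.1f} kNm, power {p/1e3:7.1f} kW")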
For the worst weather conditions, the power demanded by
the DP system to maintain desired coordinates would be
significantly higher than that of the power demand during the
calm weather condition. Direct torque control is chosen over
field-oriented control for propulsion motors and DP thrusters
for its fast and superior performance and limited dependence
on the machine parameters [40].
2) Hotel Loads: The percentage of the hotel loads (L hotel )
to the total load depends on the type of marine vessels.
In PSVs, installed hotel load is much lower than the propulsion
loads, but it is an important part of dc PSV. As shown
in Fig. 1, two types of house loads are considered in this
paper. The high power hotel loads (LHLhigh ) supplying power
to cranes/winches and air-conditioning/humidifiers have a
cumulative rating of 3200 kVA, 440 Vac, and operates at
60-Hz frequency. The low-power hotel loads (LHLlow ) have
a cumulative rating of 400 kVA, 230 Vac, and operates at
60-Hz frequency and are responsible for small hotel motors
and lighting loads. Two level voltage source inverter with a
constant output of 690 Vac, 60 Hz is utilized for the house
loads.
3) Miscellaneous Loads: Miscellaneous loads (L_misc) comprise the pulsed and radar loads. Pulsed loads are present in modern naval vessels, where they are used in electromagnetic guns, free electron lasers, radars, and high energy lasers, and draw a huge amount of current lasting for a short period of time. This intermittent nature of the loads has an effect on the stability of the generation sources [41]. The pulsed load
duration may vary from few microseconds to milliseconds.
C. Energy Storage System
Future autonomous vessels are expected to be operated with
different forms of renewables. Here, in this paper, it is assumed
that battery-based ESS (BESS) units are supplied from the
PV-based renewable source, which are interfaced with the dc
PSV to cater for the short time load requirement and power
fluctuations of the shipboard system. Such ESS also acts as
reserve generation supply during the contingencies or sudden
change of load. Unlike land-based power systems where
energy is stored in the ESS when its cost is low and release the
same when the grid electricity cost is high [19], here, there is
no such variations in the cost. Electricity stored in the BESS
is based on the marginal cost of the power supplied from the
DGs. Cost function considered for the BESS is a constant price
with the maximum power transfer based on the state of SOC
as depicted in the following:
Cess = f p /kwh.
(13)
The operational capacity of the ESS is restricted to 10% of
the total installed generation capacity as illustrated in the
following:
P ESS = 0.1 ∗ PGen .
(14)
The PV-BESS is set to operate in optimal mode by limiting
the battery usage between 20% and 100% of total storage
capacity; furthermore, the scheduling constraints are imposed
accordingly. The selection, sizing, and schematic of the
PV-BESS are described in the following.
1) PV Energy Sources in Marine Vessels: As per the green
ship initiative, combination of PV-DG-based generation systems is expected to be part of future marine vessels [10]. The
rating and capacity of the PV panel are dependent on the
available space in the target marine vessel [27]. For marine
vessels undertaking longer voyages, e.g., liquefied natural
gas carriers having easier accessibility to roof top terrace;
proliferation of PV-based generation system with 20% capacity
of the total generation system has been suggested [10], [27],
[42]. This paper considers the PSV with size and power
rating comparable with the commercially available PSVs, such
as Rolls-Royce UT776 [29] and Viking Queen [30]. These two
TABLE III
PARAMETERS OF THE INTERFACED BESS
Fig. 6. (a) Schematic of PV-based BESS (PV-BESS) and (b) representation
of the battery charging schemes.
PSVs are equal in size and rating according to the description
provided in the whitepaper [29], [30]. The deck area of the
PSV is of commercial interest and cannot be utilized for PV
installation [27]. Thus, the PSV has limited available space
for PV array installation. It has been assumed that the 600 m2
of the total available area of 1800 m2 [29], [30] is available
for installing PV arrays. Considering the parameters of the
commercially available Sunpower 305 Solar Panel [43] with
the available installation area of 600 m2 , the rating of total
installed PV capacity is described in Table II. The economic
analysis of the PV panel is dependent on several market parameters and specifications, but it is expected to be consistent
with the method provided in [42]. The comprehensive cost
analysis of PV panels in dc PSV is currently beyond the scope
of this paper.
2) Sizing of PV-BESS System: According to the IEEE
Std 1562-2007 [44] and the IEEE Std 1013-2007 [45], the sizing of the BESS connected to the PV is determined while
assuming that there is no power available from the PV system.
The BESS is installed in the dc Shipboard Power System with
the intention of fulfilling the intermittent loads and support the
generation system during various contingencies [8]. The PV
power is primarily used to charge the BESS and maintain its
SOC at maximum possible level. The selection of the BESS
has been done to minimize the weight and size constraints of
the dc marine vessels [8]. Furthermore, the 10% power level of
BESS is chosen to make it consistent with the trends of BESS
selection in commercially available marine vessels [46]. The
parameters of the BESS are chosen according to the commercially available SAFT Seanergy modules, which are suitable
for hybrid propulsion applications [47]. The parameters of the
battery module and the battery pack considering 10% of power
demand are shown in Table III.
3) Schematic of PV Interfaced BESS: The schematic of
the PV and BESS interfaced with the dc bus is shown as
per Fig. 6(a) [48]. To extract the maximum power, the PV
panel is interfaced with the unidirectional dc/dc-1 converter
which works on perturb and observe (P&O)-based maximum power point tracking (MPPT) algorithm [49] for the
proposed real-time transient simulation scheme. The modeling
of the PV generator and the MPPT algorithm is consistent
with strategy discussed in [49]. The dc/dc-2 converter is a
bidirectional converter used to interface PV-BESS system to
the dc bus. The modeling and control of dc/dc-2 is consistent with the approach provided in [7]. During the normal
operation, when the SOC of the battery is above the threshold limit (SOC_max), the dc/dc-2 operates in boost conversion mode supplying power to the dc ship while fulfilling scheduled generation requirements and complying with (15a)–(15h). The
variables in (15a)–(15h) are consistent with annotations shown
in Fig. 6(a)
P_PV > 0    (15a)
P_Batt > 0    (15b)
P_ESS > 0    (15c)
i_PV_dc > 0    (15d)
i_batt > 0    (15e)
i_dc_i = i_batt + i_PV_dc > 0    (15f)
i_ESS > 0    (15g)
P_ESS = P_batt + P_PV.    (15h)
When the SOC of the BESS is below lower threshold (SOCmin ), it could either be charged exclusively by
the PV system or by the combination of PV system and
dc/dc-2 converter. Since the power output of the PV array has
limitation owing to dependence on available irradiation, charging with PV panel would result in slow charging as shown
in Fig. 6(b). Constant current-based fast charging of the BESS
can be carried out by maintaining output current of dc/dc-2 at
desired charging rate suggested by the manufacturers. During
the charging operation supported by both PV panels and
dc/dc-2, (16a)–(16h) are satisfied
P_PV > 0    (16a)
P_Batt < 0    (16b)
P_ESS < 0    (16c)
i_PV_dc > 0    (16d)
i_batt < 0    (16e)
i_dc_i = i_batt − i_PV_dc < 0    (16f)
i_ESS < 0    (16g)
P_batt = P_ESS + P_PV.    (16h)
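A minimal sketch of the charge/discharge mode selection described above is given below: depending on the SOC window, the dc/dc-2 either discharges the PV-BESS toward the dc bus with the sign conventions of (15a)–(15h) or charges it as in (16a)–(16h). The SOC window follows the 20%–100% limit stated earlier; every other number and name is an assumption.

# Sketch: PV-BESS operating-mode selection mirroring the sign conventions of
# (15a)-(15h) (discharge toward the dc bus) and (16a)-(16h) (charging).
# The SOC window follows the 20%-100% limit in the text; all other numbers
# and names are illustrative assumptions.
SOC_MIN, SOC_MAX = 0.20, 1.00
P_ESS_MAX = 400.0        # kW, roughly 10% of installed generation (assumed value)
P_FAST_ASSIST = 150.0    # kW drawn through dc/dc-2 during fast charging (assumed)

def ess_mode(soc, p_pv, p_sched, fast=False):
    """Return (mode, p_ess): p_ess > 0 means discharge, p_ess < 0 means charge.

    soc     -- battery state of charge, 0..1
    p_pv    -- PV power currently available, kW
    p_sched -- ESS discharge power requested by the scheduler, kW
    fast    -- request constant-current fast charging via dc/dc-2
    """
    if soc <= SOC_MIN or (p_sched <= 0.0 and soc < SOC_MAX):
        # charging: PV alone (slow) or PV assisted by dc/dc-2 (fast), cf. (16)
        p_charge = -(p_pv + P_FAST_ASSIST) if fast else -p_pv
        return ("fast_charge" if fast else "slow_charge"), p_charge
    # healthy SOC with a nonzero schedule: discharge toward the bus, cf. (15)
    return "discharge", min(p_sched, P_ESS_MAX)

print(ess_mode(soc=0.15, p_pv=120.0, p_sched=0.0, fast=True))
print(ess_mode(soc=0.80, p_pv=120.0, p_sched=399.32))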
III. OPERATION OF DC PLATFORM SUPPLY VESSEL
Installed generation capacity of the PSV is generally lower
than the total loads connected to the system. This is because
a defined set of loads is activated for a particular marine
mission as indicated in Table I. For cruising operation, PSV
operates mostly in fixed speed condition and for DP mode,
the operating speed may change depending on the environmental conditions. Hence, the brake power of PSV for DP
TABLE IV
LINE PARAMETERS OF DC PSV
Fig. 7. Brake power of PSV for DP operation.
TABLE V
BUS DATA OF DC PSV
Fig. 8. Reduced bus-branch model of representative dc PSV.
operation is not constant as shown in Fig. 7, where the vessel
mostly operates between 20% and 80% of the total installed
brake power [34]. Thus, incorporating a dc OPF algorithm and operating the vessel at minimum SFOC can be implemented to increase the fuel efficiency. The dc OPF reduced bus-branch model, segregating the ac and dc subsystems with an ac/dc-dc/ac boundary node, is shown in Fig. 8. Generators
Gen-1 and Gen-3 are clubbed together and are connected to
bus B1. Similarly, the Gen-2 and Gen-4 are clubbed together
and connected to bus B2. The total generation capacity (PGen )
of the DGs and the ESS with the bus they are interfaced to is
illustrated in the following:
P_Gen = {P_B1^Gen13, P_B2^Gen24, P_B14^ESS}.    (17)
The loads L_CL, L_DP, L_HLhigh, L_HLlow, and L_misc interfaced with the respective buses are depicted in the following:

L_CL = {L_B3^MP1, L_B7^MP2}    (18a)
L_DP = {L_B12^TT1, L_B16^TT2, L_B5^RT}    (18b)
L_HLhigh = {L_n^HL1 | n = B8 to B11}    (18c)
L_HLlow = {L_m^HL2 | m = B17 to B20}    (18d)
L_misc = {L_B13^PL, L_B15^FL}.    (18e)
IV. PROPOSED REAL-TIME DC PSV POWER MANAGEMENT SYSTEM
A. Problem Statement
As explained in Section I, operation of dc PSV demands
real-time scheduling mechanisms to tackle different loads and
generation sources in various operating conditions. The real-time transient simulation scheme with dc OPF is more suitable for such conditions; it determines the power injections of the DGs and ESS to minimize the SFOC in real time, subject to physical and operational constraints (relevant
data in Tables IV and V). Equality constraints include power
balance at each node and inequality constraints include the network operating limits, DG limits, ESS limits, and limits on the
other control variables. These control variables include active
power output of the generators, power electronic controls,
amount of load disconnected, and the status of storage devices.
Hence, subsequent to the modeling of the dc PSV, the goal of this paper is a real-time dc OPF that minimizes SFOC while considering all control and state variables within a real-time optimization framework.
B. Problem Formulation
The real-time transient simulation system for the generation
scheduling of the dc PSV is governed in such a way to
SATPATHI et al.: MODELING AND REAL-TIME SCHEDULING OF DC PSV FOR FUEL EFFICIENT OPERATION
effectively utilize the available resources onboard to minimize
the SFOC of the DGs. In this process, the optimization
considers the set constraints as follows:
Minimize SFOC: F(P_B1^Gen13, P_B2^Gen24, P_B14^ESS) = f(x, u)    (19)
s.t. w(x, u) = 0    (20)
q(x, u) ≤ 0    (21)

where the cost function referring to the active power of the energy resources is minimized while respecting the equality constraints w(x, u) and inequality constraints q(x, u). These constraints can be viewed as linear and nonlinear constraints

w(x, u) = [w_nl(x, u); J_e(x, u) + o_e]    (22)
q(x, u) = [q_nl(x, u); J_i(x, u) + o_i]    (23)
where Je and Ji are constants and need to be calculated only
once. State variables of converter and dc network are set up to
this framework. Energy storage and dc side converter power
feed-in are mapped to the corresponding ac buses to satisfy
the Kirchhoff law.
The vector x consists of dependent variables such as fixed
parameters such as reference angles, noncontrolled generator or ESS outputs, noncontrolled loads, and line parameters.
The vector u consists of control variables, including real
power generation, PSV load shedding parameters/priorities,
ESS charging and discharging limits, ramp rates of the DG,
dc line flows, and converter control settings. The equality and
inequality constraints are, namely, power flow equations, limits
on all control variables, generation/load balance, branch flow
limits, and SOC limits. Considering Fig. 8, which represents
the reduced bus-bar model of dc PSV (having DGs, ESS, and
different types of loads), for an anticipated group of loads,
total system generation should be scheduled in such a way
to minimize the SFOC of DG. In such cases, the network
equality constraints are represented by the standard load flow
equations [14]. PSV load balance equation is as follows in the
real-time operation:
Σ_{i=1}^{B} (P_Bi^Gen + P_Bi^ESS) − Σ_{i=1}^{D} (P_Di^L + P_Di^ESS) − P_Losses = 0.    (24)
Inequality constraint limits are set accordingly; for example, generator limits are set as

P_Bi,min^Gen ≤ P_Bi^Gen ≤ P_Bi,max^Gen    (25)
Q_Bi,min^Gen ≤ Q_Bi^Gen ≤ Q_Bi,max^Gen.    (26)

Load shedding or load balancing limits have been set as

0 ≤ L_Bi^shed ≤ L_Bi^Dtotal.    (27)

Energy storage limits are set as

P_Bi,min^ESS ≤ P_Bi^ESS ≤ P_Bi,max^ESS.    (28)

Converter voltage limits on the ac side are nonlinear in nature and can be set as

V_conv,min^2 ≤ V_R,conv^2 + V_I,conv^2 ≤ V_conv,max^2.    (29)

Converter filter side constraints are as follows, where V_R and V_I are real and imaginary parts of the voltage:

V_filter,min^2 ≤ V_R,filter^2 + V_I,filter^2 ≤ V_filter,max^2    (30)

and the limits on the converter current and dc voltages are linear in nature and are as follows:

V_min^dc ≤ V^dc ≤ V_max^dc    (31)
I_conv,min ≤ I_conv ≤ I_conv,max.    (32)
q(x, u) in (23) is formed by (25)–(32) as stated previously.
It is to be noted that the voltage and branch flow limits are
the only nonlinear limits on the ac side. Options of setting
branch limits, and other operational limits in the dc PSV are
implemented as well, but skipped for better readability of this
paper.
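As a heavily simplified illustration of the scheduling problem (19)–(28) on the reduced bus-branch model, the sketch below minimizes a speed-based fuel cost for the two clubbed DG groups and the ESS subject to a power balance, using a generic SLSQP solver. The ac-side constraints (29)–(32) are omitted, the objective is only a proxy for SFOC, and the demand and ESS bound are assumptions, so this is not the paper's optimization suite; the 2048 kW generator limit follows the value quoted in Section V.

import numpy as np
from scipy.optimize import minimize

# Sketch: reduced scheduling problem in the spirit of (19)-(28).
# Decision vector x = [P_B1_Gen13, P_B2_Gen24, P_B14_ESS] in kW. The demand,
# ESS bound, and the speed-based proxy objective are assumptions; losses and
# the ac-side constraints (29)-(32) are ignored.
A = [720.93, 1.2591, -0.00292, 3.8104e-6, -2.1716e-9, 4.5206e-13]

def speed(p_dg):                     # curve-fit speed C(omega) from (6)
    return sum(a * p_dg ** k for k, a in enumerate(A))

def objective(x):                    # proxy objective: sum of DG speed costs (assumed)
    return speed(x[0]) + speed(x[1])

demand = 2875.0                      # total load to be balanced, kW (assumed)
constraints = [{"type": "eq", "fun": lambda x: x[0] + x[1] + x[2] - demand}]   # (24)
bounds = [(0.0, 2048.0), (0.0, 2048.0), (0.0, 400.0)]                          # (25), (28)

res = minimize(objective, x0=np.array([1400.0, 1400.0, 75.0]),
               bounds=bounds, constraints=constraints, method="SLSQP")
print("schedule [kW]:", np.round(res.x, 1), " objective:", round(float(res.fun), 1))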
Power generation schedules obtained from the optimization
framework have been fed to the SFOC calculation block to
calculate the speed (C(ω)) based on the equation derived in
Section II, where speed is derived as a function of generator
schedules
C(ω) = f(P_DG).    (33)
SFOC at each optimized speed point corresponding to the
power schedules has been calculated using the SFOC lookup
table available in the SFOC calculation block as shown
in Fig. 9(a).
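The SFOC calculation block of Fig. 9(a) can be thought of as a table lookup: the speed set point follows from (6)/(33) and the SFOC is interpolated from a BSFC map such as Fig. 3(a). The grid and table values below are placeholders, not data from the paper.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Sketch of the SFOC calculation block of Fig. 9(a): interpolate SFOC (g/kWh)
# on a (speed, power) grid. Grid points and table values are placeholders,
# not data from the paper.
speed_grid = np.array([1000.0, 1200.0, 1400.0, 1600.0, 1800.0])   # rpm
power_grid = np.array([250.0, 500.0, 1000.0, 1500.0, 2000.0])     # kW
sfoc_table = np.array([                                            # g/kWh (placeholder)
    [230.0, 220.0, 212.0, 210.0, 211.0],
    [225.0, 214.0, 206.0, 204.0, 206.0],
    [222.0, 210.0, 202.0, 200.0, 203.0],
    [224.0, 211.0, 201.0, 198.0, 201.0],
    [228.0, 215.0, 204.0, 200.0, 202.0],
])
sfoc_lookup = RegularGridInterpolator((speed_grid, power_grid), sfoc_table)

def sfoc_at(speed_rpm, p_dg_kw):
    """Interpolated SFOC for one DG operating point."""
    return float(sfoc_lookup([[speed_rpm, p_dg_kw]])[0])

print(sfoc_at(1500.0, 937.5))   # e.g. one DG carrying half of a 1875 kW schedule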
C. Real-Time Transient Simulation of DC PSV
The architecture of the real-time transient simulation setup
comprising a generator scheduling scheme based on dc OPF
is shown in Fig. 9(a). Depending on the operating mode,
the scheduling block takes the input from the operating personnel. The available generation is also fed to calculate the
reserve generation capacity and setting the upper limits of the
generation system. Load estimation is the critical step where
the rate of the load changes has been assessed and passed as an
input to the proposed algorithm. Tabulation methods for electrical loads during marine missions which are proposed in [50]
are adopted in this paper. Line parameters (Table IV) and busbar parameters (Table V) are fed to ensure that the loading in
each line/bus stays within prescribed limits. The scheduled
generation is calculated and fed to the SFOC optimization
block to calculate the power demand and corresponding speed
set point of each of the in-line DGs. The scheduling block and
the SFOC algorithm are the part of the controller, while the rest
of the system of dc PSV system is divided into master/slave
computational subsystems and loaded into computing cores of
OPAL-RT OP5600-based real-time simulator. The segregation
of cores of the real-time simulation model has been realized
with the help of gyrators and the partitioning contours of the
divided subsystems are shown in Fig. 9(a). The description of
the real-time simulation system is described in the following.
Fig. 9. (a) Architecture of the real-time load scheduling of dc PSV. (b) Schematic in MATLAB/Simulink environment. (c) Representative OPAL-RT setup
for real-time simulation.
Fig. 10. (a) Representation of gyrators for partitioning between subsystem-1
and subsystem-2. (b) Bond graph structure of a gyrator.
1) Overview of Real-Time Simulation System: The real-time
simulation is conducted on the OPAL-RT-based OP5600 real-time simulator, which operates on a RedHat Linux-based operating system and is interfaced with the host PC by a TCP/IP
cable. The setup of the real-time simulation system is shown
in Fig. 9(c). OPAL-RT uses RT-LAB-based real-time platform,
which facilitates the conversion of MATLAB/Simulink models into real-time executable models [55]. It has dedicated
toolboxes, such as RT-Events, RTE-drive, and ARTEMiS,
to support the real-time simulation system [55], [56]. The
execution of the model is achieved by ARTEMiS solver, which
is a high-order time-step integration algorithm and is not prone
to numerical oscillations [55], [56]. The minimum time step
available for real-time simulation in the dc transient real-time
simulation model is 10 μs. To comply with such requirements,
all the interfaced converters are operated with a switching
frequency of 5000 Hz. Furthermore, all the results obtained
with the switching models are compared with the averaged
models to analyze the performance of VSCs under such time
step limitations as well. The partitioning of system using
gyrators helps in avoiding numerical inaccuracies by ensuring
parallel computation of the partitioned subsystems.
2) Gyrator-Based Partitioning of System: Gyrator is an
ideal energy transducer used for bond graph representation of a physical system [51]–[53]. This method has been
Fig. 11. Gyrator-based system partitioning for "n" number of elements.
used for partitioning of the bigger marine dc power system into smaller subsystems for parallel computation in
real-time transient simulation framework. With reference
to Fig. 10(a) and (b), the bigger system is divided into
Subsystem-1 and Subsystem-2 with the help of gyrator, G r Y
while satisfying the following:
V1 = f(I2)    (34a)
V2 = f(I1).    (34b)
From 34(a) and 34(b), it can be implied that current I2 in Subsystem-2 is dependent on the voltage V1 of
Subsystem-1 or vice versa. This approach can be realized
by implementing dependent current and voltage sources. The
partitioning of subsystems for “n” number of elements utilizing gyrator-based partitioning approach is shown in Fig. 11.
In Figs. 10(a) and 11, a very high value resistance (RT ) is
placed to ensure the numerical consistency of the simulation
and a memory block is used to avoid algebraic loop errors.
The measured current and voltage between the computational
subsystems are transferred using the OpComm block [55], [56]. With
the gyrator-based approach, the entire dc marine system is
divided into four subsystems (three computational subsystems
Fig. 12. Representative MATLAB/Simulink model into partitioned subsystems for real-time simulation in OPAL-RT.
and one console subsystem). The computational subsystems
comprise one master (SM_Generator) and two slave
subsystems (SM_Bus1Load and SM_Bus2Load). The console subsystem (SC_Console) is the user interface for data
logging. The final partitioned executable file for transient real-time simulation is shown in Fig. 12.
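To illustrate the gyrator-style decoupling of (34a) and (34b), the sketch below steps two simple subsystems that see each other only through one-step-delayed dependent sources, which play the role of the memory block that breaks the algebraic loop; R_T stands in for the large resistance mentioned above. All component values are assumptions, and the coupling is only a rough stand-in for the paper's implementation.

# Sketch: two subsystems exchanging voltage/current through delayed dependent
# sources in the spirit of (34a)-(34b). Subsystem-1 is a Thevenin source,
# Subsystem-2 a resistive load; the one-step delay breaks the algebraic loop.
# All parameter values are assumed for illustration.
E1, R1 = 1500.0, 0.05     # Subsystem-1: source voltage (V) and series resistance (ohm)
R2 = 2.0                  # Subsystem-2: load resistance (ohm)
R_T = 1.0e6               # large shunt resistance for numerical consistency

i2_prev, v1_prev = 0.0, 0.0
for _ in range(200):
    v1 = E1 - R1 * i2_prev             # Subsystem-1 sees the delayed current: V1 = f(I2)
    i2 = v1_prev / R2 + v1_prev / R_T  # Subsystem-2 is driven by the delayed voltage
    i2_prev, v1_prev = i2, v1

print(f"coupled steady state: V1 ~ {v1:.1f} V, I2 ~ {i2:.1f} A")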
3) Obtaining Results: The output results from the real-time
simulator have been obtained by: 1) monitoring scopes in the
console subsystem (SC_Scope); 2) by viewing the results in
the monitoring oscilloscope; and 3) by saving the data in .mat
file by OpWrite block [55], [56] for offline analysis. All the
results presented in this paper are obtained by processing
.mat files. The results obtained from the oscilloscope are
also presented in Section V for comparison with the offline
analyzed results.
D. Algorithm for Real-Time Transient Simulation
Pseudocode for real-time generation scheduling of dc PSV
for minimized SFOC is shown in Algorithm 1. It presents the
basis for real-time transient simulation scheme in the context
of handling various marine missions for minimized SFOC.
V. REAL-TIME SIMULATION RESULTS
With reference to the operational aspects of the proposed dc
PSV power management system, this section presents the simulation results of various cases of operating modes and associated contingencies in the real-time operation. The various
contingencies associated with dc PSV are listed in Table VI,
which has been prepared considering both availability and
unavailability of 10% ESS described in Section II-C. From
Table VI, it can be observed that the generator and ESS
output are marked in red for some specific contingencies. This
indicates the overload capabilities of the generation system
for supplying high power output for short durations [54].
In the proposed optimization framework, such relaxation has
been set for the PSV generation systems to emulate real-time
characteristics.
A. PSV Operating Modes and Associated Contingencies
1) Dynamic Positioning Operation: For the DP operation,
LC L is set to zero, while LHLhigh and LHLlow are set at
1000 and 270 kVA, respectively, and Lmisc is set to the
rated value. As described previously, the value of LDP is
dependent on the weather conditions, propeller design, and
Algorithm 1 Pseudocode of Scheduling for DC PSV
1: – Read generation data, network data, load estimation data,
static load data, SOC of ESS, and other PSV parameters.
2: – Define operating limits based on the shipboard real-time
marine missions.
Eq. (23)-(30).
3: – Build initial Z-bus by handling isolated nodes, if any.
Table II
4: – Create incidence matrix for the existing shipboard network.
5: - Initialize the proposed Optimization Suite.
Eq. (17)-(21)
6: while ((Error tolerance for power) < Set limit) do
7:
while (All options are not processed) do
8:
- Find X bus with power injection matrices
9:
- Calculate power flow, line outage conditions (if
any), SOC of ESS
10:
while (All scheduling options are not processed) do
11:
- Initialize power calculation process of each generator considering different sub-optimal points
12:
end while
13:
- Store optimized scheduling options
14:
- Treatment of sub-optimal points. Section V-C
15:
- Evaluate objective function f
Eq. (6), (13)
16:
end while
17: for (each optimized schedule Pi, i = 1, 2, . . ., no. of generators) do
18:
(a) Create speed vector from scheduled generations Pi ;
19:
(b) Evaluate the schedules for minimized SFOC
20: Check whether the error criterion is met. Otherwise, the appropriate speed is chosen from the corresponding generator speed vector.
21:
end for
22: end while
23: - Print the scheduling plans
characteristics. Under normal weather conditions, L_DP is set at a lower value with all the thrusters operating at lower loading conditions as shown in (35a). On the contrary, at harsh weather conditions, the thrusters are set to operate at higher loading conditions as shown in (35b):
L_DP,low = {L_B12^TT1, L_B16^TT2, L_B5^RT} = {−300 kW, −300 kW, −300 kW}    (35a)
L_DP,high = {L_B12^TT1, L_B16^TT2, L_B5^RT} = {−800 kW, −800 kW, −800 kW}.    (35b)
Various contingency cases have been prepared, such
as loss of generation system and unavailability of ESS
(inadequate SOC) at low DP load and high DP load conditions.
Utilizing the transient simulation framework formulated in
Section IV, the desired power and corresponding operating
speed set point of the generation systems PGen for all the
contingency cases have been listed in Table VI. During the
fault at Bus-2 (P_B2^Gen24 = 0) with harsh weather conditions, power demand to the ESS (P_ESS) exceeds its rated capacity
which cannot be suitably fulfilled. This inadequate generation
availability can be mitigated by load shedding operation by
TABLE VI
CONTINGENCY LIST
referring to Table I. However, the ESS helps in optimized generation scheduling when the Bus-2 is isolated during low load
DP operation and also during sudden gain/loss of DP loads,
which is indicated in blue in Table VI. The results pertaining
to sudden gain in DP load as per Case 1A of Table VI are
shown in Fig. 13. The path of transition of the operating point
of the optimized SFOC from initial set point to the final set
point has been traced in Fig. 13(d). The trajectory of the SFOC
for fixed speed operation is also plotted for comparison with
the proposed operation methodology. From Fig. 13(d), it can
be inferred that there is substantial reduction of SFOC with
the proposed method. The reduction in SFOC is 19% when
the DGs are allowed to operate in optimized speed rather
than fixed speed during low load DP mode. The operating
regime of the DG exceeds the prescribed contour of operation
because of the abrupt increase in the load. However, in the real
system, the load transition is expected to be smoother rather
than sudden abrupt changes. Nevertheless, the initial and final
SFOC is optimized and lies within stable region. The output
from the monitoring oscilloscope is presented in Fig. 13(e) for
comparison with the offline results.
2) Cruising Mode Operation: For the cruising operation,
LDP is set to zero, while LHLhigh , LHLlow , and Lmisc are set to
values similar to those of the DP operation. The L_CL primarily depends
on the operating speed of the vessel. For low cruising speed,
the loading of the MP systems is given in 36(a), while for
higher speeds, the loading of the MP systems is given in 36(b)
L_CL,low = {L_B3^MP1, L_B7^MP2} = {−1000 kW, −1000 kW}    (36a)
L_CL,high = {L_B3^MP1, L_B7^MP2} = {−2500 kW, −2500 kW}.    (36b)
Similar to the DP operation, various contingency conditions
have been prepared for the cruising loads, which is described
in Table VI. In the contingencies associated with fault at
Bus-2, P_B14^ESS exceeds its rated capacity and thus the implementation of the load shedding algorithm becomes pertinent.
However, when the Bus-2 is isolated and by setting maximum
load shedding by employing L_HLhigh = 0 and L_misc = 0, P_B14^ESS still exceeds the rated limits as highlighted in Case 10C
of Table VI. Thus, during this contingency, higher cruising
load cannot be supported by the optimized operation of PGen ,
and thus, the speed of the PSV needs to be slowed down to
prevent inadvertent black-out condition. However, scheduling
of ESS is helpful for the sudden gain/loss of the cruising
loads indicated with blue in Table VI. The real-time simulation
results during the sudden loss of cruising load are shown
in Fig. 14. The transition of SFOC operating point from initial
to final set point value is shown in Fig. 14(d). The same
operation is repeated when the vessel operates with fixed speed
generation system and the SFOC is compared in Fig. 14(d).
It can be established that the SFOC is minimized with the
proposed method. Although the power is abruptly decreased,
the operating point lies within the contours of the operating
limits of the DG. As discussed earlier, the change in load
would not be abrupt in real scenario, and hence, the DG would
operate with optimized SFOC in the stable region. The output
from the monitoring oscilloscope is presented in Fig. 14(e) for
comparison with the offline results.
B. Dynamics of the PV-Based BESS System
1) Charging of BESS: As discussed in Section II-C,
the charging of the BESS could be carried out either by
slow charging or fast charging schemes. Fig. 15(a) shows
Fig. 13. (a) Speed set point versus DG speed. (b) Power variation of DG-1. (c) Change in total load demand. (d) Transition of SFOC to optimized point in
real-time. (e) Output in monitoring oscilloscope during sudden gain in DP load as per Case 1A of Table VI.
Fig. 14. (a) Speed set point versus DG speed. (b) Power variation of DG-1. (c) Change in total load demand. (d) Transition of SFOC to optimized point in
real time. (e) Output in monitoring oscilloscope during sudden loss in cruising load as per Case 8A of Table VI.
the variation of irradiance and the output power of
the dc/dc-1 while operating at MPPT mode of operation. The PV current (i pv ) for charging the BESS is
shown in Fig. 15(b). For fast charging, the dc/dc-2
maintains battery charging current determined by the
set point (i batt−SP ) as shown in Fig. 15(c). With the
dc/dc-1 current i_pv, the variation of i_dc_i and i_batt with change of irradiation and battery charging current set point is shown in Fig. 15(c). Fig. 15(d) shows the variation of SOC of the BESS when it is subjected to slow charging and fast charging, respectively. The slow charging of the BESS can be employed while the PSV is in the dockyard. Alternatively, the fast charging schemes might be employed by forecasting the nature of the job to be done by the PSV and whether the job requires intermittent power.

2) Discharging of BESS: The discharging of the BESS is explained with reference to Case 8A of Table VI. During sudden change of the DP operation, the power demand of the BESS decreases from 399.32 to 0.55 kW. The power delivered by the PV-BESS system is shown in Fig. 15(e).

Fig. 15. (a) Change in power availability from PV panels with change in irradiation. (b) Variation of current output of PV panel while operating it in MPPT mode. (c) Charging current characteristic from dc/dc-2. (d) Change in SOC of BESS when it is charged from PV panel and both with PV panel and dc/dc-2. (e) DC/DC-2 output for contingency 8A in Table VI.

C. Treatment of Suboptimal Points

Treatment of suboptimal points has been explained with respect to the contingency scenario 8A of Table VI, where the initial derived generation schedule of 1875.17 kW and ESS of 399.32 kW correspond to Point-1 in Fig. 16(a) and Table VII, at which the SFOC is calculated. Such an SFOC is dependent on the operating speed of the DG and the ESS output. However, it has been observed that the scheduling constraints are also satisfied at suboptimal locations with certain generator schedules and corresponding SFOC values associated with them. These sets of points are termed local optimum points.

To illustrate this, P_G^n is considered as a generator schedule and f as the objective function used in Algorithm 1. Equations (37a) and (37b) represent the conditions for minima [26]:

P_G^n ∈ R ⇒ f′(P_G^n) = 0    (37a)

f″(P_G^n) > 0.    (37b)

The condition for a local minimum (suboptimal point) is given in the following:

f* = f(P_G^n*)    (38)

for a local minimizer P_G^n*. This is the smallest function value in some feasible neighborhood defined by the following:

P_G^n* lies in the feasible region.    (39)

There exists a δ > 0 such that

f* ≤ f(P_G^n) for all feasible P_G^n    (40a)

with |P_G^n − P_G^n*| ≤ δ.    (40b)
Thus, there can be many local minima, i.e., multiple suboptimal points, which are not global minima. Special properties
such as the convexity of the feasible region and of the objective function f imply that any local solution is a global solution.
It has been observed that during real-time simulations, such
suboptimal (local optimum) points, which satisfy the given constraints, have the potential to speed up the calculations of the
scheduling process while diligently treating the associated ESS
to arrive at better SFOC values than the corresponding SFOC
at global optima. This has been demonstrated using Table VII
corresponding to Fig. 16(a), where the ESS is varied at all
suboptimal locations to study the impact on the SFOC. The
absence of ESS during the initial state at PG1 would improve
the SFOC operating scenario (Point-2), however, the limits of
generator capacity (2048 kW) are enforced to consider the ESS
as an option (Point-1). Here, ESS is delivering at 399.32 kW
and the corresponding SFOC is 207 (Point-1). Now, with the
sudden loss of the cruising load, we could notice that before
arriving to the global optimum point PG2(g) (Point-5), it has
been passing through a local optima PG2(l) (Point-3) where
the corresponding SFOC is 205 against the global optimum
SFOC of 197 (PG2(g) , Point-5). During such a phase, it is
evident that ESS is no longer required to contribute to the load,
and accordingly, scheduling algorithm suggested the output
of ESS at +0.55 kW. However, the enforced operation of the ESS at this instant would make the corresponding SFOCs of PG2 and PG2(l) equal to 195 (Point-6) and 200 (Point-4), respectively. Hence, instead of an abrupt suspension of the ESS supply, slowly adjusting the ESS so as to yield the best SFOC scenario is a possibility in which the suboptimal points can be diligently treated.
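The search for suboptimal points can be mimicked numerically: for a fixed total demand, scan the ESS share, evaluate an SFOC surrogate at each split, and report every locally minimal point rather than only the global one. The surrogate shape and all numbers below are illustrative assumptions and do not reproduce the paper's BSFC data.

import numpy as np

# Sketch: locating local (suboptimal) and global minima of an SFOC surrogate
# while the ESS share of a fixed demand is varied, in the spirit of (37a)-(40b).
# The surrogate shape and every number here are illustrative assumptions.
DEMAND = 1875.0                                  # kW, total DG + ESS schedule

def sfoc_surrogate(p_dg):
    """Smooth placeholder for SFOC(P_DG) in g/kWh (not the paper's BSFC data)."""
    return 200.0 + 6.0 * np.sin(p_dg / 25.0) + ((p_dg - 1650.0) / 300.0) ** 2

p_ess = np.linspace(0.0, 400.0, 401)             # candidate ESS outputs, kW
sfoc = sfoc_surrogate(DEMAND - p_ess)

# interior grid points whose two neighbours are both larger are local minima
local = [k for k in range(1, len(sfoc) - 1) if sfoc[k] < sfoc[k - 1] and sfoc[k] < sfoc[k + 1]]
for k in local:
    print(f"local minimum:  P_ESS = {p_ess[k]:6.1f} kW, SFOC ~ {sfoc[k]:.1f} g/kWh")
k_best = int(np.argmin(sfoc))
print(f"global minimum: P_ESS = {p_ess[k_best]:6.1f} kW, SFOC ~ {sfoc[k_best]:.1f} g/kWh")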
Fig. 16. (a) Trajectory of the SFOC during the sudden loss of cruising load with locations of various suboptimal points. (b) Surface plot of SFOC variation
for varying ESS and DG speed. (c) Comparison of the changing operating points. (d) Location of suboptimal points for DGs operating at optimized speed
and fixed speed.
TABLE VII
VARIATION OF SFOC AT SUBOPTIMAL POINTS DURING THE SUDDEN LOSS OF CRUISING LOAD
Table VII shows the variation of SFOC corresponding to
these suboptimal points considering ESS in operation even
after the sudden loss of load and at optimized speed of generators. However, it may not be prudent to consider such points in
all the contingency cases owing to operational constraints such
as availability and SOC of the interfaced ESS. So, it is evident
that the suboptimal point can yield better SFOC at optimized
speed and further it also has an upper hand in reducing the
time propagation of schedules for feeding to the shipboard
controllers.
So, in the proposed work, it was noticed that suboptimal
points can accommodate ESS for better performance of DGs.
To speed up the calculation in scheduling process and to treat
the consideration of ESS, suboptimal schedules can yield a
feasible solution.
Dependence of ESS output on suboptimal locations is
further demonstrated with the help of Fig. 16(b)–(d).
In Fig. 16(b), power extraction from ESS is varied from
0 to 300 kW to visualize the changing operating points
of the SFOC of DG while fulfilling constant load demand
of 1875 kW. The location of the global optima with minimized SFOC and local optima is highlighted in Fig. 16(b).
In Fig. 16(c) and (d), it is further compared with the DG
operating at constant speed. With reference to Fig. 16, the following observations can be made.
1) Fig. 16(b) shows the surface plot of the SFOC variation
with ESS and DG speed in the aforesaid treatment of
suboptimal points. The suboptimal scheduled points are
also marked in the figure.
2) Fig. 16(c) and (d) shows the comparative analysis
of the location of suboptimal points for fixed speed
and optimized speed operation of the DGs. It can be
observed that the minimum SFOC for the corresponding
suboptimal points for fixed speed DGs is achieved while operating at rated conditions, denoted by Point FS_A in Fig. 16(d).
3) For the DGs operating at optimized speed, there are multiple suboptimal points depending on ESS output and operating speed of DG. These suboptimal locations depend on the power injected from the ESS and the DG operating speed, as shown in Fig. 16(d). The locations of the suboptimal points are shown as Points OS_A, OS_B, OS_C, and OS_D in Fig. 16(d).

Fig. 17. (a) Variation of the operating point of SFOC for generator running at 1800 rpm, 1600 rpm, and variable speed (optimized speed). (b) Comparison of SFOC for generator running at 1800 rpm and optimized speed.
D. Comparison With the Operation at Different Speeds
For all the operating cases in Table VI, SFOC is compared
when the generator is running at 1800 rpm and 1600 rpm
against the calculated optimized speed. The variation of the
operating points is shown in Fig. 17. It can be seen that SFOC
is significantly reduced when run at optimized speed, thus
highlighting the advantages of the proposed real-time transient
simulation scheme and corresponding optimization framework.
E. Influence of ESS on SFOC
From Section V-C, it is evident that ESS has strategic
influence on various marine missions not only on dealing with
short time-load transients for smooth operations but also on
improving the transient responses of DGs and corresponding
SFOC. The set point to ESS and DG output have been decided
based on the cost functions, cable loadings, and the realistic
criteria discussed in Section IV. From Fig. 17(b), it is evident
TABLE VIII
INFLUENCE OF ESS ON SFOC DURING THE SUDDEN LOSS OF CRUISING LOAD
that at higher loads, the SFOC is almost constant for optimized
variable speed and constant speed operation with or without
ESS, and the same has been shown in Table VIII. However, having the ESS available while the generators meet higher loads can reduce the initial speed deviations and torsional stress, and it further enables the option of catering for any unexpected load demand.
VI. CONCLUSION
This paper investigated the modeling and control approaches
of dc PSV with real-time transient simulation framework
to minimize the fuel consumption in terms of SFOC while
considering the treatment of suboptimal points along with an
option of utilizing onboard PV-based energy storage facility.
This sort of real-time operation is expected to be a key
ingredient of the future autonomous marine vehicles.
1) DC OPF-based algorithms have been applied for real-time scheduling of generation resources with the objective of minimizing the fuel consumption. This operation is
within the framework of the proposed real-time transient
simulation setup and has been successfully demonstrated in this paper. Results obtained are promising and
strengthen the ability of the future dc marine vessels to
comply with the upcoming stringent laws on pollution
control. It has been shown that the output of ESS can
influence the generator set points particularly during
sudden load changes for DP and cruising missions.
2) Traditionally, the power sharing happens according to
generator ratings through droop control rather than optimal generation scheduling as per load demand for a
particular marine mission. Hence, an optimal scheduling
system based on dc OPF for tackling any specific
nature of marine missions has been demonstrated in this
paper and the results indicated such an approach will
enable the PSV to operate in fuel efficient regime with
improved transient responses.
3) Responses of the DGs to various contingencies are
studied and it has been found that unavailability of the
generation system due to fault at bus bar demands higher
requirement of ESS to satisfy the load demands. In such
cases, it is pertinent to have load shedding routines,
which are dependent on the marine missions to prevent
power blackout of the vessel. This highlights the necessity for considering such routines in future autonomous
dc PSVs. Some cases have been highlighted using the
proposed real-time transient simulation framework.
4) Higher fuel savings are noticed while the generators
are operated at optimized speed for low power demand.
In comparison with fixed speed operation, a reduction in SFOC of 19% has been reported when the PSV was operating in low-power DP operation.
Kuntal Satpathi (S’14) received the B.Tech. degree
in electrical engineering from the Haldia Institute
of Technology, Haldia, India, in 2011. He is currently pursuing the Ph.D. degree with the School
of Electrical and Electronic Engineering, Nanyang
Technological University, Singapore.
From 2011 to 2014, he was with Jindal Power
Ltd., Raigarh, India, specializing in power plant
operations. His current research interests include modeling, control and protection of dc grids, and power electronics for dc distribution systems.
VSK Murthy Balijepalli (M’06) received the Ph.D.
degree from IIT Bombay, Mumbai, India, in 2014.
He was with India Smart Grid Task Force,
Ministry of Power, New Delhi, India, the Inter-Ministerial Group on Smart Grids (The Future of
Energy), from 2014 to 2016, where he was involved
in the project execution and evaluation of smart
grid pilot projects in India. Since 2016, he has
been a Research Fellow with School of Electrical
and Electronic Engineering, Nanyang Technological
University, Singapore. His current research interests
include power system modeling and optimization, microgrid resiliency, policy
making, and model building for emerging power systems and smart grids.
Dr. Balijepalli was an Active Member of various technical panels under
the Bureau of Indian Standards (LITD-10), CIMug, and IEC PC118-Smart
Grid User Interface. He is the Founder of DesiSmartGrid.com, India’s first
Smart Grid Educational Portal. He was a recipient of the Massachusetts
Institute of Technology Young Innovator Award, the Department of Science
and Technology-Lockheed Martin Gold Medal, the Institute of Engineers
India Young Engineer Award (U.K. Royal Charter of Incorporation), and
the Gandhian Technological Edge Award for his outstanding research on
smart grids. He is currently serving on the expert committee of Sustainable Energy to the United Nations Economic Commission for Europe, Geneva, Switzerland.
Abhisek Ukil (S’05–M’06–SM’10) received the
B.E. degree in electrical engineering from Jadavpur
University, Kolkata, India, in 2000, the M.Sc. degree
in electronic systems and engineering management
from the University of Bolton, Bolton, U.K., in 2004,
and the Ph.D. degree, with a focus on automated
disturbance analysis in power systems, from the
Tshwane University of Technology, Pretoria, South
Africa, in 2006.
From 2006 to 2013, he was a Principal Scientist
with the ABB Corporate Research Center, Baden,
Switzerland, where he led several projects on smart grid, protection, control,
condition monitoring, including the first worldwide prototype of directional
protection relay using only current for smart grid applications. Since 2013, he
has been an Assistant Professor with the School of Electrical and Electronic
Engineering, Nanyang Technological University, Singapore, where he has been
leading a group of 20 researchers with several industrial collaborations. He has
authored over 125 refereed papers, a monograph, and two chapters. He holds ten patents. His current research interests include smart grid, dc grid,
protection and control, renewable energy and integration, energy storage, and
condition monitoring.
Efficient Verification of Concurrent Programs
Over the TSO Memory Model
Chinmay Narayan, Subodh Sharma, S.Arun-Kumar
Indian Institute of Technology Delhi
Abstract. We address the problem of efficient verification of multithreaded programs running over the Total Store Order (TSO) memory model.
It has been shown that even with finite data domain programs, the complexity of control state reachability under TSO is non-primitive recursive. In this paper, we first present a bounded-buffer verification approach
wherein a bound on the size of buffers is placed; verification is performed
incrementally by increasing the size of the buffer with each iteration of
the verification procedure until the said bound is reached. For programs
operating on finite data domains, we also demonstrate the existence of a
buffer bound k such that if the program is safe under that bound, then it
is also safe for unbounded buffers. We have implemented this technique
in a tool ProofTraPar. Our results against memorax [2], a state-of-the-art
sound and complete verifier for TSO memory model, have been encouraging.
1 Introduction
The explosion in the number of schedules is central to the complexity of verifying the safety and correctness of concurrent programs. There exists a plethora of approaches in the literature that explore ways to address the schedule-space explosion problem; incidentally, many of the published techniques operate under the assumption of a sequentially consistent (SC) memory
model. In contrast, almost all modern multi-core processors conform to memory
models weaker than SC. A program executing on a relaxed memory model exhibits more behaviours than on the SC memory model. As a result, a program
declared correct by a verification methodology that assumes SC memory model
can possibly contain a buggy behaviour when executed on a relaxed memory
model.
Consider x86 machines that conform to TSO (Total Store Ordering). The
compiler or the runtime system of the program under the TSO memory model
is allowed to reorder a read following a write (read and writes are to different
variables) within a process, i.e. break the program order specified by the developer. Operationally, such a re-ordering is achieved by maintaining per-process
store buffers. Write operations issued by a process/thread are enqueued in the
store buffer local to that process. The buffered writes are later flushed (from
the buffer) into the global memory. The point in time when flushes take place
is deterministically known only when the store buffers are full. When the store
buffers are partially full, flushes are allowed to take place non-deterministically.
Therefore, when a read operation of variable x is executed by a process, the process first checks whether there is a recent write to x in the process’s store buffer. If such a write exists then the value from the store buffer is returned, otherwise the value is read from the global memory.
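For intuition only, the following minimal Python sketch (ours; it is not part of any formalism or tool discussed in this paper) models per-process store buffers with exactly this discipline:

from collections import deque

class TSOMachine:
    """Toy simulator of TSO store buffers over a shared global memory."""

    def __init__(self, shared_vars, num_procs):
        self.mem = dict(shared_vars)                    # global memory
        self.buf = [deque() for _ in range(num_procs)]  # one FIFO store buffer per process

    def write(self, p, x, v):
        # A write is enqueued in the writer's buffer; global memory is untouched.
        self.buf[p].append((x, v))

    def read(self, p, x):
        # A read first looks for the most recent buffered write to x by the same process.
        for var, val in reversed(self.buf[p]):
            if var == x:
                return val
        return self.mem[x]                              # otherwise read the global memory

    def flush_one(self, p):
        # Non-deterministic flush: the oldest buffered write reaches global memory.
        if self.buf[p]:
            x, v = self.buf[p].popleft()
            self.mem[x] = v

    def fence(self, p):
        # A fence drains the process's buffer before it may proceed.
        while self.buf[p]:
            self.flush_one(p)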
Figure 1 shows Peterson’s algorithm as an instance of a correct program under SC semantics but which can fail when executed under TSO. In this algorithm, two processes P1 and P2 coordinate their access to their respective critical sections using a shared variable t. This algorithm satisfies the mutual exclusion property under the SC memory model, i.e. both processes cannot be simultaneously present in their critical sections. The property, however, is violated when the same algorithm is executed with a weaker memory model, such as TSO. Consider the following execution under TSO. The write operations at 1, 2, 6 and 7 from processes P1 and P2 are stored in store buffers and are yet to be reflected in the global memory. The reads at control locations 3 and 8 will return initial values, thereby violating the mutual exclusion property.

flag1 = false, flag2 = false, t = 0;

P1                                    P2
While(true){                          While(true){
1. flag1 := true;                     6. flag2 := true;
2. t := 2;                            7. t := 1;
3. while(flag2 = true & t = 2);       8. while(flag1 = true & t = 1);
4. // Critical Section                9. // Critical Section
5. flag1 := false;                    10. flag2 := false;
}                                     }

Fig. 1: Peterson’s algorithm for two processes
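For illustration only, this offending schedule can be replayed with a small, self-contained Python sketch (the encoding below is ours and assumes nothing beyond the store-buffer discipline just described):

# P1 executes lines 1-2 and P2 executes lines 6-7; all four writes stay buffered.
mem = {"flag1": False, "flag2": False, "t": 0}
buf = {1: [], 2: []}                        # per-process store buffers

def read(p, x):
    for var, val in reversed(buf[p]):       # read-own-write from the local buffer
        if var == x:
            return val
    return mem[x]                           # fall back to global memory

buf[1] += [("flag1", True), ("t", 2)]       # buffered writes of P1 (lines 1-2)
buf[2] += [("flag2", True), ("t", 1)]       # buffered writes of P2 (lines 6-7)

p1_enters = not (read(1, "flag2") and read(1, "t") == 2)   # guard at line 3 fails
p2_enters = not (read(2, "flag1") and read(2, "t") == 1)   # guard at line 8 fails
assert p1_enters and p2_enters              # both processes enter the critical section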
One can avoid such erroneous behaviors and restore the SC semantics of
the program by inserting special instructions, called memory fences, at chosen
control locations in the program. A memory fence ensures that the store buffer
of the process (which executes the fence instruction) is flushed entirely before
proceeding to the next instruction for execution. In the example, when fence
instructions are placed after flag1 :=true in P1 and after flag2 :=true in P2 , the
mutual exclusion property is restored.
Safety verification under TSO is a hard problem even in the case of finite data
domain programs. The main reason for this complexity is the unboundedness
of store buffers. A program can be proved correct under TSO only when the
non-reachability of the error location is shown irrespective of the bound on the
buffers. The work in [7] demonstrated the equivalence of the TSO-reachability
problem to the coverability problem of lossy channel machines which is decidable
and of non-primitive recursive complexity. A natural question is to ask if it is
possible to have a buffer bound k such that if a finite data domain program is safe
under the k-bounded TSO semantics then it is guaranteed to be safe even with
unbounded buffers. For programs without loops such a statement seems to hold
intuitively. For programs with loops, it is possible that a write instruction inside
a loop keeps filling the buffer with values without ever getting them flushed to
the main memory. However, for finite data domain programs, only a finite set
of different values will be present in this unbounded buffer and this leads to a
sufficient bound on the buffer size.
In this paper we show that it is possible to verify a program P under TSO
(with unbounded buffers) by generalizing the bounded buffer verification. Towards this we first define TSOk , TSO semantics with buffer size k, and then
characterize a bound k0 such that if a program is safe in TSOk0 then it is safe
for any buffer bound greater than k0 . We adapt a recently proposed trace partitioning based approach [16,25] for the TSO memory model. These methods work
for the SC memory model as follows: the set of all SC executions of a program
P is partitioned into a set of equivalence classes such that it is sufficient to prove
the correctness of only one execution per equivalence class. As this trace partitioning approach works with symbolic executions, we first define an equivalent
TSO semantics to generate a set of symbolic TSO traces. Subsequently we invoke
a trace partitioning tool ProofTraPar [25] for proving the correctness of these
traces. Note that the set of behaviors of a program P under TSOk is a subset of
the behaviors of P under TSOk′ for any k ′ > k. The trace partitioning approach
allows us to reuse the proof of correctness of P with buffer bound k in the proof
of correctness of P with any buffer bound greater than k. In a nutshell, the main
contributions of this work are:
– We characterize a buffer bound in case of finite state programs such that if
the program is correct under TSO up to that bound then it is correct for
unbounded buffers as well.
– We adapt the recently proposed trace partition based proof strategy of SC
verification [16,25] for TSO by defining an equivalent TSO semantics to
generate a set of symbolic TSO traces.
– We implement our approach in a tool, ProofTraPar[25], and compare its
performance against memorax[2], a sound and complete verifier for safety
properties under TSO. We perform competitively in terms of time as well as
space. In a few examples, memorax timed out after consuming around 6GB
of RAM whereas our approach could analyze the program in less than 100
MB memory.
Section 2 covers the related work in the area of verification under relaxed
memory models. Section 3 covers the notations used in this paper. Section 4
shows the necessary and sufficient conditions to generalize bounded verification
to unbounded buffers for finite data domain programs. Section 5 presents an
equivalent TSO semantics to generate a set of symbolic traces which can be
used by the trace partitioning tool ProofTraPar to check the correctness under
a buffer bound. This section ends with an approach based on critical cycle to
insert memory fence instructions. Section 6 compares the performance of our
approach with memorax. Section 7 concludes with future directions.
2 Related Work
Figure 2 captures the related work in this area. Verification approaches for relaxed memory models can be broadly divided into three classes: precise, under-
approximate and over-approximate. For finite state programs, the work in [7,2]
present sound and complete algorithms for control state reachability (finite state
programs) under TSO and PSO memory models.
Fig. 2: State of the art in RMM verification. Precise — safety property: Memorax [7,2], Remmex [22,23]; SC property: robustness [27,11,10,6], persistence [3]. Under-approximate — buffer-bounded [13,20,24,14], context-bounded [8]. Over-approximate — [4,5,19].
Sets of infinite configurations, arising from unbounded buffer size, are finitely presented using regular expressions. Acceleration based techniques that led to faster convergence in the presence of loops were presented in [22,23]. However, the termination of the algorithm was not guaranteed. Notice that in both [2] and [23] the specification was a set of control states to be avoided. One can also ask the state reachability question with respect to SC specification, i.e. does a program P reach only SC reachable states under a relaxed memory model? This problem was shown to be of the same complexity as
and hence gave a more tractable correctness criterion than general state reachability problem. [27,11,10,6,3] work
with this notion of correctness and give efficient algorithms to handle a range
of memory models. In this paper we work with the control state reachability
problem as opposed to the SC state reachability problem.
Over-approximate analyses [4,5,19] trade precision for efficiency and construct an over-approximate set of reachable states. Recently [1,28] used stateless model checking under TSO and PSO memory models. The main focus of these approaches is on finding bugs rather than proving programs correct. Another line of work to make the state reachability problem more tractable involved either restricting the size of buffers [13,20,24,14] or bounding the context switches [8] among threads. None of these methods gives a completeness guarantee even for finite data domain programs.
3 Preliminary
A concurrent program is a set of processes uniquely identified by indices t from
the set TID. As in [2,9], a process Pt is specified as an automaton ⟨Qt, LABLt, δt, q0,t⟩.
Here Qt is a finite set of control states, δt ⊆ Qt × LABLt × Qt is a transition
relation and q0,t is the initial state. Without loss of generality we assume every
transition is labeled with a different symbol from LABLt . LABLt represents a
finite set of labels to symbolically represent the instructions of the program. Let
SV be the set of shared variables of program P ranged over by x, y, z, Val be
a finite set of constants ranged over by v, LVt be the set of local variables of
process Pt ranged over by ℓ, m, and Expt be the set of expressions constructed using LVt, Val and appropriate operators. Let LV = ⋃t LVt, Exp = ⋃t Expt, and LABL = ⋃t LABLt. Let a, b, c range over LABL and e range over Exp. Formally an instruction, from the set INST, is one of the following types: (i) x:=e, (ii)
ℓ:=x, (iii) ℓ:=e, (iv) assume(e), and (v) fence, where x ∈ SV, ℓ ∈ LV and e ∈ Exp.
A function Ins : LABL → INST assigns an instruction to every label.
The first two assignment instructions, (i) and (ii), are the write and the read
operations of shared variables, respectively. Instruction (iii) assigns the value of
an expression (constructed from local variables and constants) to a local variable,
hence, does not include any shared memory operation. Instruction (iv) is used
to model loop and conditional statements of the program. Note that the boolean
expression e in assume(e) does not contain any shared variable. Instruction (v)
represents the fence operation provided by the TSO architecture. Let Loc(a)
be the shared variable used in Ins(a). For a function F : A × B, let the function
F[p ← q] be the same as F everywhere except at p where it maps to q.
TSO Semantics In the TSO memory model, every process has a buffer of unbounded capacity. However, we present the TSO semantics by first defining a
k-bounded TSO semantics where all buffers are of fixed size k. For a concurrent
program P , the k-bounded semantics is given by a transition system TSOk =
⟨S, →k, s0⟩. Every state s ∈ S is of the form (cs, Lm, Gm, Buffk) where process control states cs : TID → Q, Q = ⋃t Qt, local memory Lm : TID × LV → Val, global
memory Gm : SV → Val, and k-length bounded buffers Buff k : TID → (SV×Val)k .
We overload operator ‘.’ to denote the concatenation of labels as well as a dereferencing operator to identify a specific field inside a state. Therefore, for a state
s, s.Gm, s.Lm and s.Buff k denote the functions representing global memory, local
memory, and buffers respectively. Every write operation to a shared variable by
process Pt initially gets stored in the process-local buffer provided that the buffer
has less than k (buffer-bound) elements. This write operation is later removed
from the buffer non-deterministically to update the global memory. A read operation of a shared variable say x, by a process Pt first checks the local buffer for
any write to x. If buffer contains any write to x then the value of the last write to
x is returned as a result of this read operation. If no such write is present in the
buffer of Pt then the value is read from the global memory. A process executes
instruction fence only when its local buffer is empty. For instruction assume(e),
boolean expression e is evaluated in the local state of Pt . Execution proceeds
only when the expression e evaluates to true. Assignment operation involving
only local variables changes the local memory of Pt . The transition relation →k
is defined in detail in the Appendix.
Relevance of the buffer size k Parameter k influences the extent of reordering
that happens in an execution. For example, if k = 0 then no reordering happens
and the set of executions is the same as under the SC memory model. Size parameter k, under the TSO memory model, allows any two instructions separated
by at most k instructions to be reordered, provided that one is a write and the other
is a read instruction. This reorder-bounded analysis was also shown effective by
[18] and seems a natural way to make this problem tractable.
4 Unbounded Buffer Analysis
In this section we show that for any finite data domain program and safety
property φ, there exists a buffer size k0 such that it is sufficient to prove φ for
all buffers up-to size k0 . Note that for programs with write instructions inside
loops, it is possible to keep on writing to the buffer without flushing them to
the main memory. However, since the data domain is finite, such instructions are
guaranteed to repeatedly write the same set of values to the buffer. It is this
repetition that guarantees the existence of a sufficient bound on the buffer.
The set of states in TSOk is monotonic with respect to the buffer bound, i.e. Sk ⊆ Sk′ for all k ≤ k′. Let s⇃(cs,Gm,Lm,Buff lst) denote the restriction of a state in S to only control states, global memory, local memory, and last writes (if any) to shared variables in buffers. Let Sk⇃(cs,Gm,Lm,Buff lst) = {s⇃cs,Gm,Lm,Buff lst | s ∈ Sk} be the states of Sk after projecting out the above information. For a finite data domain the set ⋃k≥0 Sk⇃cs,Gm,Lm,Buff lst is finite because only finitely many different
possibilities exist for functions cs, Gm, Lm and Buff lst . Further, Sk⇃cs,Gm,Lm,Buff lst ⊆
Sk+1⇃cs,Gm,Lm,Buff lst . Therefore there exists a k0 such that Sk0 ⇃cs,Gm,Lm,Buff lst is equal
to the set Sk0 +1⇃cs,Gm,Lm,Buff lst . In this section we show that for every k > k0 , sets
Sk⇃cs,Gm,Lm,Buff lst and Sk+1⇃cs,Gm,Lm,Buff lst are equal and hence we can stop the analysis
at k0 .
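The stopping criterion suggested by this argument can be pictured with the following Python sketch (our illustration only; project mirrors the restriction ⇃(cs,Gm,Lm,Buff lst), and explore_states is a hypothetical procedure returning the reachable TSOk states for a given bound):

def project(state):
    """Restrict a state to (cs, Gm, Lm, Buff_lst): control states, global memory,
    local memories, and the last buffered write (if any) per shared variable.
    A state is assumed to be the hashable tuple (cs, lm, gm, buffs)."""
    cs, lm, gm, buffs = state
    last_writes = tuple(
        tuple(sorted({x: v for x, v in buff}.items()))  # later entries win per variable
        for buff in buffs
    )
    return (cs, gm, lm, last_writes)

def sufficient_bound(explore_states, max_k=64):
    """Grow the buffer bound until the projected reachable sets stabilise."""
    prev = {project(s) for s in explore_states(0)}
    for k in range(1, max_k + 1):
        curr = {project(s) for s in explore_states(k)}
        if curr == prev:
            return k - 1        # a bound k0 whose projection agrees with that of k0 + 1
        prev = curr
    return None                 # no stabilisation within max_k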
For a buffer Buff(t), let σBuff(t).lst denote the sequence of last writes to shared
variables in buffer Buff(t). Let Exec(Gm, Lm(t), Buff(t), σ.a.σ ′ , (x, v), Lm′ (t)) be
a predicate, where a = (x := e), that holds true iff (i) after executing sequence
σBuff(t).lst .σ.a.σ ′ from the global memory Gm and local memory Lm(t) the local
memory of process t is Lm′ (t) and (ii) in the same sequence the value of expression
e in write instruction x := e at label a is v. The following two lemmas relate the states of the TSOk and TSOk+1 transition systems. We use s0 →σ s to denote a sequence of transitions over a sequence σ of labels.
Lemma 1. For all n, σ, s ∈ Sk+1 such that s0 →σ s, |σ| = n and k ≥ 0, there exists a state s′ ∈ Sk such that s′.Gm = s.Gm and for all t ∈ TID,
1. (|s.Buffk+1(t)| = k + 1) ⇒ ∃x, v.
   (i) s.Buffk+1(t) = s′.Buffk(t).(x, v), and
   (ii) ∃ qt, qt′, σ′, σ′′ such that (qt, a, qt′) ∈ δt, s′.cs(t) →σ′ qt, qt′ →σ′′ s.cs(t), σ′ and σ′′ do not modify the buffer of Pt, and Exec(s′.Gm, s′.Lm(t), s′.Buffk(t), σ′.a.σ′′, (x, v), s.Lm(t)),
and
2. (|s.Buffk+1(t)| < k + 1) ⇒ (i) s.Buffk+1(t) = s′.Buffk(t), (ii) s.Lm(t) = s′.Lm(t), and (iii) s.cs(t) = s′.cs(t).
The above lemma states that every state in Sk+1 where the
buffer sizes of all processes are less than k + 1, is also present in Sk . The detailed
proof of this lemma is given in the Appendix. Now we are ready to prove that
after k0 , any increase in buffer size does not yield any new reachable control
location.
Theorem 1. For all k,
(Sk⇃(cs,Gm,Lm,Buff lst ) = Sk+1⇃(cs,Gm,Lm,Buff lst ) ) ⇒
(Sk+1⇃(cs,Gm,Lm,Buff lst ) = Sk+2⇃(cs,Gm,Lm,Buff lst ) )
Proof. We have to show that for every state s ∈ Sk+2 there exists a state s′ ∈ Sk+1 such that s.cs = s′.cs, s.Gm = s′.Gm, s.Lm = s′.Lm and s.Buff lst = s′.Buff lst. It is sufficient to show that (Sk+2⇃(cs,Gm,Lm,Buff lst) ⊆ Sk+1⇃(cs,Gm,Lm,Buff lst)), as the other side of the inclusion holds trivially. Let us prove it by contradiction, i.e. suppose there is a state s ∈ Sk+2 such that no state s′ ∈ Sk+1 exists with s.cs = s′.cs, s.Gm = s′.Gm, s.Lm = s′.Lm and s.Buff lst = s′.Buff lst. Following
Lemma 1, this state s must have at least one buffer with k + 2 entries in it.
Without loss of generality let t ∈ TID such that s.Buff k+2 (t) is the only full
buffer.
1. Clearly, there exists a state s′ ∈ Sk+2 where all buffers except t are the same
as in s, s′ .Buff k+2 (t) is of size k + 1 and there exists a sequence of transitions
σ.a.σ ′ from s′ .cs(t) to s.cs(t) by process t with only one write operation a.
2. For state s′ , the conditions s′ .Gm = s.Gm (as no flush operation in σ), and
Exec(s′ .Gm, s′ .Lm(t), s′ .Buff lst (t), σ.a.σ ′ , s.Lm(t)) hold.
3. As all buffers of s′ are of size at most k + 1 therefore s′ also exists in Sk+1
(Lemma 1).
4. As Sk⇃(cs,Gm,Lm,Buff lst ) = Sk+1⇃(cs,Gm,Lm,Buff lst ) holds, therefore there exists a state
s′′ ∈ Sk such that (i) s′′ .Gm = s′ .Gm, (ii) s′′ .Lm = s′ .Lm, (iii) s′′ .cs = s′ .cs,
and (iv) s′′ .Buff lst (t) = s′ .Buff lst (t) for all t ∈ TID.
5. This state s′′ can have at most k entries in its process buffers. Therefore this
state must be present in Sk+1 as well.
6. Using Point 2 and the conditions (i),(ii),(iii), and (iv) of Point 4 above,
we get Exec(s′′ .Gm, s′′ .Lm(t), s′′ .Buff lst (t), σ.a.σ ′ , s.Lm(t)). This implies that
after executing the sequence σ.a.σ ′ by process t from state s′′ in Sk+1 the
resultant state, say s′′′ will have at most k + 1 write entries in the buffer of
process t. Further the global memory, local memories, control states and last
writes to shared variables in buffers will be identical in s′′′ and s. Therefore
s′′′ ∈ Sk+1 is the matching state with respect to s, a contradiction.
5 Trace partitioning approach
As a consequence of Theorem 1 one can use an explicit state model checker for
state reachability analysis of finite data domain programs. However, in this paper we are interested in adapting a recently proposed trace partitioning based
verification method [16,25] for relaxed memory models. This method has been
shown very effective for verification under the SC memory model. The approach
for SC verification, as given in [25], is presented in Algorithm 1. Firstly, an automaton is built that represents the set of symbolic traces under the SC memory
model. For SC memory model such an automaton is obtained by language level
shuffle operation [26,17] on individual processes. Subsequently, a symbolic trace
is picked from this automaton and checked against a given safety property using
weakest precondition axioms [15]. If this trace violates the given property then
we have a concrete erroneous trace. Otherwise, an alternating finite automaton
(AFA) [12] is constructed from the proof of correctness of this trace.
The AFA construction algorithm ensures that every trace in the language of
this AFA is correct and hence can be safely removed from the set of all symbolic
traces of the input program. This process is repeated until either all symbolic
traces are proved correct or an erroneous trace is found. This algorithm is sound
and complete for finite data domain programs.
Input: A concurrent program P = {p1 , · · · , pn } with safety property φ
Result: yes, if program is safe else a counterexample
Construct the automaton A(P) to capture the set of all SC traces of P ;
Let tmp be the language of A(P);
while tmp is not empty do
Let σ ∈ tmp with φ as a safety assertion to be checked;
Let Âσ,¬φ be the AFA constructed from σ and ¬φ ;
if σ violates φ then
σ is a valid counterexample;
return (σ);
else
tmp := tmp \ Rev, where Rev is the reverse of the language of Âσ,¬φ ;
end
end
return (yes);
Algorithm 1: SC verification algorithm[25]
The main challenge in applying this trace partitioning approach to the TSO
memory model is the construction of the set of symbolic traces. Consider a
program with two processes in Figure 3. With initial values of shared variables x
and y as 0, it is possible to have ℓ1 = ℓ2 = 0 under the TSO memory model. We
can construct a symbolic trace b.d.a.c such that after executing this sequence
the state ℓ1 = ℓ2 = 0 is reached.
Fig. 3: Process 1: a. x:=1; b. ℓ1:=y.  Process 2: c. y:=1; d. ℓ2:=x.
Note that this trace is not constructible using the standard interleaving semantics which was used to construct the set of traces under the SC memory model. This is because of the program order between a and b in process 1 and between c and d in process 2. To use Algorithm 1 for the TSO memory model we
would like to first construct a set of all such symbolic traces such that the sequential executions of these traces yield all reachable states under the TSO memory
model. For the above example, it involves breaking the program orders a − b and
c − d and then applying standard interleaving semantics to construct symbolic
traces under the TSO memory model. Let us look at another non-trivial example
in Figure 4.
Fig. 4: Process 1: a. ℓ:=2; b. y:=ℓ + 1; c. ℓ:=x.  Process 2: d. m:=3; e. x:=m + 2; f. m:=y.
Assume the initial values of all variables are 0, and ℓ, m are local variables. In TSO it is possible to have the final values of variables ℓ and m as 0. This can happen when the writes at b and e are still in the buffers and the read operations at c and f read from the initial values. Let us construct a symbolic trace whose sequential execution will yield
this state. In this trace label e must appear after label c and label b must appear
after label f. This means that the trace will break either the order between b and
c or the order between e and f. However, by breaking the order between b and
c the value of ℓ = 2, assigned at a, no longer flows to b and hence y is assigned
the wrong value 1. Similarly, by breaking the order between e and f, the value of m = 3, assigned at d, no longer flows to e and hence x is assigned the wrong value 2. In
a nutshell, it is not possible to create a symbolic trace whose execution will yield
the state where ℓ = m = 0, x = 5, and y = 3. Notice that the problem appeared
because of the use of the same local variable in two definitions. Such a scenario
is unavoidable when (i) multiple reads are assigned to the same local variable,
and/or (ii) in the case of loops the local variable appears in a write instruction
within the loop.
We propose to handle such cases by renaming local variables, viz. ℓ and m in
this case. For example, the execution of trace σ = ℓ:=2 . ℓ1:=ℓ . ℓ:=x . y:=ℓ1 + 1 . m:=3 . m1:=m . m:=y . x:=m1 + 2 results in the state ℓ = m = 0, x = 5, y = 3 as required by a TSO execution. Let us look at the inserted snapshot instructions ℓ1:=ℓ and m1:=m more carefully. We earlier saw that the problem arises when reordering b − c and
e − f instructions as their reordering will break the value flows of ℓ and m from a
and d respectively. Therefore, we create new instances of these local variables, ℓ1
and m1 , to take the snapshot of ℓ and m respectively which are later used in the
write instructions b and e. This renaming ensures that even if we reorder b − c
and e − f instructions (as done in σ) the correct value flows from a to b and from
d to e are not broken. We will show that for a buffer bound of k it is sufficient
to use at most k instances of these local variables and they can be safely reused
even in the case of loops. We call such symbolic traces, that correspond to TSOk
executions, as SC interpretable traces. Formally, SC interpretation of a trace
σ ∈ LABL∗ is a function SCI : LABL∗ × Var → Val ∪ {Undef} such that SCI(σ, x)
calculates the last value assigned to variable x in the sequential execution of σ.
For example, if σ = a.b.c where labels a, b, and c denote ℓ := 3, x := ℓ + 2 and
y := 2 respectively then SCI(σ, x) = 5 and SCI(σ, ℓ) = 3. Label Undef is used to
denote the in-feasibility of σ as some boolean expressions in assume instructions
may become unsatisfiable because of the values that flow in them. If σ does not
contain any assignment to x then SCI(σ, x) returns the initial value of x.
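A minimal sketch of such an SC interpretation in Python (the instruction encoding below is our own, chosen only for this illustration):

def sci(trace, init):
    """Sequentially execute a symbolic trace; return the final store, or None (Undef)
    if some assume() fails. Each step is ("assign", x, f) or ("assume", g), where
    f and g are functions of the current store."""
    store = dict(init)
    for step in trace:
        if step[0] == "assign":
            _, x, f = step
            store[x] = f(store)
        else:
            _, g = step
            if not g(store):
                return None
    return store

# The running example a.b.c with Ins(a) = (l := 3), Ins(b) = (x := l + 2), Ins(c) = (y := 2):
trace = [("assign", "l", lambda s: 3),
         ("assign", "x", lambda s: s["l"] + 2),
         ("assign", "y", lambda s: 2)]
final = sci(trace, {"x": 0, "y": 0, "l": 0})
assert final["x"] == 5 and final["l"] == 3   # SCI(sigma, x) = 5 and SCI(sigma, l) = 3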
Let us now construct a transition system such that the traces of this transition
system represent SC interpretable traces corresponding to TSOk semantics. We
represent this transition system as TSO♯k = ⟨S♯, ⇒k, s♯0⟩. Every state s♯ ∈ S♯
is of the form (cs, Li, Buff ♯ k ) such that cs : TID → Q represents process control
states, and Buff ♯ k : TID → (SV × LABL)k represents per process buffers of length
k. Unlike the buffers of TSOk , these buffers contain write instruction labels
along with the modified shared variable. A function Li : TID × LV → N tracks the
instances of the local variables which have been used (for renaming purposes)
in the construction of traces up to a given state. First, we define ⇒k for simple
(MRead♯)  Ins(a) = (ℓ := x), Buff♯k(t)⇃{x}×LABL = ε   ⟹   (cs, Li, Buff♯k) =a⇒k (cs′, Li, Buff♯k)
(Assume♯)  Ins(a) = (assume(e))   ⟹   (cs, Li, Buff♯k) =a⇒k (cs′, Li, Buff♯k)
(LWrite♯)  Ins(a) = (ℓ := e)   ⟹   (cs, Li, Buff♯k) =a⇒k (cs′, Li, Buff♯k)
(flush♯)  Buff♯k = (x, a).Buff♯k′   ⟹   (cs, Li, Buff♯k) =a⇒k (cs′, Li, Buff♯k′)
Fig. 5: Rules for the simple cases. All rules assume transitions for thread t, i.e. cs[t] = q, (q, a, q′) ∈ δt, and cs′ = cs[t ← q′].
cases, viz. read from memory, operations associated with local variables like
assume(e) and ℓ := e, and non-deterministic flush. In Rules MRead♯ , LWrite♯ ,
and Assume♯ the labels that denote these operations are put in the trace with
only change in the control state of the process. As there is no notion of local
and global valuation in a state s♯ of the transition system, no update takes place
unlike in TSOk . For memory read operation, in Rule MRead♯ , the condition on
the buffer of Pt is the same as in TSOk . For non-deterministic flush operation,
Rule flush♯ removes the first label present in the buffer of Pt and puts that in the
trace. In rule Assume♯ , the assume instruction is simply put in the trace without
evaluating the satisfiability of the boolean expression. This is different from the
corresponding rule in TSOk . This difference follows from the fact that we are
only interested in constructing symbolic traces. Symbolic model checking of these
traces will ensure that only feasible executions get analyzed (where all assume
instructions hold true). Now let us look at the remaining three operations, viz.
read from the buffer, write to the buffer and fence instruction, in detail.
Buffered Read Like TSOk , this transition takes place when Pt executes an
instruction ℓ := x to read the value of shared variable x and store it in its
local variable ℓ.
(BRead♯)  Ins(a) = (ℓ := x), Buff♯k⇃{x}×LABL = α.(x, b), Ins(b) = (x := e), Ins(c) = (ℓ := e)   ⟹   (cs, Li, Buff♯k) =c⇒k (cs′, Li, Buff♯k)
For this transition to take place, the buffer of Pt must have at least one write
instruction that modifies the shared variable x. Conditions Buff ♯ k ⇃{x}×LABL =
α.(x, b) and Ins(b) = (x := e) ensure that the last write to x in Buff ♯ k of
Pt is due to instruction Ins(b) which is of the form (x := e). Under these
conditions, in TSOk , read of x uses the value of expression e to modify ℓ.
Whereas in TSO♯ k a label c is added to the trace such that Ins(c) represents
the assignment of e to variable ℓ.
Buffered Write This transition takes place when Pt executes a write instruction of the form x := e. Let ℓ⃗ be the set of local variables used in expression e. For each of the local variables ℓ in ℓ⃗, an integer Li(ℓ) is used to create an assignment instruction of the form ℓLi(ℓ) := ℓ. These instructions are put in the trace (through corresponding symbolic labels aℓ). Further, expression e is also modified where every instance of a local variable ℓ in ℓ⃗ is substituted with ℓLi(ℓ).
(BWrite♯)  Ins(a) = (sv := e), FV(e) = ℓ⃗, |Buff♯k| < k,
  ∀ℓ ∈ ℓ⃗, create a label aℓ (if not already present in LABL) s.t. Ins(aℓ) = (ℓLi(ℓ) := ℓ) and Li′[ℓ] = Li[ℓ] % (k + 1) + 1,
  create a label a′ (if not already present in LABL) s.t. Ins(a′) = (sv := e′), e′ = e[ℓ⃗/ℓ⃗Li(ℓ)],
  Buff♯k′ = Buff♯k[t ← Buff♯k[t].(sv, a′)]
  ⟹   (cs, Li, Buff♯k) =a⃗ℓ⇒k (cs′, Li′, Buff♯k′)
This modified expression e′ is denoted e[ℓ⃗/ℓ⃗Li(ℓ)] in Rule BWrite♯. A label a′, representing the assignment of e′ to x, is put in the buffer in the form of
a tuple (x, a′ ). Note that the transition rule BWrite♯ increases the value of
Li(ℓ) (modulo (k + 1)) for every local variable ℓ present in expression e. We
can show the following property,
Lemma 2. For a state s♯ = (cs, Li, Buff ♯ k ) of TSO♯ k , if Li(ℓ) = m then local
variable ℓm does not appear in any write instruction used in buffers Buff ♯ k .
Proof. Suppose Li(ℓ) = m holds. By assumption, local variables among processes are disjoint therefore the only possibility is that Buff ♯ k [t] contains a
write instruction that uses local variable ℓm . If this were the case then there
must be at least k + 1 different writes appearing between that write and the
time s♯ is reached. This holds because every write, that uses a local variable
ℓ first increments its index by 1 and wraps around after k + 1. This incremented index is then used to create an instance of the local variable ℓ used
in this write operation. But it contradicts our assumption that the buffer is
of bounded length k.
The above lemma is used in the equivalence proof of TSOk and TSO♯ k .
Fence Fence instruction, like TSOk , gets enabled only when Buff ♯ k [t] is empty.
In the resultant state, function Li(t, ℓ) is set to 1 for every local variable ℓ of
Process Pt . This enables the reuse of indices in Function Li while preserving
Lemma 2.
(Fence♯)  Ins(a) = (fence), Buff♯k[t] = ε, Li′ = Li[(t, ℓ) ← 1] for all ℓ ∈ LVt   ⟹   (cs, Li, Buff♯k) =ε⇒k (cs′, Li′, Buff♯k)
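Before moving on, the renaming step of Rule BWrite♯ can be made concrete with a small Python sketch (the data structures are ours; it only illustrates the snapshot labels aℓ, the modulo-(k+1) instance counter, and the buffered tuple (x, a′)):

def buffered_write(t, x, expr_vars, k, li, buffs, trace):
    """Sketch of Rule BWrite#: snapshot the local variables read by a write x := e.

    expr_vars: local variables occurring free in e.
    li:        li[t][l] is the current instance index of local variable l in process t.
    buffs:     buffs[t] is the list of pending (shared_var, label) pairs of process t.
    trace:     the symbolic trace built so far (a list of instruction strings)."""
    assert len(buffs[t]) < k, "a buffered write is enabled only below the bound k"
    renamed = {}
    for l in expr_vars:
        inst = li[t][l]
        trace.append(f"{l}{inst} := {l}")        # snapshot instruction with label a_l
        renamed[l] = f"{l}{inst}"
        li[t][l] = li[t][l] % (k + 1) + 1        # bump the instance index modulo k+1
    subst = ", ".join(l + " -> " + renamed[l] for l in sorted(expr_vars))
    buffs[t].append((x, x + " := e[" + subst + "]"))   # the renamed write waits in the buffer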
To show the equivalence of TSOk and TSO♯ k we want to prove the following;
(i) for every state s reachable in TSOk there exists a trace σ ♯ in TSO♯ k such
that the SC interpretation of σ ♯ reaches a state with the same global memory
and local memory as of s, and (ii) for every trace σ ♯ of TSO♯ k such that its
SC interpretation is not Undef (i.e. execution should be feasible) there exists a
state s ∈ TSOk with same global and local memory as obtained after the SC
interpretation of σ ♯ . We formally prove the following theorem in the Appendix.
Theorem 2. Transition systems TSOk and TSO♯ k are equivalent in terms of
state reachability.
In Theorem 1 we used the restricted set S⇃cs,Gm,Lm,Buff lst as a means to define fixed
point. However, there are no explicit representations of the global memory (Gm)
and the local memory (Lm) in the state definition of TSO♯ k . Therefore, in order
to define a fixed point condition like Theorem 1 we first augment the definition of
state in TSO♯ k to include global memory and local memory. Let Gm♯ : SV → Lab
and Lm♯ : TID×LV → Lab be the functions assigning labels (of write instructions)
to shared variables and local variables respectively. Specifically, Gm♯ (x) = a
means that the write instruction at label a was used to define the current value
of x in this state. Similarly, Lm♯ (t, ℓ) = a means that the write instruction at
label a was used to define the current value of local variable ℓ of process t. Note
that in the construction of TSO♯ k the values written by these write instructions
are only being represented symbolically using instruction labels. Therefore we
need a way to relate the instruction labels and the actual values written. For
a concurrent program P with finite data domain it is possible to construct an
equivalent program P ′ such that every assignment to variables in P ′ is only of
constant values. For example, consider the program in Figure 6 such that the
domain of variable x is {1, 2}. This program is equivalent to the program in
Figure 7 where only constant values are used in the write instructions. Here the
domain of x is used along with if-then-else conditions to decide the value that
needs to be written to y.
After this transformation, every write label
uniquely identifies the value written to a shared variable. Hence the functions Gm♯ , Lm♯ can be extended
to SV → Val and LV → Val respectively. This allows us
to use Theorem 1 for checking the fixed point.
Fig. 6: ℓ:=x; y:=ℓ + 3.
Fig. 7: ℓ:=x; if(ℓ = 1) y:=4 else if(ℓ = 2) y:=5.
5.1 Fence Insertion For Program Correction
Let P be a program that is correct under the SC memory model. Let σ be an execution of P that violates
the given safety property under the TSO memory model. We can insert a fence
instruction in P so that σ does not appear as an execution under the TSO memory model. Towards this we use the critical cycle based approach of [27] and [6]
to detect the locations of fence insertions. For an execution σ, let Cmptσ be a
competing[6] or conflicting[27] relation on the read and write events of σ such
that (a, b) ∈ Cmptσ iff (i) both memory events operate on the same location but
originate from different processes, (ii) at least one of them is a write instruction, and (iii) a appears before b in σ. Let poσ denote the program order among
instructions of processes present in σ. This is defined based on the process specification. Let ppoσ = poσ \ {(a, b) | a ∈ W, b ∈ R, (a, b) ∈ poσ } be a subset of poσ
preserved under TSO memory model, i.e. everything except write-read orders.
An execution σ contains a critical cycle →cs ⊆ (Cmptσ ∪ poσ)+ iff (i) no cycle exists in (Cmptσ ∪ ppoσ)+, (ii) per process there are at most two memory accesses a and b in →cs such that Loc(a) ≠ Loc(b), and (iii) for a given shared variable x
there are at most three memory accesses on x which must originate from different processes. Following Theorem 1 of [6], an execution in TSO is sequentially
consistent if and only if it does not contain any critical cycle. Therefore, in order
to forbid an execution in TSO that is not sequentially consistent, it is sufficient
to ensure that no critical cycle exists in that execution. To avoid critical cycles,
we need to strengthen the ppoσ relation by adding a minimal set of program
orders such that Point (i) of critical cycle definition is not satisfied, i.e. finding a
set Dlay ⊆ poσ \ ppoσ , set of write-read pairs of instructions within each process,
such that (Cmptσ ∪ppoσ ∪Dlay)+ becomes cyclic. Once we identify that minimal
set of program orders we insert fence instructions in between them to enforce
the required orderings.
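As a sketch (ours) of how the delayed pairs could be extracted from a detected critical cycle:

def fence_candidates(cycle_edges, po, ppo):
    """Return the write->read program-order pairs lying on a critical cycle.

    cycle_edges: the (a, b) event pairs forming the cycle.
    po, ppo:     the program order and its TSO-preserved subset, as sets of pairs.
    Placing a fence between each returned pair enforces the delayed order and
    prevents the cycle from occurring under TSO."""
    return [(a, b) for (a, b) in cycle_edges
            if (a, b) in po and (a, b) not in ppo]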
Overall Algorithm The algorithm that combines incremental buffer-bounded verification and fence insertion for finite data programs works as follows. We start the verification with a buffer bound of 0. Towards this, the transition system TSO♯k is constructed using the relation ⇒k given in this section. This transition system is represented as an automaton, with error locations representing the accepting states and initial locations representing the initial state. The set of traces accepted by this automaton is then passed to the trace partitioning algorithm
implemented by [25] in the tool ProofTraPar. If an erroneous trace is found
then the program is not safe even under the SC memory model and hence the
algorithm returns the result as ‘Unsafe’. If all traces satisfy the given safety
property then the bound is increased by one and the analysis starts again. If
an error trace is found for non-zero buffer bound then the critical cycles are
obtained from this trace. Using these critical cycles a set of fence locations are
generated and the input program is modified by inserting fences in the code.
After the modification the analysis again starts with the same bound. This is
just an implementation choice because even if we increase the bound after the
modification still the fixed point will be eventually reached.
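A compact Python sketch of this loop (the helper functions named below are hypothetical placeholders standing in for the TSO♯k construction, the ProofTraPar-style check, the fixed-point test of Theorem 1, and the fence synthesis of Section 5.1):

def verify_incrementally(program, phi, max_k=16):
    """Incremental buffer-bounded verification with fence repair (sketch)."""
    k = 0
    while k <= max_k:
        traces = build_tso_sharp_automaton(program, k)   # symbolic TSO#_k traces
        cex = prove_or_counterexample(traces, phi)       # trace-partitioning check
        if cex is None:
            if reached_fixed_point(program, k):          # Theorem 1 stopping test
                return ("safe", program)
            k += 1                                       # safe so far: grow the bound
        elif k == 0:
            return ("unsafe even under SC", cex)         # bug independent of TSO
        else:
            cycles = critical_cycles(cex)
            program = insert_fences(program, cycles)     # repair, keep the same bound
    return ("bound exhausted", None)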
6 Experimental Results
We implemented our approach by extending the tool ProofTraPar which implements the trace partitioning based approach of [25]. We implemented TSO♯ k
semantics and fixed point reachability check on top of ProofTraPar. Its performance was compared against memorax which implements sound and complete
verification of state reachability under the TSO memory model. Note that other
tools which exist in this landscape of relaxed memory verification either consider
SC behaviour as specification [3,6,10] or are sound but not complete [23,1,28].
However memorax does not assume any bound on the buffer size and it uses
the coverability-based approach of well-structured transition systems. Fig. 8
compares the performance, in terms of time and memory, of our approach with
memorax. We ran all experiments on an Intel i7 3.1 GHz, 4-core machine with 8 GB of
RAM. Out of 11 examples, our tool outperformed memorax in 8 examples. Our
Program
Peterson.safe
Dekker.safe
Lamport.safe
Szymanksi.safe
Alternating Bit(ABP)
Dijkstra
Pgsql
RWLock.safe (2R,1W)
clh
Simple-dekker
Qrcu.safe (2R,1W)
# P ProofTraPar Memorax[2] # F
Time Memory Time Memory
(Sec) (MB) (Sec) (MB)
2
2
2
2
2
2
2
3
2
3
3
1.19
1.6
17
27
3.12
16
1.2
41
326
103
490
20
21.3
42
121
39
70
20
164
1500
155
3000
0.9
43
54.2
676
97
2312
ERR ERR
0.17
11
210
2800
600
3280
-
2
2
4
4
0
2
2
2
0
3
0
Fig. 8: Comparison of our tool with Memorax [2]. Timeout, denoted by ‘-’, is set to 10 minutes. #P and #F denote the number of processes and the number of fences synthesized.
tool not only performed better in terms of time but also in terms of the memory
consumption. Except in two cases, qrcu and clh queue, on every other example our tool consumed less than 200 MB of RAM, whereas memorax in most cases took more than 500 MB of RAM and in some cases even touched the 3 GB mark. Programs like Alternating bit protocol, clh queue and Qrcu (quick read copy update algorithm) remain correct even under the TSO memory model. For other algorithms where bugs were exposed under TSO we were able to synthesize fences
to correct their behaviour.
Analysis of the benchmarks memorax performed better on three benchmarks,
viz. peterson, szymanksi, and ABP. After carefully looking at them we realized
that the performance of memorax loosely depends upon the number of backward control flow paths from the error location to the start location, and the number of
write instructions present along those paths. In benchmarks where ProofTraPar
outperformed memorax, viz. dekker, lamport, clh, qrcu, more than two such control paths exist. To further check this hypothesis experimentally we modified
peterson and ABP to add a write instruction along an already existing control
flow path where no write instruction was present. This write was performed
on a variable which was never read and hence did not affect the program. After this modification memorax became more than 6 times slower in analyzing these two benchmarks. Further, the analysis of these modified benchmarks with ProofTraPar exhibited very little (less than a second) increase in time as
compared to the unmodified benchmarks. Interestingly, a bug was exposed in
memorax when we made a similar change in szymanksi. As a result of this bug
the modified program szymanksi was declared as safe. Note that the original
program szymanksi is incorrect under TSO and we only modified the code by
adding a write instruction to an unused variable. Therefore it is not possible for
the modified program szymanksi to become safe unless there is a bug in the tool.
This bug was also confirmed by the author of memorax.
6.1 Discussion
Note that memorax starts from the symbolic representation of all possible configurations of buffer contents which it further refines using backward reachability
analysis. However, in our approach we start from a finite and small buffer bound
(an under-approximation) and keep expanding until we reach a fixed point. We
believe that this difference, picking an over-approximation as a starting point in
one case and an under-approximation as a starting point in the other case, plays
a crucial role in the better performance of our approach on these benchmarks.
In all benchmarks, except peterson, buffer size of 1 was sufficient to expose
the error. In peterson, buffer size of 2 was needed to expose the bug. Effectively, the buffer size depends upon the minimum distance (along control flow
path) between a write and a read instruction within a process whose reordering
reveals the bug. In the case of peterson, this distance is 2 since the reordering of first instruction (write to f lagi ) and third instruction (read of f lagj )
within each process reveals the bug. In our benchmarks fence instructions were
inserted after finding an erroneous trace, as discussed in Section 5.1. Fence instruction restricts the unbounded growth of the buffer by flushing the buffer
contents. As a result, when a fence is inserted within a loop the buffer never
grows in size with loop iterations and fix point is reached quickly. In fact, for
all the benchmarks, if a bug was exposed with buffer size k then after inserting
the fence instruction the fix point was reached with buffer size k + 1. Benchmarks which remain correct under TSO, a larger buffer bound was required to
reach the fix point and this bound depends upon the number of write operations in each process. As a result, their analysis took longer time and consumed
more memory. Detailed analysis of the benchmarks and the tool are available at
www.cse.iitd.ac.in/~chinmay/ProofTraParTSO.
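To make the distance argument above concrete, the following toy sketch (not part of ProofTraPar; the instruction encoding and variable names are illustrative) enumerates, for a process given as a list of write/read operations, the write-read pairs that TSO may reorder (a write followed by a later read of a different variable) together with their control-flow distance:

```python
# Toy illustration only: list the write->read pairs that TSO may reorder in a
# process, with their control-flow distance. Under the discussion above, the
# pair whose reordering reveals the bug gives the buffer bound needed to expose it.

def reorderable_pairs(process):
    """process: list of ('w', var) or ('r', var) instructions in program order."""
    pairs = []
    for i, (op_w, var_w) in enumerate(process):
        if op_w != 'w':
            continue
        for j in range(i + 1, len(process)):
            op_r, var_r = process[j]
            if op_r == 'r' and var_r != var_w:
                pairs.append((i, j, j - i))  # (write index, read index, distance)
    return pairs

# Simplified entry section of peterson for process i (names are illustrative):
# write flag_i, write turn, read flag_j, read turn.
peterson_i = [('w', 'flag_i'), ('w', 'turn'), ('r', 'flag_j'), ('r', 'turn')]
print(reorderable_pairs(peterson_i))
# The (write flag_i, read flag_j) pair has distance 2, matching the buffer
# bound of 2 reported for peterson above.
```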
7 Conclusion and Future Work
This paper uses a trace-partitioning-based approach to verify state reachability of concurrent programs under the TSO memory model. We have also shown that for finite-state programs there exists a buffer bound such that if the program is safe up to that bound then it is guaranteed to be safe for unbounded buffers as well. This work can easily be extended to the PSO memory model. The method gives an alternate decidability proof of state reachability under the TSO (and PSO) memory models. We have also shown experimentally that for standard benchmarks used in the literature such a bound is very small (in the range of 2-4), and hence we may use SC-verification-based methods to efficiently check concurrent programs under these memory models. We believe that for other buffer-based memory models a buffer bound can be shown to exist in a similar manner. Recently, [21] proposed a buffer-based operational semantics for the C11 model. It will be interesting to investigate the use of the bounded-buffer-based method proposed in this paper for that semantics as well.
References
1. Abdulla, P. A., Aronis, S., Atig, M. F., Jonsson, B., Leonardsson, C.,
and Sagonas, K. F. Stateless model checking for TSO and PSO. In TACAS’15.
2. Abdulla, P. A., Atig, M. F., Chen, Y.-F., Leonardsson, C., and Rezine, A.
Counter-example guided fence insertion under tso. TACAS’12, Springer-Verlag.
3. Abdulla, P. A., Atig, M. F., and Ngo, T.-P. The best of both worlds: Trading
efficiency and optimality in fence insertion for tso. In ESOP’15, Springer-Verlag.
4. Alglave, J., Kroening, D., Nimal, V., and Poetzl, D. Don’t sit on the fence
- A static analysis approach to automatic fence insertion. In CAV’14.
5. Alglave, J., Kroening, D., Nimal, V., and Tautschnig, M. Software verification for weak memory via program transformation. In ESOP’13 (2013).
6. Alglave, J., and Maranget, L. Stability in weak memory models. CAV’11,
Springer-Verlag, pp. 50–66.
7. Atig, M. F., Bouajjani, A., Burckhardt, S., and Musuvathi, M. On the
verification problem for weak memory models. SIGPLAN Not., Jan’10 45, 1.
8. Atig, M. F., Bouajjani, A., and Parlato, G. Getting rid of store-buffers in
TSO analysis. In CAV’11 (2011).
9. Bouajjani, A., Calin, G., Derevenetc, E., and Meyer, R. Lazy TSO reachability. In FASE’15 (2015).
10. Bouajjani, A., Derevenetc, E., and Meyer, R. Checking and enforcing robustness against tso. ESOP’13, Springer-Verlag, pp. 533–553.
11. Burnim, J., Sen, K., and Stergiou, C. Sound and complete monitoring of
sequential consistency for relaxed memory models. TACAS’11, Springer-Verlag.
12. Chandra, A. K., Kozen, D. C., and Stockmeyer, L. J. Alternation. J. ACM
28, 1 (Jan. 1981), 114–133.
13. Dan, A. M., Meshman, Y., Vechev, M. T., and Yahav, E. Predicate abstraction for relaxed memory models. SAS’13, pp. 84–104.
14. Dan, A. M., Meshman, Y., Vechev, M. T., and Yahav, E. Effective abstractions for verification under relaxed memory models. VMCAI’15, pp. 449–466.
15. Dijkstra, E. W. Guarded commands, nondeterminacy and formal derivation of
programs. Commun. ACM 18, 8 (Aug. 1975), 453–457.
16. Farzan, A., Kincaid, Z., and Podelski, A. Inductive data flow graphs. In
POPL’13.
17. Hopcroft, J. E., Motwani, R., and Ullman, J. D. Introduction to automata
theory, languages, and computation, 2nd edition.
18. Joshi, S., and Kroening, D. Property-driven fence insertion using reorder
bounded model checking. In FM 2015: (2015).
19. Kuperstein, M., Vechev, M. T., and Yahav, E. Partial-coherence abstractions
for relaxed memory models. PLDI’11, pp. 187–198.
20. Kuperstein, M., Vechev, M. T., and Yahav, E. Automatic inference of memory fences. SIGACT News 43, 2 (2012), 108–123.
21. Lahav, O., Giannarakis, N., and Vafeiadis, V. Taming release-acquire consistency. In POPL’16 (New York, NY, USA, 2016), POPL 2016, ACM, pp. 649–662.
22. Linden, A., and Wolper, P. A verification-based approach to memory fence
insertion in relaxed memory systems. In SPIN’11 (2011), pp. 144–160.
23. Linden, A., and Wolper, P. A verification-based approach to memory fence
insertion in pso memory systems. TACAS’13, Springer-Verlag, pp. 339–353.
24. Meshman, Y., Dan, A. M., Vechev, M. T., and Yahav, E. Synthesis of memory
fences via refinement propagation. SAS’14, pp. 237–252.
25. Narayan, C., Sharma, S., Guha, S., and Arun-Kumar, S. From traces to proofs: Proving concurrent program safe (accepted for publication). Theoretical Aspects of Software Engineering, 2016 (arXiv version: http://arxiv.org/abs/1506.07635).
26. Riddle, W. E. An approach to software system modelling and analysis. Comput.
Lang. 4, 1 (Jan. 1979), 49–66.
27. Shasha, D., and Snir, M. Efficient and correct execution of parallel programs
that share memory. TOPLAS 10, 2 (Apr. 1988), 282–312.
28. Zhang, N., Kusano, M., and Wang, C. Dynamic partial order reduction for
relaxed memory models. In Proceedings of the 36th ACM SIGPLAN Conference on
Programming Language Design and Implementation (New York, NY, USA, 2015),
PLDI 2015, ACM, pp. 250–259.
arXiv:1303.2054v1 [] 8 Mar 2013
Mining Representative Unsubstituted Graph Patterns
Using Prior Similarity Matrix∗ †
Wajdi Dhifli (1), Rabie Saidi (2), Engelbert Mephu Nguifo (1)
(1) Clermont University, Blaise Pascal University, LIMOS, BP 10448, F-63000 Clermont-Ferrand, France; CNRS, UMR 6158, LIMOS, F-63173 Aubière, France
(2) European Bioinformatics Institute, Hinxton, Cambridge, CB10 1SD, United Kingdom
[email protected], [email protected], [email protected]
ABSTRACT
One of the most powerful techniques to study protein structures is to look for recurrent fragments (also called substructures or spatial motifs), then use them as patterns to characterize the proteins under study. An emergent trend consists in parsing proteins' three-dimensional (3D) structures into graphs of amino acids. Hence, the search for recurrent spatial motifs is formulated as a process of frequent subgraph discovery, where each subgraph represents a spatial motif. In this scope, several efficient approaches for frequent subgraph discovery have been proposed in the literature. However, the set of discovered frequent subgraphs is too large to be efficiently analyzed and explored in any further process. In this paper, we propose a novel pattern selection approach that shrinks the large number of discovered frequent subgraphs by selecting the representative ones. Existing pattern selection approaches do not exploit the domain knowledge. Yet, in our approach we incorporate the evolutionary information of amino acids defined in the substitution matrices in order to select the representative subgraphs. We show the effectiveness of our approach on a number of real datasets. The results of our experiments show that our approach is able to considerably decrease the number of motifs while enhancing their interestingness.

Categories and Subject Descriptors
E.4 [Coding and information theory]: [Database Applications - Data Mining]

General Terms
Algorithms, Experimentation
∗ An implementation of the proposed approach and the datasets used in the experiments are freely available on the first author's personal web page (http://fc.isima.fr/~dhifli/unsubpatt/) or by email request to any one of the authors.
† This paper is the full version of a preliminary work accepted as an abstract in MLCB'12 (NIPS workshop).
Keywords
Feature selection, graph mining, representative unsubstituted subgraphs, spatial motifs, protein structures
1. INTRODUCTION
Studying protein structures can reveal relevant structural
and functional information which may not be derived from
protein sequences alone. During recent years, various methods that study protein structures have been elaborated based
on diverse types of descriptor such as profiles [25], spatial
motifs [17, 21] and others. Yet, the exponential growth of
online databases such as the Protein Data Bank (PDB) [4],
CATH [7], SCOP [2] and others, raises an urgent need for more accurate methods that will help to better understand the studied phenomena, such as protein evolution, functions, etc.
In this scope, proteins have recently been interpreted as
graphs of amino acids and studied based on graph theory
concepts [14]. This representation enables the use of graph
mining techniques to study protein structures in a graph
perspective. In fact, in graph mining, any problem or object
under consideration is represented in the form of nodes and
edges and studied based on graph theory concepts [3, 12, 16,
6]. One of the powerful and current trends in graph mining is
frequent subgraph discovery. It aims to discover subgraphs
that frequently occur in a graph dataset and use them as
patterns to describe the data. These patterns are later
analyzed by domain experts to reveal interesting information
hidden in the original graphs, such as discovering pathways
in metabolic networks [9], identifying residues that play the
role of hubs in the protein and stabilize its structure [24],
etc.
The graph isomorphism test is one of the main bottlenecks of frequent subgraph mining. Yet, many efficient and
scalable algorithms have been proposed in the literature and
made it feasible, for instance FFSM [15], gSpan [29], GASTON [18], etc. Unfortunately, the exponential number of
discovered frequent subgraphs is another serious issue that
still needs more attention [27], since it may hinder or even
make any further analysis unfeasible due to time, resources,
and computational limitations. For example, in an AIDS antiviral screen dataset composed of 422 chemical compounds,
there are more than 1 million frequent substructures when
the minimum support is 5%. This problem becomes even
more serious with graphs of higher density such as those
representing protein structures. In fact, the issues raised
from the huge number of frequent subgraphs are mainly due
to two factors, namely redundancy and significance [22]. Redundancy in a frequent subgraph set is caused by structural
and/or semantic similarity, since most discovered subgraphs
differ slightly in structure and may convey similar or even the same meaning. Moreover, the significance of the discovered frequent subgraphs is only related to frequency. This yields an urgent need for efficient approaches allowing the selection of relevant patterns among the large set of frequent subgraphs.
In this paper, we propose a novel selection approach which selects a subset of representative patterns, which we term unsubstituted patterns, from a set of labeled subgraphs. In order to select these unsubstituted patterns and to shrink the large size of the initial set of frequent subgraphs, we exploit a specific domain knowledge, namely the substitution between the amino acids represented as nodes. Thus, the main contribution of this work is to define a new approach for mining a representative summary of the set of frequent subgraphs by incorporating specific background domain knowledge, namely the possibility of substitution between node labels in the graph. In this work, we apply the proposed approach to protein structures because of the availability of substitution matrices in the literature; however, it can be considered as a general framework for other applications whenever it is possible to define a matrix quantifying the possible substitutions between the labels. Our approach can also be used on any type of subgraph structure such as cliques, trees and paths (sequences). In addition, it can easily be coupled with other pattern selection methods such as discrimination- or orthogonality-based approaches. Moreover, this approach is unsupervised and can help in various mining tasks, unlike other approaches that are supervised and dedicated to a specific task such as classification.
The remainder of the paper is organized as follows. Section 2 discusses the recent related works in the area of pattern selection for subgraphs. In Section 3, we present the
background of our work and we define the preliminary concepts as well as the main algorithm of our approach. Then,
Section 4 describes the characteristics of the used data and
the experimental settings. Section 5 presents the obtained
experimental results and the discussion. It is worth noting
that in the rest of the paper, we use the following terms
interchangeably: spatial motifs, patterns, subgraphs.
2. RELATED WORKS
Recently, several approaches have been proposed for pattern selection in subgraph mining. In [5], the authors proposed ORIGAMI, an approach for both subgraph discovery and selection. It first randomly mines a sample of maximal frequent subgraphs, then directly selects an α-orthogonal (non-redundant), β-representative subset of subgraphs from the mined set. The LEAP algorithm proposed in [28]
tries to locate patterns that individually have high discrimination power, using an objective function score that measures each pattern’s significance. Another approach termed
gPLS was proposed in [20]. It attempts to select a set of
informative subgraphs in order to rapidly build a classifier.
gPLS uses the mathematical concept of partial least squares
regression to create latent variables allowing a better prediction. COM [16] is another subgraph selection approach
which follows a process of pattern mining and classifier learning. It mines co-occurrence rules. Then, based on the cooccurrence information it assembles weak features in order
to generate strong ones. In [22], authors proposed a feature selection approach termed CORK. To find frequent subgraphs, it uses the state-of-the-art approach gSpan. Then
using a submodular quality function, it selects among them
the subset of subgraphs that are most discriminative for
classification. In [10], authors designed LPGBCMP, a general model which selects clustered features by considering
the structure relationship between subgraph patterns in the
functional space. The selected subgraphs are used as weak
classifiers (base learners) to obtain high quality classification models. To the best of our knowledge, in all existing
subgraph selection approaches [11], the selection is usually
based on structural similarity [5] and/or statistical measures
(e.g., frequency and coverage (closed [30], maximal [23]), discrimination [22], etc.). Yet, the prior information and knowledge about the domain are often ignored. However, this prior knowledge may help in building dedicated approaches that best fit the studied data.
3. MINING REPRESENTATIVE UNSUBSTITUTED PATTERNS
3.1 Background
Statistical pattern selection methods have been widely
used to resolve the dimensionality problem when the number
of discovered patterns is too large. However, these methods
are too generic and do not consider the specificity of the
domain and the used data. We believe that in many contexts, it would be important to incorporate the background
knowledge about the domain in order to create approaches
that best fit the considered data. In proteomics, a protein
structure is composed by the folding of a set of amino acids.
During evolution, amino acids can substitute each other.
The scores of substitution between pairs of amino acids were
quantified by biologists in the literature in the form of substitution matrices such as Blosum62 [8]. Our approach uses
the substitution information given in the substitution matrices in order to select a subset of unsubstituted patterns
that summarizes the whole set of frequent subgraphs. We
consider the selected patterns as the representative ones.
The main idea of our approach is based on node substitution. Since the nodes of a protein graph represent amino acids, it is thus possible, using a substitution matrix, to quantify the substitution between two given subgraphs. Starting from this idea, we define a similarity function that measures the distance between a given pair of subgraphs. Then, we preserve only one subgraph from each pair having a similarity score greater than or equal to a user-specified threshold, such that the preserved subgraphs represent the set of representative unsubstituted patterns. An overview of
the proposed approach is illustrated in Figure 1 and a more
detailed description is given in the following sections.
Figure 1: Unsubstituted pattern selection framework.

The substitution between amino acids was also used in the literature, but for sequential feature extraction from protein sequences, in [19], where the authors proposed a feature extraction approach termed DDSM for protein sequence classification. Their approach is restricted to protein
sequences and generates every subsequence substituting another one. In other words, DDSM eliminates any pattern
substituted by another one and which itself does not substitute any other one. We believe that their approach does
not guarantee an optimal summarization since its output
may still contain patterns that substitute each other. In
addition, they consider negative substitution scores as impossible substitutions, which is biologically not true, since negative scores only express the less likely substitutions, which obviously does not mean that they are impossible. Moreover, DDSM is limited to protein sequences and does not handle more complex structures such as the protein tertiary structure. Our approach overcomes these shortcomings, since it can handle both protein sequences and structures (a sequence can be seen as a line graph). In addition, it considers both the positive and negative scores of the matrix. Moreover, our approach generates a set of representative unsubstituted patterns ensuring an optimal summarization of the initial set. Besides, it is unsupervised and can be exploited in classification as well as in other analysis and knowledge discovery contexts, unlike DDSM, which is dedicated to classification.
3.2 Preliminaries
In this section we present the fundamental definitions and
the formal problem statement. Let G be a dataset of graphs.
Each graph G = (V, E, L) of G is given as a collection of
nodes V and a collection of edges E. The nodes of V are
labeled within an alphabet L. We denote by |V | the number
of nodes (also called the graph order) and by |E| the number
of edges (also called the graph size). Let also Ω be the set
of frequent subgraphs extracted from G, also referred to here as patterns.
Definition 1. (Substitution matrix) Given an alphabet L, a substitution matrix M over L is the function defined as:

M : L² → [⊥, ⊤] ⊂ R,  (l, l') ↦ x    (1)

The higher the value of x, the more likely is the substitution of l' by l. If x = ⊥ then the substitution is impossible, and if x = ⊤ then the substitution is certain. The values ⊥ and ⊤ are optional and user-specified; they may or may not appear in M. The scores in M should respect the following two properties:
1. ∀ l ∈ L, ∃ l' ∈ L such that M(l, l') ≠ ⊥,
2. ∀ l ∈ L, if ∃ l' ∈ L such that M(l, l') = ⊤ then ∀ l'' ∈ L\{l, l'}, M(l, l'') = ⊥ and M(l', l'') = ⊥.

In many real-world applications, the substitution matrices may contain both positive and negative scores. In the case of proteins' substitution matrices, both positive and negative values represent possible substitutions. However, positive scores are given to the more likely substitutions, while negative scores are given to the less likely ones. Thus, in order to give more magnitude to higher values of x, ∀ l, l' ∈ L:

M(l, l') = e^{M(l, l')}    (2)
Definition 2. (Structural isomorphism) Two patterns P = (V_P, E_P, L) and P' = (V_P', E_P', L) are said to be structurally isomorphic (having the same shape), denoted shape(P, P') = true, iff:
- P and P' have the same order, i.e., |V_P| = |V_P'|,
- P and P' have the same size, i.e., |E_P| = |E_P'|,
- there exists a bijective function f : V_P → V_P' such that ∀ u, v ∈ V_P, if (u, v) ∈ E_P then (f(u), f(v)) ∈ E_P', and conversely.
It is worth mentioning that in the graph isomorphism
problem we test whether two graphs are exactly the same by
considering both structures and labels. But in this definition, we only test whether two given graphs are structurally
the same, in terms of nodes and edges, without considering
the labels.
Definition 3. (Elementary mutation probability) Given a node v with a label l ∈ L, the elementary mutation probability M_el(v) measures the possibility that v stays itself and does not mutate to any other node, depending on its label l:

M_el(v) = 0, if M(l, l) = e^⊥;  M_el(v) = 1, if M(l, l) = e^⊤;  M_el(v) = M(l, l) / Σ_{i=1..|L|} M(l, l_i), otherwise.    (3)

Obviously, if the substitution score in M between l and itself is ⊥, then it is certain that l will mutate to another label l', and the probability that v does not mutate should be 0. Conversely, if this substitution score is ⊤, then it is certain that v will stay itself and conserve its label l, so the probability value must be equal to 1. Otherwise, we divide the score that l mutates to itself by the sum of all the possible mutations.

Definition 4. (Pattern mutation probability) Given a pattern P = (V_P, E_P, L) ∈ Ω, the pattern mutation probability M_patt(P) measures the possibility that P mutates to any other pattern having the same order:

M_patt(P) = 1 − Π_{i=1..|V_P|} M_el(P[i])    (4)

where Π_{i=1..|V_P|} M_el(P[i]) represents the probability that the pattern P does not mutate to any other pattern, i.e., P stays itself.
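As a small numerical illustration of Eqs. (2)-(4), the following sketch uses a toy two-letter alphabet with made-up scores (not Blosum62 values), and omits the e^⊥ and e^⊤ special cases of Eq. (3):

```python
import math

# Toy substitution matrix over a two-letter alphabet (scores are illustrative).
raw_scores = {('A', 'A'): 4, ('A', 'B'): -1,
              ('B', 'A'): -2, ('B', 'B'): 5}
alphabet = ['A', 'B']

# Eq. (2): give more magnitude to higher scores by exponentiation.
M = {pair: math.exp(score) for pair, score in raw_scores.items()}

def m_el(label):
    """Eq. (3): probability that a node labeled `label` does not mutate."""
    return M[(label, label)] / sum(M[(label, l2)] for l2 in alphabet)

def m_patt(pattern_labels):
    """Eq. (4): probability that a pattern (list of node labels) mutates."""
    stay = 1.0
    for label in pattern_labels:
        stay *= m_el(label)
    return 1.0 - stay

print(m_el('A'), m_patt(['A', 'B', 'A']))
```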
Definition 5. (Elementary substitution probability) Given two nodes v and v' having, respectively, the labels l, l' ∈ L, the elementary substitution probability S_el(v, v') measures the possibility that v substitutes v':

S_el(v, v') = M(l, l') / M(l, l)    (5)

It is worth noting that S_el is not symmetric, i.e., in general S_el(v, v') ≠ S_el(v', v).

Definition 6. (Pattern substitution score) Given two patterns P = (V_P, E_P, L) and P' = (V_P', E_P', L) having the same shape, we denote by S_patt(P, P') the substitution score of P' by P. In other words, it measures the possibility that P mutates to P' by computing the sum of the elementary substitution probabilities and normalizing it by the total number of nodes of P. Formally:

S_patt(P, P') = ( Σ_{i=1..|V_P|} S_el(P[i], P'[i]) ) / |V_P|    (6)

Definition 7. (Pattern substitution) A pattern P substitutes a pattern P', denoted subst(P, P', τ) = true, iff:
1. P and P' are structurally isomorphic (shape(P, P') = true),
2. S_patt(P, P') ≥ τ, where τ is a user-specified threshold such that 0% ≤ τ ≤ 100%.
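A minimal sketch of Definitions 5-7 follows (the toy exponentiated matrix is re-declared for self-containment; structural isomorphism is reduced here to equal length, which is only adequate for path-shaped patterns whose nodes are aligned position by position):

```python
import math

# Toy exponentiated matrix, as in the previous sketch.
raw_scores = {('A', 'A'): 4, ('A', 'B'): -1, ('B', 'A'): -2, ('B', 'B'): 5}
M = {pair: math.exp(s) for pair, s in raw_scores.items()}

def s_el(l_from, l_to):
    """Eq. (5): elementary substitution probability of l_to by l_from."""
    return M[(l_from, l_to)] / M[(l_from, l_from)]

def s_patt(p, p_prime):
    """Eq. (6): substitution score of pattern p_prime by pattern p."""
    assert len(p) == len(p_prime)
    return sum(s_el(a, b) for a, b in zip(p, p_prime)) / len(p)

def subst(p, p_prime, tau):
    """Definition 7, restricted to same-length label sequences."""
    return len(p) == len(p_prime) and s_patt(p, p_prime) >= tau

print(subst(['A', 'B'], ['B', 'B'], 0.3))
```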
Definition 8. (Unsubstituted pattern) Given a threshold τ and Ω* ⊆ Ω, a pattern P* ∈ Ω* is said to be unsubstituted iff there exists no P ∈ Ω* such that M_patt(P) > M_patt(P*) and subst(P, P*, τ) = true.

Proposition 1 (Null M_patt case). Given a pattern P = (V_P, E_P, L) ∈ Ω, if M_patt(P) = 0 then P is an unsubstituted pattern.

Proof. The proof follows directly from Definitions 3 and 4.
Definition 9. (Merge support) Given two patterns P and P', if P substitutes P' then P will represent P' in the list of graphs where P' occurs. Formally:

∀ (P, P') such that subst(P, P', τ) = true: D_P = D_P ∪ D_P'    (7)

where D_P and D_P' are, respectively, the occurrence sets of P and P'.

Algorithm 1: UnSubPatt
  Data: Ω, M, τ, (⊥, ⊤) [optional]
  Result: Ω*: the set of unsubstituted patterns
  begin
    Ω ← {Ω_k | Ω_k ← {P ∈ Ω | ∀ (P', P'') ∈ Ω_k, |V_P'| = |V_P''| and |E_P'| = |E_P''|}};
    foreach Ω_k ∈ Ω do
      Ω_k ← sort(Ω_k by M_patt);
      foreach P ∈ Ω_k do
        if M_patt(P) > 0 then
          foreach P' ∈ Ω_k \ P such that M_patt(P') < M_patt(P) do
            if M_patt(P') > 0 then
              if shape(P, P') and subst(P, P', τ) then
                merge_support(P, P');
                remove(P', Ω_k);
      Ω* ← Ω* ∪ Ω_k;

Theorem 1. Let Ω be a set of patterns and Ω* its subset of unsubstituted patterns based on a substitution matrix M and a threshold τ, i.e., UnSubPatt(Ω, M, τ, (⊥, ⊤)) = Ω*. Then:

UnSubPatt(Ω*, M, τ, (⊥, ⊤)) = Ω*    (8)

Proof. The proof can be deduced directly from Definition 8. Given a threshold τ, Ω* cannot be summarized by any of its subsets other than itself. Formally:

∀ P ∈ Ω*, there exists no P' ∈ Ω* such that M_patt(P) > M_patt(P') and subst(P, P', τ) = true.    (9)
3.3 Algorithm
Given a set of patterns Ω and a substitution matrix M, we propose UnSubPatt (see Algorithm 1), a pattern selection algorithm which detects the set of unsubstituted patterns Ω* within Ω. Based on our similarity concept, all the patterns in Ω* are dissimilar, since it does not contain any pair of patterns that are substitutable. This represents a reliable summarization of Ω.
The general process of the algorithm is as follows: first, Ω is divided into subsets of patterns having the same number of nodes and edges. Then, each subset is sorted in descending order of the pattern mutation probability M_patt. Each subset is browsed starting from the pattern having the highest M_patt. For each pattern, we remove all the patterns it substitutes and we merge their supports, such that the preserved pattern represents all the removed ones wherever they occur. The remaining patterns constitute the unsubstituted pattern set. Thus, Ω* cannot be summarized by any subset of it other than itself. Our algorithm uses Proposition 1 to avoid unnecessary computation related to patterns with M_patt = 0: they are directly considered as unsubstituted patterns, since they cannot mutate to any other pattern.
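The following sketch mirrors the structure of Algorithm 1 on label sequences (again a simplification, not the authors' implementation: patterns are plain label tuples, shape() is reduced to equal length, and supports are Python sets of graph identifiers):

```python
import math
from collections import defaultdict

raw_scores = {('A', 'A'): 4, ('A', 'B'): -1, ('B', 'A'): -2, ('B', 'B'): 5}
alphabet = ['A', 'B']
M = {pair: math.exp(s) for pair, s in raw_scores.items()}

def m_el(l):
    return M[(l, l)] / sum(M[(l, l2)] for l2 in alphabet)

def m_patt(p):
    stay = 1.0
    for l in p:
        stay *= m_el(l)
    return 1.0 - stay

def s_patt(p, q):
    return sum(M[(a, b)] / M[(a, a)] for a, b in zip(p, q)) / len(p)

def unsubpatt(patterns, supports, tau):
    """patterns: list of label tuples; supports: dict pattern -> set of graph ids."""
    groups = defaultdict(list)
    for p in patterns:
        groups[len(p)].append(p)               # group by order (same shape here)
    selected = []
    for group in groups.values():
        group.sort(key=m_patt, reverse=True)   # highest mutation probability first
        kept = []
        for p in group:
            dominating = next((q for q in kept
                               if m_patt(q) > m_patt(p) and s_patt(q, p) >= tau), None)
            if dominating is None:
                kept.append(p)
            else:
                supports[dominating] |= supports[p]   # merge support (Definition 9)
        selected.extend(kept)
    return selected

pats = [('A', 'B'), ('B', 'B'), ('A', 'A')]
sup = {p: {i} for i, p in enumerate(pats)}
print(unsubpatt(pats, sup, 0.3))
```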
3.4 Complexity
Suppose Ω contains n patterns. Ω is divided into g groups, each containing patterns of order k. This is done in O(n). Each group Ω_k is sorted in O(|Ω_k| log |Ω_k|). Searching for the unsubstituted patterns requires browsing Ω_k (O(|Ω_k|)) and, for each pattern, browsing in the worst case all remaining patterns (O(|Ω_k|)) to check the shape (O(k)) and the substitution (O(k)). This means that searching for the unsubstituted patterns in a group Ω_k can be done in O(|Ω_k|² k²). Hence, in the worst case, the complexity of our algorithm is O(g · m_max² · k_max²), where k_max is the maximum pattern order and m_max is the number of patterns of the largest group Ω_k.
4. EXPERIMENTS
4.1 Datasets
In order to experimentally evaluate our approach, we use
four graph datasets of protein structures, which also have
been used in [28] then [10]. Each dataset consists of two
classes: positive and negative. Positive samples are proteins
selected from a considered protein family whereas negative
samples are proteins randomly gathered from the Protein
Data Bank [4]. Each protein is parsed into a graph of amino
acids. Each node represents an amino acid residue and is labeled with its amino acid type. Two nodes u and v are linked by an edge e(u, v) = 1 if the Euclidean distance between their two Cα atoms, Δ(Cα(u), Cα(v)), is below a threshold distance δ. Formally:

e(u, v) = 1 if Δ(Cα(u), Cα(v)) ≤ δ, and e(u, v) = 0 otherwise.    (10)

In the literature, many methods use this definition, usually with δ ≥ 7 Å, on the argument that Cα atoms define the overall shape of the protein conformation [13]. In our experiments, we use δ = 7 Å.
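A sketch of this graph construction is given below (assuming the Cα coordinates have already been parsed, e.g. as a list of (residue_type, xyz) tuples; networkx is used only for convenience and is not implied by the paper):

```python
import math
import networkx as nx

def build_protein_graph(residues, delta=7.0):
    """residues: list of (amino_acid_type, (x, y, z)) for the Calpha atoms,
    in sequence order. Two residues are linked iff their Calpha distance <= delta."""
    g = nx.Graph()
    for idx, (aa, _) in enumerate(residues):
        g.add_node(idx, label=aa)
    for i, (_, ci) in enumerate(residues):
        for j in range(i + 1, len(residues)):
            cj = residues[j][1]
            if math.dist(ci, cj) <= delta:   # Euclidean distance between Calpha atoms
                g.add_edge(i, j)
    return g

toy = [('ALA', (0.0, 0.0, 0.0)), ('GLY', (3.8, 0.0, 0.0)), ('LYS', (30.0, 0.0, 0.0))]
print(build_protein_graph(toy).edges())
```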
Table 1 summarizes the characteristics of each dataset.
SCOP ID, Avg.|V|, Avg.|E|, Max.|V| and Max.|E| correspond respectively to the id of the protein family in SCOP
database [2], the average number of nodes, the average number of edges, the maximal number of nodes and the maximal
number of edges in each dataset.
4.2 Protocol and Settings
Generally, in a pattern selection approach two aspects are
emphasized, namely the number of selected patterns and
their interestingness. In order to evaluate our approach,
we first use the state-of-the-art method of frequent subgraph discovery gSpan [29] to find the frequent subgraphs in
each dataset with a minimum frequency threshold of 30%.
Then, we use UnSubPatt to select the unsubstituted patterns among them with a minimum substitution threshold
τ =30%. For our approach, we use Blosum62 [8] as the substitution matrix since it turns out that it performs well on
detecting the majority of weak protein similarities, and it is
used as the default matrix by most biological applications
such as BLAST [1]. It is worth mentioning that the choice
of 30% as minimum frequency threshold for the frequent
subgraph extraction is to make the experimental evaluation
feasible due to time and computational limitations.
In order to evaluate the number of selected subgraphs, we define the selection rate as the proportion of unsubstituted subgraphs in the initial set of frequent subgraphs. Formally:

Selection rate = (|Ω*| × 100) / |Ω|    (11)
To evaluate the interestingness of the set of selected patterns, we use them as features for classification. We perform a 5-fold cross-validation classification (5 runs) on each
protein-structure dataset. We encode each protein into a
binary vector, denoting by ”1” or ”0” the presence or the absence of the feature in the considered protein. To judge the
interestingness of the selected subgraphs, we use one of the best-known classifiers, namely the naïve Bayes (NB) classifier, due to its simplicity, its fast prediction, and the fact that its classification technique is based on a global and conditional evaluation of the input features. NB is used with the default parameters from the Weka workbench [26].
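A sketch of this evaluation protocol follows (assuming the occurrence sets of the selected patterns are available as Python sets of protein indices; scikit-learn's BernoulliNB stands in here for the Weka naïve Bayes classifier used in the paper, and the toy data are placeholders):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

def encode(num_proteins, occurrence_sets):
    """Binary encoding: entry (i, j) is 1 iff pattern j occurs in protein i."""
    x = np.zeros((num_proteins, len(occurrence_sets)), dtype=int)
    for j, occ in enumerate(occurrence_sets):
        for i in occ:
            x[i, j] = 1
    return x

# Toy data: 6 proteins, 3 selected patterns, binary class labels.
x = encode(6, [{0, 1, 2}, {3, 4}, {0, 5}])
y = np.array([1, 1, 1, 0, 0, 0])
print(cross_val_score(BernoulliNB(), x, y, cv=3).mean())
```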
5. RESULTS AND DISCUSSION
In this section, we conduct experiments to examine the
effectiveness and efficiency of UnSubPatt in finding the
representative unsubstituted subgraphs. We test the effect
of changing the substitution matrix and the substitution
threshold on the results. Moreover, we study the size-based
distribution of patterns and we compare the results of our
approach with those of other subgraph selection methods
from the literature.
Table 2: Number of frequent subgraphs (Ω), representative unsubstituted subgraphs (Ω*) and the selection rate
Dataset     |Ω|       |Ω*|    Selection rate (%)
DS1          799094    7291    0.91
DS2          258371   15898    6.15
DS3          114792   14713   12.82
DS4         1073393    9958    0.93
5.1 Empirical Results
Here, we show the results of our experiments obtained
in terms of number of motifs and classification results. As
mentioned before, we use gSpan to extract the frequent subgraphs from each dataset with frequency ≥ 30%. Then, we
use UnSubPatt to select the unsubstituted patterns among
them with a substitution threshold τ = 30% and using Blosum62 as the substitution matrix. Finally, we perform a 5-fold cross-validation classification (5 runs) to evaluate the interestingness of each subset using the NB classifier. The obtained average results are reported in Table 3.
The high number of discovered frequent subgraphs is due
to their combinatorial nature (this was discussed in the introductory section). The results reported in Table 2 show
that our approach decreases considerably the number of subgraphs. The selection rate shows that the number of unsubstituted patterns | Ω∗ | does not exceed 13% of the initial set
of frequent subgraphs | Ω | with DS3 and even reaches less
than 1% with DS1 and DS4. This proves that exploiting the
domain knowledge, which in this case consists in the substitution matrix, enables emphasizing information that can
possibly be ignored when using exiting subgraph selection
approaches.
The classification results reported in Table 3 help to evaluate the interestingness of the selected patterns. Indeed, this
will demonstrate if the unsubstituted patterns were arbitrarily selected or they are really representative. Table 3 shows
that the classification accuracy significantly increases with
all datasets. We notice a huge leap in accuracy especially
with DS1 and DS4 with a gain of more than 17% and reaching almost full accuracy with DS4. To better understand the
accuracy results, we also report the average precision, recall, F-measure and AUC values for all cases. We notice an
enhancement of performance with all the mentioned quality metrics. This supports the reliability of our selection
approach.
5.2 Results Using Other Substitution Matrices
Besides Blosum62, biologists have also defined other substitution matrices describing the likelihood that two amino acid types would mutate to each other in evolutionary time. We want to study the effect of using other substitution matrices on the experimental results. Thus, we perform the same experiments, following the same protocol and settings, but using two other substitution matrices, namely Blosum80 and Pam250. We compare the obtained results, in terms of number of subgraphs and classification accuracy, with those obtained using the whole set of frequent subgraphs and those using the subgraphs selected by UnSubPatt with Blosum62. The results are reported in Table 4.
Table 1: Experimental data
Dataset  SCOP ID  Family name                  Pos.  Neg.  Avg.|V|  Avg.|E|  Max.|V|  Max.|E|
DS1      52592    G proteins                    33    33     246      971      897     3544
DS2      48942    C1 set domains                38    38     238      928      768     2962
DS3      56437    C-type lectin domains         38    38     185      719      755     3016
DS4      88854    Kinases, catalytic subunit    41    41     275     1077      775     3016

Table 3: Accuracy, precision, recall (sensitivity), F-score and AUC of the classification of each dataset using NB coupled with frequent subgraphs (FSg) and then representative unsubstituted subgraphs (UnSubPatt)
         Accuracy         Precision        Recall           F-score          AUC
Dataset  FSg  UnSubPatt   FSg  UnSubPatt   FSg  UnSubPatt   FSg  UnSubPatt   FSg  UnSubPatt
DS1      0.62   0.78      0.61   0.69      0.70   0.90      0.64   0.78      0.64   0.78
DS2      0.80   0.90      0.86   0.94      0.74   0.86      0.79   0.89      0.79   0.89
DS3      0.86   0.94      0.89   1.00      0.86   0.89      0.86   0.94      0.86   0.94
DS4      0.79   0.98      0.86   0.92      0.70   0.98      0.76   0.94      0.76   0.94
A high selection rate, accompanied by a clear enhancement of the classification accuracy, is observed using UnSubPatt with all the substitution matrices, compared to the results using the whole set of frequent subgraphs. It is clearly noticeable that even with different substitution matrices, UnSubPatt shows relatively similar behavior and is able to select a small yet relevant subset of patterns. It is also worth mentioning that, for all the datasets, the best classification accuracy is obtained using Blosum62 and the best selection rate is achieved using Pam250. This is simply due to how distant the proteins within the same dataset are, since each substitution matrix was constructed to implicitly express a particular theory of evolution. Thus, choosing the appropriate substitution matrix can influence the outcome of the analysis.
5.3 Impact of Substitution Threshold
In our experiments, we used a substitution threshold (of
30%) to select the unsubstituted patterns from the set of
discovered frequent subgraphs. In this section, we study the
impact of variation of the substitution threshold on both the
number of selected subgraphs and the classification results.
To do so, we perform the same experiments while varying
the substitution threshold from 0% to 90% with a step-size
of 10. In order to check if the enhancements of the obtained
results are due to our selected features or to the classifier,
we use two other well-known classifiers namely the support
vector machine (SVM) and decision tree (C4.5) besides the
naı̈ve bayes (NB) classifier. We use the same protocol and
settings of the previous experiments. Figure 2 presents the
selection rate for different substitution thresholds and Figures 3, 4 and 5 illustrate the classification accuracy obtained
respectively using NB, SVM and C4.5 with each dataset.
The classification accuracy of the initial set of frequent subgraphs (gSpan, the line in red) is considered as a standard
value for comparison. Thus, the accuracy values of UnSubPatt (in blue) that are above the line of the standard value
are considered as gains, and those under the standard value
are considered as losses.
In Figure 2, we notice that UnSubPatt reduces considerably the number of subgraphs especially with lower substitution thresholds. In fact, the number of unsubstituted
patterns does not exceed 30% for all substitution thresholds
below 70%, and even reaches less than 1% in some cases.
This important reduction in the number of patterns comes
with a notable enhancement of the classification accuracies.
Figure 2: Rate of unsubstituted patterns from Ω
depending on the substitution threshold (τ ).
This fact is illustrated in Figures 3, 4 and 5 which show
that the unsubstituted patterns allow better classification
performance compared to the original set of frequent subgraphs. UnSubPatt scores very well with the three used
classifiers and even reaches full accuracy in some cases. This
confirms our assumptions and shows that our selection is reliable and contributes to the enhancement of the accuracy.
However, we believe that NB allows the most reliable evaluation because it performs a classification based on a global
and conditional evaluation of features, unlike SVM, which itself performs another attribute selection to select the support vectors, and unlike C4.5, which performs an attribute-by-attribute evaluation.
In the case of proteins, a substitution threshold of 0%
enables selecting subgraphs based only on their structure.
Precisely, UnSubPatt will select only one pattern from each
group of subgraphs that are structurally isomorphic. Based
on the experimental results, we believe that using these patterns is enough for a structural classification task since it
allows a fast selection, selects a very small number of subgraphs and performs very well on classification.
5.4 Size-based Distribution of Patterns
In this section, we study the distribution of patterns based on their size (number of edges). We try to check which sizes of patterns are most affected by the selection. Figures 6 and 7 plot the distribution of patterns for the original set
Table 4: Number of subgraphs (#SG) and accuracy of the classification of each dataset using NB coupled with frequent subgraphs (FSg) and then representative unsubstituted subgraphs using Blosum62 (UnSubPatt Blosum62), Blosum80 (UnSubPatt Blosum80) and Pam250 (UnSubPatt Pam250)
         FSg                  UnSubPatt Blosum62    UnSubPatt Blosum80    UnSubPatt Pam250
Dataset  #SG       Accuracy   #SG     Accuracy      #SG     Accuracy      #SG     Accuracy
DS1       799094     0.62      7291     0.78         7328     0.67         6137     0.68
DS2       258371     0.80     15898     0.90        15930     0.87        15293     0.87
DS3       114793     0.86     14713     0.94        14792     0.91        14363     0.93
DS4      1073393     0.79      9958     0.98        10417     0.90         9148     0.90
Figure 3: Classification accuracy by NB.
Figure 4: Classification accuracy by SVM.
Figure 6: Distribution of patterns of DS1 for all the
frequent subgraphs and for the representative unsubstituted ones with all the substitution thresholds
Figure 7: Distribution of patterns of DS2 for all the
frequent subgraphs and for the representative unsubstituted ones with all the substitution thresholds
of frequent subgraphs and for the final set of representative
unsubstituted subgraphs with all the substitution thresholds
using Blosum62. The downward tendency of UnSubPatt
using lower substitution thresholds and with respect to the
original set of frequent subgraph is very clear. In fact, UnSubPatt leans towards cutting off the peaks and flattening
the curves with lower substitution thresholds. Another interesting observation is that the curves are flattened in the
regions of small patterns as well as in those of big and dense
patterns. This demonstrates the effectiveness of UnSubPatt with both small and big patterns.
Figure 5: Classification accuracy by C4.5.
5.5 Comparison with other approaches
To objectively evaluate our approach, we compare it with current trends in subgraph selection. In Figure 8, we report the classification accuracy obtained using the representative unsubstituted patterns of UnSubPatt, together with those obtained using the patterns of other recent subgraph selection approaches from the literature, namely LEAP [28], gPLS [20], COM [16] and LPGBCMP [10] (described in the related works section). For UnSubPatt, we report the results obtained using the substitution matrix Blosum62, a minimum substitution threshold τ = 30% and SVM for classification. For LEAP+SVM, LEAP is used iteratively to discover a set of discriminative subgraphs with a leap length of 0.1; the discovered subgraphs are used as features to train SVM with a 5-fold cross-validation. COM is used with tp = 30% and tn = 0%. For gPLS, the frequency threshold is set to 30% and the best accuracies are reported for all the datasets among all the parameter combinations of m = 2, 4, 8, 16 and k = 2, 4, 8, 16, where m is the number of iterations and k is the number of patterns obtained per search. For LPGBCMP, threshold values of max_var = 1 and δ = 0.25 were used for feature consistency map building and for overlapping, respectively. The obtained results are shown in Figure 8.

Figure 8: Classification accuracy comparison with other pattern selection approaches.
The classification results displayed in Figure 8 show that
UnSubPatt allows a better classification than all the other
pattern selection methods in the four cases. Considering
only these results does not allow us to confirm that UnSubPatt would always outperform the considered methods. However, it does show that UnSubPatt is a very competitive and promising approach. It is also worth noting that these approaches are supervised and dedicated to classification, unlike UnSubPatt, which is an unsupervised approach.
This allows it to be used in classification as well as in other
mining tasks such as clustering and indexing.
5.6 Runtime Analysis
To study the variation of UnSubPatt's runtime with larger amounts of data, we use different sets of frequent patterns, from 10000 to 100000 with a step size of 10000. In Table 5, we report the runtime results for these pattern sets using three substitution thresholds.
Despite the complexity of the problem, which is due to the combinatorial test of substitution between subgraphs, our algorithm scales to higher amounts of data: with an increasing number of patterns, the runtime remains reasonable. The use of different substitution thresholds only slightly affected the runtime of UnSubPatt, since the number of selected patterns is comparable for all thresholds.
A possible way to make UnSubPatt run faster is parallelization. In fact, UnSubPatt can easily be parallelized, since it tests the substitution separately within each group of subgraphs having the same size and order. Hence, these groups can be distributed and treated separately in different processes.

Table 5: Runtime analysis of UnSubPatt with different substitution thresholds
Number of patterns   τ = 10%   τ = 30%   τ = 50%
 10000                  4s        4s        4s
 20000                  8s        8s       10s
 30000                 13s       13s       17s
 40000                 18s       18s       25s
 50000                 23s       23s       33s
 60000                 28s       28s       41s
 70000                 35s       35s       52s
 80000                 40s       42s       66s
 90000                 46s       49s       80s
100000                 53s       57s      136s
6. CONCLUSION
In this paper, we proposed a novel selection approach for mining a representative summary of the set of frequent subgraphs. Unlike existing methods that are based on the relations between patterns in the transaction space, our approach considers the distance between patterns in the pattern space. The proposed approach exploits specific domain knowledge, in the form of a substitution matrix, to select a subset of representative unsubstituted patterns from a given set of frequent subgraphs. It also considerably reduces the size of the initial set of subgraphs to obtain an interesting and representative one, enabling easier and more efficient further exploration. It is also worth mentioning that our approach can be used on graphs as well as on sequences and is not limited to classification tasks, but can help in other subgraph-based analyses such as indexing, clustering, visual inspection, etc.
Since the proposed approach is a filter approach, a promising future direction could be to find a way to integrate the
selection within the extraction process in order to directly
mine the representative patterns from data. Moreover, we
intend to use our approach in other classification contexts
and in other mining applications.
7. ACKNOWLEDGEMENT
This work is supported by a PhD grant from the French
Ministry of Higher Education to the first author.
8. REFERENCES
[1] S. Altschul, W. Gish, W. Miller, E. Myers, and
D. Lipman. Basic local alignment search tool. Journal
of Molecular Biology, 215:403–410, 1990.
[2] A. Andreeva, D. Howorth, J.-M. Chandonia, S. E.
Brenner, T. J. P. Hubbard, C. Chothia, and A. G.
Murzin. Data growth and its impact on the scop
database: new developments. Nucleic Acids Research,
36(1):D419–D425, 2008.
[3] L. Bartoli, P. Fariselli, and R. Casadio. The effect of
backbone on the small-world properties of protein
contact maps. Physical Biology, 4(4):L1+, 2007.
[4] H. M. Berman, J. D. Westbrook, Z. Feng, G. Gilliland,
T. N. Bhat, H. Weissig, I. N. Shindyalov, and P. E.
Bourne. The protein data bank. Nucleic Acids
Research, 28(1):235–242, 2000.
[5] V. Chaoji, M. A. Hasan, S. Salem, J. Besson, and
M. J. Zaki. Origami: A novel and effective approach
for mining representative orthogonal graph patterns.
Statistical Analysis and Data Mining, 1(2):67–84,
2008.
[6] H. Cheng, X. Yan, and J. Han. Mining graph patterns.
In Managing and Mining Graph Data, pages 365–392.
Springer, 2010.
[7] A. L. Cuff, I. Sillitoe, T. Lewis, A. B. Clegg,
R. Rentzsch, N. Furnham, M. Pellegrini-Calace, D. T.
Jones, J. M. Thornton, and C. A. Orengo. Extending
cath: increasing coverage of the protein structure
universe and linking structure with function. Nucleic
Acids Research, 39:420–426, 2011.
[8] S. R. Eddy. Where did the blosum62 alignment score
matrix come from? Nature Biotechnology, pages
1035–1036, 2004.
[9] K. Faust, P. Dupont, J. Callut, and J. van Helden.
Pathway discovery in metabolic networks by subgraph
extraction. Bioinformatics, 26(9):1211–1218, 2010.
[10] H. Fei and J. Huan. Boosting with structure
information in the functional space: an application to
graph classification. In ACM knowledge discovery and
data mining conference (KDD), pages 643–652, 2010.
[11] M. A. Hasan. Pattern summarization in pattern
mining. Encyclopedia of Data Warehousing and
Mining, (2nd Ed), 2008.
[12] M. A. Hasan and M. J. Zaki. Output space sampling
for graph patterns. PVLDB, 2(1):730–741, 2009.
[13] J. Huan, D. Bandyopadhyay, W. Wang, J. Snoeyink,
J. Prins, and A. Tropsha. Comparing graph
representations of protein structure for mining
family-specific residue-based packing motifs. Journal
of Computational Biology, 12(6):657–671, 2005.
[14] J. Huan, W. Wang, D. B, J. Snoeyink, J. Prins, and
A. Tropsha. Mining spatial motifs from protein
structure graphs. In International Conference on
Research in Computational Molecular Biology
(RECOMB), pages 308–315, 2004.
[15] J. Huan, W. Wang, and J. Prins. Efficient mining of
frequent subgraphs in the presence of isomorphism. In
IEEE International Conference on Data Mining
(ICDM), pages 549–552, 2003.
[16] N. Jin, C. Young, and W. Wang. Graph classification
based on pattern co-occurrence. In ACM International
Conference on Information and Knowledge
Management, pages 573–582, 2009.
[17] G. J. Kleywegt. Recognition of spatial motifs in
protein structures. Journal of molecular biology,
285(4):1887–1897, 1999.
[18] S. Nijssen and J. N. Kok. A quickstart in frequent
structure mining can make a difference. In ACM
knowledge discovery and data mining conference
(KDD), pages 647–652, 2004.
[19] R. Saidi, M. Maddouri, and E. Mephu Nguifo. Protein
sequences classification by means of feature extraction
with substitution matrices. BMC Bioinformatics,
11(1):175+, 2010.
[20] H. Saigo, N. Krämer, and K. Tsuda. Partial least
squares regression for graph mining. In ACM
knowledge discovery and data mining conference
(KDD), pages 578–586, 2008.
[21] H. Sun, A. Sacan, H. Ferhatosmanoglu, and Y. Wang.
Smolign: A spatial motifs-based protein multiple
structural alignment method. IEEE/ACM
Transactions on Computational Biology and
Bioinformatics, 9(1):249–261, 2012.
[22] M. Thoma, H. Cheng, A. Gretton, J. Han, H.-P.
Kriegel, A. Smola, L. Song, P. S. Yu, X. Yan, and
K. M. Borgwardt. Discriminative frequent subgraph
mining with optimality guarantees. Statistical Analysis
and Data Mining, 3(5):302–318, 2010.
[23] L. T. Thomas, S. R. Valluri, and K. Karlapalem.
Margin: Maximal frequent subgraph mining. ACM
Transactions on Knowledge Discovery from Data
(TKDD), 4(3):10:1–10:42, 2010.
[24] R. R. Vallabhajosyula, D. Chakravarti, S. Lutfeali,
A. Ray, and A. Raval. Identifying hubs in protein
interaction networks. PLoS ONE, 4(4):e5344, 2009.
[25] N. von Öhsen, I. Sommer, R. Zimmer, and
T. Lengauer. Arby: automatic protein structure
prediction using profile-profile alignment and
confidence measures. Bioinformatics,
20(14):2228–2235, 2004.
[26] I. H. Witten and E. Frank. Data Mining: Practical
Machine Learning Tools and Techniques. Morgan
Kaufmann, 2005.
[27] A. Woznica, P. Nguyen, and A. Kalousis. Model
mining for robust feature selection. In ACM knowledge
discovery and data mining conference (KDD), pages
913–921, 2012.
[28] X. Yan, H. Cheng, J. Han, and P. S. Yu. Mining
significant graph patterns by leap search. ACM
SIGMOD international conference on Management of
data, pages 433–444, 2008.
[29] X. Yan and J. Han. gSpan: Graph-based substructure pattern mining. In IEEE International Conference on Data Mining (ICDM), pages 721-724, 2002.
[30] X. Yan and J. Han. Closegraph: mining closed
frequent graph patterns. In ACM knowledge discovery
and data mining conference (KDD), pages 286–295,
2003.
Framework for state and unknown input estimation of linear time-varying systems ⋆

Peng Lu a, Erik-Jan van Kampen a, Cornelis C. de Visser a, Qiping Chu a
a Delft University of Technology, Kluyverweg 1, 2629HS Delft, The Netherlands

arXiv:1606.08090v1 [] 26 Jun 2016
Abstract
The design of unknown-input decoupled observers and filters requires the assumption of an existence condition in the literature.
This paper addresses an unknown input filtering problem where the existence condition is not satisfied. Instead of designing a
traditional unknown input decoupled filter, a Double-Model Adaptive Estimation approach is extended to solve the unknown
input filtering problem. It is proved that the state and the unknown inputs can be estimated and decoupled using the extended
Double-Model Adaptive Estimation approach without satisfying the existence condition. Numerical examples are presented
in which the performance of the proposed approach is compared to methods from literature.
Key words: Kalman filtering; state estimation; unknown input filtering; fault estimation; Double-Model Adaptive Estimation.
1 Introduction
Faults and model uncertainties such as disturbances can
be represented as unknown inputs. The problem of filtering in the presence of unknown inputs has received
intensive attention in the past three decades.
It is common to treat the unknown inputs as part of
the system state and then estimate the unknown inputs
as well as the system state [18]. This is an augmented
Kalman filter, whose computational load may become
excessive when the number of the unknown inputs is
comparable to the states of the original system [10].
Friedland [10] derived a two-stage Kalman filter which
decomposes the augmented filter into two reduced-order
filters. However, Friedland’s approach is only optimal in
the presence of a constant bias [18]. Hsieh and Chen derived an optimal two-stage Kalman filter which performance is also optimal for the case of a random bias [18].
On the other hand, unknown input filtering can be
achieved by making use of unbiased minimum-variance
? This paper was not presented at any IFAC meeting. Corresponding author Peng Lu. Tel. +31 152783466. Fax +31
152786480.
Email addresses: [email protected] (Peng Lu),
[email protected] (Erik-Jan van Kampen),
[email protected] (Cornelis C. de Visser),
[email protected] (Qiping Chu).
Preprint submitted to Automatica
estimation [16,21,5,14,15,3]. Kitanidis [21] first developed an unbiased recursive filter based on the assumption that no prior information about the unknown input
is available [12]. Hou and Patton [14] used an unknowninput decoupling technique and the innovation filtering
technique to derive a general form of unknown-input
decoupled filters [14,15]. Darouach, Zasadzinski and
Boutayeb [7] extended Kitanidis’ method using a parameterizing technique to derive an optimal estimator
filter. The problem of joint input and state estimation,
when the unknown inputs only appear in the system
equation, was addressed by Hsieh [15] and Gillijns and
De Moor [11]. Gillijns and De Moor [12] further proposed a recursive three-step filter for the case when the
unknown inputs also appear in the measurement equation. However, their approach requires the assumption
that the distribution matrix of the unknown inputs in
the measurement equation is of full rank. Cheng et al.
[4] proposed a global optimal filter which removed this
assumption, but this filter is limited to state estimation
[1]. Later, Hsieh [17] presented a unified approach to
design a specific globally optimal state estimator which
is based on the desired form of the distribution matrix
of the unknown input in the measurement equation [17].
However, all the above-mentioned filters require the assumption that an existence condition is satisfied. This
necessary condition is given by Hou and Patton [14] and
Darouach, Zasadzinski and Boutayeb [7], in the form
of rank condition (5). Hsieh [17] presents different decoupling approaches for different special cases. However,
these approaches also have to satisfy the existence condition (5). In some applications, such as that presented
in the current paper, the existence condition is not satisfied. Therefore, a traditional unknown input decoupled
filter can not be designed.
Recently, particle filters have also been applied to unknown input estimation [13,8,28]. These filters can cope with systems with non-Gaussian noise and have a number of applications, such as robot fault detection [2,9,30]. In this paper, the performance of unknown input estimation using particle filters is compared with that of our approach.

This paper proposes an extended Double-Model Adaptive Estimation (DMAE) approach, which can cope with the unknown input filtering problem when a traditional unknown input filter cannot be designed. The original DMAE approach, which was proposed by Lu et al. [22] for the estimation of unknown inputs in the measurement equation, is extended to allow estimation of unknown inputs which appear both in the system equation and in the measurement equation. The unknown inputs are augmented as system states and are modeled as random walk processes. The unknown inputs in the system equation are assumed to be Gaussian random processes whose covariances are estimated on-line. It is proved that the state and the unknown inputs can be estimated and decoupled without requiring the existence condition. Two illustrative examples are given to demonstrate the effectiveness of the proposed approach, with comparison to other methods from the literature such as the Robust Three-Step Kalman Filter (RTSKF) [12], the Optimal Two-Stage Kalman Filter (OTSKF) [18] and particle filters [13,8].

The structure of the paper is as follows: the preliminaries of the paper are given in Section 2, formulating the filtering problem when the existence condition for a traditional unknown input decoupled filter is not satisfied and generalizing the DMAE approach. In Section 3, the extension of the DMAE approach to the filtering problem when the unknown inputs appear both in the system equation and the measurement equation is presented. Furthermore, the on-line estimation of the covariance matrix of the unknown inputs is introduced. It is proved in Section 4 that the state and the unknown inputs can still be estimated and decoupled. In Section 5, two illustrative examples are given to show the performance of the proposed approach in comparison with some existing unknown-input decoupled filters. Finally, Section 6 concludes the paper.

2 Problem formulation and the DMAE approach

This section presents the problem formulation and the DMAE approach.

2.1 Problem formulation

Consider the following linear time-varying system:

x_{k+1} = A_k x_k + B_k u_k + E_k d_k + w_k    (1)
y_k = H_k x_k + F_k f_k + v_k                  (2)

where x_k ∈ R^n represents the system states, y_k ∈ R^m the measurements, and d_k and f_k are the unknown inputs; specifically, d_k ∈ R^{n_d} are the disturbances and f_k ∈ R^{n_f} are the output faults. w_k and v_k are assumed to be uncorrelated zero-mean white noise sequences with covariances Q_k and R_k, respectively. The known input u_k is omitted in the following discussion because it does not affect the filter design [14]. Without loss of generality, we consider the case n = m = n_d = n_f and rank H_k = rank E_k = rank F_k = m, which implies that all the states are influenced by d_k and f_k. It should be noted that the approach proposed in this paper can be readily extended to the case when n ≠ m or rank H_k ≠ rank E_k.

The unknown inputs are collected in d'_k, i.e., d'_k = [d_k; f_k] ∈ R^{n_{d'}}. Then, model (1)-(2) can be reformulated into the general form given in Hou and Patton [14] and Darouach, Zasadzinski and Boutayeb [7]:

x_{k+1} = A_k x_k + E'_k d'_k + w_k    (3)
y_k = H_k x_k + F'_k d'_k + v_k        (4)

In this paper, E'_k = [E_k 0] and F'_k = [0 F_k]. The existence of an unknown-input decoupled filter requires the following existence condition [14,7]:

rank [ F'_k  H_k E'_k ;  0  F'_k ] = rank F'_k + rank [ E'_k ; F'_k ]    (5)

In our case, since rank H_k = m, the left-hand side of condition (5) is 2m while the right-hand side is 3m. Therefore, the above existence condition does not hold, which means that none of the unknown-input filters mentioned in the introduction can be directly implemented.

In this paper, we consider the consecutive bias fault estimation of a system subjected to disturbances, as described in Eqs. (1) and (2). Although the existence condition for designing a traditional unknown input decoupled filter is not satisfied, it will be shown that the unknown inputs can still be decoupled using an extended DMAE approach.

Remark 1. The model described by Eqs. (1) and (2) is useful for applications where the disturbances appear in the system equation and the faults appear in the measurement equation, such as bias fault estimation in aircraft air data sensors [22].
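For concreteness, the following Python sketch (ours, not part of the original paper) simulates the model of Eqs. (1)-(2) for given matrices and input/disturbance/fault sequences; all function and variable names are placeholders.

    import numpy as np

    def simulate(A, B, E, H, F, Q, R, u, d, f, x0, rng=None):
        """Simulate x_{k+1} = A x_k + B u_k + E d_k + w_k and y_k = H x_k + F f_k + v_k."""
        rng = rng or np.random.default_rng(0)
        n, m = A.shape[0], H.shape[0]
        x, xs, ys = x0.copy(), [], []
        for k in range(len(u)):
            v = rng.multivariate_normal(np.zeros(m), R)   # measurement noise, covariance R_k
            ys.append(H @ x + F @ f[k] + v)
            xs.append(x.copy())
            w = rng.multivariate_normal(np.zeros(n), Q)   # process noise, covariance Q_k
            x = A @ x + B @ u[k] + E @ d[k] + w
        return np.array(xs), np.array(ys)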
2.2 The DMAE approach

The DMAE1 approach proposed in Lu et al. [22] considers the model (1)-(2) for d_k = 0 (n_d = 0). It is referred to as the DMAE approach in this paper and is generalized in the following.

The DMAE [22], a modification of the multiple-model-based approach [23,24], is composed of two Kalman filters (KFs) operating in parallel: a no-fault (or fault-free) filter and an augmented fault filter. These two filters are based on two modes of the system: fault-free (f_k = 0) and faulty (f_k ≠ 0). The two filters use the same vector of measurements Y and vector of inputs u, and are based on the same equations of motion, while each hypothesizes a different fault scenario. The state vector of the no-fault filter x_nf and that of the augmented fault filter x_af are as follows:

x_{nf,k} = x_k,   x_{af,k} = [x_{nf,k}; f_k]    (6)

where "nf" means no fault and "af" means augmented fault. It can be noted that the state vector of the augmented fault filter is the state vector of the no-fault filter augmented with the fault vector f_k.

At time step k, each of the filters produces a state estimate x̂⁰_i(k) and a vector of innovations γ_i(k). The principle is that the KF which produces the most well-behaved innovations contains the model which best matches the true fault scenario [23,24]. The block diagram of the DMAE is given in Fig. 1.

[Fig. 1. Block diagram for the DMAE approach: the no-fault filter (based on x̂⁰_nf) and the augmented fault filter (based on x̂⁰_af) both process the measurements Y and the input u; their innovations γ_nf and γ_af feed a hypothesis-conditional probability evaluator producing p_nf and p_af, and a selective reinitialization block produces x̂_nf(k+1) and x̂_af(k+1).]

A hypothesis test uses the innovation γ_i(k) and the innovation covariance matrix C_i(k) of the filters in order to assign a conditional probability to each of the filters. Let a denote the fault scenario of the system. Define the hypothesis conditional probability p_i(k) as the probability that a equals a_i for i = 1, 2 (a_1 = nf, a_2 = af), conditioned on the measurement history up to time step k:

p_i(k) = Pr[a = a_i | Y(k) = Y_k],   i = 1, 2    (7)

Then the conditional probability of the two filters can be updated recursively using the following equation:

p_i(k) = f_{y_k|a,Y_{k−1}}(y_k|a_i, Y_{k−1}) p_i(k−1) / Σ_{j=1}^{2} f_{y_k|a,Y_{k−1}}(y_k|a_j, Y_{k−1}) p_j(k−1),   i = 1, 2    (8)

where Y_{k−1} is the measurement history vector, defined as Y_{k−1} = {y(1), y(2), ..., y(k−1)}, and f_{y_k|a,Y_{k−1}}(y_k|a_i, Y_{k−1}) is the probability density function, which is given by the following Gaussian form [24]:

f_{y(k)|a,Y_{k−1}}(y(k)|a_i, Y_{k−1}) = β_i(k) exp{−γ_i^T(k) C_i^{−1}(k) γ_i(k) / 2}    (9)

where

β_i(k) = 1 / ((2π)^{m/2} |C_i(k)|^{1/2}),   i = 1, 2    (10)

In Eq. (10), |·| denotes the determinant of the covariance matrix C_i(k), which is computed by the KF at time step k. The filter which matches the fault scenario produces the smallest innovation, i.e., the difference between the estimated measurement and the true measurement. Therefore, the conditional probability of the filter which matches the true fault scenario is the highest of the two filters. After the computation of the conditional probabilities, the state estimate x̂(k) of the system can be generated as the probability-weighted sum of the state estimates x̂_i(k) of the two filters:

x̂(k) = Σ_{i=1}^{2} x̂_i(k) p_i(k) = x̂_nf(k) p_nf(k) + x̂_af(k) p_af(k).    (11)

The fault is only estimated by the augmented fault filter, and the estimate is denoted as f̂(k). The probability-weighted fault estimate of the DMAE approach, f̄(k), is calculated as follows:

f̄(k) = f̂(k) p_af(k)    (12)

The core of the DMAE approach is selective reinitialization. The flow chart of the selective reinitialization algorithm is presented in Fig. 2. In the algorithm, x̂⁰_nf (x̂⁰_af) and x̂_nf (x̂_af) denote the state estimate of the no-fault (augmented fault) filter before and after the reinitialization, respectively. P⁰_nf (P⁰_af) and P_nf (P_af) denote the covariance of the state estimation error of the no-fault (augmented fault) filter before and after the reinitialization, respectively. x̂_t, p_t and P_t are the vectors which contain the state estimates, the model probabilities and the state-estimation-error covariance matrices of the no-fault filter and the fault filter, respectively. i_max,k is the index of the model with the maximum model probability at time step k. x^f_0 and P^f_0 are the parameters used for the initialization of the fault filter.
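As an illustration (ours, not from the paper), the probability update of Eqs. (8)-(10) and the probability-weighted estimates of Eqs. (11)-(12) can be sketched in Python as follows; the function and variable names are placeholders.

    import numpy as np

    def dmae_probability_update(p_prev, innovations, covariances):
        """One step of Eqs. (8)-(10): update the conditional probabilities of the
        no-fault (i=0) and augmented fault (i=1) filters from their innovations."""
        likelihoods = []
        for gamma, C in zip(innovations, covariances):
            m = gamma.shape[0]
            beta = 1.0 / ((2 * np.pi) ** (m / 2) * np.sqrt(np.linalg.det(C)))          # Eq. (10)
            likelihoods.append(beta * np.exp(-0.5 * gamma @ np.linalg.solve(C, gamma)))  # Eq. (9)
        p = np.array(likelihoods) * p_prev
        return p / p.sum()                                                              # Eq. (8)

    def weighted_estimates(p, x_nf, x_af, f_af):
        """Probability-weighted state and fault estimates, Eqs. (11)-(12); x_af is
        assumed to contain the same leading state components as x_nf."""
        x_hat = p[0] * x_nf + p[1] * x_af[:x_nf.shape[0]]   # Eq. (11)
        f_bar = p[1] * f_af                                 # Eq. (12)
        return x_hat, f_bar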
[Fig. 2. Flow chart of the selective reinitialization algorithm (n refers to the dimension of x̂_nf). At each step the algorithm forms p_t,1 = p_nf, p_t,2 = p_af; x̂_t,1 = x̂⁰_nf,j, x̂_t,2 = x̂⁰_af,j (j = 1:n); P_t,1 = P⁰_nf,jj, P_t,2 = P⁰_af,jj (j = 1:n); finds i_max,k such that p_t,i_max,k = max(p_t); if i_max,k = 1 it reinitializes the fault filter with x̂_af = [x̂_t,i_max,k; x^f_0], P_af = diag(P_t,i_max,k, P^f_0); otherwise it reinitializes the no-fault filter with x̂_nf = x̂_t,i_max,k, P_nf = P_t,i_max,k.]

The DMAE approach can achieve an unbiased estimate of x_k and f_k when d_k = 0 [22]. However, when d_k ≠ 0, the unknown-input filtering problem becomes more challenging. Since the existence condition (5) is no longer satisfied, traditional unknown-input decoupled filters cannot be designed.

3 Extension of the DMAE approach

In this section, the DMAE is extended to the case when d_k ≠ 0. In order to achieve this, the state vectors of the no-fault filter and the augmented fault filter are changed to:

x̄_{nf,k} = [x_k; d_k],   x̄_{af,k} = [x̄_{nf,k}; f_k]    (13)

where x̄_{nf,k} ∈ R^{n+n_d} and x̄_{af,k} ∈ R^{n+n_d+n_f}. The state vector of the augmented fault filter is that of the no-fault filter augmented with the fault vector. Therefore, the state vector of the no-fault filter can be inferred from that of the augmented fault filter and vice versa.

The random walk process provides a useful and general tool for the modeling of unknown time-varying processes [10,27,15]. d_k can be modeled by a random walk process [27,15] as:

d_{k+1} = d_k + w_{d,k}    (14)

where w_{d,k} is a white noise sequence with covariance E{w_{d,k} w_{d,l}^T} = Q^d_k δ_{kl}. f_k is also modeled as a random walk process:

f_{k+1} = f_k + w_{f,k}    (15)

where w_{f,k} is a white noise sequence with covariance E{w_{f,k} w_{f,l}^T} = Q^f_k δ_{kl}. Then, the system model and measurement model of the no-fault filter can be described as follows:

x̄_{nf,k+1} = Ā_{nf,k} x̄_{nf,k} + w̄_{nf,k}    (16)
y_k = H̄_{nf,k} x̄_{nf,k} + v_k                (17)

where

Ā_{nf,k} = [ A_k  E_k ; 0  I ],   H̄_{nf,k} = [H_k  0],   w̄_{nf,k} = [w_k; w_{d,k}]    (18)

The model of the augmented fault filter is as follows:

x̄_{af,k+1} = Ā_{af,k} x̄_{af,k} + w̄_{af,k}    (19)
y_k = H̄_{af,k} x̄_{af,k} + v_k                (20)

where

Ā_{af,k} = [ Ā_{nf,k}  0 ; 0  I ],   H̄_{af,k} = [H̄_{nf,k}  F_k],   w̄_{af,k} = [w̄_{nf,k}; w_{f,k}]    (21)

Since the difference from the DMAE in Lu et al. [22] is the augmentation of d_k, only the covariance related to w_{d,k}, i.e., Q^d_k, is discussed below. It should be noted that Q^d_k is usually unknown, and the optimality of the filter can be compromised by a poor choice of Q^d_k [21,15]. If Q^d_k is not properly chosen, it can influence the estimation of d_k as well as x_k.

This paper proposes a method to adapt Q^d_k by making use of the augmented fault filter of the DMAE approach. To compensate for the effect of a bad choice of Q^d_k on the estimation of x_k, the system noise vector w̄_{nf,k} in Eqs. (16), (18) and (21) is modified to:

w̄_{nf,k} = [w_k + w⁰_k; w_{d,k}]    (22)

where w⁰_k is the noise used to compensate for the effect of a bad choice of Q^d_k on the estimation of x_k. In this paper, we approximate w⁰_k by E_k w_{d,k}. Therefore, w̄_{nf,k} becomes

w̄_{nf,k} = [w_k + E_k w_{d,k}; w_{d,k}]    (23)
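The augmented filter models of Eqs. (16)-(23) can be assembled numerically as in the following Python sketch (ours, for illustration only); the covariance structure written for w̄_nf,k assumes that w_k and w_d,k are independent, and all names are placeholders.

    import numpy as np

    def nofault_model(A, E, H, Q, Qd):
        """No-fault filter matrices of Eqs. (16)-(18) with the adapted noise of Eq. (23)."""
        n, nd = A.shape[0], E.shape[1]
        A_nf = np.block([[A, E], [np.zeros((nd, n)), np.eye(nd)]])            # Eq. (18)
        H_nf = np.hstack([H, np.zeros((H.shape[0], nd))])
        # covariance of w̄_nf,k = [w_k + E_k w_d,k ; w_d,k], Eq. (23)
        Q_nf = np.block([[Q + E @ Qd @ E.T, E @ Qd], [Qd @ E.T, Qd]])
        return A_nf, H_nf, Q_nf

    def fault_model(A_nf, H_nf, Q_nf, F, Qf):
        """Augmented fault filter matrices of Eqs. (19)-(21)."""
        N, nf = A_nf.shape[0], F.shape[1]
        A_af = np.block([[A_nf, np.zeros((N, nf))], [np.zeros((nf, N)), np.eye(nf)]])
        H_af = np.hstack([H_nf, F])
        Q_af = np.block([[Q_nf, np.zeros((N, nf))], [np.zeros((nf, N)), Qf]])
        return A_af, H_af, Q_af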
Let x̄̂_{af,k−1|k−1} denote the unbiased estimate of x̄_{af,k−1} given measurements up to time k−1, and let x̂_{k−1|k−1}, d̂_{k−1|k−1} and f̂_{k−1|k−1} denote the estimates of x_{k−1}, d_{k−1} and f_{k−1}, respectively. The innovation of the augmented fault filter is:

γ_{af,k} = y_k − H̄_{af,k} x̄̂_{af,k|k−1}
         = H_k A_{k−1} x̃_{k−1|k−1} + H_k E_{k−1} d̃_{k−1|k−1} + F_k f̃_{k−1|k−1} + H_k w_{k−1} + H_k E_{k−1} w_{d,k−1} + F_k w_{f,k−1} + v_k    (24)

with

x̃_{k−1|k−1} := x_{k−1} − x̂_{k−1|k−1}    (25)
d̃_{k−1|k−1} := d_{k−1} − d̂_{k−1|k−1}    (26)
f̃_{k−1|k−1} := f_{k−1} − f̂_{k−1|k−1}    (27)

Therefore, the innovation covariance of the augmented fault filter is:

C_{af,k} = E{γ_{af,k} γ_{af,k}^T}
         = H_k A_{k−1} P^x_{k−1|k−1} A_{k−1}^T H_k^T + H_k E_{k−1} P^d_{k−1|k−1} E_{k−1}^T H_k^T + F_k P^f_{k−1|k−1} F_k^T
           + H_k A_{k−1} P^{xd}_{k−1|k−1} E_{k−1}^T H_k^T + H_k A_{k−1} P^{xf}_{k−1|k−1} F_k^T
           + H_k E_{k−1} P^{dx}_{k−1|k−1} A_{k−1}^T H_k^T + H_k E_{k−1} P^{df}_{k−1|k−1} F_k^T
           + F_k P^{fx}_{k−1|k−1} A_{k−1}^T H_k^T + F_k P^{fd}_{k−1|k−1} E_{k−1}^T H_k^T + R_k
           + H_k Q_{k−1} H_k^T + H_k E_{k−1} Q^d_{k−1} E_{k−1}^T H_k^T + F_k Q^f_k F_k^T    (28)

where the covariance matrices are defined as follows:

P^x_{k|k} := E[x̃_{k|k} x̃_{k|k}^T],   P^d_{k|k} := E[d̃_{k|k} d̃_{k|k}^T],   P^f_{k|k} := E[f̃_{k|k} f̃_{k|k}^T],
P^{xd}_{k|k} := E[x̃_{k|k} d̃_{k|k}^T],   P^{dx}_{k|k} := E[d̃_{k|k} x̃_{k|k}^T],   P^{xf}_{k|k} := E[x̃_{k|k} f̃_{k|k}^T],
P^{fx}_{k|k} := E[f̃_{k|k} x̃_{k|k}^T],   P^{df}_{k|k} := E[d̃_{k|k} f̃_{k|k}^T],   P^{fd}_{k|k} := E[f̃_{k|k} d̃_{k|k}^T].

The actual C_{af,k} is approximated as follows [26,29]:

Ĉ_{af,k} = (1/N) Σ_{j=k−N+1}^{k} γ_{af,j} γ_{af,j}^T    (29)

Q^d_k can then be approximated by the main diagonal of

E_{k−1}^{−1} H_k^{−1} Q̃_k (H_k^T)^{−1} (E_{k−1}^T)^{−1}    (30)

where Q̃_k is a diagonal matrix defined as:

Q̃_k := diag(max{0, Q̂_{k,11}}, ..., max{0, Q̂_{k,mm}})    (31)

and Q̂_{k,jj}, j = 1, 2, ..., m, is the jth diagonal element of Q̂_k, which is given by:

Q̂_k = Ĉ_{af,k} − H_k Q_{k−1} H_k^T − F_k Q^f_k F_k^T − R_k    (32)

The restriction Q̃_{k,jj} ≥ 0, j = 1, 2, ..., m, in Eq. (31) preserves the properties of a variance [19].

4 Unknown input decoupled filtering

This section proves that unknown input decoupled filtering can be achieved using the extended DMAE approach, which does not need to satisfy the existence condition (5). Let l (l ≥ 1) denote the time step when the first fault occurs and l_e the time step when the first fault is removed, which means f_k = 0 when k < l and f_k ≠ 0 when l ≤ k ≤ l_e. Without loss of generality, it will be proven that f_k can be estimated when k ≤ l_e.

4.1 Unknown input estimation during k < l

Theorem 1 During k < l, an unbiased estimate of d_k can be achieved by the fault-free filter of the extended DMAE approach.

PROOF. When k < l, f_k = 0. The fault-free model matches the true fault scenario while the augmented fault filter does not. Therefore, according to the DMAE approach, i_max,k = 1 during this time period. The system model during this period is as follows:

x_{k+1} = A_k x_k + E_k d_k + w_k    (33)
y_k = H_k x_k + v_k                  (34)

Under this situation, d_k can be estimated using the fault-free filter, whose convergence condition will be discussed later. □

The estimation of d_k and f_k when l ≤ k ≤ l_e is discussed in the following.

4.2 Unknown input estimation at k = l

For the sake of readability, the subscript "af" will be discarded for the remainder of the section. All variables with a bar on top in the remainder of this section refer to the augmented fault filter.
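For illustration (ours, not from the paper), the covariance adaptation of Eqs. (29)-(32) could be implemented as follows for the square, full-rank case considered here; the function and argument names are placeholders.

    import numpy as np

    def adapt_Qd(innovation_history, H, E, F, Q_prev, Qf, R):
        """Estimate Q^d_k from the last N innovations of the augmented fault filter,
        following Eqs. (29)-(32); E and H are assumed square and invertible."""
        N = len(innovation_history)
        C_hat = sum(np.outer(g, g) for g in innovation_history) / N                # Eq. (29)
        Q_hat = C_hat - H @ Q_prev @ H.T - F @ Qf @ F.T - R                        # Eq. (32)
        Q_tilde = np.diag(np.maximum(np.diag(Q_hat), 0.0))                        # Eq. (31)
        M = (np.linalg.inv(E) @ np.linalg.inv(H) @ Q_tilde
             @ np.linalg.inv(H.T) @ np.linalg.inv(E.T))                           # Eq. (30)
        return np.diag(np.diag(M))   # keep only the main diagonal, as in the paper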
Using the DMAE approach, the Kalman gain K̄_l can be partitioned as follows:

K̄_l = [K^x_l; K^d_l; K^f_l]    (35)

where K^x_l, K^d_l and K^f_l are the Kalman gains associated with x_k, d_k and f_k, respectively.

Lemma 2 Let x̂_{l−1|l−1} and d̂_{l−1|l−1} be unbiased. If x^f_0 is chosen to be 0 or sufficiently small, then f_l can be estimated by the augmented fault filter if and only if K^f_l satisfies

K^f_l F_l = I.    (36)

PROOF. The innovation of the augmented filter is

γ̄_l = e_l + F_l f_l    (37)

where e_l is defined as

e_l := H_l A_{l−1} x̃_{l−1|l−1} + H_l E_{l−1} d̃_{l−1|l−1} + H_l w_{l−1} + H_l E_{l−1} w_{d,l−1} + v_l    (38)

Since x̂_{l−1|l−1} and d̂_{l−1|l−1} are unbiased (this can be achieved by the DMAE1 of Lu et al. [22] since f_k = 0 when k < l), E[e_l] = 0. Consequently, the expectation of γ̄_l is:

E[γ̄_l] = F_l f_l.    (39)

The estimation of the fault is given by

f̂_{l|l} = f̂_{l|l−1} + K^f_l γ̄_l = f̂_{l−1|l−1} + K^f_l γ̄_l    (40)

Since i_max,k = 1 when k < l, according to the flow chart of the selective reinitialization algorithm given in Fig. 2, Eq. (40) can be further written as

f̂_{l|l} = x^f_0 + K^f_l γ̄_l    (41)

Substituting (37) into (41) yields

f̂_{l|l} = K^f_l F_l f_l + K^f_l e_l    (42)

Consequently, the expectation of f̂_{l|l} is

E[f̂_{l|l}] = E[K^f_l F_l f_l].    (43)

Therefore, it can be concluded that f_l can be estimated if and only if K^f_l satisfies

K^f_l F_l = I.    (44)    □

Theorem 3 Let x̂_{l−1|l−1} and d̂_{l−1|l−1} be unbiased. Then f_l can be estimated by the augmented fault filter of the DMAE approach by choosing a sufficiently large P^f_0 and a sufficiently small x^f_0.

PROOF. Define the following covariance matrix:

P̄_{l−1|l−1} := E[x̄̃_{l−1|l−1} x̄̃_{l−1|l−1}^T]

where x̄̃_{l−1|l−1} = x̄_{l−1} − x̄̂_{l−1|l−1}. Due to the selective reinitialization algorithm given in Fig. 2, P^f_{l−1|l−1} = P^f_0. Therefore, the covariance of the state prediction error P̄_{l|l−1} can be computed and partitioned as follows:

P̄_{l|l−1} = Ā_{l−1} [ P^x_{l−1|l−1}  P^{xd}_{l−1|l−1}  0 ; P^{dx}_{l−1|l−1}  P^d_{l−1|l−1}  0 ; 0  0  P^f_0 ] Ā_{l−1}^T
            + [ Q_{l−1} + E_{l−1} Q^d_{l−1} E_{l−1}^T   E_{l−1} Q^d_{l−1}   0 ; Q^d_{l−1} E_{l−1}^T   Q^d_{l−1}   0 ; 0   0   Q^f_{l−1} ]    (45)
          = [ P^x_{l|l−1}  P^{xd}_{l|l−1}  0 ; P^{dx}_{l|l−1}  P^d_{l|l−1}  0 ; 0  0  P^f_{l|l−1} ]    (46)

where

P^x_{l|l−1} := A_{l−1} P^x_{l−1|l−1} A_{l−1}^T + E_{l−1} P^d_{l−1|l−1} E_{l−1}^T + A_{l−1} P^{xd}_{l−1|l−1} E_{l−1}^T + E_{l−1} P^{dx}_{l−1|l−1} A_{l−1}^T + Q_{l−1} + E_{l−1} Q^d_{l−1} E_{l−1}^T
P^d_{l|l−1} := P^d_{l−1|l−1} + Q^d_{l−1}
P^{xd}_{l|l−1} := A_{l−1} P^{xd}_{l−1|l−1} + E_{l−1} P^d_{l−1|l−1} + E_{l−1} Q^d_{l−1}
P^{dx}_{l|l−1} := P^{dx}_{l−1|l−1} A_{l−1}^T + P^d_{l−1|l−1} E_{l−1}^T + Q^d_{l−1} E_{l−1}^T
P^f_{l|l−1} := P^f_0 + Q^f_{l−1}
Define

C̄_l := H̄_l P̄_{l|l−1} H̄_l^T + R_l.    (47)

Substituting Eqs. (21) and (46) into the above equation, it follows that

C̄_l = H_l P^x_{l|l−1} H_l^T + F_l P^f_{l|l−1} F_l^T + R_l    (48)

If P^f_0 is chosen sufficiently large, then P^f_{l|l−1} ≈ P^f_0 and C̄_l ≈ F_l P^f_0 F_l^T. It follows that the Kalman gain of the augmented filter can be calculated and partitioned as

K̄_l = P̄_{l|l−1} H̄_l^T C̄_l^{−1} = [ P^x_{l|l−1} H_l^T C̄_l^{−1} ; P^{dx}_{l|l−1} H_l^T C̄_l^{−1} ; P^f_{l|l−1} F_l^T C̄_l^{−1} ]

Therefore, K^f_l = P^f_{l|l−1} F_l^T C̄_l^{−1} ≈ P^f_0 F_l^T (F_l P^f_0 F_l^T)^{−1} = F_l^{−1}. It follows from Lemma 2 that f_l can be estimated. □

4.3 Unknown input estimation during l < k ≤ l_e

Theorem 4 Provided that f_k has been estimated at k = l, d_k can be estimated by the augmented fault filter of the extended DMAE approach.

PROOF. During this period, the augmented fault model matches the true fault scenario. Therefore, i_max,k = 2, which means that the fault-free filter is reinitialized by the fault filter during this period. Since this paper considers bias faults, f_k is constant for l < k ≤ l_e. It can be inferred that the model of the fault filter is equivalent to:

x_{k+1} = A_k x_k + E_k d_k + w_k    (49)
y_k = H_k x_k + F_k f̂_{l|l} + v_k    (50)

As can be seen, the only unknown input is d_k, since the fault filter treats f_k as a known input during this period. Therefore, during this period, we can set:

f̂_{k|k} = f̂_{l|l},   P^f_{k|k} = P^f_{l|l},   l < k ≤ l_e    (51)

Define

x̂*_{k|k−1} := [x̂_{k|k−1}; d̂_{k|k−1}],   P*_{k|k−1} := [ P^x_{k|k−1}  P^{xd}_{k|k−1} ; P^{dx}_{k|k−1}  P^d_{k|k−1} ]    (52)

x̂*_{k|k} := [x̂_{k|k}; d̂_{k|k}],   P*_{k|k} := [ P^x_{k|k}  P^{xd}_{k|k} ; P^{dx}_{k|k}  P^d_{k|k} ],   K*_k := [K^x_k; K^d_k]    (53)

Then

x̄̂_{k|k−1} = [x̂*_{k|k−1}; f̂_{l|l}],   P̄_{k|k−1} = [ P*_{k|k−1}  0 ; 0  P^f_{l|l} ]    (54)

x̄̂_{k|k} = [x̂*_{k|k}; f̂_{l|l}],   P̄_{k|k} = [ P*_{k|k}  0 ; 0  P^f_{l|l} ],   K̄_k = [K*_k; 0]    (55)

where x̂*_{k|k}, P*_{k|k} and K*_k are updated by the normal Kalman filtering procedure. Since a known input does not affect the design of a filter [14], the convergence condition of this fault filter is the same as that of the fault-free filter based on Eqs. (33) and (34). Therefore, d_k can be estimated using the augmented fault filter under the same condition as for the model described by Eqs. (33) and (34). □

4.4 Error analysis

In the previous sections, it is assumed that x̂_{l−1|l−1} and d̂_{l−1|l−1} are unbiased. We now analyze the estimation error of f_l when x̂_{l−1|l−1} and d̂_{l−1|l−1} are biased.

Through Eq. (44), Eq. (42) can be further rewritten as

f̂_{l|l} = f_l + F_l^{−1} e_l    (56)

Substituting Eq. (38) into Eq. (56), it follows that

f̂_{l|l} = f_l + F_l^{−1} (H_l A_{l−1} x̃_{l−1|l−1} + H_l E_{l−1} d̃_{l−1|l−1} + H_l w_{l−1} + H_l E_{l−1} w_{d,l−1} + v_l)    (57)

The estimation error of f_l as a function of x̃_{l−1|l−1} and d̃_{l−1|l−1} can therefore be obtained as follows:

f̃_{l|l} = f_l − f̂_{l|l}    (58)
        = −F_l^{−1} (H_l A_{l−1} x̃_{l−1|l−1} + H_l E_{l−1} d̃_{l−1|l−1} + H_l w_{l−1} + H_l E_{l−1} w_{d,l−1} + v_l)    (59)
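The claim of Lemma 2 and Theorem 3 can be checked numerically. The following Python sketch (ours, using arbitrary illustrative matrices rather than values from the paper) builds the predicted covariance as in Eqs. (45)-(47) and verifies that K^f_l F_l ≈ I once P^f_0 is chosen large.

    import numpy as np

    rng = np.random.default_rng(1)
    n = m = nd = nf = 2
    A, E = rng.normal(size=(n, n)), rng.normal(size=(n, nd))
    H, F = np.eye(m), np.eye(m)
    Q = Qd = 0.01 * np.eye(n); Qf = np.zeros((nf, nf)); R = 0.01 * np.eye(m)
    Pf0 = 1e6 * np.eye(nf)                     # "sufficiently large" P^f_0
    Px = Pd = Pxd = 0.01 * np.eye(n)           # illustrative filtered covariances at l-1

    A_bar = np.block([[A, E, np.zeros((n, nf))],
                      [np.zeros((nd, n)), np.eye(nd), np.zeros((nd, nf))],
                      [np.zeros((nf, n + nd)), np.eye(nf)]])
    P_prev = np.block([[Px, Pxd, np.zeros((n, nf))],
                       [Pxd.T, Pd, np.zeros((nd, nf))],
                       [np.zeros((nf, n + nd)), Pf0]])
    Q_bar = np.block([[Q + E @ Qd @ E.T, E @ Qd, np.zeros((n, nf))],
                      [Qd @ E.T, Qd, np.zeros((nd, nf))],
                      [np.zeros((nf, n + nd)), Qf]])
    P_pred = A_bar @ P_prev @ A_bar.T + Q_bar          # Eqs. (45)-(46)
    H_bar = np.hstack([H, np.zeros((m, nd)), F])
    C_bar = H_bar @ P_pred @ H_bar.T + R               # Eq. (47)
    K_bar = P_pred @ H_bar.T @ np.linalg.inv(C_bar)
    K_f = K_bar[n + nd:, :]                            # fault-gain block of Eq. (35)
    print(np.allclose(K_f @ F, np.eye(nf), atol=1e-3))  # True: K^f_l F_l ≈ I, as in Lemma 2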
If x̃_{l−1|l−1} and d̃_{l−1|l−1} are unbiased, the expectation of f̃_{l|l} is zero, which means the fault estimate is unbiased. If x̃_{l−1|l−1} and d̃_{l−1|l−1} are biased, assume

\underline{a} I ≤ A_{l−1} ≤ \bar{a} I,   \underline{f} I ≤ F_l^{−1} ≤ \bar{f} I,    (60)
\underline{h} I ≤ H_{l−1} ≤ \bar{h} I,   \underline{e} I ≤ E_{l−1} ≤ \bar{e} I,    (61)
\underline{e}_x I ≤ x̃_{l−1|l−1} ≤ \bar{e}_x I,   \underline{e}_d I ≤ d̃_{l−1|l−1} ≤ \bar{e}_d I,    (62)
\underline{w} I ≤ w_{l−1} ≤ \bar{w} I,   \underline{w}_d I ≤ w_{d,l−1} ≤ \bar{w}_d I,    (63)
\underline{v} I ≤ v_{l−1} ≤ \bar{v} I.    (64)

Then it follows that the fault estimation error is bounded as:

\underline{f}(\underline{h}(\underline{a}\underline{e}_x + \underline{e}\underline{e}_d + \underline{w} + \underline{e}\underline{w}_d) + \underline{v}) ≤ f̃_{l|l} ≤ \bar{f}(\bar{h}(\bar{a}\bar{e}_x + \bar{e}\bar{e}_d + \bar{w} + \bar{e}\bar{w}_d) + \bar{v})    (65)

4.5 Discussion

For the model given in Eqs. (33) and (34), the convergence condition for the time-invariant case has been given by Darouach et al. [6] as follows:

rank [ zI − A   −E ; H   0 ] = n + n_d,   ∀z ∈ C, |z| ≥ 1    (66)

This convergence condition is also required by traditional unknown input filters such as those in Darouach, Zasadzinski and Boutayeb [7] and Cheng et al. [4].

The system considered in this paper is linear and the noise is assumed to be Gaussian. If the system is nonlinear, the DMAE should be extended using Unscented Kalman Filters [20,22] or particle filters [13,8,28]. If the system noise is non-Gaussian, it should be extended by making use of particle filters [13,8,28]. However, this is outside the scope of the present paper.

5 Illustrative examples with comparison to existing methods

In this section, two examples similar to those in [27], [7] and [16] are provided to demonstrate the performance of the extended DMAE approach. Note that both E and F are of full rank in this example.

The system is described by model (1) and (2), where

A = [ −0.0005  −0.0084 ; 0.0517  0.8069 ],   B = [ 0.1815 ; 1.7902 ],    (67)
E = [ 0.629  0 ; 0  −0.52504 ],   H = [ 1  0 ; 0  1 ],   F = [ 1  0 ; 0  1 ],    (68)
Q = [ 0.0022  0 ; 0  0.0022 ],   R = [ 0.012  0 ; 0  0.012 ]    (69)

The input u_k is u_k = −0.5 when 200 < k ≤ 300 and u_k = 0.5 otherwise. f_k is given by the red solid lines in Fig. 3(c). It can be noted that the number of unknown inputs in [27], [7] and [16] is n (n = 2), while this paper deals with 2n_d unknown inputs.

In both examples, since E'_k = [E_k 0] and F'_k = [0 F_k], condition (5) is not satisfied. In addition, rank y_k < rank d'_k. Consequently, none of the unknown input decoupled filters mentioned in the introduction are applicable to the problem, except in the special cases when d_k = 0 or f_k = 0. N in Eq. (29) is set to 10. In both examples, Q^f_k = 0, Q^d_k is updated by the main diagonal of the matrix given in (30), x^f_0 = [10^{−3}, 10^{−3}]^T and P^f_0 = 10^2 I.

Example 1. In this example, d_k is a constant bias vector, which is shown by the red solid lines in Fig. 3(b). Condition (5) is not satisfied; therefore, traditional unknown input filters, which require the satisfaction of condition (5), cannot be implemented.

The extended DMAE approach is implemented. The true and estimated p_nf and p_af using the extended DMAE approach are well matched. The probability-weighted estimates of x_k and d_k, which are calculated using Eq. (11), are shown in Figs. 3(a) and 3(b), respectively. The probability-weighted estimate of f_k (calculated using Eq. (12)) is shown in Fig. 3(c). As can be seen, x_k, d_k and f_k can all be estimated.
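For reference, the following Python snippet (ours) encodes the numerical values of Eqs. (67)-(69) and the input profile of Example 1; the disturbance and fault profiles are placeholders, since the paper specifies them only through Fig. 3.

    import numpy as np

    A = np.array([[-0.0005, -0.0084], [0.0517, 0.8069]])
    B = np.array([[0.1815], [1.7902]])
    E = np.array([[0.629, 0.0], [0.0, -0.52504]])
    H = np.eye(2)
    F = np.eye(2)
    Q = 0.0022 * np.eye(2)
    R = 0.012 * np.eye(2)

    steps = 500
    k = np.arange(steps)
    u = np.where((k > 200) & (k <= 300), -0.5, 0.5).reshape(-1, 1)  # input of Example 1
    d = np.tile([1.0, 2.0], (steps, 1))     # constant disturbance bias (placeholder values)
    f = np.zeros((steps, 2))
    f[150:350] = [1.5, 2.0]                 # bias fault profile (placeholder values)
    # These arrays can be fed to a simulation such as the sketch given after Eq. (2).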
8
True
DMAE
0.9
0.02
∆ f1
1
0.8
x
0.7
0
−0.02
0.6
0.5
0
RTSKF
DMAE
0.04
100
200
300
400
−0.04
0
500
100
200
100
200
300
400
500
300
400
500
0.04
0
∆ f2
2
0.02
x
−5
0
−0.02
−10
0
100
200
time (s)
300
400
−0.04
0
500
(a) True and estimated states, example 1
1
d
Fig. 4. Errors of estimation of f1 and f2 using the RTSKF
and the DMAE approach, case 1, example 2
True
DMAE
1.5
erated using the following model [25]:
#
"
di,k
1
0
di,k
0.5
0
100
200
300
400
500
2.5
d
2
2
1
100
200
time (s)
300
400
500
(b) True and estimated disturbances, example 1
"
#
di,k−1
0
= V2
− L2 −2 LVgi
di,k−1
gi
q
σi L3Vgi
0
wd,k
q
+
, i = 1, 2
√
(1 − 2 3)σi ( LVgi )3
0
1
(70)
Three cases are considered for this example. The first
two cases are special cases. In these two cases, the existence condition (5) is satisfied. Therefore, some of the
approaches mentioned in the introduction can still be
used.
True
DMAE
4
where V = 35, σ1 = 0.5, σ2 = 0.8, Lg1 = 2500, Lg2 =
0
1500 and wd,k
∼ N (0, 1). The generated dk is shown by
the red solid lines in Fig. 5(b). It should be noted that
the DMAE approach still models dk as a random walk
process since dk is treated as an unknown input.
1.5
0.5
0
time (s)
f1
2
Case 1 dk = 0, fk 6= 0
0
−2
0
100
200
100
200
300
400
500
300
400
500
In this case, Ek is a zero matrix. Therefore, condition (5)
is satisfied. The probability-weighted estimate of fk using the extended DMAE is the same as in Fig. 3(c). The
RTSKF in Gillijns and De Moor [12] is also applied and
the errors of estimation of fk compared to the DMAE
are shown in Fig. 4. In addition, particle filters [13,8] are
also applied. The model used for estimation of fk is also
the random walk. 100 particles are used. The root mean
square errors (RMSEs) of estimation of f1 and f2 using
the RTSKF, the particle filter [13,8] and the extended
DMAE are shown in Table 1.
3
f2
2
1
0
−1
0
time (s)
(c) True and estimated faults, example 1
Case 2 dk 6= 0, fk = 0
Fig. 3. Results of the DMAE approach, example 1
In this case, F_k is a zero matrix. Therefore, condition (5) is also satisfied. The true and estimated p_nf and p_af using the extended DMAE approach are shown in Fig. 5(a). The probability-weighted estimate of d_k is presented in Fig. 5(b). The results using the methods in Hsieh [15], Hsieh and Chen [18], and Gillijns and De Moor [11] are similar to those of the DMAE. A particle filter is also applied; the model used for the estimation of d_k is the random walk. The RMSEs of the estimation of d_1 and d_2 using the OTSKF of Hsieh and Chen [18], the particle filter [13,8] and the extended DMAE are shown in Table 1.

[Fig. 5. Results of the DMAE approach, case 2, example 2: (a) true and estimated model probabilities; (b) true and estimated disturbances.]

Case 3: d_k ≠ 0, f_k ≠ 0

In this case, condition (5) is not satisfied. Thus, none of the conventional filters mentioned in the introduction are applicable.

The true and estimated p_nf and p_af using the extended DMAE approach are again well matched. The probability-weighted estimate of x_k is shown in Fig. 6. The probability-weighted estimates of d_k and f_k are the same as in Figs. 5(b) and 3(c), respectively. It can be seen that, despite the fact that the existence condition for traditional unknown-input decoupled filters is not satisfied, x_k, d_k and f_k can all be estimated using the extended DMAE approach. The RMSEs of the estimation of d_k and f_k using the extended DMAE approach are shown in Table 1.

Table 1
RMSEs of the fault and disturbance estimation for Example 2

          Method                d1       d2       f1       f2
Case 1    RTSKF [12]            -        -        0.0103   0.0102
          PF [13,8]             -        -        0.1549   0.1496
          DMAE                  -        -        0.0060   0.0047
Case 2    OTSKF [18]            0.0697   0.1442   -        -
          PF [13,8]             0.1088   0.2035   -        -
          DMAE                  0.0709   0.1459   -        -
Case 3    [12,11,18,13,8,15]    N/A      N/A      N/A      N/A
          DMAE                  0.0845   0.1655   0.0230   0.0283

Finally, the sensitivity of the DMAE with respect to errors in Q_k and R_k is discussed. To demonstrate the sensitivity with respect to errors in Q_k, R_k is fixed and Q_k is multiplied by a coefficient k_Q; the RMSE of the fault estimation with k_Q ranging from 10^{−3} to 10^{3} is shown in Fig. 7(a). To show the sensitivity with respect to R_k errors, Q_k is fixed and R_k is multiplied by a coefficient k_R; the corresponding result with k_R ranging from 10^{−3} to 10^{3} is shown in Fig. 7(b).

It can be seen from Figs. 7(a) and 7(b) that the minimum RMSEs are obtained when k_Q = 1 or k_R = 1. However, it is also noted that the extended DMAE approach is more sensitive to R_k errors: the RMSE of the fault estimation increases to 0.063 when Q_k is multiplied by 10^{3} and to 1.79 when R_k is multiplied by 10^{3}. This is expected since, in Section 3, the process noise w̄_nf,k is adapted while the output noise v_k is not. Therefore, the selection of R_k should be performed with more caution.
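The sensitivity experiment of Figs. 7(a)-(b) amounts to the sweep sketched below in Python (ours); run_dmae is a hypothetical placeholder for a full implementation of the extended DMAE filter on Example 2 and is stubbed out here so the loop runs.

    import numpy as np

    def run_dmae(Q_scale, R_scale):
        """Placeholder: run the extended DMAE filter with Q_k and R_k scaled by the
        given factors and return the fault-estimation RMSE (dummy value here)."""
        return float("nan")

    k_values = np.logspace(-3, 3, 13)   # k_Q, k_R from 1e-3 to 1e3, as in Fig. 7
    rmse_vs_kQ = [(kQ, run_dmae(Q_scale=kQ, R_scale=1.0)) for kQ in k_values]  # vary k_Q, fix R_k
    rmse_vs_kR = [(kR, run_dmae(Q_scale=1.0, R_scale=kR)) for kR in k_values]  # vary k_R, fix Q_k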
[Fig. 6. True and estimated states, case 3, example 2.]

[Fig. 7. Sensitivity of the fault estimation using the DMAE approach with respect to Q_k and R_k errors, case 3, example 2: (a) sensitivity with respect to Q_k errors; (b) sensitivity with respect to R_k errors.]

6 Conclusion

In this paper, the unknown input decoupling problem is extended to the case when the existence condition of traditional unknown input filters is not satisfied. It is proved that the states, disturbances and faults can be estimated using an extended DMAE approach which does not require the existence condition. Therefore, it can be applied to a wider class of systems and applications. Two illustrative examples demonstrate the effectiveness of the extended DMAE approach. Future work would consider extending the DMAE to deal with systems with non-Gaussian noise.

References

[1] Fayçal Ben Hmida, Karim Khémiri, José Ragot, and Moncef Gossa. Unbiased Minimum-Variance Filter for State and Fault Estimation of Linear Time-Varying Systems with Unknown Disturbances. Mathematical Problems in Engineering, 2010:1–17, 2010.

[2] François Caron, Manuel Davy, Emmanuel Duflos, and Philippe Vanheeghe. Particle Filtering for Multisensor Data Fusion With Switching Observation Models: Application to Land Vehicle Positioning. IEEE Transactions on Signal Processing, 55(6):2703–2719, 2007.
[3] Jie Chen and Ron J. Patton. Optimal Filtering and Robust Fault Diagnosis of Stochastic Systems with Unknown Disturbances. IEE Proceedings Control Theory and Applications, 143:31–36, 1996.

[4] Yue Cheng, Hao Ye, Yongqiang Wang, and Donghua Zhou. Unbiased Minimum-Variance State Estimation for Linear Systems with Unknown Input. Automatica, 45(2):485–491, February 2009.

[5] M. Darouach and M. Zasadzinski. Unbiased Minimum Variance Estimation for Systems with Unknown Exogenous Inputs. Automatica, 33(4):717–719, 1997.

[6] M. Darouach, M. Zasadzinski, O.A. Bassong, and S. Nowakowski. Kalman Filtering with Unknown Inputs via Optimal State Estimation of Singular Systems. International Journal of Systems Science, 26(10):2015–2028, October 1995.

[7] M. Darouach, M. Zasadzinski, and M. Boutayeb. Extension of Minimum Variance Estimation for Systems with Unknown Inputs. Automatica, 39(5):867–876, May 2003.

[8] Arnaud Doucet, Simon Godsill, and Christophe Andrieu. On Sequential Monte Carlo Sampling Methods for Bayesian Filtering. Statistics and Computing, 10:197–208, 2000.

[9] Nando De Freitas. Rao-Blackwellised Particle Filtering for Fault Diagnosis. In Proceedings IEEE Aerospace Conference, pages 4-1767–4-1772, 2002.

[10] Bernard Friedland. Treatment of Bias in Recursive Filtering. IEEE Transactions on Automatic Control, 14(4):359–367, 1969.

[11] Steven Gillijns and Bart De Moor. Unbiased Minimum-Variance Input and State Estimation for Linear Discrete-Time Systems. Automatica, 43(1):111–116, January 2007.

[12] Steven Gillijns and Bart De Moor. Unbiased Minimum-Variance Input and State Estimation for Linear Discrete-Time Systems with Direct Feedthrough. Automatica, 43(5):111–116, May 2007.

[13] N.J. Gordon, D.J. Salmond, and A.F.M. Smith. Novel Approach to Nonlinear/non-Gaussian Bayesian State Estimation. In Proc. Inst. Elect. Eng., F, volume 140, pages 107–113, 1993.

[14] M. Hou and R. J. Patton. Optimal Filtering for Systems with Unknown Inputs. IEEE Transactions on Automatic Control, 43(3):445–449, 1998.

[15] Chien-Shu Hsieh. Robust Two-Stage Kalman Filters for Systems with Unknown Inputs. IEEE Transactions on Automatic Control, 45(12):2374–2378, 2000.

[16] Chien-Shu Hsieh. Extension of Unbiased Minimum-Variance Input and State Estimation for Systems with Unknown Inputs. Automatica, 45(9):2149–2153, September 2009.

[17] Chien-Shu Hsieh. On the Global Optimality of Unbiased Minimum-Variance State Estimation for Systems with Unknown Inputs. Automatica, 46(4):708–715, April 2010.

[18] Chien-Shu Hsieh and Fu-Guang Chen. Optimal Solution of the Two-Stage Kalman Filter. IEEE Transactions on Automatic Control, 44(1):194–199, 1999.

[19] A. H. Jazwinski. Adaptive Filtering. Automatica, 5:475–485, 1969.

[20] Simon J. Julier and Jeffrey K. Uhlmann. A New Extension of the Kalman Filter to Nonlinear Systems. In Proc. AeroSense: 11th Int. Symp. Aerospace/Defense Sensing, Simulation and Controls, pages 182–193, 1997.

[21] Peter K. Kitanidis. Unbiased Minimum-Variance Linear State Estimation. Automatica, 23(6):775–778, 1987.

[22] Peng Lu, Laurens Van Eykeren, E. van Kampen, Cornelis Coen de Visser, and Qiping Chu. Double-Model Adaptive Fault Detection and Diagnosis Applied to Real Flight Data. Control Engineering Practice, 36:39–57, March 2015.

[23] D. T. Magill. Optimal Adaptive Estimation of Sampled Stochastic Processes. IEEE Transactions on Automatic Control, 10(4):434–439, 1965.

[24] Peter S. Maybeck. Multiple Model Adaptive Algorithms for Detecting and Compensating Sensor and Actuator/Surface Failures in Aircraft Flight Control Systems. International Journal of Robust and Nonlinear Control, 9(14):1051–1070, December 1999.

[25] Donald McLean. Automatic Flight Control Systems. Englewood Cliffs, NJ: Prentice-Hall, 1990.

[26] Raman K. Mehra. On the Identification of Variances and Adaptive Kalman Filtering. IEEE Transactions on Automatic Control, 15(2):175–184, April 1970.

[27] Sang Hwan Park, Pyung Soo Kim, Oh-Kyu Kwon, and Wook Hyun Kwon. Estimation and Detection of Unknown Inputs Using Optimal FIR Filter. Automatica, 36:1481–1488, 2000.

[28] Vandi Verma, Geoff Gordon, Reid Simmons, and Sebastian Thrun. Real-Time Fault Diagnosis. IEEE Robotics & Automation Magazine, 11(1):56–66, 2004.

[29] Qijun Xia, Ming Rao, Yiqun Ying, and Xuemin Shen. Adaptive Fading Kalman Filter with an Application. Automatica, 30(8):1333–1338, 1994.

[30] Bo Zhao, Roger Skjetne, Mogens Blanke, and Fredrik Dukan. Particle Filter for Fault Diagnosis and Robust Navigation of Underwater Robot. IEEE Transactions on Control Systems Technology, 22(6):2399–2407, 2014.
| 3 |
UDC 004.9:66.013.512                                                        INFORMATICS

MODELLING OF THE BUILDING BASE IN A CAD SYSTEM FOR ENTERPRISE RENOVATION USING MODULES IN THE DRAWING

V.V. Migunov
CESI RT under the Cabinet of Ministers of the Republic of Tatarstan, Kazan

Abstract
The parametric model, the set of operations and the implementation features for preparing drawings of the building base — the component common to drawings of different marks in enterprise renovation projects — are described. The key to the deeper automation of design is the use of so-called modules in the drawing, which combine a visible graphic part with invisible parameters. The model has been validated in the preparation of several hundred drawings.
Bibl. 4

Abstract
V.V. Migunov. The modelling of the building base in a CAD system for the renovation of enterprises by means of units in the drawings // Izvestiya of the Tula State University. Ser. Mathematics. Mechanics. Informatics. Tula: TSU, 2004. V._. N _. P. __–__.
The parametric model of the building base and the features of the design operations are described for making drawings which are the common component of the different parts of enterprise renovation projects. The key point of the deep design automation is the use of so-called units (modules) in the drawings, which join a visible graphic part and invisible parameters. The model has been verified during the design of several hundred drawings.
Bibl. 4
This paper is devoted to the application of the modular technology for developing problem-oriented extensions of computer-aided design (CAD) systems for enterprise renovation, whose general principles are set out in [1]. The foundation of this technology is modules in the drawing: dual objects that combine a visible geometric part with an invisible parametric representation (PR). The PR is primary; the visible part is generated from it.

The technology is applied here to automating the preparation of drawings of the so-called building base. The building base includes the coordination axes of buildings and structures, columns, partitions, openings and other building constructions that appear in drawings of various marks of the System of Design Documentation for Construction (SPDS) according to [2]. During enterprise renovation, equipment, process pipelines, power supply, internal water supply and sewerage networks, automation facilities, etc., are positioned with reference to the elements of the building base. In the SPDS, coordination axes are shown on the image of every building or structure and are given their own system of designations. This applies not only to design and working documentation for the construction of enterprises, buildings and structures of various purposes, but also to the technical reports on engineering surveys for construction [2].

One and the same building-base drawing is used repeatedly in drawings of several marks produced by specialists of different disciplines within a single project. Enterprise renovation tasks require the unchanged part of the building base to be drawn again in different projects carried out at different times, one after another. This makes the automation of building-base design far more effective than the automation of other parts of the project, which are reused less often.

According to the modular technology for developing CAD extensions, once the expediency of a specialized extension has been established, one must identify the most informationally interconnected elements of the drawing, whose images it makes sense to generate automatically from the PR.

Analysis of drawings of various marks and of the SPDS GOST requirements for them shows that the coordination axes, columns, partitions (walls), openings and texts are the most strongly interconnected. Floor plans are used most often (Fig. 1). For drawings of the KZh (reinforced-concrete structures), KM (metal structures) and AS (architectural and structural solutions) marks, the elements of foundations and floors/roofs are also important, and sometimes sections with elevation marks are required as well. Because foundations (footings go under columns, beams rest on footings, strip foundations are placed under partitions) and floors/roofs (beams rest on columns, slabs rest on beams) are strongly tied to the floor plans, it is expedient to automate the design of all the listed objects together. Table 1 lists the drawing objects taken into account for the three kinds of plan drawings.

Fig. 1. Example of a floor plan
Table 1. Objects in the floor, roof/floor and foundation plans

Object list                    Floor   Roof/floor   Foundation
Groups of horizontal axes        +         +            +
Groups of vertical axes          +         +            +
Groups of columns                +         +            –
Partitions (walls)               +         +            –
Openings                         +         –            –
Floor/roof beams                 –         +            –
Groups of slabs                  –         +            –
Strip foundations                –         –            +
Groups of footings               –         –            +
Foundation beams                 –         –            +
Texts                            +         +            +
Such building-base drawing elements as dimensions, axis designations and elevation marks are absent from the table. They are generated from the general settings and from the properties of the objects included in the table. This approach is implemented in the TechnoCAD GlassX CAD system [3]. The quantitative data and examples given below also refer to that system.

The parametric representation of the objects and their relations in a building-base drawing contains lists of objects whose descriptions record membership relations (references by object number within the corresponding list). The intersection point of two coordination axes is specified by the numbers of these axes in the overall numbering and is called an anchor node below. Offsets of columns, partitions, openings, etc. are measured from such nodes. All coordinates, dimensions and offsets are given in millimetres of real scale unless stated otherwise. A given anchor node together with the offset of the object origin (its lower-left corner) from that node is called an anchor below. The novelty flag of an object states whether it is existing or new (existing objects are drawn with thin lines). The X flag states that the element is oriented along the X axis, so that its length is measured along X and its width along the normal; otherwise the other way round. The following lists of objects are provided.
Groups of horizontal axes. Groups of horizontal coordination axes are specified by the number of axes in the group (up to 99), a main/auxiliary flag, and the distance to the next axis for main axes or the offset from the main axis for auxiliary ones.

Groups of vertical axes. These have the same properties as groups of horizontal axes.

Groups of columns. The column mark determines the width and thickness, the presence of symmetry, the number and directions of beams that can rest on the column, etc. There are more than 600 mark variants. For example:
• "3К96-7 (10500 x 600 x 400, 2.300)" — "Columns of single-storey buildings with overhead cranes. Series 1.424.1-75", "Columns of the outer rows. Issues 1/87, 2/87";
• "Unmarked column" — for unmarked columns the length of the corbel or branch is specified, together with the column type, e.g. "Reinforced-concrete, two corbels", "Metal, two branches".
The anchor nodes of the start and end of the column group, the offset of the column centres from the anchor nodes, the X flag and the novelty flag are specified. For single-corbel columns, a flag stating that the corbel is on the left (bottom) is specified.

Partitions (walls). These are characterized by the type according to GOST 21.107-78: ordinary, prefabricated panel, glass-block, glazed 1 (three longitudinal lines), glazed 2 (four longitudinal lines), brick. The thickness and length of the partition, a load-bearing/non-load-bearing flag, the X flag, the anchor and the novelty flag are specified.

Openings. These are characterized by a mark (about 100 variants). For example:
• "ОР 15-6 (1460 x 570)" — a double-glazed window for residential and public buildings according to GOST 11214-86;
• "ДН 21-13АПЩ (2085 x 1274, АПЩР2)" — an external door for residential and public buildings according to GOST 24698-81;
• "Unmarked opening" — when there is no mark.
The opening type according to GOST 21.107-78 has 19 variants. For example:
• "Opening without rebates (not reaching the floor)";
• "Folding door in an opening without rebates".
The width and height of the opening, a reference to the partition in which the opening is made, the X flag, a flag for rotating the opening by 180 degrees, the anchor and the novelty flag are specified.

If sections of the building base are to be generated later, additional information about the opening is specified: the height of the opening above floor level, the opening height itself, and the presence flag, mark, length, width and height of the lintel. There are more than 60 lintel mark variants. For example:
• "2ПБ19-3-п (1940 x 120 x 140, 0.033)" — bar lintels, Series 1.038.1-1, issue 1;
• "2ПП18-5 (1810 x 380 x 140, 0.096)" — slab lintels, Series 1.038.1-1, issues 2, 5.
For sections, the presence flag, mark, thickness, width and height of the transom are also specified. There are 10 transom mark variants. For example:
• "ФВ 04-12 (390 x 1170)";
• "ФВ 13-10 (1290 x 970)".

Floor/roof beams. The beam mark is chosen from 140 variants. For example:
• "2БСO 12-6 АШв (11960 x 280 x 890, 2.00)" — "Rafter beams. Series 1.462.1-1/88, issue 1";
• "ИБ 8-21 (5280 x 800 x 300, 1.23)" — "Girders of industrial buildings. Series ИИ 23-3.70".
The length, width and height of the beam, the anchor, the X flag, the novelty flag, the attachment of the left (lower) end of the beam to a column (column group number, numbers within the column group along the X and Y axes) and a similar attachment of the right (upper) end of the beam are specified.

Groups of slabs. The slab mark is chosen from 180 variants. For example:
• "2ПВ12-5-4 (11960 x 2980 x 525, 3.200)" — "Ribbed roof slabs. Series 1.465-3, 1.465.1-3, 1.465.1-7";
• "ПК24.12-8Т (2380 x 1190 x 220, 0.35)" — "Hollow-core slabs. Series 1.141-1, issue 60".
The length, width and height of the slab, the X flag, the anchor and the number of slabs in the group are specified.

Strip foundations. The width and length, the X flag, the anchor and the novelty flag are specified.

Groups of footings. The footing mark is chosen from 25 variants. For example:
• "1Ф 12.8-1 (1200 x 1200 x 750, 0.75)";
• "2Ф 18.9-2 (1800 x 1800 x 900, 1.60)".
The length, width and height of the footing, the X flag, the anchor nodes of the start and end of the footing group, the offset of the footing centres from the anchor nodes and the novelty flag are specified.

Foundation beams. The beam mark is chosen from 70 variants. For example:
• "1БФ6-5 (5050 x 200 x 300, 0.27)" — "Series 1.415.1-2, issue 1";
• "ФБ 6-36 (5050 x 450 x 520, 0.75)" — "Series 1.415-1, issue 1. ФБ";
• "Unmarked".
The length, width and height of the beam, the anchor, the X flag, the novelty flag, the attachment of the left (lower) end of the beam to a footing (footing group number, numbers within the footing group along the X and Y axes, position of the beam on the footing: centred, at the left or at the right edge) and the attachment of the right (upper) end of the beam to a footing (footing group number, numbers within the footing group along the X and Y axes) are specified.

Texts. The multi-line text itself, with font settings, line spacing, etc., is the same as for ordinary text in the drawing. It always has a leader note. The text origin point and the leader pointing location are also specified.
The collection of object lists is a relational database with support for referential integrity. Besides the object lists, the PR includes parameters common to the whole building base — settings such as the offset of axis labels and dimensions from the outermost axes; a flag stating that horizontal axes are numbered with letters and vertical ones with digits rather than the other way round; a flag placing horizontal dimensions above the axes; and others.
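Purely as an illustration (this sketch is not part of the original system, and all names are hypothetical), the object lists and their reference links could be represented along the following lines:

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class AxisGroup:                 # group of horizontal or vertical coordination axes
        count: int                   # up to 99 axes in a group
        main: bool                   # main or auxiliary axes
        spacing_mm: int              # spacing to the next axis (or offset from the main one)

    @dataclass
    class Partition:
        thickness_mm: int
        length_mm: int
        load_bearing: bool
        along_x: bool                # the X flag
        anchor_node: Tuple[int, int] # (horizontal axis no., vertical axis no.)
        offset_mm: Tuple[int, int]   # offset of the lower-left corner from the anchor node
        is_new: bool                 # the novelty flag

    @dataclass
    class Opening:
        mark: Optional[str]          # e.g. "ОР 15-6 (1460 x 570)", or None if unmarked
        width_mm: int
        height_mm: int
        partition_index: int         # reference by number into the partition list
        rotated_180: bool
        is_new: bool

    @dataclass
    class BuildingBasePR:            # the parametric representation stored in the module
        h_axes: List[AxisGroup] = field(default_factory=list)
        v_axes: List[AxisGroup] = field(default_factory=list)
        partitions: List[Partition] = field(default_factory=list)
        openings: List[Opening] = field(default_factory=list)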
The specialized data structures in the PR — above all, the reference links between objects — greatly increase the achievable degree of design automation. For example, when the spacing in a group of coordination axes is changed, the columns, partitions and openings on partitions linked to them move automatically, and the dimension texts are regenerated. When marks of building structures are selected from the available variants, part of their dimensions is determined automatically: they are given in parentheses in the mark names. The PR can be written to disk (without the geometric elements), creating a design information environment in the form of prototype libraries. When a PR is selected for reading from disk, the geometry is generated on-line, and the designer can easily browse the prototypes. Storage of the PR on disk and in modules in the drawing is compact: for the drawing of Fig. 1 it takes about 2 kilobytes. The axes of the building base can be inserted automatically into an axonometric-diagram module [4] by selecting a building-base module in the drawing or a PR on disk.

All modifications of the building-base PR are carried out in a special mode with its own main menu. Both a general-purpose user interface (menus, input forms, ...) and an interface specialized for drafting and editing (the "drawing" interface), which the user becomes accustomed to while working with the drawing itself, are used. The implementation of the "drawing" user interface is based on special temporary modules. They are placed in the drawing only while this special mode is active and make it possible to organize the work using methods already developed in the CAD system for other drawing elements.

Each working module corresponds to one object from the lists named above. The module contains the object's image but does not include its properties, which are stored in the PR. A temporary module holds a reference to its object: the identifier of the object list and the object number in that list. Thus, selecting a temporary module (or modules) in the drawing is equivalent to selecting an object (or objects) in the parametric representation. Since modifications of the PR may invalidate references to objects, the temporary modules are partially or fully regenerated whenever the PR changes. The use of temporary modules makes it possible to provide deep automation of the development of building-base drawings; the automated operations are characterized below.
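A hypothetical sketch of such a temporary module and of its regeneration after a PR change is shown below (again, not the original implementation; the names pair with the data-structure sketch given earlier and are assumptions):

    from dataclasses import dataclass
    from typing import Any, Callable, List

    @dataclass
    class TemporaryModule:
        """Carries only the generated image and a reference to its object, so that
        selecting modules in the drawing selects objects in the PR."""
        list_id: str        # e.g. "partitions", "openings"
        index: int          # object number within that list
        geometry: Any       # generated drawing primitives (placeholder)

    def regenerate(pr, make_geometry: Callable[[str, Any], Any]) -> List[TemporaryModule]:
        """Rebuild all temporary modules after any change of the PR, keeping references valid."""
        modules = []
        for list_id in ("h_axes", "v_axes", "partitions", "openings"):
            for i, obj in enumerate(getattr(pr, list_id)):
                modules.append(TemporaryModule(list_id, i, make_geometry(list_id, obj)))
        return modules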
The designer is relieved of the need to draw repeating elements manually and performs only the unavoidable decision-making functions. Regeneration of the image while characteristic points are dragged with the cursor is used as much as possible.

When creating building-base plans of all kinds, the following stages of work are automated:
– span dimensions are placed automatically;
– if necessary, an overall dimension across all spans can be generated, or the axis labels and dimensions can be placed on the other side of the plan;
– the font size used for all generated dimensions and elevation marks (when generating a section) is specified;
– the parameters of an existing plan can be extracted, whether it is in the drawing or stored on disk as a PR.

When creating a floor plan:
– the axis labelling is specified by giving the starting letters and digits;
– the axes are split into groups, each characterized by an axis spacing (span length). Groups of X and Y axes are placed and edited by pointing at a group of axes in the drawing and entering the axis spacing and the number of axes in the group. The number of axes in a group can also be set by construction, by indicating the position of the outermost axis in the drawing. As the cursor moves, the axes of the group being constructed are drawn with the given spacing and their number is displayed (Figs. 2, 3). Unlike in the foundation and roof/floor plans, groups of axes can be deleted, added or changed at any moment, not only when no other elements are present;

Fig. 2. Specifying the number of horizontal axes, with the number of spans highlighted. Grey: axes that appear and disappear as the cursor moves.
Fig. 3. Coordination axes generated from the input of Fig. 2.
Fig. 4. Constructing a group of columns. Grey: the part being constructed.

– groups of columns are placed by selecting the required mark or type (for unmarked ones) from the menu, indicating the positions of the outermost columns in the drawing (as the cursor moves, the columns of the group being constructed are drawn and their number is highlighted, Fig. 4), and then refining their parameters (specific to each type). Groups of columns can be deleted, added and edited; the drawing is regenerated automatically;
– partitions are placed by specifying the partition parameters and constructing a polyline, the base line of the partition. While it is being constructed, the current view of such a composite partition is visible on the screen. The alignment of partitions along the X or Y axes is checked automatically. Partitions can be "dimensioned", i.e. dimensions are placed for all openings located on the partition and for all parts of the partition between the openings;
– openings are placed on partitions by selecting the required mark (an opening may be unmarked) and opening type, and by specifying the length and height for an unmarked opening. A preview of the opening appears at the cursor and moves with it; when it is near a partition, the opening "snaps" into it automatically. If several installation variants are possible (door opening inwards or outwards, etc.), they are switched with a key. It is checked automatically that the opening does not extend beyond the partition and does not overlap another opening. Openings can be copied, moved and deleted, and their mark, type and other parameters (including the presence of a lintel and transom and their marks) can be changed; the drawing is regenerated automatically.

When creating a foundation plan:
– a new foundation plan is created from the floor plan. The axes are imported from the floor plan, groups of footings are matched to the groups of columns, and strip foundations to the partitions;
– groups of footings can be deleted and edited; the drawing is regenerated automatically;
– strip foundations are placed by specifying the parameters and constructing a polyline, the base line of the foundation. While it is being constructed, the current view of such a composite strip foundation is visible on the screen;
– foundation beams are placed by specifying the beam parameters, including the position of the beam on the footing (at the left edge, centred, or at the right edge). Then the footings on which the beam will rest are selected. Whether the beam can be placed on the footing, its alignment along the X or Y axes, and the correspondence of the beam length to the distance between footings are checked automatically. Foundation beams can be deleted and copied.

When creating a roof/floor plan:
– a new roof/floor plan is created from the floor plan. The axes, the groups of marked columns on which beams can be laid, and the load-bearing partitions are imported from the floor plan;
– beams are placed by specifying the beam parameters and selecting the columns on which the beam will rest. Whether the beam can be placed on the column in the given direction, its alignment along the X or Y axes, and the correspondence of the beam length to the distance between columns are checked automatically. Beams can be deleted and copied;
– groups of slabs are placed by selecting the required slab mark and orientation from the menu, indicating the positions of the outermost slabs in the drawing (as the cursor moves, the slabs of the group being constructed are drawn and their number is highlighted), and then refining their parameters. A group of slabs can be "dimensioned", i.e. the thicknesses of all slabs in the group are placed.

A section is generated by specifying the following information step by step:
– the number of storeys in the section and the presence of a foundation and a roof;
– for each of them, the corresponding plans are specified by pointing in the drawing at a "Building base plan" module with the full set of plan parameters. The program will not allow anything else to be selected. For storeys starting from the 2nd, a floor plan of the slab structure that will serve as the floor of that storey can be chosen;
– the floor levels are entered for all storeys, plus the level of the foundation base (if there is a foundation) and the level of the underside of the roof (the ceiling, if there is a roof) or of the top of the last storey (if there is no roof). During section generation the specified levels are placed on the drawing as elevation marks;
– on one of the plans belonging to the section, the cutting line is indicated: a polyline along the X and Y axes, together with the letter designation and the scale of the section;
– the section drawing itself is generated.

For visual analysis during building-base design, sections can be viewed with stepwise translation and rotation of the cutting plane in space.

When placed in the drawing, columns, partitions and other building-base elements are automatically declared layout blocks for the subsequent layout of equipment. The PR of a building-base plan is stored in the drawing in a "Building base plan" module; its visible part comprises two segments of the initial X and Y axes (with their labels) and two dimensions (if they were placed).
The parametric representation, the set of design operations and the implementation features described in this paper constitute a model of building-base drawings — the component common to a large number of drawings of different marks of the system of design documentation for construction — that raises the effectiveness of computer-aided design systems, especially when designing the renovation of enterprises. The key point both in implementing the parametric representation and in providing the familiar graphical user interface is the use of modules in the drawing, which combine a visible graphic part with invisible parameters. Over several years of operation of the building-base design subsystem in the design office of a large chemical plant, several hundred building-base drawings have been produced, which confirms the effectiveness and practical value of the described approaches to modelling building-base drawings.
References

[1] Migunov V.V. A modular technology for developing problem-oriented extensions of a CAD system for enterprise renovation. In: Proceedings of the Second International Electronic Scientific and Technical Conference "Technological Systems Engineering" (TST'2003), Tula, 01.09.2003–30.10.2003 [Electronic resource] / Tula State University. Available at: http://www.tsu.tula.ru/aim/, free access. (In Russian.)

[2] System of Design Documentation for Construction. GOST 21.101-97. Basic requirements for design and working documentation. Moscow: MNTKS, 1997. 40 p. (In Russian.)

[3] Migunov V.V. TechnoCAD GlassX — a domestic CAD system for enterprise renovation. In 3 parts // SAPR i grafika, 2004: No. 4, pp. 78–86; No. 5, pp. 42–48; No. 6, pp. 34–40. (In Russian.)

[4] Migunov V.V., Kafiyatullov R.R., Safin I.T. A modular technology for developing CAD extensions. Axonometric piping diagrams. Parametric representation. In: Proceedings of the Second International Electronic Scientific and Technical Conference "Technological Systems Engineering" (TST'2003), Tula, 01.09.2003–30.10.2003 [Electronic resource] / Tula State University. Available at: http://www.tsu.tula.ru/aim/, free access. (In Russian.)
| 5 |
On the Semantics of Intensionality and Intensional Recursion

arXiv:1712.09302v1 [cs.LO] 26 Dec 2017
G. A. Kavvos
St John’s College
University of Oxford
A thesis submitted for the degree of
Doctor of Philosophy
Trinity Term 2017
This thesis is dedicated to my father, John G. Kavvos (1948–2015), who
taught me to persist in the face of all adversity.
Acknowledgements
A thesis in the making is a process of controlled deconstruction of its
author’s character. This fact in itself suffices to warrant a rich list of
acknowledgees.
First and foremost, I would like to thank my doctoral supervisor, Samson Abramsky, sine qua non. Not only did he suggest the—admittedly
unusual—topic of this thesis, but his unfailingly excellent advice and his
unparalleled wit were vital ingredients in encouraging me to plough on,
even when it seemed futile to do so. I shall never forget the time he quoted
the experience of J. E. Littlewood on the development of chaos theory:
“For something to do we went on and on at the thing with no
earthly prospect of “results”; suddenly the whole vista of the
dramatic fine structure of solutions stared us in the face.”
It was a pleasure to work for a few short years next to a scientist of his
stature, who also happens to be a kind man with a wonderful sense of
humour.
I also want to thank my examiners, Luke Ong and Martin Hyland, who
examined this work in detail, and provided many enlightening comments
and suggestions.
I am grateful to the UK Engineering and Physical Sciences Research Council (EPSRC) for their financial support. I am also greatly indebted to the
Department of Computer Science, the European Cooperation in Science
and Technology (COST) framework, and St John’s College, for generously
funding the trips involved in presenting my work to the wider community.
I would also like to thank all those who reviewed parts of this work before its completion. These include quite a few anonymous reviewers; John
Longley, who toiled over the first version of the manuscript on which this
thesis is based; Neil Jones, with whom I have had many interesting discussions during and after his visits to Oxford, and who also read many
parts of this thesis in detail; and my interim assessors within the department, Sam Staton and Hongseok Yang. This thesis could hardly have
been completed without their support and encouragement.
Many thanks to the fellow scientists around me: Kohei Kishida, for many
interesting discussions, for providing help and advice at a moment’s notice,
and for translating and transliterating the title of [Kiselyov, 2015]; Martin
Berger, for inviting me to Sussex to present my work, and for the numerous
exchanges that followed; and Sean Moss, who was eager to discuss all sorts
of interesting categorical diversions. Many thanks to Geraint Jones for
allowing me to use his macros for Eindhoven-style calculations.
I also want to thank my fellow students: Mario Alvarez, to whom I wish
the best of luck in his ongoing attempt to ‘debunk’ this thesis; Amar
Hadzihasanovic, for many discussions on higher category theory and its
applications; and Matthijs Vákár and Norihiro Yamada, who were always
eager to discuss categories, logic, and semantics.
Finally, I would like to thank Andrew Ker, who was my tutor during
my time as an undergraduate at University College, and my employer
thereafter. I learned an enormous amount from him as a student, and
even more whilst teaching under his supervision. I am indebted to him
for his wise advice, and all the things that he has inadvertently taught
me over a cup of coffee in the Senior Common Room. One would be
hard-pressed to find better guidance or more generosity.
Abstract
Intensionality is a phenomenon that occurs in logic and computation. In the
most general sense, a function is intensional if it operates at a level finer than
(extensional) equality. This is a familiar setting for computer scientists, who
often study different programs or processes that are interchangeable, i.e. extensionally equal, even though they are not implemented in the same way, so
intensionally distinct. Concomitant with intensionality is the phenomenon of
intensional recursion, which refers to the ability of a program to have access
to its own code. In computability theory, intensional recursion is enabled by
Kleene’s Second Recursion Theorem.
This thesis is concerned with the crafting of a logical toolkit through which
these phenomena can be studied. Our main contribution is a framework in
which mathematical and computational constructions can be considered either
extensionally, i.e. as abstract values, or intensionally, i.e. as fine-grained descriptions of their construction. Once this is achieved, it may be used to analyse
intensional recursion.
To begin, we turn to type theory. We construct a modal λ-calculus, called Intensional PCF, which supports non-functional operations at modal types. Moreover, by adding Löb’s rule from provability logic to the calculus, we obtain a
type-theoretic interpretation of intensional recursion. The combination of these
two features is shown to be consistent through a confluence argument.
Following that, we begin searching for a semantics for Intensional PCF. We
argue that 1-category theory is not sufficient, and propose the use of P-categories
instead. On top of this setting we introduce exposures, which are P-categorical
structures that function as abstractions of well-behaved intensional devices. We
produce three examples of these structures, based on Gödel numberings on
Peano arithmetic, realizability theory, and homological algebra.
The language of exposures leads us to a P-categorical analysis of intensional
recursion, through the notion of intensional fixed points. This, in turn, leads
to abstract analogues of classic intensional results in logic and computability,
such as Gödel’s Incompleteness Theorem, Tarski’s Undefinability Theorem, and
Rice’s Theorem. We are thus led to the conclusion that exposures are a useful
framework, which we propose as a solid basis for a theory of intensionality.
In the final chapters of the thesis we employ exposures to endow Intensional
PCF with an appropriate semantics. It transpires that, when interpreted in the
P-category of assemblies on the PCA K1 , the Löb rule can be interpreted as
the type of Kleene’s Second Recursion Theorem.
Contents
1 Introduction  1
  1.1 Intensionality  1
  1.2 Objectives  2
    1.2.1 Intensional Programming  3
    1.2.2 Reflective Programming  4
    1.2.3 Higher-Order Non-Functional Computation  5
    1.2.4 Recursion in Type Theory  7
  1.3 Quoting is Impossible  8
    1.3.1 Tale 1: Quoting is Not Definable  9
    1.3.2 Tale 2: Quoting Collapses Observational Equivalence  10
  1.4 Intensionality and Types: Modality-as-Intension  11
    1.4.1 Modality-as-Intension  11
    1.4.2 A Puzzle: Intensional Recursion  13
  1.5 A Road Map  15

2 Kleene's Two Kinds of Recursion  18
  2.1 Intensionality and Computability  19
    2.1.1 The Space of All Programming Languages  20
    2.1.2 The Second Recursion Theorem  23
    2.1.3 Intensional Recursion  25
    2.1.4 Applications of Intensional Recursion  27
  2.2 Extensional Recursion and the FRT  31
    2.2.1 Effective Operations  31
    2.2.2 Partial Recursive Functionals and Pure Oracles  36
  2.3 FRT vs. SRT  38
    2.3.1 Effective Operations and the SRT  38
    2.3.2 Partial Recursive Functionals and the SRT  42

3 iPCF: An Intensional Programming Language  44
  3.1 Introducing Intensional PCF  45
  3.2 Metatheory  46
    3.2.1 Structural Theorems & Cut  46
    3.2.2 Free variables  48
  3.3 Consistency of Intensional Operations  50
    3.3.1 Adding intensionality  50
    3.3.2 Reduction and Confluence  55
  3.4 Some important terms  61
  3.5 Two intensional examples  62
    3.5.1 'Parallel or' by dovetailing  63
    3.5.2 A computer virus  64
  3.6 Open Questions  65

4 Categories and Intensionality  66
  4.1 Categories are not intensional  67
    4.1.1 Intension, Modality, and Categories  68
    4.1.2 PERs and P-categories  72
  4.2 Exposures  77
    4.2.1 Intensional Equality  79
    4.2.2 Cartesian and Product-Preserving Exposures  80
    4.2.3 Comonadic Exposures  84
    4.2.4 Idempotent Comonadic Exposures  86
    4.2.5 Weakly Cartesian Closed Exposures  93

5 Three Examples of Exposures  95
  5.1 Exposures as Gödel Numbering  95
    5.1.1 The Lindenbaum P-category  96
    5.1.2 Numbering as Exposure  97
  5.2 Exposures in Realizability  100
    5.2.1 Partial Combinatory Algebras  100
    5.2.2 Assemblies and Modest Sets  104
    5.2.3 Passing to a P-category  105
    5.2.4 The Exposure  107
    5.2.5 Weak Extensionality and Naturality  113
  5.3 Exposures in Homological Algebra  115
    5.3.1 The P-category Grp  115
    5.3.2 Intensionality and Homomorphisms  116

6 Intensional Recursion in P-Categories  118
  6.1 Extensional and Intensional Fixed Points  119
    6.1.1 Consistency, Truth and Provability: Gödel and Tarski  120
    6.1.2 Rice's theorem  123
  6.2 The relationship to Löb's rule  124
  6.3 Whence fixed points?  129
    6.3.1 Lawvere's Theorem  129
    6.3.2 An intensional Lawvere theorem  132
  6.4 Examples of Fixed Points  133
    6.4.1 Fixed Points in Gödel numbering: the Diagonal Lemma  133
    6.4.2 Fixed Points in Assemblies: Kleene's Recursion Theorems  134

7 Intensional Semantics of iPCF I  138
  7.1 Setting the scene  139
  7.2 Distribution and naturality laws  141
  7.3 Fixed Points with Parameters  145
    7.3.1 Extensional Fixed Points with Parameters  146
    7.3.2 Intensional Fixed Points with Parameters  149
  7.4 A Parametric Intensional Lawvere Theorem  153

8 Intensional Semantics of iPCF II  156
  8.1 iPCF v2.0  157
  8.2 Interpreting iPCF v2.0  162
  8.3 Soundness  165
  8.4 Natural and Weakly Extensional Models  171
    8.4.1 Natural iPCF v2.0 models  171
    8.4.2 Weakly Extensional iPCF v2.0 models, or iPCF models  173
  8.5 Building IPWPSs categorically  175
  8.6 Asm(K1) as a model of iPCF v2.0  180

9 Conclusions & Future Work  184
  9.1 Is intensionality really just non-functionality?  186
  9.2 How expressive is iPCF?  187
    9.2.1 Metaprogramming  187
    9.2.2 Higher-Order Computability  189
    9.2.3 iPCF, iPCF v2.0, and their models  189
  9.3 Exposures vs. other theories of intensionality  191
    9.3.1 An Idea for an Alternative Approach  194
  9.4 Kleene's mysterious Second Recursion Theorem  195
  9.5 Concluding remarks  196

A iPCF v2.0 in Agda  197
  A.1 Basics.agda  197
  A.2 iPCF.agda  201
  A.3 iPCF2.agda  207

Bibliography  214
List of Figures
1.1 Chapter dependencies  17
3.1 Syntax and Typing Rules for Intensional PCF  47
3.2 Reduction for Intensional PCF  53
3.3 Equational Theory for Intensional PCF  54
3.4 Parallel Reduction  57
6.1 Types of Fixed Points (without parameters)  124
8.1 Syntax and Typing Rules for Intensional PCF v2.0  158
8.2 Equational Theory for Intensional PCF v2.0  161
8.3 Categorical Semantics for Intensional PCF v2.0  164
This is the arXiv version of this thesis (identical to build 7049), and was compiled
on December 27, 2017.
Chapter 1
Introduction
This thesis concerns the computational phenomenon of intensionality.
1.1 Intensionality
To be intensional is to contain not only reference, but also sense. This philosophical
distinction was drawn by Frege; see Fitting [2015]. An intensional sign denotes an
external referent, yet inherently connotes more information—its elusive sense. The
classic example is that of the planet Venus, which may be referred to as either the
morning star, or the evening star.
In the most mathematically general sense, to be ‘intensional’ is to somehow operate
at a level finer than some predetermined ‘extensional’ equality. Intensionality is omnipresent in constructive mathematics, where the question of equality is non-trivial,
see e.g. Beeson [1985]. An example that dates back to the work of Bishop [1967]
on constructive analysis is that of real numbers, and their construction as Cauchy
sequences of rationals: two different Cauchy sequences of rationals may stand for the
same real number, thus being extensionally equal, yet intensionally distinct.
Most mainstream mathematics is extensional: we usually reason about some underlying, ‘ideal’ mathematical object, and not its concrete descriptions. The latter is
only a way to refer to the former. In this light, intensionality is merely a nuisance,
so common set theories assume some axiom of extensionality: sets are identified by
their members, and functions by their graphs. Glimpses of intensionality appear very
rarely, and usually only because we are interested in proving that some extensionality
axiom is independent from the rest of some logical theory: see e.g. Streicher [2015]
for a recent example.
This is a difficult-to-work-in setting for Computer Science, where intensionality is
the norm. In fact, the present author believes that it would be fair to say that many
branches of computer science are in essence the study of programs and processes seen
under some appropriate notion of equality. At one end of the spectrum, correctness of
programs is discussed in the context of some relation of observational equivalence, i.e.
indistinguishability of programs. Intermediately, complexity theory requires a slightly
stronger notion, which we could call complexity equivalence: this is observational
equivalence strengthened by some account of the resources the program consumes.
At the other extreme, computer viruses sometimes make decisions based strictly on
patterns of object code they encounter, disregarding the actual function of what
they are infecting; one could say they operate up to α-equivalence, i.e. syntactic
identity. Each of the aforementioned notions of equality is more intensional than the
one preceding it, and each level is interesting in its own right.
Thus, depending on our point of view, there are always two ways in which we
can see a computational process. There is an extensional level, which corresponds to
what may be computed. But there is also an intensional level, corresponding to the
programs and processes that carry out the computation. The shift between these two
viewpoints has been discussed by Moschovakis [1993] and Abramsky [2014]: computational processes may be understood by the ideal objects that they refer to (e.g.
functions), or their internal characteristics: length, structure, and, ultimately, the
algorithm they embody.
This thesis concerns the difficulties that arise when we are trapped between intension and extension, or description and behaviour.
1.2 Objectives
The main objective of this thesis is to answer the following question:
Is there a consistent, logical universe where the same mathematical objects
can be viewed both as intension, and extension?
This research programme is a suggestion of Abramsky [2014]; in his words:
The notions of intensionality and extensionality carry symmetric-sounding
names, but this apparent symmetry is misleading. Extensionality is enshrined in mathematically precise axioms with a clear conceptual meaning.
Intensionality, by contrast, remains elusive. It is a “loose baggy monster”
into which all manner of notions may be stuffed, and a compelling and
coherent general framework for intensional concepts is still to emerge.
Our discussion in the previous section hints at the fact that the choice of what is
extensional, and thereby what is intensional is entirely up to us. There are many
shades on the spectrum between ideal mathematical object and full symbolic description, and the choice of perspective depends on what we wish to study. However, the
design of a mathematical framework or universe where both the extensions and intensions of our choice co-exist harmoniously is the difficult and well-defined problem
with which this thesis is concerned.
The present author believes that such a framework will be instrumental in making
progress in providing answers to the following questions:
1. What is the meaning of intensional programming? Is there a logical interpretation of it?
2. What is the meaning of reflective programming? Is it possible to program
reflectively in a consistent way?
3. What is the meaning of intensional operations in a higher-order setting? How
can we have a non-functional operation without resorting to first-order manipulations of Gödel numbers?
4. How can we add recursion to type theory?
The theory developed in this thesis fully addresses the first two of these questions.
Considered as a toolkit, it is likely to be useful in answering the third one. The fourth
one is left for future work. Nevertheless, we shall now briefly consider all of them.
1.2.1 Intensional Programming
In the realm of functional computation, we can immediately distinguish two paradigms:
• The Extensional Paradigm. It has been exactly 50 years since Christopher
Strachey articulated the notion of functions as first-class citizens in his influential notes on programming [Strachey, 2000]. In a purely functional world, a
higher-order program can use a functional argument extensionally by evaluating
it at a finite number of points: this leads to a form of continuity, which is the
basis of domain theory [Abramsky and Jung, 1994].
• The Intensional Paradigm. This way of computing originated in computability theory [Cutland, 1980, Jones, 1997]: a program can compute with the source
code—or intension—of another program as an argument. It can edit, optimise,
call, or simulate the running of this code.
Whereas the first paradigm led to a successful research programme on the semantics of programming languages, the second is often reduced to symbolic evaluation.
This is one of the reasons for which the intensional paradigm has not reached the sophistication of its extensional counterpart. Yet, the question remains: what can the
intensional paradigm contribute to programming? What is the additional expressivity
or programming power afforded by intensionality?
This is a theme that is often discussed by the Lisp community. Indeed, certain
dialects of Lisp are the closest we have to a true paradigm of intensional programming,
both through Lisp macros [Graham, 1993] as well the construct of quoting [Bawden,
1999]. Either of these features can also be used for metaprogramming, which is the
activity of writing programs that write other programs.
The notion of intensional programming is also central to the work of the partial
evaluation community, which uses a rather extreme form of intensionality that is better known under the name programs-as-data: see Jones et al. [1993], Jones [1996,
1997]. Their work uses insights from computability theory and Gödel numberings to
build metaprogramming-oriented methods for the automatic generation of compilers, all based on the power of the s-m-n theorem of computability theory: see §2.1.4
for more details and references. Even though it is Lisp-inspired and never seriously considers non-functional operations, this is perhaps the closest anybody has come to
what we mean by intensional programming.
But, to this day, no satisfying theoretical account of intensionality in programming
has been produced. The present author believes that this has to do with the fact that
Lisp is an unstructured, untyped language. Hence, it is not amenable to any kind
of analysis, other than maybe that of the untyped λ-calculus [Barendregt, 1984]. See
also Wadler [1987] for an early critique of Scheme in programming methodology.
We shall introduce a new approach to intensional programming that is fundamentally typed in §3.
1.2.2 Reflective Programming
For a very long time, programmers and theoreticians have sought to understand
computational reflection, a concept introduced by Brian Cantwell Smith [1982, 1984].
Computational reflection is an obscure idea that has been used in a range of settings:
see Demers and Malenfant [1995] for a (slightly dated) survey. In broad strokes, it
refers to the ability of a program to access its own code, refer to its own description,
or examine its own internals.
The kind of reflection envisaged by Brian Cantwell Smith was to be implemented
by using reflective towers. The idea is that a program could be understood as running
on the topmost level of an infinity of interpreters. Reflective constructs then accept
code that is to be injected to the interpreter situated one level below, hence having
access to the complete state of the top-level interpreter, all its registers and variables,
and so on—to infinity. This is a rather mysterious construction with unclear semantics
that have been the subject of investigation by Friedman and Wand [1984], Wand and
Friedman [1988], and Danvy and Malmkjaer [1988]. This line of work eventually
concluded with a theorem of Wand [1998], which shows that there are no useful
semantic descriptions of the reflective tower. We will discuss Wand’s result in §1.3.2.
Despite its being poorly understood, computational reflection seems to be a recurring and useful concept. Demystifying it is a pressing problem, as many modern
programming languages have reflective or ‘introspective’ facilities that lead to pernicious bugs. The author suspects that a central theme of this thesis, the notion of
intensional recursion, will prove fundamental in obtaining a logical foundation for
reflection. To quote Polonsky [2011]:
Of course, they [the questions posed by Smullyan] are only a sliver in the
more global puzzle of understanding reflection as a distinct phenomenon.
There is still lacking a general concept, an all-inclusive definition through
which the common features of the constructions in Gödel’s theorem, computability, number theory (systems of arithmetic), and set theory could
be related. Finding such a concept remains a fascinating open problem.
We believe that the notion of intensional fixed points, to which we devote §6, is
precisely an abstract definition of (well-behaved) reflection. Our framework thereby
provides a candidate solution to the problem of Polonsky.
1.2.3 Higher-Order Non-Functional Computation
Over the past years, a new field of theoretical computer science has emerged under
the name of higher-order computability. As a subject, higher-order computability
has its roots in the 1950s, but the work of Longley [2005] has produced a unifying
account that overlaps significantly with the study of logic, λ-calculus, category theory,
realizability theory, and the semantics of programming languages; see the recent book
by Longley and Normann [2015].
In classical computability theory [Rogers, 1987, Cutland, 1980, Odifreddi, 1992]
there was a confluence of ideas [Gandy, 1988] in the 1930s, culminating in the Church-Turing thesis, viz. that all ‘effectively calculable’ functions—whatever that might
mean—are partial recursive. In contrast, higher-order computability suffers from a
Church-Turing anti-thesis: there are multiple notions of computation at higher order,
and some of them can be shown to be strongly incompatible with each other. This situation generates a lot of debate regarding which notion of higher-order computability
is ‘more natural’ than the others: see the discussion in [Longley, 2005, §1].
Perhaps one of the most difficult challenges in higher-order computability is to
clarify what intensional, or non-functional higher-order computation is. This does
not only pertain to computation with the usual effects, like memory, exceptions,
or first-class continuations: such effects are well-understood, either through game
semantics [Abramsky and McCusker, 1996, Abramsky et al., 1998], or more abstractly
through the various theories of effects: see [Moggi, 1989, 1991, Plotkin and Power,
2004, Hyland and Power, 2007, Levy, 2003]. Instead, we consider more general non-functional computation, where the non-functional aspect arises from the ability of a
device to read the description or code of its higher-order argument.
A known example that is impossible to accommodate in an extensional setting is
the modulus of continuity functional. This is a type 3 functional,
Φ : ((N → N) → N) → N
Intuitively, when given a type 2 functional F : (N → N) → N, Φ(F ) returns an upper
bound n on the range of values F would examine of any argument given to it, so that
if f, g : N → N agree on those values, i.e.
∀i ∈ {0, . . . , n}. f (i) = g(i)
then F f = F g. It can be shown that one cannot define a modulus of continuity functional that is extensional. To compute Φ(F ) it is necessary to examine the internals
of F : we simulate its run on some function, and see what the maximum argument it
examines is. This can be computed in a language with side-effects: we call F (f♠ ),
where f♠ uses side-effects to record the maximum value at which it is called, and Φ(F )
then returns this recorded value. We can see that this is highly dependent on the
way F is implemented, and thus not extensional: see the blog post of Andrej Bauer
[2011]. The question we want to ask in this setting is this: can we make the modulus
of continuity computable, perhaps by admitting it at certain intensional types only,
but without generally violating either extensionality or freedom from side effects?
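The side-effecting recipe just sketched is easy to make concrete. The following Python snippet is our own illustration, not part of the thesis; the names modulus and instrumented are ours. It computes a bound for a given F and f by running F on an instrumented copy of f that records the largest point at which it is queried.

def modulus(F, f):
    # Return an upper bound on the arguments at which F inspects f.
    # It works by actually running F, i.e. by exploiting side effects;
    # it is not an extensional operation on F.
    largest = -1
    def instrumented(n):
        nonlocal largest
        largest = max(largest, n)
        return f(n)
    F(instrumented)
    return largest

# Example: this F examines its argument at the points 0, 1 and 10.
F = lambda g: g(0) + g(1) * g(10)
print(modulus(F, lambda n: n + 1))   # prints 10
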
The main ways to understand such computation are either through a ‘highly
intensional,’ essentially untyped, first-order formulation, which suffers from a lack of
logical structure and useful properties; or through computational effects, which is even
less clear in terms of higher-order computability. One way out of this impasse has been
suggested by Longley [Longley, 2005, §6], and it employs realizability. Longley argues
that posing the question of whether a non-functional operation is definable amounts
to writing down a logical formula that specifies it, and then examining whether that
formula is realizable: see [Longley, 1999a] and the note [Longley, 1999b].
Whereas this is an informative approach, it does not smoothly lead us toward
the design of programming languages that harness these non-functional powers. We
propose, instead, the following research programme: we should first use the type-theoretic techniques proposed in this thesis to add intensionality to a λ-calculus.
Then, we shall be able to vary the intensional operations available, and study the
expressivity of the resulting systems.
1.2.4 Recursion in Type Theory
General recursive definitions are prohibited in most type theories, including Martin-Löf type theory (MLTT) [Martin-Löf, 1998, 1984, Nordström et al., 1990]. This is
not an arbitrary design choice: MLTT features dependent types, i.e. types depend on
terms. Thus, its terms ought to be strongly normalising, so that the types themselves
are.
In that way, MLTT is logically well-behaved, but not ideal for programming. In
fact, writing down the definitions of many ordinary programs becomes a difficult
exercise. The intuitive reason is that every program we want to write must somehow
contain its own proof of termination. A lot of work has gone into regaining the lost
expressivity, from the results of Constable and Smith [1993] on partial types, to the
method of Bove and Capretta [2005], and all the way to the coinductive types of
Capretta [2005] and the delay monad. There is also recent work on the partiality
monad and higher inductive types: see Altenkirch et al. [2017].
In any case, working within MLTT has a fundamental disadvantage: an old theorem of Blum [1967] ensures that, no matter what kind of ‘blow-up factor’ we choose,
we will be able to write a program (in a partial, Turing-complete language) whose
shortest equivalent in a total programming language is larger by the given factor: see
the blog post of Harper [2014]. Thus, ultimately, there is no working around this
theorem: at some point we might have to add general recursion to type theory.
In the setting of simple types adding recursion is relatively easy. It was first done
by Scott [1993] and Plotkin [1977], who defined the prototypical fixed point language
PCF, a simply-typed λ-calculus of booleans, integers, and a fixed point combinator
Y : (A → A) → A. There is a long and rich literature on the models of PCF: the
reader may consult any of [Plotkin, 1977, Gunter, 1992, Mitchell, 1996, Abramsky
et al., 1996, Hyland and Ong, 2000, Streicher, 2006, Longley, 1995].
Merely throwing Y into the mixture is not advisable in more expressive type
theories, e.g. in System F or MLTT. If sum or Σ types are available (or are definable,
e.g. in System F), a theorem of Huwig and Poigné [1990] guarantees that the resulting
theory will be inconsistent: even the existence of the coproduct 1 + 1 is enough to
make a cartesian closed category with fixpoints degenerate to a preorder.
A short abstract by Plotkin [1993] proposes that we forget the cartesian setting,
and work in either a (second-order) intuitionistic linear type theory, or a relevant type
theory. The full manuscript for that abstract never appeared, but the linear type
system to which it refers (but without recursion) was studied in detail by Barber
[1996].
Further efforts went into defining a Linear PCF, based on these intuitions. Invariably, the type of Y or the rule for recursion has one of the following forms:
!(!A ⊸ A) ⊸ A                       Braüner [1995, 1997]

!(!A ⊸ A)
─────────                            Maraist et al. [1995], Bierman [2000]
    A

!(!A ⊸ A)
─────────                            Maraist et al. [1995]
   !A
We will shortly see that this pattern is not accidental. We will develop a type-theoretic
approach to intensionality that will admit a slightly unusual kind of recursion, namely
intensional recursion. Its type-theoretic interpretation will be strongly reminiscent
of the pattern of these rules, but linearity will prove to be a red herring.
1.3 Quoting is Impossible
The first impression that one usually acquires regarding intensional phenomena is that
they can only spell trouble. After all, encoding programs or logical formulae as data
in the same language is the typical function of Gödel numbering. Such constructs
quickly lead to negative theorems that pinpoint the inherent limitations of logical
systems, e.g. Gödel’s First Incompleteness Theorem [Smullyan, 1992].
Our aim in this thesis is to tell a positive story, but we shall first recount the
negative tales of the past. We shall not engage in a foundational debate regarding
Gödel’s theorems and related results here, but a more in-depth discussion can be
found in §6, in the book of Smullyan [1992], and in [Girard, 2011, §2]. Instead, we
will focus on the negative repercussions on programming.
1.3.1 Tale 1: Quoting is Not Definable
Let Λ be the set of untyped λ-terms [Barendregt, 1984], and let
⌜·⌝ : Λ → Λ
be a map on λ-terms. The intention is that, for each λ-term M, the term ⌜M⌝
represents the program M as a datum in the λ-calculus. We call ⌜M⌝ the quote
of M. The properties of such (external) quoting functions ⌜·⌝ : Λ → Λ have been
systematically studied by Polonsky [2011], who also broadly surveys the sporadic
literature on such ‘meta-encodings’ of the λ-calculus into itself.
In standard accounts, e.g. [Barendregt, 1984], the quote function is a Gödel numbering, as known from the literature on Gödel’s theorems. To each term M one assigns
a number #M, and defines ⌜M⌝ to be the Church numeral for #M.
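For illustration, here is one concrete way such a numbering can be set up, sketched in Python over λ-terms in de Bruijn notation; the particular coding (tags 0, 1, 2 and a Cantor pairing function) is our own choice, not the one used in the accounts cited above.

def pair(m, n):
    # Cantor pairing: an injection from pairs of naturals into the naturals.
    return (m + n) * (m + n + 1) // 2 + n

# Terms: ('var', i) with de Bruijn index i, ('lam', body), ('app', s, t).
def godel(term):
    # Assign a distinct natural number #M to each term M.
    tag = term[0]
    if tag == 'var':
        return 3 * term[1]
    if tag == 'lam':
        return 3 * godel(term[1]) + 1
    if tag == 'app':
        return 3 * pair(godel(term[1]), godel(term[2])) + 2
    raise ValueError(term)

identity = ('lam', ('var', 0))              # the term I = λx. x
print(godel(identity))                       # 1
print(godel(('app', identity, identity)))    # 14
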
A fundamental question then arises: is quoting internally definable? The answer
is, of course, negative, as internal definability of quoting would lead to inconsistency.
The following argument is due to Barendregt [1991]: suppose there is a term Q ∈ Λ
such that
Q M =β ⌜M⌝
for all M ∈ Λ. It is a fact that Church numerals are in normal form. Hence, both ⌜I⌝ and ⌜I I⌝ are
in normal form, where I =def λx. x is the identity combinator. We have that
⌜I⌝ =β Q I =β Q (I I) =β ⌜I I⌝
This amounts to equating two distinct normal forms. But it is known that the λ-calculus is confluent, hence consistent! It follows that such a term Q cannot exist.
It is very important to notice that the crux of the argument essentially rests on
the confusion between the extension M, the intension ⌜M⌝, and the ability to pass
from extension to intension. To obtain a consistent account of intensionality we ought
to forbid this possibility. However, let us seize the opportunity to remark that the
opposite direction is not only attainable, but a result of historical importance for
computability theory [Kleene, 1936, 1981]:
Theorem 1 (Kleene [1936]). There exists a term E ∈ Λ⁰ such that
E ⌜M⌝ ↠β M
for all M ∈ Λ⁰.
This is essentially the same result as the one obtained by Turing [1937], but for the
λ-calculus instead of the Turing Machine. See [Barendregt, 1984, §8.1.6] for a proof.
1.3.2 Tale 2: Quoting Collapses Observational Equivalence
Let us suppose that the above result does not deter us in our plans to add reflection
to the λ-calculus. All else failing, we can do so by postulating constants eval and
fexpr, along with the following reductions:
eval ⌜M⌝ −→ M
(fexpr V) M −→ V ⌜M⌝
where V is some notion of value (e.g. a weak head normal form). Then Q would be
definable as fexpr (λx. x).
One of the most interesting and well-studied notions in λ-calculus is that of observational equivalence. Two terms M, N are observationally equivalent, written M ≅ N,
if they are interchangeable in all possible contexts, without any ‘observable changes.’
The notion of observable change is up for debate, and the exploration of different options leads to interesting variations—see e.g. Bloom and Riecke [1989] and Abramsky
[1990]. The usual choice is that we can observe normal forms at ground type (i.e.
different numerals and boolean values), or—equivalently—termination of evaluation
at ground type. If M =β N, then certainly M ≅ N, but the converse is not normally
true: terms can be observationally equivalent, but the theory of equality (which is
only computably enumerable) is not strong enough to show that.
In this context, Wand [1998] showed the following theorem:
Theorem 2 (Wand). M ≅ N if and only if M ≡α N.
Wand’s definition of observation is termination at a weak head normal form, and his
quoting function ⌜·⌝ is a Scott-Mogensen encoding: see [Polonsky, 2011, Mogensen,
1992]. However, the result is strong and general, and puts the last nail in the coffin: there can be no semantic study of functional programming languages that are
so strongly reflective that they can internally define quoting. Such a language can
internally distinguish any two terms in the language, as long as they are not equal
up to renaming. This result concludes the long line of research on Smith’s reflective
towers that we mentioned in §1.2.2.
1.4 Intensionality and Types: Modality-as-Intension
The two negative tales we have just told are, fortunately, not the final word. If we take
a close look at the existing literature, e.g. at the chapters of [Barendregt, 1984] that
concern computability and diagonal arguments, we may see that some intensional
operations are indeed definable. For example, there are λ-terms gnum, app and E
such that
gnum ⌜M⌝ =β ⌜⌜M⌝⌝
app ⌜M⌝ ⌜N⌝ =β ⌜M N⌝
E ⌜M⌝ =β M
We mentioned E in the previous section, but the other two might come as a surprise:
they are indeed pure operations on syntax. The pattern that seems to be at work
here is that the only true restriction is quoting, and that everything else is admissible.
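These three operations have straightforward analogues when programs are represented as source text. The Python sketch below is only an analogy of ours (strings of source code in place of Church numerals, Python's eval in place of Kleene's interpreter), but it shows that gnum, app and E act on quotations alone; nowhere does it quote an arbitrary value.

# A 'quoted' program is a string of Python source for an expression.
quoted_I = "lambda x: x"        # the quote of I, written out by hand:
                                # quoting itself is not performed by any program here.

def gnum(code):
    # From code for M, produce code for the quote of M (the analogue of axiom 4).
    return repr(code)

def app(code_f, code_x):
    # From code for M and code for N, produce code for M N (axiom K).
    return "(" + code_f + ")(" + code_x + ")"

def E(code):
    # Evaluate code: the interpreter (axiom T).
    return eval(code)

print(E(app(quoted_I, "41 + 1")))      # 42
print(E(gnum(quoted_I)) == quoted_I)   # True: running the quote of a quote yields the quote
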
The standard way to enforce restriction in computation is to use types. Indeed,
the main contribution of this thesis is a detailed investigation and analysis of the
following idea:
Intensionality adheres to a typing discipline.
This is an old but not so well-known observation. To the best of our knowledge, it
was first formulated by Neil Jones (personal communication), and led to his work on underbar types: see [Jones
et al., 1993, §16.2].
1.4.1 Modality-as-Intension
Given a type A, let us write Code(A) for the type of code of type A. We must certainly
have that, if M : A, then ⌜M⌝ : Code(A). Thus, if gnum ⌜M⌝ =β ⌜⌜M⌝⌝, then it
must have the type
gnum : Code(A) → Code(Code(A))
This looks familiar! If we drop all pretense and write □A for Code(A), we obtain the
following types:
gnum : □A → □□A
app : □(A → B) → □A → □B
E : □A → A
Surprisingly, there is an underlying Curry-Howard correspondence: the types of these
operations correspond to the axioms 4, K and T of the modal logic S4. This connection
to modal logic was drawn by Davies and Pfenning [1996, 2001], who used it to embed
two-level λ-calculi [Nielson and Nielson, 1992] in a modal λ-calculus, and to perform
binding-time analysis.
We will thus introduce and study the modality-as-intension interpretation. This
is an idea that pervades [Davies and Pfenning, 1996, 2001], and is even mentioned in
name in the conclusion of [Pfenning and Davies, 2001]. To quote:
One particularly fruitful interpretation of □A is as the intensional type for
expressions denoting elements of type A. Embedding types of this form
in a programming language means that we can compute with expressions
as well as values. The term box M quotes the expression M , and the
construct let box u ⇐ M in N binds u to the expression computed by
M and then computes the value of N . The restrictions placed on the
introduction rule for A mean that a term box M can only refer to other
expression variables u but not value variables x. This is consistent with
the intensional interpretation of A, since we may not know an expression
which denotes a given value and therefore cannot permit an arbitrary value
as an expression.
The above excerpt refers to the modal type system introduced and used by Davies
and Pfenning, which is a dual-context λ-calculus, with judgements of the form
∆ ; Γ ⊢ M : A
where ∆ and Γ are two ordinary but disjoint contexts. The variables that occur in ∆
are to be thought of as modal variables, or variables that carry intensions or codes,
whereas the variables in Γ are ordinary (intuitionistic) variables. In this system, the
evaluator E : □A → A exists as a variable rule, i.e. the ability to use a code variable
as if it were a value:
∆, u:A, ∆′ ; Γ ⊢ u : A
The canonical terms of modal type are of the form box M , and they largely mimic
the Gödel numbering ⌜M⌝. The shape of the introduction rule guarantees that all
the variables that occur in the ‘boxed term’ M are code variables, and not value
variables; we write · for the empty context:
∆ ; · ⊢ M : A
──────────────────────
∆ ; Γ ⊢ box M : □A
And, as mentioned above, there is also a let box u ⇐ (−) in (−) construct that allows
for substitution of quoted terms for code variables:
∆ ; Γ ⊢ M : □A        ∆, u:A ; Γ ⊢ N : C
──────────────────────────────────────────
∆ ; Γ ⊢ let box u ⇐ M in N : C
along with the reduction
let box u ⇐ box M in N −→ N [M/u]
The let box u ⇐ (−) in (−) construct secretly requires □(A × B) ≅ □A × □B, which
is just enough to define the app : □(A → B) → □A → □B constant.
Finally, the 4 axiom is inherent in the system:
u:A ; · ⊢ u : A
u:A ; · ⊢ box u : □A
· ; x : □A ⊢ x : □A
u:A ; x : □A ⊢ box box u : □□A
· ; x : □A ⊢ let box u ⇐ x in box box u : □□A
The author has previously studied such dual-context systems in [Kavvos, 2017b].
1.4.2 A Puzzle: Intensional Recursion
We touched upon the subject of recursion in type theory in §1.2.4. To obtain recursion
in simple types, we have to add a fixed point combinator Y : (A → A) → A, and
obtain PCF. In contrast, recursion is definable in the untyped setting. This is the
conclusion of the First Recursion Theorem [Barendregt, 1984, §2.1, §6.1]:
Theorem 3 (First Recursion Theorem). ∀f ∈ Λ. ∃u ∈ Λ. u = f u.
Proof. Let
Y =def λf. (λx. f (x x)) (λx. f (x x))
Then Y f =β f (Y f).
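The same definition can be replayed in any untyped higher-order language. Under call-by-value the term above loops, so the usual remedy is to η-expand the self-application; the Python sketch below is our own illustration and uses this strict variant (often written Z) to demonstrate the extensional recursion the FRT provides.

# Z is the call-by-value variant of Y: the self-application x(x) is
# delayed behind an extra lambda so that it is only unfolded on demand.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Extensional recursion: the body only calls rec on values, never on code.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))   # 120
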
The FRT corresponds to extensional recursion, which is what most programming
languages support. When defining a recursive function definition, a programmer
may make a finite number of calls to the definiendum itself, in the same vein as
our description of the functional-extensional paradigm in §1.2.1. Operationally, this
leads a function to examine its own values at a finite set of points at which it has—
hopefully—already been defined.
However, as Abramsky [2014] notes, in the intensional paradigm, which we also
described in §1.2.1, a stronger kind of recursion is attainable. Instead of merely examining the result of a finite number of recursive calls, the definiendum can recursively
have access to a full copy of its own source code. This is embodied in the Second
Recursion Theorem (SRT), which was proved by Kleene [1938]. Here is a version of
the SRT in the untyped λ-calculus [Barendregt, 1984, §6.5]:
Theorem 4 (Second Recursion Theorem). ∀f ∈ Λ. ∃u ∈ Λ. u = f ⌜u⌝.
Proof. Given f ∈ Λ, set u =def W ⌜W⌝, where
W =def λx. f (app x (gnum x))
and app and gnum are as above. Then
u ≡ W ⌜W⌝
  =β f (app ⌜W⌝ (gnum ⌜W⌝))
  =β f (app ⌜W⌝ ⌜⌜W⌝⌝)
  =β f ⌜W ⌜W⌝⌝
  ≡ f ⌜u⌝
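The diagonal construction transfers almost verbatim to programs-as-text. The Python sketch below is an illustration we supply, mirroring the term W above rather than reproducing it; it assumes that the functional in question is bound to the name f in the scope where the produced code is evaluated.

def srt(f):
    # Build source u_src with eval(u_src) == f(u_src): the program u
    # computes f applied to its own source code (cf. u = W applied to the quote of W).
    W = "lambda s: f('(' + s + ')(' + repr(s) + ')')"
    return "(" + W + ")(" + repr(W) + ")"

# Example: this f only looks at the code it is handed.
f = lambda code: "my own source has %d characters" % len(code)
u_src = srt(f)
print(eval(u_src) == f(u_src))   # True
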
It is not hard to see that, using Kleene’s interpreter for the λ-calculus (Theorem 1),
the SRT implies the FRT. It is not at all evident whether the converse holds. This
is because the FRT is a theorem that concerns higher-order computation, whereas
the SRT is very much grounded on first-order, diagonal constructions. The exact
relationship between these two theorems is the subject of §2.
The point that we wish to make is that, in the presence of intensional operations,
the SRT affords us with a much stronger kind of recursion. In fact, it allows for
exactly the sort of computational reflection that we discussed in §1.2.2.
Perhaps the greatest surprise to be found in this thesis is that the SRT admits a
type. Indeed, suppose that u : A. Then certainly ⌜u⌝ : □A, and it is forced that
f : □A → A
The Curry-Howard reading of the SRT is then the following: for every f : □A → A,
there exists a u : A such that u = f ⌜u⌝. This corresponds to Löb’s rule from
provability logic [Boolos, 1994]:
□A → A
───────
A
Löb’s rule is equivalent to adding the Gödel-Löb axiom,
□(□A → A) → □A
to a modal logic. One of the punchlines of this thesis will be that
The type of the Second Recursion Theorem is the Gödel-Löb axiom.
Thus, to obtain reflective features, all we have to do is add a version of Löb’s rule to
the Davies-Pfenning modal λ-calculus.
1.5 A Road Map
The rest of this thesis consists of a thorough discussion of the above observations,
and the development of appropriate syntax and categorical semantics that capture
the aforementioned intuitions.
In §2, we dive back to find the origins of Kleene’s two Recursion Theorems. As
it happens, these correspond to the two ways in which we can define a mathematical
object by recursion: one can either use diagonalisation, or some kind of infinite (or
transfinite) iteration. The origin of both of these methods is lost in the mists of
time, but both find their first documented expressions in the work of Kleene [1952,
1938]. §2 states and proves these two theorems, and engages in a thorough discussion
regarding their similarities and differences. Whereas the SRT works by diagonalising
and is applicable to a first-order setting, the FRT requires a higher-order perspective.
In some cases both theorems are applicable, but one is stronger than the other: the
FRT always produces least fixed points, but this is not always the case with the SRT.
Nevertheless, an old result by Hartley Rogers Jr. bridges this gap. We also find the
opportunity to engage in some speculation regarding the possible applications of the
reflective features provided by the SRT.
In §3, we revisit the system of Pfenning and Davies [2001] that we described in
§1.4. After noting that it does not feature any actual intensional operations, we add
some to the system. We also take the hint from §1.4.2, and add a form of Löb’s
rule as well. The result is a programming language that is (a) intensional, exactly in
the sense described in §1.2.1, and (b) reflective, in the sense described in §1.2.2. A
confluence proof ensures that the resulting system, which we call Intensional PCF, is
consistent. It is evident that the central device by which everything comes together
is the modal types, which separate the two worlds of intension and extension, by way
of containing the former under the modality.
Following that, in §4 we begin to seek categorical semantics for intensionality. We
argue that 1-category theory is ill-suited for modelling intensionality. We are thus led
to consider the P-categories of Čubrić, Dybjer and Scott [Čubrić et al., 1998], which
are categories only up to a partial equivalence relation (PER). In this setting, we
introduce a new P-categorical construct, that of exposures. Exposures are very nearly
functors, except that they do not preserve the PERs of the P-category, but reflect
them instead. Inspired by the categorical semantics of S4 [Bierman and de Paiva,
2000], we begin to develop the theory of exposures.
To substantiate the discussion, §5 builds on the previous chapter by presenting
three concrete examples of exposures. The first example shows that, when built on
an appropriate P-category, an exposure is really an abstraction of the notion of a
well-behaved intensional device, such as a Gödel numbering. The second example is
based on realizability theory; it is also the motivating example for exposures, and is
later used to show that Kleene’s SRT is a form of intensional recursion. The final
example illustrates that intensionality and exposures may occur outside logic and
computability, and is related to basic homological algebra.
Then, in §6, we put on our P-categorical spectacles and examine intensional recursion. We find that it has a simple formulation in terms of exposures. We then
show that classic theorems of logic that involve intensional recursion, such as Gödel’s
First Incompleteness Theorem, Tarski’s Undefinability Theorem, and Rice’s Theorem,
acquire concise, clear formulations in the unifying framework of exposures. Moreover,
our theory ensures that each logical device or assumption involved in their proofs can
be expressed in the same algebraic manner. The chapter concludes by using exposures
to generalise a famous categorical fixed-point theorem of Lawvere [1969, 2006], which
roughly corresponds to a restricted version of the FRT. The resulting Intensional
Recursion Theorem is a categorical analogue of Kleene’s SRT.
At last, in §7 and §8, we bring Intensional PCF (§3) and exposures (§4) together,
by using the second to provide a semantics for the first. Intensional PCF is too
expressive for this endeavour, so we discuss a restriction of it, which we call Intensional
PCF v2.0. We find that a sound semantics for it consists of a cartesian closed Pcategory equipped with a product-preserving, idempotent comonadic exposure. We
then discuss in which cases we may lift some of these new restrictions of Intensional
PCF v2.0. We conclude by showing that Asm(K1 ) is a model of Intensional PCF
v2.0, with the intensional fixed points being interpreted by Kleene’s SRT.
Finally, in §9, we conclude our investigation by evaluating our contribution, and
delineating a number of future directions towards which this thesis seems to point.
Figure 1.1: Chapter dependencies (a diagram relating §1 Introduction, §3 iPCF, §4 Categories and Intensionality, §5 Examples, §6 Intensional Recursion, §7 iPCF Semantics I, and §8 iPCF Semantics II).
The figure presents a rough outline of the way the chapters of this thesis depend
on each other. §2 is not included in the diagram, as it is functionally independent of the
developments in the thesis. Nevertheless, we recommend that the reader consult it in
order to understand the origins and importance of intensional recursion. The sequence
§1, §4, §6 may be read independently, hence constituting the basics of our ‘theory of
intensionality.’ Finally, the diagram does not capture two small further dependencies:
that of §6.4 on the examples developed in §5, and that of the construction of IFPs in
Asm(K1 ) in §8.6 on the definition of that P-category in §5.2.
Chapter 2
Kleene’s Two Kinds of Recursion
(A preprint of this chapter is available as arXiv:1602.06220.)
It is well known that there are two ways to define a function by recursion.
One way is through a diagonal construction. This method owes its popularity to
Cantor, and forms the backbone of a large number of classic diagonalisation theorems.
Diagonal constructions are a very concrete, syntactic, and computational method of
obtaining fixed points, which we in turn use to obtain recursion.
Another way is through a least fixed point that is obtained as a result of some
kind of infinite (or even transfinite) iteration. This kind of construction is more
abstract and mathematical in style. It is a very common trope in the study of denotational semantics of programming languages, particularly those based on domain
theory [Abramsky and Jung, 1994]. The origins of this lattice-theoretic argument are
lost in the mists of time, but see Lassez et al. [1982] for a historical exposition.
Both of these ways were famously used by Stephen C. Kleene [Kleene, 1981] to define functions by recursion in computability theory. The least fixed point construction
is the basis of Kleene’s First Recursion Theorem (FRT) [Kleene, 1952], whereas the
diagonal construction is found at the heart of his Second Recursion Theorem (SRT)
[Kleene, 1938].
Nevertheless, it is not so well known that there is a slight mismatch between these
two theorems. This is mainly due to the context in which they apply: the FRT
is essentially a theorem about computation at higher types, whereas the SRT is a
first-order theorem of a syntactic nature.
Modulo the above mismatch, it so happens that the SRT is more general than
the FRT. Indeed, the SRT allows for a computationally ‘stronger’ kind of recursion—
namely intensional recursion—whereas the FRT has a more extensional flavour. However, there is a close yet slightly mysterious relationship between these two theorems,
the particulars of which we shall examine.
The SRT has numerous applications in computability, but it is deafeningly absent
in computer science. Abramsky [2014] has suggested further investigation, likening
the SRT to “the dog that didn’t bark” in the Sherlock Holmes story, and has also
discussed several related issues.
In the sequel we shall investigate both of these types of recursion, as well as
their intricate relationship. In §2.1 we discuss the notion of intensionality and its
relationship to computability; we state and prove the SRT, and discuss the extra
generality afforded by what we call intensional recursion; and we sketch a number of
speculative applications of intensional recursion. Subsequently, in §2.2, we move to
the discussion of higher types: we look at two slightly different notions of computation
at higher types, discuss their interaction, and prove the FRT for each one. Finally,
in §2.3, we investigate when each of the recursion theorems applies, and when the
resulting recursive constructions match each other.
2.1 Intensionality and Computability
In loose philosophical terms, to be intensional is to contain not only reference, but
also sense. The distinction between these two notions is due to Frege, see e.g. Fitting
[2015]. An intensional sign denotes an external referent, yet inherently connotes more
information—its elusive sense. The classic example is that of the planet Venus, which
may be referred to as either the morning star, or the evening star.
Most mainstream mathematics is rather extensional : we normally reason about
underlying, ‘ideal’ mathematical objects, and not their concrete descriptions; the
latter are, in a way, only there for our referential convenience. In most presentations
of set theory, for example, the axiom of extensionality equates any two sets whose
members are the same. Thus, in the mathematical sense, to be intensional is to be
finer than some presupposed ‘extensional equality.’
It is not difficult to argue that this setting is most inadequate for Computer Science. On a very rough level, extensions correspond to what may be computed, whereas
intensions correspond to the programs and processes that carry out the computation,
see e.g. Moschovakis [1993]. Once more, there is a distinction to be made: programs may be understood by the ideal objects that they refer to (e.g. functions), or
their internal characteristics: length, structure, and—ultimately—the algorithm they
express.
The former aspect—viz. the study of ideal objects behind programs—is the domain of Computability Theory (previously known as Recursion Theory), where the
object of study is ‘effectively computable’ functions over the natural numbers. Computability Theory began with the ‘confluence of ideas’ [Gandy, 1988] of multiple
researchers in the late 1930s in their attempt to characterise ‘automatic’ or ‘mechanical’ calculability. Remarkably, all roads led to Rome: different notions were shown
to coincide, leading to the identification of the class of partial recursive functions.
Subsequent to this fortuitous development, things took a decisive turn, as further
developments mostly concerned the study of the incomputable. (Harvey Friedman [1998] once made the following tongue-in-cheek recommendation to the Foundations of Mathematics (FOM) mailing list: “Why not rename recursion theory as: noncomputability theory? Maybe that would make everybody happy.”)
So much for the extensional side. What about intensions? Here, we encounter a
diverse ecosystem. On one side, fixing the Turing Machine as one’s model of computation leads to Complexity Theory, which attempts to classify algorithmic problems
with a view to identifying the exact resources that one needs to solve a problem.
This aspect is largely reliant on more combinatorial reasoning. Alternatively, adopting the λ-calculus as a point of reference leads to the study of programming languages,
which includes—amongst other things—type systems, semantics, program analysis,
and program logics. Here, the emphasis is on logical aspects. Finally, the subject of
concurrent and interactive computation unfolds as a bewildering, obscure, and diverse
landscape: see e.g. Abramsky [2006].
2.1.1 The Space of All Programming Languages
It is, however, a curious state of affairs that standard computability theory—as presented, for example, in the classic textbooks of Rogers [1987], Cutland [1980] and
Odifreddi [1992]—begins with a small set of abstract results that concern Gödel numberings of partial recursive functions. Even though they are the central pillar of an
extensional theory, these results have a decidedly intensional flavour.
These results begin by putting some very mild conditions on Gödel numberings.
Indeed, if one thinks of a Gödel numbering as a ‘programming language,’ then these
conditions comprise the absolute minimum that intuitively needs to hold if that programming language is ‘reasonable.’ A clear presentation of this part of the theory can
be found in the classic textbook of Odifreddi [1992, §II.5]. A more modern account
that is informed by programming language theory is that of Neil Jones [1997].
The story begins to unfold as soon as we encode programs-as-data, by assigning
partial recursive functions to natural numbers. Following tradition, we write φ for an
arbitrary numbering, and φp : N * N for the partial recursive function indexed by
p ∈ N under the numbering φ.
From a programming perspective, we may consider p to be a ‘program,’ and
φ : N → (N * N) a semantic function that maps programs to the functions they
compute. In practice, p ∈ N is usually a Gödel number that encodes the syntax of a
Turing Machine, or the instructions for a register machine, or even a λ-term.
Let us write e1 ' e2 for Kleene equality, viz. to mean that expressions e1 and e2
are both undefined, or both defined and equal in value. Of the numbering φ we shall
require the following conditions:
Turing-Completeness That for each partial recursive function f there exists a p
such that f = φp .
Universal Function That there is a program U such that φU (x, y) ' φx (y) for all
x, y ∈ N.
S-m-n That there is a total recursive function S such that φS(p,x) (y) ' φp (x, y) for
all x, y ∈ N.
That the first and second conditions are achievable was popularised by Turing
[1937], and the third is a result of Kleene [1938]. The first condition corresponds, by
the Church-Turing thesis, to the fact that our programming language is as expressive
as possible (in extensional terms). The second corresponds to the ability to write a
self-interpreter, under suitable coding. And the third allows us to computably ‘fix’
an argument into the source code of a two-argument program, i.e. that substitution
is computable (c.f. one-step β-reduction in the λ-calculus).
In logical terms, we may regard these as sanity conditions for Gödel numberings of
the partial recursive functions. The ‘sane’ numberings that satisfy them are variously
known as acceptable numberings [Rogers, 1987, Ex. 2.10], acceptable programming
systems [Machtey and Young, 1978, §3.1.1], or systems of indices [Odifreddi, 1992,
§II.5.1].
It was first shown by Rogers [1958] that acceptable numberings have very pleasant
properties.
Definition 1. For numberings φ and ψ, define

φ ≤R ψ   iff   ∃t : N * N total recursive. ∀p ∈ N. φp = ψt(p)
We then say that φ Rogers-reduces to ψ, and ≤R is a preorder.
Hence, thinking of φ and ψ as different programming languages, φ ≤R ψ just if every φ-program may be effectively translated—or compiled—to a ψ-program. Then ≡R := ≤R ∩ ≥R is an equivalence relation. Quotienting by it yields the Rogers semilattice under the extension of ≤R to the equivalence classes (see op. cit.). More specifically,
Theorem 5. The following are equivalent:
1. ψ is an acceptable numbering, as above.
2. ψ is a member of the unique top element of the Rogers semilattice.
3. ψ is an enumeration for which there is a universal function, and a total recursive
c : N * N such that
ψc(i,j) = ψi ◦ ψj
The first two equivalences are due to Rogers, see op. cit, and Odifreddi [1992,
§II.5.3], and the third is due to Machtey et al. [1978, Theorem 3.2]. Note that
one should exercise great caution with these equivalences, for their proofs liberally
invoke pairing tricks, loops, iterations, and other programming constructs. Finally,
the equivalence between (1) and (2) above was strengthened by Rogers [1958] to
Corollary 1 (Rogers’ Isomorphism Theorem). Any two acceptable enumerations are
recursively isomorphic.
The possible numberings of the partial recursive functions, whether acceptable
or pathological, as well as the various forms of SRTs that may or may not hold of
them, have been investigated by the school of John Case and his students: David
Riccardi [1980, 1981], James Royer [1987] and, more recently, Samuel Moelius III
[Case and Moelius, 2007, 2009a,b, Moelius, 2009]. For example, this community has
shown that there are numberings where certain known theorems, such as the s-m-n theorem or Kleene’s second recursion theorem, are not ‘effective,’ or simply do
not hold. Earlier work on this front seems to have concentrated on enumerations of
subrecursive classes of functions, see e.g. Royer and Case [1994], whereas the later
work of Case and his students concentrated on the study of what they called control
structures, i.e. constructs which provide “a means of forming a composite program
from given constituent programs and/or data.”
2.1.2 The Second Recursion Theorem
The central intensional result of elementary computability theory is Kleene’s Second
Recursion Theorem (SRT), also first shown in [Kleene, 1938].
Theorem 6 (Kleene). For any partial recursive f : N × N * N, there exists e ∈ N
such that
∀y ∈ N. φe (y) ' f (e, y)
Proof. Consider the function defined by
δf : N × N * N
δf (y, x) ' f (S(y, y), x)
Since f is partial recursive, simple arguments concerning the computability of composition and substitution yield that δf is partial recursive. Hence δf = φp for some p ∈ N. Consider e := S(p, p); then
φe (y) ' φS(p,p) (y) ' φp (p, y) ' δf (p, y) ' f (S(p, p), y) ' f (e, y)
The second Kleene equality follows by the s-m-n theorem, and the others are simply
by definition or construction.
In the above theorem, consider f (x, y) as a function that treats its first argument
as code, and its second argument as data. The equation φe (y) ' f (e, y) implies that
e is a program which, when run on some data, will behave like f with e being its first
argument. In slogan form,
We can always write a program in terms of its own source.
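An everyday incarnation of this slogan is the quine, a program that prints its own source. The construction below (a standard Haskell example, included here only as an illustration) uses the string s both as program text and as quoted data, which is exactly the diagonal trick S(p, p) of the proof.

    -- A classic Haskell quine: compiled and run, it prints its own source code.
    main = putStrLn (s ++ show s) where s = "main = putStrLn (s ++ show s) where s = "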
Indeed, the trick has become standard: f (e, y) is a ‘blueprint’ that specifies what
to do with its own code e, and we take its fixed point (c.f. the functionals in the
untyped λ-calculus to which we apply the Y combinator). Moreover, in much the
same way that the Y combinator is itself a term of the untyped λ-calculus, it so
happens that the construction used in the proof of the SRT is itself computable:
Theorem 7 (Constructive Kleene SRT). There is a total recursive h : N * N such
that, for every p ∈ N,
φh(p) (y) ' φp (h(p), y)
That is: in the original statement, we may calculate a code for δf from a code for f ,
and hence obtain e in an effective manner.
The Kleene SRT lies at the heart of many proofs in computability, especially those
involving diagonalisation, or results pertaining to fixed points, self-reference and the
like. A smattering of more theoretical applications of this ‘amazing’ theorem has been
compiled by Moschovakis [2010].
The SRT is perhaps more familiar in the form popularised by Hartley Rogers Jr.
in his book [Rogers, 1987]. We state a slightly generalised version, and prove it from
Kleene’s version. We write e ↓ to mean that the expression e has a defined value.
Theorem 8 (Rogers SRT). For partial recursive f : N * N, there is an e ∈ N such
that, if f (e) ↓, then
φe = φf (e)
Proof. [Jones, 1997, Lemma 14.3.7] Define
df (x, y) ' φU (f (x), y)
Again, by standard arguments, df is partial recursive. By Kleene’s SRT, there is an
e ∈ N such that, for all y ∈ N,
φe (y) ' df (e, y) ' φU (f (e), y) ' φf (e) (y)
which is to say φe = φf (e) .
This result is equivalent to the previous formulation [Rogers, 1987, Ex. 11-4]. We
may summarise it in the following slogan:
Every computable syntactic program transformation
has a semantic fixed point.
Moreover, this version of the SRT comes in a ‘constructive’ variant as well—see
Rogers [1987, §11.2-II] or Cutland [1980, §11-3.1]:
Theorem 9 (Constructive Rogers SRT). There is a total recursive n : N * N such
that, for any z ∈ N such that φz is total,
φn(z) = φφz (n(z))
All of the above variants of the SRT are equivalent under the assumption that φ is
an acceptable enumeration. Nevertheless, if we relax the assumption of acceptability,
there are ways to compare and contrast them. Riccardi [1980] showed that there are
enumerations of the partial recursive functions for which there exist Rogers-type fixed
points, but not Kleene-type fixed points. In a rather technical section of his thesis,
Moelius [2009, §3] painstakingly compares the various entailments between different
formulations of the recursion theorem, and concludes that Kleene’s is the one that is
more natural and general.
No matter how enticing it may be, we shall not dwell on this particular line
of discussion. In fact, we shall avoid it as much as possible, because it does not
fit our view of programming. Any numbering that is not acceptable is somehow
pathological: by contraposition, it follows that either substitution is not computable
(and the s-m-n theorem does not hold), or that there is no self-interpreter (universal
function)—which, by Turing-completeness, means that the interpreter as a function is
not computable. Things are even more subtle if the language is not Turing-complete:
we then have a subrecursive indexing, as in [Royer and Case, 1994], and this is not a
path we would like to tread presently.
2.1.3 Intensional Recursion
From the point of view of programming languages, the very general and still rather
strange application of the SRT is the ability to define functions by intensional recursion. This means that a function can not only call itself on a finite set of points
during its execution—which is the well-known extensional viewpoint—but it may also
examine its own intension, insofar as the source code for a program is a finite and
complete representation of it.
Indeed, the SRT is the only basic tool available in standard (non-higher-order)
accounts of computability that enables one to make any ‘unrestricted’ recursive definition whatsoever. For example, if we define
f (x, y) ' 1                       if y = 0
f (x, y) ' y · φU (x, y − 1)       otherwise

and apply Kleene’s SRT, we obtain a code e ∈ N such that φe is the factorial function.
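For comparison, the same blueprint can be read extensionally in a functional language, with the code argument x replaced by the function it denotes; the fixed point is then the ordinary least fixed point of a functional. A brief Haskell sketch (names ours):

    import Data.Function (fix)

    -- The blueprint above, with the code argument replaced by the function it
    -- computes; its least fixed point is the factorial.
    blueprint :: (Integer -> Integer) -> Integer -> Integer
    blueprint self y = if y == 0 then 1 else y * self (y - 1)

    factorial :: Integer -> Integer
    factorial = fix blueprint   -- factorial 5 == 120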
However, the above use is slightly misleading, in that it is extensional : x is only
used as an argument to the universal function φU . Hence, the resulting behaviour
does not depend on the code x itself, but only on the values of the function φx for
which it stands. The following definition captures that phenomenon.
Definition 2. A total recursive f : N * N is extensional just if

φa = φb   =⇒   φf (a) = φf (b)

for any a, b ∈ N.
That is, f : N * N is extensional just if the program transformation it effects
depends solely on the extension of the program being transformed. By a classic result
of Myhill and Shepherdson [1955], such transformations correspond to a certain class
of functionals which we discuss in §2.2.1.
However, even if f : N * N is extensional, the paradigm of intensional recursion
strictly increases our expressive power. For example, suppose that we use the SRT
to produce a program e satisfying a recursive definition of the form
φe (x, y) ' . . . φU (e, g(x), y) . . .
for some g : N * N. We could then use the s-m-n function to replace this recursive
call by a specialisation of e to g(x), thereby obtaining another program e′ of the form
φe′ (x, y) ' . . . φS(e′ ,g(x)) (y) . . .
This is an equivalent program, in the sense that φe = φe′ . But if the s-m-n function S(e, x) performs some optimisation based on the argument x—which it may sometimes do—then e′ may provide a more efficient definition, in that the code for the recursive call may be optimised for this particular recursive call. This line of thought
is the driving force of the partial evaluation community: there, the s-m-n function is
called a specialiser or a partial evaluator, and it is designed so that it optimises the
programs it is called to specialise; see §2.1.4 for more details.
But the SRT also allows for recursive definitions which are not functional, in that
the ‘blueprint’ of which we take a fixed point may not be extensional. For example,
one may be as daft as to define
f (x) ' x + 1   if x is even
f (x) ' x − 1   if x is odd
This f : N * N is total recursive, but decidedly not extensional. However, we may
still use Rogers’ theorem to obtain a fixed point e ∈ N such that φe = φf (e) , and
the resulting behaviour will depend on the parity of e (!). To this day, it is unclear
what the use of this is, except of course its underlying rôle in powerful diagonal
arguments—see e.g. Cutland [1980, §11.2]—as well as many kinds of reflection.
2.1.4 Applications of Intensional Recursion
Abramsky [2014] observed that the SRT, as well as other simple results on program
codes, are strangely absent from Computer Science. He comments:
“This reflects the fact which we have already alluded to, that while Computer Science embraces wider notions of processes than computability theory, it has tended to refrain from studying intensional computation, despite its apparent expressive potential. This reluctance is probably linked
to the fact that it has proved difficult enough to achieve software reliability even while remaining within the confines of the extensional paradigm.
Nevertheless, it seems reasonable to suppose that understanding and harnessing intensional methods offers a challenge for computer science which
it must eventually address.”
In many ways, we empathise with this programme. Consequently, we catalogue some
applications of such intensional ‘results about program codes,’ both within and on the
fringes of Computer Science, and then engage in some speculation regarding various
future directions.
Partial Evaluation
Kleene’s s-m-n theorem allows one to ‘specialise,’ or ‘partially evaluate’ a certain
program by fixing some of its arguments. It may appear simple and innocuous, but
this is deceptive: the s-m-n theorem is an essential result in computability.
The power afforded by the s-m-n theorem bestowed considerable success upon
the partial evaluation community, which began with the work of Futamura [1999]
and Ershov [1977, 1982] in the 1970s. Futamura observed that the ability to write
an interpreter for a language (i.e. a universal function), as well as the ability to
‘specialise’ an argument of a program (s-m-n function) followed by source-level optimisation yields an easy approach to generate a compiler, thus leading to the three
Futamura projections. Writing S = φs , where S is the s-m-n function, and U for the
program corresponding to the universal function, we have:
target code := φs (U, source code)
compiler := φs (s, U )
compiler generator := φs (s, s)
One can then verify that these equations do yield the desired behaviour, as shown in an
elementary fashion by Jones [1997]. This led to a successful programme of automatic
generation of compilers, first realised in Copenhagen. The results are documented in
Jones [1996], and the book of Jones et al. [1993].
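The idea is easy to miniaturise. The following Haskell sketch (illustrative only, with a toy expression language standing in for Gödel numbers) shows an s-m-n function in the spirit of mix: it substitutes the static argument and then performs a modest source-level optimisation, constant folding, which is what makes specialisation profitable.

    -- A toy 'mix': specialise the first input of a program, then optimise.
    data Expr = Lit Integer | Arg Int | Add Expr Expr | Mul Expr Expr
      deriving Show

    mix :: Expr -> Integer -> Expr
    mix p x = simplify (subst p)
      where
        subst (Lit n)   = Lit n
        subst (Arg 0)   = Lit x          -- the static argument becomes a constant
        subst (Arg i)   = Arg (i - 1)    -- remaining arguments shift down
        subst (Add a b) = Add (subst a) (subst b)
        subst (Mul a b) = Mul (subst a) (subst b)

    -- Constant folding: the source-level optimisation applied after substitution.
    simplify :: Expr -> Expr
    simplify (Add a b) = case (simplify a, simplify b) of
                           (Lit m, Lit n) -> Lit (m + n)
                           (a', b')       -> Add a' b'
    simplify (Mul a b) = case (simplify a, simplify b) of
                           (Lit m, Lit n) -> Lit (m * n)
                           (a', b')       -> Mul a' b'
    simplify e         = e

    -- e.g. mix (Mul (Add (Arg 0) (Lit 2)) (Arg 1)) 3  yields  Mul (Lit 5) (Arg 0)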
Contrasting the simplicity of the s-m-n theorem with its success in the partial
evaluation community also led Jones to ponder whether the SRT, which is a much
more powerful result, could have interesting practical applications. Quoting from
[Jones, 1992]:
“While this theorem has many applications in mathematics, it is not yet
clear whether it will be as constructively useful in computer science as the
s-m-n theorem has turned out to be.”
Taking this as a point of departure, he has posed two significant questions:
• What is the exact relationship between the First Recursion Theorem (FRT)
and the Second Recursion Theorem (SRT)?
• If one implements the SRT, how can the accumulating layers of self-interpretation
be avoided?
Looking back at the literature on computability, we find that the first question has
been answered in a mostly satisfactory way. Regarding the second question, some
recent progress is documented in Kiselyov [2015].
A series of experiments with the SRT and further discussion are documented in
Hansen et al. [1989], and Jones [1992, 2013].
Abstract Computer Virology
Computer viruses rely heavily on the ability to propagate their own code, which is a
kind of reflection. This was noticed by Cohen [1989], who was the first to introduce
the term computer virus alongside an early Turing Machine model of viruses. A few
years later, his supervisor—Leonard Adleman [1990]—concocted his own model of
viruses that is based on elementary computability theory. In that model, viruses
are program transformations that ‘infect’ ordinary programs. This is also where the
connection with the SRT was made explicit, as Adleman invokes it to construct a
program that—under his own definition—is classified as a virus. He also proved a
result on his model that makes crucial use of the SRT for a diagonal construction.
These were the two cornerstones that laid the foundation of abstract computer
virology. In more recent years there have been further developments, owing to the
work of Bonfante et al. [2005, 2006, 2007], who discuss and classify different types of
viruses that correspond to multiple variants of the SRT. See also Marion [2012].
Computational Reflection & Reflective Towers
The concept of computational reflection was introduced in the context of programming languages by Brian Cantwell Smith [1982, 1984] in his voluminous thesis. The
underlying intuition is that a program may be considered to be running on an interpreter, which interpreter itself is running on another copy of this interpreter, and
so on. A special construct that makes use of this structure is available in the programming language: it allows one to inject code into the interpreter that lies one level
below. Hence, a program has access not only to its code, but the entire state of the
interpreter that runs it, and even the interpreter that runs the interpreter. This is
embodied by the language 3-LISP, introduced in op. cit.
The resulting structure of reflective towers has captured the imagination of many,
but—while couched in colourful imagery—its construction is logically and computationally mysterious. A series of publications [Friedman and Wand, 1984, Wand and
Friedman, 1988, Danvy and Malmkjaer, 1988] have made partial attempts to explain
this ‘tower’ in more concrete terms.
Nevertheless, the concept of computational reflection seems to be a general recurring theme with broader scope, but is not well-understood. For the state of affairs up
to the mid-1990s, see the short comparative survey Demers and Malenfant [1995]. Demystifying reflection is a pressing concern, as many modern programming languages
have reflective or ‘introspective’ facilities, which are infamous for causing pernicious bugs and generally wreaking havoc.
Some ideas regarding reflection seem to be experiencing a resurgence of interest,
mainly because of the appearance of a new candidate foundation for reflection, namely
Barry Jay’s factorisation calculus: see Jay and Given-Wilson [2011], Jay and Palsberg
[2011], and Jay [2016].
As the SRT is a result with fundamental connections to reflection, we believe
that a better understanding of intensional recursion must be instrumental in laying
a logical foundation for reflective constructs.
Economics
This section concerns a speculative application of the SRT to economic modelling.
Historically, there has been a lot of discussion—especially in the years following the
Great Recession—regarding the foundational principles of economics, and whether
these are an adequate substrate for the science. Some of these critical approaches
touch on the self-referential aspects that are ignored by (neoclassical) economic theory. Winrich [1984] offers an early example of these criticisms; we give him the floor,
for the argument is compelling:
In order for preferences to be complete the choice set must include preferences themselves! But, as soon as you allow preferences within the choice
set you have preferences “talking” to preferences. In a world of preference
self-reference we can, if you will, produce a neoclassical liar. As an example let our liar be a smoker. In such a situation it is not uncommon to
hear a smoker say, “I dislike my desire to smoke.” What are we to make
of such a statement? In the static framework of neoclassical choice theory
this is a contradiction. But at the same time, it cannot be prevented. Not
only is the “act of smoking” an element in the choice set, but the “desire to
smoke” is itself an element in the choice set and also subject to discretion.
Continuously expanding the choice set by including not only commodities
but also social conditions in an attempt to explain invidious distinctions,
other individuals in an attempt to explain altruism, and so forth, neoclassicism has been able to “absorb” heterodox attacks within the atomic
individualistic perspective. However, the inclusion of choice itself is devastating to its own premise of consistency.
Let me make the point clear. Preference functions, preference orderings,
or preference rankings do not exist. [...]
Indeed, changes to an economic process originating from within an economic process
have been rather difficult to model in a reductionist fashion. Some early work that
attempted to model the ‘change of institutions’ in game theory was carried out by
Vassilakis [1989, 1992]. Moreover, there have even been approaches—even predating
algorithmic game theory—that approach economics with computational feasibility
in mind, see e.g. the work of Velupillai [2000] on computable economics. See also
Blumensath and Winschel [2013] for a recent attempt involving coalgebra.
2.2 Extensional Recursion and the FRT
2.2.1 Effective Operations
Suppose that we have some f : N * N that is total and extensional. It is not hard to
see that—by the definition of extensionality—f uniquely induces a very specific type
of functional. Let us write PR for the set of partial recursive functions.
Definition 3. A functional F : PR → PR is an effective operation, if there exists a
total, extensional f : N * N such that
F (φx ) = φf (x)
This is well-defined precisely because f is extensional. We are entering the realm
of higher types, by computing functions from functions. Nevertheless, we are doing
so in a computable and finitary sense: functions may be infinite objects, but this
computation occurs, or is tracked, on the level of program codes.
The Myhill-Shepherdson Theorem
Functionals such as the one defined above are perhaps one of the most straightforward
ways to define computability at higher order, namely as effective code transformations.
Surprisingly, Myhill and Shepherdson [1955] showed that the same functionals can be
defined in a much more abstract manner that dispenses with code transformations
entirely. In fact, anyone familiar with the domain-theoretic semantics of the λ-calculus
will recognise it immediately.
We first need to discuss the simple order-theoretic structure of the set P of unary
partial functions: its elements may indeed be ordered by subset inclusion:
ψ ⊆ χ iff ∀x, y ∈ N. ψ(x) ' y =⇒ χ(x) ' y
This makes P into an ω-complete partial order (ω-cpo), in that least upper bounds of
increasing chains always exist, and they are unions. We can now make the following
definition.
Definition 4. A functional F : P → P is effectively continuous just if it satisfies the
following properties:
Monotonicity ψ ⊆ χ =⇒ F (ψ) ⊆ F (χ)
Continuity For any increasing sequence of partial functions,
f0 ⊆ f1 ⊆ f2 . . .
we have

F ( ⊔i fi ) = ⊔i F (fi )
Effectivity on Finite Elements Given an encoding ˆ· of the graph of every finite
function θ : N * N as a number θ̂ ∈ N, there is a partial recursive gF : N × N *
N such that, for every finite θ : N * N, and for all x ∈ N,
F (θ)(x) ' gF (θ̂, x)
These are called recursive functionals by Cutland [1980] and Rogers [1987]: we beseech the reader to exercise caution, as terminology varies wildly.
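As a small illustration of the last clause, finite functions can be represented by their graphs, and the behaviour of a functional on them computed by a perfectly ordinary program. In the Haskell sketch below the functional F (f )(x) ' f (x) + f (x + 1) is chosen arbitrarily, finite graphs are Data.Map values, and Maybe stands in for undefinedness; all of this is a convenience of the sketch, not part of the definition.

    import qualified Data.Map as Map

    -- A finite function, given by its (finite) graph.
    type Finite = Map.Map Integer Integer

    -- g_F for the example functional F(f)(x) = f(x) + f(x+1): its value on a
    -- finite element depends only on finitely many lookups in the graph.
    gF :: Finite -> Integer -> Maybe Integer
    gF theta x = (+) <$> Map.lookup x theta <*> Map.lookup (x + 1) theta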
Let us restate continuity, by using the following equivalent formulation, for which
see Odifreddi [1992, §II.2.23]:
Lemma 1 (Compactness). F : P → P is continuous if and only if
F (f )(x) ' y   ⇐⇒   ∃ finite θ ⊆ f. F (θ)(x) ' y
for all x, y ∈ N.
Putting these together, we see that effectively continuous functionals are indeed
very strongly computational and effective: the value of F (f )(x) only depends on a
finite part of the graph of f . In fact, we can show that the behaviour of F on finite
elements completely determines its behaviour on f :
Lemma 2 (Algebraicity). Let F : P → P be continuous. Then
F (f ) = ⊔ { F (θ) | θ finite ∧ θ ⊆ f }
So, how do these functionals, which are computable in finite approximations, relate to
the aforementioned effective operations, which are based on computations on indices?
The answer is astonishingly simple:
Theorem 10 (Myhill-Shepherdson). An effective operation Feff : PR → PR can be
uniquely extended to an effectively continuous functional F : P → P (with Feff ⊆ F ).
Conversely, any effectively continuous functional, when restricted to the partial
recursive functions PR, is an effective operation.
See Cutland [1980, 10-§2], Rogers [1987, §15.3, XXIX], or Odifreddi [1992, §II.4.2] for
proofs.
A Turing Machine characterisation
Our discussion of functionals began with extensional operations on codes, as the most
natural definition of higher-order computation.
There is, however, an alternative, which some would argue is even more simple:
one may envisage the implementation of any functional F : P → P as a Turing
Machine which has access to an oracle for the argument of the functional. During a
computation, the machine may write a number x on a separate tape, and then enter
a special state, in order to query the oracle. The oracle then replaces x with f (x)
if f is defined at x. If it is not, the oracle does not respond, forcing the machine to
eternally wait for an answer, and the computation diverges.
Computation with oracles was first considered by Turing [1939], leading to the
intricate theory of Turing reducibility and relative computability. However, Turing-type oracles are fixed in relative computability, whilst we consider them as arguments
to a computation. This shift in perspective, as well as the first concrete results
involving higher types, are due to Kleene [1952].
In this context, subtle issues arise with non-determinism. It is well-known that
non-deterministic Turing Machines are equivalent to deterministic Turing machines
at the first order, but at higher order this is no longer true. In fact, the following
theorem was first shown—to the best of our knowledge—by Moschovakis [2010, §3],
even though it was simply labelled as the Myhill-Shepherdson theorem:
Theorem 11 (Moschovakis). A functional F : PR → PR is an effective operation if
and only if F (f ) is computable by a non-deterministic Turing machine with an oracle
for f .
Great care has to be taken in combining non-determinism with oracles: the machine should be designed so as to avoid the presence of two halting branches in
the same computation tree with different candidate outputs. Similar restrictions occur in defining what it means to non-deterministically compute a polynomial time
function—see e.g. Lewis and Papadimitriou [1997, §4.5.2]. However, in this case,
non-halting branches do not harm anyone; in fact, they are in some sense necessary
for the expressive power afforded by this model of computation.
The First Recursion Theorem
We have seen that effectively continuous functionals exhibit an impressive amount of
inherent order-theoretic structure. By the Myhill-Shepherdson theorem, this struc-
ture emerges automatically once a functional can be ‘realised’ on codes by an extensional operation on codes.
This order-theoretic structure is the basis of the proof of the First Recursion
Theorem (FRT). This fact was discovered by Dana Scott, who was the first one to
notice that the core argument in the proof of the FRT applies to all so-called simple
types.3 This discovery of Scott led to the development of domain theory, and the study
of PCF [Plotkin, 1977], both of which were the firstfruits of the field of programming
language semantics. Indeed, Scott acknowledged his debts; quoting from [Scott, 1975]:
“[...] It is rather strange that the present model was not discovered earlier,
for quite sufficient hints are to be found in the early paper of Myhill
and Shepherdson and in Rogers’ book (especially §§9.7-9.8). These two
sources introduce effective enumeration operators and indicate that there
is a certain amount of algebra about that gives these operators a pleasant
theory, but no one seemed ever to take the trouble to find out what it
was.”
The central result underlying the FRT is therefore this:
Theorem 12 (The Fixpoint Theorem). Let (D, ⊑) be an ω-cpo with a least element ⊥ ∈ D, and let F : D → D be a continuous function. Then F has a least fixed point, defined explicitly by

lfp(F ) = ⊔i F^i(⊥)
This is largely considered a folk theorem: its origins are difficult to trace, and many
variants of it have been proved and used widely in Logic and Computer Science—see
Lassez et al. [1982] for a historical view. In op. cit the authors note that Kleene knew
of it at least as early as 1938; see also [Kleene, 1981].
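In the special case where the ascending chain ⊥ ⊑ F (⊥) ⊑ F (F (⊥)) ⊑ . . . stabilises after finitely many steps, for instance over a finite lattice, the least upper bound is reached by plain iteration and the theorem becomes an algorithm. A Haskell sketch, assuming F is monotone and the chain does stabilise:

    -- Kleene iteration in the finitely-stabilising case: iterate F from bottom
    -- until a fixed point is reached; by the theorem, this is the least one.
    lfp :: Eq a => (a -> a) -> a -> a
    lfp f bottom = go bottom
      where go x = let x' = f x in if x' == x then x else go x'

    -- A schematic use (with a hypothetical 'successors' function): graph
    -- reachability as a least fixed point over finite sets of nodes:
    --   lfp (\s -> s `union` concatMap successors s) initialNodes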
In fact, it is high time that we show the reader how it constitutes the first half of
Kleene’s FRT. But first, a caveat: the version of the FRT that we will presently prove
pertains to effective operations (à la Myhill and Shepherdson), and a proof similar
to ours may be found in the book by Cutland [1980, §10-3]. The original statement,
found in Kleene’s book [Kleene, 1952, §66], concerns partial recursive functionals,
which we discuss in §2.2.2; see also Odifreddi [1992, §II.3.15].
3
The simple types of PCF, as defined by Scott [1993], are generated by σ ::= B | N | σ → σ. B
is supposed to connote the booleans, and N the type of natural numbers.
Theorem 13 (First Recursion Theorem). Every effectively continuous functional
F : P → P has a least fixed point lfp(F ) : N * N, which is partial recursive.
Furthermore, let φq : N * N be an extensional function that realises F on the
partial recursive functions, i.e. F (φx ) = φφq (x) . Then, a p ∈ N such that φp = lfp(F )
may be computed effectively from any such q ∈ N.
Proof. The existence of the least fixed point follows from the fact that (P, ⊆) is an ω-cpo, and from the Fixpoint Theorem.
By the Myhill–Shepherdson theorem, the functional F corresponds to some extensional φq : N * N. The proof of the Fixpoint Theorem constructs a chain,
f0 = ∅ ⊆ f1 ⊆ f2 . . .
where fi+1 = F (fi ), and the least fixed point is lfp(F ) = ⊔i fi . The fi may be construed as increasingly defined ‘approximations’ to f . The key to this proof lies in
using the extensional φq to obtain indices that track each element of this chain:
p0 = (some index for the nowhere defined function)
p1 = φq (p0 )
...
so that pi+1 = φq (pi ) and hence, by induction, fi = φpi for all i ∈ N. Then, by
Church’s Thesis, we define p ∈ N by writing a program that performs the following
steps: on input n,
1. Set i := 0.
2. Begin a simulation of the program p0 running on n.
3. Loop:
(a) For each j ≤ i, simulate one step of pj on n.
(b) If any of these simulations have halted and produced a value m, halt and
output m.
(c) Otherwise, compute pi+1 = φq (pi ), and begin a simulation of pi+1 on input
n. Set i := i + 1.
Since the fi ’s are a chain, this program cannot accidentally produce two contradictory values. But since lfp(F ) = ⊔i fi , if f (n) ' m, then fi (n) ' m for some i, so that f = φp .
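The dovetailing itself is easily sketched. In the Haskell fragment below, the step-bounded simulator step and the index transformer next (playing the rôle of φq ) are taken as parameters; both are stand-ins for machinery that the theory, rather than this sketch, provides.

    -- Dovetailing search for lfp(F)(n), as in steps 1-3 of the proof.
    lfpAt :: (Int -> prog -> Int -> Maybe Int)  -- step budget -> code -> input -> result, if halted
          -> (prog -> prog)                     -- next, mapping p_i to p_{i+1} = phi_q(p_i)
          -> prog                               -- p_0, a code for the nowhere defined function
          -> Int                                -- the input n
          -> Int
    lfpAt step next p0 n =
      head [ m | k <- [0 ..]                          -- dovetail over step budgets ...
               , p <- take (k + 1) (iterate next p0)  -- ... and over the first k+1 codes
               , Just m <- [step k p n] ]
      -- diverges if no p_i halts on n, matching the undefinedness of lfp(F)(n)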
In the books of both Cutland [1980] and Odifreddi [1992], the above proof is
obtained after the recursively enumerable sets are characterised as the Σ⁰₁ sets in the
arithmetical hierarchy. We prefer the more primitive, ‘algorithmic version’ above,
because we can isolate the expressive power needed.
So, what expressive power do we use? The program in the proof curiously calls
for countable dovetailing, i.e. the simulation of a slowly expanding yet potentially
infinite set of computations, of which we perform a few steps at each stage. This
requires access to the code q ∈ N of the relevant extensional function, so that we
can run it to obtain the rest of the codes pi . Furthermore, we require a handle on the simulations we spawn, so that we can pause them, schedule some steps of
each, and possibly even discard some.
It is worth remarking once more that the output of this procedure is obviously
deterministic, but there is inherent non-determinism and parallelism in the method
we use to compute it. This is very much in line with Moschovakis’ version of the
Myhill–Shepherdson theorem (Theorem 11).
2.2.2 Partial Recursive Functionals and Pure Oracles
We have only discussed effective operations up to this point, and shown that they
correspond to effectively continuous functionals.
In our discussion leading to Theorem 11 we also mentioned a slightly different
paradigm, that of oracle computation. Theorem 11, however, guaranteed that nondeterministic oracle computation coincided with effective continuity. If, in contrast,
we adopt deterministic oracle computation as our notion of higher-order computation,
we are led to a different notion of computable functional. This kind of functional was
first discussed by Kleene [1952]:4
Definition 5. A functional F : P → P is a partial recursive functional if F (f ) can be
obtained from f : N * N and the initial functions by composition, primitive recursion,
and minimalisation. If its domain is restricted to the set F of total functions, such a
functional F : F → P is called a restricted partial recursive functional.
4
However, his definition was not identical to ours. Kleene defined his functionals through an
equation calculus. In this framework, even if the semantics of composition were understood to be
strict, multiple (and possibly inconsistent) defining equations were still allowed, leading to the nondeterminism observed by Platek [1966], which allowed parallel-or to be computable. We use the
definition that Kleene is widely believed to have intended, and which is found in later textbooks.
That this is the common interpretation I learned from John Longley, in personal communication.
Thus, if F (f ) = g, then we say that g is partial recursive uniformly in f .
An implementation of such a functional F would resemble a deterministic Turing
machine with an oracle for its argument. But notice that there is no non-determinism
in this case, and hence calls to the oracle have to happen in a predetermined way. As
soon as we decide to make a query at an undefined point, the computation diverges:
there is no other branch of the computation to save the day! Informally, we may say
that calls to the oracle may not be dovetailed. In effect, partial recursive functionals
deal with their arguments as pure extensions, whereas effectively continuous functionals interact in a more involved manner with the phenomenon of non-termination.
In the case of total inputs, the above connection was made precise by Kleene:
Theorem 14. [Kleene, 1952, §68, XXVIII] A functional F : F → P is a restricted
partial recursive functional if and only if it is computed by a deterministic Turing
machine with an oracle.
We are not aware of a plausible analogue of this theorem for partial recursive functionals and deterministic Turing machines.
The definition of partial recursive functionals has a lot of undesirable consequences: see the discussion in the thesis of Platek [1966, p. 128-130]. Thus, the
definition is often restricted to total inputs, for which the above characterisation
through Turing Machines exists. The underlying reason seems to be that, for total
inputs, we may enumerate the graph of the oracle, as no call to it will diverge.
Let us not forget this trivial but pleasant consequence:
Lemma 3. Let F : P → P be a partial recursive functional. If g ∈ PR, then
F (g) ∈ PR.
It is in this setting that Kleene obtained the First Recursion Theorem, which first
appeared in Kleene [1952, §66]:
Theorem 15 (FRT for Partial Recursive Functionals). Let F : P → P be a partial
recursive functional. Then F admits a partial recursive least fixed point.
Proof. See Odifreddi [1992, §II.3.15]. As before, the existence of the least fixed point
follows from the Fixpoint Theorem. The fact that it is partial recursive follows from Lemma 3: we conclude by induction that all the fi ’s are partial recursive; as the least fixed point is f = ⊔i fi , we have that

f (x) ' y   ⇐⇒   ∃i ∈ N. fi (x) ' y
As the fi ’s are partial recursive, the predicate on the RHS of this equivalence is
recursively enumerable, and hence so is the graph of f .
Notice that the proof was rather abstract, and that all references to indices have
disappeared completely.
The following was shown by Uspenskii and Nerode, see Odifreddi [1992, §II.3.19]
for a proof:
Theorem 16. Every partial recursive functional is effectively continuous.
In particular, if we restrict a partial recursive functional to PR, it is an effective
operation. The converse was shown to fail by Sasso—see Odifreddi [1992, §II.3.20]:
Theorem 17. The functional
F (f ) = λx.  0            if f (2x) ' 0 or f (2x + 1) ' 0
              undefined    otherwise
is effectively continuous, but not partial recursive.
This clearly demonstrates, once more, that there is inherent parallelism or nondeterminism in effective operations, whilst partial recursive functionals are purely
sequential. In more detail, to compute the above functional we would have to concurrently query the argument f at two points by dovetailing the computations. A
deterministic Turing machine would have to query f at either 2x or 2x + 1
first; if the first call were to an undefined point, it would diverge and never examine
the second. A non-deterministic Turing machine would deal with the same difficulty
by branching at the point where a choice between 2x and 2x + 1 is to be made.
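The required interleaving can be made concrete by modelling a possibly divergent computation as a lazy stream of steps. In the Haskell sketch below the argument f is supplied as such step-indexed computations (an artefact of the sketch, standing in for oracle queries), and race dovetails the two queries.

    -- A possibly divergent computation: a stream of steps, perhaps ending in a value.
    data Comp a = Step (Comp a) | Done a

    -- Interleave two computations and return whichever converges first.
    race :: Comp a -> Comp a -> Comp a
    race (Done x) _        = Done x
    race _        (Done y) = Done y
    race (Step p) (Step q) = Step (race p q)

    -- Sasso's functional: 0 if f(2x) = 0 or f(2x+1) = 0, undefined otherwise.
    sasso :: (Integer -> Comp Integer) -> Integer -> Comp Integer
    sasso f x = race (check (f (2 * x))) (check (f (2 * x + 1)))
      where
        check (Done 0) = Done 0
        check (Done _) = loop             -- a non-zero answer is no help: keep waiting
        check (Step c) = Step (check c)
        loop           = Step loop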
2.3 FRT vs. SRT
2.3.1 Effective Operations and the SRT
Suppose we would like to construct a fixed point in a more simplistic manner than
the one employed in the proof of Theorem 13. All we need to do is use the Myhill-Shepherdson theorem to restrict an effectively continuous functional to an effective
operation and extract an extensional function from it, followed by applying the SRT.
Lemma 4. Given an effective operation F : PR → PR defined by an extensional
φp : N * N, we may effectively obtain a code for one of its fixed points from p ∈ N.
Proof. By Theorem 9, that code is n(p); for then,
φn(p) = φφp (n(p)) = F (φn(p) )
so that φn(p) is a fixed point of F .
So far, so good; but what sort of fixed point have we obtained? In particular, is it
minimal? The following construction, due to Rogers [1987, §11-XIII] demonstrates
that it is not.
Theorem 18. There is an extensional φm : N * N such that the fixed point obtained
by the SRT as in Lemma 4 is not minimal.
Proof. Use Church’s Thesis, Kleene’s SRT, and the function n from Theorem 9 to define m ∈ N
such that
φm (x) ' x   if x ≠ n(m)
φm (x) ' t   if x = n(m)
where t is an index for the constant zero function, i.e. φt (x) ' 0 for all x ∈ N.
Observe that, as n is total recursive, φm is total recursive. It is also extensional.
Essentially, φm asks: is the input my own Rogers fixed point? If yes, output code for
the constantly zero function; otherwise, echo the input. Thus, if x ≠ n(m), we have
that φm (x) ' x, so that φφm (x) ' φx . Otherwise,
φφm (n(m)) ' φn(m)
as n(m) is a Rogers-style fixed point. In either case, φm is extensional, and defines
the identity functional. The least fixed point of it is the empty function. However,
the fixed point that results from the SRT has code n(m), and
φm (n(m)) ' t
so that φn(m) = φt , which is equal to the constantly zero function.
The key aspect of this construction seems to be that, unlike oracle computation,
an extensional function is able to syntactically inspect its input, thus creating a
‘singularity’ at one point. We maintain extensionality by arranging that the point
at which the ‘singularity’ is to be found is—incidentally—the extensional function’s
own fixed point! This is more evidence that effective operations really hide something
more than ‘pure extension’ under the hood.
The Standard Form
Can we mend this situation? The answer is positive: any extensional function can
be rewritten in a ‘standard form,’ which guarantees that the SRT really defines a
minimal fixed point. This is the exact sense in which the SRT implies the FRT.
The construction is due to Rogers [1987, §11-XIV]. The original statement is
horribly complicated, and involves multiple layers of enumeration; there is a lot of
concurrency happening here, and we cannot do much better than keep the description
informal.
For the following, we assume that there is also a standard way to enumerate the
graph of a partial recursive function given its index. This may be done by dovetailing
simulations of
φx (0), φx (1), . . .
and emitting pairs (i, φx (i)) as soon as the ith simulation halts. We do not care about
the exact details, but we do care that the exact same construction is used throughout.
Thus, let there be an effective operation F : PR → PR, and let f : N * N be
total and extensional, such that F (φx ) = φf (x) . We will define a total and extensional
hf : N * N, which is co-extensional with f , in the sense that
∀x ∈ N. φf (x) = φhf (x)
This hf will be in ‘standard’ form. Moreover, we may effectively compute an index
for hf from an index for f .
We use Church’s Thesis to define hf so that, on input y, it outputs a program
that performs the following instructions:
hf (y) ' “On input x, run the following processes in parallel:
1. One process enumerates the graphs of all finite functions N * N,
encoded as numbers:
θˆ0 = ∅, θˆ1 , θˆ2 , . . .
This may be done in many ways, but it is necessary that we begin
with the empty function (in order to include the covert base case in
the strong induction of the following theorem).
2. There is a total recursive d : N * N which turns a graph of a finite
function into an index for that function (by writing code that simply
checks if the input is in the graph, outputting the relevant value if so,
and diverging otherwise). The second process receives the encoded
graphs of the finite functions above, and enumerates codes,
d(θˆ0 ), d(θˆ1 ), d(θˆ2 ), . . .
with φd(θˆi ) = θi .
3. A third process receives messages from the second process, and applies f to those codes, outputting
f (d(θˆ0 )), f (d(θˆ1 )), . . .
with φf (d(θˆi )) = F (φd(θˆi ) ) = F (θi ). We thus obtain codes for all the
applications of the effective operation F on all the finite functions.
Beware: these functions φf (d(θˆi )) may now be infinite!
4. (This is the process where partiality enters the construction.) Enumerate, simultaneously, all the pairs in the graph of the function φy ,
as well as the pairs in the graphs of φf (d(θˆi )) = F (θi ). This can be
done using the method postulated above.
5. As soon as we find some t such that
F (θi )(x) ' t and θi ⊆ φy
we halt and output that t. This may be done by periodically checking whether x is defined in the enumeration of some F (θi ), and then
confirming that the entire graph of that θi is contained in the enumeration of φy .”
Notice that, in this construction, the code for f may be abstracted away. Using
the s-m-n theorem, we can then effectively produce code for it from any index of f .
Trivially, hf is total. Furthermore, notice that we needed to enumerate the φf (d(θˆi )) ,
for—in general—they will not be finite functions.
By the compactness of F , which follows from the theorem of Myhill and Shepherdson, we know that F (φy )(x) ' t if and only if F (θi )(x) ' t for some finite θi ⊆ φy .
This construction will always find such a θi if there exists one. Hence hf defines the
same functional as f .
Now, using the SRT on hf will produce a minimal fixed point:
Theorem 19. If φv = hf , then n(v) is a code for the least fixed point of the effective
operation F : PR → PR defined by f : N * N.
Proof. We have that
φn(v) = φφv (n(v)) = φhf (n(v))
so that n(v) behaves exactly as hf (y) would if y were fixed to be its own code. That
is to say, we can read φn(v) wherever φy occurs in the definition of hf , and the check
θi ⊆ φy
becomes
θi ⊆ φn(v)
which is to say that the program checks whether each finite function is a subset of its
own graph! By Lemma 4, this defines a fixed point for the functional F .
To prove that this fixed point is least, we proceed by strong induction on the
number of steps taken to enumerate the graph of φn(v) . That is, we shall show that
if we begin enumerating φn(v) using our standard enumeration procedure on the code
n(v), all the pairs produced will belong to the least fixed point, hence φn(v) ⊆ lfp(F ),
whence φn(v) = lfp(F ).
Begin enumerating φn(v) . This involves running the code n(v). One of the subprocesses in that code involves enumerating φn(v) itself, using the same procedure as
we are. Since this is a sub-computation of our enumeration, it is always shorter in
length. Hence, by the inductive hypothesis, we assume that the enumeration in the
sub-computation produces the least fixed point. Hence, if the check
θi ⊆ φn(v)
succeeds, we know that
θi ⊆ lfp(F )
By monotonicity, it follows that
F (θi ) ⊆ F (lfp(F )) = lfp(F )
Hence, when the check F (θi )(x) ' t succeeds and the pair (x, t) is output by the
enumeration, we know it belongs to the least fixed point. It follows that every pair
produced by the enumeration is in the least fixed point.
2.3.2 Partial Recursive Functionals and the SRT
In contrast with effective operations, the situation is simpler in the case of oracle
computation: because partial recursive functionals are decidedly extensional in their
behaviour, the problems that arose in the preceding section vanish. There is no
‘inherent’ parallelism in computing such a functional, and the SRT immediately yields
least fixed points.
The following theorem was shown by Odifreddi [1992, §II.3.16], who wrongly attributes it to Rogers:5
5
Rogers instead sketched the proof of our Theorem 19, which strictly concerns effective operations.
Theorem 20 (Odifreddi). Let F : P → P be a partial recursive functional, and
define
f (e, x) ' F (φe )(x)
Then f is partial recursive. Moreover, there exists q ∈ N such that f = φq , and the
function h : N * N of Theorem 7 produces a code h(q) such that φh(q) = lfp(F ).
Proof. As F is a partial recursive functional, it is also an effective operation on the
partial recursive functions, by Theorem 16. We define q ∈ N by Church’s thesis: on
input (e, x), process the code of e with the total extensional function associated to F
by Myhill-Shepherdson, and call the resulting code on x.
Let g = φh(q) . We have that
g(x) ' φh(q) (x) ' f (h(q), x) ' F (φh(q) )(x) ' F (g)(x)
for any x ∈ N, so that g is a fixed point of F .
It remains to show minimality. The proof is by strong induction on the length
of computations of F (g) on its arguments. Suppose F (g)(x) ' t. F is effectively
continuous, so there exists a finite θ ⊆ g such that
F (θ)(x) ' t
by compactness. Choose a minimal such θ, and let
θ = { (x1 , y1 ), . . . , (xn , yn ) }
By construction, the xi are exactly the ‘questions’ with which a Turing machine that
computes F would query the oracle, on input x.
By using Kleene’s SRT, we have replaced calls to the oracle by recursive calls
to another copy of itself. It follows that the computation of each F (g)(xi ) ' yi is
strictly shorter in length than the overall computation of F (g)(x) ' t. Hence, by the
induction hypothesis, (xi , yi ) ∈ lfp(F ) for all i, and θ ⊆ lfp(F ).
By monotonicity, F (θ) ⊆ lfp(F ), and since F (θ)(x) ' t, we have that (x, t) ∈
lfp(F ). Hence g ⊆ lfp(F ), and as g is also a fixed point, equality holds.
Chapter 3
iPCF: An Intensional Programming Language¹
This chapter concerns the elaboration of the modality-as-intension interpretation that
we introduced in §1.4. Our starting point will be the Davies-Pfenning calculus for S4,
which is a typed λ-calculus with modal types. The intuitive meaning of the modal type □A will be that of code, that—when evaluated—yields a value of type A. We
wish to use this calculus for intensional and reflective programming.
The Davies-Pfenning calculus already supports a notion of programs-as-data: to
each term M : A that uses only ‘code’ variables there corresponds a term box M : □A
that stands for the term M considered as a datum. This is already considerably
stronger than ordinary higher-order functional programming with ‘functions as first-class citizens,’ as it also entails a kind of homoiconicity, similar to the one present in
dialects of Lisp. But we want to go even further than that: in Lisp, a program is able
to process code by treating it as mere symbols, thereby disregarding its observable
behaviour.
The true spirit of intensionality is the ability to support operations that are, according to the extensional viewpoint, non-functional. This was not the case in the
work of Davies and Pfenning, who merely used their calculus for staged metaprogramming, which did not require non-functional operations. In this chapter we shall
mend this. We shall augment their calculus by adding intensional operations, and
intensional recursion. We shall call the resulting calculus Intensional PCF, after the
simply-typed λ-calculus with (extensional) fixed points studied by Scott [1993] and
Plotkin [1977].
1
This chapter is based on the paper [Kavvos, 2017d], which was presented at the 7th workshop on Intuitionistic Modal Logics and Applications (IMLA 2017). A preprint is available as
arXiv:1703.01288
There has been some previous work on adding intensional operations to the Davies-Pfenning calculus. A complicated system based on nominal techniques that fleshed
out those ideas was presented by Nanevski [2002]. The notions of intensional and
extensional equality implicit in this system were studied using logical relations by Pfenning and Nanevski [Nanevski and Pfenning, 2005]. However, none of these papers
studied whether the induced equational systems are consistent. We show that, no
matter the intensional mechanism at use, modalities enable consistent intensional
programming.
To our knowledge, this chapter presents (a) the first consistency proof for type-safe
intensional programming, and (b) the first type-safe attempt at reflective programming.
3.1 Introducing Intensional PCF
Intensional PCF (iPCF) is a typed λ-calculus with modal types. As discussed before,
the modal types work in our favour by separating intension from extension, so that
the latter does not leak into the former. Given the logical flavour of our observations
in §1.4 we shall model the types of iPCF after the constructive modal logic S4, in
the dual-context style pioneered by Pfenning and Davies [Pfenning and Davies, 2001,
Davies and Pfenning, 2001]. Let us seize this opportunity to remark that (a) there
are also other ways to capture S4, for which see the survey [Kavvos, 2016], and that
(b) dual-context formulations are not by any means limited to S4: they began in the
context of intuitionistic linear logic, but have recently been shown to also encompass
other modal logics; see Kavvos [2017b].
iPCF is not related to the language Mini-ML that is introduced by Davies and
Pfenning [2001]: that is a call-by-value, ML-like language, with ordinary call-by-value fixed points. In contrast, ours is a call-by-name language with a new kind
of fixed point, namely intensional fixed points. These fixed points will afford the
programmer the full power of intensional recursion. In logical terms they correspond
to throwing the Gödel-Löb axiom □(□A → A) → □A into S4. Modal logicians might object to this, as, in conjunction with the T axiom □A → A, it will make every
type inhabited. We remind them that a similar situation occurs in PCF, where the
YA : (A → A) → A combinator allows one to write a term YA (λx:A. x) at every
type A. As in the study of PCF, we care less about the logic and more about the
underlying computation: it is the terms that matter, and the types are only there to
stop type errors from happening.
The syntax and the typing rules of iPCF may be found in Figure 3.1. These are
largely the same as Pfenning and Davies’ S4, save the addition of some constants
(drawn from PCF), and a rule for intensional recursion. The introduction rule for the
modality restricts terms under a box (−) to those containing only modal variables,
i.e. variables carrying only intensions or code, but never ‘live values:’
∆ ; · ⊢ M : A
─────────────────────
∆ ; Γ ⊢ box M : □A

There is also a rule for intensional recursion:

∆ ; z : □A ⊢ M : A
─────────────────────────
∆ ; Γ ⊢ fix z in M : A
This will be coupled with the reduction fix z in M −→ M [box (fix z in M )/z]. This
rule is actually just Löb’s rule with a modal context, and including it in the Hilbert
system of a (classical or intuitionistic) modal logic is equivalent to including the Gödel-Löb axiom: see Boolos [1994] and Ursini [1979b]. We recommend the survey Litak
[2014] for a broad coverage of constructive modalities with a provability-like flavour.
Finally, let us record a fact noticed by Samson Abramsky, which is that erasing the
modality from the types appearing in either Löb’s rule or the Gödel-Löb axiom yields
the type of YA : (A → A) → A, as a rule in the first case, or axiomatically internalised
as a constant in the second (both variants exist in the literature: see Gunter [1992]
and Mitchell [1996].)
3.2 Metatheory
This section concerns the basic metatheoretic properties of iPCF. The expected structural rules are admissible. We also prove a theorem regarding the behaviour of free
variables, similar to the ones in [Kavvos, 2017b], which demonstrates how the different
layers of intension and extension are separated by the type system.
3.2.1 Structural Theorems & Cut
iPCF satisfies the expected basic results: structural and cut rules are admissible.
This is no surprise given its origin in the well-behaved Davies-Pfenning calculus. We
assume the typical conventions for λ-calculi: terms are identified up to α-equivalence,
for which we write ≡, and substitution [·/·] is defined in the ordinary, capture-avoiding
manner. Bear in mind that we consider occurrences of u in N to be bound in
let box u ⇐ M in N . Contexts Γ, ∆ are lists of type assignments x : A. Furthermore, we shall assume that whenever we write a judgement like ∆ ; Γ ⊢ M : A, then ∆ and Γ are disjoint, in the sense that Vars (∆) ∩ Vars (Γ) = ∅, where Vars (x1 : A1 , . . . , xn : An ) := {x1 , . . . , xn }. We write Γ, Γ′ for the concatenation of disjoint contexts. Finally, we sometimes write ⊢ M : A whenever · ; · ⊢ M : A.

Figure 3.1: Syntax and Typing Rules for Intensional PCF

Ground Types  G     ::= Nat | Bool
Types         A, B  ::= G | A → B | □A
Terms         M, N  ::= x | λx:A. M | M N | box M | let box u ⇐ M in N |
                        n̂ | true | false | succ | pred | zero? | ⊃G | fix z in M
Contexts      Γ, ∆  ::= · | Γ, x : A

∆ ; Γ ⊢ n̂ : Nat
∆ ; Γ ⊢ b : Bool              (b ∈ {true, false})
∆ ; Γ ⊢ zero? : Nat → Bool
∆ ; Γ ⊢ f : Nat → Nat         (f ∈ {succ, pred})
∆ ; Γ ⊢ ⊃G : Bool → G → G → G

(var)
∆ ; Γ, x:A, Γ′ ⊢ x : A

(var)
∆, u:A, ∆′ ; Γ ⊢ u : A

(→ I)
∆ ; Γ, x:A ⊢ M : B
─────────────────────────
∆ ; Γ ⊢ λx:A. M : A → B

(→ E)
∆ ; Γ ⊢ M : A → B      ∆ ; Γ ⊢ N : A
──────────────────────────────────────
∆ ; Γ ⊢ M N : B

(□I)
∆ ; · ⊢ M : A
──────────────────────
∆ ; Γ ⊢ box M : □A

(□E)
∆ ; Γ ⊢ M : □A      ∆, u:A ; Γ ⊢ N : C
────────────────────────────────────────
∆ ; Γ ⊢ let box u ⇐ M in N : C

(fix)
∆ ; z : □A ⊢ M : A
─────────────────────────
∆ ; Γ ⊢ fix z in M : A
Theorem 21 (Structural & Cut). The following rules are admissible in iPCF:
1. (Weakening)
   ∆ ; Γ, Γ′ ⊢ M : A
   ──────────────────────
   ∆ ; Γ, x:A, Γ′ ⊢ M : A

2. (Exchange)
   ∆ ; Γ, x:A, y:B, Γ′ ⊢ M : C
   ──────────────────────────────
   ∆ ; Γ, y:B, x:A, Γ′ ⊢ M : C

3. (Contraction)
   ∆ ; Γ, x:A, y:A, Γ′ ⊢ M : A
   ───────────────────────────────────
   ∆ ; Γ, w:A, Γ′ ⊢ M [w, w/x, y] : A

4. (Cut)
   ∆ ; Γ ⊢ N : A      ∆ ; Γ, x:A, Γ′ ⊢ M : A
   ────────────────────────────────────────────
   ∆ ; Γ, Γ′ ⊢ M [N/x] : A
Proof. All by induction on the typing derivation of M . Verified in the proof assistant
Agda: see Appendix A.2.
Theorem 22 (Modal Structural & Cut). The following rules are admissible:
1. (Modal Weakening)
   ∆, ∆′ ; Γ ⊢ M : C
   ──────────────────────
   ∆, u:A, ∆′ ; Γ ⊢ M : C

2. (Modal Exchange)
   ∆, x:A, y:B, ∆′ ; Γ ⊢ M : C
   ──────────────────────────────
   ∆, y:B, x:A, ∆′ ; Γ ⊢ M : C

3. (Modal Contraction)
   ∆, x:A, y:A, ∆′ ; Γ ⊢ M : C
   ───────────────────────────────────
   ∆, w:A, ∆′ ; Γ ⊢ M [w, w/x, y] : C

4. (Modal Cut)
   ∆ ; · ⊢ N : A      ∆, u:A, ∆′ ; Γ ⊢ M : C
   ────────────────────────────────────────────
   ∆, ∆′ ; Γ ⊢ M [N/u] : C
Proof. All by induction on the typing derivation of M . Verified in the proof assistant
Agda: see Appendix A.2.
3.2.2
Free variables
In this section we prove a theorem regarding the occurrences of free variables in welltyped terms of iPCF. It turns out that, if a variable occurs free under a box (−)
construct, then it has to be in the modal context. This is the property that enforces
that intensions can only depend on intensions.
Definition 6 (Free variables).
1. The free variables fv (M ) of a term M are defined by induction on the structure
of the term:
fv (x) := {x}
fv (λx:A. M ) := fv (M ) − {x}
fv (M N ) := fv (M ) ∪ fv (N )
fv (box M ) := fv (M )
fv (fix z in M ) := fv (M ) − {z}
fv (let box u ⇐ M in N ) := fv (M ) ∪ (fv (N ) − {u})
2. The unboxed free variables fv0 (M ) of a term are those that do not occur under
the scope of a box (−) or fix z in (−) construct. They are formally defined by
replacing the following clauses in the definition of fv (−):
fv0 (box M ) := ∅
fv0 (fix z in M ) := ∅
3. The boxed free variables fv≥1 (M ) of a term M are those that do occur under
the scope of a box (−) construct. They are formally defined by replacing the
following clauses in the definition of fv (−):
def
fv≥1 (x) = ∅
def
fv≥1 (box M ) = fv (M )
def
fv≥1 (fix z in M ) = fv (M ) − {z}
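To make the three variants concrete, here is a small Haskell sketch of Definition 6 over a cut-down term datatype (constants omitted); the datatype and the names fv, fv0, fvB are ours, introduced only for illustration.

    import qualified Data.Set as Set
    import Data.Set (Set)

    data Tm
      = Var String
      | Lam String Tm          -- λx:A. M (type annotations omitted)
      | App Tm Tm
      | Box Tm                 -- box M
      | LetBox String Tm Tm    -- let box u <= M in N
      | Fix String Tm          -- fix z in M

    -- all free variables (Definition 6.1)
    fv :: Tm -> Set String
    fv (Var x)        = Set.singleton x
    fv (Lam x m)      = Set.delete x (fv m)
    fv (App m n)      = fv m `Set.union` fv n
    fv (Box m)        = fv m
    fv (LetBox u m n) = fv m `Set.union` Set.delete u (fv n)
    fv (Fix z m)      = Set.delete z (fv m)

    -- unboxed free variables (Definition 6.2)
    fv0 :: Tm -> Set String
    fv0 (Var x)        = Set.singleton x
    fv0 (Lam x m)      = Set.delete x (fv0 m)
    fv0 (App m n)      = fv0 m `Set.union` fv0 n
    fv0 (Box _)        = Set.empty
    fv0 (LetBox u m n) = fv0 m `Set.union` Set.delete u (fv0 n)
    fv0 (Fix _ _)      = Set.empty

    -- boxed free variables (Definition 6.3, written fv_{>=1} in the text)
    fvB :: Tm -> Set String
    fvB (Var _)        = Set.empty
    fvB (Lam x m)      = Set.delete x (fvB m)
    fvB (App m n)      = fvB m `Set.union` fvB n
    fvB (Box m)        = fv m
    fvB (LetBox u m n) = fvB m `Set.union` Set.delete u (fvB n)
    fvB (Fix z m)      = Set.delete z (fv m)

    -- Theorem 23.1 then reads: fv m == fv0 m `Set.union` fvB m, for every m.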
Theorem 23 (Free variables).

1. For every term M, fv(M) = fv0(M) ∪ fv≥1(M).

2. If ∆ ; Γ ⊢ M : A, then

    fv0(M) ⊆ Vars(Γ) ∪ Vars(∆)
    fv≥1(M) ⊆ Vars(∆)
Proof.
1. Trivial induction on M .
2. By induction on the derivation of ∆ ; Γ ⊢ M : A. We show the case for (□I); the first statement is trivial, so we show the second:
fv≥1 (box M )
= { definition }
fv (M )
= { (1) }
fv0 (M ) ∪ fv≥1 (M )
⊆ { IH, twice }
(Vars (∆) ∪ Vars (·)) ∪ Vars (∆)
= { definition }
Vars (∆)
3.3
Consistency of Intensional Operations
In this section we shall prove that the modal types of iPCF enable us to consistently
add intensional operations on the modal types. These are non-functional operations
on terms which are not ordinarily definable because they violate equality. All we
have to do is assume them as constants at modal types, define their behaviour by
introducing a notion of reduction, and then prove that the compatible closure of this
notion of reduction is confluent. A known corollary of confluence is that the equational
theory induced by the reduction is consistent, i.e. does not equate all terms.
There is a caveat involving extension flowing into intension. That is: we need to
exclude from consideration terms where a variable bound by a λ occurs under the
scope of a box (−) construct. These will never be well-typed, but—since we discuss
types and reduction orthogonally—we also need to explicitly exclude them here too.
3.3.1
Adding intensionality
Pfenning and Davies [2001] suggested that the modality can be used to signify intensionality. In fact, in [Davies and Pfenning, 1996, 2001] they had prevented reductions from happening under the box (−) construct, “ [...] since this would violate its intensional nature.” But the truth is that neither of these presentations included any
genuinely non-functional operations at modal types, and hence their only use was for
homogeneous staged metaprogramming. Adding intensional, non-functional operations is a more difficult task. Intensional operations are dependent on descriptions
and intensions rather than values and extensions. Hence, unlike reduction and evaluation, they cannot be blind to substitution. This is something that quickly came
to light as soon as Nanevski [2002] attempted to extend the system of Davies and
Pfenning to allow ‘intensional code analysis’ using nominal techniques.
A similar task was also recently taken up by Gabbay and Nanevski [Gabbay and
Nanevski, 2013], who attempted to add a construct is-app to the system of Davies
and Pfenning, along with the reduction rules
is-app (box P Q) −→ true
is-app (box M ) −→ false
if M is not of the form P Q
The function computed by is-app is truly intensional, as it depends solely on the
syntactic structure of its argument: it merely checks if it syntactically is an application
or not. As such, it can be considered a criterion of intensionality, albeit an extreme
one: its definability conclusively confirms the presence of computation up to syntax.
Gabbay and Nanevski tried to justify the inclusion of is-app by producing denotational semantics for modal types in which the semantic domain JAK directly involves
the actual closed terms of type A. However, something seems to have gone wrong
with substitution. In fact, we believe that their proof of soundness is wrong: it is not
hard to see that their semantics is not stable under the second of these two reductions:
take M to be u, and let the semantic environment map u to an application P Q, and
then notice that this leads to JtrueK = JfalseK. We can also see this in the fact that
their notion of reduction is not confluent. Here is the relevant counterexample: we
can reduce like this:
let box u ⇐ box (P Q) in is-app (box u) −→ is-app (box P Q) −→ true
But we could have also reduced like that:
let box u ⇐ box (P Q) in is-app (box u) −→ let box u ⇐ box (P Q) in false −→ false
This example is easy to find if one tries to plough through a proof of confluence: it is
very clearly not the case that M −→ N implies M [P/u] −→ N [P/u] if u is under a
box (−), exactly because of the presence of intensional operations such as is-app.
Perhaps the following idea is more workable: let us limit intensional operations
to a chosen set of functions f : T (A) → T (B) from terms of type A to terms
of type B, and then represent them in the language by a constant f˜, such that
f˜(box M ) −→ box f (M ). This set of functions would then be chosen so that they
satisfy some sanity conditions. Since we want to have a let construct that allows us
to substitute code for modal variables, the following general situation will occur: if
N −→ N 0 , we have
let box u ⇐ box M in N −→ N [M/u]
but also
let box u ⇐ box M in N −→ let box u ⇐ box M in N 0 −→ N 0 [M/u]
Thus, in order to have confluence, we need N [M/u] −→ N 0 [M/u]. This will only
be the case for reductions of the form f˜(box M ) → box f (M ) if f (N [M/u]) ≡
f (N )[M/u], i.e. if f is substitutive. But then a simple naturality argument gives
that f (N ) ≡ f (u[N/u]) ≡ f (u)[N/u], and hence f˜ is already definable by
λx : A. let box u ⇐ x in box f (u)
so such a ‘substitutive’ function is not intensional after all.
In fact, the only truly intensional operations we can add to our calculus will be
those acting on closed terms. We will see that this circumvents the problems that
arise when intensionality interacts with substitution. Hence, we will limit intensional
operations to the following set:
Definition 7 (Intensional operations). Let T (A) be the set of (α-equivalence classes
of) closed terms such that · ; · ` M : A. Then, the set of intensional operations,
F(A, B), is defined to be the set of all functions f : T (A) → T (B).
We will include all of these intensional operations f : T(A) → T(B) in our calculus, as constants:

    ∆ ; Γ ⊢ f̃ : □A → □B

with reduction rule f̃(box M) −→ box f(M), under the proviso that M is closed.
Of course, these also include operations on terms that might not be computable.
However, we are interested in proving consistency of intensional operations in the
most general setting. The questions of which intensional operations are computable,
and which primitives can and should be used to express them, are both still open.
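As an illustration of Definition 7 and the closedness proviso, here is a minimal Haskell sketch (all names are ours, and the term syntax is simplified to de Bruijn form) of one intensional operation on closed terms:

    data T = V Int | L T | A T T | B T | TT | FF   -- variables, λ, application, box, booleans
      deriving (Eq, Show)

    -- closedness check: every de Bruijn index is bound
    closed :: T -> Bool
    closed = go 0
      where
        go d (V i)   = i < d
        go d (L m)   = go (d + 1) m
        go d (A m n) = go d m && go d n
        go d (B m)   = go d m
        go _ _       = True

    -- an intensional operation in the sense of Definition 7: it looks only at
    -- the top-level syntax of a closed term, so it distinguishes β-equal terms
    isApp :: T -> T
    isApp (A _ _) = TT
    isApp _       = FF

    -- the rule f~(box M) --> box (f M), with the closedness proviso enforced
    applyIntensional :: (T -> T) -> T -> Maybe T
    applyIntensional f (B m) | closed m = Just (B (f m))
    applyIntensional _ _                = Nothing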
Figure 3.2: Reduction for Intensional PCF

    (→ β)        (λx:A. M) N −→ M[N/x]
    (β)          let box u ⇐ box M in N −→ N[M/u]
    (fix)        fix z in M −→ M[box (fix z in M)/z]
    (int)        f̃(box M) −→ box f(M)                    (M closed)

    (congλ)      if M −→ N  then  λx:A. M −→ λx:A. N
    (app1)       if M −→ N  then  M P −→ N P
    (app2)       if P −→ Q  then  M P −→ M Q
    (let-cong1)  if M −→ N  then  let box u ⇐ M in P −→ let box u ⇐ N in P
    (let-cong2)  if P −→ Q  then  let box u ⇐ M in P −→ let box u ⇐ M in Q

    (zero?1)     zero? 0̂ −→ true
    (zero?2)     zero? \widehat{n+1} −→ false
    (succ)       succ n̂ −→ \widehat{n+1}
    (pred)       pred n̂ −→ \widehat{n∸1}
    (⊃1)         ⊃G true M N −→ M
    (⊃2)         ⊃G false M N −→ N
Figure 3.3: Equational Theory for Intensional PCF

Function Spaces

    (→ β)      from  ∆ ; Γ ⊢ N : A  and  ∆ ; Γ, x:A ⊢ M : B
               infer  ∆ ; Γ ⊢ (λx:A. M) N = M[N/x] : B

Modality

    (β)        from  ∆ ; · ⊢ M : A  and  ∆, u:A ; Γ ⊢ N : C
               infer  ∆ ; Γ ⊢ let box u ⇐ box M in N = N[M/u] : C

    (fix)      from  ∆ ; z : □A ⊢ M : A
               infer  ∆ ; Γ ⊢ fix z in M = M[box (fix z in M)/z] : A

    (int)      from  · ; · ⊢ M : A  and  f ∈ F(A, B)
               infer  ∆ ; Γ ⊢ f̃(box M) = box f(M) : □B

    (let-cong) from  ∆ ; Γ ⊢ M = N : □A  and  ∆, u:A ; Γ ⊢ P = Q : C
               infer  ∆ ; Γ ⊢ let box u ⇐ M in P = let box u ⇐ N in Q : C
Remark. In addition to the above, one should also include (a) rules that ensure
that equality is an equivalence relation, (b) congruence rules for λ-abstraction and
application, and (c) rules corresponding to the behaviour of constants, as in Figure
3.2.
3.3.2
Reduction and Confluence
We introduce a notion of reduction for iPCF, which we present in Figure 3.2. Unlike many studies of PCF-inspired languages, we do not consider a reduction strategy but ordinary ‘non-deterministic’ β-reduction. We do so because we are trying to show consistency of the induced equational theory.
The equational theory induced by this notion of reduction is the one alluded to
in the previous section: it is a symmetric version of it, annotated with types. It can
be found in Figure 3.3. Note the fact that, like in the work of Davies and Pfenning,
we do not include the congruence rule for the modality:
    from  ∆ ; · ⊢ M = N : A  infer  ∆ ; Γ ⊢ box M = box N : □A        (cong)
In fact, the very absence of this rule is what will allow modal types to become intensional. Otherwise, the only new rules are intensional recursion, embodied by the rule
(fix), and intensional operations, exemplified by the rule (int).
We note that it seems perfectly reasonable to think that we should allow reductions
under fix, i.e. admit the rule
M −→ N
fix z in M −→ fix z in N
as M and N are expected to be of type A, which need not be modal. However,
the reduction fix z in M −→ M [box (fix z in M )/z] ‘freezes’ M under an occurrence
of box (−), so that no further reductions can take place within it. Thus, the above
rule would violate the intensional nature of boxes. We were likewise compelled to
def
define fv0 (fix z in M ) = ∅ in the previous section: we should already consider M to
be intensional, or under a box.
We can now show that
Theorem 24. The reduction relation −→ is confluent.
We will use a variant of the proof in [Kavvos, 2017b], i.e. the method of parallel
reduction. This kind of proof was originally discovered by Tait and Martin-Löf, and
is nicely documented in Takahashi [1995]. Because of the intensional nature of our
box (−) constructs, ours will be more nuanced and fiddly than any in op. cit. The
method is this: we will introduce a second notion of reduction,
=⇒ ⊆ Λ × Λ
which we will ‘sandwich’ between reduction proper and its transitive closure:
−→ ⊆ =⇒ ⊆ −→∗
We will then show that =⇒ has the diamond property. By the above inclusions, the
transitive closure =⇒∗ of =⇒ is then equal to −→∗ , and hence −→ is Church-Rosser.
In fact, we will follow Takahashi [1995] in doing something better: we will define for
each term M its complete development, M ? . The complete development is intuitively
defined by ‘unrolling’ all the redexes of M at once. We will then show that if M =⇒ N ,
then N =⇒ M?. M? will then suffice to close the diamond: if M =⇒ P and M =⇒ Q, then both P =⇒ M? and Q =⇒ M?, so the two sides of the diamond meet at M?.
The parallel reduction =⇒ is defined in Figure 3.4. Instead of the axiom (refl) we
would more commonly have an axiom for variables, x =⇒ x, and M =⇒ M would
be derivable. However, we have a congruence rule neither for box (−) nor for Löb’s rule, so that possibility would be precluded. We are thus forced to include
M =⇒ M , which slightly complicates the lemmas that follow.
The main lemma that usually underpins the confluence proof is this: if M =⇒ N and P =⇒ Q, then M[P/x] =⇒ N[Q/x]. However, this is intuitively wrong: no reductions
should happen under boxes, so this should only hold if we are substituting for a
variable not occurring under boxes. Hence, this lemma splits into three different
ones:
• P =⇒ Q implies M [P/x] =⇒ M [Q/x], if x does not occur under boxes: this is
the price to pay for replacing the variable axiom with (refl).
• M =⇒ N implies M [P/u] =⇒ N [P/u], even if u is under a box.
• If x does not occur under boxes, M =⇒ N and P =⇒ Q indeed imply
M [P/x] =⇒ N [Q/x]
But let us proceed with the proof.
Lemma 5. If M =⇒ N then M [P/u] =⇒ N [P/u].
Figure 3.4: Parallel Reduction

    (refl)      M =⇒ M
    (→ β)       if M =⇒ N and P =⇒ Q  then  (λx:A. M) P =⇒ N[Q/x]
    (congλ)     if M =⇒ N  then  λx:A. M =⇒ λx:A. N
    (app)       if M =⇒ N and P =⇒ Q  then  M P =⇒ N Q
    (⊃1)        if P =⇒ P′  then  ⊃G true P Q =⇒ P′
    (⊃2)        if Q =⇒ Q′  then  ⊃G false P Q =⇒ Q′
    (β)         if M =⇒ N  then  let box u ⇐ box P in M =⇒ N[P/u]
    (fix)       if M =⇒ M′  then  fix z in M =⇒ M′[box (fix z in M)/z]
    (int)       f̃(box M) =⇒ box f(M)                    (M closed)
    (let-cong)  if M =⇒ N and P =⇒ Q  then  let box u ⇐ M in P =⇒ let box u ⇐ N in Q
Remark. In addition to the above, one should also include rules for the constants,
but these are merely restatements of the rules in Figure 3.2.
Proof. By induction on the generation of M =⇒ N . Most cases trivially follow, or
consist of simple invocations of the IH. In the case of (→ β), the known substitution
lemma suffices. Let us look at the cases involving boxes.
Case(β). Then M =⇒ N is let box v ⇐ box R in S =⇒ S 0 [R/v] with S =⇒
S 0 . By the IH, we have that S[P/u] =⇒ S 0 [P/u], so
let box v ⇐ box R[P/u] in S[P/u] =⇒ S 0 [P/u][R[P/u]/v]
and this last is α-equivalent to S 0 [R/v][P/u] by the substitution lemma.
Case(fix). A similar application of the substitution lemma.
Case(int). Then M =⇒ N is f˜(box Q) =⇒ box f (Q). Hence
f˜(box Q) [P/u] ≡ f˜(box Q) =⇒ box f (Q) ≡ (box f (Q)) [P/u]
simply because both Q and f (Q) are closed.
Lemma 6. If P =⇒ Q and x 6∈ fv≥1 (M ), then M [P/x] =⇒ M [Q/x].
Proof. By induction on the term M . The only non-trivial cases are those for M a
variable, box M 0 or fix z in M 0 . In the first case, depending on which variable M is,
use either (refl), or the assumption P =⇒ Q. In the latter two, (box M 0 )[P/x] ≡
box M 0 ≡ (box M 0 )[Q/x] as x does not occur under a box, so use (refl), and similarly
for fix z in M 0 .
Lemma 7. If M =⇒ N , P =⇒ Q, and x 6∈ fv≥1 (M ), then
M [P/x] =⇒ N [Q/x]
Proof. By induction on the generation of M =⇒ N . The cases for most congruence
rules and constants follow trivially, or from the IH. We prove the rest.
Case(refl). Then M =⇒ N is actually M =⇒ M , so we use Lemma 6 to infer
M [P/x] =⇒ M [Q/x].
Case(int). Then M =⇒ N is actually
f˜(box M ) =⇒ box f (M ). But M
and f (M ) are closed, so f˜(box M ) [P/x] ≡ f˜(box M ) =⇒ box f (M ) ≡
(box f (M )) [Q/x].
Case(⊃i ). Then M =⇒ N is ⊃G true M N =⇒ M 0 with M =⇒ M 0 . By the
IH, M [P/x] =⇒ M 0 [Q/x], so
⊃G true M [P/x] N [P/x] =⇒ M 0 [Q/x]
by a single use of (⊃1 ). The case for false is similar.
Case(→ β). Then (λx0 :A. M )N =⇒ N 0 [M 0 /x0 ], where M =⇒ M 0 and N =⇒
N 0 . Then
((λx0 :A. M )N ) [P/x] ≡ (λx0 :A. M [P/x])(N [P/x])
But, by the IH, M [P/x] =⇒ M 0 [Q/x] and N [P/x] =⇒ N 0 [Q/x]. So by (→ β)
we have
(λx0 :A. M [P/x])(N [P/x]) =⇒ M 0 [Q/x] [N 0 [Q/x]/x0 ]
But this last is α-equivalent to (M 0 [N 0 /x0 ]) [Q/x] by the substitution lemma.
Case(β). Then let box u0 ⇐ box M in N =⇒ N 0 [M/u0 ] where N =⇒ N 0 . By
assumption, we have that x 6∈ fv (M ) and x 6∈ fv≥1 (N ). Hence, we have by
the IH that N [P/x] =⇒ N 0 [Q/x], so by applying (β) we get
(let box u0 ⇐ box M in N )[P/x] ≡ let box u0 ⇐ box M [P/x] in N [P/x]
≡ let box u0 ⇐ box M in N [P/x]
=⇒ N 0 [Q/x][M/u0 ]
But this last is α-equivalent to N 0 [M/u0 ][Q/x], by the substitution lemma and
the fact that x does not occur in M .
Case(fix). Then fix z in M =⇒ M 0 [box (fix z in M )/z], with M =⇒ M 0 . As
x 6∈ fv≥1 (fix z in M ), we have that x 6∈ fv (M ), and by Lemma 9, x 6∈ fv (M 0 )
either, so
(fix z in M )[P/x] ≡ fix z in M
and
M′[box (fix z in M)/z][Q/x] ≡ M′[Q/x][box (fix z in M[Q/x])/z] ≡ M′[box (fix z in M)/z]
Thus, a single use of (fix) suffices.
We now pull the following definition out of the hat:
Definition 8 (Complete development). The complete development M? of a term M is defined by the following clauses:

    x? ≝ x
    c? ≝ c                                           (c ∈ {f̃, n̂, zero?, . . .})
    (λx:A. M)? ≝ λx:A. M?
    (f̃(box M))? ≝ box f(M)                           if M is closed
    ((λx:A. M) N)? ≝ M?[N?/x]
    (⊃G true M N)? ≝ M?
    (⊃G false M N)? ≝ N?
    (M N)? ≝ M? N?
    (box M)? ≝ box M
    (let box u ⇐ box M in N)? ≝ N?[M/u]
    (let box u ⇐ M in N)? ≝ let box u ⇐ M? in N?
    (fix z in M)? ≝ M?[box (fix z in M)/z]
We need the following two technical results as well.
Lemma 8. M =⇒ M ?
Proof. By induction on the term M . Most cases follow immediately by (refl), or by
the IH and an application of the relevant rule. The case for box M follows by (refl),
the case for fix z in M follows by (fix), and the case for f˜(box M ) by (int).
Lemma 9 (BFV antimonotonicity). If M =⇒ N then fv≥1 (N ) ⊆ fv≥1 (M ).
Proof. By induction on M =⇒ N .
And here is the main result:
Theorem 25. If M =⇒ P , then P =⇒ M ? .
Proof. By induction on the generation of M =⇒ P . The case of (refl) follows by
Lemma 8, and the cases of congruence rules follow from the IH. We show the rest.
Case(→ β). Then we have (λx:A. M )N =⇒ M 0 [N 0 /x], with M =⇒ M 0
and N =⇒ N 0 . By the IH, M 0 =⇒ M ? and N 0 =⇒ N ? . We have that
x 6∈ fv≥1 (M ), so by Lemma 9 we get that x 6∈ fv≥1 (M 0 ). Hence, by Lemma 7
we get M 0 [N 0 /x] =⇒ M ? [N ? /x] ≡ ((λx:A. M ) N )? .
Case(β). Then we have
let box u ⇐ box M in N =⇒ N 0 [M/u]
where N =⇒ N 0 . By the IH, N 0 =⇒ N ? , so it follows that
N 0 [M/u] =⇒ N ? [M/u] ≡ (let box u ⇐ box M in N )?
by Lemma 5.
Case(fix). Then we have
fix z in M =⇒ M 0 [box (fix z in M )/z]
where M =⇒ M 0 . By the IH, M 0 =⇒ M ? . Hence
M 0 [box (fix z in M )/z] =⇒ M ? [box (fix z in M )/z] ≡ (fix z in M )?
by Lemma 5.
Case(int). Similar.
As a result,
Corollary 2. The equational theory of iPCF (Figure 3.3) is consistent.
3.4
Some important terms
Let us look at the kinds of terms we can write in iPCF.
From the axioms of S4   First, we can write a term corresponding to axiom K, the normality axiom of modal logics:

    axK ≝ λf : □(A → B). λx : □A. let box g ⇐ f in let box y ⇐ x in box (g y)

Then ⊢ axK : □(A → B) → (□A → □B). An intensional reading of this is the following: any function given as code can be transformed into an effective operation that maps code of type A to code of type B.

The rest of the axioms correspond to evaluating and quoting. Axiom T takes code to value, or intension to extension:

    ⊢ eval_A ≝ λx : □A. let box y ⇐ x in y : □A → A

and axiom 4 quotes code into code-for-code:

    ⊢ quote_A ≝ λx : □A. let box y ⇐ x in box (box y) : □A → □□A
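As a purely type-level illustration, and only under the simplifying assumption that box is modelled by a transparent wrapper with no genuine intensional content, the three axioms can be mirrored in Haskell as follows (all names are ours):

    newtype Box a = Box a   -- a toy 'code of a'; it carries no real intension

    axK :: Box (a -> b) -> Box a -> Box b
    axK (Box f) (Box x) = Box (f x)

    evalB :: Box a -> a          -- axiom T: code to value
    evalB (Box x) = x

    quoteB :: Box a -> Box (Box a)   -- axiom 4: code to code-for-code
    quoteB (Box x) = Box (Box x)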
Undefined The combination of eval and intensional fixed points leads to non-termination,
in a style reminiscent of the term (λx. xx)(λx. xx) of the untyped λ-calculus.
Let

    Ω_A ≝ fix z in (eval_A z)

Then ⊢ Ω_A : A, and

    Ω_A −→ eval_A (box Ω_A) −→∗ Ω_A
The Gödel-Löb axiom: intensional fixed points Since (fix) is Löb’s rule, we
expect to be able to write down a term corresponding to the Gödel-Löb axiom
of provability logic. We can, and it is an intensional fixed-point combinator :
    Y_A ≝ λx : □(□A → A). let box f ⇐ x in box (fix z in f z)

and ⊢ Y_A : □(□A → A) → □A. We observe that

    Y_A (box M) −→∗ box (fix z in (M z))
Notice that, in this term, the modal variable f occurs free under a fix z in (−)
construct. This will prove important in §8, where this occurrence will be prohibited.
Extensional Fixed Points   Perhaps surprisingly, the ordinary PCF Y combinator is also definable in iPCF. Let

    Y_A ≝ fix z in λf : A → A. f (eval z f)

Then ⊢ Y_A : (A → A) → A, so that

    Y_A −→∗ λf : A → A. f (eval (box Y_A) f)
        −→∗ λf : A → A. f (Y_A f)
Notice that, in this term, the modal variable z occurs free under a λ-abstraction.
This will prove important in §8, where this occurrence will be prohibited.
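Continuing the toy Haskell model (reusing the Box wrapper and evalB from the sketch above, and still entirely extensional), the Löb-style fixed point and the two combinators of this section can be written down as:

    -- intensional fixed point: from z : Box a |- m : a we obtain a term of type a,
    -- mirroring the (fix) rule
    ifix :: (Box a -> a) -> a
    ifix f = f (Box (ifix f))

    omegaA :: a
    omegaA = ifix evalB                     -- diverges, mirroring Ω_A

    yGL :: Box (Box a -> a) -> Box a        -- the Gödel-Löb combinator Y_A
    yGL (Box f) = Box (ifix f)

    yExt :: (a -> a) -> a                   -- the extensional Y combinator
    yExt = ifix (\z f -> f (evalB z f))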
3.5
Two intensional examples
No discussion of an intensional language with intensional recursion would be complete
without examples that use these two novel features. Our first example uses intensionality, albeit in a functional way, and is drawn from the study of PCF and issues
related to sequential vs. parallel (but not concurrent) computation. Our second example uses intensional recursion, so it is slightly more adventurous: it is a computer
virus.
3.5.1
‘Parallel or’ by dovetailing
In [Plotkin, 1977] Gordon Plotkin proved the following theorem: there is no term por : Bool → Bool → Bool of PCF such that por true M =β true and por M true =β true for any ⊢ M : Bool, whilst por false false =β false. Intuitively, the problem is
that por has to first examine one of its two arguments, and this can be troublesome
if that argument is non-terminating. It follows that the parallel or function is not
definable in PCF. In order to regain the property of so-called full abstraction for the
Scott model of PCF, a constant denoting this function has to be manually added to
PCF, and endowed with the above rather clunky operational semantics. See [Plotkin,
1977, Gunter, 1992, Mitchell, 1996, Streicher, 2006].
However, the parallel or function is a computable partial recursive functional [Streicher, 2006, Longley and Normann, 2015]. The way to prove that is intuitively the
following: given two closed terms M, N : Bool, take turns in β-reducing each one for one step: this is called dovetailing. If at any point one of the two terms reduces to
true, then output true. But if at any point both reduce to false, then output false.
This procedure is not definable in PCF because a candidate term por does not
have access to a code for its argument, but can only inspect its value. However, in
iPCF we can use the modality to obtain access to code, and intensional operations
to implement reduction. Suppose we pick a reduction strategy −→_r. Then, let us include a constant tick : □Bool → □Bool that implements one step of this reduction strategy on closed terms:

    tick (box M) −→ box N        (whenever M −→_r N, with M, N closed)

Also, let us include a constant done? : □Bool → Bool, which tells us if a closed term under a box is a normal form:

    done? (box M) −→ true        (M closed and normal)
    done? (box M) −→ false       (M closed, not normal)
It is not hard to see that these two intensional operations can be subsumed under
our previous scheme for introducing intensional operations: our proof still applies,
yielding a consistent equational system.
The above argument is now implemented by the following term:
    por :≡ Y (λpor. λx : □Bool. λy : □Bool.
              ⊃Bool (done? x)
                    (lor (eval x) (eval y))
                    (⊃Bool (done? y)
                           (ror (eval x) (eval y))
                           (por (tick x) (tick y))))

where lor, ror : Bool → Bool → Bool are terms defining the left-strict and right-strict versions of the ‘or’ connective respectively. Notice that the type of this term is □Bool → □Bool → Bool: we require intensional access to the terms of boolean type
in order to define this function!
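The dovetailing loop itself can be sketched in Haskell. Here tick and value are assumed parameters standing in for the constants tick and done?/eval above; the sketch only illustrates the search strategy, not iPCF itself:

    -- porBy tick value x y dovetails the reduction of two closed boolean terms:
    --   tick  performs one reduction step,
    --   value returns Just b exactly on normal forms.
    porBy :: (t -> t) -> (t -> Maybe Bool) -> t -> t -> Bool
    porBy tick value = go
      where
        go x y =
          case (value x, value y) of
            (Just True, _)           -> True
            (_, Just True)           -> True
            (Just False, Just False) -> False
            _                        -> go (step x) (step y)
          where
            -- only keep reducing terms that are not yet normal
            step t = maybe (tick t) (const t) (value t)

As with the iPCF term, porBy diverges when neither argument ever reaches true and at least one of them never normalises, which is exactly the behaviour parallel or is allowed to have.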
3.5.2
A computer virus
Abstract computer virology is the study of formalisms that model computer viruses.
There are many ways to formalise viruses. We will use the model of Adleman [1990],
where files can be interpreted either as data, or as functions. We introduce a data
type F of files, and two constants
    in : □(F → F) → F        and        out : F → □(F → F)
If F is a file, then out F is that file interpreted as a program, and similarly for
in. We ask that out (in M) −→ M, making □(F → F) a retract of F.2 This
might seem the same as the situation where F → F is a retract of F , which yields
models of the (untyped) λ-calculus, and is not trivial to construct [Barendregt, 1984,
§5.4]. However, in our case it is not nearly as worrying: □(F → F) is populated
by programs and codes, not by actual functions. Under this interpretation, the pair
(in, out) corresponds to a kind of Gödel numbering—especially if F is N.
Now, in Adleman’s model, a virus is a given by its infected form, which either
injures, infects, or imitates other programs. The details are unimportant in the
present discussion, save from the fact that the virus needs to have access to code that
it can use to infect other executables. One can hence construct such a virus from its
infection routine, by using Kleene’s SRT. Let us model it by a term
    ⊢ infect : □(F → F) → F → F
which accepts a piece of viral code and an executable file, and it returns either the
file itself, or a version infected with the viral code. We can then define a term
    ⊢ virus ≝ fix z in (infect z) : F → F
so that
virus −→∗ infect (box virus)
which is a program that is ready to infect its input with its own code.
2
Actually, in §8.5 and §8.6 we will see it is very easy to construct examples for the apparently
more natural situation where in : (F → F ) → F , out : F → (F → F ), and out (in M ) −→
evalF →F M . Nevertheless, our setup is slightly more well-adapted to virology.
3.6
Open Questions
We have achieved the desideratum of an intensional programming language, with
intensional recursion. There are two main questions that result from this development.
Firstly, does there exist a good set of intensional primitives from which all others
are definable? Is there perhaps more than one such set, hence providing us with a
choice of programming primitives?
Secondly, what is the exact kind of programming power that we have unleashed?
Does it lead to interesting programs that we have not been able to write before?
We have outlined some speculative applications for intensional recursion in §2.1.4. Is
iPCF a useful tool when it comes to attacking these?
We discuss some more aspects of iPCF in the concluding chapter (§9.2).
Chapter 4
Categories and Intensionality1
We turn now to the discussion of the categorical modelling of intensionality.
We have discussed the importance of intensionality for Computer Science in §1.1.
One might therefore ask why the concept has not led to many exciting developments.
Abramsky [2014] suggests that it may simply be that the extensional paradigm is
already sufficiently challenging:
“ [...] while Computer Science embraces wider notions of processes than
computability theory, it has tended to refrain from studying intensional
computation, despite its apparent expressive potential. This reluctance is
probably linked to the fact that it has proved difficult enough to achieve
software reliability even while remaining within the confines of the extensional paradigm. Nevertheless, it seems reasonable to suppose that
understanding and harnessing intensional methods offers a challenge for
computer science which it must eventually address.”
But we believe that there is also a deeper reason: once we step outside extensionality,
there are no rules: one might even say that ‘anything goes:’ this is what Abramsky
means by the phrase “loose baggy monster” (quoted at the start of §1.2). A natural
reaction to this state of affairs is to turn to category theory in an attempt to find
some structure that can put things into perspective, or simply provide a language for
studying the interplay between the extensional and the intensional.
Surprisingly, very little has been said about intensionality in the categorical domain. Lawvere [1969, 2006] observes that there are categories which are not well-pointed, hence—in some sense—‘intensional.’ But if we only allow for slightly more generality, this intensionality vanishes: there is only one notion of equality.
1
A preliminary form of the results in this chapter was first published as [Kavvos, 2017a], which
is available at Springer: https://dx.doi.org/10.1007/978-3-662-54458-7_32
We shall take the hint regarding the relationship between modal logic and intensionality from our discussion in §1.4. We already know that there is a deep connection
between logic, type theory, and category theory, namely the Curry-Howard-Lambek
correspondence. This makes it likely that attempting to transport the reading of
modality-as-intension to the realm of category theory could lay the foundation for a
basic theory of intensionality. We hence revisit the appropriate categorical semantics
of type theories in the spirit of S4, which was introduced by Bierman and de Paiva
[2000]. We find that it is not appropriate for our purposes, and we argue to that
effect in §4.1.
Taking all of these points into account, one can only surmise that there has to be
a radical shift in our perspective: we need to step outside classical 1-category theory.
Fortunately, this necessary groundwork has been laid by Čubrić et al. [1998] and their
discovery of P-category theory. We introduce P-category theory in §4.1.2, and discuss
its use in modelling intensionality.
All that remains is to introduce a new concept that ties intensionality and extensionality together under the same roof. This is the notion of an exposure, which we
introduce and study in §4.2.
4.1
Categories are not intensional
At the outset, things look promising: let there be a category C with a terminal object
1. Arrows of type x : 1 → C are called points of the object C. An arrow f : C → A
introduces a map
x:1→C
7−→
f ◦x:1→A
from points of C to points of A. In this setting, Lawvere [1969, 2006] observes that
we may have two distinct arrows f, g : C → A that induce the same map, i.e. such
that
∀x : 1 → C. f ◦ x = g ◦ x
all whilst f 6= g. In this case, we say that the category C does not have enough points,
or is not well-pointed.
Nevertheless, lack of enough points is not enough to have ‘intensionality,’ and the
discussion in [Awodey, 2010, §2.3] provides the necessary intuition. Up to now, we
have focused on points x : 1 → C. These can be construed as ‘tests’, in the sense that
we can look at each f ◦ x and infer some information about the arrow f : C → A, e.g.
its value at some ‘argument’ x : 1 → C. However, we can conceive of more involved
test objects T , and significantly more comprehensive ‘tests’ of type x : T → C which
we can call generalised points. Arrows f : C → A are completely distinguishable
if given arbitrary generalised points. The argument is dumbfoundingly trivial: the
most thorough ‘test object’ is C itself, and for that one there’s the generalised point
idC : C → C with the unfortunate property that f ◦ idC = f 6= g = g ◦ idC .
We can therefore draw the conclusion that categories cannot model intensionality:
there cannot be two distinct arrows f and g that are indistinguishable within the
category. Of course, we can always quotient C by some compatible equivalence relation
∼ to obtain C/ ∼. Then the intensional universe would be C, and its extensional
version would be C/ ∼. But that would entail that we are no longer operating within
a single mathematical universe, i.e. a single category!
4.1.1
Intension, Modality, and Categories
We thus return to the modality-as-intention interpretation of §1.4 to look for the answer. From a categorical perspective, all is well with the intuitions we have developed
there and in §3, save the punchline: the categorical semantics of the S4 box modality,
due to Bierman and de Paiva [2000] and Kobayashi [1997], specifies that □ : C −→ C is part of a monoidal comonad (□, ε, δ) on a cartesian closed category C. Let us define some of these notions for the sake of completeness.
Definition 9. A comonad (Q, ε, δ) consists of an endofunctor Q : C −→ C and two natural transformations, ε : Q ⇒ Id_C and δ : Q ⇒ Q², satisfying the comonad laws:

    ε_{QA} ∘ δ_A = id_{QA}        Qε_A ∘ δ_A = id_{QA}        δ_{QA} ∘ δ_A = Qδ_A ∘ δ_A
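For orientation only, Definition 9 can be transcribed into a small Haskell class of our own (not a library API), with the comonad laws recorded as comments:

    -- counit plays the role of ε, comult the role of δ
    class Functor q => Comonad q where
      counit :: q a -> a
      comult :: q a -> q (q a)

    -- Laws, read as equalities of functions:
    --   counit . comult       = id            (ε_Q ∘ δ = id)
    --   fmap counit . comult  = id            (Qε ∘ δ = id)
    --   comult . comult       = fmap comult . comult   (δ_Q ∘ δ = Qδ ∘ δ)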
In the cartesian case, monoidality requires the provision of morphisms

    m_{A,B} : QA × QB → Q(A × B)        m_0 : 1 → Q1

natural in each pair of objects A, B ∈ C, which must also make certain diagrams commute. We will be particularly interested in the case where the functor is strong monoidal, i.e. m_{A,B} and m_0 are natural isomorphisms. It has been shown in the technical report [Kavvos, 2017c] that this is the same as a product-preserving functor.

The transformations δ : Q ⇒ Q² and ε : Q ⇒ Id_C are monoidal if the following equations hold:

    ε_{A×B} ∘ m_{A,B} = ε_A × ε_B
    ε_1 ∘ m_0 = id_1
    δ_{A×B} ∘ m_{A,B} = Qm_{A,B} ∘ m_{QA,QB} ∘ (δ_A × δ_B)
    δ_1 ∘ m_0 = Qm_0 ∘ m_0
Please refer to [Mac Lane, 1978, §XI.2] or [Melliès, 2009, §5] for more details, and the
missing commuting diagrams.
Now, as □ : C −→ C is a functor, it will unfortunately trivially satisfy

    f = g    =⇒    □f = □g

and will hence necessarily validate the congruence rule for the modality:

    from  ∆ ; · ⊢ M = N : A  infer  ∆ ; Γ ⊢ box M = box N : □A        (cong)

This does not conform to the ‘no reductions under box (−)’ restriction, and hence disrupts the intensional nature of the modal types in iPCF.
As if this were not enough, we will now present another argument that provides the last straw for the monoidal comonad interpretation. Intuitively, if □A is to represent code of type A, then there should be many more points 1 → □A than points 1 → A: there is more than one expression corresponding to the same value in any interesting logical system. Under a very mild assumption, this desideratum fails. To show that, suppose we have a monoidal comonad (□, ε, δ). Furthermore, suppose the components of δ : □ ⇒ □² satisfy the following definition:
Definition 10. The component δ_A : □A → □²A is a reasonable quoting device at A just if

    δ_A ∘ a = □a ∘ m_0

for every point a : 1 → □A. This equation may be type-theoretically expressed in iPCF as the following equation, for any ⊢ M : □A:

    ⊢ let box u ⇐ M in box (box u) = box M : □□A
Then,

Proposition 1. If each component δ_A : □A → □²A of a monoidal comonad (□, ε, δ) is a reasonable quoting device, then there is a natural isomorphism

    C(1, −) ≅ C(1, □(−))

Proof. We can define maps

    in : C(1, A) → C(1, □A),    x ↦ □x ∘ m_0
    out : C(1, □A) → C(1, A),   a ↦ ε_A ∘ a

and then calculate:

    out(in(x)) = ε_A ∘ □x ∘ m_0 = x ∘ ε_1 ∘ m_0 = x

where the last equality is because 1 is a terminal object. Similarly,

    in(out(a)) = □(ε_A ∘ a) ∘ m_0 = □ε_A ∘ □a ∘ m_0 = □ε_A ∘ δ_A ∘ a = a

where we have only used our ‘reasonable’ condition, and one of the comonad laws. Naturality of the isomorphism follows by functoriality of □ and naturality of ε.
Hence, in these circumstances, there are no more codes than values!
This definition of a reasonable quoting device is slightly mysterious. In fact, it follows from the commutation of the following more general square, for each f : QA → QB:

    δ_B ∘ f = Qf ∘ δ_A

Indeed, given a : 1 → QA, consider a ∘ m_0^{-1} : Q1 → QA. Commutation of the square then yields

    δ_A ∘ a ∘ m_0^{-1} = Q(a ∘ m_0^{-1}) ∘ δ_1

Taking the m_0 to the other side, we have

    δ_A ∘ a = Qa ∘ Qm_0^{-1} ∘ δ_1 ∘ m_0 = Qa ∘ Qm_0^{-1} ∘ Qm_0 ∘ m_0 = Qa ∘ m_0

by the monoidality of δ. In turn, commutation of this square for every f : QA → QB corresponds to the comonad being idempotent.
Theorem 26 (Idempotence). Given a comonad (Q, ε, δ), the following are equivalent:

1. δ : Q ⇒ Q² is an isomorphism.
2. δ ∘ ε_Q : Q² ⇒ Q² is the identity natural transformation.
3. For all f : QA → QB, Qf ∘ δ_A = δ_B ∘ f.

If any one of these holds, we say that (Q, ε, δ) is idempotent.

Proof. We prove (2) ⇒ (3) ⇒ (1) ⇒ (2).

Case (2 ⇒ 3). We have

    Qf ∘ δ_A
      = { by (2) }
    δ_B ∘ ε_{QB} ∘ Qf ∘ δ_A
      = { naturality of ε }
    δ_B ∘ f ∘ ε_{QA} ∘ δ_A
      = { comonad equation }
    δ_B ∘ f

Case (3 ⇒ 1). We already know that ε_Q ∘ δ = Id from the comonad equations, so it remains to prove that δ ∘ ε_Q is the identity natural transformation on Q².

    δ_A ∘ ε_{QA}
      = { by (3) }
    Qε_{QA} ∘ δ_{QA}
      = { comonad equation }
    id_{Q²A}

Hence δ^{-1} = ε_Q.

Case (1 ⇒ 2). We have

    δ_A ∘ ε_{QA}
      = { by (1) }
    δ_A ∘ ε_{QA} ∘ δ_A ∘ δ_A^{-1}
      = { comonad equation }
    δ_A ∘ δ_A^{-1}
      = { δ iso }
    id_{Q²A}
The behaviour of code will often be idempotent in the above sense: once something
is quoted code, it is in the realm of syntax, and more layers of quoting do not change
its quality as sense. Thus, the above argument delivers a fatal blow to the monoidal
comonad approach.
4.1.2
PERs and P-categories
The way out of this seeming impasse is to step outside 1-category theory. We shall
use the notion of P-category, which was introduced by Čubrić et al. [1998] precisely
so the authors could deal with a form of intensionality.2
P-category theory is essentially category theory up to a partial equivalence relation.
2
For the record, the gist of [Čubrić et al., 1998] is that the Yoneda embedding on the categorical
term model of typed λ-calculus is a key ingredient in normalisation by evaluation—with the proviso
that terms are not strictly identified up to βη equality!
Definition 11. A partial equivalence relation (PER) is a symmetric and transitive
relation.
Partial equivalence relations were introduced to Theoretical Computer Science
by Turing in an unpublished manuscript, and brought to the study of semantics by
Girard [1972] and Scott [1975]. Broadly speaking, we can view an equivalence relation
on a set as a notion of equality for that set. However, partial equivalence relations
might not be reflexive. Elements that are not related to themselves can be understood
as not being well-defined. We can, for example, define a PER ∼ between sequences
of rationals as follows: {xi } ∼ {yi } just if both sequences are Cauchy and converge
to the same real number. Sequences that are not Cauchy cannot be real numbers.
Definition 12. A P-set is a pair A = (|A| , ∼A ) consisting of a set |A| and a PER
∼A on A.
We will formally distinguish between elements x ∈ |A| of the P-set A, and points
x ∈ A of A: for the latter we will also require that x ∼A x, i.e. that they be
well-defined. Given a P-set A, its domain dom(A) is the set of its points.
The notion of operation will be instrumental in the development of our theory. An
operation is a transformation between the elements of P-sets that is not functional,
in that it need not respect the PERs on P-sets.
Definition 13 (Operation). Given two P-sets A = (|A| , ∼A ) and B = (|B| , ∼B ), an
operation, written
f : A 99K B
is a function f : |A| → |B| such that x ∼A x implies f (x) ∼B f (x).
I.e. an operation takes elements to elements, but when given a point (well-defined
element) also returns a point. Some operations are more well-behaved:
Definition 14. An operation f : A 99K B is a P-function, written
f :A→B
just if x ∼A y implies f (x) ∼B f (y).
We will later see that P-sets and P-functions form a cartesian closed P-category
(Theorem 27). The main ingredients needed for that theorem are the subject of the
first three of the following examples.
Example 1.

1. The P-set 1 is defined to be ({∗}, {(∗, ∗)}).

2. Given P-sets A = (|A|, ∼_A) and B = (|B|, ∼_B), we define the P-set A × B by |A × B| ≝ |A| × |B| and (a, b) ∼_{A×B} (a′, b′) just if a ∼_A a′ and b ∼_B b′.

3. Given P-sets A = (|A|, ∼_A) and B = (|B|, ∼_B), the P-functions f : A → B form a P-set B^A = (|B^A|, ∼_{B^A}) as follows: |B^A| ≝ { f | f : A 99K B }, and f ∼_{B^A} g just if a ∼_A a′ implies f(a) ∼_B g(a′).

4. Given P-sets A = (|A|, ∼_A) and B = (|B|, ∼_B), the P-operations f : A 99K B form a P-set with the same underlying set |B^A|, but with f ∼ g just if f = g.
In the definition of the P-set of P-functions, it is evident that all operations f : A 99K B are present as elements in |B^A|. However, they are only in dom(B^A)
if they are P-functions. This pattern of ‘junk’ being present amongst elements is
characteristic when it comes to PERs, and it is the reason we need the notion of points.
A P-point of the P-set A would be a P-function x : 1 → A, which is determined by
the point x(∗) ∈ dom(A). We will systematically confuse a P-point x : 1 → A with
the point x(∗) ∈ dom(A).
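A finite-sample Haskell sketch may help fix intuitions for Definitions 12–14; the names are ours, and the checks only inspect a supplied list of sample elements rather than the whole carrier:

    -- a P-set over the type a is given by its PER
    data PSet a = PSet { per :: a -> a -> Bool }

    point :: PSet a -> a -> Bool
    point s x = per s x x                    -- well-defined elements

    -- f is an operation if it maps points to points...
    isOperation :: PSet a -> PSet b -> (a -> b) -> [a] -> Bool
    isOperation sa sb f xs =
      and [ point sb (f x) | x <- xs, point sa x ]

    -- ...and a P-function if it moreover respects the PERs
    isPFunction :: PSet a -> PSet b -> (a -> b) -> [a] -> Bool
    isPFunction sa sb f xs =
      and [ per sb (f x) (f y) | x <- xs, y <- xs, per sa x y ]

    -- the product P-set of Example 1.2
    prod :: PSet a -> PSet b -> PSet (a, b)
    prod sa sb = PSet (\(a, b) (a', b') -> per sa a a' && per sb b b')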
We can finally define what it means to be a
Definition 15 (P-category). A P-category (C, ∼) consists of:

• a set of objects ob(C);
• for any two objects A, B ∈ ob(C), a P-set C(A, B) = (|C(A, B)|, ∼_{C(A,B)});
• for each object A ∈ ob(C), a point id_A ∈ C(A, A);
• for any three objects A, B, C ∈ ob(C), a P-function c_{A,B,C} : C(A, B) × C(B, C) → C(A, C), for which we write g ∘ f ≝ c_{A,B,C}(f, g),

such that, for any point f ∈ C(A, B) we have

    f ∘ id_A ∼_{C(A,B)} f        id_B ∘ f ∼_{C(A,B)} f

and for any f ∈ C(A, B), g ∈ C(B, C) and h ∈ C(C, D), we have

    h ∘ (g ∘ f) ∼_{C(A,D)} (h ∘ g) ∘ f
In a P-category we have an ordinary set of objects, but only a P-set of morphisms.
We will say that f is an arrow of C with domain A and codomain B, and write
f : A → B, only whenever f is a well-defined morphism, i.e. f ∈ dom(C(A, B)).
Furthermore, we will variously refer to P-categories by Fraktur letters B, C, . . . ,
without mentioning the family of relations {∼C(A,B) }A,B∈C . Sometimes we might use
the same sets of morphisms |C(A, B)|, but with a different PER; we will then indicate
which PER we are using, by writing e.g. (C, ∼) or (C, =). When the types of two
morphisms f and g are evident, we will write f ∼ g without further ado.
We can think of the morphisms as intensional, and of the PER on them as describing extensional equality. That composition is a P-function encodes the requirement
that composition respects extensional equality: if f ∼ f 0 and g ∼ g 0 , then g◦f ∼ g 0 ◦f 0 .
As is expected, P-categories come with associated notions of functor and natural
transformation.
Definition 16 (P-functor). Let C, D be P-categories. A P-functor F : C −→ D from
C to D consists of a map assigning an object F X ∈ D for each object X ∈ C, and a
family of P-functions,
FA,B : C(A, B) → D(F A, F B)
for each pair of objects A, B ∈ C, such that
F (idA ) ∼ idF A
F (g ◦ f ) ∼ F g ◦ F f
for all pairs of arrows f : A → B and g : B → C.
Definition 17 (P-natural transformation). Let F, G : C −→ D be P-functors. A
P-natural transformation θ : F ⇒ G consists of an arrow θA : F A → GA in D for
each A ∈ C, such that for each f : A → B we have
θB ◦ F f ∼ Gf ◦ θA
Finally, the definitions of finite products and exponentials carry over smoothly.
The various components of the definitions are required to be unique with respect to
their universal property, but only up to the PERs.
Definition 18 (Terminal object). Let C be a P-category. An object 1 ∈ C is terminal
just if for every C ∈ C there is an arrow
!C : C → 1
such that !C ∼ !C , and for every arrow h : C → 1 we have h ∼ !C .
Definition 19 (Binary Product). Let C be a P-category, and let A, B ∈ C. The
object A × B ∈ C is a product of A and B if there are arrows
    π_1 : A × B → A        π_2 : A × B → B
and for any object C ∈ C, there is a P-function
h·, ·i : C(C, A) × C(C, B) → C(C, A × B)
such that for every f : C → A and g : C → B we have
π1 ◦ hf, gi ∼ f
π2 ◦ hf, gi ∼ g
and, for any h : C → A × B, we have hπ1 ◦ h, π2 ◦ hi ∼ h.
Of course, the usual calculational rules of products still hold, e.g.
hf, gi ◦ h ∼ hf ◦ h, g ◦ hi
(f × g) ◦ hh, ki ∼ hf ◦ h, g ◦ ki
and so on.
Definition 20. A cartesian P-category is a P-category that has a terminal object
and binary products.
Definition 21 (Exponential). Let C be a cartesian P-category, and let A, B ∈ C.
The object B A ∈ C is an exponential of A and B just if there is an arrow
evA,B : B A × A → B
such that, for each C ∈ C, there is a P-function
λC (−) : C(C × A, B) → C(C, B A )
such that, for any h : C × A → B and k : C → B A ,
evA,B ◦ (λC (h) × idA ) ∼ h
λC (evA,B ◦ (k × idA )) ∼ k
Definition 22 (P-ccc). A cartesian closed P-category, or P-ccc, is a P-category that
has a terminal object, binary products, and an exponential for each pair of objects.
Theorem 27 (Čubrić et al. [1998]). The P-category PSet of P-sets and P-functions
is a P-ccc.
We warn the reader that we might be lax with the prefix “P-” in the rest of this thesis,
as the overwhelming majority of it will solely concern P-categories.
4.2
Exposures
In this section we introduce a new P-categorical notion, that of an exposure. The aim
of exposures is to P-categorically capture the modality-as-intension interpretation.
An exposure is almost a functor : it is a map of objects and arrows of a P-category
into another, and it preserves identities and composition. It is not a functor because
it does not preserve the PERs on the hom-sets of the source P-category. Instead, it
reflects them. In that sense, it may expose the structure of a particular arrow by
uncovering its inner workings, irrespective of the extensional equality represented by
the PER. The inner workings are then represented as a well-defined arrow of some,
possibly the same, P-category.
Definition 23. An exposure Q : (B, ∼) # (C, ∼) consists of
(i) a map assigning to each object A ∈ B an object QA ∈ C, and
(ii) for each A, B ∈ C, an operation
QA,B : B(A, B) 99K C(QA, QB)
for which we simply write Qf when the source and target of f are known
such that
(i) Q(idA ) ∼ idQA ;
(ii) Q(g ◦ f ) ∼ Qg ◦ Qf , for any arrows f : A → B and g : B → C, and
(iii) QA,B reflects PERs: if Qf ∼ Qg then f ∼ g.
Like functors, exposures compose: it suffices to use reflection of PERs twice. There
is a close relationship between functors and exposures. In fact, if exposures were
functors, they would be faithful functors.
Lemma 10. A (P-)functor Q : (B, ∼) −→ (C, ∼) is an exposure if and only if it is
(P-)faithful.
Proof. If Q : (B, ∼) −→ (C, ∼) is also an exposure, then the morphism map QA,B :
B(A, B) → C(QA, QB) is a P-function which also reflects PERs, hence Q is a (P)faithful (P-)functor. The converse is similar.
Thus, the identity functor is an exposure IdB : (B, ∼) # (B, ∼). This lemma also
enables us to compose an exposure Q : (B, ∼) # (C, ∼) with a faithful functor in
either direction: if F : (A, ∼) −→ (B, ∼) is faithful then we can define the exposure
Q ∘ F : (A, ∼) # (C, ∼) by

    (Q ∘ F)(A) ≝ Q(F A)
    (Q ∘ F)_{A,B} : A(A, B) 99K C(Q(F A), Q(F B)),    (Q ∘ F)_{A,B} ≝ f ↦ Q_{A,B}(F_{A,B}(f))

and similarly for pre-composition.
The notion of natural transformations also naturally carries over to the setting of
exposures:
Definition 24. A natural transformation of exposures t : F ⇒ G between two exposures F, G : B # C consists of an arrow t_A : F A → GA of C for each object A ∈ B, such that, for every arrow f : A → B of B, the naturality square commutes up to ∼:

    t_B ∘ F f ∼ Gf ∘ t_A
Nevertheless, we must not be cavalier when adopting practices from 1-category theory.
For example, we cannot arbitrarily compose natural transformations of exposures with other exposures. Let t : F ⇒ G be a natural transformation between two exposures F, G : B # C. If R : A # B is an exposure, then we can define

    (t_R)_A ≝ t_{RA} : F RA → GRA

These components form a natural transformation t_R : F R ⇒ GR: the defining square

    (t_R)_B ∘ F Rf ∼ GRf ∘ (t_R)_A

commutes, as it is the naturality square of t : F ⇒ G at Rf : RA → RB. But if we instead have a P-functor L : C −→ D, we can define

    Lt : LF ⇒ LG        (Lt)_A ≝ L(t_A) : LF A → LGA
We must have that L be a P-functor for this to be natural: the diagram we want is
commutative only if
L(tB ) ◦ LF f ∼ L(tB ◦ F f ) ∼ L(Gf ◦ tA ) ∼ LGf ◦ L(tA )
Whereas the first and last step would hold if L were merely an exposure, the middle
step requires that we can reason equationally ‘under L,’ which can only happen if L
preserves the PERs.
4.2.1
Intensional Equality
As exposures give a handle on the internal structure of arrows, they can be used to
define intensional equality: if the images of two arrows under the same exposure Q
are extensionally equal, then the arrows have the same implementation, so they are
intensionally equal. This is an exact interpretation of the slogan of Abramsky [2014]:
intensions become extensions.
Definition 25 (Intensional Equality). Let Q : (B, ∼) # (C, ∼) be an exposure. Two arrows f, g : A → B of B are intensionally equal (up to Q), written

    f ≈_Q^{A,B} g

just if Qf ∼ Qg.
We often drop the source and target superscripts, and merely write f ≈Q g. The fact
that exposures reflect PERs guarantees that
Lemma 11. Intensional equality implies extensional equality.
Proof. f ≈Q g : A → B is Qf ∼ Qg, which implies f ∼ g.
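The following toy Haskell sketch (entirely ours) illustrates the intended reading: arrows carry a syntactic description next to their behaviour, extensional equality compares behaviour on a sample, and the exposure reveals the description.

    -- an 'arrow' together with a syntactic description of its implementation
    data Arrow = Arrow { source :: String, run :: Int -> Int }

    extEq :: [Int] -> Arrow -> Arrow -> Bool          -- extensional equality on a sample
    extEq sample f g = and [ run f x == run g x | x <- sample ]

    expose :: Arrow -> String                         -- plays the role of Q on arrows
    expose = source

    intEq :: Arrow -> Arrow -> Bool                   -- intensional equality: equal exposed forms
    intEq f g = expose f == expose g

    -- double and twice agree on every input, yet are not intensionally equal:
    double, twice :: Arrow
    double = Arrow "\\x -> 2 * x" (\x -> 2 * x)
    twice  = Arrow "\\x -> x + x" (\x -> x + x)
    -- extEq [0..10] double twice == True, but intEq double twice == False.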
In some cases the converse is true, but not for general arrows A → B: we often need some restrictions on A, perhaps that it is an intensional context, i.e. of the form ∏_{i=1}^{n} QA_i, or that it is simply a point. In that case, we use the following definition.

Definition 26. Let Q : (B, ∼) # (C, ∼) be an exposure.

1. Let A be a class of objects of B. An object U ∈ B is A-univalent (up to Q)³ just if extensional equality implies intensional equality for arrows with domain in A and codomain U, i.e. for every f, g : A → U with A ∈ A,

    f ∼ g    =⇒    f ≈_Q g

2. If B has a terminal object 1, and U is {1}-univalent, we say that U is point-univalent.

³ Not to be confused with Voevodsky’s univalence axiom. When examining this thesis, Martin Hyland strongly recommended that the name be changed. This will most likely happen before subsequent publications.
Intensional equality is a PER, as we have defined it through Q and extensional
equality, which is a PER itself. Replacing ∼ with ≈Q yields another P-category,
which is possibly ‘more intensional’ than (B, ∼).
Definition 27. Given an exposure Q : (B, ∼) # (C, ∼), we define the x-ray category of (B, ∼) up to Q by replacing the hom-P-sets with (|B(A, B)|, ≈_Q^{A,B}). We denote the x-ray category by (B, ≈_Q).
To show that this is a valid definition, we have to check that composition respects
intensional equality, and that the necessary axioms hold. If f ≈Q k and g ≈Q h, then
Q(g ◦ f ) ∼ Qg ◦ Qf ∼ Qh ◦ Qk ∼ Q(h ◦ k)
because exposures preserve composition; hence ◦ is indeed a well-defined P-function
◦ : (B, ≈Q )(A, B) × (B, ≈Q )(B, C) → (B, ≈Q )(A, C)
Similarly, the PER ≈Q hereditarily satisfies associativity of composition, and—as Q
preserves identities—also satisfies the identity laws.
4.2.2
Cartesian and Product-Preserving Exposures
Bare exposures offer no promises or guarantees regarding intensional equality. For
example, it is not a given that π1 ◦ hf, gi ≈Q f . However, from a certain viewpoint
one may argue there is no grand intensional content in projecting a component: it
is merely a structural operation and not much more. Requiring this of an exposure
strengthens the notion of intensional equality, and further reinforces the point that
exposures offer a stratified and modular view of equality.
Definition 28. Let B be a cartesian P-category, and let Q : B # C be an exposure.
We say that Q is cartesian just if for any arrows f : C → A, g : C → B, h : C →
A × B, and k : D → 1 we have
π1 ◦ hf, gi ≈Q f
π2 ◦ hf, gi ≈Q g
hπ1 ◦ h, π2 ◦ hi ≈Q h
k ≈Q !D
However, this is not enough to formally regain standard equations like hf, gi ◦ h ≈Q
hf ◦ h, g ◦ hi. This is because we cannot be certain that the pairing function h·, ·i
preserves the intensional equality ≈Q . We need something quite a bit stronger, which
is to ask for full extensional preservation of products.
Definition 29. A cartesian exposure Q : B # C of a cartesian P-category B in a cartesian P-category C is product-preserving whenever the canonical arrows

    ⟨Qπ_1, Qπ_2⟩ : Q(A × B) → QA × QB        !_{Q1} : Q1 → 1

are (P-)isomorphisms. We write m_{A,B} : QA × QB → Q(A × B) and m_0 : 1 → Q1 for their inverses.

In essence, the isomorphism Q(A × B) ≅ QA × QB says that code for pairs is a pair
of codes, and vice versa. Amongst the exposures, the product-preserving are the only
ones that interact harmoniously with the product structure. In fact, preservation of
products forces the pairing function to preserve intensional equality, thus regaining all
the standard equations pertaining to products up to ≈Q . To show all that, we first
need the following proposition.
Proposition 2. In the above setting, the following holds up to ∼:

    m_{A,B} ∘ ⟨Qf, Qg⟩ ∼ Q⟨f, g⟩

for f : C → A and g : C → B.
Proof. We compute
hQπ1 , Qπ2 i ◦ Qhf, gi
∼ { naturality }
hQπ1 ◦ Qhf, gi, Qπ2 ◦ Qhf, gii
∼ { Q is an exposure }
hQ(π1 ◦ hf, gi), Q(π2 ◦ hf, gi)i
∼ { Q is a cartesian exposure }
hQf, Qgi
and hence m_{A,B} ∘ ⟨Qf, Qg⟩ ≝ ⟨Qπ_1, Qπ_2⟩^{-1} ∘ ⟨Qf, Qg⟩ ∼ Q⟨f, g⟩.
One can easily compute that the m_{A,B}'s satisfy a naturality property, similar to the one for strong monoidal categories. That is,
Proposition 3. In the above setting, the following holds up to ∼:

    m_{A,B} ∘ (Qf × Qg) ∼ Q(f × g) ∘ m_{C,D}

for f : C → A and g : D → B.
Proof. Simple calculation as above, using the inverses of both mA,B and mC,D , as well
as the fact that Q : B # C is cartesian.
In monoidal 1-category theory this would simply be a natural isomorphism between
the functors Q(− × −) and Q(−) × Q(−). However, the product functor − × − is
not necessarily an exposure: we may have f × g ∼ h × k, yet it may be that f 6∼ h
or g 6∼ k. However, if the category is connected (all hom-P-sets are nonempty), then
the projections are epic, and hence − × − is faithful. By Lemma 10, this would then
allow us to compose it with Q to make an exposure. But since this is not the case in
general, we do not. However, the requisite naturality property follows from the fact
the m’s are the inverses of canonical arrows, so we are not seriously hampered.
We can also show that the following relationship holds between the projection
arrows and their ‘exposed’ version, given product-preservation:
Proposition 4. In the above setting, let π_1^{A,B}, π_2^{A,B} be the projections of the product A × B in B, and π_1^{QA,QB}, π_2^{QA,QB} those of QA × QB in C. Then

    Qπ_i^{A,B} ∘ m_{A,B} ∼ π_i^{QA,QB}

Proof. π_i^{QA,QB} ∘ m_{A,B}^{-1} ≝ π_i^{QA,QB} ∘ ⟨Qπ_1, Qπ_2⟩ ∼ Qπ_i^{A,B}.
The product-preserving structure of Q : B # C then suffices to guarantee that taking
the mediating morphism hf, gi preserves not just extensional equality in f and g, as
it does by the definition of products in P-categories, but also intensional equality.
Hence, h·, ·i also induces products in the x-ray category (B, ≈Q ).
Proposition 5. If Q : B # C is a product-preserving exposure, then the function
h·, ·i : B(C, A) × B(C, B) → B(C, A × B)
preserves intensional equality ≈Q , and is thus a function
h·, ·i : (B, ≈Q )(C, A) × (B, ≈Q )(C, B) → (B, ≈Q )(C, A × B)
Proof. If f ≈Q h : C → A and g ≈Q k : C → B, then
Qhf, gi
∼ { Proposition 2 }
m ◦ hQf, Qgi
∼ { assumptions }
m ◦ hQh, Qki
∼ { Proposition 2 }
Qhh, ki
and hence hf, gi ≈Q hh, ki.
Note that in the above proof we used the ‘monoidality’ mA,B to ‘shift’ the exposure
Q exactly where we want it to be to use the assumptions f ≈Q h and g ≈Q k.
This result also implies that the standard equations for products hold up to ≈Q .4
Lemma 12.
1. If Q : B # C is a product-preserving exposure, then
hf, gi ◦ h ≈Q hf ◦ h, g ◦ hi
for f : C → A, g : C → B, and h : D → C.
2. If Q is a product-preserving exposure, then
(f × g) ◦ hh, ki ≈Q hf ◦ h, g ◦ ki
for f : C → A, g : D → B, h : E → C, k : F → D.
4.2.3
Comonadic Exposures
We can now revisit the failed categorical approach to modality-as-intension that
we discussed in §4.1.1. It turns out that all the categorical equipment used for
strong monoidal (= product-preserving) comonads have direct analogues in exposures. Throughout the rest of this section we fix a product-preserving endoexposure
Q : B # B.
If we have an interpreter that maps code to values at each type, then we can
present it as a (well-behaved) natural transformation from our selected exposure to
the identity exposure.
Definition 30. An evaluator is a transformation of exposures ε : Q ⇒ Id_B such that the following equations hold up to ∼:

    ε_{A×B} ∘ m_{A,B} ∼ ε_A × ε_B        ε_1 ∘ m_0 ∼ id_1

4 In fact, even if we exclude ⟨π_1 ∘ h, π_2 ∘ h⟩ ≈_Q h from the definition of a cartesian exposure, we can then regain it through product-preservation: in this sense, product-preservation is a strong extensionality principle that even implies the ‘η-rule’ for the x-ray category. One might even entertain the idea that one can show the ‘β-rule’ π_1 ∘ ⟨f, g⟩ ≈_Q f simply by the existence of the isomorphism m_{A,B}, and without explicitly assuming Q to be cartesian, thus ostensibly reducing cartesian exposures to product-preserving ones. But the derivation of this requires Qπ_1 ∘ m ∼ π_1, which seems to only follow if the exposure is cartesian, making the apparently simple argument circular.
How about quoting, then? Given a point a : 1 → A, we define its quote to be the
point
Qa ◦ m0 : 1 → QA
The fact that ε : Q ⇒ Id_B is a transformation of exposures lets us then calculate that

    ε_A ∘ Qa ∘ m_0 ∼ a ∘ ε_1 ∘ m_0 ∼ a

So, post-composing a component of an evaluator to a quoted point yields back the point itself! The naturality is there to guarantee that the evaluator is defined ‘in the same way’ at all objects. The two conditions that are required to hold as part of the definition would—in the context of monoidal functors—ensure that ε is a monoidal natural transformation. In the setting of exposures they are not only necessary in the final step of the above calculation, but they also ensure that the evaluators are compatible with products.5
Definition 31. A quoter is a transformation of exposures δ : Q ⇒ Q² for which the following equations hold up to ∼:

    δ_{A×B} ∘ m_{A,B} ∼ Qm_{A,B} ∘ m_{QA,QB} ∘ (δ_A × δ_B)        δ_1 ∘ m_0 ∼ Qm_0 ∘ m_0
If we post-compose a component of a quoter to a quoted point, we get
δA ◦ Qa ◦ m0 ∼ Q2 a ◦ δ1 ◦ m0 ∼ Q2 a ◦ Qm0 ◦ m0 ∼ Q(Qa ◦ m0 ) ◦ m0
So a quoter maps a quoted point to its doubly quoted version. In this instance, the
diagram that would correspond to the transformation being monoidal is crucial in
obtaining this pattern.
All of these ingredients then combine to form a comonadic exposure.
5 Nevertheless, notice that—since we are in a cartesian setting and 1 is a terminal object—the second equation holds automatically.
Definition 32. A comonadic exposure (Q, ε, δ) consists of an endoexposure Q : (B, ∼) # (B, ∼), along with an evaluator ε : Q ⇒ Id_B and a quoter δ : Q ⇒ Q², such that the following equations hold up to ∼:

    ε_{QA} ∘ δ_A ∼ id_{QA}        Qε_A ∘ δ_A ∼ id_{QA}        δ_{QA} ∘ δ_A ∼ Qδ_A ∘ δ_A
Comonadic exposures are the analogue of product-preserving (= strong monoidal)
comonads in the categorical semantics of S4. They will prove instrumental in our
analysis of intensional recursion (§6), and in the semantics of iPCF (§7, §8).
4.2.4
Idempotent Comonadic Exposures
If the components of δ : Q ⇒ Q² are isomorphisms, we shall call the comonadic exposure (Q, ε, δ) idempotent.
As we discussed before, if one is to take the interpretation of Q as ‘code’ seriously,
then there are clearly two ‘regions’ of data: that of static code, always found under
an occurrence of Q, and that of dynamic data. Intuitively, the notion of ‘code of code
of A,’ namely Q(QA), should be the same as ‘code of A.’ If something is code, it
is already intensional in a maximal sense: it can certainly be taken ‘one level up’
(Q(QA)), but that should not amount to very much.
We have seen that exposures are a very weak setting in calculational terms, as
they do not preserve equality. It is for this reason that we are forced to externally
impose equations, such as those for cartesian products in §4.2.2. However, we will
shortly see that the idempotence of a comonadic exposure is a particularly powerful
tool that immediately allows us to infer a lot about equality, especially intensional.
It follows, as in §4.1.1, that the following equation holds for each f : QA → QB:
δB ◦ f ∼ Qf ◦ δA : QA → Q2 B
The proof is the same as before: nowhere in Theorem 26 did we use the ‘forbidden
principle’
f = g  =⇒  Qf = Qg
Furthermore, the argument we produced just before that theorem also still holds: each component of δ : Q # Q2 is a ‘reasonable quoting device,’ in the sense that the equation
δA ◦ a ∼ Qa ◦ m0 : 1 → Q2 A
holds for any point a : 1 → QA. The only time we used the ‘forbidden principle,’ namely the demonstration that Qm0 ◦ Q(m0⁻¹) ∼ idQ1 , can be ‘simulated’ with idempotence as follows: we have
Qm0 ◦ Q(m0⁻¹) ◦ δ1 ∼ δ1 ◦ m0 ◦ m0⁻¹ ∼ δ1 ◦ idQ1 ∼ Q(idQ1 ) ◦ δ1
by using idempotence twice, so that cancelling the iso δ1 then yields the result. We will later see that this is due to a more general theorem.
So, even if this equation holds, where does the ‘degeneracy’ argument (Proposition 1) break down? The key lies precisely in the fact that Q does not respect the PERs, and hence in : C(1, A) ⇢ C(1, QA) is now only an operation, not a P-function. Hence, there is no natural isomorphism C(1, −) ≅ C(1, Q−).
Let us, however, take a closer look: the function in is defined by
f : 1 → A  ↦  Qf ◦ m0 : 1 → QA
That is, the only occurrence of f is under Q, and the rest is simply pre-composition with m0 . If f ≈Q f ′ , i.e. if Qf ∼ Qf ′ , then we have that in(f ) ∼ in(f ′ ). Hence, in changes ≈Q to ∼, so it is actually more than an operation: it is a map
in : (C, ≈Q )(1, A) → (C, ∼)(1, QA)
Similarly, out is defined by
a : 1 → QA  ↦  εA ◦ a : 1 → A
If a ∼ a′ , then δA ◦ a ∼ δA ◦ a′ , so Qa ◦ m0 ∼ Qa′ ◦ m0 . Cancelling the isomorphism m0 gives us a ≈Q a′ . Thus, if we have a reasonable quoting device at A, QA is point-univalent. For the time being, notice that this means that out takes ∼ to ≈Q , i.e. it is a map
out : (C, ∼)(1, QA) → (C, ≈Q )(1, A)
If we combine these facts with the previous calculations and naturality of Q, we obtain
a natural isomorphism
Proposition 6. (C, ≈Q )(1, −) ≅ (C, ∼)(1, Q−)
That is: the intensional structure of the points, now visible in the x-ray category
(C, ≈Q ), is represented by the points 1 → QA under extensional equality.
In the rest of this section let us fix a product-preserving idempotent comonadic exposure (Q, ε, δ) on B.
Equal Intensional Transformations are Intensionally Equal
A central result is the generalisation of the argument we used to define out, and it
is the following. Think of arrows f : QA → QB as intensional operations: these
transform code of type A to code of type B. It therefore should transpire that, if
f ∼ g : QA → QB, then f and g represent the same code transformation, and in fact
should be intensionally equal. If Q is idempotent, then this is exactly what happens.
Theorem 28. For any f, g : QA → QB,
f ∼ g  =⇒  f ≈Q g
Proof. We have
Qf ◦ δA ∼ δB ◦ f ∼ δB ◦ g ∼ Qg ◦ δA
Pre-composing with the inverse of δA yields Qf ∼ Qg, and hence f ≈Q g.
Idempotence also implies another crucial piece of information regarding the product-preserving isomorphism mA,B : QA × QB ≅ Q(A × B). Even though we know mA,B is an isomorphism up to ∼, we ostensibly do not have any information on its behaviour up to ≈Q : the hom-operations of Q do not preserve the PERs. However, in the idempotent setting it is always an isomorphism, even up to ≈Q .
Lemma 13. mA,B : QA × QB → Q(A × B) is an isomorphism up to ≈Q , i.e. it is an isomorphism in the x-ray category (B, ≈Q ).
Proof. Calculate that
Qm ◦ Q⟨Qπ1 , Qπ2 ⟩ ◦ δ
∼ { Proposition 2 }
Qm ◦ m ◦ ⟨Q2 π1 , Q2 π2 ⟩ ◦ δ
∼ { naturality of product bracket, δ natural }
Qm ◦ m ◦ ⟨δ ◦ Qπ1 , δ ◦ Qπ2 ⟩
∼ { product equation }
Qm ◦ m ◦ (δ × δ) ◦ ⟨Qπ1 , Qπ2 ⟩
∼ { δ monoidal }
δ ◦ m ◦ ⟨Qπ1 , Qπ2 ⟩
∼ { m⁻¹ ≝ ⟨Qπ1 , Qπ2 ⟩ }
δ
Since δ is an isomorphism, we can cancel it on both sides to yield m ◦ m⁻¹ ≈Q id. The calculation is similar in the opposite direction, and relies on post-composing with the isomorphism m ◦ (δ × δ), and then cancelling it.
The same trick with m ◦ (δ × δ) also shows that Proposition 4 holds intensionally.
Lemma 14. QπiA,B ◦ mA,B ≈Q πiQA,QB .
These lemmas show that
Corollary 3. For any f, g : QA1 × · · · × QAn → QB1 × · · · × QBm ,
f ∼ g  =⇒  f ≈Q g
Proof. Pre- and post-compose with the appropriate isomorphisms m(n) (see §7.1), use Theorem 28, and then use Lemma 13 to cancel the m(n) ’s.
Some more lemmas
In this section we prove some more lemmas that hold in the idempotent case. Firstly,
we can show that the comonadic diagrams commute intensionally.
Lemma 15. The comonadic equations hold up to ≈Q , i.e.
δQA ◦ δA ≈Q QδA ◦ δA ,   εQA ◦ δA ≈Q idQA ≈Q QεA ◦ δA
Proof. Simple calculations that mainly follow by pre-composing δA and then cancelling it. For example,
Q2 εA ◦ QδA ◦ δA ∼ Q2 εA ◦ δQA ◦ δA ∼ δA ◦ QεA ◦ δA ∼ δA
by the equations and the naturality of δ, which gives that QεA ◦ δA ≈Q idQA .
Moreover, we have that
Lemma 16. εA : QA → A is epic up to ≈Q .
Proof. If f ◦ εA ≈Q g ◦ εA : QA → B, then Qf ◦ QεA ∼ Qg ◦ QεA . Pre-composing δA yields f ≈Q g.
The following lemma is also quite useful.
Lemma 17 (Quotation-Evaluation). For any f : QB → QA, the following equation holds up to ∼:
εQA ◦ Qf ◦ δB ∼ f
Proof. We may calculate
εQA ◦ Qf ◦ δB ∼ εQA ◦ δA ◦ f ∼ f
by idempotence and the comonadic equations.
This has a simple corollary when it comes to quoted points:
Corollary 4. If (Q, ε, δ) is a product-preserving idempotent comonadic exposure, then
Q(εA ◦ a) ◦ m0 ∼ a
for any a : 1 → QA.
Proof. We may calculate
QεA ◦ Qa ◦ m0 ∼ QεA ◦ δA ◦ a ∼ a
by δ being reasonable and Q comonadic.
Coalgebras, Idempotence, and Univalence
Recall the definition of point-univalent objects (Definition 26): U is point-univalent if
x ∼ y : 1 → U implies x ≈Q y : 1 → U . Intuitively, U is point-univalent if extensional
and intensional equality coincide for its points, i.e. if none of them contain intensional
information. Now, the objects QA are supposed to contain intensions/codes corresponding to the ‘elements’ of A. It is no surprise then that we can prove that
Lemma 18. QA is point-univalent.
Proof. Suppose x ∼ y : 1 → QA. Then x ◦ ε1 ∼ y ◦ ε1 : Q1 → QA. Invoking Theorem 28 yields x ◦ ε1 ≈Q y ◦ ε1 . If only we establish that ε1 ◦ m0 ≈Q id1 , then it would suffice to pre-compose m0 . But Q is product-preserving, hence cartesian, with the result that
ε1 ◦ m0 ≈Q !1 ≈Q id1
In fact, this is a special case of a more general
Theorem 29. Let Q ≝ { QA | A ∈ B }. Then QA is Q-univalent.
This is just another way to state Theorem 28.
We would like to prove a kind of converse to this theorem, viz. that, with idempotence, every Q-univalent object A is closely related to QA. Unfortunately, there
is no systematic way to obtain an arrow A → QA from the Q-univalence of A. But
if we assume the existence of such an arrow—with appropriate equations—then it is
easy to show that it is an isomorphism. This arrow is, of course, an old friend:
Definition 33. A Q-coalgebra is an arrow
α : A → QA
such that the following equations hold up to ∼:
εA ◦ α ∼ idA : A → A
δA ◦ α ∼ Qα ◦ α : A → Q2 A
In the modality-as-intension interpretation a Q-coalgebra has an intuitive meaning: the equation εA ◦ α ∼ id states that α can ‘quote’ the elements of A, producing an element of QA which, when evaluated, takes us back to where we started. The second equation merely states that α cooperates well with the quoter δ : Q # Q2 . Thus, a Q-coalgebra exists when we can internally quote the elements of an object.
Lemma 19. If A is Q-univalent, then a Q-coalgebra α : A → QA is an isomorphism, with inverse εA : QA → A.
Proof. This is the classic proof that given idempotence all coalgebras are isomorphisms. However, a crucial step of that proof would rely on Q preserving equality; in this case, this is where Q-univalence steps in.
We calculate that
δA ◦ (α ◦ εA ) ∼ Q(α ◦ εA ) ◦ δA ∼ Qα ◦ QεA ◦ δA ∼ Qα
by idempotence and the comonadic equations. If we post-compose QεA to both sides of this equation, we obtain
α ◦ εA ∼ QεA ◦ Qα ∼ Q(εA ◦ α)
So, if we show that εA ◦ α ≈Q idA , then we would obtain α ◦ εA ∼ idQA . As we already know that εA ◦ α ∼ idA , we would then conclude that α⁻¹ ∼ εA . Since εA ◦ α ∼ idA , we obtain εA ◦ α ◦ εA ∼ εA : QA → A. But A is Q-univalent, so
εA ◦ α ◦ εA ≈Q εA
But εA is epic up to ≈Q (Lemma 16), so εA ◦ α ≈Q idA .
So, when is A univalent? Suppose that εA ◦ α ≈Q idA , i.e. quoting and then evaluating returns the same intensional construction with which one began. Then it must be that A has no real intensional structure, and conversely.
Lemma 20. Let α : A → QA be a Q-coalgebra. Then A is Q-univalent if and only if εA ◦ α ≈Q idA .
Proof. The ‘only if’ part was shown in the preceding proof. As for the ‘if’ part, let f ∼ g : QB → A. Then α ◦ f ∼ α ◦ g : QB → QA. By Theorem 29/28, α ◦ f ≈Q α ◦ g. Post-composing with εA and using the assumption gives f ≈Q g.
To sum up, we can combine these facts to show the following
Theorem 30. If there is a Q-coalgebra α : A → QA such that εA ◦ α ≈Q idA , then α : A → QA is an isomorphism.
There is a partial converse, which is that
Theorem 31. If Q is idempotent and there is an isomorphism α : A ≅ QA such that δA ◦ α ∼ Qα ◦ α, then α : A → QA is a Q-coalgebra, with εA ◦ α ≈Q idA .
Proof. We compute that
α ∼ QεA ◦ δA ◦ α ∼ QεA ◦ Qα ◦ α
by the comonadic equations and the assumption. Cancelling α yields εA ◦ α ≈Q id, and hence α is a Q-coalgebra.
Thus, either of the following data suffice to make A Q-univalent:
1. an isomorphism A ≅ QA with the second Q-coalgebra equation; or
2. a Q-coalgebra with the first equation holding intensionally.
4.2.5
Weakly Cartesian Closed Exposures
We close this section with a notion of cartesian closure for exposures. Unlike the
situation with products, the relevant notion that will allow calculations under the
exposure will be weak. We pick the weak notion because our motivating example
of an exposure (see §5.2) is a weakly cartesian closed one. Whereas there is a good
argument for an exposure to distribute over products—i.e. products do not contain
any true intensional nature, they are simply pairs—exposures truly reveal the internal
structure of morphisms: it makes sense that η does not hold, cf. the age-old discussion
of the ordinary λ-theory λβ and the extensional theory λβη in e.g. Barendregt [1984].
Definition 34. A product-preserving exposure Q : B → C from a P-ccc B to a
cartesian P-category C is weakly cartesian closed just if
ev ◦ (λ(f ) × id) ≈Q f
for all f : C × X → Y .
These work as expected. For example, if for f : A → B we let
pf q ≝ λ(f ◦ π2 : 1 × A → B)
as before, then
Lemma 21. For any a : C → A,
ev ◦ ⟨pf q, a⟩ ≈Q f ◦ a
Proof. All the necessary equations hold intensionally:
ev ◦ ⟨pf q, a⟩ ≈Q ev ◦ (pf q × id) ◦ ⟨idC , a⟩ ≈Q f ◦ π2 ◦ ⟨idC , a⟩ ≈Q f ◦ a
Chapter 5
Three Examples of Exposures1
Having introduced the basics of P-categories and exposures in the previous chapter, we
now seek to prove that they are indeed useful abstractions in the study of intensional
phenomena. Towards this goal, we shall present three examples of a P-category and
an endoexposure on it.
In §5.1 we will construct a P-category based on a first-order classical theory. In the
particular case of Peano Arithmetic (PA), it will become apparent that a well-behaved
Gödel numbering comprises an endoexposure.
Following that, in §5.2 we turn to realizability theory in order to obtain a handle on
intensionality in settings related to higher-order computability theory. The example
of a comonadic exposure constructed therein is particularly well-behaved, and is the
motivating example for the entire development of this thesis.
The third example is, we hope, somehow unfamiliar. Prompted by Abramsky
[2014], a natural question arises: is the phenomenon of intensionality only relevant
to logic and computation? We are prepared to entertain the idea that it may be a
more general mathematical pattern, which has hitherto been either a nuisance or—
more often—invisible, and whose categorical formulation will enable us to recognise
it in more settings. In §5.3, we present a simple example of intensionality found in
homological algebra, and submit it to the reader for further discussion.
5.1
Exposures as Gödel Numbering
Our first example of an exposure substantiates the claim that exposures can be considered as abstract analogues of Gödel numberings.
1
A preliminary form of the results in this chapter was first published as [Kavvos, 2017a], which
is available at Springer: https://dx.doi.org/10.1007/978-3-662-54458-7_32
Following Lawvere [1969, 2006], we will construct a P-category from a first-order
theory T. Then, we will sketch the proof that any sufficiently well-behaved Gödel
numbering defines an exposure over this theory. We omit the details, leaving them
as a (probably rather complicated) exercise in coding.
5.1.1
The Lindenbaum P-category
The construction in this section is called the Lindenbaum P-category of a first-order
theory T, and it is simpler than what one might imagine: we begin with three basic
objects,
1,  the terminal object
2,  the object of truth values
A,  the universe
The objects of the category will be the formal products of these three objects; we
write Γ for a generic product A1 , . . . , An of these objects. The arrows of type
Γ→2
will be construed as formulas, with free variables in Γ. Similarly, the arrows
Γ→A
will be construed as terms, once again with free variables in Γ. In this light, a formula
φ(x, y, z) in three free variables will be an arrow
φ(x, y, z) : A × A × A → 2
with the three copies of A in the domain representing each free variable in a fixed
order—and similarly for terms.
However, this idea will be complicated by the presence of 2 in the domain. This
will represent a Boolean hole that may be filled by a formula. For example, the expression
φ(x, P ) ≝ ∀y. ∃z. (f (x, y) = z ∧ P )
is a formula with one free variable x, and one Boolean hole P , for which a formula
can be substituted. Hence, it will be an arrow
φ(x, P ) : A × 2 → 2
This is a generalisation that Lawvere introduced, and of which we shall be making
use. The interesting thing about it is that the Boolean connectives now appear as
arrows, e.g. ∧ : 2 × 2 → 2, and ¬ : 2 → 2. Of course, arrows Γ → A with 2 occurring
in Γ do not make particularly good sense: how could one have a Boolean hole in a
term?
Finally, we have the cases of 1 and products appearing in the codomain. In the first case, there will be a unique arrow !Γ : Γ → 1. In the case of a product ∆ ≝ B1 × · · · × Bn , arrows f : Γ → ∆ will be freely generated by brackets, i.e. they will be
⟨f1 , . . . , fn ⟩ : Γ → ∆
where fi : Γ → Bi . Almost everything in sight will act component-wise on these.
We shall say that
f ∼ g : Γ → 2 just if T ` f ↔ g
That is, two arrows that are predicates are extensionally equal if and only if they can
be proven equivalent in the theory. The details are slightly more complicated than
in most presentations of first-order logic, since we also have to treat Boolean holes.
Similarly, two terms are extensionally equal if they can be proven equal in the theory,
i.e.
t ∼ s : Γ → A just if T ` t = s
On brackets, ∼ acts component-wise.
It is not hard to see that we obtain the following theorem, which—excluding the
P-categorical twist—is essentially due to Lawvere.
Theorem 32. The above construction is a cartesian P-category, called the Lindenbaum P-category Lind(T) of the first-order theory T.
5.1.2
Numbering as Exposure
Let us now concentrate on the case of Peano Arithmetic, which we denote by PA. A Gödel numbering [Boolos, 1994, Smullyan, 1992] is obtained when one assigns to each formula φ(~x) and each term t(~x) a Gödel number, denoted by
pφ(~x)q,   pt(~x)q
respectively. In the case of a first-order theory of arithmetic, the Gödel number of a term or formula is supposed to represent the term or formula within the theory.
The notion of representation-within-the-theory is exactly what exposures are meant
to capture. Hence, we need to define the action of an exposure,
Q : Lind(PA) # Lind(PA)
on the Lindenbaum P-category of PA. This is where we need that the Gödel numbering
be well-behaved. Let us suppose that we have a formula φ(x, y) : A × A → 2 in two
free variables. We would like to map this to a formula Qφ(x, y), also in two free
variables, that respects substitution. Let us presume that we have functions
subwff,n (x, z1 , . . . , zn )
subt,n (x, z1 , . . . , zn )
that are definable in PA by a term (denoted by the same name), and moreover that
these functions define substitution for the Gödel numbering, in the sense that, for
example, if given—say—two closed terms t, s, we have
PA ` subwff,2 (pφ(x, y)q, ptq, psq) = pφ(t, s)q
We require more than most presentations of Gödel numberings. In particular, we require that these behave well under substitution, e.g. we require that the terms
subwff,2 (pφ(x, y)q, subt,n (pt(~z)q, ~w), subt,n (ps(~z)q, ~w))
and
subwff,n (pφ(t(~z), s(~z))q, ~w)
be provably equal in PA.
be provably equal in PA. We can then define
def
Qφ(x, y) = subwff,2 (pφ(x, y)q, x, y)
Finally, in the case of a sentence ψ : 1 → 2, i.e. a closed formula, we shall define
Qψ : 1 → A to simply be the (numeral of the) Gödel number, i.e.
def
Qψ = pψq
This certainly respects substitution! We have elided the concept of Boolean holes, but we believe that to be a not so difficult exercise. Identities are a little stranger; the identity at A is the term that is a single free variable, i.e.
idA ≝ x : A → A
and this is mapped to subt,1 (pxq, x) : A → A, which somehow needs to be provably equal to x itself. This is also a strange requirement for a Gödel numbering, but we are mostly willing to believe that it is a feasible desideratum. A similar situation occurs when we examine the arrow
⟨Qπ1 , Qπ2 ⟩ = ⟨subt,2 (pxq, x, y), subt,2 (pyq, x, y)⟩
For the exposure Q to be product-preserving, we would like this to be an isomorphism, or, even better, the identity. And it would indeed be, if we could prove that, in general,
PA ` subt,n (pzi q, ~z) = zi
More rigorously, we define
Q1 ≝ 1
Q(A) ≝ A
Q(2) ≝ A
Q(B1 × · · · × Bn ) ≝ QB1 × · · · × QBn
and, of course, Q⟨f1 , . . . , fn ⟩ ≝ ⟨Qf1 , . . . , Qfn ⟩.
To complete the construction, we have to check the last axiom of exposures, namely reflection of PERs. For that we need that
PA ` subwff,n (pφ(~x)q, ~z) = subwff,n (pψ(~x)q, ~z)
implies
PA ` φ ↔ ψ
By substituting the Gödel numbers of variables ~y in the antecedent, we obtain that
PA ` pφ(~y)q = pψ(~y)q
and so it suffices for the Gödel numbering to be injective, in the sense that
pφ(~y)q = pψ(~y)q  =⇒  φ(~y) = ψ(~y)
viz. that equality of Gödel numbers implies syntactic equality. In conclusion,
Theorem 33. If all of the above desiderata on Gödel numberings are feasible, then
the construction is a product-preserving endoexposure,
Q : Lind(PA) # Lind(PA)
on the Lindenbaum P-category of PA.
5.2
Exposures in Realizability
In this section we study our central example of an exposure, which hails from realizability theory.
The basic objects in realizability are assemblies, i.e. sets of which every element
is associated with a set of realizers. The elements of the set can be thought of as the
elements of an abstract datatype, whereas the set of realizers of each element contains
its multiple machine-level representations. For example, realizers can range over the
natural numbers; then taking functions between assemblies which can be ‘tracked’ on
the level of realizers by partial recursive functions yields a category where ‘everything
is computable.’
In practice, the generalisation from natural numbers to an arbitrary partial combinatory algebra (PCA) is made. A PCA is an untyped ‘universe’ corresponding to
some notion of computability or realizability. There are easy tricks with which one
may encode various common first-order datatypes (such as booleans, integers, etc.) in
a PCA.2 Moreover, it is easy to show that, up to a simple representation of integers,
one may represent all partial recursive functions in a PCA. Before proceeding with
the construction, we recap in §5.2.1 the basics of PCAs. For the interested reader let
us mention that detailed discussions may be found in [Beeson, 1985, Longley, 1995,
Longley and Simpson, 1997, van Oosten, 2008, Longley and Normann, 2015].
Once a PCA—and hence a notion of computability—is fixed, defining a P-category
Asm(A) is reasonably straightforward: objects are assemblies, and morphisms are
functions which are ‘computable’ on the level of realizers. The arrows are pairs (f, r)
where f is a function on the underlying set of each assembly, and r is an element of
the PCA that tracks the function f . Two arrows are related if and only if they define
the same function on the underlying sets. Finally, to define an endoexposure all we
need to do is display the effect of each tracking element r on the realizers. It then
transpires that this exposure is a product-preserving idempotent comonadic exposure
on Asm(A).
5.2.1
Partial Combinatory Algebras
The definition of a PCA is deceptively simple:
2
Most of these tricks have their origins in untyped λ-calculus, and are hence found in Barendregt
[1984].
Definition 35. A partial combinatory algebra (PCA) (A, ·) consists of a carrier set A and a partial binary operation · : A × A ⇀ A such that there exist K, S ∈ A with the properties that
K · x ↓,   K · x · y ' x,   S · x · y ↓,   S · x · y · z ' x · z · (y · z)
for all x, y, z ∈ A.
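Stated as an interface, the data of a PCA amount to very little; the following Haskell sketch (hypothetical names, for illustration only) records just the partial application and the two distinguished elements, with the defining equations left as comments.

  -- A PCA: a carrier with a partial binary application and elements k, s.
  class PCA a where
    app :: a -> a -> Maybe a   -- the partial application x · y
    k   :: a
    s   :: a
  -- Required laws (not enforceable by the type system):
  --   k · x ↓,        k · x · y ≃ x
  --   s · x · y ↓,    s · x · y · z ≃ x · z · (y · z)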
The paradigmatic example comes from ordinary computability theory, as presented
by e.g. Cutland [1980], Odifreddi [1992], Rogers [1987]. It is not very difficult to use
the s-m-n theorem to cook up indices that behave like S and K, to the effect that
Theorem 34 (Kleene’s First Model). The applicative structure K1 = (N, · ), where
x · y ' φx (y)
is a partial combinatory algebra.
Combinatory Completeness
Even though the definition of a PCA is remarkably simple, there is more to it than meets the eye. The combinators S and K suffice to obtain a property known as combinatory completeness: every syntactic function on the PCA formed by variables, applications and constants can be ‘internalised’ as an element of the PCA. This result originates from combinatory logic, and is well-known in the study of untyped λ-calculus; see Barendregt [1984].
Once combinatory completeness is obtained, standard tricks from untyped λ-calculus can be used to represent first-order data, but also to greatly simplify calculations. Nevertheless, let us remind the reader that not all the common rules of the
untyped λ-calculus hold in a PCA, so caution is advised. The particular presentation
in this section is due to Longley [1995].
Definition 36. Let V be an infinite set of variables. The set E(A) of terms or formal
expressions over a PCA (A, ·) is defined as the least set that satisfies the following
conditions:
A ⊆ E(A),   V ⊆ E(A),   and if s ∈ E(A) and t ∈ E(A), then (s · t) ∈ E(A).
Conventionally, we will use e, s, t, u, v, . . . as metavariables ranging over E(A), and
we will write s[t/x] for the formal expression obtained by substituting t ∈ E(A) for
every occurrence of the variable x in the formal expression s ∈ E(A).
A formal expression is closed if it contains no variables. Write fv (e) for the set
of free variables of expression e ∈ E(A). Also, write e ↓ (“e denotes”), where e is a
closed formal expression, to mean that, if the formal expression is interpreted as
an actual algebraic expression in the standard way, composition throughout is defined
and it denotes an element. This implies that all subexpressions also denote: if s · t ↓,
then s ↓ and t ↓. Otherwise, we write e ↑ to mean that partiality kicks in, and the
expression does not denote an element in the PCA.
We also notationally distinguish two equalities: the strict equality, s = t, where
both s ↓ and t ↓ and they denote the same element; and the Kleene equality, s ' t,
which holds when s and t are both undefined, or both defined and denote the same
element.
Finally, the above notions are straightforwardly extended to open terms, by substituting for all variables: let the variables of s, t ∈ E(A) be amongst x1 , . . . , xn ; then,
for example
e↓
just if ∀a1 , . . . , an ∈ A. e[~a/~x] ↓
s't
just if ∀a1 , . . . , an ∈ A. s[~a/~x] ' t[~a/~x]
for the obvious generalisation to simultaneous substitution. We can now λ-abstract:
Theorem 35 (Combinatory Completeness). Let (A, ·) be a PCA. For any e ∈ E(A),
there exists a formal expression λ∗ x.e ∈ E(A), where fv (λ∗ x.e) = fv (e) − {x}, such
that
λ∗ x.e ↓
and
(λ∗ x.e)a ' e[a/x]
for all a ∈ A.
Proof. Define
λ∗ x. x ≝ S · K · K
λ∗ x. t ≝ K · t   if t ∈ A ∪ V and t ≢ x
λ∗ x. s · t ≝ S · (λ∗ x.s) · (λ∗ x.t)
Then λ∗ x.e ↓ by the definedness conditions for S and K. The rest follows by induction.
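The clauses of this proof are easy to mechanise. Below is a small Haskell sketch (hypothetical names, illustration only) of the bracket abstraction λ∗ over formal expressions built from S, K, variables and application; constants of the PCA are omitted for brevity.

  data Expr = S | K | Var String | App Expr Expr
    deriving (Eq, Show)

  -- λ*x.e, following the three defining clauses above.
  lamStar :: String -> Expr -> Expr
  lamStar x (Var y)
    | x == y          = App (App S K) K                         -- λ*x. x   = S·K·K
  lamStar x (App s t) = App (App S (lamStar x s)) (lamStar x t) -- λ*x. s·t = S·(λ*x.s)·(λ*x.t)
  lamStar _ t         = App K t                                 -- λ*x. t   = K·t

  -- e.g. lamStar "x" (App (Var "x") (Var "y"))
  --   ==> App (App S (App (App S K) K)) (App K (Var "y"))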
Let us be pedantic and reiterate the warnings: Kleene equality is not respected
by the λ∗ x operation on terms, and the obvious β-rules do not hold—observe that
the operand above has to be a constant! Longley carefully develops correct β-rules
for this language of terms in [Longley, 1995, §1.1.2].
Some common encodings
We will need the following combinators:
I ≝ S K K
B ≝ λ∗ f. λ∗ g. λ∗ x. f (g x)
It is easy to define selection and pairs. Let
true ≝ λ∗ ab. a
false ≝ λ∗ ab. b
if ≝ λ∗ xyz. xyz
pair ≝ λ∗ xyz. zxy
fst ≝ λ∗ p. p(true)
snd ≝ λ∗ p. p(false)
We always have if x y ↓, and pair x y ↓. Furthermore, the following equalities hold:
fst (pair x y) = x
snd (pair x y) = y
if true y z = y
if false y z = z
Encoding numbers is not more difficult, and, like [Longley, 1995] and [Longley and Simpson, 1997], we use a trick due to Curry. Let
0 ≝ I
n + 1 ≝ pair false n
Then we may let succ ≝ λ∗ x. pair false x, so that succ n = n + 1. To check if a number is zero, use iszero ≝ fst, so that
iszero 0 = I (true) = true
whereas
iszero n + 1 = fst (pair false n) = false
Finally, we can define the predecessor function by
pred ≝ λ∗ x. if (iszero x) 0 (snd x)
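These equations can be replayed concretely. The Haskell sketch below (illustration only, with hypothetical names) builds a small ‘functional’ PCA-like structure in which application may fail, adds a constructor for observable base values, and checks that fst (pair x y) returns x.

  -- Elements are either functions or observable base values.
  data D = Fn (D -> Maybe D) | Base Integer

  app :: D -> D -> Maybe D
  app (Fn f)   x = f x
  app (Base _) _ = Nothing          -- base values are not applicable

  fn1 :: (D -> Maybe D) -> D
  fn1 = Fn
  fn2 :: (D -> D -> Maybe D) -> D
  fn2 f = Fn (\x -> Just (Fn (f x)))
  fn3 :: (D -> D -> D -> Maybe D) -> D
  fn3 f = Fn (\x -> Just (fn2 (f x)))

  -- the encodings above: true, false, pair, fst, snd
  true', false', pair', fst', snd' :: D
  true'  = fn2 (\a _ -> Just a)
  false' = fn2 (\_ b -> Just b)
  pair'  = fn3 (\x y z -> do zx <- app z x; app zx y)
  fst'   = fn1 (\p -> app p true')
  snd'   = fn1 (\p -> app p false')

  -- fst (pair 1 2) computes to Just (Base 1)
  example :: Maybe D
  example = do p1 <- app pair' (Base 1)
               p  <- app p1 (Base 2)
               app fst' p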
5.2.2
Assemblies and Modest Sets
Definition 37. An assembly X on A consists of a set |X| and for each x ∈ |X| a
non-empty subset kxkX ⊆ A. If a ∈ kxkX , we say that a realizes x.
Definition 38. For two assemblies X and Y , a function f : |X| → |Y | is said to be
tracked by r ∈ A just if, for all x ∈ |X| and a ∈ kxkX , we have
r·a↓
and r · a ∈ kf (x)kY
Definition 39. An assembly X on A is a modest set just if no element of A realizes
two elements of |X|. That is,
x ≠ x′  =⇒  kxkX ∩ kx′ kX = ∅
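The notions of assembly and tracking are also easy to transcribe. The following Haskell sketch (a toy rendering with hypothetical names, parametric in an abstract partial application and restricted to finitely enumerated carriers) is only meant to make Definitions 37 and 38 concrete.

  type App p = p -> p -> Maybe p

  -- An assembly: a set of elements, each with a non-empty list of realizers.
  data Assembly p x = Assembly { elements :: [x], realizers :: x -> [p] }

  -- 'r tracks f': for every x and every a realizing x, r · a is defined
  -- and realizes f x.
  tracks :: Eq p => App p -> p -> (x -> y) -> Assembly p x -> Assembly p y -> Bool
  tracks apply r f xs ys =
    and [ case apply r a of
            Just b  -> b `elem` realizers ys (f x)
            Nothing -> False
        | x <- elements xs, a <- realizers xs x ]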
It is not hard to see that for each PCA A we can define a category Asm(A), with
objects all assemblies X on A, and morphisms f : X → Y being functions f : |X| →
|Y | that are tracked by some r ∈ A. In fact,
Theorem 36. Assemblies and trackable morphisms between them form a category
Asm(A) that is cartesian closed, has coproducts, as well as a natural numbers object.
It is not clear who originated the—admittedly very intuitive—definition of Asm(A),
and who first proved the above theorem. The identification of assemblies as the ¬¬-separated objects of the effective topos can be found in the work of Hyland [1982].
Longo and Moggi [1991] refer to Asm(A) as the category of ω-sets, and so does Jacobs
[1999]. For more details, see [Longley, 1995] or [Longley and Simpson, 1997].
A special subcategory of assemblies is of particular interest:
Theorem 37. The full subcategory of Asm(A) consisting only of objects which are
modest sets, which we denote by Mod(A), inherits the cartesian closed, coproduct,
and natural number object structure from the category of assemblies.
The category of modest sets—or its equivalent presentation in terms of PERs on the
PCA A—seems to have originated in unpublished work by Turing, and later used by
Gandy [1956, 1959]: see [Hyland, 2016]. In semantics, PERs were used independently
by Scott [1976] and Girard [1972]. Accessible presentations may be found in Mitchell
[1996] or Crole [1993].
5.2.3
Passing to a P-category
The lack of intensionality in the category Asm(A) is blatant: to elevate a function
f : |X| → |Y | to a morphism f : X → Y , we only require that there exists an r ∈ A
that tracks it: as soon as this is witnessed, we throw away the witness. For all the
reasons discussed in §4.1 we have to move to P-categories to mend this.
The P-category of assemblies on A, denoted Asm(A), is defined as follows. Its
objects are once more all assemblies X on A. Given assemblies X and Y , the P-set
Asm(X, Y ) is defined by having underlying set
def
|Asm(X, Y )| = { (f : |X| → |Y | , r ∈ A) | r tracks f }
and
(f, r) ∼Asm(X,Y ) (g, s) just if f = g
For (f, r) : X → Y and (g, s) : Y → Z, we define composition by
def
(g, s) ◦ (f, r) = (g ◦ f, B · s · r)
Notice that for any x ∈ |X|, a ∈ kxkX implies r · a ∈ kf (x)kY , which implies s · (r · a) ∈ kg(f (x))kZ , and as B · s · r · a ' s · (r · a) it follows that (g ◦ f, B · s · r) is an arrow X → Z. It is easy to see that composition is a P-function: ∼ only refers
to the underlying ‘extensional’ functions. Composition of set-theoretic functions is
associative and the identity function is its unit. That said, we define the identity
idX : X → X to simply be (id|X| , I).
Finite Products
It is not hard to show that
Proposition 7. The category Asm(A) has binary products.
Proof. The construction essentially follows the underlying structure of products in
the category of sets, but augments it with tracking elements. For assemblies X and
Y , we define
def
def
|X × Y | = |X| × |Y | ,
k(x, y)kX×Y = { pair a b | a ∈ kxkX , b ∈ kykY }
The projections are the following arrows:
def
π1 = (π1 : |X| × |Y | → |X| , fst)
def
π2 = (π2 : |X| × |Y | → |Y | , snd)
105
Define the P-function h−, −i by
h(f, r), (g, s)i = (hf, gi, λ∗ c. pair (r c) (s c))
def
It is easy to see that this is a P-function. We compute that
π1 ◦ h(f, r), (g, s)i = (π1 ◦ f, B · fst · (λ∗ c. pair (r c) (s c))) ∼ (f, r)
def
and similarly for the other two equations.
Proposition 8. The P-category Asm(A) has a terminal object.
Proof. Define 1 ∈ Asm(A) by
|1| ≝ {∗},   k∗k1 ≝ {0}
and, for A ∈ Asm(A), let !A ≝ (a ↦ ∗, K · 0) : A → 1, which is unique (up to ∼).
Hence,
Theorem 38. Asm(A) has finite products.
Exponentials
Given assemblies X and Y , we define
Y^X ≝ { f | (f, r) : X → Y } ,   kf kY^X ≝ { r | r tracks f }
Let
evX,Y ≝ ((f, x) ↦ f (x), λ∗ p. (fst p) (snd p)) : Y^X × X → Y
Define a P-function
λC : Asm(A)(C × X, Y ) → Asm(A)(C, Y^X )
by
(f, r) ↦ (z ↦ (x ↦ f (z, x)), λ∗ c. λ∗ a. r(pair c a))
It is again easy to see that this is a P-function, and one can verify that this is the
exponential. Therefore,
Theorem 39. Asm(A) is cartesian closed.
The Lifted Assembly for K1
We only mention one other indispensable construction, which embodies partiality in
the computable setting of assemblies on K1 . Given an assembly X ∈ Asm(K1 ), the
lifted assembly X⊥ is defined to be
|X⊥ | ≝ |X| + {⊥}
kxkX⊥ ≝ { r | r · 0 ↓ and r · 0 ∈ kxkX }   for x ∈ |X|
k⊥kX⊥ ≝ { r | r · 0 ↑ }
for some chosen element 0 of the PCA. Elements of X⊥ are either elements of X, or
the undefined value ⊥. Realizers of x ∈ |X| are ‘computations’ r ∈ A which, when
run (i.e. given the dummy value 0 as argument) return a realizer of x. A computation
that does not halt when run represents the undefined value.
Bear in mind that this definition of the lifted assembly is only useful in K1 . In
particular, it does not work at all if the PCA is total. There are other, more involved
ways of defining the lifted assembly: see Longley and Simpson [1997] for a rather
elegant and uniform method.
5.2.4
The Exposure
Theorem 40. There is an exposure
□ : Asm(A) # Asm(A)
for any PCA A.
Proof. For an assembly X ∈ Asm(A), let □X be the assembly defined by
|□X| ≝ { (x, a) | x ∈ |X| , a ∈ kxkX }
k(x, a)k□X ≝ { a }
Given (f, r) : X → Y , we define □(f, r) ≝ (fr , r) : □X → □Y where
fr : |□X| −→ |□Y |
(x, a) ↦ (f (x), r · a)
Each element (x, a) ∈ |□X| has a unique realizer, a. As a ∈ kxkX , and f is tracked by r, we see that r · a ↓ and r · a ∈ kf (x)kY , so that (f (x), r · a) ∈ |□Y |. It follows that r tracks fr , and so (fr , r) is an arrow □X → □Y .
To prove that □ preserves composites, observe that for arrows (f, r) : A → B and (g, s) : B → C, we have
(x, a) ↦ (f (x), r · a) ↦ (g(f (x)), s · (r · a))
under fr and then gs , and as B · s · r · a ' s · (r · a), we have
gs ◦ fr = (g ◦ f )B·s·r
which is of course tracked by B · s · r. Hence □((g, s) ◦ (f, r)) ∼ □(g, s) ◦ □(f, r). Regarding identities, notice that if idX : X → X is the identity arrow, then
(idX )I : (x, a) ↦ (x, I · a)
and as I · a ' a, the latter is equal to (x, a), so (idX )I = id|□X| . Hence □idX ∼ id□X .
Finally, we need to show that intensional equality implies extensional equality. Suppose □(f, r) ∼ □(g, s). That is equivalent to fr = gs , which in turn gives us both f = g and also r · a ' s · a for all a ∈ kxkX . The first implies (f, r) ∼ (g, s).
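In the toy Haskell terms of the sketch in §5.2.2 (again purely illustrative), the action of this exposure on an arrow (f, r) is the function (x, a) ↦ (f x, r · a), still tracked by r:

  type App p = p -> p -> Maybe p

  -- □(f, r) = (f_r, r); the result is Nothing exactly when r · a is undefined,
  -- which cannot happen when r really tracks f.
  boxArrow :: App p -> (x -> y, p) -> ((x, p) -> Maybe (y, p), p)
  boxArrow apply (f, r) = (\(x, a) -> fmap (\b -> (f x, b)) (apply r a), r)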
Intensional Equality
When showing that □ is an exposure, we inadvertently characterised intensional equality up to □:
Lemma 22. (f, r) ≈□ (g, s) : X → Y precisely when fr = gs , i.e.
f = g   and   ∀x ∈ |X| . ∀a ∈ kxkX . r · a ' s · a
This is indeed the meaning we expected of intensional equality: (f, r) and (g, s) are
not only equal extensionally, but they also have the same effect on realizers. Notice
that, unless we are in a highly extensional environment (which not all PCAs are),
this is very far from strict equality. Instead, it is something in between.
We can also use this characterisation of intensional equality to speak about the realizer structure of assemblies, in particular by characterising which objects of Asm(A)
are univalent (recall Definition 26).
Definition 40. An assembly X has unique realizers just if kxkX is a singleton, for
each x ∈ |X|.
Lemma 23. The univalent objects of Asm(A) are precisely those which have unique
realizers.
Proof. Suppose X is univalent. To each realizer a ∈ kxkX , there corresponds an arrow
x̂a ≝ (∗ ↦ x, λ∗ c. a) : 1 → X
But if a, b ∈ kxkX , then x̂a ∼ x̂b , as they share the same function component (∗ ↦ x). As X is univalent, it follows that x̂a ≈□ x̂b , so
a ' (λ∗ c. a) · 0 ' (λ∗ c. b) · 0 ' b
Conversely, suppose X has unique realizers. For any two extensionally equal arrows (f, r), (f, s) : Y → X, it is not very hard to see that fr = fs : we have that π1 (fr (y, b)) = f (y) = π1 (fs (y, b)) for any b ∈ kykY . Thus, as f (y) is uniquely realized by only one a, their second components are equal too, and (f, r) ≈□ (f, s).
□ is cartesian
It so happens that projections behave nicely under exposures.
Theorem 41. □ : Asm(A) # Asm(A) is a cartesian exposure.
Recall that π1 ◦ ⟨(f, r), (g, s)⟩ = (f, d), where
d ≝ B · fst · (λ∗ c. pair (r c)(s c))
so that the function component of □(π1 ◦ ⟨(f, r), (g, s)⟩) is fd (z, c) = (f (z), d · c). We compute that
d · c = B · fst · (λ∗ c. pair (r c)(s c)) · c = r · c
which is to say that fd = fr , and hence □(π1 ◦ ⟨(f, r), (g, s)⟩) ∼ □(f, r). The calculation is similar for the other projection.
For the third equation, it is easy to calculate that, for any arrow h : C → X × Y , the function component of □⟨π1 ◦ (h, r), π2 ◦ (h, r)⟩ is hs , where
s = λ∗ c. pair (B · fst · r · c) (B · snd · r · c)
We proceed by calculating that
s · c = pair (B · fst · r · c) (B · snd · r · c) = pair (fst (r · c)) (snd (r · c))
so that
hs (z, c) = (h(z), pair (fst (r · c)) (snd (r · c)))
The argument would now be complete were our pairing surjective, but this is not so in general PCAs. However, we know that c ∈ kzkC for some z ∈ |C|, so r · c ∈ k(x, y)kX×Y with h(z) = (x, y). Hence,
r · c = pair a b
for some a ∈ kxkX and b ∈ kykY , and hence
s · c = pair a b
as well. It follows that hs = hr , where r was the original tracking element of (h, r). We have finally obtained that
□⟨π1 ◦ (h, r), π2 ◦ (h, r)⟩ ∼ □(h, r)
The final thing to check is that any arrow into the terminal object 1 is intensionally equal to the canonical one; this follows, as 1 has only one element with a unique realizer.
□ is weakly cartesian closed
It is also the case that □ : Asm(A) # Asm(A) is weakly cartesian closed (§4.2.5), in the sense that the equation
ev ◦ (λ(f, r) × idX ) ≈□ (f, r)
holds for any (f, r) : C × X → Y . To prove this one needs another tiring but easy calculation like the one showing that □ is cartesian: it is of no interest, save another use of the trick that any z ∈ k(c, x)kC×X is always of the form z = pair i j for i ∈ kckC and j ∈ kxkX .
Preservation of Products
Define
m0 ≝ (∗ ↦ (∗, 0), I) : 1 → □1
which maps the unique element of 1 to the pair of itself and its unique realizer, and is realized by the identity combinator. This is a P-isomorphism with inverse
!□1 ≝ ((∗, 0) ↦ ∗, I) : □1 → 1
Also, define mA,B : □A × □B → □(A × B) by mA,B ≝ (wA,B , I), where
wA,B : |□A| × |□B| −→ |□(A × B)|
((x, a), (y, b)) ↦ ((x, y), pair a b)
The only realizer for the pair ((x, a), (y, b)) is pair a b, so the identity combinator tracks wA,B . Then ⟨□π1 , □π2 ⟩ : □(A × B) → □A × □B is equal to (vA,B , I), where
vA,B : |□(A × B)| −→ |□A| × |□B|
((x, y), r) ↦ ((x, fst · r), (y, snd · r))
and, as before, it is easy to see that this is an inverse to mA,B , as r is necessarily of the form pair a b. Hence,
Theorem 42. The exposure □ : Asm(A) # Asm(A) is product-preserving.
Comonadicity and Idempotence
Proposition 9. There exists a natural transformation of exposures,
ε : □ # IdAsm(A)
Proof. Define εX ≝ (uX , I), where
uX : |□X| → |X|
(x, a) ↦ x
If b ∈ k(x, a)k□X = {a}, then b = a and I · b ' b = a ∈ kxkX , so uX is indeed tracked by I. To show naturality for (f, r) : X → Y , we chase (x, a) around the square: along one leg it is mapped by fr to (f (x), r · a) and then by uY to f (x); along the other it is mapped by uX to x and then by f to f (x). Hence uY ◦ fr = f ◦ uX , so the square commutes up to ∼. Bear in mind that the tracking element along one composite is B · r · I, whereas along the other composite it is B · I · r.
Let us now investigate the structure of □2 X, for any assembly X. For each (x, a) ∈ |□X|, we have that k(x, a)k□X = {a}. Thus, we can infer that
|□2 X| = { ((x, a), a) | x ∈ |X|, a ∈ kxkX }
and that, for any (f, r) : X → Y , the function component of □2 (f, r) is
(fr )r : |□2 X| −→ |□2 Y |,   ((x, a), a) ↦ ((f (x), r · a), r · a)
Proposition 10. There exists a natural transformation of exposures,
δ : □ # □2
Proof. Define δX ≝ (vX , I), where
vX : |□X| −→ |□2 X|
(x, a) ↦ ((x, a), a)
If a ∈ kxkX , then I · a ' a ∈ {a} = k((x, a), a)k□2 X , so I indeed tracks vX . To show naturality for a given arrow (f, r) : X → Y , we chase (x, a) around the square: fr maps it to (f (x), r · a), which vY sends to ((f (x), r · a), r · a); while vX sends it to ((x, a), a), which (fr )r also sends to ((f (x), r · a), r · a). So the diagram commutes up to ∼. Note that the tracking elements along the composites are respectively B · I · r and B · r · I.
It is not difficult to see that the components of δ : □ # □2 are actually isomorphisms. Hence, putting everything together:
Theorem 43. (□, ε, δ) is a product-preserving idempotent comonadic exposure.
Proof. It suffices to verify the coherence conditions. Regarding the first one, chase (x, a) ∈ |□X| around: vX sends it to ((x, a), a), which v□X then sends to (((x, a), a), a); along the other leg, vX sends it to ((x, a), a), which (vX )I sends to (((x, a), a), I · a) = (((x, a), a), a). So the diagram commutes, since I · a ' a. Both the tracking elements along each composite are B · I · I, so the diagram actually commutes on the nose. Regarding the second one: vX sends (x, a) to ((x, a), a), which u□X sends back to (x, a), and which (uX )I sends to (x, I · a) = (x, a). So it commutes, since I · a ' a. Once more, both tracking elements are B · I · I, so commutation is again on the nose.
5.2.5
Weak Extensionality and Naturality
We have shown that nearly everything in sight behaves well, even with respect to intensional equality ≈□ : Proposition 5 necessitates that, when we have a product-preserving (and hence cartesian) exposure, the function
⟨·, ·⟩ : Asm(A)(C, X) × Asm(A)(C, Y ) → Asm(A)(C, X × Y )
that is implicated in the definition of products respects intensional equality ≈□ . The glaring exception, of course, is the cartesian closed structure: it is not necessarily the case that
λC : Asm(A)(C × X, Y ) → Asm(A)(C, Y^X )
preserves intensional equality.
However, it is interesting to investigate when this might happen. Let (f, r) ≈□ (f, s) : C × X → Y be two intensionally equal morphisms; we have
∀d ∈ k(c, x)kC×X . r · d ' s · d      (5.1)
We can calculate that
λ(f, r) ≝ (λ(f ), λ∗ c a. r(pair c a))
λ(f, s) ≝ (λ(f ), λ∗ c a. s(pair c a))
Thus, to prove that λ(f, r) ≈□ λ(f, s), all we need to check is that
λ∗ a. r(pair c a) ' λ∗ a. s(pair c a)
for every realizer c of an element of C. By (5.1), it is indeed the case that r(pair c a) ' s(pair c a), because the realizers d ∈ k(c, x)kC×X are exactly of the right form. Nevertheless, we are not allowed to use that equation under an occurrence of λ∗ ! The situation that allows this is the one where the PCA A is weakly extensional.
Definition 41. A PCA (A, ·) is weakly extensional if it satisfies the rule
M 'N
λ∗ x. M ' λ∗ x. N
for any two expressions M, N and any variable x.
Weak extensionality, also known as rule (ξ), is a notorious thorn in the study of
the correspondence between combinatory logic and untyped λ-calculus. [Barendregt,
1984, §7.3.5(iii)] sets out a finite set of equational axioms (due to Curry) that suffice
to ensure it.
The original plan for this thesis was that Asm(K1 ) would be a model of Intensional PCF, and Löb’s rule would directly correspond to Kleene’s Second Recursion
Theorem. Unfortunately, K1 is very likely not weakly extensional. In §8 we will
develop a slight restriction on Intensional PCF, which will remove the requirement
that λ(−) preserve intensional equality, which is otherwise necessary.
Since we have come this far, let us also investigate the naturality of λ(−), i.e. the equation
λ (f ◦ (g × id)) ≈□ λ(f ) ◦ g
under intensional equality. Given (f, r) : C × X → Y and (g, s) : C′ → C, we compute that
λ ((f, r) ◦ ((g, s) × id)) = (λ (f ◦ (g × id)) , λ∗ c′ a. (B · r · h) · (pair c′ a))
λ(f, r) ◦ (g, s) = (λ(f ) ◦ g, B · (λ∗ c′ a. r · (pair c′ a)) · s)
where h ≝ λ∗ d. pair (B · s · fst · d) (B · I · snd · d). It would suffice to prove that these two realizers, when applied to any c′, return the same element, namely
λ∗ a. r · (pair (s · c′) a)
This is easy to check with weak extensionality, but it seems impossible to ensure without it. Therefore, λ(−) is natural up to ≈□ if A is weakly extensional.
Finally, let us seize the opportunity to mention that if A is extensional, in the sense that the η-rule
λ∗ x. e x ' e
holds for any expression e ∈ E(A), then
λ(ev ◦ ((f, r) × id)) ≈□ (f, r) : C → Y^X
In that case we would say that □ : Asm(A) # Asm(A) is cartesian closed.
5.3
Exposures in Homological Algebra
This section aims to support the claim that, even though inspired by logic and computability, exposures are to be found in other contexts as well. This lends credibility
to the idea that, even if within logic exposures are a sort of abstract yet well-behaved
Gödel numbering, the phenomenon of intensionality, as discussed in §1.1 is more
general, and can be found in other areas of mathematics.
We will draw our example from homological algebra. Homological algebra begins once we have chain complexes C(X), i.e. sequences of abelian groups Cn connected by homomorphisms ∂n : Cn → Cn−1 ,
· · · → Cn+1 → Cn → Cn−1 → · · · → C1 → C0
such that ∂n ◦ ∂n+1 = 0. One then forms the groups of boundaries and cycles, namely
Bn (X) ≝ im(∂n+1 )
Zn (X) ≝ ker(∂n )
The objects of study are then the homology groups Hn , defined by
Hn (X) ≝ Zn (X)/Bn (X)
A natural question arises: what if we make it so we never actually have to take
quotients?
5.3.1
The P-category Grp
Instead of taking the quotient G/N of a group G by one of its normal subgroups N ,
we will instead merely keep the tuple
(G, N )
and work with it: we will consider this as G/N , even though the two components are
kept separately.
Often in homology one considers maps between homology groups, which are group
homomorphisms of type
f∗ : G1 /H1 → G2 /H2
Nevertheless, rarely does one work out G1 /H1 exactly before defining such a f∗ . More
commonly, one picks out a representative of the equivalence class, defines f on that,
and then proves that the outcome is invariant under the choice of representative. This
amounts to defining some f : G1 → G2 such that
f (H1 ) ⊆ H2
Thus, we pick the morphisms f : (G1 , H1 ) → (G2 , H2 ) to be exactly those homomorphisms. Each one of them induces a homomorphism f∗ : G1 /H1 → G2 /H2 as is
customary.
To prove that two such maps f∗ , g∗ : G1 /H1 → G2 /H2 are equal, it suffices to
prove that they are pointwise homologous, i.e. that f − g takes values only in
H2 . This will be exactly our definition of extensional equality:
f ∼ g : (G1 , H1 ) → (G2 , H2 )
just if
im(f − g) ⊆ H2
A classic result of basic group theory, viz.
G1 /N1 × G2 /N2 ≅ (G1 × G2 )/(N1 × N2 )
also implies that we can define
(G1 , N1 ) × (G2 , N2 ) ≝ (G1 × G2 , N1 × N2 )
and use it to prove that
Theorem 44. The P-category Grp is cartesian.
We can now think of the n-th homology functor as taking values in this P-category,
or—even better—its subcategory Ab of abelian groups,
Hn : Top −→ Ab
X 7−→ (Zn (X), Bn (X))
f : X → Y 7−→ f# : (Zn (X), Bn (X)) → (Zn (Y ), Bn (Y ))
5.3.2
Intensionality and Homomorphisms
The point of not forcing f to be f∗ is that the action of f : (G1 , H1 ) → (G2 , H2 )
on each cycle of G1 is ‘visible,’ even if that cycle is a boundary. This is because
f : G1 → G2 is still an actual homomorphism, which only happens to ‘respect’ a
normal subgroup. The way we will define an exposure on this category is precisely
by ‘exposing’ the action of f on individual cycles, even if they are boundaries.
We shall then define an endoexposure,
C : Grp # Grp
by
C(G, H) ≝ (G, {eG })
Cf ≝ f : (G1 , {eG1 }) → (G2 , {eG2 })
It is not at all difficult to prove that this is indeed an exposure: it preserves composition and identities, and indeed if Cf ∼ Cg then f and g are equal, so im(f − g) contains only the identity element.
In fact, this exposure has comonadic structure, which comes for free. For evaluators, we notice that ε(G,H) : C(G, H) → (G, H) is actually of type (G, {eG }) → (G, H),
so it suffices to take the identity, which—in this context—is a kind of quotient map.
Similarly, δ(G,H) : (G, {eG }) → (G, {eG }), so again it suffices to take the identity. It
is trivial that every single diagram in the definition of evaluator and quoter, as well
as that of comonadic exposure, commutes: all the arrows are identities.
Furthermore, it is not hard to see that C preserves products: the candidate isomorphisms
m : C(G1 , H1 ) × C(G2 , H2 ) → C(G1 × G2 , H1 × H2 )
have the same source and target, namely (G1 × G2 , {(eG1 , eG2 )}), so it suffices to take
the identity.
Chapter 6
Intensional Recursion in
P-Categories1
Armed with the framework of exposures, we can now speak of both extensional and
intensional recursion in categorical terms.
The case of extensional fixed points (EFPs) was first treated by Lawvere [1969,
2006] in the late 1960s. However, we will argue that his notion of fixed point is far
too coarse for most applications in logic and computer science.
Instead, we will use exposures to replace that definition with one that captures
intensional recursion, namely that of intensional fixed points (IFPs) (§6.1). We begin
our investigation by showing that our framework allows for clear and concise formulations of the classic theorems of Gödel, Tarski, and Rice. The relevant arguments
are entirely algebraic, and it is very clear what logical devices or assumptions each
one requires. The conclusion to be drawn is that, despite their common use of IFPs,
these three arguments have a fundamentally different flavour. In (§6.2) we discuss
the relationship between IFPs and Löb’s rule in provability logic.
Then, in §6.3, we then ask the natural question: where do IFPs come from? We
recall in detail a theorem of Lawvere which guarantees the existence of EFPs under
certain assumptions. We use exposures to prove the Intensional Recursion Theorem,
a similar theorem that pertains to IFPs instead.
Finally, we examine the nature of both EFPs and IFPs in the three examples
that we presented at length in §5. In particular, when viewed through the lens of
the exposure on assemblies (§5.2), Lawvere’s theorem and our Intensional Recursion
Theorem are revealed to be categorical versions of the First and Second Recursion
Theorems of Kleene respectively, as discussed in §2.
1
A preliminary form of the results in this chapter was first published as [Kavvos, 2017a], which
is available at Springer: https://dx.doi.org/10.1007/978-3-662-54458-7_32
6.1
Extensional and Intensional Fixed Points
Lawvere [1969, 2006] famously proved a theorem which guarantees that, under certain
assumptions, which we discuss in §6.3, there exist fixed points of the following sort.
Definition 42. An extensional fixed point (EFP) of an arrow t : Y → Y is a point
y : 1 → Y such that
t◦y =y
If every arrow t : Y → Y has an EFP, then we say that Y has EFPs.
In Lawvere’s paper EFPs are a kind of fixed point that, for logical purposes,
oughtn’t exist. After constructing a category based on a logical theory (e.g. PA), in
a manner that we have quite closely followed in §5.1, he argues that there can be no
sat : A × A → 2
such that for every formula φ : A → 2 there is a point cφ : 1 → A such that
sat ◦ ⟨a, cφ ⟩ = φ ◦ a : 1 → 2
2
for every point a : 1 → A. In logical terms, this would amount to the existence of a
two-variable predicate sat(−, −), and a Gödel number pφq for each unary predicate
φ(x), such that
T ` sat(pφ(x)q, n) ↔ φ(n)
for each n. If such a predicate existed, then ‘satisfaction would be definable,’ and we
would obtain an EFP of the arrow 2 → 2 encoding the logical ‘not’ function. In logical
terms, we could obtain a closed formula ψ such that T ` ψ ↔ ¬ψ. This leads to a
categorical version of Tarski’s Undefinability Theorem: if ‘truth were definable,’ then
substitution would be too, and ¬ would have a fixed point.
Finally, Lawvere obtains a version of Gödel’s First Incompleteness Theorem as
follows: given an (external) relation between closed formulas and points, relating
closed formulas to their ‘Gödel number,’ if we assume that provability is ‘internally
decidable’ on these Gödel numbers, and if we assume that all ‘truth values’ are either
true or false, then truth would be definable, which—by the categorical version of
Tarski’s undefinability theorem—it is not.
We can already see that Lawvere’s notion of EFP does not encompass fixed points
that ought to exist. For example, the diagonal lemma for Peano Arithmetic (henceforth PA) manufactures a closed formula fix(φ) for every formula φ(x), such that
PA ` fix(φ) ↔ φ(pfix(φ)q)
The formula fix(φ) occurs asymmetrically: on the left hand side of the bi-implication
it appears as a truth value, but on the right hand side it appears under a Gödel
numbering, i.e. an assignment p·q of a numeral to each term and formula of PA.
Taking our cue from the exposure on Peano arithmetic (§5.1), we can generalise this
idea to the following, which encompasses this kind of ‘asymmetric’ fixed point.
Definition 43. Let Q : B # B be a product-preserving endoexposure. An intensional fixed point (IFP) (w.r.t. Q) of an arrow t : QA → A is a point a : 1 → A such that the following equation holds up to ∼:
a ∼ t ◦ Qa ◦ m0 : 1 → A
An object A has IFPs (w.r.t. Q) if every arrow t : QA → A has an IFP.
This makes intuitive sense: a : 1 → A is extensionally equal to t ‘evaluated’ at the point Qa ◦ m0 : 1 → QA, which is the ‘quoted’ version of a.
6.1.1
Consistency, Truth and Provability: Gödel and Tarski
We are now in a position to argue that the two well-known theorems that were
discussed by Lawvere can be reduced to very simple algebraic arguments involving
exposures. In fact, the gist of both arguments relies on the existence of IFPs for an
‘object of truth values’ in a P-category. For background in Gödel’s First Incompleteness Theorem and Tarski’s Undefinability Theorem, see Smullyan [1992] and Boolos
[1994].
Suppose that we have some sort of object 2 of ‘truth values.’ This need not be
fancy: we require that it has two points,
>:1→2
⊥:1→2
standing for truth and falsehood respectively. We also require an arrow ¬ : 2 → 2
that encodes the logical negation, satisfying
¬◦>∼⊥
¬◦⊥∼>
and
¬◦f ∼⊥
=⇒
f ∼>
A simplified version of Gödel’s First Incompleteness theorem for PA is this:
Theorem 45 (Gödel). If PA is consistent, then there are sentences φ of PA such that
neither PA ` φ nor PA ` ¬φ.
The proof relies on two constructions: the diagonal lemma, and the fact that provability is definable within the system. The definability of provability amounts to the
fact that there is a formula Prov(x) with one free variable x such that
PA ` φ if and only if PA ` Prov(pφq)
That is: modulo Gödel numbering, the system can internally ‘talk’ about its own
provability. It is not then hard to sketch the proof to Gödel’s theorem.
Proof of Theorem 45. Use the diagonal lemma to construct ψ such that
PA ` ψ ↔ ¬Prov(pψq)
Then ψ is provable if and only if it is not, so if either PA ` ψ or PA ` ¬ψ we would
observe inconsistency. Thus, if PA is consistent, neither ψ nor ¬ψ are provable.
It follows that ψ is not equivalent to either truth value. In a way, ψ has some other
eerie truth value, which is neither > nor ⊥. Classical logicians would say that it is
undecidable.
Let us represent the provability predicate as an arrow p : Q2 → 2 such that y ∼ >
if and only if p ◦ Qy ◦ m0 ∼ >. Consistency is captured by the following definition:
Definition 44. An object of truth values 2 as above is simply consistent just if
> 6∼ ⊥
Armed with this machinery, we can now transport the argument underlying Gödel’s
proof to our more abstract setting.
Theorem 46. If p : Q2 → 2 is as above, and 2 has IFPs, then one of the following
things is true:
• either there are points of 2 other than > : 1 → 2 and ⊥ : 1 → 2, or
• 2 is not simply consistent, i.e. > ∼ ⊥.
Proof. As 2 has IFPs, take y : 1 → 2 such that
y ∼ ¬ ◦ p ◦ Qy ◦ m0
Now, if y ∼ >, then by the property of p above, p◦Qy◦m0 ∼ >, hence ¬◦p◦Qy◦m0 ∼
⊥, hence y ∼ ⊥. So either y 6∼ > or 2 is not simply consistent. Similarly, either
y 6∼ ⊥ or 2 is not simply consistent.
Tarski’s Undefinability Theorem, on the other hand is the result that truth cannot
be defined in arithmetic [Smullyan, 1992].
Theorem 47 (Tarski). If PA is consistent, then there is no predicate True(x) such
that
PA ` φ ↔ True(pφq)
for all sentences φ.
Proof. Use the diagonal lemma to obtain a closed ψ such that
PA ` ψ ↔ ¬True(pψq)
Then PA ` ψ ↔ ¬ψ, which leads to inconsistency.
A truth predicate would constitute an evaluator ε : Q # IdB . If we had one, we would have that
ε2 ◦ Q(y) ◦ m0 ∼ y ◦ ε1 ◦ m0 ∼ y
where the last equality is because 1 is terminal. This is actually a more general
Lemma 24. Let Q : B # B be an endoexposure, and let ε : Q # IdB be an evaluator. Then, if A has IFPs then it also has EFPs.
Proof. Given t : A → A, consider t ◦ εA : QA → A. An IFP for this arrow is a point y : 1 → A such that y ∼ t ◦ εA ◦ Qy ◦ m0 ∼ t ◦ y.
In proving Tarski’s theorem, we constructed a sentence ψ such that PA ` ψ ↔ ¬ψ.
This can be captured abstractly by the following definition.
Definition 45. An object 2 as above is fix-consistent just if the arrow ¬ : 2 → 2 has
no EFP: that is, there is no y : 1 → 2 such that ¬ ◦ y ∼ y.
Putting these together, we get
Theorem 48. If 2 has IFPs in the presence of an evaluator, then it is not fix-consistent.
6.1.2
Rice’s theorem
To further illustrate the applicability of the language of exposures, we state and prove
an abstract version of Rice’s theorem. Rice’s theorem is a result in computability
which states that no computer can decide any non-trivial property of a program by
looking at its code. A short proof relies on the SRT.
Theorem 49 (Rice). Let F be a non-trivial set of partial recursive functions, and let AF ≝ { e ∈ N | φe ∈ F } be the set of indices of functions in that set. Then AF is undecidable.
Proof. Suppose AF is decidable. The fact that F is non-trivial means that there is some a ∈ N such that φa ∈ F and some b ∈ N such that φb ∉ F. Consequently, a ∈ AF and b ∉ AF .
Define f (e, x) ' if e ∈ AF then φb (x) else φa (x). By Church’s thesis, f : N × N ⇀ N is partial recursive. Use the SRT to obtain e ∈ N such that φe (x) ' f (e, x). Now, either e ∈ AF or not. If it is, φe (x) ' f (e, x) ' φb (x), so that φe ∉ F, a contradiction. Similarly if e ∉ AF .
Constructing the function f in the proof required three basic elements: (a) the
ability to evaluate either φa or φb given a and b; (b) the ability to decide which one
to use depending on the input; and (c) intensional recursion. For (a), we shall need
evaluators, for (b) we shall need that the truth object 2 is a weak coproduct of two
copies of 1, and for (c) we shall require IFPs.
Theorem 50. Let 2 be a simply consistent truth object which also happens to be a
weak coproduct of two copies of 1, with injections
> : 1 → 2
⊥ : 1 → 2
Suppose that A has EFPs, and that f : A → 2 is such that for all x : 1 → A, either
f ◦ x ∼ > or f ◦ x ∼ ⊥. Then f is trivial, in the sense that either
∀x : 1 → A. f ◦ x ∼ >
or
∀x : 1 → A. f ◦ x ∼ ⊥
Proof. Suppose that there are two such distinct a, b : 1 → A such that f ◦ a ∼ > and
f ◦ b ∼ ⊥. Let
def
g = [b, a] ◦ f : A → A
and let y : 1 → A be its EFP. Now, either f ◦ y ∼ > or f ◦ y ∼ ⊥. In the first case,
we can calculate that
> ∼ f ◦ y ∼ f ◦ g ◦ y ∼ f ◦ [b, a] ◦ f ◦ y ∼ f ◦ [b, a] ◦ > ∼ f ◦ b ∼ ⊥
so that 2 is not simply consistent. A similar situation occurs if f ◦ y ∼ ⊥.
Needless to say, the premises of this theorem are easily satisfied in our exposure
on assemblies from §5.2 if we take A = N⊥^N and 2 to be the lifted coproduct (1 + 1)⊥ .
6.2
The relationship to Löb’s rule
In this chapter we have considered two sorts of fixed points, extensional and intensional. Figure 6.1 summarises the definition of these two types of fixed points, in
1-category theory and P-category theory respectively.
Figure 6.1: Types of Fixed Points (without parameters)

Type           Morphism        Fixed Point
Extensional    t : A → A       a : 1 → A with t ◦ a = a
Intensional    t : QA → A      a : 1 → A with t ◦ Qa ◦ m0 ∼ a
Viewed through the lens of the Curry-Howard isomorphism, the existence of extensional fixed points at A can be written as the logical inference rule
A→A
A
which is exactly the type of the Y combinator of PCF. In fact, this exact inference rule
corresponds to an equivalent formulation of PCF that proceeds through the binding
construct µx:A.M with equation
µx:A. M = M [µx:A. M /x]
For more details, see Gunter [1992].
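For concreteness—and purely as an illustration, not part of the formal development—the behaviour of this construct in an ordinary call-by-value language can be sketched in a few lines of Python; the η-expansion guards the unfolding, which would otherwise diverge under strict evaluation:

def fix(f):
    # Call-by-value analogue of the Y combinator: returns a fixed point of f.
    # The eta-expansion below delays the recursive unfolding, which would
    # otherwise loop forever under strict evaluation.
    return f(lambda *args: fix(f)(*args))

# Example: factorial, obtained as the fixed point of a non-recursive functional.
factorial = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
assert factorial(5) == 120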
This rule is obviously logically catastrophic, as it produces a closed term µx:A. x
at each type A. Consequently, there is no honest Curry-Howard isomorphism for PCF:
every type is inhabited, and thus every ‘formula’ is provable, leading to the trivial
logic. However, the terms do matter, because they have computational behaviour : it
might be that the types do not correspond to logical formulae, but they are there to
stop basic programming errors. And, in the end, the purpose of PCF is simply typed
general recursive programming.
Viewing the notion of intensional fixed points through the lens of Curry-Howard,
the result is much more impressive: IFPs correspond to Löb’s rule, namely
□A → A
A
def
To see this, it suffices to read = Q and to look at the definition of intensional fixed
points as an inference rule, i.e.
f : QA → A
f◦ : 1 → A
such that the following equation holds up to ∼:
f◦ ∼ f ◦ Qf◦ ◦ m0
Unfortunately, we have seen in this chapter that a key ingredient in the theory of
exposures are the so-called evaluators, which are natural transformations ε : Q # Id.
These encapsulate the modal axiom T, namely
□A → A
Coupled with the above inference rule, this will also have the catastrophic consequence
that every type is inhabited. We shall not be alarmed by this fact, for we will
want general recursion, and there seems no way around partiality in that case, as
we discussed in §1.2.4. We shall still use the Curry-Howard isomorphism, but only
heuristically.
However, let us for the moment revert to the mindset of a purely categorical
logician. In our parallel work [Kavvos, 2017b,c] we have investigated an extension
of the Curry-Howard isomorphism to box modalities. Our methodology consisted of
mimicking the rules of sequent calculus in natural deduction. A quick perusal suffices
to bring to light the fact that in that work we used a stronger form of Löb’s rule,
namely
□A → A
□A
Historically speaking, this variant of Löb’s rule is proof-theoretically well-behaved,
and yields cut-free sequent calculi. It was introduced in the context of intuitionistic
modal logic by Ursini [1979a]. If we were to use it to formulate iPCF,2 we would
obtain something akin to
∆ ; z : □A ` M : A
∆ ; Γ ` fix z in box M : □A
with
fix z in box M −→ box M [fix z in box M/z]
However, this does not have a clear intensional interpretation. If, as in §1.4.2, we try
to devise a ‘reading’ of this in the untyped λ-calculus, this rule would require that
for each term f there exists a u such that
u =β pf puqq
which is obviously nonsense. This is why in this thesis we have weakened our formulation to Löb’s original rule.
Nevertheless, we can still ‘transport’ this version of Löb’s rule over to P-categories:
it amounts to the inference rule
f : QA → A
f † : 1 → QA
such that the following equation holds up to ∼:
f† ∼ Qf ◦ Qf† ◦ m0
Now: can we use the framework of comonadic exposures to prove that this version
of Löb’s rule is stronger? What is the exact relationship between this and the original
form?
Surprisingly, the answer is positive. But before we show that, let us give a name
to these two kinds of IFPs so we can talk about them efficiently.
2 In fact, the first versions of this thesis and [Kavvos, 2017d] both used this version of iPCF.
Definition 46. Let Q : B # B be a product-preserving endoexposure.
1. A meek intensional fixed point (meek IFP) of an arrow f : QA → A is a point
a : 1 → A such that the following equation holds up to ∼:
a ∼ f ◦ Qa ◦ m0
2. A vehement intensional fixed point (vehement IFP) of an arrow f : QA → A is
a point a : 1 → QA such that the following equation holds up to ∼:
a ∼ Qf ◦ Qa ◦ m0
Thus the intensional fixed points we have been working with up to this point are
meek, whereas the proof theory in [Kavvos, 2017b,c] use a pattern closer to vehement
ones.
If we have a meek IFP whose defining equation holds up to intensional equality
(≈Q ), then that is also a vehement IFP. Conversely, if we have a vehement IFP, and
our comonadic exposure is idempotent, then we obtain a meek IFP.
Theorem 51. Let f : QA → A.
1. If we have a meek IFP f◦ of f : QA → A, which moreover is so intensionally, i.e.
f◦ ≈Q f ◦ Qf◦ ◦ m0
then we can obtain a vehement IFP of f , defined by
f† def= Qf◦ ◦ m0
2. If we have a vehement IFP f† of f : QA → A, and moreover the comonadic exposure
(Q, ε, δ) is idempotent, then we can obtain a meek IFP of f , defined by
f◦ def= εA ◦ f†
Proof.
1. We calculate:
f†
∼ { definition }
Qf ◦ ◦ m0
∼ { assumption }
Q(f ◦ Qf ◦ ◦ m0 ) ◦ m0
∼ { exposure }
Qf ◦ Q(Qf ◦ ◦ m0 ) ◦ m0
∼ { definition }
Qf ◦ Qf † ◦ m0
so that f † is a vehement IFP.
2. We calculate
f◦
∼ { definition }
εA ◦ f†
∼ { definition }
εA ◦ Qf ◦ Qf† ◦ m0
∼ { ε natural }
f ◦ εQA ◦ Qf† ◦ m0
∼ { ε natural }
f ◦ f† ◦ ε1 ◦ m0
∼ { ε monoidal }
f ◦ f†
But, by the corollary of the Quotation-Evaluation lemma (Lemma 4),
f† ∼ Q(εA ◦ f†) ◦ m0 ∼ Qf◦ ◦ m0
so f◦ ∼ f ◦ Qf◦ ◦ m0 is a meek IFP.
6.3
Whence fixed points?
6.3.1
Lawvere’s Theorem
Lawvere [1969, 2006] proved a fixed point theorem that generalises a number of ‘diagonal’ constructions, including Cantor’s theorem, Gödel’s First Incompleteness Theorem, and the Tarski undefinability theorem. In order to state it, we will first need
some notation for cartesian closed categories (CCCs). A cartesian closed category
[Eilenberg and Kelly, 1966] is a great place: it is a mathematical universe where
morphisms of the category correspond exactly to points of certain objects, the exponentials. Indeed, to every morphism f : A → Y there corresponds a point of the
exponential object Y A , namely
pf q : 1 → Y^A
which is defined by
pf q def= λ( 1 × A ≅ A --f--> Y )
that is, by currying the composite of the canonical isomorphism 1 × A ≅ A with f ,
and to each point y : 1 → Y^A there corresponds a morphism y° : A → Y , defined by
y° def= ev ◦ hy ◦ !A , idA i : A → Y^A × A → Y
The above operations are mutually inverse:
(pf q)° = f ,    p(y°)q = y
For a classic exposition, see Lambek and Scott [1988].
The main idea in Lawvere’s paper is this: a morphism
r : X → Y^A
from an object to an exponential may be thought as ‘indexing’ morphisms of type
A → Y ; for, given x : 1 → X, we have (r ◦ x)o : A → Y . Such an indexing may be
considered an enumeration if it is, in some sense, surjective. There are many ways in
which an arrow r : X → A can be surjective. Here are four:
• It could be a retraction; that is, there could exist s : A → X such that r ◦ s = idA .
The arrow s may be thought of as ‘choosing a preimage’ of elements of A with
respect to r. In the category of sets, surjective functions are always retractions
(if one assumes the axiom of choice).
• It could be point-surjective: for each point a : 1 → A there could be a point
x : 1 → X such that r ◦ x = a.
• It could be N -path-surjective: it could be that, for each ‘N -path’ q : N → A,
there is an N -path p : N → X such that r ◦ p = q.
The name of the object N has been chosen to suggest the natural numbers, so
that p : N → A can be thought of as tracing out a discrete path of points in A.
A point-surjective arrow is, of course, a 1-path-surjective arrow.
• It could be weakly point-surjective (only if the codomain is an exponential): if
r : X → Y A , then, for each x : 1 → X, we obtain r ◦ x : 1 → Y A , which
corresponds to a morphism
(r ◦ x)o : A → Y
It could then be that every morphism A → Y is ‘pointwise emulated’ by (r ◦ x)o
for some x. That is, for each f : A → Y , there exists xf : 1 → X such that
∀a : 1 → A. (r ◦ xf )o ◦ a = f ◦ a
So a weak point-surjection is a bit like ‘pointwise cartesian closure.’
Evidently,
r is a retraction =⇒ r is N -path-surjective =⇒ r is point-surjective =⇒ r is weakly point-surjective
where the second implication requires that N be non-empty, and the third implication only makes
sense if the codomain of r is an exponential, but may be skipped otherwise. For the second
implication, notice that the composite x ◦ !N : N → 1 → A is an N -path, factorise it through X,
and pre-compose with any point n : 1 → N .
Lawvere then observed that, if the codomain of the weak point-surjection is an
exponential Y A , and the ‘indexing object’ X coincides with A, a curious phenomenon
occurs.
Theorem 52 (Lawvere). If r : A → Y^A is a weak point-surjection, then every
arrow t : Y → Y has a fixed point.
Proof. Let
f def= t ◦ ev ◦ hr, idA i : A → Y^A × A → Y → Y
As r is a weak point-surjection, there exists a xf : 1 → A such that, for all a : 1 → A,
we have
(r ◦ xf )o ◦ a
= { r is a weak point-surjection }
f ◦a
= { definition of f }
t ◦ ev ◦ hr, idA i ◦ a
= { naturality of product }
t ◦ ev ◦ hr ◦ a, ai
= { terminal object: !A ◦ a = id1 }
t ◦ ev ◦ hr ◦ a ◦ !A ◦ a, ai
= { naturality of product }
t ◦ ev ◦ hr ◦ a ◦ !A , idA i ◦ a
= { definition of (−)o }
t ◦ (r ◦ a)o ◦ a
def
Taking a = xf produces a fixed point.
Lawvere also hinted at a ‘cartesian’ version of the above result that does not require
exponentials. In this version, the diagonal nature of the argument is even more
evident. To prove it, we need to introduce the following definition:
Definition 47. An arrow r : X × A → Y is a (cartesian) weak point-surjection if for
every f : A → Y there exists a xf : 1 → X such that
∀a : 1 → A. r ◦ hxf , ai = f ◦ a
We will not bother to qualify weak point-surjections as ordinary or cartesian, as it
will be clear by the context. We can now prove the
Theorem 53 (Lawvere). If r : A × A → Y is a weak point-surjection, then every
arrow t : Y → Y has a fixed point.
Proof. Let
f def= t ◦ r ◦ hidA , idA i
Then there exists a xf : 1 → A such that
r ◦ hxf , ai = f ◦ a
for all a : 1 → A. We compute that
r ◦ hxf , xf i = t ◦ r ◦ hidA , idA i ◦ xf = t ◦ r ◦ hxf , xf i
so that r ◦ hxf , xf i is a fixed point of t.
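To make the quantifier structure of these definitions concrete, here is a small, self-contained sanity check in Python (our own illustration, with finite sets standing in for objects and dictionaries for arrows): since negation on Y = {False, True} has no fixed point, Theorem 53 predicts that no map r : A × A → Y can be a weak point-surjection, and an exhaustive search confirms this for a three-element A.

from itertools import product

A = [0, 1, 2]          # a small set playing the role of the object A
Y = [False, True]      # Y carries the fixed-point-free map t = negation

def is_weak_point_surjection(r):
    # Definition 47: every f : A -> Y must be pointwise emulated by r(x_f, -)
    # for some point x_f of A.
    return all(
        any(all(r[(x, a)] == f[a] for a in A) for x in A)
        for f in product(Y, repeat=len(A))
    )

# Exhaustively check every r : A x A -> Y; none is a weak point-surjection.
all_maps = (dict(zip(product(A, A), values))
            for values in product(Y, repeat=len(A) ** 2))
assert not any(is_weak_point_surjection(r) for r in all_maps)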
We have seen in §6.1 that the extensional kind of fixed points produced by this
theorem are of a sort that oughtn’t exist. We believe that this is one of the reasons
that Lawvere’s result has not found wider applications. Nevertheless, we will also see
in §6.4.2 that—in a certain setting—this theorem corresponds to a very weak form of
Kleene’s First Recursion Theorem, which has been a central theorem in the semantics
of programming languages (see §2).
6.3.2
An intensional Lawvere theorem
Can we adapt Lawvere’s result to IFPs? The answer is positive: all we need is a
cartesian P-category B, a product-preserving exposure Q : B # B, and a reasonable
quoting device. What remains is to ‘embellish’ Lawvere’s argument with appropriate
instances of Q.
Theorem 54 (Intensional Recursion). Let Q : B # B be a product-preserving exposure, and let δA : QA → Q2 A be a reasonable quoting device. If r : QA × QA → Y is
a weak point-surjection, then every arrow
t : QY → Y
has an intensional fixed point.
Proof. Let
f def= t ◦ Qr ◦ m ◦ hδA , δA i : QA → Q²A × Q²A → Q(QA × QA) → QY → Y
Then, there exists a xf : 1 → QA such that
r ◦ hxf , ai ∼ f ◦ a
for all a : 1 → QA. We compute that
r ◦ hxf , xf i
∼ { definition }
t ◦ Qr ◦ m ◦ hδA , δA i ◦ xf
∼ { naturality }
t ◦ Qr ◦ m ◦ hδA ◦ xf , δA ◦ xf i
∼ { δA is a reasonable quoting device }
t ◦ Qr ◦ m ◦ hQxf ◦ m0 , Qxf ◦ m0 i
∼ { naturality }
t ◦ Qr ◦ m ◦ hQxf , Qxf i ◦ m0
∼ { Proposition 2 }
t ◦ Qr ◦ Qhxf , xf i ◦ m0
∼ { exposures preserve composition }
t ◦ Q(r ◦ hxf , xf i) ◦ m0
so that r ◦ hxf , xf i is an IFP of t.
6.4
Examples of Fixed Points
In this final section we shall briefly examine what extensional and intensional fixed
points mean in the first two examples of exposures presented in §5, namely the Lindenbaum P-category and the P-category of assemblies.
Unfortunately, since the trivial group is a zero object in the P-category of groups,
points do not carry any interesting structure, and thus neither notion of fixed point
is interesting in the third example.
6.4.1
Fixed Points in Gödel numbering: the Diagonal Lemma
We will now carefully consider what we have hinted at throughout the development of
the abstract analogues of Gödel and Tarski’s results in §6.1.1, viz. the diagonal lemma
of Peano arithmetic is precisely the existence of IFPs in the case of the exposure on
arithmetic presented in §5.1.
Recall once more the diagonal lemma: for every formula φ(x) there exists a closed
formula fix(φ) such that
PA ` fix(φ) ↔ φ(pfix(φ)q)
Take the formula φ(x). In Lind(PA), it is an arrow
φ(x) : A → 2
Let there be a sentence ψ : 1 → 2. When the exposure acts on it, it produces (the
numeral of) the Gödel number of ψ, which is a closed term. Suppose ψ is an IFP of
φ, i.e
ψ ∼ φ ◦ Qψ ◦ m0
We can then see that the RHS simplifies by substitution to φ(pψq), so this is precisely
the conclusion of the diagonal lemma.
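The computational shadow of the diagonal lemma is the classical quine: a program that constructs (a code of) itself. Purely for illustration—this is our own aside, not part of the formal development—here is the standard Python instance of the construction; the format/repr pair plays the role of substitution and Gödel numbering.

# The two lines below print an exact copy of themselves: s is the 'formula with
# one free slot', and formatting s with a quotation of s closes the diagonal.
s = 's = {0!r}\nprint(s.format(s))'
print(s.format(s))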
6.4.2
Fixed Points in Assemblies: Kleene’s Recursion Theorems
We now turn to the consideration of fixed points in the P-category of assemblies
Asm(A) that we considered in §5.2, along with the paradigmatic exposure
□ : Asm(A) # Asm(A)
EFPs are exactly what one would expect: given an arrow (f, r) : X → X, an EFP
of this arrow is an x ∈ |X| such that
f (x) = x
We will shortly argue that EFPs in this setting are strongly reminiscent of Kleene’s
First Recursion Theorem, which we discussed at length in §2.
On the contrary, an IFP of an arrow
(f, r) : □X → X
is more than what we had before: it is an element x ∈ |X| along with a realizer
a ∈ kxkX of it, such that
f (x, a) = x
That is, it is a kind of fixed point of f , but the computation of f also depends on the
chosen realizer a rather than simply x.
It is worth pausing for a moment to ask what a vehement IFP (as discussed in
§6.2) is in this case. It is not hard to compute that it consists once again of both an
element x ∈ |X| and a realizer a ∈ kxkX for it, but the pair now needs to satisfy not
only f (x, a) = x, but also
r·a'a
That is, the two sides need to have the same realizer. The above theorem is not too
surprising in light of Theorem 28 and Corollary 3: in the idempotent case, the fact
that two arrows 1 → QA are equal immediately yields that they are intensionally
equal. It therefore follows that vehement IFPs are too strong a notion for what we
consider to be intensional recursion.
Kleene’s Recursion Theorems
We will now argue that, in the case of Asm(K1 ), the notions of EFPs and IFPs indeed
match the conclusions of the two main theorems that are used to make recursive
definitions in computability theory, namely the First and Second Recursion Theorems
of Kleene. In fact, we will argue that Lawvere’s fixed point theorem (Theorem 53)
and our own Intensional Recursion theorem (Theorem 54) each correspond to abstract
versions of them. We have discussed the relationship between the two theorems at
length in §2, but—for the benefit of the reader—we recapitulate some basic points of
this discussion here, in a form tailored to our needs in this chapter.
Let us fix some notation. We write ' for Kleene equality: we write e ' e0 to mean
that either both expressions e and e0 are undefined, or both are defined and of equal
value. Let φ0 , φ1 , . . . be an enumeration of the partial
recursive functions. We will also require the s-m-n theorem from computability theory.
Full definitions and statements may be found in the book by Cutland [1980].
Theorem 55 (First Recursion Theorem). Let PR be the set of unary partial recursive
functions, and let F : PR → PR be an effective operation. Then F : PR → PR
has a fixed point.
Proof. That F : PR → PR is an effective operation means that there is a partial
recursive f : N × N * N such that f (e, x) ' F (φe )(x). Let d ∈ N be a code for the
partial recursive function φd (y, x) def= f (S(y, y), x), where S : N × N * N is the s-1-1
function of the s-m-n theorem. Then, by the s-m-n theorem, and the definitions of
d ∈ N and f ,
φS(d,d) (x) ' φd (d, x) ' f (S(d, d), x) ' F (φS(d,d) )(x)
so that φS(d,d) is a fixed point of F : PR → PR.
Lawvere’s theorem is virtually identical to a point-free version of this proof. Indeed,
if we let
S : N × N −→ PR,    (a, b) 7−→ φS(a,b)
then we can show that this is a weak point-surjection: any ‘computable’ f : N → PR
is actually just an index d ∈ N such that
φd (x, y) ' f (x)(y)
for all x, y ∈ N. We can then show that
S(d, x)(y) ' φS(d,x) (y) ' φd (x, y) ' f (x)(y)
so that S(d, x) = f (x) ∈ PR. Thus the resemblance becomes formal, and the argument applies to yield the FRT.
Yet, one cannot avoid noticing that we have proved more than that for which we
bargained. The f : N × N * N in the proof implemented a certain effective operation
F : PR → PR. It follows that f has a special property: it is extensional, in the
sense that
φe = φe0
=⇒
∀x ∈ N. f (e, x) ' f (e0 , x)
Notice, however, that the main step that yields the fixed point in this proof also holds
for any such f , not just the extensional ones. This fact predates the FRT, and was
shown by Kleene [1938].
Theorem 56 (Second Recursion Theorem). For any partial recursive f : N×N * N,
there exists e ∈ N such that φe (y) ' f (e, y) for all y ∈ N.
This is significantly more powerful than the FRT, as f (e, y) can make arbitrary
decisions depending on the source code e, irrespective of the function φe of which it is
the source code. Moreover, it is evident that the function φe has access to its own code,
allowing for a certain degree of reflection. Even if f is extensional, hence defining an
effective operation, the SRT grants us more power than the FRT: for example, before
recursively calling e on some points, f (e, y) could ‘optimise’ e depending on what y
is, hence ensuring that the recursive call will run faster than e itself would.
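To see that such self-referential behaviour is actually attainable, here is a small, self-contained Python realisation of the theorem (our own sketch, in which ‘programs’ are represented by Python source strings rather than Gödel numbers, and a quining trick stands in for the s-m-n theorem): given any binary f, it manufactures a source text e whose evaluation behaves like f(e, ·).

def srt(f):
    # Second Recursion Theorem by quining: return source code e such that
    # eval(e, {'f': f})(x) == f(e, x) for every x.
    template = "lambda x: f(({0!r}).format({0!r}), x)"
    return template.format(template)

# Example: a program that inspects its own source while recursing.
def f(own_source, n):
    return len(own_source) if n == 0 else n * eval(own_source, {"f": f})(n - 1)

e = srt(f)
program = eval(e, {"f": f})
assert program(0) == len(e)              # the program 'knows' its own size
assert program(3) == 3 * 2 * 1 * len(e)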
Lawvere’s argument, as presented above, cannot account for the Second Recursion
Theorem: S : N × N → PR had PR in the codomain, and thus this argument only
works to yield fixed points of effective operations PR → PR. We do not see a
meaningful way of replacing this with, say, N . We could certainly replace it with an
object N⊥ that accounts for non-termination, but that is not the route we wish
to take here.
What we really want is to define a computable operation PR 99K PR. In order
to explain the meaning of this, a shift in perspective is required. To say that F :
PR → PR is an effective operation, we need an extensional, total recursive function
f , implemented by some index d ∈ N that tracks it on codes, along with a proof that
f is extensional. But what about indices d ∈ N that describe a total φd , yet are not
extensional? These still map every e ∈ N, which is an index for φe , to φd (e), which is
an index for φφd (e) . Hence, while f may map codes of partial recursive functions to
codes of partial recursive functions, it may do so without being extensional. In that
case, it defines a non-functional operation G : PR 99K PR, which is exactly the case
where IFPs will apply.
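A tiny Python sketch (our own illustration, in the same source-strings-as-programs spirit as the srt sketch above) makes the distinction tangible: the transformation below maps program text to program text, and hence indices to indices, but it is blatantly non-extensional, since its output depends on the text of its input rather than on the function that text computes.

def transform(src: str) -> str:
    # Maps source code to source code, but NOT function to function: the
    # output depends only on the length of the input text.
    return "lambda x: 0" if len(src) % 2 == 0 else "lambda x: x + 1"

# Two different sources for the same (identity) function...
p1, p2 = "lambda x: x", "lambda x:x"
# ...are mapped to programs computing different functions: 8 vs 0 on input 7.
assert eval(transform(p1))(7) != eval(transform(p2))(7)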
We can see this far more clearly in the setting of Asm(K1 ). Arrows
N → N⊥
are easily seen to correspond to partial recursive functions. The weak point-surjection
S : N × N → PR we produced above can now be seen as an arrow r : N × N → N⊥^N
in Asm(K1 ), and invoking Lawvere’s theorem indeed shows that every arrow
N⊥^N → N⊥^N
has an extensional fixed point. Now, by Longley’s generalised Myhill-Shepherdson
theorem [Longley, 1995, Longley and Normann, 2015], these arrows exactly correspond to effective operations. Hence, in this context Lawvere’s theorem states that
each effective operation has a fixed point, and indeed corresponds to the simple diagonal argument above.3
However, let us now look at arrows of type
□(N⊥^N) → N⊥^N
These correspond to ‘non-functional’ transformations, mapping functions to functions, but without respecting extensionality. As every natural number indexes a
partial recursive function, these arrows really correspond to all partial recursive functions. It is not hard to see that □N is P-isomorphic to N: this fact can be used with
r to immediately produce a weak point-surjection q : □N × □N → N⊥^N, so that, by
our Intensional Recursion theorem (Theorem 54), every arrow of type
□(N⊥^N) → N⊥^N
has an IFP. It is thus evident that there is considerable formal similarity between this
application of Theorem 54 and Kleene’s Second Recursion Theorem!
3 But note that this is not the complete story, as there is no guarantee that the fixed point obtained
is least, which is what Kleene’s original proof in Kleene [1952] gives. See also §2.
Chapter 7
Intensional Semantics of iPCF I
This is the first of the two chapters that address the issue of finding a truly intensional
semantics for the language iPCF which we introduced in §3. Both of these chapters
can be seen as evidence towards the claim that iPCF is a truly intensional language.
To make progress towards our goal, we will use the theory of exposures, as developed in §4 and 6. The desired outcome is that the category of assemblies Asm(K1 )
over the PCA K1 , which—as we showed in §5.2—is a cartesian closed P-category
equipped with an idempotent comonadic exposure, is a model of iPCF.
However, before we delve into the details, we need to develop some algebraic
machinery for interpreting the rules of iPCF. Because of the delicate behaviour of
our two notions of equality—extensional (∼) and intensional (≈Q )—this will be more
involved than it sounds. Nevertheless, a lot of the ground work has been done before,
and is still applicable to this setting.
We have discussed the categorical semantics of modal λ-calculi at length in previous work—see [Kavvos, 2017b,c]: therein we found that categorical semantics of a
S4 modality comprise a Bierman-de Paiva category [Bierman, 2000], viz. a cartesian
closed category (C, ×, 1) along with a product-preserving comonad (Q, , δ). Adapting this to the intensional setting involves exchanging Q : C −→ C for an exposure
Q : (C, ∼) # (C, ∼) that is comonadic. The crucial factor that one should be aware
of is that no calculation in our previous work depended on the ‘forbidden principle’
f = g =⇒ Qf = Qg. Hence, a lot of the groundwork may be immediately transferred
to the intensional setting.
Nevertheless, there are some differences, and we shall deal with them in this
chapter. One of the main ones is our emphasis on idempotence. In the ensuing
development it will become clear that the idempotent case is particularly elegant and
useful. What is more, the argument made in §4.2.4, viz. that idempotence is the
right notion for intensionality, will be corroborated in this chapter.
7.1
Setting the scene
First, we shall define certain basic ingredients that we will use for our semantics.
These amount to algebraic short-hands that will prove incredibly useful for calculations.
We shall define the following arrows by induction (and be lax about the subscripts):
m(0) def= m0 : 1 → Q1
m(n+1) def= ∏_{i=1}^{n+1} QAi --( m(n) × id )--> Q( ∏_{i=1}^{n} Ai ) × QAn+1 --m--> Q( ∏_{i=1}^{n+1} Ai )
Then, the m(n) ’s are natural, in the sense that
m(n) ◦ ∏_{i=1}^{n} Qfi ∼ Q( ∏_{i=1}^{n} fi ) ◦ m(n)
The main contraption in the semantics of S4 was a hom-set map that generalised the
notion of co-Kleisli lifting. In our setting, this will be an operation:
(−)∗ : C( ∏_{i=1}^{n} QAi , B ) 99K C( ∏_{i=1}^{n} QAi , QB )
which we define as follows: given f : ∏_{i=1}^{n} QAi → B,
f∗ def= ∏_{i=1}^{n} QAi --( ∏ δAi )--> ∏_{i=1}^{n} Q²Ai --m(n)--> Q( ∏_{i=1}^{n} QAi ) --Qf--> QB
This operation is the categorical counterpart of an admissible rule of S4, which from
□Γ ` A allows one to infer □Γ ` □A. In the weaker setting of K, we only have Scott’s
rule: from Γ ` A infer □Γ ` □A. This is also categorically an operation:
(−)• : C( ∏_{i=1}^{n} Ai , B ) 99K C( ∏_{i=1}^{n} QAi , QB )
which is defined as follows: given f : ∏_{i=1}^{n} Ai → B,
f• def= ∏_{i=1}^{n} QAi --m(n)--> Q( ∏_{i=1}^{n} Ai ) --Qf--> QB
It might, of course, seem that both (−)• and (−)∗ are operations, but what they
have in common is that they both act by applying the exposure Q to their argument.
It follows then that they are functions with respect to intensional equality, and hence
well-defined on the x-ray category of C. We write
(−)∗ : (C, ≈Q )( ∏_{i=1}^{n} QAi , B ) → C( ∏_{i=1}^{n} QAi , QB )
(−)• : (C, ≈Q )( ∏_{i=1}^{n} Ai , B ) → C( ∏_{i=1}^{n} QAi , QB )
However, if (Q, ε, δ) is idempotent, we can show that (−)∗ actually preserves intensional equality. For this, we will need the following proposition.
Proposition 11. If (Q, ε, δ) is idempotent, then for f : ∏_{i=1}^{n} QAi → B,
Qf∗ ∼ δB ◦ Qf
Proof.
Qf ∗
∼ { definition, Q exposure }
!
n
Y
Q2 f ◦ Qm(n) ◦ Q
δA i
i=1
∼ { Proposition 2 }
−−−−−−−−→
Q2 f ◦ Qm(n) ◦ m(n) ◦ hQδAi ◦ QπAi i
∼ { Q idempotent, so QδAi ∼ δQAi }
−−−−−−−→
Q2 f ◦ Qm(n) ◦ m(n) ◦ hδQAi ◦ Qπi i
∼ { product equation, δ monoidal }
−−→
Q2 f ◦ δ ◦ m(n) ◦ hQπi i
−−→
∼ { δ natural, inverse of m(n) is hQπi i }
δ ◦ Qf
This allows us to infer that
Corollary 5. If Q is idempotent then (−)∗ preserves ≈Q : it is a map
(−)∗ : (C, ≈Q )( ∏_{i=1}^{n} QAi , B ) → (C, ≈Q )( ∏_{i=1}^{n} QAi , QB )
Proof. If f ≈Q g, then Qf ∗ ∼ δ ◦ Qf ∼ δ ◦ Qg ∼ Qg ∗ .
7.2
Distribution and naturality laws
We now look at the relationship between the two operations (−)• and (−)∗ , and we
state and prove certain ‘distribution’ laws that apply to them, and describe their
interaction. First, we note that it is definitionally the case that
f∗ def= f• ◦ ∏_{i=1}^{n} δAi
for f : ∏_{i=1}^{n} QAi → B.
Proposition 12. Given a comonadic exposure (Q, ε, δ), the following equations hold
up to ∼. Furthermore, if Q is idempotent, they hold up to ≈Q .
(i) id∗QA ∼ δQA
(ii) ε∗A ∼ idQA
(iii) For k : ∏_{i=1}^{n} QAi → B and l : QB → C,
(l ◦ k∗)∗ ∼ l∗ ◦ k∗
(iv) For k : ∏_{i=1}^{n} Ai → B and l : QB → C,
(l ◦ k•)∗ ∼ l∗ ◦ k•
(v) Let f : ∏_{i=1}^{n} Bi → C and gi : ∏_{j=1}^{k} QAj → Bi for i = 1, . . . , n. Then
(f ◦ hg1 , . . . , gn i)∗ ∼ f• ◦ hg1∗ , . . . , gn∗ i
(vi) For f : ∏_{j∈J} QAj → B and a tuple of projections hπj ij∈J : ∏_{i=1}^{n} QAi → ∏_{j∈J} QAj ,
for J a list with elements from {1, . . . , n},
(f ◦ hπj ij∈J)∗ ∼ f∗ ◦ hπj ij∈J
(vii) If (Q, ε, δ) is idempotent then for f : ∏_{i=1}^{n} QAi → B and k : ∏_{j=1}^{m} QDj → ∏_{i=1}^{n} QAi we have
(f ◦ k)∗ ≈Q f∗ ◦ k
and hence (f ◦ k)∗ ∼ f∗ ◦ k.
Proof. Straightforward calculations involving the comonadic equations. (i) and (ii)
are standard from the theory of comonads and functional programming. In the case
of idempotence we can use them alongside Proposition 11 to prove the intensional
equation, e.g.
Q(id∗A ) ∼ δQA ◦ QidA ∼ δQA ∼ QδA
and hence id∗QA ≈Q δQA . For (ii) use δA ◦ QA ∼ idQ2 A . (iii) and (iv) are easy
calculations; e.g. for (iv):
(l ◦ k • )∗
∼ { definitions }
2
Ql ◦ Q k ◦ Qm
(n)
◦m
(n)
◦
n
Y
δA i
i=1
∼ { δ monoidal }
Ql ◦ Q2 k ◦ δ ◦ m(n)
∼ { δ natural }
Ql ◦ δ ◦ Qk ◦ m(n)
which by definition is l∗ ◦ k • . Given idempotence it is simple to use Proposition 11 to
prove (iii) and (iv) up to ≈Q , without even using the non-idempotent result; e.g. for
(iv):
Q (l ◦ k • )∗ ∼ δC ◦ Ql ◦ Qk • ∼ Ql∗ ◦ Qk •
(v) is a simple but lengthy calculation. With idempotence we have
∗
−
Q (f ◦ h→
πj i)
∼ { Proposition 11 }
−
δ ◦ Qf ◦ Qh→
g i
i
∼ { δ natural, Proposition 2 }
−→
Q2 f ◦ δ ◦ m(n) ◦ hQgi i
∼ { δ monoidal }
2
(n)
Q f ◦ Qm
(n)
◦m
◦
n
Y
−→
δ ◦ hQgi i
i=1
∼ { product equation, Proposition 11 }
−−−→
Q2 f ◦ Qm(n) ◦ m(n) ◦ hQ(gi∗ )i
∼ { Proposition 2 }
−−→
Q2 f ◦ Qm(n) ◦ Qh(gi∗ )i
→
−
−
and hence (f ◦ h→
gi )∗ ≈Q f • ◦ h gi∗ i. (vi) is a corollary of (v), once we notice that
Q
def
πi∗ ∼ δAi ◦ πi , and use the definition of f ∗ = f • ◦ δ. For the idempotent case, notice
that πi∗ ≈Q δAi ◦ πi , or derive it as a corollary of (vii).
(vii) is an easy calculation:
Q(f ◦ k)∗ ∼ δ ◦ Qf ◦ Qk ∼ Qf ∗ ◦ Qk
which yields (f ◦ k)∗ ≈Q f ∗ ◦ k, and hence (f ◦ k)∗ ∼ f ∗ ◦ k. It is worth noting that we
know of no direct calculation that proves (vii) up to ∼ without going through ≈Q .
In other news, (−)∗ interacts predictably with δ and .
Proposition 13.
(i) Let f : ∏_{i=1}^{n} QAi → B. Then δB ◦ f∗ ∼ (f∗)∗ . Furthermore, if Q is idempotent
then this equation holds intensionally.
(ii) Let f : ∏_{i=1}^{n} QAi → B. Then εB ◦ f∗ ∼ f . Furthermore, if Q is idempotent
then this equation holds intensionally.
Proof.
1. Let E def= ∏_{i=1}^{n} QAi . Then
δB ◦ f ∗
∼ { definition }
(n)
δB ◦ Qf ◦ m
◦
n
Y
δA i
i=1
∼ { δ natural }
Q2 f ◦ δE ◦ m(n) ◦
n
Y
δA i
i=1
∼ { δ monoidal }
2
Q f ◦ Qm
(n)
(n)
◦m
◦
n
Y
i=1
δQAi ◦
n
Y
δAi
i=1
∼ { product is functorial, comonadic equation }
n
n
Y
Y
2
(n)
(n)
Q f ◦ Qm ◦ m ◦
QδAi ◦
δA i
i=1
i=1
∼ { Q product-preserving }
!
n
n
Y
Y
Q2 f ◦ Qm(n) ◦ Q
δAi ◦ m(n) ◦
δA i
i=1
∼ { definitions }
(f ∗ )∗
i=1
If Q is idempotent, we can also compute that
Qδ ◦ Qf ∗
∼ { Proposition 11 }
Qδ ◦ δ ◦ Qf
∼ { comonadic equation }
δ ◦ δ ◦ Qf
∼ { Proposition 11 }
δ ◦ Qf ∗
∼ { Proposition 11 }
Q ((f ∗ )∗ )
and hence δ ◦ f ∗ ≈Q (f ∗ )∗ .
2. Straightforward calculation involving—amongst other things—the naturality
and monoidality of ε. If Q is idempotent, then we calculate that
QεB ◦ Qf∗
∼ { Proposition 11 }
QεB ◦ δB ◦ Qf
∼ { comonadic equation }
Qf
and hence εB ◦ f∗ ≈Q f .
To conclude this section, we note that a special case of Proposition 12(vii) generalises
the Quotation-Evaluation lemma (Lemma 17).
Corollary 6. If (Q, ε, δ) is idempotent then for k : ∏_{i=1}^{m} QBi → QA we have
(εA ◦ k)∗ ≈Q k
and hence (εA ◦ k)∗ ∼ k.
Proof. By Proposition 12 (vii) & (ii), (εA ◦ k)∗ ≈Q ε∗A ◦ k ≈Q k.
7.3
Fixed Points with Parameters
In §6 we discussed two types of fixed points, EFPs and IFPs. The first type concerned
arrows f : A → A, whereas the second pertained to arrows f : QA → A. However, if
we are to formulate a categorical semantics of PCF and iPCF, we are going to need a
slightly more general notion of each fixed point, namely a fixed point with parameters.
The parameters correspond to the context of free variables in the presence of which
we are taking the relevant fixed point.
In the case of extensional fixed points, the context appears as a cartesian product
in the domain of the morphism, which is of type t : B × Y → Y . B is usually of the
Q
form ni=1 Bi . Indeed, this is what happens in the categorical semantics of PCF, for
which see Hyland and Ong [2000], Poigné [1992], or Longley [1995]. It is not at all
difficult to generalise Lawvere’s theorem to produce this kind of fixed point.
We then move on to intensional fixed points. The situation in this case is slightly
more nuanced, for—as we saw in §6.2—we essentially need to model our fixed points
after Löb’s rule, viz.
□A → A
A
Adapting this rule to a parametric version is not a trivial task, as it is almost equivalent to developing proof theory for it. However, we can look to our previous work on
the proof theory of GL [Kavvos, 2017b,c] to find a good pattern. We briefly discussed
that work in §6.2. The right form of the generalised rule is
□Γ, Γ ` □A → A
□Γ ` □A
However, since iPCF is based on an S4-like setting, the 4 axiom is available. Thus, it
suffices to consider a rule of the following shape:
□Γ ` □A → A
□Γ ` □A
Indeed, this is very close to the rule we used for iPCF in §3: the only remaining step is
to weaken the fixed point by dropping the box in the conclusion, thereby mimicking
the form of Löb’s rule more commonly found in the literature. Therefore, IFPs with
parameters will pertain to arrows of type
f : ∏_{i=1}^{n} QBi × QA → A
and yield
f† : ∏_{i=1}^{n} QBi → A
We thus arrive at a kind of context for IFPs that consists of a handful of intensional
assumptions QBi . We elaborate on that in §7.3.2.
But, before that, let us recall the extensional case.
7.3.1
Extensional Fixed Points with Parameters
Suppose we have a cartesian 1-category C. An arrow
f :B×A→A
can be considered as a sort of endomorphism of A that is ‘parameterised’ by B.
Type-theoretically, we will consider B to be the context, and we can take the EFP
‘at A.’
Definition 48. Let C be a cartesian 1-category. A parametric extensional fixed point
(parametric EFP) of f : B × A → A is an arrow f† : B → A such that
f† = f ◦ hidB , f† i
Definition 49 (Extensional Fixed Points with Parameters). A cartesian category C
has parametric extensional fixed points (parametric EFPs) at A ∈ C just if for all
B ∈ C there exists a map
(−)†B : C(B × A, A) −→ C(B, A)
such that for each f : B × A → A, the morphism f † : B → A is an EFP of f .
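For intuition—and strictly as an illustration outside the categorical development—the operation (−)† can be mimicked in Python by reading B as a type of parameters and A as a result type; strict evaluation forces us to pass the recursive occurrence as a thunk, i.e. to read the second argument as 1 → A rather than A.

def parametric_fix(f):
    # From f : B x A -> A (with the A-argument thunked), build f_dagger : B -> A
    # satisfying f_dagger(b) = f(b, f_dagger(b)), as in Definition 48.
    def f_dagger(b):
        return f(b, lambda: f_dagger(b))
    return f_dagger

# Example: exponentiation, with the base as the parameter object B.
power = parametric_fix(
    lambda base, rec: (lambda n: 1 if n == 0 else base * rec()(n - 1))
)
assert power(2)(10) == 1024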
This, of course, is an ‘external view’ of what it means to have parametric EFPs in
a category. However, in typed λ-calculi we often include a fixed point combinator
YA : (A → A) → A in our calculus, which is rather more ‘internal.’ This fixed point
combinator can be either weak or strong, and always occurs in the context of a CCC.
Definition 50 (Fixed Point Combinators). Let C be a cartesian closed category.
1. A strong fixed point combinator at A is an arrow Y : A^A → A such that
Y = ev ◦ hid, Y i
2. A weak fixed point combinator at A is an arrow Y : A^A → A such that, for each
f : B × A → A, the arrow
Y ◦ λ(f ) : B → A
is an EFP of f .
Surprisingly, we have the following theorem.
Theorem 57. In a cartesian closed category C, the following are equivalent:
1. A strong fixed point combinator at A.
2. A weak fixed point combinator at A.
3. Extensional fixed points at A.
Proof. We will prove a circular chain of implications.
Case(1 ⇒ 2). If Y : AA → A is a strong FPC, then
Y ◦ λ(f ) = ev ◦ hid, Y i ◦ λ(f ) = ev ◦ hλ(f ), Y ◦ λ(f )i = f ◦ hid, Y ◦ λ(f )i
so Y ◦ λ(f ) is a fixed point of f .
Case(2 ⇒ 3). Trivial: define (−)† by λ-abstraction and composition with Y .
Case(3 ⇒ 1). A strong FPC Y at A is a fixed point of ev : A^A × A → A.
Hence, if we let
Y def= ev† : A^A → A
then Y = ev ◦ hid, Y i.
Finally, we note the following naturality property, which we learned from Simpson
and Plotkin [2000].
Lemma 25. EFPs induced by a (strong or weak) fixed point combinator satisfy the
following equation: for any f : B × A → A and g : C → B,
(f ◦ (g × idA ))† = f † ◦ g
Proof. We have
(f ◦ (g × idA ))† = Y ◦ λ (f ◦ (g × idA )) = Y ◦ λ(f ) ◦ g = f † ◦ g
by naturality of the λ(−) operation.
It is easy to extend the cartesian version of Lawvere’s theorem to one that also
yields fixed points with parameters: we redefines weak point-surjections to include a
parameter.
Definition 51 (Parametric weak point-surjection). An arrow r : X × A → Y is a
parametric weak point-surjection if for every f : B ×A → Y there exists a xf : B → X
such that
∀a : B → A. r ◦ hxf , ai = f ◦ hidB , ai
The only change in the main theorem is that the fixed points of an arrow of type
B × Y → Y now have type B → Y instead of being points of Y .
Theorem 58 (Parametric Recursion). If r : A × A → Y is a parametric weak-point
surjection, then every arrow
t:B×Y →Y
has a EFP.
Proof. Let
f def= t ◦ (idB × r) ◦ (idB × hidA , idA i) : B × A → B × (A × A) → B × Y → Y
Then there exists a xf : B → A such that
r ◦ hxf , ai = f ◦ hidB , ai
for all a : B → A. Then
r ◦ hxf , xf i
= { definition of parametric weak point-surjection }
t ◦ (idB × r) ◦ (idB × hidA , idA i) ◦ hidB , xf i
= { various product equations }
t ◦ hidB , r ◦ hxf , xf ii
so that r ◦ hxf , xf i is a EFP of t.
7.3.2
Intensional Fixed Points with Parameters
Without further ado, we generalise IFPs to include parameters.
Definition 52. Let (Q, ε, δ) be a product-preserving comonadic exposure. A parametric intensional fixed point (parametric IFP) of f : ∏_{i=1}^{n} QBi × QA → A (w.r.t.
Q) is an arrow
f† : ∏_{i=1}^{n} QBi → A
such that the following holds up to ∼:
f† ∼ f ◦ hid, (f†)∗ i
If f : ∏_{i=1}^{n} QBi × QA → A, then f† : ∏_{i=1}^{n} QBi → A, so (f†)∗ : ∏_{i=1}^{n} QBi → QA, so
this equation has the right types.
This definition is a generalisation analogous to the one for EFPs. The context is
intensional (∏_{i=1}^{n} QBi) instead of extensional (B = ∏_{i=1}^{n} Bi), and f† appears under
the co-Kleisli lifting, so essentially under an occurrence of Q. Notice that we have used
both product-preservation and the quoter δ : Q # Q² for this definition; this probably
corresponds to the fact that the 4 axiom (□A → □□A) is a theorem of provability
logic: see [Boolos, 1994, Kavvos, 2017b,c]. Since we are in a categorical logic setting,
an appropriate gadget standing for the 4 axiom must be given as a primitive, alongside
appropriate coherence conditions (e.g. the first comonadic equation).
Definition 53 (Intensional Fixed Points with Parameters). Let (Q, ε, δ) be a product-preserving comonadic exposure on B. We say that B has parametric intensional fixed
points at A (parametric IFPs) just if for any objects Bi there exists an operation
(−)† : B( ∏_{i=1}^{n} QBi × QA, A ) 99K B( ∏_{i=1}^{n} QBi , A )
such that for each arrow f : ∏_{i=1}^{n} QBi × QA → A, the arrow f† : ∏_{i=1}^{n} QBi → A is
an intensional fixed point of f .
We often write f† for succinctness, without specifying the context B1 , . . . , Bn .
In analogy to EFPs, there is a similar ‘internal’ view of this definition, related
to the Gödel-Löb axiom. However, in contrast to what we had before, it comes with
certain caveats. But first, some more definitions:
Definition 54 (Intensional Fixed Point Combinators). Let there be a cartesian closed
P-category B, along with a product-preserving comonadic exposure (Q, ε, δ) on it.
1. A strong intensional fixed point combinator at A (w.r.t. Q) is an arrow
Y : Q(A^QA) → QA
such that the following holds up to ∼:
εA ◦ Y ∼ ev ◦ hε, Y i
2. A weak intensional fixed point combinator (w.r.t. Q) is an arrow
Y : Q(A^QA) → QA
such that, for each f : ∏_{i=1}^{n} QBi × QA → A, the arrow
∏_{i=1}^{n} QBi --(λ(f))∗--> Q(A^QA) --Y--> QA --εA--> A
is an IFP of f .
Theorem 59. Let B be a cartesian closed P-category, along with a product-preserving
comonadic exposure (Q, ε, δ) on it.
1. If (Q, ε, δ) is idempotent then a strong fixed point combinator is also a weak fixed
point combinator.
2. A weak fixed point combinator Y : Q(A^QA) → QA implies the existence of
intensional fixed points at A.
3. The existence of intensional fixed points at A implies the existence of a strong
intensional fixed point combinator at A.
Proof.
1. Let Y : Q(A^QA) → QA be a strong FPC. Then, given f : ∏_{i=1}^{n} QBi × QA → A,
we may calculate that
ε ◦ Y ◦ (λf)∗
∼ { definition of strong FPC }
ev ◦ hε, Y i ◦ (λf)∗
∼ { naturality of product, Proposition 13 (ii) }
ev ◦ hλf, Y ◦ (λf)∗ i
∼ { cartesian closure }
f ◦ hid, Y ◦ (λf)∗ i
But as Q is idempotent and Y ◦ (λf)∗ : ∏_{i=1}^{n} QBi → QA, we can use quotation-
evaluation (Corollary 6) to conclude that (ε ◦ Y ◦ (λf)∗)∗ ≈Q Y ◦ (λf)∗ , and
hence that we have an IFP.
2. Trivial.
3. Let
g def= ev ◦ (ε × id) : Q(A^QA) × QA → A^QA × QA → A
Then we can show that (g†)∗ : Q(A^QA) → QA is a strong FPC. It is easy to
calculate that g† ∼ ev ◦ hε, (g†)∗ i and hence that
ε ◦ (g†)∗ ∼ g† ∼ ev ◦ hε, (g†)∗ i
by using Proposition 13 (ii).
It is interesting to examine if and when the map (−)† might turn ≈Q to ∼, or
even preserve it. In fact, it is easy to see that if λ(−) preserves ≈Q , then we can build
a (−)† that turns it into ∼: a weak FPC suffices. If Q is idempotent then (−)† also
preserves ≈Q .
Lemma 26. If Y : Q(A^QA) → QA is a weak FPC, and the map
λC (−) : C(C × A, B) → C(C, B^A)
preserves the intensional equality ≈Q , i.e. is a map
λC (−) : (C, ≈Q )(C × A, B) → (C, ≈Q )(C, B^A)
then the IFPs induced by f† def= εA ◦ Y ◦ (λ(f))∗ turn ≈Q into ∼, i.e. (−)† is a map
(−)† : (B, ≈Q )( ∏_{i=1}^{n} QBi × QA, A ) → B( ∏_{i=1}^{n} QBi , A )
Moreover, if Q is idempotent, then (−)† preserves ≈Q , and hence is a map
(−)† : (B, ≈Q )( ∏_{i=1}^{n} QBi × QA, A ) → (B, ≈Q )( ∏_{i=1}^{n} QBi , A )
Proof. Since λ(−) preserves ≈Q (by assumption), and (−)∗ turns that into ∼ (§7.1),
the result follows. In the case of idempotence observe that (−)∗ also preserves ≈Q
(Corollary 5), and so does post-composition with ε ◦ Y .
A similar argument will yield that, if λ(−) is natural up to ≈Q , then so are the fixed
points in a certain sense.
Lemma 27. If (Q, ε, δ) is idempotent, Y : Q(A^QA) → QA is a weak FPC, and
λC (−) : C(C × A, B) → C(C, B^A)
is natural up to ≈Q , i.e.
λ(f ◦ (g × id)) ≈Q λ(f) ◦ g
then the IFPs induced by f† def= εA ◦ Y ◦ (λ(f))∗ are also natural, i.e. for f : ∏_{i=1}^{n} QBi ×
QA → A and k : ∏_{j=1}^{k} QCj → ∏_{i=1}^{n} QBi , we have
(f ◦ (k × id))† ≈Q f† ◦ k
Proof. Use naturality for λ(−) up to ≈Q , Corollary 5 for the preservation of ≈Q by
(−)∗ , and finally Proposition 12 (vii) to show that (λ(f) ◦ k)∗ ≈Q (λ(f))∗ ◦ k.
There is also a weaker version of this lemma that does not require idempotence: it
states that if f : ∏_{i=1}^{n} QBi × QA → A and gi : Ci → Bi , then (f ◦ (∏_{i=1}^{n} Qgi × id))† ∼
f† ◦ ∏_{i=1}^{n} Qgi . However, we do not find any use for it in the sequel.
We can summarise the above results by saying that if we have
• an idempotent comonadic exposure (Q, ε, δ);
• any of our three flavours of IFPs (strong, weak, (−)† );
• a λ(−) that preserves ≈Q and is natural up to it
then we obtain IFPs at A, which preserve ≈Q and are natural up to ≈Q . While this
is nice and useful, we will see in §8.6 that in the most intensional of P-categories it
will emphatically not be the case that λ(−) preserves ≈Q .
7.4
A Parametric Intensional Lawvere Theorem
In the final section of this chapter we shall generalise the Intensional Recursion Theorem (Theorem 54) to a version that admits parameters. The theorem will now
guarantee the existence of parametric IFPs.
Accordingly, we will have to also change the antecedent: neither weak point-surjections (Def. 47) nor parametric weak point-surjections (Def. 51) are adequate
anymore. We shall therefore introduce a variant, which—due to lack of imagination on
the part of the author—we shall call an intensional parametric weak point-surjection.
The only essential difference is the restriction of the arbitrary context B to an inten-
sional one, viz. of the form ∏_{i=1}^{n} QBi .
Definition 55 (IPWPS). An arrow r : X × A → Y is an intensional parametric
weak point-surjection (IPWPS) if, for every f : ∏_{i=1}^{n} QBi × A → Y , there exists a
xf : ∏_{i=1}^{n} QBi → X such that
∀a : ∏_{i=1}^{n} QBi → A. r ◦ hxf , ai = f ◦ hid, ai
Armed with this, we can now prove a theorem analogous to the Parametric Recursion
Theorem (Theorem 58), but yielding IFPs instead. Recall that in the Intensional
Recursion Theorem (Theorem 54) we only needed a particular component of δ to be
‘reasonable,’ but in this case we will require full idempotence.
Theorem 60 (Parametric Intensional Recursion). Let (Q, ε, δ) be a product-preserving
idempotent comonadic exposure. If r : QA × QA → Y is an IPWPS, then every arrow
t : ∏_{i=1}^{n} QBi × QY → Y
has an intensional fixed point.
Proof. Let
f def= t ◦ (id × r∗) ◦ (id × hid, idi) : ∏_{i=1}^{n} QBi × QA → Y
Then, there exists a xf : ∏_{i=1}^{n} QBi → QA such that
r ◦ hxf , ai ∼ f ◦ hid, ai
for all a : ∏_{i=1}^{n} QBi → QA. We compute that
r ◦ hxf , xf i
∼
{ definition of IPWPS }
t ◦ (id × r∗ ) ◦ (id × hid, idi) ◦ hid, xf i
≈Q { product equation, naturality of brackets }
t ◦ hid, r∗ ◦ hxf , xf ii
≈Q { idempotence: Prop. 12(vii) }
t ◦ hid, (r ◦ hxf , xf i)∗ i
so that r ◦ hxf , xf i is an IFP of t.
Notice that we are allowed to use Proposition 12(vii) only because hxf , xf i : ∏_{i=1}^{n} QBi →
QA × QA is of the right type, i.e. there are occurrences of Q ‘guarding’ all the types.
Naturality
As we saw in §7.3.2, the main reason for introducing parametric IFPs is to have a con-
text ∏_{i=1}^{n} QBi . In turn, a context is useful because one can substitute for any objects
in it, as they represent free variables. In this light, naturality is a key property: it
states that substitution (composition) commutes with the (type-theoretic/categorical)
construct in question; we will discuss that more in §8.
We showed in Lemmata 26 and 27 that, in the idempotent setting, if IFPs are
induced by a weak FPC, and λ(−) preserves ≈Q , then so does (−)† , and moreover it is
natural. But what about the IFPs produced by the Parametric Intensional Recursion
Theorem?
An IPWPS r : QA × QA → Y induces a map
f : ∏_{i=1}^{n} QBi × QA → Y 7−→ xf : ∏_{i=1}^{n} QBi → QA
and then we can define (−)† : B( ∏_{i=1}^{n} QBi × QY, Y ) 99K B( ∏_{i=1}^{n} QBi , Y ) to be
t 7−→ ft 7−→ xft 7−→ r ◦ hxft , xft i
where ft def= t ◦ (id × r∗) ◦ (id × hid, idi) : ∏_{i=1}^{n} QBi × QA → Y .
The final map xft 7→ r ◦ hxft , xft i is clearly natural and preserves both ∼ and ≈Q ,
as h·, ·i does. But what about the rest? Suppose that we pre-compose t with g × id,
where g : ∏_{j=1}^{m} QDj → ∏_{i=1}^{n} QBi . The map t 7→ ft clearly preserves both ∼ and ≈Q ,
and moreover it is natural, in the sense that
ft◦(g×id) ≈Q ft ◦ (g × id)
because of the interchange law for cartesian products, which holds up to ≈Q .
This leaves only the difficult case of f 7→ xf . For naturality, we would need1
xft◦(g×id) ≈Q xft ◦ g
which easily follows if f 7→ xf preserves ≈Q , and2
xh◦(g×id) ≈Q xh ◦ g
In that case xft◦(g×id) ≈Q xft ◦(g×id) ≈Q xft ◦ g, and hence
(t ◦ (g × id))† ≈Q r◦hxft◦(g×id) , xft◦(g×id) i ≈Q r◦hxft ◦g, xft ◦gi ≈Q r◦hxft , xft i◦g ≈Q t† ◦g
Hence, to obtain naturality we would need that the IPWPS be ‘natural,’ and that
f 7→ xf turn ≈Q to ∼.
Corollary 7. If the IPWPS r : QA × QA → Y of Theorem 60 is such that f 7→ xf
preserves ≈Q (or, equivalently, turns ≈Q into ∼), and is ‘natural’ insofar as
xf ◦(g×id) ≈Q xf ◦ g
(equivalently, with ∼) then the induced (−)† is a map
(−)† : (B, ≈Q )( ∏_{i=1}^{n} QBi × QY, Y ) → (B, ≈Q )( ∏_{i=1}^{n} QBi , Y )
which is natural up to ≈Q , i.e.
(f ◦ (g × id))† ≈Q f † ◦ g
1 Note that because of idempotence and the type of xf , ∼ suffices.
2 Ditto.
Chapter 8
Intensional Semantics of iPCF II
In this final chapter, we shall use all our results up to this point to attempt to
produce an intensional semantics for iPCF. The intended—but, as it transpires,
unachievable—result is that the category of assemblies Asm(K1 ) is a model of iPCF.
We have shown in previous chapters that a product-preserving comonadic exposure
satisfies most of the standard equations of a product-preserving comonad. It should
therefore be the case that, given such an exposure, one can use it to almost directly
interpret the Davies-Pfenning fragment of iPCF. However, this is not so straightforward. The main ingredient of our soundness result is a substitution lemma, which
relates the interpretation with substitution. Since we are allowed to substitute terms
under box (−) constructs—which we model by (−)∗ —we need the substitution lemma
to hold up to ≈Q at that location. Furthermore, since we may have multiple nested
occurrences of boxes, we will need that (−)∗ preserve ≈Q , which is the case if the
comonadic exposure is idempotent (Cor. 5). Thus idempotence is essential.
Once this is decided, we run into a second issue. Suppose that there is a λ in a
term, which we model by the function λ(−) of a cartesian closed category. Since this
λ might occur in the scope of a box (−), and a (modal) variable for which we want to
substitute might occur under that λ, the above soundness result requires that λ(−) be
natural up to ≈Q . Whereas in the case of Asm(K1 ), the exposure is idempotent, we
have no guarantees at all about the naturality of λ(−)—see §5.2.5. We will therefore
need to forbid λ-abstractions with free variables under boxes.
The above suffices to interpret the Davies-Pfenning fragment, in an intensional
sense. The remaining piece of the puzzle pertains to the construction of IFPs. This is
easy for booleans and naturals, but at higher types we are only able to use an inductive
construction that only builds IFPs for a certain kind of intensional exponential ideal :
if X is any object and Y is in the ideal, then Y^QX is in the ideal as well. We also
face certain difficulties in showing that the IFPs are natural, as per §7.4, and lack of
naturality entails lack of substitution. It follows that we must constrain the taking of
IFPs to closed terms, and only at certain types.
We are thus led to reformulate iPCF, or—more specifically—to admit only a
subset of it. To reach this subset, we first layer it, so as to separate the intensional
layer —where everything will hold up to ≈Q —and the extensional layer, where things
will hold up to ∼. Since the modality already implicitly enables this kind of layering,
this will be a minor change. Nevertheless, it decisively quells the problem, as we will
never need to substitute in a λ-abstraction under the box, so the desired lemma holds
up to ≈Q at that location. In the case of IFPs, our only option is to alter the fixed
point rule.
The resulting language is called iPCF v2.0. On the one hand, the new language
has Asm(K1 ) as a model; on the other, a lot of expressiveness has been lost. We
discuss what has been lost, and when one can regain it. In particular, we can model
iPCF itself when given a natural iPCF v2.0 model. Furthermore, a weakly extensional
model of iPCF v2.0 is already natural; the terminology comes from PCAs: if A is a
weakly extensional PCA, then Asm(A) is a weakly extensional model, and hence a
model of iPCF; see also §5.2.5.
8.1
iPCF v2.0
We promptly introduce iPCF v2.0. We shall not prove any theorems in this section,
because we have formally proven all of them in Agda: see Appendix A for the proofs.
Each typing judgement of iPCF v2.0 will be annotated by a J , like so:
∆ ; Γ `J M : A
The possible options for J will be “int.” for intensional, and “ext.” for extensional.
The revised system appears in Figure 8.1. The occurrence of a generic J in these
rules is universally quantified. There is little else to say apart from the fact that these
rules enforce the prohibition of free variables under a λ in the intensional judgements.
In programming language terms this could be transliterated as the prohibition of the
creation of closures (= λ-abstraction + environment for free variables). Of course,
a term can then only be placed under a box if it is intensional, but the boxed term
itself can be either intensional or extensional:
∆ ; · `int. M : A
∆ ; Γ `J box M : □A
Figure 8.1: Syntax and Typing Rules for Intensional PCF v2.0

Ground Types   G      ::= Nat | Bool
Types          A, B   ::= G | A → B | □A
Fixable Types  Afix   ::= G | A → Afix
Terms          M, N   ::= x | λx:A. M | M N | box M | let box u ⇐ M in N |
                          n̂ | true | false | succ | pred | zero? | ⊃G | fix z in M
Contexts       Γ, ∆   ::= · | Γ, x : A

∆ ; Γ `J n̂ : Nat        ∆ ; Γ `J b : Bool   (b ∈ {true, false})
∆ ; Γ `J zero? : Nat → Bool        ∆ ; Γ `J f : Nat → Nat   (f ∈ {succ, pred})
∆ ; Γ `J ⊃G : Bool → G → G → G

(var)   ∆ ; Γ, x:A, Γ′ `J x : A        (var)   ∆, u:A, ∆′ ; Γ `J u : A

(→ I)
∆ ; Γ, x:A `ext. M : B
∆ ; Γ `ext. λx:A. M : A → B

(→ Iint.)
· ; x:A `J M : B
∆ ; Γ `int. λx:A. M : A → B

(→ E)
∆ ; Γ `J M : A → B        ∆ ; Γ `J N : A
∆ ; Γ `J M N : B

(□I)
∆ ; · `int. M : A
∆ ; Γ `J box M : □A

(□E)
∆ ; Γ `J M : □A        ∆, u:A ; Γ `J N : C
∆ ; Γ `J let box u ⇐ M in N : C

(fix)
· ; z : □Afix `int. M : Afix
∆ ; Γ `J fix z in M : Afix
We can only λ-abstract in an extensional term, yielding another extensional term:
∆ ; Γ, x:A `ext. M : B
∆ ; Γ `ext. λx:A. M : A → B
But if this term has no other free variables, then the result can be an intensional term.
Of course, we must not forget to include the ‘opportunity’ to weaken the context:
· ; x:A `J M : B
∆ ; Γ `int. λx:A. M : A → B
We shall also change the rule for intensional fixed points, which now reads
· ; z : □Afix `int. M : Afix
∆ ; Γ `J fix z in M = M [box (fix z in M )/z] : Afix
(fix)
So fix z in M recurses, but it can only do so when M is closed to everything else save
the ‘diagonal’ variable z. What is more, we can only invoke this rule for types Afix
generated by the following grammar, where A is any type at all:
Afix ::= Nat | Bool | A → Afix
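Operationally, the rule provides recursion through a quotation of the whole fixed point: the body M receives (code for) fix z in M itself via z. The following Python sketch—our own analogy, reusing the quining trick of the srt sketch in §6.4.2, and in no way a semantics of iPCF v2.0—conveys the idea, with ‘boxed’ values represented by source strings.

def intensional_fix(body):
    # body : (source_of_fixed_point, argument) -> result.  We return a function
    # whose calls hand body the source text of that very function, mirroring
    # fix z in M = M[box (fix z in M)/z].
    template = "lambda x: body(({0!r}).format({0!r}), x)"
    src = template.format(template)
    return eval(src, {"body": body})

def body(me, n):
    # me is the quoted fixed point; re-evaluating it yields the recursive call.
    rec = eval(me, {"body": body})
    return 1 if n == 0 else n * rec(n - 1)

fact = intensional_fix(body)
assert fact(5) == 120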
Theorem 61. The following rules are admissible in iPCF v2.0:
∆ ; Γ `J M : A
∆ ; Γ `ext. M : A

∆ ; Γ `int. M : A
∆ ; Γ `J M : A
The standard admissibility results for iPCF are also valid in iPCF v2.0, but they are
now parametric in J .
Theorem 62 (Structural). The standard structural rules of iPCF (as stated in Theorem 21) are admissible in iPCF v2.0, parametrically up to J .
The situation with the cut rule, however, is slightly more complicated, and this has
to do with the nature of the term being substituted. If we substitute an intensional
term for a variable, the resulting term will still retain its original disposition (int. or
ext.). However, if we substitute an extensional term, we force the resulting term to
be extensional. Similarly, and because of (I), we may only substitute an intensional
term for a modal variable, and that leaves the disposition of the term invariant.
Theorem 63 (Cut for iPCF v2.0). The following rules are admissible in iPCF v2.0.
1. (Cut-Ext)
∆ ; Γ, x:A, Γ0 `J M : A
∆ ; Γ `ext. N : A
∆ ; Γ, Γ0 `ext. M [N/x] : A
2. (Cut-Int)
∆ ; Γ, x:A, Γ0 `J M : A
∆ ; Γ `int. N : A
∆ ; Γ, Γ0 `J M [N/x] : A
3. (Modal Cut)
∆ ; · `int. N : A ∆, u:A, ∆0 ; Γ `J M : C
∆, ∆0 ; Γ `J M [N/u] : C
In light of those theorems we reformulate the equational theory of iPCF (Figure
3.3) for iPCF v2.0. The resulting theory can be found in Figure 8.2. Curiously, we see
that a form of the congruence rule for reappears, even though the considerations of
Davies and Pfenning led us to banish such rules in §3. When proving soundness, we
will see that this rule reflects the fact shown in Corollary 5, viz. that (−)∗ preserves
intensional equality ≈Q .
Expressitivity of iPCF v2.0
It is easy to see that every typing judgment of iPCF v2.0 is also a typing judgment
of iPCF: each rule of v2.0 is a special case of the rule for iPCF. Hence, iPCF v2.0 is
in some sense a proper subset of iPCF.
We can thus conclude that, by moving to v2.0, we have lost some expressivity.
This is centred around three limitations:
1. no free variables under λ-abstractions in the modal/intensional fragment, i.e.
under a box (−);
2. IFPs can only be taken when there is exactly one free variable, the diagonal
variable, and that must be of modal type; and
3. IFPs can only be taken at certain types
The first limitation is, in a way, double-edged. On the one hand, it is reasonable
and familiar from someone coming from the computability theory, especially from the
perspective of Jones [1997]. Indices do not have “free variables”; sometimes they are
meant to have more than one argument, and in those cases we use the s-m-n theorem
to substitute for one of those; this can be simulated here using λ-abstraction. On
the other hand, this limitation invalidates every single one of the original examples of
S4-typed staged metaprogramming of Davies and Pfenning [2001] (power, acker, ip,
etc.—see §7 of that paper): in almost all cases, a box (−) containing a λ-abstraction
with free variables is implicated in the result.
Figure 8.2: Equational Theory for Intensional PCF v2.0
Function Spaces
∆ ; Γ `ext. N : A
∆ ; Γ, x:A, Γ0 `J M : B
∆ ; Γ, Γ0 `ext. (λx:A.M ) N = M [N/x] : B
∆ ; Γ `J M : A → B
x 6∈ fv(M )
∆ ; Γ `ext. M = λx:A.M x : A → B
(→ β)
(→ η)
Modality
∆ ; · `int. M : A
∆, u : A ; Γ `J N : C
(□β)
∆ ; Γ `J let box u ⇐ box M in N = N [M/u] : C
∆ ; Γ `int. M : □A
∆ ; Γ `J let box u ⇐ M in box u = M : □A
(□η)
∆ ; · `int. M = N : A
∆ ; Γ `J box M = box N : □A
(□cong)
· ; z : □A `int. M : A
∆ ; Γ `J fix z in M = M [box (fix z in M )/z] : A
(fix)
∆ ; Γ `J M = N : □A        ∆, u:A ; Γ `J P = Q : C
∆ ; Γ `J let box u ⇐ M in P = let box u ⇐ N in Q : C
(let-cong)
Remark. In addition to the above, one should also include (a) rules that ensure
that equality is an equivalence relation, (b) congruence rules for λ-abstraction and
application, and (c) rules corresponding to the behaviour of constants, as e.g. in
Figure 3.2.
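As a small worked instance of these rules (our own example): taking N ≡ box u in (□β), so that ∆, u:A ; Γ `J box u : □A, we obtain

      ∆ ; Γ `J let box u ⇐ box M in box u  =  (box u)[M/u]  ≡  box M : □A

so the (□η) equation holds automatically at scrutinees of the form box M ; the rule (□η) is only needed for terms of type □A that are not syntactically boxes.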
The second limitation seems reasonably innocuous, but it is actually more severe
than it looks. As an example, it renders the intensional fixed-point combinator —
whose type is the Gödel-Löb axiom, see §3.4—untypable:
      YA  =def  λx : □(□A → A). let box f ⇐ x in box (fix z in f z)

      ⊬ YA : □(□A → A) → □A
Notice the characteristic occurrence of f within the fix z in (−) construct. This particular limitation is severe, and increases the distance between iPCF and provability
logic: Löb’s rule does not even imply the Gödel-Löb axiom. This has to do with the
‘loss of naturality’ that occurs when we constrain the fixed point rule: the □ in the antecedent □(□A → A) means that ‘only code variables can be found in the input data,’ and this is too weak an assumption to use Löb’s rule, which requires exactly one free variable, the diagonal one. This could be fixed by redefining box (−) to only enclose completely closed terms, but then we completely obliterate the expressive content of iPCF. If we did that, modal types would not be useful for anything at all, leading us to add more and more combinators that do specific things, e.g. something like app : □(A → B) → □A → □B with a δ-reduction of the form app (box F ) (box M ) → box (F M ). We are not sure that this approach is sustainable.
The relationship of this second limitation with computability theory is more subtle. Using the SRT on a program with a ‘free variable’ made no sense at all, as using
it on a program with ‘two arguments’ was the norm; thus, not much seems to be lost.
However, we remind the reader that ‘constructive’ versions of the SRT are available
in computability theory, e.g. there is a partial recursive function n(−) that, given
an index e, returns n(e) which would be a sort of IFP of e: see §2.1.2. These, one
would expect, are intensional fixed point combinators. The fact that we cannot define an intensional fixed-point combinator with the Gödel-Löb axiom as type means
that this is not possible to do within iPCF v2.0, not unless we redefine □ to mean
completely closed. This is a serious limitation towards the goal of making iPCF v2.0
a typed ‘language of indices’ that works more or less in the style of Jones [1997]. The
only way out of this impasse would be to show that our intended model is natural, so
that we could model iPCF itself. We discuss this further in §8.4.
8.2
Interpreting iPCF v2.0
We have finally made it to the categorical interpretation of iPCF v2.0, which we will define using the algebraic machinery developed in §7. We assume that the reader has some background on the categorical semantics of simply-typed λ-calculus. Useful
expositions include the classics by Lambek and Scott [1988] and Crole [1993], as well
as the detailed presentation of Abramsky and Tzevelekos [2011]. For the details of
the categorical semantics of modal λ-calculi see [Kavvos, 2017b,c].
First, we define the notion of an iPCF v2.0 model.

Definition 56 (iPCF v2.0 model). An iPCF v2.0 model consists of

(i) a cartesian closed P-category (C, ∼, ×, 1);

(ii) a product-preserving idempotent comonadic exposure (Q, ε, δ);

(iii) a choice of objects N and B, suitable for interpreting the constants (see Hyland and Ong [2000]); and

(iv) maps (−)† yielding IFPs at all the objects generated by the grammar I ::= B | N | I^{QZ}, for all Z ∈ C, as per Definition 53.
Given an iPCF v2.0 model we define an object JAK ∈ C for every type A of iPCF, by induction:

      JNatK  =def  N
      JBoolK  =def  B
      JA → BK  =def  JBK^{JAK}
      J□AK  =def  Q JAK

Then, given a well-defined context ∆ ; Γ where ∆ = u1 :B1 , . . . , un :Bn and Γ = x1 :A1 , . . . , xm :Am , we let

      J∆ ; ΓK  =def  QB1 × · · · × QBn × A1 × · · · × Am
where the product is, as ever, left-associating.
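For instance (a worked example of the clauses above, with the types chosen arbitrarily), the context u:Nat ; x:Bool, y:Nat → Bool denotes

      Ju:Nat ; x:Bool, y:Nat → BoolK  =  QN × B × B^{N}

where the single modal variable contributes a Q-ed object and the intuitionistic variables do not.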
We then extend the semantic map J−K to one that associates an arrow
J∆ ; Γ ` M : AK : J∆ ; ΓK → JAK
of the P-category C to each derivation ∆ ; Γ ` M : A. The full definition is given in
Figure 8.3. The map
      π^{∆;Γ}_∆ : J∆ ; ΓK → J∆ ; ·K
Figure 8.3: Categorical Semantics for Intensional PCF v2.0

J∆ ; Γ, x:A, Γ′ `J x : AK  =def  π : J∆ ; Γ, x:A, Γ′K −→ JAK

J∆, u:A, ∆′ ; Γ `J u : AK  =def  ε_A ◦ π : J∆, u:A, ∆′ ; ΓK −→ QJAK −→ JAK

J∆ ; Γ `ext. λx:A.M : A → BK  =def  λ(J∆ ; Γ, x : A `ext. M : BK) : J∆ ; ΓK −→ JBK^{JAK}

J∆ ; Γ `int. λx:A.M : A → BK  =def  λ(J· ; x : A `int. M : BK ◦ π_2^{1,JAK}) ◦ ! : J∆ ; ΓK −→ JBK^{JAK}

J∆ ; Γ `J M N : BK  =def  ev ◦ ⟨J∆ ; Γ `J M : A → BK , J∆ ; Γ `J N : AK⟩

J∆ ; Γ `J let box u ⇐ M in N : CK  =def  J∆, u:A ; Γ `J N : CK ◦ ⟨ ~π_∆ , J∆ ; Γ `J M : □AK , ~π_Γ ⟩

Definitions for modal rules

J∆ ; Γ `J box M : □AK  =def  J∆ ; · `int. M : AK∗ ◦ π^{∆;Γ}_∆

J∆ ; Γ `J fix z in M : AK  =def  J· ; z : □A `int. M : AK† ◦ !
is the obvious projection. Moreover, the notation ⟨ ~π_∆ , f, ~π_Γ ⟩ stands for

      ⟨ ~π_∆ , f, ~π_Γ ⟩  =def  ⟨π1 , . . . , πn , f, πn+1 , . . . , πn+m ⟩
The first thing we need to observe is that there is no difference in the interpretation
if the term is intensional or extensional: if a term can be both, it has the same
interpretation.
Lemma 28. If ∆ ; Γ `int. M : A, then
J∆ ; Γ `int. M : AK = J∆ ; Γ `ext. M : AK
where = stands for strict equality.
8.3
Soundness
The main tools in proving soundness of our interpretation are (a) lemmas giving the
categorical interpretation of various admissible rules, and (b) a fundamental lemma
relating substitution of terms to composition in the category. In the sequel we often
use informal vector notation for contexts: for example, we write ~u : ~B for the context u1 : B1 , . . . , un : Bn . We also write [ ~N /~u] for the simultaneous, capture-avoiding substitution [N1 /u1 , . . . , Nn /un ].
First, we interpret weakening and exchange.
Lemma 29 (Semantics of Weakening).

1. Let ∆ ; Γ, x:C, Γ′ `int. M : A with x ∉ fv(M ). Then

      J∆ ; Γ, x:C, Γ′ `int. M : AK ≈Q J∆ ; Γ, Γ′ `int. M : AK ◦ π

   where π : J∆ ; Γ, x:C, Γ′K → J∆ ; Γ, Γ′K is the obvious projection.

2. Let ∆ ; Γ, x:C, Γ′ `ext. M : A with x ∉ fv(M ). Then

      J∆ ; Γ, x:C, Γ′ `ext. M : AK ∼ J∆ ; Γ, Γ′ `ext. M : AK ◦ π

   where π : J∆ ; Γ, x:C, Γ′K → J∆ ; Γ, Γ′K is the obvious projection. If the iPCF model is weakly extensional (see §8.4) then the result holds up to ≈Q .

3. Let ∆, u:B, ∆′ ; Γ `int. M : A with u ∉ fv(M ). Then

      J∆, u:B, ∆′ ; Γ `int. M : AK ≈Q J∆, ∆′ ; Γ `int. M : AK ◦ π

   where π : J∆, u:B, ∆′ ; ΓK → J∆, ∆′ ; ΓK is the obvious projection.

4. Let ∆, u:B, ∆′ ; Γ `ext. M : A with u ∉ fv(M ). Then

      J∆, u:B, ∆′ ; Γ `ext. M : AK ∼ J∆, ∆′ ; Γ `ext. M : AK ◦ π

   where π : J∆, u:B, ∆′ ; ΓK → J∆, ∆′ ; ΓK is the obvious projection. If the iPCF model is weakly extensional (see §8.4) then the result holds up to ≈Q .
Proof. By induction on the derivations. Most cases are straightforward.
The first one holds up to ≈Q because it essentially consists of projecting away
components, which holds intensionally: the fact that the judgement is intensional means
no λ’s are involved.
The second one holds only up to ∼, because of the occurrence of λ(−)’s in the
semantics. If moreover the model is weakly extensional, λ(−) preserves ≈Q (Cor.
5) so we can strengthen the inductive hypothesis to ≈Q and obtain the result up to
intensional equality.
The third one also follows easily. In the case of a box (−), the term within it is intensional, so we use the induction hypothesis and the fact¹ that (−)∗ preserves ≈Q . We then know that (−)∗ is natural for projections (Prop. 12(vi)) up to ≈Q (due to idempotence). There is not much to show in the case of fix z in (−), as no modal variables occur freely under it.

The fourth one is perhaps the most complicated, and only holds up to ∼, again because of the occurrence of λ(−)’s. In the case of box (−), the term within it is intensional, so we use the third result and the fact that (−)∗ preserves ≈Q , followed again by naturality for projections. The case of fix z in (−) is again trivial.

¹ This is where idempotence is essential; otherwise this would hold only up to ∼ and the inductive hypothesis for box (−) would fail.

We can also show that the components of the interpretation interact in the expected way with the corresponding term formation rules in the language. These follow from the analogous lemmata in §7.1, which show that the necessary equations hold intensionally in an iPCF model, i.e. when the exposure Q is idempotent.

Lemma 30 (Double box). If ∆ ; · `J M : A, then

      J∆ ; Γ `J box (box M ) : □□AK ≈Q δ_A ◦ J∆ ; Γ `J box M : □AK

Proof. Let f  =def  J∆ ; · `J M : AK. Then

      J∆ ; Γ `J box (box M ) : □□AK
   ≈Q   { definitions }
      (f ∗ )∗ ◦ π^{∆;Γ}_∆
   ≈Q   { Proposition 13 }
      δ_A ◦ f ∗ ◦ π^{∆;Γ}_∆
   ≈Q   { definitions }
      δ_A ◦ J∆ ; Γ `J box M : □AK
Lemma 31 (Identity Lemma). For (ui : Bi ) ∈ ∆,

      J∆ ; Γ `J box ui : □Bi K ≈Q π^{∆;Γ}_{Bi}

Proof.

      J∆ ; Γ `J box ui : □Bi K
   ≈Q   { definition }
      (ε_{Bi} ◦ π^{∆;·}_{Bi})∗ ◦ π^{∆;Γ}_∆
   ≈Q   { Proposition 12 }
      ε∗_{Bi} ◦ π^{∆;·}_{Bi} ◦ π^{∆;Γ}_∆
   ≈Q   { Proposition 12 (ii), projections }
      π^{∆;Γ}_{Bi}
Lemma 32 (Semantics of Substitution). Suppose that ∆ ; Γ `J Mi : Ai for i = 1, . . . , n, and that ∆ ; · `int. Nj : Bj for j = 1, . . . , m, and let

      βj  =def  J∆ ; Γ `int. box Nj : □Bj K
      αi  =def  J∆ ; Γ `J Mi : Ai K

Then,

1. if ~u : ~B ; ~x : ~A `int. P : C, we have

      J∆ ; Γ `J P [ ~N /~u, ~M /~x] : CK ≈Q J~u : ~B ; ~x : ~A `int. P : CK ◦ ⟨ ~β, ~α ⟩

2. if ~u : ~B ; ~x : ~A `ext. P : C, we have

      J∆ ; Γ `ext. P [ ~N /~u, ~M /~x] : CK ∼ J~u : ~B ; ~x : ~A `ext. P : CK ◦ ⟨ ~β, ~α ⟩
Proof. By induction on the derivation of ~u : ~B ; ~x : ~A `J P : C. Most cases are straightforward, and use a combination of standard equations that hold in cartesian closed categories—see [Crole, 1993, §2]—in order to perform calculations very close to the ones detailed in [Abramsky and Tzevelekos, 2011, §1.6.5]. Because of the precise definitions we have used, we also need to make use of Lemma 29 to interpret weakening whenever variables in the context do not occur freely in the term. We only cover the modal cases.
Case(var). Then P ≡ ui for some ui amongst the ~u. Hence, the LHS is J∆ ; Γ `int. Ni : Bi K, whereas we calculate that the RHS, in either case, is

      J~u : ~B ; ~x : ~A `J P : CK ◦ ⟨ ~β, ~α ⟩
   ≈Q   { definition, projection }
      ε_{Bi} ◦ J∆ ; Γ `int. box Ni : □Bi K
   ≈Q   { definition }
      ε_{Bi} ◦ J∆ ; · `int. Ni : Bi K∗ ◦ π^{∆;Γ}_∆
   ≈Q   { Proposition 13 }
      J∆ ; · `int. Ni : Bi K ◦ π^{∆;Γ}_∆
   ≈Q   { Semantics of Weakening (Lemma 29) }
      J∆ ; Γ `int. Ni : Bi K
Case(□I). We have that ~u : ~B ; ~x : ~A `J box P : □C, so that ~u : ~B ; · `int. P : C, with the result that none of the variables ~x occur in P . Hence P [ ~N /~u, ~M /~x] ≡ P [ ~N /~u], and we calculate, in either case:

      J∆ ; Γ `J box (P [ ~N /~u, ~M /~x]) : □CK
   ≈Q   { definition, and non-occurrence of the ~x in P }
      J∆ ; · `int. P [ ~N /~u] : CK∗ ◦ π^{∆;Γ}_∆
   ≈Q   { IH, (−)∗ preserves ≈Q (Corollary 5) }
      ( J~u : ~B ; · `int. P : CK ◦ ⟨ J∆ ; · `int. box Ni : □Bi K ⟩_{i=1..n} )∗ ◦ π^{∆;Γ}_∆
   ≈Q   { Proposition 12 }
      J~u : ~B ; · `int. P : CK• ◦ ⟨ J∆ ; · `int. box Ni : □Bi K∗ ⟩_{i=1..n} ◦ π^{∆;Γ}_∆
   ≈Q   { naturality of product morphism, definition }
      J~u : ~B ; · `int. P : CK• ◦ ⟨ J∆ ; Γ `int. box (box Ni ) : □□Bi K ⟩_{i=1..n}
   ≈Q   { Double box (Lemma 30), ⟨·, ·⟩ preserves ≈Q }
      J~u : ~B ; · `int. P : CK• ◦ ⟨ δ_{JBi K} ◦ J∆ ; Γ `int. box Ni : □Bi K ⟩_{i=1..n}
   ≈Q   { product after angled brackets }
      J~u : ~B ; · `int. P : CK• ◦ ∏_{i=1}^{n} δ_{JBi K} ◦ ⟨ J∆ ; Γ `int. box Ni : □Bi K ⟩_{i=1..n}
   ≈Q   { some projections and definition of (−)∗ and J−K }
      J~u : ~B ; ~x : ~A `J box P : □CK ◦ ⟨ ~β, ~α ⟩
Case(fix). We have that ~u : ~B ; ~x : ~A `J fix z in P : C, so that ~u : ~B ; z : □C `int. P : C, with the result that none of the variables ~x occur in P . Hence P [ ~N /~u, ~M /~x] ≡ P [ ~N /~u], and we calculate:

      J∆ ; Γ `J fix z in (P [ ~N /~u, ~M /~x]) : CK
   ≈Q   { definitions; only z is free in P }
      J· ; z : □C `int. P : CK† ◦ !
   ≈Q   { definition }
      J· ; · `int. fix z in P : CK ◦ !
   ≈Q   { projections, definitions }
      J~u : ~B ; ~x : ~A `J fix z in P : CK ◦ ⟨ ~β, ~α ⟩
Theorem 64 (Soundness).
1. If ∆ ; Γ `int. M = N : A, then we have that
J∆ ; Γ `int. M : AK ≈Q J∆ ; Γ `int. N : AK
2. If ∆ ; Γ `ext. M = N : A, then we have that
J∆ ; Γ `ext. M : AK ∼ J∆ ; Γ `ext. N : AK
Proof. By induction on the derivation of ∆ ; Γ ` M = N : A. The congruence cases are clear, as is the majority of the ordinary clauses. All of these even hold up to ≈Q , with the exception of (→ β) and (→ η), which only hold up to ∼. Only the modal rules remain, which we prove with direct calculation.

For (□β) in the case of J = int. we calculate:

      J∆ ; Γ `int. let box u ⇐ box M in N : CK
   ≈Q   { definition }
      J∆, u:A ; Γ `int. N : CK ◦ ⟨ ~π_∆ , J∆ ; Γ `int. box M : □AK , ~π_Γ ⟩
   ≈Q   { Lemma 31 }
      J∆, u:A ; Γ `int. N : CK ◦ ⟨ J∆ ; Γ `int. box ui : □Bi K ⟩_{i=1..n} , J∆ ; Γ `int. box M : □AK , ~π_Γ ⟩
   ≈Q   { Lemma 32; J∆ ; Γ `int. xi : Ai K = π^{∆;Γ}_{Ai} }
      J∆ ; Γ `int. N [~u/~u, M/u, ~x/~x] : CK

In the case of J = ext., we use Lemma 28 to write J∆ ; Γ `ext. box M : □AK = J∆ ; Γ `int. box M : □AK, and then the same calculation works up to ∼.

There remains the case of the fixpoint; let

      g  =def  J∆ ; Γ `J fix z in M : AK

Then g = f † ◦ ! , where

      f  =def  J· ; z : □A `int. M : AK

But we can easily calculate that

      f †
   ∼    { definition of fixpoint (Def. 52) }
      f ◦ (f † )∗
   ≈Q   { definitions }
      f ◦ J· ; · `int. box (fix z in M ) : □AK
   ≈Q   { Lemma 32 }
      J· ; · `int. M [box (fix z in M )/z] : AK

and hence g ∼ J∆ ; Γ `J M [box (fix z in M )/z] : AK by weakening (Lemma 29).
8.4
Natural and Weakly Extensional Models
In iPCF v2.0 we effected three restrictions:
• No free variables when taking intensional fixed points (except the diagonal).
• No λ-abstractions with free variables under boxes.
• IFPs only at certain types generated by Afix .
We discussed at the end of §8.1 the effect that these have on the expressivity of
the language, and found that it was far too strong, so we would like to examine when
these can be lifted.
An iPCF v2.0 model must satisfy certain requirements in order for these restrictions to be lifted. The first one can be lifted whenever an iPCF model is natural. The second one can be lifted whenever an iPCF model is weakly extensional. Unfortunately, we
are still at a loss regarding the existence of IFPs at all objects.
8.4.1
Natural iPCF v2.0 models
The first restriction we want to lift is the occurrence of free variables when taking an
intensional fixed point; that is, we want to generalise the (fix) rule to
      ∆ ; z : □Afix `int. M : Afix
      ----------------------------
      ∆ ; Γ `J fix z in M : Afix
We will be able to do this in natural models of iPCF v2.0.
Definition 57. An iPCF v2.0 model is natural just if

1. (−)† preserves ≈Q ; and

2. (−)† is natural up to ≈Q , in the sense that for any f : ∏_{i=1}^{n} QBi × QA → A and k : ∏_{j=1}^{k} QCj → ∏_{i=1}^{n} QBi , it is the case that

      (f ◦ (k × id))† ≈Q f † ◦ k

In this kind of iPCF model, we are free to have parameters in our IFPs, and we can interpret the Löb rule by

      J∆ ; Γ `J fix z in M : AK  =def  J∆ ; z : □A `int. M : AK† ◦ π^{∆;Γ}_∆
The lemmas for weakening (Lem. 29) and substitution (Lem. 32) directly carry
over. Naturality is only used in the appropriate cases for (fix); e.g. here is the case
for substitution:
      J∆ ; Γ `J fix z in (P [ ~N /~u, ~M /~x]) : CK
   ≈Q   { definition, and non-occurrence of the ~x in P }
      J∆ ; z : □C `int. P [ ~N /~u] : CK† ◦ π^{∆;Γ}_∆
   ≈Q   { IH, (−)† preserves ≈Q , definitions }
      ( J~u : ~B ; z : □C `int. P : CK ◦ ⟨ J∆ ; z : □C `int. box Ni : □Bi K ⟩_{i=1..n} , π^{∆;z:□C}_{z:□C} ⟩ )† ◦ π^{∆;Γ}_∆
   ≈Q   { weakening, ⟨·, ·⟩ and (−)† preserve ≈Q }
      ( J~u : ~B ; z : □C `int. P : CK ◦ ⟨ J∆ ; · `int. box Ni : □Bi K ◦ π^{∆;z:□C}_{∆;·} ⟩_{i=1..n} , π^{∆;z:□C}_{z:□C} ⟩ )† ◦ π^{∆;Γ}_∆
   ≈Q   { naturality of products, definition }
      ( J~u : ~B ; z : □C `int. P : CK ◦ ( ⟨ J∆ ; · `int. box Ni : □Bi K ⟩_{i=1..n} × id ) )† ◦ π^{∆;Γ}_∆
   ≈Q   { naturality of (−)† }
      J~u : ~B ; z : □C `int. P : CK† ◦ ⟨ J∆ ; · `int. box Ni : □Bi K ⟩_{i=1..n} ◦ π^{∆;Γ}_∆
   ≈Q   { naturality of products, Lemma 29, projections, definitions }
      J~u : ~B ; ~x : ~A `J fix z in P : CK ◦ ⟨ ~β, ~α ⟩
We can also calculate as usual for the fixed point. Let g  =def  J∆ ; Γ `J fix z in M : AK. Then g = f † ◦ π^{∆;Γ}_∆ , where

      f  =def  J∆ ; z : □A `int. M : AK

But we can easily calculate that

      f †
   ∼    { definition of fixpoint (Def. 52) }
      f ◦ ⟨id, (f † )∗⟩
   ≈Q   { definitions }
      f ◦ ⟨id, J∆ ; · `int. box (fix z in M ) : □AK⟩
   ≈Q   { Lemma 32 }
      J∆ ; · `int. M [box (fix z in M )/z] : AK

and hence g ∼ J∆ ; Γ `J M [box (fix z in M )/z] : AK, by weakening.
8.4.2
Weakly Extensional iPCF v2.0 models, or iPCF models
In some cases, we can even rid ourselves of the distinction between intensional and
extensional judgements, eventually reaching a language very close to the one with
which we began our investigation in §3. The models in which this can occur are
known as weakly extensional.
Definition 58 (iPCF model). An iPCF v2.0 model is weakly extensional —or, more
simply, an iPCF model—just if
1. λ(−) preserves ≈Q , and
2. λ(−) is natural with respect to ≈Q , i.e.
λ(f ◦ (g × id)) ≈Q λ(f ) ◦ g
Before anything else, let us immediately remark that
Lemma 33. A weakly extensional iPCF v2.0 model can be made natural.
Proof. We can use Theorem 59(3) to define a strong intensional fixed point combinator at every object with IFPs given by (−)∗ . Because Q is idempotent, we can then use Theorem 59(1) to yield weak fixed point combinators, and then Theorem 59(2) to induce IFPs anew, by setting

      f †′  =def  ε_A ◦ Y ◦ (λ(f ))∗

As Q is idempotent and λ(−) preserves ≈Q , we have—by Lemmata 26 and 27—that (−)†′ preserves ≈Q , and is natural. Thus, we can replace (−)† by (−)†′ , which results in a weakly extensional and natural iPCF v2.0 model.
We can use a weakly extensional model to interpret iPCF almost as presented in
its original form in §3: we only need to hold back on the IFPs, and limit them to the
intensional exponential ideal Afix . Of course, we call these models weakly extensional
as the category of assemblies Asm(A) constitutes such a model whenever A is a weakly
extensional PCA: see §5.2.5.
It is not a difficult calculational exercise to show that the main lemmata in this
chapter, i.e. weakening (Lemma 29) and substitution (Lemma 32), hold up to ≈Q ,
with no distinction between extensional and intensional judgements. This happens
because the interpretation of all the main language constructs, i.e. λ(−) and (−)∗ ,
preserve ≈Q .
The only exception is the soundness theorem, which only holds up to ∼, and it
does so with good reason. Firstly, we do not expect the equational behaviour of the
constants (naturals, booleans) to be intensional : on account of the language being
partial, we expect both of these objects to have highly intensional structure. But
even if they did not, the cartesian closed equations can realistically be expected to
only hold extensionally (especially η).
However, if asked to name a weakly extensional model of iPCF, we might find ourselves at a loss. The paradigmatic example of a weakly extensional PCA is certainly
Λ/ =β , the closed terms of the untyped λ-calculus quotiented by β-equivalence. The
construction of Asm(Λ/ =β ) and the exposure on it (§5.2) are parametric in A,
so all that remains is to construct IFPs. But even if we were to construct them, we
might think that we have just engaged in an exercise in futility. The elements of A
would be pairs (a, x) where x ∈ kakA is a realizer of a point x ∈ |A|. But x would be
an equivalence class [P ]=β of an untyped λ-term, which would be an object that is
‘too extensional’ for the level at which we have been aiming: IFPs would merely be
ordinary fixed points of λ-terms!
That leaves us with three choices:
(i) Seek the Holy Grail PCA HG that is weakly extensional, yet sufficiently intensional for IFPs w.r.t. □ in Asm(HG) to be of interest: this seems rather difficult, perhaps to the point of being a contradiction in terms.

(ii) Try to redefine □ : Asm(A) # Asm(A) in the case of a weakly extensional PCA A. In the previous case we were content to use realizers as the ‘true intensions.’ But how can we proceed this time? One attempt in the case of A consisting of equivalence classes would be to try to ‘pick’ a representative of each class. But then, for □ to respect composition, these would have to be ‘compatible’ up to composition, which seems impossible.

(iii) A third option would be to be in a position to accept that weakly extensional realizers, such as the terms of Λ/ =β , are sufficiently intensional. This could be the case if we require a lot of extensionality at the assembly level, perhaps to the point where a fixed point of an untyped λ-term seems a rather intensional affair. This is again in the spirit described in the introduction (§1.1), where intensionality is argued to be defined only relative to the extensional equality.
It is slowly beginning to seem that, in the most intensional of settings, the restrictions we have demanded of iPCF v2.0 are somehow indispensable. We will produce
some further evidence for that in the process of proposing a general method for constructing IFPs at the end of the next section (§8.5): the simplest naturality argument
we can concoct already requires weak extensionality. This may not be proof, but
nonetheless it is solid evidence, as the method described therein is particularly useful
in constructing IFPs for the very intensional PCA of classical computability, K1 (§8.6).
8.5
Building IPWPSs categorically
In this section we shall show how to build IPWPSs from more basic constructs. If
given sufficiently many IPWPSs in a P-CCC which is equipped with an idempotent
comonadic exposure, then we can use our Parametric Intensional Recursion Theorem
(Theorem 60) to build an iPCF model.
The two main ingredients at our disposal will be a certain kind of retraction, and
a certain kind of enumeration. Both of these are unlike the ones that have been
considered before, and they make deep use of the theory of exposures as developed
in this thesis. Consequently, they have a very intensional flavour.
Our enumerations will be arrows of the form X → A, where A will be the object
enumerated, and X the object of ‘indices.’ To this we will add a factorisation property,
which will be evocative of, or even directly related to, the idea of path surjections as
briefly discussed in §6.3.
To this, we will add a special notion of intensional retraction, which allows one
to represent ‘code’ for objects of the form X QZ (for any Z) as a sort of retract
of X, but only up to extension. We will use these contraptions to formulate an
inductive argument that constructs IPWPSs at all objects contained in the intensional
exponential ideal I generated by the ‘grammar’
I
::=
B | N | I QZ
Throughout this section, let us fix a cartesian closed P-category C, and a comonadic
exposure (Q, , δ) on it.
QX is an object which holds information about ‘codes’ of objects of type X.
These ‘codes’ can often be encoded as very simple first-order data in an object Y ;
for example, Y could be the natural numbers object. Sometimes we might be able
to retrieve the original ‘code’ in QX from Y , making QX a retract of Y . But, in
some cases, we might not: we will only manage to ‘interpret’ the data in Y as data
in X, yielding the same extension—but not the same code—as the original one. This
situation is precisely captured by intensional retractions.
Definition 59. X is an intensional retract of Y (w.r.t. Q) whenever there is a pair of arrows s : QX → Y and r : Y → X such that the triangle

      QX −s→ Y −r→ X

commutes up to ∼ with ε_X : QX → X, that is, r ◦ s ∼ ε_X . We call X an intensional retract of Y .
The second concept that is central is that of enumeration. As hinted in §6.3
in the context of various forms of surjections, we can think of arrows X → A as
‘enumerating’ the elements of A by ‘indices’ in X. We will require the existence of a
particular type of path surjection (see §6.3), namely one whose ‘path’ object (denoted
N in §6.3) is an intensional context of the form ∏_{i=1}^{n} QBi .
Definition 60. An object A ∈ C is (Q, X)-enumerated by e : X → A just if it is a (∏_{i=1}^{n} QBi )-path surjection for every finite list of objects B1 , . . . , Bn .

That is: for every arrow f : ∏_{i=1}^{n} QBi → A there is at least one arrow φf : ∏_{i=1}^{n} QBi → X, not necessarily unique, that makes the triangle

      ∏_{i=1}^{n} QBi −φf→ X −e→ A,      e ◦ φf ∼ f

commute. We often call the arrow e : X → A a (Q, X)-enumeration, and say that A is (Q, X)-enumerable.
We are very fond of (Q, X)-enumerations for two reasons. The first reason is
that they are quite easy to construct: the domain of φf provides enough intensional
information in the QBi ’s. It would be essentially impossible to construct something
of the sort given merely a ∏_{i=1}^{n} Bi . Intuitively, the reason is that the Bi ’s are available extensionally, i.e. as a kind of oracle to which we can pose a (probably finite)
number of questions. It would be unthinkable to internally extract an ‘index’ for the
enumeration e : X → A in such a situation.
The second reason is simply because—under one mild assumption—they directly
give rise to IPWPSs.
Lemma 34. If

• A is (Q, X)-enumerated by e : X → A; and
• X^{QX} is an intensional retract of X,

then there is an intensional parametric weak-point surjection p : QX × QX → A.

Proof. Let (s, r) witness X^{QX} as an intensional retract of X. Define

      p  =def  QX × QX −(ε_X × id)→ X × QX −(r × id)→ X^{QX} × QX −ev→ X −e→ A

We want to show that this is an IPWPS. Let f : ∏_{i=1}^{n} QBi × QX → A. Then f can be written as

      ∏_{i=1}^{n} QBi × QX −φf→ X −e→ A

as e : X → A is a (Q, X)-enumeration. We can λ-abstract the index, and take its co-Kleisli lifting to obtain (λ(φf ))∗ : ∏_{i=1}^{n} QBi → Q(X^{QX}). Post-composing with the lifted section s∗ : Q(X^{QX}) → QX yields an ‘index’

      xf  =def  s∗ ◦ (λ(φf ))∗ : ∏_{i=1}^{n} QBi → QX

w.r.t. p:

      p ◦ ⟨s∗ ◦ (λ(φf ))∗ , a⟩
   ∼    { definition of p, products after brackets }
      e ◦ ev ◦ ⟨r ◦ ε ◦ s∗ ◦ (λ(φf ))∗ , a⟩
   ∼    { Prop. 13, int. retract. }
      e ◦ ev ◦ ⟨ε ◦ (λ(φf ))∗ , a⟩
   ∼    { Prop. 13 again }
      e ◦ ev ◦ ⟨λ(φf ), a⟩
   ∼    { cartesian closure }
      e ◦ φf ◦ ⟨id, a⟩
   ∼    { e is a (Q, X)-enumeration }
      f ◦ ⟨id, a⟩
So much for the construction of IPWPSs given enumerations. What about higher
types? In fact, the following lemma shows that, if X is sufficient to intensionally
encode X QZ , then we can ‘lift’ a (Q, X)-enumeration e : X → A to (Q, X)-enumerate
AQZ . This is where our previous notion of intensional exponential ideal comes from.
Lemma 35. Suppose that
• A ∈ C is (Q, X)-enumerated by e : X → A, and
• X QZ is an intensional retract of X.
Then AQZ is (Q, X)-enumerable.
Proof. Take f : ∏_{i=1}^{n} QBi → A^{QZ}. Then f ∼ λ(g) for some g : ∏_{i=1}^{n} QBi × QZ → A. By IH, we have some φg : ∏_{i=1}^{n} QBi × QZ → X such that the triangle

      ∏_{i=1}^{n} QBi × QZ −φg→ X −e→ A,      e ◦ φg ∼ g

commutes. If we apply λ(−) to this triangle, we obtain

      e^{QZ} ◦ λ(φg ) ∼ f : ∏_{i=1}^{n} QBi → A^{QZ}

We can now rewrite λ(φg ) ∼ ε ◦ (λ(φg ))∗ using Proposition 13, where (λ(φg ))∗ : ∏_{i=1}^{n} QBi → Q(X^{QZ}), so that

      e^{QZ} ◦ ε ◦ (λ(φg ))∗ ∼ f

But X^{QZ} is an intensional retract of X, witnessed by s : Q(X^{QZ}) → X and r : X → X^{QZ} with r ◦ s ∼ ε, so we can rewrite like so:

      e^{QZ} ◦ r ◦ s ◦ (λ(φg ))∗ ∼ f

Hence, by defining

      e′  =def  e^{QZ} ◦ r : X → A^{QZ}

we have that A^{QZ} is (Q, X)-enumerated by e′ , and the index of f is

      φf  =def  s ◦ (λ(φg ))∗
Theorem 65. Suppose that

• B⊥ , N⊥ ∈ C are (Q, X)-enumerable, and that
• X^{QZ} is an intensional retract of X for any Z ∈ C.

Then any object in the intensional exponential ideal generated by

      I ::= B⊥ | N⊥ | I^{QZ}

for any Z ∈ C has parametric IFPs.

Proof. We can show by induction that every object generated by I is (Q, X)-enumerable: the base cases are assumptions, and the inductive step is provided by Lemma 35. Then by Lemma 34 there is an intensional weak-point surjection pI : QX × QX → I for every I. Finally, by the Parametric Intensional Recursion Theorem (Theorem 60) each I has IFPs.
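As an illustration of the reach of this theorem (the instances are our own), fixing any objects Z, Z′ ∈ C the ideal contains, among others,

      B⊥ ,   N⊥ ,   N⊥^{QZ} ,   B⊥^{QN⊥} ,   (N⊥^{QZ})^{QZ′}

and Theorem 65 equips each of these with parametric IFPs, provided the two hypotheses hold.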
Naturality
It is again worth asking about the necessary premises that are sufficient for us to conclude that the IPWPSs built in this section are natural in the sense of §7.4. According to our previous results, we need xh ◦ g ≈Q xh◦(g×id) . We can show that, under certain assumptions, the construction of an IPWPS from an enumeration in Lemma 34 maintains it. Calculating suffices to elicit the necessary assumptions:
xh ◦ g
≈Q { definition }
s∗ ◦ (λ(φh ))∗ ◦ g
≈Q { idempotence }
s∗ ◦ (λ(φh ) ◦ g)∗
≈Q { λ(−) natural up to ≈Q }
s∗ ◦ (λ(φh ◦ (g × id)))∗
≈Q { λ(−) preserves ≈Q , φh ◦ (g × id) ≈Q φh◦(g×id) }
s∗ ◦ (λ(φh◦(g×id) ))∗
which by definition is xh◦(g×id) . Hence,
Corollary 8. If λ(−) preserves ≈Q and is natural up to it, and moreover the (Q, X)-enumeration e : X → A satisfies φh ◦ (g × id) ≈Q φh◦(g×id) , then the IPWPS p :
QX × QX → A constructed in Lemma 34 is natural in the sense described at the end
of §7.4.
In turn, an easy calculation shows that if a (Q, X)-enumeration is ‘natural’ in the
above sense then the inductive step of Lemma 35 maintains this property—if λ(−)
preserves ≈Q ! By induction, all the IPWPSs constructed in Theorem 65 are natural.
But we are—so to speak—already preaching to the choir: we have made use of
the naturality and preservation of λ(−) up to ≈Q . Thus, the model is already weakly
extensional, and Lemma 33 already provides a way to make it natural.
At this point, it is beginning to seem like there is no way around weak extensionality. On the one hand, generalising our results to yield natural IFPs seems to already require weak extensionality. On the other hand, we cannot conceive of a weakly extensional model in which IFPs really are more informative than EFPs. Intensionality
and weak extensionality seem to be at odds with each other. If we take all of this
into account, limiting the fixed point rule to admit no free variables other than the
‘diagonal’ one seems almost forced in the most intensional of settings.
8.6
Asm(K1) as a model of iPCF v2.0
Let us revisit the construction of the P-category of assemblies Asm(K1 ) on the PCA
K1 of classical computability, as described in §5.2. We shall prove that it is an iPCF v2.0 model, as per Definition 56. It certainly comes with all the prerequisite structure,
so all we need to check is whether it has (natural) intensional fixed points at all the
relevant objects. We shall construct them using Theorem 65.
Before we proceed with the construction, we need to define some objects of interest.
First, we define the assembly N ∈ Asm(K1 ) of natural numbers by

      |N|  =def  N,      knkN  =def  {n}

We also need to define the assembly B ∈ Asm(K1 ) of booleans by

      |B|  =def  {ff, tt},      kff kB  =def  {0},      kttkB  =def  {1, 2, . . . }
At this point we urge the reader to recall the definition of the lifted assemblies N⊥
and B⊥ that we defined for K1 in §5.2.3.
Proposition 14. For any assembly Z ∈ Asm(K1 ), the object N⊥^{QZ} is an intensional retract of N⊥ : there is an intensional retraction

      s : Q(N⊥^{QZ}) → N⊥ ,      r : N⊥ → N⊥^{QZ}

Proof. For the section part, we define

      s : Q(N⊥^{QZ}) −→ |N⊥ | ,      (f, n) ↦ n

which is realized by λ∗ nx.n. For the retraction we define

      r : |N⊥ | −→ N⊥^{QZ} ,      n ↦ fn ,      ⊥ ↦ ((z, a) ↦ ⊥)

where

      fn : |QZ| −→ |N⊥ | ,      (z, a) ↦ m if n · a · 0 ' m,      (z, a) ↦ ⊥ if n · a · 0 ↑

fn is realized by λ∗ ax. n · a · 0, and hence r itself is realized by λ∗ wax. w · 0 · a · 0.

It remains to show that if n ∈ kf k_{N⊥^{QZ}} , then fn = f . In order to show this, the first thing we have to note is that N⊥ is a modest set: a realizer can only realize one element of |N⊥ | =def N + {⊥}. Thus, if n · a ∈ kxkN⊥ , then necessarily f (z, a) = x: if f (z, a) = x′ , then we must have n · a ∈ kx′ kN⊥ , so x = x′ . We can thus set up a chain of equivalences,

      f (z, a) = m ⇐⇒ n · a ∈ kmkN⊥ ⇐⇒ n · a · 0 ' m ⇐⇒ fn (z, a) = m

and, similarly,

      f (z, a) = ⊥ ⇐⇒ n · a ∈ k⊥kN⊥ ⇐⇒ n · a · 0 ↑ ⇐⇒ fn (z, a) = ⊥
Proposition 15. N⊥ is (□, N⊥ )-enumerable.
Proof. Suppose (f, r) : ∏_{i=1}^{n} QBi → N⊥ . This means that, if ai ∈ kbi kBi for all i, then

      r · ⟨a1 , . . . , an ⟩ ∈ kf ((b1 , a1 ), . . . , (bn , an ))kN⊥

where

      ⟨a⟩  =def  a
      ⟨a1 , . . . , am+1 ⟩  =def  pair ⟨a1 , . . . , am ⟩ am+1

We can then define φ(f,r)  =def  (gr , s) where

      gr : ∏_{i=1}^{n} QBi −→ |N⊥ | ,      ((b1 , a1 ), . . . , (bn , an )) ↦ ⟨r, a1 , . . . , an ⟩

which is obviously realizable. Then, define eN⊥ : N⊥ → N⊥  =def  (h, v) by

      h : |N⊥ | −→ |N⊥ |
      c ↦ ⊥,  if (c)1 · ⟨(c)2 . . . (c)n+1 ⟩ · 0 ↑
      c ↦ m,  if (c)1 · ⟨(c)2 . . . (c)n+1 ⟩ · 0 ' m
      ⊥ ↦ ⊥

where (⟨a1 , . . . , am ⟩)i  =def  ai . This is realizable, and it is easy to show that the required triangle

      ∏_{i=1}^{n} QBi −φ(f,r)→ N⊥ −eN⊥→ N⊥ ,      eN⊥ ◦ φ(f,r) ∼ (f, r)

commutes.
Proposition 16. B⊥ is (□, N⊥ )-enumerable.

Proof. The same proof as for N⊥ almost works: we only need to slightly alter the definition of eN⊥ =def (h, v) : N⊥ → N⊥ to make it into an arrow eB⊥ =def (h′ , v) : N⊥ → B⊥ where

      h′ : |N⊥ | −→ |B⊥ |
      c ↦ ⊥,  if (c)1 · ⟨(c)2 . . . (c)n+1 ⟩ · 0 ↑
      c ↦ ff,  if (c)1 · ⟨(c)2 . . . (c)n+1 ⟩ · 0 ' 0
      c ↦ tt,  if (c)1 · ⟨(c)2 . . . (c)n+1 ⟩ · 0 ' m ≠ 0
      ⊥ ↦ ⊥

The same realizer works.
We thus conclude that
Theorem 66. Asm(K1 ) has intensional fixed points at the intensional exponential ideal generated by the ‘grammar’

      I ::= B⊥ | N⊥ | I^{QZ}

for any Z ∈ Asm(K1 ).

Proof. Use Propositions 14, 15 and 16 to fulfil the premises of Theorem 65.
Chapter 9
Conclusions & Future Work
We briefly peruse what has been achieved in this thesis:
(i) We first attempted to pin down the informal meaning of intensionality as the
possibility of non-functional operations (§1), where non-functionality is understood in the presence of some ambient extensional equality.
(ii) Then, we carefully reviewed the distinction between extensional and intensional
recursion in computability theory (§2).
(iii) This led us to the formulation of a higher-order intensional and reflective programming language, in the form of a modal λ-calculus called Intensional PCF.
This language included genuinely ‘non-functional’ operations, and typed intensional recursion through Löb’s rule. We showed that, if intensionality/‘non-functionality’ is limited to modal types, then iPCF is consistent (§3).
(iv) In §4 we began the search for a categorical semantics for that calculus. We
first argued that 1-category theory is not the correct mathematical setting to
speak of intensionality. As an alternative, we proposed P-category theory. We
then proceeded to introduce exposures—a new P-categorical construct which
abstracts the idea of intensional devices, e.g. Gödel numberings. Then, drawing
inspiration from comonads, we developed the theory of exposures.
(v) The claim that exposures are abstractions of intensional devices was substantiated by carefully constructing three rather different examples in §5.
The first one comprises an actual Gödel numbering on Peano Arithmetic.
The second one was drawn from higher-order computability/realizability. If we
think of realizers as machine code, this main example made clear the idea that
exposures expose the implementation.
The third one is based on ideas from homological algebra, and constitutes a
first attempt at recognising the occurrence of intensionality in fields beyond
logic and computability.
(vi) Then, in §6, we reformulated intensional recursion, and showed that it can
be captured through exposures. We proved abstract analogues of classic intensional results, like Gödel’s First Incompleteness Theorem, Tarski’s Undefinability Theorem, and Rice’s Theorem. These results lend credibility to the idea that
exposures are a toolkit where the fine structure of results with an intensional
flavour can be described.
The culmination of this chapter was the Intensional Recursion Theorem (Theorem 54), which set out conditions that guarantee the existence of intensional
fixed points. The Intensional Recursion Theorem can be thought of as an abstract version of Kleene’s Second Recursion Theorem.
(vii) In the final two chapters (§7, §8) we brought iPCF and exposures together.
After some technical development, we showed that a restriction of iPCF, called
iPCF v2.0, can be interpreted in a cartesian closed P-category equipped with a
product-preserving comonadic exposure, and IFPs at appropriate objects.
We then discussed the cases in which the restrictions that plague iPCF v2.0
can be waived. However, we argued that there are good reasons indicating that
lifting those restrictions is at odds with intensionality, at least in the way in
which we understand it.
We closed the thesis by proving that Asm(K1 ), the P-category of assemblies on
K1 , is a model of iPCF v2.0. As K1 is a PCA based on classical computability, this means that iPCF v2.0 is adequate for constructing indices in classical
computability theory: it is a typed ‘intensional metaprogramming’ language for
writing programs in the style of Jones [1997], albeit with limited expressivity.
In the rest of this concluding chapter, we will try to evaluate these achievements. We
would like to focus on four aspects in particular:
• Is intensionality really just the ability to have non-functional operations?
• Are iPCF and iPCF v2.0 adequate intensional and reflective languages?
• How do exposures compare with alternative ‘theories of intensionality’ ?
• Have we managed to elucidate the mysterious Second Recursion Theorem of
Kleene?
9.1
Is intensionality really just non-functionality?
The formalisation of Frege’s ideas of sense and reference, which we discussed in §1.1, is
an old problem. Even though many have tried, there does not seem to be a definitive
account. Some even question whether such a definitive account should exist.
Moschovakis [1993] defines the sense of a logical formula as the (possibly infinitary)
algorithm that the syntax of a (first-order) formula suggests. This is formalised using
Moschovakis’ own theory of recursive algorithms.
Occupying some middle ground, Abramsky [2014] draws on a long tradition of
programming language semantics. In a sense, he introduced what one could call the
spectrum of intensionality: some kinds of semantics are more intensional than others,
in that the mathematical objects that comprise it contain strictly more information
about the computation that is being modelled, e.g. an account of the interactions
that take place when a program is run. The gist is that, by moving to more refined
semantics, more can be captured, even though a price might have to be paid, perhaps
in the form of quotienting the model. However, we believe that op. cit. is permeated
by a preliminary form of the ideas explicitly developed in this thesis.
A third opinion is given by Girard et al. [1989]: “the sense contains the denotation,
at least implicitly.” Girard proposes a study of the invariants of syntax, in a vein
inspired by proof theory. Some interpret this statement as taking the extreme viewpoint that ‘syntax = sense,’ but it becomes clear in [Girard, 2011] that the author is
simply proposing the study of new, unorthodox proof-theoretic structures.
In this thesis we avoided this lengthy debate by defining intensionality to mean
‘anything finer than what we call extensional equality.’ We like to view this as purely
mathematical, and philosophically agnostic. We have merely demonstrated in §3 that
modal types allow one to treat their elements as pure syntax.
In the development of exposures in §4, and then in the intensional semantics of
§8, it became clear that this view of equality is modular. The modal types allow one
to introduce equalities in a controlled fashion: we began with no equations in iPCF
(§3), and gradually reached a set of equations in iPCF v2.0 (§8) which seem to mirror
intensional equality, as defined by the exposure. Whether that is a precise reflection
can be shown through a completeness theorem, which we conjecture to hold. In turn,
our flavour of exposure (cartesian, product-preserving, weakly cartesian closed, etc.)
determines which equations intensional equality will satisfy. We believe that the
modularity of this framework is a serious advantage that makes it adaptable to all
sorts of settings, and all levels of fine-grain intensional information.
9.2
How expressive is iPCF?
The first two objectives of this thesis that we discussed in §1.2 were the clarification
of the intensional and reflective programming paradigms.
The development of iPCF elucidated the fact that we can indeed treat terms at
modal types as if they were ‘pure syntax’ (up to α-equivalence), and make arbitrary
decisions on them in a typed manner. This shows that, if we comprehend intensional
programming as entailing non-functional behaviour, then there is a type theory in
which this ability is provably consistent. As we remarked in the introduction, there
have been many similar attempts in the past, but all were in one way or another
problematic: some led to particularly complicated languages with unclear semantics,
like those of Smith [1982, 1984]; others, like Gabbay and Nanevski [2013], were close
to ours, but proposed semantics which are—unfortunately—inconsistent. A third
class led to impossibility theorems, e.g. Wand [1998].
Our work decisively resolved many of these issues: once a modal typing discipline
is established, then we are perfectly capable of accommodating both non-functional
behaviour as well as reflexivity. If we limit these behaviours to the modal types, and
use the typing system to stop the flow from the extensional region of the language to
the modal types, then we get a consistent language. We also believe that we are the
first to directly tie intensionality and reflexivity together, as we think they should be,
if we are to use reflexivity in any interesting manner.
Nonetheless, as we hinted in §3.6, iPCF can only be considered a proof-of-concept.
We have deliberately decided not to concern ourselves with the task of finding good
sets of intensional primitives, but merely with the possibility of crafting a setting in
which this is possible. We believe that finding good intensional primitives is a particularly hard problem, with close connections to both metaprogramming and higher-order
computability. Furthermore, finding other models is likely to prove challenging.
9.2.1
Metaprogramming
As we discussed in the introduction, metaprogramming is a difficult task that dates
back to the work of the Lisp community. The area has recently witnessed a resurgence
of interest, leading to the International Summer School on Metaprogramming that
took place at Robinson College, Cambridge (8-12 August 2016) around the time
that the author began drafting this thesis. It is quite clear that a good foundation for
metaprogramming is still lacking: see the work of Berger [2016] for a recent discussion.
At this point we should remark that intensional operations are not the main
subject of that area: metaprogramming is about constructing code dynamically from
its fragments, and not destructing it, as we are wont to do with intensionality.
A common issue in metaprogramming is that of manipulating code with open variables (= free variables). This is known to be a rather painful limitation of the S4-based
language of Davies and Pfenning [1996] on which iPCF is based. In order to overcome
this, Davies [1995, 1996, 2017] developed a λ-calculus based on another modality that
is reminiscent of the ‘next’ operator of Linear Temporal Logic (LTL). This language
has explicit annotations of the stage at which each computation is taking place, and is
also able to handle open code. This led to a flurry of developments, and in particular
the very influential work of Taha and Sheard [1997, 2000] on MetaML, and then the
environment classifiers of Taha and Nielsen [2003], which have recently been improved
by Tsukada and Igarashi [2010]. See [Kavvos, 2016, §6] for further references, and for
a discussion of the modal aspects used.
In iPCF, the restriction of no open code does not only occur as it did before
(only modal variables under boxes), but it also vengefully reappears in the case of
intensional operations: we saw in §3.3 that, unless the terms on which we operate
intensionally are closed, we risk inconsistency. The author’s colleague, Mario Alvarez-Picallo, has begun some preliminary work in intensionality in calculi like Davies’,
which we discuss further in §9.3.
However, to achieve any meaningful notion of intensional primitives, much more
than simple modal types is required. Consider, for example, a constant deapp that
attempts to take apart an application, in the sense that
deapp (box (M N )) = ⟨box M , box N ⟩
(where we have also assumed products in the language). This cannot meaningfully
be typed in the simple modal setting. In fact, it seems that we need existential types
in order to assign a type to this, as the domain of M is unknown. For example, the
type would be something like
deapp : □B → ∃A. □(A → B) × □A
However, this would take us deep into the waters of second-order modal logic, and we
are not aware of any previous work on that front.
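To see what such a type would buy us, here is a worked instance (the constant deapp, its equation, and its existential type are the hypothetical ones sketched above; the instance is ours):

      deapp (box ((λx:Nat. x) 0))  =  ⟨box (λx:Nat. x), box 0⟩  :  ∃A. □(A → Nat) × □A

with the existential witness A instantiated to Nat. The witness cannot be recovered from the type □Nat of the argument alone—different applications of the same type may decompose at different domains—which is why a second-order (existential) quantifier seems unavoidable.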
Similar second-order type systems like this have been suggested in the context of
Barry Jay’s factorisation calculi. These are combinatory calculi that admit intensional
operations at ‘face value.’ The trick by which disaster is avoided is that of having
intensional operations act only on normal forms, e.g. a partially applied S combinator
(SP Q), which they are able to take apart (factorise) in order to reuse its immediate
subterms P and Q: see Jay and Given-Wilson [2011]. In subsequent work by Jay and
Palsberg [2011] it was shown that such a system admits a second-order Curry-style
typing similar to the one shown above, but without the modalities.
9.2.2
Higher-Order Computability
This is perhaps the point of view from which the present work originates, but also the
least developed in this thesis. We discussed some possible applications of our ideas
on intensional higher-order computation in §1.2.3. Unfortunately, the scope of this
thesis could not be stretched to contain more material in this direction. Nevertheless, the author has a presentiment that any truly novel understanding of intensional
higher-order computation will come simultaneously with the development of intensional primitives for a language like iPCF.
9.2.3
iPCF, iPCF v2.0, and their models
Even though we began with the model of assemblies on K1 in mind, it gradually
became apparent that things were not quite that simple. In §8 we ran into severe
difficulties when trying to force Asm(K1 ) to be a model of iPCF, which we related to
three fundamental problems:
1. free variables in intensional fixed points;
2. free variables in abstractions of quoted terms, a.k.a no higher-order intensional
programming; and
3. the construction of intensional fixed points at all types.
Whereas the third problem is largely technical, the other two are quite serious
points against our argument that iPCF v2.0 is a good ‘language of indices.’ We argued
extensively in §8.1 that, indeed, both of these problems seem more-or-less natural
from a computability theory point-of-view. However, naturality and substitution are
fundamental aspects of any λ-calculus, and the fact that we have to disallow them
fundamentally reduces the expressiveness of what we have achieved.
It thus follows that we need to consider more candidate models of iPCF. We can
go about this task in three ways:
(i) We can study Asm(K1 ) more closely; or
(ii) we can look for other PCAs A such that Asm(A) is an interesting natural /weakly
extensional model; or
(iii) we can look elsewhere.
Regarding the first option, we should remark that we did not show that Asm(K1 )
is not natural. We very strongly believe that it is not weakly extensional, although we
have no evidence for that either. Disproving either of these assertions is likely to be
a very cumbersome exercise, and it is perhaps best that it be automated/computer-assisted. If naturality is shown to not be the case, then we should perhaps redefine
box (−) to enclose only completely closed terms, with no free variables at all, whether
modal or intuitionistic. This would directly lead us to some kind of intensional combinator language, as also described at the end of §8.1, which would not be very far
from what is already the case with indices in computability theory. It may be that
the ‘nature’ of K1 simply has this shape.
However, even if Asm(K1 ) were to unexpectedly prove to be natural, the key to
Davies-Pfenning style staged metaprogramming—which is an expressive improvement
over indices in computability theory—is precisely the free variables under quoted λ-abstractions. This very urgently requires weak extensionality, as remarked in §8.4.
In that section, we also discussed the possibility of finding some weakly extensional
PCA A that can be informative in terms of intensional programming. We concluded
that the said task is likely very difficult.
It is not inconceivable that there could be another source of models for iPCF:
perhaps a change in perspective is required, and this change may be one that involves
moving away from ideas sourced from realizability, and more towards the emergent
theory of metaprogramming. This is also related to some ideas that we will discuss
in the next section, regarding alternative proposals for modelling the phenomenon of
intensionality at large.
Finally, as soon as more models of iPCF are identified, there are many interesting
questions that need to be answered. First, showing a completeness theorem for iPCF
interpreted with exposures should be a primary goal. In a sense, a completeness
theorem will demonstrate that our categorical structure indeed corresponds to what
we type-theoretically understand as intensional types. This should be investigated as
a priority. Secondly, a more careful approach to the available intensional operations
should be taken, and some form of adequacy theorem for iPCF should be shown.
We are not exactly sure what form this should take, but it should certainly be much
more involved than the adequacy statements concerning PCF. This issue is likely
to be deeply intertwined with the intensional primitives we discussed this section.
Thirdly, there must be many interesting results concerning the mismatch between
iPCF and its various models. This is very much the case with PCF, beginning with
the work of Plotkin [1977] on domain-theoretic models, and all the way to the work of
Escardó [1999] on metric models. However, the intensional case is likely to be much
richer, detailed, and tricky.
9.3
Exposures vs. other theories of intensionality
Exposures attempt to abstract the general notion of intensional devices, of which
Gödel numberings or numberings of partial recursive functions are only particular
cases, as seen in §5. As such, the ambition of the work in this thesis is slightly different
when compared to previous attempts, which are mainly concerned with presenting a
categorical account of computability theory.
We showed in §6 that exposures are particularly elegant settings for reproducing
well-known diagonal arguments. It has been known for some time that there are
common elements between these theorems, but the language of P-categories and exposures allows us to capture what is needed for each argument in very fine detail.
For example, Rice’s theorem necessitates an evaluator • : Q # Id, whereas Tarski’s
Undefinability Theorem precisely states that merely having evaluators is catastrophic
for one aspect of consistency (fix-consistency), as this causes ¬ to have a fixed point.
On the other hand, Gödel’s First Incompleteness Theorem states that we must have
more truth values than simply true and false.
Contrary to the presentation of Lawvere [1969, 2006], all our theorems are in the
same language of comonadic exposures. Our development is positive, in that it is
predicated on the existence of IFPs, and not the absence of EFPs, as in Lawvere’s
paper. We have concentrated on the oughts, and not the ought nots. It is for this
reason that we view our results as a refinement of those of Lawvere.
We believe that all these observations and results lend support to the idea that
exposures can become a useful toolkit situated at the heart of a new theory of intensionality, which would be applicable in all sorts of settings, not necessarily related to
logic and computation. Furthermore, unlike previous work in the same style, exposures draw inspiration from modal logic and the Curry-Howard correspondence. It is
for that reason that the resulting framework is—unlike all its predecessors—inherently
typed. Essentially all previous work on the subject relied on some object—‘universal’ in one way or another—which contained codes for a whole class of arrows.
First, we want to mention the only two generalisations of the SRT of which we
are aware. They both seem very similar, and they concern effective Scott domains:
one is due to Kanda [1988], and the other one due to Case and Moelius [2012].
As for category-theoretic frameworks, the only previous work with a similar flavour
consists of (a) a paper of Mulry, which uses the recursive topos; and (b) the line of
work that culminates with Cockett and Hofstra’s Turing categories, as well as their
intellectual predecessors.
Mulry’s Recursive Topos
Mulry [1982] constructed the recursive topos Rec, which is the Grothendieck topos of
canonical sheaves on the monoid of recursive functions under composition. Broadly
speaking, the notion of (higher-order) computation in the recursive topos is that of
Banach-Mazur computability: a map is computable if it maps recursive sequences to
recursive sequences.1
In [Mulry, 1989], the author aims to “contemplate a synthetic theory of computation, i.e. an intrinsic set of categorical axioms for recursion which captures both
essential features of classical recursion theory but is also applicable over a wide range
of applications in areas of effective mathematics and computer science.”
Indeed, Mulry’s work has some overlap with some of the material we presented in
§6.3 on Lawvere’s theorem. He identifies an enumeration of the partial recursive functions as a point surjective map, and then proceeds to interpret all the classic results
of type 2 computability (Myhill-Shepherdson, Kreisel-Lacombe-Shoenfield, etc.) in
the context of the recursive topos. This is followed by a discussion of the connections
between the recursive topos, the effective topos of Hyland [1982], and effective Scott
domains. He there points out that one can construct an N-path surjection N → D and a point surjection N → D^{N} in Rec, for any effective Scott domain D.
Fixed points are discussed in the final section of the paper. It is noted there that
Lawvere’s theorem corresponds to (a limited version of) Kleene’s FRT, simply by
considering an enumeration of (two-argument) partial recursive functions, alongside
N ≅ N × N. This last isomorphism is also used to prove versions of the FRT and SRT
for effective Scott domains, based on the observations of the previous paragraph. It is
remarked that the same can be shown in the effective topos, but not in the category
of effective domains itself, as neither of their natural numbers objects are effective
domains.
¹ This short explanation of Banach-Mazur is due to Andrej Bauer, and was sourced from mathoverflow question #21745.
Thus, Mulry’s versions of the FRT and SRT seem to depend on the natural numbers object N. The SRT seems to give a fixed point for h : N → N, in the sense that
both the fixed point n and h ◦ n are indices for the same element of the effective Scott
domain. This is indeed a version of the SRT for all effective Scott domains, but it
lacks the well-typed flavour of our approach.
The road to Turing categories
Inspired by previous work by Paola and Heller [1987], and some of the material in
the thesis of Birkedal [2000], Cockett and Hofstra [2008] developed the notion of a
Turing category.
Turing categories are cartesian restriction categories: that is, they are equipped
with some structure to handle the partiality of their arrows, and there are cartesian
products—up to partiality. Their defining feature is that they have a Turing object
A, which is a domain that contains codes for all the arrows of the category: it has a
universal application,
τX,Y : A × X → Y
for each pair of objects X and Y , such that given any f : Z × X → Y , an index h :
Z → A exists, not by any means unique, such that the following diagram commutes:
      Z × X −(h × idX)→ A × X −τX,Y→ Y ,      τX,Y ◦ (h × idX ) = f
So A is a very weak exponential for all X, Y at the same time. In fact, every object
X of a Turing category is a retract of A.
Whereas the work of Cockett and Hofstra is a beautiful and general account of
settings where basic recursion theory can be done, and also gives rise to an interesting theory of categorical simulations [Cockett and Hofstra, 2010], it is very far from
the goals of our own work: we are interested in exploring intensionality, which is only
one of the phenomena that occur in recursion theory. We are also interested in doing
so in a stringent type-theoretic manner, and thus we perceive the reliance on Turing
objects as a problem. Cockett and Hofstra themselves point out the untyped nature
of their work:
“A similar inherent limitation to Turing categories lies in its essential untypedness. Recently, Longley [...] has advocated the use of typed PCAs in
order to clarify notions of computation at higher types; it is not unimaginable that the corresponding generalization of Turing categories can be
of interest if we wish to handle such notions of computation in our setting.
Such a generalization would essentially bring us back to Birkedal’s weakly
closed partial cartesian categories.”
This is followed by the observation that in the Turing category corresponding to
classical computability theory no real discussion of higher-order phenomena can occur,
as one has no way of speaking about type-2 functionals.
In conclusion, the work of Cockett and Hofstra is untyped, fundamentally based
on partiality, and mostly centred around recursion theory. We prefer typed, total, and
intensional settings, without wanting to declare any particular allegiance to recursion-theoretic arguments.
9.3.1 An Idea for an Alternative Approach
At this point we ought to record another candidate approach for modelling intensionality as we see it in this thesis. This idea is due to Martin Hyland, who examined
this thesis. However, a similar framework has also been put forward by my fellow
student, Mario Alvarez-Picallo, in his work on the semantics of metaprogramming.
In many ways, the framework of exposures can be seen as evil or pathological,
in that it requires a highly non-standard property, namely that PERs are reflected
instead of preserved—as they would be for a P-functor. In that sense, it may indeed
be non-categorical (this is another interesting idea that needs to be investigated).
Perhaps the following proposal would be more workable. Instead of insisting on a
single mathematical universe, i.e. a single category, we could consider two: one that
is intensional, and one that is extensional. The theory of exposures already gives us
two rather good candidates for these: given an endoexposure Q : (C, ∼) # (C, ∼), the
intensional universe is the x-ray P-category (C, ≈Q ), whereas the extensional one is the
P-category (C, ∼) itself—see §4.2 for definitions of these notions. Now, as intensional
equality implies extensional equality, it is evident that there is a trivial P-functor,
(C, ≈Q ) −→ (C, ∼)
which is the identity on objects, the identity on morphisms, and full—but definitely
not faithful!² One could perhaps argue that we could even move away from P-categories, and back into ordinary categories: there could be a functor C → D, which is the identity on objects and full. Perhaps D could be of the form C/∼, i.e. a quotient category of some sort. Thus, intensionality is obtained by having (P-)categories on two levels, so that one is included in the other.

² The first of these criteria is a recurrent theme of questionable categorical status, which nonetheless occurs reasonably often, e.g. in Freyd categories; see e.g. Staton [2014].
Mario Alvarez has proposed some directions regarding the categorical modelling
of non-homogeneous staged metaprogramming calculi, like the ones of Davies [1995,
1996, 2017], which bear a certain resemblance to the preceding idea. Put simply,
he proposes a simple metaprogramming calculus which consists of essentially two
‘stages,’ one ‘under the circle’ (cf. under the box), and the ordinary one. Under the
circle, affairs are intensional, in much the same way that they are under our boxes, but
otherwise things work up to ordinary equality. Using the syntax of this calculus, one
can construct a cartesian category SM, the syntactic model: this is a term model,
but not quotiented up to equality. A model of this is then a cartesian closed category
C along with a functor,
F : SM −→ C
which is strictly product-preserving. Furthermore, if intensional operations are to
be soundly interpreted, this functor should be faithful. This is to be understood in
the following way: the functor F encodes syntactic information within the semantic
model, in a manner that does not collapse the syntax.
There is an odd tension between fullness and faithfulness in this. Which one
should we choose for intensionality? Faithfulness guarantees that the syntax is accurately represented in the model, whereas fullness describes the idea that for every
‘extensional’ view, there is at least one intensional one. Nevertheless, investigation of this two-level, two-category paradigm of intensionality is likely to be a very
interesting avenue for further research.
9.4 Kleene’s mysterious Second Recursion Theorem
The basic impetus behind this thesis was not intensionality itself, but rather its rôle
in Kleene’s Second Recursion Theorem, which we exhaustively studied in §2. These
questions were brought to the author’s attention by Abramsky [2012, 2014], who—
along with Jones [2013]—had identified the mysterious nature of this theorem in the
1980s. Abramsky [2012] also remarks that the theorem is very powerful, and—even
though very simple—its proof is opaque, and provides no intuition. Our analysis of
intensional recursion in §6, and the connection we have drawn with Lawvere’s work
on diagonal arguments, has shed some light on these issues.
First, we believe that the opacity of the proof is not a real problem: all diagonalisation proofs have a quintessentially ‘magical’ element that often stimulates people’s
imagination, bringing them to the forefront of popular science books and scientific
novels; see e.g. Hofstadter [1979], Doxiadis [2001]. What is more, Kleene [1981] explicitly states that the SRT was obtained quite straightforwardly by translating the
FRT from the λ-calculus. He even dates this derivation before the 1st of July 1935.
Instead, we believe that our analysis provides a new perspective on the structure of the situations in which Kleene’s argument is fundamentally applicable. This
amounts to a generalisation of intensional recursion beyond first-order computability: the Curry-Howard interpretation through Löb’s rule, as well as the Intensional
Recursion theorem stated in the language of comonadic exposures, reveal patterns in
the statement and the proof of the theorem that were not known before.
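Schematically, and in our own rendering rather than as a verbatim rule of iPCF, the typing pattern behind intensional recursion is

Γ, z : □A ⊢ M : A   ⟹   Γ ⊢ fix z in M : □A

that is, Löb’s rule read through Curry-Howard: from a construction of A that may use a code z : □A of itself, we obtain a code of type □A; the corresponding axiom is the Gödel-Löb formula □(□A → A) → □A.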
9.5 Concluding remarks
It is evident from the previous sections that there are many very interesting questions that follow from our work in this thesis. Perhaps the most interesting theoretical
direction is that of studying the expressiveness of iPCF, especially through the related themes of metaprogramming and non-functional higher-order computation. The
present author believes that it is very likely that we could use modalities and intensionality to obtain access to computational power that we have either sclerotically
rejected from theoretical analysis, or left to programmers of the untyped persuasion.
These developments—we hope—will proceed hand-in-hand with an improved understanding of metaprogramming, intensional or not. Progress is difficult to achieve
in metaprogramming not only because we have been lacking a theoretical foundation, but also because industrial metaprogramming has progressed unabashedly in
the meantime. The result is that multiple hard-to-shake ad-hoc habits have been
established.
On the other hand, our intensional framework is as close to 1-category theory
as one can get, and it is for this reason that we hope that it may be more widely
applicable than just in the context of computability and logic. We have demonstrated
at least one example of this in the case of homological algebra (§5.3). However, as the
applications of category theory expand to include linguistics and philosophy—see e.g.
the forthcoming volume [Landry, 2017]—we would like to believe that the concept
of intensionality will reappear in many different settings, and our framework will be
there to explain its structure.
Appendix A
iPCF v2.0 in Agda
A.1 Basics.agda
module Basics where
open import Relation.Binary.PropositionalEquality
open import Data.Nat using (ℕ ; zero ; suc)
open import Data.Sum renaming (_⊎_ to _+_)
infixr 1 _=>_
infixr 5 _∈_
infixl 1 _⊆_
infixl 4 _,_
infixl 3 _++_
infixl 2 _∧_
------------------------
-- Types and Contexts --
------------------------
data Types : Set where
  simple : Types
  modal  : Types

data Ty : Types → Set where
  P    : ∀ {T} → ℕ → Ty T
  _=>_ : ∀ {T} → Ty T → Ty T → Ty T
  _∧_  : ∀ {T} → Ty T → Ty T → Ty T
  □_   : Ty modal → Ty modal

data Cx : Types → Set where
  ·   : ∀ {T} → Cx T
  _,_ : ∀ {T : Types} (Γ : Cx T) (A : Ty T) → Cx T

data _∈_ : ∀ {T : Types} (A : Ty T) (Γ : Cx T) → Set where
  top : ∀ {T} {Γ : Cx T} {A : Ty T} → A ∈ (Γ , A)
  pop : ∀ {T} {Γ : Cx T} {A B : Ty T} (i : A ∈ Γ) → A ∈ (Γ , B)
_⊆_ : ∀ {T} (Γ ∆ : Cx T) → Set
Γ ⊆ ∆ = ∀ {A} → A ∈ Γ → A ∈ ∆
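-- Context inclusion Γ ⊆ ∆ holds when every hypothesis of Γ also occurs in ∆;
-- the weakening and exchange lemmas below are all phrased in terms of it.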
-- Functions on contexts.
boxcx : Cx modal → Cx modal
boxcx · = ·
boxcx (Γ , A) = boxcx Γ , A
_++_ : ∀ {T} → Cx T → Cx T → Cx T
∆ ++ · = ∆
∆ ++ (Γ , A) = (∆ ++ Γ) , A
box∈cx : ∀ {Γ : Cx modal} {A : Ty modal} → A ∈ Γ → A ∈ boxcx Γ
box∈cx top = top
box∈cx (pop d) = pop (box∈cx d)
subsetdef : ∀ {T} {Γ ∆ : Cx T} {A} → A ∈ Γ → Γ ⊆ ∆ → A ∈ ∆
subsetdef d f = f d
subsetempty : ∀ {T} {Γ : Cx T} → · ⊆ Γ
subsetempty ()
subsetid : ∀ {T} {Γ : Cx T} → Γ ⊆ Γ
subsetid = λ {Γ} {A} z → z

weakone : ∀ {T} {Γ ∆ : Cx T} {A} → Γ ⊆ ∆ → Γ ⊆ (∆ , A)
weakone p = λ {A} z → pop (p z)
weakboth : ∀ {T} {Γ ∆ : Cx T} {A} → Γ ⊆ ∆ → Γ , A ⊆ ∆ , A
weakboth p top = top
weakboth p (pop x) = subsetdef x (weakone p)
weakmany : ∀ {T} (Γ ∆ : Cx T) → Γ ⊆ Γ ++ ∆
weakmany Γ · x = x
weakmany Γ (∆ , A) x = pop (weakmany Γ ∆ x)
concat-subset-1 : ∀ {T} (Γ ∆ : Cx T) → Γ ⊆ Γ ++ ∆
concat-subset-1 Γ · x = x
concat-subset-1 Γ (∆ , A) x = subsetdef x (weakone (concat-subset-1 Γ ∆))
concat-subset-2 : ∀ {T} (Γ ∆ : Cx T) → ∆ ⊆ Γ ++ ∆
concat-subset-2 Γ · ()
concat-subset-2 Γ (∆ , A) x = subsetdef x (weakboth (concat-subset-2 Γ ∆))
incl-trans : ∀ {T} {Γ Γ’ Γ” : Cx T} → Γ ⊆ Γ’ → Γ’ ⊆ Γ” → Γ ⊆ Γ”
incl-trans p q x = q (p x)
swap-last : ∀ {T} {Γ : Cx T} {A B} → Γ , A , B ⊆ Γ , B , A
swap-last {_} {·} top = pop top
swap-last {_} {·} (pop top) = top
swap-last {_} {·} (pop (pop x)) = pop (pop x)
swap-last {_} {Γ , A} top = pop top
swap-last {_} {Γ , A} (pop top) = top
swap-last {_} {Γ , A} (pop (pop x)) = pop (pop x)
cx-exch : ∀ {T} {Γ ∆ : Cx T} {A B} → (Γ , A , B) ++ ∆ ⊆ (Γ , B , A) ++ ∆
cx-exch {∆ = ·} d = swap-last d
cx-exch {∆ = ∆ , A1} top = top
cx-exch {∆ = ∆ , A1} (pop d) = subsetdef d (weakone (cx-exch {∆ = ∆}))
cx-contr : ∀ {T} {Γ ∆ : Cx T} {A} → (Γ , A , A) ++ ∆ ⊆ (Γ , A) ++ ∆
cx-contr {∆ = ·} top = top
cx-contr {∆ = ·} (pop d) = d
cx-contr {∆ = ∆ , A1} top = top
cx-contr {∆ = ∆ , A1} (pop d) = subsetdef d (weakone (cx-contr {∆ = ∆}))
is-in : ∀ {T} (Γ Γ’ : Cx T) (A : Ty T) → A ∈ (Γ , A ++ Γ’)
is-in Γ · A = top
is-in Γ (Γ’ , A’) A = pop (is-in Γ Γ’ A)
ctxt-disj : ∀ {T} (Γ Γ’ : Cx T) (A : Ty T) → A ∈ (Γ ++ Γ’) → A ∈ Γ + A ∈ Γ’
ctxt-disj Γ · A x = inj1 x
ctxt-disj Γ (Γ’ , A’) .A’ top = inj2 top
ctxt-disj Γ (Γ’ , A’) A (pop x)
with ctxt-disj Γ Γ’ A x
ctxt-disj Γ (Γ’ , A’) A (pop x) | inj1 z = inj1 z
ctxt-disj Γ (Γ’ , A’) A (pop x) | inj2 z = inj2 (pop z)
swap-out : ∀ {T} (∆ Γ : Cx T) (A : Ty T) → (∆ , A) ++ Γ ⊆ (∆ ++ Γ) , A
swap-out ∆ · A x = x
swap-out ∆ (Γ , B) A x = swap-last (subsetdef x (weakboth (swap-out ∆ Γ A)))
swap-in : ∀ {T} (∆ Γ : Cx T) (A : Ty T) → (∆ ++ Γ) , A ⊆ (∆ , A) ++ Γ
swap-in ∆ Γ A top = is-in ∆ Γ A
swap-in ∆ Γ A (pop x)
with ctxt-disj ∆ Γ _ x
swap-in ∆ Γ A (pop x) | inj1 y = concat-subset-1 (∆ , A) Γ (pop y)
swap-in ∆ Γ A (pop x) | inj2 y = concat-subset-2 (∆ , A) Γ y
A.2 iPCF.agda
module iPCF where
infixl 0 _/_`_
open import Basics
-- Definition
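-- The judgement ∆ / Γ ` A is dual-context: ∆ collects the modal (intensional)
-- hypotheses, used via iPCF-modal-var, while Γ collects the ordinary ones.
-- Note that iPCF-boxI only applies when the ordinary context of the premise is empty.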
data _/_`_ (∆ Γ : Cx modal) : Ty modal → Set where

  iPCF-var : ∀ {A}
           → A ∈ Γ
           ---------------
           → ∆ / Γ ` A

  iPCF-modal-var : ∀ {A}
                 → A ∈ ∆
                 ---------------
                 → ∆ / Γ ` A

  iPCF-app : ∀ {A B}
           → ∆ / Γ ` A => B   → ∆ / Γ ` A
           ---------------------------------
           → ∆ / Γ ` B

  iPCF-lam : ∀ {A B}
           → ∆ / (Γ , A) ` B
           -------------------
           → ∆ / Γ ` A => B

  iPCF-prod : ∀ {A B}
            → ∆ / Γ ` A   → ∆ / Γ ` B
            ----------------------------
            → ∆ / Γ ` A ∧ B

  iPCF-fst : ∀ {A B}
           → ∆ / Γ ` A ∧ B
           -----------------
           → ∆ / Γ ` A

  iPCF-snd : ∀ {A B}
           → ∆ / Γ ` A ∧ B
           -----------------
           → ∆ / Γ ` B

  iPCF-boxI : ∀ {A}
            → ∆ / · ` A
            -------------
            → ∆ / Γ ` A

  iPCF-boxE : ∀ {A C}
            → ∆ / Γ ` A   → (∆ , A) / Γ ` C
            ----------------------------------
            → ∆ / Γ ` C

  iPCF-fix : ∀ {A}
           → ∆ / (· , A) ` A
           -------------------
           → ∆ / Γ ` A
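-- A small example derivation (ours, not part of the original listing):
-- the identity function at any type, in empty contexts.
example-id : ∀ {A} → · / · ` A => A
example-id = iPCF-lam (iPCF-var top)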
-- Weakening and exchange.
exch : ∀ {∆ Γ A B C} (Γ’ : Cx modal)
→ ∆ / (Γ , A , B) ++ Γ’ ` C
----------------------------
→ ∆ / (Γ , B , A) ++ Γ’ ` C
exch Γ’ (iPCF-var x) = iPCF-var (cx-exch {∆ = Γ’} x)
exch Γ’ (iPCF-modal-var x) = iPCF-modal-var x
exch Γ’ (iPCF-app d d1) = iPCF-app (exch Γ’ d) (exch Γ’ d1)
exch {C = A => B} Γ’ (iPCF-lam d) = iPCF-lam (exch (Γ’ , A) d)
exch Γ’ (iPCF-prod d e) = iPCF-prod (exch Γ’ d) (exch Γ’ e)
exch Γ’ (iPCF-fst d) = iPCF-fst (exch Γ’ d)
exch Γ’ (iPCF-snd d) = iPCF-snd (exch Γ’ d)
exch Γ’ (iPCF-boxI d) = iPCF-boxI d
exch Γ’ (iPCF-boxE d e) = iPCF-boxE (exch Γ’ d) (exch Γ’ e)
exch Γ’ (iPCF-fix d) = iPCF-fix d
exch-modal : ∀ {∆ Γ A B C} (∆’ : Cx modal)
→ (∆ , A , B) ++ ∆’ / Γ ` C
–––––––––––––––
→ (∆ , B , A) ++ ∆’ / Γ ` C
exch-modal ∆’ (iPCF-var x) = iPCF-var x
exch-modal ∆’ (iPCF-modal-var x) =
iPCF-modal-var (subsetdef x (cx-exch {∆ = ∆’}))
exch-modal ∆’ (iPCF-app d e) =
iPCF-app (exch-modal ∆’ d) (exch-modal ∆’ e)
exch-modal ∆’ (iPCF-lam d) = iPCF-lam (exch-modal ∆’ d)
exch-modal ∆’ (iPCF-prod d e) =
iPCF-prod (exch-modal ∆’ d) (exch-modal ∆’ e)
exch-modal ∆’ (iPCF-fst d) = iPCF-fst (exch-modal ∆’ d)
exch-modal ∆’ (iPCF-snd d) = iPCF-snd (exch-modal ∆’ d)
exch-modal ∆’ (iPCF-boxI d) = iPCF-boxI (exch-modal ∆’ d)
exch-modal ∆’ (iPCF-boxE d e) =
iPCF-boxE (exch-modal ∆’ d) (exch-modal (∆’ , _) e)
exch-modal ∆’ (iPCF-fix d) = iPCF-fix (exch-modal ∆’ d)
weak : ∀ {∆ Γ Γ’ A}
→ ∆ / Γ ` A → Γ ⊆ Γ’
--------------------
→ (∆ / Γ’ ` A)
weak (iPCF-var x) f = iPCF-var (f x)
weak (iPCF-modal-var x) f = iPCF-modal-var x
weak (iPCF-app d e) f = iPCF-app (weak d f) (weak e f)
weak (iPCF-lam d) f = iPCF-lam (weak d (weakboth f))
weak (iPCF-prod d e) f = iPCF-prod (weak d f) (weak e f)
weak (iPCF-fst d) f = iPCF-fst (weak d f)
weak (iPCF-snd d) f = iPCF-snd (weak d f)
weak (iPCF-boxI d) f = iPCF-boxI d
weak (iPCF-boxE d e) f =
iPCF-boxE (weak d f) (weak e f)
weak (iPCF-fix d) f = iPCF-fix d
weak-modal : ∀ {∆ ∆’ Γ A}
→ ∆ / Γ ` A → ∆ ⊆ ∆’
--------------------
→ ∆’ / Γ ` A
weak-modal (iPCF-var p) x = iPCF-var p
weak-modal (iPCF-modal-var p) x = iPCF-modal-var (x p)
weak-modal (iPCF-app t u) x = iPCF-app (weak-modal t x)
(weak-modal u x)
weak-modal (iPCF-lam t) x = iPCF-lam (weak-modal t x)
weak-modal (iPCF-prod t u) x = iPCF-prod (weak-modal t x)
(weak-modal u x)
weak-modal (iPCF-fst t) x = iPCF-fst (weak-modal t x)
weak-modal (iPCF-snd t) x = iPCF-snd (weak-modal t x)
weak-modal (iPCF-boxI t) x = iPCF-boxI (weak-modal t x)
weak-modal (iPCF-boxE t u) x =
iPCF-boxE (weak-modal t x)
(weak-modal u (weakboth x))
weak-modal (iPCF-fix t) x = iPCF-fix (weak-modal t x)
-- Cut.
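-- Two admissible substitution principles: `cut` substitutes a derivation for an
-- ordinary hypothesis, while `cut-modal` substitutes for a modal hypothesis and
-- accordingly requires the substituted derivation to use an empty ordinary context.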
cut : ∀ {∆ Γ A B} → (Γ’ : Cx modal)
→ ∆ / Γ ` A → ∆ / Γ , A ++ Γ’ ` B
-------------------------
→ ∆ / Γ ++ Γ’ ` B
cut · d (iPCF-var top) = d
cut · d (iPCF-var (pop x)) = iPCF-var x
cut (Γ’ , B) d (iPCF-var top) = iPCF-var top
cut (Γ’ , A’) d (iPCF-var (pop x)) =
weak (cut Γ’ d (iPCF-var x)) (weakone subsetid)
cut Γ’ d (iPCF-modal-var p) = iPCF-modal-var p
cut Γ’ d (iPCF-app t u) = iPCF-app (cut Γ’ d t) (cut Γ’ d u)
cut Γ’ d (iPCF-lam e) = iPCF-lam (cut (Γ’ , _) d e)
cut Γ’ d (iPCF-prod t u) = iPCF-prod (cut Γ’ d t) (cut Γ’ d u)
cut Γ’ d (iPCF-fst e) = iPCF-fst (cut Γ’ d e)
cut Γ’ d (iPCF-snd e) = iPCF-snd (cut Γ’ d e)
cut Γ’ d (iPCF-boxI e) = iPCF-boxI e
cut Γ’ d (iPCF-boxE t u) =
iPCF-boxE (cut Γ’ d t)
(cut Γ’ (weak-modal d (weakone (subsetid))) u)
cut Γ’ d (iPCF-fix t) = iPCF-fix t
cut-modal : ∀ {∆ Γ A B} → (∆’ : Cx modal)
→ ∆ / · ` A → ∆ , A ++ ∆’ / Γ ` B
-------------------------
→ ∆ ++ ∆’ / Γ ` B
cut-modal ∆’ d (iPCF-var x) = iPCF-var x
cut-modal · d (iPCF-modal-var top) = weak d subsetempty
cut-modal · d (iPCF-modal-var (pop x)) = iPCF-modal-var x
cut-modal (∆’ , B) d (iPCF-modal-var top) = iPCF-modal-var top
cut-modal (∆’ , A’) d (iPCF-modal-var (pop x)) =
weak-modal (cut-modal ∆’ d (iPCF-modal-var x)) (weakone subsetid)
cut-modal ∆’ d (iPCF-app p q) =
iPCF-app (cut-modal ∆’ d p) (cut-modal ∆’ d q)
cut-modal ∆’ d (iPCF-lam e) = iPCF-lam (cut-modal ∆’ d e)
cut-modal ∆’ d (iPCF-prod p q) =
iPCF-prod (cut-modal ∆’ d p) (cut-modal ∆’ d q)
cut-modal ∆’ d (iPCF-fst e) = iPCF-fst (cut-modal ∆’ d e)
cut-modal ∆’ d (iPCF-snd e) = iPCF-snd (cut-modal ∆’ d e)
cut-modal ∆’ d (iPCF-boxI e) = iPCF-boxI (cut-modal ∆’ d e)
cut-modal ∆’ d (iPCF-boxE p q) =
iPCF-boxE (cut-modal ∆’ d p) (cut-modal (∆’ , _) d q)
cut-modal ∆’ d (iPCF-fix e) = iPCF-fix (cut-modal ∆’ d e)
A.3 iPCF2.agda
module iPCF2 where
infixl 0 _/_`_::_
open import Basics
-- Definition
data Judgement : Set where
  int : Judgement
  ext : Judgement
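-- Sequents are now indexed by a judgement: int for intensional derivations and
-- ext for extensional ones.  The function incl below embeds int into ext.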
data _/_`_::_ (∆ Γ : Cx modal) : Judgement → Ty modal → Set where

  iPCF-var : ∀ {J A}
           → A ∈ Γ
           ------------------
           → ∆ / Γ ` J :: A

  iPCF-modal-var : ∀ {J A}
                 → A ∈ ∆
                 ------------------
                 → ∆ / Γ ` J :: A

  iPCF-app : ∀ {J A B}
           → ∆ / Γ ` J :: A => B
           → ∆ / Γ ` J :: A
           ------------------------
           → ∆ / Γ ` J :: B

  iPCF-lam-ext : ∀ {A B}
               → ∆ / (Γ , A) ` ext :: B
               ---------------------------
               → ∆ / Γ ` ext :: A => B

  iPCF-lam-int : ∀ {J A B}
               → · / (· , A) ` J :: B
               -------------------------
               → ∆ / Γ ` int :: A => B

  iPCF-boxI : ∀ {J A}
            → ∆ / · ` int :: A
            ---------------------
            → ∆ / Γ ` J :: A

  iPCF-boxE : ∀ {J A C}
            → ∆ / Γ ` J :: A   → (∆ , A) / Γ ` J :: C
            --------------------------------------------
            → ∆ / Γ ` J :: C

  iPCF-fix : ∀ {J A}
           → · / (· , A) ` int :: A
           ---------------------------
           → ∆ / Γ ` J :: A
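-- A small example derivation (ours, not part of the original listing):
-- an intensional identity function at any type.
example-id : ∀ {A} → · / · ` int :: A => A
example-id = iPCF-lam-int {J = int} (iPCF-var top)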
-- Weakening and exchange.
exch : ∀ {∆ Γ J A B C} (Γ’ : Cx modal)
→ ∆ / (Γ , A , B) ++ Γ’ ` J :: C
------------------------------
→ ∆ / (Γ , B , A) ++ Γ’ ` J :: C
exch Γ’ (iPCF-var x) = iPCF-var (cx-exch {∆ = Γ’} x)
exch Γ’ (iPCF-modal-var x) = iPCF-modal-var x
exch Γ’ (iPCF-app d d1) = iPCF-app (exch Γ’ d) (exch Γ’ d1)
exch {C = A => B} Γ’ (iPCF-lam-ext d) = iPCF-lam-ext (exch (Γ’ , A) d)
exch Γ’ (iPCF-lam-int d) = iPCF-lam-int d
exch Γ’ (iPCF-boxI d) = iPCF-boxI d
exch Γ’ (iPCF-boxE d e) = iPCF-boxE (exch Γ’ d) (exch Γ’ e)
exch Γ’ (iPCF-fix f) = iPCF-fix f
exch-modal : ∀ {∆ Γ J A B C} (∆’ : Cx modal)
→ (∆ , A , B) ++ ∆’ / Γ ` J :: C
–––––––––––––––
→ (∆ , B , A) ++ ∆’ / Γ ` J :: C
exch-modal ∆’ (iPCF-var x) = iPCF-var x
exch-modal ∆’ (iPCF-modal-var x) =
iPCF-modal-var (subsetdef x (cx-exch {∆ = ∆’}))
exch-modal ∆’ (iPCF-app d e) =
iPCF-app (exch-modal ∆’ d) (exch-modal ∆’ e)
exch-modal ∆’ (iPCF-lam-ext d) = iPCF-lam-ext (exch-modal ∆’ d)
exch-modal ∆’ (iPCF-lam-int d) = iPCF-lam-int d
exch-modal ∆’ (iPCF-boxI d) = iPCF-boxI (exch-modal ∆’ d)
exch-modal ∆’ (iPCF-boxE d e) =
iPCF-boxE (exch-modal ∆’ d) (exch-modal (∆’ , _) e)
exch-modal ∆’ (iPCF-fix f) = iPCF-fix f
weak : ∀ {∆ Γ Γ’ J A}
→ ∆ / Γ ` J :: A → Γ ⊆ Γ’
--------------------
→ ∆ / Γ’ ` J :: A
weak (iPCF-var x) f = iPCF-var (f x)
weak (iPCF-modal-var x) f = iPCF-modal-var x
weak (iPCF-app d e) f = iPCF-app (weak d f) (weak e f)
weak (iPCF-lam-int d) f = iPCF-lam-int d
weak (iPCF-lam-ext d) f = iPCF-lam-ext (weak d (weakboth f))
weak (iPCF-boxI d) f = iPCF-boxI d
weak (iPCF-boxE d e) f =
iPCF-boxE (weak d f) (weak e f)
weak (iPCF-fix d) f = iPCF-fix d
weak-modal : ∀ {∆ ∆’ Γ J A}
→ ∆ / Γ ` J :: A → ∆ ⊆ ∆’
--------------------
→ ∆’ / Γ ` J :: A
weak-modal (iPCF-var p) x = iPCF-var p
weak-modal (iPCF-modal-var p) x = iPCF-modal-var (x p)
weak-modal (iPCF-app t u) x = iPCF-app (weak-modal t x)
(weak-modal u x)
weak-modal (iPCF-lam-int t) x = iPCF-lam-int t
weak-modal (iPCF-lam-ext t) x = iPCF-lam-ext (weak-modal t x)
weak-modal (iPCF-boxI t) x = iPCF-boxI (weak-modal t x)
weak-modal (iPCF-boxE t u) x =
iPCF-boxE (weak-modal t x)
(weak-modal u (weakboth x))
weak-modal (iPCF-fix f) x = iPCF-fix f
-- Including intensional into extensional.
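-- The only interesting case is iPCF-lam-int: its body lives in empty contexts,
-- so it is weakened back into the ambient ∆ and Γ before being repackaged
-- under the ext judgement.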
incl : ∀ {∆ Γ A}
→ ∆ / Γ ` int :: A
-----------------
→ ∆ / Γ ` ext :: A
incl (iPCF-var x) = iPCF-var x
incl (iPCF-modal-var x) = iPCF-modal-var x
incl (iPCF-app d e) = iPCF-app (incl d) (incl e)
incl (iPCF-lam-int {int} d) =
iPCF-lam-ext (weak (weak-modal (incl d) subsetempty) (weakboth subsetempty))
incl (iPCF-lam-int {ext} d) =
iPCF-lam-ext (weak (weak-modal d subsetempty) (weakboth subsetempty))
incl (iPCF-boxI d) = iPCF-boxI d
incl (iPCF-boxE d e) = iPCF-boxE (incl d) (incl e)
incl (iPCF-fix f) = iPCF-fix f
incl-either-ext : ∀ {∆ J Γ A}
→ ∆ / Γ ` J :: A
-----------------
→ ∆ / Γ ` ext :: A
incl-either-ext {J = int} d = incl d
incl-either-ext {J = ext} d = d
incl-either-int : ∀ {∆ J Γ A}
→ ∆ / Γ ` int :: A
-----------------
→ ∆ / Γ ` J :: A
incl-either-int {J = int} d = d
incl-either-int {J = ext} d = incl d
-- Cut.
cut-ext : ∀ {∆ Γ J A B} → (Γ’ : Cx modal)
→ ∆ / Γ ` ext :: A
→ ∆ / Γ , A ++ Γ’ ` J :: B
––––––––––––––––––––––––
→ ∆ / Γ ++ Γ’ ` ext :: B
cut-ext · d (iPCF-var top) = d
cut-ext · d (iPCF-var (pop x)) = iPCF-var x
cut-ext (Γ’ , B) d (iPCF-var top) = iPCF-var top
cut-ext (Γ’ , A’) d (iPCF-var (pop x)) =
weak (cut-ext {J = ext} Γ’ d (iPCF-var x)) (weakone subsetid)
cut-ext Γ’ d (iPCF-modal-var p) = iPCF-modal-var p
cut-ext Γ’ d (iPCF-app t u) = iPCF-app (cut-ext Γ’ d t) (cut-ext Γ’ d u)
cut-ext Γ’ d (iPCF-lam-int e) = incl (iPCF-lam-int e)
cut-ext Γ’ d (iPCF-lam-ext e) = iPCF-lam-ext (cut-ext (Γ’ , _) d e)
cut-ext Γ’ d (iPCF-boxI e) = iPCF-boxI e
cut-ext Γ’ d (iPCF-boxE t u) =
iPCF-boxE (cut-ext Γ’ d t)
(cut-ext Γ’ (weak-modal d (weakone (subsetid))) u)
cut-ext Γ’ d (iPCF-fix f) = iPCF-fix f
cut-int : ∀ {∆ Γ J A B} → (Γ’ : Cx modal)
→ ∆ / Γ ` int :: A
→ ∆ / Γ , A ++ Γ’ ` J :: B
––––––––––––––––––––––––
→ ∆ / Γ ++ Γ’ ` J :: B
cut-int · d (iPCF-var top) = incl-either-int d
cut-int · d (iPCF-var (pop x)) = iPCF-var x
cut-int (Γ’ , B) d (iPCF-var top) = iPCF-var top
cut-int (Γ’ , A’) d (iPCF-var (pop x)) =
weak (cut-int Γ’ d (iPCF-var x)) (weakone subsetid)
cut-int Γ’ d (iPCF-modal-var p) = iPCF-modal-var p
cut-int Γ’ d (iPCF-app t u) = iPCF-app (cut-int Γ’ d t) (cut-int Γ’ d u)
cut-int Γ’ d (iPCF-lam-int e) = iPCF-lam-int e
cut-int Γ’ d (iPCF-lam-ext e) = iPCF-lam-ext (cut-ext (Γ’ , _) (incl d) e)
cut-int Γ’ d (iPCF-boxI e) = iPCF-boxI e
cut-int Γ’ d (iPCF-boxE t u) =
iPCF-boxE (cut-int Γ’ d t)
(cut-int Γ’ (weak-modal d (weakone (subsetid))) u)
cut-int Γ’ d (iPCF-fix e) = iPCF-fix e
cut-modal : ∀ {∆ Γ J A B} → (∆’ : Cx modal)
→ ∆ / · ` int :: A
→ ∆ , A ++ ∆’ / Γ ` J :: B
–––––––––––––––––––––––––
→ ∆ ++ ∆’ / Γ ` J :: B
cut-modal ∆’ d (iPCF-var x) = iPCF-var x
cut-modal · d (iPCF-modal-var top) = incl-either-int (weak d subsetempty)
cut-modal · d (iPCF-modal-var (pop x)) = iPCF-modal-var x
cut-modal (∆’ , B) d (iPCF-modal-var top) = iPCF-modal-var top
cut-modal (∆’ , A’) d (iPCF-modal-var (pop x)) =
weak-modal (cut-modal ∆’ d (iPCF-modal-var x)) (weakone subsetid)
cut-modal ∆’ d (iPCF-app p q) =
iPCF-app (cut-modal ∆’ d p) (cut-modal ∆’ d q)
cut-modal ∆’ d (iPCF-lam-ext e) = iPCF-lam-ext (cut-modal ∆’ d e)
cut-modal ∆’ d (iPCF-lam-int e) = iPCF-lam-int e
cut-modal ∆’ d (iPCF-boxI e) = iPCF-boxI (cut-modal ∆’ d e)
cut-modal ∆’ d (iPCF-boxE p q) =
iPCF-boxE (cut-modal ∆’ d p) (cut-modal (∆’ , _) d q)
cut-modal ∆’ d (iPCF-fix f) = iPCF-fix f
Bibliography
Samson Abramsky. The Lazy Lambda Calculus. In Research Topics in Functional
Programming, pages 65–117. Addison Wesley, 1990. URL https://www.cs.ox.ac.
uk/files/293/lazy.pdf.
Samson Abramsky. What are the Fundamental Structures of Concurrency? Electronic
Notes in Theoretical Computer Science, 162:37–41, 2006. ISSN 15710661. doi:
10.1016/j.entcs.2005.12.075. URL http://linkinghub.elsevier.com/retrieve/
pii/S1571066106004105.
Samson Abramsky. Notes on Intensional Recursion, 2012.
Samson Abramsky. Intensionality, Definability and Computation. In Alexandru Baltag and Sonja Smets, editors, Johan van Benthem on Logic and Information Dynamics, pages 121–142. Springer-Verlag, 2014. doi: 10.1007/978-3-319-06025-5_5.
URL https://dx.doi.org/10.1007/978-3-319-06025-5_5.
Samson Abramsky and Achim Jung. Domain Theory. Handbook of Logic in Computer Science, 3:1–168, 1994.
URL https://www.cs.bham.ac.uk/$\sim$axj/
pub/papers/handy1.pdf.
Samson Abramsky and Guy McCusker.
Linearity, Sharing and State: a fully
abstract game semantics for Idealized Algol with active expressions.
Elec-
tronic Notes in Theoretical Computer Science, 3:2–14, 1996. ISSN 15710661.
doi: 10.1016/S1571-0661(05)80398-6. URL http://linkinghub.elsevier.com/
retrieve/pii/S1571066105803986.
Samson Abramsky and Nikos Tzevelekos. Introduction to Categories and Categorical
Logic. In Bob Coecke, editor, New Structures for Physics, pages 3–94. SpringerVerlag, 2011. doi: 10.1007/978-3-642-12821-9_1. URL http://arxiv.org/abs/
1102.1313.
Samson Abramsky, Radha Jagadeesan, and Pasquale Malacaria. Full Abstraction for
PCF. Information and Computation, 163:409–470, 1996.
Samson Abramsky, Kohei Honda, and G McCusker. A fully abstract game semantics for general references. In Proceedings of theThirteenth Annual IEEE Symposium on Logic in Computer Science. IEEE Comput. Soc, 1998. ISBN 0-8186-85069. doi: 10.1109/LICS.1998.705669. URL http://ieeexplore.ieee.org/lpdocs/
epic03/wrapper.htm?arnumber=705669.
Leonard M. Adleman. An Abstract Theory of Computer Viruses. In Advances
in Cryptology - CRYPTO’ 88, volume 403 of Lecture Notes in Computer Science, pages 354–374. Springer New York, New York, NY, 1990. ISBN 3-54097196-3. doi: 10.1007/0-387-34799-2_28. URL https://dx.doi.org/10.1007/
0-387-34799-2_28.
Thorsten Altenkirch, Nils Anders Danielsson, and Nicolai Kraus. Partiality, Revisited:
The Partiality Monad as a Quotient Inductive-Inductive Type. In Proceedings of the
20th International Conference on Foundations of Software Science and Computation Structures (FoSSaCS), oct 2017. URL http://arxiv.org/abs/1610.09254.
Steve Awodey. Category Theory. Oxford Logic Guides. Oxford University Press,
2010. ISBN 9780191612558. URL https://books.google.co.uk/books?id=zLs8BAAAQBAJ.
Andrew Graham Barber. Dual Intuitionistic Linear Logic. Technical report,
ECS-LFCS-96-347, Laboratory for Foundations of Computer Science, University of Edinburgh, 1996.
URL http://www.lfcs.inf.ed.ac.uk/reports/96/
ECS-LFCS-96-347/.
Henk Barendregt. Lambda Calculus: Its Syntax and Semantics. North-Holland, Amsterdam, 1984. ISBN 978-0444875082.
Henk Barendregt.
Self-Interpretation in Lambda Calculus.
Journal of Func-
tional Programming, 1(2):229–233, 1991. URL https://dx.doi.org/10.1017/
S0956796800020062.
Andrej Bauer. Definability and extensionality of the modulus of continuity functional. Mathematics and Computation (blog), 2011. URL http://math.andrej.com/2011/07/27/definability-and-extensionality-of-the-modulus-of-continuity-functional/.
Alan Bawden.
Quasiquotation in LISP.
In Proceedings of the 6th ACM SIG-
PLAN Workshop on Partial Evaluation and Semantics-Based Program Manipulation (PEPM ’99), 1999. URL http://repository.readscheme.org/ftp/papers/
pepm99/bawden.pdf.
Michael J. Beeson. Foundations of Constructive Mathematics. Springer Berlin Heidelberg, 1985. ISBN 978-3-642-68954-3. doi: 10.1007/978-3-642-68952-9. URL
https://dx.doi.org/10.1007/978-3-642-68952-9.
Martin Berger.
Foundations of meta-programming, 2016.
URL http://users.
sussex.ac.uk/$\sim$mfb21/publications/mp-slides/slides.pdf.
Gavin M. Bierman. Program equivalence in a linear functional language. Journal
of Functional Programming, 10(2):167–190, 2000. ISSN 09567968. doi: 10.1017/
S0956796899003639.
Gavin M. Bierman and Valeria de Paiva. On an Intuitionistic Modal Logic. Studia
Logica, 65(3):383–416, 2000. doi: 10.1023/A:1005291931660. URL https://dx.
doi.org/10.1023/A:1005291931660.
Lars Birkedal. Developing theories of types and computability via realizability. Electronic Notes in Theoretical Computer Science, 34:2, 2000. ISSN 15710661. doi:
10.1016/S1571-0661(05)80642-5. URL http://cs.au.dk/$\sim$birke/papers/
devttc.pdf.
Errett Bishop. Foundations of Constructive Analysis. McGraw-Hill, 1967.
B Bloom and J G Riecke. LCF Should Be Lifted. 1989.
Manuel Blum. On the size of machines. Information and Control, 11(3):257–265,
sep 1967. ISSN 00199958. doi: 10.1016/S0019-9958(67)90546-3. URL http://
linkinghub.elsevier.com/retrieve/pii/S0019995867905463.
Achim Blumensath and Viktor Winschel. A Coalgebraic Framework for Games in
Economics. 2013.
Guillaume Bonfante, Matthieu Kaczmarek, and Jean-Yves Marion. Toward an Abstract Computer Virology. In Dan Van Hung and Martin Wirsing, editors, Theoretical Aspects of Computing – ICTAC 2005: Second International Colloquium,
Hanoi, Vietnam, October 17-21, 2005. Proceedings, volume 3722 of Lecture Notes
in Computer Science, pages 579–593. Springer Berlin Heidelberg, 2005. ISBN
3540291075. doi: 10.1007/11560647_38. URL http://link.springer.com/10.
1007/11560647_38.
Guillaume Bonfante, Matthieu Kaczmarek, and J.-Y. Marion. On Abstract Computer
Virology from a Recursion Theoretic Perspective. Journal in Computer Virology,
1(3):45–54, 2006. ISSN 1772-9890. doi: 10.1007/s11416-005-0007-4. URL http:
//link.springer.com/10.1007/s11416-005-0007-4.
Guillaume Bonfante, Matthieu Kaczmarek, and Jean-Yves Marion. A Classification
of Viruses Through Recursion Theorems. In S Barry Cooper, Benedikt Löwe, and
Andrea Sorbi, editors, Computation and Logic in the Real World: Third Conference
on Computability in Europe, CiE 2007, Siena, Italy, June 18-23, 2007. Proceedings,
volume 4497 of Lecture Notes in Computer Science, pages 73–82. Springer Berlin
Heidelberg, Berlin, Heidelberg, 2007. doi: 10.1007/978-3-540-73001-9_8. URL
http://link.springer.com/10.1007/978-3-540-73001-9_8.
George S. Boolos. The Logic of Provability. Cambridge University Press, Cambridge,
1994. ISBN 9780511625183. doi: 10.1017/CBO9780511625183. URL https://dx.
doi.org/10.1017/CBO9780511625183.
Ana Bove and Venanzio Capretta. Modelling general recursion in type theory. Mathematical Structures in Computer Science, 15(4):671–708, aug 2005.
Torben Braüner. The Girard Translation Extended with Recursion. In Leszek Pacholski and Jerzy Tiuryn, editors, Computer Science Logic, 8th International Workshop,
CSL ’94, Kazimierz, Poland, September 25-30, 1994, Selected Papers, number Lecture Notes in Computer Science 933, pages 31–45, 1995. ISBN 3-540-60017-5. doi:
10.1007/BFb0022245.
Torben Braüner. A general adequacy result for a linear functional language. Theoretical Computer Science, 177(1):27–58, 1997. ISSN 03043975. doi: 10.1016/
S0304-3975(96)00233-2.
Venanzio Capretta. General recursion via coinductive types. Logical Methods in
Computer Science, 1(2):1–28, jul 2005. ISSN 18605974. doi: 10.2168/LMCS-1(2:
1)2005. URL http://www.lmcs-online.org/ojs/viewarticle.php?id=55.
John Case and Samuel E. Moelius. Properties Complementary to Program Self-reference. In Luděk Kučera and Antonín Kučera, editors, Proceedings of the 32nd International Symposium on Mathematical Foundations of Computer Science 2007 (MFCS’07), volume 4708 of Lecture Notes in Computer Science, pages 253–263, Berlin, Heidelberg, 2007. Springer Berlin Heidelberg. doi: 10.1007/978-3-540-74456-6_24. URL https://dx.doi.org/10.1007/978-3-540-74456-6_24.
John Case and Samuel E. Moelius. Characterizing programming systems allowing
program self-reference. Theory of Computing Systems, 45(4):756–772, 2009a. ISSN
14324350. doi: 10.1007/s00224-009-9168-8. URL https://dx.doi.org/10.1007/
s00224-009-9168-8.
John Case and Samuel E. Moelius. Independence Results for n-Ary Recursion Theorems. In Miroslaw Kutylowski, Witold Charatonik, and Maciej Gebala, editors, Fundamentals of Computation Theory: Proceedings of the 17th International
Symposium, FCT 2009, Wrocław, Poland, September 2-4, 2009, volume 5699 of
Lecture Notes in Computer Science, pages 38–49. Springer, 2009b. ISBN 978-3642-03408-4. doi: 10.1007/978-3-642-03409-1_5. URL https://dx.doi.org/10.
1007/978-3-642-03409-1_5.
John Case and Samuel E. Moelius. Program Self-Reference in Constructive Scott Subdomains. Theory of Computing Systems, 51(1):22–49, 2012. ISSN 14324350. doi: 10.1007/s00224-011-9372-1. URL https://dx.doi.org/10.1007/s00224-011-9372-1.
J. R. B. Cockett and Pieter J. W. Hofstra. Introduction to Turing categories. Annals
of Pure and Applied Logic, 156(2-3):183–209, 2008. doi: 10.1016/j.apal.2008.04.005.
URL http://dx.doi.org/10.1016/j.apal.2008.04.005.
J. R. B. Cockett and Pieter J. W. Hofstra. Categorical simulations. Journal of Pure
and Applied Algebra, 214(10):1835–1853, 2010. ISSN 00224049. doi: 10.1016/j.
jpaa.2009.12.028. URL http://dx.doi.org/10.1016/j.jpaa.2009.12.028.
Fred Cohen. Computational aspects of computer viruses. Computers and Security, 8
(4):325–344, 1989. ISSN 01674048. doi: 10.1016/0167-4048(89)90094-1.
Robert L. Constable and Scott F. Smith. Computational foundations of basic recursive function theory. Theoretical Computer Science, 121(1-2):89–112, dec 1993.
ISSN 03043975. doi: 10.1016/0304-3975(93)90085-8. URL https://dx.doi.org/
10.1016/0304-3975(93)90085-8.
Roy L. Crole. Categories for Types. Cambridge University Press, 1993. ISBN 0 521
45701 7.
Djordje Čubrić, Peter Dybjer, and Philip J. Scott.
Normalization and the
Yoneda embedding. Mathematical Structures in Computer Science, 8(2):153–192,
1998. doi: 10.1017/s0960129597002508. URL https://dx.doi.org/10.1017/
s0960129597002508.
Nigel Cutland. Computability: An Introduction to Recursive Function Theory. Cambridge University Press, 1980. ISBN 9780521294652.
Olivier Danvy and Karoline Malmkjaer. Intensions and extensions in a reflective
tower. In Proceedings of the 1988 ACM conference on LISP and functional programming (LFP ’88), pages 327–341, New York, New York, USA, 1988. ACM
Press. ISBN 089791273X. doi: 10.1145/62678.62725. URL https://dx.doi.org/
10.1145/62678.62725.
Rowan Davies. A Temporal-Logic Approach to Binding-Time Analysis. Technical
report, BRICS Report Series RS-95-51, 1995.
Rowan Davies. A temporal-logic approach to binding-time analysis. Proceedings 11th
Annual IEEE Symposium on Logic in Computer Science, pages 184–195, 1996.
ISSN 1043-6871. doi: 10.1109/LICS.1996.561317.
Rowan Davies. A Temporal Logic Approach to Binding-Time Analysis. Journal of
the ACM, 64(1):1–45, mar 2017. ISSN 00045411. doi: 10.1145/3011069. URL
http://dx.doi.org/10.1145/3011069.
Rowan Davies and Frank Pfenning. A modal analysis of staged computation. In Proceedings of the 23rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL’96), pages 258–270, 1996. ISBN 0897917693. doi:
10.1145/382780.382785. URL https://dx.doi.org/10.1145/382780.382785.
Rowan Davies and Frank Pfenning. A modal analysis of staged computation. Journal
of the ACM, 48(3):555–604, 2001. ISSN 00045411. doi: 10.1145/382780.382785.
URL https://dx.doi.org/10.1145/382780.382785.
François-Nicola Demers and Jacques Malenfant. Reflection in logic, functional and
object-oriented programming: a Short Comparative Study. In Proceedings of the
IJCAI ’95 Workshop on Reflection and Metalevel Architectures and their Applications in AI, pages 29–38, 1995.
Apostolos Doxiadis. Uncle Petros and Goldbach’s Conjecture. Faber and Faber, 2001.
Samuel Eilenberg and G. Max Kelly. Closed Categories. In Proceedings of the
Conference on Categorical Algebra, pages 421–562. Springer Berlin Heidelberg,
Berlin, Heidelberg, 1966. doi: 10.1007/978-3-642-99902-4_22. URL http://www.
springerlink.com/index/10.1007/978-3-642-99902-4_22.
A. P. Ershov. On the partial computation principle. Information Processing Letters,
6(2):38–41, apr 1977. ISSN 00200190. doi: 10.1016/0020-0190(77)90078-3. URL
https://dx.doi.org/10.1016/0020-0190(77)90078-3.
A. P. Ershov. Mixed computation: potential applications and problems for study.
Theoretical Computer Science, 18(1):41–67, apr 1982. ISSN 03043975. doi: 10.1016/
0304-3975(82)90111-6. URL http://linkinghub.elsevier.com/retrieve/pii/
0304397582901116.
Martín Hötzel Escardó. A metric model of PCF, 1999. URL http://cs.bham.ac.
uk/$\sim$mhe/papers/metricpcf.pdf.
Melvin Fitting. Intensional Logic. In Edward N Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, summer ’15
edition, 2015. URL https://plato.stanford.edu/archives/sum2015/entries/
logic-intensional/.
Daniel P. Friedman and Mitchell Wand. Reification: Reflection without metaphysics.
In Proceedings of the 1984 ACM Symposium on LISP and functional programming
(LFP ’84), pages 348–355, New York, New York, USA, 1984. ACM Press. ISBN
0897911423. doi: 10.1145/800055.802051. URL https://dx.doi.org/10.1145/
800055.802051.
Harvey Friedman. FOM: renaming recursion theory, 1998. URL http://www.cs.
nyu.edu/pipermail/fom/1998-August/002017.html.
Yoshihiko Futamura. Partial evaluation of computation process–an approach to a
compiler-compiler. Higher-Order and Symbolic Computation, 12(4):381–391, 1999.
ISSN 13883690. doi: 10.1023/A:1010095604496. URL http://link.springer.
com/article/10.1023/A:1010095604496.
Murdoch J. Gabbay and Aleksandar Nanevski. Denotation of contextual modal type
theory (CMTT): Syntax and meta-programming. Journal of Applied Logic, 11
(1):1–29, mar 2013. ISSN 15708683. doi: 10.1016/j.jal.2012.07.002. URL http:
//dx.doi.org/10.1016/j.jal.2012.07.002.
R Gandy. The Confluence of Ideas in 1936. In A Half-century Survey on The Universal
Turing Machine, pages 55–111, New York, NY, USA, 1988. Oxford University Press,
Inc. ISBN 0-19-853741-7. URL http://dl.acm.org/citation.cfm?id=57249.
57252.
R. O. Gandy. On the axiom of extensionality – Part I. The Journal of Symbolic Logic, 21(01):36–48, mar 1956. ISSN 0022-4812. doi: 10.2307/2268484. URL https://www.cambridge.org/core/product/identifier/S0022481200085352/type/journal_article.
R. O. Gandy. On the axiom of extensionality, Part II. The Journal of Symbolic Logic, 24(04):287–300, dec 1959. ISSN 0022-4812. doi: 10.2307/2963897. URL https://www.cambridge.org/core/product/identifier/S0022481200123266/type/journal_article.
Jean-Yves Girard.
Interprétation fonctionelle et élimination des coupures de
l’arithmétique d’ordre supérieur. PhD thesis, Université Paris VII, 1972.
Jean-Yves Girard. The Blind Spot: Lectures on Logic. European Mathematical Society, 2011. ISBN 978-3037190883.
Jean-Yves Girard, Yves Lafont, and Paul Taylor. Proofs and Types. Cambridge
University Press, 1989.
Paul Graham. On LISP: Advanced Techniques for Common LISP. Prentice Hall,
1993. ISBN 0130305529.
Carl A. Gunter. Semantics of programming languages: structures and techniques.
Foundations of Computing. The MIT Press, 1992.
Torben Amtoft Hansen, Thomas Nikolajsen, Jesper Larsson Träff, and Neil D. Jones.
Experiments with Implementations of Two Theoretical Constructions. In Proceedings of the Symposium on Logical Foundations of Computer Science: Logic at Botik
’89, pages 119–133, London, UK, 1989. Springer-Verlag. ISBN 3-540-51237-3. URL
http://dl.acm.org/citation.cfm?id=646798.759646.
Robert Harper. Old Neglected Theorems Are Still Theorems, 2014. URL https://existentialtype.wordpress.com/2014/03/20/old-neglected-theorems-are-still-theorems/.
Douglas R Hofstadter. Godel, Escher, Bach: An Eternal Golden Braid. Basic Books,
Inc., New York, NY, USA, 1979. ISBN 0465026850.
Hagen Huwig and Axel Poigné. A note on inconsistencies caused by fixpoints in a
cartesian closed category. Theoretical Computer Science, 73(1):101–112, jun 1990.
ISSN 03043975. doi: 10.1016/0304-3975(90)90165-E. URL http://linkinghub.
elsevier.com/retrieve/pii/030439759090165E.
J. M. E. Hyland.
The Effective Topos.
In Anne S. Troelstra and Dick van
Dalen, editors, The L. E. J. Brouwer Centenary Symposium, pages 165–216.
North-Holland, 1982. URL https://webdpmms.maths.cam.ac.uk/$\sim$martin/
Research/Oldpapers/hyland-effectivetopos.pdf.
J.M.E. Hyland. The Forgotten Turing. In S. Barry Cooper and Andrew Hodges,
editors, The Once and Future Turing, pages 20–33. Cambridge University Press,
Cambridge, 2016. doi: 10.1017/CBO9780511863196.005. URL http://ebooks.
cambridge.org/ref/id/CBO9780511863196A012.
J.M.E. Hyland and C.-H.L. Ong. On Full Abstraction for PCF: I, II, and III. Information and Computation, 163(2):285–408, 2000. doi: 10.1006/inco.2000.2917.
URL http://linkinghub.elsevier.com/retrieve/pii/S0890540100929171.
Martin Hyland and John Power. The Category Theoretic Understanding of Universal
Algebra: Lawvere Theories and Monads. Electronic Notes in Theoretical Computer
Science, 172:437–458, 2007. ISSN 15710661. doi: 10.1016/j.entcs.2007.02.019.
Bart Jacobs. Categorical Logic and Type Theory. Number 141 in Studies in Logic
and the Foundations of Mathematics. North Holland, Amsterdam, 1999.
Barry Jay. Programs as Data Structures in λSF-Calculus. Electronic Notes in Theoretical Computer Science, 325:221–236, 2016. ISSN 15710661. doi: 10.1016/j.entcs.
2016.09.040. URL http://dx.doi.org/10.1016/j.entcs.2016.09.040.
Barry Jay and Thomas Given-Wilson. A Combinatory Account of Internal Structure.
The Journal of Symbolic Logic, 76(3):807–826, 2011. URL http://www.jstor.
org/stable/23041848.
Barry Jay and Jens Palsberg. Typed self-interpretation by pattern matching. ACM
SIGPLAN Notices, 46(9):247, sep 2011. ISSN 03621340. doi: 10.1145/2034574.
2034808. URL http://dl.acm.org/citation.cfm?doid=2034574.2034808.
Neil D. Jones. Computer Implementation and Applications of Kleene’s S-M-N and
Recursion Theorems. In Yiannis N Moschovakis, editor, Logic from Computer
Science: Proceedings of a Workshop Held November 13-17, 1989 [at MSRI], volume 21 of Mathematical Sciences Research Institute Publications, pages 243–263.
Springer New York, 1992. doi: 10.1007/978-1-4612-2822-6_9.
//dx.doi.org/10.1007/978-1-4612-2822-6_9.
URL https:
Neil D. Jones. An introduction to partial evaluation. ACM Computing Surveys,
28(3):480–503, 1996. ISSN 03600300. doi: 10.1145/243439.243447. URL http:
//doi.acm.org/10.1145/243439.243447.
Neil D. Jones. Computability and Complexity: From a Programming Perspective.
Foundations of Computing. MIT Press, 1997.
Neil D. Jones. A Swiss Pocket Knife for Computability. Electronic Proceedings in
Theoretical Computer Science, 129:1–17, sep 2013. ISSN 2075-2180. doi: 10.4204/
EPTCS.129.1. URL http://arxiv.org/abs/1309.5128v1.
Neil D. Jones, Carsten K. Gomard, and Peter Sestoft. Partial Evaluation and Automatic Program Generation. Prentice Hall International, 1993. ISBN 0-13-020249-5.
Akira Kanda. Recursion theorems and effective domains. Annals of Pure and Applied
Logic, 38(3):289–300, 1988. ISSN 01680072. doi: 10.1016/0168-0072(88)90029-2.
G. A. Kavvos. The Many Worlds of Modal λ-calculi: I. Curry-Howard for Necessity,
Possibility and Time. CoRR, 2016.
G. A. Kavvos. On the Semantics of Intensionality. In Javier Esparza and Andrzej S. Murawski, editors, Proceedings of the 20th International Conference on
Foundations of Software Science and Computation Structures (FoSSaCS), volume 10203 of Lecture Notes in Computer Science, pages 550–566. Springer-Verlag
Berlin Heidelberg, 2017a. doi: 10.1007/978-3-662-54458-7_32.
//dx.doi.org/10.1007/978-3-662-54458-7_32.
URL https:
G. A. Kavvos. Dual-context calculi for modal logic. In 2017 32nd Annual ACM/IEEE
Symposium on Logic in Computer Science (LICS). IEEE, 2017b. ISBN 978-1-50903018-7. doi: 10.1109/LICS.2017.8005089.
G. A. Kavvos. Dual-context calculi for modal logic (technical report). Technical report, University of Oxford, 2017c. URL http://www.lambdabetaeta.eu/papers/
dualcalc.pdf.
G. A. Kavvos. Intensionality, Intensional Recursion, and the Gödel-Löb axiom. In
Proceedings of 7th Workshop on Intuitionistic Modal Logic and Applications (IMLA
2017), 2017d.
Oleg Kiselyov. In’yō to ichiokubai kōsokuka no monogatari: Kansū puroguramingu ni
yoru Kleene daini saiki teiri no shōmei [A story of quotation and 108 -fold speed-up:
A proof of Kleene’s second recursion theorem by functional programming]. In The
17th Programming and Programming Language Workshop (PPL 2015), 2015. URL
http://okmij.org/ftp/Computation/Kleene.pdf.
S. C. Kleene. λ-definability and recursiveness. Duke Mathematical Journal, 2(2):340–
353, 1936. ISSN 0012-7094. doi: 10.1215/S0012-7094-36-00227-2. URL https:
//dx.doi.org/10.1215/S0012-7094-36-00227-2.
Stephen C. Kleene. On notation for ordinal numbers. The Journal of Symbolic
Logic, 3(04):150–155, 1938. ISSN 0022-4812. doi: 10.2307/2267778. URL https:
//dx.doi.org/10.2307/2267778.
Stephen C. Kleene. Introduction to Metamathematics. North-Holland, Amsterdam,
1952.
Stephen C. Kleene. Origins of Recursive Function Theory. IEEE Annals of the History
of Computing, 3(1):52–67, 1981. ISSN 1058-6180. doi: 10.1109/MAHC.1981.10004.
URL https://dx.doi.org/10.1109/MAHC.1981.10004.
Satoshi Kobayashi. Monad as modality. Theoretical Computer Science, 175(1):29–74,
1997. doi: 10.1016/S0304-3975(96)00169-7.
Joachim Lambek and Philip J. Scott. Introduction to Higher-Order Categorical Logic.
Cambridge University Press, 1988. ISBN 9780521356534.
Elaine Landry, editor. Categories for the Working Philosopher. Oxford University
Press, 2017. ISBN 9780198748991.
J.-L. Lassez, V.L. Nguyen, and E.a. Sonenberg. Fixed point theorems and semantics:
a folk tale. Information Processing Letters, 14(3):112–116, 1982. ISSN 00200190.
doi: 10.1016/0020-0190(82)90065-5.
F. William Lawvere. Diagonal arguments and cartesian closed categories. In Category Theory, Homology Theory and their Applications II, number 15, pages
134–145. Springer Berlin Heidelberg, 1969. ISBN 978-3-540-04611-0. doi: 10.
1007/BFb0080769. URL http://link.springer.com/content/pdf/10.1007/
BFb0080769.pdf.
F. William Lawvere. Diagonal arguments and cartesian closed categories. Reprints
in Theory and Applications of Categories, 15:1–13, 2006. URL http://www.tac.
mta.ca/tac/reprints/articles/15/tr15abs.html.
Paul Blain Levy. Call-by-Push-Value: A Functional-Imperative Synthesis. Semantic
Structures in Computation. Springer, 2003. ISBN 978-94-010-3752-5. doi: 10.1007/
978-94-007-0954-6.
Harry R Lewis and Christos H Papadimitriou. Elements of the Theory of Computation. Prentice Hall PTR, Upper Saddle River, NJ, USA, 2nd edition, 1997. ISBN
0132624788.
Tadeusz Litak. Constructive Modalities with Provability Smack. In Guram Bezhanishvili, editor, Leo Esakia on duality in modal and intuitionistic logics, pages 179–
208. Springer, 2014. doi: 10.1007/978-94-017-8860-1_8. URL https://www8.cs.
fau.de/ext/litak/esakiaarxivfull.pdf.
John R. Longley. Realizability Toposes and Language Semantics. PhD thesis, University of Edinburgh. College of Science and Engineering. School of Informatics.,
1995. URL http://www.lfcs.inf.ed.ac.uk/reports/95/ECS-LFCS-95-332/.
John R. Longley. Matching typed and untyped realizability. Electronic Notes in Theoretical Computer Science, 23(1):74–100, 1999a. ISSN 15710661. doi: 10.1016/
S1571-0661(04)00105-7.
URL http://linkinghub.elsevier.com/retrieve/
pii/S1571066104001057.
John R. Longley. Unifying Typed and Untyped Realizability, 1999b. URL http:
//homepages.inf.ed.ac.uk/jrl/Research/unifying.txt.
John R. Longley. Notions of computability at higher types I. In Logic Colloquium
2000: Proceedings of the Annual European Summer Meeting of the Association for
Symbolic Logic, held in Paris, France, July 23-31, 2000, volume 19 of Lecture Notes
in Logic, pages 32–142. A. K. Peters, 2005.
John R. Longley and Dag Normann. Higher-Order Computability. Theory and
Applications of Computability. Springer Berlin Heidelberg, Berlin, Heidelberg,
2015. ISBN 978-3-662-47991-9. doi: 10.1007/978-3-662-47992-6. URL https:
//dx.doi.org/10.1007/978-3-662-47992-6.
John R. Longley and Alex K. Simpson. A uniform approach to domain theory in
realizability models. Mathematical Structures in Computer Science, 7:469–505,
1997. doi: 10.1017/S0960129597002387. URL https://dx.doi.org/10.1017/
S0960129597002387.
Giuseppe Longo and Eugenio Moggi. Constructive natural deduction and its ‘omegaset’ interpretation. Mathematical Structures in Computer Science, 1(02):215, 1991.
ISSN 0960-1295. doi: 10.1017/S0960129500001298. URL http://www.journals.
cambridge.org/abstract_S0960129500001298.
Saunders Mac Lane. Categories for the Working Mathematician, volume 5 of Graduate
Texts in Mathematics. Springer New York, New York, NY, 1978. ISBN 978-1-44193123-8. doi: 10.1007/978-1-4757-4721-8. URL http://link.springer.com/10.
1007/978-1-4757-4721-8.
Michael Machtey and Paul Young. An introduction to the general theory of algorithms.
Theory of Computation Series. Elsevier North-Holland, New York, 1978. ISBN
9780444002266. URL http://books.google.co.uk/books?id=qncEAQAAIAAJ.
Michael Machtey, Karl Winklmann, and Paul Young. Simple Gödel Numberings,
Isomorphisms, and Programming Properties. SIAM Journal on Computing, 7(1):
39–60, feb 1978. ISSN 0097-5397. doi: 10.1137/0207003. URL https://dx.doi.
org/10.1137/0207003.
John Maraist, Martin Odersky, David N Turner, and Philip Wadler. Call-by-name,
call-by-value, call-by-need, and the linear lambda calculus. Electronic Notes in
Theoretical Computer Science, 1:370–392, 1995.
Jean-Yves Marion. From Turing machines to computer viruses. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 370
(1971):3319–3339, 2012. ISSN 1364-503X. doi: 10.1098/rsta.2011.0332. URL http:
//rsta.royalsocietypublishing.org/cgi/doi/10.1098/rsta.2011.0332.
Per Martin-Löf. Intuitionistic type theory, volume 1 of Studies in Proof Theory.
Bibliopolis, 1984. ISBN 88-7088-105-9.
Per Martin-Löf. An intuitionistic theory of types. In Giovanni Sambin and Jan M
Smith, editors, Twenty-five years of constructive type theory (Venice, 1995), volume 36 of Oxford Logic Guides, pages 127–172. Oxford University Press, 1998.
Paul-André Melliès. Categorical Semantics of Linear Logic. In Pierre-Louis Curien,
Hugo Herbelin, Jean-Louis Krivine, and Paul-André Melliès, editors, Panoramas
et synthèses 27: Interactive models of computation and program behaviour. Société
Mathématique de France, 2009. ISBN 978-2-85629-273-0. URL http://www.pps.
univ-paris-diderot.fr/$\sim$mellies/papers/panorama.pdf.
John C. Mitchell. Foundations for programming languages. Foundations of Computing. The MIT Press, 1996. ISBN 9780262133210.
Samuel E. Moelius. Program Self-Reference. PhD thesis, University of Delaware,
2009.
Torben Æ. Mogensen. Efficient self-interpretation in lambda calculus. Journal of Functional Programming, 2(03):345–364, jul 1992. ISSN 0956-7968.
doi: 10.1017/S0956796800000423. URL http://www.journals.cambridge.org/
abstract_S0956796800000423.
Eugenio Moggi. Computational lambda-calculus and monads. In [1989] Proceedings. Fourth Annual Symposium on Logic in Computer Science, pages 14–23. IEEE
Comput. Soc. Press, 1989. ISBN 0-8186-1954-6. doi: 10.1109/LICS.1989.39155.
Eugenio Moggi. Notions of computation and monads. Information and Computation,
93(1):55–92, 1991. ISSN 08905401. doi: 10.1016/0890-5401(91)90052-4. URL
https://dx.doi.org/10.1016/0890-5401(91)90052-4.
Yiannis N. Moschovakis. Sense and Denotation as Algorithm and Value. Logic Colloquium ’90: ASL Summer Meeting in Helsinki, 2:210–249, 1993. URL http:
//www.math.ucla.edu/$\sim$ynm/papers/frege.pdf.
Yiannis N Moschovakis. Kleene’s Amazing Second Recursion Theorem. Bulletin of
Symbolic Logic, 16(2):189–239, 2010.
Philip S. Mulry. Generalized Banach-Mazur functionals in the topos of recursive
sets. Journal of Pure and Applied Algebra, 26(1):71–83, oct 1982. ISSN 00224049.
doi: 10.1016/0022-4049(82)90030-5. URL http://linkinghub.elsevier.com/
retrieve/pii/0022404982900305.
Philip S. Mulry. A categorical approach to the theory of computation. Annals of
Pure and Applied Logic, 43(3):293–305, aug 1989. ISSN 01680072. doi: 10.1016/
0168-0072(89)90072-9. URL http://linkinghub.elsevier.com/retrieve/pii/
0168007289900729.
J. Myhill and J. C. Shepherdson. Effective operations on partial recursive functions.
Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 1(4):310–
317, 1955. ISSN 00443050. doi: 10.1002/malq.19550010407. URL http://doi.
wiley.com/10.1002/malq.19550010407.
Aleksandar Nanevski. Meta-programming with names and necessity. ACM SIGPLAN
Notices, 37:206–217, 2002. ISSN 03621340. doi: 10.1145/583852.581498.
Aleksandar Nanevski and Frank Pfenning. Staged computation with names and necessity. Journal of Functional Programming, 15(06):893, 2005. ISSN 0956-7968.
doi: 10.1017/S095679680500568X.
Flemming Nielson and Hanne Riis Nielson. Two-Level Functional Languages. Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, 1992.
ISBN 9780521018470.
Bengt Nordström, Kent Petersson, and Jan M. Smith. Programming in Martin-Löf ’s
Type Theory: an Introduction. Oxford University Press, 1990. ISBN 0-19-853814-6.
doi: 10.1016/0377-0427(91)90052-L.
Piergiorgio Odifreddi. Classical recursion theory: The theory of functions and sets of
natural numbers. Elsevier, 1992. ISBN 9780444894830.
Robert A. Di Paola and Alex Heller. Dominical Categories: Recursion Theory without
Elements. The Journal of Symbolic Logic, 52(3):594, sep 1987. ISSN 00224812.
doi: 10.2307/2274352. URL http://www.jstor.org/stable/2274352?origin=
crossref.
Frank Pfenning and Rowan Davies. A judgmental reconstruction of modal logic.
Mathematical Structures in Computer Science, 11(4):511–540, 2001. ISSN 09601295. doi: 10.1017/S0960129501003322. URL https://dx.doi.org/10.1017/
S0960129501003322.
Richard Alan Platek. Foundations of Recursion Theory. PhD thesis, Stanford University, 1966.
Gordon Plotkin and John Power.
Computational Effects and Operations: An
Overview. Electronic Notes in Theoretical Computer Science, 73(March 2002):
149–163, oct 2004. ISSN 15710661. doi: 10.1016/j.entcs.2004.08.008.
http://linkinghub.elsevier.com/retrieve/pii/S1571066104050893.
URL
Gordon D. Plotkin. LCF considered as a programming language. Theoretical Computer Science, 5(3):223–255, 1977. ISSN 03043975. doi: 10.1016/0304-3975(77)
90044-5.
Gordon D. Plotkin. Type theory and recursion. In Proceedings Eighth Annual IEEE
Symposium on Logic in Computer Science, page 374. IEEE Comput. Soc. Press,
1993. ISBN 0-8186-3140-6. doi: 10.1109/LICS.1993.287571. URL https://dx.
doi.org/10.1109/LICS.1993.287571.
Axel Poigné. Basic category theory. In Handbook of Logic in Computer Science.
Clarendon Press, 1992. ISBN 9780198537359.
Andrew Polonsky. Axiomatizing the Quote. In Marc Bezem, editor, Computer Science Logic (CSL’11) - 25th International Workshop/20th Annual Conference of the
EACSL, volume 12 of Leibniz International Proceedings in Informatics (LIPIcs),
pages 458–469. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2011. doi:
10.4230/LIPIcs.CSL.2011.458. URL https://dx.doi.org/10.4230/LIPIcs.CSL.
2011.458.
Gregory A. Riccardi. The Independence of Control Structures in Abstract Programming Systems. PhD thesis, State University of New York at Buffalo, 1980.
Gregory A. Riccardi. The independence of control structures in abstract programming
systems. Journal of Computer and System Sciences, 22(2):107–143, apr 1981. ISSN
00220000. doi: 10.1016/0022-0000(81)90024-6. URL https://dx.doi.org/10.
1016/0022-0000(81)90024-6.
Hartley Rogers. Gödel numberings of partial recursive functions. The Journal of
Symbolic Logic, 23(03):331–341, sep 1958. ISSN 0022-4812. doi: 10.2307/2964292.
URL http://www.journals.cambridge.org/abstract_S0022481200058011.
Hartley Rogers. Theory of recursive functions and effective computability. MIT Press,
Cambridge, MA, USA, 1987. ISBN 0-262-68052-1.
James S. Royer. A Connotational Theory of Program Structure, volume 273 of Lecture Notes in Computer Science. Springer Berlin Heidelberg, Berlin, Heidelberg, 1987. ISBN 978-3-540-18253-5. doi: 10.1007/3-540-18253-5. URL http://link.springer.com/10.1007/3-540-18253-5; www.cis.syr.edu/~royer/archive/ctps.ps.
James S. Royer and John Case. Subrecursive Programming Systems. Birkhäuser Boston, Boston, MA, 1994. ISBN 978-1-4612-6680-8. doi: 10.1007/978-1-4612-0249-3. URL http://link.springer.com/10.1007/978-1-4612-0249-3.
Dana Scott. Lambda Calculus and Recursion Theory (Preliminary Version). In Stig
Kanger, editor, Proceedings of the Third Scandinavian Logic Symposium, volume 82
of Studies in Logic and the Foundations of Mathematics, pages 154–193. NorthHolland, 1975. doi: 10.1016/S0049-237X(08)70730-4. URL http://linkinghub.
elsevier.com/retrieve/pii/S0049237X08707304.
Dana S. Scott. Data Types as Lattices. SIAM Journal on Computing, 5(3):522–587,
1976. doi: 10.1137/0205037.
Dana S. Scott. A type-theoretical alternative to ISWIM, CUCH, OWHY. Theoretical Computer Science, 121(1-2):411–440, 1993. ISSN 03043975. doi: 10.1016/
0304-3975(93)90095-B.
Alex K. Simpson and Gordon D. Plotkin. Complete axioms for categorical fixedpoint operators. In Proceedings of the 15th Annual IEEE Symposium on Logic in
Computer Science (LICS 2000), pages 30–41. IEEE Comput. Soc, 2000. ISBN
0-7695-0725-5. doi: 10.1109/LICS.2000.855753. URL http://ieeexplore.ieee.
org/document/855753/.
Brian Cantwell Smith. Procedural reflection in programming languages. PhD thesis,
Massachusetts Institute of Technology, 1982. URL http://hdl.handle.net/1721.
1/15961.
Brian Cantwell Smith. Reflection and Semantics in LISP. In Proceedings of the 11th
ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages
(POPL ’84), pages 23–35, New York, New York, USA, 1984. ACM Press. ISBN
0897911253. doi: 10.1145/800017.800513. URL https://dx.doi.org/10.1145/
800017.800513.
230
Raymond M. Smullyan. Gödel’s Incompleteness Theorems. Oxford University Press,
1992.
Sam Staton. Freyd categories are Enriched Lawvere Theories. Electronic Notes in
Theoretical Computer Science, 303:197–206, 2014. ISSN 15710661. doi: 10.1016/j.
entcs.2014.02.010. URL http://dx.doi.org/10.1016/j.entcs.2014.02.010.
Christopher Strachey. Fundamental Concepts in Programming Languages. HigherOrder and Symbolic Computation, 13:11–49, 2000. ISSN 13883690. doi: 10.1023/A:
1010000313106. URL http://dx.doi.org/10.1023/A:1010000313106.
Thomas Streicher. Domain-theoretic Foundations of Functional Programming. World
Scientific, 2006.
Thomas Streicher.
How Intensional Is Homotopy Type Theory?
In Maria
del Mar González, Paul C. Yang, Nicola Gambino, and Joachim Kock, editors,
Extended Abstracts Fall 2013: Geometrical Analysis; Type Theory, Homotopy Theory and Univalent Foundations, number September in Research Perspectives CRM
Barcelona, pages 105–110. Birkhäuser Basel, Cham, 2015. ISBN 978-3-319-212838. doi: 10.1007/978-3-319-21284-5_20. URL https://dx.doi.org/10.1007/
978-3-319-21284-5_20.
Walid Taha and Michael Florentin Nielsen. Environment classifiers. ACM SIGPLAN
Notices, 38:26–37, 2003. ISSN 03621340. doi: 10.1145/640128.604134.
Walid Taha and Tim Sheard. Multi-stage programming with explicit annotations.
In Proceedings of the 1997 ACM SIGPLAN symposium on Partial evaluation and
semantics-based program manipulation (PEPM ’97), pages 203–217, New York,
New York, USA, 1997. ACM Press. ISBN 0897919173. doi: 10.1145/258993.259019.
URL https://dx.doi.org/10.1145/258993.259019.
Walid Taha and Tim Sheard. MetaML and multi-stage programming with explicit annotations. Theoretical Computer Science, 248(1-2):211–242, 2000. ISSN
03043975. doi: 10.1016/S0304-3975(00)00053-0. URL https://dx.doi.org/10.
1016/S0304-3975(00)00053-0.
M. Takahashi. Parallel Reductions in λ-Calculus. Information and Computation,
118(1):120–127, apr 1995. ISSN 08905401. doi: 10.1006/inco.1995.1057. URL
https://dx.doi.org/10.1006/inco.1995.1057.
231
Takeshi Tsukada and Atsushi Igarashi. A logical foundation for environment classifiers. Logical Methods in Computer Science, 6(4):1–43, 2010. ISSN 18605974.
doi: 10.2168/LMCS-6(4:8)2010. URL https://dx.doi.org/10.2168/LMCS-6(4:
8)2010.
Alan M Turing. On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42(1):230–
265, jan 1937. ISSN 0024-6115. doi: 10.1112/plms/s2-42.1.230. URL http:
//plms.oxfordjournals.org/cgi/doi/10.1112/plms/s2-42.1.230.
Alan M Turing. Systems of logic based on ordinals. Proceedings of the London
Mathematical Society, 2(1):161–228, 1939.
Aldo Ursini. Intuitionistic diagonalizable algebras. Algebra Universalis, 9(1):229–237,
1979a. ISSN 00025240. doi: 10.1007/BF02488034.
Aldo Ursini. A modal calculus analogous to K4W, based on intuitionistic propositional logic.
Studia Logica, 38(3):297–311, 1979b.
ISSN 0039-3215.
doi:
10.1007/BF00405387. URL https://dx.doi.org/10.1007/BF00405387.
Jaap van Oosten. Realizability: An Introduction to its Categorical Side, volume 152.
Elsevier, 2008. ISBN 978-0-444-51584-1. URL http://www.sciencedirect.com/
science/bookseries/0049237X/152.
Spyros Vassilakis. Economic Data Types. 1989.
Spyros Vassilakis.
Some economic applications of Scott domains.
Mathemati-
cal Social Sciences, 24(2-3):173–208, nov 1992. ISSN 01654896. doi: 10.1016/
0165-4896(92)90061-9. URL http://linkinghub.elsevier.com/retrieve/pii/
0165489692900619.
Kumaraswamy Velupillai. Computable economics. The Arne Ryde Memorial Lectures.
Oxford University Press, 2000. ISBN 978 1 84376 239 3.
P Wadler. A critique of Abelson and Sussman or why calculating is better than
scheming. ACM SIGPLAN Notices, 22(3):83–94, mar 1987. ISSN 03621340. doi:
10.1145/24697.24706. URL https://dx.doi.org/10.1145/24697.24706.
Mitchell Wand. The Theory of Fexprs is Trivial. LISP and Symbolic Computation,
10(3):189–199, 1998. ISSN 08924635. doi: 10.1023/A:1007720632734. URL https:
//dx.doi.org/10.1023/A:1007720632734.
232
Mitchell Wand and Daniel P. Friedman. The mystery of the tower revealed: A
nonreflective description of the reflective tower.
Lisp and Symbolic Computa-
tion, 1(1):11–38, jun 1988. ISSN 0892-4635. doi: 10.1007/BF01806174. URL
https://dx.doi.org/10.1007/BF01806174.
Steven J Winrich. Self-Reference and the Incomplete Structure of Neoclassical Economics. Journal of Economic Issues, 18(4):987–1005, 1984.
233
| 6 |
CENTRAL QUOTIENT VERSUS COMMUTATOR SUBGROUP OF GROUPS
arXiv:1011.2083v4 [] 10 Mar 2016
MANOJ K. YADAV
Abstract. In 1904, Issai Schur proved the following result. If G is an arbitrary group such
that G/ Z(G) is finite, where Z(G) denotes the center of the group G, then the commutator
subgroup of G is finite. A partial converse of this result was proved by B. H. Neumann in
1951. He proved that if G is a finitely generated group with finite commutator subgroup, then
G/ Z(G) is finite. In this short note, we exhibit a few arguments of Neumann which provide
further generalizations of the converse of the above-mentioned result of Schur. We classify all
finite groups G such that |G/ Z(G)| = |γ2 (G)|d , where d denotes the number of elements in a
minimal generating set for G/ Z(G). Some problems and questions are posed in the sequel.
1. Introduction
In 1951, Neumann [18, Theorem 5.3] proved the following result: If the index of Z(G) in
G is finite, then γ2 (G) is finite, where Z(G) and γ2 (G) denote the center and the commutator
subgroup of G respectively. He mentioned [19, End of page 237] that this result can be obtained
from an implicit idea of Schur [23], and his proof also used Schur’s basic idea. However there is
no mention of this fact in [18] in which Schur’s paper is also cited. In this note, this result will
be termed as ‘the Schur’s theorem’. Neumann also provided a partial converse of the Schur’s
theorem [18, Corollary 5.41] as follows: If G is finitely generated by k elements and γ2 (G) is
finite, then G/ Z(G) is finite, and bounded by |G/ Z(G)| ≤ |γ2 (G)|k .
Our first motivation for writing this note is to exhibit an idea of Neumann [18, page 179] which
proves much more than what is said above on the converse of the Schur's theorem. We quote the
text here (with a minor modification in the notations):
“Let G be generated by g1 , g2 , . . . , gk . Then
Z(G) = ∩_{κ=1}^{k} C_G(g_κ);
for, an element of G lies in the centre if and only if it (is permutable) commutes with all the
generators of G. If G is an FC-group (group whose all conjugacy classes are of finite length),
then |G : CG (gκ )| is finite for 1 ≤ κ ≤ k, and Z(G), as intersection of a finite set of subgroups
of finite index, also has finite index. The index of the intersection of two subgroups does not
exceed the product of the indices of the subgroups: hence in this case one obtains an upper bound
for the index of the centre, namely
|G : Z(G)| ≤ Π_{κ=1}^{k} |G : C_G(g_κ)|.”
A moment's look at the quoted text suggests the following. The conclusion does not require the group G to be an FC-group. It only requires the finiteness of the conjugacy classes of the generating elements. If a generator of the group G is contained in Z(G), one really does not need to count it. Thus the argument works perfectly well even if G is generated by an infinite number of elements, all but finitely many of which lie in the center of G. Thus the following result holds true.
2010 Mathematics Subject Classification. Primary 20F24, 20E45.
Key words and phrases. commutator subgroup, Schur's theorem, class-preserving automorphism.
Theorem A. Let G be an arbitrary group such that G/ Z(G) is finitely generated by x1 Z(G),
x2 Z(G), . . . , xt Z(G) and the conjugacy class of xi in G is of finite length for 1 ≤ i ≤ t. Then
G/Z(G) is finite. Moreover |G/Z(G)| ≤ Π_{i=1}^{t} |x_i^G| and γ2(G) is finite, where x_i^G denotes the
conjugacy class of x_i in G.
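As a quick numerical sanity check of the bound in Theorem A (not part of the original argument), one can compare Π_i |x_i^G| with |G/Z(G)| for a small permutation group. The sketch below assumes the SymPy library and uses the symmetric group S3, with its default generators, purely as an illustrative example.

```python
# Hypothetical sanity check of Theorem A's bound, assuming SymPy is installed.
# S3 and its default generators are used only as an example.
from sympy.combinatorics.named_groups import SymmetricGroup

G = SymmetricGroup(3)
elements = list(G.generate())

def class_size(x):
    # brute-force conjugacy class {g * x * g^-1 : g in G}
    return len({g * x * g**-1 for g in elements})

bound = 1
for x in G.generators:
    bound *= class_size(x)

central_quotient = G.order() // G.center().order()
print(central_quotient, "<=", bound)   # prints "6 <= 6" for S3
```

For S3 the two default generators lie in conjugacy classes of sizes 2 and 3, so the bound 6 matches |G/Z(G)| = 6 exactly in this example.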
Neumann’s result [18, Corollary 5.41] was reproduced by Hilton [12, Theorem 1]. It seems
that Hilton was not aware of Neumann’s result. This lead two more publications [21] and [24]
dedicated to proving special cases of Theorem A.
Converse of the Schur’s theorem is not true in general as shown by infinite extraspecial pgroups, where p is an odd prime. It is interesting to know that example of such a 2-group also
exists, which is mentioned on page 238 (second para of Section 3) of [19]. It is a central product
of infinite copies of quaternion groups of order 8 amalgamated at the center of order 2.
Our second motivation for writing this note is to provide a modification of an innocent-looking
result of Neumann [20, Lemma 2], which allows us to say a little more on the converse of the Schur's
theorem. A modified version of this lemma is the following.
Lemma 1.1. Let G be an arbitrary group having a normal abelian subgroup A such that the
index of CG (A) in G is finite and G/A is finitely generated by g1 A, g2 A, . . . , gr A, where |giG | < ∞
for 1 ≤ i ≤ r. Then G/Z(G) is finite.
This lemma helps proving the first three statements of the following result.
Theorem B. For an arbitrary group G, G/ Z(G) is finite if any one of the following holds true:
(i) Z2 (G)/ Z(Z2 (G)) is finitely generated and γ2 (G) is finite.
(ii) G/ Z(Z2 (G)) is finitely generated and G/(Z2 (G)γ2 (G)) is finite.
(iii) γ2 (G) is finite and Z2 (G) ≤ γ2 (G).
(iv) γ2 (G) is finite and G/ Z(G) is purely non-abelian.
Our final motivation is to provide a classification of all groups G up to isoclinism (see Section
3 for the definition) such that |G/Z(G)| = |γ2(G)|^d is finite, where d denotes the number of
elements in a minimal generating set for G/Z(G), to discuss examples in various situations, and to pose
some problems. We conclude this section by fixing some notation. For an arbitrary group
G, by Z(G), Z2 (G) and γ2 (G) we denote the center, the second center and the commutator
subgroup of G respectively. For x ∈ G, [x, G] denotes the set {[x, g] | g ∈ G}. Notice that
|[x, G]| = |xG |, where xG denotes the conjugacy class of x in G. If [x, G] ⊆ Z(G), then [x, G]
becomes a subgroup of G. For a subgroup H of G, CG (H) denotes the centralizer of H in G
and for an element x ∈ G, CG (x) denotes the centralizer of x in G.
2. Proofs
We start with the proof of Lemma 1.1, which is essentially the same as the one given by Neumann.
Proof of Lemma 1.1. Let G/A be generated by g_1A, g_2A, . . . , g_rA for some g_i ∈ G, where
1 ≤ i ≤ r < ∞. Let X := {g_1, g_2, . . . , g_r} and A be generated by a set Y. Then G = ⟨X ∪ Y⟩
and Z(G) = C_G(X) ∩ C_G(Y). Notice that C_G(A) = C_G(Y). Since C_G(A) is of finite index,
C_G(Y) is also of finite index in G. Also, since |g_i^G| < ∞ for 1 ≤ i ≤ r, C_G(X) is of finite index
in G. Hence the index of Z(G) in G is finite and the proof is complete.
Proof of Theorem A can be made quite precise by using Lemma 1.1.
Proof of Theorem A. Taking A = Z(G) in Lemma 1.1, it follows that G/ Z(G) is finite.
Moreover,
|G/Z(G)| = |G/ ∩_{i=1}^{t} C_G(x_i)| ≤ Π_{i=1}^{t} |G : C_G(x_i)| = Π_{i=1}^{t} |[x_i, G]| = Π_{i=1}^{t} |x_i^G|.
That γ2 (G) is finite now follows from the Schur’s theorem.
For the proof of Theorem B we need the following result of Hall [9] and the subsequent
proposition.
Theorem 2.1. If G is an arbitrary group such that γ2 (G) is finite, then G/ Z2 (G) is finite.
Explicit bounds on the order of G/Z_2(G) were first given by Macdonald [14, Theorem 6.2] and
later on improved by Podoski and Szegedy [22] by showing that if |γ2(G)/(γ2(G) ∩ Z(G))| = n,
then |G/Z_2(G)| ≤ n^{c log_2 n} with c = 2.
Proposition 2.2. Let G be an arbitrary group such that γ2 (G) is finite and G/ Z(G) is infinite.
Then G/ Z(G) has an infinite abelian group as a direct factor.
Proof. Since γ2 (G) is finite, by Theorem 2.1 G/ Z2 (G) is finite. Thus Z2 (G)/ Z(G) is infinite.
Again using the finiteness of γ2 (G), it follows that the exponent of Z2 (G)/ Z(G) is finite. Hence
by [6, Theorem 17.2] Z_2(G)/Z(G) is a direct sum of cyclic groups. Let G/Z_2(G) be generated
by x_1Z_2(G), . . . , x_rZ_2(G) and H := ⟨x_1, . . . , x_r⟩. Then it follows that modulo Z(G), H ∩ Z_2(G)
is finite. Thus we can write
Z_2(G)/Z(G) = ⟨y_1Z(G)⟩ × · · · × ⟨y_sZ(G)⟩ × ⟨y_{s+1}Z(G)⟩ × · · · ,
such that (H ∩ Z_2(G))Z(G)/Z(G) ≤ ⟨y_1Z(G)⟩ × · · · × ⟨y_sZ(G)⟩. It now follows that the infinite
abelian group ⟨y_{s+1}Z(G)⟩ × · · · is a direct factor of G/Z(G), and the proof is complete.
We are now ready to prove Theorem B.
Proof of Theorem B. Since γ2 (G) is finite, it follows from Theorem 2.1 that G/ Z2 (G) is finite.
Now using the fact that Z2 (G)/ Z(Z2 (G)) is finitely generated, it follows that G/ Z(Z2 (G)) is
finitely generated. Take Z(Z2 (G)) = A. Then notice that A is a normal abelian subgroup of
G such that the index of CG (A) in G is finite, since Z2 (G) ≤ CG (A). Hence by Lemma 1.1,
G/ Z(G) is finite, which proves (i).
Again take Z(Z2 (G)) = A and notice that Z2 (G)γ2 (G) ≤ CG (A). (ii) now directly follows
from Lemma 1.1. If Z2 (G) ≤ γ2 (G), then Z2 (G) is abelian. Thus (iii) follows from (i). Finally,
(iv) follows from Proposition 2.2. This completes the proof of the theorem.
We conclude this section with an extension of Theorem A in terms of conjugacy class-preserving automorphisms of a given group G. An automorphism α of an arbitrary group G
is called (conjugacy) class-preserving if α(g) ∈ g^G for all g ∈ G. We denote the group of all
class-preserving automorphisms of G by Aut_c(G). Notice that Inn(G), the group of all inner
automorphisms of G, is a normal subgroup of Autc (G) and Autc (G) acts trivially on the center
of G. A detailed survey on class-preserving automorphisms of finite p-groups can be found in
[25].
Let G be the group as in the statement of Theorem A. Then G is generated by x1 , x2 , . . . , xt
along with Z(G). Since Autc (G) acts trivially on the center of G, it follows that
(2.3)   |Aut_c(G)| ≤ Π_{i=1}^{t} |x_i^G|,
as there are only |x_i^G| choices for the image of each x_i under any class-preserving automorphism.
Since |x_i^G| is finite for each x_i, 1 ≤ i ≤ t, it follows that |Aut_c(G)| ≤ Π_{i=1}^{t} |x_i^G| is finite.
We have proved the following result of which Theorem A is a corollary, because |G/ Z(G)| =
| Inn(G)| ≤ | Autc (G)|.
Theorem 2.4. Let G be an arbitrary group such that G/ Z(G) is finitely generated by x1 Z(G),
x2 Z(G), . . . , xt Z(G) and the conjugacy class of xi in G is of finite length for 1 ≤ i ≤ t. Then
Aut_c(G) is finite. Moreover |Aut_c(G)| ≤ Π_{i=1}^{t} |x_i^G| and γ2(G) is finite.
The proof of Theorem A is also reproduced using IA-automorphisms (automorphisms of a group
that induce the identity on the abelianization) in [7, Theorem 2.3]. The proof goes the same way as
in the case of class-preserving automorphisms.
3. Groups with maximal central quotient
We start with the following concept due to Hall [8]. For a group X, the commutator map
aX : X/ Z(X) × X/ Z(X) → γ2 (X) given by aX (x1 Z(X), x2 Z(X)) = [x1 , x2 ] is well defined.
Two groups K and H are said to be isoclinic if there exists an isomorphism φ of the factor
group K̄ = K/ Z(K) onto H̄ = H/ Z(H), and an isomorphism θ of the subgroup γ2 (K) onto
γ2 (H) such that the following diagram is commutative
K̄ × K̄ --a_K--> γ2(K)
φ×φ ↓              ↓ θ
H̄ × H̄ --a_H--> γ2(H).
The resulting pair (φ, θ) is called an isoclinism of K onto H. Notice that isoclinism is an
equivalence relation among groups.
The following proposition (also see Macdonald’s result [14, Lemma 2.1]) is important for the
rest of this section.
Proposition 3.1. Let G be a group such that G/ Z(G) is finite. Then there exists a finite group
H isoclinic to the group G such that Z(H) ≤ γ2 (H). Moreover if G is a p-group, then H is also
a p-group.
Proof. Let G be the given group. Then by Schur’s theorem γ2 (G) is finite. Now it follows from
a result of Hall [8] that there exists a group H which is isoclinic to G and Z(H) ≤ γ2 (H). Since
|γ2 (G)| = |γ2 (H)| is finite, Z(H) is finite. Hence, by the definition of isoclinism, H is finite.
Now suppose that G is a p-group. Then it follows that H/Z(H) as well as γ2(H) are p-groups.
Since Z(H) ≤ γ2 (H), this implies that H is a p-group.
For an arbitrary group G with finite G/ Z(G), we have
(3.2)
|G/ Z(G)| ≤ |γ2 (G)|d ,
where d = d(G/Z(G)). For simplicity we say that a group G has Property A if G/Z(G) is finite
and equality holds in (3.2) for G. We are now going to classify, up to isoclinism, all groups G
having Property A.
Let G be an arbitrary group having Property A. Then by Proposition 3.1 there exists a finite
group H isoclinic to G and, by the definition of isoclinism, H has Property A. Thus for classifying
all groups G, up to isoclinism, having Property A, it is sufficient to classify all finite groups with
this property.
Let us first consider non-nilpotent finite groups. For such groups we prove the following result
in [5].
Theorem 3.3. There is no non-nilpotent group G having Property A.
So we only need to consider finite nilpotent groups. Since a finite nilpotent group is a direct
product of it’s Sylow p-subgroups, it is sufficient to classify finite p-groups admitting Property A.
Obviously, all abelian groups admit Property A. Perhaps the simplest examples of non-abelian
groups having Property A are finite extraspecial p-groups. The class of 2-generated finite capable
nilpotent groups with cyclic commutator subgroup also admits Property A. A group G is said
∼ H/ Z(H). Isaacs [13, Theorem 2] proved:
to be capable if there exists a group H such that G =
Let G be finite and capable, and suppose that γ2 (G) is cyclic and that all elements of order 4
in γ2 (G) are central in G. Then |G/ Z(G)| ≤ |γ2 (G)|2 , and equality holds if G is nilpotent. So
G admits Property A, if G is a nilpotent group as in this statement and G is 2-generated. A
complete classification of 2-generated finite capable p-groups of class 2 is given in [15].
Motivated by finite extraspecial p-groups, a more general class of groups G admitting Property
A can be constructed as follows. For any positive integer m, let G1 , G2 , . . . , Gm be 2-generated
finite p-groups such that γ2(G_i) = Z(G_i) ≅ X (say) is cyclic of order q for 1 ≤ i ≤ m, where q
is some power of p. Consider the central product
(3.4)
Y = G1 ∗X G2 ∗X · · · ∗X Gm
of G_1, G_2, . . . , G_m amalgamated at X (isomorphic to the cyclic commutator subgroups γ2(G_i), 1 ≤
i ≤ m). Then |Y| = q^{2m+1} and |Y/Z(Y)| = q^{2m} = |γ2(Y)|^{d(Y)}, where d(Y) = 2m is the number
of elements in any minimal generating set for Y. Thus Y has Property A. Notice that in all of
the above examples, the commutator subgroup is cyclic. Infinite groups having Property A can
be easily obtained by taking a direct product of an infinite abelian group with any finite group
having Property A.
We now proceed to show that any finite p-group G of class 2 having Property A is isoclinic
to a group Y defined in (3.4).
Let x ∈ Z2 (G) for a group G. Then, notice that [x, G] is a central subgroup of G. We have
the following simple but useful result.
Lemma 3.5. Let G be an arbitrary group such that Z2 (G)/ Z(G) is finitely generated by x1 Z(G),
x2 Z(G), . . . , xt Z(G) such that exp([xi , G]) is finite for 1 ≤ i ≤ t. Then
|Z_2(G)/Z(G)| = Π_{i=1}^{t} exp([x_i, G]).
Proof. By the given hypothesis exp([xi , G]) is finite for all i such that 1 ≤ i ≤ t. Suppose that
exp([x_i, G]) = n_i. Since [x_i, G] ⊆ Z(G), it follows that [x_i^{n_i}, G] = [x_i, G]^{n_i} = 1. Thus x_i^{n_i} ∈ Z(G)
and no smaller power of x_i than n_i can lie in Z(G), which implies that the order of x_iZ(G) is
n_i. Since Z_2(G)/Z(G) is abelian, we have |Z_2(G)/Z(G)| = Π_{i=1}^{t} exp([x_i, G]).
Let Φ(X) denote the Frattini subgroup of a group X. The following result provides some
structural information of p-groups of class 2 admitting Property A.
Proposition 3.6. Let H be a finite p-group of class 2 having Property A and Z(H) = γ2 (H).
Then
(i) γ2 (H) is cyclic;
(ii) H/ Z(H) is homocyclic;
(iii) [x, H] = γ2 (H) for all x ∈ H − Φ(H);
(iv) H is minimally generated by even number of elements.
Proof. Let H be the group given in the statement, which is minimally generated by d elements
x1 , x2 . . . , xd (say). Since Z(H) = γ2 (H), it follows that H/ Z(H) is minimally generated by
x1 Z(H), x2 Z(H), . . . , xd Z(H). Thus by the identity |H/ Z(H)| = |γ2 (H)|d , it follows that
order of xi Z(H) is equal to |γ2 (H)| for all 1 ≤ i ≤ d. Since the exponent of H/ Z(H) is equal
to the exponent of γ2 (H), we have that γ2 (H) is cyclic and H/ Z(H) is homocyclic. Now by
Lemma 3.5, |γ2(H)|^d = |H/Z(H)| = Π_{i=1}^{d} exp([x_i, H]). Since [x_i, H] ⊆ γ2(H), this implies that
[xi , H] = γ2 (H) for each i such that 1 ≤ i ≤ d. Let x be an arbitrary element in H − Φ(H).
Then the set {x} can always be extended to a minimal generating set of H. Thus it follows that
[x, H] = γ2 (H) for all x ∈ H − Φ(H). This proves first three assertions.
For the proof of (iv), we consider the group H̄ = H/Φ(γ2 (H)). Notice that both H as well
as H̄ are minimally generated by d elements. Since [x, H] = γ2(H) for all x ∈ H − Φ(H), it
follows that for no x ∈ H − Φ(H), x̄ ∈ Z(H̄), where x̄ = xΦ(γ2 (H)) ∈ H̄. Thus it follows that
Z(H̄) ≤ Φ(H̄). Also, since γ2 (H) is cyclic, γ2 (H̄) is cyclic of order p. Thus it follows that H̄ is
isoclinic to a finite extraspecial p-group, and therefore it is minimally generated by even number
of elements. Hence H is also minimally generated by even number of elements. This completes
the proof of the proposition.
Using the definition of isoclinism, we have
Corollary 3.7. Let G be a finite p-group of class 2 admitting Property A. Then γ2 (G) is cyclic
and G/ Z(G) is homocyclic.
We need the following important result.
Theorem 3.8 ([3], Theorem 2.1). Let G be a finite p-group of nilpotency class 2 with cyclic
center. Then G is a central product either of two generator subgroups with cyclic center or two
generator subgroups with cyclic center and a cyclic subgroup.
Theorem 3.9. Let G be a finite p-group of class 2 having Property A. Then G is isoclinic to
the group Y , defined in (3.4), for suitable positive integers m and n.
Proof. Let G be a group as in the statement. Then by Proposition 3.1 there exists a finite p-group
H isoclinic to G such that Z(H) = γ2 (H). Obviously H also satisfies |H/ Z(H)| = |γ2 (H)|d ,
where d denotes the number of elements in any minimal generating set of G/ Z(G). Then by
Proposition 3.6, γ2 (H) = Z(H) is cyclic of order q = pn (say) for some positive integer n, and
H/ Z(H) is homocyclic of exponent q and is of order q 2m for some positive integer m. Since
Z(H) = γ2 (H) is cyclic, it follows from Theorem 3.8 that H is a central product of 2-generated
groups H1 , H2 , . . . , Hm . It is easy to see that γ2 (Hi ) = Z(Hi ) for 1 ≤ i ≤ m and |γ2 (H)| = q.
This completes the proof of the theorem.
We would like to remark that Theorem 3.9 is also obtained in [26, Theorem 11.2] as a
consequence of the study of class-preserving automorphisms of finite p-groups. But we have presented
a direct proof here.
Now we classify finite p-groups of nilpotency class larger than 2. Consider the metacyclic groups
(3.10)   K := ⟨x, y | x^{p^{r+t}} = 1, y^{p^r} = x^{p^{r+s}}, [x, y] = x^{p^t}⟩,
where 1 ≤ t < r and 0 ≤ s ≤ t (t ≥ 2 if p = 2) are non-negative integers. Notice that the
nilpotency class of K is at least 3. Since K is generated by 2 elements, it follows from (2.3)
that |Aut_c(K)| ≤ |γ2(K)|² = p^{2r}. It is not so difficult to see that |Inn(K)| = |K/Z(K)| = p^{2r}.
Since Inn(K) ≤ Aut_c(K), it follows that |Aut_c(K)| = |Inn(K)| = |γ2(K)|² = |γ2(K)|^{d(K)} (that
Aut_c(G) = Inn(G) is, in fact, true for all finite metacyclic p-groups). Thus K admits Property
A. Furthermore, if H is any 2-generator group isoclinic to K, then it follows that H admits
Property A. For a finite p-group having Property A, there always exists a p-group H isoclinic
to G such that |H/Z(H)| = |γ2(H)|^d, where d = d(H). The following theorem now classifies, up to
isoclinism, all finite p-groups G of nilpotency class larger than 2 having Property A.
Theorem 3.11 (Theorem 11.3, [26]). Let G be a finite p-group of nilpotency class at least 3.
Then the following hold true.
(i) If |G/ Z(G)| = |γ2 (G)|d , where d = d(G), then d(G) = 2;
(ii) If |γ2 (G)/γ3 (G)| > 2, then |G/ Z(G)| = |γ2 (G)|d if and only if G is a 2-generator group
with cyclic commutator subgroup. Furthermore, G is isoclinic to the group K defined in (3.10)
for suitable parameters;
(iii) If |γ2 (G)/γ3 (G)| = 2, then |G/ Z(G)| = |γ2 (G)|d if and only if G is a 2-generator 2-group
of nilpotency class 3 with elementary abelian γ2 (G) of order 4.
It is clear that the groups G occurring in Theorem 3.11(iii) are isoclinic to certain groups
of order 32. Using Magma (or GAP), one can easily show that such groups of order 32 are
SmallGroup(32,k) for k = 6, 7, 8 in the small group library.
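The order-32 computation above is a Magma/GAP computation; as a lightweight, hypothetical illustration of Property A itself (not of the order-32 classification), the following sketch, assuming the SymPy library, checks the identity |G/Z(G)| = |γ2(G)|^d for the dihedral group of order 8, an extraspecial 2-group, with d = 2 supplied by hand.

```python
# Illustrative check of Property A for a small extraspecial group, assuming
# SymPy; this is NOT the Magma/GAP computation for the order-32 groups above.
from sympy.combinatorics.named_groups import DihedralGroup

G = DihedralGroup(4)                                  # dihedral group of order 8
central_quotient = G.order() // G.center().order()    # |G/Z(G)| = 4
gamma2 = G.derived_subgroup()                         # commutator subgroup, order 2
d = 2   # assumed: G/Z(G) is elementary abelian of rank 2, so d(G/Z(G)) = 2
print(central_quotient == gamma2.order() ** d)        # True: 4 == 2**2
```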
We conclude this section by providing a different type of bound on the central quotient
of a given group. If |γ2(G)Z(G)/Z(G)| = n is finite for a group G, then it follows from [22,
Theorem 1] that |G/Z_2(G)| ≤ n^{2 log_2 n}. Using this and Lemma 3.5 we can also provide an
upper bound on the size of G/Z(G) in terms of n, the rank of Z_2(G)/Z(G) and the exponents of
certain sets of commutators (here these sets are really subgroups of G) of coset representatives
of generators of Z_2(G)/Z(G) with the elements of G. This is given in the following theorem.
Theorem 3.12. Let G be an arbitrary group such that |γ2(G)Z(G)/Z(G)| = n is finite and Z_2(G)/Z(G)
is finitely generated by x_1Z(G), x_2Z(G), . . . , x_tZ(G) such that exp([x_i, G]) is finite for 1 ≤ i ≤ t. Then
|G/Z(G)| ≤ n^{2 log_2 n} Π_{i=1}^{t} exp([x_i, G]).
4. Problems and Examples
Theorem B provides some conditions on a group G under which G/ Z(G) becomes finite. It
is interesting to solve
Problem 1. Let G be an arbitrary group. Provide a set C of optimal conditions on G such
that G/ Z(G) is finite if and only if all conditions in C hold true.
As we know, there is no finite non-nilpotent group G admitting Property A. Since
Inn(G) ≤ Aut_c(G), it is interesting to consider
Problem 2. Classify all non-nilpotent finite groups G such that |Aut_c(G)| = |γ2(G)|^d, where
d = d(G).
A much stronger result than Theorem 3.3 is known in the case when the Frattini subgroup
of G is trivial. This is given in the following theorem of Herzog, Kaplan and Lev [10, Theorem
A] (the same result is also proved independently by Halasi and Podoski in [11, Theorem 1.1]).
Theorem 4.1. Let G be any non-abelian group with trivial Frattini subgroup. Then |G/ Z(G)| <
|γ2 (G)|2 .
The following result with the assertion similar to the preceding theorem is due to Isaacs [13].
Theorem 4.2. If G is a capable finite group with cyclic γ2 (G) and all elements of order 4 in
γ2 (G) are central in G, then |G : Z(G)| ≤ |γ2 (G)|2 . Moreover, equality holds if G is nilpotent.
So, there do exist nilpotent groups with comparatively small central quotient. A natural
problem is the following.
Problem 3. Classify all finite p-groups G such that |G : Z(G)| ≤ |γ2 (G)|2 .
Let G be a finite nilpotent group of class 2 minimally generated by d elements. Then it
follows from Lemma 3.5 that |G/Z(G)| ≤ Π_{i=1}^{d} exp([x_i, G]), which in turn implies
(4.3)   |G/Z(G)| ≤ (exp(γ2(G)))^d.
Problem 4. Classify all finite p-groups G of nilpotency class 2 for which equality holds in (4.3).
Now we discuss some examples of infinite groups with finite central quotient. The most
obvious example is the infinite cyclic group. Other obvious examples are the groups G = H × Z,
where H is any finite group and Z is the infinite cyclic group. Non-obvious examples include
finitely generated FC-groups, in which conjugacy class sizes are bounded, and certain Cernikov
groups. We provide explicit examples in each case. Let F_n be the free group on n symbols and
p be a prime integer. Then the factor group F_n/(γ2(F_n)^p γ3(F_n)) is the required group of the
first type, where γ2(F_n)^p = ⟨u^p | u ∈ γ2(F_n)⟩. Now let H = Z(p^∞) × A be the direct product of
the quasi-cyclic (Prüfer) group Z(p^∞) and the cyclic group A = ⟨a⟩ of order p, where p is a prime
integer. Now consider the group G = H ⋊ B, the semidirect product of H and the cyclic group
B = ⟨b⟩ of order p, with the action given by x^b = x for all x ∈ Z(p^∞) and a^b = ac, where c is the
unique element of order p in Z(p^∞). This group was suggested by V. Romankov and Rahul D.
Kitture through ResearchGate, and is a Cernikov group.
The following problem was suggested by R. Baer in [1].
Problem 5. Let A be an abelian group and Q be a group. Obtain necessary and sufficient
conditions on A and Q so that there exists a group G with A ≅ Z(G) and Q ≅ G/A.
This problem was solved by Baer himself for an arbitrary abelian group A and a finitely generated abelian group Q. Moskalenko [16] solved this problem for an arbitrary abelian group A
and a periodic abelian group Q. He [17] also solved this problem for an arbitrary abelian group A
and a non-periodic abelian group Q such that the rank of Q/t(Q) is 1, where t(Q) denotes the
torsion subgroup of Q. If this rank is more than 1, then he solved the problem when A is a
torsion abelian group. For a given group Q, the existence of a group G such that Q ≅ G/Z(G)
has been studied extensively under the theme ‘Capable groups’. However, to the best of our
knowledge, Problem 5 has been poorly studied in full generality. Let us restate a special case of
this problem in a slightly different setup. A pair of groups (A, Q), where A is an arbitrary abelian
group and Q is an arbitrary group, is said to be a capable pair if there exists a group G such
that A ≅ Z(G) and Q ≅ G/Z(G). So, in our situation, the following problem is very interesting.
Problem 6. Classify capable pairs (A, Q), where A is an infinite abelian group and Q is a finite
group.
Finally let us get back to the situation when G is a group with finite γ2(G) but infinite
G/Z(G). The well-known examples of this type are infinite extraspecial p-groups. Another class
of examples can be obtained by taking a central product (amalgamated at their centers) of
infinitely many copies of a 2-generated finite p-group H of class 2 such that γ2(H) = Z(H) is cyclic
of order q, where q is some power of p. Notice that both of these classes consist of groups of
nilpotency class 2. Now if we take G = X × H, where X is an arbitrary group with finite γ2(X)
and H is a group with finite γ2(H) and infinite H/Z(H), then γ2(G) is finite but G/Z(G) is infinite. So
we can construct nilpotent groups of arbitrary class and even non-nilpotent groups with infinite
central quotient and finite commutator subgroup.
A non-nilpotent group G is said to be purely non-nilpotent if it does not have any non-trivial
nilpotent subgroup as a direct factor. With the help of Rahul D. Kitture, we have also been
able to construct purely non-nilpotent groups G such that γ2 (G) is finite but G/ Z(G) is infinite.
Let H be an infinite extraspecial p-group. Then we can always find a field F_q, where q is
some power of a prime, containing all pth roots of unity. Now let K be the special linear group
SL(p, F_q), which is a non-nilpotent group having a central subgroup of order p. Now consider
the group G which is a central product of H and K amalgamated at Z(H). Then G is a purely
non-nilpotent group with the required conditions. It will be interesting to see more examples of
this type which do not occur as a central product of such infinite groups of nilpotency class 2
with non-nilpotent groups.
By Proposition 2.2 we know that for an arbitrary group G with finite γ2 (G) but infinite
G/ Z(G), the group G/ Z(G) has an infinite abelian group as a direct factor. Further structural
information is highly welcome.
Problem 7. Provide structural information on groups G such that γ2(G) is finite but G/Z(G)
is infinite.
References
[1] R. Baer, Groups with preassigned central and central quotient group, Trans. Amer. Math. Soc. 44 (1938),
387-412.
[2] W. Bosma, J. Cannon and C. Playoust, The Magma algebra system. I. The user language, J. Symbolic
Comput., 24 (1997), 235-265.
[3] J. M. Brady, R. A. Bryce, and J. Cossey, On certain abelian-by-nilpotent varieties, Bull. Austral. Math.
Soc. 1 (1969), 403 - 416.
[4] Y. Cheng, On finite p-groups with cyclic commutator subgroup, Arch. Math. 39 (1982), 295-298.
[5] S. Dolfi, E. Pacifici and M. K. Yadav, Bounds for the index of the centre in non-nilpotent groups, preprint.
[6] L. Fuchs, Infinite abelian groups, Vol. I, Pure and Applied Mathematics, Vol. 36, Academic Press, New York
- London, 1970.
[7] D. K. Gumber and H. Kalra, On the converse of a theorem of Schur, Arch. Math. (Basel) 101 (2013), 17-20.
[8] P. Hall, The classification of prime power groups, Journal für die reine und angewandte Mathematik 182
(1940), 130-141.
[9] P. Hall, Finite-by-nilpotent groups, Proc. Cambridge Phil. Soc. 52 (1956), 611-616.
[10] M. Herzog, G. Kaplan and A. Lev, The size of the commutator subgroup of finite groups, J. Algebra 320
(2008), 980-986.
[11] Z. Halasi and K. Podoski, Bounds in groups with trivial Frattini subgroups, J. Algebra 319 (2008), 893-896.
[12] P. Hilton, On a theorem of Schur, Int. J. Math. Math. Sci. 28 (2001), 455-460.
[13] I. M. Isaacs, Derived subgroups and centers of capable groups, Proc. Amer. Math. Soc. 129 (2001), 2853-2859.
[14] I. D. Macdonald, Some explicit bounds in groups with finite derived groups, Proc. London Math. Soc. (3)
11 (1961), 23-56.
[15] A. Magidin and R. F. Morse, Certain homological functors of 2-generator p-groups of class 2, Contemp.
Math. 511 (2010), 127-166.
[16] A. I. Moskalenko, Central extensions of abelian groups by means of abelian groups, Sibirsk. Mat. Z. 9 (1968),
104-115.
[17] A. I. Moskalenko, Central extensions of a periodic abelian group by means of an abelian group. Algebra and
number theory. Moskov. Gos. Ped. Inst. Ucen. Zap. 375 (1971), 80-84.
[18] B. H. Neumann, Groups with finite classes of conjugate elements, Proc. London Math. Soc. (3) 1 (1951),
178-187.
[19] B. H. Neumann, Groups covered by permutable subsets, J. London Math. Soc. 29 (1954), 236-248.
[20] B. H. Neumann, A problem of Paul Erdös on groups, J. Austral. Math. Soc. 21 (Series A) (1976), 467-472.
[21] P. Niroomand, The converse of Schur’s theorem, Arch. Math. 94 (2010), 401-403.
[22] K. Podoski and B. Szegedy, Bounds for the index of the centre in capable groups, Proc. Amer. Math. Soc.
133 (2005), 3441-3445.
[23] I. Schur, Über die Darstellung der endlichen Gruppen durch gebrochene lineare Substitutionen, Journal für
die reine und angewandte Mathematik 127 (1904), 20-50.
[24] B. Sury, A generalization of a converse of Schur’s theorem, Arch. Math. 95 (2010), 317-318.
[25] M. K. Yadav, Class-preserving automorphisms of finite p-groups: a survey, Groups St Andrews - 2009
(Bath), LMS Lecture Note Ser. 388 (2011), 569-579.
[26] M. K. Yadav, Class-preserving automorphisms of finite p-groups II, Israel J. Math. 209 (2015), 355-396.
DOI: 10.1007/s11856-015-1222-4
School of Mathematics, Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad
- 211 019, INDIA
E-mail address: [email protected]
arXiv:1611.08270v1 [math.CO] 24 Nov 2016
Status connectivity indices and co-indices of
graphs and its computation to intersection graph,
hypercube, Kneser graph and achiral polyhex
nanotorus
Harishchandra S. Ramane^a∗, Ashwini S. Yalnaik^a†, Reza Sharafdini^b‡
^a Department of Mathematics, Karnatak University, Dharwad-580003, India
^b Department of Mathematics, Faculty of Science, Persian Gulf University, Bushehr, 7516913817, Iran
March 19, 2018
Abstract
The status of a vertex u in a connected graph G, denoted by σ_G(u), is defined as the sum of the distances between u and all other vertices of a graph G. The first and second status connectivity indices of a graph G are defined as S1(G) = Σ_{uv∈E(G)} [σ_G(u) + σ_G(v)] and S2(G) = Σ_{uv∈E(G)} σ_G(u)σ_G(v) respectively, where E(G) denotes the edge set of G. In this paper we have defined the first and second status co-indices of a graph G as S̄1(G) = Σ_{uv∉E(G)} [σ_G(u) + σ_G(v)] and S̄2(G) = Σ_{uv∉E(G)} σ_G(u)σ_G(v) respectively. Relations between status connectivity indices and status co-indices are established. Also these indices are computed for the intersection graph, hypercube, Kneser graph and achiral polyhex nanotorus.
Keywords: Distance in graph, status indices, transmission regular graphs, intersection graph, Kneser graph, achiral polyhex nanotorus.
1 Introduction
The graph theoretic models can be used to study the properties of molecules
in theoretical chemistry. The oldest well known graph parameter is the Wiener
index which was used to study the chemical properties of paraffins [29]. The Zagreb indices were used to study the structural property models [15, 27]. Recently,
∗ [email protected]
† [email protected]
‡ [email protected]
Ramane and Yalnaik [24] introduced the status connectivity indices based on
the distances and correlated them with the boiling points of benzenoid hydrocarbons. In this paper we define the status co-indices of a graph and establish the
relations between the status connectivity indices and status co-indices. Also we
obtain bounds for the status connectivity indices of connected complement
graphs. Further we compute these status indices for the intersection graph, hypercube, Kneser graph and achiral polyhex nanotorus.
Let G be a connected graph with n vertices and m edges. Let V (G) =
{v1 , v2 , . . . , vn } be the vertex set of G and E(G) be an edge set of G. The edge
joining the vertices u and v is denoted by uv. The degree of a vertex u in a graph
G is the number of edges incident to u and is denoted by d_G(u). The distance
between the vertices u and v is the length of the shortest path joining u and v
and is denoted by dG (u, v).
The status (or transmission) of a vertex u ∈ V (G), denoted by σG (u) is
defined as [18],
σ_G(u) = Σ_{v∈V(G)} d(u, v).
A connected graph G is said to be k-transmission regular if σG (u) = k
for every vertex u ∈ V (G). The transmission regular graphs are exactly the
distance-balanced graphs introduced in [19]. They are also called as self-median
graphs [7].
The Wiener index W (G) of a connected graph G is defined as [29],
W(G) = Σ_{{u,v}⊆V(G)} d_G(u, v) = (1/2) Σ_{u∈V(G)} σ_G(u).
More results about Wiener index can be found in [9, 11, 16, 23, 25, 26, 28].
The first and second Zagreb indices of a graph G are defined as [15]
M1(G) = Σ_{uv∈E(G)} [d_G(u) + d_G(v)]   and   M2(G) = Σ_{uv∈E(G)} d_G(u)d_G(v).
Results on the Zagreb indices can be found in [10, 13, 14, 20, 22, 31].
The first and second Zagreb co-indices of a graph G are defined as [12]
M̄1(G) = Σ_{uv∉E(G)} [d_G(u) + d_G(v)]   and   M̄2(G) = Σ_{uv∉E(G)} d_G(u)d_G(v).
More results on Zagreb coindices can be found in [4, 5].
Recently, the first and second status connectivity index of a graph G have
been introduced by Ramane and Yalnaik [24] to study the property of benzenoid
hydrocarbons and these are defined as
S1(G) = Σ_{uv∈E(G)} [σ_G(u) + σ_G(v)]   and   S2(G) = Σ_{uv∈E(G)} σ_G(u)σ_G(v).   (1)
Similar to Eq. (1) and the definition of the Zagreb co-indices, we define here the
first status co-index S̄1(G) and the second status co-index S̄2(G) as
S̄1(G) = Σ_{uv∉E(G)} [σ_G(u) + σ_G(v)]   and   S̄2(G) = Σ_{uv∉E(G)} σ_G(u)σ_G(v).
Example:
Figure 1: a connected graph on five vertices v1, . . . , v5
For the graph given in Fig. 1, S1 = 74, S2 = 169, S̄1 = 11, S̄2 = 60.
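For readers who want to experiment, the following sketch, which assumes the networkx library, computes S1, S2 and the co-indices S̄1, S̄2 directly from the definitions above; the 5-cycle in the example call is only a placeholder, since the exact graph of Fig. 1 is not reproduced here.

```python
# Sketch of a direct computation of the status indices and co-indices,
# assuming the networkx library; the 5-cycle is only a placeholder graph.
import itertools
import networkx as nx

def status(G):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    return {u: sum(dist[u].values()) for u in G}   # sigma_G(u)

def status_indices(G):
    s = status(G)
    edges = list(G.edges())
    non_edges = [(u, v) for u, v in itertools.combinations(G, 2)
                 if not G.has_edge(u, v)]
    S1 = sum(s[u] + s[v] for u, v in edges)
    S2 = sum(s[u] * s[v] for u, v in edges)
    S1_bar = sum(s[u] + s[v] for u, v in non_edges)
    S2_bar = sum(s[u] * s[v] for u, v in non_edges)
    return S1, S2, S1_bar, S2_bar

print(status_indices(nx.cycle_graph(5)))   # (60, 180, 60, 180) for C5
```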
2 Status connectivity indices and co-indices
Status connectivity indices of connected graphs were obtained in [24]. In this section we obtain the status co-indices and also the status indices of complements.
Proposition 2.1. Let G be a connected graph on n vertices. Then
S̄1(G) = 2(n − 1)W(G) − S1(G)
and
S̄2(G) = 2(W(G))² − (1/2) Σ_{u∈V(G)} (σ_G(u))² − S2(G).
Proof.
S̄1(G) = Σ_{uv∉E(G)} [σ_G(u) + σ_G(v)]
      = Σ_{{u,v}⊆V(G)} [σ_G(u) + σ_G(v)] − Σ_{uv∈E(G)} [σ_G(u) + σ_G(v)]
      = (n − 1) Σ_{u∈V(G)} σ_G(u) − S1(G)
      = 2(n − 1)W(G) − S1(G).
Also
S̄2(G) = Σ_{uv∉E(G)} σ_G(u)σ_G(v)
      = Σ_{{u,v}⊆V(G)} σ_G(u)σ_G(v) − Σ_{uv∈E(G)} σ_G(u)σ_G(v)
      = (1/2)[(Σ_{u∈V(G)} σ_G(u))² − Σ_{u∈V(G)} σ_G(u)²] − S2(G)
      = 2(W(G))² − (1/2) Σ_{u∈V(G)} (σ_G(u))² − S2(G).
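A quick numerical spot-check of Proposition 2.1 (assuming the networkx library), with the Petersen graph chosen purely as an example:

```python
# Numerical spot-check of Proposition 2.1, assuming networkx.
import itertools
import networkx as nx

G = nx.petersen_graph()
dist = dict(nx.all_pairs_shortest_path_length(G))
sigma = {u: sum(dist[u].values()) for u in G}

n = G.number_of_nodes()
W = sum(sigma.values()) // 2
S1 = sum(sigma[u] + sigma[v] for u, v in G.edges())
S2 = sum(sigma[u] * sigma[v] for u, v in G.edges())
non_edges = [(u, v) for u, v in itertools.combinations(G, 2) if not G.has_edge(u, v)]
S1_bar = sum(sigma[u] + sigma[v] for u, v in non_edges)
S2_bar = sum(sigma[u] * sigma[v] for u, v in non_edges)

assert S1_bar == 2 * (n - 1) * W - S1
assert S2_bar == 2 * W**2 - sum(s**2 for s in sigma.values()) // 2 - S2
```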
Corollary 2.2. Let G be a connected graph with n vertices, m edges and diam(G) ≤ 2. Then
S̄1(G) = 2n(n − 1)² − 6m(n − 1) + M1(G)
and
S̄2(G) = (n − 1)²[2n(n − 1) − 8m] + 2m² + (2n − 5/2) M1(G) − M2(G).
Proof. For any graph G of diam(G) ≤ 2, σ_G(u) = 2n − 2 − d_G(u) and
W(G) = m + 2[n(n − 1)/2 − m] = n(n − 1) − m.
Also S1(G) = 4m(n − 1) − M1(G) and S2(G) = 4m(n − 1)² − 2(n − 1)M1(G) + M2(G) [24].
Therefore by Proposition 2.1,
S̄1(G) = 2(n − 1)[n(n − 1) − m] − [4m(n − 1) − M1(G)]
      = 2n(n − 1)² − 6m(n − 1) + M1(G)
and
S̄2(G) = 2[n(n − 1) − m]² − (1/2) Σ_{u∈V(G)} (2n − 2 − d_G(u))² − [4m(n − 1)² − 2(n − 1)M1(G) + M2(G)]
      = 2[n(n − 1) − m]² − (1/2)[n(2n − 2)² − 2(2n − 2) Σ_{u∈V(G)} d_G(u) + Σ_{u∈V(G)} d_G(u)²]
        − 4m(n − 1)² + 2(n − 1)M1(G) − M2(G)
      = 2[n(n − 1) − m]² − (1/2)[n(2n − 2)² − 4m(2n − 2) + M1(G)] − 4m(n − 1)² + 2(n − 1)M1(G) − M2(G)
      = (n − 1)²[2n(n − 1) − 8m] + 2m² + (2n − 5/2) M1(G) − M2(G).
Proposition 2.3. Let G be a connected graph with n vertices, m edges and diam(G) ≤ 2. Then
S̄1(G) = 2(n − 1)[n(n − 1) − 2m] − M̄1(G)
and
S̄2(G) = 2(n − 1)²[n(n − 1) − 2m] − 2(n − 1)M̄1(G) + M̄2(G).
Proof. For any graph G of diam(G) ≤ 2, σ_G(u) = 2n − 2 − d_G(u). Therefore
S̄1(G) = Σ_{uv∉E(G)} [(2n − 2 − d_G(u)) + (2n − 2 − d_G(v))]
      = [n(n − 1)/2 − m](4n − 4) − Σ_{uv∉E(G)} [d_G(u) + d_G(v)]
      = 2(n − 1)[n(n − 1) − 2m] − M̄1(G)
and
S̄2(G) = Σ_{uv∉E(G)} (2n − 2 − d_G(u))(2n − 2 − d_G(v))
      = [n(n − 1)/2 − m](2n − 2)² − (2n − 2) Σ_{uv∉E(G)} [d_G(u) + d_G(v)] + Σ_{uv∉E(G)} d_G(u)d_G(v)
      = 2(n − 1)²[n(n − 1) − 2m] − 2(n − 1)M̄1(G) + M̄2(G).
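Similarly, the closed diameter-two expressions of Corollary 2.2 and Proposition 2.3 can be spot-checked numerically; the sketch below (assuming networkx) uses the Petersen graph, which has diameter 2.

```python
# Spot-check of the diameter-two expressions above, assuming networkx.
import itertools
import networkx as nx

G = nx.petersen_graph()
assert nx.diameter(G) <= 2
n, m = G.number_of_nodes(), G.number_of_edges()
deg = dict(G.degree())

dist = dict(nx.all_pairs_shortest_path_length(G))
sigma = {u: sum(dist[u].values()) for u in G}
non_edges = [(u, v) for u, v in itertools.combinations(G, 2) if not G.has_edge(u, v)]

S1_bar = sum(sigma[u] + sigma[v] for u, v in non_edges)
M1 = sum(d**2 for d in deg.values())                    # first Zagreb index
M1_bar = sum(deg[u] + deg[v] for u, v in non_edges)     # first Zagreb co-index

assert S1_bar == 2 * n * (n - 1)**2 - 6 * m * (n - 1) + M1        # Corollary 2.2
assert S1_bar == 2 * (n - 1) * (n * (n - 1) - 2 * m) - M1_bar     # Proposition 2.3
```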
Proposition 2.4. Let G be a graph with n vertices and m edges. Let Ḡ, the complement of G, be connected. Then
S1(Ḡ) ≥ (n − 1)[n(n − 1) − 2m] + M̄1(G)
and
S2(Ḡ) ≥ (n − 1)²[n(n − 1)/2 − m] + (n − 1)M̄1(G) + M̄2(G).
Equality holds if and only if diam(Ḡ) ≤ 2.
Proof. For any vertex u in Ḡ there are n − 1 − d_G(u) vertices which are at distance 1 and the remaining d_G(u) vertices are at distance at least 2. Therefore
σ_Ḡ(u) ≥ [n − 1 − d_G(u)] + 2d_G(u) = n − 1 + d_G(u).
Therefore,
S1(Ḡ) = Σ_{uv∈E(Ḡ)} [σ_Ḡ(u) + σ_Ḡ(v)]
      ≥ Σ_{uv∉E(G)} [2n − 2 + d_G(u) + d_G(v)]
      = [n(n − 1)/2 − m](2n − 2) + Σ_{uv∉E(G)} [d_G(u) + d_G(v)]
      = (n − 1)[n(n − 1) − 2m] + M̄1(G).
And
S2(Ḡ) = Σ_{uv∈E(Ḡ)} σ_Ḡ(u)σ_Ḡ(v)
      ≥ Σ_{uv∉E(G)} {(n − 1)² + (n − 1)[d_G(u) + d_G(v)] + d_G(u)d_G(v)}
      = [n(n − 1)/2 − m](n − 1)² + (n − 1)M̄1(G) + M̄2(G).
For equality: if the diameter of Ḡ is 1 or 2, then equality holds.
Conversely, let S1(Ḡ) = (n − 1)[n(n − 1) − 2m] + M̄1(G). Suppose diam(Ḡ) ≥ 3; then there exists at least one pair of vertices, say u1 and u2, such that d_Ḡ(u1, u2) ≥ 3. Therefore σ_Ḡ(u1) ≥ d_Ḡ(u1) + 3 + 2(n − 2 − d_Ḡ(u1)) = n + d_G(u1). Similarly σ_Ḡ(u2) ≥ n + d_G(u2), and for all other vertices u of Ḡ, σ_Ḡ(u) ≥ n − 1 + d_G(u).
Partition the edge set of Ḡ into three sets E1, E2 and E3, where E1 = {u1v | σ_Ḡ(u1) ≥ n + d_G(u1) and σ_Ḡ(v) ≥ n − 1 + d_G(v)}, E2 = {u2v | σ_Ḡ(u2) ≥ n + d_G(u2) and σ_Ḡ(v) ≥ n − 1 + d_G(v)} and E3 = {uv | σ_Ḡ(u) ≥ n − 1 + d_G(u) and σ_Ḡ(v) ≥ n − 1 + d_G(v)}. It is easy to check that |E1| = d_Ḡ(u1), |E2| = d_Ḡ(u2) and |E3| = n(n − 1)/2 − m − d_Ḡ(u1) − d_Ḡ(u2).
Therefore
S1(Ḡ) = Σ_{uv∈E1} [σ_Ḡ(u) + σ_Ḡ(v)] + Σ_{uv∈E2} [σ_Ḡ(u) + σ_Ḡ(v)] + Σ_{uv∈E3} [σ_Ḡ(u) + σ_Ḡ(v)]
      ≥ Σ_{uv∈E1} [2n − 1 + d_G(u) + d_G(v)] + Σ_{uv∈E2} [2n − 1 + d_G(u) + d_G(v)] + Σ_{uv∈E3} [2n − 2 + d_G(u) + d_G(v)]
      = (2n − 1)d_Ḡ(u1) + (2n − 1)d_Ḡ(u2) + (2n − 2)[n(n − 1)/2 − m − d_Ḡ(u1) − d_Ḡ(u2)] + Σ_{uv∈E(Ḡ)} [d_G(u) + d_G(v)]
      = (n − 1)[n(n − 1) − 2m] + d_Ḡ(u1) + d_Ḡ(u2) + M̄1(G),
which is a contradiction. Hence diam(Ḡ) ≤ 2.
The equality case for S2(Ḡ) can be proved analogously.
3 Status indices and co-indices of some transmission regular graphs
Status indices of some standard graphs are obtained in [24].
A bijection α on V (G) is called an automorphism of G if it preserves E(G).
In other words, α is an automorphism if for each u, v ∈ V (G), e = uv ∈ E(G)
if and only if α(e) = α(u)α(v) ∈ E(G). Let
Aut(G) = {α | α : V (G) → V (G) is a bijection, which preserves the adjacency}.
It is known that Aut(G) forms a group under the composition of mappings.
A graph G is called vertex-transitive if for every two vertices u and v of G,
there exists an automorphism α of G such that α(u) = v. It is known that
any vertex-transitive graph is vertex degree regular, transmission regular and
7
Figure 2: The transmission regular but not degree regular graph with the
smallest order.
self-centred. Indeed, the graph depicted in Figure 2 is a 14-transmission regular
graph but not degree regular and therefore not vertex-transitive (see [1, 2]).
The following is straightforward from the definition of status connectivity
indices.
Lemma 3.1. Let G be a connected k-transmission regular graph with m edges.
Then S1(G) = 2mk and S2(G) = mk².
Theorem 3.2 ([3]). Let G be a connected graph on n vertices with the automorphism group Aut(G) and the vertex set V(G). Let V1, V2, · · ·, Vt be all the orbits of the action of Aut(G) on V(G). Suppose that for each 1 ≤ i ≤ t, k_i is the transmission of the vertices in the orbit V_i. Then
W(G) = (1/2) Σ_{i=1}^{t} |V_i| k_i.
Specially if G is vertex-transitive (i.e., t = 1), then W(G) = (1/2)nk, where k is the transmission of each vertex of G.
Analogous to Theorem 3.2 and as a consequence of Proposition 2.1, we have the following.
Theorem 3.3. Let G be a connected graph on n vertices with the automorphism group Aut(G) and the vertex set V(G). Let V1, V2, · · ·, Vt be all the orbits of the action of Aut(G) on V(G). Suppose that for each 1 ≤ i ≤ t, d_i and k_i are the vertex degree and the transmission of the vertices in the orbit V_i, respectively. Then
S1(G) = Σ_{i=1}^{t} |V_i| d_i k_i,   S̄1(G) = (n − 1) Σ_{i=1}^{t} |V_i| k_i (1 − d_i/(n − 1)).
Specially if G is vertex-transitive (i.e., t = 1), then
S1(G) = ndk,   S2(G) = (1/2)ndk²,
S̄1(G) = 2C(n,2)k − ndk,   S̄2(G) = [C(n,2) − nd/2]k²,
where d and k are the degree and the transmission of each vertex of G respectively.
The following is a direct consequence of Proposition 2.1, Lemma 3.1 and Theorem 3.2.
Lemma 3.4. Let G be a connected k-transmission regular graph with n vertices and m edges. Then S̄1(G) = 2C(n,2)k − 2mk and S̄2(G) = C(n,2)k² − mk².
Following [17] we recall intersection graphs as follows. Let S be a set and F = {S1, . . . , Sq} be a non-empty family of distinct non-empty subsets of S such that S = ∪_{i=1}^{q} S_i. The intersection graph of S, which is denoted by Ω(F), has F as its set of vertices, and two distinct vertices S_i, S_j, i ≠ j, are adjacent if and only if S_i ∩ S_j ≠ ∅. Here we will consider a set S of cardinality p and let F be the set of all subsets of S of cardinality t, 1 < t < p, which is denoted by S^{t}. Upon convenience we may set S = {1, 2, . . . , p}. Let us denote the intersection graph Ω(S^{t}) by Γ^{t} = (V, E). The number of vertices of this graph is |V| = C(p,t), and the degree d of each vertex is
d = C(p,t) − C(p−t,t) − 1 if p ≥ 2t,   and   d = C(p,t) − 1 if p < 2t.
The number of its edges is
|E| = (1/2)C(p,t)[C(p,t) − C(p−t,t) − 1] if p ≥ 2t,   and   |E| = (1/2)C(p,t)[C(p,t) − 1] if p < 2t.
Lemma 3.5 ([8]). The intersection graph Γ^{t} is vertex-transitive and for any t-element subset A of S we have
σ_{Γ^{t}}(A) = C(p,t) + C(p−t,t) − 1 if p ≥ 2t,   and   σ_{Γ^{t}}(A) = C(p,t) − 1 if p < 2t.
Moreover,
W(Γ^{t}) = (1/2)C(p,t)[C(p,t) + C(p−t,t) − 1] if p ≥ 2t,   and   W(Γ^{t}) = (1/2)C(p,t)[C(p,t) − 1] if p < 2t.
Theorem 3.6. For the intersection graph Γ^{t},
S1(Γ^{t}) = C(p,t)[C(p,t) − C(p−t,t) − 1][C(p,t) + C(p−t,t) − 1] if p ≥ 2t,   and   S1(Γ^{t}) = C(p,t)[C(p,t) − 1]² if p < 2t;
S2(Γ^{t}) = (1/2)C(p,t)[C(p,t) − C(p−t,t) − 1][C(p,t) + C(p−t,t) − 1]² if p ≥ 2t,   and   S2(Γ^{t}) = (1/2)C(p,t)[C(p,t) − 1]³ if p < 2t;
S̄1(Γ^{t}) = C(p,t)C(p−t,t)[C(p,t) + C(p−t,t) − 1] if p ≥ 2t,   and   S̄1(Γ^{t}) = 0 if p < 2t;
S̄2(Γ^{t}) = (1/2)C(p,t)C(p−t,t)[C(p,t) + C(p−t,t) − 1]² if p ≥ 2t,   and   S̄2(Γ^{t}) = 0 if p < 2t.
Proof. It is a direct consequence of Theorem 3.3 and Lemma 3.5.
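The expressions in Theorem 3.6 can be verified numerically for small parameters; the following sketch (assuming networkx and Python 3.8+ for math.comb) builds Γ^{2} on a 5-element set and checks the p ≥ 2t expressions for S1 and S2.

```python
# Numerical check of the intersection-graph formulas for the small case
# p = 5, t = 2, assuming networkx; math.comb needs Python 3.8+.
import itertools
from math import comb
import networkx as nx

p, t = 5, 2
vertices = list(itertools.combinations(range(p), t))
G = nx.Graph()
G.add_nodes_from(vertices)
G.add_edges_from((A, B) for A, B in itertools.combinations(vertices, 2)
                 if set(A) & set(B))                 # adjacent iff intersecting

dist = dict(nx.all_pairs_shortest_path_length(G))
sigma = {u: sum(dist[u].values()) for u in G}
S1 = sum(sigma[u] + sigma[v] for u, v in G.edges())
S2 = sum(sigma[u] * sigma[v] for u, v in G.edges())

k = comb(p, t) + comb(p - t, t) - 1                  # transmission (Lemma 3.5)
d = comb(p, t) - comb(p - t, t) - 1                  # degree (p >= 2t case)
assert S1 == comb(p, t) * d * k
assert S2 == comb(p, t) * d * k**2 // 2
```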
The vertex set of the hypercube H_n consists of all n-tuples (b_1, b_2, · · ·, b_n) with b_i ∈ {0, 1}. Two vertices are adjacent if the corresponding tuples differ in precisely one place. Moreover, H_n has exactly 2^n vertices and n2^{n−1} edges. Darafsheh [8] proved that H_n is vertex-transitive and for every vertex u, σ_{H_n}(u) = n2^{n−1}. Therefore, by Lemmas 3.1 and 3.4 we have the following result.
Theorem 3.7. For the hypercube H_n,
S1(H_n) = n²2^{2n−1} and S2(H_n) = n³2^{3n−3},
S̄1(H_n) = n2^{2n−1}(2^n − n − 1) and S̄2(H_n) = n²2^{3n−3}(2^n − n − 1).
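A small numerical check of the hypercube expressions above (assuming networkx); the co-index values are compared against the forms obtained via Lemma 3.4.

```python
# Spot-check of the hypercube formulas for small n, assuming networkx.
import itertools
import networkx as nx

for n in (2, 3, 4):
    G = nx.hypercube_graph(n)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    sigma = {u: sum(dist[u].values()) for u in G}
    S1 = sum(sigma[u] + sigma[v] for u, v in G.edges())
    S2 = sum(sigma[u] * sigma[v] for u, v in G.edges())
    non_edges = [(u, v) for u, v in itertools.combinations(G, 2)
                 if not G.has_edge(u, v)]
    S1_bar = sum(sigma[u] + sigma[v] for u, v in non_edges)
    S2_bar = sum(sigma[u] * sigma[v] for u, v in non_edges)

    assert S1 == n**2 * 2**(2 * n - 1)
    assert S2 == n**3 * 2**(3 * n - 3)
    assert S1_bar == n * 2**(2 * n - 1) * (2**n - n - 1)
    assert S2_bar == n**2 * 2**(3 * n - 3) * (2**n - n - 1)
```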
The Kneser graph KG_{p,k} is the graph whose vertices correspond to the k-element subsets of a set of p elements, and where two vertices are adjacent if and only if the two corresponding sets are disjoint. Clearly we must impose the restriction p ≥ 2k. The Kneser graph KG_{p,k} has C(p,k) vertices and it is regular of degree C(p−k,k). Therefore the number of edges of KG_{p,k} is (1/2)C(p,k)C(p−k,k) (see [21]). The Kneser graph KG_{n,1} is the complete graph on n vertices. The Kneser graph KG_{2p−1,p−1} is known as the odd graph O_p. The odd graph O_3 = KG_{5,2} is isomorphic to the Petersen graph (see Figure 3).
Figure 3: The odd graph O3 = KG5,2 is isomorphic to the Petersen graph
Lemma 3.8 ([21]). The Kneser graph KG_{p,k} is vertex-transitive and for each k-subset A, σ_{KG_{p,k}}(A) = 2W(KG_{p,k})/C(p,k).
The following proposition follows from Lemma 3.8 and Lemma 3.1.
Proposition 3.9. For a Kneser graph KG_{p,k} we have
S1(KG_{p,k}) = 2W(KG_{p,k}) C(p−k,k)
and
S2(KG_{p,k}) = 2(W(KG_{p,k}))² C(p−k,k)/C(p,k).
The following proposition follows from Proposition 2.1, Lemma 3.8 and Proposition 3.9.
Proposition 3.10. For a Kneser graph KG_{p,k} we have
S̄1(KG_{p,k}) = 2W(KG_{p,k})[C(p,k) − C(p−k,k) − 1]
and
S̄2(KG_{p,k}) = (2(W(KG_{p,k}))²/C(p,k))[C(p,k) − C(p−k,k) − 1].
An achiral polyhex nanotorus (also called a toroidal fullerene) is a graph nanostructure of perimeter p and length q, denoted by T [p, q]; it is depicted in Figure 4 and its 2-dimensional lattice is in Figure 5. It is regular of degree 3 and has pq vertices and 3pq/2 edges.
Figure 4: An achiral polyhex nanotorus (or toroidal fullerene) T [p, q]
The following lemma was proved in [3] and [30].
Figure 5: A 2-dimensional lattice for an achiral polyhex nanotorus T [p, q] (◦ vertices of degree 2, • vertices of degree 3)
(a)
Lemma 3.11 ([3],[30]). The achiral polyhex nanotorus T = T [p, q] is vertex transitive such that for an arbitrary vertex u ∈ V (T )
σ_T(u) = (q/12)(6p² + q² − 4) if q < p,   and   σ_T(u) = (p/12)(3q² + 3pq + p² − 4) if q ≥ p.
The following is a direct consequence of Lemma 3.1 and Lemma 3.11.
Corollary 3.12. Let T = T [p, q] be an achiral polyhex nanotorus. Then
S1(T) = (pq²/4)(6p² + q² − 4) if q < p,   and   S1(T) = (p²q/4)(3q² + 3pq + p² − 4) if q ≥ p.
And
S2(T) = (pq³/96)(6p² + q² − 4)² if q < p,   and   S2(T) = (p³q/96)(3q² + 3pq + p² − 4)² if q ≥ p.
Corollary 3.13. Let T = T [p, q] be an achiral polyhex nanotorus. Then
S̄1(T) = (pq²/12)(pq − 4)(6p² + q² − 4) if q < p,   and   S̄1(T) = (p²q/12)(pq − 4)(3q² + 3pq + p² − 4) if q ≥ p.
And
S̄2(T) = (pq³/288)(pq − 4)(6p² + q² − 4)² if q < p,   and   S̄2(T) = (p³q/288)(pq − 4)(3q² + 3pq + p² − 4)² if q ≥ p.
Proof. Since 2W(G) = Σ_{u∈V(G)} σ_G(u) and the polyhex nanotorus T [p, q] has pq vertices, by Lemma 3.11 the Wiener index of the polyhex nanotorus T [p, q] is as follows [30]:
W(T) = (pq²/24)(6p² + q² − 4) if q < p,   and   W(T) = (p²q/24)(3q² + 3pq + p² − 4) if q ≥ p.
Substituting this in Proposition 2.1 and Corollary 3.12 we get the results.
Acknowledgement: The author HSR is thankful to the University Grants
Commission (UGC), Govt. of India for support through grant under UGC-SAP
DRS-III, 2016-2021: F.510/3/DRS-III /2016 (SAP-I). The second author ASY
is thankful to the University Grants Commission (UGC), Govt. of India for support through Rajiv Gandhi National Fellowship No. F1-17.1/2014-15/RGNF2014-15-SC-KAR-74909.
References
[1] M. Aouchiche, G. Caporossi, P. Hansen, Variable neighborhood search for
extremal graphs. 8. Variations on Graffiti 105, Congr. Numer. 148 (2001)
129–144.
[2] M. Aouchiche, P. Hansen, On a conjecture about the Szeged index, European J. Combin. 31 (2010) 1662–1666.
[3] A. R. Ashrafi, Wiener Index of Nanotubes, Toroidal Fullerenes and Nanostars, in: The Mathematics and Topology of Fullerenes, (Eds. F. Cataldo,
A. Graovac and O. Ori), Springer Netherlands, Dordrecht, 2011, pp. 21–38.
[4] A. R. Ashrafi, T. Došlić, A. Hamzeh, The Zagreb coindices of graph operations, Discrete Appl. Math., 158 (2010) 1571–1578.
[5] A. R. Ashrafi, T. Došlić, A. Hamzeh, Extremal graphs with respect to
the Zagreb coindices, MATCH Commun. Math. Comput. Chem. 65 (2011)
85–92.
[6] N. Biggs, Algebraic Graph Theory, Cambridge University Press, 1993.
[7] S. Cabello, P. Luksic, The complexity of obtaining a distance-balanced
graph, Electron. J. Combin., 18 (2011) Paper 49.
[8] M. R. Darafsheh, Computation of topological indices of some graphs. Acta.
Appl. Math., 110 (2010) 1225–1235.
[9] K. C. Das, I. Gutman, Estimating the Wiener index by means of number of
vertices, number of edges and diameter, MATCH Commun. Math. Comput.
Chem., 64 (2010) 647–660.
[10] K. C. Das, K. Xu, J. Nam, Zagreb indices of graphs, Front. Math. China,
10 (2015) 567–582.
[11] A. A. Dobrynin, R. Entringer, I. Gutman, Wiener index of trees: theory
and applications, Acta Appl. Math. 66 (2001) 211–249.
[12] T. Došlić, Vertex-wieghted Wiener polynomials for composite graphs, Ars.
Math. Contemp. 1 (2008) 66–80.
[13] I. Gutman, K. C. Das, The first Zagreb index 30 years after, MATCH
Commun. Math. Comput. Chem., 50 (2004) 83–92.
[14] I. Gutman, B. Furtula, Ž. Kovijanić Vukićević, G. Popivoda, On Zagreb
indices and coindices, MATCH Commun. Math. Comput. Chem., 74 (2015)
5–16.
[15] I. Gutman, N. Trinajstić, Graph theory and molecular orbitals. Total πelectron energy of alternant hydrocarbons, Chem. Phys. Lett., 17 (1972)
535–538.
[16] I. Gutman, Y. Yeh, S. Lee, Y. Luo, Some recent results in the theory of
the Wiener number, Indian J. Chem., 32A (1993) 651–661.
[17] F. Harary, Graph Theory. Addison Wesley, Reading, 1968.
[18] F. Harary, Status and contrastatus, Sociometry, 22 (1959) 23–43.
[19] K. Handa, Bipartite graphs with balanced (a, b)-partitions, Ars Combin.,
51 (1999) 113–119.
[20] M. H. Khalifeh, H. Yousefi-Azari, A. R. Ashrafi, The first and second Zagreb indices of some graph operations, Discrete Appl. Math., 157 (2009)
804–811.
[21] R. Mohammadyari, M. R. Darafsheh, Topological indices of the Kneser
graph KG_{n,k}, Filomat, 26 (2012) 665–672.
[22] S. Nikolić, G. Kovačević, A. Miličević, N. Trinajstić, The Zagreb indices 30
years after, Croat. Chem. Acta, 76 (2003) 113–124.
[23] S. Nikolić, N. Trinajstić, Z. Mihalić, The Wiener index: development and
applications, Croat. Chem. Acta, 68 (1995) 105–129.
[24] H. S. Ramane, A. S. Yalnaik, Status connectivity indices of graphs and
its applications to the boiling point of benzenoid hydrocarbons, J. Appl.
Math. Comput., (2016) DOI 10.1007/s12190-016-1052-5.
[25] H. S. Ramane, V. V. Manjalapur, Note on the bounds on Wiener number
of a graph, MATCH Commun. Math. Comput. Chem., 76 (2016) 19–22.
[26] H. S. Ramane, D. S. Revankar, A. B. Ganagi, On the Wiener index of a
graph, J. Indones. Math. Soc., 18 (2012) 57–66.
[27] R. Todeschini, V. Consonni, Handbook of Molecular Descriptors, Wiley-VCH, Weinheim, 2000.
[28] H. B. Walikar, V. S. Shigehalli, H. S. Ramane, Bounds on the Wiener
number of a graph, MATCH Commun. Math. Comput. Chem., 50 (2004)
117–132.
[29] H. Wiener, Structural determination of paraffin boiling points, J. Am.
Chem. Soc., 69 (1947) 17–20.
[30] S. Yousefi, H. Yousefi-Azari, A. R. Ashrafi, M. H. Khalifeh, Computing
Wiener and Szeged Indices of an Achiral Polyhex Nanotorus, JSUT, 33
(2008) 7–11.
[31] B. Zhou, I. Gutman, Further properties of Zagreb indices, MATCH Commun. Math. Comput. Chem., 54 (2005) 233–239.
| 4 |
Quality and Diversity Optimization:
A Unifying Modular Framework
arXiv:1708.09251v1 [] 12 May 2017
Antoine Cully and Yiannis Demiris, Senior Member, IEEE
Abstract—The optimization of functions to find the best solution according to one or several objectives has a central role in
many engineering and research fields. Recently, a new family of
optimization algorithms, named Quality-Diversity optimization,
has been introduced, and contrasts with classic algorithms.
Instead of searching for a single solution, Quality-Diversity
algorithms are searching for a large collection of both diverse
and high-performing solutions. The role of this collection is to
cover the range of possible solution types as much as possible,
and to contain the best solution for each type. The contribution of
this paper is threefold. Firstly, we present a unifying framework
of Quality-Diversity optimization algorithms that covers the two
main algorithms of this family (Multi-dimensional Archive of
Phenotypic Elites and the Novelty Search with Local Competition), and that highlights the large variety of variants that
can be investigated within this family. Secondly, we propose a new selection mechanism for Quality-Diversity algorithms that leads to algorithms outperforming all the other variants tested in this
paper. Lastly, we present a new collection management that
overcomes the erosion issues observed when using unstructured
collections. These three contributions are supported by extensive
experimental comparisons of Quality-Diversity algorithms on
three different experimental scenarios.
Index Terms—Optimization Methods, Novelty Search, Quality-Diversity, Behavioral Diversity, Collection of Solutions.

[Fig. 1 schematic: a QD-algorithm maps the search space to a collection of diverse and high-performing solutions in the descriptor space; the panel distinguishes previously encountered solutions (not stored) from solutions contained in the collection, with the collection characterized by its Quality and its Coverage.]
Fig. 1: The objective of a QD-algorithm is to generate a collection of both diverse and high-performing solutions. This collection represents a (model-free) projection of the high-dimensional search space into a lower-dimensional space defined by the solution descriptors. The quality of a collection is defined by its coverage of the descriptor space and by the global quality of the solutions that are kept in the collection.

I. INTRODUCTION

Searching for high-quality solutions within a typically high-dimensional search space is an important part of engineering and research. Intensive work has been done in recent decades to produce automated procedures to generate these solutions, which are commonly called “Optimization Algorithms”. The applications of such algorithms are numerous and range from modeling purposes to product design [1]. More recently, optimization algorithms have become the core of most machine learning techniques. For example, they are used to adjust the weights of neural networks in order to minimize the classification error [2], [3], or to allow robots to learn new behaviors that maximize their velocity or accuracy [4], [5].

Inspired by the ability of natural evolution to generate species that are well adapted to their environment, Evolutionary Computation has a long history in the domain of optimization, particularly in stochastic optimization [6]. For example, evolutionary methods have been used to optimize the morphologies and the neural networks of physical robots [7], and to infer the equations behind collected data [8]. These optimization abilities are also the core of Evolutionary Robotics in which evolutionary algorithms are used to generate neural networks, robot behaviors, or objects [9], [10].

A. Cully and Y. Demiris are with the Personal Robotics Laboratory, Department of Electrical and Electronic Engineering, Imperial College London, U.K. (e-mail: [email protected]; [email protected]).

However, from a more general perspective and in contrast with Artificial Evolution, Natural Evolution does not produce one effective solution but rather an impressively large set of different organisms, all well adapted to their respective environment. Surprisingly, this divergent search aspect of Natural Evolution is rarely considered in engineering and research fields, even though the ability to provide a large and diverse set of high-performing solutions appears to be promising for multiple reasons.

For example, in a set of effective solutions, each provides an alternative in the case that one solution turns out to be less effective than expected. This can happen when the optimization process takes place in simulation, and the obtained result does not transfer well to reality (a phenomenon called the reality gap [11]). In this case, a large collection of solutions can quickly provide a working solution [4]. Maintaining multiple solutions and using them concurrently to generate actions or predict actions when done by other agents has also been shown to be very successful in bioinspired motor control and cognitive robotics experiments [12].

Moreover, most artificial agents, like robots, should be able to exhibit different types of behavior in order to accomplish their mission. For example, a walking robot needs to be able to move not only forwards, but in every direction and at different speeds, in order to properly navigate in its environment. Similarly, a robotic arm needs to be able to reach objects at different locations rather than at a single, predefined target. Despite this observation, most optimization techniques that are employed to learn behaviors output only a single solution:
the one which maximizes the optimized function [10], [7], [5]. tend to the global-optimum and the diversity of the produced
Learning generic controllers that are able to solve several tasks walking behaviors will not be enough to properly control the
is particularly challenging, as it requires testing each solution robot. For instance, it will not contain slow behaviors, which
on several scenarios to assess their quality [13]. The automatic are essential for the robot’s manoeuvrability. This example
creation of a collection of behaviors is likely to overcome these illustrates that sampling the entire range of possible solutions
limitations and will make artificial agents more versatile.
is not always related to searching for the local optima, and why
The diversity of the solutions could also be beneficial for it may be useful to have the diversity preservation mechanism
the optimization process itself. The exploration process may not correlated with the performance function, but rather based
find, within the diversity of the solutions, stepping stones that on differences in the solution type.
allow the algorithm to find even higher-performing solutions.
Similarly, the algorithms may be able to solve a given problem B. Searching for Diverse Solutions
faster if they can rely on solutions that have been designed
Following this idea of a non performance-based diversity
for different but related situations. For example, modifying mechanism, the Novelty Search algorithm [18] introduces the
an existing car design to make it lighter might be faster than idea of searching for solutions that are different from the
inventing a completely new design.
previous ones, without considering their quality. This concept
Attracted by these different properties several recent works, is applied by optimizing a “novelty score” that characterizes the
such as Novelty Search with Local Competition [14] and the difference of a solution compared to those already encountered,
MAP-Elites algorithm [15], started to investigate the question which are stored in a “novelty archive”. The novelty archive is
of generating large collections of both diverse and high- independent from the population of the evolutionary algorithm.
performing solutions. Pugh et al. [16], [17] nicely named this The novelty score is computed as the average distance of
question as the Quality-Diversity (QD) challenge.
the k-nearest neighboring solutions that currently are in the
After a brief description of the origins of QD-algorithms novelty archive, while the distances are computed according
in the next section, we unify these algorithms into a single to a user-defined solution descriptor (also called a behavioral
modular framework, which opens new directions to create characterization, or behavioral descriptor [18], [13]). When the
QD-algorithms that combine the advantages of existing meth- novelty score of a solution exceeds a pre-defined threshold,
ods (see section III). Moreover, we introduce a new QD- this solution is added to the archive and thus used to compute
algorithm based on this framework that outperforms the the novelty score of future solutions.
existing approaches by using a new selective pressure, named
The main hypothesis behind this approach is that, in some
the “curiosity score”. We also introduce a new archive cases, the optimal solutions cannot be found by simply maximanagement approach for unstructured archives, like the mizing the objective function. This is because the algorithm
novelty archive [18]. The performance of these contributions is first needs to find stepping stones that are ineffective according
assessed via an extensive experimental comparison involving to the objective function, but lead to promising solutions
numerous variants of QD-algorithms (see section IV). After afterwards. A good illustration of this problem is the “deceptive
the conclusion, we introduce the open-source library designed maze” [18] in which following the objective function inevitably
for this study, which can be openly used by interested readers leads to a dead-end (a local extremum). The algorithm has to
(see section VI).
investigate solutions that lead the agent further from the goal
before being able to find solutions that actually solve the task.
The authors of Novelty Search also introduced the “Novelty
II. R ELATED W ORKS AND D EFINITIONS
Search
with Local Competition” algorithm (NSLC) [14], in
While the notion of Quality-Diversity is relatively recent,
which
the
exploration focuses on solutions that are both novel
the problem of finding multiple solutions to a problem is a
(according
to the novelty score) and locally high-performing.
long-standing challenge.
The main insight consists of comparing the performance of a
solution only to those that are close in the descriptor space.
A. Searching for Local Optima
This is achieved with a “local quality score” that is defined
This challenge was first addressed by multimodal function as the number of the k-nearest neighboring solutions in the
optimization algorithms, including niching methods in Evolu- novelty archive with a lower performance (e.g., slower walking
tionary Computation [19], [20], [21], which aim to find the speed [14]) than the considered solution. The exploration is then
local optima of a function. These algorithms mainly involve achieved with a multi-objective optimization algorithm (e.g.,
niche and genotypic diversity preservation mechanisms [21], NSGA-II [26]) that optimizes both the novelty and local quality
like clustering [22] and clearing [23] methods.
scores of the solutions. However, the local quality score does
However, in many applications, some interesting solutions not influence the threshold used to select whether an individual
are not captured by the local-optima of the fitness function. For is added to the novelty archive. The final result of NSLC is
example, it is important for walking robots to be able to control the population of the optimization algorithm, which contains
the walking speeds, however, there is no guarantee that the solutions that are both novel and high-performing compared
performance function (i.e., the walking speed [24], [25]) will to other local solutions. In other words, the population gathers
show local-optima that are diverse enough to provide a complete solutions that are both different from those saved in the novelty
range of walking speeds. Typically, if the optimized function is archive, and high-performing when compared to similar types
mono-modal (i.e., without local-optima), the population would of solutions.
The first applications of NSLC consisted of evolving both
the morphology and the behavior of virtual creatures in order to
generate a population containing diverse species, ranging from
slow and massive quadrupeds to fast and lightweight unipedal
hoppers by comparing velocity only between similar species
[14]. In this experiment, the solution descriptor was defined
as the height, the mass and the number of active joints, while
the quality of the solutions was governed by their walking
speed. At the end of the evolutionary process, the population
contained 1,000 different species. These results represent the
very first step in the direction of generating a collection of
diverse and high-performing solutions covering a significant
part of the spectrum of possibilities.
C. Gathering and Improving these Solutions into Collections
optimizing each solution independently (at least 5 times faster
and about 10 times more accurate [13]). By “recycling” and
improving solutions that are usually discarded by traditional
evolutionary algorithms, the algorithm is able to quickly find
necessary stepping stones. This observation correlates with the
earlier presented hypothesis that QD-algorithms are likely to
benefit from the diversity contained in the collection to improve
their optimization and exploration abilities.
However, it has been noticed that the archive improvement
mechanism may “erode” the borders and alter the coverage of
the collection [29]. Indeed, there are cases where the new, and
better, solution found by the algorithm is less novel than the one
it will replace in the archive. For instance, if high-performance
can be more easily achieved for a solution in the middle of
the descriptor space, then it is likely that the solutions near the
borders will progressively be replaced by slightly better, but
less novel, solutions. In addition to eroding the borders of the
collection, this phenomenon will also increase the density of
solutions in regions with a high performance. For instance, this
phenomenon has been observed in the generation of collections
containing different walking and turning gaits [29]. The novelty
archive of the original NSLC algorithm had a better coverage
of the descriptor space (but with lower performance scores)
than the one from the BR-Evolution, because it is easier for the
algorithms to find solutions that make the robot walk slowly
rather than solutions that make it walk fast or execute complex
turning trajectories (In section III-A2 of this paper, we introduce
a new archive management mechanism that overcomes these
erosion issues).
Instead of considering the population of NSLC as the result
of the algorithms, Cully et al. [13] suggested to consider the
novelty archive as the result. Indeed, the aim of the novelty
archive is to keep track of the different solution types that
are encountered during the process, and thus to cover as
much as possible of the entire descriptor space. Therefore,
the novelty archive can be considered as a collection of diverse
solutions on its own. However, the solutions are stored in the
collection without considering their quality: as soon as a new
type of solution is found, it is added to archive. While this
procedure allows the archive to cover the entire spectrum of
the possible solutions, in the original version of NSLC only
the first encountered solution of each type is added to the
archive. This implies that when finding a better solution for
a solution type already present in the archive, this solution is
not added to the archive. This mechanism prevents the archive D. Evolving the Collection
from improving over time.
Following different inspirations from the works presented
Based on this observation, a variant of NSLC, named above, the Multi-dimensional Archive of Phenotypic Elites
“Behavioral Repertoire Evolution”(BR-Evolution [13]), has (MAP-Elites) algorithm [15] has been recently introduced.
been introduced to progressively improve the archive’s quality While this algorithm was first designed to “illuminate” the
by replacing the solutions that are kept in the archive with landscape of objective functions [30], it showed itself to be an
better ones as soon as they are found. This approach has been effective algorithm to generate a collection of solutions that are
applied to generate “Behavioral Repertoires” in robotics, which both diverse and high-performing. The main difference with
consists of a large collection of diverse, but effective, behaviors NSLC and BR-Evolution is that, in MAP-Elites, the population
for a robotic agent in a single run of an evolutionary algorithm. of the algorithms is the collection itself, and the selection,
It has also been used to produce collections of walking gaits, mutations and preservation mechanisms directly consider the
allowing a virtual six-legged robot to walk in every direction solutions that are stored in the collection.
and at different speeds. The descriptor space is defined as the
In MAP-Elites, the descriptor space is discretized and
final position of the robot after walking for 3 seconds, while represented as a grid. Initially, this grid is empty and the
the quality score corresponds to an orientation error. As we algorithm starts with a randomly generated set of solutions.
reproduce this experiment in this paper, we provide additional After evaluating each solution and recording its associated
descriptions and technical details in section IV-C.
descriptor, these solutions are potentially added to the correThe concepts introduced with BR-Evolution have also later sponding grid cells. If the cell is empty, then the solution is
been employed in the Novelty-based Evolutionary Babbling added to the grid, otherwise, only the best solution among the
(Nov-EB) [27] that allows a robot to autonomously discover new one and the one already in the grid is kept. After the
the possible interactions with objects in its environment. This initialization, a solution is randomly selected via a uniform
work draws a first link between the QD-algorithms and the distribution among those in the grid, and is mutated. The
domain of developmental robotics, which is also studied in new solution obtained after the mutation is then evaluated and
several other works (see [28] for overview).
fitted back in the grid following the same procedure as in
One of the main results that has been demonstrated with BR- the initialization. This selection/mutation/evaluation loop is
Evolution experiments is that this algorithm is able to generate repeated several millions times, which progressively improves
an effective collection of behaviors several times faster than by the coverage and the quality of the collection.
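The grid-based loop sketched above is compact enough to write down directly. The following Python illustration is a minimal sketch, assuming a toy two-dimensional task in which the descriptor is the solution itself; the objective, the grid resolution and all parameter values are placeholders rather than the settings used in the experiments of this paper.

```python
import random

def evaluate(x):
    """Toy task: descriptor = the solution itself (in [0,1]^2), quality = -distance to the centre."""
    desc = (x[0], x[1])
    quality = -((x[0] - 0.5) ** 2 + (x[1] - 0.5) ** 2)
    return desc, quality

def cell_of(desc, cells_per_dim=20):
    """Discretize a descriptor in [0,1]^2 into a grid-cell index."""
    return tuple(min(int(d * cells_per_dim), cells_per_dim - 1) for d in desc)

def map_elites(iterations=10000, cells_per_dim=20, sigma=0.05):
    grid = {}  # cell index -> (quality, solution): one elite per cell
    for it in range(iterations):
        if it < 100 or not grid:
            x = [random.random(), random.random()]              # random initialization
        else:
            parent = random.choice(list(grid.values()))[1]       # uniform selection in the grid
            x = [min(1.0, max(0.0, xi + random.gauss(0, sigma))) for xi in parent]  # mutation
        desc, quality = evaluate(x)
        cell = cell_of(desc, cells_per_dim)
        if cell not in grid or quality > grid[cell][0]:          # keep only the best solution per cell
            grid[cell] = (quality, x)
    return grid

if __name__ == "__main__":
    print(len(map_elites()), "cells filled")
```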
In one of its first applications, MAP-Elites was used to generate a large collection of different but effective ways to walk in a straight line by using the legs of a six-legged robot in different ways. This collection of behaviors was then used to allow the robot to quickly adapt to unforeseen damage conditions by selecting a new walking gait that still works in spite of the situation [4]. The same algorithm has also been used to generate behavioral repertoires containing turning gaits, similarly to the work described previously, and it was shown that MAP-Elites generates better behavior collections while being faster than the BR-Evolution algorithm [31].

The behaviors contained in these collections can be seen as locomotion primitives and thus can be combined to produce complex behaviors. Following this idea, the Evolutionary Repertoire-Based Control (EvoRBC [32]) evolves a neural network, called the “arbitrator”, that selects the appropriate behavior in the repertoire, which was previously generated with MAP-Elites. This approach has been applied on a four-wheeled steering robot that has to solve a navigation task through a maze composed of several sharp angles, and a foraging task in which the robot needs to collect and consume as many objects as possible.

Definition II.1: Quality-Diversity optimization
A Quality-Diversity optimization algorithm aims to produce a large collection of solutions that are both as diverse and high-performing as possible, which covers a particular domain, called the descriptor space.

While this definition is shared with the existing literature, we also stress the importance of the coverage regularity of the produced collections. In the vast majority of the applications presented previously, not only is the coverage of importance but its uniformity is as well. For example, in the locomotion tasks, an even coverage of all possible turning abilities of the robot is required to allow the execution of arbitrary trajectories [29].

Based on this definition, the overall performance of a QD-algorithm is defined by the quality of the produced collection of solutions according to three criteria:
1) the coverage of the descriptor space;
2) the uniformity of the coverage; and
3) the performance of the solution found for each type.
F. Understanding the Underlying Mechanisms
These applications take advantage of the non-linear dimenIn addition to direct applications, several other works focus
sionality reduction provided by MAP-Elites. Indeed, both
on
studying the properties of QD-algorithms. For example,
applications select behaviors from the descriptor space, which
Lehman
et al. [37] revealed that extinction events (i.e., erasing
is composed of fewer than a dozen of dimensions (respectively,
a
significant
part of the collection) increases the evolvability
36 to 6 dimensions [4] and 8 to 2 dimensions [32]), while the
of
the
solutions
[38] and allow the process to find higherparameter space often consists of several dozen dimensions.
performing
solutions
afterwards. For example, with MAP-Elites,
MAP-Elites has been employed in several other applications,
erasing
the
entire
collection
except 10 solutions every 100 000
including the generation of different morphologies of soft
generations
increases
the
number
of filled cells by 20% and the
robots [15], or the production of images that are able to fool
average
quality
of
the
solutions
by
50% in some experimental
deep neural networks [33]. It has also been used to create
setups
[37].
“innovation engines” that are able to autonomously synthesize
In other studies, Pugh et al. [16], [17] analyzed the impact of
pictures that resemble to actual objects (e.g., television, bagel,
the
alignment between the solution descriptor and the quality
strawberry) [34].
score on both Novelty-based approaches (including NSLC) and
However, the obligation to discretize the descriptor space
MAP-Elites. For example, if the descriptor space represents
may be limiting for some applications, and the uniform
the location of the robot in a maze, and the quality score
random selection may not be suitable for particularly large
represents the distance between this position and the exit,
collections, as it dilutes the selection pressure. Indeed, the
then the descriptor space and the quality score are strongly
uniform random selection of individuals among the collection
aligned because the score can be computed according to the
makes the selection pressure inversely proportional to the
descriptor. The experimental results show that in the case
number of solutions actually contained in the collection. A
of such alignments with the quality score, then novelty-based
simple way to mitigate this limitation is to use a biased
approaches are more effective than MAP-Elites, and vice-versa.
selection according to the solution performance or according
Another study also reveals that the choice of the encoding
to its novelty score (like introduced by Pugh et al. [16],
(the mapping between the genotype and the phenotype)
[17]). Another direction consists in having a number of cells
critically impacts the quality of the produced collections [39].
irrespective of the dimensionality descriptor space, for example
The experimental results link these differences to the locality
by using computational geometry to uniformly partition the
of the encoding (i.e., the propensity of the encoding to produce
high-dimensional descriptor space into a pre-defined number of
similar behaviors after a single mutation). In other words, the
regions [35], or by using Hierarchical Spatial Partitioning [36].
behavioral diversity provided by indirect encoding, which is
known to empower traditional evolutionary algorithms [40],
appears to be counterproductive with MAP-Elites, while the
locality of direct encodings allows MAP-Elites to consistently
E. Quality-Diversity Optimization
fill the collection of behaviors.
Based on the seminal works presented previously [14], [15],
These different works illustrate the interest of the community
[13] and the formulation of Pugh et al. [16], [17], we can in QD-algorithms and that our understanding of the underlying
outline a common definition:
dynamics is only in its early stages. However, very few works
compare MAP-Elites and NSLC on the same applications (the
• Finally, several scores, like the novelty, the local compefew exceptions being [16], [17], [31], [36]), or investigate
tition, or the curiosity (defined in section III-B3) score,
alternative approaches to produce collections of solutions. One
are updated.
of the goals of this paper is to introduce a new and common These four steps repeat until a stopping criterion is reached
framework for these algorithms to exploit their synergies and (typically, a maximum number of iterations) and the algorithm
to encourage comparisons and the creation of new algorithms. outputs the collection stored in the container. More details
The next section introduces this framework.
can be found in the pseudo-code of the algorithm, defined
in Algorithm 1. In the following subsections, we will detail
III. A UNITED AND MODULAR FRAMEWORK FOR
different variants of the container, as well as the selection
QD-O PTIMIZATION ALGORITHMS
operators.
As presented in the previous section, most works using
or comparing QD-algorithms consider either MAP-Elites or
NSLC-based algorithms, or direct comparisons of these two A. Containers
The main purpose of a container is to gather all the solutions
algorithms. These comparisons are relevant because of the
distinct origins of these two algorithms. However, they only found so far into an ordered collection, in which only the
provide high-level knowledge and do not provide much insight best and most diverse solutions are kept. One of the main
of properties or particularities which make one algorithm better differences between MAP-Elites and NSLC is the way the
collection of solutions is formed. While MAP-Elites relies on an
than the other.
In this section, we introduce a new and common framework N-dimensional grid, NSLC uses an unstructured archive based
for QD-algorithms, which can be instantiated with different on the Euclidean distance between solution descriptors. These
operators, such as different selection or aggregation operators, two different approaches constitute two different container
similarly to most evolutionary algorithms. This framework types. In the following, we will detail their implementation
demonstrates that MAP-Elites and NSLC can be formulated as and particularities.
1) The Grid: MAP-Elites employs an N-dimensional grid
the same algorithm using a different combination of operators.
Indeed, specific configurations of this framework are equivalent to form the collection of solutions [15], [4]. The descriptor
to MAP-Elites or NSLC. However, this framework opens new space is discretized and the different dimensions of the grid
perspectives as some other configurations lead to algorithms correspond to the dimensions of the solution descriptor. With
that share the advantages of both MAP-Elites and NSLC. For this discretization, each cell of the grid represents one solution
example, it can be used to design an algorithm that is as simple type. In the initial works introducing MAP-Elites, only one
as MAP-Elites but working on an unstructured archive (rather solution can be contained in each cell. However, one can
than a grid), or to investigate different selection pressures like imagine having more individuals per cell (like in [17] in
NSLC. Moreover, this decomposition of the algorithms allows which two individuals are kept). Similarly, in the case of
us to draw conclusions on the key elements that make an multi-objective optimization, each cell can represent the Pareto
algorithm better than the others (e.g., the selective pressure or front for each solution type. Nevertheless, these considerations
are outside the scope of this paper.
the way to form the collection).
This new formulation is composed of two main operators:
a) Procedure to add solutions into the container: The
1) a container, which gathers and orders the solutions into procedure to add an individual to the collection is relatively
a collection, and 2) the selection operator, which selects straight forward. If the cell corresponding to the descriptor of
the solutions that will be altered (via mutations and cross- the individual is empty, then the individual is added to the grid.
over) during the next batch (or generation). The selection Otherwise, if the cell is already occupied, only the individual
operator is similar to the selection operators used in traditional with the highest performance is kept in the grid.
evolutionary algorithms, except that it considers not only
b) Computing the novelty/diversity of a solution: The
the current population, but all the solutions contained in the inherent structure of the grid provides an efficient way to
container as well. Other operators can be considered with this compute the novelty of each solution. Instead of considering
new formulation, like the traditional mutation or cross-over the average distance of the k-nearest neighbors as a novelty
operators. However, in this paper we only consider the operators score, like suggested in [18], here we can consider the number
described above that are specific to QD-algorithms.
of filled cells around the considered individual. The density of
After a random initialization, the execution of a QD- filled cells of the sub-grid defined around the individual is a
algorithm based on this framework follows four steps that good indicator of the novelty of the solution. Similarly to the
are repeated:
“k” parameter used in the k-nearest neighbors, the sub-grid is
defined according to a parameter that governs its size, which
• The selection operator produces a new set of individuals
(bparents ) that will be altered in order to form the new is defined as ±k cells around the individual (in each direction).
In this case, the score needs to be minimized.
batch of evaluations (boffspring ).
• The individuals of boffspring are evaluated and their
2) The Archive: The novelty archive introduced in the
performance and descriptor are recorded.
Novelty Search algorithm consists of an unstructured collection
• Each of these individuals is then potentially added to
of solutions that are only organized according to their descriptor
the container, according to the solutions already in the and their Euclidean distance. As introduced in the BR-Evolution
collection.
algorithm [13], the novelty archive can be used to form the
Algorithm 1 QD-Optimization algorithm (I iterations)
A ← ∅                                          ▷ Creation of an empty container.
for iter = 1 → I do                             ▷ The main loop repeats during I iterations.
    if iter == 1 then                           ▷ Initialization.
        bparents ← random()                     ▷ The first 2 batches of individuals are generated randomly.
        boffspring ← random()
    else                                        ▷ The next controllers are generated using the container and/or the previous batch.
        bparents ← selection(A, boffspring)     ▷ Selection of a batch of individuals from the container and/or the previous batch.
        boffspring ← random_variation(bparents) ▷ Creation of a randomly modified copy of bparents (mutation and/or crossover).
    for each indiv ∈ boffspring do
        {desc, perf} ← eval(indiv)              ▷ Evaluation of the individual and recording of its descriptor and performance.
        if add_to_container(indiv) then         ▷ “add_to_container” returns true if the individual has been added to the container.
            curiosity(parent(indiv)) += Reward  ▷ The parent gets a reward by increasing its curiosity score (typically Reward = 1).
        else
            curiosity(parent(indiv)) −= Penalty ▷ Otherwise, its curiosity score is decreased (typically Penalty = 0.5).
    update_container()                          ▷ Update of the attributes of all the individuals in the container (e.g. novelty score).
return A
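As a complement to the pseudo-code, the skeleton below illustrates how the container and the selection operator can be kept as interchangeable components in the spirit of Algorithm 1. It is a simplified sketch with names of our own choosing (GridContainer, uniform_selection, qd_loop), not the Sferesv2 implementation used for the experiments.

```python
import random

class GridContainer:
    """Minimal grid container: one elite per discretized descriptor cell (descriptors in [0,1]^d)."""
    def __init__(self, cells_per_dim=20):
        self.cells_per_dim = cells_per_dim
        self.cells = {}

    def add(self, indiv):
        cell = tuple(min(int(d * self.cells_per_dim), self.cells_per_dim - 1)
                     for d in indiv["desc"])
        best = self.cells.get(cell)
        if best is None or indiv["perf"] > best["perf"]:
            self.cells[cell] = indiv
            return True                       # the offspring entered the container
        return False

    def solutions(self):
        return list(self.cells.values())

def uniform_selection(container, batch_size):
    """MAP-Elites-style operator: parents drawn uniformly from the container."""
    return [random.choice(container.solutions()) for _ in range(batch_size)]

def qd_loop(evaluate, dim, iterations, batch_size=200, container=None,
            select=uniform_selection, sigma=0.05, reward=1.0, penalty=0.5):
    container = container or GridContainer()
    for it in range(iterations):
        if it == 0:                            # first batch generated randomly
            parents = [None] * batch_size
            batch = [[random.random() for _ in range(dim)] for _ in range(batch_size)]
        else:                                  # parents selected from the container, then mutated
            parents = select(container, batch_size)
            batch = [[min(1.0, max(0.0, x + random.gauss(0, sigma))) for x in p["geno"]]
                     for p in parents]
        for parent, geno in zip(parents, batch):
            desc, perf = evaluate(geno)
            indiv = {"geno": geno, "desc": desc, "perf": perf, "curiosity": 0.0}
            added = container.add(indiv)
            if parent is not None:             # curiosity bookkeeping, as in Algorithm 1
                parent["curiosity"] += reward if added else -penalty
    return container

if __name__ == "__main__":
    toy_eval = lambda g: ((g[0], g[1]), -sum((x - 0.5) ** 2 for x in g))
    print(len(qd_loop(toy_eval, dim=2, iterations=100).solutions()), "solutions")
```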
collection of solutions by substituting solutions when better
ones are found. In contrast with the grid container presented
previously, the descriptor space here is not discretized and
the structure of the collection autonomously emerges from the
encountered solutions.
a) Procedure to add solutions into the container: The
management of the solutions is crucial with this container
because it affects both the quality, and the final coverage of the
collection. A first attempt was proposed in the BR-Evolution
algorithm [13] by extending the archive management of the
Novelty Search [18]: an individual is added to the archive if
its novelty score exceeds a predefined threshold (which can be
adjusted over time), or if it outperforms its nearest neighbor in
the archive. In the second case, the nearest neighbor is removed
from the archive and only the best of the two individuals is
kept.
While this archive management is relatively simple, further
experiments reveal underlying limitations [29]. First, an individual with the same (or very close) descriptor as another
individual can be added to the archive. Indeed, the novelty
score, which is based on the average distance of the k-nearest
neighbors, can still be relatively high even when two individuals
are close if the rest of the collection is further. One of the
consequences of using the novelty score as a criterion to add
the solution in the container is that the collection is likely to
show an uneven density of solutions [13], [29]. For example,
experiments in these works show collections that contain a high
density of solutions in certain regions (the inter-individuals
distance being notably lower than the Novelty Score threshold
used to add individual into the collection). While this property
can be interesting for some applications, it mainly originates
from a side effect. Second, the same experiments reveal that
the replacement of individuals by better ones can erode the
border of the collection, as discussed in the previous section.
Indeed, in some cases, the individuals in the center of the
collection show better performance than the ones in its border
(because of the intrinsic structure of the performance function
or because the center has been more intensively explored).
This can lead to the replacement of individuals that are on
the border of the collection by individuals that are closer to
the center. This is an important limitation as it reduces the
[Fig. 2 schematic, panels A–C: axes Novelty and Quality; a candidate I2 competes against its nearest neighbour I1 (novelty N1, quality Q1) within a distance l; the zone dominating I1 is delimited by margins ε × N1 and ε × Q1.]
Fig. 2: Management of collections of solutions based on an unstructured archive. A) A solution is directly added to the collection if its nearest neighbor from the collection is further than l. B) Conversely, if the distance is smaller than l (i.e., if the circles overlap), the new solution is not automatically added to the collection, but competes against its nearest neighbor. If the new solution dominates the one already in the collection, then the new solution replaces the previous one. C) In the strict ε-domination, a solution dominates another one if the progress in one objective is larger than the decrease in the other objective (up to a predefined value ε).
coverage of the collection, as shown in [29].
In order to mitigate these limitations, we propose the
following new way to manage solutions in the archive. A
solution is added to the archive if the distance to its nearest
neighbor exceeds a predefined threshold l (see Fig. 2.A). This
parameter defines the maximal density of the collection. The
threshold is similar to the novelty score threshold used in the
original Novelty Search algorithm, except that in this case we
only consider the distance of the nearest neighbor, and not the
average distance of the k-nearest ones.
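A minimal sketch of this archive management is given below; the replacement rule and the exclusive ε-dominance test that it uses are detailed in the following paragraphs. The data layout (dictionaries with 'desc' and 'perf' fields) and the default values of l, ε and k are illustrative assumptions rather than prescriptions from the paper.

```python
import math

def dist(a, b):
    """Euclidean distance between two descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def novelty(s, archive, k=15):
    """Average distance to the k nearest descriptors stored in the archive."""
    ds = sorted(dist(s["desc"], o["desc"]) for o in archive if o is not s)
    return sum(ds[:k]) / max(1, len(ds[:k]))

def exclusive_eps_dominates(n1, q1, n2, q2, eps=0.1):
    """Exclusive epsilon-dominance on (novelty, quality), both to be maximized."""
    return (n1 >= (1 - eps) * n2 and
            q1 >= (1 - eps) * q2 and
            (n1 - n2) * q2 > -(q1 - q2) * n2)

def try_add(archive, cand, l=0.01, eps=0.1):
    """Add `cand` (dict with 'desc' and 'perf') to the unstructured archive, following Fig. 2."""
    if not archive:
        archive.append(cand)
        return True
    neighbours = sorted(archive, key=lambda s: dist(s["desc"], cand["desc"]))
    if dist(neighbours[0]["desc"], cand["desc"]) > l:        # panel A: a new solution type
        archive.append(cand)
        return True
    second = dist(neighbours[1]["desc"], cand["desc"]) if len(neighbours) > 1 else float("inf")
    if second <= l:                                          # replacement would violate the density limit
        return False
    if exclusive_eps_dominates(novelty(cand, archive), cand["perf"],
                               novelty(neighbours[0], archive), neighbours[0]["perf"], eps):
        archive.remove(neighbours[0])                        # panels B/C: the candidate wins the competition
        archive.append(cand)
        return True
    return False
```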
If the distance between the new individual and its nearest
neighbor is lower than l, then this new individual can potentially
replace its nearest neighbor in the collection. This is only the
case if its distance from its second nearest neighbor exceeds
the l parameter, such that the distance among the solutions is
preserved (see Fig. 2.B) and if it improves the overall quality
of the collection. A new individual can improve the overall
collection in two ways: 1) if it has a better quality, which
increases the total quality of the collection or 2) if it has a
better novelty score, which means that it extends the coverage
of the collection. This can be seen as two objectives that
For these reasons, the choice of the most suitable container
need to be maximized. From this perspective, we can use depends more on the considered applications, rather than on
the definition of Pareto dominance to decide if an individual their performance. Therefore, while we will consider both of
should replace another one already in the collection. Therefore, the containers in the experimental section of this paper, we
a simple criterion could be to replace an existing individual, will not directly compare their results, as the comparison may
only if it is dominated by the new one. However, this criterion not be fair and may be irrelevant with respect to the considered
is very difficult to reach, as the new individual should be both applications.
better and more diverse than the previous one. This prevents
most new individuals from being added to the collection, which limits the quality of the produced collections.
In order to soften this criterion, we introduce a variant of the ε-dominance [41], that we name the exclusive ε-dominance. In this variant, we tolerate the dominating individual being worse than the other individual according to one of the objectives (up to a predefined percentage governed by ε), only if it is better on the other objective by at least the same amount (see Fig. 2.C). This criterion is more strict than the original ε-dominance, which allows an individual to be dominated by another one that is worse on both objectives. From a mathematical point of view, an individual x1 dominates x2 if these three points are verified:
1) N(x1) ≥ (1 − ε) ∗ N(x2)
2) Q(x1) ≥ (1 − ε) ∗ Q(x2)
3) (N(x1) − N(x2)) ∗ Q(x2) > −(Q(x1) − Q(x2)) ∗ N(x2)
with N corresponding to the Novelty Score and Q to the Quality (or performance) of an individual, which both need to be maximized¹. This set of conditions makes the addition of new individuals in the container more flexible, but rejects individuals that do not improve the collection.

These two containers have been designed to provide uniform coverage of the descriptor space. However, experiments reveal that the accumulation of density on specific regions of the descriptor space is a key factor for the Novelty Search algorithm, as it allows the novelty score to constantly change over time. To avoid this issue, one can use an additional container in which the density accumulates and that drives the exploration, while the other container gathers the collection that will be returned to the user. In this paper, we will only focus on variants using only one container, however we will consider extending the framework presented in this paper to multiple containers in future works.

B. Selection Operators
The second main difference between MAP-Elites and NSLC is the way the next batch, or population², of solutions is selected before being evaluated. On the one hand, MAP-Elites forms the next batch by randomly selecting solutions that are already in the collection. On the other hand, NSLC relies on the current population of solutions and selects the individuals that are both novel and locally high-performing (according to a Pareto front).
This difference is of significant importance as MAP-Elites uses
The experimental results presented in section IV demonstrate the entire collection of solutions, while NSLC only considers
that this new archive management overcomes the limitation of a smaller set of solutions.
the previous approaches by producing collections with similar
Similarly to the concept of containers, different approaches
coverage and quality compared with the grid-based container. for selecting the individuals of the next batch can be considered.
b) Computing the novelty of a solution: With the archive- In the following subsections, we will present several selection
based container, the computation of the novelty score can be methods that can be employed with both container types.
done with the traditional approach proposed by Lehman and
1) No Selection: A naive way to generate the next batch
Stanley [18], which consists of the average distance of the of evaluation is to generate random solutions. However, this
k-nearest neighbors.
approach is likely ineffective because it makes the QD3) Partial Conclusion: These two different types of con- algorithm equivalent to a random sampling of the search
tainers both present advantages and disadvantages. On the one space. In general, this approach provides an intuition about
hand, the grid-based container provides a simple and effective the difficulty of the task and can be used as a base-line when
way to manage the collection. However, it requires discretizing comparing alternative approaches.
the descriptor space beforehand, which can be problematic
2) Uniform Random Selection: A second way to select
for example if the discretization is not adequate, or needs to solutions that will be used in the next batch is to select solutions
be changed over time. On the other hand, the archive-based with a uniform probability from those that are already in the
container offers more flexibility, as it only requires the definition collection. This approach is the one used in MAP-Elites and
of a distance in the descriptor space. For example, specific follows the idea that promising solutions are close to each other.
distances can be used to compare complex descriptors, like In addition to being relatively simple, this approach has the
images, without a strong knowledge of the structure of the advantage of being computationally effective. However, one of
descriptor space (e.g., number of dimensions or limits) [27]. its main drawbacks is that the selection pressure decreases as
However, this advantage is a disadvantage as well, because it the number of solutions in the collection increases (the chance
implies that the algorithm needs to find the appropriate structure for a solution to be selected being inversely proportional to
of the collection on its own, which represents an additional
challenge compared to the grid-based container.
¹ This definition could very likely be generalized to more than two objectives, but this question is beyond the scope of this paper.
² We use the word batch instead of generation because most of the approaches presented in this paper can be used in a “steady state”, selecting and evaluating only one individual at each iteration. However, considering the selection and evaluation in batches allows the algorithm to execute the evaluation in parallel, which increases the computational efficiency of the algorithm.
the number of solutions in the collection), which is likely to
be ineffective with large collections.
3) Score Proportionate Selection: An intuitive way to
mitigate the loss of selection pressure from the random selection
is to bias the selection according to a particular score. Similarly
to traditional evolutionary algorithms, the selection among the
solutions of the collection can be biased according to their
quality (fitness), following the roulette wheel or the tournament-based selection principles [42].
Other scores can also be considered to bias the selection.
For example, the novelty score of each solution can substitute
for the quality score for fostering the algorithm to focus on
solutions that are different from the others.
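Such a score-proportionate selection can be written generically for any score stored with the solutions (quality, novelty, or the curiosity score introduced below). The roulette-wheel sketch that follows is a minimal illustration; shifting the scores so that all weights are positive is one possible convention, not the paper's.

```python
import random

def roulette_wheel(collection, score, batch_size):
    """Sample parents with probability proportional to a (shifted) score.

    collection: list of individuals; score: function mapping an individual to a float.
    """
    raw = [score(ind) for ind in collection]
    shift = min(raw)
    weights = [s - shift + 1e-9 for s in raw]   # make every weight strictly positive
    return random.choices(collection, weights=weights, k=batch_size)
```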
In addition to these two scores, in this paper we introduce
a new score, named the Curiosity Score, that can be used to
bias the selection and which is defined as follows:
Definition III.1: Curiosity Score
The curiosity score represents the propensity of an individual to generate offspring that are added to the collection.

fitter than any yet existing [46]. One important aspect shared by these two definitions is that the score or the evolvability may dynamically change over time according to the state of the population or the collection, which is rarely considered in evolvability’s definitions. For instance, the definition often used in Evolutionary Computation [38], [45], [30], which considers that the evolvability captures the propensity of random variations to generate phenotypic diversity, depends on the genome of the individual but not on the state of the population.

4) Population-based Selection: All selection approaches described so far select the individuals from the solutions contained in the collection. This represents one of the main differences introduced by MAP-Elites compared to NSLC and traditional evolutionary algorithms, as the collection becomes the “population” of the algorithm and this population progressively grows during the evolutionary process. However, to handle the selection, we can consider employing populations in parallel to the collection. This is in line with the Novelty Search algorithm which computes the novelty score based on the Collection (the Novelty archive), but instead uses a traditional population to handle the selection.
A practical implementation (see Algorithm 1) consists of
This approach can be included in the framework proposed
increasing the curiosity score of an individual (initially set to in this paper by initializing the population with the first batch
zero) each time one of its offspring is added to the collection. and then, after each batch evaluation, a new population can be
Conversely, when an offspring fails to be added to the archive generated based on the individuals of the current population
(because it is not novel or high-performing enough), the (boffspring ) and their parents (bparents ). Classic selection
Curiosity Score is decreased. In this paper, we use respectively approaches, like tournament or score proportionate, can be
1 and −0.5 for the reward and the penalty values. With this employed to select the individuals that will be part of the
implementation, individuals may gain momentum, but this next population. Like in the collection-based selection, the
means that such individual will be selected more often, making selection can be biased according to either the quality, novelty
their score more likely to rapidly decrease.
or curiosity scores.
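A minimal sketch of this curiosity bookkeeping is shown below, using the reward of 1 and penalty of 0.5 reported above; the function name and the dictionary-based representation of individuals are our own conventions.

```python
def update_curiosity(parent, offspring_added, reward=1.0, penalty=0.5):
    """Curiosity bookkeeping: the score starts at zero, is increased each time one of the
    parent's offspring enters the container, and decreased each time an offspring is rejected."""
    if parent is None:          # individuals of the initial random batches have no parent
        return
    parent["curiosity"] = parent.get("curiosity", 0.0) + (reward if offspring_added else -penalty)
```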
We named this score “Curiosity” because it encourages
5) Pareto-based Selection: The population-based selection
the algorithm to focus on individuals that produce interesting approach can be extended to multi-objective selection, via
solutions, until nothing new is produced. In other words, the the Pareto ranking, by taking inspiration from the NSGA-II
algorithm focuses on regions of the search space as long as algorithm [26]. In this paper, we will particularly consider
they produce interesting results, then, when the algorithm a Pareto-based selection operator that takes both the novelty
gets “bored”, it focuses its attention on different regions. This score and the local quality score (number of neighbors that
behavior is similar to the one of the “Intelligent Adaptive outperform the solution) of the individuals into account. This
Curiosity” [43], while the implementation and the inspiration selection operator is similar to the selection procedure of NSLC.
are strictly different.
6) Partial Conclusion: These different selection operators
A similar approach has recently been introduced to bias can all be equally used with both of the containers presented
the selection by using the same kind of successful offspring in the previous section. While the choice of the container
counter [44]. The difference is that, in this paper, the counter is influences the type of the produced results (e.g., unstructured
initialized to a fixed value (i.e., 10 in [44]) instead of starting or discretized descriptor space, see section III-A3), the selecat 0 like with the curiosity score, and that when an offspring tion operators will only influence the quality of the results.
is added to the collection, the counter is not incremented (like Therefore, it is of importance to know which operators provide
with the curiosity score), but rather reset to its maximal value. the best collection of solutions. In the following section, we
This difference make the selection process more forgivable, as provide a first answer to this question by comparing the
only one successful offspring is enough to make its parent very collections produced by the different selection operators and
likely to be selected again. While it would be interesting to by investigating their behaviors.
compare the effect of these two different, but related, methods,
this comparison is out of the scope of this paper.
IV. E XPERIMENTAL C OMPARISONS
Although there is no overall agreement on the definition of
evolvability [45], we can note that our definition of the curiosity
To compare the different combinations of containers and
score shares similarities with some of the first definitions of selection operators, we consider three experimental scenarios
evolvability, like the one given by Lee Altenberg who defines that take place in simulation: 1) a highly redundant robotic
the evolvability as the ability of a population to produce variants arm discovering how to reach points in its vicinity, 2) a virtual
TABLE I: The different combinations of containers and selection operators that are evaluated in this paper. The variants in bold
are tested on the three experimental scenarios while the others are only tested on the first one.
Variant name
arch no selection
arch random
arch pareto
arch fitness
arch novelty
arch curiosity
arch pop fitness
arch pop novelty
arch pop curiosity
grid no selection
grid random
grid pareto
grid fitness
grid novelty
grid curiosity
grid pop fitness
grid pop novelty
grid pop curiosity
NSLC
Container
archive
archive
archive
archive
archive
archive
archive
archive
archive
grid
grid
grid
grid
grid
grid
grid
grid
grid
grid
Selection Op.
noselection
random
Pareto
Score-based
Score-based
Score-based
Population-based
Population-based
Population-based
noselection
random
Pareto
Score-based
Score-based
Score-based
Population-based
Population-based
Population-based
Population & archive based
Considered Value
Novelty & Local Quality
Fitness
Novelty
Curiosity
Fitness
Novelty
Curiosity
Novelty & Local Quality
Fitness
Novelty
Curiosity
Fitness
Novelty
Curiosity
Novelty & Local Quality
Related approach
Random Search / Motor Babbling
MAP-Elites with Novelty [16]
Traditional EA
Novelty Search [18]
Random Search / Motor Babbling
MAP-Elites [15]
Traditional EA
Novelty Search with Local Competition [14]
six-legged robot learning to walk in every direction, and 3) the
same robot searching for a large number of ways to walk on a
straight line.
In addition to the tested combinations of containers and
selection operators, we include the original Novelty Search with
Local Competition algorithm (NSLC, [14]) in our experimental
comparisons in order to assess the influence of the lack of
density accumulation in the descriptor space, as discussed in
section III-A3. Like in [16], all individuals of the population
are potentially added to a grid container (the same as the one
used with the others variants) after each generation. We then
used the produced grid container to compare NSLC with the
other variants. For this experiment, we used the implementation
of NSLC provided by the Sferesv2 framework [47].
In the experiments presented in this paper, we only consider
direct encodings with genomes that are small and fixed in
size. It would be interesting to see how the conclusion drawn
from these experiments hold with large genomes, genomes of
increasing complexity over generations, or indirect encodings.
For instance, [39] highlights that indirect encodings may have
a negative impact on QD-algorithms. However, these further
considerations are out of the scope of this paper and will be
considered in future works.
be improved either by finding additional individuals or by
improving those already in the collection. It corresponds to
the metric named “Quality Diversity” used in [16].
4) Total Novelty: This metric is similar to the previous one,
except that the sum considers the novelty score and not the
quality value. This metric indicates if the collection is well
distributed over the description space or rather if the solutions
are highly concentrated. This metric will not be considered for
collections produced with the grid-based container because the
distribution of the solutions is forced by the grid.
Other metrics: In [15], [39], the authors presented other
metrics to compare collections produced by MAP-Elites.
However, the main idea of these metrics is to normalize the
quality of each solution by the maximal quality that can be
achieved by each type of solution (i.e., by each grid cell). To
infer the highest possible quality for each cell, the authors
selected the best solution found by all the algorithms over all
the replicates. However, this approach is only feasible with the
grid-based container because the continuous descriptor space
used in the archive-based container makes it challenging to
associate and compare each “solution type”. For this reason,
in this paper we decided to only consider the four metrics
presented previously.
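These four metrics can be computed directly from a final collection. The sketch below assumes each solution is represented as a dictionary holding its quality and, for archive-based containers, its novelty score; this representation is ours, not the paper's.

```python
def collection_metrics(collection):
    """Collection Size, Maximal Quality, Total Quality and Total Novelty of a final collection.

    collection: iterable of dicts with a 'quality' key and, optionally, a 'novelty' key.
    """
    qualities = [s["quality"] for s in collection]
    novelties = [s.get("novelty", 0.0) for s in collection]
    return {
        "collection_size": len(qualities),                       # coverage of the descriptor space
        "maximal_quality": max(qualities) if qualities else None,
        "total_quality": sum(qualities),                         # the "Quality Diversity" metric of [16]
        "total_novelty": sum(novelties),                         # not reported for grid-based containers
    }
```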
B. The Redundant Arm

1) Experimental Setup: In this first experimental comparison, we consider a redundant and planar robotic arm with 8 degrees of freedom that needs to discover how to reach every point in its vicinity. The quality function captures the idea that all the joints of the arm should contribute equally to the movement, which allows quick transitions from one configuration to the next one. This constraint is defined by the variance of the angular position of the joints when the robot reaches its final configuration, and needs to be minimized by the algorithm. This experimental setup illustrates the need of quality-diversity algorithms because it needs to simultaneously find a solution for all the reachable positions and to optimize the quality function for each of them.
[Figure 3 shows one collection per variant (No_selection, Random, Pareto, Fitness, Novelty, Curiosity, NSLC, Pop_fitness, Pop_novelty, Pop_curiosity) for the archive-based and grid-based containers; the color scale gives the Quality in rad², from 0 down to −0.7.]
Fig. 3: Typical collections of solutions produced with QD-algorithms. These collections consist of several thousand colored
dots or cells that represent the final position of the gripper. The color of each dot or cell indicates the quality of the solution
(lighter is better).
TABLE II: Parameter values used in the experiments.
Parameters                        | First exp       | Second exp        | Third exp
Batch size                        | 200             | 200               | 200
No. of Iterations                 | 50,000          | 10,000            | 20,000
Descriptor size                   | 2               | 2                 | 6
Genotype size                     | 8               | 36                | 36
Genotype type                     | real values     | sampled values    | sampled values
Crossover                         | disabled        | disabled          | disabled
Mutation rate for each parameter  | 12.5%           | 5%                | 5%
Mutation type                     | Polynomial      | Random new value  | Random new value
Grid container:
  Grid size                       | 100 × 100       | 100 × 100         | 5 cells/dim
  Sub-grid depth                  | ±3 cells        | ±5 cells          | ±1 cells
Archive container:
  l                               | 0.01            | 0.01              | 0.25
  ε                               | 0.1             | 0.1               | 0.1
  k                               | 15              | 15                | 15
NSLC variant:
  ρinit                           | 0.01            | 0.01              | 1
  k                               | 15              | 15                | 15
To simulate the robotic arm, we consider its kinematic
structure, which provides the location of its gripper according
to the angular position of all joints. The solutions that are
optimized by the algorithms consist of a set of angular positions
that govern the final configuration of the different joints of the
robot. Neither the trajectory of the robot between its initial
and final positions, nor internal collisions are simulated in this
experiment.
The solution descriptor is defined as the final position of
the gripper, which is then normalized according to a square
bounding box to have values between 0 and 1. The size of the
bounding box is 2 ∗ 1.1 ∗ L, where L is the total length of the
robot when totally deployed (the factor 1.1 is used to leave
some room between the border of the descriptor space and the
robot). The center of the box corresponds to the location of
the robot’s base.
An extensive set of configurations from the QD-algorithm
framework (see algorithm 1) has been tested on this experimental setup (see Table I), and the execution of each of those
variants has been replicated 20 times. The parameter values
used for this experiment can be found in Table II.
2) Results: A typical collection of solutions produced by
each of the tested variants is pictured in Figure 3. The
collections using the archive-based container appear very
similar to those using the other container type. This similarity,
which holds in the other experiments as well, demonstrates that
the archive management introduced in this paper successfully
addresses the erosion issues described previously. Theoretically,
the ideal result homogeneously covers a quasi-circular region
and the performance (i.e., the color) should be arranged in
concentric shapes resembling cardioids (inverted, heart-shaped
curves)3 . This type of collection is found using the random,
the fitness or the curiosity-based selection operators (over the
collection) regardless of the container type used, as well as
with the NSLC algorithm. The novelty based selection with
the archive-based container also produces such a collection,
while this is not the case with the grid-based container. It is
interesting to note that the no-selection approach, which can
be considered as a motor babbling or random search, is unable
to produce the desired result. While the coverage is decent,
the quality of the gathered solutions is not satisfactory.
None of the population-based variants managed to produce a
collection that both covers all the reachable space and contains
high-performing solutions. This result could be explained by a
convergence of the population toward specific regions of the
collection. Typically, the population considering the fitness is
likely to converge toward regions with high quality, whereas
the population considering the novelty score converges to the
3 We can demonstrate that the points with the highest performance are
located on a curve resembling a cardioid by computing the position of the
end-effector for which all angular positions of the joints are set to the same
angle (from −π/2 to +π/2).
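The footnote's claim can be checked numerically: with unit-length links and every joint set to the same relative angle, the end-effector position is a simple sum of rotated unit vectors. The following standalone sketch (link lengths and the sampling step are assumptions made here, not values from the paper) traces that curve for the 8-DOF arm:

```cpp
#include <cmath>
#include <cstdio>

// End-effector of a planar n-link arm when every joint is set to the same
// relative angle theta: link j then has absolute orientation j*theta.
static void end_effector(int n_links, double theta, double& x, double& y) {
    x = 0.0;
    y = 0.0;
    for (int j = 1; j <= n_links; ++j) {
        x += std::cos(j * theta);  // unit-length links (assumption)
        y += std::sin(j * theta);
    }
}

int main() {
    const double pi = std::acos(-1.0);
    // Sweep theta over [-pi/2, +pi/2] and print the traced curve, which for
    // 8 identical links resembles a cardioid.
    for (double t = -pi / 2; t <= pi / 2; t += pi / 64) {
        double x, y;
        end_effector(8, t, x, y);
        std::printf("%f %f\n", x, y);
    }
    return 0;
}
```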
[Figure 4 panels: Collection size, Maximal Quality, Total Quality, and Total Novelty versus Number of Iterations, for the archive-based (top row) and grid-based (bottom row) containers; legend: pareto, pop_curiosity, pop_novelty, pop_fitness, curiosity, novelty, fitness, random, no_selection, NSLC.]
Fig. 4: Progression of the quality metrics in the redundant arm experiment. The first row depicts the results from variants using
the archive-based container, while the second row considers variants with the grid-based container. Because of the difficulty to
distinguish the different variants, a zoom on the best variants during the last 1000 batches is pictured on the right of each plot.
The middle lines represent the median performance over the 20 replications, while the shaded areas extend to the first and third
quartiles. In this experiment, the quality score is negative, thus in order to get a monotonic progression in the “Total Quality”
metric, +1 is added to the Quality to have a positive score.
border of the collection. The results of the variant using a
population with the curiosity score could be explained by the
difficulty to keep track of all individuals with a relatively
small population (200 individuals in the population compared
to about 6.000 in the entire collection). The curiosity score
is dynamic, and changes during the evolutionary process (an
individual can have a high curiosity score at one moment, for
example if it reaches a new region of the descriptor space, and
can have a very low curiosity score later during the process, for
instance when the region becomes filled with good solutions).
Therefore, it is likely that the individuals with the highest
curiosity score are not contained in the population.
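For reference, the curiosity score only requires a small amount of bookkeeping per parent; the reward and penalty constants in the sketch below are illustrative placeholders rather than the exact values used in the experiments:

```cpp
#include <unordered_map>

// Curiosity score: credit a parent when its offspring enters the container,
// penalize it when the offspring is discarded. The score therefore tracks a
// parent's recent propensity to produce individuals that get added.
struct CuriosityTracker {
    std::unordered_map<int, double> score;  // parent id -> curiosity score
    double reward  = 1.0;   // illustrative value (assumption)
    double penalty = 0.5;   // illustrative value (assumption)

    void on_offspring_result(int parent_id, bool added_to_container) {
        if (added_to_container)
            score[parent_id] += reward;
        else
            score[parent_id] -= penalty;  // may become negative over time
    }
};
```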
Moreover, we can observe different results between the grid-based and the archive-based container variants considering the novelty score. This difference is likely to originate from the fact that the novelty score is computed differently in these two container types. Indeed, while in the archive-based container the novelty score follows the formulation introduced by Lehman and Stanley [18], in the grid-based container, the novelty score is computed based on the number of individuals in the neighboring cells (see section III-A1b). Both of these expressions capture the density of solutions around the considered individuals. However, in the grid-based container, the novelty score is discretized (because it is related to the number of neighboring solutions). This discretization is likely to have a strong impact on score-based selection variants using the novelty score because all individuals in the center of the collection will have the same and lowest novelty score (because of all neighboring cells being filled). In the score-based selection, individuals with the lowest score have nearly no chance of being selected, which makes the selection focus on the border of the collection. This behavior is not observed with the archive-based container because the novelty score is continuous and the distribution of the solutions in the collection adds some variability in the novelty score, which makes it impossible to have several individuals with the lowest novelty score.

While the Pareto-based selection is designed to be similar to the NSLC algorithm, by keeping in the population individuals that both have a high novelty and local-competition scores, we can see that the collection produced by NSLC is significantly better than the Pareto-based selection approach. We can explain this poor performance by the presence of a Pareto-optimal solution in this scenario. Indeed, the solution in which the robot has all his joint positions set to zero has the best fitness and is located on the border of the collection, which provides a high novelty score. It is worth noting that we can mitigate this issue by implementing a toroidal distance or container (like in [17]), when such a representation is compatible with the descriptor space. This is not the case in our experiments. A behavior that reaches one end of the reachable space of the robot is not meant to be considered similar to individuals that reach the opposite end of the reachable space. For these reasons, the population is then very likely to converge to this Pareto-optimal solution and thus, to neglect certain regions of the collection. The size of the population is probably a limiting factor as well. A large number of equivalent solutions in terms of Pareto-dominance exist (all those in the center of the collection with the highest fitness), which makes it difficult for the population to cover the entire descriptor space.

NSLC is not impacted in the same way because the original archive management allows the density to constantly accumulate around over-explored regions (for instance by varying the novelty threshold, as described in [14]). Thanks to this feature, the novelty score constantly changes over time and makes Pareto-optimal solutions disappear quickly. Indeed, the regions that contain Pareto-optimal solutions will rapidly see their density increased, making the novelty score of the corresponding individuals less competitive compared with the rest of the population.

It is important to note that the NSLC variant uses two containers and one population during the evolutionary process. The population and one of the containers (the novelty archive) are used to drive the exploration process, while the second container (a grid-based container) gathers the collection that will be delivered to the user.

The variations of the quality metrics (see Fig. 4) demonstrate that among all tested variants, the best collections are provided by variants which perform the selection based on the entire collection.

The coverage, maximal quality, total quality, and total novelty of the collections produced with selection operators considering the entire collections is higher than those using population-based selection (all p-values < 7e−8 from the Wilcoxon rank sum tests⁴, except for the "(grid/arch) pop fitness" approaches which are not significantly different in terms of maximal quality and for "grid novelty" which performs significantly worse than the other collection-based approaches). The only exception is the novelty-based selection with the grid-based container, which is unable to correctly fill the center of the collection, as it focuses on its borders.

⁴ The reported p-values should be compared to a threshold α (usually set to 0.05) which is corrected to deal with the "Multiple Comparisons problem". In this paper, all our conclusions about the significance of a difference are given by correcting α according to the Holm-Bonferroni method [48].

We can note that the variant using the Pareto-based selection with the archive-based container produces collections that are better than those from variants using population-based selection, but worse than those produced by variants that consider the entire collection for the selection. However, the Pareto-based selection shows the best results according to the maximal quality metrics.

While the difference among variants using the entire collection in the selection with the grid-based container is negligible, the curiosity-based selection appears to be significantly better (even if the difference is small) than the other selection approaches on all the metrics with the archive-based container (all p-values < 2e−4 for all the metrics except for the total novelty in which p-values < 0.01). This observation demonstrates that relying on individuals with a high propensity to generate individuals that are added to the collection is a promising selection heuristic.

We can observe that the NSLC variant performs significantly better than the Pareto-based approach and that its performance is close to, but lower than, those of the variants that use selection operators considering the entire collections.

C. The Robot Walking in Every Direction

1) The Experimental Setup: In this second experimental setup, we consider a six-legged robot in a physical simulator. The objective of the QD-algorithms is to produce a collection of behaviors that allows the robot to walk in every direction and at different speeds.

This experimental setup has first been introduced in [13]. Each potential solution consists of a set of 36 parameters (6 per leg) that define the way each of the robot's joints is moving (the controller is the same as the one used in [4]). During the evaluation of a solution, the robot executes the behavior defined by the parameters for three seconds, and its final position and orientation are recorded. The descriptor space is defined by the final position of the robot (X and Y coordinates), while the quality of the solution corresponds to the orientation error with respect to a desired orientation, which encourages the robot to follow circular trajectories. These kinds of trajectories are interesting for planning purposes as any arbitrary trajectory can be decomposed as a succession of circular arcs. In order to be able to chain circular trajectories, the robot needs to be aligned with the tangent of these circles at the beginning and the end of each movement. We can note that only one circular trajectory goes through both the initial and final positions of the robot with its tangent aligned with the initial orientation of the robot. The difference between the final orientation of the robot and the direction of the tangent of this unique circular trajectory defines the orientation error, which is minimized by the QD algorithms (more details can be found in [13]).

The usage of the physical simulator makes the experiments significantly longer (between 4 and 5 hours are required to perform 10,000 batches with one variant). For this reason, the number of generations has been decreased to 10,000 and only 10 variants (those in bold in Table I) are considered for this experiment. This sub-set of variants includes variants that are related to MAP-Elites, NSLC, Motor Babbling, traditional population-based EA and the variant considering the curiosity score over the entire collection. The execution of each of those variants has been replicated 10 times. The value of the parameters used for this experiment can be found in Table II.

2) Results: From a high-level point of view, the same conclusion as previously can be drawn based on the resulting collections (see Fig. 5): The variants "no selection" and "pop fitness" produce worse collections than the other variants, while the variants "random", "curiosity" and NSLC generate the best collections. In this experiment, the "Pareto" variant performs better than in the previous one. This result can be explained by the absence of a unique Pareto-optimal solution.

The quality metrics indicate that the "curiosity" variants, on both the grid and the archive containers, significantly outperform the other algorithms (see Fig. 6, all p-values < 0.01, except when compared to arch random in terms of total novelty in which p-value = 0.05). These results also demonstrate that this second experimental scenario is more challenging for the algorithms, as the difference in the metrics is clear and the performance of the naive "no selection" is very low.
In this experiment, the NSLC variant shows similar results
to the “random” variant (which corresponds to the MAP-Elites
algorithm). In particular, the final size of the collection and
the final total quality are not significantly different (p-values<
0.61). However, the performance of the “curiosity” approach
remains significantly better on both aspects (p-values< 0.0047)
compared to NSLC.
[Figure 5 panels (one per variant): Population-based Selection wrt Fitness, Pareto-based Selection (novelty and local quality), Random Selection (over the entire collection), Curiosity-based Selection (over the entire collection), No Selection, and Novelty Search with Local Competition, each shown for the archive-based and grid-based collections; axes span ±1 m around the robot (Front/Back, Left/Right) and the color scale gives the Quality in degrees, from −180 to 0.]
Fig. 5: Typical collections of solutions produced by considered variants in the experiment with the virtual legged-robot learning
to walk in every direction. The center of each collection corresponds to the starting position of the robot and the vertical axis
represents the front axis of the robot. The position of each colored pixel or dot represent the final position of the robot after
walking for 3 seconds and its color depicts the absolute (negative) difference between the robot orientation and the desired
orientation. Lighter colors indicate better solutions. The collections are symmetrical because the robot learns how to walk both
forward and backward. This possibility, as well as the overall shape of the collection is not predefined but rather autonomously
discovered by the algorithms.
[Figures 6 and 7 panels: Collection size, Maximal Quality, Total Quality, and Total Novelty versus Number of Iterations, for the archive-based and grid-based containers; legend: pareto, pop_fitness, curiosity, random, no_selection, NSLC.]
Fig. 6: Progression of three quality metrics in the turning legged-robot experiment. The progression of the maximal quality is not depicted because all the variants found at least one solution with the highest possible quality (i.e., 0) in fewer than 1.000 batches. The first row depicts the results from variants using the archive-based container, while the second row considers variants with the grid-based container. The middle lines represent the median performance over the 10 replications, while the shaded areas extend to the first and third quartiles. In this experiment, the quality score is negative, thus in order to get a monotonic progression in the "Total Quality" metric, +180 is added to the Quality to have a positive score.

Fig. 7: Progression of the four quality metrics in the experiment with the legged-robot learning different ways to walk in a straight line. The first row depicts the results from variants using the archive-based container, while the second row considers variants with the grid-based container. The middle lines represent the median performance over the 10 replications, while the shaded areas extend to the first and third quartiles.

D. The Robot Walking with Different Gaits

1) The Experimental Setup: In this third experimental setup, we use the same virtual robot as in the previous experiment with the same controller. However, in this case the robot has to learn a large collection of gaits to walk in a straight line as fast as possible. This scenario is inspired by [4].

In this experiment, the quality score is the traveled distance after walking for 3 seconds, and the solution descriptor is the proportion of time that each leg is in contact with the ground. The descriptor space has thus 6 dimensions in this experiment. The experiment has been replicated 10 times and the other parameters of the algorithm can be found in Table II.

2) Results: From a general point of view, the same conclusion as in the previous experiments can be drawn from the progression of quality metrics (see Fig. 7)⁵. Variants selecting individuals from the whole collection significantly outperform, in terms of coverage, total quality and diversity, those that consider populations (all the p-values < 2e−4). In particular, the curiosity-based selection operator shows the best results both with the grid-based and the archive-based containers. For instance, one can note that the total quality achieved by the random selection (second best approach) after 20,000 batches, is achieved by the curiosity-based selection after only 11,000 batches with the archive-based container and 13,500 batches with the grid-based container.

⁵ Visualizations of the collections are not provided in this experiment because of the high-dimensionality of the descriptor-space. While the grid-based collections could have been depicted with the same approach as in [4], this approach cannot be applied with the archive-based container.
In contrast with the previous experiment, the “no selection”
variants manage to achieve good coverage (about half of the
coverage produced by the variants using the collection-wise
selection). However, they show the worst results according to
the total quality and the maximal quality metrics.
The variants using the population-based selection with
respect to the performance show the opposite results. While
the coverage of this variant is the worst among all the
evaluated variants with both of the container types, this
selection approach, which is similar to a traditional EA,
found the solutions with the best quality (the fastest way to
walk). In particular, the performance achieved with this variant
significantly outperforms the best solutions compared to every
other variant, even those using the collection-wise selection (p-values < 0.0017). This observation shows that the best variants
tested so far are not always able to find the global extremum
of the quality. The quality difference between the “pop fitness”
variants and the others is smaller with the grid-based container
than with the archive-based. This quality difference could
be explained by the difference in the collection sizes, or the
additional difficulty of finding the inherent structure of the
collection for the archive-based container.
The Pareto-based variants are low-performing in this experiment. They show neither a good coverage (similar to
the “no selection” or the “pop fitness” variants) nor a good
maximal quality (lower than the variants with a collection-wise
selection). It is difficult to understand the reasons for such a
low performance in this experiment, as the behavioral space
is 6 dimensional, making it hard to visualize. However, it is
likely that it happens for the same reasons as in the previous
experiments, like a premature convergence to the border of
the collection (which show relatively bad performance), or
the existence of a Pareto-optimal solution. In contrast with
the Pareto-based variants, NSLC achieves good coverage of
the behavioral space in this experiment, while smaller than
the “random” and “curiosity” ones. However, the maximal
quality found on the produced collection is lower than most
of the considered variants (p-values< 0.03746 except with the
“no selection” variant, p-value= 0.9696), and the global quality
of the collections is equivalent to those of the Pareto-based
variant.
⁶ These p-values do not reject the null-hypothesis based on the Holm-Bonferroni method with α = 0.05, but reject it with α = 0.1.
V. CONCLUSION AND DISCUSSION
In this paper, we presented three new contributions. First, we introduced a new framework that unifies QD-algorithms, showing for example that MAP-Elites and the Novelty Search with Local Competition are two different configurations of the same algorithm. Second, we suggested a new archive management procedure that copes with the erosion issues observed with the previous approaches using unstructured archives (like BR-evolution). This new procedure demonstrates good results as it allows the algorithms to produce unstructured collections
with the same coverage as those with grid containers, which
was not the case with the previous management procedure [31].
Finally, we proposed a new selective pressure specific to QD-algorithms, named the "curiosity score", that shows very promising
results by outperforming all the existing QD-algorithms on all
the experiments presented in this paper.
In addition to these three contributions, we presented the
results of an experimental comparison between a large number
of QD-algorithms, including MAP-Elites and NSLC. One of
the main results that can be outlined from these experiments
is that selection operators considering the collection instead of
a population showed better performance on all scenarios. We
can hypothesize that this results from the inherent diversity
of solutions contained in the collection. Indeed, several works
suggest that maintaining the behavioral diversity in populations of evolutionary algorithms (via additional objective for
example) is a key factor to avoid local extremum and to find
promising stepping stones [40], [18].
Another fundamental lesson learned from the experiments
presented in this paper is about the importance of allowing the
density of solutions to increase in diverse regions of the archive
to obtain the full effectiveness of NSLC. This can be achieved by varying the novelty-score threshold or via probabilistic addition to the archive [37]. While such mechanisms are often
used in the literature, their importance is rarely highlighted
by experimental comparisons like in this paper. In particular,
we demonstrated that algorithms using the novelty score, but
with archives in which the density does not increase, are
unable to show similar results to NSLC, because they are
severely impacted by certain aspects of the fitness landscape
(e.g., presence of Pareto-optimal solutions).
This unified and modular framework for QD-algorithms
is intended to encourage new research directions via novel
container types, selection operators, or selective pressures that
are specific to this domain. We expect that the emergence of
new QD-algorithms will provide insights about the key factors
for producing the best collection of solutions.
VI. QUALITY DIVERSITY LIBRARY
The source code of the QD-algorithm framework is available
at https://github.com/sferes2/modular_QD. It is based on the
Sferesv2 framework [47] and implements both the grid-based
and archive-based containers and several selection operators,
including all those that have been evaluated in this paper. The
source code of the experimental setups is available at the same
location and can be used by interested readers to investigate
and evaluate new QD-algorithms.
The implementation allows researchers to easily implement
and evaluate new combinations of operators, while maintaining
high execution speed. For this reason, we followed the policy-based design in C++ [49], which allows developers to replace
the behavior of the program simply by changing the template
declarations of the algorithm. For example, changing from the
grid-based container to the archive-based one only requires
changing “container::Grid” to “container::Archive” in the
template definition of the QD-algorithm object. Moreover, the
modularity provided by this design pattern does not add any
overhead, contrary to classic Object-Oriented Programming
design. Interested readers are welcome to use and to contribute
to the source code.
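To illustrate the idea, the following self-contained snippet mimics this policy-based design with stripped-down stand-ins; the class and member names are illustrative only and do not reproduce the actual Sferes2 interfaces:

```cpp
#include <iostream>

// Minimal stand-ins for two container policies. In the real framework the
// container type is swapped by editing a single template argument.
namespace container {
struct Grid {
    void add() { std::cout << "stored in a grid cell\n"; }
};
struct Archive {
    void add() { std::cout << "stored in the unstructured archive\n"; }
};
}  // namespace container

// The QD algorithm is parameterized by its container policy (and, in the real
// implementation, by the selection operator and other modules as well).
template <class Container>
class QDAlgorithm {
public:
    void step() { container_.add(); }  // one batch of the main loop
private:
    Container container_;
};

int main() {
    QDAlgorithm<container::Grid> grid_variant;        // grid-based container
    QDAlgorithm<container::Archive> archive_variant;  // archive-based container
    grid_variant.step();
    archive_variant.step();
    return 0;
}
```

Because the container type is resolved at compile time, swapping `container::Grid` for `container::Archive` changes the behavior without any virtual-call overhead, which is the property discussed above.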
ACKNOWLEDGMENT
This work was supported by the EU Horizon2020 project
PAL (643783-RIA). The authors gratefully acknowledge the
support from the members of the Personal Robotics Lab.
REFERENCES
[1] A. Antoniou and W.-S. Lu, Practical optimization: algorithms and
engineering applications. Springer Science & Business Media, 2007.
[2] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Cognitive modeling, vol. 5, no. 3,
p. 1, 1988.
[3] S. J. Russell and P. Norvig, Artificial intelligence: a modern approach,
2003.
[4] A. Cully, J. Clune, D. Tarapore, and J.-B. Mouret, “Robots that can
adapt like animals,” Nature, vol. 521, no. 7553, pp. 503–507, 2015.
[5] J. Kober, J. A. Bagnell, and J. Peters, “Reinforcement learning in robotics:
A survey,” International Journal of Robotics Research, vol. 32, no. 11,
p. 1238, 2013.
[6] J. C. Spall, Introduction to stochastic search and optimization: estimation,
simulation, and control. John Wiley & Sons, 2005, vol. 65.
[7] H. Lipson and J. B. Pollack, “Automatic design and manufacture of
robotic lifeforms,” Nature, vol. 406, no. 6799, pp. 974–978, 2000.
[8] M. Schmidt and H. Lipson, “Distilling free-form natural laws from
experimental data,” science, vol. 324, no. 5923, pp. 81–85, 2009.
[9] A. E. Eiben and J. Smith, “From evolutionary computation to the
evolution of things,” Nature, vol. 521, no. 7553, pp. 476–482, 2015.
[10] J. Bongard, V. Zykov, and H. Lipson, “Resilient machines through
continuous self-modeling,” Science, vol. 314, no. 5802, 2006.
[11] S. Koos, J.-B. Mouret, and S. Doncieux, “The transferability approach:
Crossing the reality gap in evolutionary robotics,” Evolutionary Computation, IEEE Transactions on, vol. 17, no. 1, pp. 122–145, 2013.
[12] Y. Demiris, L. Aziz-Zadeh, and J. Bonaiuto, “Information processing in
the mirror neuron system in primates and machines,” Neuroinformatics,
vol. 12, no. 1, pp. 63–91, 2014.
[13] A. Cully and J.-B. Mouret, “Behavioral repertoire learning in robotics,” in
Proceedings of the 15th annual conference on Genetic and Evolutionary
Computation. ACM, 2013, pp. 175–182.
[14] J. Lehman and K. O. Stanley, “Evolving a diversity of virtual creatures
through novelty search and local competition,” in Proceedings of the 13th
annual conference on Genetic and Evolutionary Computation. ACM,
2011, pp. 211–218.
[15] J.-B. Mouret and J. Clune, “Illuminating search spaces by mapping elites,”
arXiv preprint arXiv:1504.04909, 2015.
[16] J. K. Pugh, L. Soros, P. A. Szerlip, and K. O. Stanley, “Confronting the
challenge of quality diversity,” in Proceedings of the 2015 on Genetic
and Evolutionary Computation Conference. ACM, 2015, pp. 967–974.
[17] J. K. Pugh, L. B. Soros, and K. O. Stanley, “Quality diversity: A new
frontier for evolutionary computation,” Frontiers in Robotics and AI,
vol. 3, p. 40, 2016.
[18] J. Lehman and K. O. Stanley, “Abandoning objectives: Evolution through
the search for novelty alone,” Evolutionary Computation, vol. 19, no. 2,
pp. 189–223, 2011.
[19] D. E. Goldberg and J. Richardson, “Genetic algorithms with sharing
for multimodal function optimization,” in Genetic algorithms and their
applications: Proceedings of the Second International Conference on
Genetic Algorithms. Hillsdale, NJ: Lawrence Erlbaum, 1987, pp. 41–49.
[20] S. W. Mahfoud, “Niching methods for genetic algorithms,” Urbana,
vol. 51, no. 95001, pp. 62–94, 1995.
[21] G. Singh and K. Deb Dr, “Comparison of multi-modal optimization
algorithms based on evolutionary algorithms,” in Proceedings of the 8th
annual conference on Genetic and Evolutionary Computation. ACM,
2006, pp. 1305–1312.
[22] X. Yin and N. Germay, “A fast genetic algorithm with sharing scheme
using cluster analysis methods in multimodal function optimization,” in
Artificial neural nets and genetic algorithms. Springer, 1993.
[23] A. Pétrowski, “A clearing procedure as a niching method for genetic
algorithms,” in Evolutionary Computation, 1996., Proceedings of IEEE
International Conference on. IEEE, 1996, pp. 798–803.
[24] D. J. Lizotte, Practical bayesian optimization. University of Alberta,
2008.
[25] N. Kohl and P. Stone, “Policy gradient reinforcement learning for fast
quadrupedal locomotion,” in Proceedings of the IEEE International
Conference on Robotics and Automation (ICRA), vol. 3. IEEE, 2004,
pp. 2619–2624.
[26] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist
multiobjective genetic algorithm: Nsga-ii,” Evolutionary Computation,
IEEE Transactions on, vol. 6, no. 2, pp. 182–197, 2002.
[27] C. Maestre, A. Cully, C. Gonzales, and S. Doncieux, “Bootstrapping
interactions with objects from raw sensorimotor data: a novelty search
based approach,” in Development and Learning and Epigenetic Robotics
(ICDL-EpiRob), 2015 Joint IEEE International Conference on. IEEE,
2015, pp. 7–12.
[28] F. Benureau and P.-Y. Oudeyer, “Behavioral diversity generation in
autonomous exploration through reuse of past experience,” Frontiers in
Robotics and AI, vol. 3, p. 8, 2016.
[29] A. Cully and J.-B. Mouret, “Evolving a behavioral repertoire for a
walking robot,” Evolutionary Computation, 2015.
[30] J. Clune, J.-B. Mouret, and H. Lipson, “The evolutionary origins of
modularity,” Proceedings of the Royal Society of London B: Biological
Sciences, vol. 280, no. 1755, 2013.
[31] A. Cully, “Creative adaptation through learning,” Ph.D. dissertation,
Université Pierre et Marie Curie, 2015.
[32] M. Duarte, J. Gomes, S. M. Oliveira, and A. L. Christensen, “Evorbc:
Evolutionary repertoire-based control for robots with arbitrary locomotion
complexity,” in Proceedings of the 25th annual conference on Genetic
and Evolutionary Computation. ACM, 2016.
[33] A. Nguyen, J. Yosinski, and J. Clune, “Deep neural networks are easily
fooled: High confidence predictions for unrecognizable images,” in
Conference on Computer Vision and Pattern Recognition. IEEE, 2015.
[34] A. M. Nguyen, J. Yosinski, and J. Clune, “Innovation engines: Automated
creativity and improved stochastic optimization via deep learning,” in
Proceedings of the 2015 Annual Conference on Genetic and Evolutionary
Computation. ACM, 2015, pp. 959–966.
[35] V. Vassiliades, K. Chatzilygeroudis, and J.-B. Mouret, “Scaling up
map-elites using centroidal voronoi tessellations,” arXiv preprint
arXiv:1610.05729, 2016.
[36] D. Smith, L. Tokarchuk, and G. Wiggins, “Rapid phenotypic landscape
exploration through hierarchical spatial partitioning,” in International
Conference on Parallel Problem Solving from Nature. Springer, 2016,
pp. 911–920.
[37] J. Lehman and R. Miikkulainen, “Enhancing divergent search through extinction events,” in Proceedings of the 2015 on Genetic and Evolutionary
Computation Conference. ACM, 2015, pp. 951–958.
[38] D. Tarapore and J.-B. Mouret, “Evolvability signatures of generative
encodings: beyond standard performance benchmarks,” Information
Sciences, vol. 313, pp. 43–61, 2015.
[39] D. Tarapore, J. Clune, A. Cully, and J.-B. Mouret, “How do different
encodings influence the performance of the map-elites algorithm?” in
Genetic and Evolutionary Computation Conference, 2016.
[40] J.-B. Mouret and S. Doncieux, “Encouraging behavioral diversity in
evolutionary robotics: An empirical study,” Evolutionary Computation,
vol. 20, no. 1, pp. 91–133, 2012.
[41] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, “Combining convergence
and diversity in evolutionary multiobjective optimization,” Evolutionary
Computation, vol. 10, no. 3, pp. 263–282, 2002.
[42] D. E. Goldberg and K. Deb, “A comparative analysis of selection schemes
used in genetic algorithms,” Foundations of genetic algorithms, 1991.
[43] P.-Y. Oudeyer, F. Kaplan, and V. V. Hafner, “Intrinsic motivation systems
for autonomous mental development,” Evolutionary Computation, IEEE
Transactions on, vol. 11, no. 2, pp. 265–286, 2007.
[44] J. Lehman, S. Risi, and J. Clune, “Creative generation of 3d objects
with deep learning and innovation engines,” in Proceedings of the 7th
International Conference on Computational Creativity, 2016.
[45] M. Pigliucci, “Is evolvability evolvable?” Nature Reviews Genetics, vol. 9,
no. 1, pp. 75–82, 2008.
[46] L. Altenberg et al., “The evolution of evolvability in genetic programming,” Advances in genetic programming, vol. 3, pp. 47–74, 1994.
[47] J.-B. Mouret and S. Doncieux, “Sferes v2: Evolvin’in the multi-core
world,” in Evolutionary Computation (CEC), 2010 IEEE Congress on.
IEEE, 2010, pp. 1–8.
[48] J. P. Shaffer, “Multiple hypothesis testing,” Annual review of psychology,
vol. 46, no. 1, pp. 561–584, 1995.
[49] A. Alexandrescu, Modern C++ design: generic programming and design
patterns applied. Addison-Wesley, 2001.
| 9 |
arXiv:1610.05725v2 [] 27 Oct 2016
Polynomial-time algorithm for determining the
graph isomorphism
Anatoly D. Plotnikov
e-mail: [email protected]
Abstract
We develop the methodology of positioning graph vertices relative
to each other to solve the problem of determining isomorphism of two
undirected graphs. Based on the position of the vertex in one of the
graphs, the corresponding vertex in the other graph is determined.
For the selected vertex of the undirected graph, we define the neighborhoods of the vertices. Next, we construct the auxiliary directed
graph, spawned by the selected vertex. The vertices of the digraph
are positioned by special characteristics — vectors, which locate each
vertex of the digraph relative the found neighborhoods.
This enabled us to develop an algorithm for determining graph isomorphism, the running time of which is equal to O(n^4).
MCS2000: 05C85, 68Q17.
Key words: isomorphism, algorithm, graph, graph isomorphism problem.
1 Introduction
Let Ln be the set of all n-vertex undirected graphs without loops and multiple edges.
Let, further, there be a graph G = (V, E) ∈ Ln, where V = {v1, v2, . . . , vn} is the set of graph vertices and E = {e1, e2, . . . , em} is the set of graph edges. The local degree deg(v) of a vertex v ∈ V is the number of edges incident to the vertex v. Every graph G ∈ Ln can be characterized by the vector DG = (deg(vi1), deg(vi2), . . . , deg(vin)) of local vertex degrees, where deg(vi) ≤ deg(vj) if i < j.
The graphs G = (VG, EG), H = (VH, EH) ∈ Ln are called isomorphic if between their vertices there exists a one-to-one (bijective) correspondence ϕ : VG ↔ VH such that if eG = {v, u} ∈ EG then the corresponding edge eH = {ϕ(v), ϕ(u)} ∈ EH, and conversely [3]. The graph isomorphism problem consists in determining whether graphs G, H ∈ Ln are isomorphic.
The problem of determining the isomorphism of two given undirected graphs is used to solve chemical problems and to optimize programs. Effective (polynomial-time) algorithms for solving this problem were found for some narrow classes of graphs [1, 2]. However, for the general case, effective methods for determining the isomorphism of graphs are not known.
The purpose of this article is to propose a polynomial-time algorithm for determining isomorphism of connected undirected graphs.
2 Algorithm bases
We develop the methodology of positioning graph vertices relative to each
other to solve the problem of determining isomorphism of two undirected
graphs. Based on the position of the vertex in one of the graphs, the corresponding vertex in the other graph is determined.
Consider the elements of our algorithm.
Let there be a graph G ∈ Ln . For the vertex v ∈ V of the graph G we
define the concept of neighborhood of kth level (0 ≤ k ≤ n − 1).
The neighborhood of the 1st level of the vertex v is the set of all graph vertices that are adjacent to the vertex v. In general, a neighborhood of the k-th level is the set of all graph vertices that are adjacent to the vertices of the (k − 1)-th level. Such a neighborhood we denote N_G^(k)(v). For convenience, we assume that the vertex v forms the neighborhood of the zero level.
For the given vertex v, by means of the graph G we construct an auxiliary directed graph G⃗(v), spawned by the vertex v ∈ V, as follows.
The set of vertices belonging to one and the same neighborhood of the vertex v forms a line of graph vertices. Each line has the same number as the level of the corresponding neighborhood of the vertex v. Further, if in the initial graph G the edge {v, u} connects a vertex v belonging to a line with a lower number than the vertex u, then such an edge is replaced by the arc (v, u), outgoing from the vertex v and incoming to the vertex u. If an edge of the graph connects vertices of one and the same line, this edge is replaced by two opposite arcs.
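In other words, the lines act as breadth-first layers around v, and every edge is oriented from the lower-numbered line to the higher-numbered one (or doubled inside a line). The sketch below illustrates this construction for an adjacency-list representation; the data layout is our choice, and the levels are taken as standard BFS layers (each vertex assigned to the first level at which it is reached):

```cpp
#include <queue>
#include <vector>

// Line (level) of every vertex with respect to the selected vertex v,
// computed as breadth-first layers; unreachable vertices keep level -1.
std::vector<int> bfs_levels(const std::vector<std::vector<int>>& adj, int v) {
    std::vector<int> level(adj.size(), -1);
    std::queue<int> q;
    level[v] = 0;
    q.push(v);
    while (!q.empty()) {
        int x = q.front(); q.pop();
        for (int y : adj[x])
            if (level[y] == -1) { level[y] = level[x] + 1; q.push(y); }
    }
    return level;
}

// Auxiliary digraph spawned by v: every undirected edge {x, y} becomes the
// arc (x, y) when x lies on a lower line than y; an edge inside one line
// yields two opposite arcs (each endpoint contributes one of them below).
std::vector<std::vector<int>> auxiliary_digraph(
        const std::vector<std::vector<int>>& adj, const std::vector<int>& level) {
    std::vector<std::vector<int>> out(adj.size());
    for (std::size_t x = 0; x < adj.size(); ++x)
        for (int y : adj[x])          // each undirected edge is seen from both ends
            if (level[x] <= level[y])
                out[x].push_back(y);
    return out;
}
```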
Fig. 1 presents the graph G = (V, E) ∈ Ln and the auxiliary digraph G⃗(v1), spawned according to this graph by the vertex v1 ∈ V.¹
Figure 1: Graph G and auxiliary digraph G⃗(v1).
Here N_G^(0)(v1) = {v1} is its own neighborhood (line) of the 0th level of the vertex v1, the set of vertices N_G^(1)(v1) = {v2, v3, v6, v7} forms a neighborhood (line) of the 1st level and, finally, the set N_G^(2)(v1) = {v3, v6} is a neighborhood of the 2nd level of the vertex v1.
¹ The graph G is borrowed from [3].
Each vertex v of the auxiliary digraph G⃗(v) we will characterize by two vectors Iv and Ov.
Elements of the vectors Iv and Ov are numbers. These numbers are the line numbers of vertices of the auxiliary digraph. The vector Iv contains the line numbers of vertices from which arcs of the digraph come into the vertex v, and the vector Ov contains the line numbers of vertices that receive arcs from the vertex v. If several arcs come into the vertex v from one and the same line of the digraph, then the line number is repeated in the vector accordingly. Similarly, if several arcs from the vertex v come into one and the same line, this number is also repeated in the vector accordingly. The elements of the vectors Iv and Ov are assumed to be ordered in ascending order of value.
The vectors Iv and Ov will be called the characteristics of the vertex v, where Iv is the input and Ov is the output characteristic. Characteristics of two vertices v1, v2 are equal if their input and output characteristics are equal, respectively, i.e., if Iv1 = Iv2 and Ov1 = Ov2.
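Continuing the sketch above, the two characteristics are simply sorted lists of line numbers collected from the arcs of the auxiliary digraph (again, the data layout is an illustrative assumption):

```cpp
#include <algorithm>
#include <vector>

struct Characteristics {
    std::vector<int> in;   // I_v: lines of the tails of arcs entering v
    std::vector<int> out;  // O_v: lines of the heads of arcs leaving v
};

// `out_arcs[x]` lists the heads of arcs leaving x (as built above) and
// `level[x]` is the line number of x.
std::vector<Characteristics> characteristics(
        const std::vector<std::vector<int>>& out_arcs,
        const std::vector<int>& level) {
    std::vector<Characteristics> c(out_arcs.size());
    for (std::size_t x = 0; x < out_arcs.size(); ++x)
        for (int y : out_arcs[x]) {
            c[x].out.push_back(level[y]);  // the arc x -> y, seen from x
            c[y].in.push_back(level[x]);   // the same arc, seen from y
        }
    for (Characteristics& ch : c) {        // both vectors sorted ascending
        std::sort(ch.in.begin(), ch.in.end());
        std::sort(ch.out.begin(), ch.out.end());
    }
    return c;
}
```

Two vertices then have equal characteristics exactly when both of their sorted vectors compare equal element by element.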
Find the characteristics of the vertices of the auxiliary digraph G⃗(v1), shown in Fig. 1.
Iv1 = ⊘;            Ov1 = (1, 1, 1, 1);
Iv2 = (0, 1, 1);     Ov2 = (1, 1, 2);
Iv3 = (0, 1);        Ov3 = (1, 2, 2);
Iv4 = (1, 1, 1, 2);  Ov4 = (2);
Iv5 = (0, 1);        Ov5 = (1, 2, 2);
Iv6 = (1, 1, 1, 2);  Ov6 = (2);
Iv7 = (0, 1, 1);     Ov7 = (1, 1, 2).
Auxiliary digraphs G⃗(v) and H⃗(u) are called positionally equivalent if the lines of the graphs of the same level have an equal number of vertices, respectively, having equal characteristics.
In general, the positionally equivalent digraphs have arcs connecting the
vertices of the same level. This introduces an element of equivocation, i.e. in
this case, we cannot say that the positionally equivalent digraphs determine isomorphic graphs G and H.
A vertex v ∈ VG is called unique in the digraph G⃗(v) if there does not exist another vertex with characteristics equal to the characteristics of the vertex v.
It is easy to see that the vertex v1 is the unique vertex of the constructed
digraph (see Fig. 1).
Theorem 1. Let the graphs G and H be isomorphic. Let, further, the auxiliary digraphs G⃗(v) and H⃗(u) be positionally equivalent, and let each digraph have a unique vertex, vi ∈ VG and uj ∈ VH respectively, such that Ivi = Iuj, Ovi = Ouj. Then between the vertices of the digraphs there exists a bijective correspondence ϕ such that uj = ϕ(vi).
Proof. Let the conditions of Theorem 1 be satisfied. As the graphs G and H are isomorphic, between vertices of the positionally equivalent digraphs G⃗(v) and H⃗(u) having equal characteristics there is a bijective correspondence ϕ. Since the vertices vi and uj have the same unique positioning in the digraphs G⃗(v) and H⃗(u), the relation uj = ϕ(vi) holds in any correspondence ϕ. In particular, such unique vertices are always the vertices which spawn the positionally equivalent auxiliary digraphs. ♦
Note that in the general case, vertices of graphs G and H having the same local degree spawn different auxiliary digraphs.
Theorem 2. Suppose there are two isomorphic graphs G = (VG, EG), H = (VH, EH) ∈ Ln. Suppose, further, that for the subgraph G1 ⊆ G, induced by the vertex set X ⊆ VG, the auxiliary directed graph G⃗1(x) has been constructed. Then in the graph H there exists a subgraph H1 ⊆ H, spawned by a set of vertices Y ⊆ VH, such that its auxiliary digraph H⃗1(y) is positionally equivalent to the digraph G⃗1(x).
Proof. The validity of the above statement follows from the definition of isomorphism of graphs, and from the same sequence of construction of any of the auxiliary digraphs. ♦
3 Algorithm
Using the results of the previous section, we can propose several algorithms
for determining the isomorphism of two given graphs G, H ∈ Ln . The simplest of them is as follows.
To determine the fact of isomorphism of graphs G and H, we look for equally positioned vertices of the graphs.
Find vertices v ∈ VG and u ∈ VH having positionally equivalent auxiliary digraphs in the graphs G and H. Remove the found vertices from the graphs together with the incident edges and repeat the process until we have exhausted the list of vertices of the graphs. If at some point we cannot find in the graphs G and H a pair of vertices having positionally equivalent auxiliary digraphs, then stop the calculation, since the graphs are not isomorphic.
We describe the proposed algorithm in more detail.
Input of the algorithm: graphs G = (VG , EG ), H = (VH , EH ) ∈ Ln ,
isomorphism of which must be determined, if it exists. We believe that these
graphs have the same number of vertices and edges, as well as their vectors
of local degrees DG and DH are equal.
Output of the algorithm: conclusion about the isomorphism of graphs G
and H.
Algorithm for determining graph isomorphism.
Step 1. Put Q = G, S = H, N := n, i := 1, j := 1.
Step 2. Choose a vertex vi ∈ VQ in the graph Q.
Step 3. Construct the auxiliary digraph Q⃗(vi) using the graph Q.
Step 4. Find the characteristics of the vertices of the auxiliary graph Q⃗(vi).
Step 5. Choose the vertex uj ∈ VS in the graph S.
Step 6. Construct the auxiliary digraph S⃗(uj) using the graph S.
Step 7. Find the characteristics of the vertices of the auxiliary graph S⃗(uj).
Step 8. Compare the characteristics of the vertices of the digraphs Q⃗(vi) and S⃗(uj) in the neighborhoods of the vertices vi and uj of the same level.
Step 9. If the digraphs Q⃗(vi) and S⃗(uj) are positionally equivalent then put VQ := VQ \ {vi}, VS := VS \ {uj}, N := N − 1. Go to Step 11.
Step 10. If j ≤ N then put j := j + 1 and go to Step 5. Otherwise finish the calculations, as the graphs G and H are not isomorphic.
Step 11. If i ≤ N then put i := i + 1, j := 1 and go to Step 2. Otherwise, stop the calculations. The graphs G and H are isomorphic.
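Steps 1–11 translate into a short double loop over candidate pairs. The sketch below reuses the helpers from the previous sketches (`bfs_levels`, `auxiliary_digraph`, `characteristics`) and keeps the positional-equivalence test as a per-line comparison of characteristic multisets; it is meant only to illustrate the control flow, not as a tuned implementation:

```cpp
#include <algorithm>
#include <map>
#include <utility>
#include <vector>

using Graph = std::vector<std::vector<int>>;  // adjacency lists, vertices 0..n-1

// Per-line multiset of (I, O) characteristics of the auxiliary digraph spawned
// by `root`; two digraphs are positionally equivalent when these signatures match.
std::map<int, std::vector<std::pair<std::vector<int>, std::vector<int>>>>
signature(const Graph& g, int root) {
    std::vector<int> level = bfs_levels(g, root);
    std::vector<Characteristics> c = characteristics(auxiliary_digraph(g, level), level);
    std::map<int, std::vector<std::pair<std::vector<int>, std::vector<int>>>> sig;
    for (std::size_t x = 0; x < g.size(); ++x)
        sig[level[x]].push_back({c[x].in, c[x].out});
    for (auto& line : sig) std::sort(line.second.begin(), line.second.end());
    return sig;
}

// Remove vertex v together with its incident edges, relabelling the rest.
Graph remove_vertex(const Graph& g, int v) {
    std::vector<int> id(g.size(), -1);
    int next = 0;
    for (int x = 0; x < static_cast<int>(g.size()); ++x)
        if (x != v) id[x] = next++;
    Graph h(g.size() - 1);
    for (int x = 0; x < static_cast<int>(g.size()); ++x) {
        if (x == v) continue;
        for (int y : g[x])
            if (y != v) h[id[x]].push_back(id[y]);
    }
    return h;
}

// Steps 2-11: repeatedly match a vertex of Q with a vertex of S spawning
// positionally equivalent digraphs and delete the matched pair.
bool report_isomorphic(Graph Q, Graph S) {
    if (Q.size() != S.size()) return false;
    while (!Q.empty()) {
        bool matched = false;
        for (int i = 0; i < static_cast<int>(Q.size()) && !matched; ++i)
            for (int j = 0; j < static_cast<int>(S.size()) && !matched; ++j)
                if (signature(Q, i) == signature(S, j)) {
                    Q = remove_vertex(Q, i);
                    S = remove_vertex(S, j);
                    matched = true;
                }
        if (!matched) return false;  // Step 10: no equivalent pair exists
    }
    return true;                     // Step 11: all vertices matched
}
```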
Theorem 3. The algorithm for determining the graph isomorphism determines an isomorphism of the given graphs if it exists.
Proof. Theorem 1 establishes that if graphs G and H are isomorphic, then a pair of unique vertices v and u that spawn positionally equivalent digraphs G⃗(v) and H⃗(u) belongs to some bijective mapping ϕ of the vertices of these graphs. Therefore, removal of the vertices v and u from the graphs G and H together with incident edges leads to the construction of subgraphs G′ ⊂ G and H′ ⊂ H which are also isomorphic. Therefore, the repetition of the above procedure will lead to the exhaustion of the list of vertices of the isomorphic graphs G and H.
Now suppose that the graphs G and H are not isomorphic and the vertices v ∈ VG, u ∈ VH spawn positionally equivalent digraphs G⃗(v) and H⃗(u). Then, obviously, in these digraphs there exist some subsets of vertices X ⊂ VG and Y ⊂ VH, having corresponding equal characteristics, which spawn different (non-isomorphic) subgraphs. Here v ∈ (VG \ X), u ∈ (VH \ Y) and Card(X) = Card(Y).
Removing sequentially from the graphs Q and S the vertices spawning the equivalent auxiliary digraphs, we obtain non-isomorphic subgraphs. As a result, the algorithm terminates without the possibility of finding a pair of vertices with equivalent digraphs. ♦
Unfortunately, the collection of pairs of vertices, determined by the proposed algorithm, as a whole, does not always determine the bijective mapping
of the vertices in initial isomorphic graphs. This is the price for the simplicity
of the algorithm.
Theorem 4. The running time of the algorithm for determining the graph isomorphism is equal to O(n^4).
Proof. We determine the running time of the algorithm in Steps 5–10.
Steps 5 and 10 require one time unit each.
Steps 6–8 require O(n^2) time units each.
Step 9 requires O(n) time units.
Therefore, executing Steps 5–10 n times requires O(n^3) time units.
Executing Steps 2–10 once requires O(n^3) time units, and executing them n times requires O(n^4) time units. ♦
4 Conclusion
We have developed an efficient (polynomial-time) algorithm for determining isomorphism of undirected graphs. To that end, we used the methodology of positioning vertices with respect to the selected vertex and its neighborhoods via the input and output characteristics of vertices. It allowed us to effectively compare the structure of the given graphs.
Apparently, the ideology of positioning of vertices developed in this study can be used in solving the problem of finding a subgraph which is isomorphic to a given graph, although in this case we should expect a substantial amount of search to arise.
References
[1] A. V. Aho, J. E. Hopcroft, and J. D. Ullman. The design and analysis of
computer algorithms. Addison-Wesley publishing company, N.Y., 1976.
[2] M.R. Garey and D.S. Johnson. Computers and Intractability. W.H. Freeman and Company, San Francisco, 1979.
[3] D. B. West. Introduction to Graph Theory, 2nd ed. Prentice Hall, Inc.,
NJ, 2001.
Appendix
We illustrate the work of the proposed algorithm with an example.
Let two graphs G, H ∈ Ln be given, for which it is necessary to determine the isomorphism (see Fig. 2).
Figure 2: Initial graphs G and H.
The given graphs have an equal number of vertices, edges and vectors of
degrees.
Choose a vertex v1 in the graph G and construct the auxiliary digraph G⃗(v1) (see Fig. 3).
Figure 3: Auxiliary digraphs G⃗(v1) and H⃗(u1).
We calculate the characteristics of the vertices of the constructed digraph. The results are given below.
Iv1 = ⊘;            Ov1 = (1, 1, 1, 1);
Iv2 = (0, 1, 1);     Ov2 = (1, 1, 2);
Iv3 = (0, 1, 1);     Ov3 = (1, 1, 2);
Iv4 = (1, 1, 1, 1);  Ov4 = ⊘;
Iv5 = (0, 1, 1);     Ov5 = (1, 1, 2);
Iv6 = (0, 1, 1);     Ov6 = (1, 1, 2);
Choose a vertex u1 in the graph H and build an auxiliary digraph H⃗(u1) (see Fig. 3). We calculate the characteristics of the vertices of the newly constructed digraph. The results are given below.
Iu1 = ⊘;            Ou1 = (1, 1, 1, 1);
Iu2 = (0, 1, 1);     Ou2 = (1, 1, 2);
Iu3 = (0, 1, 1);     Ou3 = (1, 1, 2);
Iu4 = (0, 1, 1);     Ou4 = (1, 1, 2);
Iu5 = (1, 1, 1, 1);  Ou5 = ⊘;
Iu6 = (0, 1, 1);     Ou6 = (1, 1, 2);
It is easy to see that the constructed auxiliary directed graphs G⃗(v1) and H⃗(u1) are positionally equivalent.
Vertices v1 and u1 are removed from the initial graphs G and H respectively. We obtain graphs G1 and H1 (see Fig. 4).
Figure 4: Subgraphs G1 and H1 .
In the subgraph G1, choose a vertex v2 and build an auxiliary digraph G⃗1(v2) (see Fig. 5). We calculate the characteristics of the vertices of the auxiliary digraph G⃗1(v2).
Iv2 = ⊘;          Ov2 = (1, 1, 1);
Iv3 = (0, 1);      Ov3 = (1, 2);
Iv4 = (0, 1, 1);   Ov4 = (1, 1, 2);
Iv5 = (1, 1, 1);   Ov5 = ⊘;
Iv6 = (0, 1);      Ov6 = (1, 2);
In the subgraph H1, choose a vertex u2 and build an auxiliary digraph H⃗1(u2) (see Fig. 5). Calculate the characteristics of the vertices of the auxiliary digraph H⃗1(u2).
Figure 5: Auxiliary digraphs G⃗1(v2) and H⃗1(u2).
Iu2 = ⊘;          Ou2 = (1, 1, 1);
Iu3 = (0, 1);      Ou3 = (1, 2);
Iu4 = (0, 1);      Ou4 = (1, 2);
Iu5 = (0, 1, 1);   Ou5 = (1, 1, 2);
Iu6 = (1, 1, 1);   Ou6 = ⊘;
Again we find that the constructed auxiliary digraphs G⃗(v2) and H⃗(u2) are positionally equivalent.
Vertices v2 and u2 are removed from the subgraphs G1 and H1, respectively. We obtain subgraphs G2 and H2 (see Fig. 6).
Figure 6: Subgraphs G2 and H2 .
In the graph G2, choose a vertex v3 and build an auxiliary digraph G⃗2(v3) (see Fig. 7). Calculate the characteristics of the vertices of the auxiliary digraph G⃗2(v3).
Iv3 = ⊘;
Iv4 = (0, 1); Iv5 = (0, 1); Iv6 = (1, 1);
Ov3 = (1, 1); Ov4 = (1, 2); Ov5 = (1, 2); Ov6 = ⊘;
Construct an auxiliary directed graph H⃗2(u3) and find the characteristics of its vertices.
Figure 7: Auxiliary digraphs G⃗2(v3) and H⃗2(u3).
Iu3 = ⊘;
Iu4 = (1, 1); Iu5 = (0, 1); Iu6 = (0, 1);
Ou3 = (1, 1); Ou4 = ⊘;
Ou5 = (1, 2); Ou6 = (1, 2);
Auxiliary digraphs G⃗2(v3) and H⃗2(u3) are positionally equivalent.
Vertices v3 and u3 are removed from the graph G2 and H2 respectively.
We obtain graphs G3 and H3 (see Fig. 8).
Figure 8: Subgraphs G3 and H3 .
In the graph G3, choose a vertex v4 and build an auxiliary digraph G⃗3(v4) (see Fig. 9).
Figure 9: Auxiliary digraphs G⃗3(v4) and H⃗3(u4).
We calculate the characteristics of the auxiliary graph G⃗3(v4).
Iv4 = ⊘;
Iv5 = (0, 1); Iv6 = (0, 1);
Ov4 = (1, 1); Ov5 = (1);
Ov6 = (1);
Construct an auxiliary digraph H⃗3(u4) and find the characteristics of its vertices.
Iu4 = ⊘;
Iu5 = (0, 1); Iu6 = (0, 1);
Ou4 = (1, 1); Ou5 = (1);
Ou6 = (1);
Again we find that the constructed auxiliary digraphs G⃗(v4) and H⃗(u4) are positionally equivalent.
Vertices v4 and u4 are removed from the graphs G3 and H3 respectively. We obtain subgraphs G4 and H4 (see Fig. 10).
Figure 10: Subgraphs G4 and H4 .
In the graph G4, choose a vertex v5 and build an auxiliary digraph G⃗4(v5) (see Fig. 11).
Figure 11: Auxiliary digraphs G⃗4(v5) and H⃗4(u5).
We calculate the characteristics of the auxiliary graph G⃗4(v5).
Iv5 = ⊘;
Iv6 = (0);
Ov5 = (1); Ov6 = ⊘.
Construct an auxiliary digraph H⃗4(u5) (see Fig. 11) and find the characteristics of its vertices.
Iu5 = ⊘;
Iu6 = (0);
Ou5 = (1); Ou6 = ⊘;
We find that the auxiliary digraphs G⃗4(v5) and H⃗4(u5) are positionally equivalent.
Vertices v5 and u5 are removed from the subgraphs G4 and H4, respectively. We obtain one-vertex subgraphs G5 and H5. Auxiliary digraphs G⃗5(v6) and H⃗5(u6) contain one vertex and, of course, are positionally equivalent.
Conclusion: the given graphs G and H are isomorphic.
| 8 |
Conditional Lower Bounds for Space/Time Tradeoffs
Isaac Goldstein⋆1, Tsvi Kopelowitz⋆⋆2, Moshe Lewenstein⋆⋆⋆1, and Ely Porat⋆⋆⋆1
arXiv:1706.05847v2 [] 25 Jul 2017
1 Bar-Ilan University, {goldshi,moshe,porately}@cs.biu.ac.il
2 University of Waterloo, [email protected]
Abstract. In recent years much effort has been concentrated towards achieving polynomial time lower
bounds on algorithms for solving various well-known problems. A useful technique for showing such
lower bounds is to prove them conditionally based on well-studied hardness assumptions such as 3SUM,
APSP, SETH, etc. This line of research helps to obtain a better understanding of the complexity inside
P.
A related question asks to prove conditional space lower bounds on data structures that are constructed
to solve certain algorithmic tasks after an initial preprocessing stage. This question received little
attention in previous research even though it has potentially strong impact.
In this paper we address this question and show that surprisingly many of the well-studied hard problems
that are known to have conditional polynomial time lower bounds are also hard when concerning space.
This hardness is shown as a tradeoff between the space consumed by the data structure and the time
needed to answer queries. The tradeoff may be either smooth or admit one or more singularity points.
We reveal interesting connections between different space hardness conjectures and present matching
upper bounds. We also apply these hardness conjectures to both static and dynamic problems and
prove their conditional space hardness.
We believe that this novel framework of polynomial space conjectures can play an important role in
expressing polynomial space lower bounds of many important algorithmic problems. Moreover, it seems
that it can also help in achieving a better understanding of the hardness of their corresponding problems
in terms of time.
1 Introduction

1.1 Background
Lately there has been a concentrated effort to understand the time complexity within P, the class of
decision problems solvable by polynomial time algorithms. The main goal is to explain why certain
problems have time complexity that seems to be non-optimal. For example, all known efficient
algorithmic solutions for the 3SUM problem, where we seek to determine whether there are three
elements x, y, z in input set S of size n such that x + y + z = 0, take Õ(n2 ) time1 . However,
⋆ This research is supported by the Adams Foundation of the Israel Academy of Sciences and Humanities.
⋆⋆ Part of this work took place while the second author was at the University of Michigan. This work is supported in part by the Canada Research Chair for Algorithm Design, NSF grants CCF-1217338, CNS-1318294, and CCF-1514383.
⋆⋆⋆ This work was partially supported by an ISF grant #1278/16.
1 The Õ and Ω̃ notations suppress polylogarithmic factors.
the only real lower bound that we know is the trivial Ω(n). Likewise, we know how to solve the
all pairs shortest path, APSP, problem in Õ(n3 ) time but we cannot even determine whether it is
impossible to obtain an Õ(n2 ) time algorithm. One may note that it follows from the time-hierarchy
theorem that there exist problems in P with complexity Ω(nk ) for every fixed k. Nevertheless, such
a separation for natural practical problems seems to be hard to achieve.
The collaborated effort to understand the internals of P has been concentrated on identifying
some basic problems that are conjectured to be hard to solve more efficiently (by polynomial
factors) than their current known complexity. These problems serve as a basis to prove conditional
hardness of other problems by using reductions. The reductions are reminiscent of NP-complete
reductions but differ in that they are restricted to be of time complexity strictly smaller (by a
polynomial factor) than the problem that we are reducing to. Examples of such hard problems
include the well-known 3SUM problem, the fundamental APSP problem, (combinatorial) Boolean
matrix multiplication, etc. Recently, conditional time lower bounds have been proven based on
the conjectured hardness of these problems for graph algorithms [4,42], edit distance [13], longest
common subsequence (LCS) [3,15], dynamic algorithms [5,36], jumbled indexing [11], and many
other problems [1,2,6,7,14,25,31,34,40].
1.2 Motivation
In stark contrast to polynomial time lower bounds, little effort has been devoted to finding polynomial space conditional lower bounds. An example of a space lower bound appears in the work
of Cohen and Porat [19] and Pǎtraşcu and Roditty [38] where lower bounds are shown on the size
of a distance oracle for sparse graphs based on a conjecture about the best possible data structure
for a set intersection problem (which we call set disjointness in order to distinguish it from its reporting
variant).
A more general question is, for algorithmic problems, what conditional lower bounds of a
space/time tradeoff can be shown based on the set disjointness (intersection) conjecture? Even
more general is to discover what space/time tradeoffs can be achieved based on the other algorithmic problems that we assumed are hard (in the time sense)? Also, what are the relations between
these identified ”hard” problems in the space/time tradeoff sense? These are the questions which
form the basis and framework of this paper.
Throughout this paper we show connections between different hardness assumptions, show some
matching upper bounds and propose several conjectures based on this accumulated knowledge.
Moreover, we conjecture that there is a strong correlation between polynomial hardness in time
and space. We note that in order to discuss space it is often more natural to consider data structure
variants of problems and this is the approach we follow in this paper.
1.3 Our Results
Set Disjointness. In the SetDisjointness problem mentioned before, it is required to preprocess a
collection of m sets S1 , · · · , Sm ⊂ U , where U is the universe of elements and the total number of
elements in all sets is N . For a query, a pair of integers (i, j) (1 ≤ i, j ≤ m) is given and we are asked
whether Si ∩ Sj is empty or not. A folklore conjecture, which appears in [18,38], suggests that to
achieve a constant query time the space of the data structure constructed in the preprocessing stage
needs to be Ω̃(N 2 ). We call this conjecture the SetDisjointness conjecture. This conjecture does not
say anything about the case where we allow higher query time. Therefore, we suggest a stronger
conjecture which admits a full tradeoff between the space consumed by the data structure (denoted
by S) and the query time (denoted by T ). This is what we call the Strong SetDisjointness conjecture.
This conjecture states that for solving SetDisjointness with a query time T our data structure needs
Ω̃(N 2 /T 2 ) space. A matching upper bound exists for this problem by generalizing ideas from [18]
(see also [32]). Our new SetDisjointness conjecture can be used to admit more expressive space lower
bounds for a full tradeoff between space and query time.
3SUM Indexing. One of the basic and frequently used hardness conjectures is the celebrated 3SUM
conjecture. This conjecture was used for about 20 years to show many conditional time lower
bounds on various problems. However, we focus on what can be said about its space behavior. To
do this, it is natural to consider a data structure version of 3SUM which allows one to preprocess
the input set S. Then, the query is an external number z for which we need to answer whether
there are x, y ∈ S such that x + y = z. It was pointed out by Chan and Lewenstein [16] that all
known algorithms for 3SUM actually work within this model as well. We call this problem 3SUM
Indexing. On one hand, this problem can easily be solved using O(n2 ) space by sorting x + y for all
x, y ∈ S and then searching for z in Õ(1) time. On the other hand, by just sorting S we can answer
queries by a well-known linear time algorithm. The big question is whether we can obtain better
than Ω̃(n2 ) space while using just Õ(1) time query? Can it be done even if we allow Õ(n1−Ω(1) )
query time? This leads us to our two new hardness conjectures. The 3SUM-Indexing conjecture
states that when using Õ(1) query time we need Ω̃(n2 ) space to solve 3SUM-Indexing. In the Strong
3SUM-Indexing conjecture we say that even when using Õ(n1−Ω(1) ) query time we need Ω̃(n2 ) space
to solve 3SUM-Indexing.
3SUM Indexing and Set Disjointness. We prove connections between the SetDisjointness conjectures
and the 3SUM-Indexing conjectures. Specifically, we show that the Strong 3SUM-Indexing conjecture
implies the Strong SetDisjointness conjecture, while the SetDisjointness conjecture implies the 3SUM-Indexing conjecture. This gives some evidence towards establishing the difficulty within the 3SUM-Indexing conjectures. The usefulness of these conjectures should not be underestimated. As many
problems are known to be 3SUM-hard these new conjectures can play an important role in achieving
space lower bounds on their corresponding data structure variants. Moreover, it is interesting to
point on the difference between SetDisjointness which admits smooth tradeoff between space and
query time and 3SUM-Indexing which admits a big gap between the two trivial extremes. This may
explain why we are unable to show full equivalence between the hardness conjectures of the two
problems. Moreover, it can suggest a separation between problems with smooth space-time behavior
and others which have no such tradeoff but rather two ”far” extremes.
Generalizations. Following the discussion on the SetDisjointness and the 3SUM-Indexing conjectures
we investigate their generalizations.
I. k-Set Disjointness and (k+1)-SUM Indexing. The first generalization is a natural parametrization
of both problems. In the SetDisjointness problem we query about the emptiness of the intersection
between two sets, while in the 3SUM-Indexing problem we ask, given a query number z, whether two
numbers of the input S sum up to z. In the parameterized versions of these problems we are interested in the emptiness of the intersection between k sets and ask if k numbers sum up to a number
given as a query. These generalized variants are called k-SetDisjointness and (k+1)-SUM-Indexing
respectively. For each problem we give corresponding space lower bounds conjectures which generalize those of SetDisjointness and 3SUM-Indexing. These conjectures also have corresponding strong
variants which are accompanied by matching upper bounds. We prove that the k-SetDisjointness
conjecture implies (k+1)-SUM-Indexing conjecture via a novel method using linear equations.
II. k-Reachability. A second generalization is the problem we call k-Reachability. In this problem
we are given as an input a directed sparse graph G = (V, E) for preprocessing. Afterwards, for a
query, given as a pair of vertices u, v, we wish to return if there is a path from u to v consisting
of at most k edges. We provide an upper bound on this problem for every fixed k ≥ 1. The upper
bound admits a tradeoff between the space of the data structure (denoted by S) and the query
time (denoted by T ), which is ST 2/(k−1) = O(n2 ). We argue that this upper bound is tight. That
n2
). We call this conjecture
is, we conjecture that if query takes T time, the space must be Ω̃( T 2/(k−1)
the k-Reachability conjecture.
We give three indications towards the correctness of this conjecture. First, we prove that the
base case, where k = 2, is equivalent to the SetDisjointness problem. This is why this problem can
be thought of as a generalization of SetDisjointness.
Second, if we consider non-constant k then the smooth tradeoff surprisingly disappears and we get "extreme behavior", as Ω̃(n^2/T^{2/(k−1)}) eventually becomes Ω̃(n^2). This means that to answer reachability queries for non-constant path length, we can either store all answers in advance using n^2 space or simply answer queries from scratch using a standard graph traversal algorithm. The general problem, where the length of the path from u to v is unlimited, is sometimes referred
to as the problem of constructing efficient reachability oracles. Pǎtraşcu in [37] leaves it as an open
question if a data structure with less than Ω̃(n2 ) space can answer reachability queries efficiently.
Moreover, Pǎtraşcu proved that for constant query time, truly superlinear space is needed. Our k-Reachability conjecture points in this direction, while admitting a full space-time tradeoff for constant
k.
The third indication for the correctness of the k-Reachability conjecture comes from a connection
to distance oracles. A distance oracle is a data structure that can be used to quickly answer queries
about the shortest path between two given nodes in a preprocessed undirected graph. As mentioned
above, the SetDisjointness conjecture was used to exclude some possible tradeoffs for sparse graphs.
Specifically, Cohen and Porat [19] showed that obtaining an approximation ratio smaller than 2
with constant query time requires Ω̃(n2 ) space. Using a somewhat stronger conjecture Pǎtraşcu
and Roditty [38] showed that a (2,1)-distance oracle for unweighted graphs with m = O(n) edges
requires Ω̃(n1.5 ) space. Later, this result was strengthened by Pǎtraşcu et al. [39]. However, these
results do not exclude the possibility of compact distance oracles if we allow higher query time.
For stretch-2 and stretch-3 in sparse graphs, Agarwal et. al. [9,10] achieved a space-time tradeoff
of S × T = O(n2 ) and S × T 2 = O(n2 ), respectively. Agarwal [8] also showed many other results
for stretch-2 and below. We use our k-Reachability conjecture to prove that for stretch-less-than(1+2/k) distance oracles S × T 2/(k−1) is bounded by Ω̃(n2 ). This result is interesting in light of
Agarwal [8] where a stretch-(5/3) oracle was presented which achieves a space-time tradeoff of
S × T = O(n2 ). This matches our lower bound, where k = 3, if our lower bound would hold not
only for stretch-less-than-(5/3) but also for stretch-(5/3) oracles. Consequently, we see that there
is strong evidence for the correctness of the k-Reachability conjecture.
Moreover, these observations show that on one hand k-Reachability is a generalization of SetDisjointness which is closely related to 3SUM-Indexing. On the other hand, k-Reachability is related
to distance oracles which solve the famous APSP problem using smaller space by sacrificing the
accuracy of the distance between the vertices. Therefore, the k-Reachability conjecture seems as a
conjecture corresponding to the APSP hardness conjecture, while also admitting some connection
with the celebrated 3SUM hardness conjecture.
SETH and Orthogonal Vectors. After considering space variants of the 3SUM and APSP conjectures
it is natural to consider space variants for the Strong Exponential Time Hypothesis (SETH) and the
closely related conjecture of orthogonal vectors. SETH asserts that for any ǫ > 0 there is an integer
k > 3 such that k-SAT cannot be solved in 2^{(1−ǫ)n} time. The orthogonal vectors time conjecture
states that there is no algorithm that for every c ≥ 1, finds if there are at least two orthogonal
vectors in a set of n Boolean vectors of length c log n in Õ(n2−Ω(1) ) time. We discuss the space
variants of these conjectures in Section 7. However, we are unable to connect these conjectures and
the previous ones. This is perhaps not surprising as the connection between SETH and the other
conjectures even in the time perspective is very loose (see, for example, discussions in [5,25]).
Boolean Matrix Multiplication. Another problem which receives a lot of attention in the context
of conditional time lower bounds is calculating Boolean Matrix Multiplication (BMM). We give a
data structure variant of this well-known problem. We then demonstrate the connection between
this problem and the problems of SetDisjointness and k-Reachability.
Applications. Finally, armed with the space variants of many well-known conditional time lower
bounds, we apply this conditional space lower bounds to some static and dynamic problems. This
gives interesting space lower bound results on these important problems which sometimes also
admits clear space-time tradeoff. We believe that this is just a glimpse of space lower bounds that
can be achieved based on our new framework and that many other interesting results are expected
to follow this promising route.
Figure 1 in Appendix A presents a sketch of the results in this paper.
2 Set Intersection Hardness Conjectures
We first give formal definitions of the SetDisjointness problem and its enumeration variant:
Problem 1 (SetDisjointness Problem). Preprocess a family F of m sets, all from universe U, with total size N = Σ_{S∈F} |S| so that given two query sets S, S′ ∈ F one can determine if S ∩ S′ = ∅.
Problem 2 (SetIntersection Problem). Preprocess a family F of m sets, all from universe U, with total size N = Σ_{S∈F} |S| so that given two query sets S, S′ ∈ F one can enumerate the set S ∩ S′.
Conjectures. The SetDisjointness problem was regarded as a problem that admits space hardness.
The hardness conjecture of the SetDisjointness problem has received several closely related formulations. One such formulation, given by Pǎtraşcu and Roditty [38], is as follows:
Conjecture 1. SetDisjointness Conjecture [Formulation 1]. Any data structure for the SetDisjointness problem where |U| = log^c m for a large enough constant c and with a constant query time must use Ω̃(m^2) space.
Another formulation is implicitly suggested in Cohen and Porat [18]:
Conjecture 2. SetDisjointness Conjecture [Formulation 2]. Any data structure for the SetDisjointness problem with constant query time must use Ω̃(N 2 ) space.
There is an important distinction between the two formulations, which is related to the sparsity
of SetDisjointness instances. This distinction follows from the following upper bound: store an m×m
matrix of the answers to all possible queries, and then queries will cost constant time. The first
formulation of the SetDisjointness conjecture states that if we want constant (or poly-logarithmic)
query time, then this is the best we can do. At a first glance this makes the second formulation,
whose bounds are in terms of N and not m, look rather weak. In particular, why would we ever be
interested in a data structure that uses O(N 2 ) space when we can use one with O(m2 ) space? The
answer is that the two conjectures are the same if the sets are very sparse, and so at least in terms
of N , if one were to require a constant query time then by the second formulation the space must
be at least Ω(N 2 ) (which happens in the very sparse case).
Nevertheless, we present a more general conjecture, which in particular captures a tradeoff curve
between the space usage and query time. This formulation captures the difficulty that is commonly
believed to arise from the SetDisjointness problem, and matches the upper bounds of Cohen and
Porat [18] (see also [32]).
Conjecture 3. Strong SetDisjointness Conjecture. Any data structure for the SetDisjointness problem that answers queries in T time must use S = Ω̃(N^2/T^2) space.
For example, a natural question to ask is “what is the smallest query time possible with linear space?”. This question is addressed, at least from a lower bound perspective, by the Strong
SetDisjointness conjecture.
Conjecture 4. Strong SetIntersection Conjecture. Any data structure for the SetIntersection problem that answers queries in O(T + op) time, where op is the size of the output of the query, must use S = Ω̃(N^2/T) space.
3 3SUM-Indexing Hardness Conjectures
In the classic 3SUM problem we are given an integer array A of size n and we wish to decide
whether there are 3 distinct integers in A which sum up to zero. Gajentaan and Overmars [23]
showed that an equivalent formulation of this problem receives 3 integer arrays A1 , A2 , and A3 ,
each of size n, and the goal is to decide if there is a triplet x1 ∈ A1 , x2 ∈ A2 , and x3 ∈ A3 that sum
up to zero.
We consider the data structure variant of this problem which is formally defined as follows:
Problem 3 (3SUM-Indexing Problem). Preprocess two integer arrays A1 and A2 , each of length n,
so that given a query integer z we can decide whether there are x ∈ A1 and y ∈ A2 such that
z = x + y.
It is straightforward to maintain all possible O(n2 ) sums of pairs in quadratic space, and then
answer a query in Õ(1) time. On the other extreme, if one does not wish to utilize more than linear
space then one can sort the arrays separately during preprocessing time, and then a query can be
answered in Õ(n) time by scanning both of the sorted arrays in parallel and in opposite directions.
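For concreteness, the two extremes just described can be sketched in a few lines of Python (an illustration added here, not code from any of the cited works; the function names are ours):

def preprocess_quadratic(a1, a2):
    # O(n^2)-space extreme: store every pairwise sum in a hash set.
    return {x + y for x in a1 for y in a2}

def query_quadratic(sums, z):
    return z in sums  # constant expected time per query

def preprocess_linear(a1, a2):
    # Linear-space extreme: keep both arrays sorted.
    return sorted(a1), sorted(a2)

def query_linear(index, z):
    # Scan the sorted arrays in parallel and in opposite directions: O(n) time.
    a1, a2 = index
    i, j = 0, len(a2) - 1
    while i < len(a1) and j >= 0:
        s = a1[i] + a2[j]
        if s == z:
            return True
        if s < z:
            i += 1  # need a larger summand from a1
        else:
            j -= 1  # need a smaller summand from a2
    return False

For example, with A1 = [1, 5, 9] and A2 = [2, 4, 8], the linear-space variant answers the query z = 13 positively because 5 + 8 = 13.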
We introduce two conjectures with regards to the 3SUM-Indexing problem, which serve as natural
candidates for proving polynomial space lower bounds.
6
Conjecture 5. 3SUM-Indexing Conjecture: There is no solution for the 3SUM-Indexing problem
with truly subquadratic space and Õ(1) query time.
Conjecture 6. Strong 3SUM-Indexing Conjecture: There is no solution for the 3SUM-Indexing
problem with truly subquadratic space and truly sublinear query time.
Notice that one can solve the classic 3SUM problem using a data structure for 3SUM-Indexing
by preprocessing A1 and A2 , and answering n 3SUM-Indexing queries on all of the values in A3 .
Next, we prove theorems that show tight connections between the 3SUM-Indexing conjectures
and the SetDisjointness conjectures. We note that the proofs of the first two theorems are similar
to the proofs of [31], but with space interpretation.
Theorem 1. The Strong 3SUM-Indexing Conjecture implies the Strong SetDisjointness Conjecture.
Proof. A family H of hash functions from [u] → [m] is called linear if for any h ∈ H and any x, x′ ∈
[u], h(x) + h(x′ ) = h(x + x′ ) + ch (mod m), where ch is some integer that depends only on h. H is
called almost linear if for any h ∈ H and any x, x′ ∈ [u], either h(x)+h(x′ ) = h(x+x′ )+ch (mod m),
or h(x) + h(x′ ) = h(x + x′ ) + ch + 1 (mod m).
Given a hash function h ∈ H we say that a value i ∈ [m] is heavy for a set S = {x1, . . . , xn} ⊂ [u] if |{x ∈ S : h(x) = i}| > 3n/m. H is called almost balanced if for any set S = {x1, . . . , xn} ⊂ [u], the
expected number of elements from S that are hashed to heavy values is O(m). Kopelowitz et al.
showed in [31] that a family of hash functions obtained from the construction of Dietzfelbinger [20]
is almost-linear, almost-balanced, and pair-wise independent. In order to reduce clutter in the proof
here we assume the existence of linear, almost-balanced, and pair-wise independent families of hash
functions. Using the family of hash functions of Dietzfelbinger [20] will only affect multiplicative
constants.
We reduce an instance of the 3SUM-Indexing problem to an instance of the SetDisjointness
γ
2
problem as follows. Let R =
√ n for some constant 0 < γ < 1. Let Q = (5n/R) . Without loss of
generality we assume that Q is an integer. We pick a random hash function h1 : U → [R] from
a family that is linear and almost-balanced. Using h1 we create R buckets B1 , . . . , BR such that
Bi = {x ∈ A1 : h1 (x) = i}, and another R buckets C1 , . . . , CR such that Ci = {x ∈ A2 : h1 (x) = i}.
Since h1 is almost-balanced, the expected number of elements from A1 and A2 that are mapped
to buckets of size greater than 3n/R is O(R). We use O(R) space to maintain this list explicitly,
together with a lookup table for the elements in A1 and A2 .
Next, we pick a random hash function h2 : U → [Q] where h2 is chosen from a pair-wise independent and linear family. For each bucket we create √Q shifted sets as follows: for each 0 ≤ j < √Q let B_{i,j} = {h2(x) − j·√Q (mod Q) | x ∈ Bi} and C_{i,j} = {−h2(x) + j (mod Q) | x ∈ Ci}.
These sets are all preprocessed into a data structure for the SetDisjointness problem.
Next, we answer a 3SUM-Indexing query z by utilizing the linearity of h1 and h2 , which implies
that if there exist x ∈ A1 and y ∈ A2 such that x + y = z then h1 (x) + h1 (y) = h1 (z) + ch1 (mod R)
and h2 (x) + h2 (y) = h2 (z) + ch2 (mod Q).
Thus, if x ∈ Bi then y must be in C_{h1(z)+ch1−i (mod R)}. For each i ∈ [R] we would like to intersect Bi with C_{h1(z)+ch1−i (mod R)} in order to find candidate pairs of x and y. Denote by h2↑(z) = ⌊(h2(z)+ch2)/√Q⌋ and h2↓(z) = h2(z) + ch2 (mod √Q). Due to the almost-linearity of h2, if the sets Bi and C_{h1(z)+ch1−i (mod R)} + z are not disjoint then the sets B_{i,h2↑(z)} and C_{h1(z)+ch1−i (mod R),h2↓(z)} are not disjoint (but the reverse is not necessarily true). Thus, if B_{i,h2↑(z)} ∩ C_{h1(z)+ch1−i (mod R),h2↓(z)} = ∅ then there is no candidate pair in Bi and C_{h1(z)+ch1−i (mod R)} + z. However, if B_{i,h2↑(z)} ∩ C_{h1(z)+ch1−i (mod R),h2↓(z)} ≠ ∅ then it is possible that this is due to a 3SUM-Indexing solution, but we may have false positives.
Notice that the number of set pairs whose intersection we need to examine is O(R) since z is given.
Once we pick i (R choices) the rest is implicit.
Set z and let k = h2(z). Since h2 is pair-wise independent and linear, for any pair x, y ∈ U where x ≠ y we have that if x + y ≠ z then Pr[h2(x) + h2(y) = k + ch2 (mod Q)] = Pr[h2(x + y) = h2(z) + ch2 (mod Q)] = 1/Q. Since each bucket contains at most 3n/R elements, the probability of a false positive due to two buckets Bi and Cj is not greater than (3n/R)^2 · (1/Q) = 9/25. In order to reduce the probability of a false positive to be polynomially small, we repeat the process with O(log n)
the probability of a false positive to be polynomially small, we repeat the process with O(log n)
different choices of h2 functions (but using the same h1 ). This blows up the number of sets by a
factor of O(log n), but not the universe. If the sets intersect under all O(log n) choices of h2 then we
can spend O(n/R) time to find x and y within buckets Bi and Cj , which are either a 3SUM-Indexing
solution (and the algorithm halts), or a false positive,
which only occurs with probability 1/poly(n).
√
To summarize, we create a total of O(R Q log n) sets, each of size at most 3n/R. Thus, the
total size of the SetDisjointness instance is N = Õ(n2 /R). For a query, we perform Õ(R) queries on
n
1
the SetDisjointness structure, and spend another O(R · R
· poly(n)
) = O(1) expected time to verify
that we did not hit a false positive. Furthermore, we spend O(R) time to check possible solutions
containing one of the expected O(R) elements from buckets with too many elements by using the
lookup tables. If we denote by T (N ) and S(N ) the query time and space usage, respectively, of the
SetDisjointness data structure on N elements (in our case N = Õ(n^{2−γ})), then the query time of the
reduction becomes t3SI = Õ(R · T (n2 /R)) time and the space usage is s3SI = Õ(S(n2 /R) + O(n)).
Since we may assume that S(N ) = Ω(N ), we have that s3SI = Õ(S(N )).
By the Strong 3SUM-Indexing Conjecture, either s3SI = Ω̃(n^2) or t3SI = Ω̃(n), which means that either S(N) = Ω̃(N^{2/(2−γ)}) or T(N) = Ω̃(N^{(1−γ)/(2−γ)}). For any constant ǫ > 0, if the SetDisjointness data structure uses Θ̃(N^{2/(2−γ)−ǫ}) space, then S(N) · (T(N))^2 = Ω̃(N^{2/(2−γ)−ǫ+(2−2γ)/(2−γ)}) = Ω̃(N^{2−ǫ}). Since this holds for any ǫ > 0 it must be that S(N) · (T(N))^2 = Ω̃(N^2).
⊔
⊓
Theorem 2. The Strong 3SUM-Indexing Conjecture implies the Strong SetIntersection Conjecture.
Proof. The proof follows the same structure as the proof of Theorem 1, but here we set Q =
(n^{1+δ}/R), where δ > 0 is a constant. Furthermore, we preprocess the buckets using a SetIntersection
data structure, and if two sets intersect then instead of repeating the whole process with different
choices of h2 (in order to reduce the probability of a false positive), we use the SetIntersection data
structure to report all of the elements in an intersection, and verify them all directly.
As before, set z and let k = h2(z). Since h2 is pair-wise independent and linear, for any pair x, y ∈ U where x ≠ y we have that if x + y ≠ z then Pr[h2(x) + h2(y) = k + ch2 (mod Q)] = Pr[h2(x + y) = h2(z) + ch2 (mod Q)] = 1/Q. We now bound the expected output size from all of the intersections. Since each pair of buckets implies at most (3n/R)^2 pairs of elements, the expected size of their intersection is E[|(h2(Bi) − k) ∩ h2(Cj)|] = (3n/R)^2 · (1/Q) = O(n^{1−δ}/R). Thus, the expected size of the output of all of the O(R) intersections is O(R · n^{1−δ}/R) = O(n^{1−δ}). For each pair in an intersection we can verify in constant time if together with z they form a solution.
To summarize, we create a total of O(R√Q) sets, each of size at most 3n/R. Thus, the total size of the SetIntersection instance is N = Õ(n^2/R). For a query, we perform Õ(R) queries on the SetIntersection structure. Furthermore, we spend O(R) time to check possible solutions containing one of the expected O(R) elements from buckets with too many elements by using the lookup tables. If we denote by T(N) and S(N) the query time and space usage, respectively, of the SetIntersection data structure on N elements (in our case N = Õ(R√Q · n/R) = Õ(n^{(3+δ−γ)/2})), then the query time of the reduction becomes t3SI = Õ(R·T(N) + n^{1−δ}) time and the space usage is s3SI = Õ(S(N) + O(n)). Since we may assume that S(N) = Ω(N), we have that s3SI = Õ(S(N)).
By the Strong 3SUM-Indexing conjecture, either s3SI = Ω̃(n^2) or t3SI = Ω̃(n), which means that either S(N) = Ω̃(N^{4/(3+δ−γ)}) or T(N) = Ω̃(N^{(2−2γ)/(3+δ−γ)}). For any constant ǫ > 0, if the SetIntersection data structure uses Θ̃(N^{4/(3+δ−γ)−ǫ}) space, then S(N)·T(N) = Ω̃(N^{4/(3+δ−γ)−ǫ+(2−2γ)/(3+δ−γ)}) = Ω̃(N^{2−2δ/(3+δ−γ)−ǫ}). Since this holds for any ǫ > 0 and any δ > 0 it must be that S(N) · T(N) = Ω̃(N^2).
⊔
⊓
Theorem 3. The SetDisjointness Conjecture implies the 3SUM-Indexing Conjecture.
Proof. Given an instance of SetDisjointness, we construct an instance of 3SUM-Indexing as follows.
Denote with M the value of the largest element in the SetDisjointness instance. Notice that we may
assume that M ≤ N (otherwise we can use a straightforward renaming). For every element x ∈ U
that is contained in at least one of the sets we create two integers xA and xB , which are represented
by 2⌈log m⌉ + ⌈log N ⌉ + 3 bits each (recall that m is the number of sets).
The ⌈log N ⌉ least significant bits in xA represent the value of x. The following bit is a zero.
The following ⌈log m⌉ bits in xA represent the index of the set containing x, and the rest of the
2 + ⌈log m⌉ are all set to zero. The ⌈log N ⌉ least significant bits in xB represent the value of M − x.
The following 2 + ⌈log m⌉ are all set to zero. The following ⌈log m⌉ bits in xB represent the index
of the set containing x, and the last bit is set to zero. Finally, the integer xA is added to A1 of the
3SUM-Indexing instance, while the integer xB is added to A2 .
We have created two sets of n ≤ M integers. We then preprocess them to answer 3SUM-Indexing
queries. Now, to answer a SetDisjointness query on sets Si and Sj , we query the 3SUM-Indexing data
structure with an integer z which is determined as follows. The ⌈log N ⌉ least significant bits in z
represent the value of M . The following bit is a zero. The following ⌈log m⌉ bits represent the index
i and are followed by a zero. The next ⌈log m⌉ bits represent the index j and the last bit is set to
zero.
It is straightforward to verify that there exists a solution to the 3SUM-Indexing problem on z
if and only if the sets Si and Sj are not disjoint. Therefore, if there is a solution to the 3SUMIndexing problem with less than Ω̃(n2 ) space and constant query time then there is a solution for
the SetDisjointness problem which refutes the SetDisjointness Conjecture.
⊔
⊓
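To make the bit layout of this reduction concrete, the following Python sketch mimics the encoding (it is an illustration of the idea only; the field widths are simplified relative to the proof, one pair of integers is created per occurrence of an element in a set, and all names are ours):

def build_3sum_indexing_instance(sets, universe_max):
    # One (xA, xB) pair per occurrence of an element in a set; the set index
    # lives in its own bit field so that no carries can mix the fields.
    m, M = len(sets), universe_max
    val_bits = M.bit_length() + 1   # low field: holds x, M - x and their sum M
    idx_bits = m.bit_length() + 1   # one field per set index
    a1, a2 = [], []
    for i, s in enumerate(sets):
        for x in s:
            a1.append(x + (i << val_bits))
            a2.append((M - x) + (i << (val_bits + idx_bits)))

    def query_encoding(i, j):
        # z places M in the low field and the two set indices in the index fields.
        return M + (i << val_bits) + (j << (val_bits + idx_bits))

    return a1, a2, query_encoding

# S0 and S1 share the element 3, while S0 and S2 are disjoint.
sets = [{1, 3}, {3, 5}, {2, 4}]
a1, a2, enc = build_3sum_indexing_instance(sets, universe_max=5)
pair_sums = {x + y for x in a1 for y in a2}  # naive stand-in for a 3SUM-Indexing oracle
assert enc(0, 1) in pair_sums and enc(0, 2) not in pair_sums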
4 Parameterized Generalization: k-Set Intersection and (k+1)-SUM
Two parameterized generalizations of the SetDisjointness and 3SUM-Indexing problems are formally
defined as follows:
Problem 4 (k-SetDisjointness Problem). Preprocess a family F of m sets, all from universe U, with total size N = Σ_{S∈F} |S| so that given k query sets S1, S2, . . . , Sk ∈ F one can quickly determine if ∩_{i=1}^{k} Si = ∅.
Problem 5 ((k+1)-SUM-Indexing Problem). Preprocess k integer arrays A1, A2, . . . , Ak, each of length n, so that given a query integer z we can decide if there is x1 ∈ A1, x2 ∈ A2, . . . , xk ∈ Ak such that z = Σ_{i=1}^{k} xi.
It turns out that a natural generalization of the data structure of Cohen and Porat [18] leads to
a data structure for k-SetDisjointness as shown in the following lemma.
Lemma 1. There exists a data structure for the k-SetDisjointness problem where the query time is
T and the space usage is S = O((N/T )k ).
Proof. We call the f largest sets in F large sets. The rest of the sets are called small sets. In
the preprocessing stage we explicitly maintain a k-dimensional table with the answers for all k-SetDisjointness queries where all k sets are large sets. The space needed for such a table is S = f^k.
Moreover, for each set (large or small) we maintain a look-up table that supports disjointness queries
(with this set) in constant time. Since there are f large sets and the total number of elements is
N , the size of each of the small sets is at most N/f .
Given a k-SetDisjointness query, if all of the query sets are large then we look up the answer
in the k-dimensional table. If at least one of the sets is small then using a brute-force search we
look-up each of the at most O(N/f ) elements in each of the other k − 1 sets. Thus, the total query
time is bounded by O(kN/f ), and the space usage is S = O(f k ). The rest follows.
⊔
⊓
Notice that for the case of k = 2 in Lemma 1 we obtain the same tradeoff of Cohen and Porat [18]
for SetDisjointness. The following conjecture suggests that the upper bound of Lemma 1 is the best
possible.
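As an illustration of the large/small-set scheme behind Lemma 1, the following Python sketch implements the k = 2 case (the class name and the parameter f are ours; choosing f ≈ N/T yields the S = O((N/T)^2) point of the tradeoff):

class SetDisjointnessIndex:
    # Illustrative tradeoff structure for SetDisjointness with k = 2.
    def __init__(self, sets, f):
        self.sets = [set(s) for s in sets]
        order = sorted(range(len(self.sets)), key=lambda i: len(self.sets[i]), reverse=True)
        self.large = set(order[:f])  # indices of the f largest sets
        # O(f^2)-size table with precomputed answers for pairs of large sets.
        self.table = {(i, j): self.sets[i].isdisjoint(self.sets[j])
                      for i in self.large for j in self.large}

    def disjoint(self, i, j):
        if i in self.large and j in self.large:
            return self.table[(i, j)]  # constant-time lookup
        # Otherwise one of the two sets is small (size at most N/f): scan it.
        a, b = (i, j) if len(self.sets[i]) <= len(self.sets[j]) else (j, i)
        return all(x not in self.sets[b] for x in self.sets[a])

For instance, SetDisjointnessIndex([{1, 2}, {3}, {2, 4}], f=1).disjoint(0, 2) returns False because both sets contain the element 2.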
Conjecture 7. Strong k-SetDisjointness Conjecture. Any data structure for the k-SetDisjointness problem that answers queries in T time must use S = Ω̃(N^k/T^k) space.
Similarly, a natural generalization of the Strong 3SUM-Indexing conjecture is the following.
Conjecture 8. Strong (k+1)-SUM-Indexing Conjecture. There is no solution for the (k+1)-SUM-Indexing problem with Õ(n^{k−Ω(1)}) space and truly sublinear query time.
We also consider some weaker conjectures, similar to the SetDisjointness and 3SUM-Indexing
conjectures.
Conjecture 9. k-SetDisjointness Conjecture. Any data structure for the k-SetDisjointness problem
that answers queries in constant time must use Ω̃(N^k) space.
Conjecture 10. (k+1)-SUM-Indexing Conjecture. There is no solution for the (k+1)-SUM-Indexing
problem with Õ(n^{k−Ω(1)}) space and constant query time.
Similar to Theorem 3, we prove the following relationship between the k-SetDisjointness conjecture and the (k+1)-SUM-Indexing conjecture.
Theorem 4. The k-SetDisjointness conjecture implies the (k+1)-SUM-Indexing conjecture
Proof. Given an instance of k-SetDisjointness, we construct an instance of (k+1)-SUM-Indexing as
follows. Denote by M the value of the largest element in the SetDisjointness instance. Notice that
we may assume that M ≤ N (otherwise we use a straightforward renaming). For every element
x ∈ U that is contained in at least one of the sets we create k integers x1 , x2 , ..., xk , where each
integer is represented by k⌈log m⌉ + (k − 1)⌈log N ⌉ + 2k − 1 bits.
For integer xi , if i > 1 the (k − 1)⌈log N ⌉ + k − 1 least significant bits are all set to zero, except
for the bits in indices (i − 2)(⌈log N ⌉ + 1) + 1, ..., (i − 1)(⌈log N ⌉ + 1) that represent the value of
x. If i = 1 the value of the bits in the indices (j − 1)(⌈log N ⌉ + 1) + 1, ..., j(⌈log N ⌉ + 1) is set to
M − x for all 1 ≤ j ≤ k − 1. The k⌈log m⌉ + k following bits are all set to zero, except for the bits
in indices (i − 1)(⌈log m⌉ + 1) + 1, ..., i(⌈log m⌉ + 1) which represent the index of the set containing
x.
We now create an instance of (k+1)-SUM-Indexing where the jth input array Aj is the set
of integers xj for all x ∈ U that is contained in at least one set of our family. Thus, the size
of each array is at most N . Now, given a k-SetDisjointness query (i1 , i2 , ..., ik ) we must decide if
Si1 ∩ Si2 ∩ ... ∩ Sik = ∅. To answer this query we will query the instance of (k+1)-SUM-Indexing we
have created with an integer z whose binary representation is as follows: In the (k −1)⌈log N ⌉+k −1
least significant bits the value of the bits in the indices (j − 1)(⌈log N ⌉ + 1) + 1, ..., j(⌈log N ⌉ + 1)
is set to M for all 1 ≤ j ≤ k − 1. In the k⌈log m⌉ + k following bits, the bits at locations (j −
1)(⌈log m⌉ + 1) + 1, ..., j(⌈log m⌉ + 1) represent ij (for 1 ≤ j ≤ k). The rest of the bits are padding
zero bits (in between representations of various ij s and M s).
If Si1 ∩ Si2 ∩ ... ∩ Sik 6= ∅ then by our construction it is straightforward to verify that the (k+1)SUM-Indexing query on z will return that there is a solution. If Si1 ∩ Si2 ∩ ... ∩ Sik = ∅ then at least
for one j ∈ [k − 1] the sum of values in the bits in indices (j − 1)(⌈log N ⌉ + 1) + 1, ..., j(⌈log N ⌉ + 1)
in the (k − 1)⌈log N ⌉ + k − 1 least significant bits will not be M . This is because we can view
each block of ⌈log N ⌉ + 1 bits in the (k − 1)⌈log N ⌉ + k − 1 least significant bits as solving a linear
equation. This equation is of the form M − x1 + xi = M for every block i − 1 where 2 ≤ i ≤ k.
The solution of each of these equations is x1 = xi for all 2 ≤ i ≤ k. Consequently, a solution can be
found only if there is a specific x which is contained in all of the k sets. Therefore, we get a correct
answer to a k-SetDisjointness query by answering a (k+1)-SUM-Indexing query.
Consequently, if for some specific constant k there is a solution to the (k+1)-SUM-Indexing
problem with less than Ω̃(nk ) space and constant query time, then with this reduction we refute
the k-SetDisjointness conjecture.
⊔
⊓
5 Directed Reachability Oracles as a Generalization of the Set Disjointness Conjecture
An open question which was stated by Pǎtraşcu in [37] asks if it is possible to preprocess a sparse
directed graph in less than Ω(n2 ) space so that Reachability queries (given two query vertices u
and v decide whether there is a path from u to v or not) can be answered efficiently. A partial
answer, given in [37], states that for constant query time truly superlinear space is necessary. In
the undirected case the question is trivial and one can answer queries in constant time using linear
space. This is also possible for planar directed graphs (see Holm et al. [27]).
We now show that Reachability oracles for sparse graphs can serve as a generalization of the
SetDisjointness conjecture. We define the following parameterized version of Reachability. In the
k-Reachability problem the goal is to preprocess a directed sparse graph G = (V, E) so that given
a pair of distinct vertices u, v ∈ V one can quickly answer whether there is a path from u to v
consisting of at most k edges. We prove that 2-Reachability and SetDisjointness are tightly connected.
Lemma 2. There is a linear time reduction from SetDisjointness to 2-Reachability and vice versa
which preserves the size of the instance.
Proof. Given a graph G = (V, E) as an instance for 2-Reachability, we construct a corresponding
instance of SetDisjointness as follows. For each vertex v we create the sets Vin = {u|(u, v) ∈ E} and
Vout = {u|(v, u) ∈ E} ∪ {v}. We have 2n sets and 2m + n elements in all of them (|V | = n and
|E| = m). Now, a query u, v is reduced to determining if the sets Uout and Vin are disjoint or not.
Notice, that the construction is done in linear time and preserves the size of the instance. In the
opposite direction, we are given m sets S1 , S2 , ..., Sm having N elements in total e1 , e2 , ..., eN . We
can create an instance of 2-Reachability in the following way. For each set Si we create a vertex vi .
Moreover, for each element ej we create a vertex uj . Then, for each element ej in a set Si we create
two directed edges (vi , uj ) and (uj , vi ). These vertices and edges define a directed graph, which is
preprocessed for 2-Reachability queries. It is straightforward to verify that the disjointness of Si and
Sj is equivalent to determining if there is a path of length at most 2 edges from vi to vj . Moreover,
the construction is done in linear time and preserves the size of the instance.
⊔
⊓
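The first direction of this reduction is short enough to state directly in code; the following Python sketch (an illustration, with names of our choosing) builds the sets Vin and Vout described above and answers a 2-Reachability query by a single disjointness test:

def two_reachability_via_set_disjointness(n, edges):
    # Vin(v) = in-neighbors of v; Vout(v) = out-neighbors of v plus v itself.
    v_in = [set() for _ in range(n)]
    v_out = [{v} for v in range(n)]
    for a, b in edges:
        v_out[a].add(b)
        v_in[b].add(a)

    def reachable_within_two(u, v):
        # A path of at most 2 edges exists iff Vout(u) and Vin(v) intersect.
        return not v_out[u].isdisjoint(v_in[v])

    return reachable_within_two

# Example: edges 0 -> 1 -> 2, so 0 reaches 2 within two hops, but 2 never reaches 0.
reach = two_reachability_via_set_disjointness(3, [(0, 1), (1, 2)])
assert reach(0, 2) and reach(0, 1) and not reach(2, 0)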
Furthermore, we consider k-Reachability for k ≥ 3. First we show an upper bound on the tradeoff
between space and query time for solving k-Reachability.
Lemma 3. There exists a data structure for k-Reachability with S space and T query time such
that ST 2/(k−1) = O(n2 ).
Proof. Let α > 0 be an integer parameter to be set later. Given a directed graph G = (V, E), we
call vertex v ∈ V a heavy vertex if deg(v) > α and a vertex u ∈ V a light vertex if deg(u) ≤ α.
Notice that the number of heavy vertices is at most n/α. For all heavy vertices in V we maintain
a matrix containing the answers to any k-Reachability query between two heavy vertices. This uses
O(n2 /α2 ) space.
Next, we recursively construct a data structure for (k-1)-Reachability. Given a query u, v, if both
vertices are heavy then the answer is obtained from the matrix. Otherwise, either u or v is light
vertex. Without loss of generality, say u is a light vertex. We consider each vertex w ∈ Nout (u)
(Nout (u) = {v|(u, v) ∈ E}) and query the (k-1)-Reachability data structure with the pair w, v. Since
u is a light node, there are no more than α queries. One of the queries returns a positive answer if
and only if there exists a path of length at most k from u to v.
Denote by S(k, n) the space used by our k-Reachability oracle on a graph with n vertices and
denote by Q(k, n) the corresponding query time. In our construction we have S(k, n) = n2 /α2 +
S(k − 1, n) and Q(k, n) = αQ(k − 1, n) + O(1). For k = 1 it is easy to construct a linear space data
structure using hashing so that queries can be answered in constant time. Thus, S = S(k, n) =
O((k − 1)n2 /α2 ) and T = Q(k, n) = O(αk−1 ).
⊔
⊓
Notice that for the case of k = 2 the upper bounds from Lemma 3 exactly match the tradeoff of
the Strong SetDisjointness Conjecture (ST 2 = Õ(n2 )). We expand this conjecture by considering the
tightness of our upper bound for k-Reachability, which then leads to some interesting consequences
with regard to distance oracles.
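To make the recursion of Lemma 3 concrete, the following Python sketch implements it under simplifying assumptions (it classifies a vertex as heavy by its total degree and fills the heavy-heavy tables with a bounded BFS at preprocessing time; names are ours and the code is illustrative only):

from collections import defaultdict

def build_k_reachability(n, edges, k, alpha):
    out, inc = defaultdict(set), defaultdict(set)
    for a, b in edges:
        out[a].add(b)
        inc[b].add(a)

    def is_heavy(v):
        return len(out[v]) + len(inc[v]) > alpha

    def bounded_bfs(u, v, hops):
        # Used only at preprocessing time to fill the heavy-heavy tables.
        frontier = {u}
        for _ in range(hops):
            if v in frontier:
                return True
            frontier |= {w for x in frontier for w in out[x]}
        return v in frontier

    def build(level):
        if level == 0:
            return lambda u, v: u == v
        heavy = [v for v in range(n) if is_heavy(v)]
        table = {(u, v): bounded_bfs(u, v, level) for u in heavy for v in heavy}
        lower = build(level - 1)

        def query(u, v):
            if u == v:
                return True
            if is_heavy(u) and is_heavy(v):
                return table[(u, v)]
            if not is_heavy(u):
                # Expand the at most alpha out-edges of the light source.
                return any(lower(w, v) for w in out[u])
            # Symmetrically, expand the light target's in-edges.
            return any(lower(u, w) for w in inc[v])

        return query

    return build(k)

# Example: on the directed path 0 -> 1 -> 2 -> 3, vertex 2 is reachable from 0
# within 2 edges but vertex 3 is not.
query2 = build_k_reachability(4, [(0, 1), (1, 2), (2, 3)], k=2, alpha=1)
assert query2(0, 2) and not query2(0, 3)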
Conjecture 11. Directed k-Reachability Conjecture. Any data structure for the k-Reachability problem with query time T must use S = Ω̃(n^2/T^{2/(k−1)}) space.
Notice that when k is non-constant then by our upper bound Ω̃(n2 ) space is necessary independent of the query time. This fits nicely with what is currently known about the general question
of Reachability oracles: either we spend n2 space and answer queries in constant time or we do no
preprocessing and then answer queries in linear time. This leads to the following conjecture.
Conjecture 12. Directed Reachability Hypothesis. Any data structure for the Reachability
problem must either use Ω̃(n2 ) space, or linear query time.
The conjecture states that in the general case of Reachability there is no full tradeoff between
space and query time. We believe the conjecture is true even if the path is limited to lengths of
some non-constant number of edges.
6 Distance Oracles and Directed Reachability
There are known lower bounds for constant query time distance oracles based on the SetDisjointness
hypothesis. Specifically, Cohen and Porat [18] showed that stretch-less-than-2 oracles need Ω(n2 )
space for constant queries. Patrascu et al. [39] showed a conditional space lower bound of Ω(m5/3 )
for constant-time stretch-2 oracles. Applying the Strong SetDisjointness conjecture to the same
argument as in [18] we can prove that for stretch-less-than-2 oracles the tradeoff between S (the
space for the oracle) and T (the query time) is by S × T 2 = Ω(n2 ).
Recent effort was taken toward constructing compact distance oracles where we allow non-constant query time. For stretch-2 and stretch-3, Agarwal et al. [10], [9] achieve a space-time tradeoff
of S × T = O(n2 ) and S × T 2 = O(n2 ), respectively, for sparse graphs. Agarwal [8] also showed
many other results for stretch-2 and below. Specifically, Agarwal showed that for any integer k a
stretch-(1+1/k) oracle exhibits the following space-time tradeoff: S × T 1/k = O(n2 ). Agarwal also
showed a stretch-(1+1/(k+0.5)) oracle that exhibits the following tradeoff: S × T 1/(k+1) = O(n2 ).
Finally, Agarwal gave a stretch-(5/3) oracle that achieves a space-time tradeoff of S × T = O(n2 ).
Unfortunately, no lower bounds are known for non-constant query time.
Conditioned on the directed k-Reachability conjecture we prove the following lower bound.
Lemma 4. Assume the directed k-Reachability conjecture holds. Then stretch-less-than-(1 + 2/k)
distance oracles with query time T must use S × T 2/(k−1) = Ω̃(n2 ) space.
Proof. Given a graph G = (V, E) for which we want to preprocess for k-Reachability, we create a
layered graph with k layers where each layer consists of a copy of all vertices of V . Each pair of
neighboring layers is connected by a copy of all edges in E. We omit all directions from the edges.
For every fixed integer k, the layered graph has O(|V |) vertices and O(|E|) edges. Next, notice
that if we construct a distance oracle that can distinguish between pairs of vertices of distance at
most k and pairs of vertices of distance at least k + 2, then we can answer k-Reachability queries.
Consequently, assuming the k-Reachability conjecture we have that S ×T 2/(k−1) = Ω(n2 ) for stretchless-than-(1+2/k) distance oracles (For k = 2 this is exactly the result we get by the SetDisjointness
hypothesis).
⊔
⊓
Notice, that the stretch-(5/3) oracle shown by Agarwal [8] achieves a space-time tradeoff of
S × T = O(n2 ). Our lower bound is very close to this upper bound since it applies for any distance
oracle with stretch-less-than-(5/3), by setting k = 3.
7 SETH and Orthogonal Vectors Space Conjectures
Solving SAT in O(2^n) time, where n is the number of variables in the formula, can easily be done using only O(n) space. However, the question is how we can use space in the case that we have only a partial assignment of R variables and we would like to quickly figure out whether this partial assignment can be completed to a full satisfying assignment or not. On one end, by using just O(n) space we can answer queries in O(2^{n−R}) time. On the other end, we can save the answers to all possible queries using O(2^R) space. It is not clear if there is some sort of a tradeoff in between these
two. A related problem is the problem of Orthogonal Vectors (OV). In this problem one is given
a collection of n vectors of length O(log n) and need to answer if there are two of them which are
orthogonal to one another. A reduction from SETH to OV was shown in [41]. By this reduction
given a k-CNF formula of n variables one can transform it using O(2ǫn ) time to O(2ǫn ) instances
of OV in which the vectors are of length 2f (k, ǫ) log n (for any ǫ > 0, where f (k, ǫ)n is the number
of clauses of each sparse formula represented by one instance of OV). This reduction leads to the
following conjecture regarding OV, which is based on SETH: There is no algorithm that, for every
c ≥ 1, solves the OV problem on n boolean vectors of length c log n in Õ(n2−Ω(1) ) time.
We can consider a data structure variant of the OV problem, which we call OV indexing. Given
a list of n boolean vectors of length c log n we should preprocess them and create a suitable data
structure. Then, we answer queries of the following form: Given a vector v, is there a vector in the
list which is orthogonal to v?
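As a baseline, the linear-space end of the spectrum for OV indexing is immediate; the following Python sketch (ours, for illustration) stores one bitmask per vector and answers a query by a linear scan, i.e., in linear rather than truly sublinear time:

def preprocess_ov(vectors):
    # Linear space: one machine-word bitmask per boolean vector.
    return [int(''.join(map(str, v)), 2) for v in vectors]

def ov_query(masks, v):
    q = int(''.join(map(str, v)), 2)
    # u is orthogonal to v iff their masks share no 1-bit.
    return any(q & u == 0 for u in masks)

# Example: (1, 0, 1) is orthogonal to (0, 1, 0) but not to (1, 1, 0).
masks = preprocess_ov([(1, 1, 0), (0, 1, 0)])
assert ov_query(masks, (1, 0, 1))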
We state the following conjecture which is the space variant of the well-studied OV (time)
conjecture:
Conjecture 13. Orthogonal Vectors Indexing Hypothesis: There is no algorithm for every
c ≥ 1 that solves the OV indexing problem with Õ(n2−Ω(1) ) space and truly sublinear query time.
We note that we believe that the last conjecture is true even if we allow superpolynomial
preprocessing time. Moreover, it seems that it also may be true even for some constant c slightly
larger than 2.
8 Space Requirements for Boolean Matrix Multiplication
Boolean Matrix Multiplication(BMM) is one of the most fundamental problems in Theoretical
Computer Science. The question of whether computing the Boolean product of two Boolean matrices
of size n × n is possible in O(n2 ) time is one of the most intriguing open problems. Moreover,
finding a combinatorial algorithm for BMM taking O(n3−ǫ ) time for some ǫ > 0 is considered to
be impossible to do with current algorithmic techniques.
We focus on the following data structure version of BMM, preprocess two n×n Boolean matrices
A and B, such that given a query (i, j) we can quickly return the value of ci,j where C = {ci,j }
is the Boolean product of A and B. Since storing all possible answers to queries will require Θ(n^2)
space in the worst case, we focus on the more interesting scenario where we have only O(n2−Ω(1) )
space to store the outcome of the preprocessing stage. In case the input matrices are dense (the
number of ones and the number of zeroes are both θ(n2 )) it seems that this can be hard to achieve
as storing the input matrices alone will take θ(n2 ) space. So we consider a complexity model, which
we call the read-only input model, in which storing the input is for free (say on read-only memory),
and the space usage of the data structure is only related to the additional space used. We now
demonstrate that BMM in the read-only input model is equivalent to SetDisjointness.
Lemma 5. BMM in the read-only input model and SetDisjointness are equivalent.
Proof. Given an instance of SetDisjointness let e1 , ..., eN denote the elements in an input instance.
We construct an instance of BMM as follows. Assume without loss of generality that all sets are
not empty, and so m ≤ N . Row i in matrix A represents a set Si while each column j represents
element ej . An entry ai,j equals 1 if ej ∈ Si and equals zero otherwise. We also set B = AT . We
also pad each of the matrices with zeroes so their size will be N × N . Clearly, ci,j in matrix C,
which is the product of A and B, is an indicator whether Si ∩ Sj = ∅.
In the opposite direction, given two matrices A and B having m ones we view each row i of A
as a characteristic vector of a set Si (the elements in the set correspond to the ones in that row)
and each column j of B as a characteristic vector of a set Sj+n (the elements in the set corresponds
to the ones in that column). Thus, the instance of SetDisjointness that has been created consists of 2n sets with O(m) elements. The value of an element ci,j in the product of A and B can be
determined by the intersection of Si and Sj+n .
⊔
⊓
Another interesting connection between BMM and the other problems discussed in this paper
is the connection to the problem of calculating the transitive closure of a graph, which is the
general directed reachability mentioned above. It is well-known that BMM and transitive closure
are equivalent in terms of time as shown by Fischer and Meyer [22]. But what happens if we consider
space? It is easy to see that BMM can be reduced to transitive-closure (directed reachability) even
in terms of space. However, the opposite direction is not clear as the reduction for time involves
recursive squaring, which cannot be implemented efficiently in terms of space.
Another fascinating variant of BMM is the one in which an n × n matrix A is input for preprocessing and afterwards we need to calculate the result of multiplying it by a given query vector
v. This can be seen as the space variant of the celebrated OMV (online matrix-vector) problem
discussed by Henzinger et al. [25]. It is interesting to see if one can make use of a data structure so
that n consecutive vector queries can be answered in Õ(n3−Ω(1) ) time.
9 Applications
We now provide applications of our rich framework for proving conditional space lower bounds. In
the following subsections we consider both static and dynamic problems.
9.1 Static Problems
Edge Triangles The first example we consider is in regards to triangles. In a problem that is called
edge triangles detection, we are given a graph G = (V, E) to preprocess and then we are given an
edge (v, u) as a query and need to answer whether (u, v) belongs to a triangle. In a reporting variant
of this problem, called edge triangles we need not only to answer if (u, v) belongs to a triangle but
also report all triangles it belongs to. This problem was considered in [12].
It can be easily shown that these problems are equivalent to SetDisjointness and SetIntersection.
We just construct a set Sv per each vertex v containing all its neighbors. Querying if there is a
triangle containing the edge (u, v) is equivalent to asking if Sv ∩ Su is empty or not. Considering
the reporting variant, reporting all triangles containing (u, v) is thus equivalent to finding all the
elements in Sv ∩ Su . Therefore, we get the following results:
Theorem 5. Assume the Strong SetDisjointness conjecture. Suppose there is a data structure for
edge triangles detection problem for a graph G = (V, E), with S space and query time T . Then
S = Ω̃(|E|2 /T 2 ).
Theorem 6. Assume the Strong SetIntersection conjecture. Suppose there is a data structure for
edge triangles problem for a graph G = (V, E), with S space and query time O(T + op) time, where
op is the size of the output of the query. Then S = Ω̃(|E|2 /T ).
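The neighbor-set reduction described above can be stated directly in code; the following Python sketch (illustrative, with names of our choosing) answers both the detection and the reporting variants, the common neighbors of u and v being exactly the triangle apexes:

def build_neighbor_sets(edges):
    # S_v is the set of neighbors of v in the (undirected) input graph.
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    return neighbors

def edge_in_triangle(neighbors, u, v):
    # Detection variant: (u, v) lies on a triangle iff S_u and S_v intersect.
    return not neighbors.get(u, set()).isdisjoint(neighbors.get(v, set()))

def edge_triangles(neighbors, u, v):
    # Reporting variant: each common neighbor is the apex of one triangle.
    return neighbors.get(u, set()) & neighbors.get(v, set())

# Example: the triangle {0, 1, 2} plus a pendant edge (2, 3).
nbrs = build_neighbor_sets([(0, 1), (1, 2), (0, 2), (2, 3)])
assert edge_in_triangle(nbrs, 0, 1) and not edge_in_triangle(nbrs, 2, 3)
assert edge_triangles(nbrs, 0, 1) == {2}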
Histogram Indexing A histogram, also called a Parikh vector, of a string T over alphabet Σ
is a |Σ|-length vector containing the character count of T . For example, for T = aaccbacab the
histogram is v(T ) = (4, 2, 3). In the histogram indexing problem we preprocess an N -length string
T to support the following queries: given a query histogram v, return whether there is a substring
T ′ of T such that v(T ′ ) = v.
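For intuition, a deliberately naive solution stores the histogram of every substring; the following Python sketch (ours, for illustration) does exactly that, giving constant-time queries at the cost of Θ(N^2) space, which is precisely the tradeoff the data structures discussed below improve upon:

from collections import Counter

def build_histogram_index(t):
    index = set()
    for i in range(len(t)):
        counts = Counter()
        for j in range(i, len(t)):
            counts[t[j]] += 1
            # Store an immutable snapshot of the current substring's histogram.
            index.add(frozenset(counts.items()))
    return index

def histogram_query(index, v):
    # v maps each character to its required count, e.g. {'a': 4, 'b': 2, 'c': 3}.
    return frozenset(v.items()) in index

# Example from the text: T = "aaccbacab" itself has histogram (4, 2, 3).
idx = build_histogram_index("aaccbacab")
assert histogram_query(idx, {'a': 4, 'b': 2, 'c': 3})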
This problem has received much attention in the recent years. The case where the alphabet size
is 2 (binary alphabet) was especially studied. A simple algorithm for this case solves the problem in
O(N 2 ) preprocessing time and constant query time. There was a concentrated effort to reduce the
quadratic preprocessing time for some years. However, an algorithm with preprocessing time that
is O(N 2−ǫ ) for some ǫ > 0 was unknown until a recent breakthrough by Chan and Lewenstein [16].
They showed an algorithm with O(N 1.859 ) preprocessing time and constant query time. For alphabet
size ℓ they obtained an algorithm with Õ(N 2−δ ) preprocessing time and Õ(N 2/3+δ(ℓ+13)/6 ) query
time for 0 ≤ δ ≤ 1. Regarding space complexity, it is well known how to solve histogram indexing
for binary alphabet using linear space and constant query time. For alphabet size ℓ, Kociumaka
et al. [30] presented a data structure with Õ(N 2−δ ) space and Õ(N δ(2ℓ−1) ) query time. Chan and
Lewenstein [16] improved their result and showed a solution by a data structure using Õ(N 2−δ )
space with only Õ(N δ(ℓ+1)/2 ) query time.
Amir et al. [11] proved conditional lower bound on the tradeoff between the preprocessing and
query time of the histogram indexing problem. Very recently, their lower bound was improved and
generalized by Goldstein et al. [24]. Following the reduction by Goldstein et al. [24] and utilizing
our framework for conditional space lower bounds, we obtain the following lower bound on the
tradeoff between the space and query time of histogram indexing:
Theorem 7. Assume the Strong 3SUM-Indexing conjecture holds. The histogram indexing problem for a string of length N and constant alphabet size ℓ ≥ 3 cannot be solved with O(N^{2−2(1−α)/(ℓ−1−α)−Ω(1)}) space and O(N^{1−(1+α(ℓ−3))/(ℓ−1−α)−Ω(1)}) query time, for any 0 ≤ α ≤ 1.
Proof. We use the same reduction as in [24]. This time it will be used to reduce an instance
of 3SUM-Indexing (on 2n numbers) to histogram indexing, instead of reducing from an instance
of 3SUM. The space consumed by the reduction is dominated by the space needed to prepare a histogram indexing instance with string length N = O(n^{(ℓ−2−α)/(ℓ−3)}) for histogram queries. The number
of histogram queries we do for each query number z of the 3SUM-Indexing instance is O(nα ). The
query time is dominated by the time required by these queries. Let S(N, ℓ) denote the space required
by a data structure for histogram indexing on N -length string over alphabet size ℓ and let Q(N, ℓ)
denote the query time for the same parameters. Assuming the strong 3SUM-Indexing conjecture and
following our reduction, we have that S(N, ℓ) = O(n2−Ω(1) ) and Q(N, ℓ) = O(n1−α−Ω(1) ). Plugging
in the value of n in terms of N we get the required lower bound.
⊔
⊓
If we plug δ = 2(1−α)/(ℓ−1−α) into the previous theorem, we get that if the strong 3SUM-Indexing conjecture is true we cannot have a solution for histogram indexing with Õ(N^{2−δ}) space and Õ(N^{δ(ℓ−2)/2}) query time. This lower bound is very close to the upper bound obtained by Chan and Lewenstein [16] as there is only a gap of (3/2)δ in the power of N in the query time. Moreover, if the value of δ becomes
close to 0 (so the value of α is close to 1) the upper bound and the lower bound get even closer
to each other. This is very interesting, as it means that to get truly subquadratic space solution
for histogram indexing for alphabet size greater than 2, we will have to spend polynomial query
16
time. This is in stark contrast to the simple linear space solution for histogram indexing over binary
alphabets that supports queries in constant time.
Following the reductions presented in [31] from SetIntersection or SetDisjointness to several other
problems, we are able to show that, based on the Strong SetDisjointness conjecture, the same problems admit space/query-time lower bounds. For the sake of completeness, we reproduce these reductions in the next three subsections and show that they yield the space lower bounds as needed.
Distance Oracles for Colors. Let P be a set of points in some metric space with distance function
d(·, ·), where each point p ∈ P has an associated set of colors C(p) ⊂ [ℓ]. For c ∈ [ℓ] we denote by
P(c) the set of points from P with color c. We generalize d so that the distance between a point p
Vertex-Labeled Graphs problem [17,26] we are interested in preprocessing P so that given a query
of a point q and a color c we can return d(q, c) (or some approximation). We further generalize d
so that the distance between two colors c and c′ is denoted by d(c, c′ ) = minp∈P (c) {d(p, c′ )}. In the
Distance Oracle for Colors problem we are interested in preprocessing P so that given two query
colors c and c′ we can return d(c, c′ ). In the Approximate Distance Oracle for Colors problem we
are interested in preprocessing P and some constant α > 1 so that given two query colors c and c′
we can return some value dˆ such that d(c, c′ ) ≤ dˆ ≤ αd(c, c′ ).
We show evidence of the hardness of the Distance Oracle for Colors problem and the Approximate Distance Oracle for Colors problem by focusing on the 1-D case.
Theorem 8. Assume the Strong SetDisjointness conjecture. Suppose there is a 1-D Distance Oracle
for Colors with constant stretch α ≥ 1 for an input array of size N with S space and query time T .
Then S = Ω̃(N 2 /T 2 ).
Proof. We reduce SetDisjointness to the Colored Distance problem as follows. For each set S_i we
define a unique color c_i. For an element e ∈ U (U is the universe of the elements in our sets) let
|e| denote the number of sets containing e, and notice that Σ_{e∈U} |e| = N. Since each element in U
appears in at most m sets, we partition U into Θ(log m) parts where the ith part P_i contains all of
the elements e ∈ U such that 2^{i−1} < |e| ≤ 2^i. An array X_i is constructed from P_i = {e_1, · · ·, e_{|P_i|}}
by assigning an interval I_j = [f_j, ℓ_j] in X_i to each e_j ∈ P_i such that no two intervals overlap.
Every interval I_j contains all the colors of the sets that contain e_j. This implies that |I_j| = |e_j| ≤ 2^i.
Furthermore, for each e_j and e_{j+1} we separate I_j from I_{j+1} with a dummy color d listed 2^i + 1
times at locations [ℓ_j + 1, f_{j+1} − 1].
We can now simulate a SetDisjointness query on a pair (S_i, S_j) by performing a colored distance
query on colors c_i and c_j in each of the Θ(log m) arrays. There exists a P_i for which the two points
returned from the query are at distance strictly less than 2^i + 1 if and only if there is an element in
U that is contained in both S_i and S_j. The space usage is Õ(S) and the query time is Õ(T). The
rest follows directly from the Strong SetDisjointness conjecture.
Finally, notice that the lower bound also holds for the approximate case, as for any constant α
the reduction can overcome the α approximation by separating intervals using α2^i + 1 listings of
d. ⊓⊔
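To make the construction concrete, the following sketch builds one such array for a single part P_i with bound B = 2^i and answers a disjointness query through a brute-force colored distance computation. It is an illustration written for this presentation, not code from [31] or from this paper; the tiny instance, the names build_array and color_dist, and the use of a single uniform bound instead of the Θ(log m) partition are our simplifications.

```c
/* Each element gets a block containing the colors of the sets that contain
 * it; blocks are separated by B+1 copies of a dummy color.  S_i and S_j
 * intersect iff colors i and j appear within distance < B+1 of each other. */
#include <stdio.h>
#include <stdlib.h>

enum { M = 3, U = 4, DUMMY = M };       /* m sets over a universe of u elements */

/* membership[i][e] == 1 iff element e belongs to set S_i */
static const int membership[M][U] = {
    {1, 0, 1, 0},   /* S_0 = {e0, e2} */
    {0, 1, 1, 0},   /* S_1 = {e1, e2} */
    {0, 1, 0, 1},   /* S_2 = {e1, e3} */
};

static int build_array(int B, int *X) {           /* returns the array length */
    int len = 0;
    for (int e = 0; e < U; e++) {
        for (int i = 0; i < M; i++)
            if (membership[i][e]) X[len++] = i;   /* colors of element e */
        for (int k = 0; k <= B; k++) X[len++] = DUMMY;  /* B+1 separators */
    }
    return len;
}

/* Brute-force 1-D colored distance, standing in for the oracle. */
static int color_dist(const int *X, int len, int c1, int c2) {
    int best = len;
    for (int a = 0; a < len; a++)
        for (int b = 0; b < len; b++)
            if (X[a] == c1 && X[b] == c2 && abs(a - b) < best)
                best = abs(a - b);
    return best;
}

int main(void) {
    int X[64], B = 2;    /* uniform bound: every element lies in at most 2 sets */
    int len = build_array(B, X);
    /* S_0 and S_1 share e2, while S_0 and S_2 are disjoint. */
    printf("%s\n", color_dist(X, len, 0, 1) < B + 1 ? "intersect" : "disjoint");
    printf("%s\n", color_dist(X, len, 0, 2) < B + 1 ? "intersect" : "disjoint");
    return 0;
}
```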
Document Retrieval Problems with Multiple Patterns. In the Document Retrieval problem [35] we are interested in preprocessing a collection of documents X = {D_1, · · ·, D_k} where
N = Σ_{D∈X} |D|, so that given a pattern P we can quickly report all of the documents that contain
P . Typically, we are interested in run time that depends on the number of documents that contain
P and not in the total number of occurrences of P in the entire collection of documents. In the
Two Patterns Document Retrieval problem we are given two patterns P1 and P2 during query time,
and wish to report all of the documents that contain both P1 and P2 . We consider two versions of
the Two Patterns Document Retrieval problem. In the decision version we are only interested in
detecting if there exists a document that contains both patterns. In the reporting version we are
interested in enumerating all documents that contain both patterns.
All known solutions for the Two Patterns Document Retrieval problem with nontrivial preprocessing use at least Ω(√N) time per query [18,28,29,35]. In a recent paper, Larsen, Munro, Nielsen,
and Thankachan [33] show lower bounds for the Two Patterns Document Retrieval problem conditioned on the hardness of boolean matrix multiplication.
It is straightforward to see that the appropriate versions of the two pattern document retrieval
problem solve the corresponding versions of the SetDisjointness and SetIntersection problems. In
particular, this can be obtained by creating an alphabet Σ = F (one character for each set),
and for each e ∈ U we create a document that contains the characters corresponding to the sets
that contain e. The intersection between S_i and S_j directly corresponds to all the documents that
contain both the character of S_i and the character of S_j. Thus, all of the lower bound tradeoffs for intersection problems are lower
bound tradeoffs for the two pattern document retrieval problem.
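The construction is simple enough to spell out. The following sketch (our illustration, with a made-up three-set instance) builds the documents and decides one two-pattern query by scanning; the scan is of course not the indexing data structure itself, only a demonstration of what the reduction feeds into it.

```c
/* One document per universe element; one character per set. */
#include <stdio.h>

enum { M = 3, U = 4 };
static const int membership[M][U] = {   /* membership[i][e] == 1 iff e is in S_i */
    {1, 0, 1, 0},                       /* S_0 = {e0, e2} */
    {0, 1, 1, 0},                       /* S_1 = {e1, e2} */
    {0, 1, 0, 1},                       /* S_2 = {e1, e3} */
};

/* Decision version of a two-pattern query with P1 = 'a'+i and P2 = 'a'+j:
 * some document contains both characters iff S_i and S_j intersect. */
static int some_doc_contains_both(int i, int j) {
    for (int e = 0; e < U; e++)
        if (membership[i][e] && membership[j][e]) return 1;
    return 0;
}

int main(void) {
    for (int e = 0; e < U; e++) {       /* print the constructed documents */
        printf("D_%d = \"", e);
        for (int i = 0; i < M; i++)
            if (membership[i][e]) putchar('a' + i);
        printf("\"\n");
    }
    printf("S_0,S_1: %d   S_0,S_2: %d\n",
           some_doc_contains_both(0, 1), some_doc_contains_both(0, 2)); /* 1 0 */
    return 0;
}
```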
Theorem 9. Assume the Strong SetDisjointness conjecture. Suppose there is a data structure for
the decision version of the Two Patterns Document Retrieval problem for a collection of documents
X where N = Σ_{D∈X} |D|, with S space and query time T. Then S = Ω̃(N^2/T^2).

Theorem 10. Assume the Strong SetIntersection conjecture. Suppose there is a data structure for
the reporting version of the Two Patterns Document Retrieval problem for a collection of documents
X where N = Σ_{D∈X} |D|, with S space and query time O(T + op) where op is the size of the output.
Then S = Ω̃(N^2/T).
Forbidden Pattern Document Retrieval. In the Forbidden Pattern Document Retrieval problem [21] we are also interested in preprocessing the collection of documents, but this time, given a
query P+ and P−, we are interested in reporting all of the documents that contain P+ and do not
contain P−. Here too we consider a decision version and a reporting version.
All known solutions for the Forbidden Pattern Document Retrieval problem with nontrivial
preprocessing use at least Ω(√N) time per query [21,29]. In a recent paper, Larsen, Munro, Nielsen,
and Thankachan [33] show lower bounds for the Forbidden Pattern Document Retrieval problem
conditioned on the hardness of boolean matrix multiplication.
Theorem 11. Assume the Strong 3SUM-Indexing conjecture. Suppose there is a data structure
for the decision version of the Forbidden Pattern Document Retrieval problem for a collection of
documents X where N = Σ_{D∈X} |D|, with S space and query time T. Then S = Ω̃(N^2/T^4).

Proof. We will make use of the hard instance of SetDisjointness that was used in order to prove
Theorem 1, and reduce this specific hard instance to the decision version of the Forbidden Pattern
Document Retrieval problem. Recall that the size of this hard instance is Õ(n^{2−γ}), the universe
size is O(n^{2−2γ}), the number of sets is Õ(n), and we need to perform Õ(n^γ) SetDisjointness queries
in order to answer one 3SUM-Indexing query.
Similar to the proof of Theorem 9 we set Σ = F (one character for each set). However, this
time for each e we create a document that contains all the characters corresponding to sets B_{i,j}
that contain e and all the characters corresponding to sets C_{i,j} that do not contain e.
The reason that we prove our lower bound based on the Strong 3SUM-Indexing conjecture and
not on the Strong SetDisjointness conjecture is that the size of our instance can become rather
large relative to N (as the number of sets that do not contain an element can be extremely large).
Thus, the size of the Forbidden Pattern Document Retrieval instance is N = Θ(n^{3−2γ}), and the
number of queries to answer is Θ(n^γ). Notice that the size of the instance enforces γ to be strictly
larger than 1/2. By the Strong 3SUM-Indexing conjecture, either S = s_{3SI} = Ω̃(n^2) = Ω̃(N^{2/(3−2γ)}) or
O(n^γ T) ≥ t_{3SI} ≥ Ω̃(n), and so T ≥ Ω̃(N^{(1−γ)/(3−2γ)}). For any constant ǫ > 0, if the Forbidden Pattern
Document Retrieval data structure uses Θ̃(N^{2/(3−2γ)−ǫ}) space, then
S · T^4 = Ω̃(N^{2/(3−2γ) − ǫ + (4−4γ)/(3−2γ)}) = Ω̃(N^{2−ǫ}). Since this holds for any ǫ > 0 it must be that S · T^4 = Ω̃(N^2). ⊓⊔
Notice that if we only allow linear space then we obtain a query time lower bound of Ω(N^{1/4−o(1)}).
Theorem 12. Assume the Strong 3SUM-Indexing conjecture. Suppose there is a data structure
for the reporting version of the Forbidden Pattern Document Retrieval problem for a collection of
documents X where N = Σ_{D∈X} |D|, with S space and query time O(T + op) where op is the size
of the output. Then S = Ω̃(N^2/T).

Proof. Our proof is similar to the proof of Theorem 11, only this time we use the hard instance
of SetIntersection from Theorem 2. So the number of queries is Õ(n^γ), the size of the universe is
O(n^{1+δ−γ}), the number of sets is Õ(n^{(1+δ+γ)/2}), and the total size of the output is Θ(n^{2−δ}).
Thus, the size of the Forbidden Pattern Document Retrieval instance is
N = Θ(n^{1+δ−γ} · n^{(1+δ+γ)/2}) = Θ(n^{(3+3δ−γ)/2}), and the number of queries to answer is Θ(n^γ). Notice that the size of the instance enforces 3δ − γ < 1. By the Strong 3SUM-Indexing conjecture, either S = s_{3SI} = Ω̃(n^2) = Ω̃(N^{4/(3+3δ−γ)})
or O(n^γ T) ≥ t_{3SI} ≥ Ω̃(n), and so T ≥ Ω̃(N^{(2−2γ)/(3+3δ−γ)}). For any constant ǫ > 0, if the Forbidden Pattern
Document Retrieval data structure uses Θ̃(N^{4/(3+3δ−γ)−ǫ}) space, then
S · T = Ω̃(N^{4/(3+3δ−γ) − ǫ + (2−2γ)/(3+3δ−γ)}) = Ω̃(N^{2 − 6δ/(3+3δ−γ) − ǫ}). Since this holds for any ǫ > 0, and since we can make 6δ/(3+3δ−γ) as small as we like,
it must be that S · T = Ω̃(N^2). ⊓⊔

9.2 Dynamic Problems
We show space lower bounds for dynamic problems. Lower bounds for these problems from the time
perspective were considered by Abboud and Vassilevska-Williams [5]. The first dynamic problem
we consider is st-SubConn, which is defined as follows. Given an undirected graph G = (V, E), two
fixed vertices s and t and a set S ⊆ V, answer whether s and t are connected using vertices from
S only. Vertices can be added to or removed from S.
The SetDisjointness problem can be reduced to st-SubConn. Given an instance of SetDisjointness
we create an undirected graph G = (V, E) as follows. We first create two unique vertices s and t.
Then, for each set S_i we create two vertices v_i and u_i, and for each element e_j we create a vertex w_j.
Moreover, we define E = {(v_i, w_j) | e_j ∈ S_i} ∪ {(u_i, w_j) | e_j ∈ S_i} ∪ {(s, v_i) | 1 ≤ i ≤ m} ∪ {(u_i, t) | 1 ≤
i ≤ m}. Initially the set S contains s and t and all the w_j's. Given a query (i, j) asking about the
emptiness of S_i ∩ S_j, we add v_i and u_j to the set S. Then, we ask if s and t are connected; if so, we
know that S_i ∩ S_j is not empty, as the only way to get from s to t is through v_i and u_j and some
node representing a common element of S_i and S_j. If s and t are not connected then it is clear
that the intersection is empty. After the query we remove the two vertices we have added so that other
queries can be handled properly. By this construction we get the following result:
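A minimal sketch of this graph construction and query procedure is given below (our illustration, with a hypothetical three-set instance, and a plain DFS standing in for the dynamic connectivity structure); the function and vertex names are ours.

```c
/* SetDisjointness -> st-SubConn: vertices s, t, v_i, u_i per set, w_j per
 * element; a query (i,j) temporarily activates v_i and u_j. */
#include <stdio.h>

enum { M = 3, U = 4 };
enum { S_NODE, T_NODE, V0, U0 = V0 + M, W0 = U0 + M, NV = W0 + U };

static int adj[NV][NV], in_S[NV];

static void add_edge(int a, int b) { adj[a][b] = adj[b][a] = 1; }

static int connected(int a, int b) {        /* DFS restricted to S */
    int stack[NV], seen[NV] = {0}, top = 0;
    stack[top++] = a; seen[a] = 1;
    while (top) {
        int x = stack[--top];
        if (x == b) return 1;
        for (int y = 0; y < NV; y++)
            if (adj[x][y] && !seen[y] && in_S[y]) { seen[y] = 1; stack[top++] = y; }
    }
    return 0;
}

int main(void) {
    const int membership[M][U] = { {1,0,1,0}, {0,1,1,0}, {0,1,0,1} };
    for (int i = 0; i < M; i++) {
        add_edge(S_NODE, V0 + i);           /* (s, v_i) */
        add_edge(U0 + i, T_NODE);           /* (u_i, t) */
        for (int e = 0; e < U; e++)
            if (membership[i][e]) { add_edge(V0 + i, W0 + e); add_edge(U0 + i, W0 + e); }
    }
    in_S[S_NODE] = in_S[T_NODE] = 1;        /* initially S = {s, t, w_1, ..., w_u} */
    for (int e = 0; e < U; e++) in_S[W0 + e] = 1;

    int i = 0, j = 2;                       /* query: is S_0 ∩ S_2 empty? */
    in_S[V0 + i] = in_S[U0 + j] = 1;        /* add v_i and u_j to S */
    printf("S_%d and S_%d %s\n", i, j,
           connected(S_NODE, T_NODE) ? "intersect" : "are disjoint");
    in_S[V0 + i] = in_S[U0 + j] = 0;        /* restore S for later queries */
    return 0;
}
```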
Theorem 13. Assume the Strong SetDisjointness conjecture. Suppose there is a data structure for
st-SubConn problem for a graph G = (V, E), with S space and update and query time T . Then
S = Ω̃(|E|2 /T 2 ).
There are other dynamic problems that st-SubConn can be efficiently reduced to, as shown by
Abboud and Vassilevska-Williams [5]. This includes the following three problems:
Problem 6. (s,t)-Reachability (st-Reach). Maintain a directed graph G = (V, E) subject to
edge insertions and deletions, so that queries about the reachability of fixed vertices s and t can be
answered quickly.
Problem 7. Bipartite Perfect Matching (BPMatch). Preprocess and maintain an undirected bipartite graph G = (V, E) subject to edge insertions and deletions, so that we can quickly answer if
the graph has a perfect matching.

Problem 8. Strong Connectivity (SC). Preprocess and maintain a directed graph G = (V, E)
subject to edge insertions and deletions, so that we can quickly answer if the graph is strongly
connected.
Using our last theorem and the reductions by Abboud and Vassilevska-Williams [5], and noting that
they do not affect the space usage, we get the following:
Theorem 14. Assume the Strong SetDisjointness conjecture. Suppose there is a data structure for
st-Reach/ BPMatch/ SC problem for a graph G = (V, E), with S space and update and query time
T . Then S = Ω̃(|E|2 /T 2 ).
We can get a better lower bound for these three problems on sparse graphs based on the Directed
Reachability conjecture. Given a sparse graph G = (V, E) as an instance of directed reachability, we
can reduce it to an instance of st-Reach by just adding two special nodes s and t to the graph. Then,
we can answer queries of the form "Is v reachable from u?" by inserting the two edges (s, u) and (v, t)
and asking if t is reachable from s. After the query we can restore the initial state by deleting these
two edges. Thus, by using the reductions from st-Reach to BPMatch and SC as shown in [5], we
get the following hardness result:
Theorem 15. Assume the Directed Reachability conjecture. Any data structure for the st-Reach/
BPMatch/ SC problem on sparse graphs cannot have Õ(n^{1−Ω(1)}) update and query time and
Õ(n^{2−Ω(1)}) space.
References
1. Amir Abboud, Arturs Backurs, Thomas Dueholm Hansen, Virginia Vassilevska Williams, and Or Zamir. Subtree
isomorphism revisited. In Proc. of 27th ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 1256–1271,
2016.
2. Amir Abboud, Arturs Backurs, and Virginia Vassilevska Williams. If the current clique algorithms are optimal,
so is Valiant’s parser. 55th IEEE Annual Symposium on Foundations of Computer Science, FOCS, pages 98–117,
2015.
3. Amir Abboud, Arturs Backurs, and Virginia Vassilevska Williams. Quadratic-time hardness of LCS and other
sequence similarity measures. 55th IEEE Annual Symposium on Foundations of Computer Science, FOCS, pages
59–78, 2015.
4. Amir Abboud, Fabrizio Grandoni, and Virginia Vassilevska Williams. Subcubic equivalences between graph
centrality problems, APSP and diameter. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium
on Discrete Algorithms, SODA 2015, San Diego, CA, USA, January 4-6, 2015, pages 1681–1697, 2015.
5. Amir Abboud and Virginia Vassilevska Williams. Popular conjectures imply strong lower bounds for dynamic
problems. In 55th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2014, Philadelphia,
PA, USA, October 18-21, 2014, pages 434–443, 2014.
6. Amir Abboud, Virginia Vassilevska Williams, and Oren Weimann. Consequences of faster alignment of sequences.
In Automata, Languages, and Programming - 41st International Colloquium, ICALP 2014, Copenhagen, Denmark, July 8-11, 2014, Proceedings, Part I, pages 39–51, 2014.
7. Amir Abboud, Virginia Vassilevska Williams, and Huacheng Yu. Matching triangles and basing hardness on an
extremely popular conjecture. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of
Computing, STOC 2015, Portland, OR, USA, June 14-17, 2015, pages 41–50, 2015.
8. Rachit Agarwal. The space-stretch-time tradeoff in distance oracles. In Algorithms - ESA 2014 - 22th Annual
European Symposium on Algorithms, Wroclaw, Poland, September 8-10, 2014. Proceedings, pages 49–60, 2014.
9. Rachit Agarwal, Brighten Godfrey, and Sariel Har-Peled. Faster approximate distance queries and compact
routing in sparse graphs. CoRR, abs/1201.2703, 2012.
10. Rachit Agarwal, Philip Brighten Godfrey, and Sariel Har-Peled. Approximate distance queries and compact routing in sparse graphs. In INFOCOM 2011. 30th IEEE International Conference on Computer Communications,
pages 1754–1762, 2011.
11. Amihood Amir, Timothy M. Chan, Moshe Lewenstein, and Noa Lewenstein. On hardness of jumbled indexing. In
Automata, Languages, and Programming - 41st International Colloquium, ICALP 2014, Copenhagen, Denmark,
July 8-11, 2014, Proceedings, Part I, pages 114–125, 2014.
12. Amihood Amir, Tsvi Kopelowitz, Avivit Levy, Seth Pettie, Ely Porat, and B. Riva Shalom. Mind the gap. CoRR,
abs/1503.07563, 2015.
13. Arturs Backurs and Piotr Indyk. Edit distance cannot be computed in strongly subquadratic time (unless SETH
is false). In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, STOC 2015,
Portland, OR, USA, June 14-17, 2015, pages 51–58, 2015.
14. Karl Bringmann. Why walking the dog takes time: Frechet distance has no strongly subquadratic algorithms
unless SETH fails. In 55th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2014, Philadelphia, PA, USA, October 18-21, 2014, pages 661–670, 2014.
15. Karl Bringmann and Marvin Künnemann. Quadratic conditional lower bounds for string problems and dynamic
time warping. 55th IEEE Annual Symposium on Foundations of Computer Science, FOCS, 2015.
16. Timothy M. Chan and Moshe Lewenstein. Clustered integer 3SUM via additive combinatorics. In Proceedings
of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, STOC 2015, Portland, OR, USA,
June 14-17, 2015, pages 31–40, 2015.
17. Shiri Chechik. Improved distance oracles and spanners for vertex-labeled graphs. In ESA 2012, pages 325–336,
2012.
18. Hagai Cohen and Ely Porat. Fast set intersection and two-patterns matching. Theor. Comput. Sci., 411(40-42):3795–3800, 2010.
19. Hagai Cohen and Ely Porat. On the hardness of distance oracle for sparse graph. CoRR, abs/1006.1117, 2010.
20. M. Dietzfelbinger. Universal hashing and k-wise independent random variables via integer arithmetic without
primes. In Proceedings 13th Annual Symposium on Theoretical Aspects of Computer Science (STACS), pages
569–580, 1996.
21. Johannes Fischer, Travis Gagie, Tsvi Kopelowitz, Moshe Lewenstein, Veli Mäkinen, Leena Salmela, and Niko
Välimäki. Forbidden patterns. In LATIN, pages 327–337, 2012.
22. Michael J. Fischer and Albert R. Meyer. Boolean matrix multiplication and transitive closure. In 12th Annual
Symposium on Switching and Automata Theory, East Lansing, Michigan, USA, October 13-15, 1971, pages 129–
131, 1971.
23. A. Gajentaan and M. H. Overmars. On a class of O(n2 ) problems in computational geometry. Comput. Geom.,
5:165–185, 1995.
24. Isaac Goldstein, Tsvi Kopelowitz, Moshe Lewenstein, and Ely Porat. How hard is it to find (honest) witnesses?
In European Symposium on Algorithms, ESA 2016, pages 45:1–45:16, 2016.
25. Monika Henzinger, Sebastian Krinninger, Danupon Nanongkai, and Thatchaphol Saranurak. Unifying and
strengthening hardness for dynamic problems via the online matrix-vector multiplication conjecture. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, STOC 2015, Portland, OR,
USA, June 14-17, 2015, pages 21–30, 2015.
26. Danny Hermelin, Avivit Levy, Oren Weimann, and Raphael Yuster. Distance oracles for vertex-labeled graphs.
In Automata, Languages, and Programming - 38th International Colloquium, ICALP (2), pages 490–501, 2011.
27. Jacob Holm, Eva Rotenberg, and Mikkel Thorup. Planar reachability in linear space and constant time. CoRR,
abs/1411.5867, 2014.
28. Wing-Kai Hon, Rahul Shah, Sharma V. Thankachan, and Jeffrey Scott Vitter. String retrieval for multi-pattern
queries. In SPIRE, pages 55–66, 2010.
29. Wing-Kai Hon, Rahul Shah, Sharma V. Thankachan, and Jeffrey Scott Vitter. Document listing for queries with
excluded pattern. In CPM, pages 185–195, 2012.
30. Tomasz Kociumaka, Jakub Radoszewski, and Wojciech Rytter. Efficient indexes for jumbled pattern matching
with constant-sized alphabet. In ESA, pages 625–636, 2013.
31. Tsvi Kopelowitz, Seth Pettie, and Ely Porat. Higher lower bounds from the 3SUM conjecture. In Proceedings of
the 27th ACM-SIAM Symposium on Discrete Algorithms (SODA), 2016.
32. Tsvi Kopelowitz, Seth Pettie, and Ely Porat. Dynamic set intersection. In Proceedings 14th Int’l Symposium on
Algorithms and Data Structures (WADS), pages 470–481, 2015.
33. Kasper Green Larsen, J. Ian Munro, Jesper Sindahl Nielsen, and Sharma V. Thankachan. On hardness of several
string indexing problems. In CPM, pages 242–251, 2014.
34. Kasper Green Larsen, J. Ian Munro, Jesper Sindahl Nielsen, and Sharma V. Thankachan. On hardness of several
string indexing problems. Theor. Comput. Sci., 582:74–82, 2015.
35. S. Muthukrishnan. Efficient algorithms for document retrieval problems. In SODA, pages 657–666, 2002.
36. Mihai Patrascu. Towards polynomial lower bounds for dynamic problems. In Proceedings of the 42nd ACM
Symposium on Theory of Computing, STOC 2010, Cambridge, Massachusetts, USA, 5-8 June 2010, pages 603–
610, 2010.
37. Mihai Patrascu. Unifying the landscape of cell-probe lower bounds. SIAM J. Comput., 40(3):827–847, 2011.
38. Mihai Patrascu and Liam Roditty. Distance oracles beyond the Thorup-Zwick bound. SIAM J. Comput.,
43(1):300–311, 2014.
39. Mihai Patrascu, Liam Roditty, and Mikkel Thorup. A new infinity of distance oracles for sparse graphs. In 53rd
Annual IEEE Symposium on Foundations of Computer Science, FOCS 2012, New Brunswick, NJ, USA, October
20-23, 2012, pages 738–747, 2012.
40. Mihai Patrascu and Ryan Williams. On the possibility of faster SAT algorithms. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2010, Austin, Texas, USA, January 17-19,
2010, pages 1065–1075, 2010.
41. Ryan Williams. A new algorithm for optimal 2-constraint satisfaction and its implications. Theor. Comput. Sci.,
348(2-3):357–365, 2005.
42. Virginia Vassilevska Williams and Ryan Williams. Subcubic equivalences between path, matrix and triangle
problems. In 51st Annual IEEE Symposium on Foundations of Computer Science, FOCS 2010, October 23-26,
2010, Las Vegas, Nevada, USA, pages 645–654, 2010.
Appendix

A  Sketch of the Main Results

[Figure 1 appears here. It relates the conjectures (rectangles): Strong (k+1)-SUM Indexing, (k+1)-SUM Indexing, Strong 3SUM Indexing, 3SUM Indexing, Strong k-Set Disjointness, k-Set Disjointness, Strong Set Disjointness, Set Disjointness, Strong Set Intersection, Directed Reachability, k-Reachability, 2-Reachability, Boolean Matrix Multiplication, and Orthogonal Vectors and SETH, to the problems shown hard based on them (ovals), including Less-than-(1+2/k) Distance Oracles, dynamic problems and static problems.]

Fig. 1. Space conjectures and the connections between them as shown in this paper. Rectangles represent conjectures,
while problems shown to be hard based on these conjectures are represented by ovals. A full arrow represents a reduction
between two problems, which also means an implication in the case of conjectures. A dotted arrow means a generalization
which is also an implication, while a dashed line means a generalization with no (known) reduction.
An efficient method for evaluating BEM singular integrals on curved elements
with application in acoustic analysis
Junjie Rong, Lihua Wen∗, Jinyou Xiao
College of Astronautics, Northwestern Polytechnical University, Xi’an 710072, P. R. China
arXiv:1306.0282v1 [] 3 Jun 2013
Abstract
The polar coordinate transformation (PCT) method has been extensively used to treat various singular integrals in
the boundary element method (BEM). However, the resultant integrands of the PCT tend to become nearly singular
when (1) the aspect ratio of the element is large or (2) the field point is close to the element boundary; thus a large
number of quadrature points are needed to achieve a relatively high accuracy. In this paper, the first problem is
circumvented by using a conformal transformation so that the geometry of the curved physical element is preserved
in the transformed domain. The second problem is alleviated by using a sigmoidal transformation, which makes the
quadrature points more concentrated around the near singularity.
By combining the proposed two transformations with the Guiggiani’s method in [M. Guiggiani, et al. A general algorithm for the numerical solution of hypersingular boundary integral equations. ASME Journal of Applied
Mechanics, 59(1992), 604-614], one obtains an efficient and robust numerical method for computing the weakly-,
strongly- and hyper-singular integrals in high-order BEM with curved elements. Numerical integration results show
that, compared with the original PCT, the present method can reduce the number of quadrature points considerably,
for given accuracy. For further verification, the method is incorporated into a 2-order Nyström BEM code for solving
acoustic Burton-Miller boundary integral equation. It is shown that the method can retain the convergence rate of the
BEM with much less quadrature points than the existing PCT. The method is implemented in C language and freely
available.
Keywords: singular integrals, boundary element method, Nyström method, acoustics
1. Introduction
The boundary element method (BEM) has been a most important numerical method in science and engineering.
Its unique advantages include the highly accurate solution on the boundary, the reduction of dimensionality, and
its superiority in solving infinite or semi-infinite field problems, etc. Historically, relatively low-order
discretizations have been used, with geometries modelled using first order elements and surface variables modelled to
zero or first order on those elements. Recently, however, there has been increasing interest in the use of high order
methods, in order to obtain extra digits of precision with comparatively small additional effort. Successful applications
are seen in acoustics [1], electromagnetics [2, 3], elasticity [4], aerodynamics [5], to name a few. One of the main
difficulties in using high order BEM is the lack of efficient methods for accurately evaluating various singular integrals
over curved elements. For the well-developed low order methods, there is no great difficulty since robust analytical
and numerical integration schemes are generally available for planar elements [6, 7, 9]. When concerned with high
order elements, however, a fully numerical method is required; various techniques have been proposed, for example,
the singularity subtraction [8], special purpose quadrature [10, 3, 11, 12] and the variable transformation methods
[13, 14, 15, 16].
∗ Corresponding author.
Email addresses: [email protected] (Junjie Rong), [email protected] (Lihua Wen), [email protected] (Jinyou Xiao)
Preprint submitted to Elsevier
June 4, 2013
In [8], Guiggiani et al. proposed a unified formula (Guiggiani's method for short) for treating various orders of
singular integrals on curved elements. It is a singularity subtraction method and has found extensive use in BEM. In
this method the singular parts are extracted from the integrand and treated analytically. The remaining parts are regular
so that conventional Gaussian quadrature can be employed, but the number of quadrature points needed can be large,
depending on the regularity of the associated integrands. Special purpose quadratures can be used to substantially
reduce the number of quadrature points; in addition they are more robust and highly accurate. Most recently, James Bremer
[3] proposed such a method which can achieve machine precision. The problem with special purpose quadrature
is that its construction is somewhat complicated and time-consuming. Variable transformation methods, also known as
singularity cancellation, eliminate the singularity of the integrand through a proper change of variables whose Jacobian
vanishes at the field point. Although simple to implement, they are generally hard to use for hypersingular
integrals.
In most existing integration methods, including those mentioned above, the polar coordinate transformation (PCT)
serves as a common base [8, 3, 12, 13, 16, 21]. It converts the surface integral into a double integral in the radial
and angular directions. Much work has been done on dealing with the singularity in the radial direction; numerical
integration in the angular direction, however, still deserves more attention. In fact, after singularity cancellation or
subtraction, although the integrand may behave very well in the radial direction, its behavior in the angular direction
can be much worse, so that too many quadrature points are needed. Especially when the field point lies close
to the boundary of the element, one can clearly observe near singularity of the integrand in the angular direction.
Unfortunately, this case is frequently encountered when using high order elements, especially non-conforming elements,
as will be demonstrated by a Nyström BEM in this paper. Similar problems have been considered in work on
nearly singular BEM integrals. Effective methods along this line are the subdivision method [17], the Hayami
transformation [18], the sigmoidal transformation [19], etc. For the singular BEM integrals, an angular transformation
has been used for computing weakly singular integrals over planar elements [13].
Another problem of the PCT is how to find a proper planar domain on which to establish the polar coordinates. In usual
practice, the integral is carried out over a standard reference domain (i.e., a triangle in this paper) in intrinsic coordinates.
However, the reference triangle is independent of the shape of the curved element, and thus the distortion of the
element is brought into the integrand, which can cause near singularity in the angular direction. Consequently, the
performance of quadratures is highly sensitive to the shape of the element. The above two problems in the angular
direction were considered and resolved by special purpose quadrature rules in [3]; however, as mentioned before, that
algorithm has its own overhead.
In this paper, two strategies are proposed to overcome these two problems in the angular direction. First, a
conformal transformation is carried out to map the curved physical element onto a planar integration domain. Since the map
is conformal at the field point, the resultant integration domain preserves the shape of the curved element. Second, a
new sigmoidal transformation is introduced to alleviate the near singularity caused by the closeness of the field point
to the element boundary. The two proposed techniques, when combined with Guiggiani's method, lead to a unified,
efficient and robust numerical integration method for the various singular integrals in high order BEM. As a byproduct
of the conformal transformation, the line integral in Guiggiani's method can be evaluated in closed form.
The paper is organized as follows. The Nyström BEM for acoustics and the singular integrals encountered are reviewed in section 2. Section 3 describes Guiggiani's unified framework for treating various orders of singular
integrals, with emphasis on the two causes of the poor performance of polar coordinate methods. The two
efficient transformations which represent the main contribution of this paper are proposed in section 4. Numerical
examples are given in section 5 to validate the efficiency and accuracy of the present method. Section 6 concludes
the paper with some discussions.
2. Model problem: acoustic BIEs and Nyström discretization
The method presented in this paper will be used in solving the acoustic Burton-Miller BIE, which is briefly
recalled here. The BIE is solved by using the Nyström method with the domain boundary being partitioned into
curved quadratic elements. Various singular integrals that will be treated in the following sections are summarized.
2.1. Acoustic Burton-Miller formulation
The time harmonic acoustic waves in a homogenous and isotropic acoustic medium Ω are described by the following
Helmholtz equation

∇²u(x) + k²u(x) = 0,  ∀x ∈ Ω,  (1)

where ∇² is the Laplace operator, u(x) is the sound pressure at the point x = (x_1, x_2, x_3) in the physical coordinate
system, and k = ω/c is the wavenumber, with ω being the angular frequency and c being the sound speed. For the static case
k = 0, (1) becomes the Laplace equation. By using Green's second theorem, the solution of Eq. (1) can be expressed
by the integral representation

u(x) + ∫_S [∂G(x, y)/∂n(y)] u(y) dS(y) = ∫_S G(x, y) q(y) dS(y) + u^I(x),  ∀x ∈ Ω,  (2)

where x is a field point and y is a source point on the boundary S; q(y) = ∂u(y)/∂n(y) is the normal gradient of the sound
pressure; n(y) denotes the unit normal vector at the source point y. The incident wave u^I(x) is not present for
radiation problems. The three dimensional fundamental solution G is given as

G(x, y) = e^{ikr} / (4πr),  (3)

with r = |x − y| denoting the distance between the source and the field points.
Before presenting the BIEs it is convenient to introduce the associated single, double, adjoint and hypersingular
layer operators, denoted by S, D, M and H, respectively; that is,

S q(y) = ∫_S G(x, y) q(y) dS(y),  (4a)
D u(y) = ∫_S [∂G(x, y)/∂n(y)] u(y) dS(y),  (4b)
M q(y) = ∫_S [∂G(x, y)/∂n(x)] q(y) dS(y),  (4c)
H u(y) = ∫_S [∂²G(x, y)/∂n(x)∂n(y)] u(y) dS(y).  (4d)

The operator S is weakly singular and the integral is well-defined, while the operators D and M are defined in the Cauchy
principal value (CPV) sense. The operator H, on the other hand, is hypersingular and unbounded as a map from the
space of smooth functions on S to itself. It should be interpreted in the Hadamard finite part (HFP) sense. Denoting a
vanishing neighbourhood surrounding x by s_ε, the CPV and HFP integrals are those obtained after extracting free terms from a
limiting process in which s_ε tends to zero in deriving the BIEs [8].
Letting the field point x approach the boundary S in Eq. (2) leads to the CBIE

C(x)u(x) + D u(x) = S q(x) + u^I(x),  x ∈ S,  (5)

where C(x) is the free term coefficient, which equals 1/2 on a smooth boundary. By taking the normal derivative of
Eq. (2) and letting the field point x go to the boundary S, one obtains the hypersingular BIE (HBIE)

C(x)q(x) + H u(x) = M q(x) + q^I(x),  x ∈ S.  (6)

Both the CBIE and the HBIE can be applied to calculate the unknown boundary values of interior acoustic problems. For an
exterior problem, each has a different set of fictitious frequencies at which a unique solution cannot be obtained. However, Eqs. (5) and (6) always have only one solution in common. Given this fact, the Burton-Miller formulation,
which is a linear combination of Eqs. (5) and (6) (CHBIE), yields a unique solution for all frequencies [20]:

C(x)u(x) + (D + αH) u(y) − u^I(x) = (S + αM) q(y) − α [C(x)q(x) − q^I(x)],  x ∈ S,  (7)

where α is a coupling constant that can be chosen as i/k.
2.2. Nyström method and the singular integrals

In this paper, the Nyström method is used to discretize the BIEs. Let K be one of the integral operators in (4)
and K be the associated kernel function. The elements are divided into two groups, those near to and those far from the field
point x. If the element lies in the far field, the Nyström method replaces the integral operator K with a summation under
a quadrature rule

∫_{ΔS} K(x_i, y) u(y) dS(y) ≈ Σ_j ω_j K(x_i, y_j) u(y_j),  ΔS ∈ S \ D_i,  (8)

where x_i and y_j are quadrature points, D_i is the near field of x_i, and ω_j is the jth weight over element ΔS. Such quadrature
rules can be obtained by mapping Gaussian quadrature rules onto the parameterization of ΔS.

If the element lies in the near field, however, the kernels exhibit singularities or even hypersingularities. As a
result, conventional quadratures fail to give correct results. In order to maintain the high-order properties, the quadrature
weights are adjusted by a local correction procedure. Thus (8) becomes

∫_{ΔS} K(x_i, y) u(y) dS(y) ≈ Σ_j ω̄_j(x_i) u(y_j),  ΔS ∈ D_i,  (9)
where ω̄_j(x_i) represents the modified quadrature weights of the specialized rule at the singularity. The local correction
is performed by approximating the unknown quantities using a linear combination of polynomial basis functions defined in intrinsic coordinates (Fig. 1). The modified weights are obtained by solving the linear system

Σ_j ω̄_j φ^(n)(y_j) = ∫_{ΔS} K(x_i, y) φ^(n)(y) dS(y),  ΔS ∈ D_i,  (10)

where φ^(n) are polynomial basis functions. For the 2nd-order Nyström method used in this paper, the φ^(n) employed are of the form

φ^(n)(ξ_1, ξ_2) = ξ_1^p ξ_2^q,  p + q ≤ 2,  (11)
where p and q are integers, and ξ_1 and ξ_2 denote the local intrinsic coordinates.
The right hand side integrals in Eq. (10) are of crucial importance to the accuracy of the Nyström method. They
are referred to as singular integrals when x_i lies on the element, and nearly singular integrals when x_i is close to but
not on the element. This paper deals with the integrals in the first case; the nearly singular integrals are treated via a
recursive subdivision quadrature. For the first three operators in equation (4), the integrals have a
weak singularity of order r^{-1}, while the operator H is hypersingular of order r^{-3}, as r → 0.
3. A unified framework for singular integrals

Since singularities of various orders appear in Eq. (7) and in many other BIEs, it is advantageous to find a unified formula
that treats these integrals within the same framework. In this way, the integrals can be implemented in just one program,
reducing the computational cost. By expanding the singular integrands in polar coordinates, the three
types of singular integrals considered in this paper can be handled in a unified manner by using the formula proposed
by Guiggiani [8]. Despite this advantage, the method can be of low efficiency in practical usage; the
reasons are explained below.
3.1. Polar coordinate transformation

Following a common practice in the BEM, the curved element is first mapped onto a region Δ of standard shape
in the parameter plane (Fig. 1). In this case, the integrals to be evaluated in Eq. (10) are of the form

I = ∫_Δ K(x, y(ξ)) φ(ξ) |J(ξ)| dξ_1 dξ_2,  (12)

where |J| is the transformation Jacobian from the global coordinates to the intrinsic coordinates,

J = [∂y/∂ξ_1, ∂y/∂ξ_2],  |J(ξ)| = |∂y/∂ξ_1 × ∂y/∂ξ_2|.
[Figure 1 appears here.]
Figure 1: Description of the curved triangle. Left: the curved triangle and a field point; right: the reference triangle and a typical field point distribution of the 2nd-order Nyström method in the intrinsic coordinate system.
Then, polar coordinates (ρ, θ) centered at ξ_s (the image of x in the intrinsic plane) are defined in the parameter space
(Fig. 2),

ξ_1 = ξ_1^(s) + ρ cos θ,  ξ_2 = ξ_2^(s) + ρ sin θ,

so that dξ_1 dξ_2 = ρ dρ dθ. Due to the piecewise smooth property of the boundary of Δ, the triangle is split into three
sub-triangles. The associated integral of Eq. (12) now becomes

I = lim_{ρ(ε)→0} Σ_{j=1}^{3} ∫_{θ_{j−1}}^{θ_j} ∫_{ρ(ε)}^{ρ̂(θ)} K(ρ, θ) φ(ρ, θ) |J(ρ, θ)| ρ dρ dθ,  (13)

where ρ̂(θ) gives a parametrization of the boundary of Δ in polar coordinates, and (θ_{j−1}, θ_j) are the three intervals on which
ρ̂(θ) is smooth. The limiting process refers to the CPV or HFP integral mentioned before, although for the well-defined integral
operator S the limit is not necessary. According to the geometrical relationship in Fig. 2,

ρ̂(θ) = h_j / cos θ̄,  (14)

where h_j is the perpendicular distance from ξ_s to the jth side of the planar triangle and θ̄ is the angle from the perpendicular to a point ξ (Fig. 2). In each sub-triangle, θ̄ equals θ minus a constant.
[Figure 2 appears here.]
Figure 2: Integration under polar coordinates. Left: the polar coordinate transformation; right: near singularity (θ̄ ≈ 90°) as the field point approaches the boundary.
For singularities of order no more than 3, the integrand of (13), denoted by F(ρ, θ), can be expressed as a series
expansion in the polar coordinates ([23, 8]):

F(ρ, θ) = K(ρ, θ) φ(ρ, θ) |J| ρ = f_{−2}(θ)/ρ² + f_{−1}(θ)/ρ + f_0(θ) + ρ f_1(θ) + ρ² f_2(θ) + · · · = Σ_{i=p}^{∞} ρ^i f_i(θ),  (15)

where the f_i are functions of θ only, and the integer p is determined by the order of the singularity:

p = 0, weakly singular;  p = −1, strongly singular;  p = −2, hypersingular.
First, let us consider the hypersingular integrals. Due to the appearance of the two terms with i = −2, −1, the
integration of F(ρ, θ) must be performed in the HFP sense; for more details see Guiggiani's work [8]. The resultant
formula for hypersingular integrals is given by

I = I_1 + I_2 = Σ_{j=1}^{3} ∫_{θ_{j−1}}^{θ_j} ∫_{0}^{ρ̂(θ)} [ F(ρ, θ) − ( f_{−2}(θ)/ρ² + f_{−1}(θ)/ρ ) ] dρ dθ
    + Σ_{j=1}^{3} ∫_{θ_{j−1}}^{θ_j} [ f_{−1}(θ) ln ρ̂(θ) − f_{−2}(θ)/ρ̂(θ) ] dθ,  (16)

where the first term is I_1 and the second is I_2.

The strongly and weakly singular integrals can be treated in a similar manner based on the expansion (15). The resultant
computing formulas can also be written in the above form, except that for strongly singular integrals f_{−2} = 0, and for
weakly singular integrals f_{−2} = f_{−1} = 0 (thus I_2 = 0). Therefore, formula (16) actually provides a unified approach to
evaluating the singular integrals in BEM.

In (16) the double integral I_1 and the single integral I_2 are both regular, so one may conclude that conventional quadrature is sufficient to guarantee numerical accuracy in evaluating them; it would nevertheless be too expensive in practical usage,
especially when used in the high order Nyström method considered in this paper. The difficulties and the corresponding
solutions are presented in the following sections.
3.2. Difficulties in evaluating I_1

It is the computation of I_1 that accounts for the main cost of evaluating the various singular integrals. The
integrand of I_1 can be approximated by polynomials in ρ; that is,

I_1 = Σ_{j=1}^{3} ∫_{θ_{j−1}}^{θ_j} ∫_{0}^{ρ̂(θ)} [ f_0(θ) + ρ f_1(θ) + ρ² f_2(θ) + . . . ] dρ dθ.  (17)

The number of terms in this approximation is determined by the kernel function, the order of the basis function and the
flatness of the associated element. Relative flatness of the elements is a basic requirement in BEM in order to guarantee
accuracy. Consequently, the integrand of I_1 can be well approximated by low order polynomials
in ρ in solving many problems, including Laplace, Helmholtz, elasticity and so forth, using quadratic elements. This
implies that low order Gaussian quadratures are sufficient for the numerical integration in ρ.

In the angular (θ) direction, however, two difficulties are frequently encountered which severely retard the convergence
rate of Gaussian quadratures. To show this, performing the integration in ρ in (17) one obtains

I_1 = Σ_{j=1}^{3} ∫_{θ_{j−1}}^{θ_j} [ ρ̂(θ) f_0(θ) + (1/2) ρ̂²(θ) f_1(θ) + · · · ] dθ.  (18)
The difficulties in computation of I1 are caused by the near singularities of fi (θ) and ρ̂(θ).
3.2.1. Near singularity in f_i(θ)

The functions f_i(θ) can be expressed as (Appendix A)

f_i(θ) = f̃_i(θ) / A^α(θ),  (19)

where the f̃_i are regular trigonometric functions and α is an integer determined by the subscript i. The function A(θ) depends
on the shape of the element and the parametric coordinate system (see [8] for a definition of A(θ)). Specifically, let u_1, u_2
be the two column vectors of the Jacobian matrix J, which span the space tangent to the element at the point x, i.e.,

u_1 = ∂y/∂ξ_1 |_{y=x}  and  u_2 = ∂y/∂ξ_2 |_{y=x}.  (20)

Then A(θ) is given by

A(θ) = [ |u_1|² cos²θ + u_1·u_2 sin 2θ + |u_2|² sin²θ ]^{1/2}
     = [ u_1·u_2 sin 2θ + (1/2)(|u_1|² − |u_2|²) cos 2θ + (1/2)(|u_1|² + |u_2|²) ]^{1/2}
     = [ (1/2)(|u_1|² + |u_2|²) ( µ sin(2θ + ϕ) + 1 ) ]^{1/2}.  (21)

If we let λ = |u_1| / |u_2| and cos γ = u_1·u_2 / (|u_1||u_2|), then

ϕ = arctan[ (λ² − 1) / (2λ cos γ) ]

and

µ = [ 1 − 4 sin²γ / (λ + λ^{−1})² ]^{1/2} < 1.  (22)
It can be seen from (21) that, if µ → 1, there exist two points θ ∈ [0, 2π] at which A(θ) → 0, and the functions f_i
then tend to be nearly singular. According to Eq. (22), the circumstance µ → 1 occurs in two cases: (1) the aspect ratio
of the element is large, i.e., λ approaches 0 or ∞; (2) a sharp or strongly obtuse corner appears in the element, which leads to
sin γ → 0. Both of these cases indicate a distorted shape of the element.
To illustrate the influence of the element aspect ratio on the behavior of the function 1/A(θ), which represents the
smoothness of f_i, consider the curved element in figure 7; see section 5.1 for a more detailed description. The aspect
ratio of the element is controlled by s; a larger s implies a more distorted shape. Let b be the singular point. The plots of
1/A(θ) on the interval corresponding to the sub-triangle (2-b-3) in figure 7, for various s, are exhibited in Fig. 3. It is
clear that 1/A(θ) varies more sharply as the aspect ratio increases.

The above analysis shows how the element shape affects f_i and thus the integrand in (18). Intuitively, the near singularity in f_i is induced by the fact that the integration plane Δ in the intrinsic coordinate system is independent
of the shape of the element, so the distortion is brought into the integrand. One may expect that if the integration
is performed over another planar triangle which reflects the distortion of the element, the near singularity should be
eliminated. This is the main idea of our new method in section 4.1.
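The quantities λ, γ and µ are cheap to evaluate from u_1 and u_2, so the severity of this near singularity can be checked per element and field point. The following is a minimal sketch of such a check (an assumed helper written for illustration; it is not part of the paper's released code):

```c
/* Evaluate lambda, cos(gamma) and mu of Eqs. (21)-(22) from the tangent
 * vectors u1, u2; mu close to 1 flags a nearly singular A(theta). */
#include <math.h>
#include <stdio.h>

static double dot3(const double a[3], const double b[3]) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

/* Returns mu < 1; A(theta)^2 = 0.5*(|u1|^2+|u2|^2)*(mu*sin(2*theta+phi)+1). */
static double mu_indicator(const double u1[3], const double u2[3]) {
    double n1 = sqrt(dot3(u1, u1)), n2 = sqrt(dot3(u2, u2));
    double lambda = n1 / n2;
    double cosg = dot3(u1, u2) / (n1 * n2);
    double sing2 = 1.0 - cosg * cosg;                 /* sin^2(gamma) */
    double denom = lambda + 1.0 / lambda;
    return sqrt(1.0 - 4.0 * sing2 / (denom * denom)); /* Eq. (22) */
}

int main(void) {
    double u1[3]  = {1.0, 0.0, 0.0};
    double u2a[3] = {0.0, 1.0, 0.0};    /* well-shaped: mu = 0 */
    double u2b[3] = {0.0, 0.1, 0.0};    /* aspect ratio 10: mu close to 1 */
    printf("mu = %.4f, %.4f\n", mu_indicator(u1, u2a), mu_indicator(u1, u2b));
    return 0;
}
```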
3.2.2. Near singularity in ρ̂(θ)

In addition to the near singularity in f_i, another obstacle that retards the convergence of the numerical quadrature for
(18) is the near singularity in ρ̂(θ), which can be clearly seen from (14). When the field point x lies near the
boundary of the element, the limits of θ̄ approach ±π/2, and thus the denominator cos θ̄ is close to 0 at the ends of
the interval (θ_{j−1}, θ_j) (see Fig. 2). The effect of the near singularity in ρ̂(θ) on the overall behavior of the integrand F(ρ, θ)
is demonstrated by the left plots of Fig. 6 (a) and (b), where the integrand has large peaks near the two ends of the
interval.

Unfortunately, the situation that causes the near singularity in ρ̂(θ) is ubiquitous when using the high order Nyström
method; see Fig. 1 for a typical distribution of field points in the 2nd-order Nyström method.
4. Efficient transformation methods

Effectively resolving the two difficulties mentioned above is crucial to the accurate evaluation of the BEM
singular integrals. More recently, special quadratures were constructed for this purpose in [3]. Although accurate and
robust numerical results are reported, the abscissas and weights of the special quadratures depend
on the singular (field) point as well as on the element on which the integral is defined. The construction of the quadrature
for an integral can be rather complicated and time-consuming.

[Figure 3 appears here.]
Figure 3: The plot of 1/A(θ) for various values of s (s = 0.5, 2.0, 4.0, 10). The aspect ratio of the associated element increases with s. The curved element is described in section 5.1, with field point (b).
In this section, however, we propose a simpler yet efficient method. First, the intrinsic coordinates (ξ_1, ξ_2)
are transformed onto a new system in which A(θ) is constant. An additional benefit of a constant A(θ) is that the
line integral I_2 can be evaluated in closed form; see section 4.3. Then a sigmoidal transformation is introduced to
alleviate the near singularity caused by ρ̂(θ).
4.1. Conformal transformation
First, the near singularity caused by A(θ) in f_i is considered and resolved. The idea is to introduce a new transformation under which A(θ) becomes constant. It can be seen from (21) that if

u_1 · u_2 = 0  and  |u_1| = |u_2|,  (23)

then A(θ) will be constant, i.e. A(θ) = |u_1| = |u_2|.
Relations (23) indicate a mapping from the element to the plane that is conformal at the field point x, i.e., both the angles and
the shape of the infinitesimal neighborhood of x are preserved. However, this condition generally cannot be satisfied
in intrinsic coordinates. In [18] Hayami proposed to connect the three corners of the curved element to establish a
planar triangle which preserves the shape of the element. This operation, nevertheless, cannot satisfy condition (23)
exactly.

Here, we propose a transformation in which the curved physical element is mapped to a triangle Δ̄ in the plane
(η_1, η_2), as shown in Fig. 4. The coordinates of the three corners of Δ̄ are (0, 0), (η_1^(2), η_2^(2)) and (1, 0), with η_2^(2) > 0.
The transformation from the system (ξ_1, ξ_2) to (η_1, η_2) is realized by linear interpolation

η = Σ_{i=1}^{3} φ̄_i(ξ_1, ξ_2) η^(i),  (24)

where η^(i) are the coordinates of the three corners of Δ̄ and the φ̄_i are the linear interpolating functions

φ̄_1 = 1 − ξ_1 − ξ_2,  φ̄_2 = ξ_2,  φ̄_3 = ξ_1.  (25)
[Figure 4 appears here.]
Figure 4: Transformations between coordinate systems. Left: the curved element in global coordinates; top right: intrinsic coordinates and the reference element; bottom right: the parametric plane on which the integration is performed.
Plugging (25) into (24) yields

η = Tξ,  (26)

where T is the transformation matrix

T = [ 1, η_1^(2) ; 0, η_2^(2) ]  and  T^{−1} = (1/η_2^(2)) [ η_2^(2), −η_1^(2) ; 0, 1 ].  (27)

The Jacobian matrix at x from the physical coordinates to the (η_1, η_2) coordinates, denoted by [ū_1 ū_2], can be written as

[ū_1 ū_2] = [u_1 u_2] T^{−1} = [ u_1, (u_2 − η_1^(2) u_1)/η_2^(2) ].  (28)

In order to obtain a constant A(θ), ū_1 and ū_2 have to satisfy condition (23); thus one has

η_1^(2) = cos γ / λ,  η_2^(2) = sin γ / λ,  (29)

where λ and γ are defined in section 3.2.
where, λ and γ are defined in section 3.2.
By using relation (26) the integral (12) can be transformed onto (η1 , η2 ) plane. One can then employ the polar
coordinate transform in Section 3.1. The origin of the polar system is set as the image of x on (η1 , η2 ) plane. Then
integral I1 becomes
!#
3 Z θ j Z ρ̂(θ) "
X
f−2 (θ) f−1 (θ)
T−1 ρdρdθ,
(30)
F(ρ, θ) −
I1 =
+
2
ρ
ρ
j=1 θ j−1 0
in which the near singularity caused by A(θ) has been successfully removed.
Numerical results show that the above transformation can always improve the numerical integration regardless the
shapes of the elements being regular or irregular.
4.2. Sigmoidal transformation
When the field point x approaches the element boundary, the function ρ̂(θ), and thus the integral I_1, becomes nearly
singular. Here a sigmoidal transformation is introduced to alleviate this problem. The sigmoidal transformation
was first proposed to calculate singular and nearly singular integrals in the two dimensional BEM [22].
[Figure 5 appears here.]
Figure 5: Sigmoidal transformation with m = 3 used to cluster 10 Gaussian quadrature points (x-axis) toward the endpoints of the interval. The sub-triangle (2-b-3) in Fig. 7 is considered, for which θ̄ ∈ [−1.466, 1.005]. Left: a naive use of the sigmoidal transformation; right: the sigmoidal transformation used in this paper.
Recently, this approach was adopted as an angular transformation for dealing with nearly singular integrals in 3D BEM [19].
One should notice that the angular transformation proposed by Khayat and Wilton [13] could alternatively be used,
but our numerical experience indicates a better overall performance of the sigmoidal transformation, especially in the
hypersingular case.

A sigmoidal transformation can be thought of as a mapping of the interval [0, 1] onto itself whose graph is S-shaped. It has the effect of translating a grid of evenly spaced points on [0, 1] onto a non-uniform grid with the node
points clustered at the endpoints. A typical sigmoidal transformation is given by [22]

σ(w) = w^m / [ w^m + (1 − w)^m ],  σ, w ∈ [0, 1],  m ≥ 1.  (31)

Consider one sub-triangle in figure 2 in which θ̄ ∈ [θ̄_{j−1}, θ̄_j], j = 1, 2, 3. A naive use of the above transformation would be (see Fig. 5)

(θ̄ − θ̄_{j−1}) / (θ̄_j − θ̄_{j−1}) = σ(w).  (32)
However, this would be of low efficiency. Since the integral I_1 tends to be nearly singular (due to ρ̂(θ)) only when
θ̄_{j−1} or θ̄_j (or both) is close to a right angle, it is more reasonable to cluster the quadrature nodes according to the
discrepancy between θ̄_{j−1}, θ̄_j and the right angles, respectively. In addition, there are cases where neither θ̄_{j−1} nor θ̄_j is very
close to a right angle and thus the near singularity is not severe. For these cases, one could use the Gaussian quadratures
directly without any transformation. Nevertheless, the numerical examples in this paper show that the modification put
forward below always achieves more accurate results.

The transformation used in this paper is based on the fact that the singularity occurs at θ̄ = ±π/2, and is given by

(1/π)(θ̄ + π/2) = σ(z),  z = (z_j − z_{j−1}) w + z_{j−1},  w ∈ [0, 1],  (33)

where z_{j−1} = z(θ̄_{j−1}) and z_j = z(θ̄_j) are the values of z corresponding to θ̄_{j−1} and θ̄_j, which can be easily obtained from (31). Obviously, both transformations (33) and (32) map [θ̄_{j−1}, θ̄_j] onto [0, 1], but (33) clusters the quadrature points
adaptively according to the closeness of θ̄ to a right angle, as shown in Fig. 5. The right-hand image demonstrates the
portion of the sigmoidal transformation used in dealing with the sub-triangle (2-b-3) in Fig. 7. Compared with the full sigmoidal
transformation shown on the left, the distribution of quadrature points is more reasonable, since they are more clustered toward
θ̄_{j−1}, which is closer to a right angle; besides, the points are not pushed too far from the center. A comparison between
the original sigmoidal transformation and the modification is illustrated in Table 1. Without any doubt, the modified
transformation is more powerful for the integrals in this paper.

The degree of clustering is controlled by the parameter m; see [22]. Although the numerical examples indicate little
difference in the number of quadrature points when m lies between 2 and 3, it should be pointed out that the
optimum value of m is 3 for the weakly singular case and 2 for the hypersingular case.
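A minimal sketch of (31) and (33) is given below (an assumed helper for illustration, not the paper's released C code); it maps evenly spaced points w ∈ [0, 1] to angles θ̄ that cluster toward whichever endpoint is closer to ±π/2.

```c
/* Sigmoidal map sigma of Eq. (31), its inverse, and the angular mapping of
 * Eq. (33) from w in [0,1] to theta_bar in [tb0, tb1]. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static double sigma(double w, double m) {             /* Eq. (31) */
    double a = pow(w, m), b = pow(1.0 - w, m);
    return a / (a + b);
}

static double sigma_inv(double y, double m) {          /* inverse of Eq. (31) */
    double a = pow(y, 1.0 / m), b = pow(1.0 - y, 1.0 / m);
    return a / (a + b);
}

/* Map a quadrature point w in [0,1] to theta_bar using Eq. (33). */
static double theta_bar(double w, double tb0, double tb1, double m) {
    double z0 = sigma_inv((tb0 + M_PI / 2.0) / M_PI, m);
    double z1 = sigma_inv((tb1 + M_PI / 2.0) / M_PI, m);
    double z = (z1 - z0) * w + z0;
    return M_PI * sigma(z, m) - M_PI / 2.0;
}

int main(void) {
    /* Sub-triangle (2-b-3) of Fig. 7: theta_bar in [-1.466, 1.005], m = 3. */
    for (int k = 0; k <= 8; k++) {
        double w = k / 8.0;                            /* evenly spaced samples */
        printf("% .4f ", theta_bar(w, -1.466, 1.005, 3.0));
    }
    printf("\n");   /* samples cluster toward -1.466, which is near -pi/2 */
    return 0;
}
```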
Table 1: Comparison of relative errors using different methods in the k = 0 case. The curved element is shown in Fig. 7, with field point (b). "n" denotes the number of quadrature points used in the angular direction.

      |            single kernel             |        hypersingular kernel
  n   | Guiggiani   Eq. (32)    Present      | Guiggiani   Eq. (32)    Present
  5   | 6.24e-4     3.19e-3     9.83e-5      | 1.80e-3     1.57e-4     4.09e-5
  8   | 1.02e-3     1.16e-5     6.44e-7      | 1.69e-4     1.87e-6     1.49e-7
  10  | 3.20e-4     3.06e-6     1.26e-8      | 2.39e-5     5.22e-7     3.71e-9

[Figure 6 appears here. (a) Laplace single kernel, m = 3; (b) Laplace hypersingular kernel, m = 2. In each pair the left plot shows the original integrand and the right plot the integrand after the proposed transformations, each with an 8th degree polynomial fit; the norms of the fit residuals are 0.21819 and 0.00738 for (a), and 0.83882 and 0.00946 for (b).]
Figure 6: Integrand of I_1 in the angular direction corresponding to the sub-triangle (2-b-3) in Fig. 7; the basis function is selected as ξ_2². An 8th order polynomial is plotted to fit the curve. Left: the original Guiggiani method; right: after the transformations proposed in this paper.
Finally, the integral has the form

I_1 = Σ_{j=1}^{3} ∫_{0}^{1} ∫_{0}^{ρ̂(θ)} [ F(ρ, w) − ( f_{−2}(w)/ρ² + f_{−1}(w)/ρ ) ] θ̄′_j(w) |T^{−1}| ρ dρ dw,  (34)

where θ̄_j(w) denotes transformation (33) on the interval [θ̄_{j−1}, θ̄_j].

The effects of the two transformations in sub-sections 4.1 and 4.2 are demonstrated in Fig. 6. It can be seen that
after the transformations the integrands become more regular and can be well approximated by polynomials of order
8, which means that low order Gaussian quadratures can achieve accurate results.
4.3. Analytical integration of the one-dimensional integrals

Consider the evaluation of the regular one-dimensional integral I_2 in Eq. (16), i.e., I_2 = Σ_{j=1}^{3} I_2^(j) with

I_2^(j) = ∫_{θ_{j−1}}^{θ_j} [ f_{−1}(θ) ln ρ̂(θ) − f_{−2}(θ)/ρ̂(θ) ] dθ.  (35)

One notes that the integrand of I_2 suffers from the same difficulties as that of the double integral I_1, as stated
in section 3.2. The transformations proposed above can also be used to improve the efficiency of the numerical
quadratures.
In this section close form expression for I2 is derived. It is attributed to the fact that after the conformal transformation in section 4.1 the denominators of fi , i.e. A(θ), become constant. For Burton-Miller equation, it is readily
to see from (A.7) that f−2 (θ) is constant on each sub-triangle thanks to the constant A(θ), and that f−1 (θ) is typically
made up of combinations of elementary trigonometric functions of θ [8]
f−1 (θ) =c1 cos3 θ + c2 cos2 θ sin θ + c3 cos θ sin2 θ + c4 sin3 θ
+ d1 cos θ + d2 sin θ.
(36)
The coefficients ci and di are given in (A.8). Here we notice that the expressions in (A.8) are also applicable for Laplace
equations. For other problems, such as elasticity, Stokes flow, the expressions of f−2 and f−1 are more complicated
and can be derived analogously as in Appendix A.
To further simplify the derivation, the coordinate system (η1 , η2 ) is rotated so that the axis oη1 is parallel to the
¯ Thus θ = θ̄ over each sub-triangle. By substituting ρ̂(θ), f−2 and f−1 into (35), the
perpendicular line of the side of ∆.
integral with f−2 becomes
Z θj
Z
f ( j) θ j
1
f−2 (θ)
dθ = −2
cos θdθ.
ρ̂(θ)
h j θ j−1
θ j−1
The integral with f−1 consists of terms

\ln h_j \int_{\theta_{j-1}}^{\theta_j} \cos^p\theta \sin^q\theta \, d\theta, \qquad p + q = 3 \ \text{or} \ p + q = 1,    (37)

and

\bar I_i = \int_{\theta_{j-1}}^{\theta_j} \cos^p\theta \sin^q\theta \ln\cos\theta \, d\theta, \qquad p + q = 3 \ \text{or} \ p + q = 1.    (38)
The explicit expressions of integral (37) can be easily obtained and are omitted. Here the expressions of the Īi are derived. By changing the integration variable,

t = \sin\theta, \qquad \cos\theta = \sqrt{1 - t^2}, \qquad dt = \sqrt{1 - t^2} \, d\theta,    (39)
one obtains
\bar I_1 = \int_{\theta_{j-1}}^{\theta_j} \cos^3\theta \ln\cos\theta \, d\theta = \int_{t_{j-1}}^{t_j} (1 - t^2) \ln\sqrt{1 - t^2} \, dt = \tilde I_1 - \tilde I_3,
\bar I_2 = \int_{\theta_{j-1}}^{\theta_j} \cos^2\theta \sin\theta \ln\cos\theta \, d\theta = \int_{t_{j-1}}^{t_j} \frac{t - t^3}{\sqrt{1 - t^2}} \ln\sqrt{1 - t^2} \, dt = \tilde I_2 - \tilde I_4,
\bar I_3 = \int_{\theta_{j-1}}^{\theta_j} \cos\theta \sin^2\theta \ln\cos\theta \, d\theta = \int_{t_{j-1}}^{t_j} t^2 \ln\sqrt{1 - t^2} \, dt = \tilde I_3,
\bar I_4 = \int_{\theta_{j-1}}^{\theta_j} \sin^3\theta \ln\cos\theta \, d\theta = \int_{t_{j-1}}^{t_j} \frac{t^3}{\sqrt{1 - t^2}} \ln\sqrt{1 - t^2} \, dt = \tilde I_4,
\bar I_5 = \int_{\theta_{j-1}}^{\theta_j} \cos\theta \ln\cos\theta \, d\theta = \int_{t_{j-1}}^{t_j} \ln\sqrt{1 - t^2} \, dt = \tilde I_1,
\bar I_6 = \int_{\theta_{j-1}}^{\theta_j} \sin\theta \ln\cos\theta \, d\theta = \int_{t_{j-1}}^{t_j} \frac{t}{\sqrt{1 - t^2}} \ln\sqrt{1 - t^2} \, dt = \tilde I_2,    (40)
where

\tilde I_1 = \int_{t_{j-1}}^{t_j} \ln\sqrt{1 - t^2} \, dt = \left[ t \left( \ln\sqrt{1 - t^2} - 1 \right) + \ln\frac{t + 1}{\sqrt{1 - t^2}} \right]_{t_{j-1}}^{t_j},
\tilde I_2 = \int_{t_{j-1}}^{t_j} \frac{t \ln\sqrt{1 - t^2}}{\sqrt{1 - t^2}} \, dt = \left[ \sqrt{1 - t^2} \left( 1 - \ln\sqrt{1 - t^2} \right) \right]_{t_{j-1}}^{t_j},
\tilde I_3 = \int_{t_{j-1}}^{t_j} t^2 \ln\sqrt{1 - t^2} \, dt = \left[ t^3 \left( \frac{1}{3}\ln\sqrt{1 - t^2} - \frac{1}{9} \right) - \frac{t}{3} + \frac{1}{6}\ln\frac{1 + t}{1 - t} \right]_{t_{j-1}}^{t_j},
\tilde I_4 = \int_{t_{j-1}}^{t_j} \frac{t^3 \ln\sqrt{1 - t^2}}{\sqrt{1 - t^2}} \, dt = \left[ \sqrt{1 - t^2} \left( \frac{1}{9}(8 + t^2) - \frac{1}{3}(2 + t^2)\ln\sqrt{1 - t^2} \right) \right]_{t_{j-1}}^{t_j}.    (41)
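As a quick, illustrative sanity check (not part of the paper; the helper names and the sample integration limits below are made up), the antiderivatives written out in Eq. (41) can be compared against adaptive numerical quadrature of the corresponding integrands over an interval inside (-1, 1):

    import numpy as np
    from scipy.integrate import quad

    def ln_c(t):                     # ln sqrt(1 - t^2)
        return 0.5 * np.log(1.0 - t * t)

    # antiderivatives of Eq. (41)
    def F1(t):
        return t * (ln_c(t) - 1.0) + np.log((t + 1.0) / np.sqrt(1.0 - t * t))

    def F2(t):
        return np.sqrt(1.0 - t * t) * (1.0 - ln_c(t))

    def F3(t):
        return t**3 * (ln_c(t) / 3.0 - 1.0 / 9.0) - t / 3.0 \
               + np.log((1.0 + t) / (1.0 - t)) / 6.0

    def F4(t):
        return np.sqrt(1.0 - t * t) * ((8.0 + t * t) / 9.0
                                       - (2.0 + t * t) / 3.0 * ln_c(t))

    integrands = [
        lambda t: ln_c(t),
        lambda t: t * ln_c(t) / np.sqrt(1.0 - t * t),
        lambda t: t * t * ln_c(t),
        lambda t: t**3 * ln_c(t) / np.sqrt(1.0 - t * t),
    ]
    antiderivatives = [F1, F2, F3, F4]

    a, b = -0.4, 0.7                 # sample limits t_{j-1}, t_j
    for k, (f, F) in enumerate(zip(integrands, antiderivatives), start=1):
        num, _ = quad(f, a, b)
        print(f"I~{k}: quadrature = {num:+.10f}, closed form = {F(b) - F(a):+.10f}")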
5. Numerical examples
Figure 7: Curved element and the field points in example 1
Two numerical examples are provided to demonstrate the efficiency and robustness of the method in this paper. The first is used to illustrate the superiority of the method in this paper over the method in section 3.1, i.e., the classic polar coordinate transformation with Guiggiani's method [8], using a sample curved element. The second is used to verify the accuracy and convergence of the method when it is employed in solving the Burton-Miller equation. The method is implemented in the C language, and the numerical integration code is freely available.
The focus of this paper is to improve the performance of the quadrature in the angular direction; in the radial direction, as explained in section 3.2, the integrand behaves well, so only a small number of quadrature points is sufficient. In section 5.1, 6 points are used in the radial direction in every test. When implemented in BEM programs, however, fewer points are needed, because as the model is subdivided the diameter of each element decreases, so that fewer terms are needed in expansion (15) to approximate the integrand. As a result, the convergence rate of the BEM is not affected. For the 2nd order Nyström BEM in section 5.2, we use 3 points in the radial direction.
5.1. Accuracy of the method on a curved element
The accuracy, effectiveness and robustness of the method in this paper are investigated. The integrals are performed over a curved element extracted from a cylinder surface. The sizes of the cylinder and the element are given in Fig. 7. The distance s in Fig. 7 is designed to change the aspect ratio of the element: s = 0.5 represents a quite regular element, and the quality of the element deteriorates as s increases. Four positions of the field point x are considered, which correspond to intrinsic coordinates (a) ξ = (0.3, 0.3), (b) ξ = (0.1, 0.8), (c) ξ = (0.45, 0.45), (d) ξ = (0.64, 0.31),
respectively. One should note that points (b) and (c) match approximately the nodes of the 2nd order Nyström discretization, while (d) matches those of the 3rd order. In addition, for the rather centered point (a) the original Guiggiani's method in section 3.1 is believed to work well.

Table 2: Methods to compare.

Guiggiani    Guiggiani's method in section 3.1
Gui+sig      Guiggiani's method with a sigmoidal transformation applied in the angular direction
Present      Method in this paper, i.e. Guiggiani's method with the two transformations in section 4
Present+a    Similar to "Present", except that the line integral is calculated analytically
Since no analytical solutions are available, the relative error, given by

relative error = |I_calc − I_ref| / |I_ref|,

is evaluated by comparison with the results I_ref of Guiggiani's method using Gauss quadratures with 256 quadrature points in the angular direction and 32 points in the radial direction. I_calc denotes the result calculated by the various methods. The single layer operator S and the hypersingular operator H are tested. The parameter m in the sigmoidal transformation takes the value 3 in the weakly singular case and 2 in the hypersingular case. The basis function φ is selected as the second order monomial ξ2².
A description of the methods tested is given in Table 2. The method in this paper is indicated by "Present", in which the line integral I2 in (16) is computed by Gauss quadratures, while in "Present+a" the line integral is computed using the closed-form expressions in section 4.3. Note that "Present+a" applies only to hypersingular integrals.
5.1.1. Results for regular element
First the performance of the present method for a regular element is verified; for this purpose, we let s = 0.5 and the wave number k = 0 (Laplace kernels).
The convergence behavior of the original method and the present method for the four field points is illustrated in Fig. 8. The results of the present method are marked by triangles or circles. The results for the single layer kernel are plotted with dashed lines, while solid lines are used for hypersingular integrals. A comparison between case (a) and the other three cases shows that the original Guiggiani's method tends to converge slowly when the field point is close to the element boundary, while the present method achieves almost the same fast convergence in all cases. Even in the regular case (a), in which the original method performs better than in the other three cases, the benefit of the present method is obvious.
The effect of each transformation scheme in section 4 is studied in Table 3. Both field points (b) and (d) are tested. For the weakly singular (single layer) integral, the accuracy can be improved by using the sigmoidal transformation alone, whereas the present method, i.e. the combination of the two transformations in section 4, improves the accuracy much further. One notices that for the hypersingular case the sole use of the sigmoidal transformation cannot improve the accuracy. However, this problem is completely overcome by using the parametric coordinates transformation. The analytical formulas for the line integrals can perform better than the numerical quadratures.
Now consider the Helmholtz kernels. The results are similar to those of the Laplace kernels, so only the results for field point (b) are given in Table 4. The wavenumber k is selected as 2.0 so that the wavelength is about 3 times the size of the triangular element; in practical BEM implementations such a mesh is rather coarse. It is obvious that the present method greatly improves the accuracy.
5.1.2. Results for distorted element
As mentioned in section 4.1, the present method should be robust for elements with high aspect ratio; here this is validated. The integrals are evaluated over elements with different s and k = 0. Point (a) is taken as the field point. Both the original and present methods are used to compute the integrals to a relative error of 10−8. The numbers of Gauss quadrature points needed in the angular direction are listed in Table 5. It is seen that this number increases drastically for the original method as s increases, which clearly indicates that the performance of the original method depends heavily on the shape of the element. On the contrary, the present method behaves much more robustly as s changes.
[Figure 8: four log-scale plots of relative error versus the number of integration points in the angular direction, one for each field point (a)-(d), comparing Guiggiani (single), Present (single), Guiggiani (hyper) and Present+a (hyper).]
Figure 8: Convergence comparison for the k = 0 case.
Table 3: Relative errors for different methods. "n" denotes the number of Gauss quadrature points in the angular direction.

Field point (b):
n    single: Guiggiani   Gui+sig    Present     hyper: Guiggiani   Gui+sig    Present    Present+a
5    6.24e-04            1.22e-04   9.83e-05    1.81e-03           5.51e-03   1.28e-03   4.1e-05
8    1.02e-03            2.83e-05   6.44e-07    1.70e-04           2.62e-04   5.41e-06   1.49e-07
10   3.20e-04            3.08e-07   1.26e-08    2.38e-05           2.72e-05   5.64e-08   3.71e-09
12   5.65e-05            8.91e-08   2.95e-11    3.01e-06           1.12e-05   4.20e-10   6.70e-10

Field point (d):
n    single: Guiggiani   Gui+sig    Present     hyper: Guiggiani   Gui+sig    Present    Present+a
5    4.05e-02            1.18e-03   6.06e-04    5.33e-03           6.28e-03   1.11e-02   1.68e-04
8    1.89e-02            3.20e-05   6.35e-06    2.35e-03           1.48e-04   7.32e-05   4.08e-06
10   1.07e-02            5.85e-07   2.23e-07    8.11e-04           2.65e-04   1.93e-06   2.92e-07
12   5.83e-03            1.50e-08   5.16e-09    3.45e-04           3.15e-05   5.34e-08   1.57e-08
Table 4: Relative errors in the dynamic case with k = 2.0; the field point is selected as (b).

n    single: Guiggiani   Present     hyper: Guiggiani   Present+a
5    2.52e-03            2.00e-04    1.92e-03           3.64e-04
8    1.16e-03            2.44e-06    2.08e-04           3.22e-07
10   4.09e-04            1.09e-07    2.04e-05           6.93e-09
12   1.26e-04            4.66e-09    4.36e-06           4.71e-10
Table 5: Number of integration points needed in the angular direction to make the relative error less than 10−8.

s      single: Guiggiani   Present     hyper: Guiggiani   Present   Present+a
0.5    15                  8           18                 11        11
1.5    19                  9           27                 12        11
2.0    22                  9           31                 13        12
4.0    42                  11          62                 14        13
10.0   82                  13          142                15        16
Figure 9: Geometry model
For elements with aspect ratio larger than 10, highly accurate results can still be achieved with only a moderate increase in the number of quadrature points.
5.2. An exterior sound radiation problem
The overall performance of the present integration method is tested by incorporating it into a Nyström BEM code for solving the Burton-Miller equation. The geometry of the problem consists of three sections of cylinders, as shown in Fig. 9. The origin of the coordinate system lies at the middle point of the symmetry axis of the cylinder. All the surfaces of the cylinder vibrate with a given velocity q(x) = ∂u/∂n (Neumann problem), where u is chosen to be the fundamental solution u(x) = G(x, y) in (3) with y = (0, 0, 0).
The Nyström BEM with 2nd order basis functions is used to discretize the Burton-Miller equation. The wave number is k = 2.5. The surface is meshed with quadratic triangular elements and refined four times; the finest mesh consists of 1638 elements. In evaluating the singular integrals, both the original Guiggiani's method and the present method are tested. A common m = 2.5 is used in the sigmoidal transformation for all the singular integrals, and 3-point Gauss quadrature is used in the radial direction.
Three different quadratures in the angular direction are used and compared: the present method using 4 points, and Guiggiani's method with 10 and 15 points. The L2 relative errors of the boundary values of u(x) are shown in Fig. 10. The accuracy of the BEM is degraded when Guiggiani's method with 10 points is used. With Guiggiani's method and 15 points, the convergence of the BEM tends to slow down as the mesh is refined further. The present method with 4 points, however, achieves the theoretical convergence rate.
6. Conclusions
Highly efficient methods with high accuracy and low computational cost are crucial for high-order boundary element analysis. In [8], Guiggiani proposed a unified framework to treat the singular integrals of various orders in the BEM. It is based on the polar coordinate transformation, which has been used extensively in dealing with BEM singular integrals. However, the performance of the polar coordinate transformation deteriorates when the field point is close to the element boundary or the aspect ratio of the element becomes large.
[Figure 10: L2 error of u versus the number of unknowns (log-log), comparing the present method with n = 4 angular points against Guiggiani's method with n = 10 and n = 15, together with a reference slope O(N−2).]
Figure 10: L2-errors of u in example 5.2.
In this paper, first, a conformal transformation is introduced to circumvent the near singularity caused by a large aspect ratio of the element. This transformation maps a curved physical element onto a planar triangle. Since it is conformal at the field point, the resulting integration domain (a planar triangle) preserves the shape of the curved element. Then, a sigmoidal transformation is applied to alleviate the near singularity due to the closeness of the field point to the element boundary. The rationale behind this is that the sigmoidal transformation clusters the quadrature points in the region of near singularity. The combination of the two transformations effectively alleviates the two problems of the existing polar coordinate transformation method, and thus leads to a considerable reduction of quadrature points in the angular direction.
The efficiency and robustness of the present method are illustrated by various singular integrals on a curved quadratic element which is typical of high order Nyström BEM. It is shown that highly accurate results with relative error 10−8 can be achieved with 10 quadrature points in the angular direction. Moreover, the method is more stable with respect to changes of the element aspect ratio. For further verification, the method is applied to a 2nd order Nyström BEM for solving the acoustic Burton-Miller equation. The theoretical convergence rate of the BEM is retained with far fewer quadrature points than the existing quadrature methods.
Acknowledgements
This work was supported by the National Science Foundation of China under Grants 11074201 and 11102154 and by the Funds for Doctor Station from the Chinese Ministry of Education under Grants 20106102120009 and 20116102110006.
Appendix A. Coefficients in Eq. (36)
Consider a general integrand in BEM in polar coordinates

F(\rho, \theta) = \frac{\rho \bar F(\rho, \theta)}{r^\beta},    (A.1)

where β is the order of singularity and F̄(ρ, θ) is a regular function. There holds

\frac{1}{r^\beta} = \rho^{-\beta} \sum_{\nu=0}^{\infty} S_{\nu-\beta}(\theta) \rho^{\nu},    (A.2)

\bar F(\rho, \theta) = \sum_{\nu=0}^{\infty} a_{\nu}(\theta) \rho^{\nu},    (A.3)

where S_ν(θ) is given by ([23], Theorem 4)

S_{\nu-\beta}(\theta) = A^{-\beta-2\nu}(\theta) g_{3\nu}(\theta),    (A.4)

and g_{3ν}(θ) are homogeneous trigonometric polynomials of order 3ν.
For the hypersingular integrand, β = 3, and thus

f_{-2}(\theta) = S_{-3}(\theta) a_0(\theta),
f_{-1}(\theta) = S_{-2}(\theta) a_0(\theta) + S_{-3}(\theta) a_1(\theta).    (A.5)
See [8] for the expressions of S −2 (θ) and S −3 (θ).
For the hypersingular kernel of the Burton-Miller equation in this paper, the expressions of the ai can be obtained similarly to [8]. Let J_k be the kth component of the vector

\frac{\partial y}{\partial \eta_1} \times \frac{\partial y}{\partial \eta_2}.

Then J_k can be expanded as

J_k = J_{k0} + \rho \left[ \left.\frac{\partial J_k}{\partial \eta_1}\right|_{\eta=\eta_s} \cos\theta + \left.\frac{\partial J_k}{\partial \eta_2}\right|_{\eta=\eta_s} \sin\theta \right] + O(\rho^2) = J_{k0} + \rho J_{k1}(\theta) + O(\rho^2).

The basis function φ can be expanded analogously as

\phi = \phi_0 + \rho \left[ \left.\frac{\partial \phi}{\partial \eta_1}\right|_{\eta=\eta_s} \cos\theta + \left.\frac{\partial \phi}{\partial \eta_2}\right|_{\eta=\eta_s} \sin\theta \right] + O(\rho^2) = \phi_0 + \rho \phi_1(\theta) + O(\rho^2).
Then a_0 and a_1(θ) in Eq. (A.5) can be expressed as (repeated indices imply summation)

a_0 = \frac{n_i(x)}{4\pi} J_{i0} \phi_0,
a_1(\theta) = \frac{n_i(x)}{4\pi} \left( J_{i1}(\theta) \phi_0 + J_{i0} \phi_1(\theta) \right).    (A.6)

Substituting Eq. (A.6) into Eq. (A.5) yields

f_{-1}(\theta) = c_1 \cos^3\theta + c_2 \cos^2\theta \sin\theta + c_3 \cos\theta \sin^2\theta + c_4 \sin^3\theta + d_1 \cos\theta + d_2 \sin\theta,
f_{-2}(\theta) = \frac{n_i(x)}{4\pi A^3} J_{i0} \phi_0,    (A.7)
with

c_1 = -\frac{3 n_i(x) J_{i0} \phi_0}{8\pi A^5} \frac{\partial y_k}{\partial \eta_1} \frac{\partial^2 y_k}{\partial \eta_1^2},
c_2 = -\frac{3 n_i(x) J_{i0} \phi_0}{4\pi A^5} \left( \frac{\partial y_k}{\partial \eta_1} \frac{\partial^2 y_k}{\partial \eta_1 \partial \eta_2} + \frac{1}{2} \frac{\partial y_k}{\partial \eta_2} \frac{\partial^2 y_k}{\partial \eta_1^2} \right),
c_3 = -\frac{3 n_i(x) J_{i0} \phi_0}{4\pi A^5} \left( \frac{\partial y_k}{\partial \eta_2} \frac{\partial^2 y_k}{\partial \eta_1 \partial \eta_2} + \frac{1}{2} \frac{\partial y_k}{\partial \eta_1} \frac{\partial^2 y_k}{\partial \eta_2^2} \right),
c_4 = -\frac{3 n_i(x) J_{i0} \phi_0}{8\pi A^5} \frac{\partial y_k}{\partial \eta_2} \frac{\partial^2 y_k}{\partial \eta_2^2},
d_1 = \frac{n_i(x)}{4\pi A^3} \left( \frac{\partial J_i}{\partial \eta_1} \phi_0 + J_{i0} \frac{\partial \phi}{\partial \eta_1} \right),
d_2 = \frac{n_i(x)}{4\pi A^3} \left( \frac{\partial J_i}{\partial \eta_2} \phi_0 + J_{i0} \frac{\partial \phi}{\partial \eta_2} \right).    (A.8)

Note that all the derivatives are evaluated at the field point x.
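The expansions of Jk and φ above are plain Taylor expansions in ρ along the direction θ. The following SymPy sketch (not from the paper; the quadratic parametrization y(η1, η2) is a made-up example, while the expansion point and basis function are borrowed from field point (b) and the monomial ξ2² used in the tests) extracts Jk0, Jk1(θ), φ0 and φ1(θ) for such a hypothetical element:

    import sympy as sp

    e1, e2, r, th = sp.symbols('eta1 eta2 rho theta', real=True)

    # hypothetical curved quadratic parametrization y(eta1, eta2)
    y = sp.Matrix([e1, e2, sp.Rational(1, 4) * (e1**2 + e1 * e2)])
    phi = e2**2                        # second order monomial basis function

    J = y.diff(e1).cross(y.diff(e2))   # vector whose components are the J_k

    # polar substitution eta = eta_s + rho*(cos theta, sin theta) at eta_s = (0.1, 0.8)
    subs_polar = {e1: sp.Rational(1, 10) + r * sp.cos(th),
                  e2: sp.Rational(4, 5) + r * sp.sin(th)}

    for k in range(3):
        Jk = sp.expand(J[k].subs(subs_polar))
        print(f"J_{k}0 =", sp.simplify(Jk.coeff(r, 0)),
              f"   J_{k}1(theta) =", sp.simplify(Jk.coeff(r, 1)))

    phi_polar = sp.expand(phi.subs(subs_polar))
    print("phi_0 =", phi_polar.coeff(r, 0),
          "   phi_1(theta) =", sp.simplify(phi_polar.coeff(r, 1)))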
References
[1] L. F. Canino, J. J. Ottusch, M. A. Stalzer, et al. Numerical solution of the Helmholtz equation in 2D and 3D using a high-order Nyström discretization. Journal of computational physics, 146 (1998), 627-633.
[2] S. Rjasanow, L. Weggler. ACA accelerated high order BEM for Maxwell problems. Computational mechanics, 51 (2013), 431-441.
[3] J. Bremer, Z. Gimbutas. A Nyström method for weakly singular integral operators on surfaces. Journal of computational physics, 231 (2012),
4885-4903.
[4] X. W. Gao, T. G. Davies. Boundary Element Programming in Mechanics. Cambridge University Press (ISBN: 052177359-8), 2002.
[5] D. J. Willis, J. Peraire, J. K. White. A quadratic basis function, quadratic geometry, high order panel method. In 44th AIAA Aerospace sciences
meeting, number AIAA-2006-1253, 2006.
[6] H. J. Wu, Y. J. Liu, W. K. Jiang. A low frequency fast multipole boundary element method based on analytical integration of the hypersingular integral for 3D acoustic problems. Engineering analysis with boundary elements, 37 (2013), 309-318.
[7] S. N. Fata. Explicit expressions for 3D boundary integrals in potential theory. International journal for numerical methods in engineering, 78
(2009), 32-47.
[8] M. Guiggiani, G. Krishnasamy, T. J. Rudolphi, F. J. Rizzo. A general algorithm for the numerical solution of hypersingular boundary integral
equations. ASME Journal of applied mechanics, 59 (1992), 604-614.
[9] S. Järvenpää, M. Taskinen, P. Ylä-Oijala. Singularity extraction technique for integral equation methods with higher order basis functions on
plane triangles and tetrahedra. International journal for numerical methods in engineering, 58 (2003), 1149-1165.
[10] P. Kolm, V. Rokhlin. Numerical quadratures for singular and hypersingular integrals. Computers and Mathematics with Applications, 41
(2001), 327-352.
[11] R. D. Graglia. G. Lombardi. Machine precision evaluation of singular and nearly singular potential integrals by use of Gauss quadrature
formulas for rational functions. IEEE transactions on antennas and propagation, 56 (2008), 981-998.
[12] M. Carley. Numerical quadratures for singular and hypersingular integrals in boundary element methods. SIAM journal on scientific computing, 29 (2007), 1207-1216.
[13] M. A. Khayat, D. R. Wilton, P. W. Fink. An improved transformation and optimized sampling scheme for the numerical evaluation of singular and near-singular potentials. IEEE transactions on antennas and propagation, 7 (2008), 377-380.
[14] M. A. Khayat, D. R. Wilton. Numerical evaluation of singular and near-singular potential integrals. IEEE transactions on antennas and
propagation, 53 (2005), 3180-3190.
[15] M. G. Duffy. Quadrature over a pyramid or cube of integrands with a singularity at a vertex. SIAM Journal on Numerical Analysis, 19(6):
1260-1262, 1982.
[16] B. M. Johnston, P. R. Johnston. A comparison of transformation methods for evaluating two-dimensional weakly singular integrals. International journal for numerical methods in engineering, 56 (2003), 589-607.
[17] L. Scuderi. On the computation of nearly singular integrals in 3D BEM collocation. International Journal for numerical methods in engineering, 74 (2008), 1733-1770.
[18] K. Hayami. Variable transformation for nearly singular integrals in the boundary element method. Research institute for mathematical sciences, Kyoto university, 41 (2005), 821-842.
[19] B. M. Johnston, P. R. Johnston, D. Elliott. A new method for the numerical evaluation of nearly singular integrals on triangular elements in
the 3D boundary element method. Journal of Computational and Applied Mathematics, 245 (2013), 148-161.
[20] A. J. Burton, G. F. Miller. The application of integral equation methods to the numerical solution of some exterior boundary-value problems.
Proceedings of the Royal Society of London, Series A, 323 (1971), 201-210.
[21] X. W. Gao. An effective method for numerical evaluation of general 2D and 3D high order singular boundary integrals. Computer methods in applied mechanics and engineering, 199 (2010), 2856-2864.
[22] P. R. Johnston. Application of sigmoidal transformations to weakly singular and near-singular boundary element integrals. International
journal for numerical methods in engineering, 45 (1999), 1333-1348.
[23] C. Schwab, W. L. Wendland. Kernel properties and representations of boundary integral operators. Mathematische Nachrichten, 156 (1992), 187-218.
| 5 |
AN OKA PRINCIPLE FOR STEIN G-MANIFOLDS
arXiv:1608.05156v2 [math.CV] 9 Jan 2017
GERALD W. SCHWARZ
Abstract. Let G be a reductive complex Lie group acting holomorphically on Stein manifolds
X and Y . Let pX : X → QX and pY : Y → QY be the quotient mappings. Assume that we
have a biholomorphism Q := QX → QY and an open cover {Ui } of Q and G-biholomorphisms
Φi : p_X^{-1}(Ui) → p_Y^{-1}(Ui) inducing the identity on Ui. There is a sheaf of groups A on Q such that
the set of isomorphism classes of all possible Y is the cohomology set H^1(Q, A). The main question
we address is to what extent H 1 (Q, A) contains only topological information. For example, if
G acts freely on X and Y , then X and Y are principal G-bundles over Q, and Grauert’s Oka
principle says that the set of isomorphism classes of holomorphic principal G-bundles over Q is
canonically the same as the set of isomorphism classes of topological principal G-bundles over
Q. We investigate to what extent we have an Oka principle for H 1 (Q, A).
Contents
1. Introduction
2. Background
3. Strongly continuous homeomorphisms and vector fields
4. Logarithms in Ac
5. Homotopies in H 1 (Q, Ac )
6. H 1 (Q, A) → H 1 (Q, Ac ) is a bijection
References
1. Introduction
Let X be a Stein G-manifold where G is a complex reductive group. There is a quotient
space QX = X//G (or just Q if X is understood) and surjective morphism pX (or just p) from
X to Q. Then Q is a reduced normal Stein space and the fibers of p are canonically affine
G-varieties (generally, neither reduced nor irreducible) containing precisely one closed G-orbit.
For S a subset of Q we denote p−1 (S) by XS and we abbreviate X{q} as Xq , q ∈ Q. We have
a sheaf of groups AX (or just A) on Q where A(U) = AutU (XU )G is the group of holomorphic
G-automorphisms of XU which induce the identity map IdU on XU //G = U.
Let Y be another Stein G-manifold. In [KLS15, KLS] we determined sufficient conditions
for X and Y to be equivariantly G-biholomorphic. Clearly we need that QY is biholomorphic
to QX , so let us assume that we have fixed an isomorphism of QY with Q = QX . Let us also
suppose that there are no local obstructions to a G-biholomorphism of X and Y covering IdQ .
(See [KLS, Theorem 1.3] for sufficient conditions for vanishing of the local obstructions.) Then
there is an open cover Ui of Q and G-biholomorphisms Φi : XUi → YUi inducing IdUi . We say
that X and Y are locally G-biholomorphic over Q. Set Φij = Φ−1
i Φj . Then the Φij ∈ A(Ui ∩ Uj )
are a 1-cocycle, i.e., an element of Z 1 (Q, A) (we repress explicit mention of the open cover).
Conversely, given Ψij ∈ Z 1 (Q, A) (for the same open cover) we can construct a corresponding
complex G-manifold Y from the disjoint union of the XUi by identifying XUj and XUi over
2010 Mathematics Subject Classification. Primary 32M05. Secondary 14L24, 14L30, 32E10, 32M17, 32Q28.
Key words and phrases. Oka principle, Stein manifold, reductive complex group, categorical quotient.
Ui ∩ Uj via Ψij . By [KLS, Theorem 5.11] the manifold Y is Stein, and it is obviously locally
G-biholomorphic to X over Q. Let Ψ′ij be another cocycle for {Ui } corresponding to the Stein
G-manifold Y ′ . If Y ′ is G-biholomorphic to Y (inducing IdQ ), then Ψij and Ψ′ij give the same
class in H 1 (Q, A). Thus H 1 (Q, A) is the set of G-isomorphism classes of Stein G-manifolds
Y which are locally G-biholomorphic to X over Q where the G-isomorphisms are required to
induce the identity on Q.
A fundamental question is whether or not H 1 (Q, A) contains more than topological information. For example, suppose that G acts freely on X so that X → Q is a principal G-bundle.
Then X corresponds to an element of H 1 (Q, E) where E is the sheaf of germs of holomorphic
mappings of Q to G. By Grauert’s famous Oka principle [Gra58], H 1 (Q, E) ≃ H 1 (Q, E c ) where
E c is the sheaf of germs of continuous mappings of Q to G. In other words, the set of isomorphism classes of holomorphic principal G-bundles over Q is the same as the set of isomorphism
classes of topological principal G-bundles over Q. The main point of this note is to establish a
similar Oka principle in our setting.
We define another sheaf of groups Ac on Q. For U open in Q, Ac (U) consists of “strongly
continuous” families σ = {σq } of G-automorphisms of the affine G-varieties Xq , q ∈ U. We
define the notion of strongly continuous family in §3. The sheaf A is a subsheaf of Ac .
Fix an open cover {Ui } of Q. Our main theorems are the following (the first of which is a
consequence of [KLS, Theorem 1.4]).
Theorem 1.1. Let Φij, Ψij ∈ Z^1(Q, A) and suppose that there are ci ∈ Ac(Ui) satisfying Φij = ci Ψij cj^{-1}. Then there are c′i ∈ A(Ui) satisfying the same equation.
Theorem 1.2. Let Φij ∈ Z^1(Q, Ac). Then there are ci ∈ Ac(Ui) such that ci Φij cj^{-1} ∈ Z^1(Q, A).
As a consequence we have the following Oka principle:
Corollary 1.3. The canonical map H 1 (Q, A) → H 1 (Q, Ac ) is a bijection.
Remark 1.4. Suppose that X is a smooth affine G-variety and that Z → Q is a morphism of affine
varieties. Then G acts on the fiber product Z ×Q X and we have the group AutZ,alg (Z ×Q X)G of
algebraic G-automorphisms of Z ×Q X which induce the identity on the quotient Z. A scheme
G with projection π : G → Q such that the fibers of G are groups whose structure depends
algebraically on q ∈ Q is called a group scheme over Q. (See [KS92, Ch. III] for a more precise
definition.) We say that the automorphism group scheme of X exists if there is a group scheme
G over Q together with a canonical isomorphism of Γ(Z, Z ×Q G) and AutZ,alg (Z ×Q X)G for all
Z → Q. The automorphism group scheme of X exists (and is an affine variety) if, for example,
p : X → Q is flat [KS92, Ch. III Proposition 2.2]. Assuming G exists, now consider X as a Stein
G-manifold and G as an analytic variety. Then for U open in Q, A(U) ≃ Γ(U, G) and one can
show that Ac (U) is the set of continuous sections of G over U. Thus, in this case, our theorems
reduce to the precise analogues of Grauert’s for the cohomology of G using holomorphic or
continuous sections.
For U an open subset of Q we have a topology on Ac (U) and A(U) and we define the notion of
a continuous path (or homotopy) in Ac (U) or A(U). We establish a result which is well-known
in the case of principal bundles but rather non-trivial in our situation.
Theorem 1.5. Let Φij (t) be a homotopy of elements in Z 1 (Q, Ac ), t ∈ [0, 1]. Then there are
homotopies ci (t) ∈ Ac (Ui ), t ∈ [0, 1], such that Φij (t) = ci (t)Φij (0)cj (t)−1 . Hence Φij (t) ∈
H 1 (Q, Ac ) is independent of t.
Theorem 1.6. Let Φij (t) ∈ Z 1 (Q, Ac ) be a homotopy, t ∈ [0, 1], where the Φij (0) and Φij (1) are
holomorphic. Then there is a homotopy Ψij (t) ∈ Z 1 (Q, A) with Ψij (0) = Φij (0) and Ψij (1) =
Φij (1).
Here is an outline of this paper. In §2 we recall Luna’s slice theorem and related results. In
§3 we define the sheaf of groups Ac as well as a corresponding sheaf of Lie algebras LAc . In
§4 we show that sections of Ac sufficiently close to the identity are the exponentials of sections
of LAc . In §5 we establish our main technical result (Theorem 5.1) about homotopies in Ac .
We prove Theorem 1.1 and Theorem 1.6 as well as a preliminary version of Theorem 1.5. In
§6 we establish Theorem 1.2 and use it to prove Theorem 1.5. Finally, let X and Y be locally
G-biholomorphic over Q. We establish a theorem giving necessary and sufficient conditions for
a G-biholomorphism from XU → YU over IdU , where U ⊂ Q is Runge, to be the limit of the
restrictions to XU of G-biholomorphisms from X to Y over IdQ .
Remark 1.7. In [KLS15, KLS] we also consider G-diffeomorphisms Φ of X which induce the
identity over Q and are strict. This means that the restriction of Φ to Xq , q ∈ Q, induces an
algebraic G-automorphism of (Xq )red where “red” denotes reduced structure. One can adapt
the techniques developed here to prove the analogues of our main theorems with strong G-homeomorphisms replaced by strict G-diffeomorphisms.
Acknowledgement. I thank F. Kutzschebauch and F. Lárusson for our collaboration on [KLS15]
and [KLS] which led to this paper.
2. Background
For details of what follows see [Lun73] and [Sno82, Section 6]. Let X be a Stein manifold with
a holomorphic action of a reductive complex Lie group G. The categorical quotient QX = X//G
of X by the action of G is the set of closed orbits in X with a reduced Stein structure that
makes the quotient map pX : X → QX the universal G-invariant holomorphic map from X to
a Stein space. The quotient QX is normal. When X is understood, we drop the subscript
X in pX and QX . If U is an open subset of Q, then p∗ induces isomorphisms of C-algebras
OX (XU )G ≃ OQ (U) and C 0 (XU )G ≃ C 0 (U). We say that a subset of X is G-saturated if it is a
union of fibers of p. If X is an affine G-variety, then Q is just the complex space corresponding
to the affine algebraic variety with coordinate ring Oalg (X)G .
Let H be a reductive subgroup of G and let B be an H-saturated neighborhood of the origin
of an H-module W . We always assume that B is Stein, in which case B//H is also Stein. Let
G×H B (or TB ) denote the quotient of G×B by the (free) H-action sending (g, w) to (gh−1, hw)
for h ∈ H, g ∈ G and w ∈ B. We denote the image of (g, w) in G ×H B by [g, w].
Let Gx be a closed orbit in X. Then the isotropy group Gx is reductive and the slice
representation at x is the action of H = Gx on W = Tx X/Tx (Gx). By the slice theorem, there
is a G-saturated neighborhood of Gx which is G-biholomorphic to TB where B is an H-saturated
neighborhood of 0 ∈ W .
3. Strongly continuous homeomorphisms and vector fields
The group G acts on O(X), f 7→ g · f , where (g · f )(x) = f (g −1x), x ∈ X, g ∈ G, f ∈ O(X).
Let Ofin (X) denote the set of holomorphic functions f such that the span of {g · f | g ∈ G} is
finite dimensional. They are called the G-finite holomorphic functions on X and obviously form
an O(Q) = O(X)G -algebra. If X is a smooth affine G-variety, then the techniques of [Sch80,
Proposition 6.8, Corollary 6.9] show that for U ⊂ Q open and Stein we have
Ofin (XU ) ≃ O(U) ⊗Oalg (Q) Oalg (X).
Let V be the direct sum of pairwise non-isomorphic non-trivial G-modules V1 , . . . , Vr . Let
O(X)V denote the elements of Ofin (X) contained in a copy of V . If H is a reductive subgroup of G and W an H-module, we similarly define Oalg (TW )V . Then for B an H-saturated
neighborhood of 0 ∈ W , Oalg (TW )V generates O(TB )V over O(B)H . By Nakayama’s Lemma,
f1 , . . . , fm ∈ O(X)V restrict to minimal generators of the O(U)-module O(XU )V for some
neighborhood U of q ∈ Q if and only if the restrictions of the fi to Xq form a basis of
O(Xq )V = Oalg (Xq )V . Thus by the slice theorem, the sheaf of algebras of G-finite holomorphic
functions is locally finitely generated as an algebra over OQ .
Definition 3.1. Let U ⊂ Q be relatively compact. Then there is a V as above such that
the O(XU )Vj are finitely generated over O(U) and generate Ofin (XU ) as O(U)-algebra. Let
f1 , . . . , fn be a generating set of ⊕O(XU )Vj with each fi in some O(XU )Vj . Then we call {fi }
a standard generating set of Ofin (XU ). When U = TB //G as before, we always assume that our
standard generators are the restrictions of homogeneous elements of Oalg (TW ).
Let U ⊂ Q, V and {f1, . . . , fn} be as above. We say that a G-equivariant homeomorphism Ψ : XU → XU is strong if it lies over the identity of U and Ψ^*fi = Σj aij fj where the aij are
in C 0 (XU )G ≃ C 0 (U). We also require that the aij (q) induce a G-isomorphism of O(Xq )V for
all q ∈ U. Then Ψ induces an algebraic isomorphism Ψq : Xq → Xq for all q ∈ U. It is easy
to see that the definition does not depend on our choice of V and the generators fi . We call
(aij ) a matrix associated to Ψ. Using a partition of unity on U it is clear that Ψ is strong if
and only if it is strong in a neighborhood of every q ∈ Q. In a neighborhood of any particular
q, we may assume that the fi restrict to a basis of O(Xq )V , in which case (aij ) is invertible in
a neighborhood of q. Then Φ−1 has matrix (aij )−1 near q. Thus if Φ is strong, so is Φ−1 . Let
Ac (U) denote the group of strong G-homeomorphisms of XU for U open in Q. Then Ac is a
sheaf of groups on Q.
We say that a vector field D on XU is formally holomorphic if it annihilates the antiholomorphic functions on XU. Let D be a continuous formally holomorphic vector field on XU, G-invariant, annihilating O(XU)G. We say that D is strongly continuous (and write D ∈ LAc(U)) if for any q ∈ U there is a neighborhood U′ of q in U and a standard generating set f1, . . . , fn for Ofin(XU′) such that D(fi) = Σj dij fj where the dij are in C^0(U′). We say that D has matrix
(dij ) over U ′ . The matrix is usually not unique. Clearly our definition of LAc (U) is independent
of the choices made. We denote the corresponding sheaf by LAc .
Remark 3.2. Let D ∈ LAc (U) and q ∈ Q. Then D is tangent to F = Xq and acts algebraically
on Oalg (F ), hence lies in the space of G-invariant derivations Deralg (F )G of Oalg (F ). Since
Deralg (F )G is the Lie algebra of the algebraic group Aut(F )G , the restriction of D to F can be
integrated for all time. It follows that D is a complete vector field.
Let U be open in Q, let ǫ > 0 and let K be a compact subset of U. Let f = {f1 , . . . , fn } be
a standard generating set of Ofin (XU ′ ) where U ′ is a neighborhood of K. Define
ΩK,ǫ,f = {Φ ∈ Ac (U) : ||(aij ) − I||K < ǫ}
where (aij ) is some matrix associated to Φ. Here ||(aij ) − I||K denotes the supremum of the
′
matrix norm of (aij ) − I over K. Let f ′ = {f1′ , . . . , fm
} be another standard generating set
defined on a neighborhood of K in U.
Lemma 3.3. Let ǫ′ > 0. Then there is an ǫ > 0 such that ΩK,ǫ,f ⊂ ΩK,ǫ′ ,f ′ .
Proof. We may assume that the fi and fj′ are standard generating sets of Ofin (U). There are
polynomials hi with coefficients in O(U) such that fi′ = hi (f1 , . . . , fn ), 1 ≤ i ≤ m. We may
assume that {f1′ , . . . , fs′ } are the fi′ corresponding to an irreducible G-module Vt . Let Φ ∈ ΩK,ǫ,f
with corresponding matrix (auv ) such that ||(buv )||K < ǫ where (buv ) = (auv ) − I. Let ri be the
degree of hi . Then for 1 ≤ i ≤ s we have
(Φ^*fi′) − fi′ = hi(Φ^*f1, . . . , Φ^*fn) − hi(f1, . . . , fn) = Σ_{k,l=1}^{n} bkl pkl Mkl(f1, . . . , fn),
where the pkl are polynomials in the auv of degree at most ri − 1 and the Mk,l are polynomials
in the fj with coefficients in O(U) which are independent of the auv and buv . Since Φ∗ fi′ is
a covariant corresponding to Vt , we can project the Mkl to O(XU )Vt in which case we get
Σ_{j=1}^{s} Njkl fj′ where the Njkl are in O(U) and independent of the auv and buv. Hence
(Φ^*fi′) − fi′ = Σ_{j=1}^{s} Σ_{k,l=1}^{n} bkl Njkl pkl fj′.
j=1 k,l=1
Since the Njkl pkl are bounded on K, choosing ǫ sufficiently small, we can force the terms
Σ_{k,l=1}^{n} bkl Njkl pkl to be close to 0. Hence there is an ǫ > 0 such that ΩK,ǫ,f ⊂ ΩK,ǫ′,f′.
By the lemma, we get the same neighborhoods of the identity in Ac (U) from any standard
generating set of Ofin (XU ′ ) where U ′ is a neighborhood of K. Thus we can talk about neighborhoods of the identity without specifying the f in question. We then have a well-defined topology
on Ac (U) where Φ is close to Φ′ if Φ′ Φ−1 is close to the identity.
Let U, K and the fi be as above. Define
Ω′K,ǫ,f = {D ∈ LAc(U) | D(fi) = Σj dij fj and ||(dij)||K < ǫ}
where D has (continuous) matrix (dij ) defined on a neighborhood of K. As before, the Ω′K,ǫ,f
give a basis of neighborhoods of 0 and define a topology on LAc (U), independent of the choice
of f.
Proposition 3.4. Let U be open in Q and let {f1 , . . . , fn } be a standard generating set for
Ofin (U).
(1) Let D be a G-invariant formally holomorphic vector field on XU which annihilates O(U) such that D(fi) = Σ dij fj for dij ∈ C^0(U). Then D is continuous, i.e., D ∈ LAc(U).
(2) LAc is a sheaf of Lie algebras and a module over the sheaf of germs of continuous
functions on Q.
(3) exp : LAc (U) → Ac (U) is continuous.
(4) LAc (U) is a Fréchet space.
Proof. Let D be as in (1) and let x0 ∈ XU . There is a subset, say f1 , . . . , fr , of the fi and
holomorphic invariant functions hr+1, . . . , hs such that the zi = fi − fi(x0) and zj = hj − hj(x0) are local holomorphic coordinates at x0. Then, near x0, D has the form Σ ai ∂/∂zi where each
ai = D(fi) is continuous. Hence D is continuous giving (1). Let D, D′ ∈ LAc(U) with matrices (dij) and (d′ij). Let (eij) be their matrix bracket. Then [D, D′] is G-invariant, annihilates O(U) and sends fi to Σ eij fj. Hence we have (2). Part (3) is clear.
The topology on LAc (U), U open in Q, is defined by countably many seminorms, hence
LAc (U) is a metric space and it is Fréchet if it is complete. Let Dk be a Cauchy sequence in
LAc(U). Let K ⊂ U be a compact neighborhood of q ∈ U. There are matrices (d^k_{ij}) of elements of C^0(K) such that Dk(fi) = Σ d^k_{ij} fj over K. Since {Dk} is Cauchy, we may assume that ||(d^k_{ij}) − (d^l_{ij})||K < 1/m for k, l > Nm, m ∈ N. Then lim_{k→∞} d^k_{ij} = dij ∈ C^0(K) for all i, j. It follows that the pointwise limit of the Dk exists and is a formally holomorphic vector field D annihilating the invariants such that D(fi) = Σ dij fj. By (1), D is of type LAc over the interior of K. It follows that LAc(U) is complete.
4. Logarithms in Ac
Let U be an open subset of Q isomorphic to TB = G ×H B where H is a reductive subgroup
of G and B is an H-saturated neighborhood of the origin in an H-module W . Let f1 , . . . , fn
be a standard generating set for Ofin (XU )G consisting of the restrictions to XU of homogeneous
polynomials in Oalg (TW ). Consider polynomial relations of the fi with coefficients in O(U).
These are generated by the relations with coefficients in Oalg (TW )G . Let h1 , . . . , hm be generating
relations of this type. Let N be a bound for the degree of the hj . Now take the covariants which
correspond to all the irreducible G-representations occurring in the span of the monomials of
degree at most N in the fi. Let {fα} be a set of generators for these covariants and let K ⊂ U be compact. Let Φ ∈ Ac(U). Then Φ^*fα = Σ cα,β fβ where the cα,β ∈ C^0(U). We also have that Φ^*fi = Σ aij fj where the aij ∈ C^0(U). We fix a neighborhood Ω of the identity in Ac(U) such that Φ ∈ Ω implies that ||(cα,β) − I||K < 1/3 and that ||(aij) − I||K < 1/2. For Φ ∈ Ω let Λ′ denote Id − Φ^*. Then the formal power series S(Λ′) for log Φ^* is −Λ′ − (1/2)(Λ′)^2 − (1/3)(Λ′)^3 − · · · .
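The following small numerical sketch (not part of the paper; the matrix is randomly generated and all names are ours) mirrors this construction in finite dimensions: for a matrix C with ||I − C|| < 1/3 it sums the series −Σ_{k≥1}(I − C)^k/k and checks that exponentiating the result recovers C, which is the convergence phenomenon exploited in Lemma 4.1 below.

    import numpy as np

    def mat_log_series(C, terms=60):
        # log C = -sum_{k>=1} (I - C)^k / k, convergent when ||I - C|| < 1
        n = C.shape[0]
        A = np.eye(n) - C
        D = np.zeros_like(C)
        P = np.eye(n)
        for k in range(1, terms + 1):
            P = P @ A
            D -= P / k
        return D

    def mat_exp_series(D, terms=40):
        # exp D = sum_{k>=0} D^k / k!
        n = D.shape[0]
        E = np.eye(n)
        P = np.eye(n)
        for k in range(1, terms + 1):
            P = P @ D / k
            E += P
        return E

    rng = np.random.default_rng(0)
    B = rng.standard_normal((4, 4))
    C = np.eye(4) + 0.3 * B / np.linalg.norm(B, 2)   # spectral norm of I - C is 0.3 < 1/3
    D = mat_log_series(C)
    print(np.linalg.norm(mat_exp_series(D) - C))     # should be ~ 1e-15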
Now we restrict to a fiber F = Xq , q ∈ K. Let M denote the span of the covariants fα
restricted to F . Then M is finite dimensional and we give it the usual euclidian topology. Let
Λ denote the restriction of Λ′ to M.
Lemma 4.1. Let m ∈ M. Then the series S(Λ)(m) converges in M.
Proof. We have m = Σα aα fα|F where the aα ∈ C. Then Λ(m) = Σα,β (δα,β − cα,β(q)) aβ fβ|F. Let C denote (cα,β(q)). Then ||I − C|| < 1/3. By induction, Λ^k acts on Σα aα fα|F via the matrix (I − C)^k, where ||(I − C)^k|| < (1/3)^k. Let

C′ = − Σ_{k=1}^{∞} (I − C)^k.

Then S(Λ)(m) converges to Σα,β C′α,β aβ fβ|F ∈ M.
For f ∈ M, define D(f ) to be the limit of S(Λ)(f ). Then D is a G-equivariant linear
endomorphism of M.
Proposition 4.2. Suppose that m1 , m2 and m1 m2 are in M. Then
D(m1 m2 ) = D(m1 )m2 + m1 D(m2 ).
Proof. By [Pra86, Proof of Theorem 4]
Λ^k(m1 m2) = Σ_{ℓ=0}^{2k} Σ_{n=0}^{ℓ} ckℓn Λ^n(m1) Λ^{ℓ−n}(m2),
where ckℓn is the coefficient of x^n y^{ℓ−n} in (x + y − xy)^k. We know that Λ^k is given by the action of the matrix (I − C)^k where ||I − C|| < 1/3. The series Σ_k (1/k)(x + y − xy)^k converges absolutely
when x and y have absolute value at most 1/3. Thus we may make a change in the order of
summation:
D(m1 m2) = Σ_{k=1}^{∞} Σ_{ℓ=0}^{2k} Σ_{n=0}^{ℓ} (1/k) ckℓn Λ^n(m1) Λ^{ℓ−n}(m2) = Σ_{ℓ=0}^{∞} Σ_{n=0}^{ℓ} Σ_{k=1}^{∞} (1/k) ckℓn Λ^n(m1) Λ^{ℓ−n}(m2).
By [Pra86, Proof of Theorem 4] the (actually finite) sum Σ_{k=1}^{∞} (1/k) ckℓn equals 0 unless we have ℓ > 0 and (n = 0 or n = ℓ), in which case the value is 1/ℓ. Hence

D(m1 m2) = − Σ_{ℓ=1}^{∞} (1/ℓ) (Λ^ℓ(m1) m2 + m1 Λ^ℓ(m2)) = D(m1) m2 + m1 D(m2).
Proposition 4.3. Let D : M → M be as above. Then D extends to a G-equivariant derivation
of Oalg (F ).
Proof. Let R = C[z1 , . . . , zn ]. We have a surjective morphism ρ : R → Oalg (F ) sending zi to
fi|F, i = 1, . . . , n. The kernel J of ρ is generated by polynomials of degree at most N (the hj considered as elements of R). Let E denote the derivation of R which sends zi to Σ d′ij zj where
(d′ij ) is the logarithm of (aij (q)). Recall that ||(aij ) − I||K < 1/2. By construction, E on the
span of the zi is the pull-back of D on the span of the fi |F . By Proposition 4.2, E restricted
to polynomials of degree at most N is the pull-back of D restricted to polynomials of degree
at most N in the fi |F . Hence E preserves the span of the elements of degree at most N in J.
Since these elements generate J, we see that E preserves J. Hence E induces a derivation of
R/J, i.e., D extends to a G-invariant derivation of Oalg (F ).
Corollary 4.4. Let Φ ∈ Ω and let U ′ denote the interior of our compact subset K ⊂ U. There is
a D ∈ LAc (U ′ ) such that exp(D) = Φ|XU ′ . The mapping Ω ∋ Φ → D ∈ LAc (U ′ ) is continuous.
Proof. For q ∈ U′, let Dq be the G-equivariant derivation of Oalg(Xq) constructed above. Let D be the vector field on XU′ whose value on Xq is Dq, q ∈ U′. Then D(fi) = Σ dij fj where (dij) = log(aij). By Proposition 3.4, D ∈ LAc(U′). By construction, exp(Dq) = Φq for all
q ∈ U ′ . Hence exp D = Φ|XU ′ . The continuity of Φ 7→ D is clear since (dij ) = log(aij ).
Definition 4.5. Let U ⊂ Q be open and let f1 , . . . , fn be a standard generating set of Ofin (XU ).
Let U′ ⊂ U be open with Ū′ ⊂ U. We say that Φ ∈ Ac(U) admits a logarithm in LAc(U′) if the following hold.
(1) Φ^*fi = Σ aij fj where ||(aij) − I||U′ < 1/2.
(2) There is a D ∈ LAc(U′) such that D(fi) = Σ dij fj on XU′ where (dij) = log(aij).
Note that (aij ) is not unique. The condition is that some (aij ) corresponding to Φ satisfies (1)
and (2).
Remarks 4.6. The formal series log Φ∗ , when applied to any fi , converges to D(fi ). Hence D is
independent of the choice of (aij ). Properties (1) and (2) imply that exp D = Φ over U ′ . Note
that ||(dij )||U ′ < log 2 and (dij ) is the unique matrix satisfying this property whose exponential
is (aij ).
Corollary 4.4 and its proof imply the following result.
Theorem 4.7. Let K ⊂ U ⊂ Q where K is compact and U is open. Then there is a neighborhood Ω of the identity in Ac (U) and a neighborhood U ′ of K in U such that every Φ ∈ Ω admits
a logarithm D = log Φ in LAc (U ′ ). The mapping Φ → log Φ is continuous.
Corollary 4.8. Let Φn be a Cauchy sequence in Ac (U). Then Φn → Φ ∈ Ac (U).
Proof. Since this is a local question, we can assume that we have a standard generating set {fi }
for Ofin (U). Let q ∈ U and let U ′ be a relatively compact neighborhood of q in U. Then there is
a neighborhood Ω of the identity in Ac (U) such that any Ψ ∈ Ω admits a logarithm in LAc (U ′ ).
Let Ω0 be a smaller neighborhood of the identity with Ω0 ⊂ Ω. There is an N ∈ N such that
n ≥ N implies that Φ_N^{-1} Φn ∈ Ω0, hence log(Φ_N^{-1} Φn) = Dn ∈ LAc(U′), and Dn converges to an element D ∈ LAc(U′) by Proposition 3.4. Set Φ = ΦN exp D ∈ A(U′). Since exp Dn = Φ_N^{-1} Φn
over U ′ we have Φn → Φ in Ac (U ′ ).
5. Homotopies in H 1 (Q, Ac )
We establish our main technical result concerning homotopies in H 1 (Q, Ac ). We give proofs
of Theorems 1.1 and 1.6 and a special case of Theorem 1.5.
Let Φ(t) ∈ Ac (U), t ∈ C, where C is a topological space. We say that Φ(t) is continuous
if relative to a standard generating set, Φ(t) has corresponding matrices (aij (t, q)) where each
aij is continuous in t and q ∈ U. (It is probably false that every continuous map C → Ac (U)
is continuous in our sense.) Let Ac (U) denote the set of all continuous paths Φ(t) ∈ Ac (U),
t ∈ [0, 1], starting at the identity. We have a topology on Ac (U) as in §3 and Ac (U) is a
topological group. When we talk of homotopies in Ac (U) we mean that the corresponding
families with parameter space [0, 1]2 are continuous as above. We define continuous families
of elements of LAc (U) similarly. One defines A(U) similarly to Ac (U) where, of course, the
relevant aij (t, q) are required to be holomorphic in q and continuous in t.
Here is our main technical result about Ac .
Theorem 5.1.
(1) The topological group Ac (Q) is pathwise connected.
(2) If U ⊂ Q is open, then Ac (Q) is dense in Ac (U).
(3) H 1 (Q, Ac ) = 0.
Proof. Let Φ(t) be an element of Ac (Q). Since {0} is a deformation retract of [0, 1], there is a
homotopy Φ(s, t) with Φ(0, t) = Φ(t) and Φ(1, t) the identity automorphism. Hence we have
(1). For (2), let Φ ∈ Ac (U). Let K be a compact subset of U and U ′ a relatively compact
neighborhood of K in U. It follows from Theorem 4.7 that there are 0 = t0 < t1 < · · · < tm = 1
and continuous families Dj (s) in LAc (U ′ ) for s ∈ [0, tj+1 − tj ] such that, over U ′ , Φ(s + tj ) =
Φ(tj ) exp(Dj (s)), s ∈ [0, tj+1 − tj ], j = 0, . . . , m − 1. Multiplying by a cutoff function, we can
assume that the Dj (s) are in LAc (Q). Then our formula gives an element of Ac (Q) which
restricts to Φ on a neighborhood of K and we have (2).
Let K ⊂ Q be compact which is of the form K ′ ∪ K ′′ where K ′ and K ′′ are compact. Let
U = U ′ ∪ U ′′ be a neighborhood of K where K ′ ⊂ U ′ , K ′′ ⊂ U ′′ . Let Φ(t) be in Z 1 (U, Ac ) for
the open covering {U′, U′′} of U. Then Φ(t) is just an element in Ac(U′ ∩ U′′). By (2) we can write Φ = Ψ1 Ψ_2^{-1} where Ψ1 is defined over Q (hence over U′) and Ψ2 is close to the identity
over K ′ ∩ K ′′ . Then Ψ2 (t) = exp D(t) where D(t) ∈ LAc (U ′ ∩ U ′′ ) is a continuous family and
D(0) = 0. Using a cutoff function again, we can find D0 (t) ∈ LAc (Q) which equals D(t) in
a neighborhood of K′ ∩ K′′ and vanishes when t = 0. We have Φ = Ψ1 Ψ_2^{-1} where Ψ2 is the
exponential of D0 (t). Thus the cohomology class of Φ becomes trivial if we replace U ′ and
U ′′ by slightly smaller neighborhoods of K ′ and K ′′ . Let H 1 (K, Ac ) denote the direct limit of
H 1 (U, Ac ) for U a neighborhood of K. As in [Car58, §5], our result above shows that
there is a sequence of compact sets K1 ⊂ V2 ⊂ K2 ⊂ · · · with Vn the interior of Kn, Q = ∪n Kn and
H 1 (Kn , Ac ) = 0 for all n.
Let {Ui } be an open cover of Q and Φij ∈ Ac (Ui ∩ Uj ) a cocycle. There are cni ∈ Ac (Ui ∩ Vn )
such that Φij = (c^n_i)^{-1} c^n_j on Ui ∩ Uj ∩ Vn. Thus c^{n+1}_i (c^n_i)^{-1} = c^{n+1}_j (c^n_j)^{-1} on Ui ∩ Uj ∩ Vn. The c^{n+1}_i (c^n_i)^{-1} define a section d ∈ Ac(Vn). By (2) there is a section d′ of Ac(Q) which is arbitrarily close to d on Kn−1. Replace each c^{n+1}_i by (d′)^{-1} c^{n+1}_i. Then c^{n+1}_i is very close to c^n_i on Kn−1 and we can arrange that the limit as n → ∞ of the c^n_i converges on every compact subset to ci ∈ Ac(Ui) such that Φij = c_i^{-1} cj. We have used Corollary 4.8. This completes the proof of
(3).
Note that (3) says that for any homotopy of a cocycle Φij (t) starting at the identity there
are ci (t) ∈ Ac (Ui ) such that Φij (t) = ci (t)−1 cj (t) for all t ∈ [0, 1]. Hence Φij (t) is the trivial
element in H 1 (Q, Ac ) for all t. We now use a trick to show a similar result if we only assume
that Φij (0) ∈ Z 1 (Q, A).
Let Ψij ∈ Z 1 (Q, A) for some open cover {Ui } of Q. By [KLS, Theorem 5.11], there is a Stein
G-manifold Y with quotient Q corresponding to the Ψij . Let Xi = XUi and Yi = YUi . Then
there are G-biholomorphisms Ψi : Xi → Yi over the identity of Ui such that Ψ_i^{-1} Ψj = Ψij.
Here is an analogue of the twist construction in Galois cohomology. We leave the proof to
the reader.
Lemma 5.2. Let Ψij ∈ Z^1(Q, A) and Φij ∈ Z^1(Q, Ac) be cocycles for the open cover {Ui} of Q. Let Y and Ψi : Xi → Yi be as above. The mapping Φij 7→ Ψi Φij Ψ_j^{-1} induces an isomorphism of H^1(Q, Ac) and H^1(Q, A_c^Y) which sends the class Ψij to the trivial class of H^1(Q, A_c^Y).
Corollary 5.3. Let Φij (t) be a homotopy of cocycles with values in Ac (Ui ∩ Uj ) where {Ui } is
an open cover of Q. Suppose that Φij (0) is holomorphic. Then there are ci ∈ Ac (Ui ) such that
Φij (t) = ci (t)−1 Φij (0)cj (t) for all t.
Proof. By Lemma 5.2 we may reduce to the case that Φij (0) is the identity, so we can apply
Theorem 5.1.
Let X, Y and the Ψi be as above. We say that a G-homeomorphism Φ : X → Y is strong if Ψ_i^{-1} ◦ Φ : Xi → Xi is strong for all i, i.e., in Ac(Ui). It is easy to see that this does not depend upon the particular choice of the Ψi. Similarly one can define what it means for a family Φ(t) of strong G-homeomorphisms to be continuous, t ∈ [0, 1]. Then we have the following nice result [KLS, Theorem 1.4].
Theorem 5.4. Let Φ : X → Y be strongly continuous. Then there is a continuous family Φ(t)
of strong G-homeomorphisms from X to Y with Φ(0) = Φ and Φ(1) holomorphic.
Proof of Theorem 1.1. We have Φij, Ψij ∈ Z^1(Q, A) and ci ∈ Ac(Ui) satisfying Φij = ci Ψij cj^{-1}. Using Lemma 5.2 we may assume that Φij is the trivial class. Then the ci are the same thing as a strong G-homeomorphism Θ : X → Y where Y is the Stein G-manifold corresponding to the Ψij (after our twisting). By Theorem 5.4 not only are there di ∈ A(Ui) such that Ψij = di dj^{-1},
but the di are ei (1) where ei (t) is a path in Ac (Ui ) starting at ci and ending at di . The di
correspond to a G-biholomorphism of X and Y over Q.
We now prepare to prove Theorem 1.6.
Lemma 5.5. Let Φ ∈ Ac (Q) such that Φ(1) is holomorphic. Then Φ is homotopic to Φ′ ∈ A(Q)
where Φ′ (1) = Φ(1).
Proof. We have to make use of a sheaf of groups F on Q which is a subsheaf of the sheaf of
G-diffeomorphisms of X which induce the identity on Q and are algebraic isomorphisms on the
fibers of p. See [KLS, Ch. 6]. We give F (U) the usual C ∞ -topology. Let F(U) denote the
sheaf of homotopies Ψ(t) of elements of F (U), t ∈ [0, 1], where Ψ(0) is the identity and Ψ(1)
is holomorphic. Then [KLS, Theorem 10.1] tells us that F(Q) is pathwise connected. Hence
for Φ ∈ F(Q) there is a homotopy Ψ(s) ∈ F(Q) such that Ψ(0) = Φ and Ψ(1) is the identity.
Then Ψ(s) evaluated at t = 1 is a homotopy from Φ(1) to the identity in A(Q), establishing
the lemma when Φ ∈ F(Q).
We now use a standard trick. Let ∆ denote a disk in C containing [0, 1] with trivial G-action.
Then ∆ × X has quotient ∆ × Q with the obvious quotient mapping. Let ρ : ∆ → [0, 1] be
continuous such that ρ sends a neighborhood of 0 to 0 and a neighborhood of 1 to 1. For
(z, x) ∈ ∆ × X, define Ψ(z, x) = (z, Φ(ρ(z), x)). Then Ψ ∈ Ãc (∆ × Q) where Ãc = Ac∆×X .
Moreover, Ψ is the identity on the inverse image of a neighborhood of {0}×Q and is holomorphic
on the inverse image of a neighborhood of {1} × Q. By [KLS, Theorem 8.7] we can find a
homotopy Ψ(s) which starts at Ψ and ends up in F (∆ × Q). Moreover, the proof shows that we
can assume that the elements of the homotopy are unchanged over a neighborhood of {0, 1}×Q.
Restricting Ψ(1) to [0, 1] ⊂ ∆ we have an element in F(Q) which at time 1 is still Φ(1). Then
we can apply the argument above.
Proof of Theorem 1.6. By Lemma 5.2 we may assume that Φij (0) is the identity cocycle. Since
H 1 (Q, Ac ) is trivial, there are ci ∈ Ac (Ui ) such that Φij (t) = ci (t)cj (t)−1 for t ∈ [0, 1]. Now
the ci (1) define a strongly continuous G-homeomorphism from X to the Stein G-manifold Y
corresponding to Φij (1). By Theorem 5.4 there is a homotopy ci (t), 1 ≤ t ≤ 2, such that the
ci (2) are holomorphic and split Φij (1). Reparameterizing, we may reduce to the case that the
original ci (t) are holomorphic for t = 1. Now apply Lemma 5.5 to Ψij (t) = ci (t)cj (t)−1 .
6. H 1 (Q, A) → H 1 (Q, Ac ) is a bijection
We give a proof of Theorem 1.2. We are given an open cover {Ui} of Q and Φij ∈ Z^1(Q, Ac). We want to find ci ∈ Ac(Ui) such that c_i^{-1} Φij cj is holomorphic. We may assume that the Ui are
relatively compact, locally finite and Runge. We say that an open set U ⊂ Q is good if there
are sections ci ∈ Ac (Ui ∩ U) such that c−1
i Φij cj is holomorphic on Uij ∩ U for all i and j where
Uij denotes Ui ∩ Uj . This says that {Φij } is cohomologous to a holomorphic cocycle on U. The
goal is to show that Q is good. It is obvious that small open subsets of Q are good.
Lemma 6.1. Suppose that Q = Q′ ∪ Q′′ where Q′ and Q′′ are good and Q′ ∩ Q′′ is Runge in Q.
Then Q is good.
Proof. By hypothesis, we have c′i ∈ Ac (Q′ ∩ Ui ) and c′′i ∈ Ac (Q′′ ∩ Ui ) such that
Ψ′ij = (c′i )−1 Φij c′j , and Ψ′′ij = (c′′i )−1 Φij c′′j are holomorphic.
Then on Uij ∩ Q′ ∩ Q′′ we have
Ψ′′ij = h_i^{-1} Ψ′ij hj where hi = (c′i)^{-1} c′′i.
The Ψ′ij are a holomorphic cocycle for the covering Ui ∩ Q′ of Q′ , hence they correspond to a
Stein G-manifold X ′ with quotient Q′ . Similarly the Ψ′′ij give us X ′′ , and X ′ and X ′′ are locally
G-biholomorphic to X over Q′ and Q′′ , respectively. The hi give us a strong G-homeomorphism
h : X ′ → X ′′ , everything being taken over Q′ ∩ Q′′ . By Theorem 5.4 there is a homotopy h(t, x)
with h(0, x) = h(x) and h(1, x) holomorphic. Let k(x) denote h(1, x). Then k corresponds to a
family ki homotopic to the family hi .
Now just consider the space Ui covered by the two open sets Ui ∩ Q′ and Ui ∩ Q′′ . Then
hi and ki are defined on the intersection of the two open sets and are homotopic where hi is
cohomologous to the trivial cocycle since hi = (c′i )−1 c′′i . By Corollary 5.3 and Theorem 1.1 the
cohomology class represented by ki (x) is holomorphically trivial. Hence there are holomorphic
sections h′i and h′′i such that ki = (h′i )−1 h′′i on Ui ∩ Q′ ∩ Q′′ . Then h′i Ψ′ij (h′j )−1 = h′′i Ψ′′ij (h′′j )−1 on
Uij ∩ Q′ ∩ Q′′ . We construct a holomorphic cocycle Ψij on Uij by Ψij = h′i Ψ′ij (h′j )−1 on Uij ∩ Q′
and h′′i Ψ′′ij (h′′j )−1 on Uij ∩ Q′′ .
Using Lemma 5.2 we may reduce to the case that Ψij is the trivial cocycle. As in the beginning
of the proof there are c′i ∈ Ac (Q′ ∩ Ui ) and c′′i ∈ Ac (Q′′ ∩ Ui ) such that
Φij |X ′ = c′i (c′j )−1 and Φij |X ′′ = c′′i (c′′j )−1
where X ′ = XQ′ and X ′′ = XQ′′ . Let hi = (c′i )−1 c′′i . Then hi = hj on Uij ∩ Q′ ∩ Q′′ , hence we
have a section h ∈ Ac (Q′ ∩ Q′′ ), and this section gives the same cohomology class as Φij (use the
open cover {Q′ ∩ Ui , Q′′ ∩ Ui }). By Theorem 5.4, h is homotopic to an element h̃ ∈ A(Q′ ∩ Q′′ ),
and this holomorphic section gives the same cohomology class by Corollary 5.3. Since going
to a refinement of an open cover is injective on H 1 , we see that our original Φij differs from a
holomorphic cocycle by a coboundary. Thus Q is good.
Proof of Theorem 1.2. Using Lemma 6.1 as in [Car58, §5] we can show that there is a cover of Q
by compact subsets Kn such that K1 ⊂ V2 ⊂ K2 . . . where Vj is the interior of Kj and such that
a neighborhood of every Kn is good. We can assume that Ui ∩ Vn 6= ∅ implies that Ui ⊂ Vn+1 .
This is possible by replacing {Kn } by a subsequence. For each n we choose cni ∈ Ac (Ui ∩ Vn )
such that
(cni )−1 Φij cnj = Ψnij is holomorphic on Uij ∩ Vn .
Then

Ψ^n_{ij} = (d^n_i)^{-1} Ψ^{n+1}_{ij} d^n_j on Uij ∩ Vn,

where d^n_i = (c^{n+1}_i)^{-1} c^n_i gives a strongly continuous map from the Stein G-manifold Yn over Vn obtained using the Ψ^n_{ij} to the Stein G-manifold Yn+1 obtained using the Ψ^{n+1}_{ij}. We know that the map is homotopic to a holomorphic one. Hence there are homotopies d^n_i(t) on Ui ∩ Vn such that
(1) Ψ^n_{ij} = (d^n_i(t))^{-1} Ψ^{n+1}_{ij} d^n_j(t) on Ui ∩ Uj ∩ Vn, for all t.
(2) d^n_i(0) = (c^{n+1}_i)^{-1} c^n_i.
(3) The d^n_i(1) give a G-equivariant biholomorphic map from Yn to Yn+1 over IdVn.
Without changing the c^n_i we may replace the c^{n+1}_i by sections c̃^{n+1}_i such that
(4) Ψ̃^{n+1}_{ij} = (c̃^{n+1}_i)^{-1} Φij c̃^{n+1}_j is holomorphic on Ui ∩ Uj ∩ Vn+1.
(5) c̃^{n+1}_i = c^n_i on Ui ∩ Vn−2.
It suffices to set c̃^{n+1}_i = c^{n+1}_i if Ui ∩ Vn−1 = ∅; if not, then Ui ⊂ Vn, and we can set

c̃^{n+1}_i = c^{n+1}_i · d^n_i(λ(x)),

where λ : Vn → [0, 1] is continuous, 0 for x ∈ Vn−2 and 1 for x ∉ Vn−1. Then one has (4) and (5). Thus we can arrange that c^{n+1}_i = c^n_i in Ui ∩ Vn−2, hence we obviously have convergence of the c^n_i to a continuous section ci such that (ci)^{-1} Φij cj is holomorphic.
We now have Theorems 1.1 and 1.2 which imply Corollary 1.3, i.e., that H 1 (Q, A) →
H 1 (Q, Ac ) is a bijection.
Proof of Theorem 1.5. This is immediate from Corollary 5.3 and Theorem 1.2.
We end with the analogue of an approximation theorem of Grauert.
Theorem 6.2. Let U ⊂ Q be Runge. Suppose that Φ : XU → YU is biholomorphic and G-equivariant inducing IdU. Here X and Y are locally G-biholomorphic over Q. Then Φ can be
arbitrarily closely approximated by G-biholomorphisms of X and Y over IdQ if and only if this
is true for strong G-homeomorphisms of X and Y .
Proof. Let K ⊂ U be compact and let Φ ∈ Mor(XU , YU )G be our holomorphic G-equivariant
map inducing IdU . We can find a relatively compact open subset U ′ of U which contains K
and is Runge in Q. By hypothesis, there is a strong G-homeomorphism Ψ : X → Y which is
arbitrarily close to Φ over U ′ . Then Ψ−1 Φ = exp D where D ∈ LAc (U ′ ), hence Ψ′ and Φ′ are
homotopic, where Φ′ is the restriction of Φ to U ′ and similarly for Ψ′ . Now Ψ is homotopic
to a biholomorphic G-equivariant map Θ : X → Y inducing IdQ , and Ψ′ is homotopic to the
restriction Θ′ of Θ to U ′ . Then (Φ′ )−1 Θ′ is holomorphic and homotopic to the identity section
over U ′ . Since the end points of the homotopy are holomorphic, by Theorem 1.6 we can find
a homotopy all of whose elements are holomorphic. By [KLS, Theorem 10.1] there is a section
∆ ∈ A(Q) which is arbitrarily close to (Φ′ )−1 Θ′ on U ′ . Then Θ∆−1 , restricted to U ′ , is arbitrarily
close to Φ′ , hence this is true over K. This establishes the theorem.
References
[Car58] Henri Cartan, Espaces fibrés analytiques, Symposium internacional de topologı́a algebraica, Universidad
Nacional Autónoma de México and UNESCO, Mexico City, 1958, pp. 97–121.
[Gra58] Hans Grauert, Analytische Faserungen über holomorph-vollständigen Räumen, Math. Ann. 135 (1958),
263–273.
[KLS] Frank Kutzschebauch, Finnur Lárusson, and Gerald W. Schwarz, Homotopy principles for equivariant
isomorphisms, preprint, arXiv:1503.00797.
[KLS15] Frank Kutzschebauch, Finnur Lárusson, and Gerald W. Schwarz, An Oka principle for equivariant
isomorphisms, J. reine angew. Math. 706 (2015), 193–214.
[KS92] Hanspeter Kraft and Gerald W. Schwarz, Reductive group actions with one-dimensional quotient, Inst. Hautes Études Sci. Publ. Math. (1992), no. 76, 1–97.
[Lun73] Domingo Luna, Slices étales, Sur les groupes algébriques, Soc. Math. France, Paris, 1973, pp. 81–105. Bull. Soc. Math. France, Paris, Mémoire 33.
[Pra86] C. Praagman, Iterations and logarithms of formal automorphisms, Aequationes Math. 30 (1986), no. 2–3, 151–160.
[Sch80] Gerald W. Schwarz, Lifting smooth homotopies of orbit spaces, Inst. Hautes Études Sci. Publ. Math. (1980), no. 51, 37–135.
[Sno82] Dennis M. Snow, Reductive group actions on Stein spaces, Math. Ann. 259 (1982), no. 1, 79–97.
Gerald W. Schwarz, Department of Mathematics, Brandeis University, Waltham, MA 02454-9110, USA
E-mail address: [email protected]
| 4 |
arXiv:1705.07874v2 [] 25 Nov 2017
A Unified Approach to Interpreting Model
Predictions
Scott M. Lundberg
Paul G. Allen School of Computer Science
University of Washington
Seattle, WA 98105
[email protected]
Su-In Lee
Paul G. Allen School of Computer Science
Department of Genome Sciences
University of Washington
Seattle, WA 98105
[email protected]
Abstract
Understanding why a model makes a certain prediction can be as crucial as the
prediction’s accuracy in many applications. However, the highest accuracy for large
modern datasets is often achieved by complex models that even experts struggle to
interpret, such as ensemble or deep learning models, creating a tension between
accuracy and interpretability. In response, various methods have recently been
proposed to help users interpret the predictions of complex models, but it is often
unclear how these methods are related and when one method is preferable over
another. To address this problem, we present a unified framework for interpreting
predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature
an importance value for a particular prediction. Its novel components include: (1)
the identification of a new class of additive feature importance measures, and (2)
theoretical results showing there is a unique solution in this class with a set of
desirable properties. The new class unifies six existing methods, notable because
several recent methods in the class lack the proposed desirable properties. Based
on insights from this unification, we present new methods that show improved
computational performance and/or better consistency with human intuition than
previous approaches.
1
Introduction
The ability to correctly interpret a prediction model’s output is extremely important. It engenders
appropriate user trust, provides insight into how a model may be improved, and supports understanding
of the process being modeled. In some applications, simple models (e.g., linear models) are often
preferred for their ease of interpretation, even if they may be less accurate than complex ones.
However, the growing availability of big data has increased the benefits of using complex models, so
bringing to the forefront the trade-off between accuracy and interpretability of a model’s output. A
wide variety of different methods have been recently proposed to address this issue [5, 8, 9, 3, 4, 1].
But an understanding of how these methods relate and when one method is preferable to another is
still lacking.
Here, we present a novel unified approach to interpreting model predictions.1 Our approach leads to
three potentially surprising results that bring clarity to the growing space of methods:
1. We introduce the perspective of viewing any explanation of a model’s prediction as a model itself,
which we term the explanation model. This lets us define the class of additive feature attribution
methods (Section 2), which unifies six current methods.
1 https://github.com/slundberg/shap
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
2. We then show that game theory results guaranteeing a unique solution apply to the entire class of
additive feature attribution methods (Section 3) and propose SHAP values as a unified measure of
feature importance that various methods approximate (Section 4).
3. We propose new SHAP value estimation methods and demonstrate that they are better aligned
with human intuition as measured by user studies and more effectually discriminate among model
output classes than several existing methods (Section 5).
2 Additive Feature Attribution Methods
The best explanation of a simple model is the model itself; it perfectly represents itself and is easy to
understand. For complex models, such as ensemble methods or deep networks, we cannot use the
original model as its own best explanation because it is not easy to understand. Instead, we must use a
simpler explanation model, which we define as any interpretable approximation of the original model.
We show below that six current explanation methods from the literature all use the same explanation
model. This previously unappreciated unity has interesting implications, which we describe in later
sections.
Let f be the original prediction model to be explained and g the explanation model. Here, we focus
on local methods designed to explain a prediction f (x) based on a single input x, as proposed in
LIME [5]. Explanation models often use simplified inputs x0 that map to the original inputs through a
mapping function x = hx (x0 ). Local methods try to ensure g(z 0 ) ≈ f (hx (z 0 )) whenever z 0 ≈ x0 .
(Note that hx (x0 ) = x even though x0 may contain less information than x because hx is specific to
the current input x.)
Definition 1 Additive feature attribution methods have an explanation model that is a linear
function of binary variables:
g(z′) = φ_0 + Σ_{i=1}^{M} φ_i z′_i   (1)
where z 0 ∈ {0, 1}M , M is the number of simplified input features, and φi ∈ R.
Methods with explanation models matching Definition 1 attribute an effect φi to each feature, and
summing the effects of all feature attributions approximates the output f (x) of the original model.
Many current methods match Definition 1, several of which are discussed below.
2.1 LIME
The LIME method interprets individual model predictions based on locally approximating the model
around a given prediction [5]. The local linear explanation model that LIME uses adheres to Equation
1 exactly and is thus an additive feature attribution method. LIME refers to simplified inputs x0 as
“interpretable inputs,” and the mapping x = hx (x0 ) converts a binary vector of interpretable inputs
into the original input space. Different types of hx mappings are used for different input spaces. For
bag of words text features, hx converts a vector of 1’s or 0’s (present or not) into the original word
count if the simplified input is one, or zero if the simplified input is zero. For images, hx treats the
image as a set of super pixels; it then maps 1 to leaving the super pixel as its original value and 0
to replacing the super pixel with an average of neighboring pixels (this is meant to represent being
missing).
To find φ, LIME minimizes the following objective function:
ξ = argmin_{g∈G} L(f, g, π_{x′}) + Ω(g).   (2)
Faithfulness of the explanation model g(z 0 ) to the original model f (hx (z 0 )) is enforced through
the loss L over a set of samples in the simplified input space weighted by the local kernel πx0 . Ω
penalizes the complexity of g. Since in LIME g follows Equation 1 and L is a squared loss, Equation
2 can be solved using penalized linear regression.
2.2 DeepLIFT
DeepLIFT was recently proposed as a recursive prediction explanation method for deep learning
[8, 7]. It attributes to each input xi a value C∆xi ∆y that represents the effect of that input being set
to a reference value as opposed to its original value. This means that for DeepLIFT, the mapping
x = hx (x0 ) converts binary values into the original inputs, where 1 indicates that an input takes its
original value, and 0 indicates that it takes the reference value. The reference value, though chosen
by the user, represents a typical uninformative background value for the feature.
DeepLIFT uses a "summation-to-delta" property that states:
Σ_{i=1}^{n} C_{Δx_i Δo} = Δo,   (3)
where o = f (x) is the model output, ∆o = f (x) − f (r), ∆xi = xi − ri , and r is the reference input.
If we let φi = C∆xi ∆o and φ0 = f (r), then DeepLIFT’s explanation model matches Equation 1 and
is thus another additive feature attribution method.
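As a quick sanity check of the summation-to-delta property, the sketch below (ours, not code from the paper) uses a purely linear model, for which the contributions C_{Δx_i Δo} = w_i(x_i − r_i) are exact; the weights, input, and reference value are illustrative assumptions.

```python
import numpy as np

# Toy linear model f(x) = w . x + b; for a linear model, DeepLIFT-style
# contributions C_i = w_i * (x_i - r_i) satisfy summation-to-delta exactly.
w = np.array([2.0, -1.0, 0.5])   # illustrative weights (assumption)
b = 0.3
f = lambda x: float(w @ x + b)

x = np.array([1.0, 2.0, -1.0])   # input to explain
r = np.array([0.0, 0.0, 0.0])    # user-chosen reference (baseline) input

C = w * (x - r)                  # per-feature contributions C_{Δx_i Δo}
delta_o = f(x) - f(r)            # Δo = f(x) - f(r)
print(C, C.sum(), delta_o)       # C.sum() equals delta_o (Equation 3)
```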
2.3 Layer-Wise Relevance Propagation
The layer-wise relevance propagation method interprets the predictions of deep networks [1]. As
noted by Shrikumar et al., this method is equivalent to DeepLIFT with the reference activations of all
neurons fixed to zero. Thus, x = hx (x0 ) converts binary values into the original input space, where
1 means that an input takes its original value, and 0 means an input takes the 0 value. Layer-wise
relevance propagation’s explanation model, like DeepLIFT’s, matches Equation 1.
2.4 Classic Shapley Value Estimation
Three previous methods use classic equations from cooperative game theory to compute explanations
of model predictions: Shapley regression values [4], Shapley sampling values [9], and Quantitative
Input Influence [3].
Shapley regression values are feature importances for linear models in the presence of multicollinearity.
This method requires retraining the model on all feature subsets S ⊆ F , where F is the set of all
features. It assigns an importance value to each feature that represents the effect on the model
prediction of including that feature. To compute this effect, a model fS∪{i} is trained with that feature
present, and another model fS is trained with the feature withheld. Then, predictions from the two
models are compared on the current input fS∪{i} (xS∪{i} ) − fS (xS ), where xS represents the values
of the input features in the set S. Since the effect of withholding a feature depends on other features
in the model, the preceding differences are computed for all possible subsets S ⊆ F \ {i}. The
Shapley values are then computed and used as feature attributions. They are a weighted average of all
possible differences:
φ_i = Σ_{S⊆F\{i}} [|S|!(|F| − |S| − 1)!/|F|!] [f_{S∪{i}}(x_{S∪{i}}) − f_S(x_S)].   (4)
For Shapley regression values, hx maps 1 or 0 to the original input space, where 1 indicates the input
is included in the model, and 0 indicates exclusion from the model. If we let φ0 = f∅ (∅), then the
Shapley regression values match Equation 1 and are hence an additive feature attribution method.
Shapley sampling values are meant to explain any model by: (1) applying sampling approximations
to Equation 4, and (2) approximating the effect of removing a variable from the model by integrating
over samples from the training dataset. This eliminates the need to retrain the model and allows fewer
than 2|F | differences to be computed. Since the explanation model form of Shapley sampling values
is the same as that for Shapley regression values, it is also an additive feature attribution method.
Quantitative input influence is a broader framework that addresses more than feature attributions.
However, as part of its method it independently proposes a sampling approximation to Shapley values
that is nearly identical to Shapley sampling values. It is thus another additive feature attribution
method.
3 Simple Properties Uniquely Determine Additive Feature Attributions
A surprising attribute of the class of additive feature attribution methods is the presence of a single
unique solution in this class with three desirable properties (described below). While these properties
are familiar to the classical Shapley value estimation methods, they were previously unknown for
other additive feature attribution methods.
The first desirable property is local accuracy. When approximating the original model f for a specific
input x, local accuracy requires the explanation model to at least match the output of f for the
simplified input x0 (which corresponds to the original input x).
Property 1 (Local accuracy)
f(x) = g(x′) = φ_0 + Σ_{i=1}^{M} φ_i x′_i   (5)
The explanation model g(x0 ) matches the original model f (x) when x = hx (x0 ).
The second property is missingness. If the simplified inputs represent feature presence, then missingness requires features missing in the original input to have no impact. All of the methods described in
Section 2 obey the missingness property.
Property 2 (Missingness)
x′_i = 0 ⟹ φ_i = 0   (6)
Missingness constrains features where x′_i = 0 to have no attributed impact.
The third property is consistency. Consistency states that if a model changes so that some simplified
input’s contribution increases or stays the same regardless of the other inputs, that input’s attribution
should not decrease.
Property 3 (Consistency) Let f_x(z′) = f(h_x(z′)) and z′\i denote setting z′_i = 0. For any two
models f and f′, if
f′_x(z′) − f′_x(z′\i) ≥ f_x(z′) − f_x(z′\i)   (7)
for all inputs z′ ∈ {0, 1}^M, then φ_i(f′, x) ≥ φ_i(f, x).
Theorem 1 Only one possible explanation model g follows Definition 1 and satisfies Properties 1, 2,
and 3:
φ_i(f, x) = Σ_{z′⊆x′} [|z′|!(M − |z′| − 1)!/M!] [f_x(z′) − f_x(z′\i)]   (8)
where |z′| is the number of non-zero entries in z′, and z′ ⊆ x′ represents all z′ vectors whose
non-zero entries are a subset of the non-zero entries in x′.
Theorem 1 follows from combined cooperative game theory results, where the values φi are known
as Shapley values [6]. Young (1985) demonstrated that Shapley values are the only set of values
that satisfy three axioms similar to Property 1, Property 3, and a final property that we show to be
redundant in this setting (see Supplementary Material). Property 2 is required to adapt the Shapley
proofs to the class of additive feature attribution methods.
Under Properties 1-3, for a given simplified input mapping hx , Theorem 1 shows that there is only one
possible additive feature attribution method. This result implies that methods not based on Shapley
values violate local accuracy and/or consistency (methods in Section 2 already respect missingness).
The following section proposes a unified approach that improves previous methods, preventing them
from unintentionally violating Properties 1 and 3.
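To make Equation 8 concrete, the following brute-force sketch (our illustration, not the authors' code) enumerates all subsets and maps absent features to an assumed background value (their means); the toy model and data are assumptions.

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(f, x, background_mean):
    """Brute-force Shapley values of f at x (Equation 8), with absent
    features fixed to a background value instead of a true conditional
    expectation (a simplifying assumption)."""
    M = len(x)
    def fx(S):                      # evaluate f with only the features in S present
        z = background_mean.copy()
        z[list(S)] = x[list(S)]
        return f(z)
    phi = np.zeros(M)
    for i in range(M):
        others = [j for j in range(M) if j != i]
        for k in range(M):
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(M - len(S) - 1) / factorial(M)
                phi[i] += weight * (fx(S + (i,)) - fx(S))
    return phi

# Illustrative model and data (assumptions)
f = lambda z: z[0] * z[1] + 3.0 * z[2]
x = np.array([1.0, 2.0, 0.5])
mean = np.zeros(3)
phi = shapley_values(f, x, mean)
print(phi, phi.sum(), f(x) - f(mean))   # attributions sum to f(x) minus the baseline output
```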
4 SHAP (SHapley Additive exPlanation) Values
We propose SHAP values as a unified measure of feature importance. These are the Shapley values
of a conditional expectation function of the original model; thus, they are the solution to Equation
Figure 1: SHAP (SHapley Additive exPlanation) values attribute to each feature the change in the
expected model prediction when conditioning on that feature. They explain how to get from the
base value E[f (z)] that would be predicted if we did not know any features to the current output
f (x). This diagram shows a single ordering. When the model is non-linear or the input features are
not independent, however, the order in which features are added to the expectation matters, and the
SHAP values arise from averaging the φi values across all possible orderings.
8, where fx (z 0 ) = f (hx (z 0 )) = E[f (z) | zS ], and S is the set of non-zero indexes in z 0 (Figure 1).
Based on Sections 2 and 3, SHAP values provide the unique additive feature importance measure that
adheres to Properties 1-3 and uses conditional expectations to define simplified inputs. Implicit in this
definition of SHAP values is a simplified input mapping, hx (z 0 ) = zS , where zS has missing values
for features not in the set S. Since most models cannot handle arbitrary patterns of missing input
values, we approximate f (zS ) with E[f (z) | zS ]. This definition of SHAP values is designed to
closely align with the Shapley regression, Shapley sampling, and quantitative input influence feature
attributions, while also allowing for connections with LIME, DeepLIFT, and layer-wise relevance
propagation.
The exact computation of SHAP values is challenging. However, by combining insights from current
additive feature attribution methods, we can approximate them. We describe two model-agnostic
approximation methods, one that is already known (Shapley sampling values) and another that is
novel (Kernel SHAP). We also describe four model-type-specific approximation methods, two of
which are novel (Max SHAP, Deep SHAP). When using these methods, feature independence and
model linearity are two optional assumptions simplifying the computation of the expected values
(note that S̄ is the set of features not in S):
f(h_x(z′)) = E[f(z) | z_S]        (SHAP explanation model simplified input mapping)   (9)
           = E_{z_S̄ | z_S}[f(z)]   (expectation over z_S̄ | z_S)   (10)
           ≈ E_{z_S̄}[f(z)]         (assume feature independence, as in [9, 5, 7, 3])   (11)
           ≈ f([z_S, E[z_S̄]]).     (assume model linearity)   (12)
4.1 Model-Agnostic Approximations
If we assume feature independence when approximating conditional expectations (Equation 11), as
in [9, 5, 7, 3], then SHAP values can be estimated directly using the Shapley sampling values method
[9] or equivalently the Quantitative Input Influence method [3]. These methods use a sampling
approximation of a permutation version of the classic Shapley value equations (Equation 8). Separate
sampling estimates are performed for each feature attribution. While reasonable to compute for a
small number of inputs, the Kernel SHAP method described next requires fewer evaluations of the
original model to obtain similar approximation accuracy (Section 5).
Kernel SHAP (Linear LIME + Shapley values)
Linear LIME uses a linear explanation model to locally approximate f , where local is measured in the
simplified binary input space. At first glance, the regression formulation of LIME in Equation 2 seems
very different from the classical Shapley value formulation of Equation 8. However, since linear
LIME is an additive feature attribution method, we know the Shapley values are the only possible
solution to Equation 2 that satisfies Properties 1-3 – local accuracy, missingness and consistency. A
natural question to pose is whether the solution to Equation 2 recovers these values. The answer
depends on the choice of loss function L, weighting kernel πx0 and regularization term Ω. The LIME
choices for these parameters are made heuristically; using these choices, Equation 2 does not recover
the Shapley values. One consequence is that local accuracy and/or consistency are violated, which in
turn leads to unintuitive behavior in certain circumstances (see Section 5).
Below we show how to avoid heuristically choosing the parameters in Equation 2 and how to find the
loss function L, weighting kernel πx0 , and regularization term Ω that recover the Shapley values.
Theorem 2 (Shapley kernel) Under Definition 1, the specific forms of πx0 , L, and Ω that make
solutions of Equation 2 consistent with Properties 1 through 3 are:
Ω(g) = 0,
π_{x′}(z′) = (M − 1) / [(M choose |z′|) |z′| (M − |z′|)],
L(f, g, π_{x′}) = Σ_{z′∈Z} [f(h_x⁻¹(z′)) − g(z′)]² π_{x′}(z′),
where |z 0 | is the number of non-zero elements in z 0 .
The proof of Theorem 2 is shown in the Supplementary Material.
It is important to note that π_{x′}(z′) = ∞ when |z′| ∈ {0, M}, which enforces φ_0 = f_x(∅) and
f(x) = Σ_{i=0}^{M} φ_i. In practice, these infinite weights can be avoided during optimization by analytically
eliminating two variables using these constraints.
Since g(z 0 ) in Theorem 2 is assumed to follow a linear form, and L is a squared loss, Equation 2
can still be solved using linear regression. As a consequence, the Shapley values from game theory
can be computed using weighted linear regression.2 Since LIME uses a simplified input mapping
that is equivalent to the approximation of the SHAP mapping given in Equation 12, this enables
regression-based, model-agnostic estimation of SHAP values. Jointly estimating all SHAP values
using regression provides better sample efficiency than the direct use of classical Shapley equations
(see Section 5).
The intuitive connection between linear regression and Shapley values is that Equation 8 is a difference
of means. Since the mean is also the best least squares point estimate for a set of data points, it is
natural to search for a weighting kernel that causes linear least squares regression to recapitulate
the Shapley values. This leads to a kernel that distinctly differs from previous heuristically chosen
kernels (Figure 2A).
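The sketch below (ours) illustrates this regression view: it evaluates the model on all coalitions, weights them with the Shapley kernel of Theorem 2, and solves a weighted least squares problem. Instead of analytically eliminating two variables, the infinite weights at |z′| ∈ {0, M} are approximated by large finite ones, and absent features are replaced by assumed background means; the toy model is an assumption.

```python
import numpy as np
from itertools import product
from math import comb

def kernel_shap(f, x, background_mean):
    """Kernel SHAP sketch: weighted least squares of Equation 2 with the
    Shapley kernel (Theorem 2). Absent features are set to their means."""
    M = len(x)
    masks = np.array(list(product([0, 1], repeat=M)))
    def weight(s):
        # Infinite weights at |z'| in {0, M} are approximated by a large constant
        # rather than eliminating two variables analytically.
        if s == 0 or s == M:
            return 1e6
        return (M - 1) / (comb(M, s) * s * (M - s))
    w = np.array([weight(int(m.sum())) for m in masks])
    y = np.array([f(np.where(m == 1, x, background_mean)) for m in masks])
    X = np.hstack([np.ones((len(masks), 1)), masks])         # design matrix [1, z']
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)          # [phi_0, phi_1, ..., phi_M]
    return beta

f = lambda z: z[0] * z[1] + 3.0 * z[2]                        # illustrative model (assumption)
print(kernel_shap(f, np.array([1.0, 2.0, 0.5]), np.zeros(3)))
```

With exact constraints, Theorem 2 guarantees the regression solution equals the Shapley values; the large-weight approximation above recovers them to numerical precision for this toy example.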
4.2 Model-Specific Approximations
While Kernel SHAP improves the sample efficiency of model-agnostic estimations of SHAP values, by
restricting our attention to specific model types, we can develop faster model-specific approximation
methods.
Linear SHAP
For linear models, if we assume input feature independence (Equation 11), SHAP values can be
approximated directly from the model’s weight coefficients.
Corollary 1 (Linear SHAP) Given a linear model f(x) = Σ_{j=1}^{M} w_j x_j + b: φ_0(f, x) = b and
φ_i(f, x) = w_i (x_i − E[x_i])
This follows from Theorem 2 and Equation 11, and it has been previously noted by Štrumbelj and
Kononenko [9].
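A minimal sketch of Corollary 1 (ours; the weights, bias, and feature means are assumed values): for a linear model under feature independence, the SHAP values reduce to the centred inputs scaled by the weights.

```python
import numpy as np

w = np.array([1.5, -2.0, 0.7])       # assumed linear model weights
b = 0.1
x = np.array([2.0, 0.5, 1.0])        # input to explain
x_mean = np.array([1.0, 1.0, 1.0])   # assumed feature means

phi = w * (x - x_mean)               # Linear SHAP values (Corollary 1)
base = b + w @ x_mean                # E[f(z)] under feature independence
print(np.isclose(base + phi.sum(), w @ x + b))  # attributions explain f(x) - E[f(z)]
```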
Low-Order SHAP
Since linear regression using Theorem 2 has complexity O(2M + M 3 ), it is efficient for small values
of M if we choose an approximation of the conditional expectations (Equation 11 or 12).
2 During the preparation of this manuscript we discovered this parallels an equivalent constrained quadratic minimization formulation of Shapley values proposed in econometrics [2].
Figure 2: (A) The Shapley kernel weighting is symmetric when all possible z′ vectors are ordered
by cardinality; there are 2^15 vectors in this example. This is distinctly different from previous
heuristically chosen kernels. (B) Compositional models such as deep neural networks are comprised
of many simple components. Given analytic solutions for the Shapley values of the components, fast
approximations for the full model can be made using DeepLIFT’s style of back-propagation.
Max SHAP
Using a permutation formulation of Shapley values, we can calculate the probability that each input
will increase the maximum value over every other input. Doing this on a sorted order of input values
lets us compute the Shapley values of a max function with M inputs in O(M 2 ) time instead of
O(M 2M ). See Supplementary Material for the full algorithm.
Deep SHAP (DeepLIFT + Shapley values)
While Kernel SHAP can be used on any model, including deep models, it is natural to ask whether
there is a way to leverage extra knowledge about the compositional nature of deep networks to improve
computational performance. We find an answer to this question through a previously unappreciated
connection between Shapley values and DeepLIFT [8]. If we interpret the reference value in Equation
3 as representing E[x] in Equation 12, then DeepLIFT approximates SHAP values assuming that
the input features are independent of one another and the deep model is linear. DeepLIFT uses a
linear composition rule, which is equivalent to linearizing the non-linear components of a neural
network. Its back-propagation rules defining how each component is linearized are intuitive but were
heuristically chosen. Since DeepLIFT is an additive feature attribution method that satisfies local
accuracy and missingness, we know that Shapley values represent the only attribution values that
satisfy consistency. This motivates our adapting DeepLIFT to become a compositional approximation
of SHAP values, leading to Deep SHAP.
Deep SHAP combines SHAP values computed for smaller components of the network into SHAP
values for the whole network. It does so by recursively passing DeepLIFT’s multipliers, now defined
in terms of SHAP values, backwards through the network as in Figure 2B:
m_{x_j f_3} = φ_j(f_3, x)/(x_j − E[x_j])   (13)
m_{y_i f_j} = φ_i(f_j, y)/(y_i − E[y_i])   ∀ j ∈ {1, 2}   (14)
m_{y_i f_3} = Σ_{j=1}^{2} m_{y_i f_j} m_{x_j f_3}   (chain rule)   (15)
φ_i(f_3, y) ≈ m_{y_i f_3} (y_i − E[y_i])   (linear approximation)   (16)
Since the SHAP values for the simple network components can be efficiently solved analytically
if they are linear, max pooling, or an activation function with just one input, this composition
rule enables a fast approximation of values for the whole model. Deep SHAP avoids the need to
heuristically choose ways to linearize components. Instead, it derives an effective linearization from
the SHAP values computed for each component. The max function offers one example where this
leads to improved attributions (see Section 5).
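As a toy illustration of this composition (ours, not the authors' implementation), consider components that happen to be linear; each component's SHAP values, and hence its multipliers, are then exact, and chaining the multipliers reproduces the SHAP values of the composed model. The matrices, weights, and means below are assumptions.

```python
import numpy as np

# Inputs y feed two linear components f1, f2; their outputs x feed a linear f3.
W1 = np.array([[2.0, 1.0],      # x1 = f1(y) = 2*y1 + 1*y2
               [0.5, -1.0]])    # x2 = f2(y) = 0.5*y1 - 1*y2
w3 = np.array([3.0, 4.0])       # f3(x) = 3*x1 + 4*x2

y = np.array([1.0, 2.0])
y_mean = np.array([0.5, 0.5])

m_x_f3 = w3                      # multipliers of f3 w.r.t. its inputs (Linear SHAP)
m_y_f = W1                       # multipliers of f1, f2 w.r.t. y (rows index components)
m_y_f3 = m_y_f.T @ m_x_f3        # chain rule, Equation (15)
phi = m_y_f3 * (y - y_mean)      # Equation (16)

exact = (w3 @ W1) * (y - y_mean) # SHAP values of the composed linear model
print(np.allclose(phi, exact))   # True: the composition is exact in this linear toy case
```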
Figure 3: Comparison of three additive feature attribution methods: Kernel SHAP (using a debiased
lasso), Shapley sampling values, and LIME (using the open source implementation). Feature
importance estimates are shown for one feature in two models as the number of evaluations of the
original model function increases. The 10th and 90th percentiles are shown for 200 replicate estimates
at each sample size. (A) A decision tree model using all 10 input features is explained for a single
input. (B) A decision tree using only 3 of 100 input features is explained for a single input.
5 Computational and User Study Experiments
We evaluated the benefits of SHAP values using the Kernel SHAP and Deep SHAP approximation
methods. First, we compared the computational efficiency and accuracy of Kernel SHAP vs. LIME
and Shapley sampling values. Second, we designed user studies to compare SHAP values with
alternative feature importance allocations represented by DeepLIFT and LIME. As might be expected,
SHAP values prove more consistent with human intuition than other methods that fail to meet
Properties 1-3 (Section 2). Finally, we use MNIST digit image classification to compare SHAP with
DeepLIFT and LIME.
5.1 Computational Efficiency
Theorem 2 connects Shapley values from game theory with weighted linear regression. Kernel SHAP
uses this connection to compute feature importance. This leads to more accurate estimates with fewer
evaluations of the original model than previous sampling-based estimates of Equation 8, particularly
when regularization is added to the linear model (Figure 3). Comparing Shapley sampling, SHAP, and
LIME on both dense and sparse decision tree models illustrates both the improved sample efficiency
of Kernel SHAP and that values from LIME can differ significantly from SHAP values that satisfy
local accuracy and consistency.
5.2 Consistency with Human Intuition
Theorem 1 provides a strong incentive for all additive feature attribution methods to use SHAP
values. Both LIME and DeepLIFT, as originally demonstrated, compute different feature importance
values. To validate the importance of Theorem 1, we compared explanations from LIME, DeepLIFT,
and SHAP with user explanations of simple models (using Amazon Mechanical Turk). Our testing
assumes that good model explanations should be consistent with explanations from humans who
understand that model.
We compared LIME, DeepLIFT, and SHAP with human explanations for two settings. The first
setting used a sickness score that was higher when only one of two symptoms was present (Figure 4A).
The second used a max allocation problem to which DeepLIFT can be applied. Participants were told
a short story about how three men made money based on the maximum score any of them achieved
(Figure 4B). In both cases, participants were asked to assign credit for the output (the sickness score
or money won) among the inputs (i.e., symptoms or players). We found a much stronger agreement
between human explanations and SHAP than with other methods. SHAP’s improved performance for
max functions addresses the open problem of max pooling functions in DeepLIFT [7].
5.3 Explaining Class Differences
As discussed in Section 4.2, DeepLIFT’s compositional approach suggests a compositional approximation of SHAP values (Deep SHAP). These insights, in turn, improve DeepLIFT, and a new version
Figure 4: Human feature impact estimates are shown as the most common explanation given among
30 (A) and 52 (B) random individuals, respectively. (A) Feature attributions for a model output value
(sickness score) of 2. The model output is 2 when fever and cough are both present, 5 when only
one of fever or cough is present, and 0 otherwise. (B) Attributions of profit among three men, given
according to the maximum number of questions any man got right. The first man got 5 questions
right, the second 4 questions, and the third got none right, so the profit is $5.
Figure 5: Explaining the output of a convolutional network trained on the MNIST digit dataset. Orig.
DeepLIFT has no explicit Shapley approximations, while New DeepLIFT seeks to better approximate
Shapley values. (A) Red areas increase the probability of that class, and blue areas decrease the
probability. Masked removes pixels in order to go from 8 to 3. (B) The change in log odds when
masking over 20 random images supports the use of better estimates of SHAP values.
includes updates to better match Shapley values [7]. Figure 5 extends DeepLIFT’s convolutional
network example to highlight the increased performance of estimates that are closer to SHAP values.
The pre-trained model and Figure 5 example are the same as those used in [7], with inputs normalized
between 0 and 1. Two convolution layers and 2 dense layers are followed by a 10-way softmax
output layer. Both DeepLIFT versions explain a normalized version of the linear layer, while SHAP
(computed using Kernel SHAP) and LIME explain the model’s output. SHAP and LIME were both
run with 50k samples (Supplementary Figure 1); to improve performance, LIME was modified to use
single pixel segmentation over the digit pixels. To match [7], we masked 20% of the pixels chosen to
switch the predicted class from 8 to 3 according to the feature attribution given by each method.
6 Conclusion
The growing tension between the accuracy and interpretability of model predictions has motivated
the development of methods that help users interpret predictions. The SHAP framework identifies
the class of additive feature importance methods (which includes six previous methods) and shows
there is a unique solution in this class that adheres to desirable properties. The thread of unity that
SHAP weaves through the literature is an encouraging sign that common principles about model
interpretation can inform the development of future methods.
We presented several different estimation methods for SHAP values, along with proofs and experiments showing that these values are desirable. Promising next steps involve developing faster
model-type-specific estimation methods that make fewer assumptions, integrating work on estimating
interaction effects from game theory, and defining new explanation model classes.
Acknowledgements
This work was supported by a National Science Foundation (NSF) DBI-135589, NSF CAREER
DBI-155230, American Cancer Society 127332-RSG-15-097-01-TBG, National Institute of Health
(NIH) AG049196, and NSF Graduate Research Fellowship. We would like to thank Marco Ribeiro,
Erik Štrumbelj, Avanti Shrikumar, Yair Zick, the Lee Lab, and the NIPS reviewers for feedback that
has significantly improved this work.
References
[1] Sebastian Bach et al. “On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation”. In: PloS One 10.7 (2015), e0130140.
[2] A. Charnes et al. “Extremal principle solutions of games in characteristic function form: core, Chebychev and Shapley value generalizations”. In: Econometrics of Planning and Efficiency 11 (1988), pp. 123–133.
[3] Anupam Datta, Shayak Sen, and Yair Zick. “Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems”. In: Security and Privacy (SP), 2016 IEEE Symposium on. IEEE. 2016, pp. 598–617.
[4] Stan Lipovetsky and Michael Conklin. “Analysis of regression in game theory approach”. In: Applied Stochastic Models in Business and Industry 17.4 (2001), pp. 319–330.
[5] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. “Why should I trust you?: Explaining the predictions of any classifier”. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM. 2016, pp. 1135–1144.
[6] Lloyd S. Shapley. “A value for n-person games”. In: Contributions to the Theory of Games 2.28 (1953), pp. 307–317.
[7] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. “Learning Important Features Through Propagating Activation Differences”. In: arXiv preprint arXiv:1704.02685 (2017).
[8] Avanti Shrikumar et al. “Not Just a Black Box: Learning Important Features Through Propagating Activation Differences”. In: arXiv preprint arXiv:1605.01713 (2016).
[9] Erik Štrumbelj and Igor Kononenko. “Explaining prediction models and individual predictions with feature contributions”. In: Knowledge and Information Systems 41.3 (2014), pp. 647–665.
[10] H. Peyton Young. “Monotonic solutions of cooperative games”. In: International Journal of Game Theory 14.2 (1985), pp. 65–72.
| 2 |
Trainable ISTA for Sparse Signal Recovery
Daisuke Ito∗ , Satoshi Takabe∗† , and Tadashi Wadayama∗
∗ Nagoya Institute of Technology, Gokiso, Nagoya, Aichi, 466-8555, Japan, [email protected], {s_takabe, wadayama}@nitech.ac.jp
† RIKEN Center for Advanced Intelligence Project, Nihonbashi, Chuo-ku, Tokyo, 103-0027, Japan
arXiv:1801.01978v1 [] 6 Jan 2018
Abstract—In this paper, we propose a novel sparse signal
recovery algorithm called Trainable ISTA (TISTA). The proposed
algorithm consists of two estimation units such as a linear
estimation unit and a minimum mean squared error (MMSE)
estimator-based shrinkage unit. The estimated error variance
required in the MMSE shrinkage unit is precisely estimated
from a tentative estimate of the original signal. The remarkable
feature of the proposed scheme is that TISTA includes adjustable
variables controlling a step size and the error variance for the
MMSE shrinkage. The variables are adjusted by standard deep
learning techniques. The number of trainable variables of TISTA
is equal to the number of iteration rounds and it is much
smaller than that of known learnable sparse signal recovery
algorithms. This feature leads to highly stable and fast training
processes of TISTA. Computer experiments show that TISTA is
applicable to various classes of sensing matrices such as Gaussian
matrices, binary matrices and matrices with large condition
numbers. Numerical results also demonstrate that TISTA shows
significantly faster convergence than those of AMP and LISTA
in many cases.
I. INTRODUCTION
The basic problem setup for compressed sensing [1], [2] is
as follows. A real vector x ∈ RN represents the source sparse
signal. It is assumed that we cannot directly observe x but we
observe y = Ax+w where A ∈ RM ×N (N ≥ M ) is a sensing
matrix and w ∈ RM is a Gaussian noise vector. Our goal is
to estimate x from y as correct as possible.
For a number of sparse reconstruction algorithms [3], Lasso
formulation [4] is fairly common for solving the sparse signal
recovery problems. In Lasso formulation, the original problem
is recast as a convex optimization problem for minimizing (1/2)||y − Ax||_2² + λ||x||_1. The regularization term λ||x||_1 promotes sparseness of a reconstruction vector where λ is the
regularization constant. In order to solve the Lasso problem
efficiently, a number of algorithms have been developed [5].
Iterative Shrinkage Thresholding Algorithm (ISTA) [6], [7]
is one of the most known algorithms for solving the Lasso
problem. ISTA is an iterative algorithm comprising of two
processes, i.e., a linear estimation process and a shrinkage
process based on a soft thresholding function. ISTA can be
seen as a proximal gradient descent algorithm [8] and it can
be directly derived from Lasso formulation.
Approximate Message Passing (AMP) [9], [10] which is a
variant of approximate belief propagation, shows much faster
convergence than that of ISTA in general. The remarkable
feature of AMP is that its asymptotic behavior is completely
described by the state evolution equations [11]. AMP is
derived based on the assumption that the sensing matrices
consist of i.i.d. Gaussian distributed components. Recently,
Ma and Ping proposed Orthogonal AMP (OAMP) [13] that can
handle various classes of sensing matrices including unitary invariant matrices. Rangan et al. proposed VAMP [14] for right-rotationally invariant matrices and provided a theoretical justification for its state evolution. Independently, Takeuchi [15]
also gave a rigorous analysis for a sparse recovery algorithm
for unitary invariant measurements based on the expectation
propagation framework.
The advent of recent powerful neural networks (NN) triggered explosive spread of research activities and applications
on deep neural networks (DNN) [16]. The DNN have found
a number of practical applications such as image recognition [17], [18], speech recognition [19] and robotics because
of their outstanding performance compared with the traditional methods. The advancement of DNN are also giving
impact on design of algorithms for communications and signal
processing [20], [21]. By unfolding an iterative process of a
sparse signal recovery algorithm, we can obtain a signal-flow
graph. The signal-flow graph includes trainable variables that
can be tuned with a supervised learning method, i.e., standard
deep learning techniques such as stochastic gradient descent
algorithms based on back propagation and mini-batches can
be used to adjust the trainable parameters. Gregor and LeCun
presented Learned ISTA (LISTA) [22] that employs learnable
threshold variables for a shrinkage function. It is shown that
LISTA yields recovery performance superior to that of the
original ISTA. Borgerding et al. also presented variants of
AMP and VAMP with learnable capability [23] [24].
The goal of this work is to propose a simple sparse recovery
algorithm based on deep learning techniques. The proposed
algorithm, called Trainable ISTA (TISTA), borrows the basic
structure from ISTA, and an estimator of the squared error
between true signals and tentative estimations, i.e., the error
variance estimator, from OAMP. Thus, TISTA comprises
three parts: a linear estimator, a minimum mean
squared error (MMSE) estimator-based shrinkage function,
and the error variance estimator. The linear estimator of TISTA
includes trainable variables that can be adjusted via deep
learning techniques.
II. BRIEF REVIEW OF KNOWN RECOVERY ALGORITHMS
As preparation for describing the details of the proposed algorithm, several known sparse recovery algorithms are briefly
reviewed in this section. In the following, the observation vector is assumed to be y = Ax+w where A ∈ RM ×N (N ≥ M )
and x ∈ RN . Each entry of additive noise vector w ∈ RM
obeys zero mean Gaussian distribution with variance σ 2 .
A. ISTA
ISTA is a well-known sparse recovery algorithm [6] defined
by the following simple recursion: rt = st + βAT (y −
Ast ), st+1 = η(rt ; τ ), where β represents a step size and
η is the soft thresholding function defined by η(r; τ ) =
sign(r) max{|r| − τ, 0}. The parameter τ (> 0) indicates the
threshold value. After T -iterations, the estimate x̂ = sT of
the original sparse signal x is obtained. The initial value is
assumed to be s0 = 0. In order to have convergence, the
step parameter should be carefully determined [6]. Several
accelerated methods for ISTA using a momentum term have
been proposed [25], [26]. Since the proximal operator of L1 regularization term ||x||1 is the soft thresholding function,
ISTA can be seen as a proximal gradient descent algorithm [3].
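A minimal numpy sketch of the ISTA recursion just described (ours; the threshold τ and the problem sizes are placeholder choices, and the step size is set from the spectral norm of A to keep the iteration stable).

```python
import numpy as np

def soft_threshold(r, tau):
    """eta(r; tau) = sign(r) * max(|r| - tau, 0)"""
    return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

def ista(A, y, beta, tau, T):
    """Plain ISTA: linear estimation step followed by soft thresholding."""
    s = np.zeros(A.shape[1])
    for _ in range(T):
        r = s + beta * A.T @ (y - A @ s)
        s = soft_threshold(r, tau)
    return s

# Illustrative problem (assumed sizes and parameters)
rng = np.random.default_rng(0)
M, N = 25, 50
A = rng.normal(0, 1 / np.sqrt(M), (M, N))
x = np.zeros(N)
x[rng.choice(N, 5, replace=False)] = rng.normal(size=5)
y = A @ x + 0.01 * rng.normal(size=M)
beta = 1.0 / np.linalg.norm(A, 2) ** 2        # step size below 1/Lipschitz constant
x_hat = ista(A, y, beta, tau=0.01, T=200)
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```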
B. AMP
AMP [10] is defined by the following recursion:
r_t = y − A s_t + b_t r_{t−1},   (1)
s_{t+1} = η(s_t + Aᵀ r_t; τ_t),   (2)
b_t = (1/M) ||s_t||_0,   τ_t = (θ/√M) ||r_t||_2,   (3)
and it provides the final estimate x̂ = sT . It is assumed that
each entry of the sensing matrix A is generated according to
the Gaussian distribution N(0, 1/M), i.e., Gaussian distribution with mean zero and variance 1/M. The recursive formula
of AMP seems similar to that of ISTA at a glance but there
are several critical differences. Due to the Onsager correction
term bt rt−1 in (1), the output of the linear estimator becomes
statistically decoupled and an error between each output signal
from the linear estimator and the true signal behaves as a white
Gaussian random variable in the large system limit. This enables us
to use a scalar recursion called the state evolution to track the
evolution of the error variances. Another difference between
ISTA and AMP is the estimator of τt in (3), which is used as
the threshold value for the shrinkage function (2). In [10], it is
reported that AMP shows much faster convergence than that
of ISTA if the sensing matrix satisfies the above condition.
However, it is known that AMP cannot provide excellent
recovery performance for sensing matrices violating the above
condition such as non-Gaussian sensing matrices, Gaussian
matrices with large variance, Gaussian matrices with non-zero
mean, and matrices with large condition numbers [12].
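For reference, a compact sketch of the AMP recursion (1)–(3) (our simplified illustration; it assumes the conventional parametrization in which the hyperparameter θ scales the threshold, and the demo problem below reuses placeholder sizes).

```python
import numpy as np

def amp(A, y, theta, T):
    """AMP with soft thresholding: Onsager-corrected residual (1),
    shrinkage (2), and threshold estimate (3)."""
    M, N = A.shape
    s = np.zeros(N)
    r = np.zeros(M)
    for _ in range(T):
        b = np.count_nonzero(s) / M                         # Onsager coefficient
        r = y - A @ s + b * r                               # (1)
        tau = theta * np.linalg.norm(r) / np.sqrt(M)        # (3)
        u = s + A.T @ r
        s = np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)   # (2)
    return s

# Small demo on an i.i.d. Gaussian sensing matrix (assumed sizes)
rng = np.random.default_rng(0)
M, N = 25, 50
A = rng.normal(0, 1 / np.sqrt(M), (M, N))
x = np.zeros(N)
x[rng.choice(N, 5, replace=False)] = rng.normal(size=5)
y = A @ x + 0.01 * rng.normal(size=M)
print(np.linalg.norm(amp(A, y, theta=1.14, T=30) - x) / np.linalg.norm(x))
```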
C. OAMP
OAMP [13] is defined by the following recursive formula:
r_t = s_t + W_t (y − A s_t),   (4)
s_{t+1} = η_df(r_t; τ_t),   (5)
v_t² = max{ (||y − A s_t||_2² − M σ²)/trace(AᵀA), ε },   (6)
τ_t² = (1/N) trace(B_t B_tᵀ) v_t² + (1/N) trace(W_t W_tᵀ) σ²,   (7)
B_t = I − W_t A,   (8)
for t = 0, 1, 2, . . . , T −1. To be precise, the estimator equations
on vt2 (6) and τt2 (7) (presented also in [27]) are not part of
OAMP (for example, we can use the state evolution to provide
v_t² and τ_t²) but these estimators were adopted for numerical
evaluation in [13]. The matrix Wt in linear estimator (4) can be
chosen from the transpose of A, the pseudo inverse of A and
the LMMSE matrix. The nonlinear estimation unit (5) consists
of a divergence free function ηdf that replaces the Onsager
correction term. It is proved in [13] that the estimation errors
at the linear estimator (4) and non-linear estimator (5) are
statistically orthogonal if a sensing matrix is i.i.d. Gaussian
or unitary invariant. This fact provides a justification for the
state evolution of OAMP.
III. DETAILS OF TISTA
This section describes the details of TISTA and its training process.
A. MMSE estimators for additive Gaussian noise channel
Let X be a real-valued random variable with probability
density function (PDF) PX (·). We assume an additive Gaussian noise channel defined by Y = X +N, where Y represents
a real-valued random variable as well. The random variable N
is a Gaussian random variable with mean 0 and variance σ 2 .
Consider the situation where a receiver can observe Y and
wish to estimate the value of X.
The MMSE estimator η_MMSE(y) is defined as η_MMSE(y) = E[X|y], where E[X|y] is the conditional expectation given by
E[X|y] = ∫_{−∞}^{∞} x P(x|y) dx.   (9)
The posterior PDF P (x|y) is given by Bayes Theorem:
P_{X|Y}(x|y) = P_X(x) P_{Y|X}(y|x) / P_Y(y),   (10)
where the conditional PDF is Gaussian:
P_{Y|X}(y|x) = (1/√(2πσ²)) exp(−(y − x)²/(2σ²)).   (11)
In the case of the Bernoulli-Gaussian prior, PX (x) is given
by
P_X(x) = (1 − p) δ(x) + (p/√(2πα²)) exp(−x²/(2α²)),   (12)
where p represents the probability such that a non-zero element
occurs. The function δ(·) is the Dirac’s delta function. In this
case, a non-zero element follows the Gaussian PDF with mean
zero and variance α². The MMSE estimator for the Bernoulli-Gaussian prior can be easily derived [29] by using Stein's
formula:
η_MMSE(y; σ²) = y + σ² (d/dy) ln P_Y(y)   (13)
and we have
η_MMSE(y; σ²) = (y α²/ξ) · p F(y; ξ) / [(1 − p) F(y; σ²) + p F(y; ξ)],   (14)
where ξ = α2 + σ 2 and
F(z; v) = (1/√(2πv)) exp(−z²/(2v)).   (15)
These MMSE estimators are going to be used as a building
block of TISTA to be presented in the next subsection.
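A direct implementation sketch of the Bernoulli-Gaussian MMSE shrinkage (14)–(15) (ours; the default values of p and α² match the prior used later in the experiments, but are assumptions here).

```python
import numpy as np

def gauss(z, v):
    """F(z; v): zero-mean Gaussian density with variance v (Equation 15)."""
    return np.exp(-z ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def eta_mmse(y, sigma2, p=0.1, alpha2=1.0):
    """MMSE shrinkage for the Bernoulli-Gaussian prior (Equation 14)."""
    xi = alpha2 + sigma2
    num = p * gauss(y, xi)
    den = (1 - p) * gauss(y, sigma2) + p * gauss(y, xi)
    return (y * alpha2 / xi) * num / den

y = np.linspace(-3, 3, 7)
print(eta_mmse(y, sigma2=0.1))   # small observations are shrunk towards zero
```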
B. Recursive formula for TISTA
We assume that the sensing matrix A ∈ RM ×N is a full
rank matrix. The recursive formula of TISTA are summarized
as follows:
r_t = s_t + γ_t W (y − A s_t),   (16)
s_{t+1} = η_MMSE(r_t; τ_t²),   (17)
v_t² = max{ (||y − A s_t||_2² − M σ²)/trace(AᵀA), ε },   (18)
τ_t² = (v_t²/N)(N − 2γ_t trace(Z) + γ_t² trace(ZZᵀ)) + (γ_t² σ²/N) trace(WWᵀ),   (19)
where the matrix W = AT (AAT )−1 is the pseudo inverse
matrix 1 of the sensing matrix A, and Z = W A. The initial
condition is s0 = 0 and the final estimate is given by x̂ = sT .
The scalar variables γt ∈ R(t = 0, 1, . . . , T − 1) are learnable
variables that are tuned in a training process. The number of
learnable variables is thus T that is much smaller than that of
LISTA [22] and LAMP [23].
An appropriate MMSE shrinkage (17) is chosen according
to the prior distribution of the original signal x. Note that the
MMSE shrinkage is also employed in [23]. The real constant ε
is a sufficiently small value, e.g., ε = 10⁻⁹. The max
operator in (18) is employed to prevent the estimate of the
variance being non-positive. The learnable variables γt in (16)
provide appropriate step sizes and control for the variance of
the MMSE shrinkage.
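Putting the pieces together, a single TISTA iteration (16)–(19) can be sketched as follows (ours; the Bernoulli-Gaussian shrinkage of Section III-A is redefined here so the snippet is self-contained, and σ², ε, p, and α² are assumed known).

```python
import numpy as np

def eta_mmse(r, tau2, p=0.1, alpha2=1.0):
    # Bernoulli-Gaussian MMSE shrinkage, Equations (14)-(15)
    xi = alpha2 + tau2
    g = lambda z, v: np.exp(-z ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    return (r * alpha2 / xi) * p * g(r, xi) / ((1 - p) * g(r, tau2) + p * g(r, xi))

def tista_step(s, y, A, W, Z, gamma, sigma2, eps=1e-9):
    """One TISTA iteration: linear step (16), error-variance estimates (18)-(19),
    MMSE shrinkage (17). The trace terms would be pre-computed in practice."""
    M, N = A.shape
    res = y - A @ s
    v2 = max((res @ res - M * sigma2) / np.trace(A.T @ A), eps)            # (18)
    tau2 = (v2 / N) * (N - 2 * gamma * np.trace(Z)
                       + gamma ** 2 * np.trace(Z @ Z.T)) \
           + (gamma ** 2 * sigma2 / N) * np.trace(W @ W.T)                 # (19)
    r = s + gamma * W @ res                                                # (16)
    return eta_mmse(r, tau2)                                               # (17)

# W is the pseudo inverse of A and Z = W A, both computed once in advance:
#   W = A.T @ np.linalg.inv(A @ A.T);  Z = W @ A
```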
The true error variances τ̄_t² and v̄_t² are defined by
τ̄_t² = E[||r_t − x||_2²]/N,   v̄_t² = E[||s_t − x||_2²]/N.   (20)
These error variances should be estimated as accurately as possible in a sparse recovery process because the MMSE shrinkage (17) requires knowledge of τ̄_t². As in the case of OAMP [13], we make the following assumptions on the residual errors in order to derive an error variance estimator.
The first assumption is that r_t − x consists of i.i.d. zero mean Gaussian entries. Due to this assumption, each entry of the output from the linear estimator (16) can be seen as an observation obtained from a virtual additive Gaussian noise channel with the noise variance τ̄_t². This justifies the use of the shrinkage function based on the MMSE estimator (17) with τ̄_t². Another assumption is that s_t − x consists of zero mean i.i.d. entries and satisfies E[(s_t − x)ᵀAᵀw] = E[(s_t − x)ᵀWw] = 0 for any t.
1 If N < M, we let W = (AᵀA)⁻¹Aᵀ.
The error variance estimator for v̄_t² (18) is the same as that of OAMP [13] and its justification comes from the following proposition.
Proposition 1: If each entry of s_t − x is i.i.d. with mean 0 and E[(s_t − x)ᵀAᵀw] = 0 is satisfied, then
v̄_t² = (E[||y − A s_t||_2²] − M σ²) / trace(AᵀA)   (21)
holds.
The justification of the error variance estimator (19) for τ̄_t² is also provided by the following proposition.
Proposition 2: If each entry of s_t − x is i.i.d. with mean 0 and E[(s_t − x)ᵀWw] = 0 is satisfied, then
τ̄_t² = (v̄_t²/N)(N − 2γ_t trace(Z) + γ_t² trace(ZZᵀ)) + (γ_t² σ²/N) trace(WWᵀ)   (22)
holds.
(Proof) The residual error rt − x can be rewritten as
r_t − x = s_t + γ_t W(y − A s_t) − x
        = s_t + γ_t W(Ax + w) − γ_t W A s_t − x
        = (I − γ_t Z)(s_t − x) + γ_t W w.
From the definition of τ̄_t², we have
τ̄_t² = (1/N) E[||(I − γ_t Z)(s_t − x) + γ_t W w||_2²]
     = (1/N) E[(s_t − x)ᵀ(I − γ_t Z)(I − γ_t Z)ᵀ(s_t − x)] + (γ_t²/N) E[wᵀWᵀW w] + (2γ_t/N) E[(s_t − x)ᵀ(I − γ_t Z)ᵀ W w]
     = (1/N) trace((I − γ_t Z)(I − γ_t Z)ᵀ) v̄_t² + (γ_t²/N) trace(WWᵀ) σ² + (2(γ_t − γ_t²)/N) E[(s_t − x)ᵀ W w].
The last term vanishes due to the assumption E[(s_t − x)ᵀWw] = 0 and the first term can be rewritten as
trace((I − γ_t W A)(I − γ_t W A)ᵀ) = Σ_{i,j: i≠j} (γ_t Z_{i,j})² + Σ_i (1 − γ_t Z_{i,i})²
 = γ_t² Σ_{i,j: i≠j} Z_{i,j}² + Σ_i (1 − 2γ_t Z_{i,i} + γ_t² Z_{i,i}²)
 = N − 2γ_t trace(Z) + γ_t² trace(ZZᵀ).   (23)
The claim of the proposition is now obtained.
(QED)
These error variance estimators (18) and (19) play a crucial
role to give appropriate variance estimates required for the
MMSE shrinkage. Since the validity of these assumptions on
the residual errors cannot be proved, it will be experimentally
confirmed in the next section. It should be also remarked that
the TISTA recursive formula includes neither an Onsager correction term nor a divergence-free function. Thus, we
cannot expect stochastic orthogonality guaranteed in OAMP in
a process of TISTA. This means that the state evolution cannot
be applied to analyze the asymptotic performance of TISTA.
Fig. 1. The t-th iteration of the TISTA with learnable variable γt
C. Time complexity of TISTA
For treating a large scale problem, a sparse recovery algorithm should require low computational complexity for each
iteration. The time complexity required for evaluating the
recursive formula of TISTA per iteration is O(N 2 ), which
is the same time complexity as those of ISTA and AMP,
which means that TISTA has sufficient scalability for large
problems. The evaluation of the matrix-vector products, Ast
and W (y − Ast ) need O(N 2 )-time that are dominant in an
iteration. Although the evaluations of the scalar constants
trace(AT A), trace(W W T ), and trace(ZZ T ) and computation
of the pseudo inverse of A require O(N 3 )-time, they can be
pre-computed only once in advance.
[Fig. 2 plot area: error variance (log scale) versus iteration]
Fig. 2. Comparison between the estimate τ̄ 2 and the true error variance τ 2 ;
A ∼ N (0, 1/M ), SNR = 40 dB. The optimized γt in this case are given by
γ0 = 1.67, γ1 = 4.42, γ2 = 1.41, γ3 = 1.35, γ4 = 5.66, γ5 = 1.51, γ6 =
2.83, γ7 = 1.38, γ8 = 5.84, γ9 = 0.92, γ10 = 1.13, γ11 = 1.49, γ12 =
1.70, γ13 = 1.91, γ14 = 2.18, γ15 = 2.21.
IV. PERFORMANCE EVALUATION
D. Incremental training for TISTA
A. Details of experiments
In order to achieve reasonable recovery performance, the
trainable variables γt (t = 0, 1, . . . , T − 1) should be appropriately adjusted. By unfolding the recursive formula of
TISTA, we immediately have a signal-flow graph which is
similar to a multi-layer feedforward neural network. Figure 1
depicts a unit of the signal-flow graph corresponding to t-th
iteration of TISTA and we can stack the units to compose a
whole signal-flow graph. We here follow a standard recipe of
deep learning techniques, namely we apply mini-batch training
with a stochastic gradient descent algorithm to the signal-flow
graph of TISTA. From several experiments, we found that
the following incremental training is considerably effective
to learn appropriate values providing superior performance.
The training data consists of a number of randomly generated pairs (x, y) where y = Ax+w. The sample x follows the
prior distribution PX (x) and w is an i.i.d. Gaussian random
vector. The whole training data is divided into mini-batchs
to be used in a stochastic gradient descent algorithm such as
SGD, RMSprop or Adam.
In the t-th round of the incremental training (we call it a
generation), an optimizer tries to minimize E[||st − x||22 ] by
tuning γ0 , γ1 , . . . , γt−1 . The number of mini-batches used in
t-th generation is denoted by D (epochs). After processing D
epochs, the target of the optimizer is changed to E[||st+1 −
x||22 ]. Namely, after training the first to t-th layers, a new t + 1
layer is appended to the network and the whole network is
trained again for D epochs. Although the objective function
is changed, the values of the variables γ0 , . . . , γt−1 of the
previous generation is taken over as the initial values in the
optimization process for the new generation. In summary, the
incremental training updates the variables γt in a sequential
manner from the first layer to the last layer.
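The schedule can be summarized by the following sketch (ours, written with PyTorch rather than the TensorFlow implementation used in the paper; the problem sizes, number of layers, epochs, batch size, and learning rate here are illustrative assumptions, not the experimental settings below).

```python
import math
import torch

# Sketch of incremental training for unfolded TISTA (illustrative sizes).
torch.manual_seed(0)
M, N, T, D, B = 25, 50, 6, 50, 200           # sizes, layers, epochs, batch size
p, alpha2, sigma2, eps = 0.1, 1.0, 1e-4, 1e-9

A = torch.randn(M, N) / M ** 0.5
W = A.T @ torch.linalg.inv(A @ A.T)           # pseudo inverse of A (full row rank)
Z = W @ A
trA, trZ = torch.trace(A.T @ A), torch.trace(Z)
trZZ, trWW = torch.trace(Z @ Z.T), torch.trace(W @ W.T)

def eta_mmse(r, tau2):
    # Bernoulli-Gaussian MMSE shrinkage (Equations 14-15), batched
    xi = alpha2 + tau2
    g = lambda z, v: torch.exp(-z ** 2 / (2 * v)) / torch.sqrt(2 * math.pi * v)
    return (r * alpha2 / xi) * p * g(r, xi) / ((1 - p) * g(r, tau2) + p * g(r, xi))

def forward(y, gammas, layers):
    # Run the first `layers` TISTA iterations (16)-(19) on a batch of observations
    s = torch.zeros(y.shape[0], N)
    for t in range(layers):
        res = y - s @ A.T
        v2 = torch.clamp(((res ** 2).sum(1) - M * sigma2) / trA, min=eps)
        tau2 = (v2 / N) * (N - 2 * gammas[t] * trZ + gammas[t] ** 2 * trZZ) \
               + (gammas[t] ** 2 * sigma2 / N) * trWW
        s = eta_mmse(s + gammas[t] * res @ W.T, tau2.unsqueeze(1))
    return s

def minibatch():
    # Bernoulli-Gaussian signals and noisy observations y = A x + w
    x = (torch.rand(B, N) < p).float() * torch.randn(B, N) * alpha2 ** 0.5
    return x, x @ A.T + sigma2 ** 0.5 * torch.randn(B, M)

gammas = torch.nn.Parameter(torch.ones(T))
opt = torch.optim.Adam([gammas], lr=0.04)
for layers in range(1, T + 1):                # t-th generation: train layers 1..t
    for _ in range(D):                        # D mini-batches per generation
        x, y = minibatch()
        loss = ((forward(y, gammas, layers) - x) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
print(gammas.detach())                        # learned step sizes, carried over between generations
```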
The basic conditions for computer experiments are summarized as follows. Each component of the sparse signal x is
assumed to be a realization of i.i.d. random variable following
the Bernoulli-Gaussian PDF (12) with p = 0.1, α2 = 1.
We thus use the MMSE estimator (17) for the BernoulliGaussian prior. Each component of the noise vector w follows
the zero mean Gaussian PDF with variance σ 2 . The signal
to noise ratio (SNR) of the system is defined as SN R =
E[||Ax||22 ]/E[||w||22 ]. The dimension of the sensing matrices
are set to be N = 500, M = 250. The size of the minibatch is set to 1000, and D = 200 epochs are allocated for
each generation. We used Adam optimizer with the initial
value 4.0 × 10−2 in the training phase. The experiment system
was implemented based on TensorFlow [28]. For comparison
purpose, we will include the NMSE performances of AMP
and LISTA in the following subsections. The hyper parameter
θ used in AMP is set to θ = 1.14. We used an implementation
of LISTA by Borgerding [30].
B. IID Gaussian matrix
This subsection describes the case where A ∼ N (0, 1/M ),
i.e., each component of the sensing matrix A obeys zero mean
Gaussian distribution with variance 1/M . Note that AMP is
designed for this matrix ensemble.
Figure 2 shows a comparison between the estimate τ 2 by
(19) and the empirically estimated values of the true error
variance τ̄ 2 . We can observe that the estimator τ 2 provides
accurate estimations and it justifies the use of (18) and (19)
and our assumptions on the residual errors. The caption of Fig.
2 includes a set of the optimized γt .
Figure 3 presents the average normalized MSE (NMSE)
of TISTA, LISTA and AMP as functions of iteration when
0
-5
TISTA
LISTA
AMP
-5
-10
-15
NMSE [dB]
NMSE [dB]
-15
-20
-25
-30
-20
-25
-30
-35
-35
-40
-40
2
4
6
8
10
iteration
12
14
-45
16
Fig. 3. NMSE of TISTA, LISTA and AMP; Ai,j ∼ N (0, 1/M ), SNR
= 40dB. The condition Ai,j ∼ N (0, 1/M ) is required for AMP to converge.
SNR = 40dB. The NMSE is defined by 10 log10 {||st+1 −
x||22 /||x||22 }(dB). From Fig. 3, we can observe that TISTA
provides the steepest NMSE curve than those of AMP and
LISTA at the first 16 rounds. For example, AMP and LISTA
require 16 and 10 rounds to achieve NMSE = −30dB, respectively, but TISTA needs only 6 rounds. It can be seen that
the NMSE curve of TISTA saturates around −43dB at which
TISTA converges. This means that TISTA shows significantly
faster convergence than that of AMP and LISTA in this
setting. Compared with the experimental results under the
same condition shown in [24], the NMSE values of TISTA is
almost comparable to those of LAMP [23]. CPU time required
for training a 7-layers TISTA graph was around 6 minutes by
using a PC with GPU NVIDIA GeForce GTX 1080. A large
Gaussian matrix with the size N = 5000, M = 2500 required
around 7 hours for training. It was observed that the NMSE
curve of N = 5000 is quite similar to that of N = 500.
In the next experiment, we made change on the variance of
sensing matrices to a larger number, i.e., each element in A
follows N (0, 1). Figure 4 shows the NMSE curves of TISTA
and LISTA. It should be noted that, under this condition,
AMP does not perform well, i.e, it cannot converge at all,
because the setting does not fit the required condition (Ai,j ∼
N (0, 1/M )) for achieving the guaranteed performance and the
convergence of AMP. As we can see from Fig. 4, TISTA
behaves soundly and shows faster convergence than that of
LISTA. This result suggests that TISTA is appreciably robust
against the change of the variance.
C. Binary matrix
In this subsection, we will discuss the case where sensing
matrices are binary, i.e., A ∈ {±1}M ×N . Each entry of
A is selected uniformly at random on {±1}. This situation
is closely related to multiuser detection in Coded Division
Multiple Access (CDMA) [9]. Figure 5 shows the NMSE
curves of TISTA and LISTA as a function of iteration. It can
Fig. 4. NMSE of TISTA and LISTA; A_{i,j} ∼ N(0, 1), SNR = 40 dB. In this case, AMP cannot converge because the variance of the matrix components is too large.
Fig. 5. NMSE of TISTA and LISTA; A_{i,j} takes a value in {±1} uniformly at random, SNR = 40 dB. AMP is not applicable to this case.
It can be seen that the NMSE curves of TISTA almost coincide with those obtained for Gaussian sensing matrices. This result can be regarded as evidence of the robustness of TISTA to non-Gaussian sensing matrices.
D. Sensing matrices with large condition number
The condition number κ of a matrix is defined as the ratio of the largest and smallest singular values, i.e., κ = s_1/s_M, where s_1 ≥ s_2 ≥ · · · ≥ s_M are the singular values of the matrix. It is known that regression problems involving a matrix with a large condition number are difficult to solve accurately. In this subsection, we assess the performance of TISTA for sensing matrices with a large condition number. Figure 6 presents the NMSE of TISTA and AMP when SNR = 60 dB. AMP can converge up to κ = 4 but diverges when κ > 4. TISTA shows performance similar to that of AMP with κ = 4 even when κ = 1000. This result is strong evidence
Fig. 6. NMSE of TISTA (κ = 1, 15, 100, 1000) and AMP (κ = 1, 4); κ represents the condition number, SNR = 60 dB. The singular values are selected so that s_i/s_{i−1} is constant. The sensing matrix A is normalized as ||A||_F^2 = N.
that TISTA has the potential to achieve excellent NMSE even for a sensing matrix with a large condition number.
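The construction described in the caption of Fig. 6 can be sketched as follows: geometric singular values with a constant ratio, random orthonormal factors (an assumption of this sketch), and Frobenius normalization ||A||_F^2 = N.

```python
import numpy as np

def conditioned_matrix(M, N, kappa, rng=np.random.default_rng(0)):
    """Sensing matrix whose singular values form a geometric sequence with s1/sM = kappa,
    normalized so that ||A||_F^2 = N (matching the Fig. 6 setup)."""
    s = kappa ** (-np.arange(M) / (M - 1))           # s_i / s_{i-1} constant, s_1 / s_M = kappa
    U, _ = np.linalg.qr(rng.normal(size=(M, M)))      # random orthonormal factors (assumed)
    V, _ = np.linalg.qr(rng.normal(size=(N, N)))
    A = U @ np.diag(s) @ V[:M, :]
    A *= np.sqrt(N) / np.linalg.norm(A, 'fro')        # enforce ||A||_F^2 = N
    return A

A = conditioned_matrix(250, 500, kappa=1000.0)
print(np.linalg.cond(A), np.linalg.norm(A, 'fro') ** 2)   # ~1000 and ~500
```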
V. CONCLUDING SUMMARY
The crucial feature of TISTA is that it includes adjustable variables which can be tuned by standard deep learning techniques. The number of trainable variables of TISTA equals the number of iterative rounds, which is much smaller than in the known learnable sparse signal recovery algorithms [22]–[24]. This feature leads to a highly stable and fast training process for TISTA. Computer experiments indicate that TISTA is applicable to various classes of sensing matrices such as Gaussian matrices, binary matrices, and matrices with large condition numbers. Furthermore, numerical results demonstrate that TISTA shows significantly faster convergence than AMP and LISTA in many cases. By replacing the MMSE shrinkage function, TISTA can also be expected to apply to non-sparse signal recovery problems such as detection of BPSK signals in overloaded MIMO systems.
ACKNOWLEDGEMENT
This work is supported by JSPS Grant-in-Aid for Scientific
Research (B) Grant Number 16H02878 (TW) and Grant-in-Aid for Young Scientists (Start-up) Grant Number 17H06758
(ST). The last author is grateful to Keigo Takeuchi for an
inspiring seminar talk.
REFERENCES
[1] D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52,
no. 4, pp. 1289-1306, Apr. 2006.
[2] E. J. Candes and T. Tao, “Near-optimal signal recovery from random
projections: Universal encoding strategies?” IEEE Trans. Inf. Theory,
vol. 52, no. 12, pp. 5406-5425, Dec. 2006.
[3] Z. Zhang, Y. Xu, J. Yang, X. Li, and D. Zhang, “A Survey of Sparse
Representation: Algorithms and Applications,” IEEE Access, vol. 3, pp.
490-530, 2015.
[4] R. Tibshirani, “Regression shrinkage and selection via the lasso,” J.
Royal Stat. Society, Series B, vol. 58, pp. 267–288, 1996.
[5] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, “Least Angle
Regression,” Ann. Stat., vol. 32, no. 2, pp. 407-499, Apr. 2004.
[6] A. Chambolle, R. A. DeVore, N. Lee, and B. J. Lucier, “Nonlinear
wavelet image processing: variational problems, compression, and noise
removal through wavelet shrinkage,” IEEE Trans. Image Process., vol.
7, no. 3, pp. 319–335, Mar, 1998.
[7] I. Daubechies, M. Defrise, and C. De Mol, “An iterative thresholding
algorithm for linear inverse problems with a sparsity constraint,” Comm.
Pure and Appl. Math., vol. 57, no. 11, pp. 1413-1457, Nov. 2004.
[8] N. Parikh and S. Boyd, “Proximal Algorithms,” Foundations and Trends
in Optimization, vol. 1, no. 3, pp. 127-239, 2014.
[9] Y. Kabashima, “A CDMA multiuser detection algorithm on the basis of
belief propagation,” J. Phys. A: Math. Gen., vol. 36 11111–11121, Oct.
2003.
[10] D. L. Donoho, A. Maleki, and A. Montanari, “Message-passing algorithms for compressed sensing,” Proceedings of the National Academy
of Sciences, vol. 106, no. 45, pp. 18914–18919, Nov. 2009.
[11] D. L. Donoho, A. Maleki, and A. Montanari, “Message passing algorithms for compressed sensing: I. Motivation and construction,” IEEE
Information Theory Workshop 2010, ITW 2010, pp. 1-5, 2010.
[12] F. Caltagirone, L. Zdeborova, and F. Krzakala, “On convergence of
approximate message passing,” 2014 IEEE Int. Symp. Inf. Theory, Jun.
2014, pp. 1812-1816.
[13] J. Ma and L. Ping, “Orthogonal AMP,” IEEE Access, vol. 5, pp. 2020–
2033, 2017.
[14] S. Rangan, P. Schniter, and A. K. Fletcher, “Vector approximate message
passing,” 2017 IEEE Int. Symp. Inf. Theory, Jun. 2017, vol. 65, no. 17,
pp. 1588-1592.
[15] K. Takeuchi, “Rigorous dynamics of expectation-propagation-based signal recovery from unitarily invariant measurements,” 2017 IEEE Int.
Symp. Inf. Theory, Jun. 2017, pp. 501-505.
[16] K. Fukushima, “Neocognitron: A self-organizing neural network model
for a mechanism of pattern recognition unaffected by shift in position,”
Bio. Cybern., vol. 36, no. 4, pp. 193-202, 1980.
[17] G. E. Hinton, R. R. Salakhutdinov, “Reducing the Dimensionality of
Data with Neural Networks,” Science, vol. 313, no. 5786, pp. 504-507,
Jun. 2006.
[18] A. Krizhevsky, I. Sutskever, G. E. Hinton, “Imagenet classification with
deep convolutional neural networks.” Advances in Neural Inf. Proc. Sys.
2012, pp. 1097-1105, 2012.
[19] G. Hinton et al., “Deep Neural Networks for Acoustic Modeling in
Speech Recognition: The Shared Views of Four Research Groups,” IEEE
Signal Processing Magazine, vol. 29, no. 6, pp. 82-97, Nov. 2012.
[20] B. Aazhang, B. P. Paris and G. C. Orsak, “Neural networks for
multiuser detection in code-division multiple-access communications,”
IEEE Trans. Comm., vol. 40, no. 7, pp. 1212-1222, Jul. 1992.
[21] E. Nachmani, Y. Beéry and D. Burshtein, “Learning to decode linear
codes using deep learning,” 2016 54th Annual Allerton Conf. Comm.,
Control, and Computing, 2016, pp. 341-346.
[22] K. Gregor, and Y. LeCun, “Learning fast approximations of sparse
coding,” Proc. 27th Int. Conf. Machine Learning, pp. 399–406, 2010.
[23] M. Borgerding and P. Schniter, “Onsager-corrected deep learning for
sparse linear inverse problems,” 2016 IEEE Global Conf. Signal and
Inf. Proc. (GlobalSIP), Washington, DC, Dec. 2016, pp. 227-231.
[24] M. Borgerding, P. Schniter, and S. Rangan, “AMP-Inspired Deep
Networks for Sparse Linear Inverse Problems, ” arXiv:1612.01183v2
(2017).
[25] A. Beck, and M. Teboulle, “A fast iterative shrinkage-thresholding
algorithm for linear inverse problems,” SIAM J Imaging Sciences, vol.
2, no. 1, pp. 183-202, 2009.
[26] J. M. Bioucas-Dias and M. A. T. Figueiredo, “A New TwIST: Two-Step
Iterative Shrinkage/Thresholding Algorithms for Image Restoration,”
IEEE Trans. Image Proc., vol. 16, no. 12, pp. 2992-3004, Dec. 2007.
[27] J. Vila and P. Schniter, “Expectation-maximization gaussian-mixture
approximate message passing,” IEEE Trans. Signal Proc., vol. 61, no.
19, pp. 4658-4672, 2013.
[28] “TensorFlow: Large-scale machine learning on heterogeneous systems,”
http://tensorflow.org/ 2015. Software available from tensorflow.org.
[29] R. Gribonval, “Should penalized least squares regression be interpreted as Maximum A Posteriori estimation?,” IEEE Trans. Signal Processing, vol. 59, no. 5, pp. 2405-2410, May 2011.
[30] https://github.com/mborgerding/onsager_deep_learning/blob/master/
README.md
Improving the Performance of OTDOA based
Positioning in NB-IoT Systems
Sha Hu† , Axel Berg‡ , Xuhong Li† , and Fredrik Rusek†
† Department of Electrical and Information Technology, Lund University, Lund, Sweden
‡ ARM, Lund, Sweden
† {sha.hu, xuhong.li, fredrik.rusek}@eit.lth.se, ‡ [email protected]
arXiv:1704.05350v2 [] 5 Sep 2017
Abstract—In this paper, we consider positioning with observed-time-difference-of-arrival (OTDOA) for a device deployed in long-term-evolution (LTE) based narrow-band Internet-of-things (NB-IoT) systems. We propose an iterative expectation-maximization based successive interference cancellation (EM-SIC) algorithm to jointly consider estimations of residual frequency-offset (FO), fading-channel taps and time-of-arrival (ToA) of the first arrival-path for each of the detected cells. In order to design a low complexity ToA detector and also due to the limits of low-cost analog circuits, we assume an NB-IoT device working at
a low-sampling rate such as 1.92 MHz or lower. The proposed
EM-SIC algorithm comprises two stages to detect ToA, based
on which OTDOA can be calculated. In a first stage, after
running the EM-SIC block a predefined number of iterations,
a coarse ToA is estimated for each of the detected cells. Then in
a second stage, to improve the ToA resolution, a low-pass filter is
utilized to interpolate the correlations of time-domain PRS signal
evaluated at a low sampling-rate to a high sampling-rate such as
30.72 MHz. To keep low-complexity, only the correlations inside
a small search window centered at the coarse ToA estimates
are upsampled. Then, the refined ToAs are estimated based on
upsampled correlations. If at least three cells are detected, with
OTDOA and the locations of detected cell sites, the position of
the NB-IoT device can be estimated. We show through numerical
simulations that, the proposed EM-SIC based ToA detector is
robust against impairments introduced by inter-cell interference,
fading-channel and residual FO. Thus significant signal-to-noise
(SNR) gains are obtained over traditional ToA detectors that do
not consider these impairments when positioning a device.
I. INTRODUCTION
Observed-time-difference-of-arrival (OTDOA) is a downlink (DL) positioning method first introduced in long-term evolution (LTE) in Rel. 9 [1], [2]. The positioning-reference-signal (PRS) is transmitted in the DL to enhance positioning
measurements at receiver nodes to ensure sufficiently high
signal quality and detection probability. The PRS is distributed
in time and frequency resources over a subframe and a number
of consecutive positioning subframes are allocated with a
certain periodicity. In a PRS subframe where PRS is present,
no data but only control signaling is transmitted which reduces
the interference from neighbour cells [1]. In order to further
reduce inter-cell interference, the network can mute PRS
transmission of certain e-NodeBs (termed PRS muting). When
PRS is not available, cell-specific-reference-signal (CRS) can
also be used to detect the OTDOA.
As is well-known, low-cost and power-efficient transceiver
circuits are important for a device working in narrow-band
Internet-of-things (NB-IoT) systems, which needs to operate
Fig. 1. Positioning with PRS signals from 3 e-NodeBs based on OTDOA,
where the ToA (τ0,0 , τ1,0 and τ2,0 ) of the first arrival-paths are detected at
an NB-IoT device, and then reported back in the uplink (UL) for e-NodeBs
to estimate the position of the device.
for 10 years with its built-in battery. Therefore, using an analog
RF circuitry that supports sampling-rate up to 30.72 MHz to
obtain good resolution of time-of-arrival (ToA) is expensive
and infeasible for a low-end NB-IoT device. Although the
device can interpolate the received samples in digital domain
to have a high sampling-rate, intensive computations of PRS
correlation consume lots of the battery-power. Hence, in this
work we consider an NB-IoT device that only works at a low
sampling-rate such as 1.92 MHz, and instead of upsampling
the received samples we interpolate the PRS correlations in a
small search window to improve the ToA resolution.
In order to position an NB-IoT device, PRS signals from at
least three cells need to be detected as depicted in Fig. 1. To
enhance the hearability from multiple cells, the positioning
subframes are designed with no data transmission in the
physical-downlink-shared-channel (PDSCH). However, due to
the time-delays, the PRS signal from different cells can still
cause strong interference to other cells, which degrades the
ToA detection performance. For instance, consider an NB-IoT
device that is very close to one e-NodeB; this causes dramatic interference to other cells, which results in a positioning
failure since only one PRS signal from the closest cell can
be detected. Therefore, a successive interference cancellation
(SIC) technique is needed for the ToA detection. Although
PRS muting can avoid collisions of PRS signal, it increases
the latency of positioning process linearly in the number of
cells involved and is infeasible for a dense cell-deployment.
Besides inter-cell interference, the residual frequency offset
(FO) due to an imperfect FO estimate based on the NB-IoT synchronization signals also causes performance degradation of the coherent additions of correlations in the ToA detection process. Moreover, although an NB-IoT device is expected to have a low speed and the channel can be assumed constant over one PRS subframe, the wireless fading-channel still needs to be estimated for the SIC process and the coherent additions of correlations.
In this paper, we consider OTDOA based positioning in LTE based NB-IoT systems where only one physical resource block (PRB) is used for data-transmission (180 kHz). We propose an expectation-maximization based successive interference cancellation (EM-SIC) algorithm to estimate the fading-channel, residual FO and ToA of the first arrival-path for each of the detected multiple cells. The EM-SIC algorithm based ToA detector works on received time-domain samples at a low sampling-rate, to firstly obtain coarse estimates of the ToA for all detected cells. Then, the resolution of the estimated ToA is refined with upsampled correlations inside a search window centered at the coarse ToA estimates using interpolations. To further improve the accuracies of the ToA estimates, we use an iterative multi-path detection (MPD) algorithm to detect the ToA of the first arrival-path by taking into account the property of the auto-correlation function (ACF) of the time-domain PRS signal. We show through numerical results that the proposed EM-SIC algorithm based ToA detector renders significant signal-to-noise (SNR) gains compared to traditional OTDOA detectors that do not thoroughly consider the impairments introduced by the inter-cell interference, the ACF of the time-domain PRS signal, the fading-channel and the residual FO.
Fig. 2. The PRS pattern in one PRB for normal CP and one or two PBCH antenna ports for a cell with PCI=0 in LTE. The PRS is transmitted on antenna port 6 which is labeled as R6, while the normal CRS is sent on antenna port 0 and labeled as R0. The frequency shift is given by mod(PCI, 6).
II. PRS AND OTDOA BASED POSITIONING IN NB-IOT SYSTEMS
A. PRS Generation and Subframe Mapping
We consider an LTE based single-input-single-output
(SISO) NB-IoT system, where the PRS signal is generated
based on physical cell identity (PCI), and mapped to resource
element (RE) over a time-frequency grid as described in [2].
The number of consecutive PRS subframes can be either 1,
2, 4 or 6, which are transmitted periodically in the DL. The
period of one positioning occasion can be configured to every
160, 320, 640 or 1280 milliseconds (ms). In the considered
NB-IoT system, we assume a narrow-band data-transmission
over one PRB. The QPSK-modulated PRS signal is generated
as [2]
1
1
zns ,ℓ = √ (1 − 2c[2m]) + j √ (1 − 2c[2m + 1]).
2
2
(1)
where ns is the slot number within a radio-frame, ℓ is
the OFDM symbol number within one slot, and m = 0, 1
represents the two PRS symbols in each PRS OFDM symbol.
The pseudo-random sequence c[m] is generated by a length31 Gold sequence whose initial state depends on PCI, ns , ℓ
and the cyclic-prefix (CP) type. According to the PRS mapping
pattern such as shown in Fig. 2, the QPSK symbols are mapped
to REs. As there is no data transmission in the PRS subframe, the baseband time-domain PRS signal is obtained through inverse Fast-Fourier-Transform (IFFT) according to

s_{p,ℓ}[n] = (1/√N) Σ_{k=−N/2}^{N/2−1} S_{p,ℓ}[k] e^{j2πnk/N}
          = (1/√N) ( S_{p,ℓ}[k_{p,ℓ}] e^{j2πn k_{p,ℓ}/N} + S_{p,ℓ}[k_{p,ℓ}+6] e^{j2πn(k_{p,ℓ}+6)/N} ),   (2)

where N is the IFFT size, and S_{p,ℓ}[k] is the mapped PRS signal generated in (1) on the ℓth OFDM symbol of the pth cell. In total we consider P cells for OTDOA based positioning, and k_{p,ℓ} is the lower frequency index where the corresponding PRS signal is transmitted on the ℓth symbol of the pth cell.
B. Received Signal Model
In LTE, the unit of ToA for OTDOA based positioning is
Ts = 1/Fs where Fs = 30.72 MHz. In the considered NB-IoT
system, we assume a sampling-rate F̃s with a typical value
1.92 MHz, which is much lower than Fs . Denote the true
ToA of the ith channel-tap from the pth cell as τp,i , and the
ToA measured in number of samples as
np,i = ⌊τp,i F̃s ⌋.
(3)
Then, the received signal from the pth cell corresponding to
the ℓth PRS OFDM symbol and ith path of the fading-channel
at time-epoch n can be modeled as
y_{p,i}[n+ℓM] = { 0,  0 ≤ n < n_{p,i};   h^ℓ_{p,i}[n] s_{p,ℓ}[n−n_{p,i}],  n_{p,i} ≤ n < n_{p,i}+M;   0,  n_{p,i}+M ≤ n < M̃ }   (4)
where sp,ℓ [n] is the time-domain PRS signal (including CP)
generated in (2) with M being its length at the sampling-frequency F̃_s, and M̃ equals M plus the number of samples
corresponding to the maximal ToA for all P cells. The channel
hℓp,i [n] comprises two parts: the ith tap of the fading-channel
hp,i which we assume to be constant over one PRS subframe,
and a phase-rotation caused by the residual FO, which equals
h^ℓ_{p,i}[n] = h_{p,i} e^{j2π ǫ_p (n+ℓM)/N}.   (5)
Fig. 3. ToA Detection structure with proposed EM-SIC algorithm.
where ǫp is the residual FO of the pth cell normalized by
the subcarrier frequency-spacing (15 kHz). Then, the superimposed received samples for all P cells read
y[n] = Σ_{p=0}^{P−1} Σ_{ℓ=0}^{N_{sym}−1} Σ_{i=0}^{L_p−1} y_{p,i}[n+ℓM] + w[n],   (6)
where w[n] is modeled as AWGN with zero mean and variance σ², and L_p is the maximal number of channel-taps for the pth cell. The signal-to-noise ratio (SNR) is defined as SNR = σ_s²/σ², where σ_s² is the average power of the signal s_{p,ℓ}[n]. We denote the number of OFDM symbols in one subframe as N_{sym}, and the number of OFDM symbols carrying PRS as N_{PRS}. From Fig. 2, N_{sym} and N_{PRS} are equal to 14 and 8, respectively. For simplicity, we let ℓ̃(s) (0 ≤ s < 8) denote the sth OFDM symbol that contains PRS. In order to position an NB-IoT device¹, the device needs to detect the ToA of the first arrival-path for (at least three out of) the P cells, that is, detect n_{p,0}.
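As an illustration of the signal model (4)-(6), the following NumPy sketch superimposes delayed, faded and FO-rotated PRS symbols from several cells and adds AWGN. The random placeholder waveforms stand in for the Gold-sequence PRS of (1)-(2), and a single channel tap and a single OFDM symbol per cell are assumptions of this sketch.

```python
import numpy as np

def received_samples(prs, delays, taps, fo, N_ifft, noise_var, rng=np.random.default_rng(0)):
    """Superimpose delayed, faded, FO-rotated PRS symbols from P cells, eqs. (4)-(6).

    prs    : list of P time-domain PRS symbols (length M each)
    delays : n_{p,0} in samples; taps: h_{p,0}; fo: residual FO eps_p (subcarrier units)
    """
    M = len(prs[0])
    total = M + max(delays)                        # M-tilde: symbol length plus largest delay
    y = np.zeros(total, dtype=complex)
    for s_p, n_p, h_p, eps_p in zip(prs, delays, taps, fo):
        n = np.arange(n_p, n_p + M)
        phase = np.exp(1j * 2 * np.pi * eps_p * n / N_ifft)   # eq. (5), single symbol (l = 0)
        y[n_p:n_p + M] += h_p * phase * s_p                    # eq. (4)
    w = np.sqrt(noise_var / 2) * (rng.normal(size=total) + 1j * rng.normal(size=total))
    return y + w                                                # eq. (6)

# toy usage: 3 cells with random placeholder PRS waveforms
rng = np.random.default_rng(1)
prs = [rng.normal(size=137) + 1j * rng.normal(size=137) for _ in range(3)]
y = received_samples(prs, delays=[20, 30, 40], taps=[1.0, 0.6, 0.4],
                     fo=[0.02, 0.01, 0.01], N_ifft=128, noise_var=0.1)
```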
C. ToA Detection of the First Arrival-Path
In order to detect the ToA, for each cell a cross-correlation
between the received samples and the PRS signals sp,ℓ [n] is
implemented, and the correlations are calculated as
R_{p,ℓ}[n] = Σ_{k=n}^{n+M−1} y[k] s*_{p,ℓ}[k − n].   (7)

Then, the delay n_{p,0} can be estimated according to

n_{p,0} = argmin_n { |Σ_{s=0}^{7} R_{p,ℓ̃(s)}[n]| / max_n |Σ_{s=0}^{7} R_{p,ℓ̃(s)}[n]| > η_1 }.   (8)

However, (8) is not executed unless the following is satisfied

max_n { |Σ_{s=0}^{7} R_{p,ℓ̃(s)}[n]| } > (η_2 / (M̃−M−1)) Σ_{n=0}^{M̃−M−1} Σ_{s=0}^{7} |R_{p,ℓ̃(s)}[n]|.   (9)

For both (8) and (9), η_1 and η_2 are predefined thresholds that can be adapted for different scenarios. The condition (9) decides if the PRS signal is detected, while the condition (8) provides the estimate of the ToA. A drawback of the decision conditions (8) and (9) is that the presence of residual FO degrades the performance of the coherent addition of R_{p,ℓ̃(s)}[k] over all PRS symbols, while a non-coherent addition renders inferior performance compared to coherent addition.

D. Cramér-Rao Lower Bound (CRLB)

According to (4), under an AWGN channel (h_{p,i} = 0 for i ≠ 0) without residual FO, the CRLB for estimating n_{p,0} can be shown to be [4]

var(n_{p,0}) ≥ σ²N² / ( 8π² Σ_{s=0}^{7} Σ_{n=0}^{N−1} n² |S_{p,ℓ̃(s)}[n]|² ) = σ²N² / ( 8π² Σ_{s=0}^{7} ( k²_{p,ℓ̃(s)} + (k_{p,ℓ̃(s)}+6)² ) ),   (10)

¹ Although considering CRS can further improve the positioning performance, in this paper we only consider the 8 OFDM symbols on which the PRS are transmitted for positioning. The same principle can be applied to CRS-assisted positioning.
which shows that the CRLB for estimating np,0 depends on
the subcarrier index kp,ℓ̃(s) where the PRS signal is transmitted. Note that, the value of kp,ℓ̃(s) changes slightly with
different FFT shift operations in practical implementations of
(2). Although the CRLB is usually difficult to attain unless
under the AWGN channel and without impairments caused by
interference or FO, it still provides an insight for designing
the optimal PRS mapping-pattern of the NB-IoT system.
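To make the detection rule concrete, the following is a small NumPy sketch of the per-cell correlation and threshold tests in the spirit of (7)-(9). It ignores FO correction and SIC (those are added by the EM-SIC stages of Section III), and the threshold values and the per-symbol windowing of the received samples are illustrative assumptions.

```python
import numpy as np

def toa_detect(y_segments, prs_symbols, eta1=0.8, eta2=3.0):
    """Single-cell ToA detection in the spirit of (7)-(9), without FOC or SIC.

    y_segments  : list of 8 received windows, one per PRS OFDM symbol,
                  each of length M + n_max (symbol length plus maximal delay)
    prs_symbols : the 8 time-domain PRS symbols, each of length M
    Returns (detected, n_hat).
    """
    M = len(prs_symbols[0])
    n_max = len(y_segments[0]) - M
    R_coh = np.zeros(n_max, dtype=complex)   # coherent sum over the 8 PRS symbols
    R_abs = np.zeros(n_max)                  # non-coherent sum, used by the gate (9)
    for seg, s in zip(y_segments, prs_symbols):
        for n in range(n_max):
            r = np.vdot(s, seg[n:n + M])     # eq. (7): sum_k y[k] s*[k - n]
            R_coh[n] += r
            R_abs[n] += abs(r)
    peak = np.abs(R_coh).max()
    if peak <= eta2 * R_abs.sum() / n_max:   # detection gate, eq. (9)
        return False, None
    above = np.abs(R_coh) / peak > eta1      # earliest delay whose normalized sum exceeds eta1
    return True, int(np.argmax(above))       # eq. (8)
```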
Fig. 4. The normalized power-delay-profile (PDP) of ETU channel with
sampling period Ts and 16Ts , respectively.
III. PROPOSED EM-SIC ALGORITHM FOR OTDOA POSITIONING
In this section, we elaborate the proposed EM-SIC algorithm
based ToA detector, which jointly considers mitigating the
inter-cell interference, estimating the residual FO and the
multi-path fading-channels, and detecting ToA of all cells.
A. ToA Detection and OTDOA Positioning Structure
In Fig. 3 we depict the detector structure with the EM-SIC algorithm for OTDOA based positioning of the NB-IoT
device, which essentially comprises two stages. In the first
stage, coarse estimations of ToA of the first arrival-path of all
P cells are obtained with iterative EM-SIC blocks, and the
correlations are coherently summed-up after the residual FO
correction. Then in the second stage, the ToA estimates are
refined with interpolated correlations. Note that, as explained
earlier, we assume that the EM-SIC algorithm works with the
received samples at a low sampling-rate, and as depicted in
Fig. 4, the multiple-path components of fading-channel are
merged together at a low sampling-rate such as 1.92 MHz.
Therefore, for the EM-SIC algorithm in the first stage it is
sufficient to only consider the strongest path.
In the first stage, for each cell an EM-SIC block is applied
to firstly cross-correlating the time-domain PRS signal with
the received samples to obtain the correlations Rp,ℓ̃(s) [n].
Then, the residual FO is estimated by cross-correlating the
correlations Rp,ℓ̃(s) [n] among the 8 PRS symbols in each
subframe. After the FOC, the strongest channel-path is found
at the delay that maximizes the coherent addition of the
correlations, and the corresponding channel-tap is estimated
with LMMSE filtering. Finally, with the estimated FO and
channel-tap, the time-domain PRS samples are regenerated and
removed from the received signal for each of the cells. The
same operations are implemented in a successive way until
all P cells have been processed. Then in the next EM-SIC
iteration, before computing the correlations Rp,ℓ [n], for each
cell the PRS signal subtracted from the previous iteration is
added back. Such an EM-SIC based estimation algorithm was
first used to decompose the superposed signals for iterative
channel estimation and then also applied for PDSCH detection
with inter-cell interferences in LTE systems, which was shown
to work well [7]. The EM-SIC in the first stage can be repeated
B. Stage 1a: Correlation Per OFDM Symbol
As the operation in the EM-SIC block is the same for
each detected cell, we elaborate the operations for the pth
cell. At a first step, the cross-correlation Rp,ℓ [n] in (7) is
computed within the maximal delays M̃ − M for each of
the 8 OFDM symbols carrying PRS signal and for all the P
cells independently. Before summing Rp,ℓ [n] over 8 OFDM
symbols, we first need to estimate and correct the FO to
improve the detection performance.
C. Stage 1b: Residual FO Estimation
Since there have been FOC operations using the NB-IoT
synchronization signals (NPSS and NSSS) at the synchronization steps, the residual FO can be considered relatively small
such that the coherent addition inside one OFDM symbols
is still applicable. As there are in total 8 OFDM symbols
transmitted, the FO ǫp can be estimated based on Rp,ℓ̃(s) [n].
Note that, as depicted in Fig. 2, the maximal separation
between two OFDM symbols carrying PRS is 10 OFDM
symbols. Therefore, the largest residual FO, normalized by the
subcarrier frequency-spacing, that can be estimated is ±0.05.
To estimate the residual FO, we cross-correlate Rp,ℓ̃(s) [n] at
each delay n, and the best-linear-unbiased-estimator (BLUE)
for ǫp [n] is
ǫ̃_p[n] = Σ_{m=1}^{4} w(m) φ(m, n),   (11)
where φ(m, n) equals
φ(m, n) = ( N / (2πM(8 − m)) ) Σ_{s=0}^{7−m} arg{ R_{p,ℓ̃(s)}[n] R*_{p,ℓ̃(s+m)}[n] } / ( ℓ̃(s+m) − ℓ̃(s) ),   (12)
and the operation arg{·} returns the angle which belongs to
[−π, π). The combining coefficients w[m] are set to
w = [0.4762 0.3095 0.1667 0.0476],
(13)
computed according to [8, eq. (16)], which achieves the minimum mean-squared estimation error (MSE) with 8 PRS OFDM symbols.
With ǫ̃p [n] estimated in (11), we implement FOC and coherently add the correlations for 8 PRS OFDM symbols to obtain
R_p[n] = R_{p,ℓ̃(0)}[n] + Σ_{s=1}^{7} e^{−j2π ǫ̃_p[n] M(ℓ̃(s)−ℓ̃(0))/N} R_{p,ℓ̃(s)}[n].   (14)
Then, the residual FO estimate for the pth cell is set to

ǫ̂_p = ǫ̃_p[ñ_p],   (15)

where the index ñ_p is found such that

R_p[ñ_p] = max_n {|R_p[n]|}.   (16)

Then, the condition (9) is verified to decide whether there is a PRS signal received from the pth cell. If (9) is not satisfied, the detector directly moves to process the next cell and removes the pth cell from the current EM-SIC iteration.
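The residual-FO estimation and coherent combining of Stage 1b can be prototyped as follows, a simplified NumPy version of (11)-(14). The PRS symbol indices ℓ̃(s) are passed in rather than derived from the PRS mapping pattern, which is an assumption of this sketch.

```python
import numpy as np

def blue_fo_estimate(R, l_tilde, M, N):
    """BLUE residual-FO estimate and FO-corrected coherent combining, eqs. (11)-(16).

    R       : array of shape (8, n_delays), per-symbol correlations R_{p, l~(s)}[n]
    l_tilde : the 8 OFDM-symbol indices carrying PRS
    """
    w = np.array([0.4762, 0.3095, 0.1667, 0.0476])            # combining weights, eq. (13)
    n_delays = R.shape[1]
    eps = np.zeros(n_delays)
    for m in range(1, 5):                                      # eqs. (11)-(12)
        acc = np.zeros(n_delays)
        for s in range(8 - m):
            ang = np.angle(R[s] * np.conj(R[s + m]))
            acc += ang / (l_tilde[s + m] - l_tilde[s])
        eps += w[m - 1] * acc * N / (2 * np.pi * M * (8 - m))
    # eq. (14): FO-corrected coherent addition of the 8 PRS symbols at every delay
    Rc = R[0].copy()
    for s in range(1, 8):
        Rc += np.exp(-1j * 2 * np.pi * eps * M * (l_tilde[s] - l_tilde[0]) / N) * R[s]
    n_hat = int(np.argmax(np.abs(Rc)))                         # eq. (16)
    return eps[n_hat], n_hat, Rc                               # eq. (15)
```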
Fig. 5. The normalized correlations and normalized mean-squared-error
(NMSE) of the interpolations for one PRS OFDM symbol (without CP) on
time-domain under SNR=0 dB at sampling-rate 1.92 and 30.72 MHz, and the
interpolated correlations based on 1.92 MHz and with W = 20, respectively.
D. Stage 1c: Channel Estimation and Interference Cancellation
As we assume that the fading-channel is approximately
constant within one subframe, the channel estimate for the
strongest channel-path of the pth cell is
ĥ_p = (1/(8M)) Σ_{s=0}^{7} Σ_{k=ñ_p}^{ñ_p+M−1} ( ỹ[k+ℓ̃(s)M] / s_{p,ℓ̃(s)}[k−ñ_p] ) e^{−j2π ǫ̂_p (k+ℓ̃(s)M)/N},   (17)
where without loss of generality, we assume s_{p,ℓ}[k] ≠ 0 for
all k. The purified received samples ỹ[k] after removing the
regenerated PRS signal from the first p cells equals
ỹ[k+ℓ̃(s)M] = y[k+ℓ̃(s)M] − Σ_{q=0}^{p−1} h̃_q s_{q,ℓ̃(s)}[k−ñ_q] e^{−j2π ǫ̂_q (k+ℓ̃(s)M)/N},   (18)
where the refined channel estimate h̃p with LMMSE filter is
h̃_p = ĥ_p / ( 1 + σ̃_p²/|ĥ_p|² ),   (19)
and the noise density σ̃p2 is estimated according to
σ̃_p² = (1/(8M)) Σ_{s=0}^{7} Σ_{k=ñ_p}^{ñ_p+M−1} | ỹ[k+ℓ̃(s)M] − ĥ_p s_{p,ℓ̃(s)}[k−ñ_p] e^{−j2π ǫ̂_p (k+ℓ̃(s)M)/N} |².   (20)
After all PRS symbols are estimated and removed from y[k],
the data ỹ[k] is sent to the EM-SIC block for processing
the next cell. As the PRS signal from the pth cell has been
removed from ỹ[k], the detection performance of the remaining
cells is improved, especially under the case that the received
signal power from the pth cell is stronger than the others.
The EM-SIC algorithm (comprising the processes in Sec.
III B-D) is repeated for the remaining cells until the ToAs
have been estimated for all cells. Once done, the received
signal ỹ[k] ideally comprises only noise and remaining PRS
signals. Hence, at the beginning of the next EM-SIC iteration
for each cell, the regenerated and subtracted signal in (18)
corresponding to that cell is added back to ỹ[k] for a refined
detection as depicted in Fig. 3.
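A compact sketch of the per-cell channel estimation, LMMSE refinement and interference cancellation of (17)-(20) is given below. It assumes a single strongest path, one PRS subframe (8 symbols) and in-place subtraction from the received buffer, and it is not the authors' implementation.

```python
import numpy as np

def estimate_and_cancel(y, prs_symbols, symbol_starts, n_hat, eps_hat, N):
    """Channel estimate, LMMSE refinement and PRS cancellation for one cell, eqs. (17)-(20).

    y (complex array) is modified in place: the regenerated PRS of this cell is subtracted (eq. (18)).
    """
    M = len(prs_symbols[0])
    k = np.arange(n_hat, n_hat + M)
    h_hat = 0.0 + 0.0j
    for s, off in zip(prs_symbols, symbol_starts):
        rot = np.exp(-1j * 2 * np.pi * eps_hat * (k + off) / N)
        h_hat += np.sum(y[off + n_hat: off + n_hat + M] / s * rot)          # eq. (17)
    h_hat /= 8 * M
    err = 0.0
    for s, off in zip(prs_symbols, symbol_starts):
        rot = np.exp(-1j * 2 * np.pi * eps_hat * (k + off) / N)
        err += np.sum(np.abs(y[off + n_hat: off + n_hat + M] - h_hat * s * rot) ** 2)
    sigma2 = err / (8 * M)                                                   # eq. (20)
    h_tilde = h_hat / (1 + sigma2 / abs(h_hat) ** 2)                         # LMMSE shrinkage, eq. (19)
    for s, off in zip(prs_symbols, symbol_starts):
        rot = np.exp(-1j * 2 * np.pi * eps_hat * (k + off) / N)
        y[off + n_hat: off + n_hat + M] -= h_tilde * s * rot                 # SIC step, eq. (18)
    return h_tilde, sigma2
```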
E. Stage 2a: Interpolate the Correlations
As the PSD of the PRS signal is band-limited inside one
PRB, we can therefore interpolate the correlations Rp [n] to
a higher sampling-rate to improve the resolutions of estimated ToAs. With the coarse estimates obtained from the low
sampling-rate samples output from the first stage (after SIC
and FOC process), the upsampled correlations R̂p [m] can be
interpolated using a sinc-function according to
R̂_p[m] = Σ_{n=−W}^{W} R_p[n] · sin( π(m/V − n) ) / ( π(m/V − n) ),   (21)
where V is the upsampling rate, and W specifies the window size for searching around the coarse ToA estimate ñ_p. As can be seen in Fig. 5, with R_p[n] calculated at the sampling-rate 1.92 MHz, setting W = 20 is sufficient to capture the main lobe of the normalized PRS correlation function. With V = 16, the interpolated correlations at 30.72 MHz are also depicted, and they are shown to be accurate at an SNR of 0 dB.
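The band-limited interpolation (21) reduces to a small matrix-vector product; the sketch below uses NumPy's sinc, whose definition sin(πx)/(πx) matches the kernel in (21).

```python
import numpy as np

def upsample_correlation(R_window, V=16):
    """Sinc interpolation of the correlation inside the search window, eq. (21).

    R_window : R_p[n] for n = -W..W around the coarse ToA estimate (length 2W+1)
    Returns R_hat[m] on the V-times finer grid.
    """
    W = (len(R_window) - 1) // 2
    n = np.arange(-W, W + 1)
    m = np.arange(V * (2 * W + 1)) / V - W            # fine grid in coarse-sample units (m/V in (21))
    kernel = np.sinc(m[:, None] - n[None, :])          # np.sinc(x) = sin(pi x) / (pi x)
    return kernel @ R_window

# usage: interpolate a 1.92 MHz window (W = 20) to the 30.72 MHz grid (V = 16)
rng = np.random.default_rng(0)
R_coarse = rng.normal(size=41) + 1j * rng.normal(size=41)
R_fine = upsample_correlation(R_coarse, V=16)
```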
F. Stage 2b: Iterative MPD
After obtaining the upsampled correlations R̂p [m], we can
perform the ToA detection of the first arrival-path. Directly
comparing the maximal value of R̂p [m] to a predefined
threshold as in (8) results in poor performance due to the
strong correlations of the PRS signal as shown in Fig. 5. For
instance, with an AWGN channel, the ToA estimate should
be the index corresponding to the maximal correlation value.
However, with a threshold η1 < 1, the detection (8) provides
a ToA estimate which is smaller than the true ToA. In order
to cope with fading-channels, the threshold η_1 needs to adapt accordingly, which is a difficult design task. Instead we utilize a method similar to [6] to iteratively implement MPD of
the fading-channel taking into account the ACF of the PRS
signal. We claim that a first path is found at position ñp,0 if
R̂_p[ñ_{p,0}] = max_m { |R̂_p[m]| } > (γ/M̂) Σ_{m=0}^{M̂−1} |R̂_p[m]|,   (22)
where M̂ = V (2W + 1) is the length of R̂p [m], and γ is a
predefined peak-to-average (PAR) threshold.
Then we update R̂_p[m] as

R̃_p[ñ_{p,0}+k] = R̂_p[ñ_{p,0}+k] − R̂_p[ñ_{p,0}] R_0[k],   (23)

where R_0[k] is the normalized ACF of the time-domain PRS signal with delay k. Then we check again if (22) holds by replacing R̂_p[m] with R̃_p[m]. If so, we claim that a second valid path is present at position ñ_{p,1} where the maximum of R̃_p[n] is attained, and then we update R̃_p[ñ_{p,1}+k] again by subtracting the impact of the ACF of the PRS signal according to (23). We repeat this process iteratively until the condition (22) is violated, and denote the detected channel-path delays as ñ_{p,i}. Then, the ToA estimate ñ_p for the pth cell is set to

ñ_p = min_i {ñ_{p,i}},   (24)

which is the estimated ToA of the detected first arrival-path. Note that, when the path delays of the fading channel are smaller than the main lobe of the ACF depicted in Fig. 5, the peak ñ_{p,0} found in (22) can be an artificial peak caused by overlapping of the ACFs corresponding to different channel taps. Therefore, a more accurate ToA estimator should jointly detect the multi-path components of the fading-channel, which increases the complexity and is out of the scope of this paper.
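The iterative MPD of (22)-(24) can be prototyped as below. For simplicity the sketch works on correlation magnitudes and assumes a symmetric normalized ACF indexed by |k|, which is a simplification of the complex-valued update (23).

```python
import numpy as np

def iterative_mpd(R_hat, acf, gamma=7.0, max_paths=8):
    """Iterative multi-path detection on the upsampled correlation, eqs. (22)-(24).

    R_hat : upsampled correlation values; acf : normalized PRS ACF with acf[0] = 1
    Returns (first_arrival_index, all_detected_path_indices).
    """
    R = np.abs(np.asarray(R_hat, dtype=complex)).astype(float)
    M_hat = len(R)
    paths = []
    for _ in range(max_paths):
        peak = int(np.argmax(R))
        if R[peak] <= (gamma / M_hat) * R.sum():             # PAR gate, eq. (22)
            break
        paths.append(peak)
        k = np.arange(M_hat) - peak                          # subtract the ACF contribution, eq. (23)
        valid = np.abs(k) < len(acf)
        R[valid] -= R[peak] * np.asarray(acf)[np.abs(k[valid])]
        R = np.maximum(R, 0.0)
    return (min(paths) if paths else None), paths            # eq. (24): earliest detected path
```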
IV. NUMERICAL RESULTS
In this section, we provide numerical results to show the
promising performance of the proposed EM-SIC based ToA detection for positioning of NB-IoT devices.
A. ToA Detection Performance with 3 Cells
Firstly we consider ToA detection with 3 cells with PCIs
equal to 0, 1 and 2 at sampling rate 1.92 MHz. The ToA np,0
(0 ≤ p ≤ 2) is set to 320, 480 and 640 Ts , and the transmit
powers of the three cells are set to 0, -4 and -8 dB, respectively.
We first evaluate the ToA detection performance without
residual FO under AWGN channel, in which case the channel
estimation degrades to received power detection. As shown in
Fig. 6, compared to a ToA detector without IC, the proposed
EM-SIC algorithm provides substantial SNR (measured by
the ratio of the transmit power of the strongest cell and the
noise density) gains up to 10 dB for the cell that has the least
transmit power.
In Fig. 7 we repeat the tests under ETU-3Hz channel, and
set the normalized residual FO for the three cells to 0.02, 0.01
and 0.01, respectively. As can be seen, the proposed EM-SIC
detector renders significant detection improvements compared
to the detector that does not apply FOC or SIC processes.
Fig. 6. The detection probability (Pd, with an error tolerance of +/-2 samples) under AWGN channel with 3 cells, running the proposed EM-SIC detector only once.

Fig. 7. The detection probability under ETU-3Hz channel for 3 cells with the proposed EM-SIC detector with 2 global iterations.

B. ToA Detection Performance in Practical Scenarios

Next we consider an NB-IoT system with the simulation parameters listed in Table I, and the OTDOA based positioning performance is evaluated with 6 cells with the deployment geometry depicted in Fig. 8. We uniformly generate 200 NB-IoT devices associated to the cell with PCI 8. In all simulations we set the thresholds η_1 = 0.8, η_2 = 3 and γ = 7 and make no effort to further optimize them for each individual channel condition. At the beginning of all detection methods we use (9) to decide whether a PRS signal is present or not, and the false-alarm probabilities of detecting the PRS are similar for all evaluated detectors.

In Fig. 9 we evaluate the probability density function (PDF) of the ToA estimation error when only transmitting the PRS from the cell with PCI 8 and disabling the other cells under AWGN channel. The estimation errors are calculated as the difference between the estimated and the true ToA values. As can be seen, the FOC and the upsampling using interpolations improve the accuracy of the ToA estimation significantly.

TABLE I. SIMULATION PARAMETERS FOR OTDOA-BASED POSITIONING.
Number of e-NodeBs: 6
Inter-site distance: 1.732 km
Frequency band: 900 MHz
Channel model: AWGN, ETU
Number of e-NodeB antennas: 1
Number of device antennas: 1
Macro transmit power: 46 dBm for 1.92 MHz
Thermal noise density: -174 dBm/Hz for AWGN; -184 dBm/Hz for ETU
Path loss model [9] (d in km): L = 120.9 + 37.6 log10(d)
Shadowing standard deviation: 8 dB
Shadowing correlation: between e-NodeBs 0.5; between sectors of an e-NodeB 1.0
Number of PRB / PRS occasions / consecutive PRS subframes: 1 / 1 / 1
PRS muting: False
Normalized residual FO: uniformly drawn in [-0.03, 0.03]
Fig. 8. The geometric deployment of the 6 e-NodeBs (marked as magenta circles, each comprising 3 sectors) and 200 NB-devices (marked as red triangles) that are associated to the cell with PCI 8.

Fig. 9. The PDF of ToA estimation errors. The iterative MPD performs better than the threshold based detection (8) under AWGN channel.

Fig. 10. The OTDOA based positioning performance and localization ratio under AWGN channel (no IC: 67% localized; EM-SIC w/o FOC and w/o upsampling: 88% localized; EM-SIC with FOC and w/o upsampling: 97% localized; proposed EM-SIC: 97% localized).

C. OTDOA based Positioning Performance

In Fig. 10 and Fig. 11 we plot the cumulative distribution function (CDF) of the horizontal positioning error under
AWGN channel and ETU-3Hz channels, respectively. The
localization percentages of different detection methods are
also shown in the legends, where we claim a successful
positioning only when at least three cells are detected and
the positioning error-distance is less than 500 m. As can
be seen, under both channels the proposed EM-SIC detector
performs much better than a traditional no IC detector that
does not consider the impairments of the interference, residual
FO and fading-channels. Moreover, as can be seen, the FOC
and upsampling processes greatly improve the positioning
performance under the AWGN channel, while under the ETU-3Hz channel, the gains in positioning accuracy introduced by the FOC and upsampling processes are marginal due to inaccurate ToA estimates. Nevertheless, with the FOC process the localization ratio is still greatly improved, while the upsampling process further slightly improves it.
Fig. 11. The OTDOA based positioning performance and localization ratio under ETU-3Hz channel (no IC: 56% localized; EM-SIC w/o FOC and w/o upsampling: 76% localized; EM-SIC with FOC and w/o upsampling: 85% localized; proposed EM-SIC: 86% localized).

V. SUMMARY

We have considered the observed-time-difference-of-arrival (OTDOA) based positioning for an NB-IoT device with multiple cells. We have proposed an expectation-maximization based successive interference cancellation (EM-SIC) detector to jointly consider the fading-channel, the residual frequency offset (FO), and the time-of-arrival (ToA) of the first arrival-path. To design a low complexity ToA detector, the EM-SIC algorithm works with received time-domain samples at a low sampling-rate. The resolution of the estimated ToA is further refined by upsampling the correlations obtained at the low sampling-rate inside a small search window to a higher sampling-rate
with interpolation. Numerical simulations have shown that,
the proposed EM-SIC ToA detector performs robustly against
impairments introduced by inter-cell interference, fadingchannel and residual FO, which shows significant signal-tonoise (SNR) gains compared to traditional ToA detectors that
do not thoroughly consider these impairments.
REFERENCES
[1] S. Fischer, “Observed time difference of arrival (OTDOA) positioning
in 3GPP LTE,” White Paper, Qualcomm Technologies Inc., Jun. 2014.
[2] 3GPP TS 36.211, “Evolved Universal Terrestrial Radio Access (E-UTRA): Physical channels and modulation,” Release 14, Dec. 2016.
[3] J. Liu and S. Feng, “RSTD performance for small bandwidth of OTDOA
positioning in 3GPP LTE,” IEEE Veh. Tech. Conf. (VTC-Fall), Las
Vegas, Sep. 2013, pp. 1-5.
[4] W. Xu, M. Huang, C. Zhu, and A. Dammann, “Maximum likelihood
TOA and OTDOA estimation with first arriving path detection for 3GPP
LTE system,” Trans. Emerging Tel. Tech., vol. 27, no. 3. pp. 339-356,
Mar. 2016.
[5] S. M. Kay, “Fundamentals of statistical signal processing, volume I:
Estimation theory,” Prentice Hall signal processing series, 1993.
[6] H. Rydén, A. A. Zaidi, S. M. Razavi, F. Gunnarsson, and I. Siomin,
“Enhanced time of arrival estimation and quantization for positioning in
LTE networks,” IEEE Int. Symp. on Personal, Indoor and Mobile Radio
Commun. (PIMRC), Valencia, Sep. 2016, pp. 1-6.
[7] S. Hu, G. Wu, B. Priyanto, F. Rusek, S. Kant, and J. Chen, “Iterative
interference cancellation method”, Patent US20140369300A1, filed on
Aug. 2014.
[8] M. Morelli and U. Mengali, “An improved frequency offset estimator
for OFDM applications,” IEEE Comm. Lett., vol. 3, no. 3, pp. 75-77,
Mar. 1999.
[9] 3GPP TSG RAN WG1, R1-168310, “WF on simulation assumption for
NB-IoT positioning,” Gothenburg, Sweden, Aug. 22-27, 2016.
NEW CLASSES OF EXAMPLES SATISFYING THE THREE MATRIX
ANALOG OF GERSTENHABER’S THEOREM
arXiv:1711.10109v1 [] 28 Nov 2017
JENNA RAJCHGOT AND MATTHEW SATRIANO
Abstract. In 1961, Gerstenhaber proved the following theorem: if k is a field and X and Y are
commuting d × d matrices with entries in k, then the unital k-algebra generated by these matrices
has dimension at most d. The analog of this statement for four or more commuting matrices is
false. The three matrix version remains open. We use commutative-algebraic techniques to prove
that the three matrix analog of Gerstenhaber’s theorem is true for some new classes of examples.
In particular, we translate this three commuting matrix statement into an equivalent statement
about certain maps between modules, and prove that this commutative-algebraic reformulation is
true in special cases. We end with ideas for an inductive approach intended to handle the three
matrix analog of Gerstenhaber’s theorem more generally.
Contents
1. Introduction and statement of results — 1
2. Reformulating Statement 1.2 in terms of module morphisms — 4
3. Reducing Theorem 1.5 to a special case — 7
4. Completing the proof of Theorem 1.5 — 10
5. Proof of Theorem 1.6 — 13
6. Some other instances where Statement 1.2 holds — 14
7. An inductive approach to Statement 1.2 — 14
References — 16
1. Introduction and statement of results
1.1. An overview of Gerstenhaber’s theorem and its three matrix analog. Let k be a
field and let Md (k) denote the space of d × d matrices with entries in k.
Question 1.1. Let X1 , . . . , Xn ∈ Md (k) be pairwise commuting matrices. Must the unital k-algebra
that they generate be a finite-dimensional vector space of dimension at most d?
When n = 1, the answer to Question 1.1 is “yes” by the Cayley-Hamilton theorem: X ∈ Md (k)
satisfies its characteristic polynomial, thus I, X, X 2 , . . . , X d−1 is a vector space spanning set for the
algebra generated by X.
When n ≥ 4, the answer is “no” in general. The standard n = 4 counter-example is given by the
matrices E13 , E14 , E23 , E24 ∈ M4 (k), where Eij denotes the matrix with a 1 in position (i, j) and
0s elsewhere. These 4 matrices generate a 5-dimensional algebra.
The first interesting case is n = 2. Here the answer to Question 1.1 is “yes.” This result is
often called Gerstenhaber’s theorem and was proved in [Ger61]. Gerstenhaber’s proof was
algebro-geometric, and relied on the irreducibility of the commuting scheme C(2, d) of pairs of
d × d commuting matrices (a fact also proved in the earlier paper [MT55]). Some years later,
J.R. is partially supported by NSERC grant RGPIN-2017-05732.
M.S. is partially supported by NSERC grant RGPIN-2015-05631.
linear algebraic proofs (see [BH90, LL91]) and commutative algebraic proofs (see [Wad90, Ber13])
of Gerstenhaber’s theorem were found. More detailed summaries on the history and approaches to
Gerstenhaber’s theorem can be found in [Set11, HO15].
The case n = 3 is still open, and is the subject of this paper. We refer to the following statement
as the three matrix analog of Gerstenhaber’s theorem.
Statement 1.2. If X, Y, Z ∈ Md (k) are matrices which pairwise commute, then the unital k-algebra
generated by X, Y , and Z is a finite-dimensional vector space of dimension at most d.
To prove Statement 1.2, one might try to mimic the algebro-geometric proof of Gerstenhaber’s
theorem. This approach succeeds whenever the affine scheme of triples of commuting d×d matrices,
denoted C(3, d), is irreducible. Consequently, Statement 1.2 is true when d ≤ 10 and k is of
characteristic 0 (see [Š12] and references therein). However, since C(3, d) has multiple irreducible
components when d ≥ 30 [Gur92, HO01], a different approach is necessary to handle the general
case.
1.2. Summary of the main results. In this paper, we use commutative-algebraic methods to
study the three matrix analog of Gerstenhaber’s theorem. We do so by reformulating Statement
1.2 in terms of morphisms of modules, our key technical tools being Propositions 1.8 and 1.10.
Although this is nothing more than a simple reformulation, an approach along these lines appears
to be new.
We work over an arbitrary field k and let S = k[x1 , . . . , xn ] denote a polynomial ring in n
variables. One can easily rephrase Question 1.1 on n commuting matrices in terms of S-modules.
We provide a proof of this in Section 2 (see also [Ber13, §5]).
Proposition 1.3. Question 1.1 has answer “yes” if and only if for all S-modules N which are
finite-dimensional over k, we have
(1.4)
dim S/ Ann(N ) ≤ dim N.
Thus, Statement 1.2 is true if and only if inequality (1.4) holds for all finite-dimensional k[x, y, z]modules N .
From the perspective of Proposition 1.3, one can approach Statement 1.2 by successively considering modules of increasing complexity. The simplest modules to consider are cyclic ones: when
N = S/I it is obvious that equation (1.4) holds. The next simplest case to consider is extensions
0 → S/I → N → S/m → 0,
where m ⊆ S is a maximal ideal. This case is much less obvious, and is the central focus of this
paper. Our main result is that Statement 1.2 holds for such modules:
Theorem 1.5. Let k be an infinite field, S = k[x, y, z], I ⊆ S an ideal of finite colength, and
m ⊆ S a maximal ideal. If N is an extension of S-modules
0 → S/I → N → S/m → 0,
then the inequality (1.4) holds for N .
Using Theorem 1.5, we also obtain the following more general result.
Theorem 1.6. Let k be an infinite field, S = k[x, y, z], and N an S-module which is finite dimension over k. Suppose N = N1 ⊕ · · · ⊕ Nr where for each i, one of the following holds:
(1) Ni is a cyclic module, or
(2) Ni is local Gorenstein, or
2
(3) there is an extension
0 → Mi → Ni → S/mi → 0
L
where mi ⊆ S is a maximal ideal and Mi =
ℓ Mi,ℓ with each Mi,ℓ a cyclic or local
Gorenstein module.
Then the inequality (1.4) holds for N .
Before discussing the technique of proof, it is worth remarking why Theorems 1.5 and 1.6 are
more difficult than the cyclic case N = S/I. One reason is that the cyclic case holds over polynomial
rings S = k[x1 , . . . , xn ] in arbitrarily many variables, whereas Theorem 1.5 is specific to 3 variables.
Indeed, the technique of proof must be specific to 3 variables since there are counter-examples of
this form in 4 variables:
Example 1.7 (Theorems 1.5 and 1.6 are false for 4 variables). Let S = k[x, y, z, w] and m = (x, y, z, w).
Recall the standard 4 variable counter-example is given by the matrices E13 , E14 , E23 , E24 ∈ M4 (k).
Via the proof of Proposition 1.3, this corresponds to the S-module N given as follows: we have an
extension
0 → S/((x, y) + m²) →^{π} N → S/m → 0
so that N is generated by the elements of M := S/((x, y) + m²) and an additional element f ∈ N such that π(f) = 1 ∈ S/m. The S-module structure is given by
zf = wf = 0,   xf = z ∈ M,   yf = w ∈ M.
One checks Ann(N) = m² so that
dim S/Ann(N) = 5 > 4 = dim N
which violates the inequality of Proposition 1.3.
⋄
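The dimension count in Example 1.7 is easy to confirm numerically. The following NumPy sketch (a numerical illustration of Question 1.1 only, not part of any proof) spans the unital algebra generated by a list of commuting matrices by monomials of bounded degree and reports its dimension; the degree bound used is an assumption that suffices for this small example.

```python
import numpy as np
from itertools import product

def algebra_dimension(mats, max_degree=None):
    """Dimension of the unital k-algebra generated by commuting d x d matrices,
    computed as the rank of the span of all monomials of total degree < max_degree
    (defaulting to d, which is enough for the small examples considered here)."""
    d = mats[0].shape[0]
    deg = max_degree if max_degree is not None else d
    basis = [np.eye(d)]
    for degree in range(1, deg):
        for combo in product(range(len(mats)), repeat=degree):
            P = np.eye(d)
            for i in combo:
                P = P @ mats[i]
            basis.append(P)
    return np.linalg.matrix_rank(np.array([B.flatten() for B in basis]))

def E(i, j, d=4):
    m = np.zeros((d, d))
    m[i - 1, j - 1] = 1.0
    return m

mats = [E(1, 3), E(1, 4), E(2, 3), E(2, 4)]   # the standard 4-variable counter-example
print(algebra_dimension(mats))                 # prints 5, exceeding d = 4
```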
The proof of Theorem 1.5 consists of two steps. The first (see Theorem 3.2) is showing that if a
counter-example of this form exists, then we can reduce to the case where I = (F12 , F13 , F23 , g), the
Fij are the maximal minors of a specific 2 × 3 matrix, g is a non-zero divisor, and N satisfies certain
additional properties. The second step (see §4) consists of showing that no such counter-example
exists.
The case of Gorenstein modules, and extensions of S/m by Gorenstein modules, are handled in
Section 5. This, together with some preliminary results in Section 2, quickly reduces Theorem 1.6
to the case of Theorem 1.5.
As mentioned above, the following is a key tool we use in the proofs of Theorems 1.5 and 1.6.
Proposition 1.8. Let S = k[x, y, z] be a polynomial ring and m = (x, y, z). Then Statement 1.2 is true if and only if for all finite-dimensional S-modules M with √(Ann M) = m, and all S-module maps β : m → M, we have
(1.9)
dim S/ Ann(M ) + dim β(Ann M ) ≤ dim M + 1.
From the perspective of Proposition 1.8, Statement 1.2 is an assertion about bounding the
dimension of the image of Ann(M ) under a module map. Along these lines, we obtain an inductive
approach to Statement 1.2:
Proposition 1.10. Let S = k[x, y, z] and m = (x, y, z). Then Statement 1.2 is equivalent to the following assertion: for all finite-dimensional S-modules M with √(Ann M) = m, all submodules M′ ⊆ M with dim(M/M′) = 1, all ideals J ⊆ m, and all S-module maps β : J → M′, if
dim J/(J ∩ Ann M′) + dim β(J ∩ Ann M′) ≤ dim M′
then
dim J/(J ∩ Ann M) + dim β(J ∩ Ann M) ≤ dim M.
3
We prove Proposition 1.8 in Section 2, and Proposition 1.10 in Section 7.
Notation. All rings in this paper are commutative with unit. We will let Supp(M ) and Soc(M )
denote the support, respectively the socle, of a module M . Unless otherwise specified, dim(M ) and
the word dimension will refer to the dimension of a module M as a vector space over a given base
field.
Acknowledgments. It is a pleasure to thank Jason Bell, Mel Hochster, and Steven Sam for
many enlightening conversations. We thank the Casa Mathemática Oaxaca for the wonderful
accommodations where part of this work took place. We are especially grateful to Matt Kennedy
for suggesting this problem. We performed many computations in Macaulay2 [GS], which inspired
several ideas.
2. Reformulating Statement 1.2 in terms of module morphisms
Throughout this section, we fix a field k, let S = k[x1 , . . . , xn ] and m = (x1 , . . . , xn ). Although
our primary focus is the three matrix analogue of Gerstenhaber’s theorem, our proofs work for
arbitrary fields and arbitrarily many matrices. For convenience, consider the following:
Statement 2.1. The algebra generated by commuting matrices X1 , X2 , . . . , Xn ∈ Md (k) has dimension at most d.
For n > 3 this statement is false, n < 3 it is true, and the case when n = 3 is Statement 1.2.
Our goal in this section is to prove the following version of Proposition 1.8 which is valid for n
commuting matrices.
Proposition 2.2. Statement 2.1 is true if and only if for all finite-dimensional S-modules M with √(Ann M) = m, and S-module maps β : m → M, we have
(2.3)
dim S/ Ann(M ) + dim β(Ann M ) ≤ dim M + 1.
We begin with a different module-theoretic reformulation of Statement 2.1. This appeared as
Proposition 1.3 in the introduction.
Proposition 2.4. Statement 2.1 holds if and only if for all S-modules N which are finite-dimensional
over k, we have
dim S/ Ann(N ) ≤ dim N.
Proof. Given commuting matrices X1 , . . . , Xn ∈ Md (k), we obtain an S-module structure on k⊕d
where we define multiplication by xi to be the action of Xi . Conversely, given any S-module N
of dimension d over k, after fixing a basis we have N ≃ k⊕d as k-vector spaces; multiplication
by x1 , . . . , xn on N can then be viewed as matrices X1 , . . . , Xn ∈ Md (k) which commute since
xi xj = xj xi .
Let A denote the unital k-algebra generated by our commuting matrices. We then have a surjection π : S → A given by π(xi ) = Xi , and so A ≃ S/ ker π as S-modules. Under the correspondence
in the previous paragraph, we have ker π = Ann(N ) and so A = S/ Ann(N ). Therefore, the
inequality dim A ≤ d is equivalent to the inequality dim S/ Ann(N ) ≤ dim N .
Example 2.5. Following the above proof, we describe some triples of commuting matrices obtained
from our main theorem, Theorem 1.5. Let S = k[x, y, z] and m = (x, y, z). Let I ⊆ S be an ideal
of finite colength. Each module N that fits into a short exact sequence
0 → S/I →^{i} N →^{π} S/m → 0
has an ordered basis of the form i(m1 ), . . . , i(md−1 ), f where m1 , . . . , md−1 is a basis of S/I and
π(f ) = 1 in S/m. Let nj = i(mj ). Let X, Y, Z ∈ Md (k) be determined by multiplication of x, y, z
4
on N and the given basis. To describe the first d − 1 columns of X, observe that each xnj is a
linear combination of n1 , . . . , nd−1 , and this linear combination is determined by multiplication by
x in S/I. Thus, if S/I with basis m1 , . . . , md−1 is given, the first d − 1 columns of X are fixed
and do not depend on the choice of extension of S/I by S/m. On the other hand, the last column
of X may be chosen almost arbitrarily. Indeed, xf is a linear combination of n1 , . . . , nd−1 (since
π(xf ) = 0 in S/m), but there are no restrictions on which linear combinations may occur: given
any linear combination of n1 , . . . , nd−1 , one can construct an extension N such that xf is the given
linear combination. The matrices Y and Z are described analogously. In other words, the matrices
X, Y , and Z take the form
X = ( X′  a ; 0···0  0 ),   Y = ( Y′  b ; 0···0  0 ),   Z = ( Z′  c ; 0···0  0 ),
where a = (a_1, . . . , a_{d−1})ᵀ, b = (b_1, . . . , b_{d−1})ᵀ, and c = (c_1, . . . , c_{d−1})ᵀ are the last columns,
where X ′ , Y ′ , Z ′ ∈ Md−1 (k) are determined by multiplication, in S/I, by x, y, and z respectively,
and the entries aj , bj , cj can be chosen arbitrarily by choosing an appropriate extension of S/I by
S/m.
⋄
The next observation allows us to handle the case of direct sums of modules and in particular,
reduce to the local case.
Lemma 2.6. Let N = N1 ⊕ · · · ⊕ Nr where each Ni is an S-module which is finite-dimensional
over k. If dim S/ Ann(Ni ) ≤ dim Ni for all i, then dim S/ Ann(N ) ≤ dim N .
Proof. We have Ann(N) = ∩_i Ann(N_i), so the diagonal map

S/Ann(N) → ⊕_i S/Ann(N_i)

is injective. As a result,

dim S/Ann(N) ≤ Σ_i dim S/Ann(N_i) ≤ Σ_i dim N_i = dim N.
Corollary 2.7. If N is an S-module that is finite-dimensional over k, and dim S/ Ann(N ′ ) ≤
dim N ′ for all localizations N ′ of N , then dim S/ Ann(N ) ≤ dim N . In particular, Statement 2.1 is
true if and only if dim S/ Ann(N ) ≤ dim N for all S-modules N which are finite-dimensional over
k and such that Supp N is a point.
Proof. Since N is finite-dimensional as a k-vector space, its support consists of finitely many points,
and N can be written as a direct sum N1 ⊕ · · · ⊕ Nr where Ni are the localizations of N . By
Lemma 2.6, the desired inequality for N is implied by that for each of the Ni . In particular,
dim S/ Ann(N ) ≤ dim N for all N if and only if it holds for those N which are supported at a
point. The corollary then follows from Proposition 2.4.
Remark 2.8. Corollary 2.7 is equivalent to the following statement about commuting matrices:
“Statement 2.1 holds for all choices of pairwise commuting matrices X1 , X2 , . . . , Xn ∈ Md (k) if and
only if it holds for all choices of n pairwise commuting nilpotent matrices.” To see this, we follow
the proof of Proposition 2.4, using the same notation: if the matrices Xi are nilpotent then one
easily checks that S/ Ann N is a local ring with maximal ideal m. Conversely, if S/ Ann N is local
with maximal ideal m then Ann N ⊇ mc for some large enough c. Identifying the variable xi ∈ S
with the matrix Xi as in the proof of Proposition 2.4, and recalling that S/ Ann N ∼
= A, we see
c
that each Xi satisfies Xi = 0, hence is nilpotent.
We next prove Proposition 2.2, and hence Proposition 1.8.
Proof of Proposition 2.2. By Corollary 2.7, we need only show that the inequality in the statement
of the theorem is equivalent to dim S/Ann(N) ≤ dim N for all S-modules N which are finite-dimensional over k and such that Supp N is a point. Let N be such a module. By translation, we can assume without loss of generality that its support is the origin, i.e. √(Ann N) = m. Then the
Jordan-Hölder filtration yields a short exact sequence
0 → M → N → S/m → 0
and this corresponds to a class α ∈ Ext1 (S/m, M ).
From the short exact sequence
we have a long exact sequence
0 → m → S → S/m → 0
0 → Hom(S/m, M ) → M → Hom(m, M ) → Ext1 (S/m, M ) → 0.
So, Ext1 (S/m, M ) ≃ Hom(m, M )/M . Thus, our class α lifts to an S-module map β : m → M . It is
easy to check that Ann(N ) = Ann(M ) ∩ ker β, so
dim S/ Ann(N ) = dim S/ Ann(M ) + dim Ann(M )/(Ann(M ) ∩ ker β)
= dim S/ Ann(M ) + dim β(Ann M ).
Since dim N = dim M + 1, the inequality
dim S/ Ann(M ) + dim β(Ann M ) ≤ dim M + 1
is equivalent to dim S/ Ann(N ) ≤ dim N .
Remark 2.9. The proof of Proposition 2.2 shows that if
0 → M → N → S/m → 0
is the extension corresponding to the map β : m → M , then the inequality (2.3) holds if and only
if the inequality (1.4) holds for N . Note that N only depends on the class [β] ∈ Ext1 (S/m, M ).
Consequently, whether or not the inequality (2.3) holds depends only on the class [β] rather than
the map β itself.
In light of Proposition 2.2, we make the following definition:
Definition 2.10. Let M be an S-module which is finite-dimensional as a k-vector space with
√(Ann M) = m. If β : m → M is an S-module map, we say that (β, M ) is a counter-example if
dim S/ Ann(M ) + dim β(Ann M ) > dim M + 1.
Corollary 2.11. If I ⊆ S is an ideal with S/I finite-dimensional over k, then (β, S/I) is a counterexample if and only if dim β(I) ≥ 2.
Proof. Let M = S/I and notice that Ann(S/I) = I. By definition, (β, S/I) is a counter-example if
and only if
dim S/I + dim β(I) = dim S/ Ann(M ) + dim β(Ann M ) > dim M + 1 = dim S/I + 1.
In other words, it is a counter-example if and only if dim β(I) > 1.
We end with the following result which reduces our search for counter-examples (β, M ) to the
case where M is indecomposable. Note the distinction from Lemma 2.6: the lemma concerns
the inequality dim S/ Ann(M ) ≤ dim M whereas the proposition below concerns the inequality
dim S/ Ann(M̃) ≤ dim M̃, where M̃ is the extension of S/m by M defined by β.
Proposition 2.12. Let β : m → M be an S-module map with M finite-dimensional over k and
satisfying √(Ann M) = m. Suppose M = N1 ⊕ N2 and let πj : M → Nj be the two projections. If
neither (π1 β, N1 ) nor (π2 β, N2 ) is a counter-example, then (β, M ) is also not a counter-example.
Proof. Let M̃ be the extension defined by β, and let M̃j be the extension defined by πj β. We must
show dim S/ Ann(M̃) ≤ dim M̃. We know that
Ann(M̃) = ker β ∩ Ann(M ) = ∩_j ker(πj β) ∩ ∩_j Ann(Nj ) = ∩_j Ann(M̃j ),
so we have
dim S/ Ann(M̃) = Σ_j dim S/ Ann(M̃j ) − dim S/(Ann(M̃1 ) + Ann(M̃2 )).
Since (πj β, Nj ) is not a counter-example, we have dim S/ Ann(M̃j ) ≤ dim M̃j . Also notice that
√(Ann M) = m implies Ann(M̃1 ) + Ann(M̃2 ) ⊆ m. Hence, dim S/(Ann(M̃1 ) + Ann(M̃2 )) ≥ 1. Since
dim M̃ = 1 + Σ_j dim Nj = −1 + Σ_j dim M̃j ,
we have our desired inequality dim S/ Ann(M̃) ≤ dim M̃.
3. Reducing Theorem 1.5 to a special case
Throughout this section, we fix an infinite field k, let S = k[x1 , x2 , x3 ] and m = (x1 , x2 , x3 ). The
goal of the next two sections is to prove Theorem 1.5. By Lemma 2.6, we reduce immediately to
the case where Supp(S/I) = m, so we must prove:
Theorem 3.1. If M is cyclic and Supp M = m, then (β, M ) is not a counter-example in the sense
of Definition 2.10.
The focus of this section is to prove the following theorem, which reduces Theorem 3.1 to a
special case:
Theorem 3.2. Suppose (β, M ) is a counter-example over k with M cyclic and √(Ann M) = m.
Then there exist f1 , f2 , f3 ∈ m and g, h ∈ Ann(M ) with the following properties:
(1) letting Fij = xi fj − xj fi and J = (F12 , F13 , F23 , g), we have h ∉ J, the module S/J is
finite-dimensional over k, and m ∈ Supp(S/J),
(2) letting β ′ : m → S/J be the S-module map defined by β ′ (xi ) = fi , the elements β ′ (g) and
β ′ (h) are linearly independent in the localization of S/(J + ⟨h⟩) at m, and
(3) dim Soc(Sm /Jm ) = 2.
This theorem is proved over the course of §§3.1–3.2. We begin with some preliminary results.
Since M is cyclic, it is of the form S/I where I is an ideal of S with √I = m. By Corollary 2.11,
we know dim β(I) ≥ 2. Furthermore, we can make a minimality assumption: we may assume that
S/I is the cyclic module of smallest dimension for which there exists a counter-example (β, S/I),
i.e. for all cyclic modules S/K with dim S/K < dim S/I and all pairs (γ, S/K), we can assume
dim γ(K) ≤ 1.
Let fi ∈ S such that β(xi ) = fi mod I. Letting Fij = xi fj − xj fi , we see Fij ∈ I. So, we can
write
I = (F12 , F13 , F23 , g1 , . . . , gs )
for some polynomials g1 , . . . , gs ∈ m.
Lemma 3.3. β(I) ⊆ Soc(S/I) is the vector space spanned by the β(gi ). Moreover, there exist i ≠ j
such that β(gi ) and β(gj ) are linearly independent.
Proof. Notice that if p ∈ I, then xi β(p) = pβ(xi ) = 0 in S/I. This shows that β(I) ⊆ Soc(S/I).
Next observe that
β(Fij ) = β(xi )fj − β(xj )fi = fi fj − fj fi = 0.
As a result, β(I) is the ideal generated by the β(gj ) and since the β(gj ) are contained in Soc(S/I),
this ideal is nothing more than the vector space they span. Since dim β(I) ≥ 2, there must be i ≠ j
such that β(gi ) and β(gj ) are linearly independent.
Proposition 3.4. For each 1 ≤ i ≤ 3, we have fi ∈ m.
Proof. Without loss of generality, assume that f1 ∉ m. Since √I = m, we see S/I is an Artin local
ring, and so f1 is a unit in S/I. Since x2 f1 − x1 f2 = F12 is 0 in S/I, we see x2 = x1 f2 f1^{-1} ∈ (x1 ).
Similarly for x3 .
Thus, the maximal ideal m/I of S/I is principally generated by x1 . Any Artin local ring with
principal maximal ideal has the property that all ideals are of the form (m/I)n , hence principal.
Since Soc(S/I) is an ideal, it is principal and so 1-dimensional. This contradicts Lemma 3.3 which
shows that dim Soc(S/I) ≥ 2.
3.1. Showing the existence of a non-zero divisor. In this subsection, we show the existence
of g and h in the statement of Theorem 3.2. Let us give an outline of how we proceed. We begin by
showing that S/(F12 , F13 , F23 ) is Cohen-Macaulay of Krull dimension 1. Now there are of course
many choices of g such that S/(F12 , F13 , F23 , g) has Krull dimension 0 (i.e. finite dimensional over
k), however to prove Theorem 3.2, we need to guarantee that dim Soc(Sm /Jm ) = 2 and that β ′ (g)
and β ′ (h) are linearly independent. This is accomplished by choosing g to be a suitable non-zero
divisor in S/(F12 , F13 , F23 ).
Proposition 3.5. Every minimal prime of S over (F12 , F13 , F23 ) has height 2.
Proof. For convenience, let L = (F12 , F13 , F23 ). As F12 , F13 , F23 are the minors of the 2 × 3 matrix
$$\begin{pmatrix} x_1 & x_2 & x_3 \\ f_1 & f_2 & f_3 \end{pmatrix},$$
the height of each minimal prime over L is at most 2. So assume that there is a minimal prime p
over L of height ≤ 1.
If p has height 0, then p = {0} and so L = {0}. Then Fij = 0, so xi fj = xj fi for each i, j.
From this it follows that there exists q ∈ S such that fi = xi q for all i. Thus, given any h ∈ m, we
have β(h) = hq. In particular, for h ∈ I we see β(h) = hq ∈ I, so dim β(I) = 0, which contradicts
dim β(I) ≥ 2.
If p has height 1, then there exists p ∈ S irreducible with p = (p). By Bertini’s theorem, we
know that for a generic linear combination y = Σ_i λi xi , the ideal (p, y) is prime. Choose y so that
(p, y) is prime and so that (the open conditions) λ3 ≠ 0 and y ∉ (p) are satisfied. Let fy = Σ_i λi fi ,
and let F1y = x1 fy − yf1 , F2y = x2 fy − yf2 . Observe that L = (F12 , F1y , F2y ).
Next, since F1y , F2y ∈ L ⊆ (p) ⊊ (p, y), we have x1 fy , x2 fy ∈ (p, y). Recalling that (p, y) is prime,
we see fy ∈ (p, y), or both x1 , x2 ∈ (p, y). In the latter case, we have that (x1 , x2 , x3 ) = (x1 , x2 , y) ⊆
(p, y), which is impossible as the vanishing locus V (p, y) ⊆ A3 is irreducible of dimension at least
1. We must therefore have fy ∈ (p, y).
Since fy ∈ (p, y), we have fy = qy + ry p for some q, ry ∈ S. Using that p divides F1y =
x1 fy − yf1 = (x1 q − f1 )y + x1 ry p, we see that p divides (x1 q − f1 )y. Since (p) is prime and y ∉ (p),
we have x1 q − f1 ∈ (p) and so f1 = qx1 + r1 p for some r1 . Similarly, f2 = qx2 + r2 p for some
r2 . As a result, β = β ′ + β ′′ where β ′ (h) = qh for all h ∈ m, and β ′′ (xi ) = pri . By Remark 2.9,
whether or not (β, S/I) is a counter-example depends only on the value of [β] ∈ Ext1 (S/m, S/I)
and since [β ′ ] = 0, we can assume β = β ′′ . As a result, we can assume the image of β factors
through (p) ⊆ S/I. Since (p) is generated by a single element, we have (p) ≃ S/J where J =
Ann(p). Since dim S/J < dim S/I, by our minimality assumption at the start of §3, we know that
β : m → (p) = S/J is not a counter-example, and so dim β(J) ≤ 1. But, I ⊆ J because I kills p.
So, dim β(I) ≤ dim β(J) ≤ 1.
Corollary 3.6. S/(F12 , F13 , F23 ) is Cohen-Macaulay of Krull dimension 1.
Proof. By Proposition 3.5, the variety cut out by (F12 , F13 , F23 ) has codimension 2 in A3 , so
S/(F12 , F13 , F23 ) has Krull dimension 1. Since F12 , F13 , F23 are the 2 × 2 minors of a 2 × 3 matrix,
we may apply [Eis95, Theorem 18.18] (originally proven in [HE71]) to see that S/(F12 , F13 , F23 ) is
Cohen-Macaulay.
The following proposition establishes the existence of our desired g, h ∈ m.
Proposition 3.7. There exist g, h ∈ I such that g is a non-zero divisor in S/(F12 , F13 , F23 ) and
β(g) and β(h) are linearly independent. Furthermore, we necessarily have h ∉ (F12 , F13 , F23 , g).
Proof. Recall our notation I = (F12 , F13 , F23 , g1 , . . . , gs ). Our first goal is to show that I contains
a non-zero divisor in S/(F12 , F13 , F23 ).
We know from Corollary 3.6 that S/(F12 , F13 , F23 ) is Cohen-Macaulay, so its associated primes
are its minimal primes. Consequently, the set of zero divisors of S/(F12 , F13 , F23 ) is the union of
minimal primes over (F12 , F13 , F23 ). If I is contained in this union of minimal primes then, by
the prime avoidance lemma, I is contained in one of these minimal primes. But this is impossible as S/(F12 , F13 , F23 ) is Cohen-Macaulay of Krull dimension 1 by Corollary 3.6, and S/I has
Krull dimension 0 by assumption. Thus, I is not contained in the union of minimal primes over
(F12 , F13 , F23 ) and so I contains a non-zero divisor of S/(F12 , F13 , F23 ).
Next, by Lemma 3.3, there exist i 6= j such that β(gi ) and β(gj ) are linearly independent in
S/I. Let q ∈ I be a non-zero divisor of S/(F12 , F13 , F23 ). If c, d ∈ k \ {0} with c ≠ d, and if
q + cgi and q + dgi are in the same minimal prime over (F12 , F13 , F23 ), then q and gi are also in that
minimal prime, a contradiction to q being a non-zero divisor. Thus, since there are only finitely
many minimal primes, we see that for all but finitely many c ∈ k, the polynomial q + cgi is a
non-zero divisor.
Since q ∈ I, we know from Lemma 3.3 that β(q) ∈ Soc(S/I). Then β(gi ) and β(q) are elements
of the vector space Soc(S/I) which has dimension at least 2 and β(gi ) 6= 0, so for infinitely many
c ∈ k, we see β(q + cgi ) = β(q) + cβ(gi ) 6= 0. Combining this with our conclusion from the previous
paragraph that q + cgi is a non-zero divisor for all but finitely many c ∈ k, we see we can find
a non-zero divisor g := q + cgi ∈ I such that β(g) 6= 0. Now choose h ∈ m to be any linear
combination of gi and gj such that β(h) is not a scalar multiple of β(g); this is possible by Lemma
3.3 as β(gi ) and β(gj ) span a 2-dimensional subspace of Soc(S/I).
Lastly, we show h ∉ J := (F12 , F13 , F23 , g). From Lemma 3.3, we know 0 ≠ β(g) ∈ Soc(S/I) and
we see that β(Fij ) = 0. Thus, β(J) is 1-dimensional, generated by β(g). Since β(g) and β(h) are
linearly independent, it follows that h ∉ J.
Remark 3.8 (Hypothesis that k is infinite). Proposition 3.7 is the only step in the proof of Theorem
3.1 that assumes that k is infinite.
With the choice of g and h from Proposition 3.7, we have:
Corollary 3.9. Conditions (1) and (2) of Theorem 3.2 hold.
Proof. From Proposition 3.7, g is a non-zero divisor of S/(F12 , F13 , F23 ), which is Cohen-Macaulay of
Krull dimension 1 by Corollary 3.6. Thus, S/J = S/(F12 , F13 , F23 , g) has Krull dimension 0, so it is
finite-dimensional over k. Since S/J surjects onto S/I and √I = m, we know that m ∈ Supp(S/J),
which proves condition (1).
Next since h ∈ I, we have surjections S/J → S/(J + ⟨h⟩) → S/I. After localizing at m, these
remain surjections. Since √I = m, we know Sm /Im = S/I and so
m −β′→ S/J → Sm /Jm → Sm /(J + ⟨h⟩)m
is a lift of β, meaning that after post-composing the above map by Sm /(J + ⟨h⟩)m → Sm /Im = S/I,
we obtain β. Since β(g) and β(h) are linearly independent in S/I, it must also be the case that
β ′ (g) and β ′ (h) are linearly independent in Sm /(J + ⟨h⟩)m , proving condition (2).
3.2. Computing the socle. In this subsection we compute the socle of Sm /Jm , thereby showing
condition (3) of Theorem 3.2 and finishing the proof.
Proposition 3.10. dim Soc(Sm /Jm ) = 2.
Proof. From Corollary 3.6 we know that S/(F12 , F13 , F23 ) is Cohen-Macaulay of dimension 1. Thus,
the Eagon-Northcott complex yields a minimal free resolution [Eis05, Theorem A2.60]: that is,
$$0 \to S^2 \xrightarrow{A} S^3 \xrightarrow{B} S \to S/\langle F_{12}, F_{13}, F_{23}\rangle \to 0, \qquad A = \begin{pmatrix} x_3 & f_3 \\ x_2 & f_2 \\ x_1 & f_1 \end{pmatrix}, \quad B = \begin{pmatrix} F_{12} & -F_{13} & F_{23} \end{pmatrix},$$
is an exact sequence.
Since g ∈ m is a nonzerodivisor in S/(F12 , F13 , F23 ), we can use the above resolution to obtain
the minimal free resolution of S/J:
$$0 \to S^2 \xrightarrow{A'} S^5 \xrightarrow{B'} S^4 \xrightarrow{C'} S \to S/J \to 0,$$
$$A' = \begin{pmatrix} g & 0 \\ 0 & g \\ x_3 & f_3 \\ x_2 & f_2 \\ x_1 & f_1 \end{pmatrix}, \qquad B' = \begin{pmatrix} x_3 & f_3 & -g & 0 & 0 \\ x_2 & f_2 & 0 & -g & 0 \\ x_1 & f_1 & 0 & 0 & -g \\ 0 & 0 & F_{12} & -F_{13} & F_{23} \end{pmatrix}, \qquad C' = \begin{pmatrix} F_{12} & -F_{13} & F_{23} & g \end{pmatrix}.$$
Localizing at m, we obtain the minimal free resolution
$$0 \to S_m^2 \to S_m^5 \to S_m^4 \to S_m \to (S/J)_m \to 0.$$
Consequently, dim Soc(Sm /Jm ) = 2, as the left-most term in the above resolution is of rank 2.
4. Completing the proof of Theorem 1.5
Having now proved Theorem 3.2, we have reduced Theorem 3.1 to showing the following. Let
f1 , f2 , f3 , g ∈ m, Fij = xi fj − xj fi , and J = (F12 , F13 , F23 , g) such that S/J is finite-dimensional
with dim Soc(Sm /Jm ) = 2. Let β : m → S/J be defined by β(xi ) = fi . Then it is impossible to find
h ∈ m r J such that β(g) and β(h) are linearly independent in Sm /(Jm + hhi).
We begin with two well-known lemmas.
Lemma 4.1. If R is an Artinian Gorenstein local ring, and I1 and I2 are ideals of R, then
Ann(I1 ) + Ann(I2 ) = Ann(I1 ∩ I2 ).
Proof. In any commutative ring, we have the equality Ann(K1 + K2 ) = Ann(K1 ) ∩ Ann(K2 ) for
all ideals K1 and K2 . For Artinian Gorenstein local rings, we have Ann(Ann(K)) = K for all
ideals K, see e.g. [BH98, Exercise 3.2.15]. So letting Kj = Ann(Ij ), we see in our case that
Ann(Ann(I1 ) + Ann(I2 )) = I1 ∩ I2 . Taking annihilators of both sides then proves the result.
Lemma 4.2. Let R be any Artinian local ring and 0 ≠ r ∈ R. If s1 , . . . , sm form a basis for
Soc(R), then there is a linear dependence relation among the si in R/r.
Proof. Since every non-zero ideal of R intersects the socle non-trivially, (r) ∩ Soc(R) contains a
non-trivial element s. We can write s = Σ_i ai si with (a1 , . . . , am ) ∈ k^m \ 0. Then in R/r we have
the linear dependence relation Σ_i ai si = 0.
Proposition 4.3. Let K ⊆ S = k[x1 , x2 , x3 ] be an ideal with S/K an Artinian Gorenstein local
ring, and γ : m → S/K an S-module map. If γ(q) is divisible by q for all q ∈ m, then there exists
r ∈ S/K such that for all q ∈ m, we have γ(q) = qr.
Proof. For every r ∈ S/K, let δr : m → S/K be the map δr (q) = qr. To prove the result, it suffices
to replace γ by γ − δr for any r. Let γ(xi ) = xi pi . Replacing γ by γ − δp3 , we can assume p3 = 0,
i.e. γ(x3 ) = 0. Then x1 x3 p1 = x3 γ(x1 ) = x1 γ(x3 ) = 0. In other words,
p1 ∈ Ann(x1 x3 ).
Similarly,
p2 ∈ Ann(x2 x3 ) and p1 − p2 ∈ Ann(x1 x2 ).
Our first goal is to show that p2 ∈ Ann(x2 ) + Ann(x3 ). By Lemma 4.1, we have Ann(x2 ) +
Ann(x3 ) = Ann((x2 ) ∩ (x3 )). So, let q ∈ (x2 ) ∩ (x3 ) and we must show p2 q = 0. Since q is
divisible by both x2 and x3 , we can write q = x3 q ′ and q = x2 q ′′ . We can further write q ′′ =
a(x2 ) + x3 b(x2 , x3 ) + x1 c(x1 , x2 , x3 ) where a ∈ k[x2 ], b ∈ k[x2 , x3 ], and c ∈ k[x1 , x2 , x3 ]. Then using
p2 ∈ Ann(x2 x3 ) and p1 − p2 ∈ Ann(x1 x2 ), we have
p2 q = p2 x2 a(x2 ) + p2 x1 x2 c(x1 , x2 , x3 ) = p2 x2 a(x2 ) + p1 x1 x2 c(x1 , x2 , x3 )
= γ(x2 a(x2 ) + x1 x2 c(x1 , x2 , x3 ) + x3 d)
for any choice of d. Choosing d = x2 b(x2 , x3 ) − q ′ , we see
s := x2 a(x2 ) + x1 x2 c(x1 , x2 , x3 ) + x3 d = q − x3 q ′ = 0.
But, by assumption p2 q = γ(s) is divisible by s = 0, and hence p2 q = 0. We have therefore shown
p2 ∈ Ann(x2 ) + Ann(x3 ).
Thus, we can write p2 = (p2 − r)+ r with r ∈ Ann(x3 ) and p2 − r ∈ Ann(x2 ). Then (γ − δr )(x2 ) =
x2 (p2 − r) = 0 and (γ − δr )(x3 ) = −x3 r = 0, and so we can assume
p2 = p3 = 0.
Then
p1 ∈ Ann(x1 x2 ) ∩ Ann(x1 x3 ).
To finish the proof we need only show that p1 ∈ Ann(x1 ) + Ann(x2 , x3 ). Indeed, upon doing so,
we can write p1 = (p1 − r) + r with r ∈ Ann(x2 , x3 ) and p1 − r ∈ Ann(x1 ). Then (γ − δr )(x1 ) =
x1 (p1 − r) = 0, (γ − δr )(x2 ) = −x2 r = 0, and (γ − δr )(x3 ) = −x3 r = 0. In other words, we will
have found r ∈ S such that γ − δr = 0, i.e. γ(q) = qr for all q ∈ m.
To show that p1 ∈ Ann(x1 ) + Ann(x2 , x3 ), and thereby finish the proof, we again note by Lemma
4.1 that Ann(x1 ) + Ann(x2 , x3 ) = Ann((x1 ) ∩ (x2 , x3 )). We let q ∈ (x1 ) ∩ (x2 , x3 ) and must show
that p1 q = 0. We can then write q = x1 (a(x1 ) + x2 b(x1 , x2 ) + x3 c(x1 , x2 , x3 )) and q = x2 q ′ + x3 q ′′ .
Then using that p1 ∈ Ann(x1 x2 ) ∩ Ann(x1 x3 ), we have
p1 q = p1 x1 a(x1 ) = γ(x1 a(x1 ) + x2 d + x3 e)
for any choice of d and e. As before, choosing d = x1 b(x1 , x2 ) − q ′ and e = x1 c(x1 , x2 , x3 ) − q ′′
yields s := x1 a(x1 ) + x2 d + x3 e = 0, and since p1 q = γ(s) is divisible by s = 0, we have p1 q = 0.
Finally, we turn to the proof of the main theorem.
Proof of Theorem 3.1. Since S/J is finite-dimensional over k, we know that Sm /Jm = S/K for some
ideal K. Recall that dim Soc(S/K) = 2 and our goal is to show that for all h ∈ m \ J there is a
linear dependence relation between β(g) and β(h) in Sm /(Jm + ⟨h⟩) = S/(K + ⟨h⟩). In particular,
we may assume β(g) ≠ 0 in S/(K + ⟨h⟩).
To begin, we show β(h) ∉ Soc(S/K). If β(h) were in the socle, then since dim Soc(S/K) = 2
and β(g) ∈ Soc(S/K), either we have our desired linear dependence relation between β(g) and β(h)
in S/K (and hence in S/(K + ⟨h⟩)), or β(g) and β(h) form a basis for Soc(S/K). In the latter
case, Lemma 4.2 shows there is a linear dependence relation between β(g) and β(h) in S/(K + ⟨h⟩).
So, we have shown our claim that β(h) ∉ Soc(S/K). Further note that h ∈ Soc(S/K) implies
β(h) ∈ Soc(S/K), since xi β(h) = hβ(xi ) ∈ hm = 0. So, we conclude
h, β(h) ∉ Soc(S/K).
Next, notice that β(h) does not divide β(g) in S/K. Indeed, suppose to the contrary that
β(g) = qβ(h) with q ∈ S. If q ∈ k, then we have a linear dependence relation in S/K and hence
in S/(K + ⟨h⟩). If q ∈ m, then β(g) = qβ(h) = hβ(q) ∈ ⟨h⟩ so β(g) = 0 in S/(K + ⟨h⟩), which is
again a linear dependence relation. This shows our claim that
β(g) ∉ Sβ(h).
Now, Sβ(h) is an ideal of S/K, so it intersects Soc(S/K) non-trivially. Since dim Soc(S/K) = 2,
we know Sβ(h) ∩ Soc(S/K) has dimension 1 or 2, but β(g) ∉ Sβ(h) ∩ Soc(S/K), and so Sβ(h) ∩
Soc(S/K) is 1-dimensional. Let q0 ∈ S such that q0 β(h) is a basis vector for Sβ(h) ∩ Soc(S/K).
Then
β(g) and q0 β(h) form a basis for Soc(S/K).
Since h ∉ Soc(S/K), we can induct on the smallest ℓ for which m^ℓ h ∈ Soc(S/K). That is, we
can assume the result for qh for all q ∈ m, i.e. we can assume that β(g) and β(qh) are linearly
dependent in S/(K + ⟨qh⟩) for all q ∈ m. So for all q ∈ m, there exists p ∈ S/K and a, b ∈ k such
that (a, b) ≠ (0, 0) and
aβ(g) + bβ(qh) = pqh.
Note that β(qh) = hβ(q) ∈ ⟨h⟩, so the above equality shows aβ(g) ∈ ⟨h⟩. This yields our desired
linear dependence relation among β(g) and β(h) in S/(K + ⟨h⟩) unless a = 0, in which case after
rescaling we can assume b = 1. We can therefore assume that
β(qh) ∈ Sqh ∀ q ∈ m.
Next, let
γ : m → ⟨h⟩ ⊆ S/K, γ(q) = β(qh).
Since ⟨h⟩ ∩ Soc(S/K) is non-trivial, it has dimension 1 or 2. If β(g) is in this intersection, then
we have our desired linear dependence relation among β(g) and β(h) in S/(K + ⟨h⟩), so we can
assume this is not the case. Thus, ⟨h⟩ ∩ Soc(S/K) does not contain β(g), so it is 1-dimensional, and
hence ⟨h⟩ is Gorenstein. Notice that ⟨h⟩ ≃ S/ AnnS/K (h) and via this identification, the condition
β(qh) ∈ ⟨qh⟩ is equivalent to the condition that q divides γ(q). Applying Proposition 4.3, there is
r ∈ S/ AnnS/K (h) such that γ(q) = qr. Translating this back into a statement about ⟨h⟩ via our
identification with S/ Ann(h), this says
∃ r ∈ S such that β(qh) = qhr ∀q ∈ m.
As a result, β(h) − hr ∈ Ann(m) = Soc(S/K), and so there exist a, b ∈ k such that
β(h) − hr = aβ(g) + bq0 β(h).
If q0 ∈ m, then we see β(h) − aβ(g) = h(bβ(q0 ) + r) = 0 in S/(K + ⟨h⟩), which gives our linear
dependence relation. So, q0 ∉ m, i.e. q0 is a unit. By construction β(g) and q0 β(h) form a basis
for Soc(S/K), and q0 is a unit, so β(g) and β(h) form a basis for Soc(S/K). Then by Lemma 4.2,
they have a linear dependence relation in S/(K + ⟨h⟩).
5. Proof of Theorem 1.6
We begin with the following result which holds for arbitrarily many variables:
Proposition 5.1. Let S = k[x1 , . . . , xn ] and let m = (x1 , . . . , xn ). Suppose that M is a finite-dimensional S-module with Supp M = m and dim Soc(M ) = 1. Then the following hold:
(1) dim S/ Ann M ≤ dim M
(2) for every short exact sequence
0 → M → N → S/m → 0,
we have dim S/ Ann N ≤ dim N .
Proof. We first prove (1) by induction on the dimension of M . If dim M = 1 then M ≃ S/m and
the result holds trivially. For higher dimensional M , recall that M has a composition series
0 ⊆ M1 ⊆ · · · ⊆ Mr−1 ⊆ M
where Mi /Mi−1 ≃ S/m. Since the socle of Mr−1 is contained in the socle of M , we also have that
dim Soc(Mr−1 ) = 1, and we may apply induction to see that dim S/ Ann Mr−1 ≤ dim Mr−1 . Then
we have a short exact sequence
0 → Mr−1 → Mr → S/m → 0
which corresponds to a map β : m → Mr−1 . By Remark 2.9, we need only show
dim S/ Ann Mr−1 + dim β(Ann Mr−1 ) ≤ dim Mr−1 + 1.
By induction, we know dim S/ Ann Mr−1 ≤ dim Mr−1 . Furthermore, β(Ann Mr−1 ) ⊆ Soc(Mr−1 ),
so has dimension at most 1. This proves the desired inequality.
The proof of (2) is entirely analogous. We know that the short exact sequence defining N
corresponds to a map β : m → M . By (1), we know dim S/ Ann(M ) ≤ dim M . Since β(Ann M ) ⊆
Soc(M ), we have dim β(Ann M ) ≤ 1. Combining these two statements, inequality (2.3) holds for
β, and so dim S/ Ann N ≤ dim N by Remark 2.9.
We now turn to the proof of the second main result of this paper.
Proof of Theorem 1.6. By Lemma 2.6, we reduce immediately to the case where: (i) N is either
cyclic, or (ii) N is local Gorenstein, or (iii) there is an extension
(5.2)
0 → M → N → S/m → 0
where m ⊆ S is a maximal ideal and M = ⊕_ℓ Mℓ with each Mℓ a cyclic or local Gorenstein module.
For case (i), inequality (1.4) holds trivially. Case (ii) is handled by Proposition 5.1 (1).
It remains to handle case (iii). Further decomposing if necessary, we can assume each Mℓ is
local. Next, letting L be the set of ℓ for which Supp(Mℓ ) = m, we can write N = N ′ ⊕ ⊕_{ℓ∉L} Mℓ
where we have a short exact sequence
0 → ⊕_{ℓ∈L} Mℓ → N ′ → S/m → 0.
Since Mℓ is cyclic or local Gorenstein for every ℓ ∉ L, another application of Lemma 2.6 combined
with cases (i) and (ii) above allows us to assume N = N ′ , i.e. we can assume that Supp(Mℓ ) = m
for all ℓ. Proposition 2.12 then reduces us to the case where there is only one ℓ; that is, we need
only consider extensions (5.2) where √(Ann M) = m and M itself is cyclic or Gorenstein. If M is
Gorenstein, then inequality (1.4) holds by Proposition 5.1 (2). If M is cyclic, then the inequality
holds by Theorem 1.5.
6. Some other instances where Statement 1.2 holds
In this short section we record a couple of additional situations where a finite-dimensional module
M over S = k[x1 , . . . , xn ] satisfies dim S/ Ann M ≤ dim M .
Proposition 6.1. If β : m → M is surjective, then the pair (β, M ) is not a counterexample.
Moreover, if N is the extension of M defined by β then
dim S/ Ann(N ) = dim N.
Proof. Since Ann(N ) = Ann(M ) ∩ ker(β), we see
dim S/ Ann(N ) = dim S/m + dim m/ ker(β) + dim ker(β)/(Ann(N ) ∩ ker(β))
= 1 + dim M + dim ker(β)/(Ann(N ) ∩ ker(β)).
Since dim N = 1 + dim M , we must show ker(β) ⊆ Ann(M ). Given m ∈ M and f ∈ ker(β),
we know β is surjective, so m = β(g) for some g ∈ m. Then f m = f β(g) = gβ(f ) = 0, and so
f ∈ Ann(M ).
Proposition 6.2. If there exists m ∈ N such that Ann(m) = Ann(N ), then
dim S/ Ann N ≤ dim N.
Proof. In this case, dim S/ Ann(N ) = dim S/ Ann(m) and since S/ Ann(m) is isomorphic to the
cyclic submodule Sm ⊆ N , we necessarily have dim S/ Ann(m) = dim Sm ≤ dim N .
We end this section with an example where Theorem 1.5 applies but Proposition 6.2 does not.
Example 6.3. Let I = (x, y 2 , z) ⊆ k[x, y, z]. Let N = (xy, z)/(yz, x2 , z 2 , xy 2 − xz) and observe that
N fits into a short exact sequence
0 → S/I → N → S/m → 0
where the injective map sends 1 to xy. We know by Theorem 1.5 that inequality (1.4) holds
for N . Proposition 6.2, however does not apply here: every element of N can be represented as
m = az + bxy + cxz, for some a, b, c ∈ k, and one checks that there is no choice of a, b, c ∈ k such
that Ann(m) agrees with Ann N = (z, y 2 , xy, x2 ). Indeed, if a ≠ 0, then y − (b/a)x ∈ Ann(m) and
if a = 0, then x ∈ Ann(m).
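As a check of the annihilator quoted here, note that N is generated by the classes of xy and z, and each generator of (z, y 2 , xy, x2 ) multiplies both into I = (yz, x2 , z 2 , xy 2 − xz): for instance z · xy = xyz ∈ (yz) and z · z = z 2 ∈ I, while y 2 · xy = xy 3 = y(xy 2 − xz) + xyz ∈ I and y 2 · z ∈ (yz); the generators xy and x2 send both xy and z into (x2 ) + (yz). This gives (z, y 2 , xy, x2 ) ⊆ Ann N ; the reverse inclusion is a similarly direct check on the finitely many low-degree monomials, using that neither z nor xz lies in I.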
⋄
7. An inductive approach to Statement 1.2
Our goal is to prove Proposition 1.10 stated in the introduction. We do so after a preliminary
lemma.
Lemma 7.1. Let S = k[x1 , . . . , xn ] and m = (x1 , . . . , xn ). Then Statement 2.1 is true if and only
if for all ideals J ⊆ m, all finite-dimensional S-modules M with √(Ann M) = m, and all S-module
morphisms β : J → M , we have
(7.2)
dim J/(J ∩ Ann M ) + dim β(J ∩ Ann M ) ≤ dim M.
Proof. Notice that if J = m, then J ∩ Ann M = Ann M , and so the inequalities (2.3) and (7.2) are
equivalent. So by Proposition 2.2, the inequality (7.2) implies Statement 2.1.
Thus, it remains to show that Statement 2.1 implies inequality (7.2). To see this, fix J ⊆ m and
an S-module map β : J → M . As in the proof of Proposition 2.2, the map β defines an extension
0 → M → N → S/J → 0
with Ann(N ) = Ann(M ) ∩ ker β. By Proposition 2.4, we know Statement 2.1 implies inequality
(1.4) for N . Now note that
dim S/ Ann(N ) = dim S/(J ∩ Ann M ) + dim(J ∩ Ann M )/(ker β ∩ Ann M )
= dim S/(J ∩ Ann M ) + dim β(J ∩ Ann M ).
Since dim N = dim M + dim S/J, we obtain inequality (7.2) by subtracting dim S/J from both
sides of the inequality (1.4).
Remark 7.3. The proof of Lemma 7.1 shows that if
0 → M → N → S/J → 0
is the extension corresponding to the map β : J → M , then the inequality (7.2) holds if and only if
the inequality (1.4) holds for N .
Proof of Proposition 1.10. Let S = k[x, y, z] be a polynomial ring and m = (x, y, z). We know
by Lemma 7.1 that Statement 1.2 is true if and only if inequality (7.2) holds for all J, M , and
maps β : J → M . So, if Statement 1.2 is true, then both of the inequalities in the statement of
Proposition 1.10 are true, hence the first inequality implies the second.
We now show that if the implication of inequalities in the statement of Proposition 1.10 holds,
then Statement 1.2 is true. By virtue of Lemma 7.1, we need only show that inequality (7.2) holds
for all J, M , and β : J → M . We prove this latter statement by induction on dim M , the base case
being trivial. So, we need only handle the induction step. For this, we can choose a submodule
M ′ ⊆ M such that dim(M/M ′ ) = 1. Then M/M ′ ≃ S/m and we let π : M → S/m be the quotient
map.
Suppose first that πβ : J → S/m is surjective. Then letting I = ker(πβ), we have a map of short
exact sequences
0 −→ I −→ J −→ S/m −→ 0
        ↓α      ↓β       ↓≃
0 −→ M ′ −→ M −→ S/m −→ 0
(the bottom map M → S/m being π).
The maps α and β define extensions
0 → M ′ → N ′ → S/I → 0 and
0 → M → N → S/J → 0,
respectively, and one checks that N ′ ≃ N . Since dim M ′ < dim M , we can assume by induction
that
dim I/(I ∩ Ann M ′ ) + dim α(I ∩ Ann M ′ ) ≤ dim M ′ .
By Remark 7.3, this is equivalent to the inequality dim S/ Ann(N ′ ) ≤ dim N ′ . Since N ≃ N ′ we
obtain the inequality dim S/ Ann(N ) ≤ dim N , and applying Remark 7.3 again, we have
dim J/(J ∩ Ann M ) + dim β(J ∩ Ann M ) ≤ dim M.
It remains to handle the case when πβ : J → S/m is not surjective. In this case πβ = 0 and so β
factors through M ′ . We are thus in the situation β : J → M ′ ⊆ M . By induction, we can assume
dim J/(J ∩ Ann M ′ ) + dim β(J ∩ Ann M ′ ) ≤ dim M ′
and we want to show
dim J/(J ∩ Ann M ) + dim β(J ∩ Ann M ) ≤ dim M.
This is precisely the implication of inequalities in the hypothesis of Proposition 1.10.
References
[Ber13] George M. Bergman. Commuting matrices, and modules over artinian local rings. arXiv:1309.0053, 2013.
[BH90] José Barrı́a and P. R. Halmos. Vector bases for two commuting matrices. Linear and Multilinear Algebra,
27(3):147–157, 1990.
[BH98] W. Bruns and J. Herzog. Cohen-Macaulay Rings. Cambridge, England: Cambridge University Press, 1998.
[Eis95] David Eisenbud. Commutative algebra, with a view toward algebraic geometry, volume 150 of Graduate Texts
in Mathematics. Springer-Verlag, New York, 1995.
[Eis05] David Eisenbud. The geometry of syzygies, volume 229 of Graduate Texts in Mathematics. Springer-Verlag,
New York, 2005. A second course in commutative algebra and algebraic geometry.
[Ger61] Murray Gerstenhaber. On dominance and varieties of commuting matrices. Ann. of Math. (2), 73:324–348,
1961.
[GS] Daniel R. Grayson and Michael E. Stillman. Macaulay2, a software system for research in algebraic geometry.
Available at http://www.math.uiuc.edu/Macaulay2/.
[Gur92] Robert M. Guralnick. A note on commuting pairs of matrices. Linear and Multilinear Algebra, 31(1-4):71–75,
1992.
[HE71] M. Hochster and John A. Eagon. Cohen-Macaulay rings, invariant theory, and the generic perfection of
determinantal loci. Amer. J. Math., 93:1020–1058, 1971.
[HO01] John Holbrook and Matjaž Omladič. Approximating commuting operators. Linear Algebra Appl., 327(13):131–149, 2001.
[HO15] J. Holbrook and K. C. O’Meara. Some thoughts on Gerstenhaber’s theorem. Linear Algebra Appl., 466:267–
295, 2015.
[LL91] Thomas J. Laffey and Susan Lazarus. Two-generated commutative matrix subalgebras. Linear Algebra Appl.,
147:249–273, 1991.
[MT55] T. S. Motzkin and Olga Taussky. Pairs of matrices with property L. II. Trans. Amer. Math. Soc., 80:387–401,
1955.
[Set11] B. A. Sethuraman. The algebra generated by three commuting matrices. Math. Newsl., 21(2):62–67, 2011.
[Š12] Klemen Šivic. On varieties of commuting triples III. Linear Algebra Appl., 437(2):393–460, 2012.
[Wad90] Adrian R. Wadsworth. The algebra generated by two commuting matrices. Linear and Multilinear Algebra,
27(3):159–162, 1990.
Adversarial Attacks Beyond the Image Space
arXiv:1711.07183v2 [] 21 Nov 2017
Xiaohui Zeng1 , Chenxi Liu2 , Yu-Siang Wang3 , Weichao Qiu2 , Lingxi Xie2
Yu-Wing Tai4 , Chi Keung Tang1 , Alan L. Yuille2
1 HKUST  2 Johns Hopkins U  3 National Taiwan U  4 Tencent Youtu
Abstract
Generating adversarial examples is an intriguing problem and an important way of understanding the working
mechanism of deep neural networks. Recently, it has attracted a lot of attention in the computer vision community.
Most existing approaches generated perturbations in image
space, i.e., each pixel can be modified independently. However, it remains unclear whether these adversarial examples
are authentic, in the sense that they correspond to actual
changes in physical properties.
This paper aims at exploring this topic in the contexts
of object classification and visual question answering. The
baselines are set to be several state-of-the-art deep neural
networks which receive 2D input images. We augment these
networks with a differentiable 3D rendering layer in front,
so that a 3D scene (in physical space) is rendered into a 2D
image (in image space), and then mapped to a prediction
(in output space). There are two (direct or indirect) ways
of attacking the physical parameters. The former backpropagates the gradients of error signals from output space
to physical space directly, while the latter first constructs
an adversary in image space, and then attempts to find the
best solution in physical space that is rendered into this
image. An important finding is that attacking physical space
is much more difficult, as the direct method, compared with
that used in image space, produces a much lower success
rate and requires heavier perturbations to be added. On
the other hand, the indirect method does not work out,
suggesting that adversaries generated in image space are
inauthentic. By interpreting them in physical space, most
of these adversaries can be filtered out, showing promise in
defending adversaries.
Figure 1. Adversarial examples for object classification and visual
question answering. The first row is the original image. The
middle group shows the perturbations (magnified by a factor of
5 and shifted by 128) and perturbed images by attacking image
space, and the bottom group by attacking physical space. p and
conf are the perceptibility (see Section 3.3.1) and the confidence
score on the predicted class, respectively. Attacking physical
space is more difficult, as we always observe a larger perceptibility
and a lower confidence score.
1. Introduction
Recent years have witnessed a rapid development in the
area of deep learning, in which deep neural networks have
been applied to a wide range of computer vision tasks, such
as image classification [16][12], object detection [9][34],
semantic segmentation [37][6], visual question answer1
ing [2][14], etc. The basic idea is to design a hierarchical
structure to learn visual patterns from labeled data. With
the availability of powerful computational resources and
large-scale image datasets [7], researchers have designed
more and more complicated models and achieved a boost
in visual recognition performance.
Despite the great success of deep learning, there still
lacks an effective method to understand the working mechanism of deep neural networks. An interesting effort is to
generate so-called adversarial perturbations. They are visually imperceptible noise [11] which, after being added to
an input image, changes the prediction results completely,
sometimes ridiculously. These examples can be constructed
in a wide range of vision problems, including image classification [28], object detection and semantic segmentation [45]. Researchers believed that the existence of adversaries implies unknown properties in feature space [43].
This work is motivated by the fact that conventional 2D
adversaries were often generated by modifying each image
pixel individually. Thus, while being strong in attacks,
it remains unclear if they can be generated by perturbing
physical properties in the 3D world. We notice that previous
work found adversarial examples “in the physical world”
by taking photos on the printed perturbed images [17]. But
our work is different and more essential as we only allow
to modify some basic physical parameters such as surface
normals. For this respect, we follow [19] to implement
3D rendering as a differentiable layer, and plug it into the
state-of-the-art neural networks for object classification and
visual question answering. In this way, we build a mapping
function from physical space (a set of physical parameters,
including surface normals, illumination and material), via
image space (a rendered 2D image), to output space (the
object class or the answer to a question).
We aim at answering two questions. (i) Is it possible to
directly generate perturbations in physical space (i.e., modifying basic physical parameters)? (ii) Given an adversary in
image space, is it possible to find an approximate solution
in physical space, so that the re-rendered 2D image preserves the attacking ability? Based on our framework, these
questions correspond to two different ways of generating
perturbations in physical space. The first one, named the
direct method, computes the difference between the current
output and the desired output, back-propagates the gradients to the physics layer directly and makes modifications.
The second one, the indirect method, first constructs an
adversary in image space, and then attempts to find the best
solution in physical space that is rendered into it. Both
methods are implemented by the iterative version of the
Fast Gradient Sign Method (FGSM) [11]. We constrain the
change in image intensities to guarantee the perturbations
to be visually imperceptible. Experiments are performed on
two datasets, i.e., 3D ShapeNet [5] for object classification
and CLEVR [14] for visual question answering.
Our major finding is that attacking physical space is
much more difficult than attacking image space. Although
it is possible to find adversaries using the direct manner
(YES to Question (i)), the success rate is lower and the
perceptibility of perturbations becomes much larger than
required in image space. This is expected, as the rendering
process couples changes in pixel values, i.e., modifying one
physical parameter (e.g., illumination) may cause many pixels to be changed at the same time. This also explains why
we found it almost impossible to generate adversaries using
the indirect manner (conditional NO to Question (ii); it
is possible that currently available optimization algorithms
such as FGSM are not strong enough). An implication of
this research is an effective approach to defend adversaries
generated in image space – finding an approximate solution
in physical space and re-rendering will make them fail.
The remainder of this paper is organized as follows.
Section 2 briefly introduces related work. The approach
of generating adversarial perturbations in physical space is
described in Section 3. After experiments are shown in
Section 4, we conclude our work in Section 5.
2. Related Work
Deep learning is the state-of-the-art machine learning
technique to learn visual representations from labeled data.
The basic methodology is to stack differentiable units in
a hierarchical manner [16]. It is believed that a network
with a sufficient number of layers and neurons can capture
complicated distributions in feature space. With large-scale
image datasets such as ImageNet [7], powerful computational resources such as GPU’s, and the assistance of efficient strategies [27][39][13], it is possible to train very deep
networks in a reasonable period of time. Recent years, deep
neural networks have been widely applied to computer vision problems, including image classification [38][42][12],
object detection [9][34], semantic segmentation [37][6],
visual question answering [2][8][15], etc.
Despite the success of deep learning, it remains a challenging task to explain what is learned by these complicated models. One of the most interesting efforts towards
this goal is to generated adversaries. In terms of adversaries [11], we are talking about small noise that is (i)
imperceptible to humans, and (ii) able to cause deep neural
networks make wrong predictions after being added to the
input image. It was shown that such perturbations can
be easily generated in a white-box environment, i.e., both
the network structure and pre-trained weights are known to
the attacker. Early studies were mainly focused on image
classification [28][26]. But soon, researchers were able to
attack deep networks for detection and segmentation [45],
and also visual question answering [46]. Efforts were also
made in finding universal perturbations which can transfer
2
across images [25], as well as adversarial examples in the
physical world which were produced by taking photos on
the printed perturbed images [17].
Attacking a known network (a.k.a, a white box) started
with setting a goal of prediction. There were generally two
types of goals. The first one (a non-targeted attack) aimed at
reducing the probability of the true class [28], and the second one (a targeted attack) defined a specific class that the
network should predict [20]. After that, the error between
the current and the target predictions was computed, and
gradients back-propagated to the image layer. This idea was
developed into a set of algorithms, including the Steepest
Gradient Descent Method (SGDM) [26] and the Fast Gradient Sign Method (FGSM) [11]. The difference lies in that
SGDM computed accurate gradients, while FGSM merely
kept the sign in every dimension of these gradients. The
latter one, while being less powerful in direct attacks, often
enjoys stronger transfer ability. The iterative version of
these two algorithms were also studied [17]. In comparison,
attacking an unknown network (a.k.a., a black box) is much
more challenging [20], and an effective way is to sum up
perturbations from a set of white-box attacks [45].
In opposite, there exist efforts in protecting deep networks from adversarial attacks. Defensive distillation [31]
proposed to defend the network by distilling robust visual
knowledge, but a stronger attacking method was designed
to beat this defense [4]. It was shown that training deep
networks in a large-scale dataset increases the robustness
against adversarial attacks [18], and a more essential solution was to add adversarial examples to the training
data [44]. Researchers also developed an algorithm to
detect whether an image was attacked by adversarial perturbations [23]. The battle between attackers and defenders
continues, and these ideas are closely related to the bloom
of Generative Adversarial Networks [10][33].
questions. First, is it possible to perturb the 3D physical
parameters so as to attack 2D deep networks? Second, are
the conventional 2D adversaries interpretable by perturbing
some physical parameters of a 3D scene? We answer these
two questions based on one single framework (Section 3.2),
which plugs a rendering module to various deep neural
networks, and thus builds a mapping function from the
physics layer, via the image layer, to the output layer.
Our goal is to generate adversaries on the physics layer.
There are two different ways, i.e., either directly backpropagating errors from the output layer to the physics
layer, or first constructing an adversary in the image layer,
and then finding a set of physical parameters that are rendered into it. We name them as direct and indirect ways
of attacking the physics layer. In experiments, the direct
method works reasonably well, but the indirect method fails
completely. Quantitative results are shown in Section 4.
3.2. From Physical Parameters to Prediction
As the basis of this work, we build an end-to-end framework which receives the physical parameters of a 3D scene,
renders them into a 2D image, and outputs prediction, e.g.,
the class of an object, or the answer to a visual question.
Note that our research involves 3D to 2D rendering as
part of the pipeline, which stands out from previous work
which either worked on rendered 2D images [40][15], or
directly processed 3D data without rendering them into 2D
images [32][36].
We denote the physical space, image space and output
space by X , Y and Z, respectively. Given a 3D scene
X ∈ X , the first step is to render it into a 2D image
Y ∈ Y. For this purpose, we consider three sets of physical
parameters, i.e., surface normals N, illumination L, and
material m. By giving these parameters, we assume that the
camera geometries, e.g., position, rotation, field-of-view,
etc., are known beforehand and will remain unchanged in
each case. The rendering module (Section 3.2.1) is denoted
by Y = r(N, L, m). Then, we make use of Y in two
vision tasks, i.e., object classification and visual question
answering. An individual deep neural network receives Y
as its input, and outputs the prediction Z ∈ Z. The networks for classification
question answering
are
and visual
C
V
C
C
V
V
denoted by Z = f Y; θ
and Z = f Y, q; θ ,
3. Approach
3.1. Motivation: Inauthenticity of Adversaries
Despite the fast development in generating adversaries
to attack deep neural networks, we note that almost all the
existing algorithms assumed that each pixel of the image
can be modified independently. Although this strategy
was successful in finding strong adversaries, the perturbed
images may be inauthentic, i.e., they may not correspond
to any 3D scenes in the real world. Motivated by this, we
study the possibility of constructing adversaries by directly
modifying the physical parameters, i.e., surface normals and
material of an object, and illumination of a 3D scene. Note
that our goal is more essential than the previous work [17],
which generated adversaries “in the physical world” by
taking photos of the printed perturbed images.
To be concrete, this work is aimed at answering two
respectively. Here, q is the question, ZC and ZV are
the output vectors, and θ C and θ V are the corresponding
parameters, respectively.
3.2.1
3D Object Rendering
We make use of [19], a differentiable algorithm for 3D
object rendering. Note that some other algorithms [3][22]
provide better rendering qualities but cannot be used in this
paper because they are non-differentiable. Differentiability
3
is indispensable, as it enables us to back-propagate errors
from the image layer to the physics layer.
Three sets of parameters are considered. (i) Surface normals are represented by a 2-channel image which has the
same size as the desired output image Y, sized WN × HN ,
where each pixel is encoded by the azimuth and polar
angles of the normal vector at this position. (ii) Illumination is defined by an HDR environment map of dimension
WL × HL , with each pixel storing the intensity of the light
coming from this direction (a spherical coordinate system
is used). (iii) Material impacts image rendering with a set
of bidirectional reflectance distribution functions (BRDFs)
which describe the point-wise light reflection for both diffuse and specular surfaces [29]. The material parameters
used in this paper come from the directional statistics BRDF
model [30], which represents a BRDF as a combination
of Dm distributions with Pm parameters in each. Mathematically, we have N ∈ RWN ×HN ×2 , L ∈ RWL ×HL and
m ∈ RDm ×Pm .
The rendering algorithm [19] is based on some reasonable assumptions, e.g., translation-invariant natural illumination (incoming light depends only on the direction), and
there is no emission and omit complex light interactions
like inter-reflections and subsurface scattering. Then the
intensity of each pixel of Y can be computed by integrating
the reflected light intensity over all incoming light directions [21]. The integral is substituted by a discrete sum over
a set of incoming light directions for numerical computations. In practice, the rendering process is implemented as
a network layer, which is differentiable to input parameters
N, L and m. Please refer to [19] for mathematical equations and technical details.
We make use of the algorithm described in [15]. This
algorithm consists of two components: a program generator
and an execution engine. The goal of the program generator
is to convert the question q written in natural language to
a tree structure (i.e., a program) that describes the order
and types of a series of actions to carry out. Specifically, a
sequence-to-sequence model [41] is used to translate words
in the question to the prefix traversal of the abstract syntax
tree. The execution engine assembles a neural module network [1] according to the predicted program. Each module
is a small convolutional neural network that corresponds to
one predicted action in the tree. The convolutional image
features may be queried at various places in this assembled
network.
In the testing stage, given a question q, the network
structure and its parameters θ V (q) are fixed, and the prediction process is very similar to that in object classification.
Thus, we unify object classification and visual question
answering into the same formulation:
3.2.2
∆Y = r(N + ∆N, L + ∆L, m + ∆m) − r(N, L, m).
(2)
Perceptibility is computed on the perturbations of both
the rendered image and the physical parameters. Following [43][26], the image perceptibility is defined as:
!1/2
WN X
HN
X
1
.
2
k∆yw,h k2
, (3)
p = p(∆Y) =
WN × HN w=1
Z = f (Y; θ) = f ◦ r(N, L, m; θ),
C
where θ denotes network parameters, i.e., θ or θ (q).
3.3. Attacking Physical Space
3.3.1
Perceptibility
The goal of an adversarial attack is to produce a visually
imperceptible perturbation, so that the network makes incorrect predictions after it is added to the original image.
Let the physical parameters be N, L, m, and the rendered
image be Y = r(N, L, m). Denote the perturbations added
to these parameters by ∆N, ∆L and ∆m, respectively. The
perturbation added to the rendered image is:
Object Classification
Based on the rendered 2D images, object classification is
straightforward. Two popular networks (AlexNet [16] and
ResNet [12]) are investigated. We start with two models
pre-trained on the ILSVRC2012 dataset [35], and fine-tune
them using the rendered images in the ShapeNet [5] dataset.
This is necessary, as the rendered images contain quite
different visual patterns from natural images.
C
In the testing stage, the network parameters
θ is fixed,
and we predict the class by Z = f Y; θ C , where Z ∈
h=1
where yw,h is a 3-dimensional vector representing the RGB
intensities (normalized in [0, 1]) of a pixel. Similarly, we
can also define the perceptibility values for object parameters, i.e., p(∆N), p(∆L) and p(∆m). For example,
K
[0, 1] (K is the number of classes) is the probability
distribution over all classes.
3.2.3
(1)
V
Visual Question Answering
p(∆N) =
WN X
HN
X
1
2
k∆nw,h k2
WN × HN w=1
!1/2
.
(4)
h=1
The visual question answering problem [14] considered in
this paper is essentially a classification task. Given an input
image Y and a question q, the goal is to choose the correct
answer from a pre-defined set (32 choices).
In experiments, p(∆Y) is the major criterion of visual
imperceptibility, but we also guarantee that p(∆N), p(∆L)
and p(∆m) are very small.
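As an illustration (a sketch, not the authors' implementation), the perceptibility defined in this subsection is a root-mean-square of the per-pixel perturbation norms and can be computed in a few lines; applied analogously to the parameter perturbations, the same function gives p(∆N), p(∆L) and p(∆m):

    import numpy as np

    def perceptibility(delta):
        # delta: perturbation array whose last axis holds the per-pixel (or per-entry)
        # vector, e.g. shape (W, H, 3) for RGB differences normalized to [0, 1].
        per_pixel_sq = np.sum(delta ** 2, axis=-1)   # squared L2 norm per pixel
        return float(np.sqrt(per_pixel_sq.mean()))   # root of the mean over all pixels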
4
3.3.2
Setting a Goal of Attacking
the re-rendered image, and any perturbations exceeding a
fixed threshold U is truncated. In practice, U is set to be 18,
the reasonable value that is imperceptible to human eyes.
Truncations cause the inconsistency between the physical
parameters and the rendered image and risk failures in
attacking. To avoid frequent truncations, we set the learning
rate η to be small, which consequently increases the number
of iterations needed to attack the network.
Attacking the physical parameters starts with setting a goal,
which is what we hope the network to predict. In practice, this is done by defining a loss function L(Z), which
determines how far the current output is from the desired
status. In this work, the goal is set in two ways, i.e., either
targeted or non-targeted. A targeted attack specifies a class
c as which the image should be classified, and thus defines a
target output vector 1c using the one-hot encoding scheme.
The Manhattan distance between Z and 1c forms the loss
function:
.
LT (Z) = L(Z; c) = kZ − 1c k1 .
3.3.4
In opposite, the indirect attack first finds a perturbation ∆Y
in image space, then computes perturbations in physical
space, i.e., ∆N, ∆L and ∆m. ∆Y is generated in the same
way of perturbing the physical parameters.
The next step is to find physical perturbations, i.e., ∆N0 ,
∆L0 and ∆m0 , so that the newly rendered image
(5)
On the other hand, a non-targeted attack specified a class c0
as which the image should not be classified, and the goal is
to minimize the c0 -th dimension of the output Z:
.
LN (Z) = L(Z; c0 ) = Zc0 .
Indirect Attacks
Y0 = r(N + ∆N0 , L + ∆L0 , m + ∆m0 )
(6)
is close to Y + ∆Y. Mathematically, we minimize the
following loss function:
Throughout this paper, we make use of the non-target attack, and refer to the loss term as L(Z) = LN (Z).
L(Y0 ) = kY + ∆Y − Y0 k1 .
3.3.3
Direct Attacks
(9)
Similarly, Eqn (9) is expanded in physical space by substituting Y0 with Eqn (8), and optimization can be performed
on physical parameters either jointly or individually.
Note that the indirect way is indeed pursuing interpreting ∆Y in physical space. However, as we will show
in experiments, this way does not work out in any cases.
This suggests that adversaries generated in image space,
despite being strong in attacks, are often inauthentic, and
their approximations in physical space either do not exist,
or cannot be found by this simple optimization.
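A minimal sketch of one step of this indirect optimization, assuming an autograd-compatible renderer (render_fn is a hypothetical placeholder, and the L1 image-matching objective and signed update follow the description above):

    import torch

    def indirect_attack_step(params, target_image, render_fn, lr=1e-3):
        # params: list of physical-parameter tensors (normals, illumination, material)
        # created with requires_grad=True; target_image: the 2D adversary Y + Delta-Y.
        rendered = render_fn(*params)
        loss = torch.abs(target_image - rendered).sum()   # L1 image-matching objective
        loss.backward()
        with torch.no_grad():
            for p in params:
                p -= lr * p.grad.sign()                    # iterative FGSM-style update
                p.grad.zero_()
        return float(loss)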
There are two ways of attacking physical space. The direct attack works by expanding the loss function L(Z),
i.e., L(Z) = L ◦ f ◦ r(N, L, m; θ), and minimizing this
function with respect to the physical parameters N, L and
m. Note that these parameters can also be optimized either jointly or individually. Without loss of generality, we
optimize N individually in the following example.
The optimization starts with an initial (unperturbed) state
.
N0 = N. A total of Tmax iterations are performed. In the
t-th round, we compute the gradient vectors with respect
to Nt−1 , i.e., ∇Nt−1 L ◦ f ◦ r(Nt−1 , L, m; θ), and update
Nt−1 along this direction. We follow the Fast Gradient
Sign Method (FGSM) [11] to only preserve the sign in each
dimension of the gradient vector, as this algorithm does
not need normalization, in which large gradient values may
swallow small ones. We denote the perturbation in the t-th
iteration by
.
∆Nt = ∇Nt−1 L ◦ f ◦ r(Nt−1 , L, m; θ),
(8)
4. Experiments
4.1. 3D Object Classification
4.1.1
Settings
We investigate 3D object recognition in the ShapeNetCorev2 dataset [5], which contains 51 rigid object categories,
each with various 3D models. We randomly sample 125 3D
models from each class, and generate 4 fixed viewpoints for
each object, so that each category has 500 training images.
Similarly, another randomly chosen 50 × 4 objects for each
class are used for testing.
We start with two popular deep neural networks, i.e.,
a 8-layer AlexNet [16] and a 34-layer deep residual network [12]. It is easy to generate our algorithm to other
network structures. Both networks are pre-trained on the
ILSVRC2012 dataset [35], and fine-tuned in our training set
through 40 epochs. Each mini-batch contains 256 samples,
and the learning rate is 0.001 for AlexNet and 0.005 for
(7)
and update Nt−1 by Nt = Nt−1 + η · ∆Nt−1 , where η is
the learning rate. This iterative process is terminated if the
goal of attacking is reached. The accumulated
Pperturbation
T
over all T iterations is denoted by ∆N = η · t=1 ∆Nt for
later reference.
Throughout the attacking process, in order to guarantee
imperceptibility, we constrain the RGB intensity changes
on the image layer. In each iteration, after a new set of
physical perturbations are generated, we check all pixels on
Attacking Perturbations |  Image (Succ. / p)  |  Surface N. (Succ. / p)  |  Illumination (Succ. / p)  |  Material (Succ. / p)  |  Combined (Succ. / p)
FGSM on AlexNet         |  100.00 / 7.0       |  89.02 / 18.7            |  37.65 / 43.9              |  21.57 / 19.9          |  92.55 / 16.7
FGSM on ResNet-34       |  100.00 / 6.0       |  87.84 / 11.0            |  18.04 / 20.4              |  3.92 / 30.3           |  92.55 / 11.0
Table 1. Effect of white-box adversarial attacks to image space or individual elements in physical space. By combined, we allow three
sets of physical parameters to be perturbed jointly. Succ. denotes the success rate of attacks (%, higher is better), and p is the perceptibility
value (unit: 10−3 , lower is better) defined in Section 3.3.1.
Figure 2. Examples of adversaries generated in the 3D object classification task. In each group, the top row shows the original testing
image, which is correctly predicted by both AlexNet (A) and ResNet (R). The following two rows display the perturbations and the
attacked image, respectively. All perturbations are magnified by a factor of 5 and shifted by 128. p is the perceptibility value defined in
Section 3.3.1, and conf is the confidence score of the prediction.
ResNet-34. These networks work quite well on the original testing set (no perturbations are added), i.e., AlexNet
achieves 73.59% top-1 classification accuracy, and ResNet34 reports 79.35%. These numbers are comparable to the
single-view baseline accuracy reported in [40].
For each class, from the correctly classified testing samples, we choose 5 images with the highest classification
probabilities as the targets to generate adversaries. This
target set seems small, but we emphasize that attacking
in physical space is very time-consuming, as we need
to repeatedly re-render the perturbed physical parameters
throughout a total of 120 iterations. Using a Titan-X Pascal
GPU, attacking one image takes 12 minutes on average1 .
4.1.2 Quantitative Results
We apply the iterative version of the Fast Gradient Sign Method (FGSM), and try to attack each set of physical parameters (surface normals, illumination and material) individually or to attack them jointly. For comparison, we also provide results of attacking image space directly. For each individual set of parameters, we set a non-target goal (see Section 3.3.2), use the SGD optimizer (momentum is 0.9 and weight-decay is 10−4), and set the maximal number of iterations to be 120. Choosing the learning rate is a little bit tricky. If the learning rate is too large, the truncation in image space will happen frequently (see Section 3.3.3), and we cannot guarantee an accurate correspondence between physical and image spaces. On the other hand, if the learning rate is too small, the accumulated perturbations are not enough to change the prediction. We choose the best learning rate from {0.0008, 0.001, 0.002, 0.005, 0.008, 0.01} for each set of physical parameters. This is not cheating, as adversarial attacks assume that we know the labels of all the target images [11].
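For concreteness, the following is a minimal sketch of one such per-image, non-targeted attack loop on the physical parameters, following the update rule in Eqs. (7)–(8) and the settings above. It is not the authors' released code: the differentiable renderer render, the classifier model, and the pixel-change budget are stand-in assumptions.

    # Hedged sketch only: 'render' (differentiable rendering layer) and 'model'
    # (pre-trained classifier) are assumed stand-ins, not the paper's implementation.
    import torch
    import torch.nn.functional as F

    def attack_physical(N, L, m, render, model, label, eta=0.001,
                        max_iters=120, pixel_budget=0.1):
        """Iteratively perturb the physical parameters N (non-targeted goal)."""
        N0 = N.clone()
        clean_image = render(N0, L, m).detach()
        for _ in range(max_iters):
            N = N.detach().requires_grad_(True)
            image = render(N, L, m)                     # image = r(N, L, m)
            loss = F.cross_entropy(model(image), label)
            loss.backward()                             # Delta N_t = gradient of the loss w.r.t. N
            with torch.no_grad():
                N = N + eta * N.grad                    # N_t = N_{t-1} + eta * Delta N_t
                # Imperceptibility: constrain the induced RGB change on the image layer
                # (a stand-in for the per-pixel truncation of Section 3.3.3).
                if (render(N, L, m) - clean_image).abs().max() > pixel_budget:
                    break
                if model(render(N, L, m)).argmax(dim=1).item() != label.item():
                    break                               # non-targeted goal reached
        return (N - N0).detach()                        # accumulated perturbation

In the actual experiments, the learning rate eta is selected per parameter set from the grid listed above.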
Results of direct attacks are summarized in Table 1 (the indirect method does not work out, see below). First, we demonstrate that adversaries widely exist in both image space and physical space. In image space, as researchers have explored before [43][26], it is easy to confuse the network with small perturbations – in our case, the success rate is always 100% and the perceptibility does not exceed 10−2. In physical space, however, generating adversaries becomes much more difficult – the success rate becomes lower and large perceptibility values are often observed on the successful cases. Typical adversarial examples generated in physical space are shown in Figure 2.

Footnote 1: This time cost depends on the parameters allowed to be perturbed. Perturbing surface normals, illumination and material takes around 8, 4 and 10 minutes on each image, respectively, and attacking all of them jointly takes around 12 minutes.

4.1.3 Diagnosis
Among three sets of physical parameters, attacking surface normals is more effective than the other two. This is as expected, as using local perturbations is often easier in attacking deep neural networks [11]. The surface normal matrix shares the same dimensionality with the image lattice, and changing an element in the matrix only impacts a single pixel in the rendered image. In comparison, illumination and material are both global properties of the 3D scene or the object, so tuning each parameter will cause a number of pixels to be modified, hence less effective in adversarial attacks. This property also holds in the scenario of visual question answering. As a side note, although perturbing surface normals allows each pixel to be changed independently, the rendered RGB intensity is also highly impacted by the other two parameters. This is the reason why allowing all physical parameters to be jointly optimized produces the highest success rate. We take this option in the remaining diagnostic experiments.

We plot the curves of the average loss over 255 target images throughout the iterative process of generating adversaries in Figure 3. The loss values in physical attacks, especially for illumination and material, drop much slower than those in image attacks. Even with a larger number of iterations, there are still some objects that are not attacked successfully, especially in the scenarios of perturbing the illumination and material parameters.

Finally, as the average perceptibility in physical space is much larger than that in image space, we conjecture that adversaries generated in image space are inauthentic, i.e., using the current optimization approach (FGSM), it is almost impossible to find physical parameters that are approximately rendered into them. This is verified using the indirect method described in Section 3.3.4. In both AlexNet and ResNet-34, it fails on all 255 target images, regardless of the learning rate and optimizer (SGD or Adam).

[Figure 3: two panels, "Loss Curves on AlexNet" and "Loss Curves on ResNet-34"; x-axis: number of iterations (0–120); y-axis: average loss function value; legend: Image, Normals, Illumination, Material, Joint.]
Figure 3. Curves of the average loss function value throughout the iterations of FGSM. The starting point of each curve is the average prediction confidence on the original images.

4.2. Visual Question Answering
We extend our experiments to a more challenging vision task – visual question answering. Experiments are performed on the recently released CLEVR dataset [14]. This is an engine that can generate an arbitrary number of 3D scenes with meta-information (object configuration). Each scene is also equipped with multiple generated questions, e.g., asking for the number of specified objects in the scene, or if the object has a specified property.

4.2.1 Settings
The baseline algorithm is named Inferring and Executing Programs (IEP) [15]. It applied an LSTM to parse each question into a tree structure, which is then converted into a neural module network. We use the released model without training it by ourselves. We sample a subset of 200 images from the original testing set, and equip each of them with 3 visual questions. The original model reports an answering (classification) accuracy of 96.17% on these 200×3 images.

We randomly pick 100 testing images, on which all 3 questions are correctly answered, as the target images. The settings for generating adversarial perturbations are the same as in the classification experiments, i.e., the iterative FGSM is used, and three sets of physical parameters are attacked either individually or jointly.
                          Image           Surface N.      Illumination    Material        Combined
Attacking \ Perturbations  Succ.    p      Succ.    p      Succ.    p      Succ.    p      Succ.    p
FGSM on IEP                100.00   2.9    77.67    9.9    49.67    11.0   19.00    8.4    82.67    9.9
Table 2. Generating white-box attacks for visual question answering with IEP [15]. By combined, we allow three sets of physical
parameters to be perturbed jointly. Succ. denotes the success rate of attacks (%, higher is better) of giving a correct answer, and p is the
perceptibility value (unit: 10−3 , lower is better) defined in Section 3.3.1.
Figure 4. Examples of adversaries generated in the 3D visual question answering task. In each group, the top row shows the original testing
image and two questions, both of which are correctly answered. The following two rows display the perturbations and the attacked image,
respectively. All perturbations are magnified by a factor of 5 and shifted by 128. p is the perceptibility value defined in Section 3.3.1, and
conf is the confidence score of choosing this answer. Note that the rightmost column shows a ridiculous answer.
4.2.2 Quantitative Results
Results are shown in Table 2. We observe similar phenomena as in the classification experiments. This is as expected,
since after the question is parsed and a neural module network is generated, attacking either image or physical space
is essentially equivalent to that in the classification task.
A side note comes from perturbing the material parameters. Although some visual questions are asking about the
material (e.g., metal or rubber) of an object, the success
rate of this type of questions does not differ from that in
attacking other questions significantly. This is because we
are constraining perceptibility, which does not allow the
material parameters to be modified by a large value.
A significant difference of visual question answering
comes from the so-called language prior. With a language
parser, the network is able to clinch a small subset of
answers without looking at the image, e.g., when asked
about the color of an object, it is very unlikely for the
network to answer yes or three. We find that sometimes
the network can make such ridiculous errors, e.g., in the
rightmost column of Figure 4, when asked about the shape
of an object, the network answers no after a non-targeted
attack.

5. Conclusions
This paper delivers an important message, which is the
difficulty of generating adversaries in physical space. To
study this topic, we plug a differentiable rendering layer
into the state-of-the-art deep networks for object classification and visual question answering. Two methods are used
to attack the physical parameters. First, directly constructing adversaries in physical space is effective, but the success rate is lower than that in image space, and much heavier perturbations are required for a successful attack. Second, based on the current optimization algorithms, e.g., iterative FGSM, it is almost impossible to generate the adversaries in image space by perturbing the physical parameters, which suggests that 2D adversaries in image space are often not well explained by physical perturbations.
This work has two potential implications. First, the existence of
real physical adversaries may trigger research in more complicated 3D applications, e.g., stereo matching [47] or reinforcement learning [24] in 3D virtual scenes. Second,
in 3D vision scenarios, we can defend the deep neural
networks from 2D adversaries by enforcing an image to be
interpreted in physical space, so that their attacking abilities
are weakened or removed after being re-rendered.
Acknowledgements
We thank Guilin Liu, Cihang Xie, Zhishuai Zhang and Yi Zhang for instructive discussions.

References
[1] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Neural Module Networks. Computer Vision and Pattern Recognition, 2016.
[2] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh. VQA: Visual Question Answering. International Conference on Computer Vision, 2015.
[3] Blender Online Community. Blender – a 3D modelling and rendering package. https://www.blender.org/, 2017. Blender Foundation, Blender Institute, Amsterdam.
[4] N. Carlini and D. Wagner. Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy, 2017.
[5] A. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, et al. ShapeNet: An Information-Rich 3D Model Repository. arXiv preprint arXiv:1512.03012, 2015.
[6] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. Yuille. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
[7] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. Computer Vision and Pattern Recognition, 2009.
[8] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering. Advances in Neural Information Processing Systems, 2015.
[9] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. Computer Vision and Pattern Recognition, 2014.
[10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative Adversarial Nets. Advances in Neural Information Processing Systems, 2014.
[11] I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and Harnessing Adversarial Examples. International Conference on Learning Representations, 2015.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. Computer Vision and Pattern Recognition, 2016.
[13] S. Ioffe and C. Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. International Conference on Machine Learning, 2015.
[14] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. Zitnick, and R. Girshick. CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning. Computer Vision and Pattern Recognition, 2017.
[15] J. Johnson, B. Hariharan, L. van der Maaten, J. Hoffman,
L. Fei-Fei, C. Zitnick, and R. Girshick. Inferring and
Executing Programs for Visual Reasoning. International
Conference on Computer Vision, 2017.
[16] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet
Classification with Deep Convolutional Neural Networks.
Advances in Neural Information Processing Systems, 2012.
[17] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial Examples in the Physical World. Workshop Track, International
Conference on Learning Representations, 2017.
[18] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial
Machine Learning at Scale. International Conference on
Learning Representations, 2017.
[19] G. Liu, D. Ceylan, E. Yumer, J. Yang, and J. Lien. Material Editing Using a Physically Based Rendering Network.
International Conference on Computer Vision, 2017.
[20] Y. Liu, X. Chen, C. Liu, and D. Song. Delving into Transferable Adversarial Examples and Black-Box Attacks. International Conference on Learning Representations, 2017.
[21] S. Lombardi and K. Nishino. Reflectance and Natural Illumination from a Single Image. European Conference on
Computer Vision, 2012.
[22] J. McCormac, A. Handa, S. Leutenegger, and A. Davison.
SceneNet RGB-D: 5M Photorealistic Images of Synthetic
Indoor Trajectories with Ground Truth. International Conference on Computer Vision, 2017.
[23] J. Metzen, T. Genewein, V. Fischer, and B. Bischoff. On Detecting Adversarial Perturbations. International Conference
on Learning Representations, 2017.
[24] P. Mirowski, R. Pascanu, F. Viola, H. Soyer, A. Ballard,
A. Banino, M. Denil, R. Goroshin, L. Sifre, K. Kavukcuoglu,
et al. Learning to Navigate in Complex Environments. arXiv
preprint arXiv:1611.03673, 2016.
[25] S. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.
Universal Adversarial Perturbations. Computer Vision and
Pattern Recognition, 2017.
[26] S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. DeepFool:
A Simple and Accurate Method to Fool Deep Neural Networks. Computer Vision and Pattern Recognition, 2016.
[27] V. Nair and G. Hinton. Rectified Linear Units Improve
Restricted Boltzmann Machines. International Conference
on Machine Learning, 2010.
[28] A. Nguyen, J. Yosinski, and J. Clune. Deep Neural Networks
are Easily Fooled: High Confidence Predictions for Unrecognizable Images. Computer Vision and Pattern Recognition, 2015.
[29] F. Nicodemus, J. Richmond, J. Hsia, I. Ginsberg, and
T. Limperis. Geometrical Considerations and Nomenclature
for Reflectance. Radiometry, pages 94–145, 1992.
[30] K. Nishino. Directional Statistics BRDF Model. International Conference on Computer Vision, 2009.
[31] N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami.
Distillation as a Defense to Adversarial Perturbations against
Deep Neural Networks. IEEE Symposium on Security and
Privacy, 2016.
[32] C. Qi, H. Su, K. Mo, and L. Guibas. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation.
Computer Vision and Pattern Recognition, 2017.
[33] A. Radford, L. Metz, and S. Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. International Conference on Learning
Representations, 2016.
[34] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN:
Towards Real-Time Object Detection with Region Proposal
Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6):1137–1149, 2017.
[35] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh,
S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein,
et al. ImageNet Large Scale Visual Recognition Challenge.
International Journal of Computer Vision, pages 1–42, 2015.
[36] K. Sfikas, T. Theoharis, and I. Pratikakis. Exploiting the
PANORAMA Representation for Convolutional Neural Network Classification and Retrieval. Eurographics Workshop
on 3D Object Retrieval, 2017.
[37] E. Shelhamer, J. Long, and T. Darrell. Fully Convolutional
Networks for Semantic Segmentation. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 39(4):640–651,
2017.
[38] K. Simonyan and A. Zisserman. Very Deep Convolutional
Networks for Large-Scale Image Recognition. International
Conference on Learning Representations, 2015.
[39] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and
R. Salakhutdinov. Dropout: A Simple Way to Prevent Neural
Networks from Overfitting. Journal of Machine Learning
Research, 15(1):1929–1958, 2014.
[40] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller. Multiview Convolutional Neural Networks for 3D Shape Recognition. International Conference on Computer Vision, 2015.
[41] I. Sutskever, O. Vinyals, and Q. Le. Sequence to Sequence
Learning with Neural Networks. Advances in Neural Information Processing Systems, 2014.
[42] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed,
D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich.
Going Deeper with Convolutions. Computer Vision and
Pattern Recognition, 2015.
[43] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan,
I. Goodfellow, and R. Fergus. Intriguing Properties of Neural
Networks. 2014.
[44] F. Tramer, A. Kurakin, N. Papernot, D. Boneh, and P. McDaniel. Ensemble Adversarial Training: Attacks and Defenses. arXiv preprint arXiv:1705.07204, 2017.
[45] C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. Yuille.
Adversarial Examples for Semantic Segmentation and Object Detection. International Conference on Computer Vision, 2017.
[46] X. Xu, X. Chen, C. Liu, A. Rohrbach, T. Darell, and D. Song.
Can You Fool AI with Adversarial Examples on a Visual
Turing Test? arXiv preprint arXiv:1709.08693, 2017.
[47] Y. Zhang, W. Qiu, Q. Chen, X. Hu, and A. Yuille. UnrealStereo: A Synthetic Dataset for Analyzing Stereo Vision.
arXiv preprint arXiv:1612.04647, 2016.
10
Light spanners for bounded treewidth graphs
imply light spanners for H-minor-free graphs∗
Glencora Borradaile and Hung Le
Department of Electrical Engineering and Computer Science
Oregon State University, USA
[email protected], [email protected]
arXiv:1703.10633v1 [] 30 Mar 2017
Abstract
Grigni and Hung [10] conjectured that H-minor-free graphs have (1 + ε)-spanners that are light, that is, of weight g(|H|, ε) times the weight of the minimum spanning tree for some function g. This conjecture implies an efficient polynomial-time approximation scheme (PTAS) for the traveling salesperson problem in H-minor-free graphs; that is, a PTAS whose running time is of the form 2^{f(ε)} n^{O(1)} for some function f. The state-of-the-art PTAS for TSP in H-minor-free graphs has running time n^{1/poly(ε)}. We take a further step toward proving this conjecture by showing that if bounded treewidth graphs have light greedy spanners, then the conjecture is true. We also prove that the greedy spanner of a bounded pathwidth graph is light and discuss the possibility of extending our proof to bounded treewidth graphs.
1998 ACM Subject Classification F.2.2 Nonnumerical Algorithms and Problems, G.2.2 Graph
Theory
Keywords and phrases Light spanners, bounded treewidth graphs, H-minor-free graphs, traveling salesperson problem
Digital Object Identifier 10.4230/LIPIcs.xxx.yyy.p
1 Introduction
Spanners are used to approximately preserve distances in a compact way. In this work, we focus on spanners that preserve distances within a (1 + ε) factor (for a fixed ε < 1) and measure quality in terms of the spanner's weight compared to the minimum spanning tree (the lightness). Formally, given an edge-weighted graph G, we wish to find a spanning subgraph S of G such that1 dS(x, y) ≤ (1 + ε) · dG(x, y) for all x, y ∈ V(G) and w(S) = L(ε) · w(MST(G)), where the lightness, L(ε), is a function that depends only on ε.
We focus on the greedy (1 + ε)-spanner, the spanner that is constructed by adding edges by increasing weight while doing so decreases the distance between their endpoints by a (1 + ε) factor. Althöfer et al. showed that the greedy spanner has lightness L(ε) = O(1/ε) for planar graphs (and also gave lightness bounds that depend on n for general graphs) [1]. The same lightness bound holds for bounded genus graphs, as shown by Grigni [8]. However, the best lightness bound, which was shown by Grigni and Sissokho [11], for H-minor-free graphs is |H|√(log |H|) · log n/ε.
∗ This material is based upon work supported by the National Science Foundation under Grant No. CCF-1252833.
1 We use standard graph terminology and notation; we revisit notation necessary for some proofs in Appendix A.
In this work, we investigate the possibility of removing the dependence on n from the
lightness for H-minor-free graphs by focusing on the following conjecture of Grigni and
Hung [10]:
Conjecture 1. H-minor-free graphs have (1 + ε)-spanners with lightness that depends on |H| and ε only.
If this conjecture is true, it would, among other things, imply that TSP admits an efficient PTAS for H-minor-free graphs, that is, a PTAS whose running time is of the form 2^{f(ε)} n^{O(1)} for some function f, improving on the existing PTAS with running time n^{1/poly(ε)} via the
framework of Demaine, Hajiaghayi and Kawarabayashi [3] and the spanner of Grigni and
Sissokho [11]. We make progress towards proving Conjecture 1 by reducing the heart of the
problem to the simpler graph class of bounded treewidth2 graphs:
Theorem 2. If the greedy (1 + ε)-spanner of a graph of treewidth tw has lightness that depends on tw and ε only, then the greedy (1 + ε)-spanner of an H-minor-free graph has lightness that depends on |H| and ε only.
Grigni and Hung gave a construction of a (1 + ε)-spanner for graphs of pathwidth pw with lightness O(pw^3/ε) [10]; however, their construction is not greedy. Rather than considering the edges by increasing order of weight, they constructed a monotone spanning tree and greedily added edges to the monotone tree. They also argued that such a spanning tree is unlikely to exist for bounded treewidth graphs, giving little hope on two fronts (the different spanner construction as well as a construction that is unlikely to generalize to graphs of bounded treewidth) that Theorem 2 will lead to proving Conjecture 1 via Grigni and Hung's work. In this paper we improve Grigni and Hung's result for bounded pathwidth graphs and do so by arguing lightness for the standard greedy algorithm, removing the limitations of Theorem 2 as a stepping stone to Conjecture 1. In Section 4, we prove:
Theorem 3. The greedy (1 + ε)-spanner for a graph G of pathwidth pw has lightness O(pw^2/ε).
While our proof does not immediately extend to graphs of bounded treewidth, the techniques
are not as specific to path decompositions as Grigni and Hung’s monotone-spanning-tree
technique is, and thus gives more hope for proving Conjecture 1.
While it may seem like a limitation in proving Conjecture 1 that we must show that a particular construction of the (1 + ε)-spanner (namely the greedy construction) is light for bounded treewidth graphs, Filtser and Solomon (Theorem 4 [7]) showed that if an edge-weighted graph has a light spanner, then its greedy spanner is also light.
2 Analyzing greedy spanners
The greedy construction for a (1 + ε)-spanner due to Althöfer et al. [1] is an extension of Kruskal's minimum spanning tree algorithm. Start by sorting the edges by increasing weight and initializing an empty spanner subgraph S; for each edge uv in order, if (1 + ε)w(uv) ≤ dS(u, v), then uv is added to S. By observing that this is a relaxation of Kruskal's algorithm, MST(G) ⊆ S. Althöfer et al. (Lemma 3 [1]) also showed that for any edge e = uv in S and any u-to-v path PS(uv) between u and v in S \ {e}, we have:

    (1 + ε) w(e) ≤ w(PS(uv))    (1)

2 Formal definitions of pathwidth and treewidth are given later in this paper.
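As an illustration of the construction just described (not taken from the paper), the greedy (1 + ε)-spanner can be sketched in a few lines of Python; the networkx-based distance queries are a simple stand-in and far from the most efficient implementation:

    import networkx as nx

    def greedy_spanner(G, eps):
        """Sketch of the greedy (1 + eps)-spanner of an edge-weighted graph G."""
        S = nx.Graph()
        S.add_nodes_from(G.nodes())
        # Consider edges by increasing weight, as in Kruskal's algorithm.
        for u, v, w in sorted(G.edges(data="weight"), key=lambda e: e[2]):
            try:
                d = nx.dijkstra_path_length(S, u, v)   # current distance in S
            except nx.NetworkXNoPath:
                d = float("inf")
            if (1 + eps) * w <= d:                     # the greedy test of Althöfer et al.
                S.add_edge(u, v, weight=w)
        return S

Since the test is a relaxation of Kruskal's cycle test, every MST edge passes it, which is exactly the containment MST(G) ⊆ S noted above.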
The following property of greedy (1 + ε)-spanners is crucial in our analysis. The proof follows by contradiction with Equation (1) and can be found in Appendix B.
Lemma 4. Let S be the greedy (1 + ε)-spanner of a graph and let H be a subgraph of S. Then the greedy (1 + ε)-spanner of H is H itself.
2.1 Charging scheme
To argue that S is light, that is, has weight L(ε) · w(MST(G)) for some function L(ε), we identify a specific charging path PS(uv) from u to v for each non-spanning-tree edge uv of the spanner. One may think of PS(uv) as being the shortest u-to-v path in S when uv is added to the spanner, but this is not necessary for the analysis; we only need that (1 + ε)w(uv) ≤ w(PS(uv)) (as is guaranteed by Equation (1) for greedy spanners) for every path in S. We call (uv, PS(uv)) a charging pair. For a spanning tree T (not necessarily a minimum spanning tree), we call a set of charging pairs (e, PS(e)) for all edges e ∈ S\T a charging scheme. We say that an edge is charged to if it belongs to the charging path for another edge. A charging scheme is acyclic if the directed graph whose vertices are the edges not in T and whose directed edges represent the charged-to relationship, i.e., there is a directed edge (e1 → e2) if e2 is charged to by e1, is acyclic. A charging scheme is k-simple if each edge e ∈ S\T is charged to at most once and each edge in T is charged to at most k times. Based on these definitions, one can prove (see Appendix B for full details):
Lemma 5. If S is a greedy (1 + ε)-spanner of a graph that has a k-simple acyclic charging scheme to a spanning tree T, then w(S) ≤ (1 + k/ε) · w(T).
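For intuition (this is not the appendix proof), the bound follows by summing Equation (1) over the charging pairs and using the fact that a k-simple scheme charges each non-tree edge at most once and each tree edge at most k times:

    (1 + ε) · w(S \ T) = Σ_{e ∈ S\T} (1 + ε) w(e) ≤ Σ_{e ∈ S\T} w(PS(e)) ≤ w(S \ T) + k · w(T),

so ε · w(S \ T) ≤ k · w(T), and hence w(S) = w(T) + w(S \ T) ≤ (1 + k/ε) · w(T).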
Indeed, in proving the greedy (1 + ε)-spanner has weight at most (1 + 2/ε) · w(MST(G)) when G is planar, Althöfer et al. [1] implicitly proved the existence of a 2-simple acyclic charging
scheme to MST(G). Since our paper only deals with acyclic simple charging schemes, we
simply say simple charging schemes to refer to acyclic simple charging schemes. We will use
a stronger result for outer-planar graphs, planar graphs in which all the vertices are on the
boundary of a common face (the outer face), which we take to be simple. The proof of the
following lemma is included in Appendix B.
Lemma 6. If G is an outer-planar graph and T is a path formed by all the edges in the
boundary less one edge, then G has an acyclic 1-simple charging scheme to T .
For a greedy spanner S, one may instead, when it is convenient, define a weak k-simple
charging scheme for a supergraph of S. A weak k-simple charging scheme is a k-simple
charging scheme in which Equation (1) need not hold for charging pairs. See Appendix B for
the proof of the following.
Lemma 7. Let S be a greedy spanner with spanning tree T and let Ŝ be a supergraph of S
that S spans. If Ŝ has a weak k-simple charging scheme to T then S has a k-simple charging
scheme to T .
3 The greedy spanner of an H-minor-free graph is possibly light
Our result relies on the Graph Minor Structure Theorem due to Robertson and Seymour [14]
which guarantees a structural decomposition of an H-minor-free graph into simpler graphs.
Informally, the seminal Graph Minor Structure Theorem of Robertson and Seymour states
that every H-minor-free graph is the (small) clique-sum of graphs that are almost embeddable
Arxiv
4
Light spanners bounded treewidth graphs and H-minor-free graphs
on graphs of small genus. We give a formal statement of the Graph Minor Structure Theorem
below after some requisite definitions.
We first argue that almost-embeddable graphs have light (1 + ε)-spanners assuming bounded treewidth graphs have light (1 + ε)-spanners. We partition the spanner edges of
almost-embeddable graphs into two parts: those in the surface-embeddable part and those in
the non-embeddable part. We bound the weight of the surface-embeddable part by “cutting
along” a subset of edges to create an outer-planar graph and then using the lightness bound
for outer-planar graphs. Since the large-grid minor of the graph must be contained in the
surface-embeddable part, we can show that the non-embeddable part has bounded treewidth.
Therefore, the lightness of (1 + ε)-spanners of the non-embeddable part follows from the assumption that bounded treewidth graphs have light (1 + ε)-spanners.
3.1 Definitions: Treewidth, pathwidth and the structure of H-minor-free graphs
Note that if G excludes K|H| as a minor, then it also excludes H.
Tree decomposition A tree decomposition of G(V, E) is a pair (X , T ) where X = {Xi |i ∈ I},
each Xi is a subset of V (called bags), I is the set of indices, and T is a tree whose set of
nodes is X satisfying the following conditions:
1. The union of all sets Xi is V .
2. For each edge uv ∈ E, there is a bag Xi containing both u, v.
3. For a vertex v ∈ V , all the bags containing v make up a subtree of T .
The width of a tree decomposition T is maxi∈I |Xi | − 1 and the treewidth of G, denoted by
tw, is the minimum width among all possible tree decompositions of G. A path decomposition
of a graph G(V, E) is a tree decomposition where the underlying tree is a path and is denoted
by (X , P). The pathwidth of a graph G(V, E), denoted by pw, is defined similarly.
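To make the three conditions concrete, here is a small illustrative check (a sketch under stated assumptions, not part of the paper) of whether a candidate pair (T, {Xi}) is a tree decomposition of G, together with its width; the example at the end is a path decomposition of a 4-cycle:

    import networkx as nx

    def is_tree_decomposition(G, T, bags):
        # T must be a tree over the bag indices.
        if not nx.is_tree(T) or set(T.nodes()) != set(bags):
            return False
        # 1. The union of all bags is V(G).
        if set().union(*bags.values()) != set(G.nodes()):
            return False
        # 2. Every edge of G is contained in some bag.
        for u, v in G.edges():
            if not any(u in X and v in X for X in bags.values()):
                return False
        # 3. For each vertex v, the bags containing v induce a subtree of T.
        for v in G.nodes():
            nodes_with_v = [i for i, X in bags.items() if v in X]
            if not nx.is_connected(T.subgraph(nodes_with_v)):
                return False
        return True

    def width(bags):
        return max(len(X) for X in bags.values()) - 1

    # Example: a path decomposition (the tree is a path) of the 4-cycle a-b-c-d-a.
    G = nx.cycle_graph(["a", "b", "c", "d"])
    T = nx.path_graph(2)                                  # two adjacent bags, 0 and 1
    bags = {0: {"a", "b", "d"}, 1: {"b", "c", "d"}}
    assert is_tree_decomposition(G, T, bags) and width(bags) == 2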
β-almost-embeddable A graph G is β-almost-embeddable if there is a set of vertices A ⊆
V (G) and β + 1 graphs G0 , G1 , . . . , Gβ such that:
1. |A| ≤ β.
2. G0 ∪ G1 ∪ . . . ∪ Gβ = G[V \A].
3. G0 is embeddable in a surface Σ of genus at most β.
4. Each Gj has a path decomposition (Xj , Pj ) of width at most β and length |Ij |, which is
the number of bags, for j ≥ 1.
5. There are β faces F1 , F2 , . . . , Fβ of G0 such that |Fj | ≥ |Ij |, G0 ∩ Gj ⊆ V (Fj ), and the
vertices of Fj appear in the bags of Xj in order along Pj for each j.
The vertices A are called apices and the graphs {G1 , G2 , . . . , Gβ } are called vortices. The
vortex Gj is said to be attached to the face Fj , 1 ≤ j ≤ β.
β-clique-sum Given two graphs H1 , H2 , a graph H is called a β-clique-sum of H1 and H2
if it can be obtained by identifying a clique of size at most β in each of two graphs H1 , H2
and deleting some of the clique edges.
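As a small illustration of this operation (hypothetical code, not from the paper), a 3-clique-sum of two copies of K4 can be formed by identifying a triangle of each and optionally deleting some of the identified clique edges:

    import networkx as nx

    H1 = nx.complete_graph(4)                              # K4 on {0, 1, 2, 3}
    H2 = nx.relabel_nodes(nx.complete_graph(4), {3: "d"})  # K4 whose triangle {0, 1, 2}
                                                           # is identified with that of H1
    H = nx.compose(H1, H2)                                 # glue along the shared clique
    H.remove_edge(0, 1)                                    # a clique-sum may delete clique edges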
We can now state Robertson and Seymour’s result:
Graph Minor Structure Theorem (Theorem 1.3 [14]). An H-minor-free graph can be
decomposed into a set of β(|H|)-almost-embeddable graphs that are glued together in a tree-like structure by taking β(|H|)-clique-sums, where β(|H|) is a function of |H|.
It will be convenient to consider a simplified decomposition that assumes there are no edges
between the apices and vortices. This simplification introduces zero-weight edges that do not
change the distance metric of the graph. We include the proof of this claim in Appendix B.
Claim 8. There is a representation of a β-almost-embeddable graph as a 2β-almost-embeddable graph that has no edge between apices and vertices of vortices (that are not in the surface-embedded part of the graph) and that maintains the distance metric of the graph.
In the remainder of this section, we prove Theorem 2 by assuming that, if S is the greedy (1 + ε)-spanner of a graph of treewidth tw:

    w(S) ≤ g(tw, ε) · w(MST)    (2)
The bulk of the technical detail in dealing with H-minor-free graphs is in handling the
vortices, which we do first.
3.2 Handling vortices
In this subsection, we consider a greedy (1 + ε)-spanner G of some apex-free β-almost-embeddable graph. We will show that:

    w(G) ≤ q(β, ε) · w(MST(G))    (3)

where q(β, ε) is a lightness that depends only on β and ε. Note that the MST of the
graph is the MST of the spanner. We assume that G is connected since we can bound the
weight of each component separately. Let G = G0 ∪ G1 ∪ . . . ∪ Gβ be the decomposition of G
into a graph G0 embedded on a surface Σ of genus at most β and a set G1 , . . . , Gβ vortices
according to the definition of β-almost-embeddability. Let Ci be the cycle bounding the face
of G0 to which vortex Gi is attached.
Bounding the weight of the vortices. First we bound the weight of the vortices and their bounding cycles, ∪_{i=1}^{β} (Ci ∪ Gi), by showing that MST(G) ∪ (∪_{i=1}^{β} (Ci ∪ Gi)) has bounded treewidth.
Let K = (MST(G) ∩ G0) ∪ (∪_{i=1}^{β} Ci) and let K* be the dual of K; the vertices of K* correspond to the faces of K. Consider a vertex v of K* that does not correspond to a face bounded by a cycle in {C1, C2, . . . , Cβ}. Then v must be adjacent to a face that is bounded by a cycle Cj for some j because K \ (∪_{i=1}^{β} Ci) is a forest. Therefore, the diameter of K* is O(β). Since a graph of genus β has treewidth O(β · diameter) (Eppstein, Theorem 2 [5]), K* has treewidth O(β^2). Since the dual of a graph of treewidth tw and genus β has treewidth O(tw + β) (Mazoit, Proposition 2 [13]), K has treewidth O(β^2).
Grohe showed that if Gi is a vortex attached to a face of K, then tw(Gi ∪ K) ≤ (pw(Gi) + 1)(tw(K) + 1) − 1 (Lemma 2 [12]). Adding in each of the vortices G1, G2, . . . , Gβ to K and using Grohe's result gives that the treewidth of MST(G) ∪ (∪_{i=1}^{β} (Ci ∪ Gi)) is O(β^{β+2}). Since MST(G) ∪ (∪_{i=1}^{β} (Ci ∪ Gi)) is a subgraph of a greedy spanner, it is a greedy spanner itself (Lemma 4) and by Equation (2), we have:

    Σ_{i=1}^{β} (w(Ci) + w(Gi)) ≤ g(O(β^{β+2}), ε) · w(MST(G)).    (4)
Bounding the surface-embedded part of the spanner. Let Ĝ be the graph obtained from G0 by contracting the cycles {C1, C2, . . . , Cβ} bounding the vortices into vertices {c1, c2, . . . , cβ}, removing loops and removing parallel edges. Let TĜ be the minimum spanning tree of Ĝ and let X be the set of edges that has smallest summed weight such that
cutting open the surface Σ along TĜ ∪ X creates a disk; |X| ≤ 2β (see, e.g., Eppstein [6]). Since TĜ ⊆ MST(G) and an edge in the spanner is also the shortest path between its endpoints, we have:

    w(TĜ ∪ X) ≤ (2β + 1) · w(MST(G))    (5)

Cutting the surface open along X ∪ TĜ ∪ (∪_{i=1}^{β} Ci) creates β + 1 disks: one disk for each face that a vortex is attached to and one disk ∆ corresponding to the remainder of the surface. The boundary of ∆, ∂∆, is formed by two copies of each of the edges of TĜ ∪ X and one copy of each of the edges in ∪_{i=1}^{β} Ci (see Figure 3 in Appendix C). Therefore we can use Equations (4) and (5) to bound the weight of the boundary of ∆:

    w(∂∆) ≤ (4β + 2 + g(O(β^{β+2}), ε)) · w(MST(G))    (6)
Let E∆ be the set of edges of G that are not in MST(G), X or any of the vortices or their
boundaries. That is, E∆ contains all the edges that we have not yet bounded. Since ∂∆
spans G0 , there is a 1-simple charging scheme to ∂∆ (less an edge, Lemma 6). Therefore, by
Lemma 5 and Lemma 4,
    w(E∆) ≤ (1 + 1/ε) · (4β + 2 + g(O(β^{β+2}), ε)) · w(MST(G))    (7)
Total weight of the spanner. Since every edge of G is either in MST(G), X, E∆ or Gi ∪ Ci (for some i), summing Equations (4), (5) and (7) gives us Equation (3):

    w(G) ≤ (2 + 2β + (1 + 1/ε)(4β + 2 + g(O(β^{β+2}), ε)) + g(O(β^{β+2}), ε)) · w(MST(G))    (8)

where the parenthesized factor is the lightness q(β, ε).
3.3 Adding apices and clique-sums
We are now ready to prove Theorem 2 by considering the apices and clique-sums of the
decomposition. Let G = J1 ⊕ J2 ⊕ . . . ⊕ Jγ be the β(|H|)-clique-sum of β(|H|)-almost-embeddable graphs J1, J2, . . . , Jγ given by the Graph Minor Structure Theorem. For Ji, let
Ai be its set of apices, let Ji1 , . . . , Jiβ be its set of vortices and let Ji0 be the graph embedded
on a surface of genus at most β, as provided by the definition of β(|H|)-almost-embeddable.
We assume the representation includes no edges between apices and the internal vertices of
vortices (vertices that are not in Ji0 ) by Claim 8.
Let S be the greedy (1 + ε)-spanner of G. For each Ji, we define Si as the set of spanner edges in the apex-free β(|H|)-almost-embeddable part of Ji (formally, Si = S ∩ (∪ℓ Jiℓ)).
Consider the spanning forest of Si that is induced by MST(G): Fi = MST(G) ∩ Si . We
choose a subset of edges Ei of Si \Fi such that:
(i) The number of components of Fi ∪ Ei is minimized.
(ii) Subject to (i), the size of Ei is minimized.
(iii) Subject to (i) and (ii), the weight of Ei is minimized.
By the choice of Ei and since Ai has no edges to the internal vertices of vortices, each tree of
Fi ∪ Ei is a minimum spanning tree for each apex-free β(|H|)-almost-embeddable component
of Si . Since Si is a subgraph of a greedy spanner, it is its own greedy spanner (Lemma 4)
and so, by Equation (3), we have w(Si) ≤ q(β(|H|), ε)(w(Fi) + w(Ei)). Summing over i, we have:

    Σ_{i=1}^{γ} w(Si) ≤ q(β(|H|), ε) · Σ_{i=1}^{γ} (w(Fi) + w(Ei))
                     ≤ q(β(|H|), ε) · (w(MST(G)) + Σ_{i=1}^{γ} w(Ei))    (9)
Let S(Ai) be the edges of S incident to vertices in Ai. Then, S ∩ Ji = Si ∪ S(Ai) and hence S = ∪_{i=1}^{γ} (Si ∪ S(Ai)). Therefore, we have:

    w(S) ≤ Σ_{i=1}^{γ} w(Si) + Σ_{i=1}^{γ} w(S(Ai))    (10)
Now define Ji′ = Fi ∪ Ei ∪ S(Ai). Then tw(Ji′) ≤ |Ai| + 1 ≤ β(|H|) + 1. Let G′ = J1′ ⊕ J2′ ⊕ . . . ⊕ Jγ′. We get that tw(G′) ≤ max_i tw(Ji′) ≤ β(|H|) + 1 by a result of Demaine et al. (Lemma 3 [4]). Note that MST(G′) = MST(G). We have (by Lemma 4 and Equation (2)):

    Σ_{i=1}^{γ} (w(Fi) + w(Ei) + w(S(Ai))) ≤ g(β(|H|) + 1, ε) · w(MST(G))    (11)

By Equations (9), (10) and (11), we get Theorem 2:

    w(S) ≤ ((q(β(|H|), ε) + 1) · g(β(|H|) + 1, ε) + q(β(|H|), ε)) · w(MST(G)).
4 The greedy spanner for bounded pathwidth graphs is light
Grigni and Hung proved that graphs of pathwidth pw have a (1 + ε)-spanner of lightness O(pw^3/ε) [10]. They do so by building a spanning tree that is monotone with respect to the path decomposition, of weight O(pw^2) · w(MST) (Lemma 2 [10]), and devising what we observe to be an O(pw)-simple charging scheme to the monotone spanning tree (Lemma 3 [10]). We prove that graphs of pathwidth pw have light greedy (1 + ε)-spanners by showing that there is an O(pw^2)-simple charging scheme to the MST, forgoing the need for constructing a monotone spanning tree, giving Theorem 3. Our proof gives evidence that one can avoid the pathwidth-specific monotonicity argument, opening the door to showing that graphs of bounded treewidth may have light greedy (1 + ε)-spanners as well. We discuss the challenges for bounded treewidth graphs at the end of the paper. Throughout this section G refers to the greedy (1 + ε)-spanner of some graph of pathwidth pw.
Smooth decompositions It will be convenient for our proofs to work with a standardized
path decomposition. We assume that bags are ordered linearly, i.e., Xi and Xi+1 are adjacent
(1 ≤ i ≤ |I| − 1), and the path decomposition is smooth. A path decomposition (X , P) is
smooth if |Xi | = pw + 1 ∀i ∈ I and |Xi ∩ Xi+1 | = pw for all 1 ≤ i ≤ |I| − 1. Bodlaender [2]
showed that a path decomposition can be turned into a smooth path decomposition of the
same width in linear time. We root the path decomposition (X , P) at the bag X|I| .
For adjacent bags Xi , Xi+1 , we call the vertex in Xi+1 \Xi the introduced vertex of Xi+1
and the vertex in Xi \Xi+1 the forgotten vertex of Xi . All vertices of X1 are introduced
vertices and all vertices of X|I| are forgotten vertices.
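The following small sketch (illustrative only, not from the paper) verifies the two smoothness conditions for an ordered list of bags and reports the introduced and forgotten vertices of each bag as defined above:

    def introduced_and_forgotten(bags, pw):
        """bags: ordered list of vertex sets X_1, ..., X_|I| of a path decomposition."""
        assert all(len(X) == pw + 1 for X in bags), "each bag must have pw + 1 vertices"
        assert all(len(bags[i] & bags[i + 1]) == pw for i in range(len(bags) - 1)), \
            "adjacent bags must share exactly pw vertices"
        introduced, forgotten = [], []
        for i, X in enumerate(bags):
            prev_bag = bags[i - 1] if i > 0 else set()              # no bag before X_1
            next_bag = bags[i + 1] if i + 1 < len(bags) else set()  # no bag after X_|I|
            introduced.append(set(X) - prev_bag)   # all of X_1 is introduced
            forgotten.append(set(X) - next_bag)    # all of X_|I| is forgotten
        return introduced, forgotten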
Overview: designing an O(pw^2)-simple charging scheme. In designing an O(pw^2)-simple charging scheme, one needs to guarantee (i) each non-tree edge is charged at most once and (ii) each tree edge3 is charged at most O(pw^2) times. At a high level, we use the charging scheme for edges in X0 ∪ . . . ∪ Xi−1 to design a charging scheme for the edges introduced to Xi. Let u be the introduced vertex of Xi. We need to define charging pairs for all non-tree edges between u and the vertices of Xi \ {u}.
3 A tree edge is an edge of the minimum spanning tree.
Arxiv
8
Light spanners bounded treewidth graphs and H-minor-free graphs
The simpler case is when there is a tree edge uv incident to u in Xi . For a non-tree edge
wu incident to u in Xi , we define a charging path for wu using the edges uv (a tree edge)
and vw (an edge that already has a defined charging path since vw is in a descendant bag of
Xi ). (This will be formalized as the triangle rule.) However, to guarantee condition (i), we
must prevent the use of wu in charging paths in the future. We keep track of this by way
of a charging forest whose vertices are the edges of G; in this case we add an edge to the
charging forest connecting uv and wu.
The harder case is when there is no tree edge incident to u in Xi . In this case, we consider
the u-to-v spanning-tree path that contains only edges of ancestor bags of Xi . We use this
path as a sit-in for the edge uv of the previous case. To guarantee condition (ii), we must
be careful to not use tree-edges in ancestor bags too many times. Since u may have an
ancestral spanning-tree path to multiple vertices of Xi \ {u}, we delay the choice of which
paths to use in defining a charging pair for edges incident to u by adding dashed edges to the
charging forest corresponding to all possible constructions. Then, to achieve condition (ii),
we carefully select which dashed edges to convert in defining the charging pairs for edges
incident to u in Xi .
Normalized graph. To simplify the presentation of the formal argument, we use a normalized
graph which merges a graph G with its smooth path decomposition (X , P). For each bag Xi ,
define the bag graph Gi = (Xi , Ei ) to be a subgraph of G where Ei is a maximal subset of
edges of G[Xi ] incident to introduced vertices of Xi . This implies that each edge of MST(G)
appears in exactly one bag graph. For adjacent bags Xi , Xi+1 , we add edges between two
copies of the same vertex of G in Gi and Gi+1 . We call the resulting graph the normalized
graph of G with respect to the path decomposition (X , P) and denote it by GP (VP , EP ). We
assign weight 0 to edges between bag graphs and weight w(e) for the copy of the edge e in G.
See Figure 4 in Appendix C for an example. Since the distances between vertices in G and the
distances between their copies in GP are the same: w(MST(GP )) = w(MST(G)). Further,
since distances are preserved, we can define a charging scheme for the greedy (1 + ε)-spanner of GP to MST(GP). In fact, we prove:
Theorem 9. There is an O(pw^2)-simple charging scheme for the greedy (1 + ε)-spanner of GP to MST(GP).
This theorem, along with the equivalence of the metrics of GP and G and Lemma 5, gives
Theorem 3. To simplify the construction of the charging scheme, we assume that G is a
k-path; this is without loss of generality by Lemma 7.
4.1 The charging forest
The main difficulty in defining the charging scheme is the existence of introduced vertices that
are connected to the MST via an edge that is in an ancestor bag of the path decomposition.
For the other types of introduced vertices, there is a triangle in the vertex’s bag graph that
allows us to pay for the non-tree edges incident to that vertex. Throughout this section MST
refers to MST(GP ).
To define the O(pw^2)-simple charging scheme, we construct a charging forest Φ to guide
the charging. The charging forest is a rooted spanning forest of the line graph4 of G, with
4 The nodes of the line graph of a graph G are the edges of G; the edges of the line graph are between
nodes whose corresponding edges of G share an endpoint.
one tree rooted at a vertex corresponding to each edge of MST(G) (that is, each non-zero
edge of the MST(GP )). We call the nodes of Φ φ-vertices and denote a φ-vertex of Φ by
(u, v) where uv is an edge of G.
We use three types of edges in constructing Φ: dashed edges, bold edges and mixed edges.
We construct Φ iteratively; Φi will be an intermediate forest of the line graph of G[∪j≤i Xi ].
Φi may contain all three types of edges but Φ = Φ|I| will contain only bold and mixed edges.
From Φi−1 to Φi , a dashed edge may be deleted or converted to a mixed edge and newly
added edges will either all be bold or all be dashed. A dashed-free tree of Φi is a maximal
tree of Φi that contains no dashed edge. Trees of the intermediate forests may be unrooted,
but each tree of an intermediate forest will contain at most one root.
We also maintain a contracted forest Λi spanning vertices of Xi . Intuitively, Λi tells us
the connections between vertices of Xi in MST[∪j≤i Xj ]. Λi is used to handle introduced
vertices that have no tree edges to other vertices of the same bag. We also assign unique
positive rank to each edge of MST in G and assign rank 0 to all edges in MST added to GP
by the normalizing process. Ranks of edges of MST are used to define edge-rank in Λi as
follows: the rank of an edge uv in Λi , denoted by ri (uv), is the minimum rank over the edges
in the u-to-v path of MST[∪j≤i Xj ]. We assign rank to each edge of MST in a way that the
rank of each edge in Λi is unique.
The triangle rule We say that an edge ((u, v), (u, w)) of the line graph satisfies the triangle
rule if vw ∈ MST(G) and (u, v) and (u, w) are in distinct dashed-free trees, both of which
do not contain roots (i.e. φ-vertices that correspond to edges of MST(G)). We will add
edges that satisfy the triangle rule to the charging forest. To maintain the acyclicity of the
intermediate charging forests, if adding ((u, v), (u, w)) introduces a cycle, the most recently
added dashed edge on the path in the charging forest from (u, v) to (u, w) is deleted.
Invariants of the charging forest Let Φi be the charging forest for GP [∪j≤i Xj ]. We say
that a φ-vertex (u, v) of Φi is active if u, v ∈ Xi+1 . We will show that Φi satisfies the
following invariants:
(i) For two trees T1 and T2 of MST[∪j≤i Xj ], all the φ-vertices of the form (u, v) such that
u ∈ T1 and v ∈ T2 are in a common unrooted tree of Φi . Further every unrooted tree of
Φi contains φ-vertices spanning components of MST[∪j≤i Xj ].
(ii) For a tree T of MST[∪j≤i Xj ], all the φ-vertices of the form (u, v) such that u, v ∈ T are
in rooted dashed-free trees of Φi .
(iii) Each unrooted dashed-free tree of Φi contains at least one active φ-vertex.
(iv) For any j ≤ i and any two distinct trees T1, T2 of MST[∪_{ℓ=j}^{i} Xℓ], (yj,1, yj,2) and (yi,1, yi,2) are in the same unrooted dashed-free tree of Φi, where yℓ,k ∈ Tk ∩ Xℓ for ℓ = i, j and k = 1, 2.
4.1.1 Initializing the charging forest
We define Φ1 as a forest of only bold edges. Recall that G1 is a complete graph so there is a
φ-vertex for every unordered pair of vertices in X1 . We greedily include bold edges in Φ1
that satisfy the triangle rule. Equivalently, consider the subgraph H of the line graph of G1
consisting of edges ((u, v), (u, w)) where vw ∈ MST; Φ1 corresponds to any maximal forest of
H each tree of which contains at most one root (φ-vertex corresponding to an edge of MST).
We arbitrarily assign a unique rank to each edge of MST[X1 ] from the set {1, 2, . . . , m1 }
where m1 is the number of edges of MST[X1 ] so that the highest-ranked edge is incident
to the forgotten vertex of X1 . We define Λ1 to be the forest obtained from MST[X1 ] by
contracting the highest-ranked edge incident to the forgotten vertex of MST[X1 ]. We observe
that each edge in Λ1 has a unique rank, since the rank of each edge of MST[X1 ] is unique.
We prove that Φ1 satisfies four invariants in Appendix B.
Claim 10. Charging forest Φ1 satisfies all invariants.
4.1.2 Growing the charging forest
We build Φi from Φi−1 and show that Φi will satisfy the invariants using the fact that Φi−1
satisfies the invariants. Let u be the introduced vertex of non-leaf bag Xi . If u is isolated
in MST[Xi ], we say that u is a free vertex. The construction of Φi from Φi−1 depends on
whether or not u is free.
u is a free vertex
Let Λi−1 be the contracted forest of Xi−1 . To obtain Φi from Φi−1 , we add φ-vertices
(u, w) ∀w ∈ (Xi \{u}) and add dashed edges ((u, wj ), (u, wk )) for each edge wj wk in Λi−1 .
We assign rank to the dashed edge ((u, wj ), (u, wk )) to be the rank of wj wk in Λi−1 . We
additionally change some dashed edges to mixed edges which we describe below. Since
converting dashed edges to mixed edges does not affect these invariants, we have:
Claim 11. Charging forest Φi satisfies Invariants (i), (ii) and (iv).
We include the formal proof of Claim 11 in Appendix B.
Converting dashed edges to mixed edges To ensure that Φi will satisfy Invariant (iii),
we convert some dashed edges to mixed edges. First, consider the newly-added φ-vertex
(u, v) where v is the forgotten vertex of Xi . To ensure that (u, v) will be in a dashed-free
tree that contains an active φ-vertex, we convert the added dashed edge ((u, v), (u, w)), that
has highest rank among newly added dashed edges, to a mixed edge. Second, suppose that
τ is a dashed-free tree of Φi−1 such that its active φ-vertex is of the form (v, w) for some
w ∈ Xi \{u}. The tree τ is at risk of becoming inactive in Φi . However, by Invariant (i), τ
is a subtree of a component τ̂ of Φi−1 that consists of φ-vertices (u1 , u2 ) such that uk ∈ Tk
where Tk is a component of MST[∪j≤i−1 Xj ] for k = 1, 2. Further, since T1 and T2 must be
connected in MST by a path through ancestor nodes of the path decomposition, there must
be vertices in Tk ∩ Xi+1 ; w.l.o.g., we take (u1 , u2 ) to be an active φ-vertex in τ̂ . We greedily
convert dashed edges in τ̂ to mixed edges to grow τ until such an active φ-vertex is connected
to (v, w) by mixed and bold edges, thus ensuring that Φi will satisfy Invariant (iii). We break
ties between dashed edges for conversion to mixed edges by selecting the dashed edges that
were first added to the charging forest and if there are multiple dashed edges which were
added to the charging forest at the same time, we break ties by converting the dashed edges
that have higher ranks to mixed edges. Note that by converting dashed edges into mixed
edges, we add edges between dashed-free trees. Thus, Invariant (i), (ii), (iv) are unchanged.
Finally, we need to update Λi from Λi−1 . Let {v1 , v2 , . . . , vq } be the set of neighbors of v
such that ri−1 (vv1 ) ≤ . . . ≤ ri−1 (vvq ). Delete v from Λi−1 and if q ≥ 2, add edges vj vj+1
(1 ≤ j ≤ q − 1) to Λi−1 to obtain the forest Λi . Then, by definition, ri (vj vj+1 ) = ri−1 (vvj )
and ri (xy) = ri−1 (xy) for all other edges of Λi . Thus, by induction hypothesis, the rank of
each edge of Λi is unique.
u is not a free vertex
To build Φi from Φi−1 , we add φ-vertices (u, w)∀w ∈ Xi \{u} and greedily add bold edges
that satisfy the triangle rule. We include the formal proof that Φi satisfies Invariant (i), (ii),
and (iv) in Appendix B.
Claim 12. Charging forest Φi satisfies Invariants (i), (ii) and (iv).
We show that Φi satisfies Invariant (iii) here. Let v be the forgotten vertex of Xi . Note
that u may have no tree edge to the forgotten vertex of Xi . In this case, to ensure that previously active φ-vertices of the form (v, w) for w ∈ Xi get connected to dashed-free trees that contain active φ-vertices, we convert dashed edges to mixed edges. We do so in the same way as in the method described above for when u was a free vertex, guaranteeing that
Φi will satisfy Invariant (iii). Otherwise, we consider two subcases:
1. If u = v, then any φ-vertex that was active in Φi−1 is still active in Φi . Thus, (u, w) is
connected to existing φ-vertices that remain active for any w ∈ Xi \ {u}.
2. If u ≠ v, then φ-vertex (x, v) will become inactive in Φi . However, (x, v) is connected to (x, u) by the triangle rule or it was already connected to a root in a dashed-free tree. In either case, Φi satisfies Invariant (iii).
Finally, we show how to update Λi . Let v be the forgotten vertex of Xi and u1 , u2 , . . . , up be
neighbors of u in MST[Xi ] such that u1 = v if uv ∈ MST[Xi ]. Let r0 be the maximum rank
over the ranked edges of MST[∪j≤i−1 Xj ]. We assign rank ri (u, uj ) = r0 + j for all 1 ≤ j ≤ p.
We will update Λi from Λi−1 depending on the relationship between u and v:
1. If u = v, then we add p − 1 edges uj uj+1 , 1 ≤ j ≤ p − 1, to Λi−1 .
2. If u 6= v and uv ∈ MST[Xi ], we replace v in Λi−1 by u and add p edges uuj to Λi−1 .
3. Otherwise, we add u and p edges uuj to Λi−1 . Let {v1 , v2 , . . . , vq } be the set of neighbors
of v such that ri−1 (vv1 ) ≤ . . . ≤ ri−1 (vvq ). We delete v from Λi−1 and add q − 1 edges
vj vj+1 .
Claim 13. Each edge of the forest Λi has a distinct rank.
We include the proof of Claim 13 in Appendix B. See Figure 5 in Appendix C for an example
of a charging forest and contracted forests.
4.1.3 Using the charging forest to define an O(pw^2)-simple charging scheme
By Invariant (ii), all trees in Φ are rooted at φ-vertices corresponding to edges of MST. Order the edges of E(G)\MST(G) by the DFS pre-order of Φ. Let (ui−1 , vi−1 ) and (ui , vi ) (i ≥ 2)
be two φ-vertices of a component T of Φ in this order. Define Pi to be the (unique) ui -to-vi
path in T ∪ {(ui−1 , vi−1 )} that contains the edge (ui−1 , vi−1 ). We take (Pi , (ui , vi )) to be
the charging pair for (ui , vi ). Note that the roots of Φ are edges of MST(G), so the charging
paths are well-defined.
We prove that the charging scheme defined by the charging pairs is O(pw^2)-simple by
bounding the number of times non-zero edges of MST are charged to. By the triangle
rule, if ((u, v), (u, w)) is a bold edge of Φ, then vw ∈ MST. We call the set of 3 vertices
{u, v, w} a charging triangle. If ((u, v), (u, w)) is a mixed edge, vw ∉ MST(G). In this
case, we call {u, v, w} a charging pseudo-triangle. The v-to-w path in MST(G) is called the
pseudo-edge; vw is said to be associated with the charging (pseudo-) triangle. We say the
edge ((u, v), (u, w)) represents the charging (pseudo-) triangle.
Claim 14. There are at most pw − 2 charging triangles associated with each non-zero edge
of MST.
Proof. A charging triangle consists of one non-zero weight edge of MST and one edge not in
the MST in the same bag graph. Note that each non-zero weight edge of MST is in exactly
one bag graph and each bag graph has at most pw − 2 edges not in the MST; that implies
the claim. ◀
The proof of the following is in Appendix B.
Lemma 15. Each edge of MST(G) is in the paths corresponding to the pseudo-edges of at most 2pw^2 charging pseudo-triangles.
Consider each charging pair (Pi , (ui , vi )) in which Pi contains (ui−1 , vi−1 ) that precedes
(ui , vi ) in the DFS pre-order of a given component of Φ. Let (ui , vi ) = (û0 , v̂0 ), . . . , (ût , v̂t ) =
(ui−1 , vi−1 ) be the set of φ-vertices of the path between (ui , vi ) and (ui−1 , vi−1 ) in Φ. We
define Qj (1 ≤ j ≤ t) to be the ûj−1 -to-v̂j−1 path of MST ∪ {(ûj , v̂j )} containing the edge ûj v̂j . Then we have:

    Pi = Q1 △ Q2 △ . . . △ Qt

where △ denotes the symmetric difference between two sets. Hence, charging tree edges of Pi is
equivalent to charging the tree edges of Q1 , Q2 , . . . , Qt . We observe that tree edges of Qj
are the tree edges of the (pseudo-) triangle represented by ((ûj , v̂j ), (ûj−1 , v̂j−1 )). Since each
edge of Φ appears twice in the collection of paths between (ui , vi ) and (ui−1 , vi−1 ) in Φ for
all i and since different edges of Φ represent different charging (pseudo-) triangles, the tree edges of each charging (pseudo-) triangle are charged to twice. By Claim 14 and Lemma 15, each non-zero tree edge is charged to at most 2(pw − 2) + 2(2pw^2) = O(pw^2) times. Hence, the charging scheme is O(pw^2)-simple.
4.2 Toward light spanners for bounded treewidth graphs
The main difficulty in designing a simple charging scheme for bounded pathwidth graphs is
the existence of free vertices. We introduce dashed edges to the charging forest when we
handle free vertices and change a subset of dashed edges into mixed edges. By changing a
dashed edge, say ((w, u), (w, v)) into a mixed edge, we charge to the u-to-v path in the MST
once. Unfortunately, for bounded treewidth graphs, such charging can be very expensive.
However, we observe that the φ-vertex (v, w) is contained in a rooted tree of Φ. That means
we can use vw to charge to one of the two edges uv or uw. In general, for each non-tree edge
uv of the spanner in a bag Xi , we say uv is a simple edge if u, v are in the same component
of the MST restricted to descendant bags of Xi only or ancestor bags of Xi only. Simple
edges can be paid for in the spanner “cheaply”. Now suppose uv is an edge of the spanner in
Xi such that the u-to-v path PMST (u, v) of MST crosses back and forth through Xi . Let
w1 , w2 , . . . , wk be the set of vertices of Xi in this order on the path PMST (u, v). Then, we
can charge the edge uv by the set of simple edges uw1 , w1 w2 , . . . , wk v. We believe that this
idea, with further refinement, will prove the existence of light spanners for bounded treewidth
graphs.
A Notation
We denote G = (V (G), E(G)) to be the graph with vertex set V (G) and edge set E(G) and
use n, m to denote the number of vertices and edges, respectively. The order of G, denoted
by |G|, is the number of vertices of G. Each edge e of E(G) is assigned a weight w(e). We
define w(H) = Σ_{e∈E(H)} w(e) to be the weight of edges of a subgraph H of G. The minimum
spanning tree of G is denoted by MST(G). For two vertices u, v, we denote the shortest
distance between them by dG (u, v). Given a subset of vertices S and a vertex u of G, we
define dG (u, S) = min_{v∈S} {dG (u, v)}, the minimum being attained at v = arg min_{v∈S} {dG (u, v)}. We omit the subscript G
when G is clear from context. The subgraph of G induced by S is denoted by G[S].
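As a small illustration of this notation (purely for the reader's convenience; the use of networkx and the toy graph below are assumptions of this sketch, not part of the paper):

```python
# Minimal sketch of the Appendix A notation with networkx (assumed library).
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 1.0), (2, 3, 2.0), (1, 3, 2.5), (3, 4, 1.5)])

w_G = G.size(weight='weight')                         # w(G): total edge weight
MST = nx.minimum_spanning_tree(G)                     # MST(G)
d_13 = nx.shortest_path_length(G, 1, 3, weight='weight')   # d_G(1, 3)
S = {3, 4}
d_1S = min(nx.shortest_path_length(G, 1, v, weight='weight') for v in S)  # d_G(1, S)
H = G.subgraph(S)                                     # G[S], the induced subgraph
```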
B Omitted Proofs
Proof of Lemma 4. Let SH be the greedy (1 + ε)-spanner of H. We will prove that SH = H.
Note that SH is a subgraph of H and shares the same set of vertices with H. Suppose
for a contradiction that there is an edge e = uv ∈ H \ SH . Since uv is not added to the
greedy spanner of H, there must be a u-to-v-path PSH (uv) in SH that witnesses the fact
that uv is not added (i.e. (1 + ε)w(e) > w(PSH (uv))). However, PSH (uv) ⊆ S, contradicting
Equation 1.
J
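Since several of the arguments in this appendix are about the greedy (1 + ε)-spanner, the following minimal sketch of that construction may be helpful. The use of networkx and the function names are assumptions of this sketch, not code from the paper.

```python
# Sketch of the greedy (1+eps)-spanner: scan edges by non-decreasing weight and
# add an edge only if the current spanner has no sufficiently short witness path.
import networkx as nx

def greedy_spanner(G, eps):
    S = nx.Graph()
    S.add_nodes_from(G.nodes())
    for u, v, w in sorted(G.edges(data='weight'), key=lambda e: e[2]):
        try:
            d = nx.shortest_path_length(S, u, v, weight='weight')
        except nx.NetworkXNoPath:
            d = float('inf')
        if d > (1 + eps) * w:       # no witness path, so uv must be kept
            S.add_edge(u, v, weight=w)
    return S
```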
Proof of Lemma 5. If S has a k-simple charging scheme then:
(1 + ε) w(S\T) ≤ Σ_{e∈S\T} w(PS (e)) ≤ k · w(T) + w(S\T)
where the first inequality follows from edges in S\T having charging paths and the second
inequality follows from each edge in T appearing in charging paths at most k times and each
edge in S\T appearing in charging paths at most once. Rearranging the left- and right-most
sides of this inequality gives us Lemma 5.
J
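Spelling out the rearrangement referred to above (assuming Lemma 5 asserts a bound of the form w(S) ≤ (1 + k/ε)·w(T), which is stated in the main body rather than in this appendix):

```latex
(1+\varepsilon)\,w(S\setminus T) \;\le\; k\,w(T) + w(S\setminus T)
\;\Longrightarrow\;
\varepsilon\,w(S\setminus T) \;\le\; k\,w(T)
\;\Longrightarrow\;
w(S) \;=\; w(T) + w(S\setminus T) \;\le\; \Bigl(1+\tfrac{k}{\varepsilon}\Bigr)\,w(T).
```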
Proof of Lemma 6. Let e be the edge on the boundary of G that is not in T . Let T ∗ be
the spanning tree of the dual graph containing all the edges that do not correspond to edges
of T . We construct a charging scheme for G by traversing T ∗ in post-order, considering all
the non-outer faces. Consider visiting face f with children f1 , . . . , fk and parent f0 . Let ei
be the edge of G between f and fi for all i and let P0 be the path between e0 ’s endpoints
in T ∪ {e1 , e2 , . . . , ek } that contains all the edges {e1 , e2 , . . . , ek }. Then, by Equation 1,
(e0 , P0 ) is a charging pair for e0 . Also, since we visit T ∗ in post-order, none of the edges
{e1 , e2 , . . . , ek } will be charged to when we build charging pairs for higher-ordered edges
which are edges between faces of higher orders. Thus, the set of charging pairs produced
from this process is a 1-simple charging scheme to T .
J
Figure 1 An outerplanar graph G. Bold edges are edges of T, dashed edges are non-tree edges and dotted edges are edges of the dual spanning tree, less the dual of e.
Proof of Lemma 7. Consider an edge ê ∈ Ŝ \ S. We first argue that Ŝ \ {ê} has a weak
k-simple charging scheme. Since T ⊆ S ∩ Ŝ, ê ∉ T and so ê can be charged to at most once.
If ê is in the charging path PŜ (e) for another edge e of Ŝ, then we define the charging path
for e to be the simple path between e’s endpoints that is in PŜ (e) ∪ PŜ (ê) \ {ê}. The resulting
set of paths is a weak k-simple charging scheme since every edge of PŜ (ê) is charged to one
fewer time (by the removal of ê) and at most once more (by e).
By induction, S has a weak k-simple charging scheme to T . Since S is a greedy spanner,
Equation (1) holds for every charging pair, so the weak k-simple charging scheme to T is a
k-simple charging scheme to T .
J
Proof of Claim 8. Let Va be the set of vertices in vortex V that are adjacent to apex a. Split
vertex a into two vertices a and aV connected by a zero-weight edge so that aV ’s neighbors
are Va ∪ {a} and so that contracting the zero-weight edge gives the original graph. Add aV
to all of the bags of the path decomposition of V . Now all the edges that connected a to V
are within the vortex.
Consider the face in the surface-embedded part of the β-almost-embeddable graph to
which V is attached and let xy be the edge in that face that is between the first and last bags of
V and such that y is in the first bag of V . Add the edges xaV and aV y to the embedded
part of the graph and give them weight equal to the distance between their endpoints. Now
xaV is the edge in that face that is between the first and last bags of the vortex and a is
adjacent to a vertex that is in the surface-embedded part of the β-almost-embeddable graph.
See Figure 2.
The splitting of a into aV increases the pathwidth of the vortex by 1. Repeating this
process for all apex-vortex pairs increases the pathwidth of each vortex by at most β.
Figure 2 Apex a and the vortices it is adjacent to before (a) and after (b) the reduction of Claim 8.
J
Proof of Claim 10. Note first that since there are only bold edges, the dashed-free trees of
Φ1 are just the trees of Φ1 .
Invariant (i) Consider distinct trees T1 and T2 of MST[X1 ] and consider x, y ∈ T1 and
z ∈ T2 . The φ-vertices (x, z) and (y, z) are in the same component of H as witnessed by
the edges of the x-to-y path in T1 . Further, (x, z) cannot be connected to (x, u) in H where
xu ∈ MST since that would imply xz ∈ MST, contradicting that T1 and T2 are distinct
trees of MST[X1 ]. Therefore, there is a maximal unrooted tree of H that will contain all the
φ-vertices of the form (u, v) such that u ∈ T1 and v ∈ T2 .
Invariant (ii) Consider a path u0 , u1 , . . . , uk in MST[X1 ] for k ≥ 2; ((u0 , ui ), (u0 , ui+1 )) are
edges in H for i = 1, . . . , k − 1. Therefore (u0 , u1 ) (a root) and (u0 , uk ) are in a common
component of H. Therefore any maximal tree of H that contains (u0 , uk ) will contain a root.
Invariant (iii) Consider the component of H described in showing Φ1 satisfies Invariant (i). Since T1 and
T2 must be connected in MST by a path through ancestor nodes of the path decomposition,
there is a vertex ui ∈ Ti such that ui ∈ X2 for i = 1, 2. Therefore (u1 , u2 ) is an active
φ-vertex in this component of H and will be included in the maximal tree of this component.
Invariant (iv)
In this case, Invariant (iv) reduces to Invariant (i).
J
Proof of Claim 11. We show that Φi satisfies Invariants (i), (ii) and (iv) in turn:
(i) The only new tree of MST[∪_{j≤i} Xj ] compared to MST[∪_{j≤i−1} Xj ] is u. For any pair
of trees in MST[∪_{j≤i−1} Xj ], Invariant (i) holds for Φi because it holds for Φi−1 . For a
component C of Λi−1 , the addition of the edges ((u, wj ), (u, wk )) creates a new (unrooted)
component spanning all φ-vertices (u, v) where v is in the corresponding component of
MST[∪j≤i Xj ]. Therefore, Invariant (i) holds for Φi .
(ii) Since u is free and no new φ-vertices of the form (u, v) where u and v are in the same
tree of MST[∪j≤i Xj ] are introduced, Invariant (ii) holds for Φi because it holds for Φi−1 .
(iv) As with (i), the only new tree of MST[∪_{ℓ=j}^{i} Xℓ ] is u, which only has a non-zero intersection
with Xi . So, for j < i, Invariant (iv) holds for Φi because it holds for Φi−1 . For j = i, all
the components of MST[∪_{ℓ=j}^{i} Xℓ ] are isolated vertices, so the invariant holds trivially.
J
Proof of Claim 12. We prove that Φi satisfies each of the invariants in turn.
Invariant (i) Let T1 and T2 be distinct trees of MST[∪j≤i Xj ]. If T1 and T2 are distinct
trees of MST[∪j≤i−1 Xj ], then the invariant holds for Φi because it holds for Φi−1 . Otherwise, we may assume w.l.o.g. that T1 contains u and a subtree T3 that is a component of
MST[∪j≤i−1 Xj ]. Let x be a vertex of T3 ∩ Xi and w be a vertex of T2 ∩ Xi . Then ux is an
edge of MST and (u, w), (w, x) will be connected in Φi by greedy applications of the triangle
rule. By the same argument as used for showing that Φ1 satisfies Invariant (i), (u, w) will
not be connected to a root φ-vertex (a φ-vertex corresponding to a tree edge) in Φi .
Invariant (ii) We need only prove this for the tree T of MST[∪j≤i Xi ] that contains u as
other cases are covered by the fact that Φi−1 satisfies Invariant (ii). Let T1 and T2 be distinct
trees of MST[∪j≤i−1 Xi ] that are subtrees of T . Let u1 and u2 be vertices of T1 ∩ Xi and
T2 ∩ Xi , respectively.
We start by showing that (u1 , u2 ) is in a rooted dashed free tree of Φi . Let v1 and v2
be the neighbors of u in T1 and T2 , respectively. (Note, it may be that, e.g., u1 = v1 .) By
Invariant (ii), (v2 , u2 ) is in a rooted dashed-free tree of Φi−1 . By the triangle rule, (v1 , u2 )
will be connected by (a possibly non-trivial sequence of) bold edges to (u, u2 ) which in turn
will be connected by (a possibly non-trivial sequence of) bold edges to (v2 , u2 ). Therefore,
(v1 , u2 ) will be in a rooted dashed-free tree of Φi . In this next paragraph, we show that
(u1 , u2 ) and (v1 , u2 ) are in the same dashed-free tree of Φi−1 , which we have just shown
belongs to a rooted dashed-free tree of Φi , showing that (u1 , u2 ) is in a rooted dashed free
tree of Φi .
To show that (u1 , u2 ) and (v1 , u2 ) are in the same dashed-free tree of Φi−1 , consider an
index k such that u1 and v1 are connected to x1 in MST[∪_{ℓ=k}^{i} Xℓ ] and u2 is connected to x2
in MST[∪_{ℓ=k}^{i} Xℓ ]. Then by Invariant (iv) for Φi−1 , (x1 , x2 ) is in the same dashed-free tree as
(u1 , u2 ) and (v1 , u2 ).
Now consider a φ-vertex (x1 , x2 ) where xk ∈ Tk ∩ Xj for k = 1, 2 and j < i. Let T′k be
the subtree of Tk that is in MST[∪_{ℓ=j}^{i−1} Xℓ ] and let uk = T′k ∩ Xi for k = 1, 2. We just showed
that (u1 , u2 ) is in a rooted dashed-free tree of Φi . By Invariant (iv) for Φi−1 , (x1 , x2 ) and
(u1 , u2 ) are in the same dashed-free tree. Therefore, (x1 , x2 ) is in a rooted dashed-free tree
of Φi .
Invariant (iv) Consider j ≤ i, trees T1 , T2 of MST[∪_{ℓ=j}^{i} Xℓ ], and φ-vertices (yj,1 , yj,2 ) and
(yi,1 , yi,2 ) as defined in Invariant (iv). For the case j = i, the proof that Φi satisfies
Invariant (iv) is the same as for Φ1 . Further, if u ∉ T1 , T2 , then Φi satisfies Invariant (iv) because
Φi−1 satisfies Invariant (iv); therefore, we assume w.l.o.g. that u ∈ T1 . Finally, consider
the components of T1 in MST[∪_{ℓ=j}^{i−1} Xℓ ]; if yj,1 and yi,1 are in the same component, then Φi
satisfies Invariant (iv) because Φi−1 satisfies Invariant (iv). Therefore, we assume they are in
different components, T1p and T1q , respectively.
Let x be the neighbor of u in T1p . By Invariant (iv) for Φi−1 , (x, yi,2 ) and (yj,1 , yj,2 ) are in
a common unrooted dashed-free tree. We show that (x, yi,2 ) and (yi,1 , yi,2 ) are in a common
unrooted dashed-free tree of Φi , proving this invariant is held. Let y be the neighbor of u in
T1q .
1. By the triangle rule, (x, yi,2 ) will get connected by (a possibly non-trivial sequence of)
bold edges to (u, yi,2 ) because xu ∈ MST and (u, yi,2 ) will get connected by (a possibly
non-trivial sequence of) bold edges to (y, yi,2 ) because yu ∈ MST.
2. Let k be the index such that y and yi,1 are connected to a in MST[∪_{ℓ=k}^{i} Xℓ ] and yi,2 is
connected to b in MST[∪_{ℓ=k}^{i} Xℓ ]. Then by Invariant (iv), (a, b) is in the same unrooted
dashed-free tree as (y, yi,2 ) and (yi,1 , yi,2 ).
Together these connections show that (x, yi,2 ) and (yi,1 , yi,2 ) are in a common unrooted
dashed-free tree.
J
Proof of Claim 13. We prove the claim for each case of the construction of Λi :
1. If u = v, we only add new edges to Λi−1 to obtain Λi . Since each added edge has distinct
rank that is larger than the ranks of edges of Λi−1 , the claim follows.
2. If u ≠ v and uv ∈ MST[Xi ], since ri (xy) = ri−1 (xy) for any edge xy such that x, y ≠ u,
we only need to consider the case when u ∈ {x, y}. Observe that for each neighbor w
of u, ri (uw) = ri−1 (vw) if vw ∈ Λi−1 or uw is among the edges that are added to Λi−1 .
Since each newly added edge has unique rank, the claim follows.
3. Otherwise, we have ri (vj vj+1 ) = ri−1 (v)vj for 1 ≤ j ≤ p − 1 and ri (xy) = ri−1 (xy) for
all other edges. Thus, each edge in Λi has unique rank by the induction hypothesis.
J
Proof of Lemma 15. We investigate how dashed edges are changed into mixed edges as
this is when a charging pseudo-triangle arises. Recall that dashed edges are added to Φ
when we process free vertices. Let u be a free vertex that is introduced in bag Xi and
let ((u, v1 ), (u, v2 )) be a mixed edge (that is, a dashed edge in Φi that is later converted
to a mixed edge). Then by our construction, v1 and v2 are in the same component of
MST[∪k≤i−1 Xk ]. For each j ≥ i and w ∈ Xi , let Twij be the component of MST[∪i≤k≤j Xk ]
containing w. We will say that an introduced vertex of a bag Xj is branching if it is not free,
is not the forgotten vertex of Xj and is not connected to the forgotten vertex of Xj by MST.
Note that dashed edges are converted to mixed edges when processing free and branching
vertices.
We say that a tree Twij ∈ MST[∪i≤k≤j Xk ] is forgotten in Xj if j = |I| or :
Twij ∩ Xj = {v} which is the forgotten vertex of Xj .
the introduced vertex of Xj is branching or free.
We note that at most one tree can be forgotten in each bag Xj .
For each vertex x ∈ Λi−1 , let N^r_{i−1}(x) be the set of vertices in Λi−1 reachable from x via
paths consisting of edges of ranks larger than r in Λi−1 . Let F^{ij}_{x,r} = ∪_{w ∈ N^r_{i−1}(x) ∪ {x}} T^{ij}_w . We
say that the forest F^{ij}_{x,r} is forgotten in Xj if every tree in F^{ij}_{x,r} is forgotten in Xj′ for some
j′ ≤ j and F^{ij}_{x,r} ∩ Xj ≠ ∅.
Claim 16. Let r be the rank of the edge v1 v2 ∈ Λi−1 . If the dashed edge ((u, v1 ), (u, v2 )) is
changed into a mixed edge, there exists a bag Xj for j ≥ i such that T^{ij}_u ∩ {F^{ij}_{v1,r} ∪ F^{ij}_{v2,r}} = ∅,
T^{ij}_u ∩ Xj ≠ ∅ and exactly one of the forests F^{ij}_{v1,r}, F^{ij}_{v2,r} is forgotten in Xj .
Proof. Suppose that the claim fails, then there are two cases:
1. There exists j ≥ i such that T^{ij}_u = T^{ij}_w for some w ∈ F^{ij}_{v1,r} and F^{ij}_{v2,r} ∩ Xj ≠ ∅. In this
case, i < j since u is a free vertex. Let p be the index such that u and w are connected
to x in MST[∪_{i≤q≤p} Xq ]. Let wk (k = 1, 2) be a vertex in N^r_{i−1}(vk ) ∪ {vk } such that:
(i) T^{ij}_{wk} ∩ Xp ≠ ∅;
(ii) subject to (i), the distance dΛi−1 (wk , vk ) is minimum.
Let yk = T^{ij}_{wk} ∩ Xp . Since the tie-breaking rule prefers changing the dashed edges of
higher ranks into mixed edges, (u, vk ) is in the same dashed-free tree of Φp as (u, wk ). By
Invariant (iv), (u, wk ) and (w, wk ) are in the same dashed-free tree of Φp as (x, yk ). Since
(w, wk ) is in a rooted dashed-free tree of Φp , by Invariant (i), (u, vk ) is in a rooted dashed-free
tree. Therefore, the dashed edge ((u, v1 ), (u, v2 )) is not converted to a mixed edge.
2. There exists j ≥ i such that T^{ij}_u is forgotten and both forests F^{ij}_{v1,r}, F^{ij}_{v2,r} are not forgotten
in Xj . Since u is a free vertex, there must be a vertex û in Xi such that û ∈ T^{ij}_u . Let p be
the index such that u and û are connected to x in MST[∪_{i≤q≤p} Xq ]. Let wk (k = 1, 2) be a
vertex in N^r_{i−1}(vk ) ∪ {vk } as in the first case and yk = T^{ij}_{wk} ∩ Xp . Then, by the tie-breaking
rule, (u, vk ) and (u, wk ) are in the same dashed-free tree of Φp . By Invariant (iv), (u, wk )
and (û, wk ) are in the same dashed-free tree of Φp as (x, yk ). Therefore, there is a cycle
in which ((u, v1 ), (u, v2 )) is the most recently added dashed edge. Thus, ((u, v1 ), (u, v2 )) is
deleted and not converted to a mixed edge.
J
We now bound the number of pseudo-triangles that contain an edge e of MST(G). Let Xi
be the bag containing e and Λi be the corresponding contracted forest. Let u0 , v0 be
two vertices of Λi in the same tree of Λi such that e is in the u0 -to-v0 path PMST (u0 , v0 ).
We say that a pseudo-triangle strongly contains PMST (u0 , v0 ) if PMST (u0 , v0 ) is a subpath of
the pseudo-edge and the rank of every edge of the pseudo-edge of the triangle is at least
the minimum rank over edges of PMST (u0 , v0 ).
Claim 17. There are at most pw pseudo-triangles strongly containing PMST (u0 , v0 ).
Proof. Let ∆1 , . . . , ∆q be pseudo-triangles that strongly contain PMST (u0 , v0 ). Let ∆k =
{uk , vk , wk } and Xsk be the bag that has wk as the introduced vertex (1 ≤ k ≤ q). Let
r be the minimum rank over edges of PMST (u0 , v0 ) and Xℓ be the bag in which one of
the forests F^{iℓ}_{u0,r}, F^{iℓ}_{v0,r}, say F^{iℓ}_{u0,r}, is forgotten. Then uk ∈ F^{iℓ}_{u0,r} and vk ∈ F^{iℓ}_{v0,r}. We can
assume w.l.o.g. that s1 ≤ s2 ≤ . . . ≤ sq . Then, F^{sk ℓ}_{uk,r} = F^{iℓ}_{u0,r} ∩ MST[∪_{sk≤j≤ℓ} Xj ] and
F^{sk ℓ}_{vk,r} = F^{iℓ}_{v0,r} ∩ MST[∪_{sk≤j≤ℓ} Xj ]. By Claim 16, T^{sk ℓ}_{wk} ∩ Xℓ ≠ ∅. Furthermore, all the trees
T^{sk ℓ}_{wk} must be disjoint since otherwise, say T^{sj ℓ}_{wj} is a subtree of T^{sk ℓ}_{wk} (k ≤ j), the second case
of the proof of Claim 16 implies that the dashed edge ((wj , uj ), (wj , vj )) is removed from Φ.
Hence, q ≤ pw.
J
Let e be an arbitrary edge of MST(G). Let Xi be the bag containing e and Λi be
the corresponding contracted forest. By the way we build the contracted forest, there are
at most two edges, say u0 v0 and v0 w0 , of Λi that are incident to the same vertex v0 such
that e ∈ PMST (u0 , v0 ) ∩ PMST (v0 , w0 ). We observe that any pseudo-triangle that contains e
in the pseudo-edge must contain the path PMST (u, v) for some u, v ∈ Λi such that one of
the two edges u0 v0 , v0 w0 is in the path PΛi−1 (u, v). Let Γ(u0 , v0 ) be the set of pseudo-triangles
that have PΛi−1 (u, v) containing u0 v0 . We define Γ(v0 , w0 ) similarly. We will show that
|Γ(u0 , v0 )|, |Γ(v0 , w0 )| ≤ pw², thereby proving the lemma.
We only need to show that |Γ(u0 , v0 )| ≤ pw² since |Γ(v0 , w0 )| ≤ pw² can be proved
similarly. Let ∆1 , ∆2 , . . . , ∆p be the pseudo-triangles containing e in the pseudo-edge such
that for each k:
1. Each triangle ∆k contains a distinct path PMST (uk , vk ) as a subpath of the pseudo-edge.
2. Edge u0 v0 is in the path PΛi−1 (uk , vk ) of Λi−1 .
3. Each path PMST (uk , vk ) has distinct rank.
Then, by the way we construct the contracted forest, if the minimum rank over edges of
PMST (uj , vj ) is smaller than the minimum rank over edges of PMST (uk , vk ), then PΛi (uk , vk ) ⊊
PΛi (uj , vj ) for 1 ≤ j ≠ k ≤ p. Therefore, we can rearrange ∆1 , ∆2 , . . . , ∆p such that
PΛi (u1 , v1 ) ⊊ PΛi (u2 , v2 ) ⊊ . . . ⊊ PΛi (up , vp ). Thus, p ≤ pw. By Claim 17, for each k,
there are at most pw pseudo-triangles containing the same subpath PMST (uk , vk ). Therefore,
|Γ(u0 , v0 )| ≤ pw².
J
C
Figures
Figure 3 The surface (2) is obtained by cutting the surface Σ in (1) along T_Ĝ ∪ X. Small ovals are cycles C1 , C2 , . . . , Cβ .
Figure 4 The normalized graph (3) of a graph (1) with path decomposition (2). Bold edges in (3) are edges of the original graph and thin edges in (3) are zero weighted edges.
Figure 5 Top: A normalized graph and its MST. Dotted edges are non-tree edges. Non-dotted edges are edges of the MST. Thin edges are zero weighted edges. Center: Contracted forests Λ1 , . . . , Λ12 . Small numbers are ranks of the edges of the contracted forests. Bottom: Charging forest for the graph. Dotted edges are mixed edges. Hollow vertices are roots of the charging forest.
| 8 |
arXiv:1310.2903v1 [] 10 Oct 2013
EXTREMAL BETTI NUMBERS OF SOME CLASSES OF BINOMIAL EDGE
IDEALS
AHMET DOKUYUCU
ABSTRACT. Let G be a cycle or a complete bipartite graph. We show that the binomial
edge ideal JG and its initial ideal with respect to the lexicographic order have the same
extremal Betti number. This is a partial positive answer to a conjecture proposed in [2].
INTRODUCTION
Let G be a simple graph on the vertex set [n] with edge set E(G) and let S be the
polynomial ring K[x1 , . . . , xn , y1 , . . . , yn ] in 2n variables endowed with the lexicographic
order induced by x1 > · · · > xn > y1 > · · · > yn . The binomial edge ideal JG ⊂ S associated
with G is generated by all the binomials fi j = xi y j − x j yi with {i, j} ∈ E(G). The binomial
edge ideals were introduced in [5] and, independently, in [8]. Meanwhile, many algebraic
and homological properties of these ideals have been investigated; see, for instance, [1],
[2], [3], [5], [7], [9], [10], [11], [12], [13], [14].
In [2], the authors conjectured that the extremal Betti numbers of JG and in< (JG ) coincide for any graph G. Here, < denotes the lexicographic order in S induced by the natural
order of the variables. In this article, we give a positive answer to this conjecture when the
graph G is a complete bipartite graph or a cycle. To this aim, we use some results proved in
[12] and [14] which completely characterize the resolution of the binomial edge ideal JG
when G is a cycle or a complete bipartite graph. In particular, in this case, it follows that
JG has a unique extremal Betti number. In the first section we recall all the known facts
on the resolutions of binomial edge ideals of the complete bipartite graphs and cycles. In
Section 2, we study the initial ideal of JG when G is a bipartite graph or a cycle. We show
that proj dim in< (JG ) = proj dim JG and reg in< (JG ) = reg JG , and, therefore, in< (JG ) has
a unique extremal Betti number as well. Finally, we show that the extremal Betti number
of in< (JG ) is equal to that of JG .
To our knowledge, this is the first attempt to prove the conjecture stated in [2] for
extremal Betti numbers. In our study, we take advantage of the known results on the
resolutions of binomial edge ideals of cycles and complete bipartite graphs and of the
fact that their initial ideals have nice properties. For instance, as we show in Section 2,
the initial ideal of JG for a complete bipartite graph has linear quotients and is generated
in degrees 2 and 3. Therefore, it is componentwise linear and its Betti numbers may
2010 Mathematics Subject Classification. 13D02,05E40.
Key words and phrases. Binomial edge ideals, regularity, projective dimension.
be computed easily (Theorem 2.2). The initial ideal of JG when G is a cycle does not
have linear quotients, but by ordering its generators in a suitable way, we may easily
compute its extremal Betti number (Theorem 2.9). It is interesting to remark that even if
the admissible paths of the cycle (in the sense of [7, Section 3]) determine the minimal
set of monomial generators of in< (JG ), the Lyubeznik resolution [6] does not provide a
minimal resolution of in< (JG ).
1. PRELIMINARIES
1.1. Binomial edge ideals of complete bipartite graphs. Let G = Km,n be the complete
bipartite graph on the vertex set {1, . . ., m} ∪ {m + 1, . . ., m + n} with m ≥ n ≥ 1 and let
JG be its binomial edge ideal. JG is generated by all the binomials fi j = xi y j − x j yi where
1 ≤ i ≤ m and m + 1 ≤ j ≤ m + n. In [12, Theorem 5.3] it is shown that the Betti diagram
of S/JG has the form
       0    1    2      ···   p
  0    1    0    0      ···   0
  1    0    mn   0      ···   0
  2    0    0    β2,4   ···   βp,p+2

where p = proj dim S/JG = m if n = 1, and p = 2m + n − 2 if n > 1.
In particular, from the above Betti diagram we may read that S/JG has a unique extremal Betti number, namely βp,p+2 .
Moreover, in [12, Theorem 5.4] all the Betti numbers of S/JG are computed. Since we are interested only in the extremal Betti number, we recall here its value as it was given in [12, Theorem 5.4], namely βp,p+2 = m − 1 if p = m, and βp,p+2 = n − 1 if p = 2m + n − 2.
Since we will study the initial ideal of JG with respect to the lexicographic order induced by the natural order of the variables, we need to recall the following definition and result of [5].
Definition 1.1. Let i < j be two vertices of an arbitrary graph G. A path i = i0, i1 , . . . , ir−1, ir =
j from i to j is called admissible if the following conditions are fulfilled:
(i) ik ≠ iℓ for k ≠ ℓ;
(ii) for each k = 1, . . . , r − 1, one has either ik < i or ik > j;
(iii) for any proper subset { j1 , . . . , js} of {i1 , . . . , ir−1}, the sequence i, j1, . . . , js, j is
not a path in G.
Given an admissible path π of G from i to j, we set uπ = (∏ik > j xik )(∏iℓ <i yiℓ ).
Obviously, any edge of G is an admissible path. In this case, the associated monomial
is just 1.
Theorem 1.2 (HHHKR). Let G be an arbitrary graph. The set of binomials
Γ = ∪_{i<j} {uπ fij : π is an admissible path from i to j}
is the reduced Gröbner basis of JG with respect to the lexicographic order on S induced by the natural order of the indeterminates, x1 > · · · > xn > y1 > · · · > yn .
One may easily see that the only admissible paths of the complete bipartite graph G = Km,n
are the edges of G, the paths of the form i, m + k, j with 1 ≤ i < j ≤ m, 1 ≤ k ≤ n, and
m + i, k, m + j with 1 ≤ i < j ≤ n, 1 ≤ k ≤ m. Therefore, we get the following consequence
of the above theorem.
Corollary 1.3. Let G = Km,n be the complete bipartite graph on the vertex set V(G) = {1, . . . , m} ∪ {m + 1, . . . , m + n}. Then
in< (JG ) = ({xi yj : 1 ≤ i ≤ m, m + 1 ≤ j ≤ m + n}, {xi xm+k yj : 1 ≤ i < j ≤ m, 1 ≤ k ≤ n}, {xm+i yk ym+j : 1 ≤ i < j ≤ n, 1 ≤ k ≤ m}).
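As a quick illustration (not part of the proof), the generators listed in Corollary 1.3 can be enumerated programmatically; the representation of monomials as tuples of variable names is an arbitrary choice of this sketch.

```python
# Sketch: enumerate the minimal monomial generators of in_<(J_G) for G = K_{m,n}.
def initial_ideal_generators_bipartite(m, n):
    gens = []
    # degree 2: x_i y_j with 1 <= i <= m, m+1 <= j <= m+n
    for i in range(1, m + 1):
        for j in range(m + 1, m + n + 1):
            gens.append(("x%d" % i, "y%d" % j))
    # degree 3: x_i x_{m+k} y_j with 1 <= i < j <= m, 1 <= k <= n
    for i in range(1, m + 1):
        for j in range(i + 1, m + 1):
            for k in range(1, n + 1):
                gens.append(("x%d" % i, "x%d" % (m + k), "y%d" % j))
    # degree 3: x_{m+i} y_k y_{m+j} with 1 <= i < j <= n, 1 <= k <= m
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            for k in range(1, m + 1):
                gens.append(("x%d" % (m + i), "y%d" % k, "y%d" % (m + j)))
    return gens
```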
1.2. Binomial edge ideals of cycles. In this subsection, G denotes the n–cycle on the
vertex set [n] with edges {1, 2}, {2, 3}, . . ., {n − 1, n}, {1, n}.
In [14] it was shown that the Betti diagram of S/JG has the form

        0    1    2       3        ···   n
  0     1    0    0       0        ···   0
  1     0    n    0       0        ···   0
  2     0    0    β2,4    0        ···   0
  3     0    0    0       β3,6     ···   0
  ⋮     ⋮    ⋮    ⋮       ⋮              ⋮
 n−2    0    0    β2,n    β3,n+1   ···   βn,2n−2

and all the Betti numbers were computed. One sees that we have a unique extremal Betti number and, by [14], we have βn,2n−2 = \binom{n-1}{2} − 1.
We now look at the initial ideal of JG . It is obvious by Definition 1.1 and by the labeling of the vertices of G that the admissible paths are the edges of G and the paths of the form i, i − 1, . . . , 1, n, n − 1, . . . , j + 1 with 2 ≤ j − i ≤ n − 2. Consequently, we get the following system of generators for the initial ideal of JG .
Corollary 1.4. Let G be the n–cycle with the natural labeling of its vertices. Then
in< (JG ) = (x1 y2 , . . . , xn−1 yn , x1 yn , {xi x j+1 · · · xn y1 · · · yi−1 y j }2≤ j−i≤n−2 ).
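For the reader's convenience, the generators of Corollary 1.4 can be listed explicitly as in the following sketch; the tuple representation of monomials is an arbitrary choice, and the assertion checks the count m = n(n − 3)/2 of generators of degree ≥ 3 used in Section 2.2.

```python
# Sketch: generators of in_<(J_G) for the n-cycle with the natural vertex labeling.
def initial_ideal_generators_cycle(n):
    quadrics = [("x%d" % i, "y%d" % (i + 1)) for i in range(1, n)] + [("x1", "y%d" % n)]
    higher = []
    for i in range(1, n + 1):
        for j in range(i + 2, n + 1):
            if j - i > n - 2:
                continue                      # enforce 2 <= j - i <= n - 2
            mon = (["x%d" % i]
                   + ["x%d" % t for t in range(j + 1, n + 1)]
                   + ["y%d" % t for t in range(1, i)]
                   + ["y%d" % j])
            higher.append(tuple(mon))
    assert len(higher) == n * (n - 3) // 2    # m = n(n-3)/2
    return quadrics + higher
```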
2. EXTREMAL BETTI NUMBERS
2.1. Complete bipartite graphs. Let G = Km,n be the complete bipartite graph on the
vertex set {1, . . ., m} ∪ {m + 1, . . . , m + n} with m ≥ n ≥ 1 and let JG be its binomial
edge ideal. The initial ideal in< (JG ) has a nice property which is stated in the following
proposition.
Proposition 2.1. Let G = Km,n be the complete bipartite graph. Then in< (JG ) has linear quotients.
Proof. Let u1 , . . . , ur be the minimal generators of in< (JG ) given in Corollary 1.3 where,
for i < j, either deg ui < deg uj or deg ui = deg uj and ui > uj . We show that, with respect
to this order of its minimal monomial generators, in< (JG ) has linear quotients, that is, for
any ℓ > 1, the ideal quotient (u1 , . . ., uℓ−1 ) : (uℓ ) is generated by variables.
Let uℓ = xi y j for some 1 ≤ i ≤ m and m + 1 ≤ j ≤ m + n. In this case, one may easily
check that
(1)
(u1 , . . ., uℓ−1 ) : (uℓ ) = (x1 , . . . , xi−1 , ym+1 , . . ., y j−1 ).
Let uℓ = xi xm+k y j for some 1 ≤ i < j ≤ m and 1 ≤ k ≤ n. Then we get
(2)
(u1 , . . . , uℓ−1 ) : (uℓ ) = (ym+1 , . . . , ym+n , x1 , . . ., xi−1 , xm+1 , . . . , xm+k−1 , yi+1 , . . . , y j−1 ).
Finally, if uℓ = xm+i yk ym+ j for some 1 ≤ i < j ≤ n and 1 ≤ k ≤ m, we have
(3)
(u1 , . . . , uℓ−1 ) : (uℓ) = (x1 , . . . , xm , xm+1 , . . . , xm+i−1 , y1 , . . . , yk−1 , ym+i+1 , . . . , ym+ j−1 ).
Theorem 2.2. Let G = Km,n be the complete bipartite graph. Then
βt,t+2 (in< (JG )) = \sum_{1≤i≤m,\, m+1≤j≤m+n} \binom{i+j−m−2}{t},
and
βt,t+3 (in< (JG )) = \sum_{1≤i<j≤m,\, 1≤k≤n} \binom{n+k+j−3}{t}   if n = 1,
βt,t+3 (in< (JG )) = \sum_{1≤i<j≤m,\, 1≤k≤n} \binom{n+k+j−3}{t} + \sum_{1≤i<j≤n,\, 1≤k≤m} \binom{m+k+j−3}{t}   if n > 1.
Proof. Since in< (JG ) has linear quotients, we may apply [4, Exercise 8.8] and get
βt,t+d (in< (JG )) = \sum_{ℓ :\, \deg u_ℓ = d} \binom{q_ℓ}{t},
where qℓ is the number of variables which generate (u1 , . . . , uℓ−1 ) : (uℓ ). Hence, by using equations (1)-(3) for counting the number of variables which generate (u1 , . . . , uℓ−1 ) : (uℓ ), we get all the graded Betti numbers of in< (JG ).
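The formulas of Theorem 2.2 are easy to evaluate numerically; the following sketch (illustrative only, with an arbitrary choice of m and n) also recovers the extremal value stated in Corollary 2.3 below.

```python
# Sketch: evaluate the Betti number formulas of Theorem 2.2 for G = K_{m,n}.
from math import comb

def betti_t_t2(m, n, t):
    return sum(comb(i + j - m - 2, t)
               for i in range(1, m + 1) for j in range(m + 1, m + n + 1))

def betti_t_t3(m, n, t):
    total = sum(comb(n + k + j - 3, t)
                for i in range(1, m + 1) for j in range(i + 1, m + 1)
                for k in range(1, n + 1))
    if n > 1:
        total += sum(comb(m + k + j - 3, t)
                     for i in range(1, n + 1) for j in range(i + 1, n + 1)
                     for k in range(1, m + 1))
    return total

m, n = 4, 3
p = m if n == 1 else 2 * m + n - 2      # projective dimension of S/in_<(J_G)
print(betti_t_t3(m, n, p - 1))          # expected extremal value: n - 1 = 2
```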
In particular, the above theorem yields the following corollary, which shows that for G = Km,n the extremal Betti numbers of S/JG and S/ in< (JG ) coincide.
Corollary 2.3. Let G = Km,n be the complete bipartite graph. Then:
(a) proj dim(S/ in< (JG )) = proj dim(in< (JG )) + 1 = m if n = 1, and 2m + n − 2 if n > 1.
(b) S/ in< (JG ) has a unique extremal Betti number, namely βp,p+2 (S/ in< (JG )) = βp−1,p+2 (in< (JG )) = m − 1 if n = 1, and n − 1 if n > 1.
Proof. (a) follows immediately from the Betti number formulas of Theorem 2.2.
Let us prove (b). By using again Theorem 2.2, we get
βp−1,p+2 (in< (JG )) = \sum_{1≤i<m} \binom{m−1}{p−1} = \sum_{1≤i<m} \binom{m−1}{m−1} = m − 1   if n = 1,
βp−1,p+2 (in< (JG )) = \sum_{1≤i<n} \binom{2m+n−3}{p−1} = \sum_{1≤i<n} \binom{2m+n−3}{2m+n−3} = n − 1   if n > 1.
2.2. Cycles. In this subsection, the graph G is an n–cycle. If n = 3, then G is a complete
graph, therefore the ideals JG and in< (JG ) have the same graded Betti numbers. Thus, in
the sequel, we may consider n ≥ 4.
As we have already seen in Corollary 1.4, in< (JG ) is minimally generated by the initial
monomials of the binomials corresponding to the edges of G and by m = n(n − 3)/2
monomials of degree ≥ 3 which we denote by v1 , . . ., vm where we assume that if i < j,
then either deg vi < deg v j or deg vi = deg v j and vi > v j . Let us observe that if vk =
xi x j+1 · · · xn y1 · · · yi−1 y j , we have deg vk = n − j + i + 1. Hence, there are two monomials
of degree 3, namely, v1 = x1 xn yn−1 and v2 = x2 y1 yn , three monomials of degree 4, namely,
v3 = x1 xn−1 xn yn−2 , v4 = x1 xn y1 yn−1 , v5 = x1 y1 y2 yn , etc.
We introduce the following notation. We set J = (x1 y2 , x2 y3 , . . ., xn−1 yn ), I = J +(x1 yn ),
and, for 1 ≤ k ≤ m, Ik = Ik−1 + (vk ), with I0 = I. Therefore, Im = in< (JG ).
Lemma 2.4. The ideals quotient J : (x1 yn ) and Ik−1 : (vk ), for k ≥ 1, are minimally generated by regular sequences of monomials of length n − 1.
Proof. The statement is obvious for J : (x1 yn ) since J is minimally generated by a regular
sequence. Now let k ≥ 1 and let vk = xi x j+1 · · · xn y1 · · · yi−1 y j for some i, j with 2 ≤
j − i ≤ n − 2. Then Ik−1 : (vk ) = I : (vk ) + (v1 , . . ., vk−1 ) : (vk ). One easily observes that
(v1 , . . ., vk−1 ) : (vk ) = (x1 , . . . , xi−1 , y j+1 , . . . , yn ). Hence,
I : (vk ) = (x1 , . . ., xi−1 , xi yi+1 , xi+1 yi+2 , . . ., x j−2 y j−1 , x j−1 y j , y j+1 , . . . , yn ) : (vk ) =
(x1 , . . . , xi−1 , yi+1 , xi+1 yi+2 , . . . , x j−2 y j−1 , x j−1 , y j+1 , . . . , yn )
Remark 2.5. From the above proof we also note that if vk = xi x j+1 · · · xn y1 · · · yi−1 y j ,
then the regular sequence of monomials which generates Ik−1 : (vk ) contains j − i − 2
monomials of degree 2 and n − j + i + 1 variables.
In the following lemma we compute the projective dimension and the regularity of S/I.
This will be useful for the inductive study of the invariants of S/Ik .
Lemma 2.6. We have proj dim S/I = n − 1 and reg S/I = n − 2.
Proof. In the exact sequence
(4)   0 → (S/(J : (x1 yn )))(−2) → S/J → S/I → 0,
where the first map is multiplication by x1 yn ,
we have proj dimS/J : (x1 yn ) = proj dim S/J = n − 1 since both ideals are generated by
regular sequences of length n − 1. Moreover, since J is generated by a regular sequence
of monomials of degree 2, by using the Koszul complex, we get
βi,j (S/J) = \binom{n−1}{i} if j = 2i, and βi,j (S/J) = 0 if j ≠ 2i.
In particular, it follows that Torn−1 (S/J, K) ≅ K(−2n + 2).
Analogously, since J : (x1 yn ) is generated by a regular sequence of n − 3 monomials of
degree 2 and two variables, it follows that Torn−1 (S/J : (x1 yn ), K) ≅ K(−2n + 4), which
implies that Torn−1 ((S/J : (x1 yn ))(−2), K) ≅ K(−2n + 2). By the long exact sequence of
Tor’s derived from sequence (4), we get Torn (S/I, K) = 0, hence proj dimS/I ≤ n − 1.
On the other hand, we have
βn−2,j (S/J : (x1 yn )) = 2 if j = 2n − 5, n − 3 if j = 2n − 6, and 0 otherwise.
From the exact sequence of Tor’s applied to (4), as
Torn−1 (S/J, K)2n−3 = Torn−2 (S/J, K)2n−3 = 0,
we get the following exact sequence
0 → Torn−1 (S/I, K)2n−3 → Torn−2 (S/J : (x1 yn ), K)2n−5 → 0.
Thus, Torn−1 (S/I, K) ≠ 0, which implies that proj dim S/I = n − 1.
For the regularity, we first observe that since the Koszul complex of the minimal generators gives the minimal graded free resolution of S/J and, respectively, S/J : (x1 yn ), we
have reg S/J = reg(S/J : (x1 yn )(−2)) = n − 1. Then sequence (4) implies that reg S/I ≤
n − 1. We have observed above that Torn−1 (S/I, K)2n−3 ≠ 0, thus reg S/I ≥ n − 2. In order
to derive the equality reg S/I = n − 2 we have to show that βi,i+n−1 (S/I) = 0 for all i.
For i = n − 1, as Torn−2 (S/J : (x1 yn ), K)2n−4 = 0, we get
0 → Torn−1 (S/J : (x1 yn ), K)2n−4 → Torn−1 (S/J, K)2n−2 → Torn−1 (S/I, K)2n−2 → 0.
But dimK Torn−1 (S/J : (x1 yn ), K)2n−4 = dimK Torn−1 (S/J, K)2n−2 = 1, which implies that
Torn−1 (S/I, K)2n−2 = 0. For i < n − 1, Tori (S/I, K)i+n−1 = 0 since Tori (S/J, K)i+n−1 = 0
and Tori−1 (S/J : (x1 yn ), K)i+n−3 = 0. The latter equality holds since, as we have already
observed, reg S/J : (x1 yn ) = n − 3.
Lemma 2.7. For 1 ≤ k ≤ m, we have proj dim S/Ik ≤ n and reg S/Ik ≤ n − 2.
Proof. We proceed by induction on k, by using the following exact sequence and Lemma 2.6
for the initial step,
(5)   0 → (S/(Ik−1 : (vk )))(− deg vk ) → S/Ik−1 → S/Ik → 0.
Indeed, as Ik−1 : (vk ) is generated by a regular sequence of length n − 1, it follows
that proj dimS/Ik−1 : (vk ) = n − 1. Thus, if proj dim S/Ik−1 ≤ n, by (5), it follows that
proj dim S/Ik ≤ n.
In addition, we have reg(S/Ik−1 : (vk ))(− deg vk ) = reg(S/Ik−1 : (vk )) + deg vk . If vk =
xi x j+1 · · · xn y1 · · · yi−1 y j , by using Remark 2.5, we obtain reg(S/Ik−1 : (vk )) = j − i − 2.
As deg vk = n − j + i + 1, we get reg(S/Ik−1 : (vk ))(− deg vk ) = n − 1. Let us assume, by
induction, that reg S/Ik−1 ≤ n − 2. Then, by using the sequence (5), we obtain
reg S/Ik ≤ max{reg(S/Ik−1 : (vk ))(− deg vk ) − 1, reg S/Ik−1 } = n − 2.
Proposition 2.8. We have proj dimS/ in< (JG ) = n and reg S/ in< (JG ) = n − 2.
Proof. As in< (JG ) = Im , by Lemma 2.7, we get proj dim S/ in< (JG ) ≤ n and reg S/ in< (JG ) ≤
n − 2. For the other inequalities, we use [4, Theorem 3.3.4] which gives the inequalities
n = proj dimS/JG ≤ proj dim S/ in< (JG ) and n − 2 = reg S/JG ≤ reg S/ in< (JG ).
Theorem 2.9. Let G be a cycle. Then S/ in< (JG ) and S/JG have the same extremal Betti number, namely βn,2n−2 (S/JG ) = βn,2n−2 (S/ in< (JG )) = \binom{n-1}{2} − 1.
Proof. We only need to show that βn,2n−2 (S/ in< (JG )) = m since m = (n² − 3n)/2 = \binom{n-1}{2} − 1.
We use again the sequence (5). By considering its long exact sequence of Tor’s and
using the equality Torn−1 (S/Ik−1 , K)2n−2 = 0, we get
0 → Torn (S/Ik−1, K)2n−2 → Torn (S/Ik , K)2n−2 → Torn−1 (S/Ik−1 : (vk ), K)2n−2−deg vk → 0,
for 1 ≤ k ≤ m. By Remark 2.5, if vk = xi x j+1 · · · xn y1 · · · yi−1 y j , then Ik−1 : (vk ) is generated
by a regular sequence of monomials which contains j − i − 2 elements of degree 2 and
n − j + i + 1 variables. This implies that dimK Torn−1 (S/Ik−1 : (vk ), K)2n−2−deg vk = 1.
Therefore, we get
dimK Torn (S/Ik , K)2n−2 = dimK Torn (S/Ik−1 , K)2n−2 + 1
for 1 ≤ k ≤ m. By summing up all these equalities, it follows that
βn,2n−2 (S/ in< (JG )) = dimK Torn (S/Im , K)2n−2 = dimK Torn (S/I, K)2n−2 + m = m.
The last equality is due to Lemma 2.6.
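For the reader's convenience, the numerical identity used in this proof can be checked directly:

```latex
\binom{n-1}{2} - 1 \;=\; \frac{(n-1)(n-2)}{2} - 1 \;=\; \frac{n^2-3n+2-2}{2} \;=\; \frac{n^2-3n}{2} \;=\; \frac{n(n-3)}{2} \;=\; m .
```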
Remark 2.10. There are examples of graphs whose edge ideal have several extremal Betti
numbers. For instance, the graph G displayed below has two extremal Betti numbers
which are equal to the extremal Betti numbers of in< (JG ).
[Figure: the graph G of Remark 2.10.]
REFERENCES
[1] M. Crupi, G. Rinaldo, Binomial edge ideals with quadratic Gröbner bases, Electron. J. Combin., 18
(2011), no. 1, # P211.
[2] V. Ene, J. Herzog, T. Hibi, Cohen-Macaulay binomial edge ideals, Nagoya Math. J. 204 (2011), 57–68.
[3] V. Ene, A. Zarojanu. On the regularity of binomial edge ideals, to appear in Math. Nachr.
[4] J. Herzog, T. Hibi, Monomial Ideals, Graduate Texts in Mathematics 260, Springer, 2010.
[5] J. Herzog, T. Hibi, F. Hreinsdotir, T. Kahle, J. Rauh, Binomial edge ideals and conditional independence
statements, Adv. Appl. Math. 45 (2010), 317–333.
[6] G. Lyubeznik, A new explicit finite free resolution of ideals generated by monomials in an R-sequence,
J. Pure Appl. Algebra 51 (1988), 193–195.
[7] K. Matsuda, S. Murai, Regularity bounds for binomial edge ideals, J. Commut. Algebra 5 (2013),
141–149.
[8] M. Ohtani, Graphs and ideals generated by some 2-minors, Commun. Algebra 39 (2011), no. 3, 905–
917.
[9] A. Rauf, G. Rinaldo, Construction of Cohen-Macaulay binomial edge ideals, to appear in Commun.
Algebra.
[10] S. Saeedi Madani, D. Kiani, Binomial edge ideals of graphs, Electron. J. Combin, 19 (2012), no. 2, #
P44.
[11] S. Saeedi Madani, D. Kiani, On the binomial edge ideal of a pair of graphs, Electron. J. Combin, 20
(2013), no. 1, # P48.
[12] P. Schenzel, S. Zafar, Algebraic properties of the binomial edge ideal of a complete bipartite graph,
to appear in An. St. Univ. Ovidius Constanta, Ser. Mat.
[13] S. Zafar, On approximately Cohen-Macaulay binomial edge ideal, Bull. Math. Soc. Sci. Math.
Roumanie, Tome 55(103) (2012) No. 4, 429–442.
[14] Z. Zahid, S. Zafar, On the Betti numbers of some classes of binomial edge ideals, Preprint 2012.
Faculty of Mathematics and Computer Science, Ovidius University, Bd. Mamaia 124, 900527 Constanta, and University of South-East Europe Lumina, Sos. Colentina nr. 64B, Bucharest, Romania
E-mail address: [email protected]
| 0 |
arXiv:1801.06434v5 [] 14 Mar 2018
EffNet: AN EFFICIENT STRUCTURE FOR CONVOLUTIONAL NEURAL NETWORKS
Ido Freeman, Lutz Roese-Koerner
Anton Kummert
Aptiv
Wuppertal, Germany
University of Wuppertal
Department of Electrical Engineering
{first_name.last_name}@aptiv.com
[email protected]
ABSTRACT
With the ever increasing application of Convolutional Neural
Networks to customer products the need emerges for models which can efficiently run on embedded, mobile hardware.
Slimmer models have therefore become a hot research topic
with multiple different approaches which vary from binary
networks to revised convolution layers. We offer our contribution to the latter and propose a novel convolution block which
significantly reduces the computational burden while surpassing the current state-of-the-art. Our model, dubbed EffNet,
is optimised for models which are slim to begin with and is
created to tackle issues in existing models such as MobileNet
and ShuffleNet.
methods they leave two large issues unaddressed. First, both
papers report taking large networks and making them smaller
and more efficient. When applying their models to slimmer
networks the results start to diverge. Second, both proposed
models create an aggressive bottleneck [7] for data flow in the
network. This kind of a bottleneck might prove insignificant
in models of high redundancy yet, as our experiments show, it
has a destructive effect on smaller models.
We therefore propose an alternative constellation which retains most of the proportional decrease in computations while
having little to no affect on the accuracies. We achieve this
improvement by optimising data flow through the network
and neglecting practices which prove harmful in this unique
domain.
Index Terms— convolutional neural networks, computational efficiency, real-time inference
2. RELATED WORK
1. INTRODUCTION
With recent industrial recognition of the benefits of Artificial
Neural Networks to product capabilities, the demand emerges
for efficient algorithms to run in real-time on cost-effective
hardware. This contradicts, in a way, the now parallel university research. While the latter enjoys a relative freedom
in terms of execution cycles and hardware, the former is subjected to market forces and product requirements.
Over the years multiple papers proposed different approaches for tackling the problem of real-time inference on a
small hardware. One common method is the pruning of trained
networks [1], [2], [3]. Another is the fix-point conversion of
32bit networks to as far as binary models [4]. A more recent
approach concentrates on the interconnectivity of the neurons
and the very nature of the vanilla convolution layers.
A vanilla convolution layer consists, in its core, of a fourdimensional tensor which is swiped over an input signal in
the following format [rows, columns, channels in, channels
out], resulting in a quadruple-component multiplication. This
means that the computational cost scales by a four-fold factor.
As 3 × 3 convolutions are now a standard, they become a
natural candidate for optimisation. Papers as [5] (MobileNet)
and [6] (ShuffleNet) set to solve this issue by separating the
computations along the different dimensions. Yet in their
Much of the work in the field focuses on hyper-parameter
optimisation. Algorithms from this class are rather general
both in terms of target algorithm and optimisation objective.
[8] proposed a general Bayesian optimisation framework for
black-box algorithms as CNNs 1 and SVMs 2 by maximising
the probability of increasing the model’s accuracy. This could
be combined with multi-objective optimisation as in [9], to also
optimise the computational complexity. These methods mostly
work well when initialised properly and many of them are limited in their search space [10]. using reinforcement learning,
[11] trained an LSTM 3 [12] to optimise hyper-parameters for
improved accuracy and speed. This along with recent evolutionary methods [13] exhibits less limitations on the search
space but complicates the development by deploying additional
modules.
An additional approach resolves to decreasing the size of
large models in a post-processing manner. Papers such as
[1], [2] and [3] proposed pruning algorithms with a minimal
cost to accuracies. However pruning leads to several issues.
The pipeline itself requires an additional phase with dedicated
hyper-parameters which require optimisation. Furthermore, as
1 Convolutional
Neural Networks
Vector Machines
3 Long Short-Term Memory
2 Support
the network’s architecture is changed, the models require an
additional fine-tuning step.
A further method for compression through post-processing
is the fix-point quantisation of models to smaller primitives
than the common 32bit floats [14] [15], [16] and the binary networks of [4]. Quantised models, although much faster, consistently show decreased accuracies compared to their baselines
and are thus less appealing.
Last, similar to this work, papers as [17], [5] and [6] revisited the very nature of the common convolution operator.
This involves the dimension-wise separation of the convolution
operator, as discussed in [18]. Here, the original operation is
approximated using significantly less FLOPs. [7] separated
the 3 × 3 kernels into two consecutive kernels of shapes 3 × 1
and 1 × 3. The MobileNet model [5] took a step further and
separated the channel-wise from the spatial convolution which
is also only applied depthwise, see Figure 1b. By doing so
a significant reduction in FLOPs was achieved while the majority of computations was shifted to the point-wise layers.
Finally, the ShuffleNet model [6] addressed the stowage of
FLOPs in the point-wise layers by dividing them into groups
in a similar way to [19]. This lead to a drastic reduction in
FLOPs with a rather small toll on the accuracies, see Figure 1
in [6] and Figure 1c.
The diversity of methods shows that there are multiple
ways to successfully compress a CNN. Yet most methods
assume a large development model which is adjusted for efficiency. They thus commonly seem to reach their limits when
applied to networks which are slim to begin with. As many
embedded systems have a limited specification, models are
normally designed within these limitations rather than just
optimising a large network. In such environments the limitations of [5] and [6] become clearer thus laying the base for
our EffNet model which shows the same capacity even when
applied to shallow and narrow models.
Finally notice that the methods above are not mutually exclusive. For example, our model could also be converted to fixpoint, pruned and optimised for best set of hyper-parameters.
(a) An EffNet block (b) A MobileNet (c) A ShuffleNet
(ours)
block
block
Fig. 1: A comparison of MobileNet and ShuffleNet with our EffNet
blocks. ’dw’ means depthwise convolution, ’mp’ means max-pooling,
’ch’ is for the number of output channels and ’gc’ is for group convolutions. Best seen in colour.
algorithms which are optimised to run a specific task extremely
quick, e.g. [20]. Additionally, regulatory limitations often
prohibit a one network solution as they require backup systems
and highly interpretable decision making processes. Reducing
the computational cost of all small classifiers in a project would
thus allow either the redistribution of computational power to
more critical places or enable deeper and wider models with a
larger capacity.
Exploring the limitations of previous work revealed that the
smaller the model is, the more accuracy it loses when converted
to MobileNet or to ShuffleNet, see section 5. Analysing the
nature of these suggested modifications we came across several
issues.
The Bottleneck Structure The bottleneck structure as discussed in [21] applies a reduction factor of eight to the number
of input channels in a block, comparing to the number of output
channels. A ShuffleNet block uses a reduction factor of four
[6]. Yet narrow layers do not tend to have enough channels
to enable such a drastic reduction. In all of our experiments
we witnessed a loss in accuracy comparing to more moderate
reduction and we therefore propose to use a bottleneck factor
of two. Additionally it was found useful to use the special
convolution (see following paragraph) with a depth multiplier
of 2, i.e. the first depthwise convolution layer also doubles the
amount of channels.
Separable Convolutions Proposed by [7] and used neither
by ShuffleNet nor by MobileNet, we revisit the idea of consecutive separable spatial convolutions, i.e. using 3 × 1 and 1 × 3
layers instead of a single 3 × 3 layer. Separating the spatial
convolution might only make a minor difference in terms of
FLOPs but we could not find any reason not to use it.
Strides and Pooling Both MobileNet and ShuffleNet models choose to apply a stride of two to the depthwise spatial
convolution layer in their blocks. Our experiments show two
issues with this practice. First, we repeatedly witnessed a decrease in accuracy comparing to max-pooling. This was in a
way expected as strided convolution is prone to aliasing.
3. BUILDING BLOCKS FOR INCREASED
EFFICIENCY
This section discusses the most common practices for increasing efficiency. The presented results assisted with identifying
weaknesses in previous techniques and constructing a suitable
solution in the form of a unified EffNet block. For practical
reasons we avoid going into details regarding the exact settings
of the following experiments. Instead we discuss their results
and show their effect as a whole in section 5.
The combination of multiple tasks, competitive costs and
interactive run-times puts strict limitations on possible model
sizes for industrial applications. These requirements in fact
often lead to the usage of more classical computer vision
2
Table 1: Data flow analysis of selected models (layer → floats out). One could intuitively understand how an aggressive data compression in early stages would harm accuracies. Compression factors of 4 or more are marked in red. gc4 means convolution in 4 groups. Best seen in colour.

Baseline: 3x3x64 + mp → 16384; 3x3x128 + mp → 8192; 3x3x256 + mp → 4096; Fully Connected → 10.
MobileNet [5]: 3x3x64 + mp → 16384; dw 3x3 + stride → 4096; 1x1x128 → 8192; dw 3x3 + stride → 2048; 1x1x256 → 4096; Fully Connected → 10.
ShuffleNet [6]: 3x3x64 + mp → 16384; gc4 1x1x32 → 8192; dw 3x3 + stride → 2048; gc4 1x1x128 → 8192; gc4 1x1x64 → 4096; dw 3x3 + stride → 1024; gc4 1x1x256 → 4096; Fully Connected → 10.
EffNet (ours): 1x1x32 → 32768; dw 1x3 + 1d mp → 16384; dw 3x1 → 16384; 2x1x64 + 1d stride → 16384; 1x1x64 → 16384; dw 1x3 + 1d mp → 8192; dw 3x1 → 8192; 2x1x128 + 1d stride → 8192; 1x1x128 → 8192; dw 1x3 + 1d mp → 4096; dw 3x1 → 4096; 2x1x256 + 1d stride → 4096; Fully Connected → 10.
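To make the data flow above concrete, the following is a minimal sketch of an EffNet-style block and of the small Cifar-10 network from the EffNet column of Table 1, written in Keras. The axis assignment of the 1x3/3x1 kernels and of the (1, 2) pooling, the activation function and the absence of normalisation are assumptions made for illustration; this is not the authors' reference implementation.

```python
# Sketch of an EffNet-style block (cf. Figure 1a and the EffNet column of Table 1).
import tensorflow as tf
from tensorflow.keras import layers

def effnet_block(x, ch_out):
    ch_mid = max(ch_out // 2, 6)                                  # relaxed bottleneck factor of 2, minimum of 6 channels
    x = layers.Conv2D(ch_mid, (1, 1), use_bias=False)(x)          # pointwise bottleneck
    x = layers.LeakyReLU()(x)
    x = layers.DepthwiseConv2D((1, 3), padding='same', use_bias=False)(x)  # dw 1x3
    x = layers.MaxPool2D(pool_size=(1, 2))(x)                     # first half of the subsampling
    x = layers.DepthwiseConv2D((3, 1), padding='same', use_bias=False)(x)  # dw 3x1
    x = layers.LeakyReLU()(x)
    x = layers.Conv2D(ch_out, (2, 1), strides=(2, 1), padding='same',
                      use_bias=False)(x)                          # 2x1 kernel with matching stride
    x = layers.LeakyReLU()(x)
    return x

# Example: the Cifar-10 style data flow of Table 1 (32x32x3 input).
inp = layers.Input(shape=(32, 32, 3))
h = effnet_block(inp, 64)
h = effnet_block(h, 128)
h = effnet_block(h, 256)
out = layers.Dense(10)(layers.Flatten()(h))
model = tf.keras.Model(inp, out)
```

With this layout the intermediate tensor sizes reproduce the float counts of the EffNet column in Table 1 (32768, 16384, ..., 4096).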
4. THE EffNet MODEL
Additionally, applying max-pooling to the spatial convolution layer does not allow the network to properly encode the
data before it is reduced to a fourth of its incoming size. Nevertheless early stage pooling means cheaper following layers in
a block. In order to maintain the advantages of early pooling
while also relaxing data compression, we propose using separable pooling. Similar to separable convolution, we first apply
a 2 × 1 pooling kernel (with corresponding strides) after the
first spatial convolution layer. The second phase of the pooling
then follows the last pointwise convolution of the block.
4.1. Data Compression
Upon analysing the effects of the various methods discussed
in section 3, we established that small networks are very sensitive to data compression. Throughout the experiments, each
method which led to a larger bottleneck in data flow had also
resulted in decreased accuracies. For a better understanding
of the data flow concept Table 1 lists the dimensionality of an
input through the different stages of our Cifar 10 [23] network
comparison.
Residual Connections Initially proposed by [22] and
quickly adopted by many, residual connections have become
a standard practice. Papers like [22] showed that residual
connections are only beneficial in deeper networks. We extend
this claim and report a consistent decrease in accuracies when
using residual connections throughout our experiments. We
interpret this as a support to our claim that small networks
cannot handle large compression factors well.
4.2. The EffNet Blocks
We propose an efficient network structure which at the same
time solves the issue of data compression and implements the
insights from section 3. We design this block as a general
construction to seamlessly replace the vanilla convolutional
layers in, but not limited to, slim networks.
We start by, in a similar manner to [7], splitting the 3 × 3
depthwise convolution to two linear layers. This allows us to
pool after the first spatial layer, thus saving computations in
the second layer.
We then split the subsampling along the spatial dimensions.
As seen in Table 1 and in Figure 1 we apply a 1×2 max pooling
kernel after the first depthwise convolution. For the second
subsampling we choose to replace the common pointwise
convolution with 2 × 1 kernels and a corresponding stride.
This practically has the same amount of FLOPs yet leads to
slightly higher accuracies.
Following the preliminary experiments in section 3, we
decide to relax the bottleneck factor for the first pointwise
convolution. Instead of using a fourth of the output channels,
we recognise a factor of 0.5, with a minimal channel amount
of 6, as preferable.
Group Convolutions Following the promising results of [6]
we also experimented with similar configurations. The most
drastic structure here was the original ShuffleNet module and
the most relaxed configuration was the mere application of
group convolutions to the last point-wise layer in the blocks.
The results showed a clear decrease in accuracies. We have
therefore refrain from using group convolutions, despite the
computational benefits.
Addressing the First Layer Both MobileNet [5] and ShuffleNet [6] avoided replacing the first layer with the claim that
this layer is already rather cheap to begin with. We respectfully
disagree with this claim and believe that every optimisation
counts. After having optimised the rest of the layers in the
network, the first layer becomes proportionally larger. In our
experiments, replacing the first layer with our EffNet block
saves ∼ 30% of the computations for the respective layer.
3
5. EXPERIMENTS
ment which favour our EffNet model both in terms of accuracy
and FLOPs.
This section covers the evaluation of our model. We select
datasets which comply with our general settings; a small number of classes and a relatively small input resolution. We then
do a quick manual search for probable hyper-parameters for
the baseline which fulfil the requirements; two to three hidden
layers and small number of channels. The other architectures
then simply replace the convolutional layers without changing
the hyper-parameters.
Architectures were executed at least five times to cancel
out the effects of random initialisation.
To concentrate the experiments on our model, we did not
apply any sort of data augmentation or pre-training on additional data as proposed by [24]. Furthermore, we did not use
hyper-parameter optimisation as our goal is to replace the convolution layers in every given network with the EffNet blocks.
This makes our model simpler to use as it allows developers to
not worry about the implications of the swap to EffNet blocks.
We used Tensorflow [25] for all experiments. Networks
were trained using the Adam optimiser [26] with a learning
rate of 0.001 and β1 = 0.75.
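For reference, the corresponding optimiser configuration in current TensorFlow/Keras would look roughly as follows; the tiny baseline model shown here is only a placeholder to make the snippet self-contained.

import tensorflow as tf

# Adam with the settings reported above: learning rate 0.001 and beta_1 = 0.75
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.75)

# hypothetical small baseline: two hidden conv layers and a classifier head
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(32, 3, strides=2, activation='relu'),
    tf.keras.layers.Conv2D(64, 3, strides=2, activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer=optimizer,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])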
As an additional experiment, we evaluate a larger EffNet
model with roughly the same amount of FLOPs as the baseline.
This is dubbed large in the following results.
Table 3: A comparison of the large and smaller models on the SVHN dataset

Model           Mean Accuracy   kFLOPs     Factor
Baseline        91.08%          3,563.5    1.00
EffNet large    91.12%          3,530.7    0.99
MobileNet       85.64%          773.4      0.22
ShuffleNet      82.73%          733.1      0.21
EffNet (ours)   88.51%          517.6      0.14
5.3. German Traffic Sign Recognition Benchmark
A slightly older dataset which is nevertheless very relevant
in most current driver assistance applications is the GTSRB
dataset [28]. With over 50, 000 images and some 43 classes it
presents a rather small task with a large variation in data and is
thus an interesting benchmark. As even small networks started
overfitting very quickly on this data, we resize the input images
to 32 × 32 and use a dropout [29] layer with drop-probability
of 50% before the output layer. Results are shown in Table 4
and also favour our EffNet model.
Table 4: A comparison of the large and smaller models on the GTSRB dataset

Model           Mean Accuracy   kFLOPs     Factor
Baseline        94.48%          2,326.5    1.00
EffNet large    94.82%          2,171.9    0.93
MobileNet       88.15%          533.0      0.23
ShuffleNet      88.99%          540.7      0.23
EffNet (ours)   91.79%          344.1      0.15
Table 2: A comparison of the large and smaller models on the Cifar10 dataset

Model           Mean Accuracy   Mil. FLOPs   Factor
Baseline        82.78%          80.3         1.00
EffNet large    85.02%          79.8         0.99
MobileNet       77.48%          5.8          0.07
ShuffleNet      77.30%          4.7          0.06
EffNet (ours)   80.20%          11.4         0.14
5.1. Cifar 10
Being a fundamentally simple dataset in computer vision, Cifar10 [23] is a good example of the sort of tasks we aim to improve on. Its images are small and represent a limited number of classes. We achieve a significant improvement over MobileNet, ShuffleNet and even the baseline while still requiring ∼7 times fewer FLOPs than the baseline (see Table 2). We relate this improvement to the additional depth of the network, meaning that the EffNet blocks simulate a larger, deeper network which does not underfit as much as the other models.

5.2. Street View House Numbers
Similar to Cifar10, the SVHN benchmark [27] is also a common dataset for the evaluation of simple networks. The data consists of 32 × 32 pixel patches centred around a digit with a corresponding label. Table 3 shows the results of this experiment, which favour our EffNet model both in terms of accuracy and FLOPs.

5.3. German Traffic Sign Recognition Benchmark
A slightly older dataset which is nevertheless very relevant in most current driver assistance applications is the GTSRB dataset [28]. With over 50,000 images and some 43 classes it presents a rather small task with a large variation in data and is thus an interesting benchmark. As even small networks started overfitting very quickly on this data, we resize the input images to 32 × 32 and use a dropout [29] layer with a drop probability of 50% before the output layer. Results are shown in Table 4 and also favour our EffNet model.

6. COMPARISON WITH MOBILENET V2
As [30] came out at the same time as our work, we extend this work with a quick comparison. Finally, we show how, using a few minor adjustments, we surpass [30] in terms of accuracy while being similarly expensive to compute.

6.1. Architecture Comparison
Both [30] and this work separate the convolution operation along some of its dimensions to save computations. Unlike [30], we also separate the spatial, two-dimensional kernels into two single-dimensional kernels. We did notice a small decrease in accuracies of around 0.5% across our experiments by doing so, yet it allows for a significantly more efficient implementation and requires less computation.
For tackling the data compression problem, [30] proposes to significantly inflate the amount of data throughout their block by multiplying the number of channels of the input by a factor of 4 – 10. This makes the compression less aggressive compared to the respective block's input while also moving it to the end of the blocks, i.e. a reversed bottleneck. They further recognise an interesting, often overlooked property of the ReLU function. When following a Batch Normalisation layer, ReLU sets half of the data to zero, thus further compressing the data. To counter this problem, [30] resorts to a linear pointwise convolution at the end of each block. In practice they get a linear layer followed by another non-linear pointwise layer, i.e. B ∗ (A ∗ x) with x being the input, A the first layer and B the second. Removing layer A altogether simply forces the network to learn the layer B as the function B ∗ A. Our experiments also showed an indifference to the existence of layer A. Nevertheless, we show that using a leaky ReLU [31] on top of the layer A significantly increases the performance.
Table 5: A comparison of MobileNet v2 and EffNet on the Cifar10 dataset for various expansion rates

Model          Ex. Rate   Mean Acc.   Mil. Flops   Fact.
Baseline       –          82.78%      80.3         1.00
EffNet         6          83.20%      44.1         0.55
MobileNet v2   6          79.10%      42.0         0.52
EffNet         4          82.45%      31.1         0.39
MobileNet v2   4          78.91%      29.2         0.36
EffNet         2          81.67%      18.1         0.22
MobileNet v2   2          76.47%      16.4         0.20
mob_imp        6          84.25%      44.0         0.55

6.2. EffNet Adaptations
Considering the latest experiments, we revise our architecture by introducing three minor adjustments.
First, considering the bottleneck structure, we define the output channels of the first pointwise layer as a function of the block's input channels rather than its output channels. Similar to [30], yet less extreme, the number of channels is given by

⌊ (inputChannels × expansionRate) / 2 ⌋

Second, the depth multiplier in the spatial convolution, which we only previously increased in some cases, is now natively integrated into our architecture and set to 2.
Last, we replace the ReLU on the pointwise layers with a leaky ReLU.
Please note that the experiments were not decisive regarding both the activation function for the depthwise convolution and the first layer in the network. For the sake of simplicity, we used a ReLU with the remark that both leaky ReLU and linear spatial convolution were occasionally preferable. The first layer in the following experiments is a vanilla convolutional layer with max pooling.
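A minimal sketch of how these three adjustments change the block from section 4.2 is given below. Which of the two spatial layers receives the depth multiplier, and the exact normalisation placement, are assumptions for illustration.

import tensorflow as tf
from tensorflow.keras import layers

def effnet_block_v2(x, ch_out, expansion_rate=2):
    # Sketch of the revised block: bottleneck width derived from the input
    # channels, a depth multiplier of 2 in a spatial convolution, and
    # leaky ReLU on the pointwise layers.
    ch_in = int(x.shape[-1])
    ch_mid = (ch_in * expansion_rate) // 2           # floor(inputChannels * expansionRate / 2)

    x = layers.Conv2D(ch_mid, 1, padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.LeakyReLU()(x)                        # leaky ReLU replaces ReLU on pointwise layers

    x = layers.DepthwiseConv2D((1, 3), depth_multiplier=2, padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.MaxPool2D(pool_size=(1, 2))(x)

    x = layers.DepthwiseConv2D((3, 1), padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    x = layers.Conv2D(ch_out, (2, 1), strides=(2, 1), padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.LeakyReLU()(x)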
Table 6: A comparison of MobileNet v2 and EffNet on the SVHN dataset for various expansion rates

Model          Ex. Rate   Mean Acc.   kFlops     Fact.
Baseline       –          91.08%      3,563.5    1.00
EffNet         6          87.80%      2,254.8    0.63
MobileNet v2   6          87.16%      2,130.4    0.60
EffNet         4          87.49%      1,729.5    0.49
MobileNet v2   4          86.93%      1,646.6    0.46
EffNet         2          87.30%      1,204.2    0.34
MobileNet v2   2          86.71%      1,162.8    0.33
mob_imp        6          88.78%      2,506.7    0.70
6.3. Experiments
We use the same datasets as in section 5 but aim at a different kind of comparison. We now evaluate three models.
1. Our revised EffNet model
2. The original MobileNet v2 model
3. Our proposed modifications to the MobileNet v2 model with pooling instead of strided convolution and leaky ReLU instead of the linear bottleneck. This is dubbed mob_imp (mobile improved) throughout the following tables.
The models are evaluated with three different expansion rates: 2, 4 and 6, while mob_imp is only tested with an expansion rate of 6. Tables 5, 6 and 7 show how our revised architecture performs favourably to [30] in most settings in terms of accuracy while having only marginally more FLOPs. Furthermore, although the mob_imp model outperforms our model, it is significantly more expensive to compute.

Table 7: A comparison of MobileNet v2 and EffNet on the GTSRB dataset for various expansion rates

Model          Ex. Rate   Mean Acc.   kFlops     Fact.
Baseline       –          94.48%      2,326.5    1.00
EffNet         6          93.74%      1,208.3    0.51
MobileNet v2   6          92.82%      1,159.2    0.50
EffNet         4          92.30%      956.4      0.41
MobileNet v2   4          91.56%      934.9      0.40
EffNet         2          90.40%      704.5      0.30
MobileNet v2   2          90.74%      710.7      0.31
mob_imp        6          93.25%      1,408.0    0.61

7. CONCLUSIONS
We have presented a novel convolutional block structure for CNNs, called EffNet, which promises to significantly reduce computational effort while preserving and even surpassing the baseline's accuracy. Our unified blocks are designed for safely replacing the vanilla convolution layers in applications for embedded and mobile hardware. As networks are reduced to a small fraction of the baseline's FLOPs, our method presents a two-fold advantage: first, the quicker architecture, and second, the possibility of applying a larger, deeper network. We have also shown how such a larger network is clearly preferable to the baseline while requiring a similar number of operations.
8. REFERENCES
[1] M. Babaeizadeh et al., "Noiseout: A simple way to prune neural networks," 2016.
[2] X. Dong, S. Chen, and S. J. Pan, "Learning to prune deep neural networks via layer-wise optimal brain surgeon," 2017.
[3] P. Molchanov et al., "Pruning convolutional neural networks for resource efficient transfer learning," 2017.
[4] M. Rastegari et al., "Xnor-net: Imagenet classification using binary convolutional neural networks," in ECCV. Springer, 2016, pp. 525–542.
[5] A. G. Howard et al., "Mobilenets: Efficient convolutional neural networks for mobile vision applications," arXiv preprint arXiv:1704.04861, 2017.
[6] X. Zhang et al., "Shufflenet: An extremely efficient convolutional neural network for mobile devices," arXiv preprint arXiv:1707.01083, 2017.
[7] C. Szegedy et al., "Rethinking the inception architecture for computer vision," in Proc. CVPR. IEEE, 2016, pp. 2818–2826.
[8] J. Snoek, H. Larochelle, and R. P. Adams, "Practical bayesian optimization of machine learning algorithms," in Advances in neural information processing systems, 2012, pp. 2951–2959.
[9] D. Horn and B. Bischl, "Multi-objective parameter configuration of machine learning algorithms using model-based optimization," in Symposium Series on Computational Intelligence. IEEE, 2016, pp. 1–8.
[10] J. Bergstra and Y. Bengio, "Random search for hyper-parameter optimization," Journal of Machine Learning Research, vol. 13, pp. 281–305, 2012.
[11] B. Zoph and Q. V. Le, "Neural architecture search with reinforcement learning," in ICLR, 2017.
[12] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[13] E. Real et al., "Large-scale evolution of image classifiers," arXiv preprint arXiv:1703.01041, 2017.
[14] X. Chen et al., "Fxpnet: Training a deep convolutional neural network in fixed-point representation," in IJCNN. IEEE, 2017, pp. 2494–2501.
[15] D. Lin, S. Talathi, and S. Annapureddy, "Fixed point quantization of deep convolutional networks," in ICML, 2016, pp. 2849–2858.
[16] L. Lai, N. Suda, and V. Chandra, "Deep convolutional neural network inference with floating-point weights and fixed-point activations," arXiv preprint arXiv:1703.03073, 2017.
[17] F. Chollet, "Xception: Deep learning with depthwise separable convolutions," arXiv preprint arXiv:1610.02357, 2016.
[18] M. Jaderberg et al., "Speeding up convolutional neural networks with low rank expansions," in Proc. BMVC. BMVA Press, 2014.
[19] A. Krizhevsky et al., "Imagenet classification with deep convolutional neural networks," in Advances in neural information processing systems, 2012, pp. 1097–1105.
[20] C. Tomasi and T. Kanade, "Detection and tracking of point features," School of Computer Science, Carnegie Mellon Univ., Pittsburgh, 1991.
[21] F. N. Iandola et al., "Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5 MB model size," arXiv preprint arXiv:1602.07360, 2016.
[22] K. He et al., "Deep residual learning for image recognition," in Proc. CVPR. IEEE, 2016, pp. 770–778.
[23] A. Krizhevsky and G. Hinton, "Learning multiple layers of features from tiny images," 2009.
[24] J. Krause et al., "The unreasonable effectiveness of noisy data for fine-grained recognition," in ECCV. Springer, 2016, pp. 301–320.
[25] M. Abadi et al., "Tensorflow: Large-scale machine learning on heterogeneous distributed systems," arXiv preprint arXiv:1603.04467, 2016.
[26] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," in Proc. ICLR, 2015.
[27] Y. Netzer et al., "Reading digits in natural images with unsupervised feature learning," in NIPS, 2011, number 2, p. 5.
[28] J. Stallkamp et al., "The German Traffic Sign Recognition Benchmark: A multi-class classification competition," in IJCNN. IEEE, 2011, pp. 1453–1460.
[29] N. Srivastava et al., "Dropout: A simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
[30] M. Sandler et al., "Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation," arXiv preprint arXiv:1801.04381, 2018.
[31] B. Xu et al., "Empirical evaluation of rectified activations in convolutional network," ICML, 2015.
| 9 |
Generalizable Data-free Objective for Crafting
Universal Adversarial Perturbations
arXiv:1801.08092v1 [] 24 Jan 2018
Konda Reddy Mopuri*, Aditya Ganeshan*, R. Venkatesh Babu, Senior Member, IEEE
Abstract—Machine learning models are susceptible to adversarial perturbations: small changes to input that can cause large changes
in output. It is also demonstrated that there exist input-agnostic perturbations, called universal adversarial perturbations, which can
change the inference of target model on most of the data samples. However, existing methods to craft universal perturbations are (i)
task specific, (ii) require samples from the training data distribution, and (iii) perform complex optimizations. Also, because of the data
dependence, fooling ability of the crafted perturbations is proportional to the available training data. In this paper, we present a novel,
generalizable and data-free objective for crafting universal adversarial perturbations. Independent of the underlying task, our objective
achieves fooling via corrupting the extracted features at multiple layers. Therefore, the proposed objective is generalizable to craft
image-agnostic perturbations across multiple vision tasks such as object recognition, semantic segmentation and depth estimation.
In the practical setting of black-box attacking scenario, we show that our objective outperforms the data dependent objectives to fool
the learned models. Further, via exploiting simple priors related to the data distribution, our objective remarkably boosts the fooling
ability of the crafted perturbations. Significant fooling rates achieved by our objective emphasize that the current deep learning models
are now at an increased risk, since our objective generalizes across multiple tasks without the requirement of training data for crafting
the perturbations. To encourage reproducible research, we have released the code for our proposed algorithm at GitHub† .
Index Terms—Adversarial perturbations, fooling CNNs, stability of Neural Networks, perturbations, universal, generalizable attacks,
attacks on ML systems, data-free objectives, adversarial noise.
1 INTRODUCTION

Small but structured perturbations to the input, called
adversarial perturbations, are shown ( [1], [2], [3]) to
significantly affect the output of machine learning systems.
Neural network based models, despite their excellent performance, are observed ( [4], [5], [6]) to be vulnerable to
adversarial attacks. Particularly, Deep Convolutional Neural
Networks (CNN) based vision models ( [7], [8], [9], [10],
[11]) can be fooled by carefully crafted quasi-imperceptible
perturbations. Multiple hypotheses attempt to explain the
existence of adversarial samples, viz. linearity of the models [5], finite training data [12], etc. More importantly, the
adversarial perturbations generalize across multiple models. That is, the perturbations crafted for one model fools
another model even if the second model has a different
architecture or trained on a different subset of training data (
[4], [5]). This property of adversarial perturbations enables
potential intruders to launch attacks without the knowledge
about the target model under attack: an attack model typically known as black-box attacking [13]. In contrast, an
attack model where everything about the target model is
known to the attacker is called a white-box attacking. Until
recently, all the existing works assumed a threat model in
which the adversaries can directly feed input to the machine
learning system. However, Kurakin et al. [14] lately showed
that the adversarial samples can remain misclassified even
if they were constructed in physical world and observed
•
The authors are with Department of Computational and Data Sciences,
Indian Institute of Science, Bangalore, India, 560012.
E-mail: [email protected], [email protected] and
[email protected]
* denotes equal contribution
† https://github.com/val-iisc/gd-uap
through a sensor (e.g., camera). Given that the models are
vulnerable even in physical world scenario [14], the models’ susceptibility poses serious issues about their deployability (e.g., safety concerns for autonomous driving) in
the real world. Particularly, in case of critical applications
that involve safety and security, reliable models need to
be deployed to stand against the strong adversarial attacks.
Thus, the effect of these structured perturbations has to be
studied thoroughly in order to develop dependable machine
learning systems.
Recent work by Moosavi-Dezfooli et al. [8] presented the
existence of image-agnostic perturbations, called universal
adversarial perturbations (UAP) that can fool the state-of-the-art recognition models on most natural images. Their
method for crafting the UAPs is based on the DeepFool [7]
attacking method. It involves solving a complex optimization problem (eqn. 2) to design a perturbation. The UAP [8]
procedure utilizes a set of training images to iteratively
update the universal perturbation with an objective of
changing the predicted label upon addition, for most of the
dataset images. Similar to [8], Metzen et al. [11] proposed
UAP for semantic segmentation task. They extended the
iterative FGSM [5] attack by Kurakin et al. [14] to change
the label predicted at individual pixels and craft the perturbation. They craft the image-agnostic perturbations to
fool the system in order to predict a pre-determined target
segmentation output.
However, these approaches to craft UAPs ( [8], [11], [15])
have the following important drawbacks:
• Data dependency: It is observed that the objective presented by [8] to craft UAP for recognition models requires a minimum number of training samples for it to converge and craft an image-agnostic perturbation. Moreover, the fooling performance of the resulting perturbation is proportional to the available training data (Figure 4). Similarly, the objective for crafting image-agnostic perturbation for semantic segmentation models proposed by [11] also requires data. Therefore, as their optimization objectives involve data, existing procedures cannot craft perturbations when enough data is not provided.
• Weaker black-box performance: Since information about the target models is generally not available for attackers, it is practical to study the black-box attacks. Also, black-box attacks reveal the true susceptibility of the models, while white-box attacks provide the upper bound on the achievable fooling. However, the black-box attacking performance of UAP [8] is poor compared to their white-box performance (Table 4). Note that, in [8], authors have not analyzed the performance of their perturbations in the black-box attack scenario. They have assumed that the training data of the target models is known and have not considered the case in which the adversary has access to only a different set of data. This amounts to performing only semi white-box attacks. Black-box attacks generally mean [13] the adversary does not have access to (i) the target network architecture (including the parameters), and (ii) a large training dataset. In case of semantic segmentation, as [11] worked with targeted attacks, they observed that the perturbations do not generalize to other models very well.
• Task specificity: The current objectives to craft UAPs are task specific. The objectives are typically designed to suit the underlying task at hand as the concept of fooling varies across the tasks. Particularly, for regression tasks such as depth estimation and crowd counting, the idea of fooling is not straightforward as it is for the recognition task.
In order to address the above shortcomings and to better
analyze the stability of the models, we present a novel data-free objective to craft universal adversarial perturbations.
Our objective is to craft image-agnostic perturbations that
can fool the target model without any knowledge about the
data distribution, such as, the number of categories, type
of data (e.g., faces, objects, scenes, etc.) or the data samples
themselves. Since we do not have access to any data, instead
of an objective that reduces the confidence to the predicted
label or flip the predicted label (as in [4], [7], [8], [11]), we
propose an objective to learn perturbations that can adulterate the features extracted by the models. Our proposed
objective attempts to over-fire the neurons at multiple layers
in order to deteriorate the extracted features. During the
inference time, the added perturbations misfire the neuron
activations in order to contaminate the representations and
eventually predict wrong label.
This work extends our earlier conference paper [9]. We
make the following new contributions in this paper:
1) Propose a novel data-free objective for crafting image-agnostic perturbations.
2) We demonstrate that our objective is generalizable across multiple vision tasks.
3) Further, we show that apart from being a data-free objective, the proposed method can exploit minimal prior information about the training data distribution of the target models in order to craft stronger perturbations.
4) We present comprehensive analysis on the proposed objective and the crafted perturbations across three different vision tasks covering both classification and regression tasks.
The rest of this paper is organized as follows: section
2 presents detailed account of related works, section 3
discusses the proposed data-free objective to craft imageagnostic adversarial perturbations, section 4 demonstrates
the effectiveness of our perturbations via comprehensive
experimentation, section 5 hosts discussion on the proposed
method and finally section 6 concludes the paper.
2 RELATED WORKS
Szegedy et al. [4] demonstrated that despite their superior
recognition performance, neural networks are susceptible
to adversarial perturbations. Subsequently, multiple other
works [5], [6], [7], [16], [17], [18], [19] studied this interesting
and surprising property of the machine learning models.
Though it is first observed with recognition models, the
adversarial behaviour is noticed with models trained on
other tasks such as semantic segmentation [11], [20], object
detection [20] and deep reinforcement learning tasks [21],
etc. There exist multiple methods to craft these malicious
perturbations for a given data sample. For recognition tasks,
they range from performing simple gradient ascent ( [5])
on cost function to solving complex optimizations ( [4],
[7], [22]). Simple and fast methods such as FGSM [5] find
the gradient of loss function to determine the direction in
the image space to perturb the input image. An iterative
version of this attack presented in [14] achieves better fooling via performing the gradient ascent multiple times. On
the other hand, complex approaches such as [4], [7] find
minimal perturbation that can move the input across the
learned classification boundary in order to flip the predicted
label. More robust adversarial attacks have been proposed
recently that transfer to real world [14] and are invariant to
general image transformations [23].
Moreover, it is observed that the perturbations exhibit
transferability property. That means, perturbations crafted
for one model can fool other models with different architectures and trained on a disjoint training set as well ( [4], [5]).
Further, Papernot et al. [13] introduced a practical attacking
setup via model distillation to understand the black-box attacking. Black-box attacking assumes no information about
the target model and its training data. They proposed to use
a target model’s substitute to craft the perturbations.
The common underlying aspect of all these techniques is
that they are intrinsically data dependent. The perturbation
is crafted for a given data sample independently of others.
However, recent works by Moosavi-Dezfooli et al. [8] and
Metzen et al. [11] showed the existence of input-agnostic
perturbations that can fool the models over multiple images.
In [8], authors proposed an iterative procedure based on
the DeepFool [7] attacking method to craft a universal perturbation to fool classification models. Similarly, in [11], authors craft universal perturbations that can result in target segmentation outputs. However, both these works optimize for different task specific objectives. Also, they require training data to craft the image-agnostic perturbations. Unlike the existing works, the proposed method presents a data-free objective that can craft perturbations without the need for any data samples. Also, we introduce a generic notion of fooling across multiple computer vision tasks via over-firing the neuron activations. Particularly, our objective is generalizable across various vision models in spite of differences in terms of architectures, regularizers, underlying tasks, etc.

3 PROPOSED APPROACH
In this section, we discuss the proposed data-free objective
to craft UAPs in detail.
First, we introduce the notation followed throughout the
paper. X denotes the distribution of images in Rd . f denotes
the function learned by the CNN that maps an input image
x ∼ X to its output f (x). Note that the output is task
dependent, for example, it is a label for object recognition
and segmentation map for semantic segmentation. δ denotes
the image-agnostic perturbation learned by our objective.
Note that similar to input x, δ also belongs to Rd . Though
the proposed objective is task independent, for ease of
understanding we explain the proposed approach in the
context of object recognition.
3.1
Data-free objective for fooling
The objective of our paper is to craft an image-agnostic
perturbation δ ∈ Rd that fools the CNN f for images
from the target distribution X without utilizing any samples
from it. In other words, we seek a universal adversarial
perturbation δ that significantly alters the prediction of the
CNN (f ). That is, we synthesize a δ such that
f(x + δ) ≠ f(x),  for x ∼ X.        (1)
In order for the δ to be an adversarial perturbation, it has to be imperceptible when added to the images. Therefore the pixel intensities of δ are restricted by an imperceptibility constraint. Typically, it is realized as a max-norm constraint in terms of l∞ or l2 norms (e.g. [5], [8], [9], [11]). In this paper, for all our analysis we impose the max-norm constraint in terms of the l∞ norm. Thus, the aim is to find a δ such that

f(x + δ) ≠ f(x), for x ∈ X;   ‖δ‖_∞ < ξ.        (2)
However, the focus of the proposed work is to craft the
image-agnostic perturbations without requiring any data
samples (not only from the training dataset X on which
the target model f is learned). The data-free nature of our
approach prohibits us from optimizing for the first part
of eqn. 2 while learning δ . That is, our approach does
not contain the data term x in the proposed objective.
Therefore, we propose to fool the CNN by contaminating
the extracted representations of the input at multiple layers
of the architecture. In other words, as opposed to the typical
“flipping the label” objective, we attempt to over-fire the
features extracted at multiple layers. That is, we craft a
perturbation δ such that it leads to additional activation
firing at each layer and thereby misleading the features
(filters) at the following layer. The accumulated effect of the
contamination eventually leads the CNN to misclassify the
input.
The perturbation essentially causes filters at a particular layer to spuriously fire and abstract out inefficacious
information. Note that in the presence of data (during
attack), in order to mislead the activations from retaining
useful discriminative information, the perturbation (δ) has
to be highly effective. Also, the imperceptibility constraint
(second part of eqn. 2) on δ makes it more challenging.
Hence without utilizing any data (x), we seek for an
image-agnostic perturbation δ that can produce maximal
spurious activations at each layer of a given CNN. In order
to craft such a δ we start with a random perturbation and
optimize for the following objective
Loss = − log ( ∏_{i=1}^{K} ‖l_i(δ)‖_2 ),   such that ‖δ‖_∞ < ξ.        (3)
where li (δ) is the activation in the output tensor at layer
i when δ is fed to the network f . Note that the activations
are considered after the non-linearity (typically ReLU). K is
the total number of layers in f at which we maximize the
activations caused by perturbation δ , and ξ is the max-norm
limit on δ .
The proposed objective computes the product of activation magnitudes at all the individual layers in order to simultaneously maximize the interference at all layers. We observed that the product results in a stronger δ than other forms of combining (e.g. sum) the individual layer activations. This
activations to rise in order for the loss to reduce. To avoid
working with extreme values (≈ 0), we apply log on the
product. Note that the objective is open-ended as there is no
optimum value to reach. We would ideally want δ to cause
as much strong disturbance at all the layers as possible,
while respecting the imperceptibility constraint.
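A minimal sketch of this objective and a single optimization step is given below, written with TensorFlow for illustration. The feature extractor, the chosen layers and the hyper-parameters are placeholders; the released implementation may differ.

import tensorflow as tf

def data_free_loss(activations):
    # Eqn. (3): negative log of the product of l2 norms of the chosen layer
    # activations, computed as a sum of logs for numerical stability.
    return -tf.add_n([tf.math.log(tf.norm(a) + 1e-12) for a in activations])

# hypothetical feature extractor exposing the activations of a few conv layers
base = tf.keras.applications.VGG16(weights=None, include_top=False)
layer_outputs = [base.get_layer(name).output
                 for name in ('block1_conv2', 'block3_conv3', 'block5_conv3')]
extractor = tf.keras.Model(base.input, layer_outputs)

xi = 10.0                                            # max-norm limit on the perturbation
delta = tf.Variable(tf.random.uniform((1, 224, 224, 3), -xi, xi))
opt = tf.keras.optimizers.Adam(learning_rate=0.1)

with tf.GradientTape() as tape:
    loss = data_free_loss(extractor(delta))          # no data term: only delta is fed
grads = tape.gradient(loss, [delta])
opt.apply_gradients(zip(grads, [delta]))
delta.assign(tf.clip_by_value(delta, -xi, xi))       # project back onto the l-infinity ball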
3.2
Implementation Details
We begin with a target network f which is a trained CNN
whose parameters are frozen and a random perturbation
δ . We then perform the proposed optimization to update
δ for causing strong activations at multiple layers in the
given network. We typically consider all the convolution
(conv) layers before the fully connected (f c) layers. This is
because, the conv layers are generally considered to learn
required features to extract information over which a series
of f c layers perform classification. Also, we empirically
found that it is efficient to optimize at conv layers. Therefore,
we restrict the optimization to feature extraction layers. In
case of advanced architectures such as GoogLeNet [24] and
ResNet [25], we optimize at the last layers of all the inception blocks (or residual blocks) and the independent conv
layers. We observed that optimizing at these layers results
in δ with a fooling capacity similar to the one resulting from
optimizing at all layers (including the conv layers within
the inception/residual blocks). However, since optimizing
at only the last layers of inception/residual blocks along
with independent conv layers is slightly more efficient, we
perform the same.
Note that the optimization updates only the perturbation
δ , not the network parameters. Additionally, no image data
is involved in the optimization process. We update δ with
the gradients computed for loss in eqn. (3) iteratively till
the fooling performance of the learned δ gets saturated on
a set of validation images. In order to validate the fooling
performance of the learned δ , we compose an unrelated
substitute dataset (D). Since our objective is not to utilize
data samples from the training dataset of the target models,
we randomly select 1000 images from a substitute dataset
to serve as validation images. Note that, it is a reasonable
assumption for an attacker to have access to 1000 unrelated images. For crafting perturbations to object recognition
models trained on ILSVRC dataset [26], we choose samples
from Pascal VOC-2012 [27] dataset. Similarly, for semantic
segmentation models trained on Pascal VOC [27], [28], we
choose validation samples from ILSVRC [26], for depth
estimation models trained on KITTI dataset [29] we choose
samples from Places-205 [30] dataset.
3.3 Exploiting additional priors
Though the proposed method is a data-free optimization for crafting image-agnostic perturbations, it can exploit simple additional priors about the data distribution X. In this section we demonstrate how our method can utilize simple priors such as (i) mean value and dynamic range of the input, and (ii) target data samples.

3.3.1 Mean and dynamic range of the input
Note that the proposed optimization (eqn. (3)) does not consider any information about X. We present only the norm limited δ as input and maximize the resulting activations. That is, during the optimization, input to the target CNN has a dynamic range of [−10, 10]. However, during the inference time, input lies in the [0, 255] range. Therefore, it becomes very challenging to learn perturbations that can affect the neuron activations in the presence of a strong (approximately, an order higher) input signal x. Hence, in order to make the learning easier, we provide this useful information about the data (x ∈ X), and let the optimization better explore the space of perturbations. Thus, we slightly modify our objective to craft δ relative to the dynamic range of the data. We create pseudo data d via randomly sampling from a Gaussian distribution whose mean (µ) is equal to the mean of the training data and variance (σ) that covers 99.9% of the density to lie in [0, 255], the dynamic range of the input. Essentially, we solve for the following

Loss = − log ( ∏_{i=1}^{K} ‖l_i(d + δ)‖_2 ),   such that ‖δ‖_∞ < ξ, and d ∼ N(µ, σ).        (4)

Essentially, we operate the proposed optimization in a closer subspace to the target data distribution X. In other words, d in eqn. (4) acts as a placeholder for the actual data and helps to learn perturbations which can over-fire the neuron activations in the presence of the actual data. For each iteration of the optimization process, we sample a new d from the normal distribution and feed it through the target CNN. We also perform typical augmentations such as flipping, cropping and slight rotation, etc. on the sampled d to construct variations.

3.3.2 Target data samples
In this subsection, we formulate our data-free objective to utilize samples from the target distribution X and benefit to improve the fooling ability of the crafted perturbations. Note that in the case of data availability, we can design direct objectives such as reducing confidence for the predicted label or changing the predicted label, etc. However, we investigate if our data-free objective of over-firing the activations, though it is not designed to utilize data, crafts better perturbations when data is presented to the optimization. Additionally, our objective does not utilize data to manipulate the predicted confidences or labels. Rather, the optimization benefits from prior information about the data distribution such as the dynamic range, local patterns, etc., which can be provided through the actual data samples. Therefore, with minimal data samples we solve for the following optimization problem

Loss = − log ( ∏_{i=1}^{K} ‖l_i(x + δ)‖_2 ),   such that ‖δ‖_∞ < ξ, and x ∼ X.        (5)

Note that presenting data samples to the optimization procedure is a natural extension to presenting the dynamic range of the target data alone (section 3.3.1). In this case, we utilize a subset of training images on which the target CNN models are trained (similar to [8], [11]).
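For the range prior of section 3.3.1, the pseudo data d can be generated along the following lines. This is only a sketch: the mean value, image size and augmentations are placeholders and not the exact settings used in our experiments.

import numpy as np

def sample_pseudo_data(batch_size, shape=(224, 224, 3), mean=120.0):
    # Draw d ~ N(mu, sigma) such that roughly 99.9% of the density lies in
    # the dynamic range [0, 255] of real images (about 3.3 sigma per side).
    sigma = min(mean, 255.0 - mean) / 3.3
    d = np.random.normal(loc=mean, scale=sigma, size=(batch_size,) + shape)
    d = np.clip(d, 0.0, 255.0)
    # simple augmentation to construct variations of the sampled d
    if np.random.rand() < 0.5:
        d = d[:, :, ::-1, :]          # horizontal flip
    return d.astype(np.float32)

batch = sample_pseudo_data(batch_size=8)   # fed to the target CNN together with delta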
3.4
Improved Optimization
In this subsection, we present the improvements we propose
to our earlier optimization presented in our conference paper [9]. The proposed objective in [9] is observed to quickly
accumulate δ beyond the imposed max-norm constraint (ξ).
Because of the clipping performed after each iteration, the
updates will be futile after δ reaching the constraint. In
order to tackle this saturation, δ is re-scaled to half of its
dynamic range (i.e. [−5, 5]) in regular time intervals of
300 iterations. Though this re-scaling helps to learn better
δ , it is inefficient since it performs blind re-scaling without
verifying the scope for updating the δ . For example, as the
learning progresses, magnitude of updates decreases and
during the interval of 300 iterations, the values of δ might
not reach the extreme values of ±10. Projecting the δ by
re-scaling can badly affect the learning.
Therefore, we propose an adaptive re-scaling of δ based
on the rate of saturation (reaching the extreme values of
±10) in its pixel values. During the optimization, at each
iteration we compute the proportion (p) of the pixels in
δ that reached the max-norm limit ξ. As the learning progresses, since our objective is open ended, more pixels reach the max-norm limit and because of the clipping
eventually get saturated at ξ . Hence, the rate of increase in p
decreases as δ saturates. We compute the rate of saturation,
denoted as S , of the pixels in δ after each iteration during
the training. For consecutive iterations, if increase in p
is not significant (less than a pre-defined threshold T h),
we perform a re-scaling to half the dynamic range. Note
that the proposed criterion for re-scaling is similar to the
typical usage of validation performance to stop training. We
observe that the proposed adaptive re-scaling consistently
leads to better learning.
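A simplified sketch of this adaptive criterion is given below; the bookkeeping of the saturation rate is reduced to a single comparison for clarity.

import numpy as np

def maybe_rescale(delta, prev_saturation, xi=10.0, threshold=1e-5):
    # Rescale delta to half its dynamic range only when the fraction of
    # saturated pixels (|delta| == xi) has effectively stopped growing.
    p = np.mean(np.abs(delta) >= xi)          # proportion of saturated pixels
    rate = p - prev_saturation                # rate of saturation S
    if rate < threshold:
        delta = delta / 2.0                   # adaptive re-scaling instead of blind periodic halving
    return delta, p

delta = np.random.uniform(-10.0, 10.0, size=(224, 224, 3))
delta, saturation = maybe_rescale(delta, prev_saturation=0.0)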
3.5
Algorithmic summarization
In this subsection, for the sake of brevity we summarize the
proposed data-free objective in the form of an algorithm.
Algorithm 1 presents the proposed optimization as a series
of steps. Note that it is a generic form comprising of all
the three variations including both data-free and with prior
versions.
For ease of reference, we repeat some of the notation. Ft
is the fooling rate at iteration t, li (x) is the activation caused
at layer i of the CNN f for an input x, η is the learning rate
used for training, ∆ is the gradient of the loss with respect
to the input δ , St is the rate of saturation of pixels in the
perturbation δ at iteration t, T h is the threshold on the rate
of saturation, and H is the patience interval
of validation for verifying the convergence of the proposed
optimization.
Algorithm 1: Algorithm summarizing our approach to craft image-agnostic adversarial perturbations via data-free objective and exploiting various data priors.

Data: Target CNN f, data g. Note that g = 0 for the data-free case, g = d ∼ N(µ, σ) for the range prior case, and g = x for the training data samples case.
Result: Image-agnostic adversarial perturbation δ.
1: Randomly initialize δ0 ∼ U[−ξ, ξ]
2: t = 0
3: Ft = 0
4: do
5:    t ← t + 1
6:    Compute li(g + δ)
7:    Compute the loss = − log ( ∏_{i=1}^{K} ‖li(g + δ)‖_2 )
8:    Update¹ δt: δt ← δt−1 − η∆
9:    Compute the rate of saturation St in the δt pixels
10:   if St < Th then
11:      δt ← δt/2
12:   end
13:   Compute Ft of δt on substitute dataset D
14: while Ft < min. of {Ft−H, Ft−H+1, . . . , Ft−1};
15: i ← argmax of {Ft−H, Ft−H+1, . . . , Ft−1}
16: Return δi

¹ Note that this generic update equation is only for understanding purposes and not the exact equation implemented.

3.6 Generalized Fooling Rate (GFR) with respect to a metric
While the notion of 'fooling' has been well defined for the task of image recognition, for other tasks, how to measure the 'fooling' is unclear. Hence, in order to provide an interpretable metric to measure 'fooling', we introduce the Generalized Fooling Rate (GFR), making it independent of the task and dependent on the metric being used for evaluating the model's performance.
Consider the task of image recognition. For this task, the fooling rate for any perturbation is defined as the % of data samples for which the labels are changed due to the perturbation, i.e., the % of data samples for which the output label before and after the adversarial attack are different. If the fooling rate is x%, then for (100 − x)% of the data samples, the label before and after the adversarial attack remains the same. If we consider the labels of clean samples as ground truth, and labels of perturbed samples as predicted labels, (100 − x)% is the same as the Top-1 accuracy of the model. This leads us to introduce the following definition for the Generalized Fooling Rate.
Let M be a metric for measuring the performance of a model for any task, where the range of M is [0, R]. Let the metric take two inputs ŷ and y, where ŷ is the predicted output and y is the ground truth output, such that the performance of the model is measured as M(ŷ, y). Let ŷδ be the output of the model when the input is perturbed with a perturbation δ. Then, the Generalized Fooling Rate with respect to measure M is defined as:

GFR(M) = (R − M(ŷδ, ŷ)) / R.        (6)

This definition of the Generalized Fooling Rate (GFR) has the following benefits:
• Fooling rate should be a measure of the change in the model's output caused by the perturbation. Being independent of the ground truth y, and dependent only on ŷδ and ŷ, GFR primarily measures the change in the output. Other methods such as the metric value after the attack may not properly capture this. A poorly performing model which however is very robust to adversarial attacks will show very poor GFR values, highlighting its robustness. For example, GFR will be equal to 0% if there is no change in the output, and in extreme cases it can reach 100%, for a given task. Hence it can be compared across tasks.
• GFR measures the performance of a perturbation in terms of the damage caused to a model with respect to a metric. This is an important feature, as tasks such as depth estimation have multiple performance measures, where some perturbation might cause harm only to some of the metrics while leaving other metrics unaffected.
For all the tasks considered in this work, we report GFR with respect to a metric, as a measure of 'fooling'.
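In code, the definition of eqn. (6) reduces to a few lines. The metric used below (agreement between clean and perturbed top-1 predictions, with R = 1) is only one possible instantiation.

import numpy as np

def generalized_fooling_rate(metric, y_hat, y_hat_delta, metric_range):
    # GFR(M) = (R - M(y_hat_delta, y_hat)) / R, eqn. (6)
    return (metric_range - metric(y_hat_delta, y_hat)) / metric_range

def top1_agreement(pred_a, pred_b):
    # fraction of samples whose predicted label is unchanged (range [0, 1])
    return float(np.mean(np.asarray(pred_a) == np.asarray(pred_b)))

clean_preds = np.array([3, 7, 7, 1, 0])        # hypothetical labels on clean inputs
perturbed_preds = np.array([3, 2, 5, 1, 9])    # labels after adding the perturbation
gfr = generalized_fooling_rate(top1_agreement, clean_preds, perturbed_preds, metric_range=1.0)
print(gfr)   # 0.6: three of five predictions changed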
4 EXPERIMENTS
In this section, we present the experimental evaluation to
demonstrate the effectiveness of the proposed data-free
objective. We consider three different vision tasks to demonstrate the generalizability of our objective, namely, object recognition, semantic segmentation and unsupervised
monocular depth estimation. Note that the set of applications include both classification and regression tasks. Also,
it has both supervised and unsupervised learning setups.
TABLE 1
Fooling rates for the proposed data-free objective learned for object recognition on ILSVRC dataset [26]. Each row of the table shows the fooling rates for the perturbation learned on a specific target model when attacking various other models (columns). These rates are obtained by the proposed objective without utilizing the data samples but only the range prior (sec. 3.3.1). Diagonal rates indicate the white-box attack scenario and off-diagonal ones represent the black-box attack scenario. Note that the highest fooling rate when attacking a given model (column) is shown in bold.

Model        CaffeNet   VGG-F   GoogLeNet   VGG-16   VGG-19   Resnet-152
CaffeNet     87.02      65.97   49.40       50.46    49.92    38.57
VGG-F        59.89      91.91   52.24       51.65    50.63    40.72
GoogLeNet    44.70      46.09   71.44       37.95    37.90    34.56
VGG-16       50.05      55.66   46.59       63.08    56.04    36.84
VGG-19       49.11      53.45   40.90       55.73    64.67    35.81
Resnet-152   38.41      37.20   33.22       27.76    26.52    37.3
Fig. 1. Universal adversarial perturbations crafted by the proposed data-free objective for multiple models trained on ILSVRC [26] dataset. Perturbations were crafted with ξ = 10 using the minimal prior, i.e. mean and dynamic range of the input samples (sec. 3.3.1). Corresponding target network architecture is mentioned below each perturbation. Images are best viewed in color.
We explain each of the tasks separately in the following
subsections.
For all the experiments, the ADAM [31] optimization
algorithm is used with the learning rate of 0.1. The threshold
T h for the rate of saturation S is set to 10−5 . Validation
fooling rate Ft is measured on the substitute dataset D after
every 200 iterations only when the threshold of the rate of
saturation is crossed. If it is not crossed, Ft is still measured
after every 400 iterations. Note that none of the algorithm
specific hyper-parameters are changed across tasks or across
prior scenarios.
4.1
Object recognition
We have worked with models trained on ILSVRC [26]
and Places-205 [30] datasets, viz. CaffeNet [32], VGG-F [33], GoogLeNet [24], VGG-16 [34], VGG-19 [34], ResNet-152 [25]. Since our approach does not involve training the models, for all the experiments we work with available pre-trained models. Also, unlike UAP [8], no training data is used in the data-free case (sec. 3.1 and sec. 3.3.1). However, as explained earlier, we use
1000 images randomly chosen from Pascal VOC-2012 [27]
train images as validation set (D in Algorithm 1) for our
optimization. Also, in case of exploiting additional data
prior (sec. 3.3.2), we use limited data from the corresponding training set. However, for evaluating the perturbations
learned for the target models trained on ILSVRC dataset,
50000 images from the validation set are used. Similarly,
Places-205 dataset contains 20500 validation images from
205 categories, over which the fooling rates are computed.
4.1.1 Fooling performance of the data-free objective
Table 1 presents the fooling rates achieved by our objective
on various network architectures. Fooling rate is the per-
centage of test images for which our crafted perturbation δ
successfully changed the predicted label. Higher the fooling
rate, greater is the perturbation’s ability to fool and lesser
is the the classifier’s robustness. Fooling rates in Table 1
are obtained using the mean and dynamic range prior of
the training distribution (sec. 3.3.1). Each row in the table
indicates one target model employed in the learning process
and the columns indicate various models attacked using the
learned perturbations. The diagonal fooling rates indicate
the white-box attacking, where all the information about the
model is known to the attacker. The off-diagonal rates
indicate black-box attacking, where no information about the
model under attack is revealed to the attacker. However,
note that the dataset over which both the models (target
CNN and the CNN under attack) are trained is same.
Our perturbations cause a mean white-box fooling rate of
69.24% and a mean black-box fooling rate of 45.13%. Note
that, given the data-free nature of the optimization, the
fooling rates are alarmingly significant. These high fooling
rates achieved by the proposed approach can adversely
affect the real-world deployment of these models.
Figure 1 shows example image-agnostic perturbations
(δ) crafted by the proposed method. Note that the perturbations look very different for each of the target CNNs. However, the perturbations corresponding to the VGG models
look slightly similar owing to their architectural similarity. Figure 3 shows sample perturbed images (x + δ) for
VGG-19 [34] from ILSVRC [26] validation set. The top row
shows the clean and bottom row shows the corresponding
adversarial images. Note that the adversarially perturbed
images are visually indistinguishable form their corresponding clean images. All the clean images shown in the figure
are correctly classified and are successfully fooled by the
added perturbation. Below each image, the corresponding
TABLE 2
Fooling rates achieved by the proposed objective with and without utilizing the prior information about the training distribution of the models. For comparison, we provide the random baseline (U[−ξ, ξ]), existing data-free [9] and data dependent [8] objectives. Note that the best fooling rate for a given model (row) is shown in bold.

Model        Baseline   No prior   Range prior   Data prior   FFF [9]   UAP [8]
CaffeNet     12.9       84.88      87.02         91.54        80.92     93.1
VGG-F        12.62      85.96      91.81         92.64        81.59     93.8
Googlenet    10.29      58.62      71.44         83.54        56.44     78.5
VGG-16       8.62       45.47      63.08         77.77        47.10     77.8
VGG-19       8.40       40.68      64.67         75.51        43.62     80.8
Resnet-152   8.99       29.78      37.3          66.68        -         84.0
Fig. 2. Sample universal adversarial perturbations crafted by the proposed method under multiple settings for a pair of models trained on ILSVRC [26] dataset. Perturbations were crafted with ξ = 10 for no data, minimal data prior and with data scenarios. First three perturbations are crafted for VGG-16 [34] and the later three for ResNet-152 [25]. Corresponding setting is mentioned below each perturbation. Images are best viewed in color.
label predicted by the model is shown. Note that the correct
labels are shown in black color and the wrong ones are
shown in red.
4.1.2
Exploiting the minimal prior
In this section, we present experimental results to demonstrate how our data-free objective can exploit the additional
prior information about the target data distribution as discussed in section 3.3. Note that we consider two cases: (i)
providing the mean and dynamic range of the data samples,
denoted as range prior (sec. 3.3.1), and (ii) utilizing minimal
data samples themselves during the optimization, denoted
as data prior (sec. 3.3.2).
Table 2 shows the fooling rates obtained with and without utilizing the prior information. Note that all the fooling
rates are computed for white-box attacking scenario. To
emphasize the effectiveness of the proposed objective, we
present fooling rates obtained by a random baseline perturbation. Since our learned perturbation is norm limited by ξ ,
we sample random δ from U[−ξ, ξ] and compute the fooling
rates. We denote these results in the ‘Baseline’ column of
Table 2. Also, for comparison, fooling rates obtained by our
data-free objective [9] and a data dependent objective [8] are
presented. Important observations to draw from the table
are listed below:
• Utilizing the prior information consistently improves the fooling ability of the crafted perturbations.
• A simple range prior can boost the fooling rates on an average by an absolute 10%.
• Given the proposed objective is not designed to utilize the data, feeding the data samples results in an absolute 22% rise in the fooling rates. For some of the models (e.g. GoogLeNet and VGG-16) our method performs equally (or even outperforms) to an objective [8] designed especially to utilize data and flip the predicted labels.
• Our improved objective (eqn. 3) and the optimization procedure (3.4) result in an absolute increase of fooling rate by 22.36% in the case of utilizing data.

TABLE 3
Comparison of data-free objectives. Fooling rates achieved by the proposed objective of maximizing the l2 norm of the activations caused by δ versus the mean activation [9] when utilizing data samples. Note that the proposed objective consistently outperforms the previous objective across multiple architectures (shown in bold).

Model        Mean activation [9]   l2 norm
CaffeNet     88.35                 91.54
VGG-16       72.68                 77.77
Resnet-152   65.43                 66.68

4.1.3 Comparison of the data-free objectives
In this subsection we compare the effectiveness of the proposed objective against the existing data-free objective proposed in our conference publication [9]. That is, we compare maximizing the mean versus l2 norm (energy) of the activations caused by the perturbation δ (or x/d + δ in case of exploiting the additional priors).
Table 3 shows the comparison of fooling rates obtained with both the objectives (separately) in the improved optimization setup (3.4). We have chosen 3 representative models across various generations of architectures (CaffeNet, VGG and ResNet) to compare the effectiveness of the proposed objective. Note that the improved objective consistently outperforms the previous one by a significant 3.18%. Similar behaviour is observed for other vision tasks also.
Fig. 3. Sample original and adversarial image pairs from ILSVRC validation set generated for VGG-19. First row shows original images and corresponding predicted labels, second row shows the corresponding perturbed images along with their predictions. Note that all of the shown perturbed images were misclassified.
TABLE 4
Effect of data dependency on crafting the perturbations. Data dependent objectives [8] suffer significant drop in fooling ability when arbitrary data samples are utilized for crafting image-agnostic perturbations. Note that A → B denotes that data A is used to craft perturbations to fool models trained on data B. Also note that fooling rates for our approach are crafted without utilizing any data samples (denoted with ∗).

             Places-205 → ILSVRC     ILSVRC → Places-205
Model        Ours      UAP [8]       Ours      UAP [8]
CaffeNet     87.02*    73.09         88.61*    77.21
GoogLenet    71.44*    28.17         83.37*    52.53

4.1.4 Data dependent vs. Data-free objectives
In this subsection we demonstrate the necessity of data
dependent objective [8] to have samples from the target
distribution only. That is, methods (such as [8]) that craft
perturbations with fooling objective (i.e. move samples
across the classification boundaries) require samples from
only the training data distribution during the optimization.
We show that crafting with arbitrary data samples leads to
significantly inferior fooling performance.
Table 4 shows the fooling rates of data dependent objective [8] when arbitrary data samples are utilized in place
of target samples. Experiment in which we use Places-205
data to craft perturbations for models trained on ILSVRC is
denoted as Places-205 → ILSVRC and vice versa. For both
the setups, a set of 10000 training images are used. Note
that, the rates for the proposed method are obtained without
utilizing any data and rates for data-free scenario can be
found in Table 2. Clearly the fooling rates for UAP [8] suffer
significantly, as their perturbations are strongly tied to the
target data. On the other hand, for the proposed method,
since it does not craft via optimizing a fooling objective, the
fooling performance does not decrease. Importantly, these
experiments show that the data dependent objectives are not
effective when samples from the target distribution are not
available. In most of the practical scenarios it is difficult to
procure the same training data. Also, as the data dependent
objectives rely on the available training data, the ability
of the crafted perturbations heavily depends on the size
of the data utilized in the crafting process. We show that
the fooling performance of UAP [8] monotonically increases
with the size of the available samples for optimization.
Figure 4 shows the fooling rates obtained by the perturbations crafted for multiple recognition models trained on
ILSVRC by UAP [8] with varying size of samples available
for optimization. We craft different UAP [8] perturbations
(using the codes provided by the authors) utilizing only 500,
1000, 2000, 4000 and 10000 data samples and evaluate their
ability to fool various models. Note that the performance
of the crafted perturbations increases drastically (shown in
different shades of blue) with the available data samples
during the optimization. For comparison, the fooling rates
obtained by the proposed data-free objective are shown in green.
Fig. 4. Reliance of the data dependent objective UAP [8] on the size of available training data samples. Fooling rate of the crafted perturbations monotonically increases with the number of training data samples available during the optimization. Note that our approach utilizes no data samples and achieves competitive fooling performance.
TABLE 5
Comparison of mean IOU obtained by various models against the proposed perturbations. Comparison with image specific adversaries [20] is also presented. Note that ∗ denotes being image-specific and † denotes a transfer attack (black-box attacking). Note that being image specific, [20] outperforms our perturbations, however, even our no data perturbations cause more drop in mIOU than their transfer perturbations.

Model         Original   Baseline   No Data   Range prior   All data   Data w/ less BG   ICCV 2017 [20]
FCN-Alex      46.21      45.72      15.35     10.37         10.64      8.03              3.98*
FCN-8s-VGG    65.49      64.34      42.78     39.08         33.61      28.05             4.02*
DL-VGG        62.10      61.13      36.91     35.41         44.90      27.41             43.96*†
DL-RN101      74.94      73.42      56.40     58.66         37.45      39.00             73.01*†
TABLE 6
Generalized fooling rates achieved by the perturbations crafted by the proposed approach under various settings. Note that for comparison, fooling rates achieved by random perturbations are also presented.

Model         Baseline   No Data   Range Prior   All Data   Data w/ less BG
FCN-Alex      14.29      80.15     86.57         85.96      89.61
FCN-8s-VGG    9.24       49.42     55.04         61.15      67.19
DL-VGG        10.66      55.90     58.96         44.82      66.68
DL-RN101      8.8        37.06     35.6          58.62      56.61

4.2 Semantic segmentation
In this subsection we demonstrate the effectiveness of our data-free objective to craft universal adversarial perturbations for semantic segmentation. We consider four network architectures. The first two architectures are from FCN [35]: FCN-Alex, based on Alexnet [32], and FCN-8s-VGG, based on the 16-layer VGGNet [34]. The last two architectures are the 16-layer VGGNet based DL-VGG [36], and DL-RN101 [37], which is a multi-scale architecture based on ResNet-101 [25]. The FCN architectures are trained on the Pascal VOC-2011 dataset [28], [38], consisting of 9610 training samples, and the remaining two architectures are trained on the Pascal VOC-2012 dataset [27], [38], consisting of 10582 training samples. However, for testing our perturbations' performance, we only use the validation set provided by [35], which consists of 736 images.
Semantic segmentation amounts to assigning a label to each image pixel. That is, these models are typically trained to perform pixel-level classification into one of 21 categories (including the background) using the cross-entropy loss. Performance is commonly measured in terms of mean IOU (intersection over union) computed between the predicted map and the ground truth. Extending the UAP generation framework provided in [8] to segmentation is a non-trivial task. However, our generalizable data-free algorithm can be applied to the task of semantic segmentation without any changes.
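As a concrete reference for this evaluation protocol, the sketch below computes mean IOU between a predicted label map and the ground truth over the 21 Pascal VOC categories; details such as ignore labels are omitted and are our simplification, not part of the benchmark code.

import numpy as np

def mean_iou(pred, gt, num_classes=21):
    """Mean intersection-over-union between integer label maps of equal shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                 # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))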
Similar to the recognition setup, we present multiple scenarios for crafting the perturbations, ranging from no data to utilizing data samples from the target distribution. An interesting observation with respect to the data samples from Pascal VOC-2012 is that, in the 10582 training samples, 65.4% of the pixels belong to the 'background' category. Due to this, when we craft a perturbation using the training data samples as the target distribution prior, our optimization process encounters roughly 65% of pixels belonging to the 'background' category and only 35% of pixels belonging to the remaining 20 categories. As a result of this data imbalance, the perturbation is not sufficiently capable of corrupting the features of pixels belonging to categories other than 'background'. To handle this issue, we curate a smaller set of 2833 training samples from Pascal VOC-2012, where each sample has less than 50% of its pixels belonging to the 'background' category. We denote this as "data with less BG"; only 33.5% of the pixels in this dataset belong to the 'background' category. The perturbations crafted using this dataset as the target distribution prior show a higher capability to corrupt the features of pixels belonging to the remaining 20 categories. Since mean IOU is the average of the IOU for each of the 21 categories, we further observe that perturbations crafted using "data with less BG" cause a substantial reduction in the mean IOU measure as well.
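A minimal sketch of the curation step described above is given below; it assumes the ground-truth label maps are available as integer arrays with background index 0, and it ignores details such as the 255 'ignore' label, which are our simplifications.

import numpy as np

BACKGROUND = 0  # Pascal VOC background index

def less_bg_indices(label_maps, max_bg_fraction=0.5):
    """Indices of samples with less than `max_bg_fraction` background pixels,
    i.e. the "data with less BG" subset."""
    return [i for i, lbl in enumerate(label_maps)
            if np.mean(np.asarray(lbl) == BACKGROUND) < max_bg_fraction]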
Table 6 shows the generalized fooling rates with respect to the mean IOU obtained by the proposed objective under various data priors. As explained in section 3.6, the generalized fooling rate measures the change in the performance of a network with respect to a given metric, which in our case is the mean IOU. Note that, similar to the recognition case, the fooling performance monotonically increases with the addition of data priors. This observation emphasizes that the proposed objective, though indirect and non-fooling in nature, can effectively exploit additional prior information about the training data distribution. Also, across models, the data with less background scenario generally results in the best fooling rate. This can be attributed to the fact that in the data with less background scenario we lessen the data imbalance, which leads to perturbations that can fool both background and object pixels. Also, for comparison, we present the fooling rates achieved by the random perturbation sampled from U[−ξ, ξ]. Note that, as in the recognition experiments, for all the segmentation experiments we use a ξ value of 10.
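For reference, the random baseline used in these tables is a single perturbation drawn from U[−ξ, ξ] with ξ = 10 and added to every test image before recomputing the metrics; a sketch is given below, with the image resolution chosen arbitrarily for illustration.

import numpy as np

xi = 10  # max-norm budget used in all segmentation experiments
rng = np.random.default_rng(0)
baseline_delta = rng.uniform(-xi, xi, size=(512, 512, 3)).astype(np.float32)
# baseline_delta is added to every image before re-evaluating mIOU / fooling rates.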
In Table 5 we present the mean IOU obtained on images perturbed under the various scenarios, along with the original mean IOU obtained on clean images. A comparison with the baseline random noise perturbation sampled from U[−10, 10] is also provided. It is clearly observed that the random perturbation is not effective in fooling the segmentation models. However, the proposed objective crafts perturbations within the same range that significantly fool the models. We also show the mean IOU obtained by Xie et al. [20], an image-specific adversarial perturbation crafting work. Note that since the proposed method is an image-agnostic approach, it is unfair to expect performance similar to that of [20].
Fig. 5. Universal adversarial perturbations for semantic segmentation, crafted by the proposed data-free objective for multiple models (FCN-Alex, FCN-8s-VGG, DL-VGG, and DL-RN101). Perturbations were crafted with ξ = 10 using the less BG prior. The corresponding target network architecture is mentioned below each perturbation. Images are best viewed in color.
Fig. 6. Sample universal adversarial perturbations for semantic segmentation, crafted by the proposed method under multiple settings (no prior, range prior, data with less BG prior, and all data prior) for FCN-8s-VGG. Perturbations were crafted with ξ = 10. The corresponding scenario is mentioned below each perturbation. Images are best viewed in color.
Fig. 7. Sample original and adversarial images from the PASCAL VOC-2011 dataset generated for FCN-Alex (clean, no prior, range prior, less BG prior, and all data prior). The first row shows clean and adversarial images with various priors; the second row shows the corresponding predicted segmentation maps.
Further, the mean IOU shown by [20] for the DL-VGG and DL-RN101 models (bottom two rows of Table 5) denotes transfer performance, i.e., black-box attacking, and hence the smaller drop of the mean IOU from that of clean images. However, they are provided as a means of weak comparison instead of no comparison.
Finally, we end this section by providing some qualitative results. Figures 5 and 6 show sample image-agnostic adversarial perturbations learned by our objective for semantic segmentation. In Figure 5 we show the perturbations learned with the "data with less BG" prior for all the models. Similar to the recognition case, these perturbations look different across architectures. In Figure 6, we show the perturbations learned for the FCN-8s-VGG model under various scenarios ranging from no prior to the full data prior. Note that, even for a given network, the perturbations learned with various priors look different. Figure 7 shows example images and the segmentation outputs predicted by the FCN-Alex model for perturbations crafted with various priors. The top row shows the clean and perturbed images, and the bottom row shows the predictions for the corresponding inputs. Note that the type of prior utilized to craft the perturbation is mentioned below the predictions. The crafted perturbations are clearly successful in misleading the model into predicting inaccurate segmentation maps. Note that the color map shown below the predictions provides the labels.
Figure 8 shows the effect of the perturbation on multiple networks. It shows the output maps predicted by various models for the same input perturbed with the corresponding δ learned with the "data with less BG" prior. It is interesting to note from Figure 8 that, for the same image, with UAPs crafted using the same prior, different networks can produce very different outputs, even if their outputs for clean images are very similar.
Fig. 8. Segmentation predictions of multiple models (FCN-Alexnet, FCN8S-VGG16, ResNet-MSC, and VGG-16) over a sample perturbed image. Perturbations were crafted with ξ = 10 using the "data with less BG" prior. The first row shows the perturbed input image, the second shows the segmentation output of the clean sample image, and the third shows the segmentation output of the perturbed sample image. The corresponding target network architecture is mentioned below each column. Images are best viewed in color.
4.3 Depth estimation
Recent works such as [39], [40], [41] show an increase in the use of convolutional networks for regression-based computer vision tasks. A natural question to ask is whether they are as susceptible to universal adversarial attacks as CNNs used for classification. In this section, by crafting UAPs for convolutional networks performing regression, we show that they are equally susceptible to universal adversarial attacks. To the best of our knowledge, we are the first to provide an algorithm for crafting universal adversarial attacks for convolutional networks performing regression.
Many recent works, such as [39], [42], [43], perform depth estimation using convolutional networks. In [39], the authors introduce Monodepth, an encoder-decoder architecture which regresses the depth of a given monocular input image. We craft UAPs using the proposed method for the two variants of Monodepth, Monodepth-VGG and Monodepth-Resnet50. The network is trained on the KITTI dataset [29]. In its raw form, the dataset contains 42,382 rectified stereo pairs from 61 scenes, with a typical image being 1242×375 pixels in size. We show results on the Eigen split, introduced in [44], which consists of 23488 images for training and validation, and 697 images for test. We use the same crop size as suggested by the authors of [44] and evaluate at the input image resolution.
As in the case of object recognition, UAPs crafted by the proposed method for Monodepth also show the potential to exploit priors about the data distribution. We consider three cases: (i) providing no priors, (ii) range prior, and (iii) data prior. For providing data priors, we randomly pick 10000 image samples from the KITTI train dataset. To attain complete independence from the target data in the training procedure, while crafting the perturbation we perform validation on a set of 1000 randomly picked images from the Places-205 dataset. The optimization procedure followed is the same as in the case of the previous two tasks.
Tables 7 and 8 show the performance of Monodepth-Resnet50 and Monodepth-VGG, respectively, in the presence of the various UAPs crafted by the proposed method. As can be observed from the tables, the crafted UAPs have a strong impact on the performance of the networks.
Fig. 9. Sample original and adversarial image pairs from KITTI dataset generated for Monodepth-VGG. First row shows clean and perturbed
images with various priors. Second row shows the corresponding predicted depth maps.
TABLE 7
Performance of the crafted perturbations for Monodepth-Resnet50 using various metrics for evaluating depth estimation on the Eigen test split of the KITTI dataset. Results are presented for various scenarios such as clean data (Normal), perturbations learned with no prior, various priors, and the train set mean. Note that the best fooling results for each scenario are shown in bold. The evaluation of train set mean performance has been taken from [45]. Note that the first four metrics are error based (higher means better fooling, indicated as ↑) and the latter three are precision based (lower is better fooling, indicated as ↓).

Perturbation   | Abs Rel (↑) | Sq Rel (↑) | RMSE (↑) | RMSE log (↑) | δ < 1.25 (↓) | δ < 1.25² (↓) | δ < 1.25³ (↓)
Normal         | 0.133       | 1.148      | 5.549    | 0.230        | 0.829        | 0.935         | 0.970
Baseline       | 0.1339      | 1.1591     | 5.576    | 0.231        | 0.827        | 0.934         | 0.969
No prior       | 0.201       | 1.810      | 6.603    | 0.352        | 0.688        | 0.840         | 0.908
Range prior    | 0.319       | 3.292      | 9.064    | 0.640        | 0.460        | 0.611         | 0.717
Data prior     | 0.380       | 10.278     | 10.976   | 0.402        | 0.708        | 0.842         | 0.900
Train set mean | 0.361       | 4.826      | 8.102    | 0.377        | 0.638        | 0.804         | 0.894
TABLE 8
Performance of the crafted perturbations for Monodepth-VGG using various metrics for evaluating depth estimation on the Eigen test split of the KITTI dataset. Results are presented for various scenarios such as clean data (Normal), perturbations learned with no prior, various priors, and the train set mean. Note that the best fooling results for each scenario are shown in bold. The evaluation of train set mean performance has been taken from [45]. Note that the first four metrics are error based (higher means better fooling, indicated as ↑) and the latter three are precision based (lower is better fooling, indicated as ↓).

Perturbation   | Abs Rel (↑) | Sq Rel (↑) | RMSE (↑) | RMSE log (↑) | δ < 1.25 (↓) | δ < 1.25² (↓) | δ < 1.25³ (↓)
Normal         | 0.148       | 1.344      | 5.927    | 0.247        | 0.803        | 0.922         | 0.964
Baseline       | 0.149       | 1.353      | 5.949    | 0.248        | 0.800        | 0.920         | 0.963
No prior       | 0.192       | 1.802      | 6.626    | 0.325        | 0.704        | 0.861         | 0.929
Range prior    | 0.212       | 2.073      | 6.994    | 0.364        | 0.658        | 0.825         | 0.906
Data prior     | 0.355       | 9.612      | 10.592   | 0.390        | 0.714        | 0.850         | 0.908
Train set mean | 0.361       | 4.826      | 8.102    | 0.377        | 0.638        | 0.804         | 0.894
For both variants of Monodepth, UAPs crafted with the range prior bring down the accuracy at the 1.25 threshold (δ < 1.25) by 25.7% on average. With data priors, the crafted UAPs are able to increase the Sq Rel (an error metric) to almost 10 times its original value. Under the impact of the crafted UAPs, the network's performance drops below that of the baseline, which uses the train set mean as the prediction for all image pixels.
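The metrics in Tables 7 and 8 are the standard depth-estimation measures; a compact sketch of their computation is given below, assuming the predicted and ground-truth depth maps have already been masked and cropped as in the Eigen protocol (that preprocessing is not shown).

import numpy as np

def depth_metrics(pred, gt):
    """Abs Rel, Sq Rel, RMSE, RMSE log and the delta-threshold accuracies."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    sq_rel = np.mean((pred - gt) ** 2 / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    deltas = [float(np.mean(ratio < 1.25 ** k)) for k in (1, 2, 3)]
    return abs_rel, sq_rel, rmse, rmse_log, *deltas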
Table 9 shows the generalized fooling rates with respect to the accuracy at the 1.25 threshold (δ < 1.25). A surprising observation from the table is that the performance of the range prior perturbations on both networks surpasses that of the data prior perturbations. This might lead us to conclude that the range prior perturbations are stronger. However, the Generalized Fooling Rate (GFR) measures the change in a network's outcome with respect to a metric. As observed from Tables 7 and 8, the data prior perturbations have a much harsher effect on error metrics such as the Root Mean Square Error (RMSE), whereas range prior perturbations have a harsher effect on precision metrics such as δ < 1.25. Hence, 'which perturbation is better?' is rather a metric-dependent question, and conclusions based on a single metric would only partially reflect the truth.
TABLE 9
Generalized fooling rate with respect to the δ < 1.25 metric for the task of depth estimation.

Model              | Baseline | No data | Range prior | Data prior
Monodepth-VGG      | 0.4%     | 15.3%   | 22.7%       | 21.3%
Monodepth-Resnet50 | 2%       | 21.3%   | 47.6%       | 24.3%
Figures 9 and 11 show some qualitative examples of the perturbed inputs and the change in output. Figure 9 shows the input-output pairs for Monodepth-VGG, where the input is perturbed by the various kinds of UAPs crafted. Figure 11 compares the input-output pairs for Monodepth-VGG and Monodepth-Resnet50 using the range-prior UAP. As shown by Figures 9 and 11, for both versions of Monodepth, a quasi-imperceptible change in the input can cause a large change in the output.
Fig. 10. Universal adversarial perturbations for the depth estimation task crafted using our proposed algorithm (no data prior, range prior, and data prior). The top row shows the perturbations learned for Monodepth-Resnet50 and the bottom row shows those for Monodepth-VGG. Perturbations were crafted with ξ = 10 for the no prior, range prior, and all data prior scenarios. The corresponding scenario is mentioned below each column. Images are best viewed in color.
Fig. 11. Depth predictions of both models over a sample perturbed image. Perturbations were crafted with ξ = 10 using the range prior. The top row corresponds to Monodepth-Resnet50 and the bottom row corresponds to Monodepth-VGG. The first column shows the perturbed input image, the second shows the depth output of the clean sample image, and the third shows the depth output of the perturbed sample image. Images are best viewed in color.
5 DISCUSSION
As demonstrated by our experimentation in section 4, it is evident that the proposed algorithm is able to craft perturbations which achieve 'fooling' for a variety of computer vision tasks. An important question to ask about Convolutional Neural Networks (CNNs) is: "How stable are the learned representations at each layer?" That is, are the features learned by the CNNs robust to small changes in the input, or can a small change in the input lead to a huge change in the output? As mentioned earlier, 'fooling' refers to the instability of the CNN in terms of its output, independent of the task at hand.
Fig. 12. Percentage relative change in the extracted representations caused by our crafted perturbations at multiple layers of VGG-16. Note that as we go deeper into the net, the amount of perturbation increases drastically, which results in eventual misclassification of the input.
Earlier works (e.g., [4], [5], [8]) have shown that CNNs can be fooled by small but learned noise. Most of these works (e.g., [5]) rely on data to find the directions in which the input can be slightly perturbed in order to fool the CNN. These approaches try to find, via simple methods such as gradient ascent, the directions in which the network's confidence falls. Recent works by Moosavi-Dezfooli et al. [7], [8] find the nearest classification boundary for a given sample and perturb the sample by moving it across the boundary, thereby achieving the fooling. They also establish that it is possible to learn a single perturbation which can work for most input images. Thus, existing works (independent of the underlying task) exploit data samples, the model's confidence, and the classification boundaries in the input space in order to craft adversarial perturbations, either image specific or agnostic.
Unlike the existing works, we depict this as a stability issue with the representations learned by CNNs. We attempt to learn the optimal perturbation in the input space that can cause maximal change in the output of the network.
TABLE 10
Relative shift in the classification layer's input and the fooling rate, for VGG-16 in the task of classification. The relative shift has been evaluated on 1000 random images from the ILSVRC validation set, while the fooling rate is evaluated on the entire ILSVRC validation set.

Perturbation   | Rel. shift in input to fc8 (classification) layer | Fooling rate
Baseline       | 0.0006                                            | 8.62
No Prior       | 0.867                                             | 45.47
Range Prior    | 1.142                                             | 63.08
All data Prior | 3.169                                             | 77.77
We achieve this via learning perturbations that can result in maximal change in the activations at all the intermediate layers of the architecture. As an example, we consider the VGG-16 CNN trained for object recognition to illustrate the working of our objective. Figure 12 shows the percentage relative change in the feature activations, ‖l_i(x + δ) − l_i(x)‖₂ / ‖l_i(x)‖₂ × 100, at various layers i in the architecture. The relative change is also shown for variants of our method that utilize prior information about the training data distribution. Note that the percentage relative change in the feature activations due to the addition of the learned perturbation increases monotonically as we go deeper in the network. Because of this accumulated perturbation in the projection of the input, our learned adversaries are able to fool the CNNs independent of the task at hand. This phenomenon explains the fooling achieved by our objective. Also, Figure 12 shows that with the utilization of the data priors, the relative perturbation further increases. Hence, our objective achieves better fooling when prior information is provided during learning. Note that, for comparison, the baseline perturbation of random noise with the same max-norm as the learned perturbations is provided. The perturbation caused by the random noise is almost negligible at all layers of the CNN, which explains the network's robustness to random noise attacks.
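A sketch of how the per-layer relative change plotted in Figure 12 can be measured is given below. It is written with PyTorch forward hooks purely for convenience; the paper's experiments use Caffe, and the layer dictionary is a placeholder for the modules one wishes to probe.

import torch

def relative_layer_shift(model, layers, x, delta):
    """Percentage relative change ||l_i(x+delta) - l_i(x)|| / ||l_i(x)|| * 100
    of the activations at the given layers (dict: name -> module)."""
    acts = {}
    hooks = [m.register_forward_hook(
                 lambda mod, inp, out, name=name:
                     acts.setdefault(name, []).append(out.detach().flatten()))
             for name, m in layers.items()]
    with torch.no_grad():
        model(x)            # clean pass: stores l_i(x)
        model(x + delta)    # perturbed pass: stores l_i(x + delta)
    for h in hooks:
        h.remove()
    return {name: float(100 * torch.norm(a[1] - a[0]) / torch.norm(a[0]))
            for name, a in acts.items()}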
Finally, we relate the relative change caused by our perturbations at the input to the classification layer (fc8 or softmax) to the fooling rates achieved by the various perturbations. Table 10 shows the relative shift in the feature activations that are input to the classification layer, ‖l_i(x + δ) − l_i(x)‖₂ / ‖l_i(x)‖₂, and the corresponding fooling rates for the various perturbations. Note that they are highly correlated, which explains why and how the proposed objective can fool CNNs trained across multiple vision tasks.
6 CONCLUSION
In this paper, we proposed a novel data-free objective to craft image-agnostic (universal) adversarial perturbations (UAPs). More importantly, we showed that the proposed objective is generalizable not only across multiple CNN architectures but also across diverse computer vision tasks. We demonstrated that our seemingly simple objective of injecting maximal "adversarial" energy into the learned representations (subject to the imperceptibility constraint) is effective in fooling both classification and regression models. The significant transfer performance achieved by our crafted perturbations shows that they can pose a substantial threat to deep learned systems in the form of black-box attacks.
Further, we showed that our objective can exploit minimal priors about the target data distribution to craft stronger perturbations. For example, providing simple information such as the mean and dynamic range of the images to the proposed objective yields significantly stronger perturbations. Though the proposed objective is data-free in nature, it can craft stronger perturbations when data is utilized.
More importantly, we introduced the idea of generalizable objectives to craft image-agnostic perturbations. It is already established that the representations learned by deep models are susceptible to adversarial perturbations. On top of this, the existence of generic objectives that can fool "any" learning-based vision model independent of the underlying task raises critical concerns about model deployment. Therefore, this is an important research direction for the community to focus on in order to build reliable machine learning based systems.
REFERENCES
[1] B. Biggio, G. Fumera, and F. Roli, "Pattern recognition systems under attack: Design issues and research challenges," International Journal of Pattern Recognition and Artificial Intelligence, vol. 28, no. 07, 2014.
[2] B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, and F. Roli, "Evasion attacks against machine learning at test time," in Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2013, pp. 387–402.
[3] L. Huang, A. D. Joseph, B. Nelson, B. I. Rubinstein, and J. D. Tygar, "Adversarial machine learning," in Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, ser. AISec '11, 2011.
[4] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013.
[5] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2014.
[6] A. Kurakin, I. J. Goodfellow, and S. Bengio, "Adversarial machine learning at scale," arXiv preprint arXiv:1611.01236, 2016.
[7] S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, "Deepfool: A simple and accurate method to fool deep neural networks," in IEEE Computer Vision and Pattern Recognition (CVPR), 2016.
[8] S. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, "Universal adversarial perturbations," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[9] K. R. Mopuri, U. Garg, and R. V. Babu, "Fast feature fool: A data independent approach to universal adversarial perturbations," in Proceedings of the British Machine Vision Conference (BMVC), 2017.
[10] C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. Yuille, "Adversarial examples for semantic segmentation and object detection," arXiv preprint arXiv:1703.08603, 2017.
[11] J. H. Metzen, M. C. Kumar, T. Brox, and V. Fischer, "Universal adversarial perturbations against semantic image segmentation," in International Conference on Computer Vision (ICCV), 2017.
[12] Y. Bengio, "Learning deep architectures for AI," Found. Trends Mach. Learn., vol. 2, no. 1, Jan. 2009.
[13] N. Papernot, P. D. McDaniel, I. J. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, "Practical black-box attacks against deep learning systems using adversarial examples," arXiv preprint arXiv:1602.02697, 2016.
[14] A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial examples in the physical world," arXiv preprint arXiv:1607.02533, 2016.
[15] C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. L. Yuille, "Adversarial examples for semantic segmentation and object detection," arXiv preprint arXiv:1703.08603, 2017.
[16] A. Fawzi, O. Fawzi, and P. Frossard, "Analysis of classifiers' robustness to adversarial perturbations," arXiv preprint arXiv:1502.02590, 2015.
Fig. 13. Sample failure case for object recognition using the VGG-16 model. The top row shows multiple clean images from the ILSVRC validation set. The bottom row shows the adversarial images generated by adding the perturbation crafted utilizing the no data prior. Note that for all the shown images the perturbation fails to change the predicted label.
Fig. 14. Sample failure case for semantic segmentation using the FCN-8s-VGG model (clean, no data, range prior, data with less BG, and all data). The top row shows the clean and corresponding perturbed images from no prior to various priors. The bottom row shows the predicted segmentation maps. Note that the people segments are undisturbed by the addition of the various perturbations.
Fig. 15. Sample failure case for depth estimation using the Monodepth-VGG model (clean, no data, range prior, and data prior). The top row shows the clean image and the corresponding perturbed images for the no data case and the various prior cases. The bottom row shows the corresponding depth predictions.
[17] A. Fawzi, S. Moosavi-Dezfooli, and P. Frossard, “Robustness of
classifiers: from adversarial to random noise,” in Advances in
Neural Information Processing Systems (NIPS), 2016.
[18] A. Nguyen, J. Yosinski, and J. Clune, “Deep neural networks
are easily fooled: High confidence predictions for unrecognizable
images,” in Computer Vision and Pattern Recognition (CVPR), 2015.
[19] A. Rozsa, E. M. Rudd, and T. E. Boult, “Adversarial diversity and
hard positive generation,” in Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition (CVPR) Workshops, 2016,
pp. 25–32.
[20] C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. Yuille, “Adversarial examples for semantic segmentation and object detection,”
in International Conference on Computer Vision (ICCV), 2017.
[21] V. Behzadan and A. Munir, "Vulnerability of deep reinforcement learning to policy induction attacks," arXiv preprint arXiv:1701.04143, 2017.
[22] O. Bastani, Y. Ioannou, L. Lampropoulos, D. Vytiniotis, A. Nori,
and A. Criminisi, “Measuring neural net robustness with constraints,” in Advances in Neural Information Processing Systems
(NIPS), 2016.
[23] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, “Synthesizing robust adversarial examples,” arXiv preprint arXiv:1707.07397, 2017.
[24] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov,
D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with
convolutions,” arXiv preprint arXiv:1409.4842, 2014.
[25] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for
image recognition,” arXiv preprint arXiv:1512.03385, 2015.
[26] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma,
Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and
L. Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,”
International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp.
211–252, 2015.
[27] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and
A. Zisserman, “The PASCAL Visual Object Classes Challenge 2012
(VOC2012) Results.”
[28] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The PASCAL Visual Object Classes Challenge 2011 (VOC2011) Results."
[29] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous
driving? the kitti vision benchmark suite,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[30] B. Zhou, A. Khosla, À. Lapedriza, A. Torralba, and A. Oliva,
“Places: An image database for deep scene understanding,” arXiv
preprint arXiv:1610.02055, 2016.
[31] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint: arXiv:1412.6980, 2014.
[32] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick,
S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture
for fast feature embedding,” arXiv preprint arXiv:1408.5093, 2014.
[33] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, “Return
of the devil in the details: Delving deep into convolutional nets,”
in Proceedings of the British Machine Vision Conference (BMVC), 2014.
[34] K. Simonyan and A. Zisserman, “Very deep convolutional
networks for large-scale image recognition,” arXiv preprint
arXiv:abs/1409.1556, 2014.
[35] E. Shelhamer, J. Long, and T. Darrell, “Fully convolutional networks for semantic segmentation,” IEEE transactions on pattern
analysis and machine intelligence (PAMI), vol. 39, no. 4, 2017.
[36] L. C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L.
Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs,” in
International Conference on Learning Representations (ICLR), 2015.
[37] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L.
Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” arXiv
preprint arXiv:1606.00915, 2016.
[38] B. Hariharan, P. Arbeláez, L. Bourdev, S. Maji, and J. Malik,
“Semantic contours from inverse detectors,” in IEEE International
Conference on Computer Vision (ICCV), 2011.
[39] C. Godard, O. Mac Aodha, and G. J. Brostow, “Unsupervised
monocular depth estimation with left-right consistency,” in CVPR,
2017.
[40] D. B. Sam, S. Surya, and R. V. Babu, “Switching convolutional
neural network for crowd counting,” in 2017 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), 2017.
[41] A. Kendall and R. Cipolla, “Geometric loss functions for camera
pose regression with deep learning,” in 2017 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), July 2017, pp.
6555–6564.
[42] S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. Funkhouser,
“Semantic scene completion from a single depth image,” in 2017
IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
July 2017, pp. 190–198.
[43] D. Xu, E. Ricci, W. Ouyang, X. Wang, and N. Sebe, “Multi-scale
continuous crfs as sequential deep networks for monocular depth
estimation,” in 2017 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), July 2017, pp. 161–169.
[44] D. Eigen, C. Puhrsch, and R. Fergus, “Depth map prediction from
a single image using a multi-scale deep network,” in Advances in
Neural Information Processing Systems (NIPS), 2014, pp. 2366–2374.
[45] C. Godard, O. Mac Aodha, and G. J. Brostow, “Unsupervised
monocular depth estimation with left-right consistency,” in IEEE
conference on Computer Vision and Pattern Recognition (CVPR), 2017.
| 2 |
arXiv:1611.09350v1 [] 26 Nov 2016
A NOTE ABOUT THE (2, 3)-GENERATION OF
SL12(q)
TSANKO RAYKOV GENCHEV
Abstract
In this note we provide explicit uniform type (2, 3)-generators for the special
linear group SL12 (q) for all q except for q = 2 or q = 4. Our considerations
are easily traceable, self-contained and based only on the known list of maximal
subgroups of this group.
Key words: (2,3)-generated group.
2010 Mathematics Subject Classification: 20F05, 20D06.
Now that the problem concerning the (2, 3)-generation of the special linear groups and their projective images is completely solved (see [4] and the references there), we give our contribution by discussing the last remaining group, SL12(q), in the light of the method used in our works [2], [3] and [5]. The (2, 3)-generation of this group over all finite fields of characteristic different from 5 has been proved in [4]. The author's approach there is different from ours, in which we make essential use of the known list of maximal subgroups of SL12(q).
The group G = SL12(q), where q = p^m and p is a prime, acts naturally on a twelve-dimensional vector space V over the field F = GF(q). We identify V with the column vectors of F^12, and let v_1, ..., v_12 be the standard basis of the space V. Set Q = q^11 − 1 if q ≠ 3, 7 and Q = (q^11 − 1)/2 if q = 3, 7.
First of all, based only on the orders of the maximal subgroups of G (whose exact structure is given in Tables 8.76 and 8.77 in [1]), it can be easily seen that there is only one class of such subgroups having an element of order Q. Namely, this is the class of subgroups acting reducibly on the space V; they are the stabilizers in G of one- or eleven-dimensional subspaces of V.
Now, let us choose an element ω of order Q in the multiplicative group of the field GF(q^11) and set
f(t) = (t − ω)(t − ω^q)(t − ω^(q^2))(t − ω^(q^3))(t − ω^(q^4))(t − ω^(q^5))(t − ω^(q^6))(t − ω^(q^7))(t − ω^(q^8))(t − ω^(q^9))(t − ω^(q^10))
     = t^11 − α_1 t^10 + α_2 t^9 − α_3 t^8 + α_4 t^7 − α_5 t^6 + α_6 t^5 − α_7 t^4 + α_8 t^3 − α_9 t^2 + α_10 t − α_11.
Then f(t) ∈ F[t] and the polynomial f(t) is irreducible over the field F. Note that α_11 = ω^((q^11 − 1)/(q − 1)) has order q − 1 if q ≠ 3, 7, α_11 = 1 if q = 3, and α_11^3 = 1 ≠ α_11 if q = 7.
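These stated orders of α_11 can be verified by elementary integer arithmetic, since the order of ω^e equals Q/gcd(Q, e) whenever ω has order Q. The following short script is only a sanity check added here for illustration, not part of the original argument.

from math import gcd

def alpha11_order(q):
    """Order of alpha_11 = omega^((q^11 - 1)/(q - 1)), where omega has order Q."""
    Q = (q**11 - 1) // 2 if q in (3, 7) else q**11 - 1
    e = (q**11 - 1) // (q - 1)
    return Q // gcd(Q, e)   # order of the e-th power of an element of order Q

# Expected: order q - 1 for q != 3, 7; order 1 (alpha_11 = 1) for q = 3; order 3 for q = 7.
print([(q, alpha11_order(q)) for q in (3, 5, 7, 9, 11, 13)])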
Set
x=
−1 0
0 −1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
−1
0
0
0
0
0
0
0
1
0
0
0
0
y=
0
0
0
0
0
0
0
0
0
0
0
0
α6 α−1
11
0
0
0
0
0
0
α5 α−1
11
0
0 −1 0
0
0
α4 α−1
11
0
0
0
0 −1 0
α3 α−1
11
0 −1 0
0
0
0
α8 α−1
11
0
0
0
0
0
0
α7 α−1
11
0
0
0
0
0
0
α1 α−1
11
−1 0
0
0
0
0
α9 α−1
11
0
0
0
0
0 −1 α2 α−1
11
0
0
0
0
0
0
0
0
0
0 −1 0
0 α10 α−1
11
0
0
0
0
0
0
α−1
11
0 1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0 0
.
0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 0 1 0
0
α6
0
α5
0
α7
0
α9
0
α8
0
α4
−1 α10
0
α3
0
α2
0 α11
0
α1
0
0
,
Then x and y are elements of G of orders 2 and 3, respectively. Denote
0
0 −1 0
0
0
0
0
0
0
α6 α6 α−1
11
−1 0
0
0
0
0
0
0
0
0
α5 α5 α−1
11
0
0
0
0 −1 0
0
0
0
0
α7 α4 α−1
11
0
0
0
0
0
0 −1 0
0
0
α9 α3 α−1
11
0
0
0 −1 0
0
0
0
0
0
α8 α8 α−1
11
0 −1 0
0
0
0
0
0
0
0
α4 α7 α−1
11
z = xy =
0
0
0
0
0
0
0
0
0 −1 α10 α1 α−1
11
0
0
0
0
0 −1 0
0
0
0
α3 α9 α−1
11
0
0
0
0
0
0
0 −1 0
0
α2 α2 α−1
11
0
0
0
0
0
0
0
0
0
0 α11
0
0
0
0
0
0
0
0
0 −1 0
α1 α10 α−1
11
0
0
0
0
0
0
0
0
0
0
0
α−1
11
.
The characteristic polynomial of z is f_z(t) = (t − α_11^(−1)) f(t), and the characteristic roots α_11^(−1), ω, ω^q, ω^(q^2), ω^(q^3), ω^(q^4), ω^(q^5), ω^(q^6), ω^(q^7), ω^(q^8), ω^(q^9), and ω^(q^10) of z are pairwise distinct. Then, in GL12(q^11), z is conjugate to the matrix diag(α_11^(−1), ω, ω^q, ω^(q^2), ω^(q^3), ω^(q^4), ω^(q^5), ω^(q^6), ω^(q^7), ω^(q^8), ω^(q^9), ω^(q^10)) and hence z is an element of SL12(q) of order Q.
Let H = ⟨x, y⟩, H ≤ G. We prove that H acts irreducibly on the space V. Indeed, assume that W is an H-invariant subspace of V and k = dim W, 1 ≤ k ≤ 11.
Let first k = 1 and 0 ≠ w ∈ W. Then y(w) = λw where λ ∈ F and λ^3 = 1. This yields
w = µ_1(v_1 + λ^2 v_2 + λ v_3) + µ_2(v_4 + λ^2 v_5 + λ v_6) + µ_3(v_7 + λ^2 v_8 + λ v_9) + µ_4(v_10 + λ^2 v_11 + λ v_12),
where µ_1, µ_2, µ_3, and µ_4 are elements of the field F.
Now, we involve the action of x on w: x(w) = νw where ν = ±1. This yields consecutively µ_4 ≠ 0, α_11 = λ^2 ν, and
(1) µ_3 = λ(α_1 + να_10 − λν)µ_4,
(2) νµ_1 + µ_2 = (να_4 + α_7)µ_4,
(3) νµ_2 + λ^2 µ_3 = λ(να_3 + α_9)µ_4,
(4) (ν + 1)(µ_1 − λα_6 µ_4) = 0,
(5) (ν + 1)(µ_1 − λ^2 α_5 µ_4) = 0,
(6) (ν + 1)(µ_2 − λ^2 α_8 µ_4) = 0,
(7) (ν + 1)(µ_3 − α_2 µ_4) = 0.
In particular, we have α_11^3 = ν and α_11^6 = 1. This is impossible if q = 5 or q > 7, since then α_11 has order q − 1. According to our assumption (q ≠ 2, 4), only two possibilities are left: q = 3 (and α_11 = 1) or q = 7 (and α_11^3 = 1 ≠ α_11). So ν = 1, α_11 = λ^2, and (1), (2), (3), (4), (5), (6) and (7) produce α_1 = λ^2 α_2 − α_10 + λ, α_6 = λα_5, α_7 = λ^2 α_5 + λ^2 α_8 − α_4 and α_9 = λα_2 + λα_8 − α_3. Now f(−1) = −(1 + λ + λ^2)(1 + α_2 + α_5 + α_8) = 0 both for q = 3 and q = 7, an impossibility as f(t) is irreducible over the field F.
Now let 2 ≤ k ≤ 11. Then the characteristic polynomial of z|_W has degree k and has to divide f_z(t) = (t − α_11^(−1)) f(t). The irreducibility of f(t) over F leads immediately to the conclusion that this polynomial is f(t) and k = 11. Now the subspace U of V which is generated by the vectors v_1, v_2, v_3, ..., v_11 is ⟨z⟩-invariant. If W ≠ U then U ∩ W is ⟨z⟩-invariant and dim(U ∩ W) = 10. This means that the characteristic polynomial of z|_(U∩W) has degree 10 and must divide f_z(t) = (t − α_11^(−1)) f(t), which is impossible. Thus W = U, but obviously U is not ⟨y⟩-invariant, a contradiction.
Note that the above considerations fail if q = 2 or 4.
Now, as H = ⟨x, y⟩ acts irreducibly on the space V and it has an element of order Q, we conclude that H cannot be contained in any maximal subgroup of G (= SL12(q)). Thus H = G and G = ⟨x, y⟩ is a (2, 3)-generated group.
References
[1] J. N. BRAY, D.F. HOLT, C. M. RONEY-DOUGAL. The Maximal Subgroups of
the Low - Dimensional Finite Classical Groups. London Math. Soc. Lecture Note
Series 407, Cambridge University Press (2013).
[2] TS. GENCHEV, E. GENCHEVA. (2, 3)-generation of the special linear groups of
dimension 8. Proceedings of the Forty Fourth Spring Conference of the Union of
Bulgarian Mathematicians, SOK ”Kamchia”, April 2-6 (2015), 167-173.
[3] E. GENCHEVA, TS. GENCHEV and K. TABAKOV. (2, 3)-generation of the special
linear groups of dimensions 9, 10 and 11. arXiv 1412.8631v5 (2016).
[4] M. A. PELLEGRINI. The (2, 3)-generation of the special linear groups over finite
fields. arXiv 1605.04276v1 (2016).
[5] K. TABAKOV, E. GENCHEVA and TS. GENCHEV. (2, 3)-generation of the special
linear groups of dimension 11. Proceedings of the Forty Fifth Spring Conference of
the Union of Bulgarian Mathematicians, Pleven, April 6-10 (2016), 146-151.
Ts. Genchev
e-mail: [email protected]
Department of Mathematics
Technical University
Varna, Bulgaria
| 4 |
Optimal weighted least-squares methods ∗
arXiv:1608.00512v1 [math.NA] 1 Aug 2016
Albert Cohen† and Giovanni Migliorati‡
August 2, 2016
Abstract
We consider the problem of reconstructing an unknown bounded function u defined on a domain X ⊂
Rd from noiseless or noisy samples of u at n points (xi )i=1,...,n . We measure the reconstruction error in a
norm L2 (X, dρ) for some given probability measure dρ. Given a linear space Vm with dim(Vm ) = m ≤ n,
we study in general terms the weighted least-squares approximations from the spaces Vm based on
independent random samples. It is well known that least-squares approximations can be inaccurate and
unstable when m is too close to n, even in the noiseless case. Recent results from [4, 5] have shown the
interest of using weighted least squares for reducing the number n of samples that is needed to achieve an
accuracy comparable to that of best approximation in Vm , compared to standard least squares as studied
in [3]. The contribution of the present paper is twofold. From the theoretical perspective, we establish
results in expectation and in probability for weighted least squares in general approximation spaces Vm .
These results show that for an optimal choice of sampling measure dµ and weight w, which depends on the
space Vm and on the measure dρ, stability and optimal accuracy are achieved under the mild condition
that n scales linearly with m up to an additional logarithmic factor. In contrast to [3], the present
analysis covers cases where the function u and its approximants from Vm are unbounded, which might
occur for instance in the relevant case where X = Rd and dρ is the Gaussian measure. From the numerical
perspective, we propose a sampling method which allows one to generate independent and identically
distributed samples from the optimal measure dµ. This method becomes of interest in the multivariate
setting where dµ is generally not of tensor product type. We illustrate this for particular examples of
approximation spaces Vm of polynomial type, where the domain X is allowed to be unbounded and high
or even infinite dimensional, motivated by certain applications to parametric and stochastic PDEs.
AMS classification numbers: 41A10, 41A25, 41A65, 62E17, 93E24.
Keywords: multivariate approximation, weighted least squares, error analysis, convergence rates, random matrices, conditional sampling, polynomial approximation.
1
Introduction
Let X be a Borel set of Rd . We consider the problem of estimating an unknown function u : X → R from
pointwise data (y i )i=1,...,n which are either noiseless or noisy observations of u at points (xi )i=1,...,n from X.
In numerous applications of interest, some prior information is either established or assumed on the function
u. Such information may take various forms such as:
∗ This
research is supported by Institut Universitaire de France and the ERC AdV project BREAD.
Universités, UPMC Univ Paris 06, CNRS, UMR 7598, Laboratoire Jacques-Louis Lions, 4, place Jussieu 75005,
Paris, France. email: [email protected]
‡ Sorbonne Universités, UPMC Univ Paris 06, CNRS, UMR 7598, Laboratoire Jacques-Louis Lions, 4, place Jussieu 75005,
Paris, France. email: [email protected]
† Sorbonne
1
(i) regularity properties of u, in the sense that it belongs to a given smoothness class;
(ii) decay or sparsity of the expansion of u in some given basis;
(iii) approximability of u with some prescribed error by given finite-dimensional spaces.
Note that the above are often related to one another and sometimes equivalent, since many smoothness
classes can be characterized by prescribed approximation rates when using certain finite-dimensional spaces
or truncated expansions in certain bases.
This paper uses the third type of prior information, taking therefore the view that u can be “well
approximated” in some space Vm of functions defined everywhere on X, such that dim(Vm ) = m. We work
under the following mild assumption:
for any x ∈ X, there exists v ∈ Vm such that v(x) ≠ 0.    (1)
This assumption holds, for example, when Vm contains the constant functions. Typically, the space Vm
comes from a family (Vj )j≥1 of nested spaces with increasing dimension, such as algebraic or trigonometric
polynomials, or piecewise polynomial functions on a hierarchy of meshes.
We are interested in measuring the error in the L2(X, dρ) norm
‖v‖ := ( ∫_X |v|^2 dρ )^(1/2),
where dρ is a given probability measure on X. We denote by ⟨·, ·⟩ the associated inner product. One typical strategy is to pick the estimate from a finite-dimensional space Vm such that dim(Vm) = m. The ideal estimator is given by the L2(X, dρ) orthogonal projection of u onto Vm, namely
P_m u := argmin_{v ∈ Vm} ‖u − v‖.
In general, this estimator is not computable from a finite number of observations. The best approximation error
e_m(u) := min_{v ∈ Vm} ‖u − v‖ = ‖u − P_m u‖,
thus serves as a benchmark for a numerical method based on a finite sample. In the subsequent analysis, we make significant use of an arbitrary L2(X, dρ) orthonormal basis {L_1, ..., L_m} of the space Vm. We also introduce the notation
e_m(u)_∞ := min_{v ∈ Vm} ‖u − v‖_{L∞},
where L∞ is meant with respect to dρ, and observe that e_m(u) ≤ e_m(u)_∞ for any probability measure dρ.
The weighted least-squares method consists in defining the estimator as
u_W := argmin_{v ∈ Vm} (1/n) Σ_{i=1}^n w^i |v(x^i) − y^i|^2,    (2)
where the weights w^i > 0 are given. In the noiseless case y^i = u(x^i), this also writes
argmin_{v ∈ Vm} ‖u − v‖_n,    (3)
where the discrete seminorm is defined by
‖v‖_n := ( (1/n) Σ_{i=1}^n w^i |v(x^i)|^2 )^(1/2).    (4)
This seminorm is associated with the semi-inner product ⟨·, ·⟩_n. If we expand the solution to (3) as Σ_{j=1}^m v_j L_j, the vector v = (v_j)_{j=1,...,m} is the solution to the normal equations
G v = d,    (5)
where the matrix G has entries G_{j,k} = ⟨L_j, L_k⟩_n and where the data vector d = (d_j)_{j=1,...,m} is given by d_j := (1/n) Σ_{i=1}^n w^i y^i L_j(x^i). This system always has at least one solution, which is unique when G is nonsingular. When G is singular, we may define u_W as the unique minimal ℓ2-norm solution to (5).
Note that G is nonsingular if and only if ‖·‖_n is a proper norm on the space Vm. Then, if the data are noise-free, that is, when y^i = u(x^i), we may also write
u_W = P_m^n u,
where P_m^n is the orthogonal projection onto Vm for the norm ‖·‖_n.
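A minimal numerical sketch of assembling and solving the normal equations (5) is given below; the basis callback returning the matrix (L_j(x^i)) is a placeholder for whichever orthonormal basis is used, and the minimal-norm solution is obtained with a least-squares solver as described above.

import numpy as np

def weighted_least_squares(basis, x, y, w):
    """Solve G v = d with G_{jk} = (1/n) sum_i w^i L_j(x^i) L_k(x^i) and
    d_j = (1/n) sum_i w^i y^i L_j(x^i); returns the minimal l2-norm solution."""
    L = basis(x)                          # shape (n, m): L[i, j] = L_j(x^i)
    n = len(x)
    G = (L * w[:, None]).T @ L / n
    d = L.T @ (w * y) / n
    v, *_ = np.linalg.lstsq(G, d, rcond=None)
    return v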
In practice, for the estimator (2) to be easily computable, it is important that the functions L1 , . . . , Lm
have explicit expressions that can be evaluated at any point in X so that the system (5) can be assembled.
Let us note that computing this estimator by solving (5) only requires that {L1 , . . . , Lm } is a basis of the
space Vm , not necessarily orthonormal in L2 (X, dρ). Yet, since our subsequent analysis of this estimator
makes use of an L2 (X, dρ) orthonormal basis, we simply assume that {L1 , . . . , Lm } is of such type.
In our subsequent analysis, we sometimes work under the assumption of a known uniform bound
kukL∞ ≤ τ.
(6)
We introduce the truncation operator
z 7→ Tτ (z) := sign(z) min{|z|, τ },
and we study the truncated weighted least-squares approximation defined by
uT := Tτ ◦ uW .
Note that, in view of (6), we have |u − uT | ≤ |u − uW | in the pointwise sense and therefore
ku − uT k ≤ ku − uW k.
The truncation operator aims at avoiding unstabilities which may occur when the matrix G is ill-conditioned.
In this paper, we use randomly chosen points xi , and corresponding weights wi = w(xi ), distributed in such
a way that the resulting random matrix G concentrates towards the identity I as n increases. Therefore, if
no L∞ bound is known, an alternative strategy consists in setting to zero the estimator when G deviates
from the identity by more than a given value in the spectral norm. We recall that for m × m matrices X,
this norm is defined as kXk2 := supkvk2 =1 kXvk2 . More precisely, we introduce the conditioned least-squares
approximation, defined by
(
uW , if kG − Ik2 ≤ 12 ,
uC :=
0,
otherwise.
The choice of 12 as a threshold for the distance between G and I in the spectral norm is related to our
subsequent analysis. However, the value 21 could be be replaced by any real number in ]0, 1[ up to some
minor changes in the formulation of our results. Note that
kG − Ik2 ≤
1
=⇒ cond(G) ≤ 3.
2
3
(7)
It is well known that if n ≥ m is too close to m, weighted least-squares methods may become unstable and inaccurate for most sampling distributions. For example, if X = [−1, 1] and Vm = P_{m−1} is the space of algebraic polynomials of degree m − 1, then with m = n the estimator coincides with the Lagrange polynomial interpolation, which can be highly unstable and inaccurate, in particular for equispaced points. The question that we want to address here in general terms is therefore:
Given a space Vm and a measure dρ, how to best choose the sample points x^i and weights w^i in order to ensure that the L2(X, dρ) error ‖u − ũ‖ is comparable to e_m(u), with n being as close as possible to m, for ũ ∈ {u_W, u_T, u_C}?
We address this question in the case where the x^i are randomly chosen. More precisely, we draw the x^i independently according to a certain probability measure dµ defined on X. A natural prescription for the success of the method is that ‖v‖_n approaches ‖v‖ as n tends to +∞. Therefore, one first obvious choice is to use
dµ = dρ and w^i = 1, i = 1, ..., n,    (8)
that is, sample according to the measure in which we plan to evaluate the L2 error and use equal weights. When using equal weights w^i = 1, the weighted least-squares estimator (2) becomes the standard least-squares estimator, as a particular case. The strategy (8) was analyzed in [3], through the introduction of the function
x ↦ k_m(x) := Σ_{j=1}^m |L_j(x)|^2,
which is the diagonal of the integral kernel of the projector P_m. This function only depends on Vm and dρ.
It is strictly positive in X due to Assumption (1). Its reciprocal function is characterized by
1/k_m(x) = min_{v ∈ Vm, v(x) = 1} ‖v‖^2,
and is called Christoffel function in the particular case where Vm is the space of algebraic polynomials of total degree m − 1, see [10]. Obviously, the function k_m satisfies
∫_X k_m dρ = m.    (9)
We define
K_m = K_m(Vm, dρ) := ‖k_m‖_{L∞},
and recall the following results from [3, 7] for the standard least-squares method with the weights and the sampling measure chosen as in (8).
Theorem 1 For any r > 0, if m and n are such that the condition
K_m ≤ κ n / ln n, with κ := κ(r) = (1 − ln 2)/(2 + 2r),    (10)
is satisfied, then the following hold:
(i) The matrix G satisfies the tail bound
Pr{ ‖G − I‖_2 > 1/2 } ≤ 2 n^(−r).    (11)
(ii) If u ∈ L2(X, dρ) satisfies a uniform bound (6), then the truncated least-squares estimator satisfies, in the noiseless case,
E(‖u − u_T‖^2) ≤ (1 + ε(n)) e_m(u)^2 + 8 τ^2 n^(−r),    (12)
where ε(n) := 4κ / ln(n) → 0 as n → +∞, and κ as in (10).
(iii) If u ∈ L∞(X, dρ), then the truncated and nontruncated least-squares estimators satisfy, in the noiseless case,
‖u − u_T‖ ≤ ‖u − u_W‖ ≤ (1 + √2) e_m(u)_∞,    (13)
with probability larger than 1 − 2 n^(−r).
The second item in the above result shows that the optimal accuracy e_m(u) is met in expectation, up to an additional term of order n^(−r). When e_m(u) has polynomial decay O(m^(−s)), we are ensured that this additional term can be made negligible by taking r strictly larger than s/2, which amounts to taking κ(r) small enough. Condition (10) imposes a minimal number of samples to ensure stability and accuracy of standard least squares. Since (9) implies that K_m ≥ m, the fulfillment of this condition requires that n is at least of the order m ln(m). However, simple examples show that the restriction can be more severe, for example if Vm = P_{m−1} on X = [−1, 1] and with ρ being the uniform probability measure. In this case, one choice for the L_j are the Legendre polynomials with proper normalization ‖L_j‖_{L∞} = |L_j(1)| = √(1 + 2j), so that K_m = m^2, and therefore condition (10) imposes that n is at least of order m^2 ln(m). Other examples in the multivariate setting are discussed in [1, 2], which show that for many relevant approximation spaces Vm and probability measures dρ, the behaviour of K_m is superlinear in m, leading to a very demanding regime in terms of the needed number n of samples. In the case of multivariate downward closed polynomial spaces, precise upper bounds for K_m have been proven in [2, 6] for measures associated to Jacobi polynomials.
In addition, note that the above theory does not cover simple situations such as algebraic polynomials over unbounded domains, for example X = R equipped with the Gaussian measure, since the orthonormal polynomials L_j are unbounded for j ≥ 2 and thus K_m = ∞ if m ≥ 2.
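The quadratic growth K_m = m^2 in the Legendre example can be checked numerically. The sketch below evaluates k_m on a fine grid for the Legendre polynomials normalized in L2([-1, 1], dx/2); it is only an illustration of the quantity discussed above, not part of the analysis.

import numpy as np
from numpy.polynomial import legendre

def k_m(x, m):
    """k_m(x) = sum_{j<m} |L_j(x)|^2 with L_j = sqrt(2j+1) P_j (orthonormal
    for the uniform probability measure on [-1, 1])."""
    x = np.atleast_1d(x)
    P = np.stack([legendre.legval(x, np.eye(m)[j]) for j in range(m)])
    return np.sum((2 * np.arange(m)[:, None] + 1) * P ** 2, axis=0)

xs = np.linspace(-1, 1, 20001)
for m in (2, 4, 8, 16):
    print(m, k_m(xs, m).max(), m ** 2)   # the maximum, attained at x = +-1, equals m^2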
2
Main results
In the present paper, we show that these limitations can be overcome, by using a proper weighted least-squares method. We thus return to the general form of the discrete norm (4) used in the definition of the weighted least-squares estimator. We now use a sampling measure dµ which generally differs from dρ and is such that
w dµ = dρ,
where w is a positive function defined everywhere on X and such that ∫_X w^(−1) dρ = 1, and we then consider the weighted least-squares method with weights given by
w^i = w(x^i).
With such a choice, the norm ‖v‖_n again approaches ‖v‖ as n increases. The particular case dµ = dρ and w ≡ 1 corresponds to the standard least-squares method analyzed by Theorem 1. Note that changing the sampling measure is a commonly used strategy for reducing the variance in Monte Carlo methods, where it is referred to as importance sampling.
With L_j again denoting the L2(X, dρ) orthonormal basis of Vm, we now introduce the function
x ↦ k_{m,w}(x) := Σ_{j=1}^m w(x) |L_j(x)|^2,
which only depends on Vm, dρ and w, as well as
K_{m,w} = K_{m,w}(Vm, dρ, w) := ‖k_{m,w}‖_{L∞}.
Note that, since the √w L_j are an L2(X, dµ) orthonormal basis of √w Vm, we find that ∫_X k_{m,w} dµ = m and thus K_{m,w} ≥ m. We prove in this paper the following generalization of Theorem 1.
Theorem 2 For any r > 0, if m and n are such that the condition
K_{m,w} ≤ κ n / ln n, with κ := (1 − ln 2)/(2 + 2r),    (14)
is satisfied, then the following hold:
(i) The matrix G satisfies the tail bound
Pr{ ‖G − I‖_2 > 1/2 } ≤ 2 n^(−r).    (15)
(ii) If u ∈ L2(X, dρ) satisfies a uniform bound (6), then the truncated weighted least-squares estimator satisfies, in the noiseless case,
E(‖u − u_T‖^2) ≤ (1 + ε(n)) e_m(u)^2 + 8 τ^2 n^(−r),    (16)
where ε(n) := 4κ / ln(n) → 0 as n → +∞, and κ as in (10).
(iii) If u ∈ L∞(X, dρ), then the nontruncated weighted least-squares estimator satisfies, in the noiseless case,
‖u − u_W‖ ≤ (1 + √2) e_m(u)_∞,    (17)
with probability larger than 1 − 2 n^(−r).
(iv) If u ∈ L2(X, dρ), then the conditioned weighted least-squares estimator satisfies, in the noiseless case,
E(‖u − u_C‖^2) ≤ (1 + ε(n)) e_m(u)^2 + 2 ‖u‖^2 n^(−r),    (18)
where ε(n) := 4κ / ln(n) → 0 as n → +∞, and κ as in (10).
Let us mention that the quantity K_{m,w} has been considered in [4], where similar stability and approximation results have been formulated in a slightly different form (see in particular Theorem 2.1 therein), in the specific framework of total degree polynomial spaces.
The interest of Theorem 2 is that it leads us in a natural way to an optimal sampling strategy for the weighted least-squares method. We simply take
w := m / k_m = m / Σ_{j=1}^m |L_j|^2,    (19)
and with such a choice for w one readily checks that
dµ := (k_m / m) dρ    (20)
is a probability measure on X since ∫_X k_m dρ = m.
In addition, we have for this particular choice that
k_{m,w} = w k_m = m,
and therefore
K_{m,w} = m.
We thus obtain the following result as a consequence of Theorem 2, which shows that the above choice of w and dµ allows us to obtain near-optimal estimates for the truncated weighted least-squares estimator, under the minimal condition that n is at least of the order m ln(m).
Corollary 1 For any r > 0, if m and n are such that the condition
m ≤ κ n / ln n, with κ := (1 − ln 2)/(2 + 2r),    (21)
is satisfied, then the conclusions (i), (ii), (iii) and (iv) of Theorem 2 hold for weighted least squares with the choice of w and dµ given by (19) and (20).
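To make the corollary concrete, here is a self-contained sketch of the optimal weighted least-squares estimator in the univariate Legendre setting: points are drawn from dµ_m = (k_m/m) dρ by rejection sampling against the uniform proposal (valid since k_m ≤ m^2 on [-1, 1]), the weights are w = m/k_m, and the system (5) is solved as before. The test function and the values of m and n are arbitrary choices made only for illustration.

import numpy as np
from numpy.polynomial import legendre

def basis(x, m):
    """Orthonormal Legendre basis for the uniform probability measure on [-1, 1]."""
    return np.stack([np.sqrt(2 * j + 1) * legendre.legval(x, np.eye(m)[j])
                     for j in range(m)], axis=1)             # shape (n, m)

def sample_optimal(n, m, rng):
    """n i.i.d. draws from d(mu_m) = (k_m/m) d(rho) via rejection sampling."""
    pts = []
    while len(pts) < n:
        x = rng.uniform(-1, 1)
        km = float(np.sum(basis(np.array([x]), m) ** 2))      # k_m(x)
        if rng.uniform() < km / m ** 2:                        # accept with prob k_m(x)/m^2
            pts.append(x)
    return np.array(pts)

rng = np.random.default_rng(0)
m, n = 10, 200
u = lambda t: np.exp(t) * np.sin(3 * t)                        # arbitrary target function
x = sample_optimal(n, m, rng)
L = basis(x, m)
w = m / np.sum(L ** 2, axis=1)                                 # w(x^i) = m / k_m(x^i)
G = (L * w[:, None]).T @ L / n
d = L.T @ (w * u(x)) / n
v = np.linalg.solve(G, d)                                      # coefficients of u_W
print("||G - I||_2 =", np.linalg.norm(G - np.eye(m), 2))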
One of the interests of the above optimal sampling strategy is that it applies to polynomial approximation on unbounded domains that were not covered by Theorem 1, in particular X = R equipped with the Gaussian measure. In this case, the relevant target functions u are often nonuniformly bounded and therefore the results in items (ii) and (iii) of Theorem 2 do not apply. The result in item (iv) for the conditioned estimator u_C remains valid, since it does not require uniform boundedness of u.
Let us remark that all the above results are independent of the dimension d of the domain X. However, raising d has the unavoidable effect of restricting the classes of functions for which the best approximation error e_m(u) or e_m(u)_∞ have some prescribed decay, due to the well-known curse of dimensionality.
Note that the optimal pair (dµ, w) described by (19) and (20) depends on Vm, that is
w = w_m and dµ = dµ_m.
This raises a difficulty for properly choosing the samples in settings where the choice of Vm is not fixed a priori, such as in adaptive methods. In certain particular cases, it is known that w_m and dµ_m admit limits w* and dµ* as m → ∞ and are globally equivalent to these limits. One typical example is given by the univariate polynomial spaces Vm = P_{m−1}, when X = [−1, 1] and dρ = ρ dx where ρ is a Jacobi weight and dx is the Lebesgue measure on X. In this case dµ* is the pluripotential equilibrium measure
dµ* = dx / (2π √(1 − x^2)),
see e.g. [11, 9], and one has
c dµ* ≤ dµ_m ≤ C dµ*,    m ≥ 1,
for some fixed constants 0 < c < C < ∞. Thus, in such a case, the above corollary also holds for the choice w = w* and dµ = dµ* under the condition m ≤ (c/C) κ n / ln n. The development of sampling strategies in cases of varying values of m without such asymptotic equivalences is the object of current investigation.
A closely related weighted least-squares strategy was recently proposed and analyzed in [5], in the polynomial framework. There, the authors propose to use the renormalized Christoffel function (19) in the
definition of the weights, however sampling from the fixed pluripotential equilibrium measure dµ∗ . Due to
7
the fact that dµm differs from dµ∗ , the main estimate obtained in [5] (see p.3 therein) does not have the
same simple form of a direct comparison between ku − uT k and em (u) as in (ii) of Theorem 2. In particular,
it involves an extra term d(f ) which does not vanish even as n → ∞.
One intrinsic difficulty when using the optimal pair (dµ, w) = (dµm , wm ) described by (19) and (20) is the
effective sample generation, in particular in the multivariate framework since the measure dµm is generally
not of tensor product type. One possible approach is to use Markov Chain Monte Carlo methods such as
the Metropolis-Hastings algorithm, as explored in [4]. In such methods the samples are mutually correlated,
and only asymptotically distributed according to the desired sampling measure. One contribution of the
present paper is to propose a straightforward and effective sampling strategy for generating an arbitrary
finite number n of independent samples identically distributed according to dµm . This strategy requires that
dρ has tensor product structure and that the spaces Vm are spanned by tensor product bases, such as multivariate polynomial spaces, even though dµm itself is generally not of tensor product type.
The rest of our paper is organized as follows. The proof of Theorem 2 is given in §3 in a concise form
since it follows the same lines as the original results on standard least squares from [3, 7]. We devote §4
to analogous results in the case of samples affected by additive noise, proving that the estimates are robust
under condition (14). The proposed method for sampling the optimal measure dµm is discussed in §5, and
we illustrate its effectiveness in §6 by numerical examples.
3 Proof of Theorem 2
The proof is structurally similar to that of Theorem 1 given in [3] for items (i) and (ii) and in [2] for item
(iii), therefore we only sketch it. We observe that $G = \frac{1}{n}\sum_{i=1}^{n} X_i$ where the $X_i$ are i.i.d. copies of the rank 1 random matrix
$$X = X(x) := \big(w(x)L_j(x)L_k(x)\big)_{j,k=1,\dots,m},$$
with x a random variable distributed over X according to µ. One obviously has E(X) = I. We then invoke
the Chernoff bound from [12] to obtain that if $\|X\|_2 \le R$ almost surely, then, for any $0 < \delta < 1$,
$$\Pr\{\|G - I\|_2 > \delta\} \le 2m\left(\frac{e^{-\delta}}{(1-\delta)^{1-\delta}}\right)^{1/R} = 2m\exp\left(-\frac{c_\delta}{R}\right), \qquad (22)$$
with $c_\delta := \delta + (1-\delta)\ln(1-\delta) > 0$. Taking $\delta = \frac{1}{2}$, and observing that
$$\|X(x)\|_2 = \frac{w(x)}{n}\sum_{j=1}^{m} |L_j(x)|^2 = \frac{K_{m,w}(x)}{n},$$
we may thus take $R = \frac{K_{m,w}}{n}$, which yields (15) in item (i).
For the proof of (16) in item (ii), we first consider the event where $\|G - I\|_2 \le \frac{1}{2}$. In this case we write
$$\|u - u_T\|^2 = \|T_\tau(u) - T_\tau(u_W)\|^2 \le \|u - u_W\|^2 = \|u - P_m^n u\|^2 \le \|g\|^2 + \|P_m^n g\|^2, \qquad g := u - P_m u,$$
where we have used that $P_m^n P_m u = P_m u$ and that $g$ is orthogonal to $V_m$, and thus
$$\|u - u_T\|^2 \le e_m(u)^2 + \sum_{j=1}^{m} |a_j|^2,$$
where $a = (a_j)_{j=1,\dots,m}$ is solution of the system $Ga = b$, and $b := (\langle g, L_k\rangle_n)_{k=1,\dots,m}$. Since $\|G^{-1}\|_2 \le 2$, it follows that
$$\|u - u_T\|^2 \le e_m(u)^2 + 4\sum_{k=1}^{m} |\langle g, L_k\rangle_n|^2.$$
In the event where $\|G - I\|_2 > \frac{1}{2}$, we simply write $\|u - u_T\| \le 2\tau$. It follows that
$$\mathbb{E}(\|u - u_T\|^2) \le e_m(u)^2 + 4\sum_{k=1}^{m} \mathbb{E}(|\langle g, L_k\rangle_n|^2) + 8\tau^2 n^{-r}.$$
For the second term, we have
$$\begin{aligned}
\mathbb{E}(|\langle g, L_k\rangle_n|^2) &= \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} \mathbb{E}\big(w(x^i)w(x^j)g(x^i)g(x^j)L_k(x^i)L_k(x^j)\big)\\
&= \frac{1}{n^2}\Big(n(n-1)\,|\mathbb{E}(w(x)g(x)L_k(x))|^2 + n\,\mathbb{E}(|w(x)g(x)L_k(x)|^2)\Big)\\
&= \Big(1 - \frac{1}{n}\Big)|\langle g, L_k\rangle|^2 + \frac{1}{n}\int_X |w(x)|^2|g(x)|^2|L_k(x)|^2\, d\mu\\
&= \frac{1}{n}\int_X w(x)|g(x)|^2|L_k(x)|^2\, d\rho,
\end{aligned}$$
where we have used the fact that g is L2 (X, ρ)-orthogonal to Vm and thus to Lk . Summing over k, we obtain
$$\sum_{k=1}^{m} \mathbb{E}(|\langle g, L_k\rangle_n|^2) \le \frac{K_{m,w}}{n}\|g\|^2 \le \frac{\kappa}{\ln(n)}\, e_m(u)^2,$$
and we therefore obtain (16).
For the proof of (17) in item (iii) we place ourselves in the event where $\|G - I\|_2 \le \frac{1}{2}$. This property also means that
$$\frac{1}{2}\|v\|_2^2 \le \langle Gv, v\rangle_2 \le \frac{3}{2}\|v\|_2^2, \qquad v \in \mathbb{R}^m,$$
which can be expressed as a norm equivalence over $V_m$,
$$\frac{1}{2}\|v\|^2 \le \|v\|_n^2 \le \frac{3}{2}\|v\|^2, \qquad v \in V_m. \qquad (23)$$
We then write that for any v ∈ Vm ,
$$\begin{aligned}
\|u - P_m^n u\| &\le \|u - v\| + \|v - P_m^n u\|\\
&\le \|u - v\| + \sqrt{2}\,\|v - P_m^n u\|_n\\
&\le \|u - v\| + \sqrt{2}\,\|u - v\|_n\\
&\le (1 + \sqrt{2})\,\|u - v\|_{L^\infty},
\end{aligned}$$
where we have used (23), the Pythagorean identity $\|u - v\|_n^2 = \|u - P_m^n u\|_n^2 + \|v - P_m^n u\|_n^2$, and the fact that both $\|\cdot\|$ and $\|\cdot\|_n$ are dominated by $\|\cdot\|_{L^\infty}$. Since $v$ is arbitrary, we obtain (17).
Finally, (18) in item (iv) is proven in a very similar way as (16) in item (ii), by writing that in the event $\|G - I\|_2 > \frac{1}{2}$, we have $\|u - u_C\| = \|u\|$, so that
$$\mathbb{E}(\|u - u_C\|^2) \le e_m(u)^2 + 4\sum_{k=1}^{m} \mathbb{E}(|\langle g, L_k\rangle_n|^2) + 2\|u\|^2 n^{-r},$$
and we conclude in the same way. ✷
4 The noisy case
In a similar way as in [3, 8], we can analyze the case where the observations of u are affected by an additive
noise. In practical situations the noise may come from different sources, such as a discretization error when u
is evaluated by some numerical code, or a measurement error. The first one may be viewed as a perturbation
of u by a deterministic function h, that is, we observe
y i = u(xi ) + h(xi ).
The second one is typically modelled as a stochastic fluctuation, that is, we observe
y i = u(xi ) + η i .
where η i are independent realizations of the centered random variable η = y − u(x). Here, we do not
necessarily assume η and x to be independent, however we typically assume that the noise is centered, that
is,
E(η|x) = 0,
(24)
and we also assume uniformly bounded conditional variance
$$\sigma^2 := \sup_{x\in X} \mathbb{E}(|\eta|^2\,|\,x) < \infty. \qquad (25)$$
Note that we may also consider a noncentered noise, which amounts to adding the two contributions,
that is,
y i = u(xi ) + β i , β i = h(xi ) + η i ,
(26)
with h(x) = E(β|x). The following result shows that the estimates in Theorem 2 are robust under the
presence of such an additive noise.
Theorem 3 For any r > 0, if m and n are such that condition (14) is satisfied, then the following hold for
the noise model (26):
(i) if u ∈ L2 (X, dρ) satisfies a uniform bound (6), then the truncated weighted least-squares estimator satisfies
$$\mathbb{E}(\|u - u_T\|^2) \le (1 + 2\varepsilon(n))\,e_m(u)^2 + (8 + 2\varepsilon(n))\,\|h\|^2 + \frac{\overline{K}_{m,w}\,\sigma^2}{n} + 8\tau^2 n^{-r}, \qquad (27)$$
(ii) if u ∈ L2 (X, dρ), then the conditioned weighted least-squares estimator satisfies
$$\mathbb{E}(\|u - u_C\|^2) \le (1 + 2\varepsilon(n))\,e_m(u)^2 + (8 + 2\varepsilon(n))\,\|h\|^2 + \frac{\overline{K}_{m,w}\,\sigma^2}{n} + 2\|u\|^2 n^{-r}, \qquad (28)$$
where in both cases $\varepsilon(n) := \frac{4\kappa}{\ln(n)} \to 0$ as $n \to +\infty$, with $\kappa$ as in (10), and $\overline{K}_{m,w} := \int_X k_{m,w}\, d\rho$.
Proof: We again first consider the event where $\|G - I\|_2 \le \frac{1}{2}$. In this case we write
ku − uT k ≤ ku − uW k,
and use the decomposition $u - u_W = g - P_m^n g - h$ where $g = u - P_m u$ as in the proof of Theorem 2 and $h$ stands for the solution to the least-squares problem for the noise data $(\beta^i)_{i=1,\dots,n}$. Therefore
$$\|u - u_W\|^2 = \|g\|^2 + \|P_m^n g + h\|^2 \le \|g\|^2 + 2\|P_m^n g\|^2 + 2\|h\|^2 = \|g\|^2 + 2\|P_m^n g\|^2 + 2\sum_{j=1}^{m} |n_j|^2,$$
where $n = (n_j)_{j=1,\dots,m}$ is solution to
$$Gn = b, \qquad b := \left(\frac{1}{n}\sum_{i=1}^{n} \beta^i w(x^i)L_k(x^i)\right)_{k=1,\dots,m} = (b_k)_{k=1,\dots,m}.$$
Since $\|G^{-1}\|_2 \le 2$, it follows that
$$\|u - u_T\|^2 \le e_m(u)^2 + 8\sum_{k=1}^{m} |\langle g, L_k\rangle_n|^2 + 8\sum_{k=1}^{m} |b_k|^2.$$
Compared to the proof of Theorem 2, we need to estimate the expectation of the third term on the right
side. For this we simply write that
$$\mathbb{E}(|b_k|^2) = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} \mathbb{E}\big(\beta^i w(x^i)L_k(x^i)\,\beta^j w(x^j)L_k(x^j)\big).$$
For $i \ne j$, we have
$$\mathbb{E}\big(\beta^i w(x^i)L_k(x^i)\,\beta^j w(x^j)L_k(x^j)\big) = \mathbb{E}(\beta w(x)L_k(x))^2 = \mathbb{E}(h(x)w(x)L_k(x))^2 = \left(\int_X h\,w\,L_k\, d\mu\right)^2 = |\langle h, L_k\rangle|^2.$$
Note that the first and second expectations are with respect to the joint density of (x, β) and the third one
with respect to the density of x, that is, µ. For i = j, we have
$$\begin{aligned}
\mathbb{E}(|\beta^i w(x^i)L_k(x^i)|^2) &= \mathbb{E}(|\beta w(x)L_k(x)|^2)\\
&= \int_X \mathbb{E}(|\beta w(x)L_k(x)|^2\,|\,x)\, d\mu\\
&= \int_X \mathbb{E}(|\beta|^2\,|\,x)\,|w(x)L_k(x)|^2\, d\mu\\
&= \int_X \mathbb{E}(|\beta|^2\,|\,x)\,w(x)|L_k(x)|^2\, d\rho\\
&= \int_X \big(|h(x)|^2 + \mathbb{E}(|\eta|^2\,|\,x)\big)\,w(x)|L_k(x)|^2\, d\rho\\
&\le \int_X \big(|h(x)|^2 + \sigma^2\big)\,w(x)|L_k(x)|^2\, d\rho.
\end{aligned}$$
Summing up on i, j and k, and using condition (14), we obtain that
$$\sum_{k=1}^{m} \mathbb{E}(|b_k|^2) \le \Big(1 - \frac{1}{n}\Big)\|h\|^2 + \frac{K_{m,w}}{n}\|h\|^2 + \frac{\overline{K}_{m,w}\,\sigma^2}{n} \le \Big(1 + \frac{\kappa}{\log n}\Big)\|h\|^2 + \frac{\overline{K}_{m,w}\,\sigma^2}{n}. \qquad (29)$$
For the rest we proceed as for items (ii) and (iv) in the proof of Theorem 2, using that in the event $\|G - I\|_2 > \frac{1}{2}$ we have $\|u - u_T\| \le 2\tau$ and $\|u - u_C\| = \|u\|$.
✷
Remark 1 Note that for the standard least-squares method, corresponding to the case where w ≡ 1, we know that $\overline{K}_{m,w} = m$. The noise term thus takes the standard form $\frac{m\sigma^2}{n}$, as seen for example in Theorem 3 of [3] or in Theorem 1 of [8]. Note that, in any case, condition (14) implies that this term is bounded by $\frac{\kappa\sigma^2}{\log n}$.
The conclusions of Theorem 3 do not include the estimate in probability similar to item (iii) in Theorem
2. We can obtain such an estimate in the case of a bounded noise, where we assume that h ∈ L∞ (X) and
η is a bounded random variable, or equivalently, assuming that β is a bounded random variable, that is we
use the noise model (26) with
|β| ≤ D, a.s.
(30)
For this bounded noise model we have the following result.
Theorem 4 For any r > 0, if m and n are such that condition (14) is satisfied, then the following holds for the noise model (26) under (30): if u ∈ L∞ (X, dρ), then the nontruncated weighted least-squares estimator satisfies
$$\|u - u_W\| \le (1 + \sqrt{2})\,e_m(u)_\infty + \sqrt{2}\,D, \qquad (31)$$
with probability larger than $1 - 2n^{-r}$.
Proof: Similar to the proof of (iii) in Theorem 2, we place ourselves in the event where $\|G - I\|_2 \le \frac{1}{2}$ and use the norm equivalence (23). We then write that for any $v \in V_m$,
$$\|u - u_W\| \le \|u - v\| + \|v - P_m^n u\| + \|P_m^n \beta\|.$$
The first two terms already appeared in the noiseless case and can be treated in the same way. The new term $P_m^n \beta$ corresponds to the weighted least-squares approximation from the noise vector, and satisfies
$$\|P_m^n \beta\| \le \sqrt{2}\,\|P_m^n \beta\|_n \le \sqrt{2}\,\|\beta\|_n \le \sqrt{2}\,D.$$
This leads to (31). ✷
5 Random sampling from µm
The analysis in the previous sections prescribes the use of the optimal sampling measure dµm defined in
(20) for drawing the samples x1 , . . . , xn in the weighted least-squares method. In this section we discuss
numerical methods for generating independent random samples according to this measure, in a specific
relevant multivariate setting.
Here, we make the assumption that $X = \times_{i=1}^{d} X_i$ is a Cartesian product of univariate real domains $X_i$, and that dρ is a product measure, that is,
$$d\rho = \bigotimes_{i=1}^{d} d\rho_i,$$
where each dρi is a measure defined on Xi . We assume that each dρi is of the form
dρi (t) = ρi (t)dt,
for some nonnegative continuous function ρi , and therefore
$$d\rho(x) = \rho(x)\, dx, \qquad \rho(x) = \prod_{i=1}^{d} \rho_i(x_i), \qquad x = (x_1,\dots,x_d) \in X.$$
In particular dρ is absolutely continuous with respect to the Lebesgue measure.
We consider the following general setting: for each i = 1, . . . , d, we choose a univariate basis (φij )j≥0
orthonormal in L2 (Xi , dρi ). We then define the tensorized basis
$$L_\nu(x) := \prod_{i=1}^{d} \phi^i_{\nu_i}(x_i), \qquad \nu \in \mathbb{N}_0^d,$$
which is orthonormal in L2 (X, dρ). We consider general subspaces of the form
Vm := span{Lν : ν ∈ Λ},
for some multi-index set Λ ⊂ Nd0 such that #(Λ) = m. Thus we may rename the (Lν )ν∈Λ as (Lj )j=1,...,m
after a proper ordering has been chosen, for example in the lexicographical sense. For the given set Λ of
interest, we introduce
$$\lambda_j := \max_{\nu\in\Lambda} \nu_j \qquad \text{and} \qquad \lambda_\Lambda := \max_{j=1,\dots,d} \lambda_j.$$
The measure dµm is thus given by dµm (x) = µm (x)dx, where
$$\mu_m(x) := \frac{1}{m}\sum_{i=1}^{m} |L_i(x)|^2\, \rho(x) = \frac{1}{\#(\Lambda)}\sum_{\nu\in\Lambda} |L_\nu(x)|^2\, \rho(x), \qquad x \in X. \qquad (32)$$
We now discuss our sampling method for generating n independent random samples x1 , . . . , xn identically
distributed according to the multivariate density (32). Note that this density does not have a product structure, even though ρ is a product density. There exist many methods for sampling from multivariate densities.
In contrast to Markov Chain Monte Carlo methods mentioned in the introduction, the method that we next
propose exploits the particular structure of the multivariate density (32), in order to generate independent
samples in a straightforward manner, and sampling only from univariate densities.
Given the vector x = (x1 , . . . , xd ) of all the coordinates, for any A ⊆ {1, . . . , d}, we introduce the notation
xA := (xi )i∈A ,
Ā := {1, . . . , d} \ A,
xĀ := (xi )i∈Ā ,
and
$$dx_A := \bigotimes_{i\in A} dx_i, \qquad d\rho_A := \bigotimes_{i\in A} d\rho_i, \qquad \rho_A(x_A) := \prod_{i\in A} \rho_i(x_i), \qquad X_A := \times_{i\in A} X_i.$$
In the following, we mainly use the particular sets
$$A_q := \{1,\dots,q\} \qquad \text{and} \qquad \bar A_q := \{q+1,\dots,d\},$$
so that any x ∈ X may be written as x = (xAq , xĀq ).
Using such a notation, for any q = 1, . . . , d, we associate to the joint density µm its marginal density ψq
of the first q variables, namely
$$\psi_q(x_{A_q}) := \int_{X_{\bar A_q}} \mu_m(x_{A_q}, x_{\bar A_q})\, dx_{\bar A_q}. \qquad (33)$$
Since (φij )j≥0 is an orthonormal basis of L2 (Xi , dρi ), for any q = 1, . . . , d and any ν ∈ Nd0 , we obtain that
$$\int_{X_{\bar A_q}} |L_\nu(x_{A_q}, x_{\bar A_q})|^2\, \rho(x_{A_q}, x_{\bar A_q})\, dx_{\bar A_q} = \rho_{A_q}(x_{A_q}) \prod_{i=1}^{q} |\phi^i_{\nu_i}(x_i)|^2, \qquad x_{A_q} \in X_{A_q}.$$
Therefore, the marginal density (33) can be written in simple form as
$$\psi_q(x_{A_q}) = \frac{1}{\#(\Lambda)}\, \rho_{A_q}(x_{A_q}) \sum_{\nu\in\Lambda} \prod_{i=1}^{q} |\phi^i_{\nu_i}(x_i)|^2. \qquad (34)$$
Sequential conditional sampling. Based on the previous notation and remarks, we propose an algorithm
which generates n samples xk = (xk1 , . . . , xkd ) ∈ X with k = 1, . . . , n, that are independent and identically
distributed realizations from the density µm in (32).
In the multivariate case the coordinates can be arbitrarily reordered. Start with the first coordinate x1
and sample n points x11 , . . . , xn1 ∈ X1 from the univariate density
$$\varphi_1 : X_1 \to \mathbb{R} : t \mapsto \varphi_1(t) := \psi_1(t) = \frac{\rho_1(t)}{\#(\Lambda)} \sum_{\nu\in\Lambda} |\phi^1_{\nu_1}(t)|^2, \qquad (35)$$
which coincides with the marginal ψ1 of x1 calculated in (34). In the univariate case d = 1 the algorithm
terminates. In the multivariate case d ≥ 2, by iterating q from 2 to d, consider the qth coordinate xq ,
and sample n points x1q , . . . , xnq ∈ Xq in the following way: for any k = 1, . . . , n, given the values xkAq−1 =
(xk1 , . . . , xkq−1 ) ∈ XAq−1 that have been calculated at the previous q − 1 steps, sample the point xkq ∈ Xq from
the univariate density
$$\varphi_q : X_q \to \mathbb{R} : t \mapsto \varphi_q(t\,|\,x^k_{A_{q-1}}) := \rho_q(t)\, \frac{\sum_{\nu\in\Lambda} |\phi^q_{\nu_q}(t)|^2 \prod_{j=1}^{q-1} |\phi^j_{\nu_j}(x^k_j)|^2}{\sum_{\nu\in\Lambda} \prod_{j=1}^{q-1} |\phi^j_{\nu_j}(x^k_j)|^2}. \qquad (36)$$
The expression on the right-hand side of (36) is continuous at any t ∈ Xq and at any xkAq−1 ∈ XAq−1 .
Assumption 1 ensures that the denominator of (36) is strictly positive for any possible choice of xkAq−1 =
(xk1 , . . . , xkq−1 ) ∈ XAq−1 , and also ensures that the marginal ψq−1 is strictly positive at any point xkAq−1 ∈
XAq−1 such that ρAq−1 (xkAq−1 ) 6= 0. For any t ∈ Xq and any xkAq−1 ∈ XAq−1 such that ρAq−1 (xkAq−1 ) 6= 0, the
density $\varphi_q$ satisfies
$$\varphi_q(t\,|\,x^k_{A_{q-1}}) = \frac{\psi_q(x^k_{A_{q-1}}, t)}{\psi_{q-1}(x^k_{A_{q-1}})}, \qquad (37)$$
where the densities $\psi_q$ and $\psi_{q-1}$ are the marginals defined in (33) and evaluated at the points $(x^k_{A_{q-1}}, t) \in X_{A_q}$ and $x^k_{A_{q-1}} \in X_{A_{q-1}}$, respectively. From (37), using (34) and simplifying the term $\rho_{A_{q-1}}(x^k_{A_{q-1}}) = \prod_{j=1}^{q-1} \rho_j(x^k_j) \ne 0$, one obtains the right-hand side of (36). The right-hand side of equation (37) is well
defined for any t ∈ Xq and any xkAq−1 ∈ XAq−1 such that ρAq−1 (xkAq−1 ) 6= 0, and it is not defined at the
points xkAq−1 ∈ XAq−1 such that ρAq−1 (xkAq−1 ) = 0 where ψq−1 (xkAq−1 ) vanishes. Nonetheless, (37) has finite
limits at any point (xkAq−1 , t) ∈ XAq , and these limits equal expression (36).
According to technical terminology, the right-hand side of equation (37) is the conditional density of
xq given x1 , . . . , xq−1 with respect to the density ψq , and ϕq is the continuous extension to XAq of this
conditional density.
The densities ϕ1 , . . . , ϕd defined in (35)–(36) can be concisely rewritten for any q = 1, . . . , d as
$$\varphi_q(t\,|\,x^k_{A_{q-1}}) = \rho_q(t) \sum_{\nu\in\Lambda} \alpha_\nu(x^k_{A_{q-1}})\, |\phi^q_{\nu_q}(t)|^2, \qquad (38)$$
where the nonnegative weights $(\alpha_\nu)_{\nu\in\Lambda}$ are defined as
$$\alpha_\nu = \alpha_\nu(z_{A_{q-1}}) := \begin{cases} \dfrac{1}{\#(\Lambda)}, & \text{if } q = 1,\\[2ex] \dfrac{\prod_{j=1}^{q-1} |\phi^j_{\nu_j}(z_j)|^2}{\sum_{\nu\in\Lambda} \prod_{j=1}^{q-1} |\phi^j_{\nu_j}(z_j)|^2}, & \text{if } 2 \le q \le d, \end{cases}$$
for any $z_{A_{q-1}} = (z_1,\dots,z_{q-1}) \in X_{A_{q-1}}$. Since $\sum_{\nu\in\Lambda} \alpha_\nu = 1$, each density $\varphi_q$ in (38) is a convex combination
of the densities ρq |φq1 |2 , . . . , ρq |φqλq |2 . Note that if the orthonormal basis (φqj )j≥0 have explicit expressions
and can be evaluated at any point in Xq , then the same holds for the univariate densities (38). In particular,
in the polynomial case, for standard univariate densities ρi such as uniform, Chebyshev or Gaussian, the orthonormal polynomials (φij )j≥1 have expressions which are explicitly computable, for example by recursion
formulas.
In Algorithm 1 we summarize our sampling method, that sequentially samples the univariate densities
(38) to generate independent samples from the multivariate density (32). In the univariate case d = 1 the
algorithm does not run the innermost loop, and only samples from ϕ1 . In the multivariate case d ≥ 2
the algorithm runs also the innermost loop, and conditionally samples also from ϕ2 , . . . , ϕd . Our algorithm
therefore relies on accurate sampling methods for the relevant univariate densities (38).
Algorithm 1 Sequential conditional sampling for $\mu_m$.
INPUT: $n$, $d$, $\Lambda$, $\rho_i$, $(\phi^i_j)_{j\ge 0}$ for $i = 1,\dots,d$.
OUTPUT: $x^1,\dots,x^n$ i.i.d. $\sim \mu_m$.
for $k = 1$ to $n$ do
    $\alpha_\nu \leftarrow (\#(\Lambda))^{-1}$, for any $\nu\in\Lambda$.
    Sample $x^k_1$ from $t \mapsto \varphi_1(t) = \rho_1(t)\sum_{\nu\in\Lambda}\alpha_\nu|\phi^1_{\nu_1}(t)|^2$.
    for $q = 2$ to $d$ do
        $\alpha_\nu \leftarrow \dfrac{\prod_{j=1}^{q-1}|\phi^j_{\nu_j}(x^k_j)|^2}{\sum_{\nu\in\Lambda}\prod_{j=1}^{q-1}|\phi^j_{\nu_j}(x^k_j)|^2}$, for any $\nu\in\Lambda$.
        Sample $x^k_q$ from $t \mapsto \varphi_q(t) = \rho_q(t)\sum_{\nu\in\Lambda}\alpha_\nu|\phi^q_{\nu_q}(t)|^2$.
    end for
    $x^k \leftarrow (x^k_1,\dots,x^k_d)$.
end for
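To make Algorithm 1 concrete, the following is a minimal Python sketch of the sequential conditional sampler for the particular case where dρ is the uniform probability measure on [−1, 1]^d and the univariate bases are the L²-orthonormal Legendre polynomials. The grid-based inverse-transform step (which corresponds to the interpolation-based ITS discussed below), the function names and the example index set are illustrative choices, not part of the paper.

    import numpy as np
    from numpy.polynomial import legendre

    def legendre_orthonormal(j, t):
        # Degree-j Legendre polynomial, orthonormal w.r.t. the uniform probability measure dt/2 on [-1,1]
        c = np.zeros(j + 1)
        c[j] = 1.0
        return np.sqrt(2 * j + 1) * legendre.legval(t, c)

    def sample_mu_m(n, Lambda, rng, grid_size=2001):
        # Draw n i.i.d. samples from the density mu_m of (32), following Algorithm 1
        Lambda = np.asarray(Lambda)                  # shape (m, d): the multi-indices spanning V_m
        m, d = Lambda.shape
        t = np.linspace(-1.0, 1.0, grid_size)        # grid used for inverse-transform sampling
        rho = np.full_like(t, 0.5)                   # uniform density on [-1, 1]
        max_deg = int(Lambda.max())
        phi2 = np.array([legendre_orthonormal(j, t) ** 2 for j in range(max_deg + 1)])
        samples = np.empty((n, d))
        for k in range(n):
            alpha = np.full(m, 1.0 / m)              # weights alpha_nu, uniform for q = 1
            for q in range(d):
                # Univariate conditional density (38): rho_q(t) * sum_nu alpha_nu |phi_{nu_q}(t)|^2
                dens = rho * (alpha[:, None] * phi2[Lambda[:, q]]).sum(axis=0)
                cdf = np.cumsum(dens)
                cdf /= cdf[-1]
                samples[k, q] = np.interp(rng.random(), cdf, t)   # inverse transform on the grid
                # Update the weights with the new factor |phi_{nu_q}(x_q^k)|^2, as in Algorithm 1
                alpha *= np.array([legendre_orthonormal(j, samples[k, q]) ** 2
                                   for j in Lambda[:, q]])
                alpha /= alpha.sum()
        return samples

    # Example: total-degree-2 polynomial space in dimension d = 2
    rng = np.random.default_rng(0)
    Lambda = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
    x = sample_mu_m(1000, Lambda, rng)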
We close this section by discussing two possible methods for sampling from such densities: rejection
sampling and inversion transform sampling. Both methods equally apply to any univariate density ϕq , and
therefore we present them for any q arbitrarily chosen from 1 to d.
Rejection sampling (RS). For applying this method, one needs to find a suitable univariate density Θq ,
whose support contains the support of ϕq , and a suitable real Mq > 1 such that
ϕq (t) ≤ Mq Θq (t),
t ∈ supp(ϕq ).
The density Θq should be easier to sample than ϕq , i.e. efficient pseudorandom number generators for
sampling from Θq are available. The value of Mq should be the smallest possible. For sampling one point
from ϕq using RS: sample a point z from Θq , and sample u from the standard uniform U(0, 1). Then check if
u < ϕq (z)/Mq Θq (z): if this is the case then accept z as a realization from ϕq , otherwise reject z and restart
sampling z and u from beginning. On average, acceptance occurs once every Mq trials. Therefore, for a
given q, sampling one point from ϕq by RS requires on average Mq evaluations of the function
$$t \mapsto \frac{\varphi_q(t)}{M_q\,\Theta_q(t)} = \frac{\rho_q(t)}{M_q\,\Theta_q(t)} \sum_{\nu\in\Lambda} \alpha_\nu\, |\phi^q_{\nu_q}(t)|^2.$$
This amounts to evaluating $M_q$ times the terms $\phi^q_0$, $\phi^q_{\lambda_q}$ and a subset of the terms $\phi^q_1,\dots,\phi^q_{\lambda_q-1}$, depending on Λ. The coefficients $\alpha_\nu$ depend on the terms $\phi^j_0,\dots,\phi^j_{\lambda_j}$ for $j = 1,\dots,q-1$, which have already been evaluated when sampling the previous coordinates $1,\dots,q-1$. Thus, if we use RS for sampling the univariate densities, the overall computational cost of Algorithm 1 for sampling n points $x^1,\dots,x^n \in X$ is on average proportional to $n\sum_{q=1}^{d} M_q(\lambda_q + 1)$.
When the basis functions (φqj )j≥0 form a bounded orthonormal system, an immediate and simple choice
of the parameters in the algorithm is
$$M_q = \max_{\nu\in\Lambda} \|\phi^q_{\nu_q}\|_{L^\infty}^2, \qquad \text{and} \qquad \Theta_q(t) = \rho_q(t). \qquad (39)$$
With such a choice, we can quantify more precisely the average computational cost of sampling n points in dimension d. When $(\phi^q_j)_{j\ge 0}$ are the Chebyshev polynomials, whose $L^\infty$ norms satisfy $\|\phi^q_j\|_{L^\infty} \le \sqrt{2}$, we obtain the bound $2n\sum_{q=1}^{d}(\lambda_q + 1) \le 2nd(\lambda_\Lambda + 1) \le 2ndm$. When $(\phi^q_j)_{j\ge 0}$ are the Legendre polynomials, whose $L^\infty$ norms satisfy $\|\phi^q_j\|_{L^\infty} \le \sqrt{2j+1}$, we have the crude estimate $2n\sum_{q=1}^{d}(\lambda_q + 1)^2 \le 2nd(\lambda_\Lambda + 1)^2 \le 2ndm^2$. In general, when $(\phi^q_j)_{j\ge 0}$ are Jacobi polynomials, similar upper bounds can be derived, and the dependence of these bounds on n and d is linear.
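For concreteness, here is a minimal sketch of the rejection-sampling step for one coordinate under the simple choice (39), in the Chebyshev case on [−1, 1] where the orthonormal system is 1, √2 T1, √2 T2, . . . and one may take Mq = 2. The function names and the weights passed in are illustrative assumptions, not the authors' code.

    import math, random

    def chebyshev_orthonormal(j, t):
        # Orthonormal w.r.t. the Chebyshev probability measure dt / (pi * sqrt(1 - t^2))
        return 1.0 if j == 0 else math.sqrt(2.0) * math.cos(j * math.acos(t))

    def sample_phi_q(alpha, degrees, rng=random):
        # alpha: weights alpha_nu summing to one; degrees: the q-th components nu_q of the indices in Lambda
        M_q = 2.0  # max_j ||phi_j||_inf^2 for the Chebyshev system, cf. (39)
        while True:
            z = math.cos(math.pi * rng.random())     # a draw from the proposal Theta_q = Chebyshev measure
            u = rng.random()
            ratio = sum(a * chebyshev_orthonormal(j, z) ** 2
                        for a, j in zip(alpha, degrees)) / M_q
            if u < ratio:                            # accept with probability phi_q(z)/(M_q Theta_q(z))
                return z

On average one draw is accepted every Mq trials, which is what makes the bounded-system choice (39) attractive in the Chebyshev case.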
Inversion transform sampling (ITS). Let Φq : Xq → [0, 1] be the cumulative distribution function
associated to the univariate density ϕq . In the following, only when using the ITS method, we make
the further assumption that ρq vanishes at most a finite number of times in Xq . Such an assumption is
fulfilled in many relevant situations, e.g. when ρq is the density associated to Jacobi or Hermite polynomials
orthonormal in L2 (Xq , dρq ). Together with Assumption 1, this ensures that the function t 7→ Φq (t) is
continuous and strictly increasing on Xq . Hence Φq is a bijection between Xq and [0, 1], and it has a
unique inverse $\Phi_q^{-1} : [0,1] \to X_q$ which is continuous and strictly increasing on [0, 1]. Sampling from $\varphi_q$ using ITS can therefore be performed as follows: sample n independent realizations $u^1,\dots,u^n$ identically distributed according to the standard uniform U(0, 1), and obtain the n independent samples from $\varphi_q$ as $(\Phi_q^{-1}(u^1),\dots,\Phi_q^{-1}(u^n))$.
For any u ∈ [0, 1], computing $z = \Phi_q^{-1}(u) \in X_q$ is equivalent to finding the unique solution $z \in X_q$ to $\Phi_q(z) = u$. This can be done by elementary root-finding numerical methods, e.g. the bisection method
or Newton's method. As an alternative to root-finding methods, one can build an interpolant operator $I_q$ of $\Phi_q^{-1}$, and then approximate $\Phi_q^{-1}(u) \approx I_q(u)$ for any u ∈ [0, 1]. Such an interpolant $I_q$ can be constructed for example by piecewise linear interpolation, from the data $(\Phi_q(t^q_1), t^q_1),\dots,(\Phi_q(t^q_{s_q}), t^q_{s_q})$ at $s_q$ suitable points $t^q_1 < \dots < t^q_{s_q}$ in $X_q$.
Both root-finding methods and the interpolation method require evaluating the function Φq pointwise in
Xq . In general these evaluations can be computed using standard univariate quadrature formulas. When
(φqj )j≥0 are orthogonal polynomials, the explicit expression of the primitive of ϕq can be used for directly
evaluating the function Φq .
Finally we discuss the overall computational cost of Algorithm 1 for sampling n points x1 , . . . , xn ∈ X
when using ITS for sampling the univariate densities. With the bisection method, this overall cost amounts
to $n\sum_{q=1}^{d} \gamma_q W_q$, where $\gamma_q$ is the maximum number of iterations for locating the zero in $X_q$ up to some desired tolerance, and $W_q$ is the computational cost of each iteration. With the interpolation of $\Phi_q^{-1}$, the overall cost amounts to n evaluations of each interpolant $I_q$, in addition to the cost for building the interpolants, which does not depend on n.
6 Examples and numerical illustrations
This section presents the numerical performances of the weighted least-squares method compared to the
standard least-squares method, in three relevant situations where dρ can be either the uniform measure,
the Chebyshev measure, or the Gaussian measure. In each one of these three cases, we choose w and dµ
in the weighted least-squares method from (19) and (20), as prescribed by our analysis in Corollary 1. For
standard least squares we choose w and dµ as in (8). Our tests focus on the condition number of the Gramian
matrix, that quantifies the stability of the linear system (5) and the stability of the weighted and standard
least-squares estimators. A meaningful quantity is therefore the probability
Pr{cond(G) ≤ 3},
(40)
where, through (7), the value three of the threshold is related to the parameter δ = 1/2 in the previous
analysis. For any n and m, from (7) the probability (40) is larger than $\Pr\{\|G - I\|_2 \le \frac{1}{2}\}$. From Corollary 1,
under condition (21) between n, m and r, the Gramian matrix of weighted least squares satisfies (15) and
therefore the probability (40) is larger than 1 − 2n−r . For standard least squares, from Theorem 1 the
Gramian matrix satisfies (40) with probability larger than 1 − 2n−r , but under condition (10).
In the numerical tests the probability (40) is approximated by empirical probability, obtained by counting
how many times the event cond(G) ≤ 3 occurs when repeating the random sampling one hundred times.
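For concreteness, a minimal univariate sketch of such an experiment (d = 1, uniform dρ on [−1, 1], Legendre basis) could look as follows: draw from dµm by inverse transform on a grid, use the weight w = (m⁻¹ Σj |Lj|²)⁻¹, which is our reading of the choice (19), assemble the Gramian and count how often cond(G) ≤ 3. All names and numerical settings below are illustrative, not the authors' code.

    import numpy as np
    from numpy.polynomial import legendre

    def legendre_orthonormal_matrix(x, m):
        # Rows: points; columns: L_0, ..., L_{m-1}, orthonormal w.r.t. dx/2 on [-1, 1]
        return np.column_stack([np.sqrt(2 * j + 1) * legendre.legval(x, np.eye(m)[j])
                                for j in range(m)])

    def empirical_probability(m, n, repetitions=100, seed=0):
        rng = np.random.default_rng(seed)
        grid = np.linspace(-1.0, 1.0, 4001)
        dens = 0.5 * legendre_orthonormal_matrix(grid, m) ** 2
        mu_m = dens.sum(axis=1) / m                      # density of d mu_m, cf. (32)
        cdf = np.cumsum(mu_m)
        cdf /= cdf[-1]
        hits = 0
        for _ in range(repetitions):
            x = np.interp(rng.random(n), cdf, grid)      # i.i.d. samples from mu_m
            L = legendre_orthonormal_matrix(x, m)
            w = m / (L ** 2).sum(axis=1)                 # weights w(x_i), reciprocal renormalized Christoffel function
            G = (L * w[:, None]).T @ L / n               # Gramian of the weighted least-squares system
            if np.linalg.cond(G) <= 3.0:
                hits += 1
        return hits / repetitions

    print(empirical_probability(m=20, n=200))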
All the examples presented in this section confine to multivariate approximation spaces of polynomial
type. One natural assumption in this case is to require that the set Λ is downward closed, that is, satisfies
$$\nu \in \Lambda \ \text{ and } \ \tilde\nu \le \nu \;\Longrightarrow\; \tilde\nu \in \Lambda,$$
where $\tilde\nu \le \nu$ means that $\tilde\nu_j \le \nu_j$ for all j = 1, . . . , d. Then Vm is the polynomial space spanned by the
monomials
$$z \mapsto z^\nu := \prod_{j=1}^{d} z_j^{\nu_j},$$
and the orthonormal basis Lν is provided by taking each (φij )j≥0 to be a sequence of univariate orthonormal
polynomials of L2 (Xi , dρi ).
In both the univariate and multivariate forthcoming examples, the random samples from the measure
dµm are generated using Algorithm 1. The univariate densities ϕ1 , . . . , ϕd are sampled using the inversion
transform sampling method. The inverse of the cumulative distribution function is approximated using the
interpolation technique.
6.1 Univariate examples
In the univariate case d = 1, let the index set be Λ = {0, . . . , m − 1} and Vm = PΛ = span{z k : k =
0, . . . , m − 1}. We report in Fig. 1 the probability (40), approximated by empirical probability, when G is
the Gramian matrix of the weighted least-squares method. Different combinations of values for m and n are
tested, with three choices of the measure dρ: uniform, Gaussian and Chebyshev. The results do not show
perceivable differences among the performances of weighted least squares with the three different measures.
In any of the three cases, n/ ln(n) ≥ 4m is enough to obtain an empirical probability equal to one that
cond(G) ≤ 3. This confirms that condition (21) with any choice of r > 0 ensures (40), since it demands a larger number of samples.
Fig. 2 shows the probability (40) when G is the Gramian matrix of standard least squares. With the
uniform measure, the condition n/ ln(n) ≥ m2 is enough to have (40) with empirical probability larger
than 0.95.
Figure 1: Weighted least squares, Pr{cond(G) ≤ 3}, d = 1. Left: dρ uniform measure. Center: dρ Gaussian measure. Right: dρ Chebyshev measure.
Figure 2: Standard least squares, Pr{cond(G) ≤ 3}, d = 1. Left: dρ uniform measure. Center: dρ Gaussian measure. Right: dρ Chebyshev measure.
When dρ is the Gaussian measure, stability requires a very large number of evaluations, roughly
n/ ln(n) linearly proportional to exp(m/3). For the univariate Chebyshev measure, it is proven that standard
least squares are stable under the same minimal condition (21) as for weighted least squares. In accordance
with the theory, the numerical results obtained in this case with weighted and standard least squares are
indistinguishable, see Fig. 1-right and Fig. 2-right.
6.2 Multivariate examples
We now present some numerical tests in the multivariate setting. These tests are again based, as
in the previous section, on approximating the probability (40) by empirical probability. In dimension d
larger than one there are many possible ways to enrich the polynomial space PΛ . The number of different
downward closed sets whose cardinality equals m gets very large already for moderate values of m and d.
Therefore, we present the numerical results for a chosen sequence of polynomial spaces PΛ1 , . . . , PΛm such
that Λ1 ⊂ · · · ⊂ Λm , where each Λj ⊂ Nd0 is downward closed, #(Λj ) = dim(PΛj ) = j and the starting set
Λ1 contains only the null multi-index. All the tests in Fig. 3 and Fig. 4 have been obtained using the same
sequence of increasingly embedded polynomial spaces PΛ1 ⊂ . . . ⊂ PΛm , for both weighted and standard least
squares and for the three choices of the measures dρ. Such a choice allows us to establish a fair comparison
between the two methods and among different measures, without the additional variability arising from
modifications to the polynomial space.
Figure 3: Weighted least squares, Pr{cond(G) ≤ 3}, d = 10. Left: dρ uniform measure. Center: dρ Gaussian measure. Right: dρ Chebyshev measure.
Figure 4: Standard least squares, Pr{cond(G) ≤ 3}, d = 10. Left: dρ uniform measure. Center: dρ Gaussian measure. Right: dρ Chebyshev measure.
We report the results obtained for the tests in dimension d = 10. The results in Fig. 3 confirm that
weighted least squares always yield an empirical probability equal to one that cond(G) ≤ 3, provided that
n/ log(n) ≥ 2m. This condition ensures that (21) with any choice of r > 0 implies (40), thus verifying
Corollary 1. Again, the results do not show significant differences among the three choices of the measure
dρ: a straight line, with the same slope for all the three cases uniform, Chebyshev and Gaussian, separates
the two regimes corresponding to empirical probabilities equal to zero and one. Compared to the univariate
case in Fig. 1, the results in Fig. 3 exhibit a sharper transition between the two extreme regimes, and an
overall lower variability in the transition regime.
The results for standard least squares with d = 10 are shown in Fig. 4. In the case of the uniform
measure, in Fig. 4-right, stability is ensured if n/ ln(n) ≥ 3.5m, which is more demanding than the condition
n/ ln(n) ≥ 2m needed for the stability of weighted least squares in Fig. 3-right, but much less strict than
the condition required with standard least squares in the univariate case, where n/ ln(n) scales like m2 .
These phenomena have already been observed and described in [7]. Similar results as those with the uniform
measure are obtained with the Chebyshev measure in Fig. 4-left, where again standard least squares achieve
stability using more evaluations than weighted least squares in Fig. 3-left. The case of the Gaussian measure
drastically differs from the uniform and Chebyshev cases: the results in Fig. 4-center clearly indicate that a
very large number of evaluations n compared to m is required to achieve stability of standard least squares.
Let us mention that analogous results as those presented in Figs. 1 and 3 for weighted least squares have
been obtained also in other dimensions, and with many other sequences of increasingly embedded polynomial
spaces. In the next tables we report some of these results for selected values of d = 1, 2, 5, 10, 50, 100. We
choose n = 26599 and m = 200 that satisfy condition (21) with r = 1, and report in Table 1 the empirical
probabilities that approximate (40), again calculated over one hundred repetitions. This table provides
multiple comparisons: weighted least squares versus standard least squares, for the three choices of the
measure dρ (uniform, Gaussian and Chebyshev) and with d varying between 1 and 100.
method        dρ          d=1     d=2     d=5     d=10    d=50    d=100
weighted LS   uniform     1       1       1       1       1       1
weighted LS   Gaussian    1       1       1       1       1       1
weighted LS   Chebyshev   1       1       1       1       1       1
standard LS   uniform     0       0       0.54    1       1       1
standard LS   Gaussian    0       0       0       0       0       0
standard LS   Chebyshev   1       1       1       1       1       1

Table 1: Pr{cond(G) ≤ 3}, with n = 26559 and m = 200: weighted least squares versus standard least squares, dρ uniform versus dρ Gaussian versus dρ Chebyshev, d = 1, 2, 5, 10, 50, 100.
method        dρ          d=1       d=2       d=5       d=10      d=50     d=100
weighted LS   uniform     1.5593    1.4989    1.4407    1.4320    1.4535   1.4179
weighted LS   Gaussian    1.5994    1.5698    1.4743    1.4643    1.4676   1.4237
weighted LS   Chebyshev   1.5364    1.4894    1.4694    1.4105    1.4143   1.4216
standard LS   uniform     19.9584   29.8920   3.0847    1.9555    1.7228   1.5862
standard LS   Gaussian    ~10^19    ~10^19    ~10^19    ~10^16    ~10^9    ~10^3
standard LS   Chebyshev   1.5574    1.5367    1.5357    1.4752    1.4499   1.4625

Table 2: Average of cond(G), with n = 26559 and m = 200: weighted least squares versus standard least squares, dρ uniform versus dρ Gaussian versus dρ Chebyshev, d = 1, 2, 5, 10, 50, 100.
In Table 1, all the empirical probabilities related to results for weighted least squares are equal to one,
and confirm the theory since, for the chosen values of n, m and r, the probability (40) is larger than
1 − 5.67 × 10−7 . This value is computed using estimate (22) from the proof of Theorem 2. In contrast
to weighted least squares, whose empirical probability equal one independently of dρ and d, the empirical
probability of standard least squares does depend on the chosen measure, and to some extent on the dimension
d as well. With the uniform measure, the empirical probability that approximates (40) equals zero when
d = 1 or d = 2, equals 0.54 when d = 5, and equals one when d = 10, d = 50 or d = 100. In the Gaussian
case, standard least squares always feature null empirical probabilities. With the Chebyshev measure, the
condition number of G for standard least squares is always lower than three for any tested value of d.
In addition to the results in Table 1, further information is needed to assess how severe the lack of stability is when null empirical probabilities are obtained. To this aim, in Table 2 we also report the average
value of cond(G), obtained when averaging the condition number of G over the same repetitions used to
estimate the empirical probabilities in Table 1. The information in Table 2 is complementary to that in
Table 1. On the one hand they point out the stability and robustness of weighted least squares, showing a
tamed condition number with any measure dρ and any dimension d. On the other hand they provide further
insights on stability issues of standard least squares and their dependence on dρ and d. For standard least
squares with the uniform measure, the average condition number reduces as the dimension d increases, in
agreement with the conclusion drawn from Table 1. The Gramian matrix of standard least squares with
the Gaussian measure is very ill-conditioned for all tested values of d. For standard least squares with the
Chebyshev measure, the averaged condition number of G is only slightly larger than the one for weighted
least squares.
It is worth remarking that the results for standard least squares in Fig. 4, Table 1 and Table 2 are sensitive to the chosen sequence of polynomial spaces. Testing different sequences might produce different results, which however necessarily obey the estimates proven in Theorem 1 with uniform and Chebyshev
measures, when n, m and r satisfy condition (10). Many other examples with standard least squares have
been extensively discussed in previous works e.g. [7, 2], also in situations where n, m and r do not satisfy
condition (10) and therefore Theorem 1 does not apply. In general, when n, m and r do not satisfy (10), there
exist multivariate polynomial spaces of dimension m such that the Gramian matrix of standard least squares
with the uniform and Chebyshev measures does not satisfy (11). Examples of such spaces are discussed in
[7, 2]. Using these spaces would yield null empirical probabilities in Table 1 for standard least squares with
the uniform and Chebyshev measures.
For weighted least squares, when n, m and r satisfy condition (21), any sequence of polynomial spaces
yields empirical probabilities close to one, according to Corollary 1. Indeed such a robustness with respect
to the choices of dρ, of the polynomial space and of the dimension d represents one of the main advantages
of the weighted approach.
References
[1] G. Chardon, A. Cohen, and L. Daudet, Sampling and reconstruction of solutions to the Helmholtz
equation, Sampl. Theory Signal Image Process., 13:67–89, 2014.
[2] A. Chkifa, A. Cohen, G. Migliorati, F. Nobile, and R. Tempone, Discrete least squares polynomial
approximation with random evaluations - application to parametric and stochastic elliptic PDEs, M2AN,
49(3):815–837, 2015.
[3] A. Cohen , M.A. Davenport, and D. Leviatan, On the stability and accuracy of least squares approximations, Found. Comput. Math., 13:819–834, 2013.
[4] A. Doostan and J. Hampton, Coherence motivated sampling and convergence analysis of least squares
polynomial Chaos regression, Comput. Methods Appl. Mech. Engrg., 290:73–97,2015.
[5] J.D. Jakeman, A. Narayan, and T. Zhou, A Christoffel function weighted least squares algorithm for
collocation approximations, preprint.
[6] G. Migliorati, Multivariate Markov-type and Nikolskii-type inequalities for polynomials associated with
downward closed multi-index sets, J. Approx. Theory, 189:137–159, 2015.
[7] G. Migliorati, F. Nobile, E. von Schwerin, and R. Tempone, Analysis of discrete L2 projection on
polynomial spaces with random evaluations, Found. Comput. Math., 14:419–456, 2014.
[8] G. Migliorati, F. Nobile, and R. Tempone, Convergence estimates in probability and in expectation for
discrete least squares with noisy evaluations at random points, J. Multivar. Analysis, 142:167–182, 2015.
[9] E.B. Saff, and V. Totik, Logarithmic Potentials with External Fields, Springer, 1997.
[10] P. Nevai, Géza Freud, orthogonal polynomials and Christoffel Functions. A case study, J. Approx. theory,
48:3–167, 1986.
[11] A. Máté, P. Nevai, and V. Totik, Szegö’s extremum problem on the unit circle, Annals of Mathematics,
134:433–453, 1991.
[12] J. Tropp, User friendly tail bounds for sums of random matrices, Found. Comput. Math., 12:389–434,
2012.
Non-Termination Inference of Logic Programs
arXiv:cs/0406041v1 [] 22 Jun 2004
Etienne Payet and Fred Mesnard
IREMIA, Université de La Réunion, France
We present a static analysis technique for non-termination inference of logic programs. Our
framework relies on an extension of the subsumption test, where some specific argument
positions can be instantiated while others are generalized. We give syntactic criteria to
statically identify such argument positions from the text of a program. Atomic left looping
queries are generated bottom-up from selected subsets of the binary unfoldings of the program of interest. We propose a set of correct algorithms for automating the approach. Then,
non-termination inference is tailored to attempt proofs of optimality of left termination conditions computed by a termination inference tool. An experimental evaluation is reported.
When termination and non-termination analysis produce complementary results for a logic
procedure, then with respect to the leftmost selection rule and the language used to describe
sets of atomic queries, each analysis is optimal and together, they induce a characterization
of the operational behavior of the logic procedure.
Keywords: languages, verification, logic programming, static analysis, non-termination analysis, optimal termination condition
1 Introduction
Since the work of N. Lindenstrauss on TermiLog [20, 12], several automatic tools for termination
checking (e.g. TALP [3]) or termination inference (e.g. cTI [25, 26] or TerminWeb [17]) are now
available to the logic programmer. As the halting problem is undecidable for logic programs, such
analyzers compute sufficient termination conditions implying left termination. In most works,
only universal left termination is considered and termination conditions rely on a language for
describing classes of atomic queries. The search tree associated to any (concrete) query satisfying
a termination condition is guaranteed to be finite. When terms are abstracted using the term-size
norm, the termination conditions are (disjunctions of) conjunctions of conditions of the form “the
i-th argument is ground”. Let us call this language Lterm .
In this report, which is based on an earlier conference paper [27], we present the first approach
to non-termination inference tailored to attempt proofs of optimality of termination conditions at
verification time for pure logic programs. The aim is to ensure the existence, for each class of
atomic queries not covered by a termination condition, of one query from this class which leads to
an infinite search tree when such a query is proved using any standard Prolog engine. We shall first
present an analysis which computes classes of left looping queries, where any atomic query from
such a class is guaranteed to lead to at least one infinite derivation under the usual left-to-right
selection rule. Intuitively, we begin by computing looping queries from recursive binary clauses
of the form p(. . .) ← p(. . .). Then we try to add binary clauses of the form q(. . .) ← p(. . .) to
increase the set of looping queries. Finally by combining the result of non-termination inference
with termination inference, for each predicate, we compute the set of modes for which the overall
verification system has no information.
The main contributions of this work are:
• A new application of binary unfoldings to left loop inference. [16] introduced the binary
1
unfoldings of a logic program P as a goal independent technique to transform P into a
possibly infinite set of binary clauses, which preserves the termination property [7] while
abstracting the standard operational semantics. We present a correct algorithm to construct
left looping classes of atomic goals, where such classes are computed bottom-up from selected
subsets of the binary unfoldings of the analyzed program.
• A correct algorithm which, when combined with termination inference [23], may detect
optimal left termination conditions expressed in Lterm for logic programs. When termination
and non-termination analysis produce complementary results for a logic procedure, then with
respect to the leftmost selection rule and the language used to describe sets of atomic queries,
each analysis is optimal and together, they induce a characterization of the operational
behavior of the logic procedure.
• A report on the experimental evaluation we conduct. We have fully implemented termination
and non-termination inference for logic programs. We have run the couple of analyzers on
a set of classical logic programs, the sizes of which range from 2 to 177 clauses. The results
of this experiment should help the reader to appreciate the value of the approach.
We organize the paper as follows: Section 2 presents the notations. In Section 3 we study
loop inference for binary programs. We offer a full set of correct algorithms for non-termination
inference in Section 4 and optimality proofs of termination conditions in Section 5. Finally, in
Section 6, we discuss related works. The detailed proofs of the results can be found in Appendix B,
at the end of the article.
2 Preliminaries
2.1 Functions
Let E and F be two sets. Then, f : E → F denotes that f is a partial function from E to F and
f : E F denotes that f is a function from E to F . The domain of a partial function f from E
to F is denoted by Dom(f ) and is defined as: Dom(f ) = {e | e ∈ E, f (e) exists}. Thus, if f is a
function from E to F , then Dom(f ) = E. Finally, if f : E → F is a partial function and E ′ is a
set, then f |E ′ is the function from Dom(f ) ∩ E ′ to F such that for each e ∈ Dom(f ) ∩ E ′ , f |E ′
maps e to f (e).
2.2 Logic Programming
We strictly adhere to the notations, definitions, and results presented in [1].
N denotes the set of non-negative integers and for any n ∈ N , [1, n] denotes the set {1, . . . , n}.
If n = 0 then [1, n] = ∅.
From now on, we fix a language L of programs. We assume that L contains an infinite number
of constant symbols. The set of relation symbols of L is Π, and we assume that each relation
symbol p has a unique arity, denoted arity(p). T UL (resp. T BL ) denotes the set of all (ground
and non ground) terms of L (resp. atoms of L). A query is a finite sequence of atoms A1 , . . . , An
(where n ≥ 0). When n = 1, we say that the query is atomic. Throughout this article, the
variables of L are denoted by X, Y, Z, . . . , the constant symbols by a, b, . . . , the function symbols
by f, g, h, . . . , the relation symbols by p, q, r, . . . , the atoms by A, B, . . . and the queries by Q, Q′ ,
. . . or by A, B, . . .
Let t be a term. Then V ar(t) denotes the set of variables occurring in t. This notation is
extended to atoms, queries and clauses. Let θ := {X1 /t1 , . . . , Xn /tn } be a substitution. We
denote by Dom(θ) the set of variables {X1 , . . . , Xn } and by Ran(θ) the set of variables appearing
in t1 , . . . , tn . We define V ar(θ) = Dom(θ) ∪ Ran(θ). Given a set of variables V , θ|V denotes the
substitution obtained from θ by restricting its domain to V .
Let t be a term and θ be a substitution. Then, the term tθ is called an instance of t. If θ is a
renaming (i.e. a substitution that is a 1-1 and onto mapping from its domain to itself), then tθ is
called a variant of t. Finally, t is called more general than t′ if t′ is an instance of t.
A logic program is a finite set of definite clauses. In program examples, we use the ISO-Prolog
syntax. Let P be a logic program. Then ΠP denotes the set of relation symbols appearing in P .
In this paper, we only focus on left derivations i.e. we only consider the leftmost selection rule.
Consider a non-empty query B, C and a clause c. Let H ← B be a variant of c variable disjoint with B, C and assume that B and H unify. Let θ be an mgu of B and H. Then B, C =⇒^θ_c (B, C)θ is a left derivation step with H ← B as its input clause. If the substitution θ or the clause c is
irrelevant, we drop a reference to it.
Let Q0 be a query. A maximal sequence Q0 =⇒^θ1_c1 Q1 =⇒^θ2_c2 · · · of left derivation steps is called a left derivation of P ∪ {Q0} if c1, c2, . . . are clauses of P and if the standardization apart condition holds, i.e. each input clause used is variable disjoint from the initial query Q0 and from the mgu's and input clauses used at earlier steps. A finite left derivation may end up either with the empty query (then it is a successful left derivation) or with a non-empty query (then it is a failed left derivation). We say Q0 left loops with respect to (w.r.t.) P if there exists an infinite left derivation of P ∪ {Q0}. We write Q =⇒^+_P Q′ if there exists a finite non-empty prefix ending at Q′ of a left derivation of P ∪ {Q}.
2.3 The Binary Unfoldings of a Logic Program
Let us present the main ideas about the binary unfoldings [16] of a logic program, borrowed from
[7]. This technique transforms a logic program P into a possibly infinite set of binary clauses.
Intuitively, each generated binary clause H ← B (where B is either an atom or the atom true
which denotes the empty query) specifies that, with respect to the original program P , a call to
H (or any of its instances) necessarily leads to a call to B (or its corresponding instance).
More precisely, let Q be an atomic query. Then A is a call in a left derivation of P ∪ {Q} if Q =⇒^+_P A, B. We denote by calls_P(Q) the set of calls which occur in the left derivations of P ∪ {Q}.
The specialization of the goal independent semantics for call patterns for the left-to-right selection
rule is given as the fixpoint of an operator TPβ over the domain of binary clauses, viewed modulo
renaming. In the definition below, id denotes the set of all binary clauses of the form true ← true
or p(X1 , . . . , Xn ) ← p(X1 , . . . , Xn ) for any p ∈ ΠP , where arity(p) = n.
$$T_P^\beta(X) = \left\{ (H \leftarrow B)\theta \;\middle|\; \begin{array}{l} c := H \leftarrow B_1,\dots,B_m \in P,\ i \in [1,m],\\ \langle H_j \leftarrow true \rangle_{j=1}^{i-1} \in X \text{ renamed with fresh variables},\\ H_i \leftarrow B \in X \cup id \text{ renamed with fresh variables},\\ i < m \Rightarrow B \ne true,\\ \theta = mgu(\langle B_1,\dots,B_i\rangle, \langle H_1,\dots,H_i\rangle) \end{array} \right\}$$
We define its powers as usual. It can be shown that the least fixpoint of this monotonic operator
always exists and we set bin unf (P ) := lfp(TPβ ). Then the calls that occur in the left derivations of
P ∪{Q} can be characterized as follows: calls P (Q) = {Bθ|H ← B ∈ bin unf (P ), θ = mgu(Q, H)}.
This last property was one of the main initial motivations of the proposed abstract semantics, enabling logic programs optimizations. Similarly, bin unf (P ) gives a goal independent representation
of the success patterns of P .
But we can extract more information from the binary unfoldings of a program P : universal
left termination of an atomic query Q with respect to P is identical to universal termination of
Q with respect to bin unf (P ). Note that the selection rule is irrelevant for a binary program and
an atomic query, as each subsequent query has at most one atom. The following result lies at the
heart of Codish’s approach to termination:
Theorem 2.1 [7] Let P be a program and Q an atomic query. Then Q left loops with respect to
P iff Q loops with respect to bin unf (P ).
Notice that bin unf (P ) is a possibly infinite set of binary clauses. For this reason, in the algorithms
of Section 4, we compute only the first max iterations of TPβ where max is a parameter of the
analysis. As an immediate consequence of Theorem 2.1, assume that we detect that Q loops with
respect to a subset of the binary clauses of TPβ ↑ i, with i ∈ N . Then Q loops with respect to
bin unf (P ) hence Q left loops with respect to P .
Example 2.2 Consider the following program P (see [21], p. 56–58):
p(X,Z) :- p(Y,Z),q(X,Y).
p(X,X).
q(a,b).
The binary unfoldings of P are:
TPβ ↑ 0 = ∅
TPβ ↑ 1 = {p(X, Z) ← p(Y, Z), p(X, X) ← true, q(a, b) ← true} ∪ TPβ ↑ 0
TPβ ↑ 2 = {p(a, b) ← true, p(X, Y) ← q(X, Y)} ∪ TPβ ↑ 1
TPβ ↑ 3 = {p(X, b) ← q(X, a), p(X, Z) ← q(Y, Z)} ∪ TPβ ↑ 2
TPβ ↑ 4 = {p(X, b) ← q(Y, a)} ∪ TPβ ↑ 3
TPβ ↑ 5 = TPβ ↑ 4 = bin unf (P)
Let Q := p(X, b). Note that Q loops w.r.t. TPβ ↑ 1, hence it loops w.r.t. bin unf (P ). So Q left
loops w.r.t. P .
3 Loop Inference Using Filters
In this paper, we propose a mechanism that, given a logic program P , generates at verification
time classes of atomic queries that left loop w.r.t. P . Our approach is completely based on
the binary unfoldings of P and relies on Theorem 2.1. It consists in computing a finite subset
BinProg of bin unf (P ) and then in inferring a set of atomic queries that loop w.r.t. BinProg. By
Theorem 2.1, these queries left loop w.r.t. P .
Hence, we reduce the problem of inferring looping atomic queries w.r.t. a logic program to
that of inferring looping atomic queries w.r.t. a binary program. This is why in the sequel, our
definitions, results and discussions mainly concentrate on binary programs only.
The central point of our method is the subsumption test, as the following lifting lemma,
specialized for the leftmost selection rule, holds:
Lemma 3.1 (One Step Lifting, [1]) Let Q =⇒ Q1 be a left derivation step, Q′ be a query that is
c
more general than Q and c′ be a variant of c variable disjoint with Q′ . Then, there exists a query
Q′1 that is more general than Q1 and such that Q′ =⇒ Q′1 with input clause c′ .
c
From this result, we derive:
Corollary 3.2 Let c := H ← B be a binary clause. If B is more general than H then H loops
w.r.t. {c}.
Corollary 3.3 Let c := H ← B be a clause from a binary program BinProg . If B loops w.r.t.
BinProg then H loops w.r.t. BinProg .
These corollaries provide two sufficient conditions that can be used to design an incremental
bottom-up mechanism that infers looping atomic queries. Given a binary program BinProg , it
suffices to build the set Q of atomic queries consisting of the heads of the clauses whose body is
more general than the head. By Corollary 3.2, the elements of Q loop w.r.t. BinProg. Then, by
Corollary 3.3, the head of the clauses whose body is more general than an element of Q can safely be added to Q while retaining the property that every query in Q loops w.r.t. BinProg.
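For illustration, here is a minimal Python sketch of this bottom-up construction (not the implementation used by the authors), with a toy term representation: variables are capitalized strings and a term or atom f(t1, . . . , tn) is the tuple ('f', t1, . . . , tn). The subsumption test "is more general than" is implemented by one-way matching.

    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def match(general, instance, subst):
        # One-way matching: extend subst so that general instantiated by subst equals instance, else None
        if is_var(general):
            if general in subst:
                return subst if subst[general] == instance else None
            s = dict(subst)
            s[general] = instance
            return s
        if not isinstance(general, tuple) or not isinstance(instance, tuple) \
                or general[0] != instance[0] or len(general) != len(instance):
            return subst if general == instance else None
        for g, i in zip(general[1:], instance[1:]):
            subst = match(g, i, subst)
            if subst is None:
                return None
        return subst

    def more_general(a, b):
        # True iff atom a is more general than atom b (b is an instance of a)
        return match(a, b, {}) is not None

    def infer_looping_queries(bin_prog):
        # bin_prog: list of (head, body) binary clauses; returns heads proven to loop
        looping = [h for (h, b) in bin_prog if more_general(b, h)]      # Corollary 3.2
        changed = True
        while changed:                                                  # Corollary 3.3, bottom-up
            changed = False
            for (h, b) in bin_prog:
                if h not in looping and any(more_general(b, q) for q in looping):
                    looping.append(h)
                    changed = True
        return looping

    # The recursive binary clause of Example 2.2: p(X,Z) :- p(Y,Z)
    prog = [(('p', 'X', 'Z'), ('p', 'Y', 'Z'))]
    print(infer_looping_queries(prog))   # [('p', 'X', 'Z')]: its body subsumes its head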
Notice that using this technique, we may not detect some looping queries. In [15], the authors
show that there is no algorithm that, when given a right-linear binary recursive clause (i.e. a
4
binary clause p(· · · ) ← p(· · · ) such that all variables occur at most once in the body) and given
an atomic query, always decides in a finite number of steps whether or not the resolution stops. In
the case of a linear atomic query (i.e. an atomic query such that all variables occur at most once)
however, the halting problem of derivations w.r.t. one binary clause is decidable [33, 13, 14].
It can be argued that the condition provided by Corollary 3.2 is rather weak because it fails
at inferring looping queries in some simple cases. This is illustrated by the following example.
Example 3.4 Let c be the clause p(X) ← p(f (X)). We have the infinite derivation:
p(X) =⇒_c p(f(X)) =⇒_c p(f(f(X))) =⇒_c p(f(f(f(X)))) · · ·
But, since the body of c is not more general than its head, Corollary 3.2 does not allow to infer
that p(X) loops w.r.t. {c}.
In this section, we distinguish a special kind of argument positions that are “neutral” for
derivation. Our goal is to extend the relation “is more general than” by, roughly, disregarding the
predicate arguments whose position has been identified as neutral. Doing so, we aim at inferring
more looping queries.
Intuitively, a set of predicate argument positions ∆ is “Derivation Neutral” (DN for short) for
a binary clause c when the following holds. Let Q be an atomic query and Q′ be a query obtained
by replacing by any terms the predicate arguments in Q whose position is in ∆. If Q =⇒_c Q1 then Q′ =⇒_c Q′1 where Q′1 is more general than Q1 up to the arguments whose position is in ∆.
Example 3.5 (Example 3.4 continued) The predicate p has only one argument position, so let
us consider ∆ := ⟨p ↦ {1}⟩ which distinguishes position 1 for predicate p. For any derivation step p(s) =⇒_c p(s1) if we replace s by any term t then there exists a derivation step p(t) =⇒_c p(t1).
Notice that p(t1 ) is more general than p(s1 ) up to the argument of p. So, by the intuition described
above, ∆ is DN for c. Consequently, as in c the body p(f (X)) is more general than the head p(X)
up to the argument of p which is neutral, by an extended version of Corollary 3.2 there exists an
infinite derivation of {c} ∪ {p(X)}.
Let us give some more concrete examples of DN positions.
Example 3.6 The second argument position of the relation symbol append in the program APPEND:
append([],Ys,Ys).                                % C1
append([X|Xs],Ys,[X|Zs]) :- append(Xs,Ys,Zs).    % C2
is DN for C2. Notice that a very common programming technique called accumulator passing
(see for instance e.g. [28], p. 21–25) always produces DN positions. A classical example of the
accumulator passing technique is the following program REVERSE.
reverse(L,R) :- rev(L,[],R).             % C1
rev([],R,R).                             % C2
rev([X|Xs],R0,R) :- rev(Xs,[X|R0],R).    % C3
Concerning termination, we may ignore the second and the third argument of rev in the recursive
clause C3 while unfolding a query with this clause. Only the first argument can stop the unfolding.
But we can be even more precise. Instead of only identifying positions that can be totaly
disregarded as in the above examples, we can try to identify positions where we can place any
terms for which a given condition holds.
Example 3.7 Consider the clause c := p(f (X)) ← p(f (f (X))). If we mean by a DN position
a position where we can place any terms, then the argument position of p is not DN for c. This
is because, for example, we have the derivation step p(X) =⇒_c p(f(f(X1))) but if we replace X by g(X) then there is no derivation step of {c} ∪ {p(g(X))}. However, if we mean by a DN position
a position where we can place any instances of f (X), then the argument position of p is DN for
c.
In the sequel of the section, we define more precisely DN positions as positions where we can
place any terms satisfying certain conditions identified by “filters”. We use filters to present an
extension of the relation “is more general than” and we propose an extended version of Corollary 3.2. We offer two syntactic conditions of increasing power for easily identifying DN positions
from mere inspection of the text of a logic program. The practical impact of such filters will be
tackled in Section 5.
3.1 Filters
Let us first introduce the notion of a filter. We use filters in order to distinguish atoms, some
arguments of which satisfy a given condition. A condition upon atom arguments, i.e. terms, can
be defined as a function in the following way.
Definition 3.8 (Term-condition) A term-condition is a function from the set of terms T UL to
{true, false}.
Example 3.9 The following functions are term-conditions.
    ftrue : T UL → {true, false} : t ↦ true
    f1 : T UL → {true, false} : t ↦ true iff t is an instance of [X|Y]
    f2 : T UL → {true, false} : t ↦ true iff t unifies with h(a, X)
Notice that a term-condition might give distinct results for two terms which are equal modulo
renaming. For instance f2 (X) = false and f2 (Y ) = true. However, in Definition 3.12 below, we
will only consider variant independent term-conditions.
Definition 3.10 (Variant Independent Term-Condition) A term-condition f is variant independent if, for every term t, f (t) = true implies that f (t′ ) = true for every variant t′ of t.
Example 3.11 (Example 3.9 continued) ftrue and f1 are variant independent while f2 is not.
We restrict the class of term-conditions to that of variant independent ones because we want
to extend the relation “is more general than” so that if an atom A is linked to an atom B by
the extended relation, then every variant of A is also linked to B (see Proposition 3.16 below).
This will be essential to establish the forthcoming main Proposition 3.20 which is an extension of
Corollary 3.2. Now we can define what we exactly mean by a filter.
Definition 3.12 (Filter) A filter, denoted by ∆, is a function from Π such that: for each p ∈ Π,
∆(p) is a partial function from [1, arity(p)] to the set of variant independent term-conditions.
Example 3.13 (Example 3.9 continued) Let p be a relation symbol whose arity equals 3. The filter ∆ which maps p to the function ⟨1 ↦ ftrue, 2 ↦ f1⟩ and any q ∈ Π \ {p} to ⟨⟩ is denoted ∆ := ⟨ p ↦ ⟨1 ↦ ftrue, 2 ↦ f1⟩ ⟩.
3.2 Extension of the Relation "Is More General Than"
Given a filter ∆, the relation “is more general than” can be extended in the following way: an atom
A := p(· · · ) is ∆-more general than B := p(· · · ) if the “is more general than” requirement holds
for those arguments of A whose position is not in the domain of ∆(p) while the other arguments
satisfy their associated term-condition.
Definition 3.14 (∆-more general) Let ∆ be a filter and A and B be two atoms.
• Let η be a substitution. Then A is ∆-more general than B for η if:
A = p(s1 , . . . , sn )
B = p(t1 , . . . , tn )
∀i ∈ [1, n] \ Dom(∆(p)), ti = si η
∀i ∈ Dom(∆(p)), ∆(p)(i)(si ) = true .
• A is ∆-more general than B if there exists a substitution η s.t. A is ∆-more general than B
for η.
An atomic query Q is ∆-more general than an atomic query Q′ if either Q and Q′ are both empty
or Q contains the atom A, Q′ contains the atom B and A is ∆-more general than B.
Example 3.15 (Example 3.13 continued) Let
  A := p( b , X , h(a, X) )
  B := p( a , [a|b] , X )
  C := p( a , [a|b] , h(Y, b) ) .
Then, A is not ∆-more general than B and C because, for instance, its second argument X is
not an instance of [X|Y ] as required by f1 . On the other hand, B is ∆-more general than A for
the substitution {X/h(a, X)} and B is ∆-more general than C for the substitution {X/h(Y, b)}.
Finally, C is not ∆-more general than A because h(Y, b) is not more general than h(a, X) and C
is not ∆-more general than B because h(Y, b) is not more general than X.
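To make the test of Definition 3.14 concrete, here is a small Prolog sketch (ours, not the paper's implementation). A filter value ∆(p) is encoded, by assumption, as a list of Position-Condition pairs; subsumes_term/2 checks the “is more general than” requirement jointly on the non-distinguished arguments, and copy_term/2 renames the more general atom apart so that the test also behaves as expected when the two atoms share variables, as in Example 3.15. The term-conditions ftrue/1 and f1/1 are those sketched in Section 3.1.

% delta_more_general(+FilterP, +A, +B): A is Delta-more general than B,
% where FilterP encodes Delta(p) as a list of I-Condition pairs.
delta_more_general(FilterP, A, B) :-
    A =.. [P|As], B =.. [P|Bs],
    length(As, N), length(Bs, N),
    split_args(1, As, Bs, FilterP, KeptA, KeptB),
    GA0 =.. [args|KeptA], GB =.. [args|KeptB],
    copy_term(GA0, GA),                 % rename A's kept arguments apart
    subsumes_term(GA, GB).              % one substitution eta for all kept positions

split_args(_, [], [], _, [], []).
split_args(I, [S|Ss], [T|Ts], FilterP, KeptA, KeptB) :-
    I1 is I + 1,
    (   memberchk(I-Cond, FilterP)
    ->  call(Cond, S),                  % distinguished position: only the condition on S matters
        split_args(I1, Ss, Ts, FilterP, KeptA, KeptB)
    ;   KeptA = [S|KeptA1], KeptB = [T|KeptB1],
        split_args(I1, Ss, Ts, FilterP, KeptA1, KeptB1)
    ).

% Example 3.15, with Delta(p) encoded as [1-ftrue, 2-f1]:
% ?- delta_more_general([1-ftrue, 2-f1], p(a,[a|b],X), p(b,X,h(a,X))).
%       succeeds: B is Delta-more general than A.
% ?- delta_more_general([1-ftrue, 2-f1], p(b,X,h(a,X)), p(a,[a|b],X)).
%       fails: the second argument X of A does not satisfy f1.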
As in a filter the term-conditions are variant independent, we get the following proposition.
Proposition 3.16 Let ∆ be a filter and A and B be two atoms. If A is ∆-more general than B
then every variant of A is ∆-more general than B.
The next proposition states an intuitive result:
Proposition 3.17 Let ∆ be a filter and A and B be two atoms. Then A is ∆-more general than
B if and only if there exists a substitution η such that V ar(η) ⊆ V ar(A, B) and A is ∆-more
general than B for η.
3.3 Derivation Neutral Filters: Operational Definition
In the sequel of this paper, we focus on “derivation neutral” filters. The name “derivation neutral” stems from the fact that in any derivation of an atomic query Q, the arguments of Q whose position is distinguished by such a filter can be safely replaced by any terms satisfying the associated term-condition. Such a replacement does not modify the derivation process.
Definition 3.18 (Derivation Neutral) Let ∆ be a filter and c be a binary clause. We say that ∆ is DN for c if for each derivation step Q =⇒_c Q1 where Q is an atomic query, for each Q′ that is ∆-more general than Q and for each variant c′ of c variable disjoint with Q′, there exists a query Q′1 that is ∆-more general than Q1 and such that Q′ =⇒_c Q′1 with input clause c′. This definition is extended to binary programs: ∆ is DN for P if it is DN for each clause of P.
Example 3.19 The following examples illustrate the previous definition.
• Let us reconsider the program APPEND from Example 3.6 with the term-condition ftrue defined
in Example 3.9 and the filter ∆ := happend 7→ h2 7→ ftrue ii. ∆ is DN for C2. However, ∆
is not DN for APPEND because it is not DN for C1.
• Consider the following clause:
merge([X|Xs],[Y|Ys],[X|Zs]) :- merge(Xs,[Y|Ys],Zs).
The filter hmerge 7→ h2 7→ f1 ii, where the term-condition f1 is defined in Example 3.9, is
DN for this clause.
In the next subsection, we present some syntactic criteria for identifying correct DN filters. To prove that the above filters are indeed DN, we will simply check that they fulfil these sufficient syntactic criteria.
Derivation neutral filters lead to the following extended version of Corollary 3.2 (take ∆ such
that for any p, ∆(p) is a function whose domain is empty):
Proposition 3.20 Let c := H ← B be a binary clause and ∆ be a filter that is DN for c. If B is
∆-more general than H then H loops w.r.t. {c}.
We point out that the above results remain valid when the program under consideration is
restricted to its set of clauses used in the derivation steps. For instance, although the filter ∆ of
Example 3.19 is not DN for APPEND, it will help us to construct queries which loop w.r.t. C2. Such
queries also loop w.r.t. APPEND.
Notice that lifting lemmas are used in the literature to prove completeness of SLD-resolution. As Definition 3.18 corresponds to an extended version of the One Step Lifting Lemma 3.1, it may be worthwhile to investigate its consequences from the model-theoretic point of view.
First of all, a filter may be used to “expand” atoms: at every position distinguished by the filter, the argument is replaced by any term that satisfies the associated term-condition.

Definition 3.21 Let ∆ be a filter and A be an atom. The expansion of A w.r.t. ∆, denoted A↑∆, is the set defined as
  A↑∆ := {A} ∪ {B ∈ TB_L | B is ∆-more general than A for ǫ}
where ǫ denotes the empty substitution.
Notice that in this definition, we do not necessarily have the inclusion
  {A} ⊆ {B ∈ TB_L | B is ∆-more general than A for ǫ} .
For instance, suppose that A := p(f(X)) and that ∆ maps p to the function h1 7→ f i where f is the term-condition mapping any term t to true iff t is an instance of g(X). Then
  {B ∈ TB_L | B is ∆-more general than A} = {p(t) | t is an instance of g(X)}
with A ∉ {p(t) | t is an instance of g(X)}.
Term interpretations in the context of logic programming were first introduced in [6] and further
investigated in [11] and then in [22]. A term interpretation for L is identified with a (possibly
empty) subset of the term base T BL . So, as for atoms, a term interpretation can be expanded by
a filter.
Definition 3.22 Let ∆ be a filter and I be a term interpretation for L. Then I↑∆ is the term interpretation for L defined as:
  I↑∆ := ∪_{A ∈ I} A↑∆ .
For any logic program P , we denote by C(P ) its least term model.
Theorem 3.23 Let P be a binary program and ∆ be a DN filter for P . Then C(P )↑∆ = C(P ).
Proof. The inclusion C(P ) ⊆ C(P )↑∆ is straightforward, so let us concentrate on the other one, i.e. C(P )↑∆ ⊆ C(P ). Let A′ ∈ C(P )↑∆ . Then there exists A ∈ C(P ) such that A′ ∈ A↑∆ . A well-known result states:
  C(P ) = {B ∈ TB_L | there exists a successful derivation of P ∪ {B}}    (1)
Consequently, there exists a successful derivation ξ of P ∪ {A}. Therefore, by successively applying Definition 3.18 to each step of ξ, one constructs a successful derivation of A′ . So by (1), A′ ∈ C(P ).
3.4 Some Particular DN Filters
In this section, we provide two sufficient syntactic conditions for identifying DN filters.
3.4.1 DN Sets of Positions
The first instance we consider corresponds to filters, the associated term-conditions of which are
all equal to ftrue (see Example 3.9). Within such a context, as the term-conditions are fixed, each
filter ∆ is uniquely determined by the domains of the partial functions ∆(p) for p ∈ Π. Hence the
following definition.
Definition 3.24 (Set of Positions) A set of positions, denoted by τ, is a function from Π to 2^ℕ such that: for each p ∈ Π, τ(p) is a subset of [1, arity(p)].
Example 3.25 Let append and append3 be two relation symbols. Assume that arity(append ) = 3
and arity(append3 ) = 4. Then τ := h append 7→ {2}, append3 7→ {2, 3, 4} i is a set of positions.
Not surprisingly, the filter that is generated by a set of positions is defined as follows.
Definition 3.26 (Associated Filter) Let τ be a set of positions and ftrue be the term-condition
defined in Example 3.9. The filter ∆[τ ] defined as:
for each p ∈ Π, ∆[τ ](p) is the function from τ (p) to {ftrue }
is called the filter associated to τ .
Example 3.27 (Example 3.25 continued) The filter associated to τ is
∆[τ ] :=
happend 7→ h2 7→ ftrue i, append3 7→ h2 7→ ftrue , 3 7→ ftrue , 4 7→ ftrue ii.
Now we define a particular kind of sets of positions. They are called DN because, as stated by Theorem 3.30 below, they generate DN filters.
Definition 3.28 (DN Set of Positions) Let τ be a set of positions. We say that τ is DN for a binary clause p(s1, . . . , sn) ← q(t1, . . . , tm) if, for all i ∈ τ(p):
• si is a variable,
• si occurs only once in p(s1, . . . , sn),
• ∀j ∈ [1, m], si ∈ Var(tj) ⇒ j ∈ τ(q).
A set of positions is DN for a binary program P if it is DN for each clause of P.
The intuition of Definition 3.28 is the following. If for instance we have a clause c := p(X, Y, f(Z)) ← p(g(Y, Z), X, Z) then in the first two positions of p we can put any terms and get a derivation step w.r.t. c, because the first two arguments of the head of c are variables that appear exactly once in the head. Moreover, X and Y of the head reappear in the body but again only in the first two positions of p. So, if we have a derivation step p(s1, s2, s3) =⇒_c p(t1, t2, t3), we can replace s1 and s2 by any terms s′1 and s′2 and get another derivation step p(s′1, s′2, s3) =⇒_c p(t′1, t′2, t′3) where t′3 is the same as t3 up to variable names.
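Since the conditions of Definition 3.28 are purely syntactic, they are easy to check mechanically. The following Prolog sketch (ours; the encoding of a set of positions as a list of Name/Arity-Positions pairs is an assumption) tests them for a non-unit binary clause. It relies on forall/2, between/3 and memberchk/2 as provided, e.g., by SWI-Prolog.

% dn_positions_for_clause(+Tau, +Clause): Tau is DN for Clause = (H :- B)
% in the sense of Definition 3.28.
dn_positions_for_clause(Tau, (H :- B)) :-
    functor(H, P, N), functor(B, Q, M),
    positions(Tau, P/N, PosP), positions(Tau, Q/M, PosQ),
    forall(member(I, PosP),
           ( arg(I, H, S),
             var(S),                                    % s_i is a variable ...
             occurrences(S, H, 1),                      % ... occurring only once in the head ...
             forall(( between(1, M, J), arg(J, B, T), contains_var(S, T) ),
                    memberchk(J, PosQ)) )).             % ... reappearing only at positions of Tau(q)

positions(Tau, Key, Pos) :- ( memberchk(Key-Pos, Tau) -> true ; Pos = [] ).

contains_var(V, T) :- term_variables(T, Vs), identical_member(V, Vs).
identical_member(V, [X|_]) :- V == X, !.
identical_member(V, [_|Xs]) :- identical_member(V, Xs).

% Number of occurrences of the variable V in the term T.
occurrences(V, T, N) :-
    (   var(T) -> ( T == V -> N = 1 ; N = 0 )
    ;   T =.. [_|Args], occurrences_list(V, Args, 0, N)
    ).
occurrences_list(_, [], N, N).
occurrences_list(V, [A|As], N0, N) :-
    occurrences(V, A, K), N1 is N0 + K, occurrences_list(V, As, N1, N).

% Example 3.29:
% ?- dn_positions_for_clause([append/3-[2], append3/4-[2,3,4]],
%                            (append([X|Xs],Ys,[X|Zs]) :- append(Xs,Ys,Zs))).    succeeds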
Example 3.29 (Example 3.25 continued) τ is DN for the program:
append([X|Xs],Ys,[X|Zs]) :- append(Xs,Ys,Zs).
append3(Xs,Ys,Zs,Ts) :- append(Xs,Ys,Us).
which is a subset of the binary unfoldings of the program APPEND3:
append([],Ys,Ys).
append([X|Xs],Ys,[X|Zs]) :- append(Xs,Ys,Zs).
append3(Xs,Ys,Zs,Ts) :- append(Xs,Ys,Us), append(Us,Zs,Ts).
DN sets of positions generate DN filters.
Theorem 3.30 Let τ be a DN set of positions for a binary program P . Then ∆[τ ] is DN for P .
Proof. As we will see in Section 3.4.2, this theorem is a particular case of Theorem 3.39.
Notice that the set of DN sets of positions of any binary program P is not empty because, by
Definition 3.28, τ0 := hp 7→ ∅ | p ∈ Πi is DN for P . Moreover, an atom A is ∆[τ0 ]-more general
than an atom B iff A is more general than B.
3.4.2 DN Sets of Positions with Associated Terms
Now we consider another instance of Definition 3.18. As we will see, it is more general than the
previous one. It corresponds to filters whose associated term-conditions have all the form “is an
instance of t” where t is a term that uniquely determines the term-condition. Notice that such
term-conditions are variant independent, so it makes sense to consider such filters. Hence the
following definition.
Definition 3.31 (Sets of Positions with Associated Terms) A set of positions with associated
terms, denoted by τ + , is a function from Π such that: for each p ∈ Π, τ + (p) is a partial function
from [1, arity(p)] to TU_L.
Example 3.32 Let p and q be two relation symbols whose arity is 2. Then
τ + := h p 7→ h2 7→ Xi, q 7→ h2 7→ g(X)i i
is a set of positions with associated terms.
The filter that is generated by a set of positions with associated terms is defined as follows.
Definition 3.33 (Associated Filter) Let τ+ be a set of positions with associated terms. The filter associated to τ+, denoted by ∆[τ+], is defined as follows: for each p ∈ Π, ∆[τ+](p) is the function from Dom(τ+(p)) to the set of term-conditions that maps each i to
  TU_L → {true, false},   t 7→ true iff t is an instance of τ+(p)(i) .

Example 3.34 (Example 3.32 continued) The filter associated to τ+ is
  ∆[τ+] := h p 7→ h2 7→ f1 i, q 7→ h2 7→ f2 i i
where
  f1 : TU_L → {true, false},   t 7→ true iff t is an instance of X
  f2 : TU_L → {true, false},   t 7→ true iff t is an instance of g(X) .
As for sets of positions, we define a special kind of sets of positions with associated terms.
Definition 3.35 (DN Sets of Positions with Associated Terms) Let τ + be a set of positions with
associated terms. We say that τ + is DN for a binary clause p(s1 , . . . , sn ) ← q(t1 , . . . , tm ) if these
conditions hold:
• (DN1) ∀i ∈ Dom(τ + (p)), ∀j ∈ [1, n] \ {i}: V ar(si ) ∩ V ar(sj ) = ∅,
• (DN2) ∀hi 7→ ui i ∈ τ + (p): si is more general than ui ,
• (DN3) ∀hj 7→ uj i ∈ τ + (q): tj is an instance of uj ,
• (DN4) ∀i ∈ Dom(τ + (p)), ∀j 6∈ Dom(τ + (q)): V ar(si ) ∩ V ar(tj ) = ∅.
A set of positions with associated terms is DN for a binary program P if it is DN for each clause
of P .
This definition says that any si where i is in the domain of τ + (p) (i.e. position i is distinguished
by τ + ): (DN1) does not share its variables with the other arguments of the head, (DN2) is more
general than the term ui that i is mapped to by τ + (p), (DN4) distributes its variables to some tj
such that j is in the domain of τ + (q) (i.e. position j is distinguished by τ + ). Moreover, (DN3)
says that any tj , where j is distinguished by τ + , is such that tj is an instance of the term uj that
j is mapped to by τ + (q).
Example 3.36 (Example 3.32 continued) τ + is DN for the following program:
p(f(X),Y) :- q(X,g(X)).
q(a,g(X)) :- q(a,g(b)).
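Like the conditions of Definition 3.28, (DN1)–(DN4) can be checked mechanically. The sketch below is again our own illustration; a set of positions with associated terms is encoded, by assumption, as a list of Name/Arity keys mapped to Position-Term pairs, and subsumes_term/2 implements the “more general than” and “instance of” requirements.

% dn_terms_for_clause(+TauPlus, +Clause): TauPlus is DN for Clause = (H :- B)
% in the sense of Definition 3.35.
dn_terms_for_clause(TauPlus, (H :- B)) :-
    functor(H, P, N), functor(B, Q, M),
    pos_terms(TauPlus, P/N, TP), pos_terms(TauPlus, Q/M, TQ),
    forall(member(I-Ui, TP),
           ( arg(I, H, S),
             forall(( between(1, N, J), J =\= I, arg(J, H, Sj) ),
                    vars_disjoint(S, Sj)),                        % (DN1)
             subsumes_term(S, Ui),                                % (DN2) s_i is more general than u_i
             forall(( between(1, M, J), \+ memberchk(J-_, TQ), arg(J, B, Tj) ),
                    vars_disjoint(S, Tj)) )),                     % (DN4)
    forall(member(J-Uj, TQ),
           ( arg(J, B, Tj), subsumes_term(Uj, Tj) )).             % (DN3) t_j is an instance of u_j

pos_terms(TauPlus, Key, PT) :- ( memberchk(Key-PT, TauPlus) -> true ; PT = [] ).

vars_disjoint(T1, T2) :-
    term_variables(T1, V1), term_variables(T2, V2),
    \+ ( member(X, V1), member(Y, V2), X == Y ).

% Example 3.36, first clause:
% ?- dn_terms_for_clause([p/2-[2-X], q/2-[2-g(_)]], (p(f(A),Y) :- q(A,g(A)))).    succeeds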
The preceding notion is closed under renaming:
Proposition 3.37 Let c be a binary clause and τ + be a set of positions with associated terms that
is DN for c. Then τ + is DN for every variant of c.
Notice that a set of positions is a particular set of positions with associated terms in the
following sense.
Proposition 3.38 Let τ be a set of positions and X be a variable. Let τ+ be the set of positions with associated terms defined as: for each p ∈ Π, τ+(p) is the function from τ(p) to {X}. Then, the following holds.
1. An atom A is ∆[τ ]-more general than an atom B iff A is ∆[τ + ]-more general than B.
2. For any binary clause c, τ is DN for c iff τ + is DN for c.
Proof. A proof follows from these remarks.
• Item 1 is a direct consequence of the definition of “∆-more general” (see Definition 3.14)
and the definition of the filter associated to a set of positions (see Definition 3.26) and to a
set of positions with associated terms (see Definition 3.33).
• Item 2 is a direct consequence of the definition of DN sets of positions (see Definition 3.28)
and DN sets of positions with associated terms (see Definition 3.35).
The sets of positions with associated terms of Definition 3.35 are called DN because of the following result.
Theorem 3.39 Let P be a binary program and τ + be a set of positions with associated terms that
is DN for P . Then ∆[τ + ] is DN for P .
As in the case of sets of positions, the set of DN sets of positions with associated terms of any
binary program P is not empty because, by Definition 3.35, τ0+ := hp 7→ hi | p ∈ Πi is DN for
P . Moreover, an atom A is ∆[τ0+ ]-more general than an atom B iff A is more general than B.
Finally, in Appendix A, we give an incremental algorithm (see Section 4.2) that computes a DN
set of positions with associated terms. Its correctness proof is also presented.
3.5 Examples
This section presents some examples where we use filters obtained from DN sets of positions and
DN sets of positions with associated terms to infer looping queries. As the filters we use in each
case are not “empty” (i.e. are not obtained from τ0 or τ0+ ), we are able to compute more looping
queries than using the classical subsumption test.
Example 3.40 Consider the program APPEND that we introduced in Example 3.6. Every infinite
derivation w.r.t. APPEND starting from an atomic query only uses the non-unit clause C2. Therefore, as we aim at inferring looping atomic queries w.r.t. APPEND, we only focus on C2 in the
sequel of this example.
As in C2 the body, which is append (Xs, Ys, Zs), is more general than the head, which is
append ([X |Xs], Ys, [X |Zs]), by Corollary 3.2 we have that the query append ([X |Xs], Ys, [X |Zs])
loops w.r.t. {C2}. Consequently, by the One Step Lifting Lemma 3.1, each query that is more
general than append ([X |Xs], Ys, [X |Zs]) also loops w.r.t. {C2}.
But we can be more precise than that. According to Definition 3.28, τ := h append 7→ {2} i
is a DN set of positions for {C2}. The filter associated to τ (see Definition 3.26) is ∆[τ ] :=
h append 7→ h2 7→ ftrue i i. By Theorem 3.30, ∆[τ ] is a DN filter for {C2}. Consequently, by
Definition 3.18, each query that is ∆[τ ]-more general than append ([X |Xs], Ys, [X |Zs]) loops w.r.t.
{C2}. This means that
  { append(t1, t2, t3) ∈ TB_L | t2 is any term and (t1, t3) is more general than ([X|Xs], [X|Zs]) }
is a set of atomic queries that loop w.r.t. {C2}, hence w.r.t. APPEND. This set includes the ’well-typed’ query append(As, [ ], Bs).
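For illustration only, the loop can be observed directly by executing the recursive clause C2 on its own:

% C2 alone, as a Prolog program:
append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

% ?- append(As, [], Bs).
% Each resolution step rewrites the goal into a larger instance of itself, so the query
% runs forever without producing any answer: this is the infinite derivation predicted
% above.  Under the full APPEND program (C1 and C2) the same query still admits this
% infinite derivation, i.e. it left loops.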
Example 3.41 Consider the program REVERSE that was introduced in Example 3.6. As in the
example above, in order to infer looping atomic queries w.r.t. REVERSE, we only focus on the
non-unit clauses C1 and C3 in the sequel of this example. More precisely, we process the relation
symbols of the program in a bottom-up way, so we start the study with clause C3 and end it with
clause C1.
According to Definition 3.28, τ := h rev 7→ {2, 3} i is a DN set of positions for {C3}. The
filter associated to τ (see Definition 3.26) is ∆[τ ] := h rev 7→ h2 7→ ftrue , 3 7→ ftrue i i. By
Theorem 3.30, ∆[τ ] is DN for {C3}. As rev (Xs, [X |R0 ], R) (the body of C3) is ∆[τ ]-more general
than rev ([X |Xs], R0 , R) (the head of C3), by Proposition 3.20 we get that rev ([X |Xs], R0 , R) loops
w.r.t. {C3}. Notice that, unlike the example above, here we do not get this result from Corollary 3.2
as rev (Xs, [X |R0 ], R) is not more general than rev ([X |Xs], R0 , R). Finally, as ∆[τ ] is DN for
{C3}, by Definition 3.18 we get that each query that is ∆[τ ]-more general than rev ([X |Xs], R0 , R)
loops w.r.t. {C3}, hence w.r.t. REVERSE. This means that
  Q := { rev(t1, t2, t3) ∈ TB_L | t2 and t3 are any terms and t1 is more general than [X|Xs] }
is a set of atomic queries that loop w.r.t. REVERSE. This set includes the ’well-typed’ query rev(As, [ ], [ ]).
Now, consider clause C1. As rev (L, [], R) (its body) is an element of Q, then rev (L, [], R) loops
w.r.t. {C3}, hence w.r.t. {C1, C3}. Consequently, by Corollary 3.3, reverse(L, R) (the head of C1)
loops w.r.t. {C1, C3}. Let τ ′ := h rev 7→ {2, 3}, reverse 7→ {2} i. By Definition 3.28, τ ′ is DN for
{C1, C3}, so ∆[τ ′ ] is DN for {C1, C3}. Consequently, each query that is ∆[τ ′ ]-more general than
reverse(L, R) also loops w.r.t. {C1, C3}, hence w.r.t. REVERSE. This means that
  { reverse(X, t) ∈ TB_L | X is a variable and t is any term }
is a set of atomic queries that loop w.r.t. REVERSE. This set includes the ’well-typed’ query reverse(As, [ ]).
Example 3.42 Consider the two recursive clauses of the program MERGE where we have removed
the inequalities:
merge([X|Xs],[Y|Ys],[X|Zs]) :- merge(Xs,[Y|Ys],Zs). % C3
merge([X|Xs],[Y|Ys],[Y|Zs]) :- merge([X|Xs],Ys,Zs). % C4
Every set of positions τ that is DN for {C3} is such that τ (merge) = ∅ because each argument
of the head of C3 is not a variable (see Definition 3.28). Hence, using Proposition 3.20 with
a filter obtained from a DN set of positions leads to the same results as using Corollary 3.2:
as merge(Xs, [Y |Ys], Zs) is more general than merge([X |Xs], [Y |Ys], [X |Zs]), by Corollary 3.2
merge([X |Xs], [Y |Ys], [X |Zs]) loops w.r.t. {C3}. So, by the One Step Lifting Lemma 3.1, each
query that is more general than merge([X |Xs], [Y |Ys], [X |Zs]) also loops w.r.t. {C3}, hence w.r.t.
MERGE.
But we can be more precise than that. According to Definition 3.35, τ + := hmerge 7→ h2 7→
[Y |Ys]i i is a set of positions with associated terms that is DN for {C3}. Hence, by Theorem 3.39,
the associated filter ∆[τ + ] (see Definition 3.33) is DN for {C3}. So, by Definition 3.18, each query
that is ∆[τ+]-more general than merge([X|Xs], [Y|Ys], [X|Zs]) loops w.r.t. {C3}. This means that
  { merge(t1, t2, t3) ∈ TB_L | t2 is any instance of [Y|Ys] and (t1, t3) is more general than ([X|Xs], [X|Zs]) }
is a set of atomic queries that loop w.r.t. MERGE. Notice that this set includes the ’well-typed’ query
merge(As, [0], Bs). Finally, let us turn to clause C4. Reasoning exactly as above with the set of
positions with associated terms hmerge 7→ h1 7→ [X |Xs]i i which is DN for {C4}, we conclude that:
  { merge(t1, t2, t3) ∈ TB_L | t1 is any instance of [X|Xs] and (t2, t3) is more general than ([Y|Ys], [Y|Zs]) }
is a set of atomic queries that loop w.r.t. MERGE. Notice that this set includes the ’well-typed’ query merge([0], As, Bs).
4 Algorithms
We have designed a set of correct algorithms for full automation of non-termination analysis of
logic programs. These algorithms are given in Appendix A with their correctness proofs. In this
section, we present the intuitions and conceptual definitions underlying our approach.
4.1 Loop Dictionaries
Our technique is based on a data structure called dictionary which is a set of pairs (BinSeq, τ + )
where BinSeq is a finite ordered sequence of binary clauses and τ + is a set of positions with
associated terms. In the sequel, we use the list notation of Prolog and a special kind of dictionaries
that we define as follows.
Definition 4.1 (Looping Pair, Loop Dictionary) A pair (BinSeq, τ + ), where the list BinSeq is a
finite ordered sequence of binary clauses and τ + is a set of positions with associated terms, is a
looping pair if τ + is DN for BinSeq and:
• either BinSeq = [H ← B] and B is ∆[τ + ]-more general than H,
• or BinSeq = [H ← B, H1 ← B1 | BinSeq 1 ] and there exists a set of positions with associated
terms τ1+ such that ([H1 ← B1 | BinSeq 1 ], τ1+ ) is a looping pair and B is ∆[τ1+ ]-more general
than H1 .
A loop dictionary is a finite set of looping pairs.
Example 4.2 The pair (BinSeq := [H1 ← B1 , H2 ← B2 , H3 ← B3 ], τ1+ ) where
H1 ← B1 := r(X) ← q(X, f (f (X)))
H2 ← B2 := q(X, f (Y )) ← p(f (X), a)
H3 ← B3 := p(f (g(X)), a) ← p(X, a)
and τ1+ := hp 7→ h2 7→ ai, q 7→ h2 7→ f (X)ii is a looping pair:
• Let τ3+ := hp 7→ h2 7→ aii. Then τ3+ is a DN set of positions with associated terms for
[H3 ← B3 ]. Moreover, B3 is ∆(τ3+ )-more general than H3 . Consequently, ([H3 ← B3 ], τ3+ )
is a looping pair.
• Notice that B2 is ∆(τ3+ )-more general than H3 . Now, let τ2+ := τ1+ . Then τ2+ is DN for
[H2 ← B2 , H3 ← B3 ]. So, ([H2 ← B2 , H3 ← B3 ], τ2+ ) is a looping pair.
• Finally, notice that B1 is ∆(τ2+ )-more general than H2 . As τ1+ is DN for BinSeq, we conclude
that (BinSeq, τ1+ ) is a looping pair.
A looping pair immediately provides an atomic looping query. It suffices to take the head of
the first clause of the binary program of the pair:
Proposition 4.3 Let ([H ← B|BinSeq], τ + ) be a looping pair. Then H loops with respect to
[H ← B|BinSeq].
Proof. By induction on the length of BinSeq using Proposition 3.20, Corollary 3.3 and Theorem 3.39.
So, a looping pair denotes a proof outline for establishing that H left loops. Moreover, looping pairs can be built incrementally in a simple way, as described below.
4.2 Computing a Loop Dictionary
Given a logic program P and a positive integer max , the function infer loop dict from Appendix A first computes TPβ ↑ max (the first max iterations of the operator TPβ ), which is a
finite subset of bin unf (P ). Then, using the clauses of TPβ ↑ max , it incrementally builds a loop
dictionary Dict as follows.
At start, Dict is set to ∅. Then, for each clause H ← B in TPβ ↑ max , the following actions
are performed.
• infer loop dict tries to extract from H ← B the simplest form of a looping pair: it computes a set of positions with associated terms τ+ that is DN for H ← B, then it tests whether B is ∆[τ+]-more general than H. If so, the looping pair ([H ← B], τ+) is added to Dict.
• infer loop dict tries to combine H ← B with some looping pairs that have already been added to Dict in order to build other looping pairs. For each ([H1 ← B1 |BinSeq1], τ1+) in Dict, if B is ∆[τ1+]-more general than H1, then a set of positions with associated terms τ+ that is DN for [H ← B, H1 ← B1 |BinSeq1] is computed and the looping pair ([H ← B, H1 ← B1 |BinSeq1], τ+) is added to Dict.
Notice that in the second step above, we compute τ+ that is DN for [H ← B, H1 ← B1 |BinSeq1]. As we already hold τ1+ that is DN for [H1 ← B1 |BinSeq1], it is more interesting, for efficiency reasons, to compute τ+ from τ1+ instead of starting from scratch. Indeed, starting from τ1+, one uses the information stored in τ1+ about the program [H1 ← B1 |BinSeq1], which may speed up the computation substantially. This is why we have designed a function dna that takes two arguments as input, a binary program BinProg and a set of positions with associated terms τ+. It computes a set of positions with associated terms that is DN for BinProg and that refines τ+. On the other hand, the function unit loop calls dna with τ+max, which is the initial set of positions with associated terms defined as follows: Dom(τ+max(p)) = [1, arity(p)] for each p ∈ Π and τ+max(p)(i) is a variable for each i ∈ [1, arity(p)].
Example 4.4 Consider the program APPEND3
append3(Xs,Ys,Zs,Us) :- append(Xs,Ys,Vs), append(Vs,Zs,Us).
augmented with the APPEND program. The set T^β_APPEND3 ↑ 2 includes:
append([A|B],C,[A|D]) :- append(B,C,D).     % BC1
append3(A,B,C,D) :- append(A,B,E).          % BC2
append3([],A,B,C) :- append(A,B,C).         % BC3
From clause BC1 we get the looping pair (BinSeq 1 , τ1+ ) where
BinSeq 1 = append ([X1 |X2 ], X3 , [X1 |X4 ]) ← append (X2 , X3 , X4 )
and τ1+ (append ) = h2 7→ X3 i. From this pair and the clause BC2, we get the looping pair
(BinSeq 2 , τ2+ ) where:
BinSeq 2 =
append3 (X1 , X2 , X3 , X4 ) ← append (X1 , X2 , X5 ),
append ([X1 |X2 ], X3 , [X1 |X4 ]) ← append (X2 , X3 , X4 )
and τ2+ (append ) = h2 7→ X3 i and τ2+ (append3 ) = h2 7→ X2 , 3 7→ X3 , 4 7→ X4 i.
Finally, from (BinSeq 1 , τ1+ ) and BC3, we get the looping pair (BinSeq 3 , τ3+ ) where:
BinSeq 3 =
append3 ([], X1 , X2 , X3 ) ← append (X1 , X2 , X3 ),
append ([X1 |X2 ], X3 , [X1 |X4 ]) ← append (X2 , X3 , X4 )
and τ3+ (append ) = h2 7→ X3 i and τ3+ (append3 ) = h3 7→ X2 i.
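For illustration, loading the two clauses of BinSeq2 as an ordinary Prolog program makes the loop promised by Proposition 4.3 directly observable (this is our own demonstration; unused variables are written with a leading underscore to avoid singleton warnings):

% The clauses of BinSeq2, written as Prolog:
append3(Xs, Ys, _Zs, _Ts) :- append(Xs, Ys, _Us).
append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

% ?- append3(A, B, C, D).
% The goal is rewritten into append(A, B, _), which then descends forever through the
% recursive append clause: exactly the infinite derivation of the head of BinSeq2
% guaranteed by Proposition 4.3.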
Example 4.5 Consider the program PERMUTE:
delete(X,[X|Xs],Xs).
delete(Y,[X|Xs],[X|Ys]) :- delete(Y,Xs,Ys).
permute([],[]).
permute([X|Xs],[Y|Ys]) :- delete(Y,[X|Xs],Zs), permute(Zs,Ys).
The set T^β_PERMUTE ↑ 1 includes:
delete(B,[C|D],[C|E]) :- delete(B,D,E).      % BC1
permute([B|C],[D|E]) :- delete(D,[B|C],F).   % BC2
From clause BC1 we get the looping pair (BinSeq 1 , τ1+ ) where
BinSeq 1 = delete(X1 , [X2 |X3 ], [X2 |X4 ]) ← delete(X1 , X3 , X4 )
and τ1+ (delete) = h1 7→ X1 i. From this pair and BC2, we get the looping pair (BinSeq 2 , τ2+ ) where:
BinSeq 2 =
permute([X1 |X2 ], [X3 |X4 ]) ← delete(X3 , [X1 |X2 ], X5 ),
delete(X1 , [X2 |X3 ], [X2 |X4 ]) ← delete(X1 , X3 , X4 )
and τ2+ (delete) = h1 7→ X1 i and τ2+ (permute) = h2 7→ [X3 |X4 ]i.
4.3 Looping Conditions
One of the main purposes of this article is the inference of classes of atomic queries that left loop
w.r.t. a given logic program. Classes of atomic queries we consider are defined by pairs (A, τ + )
where A is an atom and τ + is a set of positions with associated terms. Such a pair denotes the
set of queries A↑τ + , the definition of which is similar to that of the expansion of an atom, see
Definition 3.21.
Definition 4.6 Let A be an atom and τ+ be a set of positions with associated terms. Then A↑τ+ denotes the class of atomic queries defined as:
  A↑τ+ := {A} ∪ {B ∈ TB_L | B is ∆[τ+]-more general than A} .
Once each element of A↑τ + left loops w.r.t. a logic program, we get what we call a looping
condition for that program:
Definition 4.7 (Looping Condition) Let P be a logic program. A looping condition for P is a
pair (A, τ + ) such that each element of A↑τ + left loops w.r.t. P .
The function infer loop cond takes as arguments a logic program P and a non-negative
integer max . Calling infer loop dict(P, max ), it first computes a loop dictionary Dict . Then,
it computes from Dict looping conditions for P as follows. The function extracts the pair (H, τ + )
from each element ([H ← B|BinSeq], τ + ) of Dict . By Proposition 4.3, H loops w.r.t. [H ←
B|BinSeq]. As τ + , hence ∆[τ + ], is DN for [H ← B|BinSeq], by Definition 3.18 each element of
H↑τ + loops w.r.t. [H ← B|BinSeq]. Finally, as [H ← B|BinSeq] ⊆ TPβ ↑ max ⊆ bin unf (P ), by
Theorem 2.1, each element of H↑τ + left loops w.r.t. P .
Example 4.8 (Example 4.4 continued) From each looping pair we have inferred, we get the following information.
• (append ([X1 |X2 ], X3 , [X1 |X4 ]), τ1+ ) is a looping condition. So, each query append (t1 , t2 , t3 ),
where [X1 |X2 ] = t1 η and [X1 |X4 ] = t3 η for a substitution η and t2 is an instance of X3
(because τ1+ (append )(2) = X3 ), left loops w.r.t. APPEND3. In other words, each query
append (t1 , t2 , t3 ), where [X1 |X2 ] = t1 η and [X1 |X4 ] = t3 η for a substitution η and t2 is
any term, left loops w.r.t. APPEND3.
• (append3(X1, X2, X3, X4), τ2+) is a looping condition. As we have τ2+(append3)(2) = X2, τ2+(append3)(3) = X3 and τ2+(append3)(4) = X4, this means that each query of the form append3(x1, t2, t3, t4), where x1 is a variable and t2, t3 and t4 are any terms, left loops w.r.t. APPEND3.
• (append3 ([], X1 , X2 , X3 ), τ3+ ) is a looping condition. So, as τ3+ (append3 )(3) = X2 , this
means that each query of form append3 ([], X1 , t, X3 ), where t is any term, left loops w.r.t.
APPEND3.
Example 4.9 (Example 4.5 continued) From each looping pair we have inferred, we get the following information.
• (delete(X1 , [X2 |X3 ], [X2 |X4 ]), τ1+ ) is a looping condition. As τ1+ (delete)(1) = X1 , this means
that each query of form delete(t1 , t2 , t3 ), where t1 is any term and [X2 |X3 ] = t2 η and
[X2 |X4 ] = t3 η for a substitution η, left loops w.r.t. PERMUTE.
• (permute([X1 |X2 ], [X3 |X4 ]), τ2+ ) is a looping condition. As τ2+ (permute)(2) = [X3 |X4 ], this
means that each query of form permute(t1 , t2 ), where t1 is more general than [X1 |X2 ] and
t2 is any instance of [X3 |X4 ], left loops w.r.t. PERMUTE.
5 An Application: Proving Optimality of Termination Conditions
[26] presents a tool for inferring termination conditions that are expressed as multi-modes, i.e.
as disjunctions of conjunctions of propositions of form “the i-th argument is ground”. In this
section, we describe an algorithm that attempts proofs of optimality of such conditions using the
algorithms for non-termination inference of the previous section.
5.1 Optimal Terminating Multi-modes
Let P be a logic program and p ∈ ΠP be a relation symbol, with arity(p) = n. First, we describe
the language we use for abstracting sets of atomic queries:
Definition 5.1 (Mode) A mode mp for p is a subset of [1, n], and denotes the following set of atomic goals: [mp] = {p(t1, . . . , tn) ∈ TB_L | ∀i ∈ mp, Var(ti) = ∅}. The set of all modes for p, i.e. 2^[1,n], is denoted modes(p).
Note that if mp = ∅ then [mp] = {p(t1, . . . , tn) ∈ TB_L}. Since a logic procedure may have
multiple uses, we generalize:
Definition 5.2 (Multi-mode) A multi-mode Mp for p is a finite set of modes for p and denotes
the following set of atomic queries: [Mp ] = ∪m∈Mp [m].
Note that if Mp = ∅, then [Mp ] = ∅. Now we can define what we mean by terminating and
looping multi-modes:
Definition 5.3 (Terminating mode, terminating multi-mode) A terminating mode mp for p is a
mode for p such that any query in [mp ] left terminates w.r.t. P . A terminating multi-mode TM p
for p is a finite set of terminating modes for p.
Definition 5.4 (Looping mode, looping multi-mode) A looping mode mp for p is a mode for p
such that there exists a query in [mp ] which left loops w.r.t. P . A looping multi-mode LM p for p
is a finite set of looping modes for p.
As left termination is instantiation-closed, any mode that is “below” (less general than) a
terminating mode is also a terminating mode. Similarly, as left looping is generalization-closed,
any mode that is “above” (more general than) a looping mode is also a looping mode. Let us be
more precise:
Definition 5.5 (Less general, more general) Let Mp be a multi-mode for the relation symbol p. We set:
  less general(Mp) = {m ∈ modes(p) | ∃m′ ∈ Mp, [m] ⊆ [m′]}
  more general(Mp) = {m ∈ modes(p) | ∃m′ ∈ Mp, [m′] ⊆ [m]}
looping modes(L, p):
in:  L: a finite set of looping conditions
     p: a predicate symbol
out: a looping multi-mode for p
1: LMp := ∅
2: for each (p(t1, . . . , tn), τ+) ∈ L do
3:    mp := Dom(τ+(p)) ∪ {i ∈ [1, n] | Var(ti) = ∅}
4:    LMp := LMp ∪ {mp}
5: return LMp

Figure 1: The function looping modes.
We are now equipped to present a definition of optimality for terminating multi-modes:
Definition 5.6 (Optimal terminating multi-mode) A terminating multi-mode TM p for p is optimal if there exists a looping multi-mode LM p verifying:
modes(p) = less general (TM p ) ∪ more general (LM p )
In other words, given a terminating multi-mode TMp, if each mode which is not less general than a mode of TMp is a looping mode, then TMp characterizes the operational behavior of p w.r.t. left termination and our language for defining sets of queries.
Example 5.7 Consider the program APPEND. A well-known terminating multi-mode is the set
TM append = {{1}, {3}}. Indeed, any query of the form append(t,Ys,Zs) or append(Xs,Ys,t),
where t is a ground term ( i.e. such that Var(t) = ∅), left terminates. We have:
less general (TM append ) = {{1}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}}
On the other hand, append(Xs,[],Zs) left loops. Hence LM append = {{2}} is a looping multi-mode and more general(LM append) = {∅, {2}}.
Since modes(append ) = less general (TM append ) ∪ more general (LM append ), we conclude that
the terminating multi-mode TM append is optimal.
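The mode manipulations of Definitions 5.5 and 5.6 are easy to prototype. The following sketch (ours, not the tool's code) encodes a mode as an ordered list of argument positions and exploits the fact that [m] ⊆ [m′] holds exactly when m′ ⊆ m, since requiring more arguments to be ground shrinks the denoted set of queries. It assumes SWI-Prolog's numlist/3 and library(ordsets).

:- use_module(library(ordsets)).

% All modes of an n-ary predicate, i.e. all subsets of [1,n], as ordered sets.
all_modes(N, Modes) :-
    numlist(1, N, Positions),
    findall(M, subseq(Positions, M), Modes).
subseq([], []).
subseq([X|Xs], S) :- subseq(Xs, S0), ( S = S0 ; S = [X|S0] ).

less_general(Mp, N, Ms) :-        % modes m with [m] included in some [m'], m' in Mp
    all_modes(N, All),
    findall(M, ( member(M, All), member(M1, Mp), ord_subset(M1, M) ), Ms0),
    sort(Ms0, Ms).
more_general(Mp, N, Ms) :-        % modes m with some [m'], m' in Mp, included in [m]
    all_modes(N, All),
    findall(M, ( member(M, All), member(M1, Mp), ord_subset(M, M1) ), Ms0),
    sort(Ms0, Ms).

% TM is an optimal terminating multi-mode w.r.t. LM if the two families cover every mode.
optimal(TM, LM, N) :-
    all_modes(N, All), sort(All, AllSorted),
    less_general(TM, N, Less), more_general(LM, N, More),
    ord_union(Less, More, Covered),
    ord_subset(AllSorted, Covered).

% Example 5.7:  ?- optimal([[1],[3]], [[2]], 3).     succeeds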
5.2 Algorithms
Suppose we hold a finite set L of looping conditions for P . Then, each element (p(t1 , . . . , tn ), τ + )
of L provides an obvious looping mode for p: it suffices to take {i ∈ [1, n] | Var(ti ) = ∅}. But
actually, we can extract more information from L. Let p(t′1 , . . . , t′n ) be an atom such that:
• for each hi 7→ ui i ∈ τ + (p), t′i is a ground instance of ui ,
• for each i in [1, n] \ Dom(τ + (p)), t′i = ti .
Then, p(t′1 , . . . , t′n ) belongs to p(t1 , . . . , tn )↑τ + , hence it left loops w.r.t. P . Consequently, we
have that Dom(τ + (p)) ∪ {i ∈ [1, n] | Var (ti ) = ∅} is a looping mode for p. The function
looping modes of Fig. 1 is an application of these remarks.
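As a concrete rendering of this remark (ours; a looping condition is encoded here as a term cond(Atom, DistinguishedPositions) instead of the pair used in the text), extracting one looping mode amounts to:

% One looping mode from one looping condition.
looping_mode(cond(Atom, DistinguishedPositions), Mode) :-
    Atom =.. [_|Args],
    ground_positions(Args, 1, GroundPositions),
    append(DistinguishedPositions, GroundPositions, Mode0),
    sort(Mode0, Mode).

ground_positions([], _, []).
ground_positions([T|Ts], I, Pos) :-
    I1 is I + 1,
    ground_positions(Ts, I1, Pos0),
    ( ground(T) -> Pos = [I|Pos0] ; Pos = Pos0 ).

% Example 4.8:  ?- looping_mode(cond(append3([],X1,X2,X3), [3]), M).     M = [1,3]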
Now we have the essential material for the design of a tool that attempts proofs of optimality of left terminating multi-modes computed by a termination inference tool such as cTI [26] or TerminWeb [17]. For each pair (p, ∅) in the set returned by the function optimal tc of Fig. 2, we can conclude that the corresponding TMp is the optimal terminating multi-mode which characterizes the operational behavior of p with respect to Lterm.
optimal tc(P, max, {TMp}p∈ΠP):
in:  P: a logic program
     max: a non-negative integer
     {TMp}p∈ΠP: a finite set of terminating multi-modes
out: a finite set of pairs (p, Mp) such that p ∈ ΠP and Mp is a multi-mode for p with no information w.r.t. its left behaviour
note: if for each p ∈ ΠP, Mp = ∅, then {TMp}p∈ΠP is optimal
1: Res := ∅
2: L := infer loop cond(P, max)
3: for each p ∈ ΠP do
4:    LMp := looping modes(L, p)
5:    Mp := modes(p) \ (less general(TMp) ∪ more general(LMp))
6:    Res := Res ∪ {(p, Mp)}
7: return Res

Figure 2: The function optimal tc.
Example 5.8 (Example 4.8 continued) We apply our algorithm to the program APPEND3 of Example 4.4. We get that
L := { (append([X1|X2], X3, [X1|X4]), τ1+),
       (append3(X1, X2, X3, X4), τ2+),
       (append3([], X1, X2, X3), τ3+) }
is a finite set of looping conditions for APPEND3 (see Example 4.8) with
  Dom(τ1+(append)) = {2}
  Dom(τ2+(append3)) = {2, 3, 4}
  Dom(τ3+(append3)) = {3}
So, for append we have:
  LM append := looping modes(L, append) = {{2}}
  more general(LM append) = {∅, {2}}
  TM append = {{1}, {3}}
  less general(TM append) = {{1}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}}
  M append = {}
For append3 , we get:
• the looping mode {2, 3, 4} from (append3 (X1 , X2 , X3 , X4 ), τ2+ ) and
• the looping mode mp := {1, 3} from (append3 ([], X1 , X2 , X3 ), τ3+ ) (notice that 3 ∈ mp because
Dom(τ3+ (append3 )) = {3} and 1 ∈ mp because of constant [] which is the first argument of
append3 ([], X1 , X2 , X3 ).)
So, we have:
  LM append3 := looping modes(L, append3) = {{2, 3, 4}, {1, 3}}
  more general(LM append3) = {∅, {1}, {2}, {3}, {4}, {1, 3}, {2, 3}, {2, 4}, {3, 4}, {2, 3, 4}}
  TM append3 = {{1, 2}, {1, 4}}
  less general(TM append3) = {{1, 2}, {1, 4}, {1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {1, 2, 3, 4}}
  M append3 = {}
Hence in both cases, we have characterized the left behaviour of the predicates by using two complementary tools.
5.3 An Experimental Evaluation
We have implemented^1 the algorithms presented in Sections 4 and 5.2. The binary unfoldings algorithm is derived from the one described in [7], where we added time stamps to precisely control what is computed at each iteration. Looping modes are computed starting from the leaves of the call graph, then moving up to its roots. The cTI termination inference tool^2 is detailed in [26, 24]. Here is the configuration we used for this experiment: Intel 686, 2.4 GHz, 512 Mb, Linux 2.4, SICStus Prolog 3.10.1, 24.8 MLips. Timings in seconds are averaged over 10 runs.
First we have applied them to some small programs from standard benchmarks of the termination analysis literature [30, 2, 9] (predefined predicates were erased). The column opt? of Table 1 indicates whether the result of cTI (see [26]) is proved optimal (X) or not (?). The column max gives the least non-negative integer implying optimality, or the least non-negative integer n where it seems we get the most precise information from non-termination inference (i.e. for n and n + 1, the analyser delivers the same results). Then timings in seconds (t[s]) appear, followed by a pointer to the notes below.
Notes:
1. The predicate fold/3 is defined by:
fold(X,[],X).
fold(X,[Y|Ys],Z) :- op2(X,Y,V), fold(V,Ys,Z).
When the predicate op2/3 is defined by the fact op2(A,B,C), the result of cTI is optimal.
When the predicate op2/3 is defined by the fact op2(a,b,c), no looping mode is found and
the result of cTI is indeed sub-optimal as the query fold(X,Y,Z) terminates.
2. Termination proofs for mergesort require the list-size norm, while cTI applies the term-size
norm.
3. The result of cTI is not optimal. The analyzed program:
p(A,B) :- q(A,C),p(C,B).
p(A,A).
q(a,b).
has finite binary unfoldings because there is no function symbol. Hence its termination is decidable (see [7]). This could easily be detected at analysis time. We notice that no looping mode is found. But as any constant is mapped to 0 by the term-size norm, the modes in modes(p) remain undecided for cTI although they all terminate.
^1 Available from http://www.univ-reunion.fr/~gcc
^2 Available from http://www.cs.unipr.it/cTI
Table 1: Some De Schreye's, Apt's, and Plümer's programs.

program        top-level predicate     cTI term-cond   cTI t[s]   opt?   Optimal max   Optimal t[s]   cf.
permute        permute(X,Y)            X               0.01       X      1             0.01
duplicate      duplicate(X,Y)          X ∨ Y           0.01       X      1             0.01
sum            sum(X,Y,Z)              X ∨ Y ∨ Z       0.01       X      1             0.01
merge          merge(X,Y,Z)            (X ∧ Y) ∨ Z     0.02       X      1             0.01
dis-con        dis(X)                  X               0.02       X      2             0.01
reverse        reverse(X,Y,Z)          X               0.02       X      1             0.01
append         append(X,Y,Z)           X ∨ Z           0.01       X      1             0.01
list           list(X)                 X               0.01       X      1             0.01
fold           fold(X,Y,Z)             Y               0.01       ?      2             0.01           note 1
lte            goal                    1               0.01       X      1             0.01
map            map(X,Y)                X ∨ Y           0.01       X      2             0.01
member         member(X,Y)             Y               0.01       X      1             0.01
mergesort      mergesort(X,Y)          0               0.04       ?      2             0.01           note 2
mergesort ap   mergesort ap(X,Y,Z)     Z               0.08       ?      2             0.02
naive rev      naive rev(X,Y)          X               0.02       X      1             0.01
ordered        ordered(X)              X               0.01       X      1             0.01
overlap        overlap(X,Y)            X ∧ Y           0.01       X      2             0.01
permutation    permutation(X,Y)        X               0.03       X      1             0.01
quicksort      quicksort(X,Y)          X               0.05       X      1             0.01
select         select(X,Y,Z)           Y ∨ Z           0.01       X      1             0.01
subset         subset(X,Y)             X ∧ Y           0.01       X      2             0.01
sum peano      sum(X,Y,Z)              Y ∨ Z           0.01       X      1             0.01
pl2.3.1        p(X,Y)                  0               0.01       ?      1             0.01           note 3
pl3.5.6        p(X)                    X               0.01       X      2             0.01
pl4.4.6a       perm(X,Y)               X               0.02       X      1             0.01
pl4.5.2        s(X,Y)                  0               0.03       X      1             0.01
pl4.5.3a       p(X)                    0               0.01       X      1             0.01
pl5.2.2        turing(X,Y,Z,T)         0               0.08       ?      2             0.03           note 4
pl7.2.9        mult(X,Y,Z)             X ∧ Y           0.02       X      4             0.03           note 5
pl7.6.2a       reach(X,Y,Z)            0               0.02       ?      1             0.01           note 6
pl7.6.2b       reach(X,Y,Z,T)          0               0.02       ?      1             0.01
pl7.6.2c       reach(X,Y,Z,T)          Z ∧ T           0.02       ?      2             0.02
pl8.3.1a       minsort(X,Y)            X               0.03       X      2             0.02
pl8.4.1        even(X)                 X               0.02       X      2             0.01
pl8.4.2        e(X,Y)                  X               0.05       X      3             0.04
Table 2: Some middle-sized programs.

                               cTI                 Optimal, max=1     Optimal, max=2     Optimal, max=3
program     clauses       Q%       t[s]         Opt%     t[s]       Opt%     t[s]       Opt%     t[s]
ann         177           48       1.00         46       0.14       68       1.34       74       32.4
bid         50            100      0.14         55       0.02       90       0.08       95       0.50
boyer       136           84       0.30         80       0.03       96       0.22       100      3.66
browse      30            53       0.26         46       0.03       80       0.18       100      6.05
credit      57            100      0.11         91       0.02       95       0.11       100      4.46
peephole    134           88       1.08         23       0.06       70       3.62       70       406
plan        29            100      0.11         68       0.02       81       0.09       81       0.37
qplan       148           61       1.13         50       0.11       79       1.60       81       1911
rdtok       55            44       0.65         44       0.11       88       40.2       ?        > 3600
read        88            52       1.72         39       0.04       47       0.80       47       10.9
warplan     101           32       0.49         37       0.07       83       0.99       91       21.5
4. The analyzed program (from [30], p. 64) simulates a Turing machine. The result of cTI is
optimal.
5. With respect to the program:
mult(0,A,0).
mult(s(A),B,C) :- mult(A,B,D),add(D,B,C).
add(0,A,A).
add(s(A),B,s(C)) :- add(A,B,C).
the query mult(s(s(0)),A,B) is automatically detected as looping, although mult(0,A,B)
and mult(s(0),A,B) do terminate.
6. These three programs propose various definitions of the reachability relation between two
nodes in a list of edges. For the first and the third definition, cTI is indeed optimal. For the
second one, cTI is not optimal.
Next, we have applied the two analyzers to some middle-sized Prolog programs, see Table 2. Again, predefined predicates were all erased, although they are usually taken into account by cTI, which of course improves the analysis. In other words, we only consider the logic programming
skeleton of each program. The first two columns give the name of the file and its size (number of
clauses). The fourth column indicates the running time (in seconds) of the termination analysis,
while the third column is the ratio of predicates for which a non-false termination condition is
computed over the total number of predicates defined in the program. For instance, cTI is able
to show that there is at least one terminating mode for 48% of the predicates defined in the
program ann. We ran the non-termination analyzer with 1 ≤ max ≤ 3 iterations. For each value
of max, we give the running time (in seconds) and the ratio of predicates for which looping modes
complement terminating modes. For example, with respect to the program ann, for max = 2 we obtain the complete mode-termination behaviour of 68% of all the defined predicates.
We note that when we increase max, we obtain better results but the running times also
increase, which is fairly obvious. For max = 3, we get good to optimal results but the binary
unfoldings approach reveals its potentially explosive nature: we aborted the analysis of rdtok
after one hour of computation.
In conclusion, from such a naive implementation, we were rather surprised by the quality of the
combined analysis. Adopting some more clever implementation schemes, for instance computing
the binary unfoldings in a demand driven fashion, could be investigated to improve the running
times.
6 Related Work
Some extensions of the Lifting Theorem with respect to infinite derivations are presented in [18],
where the authors study numerous properties of finite failure. The non-ground finite failure set
of a logic program is defined as the set of possibly non-ground atoms which admit a fair finitely
failed SLD-tree w.r.t. the program. This denotation is shown correct in the following sense. If two
programs have the same non-ground finite failure set, then any ground or non-ground goal which
finitely fails w.r.t. one program also finitely fails w.r.t. the other. Such a property is false when we
consider the standard ground finite failure set. The proof of correctness of the non-ground finite
failure semantics relies on the following result. First, a derivation is called non-perpetual if it is
a fair infinite derivation and there exists a finite depth from which unfolding does not instantiate
the original goal any more. Then the authors define the definite answer goal of a non-perpetual
derivation as the maximal instantiation of the original goal. A crucial lemma states that any
instance of the definite answer goal admits a similar non-perpetual derivation. Compared to our
work, note that we do not need fairness as a hypothesis for our results. On the other hand,
investigating the relationships between non-ground arguments of the definite answer and neutral
arguments is an interesting problem.
In [35], the authors present a dynamic approach to characterize (in the form of a necessary
and sufficient condition) termination of general logic programs. Their technique employs some
key dynamic features of an infinite generalized SLDNF-derivation, such as repetition of selected
subgoals and recursive increase in term size.
Loop checking in logic programming is also a subject related to our work. In this area, [5] sets
up some solid foundations. A loop check is a device to prune derivations when it seems appropriate.
A loop checker is defined as sound if no solution is lost. It is complete if all infinite derivations
are pruned. A complete loop check may also prune finite derivations. The authors show that even
for function-free programs (also known as Datalog programs), sound and complete loop checks are
out of reach. Completeness is shown only for some restricted classes of function-free programs.
We now review loop checking in more detail. To the best of our knowledge, among all existing loop checking mechanisms, only OS-check [32], EVA-check [34] and VAF-check [36] are suitable for logic programs with function symbols. They rely on a structural characteristic of infinite SLD-derivations, namely, the growth of the size of some generated subgoals. This is what the following
theorem states.
Theorem 6.1 Consider an infinite SLD-derivation ξ where the leftmost selection rule is used.
Then there are infinitely many queries Qi1 , Qi2 , . . . (with i1 < i2 < . . . ) in ξ such that for
any j ≥ 1, the selected atom Aij of Qij is an ancestor of the selected atom Aij+1 of Qij+1 and
size(Aij+1 ) ≥ size(Aij ).
Here, size is a given function that maps an atom to its size which is defined in terms of the number
of symbols appearing in the atom. As this theorem does not provide any sufficient condition to
detect infinite SLD-derivations, the three loop checking mechanisms mentioned above may detect
finite derivations as infinite. However, these mechanisms are complete w.r.t. the leftmost selection
rule i.e. they detect all infinite loops when the leftmost selection rule is used.
OS-check (for OverSize loop check) was first introduced by Sahlin [31, 32] and was then formalized by Bol [4]. It is based on a function size that can have one of the three following definitions: for any atoms A and B, either size(A) = size(B), or size(A) (resp. size(B)) is the count of symbols appearing in A (resp. B), or size(A) ≤ size(B) if for each i, the count of symbols of the i-th argument of A is smaller than or equal to that of the i-th argument of B. OS-check says that an SLD-derivation may be infinite if it generates an atomic subgoal A that is oversized, i.e. that has ancestor subgoals which have the same predicate symbol as A and whose size is smaller than or equal to that of A.
EVA-check (for Extented Variant Atoms loop check) was introduced by Shen [34]. It is based
on the notion of generalized variants (if Gi and Gj (i < j) are two goals in an SLD-derivation, an
atom A in Gj is a generalized variant of an atom A′ in Gi if A is a variant of A′ except for some
arguments whose size increases from A′ to A via a set of recursive clauses.) EVA-check says that
an SLD-derivation may be infinite if it generates an atomic subgoal A that is a generalized variant
of some of its ancestors A′. Here the size function that is used applies to predicate arguments, i.e. to terms, and it is fixed: it is defined as the count of symbols that appear in the terms.
EVA-check is more reliable than OS-check because it is less likely to mis-identify infinite loops
[34]. This is mainly due to the fact that, unlike OS-check, EVA-check refers to the informative
internal structure of subgoals.
VAF-check (for Variant Atoms loop check for logic programs with Functions) was proposed by
Shen et al. [36]. It is based on the notion of expanded variants. An atom A is an expanded variant
of an atom A′ if, after variable renaming, A becomes B that is the same as A′ except that there
may be some terms at certain positions in A′ , each A′ [i] . . . [k] of which grows in B into a function
B[i] . . . [k] = f (. . . , A′ [i] . . . [k], . . . ) (here, we use A′ [i] . . . [k] (resp. B[i] . . . [k]) to refer to the k-th
argument of . . . of the i-th argument of A′ (resp. B)). VAF-check says that an SLD-derivation
may be infinite if it generates an atomic subgoal A that is an expanded variant of some of its
ancestor A′ . VAF-check is as reliable as and more efficient than EVA-check [36].
The main difference with our work is that we want to infer atomic queries which are guaranteed
to be left looping. Hence, we consider sufficient conditions for looping, in contrast to the above
mentioned methods which consider necessary conditions. Our technique returns a set of queries
for which it has pinpointed one infinite derivation. Hence, we are not interested in soundness, as we do not care about finite derivations, nor in completeness, as the existence of just one infinite derivation suffices. Of course, using the ∆-subsumption test as a loop checker leads to a device that
is neither sound (since ∆-subsumption is a particular case of subsumption) nor complete (since
the ∆-subsumption test provides a sufficient but not necessary condition). This is illustrated by
the following example.
Example 6.2 Let c := p(X, X) ← p(f (X), f (X)). As the arguments of the head of c have one
common variable X, every set of positions with associated terms τ + that is DN for {c} is such
that the domain of τ + (p) is empty (see (DN1) in Definition 3.35). Notice that from the head
p(X, X) of c we get
  p(X, X) =⇒_c p(f(X), f(X)) =⇒_c · · · =⇒_c p(f^n(X), f^n(X)) =⇒_c · · ·
As the arguments of p grow from step to step, there cannot be any query in the derivation that is ∆[τ+]-more general than one of its ancestors. Consequently, we cannot conclude that p(X, X) left loops w.r.t. {c}.
On the other hand, using loop checking approaches to infer classes of atomic left looping queries
is not satisfactory because, as we said above, non-looping queries may be mis-identified as looping.
Example 6.3 We cannot replace, in Corollary 3.2, the subsumption test by the expanded variant
test used in VAF-check because, for instance, in the clause c := p(a) ← p(f (a)), we have: p(f (a))
is an expanded variant of p(a), but p(a) does not loop w.r.t. c.
Finally, [10] is also related to our study. In this paper, the authors describe an algorithm for
detecting non-terminating queries to clauses of the type p(· · · ) ← p(· · · ). The algorithm is able to
check if such a given clause has no non-terminating queries or has a query which either loops or fails
due to occur check. Moreover, given a linear atomic goal (i.e. a goal where all variable occurs at
most once), the algorithm is able to check if the goal loops or not w.r.t. the clause. The technique
of the algorithm is based on directed weighted graphs [14] and on a necessary and sufficient
condition for the existence of non-terminating queries to clauses of the type p(· · · ) ← p(· · · ). This
condition is proved in [8] and is expressed in terms of rational trees.
7 Conclusion
We have presented an extension of the subsumption test which allows one to disregard some arguments, termed neutral arguments, while checking for subsumption. We have proposed two syntactic
criteria for statically identifying neutral arguments. From these results, in the second part of this
report we have described algorithms for automating non-termination analysis of logic programs,
together with correctness proofs. Finally, we have applied these techniques to check the optimality
of termination conditions for logic programs.
This paper leaves numerous questions open. For instance, it might be interesting to try to
generalize this approach to constraint logic programming [19]. Can we obtain higher level proofs
compared to those we give? Can we propose more abstract criteria for identifying neutral arguments? A first step in this direction is presented in [29]. Also, our work aims at inferring classes
of atomic left looping queries, using a bottom-up point of view. Experimental data show that it
may sometimes lead to prohibitive time/space costs. How can we generate only the useful binary
clauses without fully computing the iterations of this TP -like operator? Or can we adapt our
algorithms towards a more efficient correct top-down approach for checking non-termination?
Acknowledgments
We thank Ulrich Neumerkel for numerous discussions on this topic, Roberto Bagnara and anonymous referees for interesting suggestions.
References
[1] K. R. Apt. From Logic Programming to Prolog. Prentice Hall, 1997.
[2] K. R. Apt and D. Pedreschi. Modular termination proofs for logic and pure Prolog programs.
In G. Levi, editor, Advances in Logic Programming Theory, pages 183–229. Oxford University
Press, 1994.
[3] T. Arts and H. Zantema. Termination of logic programs using semantic unification.
In Logic Program Synthesis and Transformation, volume 1048 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, 1996. TALP can be used online at the address:
http://bibiserv.techfak.uni.biekefeld.de/talp/.
[4] R. N. Bol. Loop checking in partial deduction. Journal of Logic Programming, 16:25–46,
1993.
[5] R. N. Bol, K. R. Apt, and J. W. Klop. An analysis of loop checking mechanisms for logic
programs. Theoretical Computer Science, 86:35–79, 1991.
[6] K. L. Clark. Predicate logic as a computational formalism. Technical Report Doc 79/59,
Logic Programming Group, Imperial College, London, 1979.
[7] M. Codish and C. Taboch. A semantic basis for the termination analysis of logic programs.
Journal of Logic Programming, 41(1):103–123, 1999.
[8] D. De Schreye, M. Bruynooghe, and K. Verschaetse. On the existence of nonterminating
queries for a restricted class of Prolog-clauses. Artificial Intelligence, 41:237–248, 1989.
[9] D. De Schreye and S. Decorte. Termination of logic programs : the never-ending story. Journal
of Logic Programming, 19-20:199–260, 1994.
[10] D. De Schreye, K. Verschaetse, and M. Bruynooghe. A practical technique for detecting nonterminating queries for a restricted class of Horn clauses, using directed, weighted graphs. In
Proc. of ICLP’90, pages 649–663. The MIT Press, 1990.
[11] P. Deransart and G. Ferrand. Programmation en logique avec négation: présentation
formelle. Technical Report 87/3, Laboratoire d’Informatique, Département de Mathématiques
et d’Informatique, Université d’Orleans, 1987.
[12] N. Dershowitz, N. Lindenstrauss, Y. Sagiv, and A. Serebrenik. A general framework
for automatic termination analysis of logic programs. Applicable Algebra in Engineering,Communication and Computing, 12(1/2):117–156, 2001.
[13] P. Devienne. Weighted graphs, a tool for expressing the behaviour of recursive rules in logic programming. In Proc. of the Int. Conf. on Fifth Generation Computer Systems 88, Tokyo, Japan, pages 397–404. OHMSHA Ltd. Tokyo and Springer-Verlag, 1988.
[14] P. Devienne. Weighted graphs: A tool for studying the halting problem and time complexity
in term rewriting systems and logic programming. Theoretical Computer Science, 75(2):157–
215, 1990.
[15] P. Devienne, P. Lebègue, and J-C. Routier. Halting problem of one binary Horn clause is
undecidable. In LNCS, volume 665, pages 48–57. Springer-Verlag, 1993. Proc. of STACS’93.
[16] M. Gabbrielli and R. Giacobazzi. Goal independency and call patterns in the analysis of
logic programs. In Proceedings of the ACM Symposium on applied computing, pages 394–399.
ACM Press, 1994.
[17] S. Genaim and M. Codish. Inferring termination condition for logic programs using backwards
analysis. In Proceedings of Logic for Programming, Artificial intelligence and Reasoning,
Lecture Notes in Computer Science. Springer-Verlag, Berlin, 2001. TerminWeb can be used
online from http://www.cs.bgu.ac.il/~codish.
[18] R. Gori and G. Levi. Finite failure is and-compositional. Journal of Logic and Computation,
7(6):753–776, 1997.
[19] J. Jaffar and J. L. Lassez. Constraint logic programming. In Proc. of the ACM Symposium
on Principles of Programming Languages, pages 111–119. ACM Press, 1987.
[20] N. Lindenstrauss. TermiLog: a system for checking termination of queries to logic programs,
1997. http://www.cs.huji.ac.il/~naomil.
[21] J. W. Lloyd. Foundations of Logic Programming. Springer-Verlag, 1987.
[22] M. Falaschi, G. Levi, M. Martelli, and C. Palamidessi. A model-theoretic reconstruction of the operational semantics of logic programs. Information and Computation, 102(1):86–113, 1993.
[23] F. Mesnard. Inferring left-terminating classes of queries for constraint logic programs by
means of approximations. In M. J. Maher, editor, Proc. of the 1996 Joint Intl. Conf. and
Symp. on Logic Programming, pages 7–21. MIT Press, 1996.
[24] F. Mesnard and R. Bagnara. cTI: a constraint-based termination inference tool for ISO-Prolog. Theory and Practice of Logic Programming, 2004. To appear.
[25] F. Mesnard and U. Neumerkel. cTI: a tool for inferring termination conditions of ISO-Prolog,
april 2000. http://www.complang.tuwien.ac.at/cti.
[26] F. Mesnard and U. Neumerkel. Applying static analysis techniques for inferring termination
conditions of logic programs. In P. Cousot, editor, Static Analysis Symposium, volume 2126
of Lecture Notes in Computer Science, pages 93–110. Springer-Verlag, Berlin, 2001.
[27] F. Mesnard, E. Payet, and U. Neumerkel. Detecting optimal termination conditions of logic
programs. In M. Hermenegildo and G. Puebla, editors, Proc. of the 9th International Symposium on Static Analysis, volume 2477 of Lecture Notes in Computer Science, pages 509–525.
Springer-Verlag, Berlin, 2002.
[28] R. O’Keefe. The Craft of Prolog. MIT Press, 1990.
[29] E. Payet and F. Mesnard. Non-termination inference of logic programs. In R. Giacobazzi,
editor, Proc. of the 11th International Symposium on Static Analysis, volume 3148 of Lecture
Notes in Computer Science. Springer-Verlag, Berlin, 2004.
[30] L. Plümer. Termination proofs for logic programs. Number 446 in LNAI. Springer-Verlag,
Berlin, 1990.
[31] D. Sahlin. The mixtus approach to automatic partial evaluation of full Prolog. In S. Debray and M. Hermenegildo, editors, Proc. of the 1990 North American Conference on Logic
Programming, pages 377–398. MIT Press, Cambridge, MA, 1990.
[32] D. Sahlin. Mixtus: an automatic partial evaluator for full Prolog. New Generation Computing,
12(1):7–51, 1993.
[33] M. Schmidt-Schauss. Implication of clauses is undecidable. Theoretical Computer Science,
59:287–296, 1988.
[34] Y-D. Shen. An extended variant of atoms loop check for positive logic programs. New
Generation Computing, 15(2):187–204, 1997.
[35] Y-D. Shen, J-H. You, L-Y. Yuan, S. Shen, and Q. Yang. A dynamic approach to characterizing
termination of general logic programs. ACM Transactions on Computational Logic, 4(4):417–
434, 2003.
[36] Y-D. Shen, L-Y. Yuan, and J-H. You. Loop checks for logic programs with functions. Theoretical Computer Science, 266(1-2):441–461, 2001.
A
Algorithms
First, we define a pre-order relation over sets of positions with associated terms. Such a relation
is useful for the design of the algorithms that we present in the sequel of this section.
Definition A.1 (4 and τ+max)
• τ1+ 4 τ2+ if for each p ∈ Π, Dom(τ1+ (p)) ⊆ Dom(τ2+ (p)) and for each i ∈ Dom(τ1+ (p)), τ2+ (p)(i) is more general than τ1+ (p)(i).
• τ+max denotes a set of positions with associated terms s.t. Dom(τ+max (p)) = [1, arity(p)] for each p ∈ Π and τ+max (p)(i) is a variable for each i ∈ [1, arity(p)].
A.1
DN Sets of Positions with Associated Terms for Binary Programs
We present below an algorithm for computing DN sets of positions with associated terms.
dna(BinProg, τ1+ ):
in: BinProg : a finite set of binary clauses
    τ1+ : a set of positions with associated terms
out: τ2+ s.t. τ2+ 4 τ1+ and τ2+ is DN for BinProg
1: τ2+ := τ1+
2: τ2+ := satisfy DN1(BinProg, τ2+ )
3: τ2+ := satisfy DN2(BinProg, τ2+ )
4: τ2+ := satisfy DN3(BinProg, τ2+ )
5: while satisfy DN4(BinProg, τ2+ ) ≠ τ2+ do
6:    τ2+ := satisfy DN4(BinProg, τ2+ )
7: return τ2+
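To make the control flow concrete, here is a minimal executable sketch of dna (our own Python rendering, not the authors' implementation). The four passes are supplied as functions mapping (bin_prog, tau) to a new tau, so the snippet is self-contained; it mirrors lines 1–7 above: apply DN1–DN3 once, then iterate DN4 to a fixpoint.

def dna(bin_prog, tau1, dn1, dn2, dn3, dn4):
    tau = dn1(bin_prog, tau1)
    tau = dn2(bin_prog, tau)
    tau = dn3(bin_prog, tau)
    while True:
        new_tau = dn4(bin_prog, tau)
        if new_tau == tau:        # fixpoint: (DN4) holds, tau is DN for bin_prog
            return tau
        tau = new_tau

# Trivial usage with identity passes, only to show the calling convention:
identity = lambda prog, tau: tau
print(dna([], {"p": {1: "X"}}, identity, identity, identity, identity))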
The algorithm dna calls four auxiliary functions that correspond to conditions (DN1), (DN2)
(DN3) and (DN4) in the definition of a DN set of positions with associated terms (see Definition 3.35). These functions are detailed below.
After τ2+ := satisfy DN1(BinProg , τ2+ ) at line 2 of dna, τ2+ satisfies item (DN1) of Definition 3.35.
satisfy DN1(BinProg, τ1+ ):
1: τ2+ := τ1+
2: for each p(s1 , . . . , sn ) ← B ∈ BinProg do
3:    E := {i ∈ [1, n] | Var (si ) ∩ Var ({sj | j ≠ i}) = ∅}
4:    τ2+ (p) := τ2+ (p)|(Dom(τ2+ (p)) ∩ E)
5: return τ2+
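The following is a small runnable sketch of satisfy DN1 under a term encoding that is ours, not the paper's: variables are uppercase-initial strings, compound terms are tuples (functor, arg1, ...), an atom is such a tuple, a binary clause is a pair (head, body), and tau maps each predicate to {position (1-based): associated term}.

def term_vars(t):
    """Set of variables occurring in a term."""
    if isinstance(t, str):
        return {t} if t[:1].isupper() else set()
    vs = set()
    for arg in t[1:]:
        vs |= term_vars(arg)
    return vs

def satisfy_dn1(bin_prog, tau):
    """Keep, for each head p(s1,...,sn), only the positions i whose argument si
    shares no variable with the other head arguments (condition (DN1))."""
    tau = {p: dict(m) for p, m in tau.items()}      # work on a copy
    for head, _body in bin_prog:
        p, args = head[0], list(head[1:])
        keep = set()
        for i, s_i in enumerate(args, start=1):
            others = set()
            for j, s_j in enumerate(args, start=1):
                if j != i:
                    others |= term_vars(s_j)
            if not (term_vars(s_i) & others):
                keep.add(i)
        if p in tau:
            tau[p] = {i: t for i, t in tau[p].items() if i in keep}
    return tau

# Example: in p(X, f(X), Y) only position 3 is kept.
example = [(("p", "X", ("f", "X"), "Y"), ("q", "Y"))]
print(satisfy_dn1(example, {"p": {1: "V1", 2: "V2", 3: "V3"}}))   # {'p': {3: 'V3'}}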
After τ2+ := satisfy DN2(BinProg , τ2+ ) at line 3 of dna, τ2+ satisfies item (DN2) of Definition 3.35. Notice that less general at line 5 of satisfy DN2 is a function that returns the less
general term of two given terms; if none of the given terms is less general than the other, then
this function returns undefined.
satisfy DN2(BinProg, τ1+ ):
1: τ2+ := τ1+
2: for each p(s1 , . . . , sn ) ← B ∈ BinProg do
3:    F := ∅
4:    for each i ∈ Dom(τ2+ (p)) do
5:       ui := less general(si , τ2+ (p)(i))
6:       if ui = undefined then F := F ∪ {i}
7:       else τ2+ (p)(i) := ui
8:    τ2+ (p) := τ2+ (p)|(Dom(τ2+ (p)) \ F )
9: return τ2+
After τ2+ := satisfy DN3(BinProg , τ2+ ) at line 4 of dna, τ2+ satisfies item (DN3) of Definition 3.35. The function satisfy DN3 is detailed below.
satisfy DN3(BinProg, τ1+ ):
1: τ2+ := τ1+
2: for each H ← q(t1 , . . . , tm ) ∈ BinProg do
3:    F := ∅
4:    for each i ∈ Dom(τ2+ (q)) do
5:       if τ2+ (q)(i) is not more general than ti then F := F ∪ {i}
6:    τ2+ (q) := τ2+ (q)|(Dom(τ2+ (q)) \ F )
7: return τ2+
Finally, the function satisfy DN4 is defined as follows. After line 6 of dna, the set of positions
with associated terms τ2+ satisfies item (DN4) of Definition 3.35.
satisfy DN4(BinProg, τ1+ ):
1: τ2+ := τ1+
2: for each p(s1 , . . . , sn ) ← q(t1 , . . . , tm ) ∈ BinProg do
3:    F := ∅
4:    for each i ∈ Dom(τ2+ (p)) do
5:       for each j ∈ [1, m] \ Dom(τ2+ (q)) do
6:          if Var (si ) ∩ Var (tj ) ≠ ∅ then F := F ∪ {i}
7:    τ2+ (p) := τ2+ (p)|(Dom(τ2+ (p)) \ F )
8: return τ2+
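Analogously, a sketch of satisfy DN4 in the same assumed encoding as the DN1 sketch above (term_vars is repeated so the snippet runs on its own); it drops a head position i of p whenever Var(si) meets Var(tj) for a body position j of q outside Dom(τ+(q)).

def term_vars(t):
    if isinstance(t, str):
        return {t} if t[:1].isupper() else set()
    return set().union(*[term_vars(a) for a in t[1:]]) if len(t) > 1 else set()

def satisfy_dn4(bin_prog, tau):
    tau = {p: dict(m) for p, m in tau.items()}
    for head, body in bin_prog:
        p, s_args = head[0], list(head[1:])
        q, t_args = body[0], list(body[1:])
        dom_q = set(tau.get(q, {}))
        bad = set()
        for i, s_i in enumerate(s_args, start=1):
            for j, t_j in enumerate(t_args, start=1):
                if j not in dom_q and term_vars(s_i) & term_vars(t_j):
                    bad.add(i)                      # si shares a variable with an "untracked" tj
        if p in tau:
            tau[p] = {i: t for i, t in tau[p].items() if i not in bad}
    return tau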
Proposition A.2 (Correctness of dna) Let BinProg be a binary program and τ1+ be a set of
positions with associated terms.
1. satisfy DN1(BinProg, τ1+ ), . . . , satisfy DN4(BinProg, τ1+ ) terminate;
2. satisfy DN1(BinProg, τ1+ ) 4 τ1+ , . . . , satisfy DN4(BinProg , τ1+ ) 4 τ1+ ;
3. dna(BinProg, τ1+ ) terminates;
4. dna(BinProg , τ1+ ) 4 τ1+ and dna(BinProg, τ1+ ) is a set of positions with associated terms that
is DN for BinProg .
Proof. We have:
1. satisfy DN1(BinProg, τ1+ ) terminates because, as BinProg is a finite set of binary clauses,
the loop at lines 2–4 terminates. Concerning satisfy DN2–4, the inner loops terminate since
for each p ∈ Π, Dom(τ2+ (p)) ⊆ [1, arity(p)] and the outer loop terminates as BinProg is a
finite set of binary clauses.
2. • satisfy DN1(BinProg, τ1+ ) 4 τ1+ :
   Line 1, we start from τ1+ . Line 4, we have for each relation symbol p from the heads of the clauses of BinProg :
   Dom(τ2+ (p)) ⊆ Dom(τ1+ (p)) and ∀i ∈ Dom(τ2+ (p)), τ2+ (p)(i) = τ1+ (p)(i) .
   Hence, when we reach line 5, we have: satisfy DN1(BinProg, τ1+ ) 4 τ1+ .
• satisfy DN2(BinProg , τ1+ ) 4 τ1+ :
Line 1, we start from τ1+ . Then, for each relation symbol p from the heads of the
clauses of BinProg and for each i ∈ Dom(τ2+ (p)), either τ2+ (p)(i) is set to a less general
term than τ2+ (p)(i) (line 7) or i is removed from the domain of τ2+ (p) (lines 6 and 8).
Therefore, when we reach line 9, we have: satisfy DN2(BinProg , τ1+ ) 4 τ1+ .
• satisfy DN3(BinProg , τ1+ ) 4 τ1+ :
Line 1, we start from τ1+ . Line 6, we have for each relation symbol q from the bodies
of the clauses of BinProg:
Dom(τ2+ (q)) ⊆ Dom(τ1+ (q)) and ∀i ∈ Dom(τ2+ (q)), τ2+ (q)(i) = τ1+ (q)(i) .
Hence, when we reach line 7, we have: satisfy DN3(BinProg, τ1+ ) 4 τ1+ .
• satisfy DN4(BinProg , τ1+ ) 4 τ1+ :
Line 1, we start from τ1+ . Line 7, we have for each relation symbol p from the heads of
the clauses of BinProg :
Dom(τ2+ (p)) ⊆ Dom(τ1+ (p)) and ∀i ∈ Dom(τ2+ (p)), τ2+ (p)(i) = τ1+ (p)(i) .
(2)
Hence, when we reach line 8, we have: satisfy DN4(BinProg, τ1+ ) 4 τ1+ .
3. Each call to satisfy DN1, . . . , satisfy DN4 terminates. Moreover, concerning function
satisfy DN4, we mentioned above that (2) holds. As ⊂ is a well-founded relation over
the set of sets, the loop at lines 5–6 terminates.
4. Line 1, we start from τ1+ . Then satisfy DN1, . . . , satisfy DN4 weaken τ1+ with respect to
Definition 3.35. When we reach the fixpoint, the property holds.
A.2
Loop Dictionaries
A.2.1
Proof of Proposition 4.3, page 14
We proceed by induction on the length n of BinSeq.
• Basis. If n = 0, then, as ([H ← B], τ + ) is a looping pair, B is ∆[τ + ]-more general than H
and τ + is DN for H ← B, i.e. ∆[τ + ] is DN for H ← B by Theorem 3.39. Consequently, by
Proposition 3.20, H loops w.r.t. [H ← B].
• Induction. Suppose that for an n ≥ 0, each looping pair ([H ← B|BinSeq], τ + ) with BinSeq
of length n is such that H loops w.r.t. [H ← B|BinSeq].
If BinSeq is of length n + 1, it has form [H1 ← B1 |BinSeq 1 ] with BinSeq 1 of length n.
Moreover, as ([H ← B|BinSeq], τ + ) is a looping pair, there exists a set of positions with
associated terms τ1+ such that ([H1 ← B1 |BinSeq 1 ], τ1+ ) is a looping pair and B is ∆[τ1+ ]more general than H1 . So, by the induction hypothesis, H1 loops w.r.t. [H1 ← B1 |BinSeq 1 ]
i.e. H1 loops w.r.t. BinSeq. As B is ∆[τ1+ ]-more general than H1 and ∆[τ1+ ] is DN for
BinSeq, by Definition 3.18 B loops w.r.t. BinSeq. Therefore, by Corollary 3.3, H loops
w.r.t. [H ← B|BinSeq].
A.2.2
Computing a Loop Dictionary
The top-level function for inferring loop dictionaries from a logic program is the following. It uses
the auxiliary functions unit loop and loops from dict described below.
infer loop dict(P , max ):
in: P : a logic program
max : a non-negative integer
out: a loop dictionary, each element (BinSeq, τ + ) of which
is such that BinSeq ⊆ TPβ ↑ max
1: Dict := ∅
2: for each H ← B ∈ TPβ ↑ max do
3:    Dict := unit loop(H ← B, Dict )
4:    Dict := loops from dict(H ← B, Dict )
5: return Dict
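As an illustration, here is a Python sketch of infer loop dict (our own rendering). The function standing in for the clauses of TPβ ↑ max and the two helpers are passed in as assumed parameters; none of these names come from the paper's implementation.

def infer_loop_dict(program, max_iter, binary_unfoldings_up_to, unit_loop, loops_from_dict):
    dictionary = []                                   # the loop dictionary being built
    for clause in binary_unfoldings_up_to(program, max_iter):
        dictionary = unit_loop(clause, dictionary)
        dictionary = loops_from_dict(clause, dictionary)
    return dictionary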
The function unit loop is used to extract from a binary clause the most simple form of a
looping pair:
unit loop(H ← B, Dict):
in: H ← B: a binary clause
Dict: a loop dictionary
out: Dict′ : a loop dictionary, every element (BinSeq, τ + ) of which is
such that either (BinSeq, τ + ) ∈ Dict or BinSeq = [H ← B]
1: Dict ′ := Dict
2: τ + := dna([H ← B], τ +max )
3: if B is ∆[τ + ]-more general than H then
4:    Dict ′ := Dict ′ ∪ {([H ← B], τ + )}
5: return Dict ′
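A corresponding sketch of unit loop, with dna, τ+max and the ∆[τ+]-more-general test treated as assumed helpers passed in as arguments (they are not reproduced here).

def unit_loop(clause, dictionary, dna, tau_max, is_delta_more_general):
    head, body = clause
    tau = dna([clause], tau_max)          # a DN tau+ below tau_max for the single clause
    if is_delta_more_general(body, head, tau):
        # ([H <- B], tau+) is a looping pair: record the one-clause sequence
        return dictionary + [([clause], tau)]
    return dictionary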
Termination of unit loop relies on that of dna. Partial correctness is deduced from the next
theorem.
Theorem A.3 (Partial correctness of unit loop) Let H ← B be a binary clause and Dict be a
loop dictionary. Then unit loop(H ← B, Dict) is a loop dictionary, every element (BinSeq, τ + )
of which is such that either (BinSeq, τ + ) ∈ Dict or BinSeq = [H ← B].
Proof. Let τ + be the set of positions with associated terms computed at line 2. If B is not
∆[τ + ]-more general than H then, at line 5 of unit loop, we have Dict ′ = Dict, so the theorem
holds.
Now suppose that B is ∆[τ + ]-more general than H. Then, at line 5 we have Dict ′ := Dict ∪
{([H ← B], τ + )} where Dict is a loop dictionary, τ + is DN for H ← B and B is ∆[τ + ]-more
general than H. So at line 5 Dict ′ is a loop dictionary, every element (BinSeq, τ + ) of which is
such that either (BinSeq, τ + ) ∈ Dict or BinSeq = [H ← B].
The function loops from dict is used to combine a binary clause with looping pairs that
have already been inferred, in order to obtain further looping pairs.
loops from dict(H ← B, Dict ):
in: H ← B: a binary clause
Dict: a loop dictionary
out: Dict ′ : a loop dictionary, every element (BinSeq, τ + ) of which is
such that (BinSeq, τ + ) ∈ Dict or BinSeq = [H ← B|BinSeq 1 ]
for some (BinSeq 1 , τ1+ ) in Dict
1: Dict ′ := Dict
2: for each ([H1 ← B1 |BinSeq 1 ], τ1+ ) ∈ Dict do
3:    if B is ∆[τ1+ ]-more general than H1 then
4:       τ + := dna([H ← B, H1 ← B1 |BinSeq 1 ], τ1+ )
5:       Dict ′ := Dict ′ ∪ {([H ← B, H1 ← B1 |BinSeq 1 ], τ + )}
6: return Dict ′
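And a sketch of loops from dict in the same style; again dna and the ∆[τ+]-more-general test are assumed helpers, and the loop dictionary is represented as a plain list of (sequence, tau) pairs.

def loops_from_dict(clause, dictionary, dna, is_delta_more_general):
    head, body = clause
    extended = list(dictionary)
    for bin_seq, tau1 in dictionary:
        first_head, _first_body = bin_seq[0]
        if is_delta_more_general(body, first_head, tau1):
            new_seq = [clause] + bin_seq              # prepend the new clause
            tau = dna(new_seq, tau1)                  # DN tau+ for the longer sequence
            extended.append((new_seq, tau))
    return extended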
Termination of loops from dict follows from finiteness of Dict (because Dict is a loop dictionary) and termination of dna. Partial correctness follows from the result below.
Theorem A.4 (Partial correctness of loops from dict) Let H ← B be a binary clause and Dict
be a loop dictionary. Then loops from dict(H ← B, Dict) is a loop dictionary, every element
(BinSeq, τ + ) of which is such that (BinSeq, τ + ) ∈ Dict or BinSeq = [H ← B|BinSeq 1 ] for some
(BinSeq 1 , τ1+ ) in Dict .
Proof. Upon initialization at line 1, Dict ′ is a loop dictionary. Suppose that before an iteration of
the loop at line 2, Dict ′ is a loop dictionary. Let ([H1 ← B1 |BinSeq 1 ], τ1+ ) ∈ Dict.
If the condition at line 3 is false, then Dict ′ remains unchanged, so after the iteration Dict ′
is still a loop dictionary. Otherwise, the pair ([H ← B, H1 ← B1 |BinSeq 1 ], τ + ) is added to
Dict ′ at line 5. Notice that this pair is a looping one because τ + defined at line 4 is DN for
[H ← B, H1 ← B1 |BinSeq 1 ] and ([H1 ← B1 |BinSeq 1 ], τ1+ ) is a looping pair and B is ∆[τ1+ ]-more
general than H1 . Therefore, after the iteration, Dict ′ is a loop dictionary. Finally, notice that as
Dict is a finite set, the loop at line 2 terminates. Hence, at line 6 Dict ′ is a finite set of looping
pairs i.e. Dict ′ is a loop dictionary.
Moreover, at line 1, each element of Dict ′ belongs to Dict . Then, during the loop, pairs of form
([H ← B|BinSeq 1 ], τ + ) are added to Dict ′ where BinSeq 1 is such that there exists (BinSeq 1 , τ1+ ) ∈
Dict . Consequently, at line 6 each element (BinSeq, τ + ) of Dict ′ is such that either (BinSeq, τ + ) ∈
Dict or BinSeq = [H ← B|BinSeq 1 ] for some (BinSeq 1 , τ1+ ) in Dict .
Finally, here is the correctness proof of the function infer loop dict.
Theorem A.5 (Correctness of infer loop dict) Let P be a logic program and max be a nonnegative integer. Then, infer loop dict(P, max ) terminates and returns a loop dictionary, every
element (BinSeq, τ + ) of which is such that BinSeq ⊆ TPβ ↑ max .
Proof. At line 1, Dict is initialized to ∅ which is a loop dictionary. Suppose that before an
iteration of the loop at line 2, Dict is a loop dictionary. Then at lines 3 and 4 unit loop and
loops from dict fulfil their specifications. Hence, the calls to these functions terminate and after
the iteration Dict is still a loop dictionary. Finally, as TPβ ↑ max is a finite set, the loop at line 2
terminates and at line 5 Dict is a loop dictionary.
Moreover, at line 1 each element (BinSeq, τ + ) of Dict is such that BinSeq ⊆ TPβ ↑ max . Then,
during the loop, unit loop and loops from dict are called with clauses from TPβ ↑ max . So, by
Theorem A.3 and Theorem A.4, after the iteration each element (BinSeq, τ + ) of Dict is such that
BinSeq ⊆ TPβ ↑ max .
A.3
Looping Conditions
The following function computes a finite set of looping conditions for any given logic program.
infer loop cond(P , max ):
in: P : a logic program
max : a non-negative integer
out: a finite set of looping conditions for P
1: L := ∅
2: Dict := infer loop dict(P, max )
3: for each ([H ← B|BinSeq], τ + ) ∈ Dict do
4:    L := L ∪ {(H, τ + )}
5: return L
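A sketch of this post-processing step in the same Python style; it takes the already-computed loop dictionary (as built by the infer loop dict sketch above) rather than recomputing it, and keeps, for each looping pair, the head of the first clause together with its τ+.

def infer_loop_cond(loop_dict):
    """Turn a loop dictionary into a list of looping conditions (H, tau+)."""
    conditions = []
    for bin_seq, tau in loop_dict:
        head, _body = bin_seq[0]
        conditions.append((head, tau))
    return conditions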
A call to infer loop cond(P , max ) terminates for any logic program P and any non-negative
integer max because infer loop dict(P, max ) at line 2 terminates and the loop at line 3 has a
finite number of iterations (because, by correctness of infer loop dict, Dict is finite.) Partial
correctness of infer loop cond follows from the next theorem.
Theorem A.6 (Partial correctness of infer loop cond) Let P be a logic program and max be a
non-negative integer. Then infer loop cond(P, max ) is a finite set of looping conditions for P .
Proof. By correctness of infer loop dict, Dict is a loop dictionary.
Let ([H ← B|BinSeq], τ + ) ∈ Dict . Then ([H ← B|BinSeq], τ + ) is a looping pair. Consequently, by Proposition 4.3, H loops w.r.t. [H ← B|BinSeq]. As τ + , hence ∆[τ + ], is DN for
[H ← B|BinSeq], by Definition 3.18 every atom that is ∆[τ + ]-more general than H loops w.r.t.
[H ← B|BinSeq].
As ([H ← B|BinSeq], τ + ) ∈ Dict , by Theorem A.5 we have
[H ← B|BinSeq] ⊆ TPβ ↑ max ⊆ bin unf (P ) .
So, by Theorem 2.1, H left loops w.r.t. P and every atom that is ∆[τ + ]-more general than H left
loops w.r.t. P . So, (H, τ + ) is a looping condition for P . Consequently, at line 5, L is a finite set
of looping conditions for P because, as Dict is finite, the loop at line 3 iterates a finite number of
times.
B
Proofs
B.1
Two Useful Lemmas
Lemma B.1 Let c := H ← B be a binary clause. Then, for every variant cγ of c such that Var (cγ) ∩ Var (H) = ∅, we have H =⇒_c Bγ ′ where γ ′ := γ|Var(B) \ Var (H).
Proof. Let µ := {xγ/x | x ∈ Var (H)}. By Claim B.2 below, µ is an mgu of Hγ and H. Hence, as Var (cγ) ∩ Var(H) = ∅, we have the left derivation step H =⇒_c^µ Bγµ where cγ is the input clause used.
If Var(B) = ∅, then Bγµ = Bγ ′ , so we have H =⇒_c^µ Bγ ′ i.e. H =⇒_c Bγ ′ .
If Var(B) ≠ ∅, take a variable x ∈ Var(B):
• if x ∈ Var(H), then x(γµ) = (xγ)µ = x by definition of µ,
• if x ∉ Var(H), then x(γµ) = (xγ)µ = xγ by definition of µ.
Hence, Bγµ = Bγ ′ , so we have H =⇒_c^µ Bγ ′ i.e. H =⇒_c Bγ ′ .
Claim B.2 µ is an mgu of Hγ and H.
Proof. Let p(s1 , . . . , sn ) := H. The set of unifiers of Hγ and H is the same as that of E1 :=
{s1 γ = s1 , . . . , sn γ = sn }. Let E2 := {xγ = x | x ∈ Var (H)}. Notice that, as γ is a renaming,
if x, y ∈ Var (H) then x 6= y ⇒ xγ 6= yγ. Moreover, for each x ∈ Var(H), xγ 6= x because
Var (cγ) ∩ Var (H) = ∅. So, E2 is solved. Consequently, by Lemma 2.15 page 32 of [1], µ is an
mgu of E2 . Notice that, by Claim B.3 below, the set of unifiers of E1 is that of E2 . So µ is an
mgu of E1 i.e. µ is an mgu of Hγ and H.
Claim B.3 E1 and E2 have the same set of unifiers.
Proof. Let θ be a unifier of E1 . Let x ∈ Var (H) and let i ∈ [1, n] such that x ∈ Var (si ). Then
si γθ = si θ, so, if xk is an occurrence of x in si , we have xk γθ = xk θ i.e. (xk γ)θ = xk θ. As x
denotes any variable of H, we conclude that θ is a unifier of E2 . Conversely, let θ be a unifier of
E2 . Then, for each i ∈ [1, n], (si γ)θ = si θ by definition of E2 . Hence, θ is a unifier of E1 .
Lemma B.4 Let c := H ← B be a binary clause, cγ be a variant of c such that Var (cγ) ∩
Var (H) = ∅ and γ ′ := γ|Var (B)\Var (H). Then, there exists a renaming γ ′′ such that Bγ ′ = Bγ ′′ .
Proof. Let A := {x | x ∈ Ran(γ ′ ) and x 6∈ Dom(γ ′ )} and B := {x | x ∈ Dom(γ ′ ) and x 6∈
Ran(γ ′ )}. Notice that Ran(γ ′ ) and Dom(γ ′ ) have the same number of elements, so A and B have
the same number of elements. Let σ be a 1-1 and onto mapping from A to B. Then, γ ′′ := γ ′ ∪ σ
is a well-defined substitution, is such that Dom(γ ′′ ) = Ran(γ ′′ ), is 1-1 and is onto. Consequently,
γ ′′ is a renaming.
Now, let us prove that Bγ ′ = Bγ ′′ . If Var (B) = ∅, then the result is straightforward.
Otherwise, let x ∈ Var(B).
• If x ∈ Var (H) then, as Dom(γ ′ ) ⊆ Var (B) \ Var (H), we have x 6∈ Dom(γ ′ ), so xγ ′ = x.
Moreover, xγ ′′ = x(γ ′ ∪σ) = xσ = x because Dom(σ) ⊆ Ran(γ ′ ) and Ran(γ ′ )∩Var (H) = ∅.
Consequently, we have xγ ′ = xγ ′′ .
• If x 6∈ Var (H) and x ∈ Dom(γ ′ ) then xγ ′′ = x(γ ′ ∪ σ) = xγ ′ .
• If x 6∈ Var (H) and x 6∈ Dom(γ ′ ) then xγ ′ = x.
Now, suppose that x ∈ Dom(σ). Then, as Dom(σ) ⊆ Ran(γ ′ ) ⊆ Ran(γ), we have x ∈
Ran(γ). But, as γ is a renaming, Ran(γ) = Dom(γ), so we have x ∈ Dom(γ). As
x ∈ Var (B), as x 6∈ Var(H) and as γ ′ := γ|Var (B) \ Var (H), we have x ∈ Dom(γ ′ ).
Contradiction.
Consequently, x 6∈ Dom(σ), so xσ = x and xγ ′′ = x(γ ′ ∪ σ) = xσ = x. Finally, we have
proved that xγ ′ = xγ ′′ .
B.2
Proof of Corollary 3.2, page 4
By Lemma B.1 and Lemma B.4, we have H =⇒_c Bγ ′′ where γ ′′ is a renaming. As by hypothesis B is more general than H, then Bγ ′′ is more general than H. Therefore, by the One Step Lifting
Lemma 3.1, H loops w.r.t. {c}.
B.3
Proof of Corollary 3.3, page 4
By Lemma B.1 and Lemma B.4, we have H =⇒_c Bγ ′′ where γ ′′ is a renaming. As Bγ ′′ is more general than B and as B loops w.r.t. P , then, by the One Step Lifting Lemma 3.1, Bγ ′′ loops
w.r.t. P , so H loops w.r.t. P .
B.4
Proof of Proposition 3.16, page 7
If A is ∆-more general than B, we have, for a substitution η:
A = p(s1 , . . . , sn )
B = p(t1 , . . . , tn )
∀i ∈ [1, n] \ Dom(∆(p)), ti = si η
∀i ∈ Dom(∆(p)), ∆(p)(i)(si ) = true .
Let A′ be a variant of A. Then, there exists a renaming γ such that A′ = Aγ. As for each
i ∈ Dom(∆(p)), ∆(p)(i) is a variant independent term-condition, we have:
A′ = p(s1 γ, . . . , sn γ)
B = p(t1 , . . . , tn )
∀i ∈ [1, n] \ Dom(∆(p)), ti = si η = (si γ)(γ −1 η)
∀i ∈ Dom(∆(p)), ∆(p)(i)(si γ) = true .
Consequently, A′ is ∆-more general than B for γ −1 η, i.e. A′ is ∆-more general than B.
B.5
Proof of Proposition 3.17, page 7
⇐ By definition.
⇒ Let p(s1 , . . . , sn ) := A and p(t1 , . . . , tn ) := B. As A is ∆-more general than B, there exists a
substitution σ such that A is ∆-more general than B for σ. Notice that A is also ∆-more
general than B for the substitution obtained by restricting the domain of σ to the variables
appearing in the positions of A not distinguished by ∆. More precisely, let
η := σ|V ar({si | i ∈ [1, n] \ Dom(∆(p))}) .
Then,
Dom(η) ⊆ V ar(A)
(3)
and A is ∆-more general than B for η.
Now, let x ∈ Dom(η). Then, there exists i ∈ [1, n] \ Dom(∆(p)) such that x ∈ V ar(si ).
As A is ∆-more general than B for η and i ∈ [1, n] \ Dom(∆(p)), we have ti = si η. So, as
x ∈ V ar(si ), xη is a subterm of ti . Consequently, V ar(xη) ⊆ V ar(ti ), so V ar(xη) ⊆ V ar(B).
So, we have proved that for each x ∈ Dom(η), V ar(xη) ⊆ V ar(B), i.e. we have proved that
Ran(η) ⊆ V ar(B) .
(4)
Finally, (3) and (4) imply that Dom(η) ∪ Ran(η) ⊆ V ar(A, B) i.e. that
V ar(η) ⊆ V ar(A, B) .
B.6
Proof of Proposition 3.20, page 8
By Lemma B.1 and Lemma B.4, we have H =⇒_c Bγ ′′ where γ ′′ is a renaming. As by hypothesis B is ∆-more general than H, then by Proposition 3.16 Bγ ′′ is ∆-more general than H. Therefore,
as ∆ is DN for c, by Definition 3.18, H loops w.r.t. {c}.
B.7
Proof of Proposition 3.37, page 11
Let c := p(s1 , . . . , sn ) ← q(t1 , . . . , tm ) and c′ := p(s′1 , . . . , s′n ) ← q(t′1 , . . . , t′m ) be a variant of c.
Then, there exists a renaming γ such that c′ = cγ.
(DN1) Let i ∈ Dom(τ + (p)). Suppose that there exists j 6= i such that V ar(s′i ) ∩ V ar(s′j ) 6= ∅
and let us derive a contradiction.
Let x′ ∈ V ar(s′i ) ∩ V ar(s′j ). As s′j = sj γ, there exists x ∈ V ar(sj ) such that x′ = xγ.
For such an x, as j 6= i and as V ar(si ) ∩ V ar(sj ) = ∅ (because τ + is DN for c), we
have x 6∈ V ar(si ). So, as γ is a 1-1 and onto mapping from its domain to itself, we have
xγ 6∈ V ar(si γ)3 , i.e. x′ 6∈ V ar(s′i ). Contradiction!
Consequently, V ar(s′i ) ∩ V ar(s′j ) = ∅.
(DN2) Let hi 7→ ui i ∈ τ + (p). As si is more general than ui (because τ + is DN for c) and as s′i
is a variant of si , s′i is more general than ui .
(DN3) Let hj 7→ uj i ∈ τ + (q). As tj is an instance of uj (because τ + is DN for c) and as t′j is a
variant of tj , t′j is an instance of uj .
(DN4) Let i ∈ Dom(τ + (p)). Suppose there exists j 6∈ Dom(τ + (q)) such that V ar(s′i )∩V ar(t′j ) 6=
∅. Let us derive a contradiction.
Let x′ ∈ V ar(s′i ) ∩ V ar(t′j ). As t′j = tj γ and x′ ∈ V ar(t′j ), there exists x ∈ V ar(tj ) such that
x′ = xγ. For such an x, as the elements of V ar(si ) only occur in those tk s.t. k ∈ Dom(τ + (q))
(because τ + is DN for c) and as x ∈ V ar(tj ) with j 6∈ Dom(τ + (q)), we have x 6∈ V ar(si ).
So, as γ is a 1-1 and onto mapping from its domain to itself, we have xγ 6∈ V ar(si γ) (see
footnote 3), i.e. x′ ∉ V ar(s′i ). Contradiction! So, for each j 6∈ Dom(τ + (q)), we have
V ar(s′i ) ∩ V ar(t′j ) = ∅.
Finally, we have established that τ + is DN for c′ .
B.8
DN Sets of Positions with Associated Terms Generate DN Filters
In this section, we give a proof of Theorem 3.39, page 12.
B.8.1
Context
All the results of this section are parametric to the following context:
• P is a binary program and τ + is a set of positions with associated terms that is DN for P ,
• Q =⇒_c^θ Q1 is a left derivation step where
– c ∈ P,
– Q := p(t1 , . . . , tn ),
– c1 := p(s1 , . . . , sn ) ← B is the input clause used (consequently, c1 is a variant of c that
is variable disjoint from Q),
• Q′ := p(t′1 , . . . , t′n ) is ∆[τ + ]-more general than Q i.e., by Proposition 3.17, there exists a
substitution η such that V ar(η) ⊆ V ar(Q, Q′ ) and Q′ is ∆[τ + ]-more general than Q for η.
3 Because if xγ ∈ V ar(si γ), then either x ∈ V ar(si ), or xγ ∈ V ar(si ) and (xγ)γ = xγ. The former case is impossible because we said that x 6∈ V ar(si ). The latter case is impossible too because (xγ)γ = xγ implies that xγ 6∈ Dom(γ) i.e. x 6∈ Dom(γ) (because γ is a 1-1 and onto mapping from its domain to itself); so, x = xγ i.e., as xγ ∈ V ar(si ), x ∈ V ar(si ).
B.8.2
Technical Definitions and Lemmas
Definition B.5 (Technical Definition) Let c′1 := p(s′1 , . . . , s′n ) ← B ′ be a binary clause such that
• V ar(c′1 ) ∩ V ar(Q, Q′ ) = ∅ and
• c1 = c′1 γ for some renaming γ satisfying V ar(γ) ⊆ V ar(c1 , c′1 ).
As c′1 is a variant of c1 and c1 is a variant of c, then c′1 is a variant of c. Moreover, as τ +
is DN for c, by Proposition 3.37, τ + is DN for c′1 . So, by (DN2) in Definition 3.35, for each
hi 7→ ui i ∈ τ + (p) there exists a substitution δi such that ui = s′i δi .
Moreover, as p(t′1 , . . . , t′n ) is ∆[τ + ]-more general than p(t1 , . . . , tn ), for each hi 7→ ui i ∈ τ + (p), t′i is an instance of ui . So, there exists a substitution δi′ such that t′i = ui δi′ .
For each i ∈ Dom(τ + (p)), we set
σi := (δi δi′ )|V ar(s′i ) .
Moreover, we set:
σ := ∪_{i∈Dom(τ + (p))} σi .
Lemma B.6 The set σ of Definition B.5 is a well-defined substitution.
Proof. Notice that, as τ + is DN for c′1 , by (DN1) in Definition 3.35, we have
∀i ∈ Dom(τ + (p)), ∀j ∈ [1, n] \ {i}, V ar(s′i ) ∩ V ar(s′j ) = ∅ .
Consequently,
∀i, j ∈ Dom(τ + (p)), i 6= j ⇒ Dom(σi ) ∩ Dom(σj ) = ∅ .
Moreover, for each i ∈ Dom(τ + (p)), σi is a well-defined substitution. So, σ is a well-defined
substitution.
Lemma B.7 (Technical Lemma) Let c′1 := p(s′1 , . . . , s′n ) ← B ′ be a binary clause such that
• V ar(c′1 ) ∩ V ar(Q, Q′ ) = ∅ and
• c1 = c′1 γ for some renaming γ satisfying V ar(γ) ⊆ V ar(c1 , c′1 ).
Let σ be the substitution of Definition B.5. Then, the substitution σηγθ is a unifier of p(t′1 , . . . , t′n ) and p(s′1 , . . . , s′n ).
Proof. The result follows from the following facts.
• For each hi 7→ ui i ∈ τ + (p), we have:
s′i σ = s′i σi = s′i δi δi′ = (s′i δi )δi′ = ui δi′ = t′i
and t′i σ = t′i because Dom(σ) ⊆ V ar(c′1 ) and V ar(Q′ ) ∩ V ar(c′1 ) = ∅. So, s′i σ = t′i σ and
s′i σηγθ = t′i σηγθ.
• For each i ∈ [1, n] \ Dom(τ + (p)), we have:
s′i ηγθ = (s′i η)γθ = s′i γθ = (s′i γ)θ = si θ
and
t′i ηγθ = (t′i η)γθ = ti γθ = (ti γ)θ = ti θ
and si θ = ti θ because θ is a unifier of p(s1 , . . . , sn ) and p(t1 , . . . , tn ) (because Q =⇒_c^θ Q1 with c1 as input clause used). So,
(5)    s′i ηγθ = t′i ηγθ
• For each i ∈ [1, n] \ Dom(τ + (p)), we also have:
– s′i σ = s′i because Dom(σ) = V ar {s′j | j ∈ Dom(τ + (p))} and, by (DN1) in Defini
tion 3.35, V ar {s′j | j ∈ Dom(τ + (p))} ∩ V ar(s′i ) = ∅;
– t′i σ = t′i because Dom(σ) ⊆ V ar(c′1 ) and V ar(Q′ ) ∩ V ar(c′1 ) = ∅.
Therefore, because of (5), s′i σηγθ = t′i σηγθ.
B.8.3
∆-Propagation
Now we extend, in the case of left derivations with atomic queries and binary clauses, the following
Propagation Lemma that is proved by Apt in [1] p. 54–56.
Lemma B.8 (Propagation) Let G, G1 , G′ and G′1 be some queries such that
G =⇒_c G1 and G′ =⇒_c G′1 , and
• G is an instance of G′
• in G and G′ atoms in the same positions are selected.
Then, G1 is an instance of G′1 .
First we establish the following result.
Lemma B.9 Suppose there exists a left derivation step of form Q′ =⇒_c^θ′ Q′1 where the input clause is c′1 such that V ar(Q) ∩ V ar(c′1 ) = ∅. Then, Q′1 is ∆[τ + ]-more general than Q1 .
Proof. Notice that we have
V ar(Q) ∩ V ar(c1 ) = V ar(Q, Q′ ) ∩ V ar(c′1 ) = ∅ .
Moreover, as c1 is a variant of c′1 , there exists a renaming γ such that
V ar(γ) ⊆ V ar(c1 , c′1 ) and
c1 = c′1 γ .
Let c′1 := p(s′1 , . . . , s′n ) ← B ′ . Then,
Q1 = Bθ
and
Q′1 = B ′ θ′ .
τ + is DN for c and c′1 is a variant of c. So, by Proposition 3.37, τ + is DN for c′1 . Let σ be the
substitution of Definition B.5.
Let q(v′1 , . . . , v′m ) := B ′ . As B = B ′ γ, B has form q(v1 , . . . , vm ).
• For each hj 7→ uj i ∈ τ + (q), vj′ is an instance of uj (because τ + is DN for c′1 and (DN3) in
Definition 3.35.)
• For each j ∈ [1, m] \ Dom(τ + (q)) we have:
vj′ σηγθ = (vj′ σ)ηγθ = vj′ ηγθ
because, by (DN4) in Definition 3.35
V ar(vj′ ) ∩ V ar {s′i | i ∈ Dom(τ + (p))} = ∅
with Dom(σ) = V ar {s′i | i ∈ Dom(τ + (p))} . Moreover,
vj′ ηγθ = (vj′ η)γθ = vj′ γθ
because V ar(η) ⊆ V ar(Q, Q′ ) and V ar(c′1 ) ∩ V ar(Q, Q′ ) = ∅. Finally,
vj′ γθ = (vj′ γ)θ = vj θ
because B = B ′ γ.
Consequently, we have proved that
q(v′1 , . . . , v′m ) is ∆[τ + ]-more general than q(v1 , . . . , vm )θ for σηγθ
i.e. that B ′ is ∆[τ + ]-more general than Bθ for σηγθ i.e. that
(6)    B ′ is ∆[τ + ]-more general than Q1 for σηγθ .
But, by the Technical Lemma B.7, σηγθ is a unifier of p(s′1 , . . . , s′n ) and p(t′1 , . . . , t′n ). As θ′ is an mgu of p(s′1 , . . . , s′n ) and p(t′1 , . . . , t′n ) (because Q′ =⇒_c^θ′ Q′1 with c′1 as input clause), there exists δ
such that σηγθ = θ′ δ. Therefore, we conclude from (6) that B ′ is ∆[τ + ]-more general than Q1
for θ′ δ which implies that B ′ θ′ is ∆[τ + ]-more general than Q1 for δ i.e. that Q′1 is ∆[τ + ]-more
general than Q1 for δ. Finally, we have proved that Q′1 is ∆[τ + ]-more general than Q1 .
Using the Propagation Lemma B.8, the preceding result can be extended as follows.
Proposition B.10 (∆-Propagation) Suppose there exists a left derivation step Q′ =⇒_c^θ′ Q′1 . Then Q′1 is ∆[τ + ]-more general than Q1 .
Proof. Let c′1 be the input clause used in Q′ =⇒_c^θ′ Q′1 . Take a variant Q′′ of Q such that
V ar(Q′′ ) ∩ V ar(c′1 ) = ∅
and a variant c′′1 of c such that
V ar(c′′1 ) ∩ V ar(Q′′ ) = ∅ .
Then, the left resolvent Q′′1 of Q′′ and c exists with the input clause c′′1 . So, for some θ′′ , we have Q′′ =⇒_c^θ′′ Q′′1 with input clause c′′1 . Consequently, we have:
Q =⇒_c^θ Q1 and Q′′ =⇒_c^θ′′ Q′′1 .
Q and Q′′ are instances of each other because Q′′ is a variant of Q. So, by the Propagation Lemma B.8 used twice, Q′′1 is an instance of Q1 and Q1 is an instance of Q′′1 . So,
(7)    Q′′1 is a variant of Q1 .
But we also have
Q′′ =⇒_c^θ′′ Q′′1 and Q′ =⇒_c^θ′ Q′1
with input clauses c′′1 and c′1 , with Q′ that is ∆[τ + ]-more general than Q′′ (because Q′′ is a variant of Q and Q′ is ∆[τ + ]-more general than Q) and V ar(Q′′ ) ∩ V ar(c′1 ) = ∅. So, by Lemma B.9,
(8)    Q′1 is ∆[τ + ]-more general than Q′′1 .
Finally, from (7) and (8) we have: Q′1 is ∆[τ + ]-more general than Q1 .
B.8.4
Epilogue
Theorem 3.39 is a direct consequence of the following result.
Proposition B.11 (One Step ∆-Lifting) Let c′ be a variant of c variable disjoint with Q′ . Then, there exist θ′ and a query Q′1 that is ∆[τ + ]-more general than Q1 such that Q′ =⇒_c^θ′ Q′1 with input clause c′ .
Proof. Let c′1 := p(s′1 , . . . , s′n ) ← B ′ be a variant of c1 . Then there exists a renaming γ such that
V ar(γ) ⊆ V ar(c1 , c′1 ) and c1 = c′1 γ. Suppose also that
V ar(c′1 ) ∩ V ar(Q, Q′ ) = ∅ .
By the Technical Lemma B.7, p(s′1 , . . . , s′n ) and p(t′1 , . . . , t′n ) unify. Moreover, as V ar(c′1 ) ∩
V ar(Q′ ) = ∅, p(s′1 , . . . , s′n ) and p(t′1 , . . . , t′n ) are variable disjoint. Notice that the following claim
holds.
Claim B.12 Suppose that the atoms A and H are variable disjoint and unify. Then, A also
unifies with any variant H ′ of H variable disjoint with A.
Proof. For some γ such that Dom(γ) ⊆ V ar(H ′ ), we have H = H ′ γ. Let θ be a unifier of A and
H. Then, Aγθ = Aθ = Hθ = H ′ γθ, so A and H ′ unify.
Therefore, as c′ is a variant of c′1 and c′ is variable disjoint with Q′ , p(t′1 , . . . , t′n ) and the head
of c′ unify. As they also are variable disjoint, we have
Q′ =⇒_c^θ′ Q′1
for some θ′ and Q′1 where c′ is the input clause used. Moreover, by the ∆-Propagation Proposition B.10, Q′1 is ∆[τ + ]-more general than Q1 .
arXiv:1309.0135v1 [] 31 Aug 2013
RAMIFICATION OF LOCAL RINGS ALONG VALUATIONS
STEVEN DALE CUTKOSKY AND PHAM AN VINH
Abstract. In this paper we discuss stable forms of extensions of algebraic local rings
along a valuation in all dimensions over a field k of characteristic zero, and generalize
a formula of Ghezzi, Hà and Kashcheyeva describing the extension of associated graded
rings along the valuation for stable extensions of regular algebraic local rings of dimension
two to arbitrary ground fields k of characteristic zero. We discuss the failure of this result
in positive characteristic.
1. Introduction
Suppose that k is a field, K is an algebraic function field over k and ν is a valuation
of K (which is trivial on k). Let Vν be the valuation ring of ν, with maximal ideal mν .
Let Γν be the value group of ν. Important invariants associated to ν are its rank (one
less than the number of prime ideals in Vν ), rational rank (dimQ Γν ⊗Z Q) and dimension
(dim ν = trdegk Vν /mν ). We have that rank ν ≤ rational rank ν and by Abhyankar’s
inequality ([1] and Appendix 2 [12]),
(1)
rational rank ν + dim ν ≤ dim K
where dim K = trdegk K, and if equality holds, then Γν ≅ Z^{rr} as an unordered group, where rr = rational rank ν. Such valuations are called Abhyankar valuations.
An algebraic local ring of K is a local ring R which is essentially of finite type over k
and whose quotient field is K. R is dominated by ν if R ⊂ Vν and mν ∩ R = mR is the
maximal ideal of R.
A monoidal transform R → R1 of R is a local ring R1 of the blowup of a regular prime
ideal P of R (R/P is regular). R → R1 is a quadratic transform if R1 is a local ring of
the blow up of the maximal ideal of R. R → R1 is a monoidal transform along ν if Vν
dominates R1 .
For each γ ∈ Γν , let
Pγ (R) = {f ∈ R | ν(f ) ≥ γ} and Pγ+ (R) = {f ∈ R | ν(f ) > γ}.
We define the associated graded algebra of ν on R (as in [10]) as
grν (R) = ⊕_{γ∈Γν} Pγ (R)/Pγ+ (R).
If f ∈ R and ν(f ) = γ, we define the initial form inν (f ) of f in grν (R) as f + Pγ+ (R) ∈
Pγ /Pγ+ . A sequence {Pi }i≥0 in R is called a generating sequence of ν in R if {inν (Pi )}i≥0
generate grν (R) as an R/mR -algebra.
The semigroup of R is
S R (ν) = {ν(f ) | f ∈ R \ {0}}.
The first author was partially supported by NSF.
Now suppose that K ∗ is an algebraic function field over k such that K ∗ is finite separable
over K, and ν ∗ is a valuation of K ∗ which is an extension of ν. Let
n = trdegk K ∗ − trdegk Vν ∗ /mν ∗ .
Let
e = [Γν ∗ : Γν ] and f = [Vν ∗ /mν ∗ : Vν /mν ]
be the reduced ramification index and relative degree of ν ∗ over ν.
Suppose that R and S are algebraic local rings for K and K ∗ such that S dominates R
and ν ∗ dominates S (so that ν dominates R).
Lemma 1.1. Suppose that Vν ∗ /mν ∗ = (Vν /mν )(S/mS ). Then [S/mS : R/mR ] = f if and
only if Vν /mν and S/mS are linearly disjoint in Vν ∗ /mν ∗ over R/mR .
Proof. Suppose that [S/mS : R/mR ] = f . Let h1 , . . . , hs ∈ S/mS be linearly independent
over R/mR . Extend this set to a basis h1 , . . . , hf of S/mS over R/mR . Then h1 , . . . , hf
span Vν ∗ /mν ∗ over Vν /mν , so they are linearly independent over Vν /mν .
Now suppose that Vν /mν and S/mS are linearly disjoint over R/mR . There exist
α1 , . . . , αf ∈ S/mS which are a basis of Vν ∗ /mν ∗ over Vν /mν . Then α1 , . . . , αf are linearly
independent over R/mR , so [S/mS : R/mR ] ≥ f . However, a basis of S/mS over R/mR
is linearly independent over Vν /mν , so [S/mS : R/mR ] = f .
We will say that R → S is monomial if R and S are n-dimensional regular local rings and
there exist regular parameters x1 , . . . , xn in R, y1 , . . . , yn in S, an n × n matrix A = (aij )
of natural numbers with Det(A) 6= 0 and units δi ∈ S such that
(2)    xi = δi ∏_{j=1}^{n} yj^{aij}
for 1 ≤ i ≤ n. In Theorem 1.1 [3] it is proven that when the ground field k has characteristic
zero, there exists a commutative diagram
(3)
R0 → S 0
↑
↑
R → S
such that the vertical arrows are products of monoidal transforms along ν ∗ and R0 →
S0 is monomial. It is shown in Theorem 5.1 [3] and Theorem 4.8 [4] that the matrix
A0 describing R0 → S0 (with respect to regular parameters x1 (0), . . . , xn (0) in R0 and
y1 (0), . . . , yn (0) in S0 ) can be required to take a very special block form, which reflects
the rank and rational rank of ν ∗ . We will say that R0 → S0 is strongly monomial if it is
monomial and the matrix A0 has this special block form.
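For concreteness, here is a toy instance of (2) with n = 2; this is our own illustration, not an example taken from [3] or [4]. Take regular parameters x1 , x2 in R and y1 , y2 in S with units δ1 , δ2 ∈ S and
x1 = δ1 y1^2 y2 ,   x2 = δ2 y2 ,   A = ( 2  1 ; 0  1 ) ,   Det(A) = 2 .
For a stable extension of this shape one would additionally have e = Det(A) = 2 (see the identities for e and f below), and a strongly monomial A0 would moreover carry the block form reflecting the rank and rational rank of ν ∗ .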
In Theorem 6.1 [4] it is shown (assuming that k has characteristic zero) that we can
always find a diagram (3) such that the following conditions hold:
1) R0 → S0 is strongly monomial.
2) If
R1 → S 1
↑
↑
R0 → S 0
is such that R1 → S1 is strongly monomial with respect to regular parameters
x1 (1), . . . , xn (1) in R1 and y1 (1), . . . , yn (1) in S1 , and the vertical arrows are products of monoidal transforms, then
2a) The natural group homomorphism
Zn /At1 Zn → Γν ∗ /Γν
defined by
(b1 , . . . , bn ) 7→ [b1 ν ∗ (y1 (1)) + · · · + bn ν ∗ (yn (1))]
is an isomorphism (where A1 is the matrix of exponents of R1 → S1 with
respect to our given systems of parameters).
2b) Vν ∗ /mν ∗ is the join Vν ∗ /mν ∗ = (Vν /mν )(S1 /mS1 ).
2c) Vν /mν and S1 /mS1 are linearly disjoint over R1 /mR1 in Vν ∗ /mν ∗ .
Theorem 6.1 [4] and Lemma 1.1 imply that, given R → S, there exists a monomial
extension R0 → S0 as in (3) satisfying 1) and 2) above. In Theorem 6.3 [4] it is shown
that the extension V → V ∗ can naturally be understood as a direct limit of R0 → S0 as
above.
We will say that R0 → S0 is stable if the conclusions 1) and 2) above hold.
If R → S is stable, we have that
e = Det(A) and f = [S/mS : R/mR ].
where e is the reduced ramification index and f is the relative degree of ν ∗ over ν.
The simplest valuations are the Abhyankar valuations (defined at the beginning of this
section). In this case, we easily obtain a very strong statement comparing the associated
graded rings of the valuations. We have the following proposition.
Proposition 1.2. Suppose that k has characteristic zero, ν is an Abhyankar valuation
and R → S is stable. Then we have a natural isomorphism of graded rings
grν ∗ (S) ≅ grν (R) ⊗_{R/mR} S/mS [ȳ1 , . . . , ȳn ]
where ȳ1 , . . . , ȳn are the initial forms of y1 , . . . , yn , with the only relations being
[xi ] = [δi ] ȳ1^{ai1} · · · ȳn^{ain} ,   1 ≤ i ≤ n
obtained from (2) ([δi ] is the class of δi in S/mS ). The degree of the extension of quotient
fields of
grν (R) → grν ∗ (S)
is ef .
Proof. Since R → S is stable, and ν ∗ and ν are Abhyankar valuations, we have that
ν ∗ (y1 ), . . . , ν ∗ (yn ) is a Z-basis of Γν ∗ and ν(x1 ), . . . , ν(xn ) is a Z-basis of Γν .
By Hensel’s lemma, R̂ ≅ k′ [[x1 , . . . , xn ]] where k′ ≅ R/mR is a coefficient field of R̂.
Since ν(x1 ), . . . , ν(xn ) are rationally independent, ν has a unique extension to a valuation ν̂ of the quotient field of R̂, defined by
ν̂(f ) = min{i1 ν(x1 ) + · · · + in ν(xn ) | ai1 ,...,in ≠ 0}
if f = Σ ai1 ,...,in x1^{i1} · · · xn^{in} ∈ k′ [[x1 , . . . , xn ]] (with ai1 ,...,in ∈ k′ ). Since distinct monomials have distinct values, we have an isomorphism of residue fields Vν /mν ≅ R/mR .
Hence grν (R) ≅ R/mR [x̄1 , . . . , x̄n ] is a polynomial ring, where x̄i is the class of xi , with the grading deg x̄i = ν(xi ). Further grν ∗ (S) ≅ S/mS [ȳ1 , . . . , ȳn ] is a polynomial ring, where ȳi is the class of yi . The proposition follows.
If ν is an Abhyankar valuation, and R → S is quasi-finite, we have that S S (ν ∗ ) is finitely
generated as a module over the semigroup S R (ν) by Proposition 1.2.
It is natural to ask if an analog of Proposition 1.2 holds for more general valuations.
We have the essential difference that the valuation groups Γν are not finitely generated
in general. There even exist examples where R → S is quasi-finite but S S (ν ∗ ) is not a
finitely generated module over S R (ν). In Theorem 9.4 [6] an example is given of a finite
monomial extension of two dimensional regular algebraic local rings (over any ground
field) such that S S (ν ∗ ) is not a finitely generated module over S R (ν). This example is
necessarily not stable. Some other examples are given in [5] showing bad behavior of
S S (ν ∗ ) over S R (ν).
However, the conclusions of Proposition 1.2 always hold for stable mappings R → S
when R and S have dimension two (n = 2). By Abhyankar’s inequality, when n = 2,
ν is an Abhyankar valuation unless ν is rational (the value group is order isomorphic to
a subgroup of the rational numbers). We have the following theorem, which generalizes
Proposition 1.2 to this case. This surprising theorem was proven when k is algebraically
closed of characteristic zero and dim K = 2 by Ghezzi, Hà and Kashcheyeva in [7]. If
n = 2, ν is rational and R → S is stable, then R has regular parameters u, v, S has
regular parameters x, y and there exist a unit γ in S such that
u = γxe , v = y,
(4)
where e = |Γν ∗ /Γν | is the reduced ramification index.
Theorem 1.3. Suppose that k is a field of characteristic zero, ν ∗ is a rational 0-dimensional
valuation, n = 2 and R → S is stable. Then
grν ∗ (S) ≅ grν (R) ⊗_{R/mR} S/mS [Z]/(Z^e − [γ0 ]^{−1} [u]),
and the degree of the extension of quotient fields of gr ν (R) → grν ∗ (S) is ef .
The remaining sections of this paper are devoted to the proof of this theorem. Our
proof requires the construction of generating sequences for valuations in arbitrary regular
local rings of dimension two in [6]. Theorem 1.3 is proven in Section 4, as a consequence
of Proposition 4.1, which shows that a generating sequence in R is almost a generating
sequence in S if R → S is stable.
In contrast to the fact that finite generation may not hold even for a monomial mapping,
when ν ∗ is a rational 0-dimensional valuation with n = 2 (Example 9.4 [6]), we have finite
generation if R → S is stable.
Corollary 1.4. Suppose that k is a field of characteristic zero, ν ∗ is a rational 0-dimensional
valuation, n = 2 and R → S is stable. Then the semigroup S S (ν ∗ ) is a finitely generated
S R (ν)-module.
An interesting question is if an analogue of the conclusions of Proposition 1.2 holds in
general for any n and arbitrary valuations for stable mappings over fields k of characteristic
zero. It would be remarkable if this were true.
With some small modification in the definition of strongly monomial (in (3)), strong
monomialization holds for Abhyankar valuations in positive characteristic, as follows from
[9], (a strong form of local uniformization is proven for Abhyankar valuations by Knaf
and Kuhlmann), and thus Proposition 1.2 holds in positive characteristic. A description
of grν (R) for ν an Abhyankar valuation dominating a (singular) local ring R, over an
algebraically closed field of arbitrary characteristic, and a proof of local uniformization for
Abhyankar valuations derived from this construction, has been recently given by Teissier
in [11].
Over fields of positive characteristic, it is shown in Section 7.11 of [4] that the strong
monomialization theorem is not true, even when n = 2, k is algebraically closed and ν is
rational and zero dimensional. It is not known if monomialization holds, although it seems
unlikely. Stable forms are given in [4] for mappings in dimension two over an algebraically
closed field of positive characteristic which are much more complicated than in the characteristic zero case. The fundamental obstruction to obtaining strong monomialization is
the defect. It is shown in [4] that strong monomialization holds in dimension two over
algebraically closed fields k for extensions of valuations for which there is no defect. In
[8], Ghezzi and Kashcheyeva prove Theorem 1.3 when k is algebraically closed of positive
characteristic, dim K = 2 and the extension has no defect.
In the example of Section 7.11 of [4], the stable forms Ri → Si satisfy
(5)
grν (Ri ) → grν ∗ (Si )
is integral but not finite, in contrast to the case of Proposition 1.2 and Theorem 1.3. In fact,
grν (Ri ) = grν ∗ (Si )^p . Further, S Si (ν ∗ ) is not a finitely generated S Ri (ν)-module for any i. In this example, the degree of the extension of quotient fields of (5) is ef p^{δ(ν ∗ /ν)} = p^2 ,
where δ(ν ∗ /ν) = 2 is the defect of ν ∗ over ν. The defect is always zero in characteristic
zero, and for Abhyankar valuations.
2. A modification of the algorithm of [6] to construct a generating
sequence
Suppose that k is a field of characteristic zero and K is a two dimensional algebraic
function field over k. Suppose that ν is a rational 0-dimensional valuation of K (the
value group is isomorphic as an ordered group to a subgroup of Q and trdegk Vν /mν = 0).
Suppose that R is a regular algebraic local ring of K such that ν dominates R.
Let
R → T1 → T2 → · · ·
be the sequence of quadratic transforms of R along ν, so that Vν = ∪Ti . Suppose that
x, y are regular parameters in R. There exists a smallest value i such that the divisor of
xy in spec(Ti ) has only one component.
(6)
Define R1 = Ti .
We consider the algorithm of Theorem 4.2 [6] to construct a generating sequence in R
with Remark 4.3 [6] and the following observation: We can replace Ui with a unit τi ∈ R
times Ui in the algorithm. The algorithm (which we will call the modified algorithm to
construct a generating sequence) iterates in the following way. Suppose that for i ≥ 0 we
have constructed the first i + 1 terms
P0 = x, P1 = y, P2 , . . . , Pi
of a generating sequence by the (modified) algorithm. To produce the next term Pi+1 , the
algorithm proceeds as follows. First we compute
ni = [G(ν(P0 ), . . . , ν(Pi )) : G(ν(P0 ), . . . , ν(Pi−1 ))].
This allows us to find a suitable element
(7)    Ui = P0^{ω0 (i)} P1^{ω1 (i)} · · · Pi−1^{ω_{i−1} (i)} τi
with τi ∈ R an arbitrary unit, such that ν(Pi^{ni} ) = ν(Ui ). Let
"
#
Pini
(8)
αi =
∈ Vν /mν ,
Ui
and
fi (z) = z di + bi,di −1 z di −1 + · · · + bi,0
be the minimal polynomial of αi over R/mR (α1 , . . . , αi−1 ). Then the algorithm produces
an element Pi+1 ∈ R of the form
(9)    Pi+1 = Pi^{ni di} + Σ_{t=0}^{di −1} ( Σ_{s=1}^{λt} as,t P0^{j0 (s,t)} · · · Pi−1^{j_{i−1} (s,t)} Pi^{t ni} )
where as,t ∈ R are units, j0 (s, t), . . . , ji−1 (s, t) ∈ N with 0 ≤ jk (s, t) < nk for k ≥ 1 and 0 ≤ t < di such that
ν(P0^{j0 (s,t)} · · · Pi−1^{j_{i−1} (s,t)} Pi^{t ni} ) = ni di ν(Pi )
for all s, t, and
(10)    bi,t = [ Σ_{s=1}^{λt} as,t P0^{j0 (s,t)} · · · Pi−1^{j_{i−1} (s,t)} / Ui^{di −t} ] ∈ Vν /mν .
Then P0 , P1 , . . . , Pi , Pi+1 are the first i + 2 terms of a generating sequence for ν in R.
The observation of Remark 4.3 [6] is that any choice of (9) such that (10) holds gives
an extension Pi+1 to the next term in a generating sequence, satisfying the conclusions of
Theorem 4.2 [6].
We will consider the (modified) algorithm of Theorem 4.2 [6] in various rings R with
given regular parameters x, y. We will denote
Pi (R) = Pi so P0 (R) = x, P1 (R) = y,
ni (R) = ni , Ui (R) = Ui , αi (R) = αi , fiR (z) = fi (z), di (R) = di , ni (R) = ni = di ni .
These calculations not only depend on R, but on the previous terms P0 , P1 , . . . , Pi−1
constructed in the algorithm.
We will also consider the algorithm of Theorem 7.1 [6] in different rings R, with given
regular parameters x, y, and a generating sequence
x = P0 , y = P1 , P2 , . . . , Pi , . . .
constructed by the (modified) algorithm 4.2 of [6]. This algorithm considers the birational
extension ring R1 of R defined by (6).
The positive integers n1 and ω0 (1) of Theorem 4.2 [6] are defined by the conditions that
n1 ν(y) = ω0 (1)ν(x) and gcd(n1 , ω0 (1)) = 1. Choose a, b ∈ N so that n1 b − ω0 (1)a = 1.
Define
(11)    x1 = x^b / y^a ,   y1 = y^{n1} / x^{ω0 (1)} .
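As a purely numerical illustration of ours (not an example from [6]): if ν(x) = 2 and ν(y) = 3, then n1 ν(y) = ω0 (1)ν(x) with gcd(n1 , ω0 (1)) = 1 forces n1 = 2, ω0 (1) = 3, and a = 1, b = 2 satisfy n1 b − ω0 (1)a = 1, so (11) gives
x1 = x^2 / y ,   y1 = y^2 / x^3 ,   ν(x1 ) = 2·2 − 3 = 1 > 0 ,   ν(y1 ) = 2·3 − 3·2 = 0 ,
consistent with x1 being a parameter of R1 and y1 having value zero.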
Let
(12)
σ = [y1 ] ∈ Vν /mν ,
which is nonzero. Then (as is shown in Theorem 7.1 [6]) R1 /mR1 = R/mR [σ]. Theorem
7.1 [6] shows that
(13)    Q0 = x1 ,   Q1 = P2 / x1^{ω0 (1)n1}
are regular parameters in R1 , and taking
(14)    Qi = Pi+1 / Q0^{ω0 (1)n1 ···ni}
for 1 ≤ i, the Qi are a generating sequence for ν in R1 produced by the algorithm of
Theorem 4.2 [6] (as interpreted by Remark 4.3 [6]).
We will consider the algorithm of Theorem 7.1 [6] in different rings R, and will denote
Qi (R1 ) = Qi , β̂i (R1 ) = β̂i , Vi (R1 ) = Vi , α̂i (R1 ) = α̂i
in the notation of the proof of Theorem 7.1 [6].
We have that the Vi (R1 ) constructed in the proof of Theorem 7.1 [6] for R → R1
are actually the Ui (R1 ) as constructed by the algorithm of Theorem 4.2 [6].
Let L0 ≅ R/mR be a coefficient field of R̂, so that R̂ = L0 [[x, y]].
R1 = R[x1 , y1 ]_{mν ∩R[x1 ,y1 ]} .
ν(x1 ) > 0 and ν(y1 ) = 0. We have that
R1 /mR1 ≅ L0 [σ],
where σ is the class of y1 in R1 /mR1 . Let L1 ≅ L0 (σ) be a coefficient field of R̂1 containing L0 (this is possible since k has characteristic zero, by Hensel's Lemma). Let
(15)    y1∗ = y1 − σ ∈ m_{R̂1} .
Thus x1 , y1∗ are regular parameters in R̂1 and hence R̂1 = L1 [[x1 , y1∗ ]]. We have an expression
x = x1^{n1} (y1∗ + σ)^a ,   y = x1^{ω0 (1)} (y1∗ + σ)^b
in R̂1 .
3. Monomial forms under sequences of quadratic transforms
Suppose that k is a field of characteristic zero, and K → K ∗ is an extension of two
dimensional algebraic function fields over k. Suppose that ν ∗ is a rational 0-dimensional
valuation of K ∗ which restricts to ν. Suppose that R and S are regular algebraic local
rings of K and K ∗ respectively such that ν ∗ dominates S and S dominates R.
By Theorem 5.1 [3] and Theorem 4.8 [4] (summarized after (3)), there exists a sequence
of quadratic transforms along ν ∗
R′ → S ′
↑
↑
R → S
such that R′ → S ′ is strongly monomial. For this type of valuation, this means that R′
has a regular system of parameters u, v and S ′ has a regular system of parameters x, y
giving an expression
u = γ0 x t , v = y
(16)
where γ0 is a unit in S ′ . For the rest of this section, we will assume that R → S is strongly
monomial (so R, S have regular parameters satisfying (16)), but we do not assume that
R → S is stable.
Lemma 3.1. Suppose that R has regular parameters u, v and S has regular parameters
x, y giving an expression
u = γ0 x t , v = y
where γ0 is a unit in S. Let R → R1 be the sequence of quadratic transforms along ν
defined by (6) and let S → S1 be the sequence of quadratic transforms along ν ∗ defined
by (6). Then R1 has regular parameters u1 , ṽ1 and S1 has regular parameters x1 , ỹ1 such
that
u1 = γ1 x1^{t1} ,   ṽ1 = ỹ1
where γ1 is a unit in S1 .
Proof. We use the notation of the previous section. We have that
R1 = R[u1 , v1 ]_{mν ∩R[u1 ,v1 ]} ,
u = u1^{n1 (R)} v1^{a(R)} ,   v = u1^{ω0 (1)(R)} v1^{b(R)}
with
n1 (R)b(R) − ω0 (1)(R)a(R) = 1.
We have that
v1 = v^{n1 (R)} / u^{ω0 (1)(R)}
so that ν(v1 ) = 0 and [v1 ] = σ(R) in Vν /mν . We have
u1 = u^{b(R)} / v^{a(R)} .
Further,
S1 = S[x1 , y1 ]_{mν ∗ ∩S[x1 ,y1 ]}
where
x = x1^{n1 (S)} y1^{a(S)} ,   y = x1^{ω0 (1)(S)} y1^{b(S)}
with n1 (S)b(S) − ω0 (1)(S)a(S) = 1. We have that
y1 = y^{n1 (S)} / x^{ω0 (1)(S)}
so that ν ∗ (y1 ) = 0, and [y1 ] = σ(S) in Vν ∗ /mν ∗ . We have
(17)    x1 = x^{b(S)} / y^{a(S)} .
Substitute
u1 = u^{b(R)} v^{−a(R)} = γ0^{b(R)} x^{t b(R)} y^{−a(R)}
   = γ0^{b(R)} (x1^{n1 (S)} y1^{a(S)})^{t b(R)} (x1^{ω0 (1)(S)} y1^{b(S)})^{−a(R)}
   = γ0^{b(R)} x1^{n1 (S)t b(R)−ω0 (1)(S)a(R)} y1^{a(S)t b(R)−a(R)b(S)} .
Set t1 = n1 (S)tb(R) − ω0 (1)(S)a(R). Since ν ∗ (u1 ) > 0, ν ∗ (x1 ) > 0 and ν ∗ (γ0 ) = ν ∗ (y1 ) =
0, we have t1 > 0.
v1 = u^{−ω0 (1)(R)} v^{n1 (R)} = (γ0 x^t )^{−ω0 (1)(R)} y^{n1 (R)}
   = γ0^{−ω0 (1)(R)} (x1^{n1 (S)} y1^{a(S)})^{−tω0 (1)(R)} (x1^{ω0 (1)(S)} y1^{b(S)})^{n1 (R)}
   = γ0^{−ω0 (1)(R)} x1^{ω0 (1)(S)n1 (R)−tω0 (1)(R)n1 (S)} y1^{b(S)n1 (R)−a(S)tω0 (1)(R)} .
ν ∗ (v1 ) = ν ∗ (y1 ) = ν ∗ (γ0 ) = 0 and ν ∗ (x1 ) > 0 implies
ω0 (1)(S)n1 (R) − tω0 (1)(R)n1 (S) = 0.
Since n1 (S)b(S) − ω0 (1)(S)a(S) ≠ 0, we have that
( n1 (S)    ω0 (1)(S) ) ( −tω0 (1)(R) )      ( 0 )
( a(S)      b(S)      ) (  n1 (R)     )  ≠   ( 0 ) .
Thus
(18)    m := b(S)n1 (R) − a(S)tω0 (1)(R) ≠ 0.
We have that u1 , v1 ∈ S1 , so that
R1 = R[u1 , v1 ]mν ∩R[u1 ,v1 ] ⊂ S1 .
We have a commutative diagram
R̂1 = L1 [[u1 , v1∗ ]] → Sˆ1 = M1 [[x1 , y1∗ ]]
↑
↑
R̂ = L[[u, v]]
→
Ŝ = M [[x, y]]
where L, M, L1 , M1 are coefficient fields of R̂, Ŝ, R̂1 , Sˆ1 such that there are inclusions
L1 → M1
↑
↑
L → M
This is possible (by Hensel’s Lemma) since R, S, R1 , S1 have equicharacteristic zero.
y1∗ = y1 − σ(S), v1∗ = v1 − σ(R) are constructed as in (15). We compute in M1 [[x1 , y1∗ ]],
y1^m = (y1∗ + σ(S))^m = σ(S)^m + m σ(S)^{m−1} y1∗ + (m(m − 1)/2!) σ(S)^{m−2} (y1∗ )^2 + · · ·
γ0^{−ω0 (1)(R)} = β + x1 Ω with 0 ≠ β ∈ M1 and Ω ∈ Ŝ1 .
In Ŝ1 we have an expression
v1 = (β + x1 Ω)(σ(S)^m + m σ(S)^{m−1} y1∗ + (y1∗ )^2 Λ)
   = β σ(S)^m + β m σ(S)^{m−1} y1∗ + x1 Ω′ + (y1∗ )^2 Λ′
for some Λ ∈ Sˆ1 , Ω′ , Λ′ ∈ Sˆ1 . Thus x1 , v1 − βσ(S)m are regular parameters in Sˆ1 , (and
βσ(S)m = σ(R)). Hence if u1 , v1′ are regular parameters in R1 , then x1 , y1′ = v1′ are regular
parameters in S1 , and we have an expression:
u1 = γ1 x1^{t1} ,   v1′ = y1′
with γ1 a unit in S1 .
By iteration of Lemma 3.1 and (6), we obtain an infinite sequence
⋮      ⋮
↑      ↑
R2 → S2
↑      ↑
R1 → S1
↑      ↑
R → S
where each Ri has regular parameters ui , ṽi and each Si has regular parameters xi , ỹi such
that
ui = γi xi^{ti} ,   ṽi = ỹi
where γi is a unit in Si .
Let e = |Γν ∗ /Γν | and f = [Vν ∗ /mν ∗ : Vν /mν ]. If R → S is stable, then
(19)
ti = e and [Si /mSi : Ri /mRi ] = f
for i ≥ 0.
4. Construction of a generating sequence in S from that of R
In this section, we continue to have the assumptions of Section 3. We further assume
that R → S is stable. Let
P0 (R) = u, P1 (R) = v, P2 (R), . . .
be a generating sequence in R, constructed by the algorithm of Theorem 4.2 [6].
Let P0 (S) = x, P1 (S) = y.
Then we have that the t and t1 in Lemma 3.1 satisfy
(20)
t = |Γν ∗ /Γν | = t1 ,
and
(21)
[S/mS : R/mR ] = [Vν ∗ /mν ∗ : Vν /mν ] = [S1 /mS1 : R1 /mR1 ].
By the calculations in the previous section, we have that
(22)    ( b(R)         −a(R)  ) ( t  0 ) ( n1 (S)     a(S) )   ( t1  ∗ )
        ( −ω0 (1)(R)    n1 (R) ) ( 0  1 ) ( ω0 (1)(S)  b(S) ) = ( 0   m ) .
Taking determinants and using the fact that t1 = t gives t = tm so that m = 1.
Multiplying (22) by
( n1 (R)      a(R) )
( ω0 (1)(R)   b(R) ) ,
we obtain
(23)
n1 (S) = n1 (R), ω0 (1)(S) = tω0 (1)(R).
Since
P1 (S)n1 (S) = P1 (R)n1 (R) ,
we can take U1 (S) to be U1 (R) = uω0 (1)(R) , so
U1 (S) = u^{ω0 (1)(R)} = γ0^{ω0 (1)(R)} x^{tω0 (1)(R)} = γ0^{ω0 (1)(R)} x^{ω0 (1)(S)} .
That is, we take τ1 = γ0^{ω0 (1)(R)} in (7). Thus
# "
# "
"
#
P1 (R)n1 (R)
v n1 (R)
P1 (S)n1 (S)
=
=
α1 (S) =
= α1 (R),
U1 (S)
U1 (R)
uω0 (1)(R)
with the notation of (8). We have that R1 /mR1 = R/mR [σ(R)] and S1 /mS1 = S/mS [σ(S)]
(with notation of (12)).
#
"
v n1 (R)
= σ(R)
α1 (R) =
uω0 (1)(R)
and
α1 (S) =
=
h
y n1 (R)
i
y n1 (R)
= ω0 (1)(R)
tω (1)(R)
uω0 (1)(R)
γ
h 0n (S) ix 0
1
y
−ω
(1)(R)
[γ0 ] 0
= [γ0 ]−ω0 (1)(R) σ(S).
xω0 (1)(S)
Thus R1 /mR1 = R/mR (α1 (R)) and S1 /mS1 = S/mS (α1 (R)).
By (21), we have that
[S1 /mS1 : S/mS ] = [R1 /mR1 : R/mR ]
and thus
d1 (S) = [S/mS (α1 (R)) : S/mS ] = [R/mR (α1 (R)) : R/mR ] = d1 (R),
and the minimal polynomial f1S (z) of α1 (S) over S/mS is the minimal polynomial f1R (z)
of α1 (R) over R/mR . Thus
x, y = P1 (R), P2 (R)
are the first terms of a generating sequence in S, obtained by the (modified) algorithm of
Theorem 4.2 [6].
Proposition 4.1. Suppose that i ≥ 2 and
P0 (S) = x, P1 (S) = y, P2 (S) = P2 (R), . . . , Pi (S) = Pi (R)
are the first i + 1 terms of a generating sequence in S produced by the modified algorithm
of Theorem 4.2 [6]. Then
P0 (S) = x, P1 (S) = y, P2 (S) = P2 (R), . . . , Pi (S) = Pi (R), Pi+1 (S) = Pi+1 (R)
are the first i + 2 terms of a generating sequence in S produced by the modified algorithm
of Theorem 4.2 [6].
Proof. With the assumption, we have that for j ≤ i − 1,
nj (S) = nj (R), αj (S) = αj (R),
dj (S) = [S/mS (α1 (S), . . . , αj (S)) : S/mS (α1 (S), . . . , αj−1 (S))]
= [R/mR (α1 (R), . . . , αj (R)) : R/mR (α1 (R), . . . , αj−1 (R))] = dj (R)
and the minimal polynomial fjS (z) of αj (S) = αj (R) over S/mS (α1 (S), . . . , αj−1 (S)) is
the minimal polynomial fjR (z) of αj (R) over R/mR (α1 (R), . . . , αj−1 (R)).
Theorem 7.1 [6] produces a generating sequence Q0 (R1 ) = u1 , Q1 (R1 ), Q2 (R1 ), . . . in R1
from P0 (R), P1 (R), P2 (R), . . .. The generating sequence Q0 (R1 ) = u1 , Q1 (R1 ), Q2 (R1 ), . . .
in R1 can be produced by the algorithm of Theorem 4.2 from the regular system of
parameters u1 , Q1 (R1 ) in R1 (as shown in Theorem 7.1 [6] and recalled in (22) and (14)).
Since R → S is stable, we have that
(24)
u1 = γ1 xt1
for some unit γ1 ∈ S1 , and recalling (23), we have that
(25)
ω0 (1)(S) = tω0 (1)(R).
By the induction hypothesis applied to the stable map R1 → S1 , we have that
x1 , Q1 (R1 ), . . . , Qi (R1 )
are the first i+1 terms of a generating sequence in S1 , produced by the modified algorithm
of Theorem 4.2 [6] in S1 .
For j ≥ 1, let
nj (R1 ), Uj (R1 ), αj (R1 ), dj (R1 ), fjR1 (z)
be the calculations of the algorithm of Theorem 4.2 [6] in R1 , obtained in the construction
of the generating sequence Q0 (R1 ) = u1 , Q1 (R1 ), Q2 (R1 ), . . ..
For j ≤ i, let
nj (S1 ), Uj (S1 ), αj (S1 ), dj (S1 ), fjS1 (z)
be the calculations of the modified algorithm of Theorem 4.2 [6] in S1 , obtained in the construction of the first i+ 1 terms of the generating sequence x1 , Q1 (R1 ), Q2 (R1 ), . . . , Qi (R1 )
in S1 . We have that for j ≤ i − 1,
(26)
nj (S1 ) = nj (R1 ), Uj (S1 ) = Uj (R1 ), αj (S1 ) = αj (R1 ),
dj (S1 ) = dj (R1 ), fjS1 (u) = fjR1 (u).
Since
−ω0 (1)(R)
Q0 (S1 )ω0 (1)(S) = Q0 (R1 )ω0 (1)(R) γ1
we have from (14) that for j ≤ i − 1,
ω (1)(R)n1 (S)···nj (S)
Qj (S1 ) = γ1 0
ω (1)(R)n1 (S)···nj (S)
= γ1 0
,
Pj+1 (S)
Q0 (R1 )ω0 (1)(R)n1 (S)···nj (S)
Qj (R1 ).
For j ≤ i − 1, we have by (14) and (26), and then by (24) and (25), that
ν ∗ (Qj (R1 )) = ν ∗ (Pj+1 (R)) − ω0 (1)(R)n1 (R) · · · nj (R)ν ∗ (u1 )
= ν ∗ (Pj+1 (R)) − n1 (R) · · · nj (R)ω0 (1)(S)ν ∗ (x1 ).
Thus
G(ν ∗ (x1 ), ν(Q1 (R)), . . . , ν(Qj (R))) = G(ν ∗ (x1 ), ν(P2 (R)), . . . , ν(Pj+1 (R)))
= G(ν ∗ (x), ν ∗ (y), ν(P2 (R)), . . . , ν(Pj+1 (R)))
= G(ν ∗ (x), ν(P1 (R)), . . . , ν(Pj+1 (R)))
since G(ν ∗ (x1 )) = G(ν ∗ (x), ν ∗ (y)), as calculated before (55) in the proof of Theorem 7.1 [6].
Thus ni−1 (S1 ) = ni (S). We have that ni−1 (S1 ) = ni−1 (R1 ) by (26), and ni−1 (R1 ) = ni (R)
by (55) and (54) in the proof of Theorem 7.1 [6]. Thus
ni (S) = ni (R).
In applying the modified algorithm of Theorem 4.2 [6] to extend x, P1 (R), . . . , Pi (R) to a
generating sequence in S, we can thus take Ui (S) = Ui (R), and then
# "
#
"
Pi (R)ni (R)
Pi (S)ni (S)
=
= αi (R).
αi (S) =
Ui (S)
Ui (R)
We have from (11) that
y1 = y^{n1(S)} / x^{ω0(1)(S)} = γ0^{ω0(1)(R)} v^{n1(R)} / u^{ω0(1)(R)} = γ0^{ω0(1)(R)} v1.
Thus
σ(S1) = [y1] = [γ0]^{ω0(1)(R)} [v1] = [γ0]^{ω0(1)(R)} α1(R)
in Vν ∗ /mν ∗ , and
S1 /mS1 = S/mS [α1 (R)] and R1 /mR1 = R/mR [α1 (R)].
For 1 ≤ j ≤ i − 1, by (60) of [6], we have that
αj(S1) = αj(R1) = α̂j(R1) = αj+1(R) α1(R)^{a(R)ω0(i+1)(R)+b(R)ω1(i+1)(R)}.
Thus
di−1 (S1 ) = [S1 /mS1 (α1 (S1 ), . . . , αi−1 (S1 )) : S1 /mS1 (α1 (S1 ), . . . , αi−2 (S1 ))]
= [S/mS (α1 (R), α2 (R), . . . , αi (R)) : S/mS (α1 (R), . . . , αi−1 (R))] = di (S)
and
di−1 (R1 ) = [R1 /mR1 (α1 (R1 ), . . . , αi−1 (R1 )) : R1 /mR1 (α1 (R1 ), . . . , αi−2 (R1 ))]
= [R/mR (α1 (R), α2 (R), . . . , αi (R)) : R/mR (α1 (R), . . . , αi−1 (R))] = di (R).
We thus have that di (S) = di (R) since di−1 (S1 ) = di−1 (R1 ) by (26). Thus the minimal
polynomial fiS (z) of αi (S) = αi (R) over S/mS (α1 (R), . . . , αi−1 (R)) is the minimal polynomial fiR (z) of αi (R) over R/mR (α1 (R), . . . , αi−1 (R)). Thus we can take Pi+1 (S) = Pi+1 (R)
in the modified algorithm of Theorem 4.2 [6].
We obtain the following theorem (Theorem 1.3 from the Introduction of this paper).
Theorem 4.2 is proven by Ghezzi, Hà and Kashcheyeva in [7] when k is algebraically closed
of characteristic zero.
Theorem 4.2. Suppose that k is a field of characteristic zero, ν ∗ is a rational 0-dimensional
valuation, n = 2 and R → S is stable. Then
grν∗(S) ≅ grν(R) ⊗_{R/mR} S/mS [Z]/(Z^e − [γ0]^{−1} [u]),
and the degree of the extension of quotient fields of grν(R) → grν∗(S) is ef.
Proof. We have an inclusion of graded algebras grν (R) → grν ∗ (S). The classes [Pi (R)] for
i ≥ 0 generate grν (R) as a grν (R)0 = R/mR -algebra and the classes [P0 (S)] and [Pi (R)] for
i ≥ 1 generate grν ∗ (S) as a grν (S)0 = S/mS -algebra by Theorem 4.11 [6] and Proposition
4.1. We have the relation
(27)
[P0(S)]^t [γ0] = [P0(R)]
in grν ∗ (S). Further,
(28)
ni (R) = ni (S) for i ≥ 1
by Proposition 4.1.
Since grν (R) ⊗R/mR S/mS → grν ∗ (S) is homogeneous, to verify that it is 1-1, it suffices
to show that the homomorphism of S/mS -vector spaces
(29)
grν (R)λ ⊗R/mR S/mS → grν ∗ (S)λ
is 1-1 for all λ ∈ S R (ν). By 2) of Theorem 4.2 [6], the set of all monomials
(30)
[P0(R)]^{i0} [P1(R)]^{i1} · · · [Pr(R)]^{ir}
such that r ∈ N, ik ∈ N, 0 ≤ ik < nk (R) for 1 ≤ k ≤ r and
i0 ν(P0 (R)) + · · · + ir ν(Pr (R)) = λ
is an R/mR -basis of grν (R)λ , and the set of all
(31)
[P0(S)]^{j0} [P1(S)]^{j1} · · · [Ps(S)]^{js}
such that s ∈ N, jk ∈ N, 0 ≤ jk < nk (S) for 1 ≤ k ≤ s and
j0 ν ∗ (P0 (S)) + · · · + js ν ∗ (Ps (S)) = λ
is an S/mS -basis of grν ∗ (S)λ .
By (27), (28), (30) and (31), we have that (29) is 1-1, so
grν (R) ⊗R/mR S/mS → grν ∗ (S)
is 1-1.
We have established that [P0 (S)] generates grν ∗ (S) as a grν (R) ⊗R/mR S/mS -algebra
and that the relation (27) holds. To establish that the conclusions of the theorem hold,
we must show that if there is a relation
(32)
h0 + [P0(S)] h1 + · · · + [P0(S)]^{t−1} ht−1 = 0
in grν ∗ (S), with hi ∈ grν (R) ⊗R/mR S/mS , then h0 = h1 = · · · = ht−1 = 0. We may
assume that each [P0(S)]^j hj is homogeneous of the same degree λ. Since R → S is stable,
we have that t = [Γν∗ : Γν] and iν(P0(S)) ∉ Γν for 1 ≤ i ≤ t − 1. Thus there can be at
most one nonzero expression in (32), so all terms are zero.
References
[1] S. Abhyankar, On the Valuations centered in a Local Domain, Amer. J. Math., Vol 78., 1956.
[2] S. Abhyankar, Ramification theoretic method in Algebraic Geometry, Princeton University
Press, Princeton, New Jersey, 1959.
[3] S.D. Cutkosky, Local factorization and monomialization of morphisms, Astérisque 260, 1999.
[4] S.D. Cutkosky and O. Piltant, Ramification of valuations, Advances in Math. 183 (2004),
1-79.
[5] S.D. Cutkosky and B. Teissier, Semigroups of valuations on local rings, Mich. Math. J. 57
(2008), 173 - 193.
[6] S.D. Cutkosky and Pham An Vinh, Valuation semigroups of two dimensional local rings, to
appear in the Proceedings of the London Math. Soc.
[7] L. Ghezzi, Huy Tài Hà and O. Kashcheyeva, Toroidalization of generating sequences in
dimension two function fields, J. Algebra 301 (2006) 838-866.
[8] L. Ghezzi and O. Kashcheyeva, Toroidalization of generating sequences in dimension two
function fields of positive characteristic, J. Pure and Applied Algebra 209 (2007), 631 - 649.
[9] H. Knaf and F.-V. Kuhlmann, Abhyankar places admit local uniformization in any characteristic, Ann. Scient. Ec. Norm. Sup 30 (2005), 833 - 846.
[10] B. Teissier, Valuations, Deformations and Toric geometry, Proceedings of the Saskatoon
Conference and Workshop on Valuation Theory, Volume II, F.-V. Kuhlmann, S. Kuhlmann,
M. Marshall (Eds.), Fields Inst. Comm. 33, AMS 2003.
[11] B. Teissier, Overweight deformations of affine toric varieties and local uniformization,
preprint.
[12] O. Zariski and P. Samuel, Commutative Algebra, Volume 2, Van Nostrand, Princeton, New
Jersey, 1960.
Steven Dale Cutkosky, Department of Mathematics, University of Missouri, Columbia,
MO 65211, USA
E-mail address: [email protected]
Pham An Vinh, Department of Mathematics, University of Missouri, Columbia, MO 65211,
USA
E-mail address: [email protected]
DCS
12 March 2018
New Ideas for Brain Modelling 4
Kieran Greer
Distributed Computing Systems, Belfast, UK.
http://distributedcomputingsystems.co.uk
Version 1.2
Abstract: This paper continues the research that considers a new cognitive model based strongly
on the human brain. In particular, it considers the neural binding structure of an earlier paper. It
also describes some new methods in the areas of image processing and behaviour simulation.
The work is all based on earlier research by the author and the new additions are intended to fit
in with the overall design. For image processing, a grid-like structure is used with ‘full linking’.
Each cell in the classifier grid stores a list of all other cells it gets associated with and this is used
as the learned image that new input is compared to. For the behaviour metric, a new prediction
equation is suggested, as part of a simulation, that uses feedback and history to dynamically
determine its course of action. While the new methods are from widely different topics, both can
be compared with the binary-analog type of interface that is the main focus of the paper. It is
suggested that the simplest of linking between a tree and ensemble can explain neural binding
and variable signal strengths.
Keywords: image, behaviour, binary-analog, neural, cognitive, clustering.
1
Introduction
This paper continues the research that considers a new cognitive model based strongly on the
human brain, last updated in [7]. In particular, it considers figure 4 of that paper (Figure 3 below)
and how it might be useful in practice. The paper also describes some new methods in the areas
of image processing and behaviour simulation. The image processing introduces a most classical
form of pattern cross-referencing, while the behaviour equations use feedback for a memory-type of cross-referencing. The work is all based on earlier research by the author and the new
additions are intended to fit in with the overall design. For image processing, a grid-like structure
is used with ‘full linking’, if you like. Each cell in the classifier grid stores a list of all other cells it
gets associated with and this is used as the learned image that new input is compared with. For
the behaviour metric, a new prediction equation is suggested, as part of a simulation, that uses
feedback and history to dynamically determine its current state and course of action. While the
new methods are from widely different topics, both can be compared with the binary-analog
type of interface that is the main focus of the paper. Sensory input may be static and binary, but
cross-references result in variable comparisons that make the input more dynamic. It is suggested
that the simplest of linking between a tree and ensemble can explain neural binding and variable
signal strengths.
The rest of the paper is organised as follows: section 2 introduces an image processing method
that cross-references at a pixel level to associate images. Section 3 describes some related work.
Section 4 re-visits the behaviour metric of an earlier paper and updates that with a new predictive
equation. This feeds earlier evaluations back into the equation, to allow it to self-adjust. Section 5 describes concept aggregation or binding, for realising global concepts, while section 6 considers a process for neural binding that could relate to consciousness. Finally, section 7 gives
some conclusions to the work.
2
Image Processing
This section describes a very basic image recognition algorithm, but one that has characteristics
of the other algorithms developed as part of the work, see the related work section. For this
paper, images are represented by a 2-D grid, with a black cell meaning that a pixel is present and
a white cell meaning that it is empty. A classifier can use the same grid-like structure, where all
of the cells can link to each other. The author has used this structure before in [7] and it is a type
of entropy classifier. It attempts to reduce the error overall and is not so concerned with
minimising individual associations. The paper [6] describes a classifier that is conjecturally more
visual in nature than other types and it also uses a complete linking method. Instead of several
levels of feature refactoring, it is a 1-level impression only. With the image classifier, each cell
stores a count of every other cell it gets associated with, when averaging this can determine what
cells are most similar to the pixel in question. Figure 1 is an example of the clustering technique.
If the top LHS grid is the first image to be mapped, then for cell A1, the other black cells are
recorded as shown, with a count of 1. The count would then be incremented each time a cell is
recorded again, for example, after the second image, cell A3 would lose a count. The idea of
linking everything this way has now been used 3 times.
Figure 1. Example mapping of cells presented as an image.
Using this algorithm, a set of hand-written numbers [24] was selected as the test data. There
were 9 numbers in total and 55 examples of each number. Each number was trained on a
separate classifier, where each cell would store the other related cell associations. The counts
could then be averaged to produce the weight values. To use the system, a new binary image
would be presented to each of the trained classifiers and it would be assigned to the classifier
that matched closest. It is easy to recognise pixels in the input image, but the problem is sorting
any other pixels that they are associated with. For each pixel in the input image therefore, the
weighted value of the related classifier cell can be retrieved. This would also have links to other
cells, maybe not in the input image. For example, if the counts are as shown and a new image is
presented that contains pixels in cells A1 and A2 – then the classifier would return cells A1, A2
plus B1 and C1. A3 might not be returned depending on a set threshold value. The success score
is then the percentage of retrieved weighted cells in the classifier that are also in the input image,
compared to the number that are not in the image. For this example, 2 cells are in the input image
while 2 cells are not, leading to a success score of 1. The weight value of the cell can be considered
as an association strength and the idea may be auto-associative.
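To make this concrete, the following Python sketch (not taken from the original work; class and method names are illustrative) trains one full-linking grid per class and scores a new binary image by the ratio of retrieved cells that are in the input to retrieved cells that are not:

class GridClassifier:
    # Full-linking grid: each cell stores counts of every other cell it co-occurs with.
    def __init__(self, threshold=0.5):
        self.counts = {}        # cell -> {other cell: co-occurrence count}
        self.images = 0
        self.threshold = threshold

    def train(self, active_cells):
        # Record, for each active cell, every other active cell in the same image.
        for a in active_cells:
            row = self.counts.setdefault(a, {})
            for b in active_cells:
                if b != a:
                    row[b] = row.get(b, 0) + 1
        self.images += 1

    def retrieve(self, active_cells):
        # Average the counts into weights and keep associated cells above the threshold.
        retrieved = set(active_cells)
        for a in active_cells:
            for b, count in self.counts.get(a, {}).items():
                if count / max(self.images, 1) >= self.threshold:
                    retrieved.add(b)
        return retrieved

    def score(self, active_cells):
        # Success score: retrieved cells that are in the input versus those that are not.
        retrieved = self.retrieve(active_cells)
        inside = len(retrieved & set(active_cells))
        outside = len(retrieved - set(active_cells))
        return inside / max(outside, 1)

A new image would then be presented to every per-class grid and assigned to the classifier with the highest score, as described above.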
After training on the hand-written numbers, the same dataset was fed through the classifiers
again for recognition only. The test results are not particularly good and to improve it, some level
of scaling would be required. There is a problem of a larger image covering a smaller one with
the currently tried dataset. An average success score of only 46% was achieved with this basic
version, with a best score for a number of 89% and a worst score of 15%; although state-of-the-art is only about 55%. It was interesting that the classifier would try to return a picture that was
more a reflection of itself. So regardless of what the input was, for example, the number 1
classifier would try to return an image that looked like a ‘1’. If the input image was a ‘4’ however,
then maybe the number 4 classifier would return a more accurate comparison and therefore win
the matching competition.
3 Related Work
The author’s own papers that are quoted [6]-[11] are all relevant to the research of this paper.
The image processing of section 2 has already been tried in [3], where they tested the full dataset.
Their results were better overall, with maybe 55% accuracy and over a larger dataset. As stated
however, the tests here are only initial results and it would be expected that some improvement
would be possible, especially if the images can be scaled. The recently found paper [16] looks significant and the general architecture ([7], figure 2, for example) could have analogies with bidirectional searches in the and-or with theorem-proving graphs architecture of that paper. As suggested, and-or could work from goals to axioms (the neural network in the general model) and theorem-proving could work from axioms to goals (the concept trees in the general model). The paper [23]
models at a higher behaviour level, but it is interesting that the behaviours are considered to be
unique (time or sequence-based) sets of events and these event patterns are then clustered,
rather than each individual event. The idea of using unique sets of nodes to cluster with has also
been used for the symbolic neural network [11].
An earlier paper on control theory [25] presents some interesting equations that are similar to ones
in this paper. Equation 1 there, for example, looks like the image success score ratio and equation
7 is a likelihood ratio test that is also trying to maximise inside some type of sequence. The
behaviour metric of section 4 has only been updated with a new predictive equation, where the
first paper [9] notes some references, including [1][12][17] and [19]. One interesting aspect of
the equation is that it uses a feedback mechanism which appears to be similar to one that was
part of another research project and even in the project code (see footnote 1).
3.1
Cognitive Modelling
Hawkins and Blakeslee [13] describe how a region of the cortex might work (p. 57) and they note
an input signal being voted on by a higher level, where one higher level pattern set will win and
switch off the other sets. They also state explicitly that the higher level is voting to ‘fit’ its label
better than the other patterns. It may be trying to return its own image as the input signal and
the best match there with the input signal should win. The theory that they state is that a region
learns when it may be important and then it can become partially active, as part of a memory or
prediction. So, this is a type of reasoning, to play over previous scenarios, even when they have
not happened in the current situation yet. That can then maybe be reinforced further by specific
instance values, making it the real decision. The following quote is also interesting:
‘Every moment in your waking life, each region of your neocortex is comparing a set of expected columns driven from above with the set of observed columns driven from below. Where the two sets intersect is what we perceive. If we had perfect input from below and perfect predictions, then the set of perceived columns would always be contained in the set of predicted columns. We often don't have such agreement. The method of combining partial prediction with partial input resolves ambiguous input, it fills in missing pieces of information, and it decides between alternative views.’
(Footnote 1: Cognitive Algorithm by Boris Kazachenko, http://www.cognitivealgorithm.info/.)
The paper [21] introduces the idea of temporal synchrony and synchronised oscillatory activity
as important for multisensory perception.
3.2
Neural Binding
There is quite a lot of research and philosophy into the idea of neural binding. At its most basic,
it means ‘how do neural ensembles that fire together be understood to represent the said
concept’. For example, questions like ‘why don’t we confuse a red circle and a blue square with
a blue circle and a red square’ [4] need to be answered. It includes the idea of consciousness and
how the brain is able to be coherent, but while there are lots of theories, there are not a lot of
very specific results for the binding mechanism itself. Some cognitive models for the real brain
include temporal logic or predicate calculus rules [4] to explain how variables can bind with each
other and reasoning can be obtained. This includes the flow of information in both directions and
so the basic circuits of this and earlier papers would not be too extravagant. The paper by Mashour (2004) [18] is a philosophical paper about how the neural binding mechanism may work. It argues for
quantum mechanics, to allow neurons to be represented in more than 1 pattern simultaneously
and probably the resulting merging of the patterns into a consciousness. The author would only
favour quantum mechanics as a last resort and in section 6, a relatively simple method for
representing the same neuron in different patterns is suggested. If time differences between the
patterns is very small, then they could still merge into a single coherent message. This could be
especially true for the argument against Hebbian cell assemblies. The paper [2] describes a theory
that is quite similar. They call the framework the Specialized Neural Regions for Global Efficiency
(SNRGE) framework. The paper describes that ‘the specializations associated with different brain
areas represent computational trade-offs that are inherent in the neurobiological
implementation of cognitive processes. That is, the trade-offs are a direct consequence of what
computational processes can be easily implemented in the underlying biology.’ The
specializations of the paper correspond anatomically to the hippocampus (HC), the prefrontal
cortex (PFC), and all of neocortex that is posterior to prefrontal cortex (posterior cortex, PC).
Essentially, prefrontal cortex and the hippocampus appear to serve as memory areas that
dynamically and interactively support the computation that is being performed by posterior brain
areas. The PC stores overlapping distributed representations used to encode semantic and
perceptual information. The HC stores sparse, pattern separated representations used to rapidly
encode ('bind') entire patterns of information across cortex while minimizing interference. The
FC stores isolated stripes (columns) of neurons capable of sustained firing (i.e., active
maintenance or working memory). They argue against temporal synchrony, because of the ‘red
circle blue square’ question and prefer to argue for coarse-coded distributed representations
(CCDR) ([14] and others) instead.
4
Cognitive Behaviour
An earlier paper introduced a set of equations that were based on the collective behaviour
research of [5]. They proposed a set of characteristics for modelling the stigmergic behaviour of
very simple animals, such as ants. They proposed to use coordination, cooperation, deliberation
and collaboration, as follows:
• Coordination – is the appropriate organisation in space and time of the tasks required to solve
a specific problem.
• Cooperation – occurs when individuals achieve together a task that could not be done by a
single one.
• Deliberation – refers to mechanisms that occur when a colony faces several opportunities.
These mechanisms result in a collective choice for at least one of the opportunities.
• Collaboration – means that different activities are performed simultaneously by groups of
specialised individuals.
They note that these are not mutually exclusive, but rather contribute together to accomplish
the various collective tasks of the colony. This led to a set of equations by the author [9] for
modelling these types of entity. The model is actually behaviour-based not entity-based, where
the entity instances are then made up of a set of the pre-defined behaviours, with the following
characteristics:
1. Individual agent characteristics: Relate to an agent as an individual:
1.1. Ability: this defines how well the behaviour is able to execute the required action.
1.2. Flexibility: this defines how well an agent performing a behaviour can adapt or change to
a different behaviour if the situation requires it to. This can be seen as the ability to make
that decision individually. The collective capabilities described next can then be seen as
the ability to be flexible after an environment response.
2. Collaborative agent characteristics: These relate to the agent working in a team environment:
2.1. Coordination: this defines how well the agent can coordinate its actions with those of
other agents. This is again a behaviour selection, related to flexibility, but this variable
measures the group aspect of the attribute after interaction with other agents.
2.2. Cooperation: this defines how well an agent performing an action can cooperate with
other agents also involved in that action. How well can the selected behaviours work
together?
2.3. Communication: this defines how well the agents can communicate with each other. This
is defined as an input signal and an output signal for each behaviour type. Behaviours
could require local or remote communication, for example.
The metric is quite well balanced, with approximately half of the evaluation going to the
individual capabilities and half going to the group capabilities.
4.1
Problem Modelling
It is possible to specify a problem with all of the related agents and actions that are part of the
solution space. The modelling is based around the behaviour types that are used to solve the
problem, where the same type definitions can be used, both to model the problem and also to
simulate its execution. The agents are defined by agent types, where an agent type can perform
a particular set of behaviours. So, if the same behaviour type is to be performed at different levels
of success; for static values, this would require different behaviour definitions, or for dynamic
ones the value can change through an equation.
4.2
Behaviour Equations
The problem is therefore modelled as sets of agents that can each perform a set of behaviours.
The Problem Success Likelihood (PSL) is the summed result of the behaviour scores and estimates
how well the problem can be solved. The top part of the PSL value, shown in equation 1, evaluates
the average agent complexity (ECs), as just described. When modelling, this is measured for all of
the behaviour type instances (Bs) that are part of the problem behaviour set (PBS). This can be
no larger than the optimal problem complexity (PC) value of 1.0. The problem complexity is a
factor of how intelligent the agents need to be to solve it. Because the evaluations are all
normalised, in a static specification, the maximum value that the problem complexity can be is
1.0. If all behaviours are perfect, they will also only sum to 1 as well. The problem success
likelihood, can therefore be defined as follows:
Eq. 1
PSL =
4.2.1 Individual Parameters
The individual capabilities of a behaviour can be modelled as follows:
ECs =
Eq. 2
Is =
Eq. 3
The agent or entity complexity (ECs) for behaviour s is a factor of its ability to perform the related
behaviour attributes of intelligence and collective capabilities. The agent intelligence (Is) is a
factor of its ability (BAs) and flexibility (BFs) capabilities for the specified behaviour.
4.2.2 Team Work
The collective or team work capabilities (COLs) of a behaviour are modelled as follows:
COLs =
Eq. 4
COMs =
Eq. 5
The collective capabilities of the agent performing the behaviour are its ability to cooperate with
other agents (COPs), coordinate its actions with them (CORs) and also communicate this (COMs),
normalised. The communication capabilities of the agent for the behaviour s include its ability to
send a signal to another agent (SOs) and also its ability to receive a signal from another agent
(SIs).
4.3
Prediction Operation for the Metric
When simulating the problem, the Problem Complexity value can change. The success likelihood
then becomes an individual evaluation, based on its knowledge and understanding of the
environment. This can be defined by the subset of behaviours the agent has either performed or
has interacted with from other agents. For a more intelligent agent, the memory or history of
earlier events can lead to a prediction operation that can reason over the earlier events. It could
be a deliberation function that is fed the history of earlier and/or possible choices, before
selecting the most appropriate one.
For example, Equation 3 of section 4.2.1 defines agent intelligence as a combination of ability
and flexibility. The idea is the ability to perform the intended behaviour but also flexibility to
change or adapt the individual behaviour depending on some response. Ideally, an agent would
score high in both and a modelling scenario that uses static values would be able to demonstrate
this. If instead, running the agents in a simulation, it may be more interesting to let them change
their behaviours dynamically, but again constrained by the pre-defined environment. This leads
to the idea of a prediction metric that is influenced by what the agent can do and also what it did
in the past. The current situation is the most important and so the decision there has the largest
weight. The prediction could then include decreasing values for earlier related events. These can
be factored as a count of earlier events times a factor for the time when they occurred. If the
behaviour was not repeated, then maybe something went wrong, such as an unfavourable
response. These responses, including for the current situation, would change the state of the
agent into what it then has to deal with. The equation would be something like:
Pr = (ECs1 + (∑_{m=0}^{M} f(n, ECm, R, t) / M)) / 2        Eq. 6
Where:
ECs1 is the currently selected individual behaviour complexity,
ECm is any previously selected individual behaviour complexity, for any related scenario,
n is number of times in memory that the previous event occurred,
R is the response or impact of the event,
t is the last time the event occurred,
M is the total number of behaviour-response pairs stored in memory,
f is some function evaluation over the variable set, maybe ‘n(EC + R) / t’ if R is a specific response,
or a multiplication if it is the weight of the response.
When the agent selects a new behaviour, it is expecting a positive response. After a reply from
the environment, the individual behaviour plus the response is fed back into the equation to get
a new amount. If this is less than expected – ‘current Pr plus new EC’, then the response has been
a negative one and possibly a different behaviour should be selected. This type of process can
repeat, with bad responses being flagged and not selected again, for example, until a decision is
made, maybe a new stable state is reached. As the equation calculates, it also feeds back its
current state to update its evaluation for the next selection. The responses are therefore even
more responsible for changing the agent state, where the behaviour selection, using the entity
complexity, is an individual one, maybe based more on knowledge. So again, there is a hint of a
simpler evaluation, which is the knowledge-based decision, balancing itself with the more
complex decision, after the response is also factored in.
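A minimal sketch of Equation 6 and its feedback step is given below; the memory format and the particular form of f are assumptions based on the variable list above, not a published implementation:

# memory holds one entry per remembered behaviour-response pair:
# (n: times the event occurred, EC: its behaviour complexity, R: response weight, t: how long ago it occurred).

def f(n, ec, r, t):
    # One plausible reading of the text: n(EC + R) / t, with R treated as a weight.
    return n * (ec + r) / max(t, 1)

def predict(current_ec, memory):
    # Eq. 6: the current selection plus the averaged, time-discounted history.
    if not memory:
        return current_ec / 2.0
    history = sum(f(n, ec, r, t) for (n, ec, r, t) in memory) / len(memory)
    return (current_ec + history) / 2.0

def feed_back(expected_pr, current_ec, response, memory):
    # Fold the behaviour plus its response back into memory (newest entries have the
    # smallest age, so the largest weight) and re-evaluate; a drop below the expected
    # value flags a negative response, suggesting a different behaviour next time.
    memory.append((1, current_ec + response, 0.0, 1))
    new_pr = predict(current_ec + response, memory)
    return new_pr, new_pr < expected_pr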
4.4
Self-Adjusting Evaluation
A more intelligent version of the metric for a simulation might therefore look as follows:
PSL = Pr / PC        Eq. 7
Where the predictive equation can replace the entity complexity and both of the flexibility parts.
The prediction is modelled as in equation 6, by the agent decision plus its reaction to any
response. A dynamic problem complexity can be measured as in equation 8 and is essentially the
fraction of all behaviours that the agent knows about. Depending on how this is measured, a
multiplication might be more appropriate for the PSL. However, if I only ‘know’ about 1
behaviour, for example, then based on my own knowledge, I can solve that more easily than if I
have to deal with several known behaviours.
PC = (∑_{n=0}^{N} ECn + ∑_{m=0}^{M} Rm) / PBS        Eq. 8
The collective capabilities are thus reduced to cooperation and communication with other
entities, where coordination is moved to the predictive part.
COLs = (COPs + COMs) / 2        Eq. 9
4.5
Testing
The behaviour metric was tested in [9]. There has not been an opportunity to test the new
predictive equation or its feedback algorithm and so this paper presents the theory of it only and
notes the relation of the theory to the other research. However, a worked example described
next, should help to show how it would work in practice.
4.5.1 Worked Behaviour Example
Consider the following scenario: an agent finds itself in a situation S1. The agent is modelled with
behaviours B1 to B5 and the world is modelled with behaviours B1 to B10. The agent has
encountered the same scenario S1 before and attempted the following behaviour set with related
events, to deal with it:
Event B3 was tried at time t3 and resulted in a response R8.
Event B4 was tried at time t2 and resulted in response R6.
Based on this, behaviour B4 is selected again, leading to the equation:
Pr = B4s1 + (1 x (B4 + R6) / 2) + (1 x (B3 + R8) / 3)
The scene is an interactive one, with another agent able to reply. The other agent also knows the
scenario and replies with a behaviour B10. This is found to be unfavourable for the agent and
reduces its overall evaluation through the equation:
Pr = (B4S1 + R10S1) + (1 x (B4 + R6) / t2) + (1 x (B3 + R8) / t3), where R10S1 is negative.
Therefore, the prediction reduces and the agent is required to try again. It now has knowledge
of the reply B10 and also knows not to use behaviour B4 if it doesn’t have to. Therefore, a new
response based on its new history could lead to:
Pr = B2S1 + (1 x (B4S1 + R10S1) / t2) + (1 x (B4 + R6) / t3) + (1 x (B3 + R8) / t4)
The reply by the environment, B10 again, is not as unfavourable now and so the prediction
increases. Based on other criteria, the behaviour can be played again, or a satisfactory situation
may have been achieved. In either case, to resolve this situation, two behaviours were tried and
both were fed back into the evaluation function. The first one even counted as part of the history
for selecting the second one.
5
Concept Binding
Concept aggregation or binding is mentioned as part of the symbolic neural network [11]. This is
an experience-based structure that combines lower-level concepts into more complex global
ones and would work in the more intelligent brain region. Other related papers [6][7] have
developed algorithms that link everything together and then sort using entropy. The image-processing model of section 2, for example, introduces the variable structure through cross-referencing. The behaviour metric of section 4 does not fit quite as obviously. The sensory input
must still come first from the environment, before deciding on a plan. The behaviour selection
and reasoning process must therefore occur afterwards, using the higher-level brain regions to
sort the lower-level ones. But a binding is simply a repeat of the pattern and it does not have to
represent anything other than what the ensemble mass represents. So it can be used in exactly
the same context. With one visual system theory ([22] and others), synchronous oscillations in
neuronal ensembles bind neurons representing different features of an object. Gestalt
psychology is also used, where objects are seen independently of their separate pieces. They
have an ‘other’ interpretation of the sub-features and not just a summed whole of them.
Although, there are still problems with the theories, including the requirement for too many
independent neural constellations to represent every feature.
The numerical problem can therefore be helped with some level of cross-referencing. The
question like ‘why don’t we confuse a red circle and a blue square with a blue circle and a red
square’ [4] could be answered if ‘red’, ‘blue’, ‘circle’ or ‘square’ are individual concepts that also
cross-reference each other. Individual means a base node in a tree and cross-reference means a
leaf node in another tree. If a tree is accepted as part of a circuit, then the base neuron will
receive positive feedback, which may be recognised more because of the greater firing effort.
Leaf nodes would also have to be relevant to complete a circuit, but they may also be peripheral
to a main concept and so can act more as links. One leaf node would also be the base of another
tree, when both trees could be active at the same time and relate to the same larger concept.
This is illustrated in Figure 2. The concepts of red, blue, circle and square are all base concepts
learned by the system and also have cross-referenced branches in other trees. Some neurons can
have 1000 branches or more (see footnote 2). If the senses send signals about red and circle, for example, then
the two central trees can complete a circuit, even if the concepts exist in other places as well. For
reasoning then, there would also be influences from the higher-level processes that perform
other types of aggregation, or for synchronisation.
(Footnote 2: Prof K. Arai, SAI’14.)
Figure 2. One level of linking in a temporal model defines a particular ensemble mix.
It is therefore possible to give concepts graded strengths and also any arbitrary mix of the learned
base set. This is already part of the concept trees research [8], where a leaf node in one tree links
to a base node in another tree. Or to put it another way, if a base node branches to link with
another tree, it represents the same thing in any other tree. The concept tree may be too
semantic for the level being considered here and their base concepts would not be ‘anchored’
because there is an idea that concept trees can re-join with each other. So possibly, the path
description for Figure 2 is only 1 or 2 levels deep – the base node and its same branches. The
related Concept Base [11] however has been used to manage flat hierarchies in that paper, or
the trees in other papers, where the flat hierarchy has been associated with a cognitive process.
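The cross-referencing idea around Figure 2 can be mocked up in a few lines (a purely hypothetical data structure, meant only to show how base nodes with leaf cross-references could complete circuits for a sensed combination such as red and circle):

# Each learned base concept keeps a set of leaf cross-references into other trees.
trees = {
    'red':    {'circle', 'square'},
    'blue':   {'circle', 'square'},
    'circle': {'red', 'blue'},
    'square': {'red', 'blue'},
}

def completed_circuits(sensory_input):
    # A tree completes a circuit when its base concept is sensed and at least one
    # of its leaf cross-references is also sensed, so the base receives feedback.
    active = set(sensory_input)
    return {base for base, leaves in trees.items() if base in active and leaves & active}

print(completed_circuits({'red', 'circle'}))    # {'red', 'circle'}; the other trees stay inactive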
6
Neural Binding for a Binary-Analog Interface
While section 5 looked at the coarser concept binding, this section considers again the idea of
neuron binding (see footnote 3). Biology has already suggested theories about neural oscillations and binding
that include neural pairing. The pairing helps to group the neurons into specific patterns that the
brain can understand, when different features can become synchronised and oscillate together.
(Footnote 3: See Wikipedia, for example.)
The binding theory of this paper is described in Figure 3. In the figure, a neuron in a base
ensemble mass binds with a neuron in a related hierarchical structure. The hierarchy gives more
meaning to the structure and helps to guide a search process, but it is not clear how or why this
structure would form. Signal strength would be an attractive option to produce the hierarchical
neuron, but if the base node has the strongest signal, then that should result in a longer link to
its paired neuron. The construction of the hierarchy is therefore more likely to be based on time.
When neurons fire they create new neurons and a neuron must exist before it can form a link to
another one. The neurons that form first are therefore more likely to link to other neurons that
form later and so in a mechanical sense, the hierarchy could be created.
It is also useful to consider electrical synapses, which can be bi-directional and setup an oscillating
wave between close neural regions. They are created along with chemical synapses. The point of
the pairing is this resonance and so a weaker electrical signal would be ideal. Quantum mechanics
is one theory used to explain how consciousness might work, where several patterns and states
can collapse into one. With a paired neuron however, there can be resonance between the pair
without a quantum element. This resonance could produce a signal, similar to how different sized
pipes produce a note. The resonance is obviously very quick and so it would all meld into the one
signal. If the base ensemble can refresh the whole pattern as well, then that variable process can
(re)activate parts of the structure and in a timed way. This model fits deeper in the brain however
and is not intended for the intelligent cortex area.
The model is also based on the idea of an auto-associative neural network. The Hopfield neural
network [15], and its stochastic equivalents are auto-associative or memory networks. With the
memory networks, information is sent between the input and the output until a stable state is
reached, when the information does not then change. These are resonance networks, such as
bidirectional associative memory (BAM), or others [20], but they can only provide a memory
recall – they map the input pattern directly to the output pattern. If some of the input pattern is
missing however, they can still provide an accurate recall of the whole pattern. They also prefer
the data vectors to be orthogonal without overlap. This is however ideal for the binding that only
wants to reproduce the base ensemble in the hierarchy.
Figure 3. Neuron Pairing: an ensemble neuron links with a hierarchical neuron. Also figure 4 in
Greer (2016).
6.1
Image Processing Example
As an example, the image-processing algorithm of section 2 is mapped to the neuron pairing
architecture in this section. This is not final and there are questions about how exactly it might
work, but there is also a clear process that can relate the two. The first thing to consider is the
neuron ensemble and while neurons are continually being created, the ensemble mass is
assumed to exist already. When some of it is excited, this then starts the binding process with
the neurons in the hierarchy. The process is shown in Figure 4. The LHS represents the lower
ensemble mass, where an input has activated the central column of black neurons. With the
image-processing algorithm, each pixel (neuron) on the RHS image relates to the same pixel
(neuron) on the LHS image. The grey squares represent additional pixels (neurons) that have been
associated during the reinforcement procedure. The binding process can probably include more
than 1 base neuron and hierarchy at a time and it is probably not the case that only the grey
squares would be further up a hierarchy. While that part is not clear, what is good about the
binding is that it should introduce more accuracy into the recognition result. It would require for
both the input sensory image and the stored hierarchy image to both fire the same related set of
neurons, for the oscillations to register a persistent signal between them. If there is a neuron in
the hierarchy that sends a signal back to the sensory input and it is not part of the pattern, then
it cannot oscillate, so that part of the error can be removed. If part of the sensory input is missing
from the hierarchy, then it cannot oscillate and so that part of the input needs to be learned.
Figure 4. LHS relates to neuron binding ensemble mass, with central column activated. RHS
relates to hierarchy, with a direct mapping. The two red lines show where the ensemble is
missing and so it needs to be learned. The blue lines show extra neurons from the hierarchy
back to the ensemble, but can be removed as error. The other paired black squares represent
where the patterns match and can oscillate together.
A third possibility is if the hierarchy returns a signal not in the sensory input, but that has links in
the ensemble mass. It may usually be part of the input pattern and so through the links it can get
activated as part of the pattern. A fourth possibility is if a lot of the hierarchy sends back signals
to inactive neurons, but the hierarchy should be more accurate and is activated from the
ensemble. It is also controlled from further levels above, so this possibility is not as likely.
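The four possibilities just described can be summarised in a small comparison routine (a hypothetical encoding; only the split into matched, removable-error, missing and link-activated cells follows the text):

def compare_patterns(sensory, hierarchy, ensemble_links):
    # sensory, hierarchy: sets of active cell indices; ensemble_links: cell -> linked cells.
    sensory, hierarchy = set(sensory), set(hierarchy)
    matched = sensory & hierarchy              # pairs that can oscillate together
    extra = hierarchy - sensory                # returned by the hierarchy but not sensed
    missing = sensory - hierarchy              # sensed but not stored: needs to be learned
    # An extra cell with links into the active part of the ensemble may still be
    # activated as part of the pattern; the remainder can be removed as error.
    link_activated = {c for c in extra if ensemble_links.get(c, set()) & sensory}
    removable_error = extra - link_activated
    return matched, removable_error, missing, link_activated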
7
Conclusions
This paper has mostly considered a binding problem and the implications of a binary-analog
conversion process. This has led to some interesting results, both in the areas of image processing
and higher-level reasoning. It has also helped when thinking about the temporal synchrony
problems of neural binding and how separate parts can be re-combined into a more coherent
whole. The binding can also help with recognition accuracy. The behaviour metric has been
updated to include a predictive part that may be used as part of a simulation. This allows the
metric to be more intelligent and should help to clarify the flexibility attributes that it has. A
behaviour decision is based on the agent’s current state, its abilities and also its memory.
If the model of this and earlier papers is used, then (sub)concepts can in fact be represented
individually, with lots of cross-linking representing the different contexts. With so many neurons
in the brain, depending on how a scenario is broken down, why could it not accommodate this?
For the pattern ensembles, base nodes that link as branches in other trees can simply represent
themselves. This is a simplification of concept trees, where symbolically or conceptually, the node
representation is only 1 or 2 levels deep. If there is a tree structure however, then there can be
deeper paths that can relate to graded signal strengths, or even allow for different connection
patterns over the same ensemble. Also key is reinforcement from above, through reasoning, that
completes the circuits. It can still be shown that the ideas fit together into a common model, even
if it uses a lot of standard and compatible structures.
References
[1] Bryant, B.D. and Miikkulainen, R. (2007). Acquiring Visibly Intelligent Behavior with Example-Guided Neuroevolution, In Proceedings of the Twenty-Second National Conference on
Artificial Intelligence (AAAI-07), pp. 801 - 808.
[2] Cer, D.M. and O'Reilly, R.C. (2006). Neural Mechanisms of Binding in the Hippocampus and
Neocortex: Insights from Computational Models, H.D. Zimmer, A. Mecklinger, and U.
Lindenberger (Eds) Handbook of binding and memory: Perspectives from cognitive
neuroscience, pp.193 - 220, Oxford University Press.
[3] de Campos, T.E., Babu, B.R. and Varma, M. (2009). Character recognition in natural images,
In Proceedings of the International Conference on Computer Vision Theory and Applications
(VISAPP), Lisbon, Portugal.
[4] Feldman, J. (2013). The Neural Binding Problem(s), Cognitive neurodynamics, Vol. 7, No. 1,
pp. 1-11.
[5] Garnier, S., Gautrais, J., and Theraulaz, G. (2007). The biological principles of swarm
intelligence, Swarm Intelligence. Vol. 1, pp. 3 – 31.
[6] Greer, K. (2017). A Single-Pass Classifier for Categorical Data, Special Issue on: IJCSysE Recent
Advances in Evolutionary and Natural Computing Practice and Applications, Int. J.
Computational Systems Engineering, Inderscience, Vol. 3, Nos. 1/2, pp. 27 - 34. Also available
on arXiv at http://arxiv.org/abs/1503.02521.
[7] Greer, K. (2016). New Ideas for Brain Modelling 3, available on arXiv at
https://arxiv.org/abs/1612.00369.
[8] Greer, K. (2014). Concept Trees: Building Dynamic Concepts from Semi-Structured Data using
Nature-Inspired Methods, in: Q. Zhu, A.T Azar (eds.), Complex system modelling and control
through intelligent soft computations, Studies in Fuzziness and Soft Computing, Springer-Verlag, Germany, Vol. 319, pp. 221 – 252, 2014. Published on arXiv at
http://arxiv.org/abs/1403.3515.
[9] Greer, K. (2013). A Metric for Modelling and Measuring Complex Behavioural Systems, IOSR
Journal of Engineering (IOSRJEN), Vol. 3, Issue 11, November, pp. 19 – 28, e-ISSN: 2250-3021,
p-ISSN: 2278-8719. Published on arXiv at http://arxiv.org/abs/1403.0770.
[10]Greer, K. (2012). Turing: Then, Now and Still Key, in: X-S. Yang (eds.), Artificial Intelligence,
Evolutionary Computation and Metaheuristics (AIECM) - Turing 2012, Studies in
Computational Intelligence, 2013, Vol. 427/2013, pp. 43-62, DOI: 10.1007/978-3-642-29694-9_3, Springer-Verlag Berlin Heidelberg. Published on arXiv at http://arxiv.org/abs/1403.2541.
[11]Greer, K. (2011). Symbolic Neural Networks for Clustering Higher-Level Concepts, NAUN
International Journal of Computers, Issue 3, Vol. 5, pp. 378 – 386, extended version of the
WSEAS/EUROPMENT International Conference on Computers and Computing (ICCC’11).
[12]Hanks, S., Pollack, M.E., and Cohen, P.R. Benchmarks, Test Beds, Controlled Experimentation,
and the Design of Agent Architectures, AI Magazine. 1993;14(4):17 – 42.
[13]Hawkins, J. and Blakeslee, S. On Intelligence. Times Books, 2004.
[14]Hinton, G.E., McClelland, J.L., and Rumelhart, D.E. (1986). Distributed representations. In D.E.
Rumelhart, J.L. McClelland, & PDP Research Group (Eds.), Parallel distributed processing. Vol.
1: Foundations (Chap. 3, pp. 77–109). Cambridge, MA: MIT Press.
[15]Hopfield, J.J. (1982). Neural networks and physical systems with emergent collective
computational abilities, Proceedings of the National Academy of Sciences of the USA, vol. 79,
No. 8, pp. 2554 - 2558.
[16]Kowalski, R. (1972). And-or Graphs, Theorem-proving Graphs and Bi-directional Search,
Machine Intelligence 7.
[17]Macal, C.M. and North, M.J. (2010). Tutorial on agent-based modelling and simulation,
Journal of Simulation, Operational Research Society Ltd., Vol. 4, pp. 151–162.
[18]Mashour, G.A. (2004). The Cognitive Binding Problem: From Kant to Quantum
Neurodynamics, NeuroQuantology, Issue 1, pp. 29-38.
[19]Pollack, M., and Ringuette, M. Introducing the TILEWORLD: Experimentally Evaluating Agent
Architectures, In Proceedings of the Eighth National Conference on Artificial Intelligence,
Menlo Park, Calif.: American Association for Artificial Intelligence. 1990;183 – 189.
[20]Rojas, R. Neural Networks: A Systematic Introduction, Springer-Verlag, Berlin and online at
books.google.com, 1996.
[21]Senkowski, D., Schneider, T.R., Foxe, J.J. and Engel, A.K. (2008). Crossmodal binding through
neural coherence: implications for multisensory processing, Trends in Neurosciences, Vol. 31,
No. 8, pp. 401 - 409, DOI: 10.1016/j.tins.2008.05.002.
[22]Singer, W. and Gray, C.M. (1995). Visual feature integration and the temporal correlation
hypothesis, Annu Rev Neurosci. Vol. 18, pp. 555 - 586.
[23]Sukanya, P. and Gayathri, K.S. (2013). An Unsupervised Pattern Clustering Approach for
Identifying Abnormal User Behaviors in Smart Homes, IJCSN International Journal of
Computer Science and Network, Vol. 2, Issue 3, pp. 115 - 122.
[24]The Chars74K dataset, http://www.ee.surrey.ac.uk/CVSSP/demos/chars74k/. (last accessed
15/8/17).
[25]Yashchin, E. (1987). Some aspects of the theory of statistical control schemes, IBM J. Res.
Develop., Vol. 31 No. 2, pp. 199 – 205.
arXiv:1412.8054v2 [math.OC] 15 Nov 2016
Power Flow as an Algebraic System
Jakub Marecek, Timothy McCoy, and Martin Mevissen∗
January 24, 2018
Abstract
Steady states of alternating-current (AC) circuits have been studied
in considerable detail. In 1982, Baillieul and Byrnes derived an upper
bound on the number of steady states in a loss-less AC circuit [IEEE
TCAS, 29(11): 724–737] and conjectured that this bound holds for AC
circuits in general. We prove this is indeed the case, among other results,
by studying a certain multi-homogeneous structure in an algebraisation.
1
Introduction
For more than 60 years [37], steady states of alternating-current (AC) circuits
have been studied in considerable detail. The key problem, sometimes known
as the power flow or load flow problem, considers complex voltages Vk at all
buses k as variables, except for one reference bus (k = 0), where the power is supplied. When one denotes the complex admittance matrix Y , complex current
Ik , and complex power Sk at bus k, the steady-state equations are based on:
Sk = Vk Ik∗ = Vk ∑_{l∈N} Y∗k,l Vl∗ = ∑_{l∈N} Y∗k,l Vk Vl∗        (1)
where asterisk denotes complex conjugate. This captures the complex, nonconvex non-linear nature [16, 38, 11, 15, 18] of any problem in the AC model.
In order to obtain an algebraic system from (1), one needs to reformulate the
complex conjugate. In order to do so, one may replace all Vk∗ with independent
variables Uk , and filter for “real” solutions where Uk = Vk∗ = ℜVk − ℑVk ı once
the complex solutions are obtained. Thereby, we obtain a particular structure,
which allows us to prove a variety of results.
In particular, the main contributions of our paper are:
• a reformulation of the steady-state equations to a multi-homogeneous algebraic system
∗ J. Marecek and M. Mevissen are with IBM Research – Ireland, IBM Technology Campus
Damastown, Dublin D15, Ireland. e-mail: [email protected]. T. McCoy is with
Google.
• analytical results on the number and structure of feasible solutions considering losses, resolving a conjecture of Baillieul and Byrnes [2], which
has been open for over three decades
• empirical results for some well-known instances, including the numbers of
roots, conditions for non-uniqueness of optima, and tree-width.
Our analytical results rely on the work of Morgan and Sommese [32, 36] on multi-homogeneous structures. Our empirical results rely on Bertini [6], a leading implementation of homotopy-continuation methods. As we explain in Section 8, ours is not the first algebraisation of the system (1), cf. [37, 4, 3, 2], and there is a long history [34, 14, 26, 23, 31, 29, 9, 39, 41] of the use of homotopy-continuation methods.
2
The Problem
In order to make the paper self-contained, we restate the steady-state equations. Consider a circuit represented by an undirected graph (N, E), where
vertices n ∈ N are called buses and edges {l, m} ∈ E ⊆ N × N are called
branches, and an admittance matrix Y = G + Bı ∈ C^{|N|×|N|} , where the real
part of an element is called conductance G = (glm ) and the imaginary part
susceptance B = (blm ). Each bus k ∈ N is associated with complex voltage
Vk = ℜVk + ℑVk ı, complex current Ik = ℜIk + ℑIk ı, and power Sk = Pk + Qk ı
demanded or generated. Let 0 ∈ N correspond to a reference bus with phase
ℑV0 = 0 and magnitude |V0 | fixed; powers at all other buses are fixed too. (In
a variety of extensions, there are other buses, denoted generators, where the voltage magnitude, but not the phase and not the power, is fixed.) Each branch
(l, m) ∈ E is associated with the complex power Slm = Plm + Qlm ı. The
key constraint linking the buses is Kirchhoff’s current law, which stipulates the
sum of the currents injected and withdrawn at each bus is 0. Considering the
relationship I = Y V , the steady state equations hence are:
P_k^g = P_k^d + ℜVk ∑_{i=1}^{n} (ℜyik ℜVi − ℑyik ℑVi) + ℑVk ∑_{i=1}^{n} (ℑyik ℜVi + ℜyik ℑVi)        (2)

Q_k^g = Q_k^d + ℜVk ∑_{i=1}^{n} (−ℑyik ℜVi − ℜyik ℑVi) + ℑVk ∑_{i=1}^{n} (ℜyik ℜVi − ℑyik ℑVi)        (3)

Plm = blm (ℜVl ℑVm − ℜVm ℑVl) + glm (ℜVl^2 + ℑVm^2 − ℑVl ℑVm − ℜVl ℜVm)        (4)

Qlm = blm (ℜVl ℑVm − ℑVl ℑVm − ℜVl^2 − ℑVl^2) + glm (ℜVl ℑVm − ℜVm ℑVl − ℜVm ℑVl) − (b̄lm / 2)(ℜVl^2 + ℑVl^2)        (5)
Additionally, one can optimise a variety of objectives over the steady states.
In one commonly used objective function, one approximates the costs of real
power P0 generated at the reference bus 0 by a quadratic function f0 :
cost := f0 (P0 ).
(6)
(In a variety of extensions, in which there are other buses where the power
is not fixed, there would be a quadratic function for each such bus and the
quadratic function of power would be summed across all of these buses.) In
the Lp -norm loss objective, one computes a norm of the vector D obtained by
summing apparent powers S(u, v) + S(v, u)∀(u, v) ∈ E for:
||D||p = ( ∑_{(u,v)∈E} |S(u, v) + S(v, u)|^p )^{1/p} .        (7)
The usual ||D||1 is denoted loss below. We consider these objectives only in
Section VII, while our results in Sections IV–VI apply independently of the use
of any objective function whatsoever.
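A direct transcription of (7), assuming the branch flows are available as complex numbers (illustrative only):

def lp_loss(branch_flows, p=1):
    # branch_flows maps an edge (u, v) to the pair (S(u, v), S(v, u)); p = 1 gives the usual loss.
    return sum(abs(s_uv + s_vu) ** p for (s_uv, s_vu) in branch_flows.values()) ** (1.0 / p)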
3
Definitions from Algebraic Geometry
In order to state our results, we need some definitions from algebraic geometry.
While we refer the reader to [2, 27] for the basics, we present the concepts
introduced in the past three decades, not yet widely covered by textbooks. For
a more comprehensive treatment, please see [32, 36, 35].
Let n ≥ 0 be an integer and f (z) be a system of n polynomial equations in
z ∈ C n with support (A1 , . . . , An ):
f1(z) = ∑_{α∈A1} f1α z1^{α1} z2^{α2} · · · zn^{αn}
  ⋮                                                        (8)
fn(z) = ∑_{α∈An} fnα z1^{α1} z2^{α2} · · · zn^{αn} ,
where coefficients fiα are non-zero complex numbers. It is well known [12] that
the polynomials define n projective hypersurfaces in a projective space CPn .
Bézout's theorem states that either the hypersurfaces intersect in an infinite set
with some component of positive dimension, or the number of intersection
points, counted with multiplicity, is equal to the product d1 · · · dn , where di
is the degree of polynomial i. We call the product d1 · · · dn the usual Bézout
number.
The usual Bézout number can be improved by considering:
Definition 1 (Structure). Any partition of the index set {1, . . . , n} into k sets
I1 , . . . , Ik defines a structure. There, Zj = {zi : i ∈ Ij } is known as the group
of variables for each set Ij . The associated degree dij of a polynomial fi with
respect to group Zj is
dij := max_{α∈Ai} ∑_{l∈Ij} αl .        (9)
We say that fi has multi-degree (di1 , . . . , dik ).
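A direct transcription of (9), with supports given as tuples of exponents (illustrative only):

def associated_degree(support, group):
    # support: the exponent tuples alpha of one polynomial; group: 0-based indices of I_j.
    return max(sum(alpha[l] for l in group) for alpha in support)

# Example: f(z) = z1**2 * z3 + z2 with groups {z1, z2} and {z3}.
support_f = [(2, 0, 1), (0, 1, 0)]
print(associated_degree(support_f, (0, 1)),    # 2
      associated_degree(support_f, (2,)))      # 1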
Whenever for some j, for all i, the same dij is attained for all α ∈ Ai , we call the system (8) homogeneous in the group of variables Zj . The projective space associated to the group of variables Zj in a structure has dimension |Ij | − 1 if (8) is homogeneous in Zj , and
aj := |Ij |        (10)
otherwise.
Definition 2 (Multi-homogeneous Bézout Number). Assuming n = ∑_{j=1}^{k} aj , the multi-homogeneous Bézout number Béz(A1 , . . . , An ; I1 , . . . , Ik ) is defined as the coefficient of the term ∏_{j=1}^{k} ζj^{aj} , where aj is the associated dimension (10), within the polynomial ∏_{i=1}^{n} ∑_{j=1}^{k} dij ζj , in variables ζj , j = 1 . . . k, where the coefficients dij are the associated degrees (9); that is, (d11 ζ1 + d12 ζ2 + . . . + d1k ζk ) (d21 ζ1 + d22 ζ2 + . . . + d2k ζk ) · · · (dn1 ζ1 + dn2 ζ2 + . . . + dnk ζk ).
Consider the example of Wampler [42] in x ∈ C^3 :
p1 (z) = x1^2 + x2 + 1,
p2 (z) = x1 x3 + x2 + 2,
p3 (z) = x2 x3 + x3 + 3,        (11)
with the usual Bézout number of 8. Considering the partition {x1 , x2 }, {x3 }, where d11 = 2, d12 = 0, d21 = d22 = d31 = d32 = 1, the monomial ζ1^2 ζ2^1 is to be looked up in the polynomial 2ζ1 (ζ1 + ζ2 )^2 . The corresponding multi-homogeneous Bézout number is hence 4 and this is the minimum across all
In general, the multi-homogeneous Bézout number Béz(A1 , . . . , An ; I1 , . . . , Ik )
is an upper bound on the number of isolated roots of (11) in CPa1 × · · · × CPak ,
and thereby an upper bounds the number of isolated finite complex roots of (11).
There are a variety of additional methods for computing the multi-homogeneous
Bézout number, e.g., [42]. In the particular case where A = A1 = · · · = An , we
denote
Y
k
n
def
a
(12)
dj j ,
Béz(A1 , . . . , An ; I1 , . . . , Ik ) =
a1 a2 · · · ak
j=1
where dj = dij (equal for each i) and the multinomial coefficient
n!
n
def
=
a1 a2 · · · ak
a1 ! a2 ! · · · ak !
Qk
Pk
is the coefficient of j=1 ζjak in (ζ1 + · · · + ζk )n with n = j=1 aj , as above.
In summary, the multi-homogeneous Bézout number provides a sharper
bound on the numberQof isolated solutions of a system of equations than the
n
usual Bézout number i=1 di = d1 · · · dn . In the famous example of the eigenvalue problem [43], it is known that the Bézout number is 2n , whereas there
exists a structure with multi-homogeneous Bézout number of n. We hence
study the multi-homogeneous structure within the steady state equations of
alternating-current circuits.
4
The Multi-Homogeneous Structure
Notice that in order to obtain an algebraic system from the steady-state equations (2–5), one needs to reformulate the complex conjugate. In order to do so,
one may replace all vn∗ with independent variables un , and later filter for solutions where un = vn∗ once the complex solutions are obtained. We call such a solution “real”. Let G be the set of slack generators for which |vn | is specified,
and assume 0 ∈ G corresponds to a reference node with phase 0. Notice that
the use of variables vn and un produces a multi-homogeneous structure with
variable groups {vn } and {un }:
vn
X
Yn,k uk + un
X
Yn,k uk − un
k
vn
k
X
∗
Yn,k
vk = 2pn
n∈N \G
X
∗
Yn,k
vk = 2qn
n∈N \G
k
k
vn un = |vn |2
v0 = |v0 |, u0 = |v0 |
n ∈ G − {0}
(13)
For example, for the two-bus network we obtain:

    v1 (Y1,0 u0 + Y1,1 u1) + u1 (Y*1,0 v0 + Y*1,1 v1) = 2p1,
    v1 (Y1,0 u0 + Y1,1 u1) − u1 (Y*1,0 v0 + Y*1,1 v1) = 2q1,
    v0 = |v0|,  u0 = |v0|.    (14)
Using the algebraic system, one can formulate a number of structural results
concerning power flows.
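As a hedged illustration of this algebraisation (a minimal SymPy sketch, not part of the original text), the system (14) can be written down verbatim and its complex solutions filtered for the “real” ones with u1 = v1*. The admittance and the injections p1, q1 below are illustrative values only and do not correspond to any benchmark instance.

```python
import sympy as sp

v1, u1 = sp.symbols("v1 u1")
Y11 = sp.Rational(25, 26) - sp.I * sp.Rational(125, 26)   # illustrative admittance, 1/(0.04 + 0.2i)
Y10 = -Y11                                                 # off-diagonal entry of the admittance row
v0 = u0 = 1                                                # slack bus: v0 = u0 = |v0| = 1
p1, q1 = sp.Rational(-7, 2), sp.Rational(1, 2)             # illustrative injections at bus 1

lhs = v1 * (Y10 * u0 + Y11 * u1)
rhs = u1 * (sp.conjugate(Y10) * v0 + sp.conjugate(Y11) * v1)
solutions = sp.solve([sp.Eq(lhs + rhs, 2 * p1),
                      sp.Eq(lhs - rhs, 2 * q1)], [v1, u1], dict=True)
print(len(solutions))   # finitely many; at most binomial(2, 1) = 2 by Theorem 1 for n = 2 buses

# keep only the "real" solutions, i.e. those with u1 equal to the conjugate of v1
real_solutions = [s for s in solutions
                  if abs(complex(s[u1]) - complex(s[v1]).conjugate()) < 1e-9]
```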
5 An Analysis for s = 1
For the particular multi-homogeneous structure given by the partition of the variables into groups in (13), we can bound the number of isolated solutions:

Theorem 1. With exceptions on a parameter set of measure zero, the alternating-current power flow (13) has a finite number of complex solutions, which is bounded above by the binomial coefficient

    C(2n − 2, n − 1) = (2n − 2)! / ((n − 1)! (n − 1)!).    (15)
Proof. Each equation in the system (13) is linear in the v variables and also in the u variables, giving rise to a natural multi-homogeneous structure of multi-degree (1, 1). Since the slack bus voltage is fixed at a reference value, the system has 2n − 2 such equations in (n − 1, n − 1) variables. By the multi-homogeneous form of Bézout's Theorem (see e.g. Theorem 8.4.7 in [36]), the total number of solutions in the multi-projective space CP^{n−1} × CP^{n−1} is precisely the stated bound, counting multiplicity. Some subset of these lie on the affine patch C^{n−1} × C^{n−1} ⊂ CP^{n−1} × CP^{n−1}, giving the result.
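As a quick numerical check (a minimal Python sketch, not part of the proof), the bound of Theorem 1 can be evaluated directly; for |N| = 3, . . . , 14 it reproduces the Theorem 1 column of Table 1.

```python
from math import comb

def acpf_upper_bound(n_buses):
    # Theorem 1: at most binomial(2n - 2, n - 1) isolated complex solutions.
    return comb(2 * n_buses - 2, n_buses - 1)

print([acpf_upper_bound(n) for n in range(3, 15)])
# [6, 20, 70, 252, 924, 3432, 12870, 48620, 184756, 705432, 2704156, 10400600]
```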
Notice that this applies also to some well-known instances of alternating-current optimal power flows (ACOPF). For example, the instances of Lesieutre et al. [22] and Bukhsh et al. [8] have only a single “slack” bus, whose active and reactive powers are not fixed, and hence the result applies. Notice that the measure-zero exception set is necessary; cf. Example 4.1 of [2].
As we will illustrate in the next section, this bound is tight in some cases.
Deciding whether the bound on the number of roots obtained using a particular
multi-homogeneous structure is tight for a particular instance is, nevertheless,
hard. This can be seen from:
Theorem 2 (Theorem 1 of Malajovich and Meer [27]). There does not exist
a polynomial time algorithm to approximate the minimal multi-homogeneous
Bézout number for a polynomial system (11) up to any fixed factor, unless P =
NP.
We can, however, show there exists a certain structure among these solutions:
Corollary 1. If there exists a feasible solution of the alternating-current power flow, then either that solution has even multiplicity, greater than or equal to 2, or another solution exists.
Proof. The finite number of solutions to the power flow problem of Theorem 1 is even. Observe that (U, Û) is a solution of the system (13) if and only if (Û*, U*) is a solution. This implies that the non-“real” solutions, that is, solutions for which U* ≠ Û, necessarily come in pairs. It follows that “real” feasible solutions are also even in number, counting multiplicity. The result follows.
Note that a solution having multiplicity greater than 1 is a special case that
is highly unlikely in a real system. Moreover, it is easily detected, since the
Jacobian at a solution is nonsingular if and only if the solution has multiplicity
1.
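The following minimal sketch (NumPy/SymPy, on a hypothetical two-equation system rather than a power flow) illustrates this check: the Jacobian is evaluated at a computed root, and a full-rank, hence nonsingular, Jacobian certifies multiplicity 1.

```python
import numpy as np
import sympy as sp

x, y = sp.symbols("x y")
F = sp.Matrix([x**2 + y - 1, x - y + 1])      # hypothetical square polynomial system
J = F.jacobian([x, y])

root = {x: 0, y: 1}                            # a root of F
J_at_root = np.array(J.subs(root).tolist(), dtype=float)
print(np.linalg.matrix_rank(J_at_root))        # full rank 2 -> the root has multiplicity 1
```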
6 Alternating-Current Optimal Power Flow
One may also make the following observations about the alternating-current
optimal power flows, i.e., the problem of optimising an objective over the steady
state:
Remark 1. For the alternating-current power flow, where powers are fixed at all but the reference bus, one can, except for a parameter set of measure zero, enumerate all feasible solutions in finite time whenever there exists a real feasible solution.
Indeed: by Theorem 1, we know there exists a finite number of isolated solutions to the system (13). By the homotopy-continuation method of Sommese
et al. [36, 6], we can enumerate the roots with probability 1, which allows us
to pick the global optimum, trivially. Notice that Bertini, the implementation
of the method of Sommese [6], makes it possible to check that all roots are
obtained. Notice that the addition of inequalities can be accommodated by
filtering the real roots.
Nevertheless, this method is not practical, as there may be too many isolated solutions to enumerate. Generically, this is indeed the case whenever there are two or more generators with variable output, i.e., buses whose active and reactive power is not fixed:
Corollary 2. In the alternating-current optimal power flow problem, i.e., with
s > 1, where powers are variable outside of the reference bus and there are no additional inequalities, the complex solution set is empty or positive-dimensional,
except for a parameter set of measure zero. When the complex solution set
is positive-dimensional, if a smooth real feasible solution exists, then there are
infinitely many real feasible solutions.
Proof. For each slack bus after the first, the system has two variables but only
one equation. With s + 1 slack buses, the rank of the Jacobian and hence the
dimension of the complex solution set will be at least s by Lemma 13.4.1 in [36].
Furthermore, if a real feasible solution U exists and the solution set is smooth
at this point, then the local dimension of the complex and real solution sets are
equal at U . Therefore, since the complex solution set is positive-dimensional,
so is the real set at U , and so infinitely many real feasible solutions exist.
Although there are a variety of methods for studying positive-dimensional
systems, including the enumeration of a point within each connected component
[5, 33] and studying the critical points of the restriction to the variety of the
distance function to such points [1], we suggest the method of moments [20, 13]
may be more suitable for studying the feasible set of alternating-current optimal
power flows. It has been shown recently [13] that it allows for very small errors on systems of dimension over five thousand.
As is often the case in engineering applications, one may also be interested
in the distance of a point to the set of feasible solutions. Again, by considering
our algebraisation, one can bound the probability that this distance is small using
the theorem of Lotz [25] on the zero-set V of multivariate polynomials. This
could be seen as the converse of the results of [24].
No. |N| of buses | Bézout's upper bound | BKK-based upper bound | Theorem 1 | Generic lower bound
3  | 16       | 8        | 6        | 6
4  | 64       | 40       | 20       | 20
5  | 256      | 192      | 70       | 70
6  | 1024     | 864      | 252      | 252
7  | 4096     | 3712     | 924      | 924
8  | 16384    | 15488    | 3432     | 3432
9  | 65536    | 63488    | 12870    | 12870
10 | 262144   | 257536   | 48620    | 48620
11 | 1048576  | 1038336  | 184756   | 184756
12 | 4194304  | 4171776  | 705432   | 705432
13 | 16777216 | 16728064 | 2704156  | 2704156
14 | 67108864 | 67002368 | 10400600 | 10400600

Table 1: The maximum number of steady states in a circuit with a fixed number of buses.
Instance | Source | No. |N| of buses | No. |E| of branches | Treewidth tw(P) | No. |X| of solutions | min cost | No. of min. wrt. cost | avg cost | max cost | min loss | No. of min. wrt. loss | avg loss | max loss
case2w    | Bukhsh et al. [8]            | 2 | 1  | 1 | 2  | 8.42    | 1 | 9.04    | 9.66    | 0.71 | 1 | 1.02  | 1.33
case3KW   | Klos and Wojcicka [18]       | 3 | 3  | 2 | 6  | -0.0    | 1 | 1250.0  | 1500.0  | -0.0 | 1 | -0.0  | 0.0
case3LL   | Lavaei and Low [21]          | 3 | 3  | 2 | 2  | 1502.07 | 1 | 1502.19 | 1502.31 | 0.22 | 1 | 0.34  | 0.46
case3w    | Bukhsh et al. [8]            | 3 | 2  | 1 | 2  | 5.88    | 1 | 5.94    | 6.01    | 0.55 | 1 | 0.58  | 0.61
case4     | McCoy et al.                 | 4 | 3  | 1 | 4  | 1502.51 | 1 | 1505.19 | 1507.88 | 0.11 | 1 | 2.79  | 5.48
case4ac   | McCoy et al.                 | 4 | 4  | 2 | 6  | 2005.25 | 2 | 3337.5  | 4005.51 | 0.01 | 1 | 2.44  | 3.78
case4cyc  | Bukhsh et al. [8]            | 4 | 4  | 2 | 4  | 3001.61 | 1 | 3003.56 | 3005.52 | 0.01 | 1 | 1.96  | 3.92
case4gs   | Grainger and Stevenson [45]  | 4 | 4  | 2 | 10 | 2316.37 | 1 | 3681.07 | 4595.8  | 0.37 | 1 | 27.24 | 39.72
case5w    | Lesieutre et al. [22]        | 5 | 6  | 2 | 2  | 2003.57 | 1 | 2004.6  | 2005.63 | 0.47 | 1 | 1.5   | 2.53
case6ac   | McCoy et al.                 | 6 | 6  | 2 | 23 | 4005.44 | 1 | 5574.23 | 6013.26 | 0.02 | 1 | 6.26  | 10.51
case6ac2  | McCoy et al.                 | 6 | 6  | 2 | 22 | 4005.32 | 1 | 5554.03 | 6012.82 | 0.02 | 1 | 5.97  | 10.21
case6b    | McCoy et al.                 | 6 | 6  | 2 | 30 | 3005.7  | 5 | 3606.3  | 4508.28 | 0.01 | 1 | 3.9   | 5.88
case6cyc  | Bukhsh et al. [8]            | 6 | 6  | 2 | 30 | 3005.7  | 4 | 3606.3  | 4508.28 | 0.01 | 1 | 3.9   | 5.88
case6cyc2 | McCoy et al.                 | 6 | 6  | 2 | 12 | 1506.87 | 1 | 3131.55 | 4508.24 | 0.03 | 1 | 4.15  | 6.56
case6cyc3 | McCoy et al.                 | 6 | 6  | 2 | 12 | 4502.42 | 1 | 4505.95 | 4508.01 | 0.02 | 1 | 3.55  | 5.61
case7     | McCoy et al.                 | 7 | 7  | 2 | 2  | 1667.31 | 1 | 1668.36 | 1669.41 | 0.08 | 1 | 0.26  | 0.45
case8cyc  | Bukhsh et al. [8]            | 8 | 8  | 2 | 60 | 6506.11 | 1 | 8557.72 | 9511.04 | 0.01 | 1 | 4.52  | 7.84
case9     | Chow [45]                    | 9 | 9  | 2 | 16 | 3504.44 | 1 | 5692.02 | 6505.02 | 0.03 | 1 | 1.37  | 1.87
case9g    | McCoy et al.                 | 9 | 9  | 2 | 2  | 2604.41 | 1 | 4103.36 | 5602.31 | 0.08 | 1 | 0.26  | 0.45
caseK4    | McCoy et al.                 | 4 | 6  | 3 | 8  | 2008.62 | 1 | 3006.32 | 4005.51 | 0.01 | 1 | 4.6   | 7.03
caseK4sym | McCoy et al.                 | 4 | 6  | 3 | 6  | 2009.28 | 2 | 3339.29 | 4005.91 | 0.01 | 1 | 3.96  | 7.28
caseK6    | McCoy et al.                 | 6 | 15 | 5 | 36 | 2018.2  | 1 | 5075.47 | 6030.0  | 0.02 | 1 | 17.16 | 27.25
caseK6b   | McCoy et al.                 | 6 | 15 | 5 | 40 | 6002.79 | 1 | 6017.01 | 6019.41 | 0.01 | 1 | 14.23 | 16.62
caseK6sym | McCoy et al.                 | 6 | 15 | 5 | 48 | 6003.01 | 1 | 6017.75 | 6019.86 | 0.01 | 1 | 14.75 | 16.86

Table 2: Properties of the instances tested.
variable_group V0, V1; variable_group U0, U1;
I0 = V0*Yv0_0 + V1*Yv0_1; I1 = V0*Yv1_0 + V1*Yv1_1;
J0 = U0*Yu0_0 + U1*Yu0_1; J1 = U0*Yu1_0 + U1*Yu1_1;
fv0 = V0 - 1.0; fv1 = I1*U1 + J1*V1 + 7.0;
fu0 = U0 - 1.0; fu1 = -I1*U1 + J1*V1 - 7.0*I;
Figure 1: A Bertini encoding of ACPF on the two-bus instance of Bukhsh et al.
2012, where the impedance of a single branch is 0.04 + 0.2i.
7 Computational Illustrations
In order to illustrate Theorem 1, we first present the maximum number of steady states in a circuit with a fixed number of buses in Table 1, and compare it to the values of our upper bound (Theorem 1), the Bézout upper bound, and the BKK-based upper bound [10, 31]. Notice that the maximum number of steady states in a circuit with a fixed number of buses is achieved when (N, E) is a clique. Notice further that the generic lower bound, obtained as the number of solutions found by tracing the paths, matches the upper bound of Theorem 1 throughout Table 1.
To illustrate Proposition 1, we have enumerated the steady states using Bertini, a versatile package for homotopy-continuation methods by Sommese et al. [36]. See Figure 1 for an example of Bertini input corresponding to the example (14) above, with constants Yui_j representing Yi,j and Yvi_j representing Y*i,j. The results are summarised in Table 2.
To illustrate Theorem 1 further, we present the values of our upper bound on
a collection of instances widely known in the power systems community. The
instances are mostly available from the Test Case Archive of Optimal Power
Flow (OPF) Problems with Local Optima of Bukhsh1 , while some have appeared
in well-known papers, e.g. [18], and some are available in recent distributions of
Matpower [45], a well-known benchmark. In particular, we present the numbers
of distinct roots of the instances. In all cases where Theorem 1 applies, the
number of solutions found by tracing the paths matches the upper bound of
Theorem 1, certifying the completeness. In other cases, one could rely on Bertini
certificates of completeness of the search.
Empirically, we observe there exists a unique global optimum in all the instances tested with respect to the L1-loss objective. For the generation cost objective, however, there are a number of instances (case4ac, caseK4sym, case6b, case6cyc) where the global optima are not unique. The case of caseK4sym is a particularly good illustration, where the symmetry between two generators and two demand nodes in a complete graph results in multiple global optima.
In order to provide material for further study of structural properties, we present the tree-width of the instances in the tw(P) column of Table 2. Notice that for the well-known small instances, tree-width is 1 or 2, e.g., 1 for the instance in Figure 1, and 2 for the instance of Lesieutre et al. As the instances grow, however, this need not be the case: Kloks [17] shows treewidth is not bounded even in sparse random graphs, with high probability. In complete graphs, such as caseK4sym with tree-width 3 above, tree-width grows linearly in the number of vertices.

1 http://www.maths.ed.ac.uk/OptEnergy/LocalOpt/, accessed November 30th, 2014.
8 Related Work
There is a long history of study of the number and structure of solutions of
power flows [37, 4, 3, 2]. [37] considered the Bézout bound. [10, 31] considered
a bound based on the work of Bernstein [7] and Kushnirenko [19]. [2, 4] derived
the same expression as in Theorem 1 using intersection theory, but in the lossless
AC model. They highlight that the number of solutions in an alternating-current model with losses is an important open problem. We note that Theorem
1 subsumes Theorem 4.1 of [2] as a special case. Finally, we note that [18]
present a lower bound without a proof and recent papers [39, 44] bound certain
distinguished solutions, but not all solutions; cf. [28].
There is also a long history of applications of homotopy-continuation methods in power systems [34, 14, 26, 23], although often, e.g. in [26], the set-up
of the homotopy restricted the methods to a heuristic, which could not enumerate all the solutions of the power flow [30]. Recently, these have attracted
much interest [31, 29, 9] following the work of Trias [39, 40, 41]. See [28] for an
overview.
9 Conclusions
We hope that the structural results provided will aid the development of faster
solvers for the related non-linear problems [15]. Arguably, one could:
• By using Theorem 1 in the construction of start systems for homotopy-continuation methods [43], allow for larger zero-dimensional systems to be
studied.
• Extend Corollary 2 to finding at least one point in each connected component [33, 1].
• Extend the homotopy-continuation methods to consider inequalities within
the tracing, rather than only in the filtering phase, which could improve
their computational performance considerably.
• Develop methods for the optimal power flow problem, whose complexity
would be superpolynomial only in the tree-width and the number of buses.
The latter two may be some of the most important challenges within the analysis
of circuits and systems.
Acknowledgements. Parts of this work have been done while Tim was visiting IBM Research. Jakub would like to thank the Isaac Newton Institute for Mathematical Sciences at the University of Cambridge for its generous support of his visits. Dhagash Mehta has kindly provided a variety of suggestions on the related work.
References
[1] P. Aubry, F. Rouillier, and M. S. E. Din. Real solving for positive dimensional systems. Journal of Symbolic Computation, 34(6):543 – 560, 2002.
[2] J. Baillieul and C. Byrnes. Geometric critical point analysis of lossless power system models. IEEE Transactions on Circuits and Systems,
29(11):724–737, Nov 1982.
[3] J. Baillieul and C. I. Byrnes. Remarks on the number of solutions to the
load flow equations for a power system with electrical losses. In Decision
and Control, 1982 21st IEEE Conference on, pages 919–924, Dec 1982.
[4] J. Baillieul, C. I. Byrnes, and R. B. Washburn. An algebraic-geometric and
topological analysis of the solution to the load-flow equations for a power
system. In Decision and Control including the Symposium on Adaptive
Processes, 1981 20th IEEE Conference on, pages 1312–1320, Dec 1981.
[5] S. Basu, R. Pollack, and M.-F. Roy. A new algorithm to find a point in
every cell defined by a family of polynomials. In Quantifier elimination and
cylindrical algebraic decomposition, pages 341–350. Springer Vienna, 1998.
[6] D. Bates, J. Hauenstein, A. Sommese, and C. Wampler. Numerically Solving Polynomial Systems with Bertini. Software, Environments, and Tools. Society for Industrial and Applied Mathematics, 2013.
[7] D. N. Bernstein. The number of roots of a system of equations. Funkcional.
Anal. i Prilozen., 9(3):1–4, 1975.
[8] W. Bukhsh, A. Grothey, K. McKinnon, and P. Trodden. Local solutions
of the optimal power flow problem. IEEE Trans. Power Syst., 28(4):4780–
4788, 2013.
[9] S. Chandra, D. Mehta, and A. Chakrabortty. Equilibria analysis of power
systems using a numerical homotopy method. In Power Energy Society
General Meeting, 2015 IEEE, pages 1–5, July 2015.
[10] T. Chen and D. Mehta. On the Network Topology Dependent Solution
Count of the Algebraic Load Flow Equations. 2015. ArXiv e-prints
1512.04987.
[11] H.-D. Chiang, C.-W. Liu, P. P. Varaiya, F. F. Wu, and M. G. Lauby.
Chaos in a simple power system. IEEE Transactions on Power Systems,
8(4):1407–1417, Nov 1993.
[12] W. Fulton. Intersection Theory. Springer New York, Jan. 1998.
[13] B. Ghaddar, J. Marecek, and M. Mevissen. Optimal power flow as a
polynomial optimization problem. IEEE Transactions on Power Systems,
31(1):539–546, Jan 2016.
[14] S. X. Guo and F. M. A. Salam. The real homotopy-based method for
computing solutions of electric power systems. In Circuits and Systems,
1992. ISCAS ’92. Proceedings., 1992 IEEE International Symposium on,
volume 6, pages 2737–2740 vol.6, May 1992.
[15] I. A. Hiskens. Analysis tools for power systems-contending with nonlinearities. Proceedings of the IEEE, 83(11):1573–1587, Nov 1995.
[16] I. A. Hiskens and R. J. Davy. Exploring the power flow solution space
boundary. IEEE Transactions on Power Systems, 16(3):389–395, Aug 2001.
[17] T. Kloks. Only few graphs have bounded treewidth. In T. Kloks, editor,
Treewidth, volume 842 of Lecture Notes in Computer Science, pages 51–60.
Springer Berlin Heidelberg, 1994.
[18] A. Klos and J. Wojcicka. Physical aspects of the nonuniqueness of load flow
solutions. International Journal of Electrical Power and Energy Systems,
13(5):268–276, 1991.
[19] A. Kushnirenko. Newton polytopes and the Bézout theorem. Funct. Anal.
Appl., 10:233–235, 1976.
[20] J. B. Lasserre. Convergent sdp-relaxations in polynomial optimization with
sparsity. SIAM Journal on Optimization, 17(3):822–843, 2006.
[21] J. Lavaei and S. Low. Zero duality gap in optimal power flow problem.
Power Systems, IEEE Transactions on, 27(1):92 –107, feb. 2012.
[22] B. Lesieutre, D. Molzahn, A. Borden, and C. DeMarco. Examining the
limits of the application of semidefinite programming to power flow problems. In Communication, Control, and Computing (Allerton), 2011 49th
Annual Allerton Conference on, pages 1492–1499, 2011.
[23] C.-W. Liu, C.-S. Chang, J. A. Jiang, and G. H. Yeh. Toward a cpflow-based
algorithm to compute all the type-1 load-flow solutions in electric power
systems. IEEE Transactions on Circuits and Systems I: Regular Papers,
52(3):625–630, March 2005.
[24] C.-W. Liu and J. S. Thorp. A novel method to compute the closest unstable
equilibrium point for transient stability region estimate in power systems.
IEEE Transactions on Circuits and Systems I: Fundamental Theory and
Applications, 44(7):630–635, Jul 1997.
[25] M. Lotz. On the volume of tubular neighborhoods of real algebraic varieties.
Proc. Amer. Math. Soc., 143:1875–1889, 2015.
[26] W. Ma and J. S. Thorp. An efficient algorithm to locate all the load
flow solutions. IEEE Transactions on Power Systems, 8(3):1077–1083, Aug
1993.
[27] G. Malajovich and K. Meer. Computing minimal multi-homogeneous
Bézout numbers is hard. Theory of Computing Systems, 40(4):553–570,
2007.
[28] D. Mehta, D. K. Molzahn, and K. Turitsyn. Recent advances in computational methods for the power flow equations. In 2016 American Control
Conference (ACC), pages 1753–1765, July 2016.
[29] D. Mehta, H. D. Nguyen, and K. Turitsyn. Numerical polynomial homotopy
continuation method to locate all the power flow solutions. IET Generation,
Transmission Distribution, 10(12):2972–2980, 2016.
[30] D. K. Molzahn, B. C. Lesieutre, and H. Chen. Counterexample to a
continuation-based algorithm for finding all power flow solutions. IEEE
Transactions on Power Systems, 28(1):564–565, Feb 2013.
[31] D. K. Molzahn, D. Mehta, and M. Niemerg. Toward topologically based
upper bounds on the number of power flow solutions. In 2016 American
Control Conference (ACC), pages 5927–5932, July 2016.
[32] A. Morgan and A. Sommese. A homotopy for solving general polynomial
systems that respects m-homogeneous structures. Appl. Math. Comput.,
24(2):101–113, 1987.
[33] F. Rouillier, M.-F. Roy, and M. S. E. Din. Finding at least one point in each
connected component of a real algebraic set defined by a single equation.
Journal of Complexity, 16(4):716–750, 2000.
[34] F. M. A. Salam, L. Ni, S. Guo, and X. Sun. Parallel processing for the
load flow of power systems: the approach and applications. In Decision
and Control, 1989., Proceedings of the 28th IEEE Conference on, pages
2173–2178 vol.3, Dec 1989.
[35] I. R. Shafarevich. Basic algebraic geometry. Springer-Verlag, springer study
edition edition, 1977. Translated from the Russian by K. A. Hirsch; Revised printing of Grundlehren der mathematischen Wissenschaften, Vol.
213, 1974.
[36] A. J. Sommese and C. W. Wampler. The numerical solution of systems of
polynomials arising in Engineering and Science. World Scientific, 2005.
[37] C. J. Tavora and O. J. M. Smith. Equilibrium analysis of power systems.
IEEE Transactions on Power Apparatus and Systems, PAS-91(3):1131–
1137, May 1972.
[38] J. S. Thorp and S. A. Naqavi. Load flow fractals. In Decision and Control,
1989., Proceedings of the 28th IEEE Conference on, pages 1822–1827 vol.2,
Dec 1989.
[39] A. Trias. The holomorphic embedding load flow method. In Power and
Energy Society General Meeting, 2012 IEEE, pages 1–8, July 2012.
[40] A. Trias. System and method for monitoring and managing electrical power transmission and distribution networks. US Patents 7,519,506 and 7,979,239, 2009 and 2010.
[41] A. Trias and J. L. Marı́n. The holomorphic embedding loadflow method for
dc power systems and nonlinear dc circuits. IEEE Transactions on Circuits
and Systems I: Regular Papers, 63(2):322–333, Feb 2016.
[42] C. W. Wampler. Bezout number calculations for multi-homogeneous polynomial systems. Applied Mathematics and Computation, 51(2):143 – 157,
1992.
[43] C. W. Wampler. An efficient start system for multi-homogeneous polynomial continuation. Numerische Mathematik, 66(4):517–524, 1993/94.
[44] T. Wang and H. D. Chiang. On the number of system separations in
electric power systems. IEEE Transactions on Circuits and Systems I:
Regular Papers, 63(5):661–670, May 2016.
[45] R. Zimmerman, C. Murillo-Sánchez, and R. Thomas. Matpower: Steadystate operations, planning, and analysis tools for power systems research
and education. Power Systems, IEEE Transactions on, 26(1):12–19, 2011.
FEAST: An Automated Feature Selection
Framework for Compilation Tasks
Pai-Shun Ting
arXiv:1610.09543v1 [] 29 Oct 2016
Department of Electrical Engineering and Computer Science
University of Michigan
Ann Arbor, Michigan 48109, USA
Email: [email protected]
Chun-Chen Tu
Department of Statistics
University of Michigan
Ann Arbor, Michigan 48109, USA
Email: [email protected]
Pin-Yu Chen
IBM T. J. Watson Research Center
Yorktown Heights, New York 10598, USA
Email: [email protected]
Ya-Yun Lo
Adobe Systems
San Francisco, California 94103, USA
Email: [email protected]
Shin-Ming Cheng
Department of Computer Science and Information Engineering
National Taiwan University of Science and Technology
Taipei 106, Taiwan
Email: [email protected]
Abstract—Modern machine-learning techniques greatly reduce
the efforts required to conduct high-quality program compilation,
which, without the aid of machine learning, would otherwise
heavily rely on human manipulation as well as expert intervention. The success of the application of machine-learning
techniques to compilation tasks can be largely attributed to the
recent development and advancement of program characterization, a process that numerically or structurally quantifies a
target program. While great achievements have been made in
identifying key features to characterize programs, choosing a
correct set of features for a specific compiler task remains an ad
hoc procedure. In order to guarantee a comprehensive coverage
of features, compiler engineers usually need to select an excessive
number of features. This, unfortunately, would potentially lead
to a selection of multiple similar features, which in turn could
create a new problem of bias that emphasizes certain aspects
of a program’s characteristics, hence reducing the accuracy and
performance of the target compiler task. In this paper, we propose
FEAture Selection for compilation Tasks (FEAST), an efficient
and automated framework for determining the most relevant
and representative features from a feature pool. Specifically,
FEAST utilizes widely used statistics and machine-learning tools,
including LASSO, sequential forward and backward selection,
for automatic feature selection, and can in general be applied
to any numerical feature set. This paper further proposes an
automated approach to compiler parameter assignment for assessing the performance of FEAST. Intensive experimental results
demonstrate that, under the compiler parameter assignment
task, FEAST can achieve comparable results with about 18% of
features that are automatically selected from the entire feature
pool. We also inspect these selected features and discuss their
roles in program execution.
Index Terms—Compiler optimization, feature selection,
LASSO, machine learning, program characterization
I. INTRODUCTION
Program characterization, a process to numerically or structurally quantify a target program, allows modern machine-learning techniques to be applied to compiler tasks, since most
machine-learning methods assume numerical inputs for both
training and testing data. Program characterization is usually
achieved by extracting from the target program a set of static
features and/or a set of dynamic features. Static features can
be obtained directly from the source code or intermediate
TABLE I: List of all original 56 static features from cTuning Compiler Collection [1].

ft1: # basic blocks in the method
ft2: # basic blocks with a single successor
ft3: # basic blocks with two successors
ft4: # basic blocks with more than two successors
ft5: # basic blocks with a single predecessor
ft6: # basic blocks with two predecessors
ft7: # basic blocks with more than two predecessors
ft8: # basic blocks with a single predecessor and a single successor
ft9: # basic blocks with a single predecessor and two successors
ft10: # basic blocks with two predecessors and one successor
ft11: # basic blocks with two successors and two predecessors
ft12: # basic blocks with more than two successors and more than two predecessors
ft13: # basic blocks with # instructions less than 15
ft14: # basic blocks with # instructions in the interval [15, 500]
ft15: # basic blocks with # instructions greater than 500
ft16: # edges in the control flow graph
ft17: # critical edges in the control flow graph
ft18: # abnormal edges in the control flow graph
ft19: # direct calls in the method
ft20: # conditional branches in the method
ft21: # assignment instructions in the method
ft22: # binary integer operations in the method
ft23: # binary floating point operations in the method
ft24: # instructions in the method
ft25: average of # instructions in basic blocks
ft26: average of # phi-nodes at the beginning of a basic block
ft27: average of arguments for a phi-node
ft28: # basic blocks with no phi nodes
ft29: # basic blocks with phi nodes in the interval [0, 3]
ft30: # basic blocks with more than 3 phi nodes
ft31: # basic blocks where total # arguments for all phi-nodes is greater than 5
ft32: # basic blocks where total # arguments for all phi-nodes is in the interval [1, 5]
ft33: # switch instructions in the method
ft34: # unary operations in the method
ft35: # instructions that do pointer arithmetic in the method
ft36: # indirect references via pointers ("*" in C)
ft37: # times the address of a variable is taken ("&" in C)
ft38: # times the address of a function is taken ("&" in C)
ft39: # indirect calls (i.e. done via pointers) in the method
ft40: # assignment instructions with the left operand an integer constant in the method
ft41: # binary operations with one of the operands an integer constant in the method
ft42: # calls with pointers as arguments
ft43: # calls with the # arguments greater than 4
ft44: # calls that return a pointer
ft45: # calls that return an integer
ft46: # occurrences of integer constant zero
ft47: # occurrences of 32-bit integer constants
ft48: # occurrences of integer constant one
ft49: # occurrences of 64-bit integer constants
ft50: # references of local variables in the method
ft51: # references (def/use) of static/extern variables in the method
ft52: # local variables referred in the method
ft53: # static/extern variables referred in the method
ft54: # local variables that are pointers in the method
ft55: # static/extern variables that are pointers in the method
ft56: # unconditional branches in the method
representation of the target program, while the procurement
of dynamic features usually requires actually executing the
target program to capture certain run-time behavior.
With current intensive research on program characterization,
new features, both static and dynamic, are continuously being
proposed. An example of a static feature set is shown in Table
I, which lists all the 56 original static features extracted by
Milepost GCC from the cTuning Compiler Collection [1], with many of them being different yet non-independent features. For example, ft8 implies ft2 and ft5, meaning that these
features are correlated. In a compiler task, determining which
features to use or to include for program characterization is
of considerable importance, since different features can have
different effects, and hence different resulting performance
on a specific compiler task. Intuitively, including as many
features as possible for program characterization seems to be a
reasonable approach when considering feature selection. This,
however, essentially increases the dimensionality of the feature
space, and thus potentially introduces extra computational
overhead. Also, some features may not be relevant to the target
compiler task, and therefore behave equivalently as noise in
program characterization, harming the resulting performance.
Furthermore, many features, though different, capture very
similar characteristics of a program. The similarities among
features can produce bias that overemphasizes certain aspects
of a program’s characteristics, and consequently lead to an
inaccurate result for a target compiler task. Due to the aforementioned reasons, many machine-learning-aided compiler
tasks still heavily rely on expert knowledge for determining an
appropriate set of features to use. This, unfortunately, hinders
the full automation of compiler tasks using machine learning,
which is originally deemed as a tool to lower the involvement
of field expertise and engineer intervention.
This paper proposes FEAture Selection for compilation Tasks (FEAST), an automated framework of feature selection for a
target compiler task. FEAST incorporates into its framework a
variety of feature selection methods, including the well-known
LASSO [2], sequential forward and backward selection. Given
a compiler task and a list of feature candidates, FEAST first
samples a small set of available programs as training data, and
then uses the integrated feature selection methods to choose the M most appropriate or relevant features for the specific compiler task. The remaining programs can then be handled using only
the chosen features.
To demonstrate the feasibility and practicality of FEAST, we
assess its performance on a proposed method for assignment of
compiler parameters. Modern compilers usually embed a rich
set of tuning parameters to provide hints and guidance for
their optimization processes. To obtain an optimal compiled
executable program, exhaustive trials over all combinations
of tuning parameters of the utilized compiler are required.
This is, in general, excessively time consuming and hence
infeasible due to its combinatorial nature. As many other
compiler tasks, in practice, expert intervention is frequently
triggered and heuristics based on expertise and experience
are adopted as a general guideline for tuning parameters [3].
Unfortunately, fine-tuning a compiler's parameters may require multiple compilation processes, and can take up to weeks or months to complete for a moderate program size, entailing a huge software engineering burden. In this work, as a test case for FEAST, we develop an automated method for assigning "good" parameters to a pool of programs using machine-learning techniques. We then show that using FEAST, the dimension of the feature space can be greatly reduced to a small set of relevant features, while maintaining comparable
resulting performance.
!"##$ %& '
("%%
!"##$ %& '
)
*
Fig. 1: Flow diagrams of the proposed methods.
This paper is organized as follows. Sec. II details the
proposed FEAST framework as well as the automatic compiler
parameter assignment process. Sec. III presents experimental
results with detailed analysis. Related work is discussed in
Sec. IV. Finally, Sec. V draws conclusions and outlines future work.
II. FEAST AND COMPILER PARAMETER ASSIGNMENT
This section details the mechanisms of FEAST, and presents
the proposed compiler parameter assignment method.
A. FEAST
Given K training programs and a set of p numerical
features, FEAST assumes a linear model for resulting performance and feature values:
    y = Xβ + β0 · 1    (1)
where y ∈ RK×1 is the compiled programs’ performance
vector, whose i-th entry denotes the performance (measured in
some pre-defined metrics) of the i-th program in the set of total
K training programs. X ∈ RK×p is a matrix whose (i, j)-th
entry denotes the value of the j-th extracted feature of the i-th
program. β ∈ Rp×1 and β0 ∈ R are the coefficients describing
the linear relationship between the features and the resulting
performance. 1 is a K × 1 vector with its elements being
all 1s. FEAST utilizes three widely used feature selection
methods, namely LASSO, sequential forward selection (SFS)
and sequential backward selection (SBS), all of which pick M
most influential features out of the total p features. Specifically,
the elastic net approach for LASSO adopted in FEAST selects
features by first solving optimization problem [2]:
    min_β̃  (1/K) ‖y − X̃ β̃‖_2^2 + λ ‖β̃‖_1    (2)

where X̃ = [X 1]. The first p elements of the solution β̃ are
the coefficient estimates whose magnitudes directly reflect the
influence of the corresponding features. M selected features
are chosen as those with coefficients of largest magnitudes.
SFS and SBS are other well-known feature selection methods.
For SFS, we sequentially or greedily select the most relevant
feature from the training programs until we have selected M
features. For SBS, we sequentially exclude the most irrelevant
feature until there are M features left. We omit algorithmic
descriptions of SFS and SBS, and refer interested readers to
[4] for implementation details.
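As an illustration of this selection step, the sketch below (ours, using scikit-learn's Lasso on synthetic data; note that scikit-learn handles the intercept β0 separately rather than stacking a column of ones, and ElasticNet could be substituted for the elastic-net variant) ranks the features by the magnitude of the fitted coefficients and keeps the top M.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
K, p, M = 20, 56, 10                      # training programs, static features, features to keep
X = rng.normal(size=(K, p))               # placeholder feature matrix (values of the 56 features)
y = rng.normal(size=K)                    # placeholder performance vector (execution times)

model = Lasso(alpha=0.1).fit(X, y)        # the fitted intercept plays the role of beta_0
ranking = np.argsort(-np.abs(model.coef_))
selected = sorted(ranking[:M].tolist())   # indices of the M most influential features
print(selected)
```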
B. Compiler Parameter Assignment Algorithms
This section discusses the proposed compiler parameter assignment algorithm and the application of FEAST to this task.
Given a compiler with many parameters to set, finding an
optimal assignment of parameters that can best compile a
target program is a vexing problem. This paper proposes a
machine-learning algorithm that automatically assigns “good”
parameters to target programs based on the known optimal
assignment to the training programs. The proposed method
works in two schemes: active training scheme and passive
training scheme (see Fig. 1). In active training scheme, the
users can actively acquire the optimal compiler parameters
for a subset of programs, while in passive training scheme,
the users are given as prior knowledge a set of programs
whose optimal compiler parameters are known. A practical
example of active training scheme is that a company has no
prior knowledge about compiler parameter assignment, and
it has a very limited budget which only allows it to select
a small set of programs for full tuning. The remaining large
set of programs, however, has to be quickly and efficiently
compiled. For passive training scheme, a company has a small
set of programs with well-tuned compiler parameters, and
it would like to quickly find good compiler parameters for
other programs. Note that, in the active training scheme, our
proposed method can also automatically choose a good set of
candidate programs for full tuning. The remainder of this section details the two preceding schemes and the application of FEAST to each of them.
Active training scheme. Since the acquisition of dynamic
features can be very expensive due to the potential need
for multiple compilations and iterative tuning, we opt to use
Algorithm 1 Active Training Scheme
Input: n programs with p static features, number K of
training programs for optimization
Output: compiler parameter assignment for each untrained
program
Procedures:
1) Partition n programs into K clusters by K-means
clustering
2) For each cluster select one program having the least
sum of distances to other programs in the same cluster
as a training program.
3) Find the set of optimal compiler parameters of the
selected K programs.
4) Use FEAST to perform feature selection on the selected K programs by regressing their optimal execution time with respect to their static features.
5) Repartition the untrained programs based on the similarities computed by the selected features.
6) For each untrained program, assign the compiler parameters of the closest trained program to it based on
the selected features.
numerical static features in the proposed compiler parameter
assignment task. Also, we use the execution time of a compiled
program as the measure of performance for that program.
Given n programs with p static features, we are granted to
choose K ≤ n programs as training samples and acquire
the optimal compiler parameters for each training program
that optimize the execution time. In order to choose the K
programs, we first compute the similarity of each program
pair based on the Euclidean distance between the corresponding static feature vectors, and partition the programs into
K clusters using K-means clustering. For each cluster, we
select one program that has the least average distance to
other programs in the same cluster as a training program.
Exhaustive trials are then conducted for the training programs
to obtain their optimal compiler parameters and the associated
execution time. Given the K training programs as well as
their execution time, FEAST can then select M features that
are most influential to the training programs’ performance
(execution time in this case). We then leverage the selected
features to recompute similarities and repartition the programs.
Lastly, each untrained program is assigned by the optimal
parameters of the most similar training program. See Alg. 1
for detailed algorithm.
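A minimal sketch of steps 1, 2 and 6 of Algorithm 1 is given below (ours, using scikit-learn/SciPy on synthetic features; for brevity, distances in the final assignment are computed on all features rather than on the FEAST-selected subset, and the exhaustive tuning of the K training programs is elided).

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n, p, K = 30, 56, 5
X = rng.normal(size=(n, p))                        # placeholder static feature vectors

labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(X)
trained = []
for c in range(K):
    members = np.where(labels == c)[0]
    total_dist = cdist(X[members], X[members]).sum(axis=1)   # least sum of distances in cluster
    trained.append(int(members[np.argmin(total_dist)]))

# ... exhaustive compiler-parameter search and FEAST feature selection would run here ...

assignment = {i: trained[int(np.argmin(cdist(X[[i]], X[trained])))]   # closest trained program
              for i in range(n) if i not in trained}
print(assignment)
```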
Passive training scheme. Different from the active training
scheme, in addition to n programs with p static features, we
are also given K pre-selected training programs. Therefore,
the methodology of the passive training scheme is similar to
that of the active training scheme, except that the clustering
procedures described in active training scheme are no longer
required. See Alg. 2 for detailed algorithm.
Algorithm 2 Passive Training Scheme
Input: n programs with p static features, K given training
programs
Output: compiler parameter assignment for each untrained
program
Procedures:
1) Find the set of optimal compiler parameters of the K
given training programs.
2) Use FEAST to perform feature selection on the selected K programs by regressing their optimal execution time with respect to their static features.
3) Compute distances between programs based on the
selected features.
4) For each untrained program, assign the compiler parameters of the closest trained program based on the
selected features.
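Analogously, steps 3 and 4 of Algorithm 2 can be sketched as follows (ours; the training-program indices and the FEAST-selected feature indices below are hypothetical placeholders).

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
n, p = 30, 56
X = rng.normal(size=(n, p))                            # placeholder static features
trained = [0, 7, 14, 21, 28]                           # pre-selected training programs
selected = [1, 4, 15, 19, 20, 21, 22, 25, 29, 32]      # hypothetical FEAST output (M = 10)

D = cdist(X[:, selected], X[trained][:, selected])     # distances on selected features only
assignment = {i: trained[int(np.argmin(D[i]))]         # copy parameters of the closest trained program
              for i in range(n) if i not in trained}
```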
III. PERFORMANCE EVALUATION OF COMPILER PARAMETER ASSIGNMENT
We test our implementations using the PolyBench benchmark suite [5] that consists of n = 30 programs. The programs
are characterized using p = 56 static features from cTuning Compiler Collection [1]. For the two proposed training
schemes, K trained programs are used for feature selection and
compiler parameter assignment, and for each feature selection
method (LASSO, SFS, SBS), we select the M = 10 most
relevant features.
A. Performance Comparison of the Active and Passive Training Schemes
Fig. 2 shows the program execution time of active and
passive training schemes. The minimal execution time refers
to the optimal performance over 192 possible combinations
of compiler parameters of every program. We also show the
results for the case where no tunable compiler parameters are
enabled. The results of minimal-time and no-tuning-parameter
are regarded as baselines for comparing the proposed FEAST
methods, as their execution time does not depend on the
number K of training programs. Furthermore, to validate the
utility of FEAST, we also calculate the execution time using
all features, i.e., the case where FEAST is disabled.
For the three feature selection methods introduced in Sec.
II-A (LASSO, SFS, and SBS), we select the top M = 10
important features and use these selected features to compute
program similarities and assign compiler parameters. In this
setting, we only compare the execution time of the untrained
programs, and we omit the computation time required to obtain
the optimal compiler parameters associated with the training
programs. We will consider the overall execution time of the
trained and untrained programs shortly.
[Figure 2: two panels, (a) Active training scheme and (b) Passive training scheme; x-axis: K, number of trained programs; y-axis: execution time (s); curves for no feature selection, Lasso, forward selection, backward selection, minimal time, and no tuning parameter.]

Fig. 2: Program execution time of the active and passive training schemes. Minimal time refers to exhaustive parameter optimization on every program. It can be observed that the execution time of the three feature selection algorithms integrated in FEAST is comparable to that of using all features, which strongly suggests that important features affecting program execution are indeed identified by FEAST.

[Figure 3: two panels, Lasso and SFS; x-axis: K, number of trained programs; y-axis: execution time (sec); curves for passive and active training with error bars.]

Fig. 3: Comparison of active and passive training schemes. Error bars represent standard deviation. The average execution time of active training is smaller than that of passive training. The variations in the active training scheme are caused by random initialization in the K-means clustering procedure, whereas the variations in the passive training scheme are caused by randomness in the selection of training programs.

In Fig. 2, each untrained program is executed once and we sum up the execution time to get the overall execution time under various values of K, the number of trained programs. It is observed that the execution time of both training schemes converges to
the minimal execution time as K increases. The curve of the
passive training scheme is smoother than that of the active
training scheme due to the fact that the former is an averaged
result over 1000 trials of randomly selected training programs.
The trends in Fig. 2 can be explained as follows: when
adopting passive training, every program has an equal chance
to be selected as a training program. As K grows, the set of
available optimal compiler parameters for training programs
increases as well, resulting in the decrease in average total
execution time. Active training, on the other hand, adopts
K-means clustering when selecting the training programs.
Given a fixed K, only K programs can be selected for
training and compiler parameter optimization. Therefore, for
small K (e.g., K = 1 or 2), we might select the programs
whose optimal compiler parameters do not fully benefit other
untrained programs.
The execution time of the cases with FEAST enabled is shown to be comparable to that of using all features, which strongly suggests that FEAST can successfully select important features affecting program execution, leading to dimensionality reduction while still attaining satisfactory execution time reduction.

[Figure 4: contour plots of the time reduction TR for (a) the active training scheme and (b) the passive training scheme with LASSO feature selection; x-axis: program execution times (Nexec); y-axis: K, number of trained programs.]

Fig. 4: Time reduction for active and passive training schemes using LASSO feature selection. The parameter Nexec specifies the number of times a program is executed. The contour indicates a phase transition in time reduction. The figure suggests that with Nexec large enough, the proposed compiler parameter assignment method can provide time saving when considering overall execution time.
To gain more insights on the performance of active and
passive training schemes, we compare the execution time
of LASSO and SFS in Fig. 3. The comparison of SBS is
omitted since in practice SBS is computationally inefficient
due to its sequential feature removal nature, especially for
high-dimensional data. It is observed in Fig. 3 that the average
execution time of active training is smaller than that of
passive training, since for active training, we are able to select
representative programs as training samples for optimization.
These results indicate that the execution time can be further
reduced when we have the freedom to select K representative
programs from clustering as training samples for compiler
parameter assignment.
B. Overall Execution Time Comparison
In this section, we consider the overall execution time,
including time overhead imposed by the proposed algorithms
as well as the execution time of every untrained program.
To this end, we introduce a parameter Nexec, the number of
executions per program. The motivation behind introducing
Nexec is as follows: for programs such as matrix operations,
users may execute these core programs multiple times (i.e.,
Nexec times). Therefore, as will be demonstrated by the
analysis detailed in this section, the time overhead introduced
by the proposed algorithms will be compensated as Nexec
increases, since the time spent in optimizing the training
programs comprises a relatively small portion of the overall time
cost with large Nexec. Time Reduction (TR) can therefore
be computed using the following formula:
    TR = Nexec · T_null − Nexec · T_auto − T^K_exhaustive,

where Nexec denotes the number of executions for each program, T_null represents the total time to run every program with all tunable compiler parameters disabled, T_auto denotes the total time to run every program compiled by using the proposed compiler parameter assignment method, and T^K_exhaustive denotes the computation time for finding the optimal compiler parameters for the K training programs.
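For illustration, a tiny sketch of this bookkeeping (ours, with purely hypothetical timings; the break-even point is where TR changes sign) is:

```python
T_null = 38.0             # total time with all tunable parameters disabled (s), hypothetical
T_auto = 24.0             # total time with parameters assigned by the proposed method (s), hypothetical
T_exhaustive_K = 5000.0   # time to exhaustively optimise the K training programs (s), hypothetical

def time_reduction(n_exec):
    return n_exec * T_null - n_exec * T_auto - T_exhaustive_K

break_even = T_exhaustive_K / (T_null - T_auto)   # TR turns positive once Nexec exceeds this value
print(round(break_even, 1), time_reduction(1000))
```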
Fig. 4 shows the contour plot of the overall time reduction
metric for both training schemes with LASSO. Results using
SFS and SBS are similar, and are omitted in this paper. If TR is
positive, it indicates the overall execution time is smaller than
that with all tunable compiler parameters disabled. It can be
observed that for each K, TR becomes positive when Nexec
exceeds a certain threshold value, implying that if a user uses
the compiled programs repeatedly, the proposed method could
potentially provide great time saving. Also for each K, the
preceding threshold of active training is lower than that of
passive training, since programs to exhaustively optimize are
actively chosen by the proposed method.
C. Features Selected by FEAST
To investigate the key features affecting execution time, we inspect the features selected by FEAST. Table II lists the top 10 selected features for each feature selection method integrated in FEAST. The features are selected with a 3-fold cross-validation method for regressing the optimal execution time of all 30 programs with respect to their static program features. Based on the selected features, we can categorize them into three groups: 1) control flow graph (CFG), 2) assignments/operations, and 3) phi nodes. CFG features describe a program's control flow, which can be largely influenced by instruction branches, such as "if-else", "if-if" and "for loop" statements. The selected CFG features are reasonable, as in our program testing dataset for-loops contribute the major part of the programs' control flow. In addition, assignment operations are essential to matrix operations, and hence possess discriminative power for distinguishing programs. Lastly, a phi node is a special operation of the intermediate representation (IR). It is designed for static single assignment (SSA) form, which enables a compiler to perform further optimization, and hence it is an important factor in program execution.
IV. RELATED WORK
Recent rapid development of program characterization, a
process to quantify programs, allows the application of modern
machine-learning techniques to the field of code compilation
and optimization. These machine-learning techniques provide
powerful tools that are widely used in various aspects of
compilation procedures. For example, Buse and Weimer use
static features to identify ”hot paths” (executional paths that
are most frequently invoked) of a target program by applying
logistic regression [6] without ever profiling the program,
Kulkarni et al. build an evolving neural-network model that
uses static features to help guide the inlining heuristics in compilation process [7], and Wang and O’Boyle exploit the use
of artificial neural networks (ANNs) as well as support vector
machine (SVM) for automatic code parallelization on multicore systems [8], among others. Many existing applications of
machine-learning-enabled compiler tasks use features, either
static or dynamic, selected by the designers, and hence heavily
rely on field expertise. This work provides a comprehensive
solution to this problem by using modern statistical methods
to select appropriate features for a specific case.
There has also been a vast amount of research dedicated
to designing suitable features for target applications. For
example, it is shown in [9] that, for compiler tasks that
cannot afford the time cost for procurement of dynamic
features, carefully designed graph-based static features can
achieve accuracy and performance comparable to dynamic
features in some applications, regardless of the fact that it is prevailingly believed that dynamic features are preferred due to the insightful information they provide. Another example is the compiler parameter assignment task by Park et al. [12]. In
their work, an SVM-based supervised training algorithm is
used to train a set of support vectors that can help estimate
the performance or reaction of an unknown program to a set
of compiler parameters. They use newly-defined graph-based
static features for training, which achieves high performance
comparable to that using dynamic features, but without the
need to invoke multiple compilations and profiling. While in
general, it is possible to design dedicated features for specific
tasks, the applicability of these dedicated features to other
applications remains questionable. In the scenario where there are an excessive number of numerical features that may or may
not fit a target task, FEAST can help in selecting the most
meaningful and influential features.
As to the task of compiler parameter tuning, recent work has
been focused on automation of the parameter assignment process.
Compiler parameter tuning has long been a crucial problem
which attracts a vast amount of attention. Recent trends and
efforts on exploiting the power of modern machine-learning
techniques have achieved tremendous success in compiler
parameter tuning. Stephenson and Amarasinghe demonstrate
the potential of machine learning on automatic compiler
parameter tuning by applying ANNs and SVM to predict
a suitable loop-unrolling factor for a target program [10].
This work is relatively restricted, since it deals with a single compiler parameter. Agakov et al. propose a computer-aided compiler parameter tuning method by identifying similar programs using static features [3], where a certain level of
expert intervention is still required. Cavazos et al. characterize
programs with dynamic features, and use logistic regression, a
classic example of conventional machine-learning algorithm,
to predict good compiler parameter assignment [11]. While
providing state-of-the-art performance, [11] requires dynamic
features, which can be expensive to acquire. In [12], graph-based features are used along with SVM for performance
prediction given a compilation sequence. This work uses
dedicated features for the machine-learning task, and further
implicitly utilizes an excessive number of candidate assignments
of compiler parameters in order to find a good assignment,
resulting in a non-scalable algorithm. On the other hand,
the proposed compiler parameter assignment algorithm is a
comprehensive assignment algorithm that does not require
dynamic features or dedicated static features. Furthermore,
the training data that need full optimization can be kept fixed.
The good assignment for an unseen target program is directly
derived from that of trained programs, implying its scalability
with the number of potential assignments.
V. CONCLUSIONS AND FUTURE WORK
In this work, we propose FEAST, an automated framework
for feature selection in compiler tasks that incorporates
well-known feature selection methods including LASSO, sequential forward and backward selection. We demonstrate the
feasibility and applicability of FEAST by testing it on a proposed method for the task of compiler parameter assignment.
The experimental results show that the three feature selection
methods integrated in FEAST can select a representative small
subset of static features that, when used in the compiler
parameter assignment task, can achieve non-compromised performance. We also validate the effectiveness of the proposed
methods by experimentally demonstrating significant overall
execution time reduction of our method in a practical scenario
where each program is required to run multiple times. Lastly,
we discussed the roles of the features selected by FEAST,
which provides deep insights into compilation procedures. In
summary, our contributions are two-fold:
1) We integrate into FEAST various modern machine-learning and optimization techniques for feature selection for compilation tasks.
2) We demonstrate the applicability of FEAST by experimentally showing that it can achieve comparable
performance in compiler parameter assignment tasks
with a very small set of selected static features.
For future work, we are interested in exploring the inherent
structural dependencies of codes in each program as additional
features for compiler parameter assignment. We are also interested in integrating the proposed compiler parameter assignment algorithm with recently developed automated community
detection algorithms, such as AMOS [13], to automatically
cluster similar programs for the proposed passive and active
training schemes.
REFERENCES
[1] “cTuning Compiler Collection.” [Online]. Available: http://ctuning.org/wiki/index.php?title=CTools:CTuningCC
[2] H. Zou and T. Hastie, “Regularization and variable selection via the
elastic net,” Journal of the Royal Statistical Society: Series B (Statistical
Methodology), vol. 67, no. 2, pp. 301–320, 2005.
[3] F. Agakov, E. Bonilla, J. Cavazos, B. Franke, G. Fursin, M. F. O’Boyle,
J. Thomson, M. Toussaint, and C. K. Williams, “Using machine learning
to focus iterative optimization,” in Proceedings of the International
Symposium on Code Generation and Optimization. IEEE Computer
Society, 2006, pp. 295–305.
[4] M. Dash and H. Liu, “Feature selection for classification,” Intelligent
data analysis, vol. 1, no. 1, pp. 131–156, 1997.
[5] “PolyBench benchmark suite.” [Online]. Available: http://web.cse.ohiostate.edu/∼pouchet/software/polybench/
[6] R. P. Buse and W. Weimer, “The road not taken: Estimating path
execution frequency statically,” in Proceedings of the 31st International
Conference on Software Engineering. IEEE Computer Society, 2009,
pp. 144–154.
[7] S. Kulkarni, J. Cavazos, C. Wimmer, and D. Simon, “Automatic
construction of inlining heuristics using machine learning,” in Code
Generation and Optimization (CGO), 2013 IEEE/ACM International
Symposium on. IEEE, 2013, pp. 1–12.
[8] Z. Wang and M. F. O’Boyle, “Mapping parallelism to multi-cores: a
machine learning based approach,” in ACM Sigplan notices, vol. 44,
no. 4. ACM, 2009, pp. 75–84.
[9] J. Demme and S. Sethumadhavan, “Approximate graph clustering for
program characterization,” ACM Transactions on Architecture and Code
Optimization (TACO), vol. 8, no. 4, p. 21, 2012.
[10] M. Stephenson and S. Amarasinghe, “Predicting unroll factors using supervised classification,” in International symposium on code generation
and optimization. IEEE, 2005, pp. 123–134.
[11] J. Cavazos, G. Fursin, F. Agakov, E. Bonilla, M. F. O’Boyle, and
O. Temam, “Rapidly selecting good compiler optimizations using performance counters,” in International Symposium on Code Generation
and Optimization (CGO’07). IEEE, 2007, pp. 185–197.
[12] E. Park, J. Cavazos, and M. A. Alvarez, “Using graph-based program
characterization for predictive modeling,” in Proceedings of the Tenth
International Symposium on Code Generation and Optimization. ACM,
2012, pp. 196–206.
[13] P.-Y. Chen, T. Gensollen, and A. O. Hero III, “Amos: An automated
model order selection algorithm for spectral graph clustering,” arXiv
preprint arXiv:1609.06457, 2016.
TABLE II: Top 10 selected features from various methods integrated in FEAST. The number in brackets indicates the feature ranking for each method.

LASSO
Number of basic blocks with a single predecessor and a single successor (6)
Number of basic blocks with a single predecessor and two successors (7)
Number of basic blocks with more than two successors and more than two predecessors (8)
Number of basic blocks with number of instructions in the interval [15, 500] (5)
Number of assignment instructions in the method (9)

SFS
Number of binary integer operations in the method (1)
Number of direct calls in the method (9)
Number of binary floating point operations in the method (2)
Number of basic blocks with phi nodes in the interval [0, 3] (4)
Number of basic blocks where the total number of arguments for all phi-nodes is greater than 5 (10)
Number of assignment instructions in the method (10)
Average number of phi-nodes at the beginning of a basic block (10)
Number of basic blocks with more than 3 phi nodes (4)
Number of basic blocks where the total number of arguments for all phi-nodes is greater than 5 (7)
Number of switch instructions in the method (3)
Number of binary integer operations in the method (1)
Number of unary operations in the method (2)
Number of unary operations in the method (3)
Number of basic blocks in the method (3)

SBS
Number of basic blocks with two predecessors and one successor (8)
Number of basic blocks with two predecessors and one successor (7)
Number of conditional branches in the method (5)
Number of basic blocks with two successors and two predecessors (2)
Number of instructions in the method (9)
Number of basic blocks with more than two successors and more than two predecessors (6)
Number of basic blocks with number of instructions greater than 500 (4)
Number of basic blocks with more than 3 phi nodes (5)
Number of basic blocks where the total number of arguments for all phi-nodes is greater than 5 (8)
Number of assignment instructions with the left operand an integer constant in the method (6)
Number of binary operations with one of the operands an integer constant in the method (1)
arXiv:1611.06539v1 [] 20 Nov 2016
Efficient Stochastic Inference of Bitwise Deep Neural
Networks
Sebastian Vogel∗
Robert Bosch GmbH
Corporate Research Campus
71272 Renningen, Germany
[email protected]
Christoph Schorn∗
Robert Bosch GmbH
Corporate Research Campus
71272 Renningen, Germany
[email protected]
Gerd Ascheid†
Institute for Communication
Technologies and Embedded Systems
RWTH Aachen University, Germany
[email protected]
Andre Guntoro
Robert Bosch GmbH
Corporate Research Campus
71272 Renningen, Germany
[email protected]
Abstract
Recently published methods enable training of bitwise neural networks which
allow reduced representation of down to a single bit per weight. We present a
method that exploits ensemble decisions based on multiple stochastically sampled
network models to increase performance figures of bitwise neural networks in terms
of classification accuracy at inference. Our experiments with the CIFAR-10 and
GTSRB datasets show that the performance of such network ensembles surpasses
the performance of the high-precision base model. With this technique we achieve
5.81% best classification error on CIFAR-10 test set using bitwise networks. Concerning inference on embedded systems we evaluate these bitwise networks using a
hardware efficient stochastic rounding procedure. Our work contributes to efficient
embedded bitwise neural networks.
1 Introduction
Research results in recent years have shown tremendous advances in solving complex problems
using deep learning approaches. Especially classification tasks based on image data have been a
major target for deep neural networks (DNNs) [8, 14]. A challenge for leveraging the strengths of
deep learning methods in embedded systems is their massive computational cost. Even relatively
small DNNs often require millions of parameters and billions of operations for performing a single
classification. Model compression approaches can help to relax memory requirements as well as
to reduce the number of required operations of DNNs. While some approaches consider special
network topologies [8, 11], another stream of research focuses on precision reduction of the model
parameters. Recent publications of bitwise neural networks (BNNs) have shown that network weights
and activations can be reduced from a high-precision floating-point down to a binary representation,
while maintaining classification accuracy on benchmark datasets [5]. Stochastic projection of the
network weights during training is a key component that enables this strong quantization. Studies
∗ These authors contributed equally to this work.
† Professor Gerd Ascheid is Senior Member IEEE.
Submitted to 1st International Workshop on Efficient Methods for Deep Neural Networks at 30th Conference
on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. Copyright c 2016 Robert Bosch
GmbH. Rights reserved.
which employed this training method have so far only analyzed deterministic projections during
test-time [4, 5, 15].
With techniques presented in this paper, we contribute to stochastic inference of bitwise neural
networks on hardware. We show that stochastic rounding at test-time improves classification accuracy
of networks that were trained with stochastic weight projections (Section 3). Furthermore, we present
a method which efficiently realizes stochastic rounding of network weights in a dedicated hardware
accelerator (Section 4). We start off with a brief review of the literature on weight discretization
(Section 2).
2 Related Work
Some recent studies have shown that weights (and activations) of DNNs can be discretized to a
very low number of quantization levels while maintaining high classification performance [1, 4,
5, 10, 12, 15, 16]. They employ a method which has already been sketched out by [6]. For each
iteration of the back-propagation learning algorithm the high-precision weights of the network are
projected to discretized values. The discrete weights are used to compute gradient descent based
weight updates, which are then applied to the high-precision weights. This method can be used either
as a fine-tuning step for several epochs after regular training [1, 10, 12] or from the beginning of
the training [4, 5, 15, 16]. [4] has recently introduced clipping followed by stochastic rounding as a
method for projecting high-precision to binary (-1, +1) weights. Before, [7] used a similar method
but with a relatively large number of discretization levels and presented a neural network hardware
accelerator using multiply-accumulate-units for stochastic rounding. Instead, we present a method
avoiding multipliers.
3 Stochastic Inference
Our methods are based on neural networks which are trained with stochastic weight projections.
In this section, we show that by applying these projections at test-time, a stochastic ensemble of
BNNs can be created whose aggregated classification performance surpasses that of the underlying
high-precision floating-point model, while maintaining the benefits of bitwise and multiplierless
computations.
3.1 Stochastic Network Ensembles
We employ the method introduced in [4] during training and inference. Depending on the number of
discrete values we speak of binary or ternary network weights. Clipping limits the numerical range
of the weights to the interval [−1, 1] and the projection W 7→ W d is done by stochastic rounding:
sround(w) =
dwe, with probability p =
bwc,
with probability 1 −
bwc−w
bwc−dwe
dwe−w
p = bwc−dwe
.
(1)
Best test-time results in [4] were achieved with the high-precision neural network parameters W. However, discretized values are much better suited for dedicated hardware accelerators, which is why we investigate inference based on W^d. One approach is to perform inference at test-time with the same weight discretization projections as in the training procedure. The reasoning behind this is that the network has been optimized for these projections when minimizing the loss function. With Eqn. (1) as projection function, experiments show a high variance in classification accuracy when the projection is performed only once. Ensembles of classifiers can be used to lower the variance of the aggregated classification decision. Using multiple stochastic projections W ↦ W^d we sample different versions of our neural network and combine their outputs as visualized in Figure 1. The ensemble classification decision is then taken based on this accumulated network output.
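As a concrete illustration (a minimal NumPy sketch of our own, not the code used for the experiments), the projection of Eqn. (1) can be applied repeatedly at test-time to clipped high-precision weights, and the outputs of the resulting bitwise networks accumulated; `forward` stands for an arbitrary user-supplied forward pass and is an assumption of this example.

```python
import numpy as np

def stochastic_round(w, rng):
    """Stochastic rounding of Eqn. (1); for ternary weights the values lie in {-1, 0, 1}."""
    lo, hi = np.floor(w), np.ceil(w)
    denom = np.where(hi > lo, hi - lo, 1.0)        # avoid 0/0 where w is already an integer
    p_hi = (w - lo) / denom                        # probability of rounding up
    return np.where(rng.random(w.shape) < p_hi, hi, lo)

def ensemble_logits(forward, weights, x, n_members, seed=0):
    """Accumulate the outputs of n_members stochastically projected bitwise networks."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(n_members):
        w_d = [stochastic_round(np.clip(w, -1.0, 1.0), rng) for w in weights]
        acc = acc + forward(w_d, x)                # ensemble decision: argmax of acc
    return acc
```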
3.2 Experimental Results
For the first evaluation of our method, we train a ConvNet on the CIFAR-10 classification dataset [13],
which contains 60 000 images in 32×32 pixel RGB resolution and 10 different classes. We use
Figure 1: Based on a high-precision network, an ensemble of networks is created. The outputs of the
ensemble members are used in conjunction.
the setup described in [4] for training, but with sign3 activation function as in [5] and stochastic
ternary weights. The network structure is 128C3–128C3–MP2–256C3–256C3–MP2–512C3–512C3–
MP2–1024FC–1024FC–10SVM4 . After training the model for 500 epochs with hyperparameters
from [4] and without any preprocessing or augmentations on the dataset, we select high-precision
model parameters which have the lowest error on the validation set. These weights are used to
generate multiple instances of the network by rounding the weights stochastically to ternary values
(see Section 3.1). Classification error rates on the CIFAR-10 test set based on the ensemble decision
for different accumulation lengths, i. e. numbers of ensemble members, are plotted in Figure 2a.
Since classification results are not deterministic in this case, we run the whole experiment 20× and
provide mean and standard deviation. In our experiment, a stochastic BNN ensemble with at least
four members always performs better than the floating-point reference model, which achieves a
classification error of 10.74%.
Figure 2: We evaluated ensembles of networks which were generated by stochastically projecting the high-precision model parameters to ternary values. Both panels plot the classification error rate on the test set against the number of networks in the ensemble (1 to 16), with mean and standard deviation over 20 evaluations and the high-precision reference shown for comparison. (a) The network has sign activation; the high-precision reference is 10.74%, the ensemble error decreases from 13.15% (one member) to 9.79% (16 members), and our best result was 9.41% with an ensemble of 23 networks. (b) The network uses ReLU activation; the high-precision reference is 6.13%, the ensemble error decreases from 6.91% to 6.04%, and the best result of 5.81% was achieved for an ensemble of 29 networks.
Better classification results can be achieved when the same network is trained with ReLU activation
function, binary projections, global contrast normalization and ZCA whitening, as well as augmentations on the training data. We apply a commonly used simple data augmentation method [9],
consisting of a random translation of up to 4 pixels in the image plane and a random flip around the
vertical axis. Classification results for this setup using ternary projections at test-time are shown in
Figure 2b. The best result of 5.81% was reached with an ensemble of 29 networks. To the best of our
knowledge we are the first to report a classification error of less than 6% on the CIFAR-10 benchmark
using bitwise neural networks.
In addition, we test our method on the German Traffic Sign Recognition Benchmark dataset [17].
The resulting high-precision network with sign activation leads to 2.19% classification error. For
Footnote 3: sign(x) = 1 for x ≥ 0, −1 otherwise.
Footnote 4: Preceding numbers indicate the number of channels, C3 denotes a convolution layer with a 3×3 kernel, MP2 abbreviates spatial max-pooling with a receptive field of 2×2, FC stands for fully connected layers, and SVM for a square hinge loss output layer.
20 evaluations, a single projected bitwise network results in 2.73% mean error rate (0.092% std.)
whereas ensembles of 11 networks reach 1.79% mean error rate (0.042% std.). The best result of
1.63% was achieved with 16 ensemble members.
Interestingly, the mean performance of the discretized ensembles is better than that of the high-precision base model. We believe that, because the gradient descent optimization of the loss function is evaluated for discrete values, the best results are achieved with projected versions of the base model.
4 Efficient Stochastic Rounding in Hardware
In order to fully exploit the performance of bitwise neural networks in terms of accuracy, the BNN
needs to be evaluated more than once and therefore an efficient integration of a stochastic rounding
engine is necessary. Based on the publications [2] and [3], a simple multiplexer can be used to
perform sround(x) (see Eqn. (1)). Assuming the probability of the select signal sel of an N-to-1
multiplexer to route signal ini ∈ {0, 1} to the output is equally distributed, the probability of the
output signal out being 1 can be written as
P(out = 1) = Σ_{i=1}^{N} in_i · P(sel = i) = Σ_{i=1}^{N} in_i · (1/N).   (2)
Hence, the probability P (out = 1) is determined by the number of ones at the input in. However, if
the probability function P (sel = i) is chosen to be
P(sel = i) = 2^{i−1} / (2^N − 1),   (3)
the probability P(out = 1) is directly related to the input in. Additionally, considering in as a binary coded⁵ fractional number ∈ [0, 1), then P(out = 1) ≈ in with a maximum error of 1/2^N. In order to
use this technique in hardware, the corresponding signal for sel has to be generated by individual
select wires selj . Whereas [2] considers the N equations (3) as an overdetermined problem and
proposes a numerical solution, we present an analytic solution to the problem. There are log2 (N )
individual select bits selj with
j−1
22
1
22j−1 + 1
log2 (N )
log2 (M )
Y
Y
k−1
P (selj ) = P (sel), because
22
⇒
+ 1 = 2M − 1.
P (selj = 1) =
22j−1 + 1
, P (selj = 0) =
j=1
(4)
k=1
Bitstreams for selj with the corresponding frequencies can be generated using a linear feedback shift
register (LFSR) in combination with Daalen modulators [18].
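The following software simulation (our own sketch, not the hardware engine itself; the variable names are illustrative) mimics Eqns. (2)-(4): the select bits of an N-to-1 multiplexer are drawn with the probabilities in Eqn. (4), and the empirical frequency of ones at the output then approximates the binary-coded fractional input.

```python
import random

def mux_round(frac_bits, rng=random):
    """frac_bits[0] is the LSB (in_1), frac_bits[-1] the MSB (in_N); N is a power of two."""
    n_sel = (len(frac_bits) - 1).bit_length()                     # log2(N) select bits
    index = 0
    for j in range(1, n_sel + 1):
        p_one = 2 ** (2 ** (j - 1)) / (2 ** (2 ** (j - 1)) + 1)   # Eqn. (4)
        if rng.random() < p_one:
            index |= 1 << (j - 1)
    return frac_bits[index]                                       # routed input bit

# Empirical check: the fraction of ones approximates the encoded value (cf. Eqn. (3)).
N = 8
bits = [1, 0, 1, 0, 0, 1, 1, 0]                                   # in_1 ... in_8, LSB first
value = sum(b << i for i, b in enumerate(bits)) / 2 ** N
trials = 100_000
ones = sum(mux_round(bits) for _ in range(trials))
print(f"encoded value {value:.4f}, empirical P(out=1) {ones / trials:.4f}")
```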
In order to verify the concept of stochastic rounding engines for neural networks using the method
presented above, we evaluated the network for road sign recognition with weights stochastically
projected in hardware. The results presented in Section 3.2 have been reproduced using this approach.
To take a potential hardware parallelization into consideration, we also performed projections in
parallel over the dimension of output features. As the generation of random bitstreams using LFSRs
is expensive in terms of energy and hardware resources, we evaluated the classification performance
when using a single pseudo random bitstream (PRBS) generator to provide the same select signal
for all stochastic rounders (i.e. multiplexers) in the network. We found that relying on a single
PRBS generator retains mean classification accuracy. Moreover, the mean network performance is
preserved when only a single LFSR is used to generate a random base bitstream which is then subject
to different modulations [18] to generate PRBS with appropriate frequencies of 1’s (see Eqn. (4)).
Footnote 5: in_N corresponds to the most significant bit (MSB).

5 Conclusion and Outlook
We investigated bitwise neural networks with stochastically projected weights during inference. Results show that an ensemble-based decision over multiple versions of such a BNN enhances performance compared to inference based on the high-precision shadow weights. Furthermore, we presented a hardware-efficient stochastic rounding procedure, used here for the first time on bitwise DNNs. Our results show that this technique can be used for test-time inference, enabling efficient hardware implementation in embedded systems.
The methods proposed in [4] and [5] rely on stochastic projections during training. Future research
will investigate the integration of our generalized form of stochastic rounding into the training process.
References
[1] S. Anwar, K. Hwang, and W. Sung. Fixed point optimization of deep convolutional neural
networks for object recognition. In 2015 IEEE International Conference on Acoustics, Speech
and Signal Processing (ICASSP), pages 1131–1135, April 2015.
[2] S. L. Bade. Lookup table based neural network using fpga. Master’s thesis, Brigham Young
University, 1994.
[3] S. L. Bade and B. L. Hutchings. Fpga-based stochastic neural networks-implementation. In
Proceedings of the IEEE Workshop on FPGAs for Custom Computing Machines 1994, pages
189–198, Apr 1994.
[4] M. Courbariaux, Y. Bengio, and J.-P. David. BinaryConnect: Training Deep Neural Networks
with binary weights during propagations. ArXiv e-prints, November 2015. arXiv:1511.00363.
[5] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized Neural Networks:
Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. ArXiv
e-prints, February 2016. arXiv:1602.02830.
[6] E. Fiesler, A. Choudry, and H. J. Caulfield. Weight discretization paradigm for optical neural
networks. In Proceedings of SPIE, volume 1281, pages 164–173, 1990.
[7] S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan. Deep Learning with Limited
Numerical Precision. In Proceedings of The 32nd International Conference on Machine
Learning, pages 1737–1746, 2015.
[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. ArXiv
e-prints, December 2015. arXiv:1512.03385.
[9] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Weinberger. Deep Networks with Stochastic Depth.
ArXiv e-prints, March 2016. arXiv:1603.09382.
[10] K. Hwang and W. Sung. Fixed-point feed forward deep neural network design using weights
+1, 0 and -1. In 2014 IEEE Workshop on Signal Processing Systems (SiPS), 2014.
[11] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer. SqueezeNet:
AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. ArXiv e-prints,
February 2016. arXiv:1602.07360.
[12] J. Kim, K. Hwang, and W. Sung. X1000 real-time phoneme recognition vlsi using feed-forward
deep neural networks. In 2014 IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP), pages 7510–7514, May 2014.
[13] A. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Technical report,
Department of Computer Science, University of Toronto, 2009.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet Classification with Deep Convolutional
Neural Networks. In Advances in Neural Information Processing Systems, pages 1097–1105,
2012.
[15] P. Merolla, R. Appuswamy, J. Arthur, S. K. Esser, and D. Modha. Deep neural networks
are robust to weight binarization and other non-linear distortions. ArXiv e-prints, June 2016.
1606.01981.
[16] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks. ArXiv e-prints, March 2016. arXiv:1603.05279.
[17] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. The German Traffic Sign Recognition
Benchmark: A multi-class classification competition. In 2011 IEEE International Joint Conference on Neural Networks, pages 1453–1460, 2011.
[18] M. van Daalen, P. Jeavons, J. Shawe-Taylor, and D. Cohen. Device for generating binary
sequences for stochastic computing. In Electronics Letters, volume 29, page 80, Jan 1993.
Published as a conference paper at ICLR 2017
EMERGENCE OF FOVEAL IMAGE SAMPLING FROM LEARNING TO ATTEND IN VISUAL SCENES
arXiv:1611.09430v2 [] 21 Oct 2017
Brian Cheung, Eric Weiss, Bruno Olshausen
Redwood Center
UC Berkeley
{bcheung,eaweiss,baolshausen}@berkeley.edu
ABSTRACT
We describe a neural attention model with a learnable retinal sampling lattice. The
model is trained on a visual search task requiring the classification of an object embedded in a visual scene amidst background distractors using the smallest number
of fixations. We explore the tiling properties that emerge in the model’s retinal
sampling lattice after training. Specifically, we show that this lattice resembles
the eccentricity dependent sampling lattice of the primate retina, with a high resolution region in the fovea surrounded by a low resolution periphery. Furthermore,
we find conditions where these emergent properties are amplified or eliminated
providing clues to their function.
1 INTRODUCTION
A striking design feature of the primate retina is the manner in which images are spatially sampled
by retinal ganglion cells. Sample spacing and receptive fields are smallest in the fovea and then
increase linearly with eccentricity, as shown in Figure 1. Thus, we have highest spatial resolution at
the center of fixation and lowest resolution in the periphery, with a gradual fall-off in resolution as
one proceeds from the center to periphery. The question we attempt to address here is, why is the
retina designed in this manner - i.e., how is it beneficial to vision?
The commonly accepted explanation for this eccentricity dependent sampling is that it provides us
with both high resolution and broad coverage of the visual field with a limited amount of neural resources. The human retina contains 1.5 million ganglion cells, whose axons form the sole output of
the retina. These essentially constitute about 300,000 distinct samples of the image due to the multiplicity of cell types coding different aspects such as on vs. off channels (Van Essen & Anderson,
1995). If these were packed uniformly at highest resolution (120 samples/deg, the Nyquist-dictated
sampling rate corresponding to the spatial-frequencies admitted by the lens), they would subtend an
image area spanning just 5x5 deg2 . Thus we would have high-resolution but essentially tunnel vision. Alternatively if they were spread out uniformly over the entire monocular visual field spanning
roughly 150 deg2 we would have wide field of coverage but with very blurry vision, with each sample subtending 0.25 deg (which would make even the largest letters on a Snellen eye chart illegible).
Thus, the primate solution makes intuitive sense as a way to achieve the best of both of these worlds.
However we are still lacking a quantitative demonstration that such a sampling strategy emerges as
the optimal design for subserving some set of visual tasks.
Here, we explore what is the optimal retinal sampling lattice for an (overt) attentional system performing a simple visual search task requiring the classification of an object. We propose a learnable
retinal sampling lattice to explore what properties are best suited for this task. While evolutionary
pressure has tuned the retinal configurations found in the primate retina, we instead utilize gradient descent optimization for our in-silico model by constructing a fully differentiable dynamically
controlled model of attention.
Our choice of visual search task follows a paradigm widely used in the study of overt attention in
humans and other primates (Geisler & Cormack, 2011). In many forms of this task, a single target
is randomly located on a display among distractor objects. The goal of the subject is to find the
target as rapidly as possible. Itti & Koch (2000) propose a selection mechanism based on manually
Figure 1: Receptive field size (dendritic field diameter) as a function of eccentricity of Retinal
Ganglion Cells from a macaque monkey (taken from Perry et al. (1984)).
defined low level features of real images to locate various search targets. Here the neural network
must learn what features are most informative for directing attention.
While neural attention models have been applied successfully to a variety of engineering applications
(Bahdanau et al., 2014; Jaderberg et al., 2015; Xu et al., 2015; Graves et al., 2014), there has been
little work in relating the properties of these attention mechanisms back to biological vision. An
important property which distinguishes neural networks from most other neurobiological models is
their ability to learn internal (latent) features directly from data.
But existing neural network models specify the input sampling lattice a priori. Larochelle & Hinton
(2010) employ an eccentricity dependent sampling lattice mimicking the primate retina, and Mnih et al. (2014) utilize a multi-scale 'glimpse window' that forms a piece-wise approximation of this scheme. While it seems reasonable to think that these design choices contribute to the good performance of these systems, it remains to be seen if this arrangement emerges as the optimal solution.
We further extend the learning paradigm of neural networks to the structural features of the glimpse
mechanism of an attention model. To explore emergent properties of our learned retinal configurations, we train on artificial datasets where the factors of variation are easily controllable. Despite
this departure from biology and natural stimuli, we find our model learns to create an eccentricity
dependent layout where a distinct central region of high acuity emerges surrounded by a low acuity
periphery. We show that the properties of this layout are highly dependent on the variations present
in the task constraints. When we depart from physiology by augmenting our attention model with
the ability to spatially rescale or zoom on its input, we find our model learns a more uniform layout
which has properties more similar to the glimpse window proposed in Jaderberg et al. (2015); Gregor et al. (2015). These findings help us to understand the task conditions and constraints in which
an eccentricity dependent sampling lattice emerges.
2 RETINAL TILING IN NEURAL NETWORKS WITH ATTENTION
Attention in neural networks may be formulated in terms of a differentiable feedforward function.
This allows the parameters of these models to be trained jointly with backpropagation. Most formulations of visual attention over the input image assume some structure in the kernel filters. For
example, the recent attention models proposed by Jaderberg et al. (2015); Mnih et al. (2014); Gregor et al. (2015); Ba et al. (2014) assume each kernel filter lies on a rectangular grid. To create a
learnable retinal sampling lattice, we relax this assumption by allowing the kernels to tile the image
independently.
2.1 GENERATING A GLIMPSE
We interpret a glimpse as a form of routing where a subset of the visual scene U is sampled to form
a smaller output glimpse G. The routing is defined by a set of kernels k[•](s), where each kernel i
specifies which part of the input U [•] will contribute to a particular output G[i]. A control variable s
Figure 2: Diagram of single kernel filter parameterized by a mean µ́ and variance σ́.
is used to control the routing by adjusting the position and scale of the entire array of kernels. With
this in mind, many attention models can be reformulated into a generic equation written as
G[i] =
W
H X
X
n
U [n, m]k[m, n, i](s)
(1)
m
where m and n index input pixels of U and i indexes output glimpse features. The pixels in the
input image U are thus mapped to a smaller glimpse G.
2.2 RETINAL GLIMPSE
The centers of each kernel filter µ́[i] are calculated with respect to control variables sc and sz and
learnable offset µ[i]. The control variables specify the position and zoom of the entire glimpse.
µ[i] and σ[i] specify the position and spread respectively of an individual kernel k[−, −, i]. These
parameters are learned during training with backpropagation. We describe how the control variables
are computed in the next section. The kernels are thus specified as follows:
µ́[i] = (s_c − µ[i]) s_z   (2)
σ́[i] = σ[i] s_z   (3)
k[m, n, i](s) = N(m; µ́_x[i], σ́[i]) · N(n; µ́_y[i], σ́[i])   (4)
We assume kernel filters factorize between the horizontal m and vertical n dimensions of the input
image. This factorization is shown in equation 4, where the kernel is defined as an isotropic gaussian
N . For each kernel filter, given a center µ́[i] and scalar variance σ́[i], a two dimensional gaussian is
defined over the input image as shown in Figure 2. These gaussian kernel filters can be thought of
as a simplified approximation to the receptive fields of retinal ganglion cells in primates (Van Essen
& Anderson, 1995).
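To make Eqns. (1)-(4) concrete, the following small NumPy sketch (not the authors' implementation; the kernel normalization and the argument shapes are our own assumptions) computes a glimpse through factorized isotropic Gaussian kernels whose centers and widths are shifted and scaled by the control variables.

```python
import numpy as np

def glimpse(U, mu, sigma, s_c, s_z):
    """U: (H, W) image; mu: (K, 2) kernel offsets; sigma: (K,) widths; s_c: (2,); s_z: scalar."""
    H, W = U.shape
    mu_acc = (s_c[None, :] - mu) * s_z                 # Eqn. (2): shifted, zoomed centers
    sig_acc = sigma * s_z                              # Eqn. (3): zoomed widths
    n = np.arange(H)[:, None]                          # vertical pixel coordinates
    m = np.arange(W)[None, :]                          # horizontal pixel coordinates
    G = np.empty(len(mu))
    for i in range(len(mu)):
        ky = np.exp(-0.5 * ((n - mu_acc[i, 1]) / sig_acc[i]) ** 2)
        kx = np.exp(-0.5 * ((m - mu_acc[i, 0]) / sig_acc[i]) ** 2)
        k = ky * kx                                    # separable Gaussian kernel, Eqn. (4)
        G[i] = (U * (k / (k.sum() + 1e-8))).sum()      # Eqn. (1)
    return G
```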
While this factored formulation reduces the space of possible transformations from input to output,
it can still form many different mappings from an input U to output G. Figure 3B shows the possible
windows which an input image can be mapped to an output G. The yellow circles denote the central
location of a particular kernel while the size denotes the standard deviation. Each kernel maps to
one of the outputs G[i].
Positional control s_c can be considered analogous to the motor control signals which execute saccades of the eye, whereas s_z would correspond to controlling a zoom lens in the eye (which has no counterpart in biology). In contrast, training defines structural adjustments to individual kernels, which include their position in the lattice as well as their variance. These adjustments are only possible during training and are fixed afterwards. Training adjustments can be considered analogous to the incremental adjustments in the layout of the retinal sampling lattice which occur over many generations, directed by evolutionary pressure in biology.
Figure 3: A (Optimizing the Retinal Lattice): Starting from an initial lattice configuration of a uniform grid of kernels, we learn an optimized configuration from data (initial layout to final layout during training). B (Controlling the Retinal Lattice): Attentional fixations generated during inference in the model, shown unrolled in time (after training); at each timestep t the recurrent state h_t produces control variables s_{c,t}, s_{z,t} that determine the glimpse G_t.
Table 1: Variants of the neural attention model

Ability                      | Fixed Lattice | Translation Only | Translation and Zoom
Translate retina via s_{c,t} |       X       |        X         |          X
Learnable µ[i], σ[i]         |               |        X         |          X
Zoom retina via s_{z,t}      |               |                  |          X

3 RECURRENT NEURAL ARCHITECTURE FOR ATTENTION
A glimpse at a specific timepoint, Gt , is processed by a fully-connected recurrent network frnn ().
h_t = f_rnn(G_t, h_{t−1})   (5)
[s_{c,t}; s_{z,t}] = f_control(h_t)   (6)
The global center sc,t and zoom sz,t are predicted by the control network fcontrol () which is parameterized by a fully-connected neural network.
In this work, we investigate three variants of the proposed recurrent model:
• Fixed Lattice: The kernel parameters µ[i] and σ[i] for each retinal cell are not learnable.
The model can only translate the kernel filters sc,t = fcontrol (ht ) and the global zoom is
fixed sz,t = 1.
• Translation Only: Unlike the fixed lattice model, µ[i] and σ[i] are learnable (via backpropagation).
• Translation and Zoom: This model follows equation 6 where it can both zoom and translate the kernels.
A summary for comparing these variants is shown in Table 1.
Prior to training, the kernel filters are initialized as a 12x12 grid (144 kernel filters), tiling uniformly
over the central region of the input image and creating a retinal sampling lattice as shown in Figure
5 before training. Our recurrent network, f_rnn, is a two-layer traditional recurrent network with 512-512 units. Our control network, f_control, is a fully-connected network with 512-3 units (x, y, zoom)
Figure 4: Top Row: Examples from our variant of the cluttered MNIST dataset (a.k.a Dataset 1).
Bottom Row: Examples from our dataset with variable sized MNIST digits (a.k.a Dataset 2).
in each layer. Similarly, our prediction networks are fully-connected networks with 512-10 units for
predicting the class. We use ReLU non-linearities for all hidden unit layers.
Our model as shown in Figure 3C is differentiable and trained end-to-end via backpropagation
through time. Note that this allows us to train the control network indirectly from signals backpropagated from the task cost. For stochastic gradient descent optimization we use Adam (Kingma &
Ba, 2014) and construct our models in Theano (Bastien et al., 2012).
4 DATASETS AND TASKS
4.1 MODIFIED CLUTTERED MNIST DATASET
Example images from our dataset are shown in Figure 4. Handwritten digits from the original
MNIST dataset LeCun & Cortes (1998) are randomly placed over a 100x100 image with varying
amounts of distractors (clutter). Distractors are generated by extracting random segments of nontarget MNIST digits which are placed randomly with uniform probability over the image. In contrast
to the cluttered MNIST dataset proposed in Mnih et al. (2014), the number of distractors for each
image varies randomly from 0 to 20 pieces. This prevents the attention model from learning a
solution which depends on the number ‘on’ pixels in a given region. In addition, we create another
dataset (Dataset 2) with an additional factor of variation: the original MNIST digit is randomly
resized by a factor of 0.33x to 3.0x. Examples of this dataset are shown in the second row of Figure
4.
4.2 VISUAL SEARCH TASK
We define our visual search task as a recognition task in a cluttered scene. The recurrent attention
model we propose must output the class ĉ of the single MNIST digit appearing in the image via
the prediction network fpredict (). The task loss, L, is specified in equation 8. To minimize the
classification error, we use cross-entropy cost:
ĉ_{t,n} = f_predict(h_{t,n})   (7)
L = Σ_{n}^{N} Σ_{t}^{T} c_n log(ĉ_{t,n})   (8)
Analogous to the visual search experiments performed in physiological studies, we pressure our
attention model to accomplish the visual search as quickly as possible. By applying the task loss to
every timepoint, the model is forced to accurately recognize and localize the target MNIST digit in
as few iterations as possible. In our classification experiments, the model is given T = 4 glimpses.
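For concreteness, the sketch below (ours, written framework-agnostically in NumPy; the loss is expressed here as a negative log-likelihood to be minimized) evaluates the time-summed cross-entropy of Eqns. (7)-(8), applying the classification cost at every glimpse.

```python
import numpy as np

def task_loss(logits, labels):
    """logits: (N, T, C) prediction-network outputs over T glimpses; labels: (N,) classes."""
    N, T, C = logits.shape
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)                        # softmax over classes
    picked = probs[np.arange(N)[:, None], np.arange(T)[None, :], labels[:, None]]
    return -np.log(picked + 1e-12).sum()                             # summed over n and t
```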
Figure 5: The sampling lattice shown at four different stages during training for a Translation Only model, from the initial condition (left) to the final solution (right): before training (initial layout), after 1 epoch, after 10 epochs, and after 100 epochs. The radius of each dot corresponds to the standard deviation σ_i of the kernel.
Figure 6: Top: Learned sampling lattices for four different model configurations (Translation Only on Dataset 1 and Dataset 2; Translation and Zoom on Dataset 1 and Dataset 2). Middle: Resolution (sampling interval) and Bottom: kernel standard deviation, each as a function of eccentricity (distance from center) for every model configuration.
5 RESULTS
Figure 5 shows the layouts of the learned kernels for a Translation Only model at different stages
during training. The filters are smoothly transforming from a uniform grid of kernels to an eccentricity dependent lattice. Furthermore, the kernel filters spread their individual centers to create a
sampling lattice which covers the full image. This is sensible as the target MNIST digit can appear
anywhere in the image with uniform probability.
When we include variable sized digits as an additional factor in the dataset, the translation only
model shows an even greater diversity of variances for the kernel filters. This is shown visually in the
first row of Figure 6. Furthermore, the second row shows a highly dependent relationship between
the sampling interval and standard deviation of the retinal sampling lattice and eccentricity from the
center. This dependency increases when training on variable sized MNIST digits (Dataset 2). This
Figure 7: Temporal rollouts (t = 1 to t = 4) of the retinal sampling lattice attending over a test image from Cluttered MNIST (Dataset 2) after training, shown for the Fixed Lattice, Translation Only, and Translation and Zoom models.
relationship has also been observed in the primate visual system (Perry et al., 1984; Van Essen &
Anderson, 1995).
When the proposed attention model is able to zoom its retinal sampling lattice, a very different
layout emerges. There is much less diversity in the distribution of kernel filter variances as evidenced in Figure 6. Both the sampling interval and standard deviation of the retinal sampling lattice
have far less of a dependence on eccentricity. As shown in the last column of Figure 6, we also
trained this model on variable sized digits and noticed no significant differences in sampling lattice
configuration.
Figure 7 shows how each model variant makes use of its retinal sampling lattice after training. The
strategy each variant adopts to solve the visual search task helps explain the drastic difference in
lattice configuration. The translation only variant simply translates its high acuity region to recognize and localize the target digit. The translation and zoom model both rescales and translates its
sampling lattice to fit the target digit. Remarkably, Figure 7 shows that both models detect the digit
early on and make minor corrective adjustments in the following iterations.
Table 2 compares the classification performance of each model variant on the cluttered MNIST
dataset with fixed sized digits (Dataset 1). There is a significant drop in performance when the
retinal sampling lattice is fixed and not learnable, confirming that the model is benefitting from
learning the high-acuity region. The classification performance between the Translation Only and
Translation and Zoom model is competitive. This supports the hypothesis that the functionality of a
high acuity region with a low resolution periphery is similar to that of zoom.
Table 2: Classification Error on Cluttered MNIST

Sampling Lattice Model | Dataset 1 (%) | Dataset 2 (%)
Fixed Lattice          |     11.8      |     31.9
Translation Only       |      5.1      |     24.4
Translation and Zoom   |      4.0      |     24.1

6 CONCLUSION
When constrained to a glimpse window that can translate only, similar to the eye, the kernels converge to a sampling lattice similar to that found in the primate retina (Curcio & Allen, 1990; Van Essen & Anderson, 1995). This layout is composed of a high acuity region at the center surrounded
by a wider region of low acuity. Van Essen & Anderson (1995) postulate that the linear relationship
between eccentricity and sampling interval leads to a form of scale invariance in the primate retina.
Our results from the Translation Only model with variable sized digits supports this conclusion.
Additionally, we observe that zoom appears to supplant the need to learn a high acuity region for
the visual search task. This implies that the high acuity region serves a purpose resembling that of
a zoomable sampling lattice. The low acuity periphery is used to detect the search target and the
high acuity ‘fovea’ more finely recognizes and localizes the target. These results, while obtained on
an admittedly simplified domain of visual scenes, point to the possibility of using deep learning as a tool to explore the optimal sample tiling for a retina in a data-driven and task-dependent manner.
Exploring how or if these results change for more challenging tasks in naturalistic visual scenes is a
future goal of our research.
ACKNOWLEDGMENTS
We would like to acknowledge everyone at the Redwood Center for their helpful discussion and
comments. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the
Tesla K40 GPUs used for this research.
REFERENCES
Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual
attention. arXiv preprint arXiv:1412.7755, 2014.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly
learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian Goodfellow, Arnaud Bergeron, Nicolas Bouchard, David Warde-Farley, and Yoshua Bengio. Theano: new features and
speed improvements. arXiv preprint arXiv:1211.5590, 2012.
Christine A Curcio and Kimberly A Allen. Topography of ganglion cells in human retina. Journal
of Comparative Neurology, 300(1):5–25, 1990.
Wilson S Geisler and Lawrence Cormack. Models of overt attention. Oxford handbook of eye
movements, pp. 439–454, 2011.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. Draw: A recurrent neural network
for image generation. arXiv preprint arXiv:1502.04623, 2015.
Laurent Itti and Christof Koch. A saliency-based search mechanism for overt and covert shifts of
visual attention. Vision research, 40(10):1489–1506, 2000.
Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pp. 2008–2016, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
Hugo Larochelle and Geoffrey E Hinton. Learning to combine foveal glimpses with a third-order
boltzmann machine. In Advances in neural information processing systems, pp. 1243–1251, 2010.
Yann LeCun and Corinna Cortes. The mnist database of handwritten digits, 1998.
Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In
Advances in Neural Information Processing Systems, pp. 2204–2212, 2014.
VH Perry, R Oehler, and A Cowey. Retinal ganglion cells that project to the dorsal lateral geniculate
nucleus in the macaque monkey. Neuroscience, 12(4):1101–1123, 1984.
David C Van Essen and Charles H Anderson. Information processing strategies and pathways in the
primate visual system. An introduction to neural and electronic networks, 2:45–76, 1995.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and
Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention.
arXiv preprint arXiv:1502.03044, 2015.
Discriminant analysis in small and large dimensions
arXiv:1705.02826v1 [] 8 May 2017
Taras Bodnara,1 , Stepan Mazurb , Edward Ngailoa and Nestor Parolyac
a
b
Department of Mathematics, Stockholm University, Roslagsvägen 101, SE-10691 Stockholm, Sweden
Unit of Statistics, School of Business, Örebro University, Fakultetsgatan 1, SE-70182 Örebro, Sweden
c
Institute of Statistics, Leibniz University Hannover, Königsworther Platz 1, D-30167 Hannover,
Germany
Abstract
We study the distributional properties of the linear discriminant function under
the assumption of normality by comparing two groups with the same covariance
matrix but different mean vectors. A stochastic representation for the discriminant function coefficients is derived which is then used to obtain their asymptotic
distribution under the high-dimensional asymptotic regime. We investigate the performance of the classification analysis based on the discriminant function in both
small and large dimensions. A stochastic representation is established which allows
to compute the error rate in an efficient way. We further compare the calculated
error rate with the optimal one obtained under the assumption that the covariance
matrix and the two mean vectors are known. Finally, we present an analytical
expression of the error rate calculated in the high-dimensional asymptotic regime.
The finite-sample properties of the derived theoretical results are assessed via an
extensive Monte Carlo study.
AMS Classification: 62H10, 62E15, 62E20, 60F05, 60B20
Keywords: discriminant function, stochastic representation, large-dimensional asymptotics, random matrix theory, classification analysis
1
Corresponding Author: Taras Bodnar. E-Mail: [email protected]. Tel: +46 8 164562. Fax:
+46 8 612 6717. This research was partly supported by the Swedish International Development Cooperation Agency (SIDA) through the UR-Sweden Programme for Research, Higher Education and Institutional Advancement. Stepan Mazur acknowledges financial support from the project ”Ambit fields:
probabilistic properties and statistical inference” funded by Villum Fonden
1 Introduction
In the modern world of science and technology, high-dimensional data are present in
various fields such as finance, environment science and social sciences. In the sense of
many complex multivariate dependencies observed in data, formulating correct models and
developing inferential procedures are the major challenges. The traditional multivariate
analysis considers fixed or small sample dimensions, while sample sizes approaching to
infinity. However, its methods cannot longer be used in the high-dimensional setting
where the dimension is not treated as fixed but it is allowed to be comparable to the
sample size.
The covariance matrix is one of the mostly used way to capture the dependence between variables. Although its application is restricted only to linear dependence and
more sophisticated methods, like copula, should be applied in the general case, modeling
dynamics in the covariance matrix is still a very popular subject in both statistics and
econometrics. Recently, a number of papers have been published which deal with estimating the covariance matrix (see, e.g., Ledoit and Wolf (2003), Cai and Liu (2011a), Cai
et al. (2011), Agarwal et al. (2012), Fan et al. (2008), Fan et al. (2013), Bodnar et al.
(2014, 2016)) and testing its structure (see, e.g., Johnstone (2001), Bai et al. (2009), Chen
et al. (2010), Cai and Jiang (2011), Jiang and Yang (2013), Gupta and Bodnar (2014))
in large dimension.
In many applications, the covariance matrix is accompanied by the mean vector. For
example, the product of the inverse sample covariance matrix and the difference of the
sample mean vectors is present in the discriminant function where a linear combination
of variables (discriminant function coefficients) is determined such that the standardized
distance between the groups of observations is maximized. A second example arises in
portfolio theory, where the vector of optimal portfolio weights is proportional to the
products of inverse sample covariance matrix and the sample mean vector (see Bodnar
and Okhrin (2011)).
The discriminant analysis is a multivariate technique concerned with separating distinct sets of objects (or observations) (Johnson et al. (2007)). Its two main tasks are to
distinguish distinct sets of observations and to allocate new observations to previously
defined groups (Rencher and Christensen (2012)). The main methods of the discriminant
analysis are the linear discriminant function and the quadratic discriminant function.
The linear discriminant function is a generalization of Fisher linear discriminant analysis,
a method used in statistics, pattern recognition and machine learning to find a linear
combination of features that characterizes or separates two or more groups of objects
in the best way. The application of the linear discriminant function is restricted to the
assumption of the equal covariance matrix in the groups to be separated. Although the
quadratic discriminant function can be used when the latter assumption is violated, its
application is more computational exhaustive, needs to estimate the covariance matrices
of each group, and requires more observations than in the case of linear discriminant function (Narsky and Porter (2013)). Moreover, the decision boundary is easy to understand
and to visualize in high-dimensional settings, if the linear discriminant function is used.
The discriminant analysis is a well established topic in multivariate statistics. Many
asymptotic results are available when the sample sizes of groups to be separated are assumed to be large, while the number of variables is fixed and significantly smaller than
the sample size (see, e.g., Muirhead (1982), Rencher and Christensen (2012)). However, these results cannot automatically be transferred when the number of variables is
comparable to the sample size, which is known in the statistical literature as the high-dimensional asymptotic regime. It is remarkable that in this case the results obtained
under the standard asymptotic regime can deviate significantly from those obtained under the high-dimensional asymptotics (see, e.g., Bai and Silverstein (2010)). Fujikoshi
and Seo (1997) provided an asymptotic approximation of the linear discriminant function
in high dimension by considering the case of equal sample sizes and compared the results
with the classical asymptotic approximation by Wyman et al. (1990). For the samples of
non-equal sizes, they pointed out that the high-dimensional approximation is extremely
accurate. However, Tamatani (2015) showed that the Fisher linear discriminant function
performs poorly due to diverging spectra in the case of large-dimensional data and small
sample sizes. Bickel and Levina (2004), Srivastava and Kubokawa (2007) investigated
the asymptotic properties of the linear discriminant function in high dimension, while
modifications of the linear discriminant function can be found in Cai and Liu (2011b),
Shao et al. (2011). The asymptotic results for the discriminant function coefficients in
matrix-variate skew models can be found in Bodnar et al. (2017b).
We contribute to the statistical literature by deriving a stochastic representation of
the discriminant function coefficient and the classification rule based on the linear discriminant function. These results provide us an efficient way of simulating these random
quantities and they are also used in the derivation of their high-dimensional asymptotic
distributions, using which the error rate of the classification rule based on the linear discriminant function can be easily assessed and the problem of the increasing dimensionality
can be visualized in a simple way.
The rest of the paper is organized as follows. The finite-sample properties of the discriminant function are presented in Section 2.1, where, in particular, we derive a stochastic representation for the discriminant function coefficients. In Section 2.2, an exact one-sided test for the comparison of the population discriminant function coefficients is suggested, while a stochastic representation for the classification rule is obtained in Section 2.3. The finite-sample results are then used to derive the asymptotic distributions of the discriminant function coefficients and of the classification rule in Section 3, while the finite-sample performance of the asymptotic distribution is analysed in Section 3.2.
2 Finite-sample properties of the discriminant function
Let x_1^(1), . . . , x_{n1}^(1) and x_1^(2), . . . , x_{n2}^(2) be two independent samples from the multivariate normal distributions which consist of independent and identically distributed random vectors with x_i^(1) ∼ N_p(µ1, Σ) for i = 1, ..., n1 and x_j^(2) ∼ N_p(µ2, Σ) for j = 1, ..., n2, where Σ is positive definite. Throughout the paper, 1_n denotes the n-dimensional vector of ones, I_n is the n × n identity matrix, and the symbol ⊗ stands for the Kronecker product.
Let X^(1) = (x_1^(1), . . . , x_{n1}^(1)) and X^(2) = (x_1^(2), . . . , x_{n2}^(2)) be observation matrices. Then
the sample estimators for the mean vectors and the covariance matrices constructed from
each sample are given by
x̄^(j) = (1/n_j) Σ_{i=1}^{n_j} x_i^(j) = (1/n_j) X^(j) 1_{n_j},   S^(j) = (1/(n_j − 1)) Σ_{i=1}^{n_j} (x_i^(j) − x̄^(j))(x_i^(j) − x̄^(j))^T.
The pooled estimator for the covariance matrix, i.e., an estimator for Σ obtained from
two samples, is then given by
S_pl = ((n1 − 1) S^(1) + (n2 − 1) S^(2)) / (n1 + n2 − 2).   (1)
The following lemma (see, e.g., (Rencher and Christensen, 2012, Section 5.4.2)) presents
the joint distribution of x̄(1) , x̄(2) and Spl .
Lemma 1. Let X1 ∼ N_{p,n1}(µ1 1_{n1}^T, Σ ⊗ I_{n1}) and X2 ∼ N_{p,n2}(µ2 1_{n2}^T, Σ ⊗ I_{n2}) for p < n1 + n2 − 2. Assume that X1 and X2 are independent. Then
(a) x̄^(1) ∼ N_p(µ1, (1/n1) Σ),
(b) x̄^(2) ∼ N_p(µ2, (1/n2) Σ),
(c) (n1 + n2 − 2) S_pl ∼ W_p(n1 + n2 − 2, Σ).
Moreover, x̄^(1), x̄^(2) and S_pl are mutually independently distributed.
The results of Lemma 1, in particular, imply that

x̄^(1) − x̄^(2) ∼ N_p(µ1 − µ2, (1/n1 + 1/n2) Σ),   (2)

which is independent of S_pl.
2.1 Stochastic representation for the discriminant function coefficients
The discriminant function coefficients are given by the following vector
â = S_pl^{−1} (x̄^(1) − x̄^(2)),   (3)
which is the sample estimator of the population discriminant function coefficient vector expressed as
a = Σ^{−1} (µ1 − µ2).
We consider a more general problem by deriving the distribution of linear combinations
of the discriminant function coefficients. This result possesses several practical applications: (i) it allows a direct comparison of the population coefficients in the discriminant function by deriving a corresponding statistical test; (ii) it can be used in the classification problem where, given a new observation vector, one has to decide to which of the two groups the observation vector should be assigned.
Let L be a k × p matrix of constants such that rank(L) = k < p. We are then
interested in
θ̂ = Lâ = L S_pl^{−1} (x̄^(1) − x̄^(2)).   (4)
Choosing different matrices L we are able to provide different inferences about the linear
combinations of the discriminant function coefficients. For instance, if k = 1 and L is the
vector with all elements zero except the one on the jth position which is one, then we get
the distribution of the jth coefficient in the discriminant function. If we choose k = 1 and
L = (1, −1, 0, . . . , 0)T , then we analyse the difference between the first two coefficients
in the discriminant function. The corresponding result can be further used to test if the
population counterparts to these coefficients are zero or not. For k > 1 several linear
combinations of the discriminant function coefficients are considered simultaneously.
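For concreteness, the short NumPy sketch below (our own illustration, not part of the paper) computes the sample quantities in (1), (3) and (4), assuming the data matrices store the p-dimensional observations column-wise as above.

```python
import numpy as np

def discriminant_coefficients(X1, X2, L=None):
    """X1: (p, n1) and X2: (p, n2) observation matrices; L: optional (k, p) matrix."""
    n1, n2 = X1.shape[1], X2.shape[1]
    xbar1, xbar2 = X1.mean(axis=1), X2.mean(axis=1)
    S1, S2 = np.cov(X1), np.cov(X2)                             # sample covariances
    S_pl = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)      # pooled estimator, Eq. (1)
    a_hat = np.linalg.solve(S_pl, xbar1 - xbar2)                # coefficients, Eq. (3)
    return a_hat if L is None else L @ a_hat                    # linear combinations, Eq. (4)
```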
In the next theorem we derive a stochastic representation for θ̂. The stochastic representation is a very important tool in analysing the distributional properties of random
quantities. It is widely used in computational statistics (e.g., Givens and Hoeting (2012)), in the theory of elliptical distributions (see Gupta et al. (2013)), as well as in Bayesian statistics (cf. Bodnar et al. (2017a)). Later on, we use the symbol =^d to denote equality in distribution.
Theorem 1. Let L be an arbitrary k × p matrix of constants such that rank(L) = k < p.
Then, under the assumption of Lemma 1 the stochastic representation of θ̂ = Lâ is given
5
by
s
d
θ̂ = (n1 + n2 − 2)ξ −1 LΣ−1 x̆ +
!
x̆T Σ−1 x̆
1/2
LRx̆ LT
t0 ,
n1 + n2 − p
where Rx̆ = Σ−1 −Σ−1 x̆x̆T Σ−1 /x̆T Σ−1 x̆; ξ ∼ χ2n1 +n2 −p−1 , x̆ ∼ Np µ1 − µ2 , n11 +
and t0 ∼ tk (n1 + n2 − p, 0k , Ik ). Moreover, ξ, x̆ and t0 are mutually independent.
(5)
1
n2
Σ ,
Proof. From Lemma 1.(c) and Theorem 3.4.1 of Gupta and Nagar (2000) we obtain that
1
S−1 ∼ IW p (n1 + n2 + p − 1, Σ−1 ).
n1 + n2 − 2 pl
(6)
Also, since x̆ = x̄(1) − x̄(2) and Spl are independent, the conditional distribution of
∗
−1 ∗
∗
θ̂ = LS−1
pl x̆ given x̆ = x̆ equals to the distribution of θ = LSpl x̆ and it can be rewritten
in the following form
θ
∗
x̆∗T S−1
pl x̆
.
= (n1 + n2 − 2)x̆ Σ x̆ ∗T −1 ∗
x̆ Spl x̆ (n1 + n2 − 2)x̆∗T Σ−1 x̆∗
d
∗
∗T
−1 ∗
∗
LS−1
pl x̆
Applying Theorem 3.2.12 of Muirhead (1982) we obtain that
x̆∗T Σ−1 x̆∗
ξ = (n1 + n2 − 2) ∗T −1 ∗ ∼ χ2n1 +n2 −p−1
x̆ Spl x̆
∗
(7)
and its distribution is independent of x̆∗ . Hence,
ξ = (n1 + n2 − 2)
x̆T Σ−1 x̆
∼ χ2n1 +n2 −p−1
x̆T S−1
x̆
pl
(8)
and ξ, x̆ are independent.
∗
Using Theorem 3 of Bodnar and Okhrin (2008) we get that x̆∗T S−1
pl x̆ is indepen∗
∗T −1 ∗
∗
∗
∗T −1 ∗
dent of LS−1
pl x̆ /x̆ Spl x̆ for given x̆ . Therefore, ξ is independent of x̆ Σ x̆ ·
−1
T −1
T −1
∗
∗T −1 ∗
LS−1
pl x̆ /x̆ Spl x̆ and, respectively, ξ is independent of x̆ Σ x̆ · LSpl x̆/x̆ Spl x̆. Furthermore, from the proof of Theorem 1 of Bodnar and Schmid (2008) it holds that
∗
LS−1
pl x̆
∗T −1 ∗
−1 ∗ x̆ Σ x̆
T
x̆ Σ x̆ ∗T −1 ∗ ∼ tk n1 + n2 − p; LΣ x̆ ,
LRx̆∗ L
n1 + n2 − p
x̆ Spl x̆
∗T
−1 ∗
(9)
with Rx̆∗ = Σ−1 − Σ−1 x̆∗ x̆∗T Σ−1 /x̆∗T Σ−1 x̆∗ .
Thus, we obtain the following stochastic representation of θ̂ which is given by
s
d
θ̂ = (n1 + n2 − 2)ξ −1 LΣ−1 x̆ +
6
!
x̆T Σ−1 x̆
1/2
LRx̆ LT
t0 ,
n1 + n2 − p
(10)
1
1
where Rx̆ = Σ −Σ x̆x̆ Σ /x̆ Σ x̆; ξ ∼
x̆ ∼ Np µ1 − µ2 , n1 + n2 Σ ,
and t0 ∼ tk (n1 + n2 − p, 0k , Ik ). Moreover, ξ, x̆ and t0 are mutually independent. The
theorem is proved.
−1
−1
T
−1
−1
T
χ2n1 +n2 −p−1 ,
In the next corollary we consider the special case when k = 1, that is, when L = lT is
a p-dimensional vector of constants.
Corollary 1. Let λ = 1/n1 +1/n2 and let l be a p-dimensional vector of constants. Then,
under the condition of Theorem 1, the stochastic representation of θ̂ = lT â is given by
d
θ̂ = (n1 + n2 − 2)ξ −1
s
!
λ(p − 1)
λ+
u lT Σ−1 lz0 , (11)
lT Σ−1 (µ1 − µ2 ) +
n1 + n2 − p
where ξ ∼ χ2n1 +n2 −p−1 , z0 ∼ N (0, 1), u ∼ F p − 1, n1 + n2 − p, (µ1 − µ2 )T Rl (µ1 − µ2 )/λ
(non-central F-distribution with p−1 and n1 +n2 −p degrees of freedom and non-centrality
parameter (µ1 − µ2 )T Rl (µ1 − µ2 )/λ) with Rl = Σ−1 − Σ−1 llT Σ−1 /lT Σ−1 l; ξ, z0 and u
are mutually independently distributed.
Proof. From Theorem 1 we get that
s
d
θ̂ = (n1 + n2 − 2)ξ −1
lT Σ−1 x̆ + t0
x̆T Σ−1 x̆
!
· lT Rx̆ l
n1 + n2 − p
p
√
t0
−1
T −1
= (n1 + n2 − 2)ξ
lT Σ−1 l x̆T Rl x̆ ,
l Σ x̆ + √
n1 + n2 − p
(12)
(13)
where Rl = Σ−1 − Σ−1 llT Σ−1 /lT Σ−1 l; ξ ∼ χ2n1 +n2 −p−1 , t0 ∼ t(n1 + n2 − p, 0, 1), and
x̆ ∼ Np (µ1 − µ2 , λΣ) with λ = 1/n1 + 1/n2 ; ξ, t0 and x̆ are mutually independent.
Because x̆ ∼ Np (µ1 − µ2 , λΣ), Rl ΣRl = Rl , and tr [Rl Σ] = p − 1, the application of
Corollary 5.1.3a of Mathai and Provost (1992) leads to
ζ = λ−1 x̆T Rl x̆ ∼ χ2p−1 δ 2
(14)
where δ 2 = (µ1 − µ2 )T Rl (µ1 − µ2 )/λ. Moreover, since Rl ΣΣ−1 l = 0, the application of
Theorem 5.5.1 of Mathai and Provost (1992) proves that lT Σ−1 x̆ and ζ are independently
distributed.
Finally, we note that the random variable t0 ∼ t(n1 + n2 − p, 0, 1) has the following
stochastic representation
d
t0 =z0
r
n1 + n2 − p
,
w
7
(15)
where z0 ∼ N (0, 1) and w ∼ χ2n1 +n2 −p ; z0 and w are independent. Hence,
s
−1
T
l Σ x̆ + t0
λζ · lT Σ−1 l
ζ, w ∼ N
n1 + n2 − p
ζ
l Σ µ, λl Σ l 1 +
(16)
w
p−1
T −1
T −1
u
,(17)
= N l Σ µ, λl Σ l 1 +
n1 + n2 − p
T
−1
T
−1
where
u=
ζ/(p − 1)
∼ F p − 1, n1 + n2 − p, (µ1 − µ2 )T Rl (µ1 − µ2 )/λ .
w/(n1 + n2 − p)
(18)
Putting all above together we get the statement of the corollary.
2.2
Test for the population discriminant function coefficients
One of the most important questions when the discriminant analysis is performed is
to decide which coefficients are the most influential in the decision. Several methods
exist in the literature with the following three approaches to be the most popular (c.f.,
(Rencher and Christensen, 2012, Section 5.5)): (i) standardized coefficients; (ii) partial
F -values; (iii) correlations between the variables and the discriminant function. (Rencher,
1998, Theorem 5.7A) argued that each of this three methods has several drawbacks. For
instance, the correlations between the variables and the discriminant function do not show
the multivariate contribution of each variable, but provide only univariate information how
each variable separates the groups, ignoring the presence of other variables.
In this section, we propose an alternative approach based on the statistical hypothesis
test. Namely, exact statistical tests will be derived on the null hypothesis that two
population discriminant function coefficients are equal (two-sided test) as well as on the
alternative hypothesis that a coefficient in the discriminant function is larger than another
one (one-sided test). The testing hypothesis for the equality of the i-th and the j-th
coefficients in the population discriminant function is given by
H0 : ai = aj
against H1 : ai 6= aj ,
(19)
while in the case of one-sided test we check if
H0 : ai ≤ aj
against H1 : ai > aj .
8
(20)
In both cases the following test statistic is suggested
(1)
p
lT S−1
− x̄(2) )
pl (x̄
q
T = n1 + n2 − p − 1 q
lT S−1
l
(n1 + n2 − 2)( n11 + n12 ) + (x̄(1) − x̄(2) )T R̂l (x̄(1) − x̄(2) )
pl
(21)
with
> −1
S−1
pl ll Spl
−1
R̂l = Spl − > −1
and l = (0, .., 0, |{z}
1 , 0..., 0, |{z}
−1 , 0, ..., 0)> .
l S l
pl
i
j
The distribution of T follows from (Bodnar and Okhrin, 2011, Theorem 6) and it is
summarized in Theorem 2.
Theorem 2. Let λ = 1/n1 + 1/n2 and let l be a p-dimensional vector of constants. Then,
under the condition of Theorem 1,
(a) the density of T is given by
n1 + n2 − p
fT (x) =
λ(p − 1)
Z
0
∞
ftn1 +n2 −p−1,δ1 (y) (x)fFp−1,n1 +n2 −p,s/λ
n1 + n2 − p
y dy (22)
λ(p − 1)
√
lT Σ−1 (µ1 −µ2 )
√
with δ1 (y) = η/ λ + y, η =
, and s = (µ1 −µ2 )T Rl (µ1 −µ2 ); the symbol
lT Σ−1 l
fG (.) denotes the density of the distribution G.
(b) Under the null hypothesis it holds that T ∼ tn1 +n2 −p−1 and T is independent of (x̄(1) −
x̄(2) )T R̂l (x̄(1) − x̄(2) ).
Theorem 2 shows that the test statistics T has a standard t-distribution under the null
hypothesis. As a result, the suggested test will reject the null hypothesis of the two-sided
test (19) as soon as |T | > tn1 +n2 −p−1;1−α/2 .
The situation is more complicated in the case of the one-sided test (20). In this case
the maximal probability of the type I error has to be control. For that reason, we first
calculate the probability of rejection of the null hypothesis for all possible parameter
values and after that we calculate its maximum for the parameters which correspond to
the null hypothesis in (20). Since the distribution of T depends on µ1 , µ2 , and Σ only
over η and s (see, Theorem 2), the task of finding the maximum is significantly simplified.
Let FG (.) denotes the distribution function of the distribution G. For any constant q, we
9
get
Z
+∞
fT (x)dx
P(T > q) =
q
Z
n1 + n2 − p
n1 + n2 − p ∞
ftn1 +n2 −p−1,δ1 (y) (x)fFp−1,n1 +n2 −p,s/λ (
y)dydx
λ(p − 1) 0
λ(p − 1)
q
Z +∞
Z
n1 + n2 − p ∞
n1 + n2 − p
y
fFp−1,n1 +n2 −p,s/λ
ftn1 +n2 −p−1,δ1 (y) (x)dxdy
λ(p − 1) 0
λ(p − 1)
q
Z
n1 + n2 − p
n1 + n2 − p ∞
(1 − Ftn1 +n2 −p−1,δ1 (y) (q))fFp−1,n1 +n2 −p,s/λ
y dy
λ(p − 1) 0
λ(p − 1)
Z
n1 + n2 − p ∞
n1 + n2 − p
(1 − Ftn1 +n2 −p−1,0 (q))fFp−1,n1 +n2 −p,s/λ
y dy
λ(p − 1) 0
λ(p − 1)
(1 − Ftn1 +n2 −p−1,0 (q)).
Z
=
=
=
≤
=
+∞
where the last equality follows from the fact that the distribution function of the noncentral t-distribution is a decreasing function in non-centrality parameter and δ1 (y) ≤ 0.
Consequently, we get q = tn1 +n2 −p−1;1−α and the one-sided test rejects the null hypothesis
in (20) as soon as T > tn1 +n2 −p−1;1−α .
2.3
Classification analysis
Having a new observation vector x, we classify it to one of the considered two groups.
Assuming that no prior information is available about the classification result, i.e. the
prior probability of each group is 1/2, the decision which is based on the optimal rule is
to assign the observation vector x to the first group as soon as the following inequality
holds (c.f., (Rencher, 1998, Section 6.2))
1
(µ1 − µ2 )> Σ−1 x > (µ1 − µ2 )> Σ−1 (µ1 + µ2 )
2
(23)
and to the second group otherwise. The error rate is defined as the probability of classifying the observation x into one group, while it comes from another one. Rencher (1998)
presented the expression of the error rate expressed as
1
P(classify to the first group | second group is true)
2
1
+
P(classify to the second group | first group is true)
2
∆
= Φ −
with ∆2 = (µ1 − µ2 )> Σ−1 (µ1 − µ2 ) ,
2
ERp (∆) =
where Φ(.) denotes the distribution function of the standard normal distribution.
In practice, however, µ1 , µ2 , and Σ are unknown quantities and the decision is based
10
on the inequality
1 (1)
(1)
(x̄(1) − x̄(2) )> S−1
− x̄(2) )> S−1
+ x̄(2) )
pl x > (x̄
pl (x̄
2
(24)
instead. Next, we derive the error rate of the decision rule (24). Let
1 (1)
(1)
− x̄(2) )> S−1
+ x̄(2) )
dˆ = (x̄(1) − x̄(2) )> S−1
pl (x̄
pl x − (x̄
2
1 (1)
(1)
(2) > −1
(2)
= (x̄ − x̄ ) Spl x − (x̄ + x̄ ) .
2
(25)
ˆ
In Theorem 3 we present the stochastic representation of d.
Theorem 3. Let λ = 1/n1 +1/n2 . Then, under the condition of Theorem 1, the stochastic
representation of dˆ is given by
(−1)i−1
√
√
n1 + n2 − 2
λni − 2
d
dˆ =
(−1)i−1
λξ2 + (∆ + λw0 )2 +
∆2 + λ∆w0
ξ
2λni
λni
s
!
q
√
p−1
1
+
+
u
1+
λξ2 + (∆ + λw0 )2 z0
for i = 1, 2, (26)
n1 + n2 n1 + n2 − p
where u|ξ1 , ξ2 , w0 ∼ F (p − 1, n1 + n2 − p, (n1 + n2 )−1 ξ1 ) with ξ1 |ξ2 , w0 ∼ χp−1,δξ2
2 ,w0
∆2 ξ√
2
and
δξ22 ,w0 = nn1 n2 2 λξ +(∆+ λw )2 , z0 , w0 ∼ N (0, 1), ξ ∼ χ2n1 +n2 −p−1 , ξ2 ∼ χ2p−1 ; ξ, z0 are inde2
0
i
pendent of u, ξ1 , ξ2 , w0 where ξ2 and w0 are independent as well.
Proof. Let x ∼ Np (µi , Σ), Since x̄(1) , x̄(2) , x, and Spl are independently distributed, we
(1)
(2)
get that the conditional distribution of dˆ given x̄(1) = x0 and x̄(2) = x0 is equal to the
distribution of d0 defined by
(1)
(2)
d0 = (x̄0 − x̄0 )> S−1
pl x̃ ,
(1)
(2)
(1)
(2)
where x̃ = x − 12 (x̄0 + x̄0 ) ∼ Np µi − 21 (x̄0 + x̄0 ), Σ , (n1 + n2 − 2)Spl ∼ Wp (n1 +
n2 − 2, Σ), x̃ and Spl are independent.
Following the proof of Corollary 1, we get
d
d0 = (n1 + n2 − 2)ξ
s
+
1+
−1
(1)
(x̄0
−
(2)
x̄0 )T Σ−1
1 (1)
(2)
µi − (x̄0 + x̄0 )
2
!
(p − 1)
(1)
(2)
(1)
(2)
u (x̄0 − x̄0 )T Σ−1 (x̄0 − x̄0 )z0 ,
n1 + n2 − p
T
(1)
(2)
(1)
(2)
1
1
where u ∼ F p − 1, n1 + n2 − p, µi − 2 (x̄0 + x̄0 ) R0 µi − 2 (x̄0 + x̄0 )
with
11
(1)
(2)
(1)
(2)
(1)
(2)
(1)
(2)
R0 = Σ−1 − Σ−1 (x̄0 − x̄0 )(x̄0 − x̄0 )T Σ−1 /(x̄0 − x̄0 )T Σ−1 (x̄0 − x̄0 ), z0 ∼ N (0, 1),
and ξ ∼ χ2n1 +n2 −p−1 ; ξ, z0 and u are mutually independently distributed.
In using that
1 (1)
1 (1)
(2)
(i)
(2)
µi − (x̄0 + x̄0 ) = µi − x̄0 + (−1)i−1 (x̄0 − x̄0 )
2
2
(1)
(2)
and (x̄0 − x̄0 )T R0 = 0, we get
n1 + n2 − 2 (−1)i−1 (1)
d
(x̄ − x̄(2) )T Σ−1 (x̄(1) − x̄(2) ) − (x̄(1) − x̄(2) )T Σ−1 x̄(i) − µi
dˆ =
ξ
2
s
!
p−1
+
1+
u (x̄(1) − x̄(2) )T Σ−1 (x̄(1) − x̄(2) )z0 ,
n1 + n2 − p
T
where u|x̄(1) , x̄(2) ∼ F p − 1, n1 + n2 − p, x̄(i) − µi Rx x̄(i) − µi with Rx = Σ−1 −
Σ−1 (x̄(1) − x̄(2) )(x̄(1) − x̄(2) )T Σ−1 /(x̄(1) − x̄(2) )T Σ−1 (x̄(1) − x̄(2) ), z0 ∼ N (0, 1), and ξ ∼
χ2n1 +n2 −p−1 ; ξ, z0 are independent of u, x̄(1) , x̄(2) .
Since x̄(1) and x̄(2) are independent and normally distributed, we get that
(i)
x̄ − µi
x̄(1) − x̄(2)
!
0
µ1 − µ2
∼ N2p
!
,
1
Σ
ni
i−1
(−1)
Σ
ni
(−1)i−1
Σ
ni
!!
λΣ
and, consequently,
x̄
(i)
− µi |(x̄
(1)
(2)
− x̄ ) ∼ Np
1
(−1)i−1 (1)
(2)
(x̄ − x̄ − (µ1 − µ2 )),
Σ ,
λni
n1 + n2
1
where we used that n1i − λn1 2 = n1 +n
.
2
i
The application of Theorem 5.5.1 in Mathai and Provost (1992) shows that given
(1)
(x̄ − x̄(2) ) the random variables (x̄(1) − x̄(2) )T Σ−1 (x̄(i) − µi ) and (x̄(i) − µi )Rx (x̄(i) − µi )
are independently distributed with
(x̄(1) − x̄(2) )T Σ−1 (x̄(i) − µi )|(x̄(1) − x̄(2) )
(−1)i−1 (1)
1
(2) T −1
(1)
(2)
(1)
(2) T −1
(1)
(2)
∼ N
(x̄ − x̄ ) Σ (x̄ − x̄ − (µ1 − µ2 )),
(x̄ − x̄ ) Σ (x̄ − x̄ )
λni
n1 + n2
and, by using Corollary 5.1.3a of Mathai and Provost (1992),
(n1 + n2 )(x̄(i) − µi )T Rx (x̄(i) − µi )|(x̄(1) − x̄(2) ) ∼ χp−1,δx2
12
with
n1 + n2 (1)
(x̄ − x̄(2) − (µ1 − µ2 ))T Rx (x̄(1) − x̄(2) − (µ1 − µ2 ))
λ2 n2i
n1 + n2
=
(µ1 − µ2 )T Rx (µ1 − µ2 )
λ2 n2i
n1 + n2 (µ1 − µ2 )T Σ−1 (µ1 − µ2 )
(x̄(1) − x̄(2) )T Rµ (x̄(1) − x̄(2) )
=
λ2 n2i (x̄(1) − x̄(2) )T Σ−1 (x̄(1) − x̄(2) )
δx2 =
where we use that (x̄(1) −x̄(2) )T Rx = 0 and Rµ = Σ−1 −Σ−1 (µ1 −µ2 )(µ1 −µ2 )T Σ−1 /(µ1 −
µ2 )T Σ−1 (µ1 − µ2 ).
As a result, we get
n1 + n2 − 2
λni − 2 2 (−1)i−1
d
(−1)i−1
∆x +
(µ1 − µ2 )T Σ−1 (x̄(1) − x̄(2) )
dˆ =
ξ
2λni
λni
s
!
1
p−1
1+
u ∆x z0 ,
+
+
n1 + n2 n1 + n2 − p
where ∆2x = (x̄(1) −x̄(2) )T Σ−1 (x̄(1) −x̄(2) ), u|x̄(1) , x̄(2) ∼ F (p − 1, n1 + n2 − p, (n1 + n2 )−1 ξ1 )
with ξ1 ∼ χp−1,δx2 , z0 ∼ N (0, 1), and ξ ∼ χ2n1 +n2 −p−1 ; ξ, z0 are independent of u, ξ1 , x̄(1) , x̄(2) .
Finally, it holds with ∆2 = (µ1 − µ2 )T Σ−1 (µ1 − µ2 ) that
∆2x
= (x̄
(1)
(2) T
− x̄ ) Rµ (x̄
(1)
2
(µ1 − µ2 )T Σ−1 (x̄(1) − x̄(2) )
− x̄ ) +
,
∆2
(2)
where both summands are independent following Theorem 5.5.1 in Mathai and Provost
(1992). The application of Corollary 5.1.3a in Mathai and Provost (1992) leads to
λ−1 (x̄(1) − x̄(2) )T Rµ (x̄(1) − x̄(2) ) ∼ χ2p−1
and
(µ1 − µ2 )T Σ−1 (x̄(1) − x̄(2) ) ∼ N (∆2 , λ∆2 ).
From the last statement we get the stochastic representation of dˆ expressed as
(−1)i−1
√
√
n1 + n2 − 2
λni − 2
d
dˆ =
(−1)i−1
λξ2 + (∆ + λw0 )2 +
∆2 + λ∆w0
ξ
2λni
λni
s
!
q
√
1
p−1
+
1+
+
u
λξ2 + (∆ + λw0 )2 z0 ,
n1 + n2 n1 + n2 − p
where u|ξ1 , ξ2 , w0 ∼ F (p − 1, n1 + n2 − p, (n1 + n2 )−1 ξ1 ) with ξ1 |ξ2 , w0 ∼ χp−1,δξ2
δξ22 ,w0 =
∆2
n1 +n2
√
λξ2 ,
λ2 n2i λξ2 +(∆+ λw0 )2
2 ,w0
and
z0 , w0 ∼ N (0, 1), ξ ∼ χ2n1 +n2 −p−1 , ξ2 ∼ χ2p−1 ; ξ, z0 are
13
independent of u, ξ1 , ξ2 , w0 where ξ2 and w0 are independent as well.
Theorem 3 shows that the distribution of dˆ is determined by six random variables
ξ, ξ1 , ξ2 , z0 , w0 , and u. Moreover, it depends on µ1 , µ2 , and Σ only via the quadratic form
∆. As a result, the the error rate based on the decision rule (24) is a function of ∆ only
and it is calculated by
ERs (∆) =
1
1 ˆ
P(d > 0| second group is true) + P(dˆ ≤ 0| first group is true) .
2
2
The two probabilities in (27) can easily be approximated for all ∆, p, n1 , and n2 with
high precision by applying the results of Theorem 3 via the following simulation study
(i) Fix ∆ and i ∈ {1, 2}.
(ii) Generate four independent random variables ξb ∼ χ2n1 +n2 −p−1 , ξ2;b ∼ χ2p−1 , z0;b ∼
N (0, 1), and w0;b ∼ N (0, 1).
(iii) Generate ξ1,b ∼ χp−1,δξ2
2 ,w0
with δξ22,b ,w0,b =
∆2 ξ2;b
n1 n2
√
.
n2i λξ2;b +(∆+ λw0;b )2
(iv) Generate u ∼ F (p − 1, n1 + n2 − p, (n1 + n2 )−1 ξ1,b ).
(i)
(v) Calculate dˆb following the stochastic representation (26) of Theorem 3.
(i)
(i)
(vi) Repeat steps (ii)-(v) for b = 1, ..., B leading to the sample dˆ1 , ..., dˆB .
The procedure has to be performed for both values of i = 1, 2 where for i = 1 the relative
number of events {dˆ > 0} will approximate the first summand in (27) while for i = 2 the
relative number of events {dˆ ≤ 0} will approximate the second summand in (27).
It is important to note that the difference between the error rates calculated for the
two decision rules (23) ad (24) could be very large as shown in Figure 1 where ERp (∆) and
ERs (∆) calculated for several values of n1 = n2 ∈ {50, 100, 150, 250} with fixed values
of p ∈ {10, 25, 50, 75}. If p = 10 we do not observe large differences between ERp (∆)
and ERs (∆) computed for different sample sizes. However, this statement does not hold
any longer when p becomes comparable to both n1 and n2 as documented for p = 50 and
p = 75. This case is known in the literature as a large-dimensional asymptotic regime
and it is investigated in detail in Section 3.
3
Discriminant analysis under large-dimensional asymptotics
In this section we derive the asymptotic distribution of the discriminant function coefficients under the high-dimensional asymptotic regime, that is, when the dimension
14
0.5
p=25
0.5
p=10
0.1
0.2
0.3
0.4
ERp(Δ)
ERs(Δ), n1 = n2 = 50
ERs(Δ), n1 = n2 = 100
ERs(Δ), n1 = n2 = 150
ERs(Δ), n1 = n2 = 250
0.0
0.0
0.1
0.2
0.3
0.4
ERp(Δ)
ERs(Δ), n1 = n2 = 50
ERs(Δ), n1 = n2 = 100
ERs(Δ), n1 = n2 = 150
ERs(Δ), n1 = n2 = 250
0
1
2
3
4
5
0
1
2
3
4
5
p=75
0.5
0.5
p=50
0.4
ERp(Δ)
ERs(Δ), n1 = n2 = 50
ERs(Δ), n1 = n2 = 100
ERs(Δ), n1 = n2 = 150
ERs(Δ), n1 = n2 = 250
0.0
0.0
0.1
0.1
0.2
0.2
0.3
0.3
0.4
ERp(Δ)
ERs(Δ), n1 = n2 = 50
ERs(Δ), n1 = n2 = 100
ERs(Δ), n1 = n2 = 150
ERs(Δ), n1 = n2 = 250
0
1
2
3
4
5
0
1
2
3
4
Figure 1: Error rates ERp (∆) and ERs (∆) as functions of ∆ for p ∈ {10, 25, 50, 75} and
ERs (∆).
15
5
increases together with the sample sizes and they all tend to infinity. More precisely, we
assume that p/(n1 + n2 ) → c ∈ [0, 1) as n1 + n2 → ∞.
The following conditions are needed for the validity of the asymptotic results:
(A1) There exists γ ≥ 0 such that p−γ (µ1 − µ2 )T Σ−1 (µ1 − µ2 ) < ∞ uniformly on p.
(A2) 0 <
lim
(n1 ,n2 )→∞
(n1 /n2 ) < ∞.
It is remarkable that, no assumption on the eigenvalues of the covariance matrix Σ,
like they are uniformly bounded on p, is imposed. The asymptotic results are also valid
when Σ possesses unbounded spectrum as well as when its smallest eigenvalue tends to
zero as p → ∞. The constant γ is a technical one and it controls the growth rate of the
quadratic form. In Theorem 4 the asymptotic distribution of linear combinations of the
discriminant function coefficients is provided.
Theorem 4. Assume (A1) and (A2). Let l be a p-dimensional vector of constants such
that p−γ lT Σ−1 l < ∞ is uniformly on p, γ ≥ 0. Then, under the conditions of Theorem 1,
the asymptotic distribution of θ̂ = lT â is given by
√
n1 +
n2 σγ−1
θ̂ −
1 T −1
D
l Σ (µ1 − µ2 ) −→ N (0, 1)
1−c
for p/(n1 + n2 ) → c ∈ [0, 1) as n1 + n2 → ∞ with
σγ2 =
1
(1 − c)3
lT Σ−1 (µ1 − µ2 )
2
+ lT Σ−1 l(µ1 − µ2 )T Σ−1 (µ1 − µ2 )
(27)
!
+ λ(n1 + n2 )lT Σ−1 l1{0} (γ)
where 1A (.) denotes the indicator function of set A.
Proof. Using the stochastic representation (11) of Corollary 1, we get
√
1 T −1
n1 +
l Σ (µ1 − µ2 )
1−c
−γ T −1
√
1
p l Σ (µ1 − µ2 )
d
−1
=
n1 + n2 (n1 + n2 − 2)ξ −
1−c
p−γ σγ
s
p −γ T −1
p
n1 + n2 − 2
p
−
1
p l Σ l
+
λ(n1 + n2 )
p−γ + p−γ
u
z0 ,
ξ
n1 + n2 − p
p−γ σγ
n2 σγ−1
θ̂ −
where ξ ∼ χ2n1 +n2 −p−1 , z0 ∼ N (0, 1), u ∼ F p − 1, n1 + n2 − p, (µ1 − µ2 )T Rl (µ1 − µ2 )/λ
with Rl = Σ−1 − Σ−1 llT Σ−1 /lT Σ−1 l; ξ, z0 and u are mutually independently distributed.
16
Since, ξ ∼ χ2n1 +n2 −p−1 , we get that
p
n1 + n2 − p − 1
ξ
D
− 1 −→ N (0, 2)
n1 + n2 − p − 1
for p/(n1 + n2 ) → c ∈ [0, 1) as n1 + n2 → ∞ and, consequently,
√
1
n1 + n2 − p − 1 1
n1 + n2
−1
n1 + n2 (n1 + n2 − 2)ξ −
=√
1−c
ξ
1−c
n1 + n2 − p − 1
p
n1 + n2 − 2
ξ
2
D
×
n1 + n2 − p − 1 (1 − c)
−
−→ z˜0 ∼ N 0,
n1 + n2 − p − 1 n1 + n2 − p − 1
1−c
√
for
p
−γ
p
n1 +n2
= c + o((n1 + n2 )−1/2 ) where z0 and z̃0 are independent.
Furthermore, we get (see, (Bodnar and Reiß, 2016, Lemma 3))
+p
−γ
p−1
c
u − 1{0} (γ) −
n1 + n2 − p
1−c
p−γ (µ1 − µ2 )T Rl (µ1 − µ2 )
1{0} (γ) +
cλ(n1 + n2 )
a.s.
−→ 0
Putting the above results together, we get the statement of the theorem with
σγ2 =
2
1
T −1
2
l
Σ
(µ
−
µ
)
+ lT Σ−1 l(µ1 − µ2 )T Rl (µ1 − µ2 )
1
2
(1 − c)3
!
+ λ(n1 + n2 )lT Σ−1 l1{0} (γ)
=
1
(1 − c)3
lT Σ−1 (µ1 − µ2 )
2
+ lT Σ−1 l(µ1 − µ2 )T Σ−1 (µ1 − µ2 )
!
T
−1
+ λ(n1 + n2 )l Σ l1{0} (γ)
The results of Theorem 4 show that the quantity γ is present only in the asymptotic
variance σγ2 . Moreover, if γ > 0, then the factor λ(n1 + n2 ) vanishes and therefore the
assumption (A2) is no longer needed. However, in the case of γ = 0 we need (A2) in order
to keep the variance bounded. We further investigate this point via simulations in Section
3.3, by choosing γ > 0 and considering small n1 and large n2 such that n1 /n2 → 0.
17
3.1
Classification analysis in high dimension
The error rate of the classification analysis based on the optimal decision rule (23) remains
the same independently of p and it is always equal to
∆
ERp (∆) = Φ −
2
with ∆2 = (µ1 − µ2 )> Σ−1 (µ1 − µ2 ) .
In practice, however, µ1 , µ2 , and Σ are not known and, consequently, one has to make
the decision based on (24) instead of (23). In Theorem 5, we derived the asymptotic
distribution of dˆ under the large-dimensional asymptotics.
e 2 and λni → bi for p/(n1 + n2 ) →
Theorem 5. Assume (A1) and (A2). Let p−γ ∆2 → ∆
c ∈ [0, 1) as n1 + n2 → ∞. Then, under the conditions of Theorem 1, it holds that
pmin(γ,1)/2
D
dˆ
n1 + n2 − 2 (−1)i−1 −γ 2
−
p ∆
pγ
n1 + n2 − p − 1
2
−→ N (−1)i−1
!
c bi − 2
(b1 + b2 )1{0} (γ),
1 − c 2bi
!
c
1
e 4 1[1,+∞) (γ) +
e 2 1[0,1] (γ))
∆
(c(b1 + b2 )1{0} (γ) + ∆
2(1 − c)3
(1 − c)3
for p/(n1 + n2 ) → c ∈ [0, 1) as n1 + n2 → ∞.
Proof. The application of Theorem 3 leads to
!
i−1
ˆ
n
+
n
−
2
(−1)
d
1
2
−
p−γ ∆2
pmin(γ,1)/2
pγ
n1 + n2 − p − 1
2
√
p
n1 + n2 − 2 p
ξ
d
min(γ,1)/2−1/2
√
= p
n1 + n2 − p − 1 1 −
ξ
n1 + n2 − p − 1
n1 + n2 − p − 1
×
×
+
+
×
(−1)i−1 −γ 2 n1 + n2 − 2
λni − 2
p ∆ +
(−1)i−1
2
ξ
2λni
p
√
min(γ,1)/2−γ
min(γ,1)/2−γ/2
min(γ,1)/2−γ
2
−γ
2
p ∆ λw0 + p
λw0
p
λξ2 + 2p
!
(−1)i−1 min(γ,1)/2−γ/2 p −γ 2 √
p
p ∆ λw0
λni
s
n1 + n2 − 2
1
p−1
1+
+
u
ξ
n1 + n2 n1 + n2 − p
!
q
p
√
pmin(γ,1)−2γ λξ2 + (pmin(γ,1)/2−γ/2 p−γ ∆2 + pmin(γ,1)/2−γ λw0 )2 z0
18
D
−→ N (−1)i−1
c bi − 2
(b1 + b2 )1{0} (γ),
1 − c 2bi
!
c
1
e 4 1[1,+∞) (γ) +
e 2 1[0,1] (γ)) ,
∆
(c(b1 + b2 )1{0} (γ) + ∆
2(1 − c)3
(1 − c)3
where the last line follows from Lemma 3 in Bodnar and Reiß (2016) and Slutsky Theorem
(see, (DasGupta, 2008, Theorem 1.5)).
The parameters of the limit distribution derived in Theorem 5 can be significantly
simplified in the special case of n1 = n2 because of λn1 = λn2 = 2. The results of
Theorem 5 are also used to derived the approximate error rate for the decision (24). Let
1 1 −γ
p ∆. Then, the error rate is given by
a = 1−c
2
o 1 n
o
1 nˆ
P d > 0|i = 2 + P dˆ ≤ 0|i = 1
2 (
2
!
)
ˆ
1
d
=
P pmin(γ,1)/2
− (−1)i−1 a > −pmin(γ,1)/2 (−1)i−1 a|i = 2
2
pγ
(
!
)
ˆ
1
d
+
P pmin(γ,1)/2
− (−1)i−1 a ≤ −pmin(γ,1)/2 (−1)i−1 a|i = 1
2
pγ
min(γ,1)/2
1
ap
− m2
1
−apmin(γ,1)/2 − m1
1−Φ
+ Φ
,
≈
2
v
2
v
ERs (∆) =
with
c b1 − 2
c b2 − 2
(b1 + b2 )1{0} (γ), m2 = −
(b1 + b2 )1{0} (γ),
1 − c 2b1
1 − c 2b2
1
c
(p−γ ∆2 )2 1[1,+∞) (γ) +
(c(b1 + b2 )1{0} (γ) + p−γ ∆2 1[0,1] (γ)),
v2 =
3
2(1 − c)
(1 − c)3
m1 =
e 2 by p−γ ∆2 .
where we approximate ∆
In the special case of n1 = n2 which leads to b1 = b2 = 2, we get
∆
ERs (∆) = Φ −hc
2
with
p
√
pmin(γ,1)/2−γ 1 − c p−γ ∆2
hc = p −γ 2 2
,
c(p ∆ ) 1[1,+∞) (γ)/2 + 4c1{0} (γ) + p−γ ∆2 1[0,1] (γ)
√
which is always smaller than one. Furthermore, for γ ∈ (0, 1) we get hc = 1 − c.
In Figure 2, we plot ERs (∆) as a function of ∆ ∈ [0, 100] for c ∈ {0.1, 0.5, 0.8, 0.95}.
19
0.0
0.1
0.2
0.3
0.4
0.5
ERp(∆)
ERs(∆), c = 0.1
ERs(∆), c = 0.5
ERs(∆), c = 0.8
ERs(∆), c = 0.95
0
20
40
60
80
100
Figure 2: Error rates ERp (∆) and ERs (∆) as functions of ∆ for c ∈ {0.1, 0.5, 0.8, 0.95}.
We also add the plot of ERp (∆) in order to compare the error rate of the two decision
rules. Since only finite values of ∆ are considered in the figure we put γ = 0 and also
1
1 +n2 −2
choose n1 = n2 . Finally, the ratio n1n+n
in the definition of a is approximated by 1−c
.
2 −p−1
We observe that ERs (∆) lies very close to ERp (∆) for c = 0.1. However, the difference
between two curves becomes considerable as c growths, especially for c = 0.95 and larger
values of ∆.
3.2
Finite-sample performance
In this section we present the results of the simulation study. The aim is to investigate
how good the asymptotic distribution of a linear combination of the discriminant function
coefficients θ̂ = lT â performs in the case of the finite dimension and of the finite sample
size. For that reason we compare the asymptotic distribution of the standardized θ̂ as
given in Theorem 4 to the corresponding exact distribution obtained as a kernel density
approximation with the Eppanechnikov kernel applied to the simulated data from the
standardized exact distribution which are generated following the stochastic representation of Corollary 1: (i) first, ξb , z0;b , ub are sampled independently from the corresponding
20
univariate distributions provided in Corollary 1; (ii) second, θ̂b is computed by using (11)
and standardized after that as in Theorem 4; (iii) finally, the previous two steps are repeated for b = 1, ..., B times to obtain a sample of size B. It is noted that B could be
large to ensure a good performance of the kernel density estimator.
In the simulation study, we take l = 1p (p-dimensional vector of ones). The elements
of µ1 and µ2 are drawn from the uniform distribution on [−1, 1] when γ > 0, while
the first ten elements of µ1 and the last ten elements of µ2 are generated from the
uniform distribution on [−1, 1] and the rest of the components are taken to be zero when
γ = 0. We also take Σ as a diagonal matrix, where every element is uniformly distributed
on (0, 1]. The results are compared for several values of c = {0.1, 0.5, 0.8, 0.95} and
the corresponding values of p, n1 , n2 . Simulated data consist of N = 105 independent
repetitions. In both cases γ = 0 and γ > 0 we plot two asymptotic density functions to
investigate how robust are the obtained results to the choice of γ.
In Figures 3-4, we present the results in the case of equal and large sample sizes (data
are drawn with γ = 0 in Figure 3 and with γ > 0 in Figure 4), while the plots in Figure
5 correspond to the case of one small sample and one large sample. We observe that the
impact of the incorrect specification of γ is not large, while some deviations are observed
in Figure 5 for small values of c. If c increases, then the difference between the two
asymptotic distributions becomes negligible. In contrast, larger differences between the
asymptotic distributions and the finite-sample one are observed for c = 0.8 and c = 0.95
in all figures, although their sizes are relatively small even in such extreme case.
References
Agarwal, A., Negahban, S., and Wainwright, M. J. (2012). Noisy matrix decomposition via
convex relaxation: Optimal rates in high dimensions. Annals of Statistics, 40(2):1171–
1197.
Bai, Z., Jiang, D., Yao, J.-F., and Zheng, S. (2009). Corrections to lrt on large-dimensional
covariance matrix by rmt. Annals of Statistics, 37(6B):3822–3840.
Bai, Z. and Silverstein, J. W. (2010). Spectral Analysis of Large Dimensional Random
Matrices. New York, NY: Springer Science+ Business Media, LLC.
Bickel, P. J. and Levina, E. (2004). Some theory for fisher’s linear discriminant function,’naive bayes’, and some alternatives when there are many more variables than
observations. Bernoulli, pages 989–1010.
Bodnar, T., Gupta, A., and Parolya, N. (2016). Direct shrinkage estimation of large
dimensional precision matrix. Journal of Multivariate Analysis, 146:223–236.
21
Bodnar, T., Gupta, A. K., and Parolya, N. (2014). On the strong convergence of the
optimal linear shrinkage estimator for large dimensional covariance matrix. Journal of
Multivariate Analysis, 132:215–228.
Bodnar, T., Mazur, S., and Okhrin, Y. (2017a). Bayesian estimation of the global minimum variance portfolio. European Journal of Operational Research, 256:292–307.
Bodnar, T., Mazur, S., and Parolya, N. (2017b). Central limit theorems for functionals of
large dimensional sample covariance matrix and mean vector in matrix-variate skewed
model. Scandinavian Journal of Statistics, under revision.
Bodnar, T. and Okhrin, Y. (2008). Properties of the singular, inverse and generalized
inverse partitioned wishart distributions. Journal of Multivariate Analysis, 99:2389–
2405.
Bodnar, T. and Okhrin, Y. (2011). On the product of inverse wishart and normal distributions with applications to discriminant analysis and portfolio theory. Scandinavian
Journal of Statistics, 38(2):311–331.
Bodnar, T. and Reiß, M. (2016). Exact and asymptotic tests on a factor model in low
and large dimensions with applications. Journal of Multivariate Analysis, 150:125–151.
Bodnar, T. and Schmid, W. (2008). A test for the weights of the global minimum variance
portfolio in an elliptical model. Metrika, 67(2):127–143.
Cai, T. and Liu, W. (2011a). Adaptive thresholding for sparse covariance matrix estimation. Journal of the American Statistical Association, 106(494):672–684.
Cai, T. and Liu, W. (2011b). A direct estimation approach to sparse linear discriminant
analysis. Journal of the American Statistical Association, 106(496):1566–1577.
Cai, T., Liu, W., and Luo, X. (2011). A constrained l1 minimization approach to
sparse precision matrix estimation. Journal of the American Statistical Association,
106(494):594–607.
Cai, T. T. and Jiang, T. (2011). Limiting laws of coherence of random matrices with
applications to testing covariance structure and construction of compressed sensing
matrices. Annals of Statistics, 39(3):1496–1525.
Chen, S. X., Zhang, L.-X., and Zhong, P.-S. (2010). Tests for high-dimensional covariance
matrices. Journal of the American Statistical Association, 105(490):810–819.
DasGupta, A. (2008). Asymptotic theory of statistics and probability. Springer Science
& Business Media.
22
Fan, J., Fan, Y., and Lv, J. (2008). High dimensional covariance matrix estimation using
a factor model. Journal of Econometrics, 147(1):186–197.
Fan, J., Liao, Y., and Mincheva, M. (2013). Large covariance estimation by thresholding
principal orthogonal complements. Journal of the Royal Statistical Society: Series B
(Statistical Methodology), 75(4):603–680.
Fujikoshi, Y. and Seo, T. (1997). Asymptotic aproximations for epmcs of the linear and
the quadratic discriminant functions when the sample sizes and the dimension are large.
Random operators and stochastic equations, University of Toronto.
Givens, G. H. and Hoeting, J. A. (2012). Computational statistics. John Wiley & Sons.
Gupta, A. and Nagar, D. (2000). Matrix Variate Distributions. Chapman and Hall/CRC,
Boca Raton.
Gupta, A., Varga, T., and Bodnar, T. (2013). Elliptically contoured models in statistics
and portfolio theory. Springer, second edition.
Gupta, A. K. and Bodnar, T. (2014). An exact test about the covariance matrix. Journal
of Multivariate Analysis, 125:176–189.
Jiang, T. and Yang, F. (2013). Central limit theorems for classical likelihood ratio tests
for high-dimensional normal distributions. Annals of Statistics, 41(4):2029–2074.
Johnson, R. A., Wichern, D. W., et al. (2007). Applied multivariate statistical analysis.
Prentice hall Upper Saddle River, NJ.
Johnstone, I. M. (2001). On the distribution of the largest eigenvalue in principal components analysis. Annals of Statistics, 29(2):295–327.
Ledoit, O. and Wolf, M. (2003). Improved estimation of the covariance matrix of
stock returns with an application to portfolio selection. Journal of Empirical Finance,
10(5):603–621.
Mathai, A. and Provost, S. B. (1992). Quadratic Forms in Random Variables. Marcel
Dekker.
Muirhead, R. J. (1982). Aspects of Multivariate Statistical Theory. Wiley, New York.
Narsky, I. and Porter, F. C. (2013). Linear and quadratic discriminant analysis, logistic regression, and partial least squares regression. Statistical Analysis Techniques in
Particle Physics: Fits, Density Estimation and Supervised Learning, pages 221–249.
Rencher, A. C. (1998). Multivariate statistical inference and applications, volume 338.
Wiley-Interscience.
23
Rencher, A. C. and Christensen, W. F. (2012). Methods of multivariate analysis. John
Wiley & Sons.
Shao, J., Wang, Y., Deng, X., Wang, S., et al. (2011). Sparse linear discriminant analysis
by thresholding for high dimensional data. The Annals of statistics, 39(2):1241–1265.
Srivastava, M. S. and Kubokawa, T. (2007). Comparison of discrimination methods for
high dimensional data. Journal of the Japan Statistical Society, 37:123–134.
Tamatani, M. (2015). Asymptotic theory for discriminant analysis in high dimension low
sample size. Memoirs of the Graduate School of Science and Engineering, Shimane
University. Series B, Mathematics, pages 15–26.
Wyman, F. J., Young, D. M., and Turner, D. W. (1990). A comparison of asymptotic
error rate expansions for the sample linear discriminant function. Pattern Recognition,
23:775–783.
24
0.5
0.5
0.1
0.2
0.3
0.4
Asymptotic True
Asymptotic False
Finite Sample
0.0
0.0
0.1
0.2
0.3
0.4
Asymptotic True
Asymptotic False
Finite Sample
−4
−2
0
2
4
−4
0
2
4
0.5
b) p = 250, n1 = n2 = 250
0.5
a) p = 50, n1 = n2 = 250
−2
0.1
0.2
0.3
0.4
Asymptotic True
Asymptotic False
Finite Sample
0.0
0.0
0.1
0.2
0.3
0.4
Asymptotic True
Asymptotic False
Finite Sample
−4
−2
0
2
4
−4
c) p = 400, n1 = n2 = 250
−2
0
2
d) p = 475, n1 = n2 = 250
Figure 3: The kernel density estimator of the asymptotic distribution and standard normal
for θ̂ as given in Theorem 4 for γ = 0 and c = {0.1, 0.5, 0.8, 0.95}.
25
4
0.5
0.5
0.1
0.2
0.3
0.4
Asymptotic True
Asymptotic False
Finite Sample
0.0
0.0
0.1
0.2
0.3
0.4
Asymptotic True
Asymptotic False
Finite Sample
−4
−2
0
2
4
−4
0
2
4
0.5
b) p = 250, n1 = n2 = 250
0.5
a) p = 50, n1 = n2 = 250
−2
0.1
0.2
0.3
0.4
Asymptotic True
Asymptotic False
Finite Sample
0.0
0.0
0.1
0.2
0.3
0.4
Asymptotic True
Asymptotic False
Finite Sample
−4
−2
0
2
4
−4
c) p = 400, n1 = n2 = 250
−2
0
2
d) p = 475, n1 = n2 = 250
Figure 4: The kernel density estimator of the asymptotic distribution and standard normal
for θ̂ as given in Theorem 4 for γ > 0 and c = {0.1, 0.5, 0.8, 0.95}.
26
4
0.5
0.5
0.1
0.2
0.3
0.4
Asymptotic True
Asymptotic False
Finite Sample
0.0
0.0
0.1
0.2
0.3
0.4
Asymptotic True
Asymptotic False
Finite Sample
−4
−2
0
2
4
−4
0
2
4
b) p = 250, n1 = 25, n2 = 475
0.5
0.5
a) p = 50, n1 = 25, n2 = 475
−2
0.1
0.2
0.3
0.4
Asymptotic True
Asymptotic False
Finite Sample
0.0
0.0
0.1
0.2
0.3
0.4
Asymptotic True
Asymptotic False
Finite Sample
−4
−2
0
2
4
−4
c) p = 400, n1 = 25, n2 = 475
−2
0
2
d) p = 475, n1 = 25, n2 = 475
Figure 5: The kernel density estimator of the asymptotic distribution and standard normal
for θ̂ as given in Theorem 4 for γ > 0 and c = {0.1, 0.5, 0.8, 0.95}.
27
4
| 10 |
Information
Sciences
Information Sciences 00 (2014) 1–16
arXiv:1410.3744v1 [] 14 Oct 2014
Refined Particle Swarm Intelligence Method for Abrupt Motion
Tracking
Mei Kuan Lim1 , Chee Seng Chan1 , Dorothy Monekosso2 and Paolo Remagnino3
1 University
of Malaya, Center of Image and Signal Processing, 50603 Kuala Lumpur, Malaysia;
of West England, Eng. & Maths., Bristol BS16 1QY, United Kingdom;
3 Kingston University, Comp. & Info. Sys., Surrey KT1 2EE, United Kingdom
2 University
Abstract
Conventional tracking solutions are not feasible in handling abrupt motion as they are based on smooth motion assumption or an
accurate motion model. Abrupt motion is not subject to motion continuity and smoothness. To assuage this, we deem tracking
as an optimisation problem and propose a novel abrupt motion tracker that based on swarm intelligence - the SwaTrack. Unlike
existing swarm-based filtering methods, we first of all introduce an optimised swarm-based sampling strategy to tradeoff between
the exploration and exploitation of the search space in search for the optimal proposal distribution. Secondly, we propose Dynamic
Acceleration Parameters (DAP) allow on the fly tuning of the best mean and variance of the distribution for sampling. Such
innovating idea of combining these strategies in an ingenious way in the PSO framework to handle the abrupt motion, which so far
no existing works are found. Experimental results in both quantitative and qualitative had shown the effectiveness of the proposed
method in tracking abrupt motions.
Keywords: abrupt motion, visual tracking, particle swarm optimisation
1. Introduction
Visual tracking is one of the most important and challenging research topics in computer vision. One of the main
reason is due to it’s pertinent in the tasks of motion based recognition, automated surveillance, video indexing, humancomputer interaction and vehicle navigation [1, 2]. In general, motion estimation in a typical visual tracking system
can be formulated as a dynamic state estimation problem: xt = f (xt −1, vt −1) and zt = h(xt , wt ), where xt is the current
state, f is the state evolution function, vt − 1 is the evolution process noise, zt is the current observation, h denotes
the measurement function, and wt is the measurement noise. The task of motion estimation is usually completed by
utilizing predictors such as Kalman filters [3, 4, 5] particle filters [6, 7, 8] or linear regression techniques [9]. This is
commonly further enhanced by assuming that motion is always governed by Gaussian distribution based on Brownian
motion or constant velocity motion models [2, 10].
While this assumption holds true to a certain degree of smooth motion, it tends to fail in the case of abrupt
motion such as fast motion (e.g. the movement of ball in sport events), camera switching (tracking of subject in a
camera topology), low frame-rate video as illustrated in Fig. 1. The main reason is that the state equation could not
cope with the unexpected dynamic movement, e.g. sudden or sharp changes of the camera/object motion in adjacent
frames. Nonetheless, such sampling-based solutions also suffered from the well-known local trap problem and particle
I Corresponding
Author: Mei Kuan Lim (email: [email protected]; phone/fax: +0060379676433)
1
M.K. Lim et al. / Information Sciences 00 (2014) 1–16
2
Figure 1. Example of the abrupt motion in different scenarios. Top: Abrupt Motion with Inconsistent Speed. Middle: Switching of Camera during
a Boxing Game. Bottom: Low Frame Rate of Video due to Downsampling.
degeneracy problem. In order to handle these issues, one of the earliest work [11] considered tracking in low frame
rate video. The work considered tracking in low frame rate as to abrupt motion and proposed a cascade particle filters
to tackle this issue. This is, then, followed by a number of sampling strategies [12, 13, 14, 15, 16] incorporated
into Markov Chain Monte Carlo (MCMC) tracking framework. Their method alleviates the constant velocity motion
constraint in MCMC by improvising the sampling efficiency.
These aforementioned works have shown satisfactory results in tracking abrupt motion, however, we observed that
most of the work had been focused on employing different sampling strategy into the Bayesian filtering framework.
There are clear trends of increased complexity; as methods have gotten more complicated to cope with more difficult
tracking scenarios. Often these sophisticated methods compensate the increased in complexity in a certain aspect of
the algorithm by reducing the other aspect of it. For example, the increased number of subregions for sampling to cope
with the variation of abrupt motion is compensated by using a smaller number of samples to reduce, if not maintaining
the computational cost incurred. However, are these complex and sophisticated methods really necessary?
Recently, Particle Swarm Optimization (PSO) [17, 18, 19, 20, 21], a new population based stochastic optimization
technique, has received more and more attention because of its considerable success. Unlike the independent particles
in the particle filter, the particles in PSO interact locally with one another and with their environment in analogy with
the cooperative and social aspects of animal populations, for example as found in birds flocking. With this, Li et al.
[22] employed PSO in contour tracking problem to handle the abrupt motions. However, in the PSO method, there is
a possibility that most samples will get trapped in a few strong local maxima. Hence, the PSO method fails to track
highly abrupt motions.
In this paper, we proposed SwaTrack - Swarm intelligence-based Tracking algorithm to handle the abrupt motion.
Our contributions are firstly, in contrast to the conventional abrupt motion solutions that based on different sampling
methods in Bayesian filtering which are computational expensive, we deem tracking as an optimisation problem
and adopted particle swarm optimisation algorithm soley as the motion estimator. In particular, we replace the state
equation, xt = f (xt − 1, vt − 1) with a novel velocity model in the PSO. Secondly, we introduced Dynamic Acceleration
Parameters (DAP) and Exploration Factor (EF ) into the PSO framework to avoid the swarm explosion and divergence
problem in tracking highly abrupt motion. While the PSO is not new, it is the innovating idea of combining the DAP
and EF in an ingenious way to handle the abrupt motion which so far no existing works are found. Experimental
results using a large scale of public datasets and comparison with the state-of-the-art algorithms have shown the
effectiveness and robustness of the proposed method in terms of dataset unbiased, different size of object and recovery
from error.
The rest of this paper is organised as follows. In Section 2, we provide the background work in tracking abrupt
motion. The PSO is revisited in Section 3 and its limitation to handle abrupt motion. Proposed work is detailed in
2
M.K. Lim et al. / Information Sciences 00 (2014) 1–16
(a) Particle degeneracy
3
(b) Trapped in local optima
Figure 2. Known problem of sampling-based tracking such as particle filter tracking and its variation (a) particle degeneracy problem (b) trapped
in local optima.
Section 3 while experimental results and discussion are given in Section 5. Finally, conclusion is drawn in Section 6.
2. Related Work
While considerable research efforts exist in relation to visual tracking, only a handful corresponds to abrupt motion
[23, 14, 24, 16]. Abrupt motion can be defined as situations where the objects motion changes at adjacent frames with
unknown pattern in scenarios such as i) partially low-frame rate, ii) switching of cameras view in a topology network
or iii) the irregular motion of the object. Therefore conventional sampling-based solutions that assume Gaussian
distribution based on Brownian motion or constant velocity motion models tend to fail in this area as illustrated in
Fig. 2 . For a complete review on the general visual tracking, we encourage the reader to [1, 2] while this literature
review section will only focus on work that handle abrupt motion.
In the recent work, Markov Chain Monte Carlo (MCMC) has been proposed to overcome the computational
complexity in PF when the state space increases [25]. While MCMC methods cope better in a high-dimensional state
space, a common problem is the need to have a large number of samples, especially when tracking abrupt motion.
Thus, to deal with abrupt motion, there has been a handful of work which introduced modifications and refinements
on the conventional MCMC. Kwon et al. in [12], introduced an integration of the Wang-Landau algorithm into the
MCMC tracking framework to track abrupt motion. Their method alleviates the constant-velocity motion constraint in
MCMC by improvising the sampling efficiency using the proposed annealed Wang-Landau Monte Carlo (A-WLMC)
sampling method. The A-WLMC method increases the flexibility of the proposal density in MCMC by utilising the
likelihood and density of states terms for resampling. Then, another variation of MCMC known as the interactive
MCMC (IMCMC) was proposed [13], where multiple basic trackers are deployed to track the motion changes of
a corresponding object. The basic trackers which comprise of different combinations of observation and motion
models are then fused into a compound tracker using the IMCMC framework. The exchange of information between
these trackers has been shown to cope with abrupt motion while retaining the number of samples used. In another
advancement, an intensively adaptive MCMC (IA-MCMC) sampler [16] has been proposed. Their method further
reduces the number of samples required when tracking abrupt motion by performing a two-step sampling scheme;
the preliminary sampling step to discover the rough landscape of the proposal distribution (common when there is
large motion uncertainty in abrupt motion) and the adaptive sampling step to refine the sampling space towards the
promising regions found by the preliminary sampling step. In another attempt for effective sampling of abrupt motion,
[14] proposed the N-fold Wang-Landau (NFWL) tracking method that uses the N-fold algorithm to estimate the
density of states which will then be used to automatically increase or decrease the variance of the proposal distribution.
The NFWL tracking method copes with abrupt changes in both position and scale by dividing the state space into larger
number of subregions. Therefore, the N-fold algorithm was introduced during sampling to cope with the exponentially
increased number of subregions.
Motivated by the meta-level question prompted in [26] on whether there is a need to have more training data
or better models for object detection, we raise similar question in the domain of this area; will continued progress
in visual tracking be driven by the increased complexity of tracking algorithms? As indicated in the earlier section,
often these sophisticated methods compensate the increased in complexity in a certain aspect of the algorithm by
3
M.K. Lim et al. / Information Sciences 00 (2014) 1–16
4
reducing the other aspect of it. Furthermore, according to [10], different scenarios require different dynamic models.
If motion models only work sometimes, on a particular scenario, then how far should the increased in complexity of
tracking algorithms be, in order to cope with the challenges of real-time tracking scenarios? Should we look into less
complex methods instead, since motion models only work sometimes? Hence, we study a simple and yet effective
algorithm, the SwaTrack that utilise the PSO framework to effectively handle the abrupt motion using the particles
sharing information themselves. We deem the tracking as an optimisation problem, and hence the proposed method
is dataset unbias and able to recover from error.
Work that are considered similar to us are [22, 27]. Li et al [22] proposed a two-layer tracking framework in
which PSO is successfully combined with a level set evolution. In the first layer, PSO is adopted to capture the global
motion of the target and to help construct the coarse contour. In the second layer, level set evolution based on the
coarse contour is carried out to track the local deformation. However, there is a possibility that most samples will get
trapped in a few strong local maxima. Hence, the PSO method fails to track highly abrupt motions. Zhang et al. [27]
proposed a swarm intelligence based particle lter algorithm with a hierarchical importance sampling process which
is guided by the swarm intelligence extracted from the particle conguration, and thus greatly overcome the sample
impoverishment problem suffered by particle lters. Unfortunately, it cannot be a perfect solution either as it is still
depends on the Gaussian approximation. In order to handle this issue, we introduce 1) DAP - a dynamic acceleration
parameters by utilising the averaged velocity information of the particles; and 2) EF - a mechanism that balance the
tradeoffs between exploration and exploitation of the swarm into the PSO framework. With this, the SwaTrack will
able to alleviate these problems and track in highly abrupt motion or recover from tracking error.
3. Particle Swarm Optimisation Revisit
Particle Swarm Optimisation (PSO) - a population-based stochastic optimisation technique was developed by
Kennedy and Eberhart in 1995 [17]. It was inspired by the social behaviour of a flock of birds. The operation of
the PSO can be described as let us assume an n-dimensional search space, S ⊂ Rn and a swarm comprising of I
particles. Each particle represents a candidate solution to the search problem and is associated to a fitness function
(cost function), f : S → R. At every kth iteration, each particle is represented as {xki }i=1,...,I at kth iteration, where
k = 1, 2, ...K. Each particle, xki has its own velocity, v(xki ) and a corresponding fitness value (cost), f (xki ). The
best position encountered by the ith particle (personal best) will be denoted as {p(xki )}i=1,...,I and the fitness value as
pBestki = f (p(xki )). For every kth iteration, the particle with the best fitness value will be chosen as the global best
and is denoted as the index of the particle is denoted as f . Finally, the overall best position found by the swarm will
denote as gBestki = f (p(xkg )). The PSO algorithm is shown in Algo 1:
1. Initialisation, at iteration k = 0
• Initialise a population of I particles, {xki }i=1,...,I with positions, p(xki ),at random within the search space, S.
• Initialise the velocities, v(xki ) at random within [1, −1].
• Evaluate the fitness value of each particle and identify their personal best pBestki = f (p(xki )).
• Identify the global best gth particle and update the global best information, gBestk = f (p(xkg )).
2. Repeat at iteration k = 1, 2, ...K until the stopping criterion is met.
• For each ith particle, compute the new velocity according to:
vik+1 = [(ω ∗ vik ) + (c1 ∗ r1 ∗ (pBestki − xki )) + (c2 ∗ r2 ∗ (gBestk − xki )]
(1)
• For each ith particle, move them using the computed new velocity as in Eq. 2 and update its position
according to:
i
p(xk+1
) = p(xk )i + vik+1
(2)
i
• For each ith particle, ensure the newly computed position is within state space, p(xk+1
)⊂S
• Update pBestki , p(pBestki ), g, gBestk , p(gBestkg ).
4
M.K. Lim et al. / Information Sciences 00 (2014) 1–16
5
• Check for Convergence
• End Repeat
The parameters ω, c1 and c2 are positive acceleration constants used to scale the influence of the inertia, cognitive
and social components respectively; r1 , r2 ⊂ (0, 1) are uniformly distributed random numbers to randomise the search
exploration.
3.1. Limitations for Abrupt Motion Tracking
However, the traditional PSO does not able to cope with abrupt motion, due to few reasons:
Constant Acceleration Parameters: The parameter c1 controls the influence of the cognitive component, (c1 ∗
r1 ∗ (pBestki − xki )) which represents the individual memory of particles (personal best solution). A higher values gives
more emphasize on the cognitive component and vice versa. In contrast, the parameter c2 controls the influence of the
social component,(c2 ∗ r2 ∗ (gBestk − xki ) which indicates the joint effort of all particles to optimize a particular fitness
function, f .
A drawback of the current PSO is the lack of a reasonable mechanism to effectively handle the acceleration
parameters (ω, c and r); which always set to a constant variable. For example, many applications of the PSO and
its variant set these parameters c1 = c2 = 2.00, which gives the stochastic factor a mean of 1.0 and giving equal
importance to both the cognitive and social components [17]. This limits the search space and therefore could not
cope with abrupt motion. Therefore, it is essential to have dynamic acceleration parameters that able to cope with
unexpected dynamic motion.
Tradeoffs in Exploration and Exploitation: The inertia weight, ω serves an important value that directs the
exploratory behaviour of the swarms. High inertia weights accentuates the influence of previous velocity information
and force the swarm to explore a wider search space; while a decreasing inertia weight reduces the influence of
previous velocity and exploit a smaller search space. Often, the inertia value that controls the influence of the previous
velocity is set to ω ∈ [0.8, 1.2] [28]. Recently, decaying inertia weigh, ω = 0.9 → 0.4 has been proposed and tested,
with the aim of favouring global search at the start of the algorithm and local search later. While these settings has
been tested to work well in other optimisation problems, one must note that it is not applicable in tracking abrupt
motion where the dynamic change is unknown. Therefore, a solution that able to handle the tradeoffs between the
exploration and exploitation will be beneficial.
4. Proposed Method - SwaTrack
In this section, we present our proposed SwaTrack - a variant of the traditional PSO to track target with arbitrary
motion. Particularly, we will discuss how the ingenious combination of Dynamic Acceleration Parameters (DAP) and
Exploration Factor EF in the proposed PSO framework has alleviated the problem of swarm explosion and divergence
problem.
4.1. Dynamic Acceleration Parameters (DAP)
PSO is a population based stochastic optimization technique. Since PSO is an iterative solution, efficient convergence is an important issue toward a real time abrupt motion estimation system. However, the strict threshold of
the conventional PSO velocity computation as in Eq. 2 will always lead to particles converging to a common state
estimate (the global solution). One reason is that the velocity update equation uses a decreasing inertia value which
indirectly forces the exploration of particles to decrease over the iterations. On the other hand, an increasing inertia
value will lead to swarm explosion in some scenarios.
To overcome this, we introduce DAP - a mechanism to self-tune the acceleration parameters by utilising the
averaged velocity information of the particles. That is, we normalised the acceleration parameters so that they can
be compared fairly with respect to the estimated velocity, p(w ∩ c1 ∩ c2 ) = 1. The fitness function information is
incorporated in the PSO framework in order to refine the acceleration parameters dynamically rather than a static
value. The basic idea is that when an object moves consistently in a particular direction, C → 1, the inertia, w and
cognitive weight, c1 values are increased to allow resistance of any changes in its state of motion in the later frames.
Otherwise when C → 0, the social weight c2 is increased by a stepsize to reduce its resistance to motion changes
5
M.K. Lim et al. / Information Sciences 00 (2014) 1–16
6
as Eq. 3. The increase of the social weight allows global influence and exploration of the search space, which is
relevant when the motion of a target is dynamic. The exploitation within nearby regions is reasonable when an object
is moving with small motion.
c1 = c1 + m; c2 = c2 − m; ω = ω + m
c1 = c1 − m; c2 = c2 + m; ω = ω − m
∗ condition on p(ω ∩ c ∩ c ) = 1
1
2
C→1
otherwise
(3)
The C is estimated by computing the frequency of the change in the quantised motion direction of the object;
C → 1 represent consistent motion with minimal change of direction, while C → 0 represent inconsistent or dynamic
motion.
4.1.1. Exploration Factor (EF )
The normalisation of DAP as 1 will restrict the overall exploration of the state to a certain degree. Hence, we
refine this by introducing the exploration factor, EF which serves as a multiplying factor to increase or decrease the
exploration. We define the exploration factor, EF as the parameters that adaptively
1. increase the exploration with high variance, and
2. increase the exploitation with low variance.
By utilising these exploitation and exploration abilities, our method is capable to recover from being trapped into a
common state (local optima). Thus, the proposed SwaTrack copes better with both the smooth and abrupt motion. At
every kth iteration, the quality of the estimated position upon convergence (global best) is evaluated using its fitness
value. f (gBestkg ) → 1 indicates high likelihood whereas f (gBestkg ) → 0 indicates low likelihood or no similarity
between an estimation and target.
When f (gBestkg ) ≤ T MinF , where T MinF is a threshold, we know that there is low resemblance between the estimation and target and most likely the proposal distribution may not match the actual posterior. Thus in this scenario, EF
is increased alongside the maximum number of iterations, K by empirically determined step sizes m and n respectively. This drives the swarm of particles to explore the region beyond the current local maxima (increase exploration).
However, when an object has left the scene, K tend to increase continuously and cause swarm explosion. Thus, we
limit K ⊂ S.
E α f (gBestkg )
(4)
f (gBestkg )
In another scenario, where
≥ T MinF , EF is decreased alongside K; constraining the search around the
current local maximum (exploitation). In a straightforward manner, it would always be best to drive particles at their maximum velocity to provide a reasonable bound in order to cope with the maximum motion change. However, this is not reasonable for real-time applications, as it incurs unnecessary computational cost, especially when the motion is not abrupt. Thus, by introducing the adaptive scheme to automatically adjust the exploration and exploitation behaviour of the swarm, SwaTrack is able to cope with both smooth and abrupt motion at a lower computational cost. Also, we observed that since the particles in SwaTrack exchange information with one another, we only need a few particles for sampling.
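As a concrete reading of this adaptation, the sketch below adjusts EF and the iteration budget K from the global-best fitness. The threshold value, the step sizes, and the caps (which stand in for keeping the search within S) are illustrative assumptions, not values prescribed by the paper.

```python
def update_exploration(EF, K, gbest_fitness, T_MinF=0.3, m=1.0, n=5,
                       K_max=70, EF_min=1.0, EF_max=50.0):
    """Adapt the exploration factor EF and the iteration budget K.

    A weak global-best fitness widens the search (more exploration, more
    iterations); a strong one narrows it. T_MinF, the step sizes m and n,
    and the caps are empirical/illustrative values.
    """
    if gbest_fitness <= T_MinF:          # poor match: explore further
        EF, K = EF + m, K + n
    else:                                # good match: exploit locally
        EF, K = EF - m, K - n
    EF = min(max(EF, EF_min), EF_max)
    K = min(max(K, 1), K_max)            # cap K to avoid swarm explosion
    return EF, K
```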
4.2. Novel Velocity Model
With the introduction of DAP and EF, the novel velocity model, v̀, in our PSO framework is represented as:
v̀ik+1 = EFk [(ω ∗ v̀ik ) + (c1 ∗ r1 ∗ (pBestki − xki )) + (c2 ∗ r2 ∗ (gBestk − xki ))]        (5)
where EFk is the exploration factor at iteration k, and ω, c1 , c2 and r1 , r2 are the acceleration parameters and random coefficients, with the condition p(ω ∩ c1 ∩ c2 ) = 1. The normalised condition applied to the acceleration parameters allows on-the-fly tuning of these parameters
according to the quality of the fitness function. The fitness function used here is the normalised distance measure between the appearance model of an estimation and that of the object-of-interest. The fitness value of a particle, f (xki ), measures how well an estimate of the object's position matches the actual object-of-interest, where 1 represents the highest similarity between an estimation and the target and 0 represents no similarity.
At every kth iteration, each particle varies its velocity according to Eq. 5 and moves its position in the search space according to:
p(xik+1 ) = p(xik ) + v̀ik+1        (6)
Note that the motion of each particle is directed towards the promising region found by the global best, gBestk , from the previous iteration, k − 1.
1. Initialisation, at iteration k = 0
• Initialise a population of I particles, {xki }i=1,...,I , with positions p(xki ) at random within the search space, S.
• Initialise the velocities, v(xki ), at random within [−1, 1].
• Evaluate the fitness value of each particle and identify their personal best pBestki = f (p(xki )).
• Identify the global best gth particle and update the global best information, gBestk = f (p(xkg )).
2. Repeat at iteration k = 1, 2, ...K until the stopping criterion is met.
• For each ith particle, compute the new velocity according to:
v̀ik+1 = Ek [(ω ∗ v̀ik ) + (c1 ∗ r1 ∗ (pBestki − xki )) + (c2 ∗ r2 ∗ (gBestk − xki ))]
where,
p(w ∩ c1 ∩ c2 ) = 1
• If f (gBestkg ) ≤ T MinF
then E = E + n, K = K + n
else
then E = E − n, K = K − n
end If
• If C → 1
then c1 = c1 + m, c2 = c2 − m, ω = ω + m
else
then c1 = c1 − m, c2 = c2 + m, ω = ω − m
end If
• For each ith particle, move it using the computed new velocity as in Eq. 2 and update its position according to:
p(xik+1 ) = p(xik ) + v̀ik+1
• For each ith particle, ensure the newly computed position is within the state space, p(xik+1 ) ⊂ S
• Update pBestki , p(pBestki ), g, gBestk , p(gBestkg ).
• Check for Convergence
• End Repeat
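The listed procedure can be condensed into the following minimal Python sketch of the control flow (Eqs. 5 and 6). Parameter values mirror the settings reported in Section 5.1; the convergence test, the position clamping, the iteration cap, and the omission of the DAP update of (ω, c1, c2) shown earlier are simplifications of ours, so this is an illustration rather than the authors' implementation.

```python
import random

def swa_track(fitness, init_pos, search_space, I=15, K=30, K_max=70,
              w=0.4, c1=0.3, c2=0.3, EF=25.0, T_MinF=0.3, n=5, eps=1e-3):
    """Minimal sketch of one SwaTrack-style PSO search for a 2-D position.

    fitness(pos) scores a candidate in [0, 1]; search_space is
    ((x_min, x_max), (y_min, y_max)).
    """
    (x_lo, x_hi), (y_lo, y_hi) = search_space
    pos = [[random.uniform(x_lo, x_hi), random.uniform(y_lo, y_hi)] for _ in range(I)]
    pos[0] = list(init_pos)                              # seed with previous estimate
    vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(I)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = max(range(I), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    k = 0
    while k < K:
        for i in range(I):
            r1, r2 = random.random(), random.random()
            for d in range(2):                            # Eq. 5: velocity update
                vel[i][d] = EF * (w * vel[i][d]
                                  + c1 * r1 * (pbest[i][d] - pos[i][d])
                                  + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]                    # Eq. 6: position update
            pos[i][0] = min(max(pos[i][0], x_lo), x_hi)   # keep the particle in S
            pos[i][1] = min(max(pos[i][1], y_lo), y_hi)
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
        if gbest_f <= T_MinF:                             # adapt EF and K (Sec. 4.1.1)
            EF, K = EF + 1.0, min(K + n, K_max)
        else:
            EF, K = max(EF - 1.0, 1.0), max(K - n, k + 1)
        if gbest_f >= 1.0 - eps:                          # converged: terminate early
            break
        k += 1
    return gbest, gbest_f
```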
5. Experimental Results & Discussion
In this section, we verify the feasibility and robustness of our proposed method in handling abrupt motion via various experiments using public and synthetic datasets. The experiments were performed on an Intel Core 2 processor with a C++ and OpenCV implementation.
5.1. Experimental Settings
We assume the object-of-interest to be known and hence manually initialise the 2D position of the target in the first frame, as automatic initialisation is a research topic in itself. The object is represented by its appearance model, which comprises an HSV histogram with uniform binning (32 bins). The normalised Bhattacharyya distance measure is used as the fitness value (cost function) to measure the quality of the estimation, where 1 represents the highest similarity between an estimation and the target and 0 represents no similarity. Here, the initial values for SwaTrack are EF = 25, ω = 0.4, c1 = 0.3, c2 = 0.3, K = 30 and I = 15, respectively. These values are set empirically and are not critical; the adaptive mechanism in the proposed method adjusts these parameters according to the quality of the observation model.
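For an OpenCV-based pipeline such as the one described, the fitness of a candidate position could be computed roughly as follows. Only the 32-bin HSV histogram and the normalised Bhattacharyya measure come from the text; the window size, the use of the hue channel alone, and the helper names are illustrative assumptions.

```python
import cv2

def hsv_histogram(patch, bins=32):
    """32-bin hue histogram of a BGR patch (the appearance model)."""
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180])
    return cv2.normalize(hist, hist).flatten()

def fitness(frame, centre, target_hist, win=(24, 24)):
    """Similarity in [0, 1] between a candidate window and the target model
    (1 = highest similarity, 0 = none, as in the text)."""
    x, y = int(centre[0]), int(centre[1])
    half_w, half_h = win[0] // 2, win[1] // 2
    patch = frame[max(y - half_h, 0):y + half_h, max(x - half_w, 0):x + half_w]
    if patch.size == 0:
        return 0.0
    d = cv2.compareHist(target_hist, hsv_histogram(patch),
                        cv2.HISTCMP_BHATTACHARYYA)   # 0 = identical histograms
    return 1.0 - min(max(float(d), 0.0), 1.0)
```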
We manually labelled the ground truth of the nth object. The ground truth is described as bounding box information, Xn = (xn , yn , wn , hn ), i.e. the x-position, y-position, width and height. We compare against the state-of-the-art results of PSO, PF [30, 31], BDM [23], FragTrack [32], A-WLMC [12] and CT [19] in terms of both detection accuracy (%) and processing time (milliseconds per frame).
Figure 3. Sample shots of the dataset employed.
5.2. Dataset
We tested our proposed method on a number of public and synthetic datasets. In the quantitative experiments, we employed 5 public datasets (TableTennis, Youngki, Boxing and Tennis) as illustrated in Fig. 3. Rapid Motion of Small Object: The TableTennis (TableT) dataset consists of 5 video sequences to test the effectiveness of the proposed method in tracking a small object (e.g. a table tennis ball) that exhibits fast motion. Sequence 1, the SIF Table Tennis sequence (available at http://www.sfu.ca/~ibajic/datasets.html), is a widely used dataset in the area of computer vision, especially for the evaluation of detection and tracking methods. This sequence has a complex, highly textured background and exhibits camera movement with some occlusion between the ball and the player's arm. Sequence 2 is a sample training video from the ITTF video library (http://www.ittf.com), created to expose players, coaches and umpires to issues related to the service action. Although this sequence is positioned to provide the umpire's point of view of a service, it is very challenging as the size of the tennis ball is very small, about 8×8 to 15×15 pixels at an image resolution of 352×240. The video comprises 90 frames, including 10 frames in which severe occlusion happens, where the ball is hidden by the player's arm. Sequence 3 (http://www.youtube.com/watch?v=C9D88AcmLjI) is a match obtained from a publicly available source. In this sequence, the tennis ball is relatively large as it features a close-up view of the player. However, there are several frames where the ball appears to be blurred
due to the low frame rate and abrupt motion of the tennis ball. Sequences 4 and 5 (obtained from http://xgmt.open.ac.uk) are captured at a higher frame rate, thus the spatial displacement of the ball from one frame to another appears smaller (less abrupt) and the ball is clearer. This is to test the ability of the proposed method to handle the normal visual tracking scenario. Since the dataset did not provide any ground truth, we manually labelled the ground truth of the nth object. The ground truth is described as bounding box information, Xn = (xn , yn , wn , hn ), i.e. positions in the x-dimension and y-dimension, width and height. If the paper is accepted, we will make the dataset and ground truth publicly available. Switching Camera: The Youngki and Boxing datasets [14] comprise frames edited from different cameras (camera switching). This results in an object appearing at different parts of the image. Low Frame Rate: The Tennis dataset [14] comprises downsampled data to simulate abrupt change. The frames are downsampled from a video with more than 700 original frames by keeping one frame in every 25 frames. The rapid motion of the tennis player from one frame to another due to the downsampling makes tracking extremely difficult. Downsampling is done to simulate abrupt motion at low frame rates.
In the qualitative experiments, on top of the 5 public datasets, we included 1 more dataset that consists of 3 video sequences to further test the robustness of the proposed method. Abrupt Motion with Inconsistent Speed: The first sequence of the dataset tracks a synthetic ball which moves randomly across the sequence with inconsistent speed, whilst the second sequence tracks a soccer ball which is being juggled in a free-style manner in a moving scene with a highly textured background (grass). Multiple Targets: The third sequence contains two simulated balls moving at random. We intend to demonstrate the capability of the proposed system to track multiple targets, whilst most of the existing solutions focus on a single target.
5.3. Quantitative Results
5.3.1. Experiment 1: Detection Rate
Detection rate refers to the correct number and placement of the objects in the scene. For this purpose, we denote the ground truth of the nth object as GTn and the output of the tracking algorithms for the nth object as ξn . We describe the ground truth and tracker output of each nth object as bounding box information, Xn = (xn , yn , wn , hn ), i.e. x-position, y-position, width and height. The coverage metric determines if a GT is being tracked, or if a ξ is tracking accurately. In [33], it is shown that the F-measure, F, is suited to this task, as the measure is 1.0 when the estimate ξn overlaps perfectly with the ground truth GTn . Two fundamental measures known as precision and recall are used to determine the F-measure.
Recall: Recall measures how much of the GT is covered by the ξ and takes a value of 0 if there is no overlap and 1 if they fully overlap. Given a ground truth GTn and a tracking estimate ξn , the recall ℜn is expressed as:
ℜn = |ξn ∩ GTn | / |GTn |        (7)
Precision: Precision measures how much of the ξ covers the GT and takes a value of 0 if there is no overlap and 1 if they fully overlap. The precision ℘n is expressed as:
℘n = |ξn ∩ GTn | / |ξn |        (8)
F-measure: The F-measure Fn is expressed as:
Fn = 2ℜn ℘n / (ℜn + ℘n )        (9)
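A direct transcription of Eqs. 7-9 for axis-aligned bounding boxes, together with the PASCAL-style F > 0.5 test used in the coverage test that follows, is sketched below; the function names are ours.

```python
def intersection_area(a, b):
    """Overlap area of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    return iw * ih

def coverage_scores(gt, est):
    """Recall (Eq. 7), precision (Eq. 8) and F-measure (Eq. 9) for one object."""
    inter = intersection_area(gt, est)
    gt_area, est_area = gt[2] * gt[3], est[2] * est[3]
    recall = inter / gt_area if gt_area > 0 else 0.0
    precision = inter / est_area if est_area > 0 else 0.0
    f = 2 * recall * precision / (recall + precision) if recall + precision > 0 else 0.0
    return recall, precision, f

def correctly_tracked(gt, est, thresh=0.5):
    """PASCAL-style coverage test: tracked if the F-measure exceeds the threshold."""
    return coverage_scores(gt, est)[2] > thresh
```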
Coverage Test: In this experiment, we apply the F-measure according to the scoring of the well-known PASCAL challenge [34]. That is, if the Fn of the nth object is larger than 0.5, the estimation is considered correctly tracked in the frame. Table 1 shows the detection accuracy of the benchmarked tracking algorithms for all 8 test sequences. Overall, the experimental results show that the average tracking accuracy of the proposed method surpasses most of the state-of-the-art tracking methods, with an average detection accuracy of 91.39%. For the test sequences TableT1, TableT2, TableT5, Youngki and Tennis, SwaTrack generates the best tracking results, and it ranks second for TableT4 and Boxing, respectively.
Table 1. Experiment results - Comparison of the Detection Rate (in %)

Method            TableT1  TableT2  TableT3  TableT4  TableT5  Average
PSO                  70.1     83.1     58.2     59.6     60.3    66.26
PF [30, 31]          58.4     69.8     52.1     47.3     34.5    52.42
BDM [23]             68.3     53.4     67.3     73.2     64.2    65.28
FragTrack [32]       64.9     24.1     55.3     57.2      9.7    42.24
A-WLMC [12]          47.2      3.2      8.7      6.9      5.4    14.28
CT [19]              72.3      4.3     24.5     98.2     36.3    47.12
SwaTrack             87.8     93.1     74.1     97.3     72.8    85.02
Meanwhile, among the methods that are not built on a sophisticated motion model, FragTrack [32] employs a refined appearance model that adapts to the changes of the object. Even so, it still performs poorly in this condition when compared to the others, with an average accuracy of 37.19%. PF, on the other hand, gives a detection accuracy of 85.6%. This is expected, as the PF algorithm is known to be constrained to a fixed Gaussian motion model. Once PF has lost track of the object, it has the tendency to continue searching for the object in the wrong region, as shown in Fig. 4a, leading to error propagation and an inability to recover from incorrect tracking. The proposed method copes better with abrupt motion and is not prone to being trapped in local optima, as shown in Fig. 4b, while the MCMC tracking method is still only able to recover to a certain degree, as shown in Fig. 6.
Table 2. Experiment results - Comparison of the Detection Rate (in %)

Method            Tennis  Youngki  Boxing  Average
PSO                 87.3     87.1    82.4     85.6
PF [30, 31]         67.3     47.2    16.3     43.6
FragTrack [32]      20.6     27.5    48.3    32.13
A-WLMC [12]         95.1     86.8    98.1    93.33
SwaTrack            98.3     98.7    96.3    97.76
Dataset Bias: The problem of dataset bias was highlighted in [35], where the authors ask, “Is it to be expected that when training on one dataset and testing on another there is a big drop in performance?” Here, we replicate a similar scenario in the tracking domain and observe that although the A-WLMC method [12] performs well on the TableT4 and Youngki sequences, it does not produce consistent results when tested across the other datasets, as shown in Tables 1-2. For example, the average detection accuracy of A-WLMC is 14.28% for the TableT dataset and 93.33% for the Tennis, Boxing and Youngki datasets, respectively. This indicates that the A-WLMC solution [12] is dataset-biased, as it seems to work well only on its proposed dataset but performs poorly when employed on a different dataset. Perhaps this is due to the motion model employed by these tracking methods, which works well only in certain scenarios, alluding to the notion in [10] that different motions require different motion models. This is indeed not the case for our proposed SwaTrack. Our overall detection rates are 85.02% and 97.76%, respectively. For all sequences that exhibit different challenging conditions, e.g. rapid motion (TableT1-5) and low frame rate (Tennis), SwaTrack has shown its robustness and copes with ease.
We further investigated the dataset bias problem and found that there is an influence of object size on the detection rate. For instance, the A-WLMC algorithm [12] performs poorly for sequences in which the resolution of the object-of-interest is small, such as in the Table Tennis dataset, and performs surprisingly well when the object is large, such as in the Youngki, Boxing and Tennis datasets, respectively. This indicates the need for a better representation of the object for a more accurate acceptance and rejection of estimations in the MCMC.
5.3.2. Experiment 2: Computational Cost
Fig. 5 shows the comparison of the proposed method with state-of-the-art solutions in terms of time complexity. In terms of processing time, the SwaTrack algorithm requires the least, with an average of 63 milliseconds per frame. In contrast, MCMC-based solutions such as A-WLMC [12] and the PF [30, 31] require more processing time. This is likely due to the inherent correlation between MCMC samplers, which suffer from slow convergence when an object has not been tracked accurately. Notice that in scenarios where the MCMC requires high
(a) Sample detections from PF tracking.
(b) Sample detections from SwaTrack tracking.
Figure 4. Sample output demonstrating incorrect tracking due to being trapped in local optima. The aim is to track the person with dark skin and purple shorts. In Frames 449-451 (a), PF loses track of the object due to sampling from an incorrect distribution during abrupt motion. On the other hand, the results in (b) demonstrate the capability of SwaTrack in dealing with the non-linear and non-Gaussian motion of the object.
Figure 5. Time Complexity. This figure illustrates the comparison of processing time (milliseconds per frame) between the proposed SwaTrack,
standard PSO, PF, BDM, FragTrack, A-WLMC and CT
processing time, the accuracy of the MCMC is minimal; the increase in computational cost is due to the increase of the search space when the observation model is unlikely to represent the target. Note that the optimal number of samples deployed in the PF and MCMC throughout the sequences has been selected empirically; it ranges from 150 to 1000 particles in PF and 600 to 1000 particles in MCMC with 600 iterations, while SwaTrack uses 10-50 particles with 5-70 iterations. Intuitively, an increase in the number of samples leads to an increase in computational cost, as each particle needs to be evaluated against the appearance observation; this explains the minimal processing time required by the proposed SwaTrack.
As shown in Tables 1-2, in the cases where the SwaTrack detection rate ranks second, CT [19] and A-WLMC [12] achieve better accuracy, but their average processing times are almost 3x higher than SwaTrack's. Compared to the A-WLMC tracker, which increases its subregions for sampling when the state space increases, our method adaptively increases and decreases its proposal variance for a more effective use of samples. Thus the processing time required is much lower than that of the other methods. The advantage of the dynamic mechanism is shown when comparing the processing time of SwaTrack to PSO (an average of 195.20 milliseconds per frame); the processing time of PSO is 3x more than that of SwaTrack. In summary, the experimental results demonstrate the capability of the proposed system to cope with a variety of scenarios that exhibit highly abrupt motion. The adaptation of a stochastic optimisation method to tracking abrupt motion has been observed to incur little additional processing cost, yet at the same time achieves fair tracking accuracy compared to the more sophisticated methods. Thus, the preliminary results give a promising indication that sophisticated tracking methods may not be necessary after all.
5.4. Qualitative Results
Low Frame Rate: This sequence aims to track a tennis player in a low-frame-rate video, which is down-sampled from a 700-frame sequence by keeping one frame in every 20 frames. Here, the target (player) exhibits frequent abrupt changes which violate the smooth-motion and constant-velocity assumptions. Thus, motion that is governed by a Gaussian distribution based on the Brownian or constant-velocity motion models will not work in this case. Fig. 6 shows sample shots comparing the performance of conventional PF tracking (500 samples), A-WLMC (600 samples) [14], IA-MCMC (300 samples) [16] and SwaTrack (50 samples). It is observed that the tracking accuracy of SwaTrack is better than that of PF and A-WLMC even when using fewer samples. While the performance of SwaTrack is comparable to IA-MCMC, SwaTrack requires fewer samples and thus less processing. These results further verify that the proposed method is able to track moving targets accurately and effectively, regardless of the variety of changes in the target's motion.
Local Minimum Problem: In this experiment, we aim to test the capability of SwaTrack to recover from incorrect tracking, and in particular the capability of DAP and EF to handle abrupt motion. Fig. 7 shows the result for Youngki, where the camera switches. Due to this phenomenon, the subject has a drastic change of position in adjacent images. It can be seen that SwaTrack is able to cope with this problem.
Secondly, we simulate another challenging scenario in which incorrect tracking is most likely to happen, by sampling frames from 2 different datasets as shown in Fig. 8 (a). The frames in the Boxing sequence are combined in an alternating manner with the frames from the Youngki sequence. In this combined sequence, the object-of-interest, which is highlighted by the ellipse in Frame 1 of Fig. 8 (a), tends to disappear from one frame and re-appear in the subsequent frame interchangeably. From the qualitative results shown in Fig. 8 (b), we observe that the A-WLMC tracker [14] is not robust and does not cope well with inaccurate tracking. When the object-of-interest disappears from the scene (i.e. Frame 77), A-WLMC gives an erroneous estimation of the object. In the subsequent frame, where the object re-appears, A-WLMC has difficulty recovering its tracking, as shown in Frame 78, where the estimation does not fit the actual position of the object accurately. In the subsequent frames, A-WLMC tends to continuously mistrack the object. Although the sampling in A-WLMC adopts a more efficient proposal distribution than the standard PF, it is still prone to a certain degree of being trapped in local optima. Furthermore, A-WLMC utilizes the information of historical samples for intensive adaptation, thus requiring more frames of information to recover from inaccurate tracking. The proposed SwaTrack, on the other hand, is observed to work well in this experiment, where minimal frames are required to recover from erroneous tracking. As shown in Fig. 8 (c), SwaTrack is able to track the object accurately when the object appears or re-appears in the scene (as shown in the even frame numbers). The inaccurate tracking in the odd frame numbers is reasonable, as the object does not appear in those scenes. This is made possible by the information exchange and cooperation
between particles in a swarm, which provides a way to escape local optima and reach the global maximum, leading to an optimised proposal distribution.
Sensitivity to Object Size: We further tested the proposed SwaTrack, PF [30, 31] and A-WLMC [14] on resized sequences of the same data to simulate a scenario in which the object size is smaller. The initial frame size of 360x240 is reduced by half, to 180x120 pixels. From our observations, SwaTrack is the least sensitive to the size of the object-of-interest, while the detection accuracy of A-WLMC is reduced as the size of the object gets smaller. This is due to the robustness of the optimised sampling in SwaTrack compared to the less robust rejection and acceptance method proposed in A-WLMC. The overall detection accuracy of the proposed SwaTrack remains at an average of 90% regardless of the object's size, whereas the detection accuracy of PF and A-WLMC decreases significantly, by more than 25%, when the object's size decreases. Sample output is shown in Fig. 9.
Finally, we evaluated the proposed SwaTrack on videos obtained from YouTube, and the qualitative results are depicted in Fig. 10. Abrupt Motion with Inconsistent Speed: The first sequence aims to track a synthetic ball which moves randomly across the sequence with inconsistent speed, whilst the second sequence tracks a soccer ball which is being juggled in a free-style manner in a moving scene with a highly textured background (grass). It is observed that SwaTrack is able to track the abrupt motion of the balls efficiently. Multiple Targets: In the third sequence, we demonstrate the capability of the proposed system to track multiple targets, namely two simulated balls moving at random, whilst most of the existing solutions focus on a single target.
5.5. Sampling-based vs Iterative-based Solutions
Motivated by the meta-level question prompted in [26] on whether there is a need for more training data or better models for object detection, we raise a similar question in this domain: will continued progress in visual tracking be driven by the increased complexity of tracking algorithms? Intuitively, an increase in the number of samples in sampling-based tracking methods such as PF and MCMC would increase the tracking accuracy. One may also argue that the additional computational cost incurred by the iterative nature of the proposed SwaTrack and MCMC would complement the higher number of particles required by the PF. Thus, in order to investigate whether these intuitions hold true, we perform experiments using an increasing number of samples and iterations. We then observe the behaviour of PF and SwaTrack in terms of accuracy and processing time as the complexity increases. PF is chosen in this test as it bears the closest resemblance to the proposed SwaTrack algorithm, in which a swarm of particles is deployed for tracking.
5.5.1. Sample of Particles vs Accuracy
Particle Filter: In the PF algorithm, we vary the number of samples or particles (i.e. 50, 100, · · · , 2000) used throughout the sequence to determine the statistical relationship between the number of samples and performance. We gauge the performance by the detection accuracy (%) and processing time (in milliseconds per frame). The average performance across all 5 TableT sequences is shown in Fig. 11a. Samples of the performance for sequences TableT1 and TableT2 are shown in Fig. 12.
The results demonstrate that the number of particles used in PF is correlated with the detection accuracy, where an increase in the number of particles tends to increase the accuracy. Similarly, the average time taken also increases exponentially as the number of particles used in PF grows. This reflects the fact that as the number of particles increases, the estimation processes, which include object representation, prediction and update, also multiply. However, it is observed that PF reaches a plateau after hitting the optimal accuracy, after which any increase in the number of particles either decreases the accuracy or yields no significant improvement. From Fig. 11, we can see that the detection accuracy decreases after the optimal solution, which is attained when the number of particles is 600. Our findings challenge the underlying assumption that an increase in the number of particles will lead to an increase in accuracy. Thus, we raise the question of whether complex tracking methods (where, in this context, complexity is proportional to the number of particles deployed) are really necessary. Also, the best parameter configurations may differ from one sequence to another due to the different motion behaviour portrayed by the object in each sequence. For example, in Fig. 12(a), the optimal setting is 250 particles, which produces a detection accuracy of 55% and takes 1.78 seconds of processing time, whilst the second sequence has a different optimal setting of 150 particles, as shown in Fig. 12(b). This advocates the notion in [10] that motion models indeed only work sometimes.
SwaTrack: Similarly, we perform the different parameter settings test on the proposed SwaTrack algorithm; the average results are shown in Fig. 11b, while Fig. 13 illustrates the results for TableT1 and TableT2.
In addition to the number of particles used in PF tracking, the proposed SwaTrack has an additional influencing parameter, the maximum number of iterations. We vary the number of particles against the number of iterations for a fair evaluation. As illustrated on the left y-axis of the chart (bottom graph), the average processing time increases as the number of iterations increases. However, this increase only holds until a maximal value is reached, after which any increase in the iterations does not make much difference to the processing time. Notice that the processing times for the higher numbers of iterations (55 and 70) tend to overlap with one another, demonstrating a minimal increase in processing time as the number of iterations grows. This is due to the ability of the proposed SwaTrack to terminate its search upon convergence, regardless of the defined number of iterations. This is particularly useful in ensuring an efficient search for the optimal solution with a minimal number of particles. As for the detection accuracy, we can see that in general the average accuracy of the proposed SwaTrack is higher than that of PF, with an average accuracy of 92.1% in the first sequence, as shown in Fig. 13a. The sudden decrease in accuracy for SwaTrack with 70 iterations, shown in Fig. 13a, may be due to the erratic generation of random values in the C++ implementation. This behaviour is not observed in the other sequences, where the detection accuracy is consistent across frames. We take the average result for each test case over 10 runs to ensure reliable results, and we believe that with a higher number of runs we would be able to obtain unbiased results without outliers. In summary, the results further validate our finding that the proposed SwaTrack is able to achieve better accuracy than PF whilst requiring only about 10% of the number of samples used in PF, with a minimal number of iterations. This is made possible by an iterative search for the optimal proposal distribution, incorporating available observations rather than making strict assumptions about the motion of an object. Thus, we believe that the findings from our study create prospects for a new paradigm of object tracking. Again, we ask whether there is a need to make existing tracking methods more complex by fusing different models and algorithms to improve tracking efficiency, or whether simple optimisation methods would be sufficient.
5.5.2. Number of Samples vs Processing Time
Particle Filter: In the PF algorithm, we vary the number of samples or particles (i.e. 50, 100, · · · , 2000) used throughout the sequence to determine the statistical relationship between the number of samples and the detection accuracy. The lowest parameter value is determined based on the minimal configuration that allows tracking, while the highest value is set to the maximal configuration before the detection accuracy reaches a plateau. The detection accuracy and performance of the PF algorithm with different parameter settings are shown in Figs. 14-18a. The results demonstrate that the number of particles used in PF is correlated with the detection accuracy, where an increase in the number of particles tends to increase the accuracy. Similarly, the average time taken also increases as the number of particles used in PF grows. This reflects the fact that as the number of particles increases, the estimation processes, which include object representation, prediction and update, also multiply. However, it is observed that PF reaches a plateau in detection accuracy after hitting the optimal accuracy, after which any increase in the number of particles either decreases the accuracy or yields no significant improvement. Thus, the underlying assumption that an increase in the number of particles will lead to an increase in accuracy does not hold true. This may be due to the resampling step in most PF algorithms, which is highly prone to error propagation. Also, the best parameter configurations may differ from one sequence to another due to the different motion behaviour portrayed by the object in each sequence. For example, in Fig. 14, the optimal setting is 250 particles, which produces a detection accuracy of 55% and takes 1.78 seconds of processing time. Note that in this set of experiments, other parameters such as the mean and variance of the Gaussian distribution in PF are not the optimal values used in the earlier experiment; a standard configuration of Gaussian white noise is used across frames. Thus, the results obtained may differ slightly.
SwaTrack: Similarly, we perform the different parameter settings test on the proposed SwaTrack algorithm; the results are shown in Figs. 14-18b. In addition to the number of particles used in PF tracking, the proposed SwaTrack has an additional influencing parameter, the maximum number of iterations. Thus, here we vary the number of particles against the number of iterations and obtain the corresponding detection accuracy. As illustrated on the left y-axis of the chart (bottom graph), the average processing time increases as the number of iterations increases. However, the processing time reaches a maximal value, where different numbers of iterations require almost comparable amounts of time. This can be seen in the overlapping results shown in Figs. 14-18b in particular, and demonstrates the effectiveness of the termination criterion in the proposed SwaTrack. When a global solution has been found by the entire swarm (the swarm reaches convergence), the search terminates regardless of the initially set number of iterations. Also, our proposed method, which automatically changes the number of iterations
according to the swarm's search quality, allows a self-tuned setting of the maximum iteration number. As for the detection accuracy, we can see that in general the average accuracy of the proposed SwaTrack is higher than that of PF, with an average accuracy of 92.1% in the first sequence, as shown in Fig. 14b. In summary, the results further validate our finding that the proposed SwaTrack is able to achieve better accuracy than PF whilst requiring only about 10% of the number of samples used in PF, with a minimal number of iterations.
5.5.3. Sampling Strategy
To further evaluate the robustness of the proposed algorithm, as well as to understand the behaviour of the other algorithms when tracking abrupt motion, we perform a sampling strategy test. In this test, we simulate the scenario of receiving inputs from sensors with a lower frame rate by downsampling the number of frames in the test sequence, from the normal rate of 25 frames per second to a lower rate of 5 frames per second.
Fig. 19 compares the detection accuracy of the proposed SwaTrack and PF for all four sequences, with each sequence down-sampled to simulate the 5 frames per second scenario. Note that the detection accuracy is computed by comparing against the ground truth for the sampled frames only. It is observed that in general the proposed SwaTrack has better detection accuracy than PF in both situations, with and without sampling. The average detection accuracy of SwaTrack for the complete sequences is about 95.5%, whereas the average for PF is about 62.5%. With sampling, the average detection accuracy of SwaTrack is about 77.25%, whereas that of PF is about 32.75%. We can see that the detection accuracy of PF drops drastically when the frame rate decreases. This is because, in low-frame-rate videos, the target tends to exhibit abrupt motion, and thus methods that assume a Gaussian distribution in their dynamic motion model, such as PF, fail in such cases. The change in detection accuracy between the complete and sampled sequences is indicated in red in Fig. 19. SwaTrack, on the other hand, copes better with the low frame rate, with an average accuracy of more than 70%, although there is a decrease in its efficiency. This is because the proposed SwaTrack algorithm allows iterative adjustment of the exploration and exploitation of the swarm in search of the optimal motion model, without making assumptions about the target's motion. We can thus conclude that the proposed SwaTrack algorithm is able to cope with scenarios where the frame rate is low.
6. Conclusions
In this paper, we presented a novel swarm-intelligence-based tracker for visual tracking that copes with abrupt motion efficiently. The proposed SwaTrack optimises the search for the optimal distribution without making assumptions about, or needing to learn, the motion model beforehand. In addition, we introduced an adaptive mechanism that detects and responds to changes in the search environment to allow on-the-fly tuning of the parameters for more accurate and effective tracking. Experimental results show that the proposed algorithm improves the accuracy of tracking while significantly reducing the computational overheads, since it requires less than 20% of the samples used by PF. In future work, we would like to further investigate the robustness of the proposed method, as well as how its behaviour changes with different parameter settings and sampling strategies.
References
[1] H. Yang, L. Shao, F. Zheng, L. Wang, Z. Song, Recent advances and trends in visual tracking: A review, Neurocomp. 74 (2011) 3823-3831.
[2] A. Yilmaz, O. Javed, M. Shah, Object tracking: A survey, ACM Comp. Surv. 38.
[3] G. Welch, G. Bishop, An introduction to the Kalman filter (1995).
[4] E. A. Wan, R. Van Der Merwe, The unscented Kalman filter for nonlinear estimation, in: AS-SPCC, 2000, pp. 153-158.
[5] M. Oussalah, J. D. Schutter, Possibilistic Kalman filtering for radar 2D tracking, Information Sciences 130 (1-4) (2000) 85-107.
[6] M. Isard, A. Blake, CONDENSATION - conditional density propagation for visual tracking, IJCV 29 (1) (1998) 5-28.
[7] M. S. Arulampalam, S. Maskell, N. Gordon, T. Clapp, A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking, IEEE TSP 50 (2) (2002) 174-188.
[8] Efficient visual tracking using particle filter with incremental likelihood calculation, Information Sciences 195 (2012) 141-153.
[9] L. Ellis, N. Dowson, J. Matas, R. Bowden, Linear regression and adaptive appearance models for fast simultaneous modelling and tracking, IJCV 95 (2) (2011) 154-179.
[10] C. García Cifuentes, M. Sturzel, F. Jurie, G. J. Brostow, Motion models that only work sometimes, in: BMVC, 2012, pp. 1-12.
[11] Y. Li, H. Ai, T. Yamashita, S. Lao, M. Kawade, Tracking in low frame rate video: A cascade particle filter with discriminative observers of different life spans, IEEE TPAMI 30 (10) (2008) 1728-1740.
[12] J. Kwon, K. M. Lee, Tracking of abrupt motion using Wang-Landau Monte Carlo estimation, in: ECCV, 2008, pp. 387-400.
[13] J. Kwon, K. M. Lee, Visual tracking decomposition, in: CVPR, 2010, pp. 1269-1276.
[14] J. Kwon, K. M. Lee, Wang-Landau Monte Carlo-based tracking methods for abrupt motions, IEEE TPAMI 35 (4) (2013) 1011-1024.
[15] X. Zhang, W. Hu, X. Wang, Y. Kong, N. Xie, H. Wang, H. Ling, S. Maybank, A swarm intelligence based searching strategy for articulated 3D human body tracking, in: CVPRW, 2010, pp. 45-50.
[16] X. Zhou, Y. Lu, J. Lu, J. Zhou, Abrupt motion tracking via intensively adaptive Markov-chain Monte Carlo sampling, IEEE TIP 21 (2012) 789-801.
[17] R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: MHS, 1995, pp. 39-43.
[18] A study of particle swarm optimization particle trajectories, Information Sciences 176 (8) (2006) 937-971.
[19] X. Zhang, W. Hu, S. Maybank, X. Li, M. Zhu, Sequential particle swarm optimization for visual tracking, in: CVPR, 2008, pp. 1-8.
[20] G. Tong, Z. Fang, X. Xu, A particle swarm optimized particle filter for nonlinear system state estimation, in: CEC, 2006, pp. 438-442.
[21] F. Neri, E. Mininno, G. Iacca, Compact particle swarm optimization, Information Sciences 239 (2013) 96-121.
[22] W. Li, X. Zhang, W. Hu, Contour tracking with abrupt motion, in: ICIP, 2009, pp. 3593-3596.
[23] K. C. P. Wong, L. S. Dooley, Tracking table tennis balls in real match scenes for umpiring applications, BJMCS 1 (4) (2011) 228-241.
[24] Y. Liu, S. Lai, B. Wang, M. Zhang, W. Wang, Feature-driven motion model-based particle-filter tracking method with abrupt motion handling, Opt. Eng. 51 (4).
[25] I. Zuriarrain, F. Lerasle, N. Arana, M. Devy, An MCMC-based particle filter for multiple person tracking, in: ICPR, 2008, pp. 1-4.
[26] X. Zhu, C. Vondrick, D. Ramanan, C. Fowlkes, Do we need more training data or better models for object detection?, in: BMVC, 2012, pp. 1-11.
[27] X. Zhang, W. Hu, S. Maybank, A smarter particle filter, in: ACCV, Springer, 2010, pp. 236-246.
[28] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in: WCCI, 1998, pp. 69-73.
[29] L. P. Kaelbling, M. L. Littman, A. W. Moore, Reinforcement learning: A survey, JAIR 4 (1996) 237-285.
[30] F. Yan, W. Christmas, J. Kittler, A tennis ball tracking algorithm for automatic annotation of tennis matches, Sig. Proc. (2005) 619-628.
[31] E. Maggio, A. Cavallaro, Accurate appearance-based Bayesian tracking for maneuvering targets, CVIU 113 (4) (2009) 544-555.
[32] A. Adam, E. Rivlin, I. Shimshoni, Robust fragments-based tracking using the integral histogram, in: CVPR, 2006, pp. 798-805.
[33] K. Smith, D. Gatica-Perez, J. Odobez, S. Ba, Evaluating multi-object tracking, in: CVPRW, 2005, p. 36.
[34] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, A. Zisserman, The PASCAL visual object classes (VOC) challenge, IJCV 88 (2010) 303-338.
[35] A. Torralba, A. A. Efros, Unbiased look at dataset bias, in: CVPR, 2011, pp. 1521-1528.
(a) Sample detections from PF.
(b) Sample detections from SwaTrack.
(c) Sample detections from A-WLMC.
(d) Sample detections from IA-MCMC.
Figure 6. A comparison between PF, SwaTrack, A-WLMC [14] and IA-MCMC [16]. It is observed that SwaTrack gives a more accurate fit of the state.
(a) Sample of SwaTrack on the Boxing sequence.
(b) Sample of SwaTrack on the Youngki sequence.
Figure 7. Sample outputs demonstrating the ability of the proposed SwaTrack to recover from incorrect tracking. Minimal frames (1-2 frames) are required to escape from local optima and reach the global maximum.
(a) Sample shots of the dataset obtained by combining frames from two different sequences. The object enclosed in the ellipse is the object to be tracked.
(b) Sample detections by the A-WLMC tracker. A-WLMC tends to track the object inaccurately once it has lost track of the object, as shown from Frame 79 onwards.
(c) Sample detections by SwaTrack. In Frame 77, since the object-of-interest does not appear in the frame, inaccurate tracking happens. However, SwaTrack is able to recover its tracking in the following frame, Frame 78.
Figure 8. Sample outputs demonstrating the capability to recover from incorrect tracking.
(a) Sample detections from A-WLMC on the reduced image size. A-WLMC has a high tendency to lose track of the object when it moves abruptly, and demonstrates continuously inaccurate tracking, as shown in Frames 279-284. Note that for the same frames, the MCMC is able to track the object accurately when the image size is larger. Number of iterations = 600, particles = 600.
(b) Sample detections from SwaTrack on the reduced image size. SwaTrack produces consistent tracking compared to PF and A-WLMC, regardless of the size of the object. Number of iterations = 30, particles = 20.
Figure 9. Qualitative results: comparison between A-WLMC and our proposed method for different image sizes.
Figure 10. Sample shots of tracking results on our proposed method.
(a) PF
(b) SwaTrack
Figure 11. A comparison between PF and SwaTrack in terms of accuracy versus the number of samples (PF) and accuracy versus the number of samples and iterations (SwaTrack).
(a) Sequence 1.
(b) Sequence 2.
Figure 12. The accuracy and performance of PF with different numbers of samples for TableT1-2.
(a) Sequence 1.
(b) Sequence 2.
Figure 13. The accuracy and performance of SwaTrack with different numbers of samples and iterations for TableT1-2.
(a) PF
(b) SwaTrack
Figure 14. TableTennis 1: The accuracy and performance of PF/SwaTrack with different parameter settings
(a) PF
(b) SwaTrack
Figure 15. TableTennis 2: The accuracy and performance of PF/SwaTrack with different parameter settings
(a) PF
(b) SwaTrack
Figure 16. TableTennis 3: The accuracy and performance of PF/SwaTrack with different parameter settings
(a) PF
(b) SwaTrack
Figure 17. TableTennis 4: The accuracy and performance of PF/SwaTrack with different parameter settings
(a) PF
(b) SwaTrack
Figure 18. TableTennis 5: The accuracy and performance of PF/SwaTrack with different parameter settings
Figure 19. The detection accuracy of SwaTrack against PF during sampling for sequences 1 to 4.
Event-Driven Network Programming

Jedidiah McClurg (CU Boulder, USA), [email protected]
Hossein Hojjat (Cornell University, USA), [email protected]
Nate Foster (Cornell University, USA), [email protected]
Pavol Černý (CU Boulder, USA), [email protected]

arXiv:1507.07049v3, 16 Apr 2016

Abstract
Software-defined networking (SDN) programs must simultaneously describe static forwarding behavior and dynamic updates in response to events. Event-driven updates are critical to get right, but difficult to implement correctly due to the high degree of concurrency in networks. Existing SDN platforms offer weak guarantees that can break application invariants, leading to problems such as dropped packets, degraded performance, security violations, etc. This paper introduces event-driven consistent updates that are guaranteed to preserve well-defined behaviors when transitioning between configurations in response to events. We propose network event structures (NESs) to model constraints on updates, such as which events can be enabled simultaneously and causal dependencies between events. We define an extension of the NetKAT language with mutable state, give semantics to stateful programs using NESs, and discuss provably-correct strategies for implementing NESs in SDNs. Finally, we evaluate our approach empirically, demonstrating that it gives well-defined consistency guarantees while avoiding expensive synchronization and packet buffering.

Categories and Subject Descriptors: C.2.3 [Computer-communication Networks]: Network Operations—Network Management; D.3.2 [Programming Languages]: Language Classifications—Specialized application languages; D.3.4 [Programming Languages]: Processors—Compilers

Keywords: network update, consistent update, event structure, software-defined networking, SDN, NetKAT

1. Introduction
Software-defined networking (SDN) allows network behavior to be specified using logically-centralized programs that
execute on general-purpose machines. These programs react to events such as topology changes, traffic statistics,
receipt of packets, etc. by modifying sets of forwarding
rules installed on switches. SDN programs can implement
a wide range of advanced network functionality including
fine-grained access control [8], network virtualization [22],
traffic engineering [15, 16], and many others.
Although the basic SDN model is simple, building sophisticated applications is challenging in practice. Programmers must keep track of numerous low-level details
such as encoding configurations into prioritized forwarding
rules, processing concurrent events, managing asynchronous
events, dealing with unexpected failures, etc. To address
these challenges, a number of domain-specific network programming languages have been proposed [2, 10, 19, 21, 29,
31, 36, 37]. The details of these languages vary, but they all
offer higher-level abstractions for specifying behavior (e.g.,
using mathematical functions, boolean predicates, relational
operators, etc.), and rely on a compiler and run-time system
to generate and manage the underlying network state.
Unfortunately, the languages that have been proposed so
far lack critical features that are needed to implement dynamic, event-driven applications. Static languages such as
NetKAT [2] offer rich constructs for describing network configurations, but lack features for responding to events and
maintaining internal state. Instead, programmers must write
a stateful program in a general-purpose language that generates a stream of NetKAT programs. Dynamic languages such
as FlowLog and Kinetic [21, 31] offer stateful programming
models, but they do not specify how the network behaves
while it is being reconfigured in response to state changes.
Abstractions such as consistent updates provide strong guarantees during periods of reconfiguration [26, 33], but current realizations are limited to properties involving a single
packet (or set of related packets, such as a unidirectional
flow). To implement correct dynamic SDN applications today, the most effective option is often to use low-level APIs,
forgoing the benefits of higher-level languages entirely.
Example: Stateful Firewall. To illustrate the challenges that arise when implementing dynamic applications, consider a topology where an internal host H1 is connected to switch s1, an external host H4 is connected to a switch s4, and switches s1 and s4 are connected to each other (see Figure 1).

Figure 1: Topology for simple Stateful Firewall.

Suppose we wish to implement a stateful firewall: at
all times, host H1 is allowed to send packets to host H4 , but
H4 should only be allowed to send packets to H1 if H1 previously initiated a connection. Implementing even this simple application turns out to be difficult, because it involves
coordinating behavior across multiple devices and packets.
The basic idea is that upon receiving a packet from H1 at
s4 , the program will need to issue a command to install a
forwarding rule on s4 allowing traffic to flow from H4 back
to H1 . There are two straightforward (but incorrect) implementation strategies on current SDN controllers.
1. The outgoing request from H1 is diverted to the controller, which sets up flow tables for the incoming path
and also forwards the packet(s) to H4 . Reconfiguring
flow tables takes time, so H4 ’s response will likely be
processed by the default drop rule. Even worse, if the response is the SYN-ACK in a TCP handshake, normal retransmission mechanisms will not help—the client will
have to wait for a timeout and initiate another TCP connection. In practice, this greatly increases the latency of
setting up a connection, and potentially wreaks havoc on
application performance.
2. The outgoing request is buffered at the controller, which
sets up the flow tables for the incoming path but waits until the rules are installed before forwarding the packet(s).
This avoids the problem in (1), but places extra load
on the controller and also implements the firewall incorrectly, since incoming traffic is allowed before the outgoing request is delivered. Leaving the network unprotected
(even briefly) can be exploited by a malicious attacker.
Thus, while it is tempting to think that reliability mechanisms built into protocols such as TCP already prevent (or at
least reduce) these types of errors, this is not the case. While
it is true that some applications can tolerate long latencies,
dropped packets, and weak consistency, problems with updates do lead to serious problems in practice. As another
example, consider an intrusion detection system that monitors suspicious traffic—inadvertently dropping or allowing
even a few packets due to a reconfiguration would weaken
the protection it provides. The root of these problems is that
existing SDN frameworks do not provide strong guarantees
during periods of transition between configurations in response to events. An eventual guarantee is not strong enough
to implement the stateful firewall correctly, and even a consistent update [33] would not suffice, since consistent updates only dictate what must happen to individual packets.
Existing Approaches. Experienced network operators may be able to use existing tools/methods to correctly implement event-driven configuration changes. However, as seen above, this requires thinking carefully about the potential interleavings of events and updates, delegating atomic operations to the controller (incurring a performance hit), etc.
As mentioned, there are stateful programming systems that attempt to make this process easier for the programmer, but update strategies in these systems either offer no consistency guarantees during dynamic updates, rely on expensive processing via the controller, and/or require the programmer to craft an update protocol by hand. In this paper, we group these approaches together, using the term uncoordinated update to describe their lack of support for coordinating local updates in a way that ensures global consistency.
Event-Driven Consistent Update. We propose a new semantic correctness condition with clear guarantees about updates triggered by events. This enables specification of how
the network should behave during updates, and enables precise formal reasoning about stateful network programs.
An event-driven consistent update is denoted as a triple Ci →e Cf, where Ci and Cf are the initial and final configurations respectively, and e is an event. Intuitively, these
configurations describe the forwarding behaviors of the network before/after the update, while the event describes a
phenomenon, such as the receipt of a packet at a particular
switch, that triggers the update itself. Semantically, an event-triggered consistent update ensures that for each packet:
1. the packet is forwarded consistently, i.e. it must be processed entirely by a single configuration Ci or Cf , and
2. the update does not happen too early, meaning that if
every switch traversed by the packet has not heard about
the event, then the packet must be processed by Ci , and
3. the update does not happen too late, meaning that if every
switch traversed by the packet has heard about the event,
then the packet must be processed by Cf .
The first criterion requires that updates are consistent, which
is analogous to a condition proposed previously by Reitblatt
et al. [33]. However, a consistent update alone would not
provide the necessary guarantees for the stateful firewall
example, as it applies only to a single packet, and not to
multiple packets in a bidirectional flow. The last two criteria
relate the packet-processing behavior on each switch to the
events it has “heard about.” Note that these criteria leave
substantial flexibility for implementations: packets that do
not satisfy the second or third condition can be processed by
either the preceding or following configuration. It remains to
define what it means for a switch s to have “heard about” an
event e that occurred at switch t (assuming s 6= t). We use
a causal model and say that s hears about e when a packet,
which was processed by t after e occurred, is received at s.
This can be formalized using a “happens-before” relation.
Returning to the stateful firewall, it is not hard to see that
the guarantees offered by event-driven consistent updates are
sufficient to ensure correctness of the overall application.
Consider an update Ci →e Cf. In Ci, H1 can send packets
to H4 , but not vice-versa. In Cf , additionally H4 can send
packets to H1 . The event e is the arrival at s4 of a packet
from H1 to H4 . Before e occurs, can H4 send a packet to
H1 , as is possible in Cf ? No, since none of the switches
along the necessary path have heard about the event. Now,
imagine that the event e occurs, and H4 wants to send a
packet to H1 afterwards. Can s4 drop the new packet, as it
would have done in the initial configuration Ci ? No, because
the only switch the packet would traverse is s4 , and s4 has
heard about the event, meaning that the only possible correct
implementation should process this new packet in Cf .
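To make criteria (1)-(3) concrete, the following sketch decides, for a single packet, which configurations are allowed to process it, given the set of switches the packet traverses and the set of switches that have already heard about the event. This is our own illustrative reading of the definition, not code from the paper, and it ignores how the "heard-about" sets are propagated.

```python
def allowed_configs(traversed, heard_about):
    """Which configurations may process a packet under an event-driven
    consistent update Ci ->e Cf.

    traversed:    set of switches the packet passes through
    heard_about:  set of switches that have heard about event e
    Returns a subset of {"Ci", "Cf"}; per-packet consistency means the
    packet is processed entirely by one element of this set.
    """
    if not (traversed & heard_about):   # no switch on the path has heard of e
        return {"Ci"}                   # the update must not happen too early
    if traversed <= heard_about:        # every switch on the path has heard of e
        return {"Cf"}                   # the update must not happen too late
    return {"Ci", "Cf"}                 # either configuration is acceptable

# Firewall example: after e occurs at s4, a reply from H4 reaches s4 first,
# and s4 has heard about e, so it must be processed by Cf (not dropped).
assert allowed_configs({"s4"}, {"s4"}) == {"Cf"}
```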
Locality. While event-driven consistent updates require immediate responses to local events (as in the firewall), they do not require immediate reactions to events “at a distance.” This is achieved by two aspects of our definitions.
The first defining aspect of our locality requirements involves the happens-before (“heard-about”) relation in event-driven consistent update. For example, the receipt of a packet
in New York can not immediately affect the behavior of
switches in London. Intuitively, this makes sense: requiring
“immediate” reaction to remote events would force synchronization between switches and buffering of packets, leading
to unacceptable performance penalties. Event-driven consistent update only requires the switches in London to react
after they have heard about the event in New York.
The second defining aspect of our locality requirements
involves the compatibility constraints in NESs. Suppose that
New York sends packets to London and Paris, but the program requires transitioning to a different global state based
on who received a packet first. Clearly, it would be impossible to implement this behavior without significant coordination. However, suppose New York and Philadelphia are sending packets to London, and the program requires transitioning to a different global state based on whose packet was received first in London. This behavior is easily implementable
since the choice is local to London. We use NESs to rule out
non-local incompatible events—specifically, we require that
incompatible events must occur at the same switch.
Our approach gives consistency guarantees even when an
event occurs at a switch different from the one that will be
updated. The change will not happen “atomically” with the
event that triggered it, but (a) every packet is processed by
a single configuration, and (b) the configuration change occurs as dictated by event-driven consistent update (happens-before) requirements. We show that these requirements can
be implemented with minimal performance penalty.
Locality issues are an instance of the tension between
consistency and availability in distributed systems, which
motivates existing SDN languages to favor availability
(avoiding expensive synchronization and packet buffering)
over consistency (offering strong guarantees when state
changes). We demonstrate that it is possible to provide the
same level of availability as existing systems, while providing a natural consistency condition that is powerful enough
to build many applications. We also show that weakening
the locality requirement forces us to weaken availability.
Overall, we present a new abstraction based on (i) a notion of causal consistency requiring that events are propagated between nodes, (ii) per-packet consistency governing
how packets are forwarded through the network, and (iii) locality requirements. We believe this is a powerful combination that is a natural fit for building many applications.
Event-Driven Transition Systems. To specify event-driven
network programs, we use labeled transition systems called
event-driven transition systems (ETSs). In an ETS, each
node is annotated with a network configuration and each
edge is annotated with an event. For example, the stateful
firewall application would be described as a two-state ETS,
one state representing the initial configuration before H1 has
sent a packet to H4 , and another representing the configuration after this communication has occurred. There would
be a transition between the states corresponding to receipt of
a packet from H1 to H4 at s4 . This model is similar to the
finite state machines used in Kinetic [21] and FAST [30].
However, whereas Kinetic uses uncoordinated updates, we
impose additional constraints on our ETSs which allow them
to be implemented correctly with respect to our consistency
property. For example, we extend event-triggered consistent
updates to sequences, requiring each sequence of transitions
in the ETS to satisfy the property. For simplicity, in this paper, we focus on finite state systems and events corresponding to packet delivery. However, these are not fundamental
assumptions—our design extends naturally to other notions
of events, as well as infinite-state systems.
Network Event Structures. The key challenge in implementing event-driven network programs stems from the fact
that at any time, the switches may have different views of
the global set of events that have occurred. Hence, for a given
ETS, several different updates may be enabled at a particular
moment of time, and we need a way to resolve conflicts. We
turn to the well-studied model of event structures [38], which
allows us to constrain transitions in two ways: (1) causal dependency, which requires that an event e1 happens before
another event e2 may occur, and (2) compatibility, which
forbids sets of events that are in some sense incompatible
with each other from occurring in the same execution. We
present an extension called network event structure (NES),
and show how an ETS can be encoded as an NES.
Implementing Network Programs. NESs also provide a
natural formalism for guiding an implementation technique
for stateful programs. Intuitively, we need switches that can
record the set of events that have been seen locally, make
decisions based on those events, and transmit events to other
switches. Fortunately, in the networking industry there is a
trend toward more programmable data planes: mutable state
is already supported in most switch ASICs (e.g. MAC learning tables) and is also being exposed to SDN programmers in
next-generation platforms such as OpenState [5] and P4 [6].
Using these features, we can implement an NES as follows.
1. Encode the sets of events contained in the NES as flat tags
that can be carried by packets and tested on switches.
2. Compile the configurations contained in the NES to a
collection of forwarding tables.
3. Add “guards” to each configuration’s forwarding rules to
explicitly test for the tag enabling the configuration.
4. Add rules to “stamp” incoming packets with tags corresponding to the current set of events.
5. Add rules to “learn” which events have happened by
reading tags on incoming packets and adding the tags in
the local state to outgoing packets, as required to implement the happens-before relation.
In this paper, we prove that a system implemented in this
way correctly implements an NES.
Summary. Our main contributions are as follows.
• We propose a new semantic correctness condition for dynamic network programs called event-driven consistent
update that balances the need for immediate response
with the need to avoid costly synchronization and buffering of packets. Our consistency property generalizes the
guarantees offered by consistent updates, and is as strong
as possible without sacrificing availability.
• We propose network event structures to capture causal
dependencies and compatibility between events, and
show how to implement these using SDN functionality.
• We describe a compiler based on a stateful extension
of NetKAT, and present optimizations that reduce the
overhead of implementing such stateful programs.
• We conduct experiments showing that our approach gives
well-defined consistency guarantees, while avoiding expensive synchronization and packet buffering.
Evaluation. To evaluate our design, we built a prototype
of the system described in this paper.† We have used this to
build a number of event-driven network applications: (a) a
stateful firewall, which we have already described; (b) a
learning switch that floods packets going to unknown hosts
along a spanning tree, but uses point-to-point forwarding for
packets going to known hosts; (c) an authentication system
that initially blocks incoming traffic, but allows hosts to gain
access to the internal network by sending packet probes to a
predefined sequence of ports; (d) a bandwidth cap that disables access to an external network after seeing a certain
number of packets; and (e) an intrusion detection system that
allows all traffic until seeing a sequence of internal hosts
being contacted in a suspicious order. We have also built
a synthetic application that forwards packets around a ring
topology, to evaluate update scalability. We developed these
applications in an extended version of NetKAT which we
call Stateful NetKAT. Our experiments show that our implementation technique provides competitive performance on
several important metrics while ensuring important consistency properties. We draw several conclusions. (1) Event-driven consistent updates allow programmers to easily write
real-world network applications and get the correct behavior, whereas approaches relying only on uncoordinated consistency guarantees do not. (2) The performance overhead of
maintaining state and manipulating tags (measured in bandwidth) is within 6% of an implementation that uses only uncoordinated update. (3) There is an optimization that exploits
common structure in rules across states to reduce the number of rules installed on switches. In our experiments, a basic
heuristic version of this optimization resulted in a 32-37%
reduction in the number of rules required on average.
† The PLDI 2016 Artifact Evaluation Committee (AEC) found that our prototype system "met or exceeded expectations."
The rest of this paper is structured as follows: §2 formalizes event-driven consistent updates; §3 defines event transition systems, network event structures, and Stateful NetKAT; §4 describes our implementation; and §5 presents experiments. We discuss related/future work in §6-7, and conclude in §8.
2. Event-Driven Network Behavior
This section presents our new consistency model for stateful
network programs: event-driven consistent update.
Preliminaries. A packet pkt is a record of fields {f1 ; f2 ;
· · · ; fn }, where fields f represent properties such as source
and destination address, protocol type, etc. The (numeric)
values of fields are accessed via the notation pkt.f , and
field updates are denoted pkt[f ← n]. A switch sw is a
node in the network with one or more ports pt. A host is
a switch that can be a source or a sink of packets. A location
l is a switch-port pair n:m. Locations may be connected by
(unidirectional) physical links (lsrc , ldst ) in the topology.
Packet forwarding is dictated by a network configuration
C. A located packet lp = (pkt, sw , pt) is a tuple consisting
of a packet and a location sw:pt. We model C as a relation on located packets: if C(lp, lp′), then the network maps lp to lp′, possibly changing its location and rewriting some
of its fields. Since C is a relation, it allows multiple output
packets to be generated from a single input. In a real network, the configuration only forwards packets between ports
within each individual switch, but for convenience, we assume that our C also captures link behavior (forwarding between switches), i.e. C((pkt, n1 , m1 ), (pkt, n2 , m2 )) holds
for each link (n1 :m1 , n2 :m2 ). We refer to a sequence of located packets that starts at a host and can be produced by C
as a packet trace, using Traces(C) to denote the set of all
such packet traces. We let C be the set of all configurations.
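These preliminaries can be made concrete with a few lines of code. The following is a minimal OCaml sketch (OCaml being the language of the prototype described later); the type and function names are ours and are not taken from the actual implementation.

(* Packets, locations, located packets, and configurations as a relation,
   represented here as a set-valued function since one input may produce
   several outputs.  Illustrative names only. *)

type packet = (string * int) list          (* field name -> numeric value *)

type location = { sw : int; pt : int }     (* a switch-port pair sw:pt *)

type located_packet = packet * location

type configuration = located_packet -> located_packet list

(* Field access pkt.f and field update pkt[f <- n]. *)
let field (pkt : packet) (f : string) : int option = List.assoc_opt f pkt

let update (pkt : packet) (f : string) (n : int) : packet =
  (f, n) :: List.remove_assoc f pkt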
Consider a tuple ntr = (lp 0 lp 1 · · · , T ), where the first
component is a sequence of located packets, and each t ∈ T
is an increasing sequence of indices corresponding to located
packets in the sequence. We call such a tuple a network trace
if and only if the following conditions hold:
1. for each lp j , we have j ∈ t for some t ∈ T , and
2. for each t = (k0 k1 · · · ) ∈ T , lp k0 is at a host, and
∃C ∈ C such that C(lp ki , lp ki+1 ) holds for all i, and
3. if we consider the graph G with nodes {k : (∃t ∈
T : k ∈ t)} and edges {(ki , ki+1 ) : (∃t ∈ T : t =
k0 k1 · · · ki ki+1 · · · )}, then G is a family of trees rooted
at K = {k0 : (∃t ∈ T : t = k0 · · · )}.
We will use ntr ↓k to denote the set {t ∈ T : k ∈ t}, and
when t = (k0 k1 · · · ) ∈ T , we can use similar notation ntr ↓t
to denote the packet trace lp k0 lp k1 · · · . Intuitively, we have
defined a network trace to be an interleaving of these packet
traces (the packet traces form the family of trees because,
as previously mentioned, the configuration allows multiple
output packets from a single input packet). Ultimately, we
will introduce a consistency definition that dictates which
interleavings of packet traces are correct.
We now define how the network changes its configuration
in response to events. An event e is a tuple (ϕ, sw , pt)eid ,
where eid is an (optional) event identifier and ϕ is a first-order formula over fields. Events model the arrival of a
packet satisfying ϕ (denoted pkt |= ϕ) at location sw :pt.
Note that we could have other types of events—anything that
a switch can detect could be an event—but for simplicity, we
focus on packet events. We say that a located packet lp =
(pkt, sw 0 , pt 0 ) matches an event e = (ϕ, sw , pt) (denoted
by lp |= e) if and only if sw = sw 0 ∧ pt = pt 0 ∧ pkt |= ϕ.
Figure 2: Example topology with four switches and hosts.
Definition 1 (Happens-before relation ≺ntr). Given a network trace ntr = (lp0 lp1 · · · , T ), the happens-before relation ≺ntr is the least partial order on located packets that
• respects the total order induced by ntr at switches, i.e., ∀i, j : lpi ≺ lpj ⇐ i < j ∧ lpi = (pkt, sw, pt) ∧ lpj = (pkt′, sw, pt′), and
• respects the total order induced by ntr for each packet, i.e., ∀i, j : lpi ≺ lpj ⇐ i < j ∧ ∃t ∈ T : i ∈ t ∧ j ∈ t.
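On a finite trace prefix, this relation can be computed directly from the two generating rules followed by a transitive closure. The OCaml sketch below is illustrative only (it is not the paper's implementation); located packets are identified by their index in the trace, and only the switch at which each appears and the packet traces in T are recorded.

(* happens-before over a finite network-trace prefix *)
type trace = { switch_of : int array; traces : int list list }

let happens_before (n : trace) : bool array array =
  let len = Array.length n.switch_of in
  let hb = Array.make_matrix len len false in
  (* Rule 1: order induced by ntr at each switch. *)
  for i = 0 to len - 1 do
    for j = i + 1 to len - 1 do
      if n.switch_of.(i) = n.switch_of.(j) then hb.(i).(j) <- true
    done
  done;
  (* Rule 2: order induced within each packet trace. *)
  List.iter
    (fun t ->
      List.iteri
        (fun a i -> List.iteri (fun b j -> if a < b then hb.(i).(j) <- true) t)
        t)
    n.traces;
  (* The least partial order containing both: take the transitive closure. *)
  for k = 0 to len - 1 do
    for i = 0 to len - 1 do
      for j = 0 to len - 1 do
        if hb.(i).(k) && hb.(k).(j) then hb.(i).(j) <- true
      done
    done
  done;
  hb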
To illustrate, consider Figure 2. We describe an update Ci −e→ Cf . In the initial configuration Ci , the host H1
can send packets to H2 , but not vice-versa. In the final
configuration Cf , traffic from H2 to H1 is allowed. Event
e models the arrival to s4 of a packet from H1 (imagine
s4 is part of a distributed firewall). Assume that e occurs,
and immediately afterwards, H2 wants to send a packet to
s1 . Can s2 drop the packet (as it would do in configuration
Ci )? Event-driven consistent updates allow this, as otherwise
we would require s2 to react immediately to the event at s4 ,
which would be an example of action at a distance. Formally,
the occurrence of e is not in a happens-before relation with
the arrival of the new packet to s2 . On the other hand, if e.g.
s4 forwards some packets to s1 and s2 before the new packet
from H2 arrives, s1 and s2 would be required to change their
configurations, and the packet would be allowed to reach H1 .
Event-Driven Consistent Update. In Section 1, we informally defined an event-driven consistent update as a triple Ci −e→ Cf consisting of an initial configuration Ci , event e, and final configuration Cf . Here, we formalize that definition in a way that describes sequences of events and configurations (in the single-event case, this formal definition is equivalent to the informal one). We denote an event-driven consistent update as a pair (U, E), where U is a sequence C0 −e0→ C1 −e1→ · · · −en→ Cn+1 , and {e0 , · · · , en } ⊆ E.
Let ntr = (lp0 lp1 · · · , T ) be a network trace. Given an event-driven consistent update (U, E), we need the indices where the events from U first occurred. Specifically, we wish to find the sequence k0 , · · · , kn where lpj does not match any e ∈ E for any j > kn , and the following properties hold for all 0 ≤ i ≤ n (assuming k(−1) = −1 for convenience):
• ki > ki−1 , and
• lpki matches ei , and for all j, if ki−1 < j < ki then lpj does not match ei (i.e., ki is the first occurrence of ei after the index ki−1 ), and
• ∃t ∈ ntr↓ki such that t is in Traces(Ci ) (intuitively, the event ei can be triggered only by a packet processed in the immediately preceding configuration).
If such a sequence exists, it is unique, and we denote it by FO(ntr, U ), shorthand for "first occurrences."
Definition 2 (Event-driven consistent update correctness). A network trace ntr = (lp0 lp1 · · · , T ) is correct with respect to an event-driven consistent update U = C0 −e0→ C1 −e1→ · · · −en→ Cn+1 , if FO(ntr, U ) = k0 , · · · , kn exists, and for all 0 ≤ i ≤ n, the following holds for each packet trace ntr↓t = lp′0 lp′1 · · · where t ∈ T :
• ntr↓t is in Traces(C) for some C ∈ {C0 , · · · , Cn+1 } (packet is processed entirely by one configuration), and
• if ∀j : lp′j ≺ lpki , then ntr↓t is in Traces(C) for some C ∈ {C0 , · · · , Ci } (the packet is processed entirely in a preceding configuration), and
• if ∀j : lpki ≺ lp′j , then ntr↓t is in Traces(C) for some C ∈ {Ci+1 , · · · , Cn+1 } (the packet is processed entirely in a following configuration).
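The first-occurrence indices can be found with a single left-to-right scan of the trace. The OCaml sketch below covers only the ordering conditions (ki > ki−1 and first match after ki−1); the remaining requirements of the definition, such as the Traces(Ci) condition, are omitted. The function and parameter names are ours.

(* Scan the trace once, taking for each event of U its first occurrence after
   the previous one; `matches lp e` stands for lp |= e. *)
let first_occurrences
    ~(matches : 'lp -> 'e -> bool)
    (trace : 'lp array)
    (events : 'e list) : int list option =
  let rec go j evs acc =
    match evs with
    | [] -> Some (List.rev acc)
    | e :: rest ->
        if j >= Array.length trace then None
        else if matches trace.(j) e then go (j + 1) rest (j :: acc)
        else go (j + 1) evs acc
  in
  go 0 events []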
Network Event Structures. As we have seen, event-driven
consistent updates specify how the network should behave
during a sequence of updates triggered by events, but additionally, we want the ability to capture constraints between
the events themselves. For example, we might wish to say
that e2 can only happen after e1 has occurred, or that e2 and
e3 cannot both occur in the same network trace.
To model such constraints, we turn to the event structures model introduced by Winskel [38]. Intuitively, an event
structure endows a set of events E with (a) a consistency
predicate (con) specifying which events are allowed to occur
in the same sequence, and (b) an enabling relation (`) specifying a (partial) order in which events can occur. This is formalized in the following definition (note that we use ⊆fin to
mean "finite subset," and Pfin (X) = {Y : Y ⊆fin X}).
Definition 3 (Event structure). An event structure is a tuple
(E, con, `) where:
• E is a set of events,
• con : (Pfin (E) → Boolean) is a consistency predicate
that satisfies con(X) ∧ Y ⊆ X =⇒ con(Y ),
• ` : (P(E) × E → Boolean) is an enabling relation that
satisfies (X ` e) ∧ X ⊆ Y =⇒ (Y ` e).
An event structure can be seen as defining a transition system
whose states are subsets of E that are consistent and reachable via the enabling relation. We refer to such a subset as an event-set (called "configuration" in [38]).
Definition 4 (Event-set of an event structure). Given an
event structure N = (E, con, `), an event-set of N is any
subset X ⊆ E which is: (a) consistent: ∀Y ⊆fin X, con(Y )
holds, and (b) reachable via the enabling relation: for each
e ∈ X, there exists e0 , e1 , · · · , en ∈ X where en = e and
∅ ` {e0 } and {e0 , · · · , ei−1 } ` ei for all 1 ≤ i ≤ n.
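For small, finite examples, Definitions 3 and 4 can be executed directly. The OCaml sketch below represents events as integers and checks whether a set is an event-set by testing consistency of all subsets and reachability by greedy saturation (sufficient because the enabling relation is monotone). This is illustrative code, not the prototype's.

module ES = Set.Make (Int)

type event_structure = {
  events : ES.t;                      (* E *)
  con : ES.t -> bool;                 (* consistency predicate on finite sets *)
  enables : ES.t -> int -> bool;      (* X |- e *)
}

(* All subsets of a finite set (exponential; fine for small examples). *)
let subsets (s : ES.t) : ES.t list =
  ES.fold
    (fun e acc -> acc @ List.map (fun sub -> ES.add e sub) acc)
    s [ ES.empty ]

(* Definition 4: X is an event-set iff X is a consistent subset of E whose
   events are all reachable via the enabling relation; reachability is
   checked by repeatedly adding any event of X enabled by what is reached. *)
let is_event_set (es : event_structure) (x : ES.t) : bool =
  let consistent = List.for_all es.con (subsets x) in
  let rec saturate reached =
    let next =
      ES.filter (fun e -> (not (ES.mem e reached)) && es.enables reached e) x
    in
    if ES.is_empty next then reached else saturate (ES.union reached next)
  in
  ES.subset x es.events && consistent && ES.equal (saturate ES.empty) x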
We want to be able to specify which network configuration should be active at each event-set of the event structure.
Thus, we need the following extension of event structures.
Definition 5 (Network event structure (NES)). A network
event structure is a tuple (E, con, `, g) where (E, con, `) is
an event structure, and g : (P(E) → C) maps each event-set
of the event structure to a network configuration.
Correct Network Traces. We now define what it means for
a network trace ntr to be correct with respect to an NES
N = (E, con, `, g). We begin by constructing a sequence S
of events that is allowed by N . A sequence S = e0 e1 · · · en
is allowed by N , if ∅ ` {e0 } ∧ con({e0 }), and ∀1 ≤ i ≤ n :
({e0 , e1 , · · · , ei−1 } ` ei ∧ con({e0 , e1 , · · · , ei })).
Intuitively, we say that ntr is correct if there is a sequence
of events allowed by N which would cause ntr to satisfy the
event-driven consistent update condition.
Definition 6 (Correct network trace). Let S be the set of all
sequences allowed by N . Formally, a network trace ntr =
(lp 0 lp 1 · · · , T ) is correct with respect to N if
• no lp j matches any e ∈ E, and for all packet traces ntr ↓t
where t ∈ T , we have ntr ↓t is in Traces(g(∅)), or
• there exists some e0 e1 · · · en ∈ S such that ntr is correct with respect to the event-driven consistent update (g(∅) −e0→ g({e0 }) −e1→ · · · −en→ g({e0 , · · · , en }), E).
Locality Restrictions for Incompatible Events. We now
show how NESs can be used to impose reasonable locality restrictions. A set of events E is called inconsistent
if and only if con(E) does not hold. We use the term
minimally-inconsistent to describe inconsistent sets where all proper subsets are not inconsistent. An NES N is called locally-determined if and only if for each of its minimally-inconsistent sets E, all events in E happen at the same switch (i.e., ∃sw ∀ei ∈ E : ei = (ϕi , sw , pt i )). To illustrate the need for the locally-determined property, let us consider the following two programs, P1 and P2 .
• Program P1 : Recall that two events are inconsistent if either of them can happen, but both cannot happen in the same execution. Consider the topology shown in Figure 2 and suppose this program requires that H2 and H4 can both receive packets from H1 , but only the first one to receive a packet is allowed to respond. There will be two events e1 and e2 , with e1 the arrival of a packet from H1 at s2 , and e2 the arrival of a packet from H1 at s4 . These events are always enabled, but the set {e1 , e2 } is not consistent, i.e. con({e1 , e2 }) does not hold. This models the fact that at most one of the events can take effect. These events happen at different switches—making sure that at most one of the events takes effect would necessitate information to be propagated instantaneously "at a distance." In implementations, this would require using inefficient mechanisms (synchronization and/or packet buffering). Our locality restriction is a clean condition which ensures that the NES is efficiently implementable.
• Program P2 : Consider a different program where H2 can send traffic to one of the two hosts H1 , H3 that sends it a packet first. The two events (a packet from H1 arriving at s2 , and a packet from H3 arriving at s2 ) are still inconsistent, but inconsistency does not cause problems in this case, because both events happen at the same switch (the switch can determine which one was first).
In contrast to our approach, an uncoordinated update approach improperly handles locality issues, mainly because it does not guarantee when the configuration change occurs. Consider the program P1 again, and consider the (likely) scenario where events e1 and e2 happen nearly simultaneously. In an uncoordinated approach, this could result in switch s2 hearing about e1 , e2 (in that order), and s4 hearing about e2 , e1 (in that order), meaning the two switches would have conflicting ideas of which event was "first" (i.e. the switches would be in conflicting states, and this conflict cannot be resolved). In our implementation, we would require e1 and e2 to occur at the same switch, guaranteeing that we never see such a conflicting mix of states.
Strengthening Consistency. We now show that strengthening the consistency conditions imposed by NESs would lead to lower availability, as it would lead to the need for expensive synchronization, packet buffering, etc. First, we will try to remove the locally-determined condition, and second, we will try to obtain a strengthened consistency condition. The proof of the following theorem is an adaptation of the proof of the CAP theorem [7], as presented in [13]. The idea is that in asynchronous network communication, a switch might need to wait arbitrarily long to hear about an event.
Lemma 1. In general, it is impossible to implement an NES
that does not have the locally-determined condition while
guaranteeing that switches process each packet within an a
priori given time bound.
Proof Sketch. Consider a simple NES, with event sets ∅,
{e1 }, {e2 }, and where {e1 } and {e2 } are both enabled from
∅. Assume that con({e1 , e2 }) does not hold, and that e1 can
happen at switch A and e2 can happen at switch B (i.e., the
locally-determined condition does not hold).
Because the communication is asynchronous, there is no
a priori bound on how long the communication between
switches can take. When a packet p that matches e2 arrives
at the switch B, the switch must distinguish the following
two cases: (#1) event e1 has occurred at A (and thus p
does not cause e2 ), or (#2) event e1 has not occurred at
A (and thus p causes e2 ). No matter how long B waits, it
cannot distinguish these two cases, and hence, when a packet
that matches e2 arrives to B, the switch B cannot correctly
decide whether to continue as if e2 has happened. It has
the choice to either eventually decide (and risk the wrong
decision), or to buffer the packet that matches e2 .
We now ask whether we can strengthen the event-driven consistent update definition. We define strong update as an update C1 −e→ C2 such that immediately after e occurred, the network processes all incoming packets in C2 . We obtain the following lemma by the same reasoning as the previous one.
Lemma 2. In general, it is impossible to implement strong
updates and guarantee that switches process each packet
within an a priori given time bound.
Proof Sketch. Let A be the switch where e can happen, and
let B be a switch on which the configurations C1 , C2 differ.
For A and B, the same argument as in the previous lemma
shows that B must either risk the wrong decision on whether
to process packets using C1 or C2 , or buffer packets.
3. Programming with Events
The correctness condition we described in the previous section offers useful application-level guarantees to network programmers. At a high level, the programmer is freed from thinking about interleavings of packets/events and responses to events (configuration updates). She can think in terms of our consistency model—each packet is processed in a single configuration, and packets entering "after" an event will be processed in the new configuration (similar to causal consistency). An important consequence is that the response to an event is immediate with respect to a given flow if the event is handled at that flow's ingress switch.
With this consistency model in mind, programmers can proceed by specifying the desired event-driven program behavior using network event structures. This section introduces an intuitive method for building NESs using simple transition systems where nodes correspond to configurations and edges correspond to events. We also present a network programming language based on NetKAT that provides a compact notation for specifying both the transition system and the configurations at the nodes.
3.1 Event-Driven Transition Systems
Definition 7 (Event-driven Transition System). An event-driven transition system (ETS) is a graph (V, D, v0 ), in which V is a set of vertices, each labeled by a configuration; D ⊆ V × V is a set of edges, each labeled by an event e; and v0 is the initial vertex.
Figure 3: Event-driven transition systems.
Consider the ETSs shown in Figure 3 (a-b). In (a), the two events are intuitively compatible—they can happen in any order, so we obtain a correct execution if both happen in different parts of the network, and different switches can have a different view of the order in which they happened. In (b), the two events are intuitively incompatible—only one of them can happen in any particular execution. Therefore, even if they happen nearly simultaneously, only one of them should take effect. To implement this, we require the locality restriction—we need to check whether the two events happen at the same switch. We thus need to distinguish between ETSs such as (a) and (b) in Figure 3, to determine where locality restrictions must be imposed in the conversion from an ETS to an NES.
From ETSs to NESs. To convert an ETS to an NES, we first form the event sets (Definition 4) and then construct the enabling relation and consistency predicate. Given an ETS T , consider the set W (T ) of sequences of events in T from the initial node to any vertex (including the empty sequence). For each sequence p ∈ W (T ), let E(p) be the set of events collected along the sequence. The set F (T ) = {E(p) | p ∈ W (T )} is our candidate collection of event sets. We now define conditions under which F (T ) gives rise to an NES.
1. We require that each set E in F (T ) must correspond to exactly one network configuration. This holds if all paths in W (T ) corresponding to E end at states labeled with the same configuration.
2. We require that F (T ) is finite-complete, i.e. for any sets E1 , E2 , · · · , En where each Ei ∈ F (T ), if there is a set E′ ∈ F (T ) which contains every Ei (an upper bound for the sets Ei ), then the set Elub = ∪i Ei (the least upper bound for the Ei ) must also be in F (T ). For example, consider the ETS in Figure 3(c), which violates this condition since the event-sets E1 = {e1 } and E2 = {e3 } are both subsets of {e1 , e4 , e3 }, but there is no event-set of the form E1 ∪ E2 = {e1 , e3 }.
In [38], such a collection F (T ) is called a family of configurations. Our condition (2) is condition (i) in Theorem 1.1.9
in [38] (conditions (ii)-(iii) are satisfied by construction).
Given an ETS T , it is not difficult to confirm the above
conditions statically. They can be checked using straightforward graph algorithms, and any problematic vertices or
edges in T can be indicated to the programmer. The development of efficient checking algorithms is left for future work.
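As one illustration of such a graph algorithm, the sketch below (OCaml, loop-free ETSs assumed, names are ours rather than the prototype's) enumerates the candidate family F(T) by walking every path from the initial vertex, and checks condition (1) by verifying that equal event-sets always end at vertices with the same configuration.

module Events = Set.Make (Int)

(* Vertices, events, and configurations are integers; `succ v` lists the
   (event, target) edges out of v. *)
type ets = { v0 : int; succ : int -> (int * int) list; config : int -> int }

(* Enumerate (event-set, final vertex) pairs for every path from v0. *)
let rec paths (t : ets) (v : int) (seen : Events.t) : (Events.t * int) list =
  (seen, v)
  :: List.concat_map
       (fun (e, w) -> paths t w (Events.add e seen))
       (t.succ v)

(* Condition (1): every event-set must determine a unique configuration. *)
let condition1 (t : ets) : bool =
  let f = paths t t.v0 Events.empty in
  List.for_all
    (fun (es, v) ->
      List.for_all
        (fun (es', v') -> (not (Events.equal es es')) || t.config v = t.config v')
        f)
    f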
We build the con and ` relations of an NES from the family F (T ), using Theorem 1.1.12 of [38]. Specifically, predicate con can be defined by declaring all sets in F (T ) as consistent, and for `, we take the smallest relation satisfying the constraints ∅ ` e ⇐⇒ {e} ∈ F (T ) and X′ ` e ⇐⇒ (X′ ∈ con) ∧ ((X′ ∪ {e}) ∈ F (T ) ∨ (∃X ⊆ X′ : X ` e)).
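For a finite family this construction can be written down directly. The following OCaml sketch is a naive rendering of the two constraints above for integer-encoded events; it is meant only to make the definitions concrete and is not the prototype's code.

module ES = Set.Make (Int)

(* con holds of X exactly when X is contained in some member of the family. *)
let con (family : ES.t list) (x : ES.t) : bool =
  List.exists (fun f -> ES.subset x f) family

(* X |- e, from the constraints above.  The existential over subsets is
   checked by removing one element at a time and recursing; this terminates
   on finite sets and, because the derived relation is upward closed among
   subsets of a consistent set, it covers all smaller subsets. *)
let rec enables (family : ES.t list) (x : ES.t) (e : int) : bool =
  if ES.is_empty x then List.exists (ES.equal (ES.singleton e)) family
  else
    con family x
    && (List.exists (ES.equal (ES.add e x)) family
        || ES.exists (fun d -> enables family (ES.remove d x) e) x)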
After obtaining an NES, deciding whether it satisfies
the locality restriction is easy: we check whether the NES
is locally determined (see Section 2), verifying for each
minimally-inconsistent set that the locality restriction holds.
Again, we leave the efficiency of this check for future work.
Loops in ETSs. If there are loops in the ETS T , the previous definition needs to be slightly modified, because we need to "rename" events encountered multiple times in the same execution. This gives rise to an NES where each event-set is finite, but the NES itself might be infinite (and thus can only be computed lazily). If we have the ability to store and communicate unbounded (but finite) event-sets in the network runtime, then no modifications are needed to handle infinite NESs in the implementation (which is described in Section 4). Otherwise, there are various correct overapproximations we could use, such as computing the strongly-connected components (SCCs) of the ETS, enforcing the locality restriction on events in each (non-singleton) SCC, and requiring the implementation to attach timestamps on occurrences of events in those SCCs. For simplicity of the presentation, we will consider only loop-free ETSs in this paper.
3.2 Stateful NetKAT
NetKAT [2] is a domain-specific language for specifying network behavior. It has semantics based on Kleene Algebra with Tests (KAT), and a sound and complete equational theory that enables formal reasoning about programs. Operationally, a NetKAT program behaves as a function which takes as input a single packet, and uses tests, field-assignments, sequencing, and union to produce a set of "histories" corresponding to the packet's traces.
Standard NetKAT does not support mutable state. Each packet is processed in isolation using the function described by the program. In other words, we can use a standard NetKAT program for specifying individual network configurations, but not event-driven configuration changes. We describe a stateful variant of NetKAT which allows us to compactly specify a collection of network configurations, as well as the event-driven relationships between them (i.e. an ETS).
This variant preserves the existing equational theory of the individual static configurations (though it is not a KAT itself), but also allows packets to affect processing of future
packets via assignments to (and tests of) a global state. The
syntax of Stateful NetKAT is shown in Figure 4. A Stateful
NetKAT program is a command, which can be:
• a test, which is a formula over packet header fields (there
are special fields sw and pt which test the switch- and
port-location of the packet respectively),
• a field assignment x←n, which modifies the (numeric)
value stored in a packet’s field,
• a union of commands p + q, which unions together the
packet-processing behavior of commands p and q,
• a command sequence p ; q, which runs packet-processing
program q on the result of p,
• an iteration p∗, which is equivalent to true + p + (p ; p) +
(p ; p ; p) + · · · ,
• or a link definition (n1 :m1 )_(n2 :m2 ), which forwards a
packet from port m1 at switch n1 across a physical link
to port m2 at switch n2 .
The functionality described above is also provided by standard NetKAT [35]. The key distinguishing feature of our
Stateful NetKAT is a special global vector-valued variable
called state, which allows the programmer to represent a collection of NetKAT programs. The function shown in Figure
5 gives the standard NetKAT program JpK~k corresponding
to each value ~k of the state vector (for conciseness, we only
show the non-trivial cases). We can use the NetKAT compiler [35] to generate forwarding tables (i.e. configurations)
corresponding to these, which we denote C(JpK~k ).
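To make the syntax of Figure 4 concrete, here is one way it could be represented as an OCaml datatype in a compiler front end. The constructor names are ours and do not come from the actual prototype.

(* A possible OCaml representation of the Figure 4 grammar. *)
type field = string                      (* packet field name, e.g. "ip_dst" *)

type test =
  | True
  | False
  | FieldEq of field * int               (* x = n *)
  | SwEq of int                          (* sw = n *)
  | StateEq of int * int                 (* state(m) = n *)
  | Or of test * test
  | And of test * test
  | Not of test

type cmd =
  | Test of test
  | Assign of field * int                (* x <- n *)
  | Union of cmd * cmd                   (* p + q *)
  | Seq of cmd * cmd                     (* p ; q *)
  | Star of cmd                          (* p* *)
  | Link of (int * int) * (int * int)    (* (n1:m1) -> (n2:m2) *)
  | LinkSet of (int * int) * (int * int) * (int * int)
      (* (n1:m1) -> (n2:m2) with state(m) <- n, the event-generating link *)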
Figure 5: Stateful NetKAT: extracting NetKAT Program (state ~k).
f ∈ Field (packet field name)
n ∈ N (numeric value)
x ::= f | pt (modifiable field)
a, b ::= true | false | x = n | sw=n | state(n) = n | a ∨ b | a ∧ b | ¬a (test)
p, q ::= a | x ← n | p + q | p ; q | p∗ | (n:n) _ (n:n) | (n:n) _ (n:n) _ hstate(n) ← ni (command)
Figure 4: Stateful NetKAT: syntax.
3.3 Converting Stateful NetKAT Programs to ETSs
Now that we have the J · K~k function to extract the static configurations (NetKAT programs) corresponding to the vertices of an ETS, we define another function L · M~k , which
produces the event-edges (Figure 6). This collects (using parameter ϕ) the conjunction of all tests seen up to a given
program location, and records a corresponding event-edge
when a state assignment command is encountered. The function returns a tuple (D, P ), where D is a set of event-edges,
and P is a set of updated conjunctions of tests. In the figure, the t operator denotes pointwise union of tuples, i.e.
(A1 , B1 , · · · ) t (A2 , B2 , · · · ) = (A1 ∪ A2 , B1 ∪ B2 , · · · ).
The ‚ operator denotes (pointwise) Kleisli composition, i.e. (f ‚ g) , ⊔{g y : y ∈ f x}, and the function F is as follows:
Fp0 (ϕ, ~k) , ({}, {ϕ})
Fpj+1 (ϕ, ~k) , (LpM~k ‚ Fpj ) ϕ
The symbol variable = is either equality "=" or inequality "≠", and ≠ is the opposite symbol with respect to =. Given any conjunction ϕ and a header field f , the formula (∃f : ϕ) strips all predicates of the form (f = n) from ϕ.
Using fst to denote obtaining the first element of a tuple, we can now produce the event-driven transition system for a Stateful NetKAT program p with the initial state ~k0 :
ETS (p) , (V, D, v0 )
where V , ∪~k {(~k, C(JpK~k ))}
and D , fst (⊔~k LpM~k true)
and v0 , (~k0 , C(JpK~k0 ))
Figure 6: Stateful NetKAT: extracting event-edges from state ~k.
4. Implementing Event-Driven Programs
Next, we show one method of implementing NESs in a real SDN, and we prove that this approach is correct—i.e., all traces followed by actual packets in the network are correct with respect to Definition 6 in Section 2. At a high level, the basic idea of our implementation strategy can be understood as follows. We assume that the switches in the network provide mutable state that can be read and written as packets are processed. Given an NES, we assign a tag to each event-set and compile to a collection of configurations whose rules are "guarded" by the appropriate tags. We then add logic that (i) updates the mutable state to record local events, (ii) stamps incoming packets with the tag for the current event-set upon ingress, and (iii) reads the tags carried by packets, and updates the event-set at subsequent switches.
4.1 Implementation Building Blocks
Static Configurations. The NES contains a set of network
configurations that need to be installed as flow tables on
switches. In addition, we must be able to transition to a new
configuration in response to a local event. We do this proactively, installing all of the needed rules on switches in advance, with each rule guarded by its configuration’s ID. This
has a disadvantage of being less efficient in terms of rulespace usage, but an advantage of allowing quick configuration changes. In Section 5.3, we discuss an approach for
addressing the space-usage issue by sharing rules between
configurations. Our implementation strategy encodes each
event-set in the NES as an integer, so a single unused packet
header field (or single register on switches) can be used. This
keeps the overhead low, even for very large programs.
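The following OCaml fragment sketches this encoding: event-sets receive consecutive integer tags, and a rule applies to a packet only if its guard equals the tag the packet carries. The record fields and function names are illustrative, not the prototype's.

(* Guarded rules: each rule belongs to exactly one configuration tag. *)
type rule = { guard : int; match_ip_dst : int option; action : string }

(* Assign consecutive integer tags to the event-sets of an NES. *)
let assign_tags (event_sets : 'a list) : ('a * int) list =
  List.mapi (fun i es -> (es, i)) event_sets

(* A packet carries the tag of the configuration under which it entered the
   network; only rules guarded by that tag apply to it. *)
let applicable (tag : int) (rules : rule list) : rule list =
  List.filter (fun r -> r.guard = tag) rules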
Stateful Switches. Emerging data-plane languages such as
P4 [6] and OpenState [5] are beginning to feature advanced
functionality such as customizable parsing, stateful memories, etc. We assume that our switches support (1) modifying
a local register (e.g. an integer on a switch) appropriately
upon receipt of a packet, and (2) making packet forwarding
decisions based on the value of a register. This allows each
switch to maintain a local view of the global state. Specifically, the register records the set of events the device knows
have occurred. At any time, the device can receive a packet
(from the controller or another device) informing it of new
event occurrences, which are “unioned” into the local register (by performing a table lookup based on integer values).
Currently, P4 data planes support this type of functionality.
We also assume that the switch atomically processes each
packet in the order in which it was received. Such “atomic”
switch operations are proposed by the “Packet Transactions”
P4 extension [34]. Because the P4 switch platform is attracting considerable attention (even spawning its own highly-
attended workshop), we feel that our assumptions are realistic for the current state-of-the-art in regards to switches.
Packet Processing. Each packet entering the network is
admitted from a host to a port on an edge switch. The
configuration ID j corresponding to the device’s view of
the global state is assigned to the packet’s version number
field. The packet will be processed only by j-guarded rules
throughout its lifetime. Packets also carry a digest encoding
the set of events the packet has heard about so far (i.e.
the packet’s view of the global state). If the packet passes
through a device which has heard about additional events,
the packet’s digest is updated accordingly. Similarly, if the
packet’s digest contains events not yet heard about by the
device, the latter adds them to its view of the state. When a
packet triggers an event, that event is immediately added to
the packet’s digest, as well as to the state of the device where
the event was detected. The controller is then notified about
the event. Optionally (as an optimization), the controller
can periodically broadcast its view of the global state to all
switches, in order to speed up dissemination of the state.
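The digest exchange at each hop amounts to taking the union of the two views, as in the small OCaml sketch below (illustrative names; in the real data plane the switch's view lives in a register and the packet's view in a header field).

module Events = Set.Make (Int)

type switch_state = { mutable seen : Events.t }

type packet = { mutable digest : Events.t }

let exchange_views (sw : switch_state) (pkt : packet) : unit =
  let merged = Events.union sw.seen pkt.digest in
  sw.seen <- merged;          (* switch learns events carried by the packet *)
  pkt.digest <- merged        (* packet learns events known to the switch *)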
4.2 Operational Model
We formalize the above via operational semantics for the
global behavior of the network as it executes an NES. Each
state in Figure 7 has the form (Q, R, S), with a controller
queue Q, a controller R, and set of switches S. Both the controller queue and controller are a set of events, and initially,
R=Q=∅. Each switch s ∈ S is a tuple (n, qmin , E, qmout ),
where n is the switch ID, qmin , qmout are the input/output queue maps (mapping port IDs to packet queues). Map
updates are denoted qm[m 7→ pkts]. The event-set E represents a switch’s view of what events have occurred. A
packet’s digest is denoted pkt.digest, and the configuration
corresponding to its version number is denoted pkt.C. The rules in Figure 7 can be summarized as follows.
• IN/OUT: move a packet between a host and edge port.
• SWITCH: process a packet by first adding new events from the packet's digest to the local state, then checking if the packet's arrival matches an event e enabled by the NES and updating the state and packet digest if so, and finally updating the digest with other local events.
• LINK: move a packet across a physical link.
• CTRL RECV: bring an event from the controller queue into the controller.
• CTRL SEND: update the local state of the switches.
Figure 7: Implemented program semantics.
4.3 Correctness of the Implementation
We now prove the correctness of our implementation. Formally, we show that the operational semantics generates correct traces, as defined in Section 2.
Lemma 3 (Global Consistency). Given a locally-determined network event structure N , for an execution of the implementation (Q1 , R1 , S1 )(Q2 , R2 , S2 ) · · · (Qm , Rm , Sm ), the event-set Qi ∪ Ri is consistent for all 1 ≤ i ≤ m.
Proof Sketch. We first show that if an inconsistent set Y
where |Y | > 1 satisfies the locality restriction (i.e. all of its
events are handled at the same switch), then Y ⊆ Ri ∪ Qi is
not possible for any i (the SWITCH rule ensures that multiple
events from Y could not have been sent to the controller).
We proceed by induction over m, the trace length, noting
that the base case Q0 ∪R0 = ∅ is consistent. Assume that the
implementation adds an e (via SWITCH) to some consistent
event-set Qm ∪ Rm , producing an inconsistent set. We look
at the minimally-inconsistent set Y ⊆ (Qm ∪Rm ∪{e}), and
notice that the locality restriction requires all events in Y to
10
2016/4/19
be detected at the same switch, so by the previous paragraph,
we must have |Y | ≤ 1. This generates a contradiction, since
it would mean that either Y = {e0 } or Y ⊆ Qm ∪ Rm ,
either of which would make Y consistent.
Traces of the Implementation. Note that we can readily
produce the network trace (Section 2) that corresponds to an
implementation trace, since a single packet pkt is processed
at each step of Figure 7. We now present the main result of
this section—executions of the implementation correspond
to correct network traces (Definition 6).
Theorem 1 (Implementation Correctness). For an NES N ,
and an execution (Q1 , R1 , S1 )(Q2 , R2 , S2 ) · · · (Qm , Rm ,
Sm ) of the implementation, the corresponding network trace
ntr is correct with respect to N .
Proof Sketch. The proof is by induction over the length m
of the execution. In the induction step, we show that (1)
the SWITCH rule can only produce consistent event-sets (this follows directly from Lemma 3), and (2) when the IN
rule tags a packet pkt based on the local event-set E, that
E consists of exactly the events that happened before pkt
arrived (as ordered by the happens-before relation).
5. Implementation and Evaluation
We built a full-featured prototype implementation in OCaml.
• We implemented the compiler described in Section 3.
This tool accepts a Stateful NetKAT program, and produces the corresponding NES, with a standard NetKAT
program representing the configuration at each node. We
interface with Frenetic’s NetKAT compiler to produce
flow-table rules for each of these NetKAT programs.
• We modified the OpenFlow 1.0 reference implementation
to support the custom switch/controller needed to realize
the runtime described in Section 4.
• We built tools to automatically generate custom Mininet
scripts to bring up the programmer-specified network
topology, using switches/controller running the compiled
NES. We can then realistically simulate the whole system
using real network traffic.
Research Questions. To evaluate our approach, we wanted
to obtain answers to the following questions.
1. How useful is our approach? Does it allow programmers
to easily write real-world network programs, and get the
behavior they want?
2. What is the performance of our tools (compiler, etc.)?
3. How much does our correctness guarantee help? For instance, how do the running network programs compare
with uncoordinated event-driven strategies?
4. How efficient are the implementations generated by our
approach? For instance, what about message overhead?
State-change convergence time? Number of rules used?
We address #1-3 through case studies on real-world programming examples, and #4 through quantitative performance measurements on simple automatically-generated programs. For the experiments, we assume that the programmer has first confirmed that the program satisfies the conditions allowing proper compilation to an NES, and we assume that the ETS has no loops. Our tool could be modified to perform these checks via basic algorithms operating on the ETS, but they have not yet been implemented in the current prototype (as mentioned in Section 3.1, developing efficient algorithms for these checks is left for future work).
Our experimental platform was an Ubuntu machine with 20GB RAM and a quad-core Intel i5-4570 CPU (3.2 GHz). To choose a representative set of realistic examples, we first studied the examples addressed in other recent stateful network programming approaches, such as SNAP [3], FlowLog [31], Kinetic [21], NetEgg [39], and FAST [30], and categorized them into three main groups:
• Protocols/Security: accessing streaming media across subnets, ARP proxy, firewall with authentication, FTP monitoring, MAC learning, stateful firewall, TCP reassembly, Virtual Machine (VM) provisioning.
• Measurement/Performance: heavy hitter detection, bandwidth cap management (uCap), connection affinity in load balancing, counting domains sharing the same IP address, counting IP addresses under the same domain, elephant flows detection, link failure recovery, load balancing, network information base (NIB), QoS in multimedia streaming, rate limiting, sampling based on flow size, Snort flowbits, super spreader detection, tracking flow-size distributions.
• Monitoring/Filtering: application detection, DNS amplification mitigation, DNS TTL change tracking, DNS tunnel detection, intrusion detection system (IDS), optimistic ACK attack detection, phishing/spam detection, selective packet dropping, sidejack attack detection, stolen laptop detection, SYN flood detection, UDP flood mitigation, walled garden.
As we will see in the following section, our current prototype system is best suited for writing programs such as the ones in the Protocols/Security category, since some of the Measurement/Performance programs require timers and/or integer counters, and some of the Monitoring/Filtering programs require complex pattern matching of (and table lookups based on) sequences of packets—functionality which we do not (yet) natively support. Thus, we have selected three examples from the first category, and one from each of the latter two, corresponding to the boldface applications in the list. We believe that these applications are representative of the basic types of behaviors seen in the other listed applications.
5.1 Case Studies
In the first set of experiments, we compare correct behavior (produced by our implementation strategy) with that of
an uncoordinated update strategy. We simulate an uncoordinated strategy in the following way: events are sent to the
controller, which pushes updates to the switches (in an unpredictable order) after a few-seconds time delay. We believe this delay is reasonable because heavily using the controller and frequently updating switches can lead to delays between operations of several seconds in practice (e.g. [17] reports up to 10s for a single switch update).
To show that problems still arise for smaller delays, in the firewall experiment described next, we varied the time delay in the uncoordinated strategy between 0ms and 5000ms (in increments of 100ms), running the experiment 10 times for each. We then plotted the total number of incorrectly-dropped packets with respect to delay. The results are shown in Figure 10. Note that even with a very small delay, the uncoordinated strategy still always drops at least one packet.
Figure 8: Topologies: (a) Firewall, (b) Learning Switch, (c) Authentication, (d) Bandwidth Cap, (e) Intrusion Detection System.
(a) Firewall:
pt=2 ∧ ip_dst=H4; pt←1; (state=[0];
(1:1)_(4:1)_hstate←[1]i + state6=[0];
(1:1)_(4:1)); pt←2
+ pt=2 ∧ ip_dst=H1; state=[1]; pt←1;
(4:1)_(1:1); pt←2
(b) Learning Switch:
pt=2 ∧ ip_dst=H1; (pt←1; (4:1)_(1:1) +
state=[0]; pt←3; (4:3)_(2:1)); pt←2
+ pt=2 ∧ ip_dst=H4; pt←1; (1:1)_(4:1)_h
state←[1]i; pt←2
+ pt=2; pt←1; (2:1)_(4:3); pt←2
Stateful Firewall. The example in Figures 8-9(a) is a simplified stateful firewall. It always allows “outgoing” traffic
(from H1 to H4), but only allows “incoming” traffic (from
H4 to H1) after the outside network has been contacted, i.e.
“outgoing” traffic has been forwarded to H4.
Program p corresponds to configurations C[0] = JpK[0]
and C[1] = JpK[1] . In the former, only outgoing traffic is
allowed, and in the latter, both outgoing and incoming are
(c) Authentication:
state=[0] ∧ pt=2 ∧ ip_dst=H1; pt←1;
(4:1)_(1:1)_hstate←[1]i; pt←2
+ state=[1] ∧ pt=2 ∧ ip_dst=H2; pt←3;
(4:3)_(2:1)_hstate←[2]i; pt←2
+ state=[2] ∧ pt=2 ∧ ip_dst=H3; pt←4;
(4:4)_(3:1); pt←2
+ pt=2; pt←1; ((1:1)_(4:1) + (2:1)_(4:3) +
(3:1)_(4:4)); pt←2
allowed. The ETS has the form {h[0]i −(dst=H4, 4:1)→ h[1]i}.
The NES has the form {E0 =∅ → E1 ={(dst=H4, 4:1)}},
where the g is given by g(E0 ) = C[0] , g(E1 ) = C[1] .
The Stateful Firewall example took 0.013s to compile,
and produced a total of 18 flow-table rules. In Figure 11(a),
we show that the running firewall has the expected behavior.
We first try to ping H1 from H4 (the “H4-H1”/red points),
which fails. Then we ping H4 from H1 (the “H1-H4”/orange
points), which succeeds. Again we try H4-H1, and now this
succeeds, since the event-triggered state change occurred.
For the uncoordinated strategy, Figure 11(b) shows that
some of the H1-H4 pings get dropped (i.e. H1 does not hear
back from H4), meaning the state change did not behave as
if it was caused immediately upon arrival of a packet at S4.
(d) Bandwidth Cap:
pt=2 ∧ ip_dst=H4;
pt←1; (
state=[0]; (1:1)_(4:1)_hstate←[1]i
+ state=[1]; (1:1)_(4:1)_hstate←[2]i
+ state=[2]; (1:1)_(4:1)_hstate←[3]i
.
.
.
+ state=[10]; (1:1)_(4:1)_hstate←[11]i
+ state=[11]; (1:1)_(4:1)
); pt←2
+ pt=2 ∧ ip_dst=H1; state6=[11]; pt←1;
(4:1)_(1:1); pt←2
(e) Intrusion Detection System:
pt=2 ∧ ip_dst=H1; pt←1; (state=[0];
(4:1)_(1:1)_hstate←[1]i + state6=[0];
(4:1)_(1:1)); pt←2
+ pt=2 ∧ ip_dst=H2; pt←3; (state=[1];
(4:3)_(2:1)_hstate←[2]i + state6=[1];
(4:3)_(2:1)); pt←2
+ pt=2 ∧ ip_dst=H3; pt←4; state6=[2];
(4:4)_(3:1); pt←2
+ pt=2; pt←1; ((1:1)_(4:1) + (2:1)_(4:3) +
(3:1)_(4:4)); pt←2
Learning Switch. The example in Figures 8-9(b) is a simple learning switch. Traffic from H4 to H1 is flooded (sent
to both H1 and H2), until H4 receives a packet from H1, at
Figure 9: Programs: (a) Firewall, (b) Learning Switch, (c) Authentication, (d) Bandwidth Cap, (e) Intrusion Detection System.
Figure 10: Stateful Firewall: impact of delay.
Figure 11: Stateful Firewall: (a) correct vs. (b) incorrect.
which point it “learns” the address of H1, and future traffic
from H4 to H1 is sent only to H1.
This program p corresponds to two configurations C[0] =
JpK[0] and C[1] = JpK[1] . In the former, flooding occurs from
H4, and in the latter, packets from H4 are forwarded directly
Authentication. In this example, shown in Figures 8-9(c),
the untrusted host H4 wishes to contact H3, but can only do
so after contacting H1 and then H2, in that order.
This program p corresponds to three configurations:
C[0] = JpK[0] in which only H4-H1 traffic is enabled,
C[1] = JpK[1] in which only H4-H2 traffic is enabled,
and C[2] = JpK[2] which finally allows H4 to communicate with H3. The ETS has the form {h[0]i −(dst=H1, 1:1)→ h[1]i −(dst=H2, 2:1)→ h[2]i}. The NES has the form {E0 =∅ →
E1 ={(dst=H1, 1:1)} → E2 ={(dst=H1, 1:1), (dst=H2,
2:1)}}, where the g function is given by g(E0 ) = C[0] ,
g(E1 ) = C[1] , g(E2 ) = C[2] .
The Authentication example took 0.017s to compile, and
produced a total of 72 flow-table rules. In Figure 13(a) we
demonstrate the correct behavior of the program, by first
trying (and failing) to ping H3 and H2 from H4, then successfully pinging H1, again failing to ping H3 (and H1), and
finally succeeding in pinging H3. The incorrect (uncoordinated) implementation in Figure 13(b) allows an incorrect
behavior where we can successfully ping H1 and then H2,
but then fail to ping H3 (at least temporarily).
Bandwidth Cap. The Figure 8-9(d) example is a simplified
bandwidth cap implementation. It allows “outgoing” traffic
(H1-H4), but only until the limit of n packets has been
reached, at which point the service provider replies with a
Figure 12: Learning Switch: (a) correct vs. (b) incorrect.
Figure 13: Authentication: (a) correct vs. (b) incorrect.
to H1. The ETS has the form {h[0]i −(dst=H4, 4:1)→ h[1]i}.
The NES has the form {E0 =∅ → E1 ={(dst=H4, 4:1)}},
where the g is given by g(E0 ) = C[0] , g(E1 ) = C[1] .
This only allows learning for a single host (H1), but we
could easily add learning for H2 by using a different index
in the vector-valued state field: we could replace state in
Figure 9(b) with state(0), and union the program (using the
NetKAT “+” operator) with another instance of Figure 9(b)
which learns for H2 and uses state(1).
The Learning Switch example took 0.015s to compile,
and produced a total of 43 flow-table rules. We again compare the behavior of our correct implementation with that
of an implementation which uses an uncoordinated update
strategy. We first ping H1 from H4. Expected behavior is
shown in Figure 12(a), where the first packet is flooded to
both H1 and H2, but then H4 hears a reply from H1, causing the state change (i.e. learning H1's address), and all subsequent packets are sent only to H1. In Figure 12(b), however, since the state change can be delayed, multiple packets are sent to H2, even after H4 has seen a reply from H1.
Figure 14: Bandwidth Cap: (a) correct vs. (b) incorrect.
notification message, and disallows the “incoming” path. In
this experiment, we use a bandwidth cap of n = 10 packets.
Program p corresponds to configurations C[0] =JpK[0] ,
· · · , C[n] =JpK[n] , which all allow incoming/outgoing traffic,
and a configuration C[n+1] =JpK[n+1] which disallows the
Figure 15: Intrusion Detection System: (a) correct vs. (b) incorrect.
incoming traffic. The ETS has the form {h[0]i −(dst=H4, 4:1)→ h[1]i −(dst=H4, 4:1)→ · · · −(dst=H4, 4:1)→ h[n + 1]i}. The NES
has the form {E0 =∅ → E1 ={(dst=H4, 4:1)} → · · · →
En+1 ={(dst=H4, 4:1)0 , · · · , (dst=H4, 4:1)n }}, where
the g is given by g(E0 ) = C[0] , · · · , g(En+1 ) = C[n+1] .
Note that the subscripts on events in the NES event-sets
(e.g. the ones in En+1 ) indicate “renamed” copies of the
same event (as described in Section 3.1).
The Bandwidth Cap example took 0.023s to compile, and
produced a total of 158 flow-table rules. In Figure 14(a), we
show that the running example has the expected behavior.
We send pings from H1 to H4, of which exactly 10 succeed,
meaning we have reached the bandwidth cap. Using the
uncoordinated update strategy in Figure 14(b), we again
send pings from H1 to H4, but in this case, 15 are successful,
exceeding the bandwidth cap.
Intrusion Detection System. In this example, shown in Figures 8-9(e), the external host H4 is initially free to communicate with the internal hosts H1, H2, and H3. However, if H4 begins engaging in some type of suspicious activity (in this case, beginning to scan through the hosts, e.g. contacting H1 and then H2, in that order), the activity is thwarted (in this case, by cutting off access to H3).
This program p corresponds to three configurations: C[0] = JpK[0] and C[1] = JpK[1] , in which all traffic is enabled, and C[2] = JpK[2] in which H4-H3 communication is disabled. The ETS has the form {h[0]i −(dst=H1, 1:1)→ h[1]i −(dst=H2, 2:1)→ h[2]i}. The NES has the form {E0 =∅ → E1 ={(dst=H1, 1:1)} → E2 ={(dst=H1, 1:1), (dst=H2, 2:1)}}, where the g function is given by g(E0 ) = C[0] , g(E1 ) = C[1] , g(E2 ) = C[2] .
This IDS example took 0.021s to compile and produced
152 flow-table rules. In Figure 15(a), we demonstrate the
correct behavior of the program, by first successfully pinging H3, H2, H1, H3, H2, H1 (in that order) from H4. This
results in a situation where we have contacted H1 and then
H2, causing the third attempt to contact H3 to be blocked
(H4-H3 pings dropped). The incorrect (uncoordinated) implementation in Figure 15(b) allows a faulty behavior where
we can successfully ping H1 and then H2 (in that order), but
subsequent H4-H3 traffic is still enabled temporarily.
5.2 Quantitative Results
In this experiment, we automatically generated some event-driven programs which specify that two hosts H1 and H2 are
connected to opposite sides of a ring of switches. Initially,
traffic is forwarded clockwise, but when a specific switch
detects a (packet) event, the configuration changes to forward counterclockwise. We increased the “diameter” of the
ring (distance from H1 to H2) up to 8, as shown in Figure
16, and performed the following two experiments.
1. We used iperf to measure H1-H2 TCP/UDP bandwidth, and compared the performance of our running
event-driven program, versus that of the initial (static)
configuration of the program running on un-modified
OpenFlow 1.0 reference switches/controller. Figure 16(a)
shows that our performance (solid line) is very close to
the performance of a system which does not do packet
tagging, event detection, etc. (dashed line)—we see
around 6% performance degradation on average (note
that the solid and dashed lines almost coincide).
Intrusion Detection System. In this example, shown in
Figures 8-9(e), the external host H4 is initially free to communicate with the internal hosts H1, H2, and H3. However,
if H4 begins engaging in some type of suspicious activity (in
this case, beginning to scan through the hosts, e.g. contacting H1 and then H2, in that order), the activity is thwarted
(in this case, by cutting off access to H3).
This program p corresponds to three configurations: C[0] = ⟦p⟧[0] and C[1] = ⟦p⟧[1], in which all traffic is enabled, and C[2] = ⟦p⟧[2], in which H4-H3 communication is disabled. The ETS has the form {⟨[0]⟩ −(dst=H1, 1:1)→ ⟨[1]⟩ −(dst=H2, 2:1)→ ⟨[2]⟩}. The NES has the form {E0 = ∅ → E1 = {(dst=H1, 1:1)} → E2 = {(dst=H1, 1:1), (dst=H2, 2:1)}}, where the g function is given by g(E0) = C[0], g(E1) = C[1], g(E2) = C[2].
[Figure 16 plots: (a) bandwidth (Mbit/sec) vs. network diameter (# switches), with TCP perf., UDP perf., and UDP % loss; (b) event discovery time (s) vs. network diameter (# switches).]
Figure 16: Circular Example: (a) bandwidth (solid line is ours, dotted line is reference implementation) and (b) convergence.
[Figure 17 plot: # original rules vs. number of rules w/ heuristic, with the line x = y for reference.]
Figure 17: Heuristic: reducing the number of rules.
2. We measured maximum and average time needed for a
switch to learn about the event. The “Max.” and “Avg.”
bars in Figure 16(b) are these numbers when the controller does not assist in disseminating events (i.e. only
the packet digest is used), and the other columns are the
maximum and average when the controller does so.
5.3 Optimizations
When a configuration change occurs, the old and new configurations are often similar, differing only in a subset of flow-table rules. Tables are commonly stored in TCAM memory
on switches, which is limited/costly, so it is undesirable to
store duplicate rules. As mentioned in Section 4.1, each of
our rules is guarded by its configuration’s numeric ID. If the
same rule occurs in several configurations having IDs with
the same (binary) high-order bits, intuitively we can reduce
space usage by keeping a single copy of the rule, and guarding it with a configuration ID having the shared high-order
bits, and wildcarded low-order bits. For example, if rule r is
used in two different configurations having IDs 2 (binary 10)
and 3 (binary 11), we can wildcard the lowest bit (1∗), and
keep a single rule (1∗)r having this wildcarded guard, instead of two copies of r with the "10" and "11" guards. Ideally, we would like to (re)assign numeric IDs to the configurations such that maximal sharing of this form is achieved.
We formalize the problem as follows. Assume there is a
set of all possible rules R. A configuration C is a subset
of these rules C ⊆ R. Assume there are k bits in a configuration ID. Without loss of generality we assume there
are exactly 2^k configurations (if there are fewer, we can add
dummy configurations, each containing all rules in R). For
a given set of configurations, we construct a trie having all
of the configurations at the leaves. This trie is a complete
binary tree in which every node is marked with (1) a wildcarded mask that represents the configuration IDs of its children, and (2) the intersection of the rule-sets of its children.
Consider configurations C0 = {r1 , r2 }, C1 = {r1 , r3 },
C2 = {r2 , r3 }, C3 = {r1 , r2 }. Figure 18 shows two different assignments of configurations to the leaves of tries. The
number of rules for trie (a) is 6: (0∗)r1 , (00)r2 , (01)r3 ,
(1∗)r2 , (10)r3 , (11)r1 . The number of rules for trie (b) is 5:
(0∗)r1 , (0∗)r2 , (1∗)r3 , (10)r1 , (11)r2 . Intuitively, this is
because the trie (b) has larger sets in the interior. Our polynomial heuristic follows that basic intuition: it constructs the
trie from the leaves up, at each level pairing nodes in a way
that maximizes the sum of the cardinalities of their sets. This
does not always produce the global maximum rule sharing,
but we find that it produces good results in practice.
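A minimal sketch of this leaves-up construction is given below, assuming one simple greedy realization of the pairing step (repeatedly take the remaining pair with the largest intersection); the paper does not fix the exact pairing procedure, so this is only one way to instantiate it. The rule counting mirrors the Figure 18 example: each node is charged only for rules not already supplied by its parent's wildcarded guard. On the Figure 18 configurations it reproduces the 5-rule assignment.

from itertools import combinations

def greedy_pair(nodes):
    """Pair up nodes (ruleset, children), greedily taking the pair whose
    intersection is largest; one simple realization of the heuristic."""
    remaining = list(nodes)
    parents = []
    while remaining:
        i, j = max(combinations(range(len(remaining)), 2),
                   key=lambda ij: len(remaining[ij[0]][0] & remaining[ij[1]][0]))
        a, b = remaining[i], remaining[j]
        parents.append((a[0] & b[0], [a, b]))
        remaining = [n for k, n in enumerate(remaining) if k not in (i, j)]
    return parents

def count_rules(node, parent_rules=frozenset()):
    rules, children = node
    emitted = len(rules - parent_rules)          # rules not covered by the parent's guard
    return emitted + sum(count_rules(c, rules) for c in children)

def heuristic_rule_count(configs):               # configs: a power-of-two list of rule-sets
    level = [(frozenset(c), []) for c in configs]
    while len(level) > 1:
        level = greedy_pair(level)
    return count_rules(level[0])

# the Figure 18 configurations C0..C3
C = [{"r1", "r2"}, {"r1", "r3"}, {"r2", "r3"}, {"r1", "r2"}]
print(heuristic_rule_count(C))   # 5, matching the better assignment (trie (b))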
As indicated by the Figure 17 result (64 randomly-generated configurations w/ 20 rules), on average, the rule savings were about 32% of the original number of rules. We
also ran this on the previously-discussed Firewall, Learning
Switch, Authentication, Bandwidth Cap, and IDS examples,
and got rule reductions of 18 → 16, 43 → 27, 72 → 46,
158 → 101, and 152 → 133 respectively.
[Figure 18 tries: the leaves 00, 01, 10, 11 of trie (a) hold {r1, r2}, {r1, r3}, {r2, r3}, {r1, r2}, and those of trie (b) hold {r1, r2}, {r1, r2}, {r1, r3}, {r2, r3}; internal nodes carry a wildcarded mask and the intersection of their children's rule-sets (0∗: {r1}, 1∗: {r2} in (a); 0∗: {r1, r2}, 1∗: {r3} in (b)). Legend for Figure 16(b): Max., Avg., Max. w/ Controller, Avg. w/ Controller.]
Figure 18: Heuristic: two different tries for the same configurations.
6. Related Work
Network Updates, Verification, and Synthesis. We already
briefly mentioned an early approach known as consistent
updates [33]. This work was followed by update techniques
that respect other correctness properties [25] [17] [40] [26].
These approaches for expressing and verifying correctness
of network updates work in terms of individual packets.
In event-driven network programs, it is necessary to
check properties which describe interactions between multiple packets. There are several works which seek to perform
network updates in the context of multi-packet properties
[12] [23]. There are also proposals for synthesizing SDN
controller programs from multi-packet examples [39] and
from first-order specifications [32]. Lopes et al. presented
techniques for verifying reachability in stateful network programs [24], using a variant of Datalog. This is a complementary approach which could be used as a basis for verifying
reachability properties of our stateful programs.
interleavings of events and updates for each application, and
we show that our consistency model and implementation
technique work well in the context of SDN programs, but
we do not believe they are limited to that specific arena. Our
approach could also possibly be extended to other distributed
systems in which availability is prioritized, and consistency
can be relaxed in a well-defined way, as in our event-driven
consistent updates. Example domains include wireless sensor networks or other message-passing systems where the
nodes have basic stateful functionality.
Network Programming Languages. Network programs
can often be constructed using high-level languages. The
Frenetic project [10] [27] [11] allows higher-level specification of network policies. Other related projects like Merlin
[36] and NetKAT [35] [4] provide high-level languages/tools
to compile such programs to network configurations. Works
such as Maple [37] and FlowLog [31] seek to address the
dynamic aspect of network programming.
None of these systems and languages provide both (1)
event-based constructs, and (2) strong semantic guarantees
about consistency during updates, while our framework enables both. Concurrently with this paper, an approach called
SNAP [3] was developed, which enables event-driven programming, and allows the programmer to ensure consistency
via an atomic language construct. Their approach offers a
more expressive language than our Stateful NetKAT, but
in our approach, we enable correct-by-construction event-based behavior and provide a dynamic correctness property, showing (formally) that it is strong enough for easy reasoning,
yet flexible enough to enable efficient implementations. We
also prove the correctness of our implementation technique.
Future Work. There are several directions for future work
which could address limitations of our current system.
1. We assume that the set of (potential) hosts is known in advance, and use this information to generate corresponding flow tables for each switch. This may not be the right
choice in settings where hosts join/leave. Our approach
could be extended to represent hosts symbolically.
2. We currently store all configurations on the switches, so
that they are immediately available during updates. Our
optimizations allow this to be done in a space-efficient
way, but there may be situations when it would be better
for the controller to reactively push new configurations to
switches. This is an interesting problem due to interleavings of events and controller commands.
3. It would be interesting to consider formal reasoning and
automated verification for Stateful NetKAT.
4. We provide a solution to the problem of performing multiple updates, and the dynamic implementations we produce are meant to “run” in the network indefinitely. However, there may be ways to update the running dynamic
program itself in some consistent way.
Routing. The consistency/availability trade-off is of interest in routing outside the SDN context as well. In [18], a
solution called consensus routing is presented, based on a
notion of causality between triggers (related to our events).
However, the solution is different in many aspects, e.g. it allows a transient phase without safety guarantees.
8. Conclusion
This paper presents a full framework for correct event-driven
programming. Our approach provides a way of rigorously
defining correct event-driven behavior without the need for
specifying logical formulas. We detail a programming language and compiler which allow the user to write high-level
network programs and produce correct and efficient SDN
implementations, and we demonstrate the benefits of our approach using real-world examples. This paper considers the
challenging problem of distributing an event-based stateful
network program, and solves it in a principled way.
High-Level Network Functionality. Some recent work
has proposed building powerful high-level features into the
network itself, such as fabrics [9], intents [1], and other virtualization functionality [22]. Pyretic [28] and projects built
on top of it such as PyResonance [20], SDX [14], and Kinetic [21] provide high-level operations on which network
programs can be built. These projects do not guarantee consistency during updates, and thus could be profitably combined with an approach such as ours.
7. Discussion and Future Work
Acknowledgments
Many thanks to the anonymous PLDI reviewers for offering
helpful and constructive comments, as well as Zach Tatlock
for shepherding our paper and providing useful feedback.
Our work is supported by the National Science Foundation
under grants CNS-1111698, CNS-1413972, CCF-1421752,
CCF-1422046, CCF-1253165, and CCF-1535952; the Office of Naval Research under grant N00014-15-1-2177; and
gifts from Cisco, Facebook, Fujitsu, Google, and Intel.
Generality of Our Approach. The event-driven SDN update problem considered in this paper is an instance of a
more general distributed-systems programming problem,
namely how to write correct and efficient programs for distributed systems. We provide a PL approach (consistency
property, programming language, and compiler/runtime)
which ensures that the programmer need not reason about
References
[1] ONOS Intent Framework. 2014. URL https://wiki.onosproject.org/x/XgAZ.
[2] C. J. Anderson, N. Foster, A. Guha, J.-B. Jeannin, D. Kozen, C. Schlesinger, and D. Walker. NetKAT: Semantic Foundations for Networks. POPL, 2014.
[3] M. T. Arashloo, Y. Koral, M. Greenberg, J. Rexford, and D. Walker. SNAP: Stateful Network-Wide Abstractions for Packet Processing. 2015.
[4] R. Beckett, M. Greenberg, and D. Walker. Temporal NetKAT. PLVNET, 2015.
[5] G. Bianchi, M. Bonola, A. Capone, and C. Cascone. OpenState: Programming Platform-independent Stateful Openflow Applications Inside the Switch. ACM SIGCOMM CCR, 2014.
[6] P. Bosshart, D. Daly, G. Gibb, M. Izzard, N. McKeown, J. Rexford, C. Schlesinger, D. Talayco, A. Vahdat, G. Varghese, et al. P4: Programming Protocol-independent Packet Processors. ACM SIGCOMM CCR, 2014.
[7] E. Brewer. Towards robust distributed systems (abstract). PODC, page 7, 2000.
[8] M. Casado, M. J. Freedman, J. Pettit, J. Luo, N. McKeown, and S. Shenker. Ethane: Taking Control of the Enterprise. SIGCOMM, 2007.
[9] M. Casado, T. Koponen, S. Shenker, and A. Tootoonchian. Fabric: A Retrospective on Evolving SDN. HotSDN, 2012.
[10] N. Foster, R. Harrison, M. J. Freedman, C. Monsanto, J. Rexford, A. Story, and D. Walker. Frenetic: A Network Programming Language. ICFP, 2011.
[11] N. Foster, A. Guha, M. Reitblatt, A. Story, M. J. Freedman, N. P. Katta, C. Monsanto, J. Reich, J. Rexford, C. Schlesinger, et al. Languages for Software-Defined Networks. Communications Magazine, IEEE, 51(2):128–134, 2013.
[12] S. Ghorbani and B. Godfrey. Towards Correct Network Virtualization. HotSDN, 2014.
[13] S. Gilbert and N. Lynch. Perspectives on the CAP Theorem. IEEE Computer, 45(2):30–36, 2012.
[14] A. Gupta, L. Vanbever, M. Shahbaz, S. P. Donovan, B. Schlinker, N. Feamster, J. Rexford, S. Shenker, R. Clark, and E. Katz-Bassett. SDX: A Software Defined Internet Exchange. SIGCOMM, 2014.
[15] C.-Y. Hong, S. Kandula, R. Mahajan, M. Zhang, V. Gill, M. Nanduri, and R. Wattenhofer. Achieving High Utilization with Software-driven WAN. SIGCOMM, 2013.
[16] S. Jain et al. B4: Experience with a Globally-Deployed Software Defined WAN. SIGCOMM, 2013.
[17] X. Jin, H. H. Liu, R. Gandhi, S. Kandula, R. Mahajan, M. Zhang, J. Rexford, and R. Wattenhofer. Dynamic Scheduling of Network Updates. SIGCOMM, 2014.
[18] J. John, E. Katz-Bassett, A. Krishnamurthy, T. Anderson, and A. Venkataramani. Consensus Routing: The Internet as a Distributed System. NSDI, 2008.
[19] N. Kang, Z. Liu, J. Rexford, and D. Walker. Optimizing the One Big Switch Abstraction in Software-Defined Networks. CoNEXT, 2013.
[20] H. Kim, A. Gupta, M. Shahbaz, J. Reich, N. Feamster, and R. Clark. Simpler Network Configuration with State-Based Network Policies. Technical report, Georgia Tech, 2013.
[21] H. Kim, J. Reich, A. Gupta, M. Shahbaz, N. Feamster, and R. Clark. Kinetic: Verifiable Dynamic Network Control. NSDI, 2015.
[22] T. Koponen, K. Amidon, P. Balland, M. Casado, A. Chanda, B. Fulton, I. Ganichev, J. Gross, N. Gude, P. Ingram, et al. Network Virtualization in Multi-tenant Datacenters. NSDI, 2014.
[23] W. Liu, R. B. Bobba, S. Mohan, and R. H. Campbell. Inter-Flow Consistency: Novel SDN Update Abstraction for Supporting Inter-Flow Constraints. NDSS, 2015.
[24] N. P. Lopes, N. Bjørner, P. Godefroid, K. Jayaraman, and G. Varghese. Checking Beliefs in Dynamic Networks. NSDI, 2015.
[25] A. Ludwig, M. Rost, D. Foucard, and S. Schmid. Good Network Updates for Bad Packets: Waypoint Enforcement Beyond Destination-based Routing Policies. HotNets, 2014.
[26] J. McClurg, H. Hojjat, P. Cerny, and N. Foster. Efficient Synthesis of Network Updates. PLDI, 2015.
[27] C. Monsanto, N. Foster, R. Harrison, and D. Walker. A Compiler and Run-time System for Network Programming Languages. POPL, 2012.
[28] C. Monsanto, J. Reich, N. Foster, J. Rexford, and D. Walker. Composing Software Defined Networks. NSDI, 2013.
[29] M. Moshref, M. Yu, A. B. Sharma, and R. Govindan. Scalable Rule Management for Data Centers. NSDI, 2013.
[30] M. Moshref, A. Bhargava, A. Gupta, M. Yu, and R. Govindan. Flow-level State Transition as a New Switch Primitive for SDN. 2014.
[31] T. Nelson, A. D. Ferguson, M. Scheer, and S. Krishnamurthi. Tierless Programming and Reasoning for Software-Defined Networks. NSDI, 2014.
[32] O. Padon, N. Immerman, A. Karbyshev, O. Lahav, M. Sagiv, and S. Shoham. Decentralizing SDN Policies. POPL, 2015.
[33] M. Reitblatt, N. Foster, J. Rexford, C. Schlesinger, and D. Walker. Abstractions for Network Update. SIGCOMM, 2012.
[34] A. Sivaraman, M. Budiu, A. Cheung, C. Kim, S. Licking, G. Varghese, H. Balakrishnan, M. Alizadeh, and N. McKeown. Packet Transactions: A Programming Model for Data-Plane Algorithms at Hardware Speed. 2015.
[35] S. Smolka, S. Eliopoulos, N. Foster, and A. Guha. A Fast Compiler for NetKAT. ICFP, 2015.
[36] R. Soulé, S. Basu, P. J. Marandi, F. Pedone, R. Kleinberg, E. G. Sirer, and N. Foster. Merlin: A Language for Provisioning Network Resources. CoNEXT, 2014.
[37] A. Voellmy, J. Wang, Y. R. Yang, B. Ford, and P. Hudak. Maple: Simplifying SDN Programming Using Algorithmic Policies. SIGCOMM, 2013.
[38] G. Winskel. Event Structures. Springer, 1987.
[39] Y. Yuan, D. Lin, R. Alur, and B. T. Loo. Scenario-based Programming for SDN Policies. CoNEXT, 2015.
[40] W. Zhou, D. Jin, J. Croft, M. Caesar, and P. B. Godfrey. Enforcing Generalized Consistency Properties in Software-Defined Networks. NSDI, 2015.
THE MONOID OF ORDER ISOMORPHISMS OF PRINCIPAL FILTERS OF A
POWER OF THE POSITIVE INTEGERS
arXiv:1802.03598v1 [] 10 Feb 2018
OLEG GUTIK AND TARAS MOKRYTSKYI
Abstract. Let n be any positive integer and IPF (Nn ) be the semigroup of all order isomorphisms
between principal filters of the n-th power of the set of positive integers N with the product order. We
study algebraic properties of the semigroup IPF (Nn ). In particular, we show that IPF (Nn ) is a bisimple,
E-unitary, F -inverse semigroup, and we describe Green's relations on IPF (Nn ) and its maximal subgroups.
We prove that the semigroup IPF (Nn ) is isomorphic to the semidirect product of the n-th direct
power of the bicyclic monoid C n (p, q) by the group of permutations Sn , that every non-identity congruence
C on the semigroup IPF (Nn ) is a group congruence, and we describe the least group congruence on IPF (Nn ). Also,
we show that every Hausdorff shift-continuous topology on IPF (Nn ) is discrete and discuss embeddings
of the semigroup IPF (Nn ) into compact-like topological semigroups.
1. Introduction and preliminaries
We shall follow the terminology of [9, 11, 13, 33, 34].
In this paper we shall denote the first infinite ordinal by ω, the set of integers by Z, the set of positive
integers by N, the set of non-negative integers by N0 , the additive group of integers by Z(+) and the
symmetric group of degree n by Sn , i.e., Sn is the group of all permutations of an n-element set. All
topological spaces, considered in this paper, are supposed to be are Hausdorff.
Let (X, 6) be a partially ordered set (a poset). For an arbitrary x ∈ X we denote
↑x = {y ∈ X : x 6 y}
and
↓x = {y ∈ X : y 6 x}
and the sets ↑x and ↓x are called the principal filter and the principal ideal, respectively, generated by the
element x ∈ X. A map α : (X, 6) → (Y, 6′ ) from a poset (X, 6) into a poset (Y, 6′ ) is called monotone or order preserving if x 6 y in (X, 6) implies that xα 6′ yα in (Y, 6′ ). A monotone map α : (X, 6) → (Y, 6′ ) is said to be an order isomorphism if it is bijective and its inverse α−1 : (Y, 6′ ) → (X, 6) is monotone.
A semigroup S is called inverse if for any element x ∈ S there exists a unique x−1 ∈ S such that
xx−1 x = x and x−1 xx−1 = x−1 . The element x−1 is called the inverse of x ∈ S. If S is an inverse
semigroup, then the function inv : S → S which assigns to every element x of S its inverse element x−1
is called the inversion.
A congruence C on a semigroup S is called non-trivial if C is distinct from universal and identity
congruences on S, and a group congruence if the quotient semigroup S/C is a group.
If S is a semigroup, then we shall denote the subset of all idempotents in S by E(S). If S is an
inverse semigroup, then E(S) is closed under multiplication and we shall refer to E(S) as a band (or
the band of S). Then the semigroup operation on S determines the following partial order 4 on E(S):
e 4 f if and only if ef = f e = e. This order is called the natural partial order on E(S). A semilattice
is a commutative semigroup of idempotents. A semilattice E is called linearly ordered or a chain if its
natural order is a linear order. By (P(λ), ∩) we shall denote the semilattice of all subsets of a set of
cardinality λ > ω with the semilattice operation “intersection”.
Date: February 13, 2018.
2010 Mathematics Subject Classification. 20M18, 20M20, 22A15, 54D40, 54D45, 54H10.
Key words and phrases. Semigroup, partial map, inverse semigroup, least group congruence, permutation semigroup,
bicyclic monoid, semidirect product, semitopological semigroup, topological semigroup.
If S is a semigroup, then we shall denote the Green relations on S by R, L , J , D and H (see
[11]). A semigroup S is called simple if S does not contain proper two-sided ideals and bisimple if S
has only one D-class.
Hereafter we shall assume that λ is an infinite cardinal. If α : λ ⇀ λ is a partial map, then we shall
denote the domain and the range of α by dom α and ran α, respectively.
Let Iλ denote the set of all partial one-to-one transformations of an infinite set X of cardinality λ
together with the following semigroup operation:
x(αβ) = (xα)β
if x ∈ dom(αβ) = {y ∈ dom α | yα ∈ dom β} ,
for α, β ∈ Iλ .
The semigroup Iλ is called the symmetric inverse semigroup over the set X (see [11, Section 1.9]). The
symmetric inverse semigroup was introduced by Wagner [38] and it plays a major role in the theory of
semigroups.
The bicyclic semigroup (or the bicyclic monoid ) C (p, q) is the semigroup with the identity 1 generated
by elements p and q subject only to the condition pq = 1.
Remark 1.1. We observe that the bicyclic semigroup is isomorphic to the semigroup CN (α, β) which
is generated by partial transformations α and β of the set of positive integers N, defined as follows: (n)α = n + 1 if n ≥ 1 and (n)β = n − 1 if n > 1 (see Exercise IV.1.11(ii) in [33]).
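The partial shifts of Remark 1.1 are easy to experiment with. The following Python sketch is illustrative only: the maps are restricted to a finite window {1, . . . , N} so that they can be stored as dictionaries, and composition follows the right-action convention x(αβ) = (xα)β. It shows that αβ acts as the identity starting from 1, while βα is the identity map of the principal filter ↑2, mirroring the bicyclic relation pq = 1.

N = 20

alpha = {n: n + 1 for n in range(1, N)}        # (n)alpha = n + 1, n >= 1
beta  = {n: n - 1 for n in range(2, N + 1)}    # (n)beta  = n - 1, n > 1

def compose(f, g):
    """Right-action composition: x(fg) = (xf)g, on the common domain."""
    return {x: g[f[x]] for x in f if f[x] in g}

ab = compose(alpha, beta)   # identity wherever it is defined, starting from 1
ba = compose(beta, alpha)   # identity of the principal filter {2, 3, ...}
print(all(ab[n] == n for n in ab), min(ab))    # True 1
print(all(ba[n] == n for n in ba), min(ba))    # True 2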
If T is a semigroup, then we say that a subsemigroup S of T is a bicyclic subsemigroup of T if S is
isomorphic to the bicyclic semigroup C (p, q).
A semitopological (topological ) semigroup is a topological space with separately continuous (continuous) semigroup operations. An inverse topological semigroup with continuous inversion is called a
topological inverse semigroup.
A topology τ on a semigroup S is called:
• semigroup if (S, τ ) is a topological semigroup;
• shift-continuous if (S, τ ) is a semitopological semigroup.
The bicyclic semigroup is bisimple and every one of its congruences is either trivial or a group
congruence. Moreover, every homomorphism h of the bicyclic semigroup is either an isomorphism or
the image of C (p, q) under h is a cyclic group (see [11, Corollary 1.32]). The bicyclic semigroup plays
an important role in algebraic theory of semigroups and in the theory of topological semigroups. For
example, a well-known result of Andersen [1] states that a (0–)simple semigroup with an idempotent is
completely (0–)simple if and only if it does not contain an isomorphic copy of the bicyclic semigroup. The
bicyclic monoid admits only the discrete semigroup Hausdorff topology and if a topological semigroup
S contains it as a dense subsemigroup then C (p, q) is an open subset of S [12]. We observe that the
openness of C (p, q) in its closure easily follows from the non-topologizability of the bicyclic monoid,
because the discrete subspace D is open in its closure D in any T1 -space containing D. Bertman and
West in [8] extended this result to the case of Hausdorff semitopological semigroups. Stable and Γ-compact topological semigroups do not contain the bicyclic monoid [2, 26]. The problem of embedding
the bicyclic monoid into compact-like topological semigroups was studied in [4, 5, 22]. Independently
of the Eberhart–Selden results on the topologizability of the bicyclic semigroup, in [36] Taimanov constructed a
commutative semigroup Aκ of cardinality κ which admits only the discrete semigroup topology. Also,
Taimanov [37] gave sufficient conditions on a commutative semigroup to have a non-discrete semigroup
topology. In the paper [16] it was shown that for the Taimanov semigroup Aκ from [36] the following
conditions hold: every T1 -topology τ on the semigroup Aκ such that (Aκ , τ ) is a topological semigroup
is discrete; for every T1 -topological semigroup S which contains Aκ as a subsemigroup, Aκ is a closed
subsemigroup of S; and every homomorphic non-isomorphic image of Aκ is a zero-semigroup.
Also, in the paper [14] it is proved that the discrete topology is the unique shift-continuous Hausdorff
topology on the extended bicyclic semigroup CZ . We observe that for many (0-) bisimple semigroups of
transformations S the following statement holds: every shift-continuous Hausdorff Baire (in particular
locally compact) topology on S is discrete (see [10, 21, 23, 24]). In the paper [31] Mesyan, Mitchell,
Morayne and Péresse showed that if E is a finite graph, then the only locally compact Hausdorff
semigroup topology on the graph inverse semigroup G(E) is the discrete topology. In [7] it was proved
that the conclusion of this statement also holds for graphs E consisting of one vertex and infinitely many
loops (i.e., infinitely generated polycyclic monoids). An amazing dichotomy for the bicyclic monoid with
adjoined zero C 0 = C (p, q) ⊔ {0} was proved in [15]: every Hausdorff locally compact semitopological
bicyclic monoid C 0 with adjoined zero is either compact or discrete. The above dichotomy was extended
by Bardyla in [6] to locally compact λ-polycyclic semitopological monoids and to locally compact
semitopological interassociates of the bicyclic monoid with adjoined zero [17].
We observe that some classes of inverse semigroups of partial transformations with cofinite domains and ranges have algebraic properties similar to the bicyclic semigroup. In [25] it was shown that the semigroup Iλcf of injective partial cofinite selfmaps of a cardinal λ is a bisimple inverse semigroup and that for every non-empty chain L in E(Iλcf ) there exists an inverse subsemigroup S of Iλcf such that S is isomorphic to the bicyclic semigroup and L ⊆ E(S), and it was proved that every non-trivial congruence on Iλcf is a group congruence. Also, the structure of the quotient semigroup Iλcf /σ, where σ is the least group congruence on Iλcf , was described there.
The semigroups I∞ր(N) and I∞ր(Z) of injective isotone partial selfmaps with cofinite domains and images of the positive integers and of the integers, respectively, were studied in [23] and [24]. There it was proved that the semigroups I∞ր(N) and I∞ր(Z) are bisimple and every non-trivial homomorphic image of I∞ր(N) and I∞ր(Z) is a group, and moreover the semigroup I∞ր(N) has Z(+) as a maximal group image and I∞ր(Z) has Z(+) × Z(+), respectively.
In [21] the semigroup IO∞ (Znlex ) of monotone injective partial selfmaps of the set Ln ×lex Z having cofinite domain and image, where Ln ×lex Z is the lexicographic product of an n-element chain and the set of integers with the usual linear order, is studied. There it was shown that the semigroup IO∞ (Znlex ) is bisimple and its projective congruences were established. Also, there it was proved that IO∞ (Znlex ) is finitely generated, every automorphism of IO∞ (Z) is inner, and in the case n > 2 the semigroup IO∞ (Znlex ) has non-inner automorphisms. In [21] we proved that for every positive integer n the quotient semigroup IO∞ (Znlex )/Cmg , where Cmg is the least group congruence on IO∞ (Znlex ), is isomorphic to the direct power (Z(+))2n . The structure of the sublattice of congruences on IO∞ (Znlex ) which are contained in the least group congruence is described in [18].
On the other hand, the semigroup of monotone injective partial selfmaps with cofinite domains and images of the square N2 with the product order has more complicated properties than the above described inverse semigroups [19]. In particular, it was proved there that D = J in PO∞ (N26 ). In [20] it was proved that the natural partial order 4 on PO∞ (N26 ) coincides with the natural partial order which is induced from the symmetric inverse monoid IN×N over N × N onto the semigroup PO∞ (N26 ). The congruence σ on the semigroup PO∞ (N26 ), which is generated by the natural order 4 on the semigroup PO∞ (N26 ), is described there: ασβ if and only if α and β are comparable in (PO∞ (N26 ), 4). Also, it was proved that the quotient semigroup PO∞ (N26 )/σ is isomorphic to the semidirect product of the free commutative monoid AMω over an infinite countable set by the cyclic group Z2 of order two.
For an arbitrary positive integer n by (Nn , 6) we denote the n-th power of the set of positive integers
N with the product order:
(x1 , . . . , xn ) 6 (y1 , . . . , yn )
if and only if
xi ≤ yi
for all i = 1, . . . , n.
It is obvious that the set of all order isomorphisms between principal filters of the poset (Nn , 6) with the operation of composition of partial maps forms a semigroup. This semigroup we shall denote by IPF (Nn ). It is easy to see that the semigroup IPF (Nn ) is isomorphic to the Munn semigroup of Nn with the order dual to 6 (see [27, Section 5.4] or [32]). Also, Remark 1.1 implies that the semigroup IPF (Nn ) is a generalization of the bicyclic semigroup C (p, q). Hence it is natural to ask: which algebraic properties of the semigroup IPF (Nn ) are similar to those of the bicyclic monoid?
Later, by I we shall denote the identity map of Nn ; it is obvious that I is the unit element of the semigroup IPF (Nn ). Also, by H(I) we shall denote the group of units of IPF (Nn ), and it is clear that α ∈ IPF (Nn ) is an element of H(I) if and only if it is an order isomorphism of the poset (Nn , 6).
In this paper we study algebraic properties of the semigroup IPF (Nn ). In particular, we show that IPF (Nn ) is a bisimple, E-unitary, F -inverse semigroup, and we describe Green's relations on IPF (Nn ) and its maximal subgroups. We prove that the semigroup IPF (Nn ) is isomorphic to the semidirect product of the n-th direct power of the bicyclic monoid C n (p, q) by the group of permutations Sn , that every non-identity congruence C on the semigroup IPF (Nn ) is a group congruence, and we describe the least group congruence on IPF (Nn ). Also, we show that every Hausdorff shift-continuous topology on IPF (Nn ) is discrete and discuss embeddings of the semigroup IPF (Nn ) into compact-like topological semigroups.
2. Algebraic properties of the semigroup IPF (Nn )
By P↑ (Nn ) we denote the set of all principal filters of the poset (Nn , 6). It is obvious that the
semilattice operation of (P(Nn ), ∩) is closed on P↑ (Nn ), and hence (P↑ (Nn ), ∩) is a semilattice. Also,
we observe that the semilattice (Nn , max), which is the set Nn with the point-wise operation max:
(x1 , . . . , xn ) (y1 , . . . , yn ) = (max {x1 , y1} , . . . , max {xn , yn }) ,
is isomorphic to the semilattice (P↑ (Nn ), ∩) by the mapping (x1 , . . . , xn ) ↦ ↑(x1 , . . . , xn ).
Proposition 2.1. Let n be any positive integer. Then the following statements hold:
(i) IPF (Nn ) is an inverse semigroup;
(ii) the semilattice E(IPF (Nn )) is isomorphic to (P↑ (Nn ), ∩) by the mapping ε 7→ dom ε, and
hence it is isomorphic to the semilattice (Nn , max);
(iii) αL β in IPF (Nn ) if and only if dom α = dom β;
(iv) αRβ in IPF (Nn ) if and only if ran α = ran β;
(v) αH β in IPF (Nn ) if and only if dom α = dom β and ran α = ran β;
(vi) for any idempotents ε, ι ∈ IPF (Nn ) there exist elements α, β ∈ IPF (Nn ) such that αβ = ε
and βα = ι;
(vii) IPF (Nn ) is bisimple and hence it is simple.
Proof. It is obvious that IPF (Nn ) is an inverse subsemigroup of the symmetric inverse monoid INn
over the set Nn . Then statements (i)–(v) follow from the definitions of the semigroup IPF (Nn ) and
Green’s relations.
(vi) Fix arbitrary idempotents ε, ι ∈ IPF (Nn ). Since dom ε and dom ι are principal filters in (Nn , 6), we may assume that dom ε and dom ι are generated by elements (x01 , . . . , x0n ) and (y10 , . . . , yn0 ) of the poset (Nn , 6), respectively.
We define a partial map α : Nn ⇀ Nn in the following way:
dom α = dom ε,
ran α = dom ι
and
(z1 , . . . , zn ) α = (z1 − x01 + y10 , . . . , zn − x0n + yn0 ), for any (z1 , . . . , zn ) ∈ dom α.
Then αα−1 = ε and α−1 α = ι and hence we put β = α−1 .
Statement (vii) follows from (vi) and Proposition 3.2.5(1) of [29].
The proofs of the following two lemmas are trivial.
Lemma 2.2. Let n be a positive integer ≥ 2. Then for any i = 1, . . . , n the projection on the i-th coordinate
πi : Nn → Nn : (x1 , . . . , xi , . . . , xn ) ↦ (1, . . . , xi , . . . , 1) (with xi in the i-th position)
is a monotone map and moreover (1, . . . , xi , . . . , 1) 6 (x1 , . . . , xi , . . . , xn ) in (Nn , 6).
Lemma 2.3. Let n be a positive integer ≥ 2. Then every map α : Nn → Nn which permutes the coordinates of elements of Nn is an order isomorphism of the poset (Nn , 6).
Lemma 2.4. Let n be a positive integer ≥ 2 and let a map α : Nn → Nn be an order isomorphism of the poset (Nn , 6) such that (x2i )α = x2i for any i = 1, . . . , n, where x2i = (1, . . . , 2, . . . , 1) is the element of the poset (Nn , 6) such that only the i-th coordinate of x2i is equal to 2 and all other coordinates are equal to 1. Then α is the identity map of (Nn , 6).
Proof. We observe that the element (1, 1, . . . , 1) is the minimum in the poset (Nn , 6) and since α is an
order isomorphism of (Nn , 6) we have that (1, 1, . . . , 1)α = (1, 1, . . . , 1).
Now, by induction we show that for any positive integer k > 2 and the element xk1 = (k, 1, 1, . . . , 1) of (Nn , 6) the set ↓xk1 is a k-element chain; then so is ↓((xk1 )α), which implies that only the first coordinate of (xk1 )α is equal to k and all other coordinates are equal to 1. Similarly, for every i = 1, . . . , n, by induction we show that for any positive integer k > 2 and the element xki of (Nn , 6) with the property that only the i-th coordinate of xki is equal to k and all other coordinates are equal to 1, the set ↓xki is a k-element chain; then so is ↓((xki )α), which implies that only the i-th coordinate of (xki )α is equal to k and all other coordinates are equal to 1.
Hence we have shown that (xpi )α = xpi for any i = 1, . . . , n, where xpi = (1, . . . , p, . . . , 1) is the element of (Nn , 6) such that only the i-th coordinate of xpi is equal to p and all other coordinates are equal to 1.
We observe that the inverse map α−1 : Nn → Nn of the order isomorphism α of (Nn , 6) is an order isomorphism of (Nn , 6) as well, which satisfies the assumption of the lemma. Hence the above part of the proof implies that (xpi )α−1 = xpi for any i = 1, . . . , n, where xpi = (1, . . . , p, . . . , 1) is the element of (Nn , 6) such that only the i-th coordinate of xpi is equal to p and all other coordinates are equal to 1.
Fix an arbitrary (x1 , . . . , xn ) ∈ Nn and suppose that
(x1 , . . . , xn )α = (y1 , . . . , yn ).
Since a composition of monotone maps is again a monotone map, Lemma 2.2 and the above part of the
proof imply that for any i = 1, . . . , n we have that
(1, . . . , xi , . . . , 1) = (1, . . . , xi , . . . , 1)πi = ((1, . . . , xi , . . . , 1)α)πi 6 ((x1 , . . . , xn )α)πi = (y1 , . . . , yn )πi = (1, . . . , yi , . . . , 1)
and
(1, . . . , yi , . . . , 1) = (1, . . . , yi , . . . , 1)πi = ((1, . . . , yi , . . . , 1)α−1 )πi 6 ((y1 , . . . , yn )α−1 )πi = (x1 , . . . , xn )πi = (1, . . . , xi , . . . , 1).
This implies that (x1 , . . . , xn ) = (y1 , . . . , yn ), which completes the proof of the lemma.
Theorem 2.5. For any positive integer n the group of units H(I) of the semigroup IPF (Nn ) is
isomorphic to Sn . Moreover, every element of H(I) permutes the coordinates of elements of Nn , and only such permutations are elements of H(I).
Proof. We shall show that every order isomorphism of the poset (Nn , 6) permutes the coordinates of
elements in Nn . The converse statement follows from Lemma 2.3.
Next we consider the element x21 = (2, 1, 1, . . . , 1) of (Nn , 6). Since ↓x21 is a chain which consists
of two elements and α is an order isomorphism of (Nn , 6), the image (x21 )α has the same properties as x21 . Then by the definition of the poset (Nn , 6) the element (x21 )α satisfies the following property: only one coordinate of (x21 )α is equal to 2 and all other coordinates are equal to 1. The index of this coordinate of (x21 )α we denote by σ1 . Similar arguments show that for every element x2i of the poset (Nn , 6) with the property
that only i-th coordinate of x2i is equal to 2 and all other coordinates are equal to 1 we obtain that only
σi -th coordinate of (x2i )α is equal to 2 and all other coordinates are equal to 1, for all i = 1, . . . , n. Also,
since α is an order isomorphism of (Nn , 6) we get that σi = σj if and only if i = j, for i, j = 1, . . . , n.
Thus we defined a permutation σ : {1, . . . , n} → {1, . . . , n}. Then the compositions ασ −1 and σ −1 α
are order isomorphisms of (Nn , 6). Moreover ασ −1 and σ −1 α satisfy the assumption of Lemma 2.4,
which implies that the maps ασ −1 : Nn → Nn and σ −1 α : Nn → Nn are the identity maps of Nn . This
implies that α = σ, which completes the proof of the theorem.
Theorems 2.3 and 2.20 from [11] and Theorem 2.5 imply the following corollary.
Corollary 2.6. Let n be any positive integer. Then every maximal subgroup of IPF (Nn ) is isomorphic
to Sn and every H -class of IPF (Nn ) consists of n! elements.
The following proposition gives sufficient conditions under which a subsemigroup of the semigroup IPF (Nn ) which is generated by an element of IPF (Nn ) and its inverse is isomorphic to the bicyclic monoid C (p, q).
Proposition 2.7. Let n be any positive integer and α be an element of the semigroup IPF (Nn ) such
that ran α ⊊ dom α. Then the subsemigroup ⟨α, α−1 ⟩ of IPF (Nn ), which is generated by α and its
inverse α−1 , is isomorphic to the bicyclic monoid C (p, q).
Proof. Let ε be the identity map of dom α. The semigroup operation of IPF (Nn ) implies that the
following equalities hold:
εα = αε = α,
εα−1 = α−1 ε = α−1 ,
αα−1 = ε
and
α−1 α 6= ε
Then we apply Lemma 1.31 from [11].
Corollary 2.8. Let n be any positive integer. Then for any idempotents ε and ι of the semigroup
IPF (Nn ) such that ε 4 ι there exists a subsemigroup C of IPF (Nn ) which is isomorphic to the
bicyclic monoid C (p, q) and contains ε and ι.
Proof. Suppose that ε 6= ι. Let α be any order isomorphism from dom ι onto dom ε. Next we apply
Proposition 2.7.
If ε = ι then we choose any idempotent ν 6= ε such that ν 4 ε and apply the above part of the
proof.
Lemma 2.9. Let n be any positive integer ≥ 2 and C be a congruence on the semigroup IPF (Nn )
such that εCι for some two distinct idempotents ε, ι ∈ IPF (Nn ). Then ςCυ for all idempotents ς, υ of
IPF (Nn ).
Proof. We observe that without loss of generality we may assume that ε 4 ι where 4 is the natural
partial order on the semilattice E(IPF (Nn )). Indeed, if ε, ι ∈ E(IPF (Nn )) then εCι implies that
ε = εεCιε, and since the idempotents ε and ι are distinct in IPF (Nn ) we have that ιε 4 ε.
Now, the inequality ε 4 ι implies that dom ε ⊆ dom ι and hence (x1 , . . . , xn ) 6 (y1 , . . . , yn ), where
↑ (x1 , . . . , xn ) and ↑ (y1 , . . . , yn ) are principal filters in (Nn , 6) such that
↑ (x1 , . . . , xn ) = dom ι
and
↑ (y1 , . . . , yn ) = dom ε.
Next, we define partial maps α, β : Nn ⇀ Nn in the following way:
(a) dom α = Nn , ran α = dom ι and (z1 , . . . , zn ) α = (z1 + x1 − 1, . . . , zn + xn − 1) for (z1 , . . . , zn ) ∈
dom α;
(b) dom β = dom ι, ran β = Nn and (z1 , . . . , zn ) β = (z1 − x1 + 1, . . . , zn − xn + 1) for (z1 , . . . , zn ) ∈
dom β.
Simple verifications show that αιβ = I and βα = ι, and moreover since αβ = I we have that
(αεβ) (αεβ) = αε (βα) εβ = αειεβ = αεεβ = αεβ,
which implies that αεβ is an idempotent of IPF (Nn ) such that αεβ 6= I.
Thus, it was shown that there exists a non-unit idempotent ε∗ in IPF (Nn ) such that ε∗ CI. This implies that ε0 CI for any idempotent ε0 of IPF (Nn ) such that ε∗ 4 ε0 4 I. Then the definition of the semigroup IPF (Nn ) and Proposition 2.1(ii) imply that there exists an element x2i = (1, . . . , 2, . . . , 1) of the poset (Nn , 6) such that only the i-th coordinate of x2i is equal to 2 and all other coordinates are equal to 1, and there exists an idempotent εi such that dom ε0 ⊆ dom εi = ↑x2i .
Fix an arbitrary positive integer j ∈ {1, . . . , n} \ {i}. Let σ(i,j) be the permutation of coordinates of elements of the set Nn which permutes only the j-th and i-th coordinates, i.e., this permutation is the cycle (i, j) on the coordinates. Then the semigroup operation of IPF (Nn ) implies that σ(i,j) Iσ(i,j) = I and σ(i,j) εi σ(i,j) = εj , where εj is the identity map of the principal filter ↑x2j of the poset (Nn , 6) such that x2j = (1, . . . , 2, . . . , 1), i.e., only the j-th coordinate of x2j is equal to 2 and all other coordinates are equal to 1.
The above arguments imply that εi CI for every idempotent εi ∈ IPF (Nn ) such that εi is the identity
map of the principal filter ↑x2i of the poset (Nn , 6), i = 1, . . . , n. This implies that IC (ε1 . . . εn ). The
semigroup operation of IPF (Nn ) implies that the idempotent ε1 . . . εn is the identity map of the
principal filter ↑(2, . . . , 2) of (Nn , 6). We define a partial map γ : Nn ⇀ Nn in the following way:
dom γ = Nn ,
ran γ = ↑(2, . . . , 2)
and
(z1 , . . . , zn ) γ = (z1 + 1, . . . , zn + 1) ,
for (z1 , . . . , zn ) ∈ dom γ. By Proposition 2.7 the subsemigroup ⟨γ, γ −1 ⟩ of IPF (Nn ), which is generated by γ and its inverse γ −1 , is isomorphic to the bicyclic monoid C (p, q). It is obvious that γγ −1 = I and γ −1 γ = ε1 . . . εn . Since IC (ε1 . . . εn ), by Corollary 1.32 from [11] we obtain that all idempotents of the subsemigroup ⟨γ, γ −1 ⟩ in IPF (Nn ) are C-equivalent. Also, the definition of the bicyclic semigroup C (p, q) and Lemma 1.31 from [11] imply that all idempotents of the subsemigroup ⟨γ, γ −1 ⟩ of IPF (Nn ) are elements of the form (γ −1 )k γ k , where k is some non-negative integer. Now, by the definition of the semigroup IPF (Nn ) we have that (γ −1 )k γ k is the identity map of the principal filter ↑(k, . . . , k) of (Nn , 6) for some non-negative integer k. Moreover, for every idempotent ζ of IPF (Nn ) which is the identity map of the principal filter ↑(a1 , . . . , an ) of (Nn , 6), we have that (γ −1 )m γ m 4 ζ, where m = max {a1 , . . . , an }, which implies that ICζ.
Lemma 2.10. Let n be any positive integer ≥ 2 and C be a congruence on the semigroup IPF (Nn )
such that αCβ for some non-H -equivalent elements α, β ∈ IPF (Nn ). Then εCι for all idempotents
ε, ι of IPF (Nn ).
Proof. Since α and β are non-H -equivalent in IPF (Nn ) we have that either αα−1 6= ββ −1 or
α−1 α 6= β −1 β (see [29, p. 82]). Then Proposition 4 from [29, Section 2.3] implies that αα−1 Cββ −1
and α−1 αCβ −1 β and hence the assumption of Lemma 2.9 holds.
Lemma 2.11. Let n be any positive integer ≥ 2 and C be a congruence on the semigroup IPF (Nn ) such
that αCβ for some two distinct H -equivalent elements α, β ∈ IPF (Nn ). Then εCι for all idempotents
ε, ι of IPF (Nn ).
Proof. By Proposition 2.1(vii) the semigroup IPF (Nn ) is simple and then Theorem 2.3 from [11]
implies that there exist µ, ξ ∈ IPF (Nn ) such that f : Hα → HI : χ 7→ µχξ maps α to I and β to
γ 6= I, respectively, which implies that ICγ. Since γ 6= I is an element of the group of units of the
semigroup IPF (Nn ), by Theorem 2.5, γ permutes the coordinates of elements of Nn , and hence there exists a positive integer iγ such that (x2iγ )γ 6= x2iγ , where x2iγ = (1, . . . , 2, . . . , 1) is the element of the poset (Nn , 6) with the property that only the iγ -th coordinate of x2iγ is equal to 2 and all other coordinates are equal to 1. Also, by Theorem 2.5 there exists a positive integer jγ ∈ {1, . . . , n} \ {iγ } such that (x2iγ )γ = x2jγ = (1, . . . , 2, . . . , 1) is the element of the poset (Nn , 6) with the property that only the jγ -th coordinate of x2jγ is equal to 2 and all other coordinates are equal to 1.
Let ε be the identity map of the principal filter ↑x2iγ . Since C is a congruence on the semigroup IPF (Nn ) and γ ∈ HI we have that ε = εε = εIε C εγε. Since jγ 6= iγ , the semigroup operation of IPF (Nn ) implies that dom(εγε) ⊊ dom ε. Then by Proposition 2.1(v), εγε and ε are non-H -equivalent elements in IPF (Nn ). Next we apply Lemma 2.10.
Theorem 2.12. Let n be any positive integer ≥ 2. Then every non-identity congruence C on the semigroup IPF (Nn ) is a group congruence.
Proof. For every non-identity congruence C on IPF (Nn ) there exist two distinct elements α, β ∈
IPF (Nn ) such that αCβ. If αH β in IPF (Nn ) then by Lemma 2.11 all idempotents of the semigroup
IPF (Nn ) are C-equivalent, otherwise by Lemma 2.10 we get the same. Thus, by Lemma II.1.10 of [33]
the quotient semigroup IPF (Nn )/C has a unique idempotent and hence it is a group.
For arbitrary elements x = (x1 , . . . , xn ) and y = (y1 , . . . , yn ) of Nn and any permutation σ : {1, . . . , n}
→ {1, . . . , n} we denote
(x)σ = (x(1)σ−1 , . . . , x(n)σ−1 );
max{x, y} = (max{x1 , y1 }, . . . , max{xn , yn }) .
Lemma 2.13. For every positive integer n and any x, y ∈ Nn the following conditions hold:
(i) (x + y)σ = (x)σ + (y)σ;
(ii) (x − y)σ = (x)σ − (y)σ;
(iii) (max{x, y})σ = max{(x)σ, (y)σ}.
Proof. In (i) we have that
(x + y)σ = (x1 + y1 , . . . , xn + yn )σ = (p1 , . . . , pn )σ
for pi = xi + yi , i = 1, . . . , n, and then
(p1 , . . . , pn )σ = (p(1)σ−1 , . . . , p(n)σ−1 ) = (x(1)σ−1 + y(1)σ−1 , . . . , x(n)σ−1 + y(n)σ−1 ) = (x)σ + (y)σ.
The proofs of (ii) and (iii) are similar.
The statement of the following lemma follows from the definition of the semigroup IPF (Nn ).
Lemma 2.14. For every α ∈ IPF (Nn ) there exist unique x, y ∈ Nn , ρα , λα ∈ IPF (Nn ) and σα ∈ Sn
such that α = ρα σα λα and
dom ρα = dom α = ↑x, ran ρα = Nn , (z)ρα = z − x + 1 for z ∈ dom ρα ;
dom λα = Nn , ran λα = ran α = ↑y, (z)λα = z + y − 1 for z ∈ dom λα ,
where 1 = (1, . . . , 1) is the smallest element of the poset (Nn , 6).
Later in this section, for every α ∈ IPF (Nn ) by ρα , λα and σα we denote the elements ρα , λα ∈ IPF (Nn ) and σα ∈ Sn which are determined in Lemma 2.14.
Lemma 2.15. Let α and β be elements of the semigroup IPF (Nn ) such that dom α = ↑x, ran α = ↑y,
dom β = ↑u and ran β = ↑v. Then
dom(αβ) = ↑[(max{y, u} − y)σα−1 + x];
ran(αβ) = ↑[(max{y, u} − u)σβ + v];
σαβ = σα σβ .
Proof. The definition of the domain of the composition of partial transformations (see [29, p. 4]) implies
that
dom(αβ) = [ran α ∩ dom β]α−1 = [↑y ∩ ↑u]α−1 = [↑ max{y, u}]α−1.
Since α is a monotone bijection between principal filters of the poset (Nn , 6) we get that
[↑ max{y, u}]α−1 = ↑ [max{y, u}]α−1 ,
and by Lemma 2.14
dom(αβ) = ↑([max{y, u}]α−1 ) = ↑([max{y, u}]λα−1 σα−1 ρα−1 ) = ↑([max{y, u} − y + 1]σα−1 ρα−1 ) = ↑([(max{y, u} − y)σα−1 + 1]ρα−1 ) = ↑[(max{y, u} − y)σα−1 + x].
Similarly, the definition of the range of the composition of partial transformations (see [29, p. 4]) implies
that
ran(αβ) = [ran α ∩ dom β]β = [↑y ∩ ↑u]β = [↑ max{y, u}]β.
Since β is a monotone bijection between principal filters of the poset (Nn , 6) we get that
[↑ max{y, u}]β = ↑ ([max{y, u}]β) ,
and by Lemma 2.14
ran(αβ) = ↑ ([max{y, u}]β) =
= ↑ ([max{y, u}]ρβ σβ λβ ) =
= ↑ ([max{y, u} − u + 1]σβ λβ ) =
= ↑ ([(max{y, u} − u)σβ + 1]λβ ) =
= ↑[(max{y, u} − u)σβ + v].
We observe that definitions of elements σα and σβ imply that
dom σα = ran σα = dom σβ = ran σβ = Nn ,
and hence dom(σα σβ ) = ran(σα σβ ) = Nn . Since ρα , λα , σα , ρβ , λβ , σβ are partial bijections of Nn and
dom(αβ) = dom(ραβ σα σβ λαβ ), the equality αβ = ραβ σα σβ λαβ implies that σαβ = σα σβ .
Next we shall show that the equality αβ = ραβ σα σβ λαβ holds. We observe that for any z ∈ dom(αβ)
there exists a unique p ∈ (N0 )n such that
z = (max{y, u} − y)σα−1 + x + p.
Then we have that
(z)αβ = ((max{y, u} − y)σα−1 + x + p)αβ
= ((max{y, u} − y)σα−1 + x + p)ρα σα λα β
= ((max{y, u} − y)σα−1 + p + 1)σα λα β
= (max{y, u} − y + (p)σα + 1)λα β
= (max{y, u} + (p)σα )ρβ σβ λβ
= (max{y, u} − u + (p)σα + 1)σβ λβ
= ((max{y, u} − u)σβ + ((p)σα )σβ + 1)λβ
= (max{y, u} − u)σβ + ((p)σα )σβ + v
and
(z)ραβ σα σβ λαβ = ((max{y, u} − y)σα−1 + x + p)ραβ σα σβ λαβ
= (p + 1)σα σβ λαβ
= (((p)σα )σβ + 1)λαβ
= (max{y, u} − u)σβ + v + ((p)σα )σβ .
This completes the proof of the lemma.
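The decomposition α = ρα σα λα of Lemma 2.14 and the composition formulas of Lemma 2.15 can be checked numerically. The following Python sketch is only an illustration (the representation of an element as a triple (x, σα , y) and all helper names are ours, not part of the paper): it applies an element to a point, composes two elements by the formulas of Lemma 2.15, and compares the result with the pointwise composition on a sample of points above the computed domain generator.

from itertools import product

def act(x, s):                      # (x)sigma, with s[i-1] = sigma(i)
    out = [0] * len(x)
    for j, xj in enumerate(x):
        out[s[j] - 1] = xj
    return tuple(out)

def inv(s):                         # the inverse permutation
    t = [0] * len(s)
    for i, si in enumerate(s):
        t[si - 1] = i + 1
    return tuple(t)

def apply_elem(a, z):               # element a = (x, sigma, y) of IPF(N^n)
    x, s, y = a
    if any(zi < xi for zi, xi in zip(z, x)):
        return None                 # z lies outside dom a = up-set of x
    w = act(tuple(zi - xi + 1 for zi, xi in zip(z, x)), s)
    return tuple(wi + yi - 1 for wi, yi in zip(w, y))

def compose(a, b):                  # the formulas of Lemma 2.15
    (x, sa, y), (u, sb, v) = a, b
    m = tuple(map(max, y, u))
    new_x = tuple(p + q for p, q in zip(act(tuple(mi - yi for mi, yi in zip(m, y)), inv(sa)), x))
    new_y = tuple(p + q for p, q in zip(act(tuple(mi - ui for mi, ui in zip(m, u)), sb), v))
    new_s = tuple(sb[sa[i] - 1] for i in range(len(sa)))
    return (new_x, new_s, new_y)

alpha = ((2, 1, 3), (2, 3, 1), (1, 4, 2))
beta  = ((3, 1, 2), (1, 3, 2), (2, 2, 5))
gamma = compose(alpha, beta)
for z in product(range(gamma[0][0], gamma[0][0] + 3),
                 range(gamma[0][1], gamma[0][1] + 3),
                 range(gamma[0][2], gamma[0][2] + 3)):
    assert apply_elem(beta, apply_elem(alpha, z)) == apply_elem(gamma, z)
print("Lemma 2.15 composition verified on sample points")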
Proposition 2.16. Let α and β be elements of the semigroup IPF (Nn ) such that dom α = ↑x,
ran α = ↑y, dom β = ↑u and ran β = ↑v. Then the following statements hold:
(i) α is an idempotent of IPF (Nn ) if and only if λα is an inverse partial map to ρα , i.e., x = y,
and σα is the identity element of the group Sn ;
(ii) α is the inverse of β in IPF (Nn ) if and only if x = v, y = u (i.e., λα is an inverse partial map
to ρβ and λβ is an inverse partial map to ρα ) and σα is inverse of σβ in the group Sn .
Proof. (i) Suppose that α is an idempotent of IPF (Nn ). Since α is the identity map of some principal filter of (Nn , 6), we have that x = y and hence λα is the inverse partial map to ρα . Then the equalities
ρα σα λα = α = αα = ρα σα λα ρα σα λα = ρα σα σα λα
and Lemma 2.14 imply that σα = σα σα , and hence σα is the identity element of the group Sn .
The converse statement is obvious.
(ii) Suppose that α and β are inverse elements in IPF (Nn ). Then dom α = ran β and ran α = dom β
and hence we get that x = v and y = u. This and Lemma 2.14 imply that λα is the inverse partial map to ρβ and λβ is the inverse partial map to ρα . Since αβ is an idempotent of IPF (Nn ) the above
arguments imply that
αβ = ρα σα λα ρβ σβ λβ = ρα σα σβ λβ ,
and hence by statement (i) the element σα σβ is the identity of the group Sn . This implies that σα is
the inverse of σβ in Sn .
The converse statement is obvious.
Remark 2.17. In the bicyclic semigroup C (p, q) the semigroup operation is determined in the following
way:
pi q j · pk q l =
  pi q j−k+l , if j > k;
  pi q l ,       if j = k;
  pi−j+k q l , if j < k,
which is equivalent to the following formula:
pi q j · pk q l = pi+max{j,k}−j q l+max{j,k}−k .
It is well known that the above implies that the bicyclic semigroup C (p, q) is isomorphic to the semigroup (S, ∗) which is determined on the square of non-negative integers N0 × N0 with the following
multiplication:
(1)
(i, j) ∗ (k, l) = (i + max{j, k} − j, l + max{j, k} − k).
Later, for an arbitrary positive integer n by C n (p, q) we shall denote the n-th direct power of (S, ∗),
i.e., C n (p, q) is the n-th power of N0 ×N0 with the point-wise semigroup operation defined by formula (1).
Also, by [x, y] we denote the ordered collection ((x1 , y1 ), . . . , (xn , yn )) of C n (p, q), where x = (x1 , . . . , xn )
and y = (y1 , . . . , yn ), and for arbitrary permutation σ : {1, . . . , n} → {1, . . . , n} we put
(x)σ = (x(1)σ−1 , . . . , x(n)σ−1 ).
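Formula (1) and its n-th direct power are easy to state in code. The sketch below is illustrative only: it implements the coordinate-wise operation on collections [x, y] = ((x1 , y1 ), . . . , (xn , yn )) and spot-checks associativity on a small example.

# Elements of C(p, q) as pairs (i, j) of non-negative integers, formula (1);
# the n-th power C^n(p, q) applies the operation coordinate-wise.
def bicyclic(a, b):
    (i, j), (k, l) = a, b
    m = max(j, k)
    return (i + m - j, l + m - k)

def power_op(x, y):
    return tuple(bicyclic(a, b) for a, b in zip(x, y))

x = ((1, 0), (2, 3))
y = ((0, 2), (5, 1))
z = ((4, 4), (0, 0))
print(power_op(x, y))
# spot-check associativity of the coordinate-wise operation
print(power_op(power_op(x, y), z) == power_op(x, power_op(y, z)))  # True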
Let Aut(C n (p, q)) be the group of automorphisms of the semigroup C n (p, q). We define a map Φ from
Sn into the set of all selfmaps of the semigroup C n (p, q) by putting σ ↦ Φσ , where the map Φσ : C n (p, q) → C n (p, q)
is defined by the formula:
(2)
([x, y])Φσ = [(x)σ, (y)σ].
It is obvious that the map Φσ is a bijection of C n (p, q) and Φσ1 6= Φσ2 for distinct σ1 , σ2 ∈ Sn .
Since
([x, y] ∗ [u, v])Φσ = ([max{y, u} − y + x, max{y, u} − u + v]) Φσ =
= [(max{y, u} − y + x)σ, (max{y, u} − u + v)σ] =
= [max{(y)σ, (u)σ} − (y)σ + (x)σ, max{(y)σ, (u)σ} − (u)σ + (v)σ] =
= [(x)σ, (y)σ] ∗ [(u)σ, (v)σ] =
= [x, y] Φσ ∗ [u, v] Φσ
and
([x, y])Φσ1 σ2 = [(x)σ1 σ2 , (y)σ1 σ2 ] = ([(x)σ1 , (y)σ1 ])Φσ2 = ([x, y])Φσ1 Φσ2
for any [x, y], [u, v] ∈ C n (p, q) and any σ, σ1 , σ2 ∈ Sn , the following proposition holds:
Proposition 2.18. For arbitrary positive integer n the map Φ is an injective homomorphism from Sn
into the group Aut(C n (p, q)) of automorphisms of the semigroup C n (p, q).
Theorem 2.19. For arbitrary positive integer n the semigroup IPF (Nn ) is isomorphic to the semidirect product Sn ⋉Φ C n (p, q) of the semigroup C n (p, q) by the group Sn .
Proof. We define the map Ψ : IPF (Nn ) → Sn ⋉Φ C n (p, q) in the following way:
(α)Ψ = (σα , [(x)σα , y]),
for α ∈ IPF (Nn ), where ↑x = dom α and ↑y = ran α. Since σα is a bijection, we have that Ψ is a
bijection as well.
For any α, β ∈ IPF (Nn ) with dom α = ↑x, ran α = ↑y, dom β = ↑u, ran β = ↑v by Lemma 2.15 we
have that
(αβ)Ψ = σα σβ , [((max{y, u} − y)σα−1 + x)σα σβ , (max{y, u} − u)σβ + v] =
= (σα σβ , [max{(y)σβ , (u)σβ } − (y)σβ + (x)σα σβ , max{(y)σβ , (u)σβ } − (u)σβ + v]) =
= (σα σβ , ([(x)σα , y])σβ ∗ [(u)σβ , v]) =
= (σα , [(x)σα , y])(σβ , [(u)σβ , v]) =
= (α)Ψ(β)Ψ,
and hence Ψ is an isomorphism.
Every inverse semigroup S admits the least group congruence Cmg (see [33, Section III]):
sCmg t if and only if
there exists an idempotent e ∈ S
such that se = te.
Later, for any α ∈ IPF (Nn ) let (σα , [(xα )σα , yα ]) = (α)Ψ be the image of the element α under the
isomorphism Ψ : IPF (Nn ) → Sn ⋉Φ C n (p, q) which is defined in the proof of Theorem 2.19.
The following theorem describes the least group congruence on the semigroup IPF (Nn ).
Theorem 2.20. Let n be an arbitrary positive integer. Then αCmg β in the semigroup IPF (Nn ) if
and only if
σα = σβ
and
(xα )σα − yα = (xβ )σβ − yβ .
Proof. First we observe that if ε is an idempotent in IPF (Nn ) then Proposition 2.16(i) and the
definition of the map Ψ : IPF (Nn ) → Sn ⋉Φ C n (p, q) imply that σε is the identity permutation and
xε = yε . Then the following calculations
(σα , [(xα )σα , yα ]) (σε , [(xε )σε , xε ]) =
= (σα σε , [max{(yα )σε , xε } − (yα )σε + ((xα )σα )σε , max{(yα )σε , xε } − xε + xε ]) =
= (σα , [max{yα , xε } − yα + (xα )σα , max{yα , xε }]) ,
(σβ , [(xβ )σβ , yβ ]) (σε , [(xε )σε , xε ]) =
= (σβ σε , [max{(yβ )σε , xε } − (yβ )σε + ((xβ )σβ )σε , max{(yβ )σε , xε } − xε + xε ]) =
= (σβ , [max{yβ , xε } − yβ + (xβ )σβ , max{yβ , xε }]) ,
imply that for the idempotent ε ∈ IPF (Nn ) the equality αε = βε holds if and only if
σα = σβ
and
(xα )σα − yα = (xβ )σβ − yβ .
This completes the proof of the theorem.
For any positive integer n, an arbitrary permutation σ : {1, . . . , n} → {1, . . . , n} and an ordered
collection z = (z1 , . . . , zn ) of integers we put
(z)σ = (z(1)σ−1 , . . . , z(n)σ−1 ).
Let Aut(Zn ) be the group of automorphisms of the direct n-th power of the additive group of integers Z(+). Next we define a map Θ : Sn → Aut(Zn ) in the following way. We put (σ)Θ = Θσ , where Θσ is the map from Zn into Zn which is defined by the formula
(z)Θσ = (z)σ.
It is obvious that the maps Θ and Θσ are injective. Since
(z + v)σ = (z)σ + (v)σ
and
(z)Θσ1 σ2 = (z)(σ1 σ2 ) = ((z)σ1 )σ2 = ((z)Θσ1 )Θσ2 ,
for any z = (z1 , . . . , zn ) and v = (v1 , . . . , vn ) from the direct n-th power of the group Z(+) and any
σ, σ1 , σ2 ∈ Sn , we have that the so defined map Θ : Sn → Aut(Zn ) is an injective homomorphism.
Theorem 2.21. For an arbitrary positive integer n the quotient semigroup IPF (Nn )/Cmg is isomorphic to the semidirect product Sn ⋉Θ (Z(+))n of the direct n-th power of the additive group of integers
(Z(+))n by the group of permutations Sn .
Proof. We define a map Υ : IPF (Nn ) → Sn ⋉Θ (Z(+))n in the following way. If (σα , [(xα )σα , yα ]) =
(α)Ψ is the image of α ∈ IPF (Nn ) under the isomorphism Ψ : IPF (Nn ) → Sn ⋉Φ C n (p, q) which is
defined in the proof of Theorem 2.19, then we put (α)Υ = (σα , (xα )σα − yα ).
For any α, β ∈ IPF (Nn ) with dom α = ↑xα , ran α = ↑yα , dom β = ↑xβ , ran β = ↑yβ by Lemma 2.15
we have that
(αβ)Υ = (σα σβ , ((max{yα , xβ } − yα )σα−1 + xα )σα σβ − (max{yα , xβ } − xβ )σβ − yβ ) =
= (σα σβ , max{(yα )σβ , (xβ )σβ } − (yα )σβ + (xα )σα σβ − max{(yα )σβ , (xβ )σβ } + (xβ )σβ − yβ ) =
= (σα σβ , (xα )σα σβ − (yα )σβ + (xβ )σβ − yβ ) =
= (σα σβ , ((xα )σα − yα )σβ + ((xβ )σβ − yβ )) =
= (σα , (xα )σα − yα ) · (σβ , (xβ )σβ − yβ ) =
= (α)Υ · (β)Υ,
and hence Υ is a homomorphism. It is obvious that the map Υ : IPF (Nn ) → Sn ⋉Θ (Z(+))n is
surjective. Also, Theorem 2.20 implies that αCmg β in IPF (Nn ) if and only if (α)Υ = (β)Υ. This
implies that the homomorphism Υ generates the congruence Cmg on IPF (Nn ).
Every inverse semigroup S admits a partial order: a 4 b if and only if there exists e ∈ E(S) such that a = be. The so-defined order is called the natural partial order on S. We observe that a 4 b in an inverse semigroup S if and only if a = f b for some f ∈ E(S) (see [29, Lemma 1.4.6]).
If ε is an idempotent in IPF (Nn ) then Proposition 2.16(i) and the definition of the isomorphism
Ψ : IPF (Nn ) → Sn ⋉Φ C n (p, q) imply that σε is the identity permutation and xε = yε . Then for any
α ∈ IPF (Nn ) with (σα , [(xα )σα , yα ]) = (α)Ψ we have that
(3)   (σα , [(xα )σα , yα ]) (σε , [xε , xε ]) =
      = (σα σε , [max{(yα )σε , xε } − (yα )σε + ((xα )σα )σε , max{(yα )σε , xε } − xε + xε ]) =
      = (σα , [max{yα , xε } − yα + (xα )σα , max{yα , xε }]) .
This implies the following proposition, which describes the natural partial order on the semigroup
IPF (Nn ).
Proposition 2.22. Let n be an arbitrary positive integer and α, β ∈ IPF (Nn ). Then the following
conditions are equivalent:
(i) α 4 β;
(ii) σα = σβ , (xα )σα − yα = (xβ )σβ − yβ and xα 6 xβ in the poset (Nn , 6);
(iii) σα = σβ , (xα )σα − yα = (xβ )σβ − yβ and yα 6 yβ in the poset (Nn , 6).
An inverse semigroup S is said to be E-unitary if ae ∈ E(S) for some e ∈ E(S) implies that a ∈ E(S)
[29]. E-unitary inverse semigroups were introduced by Saitô in [35], where they were called “proper
ordered inverse semigroups”.
Formula (3) implies that if the element (σα , [(xα )σα , yα ]) · (σε , [xε ,xε ]) is an idempotent in the semidirect product Sn ⋉Φ C n (p, q) then so is (σα , [(xα )σα , yα ]). This implies the following corollary:
Corollary 2.23. For an arbitrary positive integer n the inverse semigroup IPF (Nn ) is E-unitary.
An inverse semigroup S is called F -inverse if the Cmg -class sCmg of each element s has a top (biggest) element with respect to the natural partial order on S [30].
Proposition 2.24. For an arbitrary positive integer n the semigroup IPF (Nn ) is an F -inverse semigroup.
Proof. Fix an arbitrary element β0 = (σ, [x, y]) ∈ Sn ⋉Φ C n (p, q), where x = (x1 , . . . , xn ) and y =
(y1 , . . . , yn ). For every positive integer k we put βk = (σ, [x−k, y−k]), where x−k = (x1 −k, . . . , xn −k)
and y − k = (y1 − k, . . . , yn − k). It is obvious that there exists a (biggest) positive integer k0 such that
(x1 − k0 , . . . , xn − k0 ), (y1 − k0 , . . . , yn − k0 ) ∈ Nn
but (x1 − k0 − 1, . . . , xn − k0 − 1) ∉ Nn or (y1 − k0 − 1, . . . , yn − k0 − 1) ∉ Nn .
Then Theorem 2.20 and Proposition 2.22 imply that the element βk0 is the biggest element in the
Cmg -class of the element β0 in Sn ⋉Φ C n (p, q).
Proposition 2.25. For every α, β ∈ IPF (Nn ), both sets
{χ ∈ IPF (Nn ) : α · χ = β}
and
{χ ∈ IPF (Nn ) : χ · α = β}
are finite. Consequently, every right translation and every left translation by an element of the semigroup
IPF (Nn ) is a finite-to-one map.
Proof. We shall show that the set A = {χ ∈ IPF (Nn ) : χ · α = β} is finite. The proof of the statement that the set {χ ∈ IPF (Nn ) : α · χ = β} is finite is similar.
It is obvious that A is a subset of the set B = {χ ∈ IPF (Nn ) : χ · αα−1 = βα−1}. Then B is a
subset of C = {ξ ∈ IPF (Nn ) : ξ 4 βα−1 }. Since every principal ideal in the poset (Nn , 6) is finite,
Proposition 2.22 implies that C is finite, and hence so is A.
3. On a semitopological semigroup IPF (Nn )
The following theorem generalizes the Bertman-West result from [8] (and hence Eberhart-Selden
result from [12]) for the semigroup IPF (Nn ).
Theorem 3.1. For any positive integer n every Hausdorff shift-continuous topology on the semigroup
IPF (Nn ) is discrete.
Proof. Fix an arbitrary shift continuous Hausdorff topology τ on IPF (Nn ). Since for every idempotent
ε ∈ IPF (Nn ) the left and right shifts lε : IPF (Nn ) → IPF (Nn ) : x 7→ ε · x and rε : IPF (Nn ) →
IPF (Nn ) : x 7→ x · ε are continuous maps, the Hausdorffness of (IPF (Nn ), τ ) and [13, 1.5.c] implies
that the principal ideals εIPF (Nn ) and IPF (Nn )ε are closed subsets in (IPF (Nn ), τ ).
For every positive integer i = 1, . . . , n let εi be the identity map of the subset ↑(1, . . . , 2, . . . , 1) of the poset (Nn , 6), where 2 stands in the i-th coordinate. It is clear that
H(I) = IPF (Nn ) \ (ε1 IPF (Nn ) ∪ · · · ∪ εn IPF (Nn ) ∪ IPF (Nn )ε1 ∪ · · · ∪ IPF (Nn )εn ) .
Then the above part of the proof implies that H(I) is an open subset of (IPF (Nn ), τ ), and by Theorem 2.5,
H(I) is a finite discrete open subspace of (IPF (Nn ), τ ).
Since by Proposition 2.1(vii) the semigroup IPF (Nn ) is simple, for an arbitrary α ∈ IPF (Nn ) there exist β, γ ∈ IPF (Nn ) such that βαγ = I. Then Proposition 2.25 implies that the point α has a finite open neighbourhood in (IPF (Nn ), τ ), and hence α is an isolated point in (IPF (Nn ), τ ).
The following theorem generalizes Theorem I.3 from [12].
Theorem 3.2. If for some positive integer n the semigroup IPF (Nn ) is dense in a Hausdorff semitopological semigroup (S, ·) and I = S \ IPF (Nn ) 6= ∅, then I is a two-sided ideal in S.
Proof. Fix an arbitrary element y ∈ I. If x · y = z ∉ I for some x ∈ IPF (Nn ) then there exists an open neighbourhood U(y) of the point y in the space S such that {x} · U(y) = {z} ⊂ IPF (Nn ). Then the open neighbourhood U(y) contains infinitely many elements of the semigroup IPF (Nn ), which contradicts Proposition 2.25. The obtained contradiction implies that x · y ∈ I for all x ∈ IPF (Nn ) and y ∈ I. The proof of the statement that y · x ∈ I for all x ∈ IPF (Nn ) and y ∈ I is similar.
Suppose to the contrary that x · y = w ∉ I for some x, y ∈ I. Then w ∈ IPF (Nn ) and the separate
continuity of the semigroup operation in S implies that there exist open neighbourhoods U(x) and U(y)
of the points x and y in S, respectively, such that {x} · U(y) = {w} and U(x) · {y} = {w}. Since both
neighbourhoods U(x) and U(y) contain infinitely many elements of the semigroup IPF (Nn ), both
equalities {x} · U(y) = {w} and U(x) · {y} = {w} contradict the above-mentioned part of the proof,
because {x} · (U(y) ∩ IPF (Nn )) ⊆ I. The obtained contradiction implies that x · y ∈ I.
We recall that a topological space X is said to be:
• compact if every open cover of X contains a finite subcover;
• countably compact if each closed discrete subspace of X is finite;
• feebly compact if each locally finite open cover of X is finite;
• pseudocompact if X is Tychonoff and each continuous real-valued function on X is bounded.
According to Theorem 3.10.22 of [13], a Tychonoff topological space X is feebly compact if and only if X
is pseudocompact. Also, a Hausdorff topological space X is feebly compact if and only if every locally
finite family of non-empty open subsets of X is finite. Every compact space and every sequentially
compact space are countably compact, every countably compact space is feebly compact (see [3]).
A topological semigroup S is called Γ-compact if for every x ∈ S the closure of the set {x, x2 , x3 , . . .} is
compact in S (see [26]). Since by Proposition 2.7 for every positive integer n the semigroup IPF (Nn )
contains the bicyclic semigroup as a subsemigroup, the results obtained in [2], [4], [5], [22], [26] imply the following corollary:
Corollary 3.3. Let n be an arbitrary non-negative integer. If a Hausdorff topological semigroup S
satisfies one of the following conditions:
(i) S is compact;
(ii) S is Γ-compact;
(iii) S is a countably compact topological inverse semigroup;
(iv) the square S × S is countably compact; or
(v) the square S × S is a Tychonoff pseudocompact space,
then S does not contain the semigroup IPF (Nn ).
Also, Theorems 2.12, 2.21 and Corollary 3.3 imply the following corollary:
Corollary 3.4. Let n be an arbitrary non-negative integer. If a Hausdorff topological semigroup S
satisfies one of the following conditions:
(i) S is compact;
(ii) S is Γ-compact;
(iii) S is a countably compact topological inverse semigroup;
(iv) the square S × S is countably compact; or
(v) the square S × S is a Tychonoff pseudocompact space,
then for every homomorphism h : IPF (Nn ) → S the image (IPF (Nn ))h is a subgroup of S. Moreover,
for every homomorphism h : IPF (Nn ) → S there exists a unique homomorphism uh : Sn ⋉Θ (Z(+))n →
S such that the diagram formed by Υ : IPF (Nn ) → Sn ⋉Θ (Z(+))n , uh : Sn ⋉Θ (Z(+))n → S and h : IPF (Nn ) → S commutes, that is, (α)h = ((α)Υ)uh for every α ∈ IPF (Nn ).
Proposition 3.5. Let n be an arbitrary positive integer. Let S be a Hausdorff topological semigroup
which contains a dense subsemigroup IPF (Nn ). Then for every c ∈ IPF (Nn ) the set
Dc = {(x, y) ∈ IPF (Nn ) × IPF (Nn ) : x · y = c}
is an open-and-closed subset of S × S.
Proof. By Theorem 3.1, IPF (Nn ) is a discrete subspace of S and hence Theorem 3.3.9 of [13] implies
that IPF (Nn ) is an open subspace of S. Then the continuity of the semigroup operation of S implies
that Dc is an open subset of S × S for every c ∈ IPF (Nn ).
Suppose that there exists c ∈ IPF (Nn ) such that Dc is a non-closed subset of S × S. Then there
exists an accumulation point (a, b) ∈ S × S of the set Dc . The continuity of the semigroup operation
in S implies that a · b = c. But IPF (Nn ) × IPF (Nn ) is a discrete subspace of S × S and hence by
Theorem 3.2 the points a and b belong to the two-sided ideal I = S \ IPF (Nn ). This implies that the
product a · b ∈ S \ IPF (Nn ) cannot be equal to the element c.
Theorem 3.6. Let n be an arbitrary positive integer. If a Hausdorff topological semigroup S contains
IPF (Nn ) as a dense subsemigroup then the square S × S is not feebly compact.
Proof. By Proposition 3.5 for every c ∈ IPF (Nn ) the square S ×S contains an open-and-closed discrete
subspace Dc . In the case when c is the unit I of IPF (Nn ), by Corollary 2.8 there exists a subsemigroup
C of IPF (Nn ) which is isomorphic to the bicyclic monoid C (p, q) and contains I. If we identify the
elements of C with the elements of the bicyclic monoid C (p, q) by an isomorphism h : C (p, q) → C, then
the subspace Dc contains an infinite subset {(h(q i ), h(pi )) : i ∈ N0 } and hence Dc is infinite. This implies
that the square S × S is not feebly compact.
References
[1] O. Andersen, Ein Bericht über die Struktur abstrakter Halbgruppen, PhD Thesis, Hamburg, 1952.
[2] L. W. Anderson, R. P. Hunter, and R. J. Koch, Some results on stability in semigroups. Trans. Amer. Math. Soc.
117 (1965), 521–529.
[3] A. V. Arkhangel’skii, Topological Function Spaces, Kluwer Publ., Dordrecht, 1992.
[4] T. Banakh, S. Dimitrova, and O. Gutik, The Rees-Suschkiewitsch Theorem for simple topological semigroups, Mat.
Stud. 31:2 (2009), 211–218.
[5] T. Banakh, S. Dimitrova, and O. Gutik, Embedding the bicyclic semigroup into countably compact topological semigroups, Topology Appl. 157:18 (2010), 2803–2814.
[6] S. Bardyla, Classifying locally compact semitopological polycyclic monoids, Math. Bull. Shevchenko Sci. Soc. 13
(2016), 21–28.
[7] S. Bardyla and O. Gutik, On a semitopological polycyclic monoid, Algebra Discrete Math. 21:2 (2016), 163–183.
[8] M. O. Bertman and T. T. West, Conditionally compact bicyclic semitopological semigroups, Proc. Roy. Irish Acad.
A76:21–23 (1976), 219–226.
[9] J. H. Carruth, J. A. Hildebrant, and R. J. Koch, The Theory of Topological Semigroups, Vol. I, Marcel Dekker, Inc.,
New York and Basel, 1983; Vol. II, Marcel Dekker, Inc., New York and Basel, 1986.
[10] I. Chuchman and O. Gutik, Topological monoids of almost monotone injective co-finite partial selfmaps of the set of
positive integers, Carpathian Math. Publ. 2:1 (2010), 119–132.
[11] A. H. Clifford and G. B. Preston, The Algebraic Theory of Semigroups, Vols. I and II, Amer. Math. Soc. Surveys 7,
Providence, R.I., 1961 and 1967.
[12] C. Eberhart and J. Selden, On the closure of the bicyclic semigroup, Trans. Amer. Math. Soc. 144 (1969), 115–126.
[13] R. Engelking, General Topology, 2nd ed., Heldermann, Berlin, 1989.
[14] I. Fihel and O. Gutik, On the closure of the extended bicyclic semigroup, Carpathian Math. Publ. 3:2 (2011), 131–157.
[15] O. Gutik, On the dichotomy of a locally compact semitopological bicyclic monoid with adjoined zero, Visn. L’viv.
Univ., Ser. Mekh.-Mat. 80 (2015), 33–41.
[16] O. Gutik, Topological properties of Taimanov semigroups, Math. Bull. Shevchenko Sci. Soc. 13 (2016), 29–34.
[17] O. Gutik and K. Maksymyk, On semitopological interassociates of the bicyclic monoid, Visn. L’viv. Univ., Ser.
Mekh.-Mat. 82 (2016), 98–108.
[18] O. Gutik and I. Pozdniakova, Congruences on the monoid of monotone injective partial selfmaps of Ln ×lex Z with
co-finite domains and images, Math. Methods and Phys.-Mech. Fields 57:2 (2014), 7–15; reprinted version: J. Math.
Sci. 217:2 (2016), 139–148.
[19] O. Gutik and I. Pozdniakova, On the monoid of monotone injective partial selfmaps of N26 with cofinite domains and
images, Visn. Lviv. Univ., Ser. Mekh.-Mat. 81 (2016), 101–116.
[20] O. Gutik and I. Pozdniakova, On the monoid of monotone injective partial selfmaps of N26 with cofinite domains and
images, II, Visn. Lviv. Univ., Ser. Mekh.-Mat. 82 (2016), 109–127.
[21] O. Gutik and I. Pozdnyakova, On monoids of monotone injective partial selfmaps of Ln ×lex Z with co-finite domains
and images, Algebra Discr. Math. 17:2 (2014), 256–279.
[22] O. Gutik and D. Repovš, On countably compact 0-simple topological inverse semigroups, Semigroup Forum 75:2
(2007), 464–469.
[23] O. Gutik and D. Repovš, Topological monoids of monotone, injective partial selfmaps of N having cofinite domain
and image, Stud. Sci. Math. Hungar. 48:3 (2011), 342–353.
[24] O. Gutik and D. Repovš, On monoids of injective partial selfmaps of integers with cofinite domains and images,
Georgian Math. J. 19:3 (2012), 511–532.
[25] O. Gutik and D. Repovš, On monoids of injective partial cofinite selfmaps, Math. Slovaca 65:5 (2015), 981–992.
[26] J. A. Hildebrant and R. J. Koch, Swelling actions of Γ-compact semigroups, Semigroup Forum 33 (1986), 65–85.
[27] J. M. Howie, Fundamentals of Semigroup Theory, London Math. Monographs, New Ser. 12, Clarendon Press, Oxford,
1995.
[28] R. J. Koch and A. D. Wallace, Stability in semigroups, Duke Math. J. 24 (1957), 193–195.
[29] M. Lawson, Inverse Semigroups. The Theory of Partial Symmetries, World Scientific, Singapore, 1998.
[30] R. McFadden and L. O’Carroll, F -inverse semigroups, Proc. Lond. Math. Soc., III Ser. 22 (1971), 652–666.
[31] Z. Mesyan, J. D. Mitchell, M. Morayne, and Y. H. Péresse, Topological graph inverse semigroups, Topology Appl.
208 (2016), 106–126.
[32] W. D. Munn, Uniform semilattices and bisimple inverse semigroups, Q. J. Math., Oxf. II. Ser. 17 (1966), 151–159.
[33] M. Petrich, Inverse Semigroups, John Wiley & Sons, New York, 1984.
[34] W. Ruppert, Compact Semitopological Semigroups: An Intrinsic Theory, Lect. Notes Math., 1079, Springer, Berlin,
1984.
[35] T. Saitô, Proper ordered inverse semigroups, Pacif. J. Math. 15:2 (1965), 649–666.
[36] A. D. Taimanov, An example of a semigroup which admits only the discrete topology, Algebra i Logika 12:1 (1973),
114–116 (in Russian), English transl. in: Algebra and Logic 12:1 (1973), 64–65.
[37] A. D. Taimanov, On the topologization of commutative semigroups, Mat. Zametki 17:5 (1975), 745–748 (in Russian),
English transl. in: Math. Notes 17:5 (1975), 443–444.
[38] V. V. Vagner, Generalized groups, Dokl. Akad. Nauk SSSR 84 (1952), 1119–1122 (in Russian).
Faculty of Mathematics, National University of Lviv, Universytetska 1, Lviv, 79000, Ukraine
E-mail address: [email protected], [email protected], [email protected]
1
arXiv:1803.09099v1 [] 24 Mar 2018
A Resourceful Reframing of Behavior Trees
CHRIS MARTENS, North Carolina State University
ERIC BUTLER, University of Washington
JOSEPH C. OSBORN, University of California, Santa Cruz
Designers of autonomous agents, whether in physical or virtual environments, need to express nondeterminism, failure, and parallelism in behaviors, as well as account for synchronous coordination between
agents. Behavior Trees are a semi-formalism deployed widely for this purpose in the games industry, but with
challenges to scalability, reasoning, and reuse of common sub-behaviors.
We present an alternative formulation of behavior trees through a language design perspective, giving a
formal operational semantics, type system, and corresponding implementation. We express specifications
for atomic behaviors as linear logic formulas describing how they transform the environment, and our type
system uses linear sequent calculus to derive a compositional type assignment to behavior tree expressions.
These types expose the conditions required for behaviors to succeed and allow abstraction over parameters to
behaviors, enabling the development of behavior “building blocks” amenable to compositional reasoning and
reuse.
Additional Key Words and Phrases: linear logic, behavior trees, programming languages, type systems
ACM Reference format:
Chris Martens, Eric Butler, and Joseph C. Osborn. 2016. A Resourceful Reframing of Behavior Trees. 1, 1,
Article 1 (January 2016), 17 pages.
DOI: 10.1145/nnnnnnn.nnnnnnn
1
INTRODUCTION
Specifying the desired behaviors of agents in environments is a major theme in artificial intelligence.
Analysts often need to define particular policies with explicit steps, but the agents must also
acknowledge salient changes in a potentially hostile or stochastic environment. This challenge
arises in applications including robotics, simulation, and video game development. Games in
particular bring challenges related to interaction with human decision-makers: even for games
notionally working against the objectives of the player, the activity of game design is centrally
concerned with helping the player learn something or have an emotional experience, and in this
sense can be thought of as a cooperative system between agents with different knowledge states,
not unlike human-robot teams. The behaviors of non-player characters (NPCs) in games must be
designed in support of this goal.
Game designers must be able to specify that a given agent should patrol a hallway until it gets
hungry (or its battery runs low) and goes home for a snack (or to recharge); but if the agent sees a
one-hundred dollar bill on the ground on the way to where it recuperates, it should force a detour.
In some designs, we would want an adversary (e.g., the player) to be able to trick the agent into
running out of fuel by this mechanism; in other designs we would hope the agent ignores optional
but attractive diversions and prioritizes severe need. We can easily imagine two distinct agents
within the same game which are differentiated only by whether they can be misled in such a way.
Game character AI designers have somewhat contradictory goals that distinguish their project
from, for example, game-playing AI whose objective is optimal play. On the one hand they want
believable characters who react reasonably to player actions and to the behaviors of other nonplayer characters; but on the other hand they want to craft certain very specific experiences that
nudge the player into trying new actions or approaching problems from a specific direction or that
prevent the agent from performing awkward-looking sequences of 3D animations. Traditionally,
game character AI was implemented with explicit state machines built by hand; more recently
behavior trees, goal-oriented action planning, and utility-based systems have come into vogue.
Fig. 1. An example behavior tree for a noise-investigation behavior. The tree is evaluated in preorder traversal.
Leaf nodes specify actions in the world (such as moving to a target), which can succeed or fail. Interior nodes
combine children into composite behaviors. The arrow (→) is sequencing (run each child until first failure),
and the question (?) is selection (run each child until first success).
Behavior trees are a scripting system for agents in virtual worlds, allowing designers of virtual
agents to visually construct behavioral flowcharts based on conditions on the world around them.
They are widely employed in the video games industry [16] for describing the “artificial intelligence”
behavior of non-player characters, such as enemy units in combat simulators and members of
virtual populations in open world-style games. Behavior trees have also been used for robot
control [12]. They are often described as merging the utility of decision trees and state machines,
allowing repeated or cyclic behaviors that modify and check state (internal or shared) as they
execute. Figure 1 shows an example behavior tree for a hypothetical security guard character. The
tree defines how to sequence and prioritize basic behaviors of listening for noises, investigating the
source of the noise, or doing idle activities. During game simulation, behavior trees are typically
re-executed with some frequency depending on the game, as often as once or more per time step.
The example in Figure 1, for instance, needs to be executed twice to both acquire a target and
investigate it.
Since behavior trees are often deployed in multi-agent simulations and with complex state-changing behavior, the ability for a designer to reason about the correctness of the tree quickly
succumbs to its size and branching factor. Even for straightforward sequences of behaviors, the
preconditions and postconditions are left unstated. For example, if an agent is told to move to door,
open door, and go through door, we might reasonably expect that in all circumstances where
the door is accessible, the agent will be on the opposite side of it by the time its behavior finishes.
However, this is not possible to conclude unless we reason both about the conditions and effects of
the individual actions and how the effects of earlier actions are expected to connect to the conditions
of later ones. Such a sequence of actions could fail, for instance, if the player were to intervene and
close the door immediately after the agent opened it. Furthermore, the success of behaviors may
depend on external conditions on the environment: an agent may expect another agent to have
placed an important item that it needs, and the behavior is only correct on the condition that this
dependency has been satisfied.
We describe an approach to reasoning compositionally about behavior trees in such a way that
they may be constructed in small units, typechecked against an expected behavioral schema, and
combined to form behaviors with new, compositionally-defined types. The approach requires the
author to provide a linear logical specification of the atomic actions, i.e. the leaves of the tree; types
for complex expressions formed from these leaves are derived from a linear logical interpretation
of the behavior tree operations (sequencing, selection, and conditions). The present work can be
seen as a way to regain some of the guarantees given by reasoning about a behavior from start
to finish without losing the reactivity, which is the main benefit of using behavior trees over, for
example, static plan generation [7].
Since behavior trees are a relatively simple formalism repeatedly realized in different incarnations,
and since game developers are under somewhat notorious pressure to ship products, there is no
authoritative, standard version of behavior trees. As alluded to above, a recurring issue with
behavior trees is resolving the apparent tension between reacting to unexpected changes in the
environment on the one hand and to performing authored behaviors over a longer duration on
the other hand. The ad hoc extensions applied to behavior trees in the wild are often intended to
resolve this tension. The approaches described in this paper could give a theoretical foundation
for addressing these “hacks” employed in practice—and, potentially, for more principled and
better-behaved adaptations of behavior trees towards the problem of designing complex agent and
character behaviors.
Our contributions are a formal specification and operational semantics for our formulation of
behavior trees, a type system and synthesis algorithm backed by an interpretation in linear logic,
and an implementation of these systems in Standard ML. These results represent the first step toward building a toolkit for robust authoring of virtual agent behaviors, combining support for
correct human authorship and certified goal-driven synthesis of behaviors.
The rest of the paper is organized as follows: Section 2 discusses related work; Section 3 describes further how behavior trees are used in the games industry; Section 4 explains linear logical
specifications and how they may be used to describe a possibility space for virtual worlds; Section 5
describes the syntax and operational semantics of our behavior tree language; Section 6 describes
the type system and its guarantees; Section 7 describes our implementation; Section 8 discusses
our current scope and future work; and Section 9 summarizes our contributions.
2
RELATED WORK
For the most part, efforts to provide robust formalisms to designers of virtual agents have been
disjoint from formal and language-based approaches. We identify related work in two key areas:
previous attempts to characterize virtual agent behaviors from a formal methods standpoint, and
related models of computation that have been characterized with linear logic.
2.1
Formal accounts of behavior trees
Marzinotto et al. provide an account [12] of behavior trees in the context of robot control, citing a dearth of mathematical rigor prior to their contribution. Their work contributes the first
mathematical definition of behavior trees and accounts for their expressive capabilities.
More recently, there has been some very recent work in applying synthesis and verification to AI
behavior trees [4]. The formal basis for said work is model checking in linear temporal logic (LTL).
Our work, by contrast, seeks a type-theoretic solution that supports modular reuse of behaviors.
2.2
Linear logical accounts of agents and processes
Linear Session Types [2] are an important touchstone for this work as another characterization
of a pre-existing system, π -calculus, under a semantics derived from linear sequent calculus. Our
work does not identify a direct logical correspondence between logical and operational notions in
the same way, but similarly provides a basis for static reasoning about complex behaviors.
The CLF [18] logical framework and corresponding implementation Celf [17] form a basis for
interpreting linear logic formulas as programs under a proof-construction-as-execution paradigm
(logic programming). While operationally, this approach diverges from the semantics of behavior
trees, the representation formalism informs out approach.
Finally, linear logic has been used to account for planning in a number of interesting ways:
deductive planning [5] runs with the observation that, in addition to Masseron et al.’s observation
that linear proof search can model planning [13], linear proofs generalize plans: they can characterize
recursive and contingent (branching) plans, recovering some of the same expressiveness as behavior
trees. Dixon et al. [6] apply deductive planning to an agent-based domain for dialogue-based
environments. This work encourages us to consider integrating the generative abilities of planners
with the reactivity of behavior trees in future work.
3
BACKGROUND: BEHAVIOR TREES IN GAMES
Behavior trees are widely used to define the behavior of non-player characters in digital game
genres ranging from strategy and simulation to first-person shooters. The major game-making
tools (Unreal Engine, Unity 3D, CryEngine, Amazon Lumberyard, and others) all either provide
natively or have third-party implementations of the technique. The canonical examples of behavior
trees’ use in games come from the Halo series of first-person shooter games [9]. Notable in their
formulation is that most of the tree is shared across the different types of enemy agents that appear
in the game, which reflects the difficulty of authoring good and reasonable behavior policies in
general. Behavior trees give authors a way to reuse some behaviors and override others from agent
to agent.
Behavior trees are usually characterized as a reactive AI formalism, in this context meaning that
agents are defined in terms of their reactions to a changing environment, rather than by a top-down
plan that tries to achieve a goal by considering contingencies in advance. Certainly, even finite
state machines can be made reactive by adding appropriate transitions, but scaling them to myriad
potential game events quickly overwhelms authors. Behavior trees reduce that burden by asking a
behavior author to structure the reactive behaviors in a tree, implicitly defining which behaviors
supersede or interrupt which other behaviors by their position in a preorder traversal of that tree.
A behavior tree is a data structure describing how an agent decides on its next actions, and at
the leaves some primitives for executing those actions. Behavior trees are repeatedly evaluated and
on each evaluation they process their nodes in sequence. When a node is processed, it evaluates
with some status: RUNNING, SUCCEEDED, or FAILED. Different sorts of nodes in the tree are specified
in terms of the circumstances under which they evaluate to each return value.
A key question in behavior tree semantics is whether a tree which ends an evaluation with
the RUNNING status should, on the next evaluation, continue from where it left off; the alternative
is for it to begin its next evaluation from the root again. The latter approach is more reactive to
changes in the environment or interruptions to behaviors, but in the former it is easier to specify
and conceptualize behaviors which take some time and should not be interrupted. It is also easier
to avoid behavior oscillations in the former evaluation strategy. For example, with the investigation
example from Figure 1: with the latter approach, the agent can be interrupted by a new noise
when moving to a target, while with the former approach, the agent will fully investigate a target
without distraction. Game designers have explored both semantics and even hybrids between these
approaches; we leave our discussion of this issue until Sec. 5.
Leaf nodes of the tree can be domain-specific conditions (which succeed if the condition is
currently satisfied or fail otherwise) or domain-specific actions (for example, setting the value of a
variable or triggering some external action). These are the only operations which can interact with
the environment. The actions in Figure 1 include setting a variable representing the agent’s current
target or physically navigating the agent towards said target. Failure may come from, for example,
there being no navigable path to the target. In video games, these are often implemented using
arbitrary program code working outside of the behavior tree formalism.
Non-leaf nodes come in three key variants (others are usually possible to define as syntactic
sugar). First, sequences evaluate each of their child nodes from left to right, and are RUNNING if any
child node is RUNNING, FAILED if any child is FAILED, or SUCCEEDED otherwise. Second, selectors
also evaluate their child nodes left to right, but are RUNNING if any child is RUNNING, SUCCEEDED
if any child has SUCCEEDED, and FAILED if all the child nodes are FAILED. Third, the parallel node
evaluates each of its children independently of each other, and has SUCCEEDED if more than a certain
number of its children succeeds, FAILED if more than a certain number fail, and RUNNING otherwise.
It is also implicit in the definition of behavior trees that there is some external environment where
state can be stored and persisted from evaluation to evaluation.
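As a rough sketch of this conventional formulation, the node types and a single evaluation pass can be transcribed into Standard ML as follows. This is only an illustration written for this discussion (it is not code from any particular engine or from our implementation); it omits parallel and decorator nodes, and in this version only leaf actions can report RUNNING:

(* Statuses a node can report on one evaluation. *)
datatype status = RUNNING | SUCCEEDED | FAILED

(* A behavior tree over an abstract environment 'env. *)
datatype 'env node =
    Action    of 'env -> 'env * status   (* domain-specific leaf action  *)
  | Condition of 'env -> bool            (* domain-specific leaf test    *)
  | Sequence  of 'env node list          (* children until first failure *)
  | Selector  of 'env node list          (* children until first success *)

(* One tick; this version restarts from the root each time (no RUNNING memory). *)
fun tick (Action f) env    = f env
  | tick (Condition p) env = (env, if p env then SUCCEEDED else FAILED)
  | tick (Sequence ns) env = tickAll ns env
  | tick (Selector ns) env = tickAny ns env
and tickAll [] env = (env, SUCCEEDED)
  | tickAll (n :: ns) env =
      (case tick n env of
           (env', SUCCEEDED) => tickAll ns env'
         | stopped => stopped)            (* FAILED or RUNNING stops the sequence *)
and tickAny [] env = (env, FAILED)
  | tickAny (n :: ns) env =
      (case tick n env of
           (env', FAILED) => tickAny ns env'
         | stopped => stopped)            (* SUCCEEDED or RUNNING stops the selector *)

The stateful variants discussed below would additionally record which child was left RUNNING so that the next tick can resume from it.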
In practice, there are many other types of nodes that can alter the semantics of the tree in
arbitrary ways, often violating the assumption of a preorder traversal: repeaters which evaluate
their children over and over until they evaluate with some status, stateful versions of sequence
and selector with memory that remember when they get stuck in RUNNING and only evaluate from
that stuck node forwards in their next evaluation, and so on. We ignore such extensions in this
work to simplify the presentation. Most of the extensions of behavior trees are meant to facilitate
long-running actions, to limit the reactivity of behavior trees (e.g., to allow interruptions only at
designer-defined times), and to ease the sharing of behavior tree or character state across situations,
characters, and actions. Actions, conditions, and decorators often themselves involve arbitrary code
in practice, so in our presentation of the formal semantics we require a linear logic formulation of
the leaf nodes.
4
ACTION SPECIFICATIONS IN LINEAR LOGIC
As a first step towards a type system for general behaviors, we concretize action specifications for
describing the behavior of an atomic action, such as “idly smoke cigarette” in Figure 1. Although
in reality, this behavior may simply take the form of an observable effect (e.g., some animation),
semantically, there are certain things we expect for it to make sense: for instance, that the agent has
a supply of cigarettes (and perhaps that this action spends one). Other actions, like passing through
a door, have more important requirements and effects, such as requiring being near the door and
resulting in the door being open: these are aspects of the environment that may be created, or
depended on, by other agents (or the same agent at another time).
There is a long line of successful work on describing actions in protocols and virtual worlds
using any of a class of related formalisms: multiset rewriting, Petri nets, vector addition systems,
and linear logic. These systems have in common an approach to specification using rules (or
transitions in some systems) that describe dependencies and effects, such that the cumulative
effects of applying those rules may be reasoned about formally.1
1 Planning
domain description languages also share this approach, but most standards such as PDDL [14], do not have as
clean of a compositional interpretation due to their allowance for the “deletion” of facts that do not appear as preconditions.
Fig. 2. One step of multiset rewriting execution, visualized. Each color/shape (purple diamond, blue circle) represents a distinct predicate; the contents of those shapes are terms (a, b, c) or term variables
(X). This diagram represents a transition of the state ∆ = {diamond(a), circle(a), circle(b), diamond(c)}
along the rule circle(X ) ⊗ diamond(X ) ( diamond(c) ⊗ diamond(d) to the new state ∆ 0 =
{diamond(c), diamond(d), circle(b), diamond(c)}. The thick orange borders on some atoms highlight which
ones are replaced and added by the rule.
The following example uses a linear logic-based notation adapted from Ceptre [11] to describe
action specifications for an Investigation world that could assign meaning to the actions used in
Figure 1:
set_target     : no_target -o has_target.
move_to_target : has_target -o has_target * at_target.
investigate    : has_target * at_target * heard_noise -o no_target.
smoke          : has_cigarette -o 1.
pace           : 1 -o 1.
The “lolli” syntax A -o B describes the ability to transition from a world in which A obtains to
one in which A no longer obtains and has been replaced with B. The atomic propositions include
facts like at_door and door_open, which represent pieces of world state, the “tensor” p * q syntax
conjoins them, and 1 is the unit of tensor. World configurations can be represented as multisets (or
linear contexts) ∆ specifying which facts hold, such as {no_target, heard_noise, has_cigarette,
has_cigarette}.
In general, predicates can take arguments (e.g., at(castle)) and rules can universally quantify
over variables that stand in for term arguments, in which case states are always ground (contain no
variables) and the application of rules identifies appropriate substitutions for variables for which
the rule applies. Figure 2 visualizes a step of execution for an example.
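For concreteness, the ground (variable-free) case of a single rewriting step can be sketched in Standard ML as follows. This is only an illustration of the multiset reading (our own encoding, not the Ceptre or Celf machinery): a state is a list of atoms, and a rule is the atoms it consumes and produces.

(* A world state is a multiset of ground atoms, represented as a list. *)
type atom = string
type state = atom list

(* Remove one occurrence of x from a multiset, if present. *)
fun removeOne x [] = NONE
  | removeOne x (y :: ys) =
      if x = y then SOME ys
      else (case removeOne x ys of
                NONE => NONE
              | SOME ys' => SOME (y :: ys'))

(* A ground rule consumes its antecedent atoms and produces its consequent atoms. *)
type rule = {name : string, consumes : atom list, produces : atom list}

fun apply ({consumes, produces, ...} : rule) (delta : state) : state option =
    let
      fun consume [] rest = SOME rest
        | consume (a :: more) rest =
            (case removeOne a rest of
                 NONE => NONE            (* a required resource is missing *)
               | SOME rest' => consume more rest')
    in
      case consume consumes delta of
          NONE => NONE
        | SOME rest => SOME (produces @ rest)
    end

For example, applying {name = "set_target", consumes = ["no_target"], produces = ["has_target"]} to the state ["no_target", "heard_noise", "has_cigarette", "has_cigarette"] yields SOME ["has_target", "heard_noise", "has_cigarette", "has_cigarette"], mirroring the set_target rule above.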
Multiset rewriting has been used commonly to model nondeterminism and concurrency: rulesets
can be nondeterministic whenever multiple rules may apply to a given state, and concurrency arises
from the partial-order causal relationships between rules firing. If two rules operate on disjoint
parts of the state, for instance, they can be considered to fire simultaneously, whereas rules that
depend on the effects of previous rules firing must obey sequential ordering. See Figure 3 for a
diagram of the causal relationships between actions for a particular program trace in which the
agent sets a target, moves to the target, investigates a noise, and smokes a cigarette.
For the work described in this paper, however, we are less interested in the multiset rewriting
interpretation of the rules. The specification under the multiset rewriting interpretation alone does
not give us as authors any control over strategies for action selection or goal-driven search. Instead,
it can be thought of as a description of the space of possible actions and a way of calculating their
cumulative effects. Behavior trees, then, can be understood as directives for how to explore this
space.
Fig. 3. A causal diagram for a possible trace of actions under the multiset rewriting interpretation of the
Investigate specification.
Formally, we define action specifications under the following grammar:
arg    ::= t | x
args   ::= · | arg, args
S      ::= p(args) | 1 | S ⊗ S
opdecl ::= name : xs. S ( S
Σ      ::= · | Σ, opdecl
Σ is a collection of specifications for operators op. Σ may also specify a collection of valid domain
types over which the arguments of operators may range; for example, the operator move(Dir, N )
may range over directions and natural numbers, perhaps meaning to move in that direction a
certain number of units. The world state ∆ is represented as a linear logic context, i.e. a multiset of
atomic propositions p(arдs) representing available resources.
In the next section, we assume an arbitrary signature Σ for each action that computes a function
on states, which does not depend on the linear logical interpretation. However, we revisit this idea
in Section 6 to assign types to behavior tree expressions.
5
BTL: A FORMAL SEMANTICS FOR BEHAVIOR TREES
In this section we describe BTL, a formal calculus for describing synchronous agent behaviors with
sequencing, branching, conditions, and loops.
The goals of this system are similar in many ways to the BTs used in practice: we aim to provide
simple authoring affordances for scripting reactions to different circumstances in an implicit
environment that is changing around the agent, and which the agent can change. We also adopt
some goals that are not currently met by industry practice:
• Compositional reasoning. In order to be able to reason about BT nodes in terms of the
behaviors of their subtrees, we need to know that subtree behaviors won’t be interrupted
in unknowable states.
• Debugging support—specifically, the ability for authors to state what they expect a behavior
to accomplish and have this expectation checked algorithmically. The algorithm should be
able to identify where in the tree a stated expectation is violated.
• Support for the expression of coordinated multi-agent behaviors. This requirement means
that we question the notion of an action dependency necessarily meaning failure and
instead (or additionally) require a blocking semantics (for instance, an agent may wait until
another agent joins them in the same location to hand off a needed item).
• Support for the eventual integration of behavior synthesis, or algorithms that accept a
propositional goal for the world state and generate BT subtrees corresponding to conditional
plans that achieve the goal.
These nonstandard goals entail some tradeoffs of expressiveness. While it would be ideal to
retain, for example, the “reactive” nature of BTs that allow them to break sequential actions to
tend to urgent interruptions, we do not adopt this form of reactivity by default because it would
preclude the ability to reason about sequential behaviors compositionally. In Section 8 we revisit
these expressiveness tradeoffs and consider ways to re-incorporate additional features.
5.1
Expressions
The expressions of BTL are:
α ::= op(args) | ?p. α | Seq{α; α} | Sel{α + α} | Seq{} | Sel{} | Repeat{α}
Intuitively, op(arдs) is an atomic action, invoking a pre-defined operator on a set of ground
arguments (such as move(left)); Seq{α; α } is a sequence node; Sel{α + α } is a selector node; Seq{}
is the unit of sequencers (does nothing); Sel{} is the unit of selectors (always fails); ?p. α checks the
condition p and executes α if it holds, failing otherwise; and Repeat{α } is a repeater node, running
α until failure.
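This grammar transcribes directly into a datatype. The following Standard ML sketch is our own rendering (the names are illustrative; operators and conditions are kept abstract as strings, and the n-ary Seq/Sel form used in the example of Section 5.3 is adopted directly); the value at the end is the Figure 1 tree:

(* BTL expressions, in n-ary form; Seq [] and Sel [] are the two units. *)
datatype btl =
    Op     of string * string list   (* op(args), an atomic action         *)
  | Cond   of string * btl           (* ?p. alpha                          *)
  | Seq    of btl list               (* sequencing: children until failure *)
  | Sel    of btl list               (* selection: children until success  *)
  | Repeat of btl                    (* run alpha until it fails           *)

(* The noise-investigation tree of Figure 1. *)
val investigation =
    Sel [ Cond ("heard_noise", Op ("set_target", [])),
          Seq [ Op ("move_to_target", []), Op ("investigate_target", []) ],
          Sel [ Op ("idle_smoke", []), Op ("idle_pace", []) ] ]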
5.2
Operational Semantics
We define an operational semantics for behavior trees in terms of what they may do to an abstract
world state, using a big-step evaluation judgment α . ∆ ⇓ δ , where ∆ is a world state and δ is the
result of evaluating a BTL expression, either a new world state (on successful execution) or FAIL.
The evaluation judgment requires a few preliminaries to define. First, we implicitly index the
judgment by a signature Σ, which provides a specification for a transition function t : τ → ∆ → δ
for each operator (atomic action) available to an agent, which takes arguments of type τ , computes
a transformation on a world state if the action can be performed, and returns FAIL otherwise.
Concretely, our linear logical action specifications can play this role. Second, we assume a notion of
a condition “holding for” a world state, expressed by the judgment ∆ ⊨ p. Again, while evaluation can be defined holding this judgment abstract, in practice we fulfill this definition by expressing conditions in terms of a (positive) subset of linear logic formulas and interpreting ⊨ as affine
provability.
Evaluating an operation consists of looking up its transition function in Σ and applying that
function to the current state; evaluating a condition requires that the current state satisfies the
condition, and otherwise fails:
op(args) . ∆ ⇓ δ      if Σ(op) = t and t(args, ∆) = δ
?S. α . ∆ ⇓ δ         if ∆ ⊨ S and α . ∆ ⇓ δ
?S. α . ∆ ⇓ FAIL      if ∆ ⊭ S
A sequence evaluates by chaining the states computed by successful subtrees through successive
subtrees in the sequence, and fails if any subtree fails:
Seq{} . ∆ ⇓ ∆
Seq{α; α′} . ∆ ⇓ δ      if α . ∆ ⇓ ∆′ and Seq{α′} . ∆′ ⇓ δ
Seq{α; α′} . ∆ ⇓ FAIL   if α . ∆ ⇓ FAIL
A selector succeeds with the first successful subtree and fails if no options are possible:
Sel{} . ∆ ⇓ FAIL
Sel{α + α′} . ∆ ⇓ δ     if α . ∆ ⇓ FAIL and Sel{α′} . ∆ ⇓ δ
Sel{α + α′} . ∆ ⇓ ∆′    if α . ∆ ⇓ ∆′
Repeaters continue evaluating the underlying expression until failure:
Repeat{α} . ∆ ⇓ δ       if α . ∆ ⇓ ∆′ and Repeat{α} . ∆′ ⇓ δ
Repeat{α} . ∆ ⇓ ∆       if α . ∆ ⇓ FAIL
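Read as a program, these rules are a small recursive evaluator. The following Standard ML sketch is one way to transcribe them; it is our own simplification (the signature Σ and the judgment ∆ ⊨ S are supplied as functions in a record, and a binary expression datatype is repeated here so the fragment stands alone):

(* Binary BTL expressions, mirroring the formal grammar. *)
datatype btl =
    Op      of string * string list
  | Cond    of string * btl               (* ?S. alpha           *)
  | SeqNil                                (* Seq{}               *)
  | SeqCons of btl * btl                  (* Seq{alpha; alpha'}  *)
  | SelNil                                (* Sel{}               *)
  | SelCons of btl * btl                  (* Sel{alpha + alpha'} *)
  | Repeat  of btl

datatype 'st result = STATE of 'st | FAIL

(* The spec supplies a transition function per operator and a satisfaction
   test for conditions; these stand in for Sigma and for the judgment ⊨. *)
type 'st spec =
  { trans : string -> string list -> 'st -> 'st result,
    holds : string -> 'st -> bool }

fun eval (sg : 'st spec) expr (delta : 'st) : 'st result =
    case expr of
        Op (f, args)    => #trans sg f args delta
      | Cond (p, a)     => if #holds sg p delta then eval sg a delta else FAIL
      | SeqNil          => STATE delta
      | SeqCons (a, b)  =>
          (case eval sg a delta of
               STATE delta' => eval sg b delta'
             | FAIL         => FAIL)
      | SelNil          => FAIL
      | SelCons (a, b)  =>
          (case eval sg a delta of
               STATE delta' => STATE delta'
             | FAIL         => eval sg b delta)
      | Repeat a        =>
          (case eval sg a delta of
               STATE delta' => eval sg (Repeat a) delta'
             | FAIL         => STATE delta)

With a spec for the Investigation rules, evaluating the Figure 1 tree (in this binary form) from a state containing only has_target returns FAIL, matching the walkthrough in Section 5.3.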
This definition of BTL adopts similar conventions and semantics to process algebras, such as
the adoption of two key operators, sequential (conjunctive) and choice (disjunctive) composition,
which have certain algebraic properties. In the case of BTL, evaluation respects the following
structural congruence:
Seq{Seq{}; α} ≡ Seq{α} ≡ Seq{α; Seq{}}
Seq{α; Seq{β; γ}} ≡ Seq{Seq{α; β}; γ}
Sel{Sel{} + α} ≡ Sel{α} ≡ Sel{α + Sel{}}
Sel{α + Sel{β + γ}} ≡ Sel{Sel{α + β} + γ}
Seq{α; Sel{β + γ}} ≡ Sel{Seq{α; β} + Seq{α; γ}}
Seq{Sel{α + β}; γ} ≡ Sel{Seq{α; γ} + Seq{β; γ}}
In other words, sequences form a monoid with the unit Seq{}; selectors form a monoid with the
unit Sel{}; and sequencing distributes over selection. We state that this equivalence is respected by
evaluation but omit the proof for brevity:
Conjecture 5.1. BTL operational semantics respects congruence: If α . ∆ ⇓ δ and α ≡ β then
β . ∆ ⇓ δ.
While the system bears resemblance to models of concurrency such as CSP [8] and (CCS) [15],
it differs in that interactions between BTL expressions and their environment happen implicitly
through manipulation of a shared world state, not through channel-based communication (as in
CSP) or explicit labels for inputs and outputs (as in CCS). The lack of such machinery is what
makes behavior trees so attractive to authors; it reduces the burden of needing to specify how
information is transmitted from one agent to another. However, it also makes the dependencies
between agents tacit and therefore difficult to debug when things go wrong, which is what this
paper aims to address.
Kleene algebra, particularly Kozen’s variant with tests (KAT) [10], offers another touchstone for
semantic insights; however, BTL does not quite satisfy the Kleene conditions: (1) order matters in
selector semantics due to fallthrough, so selectors are not commutative; (2) the annihilation law
does not hold; Seq{α; Sel{}} is not equivalent to Sel{} due to the state changes that α may incur.
5.3
Example
Below is a BTL implementation of the behavior tree described in Figure 1. This and all future examples
use an n-ary form of Seq and Sel defined in the obvious way.
Sel{?heard_noise.set_target() +
Seq{move_to_target(); investigate_target()} +
Sel{idle_smoke() + idle_pace()}}
To illustrate how a BTL expression evaluates, we consider an evaluation of this tree in an environment where the agent already has a reachable target and has not heard a noise, i.e. the situation
{has target}. Starting evaluation at the root, the outer selector expression evaluates each child in
succession until one succeeds. The first child will fail because the heard noise condition does not
hold. The second child, a sequence, will evaluate each of its children in succession. The first action,
predicated on having a target, evaluates by modifying the world state such that the agent is in the
same location as the target. Upon the movement action succeeding, the investigate target()
action will be evaluated; however, this node fails in the absence of having heard a noise, and that
failure propagates to the root of the tree.
init : Γ; p ` p
1R   : Γ; · ` 1
1L   : Γ; ∆, 1 ` C   if Γ; ∆ ` C
⊗R   : Γ; ∆1 , ∆2 ` A ⊗ B   if Γ; ∆1 ` A and Γ; ∆2 ` B
⊗L   : Γ; ∆, A ⊗ B ` C   if Γ; ∆, A, B ` C
(R   : Γ; ∆ ` A ( B   if Γ; ∆, A ` B
(L   : Γ; ∆1 , ∆2 , A ( B ` C   if Γ; ∆1 ` A and Γ; ∆2 , B ` C
>R   : Γ; ∆ ` >   (there is no >L)
NR   : Γ; ∆ ` ANB   if Γ; ∆ ` A and Γ; ∆ ` B
NLi  : Γ; ∆, A1 NA2 ` C   if Γ; ∆, Ai ` C
∀R   : Γ; ∆ ` ∀x:τ . A   if Γ, x:τ ; ∆ ` A
∀L   : Γ; ∆, ∀x:τ . A ` C   if Γ ` t : τ and Γ; ∆, [t/x]A ` C
Fig. 4. A fragment of intuitionistic linear sequent calculus.
If instead we started in an environment {has target, heard noise}, then at the same point in
the tree, the investigate target action will succeed and change the world state by replacing
has target with no target (in practice, this might have a more interesting effect like updating
variables representing the agent’s knowledge of its target). Because both children of the sequence
evaluate to success, the sequence evaluates to success. Thus, the root selector will itself evaluate
to success without evaluating the third branch, completing the evaluation of the entire tree, and
resulting in the state {no target}.
6
COMPOSITIONAL REASONING
Compositional reasoning for behavior trees means that understanding the effects of a whole BT can
be done by understanding the effects of its subtrees. The type system we describe gives a precise
account of the conditions under which a BT has successful execution and the consequences of that
execution. Accounting for the range of behaviors possible under failure is outside the scope of this
paper (see Section 8). However, these types are richer than sets of preconditions and postconditions:
they account for the “reactive” nature of BTs by requiring dependencies to be filled not prior to
execution but just at the node of the tree where they are needed; types also describe resources that
are released periodically if they are not needed for later use.
This “open” structure of behavior types makes the account of agents’ behavior amenable to
analysis in the presence of multiple agents executing in parallel: BTs may both incur and use changes
in the environment.
6.1
A linear type system, take 1
Our guiding principle for assigning types to BTL expressions adopts a “formulas-as-processes” point of view: we imagine the proof-theoretic semantics of a formula as what it makes provable under arbitrary environments. Consider linear logic formulas A ::= p | 1 | > | A ⊗ A | ANA | A ( A and an intuitionistic sequent calculus defining their provability (following [3]), shown in Figure 4.
The following intuition guides the correspondence we seek:
• Firing a leaf action op(arдs) of type S ( S 0 in an environment ∆ corresponds to the (-left
rule in linear sequent calculus: to succeed, it requires that the current environment match
the antecedent of the action and then changes the environment to replace it with the
consequent. Correspondingly, evaluating op(arдs) in an environment ∆, ∆ 0 where ∆ 0 ` S
evaluates to ∆, S 0 in the operational semantics.
• The unit selector Sel{} always fails, having run out of options; this corresponds to the >
unit of N in linear logic, which has no left rule, so everything is beneath it in the preorder.
• The unit sequence Seq{} does nothing, corresponding to the left rule of the unit 1 of ⊗.
Correspondingly, the operational semantics of Seq{} take the environment ∆ to itself.
• Selectors Sel{α 1 +α 2 } nearly correspond to making a choice, as in Linear Logic’s N operator.
There is a difference in that N is symmetric; ANB and BNA are interprovable, whereas order
matters in BTL selectors. However, certain reasoning principles apply: if either α 1 . ∆ ⇓ ∆1
or α 2 . ∆ ⇓ ∆2 , then one of ∆1 or ∆2 will be the result of evaluating Sel{α 1 + α 2 } against ∆.
For reasons described above, however, accounting for sequences will be more difficult. It might
be tempting to think that ⊗ is an appropriate interpretation, despite the relative lack of ordering
constraints, for reasons similar to how N can approximate selectors. A conjectured rule:
Seq{α1 ; α2 } : A1 ⊗ A2   if α1 : A1 and α2 : A2   (BAD RULE)
At this point we need to formulate the metatheorem we have so far been implicitly expecting to
hold:
Conjecture 6.1. If α : A and ∆, A ` S, then α . ∆ ⇓ ∆ 0 and ∆ 0 ` S.
(Recall that S stands for a formula with no Ns or (s, representing a successful state in the
course of a BTL expression’s execution.) The proposed rule violates this conjecture; we show a
counterexample next.
6.2
The trouble with sequences: an example
The following action specification describes a Doors world in which agents may pass through open
doors, open unlocked doors, and unlock locked doors if they have keys:
walk_to_door : at_elsewhere -o at_door.
pass_through : door_open * at_door -o door_open * through_door.
open_door : door_unlocked * at_door -o door_open * at_door.
smash_door : door_locked * at_door -o door_open * at_door.
close_door : door_open * through_door -o door_unlocked * through_door.
For a counterexample to Conjecture 6.1, let α = Seq{open door; walk to door} and let ∆ =
{at elsewhere, door unlocked}. According to BAD RULE, α : A = (at door ⊗ door unlocked (
door open)⊗(at elsewhere ( at door). By straightforward rule applications, ∆, A ` door unlocked,
but it is not the case that Seq{open door; walk to door} . ∆ ⇓ door unlocked.
In addition to the clear unsoundness of describing a sequential behavior with a commutative
connective, there are also concerns regarding the granularity of concurrent execution. Consider a
simple sequential behavior for opening and going through a door:
Seq{walk_to_door; open_door; pass_through; close_door}
A type we could reasonably expect to ascribe to this behavior is:
at elsewhere ⊗ door unlocked ( through door ⊗ door unlocked
This formula corresponds to the assumption that if our starting environment has at elsewhere
and door unlocked, each element in this sequence of actions will consume the output of the
previous action as an input, resulting in through door. Each successive action depends on the
effects of previous actions: opening the door assumes that the previous walk action brought us to
the door; passing through assumes we successfully opened the door; and closing the door assumes
we passed through and the door is still open.
However, in a general, maximally concurrent environment, we would not be allowed to make
these assumptions: suppose, for example, another agent interferes and closes the door just after we
open it. This relaxed assumption instead observes that we might forfeit all of the effects of previous
actions, resulting in the following type:
at elsewhere ( at door ⊗ (at door ⊗ door unlocked (
at door ⊗ door open⊗
(at door ⊗ door open ( through door ⊗ door open⊗
(door open ⊗ through door ( through door ⊗ door unlocked)))
This formula characterizes the behavior that, at each step, a sequence releases some resources
into the world along with a “continuation” that could, in some cases, potentially reabsorb those
resources, or require new ones along the way.
These two ascriptions correspond to different assumptions about how behaviors interact with
other behaviors manipulating the environment. The former ascribes an un-interruptable, “critical section” behavior to sequences and gives a stronger guarantee, allowing us to treat the sequence
as a black-box behavior without worrying about internal failure. On the other hand, the latter
permits interruption and “race condition”-like scenarios that are common in games and interactive
simulations in practice, but offers less strict guarantees that reflect the complexity of reasoning
about fine-grained interaction.
Our type system makes the latter assumption that processes may be interrupted, but we discuss
the potential to accommodate both in Section 8.
6.3
Linear Behavior Interfaces
We constrain linear logical formulas to the following grammar of interfaces, expressed inductively
as nested stagings of inputs and outputs (and choice between multiple possible interfaces):
N ::= S | S ( N | S ⊗ N | N NN | >
This grammar mainly serves to prevent ( from appearing to the left of another ( while
representing staged inputs and outputs as described above.
We assign types as linear logic formulas N to BTL expressions α with the judgment α :Σ N , where α is an expression, N is an interface type, and Σ is a specification giving types S ( S 0 to the
actions used at the leaves of the trees.
The typing rules are as follows, with Σ left implicit as an index to the judgment except when it is
needed. Atomic operations, conditions, the units, and selectors, are straightforward, and conditions
must assume, but then reproduce, the condition they depend on. Sequences are assigned a type
based on a computation seq of the types of their components:
Seq{} : 1

Sel{} : >

α1 : N1    α2 : N2
Sel{α 1 + α 2 } : N 1 NN 2

α1 : N1    α2 : N2
Seq{α 1 ; α 2 } : seq(N 1 , N 2 )

Σ ` op : xs. S ( S 0
op(args) :Σ [args/xs](S ( S 0)

α : N
?S. α : S ( S ⊗ N
The seq operator is defined as follows:
seq(1, N ) = N
seq(S 1 , S 2 ) = S 1 ⊗ S 2
seq(S, S 0 ⊗ N ) = (S ⊗ S 0) ⊗ N
seq(S, N 1 NN 2 ) = seq(S, N 1 )Nseq(S, N 2 )
seq(S 1 , S 2 ( N ) = S 1 ⊗ (S 2 ( N )
seq(S ⊗ N 1 , N 2 ) = seq(S, seq(N 1 , N 2 ))
seq(S 1 ( N 1 , N 2 ) = S 1 ( seq(N 1 , N 2 )
seq(N 1 NN 2 , N ) = (seq(N 1 , N )Nseq(N 2 , N ))
It can be interpreted as pushing the requirements of the first formula to the outside of the
whole formula, then conjoining its consequences with the specification of the second formula. The
correctness of this definition, and of the type system in general, with respect to the operational
semantics, is considered next.
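To make the recursion concrete, the following Python sketch implements seq over a small hypothetical AST for the interface grammar. It is our own illustration (unrelated to the Standard ML implementation described in Section 7), and the is_positive test is an assumption about how purely positive "S" formulas are recognised.

```python
from dataclasses import dataclass

# Hypothetical encoding of the grammar N ::= S | S -o N | S (x) N | N & N | T.
@dataclass(frozen=True)
class One:              # the unit 1
    pass

@dataclass(frozen=True)
class Atom:             # a positive state proposition
    name: str

@dataclass(frozen=True)
class Tensor:           # S (x) N
    left: object
    right: object

@dataclass(frozen=True)
class Lolli:            # S -o N
    left: object
    right: object

@dataclass(frozen=True)
class With:             # N1 & N2 (the paper's N1 N N2)
    left: object
    right: object

def is_positive(n):
    """Crude test for an 'S': no -o and no & occurs below this node."""
    if isinstance(n, (One, Atom)):
        return True
    return isinstance(n, Tensor) and is_positive(n.left) and is_positive(n.right)

def seq(n1, n2):
    """seq on interface types; clause order mirrors the definition in the text."""
    if isinstance(n1, One):
        return n2
    if isinstance(n1, With):
        return With(seq(n1.left, n2), seq(n1.right, n2))
    if isinstance(n1, Lolli):
        return Lolli(n1.left, seq(n1.right, n2))
    if isinstance(n1, Tensor) and not is_positive(n1):
        return seq(n1.left, seq(n1.right, n2))
    # from here on n1 is a positive S
    if isinstance(n2, With):
        return With(seq(n1, n2.left), seq(n1, n2.right))
    if isinstance(n2, Lolli):
        return Tensor(n1, n2)                       # seq(S1, S2 -o N) = S1 (x) (S2 -o N)
    if isinstance(n2, Tensor) and not is_positive(n2):
        return Tensor(Tensor(n1, n2.left), n2.right)  # seq(S, S' (x) N) = (S (x) S') (x) N
    return Tensor(n1, n2)                           # seq(S1, S2) = S1 (x) S2
```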
6.4
Metatheory
We revisit Conjecture 6.1 and sketch a proof. First we establish a lemma about the seq operator:
Lemma 6.2. If ∆, seq(N 1 , N 2 ) ` S and ∆ is flat, i.e. consists only of propositions of the form S, then
there exists S 1 such that ∆, N 1 ` S 1 and S 1 , N 2 ` S.
Proof. By induction on the definition of seq. We show the interesting cases.
• Case: seq(S 1 , S 2 ( N ) = S 1 ⊗ (S 2 ( N ).
Assume ∆, S 1 , S 2 ( N ` S. In this case, we can just tensor together the first state and feed
it into the second: ∆, S 1 ` (⨂∆) ⊗ S 1 , and ∆, (⨂∆) ⊗ S 1 , S 2 ( N ` S by untensoring that
proposition to get to the assumption.
• Case: seq(S 1 ( N 1 , N 2 ) = S 1 ( seq(N 1 , N 2 ).
Assume ∆, S 1 ( seq(N 1 , N 2 ) ` S. Because the proof of this sequent concludes with an S,
somewhere along the way we must discharge the (, i.e. some part of ∆ proves S 1 . Rewrite
∆ = ∆1 , ∆ 0 where ∆1 ` S 1 . Somewhere in the proof there is an application of ( L such
that ∆ 0, seq(N 1 , N 2 ) ` S is a subproof, and by inductive hypothesis, there exists S 0 such that
∆ 0, N 1 ` S 0 and S 0, N 2 ` S.
Now it suffices to show that ∆, S 1 ( N 1 ` S 0 (since we already have S 0, N 2 ` S). This
can be established by reusing the part of ∆ that discharges S 1 , using ( L on ∆1 ` S 1 and
∆ 0, N 1 ` S 0 .
• Remaining cases are straightforward.
Theorem 6.3. If α : A, ∆ is flat, and ∆, A ` S, then α . ∆ ⇓ ∆ 0 and ∆ 0 ` S.
Proof. By lexicographic induction on the typing derivation and proof. We show the sequence
case here.
• Case:
α1 : N1 α2 : N2
Seq{α 1 ; α 2 } : seq(N 1 , N 2 )
Known: ∆, seq(N 1 , N 2 ) ` S. By lemma, there exists S 0 such that ∆, N 1 ` S 0 and S 0, N 2 ` S.
By i.h., α 1 . ∆ ⇓ ∆ 0 where ∆ 0 ` S 0. By i.h., α 2 . {S 0 } ⇓ ∆ 00 where ∆ 00 ` S. By appropriate
equivalence between the positive proposition S 0 and context ∆ 0, and by the sequence
evaluation rule, Seq{α 1 ; α 2 } . ∆ ⇓ ∆ 00 where ∆ 00 ` S.
6.5
Example
We now return to the “investigating a sound” example whose evaluation was shown in Section 5.3.
The computed type for the example:
Sel{?heard_noise.set_target() +
Seq{move_to_target(); investigate_target()} +
Sel{idle_smoke() + idle_pace()}}
is:
(heard noise ( (heard noise ( no target ( has target)
N
(has target ( (has target ⊗ at target ⊗
(has target ⊗ at target ⊗ heard noise ( no target))
N
(has cigarette ( 1)
N
(1 ( 1)
7
IMPLEMENTATION
We implemented both an interpreter for BTL (with repeater nodes) and a type synthesis algorithm
for BTL excluding repeaters, both following the descriptions in the paper. The implementation is
written in 523 lines of Standard ML, including detailed comments, and we authored an additional
448 lines of examples, including those used in this paper.
The implementation is freely available on GitHub at (URL redacted for double-blind review).
8
DISCUSSION
A longer-term goal for this work is to be able to account for how behavior trees are used in practice,
to integrate the type system into the behavior authoring process (perhaps through a combination
of checking and synthesis), and to evaluate how it may make designers (particularly without
programming background) more effective. We anticipate using the implementation of BTs in the
Unreal Engine as a benchmark. Shorter term, there are a few theoretical concerns we still need to
consider. We now describe a roadmap for this project.
8.1
Parallel Composition
Currently, the semantics of agents operating in the world concurrently is not specified by the
language. To account for placing multiple world-manipulating agents into the environment, we
might consider introducing a “parallel” operator to BTL:
α ::= . . . | Par{α 1 k α 2 }
We may consider a few options for an operational semantics that warrant a different type-theoretic treatment. For instance, perhaps parallel behaviors split the state and operate in isolation
until complete. This behavior could be captured with the rule:
∆ = ∆1 , ∆2 α 1 . ∆1 ⇓ ∆10 α 2 . ∆2 ⇓ ∆20
Par{α 1 k α 2 } . ∆ ⇓ ∆10 , ∆20
∆/skip → ∆/
Σ ` op[arдs] : A ( B ∆A ` A
step/op
∆, ∆A /op(arдs) → ∆, B/
step/skip
∆/α 1 →∗ ∆ 0/
step/;
∆/α 1 ; α 2 → ∆ 0/α 2
∆/α 1 + α 2 → ∆/α 1
∆, p/?p. α → ∆, p/α
step/?
∆1 /α 1 → ∆10 /α 10
step/k 1
∆1 , ∆2 /α 1 kα 2 → ∆10 , ∆2 /α 10 kα 2
step/+1
∆/α 1 ↛
step/+2
∆/α 1 + α 2 → ∆/α 2
∆/α →∗ ∆ 0/
step/∗
∆/α ∗ → ∆ 0/α ∗
∆2 /α 2 → ∆20 /α 20
step/k 2
∆1 , ∆2 /α 1 kα 2 → ∆1 , ∆2 /α 1 kα 20
Fig. 6. A small-step semantics for BTL without failure.
Additional rules may specify that if either
subbehavior fails, the whole behavior fails.
However, in practice, behavior trees allow for
finer-grained interactions between processes.
The above specification precludes, for example,
the below two behaviors succeeding:
// Agent 1 (a1)
Seq{move(a1,L);
give(a2,O)}
// Agent 2 (a2)
Seq{move(a2,L);
eat(a2,O)}
These behaviors will only succeed if they
interact when run; a1’s action give(a2,O) will
only succeed if a2’s first action, move(a2,L), is
permitted to succeed first. Figure 5 describes
visually the behavior specification we would like to result in this interaction.
Fig. 5. Composing processes that interact.
To account for such fine-grained concurrent
behaviors formally, we require a small-step semantics over the judgment α/∆ → α/∆ 0. A
sketch of this semantics that includes parallel
composition is in Figure 6. However, note that this semantics does not properly handle failure;
instead, it embodies the synchronous semantics of behaviors simply pausing (failing to evolve)
if their conditions are not satisfied, instead permitting the possibility of a delayed transition if
their conditions become satisfied as another behavior evolves. While this behavior may be useful
in some scenarios, it is not universally desirable, so we need a way to account for this behavior,
perhaps through a stack-based semantics with success and failure continuations. Likewise, the type
system has a clear extension to account for arbitrarily “pausing” processes (⊗ is a straightforward
interpretation), but accounting for failure in the type system is also left to future work.
8.2
Theoretical extensions
In addition to accounting for parallel execution, we also need to consider repeater nodes. The
operational semantics are fairly easy to specify, but guaranteeing convergence of computing fixed
points for a type-based characterization may prove difficult. Recursive types have been successfully
integrated into linear logic [1], and we plan to investigate their use, although readability may
remain a challenge.
Another step we would like to take is to introduce additional forms of lightweight verification
on top of the type system. For instance, selectors are often designed with the intent that all possible
cases are covered: each child of the selector is guarded by a condition, and the disjunction of the
conditions characterizes an invariant of the state. Provided a proof that the invariant actually holds,
it may be useful to simplify the type to omit the guards. This corresponds to provability between
e.g. (A ⊕ B) ⊗ ((A ( C)N(B ( D)) and (C ⊕ D).
Next, while we have established a correspondence between the type system and evaluation of
successful behaviors, we believe we can formulate a conjecture to the effect that the situations in
which types fail to yield a flat context (because there is some implication that cannot be discharged
on the left, say) correspond to the failure cases of execution. We expect this proof will be more
difficult than the former.
9
CONCLUSION
We have presented a formal semantics and type system for a fragment of behavior trees as they are
used to describe characters in virtual environments. Our system includes a reference implementation
and correctness proofs. This work represents substantial new ground broken towards a longer-term
vision of authoring robust, designer-friendly specifications for reactive agent behaviors.
If our long-term vision is successful, we can enable several new things for behavior authors:
integrating hand-authored trees with behavior synthesis algorithms through linear logic theorem
proving (akin to planning); the development of behavior libraries consisting of reusable, parameterized behavior trees whose specification precludes examining the entire tree; and certified behaviors
that are guaranteed to succeed under certain environments. These features would improve the
effectiveness of developing agents in virtual worlds with varied applications in entertainment, arts,
simulation modeling, research competitions and challenges, and computing education.
REFERENCES
[1] David Baelde. 2012. Least and greatest fixed points in linear logic. ACM Transactions on Computational Logic (TOCL)
13, 1 (2012), 2.
[2] Luis Caires, Frank Pfenning, and Bernardo Toninho. 2016. Linear logic propositions as session types. Mathematical
Structures in Computer Science 26, 3 (2016), 367–423.
[3] Bor-Yuh Evan Chang, Kaustuv Chaudhuri, and Frank Pfenning. 2003. A judgmental analysis of linear logic. Technical
Report CMU-CS-03-131R. Department of Computer Science, Carnegie Mellon University. Revised December 2003.
[4] Michele Colledanchise, Richard M Murray, and Petter Ogren. 2017. Synthesis of Correct-by-Construction Behavior
Trees. (2017).
[5] Stephen Cresswell, Alan Smaill, and Julian Richardson. 1999. Deductive synthesis of recursive plans in linear logic. In
European Conference on Planning. Springer, 252–264.
[6] Lucas Dixon, Alan Smaill, and Alan Bundy. 2006. Planning as deductive synthesis in intuitionistic linear logic. Technical
Report. Technical Report EDI-INF-RR-0786, School of Informatics, University of Edinburgh.
[7] Malik Ghallab, Dana Nau, and Paolo Traverso. 2016. Automated Planning and Acting. Cambridge University Press.
[8] Charles Antony Richard Hoare. 1978. Communicating sequential processes. Commun. ACM 21, 8 (1978), 666–677.
[9] Damian Isla. 2005. Handling Complexity in the Halo 2 AI. In Proceedings of the 2005 Game Developers Conference.
[10] Dexter Kozen. 1997. Kleene algebra with tests. ACM Transactions on Programming Languages and Systems (TOPLAS)
19, 3 (1997), 427–443.
[11] Chris Martens. 2015. Ceptre: A language for modeling generative interactive systems. In Eleventh Artificial Intelligence
and Interactive Digital Entertainment Conference.
[12] Alejandro Marzinotto, Michele Colledanchise, Christian Smith, and Petter Ögren. 2014. Towards a unified behavior trees framework for robot control. In Robotics and Automation (ICRA), 2014 IEEE International Conference on. IEEE, 5420–5427.
[13] Marcel Masseron, Christophe Tollu, and Jacqueline Vauzeilles. 1993. Generating plans in linear logic: I. actions as
proofs. Theoretical Computer Science 113, 2 (1993), 349–370.
[14] Drew McDermott, Malik Ghallab, Adele Howe, Craig Knoblock, Ashwin Ram, Manuela Veloso, Daniel Weld, and
David Wilkins. 1998. PDDL-the planning domain definition language. (1998).
[15] Robin Milner. 1980. A calculus of communicating systems. (1980).
[16] Steven Rabin. 2013. Game AI Pro: Collected Wisdom of Game AI Professionals. A. K. Peters, Ltd., Natick, MA, USA.
[17] Anders Schack-Nielsen and Carsten Schürmann. 2008. Celf-A Logical Framework for Deductive and Concurrent
Systems (System Description).. In IJCAR, Vol. 5195. Springer, 320–326.
[18] Kevin Watkins, Iliano Cervesato, Frank Pfenning, and David Walker. 2003. A concurrent logical framework I: Judgments
and properties. Technical Report. Technical Report CMU-CS-02-101, Department of Computer Science, Carnegie
Mellon University, 2002. Revised May.
| 6 |
Massive Multiple Input Massive Multiple Output for
5G Wireless Backhauling
D.-T. Phan-Huy1, P. Ratajczak1, R. D'Errico2, J. Järveläinen3*, D. Kong4, K. Haneda3, B. Bulut4, A. Karttunen3, M.
Beach4, E. Mellios4, M. Castañeda5, M. Hunukumbure6, T. Svensson7
1
Orange Labs, 2CEA-LETI, 3Aalto Univ.,*Premix Oy, 4Univ. of Bristol, 5Huawei, 6Samsung, 7Chalmers Univ. of Technology
dinhthuy.phanhuy@orange.com
Abstract— In this paper, we propose a new technique for the
future fifth generation (5G) cellular network wireless
backhauling. We show that hundreds of bits per second per
Hertz (bits/s/Hz) of spectral efficiency can be attained at a high
carrier frequency (such as 26 GHz) between large antenna arrays
deployed along structures (such as lamp posts) that are close and
roughly parallel to each other. Hundreds of data streams are
spatially multiplexed through a short range and line of sight
“massive multiple input massive multiple output” (MMIMMO)
propagation channel thanks to a new low complexity spatial
multiplexing scheme, called “block discrete Fourier transform
based spatial multiplexing with maximum ratio transmission” (B-DFT-SM-MRT). Its performance in real and existing
environments is assessed using accurate ray-tracing tools and
antenna models. In the best simulated scenario, 1.6 kbits/s/Hz of
spectral efficiency is attained, corresponding to 80% of Singular
Value Decomposition performance, with a transmitter and a
receiver that are 200 and 10,000 times less complex, respectively.
Fig. 1. Communication between two ULAs in LOS: a transmit ULA and a receive ULA, each with N elements and of length L, separated by a distance D.
Keywords—5G, high carrier frequency, millimeter wave,
Massive MIMO, short range, Line-Of-Sight MIMO
I.
INTRODUCTION
Due to the availability of large spectrum at higher carrier
frequencies, the spectrum bands corresponding to millimeter
waves are good candidates for the self-backhauling of the
future 5th generation (5G) of mobile networks [1]. Their
coverage limitation could be overcome through a dense
deployment. In this paper, we propose to boost the spectral
efficiency of millimeter wave based backhaul links through a
new type of deployment. In theory [2-5], two uniform linear
arrays (ULAs) of 𝑁 antenna elements and of equal length and
parallel to each other, communicating through a line of sight
(LOS) multiple input multiple output (MIMO) propagation
channel (as illustrated in Fig. 1), can multiplex 𝑁 data streams
in the spatial domain, under the following conditions:
L² / (λD) = N;  (1)
D ≫ L,  (2)
where 𝜆 = 𝑐/𝑓 is the wavelength (𝑐 and 𝑓 being the speed of
light and the carrier frequency, respectively), 𝐷 is the distance
between the ULAs, 𝐿 is the ULAs’ length (as illustrated in Fig.
1.). According to [5], conditions (1) and (2) guarantee that the
MIMO channel matrix has 𝑁 equal eigenvalues. Recently, [6]
has shown that 𝑁 can reach values as high as several hundreds
of antenna elements, if 𝜆 corresponds to 5G (candidate) high
carrier frequencies and if 𝐷 and 𝐿 are chosen smartly. These
new types of deployments, that we call “massive multiple input
massive multiple output” (MMIMMO), could deliver gigantic spectral efficiencies of hundreds of bits/s/Hz. In this paper, for the first time, we evaluate MMIMMO deployments in real and existing environments, using ray-tracing tools that accurately model the scattering. Antenna radiation patterns are also accurately modeled and practical deployment considerations not exactly fulfilling (1) are also taken into account.
Fig. 2. 16×16 MIMO system mapping 16 streams into 16 angles: a) DFT over the entire array; b) DFT per block (two blocks example).
In this paper, we also propose a new practical signal
processing scheme for these new MMIMMO deployments. As
applying singular value decomposition (SVD) to a MMIMMO
system is too complex, we propose to re-use a practical low
complexity spatial multiplexing (SM) scheme that combines
discrete Fourier transform (DFT) and maximum ratio
transmission (MRT) precoding, called DFT-SM-MRT [7]. Fig.
2-a) illustrates the use of DFT-SM alone (without MRT) with
two ULAs parallel to each other and in LOS. In this case, data
streams are mapped into angles. In DFT-SM-MRT, the role of
MRT is to mitigate the effect of scattering and to deal with
cases where the ULAs might not be perfectly parallel. However,
DFT-SM-MRT [7] still suffers from residual interference,
especially when condition (2) is not met. In this paper, we
present a new low complexity scheme called block DFT-SM-MRT (B-DFT-SM-MRT), with a similar complexity as the one
of DFT-SM-MRT. As illustrated in Fig. 2, it applies the DFT
per block. The main idea of this scheme is to approximately
fulfill condition (2) on a per-block basis.
The outline of the paper is as follows. Section II defines a
set of practical MMIMMO links that could be deployed in
existing environments, and their corresponding antenna and
propagation models. Section III presents the novel B-DFT-SM-
MRT scheme and recalls the definitions of the DFT-SM-MRT
and SVD schemes. Section IV compares these schemes in
terms of performance and complexity, for all links defined in
Section II. Section V concludes this paper.
The following notations are used. 𝐈 (𝑁) is the identity matrix
of size 𝑁. 𝐌 DFT(𝑁) , 𝐌 IDFT(𝑁) ∈ ℂ𝑁×𝑁 are the Butler matrices
of size 𝑁, for the DFT and the IDFT operations, respectively. If
𝐀 ∈ ℂ𝑁×𝑀 , 𝐀∗ is the conjugate of 𝐀 , 𝐀† is the transpose
conjugate of 𝐀, rank(𝐀) is the rank of 𝐀, 𝐀𝑛,𝑚 is the element in
the 𝑛-th row and 𝑚-th column, with 1 ≤ 𝑛 ≤ 𝑁 and 1 ≤ 𝑚 ≤
𝑀. If 𝑥 ∈ ℂ, then |𝑥| is the module 𝑥.
II.
MMIMMO LINKS IN EXISTING ENVIRONMENTS
To assess the performance of MMIMMO links in real and
existing environments, we have built environment-specific
channel models. We consider f = 26·10⁹ Hz, as it
corresponds to a candidate carrier frequency for 5G in Europe.
We consider a narrowband signal. Such signal can either be
obtained with a narrowband single carrier waveform or a
narrow sub-band of a wideband multi-carrier waveform. With
this assumption, the propagation between the transmit array
and the receive array can be considered as frequency flat and
can be modeled with a complex channel matrix. Let H ∈ ℂ^(N×N) be the MMIMMO propagation channel matrix. We model the propagation between the n-th transmit antenna element and the m-th receive antenna with a finite number of rays N^rays_(n,m) that depends on n and m. Indeed, different pairs of receive and transmit antenna, which are very far apart in the arrays, may see different numbers of scatterers. The r-th ray (with 1 ≤ r ≤ N^rays_(n,m)) has a path gain α_(n,m,r) ∈ ℂ, a direction of arrival vector DoA_(n,m,r) ∈ ℝ³ and a direction of departure vector DoD_(n,m,r) ∈ ℝ³. We assume that all transmit and receive antenna elements have the same antenna gain function Γ ∈ ℝ^(ℝ³), Γ being a function of the direction of arrival (or departure). With these notations, the channel coefficient H_(n,m) between the receive antenna n and the transmit antenna m is given by:
H_(n,m) = Σ_(r=1)^(N^rays_(n,m)) α_(n,m,r) · Γ(DoA_(n,m,r)) · Γ(DoD_(n,m,r)).  (3)
We have used two different ray-tracing tools to obtain the parameters N^rays_(n,m), α_(n,m,r), DoA_(n,m,r) and DoD_(n,m,r) in two different environments: an outdoor and an indoor environment, described in Sections II-A and II-B, respectively. Each tool models a real and existing environment in which antenna elements can be positioned. The aforementioned parameters are then generated based on the chosen positions of the antenna elements. Γ(DoA_(n,m,r)) and Γ(DoD_(n,m,r)) are determined based on an accurate antenna model presented in Section II-C. Finally, the method for the setting of the MMIMMO parameters (such as the number of antenna elements N) is given in Section II-D.
A. MMIMMO links in the City Center of Bristol
Fig. 3. Modeled links for outdoor environment (in Bristol’s City Center): a) College Green Area (Google maps); b) Link N° 0; c) Link N° 1; d) Link N° 2.
Fig. 3 a) illustrates the considered outdoor environment.
The chosen outdoor environment is an existing road of the
“College Green Area”, in the city center of Bristol, in the
United Kingdom. The ULAs are assumed to be deployed on
existing lamp-posts. Three different links between lamp-posts,
named “Link N° 0”, “Link N° 1” and “Link N° 2” are
considered and illustrated in Fig. 3 b), c) and d), respectively.
The employed ray tracer identifies the radio wave scatterers
using an accurate geometrical database of the physical
environment [8], [9]. A similar scenario has been adopted in
[10]. Point-source three dimensional (3D) ray-tracing is
performed from each antenna element of the transmit array to
each antenna element of the receive array assuming isotropic
elements. The tool provides the necessary information to
compute the parameters of equation (3) for each ray.
B. MMIMMO links in the Helsinki Airport
As an example of indoor environment, we chose to model
the existing Helsinki airport check-in hall. Again, the used raytracing tool uses an accurate geometrical database of the
physical environment, a so called point cloud model [11]. The
point cloud model includes small objects (e.g. self-check-in
machines) which scatter energy at high carrier frequencies. Our
simulator is calibrated with experimental measurements made
in the Helsinki airport check-in hall [12].
As illustrated in Fig. 4, our simulator accurately identifies the locations and the reflection (or scattering) coefficients of the scatterers. The information on the scatterers allows us to derive the parameters of equation (3). Different MMIMMO links illustrated in Fig. 5 are considered: between nearby devices (“Link N° 3”); between signboards (“Link N° 4” and “Link N° 5”); between self-check-in-machines (“Link N° 6”) and finally between a signboard and a canopy (“Link N° 7”).
Fig. 4. Point cloud model of the Helsinki Airport Check-In Hall, illustrating the propagation between one point on the giant screen and one point on the canopy: the main LOS direction is indicated by the light green straight line, the scatterers identified by the tool are indicated by the yellow circles. Larger yellow circle means that the scatterer has a stronger impact.
Fig. 5. Modeled links for indoor environment in Helsinki Airport Check-In Hall: a) Link N° 3 between laptops in the check-in hall (view of the hall from above, L = 25 cm); b) Links N° 4, 5, 6 and 7.
C. Model of antennas
The radiation patterns at 26 GHz of two different antennas are generated by simulation: a ‘basic antenna’ (a classical printed dipole on a ground plane) illustrated in Fig. 6-a) and a “directional antenna” illustrated in Fig. 6-b). The directional antenna consists of five units of the basic antenna, separated by 1.5 wavelengths. As a finite number of discrete spatial samples of these radiation patterns are generated by simulations, an interpolation between samples is necessary to obtain the exact values of Γ(DoA_(n,m,r)) and Γ(DoD_(n,m,r)) used in Equation (3).
Fig. 6. Antenna radiation pattern of an antenna element and ULAs.
D. MMIMMO parameters
For each MMIMMO link listed in Section II-A and Section II-B, we compute the ULAs parameters depending on the physical structures (lamp posts, signboards etc.) on which the ULAs are deployed.
Let 𝑁 U and 𝑑 be the number of data streams multiplexed in
the spatial domain and the inter-antenna spacing, respectively.
The number of antenna elements 𝑁 is set equal to 𝑁 U in the
case of the DFT-SM-MRT scheme and the SVD scheme. As it
will later be explained in Section III, 𝑁 potentially very slightly
exceeds 𝑁 U in the case of B-DFT-SM-MRT. Although, from
the ray-tracing tools, we know the exact value of 𝐷 , we
compute the MMIMMO system parameters based on an approximation D̂ (with an arbitrarily small chosen error in the order of a decimeter), since we assume that in a real
deployment situation, one can only obtain an imperfect
measurement of 𝐷.
We choose 𝑑 and 𝑁 U , where 𝑁 U = 𝑁 (for all cases except
some configurations of B-DFT-SM-MRT), so that condition
(1) is met as much as possible and with the following
additional constraint: 𝑁 U must be a power of 2. Note that this
constraint only applies to 𝑁 U and does not apply to 𝑁 when
𝑁 > 𝑁 U . This latter requirement ensures a low complexity
implementation of the DFT. Two different methods to choose
𝑁 U are tested in this paper.
In the first method (applied to the outdoor links), we
arbitrarily set 𝑁 U = 64 = 26 . In the second method (applied to
the indoor links), we determine the length 𝐿 (in meters) of the
physical structure on which we deploy the ULA. We then
compute the largest 𝑁 U that is deployable within 𝐿 and that is
close to fulfilling condition (1), as follows:
N^U = 2^K and K = arg max_(k∈ℕ) {2^k ≤ L² / (λD̂)}.  (4)
In practice, one cannot position antennas with an infinite precision. We thus define δ as the spatial step for the positioning of antennas. d is then determined as follows, for both methods:
d = (δ / N^U) · ⌊ √(λD̂N^U) / δ ⌋.
Table I lists the parameters of the considered MMIMMO links. Note that for some links, condition (2) is not met (i.e. D̂/(dN^U) is not much higher than 1). Compared to DFT-SM-MRT, B-DFT-SM-MRT is therefore expected to improve these links. For the Link N°4, we allow the deployment of antennas 30 cm above the check-in machine height.
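As a quick numerical illustration of (4) and of the spacing rule above, the short Python sketch below reproduces the Link N° 3 row of Table I; the exact speed-of-light constant and the flooring behaviour are our assumptions rather than details stated in the paper.

```python
import math

WAVELENGTH = 299792458.0 / 26e9   # lambda at f = 26 GHz, in metres
DELTA = 1e-4                      # positioning step delta = 0.1 mm

def num_streams(L, D_hat, lam=WAVELENGTH):
    """Largest power of two N_U with 2**k <= L**2 / (lam * D_hat), as in (4)."""
    return 2 ** int(math.floor(math.log2(L ** 2 / (lam * D_hat))))

def antenna_spacing(n_u, D_hat, lam=WAVELENGTH, delta=DELTA):
    """Inter-antenna spacing d = (delta / N_U) * floor(sqrt(lam * D_hat * N_U) / delta)."""
    return (delta / n_u) * math.floor(math.sqrt(lam * D_hat * n_u) / delta)

# Link N° 3 (between nearby laptops): L = 0.30 m, D_hat = 0.5 m
n_u = num_streams(0.30, 0.5)      # -> 8
d = antenna_spacing(n_u, 0.5)     # -> about 26.8 mm
print(n_u, round(d * 1e3, 1))
```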
TABLE I. MMIMMO PARAMETERS WITH δ = 0.1 mm
Link N°    | D̂ (m) | L (m)     | N^U | d (mm) | N^U·d (m) | D̂/(d·N^U)
3          | 0.5   | 0.30      | 8   | 26.8   | 0.2144    | ~2
4          | 17.4  | 1.55+0.30 | 16  | 112    | 1.792     | ~10
5          | 8.9   | 3.7       | 64  | 45.8   | 2.9312    | ~3
0, 1 and 2 | 25    | NA        | 64  | 67.1   | 4.2944    | ~6
6          | 0.9   | 1.3       | 128 | 9      | 1.152     | ~13
7          | 18.7  | 8.85      | 256 | 28.7   | 7.3472    | ~3
III.
STUDIED SCHEMES
This section describes the three following spatial multiplexing schemes: A) DFT-SM-MRT [7] (as the baseline method); B) B-DFT-SM-MRT (as the new proposed method); C) SVD spatial multiplexing (as an upper bound). To make a fair comparison, we impose the following common constraint: the number of streams N^U and the inter-antenna spacing d defined in Section II are common to all schemes. As illustrated in Fig. 7, only the following parameters can be scheme-specific: the number of antenna elements N, the spatial precoder and the spatial decoder. As a consequence, the propagation channel matrix H ∈ ℂ^(N×N) and the equivalent channel matrix G ∈ ℂ^(N^U×N^U) (that includes precoding, propagation, and decoding) are also scheme-specific.
Fig. 7. Common and scheme specific parameters.
To assess the maximum achievable performance of the MMIMMO system, we assume that the signal to noise ratio is very large, and that the system is only limited by the signal to interference ratio (SIR). The SIR of one data stream can be derived based on the MIMO equivalent channel matrix G. The SIR sir_n of each data stream n is given by:
sir_n = |G_(n,n)|² / Σ_(p=1, p≠n)^(N^U) |G_(n,p)|²,  1 ≤ n ≤ N^U.
The theoretical attainable spectral efficiency c_n for the data stream number n is given by: c_n = log₂(1 + sir_n), 1 ≤ n ≤ N^U. Practical modulations (such as 256 QAM or QPSK) and coding schemes have a bounded spectral efficiency. We therefore define the minimum and maximum spectral efficiencies, s^MIN and s^MAX, accordingly. We define the practical spectral efficiency c_n^P as follows: c_n^P = min(c_n, s^MAX) if min(c_n, s^MAX) > s^MIN and c_n^P = 0 otherwise. The resulting total spectral efficiency s is therefore:
s = Σ_(n=1)^(N^U) c_n^P.  (5)
For each spatial multiplexing scheme, the spectral efficiency is determined using equation (5), this equation being fed with a scheme-specific expression of G.
Next sub-sections provide the expressions of the scheme-specific parameters H, G and N. The same transmit power constraint is assumed for all transmitters.
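To make the evaluation method concrete, here is a minimal numpy sketch of how (5) is applied to an equivalent channel matrix G; the handling of an interference-free stream and the exact clipping convention are our own assumptions.

```python
import numpy as np

def spectral_efficiency(G, s_min=1.0, s_max=8.0):
    """Sum of practical per-stream rates c_n^P, as in (5), from the equivalent channel G."""
    G = np.asarray(G, dtype=complex)
    total = 0.0
    for n in range(G.shape[0]):
        signal = abs(G[n, n]) ** 2
        interference = sum(abs(G[n, p]) ** 2 for p in range(G.shape[0]) if p != n)
        sir = signal / interference if interference > 0 else float("inf")
        c_n = np.log2(1.0 + sir)               # theoretical rate of stream n
        c_p = min(c_n, s_max)                  # bounded by the highest modulation
        total += c_p if c_p > s_min else 0.0   # streams below s_min carry no data
    return total
```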
A. DFT-SM-MRT
For DFT-SM-MRT [7], the number of antenna elements is N = N^U. As illustrated in Fig. 8, ρ·H†·M^IDFT(N^U) is the precoder and M^DFT(N^U) is the decoder, with ρ being a scheme-specific normalising factor to satisfy the power constraint. The equivalent MIMO channel G is thus:
G = ρ · M^DFT(N^U) · H · H† · M^IDFT(N^U).  (6)
Fig. 8. DFT-SM-MRT spatial multiplexing scheme [7].
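A compact numpy sketch of (6) follows; the unitary scaling of the DFT (Butler) matrices and the exact form of the normalising factor ρ are assumptions of ours, not details specified by the scheme.

```python
import numpy as np

def dft_sm_mrt_equivalent_channel(H):
    """Equivalent channel G = rho * M_DFT H H^H M_IDFT of (6) for an N_U x N_U channel H."""
    n_u = H.shape[0]
    M_dft = np.fft.fft(np.eye(n_u)) / np.sqrt(n_u)   # unitary DFT matrix
    M_idft = M_dft.conj().T                          # its inverse (IDFT)
    precoder = H.conj().T @ M_idft                   # H^H M_IDFT, i.e. MRT towards each beam
    rho = 1.0 / np.linalg.norm(precoder)             # assumed transmit-power normalisation
    return rho * M_dft @ H @ H.conj().T @ M_idft
```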
B. B-DFT-SM-MRT
As illustrated in Fig. 9, compared to DFT-SM-MRT, B-DFT-SM-MRT applies the DFT to 𝑁 S blocks of 𝑁 D data
symbols separately, with 𝑁 D = 𝑁 U /𝑁 S . 𝑁 S is selected so that
condition (2) is better fulfilled, at least on a per-block basis, i.e. such that D̂/(dN^D) > 1. We optionally append a cyclic prefix
(CP) [13] of 𝑁 CP symbols in the spatial domain (with 0 ≤
𝑁 CP ≤ 𝑁 D), after each ‘per block’ DFT operation. As symbols
are mapped onto antennas, this has a direct impact on the role
of each antenna. The 𝑁 S blocks of 𝑁 D data symbols are
mapped onto 𝑁 S blocks of 𝑁 D ‘data antennas’. This constitutes
a set of 𝑁 U ‘useful antennas’. 𝑁 S blocks of 𝑁 CP symbols are
mapped onto 𝑁 S blocks of 𝑁 CP ‘CP antennas’. Each block of
𝑁 CP ‘CP antennas’ is inserted between two successive blocks
of 𝑁D ‘data antennas’. Each block of 𝑁 D data streams goes
through an inverse DFT, which is equivalent to a multiplication by M^IDFT(N^D). We set N^E = N^D + N^CP. This time, N ≥ N^U
antenna elements (instead of 𝑁 U ) are used at both the
transmitter and receiver sides, with:
𝑁 = 𝑁 S (𝑁 D + 𝑁 CP ) = 𝑁 S 𝑁 E = 𝑁 U + 𝑁 S 𝑁 CP .
(7)
We define the matrices A, A′ ∈ ℂ^(N^E×N^D), B, B′ ∈ ℂ^(N^D×N^E), T ∈ ℂ^(N×N^U) and R ∈ ℂ^(N^U×N) as follows:
A_(k,l) = 1 if l = N^D − N^CP + k and 1 ≤ k ≤ N^CP, or if l = k − N^CP and N^CP + 1 ≤ k ≤ N^CP + N^D; A_(k,l) = 0 otherwise;
B_(k,l) = 1 if l = N^CP + k and 1 ≤ k ≤ N^D; B_(k,l) = 0 otherwise;
A′ = A·M^IDFT(N^D) and B′ = M^DFT(N^D)·B;
T_(k+(n−1)N^E, l+(n−1)N^D) = A′_(k,l), for 1 ≤ n ≤ N^S, 1 ≤ k ≤ N^E and 1 ≤ l ≤ N^D;
R_(k+(n−1)N^D, l+(n−1)N^E) = B′_(k,l), for 1 ≤ n ≤ N^S, 1 ≤ k ≤ N^D and 1 ≤ l ≤ N^E.
With these definitions, the equivalent channel G is given by:
G = ρ · R · H · H† · T,  (8)
where ρ is a scheme-specific normalising factor to satisfy the power constraint. Note that when N^S = 1 and N^CP = 0, B-DFT-SM-MRT is identical to DFT-SM-MRT.
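To make the per-block structure concrete, the following numpy sketch builds R and T for the simplest case N^CP = 0 and forms G as in (8); the power normalisation is again only an assumed convention, and with N^S = 1 the result coincides with the DFT-SM-MRT sketch above.

```python
import numpy as np

def block_dft_pair(n_s, n_d):
    """Block-diagonal DFT (decoder R) and IDFT (precoder core T) for N_S blocks of N_D streams, N_CP = 0."""
    F = np.fft.fft(np.eye(n_d)) / np.sqrt(n_d)
    n = n_s * n_d
    R = np.zeros((n, n), dtype=complex)
    T = np.zeros((n, n), dtype=complex)
    for b in range(n_s):
        sl = slice(b * n_d, (b + 1) * n_d)
        R[sl, sl] = F             # per-block DFT at the receiver
        T[sl, sl] = F.conj().T    # per-block IDFT at the transmitter
    return R, T

def bdft_sm_mrt_equivalent_channel(H, n_s):
    """Equivalent channel G = rho * R H H^H T of (8), sketched without cyclic prefix."""
    R, T = block_dft_pair(n_s, H.shape[0] // n_s)
    rho = 1.0 / np.linalg.norm(H.conj().T @ T)   # assumed transmit-power normalisation
    return rho * R @ H @ H.conj().T @ T
```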
C. SVD
Fig. 10. SVD spatial multiplexing scheme.
The number of antennas N for this scheme is: N = N^U. As illustrated in Fig. 10, U, V, Δ ∈ ℂ^(N^U×N^U) are matrices obtained from the singular value decomposition of H·H†, i.e., such that H·H† = U·Δ·V, with Δ being diagonal, and V·V† = U·U† = I^(N^U). Let ρ be a scheme-specific normalising factor to satisfy the power constraint. ρ·H†·V† is the precoder and U† is the decoder. With these notations, the equivalent MIMO channel G is:
G = ρ · Δ.  (9)
IV.
PERFORMANCE AND COMPLEXITY EVALUATION
The performance analysis and the complexity analysis are
performed for the MMIMMO links defined in Section II and
the schemes described in Section III. Section IV. A lists the
simulated scenarios. Section IV-B and Section IV-C describe
the spectral efficiency and complexity evaluation methods,
respectively. Finally, Section IV-D provides the results.
A. Simulation scenarios
Table II lists the simulated scenarios and their
corresponding parameters. In this table, and throughout this
paper, the notations * and ** indicate that B-DFT-SM-MRT
without CP and B-DFT-SM-MRT with CP are used,
respectively. The absence of these notations indicates that
DFT-SM-MRT is used.
Fig. 9. B-DFT-SM-MRT spatial multiplexing scheme.
For all scenarios, the channel and antenna models described in Section II are used to generate H.
The performance is also evaluated in a free space (FS) propagation scenario (i.e. a pure LOS scenario). Let δ_(n,q) be the distance between the receive antenna element n and the transmit antenna element q. For FS, H_(n,q) is given by:
H_(n,q) = (λ / (4πδ_(n,q))) · e^(−j2πδ_(n,q)/λ).
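For reference, here is a small numpy sketch of this free-space model; the array geometry in the example is a simplified stand-in for the outdoor links, not the exact coordinates used in the ray-tracing study.

```python
import numpy as np

def free_space_channel(tx_positions, rx_positions, lam):
    """Pure-LOS channel: H[n, q] = (lam / (4*pi*delta)) * exp(-2j*pi*delta/lam)."""
    H = np.zeros((len(rx_positions), len(tx_positions)), dtype=complex)
    for n, rx in enumerate(rx_positions):
        for q, tx in enumerate(tx_positions):
            delta = np.linalg.norm(np.asarray(rx, float) - np.asarray(tx, float))
            H[n, q] = (lam / (4 * np.pi * delta)) * np.exp(-2j * np.pi * delta / lam)
    return H

# Example: two parallel 64-element ULAs facing each other at D = 25 m, spaced d = 67.1 mm
lam = 299792458.0 / 26e9
d, N, D = 0.0671, 64, 25.0
tx = [(i * d, 0.0) for i in range(N)]
rx = [(i * d, D) for i in range(N)]
H_fs = free_space_channel(tx, rx, lam)
```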
TABLE II. SIMULATED SCENARIOS
N°
3
3*
4
4*
Link
N
𝑁𝑑 (m)
3
8
0.2144
4
16
1.792
𝑁S
𝑁D
1
2
1
2
8
4
16
8
𝑁 CP
0
0
̂
𝐷
D
𝑑𝑁
~2
~5
~10
~19
0
4.2944
64
1
4.2944
2
4.2944
6
7
128
1.152
144
1.296
256
7.3472
264
7.5768
1
4
1
2
1
2
1
2
1
16
16
1
8
64
16
64
32
64
32
64
32
128
8
8
256
32
32
0
0
0
0
0
0
1
0
1
~3
~12
~6
~12
~6
~12
~6
~12
~1
~13
~13
~3
~20
~20
B. Spectral efficiency evaluation methodology
The spectral efficiency 𝑠 is computed using Equation (5)
and the method given in Section III. We set 𝑠 =8 bits/s/Hz
(corresponding to 256-QAM and a coding rate of 1) and
𝑠 MIN = 1 bit/s/Hz (corresponding to QPSK and a coding rate
of 1/2). Note that, for SVD, the spectral efficiency is simply
given by 𝑠 = 𝑁 U 𝑠 MAX . For each scheme, the two following
metrics are computed:
the ratio 𝜙 SVD between the spectral efficiency of the
considered scheme and the spectral efficiency of SVD;
in Table IV. The larger these metrics, the better they are.
Indeed, the transmitter (respectively the receiver) of the
considered scheme, is 𝜇 TX (respectively 𝜇 RX ) less times
complex than the one of SVD.
TABLE III. SVD SCHEME COMPLEXITY SCALING LAWS (TX= TRANSMITTER,
RX= RECEIVER)
Computations
Tx
SVD
2.9312
Rx
Tx
DFT-SMMRT
5
𝐕,
𝐕 † 𝐱,
Complexity scaling law
3
𝐇† 𝐕† 𝐱
3
𝐌 IDFT(𝑁U) 𝐱
2
𝑂 (𝑁 U log 2 (𝑁 U )).
𝐌 DFT(𝑁U) 𝐳 DFT
Tx
𝐇 † 𝐓. 𝐱
𝑂 ((𝑁 U + 𝑁 S 𝑁 CP ) + 𝑁 U log 2 (
Rx
𝐑𝐳 BDFT
𝑂 (𝑁 U log 2 (
2
B-DFT-SM-MRT
We finally define 𝜇 𝑇𝑋 (and 𝜇 𝑅𝑋 , respectively) as the ratio of
the complexity scaling law of the transmitter (the receiver
respectively) of SVD, over the complexity scaling law of the
transmitter (the receiver respectively) of the considered scheme.
Using the expressions in Table III, we obtain the expressions of
𝜇 𝑇𝑋 and 𝜇 𝑅𝑋 , for the DFT-SM-MRT and the B-DFT-SM-MRT,
𝑁S
𝑁S
))
))
𝜇TX
𝜇RX
log2 (𝑁U )+𝑁U
log2 (𝑁U )
𝑁U +2𝑁U
𝑁U +2𝑁U
2
2
(𝑁U +𝑁S 𝑁CP ) +𝑁U log2 (
𝑁
U
𝑁U
𝑁U
TABLE IV. EXPRESSIONS OF 𝜇TX AND 𝜇RX
The closer to 1 these metrics are, the better the schemes are.
We define 𝐱 ∈ ℂ𝑁 ×1 as the vector of transmitted symbols.
U
U
U
S CP
SVD
𝐳
∈ ℂ𝑁 ×1 , 𝐳 DFT ∈ ℂ𝑁 ×1 and 𝐳 BDFT ∈ ℂ𝑁 +𝑁 𝑁 are the
vectors of symbols received at the receive antenna array for the
SVD, the DFT-SM-MRT and the B-DFT-SM-MRT,
respectively. Using these notations, we derive the complexities
scaling laws for the transmitter (taking into account the spatial
precoding only) and the receiver (taking into account the
spatial decoding) and report them in Table III. Our analysis
excludes the MRT block as it appears in all the compared
schemes (SVD included).
2
𝑂 (𝑁 U log 2 (𝑁 U ) + 𝑁 U ) .
𝐇 † 𝐌 IDFT(𝑁U) 𝐱
Scheme
DFT-SM-MRT
We recall that the complexities of the DFT of size 𝑁, of the
SVD of a matrix of size 𝑁 × 𝑁 and of the multiplication of two
matrices of sizes 𝑁 × 𝑀 and 𝑀 × 𝑃, scale with 𝑂(𝑁log 2 (𝑁)),
𝑂(𝑁 3 ) and 𝑂(𝑁𝑀𝑃) , respectively. As a consequence, 𝑁 S
DFTs of complexity that scales with 𝑂((𝑁 U /𝑁 S )log 2 (𝑁 U /
𝑁 S )) each, result in a total complexity that scales with
𝑂(𝑁 U log 2 (𝑁 U /𝑁 S )).
2
𝑂 (𝑁 U + 𝑁 U )
𝐔, 𝐔† 𝐳 SVD
the ratio 𝜙 FS between the spectral efficiency of the
considered scheme and the spectral efficiency of the same
scheme in a FS environment.
C. Complexity evaluation
We assume a fully digital architecture and we base our
complexity evaluation on [14].
2
𝑂 (𝑁 U + 𝑁 U + 𝑁 U )
Rx
B-DFT-SMMRT
5
5*
0
0*
1
1*
2
2*
6
6*
6**
7
7*
7**
U3
+2𝑁
U2
2
𝑁U
)
𝑁S
𝑁U
)
𝑁S
log2 (
𝑁
U2
+𝑁U
D. Simulation results
Table V provides the simulation results for all scenarios
listed in Section IV-A). DFT-SM-MRT and B-DFT-SM-MRT
both attain spectral efficiencies of several hundreds of bits/s/Hz
(that are close to the ones of SVD) with much less complex
transmitters and receivers. B-DFT-SM-MRT outperforms
DFT-SM-MRT with an even simpler receiver and a slightly
more complex transmitter. For all MMIMMO links, except for
Link N° 7, the performance is close to the FS performance.
This confirms that in the chosen existing environments, the
propagation is dominated by LOS and that simple spatial
multiplexing schemes (such as DFT-SM-MRT or B-DFT-SMMRT) can be used. However, for Link N° 7, the performance is
much lower than the FS one. The scatterers of scenario 7 are
visible on Fig. 4. One can observe that a strong dominating
scatterer is located on the metallic ceiling of the canopy. For
this particular scenario, we replace the basic antennas by
directional antennas (defined in section III-C) oriented along
the main LOS direction. We obtain an improved performance
that is reported in Table VI. In particular, for scenario 7*,
around 1.6 kbits/s/Hz of spectral efficiency is attained,
corresponding to 80% of SVD performance with a transmitter
and a receiver that are 200 and 10,000 times less complex,
respectively. The CP insertion slightly improves the
performance in scenarios 6** and in scenario 7** (with
directional antennas). An extensive study of the CP insertion is
for further study.
TABLE V. RESULTS († INDICATES DIRECTIONAL ANTENNAS ARE USED)
N°
SE (bits/s/Hz)
𝜙 FS (%)
𝜙 SVD (%)
𝜇 TX
𝜇 RX
0
0*
1
1B*
2B
2B*
3B
3B*
4C
4C*
5D
5D*
6D
6D*
6D**
7
7*
7**
246
258
296
342
194
277
47
52
81
83
349
431
290
318
429
113/1174†
281/1651†
266/1681†
65
62
75
81
64
75
100
100
63
65
91
97
97
99
88
9/94†
16/93†
15/90†
48
50
58
67
38
54
74
81
63
65
68
84
57
31
42
6/57†
14/81†
13/82†
60
61
60
61
60
61
693
832
693
832
693
832
NA
NA
V.
NA
NA
60
62
52
123
127
101
253
238
693
1040
2359
5504
5504
8224
13158
13158
CONCLUSION
In this paper, we showed that there is an opportunity for
future 5G network operators to exploit the existing urban
architecture to transport, on the wireless media, huge data rates
with gigantic spectral efficiencies. A new precoding/decoding
scheme is proposed, called “block discrete Fourier transform
based spatial multiplexing with maximum ratio transmission”,
B-DFT-SM-MRT, which has a low complexity compared to
singular value decomposition. The performance of this scheme
at 26 GHz is assessed in existing environments that are
accurately modeled with ray-tracing tools. Antennas are accurately modeled as well. In the best scenario, 1.6 kbits/s/Hz is
attained, corresponding to 80% of SVD performance, with a
transmitter and a receiver that are 200 and 10000 times less
complex, respectively. Further studies will be conducted with
measured MMIMMO channel data.
ACKNOWLEDGMENTS
This work has been partially funded by the 5G PPP project
mmMAGIC [15] under grant ICT-671650. We warmly thank
Mr Antonio Clemente and Mrs Marie-Hélène Hamon for their
support on this activity.
REFERENCES
[1] T. S. Rappaport et al., “Millimeter wave mobile communications for 5G cellular: It will work!,” IEEE Access, vol. 1, pp. 335-349, 2013.
[2] E. Torkildson et al., “Millimeter-wave MIMO: Wireless links at optical speeds,” in Proc. of 44th Allerton Conf. on Communication, Control, and Computing, Sept. 2006.
[3] Z. Pi and F. Khan, “A millimeter-wave massive MIMO system for next generation mobile broadband,” in Proc. 2012 ASILOMAR, pp. 693-698.
[4] X. Hailin, O. Shan, N. Zaiping, and Z. Feng, “Capacity analysis of high-rank line-of-sight MIMO channels,” J. of Systems Engineering and Electronics, vol. 20, no. 4, pp. 706-710, Aug. 2009.
[5] F. Bohagen et al., “Optimal design of uniform planar antenna arrays for strong line-of-sight MIMO channels,” in Proc. IEEE SPAWC '06.
[6] P. Baracca et al., “Final performance results and consolidated view on the most promising multi-node/multi-antenna transmission technologies,” available at https://www.metis2020.com/wp-content/uploads/deliverables/METIS_D3.3_v1.pdf.
[7] D. T. Phan-Huy et al., “DFT based spatial multiplexing and maximum ratio transmission for mm-wave large MIMO,” in Proc. 2014 IEEE WCNC, pp. 913-918.
[8] N. F. Abdullah et al., “Channel parameters and throughput predictions for mmWave and LTE-A networks in urban environments,” in Proc. 2015 IEEE 81st VTC, 2015.
[9] E. K. Tameh and A. R. Nix, “A 3-D integrated macro and microcellular propagation model, based on the use of photogrammetric terrain and building data,” in 47th IEEE VTC, vol. 3, pp. 1957-1961, 1997.
[10] R. Ford, S. Rangan, E. Mellios, D. Kong, and A. R. Nix, “Markov channel-based performance analysis for millimeter wave mobile networks,” in Proc. IEEE WCNC 2017.
[11] J. Järveläinen, Measurement-based millimeter-wave radio propagation simulations and modeling, Doctoral Dissertation, Aalto University, 2016. https://aaltodoc.aalto.fi/handle/123456789/21931.
[12] J. Vehmas et al., “Millimeter-wave channel characterization at Helsinki Airport in 15, 28, and 60 GHz bands,” in Proc. IEEE 84th VTC, Sep. 2016.
[13] B. Muquet et al., “Cyclic prefixing or zero padding for wireless multicarrier transmissions?,” IEEE Trans. on Comm., vol. 50, no. 12, pp. 2136-2148, Dec. 2002.
[14] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., Johns Hopkins University Press, 1996.
[15] mm-Wave based Mobile Radio Access Network for 5G Integrated Communications (mmMAGIC) project, https://5g-mmmagic.eu.
| 7 |
Intersection Logic in sequent calculus style
Simona Ronchi Della Rocca
Alexis Saurin
Dipartimento di Informatica
Universita di Torino
IT-10149 Torino, Italy
[email protected]
Laboratoire CNRS PPS & INRIA pi.r2
Paris, France
[email protected]
Yiorgos Stavrinos
Anastasia Veneti
Graduate Program in Logic and Algorithms (MPLA)
Department of Mathematics
University of Athens
GR-15784 Zografou, Greece
[email protected]
[email protected]
1
Introduction
The intersection type assignment system IT (shown in Figure 1) is a deductive system that assigns formulae (built from the intuitionistic implication → and the intersection ∩) as types to the untyped λ -calculus.
It has been defined by Coppo and Dezani [6], in order to increase the typability power of the simple type
assignment system. In fact, IT has a strong typability power, since it gives types to all the strongly
normalizing terms [17, 11]. Intersection types, supported by a universal type and a suitable pre-order
relation, have been used for describing the λ-calculus semantics in different domains (e.g. Scott domains
[3, 7], coherence spaces [14]), allowing to characterize interesting semantical notions like solvability and
normalizability, both in call-by-name and call-by-value settings [18].
Differently from other well known type assignments for λ -calculus, like for example the simple or
the second order one, IT has not been designed as decoration of a logical system, following the so called
Curry-Howard isomorphism, relating in particular types with logical formulae. In fact, despite the fact
that it looks like a decoration of the implicative and additive fragment of the intuitionistic logic, this is
not true. Indeed, while the connective → has the same behaviour of the intuitionistic implication, the
intersection ∩ does not correspond to the intuitionistic conjunction, as Hindley pointed out in [10], since
the rule introducing it has the meta-theoretical condition that the two proofs of the premises be decorated
by the same term, and this constraint is no longer explicit when terms are erased.
It is natural to ask if there is a logic with a natural correspondence with intersection types. Some
attempts have been made, which will be briefly recalled in Section 7. Our work follows the line started
from [19] and [15, 16], where the authors defined Intersection Logic IL and Intersection Synchronous
Logic ISL, respectively. ISL is a deductive system in natural deduction style proving molecules which
are finite multisets of atoms which, in turn, are pairs of a context and a formula. The notion of molecule
allows to distinguish between two sets of connectives, global and local, depending on whether they
are introduced and eliminated in parallel on all atoms of a molecule or not. Note that global and local
connectives are called asynchronous and synchronous, respectively, in [15, 16]. In particular, two kinds
of conjunction are present: the global one corresponding to intuitionistic conjunction and the local one
corresponding to intersection. Then, the intersection type assignment system is obtained by decorating
ISL with λ -terms in the Curry-Howard style. Roughly speaking, a molecule can be mapped into a set of
Γ, x : σ ⊢IT x : σ
(A)
Γ ⊢IT M : σ Γ ⊢IT M : τ
(∩I)
Γ ⊢IT M : σ ∩ τ
Γ ⊢IT M : σL ∩ σR
(∩Ek )k = L, R
Γ ⊢IT M : σk
Γ, x : σ ⊢IT M : τ
(→ I)
Γ ⊢IT λ x.M : σ → τ
Γ ⊢IT M : σ → τ Γ ⊢IT N : σ
(→ E)
Γ ⊢IT MN : τ
Figure 1: The intersection type assignment system IT.
intuitionistic proofs which are isomorphic, in the sense that some rules are applied in parallel to all the
elements in the set. The isomorphism is obtained by collapsing the two conjunctions.
ISL puts into evidence the fact that the usual intuitionistic conjunction can be split into two different connectives, with different behaviours, and the intersection corresponds to one of them. So we think that
ISL is a logic that can be interesting in itself, and so we would like to explore it from a proof-theoretical
point of view. To this aim, we present here a sequent calculus formulation of it, that we call ISC, and
we prove that it enjoys the property of cut elimination. The proof is not trivial, since the play between
global and local connectives is complicated and new structural rules are needed.
Stavrinos and Veneti [21, 23] further enhanced IL and ISL with a new connective: the logical counterpart of the union operator on types [2]. It turns out that union is a different kind of disjunction. In
the extensions of IL and ISL presented in [21], though, the status of union is not so clear; in particular,
union is not an explicitly local version of intuitionistic disjunction, since its elimination rule involves
a global side-effect. So, the pair union-disjunction does not enjoy the nice characterization of the pair
intersection-conjunction, which are the local and global versions, respectively, of the same connective.
Moving to a sequent calculus exposition [23] allows for a very nice and symmetric presentation of both
intersection and union, which are now the local versions of intuitionistic conjunction and disjunction,
respectively. Our next step will be to do a proof theoretical investigation of this new calculus.
The paper is organized as follows. In Section 2 the system ISL is recalled, and its sequent calculus
version ISC is presented. In Section 3 the relation between ISC and Intuitionistic Sequent Calculus LJ
is shown. Section 4 contains a brief survey of the cut-elimination procedure in LJ. In Section 5 the cut
elimination steps of ISC are defined, and in Section 6 the cut-elimination algorithm is given. Moreover
Section 7 contains a brief survey of the other approaches to the problem of a logical foundation of
intersection types.
2
The system ISC
In this section the system ISL presented in [15, 16] is recalled. Moreover, an equivalent sequent calculus
version is given.
Definition 2.1 (ISL)
i) Formulas are generated by the grammar
σ ::= a | σ → σ | σ &σ | σ ∩ σ
where a belongs to a countable set of propositional variables. Formulas are ranged over by small
greek letters.
[(σi ; σi ) | i ∈ I]
(Ax)
[(Γi1 , τi , σi , Γi2 ; ρi ) | i ∈ I]
[(Γi ; τi ) | i ∈ I]
(W)
[(Γi , σi ; τi ) | i ∈ I]
[(Γi , σi ; τi ) | i ∈ I]
(→I)
[(Γi ; σi → τi ) | i ∈ I]
[(Γi1 , σi , τi , Γi2 ; ρi ) | i ∈ I]
(X)
[(Γi ; σi → τi ) | i ∈ I] [(Γi ; σi ) | i ∈ I]
(→E)
[(Γi ; τi ) | i ∈ I]
[(Γi ; σi ) | i ∈ I] [(Γi ; τi ) | i ∈ I]
(&I)
[(Γi ; σi &τi ) | i ∈ I]
M ∪ [(Γ; σ ), (Γ; τ )]
(∩I)
M ∪ [(Γ; σ ∩ τ )]
M ∪ N (P)
M
[(Γi ; σiL &σiR ) | i ∈ I]
[(Γi ; σik ) | i ∈ I]
(&Ek ), k = L, R
M ∪ [(Γ; σL ∩ σR )]
(∩Ek ), k = L, R
M ∪ [(Γ; σk )]
Figure 2: The system ISL
ii) Contexts are finite sequences of formulas, ranged over by Γ, ∆, E, Z. The cardinality |Γ| of a
context Γ is the number of formulas in Γ.
iii) Atoms (Γ; σ ) are pairs of a context and a formula, ranged over by A , B.
iv) Molecules [(α1i , . . . αmi ; αi ) | 1 6 i 6 n] or [(α1i , . . . αmi ; αi ) | i ∈ I] or [(α1i , . . . αmi ; αi )]i are finite
multisets of atoms such that all atoms have the same context cardinality. Molecules are ranged
over by M , N .
v) ISL is a logical system proving molecules in natural deduction style. Its rules are displayed in
Figure 2. We write ⊢ISL M when there is a deduction proving the molecule M , and π : M when
we want to denote a particular derivation π proving M .
Some comments are in order. Rule (P) is derivable and it is useful for the normalization procedure.
The connectives →, & are global, in the sense that they are both introduced and eliminated in all the atoms
of a molecule by a unique rule, while the connective ∩ is local, since it is introduced and eliminated in
just one atom of a molecule.
The following theorem recalls the correspondence between ISL and IT, proved in [16]. In the text of
the theorem, we use the notation (Γ)s for denoting a decoration of the context Γ through a sequence s
of different variables. More precisely, if Γ = α1 , ..., αn and s = x1 , ..., xn , (Γ)s is the set of assignments
{x1 : α1 , ..., xn : αn }.
Theorem 2.2 (ISL and IT)
i) Let ⊢ISL [(Γ1 ; α1 ), . . . , (Γm ; αm )], where αi and all formulae in Γi do
not contain the connective &. Then for every s, there is M such that (Γi )s ⊢IT M : αi .
ii) If (Γi )x1 ,...,xn ⊢IT M : αi (i ∈ I), then ⊢ISL [(Γi ; αi ) | i ∈ I].
Now we will define an equivalent formulation in sequent calculus style.
Definition 2.3 ISC is a logical system proving molecules in sequent calculus style. Its rules are displayed in Figure 3. We write ⊢ISC M when there is a derivation proving molecule M and π : M when
we want to denote a particular derivation π proving M .
Notice that the connectives →, & are dealt with rules having a multiplicative behaviour of contexts,
but the structural rules ensure an equivalent additive presentation. The connective ∩, though, needs to
Identity Rules: both global
[(αi ; αi )|i ∈ I]
(Ax)
[(Γi ; αi )|i ∈ I] [(∆i , αi ; βi )|i ∈ I]
(cut)
[(Γi , ∆i ; βi )|i ∈ I]
Structural Rules: (W), (X), (C) are global and (Fus), (P) are local
[(Γi ; βi )|i ∈ I]
(W)
[(Γi , αi ; βi )|i ∈ I]
[(Γi , βi , αi , ∆i ; γi )|i ∈ I]
(X)
[(Γi , αi , βi , ∆i ; γi )|i ∈ I]
[(Γ; β )] ∪ M
(Fus)
[(Γ; β ), (Γ; β )] ∪ M
[(Γi , αi , αi ; βi )|i ∈ I]
(C)
[(Γi , αi ; βi )|i ∈ I]
M ∪ N (P)
M
Logical Rules: →, & are global and ∩ is local
[(Γi ; αi )|i ∈ I] [(∆i , βi ; γi )|i ∈ I]
(→L)
[(Γi , ∆i , αi → βi ; γi )|i ∈ I]
[(Γi , αiL , αiR ; βi )|i ∈ I]
[(Γi , αiL &αiR ; βi )|i ∈ I]
(& L)
[(Γi , αi ; βi )|i ∈ I]
(→R)
[(Γi ; αi → βi )|i ∈ I]
[(Γi ; αi )|i ∈ I] [(∆i ; βi )|i ∈ I]
(& R)
[(Γi , ∆i ; αi &βi )|i ∈ I]
[(Γ, αk ; β )] ∪ M
(∩ Lk ), k = L, R
[(Γ, αL ∩ αR ; β )] ∪ M
[(Γ; α ), (Γ; β )] ∪ M
(∩ R)
[(Γ; α ∩ β )] ∪ M
Figure 3: The system ISC
have an additive presentation. By abuse of notation, we will call global the rules dealing with → and
& and local the rules dealing with ∩. The rules (P) and (Fus) are derivable; they will be useful in the
cut-elimination procedure.
Lemma 2.4 Let π :⊢ISC M . There is a function clean, such that clean(π ) is a derivation proving M
which does not contain applications of the rules (P) and (Fus).
Proof It is easy to check that both (P) and (Fus) commute upwards with any rule preceding them and
that they disappear when following an axiom rule. For instance:
D
[(Γ; α ), (Γ; β )] ∪ M
(∩ R)
[(Γ; α ∩ β )] ∪ M
(Fus)
[(Γ; α ∩ β ), (Γ; α ∩ β )] ∪ M
−→
D
[(Γ; α ), (Γ; β )] ∪ M
(Fus)
[(Γ; α ), (Γ; β ), (Γ; β )] ∪ M
(Fus)
[(Γ; α ), (Γ; β ), (Γ; α ), (Γ; β )] ∪ M
(∩ R)
[(Γ; α ), (Γ; β ), (Γ; α ∩ β )] ∪ M
(∩ R)
[(Γ; α ∩ β ), (Γ; α ∩ β )] ∪ M
(Ax)
[(β ; β )] ∪ [(αi ; αi )|i ∈ I]
[(β ; β ), (β ; β )] ∪ [(αi ; αi )|i ∈ I]
(Fus)
−→
[(β ; β ), (β ; β )] ∪ [(αi ; αi )|i ∈ I]
(Ax)
and
[(αi ; αi )|i ∈ I] ∪ [(αi ; αi )|i ∈ J]
[(αi ; αi )|i ∈ I]
(Ax)
(P)
−→
[(αi ; αi )|i ∈ I]
(Ax)
We will call clean a derivation without applications of rules (P) and (Fus).
Theorem 2.5 ⊢ISL M if and only if ⊢ISC M
Proof
(only if). By induction on the natural deduction style derivation. Rules (Ax), (W), (X) are the same in
both styles. Rule (P) is derivable in both styles (see previous lemma and Theorem 11 in [15]). In case
of the global rules, the proof is quite similar to the standard proof of the equivalence between natural
deduction and sequent calculus for intuitionistic case, just putting similar cases in parallel. So we will
show just the case of implication as example, and then the case for local conjunction elimination (the
introduction is the same in both the systems).
Case (→E):
(Ax)
[(Γi ; αi )]i
[(βi ; βi )]i
(→L)
[(Γi , αi → βi ; βi )]i
[(Γi ; αi → βi )]i
(cut)
[(Γi , Γi ; βi )]i
(XC)
[(Γi ; βi )]i
where the dashed line named (XC) denotes a sequence of applications of rules (X) and (C).
Case (∩Ek ):
(Ax)
[(γi ; γi )]i , (αk ; αk )
[(γi ; γi )]i , (αL ∩ αR ; αk )
[(Γi ; γi )]i , (Γ ; αL ∩ αR )
[(Γi ; γi )]i , (Γ ; αk )
(∩Lk )
(cut)
(if).
By induction on the sequent calculus style derivation.
We will show just the case of implication, local conjunction elimination and cut.
Case (→L): Let Zi = Γi , ∆i , αi → βi . Then:
(Ax)
[(αi → βi ; αi → βi )]i
[(Γi ; αi )]i
[(∆i , βi ; γi )]i
(WX)
(WX)
[(Zi , βi ; γi )]i
[(Zi ; αi → βi )]i
[(Zi ; αi )]i
(→I)
[(Zi ; βi → γi )]i
[(Zi ; βi )]i
(→E)
;
[(Γi , ∆i , αi → βi γi )]i
{z
}
|
(W)
(→E)
Zi
Case (∩Lk ): Let Γi = Γ′i , θi and Ei = Γ′i , θi , θi = Γi , θi . Then:
(Ax)
[(θi ; θi )]i , (αL ∩ αR ; αL ∩ αR )
[(Γi ; γi )]i , (Γ, αk ; β )
(WX)
(WX)
[(Ei ; γi )]i , (Γ, αL ∩ αR , αk ; β )
[(Γi ; θi )]i , (Γ, αL ∩ αR ; αL ∩ αR )
(→I)
(∩Ek )
[(Γi ; θi → γi )]i , (Γ, αL ∩ αR ; αk → β )
[(Γi ; θi )]i , (Γ, αL ∩ αR ; αk )
(→E)
[(Γi ; γi )]i , (Γ, αL ∩ αR ; β )
Case (cut):
[(∆i , αi ; βi )]i
(WX)
[(Γi , ∆i , αi ; βi )]i
[(Γi ; αi )]i
(→I)
[(Γi , ∆i ; αi → βi )]i
[(Γi , ∆i ; αi )]i
[(Γi , ∆i ; βi )]i
(W)
(→E)
3
ISC and LJ
A derivation in ISC corresponds to a set of derivations in Intuitionistic Sequent Calculus (LJ), where the
two conjunctions collapse into ∧.
Definition 3.1 Given a ISC derivation π , we define the set π̃ = (πi )i∈I of LJ derivations by induction on
the structure of π as follows:
• if π : [(αi ; αi ) | i ∈ I]
(Ax)
then π̃ = (πi )i∈I with πi : αi ⊢ αi
π 1 : [(Γi ; αi ) | i ∈ I] π 2 : [(∆i , αi ; βi ) | i ∈ I]
π : [(Γi , ∆i ; βi ) | i ∈ I]
• if
and π˜2 = (πi2 : ∆i , αi ⊢ βi )i∈I , then π̃ = (πi )i∈I
π ′ : [(Γi ; βi ) | i ∈ I]
• if π : [(Γi , αi ; βi ) | i ∈ I]
(W)
(cut)
(Ax)
with π˜1 = (πi1 : Γi ⊢ αi )i∈I
πi1 : Γi ⊢ αi πi2 : ∆i , αi ⊢ βi
with
πi : Γ i , ∆ i ⊢ βi
with π˜′ = (πi′ : Γi ⊢ βi )i∈I , then π̃ = (πi )i∈I
• π̃ is defined in the same way as above for (X), (C) and (→R).
(cut)
πi′ : Γi ⊢ βi
with πi : Γi , αi ⊢ βi
(W)
• if π : M is obtained from π ′ : M ′ by a rule (R), where (R) = (Fus), (P), then π̃ = clean(π̃ ′ )
π 1 : [(Γi ; αi ) | i ∈ I] π 2 : [(∆i , βi ; γi ) | i ∈ I]
• if
π : [(Γi , ∆i , αi → βi ; γi ) | i ∈ I]
and π˜2 = (πi2 : ∆i , βi ⊢ γi )i∈I , then π̃ = (πi )i∈I
π ′ : [(Γi , αi0 , αi1 ; βi ) | i ∈ I]
• if π : [(Γi , αi0 &αi1 ; βi ) | i ∈ I]
πi′ : Γi , αi0 , αi1 ⊢ βi
with πi : Γi , αi0 ∧ αi1 ⊢ βi
(& L)
and π˜2 =
(→L)
with π˜′ = (πi′ : Γi , αi0 , αi1 ⊢ βi )i∈I , then π̃ = (πi )i∈I
(∧L)
: ∆i ⊢ βi )i∈I , then π̃ = (πi )i∈I
(&R)
with π˜1 = (πi1 : Γi ⊢ αi )i∈I
πi1 : Γi ⊢ αi πi2 : ∆i ⊢ βi
with
πi : Γi , ∆i ⊢ αi ∧ βi
π ′ : [(Γ, α j ; β )] ∪ [(Γi ; γi ) | i ∈ I]
• if π : [(Γ, α0 ∩ α1 ; β )] ∪ [(Γi ; γi ) | i ∈ I]
then π̃ = {π⋆ } ∪ (πi′ )i∈I
with π˜1 = (πi1 : Γi ⊢ αi )i∈I
πi1 : Γi ⊢ αi πi2 : ∆i , βi ⊢ γi
πi : Γi , ∆i , αi → βi ⊢ γi
with
π 1 : [(Γi ; αi ) | i ∈ I] π 2 : [(∆i ; βi ) | i ∈ I]
π : [(Γi , ∆i ; αi &βi ) | i ∈ I]
• if
(πi2
(→L)
(∩L j )
(∧R)
with π̃ ′ = {π⋆′ : Γ, α j ⊢ β } ∪ (πi′ : Γi ⊢ γi )i∈I ,
π⋆′ : Γ, α j ⊢ β
(W), (X)
Γ, α0 , α1 ⊢ β
(∧L)
with π⋆ : Γ, α0 ∧ α1 ⊢ β
π ′ : [(Γ; α ), (Γ; β )] ∪ [(Γi ; γi ) | i ∈ I]
• if π : [(Γ; α ∩ β )] ∪ [(Γi ; γi ) | i ∈ I]
πα′ : Γ ⊢ α
then π̃ = {π⋆ } ∪ (πi′ )i∈I with
(∩R)
with π˜′ = {πα′ : Γ ⊢ α }∪{πβ′ : Γ ⊢ β }∪(πi′ : Γi ⊢ γi )i∈I ,
πβ′ : Γ ⊢ β
Γ, Γ ⊢ α ∧ β
π⋆ : Γ ⊢ α ∧ β
(∧R)
(X), (C)
Let us stress the fact that, by Definition 3.1, each πi in π̃ is a derivation in LJ. The translation from
ISC to LJ is almost standard, but for the rules (Fus), (P), where the result of Lemma 2.4 has been used.
4
Cut-elimination in LJ: a short survey
Analogously to LJ, ISC enjoys cut-elimination, i.e., the cut rule is admissible. In order to give a proof of
this fact the most natural idea is to mimic the cut elimination procedure of LJ, in a parallel way. We will
use Definition 3.1, which associates to every derivation in ISC a set of derivations in LJ. There are different versions of such a proof in the literature; we suggest the versions in [9, 22, 13]. Let us briefly recall it.
The cut-elimination algorithm in LJ is based on a definition of some elementary cut-elimination
steps, each one depending on the premises of the cut-rule, and on a certain order of applications of such
steps. If the elementary steps are applied in the correct order, then the procedure eventually stops and the
result is a cut-free proof of the same sequent.
The principal problem for designing this algorithm is in defining the elementary step when the contraction rule is involved. In fact the natural way of defining the contraction step is the following:
a cut of Γ ⊢ α against ∆, α ⊢ β , where the right premise is obtained from ∆, α , α ⊢ β by (C), with conclusion Γ, ∆ ⊢ β , is transformed as follows: first Γ ⊢ α is cut against ∆, α , α ⊢ β , yielding Γ, ∆, α ⊢ β ; then Γ ⊢ α is cut against this sequent, yielding Γ, Γ, ∆ ⊢ β ; finally Γ, ∆ ⊢ β is recovered by (CX),
where (CX) represents a suitable number of contraction and exchange rules. This step generates two cuts,
the second of which has the same "size" and the same (or possibly greater) "height" as
the original one, according to any reasonable notions of size and height. A standard way to solve this
problem is to strengthen the cut rule, allowing it to eliminate more than one occurrence of a formula
at the same time, in the following way:
the rule (multicut) infers Γ, ∆ ⊢ β from the premises Γ ⊢ α and ∆, α , ..., α ⊢ β , where α occurs m ≥ 1 times in the right premise.
It is easy to check that replacing the cut-rule with the multicut-rule produces an equivalent system.
Abusing the naming, in what follows we will use the name “cut” for multicut and we will call “LJ” the
system obtained by replacing cut by multicut.
Assuming, for simplicity, that Γ and ∆ do not contain any occurrence of α , the contraction step now
becomes:
the cut of Γ ⊢ α against ∆, α ⊢ β , where the right premise is obtained from ∆, α , α ⊢ β by (C), is replaced by a single cut of Γ ⊢ α against ∆, α , α ⊢ β , both with conclusion Γ, ∆ ⊢ β ,
and the new cut-rule which has been generated has been moved up in the derivation with respect to the
original one. The cut rule can be eliminated and, in order to design the algorithm doing it, the following
measures are needed.
Definition 4.1
i) The size of a formula σ (denoted by |σ |) is the number of symbols in it;
ii) The height of a derivation is the number of rule applications in its derivation tree. Let h(π ) denote
the height of π ;
iii) The measure of a cut π , denoted by m(π ), is a pair (s, h), where s is the size of the formula
eliminated by it and h is the sum of the heights of its premises.
We consider measures ordered according to a restriction of lexicographic order, namely:
(s, h) < (s′ , h′ ) if either s < s′ and h ≤ h′ or s ≤ s′ and h < h′ .
Then the following lemma holds:
Lemma 4.2 Let π : Γ ⊢ σ be a derivation in LJ, with some cut rule applications. Let π ′ : Γ ⊢ σ be
the derivation obtained from π by applying an elementary cut-elimination step to a cut closest to the
axioms, say c. Then π ′ does not contain c anymore, and, if it contains some new cuts with respect to π ,
their measures are less than the measure of c. Moreover the measures of the cuts different from c do not
increase.
The cut elimination property can now be stated.
Theorem 4.3 Let π be a derivation in LJ. Then, there is a derivation π ′ proving the same judgement
which does not use the cut-rule.
Proof From our definition of measure, a topmost cut can be eliminated in a finite number of steps.
Since the number of cuts is finite, the property follows.
5
Cut-elimination in ISC: the elementary cut elimination steps
First of all we strengthen the cut-rule of ISC analogously to LJ, in the following way:
the generalized cut rule infers [(Γi , ∆i ; βi )|i ∈ I] from [(Γi ; αi )|i ∈ I] and [(∆i , αi , ..., αi ; βi )|i ∈ I], where each αi occurs m ≥ 1 times in the right premise.
It is easy to check that we obtain an equivalent system, so we will abuse notation and call the new rule cut and the new system ISC.
Then, we can divide the most significant occurrences of cut in ISC in two cases: the global and local
ones, depending on whether global or local connectives are introduced on cut formulas in the premises.
In the case of global cut-rules, the elementary cut elimination steps act as for LJ, but in parallel on
all the atoms of the involved molecules. As an example, let us show the case of the (&R), (&L) cut:
in the derivation to be transformed, [(Γi ; αi )]i and [(∆i ; βi )]i give [(Γi , ∆i ; αi &βi )]i by (&R), while [(Zi , αi , βi ; γi )]i gives [(Zi , αi &βi ; γi )]i by (&L); the cut of these two derivations concludes [(Γi , ∆i , Zi ; γi )]i . It is transformed into the derivation in which [(∆i ; βi )]i is first cut against [(Zi , αi , βi ; γi )]i , giving [(∆i , Zi , αi ; γi )]i , and then [(Γi ; αi )]i is cut against the result, giving [(Γi , ∆i , Zi ; γi )]i .
The definition of the local cut-elimination steps in ISC poses some problems. Let us consider the
following example. Let π1 and π2 be respectively the following derivations: π1 is obtained from the axioms [(α ; α ), (α ; α ), (µ ; µ )] and [(α ; α ), (α ; α ), (µ ∩ ν ; µ ∩ ν )] by (→L), which gives [(α , α → α ; α ), (α , α → α ; α ), (µ , µ → (µ ∩ ν ); µ ∩ ν )], followed by (∩R), which gives π1 : [(α , α → α ; α ∩ α ), (µ , µ → (µ ∩ ν ); µ ∩ ν )]; π2 is obtained from the axiom [(α ∩ α ; α ∩ α ), (µ ; µ )] by (∩L), which gives π2 : [(α ∩ α ; α ∩ α ), (µ ∩ ν ; µ )]. Let π be the cut of π1 against π2 , namely π : [(α , α → α ; α ∩ α ), (µ , µ → (µ ∩ ν ); µ )].
Notice that the right and left introductions of the connective ∩ create an asymmetry, because they are
applied to different atoms, so it looks quite impossible to perform a cut-elimination step. In fact, the
derivation π corresponds (modulo some structural rules) to the following derivation in ISL:
from the axioms [(α → α ; α → α ), (α → α ; α → α ), (µ → (µ ∩ ν ); µ → (µ ∩ ν ))] and [(α ; α ), (α ; α ), (µ ; µ )], rule (→E) gives [(α , α → α ; α ), (α , α → α ; α ), (µ , µ → (µ ∩ ν ); µ ∩ ν )]; then (∩E) gives [(α , α → α ; α ), (α , α → α ; α ), (µ , µ → (µ ∩ ν ); µ )], and finally (∩I) gives [(α , α → α ; α ∩ α ), (µ , µ → (µ ∩ ν ); µ )].
This derivation is in normal form according to the definition given in [15]; roughly speaking, a derivation
in ISL is in normal form if it does not contain an introduction of a connective immediately followed by
an elimination of the same connective, modulo some structural rules.
Since the problem is due to the presence of the local connective, in particular when the two premises
of a cut are the right and left introduction of ∩ on different atoms, the solution is to restrict the use of this
connective, forbidding it in principal position in the formulas introduced by axioms and weakening.
Definition 5.1
i) A formula is canonical if its principal connective is not ∩.
ii) An ISC derivation is canonical if all the formulas introduced by axioms and (W) are canonical.
Lemma 5.2 Let π :⊢ISC M . Then, there is a canonical derivation π ′ :⊢ISC M .
Proof
By induction on π .
The following example illustrates the base-case.
Example 5.3 Consider the derivation:
[(σ ∩ τ ; σ ∩ τ )]
(Ax)
The corresponding canonical derivation can be obtained by introducing the molecule [(σ ; σ ), (τ ; τ )] by
an axiom and then by applying a sequence of two (∩L) rules followed by a (∩R) rule.
From now on, we will only consider canonical derivations to define the cut-elimination steps. Moreover we assume that the derivations are also clean. As said before, the global rules are completely
standard, since they act like in LJ, but in parallel on all the atoms of the involved molecules. Thus, we
will expose only some characteristic cases dealing with ∩. Note the use of the structural rules (P) and
(Fus).
Definition 5.4 A cut-elimination step in ISC is defined by cases. We assume that there are no applications of
rules (P) and (Fus), and we show the most interesting structural and local cases.
Commutation steps:
• case of (∩R)
in the derivation to be transformed, π : [(Γ0 ; γ0 )] ∪ [(Γi ; γi )]1≤i≤m is cut against π ′′ : [(∆0 , γ0 ; α0 ∩ β0 )] ∪ [(∆i , γi ; δi )]1≤i≤m , where π ′′ is obtained from π ′ : [(∆0 , γ0 ; α0 ), (∆0 , γ0 ; β0 )] ∪ [(∆i , γi ; δi )]1≤i≤m by (∩R); the conclusion is [(Γ0 , ∆0 ; α0 ∩ β0 )] ∪ [(Γi , ∆i ; δi )]1≤i≤m . It is transformed into the derivation in which (Fus) is first applied to π , giving π ′′′ : [(Γ0 ; γ0 ), (Γ0 ; γ0 )] ∪ [(Γi ; γi )]1≤i≤m ; then π ′′′ is cut against π ′ , giving [(Γ0 , ∆0 ; α0 ), (Γ0 , ∆0 ; β0 )] ∪ [(Γi , ∆i ; δi )]1≤i≤m ; finally (∩R) gives [(Γ0 , ∆0 ; α0 ∩ β0 )] ∪ [(Γi , ∆i ; δi )]1≤i≤m .
Conversion steps:
• case of symmetric ∩ rule
in the derivation to be transformed, π : [(Γ0 ; α0 ), (Γ0 ; β0 )] ∪ [(Γi ; αi )]1≤i≤m gives [(Γ0 ; α0 ∩ β0 )] ∪ [(Γi ; αi )]1≤i≤m by (∩R), and π ′ : [(∆0 , α0 ; γ0 )] ∪ [(∆i , αi ; γi )]1≤i≤m gives [(∆0 , α0 ∩ β0 ; γ0 )] ∪ [(∆i , αi ; γi )]1≤i≤m by (∩L); their cut concludes [(Γi , ∆i ; γi )]i . It is transformed into the derivation in which (P) is applied to π , giving [(Γ0 ; α0 )] ∪ [(Γi ; αi )]1≤i≤m , which is then cut against π ′ , giving [(Γi , ∆i ; γi )]i .
• case of asymmetric ∩ rule
in the derivation to be transformed, π : [(Γi ; σi )]i ∪ [(Γ; α ∩ β ), (Γ′ ; µ ∩ ν )] is obtained from π0 : [(Γi ; σi )]i ∪ [(Γ; α ), (Γ; β ), (Γ′ ; µ ∩ ν )] by (∩R), and π ′ : [(∆i , σi ; τi )]i ∪ [(∆, α ∩ β ; γ ), (∆′ , µ ∩ ν ; ρ )] is obtained from π0′ : [(∆i , σi ; τi )]i ∪ [(∆, α ∩ β ; γ ), (∆′ , µ ; ρ )] by (∩L); their cut concludes [(Γi , ∆i ; τi )]i ∪ [(Γ, ∆; γ ), (Γ′ , ∆′ ; ρ )].
Since the derivation is canonical, the ∩ in the atom (Γ′ ; µ ∩ ν ) has been introduced by a (∩R) rule, turning some molecule [ . . . , (Γ′′ ; µ ), (Γ′′ ; ν )] into [ . . . , (Γ′′ ; µ ∩ ν )] higher up in π . Substituting this (∩R) rule in π by (P), which turns [ . . . , (Γ′′ ; µ ), (Γ′′ ; ν )] into [ . . . , (Γ′′ ; µ )], and removing the (∩L) rule from π ′ , we obtain a derivation πs : [(Γi ; σi )]i ∪ [(Γ; α ∩ β ), (Γ′ ; µ )] in place of π ; the cut of πs against π0′ then concludes [(Γi , ∆i ; τi )]i ∪ [(Γ, ∆; γ ), (Γ′ , ∆′ ; ρ )].
Note that a canonical derivation remains canonical after a cut-elimination step, while a clean derivation can be transformed into one that is not clean. Moreover, note that all the elementary steps related to global
connectives and structural rules act locally. In fact they correspond to applying the standard cut elimination
steps of LJ in parallel in all the atoms of the considered molecule. But some cases dealing with the local
connective ∩ are not local. In particular, in the asymmetric case, the derivation can be modified in an
almost global way.
6
Cut-elimination in ISC: the algorithm
We are now ready to define the cut elimination algorithm for ISC. First we need to define a notion of
measure, for proving the termination of the algorithm.
Definition 6.1 Let π be a cut in ISC, with premises π1 and π2 . The measure of π , denoted by m(π ) is
the set {m(πi ) | πi ∈ π̃ }. m(π ) ≤ m(π ′ ) if and only if for every (s, h) ∈ m(π ) \ m(π ′ ) there is (s′ , h′ ) ∈
m(π ′ ) \ m(π ) such that (s, h) ≤ (s′ , h′ ).
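The comparison of measures in Definition 6.1, built on the pairs of Definition 4.1, is purely combinatorial and can be made concrete. The following is a minimal sketch, not taken from the paper, of how the two orders could be computed; representing measures as Python tuples and lists is an assumption made only for illustration.

```python
from typing import List, Tuple

Measure = Tuple[int, int]  # (size of the cut formula, sum of the heights of the premises), Definition 4.1

def pair_le(a: Measure, b: Measure) -> bool:
    """Non-strict version of the order of Definition 4.1:
    (s, h) < (s', h') iff (s < s' and h <= h') or (s <= s' and h < h')."""
    s, h = a
    s2, h2 = b
    return (s, h) == (s2, h2) or (s < s2 and h <= h2) or (s <= s2 and h < h2)

def set_le(m: List[Measure], m2: List[Measure]) -> bool:
    """Definition 6.1: m <= m' iff every pair in m but not in m' is bounded by some pair in m' but not in m."""
    only_m = [p for p in m if p not in m2]
    only_m2 = [p for p in m2 if p not in m]
    return all(any(pair_le(p, q) for q in only_m2) for p in only_m)

# Example: the (∩R) commutation step replaces a cut of measure (|γ0|, h0+h01+h02+1)
# by two cuts of measures (|γ0|, h0+h01) and (|γ0|, h0+h02); the new measure set is smaller.
old = [(3, 10)]
new = [(3, 6), (3, 5)]
print(set_le(new, old), set_le(old, new))  # True False
```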
An important lemma holds.
Lemma 6.2 Let π be a derivation in ISC, let π̃ = {πi | i ∈ I} and let clean(π̃ ) = {πi′ | i ∈ I}. Then
h(πi ) = h(πi′ ), for all i ∈ I.
Proof
By Lemma 2.4 and by the definition of π̃ .
Now we are able to design the cut-elimination algorithm.
Definition 6.3
i) Let π be a clean and canonical derivation in ISC, containing at least one cut rule.
Then step(π ) is the result of applying to π an elementary cut elimination step to a cut closest to
the premises.
ii) The algorithm A , which takes as input a clean and canonical proof π and produces as output a
cut-free proof π ′ , is defined as follows:
– A (π ) = if π does not contain any cut then π else A (clean(step(π ))).
Lemma 6.4 A (π ) eventually stops.
Proof We need to check that every application of an elementary cut elimination step
either makes the cut disappear, in the case of an axiom cut, or replaces it by some cuts with a smaller measure. In the case of a cut involving structural rules, or global connectives, the check is easy, since the result
comes directly from Lemma 4.2 and from the definition of the measure of a cut in ISC, which is given in terms of the measure of a cut in LJ (remember that the input proof is clean, so there are no occurrences
of (P) and (Fus)). So we will show only the cases of the elementary steps defined in Definition 5.4.
For readability, we will use the same terminology.
• Commutation step (∩R).
Let π̃ = {π0 : Γ0 ⊢ γ0 } ∪ {πi : Γi ⊢ γi | 1 ≤ i ≤ m}, π̃ ′ = {π01 : ∆0 , γ0 ⊢ α0 , π02 : ∆0 , γ0 ⊢ β0 } ∪ {πi′ : ∆i , γi ⊢ δi | 1 ≤ i ≤ m} and π̃ ′′ = {π0′ : ∆0 , γ0 ⊢ α0 ∩ β0 , obtained from π01 and π02 by (∩R)} ∪ {πi′ : ∆i , γi ⊢ δi | 1 ≤ i ≤ m}. Moreover let h0 , h01 , h02 , hi , h′i be respectively the heights of π0 , π01 , π02 , πi , πi′ .
The measure of the cut is: m = {(|γ0 |, h0 + h01 + h02 + 1)} ∪ {(|γi |, hi + h′i ) | 1 ≤ i ≤ m}.
The measure of the new generated cut is: m′ = {(|γ0 |, h0 + h01 ), (|γ0 |, h0 + h02 )} ∪ {(|γi |, hi + h′i ) | 1 ≤ i ≤ m}, and m′ < m, by Definition 6.1, remembering that π̃ ′′′ = clean(π̃ ′′′ ) and that the clean step does not modify the heights of π0 and πi , by Lemma 6.2.
• Conversion step: symmetric ∩ rule.
Let π̃ = {π01 : Γ0 ⊢ α0 , π02 : Γ0 ⊢ β0 } ∪ {πi : Γi ⊢ αi | 1 ≤ i ≤ m} and π̃ ′ = {π03 : ∆0 , α0 ⊢ γ0 } ∪ {πi′ : ∆i , αi ⊢ γi | 1 ≤ i ≤ m}. Moreover let h01 , h02 , h03 , hi , h′i be respectively the heights of π01 , π02 , π03 , πi , πi′ .
The measure of the cut is: m = {(|α0 ∩ β0 |, h01 + h02 + h03 + 2)} ∪ {(|αi |, hi + h′i ) | 1 ≤ i ≤ m}. The measure of the new generated cut is: m′ = {(|α0 |, h01 + h03 )} ∪ {(|αi |, hi + h′i ) | 1 ≤ i ≤ m}. In computing m′ , we used Lemma 6.2, which assures that the height of π01 has not been modified by the clean step.
• Conversion step: asymmetric ∩ rule.
Let π˜0 = {π01 : Γ ⊢ α , π02 : Γ ⊢ β , π03 : Γ′ ⊢ µ ∩ ν } ∪ {πi : Γi ⊢ σi | i ∈ I} and π˜0′ = {π04 : ∆, α ∩ β ⊢ γ , π05 : ∆′ , µ ⊢ ρ } ∪ {πi′ : ∆i , σi ⊢ τi | i ∈ I}. Moreover let h01 , h02 , h03 , h04 , h05 , hi , h′i be respectively the heights of π01 , π02 , π03 , π04 , π05 , πi , πi′ .
The measure of the cut is: m = {(|α ∩ β |, h01 + h02 + h04 + 1)} ∪ {(|µ ∩ ν |, h03 + h05 + 1)} ∪ {(|σi |, hi + h′i )}. The measure of the new generated cut is: m′ = {(|α ∩ β |, h01 + h02 + h04 + 1)} ∪ {(|µ |, h03 + h05 − 1)} ∪ {(|σi |, hi + h′i )}, since, by Lemma 6.2, h(πs ) = h(π ).
So a topmost cut can be eliminated in a finite number of steps. Moreover Lemma 6.2 assures us that
the cleaning step does not increase the measure of any cut. Since the number of cuts is finite, the algorithm eventually stops.
Corollary 6.5 ISC enjoys the cut elimination property.
7
Intersection types from a logical point of view: an overview
The problem raised by Hindley, of looking for a logical system naturally connected to intersection
type assignment through the Curry-Howard isomorphism, has generated many different proposals. Very
roughly speaking, we could divide them into two categories, which we call the semantic approach and the
logical approach to the problem. The semantic approach is characterized by the fact that an extension of
the system we called IT in this paper has been considered. Namely intersection type assignment comes
with a subtyping relation, which formalizes the semantics of intersection as the meet in the continuous
function space. This subtyping relation is the essential tool in using intersection types for modelling
denotational semantics of λ -calculus, and so we call this approach semantic. The key idea here is to
avoid the rule introducing the intersection, which has no logical explanation. The first result in this
line is by Venneri [24], who designed an intersection type assignment system for Combinatory Logic, in
Hilbert style, where the introduction of intersection is replaced by an infinite set of axioms for the basic
combinators, built from their principal types. Then the subtyping rule plays the role of the intersection
introduction. In [8] this result has been enhanced, both by extending it to union types, and by giving
a logical interpretation of the subtyping relation, which turned out to correspond to the implication in
minimal relevant logic. The connection between intersection types and relevant implication had already been noticed in [1].
In the logical approach the system IT shown in Fig. 1 is taken into consideration, without any
subtyping relation. The aim is to design a true deductive system whose decoration coincides with
IT. In this line Capitani, Loreti and Venneri [5] propose a system of hypersequents, i.e., sequences of
formulae of LJ, where the distinction between global and local connectives had already been introduced.
The relation between this system and LJ cannot be formally stated, since the notion of empty formula
in a hypersequent is essential, while it has no correspondence in LJ. The relation between IT and LJ
has been clarified by Ronchi Della Rocca and Roversi, who designed IL [19], a deductive system where
formulae are trees of formulae of LJ, proved by isomorphic proofs. The result has been further enhanced
by Pimentel, Ronchi Della Rocca and Roversi [15, 16], who defined ISL and proved that the intersection
arises from a splitting of the usual intuitionistic conjunction into two connectives, each one reflecting part
of its behaviour, local or global. The system ISL is extensively discussed in this paper.
A problem strongly related to the considered one is the design of a language, explicitly typed by
intersection types. Different proposals have been made. In [20], a language with this property has been
obtained as a side effect of the logical approach, by a full decoration of the intersection logic IL. But
its syntax is difficult, since the synchronous behaviour of intersection types is reflected in the fact that a
term typed by an intersection type is a pair of terms which are identical, modulo type erasure. A similar
language has been proposed in [26]. All the other attempts have been made with the aim of avoiding
such a duplication of terms. Wells and Haack [25] build a language where the duplication becomes
dynamic, by enriching the syntax and by defining an operation of type selection both on terms and types.
Liquori and Ronchi Della Rocca [12] proposed a language which has an imperative flavour, since terms
are decorated by locations, which in their turn contain intersections of simply typed terms, describing the
corresponding type derivation. The last proposal is by Bono, Bettini and Venneri [4], and consists of
a language with parallel features, where parallel subterms share the same free variables, i.e., the same
resources. Since we can see a connection between the global behaviour of the arrow type and a parallel
behaviour of terms, we think it would be interesting to explore whether there is a formal connection between
this language and ISL.
References
[1] F. Barbanera, S. Martini, Proof-functional connectives and realizability, Archive for Mathematical Logic, 33,
pp. 189-211, 1994.
[2] F. Barbanera, M. Dezani-Ciancaglini, and U. de’Liguoro, Intersection and Union Types: Syntax and Semantics, Inform. and Comput., 119, pp. 202-230, 1995.
[3] H. Barendregt, M. Coppo, and M. Dezani-Ciancaglini, A Filter Lambda Model and the Completeness of Type
Assignment, Journal of Symbolic Logic, 48(4), pp. 931-940, 1983.
[4] V. Bono, L. Bettini and B. Venneri, A Typed lambda calculus with Intersection Types, Theoretical Computer
Science, vol. 398 (1-3), pp. 95-113, 2008.
[5] B. Capitani, M. Loreti, B. Venneri, Hyperformulae, Parallel Deductions and Intersection Types Electr. Notes
Theor. Comput. Sci., vol. 50 (2), 2001.
[6] M. Coppo and M. Dezani-Ciancaglini, An extension of the basic functionality theory for the lambda-calculus,
Notre Dame J. Formal Logic, 21(4), pp. 685-693, 1980.
[7] M. Coppo, M. Dezani-Ciancaglini, F. Honsell, and G. Longo, Extended Type Structures and Filter Lambda
Models, Logic Colloquium '82, pp. 241-262, 1983.
[8] M. Dezani-Ciancaglini, S. Ghilezan, B. Venneri, Intersection Types as Logical Formulae, Notre Dame Journal
of Formal Logic, vol. 38 (2), pp. 246-269, 1997.
30
Intersection Logic in sequent calculus style
[9] J.-Y. Girard, Y. Lafont, and P. Taylor, Proofs and Types, Cambridge University Press, 1989.
[10] J.R. Hindley, Coppo-Dezani types do not correspond to propositional logic, Theoret. Comput. Sci., 28(1-2),
pp. 235-236, 1984.
[11] J.L.Krivine Lambda-calcul, types et modèles, Masson, 1990.
[12] L. Liquori, S. Ronchi Della Rocca, Intersection Types a la Church, Information and Computation, vol. 205
(9), pp. 1371-1386, 2007.
[13] A.R. Omondi, Proof Normalization I: Gentzen Hauptsatz, Victoria University of Wellington, Techn. Rep.CSTR 93-5, 1993.
[14] L. Paolini, M. Piccolo, S. Ronchi Della Rocca Logical Semantics for Stability, proc. MFPS 2009, LNCS,
249, pp. 429-449, 2009.
[15] E. Pimentel, S. Ronchi Della Rocca, and L. Roversi, Intersection Types: a Proof-Theoretical Approach,
ICALP’05 workshop, Proceedings of Structure and Deduction, pp. 189-204, 2005.
[16] E. Pimentel, S. Ronchi Della Rocca, and L. Roversi, Intersection Types from a proof-theoretic perspective,
Fund. Informaticae: Special Issue on Intersection Types and Related Systems, to appear.
[17] Pottinger, G. A type assignment for the strongly normalizable λ -terms. In To H. B. Curry: essays on
combinatory logic, lambda calculus and formalism, pages 561-577. Academic Press, London, 1980.
[18] S. Ronchi Della Rocca and L. Paolini, The Parametric λ -Calculus: a Metamodel for Computation, Text in
Theoretical Computer Science, Springer-Verlag, 2004.
[19] S. Ronchi Della Rocca and L. Roversi, Intersection Logic, Proceedings of CSL’01, LNCS 2142, pp. 414-428,
2001.
[20] S. Ronchi Della Rocca, Typed Intersection lambda calculus, Proc. of ITRS’02, ENTCS, vol 70, 2002.
[21] Y. Stavrinos and A. Veneti, Towards a logic for union types, Fund. Informaticae: Special Issue on Intersection Types and Related Systems, to appear.
[22] A.S. Troelstra and H. Schwichtenberg, Basic Proof Theory, Cambridge University Press, 2000.
[23] A. Veneti and Y. Stavrinos, A Sequent Calculus for Intersection and Union Logic, CiE’08 Conference,
abstract in Local Proceedings of the 4th CiE Conference, p. 514, 2008.
[24] B. Venneri, Intersection Types as Logical Formulae, J.Log. Comput., vol. 4 (2), pp. 109-124, 1994.
[25] J.B. Wells, C. Haack, Branching Types, proc. of ESOP’02, LNCS, vol. 2305, pp. 115-132, 2002.
[26] J. B. Wells, A. Dimock, R. Muller, and F. Turbak, A calculus with polymorphic and polyvariant flow types, J. Funct. Programming, 12(3), pp. 183-227, 2002.
CHARACTERISTICS OF ROTA-BAXTER ALGEBRAS
arXiv:1511.00936v1 [math.RA] 3 Nov 2015
LI GUO AND HOUYI YU
Abstract. The characteristic is a simple yet important invariant of an algebra. In this paper,
we study the characteristic of a Rota-Baxter algebra, called the Rota-Baxter characteristic. We
introduce an invariant, called the ascent set, of a Rota-Baxter characteristic. By studying its
properties, we classify Rota-Baxter characteristics in the homogeneous case and relate Rota-Baxter
characteristics in general to the homogeneous case through initial ideals. We show that the Rota-Baxter
homogeneous case. We also give a more detailed study of Rota-Baxter characteristics with special
base rings. In particular, we determine the prime characteristics of Rota-Baxter rings.
Contents
1. Introduction  1
2. Characteristics of Rota-Baxter algebras  2
3. Classification of the characteristics of Rota-Baxter algebras  4
3.1. The ascent set of a Rota-Baxter characteristic  4
3.2. Classification of homogeneous Rota-Baxter characteristics  6
3.3. Rota-Baxter characteristics and their quotients  9
4. Characteristics of Rota-Baxter rings  11
References  15
1. Introduction
A Rota-Baxter algebra is an associative algebra together with a linear endomorphism that is
an algebraic analogue of the integral operator. This concept has its origin in a work of G.
Baxter [5] in probability theory. In the late 1960s, Rota [16] studied the subject from an algebraic
and combinatorial perspective and suggested that they are closely related to hypergeometric
functions, incidence algebras and symmetric functions [17, 18]. Since then, these algebras have
been investigated by mathematicians and mathematical physicists with various motivations. For
example, a Rota-Baxter operator on a Lie algebra is closely related to the operator form of the
classical Yang-Baxter equation. Here the Baxter is the physicist R. Baxter [3, 19]. In recent
years Rota-Baxter operators have found applications to many areas, such as number theory [15],
combinatorics [10, 16, 17], operads [1, 4] and quantum field theory [6]. See [11, 13] and the
references therein for further details.
For the theoretic study of this important algebraic structure, it is useful to generalize the study
of characteristics of algebras to Rota-Baxter algebras. The characteristic of a (unitary) ring R is
(the nonnegative generator of) the kernel of the structure map Z → R sending 1 to the identity
Date: January 20, 2018.
2010 Mathematics Subject Classification. 13C05, 13E05, 16W99.
Key words and phrases. Rota-Baxter algebra, Rota-Baxter characteristic, initial object, initial ideal, prime ideal.
1
2
LI GUO AND HOUYI YU
element of R. More generally, the characteristic of an algebra R over a commutative ring k
is the kernel of the structure map k → R. To generalize this concept to the context of RotaBaxter algebras, we note that the structure map k → R comes from the fact that k is the initial
object in the category of k-algebras, as the free k-algebra on the empty set. Then to study the
characteristics of Rota-Baxter k-algebras, we consider the initial object in the category of RotaBaxter k-algebras, as the free Rota-Baxter algebra on the empty set. Thus every Rota-Baxter
k-algebra comes with a (unique) structure map from the initial object to this Rota-Baxter algebra.
Then the kernel of the structure map should be the characteristic of the Rota-Baxter algebra.
In this paper, we study the characteristics of Rota-Baxter algebras. More precisely, we study
the Rota-Baxter ideals, and their corresponding quotients, of the initial Rota-Baxter k-algebra. In
particular, we study the Rota-Baxter ideals of the initial Rota-Baxter ring. The initial Rota-Baxter
algebra is a generalization of the divided power algebra, and the polynomial algebra k[x] when k
is taken to contain Q. So some of our results are naturally related to results about these algebras.
The construction of the initial object will be reviewed at the beginning of Section 2 and will
be applied to give the concept of Rota-Baxter characteristics. In Section 3, we first classify
homogeneous Rota-Baxter characteristics and their quotients (Theorem 3.6). We then show that
the Rota-Baxter quotients of general Rota-Baxter characteristics have the same underlying sets
as those in the homogeneous case (Theorem 3.9). Finally specializing to the case of k = Z
in Section 4, we give the structures of Rota-Baxter characteristics of Rota-Baxter rings, and
determine the prime Rota-Baxter characteristics (Theorem 4.3).
2. Characteristics of Rota-Baxter algebras
Notations. Unless otherwise specified, an algebra in this paper is assumed to be unitary associative defined over a unitary commutative ring k. Let N and P denote the set of nonnegative and
positive integers respectively. For n ∈ N, denote [n] := {1, · · · , n}. For notational convenience,
we also denote [∞] = P. Let Zn denote the set of integers modulo n.
Before giving the definition of the characteristic of a Rota-Baxter algebra, we first provide
some background and preliminary results on Rota-Baxter algebras. See [9, 11, 14] for details.
Let λ ∈ k be given. A Rota-Baxter k-algebra of weight λ is a k-algebra R paired with a linear
operator P on R that satisfies the identity
(1)
P(x)P(y) = P(xP(y)) + P(P(x)y) + λP(xy)
for all x, y ∈ R. When k is Z, then the pair (R, P) is called a Rota-Baxter ring.
A Rota-Baxter ideal of a Rota-Baxter algebra (R, P) is an ideal I of R such that P(I) ⊆ I. Then
we denote I ≤ R. The concepts of Rota-Baxter subalgebras, quotient Rota-Baxter algebras and
homomorphisms of Rota-Baxter algebras can be similarly defined.
We recall that the free Rota-Baxter algebra on a set X is a Rota-Baxter algebra (F RB (X), PX )
together with a set map iX : X → F RB (X) characterized by the universal property that, for any
Rota-Baxter algebra (R, P) and set map f : X → R, there is a unique Rota-Baxter algebra
homomorphism f˜ : F RB (X) → R such that f˜ ◦ iX = f . The free Rota-Baxter algebra on the
empty set is also the free Rota-Baxter k-algebra F RB (k) on the k-algebra k [14], characterized
by the universal property that, for any Rota-Baxter algebra (R, P) and k-algebra homomorphism
f : k → R, there is a unique Rota-Baxter algebra homomorphism f˜ : F RB (k) → R such that
f˜ ◦ ik = f , where ik is the structure map ik : k → F RB (k). Since there is only one k-algebra
homomorphism k → R, this universal property shows that F RB (k) is the initial object in the
category of Rota-Baxter k-algebras.
CHARACTERISTICS OF ROTA-BAXTER ALGEBRAS
3
We refer the reader to [14], as well as [7, 16], for the general constructions of free Rota-Baxter
algebras, but will focus on a simple construction of F RB (k) following [2], where it is denoted by
Xk (k). This free Rota-Baxter k-algebra not only provides the simplest example of free Rota-Baxter algebras but also establishes the connection between Rota-Baxter algebras and some well-known concepts such as divided powers and Stirling numbers [2, 12]. As a k-module, Xk (k) is
given by the free k-module
Xk (k) = ⊕_{m≥0} k a_m
on the basis {a_m | m ≥ 0}. For a given λ ∈ k, the product ⋄ = ⋄_λ on Xk (k) is defined by
(2)   a_m ⋄_λ a_n = \sum_{i=0}^{\min(m,n)} \binom{m+n-i}{m} \binom{m}{i} λ^i a_{m+n-i},   m, n ∈ N.
Thus when λ = 0, Xk (k) is the divided power algebra. Note that ⋄ is an extension of the product
on k viewed as ka0 . Thus there should be no confusion if the notation ⋄ is suppressed, as we often
do. Define the k-linear operator P = Pk on Xk (k) by assigning Pk (a_m ) = a_{m+1} , m ≥ 0, and
extending by additivity.
By [14], when k contains Q and λ = 0, we have Xk (k) ≅ k[x] as a k-algebra.
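To make the product ⋄_λ and the operator P concrete, here is a small computational sketch (not part of the paper) that represents elements of Xk (k) over k = Z as dictionaries mapping the index m of a_m to its coefficient, implements Eq. (2) and P, and numerically checks the Rota-Baxter identity (1) on a few basis elements; the function names are chosen only for illustration.

```python
from math import comb

def mul(f, g, lam=0):
    """Product of Eq. (2), extended bilinearly; f, g map index m -> coefficient of a_m."""
    out = {}
    for m, c in f.items():
        for n, d in g.items():
            for i in range(min(m, n) + 1):
                coef = comb(m + n - i, m) * comb(m, i) * (lam ** i) * c * d
                out[m + n - i] = out.get(m + n - i, 0) + coef
    return {idx: v for idx, v in out.items() if v != 0}

def P(f):
    """Rota-Baxter operator: P(a_m) = a_{m+1}, extended additively."""
    return {m + 1: c for m, c in f.items()}

def add(f, g):
    out = dict(f)
    for m, c in g.items():
        out[m] = out.get(m, 0) + c
    return {idx: v for idx, v in out.items() if v != 0}

def scale(c, f):
    return {m: c * v for m, v in f.items()}

# Check P(x)P(y) = P(xP(y)) + P(P(x)y) + lam*P(xy) on basis elements a_m, a_n.
for lam in (0, 1, 2):
    for m in range(4):
        for n in range(4):
            x, y = {m: 1}, {n: 1}
            lhs = mul(P(x), P(y), lam)
            rhs = add(add(P(mul(x, P(y), lam)), P(mul(P(x), y, lam))), scale(lam, P(mul(x, y, lam))))
            assert lhs == rhs, (lam, m, n)
print("Rota-Baxter identity verified on sample basis elements")
```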
Theorem 2.1. The pair (Xk (k), Pk ) is the initial object in the category of Rota-Baxter k-algebras.
More precisely, for any Rota-Baxter k-algebra (R, P), there is a unique Rota-Baxter k-algebra
homomorphism ϕ : (Xk (k), Pk ) → (R, P).
Proof. Suppose a k-algebra A has a linear basis X. By [9, 11], the free Rota-Baxter algebra F RB (A)
on A (denoted by XNC
k (A)) has a linear basis consisting of Rota-Baxter words in the alphabet set
X. By definition, a Rota-Baxter word in X is a bracketed word in the alphabet set X in which
there are no adjacent pairs of brackets. Taking A = k, then X = {1}. Thus a Rota-Baxter word in
X can only be of the form
ur := ⌊· · · ⌊ 1 ⌋ · · ·⌋  with r pairs of brackets
(namely applying the bracket operator ⌊ ⌋ to 1 r times), for r ≥ 1, together with u0 := 1. Thus
F RB (k) = \sum_{r≥0} k u_r ,
on which the Rota-Baxter operator Qk is given by Qk (ur ) = ⌊ur ⌋ = ur+1 . By the universal property
of F RB (k), the natural inclusion map f : k → Xk (k) sending 1 → a0 (that is, the structure map)
extends to a Rota-Baxter algebra homomorphism
f˜ : F RB (k) → Xk (k).
As such, we obtain f˜(u0 ) = a0 and recursively,
f˜(ur+1 ) = f˜(Qk (ur )) = Pk ( f˜(ur )) = Pk (ar ) = ar+1 , r ≥ 0.
Therefore f˜ is a linear isomorphism and thus a Rota-Baxter algebra isomorphism, showing that
Xk (k) is the free Rota-Baxter algebra on k and hence the initial object in the category of RotaBaxter k-algebras.
Thus Xk (k) plays the same role in the category of Rota-Baxter k-algebras as the role played
by k in the category of k-algebras.
4
LI GUO AND HOUYI YU
Definition 2.2. Let (R, P) be a Rota-Baxter k-algebra and let ϕ = ϕ(R,P) : (Xk (k), Pk ) → (R, P) be
the unique Rota-Baxter algebra homomorphism from the initial object (Xk (k), Pk ) in the category
of Rota-Baxter k-algebras to (R, P). The kernel of the structure map ϕ is called the Rota-Baxter
characteristic of (R, P).
In view of Theorem 2.1, the characteristic of a Rota-Baxter k-algebra R must be a Rota-Baxter
ideal of Xk (k). Conversely, any Rota-Baxter ideal I of Xk (k) is the characteristic of some Rota-Baxter algebra, for example of Xk (k)/I.
Remark 2.3. Based on the above observation, we will use the terminology Rota-Baxter characteristics interchangeably with Rota-Baxter ideals of Xk (k) in the rest of the paper.
3. Classification of the characteristics of Rota-Baxter algebras
In this section, we study Rota-Baxter characteristics and their quotients. First in Section 3.1,
we introduce an invariant, called the ascent set, of a Rota-Baxter characteristic. We then apply
the invariants to classify all homogeneous Rota-Baxter characteristics in Section 3.2. Finally in
Section 3.3 we relate a Rota-Baxter characteristic to a homogeneous Rota-Baxter characteristic
by taking the initial terms and show that the quotients of the two Rota-Baxter characteristics share
the same underlying sets, but not the same k-modules. This approach is motivated by the study of
in the polynomial algebra k[x1 , · · · , xn ] when k is a field. There the quotient modulo an ideal and
the quotient modulo the corresponding initial ideal are known to share the same basis [8, § 5.3,
Proposition 4].
3.1. The ascent set of a Rota-Baxter characteristic. As recalled in the last section, the Rota-Baxter algebra Xk (k), as the initial object in the category of Rota-Baxter algebras, is the direct sum
Xk (k) = ⊕_{m≥0} k a_m
on the basis {a_m | m ≥ 0} and hence is an N-graded k-module. Any nonzero element f of Xk (k) can be uniquely written as f = \sum_{i=0}^{n} c_i a_i with c_n ≠ 0. Then n is called the degree of f and c_n a_n is called the initial term of f , denoted by deg f and in( f ) respectively. In addition, we define deg 0 = −∞. We call the set supp( f ) := {c_i a_i | c_i ≠ 0} the support of f .
Let I be a Rota-Baxter characteristic, namely an ideal of Xk (k). We introduce an invariant of
I. For each j ∈ N, we denote
Ω_j := Ω_j(I) := { b_j ∈ k | (∃ f ∈ I) f = \sum_{i=0}^{j} b_i a_i }.
Equivalently,
Ω_j = {0} ∪ { b_j ∈ k | (∃ f ∈ I) in( f ) = b_j a_j }.
The smallest index j such that Ω_j ≠ 0 is called the starting point of I, denoted by st(I), that is,
st(I) = min { j ∈ N | Ω_j ≠ 0 }.
Lemma 3.1. Let I be a Rota-Baxter ideal of Xk (k). Then for each j ∈ N, Ω j is an ideal of k and
Ω j ⊆ Ω j+1 .
Proof. For any b_j, c_j ∈ Ω_j, there exist f and g ∈ I such that f = \sum_{i=0}^{j} b_i a_i and g = \sum_{i=0}^{j} c_i a_i. Hence f + g = \sum_{i=0}^{j} (b_i + c_i) a_i is in I. So b_j + c_j ∈ Ω_j. On the other hand, for any c ∈ k, c f = \sum_{i=0}^{j} c b_i a_i ∈ I. This yields c b_j ∈ Ω_j. Thus Ω_j is an ideal of k.
Since I is a Rota-Baxter ideal of Xk (k), we have P( f ) = \sum_{i=0}^{j} b_i a_{i+1} ∈ I. So b_j ∈ Ω_{j+1}. Hence Ω_j ⊆ Ω_{j+1}, as required.
For a given Rota-Baxter ideal I of Xk (k), Lemma 3.1 shows that the ideals Ω j ⊆ k, j ∈ N,
form a non-decreasing sequence of ideals of k. Thus the sequence is controlled by the locations
and extents where the increases occur. This motivates us to introduce the following notions.
Let s1 < s2 < · · · be the integers t ∈ N such that Ω_{t−1} ⊊ Ω_t, with the convention that Ω_{−1} = {0}.
We will use the notation si , i ∈ [NI ] where N = NI is either in N or ∞ with the convention adopted
at the beginning of the last section. The integers si , i ∈ [NI ], are called the ascending points of I
and the ideals Ωsi , i ∈ [NI ] are called the ascending levels of I. In view of Lemma 3.1, we have
s1 = st(I).
Thus for a given Rota-Baxter characteristic I, the set of pairs
(3)   A(I) := { (s_j, Ω_{s_j}) | j ∈ [NI] },
called the ascent set, is uniquely determined by I. By the definition of Ω_{s_j}, we have s_j < s_{j+1} and
(4)   Ω_{s_j} = Ω_{s_j+1} = · · · = Ω_{s_{j+1}−1} ⊊ Ω_{s_{j+1}}
for all j ∈ [NI], as illustrated by Figure 1.
Figure 1. The non-decreasing sequence of the ideals Ω_j plotted against j ∈ N, with jumps at the ascending points s1 < s2 < s3 to the ascending levels Ω_{s1} ⊊ Ω_{s2} ⊊ Ω_{s3}.
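As an illustration of how the ascent set packages the sequence (Ω_j)_j, the following small sketch (not from the paper) computes the ascending points and levels for k = Z, where each ideal Ω_j can be represented by its nonnegative generator; the input sequence below is a made-up example.

```python
def ascent_set(omega):
    """omega[j] = nonnegative generator of the ideal Omega_j of Z, j = 0, 1, 2, ...
    Returns the pairs (s_j, generator of Omega_{s_j}) at which the ideal strictly grows."""
    pairs = []
    prev = 0  # Omega_{-1} = {0}, generated by 0
    for j, w in enumerate(omega):
        # nZ is strictly contained in mZ iff m | n and m != n (with 0Z = {0})
        grows = (prev == 0 and w != 0) or (w != 0 and prev != 0 and w != prev and prev % w == 0)
        if grows:
            pairs.append((j, w))
            prev = w
    return pairs

# Example: Omega_j = 0, 0, 12Z, 12Z, 4Z, 4Z, 2Z, 2Z  ->  ascent set {(2, 12Z), (4, 4Z), (6, 2Z)}
print(ascent_set([0, 0, 12, 12, 4, 4, 2, 2]))  # [(2, 12), (4, 4), (6, 2)]
```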
We next study how the information from A(I) can be used to recover I. Denote
o
n
(5)
A := {(s j , Ωs j )} j∈[N] s j ∈ N, Ωs j 6 k, s j < s j+1 , Ωs j ( Ωs j+1 , j ∈ [N], 1 6 N 6 ∞ ,
called the set of ascending pairs. So A consists of pairs of sequences of the same lengths
with one sequence of strictly increasing nonnegative integers and a second sequence of strictly
increasing ideals of k. Let I = I(k) denote the set of Rota-Baxter characteristics, namely the set
of Rota-Baxter ideals of Xk (k). Then taking the ascent set of a Rota-Baxter characteristic defines
6
LI GUO AND HOUYI YU
a map
Φ : I → A,
(6)
I 7→ A(I), I 6 Xk (k).
In the rest of the paper, we study the property of this map, including its surjectivity and fibers,
that is, inverse images. In Theorem 3.6, we show that the restriction of Φ to the subset of
homogeneous Rota-Baxter characteristics gives a bijection to A, proving the surjectivity of Φ.
In Proposition 3.8, we show that two Rota-Baxter characteristics are in the same fiber if and only
if they have the same initial ideal.
Lemma 3.2. Let I be a Rota-Baxter ideal of Xk (k) with A(I) = { (s_j, Ω_{s_j}) | j ∈ [NI] }. For each j ∈ [NI], we let Θ_j := { ω_{s_j,ℓ} | ℓ ∈ [N_j] } be a set of generators of the ideal Ω_{s_j}. Here N_j is either a positive integer or ∞. For each ω_{s_j,ℓ} ∈ Θ_j, let f_{s_j,ℓ} be an element of I whose initial term is ω_{s_j,ℓ} a_{s_j}. Then the set
∪_{j∈[NI]} { f_{s_j,ℓ} | ℓ ∈ [N_j] }
is a generating set of the Rota-Baxter ideal I.
We call the above-mentioned set ∪_{j∈[NI]} { f_{s_j,ℓ} | ℓ ∈ [N_j] } an ascent generating set of the Rota-Baxter ideal I.
Proof. Let I′ be the Rota-Baxter ideal generated by the set ∪_{j∈[NI]} { f_{s_j,ℓ} | ℓ ∈ [N_j] }. Since I′ ⊆ I is trivial, it suffices to show that each element f of I belongs to I′.
So take an f ∈ I. Then f is in I′ if f = 0. We next assume that f ≠ 0. Then we have in( f ) = b a_{deg f} for some 0 ≠ b ∈ k. We proceed by induction on deg f. Clearly, deg f ≥ st(I) = s_1. If deg f = s_1, then b ∈ Ω_{s_1}. So there exist c_{s_1,ℓ} ∈ k, ℓ ∈ [N_1], all but finitely many of which being zero, such that b = \sum_{ℓ∈[N_1]} c_{s_1,ℓ} ω_{s_1,ℓ}. Thus the element
g = f − \sum_{ℓ∈[N_1]} c_{s_1,ℓ} f_{s_1,ℓ}
is in I. But now the degree of g is less than deg f. It follows from deg f = st(I) that g must be 0, which means that f = \sum_{ℓ∈[N_1]} c_{s_1,ℓ} f_{s_1,ℓ} is in I′ and we are done. For a given n ≥ st(I), assume that all f ∈ I with deg f ≤ n are in I′ and take f ∈ I with deg f = n + 1. Then there exists r ∈ [NI] such that s_r ≤ n + 1 < s_{r+1}, with the convention that s_{NI+1} = ∞ if NI is finite. By Eq. (4), we have Ω_{n+1} = Ω_{s_r}. So b = \sum_{ℓ∈[N_r]} c_{s_r,ℓ} ω_{s_r,ℓ} where c_{s_r,ℓ} ∈ k, ℓ ∈ [N_r], with all but a finite number of c_{s_r,ℓ} being zero. Put
h = f − \sum_{ℓ∈[N_r]} c_{s_r,ℓ} P^{n+1−s_r}( f_{s_r,ℓ} ).
Then h ∈ I with deg h < deg f. So we can apply the induction hypothesis to obtain that h is in I′ and hence f is in I′, as required.
3.2. Classification of homogeneous Rota-Baxter characteristics. We now apply ascent sets to
classify homogeneous Rota-Baxter characteristics.
Definition 3.3. A Rota-Baxter ideal I of Xk (k) is called a homogeneous Rota-Baxter ideal
if I is a Rota-Baxter ideal generated by a set of homogeneous elements. Then the Rota-Baxter
characteristic I is called homogeneous.
We next show that, for a Rota-Baxter ideal of Xk (k), its homogeneity as a Rota-Baxter ideal
is equivalent to its homogeneity as an ideal. For this purpose, we first give a general relation
between generating a Rota-Baxter ideal and generating an ideal in a Rota-Baxter algebra.
Lemma 3.4. Let (R, P) be a Rota-Baxter k-algebra of weight λ, and S a subset of R. Then the Rota-Baxter ideal (S)RB generated by S is the ideal of R generated by the set
(7)   SRB := ∪_{m∈N} { (◦_{i=1}^{m} P_{x_i})(a) | a ∈ S, x_i ∈ R, 1 ≤ i ≤ m },
where (◦_{i=1}^{m} P_{x_i})(a) := P(x_m P(x_{m−1} P(· · · P(x_1 a)))) with the convention that (◦_{i=1}^{0} P_{x_i})(a) := a.
If R = Xk (k), then it suffices to take the x_i to be homogeneous elements in Eq. (7).
Proof. Let I be the ideal of R generated by S RB. Then an element of I is a sum of elements of the
form r(◦mi=1 P xi )(a) with r ∈ R. Then we have
P(r(◦_{i=1}^{m} P_{x_i})(a)) = (◦_{i=1}^{m+1} P_{x_i})(a),
where xm+1 = r. So, by the additivity of P, we obtain P(I) ⊆ I, and hence I is the Rota-Baxter
ideal of R generated by S RB , that is, I = (S RB )RB . Notice that S ⊆ S RB ⊆ (S )RB , so (S )RB = (S RB )RB
whence I = (S )RB , as required. The second statement follows since any element in Xk (k) is a
linear combination of homogeneous elements.
Now we can give the following characterization of homogeneous Rota-Baxter ideals in Xk (k).
Proposition 3.5. Let I ⊆ Xk (k) be a Rota-Baxter ideal. Then I is a homogeneous Rota-Baxter
ideal if only if I is a homogeneous ideal.
Proof. (⇐=) If a set of homogeneous elements generates I as an ideal, then it does so as a Rota-Baxter ideal.
(=⇒) Suppose that I is a homogeneous Rota-Baxter ideal. Then we can assume that there are a subset Λ of N and numbers n_i ∈ P (i ∈ Λ) or n_i = ∞ such that I is generated by the following set of homogeneous elements:
G := { b_{ij} a_i ∈ k a_i | j ∈ [n_i], i ∈ Λ }.
Then, by Lemma 3.4, I is the ideal generated by the set
G′ := { (◦_{k=1}^{m} P_{x_k})(b_{ij} a_i) | b_{ij} a_i ∈ G, x_k ∈ Xk (k) homogeneous, 1 ≤ k ≤ m, m ≥ 0 }
with the convention that (◦_{i=1}^{0} P_{x_i})(a) := a. Hence I is contained in the homogeneous ideal generated by the supports of elements in G′. Thus to complete the proof, we just need to prove that for each u ∈ G′, we have supp(u) ⊂ I. We will prove this for u in the form (◦_{k=1}^{m} P_{x_k})(b_{ij} a_i), where b_{ij} a_i ∈ G, by induction on m ≥ 0. When m = 0, we have u = b_{ij} a_i ∈ G, which is assumed to be in I. Suppose that the statement is true for m = k ≥ 0 and consider
u = (◦_{k=1}^{m+1} P_{x_k})(b_{ij} a_i) = P( x_{m+1} (◦_{k=1}^{m} P_{x_k})(b_{ij} a_i) ).
By the induction hypothesis, the support of (◦_{k=1}^{m} P_{x_k})(b_{ij} a_i), that is, the set of homogeneous components of it, is contained in I. Let b a_p ∈ I, b ∈ k, be such a homogeneous component and denote x_{m+1} = c a_n, c ∈ k. Then by Eq. (2), we have
P( x_{m+1} (b a_p) ) = P( (c a_n)(b a_p) ) = P( \sum_{i=0}^{\min(n,p)} \binom{n+p-i}{p} \binom{p}{i} λ^i bc a_{n+p-i} ) = \sum_{i=0}^{\min(n,p)} \binom{n+p-i}{p} \binom{p}{i} λ^i c P^{n-i+1}(b a_p).
Since P^{n−i+1}(b a_p) is in I, the homogeneous components of P( x_{m+1} (b a_p) ) are in I. Hence the homogeneous components of u are in I. This completes the induction.
Theorem 3.6.
(a) Let I be a homogeneous Rota-Baxter characteristic, that is, a homogeneous Rota-Baxter ideal of Xk (k), with A(I) = { (s_j, Ω_{s_j}) | j ∈ [NI] }. Then
(8)   I = ⊕_{i≥s_1} Ω_i a_i = ⊕_{j=1}^{NI} ⊕_{i=s_j}^{s_{j+1}−1} Ω_{s_j} a_i,
which equals ⊕_{j=1}^{∞} ⊕_{i=s_j}^{s_{j+1}−1} Ω_{s_j} a_i if NI = ∞, and ( ⊕_{j=1}^{NI−1} ⊕_{i=s_j}^{s_{j+1}−1} Ω_{s_j} a_i ) ⊕ ( ⊕_{i≥s_{NI}} Ω_{s_{NI}} a_i ) if NI < ∞,
as the direct sum of the k-modules Ω_{s_j} a_i.
(b) The quotient Xk (k)/I is the direct sum of the k-modules (k/Ω_{s_j}) a_i, that is,
(9)   Xk (k)/I ≅ ⊕_{j=0}^{NI} ⊕_{i=s_j}^{s_{j+1}−1} (k/Ω_{s_j}) a_i,
which equals ( ⊕_{i=0}^{s_1−1} k a_i ) ⊕ ( ⊕_{j=1}^{∞} ⊕_{i=s_j}^{s_{j+1}−1} (k/Ω_{s_j}) a_i ) if NI = ∞, and ( ⊕_{i=0}^{s_1−1} k a_i ) ⊕ ( ⊕_{j=1}^{NI−1} ⊕_{i=s_j}^{s_{j+1}−1} (k/Ω_{s_j}) a_i ) ⊕ ( ⊕_{i≥s_{NI}} (k/Ω_{s_{NI}}) a_i ) if NI < ∞,
with the convention that s_0 = 0, Ω_{s_0} = 0 and s_{NI+1} = ∞ if NI is finite.
(c) For any ascent pair {(s j , Ωs j )} j∈[N] in A defined in Eq. (5), consisting of a positive integer
N or N = ∞, strictly increasing nonnegative integers si and strictly increasing ideals
Ωs j , j ∈ [N], there is a unique homogeneous Rota-Baxter ideal I of Xk (k) such that A(I) =
{(s j , Ωs j )} j∈[N] . In other words, the restriction of the map Φ : I → A in Eq. (6) to the set
of homogeneous Rota-Baxter characteristics is a bijection.
Proof. (a) By Proposition 3.5, the k-module I is the direct sum of the k-modules Ωi ai .
(b) Since I is a graded submodule of Xk (k), Item (b) is proved.
(c) For a given ascending pair {(s j , Ωs j )} j∈[N] ∈ A, define I ⊆ Xk (k) as in Eq. (8) with NI replaced
by N. We next show that I is a homogeneous Rota-Baxter ideal.
It follows from s1 < s2 < · · · and Ω_{s_1} ⊊ Ω_{s_2} ⊊ · · · that P(I) ⊆ I. I is a k-submodule since
Ωs j is an ideal of k for each j ∈ [N]. Further by its definition, I is homogeneous as a k-module.
We next prove that I(Xk (k)) ⊆ I. Since each element of Xk (k) can be written as the sum
of finitely many homogeneous components, it suffices to show that (b a_n)(c a_p) ∈ I, where b a_n ∈ I
and ca p ∈ Xk (k). By Eq. (2), we have
(b a_n)(c a_p) = \sum_{i=0}^{\min(n,p)} \binom{n+p-i}{p} \binom{p}{i} bc λ^i a_{n+p-i}.
Since b a_n ∈ I, we have b ∈ Ω_{s_r} where r ∈ [N] is such that s_r ≤ n < s_{r+1}. For any given i with 0 ≤ i ≤ min(n, p), let t be the integer in [N] such that s_t ≤ n + p − i < s_{t+1}. Note that we always have n + p − i ≥ n ≥ s_r, so s_r ≤ s_t. Thus Ω_{s_r} ⊆ Ω_{s_t}, whence b ∈ Ω_{s_t}. Then for each i with 0 ≤ i ≤ min(n, p), the element \binom{n+p-i}{p} \binom{p}{i} bc λ^i a_{n+p-i} is in Ω_{s_t} a_{n+p-i}. This shows that (b a_n)(c a_p) ∈ I.
Thus I is a homogeneous Rota-Baxter ideal of Xk (k).
Thus we obtain a map Ψ from A to the set of homogeneous Rota-Baxter ideals of Xk (k). It is
direct to check that the left and right compositions of Ψ with the restriction of Φ to homogeneous
Rota-Baxter ideals of Xk (k) are the identities. Thus the restriction of Φ to the set of homogeneous
Rota-Baxter ideals is bijective.
3.3. Rota-Baxter characteristics and their quotients. Now we relate a Rota-Baxter characteristic to the homogeneous case and describe the structure of the quotient of Xk (k) modulo a
Rota-Baxter characteristic. We will see that such a quotient has the underlying set of the form
defined by Eq. (9). However, the same underlying set may be shared by different Rota-Baxter
characteristics, as can be seen from the following theorem. As mentioned at the beginning of this
section, this theorem is a property of Xk (k) coming as an analog to polynomial algebras.
We first relate a Rota-Baxter ideal of Xk (k) to a suitable homogeneous Rota-Baxter ideal.
Definition 3.7. The initial Rota-Baxter ideal of a Rota-Baxter ideal I of Xk (k) is the RotaBaxter ideal in(I) generated by {in( f )| f ∈ I}.
Thus in(I) is a homogeneous ideal.
Proposition 3.8. Let I be a Rota-Baxter ideal of Xk (k). Then for its ascent set we have A(I) =
A(in(I)). In other words, two Rota-Baxter characteristics are in the same fiber under the map Φ
in Eq. (6) if and only if they have the same initial ideal.
Proof. Assume that A(I) := { (s_j, Ω_{s_j}) | j ∈ [NI] }. We only need to show that Ω_i(I) = Ω_i(in(I)) for all i ∈ N. If i < st(I), then there is no nonzero element f ∈ I of degree at most i, so Ω_i(I) = Ω_i(in(I)) = {0}. Next we assume that i ≥ st(I). Take a nonzero element b ∈ k. Then b ∈ Ω_i(I) if and only if there exists an element f in I such that in( f ) = b a_i, which is equivalent to b ∈ Ω_i(in(I)). Thus A(I) = A(in(I)), as required.
Now we can determine the underlying set of the quotient of a Rota-Baxter characteristic.
Theorem 3.9. Let I be a Rota-Baxter ideal of Xk (k). Then Xk (k)/I has the same underlying
set as Xk (k)/in(I) in Eq. (9). More precisely, for each j with j ∈ {0} ∪ [NI ], fix a complete set
T j ⊆ k, such that 0 ∈ T j , of representatives of k modulo Ωs j , with the convention that s0 = 0,
Ωs0 = 0 and sNI +1 = ∞ if NI is finite. Then Xk (k)/I has a complete set of representatives given
by the following subset of Xk (k) = ⊕_{m=0}^{∞} k a_m:
(10)   T := ⊕_{j=0}^{NI} ⊕_{i=s_j}^{s_{j+1}−1} T_j a_i,
which equals ( ⊕_{i=0}^{s_1−1} k a_i ) ⊕ ( ⊕_{j=1}^{∞} ⊕_{i=s_j}^{s_{j+1}−1} T_j a_i ) if NI = ∞, and ( ⊕_{i=0}^{s_1−1} k a_i ) ⊕ ( ⊕_{j=1}^{NI−1} ⊕_{i=s_j}^{s_{j+1}−1} T_j a_i ) ⊕ ( ⊕_{i≥s_{NI}} T_{NI} a_i ) if NI < ∞.
Proof. Let I be a Rota-Baxter ideal of Xk (k) with its ascent set A(I) = { (s_j, Ω_{s_j}) | j ∈ [NI] }. By
Proposition 3.8, we have A(I) = A(in(I)). Thus it suffices to show that the underlying set of
Xk (k)/I is T in light of Theorem 3.6(b).
By convention, we take s_0 = 0 and Ω_{s_0} = 0. Then T_0 = k and ⊕_{i=0}^{s_1−1} T_0 a_i = ⊕_{i=s_0}^{s_1−1} k a_i. Note that this term does not exist if s_1 = 0.
Let f ∈ Xk (k) be a nonzero element with in( f ) = b_n a_n for some b_n ∈ k. So deg f = n ≥ 0. We first show that there is a unique f ′ in T such that f − f ′ is in I.
Let ∪_{j∈[NI]} { f_{s_j,ℓ} | ℓ ∈ [N_j] } be an ascent generating set of I; we may assume that for each j ∈ [NI] and ℓ ∈ [N_j] we have in( f_{s_j,ℓ} ) = ω_{s_j,ℓ} a_{s_j}, and hence Ω_{s_j} is generated by the set Θ_j := { ω_{s_j,ℓ} | ℓ ∈ [N_j] } by Lemma 3.2.
If deg f < s_1, then f ∈ ⊕_{i=0}^{s_1−1} k a_i, which equals ⊕_{i=s_0}^{s_1−1} T_0 a_i since s_0 = 0, Ω_{s_0} = 0. Thus, it is enough to take f ′ = f.
Next we assume that deg f ≥ s_1. We prove that there exists an element f ′ ∈ T with deg f ′ ≤ deg f such that f − f ′ ∈ I by induction on deg f. If deg f = s_1, then in( f ) = b_{s_1} a_{s_1} for some b_{s_1} ∈ k. Since T_1 ⊆ k is a complete set of representatives of k modulo Ω_{s_1}, there exists a unique b′_{s_1} ∈ T_1 such that b_{s_1} − b′_{s_1} ∈ Ω_{s_1}. Note that Ω_{s_1} is generated by Θ_1, so there exist c_{s_1,ℓ} in k, ℓ ∈ [N_1], with all but a finite number of c_{s_1,ℓ} being zero, such that b_{s_1} − b′_{s_1} = \sum_{ℓ∈[N_1]} c_{s_1,ℓ} ω_{s_1,ℓ}. If we take f ′ = f − \sum_{ℓ∈[N_1]} c_{s_1,ℓ} f_{s_1,ℓ}, then f − f ′ ∈ I. It is clear that f ′ ∈ T if f ′ = 0. If f ′ ≠ 0, then f ′ = b′_{s_1} a_{s_1} + g′ for some g′ ∈ Xk (k) with deg g′ < deg f = s_1. Thus g′ ∈ ⊕_{i=0}^{s_1−1} k a_i = ⊕_{i=s_0}^{s_1−1} T_0 a_i, so that f ′ is also in T.
Now we assume that the claim has been proved for f with deg f ≤ n − 1 for a given n ≥ 1 and show that the claim is true when deg f = n. Assume that in( f ) = b_n a_n. Let r ∈ {0} ∪ [NI] be such that s_r ≤ n < s_{r+1}. So Ω_n = Ω_{s_r} by Eq. (4), and hence b_n − b′_n ∈ Ω_{s_r} for some b′_n ∈ T_r. Then there exist c_{n,ℓ} ∈ k, ℓ ∈ [N_r], all but finitely many of which being zero, such that b_n − b′_n = \sum_{ℓ∈[N_r]} c_{n,ℓ} ω_{s_r,ℓ}. Taking g = f − P^{n−s_r}( \sum_{ℓ∈[N_r]} c_{n,ℓ} f_{s_r,ℓ} ), we have g = b′_n a_n + g′ for some g′ ∈ Xk (k) with deg g′ < deg f. If g′ = 0, then put f ′ = b′_n a_n, so that f ′ ∈ T and we have f − f ′ ∈ I. If g′ ≠ 0, then deg g′ < deg f = n. By the induction hypothesis, there exists g″ ∈ T with deg g″ ≤ deg g′ such that g′ − g″ ∈ I. If we take f ′ = b′_n a_n + g″, then f − f ′ ∈ I. Since deg g″ ≤ deg g′, we have deg g″ < deg f = n, so that f ′ is an element of T.
Suppose that there is another element h′ in T such that h′ ≠ f ′ and f − h′ ∈ I. Then f ′ − h′ ∈ I. Let f ′ = \sum_{i=0}^{k} c_i a_i and h′ = \sum_{i=0}^{m} b_i a_i, where b_i, c_i ∈ T_r if s_r ≤ i < s_{r+1} for some r ∈ {0} ∪ [NI]. By symmetry, we may assume that k ≤ m and put c_i = 0 for all i with k < i ≤ m. Thus h′ − f ′ = \sum_{i=0}^{m} (b_i − c_i) a_i. Since h′ ≠ f ′, there is a nonnegative integer p with 0 ≤ p ≤ m such that b_p − c_p ≠ 0 and in(h′ − f ′) = (b_p − c_p) a_p. Let t ∈ {0} ∪ [NI] be such that s_t ≤ p < s_{t+1}. Then b_p ∈ T_t, c_p ∈ T_t and b_p − c_p ∈ Ω_{s_t}. But T_t is a complete set of representatives of k modulo Ω_{s_t}, so b_p = c_p, a contradiction.
It remains to show that every element of T represents an element of Xk (k)/I. For any f ′ in T we take f = f ′ in Xk (k); then f − f ′ = 0 ∈ I, whence f ′ ∈ T corresponds to the element f + I in Xk (k)/I. Thus, the underlying set of Xk (k)/I is T and the statement follows at once.
We remark that the underlying k-module T in Eq. (10) is usually not the direct sum of these
k-modules (k/Ωs j )ai for a nonhomogeneous Rota-Baxter ideal I, even though we write it in the
form of a direct sum. The scalar multiplication by k should be defined according to the k-module
Xk (k)/I. We illustrate this by the following example.
Example 3.10. Let XZ (Z) be the free Rota-Baxter Z-algebra on Z of weight 1. Let I be the Rota-Baxter ideal of XZ (Z) generated by the element f = 2(a_1 + a_0). Then XZ (Z)/I and XZ (Z)/in(I)
L
Proof. Since in( f ) = 2a1 , we have 2am+1 = Pm (2a1 ) ∈ in(I) for m > 0 whence in(I) ⊇
2Zai .
i>1
We note from Lemma 3.4 that all elements of I can be obtained from iterated operations on
f by the Rota-Baxter operator P, the scalar product, the multiplication and the addition of the
algebra XZ (Z). On one hand, we have Pm ( f ) = 2(am+1 + am ). On the other hand, it follows from
Eq. (2) that
am f = 2(m + 1)(am+1 + am ).
Thus, any element of I must be of the form
(11)   2c_1(a_1 + a_0) + 2c_2(a_2 + a_1) + · · · + 2c_{n+1}(a_{n+1} + a_n),
where n ∈ N, c_i ∈ Z, 1 ≤ i ≤ n + 1. Consequently, in(I) ⊆ ⊕_{i≥1} 2Z a_i and hence in(I) = ⊕_{i≥1} 2Z a_i.
Clearly, A(in(I)) = {(1, 2Z)}, whence both the underlying sets of XZ (Z)/in(I) and XZ (Z)/I are
Z a_0 ⊕ ( ⊕_{i≥1} Z_2 a_i )
by Theorems 3.6 and 3.9.
For the Z-module XZ (Z)/in(I), it follows from 2a_1 ∈ in(I) that 2a_1 = 0. However, for the Z-module XZ (Z)/I, we have 2a_1 + I = −2a_0 + I so that 2a_1 = −2a_0, which is clearly not zero since −2a_0 is not in I by Eq. (11). Therefore, the Rota-Baxter ideals I and in(I) share the same quotient sets, but not the same Z-modules. From the fact that 2a_1 ≠ 0 in XZ (Z)/I we also see that XZ (Z)/I is not the direct sum of the Z-modules Z a_0 and Z_2 a_i, i ≥ 1.
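The identity a_m f = 2(m + 1)(a_{m+1} + a_m) used in the proof above can be checked mechanically with the product sketch given after Eq. (2); the snippet below is an illustration only, reusing the hypothetical mul function defined there, and verifies the identity for small m at weight λ = 1.

```python
# Reuses mul() from the earlier sketch; f = 2(a_1 + a_0), weight lambda = 1.
f = {1: 2, 0: 2}
for m in range(6):
    product = mul({m: 1}, f, lam=1)
    expected = {m + 1: 2 * (m + 1), m: 2 * (m + 1)}
    assert product == expected, (m, product)
print("a_m * f = 2(m+1)(a_{m+1} + a_m) verified for m = 0, ..., 5")
```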
The following example shows that a similar phenomenon can already be found in the polynomial algebra k[x].
Example 3.11. Let I be the ideal of Z[x] generated by the polynomial 2x + 2. Then the initial ideal in≤(I) is generated by 2x. Thus the ideal I and its initial ideal in≤(I) have the same quotient
set
Z ⊕ ( ⊕_{n≥1} Z_2 x^n ).
However, the two ideals do not have isomorphic quotient groups. For instance, 2x + I = −2 + I is not zero since −2 is not in I, but 2x + in≤(I) = 0 since 2x ∈ in≤(I). Furthermore, 2x + I ≠ 0 shows that Z[x]/I is not the direct sum of the abelian groups Z and Z_2 x^n, n ≥ 1.
4. Characteristics of Rota-Baxter rings
We next focus on the case when k = Z and classify the Rota-Baxter ideals and prime Rota-Baxter ideals of XZ (Z). Note that XZ (Z) is the initial object in the category of unitary Rota-Baxter
rings.
Theorem 4.1. Let I be a Rota-Baxter ideal of XZ (Z). Then there exist a positive integer m and m pairs (s_1, ω_1), · · · , (s_m, ω_m) ∈ N × P with s_j < s_{j+1}, ω_{j+1} ≠ ω_j and ω_{j+1} | ω_j, 1 ≤ j ≤ m − 1, such that A(I) = {(s_1, ω_1 Z), · · · , (s_m, ω_m Z)}. Thus if I is homogeneous then
I = ⊕_{j=1}^{m} ⊕_{i=s_j}^{s_{j+1}−1} ω_j Z a_i
and the quotient XZ (Z)/I is isomorphic to
(12)   ⊕_{j=0}^{m} ⊕_{i=s_j}^{s_{j+1}−1} Z_{ω_j} a_i,
with the convention that s_0 = 0, ω_0 = 0 and s_{m+1} = ∞. If I is nonhomogeneous then XZ (Z)/I has the same underlying set as XZ (Z)/in(I) in Eq. (12).
Conversely, for any positive integer m and m pairs (s_1, ω_1), · · · , (s_m, ω_m) ∈ N × P with s_j < s_{j+1}, ω_{j+1} ≠ ω_j and ω_{j+1} | ω_j, 1 ≤ j ≤ m − 1, there is a Rota-Baxter ideal I of XZ (Z) such that the underlying set of XZ (Z)/I is the construction defined by Eq. (12).
Proof. Let I be a Rota-Baxter ideal of XZ (Z). Since Z satisfies the ascending chain condition on ideals, I has only finitely many ascending points, say s_1, · · · , s_m, where m is a positive integer. Since Z is a PID, we have Ω_{s_j} = ω_j Z for some positive integer ω_j, j = 1, · · · , m. Then, by Lemma 3.1, we have ω_{j+1} ≠ ω_j and ω_{j+1} | ω_j, 1 ≤ j ≤ m − 1. It follows from the definition of A(I) that A(I) = {(s_1, ω_1 Z), · · · , (s_m, ω_m Z)}. Consequently, in view of Theorems 3.6 and 3.9, the first part of the theorem follows.
Conversely, take Ω_{s_j} = ω_j Z. Then it follows from s_j < s_{j+1}, ω_{j+1} ≠ ω_j and ω_{j+1} | ω_j, 1 ≤ j ≤ m − 1, that Ω_{s_1} ⊊ Ω_{s_2} ⊊ · · · ⊊ Ω_{s_m}. So by Theorem 3.9, there is a Rota-Baxter ideal I of XZ (Z) such that the underlying set of XZ (Z)/I is the construction defined by Eq. (12).
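A small sketch, illustrative only and not from the paper, of the data classified by Theorem 4.1: it checks that a proposed family of pairs (s_j, ω_j) satisfies the monotonicity and divisibility conditions, and then lists the graded pieces Z_{ω_j} of the quotient in Eq. (12) degree by degree.

```python
def valid_ascent_data(pairs):
    """pairs = [(s_1, w_1), ..., (s_m, w_m)] with s_j < s_{j+1}, w_{j+1} != w_j and w_{j+1} | w_j."""
    for (s, w), (s2, w2) in zip(pairs, pairs[1:]):
        if not (s < s2 and w2 != w and w % w2 == 0):
            return False
    return all(s >= 0 and w >= 1 for s, w in pairs)

def quotient_pieces(pairs, up_to):
    """For i = 0, ..., up_to return the modulus w such that the degree-i piece of the quotient is Z_w
    (w = 0 means a copy of Z), following Eq. (12) with s_0 = 0 and omega_0 = 0."""
    pieces = []
    for i in range(up_to + 1):
        w = 0
        for s, om in pairs:
            if i >= s:
                w = om
        pieces.append(w)
    return pieces

data = [(2, 12), (4, 4), (6, 2)]
print(valid_ascent_data(data))    # True
print(quotient_pieces(data, 8))   # [0, 0, 12, 12, 4, 4, 2, 2, 2]
```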
Let k be an integral domain with characteristic 0. The next proposition tells us that if Ω_t = kω_t is a principal ideal for t = st(I), then the element f ∈ I with in( f ) = ω_t a_t is completely determined by ω_t and the number of nonzero terms in f.
Proposition 4.2. Let k be an integral domain with characteristic 0 and Xk (k) the free Rota-Baxter algebra of weight λ, and let I ⊆ Xk (k) be a nonzero Rota-Baxter ideal. Suppose that st(I) = t and Ω_t = kω_t.
(a) If f = \sum_{i=r}^{t} c_i a_i ∈ I with c_t = ω_t and c_r ≠ 0, then
c_i = \binom{t−r}{t−i} λ^{t−i} c_t,   r ≤ i ≤ t − 1.
(b) If λ = 0, then I ⊆ ⊕_{i≥t} k a_i.
(c) If λ = 0 and k is a field, then I = ⊕_{i≥t} k a_i.
Proof. (a) Since I is a Rota-Baxter ideal and f = \sum_{i=r}^{t} c_i a_i is in I, we have g_1 := (t+1)P( f ) − a_1 f ∈ I.
Now from Eq. (2) it follows that
a1 f = (t + 1)ct at+1 +
t
X
i=r+1
i(ci−1 + λci )ai + λrcr ar ,
which together with
\[ (t+1)P(f) = (t+1)\sum_{i=r}^{t} c_i a_{i+1} = (t+1)\sum_{i=r+1}^{t+1} c_{i-1} a_i \]
implies that
\[ g_1 = \sum_{i=r+1}^{t} \big((t+1-i)c_{i-1} - \lambda i c_i\big)\, a_i - \lambda r c_r a_r. \]
Thus, the coefficient of a_t is in Ω_t, that is, c_{t−1} − λtc_t is in kc_t, so that c_{t−1} is in kc_t. Hence there exists b ∈ k such that c_{t−1} = bc_t. Then
\[ g_2 := g_1 - (b - \lambda t) f = \sum_{i=r+1}^{t} \big((t+1-i)c_{i-1} + (\lambda(t-i) - b)c_i\big)\, a_i + (\lambda(t-r) - b)\, c_r a_r \]
is in I. But the coefficient of a_t in g_2 is 0, so the fact that st(I) = t yields that g_2 = 0, which is equivalent to (λ(t − r) − b)c_r = 0 and
\[ (t+1-i)\,c_{i-1} + (\lambda(t-i) - b)\,c_i = 0, \qquad r+1 \leq i \leq t. \quad (13) \]
By the hypotheses that k is an integral domain and c_r ≠ 0, it follows from (λ(t − r) − b)c_r = 0 that b = λ(t − r). Consequently, Eq. (13) becomes
\[ (t+1-i)\,c_{i-1} = \lambda(i-r)\,c_i, \qquad r+1 \leq i \leq t. \]
So for each r ≤ i ≤ t − 1, we have
\[ \prod_{j=i+1}^{t} (t+1-j)\,c_{j-1} = \prod_{j=i+1}^{t} \lambda(j-r)\,c_j, \]
which is equivalent to
\[ (t-i)!\prod_{j=i+1}^{t} c_{j-1} = \lambda^{t-i}\,\frac{(t-r)!}{(i-r)!}\prod_{j=i+1}^{t} c_j. \]
Since k is an integral domain with characteristic 0, we obtain
\[ c_i = \lambda^{t-i}\,\frac{(t-r)!}{(t-i)!\,(i-r)!}\,c_t = \binom{t-r}{t-i}\lambda^{t-i} c_t, \qquad r \leq i \leq t-1. \]
This completes the proof of part (a).
(b) Take arbitrary f ∈ I. Since st(I) = t, we may assume that
\[ f = b_m a_m + b_{m-1} a_{m-1} + \cdots + b_t a_t + f_0 \in I, \]
where m ≥ t and deg f_0 ≤ t − 1. Note that Ω_t = kω_t. So there exists an element g in I with ω_t a_t as the initial term. Since λ = 0, by part (a), we must have that g = ω_t a_t is in I. So ω_t a_m = P^{m−t}(ω_t a_t) is in I. Then h := ω_t f − b_m ω_t a_m is in I, that is,
\[ h = b_{m-1}\,\omega_t a_{m-1} + \cdots + b_t\,\omega_t a_t + \omega_t f_0 \quad (14) \]
is in I. Since P(I) ⊆ I, we have ω_t a_ℓ = P^{ℓ−t}(ω_t a_t) ∈ I for all t ≤ ℓ ≤ m − 1, so that b_{m−1}ω_t a_{m−1} + · · · + b_t ω_t a_t ∈ I. Then Eq. (14) implies that ω_t f_0 ∈ I. Note that deg f_0 ≤ t − 1, which together with st(I) = t yields ω_t f_0 = 0. But k is an integral domain, so f_0 = 0. Therefore, I ⊆ ⊕_{i≥t} k a_i and we are done.
(c) Since λ = 0, we know that ω_t a_t is in I. But k is a field, so Ω_t = k, whence ⊕_{i≥t} k a_i ⊆ I, which together with part (b) gives I = ⊕_{i≥t} k a_i.
We finally give a classification for the prime Rota-Baxter ideals of XZ (Z) when λ = 0. Recall
that a Rota-Baxter ideal I of a Rota-Baxter algebra (R, P) is said to be prime if it is a prime ideal
with P(I) ⊆ I. If λ = 0, then for any m, n ∈ N, it follows from Eq. (2) that
\[ a_m a_n = \binom{m+n}{m}\, a_{m+n}. \quad (15) \]
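Iterating Eq. (15) gives, for example,
\[ a_1^2 = \binom{2}{1} a_2 = 2!\, a_2, \qquad a_1^3 = a_1 \cdot 2a_2 = 2\binom{3}{1} a_3 = 3!\, a_3, \]
and, by induction, a_1^n = n!\, a_n for all n ≥ 1; this identity is used in the proof of Theorem 4.3 below.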
Theorem 4.3. Let XZ (Z) be the free Rota-Baxter Z-algebra on Z of weight 0 and let I be a proper
nonzero Rota-Baxter ideal of XZ(Z). Then I is prime if and only if
\[ I = p\mathbb{Z}\, a_0 \oplus \bigoplus_{i \geq 1} \mathbb{Z}\, a_i, \]
where p is either 0 or a prime number of Z.
By Theorems 4.1 and 4.3, the quotient of XZ(Z) by a prime characteristic of a Rota-Baxter ring is either Z or Z/pZ for a prime number p, as in the case of the prime characteristic of a ring.
Proof. Let I be a prime Rota-Baxter ideal. Denote t = st(I) and Ωt = ωZ for some positive integer
ω. Then, by Proposition 4.2(a), the element in I of the form
ωat + lower degree terms
must be ωat , that is, ωat ∈ I.
If t ≥ 1, then ωa_{t−1} ∉ I since st(I) = t. From Eq. (15) we know that a_1(ωa_{t−1}) = tωa_t ∈ I. Since I is prime, we have a_1 ∈ I, which means that t = st(I) ≤ 1 and hence t = 1. Now a_1 ∈ I gives 1 ∈ Ω_1 so that Ω_1 = Z. By Lemma 3.1, Ω_j = Z for all positive integers j. Therefore,
\[ I = \bigoplus_{i \geq 1} \mathbb{Z}\, a_i. \]
If t = 0, then ωa_0 ∈ I. Since I is a prime ideal and a_0 is the identity of XZ(Z), ω must be a prime number. Let ω = p. Then p ≥ 2 and pa_0 ∈ I. So a_1^p = p!\, a_p = (p − 1)!\, P^p(pa_0) ∈ I, which means that a_1 is in the prime ideal I. Hence I is generated by pa_0 and a_1. Therefore,
\[ I = p\mathbb{Z}\, a_0 \oplus \bigoplus_{i \geq 1} \mathbb{Z}\, a_i. \]
Conversely, I is a homogeneous Rota-Baxter ideal generated by either {a_1} or {pa_0, a_1}, where p is a prime number. By Theorem 4.1, if I is generated by {a_1}, then XZ(Z)/I is isomorphic to Z. If I is generated by {pa_0, a_1}, then XZ(Z)/I is isomorphic to Z_p. In either case, I is a prime ideal.
Acknowledgements: This work was supported by NSFC grants 11426183 and 11501467, and by the Chongqing Research Program of Application Foundation and Advanced Technology (No. cstc2014jcyjA00028).
H. Yu would like to thank the NYU Polytechnic School of Engineering for hospitality and support.
Department of Mathematics and Computer Science, Rutgers University, Newark, NJ 07102, USA
E-mail address: [email protected]
School of Mathematics and Statistics, Southwest University, Chongqing, China
E-mail address: [email protected]
Inactivation Decoding of LT and Raptor Codes:
Analysis and Code Design
arXiv:1706.05814v1 [] 19 Jun 2017
Francisco Lázaro, Student Member, IEEE, Gianluigi Liva, Senior Member, IEEE,
Gerhard Bauch, Fellow, IEEE
Abstract—In this paper we analyze LT and Raptor codes under
inactivation decoding. A first order analysis is introduced, which
provides the expected number of inactivations for an LT code, as
a function of the output distribution, the number of input symbols
and the decoding overhead. The analysis is then extended to the
calculation of the distribution of the number of inactivations.
In both cases, random inactivation is assumed. The developed
analytical tools are then exploited to design LT and Raptor codes,
enabling a tight control on the decoding complexity vs. failure
probability trade-off. The accuracy of the approach is confirmed
by numerical simulations.
Index Terms—Fountain codes, LT codes, Raptor codes, erasure
correction, maximum likelihood decoding, inactivation decoding.
I. INTRODUCTION
FOUNTAIN codes [3], [4] provide an efficient solution
for data delivery to large user populations over broadcast
channels. The result is attained by encoding the data object
(e.g., a file) through an (n, k) linear code, where the number
of output (i.e., encoded) symbols n can grow indefinitely, enabling a simple rate adaptation to the channel conditions. Due
to this, fountain codes are often regarded as rateless codes.
Fountain codes were originally conceived for transmission
over erasure channels. In practice, different users experience
a different channel quality resulting in a different erasure
probability. When an efficient fountain code is employed, after
a user has received m = k +δ output symbols, with δ small, the
original input symbols can be recovered with high probability.
Luby transform (LT) codes were introduced in [5] as a first
example of practical fountain codes. In [5] an iterative erasure
decoding algorithm was introduced that performs remarkably
Francisco Lázaro and Gianluigi Liva are with the Institute
of Communications and Navigation of the German Aerospace
Center (DLR), Muenchner Strasse 20, 82234 Wessling, Germany.
Email:{Francisco.LazaroBlasco, Gianluigi.Liva}@dlr.de.
Gerhard Bauch is with the Institute for Telecommunication, Hamburg
University of Technology, Hamburg, Germany. E-mail: [email protected].
Corresponding Address: Francisco Lázaro, KN-SAN, DLR, Muenchner
Strasse 20, 82234 Wessling, Germany. Tel: +49-8153 28-3211, Fax: +49-8153
28-2844, E-mail: [email protected].
This work has been presented in part at the 54th Annual Allerton Conference on Communication, Control, and Computing, Monticello, Illinois, USA,
September 2015 [1], and at the 2014 IEEE Information Theory Workshop,
Hobart, Tasmania, November 2014 [2].
This work has been accepted for publication in IEEE Transactions on
Communications, DOI: 10.1109/TCOMM.2017.2715805
© 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
well when the number of input symbols k is large. Raptor
codes [6]–[8] address some of the shortcomings of LT codes.
A Raptor code is a serial concatenation of an (h, k) outer linear
block code with an inner LT code.
Most of the literature on LT and Raptor codes considers
iterative decoding (see e.g. [9]–[15]). In [9], [14], [16] LT
codes under iterative decoding were analyzed using a dynamic
programming approach. This analysis models the iterative
decoder as a finite state machine and it can be used to derive
the probability of decoding failure of LT codes under iterative
decoding. Iterative decoding is particularly effective for large
input block sizes, with k at least in the order of several tens
of thousands symbols [8]. However, in practice, moderate and
small values of k are often used, due to memory limitations at
the receiver side, or due to the fact that the piece of data to be
transmitted is small. For example, the recommended value of
k for the Raptor codes standardized in [17] is between 1024
and 8192 symbols. In this regime, iterative decoding of LT
and Raptor codes turns out to be largely sub-optimum. In actual
implementations, different decoding algorithms are used. In
particular, an efficient maximum likelihood (ML) decoding
algorithm usually referred to as inactivation decoding [18],
[19] is frequently employed.
Inactivation decoding belongs to a large class of Gaussian
elimination algorithms tailored to the solution of large sparse
linear systems [20]–[25]. The algorithm can be seen as an
extension of iterative decoding, where whenever the iterative
decoding process stops, a variable (input symbol) is declared
as inactive (i.e., removed from the equation system) and
iterative decoding is resumed. At the end of the process, the
inactive variables have to be solved using Gaussian elimination. If a unique solution is found, it is possible to recover all
the remaining input symbols by back-substitution (i.e., using
iterative decoding). A few works addressed the performance
of LT and Raptor codes under inactivation decoding. The
decoding failure probability of several types of LT codes
was thoroughly analyzed via tight lower and upper bounds in
[26]–[29]. In [30] the distance spectrum of fixed-rate Raptor
codes is characterized, which enables the evaluation of their
performance under ML erasure decoding (e.g., via upper union
bounds [31], [32]). Upper bounds on the failure probability
for binary and non-binary Raptor codes are introduced in
[33]. However, none of these works addressed inactivation
decoding from a complexity viewpoint. In fact, the complexity
of inactivation decoding grows with the third power of the
number of inactivations [24]. It is hence of large practical
interest to develop a code design which leads (on average) to
a small number of inactivations.
In [25], inactivation decoding of low-density parity-check
(LDPC) codes was analyzed, providing a reliable estimate
of the expected number of inactivations. The approach of
[25] relies on bridging the inactivation decoder with the
Maxwell decoder of [34]. By establishing an equivalence
between inactive and guessed variables, it was shown how
to exploit the Area Theorem [35] to derive the average
number of inactivations. Though, the approach of [25] cannot
be extended to LT and Raptor codes due to their rateless
nature. In [36] the authors proposed a new degree distribution
design criterion to design the LT component of Raptor codes
assuming inactivation decoding. However, when this criterion
is employed there is not a direct control on the average output
degree. In [37], the authors present a finite length analysis of
batched sparse codes under inactivation decoding that provides
the expected number of inactivations needed for decoding.
In this paper we provide a thorough analysis of LT and Raptor codes under inactivation decoding. A first order analysis
is introduced, which allows estimating the expected number
of inactivations for an LT code, as a function of the output
distribution, the number of input symbols and the decoding
overhead. The analysis is then extended to the calculation of
the distribution of the number of inactivations. In both cases,
random inactivation is assumed. The developed analytical tools
are then exploited to design LT and Raptor codes, enabling a
tight control on the decoding complexity vs. failure probability
trade-off. The work presented in this paper is an extension of
the work in [1]. In particular, the different proofs in [1] have
been modified and extended. Furthermore, in this work we
not only consider LT codes but also a class of Raptor Codes.
An analysis based on a Poisson approximation is presented in the Appendix.¹
The rest of the paper is structured as follows. In Section II
we present the system model considered in this paper. Section III contains a detailed description of inactivation decoding
applied to LT codes. In Section IV we focus on the analysis
of LT codes under inactivation decoding. Section V shows
how to exploit the analysis of the LT component of a Raptor
code to estimate the number of inactivations for the Raptor
code. A Raptor code design methodology is presented too.
The conclusions are presented in Section VI.
II. PRELIMINARIES
Let v = (v1, v2, ..., vk ) be the k input symbols, out
of which the LT encoder generates the output symbols
c = (c1, c2, ..., cn), where n can grow indefinitely. Each output
symbol is generated by first sampling a degree d from an
output degree distribution Ω = (Ω1, Ω2, Ω3, . . . Ωdmax ), where
dmax is the maximum output degree. In the following, we
1 In the Appendix we provide a simplified analysis, which, though related
to the one in [2], is fully developed here on the basis of the analysis provided
in Section IV of this paper. The simplified analysis is based on a Poisson
approximation, which is analyzed in depth in the Appendix, discussing its
limitations and comparing the analytical results with Monte Carlo simulations.
[Fig. 1. Structure of the matrix G0^T: an m × k array in which dark squares represent the non-zero elements of G0^T.]
will make use of a polynomial representation of the degree
distribution in the form
\[ \Omega(x) := \sum_{d} \Omega_d\, x^d \]
where x is a dummy variable. Then, d distinct input symbols
are selected uniformly at random and their XOR is computed
to generate the output symbol. The relation between output
and input symbols can be expressed as
c = vG
where G is the generator matrix of the LT code.
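To make the encoding rule concrete, here is a minimal illustrative Python sketch (it is not taken from the paper; the function and variable names are ours, symbols are modeled as integers combined by bitwise XOR, and Ω is passed as the list of probabilities Ω1, ..., Ωdmax):

import random

def lt_encode_symbol(v, omega, rng=random):
    # v     : list of k input symbols (integers, combined by bitwise XOR)
    # omega : [Omega_1, ..., Omega_dmax], the output degree distribution
    k = len(v)
    # Sample the output degree d from Omega.
    d = rng.choices(range(1, len(omega) + 1), weights=omega, k=1)[0]
    # Select d distinct input symbols uniformly at random.
    neighbors = rng.sample(range(k), d)
    # The output symbol is the XOR of the selected input symbols;
    # the neighbor list corresponds to one column of the generator matrix G.
    c = 0
    for i in neighbors:
        c ^= v[i]
    return c, neighbors

Calling lt_encode_symbol repeatedly yields as many output symbols as desired, which is the rateless property discussed above.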
The output symbols are transmitted over a binary erasure
channel (BEC) at the output of which the receiver collects
m = k + δ output symbols, where δ is referred to as absolute
receiver overhead. Similarly, the relative receiver overhead is
defined as the ratio δ/k. Denoting by y = (y1, y2, . . . , ym) the m received output symbols and by I = {i1, i2, . . . , im} the set of indices corresponding to the m non-erased symbols, we have y_j = c_{i_j}.
This allows us to express the dependence of the received
output symbols on the input symbols as
G0^T v^T = y^T    (1)
with G0 given by the m columns of G with indices in I.
ML erasure decoding requires the solution of (1). Note that
decoding is successful (i.e., the unique solution can be found)
if and only if rank [G0 ] = k.
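The rank condition can be tested directly with Gaussian elimination over GF(2). The sketch below is one illustrative way to do it (not code from the paper): each row of G0^T is stored as a Python integer whose bit i is the entry in column i, for instance row = sum(1 << i for i in neighbors) if the neighbor lists of the received symbols are available.

def gf2_rank(rows, k):
    # rows : iterable of integers, the rows of G0^T as bit masks over k columns.
    # Decoding succeeds if and only if the returned rank equals k.
    basis = [0] * k            # basis[j] holds a reduced row whose leading bit is j
    rank = 0
    for row in rows:
        for j in reversed(range(k)):
            if not (row >> j) & 1:
                continue
            if basis[j] == 0:  # new pivot found in column j
                basis[j] = row
                rank += 1
                break
            row ^= basis[j]    # eliminate bit j and keep reducing
        if rank == k:          # early exit once full rank is reached
            break
    return rank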
III. INACTIVATION DECODING OF LT CODES
Inactivation decoding is an efficient ML erasure decoding
algorithm, which can be used to solve the system of equations
in (1). We will illustrate inactivation decoding by means of an
example and with the aid of Figures 1 and 2. In the example,
we fix k = 50 and m = 60. The structure of G0T is given in
Figure 1 (in the figure, dark squares represent the non-zero
elements of G0T ). Inactivation decoding consists of 4 steps.
[Fig. 2. Structure of G0^T as inactivation decoding proceeds: (a) Triangulation process; (b) Zero matrix procedure; (c) Gaussian elimination; (d) Back-substitution. Block dimensions are marked as k − α, α, and m − k + α, where α is the number of inactive columns.]
Step 1 (Triangulation). Initially, G0T is put in an approximate lower triangular form by means of column and row
permutations only. Given that no operation is performed on
the rows or columns of G0T , the density of G0T is preserved.
At the end of the triangulation process, G0^T can be partitioned into four submatrices as
\[ \begin{bmatrix} \mathbf{A} & \mathbf{D} \\ \mathbf{B} & \mathbf{C} \end{bmatrix}, \]
as depicted in Figure 2a. In the upper left part we have
the (k − α) × (k − α) lower triangular matrix A. The matrix
under submatrix A is denoted by B and it has dimension
(m − k + α) × (k − α). The upper right part is given by the
(k − α) × α submatrix D, while the lower right submatrix is
denoted by C and it has dimension (m − k + α) × α. The α
rightmost columns of G0T (corresponding to matrices C and
D) are usually referred to as inactive columns. Note that the
manipulation of G0T is typically performed through inactivation
or pivoting algorithms (see e.g. [24], [38]), which aim at
reducing the number of inactive columns. In the rest of this work, we will assume the use of the random inactivation algorithm [38], as will be detailed later in this section.
Step 2 (Zero matrix procedure). Matrix A is put in a
diagonal form and matrix B is zeroed out by means of row
sums. As a result, on average the density of the matrices C
and D increases (Figure 2b).
Step 3 (Gaussian elimination). Gaussian elimination (GE)
is applied to solve the system of equations associated with C.
Since C is in general dense, the complexity of this step is cubic
in the number of inactive columns α. As observed in [24],
this step drives, for large equation systems, the complexity of
inactivation decoding. After performing GE, matrix G0T has
the structure shown in Figure 2c.
Step 4 (Back substitution). If Step 3 succeeds, after determining the value of the inactive variables (i.e., of the
input symbols associated with the inactive columns), back-substitution is applied to compute the values of the remaining
variables in v. This is equivalent to setting to zero all elements
of matrix D in Figure 2c. At the end G0T shows a diagonal
structure as shown in Figure 2d, and all input symbols are
recovered.
Recalling that a unique solution to the system of equations
exists if and only if G0 is full rank, we have that decoding
succeeds if and only if the rank of C at Step 3 equals the
number of inactive variables.
Given that the number of inactive variables is determined
by the triangulation step, and that the GE on C dominates the
decoding complexity for large k, a first analysis of inactivation
decoding can be addressed by focusing on the triangulation
procedure.
A. Bipartite Graph Representation of Inactivation Decoding
We introduce next a bipartite graph representation of the
LT code from the receiver perspective. The graph comprises
m + k nodes, divided in two sets. The first set consists of k
input symbol nodes, one per input symbol, while the second
set consists of m output symbol nodes, one per received output
symbol. We denote the set of input symbol nodes as V , and the
input symbol nodes in V as v1, v2, . . . , vk . Similarly, we denote
the set of output symbol nodes as Y with output symbol nodes
denoted by y1, y2, . . . , ym . Here, we purposely referred to input
and output symbol nodes with the same notation used for
their respective input and output symbols to emphasize the
correspondence between the two sets of nodes and the sets
of input and received output symbols. For simplicity, in the
following we will refer to input (output) symbol nodes and
to input (output) symbols interchangeably. An input symbol
node vi is connected by an edge to an output symbol node
y j if and only if the corresponding input symbol contributes
to the generation of the corresponding output symbol, i.e., if
and only if the (i, j) element of G0 is equal to 1. We denote
by deg(v) (or deg(y)) the degree of a node v (or y), i.e., the
number of its adjacent edges.
Triangulation can be modelled as an iterative pruning of the
bipartite graph of the LT code. At each iteration, a reduced
graph is obtained, which corresponds to a sub-graph of the
original LT code graph. The sub-graph involves only a subset
of the original input symbols, and their neighboring output
symbols. The input symbols in the reduced graph will be
referred to as active input symbols. We shall use the term
reduced degree of a node to refer to the degree of a node in
the reduced graph. Obviously, the reduced degree of a node is
less than or equal to its original degree. We denote by red(v)
(or red(y)) the reduced degree of a node v (or y). Let us
now introduce some additional definitions that will be used to
model the triangulation step.
Definition 1 (Ripple). We define the ripple as the set of output
symbols of reduced degree 1 and we denote it by R.
The cardinality of the ripple is denoted by r and the corresponding random variable as R.
Definition 2 (Cloud). We define the cloud as the set of output
symbols of reduced degree d ≥ 2 and we denote it by C .
The cardinality of the cloud is denoted by c and the corresponding random variable as C.
Initially, all input symbols are marked as active, i.e., the
reduced sub-graph coincides with the original graph. At every
step of the process, one active input symbol is marked as either
resolvable or inactive and leaves the graph. After k steps no
active symbols are present and triangulation ends. In order
to keep track of the steps of the triangulation procedure, the
temporal dimension will be added through the subscript u.
This subscript u corresponds to the number of active input
symbols in the graph. Given the fact that the number of active
input symbols decreases by 1 at each step, triangulation will
start with u = k active input symbols and it will end after
k steps with u = 0. Therefore, the subscript decreases as the
triangulation procedure progresses. Triangulation with random
inactivations works as follows. Consider the transition from u
to u − 1 active input symbols. Then,
• If the ripple Ru is not empty (ru > 0), the decoder selects
an output symbol y ∈ Ru uniformly at random. The only
neighbor of y is marked as resolvable and leaves the
reduced graph, while all edges attached to it are removed.
• If the ripple Ru is empty (ru = 0), one of the active
input symbols v is chosen uniformly at random2 and it is
marked as inactive, leaving the reduced graph. All edges
attached to v are removed.
The input symbols marked as inactive are recovered using
Gaussian elimination (step 3). After recovering the inactive
input symbols, the remaining input symbols, those marked
as resolvable, can be recovered by iterative decoding (back
substitution in step 4).
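The procedure just described is easy to simulate directly on the bipartite graph. The following illustrative Python sketch (our own simulation aid, not the decoder used in the paper) represents each received output symbol by the set of its input-symbol neighbors and returns the number of inactivations produced by triangulation with random inactivation:

import random

def triangulate(output_neighbors, k, rng=random):
    # output_neighbors : list of sets of input-symbol indices, one set per output symbol
    # k                : number of input symbols
    # Returns the number of inactivations performed during triangulation.
    nbrs = [set(s) for s in output_neighbors]   # reduced graph (work on copies)
    adj = [set() for _ in range(k)]             # output symbols attached to each input symbol
    for j, s in enumerate(nbrs):
        for i in s:
            adj[i].add(j)
    active = set(range(k))
    inactivations = 0
    for _ in range(k):                          # one active input symbol leaves per step
        ripple = [j for j, s in enumerate(nbrs) if len(s) == 1]
        if ripple:
            y = rng.choice(ripple)              # output symbol of reduced degree 1
            v = next(iter(nbrs[y]))             # its only neighbor becomes resolvable
        else:
            v = rng.choice(tuple(active))       # empty ripple: random inactivation
            inactivations += 1
        active.remove(v)
        for j in adj[v]:                        # remove all edges attached to v
            nbrs[j].discard(v)
        adj[v].clear()
    return inactivations

Averaging the returned count over many random realizations gives an empirical estimate of the quantity analyzed in Section IV.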
Example 1. We provide an example for an LT code with k = 4
input symbols and m = 4 output symbols (Figure 3).
i. Transition from u = 4 to u = 3. Initially, there are
two output symbols in the ripple (r4 = 2) (Figure 3a).
Thus, one of the output symbols in the ripple is randomly selected and its only neighbor is marked as resolvable. In this case the input symbol v1 is the one marked as resolvable, and all its adjacent edges are removed.
The graph obtained after the transition is shown in
Figure 3b. Observe that the nodes y1 and y4 have left
the graph since their reduced degree is now zero.
ii. Transition from u = 3 to u = 2. As shown in Figure 3b
the ripple is now empty (r3 = 0). Thus, an inactivation
2 This is certainly neither the only possible inactivation strategy nor the one
leading to the least number of inactivations. However, this strategy makes the
analysis tractable. For an overview of the different inactivation strategies we
refer the reader to [38].
has to take place. Node v2 is chosen (according to
a random selection) and is marked as inactive. All
edges attached to v2 are removed from the graph. As
a consequence, the nodes y2 and y3 that were in the
cloud C3 enter the ripple R2 (i.e., their reduced degree
is now 1), as shown in Figure 3c.
iii. Transition from u = 2 to u = 1. Now the ripple is
not empty (r2 = 2). An output symbol is selected from the ripple (again, according to a random choice) and its only neighbor, the input symbol v3, is marked as resolvable. All its adjacent edges are
removed. The nodes y2 and y3 leave the graph because
their reduced degree becomes zero (see Figure 3d).
iv. Transition from u = 1 to u = 0. The ripple is now empty
(Figure 3d). Hence, an inactivation takes place: node v4
is marked as inactive and the triangulation procedure
ends.
Note that in this example input symbol v4 has no neighbors,
i.e., the matrix G0 is not full rank and decoding fails.
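For completeness, one bipartite graph that is consistent with this walkthrough (the text does not fully specify the edge set of Figure 3, so the reconstruction below is an assumption) can be fed to the triangulate sketch given earlier:

# Assumed reconstruction of the Example 1 graph: y1-{v1}, y2-{v2,v3}, y3-{v2,v3}, y4-{v1},
# with v4 isolated (input symbols v1..v4 mapped to indices 0..3).
example_graph = [{0}, {1, 2}, {1, 2}, {0}]
print(triangulate(example_graph, k=4))   # prints 2: two inactivations, as in the walkthrough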
IV. ANALYSIS UNDER RANDOM INACTIVATION
In this section we analyze the triangulation procedure with
the objective of determining the average number of inactivations, i.e., the expected number of inactive input symbols at
the end of the triangulation process. We will then extend the
analysis to obtain the probability distribution of the number
of inactivations.
A. Average Number of Inactivations
Following [9], [14], [16], we model the decoder as a finite
state machine with state at time u given by the cloud and the
ripple sizes at time u, i.e.
Su := (Cu, Ru ).
We aim at deriving a recursion for the decoder state, that is,
to obtain Pr{Su−1 = (cu−1, ru−1 )} as a function of Pr{Su =
(cu, ru )}. We do so by analyzing how the ripple and cloud
change in the transition from u to u − 1. In the transition
exactly one active input symbol is marked as either resolvable
or inactive and all its adjacent edges are removed. Whenever
edges are removed from the graph, the reduced degree of
one or more output symbols decreases. Consequently, some
of the symbols in the cloud may enter the ripple and some of the
symbols in the ripple may become of reduced degree zero and
leave the graph.
We focus first on the symbols that leave the cloud and
enter the ripple during the transition at step u, conditioned on
Su = (cu, ru ). Since for an LT code the neighbors of the output
symbols are selected independently and uniformly at random,
in the transition each output symbol may leave the cloud and
enter the ripple independently from the other output symbols.
Hence, the number of symbols leaving Cu and entering Ru−1
is binomially distributed with parameters cu and Pu with
\[ P_u := \Pr\{Y \in R_{u-1} \mid Y \in C_u\} = \frac{\Pr\{Y \in R_{u-1},\, Y \in C_u\}}{\Pr\{Y \in C_u\}} \quad (2) \]
[Fig. 3. Triangulation procedure example: the bipartite graph of the LT code, with the active input symbols, ripple, and cloud shown at (a) u = 4, (b) u = 3, (c) u = 2, (d) u = 1, (e) u = 0; input symbols marked as resolvable and inactive are labelled r and i.]
where random variable Y represents a randomly chosen output
symbol.
We first consider the numerator of (2). Conditioning on the
original degree of Y , we have the following proposition.
Proposition 1. The probability that an output symbol Y belongs to the cloud at step u and enters the ripple at step u − 1, conditioned on its original degree being d, is
\[ \Pr\{Y \in R_{u-1},\, Y \in C_u \mid \deg(Y) = d\} = \begin{cases} (u-1)\binom{k-u}{d-2}\binom{k}{d}^{-1} & \text{if } 2 \leq d \leq k-u+2, \\ 0 & \text{otherwise.} \end{cases} \quad (3) \]
Proof: For an output symbol Y of degree d to belong to the cloud at step u and to the ripple at step u - 1, the output symbol must have reduced degree 2 before the transition and reduced degree 1 after the transition. For this to happen two events must take place:
• Y ∈ R_{u-1}: one of the d edges of output symbol Y is connected to the symbol being marked as inactive or resolvable at the transition;
• Y ∈ C_u: another edge is connected to one of the u - 1 symbols that remain active after the transition, and at the same time the remaining d - 2 edges are connected to the k - u non-active input symbols (inactive or resolvable).
The joint probability of Y ∈ Ru−1 and Y ∈ Cu can be
derived as
Pr{Y ∈ Ru−1, Y ∈ Cu } = Pr{Y ∈ Ru−1 } Pr{Y ∈ Cu |Y ∈ Ru−1 }
Let us focus first on Y ∈ R_{u-1}. We consider an output symbol of degree d; thus this is simply the probability that one of the d edges of Y is connected to the symbol being marked as inactive or resolvable at the transition, Pr{Y ∈ R_{u-1}} = d/k.
Let us now focus on Pr{Y ∈ Cu |Y ∈ Ru−1 }. Since we
condition on Y ∈ Ru−1 we have that one of the d edges of
Y is already assigned to one input symbol. We now need to
consider the remaining d − 1 edges and k − 1 input symbols.
The probability that one of the d − 1 edges is connected to
the set of u − 1 active symbols is (d − 1)(u − 1)/(k − 1). At
the same time we must have exactly d − 2 edges going to the
k − u input symbols that were not active before the transition.
Hence, we have

Pr{Y ∈ C_u | Y ∈ R_{u-1}} = (d - 1) \frac{u - 1}{k - 1} \binom{k-u}{d-2} \binom{k-2}{d-2}^{-1}.
By multiplying the two probabilities, applying a few manipulations, and making explicit the conditioning on deg(Y) = d, we get

Pr{Y ∈ R_{u-1}, Y ∈ C_u | deg(Y) = d} = (u - 1) \binom{k-u}{d-2} \binom{k}{d}^{-1}.

We shall anyhow observe that if deg(Y) < 2, the output symbol cannot belong to the cloud. Moreover, one needs to impose the condition d - 2 ≤ k - u, since output symbols choose their neighbors without replacement (an output symbol cannot have more than one edge going to an input symbol). Hence, for d < 2 or d > k - u + 2, the probability Pr{Y ∈ R_{u-1}, Y ∈ C_u | deg(Y) = d} is zero.
By removing the conditioning on the degree of Y in (3), we have

Pr{Y ∈ R_{u-1}, Y ∈ C_u} = (u - 1) \sum_{d=2}^{k-u+2} Ω_d \binom{k-u}{d-2} \binom{k}{d}^{-1}.
Let us now focus on the denominator of (2) which gives the
probability that a randomly chosen output symbol Y is in the
cloud when u input symbols are still active. This probability
is provided by the following proposition.
Proposition 2. The probability that the randomly chosen output symbol Y is in the cloud when u input symbols are still active is

Pr{Y ∈ C_u} = 1 - u \sum_{d=1}^{k-u+1} Ω_d \binom{k-u}{d-1} \binom{k}{d}^{-1} - \sum_{d=1}^{k-u} Ω_d \binom{k-u}{d} \binom{k}{d}^{-1}.   (4)
Proof: The probability of Y not being in the cloud is given by the probability of Y having reduced degree 0 or being in the ripple. Given that the two events are mutually exclusive, we can compute such probability as the sum of the probabilities of the two events,

Pr{Y ∈ C_u} = 1 - Pr{{Y ∈ R_u} ∪ {red_u(Y) = 0}} = 1 - Pr{Y ∈ R_u} - Pr{red_u(Y) = 0}   (5)

where red_u(Y) denotes the reduced degree of output symbol Y when u input symbols are still active. We first focus on the probability of Y being in the ripple. Let us assume Y has original degree d. The probability that Y has reduced degree 1 equals the probability of Y having exactly one neighbor among the u active input symbols and the remaining d - 1 neighbors among the k - u non-active (i.e., solved or inactive) ones. Thus, we have

Pr{Y ∈ R_u | deg(Y) = d} = u \binom{k-u}{d-1} \binom{k}{d}^{-1}.   (6)

The probability of Y having reduced degree 0 is the probability that all d neighbors of Y are in the k - u non-active symbols. The total number of different edge assignments is \binom{k}{d}, and the total number of edge assignments in which all d neighbors of Y are in the k - u non-active symbols is \binom{k-u}{d}. Thus we have

Pr{red_u(Y) = 0 | deg(Y) = d} = \binom{k-u}{d} \binom{k}{d}^{-1}.   (7)

By removing the conditioning on d in (6) and (7) and by replacing the corresponding results in (5) we obtain (4).
The expression of P_u is finally given in the following proposition.

Proposition 3. The probability P_u that a randomly chosen output symbol leaves the cloud C_u and enters the ripple R_{u-1} after the transition from u to u - 1 active input symbols is

P_u = \frac{(u - 1) \sum_{d=2}^{k-u+2} Ω_d \binom{k-u}{d-2} \binom{k}{d}^{-1}}{1 - u \sum_{d=1}^{k-u+1} Ω_d \binom{k-u}{d-1} \binom{k}{d}^{-1} - \sum_{d=1}^{k-u} Ω_d \binom{k-u}{d} \binom{k}{d}^{-1}}.
Proof: The proof follows directly from (2) and Propositions 1 and 2.
Let us now focus on the number au of symbols leaving the
ripple during the transition from u to u − 1 active symbols. We
denote by Au the random variable associated with au . Two
cases shall be considered. In a first case, no inactivation takes
place because the ripple is not empty. Thus, an output symbol
Y is chosen at random from the ripple and its only neighbor
v is marked as resolvable and removed from the graph. By
removing v from the graph, other output symbols that are
connected to v and that are in the ripple leave the ripple. Thus,
for ru > 0 we have
Pr{A_u = a_u | R_u = r_u} = \binom{r_u - 1}{a_u - 1} (1/u)^{a_u - 1} (1 - 1/u)^{r_u - a_u}   (8)
with 1 ≤ au ≤ ru . In the second case, the ripple is empty
(ru = 0) and an inactivation takes place. Since no output
symbols can leave the ripple, we have
Pr{A_u = a_u | R_u = 0} = 1 if a_u = 0, and 0 if a_u > 0.   (9)
We are now in the position to derive the transition probability Pr{S_{u-1} = (c_{u-1}, r_{u-1}) | S_u = (c_u, r_u)}. Let us introduce the cloud drift b_u to denote the variation of the number of cloud elements in the transition from u to u - 1 active symbols, i.e.,

b_u := c_u - c_{u-1}.

We can now express the cloud and ripple cardinality after the transition respectively as

c_{u-1} = c_u - b_u
r_{u-1} = r_u - a_u + b_u.

Since output symbols are generated independently, the random variable associated with b_u is binomially distributed with parameters c_u and P_u, and making use of (8), we obtain the following recursion

Pr{S_{u-1} = (c_u - b_u, r_u - a_u + b_u) | S_u = (c_u, r_u)} = \binom{c_u}{b_u} P_u^{b_u} (1 - P_u)^{c_u - b_u} \binom{r_u - 1}{a_u - 1} (1/u)^{a_u - 1} (1 - 1/u)^{r_u - a_u}   (10)
which is valid for r_u > 0, while for r_u = 0, according to (9), we have

Pr{S_{u-1} = (c_u - b_u, b_u) | S_u = (c_u, 0)} = \binom{c_u}{b_u} P_u^{b_u} (1 - P_u)^{c_u - b_u}.   (11)

Note that the probability of S_{u-1} = (c_{u-1}, r_{u-1}) can be computed recursively via (10), (11) by initializing the decoder state as

Pr{S_k = (c_k, r_k)} = \binom{m}{r_k} Ω_1^{r_k} (1 - Ω_1)^{c_k}

for all non-negative c_k, r_k such that c_k + r_k = m, where m is the number of output symbols.

The following theorem establishes how the number of inactivations can be determined using the finite state machine.

Theorem 1. Let T denote the random variable associated with the number of inactivations at the end of the triangulation process. The expected value of T is given by

E[T] = \sum_{u=1}^{k} \sum_{c_u} Pr{S_u = (c_u, 0)}.   (12)

Proof: Let us denote by f = (f_1, f_2, ..., f_k) the binary vector associated with the inactivations performed during the triangulation process, and let F = (F_1, F_2, ..., F_k) denote the associated random vector. In particular, the u-th element of f, f_u, is set to 1 if an inactivation is performed when u input symbols are active, and it is set to 0 if no inactivation is performed. Thus, for a given instance of inactivation decoding, the total number of inactivations corresponds simply to the Hamming weight of f, which we denote as w_H(f). The expected number of inactivations can be obtained as

E[T] = \sum_f w_H(f) Pr{F = f} = \sum_f (\sum_u f_u) Pr{F = f} = \sum_u \sum_f f_u Pr{F = f}

where the summation is taken over all possible vectors f. We shall now define f\u = (f_1, ..., f_{u-1}, f_{u+1}, ..., f_k), i.e., f\u denotes a vector containing all but the u-th element of f. We have

E[T] = \sum_u \sum_{f\u} \sum_{f_u} f_u Pr{F = f} = \sum_u \sum_{f_u} f_u \sum_{f\u} Pr{F = f} = \sum_u \sum_{f_u} f_u Pr{F_u = f_u} = \sum_u Pr{F_u = 1}.

If we now observe that by definition

Pr{F_u = 1} = \sum_{c_u} Pr{S_u = (c_u, 0)}

we obtain (12), and the proof is complete.

Fig. 4. Average number of inactivations vs. relative overhead for an LT code with k = 1000 and with degree distribution Ω(1)(x) (analysis via (12) and Monte Carlo simulation).
Figure 4 shows the expected number of inactivations for a
k = 1000 LT code with the output degree distribution
Ω(1)(x) := 0.0098x + 0.4590x^2 + 0.2110x^3 + 0.1134x^4 + 0.1113x^10 + 0.0799x^11 + 0.0156x^40   (13)
which is the one adopted by standardized Raptor codes [17],
[39]. The figure shows the number of inactivations according
to (12) and results obtained through Monte Carlo simulations.
In particular, in order to obtain the simulation results, 1000 decoding attempts were carried out for each value of the relative overhead. A tight match between the analysis and the simulation
results can be observed.
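For concreteness, the recursion (10)-(12) can be evaluated with a straightforward dynamic program over the state (c_u, r_u). The following Python sketch is a direct, unoptimized transcription of the equations; the function names and the helper computing P_u from Proposition 3 are ours.

```python
import numpy as np
from scipy.stats import binom
from math import comb

def p_u(u, k, omega):
    """P_u from Proposition 3; omega[d] = Omega_d, omega[0] unused."""
    dmax = len(omega) - 1
    num = (u - 1) * sum(omega[d] * comb(k - u, d - 2) / comb(k, d)
                        for d in range(2, min(k - u + 2, dmax) + 1))
    den = (1.0
           - u * sum(omega[d] * comb(k - u, d - 1) / comb(k, d)
                     for d in range(1, min(k - u + 1, dmax) + 1))
           - sum(omega[d] * comb(k - u, d) / comb(k, d)
                 for d in range(1, min(k - u, dmax) + 1)))
    return num / den

def expected_inactivations(k, m, omega):
    """E[T] via (10)-(12); state[c, r] = Pr{S_u = (c, r)}."""
    state = np.zeros((m + 1, m + 1))
    for r in range(m + 1):                       # initial state at u = k
        state[m - r, r] = comb(m, r) * omega[1] ** r * (1 - omega[1]) ** (m - r)
    e_t = 0.0
    for u in range(k, 0, -1):
        e_t += state[:, 0].sum()                 # Pr{R_u = 0}: an inactivation occurs
        pu = p_u(u, k, omega)
        new_state = np.zeros_like(state)
        for c in range(m + 1):
            for r in range(m + 1 - c):
                p = state[c, r]
                if p == 0.0:
                    continue
                for b in range(c + 1):           # b cloud symbols enter the ripple
                    pb = binom.pmf(b, c, pu) * p
                    if r == 0:                   # empty ripple: eq. (11)
                        new_state[c - b, b] += pb
                    else:                        # eqs. (8) and (10)
                        for a in range(1, r + 1):
                            pa = binom.pmf(a - 1, r - 1, 1.0 / u)
                            new_state[c - b, r - a + b] += pb * pa
        state = new_state
    return e_t
```

Evaluating this for Ω(1)(x) and m = k + δ received symbols corresponds to the analytical curve in Figure 4, although this direct transcription is computationally heavy for k = 1000 without further optimization.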
The results in this section are strongly based on [16],
where LT codes under inactivation decoding were analyzed.
The difference is that, when considering the triangulation step
of inactivation decoding, the decoding process does not stop
when the decoder is at state Su = (cu, 0). Instead, decoding
can be resumed after performing an inactivation.
B. Distribution of the Number of Inactivations
The analysis presented in Section IV-A provides the expected number of inactivations at the end of the triangulation
process under random inactivation decoding. In this section we
extend the analysis to obtain the probability distribution of the
number of inactivations. To do so, we extend the finite state
machine by including the number of inactive input symbols in
the state definition, i.e.,
Su = (Cu, Ru, Tu )
where Tu is the random variable associated to the number of inactivations at step u (when u input symbols are active). We proceed by deriving a recursion
to obtain Pr{Su−1 = (cu−1, ru−1, tu−1 )} as a function of
Pr{Su = (cu, ru, tu )}. Two cases shall be considered. When
the ripple is not empty (ru > 0) no inactivation takes place,
and at the transition from u to u - 1 active symbols the number of inactivations is unchanged (t_{u-1} = t_u). Hence, we have

Pr{S_{u-1} = (c_u - b_u, r_u - a_u + b_u, t_u) | S_u = (c_u, r_u, t_u)} = \binom{c_u}{b_u} P_u^{b_u} (1 - P_u)^{c_u - b_u} \binom{r_u - 1}{a_u - 1} (1/u)^{a_u - 1} (1 - 1/u)^{r_u - a_u}.   (14)

If the ripple is empty (r_u = 0) an inactivation takes place. In this case the number of inactivations increases by one, yielding

Pr{S_{u-1} = (c_u - b_u, b_u, t_u + 1) | S_u = (c_u, 0, t_u)} = \binom{c_u}{b_u} P_u^{b_u} (1 - P_u)^{c_u - b_u}.   (15)

The probability of S_{u-1} = (c_{u-1}, r_{u-1}, t_{u-1}) can be computed recursively via (14), (15) starting with the initial condition

Pr{S_k = (c_k, r_k, t_k)} = \binom{m}{r_k} Ω_1^{r_k} (1 - Ω_1)^{c_k}

for all non-negative c_k, r_k such that c_k + r_k = m and t_k = 0. Finally, the distribution of the number of inactivations needed to complete the decoding process is given by³

f_T(t) = \sum_{c_0} \sum_{r_0} Pr{S_0 = (c_0, r_0, t)}.   (16)
In Figure 5 the distribution of the number of inactivations is shown for an LT code with degree distribution Ω(1)(x) given in (13), input block size k = 300 and relative overhead equal to 0.02. The chart shows the distribution of the number of inactivations obtained through Monte Carlo simulations and by evaluating (16). In order to obtain the simulation results, 10000 decoding attempts were carried out. As before, we can observe a very tight match between the analysis and the simulation results.
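The extended recursion (14)-(16) can be implemented exactly as the two-dimensional one, simply carrying the inactivation count t in the state. A compact sketch (our own transcription, reusing the P_u helper from the earlier sketch through a callable):

```python
from math import comb
from scipy.stats import binom

def inactivation_distribution(k, m, omega, p_u_of):
    """Distribution f_T(t) of the number of inactivations via (14)-(16).

    p_u_of(u) must return P_u of Proposition 3, e.g. the p_u helper above
    wrapped as lambda u: p_u(u, k, omega). The state is a dictionary
    mapping (cloud, ripple, inactivations) -> probability."""
    state = {(m - r, r, 0): comb(m, r) * omega[1] ** r * (1 - omega[1]) ** (m - r)
             for r in range(m + 1)}
    for u in range(k, 0, -1):
        pu, new_state = p_u_of(u), {}
        for (c, r, t), p in state.items():
            for b in range(c + 1):                      # cloud-to-ripple moves
                pb = binom.pmf(b, c, pu) * p
                if r == 0:                              # inactivation, eq. (15)
                    key = (c - b, b, t + 1)
                    new_state[key] = new_state.get(key, 0.0) + pb
                else:                                   # no inactivation, eq. (14)
                    for a in range(1, r + 1):
                        pa = binom.pmf(a - 1, r - 1, 1.0 / u)
                        key = (c - b, r - a + b, t)
                        new_state[key] = new_state.get(key, 0.0) + pb * pa
        state = new_state
    f_T = {}
    for (_, _, t), p in state.items():                  # eq. (16): marginalize c0, r0
        f_T[t] = f_T.get(t, 0.0) + p
    return f_T
```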
V. INACTIVATION DECODING OF RAPTOR CODES
The analysis introduced in Section IV holds for LT codes.
We shall see next how the proposed tools can be successfully
applied to the design of Raptor codes whose outer code
has a dense parity check matrix (see [30]). We proceed by
illustrating the impact of the LT component of a Raptor code
on the inactivation count. We then develop a methodology for
the design of Raptor codes based on the results of Section IV.
³ From (16) we may obtain the cumulative distribution F_T(t). The cumulative distribution of the number of inactivations has practical implications. Let us assume the fountain decoder runs on a platform with limited computational capability. For example, the decoder may be able to handle only a maximum number of inactive symbols (recall that the complexity of inactivation decoding is cubic in the number of inactivations, t). Suppose the maximum number of inactivations that the decoder can handle is t_max. The probability of decoding failure will be lower bounded by 1 - F_T(t_max). The probability of decoding failure is actually higher than 1 - F_T(t_max) since the system of equations to be solved in the Gaussian elimination (GE) step might be rank deficient.
Fig. 5. Distribution of the number of inactivations for an LT code with k = 300, relative overhead 0.02 and degree distribution Ω(1)(x) given in (13) (analysis via (16) and Monte Carlo simulation).
A. Average Number of Inactivations for Raptor Codes
Let us now consider a Raptor code based on the concatenation of a (dense) (h, k) outer code and an inner LT code.
We denote by HP the ((h − k) × h) parity-check matrix of
the outer code. At the input of the Raptor encoder, we have
a vector of k input symbols, u = (u1, u2, . . . , uk ). Out of
the input symbols, the outer encoder generates a vector of h
intermediate symbols v = (v1, v2, . . . , vh ). The intermediate
symbols serve as input to an LT encoder which produces the
codeword c = (c1, c2, . . . , cn ), where n can grow unbounded.
The relation between intermediate and output symbols can be
expressed as c = v G where G is the generator matrix of the LT
code. The output symbols are sent over a BEC, at the output of
which the receiver collects m = k + δ output symbols, denoted
as y = (y1, y2, . . . , ym ). The relation between the collected
output symbols and the intermediate symbols can be expressed
as
G0^T v^T = y^T
where G0 is a matrix that contains the m columns of G
associated to the m received output symbols. Due to the outer
code constraints, we have
H_P v^T = 0^T

where 0 is the length-(h - k) zero vector. Let us now define the constraint matrix M of the Raptor code

M := [H_P^T | G0].

ML decoding can be carried out by solving the system

M^T v^T = [0 | y]^T.   (17)
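As a small illustration of (17) (not part of the paper's algorithmic description), ML decodability can be checked by assembling M and verifying that it has full row rank over GF(2). The helper names are ours, and the generic Gaussian elimination below is far less efficient than the inactivation decoder itself.

```python
import numpy as np

def gf2_rank(A):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    A = (A.copy() % 2).astype(np.uint8)
    rank, rows, cols = 0, A.shape[0], A.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]      # move pivot row into place
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]                  # eliminate column c elsewhere
        rank += 1
    return rank

def ml_decodable(H_P, G_prime):
    """Raptor ML decoding succeeds iff M = [H_P^T | G'] has rank h over GF(2)."""
    M = np.concatenate([H_P.T % 2, G_prime % 2], axis=1)
    return gf2_rank(M) == M.shape[0]
```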
If we compare the system of equations that need to be solved
for LT and Raptor decoding, given respectively by (1) and (17),
we can observe how Raptor ML decoding is very similar to ML decoding of an LT code. The main difference lies in the fact that matrix M is formed, for the first h - k columns, by the transpose of the outer code parity-check matrix. The high density of the outer code parity-check matrix lowers the probability that the ripple contains (some of) the h - k output symbols associated to the zero vector in (17) (this is especially evident at the early steps of the triangulation process). As a result, we shall expect the average number of inactivations to increase with h - k, for a fixed overhead.⁴

In Figure 6 we provide the average number of inactivations needed to decode two Raptor codes, as a function of the receiver overhead δ. Both Raptor codes have the same outer code, a (63, 57) Hamming code, but different LT degree distributions. The first distribution is Ω(1)(x) from (13), and the second distribution is

Ω(2)(x) := 0.05x + 0.2x^2 + 0.4x^3 + 0.3x^4 + 0.05x^40.

The figure also shows the number of inactivations needed to decode the two standalone LT codes. If we compare the number of inactivations required by the Raptor and LT codes, we can see how, for both degree distributions, the number of additional inactivations needed for Raptor decoding with respect to LT decoding is very similar. We hence conjecture that the impact of the (dense) outer code on the inactivation count depends mostly on the number of the outer code parity-check equations. This empirical observation provides a hint on a practical design strategy for Raptor codes: If one aims at minimizing the number of inactivations for a Raptor code based on a given outer code, it is sufficient to design the LT code component for a low (i.e., minimal) number of inactivations. Based on this consideration, an explicit design example is provided in the following subsection.

⁴ In practice, the outer codes used are not totally dense, but the rows of their parity-check matrix are usually denser than the rows of G^T. This is, for example, usually the case if the outer code is a high-rate LDPC code, since the check node degree increases with the rate.

Fig. 6. Expected number of inactivations for two Raptor codes using a (63, 57) Hamming code with LT degree distributions Ω(1)(x) and Ω(2)(x), and for two LT codes with k = 63 and degree distributions Ω(1)(x) and Ω(2)(x).

B. Example of Raptor Code Design

We consider a Raptor code with a (63, 57) outer Hamming code and we assume decoding is carried out when the absolute receiver overhead reaches δ = δ*, i.e., we carry out decoding after collecting δ* output symbols in excess of k. Furthermore, we want to have a probability of decoding failure, P_F, lower than a given value P_F*. Thus, the objective of our code design will be minimizing the number of inactivations needed for decoding while achieving a probability of decoding failure lower than P_F*. Hence, the design problem consists of finding a suitable output degree distribution for the inner LT code. For illustration we will introduce a series of constraints on the output degree distribution. In particular, we constrain the output degree distribution to have the same maximum and average output degree as standard R10 Raptor codes (Ω̄ ≈ 4.63 and d_max = 40, [17]). Furthermore, we constrain the output degree distribution to have the same support as the degree distribution of R10 Raptor codes, that is, only degrees 1, 2, 3, 4, 10, 11 and 40 are allowed. These design constraints allow us to perform a fair comparison between the Raptor code obtained through optimization and a Raptor code with the same outer code and the degree distribution from R10 Raptor codes.
The design of the LT output degree distribution is formulated as a numerical optimization problem. For the numerical optimization we used simulated annealing (SA) [40]. More concretely, we define the objective function to be minimized as the following function [41]

η = E[T] + φ(P̄_F)

where E[T] is the expected number of inactivations needed to decode the LT code and the penalty function φ is defined as

φ(P̄_F) = 0 if P̄_F < P_F*, and φ(P̄_F) = b (1 - P_F*/P̄_F) otherwise,

where b is a large positive constant⁵, P_F* is the target probability of decoding failure at δ = δ*, and P̄_F is the upper bound on the probability of decoding failure of the Raptor code in [33], which for binary Raptor codes has the expression⁶
P_F ≤ P̄_F := \sum_{l=1}^{h} A_l^o π_l^{k+δ}

where A_l^o is the multiplicity of codewords of weight l in the outer code, and π_l depends on the LT code output degree distribution as

π_l = \sum_{j=1}^{d_max} Ω_j \sum_{i=max(0, l+j-h), i even}^{min(l, j)} \binom{j}{i} \binom{h-j}{l-i} \binom{h}{l}^{-1}.

⁵ In our example, b was set to 10^4. A large b factor ensures that degree distributions which do not comply with the target probability of decoding failure are discarded.

⁶ The use of the upper bound on the probability of decoding failure, P̄_F, in place of the actual value of P_F in the objective function stems from the need of having a fast (though approximate) performance estimation to be used within the SA recursion. The evaluation of the actual P_F presents a prohibitive complexity since it has to be obtained through Monte Carlo simulations. Note also that the upper bound of [33] is very tight.

In particular, two code designs were carried out using the proposed optimization. In the first case we set the overhead to δ* = 15 and the target probability of decoding failure to P_F* = 10^-3, and we denote by Ω(3) the distribution obtained from the optimization process. In the second case we chose the same overhead, δ* = 15, and set the target probability of decoding failure to P_F* = 10^-2, and we denote by Ω(4) the resulting distribution. The degree distributions obtained are the following

Ω(3)(x) = 0.0347x + 0.3338x^2 + 0.2268x^3 + 0.1548x^4 + 0.1515x^10 + 0.0973x^11 + 0.0011x^40
Ω(4)(x) = 0.0823x + 0.4141x^2 + 0.1957x^3 + 0.1272x^4 + 0.0797x^10 + 0.0762x^11 + 0.0248x^40.
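A schematic view of the simulated-annealing search just described is sketched below. It is only a sketch: the callables for E[T] (the recursion of Section IV) and for the bound P̄_F of [33] are supplied by the user, the neighbor move is a naive probability-mass shift, and the constraint on the average output degree Ω̄ is not enforced here.

```python
import math
import random

SUPPORT = [1, 2, 3, 4, 10, 11, 40]   # degrees allowed (R10-like support)

def make_objective(expected_inactivations, failure_bound, p_target, b=1e4):
    """Build eta(omega) = E[T] + phi(P_bar_F) from user-supplied callables."""
    def eta(omega):
        p_bar = failure_bound(omega)
        phi = 0.0 if p_bar < p_target else b * (1.0 - p_target / p_bar)
        return expected_inactivations(omega) + phi
    return eta

def neighbor(omega, step=0.01):
    """Move a small amount of probability mass between two allowed degrees."""
    src, dst = random.sample(SUPPORT, 2)
    delta = min(step, omega[src])
    new = dict(omega)
    new[src] -= delta
    new[dst] += delta
    return new

def anneal(omega0, eta, iters=10000, t0=1.0, cooling=0.999):
    """Plain simulated annealing over the constrained degree distributions."""
    omega, cost, t = omega0, eta(omega0), t0
    for _ in range(iters):
        cand = neighbor(omega)
        c = eta(cand)
        if c < cost or random.random() < math.exp(-(c - cost) / t):
            omega, cost = cand, c
        t *= cooling
    return omega, cost
```

The move used here does not preserve the average output degree; enforcing that constraint, as done for the designs Ω(3) and Ω(4), would require a constrained move or a projection step.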
Monte Carlo simulations were carried out in order to assess
the performance of the two Raptor codes obtained as result
of the optimization process. In order to have a benchmark for
comparison, a third Raptor code was considered, employing
the same outer code (Hamming) and the degree distribution of
standard R10 Raptor codes given in (13). Note that in all three
cases we consider the same outer code, a (63, 57) Hamming
code, and thus, the number of input symbols is k = 57. To
derive the probability of decoding failure for each overhead
value δ simulations were run until 200 errors were collected,
whereas in order to obtain the average number of inactivations,
1000 transmissions were simulated for each overhead value δ.
Figure 7 shows the probability of decoding failure PF as
a function of δ for the three Raptor codes based on the
(63, 57) outer Hamming code and inner LT codes with degree
distributions Ω(1) (x), Ω(3) (x) and Ω(4) (x). The upper bound to
the probability of failure P̄F is also provided. It can be observed
how the Raptor codes with degree distributions Ω(3) (x) and
Ω(4) (x) meet the design goal, being their probability of decoding failure at δ = 15 below 10−3 and 10−2 , respectively. It
can also be observed that the probability of decoding failure of
the Raptor code with degree distribution Ω(1) (x) lies between
that of Ω(3) (x) and Ω(4) (x).
Figure 8 shows the average number of inactivations as
a function of the absolute receiver overhead for the three
Raptor codes considered. It can be observed that the Raptor code
requiring the least number of inactivations is Ω(4) , followed
by Ω(1) , and finally the Raptor code with degree distribution
Ω(3) is the one requiring the most inactivations, and thus has
the highest decoding complexity.
The results in Figures 7 and 8 illustrate the tradeoff existing
between probability of decoding failure and number of inactivations (decoding complexity): In general if one desires to
improve the probability of decoding failure, it is necessary to
adopt LT codes with degree distributions that lead to a larger
average number of inactivations.
Fig. 7. Probability of decoding failure P_F vs. absolute receiver overhead δ for binary Raptor codes with a (63, 57) Hamming outer code and LT degree distributions Ω(1)(x), Ω(3)(x) and Ω(4)(x). The markers represent the result of simulations, while the lines represent the upper bound to the probability of decoding failure in [33].

Fig. 8. Number of inactivations vs. absolute receiver overhead δ for binary Raptor codes with a (63, 57) Hamming outer code and LT degree distributions Ω(1)(x), Ω(3)(x) and Ω(4)(x).
VI. CONCLUSIONS
In this paper the decoding complexity of LT and Raptor
codes under inactivation decoding has been analyzed. Using
a dynamic programming approach, recursions for the number
of inactivations have been derived for LT codes as a function
of their degree distribution. The analysis has been extended to
obtain the probability distribution of the number of inactivations. Furthermore, the experimental observation is made that
decoding a Raptor code with a dense outer code results in an
increase of the number of inactivations, compared to decoding
a standalone LT code. Based on this observation, it has been
shown how the recursive analysis of LT codes can be used
to design Raptor codes with a fine control on the number of
inactivations vs. decoding failure probability trade-off.
APPENDIX A
INDEPENDENT POISSON APPROXIMATION
In Section IV we have derived recursive methods that can
compute the expected number of inactivations and the distribution of the number of inactivations required by inactivation
decoding. The proposed recursive methods, albeit accurate,
entail a non-negligible computational complexity. In this
appendix we propose an approximate recursive method that
can provide a reasonably-accurate estimation of the number
of inactivations with a much lower computational burden.
The development of the approximate analysis relies on the
following definition.
Definition 3 (Reduced degree-d set). The reduced degree-d
set is the set of output symbols of reduced degree d. We denote
it by Zd .
The cardinality of Zd is denoted by zd and its associated
random variable by Zd . Obviously, Z1 corresponds to the
ripple. Furthermore, it is easy to see how the cloud C
corresponds to the union of the sets of output symbols of
reduced degree higher than 1, i.e.,
C = \bigcup_{d=2}^{d_max} Z_d.

Moreover, since the sets Z_d are disjoint, we have

C = \sum_{d=2}^{d_max} Z_d.
We aim at approximating the evolution of the number of
reduced degree d output symbols, Zd , as the triangulation
procedure of inactivation decoding progresses. As it was done
in Section IV, in the following a temporal dimension shall
be added through subscript u (recall that the subscript u
corresponds to the number of active input symbols in the
graph). At the beginning of the triangulation process we have
u = k, and the counter u decreases by one in each step of
triangulation. Triangulation ends when u = 0. It follows that
Zd,u is the set of reduced degree d output symbols when u
input symbols are still active. Moreover, Zd,u and zd,u are
respectively the random variable associated to the number
of reduced degree d output symbols when u input symbols
are still active and its realization. We model the triangulation
process by means of a finite state machine with state
S_u := (Z_{1,u}, Z_{2,u}, . . . , Z_{dmax,u}).
This model is equivalent to the one presented in Section IV-A. However, the evaluation of the state evolution is
now more complex due to the large state space. Yet, the analysis can be greatly simplified by resorting to an approximation.
Before decoding starts (for u = k), due to the independence
of output symbols we have that Sk follows a multinomial
distribution, which for a large number of output symbols m can be approximated as the product of independent Poisson
distributions. Figure 9 shows the probability distribution of
Z1,u , Z2,u and Z3,u for an LT code with a robust soliton
distribution and k = 1000 obtained through Monte Carlo
simulation, for u = 1000, 500 and 20. The figure also shows
the curves of the Poisson distributions which match best the
experimental data in terms of minimum mean square error.
As we can observe, the Poisson distribution tightly matches
the experimental data not only for u = k, but also for smaller
values of u. Hence, we shall approximate the distribution of
the decoder state at step u as a product of independent Poisson
distributions,
Pr{S_u = z_u} ≈ \prod_{d=1}^{d_max} \frac{λ_{d,u}^{z_{d,u}}}{z_{d,u}!} e^{-λ_{d,u}}   (18)
where z_u is the vector defined as z_u = (z_{1,u}, z_{2,u}, . . . , z_{dmax,u}),
i.e., we assume that the distribution of reduced degree d
output symbols when u input symbols are active follows a
Poisson distribution with parameter λd,u . We remark that by
introducing this assumption, the number of received output
symbols m is no longer constant but becomes a sum of Poisson
random variables. In spite of this mismatch, we will later see
how a good estimate of the number of inactivations can be
obtained resorting to this approximation.
Next, we shall explain how the parameters λd,u can be
determined. For this purpose let us define Bd,u as the random
variable associated with the number of output symbols of
reduced degree d that become of reduced degree d − 1 in
the transition from u to u - 1 active input symbols. We have

Z_{d,u-1} = Z_{d,u} + B_{d+1,u} - B_{d,u}.

If we take the expectation at both sides we can write

E[Z_{d,u-1}] = E[Z_{d,u}] + E[B_{d+1,u}] - E[B_{d,u}].   (19)
Let us now derive the expression of E Bd,u . We shall
distinguish two cases. First we shall consider output symbols
with reduced degree d ≥ 2. Since output symbols select
their neighbors uniformly at random, we have that Bd,u ,
d ≥ 2, conditioned on Zd,u = zd,u is binomially distributed
with parameters zd,u and Pd,u , where Pd,u is the probability
that the degree of Y decreases to d − 1 in the transition from
u to u − 1, i.e.,
Pd,u := Pr{Y ∈ Zd−1,u−1 |Y ∈ Zd,u }.
The following proposition holds.
Proposition 4. The probability that a randomly chosen output
symbol Y , with reduced degree d ≥ 2 when u input symbols
are active, has reduced degree d − 1 when u − 1 input symbols
are active is
P_{d,u} = d/u.
Proof: Before the transition, Y has exactly d neighbors
among the u active input symbols. In the transition from u to
u −1 active symbols, 1 input symbol is selected at random and
marked as either resolvable or inactive. The probability that
the degree of Y gets reduced is simply the probability that one of its d neighbors is marked as resolvable or inactive.

Fig. 9. Distribution of Z_{1,u}, Z_{2,u} and Z_{3,u} for an LT code with robust soliton distribution and k = 1000 obtained through Monte Carlo simulation. The upper, middle and lower figures represent respectively the distribution for u = 1000, 500 and 20. The black bars represent Z_{1,u}, the light grey bars Z_{2,u} and the dark grey bars Z_{3,u}. The red lines represent the best Poisson distribution fit in terms of minimum mean square error.

Thus, the expected value of B_{d,u} is

E[B_{d,u}] = E[E[B_{d,u} | Z_{d,u}]] = (d/u) E[Z_{d,u}].   (20)

If we now replace (20) in (19) a recursive expression is obtained for λ_{d,u}, d ≥ 2,

λ_{d,u-1} = λ_{d,u} + ((d+1)/u) λ_{d+1,u} - (d/u) λ_{d,u}   (21)
λ_{d,u-1} = (1 - d/u) λ_{d,u} + ((d+1)/u) λ_{d+1,u}

where we have replaced E[Z_{d,u}] = λ_{d,u} according to our Poisson distribution assumption.

We shall now consider the output symbols of reduced degree 1. In particular, we are interested in B_{1,u}, the random variable associated with the output symbols of reduced degree d = 1 that become of reduced degree 0 in the transition from u to u - 1 active input symbols. Two different cases need to be considered. In the first one, the ripple is not empty, and hence there are one or more output symbols of reduced degree 1. In this case, an output symbol Y is chosen at random from the ripple and its only neighbor v is marked as resolvable and removed from the graph. Furthermore, any other output symbol in the ripple being connected to input symbol v also leaves the ripple during the transition. Hence, for z_{1,u} ≥ 1 we have

E[B_{1,u} | Z_{1,u} = z_{1,u} ≥ 1] = 1 + (z_{1,u} - 1)/u = 1 - 1/u + z_{1,u}/u

whereas for z_{1,u} = 0 we have E[B_{1,u} | Z_{1,u} = 0] = 0. Hence, we have

E[B_{1,u}] = (1 - 1/u) Pr{Z_{1,u} ≥ 1} + (1/u) \sum_{z_{1,u}=1}^{m} z_{1,u} Pr{Z_{1,u} = z_{1,u}}
          = (1 - 1/u)(1 - Pr{Z_{1,u} = 0}) + (1/u) E[Z_{1,u}]
          = (1 - 1/u)(1 - e^{-λ_{1,u}}) + (1/u) λ_{1,u}.   (22)

Replacing (22) in (19), a recursive expression is obtained as

λ_{1,u-1} = (1 - 1/u) λ_{1,u} + (2/u) λ_{2,u} - (1 - 1/u)(1 - e^{-λ_{1,u}}).   (23)

The decoder state probability is obtained by setting the initial condition λ_{d,k} = m Ω_d and applying the recursions in (21) and (23). Furthermore, the expected number of inactivations after the k steps of triangulation can be approximated as

E[T] = \sum_{u=1}^{k} Pr{Z_{1,u} = 0} ≈ \sum_{u=1}^{k} e^{-λ_{1,u}}.
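The approximate recursion is cheap to evaluate. The following Python sketch (the function name and the clamping of d/u at 1 for small u are our own choices) iterates (21) and (23) and accumulates the estimate of E[T]:

```python
import math

def poisson_approx_inactivations(k, m, omega):
    """Approximate E[T] via the lambda recursions (21) and (23).

    omega[d] = Omega_d for d = 1..dmax (omega[0] unused);
    initial condition: lambda_{d,k} = m * Omega_d."""
    dmax = len(omega) - 1
    lam = [0.0] + [m * omega[d] for d in range(1, dmax + 1)]
    e_t = 0.0
    for u in range(k, 0, -1):
        e_t += math.exp(-lam[1])          # Pr{Z_{1,u} = 0} under the Poisson model
        new_lam = lam[:]
        lam2 = lam[2] if dmax >= 2 else 0.0
        new_lam[1] = ((1 - 1 / u) * lam[1] + (2 / u) * lam2
                      - (1 - 1 / u) * (1 - math.exp(-lam[1])))   # eq. (23)
        for d in range(2, dmax + 1):                              # eq. (21)
            nxt = lam[d + 1] if d + 1 <= dmax else 0.0
            p_out = min(d / u, 1.0)       # clamp: approximation degrades for small u
            p_in = min((d + 1) / u, 1.0)
            new_lam[d] = (1 - p_out) * lam[d] + p_in * nxt
        lam = new_lam
    return e_t
```

This recursion runs in O(k · d_max) time, which is what makes the approximation attractive compared to the exact state-space recursion of Section IV.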
In Figure 10 we provide again the probability distribution
of Z1,u , Z2,u and Z3,u for an LT code with robust soliton
distribution (RSD) (see [5]) and k = 1000 obtained through
Monte Carlo simulation, for u = 1000, 500 and 20. The
figure also shows the curves of the approximation to Zi,u
obtained using the model in this section. We can observe
how the proposed method is able to track the distribution
of Zi,u very accurately for u = k and u = 500. However,
at the end of the triangulation process a divergence appears
as it can be observed for u = 20 in Figure 10 (c). The
source of this divergence could, in large part, be attributed
to the independence assumption made in (18). As the number
of active input symbols u decreases, the dependence among the different Z_{i,u} becomes stronger, and the independence approximation breaks down.
Figure 11 shows the average number of inactivations needed
to complete decoding for a linear random fountain code
(LRFC)7 and a RSD, both with average output degree Ω̄ = 12
and k = 1000. The figure shows results obtained by Monte
Carlo simulation and also the estimation of the number of
inactivations obtained under our Poisson approximation. A
tight match between simulation results and the estimation can
be observed. The experimental results indicate that, although the independence assumption made does not hold in general, it is a good approximation for most of the decoding process, deviating from simulation results only at the last stages of decoding. Thus, the proposed method can still provide a good approximation of the number of inactivations needed for decoding.

⁷ The degree distribution of an LRFC follows a binomial distribution with parameters k and p = 1/2 (see [42]).

Fig. 10. Distribution of Z_{1,u}, Z_{2,u} and Z_{3,u} for an LT code with robust soliton distribution and k = 1000 obtained through Monte Carlo simulation. The upper, middle and lower figures represent respectively the distribution for u = 1000, 500 and 20. The black bars represent Z_{1,u}, the light grey bars Z_{2,u} and the dark grey bars Z_{3,u}. The red lines represent the Poisson distribution approximation to Z_{d,u} obtained employing the model in this section.

Fig. 11. Average number of inactivations needed to decode a linear random fountain code and an RSD for k = 1000 and average output degree Ω̄ = 12. The markers represent simulation results and the lines represent the predicted number of inactivations using the proposed Poisson approximation.
REFERENCES
[1] F. Lázaro, G. Liva, and G. Bauch, “Inactivation decoding analysis for
LT codes,” in Proc. 52nd Annu. Allerton Conf. on Commun., Control,
and Computing, Monticello, Illinois, USA, Oct. 2015.
[2] F. Lázaro Blasco, G. Liva, and G. Bauch, “LT code design for inactivation decoding,” in Proc. 2014 IEEE Inf. Theory Workshop, Hobart,
Tasmania, Australia, Nov. 2014, pp. 441–445.
[3] J. Metzner, “An improved broadcast retransmission protocol,” IEEE
Trans. Commun., vol. 32, no. 6, pp. 679–683, Jun 1984.
[4] J. Byers, M. Luby, and M. Mitzenmacher, “A digital fountain approach
to reliable distribution of bulk data,” IEEE J. Select. Areas Commun.,
vol. 20, no. 8, pp. 1528–1540, Oct. 2002.
[5] M. Luby, “LT codes,” in Proc. 43rd Annual IEEE Symp. on Foundations
of Computer Science, Vancouver, Canada, Nov. 2002, pp. 271–282.
[6] P. Maymounkov, “Online codes,” Technical report, New York University,
Tech. Rep., 2002.
[7] A. Shokrollahi, “Raptor codes,” in Proc. of the 2004 IEEE Int. Symp.
on Inf. Theory, Chicago, Illinois, US, Jun. 2004, p. 36.
[8] M. Shokrollahi, “Raptor codes,” IEEE Trans. Inf. Theory, vol. 52, no. 6,
pp. 2551–2567, Jun. 2006.
[9] R. Karp, M. Luby, and A. Shokrollahi, “Finite length analysis of LT
codes,” in Proc. 2004 IEEE International Symp. on Inf. Theory, Chicago,
Illinois, US, Jun. 2004.
[10] E. Maneva and A. Shokrollahi, "New model for rigorous analysis of LT-codes," in Proc. 2006 IEEE International Symp. on Inf. Theory, Seattle,
Washington, US, Jul. 2006, pp. 2677–2679.
[11] S. Puducheri, J. Kliewer, and T. E. Fuja, “The design and performance
of distributed LT codes,” IEEE Trans. Inf. Theory, vol. 53, no. 10, pp.
3740–3754, Oct 2007.
[12] E. Hyytia, T. Tirronen, and J. Virtamo, “Optimal degree distribution for
LT codes with small message length,” in IEEE Infocom, Anchorage,
USA, May 2007, pp. 2576–2580.
[13] D. Vukobratovic, C. Stefanovic, V. Crnojevic, F. Chiti, and R. Fantacci,
“Rateless packet approach for data gathering in wireless sensor networks,” IEEE J. Select. Areas Commun., vol. 28, no. 7, pp. 1169–1179,
Sep. 2010.
[14] G. Maatouk and A. Shokrollahi, “Analysis of the second moment of the
LT decoder,” IEEE Trans. Inf. Theory, vol. 58, no. 5, pp. 2558–2569,
May 2012.
[15] M. Shirvanimoghaddam, Y. Li, S. Tian, and B. Vucetic, “Distributed raptor coding for erasure channels: Partially and fully coded cooperation,”
IEEE Trans. Commun., vol. 61, no. 9, pp. 3576–3589, Sep. 2013.
[16] A. Shokrollahi, “Theory and applications of Raptor codes,” Mathknow,
vol. 3, pp. 59–89, 2009.
[17] “Technical Specification Group Services and System Aspects; Multimedia Broadcast/Multicast Service; Protocols and Codecs,” Jun. 2012,
3GPP TS 26.346 V11.1.0.
[18] M. Shokrollahi, S. Lassen, and R. Karp, "Systems and processes for decoding chain reaction codes through inactivation," US Patent 6,856,263, Feb. 2005.
[19] A. Shokrollahi and M. Luby, Raptor Codes. Foundations and Trends
in Commun. and Inf. Theory, Now Publishers Inc., 2011.
[20] C. Lanczos, “Solution of systems of linear equations by minimized
iterations,” J. Res. Nat. Bureau of Standards, vol. 49, pp. 33–53, 1952.
[21] E. R. Berlekamp, Algebraic Coding Theory. McGraw-Hill, 1968.
[22] B. A. LaMacchia and A. M. Odlyzko, "Solving large sparse linear systems over finite fields," Advances in Cryptology - CRYPTO '90, pp. 109–133, 1991.
[23] T. J. Richardson and R. L. Urbanke, “Efficient encoding of low-density
parity-check codes,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 638–
656, Feb 2001.
[24] D. Burshtein and G. Miller, “An efficient maximum likelihood decoding
of LDPC codes over the binary erasure channel,” IEEE Trans. Inf.
Theory, vol. 50, no. 11, nov 2004.
14
[25] E. Paolini, G. Liva, B. Matuz, and M. Chiani, “Maximum likelihood
erasure decoding of LDPC codes: Pivoting algorithms and code design,”
IEEE Trans. Commun., vol. 60, no. 11, pp. 3209–3220, Nov. 2012.
[26] B. Schotsch, H. Schepker, and P. Vary, “The performance of short
random linear fountain codes under maximum likelihood decoding,” in
Proc. 2011 IEEE International Conf. on Commun., Kyoto, Japan, Jun.
2011.
[27] B. Schotsch, R. Lupoaie, and P. Vary, “The Performance of Low-Density
Random Linear Fountain Codes over Higher Order Galois Fields under
Maximum Likelihood Decoding,” in Proc. 48th Annu. Allerton Conf. on
Commun., Control, and Computing, Monticello, Illinois, US, Oct. 2011.
[28] B. Schotsch, G. Garrammone, and P. Vary, “Analysis of LT codes over
finite fields under optimal erasure decoding,” IEEE Commun. Lett.,
vol. 17, no. 9, pp. 1826–1829, Sep. 2013.
[29] B. E. Schotsch, “Rateless coding in the finite length regime,” Ph.D.
dissertation, Inst. of Commun. Systems and Data Proc., RWTH Aachen,
Aachen, Germany, Jul. 2014.
[30] F. Lázaro, E. Paolini, G. Liva, and G. Bauch, “Distance spectrum of
fixed-rate Raptor codes with linear random precoders,” IEEE J. Select.
Areas Commun., vol. 34, no. 2, pp. 422–436, Feb 2016.
[31] C. Di, D. Proietti, I. Telatar, T. Richardson, and R. Urbanke, "Finite-length analysis of low-density parity-check codes on the binary erasure channel," IEEE Trans. Inf. Theory, vol. 48, no. 6, pp. 1570–1579, Jun. 2002.
[32] G. Liva, E. Paolini, and M. Chiani, “Bounds on the error probability
of block codes over the q-ary erasure channel,” IEEE Trans. Commun.,
vol. 61, no. 6, pp. 2156–2165, Jun. 2013.
[33] F. Lázaro, G. Liva, E. Paolini, and G. Bauch, “Bounds on the error
probability of Raptor codes,” in Proc. IEEE Globecomm, Washington
DC, USA, Dec. 2016.
[34] C. Measson, A. Montanari, and R. Urbanke, “Maxwell construction: The
hidden bridge between iterative and maximum a posteriori decoding,”
IEEE Trans. Inf. Theory, vol. 54, no. 12, pp. 5277–5307, Dec. 2008.
[35] A. Ashikhmin, G. Kramer, and S. ten Brink, “Extrinsic information
transfer functions: Model and erasure channel properties,” IEEE Trans.
Inf. Theory, vol. 50, no. 11, pp. 2657–2673, Nov. 2004.
[36] K. Mahdaviani, M. Ardakani, and C. Tellambura, “On Raptor code
design for inactivation decoding,” IEEE Trans. Commun., vol. 60, no. 9,
pp. 2377–2381, Sep. 2012.
[37] T.-C. Ng and S. Yang, “Finite-length analysis of BATS codes,” in
Proc. of 2013 IEEE International Symp. on Network Coding, (NetCod),
Calgary, Alberta, Canada, Jun. 2013.
[38] E. Paolini, G. Liva, B. Matuz, and M. Chiani, “Maximum likelihood
erasure decoding of LDPC codes: Pivoting algorithms and code design,”
IEEE Trans. Commun., vol. 60, no. 11, pp. 3209–3220, Nov. 2012.
[39] M. Luby, A. Shokrollahi, M. Watson, and T. Stockhammer, “RFC 5053:
Raptor forward error correction scheme: Scheme for object delivery,”
IETF, Tech. Rep., Oct. 2007.
[40] S. Kirkpatrick, D. Gelatt, and M. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.
[41] F. Lázaro Blasco, G. Liva, and G. Bauch, “Enhancing the LT component
of Raptor codes,” in Proc. of the 10th International ITG Conf. on
Systems, Commun. and Coding, SCC 2015, Hamburg, Germany, Feb.
2015.
[42] G. Liva, E. Paolini, and M. Chiani, "Performance versus overhead for fountain codes over Fq," IEEE Commun. Lett., vol. 14, no. 2, pp. 178–180, 2010.
Distribution-free Detection of a Submatrix
arXiv:1604.07449v1 [] 25 Apr 2016
Ery Arias-Castro∗
Yuchao Liu†
Abstract. We consider the problem of detecting the presence of a submatrix with larger-than-usual values in a large data matrix. This problem was considered in (Butucea
and Ingster, 2013) under a one-parameter exponential family, and one of the tests they
analyzed is the scan test. Taking a nonparametric stance, we show that a calibration
by permutation leads to the same (first-order) asymptotic performance. This is true for
the two types of permutations we consider. We also study the corresponding rank-based
variants and precisely quantify the loss in asymptotic power.
1 Introduction
Biclustering has emerged as an important set of tools in bioinformatics, in particular, in the analysis
of gene expression data (Cheng and Church, 2000). It comes in different forms, and in fact,
the various methods proposed under that umbrella may target different goals. See (Madeira and
Oliveira, 2004) for a survey. Here we follow (Shabalin et al., 2009), where the problem is posed as
that of discovering a submatrix of unusually large values in a (large) data matrix. For example,
in the context of a microarray dataset, the data matrix is organized by genes (rows) and samples
(columns). We let X = (Xij ) denote the matrix, M denote the number of rows and N denote the
number of columns, so the data matrix X is M -by-N .
1.1 Submatrix detection
In its simplest form, there is only one submatrix to be discovered. In that context, the detection
problem is that of merely detecting of the presence of an anomalous (or unusual) submatrix, which
leads to a hypothesis testing problem. This was considered in (Butucea and Ingster, 2013) from a
minimax perspective. Their work relies on parametric assumptions. For example, in the normal
model, they assume that the Xij ’s are independent and normal, with mean θij and unit variance.
Under the null hypothesis, θij = 0 for all i ∈ [M ] ∶= {1, . . . , M } and all j ∈ [N ]. Under the alternative,
there is a m-by-n submatrix indexed by Itrue ⊂ [M ] and Jtrue ⊂ [N ] such that
θij ≥ θ‡ ,
∀(i, j) ∈ Itrue × Jtrue ,
(1)
while θij = 0 otherwise. Here θ‡ > 0 controls the signal-to-noise ratio. In that paper, Butucea and
Ingster precisely establish how large θ‡ needs to be as a function of (M, N, m, n) in order for there
to exist a procedure that has (worst-case) risk tending to zero in the large-sample limit (i.e., as
the size of the matrix grows). They consider two tests which together are shown to be minimax
optimal. One is the sum test based on
sum(X) = \sum_{i∈[M]} \sum_{j∈[N]} X_{ij}.   (2)

∗ University of California, San Diego — http://www.math.ucsd.edu/~eariasca/
† University of California, San Diego — http://www.math.ucsd.edu/~yul085/
It is most useful when the submatrix is large. The other one is the scan test which, when the
submatrix size is known (meaning m and n are known) is based on
scan(X) = \max_{I⊂[M], |I|=m} \max_{J⊂[N], |J|=n} \sum_{i∈I} \sum_{j∈J} X_{ij}.   (3)
When m and n are unknown, one can perform a scan test for each (m, n) in some range of interest
and control for multiple testing using the Bonferroni method. From (Butucea and Ingster, 2013),
and also from our own prior work, we know that the resulting procedure achieves the same first-order asymptotic performance.
To avoid making parametric assumptions, some works such as (Barry et al., 2005; Hastie et al.,
2000) have suggested a calibration by permutation. We consider two somewhat stylized permutation
approaches:
• Unidimensional permutation. The entries are permuted within their row. (One could permute
within columns, which is the same after transposition.)
• Bidimensional permutation. The matrix is vectorized, the entries are permuted uniformly at
random as one would in a vector, and the matrix is reformed.
The first method is most relevant when one is not willing to assume that the entries in different
rows are comparable. It is appealing in the context of microarray data and was suggested, for
example, in (Hastie et al., 2000). The second method is most relevant in a setting where all the
variables are comparable. In the parlance of hypothesis testing, the first method derives from a
model where the entries within each row are exchangeable under the null, while the second method
arises when assuming that all the entries are exchangeable under the null.
Contribution 1 (Calibration by permutation). We analyze the performance of the scan test when
calibrated using one of these two permutation approaches. We show that, regardless of the variant,
the resulting test is (first order) asymptotically as powerful as a calibration by Monte Carlo with
full knowledge of the parametric model. We prove this under some standard parametric models.
Remark 1. We focus on the scan statistic (3) and abandon the sum statistic (2) for at least two
reasons: 1) the sum statistic cannot be calibrated without knowledge of the null distribution; 2)
the sum statistic is able to surpass the scan statistic when it is impossible to locate the submatrix
with any reasonable accuracy, which is somewhat less interesting to the practitioner.
A calibration by permutation is computationally intensive in that it requires the repeated computation of the test statistic on permuted data. In practice, several hundred permutations are used,
which can cause the method to be rather time-consuming. A possible way to avoid this is to use
ranks, which was traditionally important before the availability of computers with enough computational power. (Hettmansperger, 1984) is a classical reference. In line with the two permutation
methods described above, we consider the corresponding methods for ranking the entries:
• Unidimensional ranks. The entries are ranked relative to the other entries in their row.
• Bidimensional ranks. The entries are ranked relative to the all other entries.
The use of ranks has the benefit of only requiring calibration (typically done on a computer nowadays) once for each matrix size M × N . It has the added benefit of yielding a method that is much
more robust to outliers.
Contribution 2 (Rank-based method). We analyze the performance of the scan test when the
entries are replaced by their ranks following one of the two methods just described. We show that,
regardless of the variant, there is a mild loss of asymptotic power, which we precisely quantify. We
do this under some standard parametric models.
1.2 More related work
The scan statistic (3) is computationally intractable and there has been efforts to offer alternative
approaches. We already mentioned (Shabalin et al., 2009), which proposes an alternate optimization
strategy: given a set of rows, optimize over the set of columns, and vice versa, alternating in this
fashion until convergence to a local maximum. This is the algorithm we use in our simulations.
It does not come with theoretical guarantees (other than converging to a local maximum) but
performs well numerically. A spectral method is proposed in (Cai et al., 2015) and a semidefinite
relaxation is proposed in (Chen and Xu, 2014). These methods can be run in time polynomial in
the problem size (meaning in M and N ). (Ma and Wu, 2015) establishes a lower bound based
on the Planted Clique Problem that strictly separates the performance of methods that run in
polynomial time from the performance of the scan statistic.
Our work here is not on the computational complexity of the problem. Rather we assume that
we can compute the scan statistic and proceed to study it. In effect, we contribute here to a long
line of work that studies permutation and rank-based methods for nonparametric inference. Most
notably, we continue our recent work (Arias-Castro et al., 2015) where we study the detection
problem under a similar premise but under much more stringent structural assumptions. The
setting there would correspond to an instance where the submatrix is in fact a block, meaning,
that Itrue and Jtrue are of the form Itrue = {i + 1, . . . , i + k} and Jtrue = {j + 1, . . . , j + l}. The
present setting assumes much less structure. The related applications are very different in the end.
Nevertheless, the technical arguments developed there apply here with only minor adaptation. The
main differences are that we consider two types of permutation and ranking protocols.
1.3 Content
The rest of the paper is organized as follows. In Section 2 we describe a parametric setting where
likelihood methods have been shown to perform well. This parametric setting will serve as benchmark for the nonparametric methods that ensue. In Section 3 we consider the detection problem
and study the scan statistic with each of the two types of calibration by permutation. In Section 4
we consider the same problem and study the rank-based scan statistic using each of the two types
of rankings. In Section 5 we present some numerical experiments on simulated data. All the proofs
are in Section 6.
2 The parametric scan
Following the classical line in the literature on nonparametric tests, we will evaluate the nonparametric methods introduced later on a family on parametric models. As in (Butucea and Ingster,
2013), and in our preceding work (Arias-Castro et al., 2015), we consider a one-parameter exponential family in natural form.
To define such a family, fix a probability distribution ν on the real line with zero mean and unit
variance, and with a sub-exponential right tail, specifically meaning that ϕ(θ) ∶= ∫R eθx ν(dx) < ∞
for some θ > 0. Let θ⋆ denote the supremum of all such θ > 0. (Note that θ⋆ may be infinite.) The
4
family is then parameterized by θ ∈ [0, θ⋆ ) and has density with respect to ν defined as
fθ (x) = exp{θx − log ϕ(θ)}.
(4)
By varying ν, we obtain the normal (location) family, the Poisson family (translated to have zero
mean), and the Rademacher family.
Such a parametric model is attractive as a benchmark because it includes these popular models
and also because likelihood methods are known to be asymptotically optimal under such a model.
Butucea and Ingster (2013) showed this to be the case for the problem of detection, where the
generalized likelihood ratio test is based on the scan statistic (3).
Under such a parametric model, the detection problem is formalized as a hypothesis testing
problem where ν plays the role of null distribution. In detail, suppose that the submatrix is known
to be m × n. The search space is therefore
Sm,n ∶= {S = I × J ∶ I ⊂ [M ], ∣I∣ = m and J ⊂ [N ], ∣J ∣ = n}.
(5)
We assume that the Xij ’s are independent with Xij ∼ fθij , and the testing problem is
H0 ∶ θij = 0,
∀(i, j) ∈ [M ] × [N ],
(6)
versus

H1 : ∃ S_true ∈ S_{m,n} such that θ_{ij} ≥ θ‡ for all (i, j) ∈ S_true, and θ_{ij} = 0 otherwise.   (7)

Here θ‡ controls the signal-to-noise ratio and is assumed to be known in this formulation.
In this context, we have the following.
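To fix ideas, here is a small simulation of the normal instance of this testing problem (our own illustration; the variable names are not from the paper):

```python
import numpy as np

def simulate(M, N, m, n, theta, rng=None):
    """Draw X under the normal model: N(0,1) entries, with an m-by-n submatrix
    (random rows I, columns J) whose means are shifted by theta; theta = 0 gives H0."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.standard_normal((M, N))
    I = rng.choice(M, size=m, replace=False)
    J = rng.choice(N, size=n, replace=False)
    X[np.ix_(I, J)] += theta
    return X, set(I), set(J)
```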
Theorem 1 (Butucea and Ingster (2013)). Consider an exponential model as described above, with ν having finite fourth moment. Assume that

M, N, m, n → ∞,   m/M → 0,   n/N → 0,   log(M ∨ N)/(m ∧ n) → 0.   (8)

Then the sum test based on (2), at any fixed level α > 0, has limiting power 1 when

θ‡ mn / √(MN) → ∞.   (9)

Then the scan test based on (3), at any fixed level α > 0, has limiting power 1 when

lim inf θ‡ √(mn) / √(2(m log(M/m) + n log(N/n))) > 1.   (10)

Conversely, the following matching lower bound holds. Assume in addition that log M ≍ log N and m ≍ n. Then any test at any fixed level α > 0 has limiting power at most α when

θ‡ mn / √(MN) → 0  and  lim inf θ‡ √(mn) / √(2(m log(M/m) + n log(N/n))) < 1.   (11)
We note that Butucea and Ingster (2013) derived their lower bound under slightly weaker
assumptions on M, N, m, n.
Remark 2. Proper calibration in this context is based on knowledge of the null distribution ν. In
more detail, consider a test that rejects for large values of a statistic T (X). Assuming a desired
level of α > 0 and that ν is either diffuse or discrete (for simplicity), the critical value for T is set
at tα , where tα = inf{t ∶ ν(T (X) ≥ t) ≤ α}. The test is then I{T (X) ≥ tα }. In practice, tα may be
approximated by Monte Carlo sampling.
3 Permutation scan tests
In the previous section we described the work of Butucea and Ingster (2013), who in certain
parametric models show that the sum test (2) and scan test (3) are jointly optimal for the problem
of detecting a submatrix. This is so if they are both calibrated with full knowledge of the null
distribution (denoted ν earlier).
What if the null distribution is unknown? A proven approach is via permutation. This is
shown to be optimal in some classical settings (Lehmann and Romano, 2005) and was recently
shown to also be optimal in more structured detection settings (Arias-Castro et al., 2015). We
prove that this is also the case in the present setting of detecting a submatrix. We consider the two
types of permutation, unidimensional and bidimensional, described in Section 1.1. More elaborate
permutation schemes have been suggested, e.g., in (Barry et al., 2005), but these are not considered
here, in part to keep the exposition simple. Indeed, we simply aim at showing that a calibration
by permutation performs very well in the present context.
Let Π be a subgroup of permutations of [M ] × [N ], identified with [M N ]. Then a calibration
by permutation of the scan statistic (or any other statistic) yields the P-value
P(X) = #{π ∈ Π : scan(X_π) ≥ scan(X)} / |Π|,   (12)
where Xπ = (Xπ(i,j) ) is the matrix permuted by π. The permutation scan test at level α is the test
I{P(X) ≤ α}. It is well-known that this is a valid P-value, in the sense that, under the null, it
dominates the uniform distribution on [0, 1] (Lehmann and Romano, 2005). (This remains true of
a Monte Carlo approximation.)
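A Monte Carlo version of this permutation calibration might look as follows. This is only a sketch under our own naming: the exact scan is replaced by the alternate-maximization heuristic discussed in Section 5, and the standard (1 + count)/(1 + B) finite-sample correction is used in place of the exact group average in (12).

```python
import numpy as np

def scan_stat(X, m, n, iters=20, rng=None):
    """Approximate scan(X) for an m-by-n submatrix by alternate maximization
    (hill climbing), started from a random column set."""
    rng = np.random.default_rng() if rng is None else rng
    M, N = X.shape
    J = rng.choice(N, size=n, replace=False)
    for _ in range(iters):
        I = np.argsort(X[:, J].sum(axis=1))[-m:]   # best m rows given J
        J = np.argsort(X[I, :].sum(axis=0))[-n:]   # best n columns given I
    return X[np.ix_(I, J)].sum()

def permutation_pvalue(X, m, n, B=200, mode="bidimensional", rng=None):
    """Monte Carlo permutation P-value for the scan test."""
    rng = np.random.default_rng() if rng is None else rng
    obs = scan_stat(X, m, n, rng=rng)
    count = 0
    for _ in range(B):
        if mode == "unidimensional":     # permute entries within each row
            Xp = np.apply_along_axis(rng.permutation, 1, X)
        else:                            # permute all MN entries of the matrix
            Xp = rng.permutation(X.ravel()).reshape(X.shape)
        if scan_stat(Xp, m, n, rng=rng) >= obs:
            count += 1
    return (1 + count) / (1 + B)
```

In practice one would add several random restarts of scan_stat, as done in the numerical experiments of Section 5.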
The set of unidimensional permutations, denoted Π1 , is that of all permutations that permute
within each row, while the set of bidimensional permutations, denoted Π2 , is simply the set of all
permutations. Obviously, Π1 ⊂ Π2 with ∣Π1 ∣ = (N !)M and ∣Π2 ∣ = (M N )!, and they are both groups.1
Theorem 2. Consider an exponential model as described in Section 2. In addition to (8), assume
log^3(M ∨ N)/(m ∧ n) → 0,   (13)
and that either (i) ν has support bounded from above, or (ii) maxi,j θij ≤ θ̄ for some θ̄ < θ⋆ fixed.
Let the group of permutations Π be either Π1 or Π2 ; if Π = Π1 , we require that ϕ(θ) < ∞ for some
θ < 0. Then the permutation scan test based on (12), at any fixed level α > 0, has limiting power 1
when (10) holds.
The additional condition (on ν or the nonzero θij ’s) seems artificial, but just as in (Arias-Castro
et al., 2015), we are not able to eliminate it. Other than that, in view of Theorem 1 we see that
the permutation scan test — just like the parametric scan test — is optimal to first-order under a
general one-parameter exponential model.
4  Rank-based scan tests
Rank tests are classical special cases of permutation tests (Hettmansperger, 1984). Traditionally,
when computers were not as readily available and not as powerful, permutation tests were not
practical, but rank tests could still be, as long as calibration had been done once for the same (or
¹ The group structure is important. See the detailed discussion in (Hemerik and Goeman, 2014).
a comparable) problem size. Another well-known advantage of rank tests is their robustness to
outliers.
We consider the two ranking protocols described in Section 1.1. After the observations are
ranked, the distribution under the null is the permutation distribution, either uni- or bi-dimensional
depending on the ranking protocol. This is strictly true under an appropriate exchangeability
condition, which holds in the null model we consider here where all observations are IID. In fact, the
unidimensional rank scan test is a form of unidimensional permutation test, and the bidimensional
rank scan is a form of bidimensional permutation test, each time, the statistic being the rank scan
scan(R) = max_{I⊂[M], |I|=m}  max_{J⊂[N], |J|=n}  ∑_{i∈I} ∑_{j∈J} Rij,   (14)
where R = (Rij ) is the matrix of ranks.
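To make the two protocols concrete, the following sketch (ours; it brute-forces the maximization in (14), so it is only practical for very small M and N) computes within-row ranks or joint ranks and then the rank scan:

```python
import numpy as np
from itertools import combinations
from scipy.stats import rankdata

def ranks(X, protocol="bidimensional"):
    """Rank the observations; ties receive average ranks."""
    if protocol == "unidimensional":                 # rank within each row
        return np.vstack([rankdata(row) for row in X])
    return rankdata(X.ravel()).reshape(X.shape)      # rank all M*N entries jointly

def rank_scan(R, m, n):
    """Brute-force evaluation of the rank scan (14); only feasible for small M, N."""
    M, N = R.shape
    best = -np.inf
    for I in combinations(range(M), m):
        rows = R[list(I)]
        for J in combinations(range(N), n):
            best = max(best, rows[:, list(J)].sum())
    return best
```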
Rank tests have been studied in minute detail in the classical setting (Hájek and Sidak, 1967;
Hettmansperger, 1984). Typically, this is done, again, by comparing their performance with the
likelihood ratio test in the context of some parametric model. Typically, there is some loss in
efficiency, unless one tailors the procedure to a particular parametric family.2 Such a performance
analysis was recently carried out for the rank scan in more structured settings (Arias-Castro et al.,
2015). We again extend this work here and obtain the following.
Define
Υ = E( Z 1{Z>Y} ) + (1/2) E( Z 1{Z=Y} ),
where Y, Z are IID with distribution ν. (This is the same constant introduced by Arias-Castro
et al. (2015).)
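For a given ν, Υ is easy to approximate by simulation; a small sketch (ours) for the standard normal case, where the tie term vanishes, is:

```python
import numpy as np

def estimate_upsilon(sample_nu, n_pairs=10**6, rng=None):
    """Monte Carlo estimate of Upsilon = E[Z 1{Z>Y}] + (1/2) E[Z 1{Z=Y}],
    where Y, Z are i.i.d. from nu and sample_nu(size, rng) draws from nu."""
    rng = np.random.default_rng() if rng is None else rng
    Y = sample_nu(n_pairs, rng)
    Z = sample_nu(n_pairs, rng)
    return np.mean(Z * (Z > Y) + 0.5 * Z * (Z == Y))

# Standard normal nu (ties have probability zero):
ups = estimate_upsilon(lambda size, rng: rng.standard_normal(size))
print(1.0 / (2.0 * np.sqrt(3.0) * ups))   # close to sqrt(pi/3) ~ 1.023 in the normal model
```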
Theorem 3. Consider an exponential model as described in Section 2. Assume that (8) holds. Let
the group of permutations Π be either Π1 or Π2 . The rank scan test at any fixed level α > 0 has
limiting power 1 when
lim inf θ‡ √(mn) / √( 2 (m log(M/m) + n log(N/n)) ) > 1 / (2√3 Υ).   (15)
The proof is omitted as it is entirely based on an adaptation of that of Theorem 2 and arguments
given in (Arias-Castro et al., 2015) to handle the rank moments.
Compared to the (optimal) performance of the parametric and permutation scan tests in the
same setting (Theorem 1 and Theorem 2), we see that there is a loss in power. However, the loss
can be quite small. For example, as argued in (Arias-Castro et al., 2015), in the normal model 1/(2√3 Υ) ≤ √(π/3) ≈ 1.023.
5  Numerical experiments
We performed some numerical experiments3 to assess the accuracy of our asymptotic theory. To do
so, we had to deal with two major issues in terms of computational complexity. The first issue is
the computation of the scan statistic defined in (3). There is no known computationally tractable method for doing so. As Butucea and Ingster (2013) did, we opted instead for an approximation in
the form of the alternate optimization (or hill-climbing) algorithm of Shabalin et al. (2009). Since
in principle this algorithm only converges to a local maximum, we run the algorithm on several
² Actually, Hajek (1962) proposes a more complex method that avoids the need for knowing the null distribution.
³ In the spirit of reproducible research, our code is publicly available at https://github.com/nozoeli/NPDetect
random initializations and take the largest output. The second issue is that of computing the
permutation P-value defined in (12). (This is true for the permutation test and also for the special
case of the rank test.) Indeed, examining all possible permutations in Π (either Π1 or Π2 ) is only
feasible for very small matrices. As usual, we opted for Monte Carlo sampling. Specifically, we
picked π1, . . . , πB IID uniform from Π with B = 500 in our setup. We then estimate the permutation P-value by
P̂(X) = #{b ∈ [B] : scan(Xπb) ≥ scan(X)} / (B + 1).   (16)
We mention that when rank methods are applied, the ties in the data are broken randomly.
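To fix ideas, here is a rough sketch (ours) of how these P-values can be approximated in practice: the scan itself is replaced by alternating row/column updates, in the spirit of the hill-climbing algorithm of Shabalin et al. (2009), and the P-value by the Monte Carlo estimate (16); `permute` stands for one of the two permutation samplers sketched in Section 3.

```python
import numpy as np

def scan_hill_climb(X, m, n, n_restarts=5, n_iters=50, rng=None):
    """Approximate the scan statistic by alternating row/column selection,
    in the spirit of the hill-climbing algorithm of Shabalin et al. (2009)."""
    rng = np.random.default_rng() if rng is None else rng
    M, N = X.shape
    best = -np.inf
    for _ in range(n_restarts):
        J = rng.choice(N, n, replace=False)
        for _ in range(n_iters):
            I = np.argsort(X[:, J].sum(axis=1))[-m:]       # best m rows given J
            J_new = np.argsort(X[I, :].sum(axis=0))[-n:]   # best n columns given I
            if set(J_new) == set(J):
                break
            J = J_new
        best = max(best, X[np.ix_(I, J)].sum())
    return best

def permutation_p_value(X, m, n, permute, B=500, rng=None):
    """Monte Carlo estimate (16) of the permutation P-value of the scan."""
    rng = np.random.default_rng() if rng is None else rng
    obs = scan_hill_climb(X, m, n, rng=rng)
    count = sum(scan_hill_climb(permute(X, rng), m, n, rng=rng) >= obs for _ in range(B))
    return count / (B + 1)
```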
Simulation setup Our simulation strategy is as follows. A data matrix X of size M × N is
generated with the anomaly as [m] × [n]. All the entries of X are independent with distribution
f0 (same as ν) except for the anomalous ones which have distribution fθ‡ for some θ‡ > 0. We
compare the permutation tests and rank tests (unidimensional and bidimensional) with the scan
test calibrated by Monte Carlo (using 500 samples), which serves as an oracle benchmark as it has
full knowledge of the null distribution f0 . By construction, all tests have the prescribed level. As
we increase θ‡ , the P-values of the different tests are recorded. Each setting is repeated 200 times.
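A sketch (ours) of this data-generating step for the two families used below — normal, where fθ = N(θ, 1), and Poisson, where fθ is the distribution of Poisson(e^θ) − 1 — is:

```python
import numpy as np

def simulate_matrix(M, N, m, n, theta, family="normal", rng=None):
    """Generate X with i.i.d. nu entries and an anomalous [m] x [n] block from f_theta."""
    rng = np.random.default_rng() if rng is None else rng
    if family == "normal":                 # nu = N(0, 1), f_theta = N(theta, 1)
        X = rng.standard_normal((M, N))
        X[:m, :n] += theta
    elif family == "poisson":              # nu = Poisson(1) - 1, f_theta = Poisson(e^theta) - 1
        X = rng.poisson(1.0, size=(M, N)) - 1.0
        X[:m, :n] = rng.poisson(np.exp(theta), size=(m, n)) - 1.0
    else:
        raise ValueError(f"unknown family: {family}")
    return X
```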
As one of the main purposes of our simulations is to confirm our theory, we zoom in on the
region near the critical value
θcrit = √( 2 (m log(M/m) + n log(N/n)) / (mn) ),   (17)
which comes from (10). Specifically, we increase θ‡ from 0.5 × θcrit to 1.5 × θcrit with step size
0.125 × θcrit to explore the behavior of P-values around the critical value.
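In code (our sketch, using the first setup of the normal case below), the critical value (17) and the grid of θ‡ values are simply:

```python
import numpy as np

def theta_crit(M, N, m, n):
    """Critical value (17): sqrt(2 (m log(M/m) + n log(N/n)) / (m n))."""
    return np.sqrt(2.0 * (m * np.log(M / m) + n * np.log(N / n)) / (m * n))

tc = theta_crit(200, 100, 10, 15)                     # first setup of the normal case
theta_grid = np.arange(0.5, 1.5 + 1e-9, 0.125) * tc  # 0.5*theta_crit to 1.5*theta_crit
```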
The Normal Case Here we generate data from normal family, where fθ is N (θ, 1). We used
two setups, (M, N, m, n) = (200, 100, 10, 15) and (M, N, m, n) = (200, 100, 30, 10), to assess the
performance of the tests under different anomaly sizes. The resulting boxplots of the averaged
P-values are shown in Figure 1.
From the plots we see that the P-values are generally very close to 0 when θ‡ exceeds θcrit .
When (m, n) = (10, 15) the convergence towards 0 is slower, which may be due to the small size
of the anomalous submatrix. As expected, the (oracle) Monte Carlo test is best, followed by the
bidimensional permutation test, followed by the unidimensional permutation test. That said, the
differences appear to be minor, which confirms our theoretical findings.
For the rank tests, we observe a similar behavior of the P-values, with the bidimensional showing
superiority over the unidimensional rank test, but the loss of power with respect to the oracle test is a bit more substantial, as predicted by the theory. As shown before, 1/(2√3 Υ) ≈ 1.03 for the standard normal, so that we should place the critical threshold approximately at 1.03 × θcrit. This
appears to be confirmed in the setting where (m, n) = (30, 10). While the P-values for the rank
tests converge relatively slowly when (m, n) = (10, 15) (for unidimensional rank test the P-value is
close to 0 at θ = 1.5 × θcrit ), this may be due to the relatively small size of the anomaly.
The Poisson Case As another example, we consider the Poisson family, where fθ corresponds
to Poisson(eθ ) − 1. The data matrix and anomaly sizes are the same as they are in the normal
case. The resulting boxplots of the P-values are shown in Figure 2. Overall, we observe a similar
behavior of the P-values.
Figure 1: P-values of various forms of scan tests in the normal model
Figure 2: P-values of various forms of scan tests in the Poisson model
6  Proofs

6.1  Preliminaries
We start with some preliminary results that already appear, in one form or another, in our previous work (Arias-Castro et al., 2015). First, for any one-parameter exponential family (fθ : θ ∈ Θ) with a standardized base distribution ν, as we consider here,
Eθ(X) ≥ θ,   for all θ ∈ Θ.   (18)
Next, in the same context, if sup Θ > 0 (which we assume throughout), then fθ has a sub-exponential right tail, which is uniform in θ ≤ θ̄ if θ̄ ∈ Θ. In particular, there is γ̄ depending on θ̄ > 0 such that, if X1, . . . , Xk are independent, with Xj ∼ fθj and θj ≤ θ̄, then
max_{j∈[k]} Xj ≤ γ̄ log k,   with probability tending to 1 as k → ∞.   (19)
By symmetry, if inf Θ < 0 (which we assume in the case of unidimensional permutations), the same is true on the left. In particular, ν itself (corresponding to θ = 0) has a sub-exponential left tail in this case, and this is all that will be used below. In particular, there is a constant γ0 > 0 such that, if X1, . . . , Xk are IID ν, then
min_{j∈[k]} Xj ≥ −γ0 log k,   with probability tending to 1 as k → ∞.   (20)
6.2  Proof of Theorem 2
The proof arguments are parallel to those of Arias-Castro et al. (2015), derived in the context of more structured settings. For that reason, we only detail the proof of Theorem 2 in the case of unidimensional permutations, which, compared to bidimensional permutations, differs more from the setting considered in (Arias-Castro et al., 2015) and requires additional arguments. Therefore, in what follows, we take Π = Π1. Recall that in this case we assume in addition that ϕ(θ) < ∞ for some θ < 0. This implies that ν has sub-exponential tails.
Case (i) We first focus on the condition where ν has support bounded from above and let b0
denote such an upper bound. (Necessarily, b0 > 0.) Thus, regardless of the θij ’s,
P( max_{i,j} Xij ≤ b0 ) = 1.   (21)
The permutation scan test has limiting power 1 if and only if P(P(X) ≤ α) → 1 under the alternative. We show this by proving the stronger claim that P(X) → 0 in probability under the alternative.
We first work conditional on X = x, where x = (xij ) denotes a realization of X = (Xij ). We
may equivalently center the rows of X before scanning, and the resulting test remains unchanged.
Therefore, we may assume that all the rows of x sum to 0. Let ζ = scan(x) for short. We have
P(x) = P(scan(xπ ) ≥ ζ),
(22)
where the randomness comes solely from π, uniformly drawn from Π. Using the union bound, we
get
P(x) ≤ |Sm,n| max_{S∈Sm,n} P( ∑_{(i,j)∈S} xπ(i,j) ≥ ζ ).   (23)
For each i ∈ [M], let (Aij : j ∈ [n]) be a sample from (xij : j ∈ [N]) without replacement and let Ai = ∑_{j∈[n]} Aij. Note that A1, . . . , AM are independent and, for S = I × J, we have
∑_{(i,j)∈S} xπ(i,j) ∼ ∑_{i∈I} Ai.   (24)
Fix I ⊂ [M ] of size m. Using Markov’s inequality and the independence of the Ai ’s, we get
P( ∑_{i∈I} Ai ≥ ζ ) ≤ e^{−cζ} ∏_{i∈I} φi(c),   (25)
where φi is the moment generating function of Ai . The key is (Hoeffding, 1963, Th 4), which
implies that φi ≤ ψi , where ψi is the moment generating function of Bi , where Bi = ∑j∈[n] Bij and
(Bij ∶ j ∈ [n]) is a sample from (xij ∶ j ∈ [N ]) with replacement, meaning that these are IID random
variables uniformly distributed in (xij ∶ j ∈ [N ]). By (21), we have Bij ≤ b0 with probability one,
and the usual arguments leading to the (one-sided) Bernstein’s inequality yield the usual bound
ψi(c) ≤ exp[ (n c² σi² / 2) · (e^{c b0} − 1 − c b0) / (c² b0² / 2) ],   (26)
where σi² is the variance of Bi1, meaning σi² = (1/N) ∑_{j∈[N]} (xij − x̄i)², with x̄i = (1/N) ∑_{j∈[N]} xij being the mean. Letting σ² = max_{i∈[M]} σi², we derive
e^{−cζ} ∏_{i∈I} φi(c) ≤ e^{−cζ} ∏_{i∈I} exp[ (n c² σi² / 2) · (e^{c b0} − 1 − c b0) / (c² b0² / 2) ] ≤ e^{−cζ} exp[ (mn c² σ² / 2) · (e^{c b0} − 1 − c b0) / (c² b0² / 2) ],   (27)
the latter being the usual bound that leads to Bernstein’s inequality. The same optimization over
c yields
P( ∑_{i∈I} Ai ≥ ζ ) ≤ exp[ − ζ² / (2mnσ² + (2/3) b0 ζ) ].   (28)
We now emphasize the dependency of ζ and σ 2 on x by adding x as a subscript. Noting that this
bound is independent of I (of size m), we get
P(x) ≤ |Sm,n| exp[ − ζx² / (2mn σx² + (2/3) b0 ζx) ].   (29)
We now free X and bound ζX from below, and σX² from above. When doing so, we need to take
into account that we assumed the rows summed to 0. When this is no longer the case, ζX denotes
the scan of X after centering all the rows. Let X̄i denote the mean of row i. By definition of the
scan in (3),
ζX ≥ ζtrue := ∑_{i∈Itrue} ∑_{j∈Jtrue} (Xij − X̄i) = (1 − n/N) ∑_{i∈Itrue} ∑_{j∈Jtrue} Xij − (n/N) ∑_{i∈Itrue} ∑_{j∉Jtrue} Xij.   (30)
For the expectation, by (8) and (18), we have
E(ζtrue) ≥ (1 − n/N) ∑_{i∈Itrue} ∑_{j∈Jtrue} θij ≥ (1 − o(1)) mn θ‡.   (31)
For the variance, we have Var(Xij) = 1 when (i, j) ∉ Strue (since ν has variance 1) and Var(Xij) ≤ E(Xij²) ≤ b0² always. Using this, we derive
Var(ζtrue) ≤ mn b0² + (n/N)² mN = mn (b0² + n/N) = O(mn).   (32)
Because of (8) and (10), E(ζtrue) ≫ √Var(ζtrue), and thus by Chebyshev's inequality,
ζtrue = (1 + oP(1)) E(ζtrue) ≥ (1 + oP(1)) mn θ‡.   (33)
We now bound σx². For i ∈ Itrue, we have
σi²(X) ≤ (1/N) ∑_{j∈[N]} Xij² = (1/N) ∑_{j∈Jtrue} Xij² + (1/N) ∑_{j∉Jtrue} Xij² ≤ n b0² / N + (1/N) ∑_{j∉Jtrue} Xij².   (34)
For i ∉ Itrue,
σi²(X) ≤ (1/N) ∑_{j∈[N]} Xij².   (35)
Therefore
σX² ≤_sto 1 + o(1) + max_{i∈[M]} (1/N) ∑_{j∈[N]} Tij,   (36)
where (Tij ∶ (i, j) ∈ [M ] × [N ]) are IID with distribution that of X 2 − 1 when X ∼ ν. Note that
E(Tij ) = 0 since ν has variance 1 and
max_{i,j} Tij ≤ t̄ := b0² ∨ (γ0 log(MN))²,   (37)
by (21) and when the following event holds:
A := { min_{i,j} Xij ≥ −γ0 log(MN) },   (38)
which by (20) happens with probability tending to 1. Let PA be the probability conditional on A
and EA the corresponding expectation. Let µA = EA (Tij ) and τA2 = VarA (Tij ) < ∞, because ν has
finite fourth moment. By Bernstein’s inequality, for any c > µA ,
PA( (1/N) ∑_{j∈[N]} Tij > c ) ≤ exp[ − N (c − µA)² / (2τA² + (2/3) t̄ c) ].   (39)
Then using a union bound,
PA( max_{i∈[M]} (1/N) ∑_{j∈[N]} Tij > c ) ≤ M exp[ − N (c − µA)² / (2τA² + (2/3) t̄ c) ].   (40)
Taking logs, noting that µA → 0 and τA² → τ² := Var(Tij), as well as t̄ = O(log(MN)), and using (8) and (13), we see that the RHS tends to 0 for any c > 0 fixed. Therefore max_{i∈[M]} (1/N) ∑_{j∈[N]} Tij = oP(1) conditional on A, and since P(A) → 1, also unconditionally. Coming back to (36), we conclude that
σX² = 1 + oP(1).   (41)
The lower bound on ζX and the upper bound on σX², combined, imply by monotonicity that
ζX² / (2mn σX² + (2/3) b0 ζX) ≥ (1 + oP(1)) · mn θ‡² / (2 + (2/3) b0 θ‡).   (42)
We also have |Sm,n| = (M choose m)(N choose n), so that
log |Sm,n| = log(M choose m) + log(N choose n) ≤ (1 + o(1)) Λ,   (43)
with
Λ := m log(M/m) + n log(N/n),   (44)
where in the last inequality we used (8) and the fact that log(K choose k) ≤ k log(K/k) + k for all integers 1 ≤ k ≤ K.
Coming back to (29) and collecting all the bounds in between, we find that
log P(X) ≤ (1 + o(1)) Λ − (1 + oP(1)) · mn θ‡² / (2 + (2/3) b0 θ‡).   (45)
Under (10), there is ε > 0 such that, eventually,
θ‡ ≥ (1 + ε) √( 2Λ / (mn) ).   (46)
When that’s the case, we get
log P(X) ≤ (1 + o(1)) Λ − (1 + oP(1)) · (1 + 2ε) Λ / ( 1 + (1/3) b0 (1 + ε) √(2Λ/(mn)) ).   (47)
Noting that Λ/(mn) = o(1) and Λ → ∞ under (8), we get
log P(X) ≤ −(1 + oP (1))2εΛ → −∞,
(48)
which is what we needed to prove.
Case (ii) We now consider the case where θij ≤ θ̄ for all (i, j) ∈ [M ] × [N ] for some θ̄ < θ∗ .
Although (21) may not hold for any b0 , we redefine b0 = γ̄ log(M N ), where γ̄ depends on θ̄, and
condition on the event
B := { max_{i,j} Xij ≤ b0 },   (49)
which holds with probability tending to 1 by (19). The bound (29) holds unchanged (assuming that max_{i,j} xij ≤ b0). What is different is how ζX and σX² are handled, now that we conditioned on B. Let PB and EB denote the probability and expectation conditional on B.
We have
EB(ζtrue) ≥ (1 − n/N) ∑_{i∈Itrue} ∑_{j∈Jtrue} EB(Xij) − (n/N) ∑_{i∈Itrue} ∑_{j∉Jtrue} EB(Xij)   (50)
≥ (1 − n/N) ∑_{i∈Itrue} ∑_{j∈Jtrue} E(Xij | Xij ≤ b0) − (n/N) ∑_{i∈Itrue} ∑_{j∉Jtrue} E(Xij | Xij ≤ b0)   (51)
≥ (1 + o(1)) mn θ‡.   (52)
In the last inequality, for j ∉ Jtrue we used the fact that E(Xij ) = 0, which implies that EB (Xij ) ≤ 0
in that case. And for j ∈ Jtrue we used the fact that E(Xij ∣Xij ≤ b0 ) → θij ≥ θ‡ combined with a
Cesàro-type argument. On the other hand, in a similar way, we also have
VarB(ζtrue) = O(mn b0²) = O(mn log²(MN)).   (53)
So we still have EB(ζtrue) ≫ √VarB(ζtrue), by (8) and (10), and in addition (13). In particular, (33) holds under B. In very much the same way, one can verify that the same is true of (41). From there we get to (47) in exactly the same way, conditional on B, and then unconditionally since P(B) → 1. Then, to conclude, we only need to check that b0 √(Λ/(mn)) = o(1), which is the case by (13).
Acknowledgements
This work was partially supported by a grant from the US Office of Naval Research (N00014-13-10257) and a grant from the US National Science Foundation (DMS 1223137).
References
Arias-Castro, E., R. M. Castro, E. Tánczos, and M. Wang (2015). Distribution-free detection of
structured anomalies: Permutation and rank-based scans. arXiv preprint arXiv:1508.03002 .
Barry, W. T., A. B. Nobel, and F. A. Wright (2005). Significance analysis of functional categories in
gene expression studies: a structured permutation approach. Bioinformatics 21 (9), 1943–1949.
Butucea, C. and Y. I. Ingster (2013). Detection of a sparse submatrix of a high-dimensional noisy
matrix. Bernoulli 19 (5B), 2652–2688.
Cai, T. T., T. Liang, and A. Rakhlin (2015). Computational and statistical boundaries for submatrix localization in a large noisy matrix. arXiv preprint arXiv:1502.01988 .
Chen, Y. and J. Xu (2014). Statistical-computational tradeoffs in planted problems and submatrix
localization with a growing number of clusters and submatrices. arXiv preprint arXiv:1402.1267 .
Cheng, Y. and G. M. Church (2000). Biclustering of expression data. In Proceedings of the Eighth
International Conference on Intelligent Systems for Molecular Biology, pp. 93–103. AAAI Press.
Hajek, J. (1962). Asymptotically most powerful rank-order tests. The Annals of Mathematical
Statistics 33 (3), 1124–1147.
Hájek, J. and Z. Sidak (1967). Theory of rank tests. Academic Press, Academia Publishing House
of the Czechoslovak Acad.
Hastie, T., R. Tibshirani, M. B. Eisen, A. Alizadeh, R. Levy, L. Staudt, W. C. Chan, D. Botstein,
and P. Brown (2000). ‘gene shaving’ as a method for identifying distinct sets of genes with similar
expression patterns. Genome Biology 1 (2), 1–21.
Hemerik, J. and J. Goeman (2014). Exact testing with random permutations. arXiv preprint
arXiv:1411.7565 .
Hettmansperger, T. P. (1984). Statistical inference based on ranks. Wiley Series in Probability
and Mathematical Statistics: Probability and Mathematical Statistics. New York: John Wiley
& Sons, Inc.
Hoeffding, W. (1963). Probability inequalities for sums of bounded random variables. J. Amer.
Statist. Assoc. 58, 13–30.
Lehmann, E. L. and J. P. Romano (2005). Testing statistical hypotheses (Third ed.). Springer
Texts in Statistics. New York: Springer.
Ma, Z. and Y. Wu (2015). Computational barriers in minimax submatrix detection. The Annals
of Statistics 43 (3), 1089–1116.
Madeira, S. C. and A. L. Oliveira (2004). Biclustering algorithms for biological data analysis: a
survey. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB) 1 (1),
24–45.
Shabalin, A. A., V. J. Weigman, C. M. Perou, and A. B. Nobel (2009). Finding large average
submatrices in high dimensional data. The Annals of Applied Statistics 3 (3), 985–1012.
| 10 |
Diagnose like a Radiologist: Attention Guided
Convolutional Neural Network for Thorax Disease
Classification
arXiv:1801.09927v1 [] 30 Jan 2018
Qingji Guan, Yaping Huang, Zhun Zhong, Zhedong Zheng, Liang Zheng and Yi Yang
Abstract—This paper considers the task of thorax disease classification on chest X-ray images. Existing methods generally use
the global image as input for network learning. Such a strategy
is limited in two aspects. 1) A thorax disease usually happens
in (small) localized areas which are disease specific. Training
CNNs using global image may be affected by the (excessive)
irrelevant noisy areas. 2) Due to the poor alignment of some CXR
images, the existence of irregular borders hinders the network
performance. In this paper, we address the above problems by
proposing a three-branch attention guided convolutional neural network (AG-CNN). AG-CNN 1) learns from disease-specific regions to avoid noise and improve alignment, and 2) integrates a global branch to compensate for the discriminative cues lost by the local branch. Specifically, we first learn a global CNN branch using global images. Then, guided by the attention heat map generated from the global branch, we infer a mask to crop
a discriminative region from the global image. The local region
is used for training a local CNN branch. Lastly, we concatenate
the last pooling layers of both the global and local branches for
fine-tuning the fusion branch. Comprehensive experiments are conducted on the ChestX-ray14 dataset. We first report a strong global baseline producing an average AUC of 0.841 with ResNet-50 as backbone. After combining the local cues with the global information, AG-CNN improves the average AUC to 0.868. When DenseNet-121 is used, the average AUC reaches 0.871, which is
a new state of the art in the community.
Index Terms—chest X-ray, convolutional neural network, thorax disease classification, visual attention
I. I NTRODUCTION
The chest X-ray (CXR) has been one of the most
common radiological examinations in lung and heart
disease diagnosis. Currently, reading CXRs mainly relies on
professional knowledge and careful manual observation. Due
to the complex pathologies and subtle texture changes of different lung lesions in images, radiologists may make mistakes
even when they have experienced long-term clinical training
and professional guidance. Therefore, it is of importance to
develop the CXR image classification methods to support
Q. Guan is with the Beijing Key Laboratory of Traffic Data Analysis
and Mining, Beijing Jiaotong University, Beijing, 100044, China and Center
for Artificial Intelligence, University of Technology Sydney, NSW, 2007,
Australia (E-mail: [email protected]).
Y. Huang is with the Beijing Key Laboratory of Traffic Data Analysis
and Mining, Beijing Jiaotong University, Beijing, 100044, China (E-mail:
[email protected]).
Z. Zhong is with Cognitive Science Department, Xiamen University and
Center for Artificial Intelligence, University of Technology Sydney, NSW,
2007, Australia (E-mail: [email protected]).
Z. Zheng, L. Zheng, Y. Yang are with Center for Artificial Intelligence,
University of Technology Sydney, NSW, 2007, Australia (E-mail: {zdzheng12,
liangzheng06, yee.i.yang}@gmail.com).
clinical practitioners. The noticeable progress in deep learning
has benefited many trials in medical image analysis, such as
lesion segmentation or detection [1], [2], [3], [4], [5], diseases
classification [6], [7], [8], [9], noise induction [10], image
annotation [11], [12], registration [13], regression [14] and so
on. In this paper, we investigate the CXR classification task
using deep learning.
Several existing works on CXR classification typically employ the global image for training. For example,
Wang et al. [9] evaluate four classic CNN architectures,
i.e., AlexNet [15], VGGNet [16], GoogLeNet [17], ResNet
[18], to tell the presence of multiple pathologies using a
global CXR image. In addition, using the same network,
the disease lesion areas are located in a weakly supervised
manner. Viewing CXR classification as a multi-label recognition problem, Yao et al. [19] explore the correlation among
the 14 pathologic labels with global images in ChestX-ray14
[9]. Using a variant of DenseNet [20] as an image encoder,
they adopt the Long-short Term Memory Networks (LSTM)
[21] to capture the dependencies. Kumar et al. [7] investigate
which loss function is more suitable for training CNNs
from scratch and present a boosted cascaded CNN for global
image classification. A recent effective method is
CheXNet [8]. It fine-tunes a 121-layer DenseNet on the global
chest X-ray images, which has a modified last fully-connected
layer.
However, the global learning strategy can be compromised
by two problems. On the one hand, as shown in Fig. 1 (the
first row), the lesion area can be very small (red bounding box)
and position unpredictable (e.g. , “Atelectasis”) compared with
the global image, so using the global image for classification
may include a considerable level of noise outside the lesion
area. This problem is rather different from generic image
classification [22], [23] where the object of interest is usually
positioned in the image center. Considering this fact, it is
beneficial to induce the network to focus on the lesion regions
when making predictions. On the other hand, due to variations in the capturing conditions, e.g., the posture of the patient and the small size of a child's body, the CXR images
may undergo distortion or misalignment. Fig. 1 (the second
row) presents a misalignment example. The irregular image
borders may have a non-negligible effect on the classification
accuracy. Therefore, it is desirable to discover the salient lesion
regions and thus alleviate the impact of such misalignment.
To address the problems caused by merely relying on
the global CXR image, this paper introduces a three-branch
Fig. 1. Two training images from the ChestX-ray14 dataset. (a) The global images. (b) Heat maps extracted from a specific convolutional layer. (c) The cropped images from (a) guided by (b). In this paper, we consider both the original global image and the cropped local image for classification, so that 1) the noise contained in the non-lesion area is less influential, and 2) the misalignment can be reduced. Note that there are some differences between the global images and their heat maps. The reason is that the global images are randomly cropped from 256×256 to 224×224 during training.
are randomly cropped from 256×255 to 224×224 during training.
attention guided convolutional neural network (AG-CNN) to
classify the lung or heart diseases. AG-CNN is featured in two
aspects. First, it has a focus on the local lesion regions which
are disease specific. Generally, such a strategy is particularly
effective for diseases such as ”Nodule”, which has a small
lesion region. In this manner, the impact of the noise in nondisease regions and misalignment can be alleviated. Second,
AG-CNN has three branches, i.e., a global branch, a local
branch and a fusion branch. While the local branch exhibits the
attention mechanism, it may lead to information loss in cases
where the lesion areas are distributed in the whole images,
such as Pneumonia. Therefore, a global branch is needed to
compensate for this error. We show that the global and local
branches are complementary to each other and, once fused,
yield favorable accuracy to the state of the art.
The working mechanism of AG-CNN is similar to that of a
radiologist. We first learn a global branch that takes the global
image as input: a radiologist may first browse the whole CXR
image. Then, we discover and crop a local lesion region and
train a local branch: a radiologist will concentrate on the local
lesion area after the overall browse. Finally, the global and
local branches are fused to fine-tune the whole network: a
radiologist will comprehensively consider the global and local
information before making decisions.
Our contributions are summarized as follows.
• We propose an attention guided convolutional neural network (AG-CNN) which diagnoses thorax diseases by combining the global and local information. AG-CNN improves the recognition performance by correcting image alignment and reducing the impact of noise.
• We introduce a CNN training baseline, which produces competitive results to the state-of-the-art methods by itself.
• We present comprehensive experiments on the ChestX-ray14 dataset. The experimental results demonstrate that our method achieves superior performance over the state-of-the-art approaches.

II. RELATED WORKS
Chest X-ray datasets. The problem of Chest X-ray image
classification has been extensively explored in the field of
medical image analysis. Several datasets have been released
in this context. For example, the JSRT dataset [24], [25]
contains 247 chest X-ray images including 154 lung nodules.
It also provides masks of the lung area for segmentation
performance evaluation. The Shenzhen chest X-ray set [26]
has a total of 662 images belonging to two categories (normal
and tuberculosis (TB)). Among them, 326 are normal cases
and 336 are cases with TB. The Montgomery County chest
X-ray set (MC) [26] collects 138 frontal chest X-ray images
from Montgomery Country’s Tuberculosis screen program, of
which 80 are normal and 58 are cases with manifestations of
TB. These three datasets are generally small for deep model
training. In comparison, the Indiana University Chest X-ray
Collection dataset [27] consists of 3,955 radiology reports and
the corresponding 7,470 chest X-ray images. It is publicly
available through Open-I [28]. However, this dataset does not
provide explicit disease class labels, so we do not use it in this
paper. Recently, Wang et al. [9] released the ChestX-ray14
dataset, which is the largest chest X-ray dataset by far. ChestXray14 collects 112,120 frontal-view chest X-ray images of
30,805 unique patients. Each radiography is labeled with one
or more types of 14 common thorax diseases. This dataset
poses a multi-label classification problem and is large enough
for deep learning, so we adopt this dataset for performance
evaluation in this paper.
Deep learning for chest X-ray image analysis. Recent
surveys [29], [30], [31], [32] have demonstrated that deep
learning technologies have been extensively applied to the
field of chest X-ray image annotation [33], classification [6],
[34], [8], [9], and detection (localization) [35], [36]. Islam
et al. [34] explore different CNN architectures and find that a
single CNN does not perform well across all abnormalities.
Therefore, they leverage model ensemble to improve the
classification accuracy, at the cost of increased training and
testing time. Yao et al. [19] and Kumar et al. [7] classify the
chest X-ray images by investigating the potential dependencies
among the labels from the aspect of multi-label problems.
Rajpurkar et al. [8] train a convolutional neural network to
address the multi-label classification problem. This paper
departs from the previous methods in that we make use of the
attention mechanism and fuse the local and global information
to improve the classification performance.
Attention models in medical image analysis. The CXR
classification problem needs to tell the relatively subtle differences between different diseases. Usually, a disease is often
characterized by a lesion region, which contains critical cues
for classification. Ypsilantis et al. [37] explore where to look
in chest X-rays with recurrent attention model (RAM) [38].
The RAM learns to sample the entire X-ray image sequentially
and focus on informative areas. Only one disease, enlarged heart, is considered in their work. Recently, Pesce et al. [39]
[Fig. 2 diagram: the global, local, and fusion branches; the global and local branches are ResNet-style stacks (224×224×3 input down to 7×7×2048 feature maps) ending in pooling, FC, and sigmoid/BCE losses; mask inference crops the local patch [xmin, ymin, xmax, ymax] from the heat map, and the fusion branch concatenates the two Pool5 outputs.]
Fig. 2. Overall framework of the attention guided convolutional neural network (AG-CNN). We show an example with ResNet-50 as backbone. AG-CNN
consists of three branches. Global and local branches consist of five convolutional blocks with batch normalization and ReLU. Each of them is then connected
to a max pooling layer (Pool5), a fully connected (FC) layer, and a sigmoid layer. Different from the global branch, the input of the local branch is a local
lesion patch which is cropped using the mask generated from the global branch. Then, the Pool5 layers of these two branches are concatenated into the fusion
branch. ”BCE” represents binary cross entropy loss. The input image is added to the heat map for visualization.
explore a soft attention mechanism from the saliency map of
CNN features to locate lung nodule position in radiographies.
And a localization loss is calculated by comparing the predicted position with the annotated position.
In this paper, AG-CNN locates the salient regions with
an attention guided mask inference process, and learns the
discriminative feature for classification. Compared with the
method which relies on bounding box annotations, our method only needs image-level labels without any extra information.
III. T HE P ROPOSED A PPROACH
In this section, we describe the proposed attention guided
convolutional neural network (AG-CNN) for thorax disease
classification. We will first illustrate the architecture of AGCNN in Section III-A. Second, we describe the mask inference
process for lesion region discovery in Section III-B. We then
present the training process of AG-CNN in Section III-C.
Finally, a brief discussion of the AG-CNN is provided.
A. Structure of AG-CNN
The architecture of AG-CNN is presented in Fig. 2. Basically, it has two major branches, i.e., the global and local
branches, and a fusion branch. Both the global and local
branches are classification networks that predict whether the
pathologies are present or not in the image. Given an image,
the global branch is first fine-tuned from a classification CNN
using the global image. Then, we crop an attended region
from the global image and train it for classification on the
local branch. Finally, the last pooling layers of both the global
and local branches are concatenated for fine-tuning the fusion
branch.
Multi-label setup. We label each image with a 15-dim
vector L = [l1 , l2 , ..., lC ] in which lc ∈ {0, 1}, C = 15.
lc represents whether the cth pathology is present, i.e., 1 for
presence and 0 for absence. The last element of L represents
the label with ”No Finding”.
Global and local branches. The global branch captures the underlying CXR information derived from the global image taken as input. In the global branch, we train a variant of ResNet-50
[18] as the backbone model. It consists of five down-sampling
blocks, followed by a global max pooling layer and a 15-dimensional fully connected (FC) layer for classification. At
last, a sigmoid layer is added to normalize the output vector
pg (c|I) of FC layer by
p̃g(c|I) = 1 / (1 + exp(−pg(c|I))),   (1)
where I is the global image. p̃g(c|I) represents the probability
score of I belonging to the cth class, c ∈ {1, 2, ..., C}. We
optimize the parameter Wg of global branch by minimizing
the binary cross-entropy (BCE) loss:
L(Wg) = −(1/C) ∑_{c=1}^{C} [ lc log(p̃g(c|I)) + (1 − lc) log(1 − p̃g(c|I)) ],   (2)
where lc is the groundtruth label of the cth class, C is the
number of pathologies.
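A minimal PyTorch sketch of this global branch head (our illustration, not the authors' released code; the exact layer choices are assumptions consistent with the description above) is:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class GlobalBranch(nn.Module):
    """Sketch of the global branch: ResNet-50 features, global max pooling,
    a 15-way FC layer, and a sigmoid producing per-class probabilities (Eq. 1)."""
    def __init__(self, num_classes=15):
        super().__init__()
        backbone = models.resnet50(pretrained=True)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # 2048 x 7 x 7 maps
        self.pool = nn.AdaptiveMaxPool2d(1)                             # global max pooling (Pool5)
        self.fc = nn.Linear(2048, num_classes)

    def forward(self, x):
        fmap = self.features(x)                   # B x 2048 x 7 x 7
        pooled = self.pool(fmap).flatten(1)       # B x 2048
        probs = torch.sigmoid(self.fc(pooled))    # p~_g(c|I)
        return probs, fmap                        # feature maps are reused for the heat map

criterion = nn.BCELoss()  # binary cross-entropy averaged over the C classes (Eq. 2)
```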
On the other hand, the local branch focuses on the lesion
area and is expected to alleviate the drawbacks of only using
the global image. In more detail, the local branch possesses the same convolutional network structure as the global branch. Note that these two branches do not share weights since they have distinct purposes. We denote the probability score of the local branch as p̃l(c|Ic) and Wl as the parameters of the local branch. Here, Ic is the input image of the local branch. We
perform the same normalization and optimization as the global
branch.
Fig. 3. The process of lesion area generation. (Top:) global CXR images of various thorax diseases for the global branch. The manually annotated lesion
areas provided by [9] are annotated with red bounding boxes. Note that we do not use the bounding boxes for training or testing. (Middle:) corresponding
visual examples of the output of the mask inference process. The lesion areas are denoted by green bounding boxes. Higher response is denoted with red,
and lower blue. Note that the heat maps are resized to the same size as the input images. (Bottom:) cropped and resized images from the green bounding
boxes which are fed to the local branch.
Algorithm 1: Attention Guided CNN Procedure
Input: Input image I; Label vector L; Threshold τ.
Output: Probability score p̃f(c|[I, Ic]).
Initialization: the global and local branch weights.
1 Learn Wg with I, compute p̃g(c|I), optimize by Eq. 2 (Stage I);
2 Compute mask M and the bounding box coordinates [xmin, ymin, xmax, ymax], crop Ic from I;
3 Learn Wl with Ic, compute p̃l(c|Ic), optimize by Eq. 2 (Stage II);
4 Concatenate Poolg and Pooll, learn Wf, compute p̃f(c|[I, Ic]), optimize by Eq. 2 (Stage III).
Fusion branch. The fusion branch first concatenates the
Pool5 outputs of the global and local branches. The concatenated layer is connected to a 15-dimensional FC layer for final
classification. The probability score is p̃f(c|[I, Ic]). We denote Wf as the parameters of the fusion branch and optimize Wf by Eq. 2.
B. Attention Guided Mask Inference
In this paper, we construct a binary mask to locate the
discriminative regions for classification in the global image.
It is produced by performing thresholding operations on the
feature maps, which can be regarded as an attention process.
This process is described below.
Given a global image, let fgk (x, y) represent the activation
of spatial location (x, y) in the kth channel of the output of
the last convolutional layer, where k ∈ {1, ..., K}, K = 2, 048
in ResNet-50. g denotes the global branch. We first take the
absolute value of the activation values fgk (x, y) at position
(x, y). Then the attention heat map Hg is generated by
counting the maximum values along channels,
Hg (x, y) = max(|fgk (x, y)|), k ∈ {1, ..., K}.
k
(3)
The values in Hg directly indicate the importance of the
activations for classification. In Fig. 1(b) and Fig. 3 (the
second row), some examples of the heat maps are shown.
We observe that the discriminative regions (lesion areas)
of the images are activated. The heat map can be constructed by computing different statistical values across the channel dimension, such as the L1 distance (1/K) ∑_{k=1}^{K} |fg^k(x, y)| or the L2 distance (1/K) ∑_{k=1}^{K} (fg^k(x, y))². Different statistics result in
subtle numerical differences in the heat map, but may not affect the classification significantly. Therefore, we compute the heat map with Eq. 3 in our experiments. The comparison of these
statistics is presented in Section IV-C.
We design a binary mask M to locate the regions with
large activation values. If the value of a certain spatial position
(x, y) in the heat map is larger than a threshold τ , the value
at corresponding position in the mask is assigned with 1.
Specifically,
M(x, y) = 1 if Hg(x, y) > τ, and M(x, y) = 0 otherwise.   (4)
where τ is the threshold that controls the size of attended region. A larger τ leads to a smaller region, and vice versa. With
the mask M , we draw a maximum connected region that covers the discriminative points in M . The maximum connected
region is denoted as the minimum and maximum coordinates
in horizontal and vertical axis [xmin , ymin , xmax , ymax ]. At
last, the local discriminative region Ic is cropped from the
input image I and is resized to the same size as I. We visualize
the bounding boxes and cropped patches with τ = 0.7 in
Fig. 3. The attention informed mask inference method is
able to locate the regions (green bounding boxes) which are
reasonably close to the groundtruth (red bounding boxes).
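The mask inference step can be sketched as follows (our illustration; the heat-map normalization and the mapping of the bounding box back to image coordinates are assumptions, since the paper does not spell them out):

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy import ndimage

def crop_local_patch(image, feature_maps, tau=0.7, out_size=224):
    """Attention guided mask inference (Eqs. 3-4): heat map -> mask -> largest
    connected region -> cropped and resized local patch.

    image        : tensor (3, H, W), the global input I
    feature_maps : tensor (K, h, w) from the last conv layer of the global branch
    """
    heat = feature_maps.abs().max(dim=0).values                    # Eq. 3
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)  # scale to [0, 1] (assumed)
    mask = (heat > tau).cpu().numpy().astype(np.uint8)             # Eq. 4

    labeled, num = ndimage.label(mask)                             # connected components
    if num == 0:
        return image                                               # fallback: keep global image
    sizes = ndimage.sum(mask, labeled, index=range(1, num + 1))
    largest = labeled == (np.argmax(sizes) + 1)
    ys, xs = np.where(largest)

    H, W = image.shape[1:]
    h, w = mask.shape
    y0, y1 = int(ys.min() * H / h), int((ys.max() + 1) * H / h)    # box in image coordinates
    x0, x1 = int(xs.min() * W / w), int((xs.max() + 1) * W / w)

    patch = image[:, y0:y1, x0:x1].unsqueeze(0)
    patch = F.interpolate(patch, size=(out_size, out_size), mode="bilinear", align_corners=False)
    return patch.squeeze(0)
```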
C. Training Strategy of AG-CNN
This paper adopts a three-stage training scheme for AG-CNN.
Fig. 4. Examples of 8 pathologies in ChestX-ray14 (Atelectasis, Cardiomegaly, Effusion, Infiltrate, Mass, Nodule, Pneumonia, Pneumothorax). The lesion regions are annotated with the red bounding boxes provided by [9]. Note that these groundtruth bounding boxes are only used for demonstration: they are neither used in training nor testing.
TABLE I
C OMPARISON RESULTS OF VARIOUS METHODS ON C HEST X- RAY 14.
Method | CNN | Atel | Card | Effu | Infi | Mass | Nodu | Pne1 | Pne2 | Cons | Edem | Emph | Fibr | PT | Hern | Mean
Wang et al. [9] | R-50 | 0.716 | 0.807 | 0.784 | 0.609 | 0.706 | 0.671 | 0.633 | 0.806 | 0.708 | 0.835 | 0.815 | 0.769 | 0.708 | 0.767 | 0.738
Yao et al. [19] | D-/ | 0.772 | 0.904 | 0.859 | 0.695 | 0.792 | 0.717 | 0.713 | 0.841 | 0.788 | 0.882 | 0.829 | 0.767 | 0.765 | 0.914 | 0.803
Rajpurkar et al. [8]* | D-121 | 0.821 | 0.905 | 0.883 | 0.720 | 0.862 | 0.777 | 0.763 | 0.893 | 0.794 | 0.893 | 0.926 | 0.804 | 0.814 | 0.939 | 0.842
Kumar et al. [7]* | D-161 | 0.762 | 0.913 | 0.864 | 0.692 | 0.750 | 0.666 | 0.715 | 0.859 | 0.784 | 0.888 | 0.898 | 0.756 | 0.774 | 0.802 | 0.795
Global branch (baseline) | R-50 | 0.818 | 0.904 | 0.881 | 0.728 | 0.863 | 0.780 | 0.783 | 0.897 | 0.807 | 0.892 | 0.918 | 0.815 | 0.800 | 0.889 | 0.841
Local branch | R-50 | 0.798 | 0.881 | 0.862 | 0.707 | 0.826 | 0.736 | 0.716 | 0.872 | 0.805 | 0.874 | 0.898 | 0.808 | 0.770 | 0.887 | 0.817
AG-CNN | R-50 | 0.844 | 0.937 | 0.904 | 0.753 | 0.893 | 0.827 | 0.776 | 0.919 | 0.842 | 0.919 | 0.941 | 0.857 | 0.836 | 0.903 | 0.868
Global branch (baseline) | D-121 | 0.832 | 0.906 | 0.887 | 0.717 | 0.870 | 0.791 | 0.732 | 0.891 | 0.808 | 0.905 | 0.912 | 0.823 | 0.802 | 0.883 | 0.840
Local branch | D-121 | 0.797 | 0.865 | 0.851 | 0.704 | 0.829 | 0.733 | 0.710 | 0.850 | 0.802 | 0.882 | 0.874 | 0.801 | 0.769 | 0.872 | 0.810
AG-CNN | D-121 | 0.853 | 0.939 | 0.903 | 0.754 | 0.902 | 0.828 | 0.774 | 0.921 | 0.842 | 0.924 | 0.932 | 0.864 | 0.837 | 0.921 | 0.871
We compute the AUC of each class and the average AUC across the 14 diseases. * denotes that a different train/test split is used: 80% for training and the rest 20% for testing. All the other methods split the dataset with 70% for training, 10% for validation and 20% for testing. Each pathology is denoted with its first four characters, e.g., Atelectasis with Atel. Pneumonia and Pneumothorax are denoted as Pne1 and Pne2, respectively. PT represents Pleural Thickening. We report the performance with parameter τ = 0.7. ResNet-50 (R-50) and DenseNet-121 (D-121) are used as backbones in our approach. For each column, the best and second best results are highlighted in red and blue, respectively.
Stage I. Using the global images, we fine-tune the global
branch network pretrained by ImageNet. peg (c|I) is normalized
by Eq. 1.
Stage II. Once the local image Ic is obtained by mask
inference with threshold τ , we feed it into the local branch
for fine-tuning. pel (c|Ic ) is also normalized by Eq. 1. When
we fine-tune the local branch, the weights in the global branch
are fixed.
Stage III. Let P oolg and P ooll represent the Pool5 layer
outputs of the global and local branches, respectively. We
concatenate them for a final stage of fine-tuning and normalize
the probability score pf
f (c|[I, Ic ]) by Eq. 1. Similarly, the
weights of previous two branches are fixed when we fine-tune
the weights of fusion branch.
In each stage, we use the model with the highest AUC on
the validation set for testing. The overall AG-CNN training
procedure is presented in Algorithm 1. Variants of training
strategy may influence the performance of AG-CNN. We
discuss this in Section IV-C.
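A compact sketch of this three-stage scheme (ours; `model` is assumed to map a batch of images to per-class probabilities, and the hyper-parameters follow Section IV-B) is:

```python
import torch

def train_stage(model, loader, params, epochs=50, lr=0.01):
    """One fine-tuning stage; only the parameters in `params` are updated."""
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)
    criterion = torch.nn.BCELoss()
    for _ in range(epochs):
        for images, labels in loader:
            loss = criterion(model(images), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()

# Stage I  : global branch on the global images.
# Stage II : local branch on the cropped patches (global weights frozen).
# Stage III: fusion branch on the concatenated Pool5 features (global and local weights frozen).
```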
IV. E XPERIMENT
This section evaluates the performance of the proposed
AG-CNN. The experimental dataset, evaluation protocol and
the experimental settings are introduced first. Section IV-C
demonstrates the performance of global and local branches
and the effectiveness of fusing them. Furthermore, comparison
of AG-CNN and the state of the art is presented in Table. I.
In Section. IV-D, we analyze the parameter impact in mask
inference.
A. Dataset and Evaluation Protocol
Dataset. We evaluate the AG-CNN framework using the
ChestX-ray14¹ dataset [9]. ChestX-ray14 collects 112,120
frontal-view images of 30,805 unique patients. 51,708 images
of them are labeled with up to 14 pathologies, while the others
are labeled as “No Finding”. Fig. 4 presents some examples
of 8 out of 14 thorax diseases and the ground-truth bounding
boxes of the lesion regions provided by [9]. We observe that
the size of the lesion area varies a lot for different pathologies.
Evaluation protocol. In our experiment, we randomly
shuffle the dataset into three subsets: 70% for training, 10%
for validation and 20% for testing. Each image is labeled with
a 15-dim vector L = [l1, l2, ..., lC] in which lc ∈ {0, 1}, C =
15. l15 represents the label with ”No Finding”.
B. Experimental Settings
For training (any of the three stages), we perform data
augmentation by resizing the original images to 256 × 256,
¹ https://nihcc.app.box.com/v/ChestXray-NIHCC
Fig. 5. ROC curves of the global, local and fusion branches (DenseNet-121 as backbone) over the 14 pathologies. The corresponding AUC values are given
in Table. I. We observe that fusing global and local information yields clear improvement.
Fig. 6. ROC curves of AG-CNN on the 14 diseases (ResNet-50 and DenseNet121 as backbones, respectively).
randomly resized cropping to 224×224, and random horizontal
flipping. The ImageNet mean value is subtracted from the
image. When using ResNet-50 as backbone, we optimize
the network using SGD with a mini-batch size of 126, 64,
64 for global, local and fusion branch, respectively. But for
DenseNet-121, the network is optimized with a mini-batch
of 64, 32, and 32, respectively. We train each branch for 50
epochs. The learning rate starts from 0.01 and is divided by
10 after 20 epochs. We use a weight decay of 0.0001 and
a momentum of 0.9. During validation and testing, we also
resize the image to 256 × 256, and then perform center cropping to obtain an image of size 224 × 224. Except in Section
IV-D, we set τ to 0.7 which yields the best performance on
the validation set. We implement AG-CNN with the Pytorch
framework [40].
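For reference, the augmentation described above can be written with torchvision as follows (our sketch; the paper mentions ImageNet mean subtraction only, so the unit standard deviation is an assumption):

```python
import torchvision.transforms as T

imagenet_mean = [0.485, 0.456, 0.406]   # ImageNet channel means
imagenet_std = [1.0, 1.0, 1.0]          # mean subtraction only (assumption)

train_transform = T.Compose([
    T.Resize((256, 256)),
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize(imagenet_mean, imagenet_std),
])

test_transform = T.Compose([
    T.Resize((256, 256)),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(imagenet_mean, imagenet_std),
])
```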
C. Evaluation
We evaluate our method on the ChestX-ray14 dataset.
Mostly, ResNet-50 [18] is used as backbone, but the AUC and
ROC curve obtained by DenseNet-121 [20] are also presented.
Global branch (baseline) performance. We first report the
performance of the baseline, i.e., the global branch. Results are
summarized in Table. I, Fig. 5 and Fig. 9.
The average AUC across the 14 thorax diseases arrives
at 0.841 and 0.840, using ResNet-50 and DenseNet-121,
respectively. For both backbone networks, this is a competitive
accuracy compared with the previous state of the art. Except
Hernia, the AUC scores of the other 13 pathologies are very
close to or even higher than [8]. Moreover, we observe that
Infiltration has the lower recognition accuracy (0.728 and
0.717 for ResNet-50 and DenseNet-121). This is because the
diagnosis of Infiltration mainly relies on the texture change
among the lung area, which is challenging to recognize. The
disease Cardiomegaly achieves higher recognition accuracy
(0.904 and 0.912 for ResNet-50 and DenseNet-121, respectively), which is characterized by the relative solid region
(heart).
Performance of the local branch. The local branch is
trained on the cropped and resized lesion patches, which is
supposed to provide attention mechanisms complementary to
the global branch. The performance of the local branch is
demonstrated in Table. I, Fig. 5 and Fig. 9 as well.
Using ResNet-50 and DenseNet-121, the average AUC
score is 0.817 and 0.810, respectively, which is higher than [9],
[7]. Despite being competitive, the local branch yields lower
accuracy than the global branch. For example, when using
[Fig. 7 contents: for each example image, the top-10 predicted categories with their probability scores; the ground-truth labels are highlighted.]
Fig. 7. Examples of classification results. We present the top-10 predicted categories and the corresponding probability scores. The ground-truth labels are
highlighted in blue.
Fig. 8. Average AUC scores of AG-CNN with different settings of τ on the
validation set (ResNet-50 as backbone).
ResNet-50, the performance gap is 2.4% (0.841 to 0.817). The
probable reason for this observation is that the lesion region
estimation and cropping process may lead to information loss
which is critical for recognition. So the local branch may suffer
from inaccurate estimation of the attention area.
Among the 14 classes, the largest performance drop is
observed at "Pneumonia" (0.067). The reason for the inferior performance at "Pneumonia" is probably that a lot of information is lost. Generally, the area where the lung is inflamed is relatively large and its corresponding attention heat map shows a scattered distribution. With a higher value of τ, only a very small patch is cropped from the original image. For the classes "Hernia" and "Consolidation", the local branch and global branch yield very similar accuracy. We speculate that the cropped local patch is consistent with the lesion area in the global image.
Effectiveness of fusing global and local branches. In
Table. I, Fig. 5, and Fig. 6, we illustrate the effectiveness of the
fusion branch, which yields the final classification results of
our model. Table. I shows AUC of AG-CNN over 14 classes.
The observations are consistent across different categories and the two backbones. Fig. 5 presents the ROC curves of the three branches for each pathology, which illustrates that fusing the global and local branches clearly improves both. We present the ROC curves of the 14 pathologies with these two backbones in Fig. 6. They are highly consistent, which demonstrates that AG-CNN is not sensitive to the backbone network architecture.
For both ResNet-50 and DenseNet-121, the fusion branch,
i.e., AG-CNN, outperforms both the global branch and local
branch. For example, when using ResNet-50, the performance
gap from AG-CNN to the global and local branches is
0.027 and 0.051, respectively. Specifically AG-CNN (with
DenseNet-121 as backbone) surpasses the global and local
branches for all 14 pathologies.
The advantage of AG-CNN is consistent across the categories. Using ResNet-50 for example, the largest improvement
(0.047) is observed at the class “Nodule”, the disease of
which is featured by small lesion areas (see Fig. 4). In
fact, under such circumstances, the global branch can be
largely affected by the noise within the non-disease areas.
By paying attention on the small yet focused lesion areas,
our method effectively improves the classification performance
of Nodule. On the other hand, we also notice that under the
class Pneumonia, AG-CNN is inferior to the global branch, a
consistent observation made with the local branch: the local
branch is the least effective at this class. Some classification
results are presented in Fig. 7.
Another experiment, in which a global image is input into both the global and local branches, is conducted to verify the effectiveness of fusing global and local cues. The same experimental settings as in Section IV-B are used except that the mini-batch size is 64 in training. The three branches are trained together with ResNet-50 as backbone. The average AUCs of the global, local and fusion branches reach 0.845, 0.846 and 0.851, respectively. The performance is 0.017 lower compared with inputting a local patch into the local branch. The results show that AG-CNN is superior to both the global and local branches. In particular, the improvement comes from the local discriminative region rather than from increasing the number of parameters.
Comparison with the state of the art. We compare our results with the state-of-the-art methods [9], [19], [7], [8] on the
ChestX-ray14 dataset. Wang et al. [9] classify and localize the
thorax disease in a unified weakly supervised framework. This
localization method actually compromises the classification
Fig. 9. Average AUCs for different settings of τ on the test set (ResNet-50 as backbone). Note that the results from global branch are our baseline.
accuracy. The reported results from Yao et al. [19] are based
on the model in which labels are considered independent.
Kumar et al. [7] try different boosting methods and cascade
the previous classification results for multi-label classification.
The accuracy of the previous step directly influences the result
of the following pathologies.
Comparing with these methods, this paper contributes
new state of the art to the community: average AUC =
0.871. AG-CNN exceeds the previous state of the art [8] by
2.9%. AUC scores of pathologies such as Cardiomegaly and
Infltration are higher than [8] by about 0.03. AUC scores of
Mass, Fibrosis and Consolidation surpass [8] by about 0.05.
Furthermore, we train AG-CNN with 70% of the dataset, but
80% are used in [7], [8]. In nearly all the 14 classes, our
method yields best performance. Only Rajpurkar et al. [8]
report higher accuracy on Hernia. In all, the classification
accuracy reported in this paper compares favorably against
previous art.
Variant of training strategy analysis. Training the three branches in different orders influences the performance of AG-CNN. We evaluate 4 training orders: 1) train
global branch first, and then local and fusion branch together
(G LF); 2) train global and local branch together, and then
fusion branch (GL F); 3) train three branches together (GLF);
4) train global, local and fusion branch sequentially (G L F).
Note that G L F is our three-stage training strategy. We limit
the batchsize to 64 for training two or three branches together,
such as GL F and GLF. And if the global branch is trained
first, the batchsize of each branch is set to 128, 64 and
64, respectively. The other experimental settings are same as
Section IV-B. We present the classification performance of
these training strategies in Table. II.
AG-CNN yields better performance (0.868 and 0.854) with the strategy of training the three branches sequentially (G L F and G L F∗). When the global branch is trained first, we obtain the same model as the baseline in Table I. Training with G L F, AG-CNN clearly improves the baseline from 0.841 to 0.868. AG-CNN (G L F∗) performs an overall fine-tuning when we train the fusion branch. It improves the global branch
TABLE II
R ESULTS OF DIFFERENT TRAINING STRATEGIES .
Strategy | Batchsize  | Global | Local | Fusion
GL F     | 64/64/64   | 0.831  | 0.800 | 0.833
GLF      | 64/64/64   | 0.847  | 0.815 | 0.849
G LF     | 128/64/64  | 0.841  | 0.809 | 0.843
G L F∗   | 128/64/64  | 0.852  | 0.819 | 0.854
G L F    | 128/64/64  | 0.841  | 0.817 | 0.868
∗ indicates that the parameters in the global and local branches are fine-tuned when we train the fusion branch. ResNet-50 is used as backbone.
TABLE III
RESULTS CORRESPONDING TO DIFFERENT STATISTICS.
Statistic | Global | Local  | Fusion
Max       | 0.8412 | 0.8171 | 0.8680
L1        | 0.8412 | 0.8210 | 0.8681
L2        | 0.8412 | 0.8213 | 0.8672
* ResNet-50 is used as backbone.
performance to 0.852, but not the local and fusion branches. Compared with G L F and G L F∗, the performance of AG-CNN (G LF) is much lower because of the inaccuracy of the local branch. When AG-CNN is trained with GL F or GLF, it is inferior to G L F or G L F∗. We infer that the local branch is essential to enhancing AG-CNN performance.
Variant of heat map analysis. In Table III, we report the performance of using different heat map computing methods. Based on the same baseline, the local branch produces a gap of 0.0042 between Max and L2, but only 0.0008 in the fusion branch. Max and L1 achieve very close performance on both the local and fusion branches. It illustrates that different statistics result in subtle differences in the local branch, but do not affect the classification performance significantly.
D. Parameter Analysis
We analyze the sensitivity of AG-CNN to parameter variations. The key parameter of AG-CNN consists in τ in Eq. 4,
which defines the local regions and affects the classification
accuracy. Fig. 8 shows the average AUC of AG-CNN over different τ on the validation set. AG-CNN achieves the best
performance when τ is setting as 0.7. Therefore, we report
the results on test set with τ = 0.7.
Fig. 9 compares the average AUC of the global, local branch
and fusion branch on the test dataset when ResNet-50 is used
as basic network. τ changes from 0.1 to 0.9. When τ is small
(e.g. , close to 0), the local region is close to the global image.
For example, when τ = 0.1, the average AUC of the local
branch (0.828) is close to the result of the global branch
(0.841). In such cases, most of the entries in the attention
heat map are preserved, indicating that the cropped image
patches are close to the original input. On the other hand,
while τ reaches to 1, e.g., 0.9, the local branch is inferior
to the global branch by a large margin (0.9%). Under this
circumstance, most of the information in the global image is
discarded but only the top 10% largest values in the attention
heat map are retained. The cropped image patches reflect very
small regions.
Unlike the local branch, AG-CNN is relatively stable to changes of the threshold τ. When concatenating the global
and local branches, AG-CNN outperforms both branches by
at least 1.7% at τ = 0.4 and 0.5. AG-CNN exhibits the highest
AUC (>0.866) when τ ranges between [0.6, 0.8].
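The thresholding-and-cropping step that τ controls can be sketched as follows. The bounding-box heuristic below is an illustrative assumption (for example, the method could instead crop the largest connected region of the binary mask); it is meant only to show how a larger τ keeps fewer heat-map entries and therefore yields a smaller patch, matching the trend in Fig. 9.

    import numpy as np

    def crop_region_from_heat_map(image, heat_map, tau=0.7):
        # Binary mask of (normalized) heat-map entries exceeding tau.
        mask = heat_map >= tau
        if not mask.any():
            return image                       # degenerate case: keep full image
        rows = np.where(mask.any(axis=1))[0]
        cols = np.where(mask.any(axis=0))[0]
        r0, r1 = rows.min(), rows.max() + 1
        c0, c1 = cols.min(), cols.max() + 1
        # Scale mask coordinates to image coordinates (mask may be coarser).
        sr = image.shape[0] / mask.shape[0]
        sc = image.shape[1] / mask.shape[1]
        return image[int(r0 * sr):int(r1 * sr), int(c0 * sc):int(c1 * sc)]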
V. CONCLUSION
In this paper, we propose an attention-guided two-branch convolutional neural network for thorax disease classification. The proposed network is trained by considering both the global and local cues captured by the global and local branches, respectively. Departing from previous works that rely merely on global information, it uses attention heat maps to mask the important regions that are used to train the local branch. Extensive experiments demonstrate that combining both global and local cues yields state-of-the-art accuracy on the ChestX-ray14 dataset. We also demonstrate that our method is relatively insensitive to parameter changes.
In future research, we will continue this study in two directions. First, we will investigate more accurate localization of the lesion areas. Second, to tackle the difficulties in sample collection and annotation, we will explore semi-supervised learning methods.
Classification of Major Depressive Disorder
via Multi-Site Weighted LASSO Model
Dajiang Zhu1, Brandalyn C. Riedel1, Neda Jahanshad1, Nynke A. Groenewold2,3,
Dan J. Stein3, Ian H. Gotlib4, Matthew D. Sacchet5, Danai Dima6,7, James H. Cole8,
Cynthia H.Y. Fu9, Henrik Walter10, Ilya M. Veer11, Thomas Frodl11,12,
Lianne Schmaal13,14,15, Dick J. Veltman15, Paul M. Thompson1
1 Imaging Genetics Center, USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of the University of Southern California, CA, USA;
2 BCN NeuroImaging Center and Department of Neuroscience of the University of Groningen, University Medical Center Groningen, The Netherlands;
3 Dept of Psychiatry and Mental Health, University of Cape Town, South Africa;
4 Neurosciences Program and Department of Psychology, Stanford University, CA, USA;
5 Department of Psychiatry and Behavioral Sciences, Stanford University, CA, USA;
6 Dept of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, UK;
7 Dept of Psychology, School of Arts and Social Science, City, University of London, UK;
8 Department of Medicine, Imperial College London, UK;
9 Department of Psychological Medicine, King’s College London, UK;
10 Dept of Psychiatry and Psychotherapy, Charité Universitätsmedizin Berlin, Germany;
11 Department of Psychiatry, Trinity College Dublin, Ireland;
12 Dept of Psychiatry and Psychotherapy, Otto von Guericke University Magdeburg, Germany;
13 Dept of Psychiatry and Neuroscience Campus Amsterdam, VU University Medical Center, The Netherlands;
14 Orygen, The National Centre of Excellence in Youth Mental Health, Australia;
15 Center for Youth Mental Health, The University of Melbourne, Australia
Abstract. Large-scale collaborative analysis of brain imaging data, in psychiatry and neurology, offers a new source of statistical power to discover features
that boost accuracy in disease classification, differential diagnosis, and outcome
prediction. However, due to data privacy regulations or limited accessibility to
large datasets across the world, it is challenging to efficiently integrate distributed information. Here we propose a novel classification framework through
multi-site weighted LASSO: each site performs an iterative weighted LASSO
for feature selection separately. Within each iteration, the classification result
and the selected features are collected to update the weighting parameters for
each feature. This new weight is used to guide the LASSO process at the next
iteration. Only the features that help to improve the classification accuracy are
preserved. In tests on data from five sites (299 patients with major depressive
disorder (MDD) and 258 normal controls), our method boosted classification
accuracy for MDD by 4.9% on average. This result shows the potential of the
proposed new strategy as an effective and practical collaborative platform for
machine learning on large scale distributed imaging and biobank data.
Keywords: MDD, weighted LASSO
1 Introduction
Major depressive disorder (MDD) affects over 350 million people worldwide [1] and
takes an immense personal toll on patients and their families, placing a vast economic
burden on society. MDD involves a wide spectrum of symptoms, varying risk factors,
and varying response to treatment [2]. Unfortunately, early diagnosis of MDD is challenging and is based on behavioral criteria; consistent structural and functional brain
abnormalities in MDD are just beginning to be understood. Neuroimaging of large
cohorts can identify characteristic correlates of depression, and may also help to detect modulatory effects of interventions, and environmental and genetic risk factors.
Recent advances in brain imaging, such as magnetic resonance imaging (MRI) and its
variants, allow researchers to investigate brain abnormalities and identify statistical
factors that influence them, and how they relate to diagnosis and outcomes [12]. Researchers have reported brain structural and functional alterations in MDD using different modalities of MRI. Recently, the ENIGMA-MDD Working Group found that
adults with MDD have thinner cortical gray matter in the orbitofrontal cortices, insula, anterior/posterior cingulate and temporal lobes compared to healthy adults without
a diagnosis of MDD [3]. A subcortical study – the largest to date – showed that MDD
patients tend to have smaller hippocampal volumes than controls [4]. Diffusion tensor
imaging (DTI) [5] reveals, on average, lower fractional anisotropy in the frontal lobe
and right occipital lobe of MDD patients. MDD patients may also show aberrant functional connectivity in the default mode network (DMN) and other task-related functional brain networks [6].
Fig. 1. Overview of our proposed framework.
Even so, classification of MDD is still challenging. There are three major barriers:
first, though significant differences have been found, these previously identified brain
regions or brain measures are not always consistent markers for MDD classification
[7]; second, besides T1 imaging, other modalities including DTI and functional magnetic resonance imaging (fMRI) are not commonly acquired in a clinical setting; last,
it is not always easy for collaborating medical centers to perform an integrated data
analysis due to data privacy regulations that limit the exchange of individual raw data
and due to large transfer times and storage requirements for thousands of images. As
biobanks grow, we need an efficient platform to integrate predictive information from
multiple centers; as the available datasets increase, this effort should increase the
statistical power to identify predictors of disease diagnosis and future outcomes, beyond what each site could identify on its own.
In this study, we introduce a multi-site weighted LASSO (MSW-LASSO) model to boost classification performance for each individual participating site by integrating their knowledge of feature selection and their classification results. As shown in Fig. 1, our proposed framework has the following characteristics: (1) each site retains its own data and performs weighted LASSO regression for feature selection locally; (2) only the selected brain measures and the classification results are shared with other sites; (3) information on the selected brain measures and the corresponding classification results is integrated to generate a unified weight vector across features, which is then sent to each site and applied to the weighted LASSO in the next iteration; (4) if the new weight vector leads to a new set of brain measures and better classification performance, the new set of brain measures is sent to the other sites; otherwise, it is discarded and the old one is retained.
2 Methods
2.1 Data and demographics
For this study, we used data from five sites across the world. The total number of participants is 557; all were older than 21 years. Demographic information for each site's participants is summarized in Table 1.
Site        Total N    MDD patients,    Controls,       Age of Controls     Age of MDD        % Female    % Female
                       N (%)            N (%)           (Mean ± SD; y)      (Mean ± SD; y)    MDD         Total
Groningen   45         22 (48.89%)      23 (51.11%)     42.78 ± 14.36       43.14 ± 13.8      72.73       73.33
Stanford    110        54 (49.09%)      56 (50.91%)     38.17 ± 9.97        37.75 ± 9.78      57.41       60.00
BRCDECC     130        69 (53.08%)      61 (46.92%)     51.72 ± 7.94        47.85 ± 8.91      68.12       60.77
Berlin      172        101 (58.72%)     71 (41.28%)     41.09 ± 12.85       41.21 ± 11.82     64.36       60.47
Dublin      100        53 (53%)         47 (47%)        38.49 ± 12.37       41.81 ± 10.76     62.26       57.00
Combined    557        299 (53.68%)     258 (46.32%)    –                   –                 –           –

Table 1. Demographics for the five sites participating in the current study.
2.2 Data preprocessing
As in most common clinical settings, only T1-weighted MRI brain scans were acquired at each site; quality control and analyses were performed locally. Sixty-eight
(34 left/34 right) cortical gray matter regions, 7 subcortical gray matter regions and
the lateral ventricles were segmented with FreeSurfer [8]. Detailed image acquisition,
pre-processing, brain segmentation and quality control methods may be found in [3,
9]. Brain measures include cortical thickness and surface area for cortical regions and
volume for subcortical regions and lateral ventricles. In total, 152 brain measures
were considered in this study.
2.3 Algorithm overview
To better illustrate the algorithms, we define the following notation:
1. F_i: the selected brain measures (features) of Site-i;
2. A_i: the classification performance of Site-i;
3. W: the weight vector;
4. w-LASSO(W, D_i): perform weighted LASSO on the local data D_i of Site-i with weight vector W;
5. SVM(F_i, D_i): perform SVM classification on D_i using the feature set F_i.
The algorithm has two parts: one that runs at each site, and one that runs at an integration server. At first, the integration server initializes a weight vector with all ones and sends it to all sites. Each site uses this weight vector to conduct weighted LASSO (Section 2.6) on its own data locally. If the selected features give better classification performance, the site sends the new features and the corresponding classification result to the integration server; if there is no improvement in classification accuracy, it sends the old ones. After the integration server receives the updates from all sites, it generates a new weight vector (Section 2.5) according to the different feature sets and their classification performance. The detailed strategy is discussed in Section 2.5.
Algorithm 1 (Integration Server)
1. Initialize W (with all features weighted as one)
2. Send W to all sites
3. while at least one site has improvement on A
4.     update W (Section 2.5)
5.     Send W to all sites
6. end while
7. Send W with null to all sites

Table 2. Main steps of Algorithm 1.
Algorithm 2 (Site-i)
1. F_i ← ∅, A_i ← 0
2. while received W is not null
3.     F′_i ← w-LASSO(W, D_i) (Section 2.6)
4.     if F′_i ≠ F_i
5.         A′_i ← SVM(F′_i, D_i)
6.         if A′_i > A_i
7.             send F′_i and A′_i to Integration Server
8.             F_i ← F′_i, A_i ← A′_i
9.         else send F_i and A_i to Integration Server
10.        end if
11.    end if
12. end while

Table 3. Main steps of Algorithm 2.
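For illustration, the server-site interaction in Algorithms 1 and 2 can be simulated in a single process as sketched below. The helper functions weighted_lasso_select and svm_accuracy are placeholders for the site-local computations of Sections 2.4-2.6, and the loop structure is only a schematic of the protocol, not the authors' implementation.

    import numpy as np

    def run_msw_lasso(sites, weighted_lasso_select, svm_accuracy,
                      n_features, max_iters=20):
        # sites: list of (X, y) pairs that stay local to each site.
        W = np.ones(n_features)                    # Algorithm 1, step 1
        best_feats = [set() for _ in sites]
        best_acc = [0.0 for _ in sites]
        for _ in range(max_iters):
            improved = False
            for i, (X, y) in enumerate(sites):     # Algorithm 2 runs per site
                feats = weighted_lasso_select(X, y, W)
                if feats != best_feats[i]:
                    acc = svm_accuracy(X, y, feats)
                    if acc > best_acc[i]:
                        best_feats[i], best_acc[i] = feats, acc
                        improved = True
            if not improved:                       # Algorithm 1 termination test
                break
            # Only the selected features and accuracies are shared; the new
            # weights follow Eq. (2) with P_s the share of participants.
            P = [len(y) for _, y in sites]
            P = [p / sum(P) for p in P]
            W = np.array([
                sum(best_acc[s] * P[s]
                    for s in range(len(sites)) if f in best_feats[s]) / len(sites)
                for f in range(n_features)
            ])
        return best_feats, best_acc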
2.4 Ordinary LASSO and weighted LASSO
LASSO [11] is a shrinkage method for linear regression. The ordinary LASSO is defined as:

    β̂(LASSO) = arg min_β ‖y − Σ_{i=1}^{n} x_i β_i‖² + λ Σ_{i=1}^{n} |β_i|        (1)

where y are the observations and x_i the predictors. λ is known as the sparsity parameter. LASSO minimizes the sum of squared errors while penalizing the sum of the absolute values of the coefficients β. As LASSO regression forces many coefficients to be exactly zero, it is widely used for variable selection [11].
However, the classical LASSO shrinkage procedure might be biased when estimating large coefficients [12]. To alleviate this risk, the adaptive LASSO [12] was developed; it assigns a different penalty parameter to each predictor and thus avoids penalizing large coefficients more heavily than small ones. Similarly, the motivation of the multi-site weighted LASSO (MSW-LASSO) is to penalize different predictors (brain measures) by assigning different weights, according to their classification performance across all sites. The generation of the weights for each brain measure (feature) and the MSW-LASSO model are discussed in Sections 2.5 and 2.6.
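As a small illustration of LASSO-based feature selection, the generic scikit-learn sketch below selects the brain measures with non-zero coefficients. It is not the authors' code, and the regularization strength alpha shown here is an arbitrary placeholder rather than the paper's sparsity setting.

    import numpy as np
    from sklearn.linear_model import Lasso

    def lasso_select_features(X, y, alpha=0.05):
        # X: (n_subjects, n_measures) matrix of brain measures,
        # y: labels (e.g., 1 = MDD, 0 = control).
        model = Lasso(alpha=alpha, max_iter=10000)
        model.fit(X, y)
        # Features with non-zero coefficients are the selected measures.
        return set(np.flatnonzero(model.coef_ != 0))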
2.5 Generation of a Multi-Site Weight
In Algorithm 1, after the integration server receives the information on the selected features (brain measures) and the corresponding classification performance of each site, it generates a new weight for each feature. The new weight for the f-th feature is:

    W_f = Σ_{s=1}^{m} Ψ_{s,f} A_s P_s / m        (2)

    Ψ_{s,f} = 1 if the f-th feature was selected at site s, and 0 otherwise        (3)

Here m is the number of sites, A_s is the classification accuracy of site s, and P_s is the proportion of participants at site s relative to the total number of participants at all sites. Eq. (3) penalizes the features that only "survived" at a small number of sites. On the contrary, if a specific feature was selected by all sites, meaning all sites agree that this feature is important, it tends to have a larger weight. In Eq. (2) we consider both the classification performance and the proportion of samples: if a site achieved very high classification accuracy but has a relatively small sample size compared to other sites, its selected features will only be conservatively "recommended" to other sites. In general, a feature that was selected by more sites and resulted in higher classification accuracy receives a larger weight.
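Written out directly, Eqs. (2)-(3) amount to the following small function; the inputs mirror what each site shares with the integration server, and the code is only an illustrative restatement of the formulas.

    def multi_site_weights(selected, accuracy, proportion, n_features):
        # selected[s]: set of feature indices chosen by site s (Eq. 3)
        # accuracy[s]: classification accuracy A_s of site s
        # proportion[s]: share of participants P_s contributed by site s
        m = len(selected)
        return [
            sum(accuracy[s] * proportion[s]
                for s in range(m) if f in selected[s]) / m
            for f in range(n_features)
        ]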
2.6 Multi-Site Weighted LASSO
In this section, we define the multi-site weighted LASSO (MSW-LASSO) model:

    β̂(MSW-LASSO) = arg min_β ‖y − Σ_{i=1}^{n} x_i β_i‖² + λ Σ_{i=1}^{n} (1 − Σ_{s=1}^{m} Ψ_{s,i} A_s P_s / m) |β_i|        (4)

Here x_i represents the MRI measures after controlling for the effects of age, sex and intracranial volume (ICV), which are handled within each site; y is the label indicating MDD patient or control; and n = 152 is the number of brain measures (features) in this study. In our MSW-LASSO model, a feature with a larger weight implies higher classification performance and/or recognition by multiple sites. Hence it is penalized less and has a greater chance of being selected by the sites that did not consider this feature in the previous iteration.
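One common way to realize a per-feature penalty such as Eq. (4) with an off-the-shelf LASSO solver is to rescale each column by the inverse of its penalty factor, fit an ordinary LASSO, and map the coefficients back. The sketch below assumes this standard reformulation (it is equivalent to Eq. (4) when all penalty factors are strictly positive); scikit-learn's objective scaling differs slightly from Eq. (4), so this is an illustration rather than an exact reproduction.

    import numpy as np
    from sklearn.linear_model import Lasso

    def msw_lasso_select(X, y, W, lam=0.05, eps=1e-6):
        # Per-feature penalty factors from Eq. (4): c_f = 1 - W_f.
        c = np.clip(1.0 - np.asarray(W), eps, None)
        X_scaled = X / c                       # divide each column by its factor
        model = Lasso(alpha=lam, max_iter=10000)
        model.fit(X_scaled, y)
        beta = model.coef_ / c                 # map back to the original scale
        return set(np.flatnonzero(beta != 0))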
3 Results
3.1 Classification improvements through the MSW-LASSO model
In this study, we applied Algorithm 1 and Algorithm 2 to data from five sites across the world. In the first iteration, the integration server initialized a weight vector with all ones and sent it to all sites, so the five sites conducted regular LASSO regression in the first round. After a small set of features was selected within each site, using a strategy similar to [9], each site performed classification locally using a support vector machine (SVM) and shared its best classification accuracy, as well as the set of selected features, with the integration server. The integration server then generated the new weights according to Eq. (2) and sent them back to all sites. From the second iteration onward, each site performed MSW-LASSO until none of them showed further improvement in classification accuracy. In total, the five sites ran MSW-LASSO for six iterations; the classification performance of each round is summarized in Fig. 2 (a-e).
Fig. 2. Applying MSW-LASSO to the data coming from five sites (a-e). Each subfigure shows
the classification accuracy (ACC), specificity (SPE) and sensitivity (SEN) at each iteration. (f)
shows the improvement in classification accuracy at each site after performing MSW-LASSO.
Though the Stanford and Berlin sites did not show any improvement after the second iteration, the classification performance at the BRCDECC and Dublin sites continued improving until the sixth iteration; hence our MSW-LASSO terminated at the sixth round. Fig. 2f shows the improvement in classification accuracy for all five sites - the average improvement is 4.9%. The sparsity level of the LASSO is set to 16%, which means that about 16% of the 152 features tend to be selected in the LASSO process. Section 3.3 shows the reproducibility of the results at different sparsity levels. For SVM classification, the same kernel (RBF) was used, and we performed a grid search over the possible parameters; only the best classification results are reported.
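A generic sketch of such an RBF-SVM grid search is shown below, only to make the evaluation step concrete; the actual parameter grid and cross-validation scheme used in the study are not specified here and the values shown are assumptions.

    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    def svm_accuracy(X, y, features, cv=5):
        # Evaluate the selected brain measures with an RBF-kernel SVM.
        grid = GridSearchCV(
            SVC(kernel="rbf"),
            param_grid={"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]},
            cv=cv, scoring="accuracy",
        )
        grid.fit(X[:, sorted(features)], y)
        return grid.best_score_   # best cross-validated accuracy over the grid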
3.2 Analysis of MSW-LASSO features
In the MSW-LASSO process, a new set of features is accepted only if it results in an improvement in classification; otherwise, the prior set of features is preserved. The new features are also "recommended" to other sites by increasing their corresponding weights. Fig. 3 displays the changes in the involved features through six iterations and the top 5 features selected by the majority of sites.
Fig. 3. (a) Number of involved features through six iterations. (b-f) The top five consistently selected features across sites. Within each subfigure, the top shows the locations of the corresponding features and the bottom indicates how many sites selected this feature through the MSW-LASSO process. (b-c) are cortical thickness measures and (d-f) are surface area measures.
At the first iteration, 88 features are selected across the five sites. This number decreases over the MSW-LASSO iterations: only 73 features are preserved after six iterations, yet the average classification accuracy increased by 4.9%. Moreover, if a feature is originally selected by the majority of sites, it tends to be continually selected over multiple iterations (Fig. 3d-e). "Promising" features that are accepted by fewer sites at first may be incorporated by more sites as the iterations proceed (Fig. 3b-c, f).
3.3 Reproducibility of the MSW-LASSO

Selected     Improvement, in %         Selected     Improvement, in %
features     ACC     SPE     SEN       features     ACC     SPE     SEN
13%          3.1     1.8     4.4       33%          2.6     3.1     2.5
20%          3.9     1.4     6.0       36%          1.7     2.1     1.5
23%          3.8     2.9     4.4       40%          2.5     4.1     1.4
26%          4.3     3.4     5.2       43%          3.1     1.1     5.0
30%          2.9     3.0     2.9       46%          2.8     3.9     1.9

Table 4. Reproducibility results with different sparsity levels. The "Selected features" columns give the percentage of features preserved during the LASSO procedure; ACC, SPE and SEN give the average improvement in accuracy, specificity and sensitivity at each sparsity level.
For LASSO-related problems, there is no closed-form solution for selecting the sparsity level; the choice is highly data dependent. To validate our MSW-LASSO model, we repeated Algorithm 1 and Algorithm 2 at different sparsity levels, which preserve different proportions of the features. The reproducibility performance of our proposed MSW-LASSO is summarized in Table 4.
4 Conclusion and Discussion
Here we proposed a novel multi-site weighted LASSO model to heuristically improve classification performance across multiple sites. By sharing with other sites the knowledge of which features might help to improve classification accuracy, each site has multiple opportunities to reconsider its own set of selected features and to increase its accuracy at each iteration. In this study, the average improvement in classification accuracy was 4.9% across the five sites. We offer a proof of concept for distributed machine learning that may be scaled up to other disorders, modalities, and feature sets.
5 References
1. World Health Organization. World Health Organization Depression Fact sheet, No. 369.
(2012). Available from: http://www.who.int/mediacentre/factsheets/fs369/en/.
2. Fried, E.I., et al. "Depression is more than the sum score of its parts: individual DSM
symptoms have different risk factors." Psych Med. 2067-2076 (2014).
3. Schmaal, L., et al. "Cortical abnormalities in adults and adolescents with major depression
based on brain scans from 20 cohorts worldwide in the ENIGMA Major Depressive Disorder Working Group." Mol Psych. doi: 10.1038/mp.2016.60 (2016).
4. Schmaal, L., et al. "Subcortical brain alterations in major depressive disorder: findings
from the ENIGMA Major Depressive Disorder working group." Mol Psych 806-812
(2016).
5. Liao, Y., et al. "Is depression a disconnection syndrome? Meta-analysis of diffusion tensor
imaging studies in patients with MDD." J Psych & Neurosci. 49 (2013).
6. Sambataro, F., et al. "Revisiting default mode network function in major depression: evidence for disrupted subsystem connectivity." Psychl Med. 2041-2051 (2014).
7. Lo, A., et al. "Why significant variables aren’t automatically good predictors." PNAS.
13892-13897 (2015).
8. https://surfer.nmr.mgh.harvard.edu/
9. Zhu, D., et al. Large-scale classification of major depressive disorder via distributed Lasso.
Proc. of SPIE, 10160 (2017).
10. Tibshirani, R., “Regression shrinkage and selection via the LASSO.” Journal of the Royal
Statistical Society. 58: 267–288 (1996).
11. Li, Qingyang, et al., "Large-Scale Collaborative Imaging Genetics Studies of Risk Genetic
Factors for Alzheimer’s Disease Across Multiple Institutions." MICCAI. 335-343 (2016).
12. Zou, H., “The adaptive LASSO and its oracle properties.” J. Amer. Statist. Assoc
101(476):1418-1429 (2006).
13. Koutsouleris, N., et al. Individualized differential diagnosis of schizophrenia and mood
disorders using neuroanatomical biomarkers. Brain, 138(7), 2059-2073 (2015).
* Supported in part by NIH grant U54 EB020403; see ref. 3 for additional support to coauthors for cohort recruitment.
Declarative vs Rule-based Control for Flocking Dynamics
Usama Mehmood
Department of Computer Science,
Stony Brook University, USA
Radu Grosu
arXiv:1710.10013v1 [cs.MA] 27 Oct 2017
Cyber-Physical Systems Group,
Technische Universitat Wien, Austria
Ashish Tiwari
SRI International, USA
Nicola Paoletti
Department of Computer Science,
Stony Brook University, USA
Shan Lin
Department of Electrical and
Computer Engineering, Stony Brook
University, USA
Junxing Yang
Department of Computer Science,
Stony Brook University, USA
ABSTRACT
The popularity of rule-based flocking models, such as Reynolds’
classic flocking model, raises the question of whether more declarative flocking models are possible. This question is motivated by
the observation that declarative models are generally simpler and
easier to design, understand, and analyze than operational models.
We introduce a very simple control law for flocking based on a
cost function capturing cohesion (agents want to stay together) and
separation (agents do not want to get too close). We refer to it as
declarative flocking (DF). We use model-predictive control (MPC)
to define controllers for DF in centralized and distributed settings. A
thorough performance comparison of our declarative flocking with
Reynolds’ classic flocking model, and with more recent flocking
models that use MPC with a cost function based on lattice structures, demonstrates that DF-MPC yields the best cohesion and least
fragmentation, and maintains a surprisingly good level of geometric regularity while still producing natural flock shapes similar to
those produced by Reynolds’ model. We also show that DF-MPC
has high resilience to sensor noise.
ACM Reference Format:
Usama Mehmood, Nicola Paoletti, Dung Phan, Radu Grosu, Shan Lin, Scott
D. Stoller, Ashish Tiwari, Junxing Yang, and Scott A. Smolka. 2018. Declarative vs Rule-based Control for Flocking Dynamics. In Proceedings of ACM/SIGAPP
Symposium On Applied Computing (SAC 2018). ACM, New York, NY, USA,
8 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
1 INTRODUCTION
Flocking is a collective behavior exhibited by a large number of
interacting agents possessing a common group objective [7]. The
term is most commonly associated with birds, and more recently,
drones. Examples include foraging for food, executing a predatoravoidance maneuver, and engaging in migratory behavior.
Dung Phan
Department of Computer Science,
Stony Brook University, USA
Scott D. Stoller
Department of Computer Science,
Stony Brook University, USA
Scott A. Smolka
Department of Computer Science,
Stony Brook University, USA
With the introduction of Reynolds’ model [12, 13], rule-based
control became the norm in the flocking community. Specifically,
in this model, at each time-step, each agent executes a control law
given in terms of the weighted sum of three competing forces to
determine its next acceleration. Each of these forces has its own rule:
separation (keep a safe distance away from your neighbors), cohesion
(move towards the centroid of your neighbors), and alignment (steer
toward the average heading of your neighbors). As the descriptions
suggest, these rules are executed by each agent in a distributed
environment with limited-range sensing and no communication.
The popularity of Reynolds’ model and its many variants raises
the question: Is there a more abstract declarative form of control for
flocking? This question is important because declarative models
are generally simpler and easier to design, understand, and analyze
than operational models. This is analogous to declarative programs
(e.g., functional programs and logic programs) being easier to write
and verify than imperative programs.
We show that the answer to this question is indeed positive by
providing a very simple control law for flocking based on a cost
function comprising two main terms: cohesion (the average squared
distance between all pairs of agents) and separation (a sum of inverse squared distances, except this time between pairs of agents
within each other’s sensing range). That is it. For example, no term
representing velocity alignment is needed. The cost function specifies what we want as the goal, and is hence declarative. In contrast,
the update rules in Reynolds’ model aim to achieve an implicit goal
and hence are operational. Executing declarative control amounts
to finding the right balance between attracting and repelling forces
between agents. We refer to this approach as Declarative Flocking
(DF). We use MPC (model-predictive control) to define controllers
for DF, and refer to this approach as DF-MPC. We define a centralized version of DF-MPC, which requires communication, and a
distributed version, which does not.
Previous MPCs for flocking exist, e.g., [16–18]. Most of these
MPCs are designed to conform to the α-lattice model of flocking
proposed in [7]. α-lattices impose a highly regular structure on
flocks: all neighboring agents are distance d apart, for a specified
constant d. This kind of structure is seen in some settings, such as
beehives, but is not expected in many other natural and engineered
settings, and it is not imposed by Reynolds’ model.
In this paper, we show, via a thorough performance evaluation,
how centralized and distributed DF-MPC compare to Reynolds’ rule-based approach [12, 13], Olfati-Saber’s potential-based approach
[7], a variant of Zhan and Li’s centralized lattice-based MPC approach [15, 16], and Zhang et al.’s distributed lattice-based MPC
approach [17]. We consider performance measures that capture multiple dimensions of flocking behavior: number of sub-flocks (flock
fragmentation), maximum sub-flock diameter (cohesion), velocity
convergence, and a new parameter-free measure of the geometric
regularity of the formation.
Our experimental results demonstrate that DF-MPC yields the
best cohesion and least fragmentation, and produces natural flock
shapes like those produced by Reynolds’ model. Also, distributed
DF-MPC maintains a surprisingly good level of geometric regularity.
We also analyze the resiliency of DF-MPC and the lattice-based
MPC approaches by considering the impact of sensor noise. Our
results demonstrate a remarkably high level of resiliency on the
part of DF-MPC in comparison with these other approaches.
The rest of the paper is organized as follows. Section 2 presents
the rule-based, potential-based, and lattice-based MPC approaches
mentioned above. Section 3 defines our declarative flocking approach. Section 4 defines our performance measures for flocking
models. Section 5 presents our experimental results and performance evaluation. Section 6 discusses related work. Finally, Section 7 offers our concluding remarks and directions for future work.
2 MODELS OF FLOCKING BEHAVIOR
We consider a set of dynamic agents B = {1, . . . , n} that move
according to the following discrete-time equation of motion:
    x_i(k+1) = x_i(k) + dT · v_i(k),    v_i(k) ∈ V        (1)
    v_i(k+1) = v_i(k) + dT · a_i(k),    a_i(k) ∈ A,        (2)
where x i (k), vi (k), ai (k) ∈ Rm are respectively position, velocity
and acceleration of agent i ∈ B in the m-dimensional space at
step k, and dT ∈ R+ is the time step. We consider physical constraints on velocities and accelerations, described by the sets V
and A, respectively, which are defined by V = {v | |v | ≤ v̄} and
A = {a | |a| ≤ ā}, where v̄ and ā limit the allowed magnitude of
the velocity and acceleration vectors, respectively.
In most flocking models, agents update their motion by changing
their acceleration. In this sense, ai (k) represents the control input
for agent i.
The configuration of all agents is described by the vector x(k) = [x_1^T(k) . . . x_n^T(k)]^T ∈ R^{m·n}. Let v(k) = [v_1^T(k) . . . v_n^T(k)]^T ∈ R^{m·n} and a(k) = [a_1^T(k) . . . a_n^T(k)]^T ∈ R^{m·n}. Then the equation of motion for all agents can be expressed as

    x(k+1) = x(k) + dT · v(k),        (3)
    v(k+1) = v(k) + dT · a(k),        (4)
The local neighborhood of agent i is defined by the set of other
agents, called neighbors, within a given distance from i, mimicking
the agent’s visibility sphere. For an interaction radius r > 0 and
configuration x, the set of spatial neighbors of agent i, Ni (x) ⊆ B,
is given by:
    N_i(x) = { j ∈ B | j ≠ i ∧ ∥x_i − x_j∥ < r },        (5)
where ∥ · ∥ denotes the Euclidean norm.
Figure 1: Examples of α-lattice (a) and quasi α-lattice (b). Solid lines connect agents in the same neighborhood that have distance d. Dashed lines connect those with distance d ± ϵ for ϵ ≤ δ (the tolerance).
For configuration x ∈ Rm ·n , we define the associated proximity
net G(x) = (B, E(x)) as the graph that connects agents within their
interaction radius:
    E(x) = { (i, j) ∈ B × B | ∥x_i − x_j∥ < r, i ≠ j },        (6)
To capture the regular geometry of flocks, Olfati-Saber introduced the notions of α-lattices, i.e. configurations where each agent
is equally distant from its neighbors, and quasi α-lattices, i.e. configurations that are α-lattices modulo a small error in the distances
[7]. The scale parameter d defines the ideal inter-agent distance.
Definition 2.1 (α-lattice [7]). A configuration x ∈ R^{m·n} is called an α-lattice if for all i ∈ B and all j ∈ N_i(x), ∥x_i − x_j∥ = d, where d ∈ R+ is the scale of the α-lattice. For tolerance δ ∈ R+, a configuration x ∈ R^{m·n} is called a quasi α-lattice if for all i ∈ B and all j ∈ N_i(x), |∥x_i − x_j∥ − d| ≤ δ.
2.1 Sensing noise
We extend the classical equations of motion, Eqs. (1)–(2), with sensing noise affecting how each agent perceives positions and velocities
of its neighbors. Existing work has put little focus on flocking dynamics subject to noise, which is unfortunately unavoidable in
realistic natural and engineered flocks.
For actual positions x(k) and velocities v(k) at step k, let x̃(k)
and ṽ(k) denote their noisy counterparts sensed by a generic agent,
defined by:
    x̃(k) = x(k) + nx(k)  and  ṽ(k) = v(k) + nv(k),        (7)

where nx(k) and nv(k) ∈ R^{m·n} are vectors of independent and identically distributed (i.i.d.) random variables. The position noise
nx(k) and velocity noise nv(k) are distributed according to Gaussian distributions with mean 0 and standard deviation σx and σv ,
respectively. We stress the dependency on k because noise variables
are independent across time steps.
In centralized flocking algorithms, where agent decisions are
computed by a single controller with information about the whole
population, we use Eq. 7 to define noisy measurements. In distributed algorithms, sensing noise is independent for each agent.
We denote the noisy measurements of agent i by x̃^▷i(k) and ṽ^▷i(k), where positions and velocities are noisy for all agents except agent i:

    x̃^▷i(k) = [x̃_1^T(k) . . . x_i^T(k) . . . x̃_n^T(k)]^T  and        (8)
    ṽ^▷i(k) = [ṽ_1^T(k) . . . v_i^T(k) . . . ṽ_n^T(k)]^T,        (9)
with x̃_1(k), . . . , x̃_n(k) and ṽ_1(k), . . . , ṽ_n(k) defined as per (7); implicitly, for each agent i and each other agent j, the noise distribution is sampled independently to compute the x̃_j^T(k) component of x̃^▷i(k).
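A minimal sketch of this noise model (Eqs. (7)-(9)) in NumPy, assuming positions and velocities are stored as n×m arrays and using a caller-supplied random generator (e.g. rng = np.random.default_rng(0)):

    import numpy as np

    def noisy_measurement(x, v, i, sigma_x, sigma_v, rng):
        # Agent i senses every other agent with i.i.d. Gaussian noise,
        # but knows its own position and velocity exactly (Eqs. 8-9).
        x_tilde = x + rng.normal(0.0, sigma_x, size=x.shape)
        v_tilde = v + rng.normal(0.0, sigma_v, size=v.shape)
        x_tilde[i], v_tilde[i] = x[i], v[i]
        return x_tilde, v_tilde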
2.2 Reynolds’ rule-based model
In Reynolds’ rule-based distributed model [12, 13], agents follow
simple rules to compute their accelerations from the positions and
velocities of their neighbors. The rules are illustrated in Figure 2.
They do not explicitly specify the desired flocking formation as an
objective; rather, flocking emerges from the interaction rules.
Specifically, each agent i ∈ B updates its acceleration ai (k) at
step k by considering the following three components (adapted to
include sensing noise):
(1) Alignment: agents match their velocities with the average velocity of nearby agents.

    a_i^al(k) = w_al · ( (1/|N_i(x̃^▷i(k))|) · Σ_{j ∈ N_i(x̃^▷i(k))} ṽ_j(k) − v_i(k) )        (10)

(2) Cohesion: agents move towards the centroid of the agents in the local neighborhood.

    a_i^c(k) = w_c · ( (1/|N_i(x̃^▷i(k))|) · Σ_{j ∈ N_i(x̃^▷i(k))} x̃_j(k) − x_i(k) )        (11)
(3) Separation: agents move away from nearby neighbors.

    a_i^s(k) = w_s · (1/|N_i(x̃^▷i(k))|) · Σ_{j ∈ N_i(x̃^▷i(k))} (x_i(k) − x̃_j(k)) / ∥x_i(k) − x̃_j(k)∥²        (12)

The cohesion and alignment rules help form and maintain a closely packed, flock-like formation. The separation rule prevents agents from coming too close to each other, thus reducing crowding and collisions.
Non-negative constants w_al, w_c and w_s are the weights for each acceleration component. Typically, a smaller interaction radius (hence a smaller neighborhood) is used for the separation rule, because it is significant only when agents are very close to each other. The overall acceleration in Reynolds’ model is given by:

    a_i(k) = a_i^al(k) + a_i^c(k) + a_i^s(k).        (13)
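The three rules translate almost directly into code. The sketch below is an illustrative NumPy version of Eqs. (10)-(13) that reuses the noisy_measurement helper sketched earlier and, for brevity, a single interaction radius for all three rules (as noted above, separation would typically use a smaller radius).

    import numpy as np

    def reynolds_acceleration(x_tilde, v_tilde, i, x_i, v_i, r, w_al, w_c, w_s):
        # Neighbors of agent i in the (noisy) configuration, Eq. (5).
        d = np.linalg.norm(x_tilde - x_i, axis=1)
        nbr = np.where((d < r) & (d > 0))[0]
        if len(nbr) == 0:
            return np.zeros_like(x_i)
        a_al = w_al * (v_tilde[nbr].mean(axis=0) - v_i)           # Eq. (10)
        a_c = w_c * (x_tilde[nbr].mean(axis=0) - x_i)             # Eq. (11)
        diff = x_i - x_tilde[nbr]                                 # Eq. (12)
        a_s = w_s * (diff / np.linalg.norm(diff, axis=1, keepdims=True) ** 2).mean(axis=0)
        return a_al + a_c + a_s                                   # Eq. (13)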
2.3 Olfati-Saber’s potential-based model
In potential-based flocking models, the interaction between a pair of agents is modeled by a potential field. It is assumed that an agent is a point source, and it has a potential field around it, which exerts a force, equal to its gradient, on other agents in its range of influence. The potential field has circular symmetry and hence is a function of distance from the source. In the work of Olfati-Saber [7], the potential function ψ_α for a pair of agents has its minimum at the desired inter-agent distance d of the desired α-lattice. Outside the interaction radius r, the potential function is constant, so the potential field exerts no force. The exact definition of ψ_α is complicated: it is the definite integral of an “action function” ϕ_α that is the product of a “bump function” ρ_h and an uneven sigmoidal function ϕ. The control law computes an agent’s acceleration based on the sum of the forces from all other agents in its neighborhood and a velocity alignment term.

2.4 MPC-based models
Model predictive control (MPC) [2] is a well-established control technique that works as follows: at each time step k, it computes the optimal control sequence (agents’ accelerations in our case) that minimizes a given cost function with respect to a predictive model of the controlled system and a finite prediction horizon of length T, i.e., from step k+1 to k+T. Then, the first control input of the optimal sequence is applied (the remainder of the sequence is unused), and the algorithm proceeds with a new iteration.
Two main kinds of MPC-based flocking models exist, centralized and distributed. Centralized models assume that information about positions and velocities of all agents is available to compute their optimal accelerations. Formally, at each time step k, it solves the following optimization problem:

    min_{a(k|k), ..., a(k+T−1|k) ∈ A}  J(k) + λ · Σ_{t=0}^{T−1} ∥a(k+t|k)∥²        (14)

where a(k+t|k) is the control input (accelerations) for all agents at predicted time step k+t starting from step k. The first term J(k) is the primary model-specific cost function that the controller seeks to optimize within the prediction horizon; it is implicitly a function of the predicted configurations during the prediction horizon for time step k. The second term is standard for MPC problems and penalizes large control inputs, with weight λ > 0.
In distributed flocking models, each agent computes its optimal acceleration based only on information about its neighbors. Each agent i solves an optimization problem of the form:

    min_{a_i(k|k), ..., a_i(k+T−1|k) ∈ A}  J_i(k) + λ · Σ_{t=0}^{T−1} ∥a_i(k+t|k)∥²        (15)
where ai (k + t | k) is the acceleration for agent i at predicted time
step k + t starting from step k, and Ji (k) is the model-specific cost
function for agent i. In distributed MPC, an agent has no way to
know current or future control decisions of its neighbors, which
are needed to make accurate predictions about their behavior. To
address this problem, some approaches allow agents to communicate their local control decisions or future positions (e.g. [16, 18]),
or assume that neighbors follow some default motion law, e.g.,
they move with constant velocities. We adopt the second strategy,
because it does not require any communication.
The majority of existing MPC-based approaches to flocking are
designed to optimize the regularity of the flock, by penalizing configurations where neighboring agents are not exactly distance d
apart, i.e., configurations that differ from an α-lattice [15–18]. We
call these approaches lattice-based MPC. Next we describe representative centralized and distributed lattice-based MPC flocking
models, which we extend to account for sensing noise. The centralized model is a variant of a model by Zhan and Li [15, 16]. The
distributed model is by Zhang et al. [17].
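Independent of the particular cost function, all of these controllers share the same receding-horizon structure. The sketch below illustrates that structure with scipy.optimize.minimize under a simple box bound on accelerations; Section 5 notes that the lattice-based problems are actually solved with MATLAB's fmincon and the DF problems with gradient descent, so this is only a generic illustration (the velocity bound v̄ is not enforced here).

    import numpy as np
    from scipy.optimize import minimize

    def mpc_step(x, v, cost_J, T, dT, a_max, lam):
        # One receding-horizon step: optimize accelerations for T future
        # steps, then return only the first one (Eqs. 14-15 pattern).
        n, m = x.shape

        def total_cost(a_flat):
            a = a_flat.reshape(T, n, m)
            xs, vs, c = x.copy(), v.copy(), 0.0
            for t in range(T):
                xs = xs + dT * vs          # x(k+t+1) = x(k+t) + dT·v(k+t)
                vs = vs + dT * a[t]        # v(k+t+1) = v(k+t) + dT·a(k+t)
                c += cost_J(xs) + lam * np.sum(a[t] ** 2)
            return c

        a0 = np.zeros(T * n * m)
        bounds = [(-a_max, a_max)] * (T * n * m)   # crude box version of |a| <= a_max
        res = minimize(total_cost, a0, method="L-BFGS-B", bounds=bounds)
        return res.x.reshape(T, n, m)[0]           # apply only the first control input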
2.4.1 Centralized lattice-based MPC flocking. The centralized
lattice-based MPC problem is defined as:
    min_{a(k|k), ..., a(k+T−1|k) ∈ A}  Σ_{t=1}^{T} ∥g(x(k+t|k))∥² + λ · Σ_{t=0}^{T−1} ∥a(k+t|k)∥²        (16)
Figure 2: Interaction rules for flocking behavior in Reynolds’ model: (a) alignment, (b) cohesion, (c) separation.
where x(k + t | k) is the configuration of the system at predicted
time step k + t starting from step k, following the dynamics:
    x_i(k|k) = x̃_i(k),    v_i(k|k) = ṽ_i(k)
    x_i(k+t+1|k) = x_i(k+t|k) + dT · v_i(k+t|k)
    v_i(k+t+1|k) = v_i(k+t|k) + dT · a_i(k+t|k),

where the initial state of the prediction window is given by noisy measurements. For configuration x, g(x) captures the α-lattice irregularity as the total deviation between agent distances and d:

    ∥g(x)∥² = Σ_{(i,j) ∈ E(x)} ∥x_ji − d · x_ji / ∥x_ji∥∥²,  with x_ji = x_j − x_i.        (17)
This model is inspired by [15] and [16] but differs from both:
in [15] the cost function also contains a velocity alignment term,
which the same authors removed in their subsequent work, while
in [16], “impulsive MPC” is used, which means that agents directly
control their velocities (instead of accelerations), an abstraction
that allows physically unrealizable accelerations.
2.4.2 Distributed lattice-based MPC flocking. In the distributed
MPC flocking model of Zhang et al. [17], each agent i controls its
acceleration based on position and velocity measurements of the
neighbors and assumes they have constant velocity (zero acceleration) during the prediction horizon. Similarly, the set of neighbors
of i is assumed invariant during the prediction horizon, and we
denote it by Ni (k) = Ni (x(k)). The control law for agent i is:
    min_{a_i(k|k), ..., a_i(k+T−1|k) ∈ A}  Σ_{t=1}^{T} ∥g_i(x(k+t|k))∥² + λ · Σ_{t=0}^{T−1} ∥a_i(k+t|k)∥².        (18)

where the predicted future dynamics of i is determined by:

    x_i(k|k) = x̃_i(k),    v_i(k|k) = ṽ_i(k)
    x_i(k+t+1|k) = x_i(k+t|k) + dT · v_i(k+t|k)
    v_i(k+t+1|k) = v_i(k+t|k) + dT · a_i(k+t|k),

while i’s neighbors j ∈ N_i(k) have constant velocity:

    x_j(k|k) = x̃_j(k),    x_j(k+t+1|k) = x_j(k+t|k) + dT · ṽ_j(k).
For configuration x, g_i(x) is defined in a similar way to Eq. (17) and quantifies how much i’s neighborhood N_i(k) deviates from an α-lattice:

    ∥g_i(x)∥² = Σ_{j ∈ N_i(k)} ∥x_ji − d · x_ji / ∥x_ji∥∥².        (19)
3 DECLARATIVE FLOCKING
This section introduces centralized and distributed versions of our
Declarative Flocking (DF) model, and presents a flocking algorithm
based on MPC. Our formulation is declarative in that it consists of
just two simple terms: (1) a cohesion term based on the average
squared distance between pairs of agents, to keep the flock together,
and (2) a separation term based on the inverse squared distances
between pairs of agents, to avoid crowding. These two terms represent opposing forces on agents, causing agents to move towards
positions in which these forces are balanced. Unlike the majority
of existing MPC-based approaches that are designed to optimize
conformance to an α-lattice, our design does not impose a specific
geometric structure.
3.1 Centralized DF model
The cost function J for our centralized DF model contains the two
terms described above, with the cohesion term considering all pairs
of agents, and the separation term considering only pairs of agents
that are neighbors. The weight ω of the separation term provides
control over the density of the flock.
    J^C(x) = (2 / (|B| · (|B| − 1))) · Σ_{i ∈ B} Σ_{j ∈ B, i<j} ∥x_ij∥² + ω · Σ_{(i,j) ∈ E(x)} 1/∥x_ij∥²

The control law is Eq. (14) with J(k) equal to Σ_{t=1}^{T} J^C(x(k+t|k)).

3.2 Distributed DF model
The cost function J for our distributed DF model is similar to the
centralized one, except that both terms are limited to consider pairs
of agents that are neighbors.
    J_i^D(x) = (1/|N_i(k)|) · Σ_{j ∈ N_i(k)} ∥x_ij∥² + ω · Σ_{j ∈ N_i(k)} 1/∥x_ij∥²        (20)

The control law for agent i is Eq. (15) with J_i(k) equal to Σ_{t=1}^{T} J_i^D(x(k+t|k)).
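A compact sketch of the distributed DF cost (Eq. (20)) and a plain gradient-descent MPC step over it is shown below. Section 5 states that the DF-MPC problems are solved with gradient descent; the step size, iteration count and finite-difference gradient used here are illustrative assumptions, and the box clip is a crude stand-in for the constraint |a| ≤ ā.

    import numpy as np

    def df_cost_distributed(x_i, x_nbrs, omega):
        # Eq. (20): cohesion (mean squared distance) + separation (inverse squares).
        d2 = np.sum((x_nbrs - x_i) ** 2, axis=1)
        return d2.mean() + omega * np.sum(1.0 / d2)

    def df_mpc_accel(x_i, v_i, x_nbrs, v_nbrs, omega, T=3, dT=0.3,
                     a_max=1.0, lam=1.0, steps=50, lr=0.1, eps=1e-4):
        # Gradient descent over agent i's T-step acceleration sequence,
        # assuming neighbors keep constant velocity (Section 2.4).
        a = np.zeros((T, x_i.size))

        def horizon_cost(a_seq):
            xi, vi, xn, c = x_i.copy(), v_i.copy(), x_nbrs.copy(), 0.0
            for t in range(T):
                xi = xi + dT * vi
                vi = vi + dT * a_seq[t]
                xn = xn + dT * v_nbrs
                c += df_cost_distributed(xi, xn, omega) + lam * np.sum(a_seq[t] ** 2)
            return c

        for _ in range(steps):                     # numerical gradient descent
            grad = np.zeros_like(a)
            base = horizon_cost(a)
            for idx in np.ndindex(a.shape):
                a_pert = a.copy(); a_pert[idx] += eps
                grad[idx] = (horizon_cost(a_pert) - base) / eps
            a = np.clip(a - lr * grad, -a_max, a_max)
        return a[0]                                # apply only the first input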
4 MEASURES OF FLOCKING PERFORMANCE
We introduce four key measures of flocking performance. A single
measure is insufficient, because flocking is indeed characterized by
multiple desirable properties, such as aligned velocities and cohesion. Olfati-Saber introduces four main properties for flocking [7],
informally described as:
(1) the group of agents stays connected in a unique flock, i.e., no
sub-flocks and fragmentation should emerge;
(2) the group remains cohesive, in a close-knit formation;
(3) the group moves in a coherent way as if it was a unique body,
i.e., agents’ velocities are aligned; and
(4) the group maintains a regular geometry (in the α-lattice
sense).
We introduce the following four measures to capture these four
requirements. An important concept in these definitions is a subflock, which is a set of interacting agents that is too far apart from
other agents to interact with them. Formally, a sub-flock in a configuration x corresponds to a connected component of the proximity
net G(x). Let CC(x) ⊆ 2 B be the set of connected components of
the proximity net G(x).
(1) The number of connected components of the proximity net
quantifies connectedness—or, equivalently, fragmentation—of the
flock. There is no fragmentation when |CC(x)| = 1. Fragmentation
exists when |CC(x)| > 1. Fragmentation may be temporary or, if
sub-flocks move in different directions, permanent.
(2) The maximum component diameter, denoted D(x), quantifies
cohesion. It is defined by
    D(x) = max_{B′ ∈ CC(x)} D(x, B′)        (21)

where D(x, B′) is the diameter of connected component B′:

    D(x, B′) = max_{(i,j) ∈ B′×B′, i≠j} ∥x_ij∥.        (22)
Note that when all agents are isolated, i.e., CC(x) = ⋃_{i∈B} {{i}},
D(x) = −∞ because the domain of the max function in Equation 22
is empty when B ′ is a singleton. Note that we consider the maximum diameter of a sub-flock in order to make this measure more
independent of connectedness. If we instead considered the overall
diameter of the entire (possibly fragmented) flock, any flocking
model that did poorly on connectedness would also do very poorly
on this measure.
(3) The velocity convergence measure, adopted from [17], quantifies the average discrepancy between each agent’s velocity and the
average velocity of the flock. In particular, we extend the measure
of [17] to average velocity convergence values across sub-flocks:
    VC(x, v) = (1/|CC(x)|) · Σ_{B′ ∈ CC(x)} (1/|B′|) · Σ_{i ∈ B′} ∥v_i − (1/|B′|) Σ_{j ∈ B′} v_j∥²        (23)
(4) To measure the regularity of the geometric structure of a flock,
as reflected in the inter-agent spacing, we introduce a parameterfree and model-independent irregularity measure I (x). For a connected component (sub-flock) B ′ , it is defined as the sample standard deviation of the distances between each agent in B ′ and its
closest neighbor. Thus, the measure penalizes configurations where
there is dispersion in inter-agent distances, while not imposing any
fixed distance between them (unlike α-lattices).
Let CC′(x) = CC(x) \ ⋃_{i∈B} {{i}} be the set of connected components where isolated agents are excluded. For |CC′(x)| = 0 (or
equivalently, |CC(x)| = |B|), i.e., all agents are isolated, we set the
irregularity I (x) = 0, which is the optimal value. This reflects the
fact that a single point is a regular structure on its own. Moreover,
such a configuration is already highly penalized by |CC(x)| and
VC(v). For |CC ′ (x)| > 0, the measure is defined by:
    I(x) = ( Σ_{B′ ∈ CC′} σ( ⊎_{i ∈ B′} { min_{j≠i} ∥x_ij∥ } ) ) / |CC′|        (24)

where σ(S) is the standard deviation of the multiset of samples S and ⊎ is the sum operator (or disjoint union) for multisets.
An α-lattice (see Def. 2.1) has the optimal value of I (x), i.e.,
I (x) = 0, since all neighboring agents are located at the same
distance d from each other, leading to zero standard deviation for the
term σ ({d, d, . . . , d }). This shows that I (x) captures the regularity
underlying the concept of α-lattice.
We introduce this measure because previous measures of regularity or irregularity, such as those in [7, 16, 17], measure deviations
from an α-lattice with a specified inter-agent distance d and are
therefore inapplicable to flocking models, such as Reynolds’ model
and our DF models, that are not based on α-lattices and do not
have a specified target inter-agent distance. Also, our irregularity
measure is more flexible than those based on α-lattices, because it
gives an optimal score to some configurations that are geometrically
regular but not α-lattices. For example, consider a configuration x
in which the agents are on the vertices of a grid with edge length e,
and the interaction radius is equal to the length of the diagonal of
a box in the grid. This configuration has an optimal value for our
irregularity measure, i.e., I (x) = 0, because the distance from every
agent to its nearest neighbor is e. This configuration is not an α-lattice and hence does not have an optimal value for the irregularity
measures used in prior work.
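The four measures can be computed directly from a configuration. The sketch below uses SciPy's connected-component routine on the proximity net and is only an illustration of the definitions in Eqs. (21)-(24), not the evaluation code used for the experiments.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components
    from scipy.spatial.distance import pdist, squareform

    def flock_measures(x, v, r):
        dist = squareform(pdist(x))                    # pairwise distances
        adj = csr_matrix((dist < r) & (dist > 0))      # proximity net G(x)
        n_cc, labels = connected_components(adj, directed=False)
        diam, vc, irr = -np.inf, 0.0, []
        for c in range(n_cc):
            idx = np.where(labels == c)[0]
            if len(idx) > 1:
                sub = dist[np.ix_(idx, idx)]
                diam = max(diam, sub.max())            # Eq. (22)
                np.fill_diagonal(sub, np.inf)
                irr.append(np.std(sub.min(axis=1), ddof=1))   # Eq. (24)
            vc += np.mean(np.sum((v[idx] - v[idx].mean(axis=0)) ** 2, axis=1))
        vc /= n_cc                                     # Eq. (23)
        irregularity = float(np.mean(irr)) if irr else 0.0
        return n_cc, diam, vc, irregularity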
5 PERFORMANCE EVALUATION
We compare the performance of the models of Section 2 with the
newly introduced DF flocking models in the 2-dimensional setting.
In the first set of experiments (Section 5.1), we evaluate the performance measures illustrated in Section 4. In the second set of
experiments (Section 5.2), we analyze the resilience of the algorithms to sensor noise.
For consistency with the experimental settings of [17], the latticebased MPC problems are solved using the interior point method implemented in MATLAB’s fmincon function. Our DF-MPC problems
are solved using gradient descent optimization. Unless otherwise
specified, the population size is n = 30, the simulation length is
100, dT = 0.3, v̄ = 8, ā = 1, r = 8.4, d = 7, T = 3, and λ = 1.
These parameter values are the same ones reported in [17]. Following the settings in the OpenSteer project [11], the parameters for
Reynolds’ model are rc = 9, r s = 5, r al = 7.5, wc = 8, w s = 12,
and w al = 8. The weight of the separation term in our centralized
and distributed DF-MPC is ω = 50. As in [17], initial positions and
initial velocities of agents are uniformly sampled from [−15, 15]2
and [0, 2]2 , respectively.
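For reference, the setup above can be collected into a short initialization sketch (ours; the variable names are assumptions, not the authors' code).

import numpy as np

rng = np.random.default_rng(0)
n, steps, dT = 30, 100, 0.3            # population size, simulation length, time step
v_bar, a_bar = 8.0, 1.0                # the parameters v-bar and a-bar from the text
r, d, T, lam = 8.4, 7.0, 3, 1.0        # interaction radius, inter-agent distance, horizon, lambda
omega = 50.0                           # weight of the separation term in DF-MPC

x = rng.uniform(-15.0, 15.0, size=(n, 2))   # initial positions sampled from [-15, 15]^2
v = rng.uniform(0.0, 2.0, size=(n, 2))      # initial velocities sampled from [0, 2]^2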
5.1 Performance Comparison of Flocking Algorithms
Fig. 3 shows examples of final formations for all flocking models.
In particular, we chose configurations where fragmentation did
not occur. We observe that the formations for lattice-based MPC
algorithms have spread-out, rigid structures, consistent with the
design objective of maximizing the α-lattice regularity. On the other
hand, Reynolds and our DF MPC models result in more natural
flock shapes.
In Fig. 4, we compare the performance measures averaged over
100 runs for each flocking model. Regarding the number of connected components (sub-flocks), our centralized DF-MPC registers
the best behavior, rapidly stabilizing to an average of 1 component
(see plot a). Our distributed DF-MPC and Reynolds’ model have
comparable performance, reaching an average number of sub-flocks
below 1.4. The lattice-based MPCs and Olfati-Saber instead lead
to constant fragmentation, with more than 2 sub-flocks for the
distributed lattice-based MPC, 6 for the centralized lattice-based
MPC, and more than 8 for Olfati-Saber’s model.
This ranking is confirmed by the diameter measure (plot b),
where our centralized and distributed DF-MPC and Reynolds’ model
show the best cohesion, outperforming the lattice-based approaches.
Recall that this measure indicates the maximum diameter over all
sub-flocks, not the diameter of the entire population. As a consequence, fragmentation tends to improve diameter values since it
produces sub-flocks with fewer individuals. This explains why our
distributed DF-MPC performs better on this measure than the centralized version, and similarly why Olfati-Saber’s model has smaller
diameter measure than centralized lattice-based MPC, which in turn
has smaller diameter measure than the distributed variant.
As expected, Olfati-Saber’s model and the lattice-based MPCs
have very good performance for irregularity (plot c), since they are
designed to achieve the regular geometric formation of α-lattice.
Surprisingly, our distributed DF-MPC performs almost as well as
them on this measure. Centralized DF-MPC and Reynolds’ model
have the least regular formations.
For velocity convergence (plot d), we find that all models perform
comparably well and are able to achieve flocks with consistent
velocities fairly quickly after an initial spike.
5.2 Robustness to Sensing Noise
To evaluate the resiliency of the models to sensor noise, we performed 20 runs for each model at 10 noise levels. The noise levels
are numbered from 1 to 10, and noise level i has σx = 0.2i and
σv = 0.1i. For each performance metric, we averaged its final values over 20 runs for each noise level. The results are plotted in Fig. 5.
Of the six models, Olfati-Saber’s model is the most vulnerable to
sensing noise: the number of sub-flocks |CC | in Olfati-Saber’s model
quickly increases to nearly 30, rendering other metrics irrelevant.
The lattice-based MPC models also exhibit high fragmentation, leading to nominally good but largely irrelevant values for the other
performance metrics. Our distributed DF-MPC and Reynolds’ model
have the best resiliency to sensing noise, with both models exhibiting similar profiles in all metrics. While the irregularity and velocity
convergence measures increase with noise level, as expected, both
models remarkably maintain almost a single connected component
with a nearly constant component diameter for all 10 noise levels,
with DF-MPC achieving a smaller diameter than Reynolds’ model.
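As an illustration of the noise model above, the following sketch (ours) perturbs the state observed by each agent at noise level i; the zero-mean Gaussian form is our assumption, since only the standard deviations are specified in the text.

import numpy as np

def noisy_observation(x, v, level, rng):
    # noise level i uses sigma_x = 0.2*i for positions and sigma_v = 0.1*i for velocities
    sigma_x, sigma_v = 0.2 * level, 0.1 * level
    return (x + rng.normal(0.0, sigma_x, size=x.shape),
            v + rng.normal(0.0, sigma_v, size=v.shape))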
6 RELATED WORK
Reynolds [12] introduced the first rule-based approach for simulation of flocking behavior. With only three simple rules, his model is
able to capture complex flocking behaviors of animals. Additional
rules can be added to the model to simulate specific behaviors, such
as leader following and predator avoidance. Pearce et al. [8] present
a rule-based strategy for flocking, where agents move to maximize
their view out of the flock. Cucker and Dong [3] present a rule-based flocking approach with proofs of convergence and collision
avoidance.
Cucker and Smale [4] introduced another popular rule-based
flocking model. The Cucker-Smale model is parameterized by a
constant β such that if β < 1/2, velocity convergence is guaranteed.
If β ≥ 1/2, velocity convergence can also be achieved under some
conditions on the initial positions and initial velocities of the agents.
Ahn and Ha [1] investigated the effects of multiplicative noise on
the long term dynamics of the Cucker-Smale model. Erban et al. [5]
extend the Cucker-Smale model to take into account stochasticity
(imperfections) of agent behavior and delay in agents’ responses to
changes in their environment.
Flocking models based on potential fields have been proposed
in several papers. Tanner et al. [14] propose a potential function
Ui j , given in Equation 25, where ||r i j || 2 is the distance between
agents i and j. For distances greater than R, the potential is set to a
constant value, C, indicating a zero force. In their control law, the
acceleration of agent i is based on the sum over all neighbors j of
the gradient of the potential function Ui j .
U_{ij} = \begin{cases} \frac{1}{\|r_{ij}\|_2} + \log \|r_{ij}\|_2, & \|r_{ij}\|_2 < R \\ C, & \|r_{ij}\|_2 \ge R \end{cases}     (25)
A similar potential function is also proposed by [10]. Furthermore,
potential-based solutions have been extended with additional behaviors such as obstacle avoidance and leader following. For example,
Ögren et al. [9] use the motion of the leader to guide the motion of
the flock; the leader’s motion is independent, i.e., is not influenced
by other agents.
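As an illustration, here is a minimal sketch (ours) of the potential in Equation 25 and the induced acceleration term; the gradient expression is our own differentiation of (25) and is included only for illustration.

import numpy as np

def potential(r_ij, R, C):
    dist = np.linalg.norm(r_ij)
    return C if dist >= R else 1.0 / dist + np.log(dist)

def grad_potential(r_ij, R, C):
    dist = np.linalg.norm(r_ij)
    if dist >= R:
        return np.zeros_like(r_ij)            # constant potential: zero force
    # dU/d(dist) = -1/dist^2 + 1/dist, acting along the direction r_ij / dist
    return (-1.0 / dist ** 2 + 1.0 / dist) * r_ij / dist

def acceleration(i, x, R, C):
    # agent i descends the summed potential gradient over all other agents
    return -sum(grad_potential(x[i] - x[j], R, C) for j in range(len(x)) if j != i)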
La and Sheng [6] propose an extension of Olfati-Saber’s model
designed for noisy environments. In addition to the terms found in
Olfati-Saber’s model, their control law contains feedback terms for
position and velocity, which make agents tend to stay close to the centroid of their neighborhood and minimize the velocity mismatch
with their neighbors. They show that adding these feedback terms
to the control law helps bound the error dynamics of the system.
7 CONCLUSIONS
This paper presents an abstract declarative form of control for
flocking behavior and the results of a thorough comparison of
centralized and distributed versions of our MPC-based declarative
flocking with four other flocking models. Our simulation results
demonstrate that DF-MPC yields the best cohesion and least fragmentation, and produces natural flock shapes like those produced
by Reynolds’ rule-based model. Our resiliency analysis shows that
the distributed version of our DF-MPC is highly robust to sensor
noise.
As future work, we plan to study resilience of the flocking models
with respect to additional noisy scenarios such as actuation noise
(i.e., noise affecting acceleration) and faulty agents with deviant
behavior. We also plan to investigate smoothing techniques to
increase resilience to sensor noise.
[Figure 3 here: six example final formations, one panel per model — (a) Reynolds, (b) Lattice-based centralized MPC, (c) Lattice-based distributed MPC, (d) DF centralized MPC, (e) DF distributed MPC, (f) Olfati-Saber.]
Figure 3: Examples of final formations for different flocking models. The red dots are the agent positions. The blue lines denote the agent velocities; the line lengths are proportional to the speeds.
[Figure 4 here: four time-series panels, one curve per model (Reynolds, Lattice Distributed, Lattice Centralized, DF Centralized, DF Distributed, O-S): (a) Number of connected components |CC|, (b) Max component diameter D, (c) Irregularity I, (d) Velocity convergence VC.]
Figure 4: Comparison of performance measures obtained with 100 runs for each flocking algorithm.
[Figure 5 here: four panels plotted against noise level, one curve per model (Reynolds, Lattice Distributed, Lattice Centralized, DF Centralized, DF Distributed, O-S): (a) Number of connected components |CC|, (b) Max component diameter D, (c) Irregularity I, (d) Velocity convergence VC.]
Figure 5: Comparison of the final values of the performance measures obtained with 20 runs for each flocking algorithm and for each noise level.
REFERENCES
[1] Shin Mi Ahn and Seung-Yeal Ha. 2010. Stochastic flocking dynamics of the Cucker–Smale model with multiplicative white noises. J. Math. Phys. 51, 10 (2010), 103301. https://doi.org/10.1063/1.3496895
[2] E. F. Camacho and C. Bordons. 2007. Model predictive control. Springer.
[3] Felipe Cucker and Jiu-Gang Dong. 2011. A general collision-avoiding flocking
framework. IEEE Trans. Automat. Control 56, 5 (2011), 1124–1129.
[4] F. Cucker and S. Smale. 2007. Emergent Behavior in Flocks. IEEE Trans. Automat.
Control 52, 5 (May 2007), 852–862. https://doi.org/10.1109/TAC.2007.895842
[5] Radek Erban, Jan Haškovec, and Yongzheng Sun. 2016. A Cucker–Smale Model with Noise and Delay. SIAM J. Appl. Math. 76, 4 (July 2016), 1535–1557. https://doi.org/10.1137/15M1030467
[6] H. M. La and W. Sheng. 2010. Flocking control of multiple agents in noisy
environments. In 2010 IEEE International Conference on Robotics and Automation.
4964–4969. https://doi.org/10.1109/ROBOT.2010.5509668
[7] Reza Olfati-Saber. 2006. Flocking for multi-agent dynamic systems: Algorithms
and theory. IEEE Transactions on automatic control 51, 3 (2006), 401–420.
[8] Daniel J. G. Pearce, Adam M. Miller, George Rowlands, and Matthew S. Turner.
2014. Role of projection in the control of bird flocks. Proceedings of the National
Academy of Sciences 111, 29 (2014), 10422–10426. https://doi.org/10.1073/pnas.
1402202111 arXiv:http://www.pnas.org/content/111/29/10422.full.pdf
[9] Petter Ögren and Naomi Ehrich Leonard. 2004. Cooperative control of mobile sensor networks: Adaptive gradient climbing in a distributed environment. IEEE Transactions on Automatic Control 49, 8 (2004).
[10] John H. Reif and Hongyan Wang. 1999. Social potential fields: A distributed
behavioral control for autonomous robots. Robotics and Autonomous Systems 27,
3 (1999), 171 – 194. https://doi.org/10.1016/S0921-8890(99)00004-4
[11] Craig Reynolds. 2004. OpenSteer, steering behaviors for autonomous characters.
(2004). http://opensteer.sourceforge.net/
[12] Craig W. Reynolds. 1987. Flocks, Herds and Schools: A Distributed Behavioral
Model. SIGGRAPH Comput. Graph. 21, 4 (Aug. 1987), 25–34. https://doi.org/10.
1145/37402.37406
[13] Craig W. Reynolds. 1999. Steering Behaviors For Autonomous Characters. In
Proceedings of Game Developers Conference 1999. 763–782.
[14] H. G. Tanner, A. Jadbabaie, and G. J. Pappas. 2003. Stable flocking of mobile
agents part I: dynamic topology. In 42nd IEEE International Conference on Decision
and Control (IEEE Cat. No.03CH37475), Vol. 2. 2016–2021 Vol.2.
[15] Jingyuan Zhan and Xiang Li. 2011. Flocking of discrete-time multi-agent systems
with predictive mechanisms. IFAC Proceedings Volumes 44, 1 (2011), 5669–5674.
[16] Jingyuan Zhan and Xiang Li. 2013. Flocking of multi-agent systems via model
predictive control based on position-only measurements. IEEE Transactions on
Industrial Informatics 9, 1 (2013), 377–385.
[17] Hai-Tao Zhang, Zhaomeng Cheng, Guanrong Chen, and Chunguang Li. 2015.
Model predictive flocking control for second-order multi-agent systems with
input constraints. IEEE Transactions on Circuits and Systems I: Regular Papers 62,
6 (2015), 1599–1606.
[18] Lifeng Zhou and Shaoyuan Li. 2017. Distributed model predictive control for
multi-agent flocking via neighbor screening optimization. International Journal
of Robust and Nonlinear Control 27, 9 (2017), 1690–1705.
RANDOM SAMPLING IN COMPUTATIONAL ALGEBRA:
HELLY NUMBERS AND VIOLATOR SPACES
arXiv:1503.08804v3 [cs.DM] 23 Dec 2015
JESÚS A. DE LOERA, SONJA PETROVIĆ, AND DESPINA STASI
Abstract. This paper transfers a randomized algorithm, originally used in geometric optimization,
to computational problems in commutative algebra. We show that Clarkson’s sampling algorithm
can be applied to two problems in computational algebra: solving large-scale polynomial systems and
finding small generating sets of graded ideals. The cornerstone of our work is showing that the theory
of violator spaces of Gärtner et al. applies to polynomial ideal problems. To show this, one utilizes
a Helly-type result for algebraic varieties. The resulting algorithms have expected runtime linear in
the number of input polynomials, making the ideas interesting for handling systems with very large
numbers of polynomials, but whose rank in the vector space of polynomials is small (e.g., when the
number of variables and degree is constant).
Keywords: Violator spaces, ideal generators, solving polynomial systems, randomized algorithm
in algebra, large sparse systems.
AMS subject classification: 68W20, 68R05, 12Y05, 13P10, 14Q10, 08A40.
1. Introduction
Many computer algebra systems offer excellent algorithms for manipulation of polynomials. But
despite great success in the field, many algebraic problems have bad worst-case complexity. For
example, Buchberger’s [13, 14, 18] groundbreaking algorithm, key to symbolic computational algebra today, computes a Gröbner basis of any ideal, but it has a worst-case runtime that is doubly
exponential in the number of variables [22]. This presents the following problem: what should one do
about computations whose input is a very large, overdetermined system of polynomials? In this paper,
we propose to use randomized sampling algorithms to ease the computational cost in such cases.
One can argue that much of the success in computation with polynomials (of non-trivial size) often
relies heavily on finding specialized structures. Examples include Faugère’s et al. fast computation
of Gröbner bases of zero-dimensional ideals [25, 26, 27, 29, 35], specialized software for computing
generating sets of toric ideals [1], several packages in [32] built specifically to handle monomial
ideals, and the study of sparse systems of polynomials (i.e., systems with fixed support sets of
monomials) and the associated homotopy methods [45]. A more recent example of the need to find
good structures is in [16], where Cifuentes and Parrilo began exploiting chordal graph structure in
computational commutative algebra, and in particular, for solving polynomial systems. Our paper
exploits combinatorial structure implicit in the input polynomials, but this time akin to Helly-type
results from convex discrete geometry [36].
At the same time, significant improvements in efficiency have been obtained by algorithms that
involve randomization, rather than deterministic ones (e.g. [10, 43]); it is also widely recognized
that there exist hard problems for which pathological examples requiring exponential runtimes occur
only rarely, implying an obvious advantage of considering average behavior analysis of many algorithms. For example, some forms of the simplex method for solving linear programming problems
have worst-case complexity that is exponential, yet [44] has recently shown that in the smoothed
analysis of algorithms sense, the simplex method is a rather robust and fast algorithm. Smoothed
analysis combines the worst-case and average-case algorithmic analyses by measuring the expected
performance of algorithms under slight random perturbations of worst-case inputs. Of course, probabilistic analysis, and smoothed analysis in particular, has been used in computational algebraic
geometry for some time now, see e.g., the elegant work in [8, 9, 15]. The aim of this paper is to
import a randomized sampling framework from geometric optimization to applied computational
algebra, and demonstrate its usefulness on two problems.
Our contributions. We apply the theory of violator spaces [31] to polynomial ideals and adapt
Clarkson’s sampling algorithms [17] to provide efficient randomized algorithms for the following
concrete problems:
(1) solving large (overdetermined) systems of multivariate polynomial equations,
(2) finding small, possibly minimal, generating sets of homogeneous ideals.
Our method is based on using the notion of a violator space. Violator spaces were introduced
in 2008 by Gärtner, Matoušek, Rüst, and Škovroň [31] in a different context. Our approach allows
us to adapt Clarkson’s sampling techniques [17] for computation with polynomials. Clarkson-style
algorithms rely on computing with small-size subsystems, embedded in an iterative biased sampling
scheme. In the end, the local information is used to make a global decision about the entire system.
The expected runtime is linear in the number of input elements, which is the number of polynomials
in our case (see [12] for a more recent simplified version of Clarkson’s algorithm for violator spaces).
Violator spaces naturally appear in problems that have a natural linearization and a sampling size
given by a combinatorial Helly number of the problem. While violator spaces and Clarkson’s algorithm already have a huge range of applications, to our knowledge, this is the first time such sampling
algorithms are being used in computational algebraic geometry. For an intuitive reformulation of
Helly’s theorem for algebraic geometers, see Example 2.1. Main ingredients of violator spaces are
illustrated through Examples 2.2, 3.2 and 3.6. A typical setup where problem (1) can be difficult
and a randomized algorithm appropriate can be found in Example 4.8.
Before stating the main results, let us fix the notation used throughout the paper. We assume the
reader is acquainted with the basics of computational algebraic geometry as in the award-winning
undergraduate textbook [18]. Denote by K a (algebraically closed) field; the reader may keep K = C
in mind as a running example. Let f1 = 0, . . . , fm = 0 be a system of m polynomials in n+1 variables
with coefficients in K. We usually assume that m ≫ n. As is customary in the algebra literature,
we write f1 , . . . , fm ∈ R = K[x0 , . . . , xn ] and often denote the polynomial ring R by a shorthand
notation K[x]. We will denote by (f1 , . . . , fm ) ⊂ R the ideal generated by these polynomials; that
is, the set of all polynomial combinations of the fi ’s. Note that if F = {f1 , . . . , fm } is a set of
polynomials, the ideal (f1 , . . . , fm ) will equivalently be denoted by (F ).
A polynomial is said to be homogeneous if all of its terms are of same degree; an ideal generated
by such polynomials is a homogeneous ideal. In this paper, the ideals we consider need not be
homogeneous; if they are, that will be explicitly stated. In that case, the set of all homogeneous
polynomials of total degree d will be denoted by [R]d . Finally, denote by V(S) the (affine) variety
defined by the set of polynomials S ⊂ R, that is, the Zariski closure of the set of common zeros of
the polynomials in the system S. Therefore, the concrete problem (1) stated above simply asks for
the explicit description of the variety (solution set) corresponding to an ideal (system of polynomial
equations). The concrete problem (2) asks to find a smaller (e.g., minimal with respect to inclusion)
set of polynomial equations that generate the same ideal - and thus have the exact same solution
set.
Solving large polynomial systems. Suppose we would like to solve a system of m polynomials
in n + 1 variables over the field K, and suppose that m is large. We are interested in the coefficients
of the polynomials as a way to linearize the system. To that end, recall first that the d-th Veronese
embedding of P^n is the following map ν_d : P^n → P^{\binom{n+d}{d} - 1}:
(x_0 : · · · : x_n) ↦ (x_0^d : x_0^{d-1} x_1 : · · · : x_n^d).
The map νd induces a coefficient-gathering map for homogeneous polynomials in fixed degree d:
coeff_d : [R]_d → K^{\binom{n+d}{d}},   \sum_{α : |α| = d} c_α x^α ↦ \big(c_{α_1}, . . . , c_{α_{\binom{n+d}{d}}}\big),
where x^{α_i} corresponds to the i-th coordinate of the d-th Veronese embedding. We follow the usual notation |α| = \sum_i α_i . Therefore, if f is a homogeneous polynomial of deg(f ) = d, coeff(f ) is a vector in the K-vector space K^{\binom{n+d}{d}}. This construction can be extended to non-homogeneous polynomials in the following natural way. Consider all distinct total degrees d_1 , . . . , d_s of monomials that appear in a non-homogeneous polynomial f . For each d_i , compute the image under coeff_{d_i} of all monomials of f of degree d_i . Finally, concatenate all these vectors into the total coefficient vector of f , which we will call coeff(f ) and which is of size \binom{n+d+1}{n+1}, the number of monomials in n + 1 variables of (total) degree ranging from 0 to d. In this way, a system f_1 , . . . , f_m of polynomials in n + 1 variables of degree at most d can be represented by its coefficient matrix of size \binom{n+d+1}{n+1} × m. Each column of this matrix corresponds to the vector produced by the map coeff above. This map allows us to think of polynomials as points in a linear affine space, where Helly’s theorem applies.
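As an illustration, the following SymPy sketch (ours, not the authors' implementation) builds this coefficient matrix and reads off the dimension of the span of the coefficient vectors; the helper name build_coeff_matrix is hypothetical.

import sympy as sp
from itertools import product

def build_coeff_matrix(polys, vars_, d):
    # one row per monomial of total degree 0..d (in the order of vars_), one column per polynomial
    exps = [e for e in product(range(d + 1), repeat=len(vars_)) if sum(e) <= d]
    cols = []
    for f in polys:
        coeffs = sp.Poly(f, *vars_).as_dict()      # {exponent tuple: coefficient}
        cols.append([coeffs.get(e, 0) for e in exps])
    return sp.Matrix(cols).T

x0, x1, x2 = sp.symbols('x0 x1 x2')
F = [x0**2 - x1*x2, 2*x0**2 - 2*x1*x2 + x1, x1, x0**2]
M = build_coeff_matrix(F, (x0, x1, x2), 2)
print(M.shape, M.rank())   # 10 monomials, 4 columns; the rank (here 3) is the dimension of the span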
We utilize this construction to import Clarkson’s method [17] for solving linear problems to algebraic geometry and, in particular, we make use of Helly-type theorems for varieties. Helly-type
theorems allow one to reduce the problem of solving the system to repeated solution of smaller
subsystems, whose size is a Helly number of intersecting linear spaces. As a result, our algorithms
achieve expected linear runtime in the number of input equations.
Theorem 1.1. Let F = {f1 , . . . , fm } ⊂ R be a system of polynomials, and let δ be the dimension of
the vector subspace generated by the coefficient vectors of the fi ’s, as described above.
Then there exists a sampling algorithm that outputs F ′ = {f_{i_1} , . . . , f_{i_δ} } ⊂ F such that V(F ) = V(F ′ ) in an expected number O(δm + δ^{O(δ)}) of calls to the primitive query that solves a small radical ideal membership problem. F and F ′ generate the same ideal up to radicals.
It is important to point out that our sampling algorithm will find a small subsystem of the input
system that, when solved with whatever tools one has at their disposal, will give the same solution
set as the original (input) system. Here by ‘small’ we mean a system of size δ, where δ is polynomially
bounded or constant when the number of variables is constant or when the degree d is small.
That the rank δ of the coefficient matrix of the system of polynomials gives the combinatorial
dimension for this problem is shown in Theorem 4.11. There are several interesting special cases of
this result. For example, we obtain [21, Corollary 2] as a corollary: if f_1 , . . . , f_m ∈ K[x_0 , . . . , x_n ] are homogeneous and of degree at most d each, then the dimension δ of the vector subspace they generate is at most \binom{n+d}{d} (see Lemma 4.6). Of course, in many situations in practice, this bound is not sharp, as many systems are of low rank. For example, this situation can arise if the monomial support
of the system is much smaller than the total number of monomials of degree d. In light of this,
Theorem 4.11 gives a better bound for low-rank systems. Note that we measure system complexity
by its rank, that is, the vector space dimension δ, and not the usual sparsity considerations such as
the structure of the monomial support of the system. Further, our result applies to non-homogeneous
systems as well. Its proof is presented in Section 4, along with the proof of Theorem 1.1.
Computing small generating sets of ideals. The problem of finding “nice” generating sets of
ideals has numerous applications in statistics, optimization, and other fields of science and engineering. Current methods of calculating minimal generating sets of ideals with an a priori large number
of generators are inefficient and rely mostly on Gröbner bases computations, since they usually involve ideal membership tests. Of course there are exceptional special cases, such as ideals of points
in projective space [40] or binomial systems [1]. Our second main result shows how to efficiently
extract a small or close to minimal generating set for any ideal from a given large generating set and
a bound on the size of a minimal generating set.
Theorem 1.2. Let I = (H) be an ideal generated by a (large) finite set of homogeneous polynomials
H, and suppose that γ is a known upper bound for the 0-th total Betti number β(R/I).
Then there exists a randomized algorithm that computes a generating set of I of size γ in expected
number of O(γ|H| + γ^γ) calls to the primitive query that solves a small ideal membership problem.
In particular, if γ = β(R/I), the algorithm computes a minimal generating set of I.
The proof is presented in Section 5.
2. A Warm-Up: Algebraic Helly-type theorems and the size of a meaningful sample
A Helly-type theorem has the following form: Given a family of objects F , a property P , and
a Helly number δ such that every subfamily of F with δ elements has property P , then the entire
family has property P. (See [19, 23, 47, 4].) In the original theorem of E. Helly, F is a finite family
of convex sets in Rn , the constant δ is n + 1, and the property P is to have a non-empty intersection
[34]. Here we are looking for non-linear algebraic versions of the same concept, where the objects in
F are algebraic varieties (hypersurfaces) or polynomials; the property desired is to have a common
point, or to generate the same ideal; and the Helly constant δ will be determined from the structure
of the problem at hand. To better understand the algorithms that we present, it is instructive to
consider two intuitive easy examples that highlight the fundamental combinatorial framework. The
first one is an obvious reformulation of Helly’s theorem for algebraic geometers.
Example 2.1. Let H = {L1 , L2 , . . . , Ls } be a family of affine linear subspaces in Rn . Consider the
case when s is much larger than n. One would like to answer the following question: when do all of
the linear varieties have a nonempty intersection? It is enough to check whether each subfamily of
H with n + 1 elements has a non-empty intersection, as that would imply, by Helly’s Theorem, that
H also has a non-empty intersection. Thus, in practice, one can reduce the task of deciding whether
∩_{i=1}^{s} L_i ≠ ∅ to the collection of smaller queries ∩_{j=1}^{n+1} L_{i_j} . However, instead of testing all possible \binom{s}{n+1} many (n + 1)-tuples, we may choose to randomly sample multiple (n + 1)-tuples. Each time
we sample, we either verify that one more (n + 1)-tuple has a non-empty intersection thus increasing
the certainty that the property holds for all (n + 1)-tuples, or else find a counterexample, a subfamily
without a common point, the existence of which trivially implies that ∩si=1 Li = ∅. This simple idea
is the foundation of a randomized approach. For now we ask the reader to observe that n + 1 is the
dimension of the vector space of (non-homogeneous) linear polynomials in n variables.
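As a small illustration of this sampling idea (ours, not from the paper), suppose each subspace L_i is given by linear equations A_i x = b_i; a tuple of subspaces has a common point exactly when the stacked linear system is consistent, which can be checked by a rank test.

import numpy as np
import random

def consistent(blocks):
    # blocks: list of (A_i, b_i); a common point exists iff rank[A] == rank[A | b]
    A = np.vstack([a for a, _ in blocks])
    b = np.concatenate([v for _, v in blocks])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))

def sample_tuples(subspaces, n, trials=200, rng=random.Random(0)):
    # randomly test (n+1)-tuples; a single infeasible tuple certifies an empty intersection
    for _ in range(trials):
        if not consistent(rng.sample(subspaces, n + 1)):
            return False
    return True   # no counterexample found among the sampled tuples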
Example 2.2. The next example is just slightly more complicated, but illustrates well some key
concepts. Consider next H = {f1 (x1 , x2 ), f2 (x1 , x2 ), . . . , fs (x1 , x2 )}, a large family of affine real plane
curves of degree at most d. Imagine that H is huge, with millions of constraints fi , but the curves
are of small degree, say d = 2. Nevertheless, suppose that we are in charge of deciding whether the
curves in H have a common real point. Clearly, if the pair of polynomials f, g ∈ H intersect, they
do so in finitely many points, and, in particular, Bezout’s theorem guarantees that no more than d2
intersections occur. One can observe that if the system H has a solution, it must pass through some
of the (at most d2 ) points defined by the pair f, g alone. In fact, if we take triples f, g, h ∈ H, the
same bound of d2 holds, as well as the fact that the solutions for the entire H must also be part of
the solutions for the triplet f, g, h. Same conclusions hold for quadruples, quintuples, and in general
δ-tuples. But how large does an integer δ have to be in order to function as a Helly number? We
seek a number δ such that if all δ-tuples of plane curves in H intersect, then all of the curves in H
must intersect. The reader can easily find examples where δ = d does not work, e.g., for d ≥ 2.
To answer the question posed in Example 2.2, we refer to Theorem 4.11 in Section 4. Without
re-stating the theorem here, we state the following Corollary and note that it gives a nice bound on
δ. Corollary 2.3 is implied by the observation that there are only \binom{d+2}{2} monomials in two variables
of degree ≤ d (which says they span a linear subspace of that dimension inside the vector space of
all polynomials) and Theorem 4.11.
Corollary 2.3. Let H = {f1 (x, y), f2 (x, y), . . . , fs (x, y)} be a family of affine real plane curves of
degree at most d. If every δ = \binom{d+2}{2} of the curves have a real intersection point, then all the curves
in H have a real intersection point. If we consider the same problem over the complex numbers, then
the same bound holds.
Thus, it suffices to check all δ-tuples of curves for a common real point of intersection, and if all of
those instances do intersect, then we are sure all |H| polynomials must have a common intersection
point, too. The result suggests a brute-force process to verify real feasibility of the system, which
of course is not a pretty proposition, given that |H| is assumed to be very large. Instead, Section 3
explains how to sample the set of δ-tuples in order to obtain a solution to the problem more efficiently.
Notice that it is important to find a small Helly number δ, as a way to find the smallest sampling
size necessary to detect common intersections. It turns out that in this example and in the case
when all fi are homogeneous, the Helly number is best possible [28].
3. Violator spaces and Clarkson’s sampling algorithms
The key observation in the previous section was that the existence of Helly-type theorems indicates
that there is a natural notion of sampling size to test for a property of varieties. Our goal is to import
to computational algebra an efficient randomized sampling algorithm by Clarkson. To import this
algorithm, we use the notion of violator spaces which we outline in the remainder of this section. We
illustrate the definitions using Example 2.2 as a running example in this section.
In 1992, Sharir and Welzl [41] identified special kinds of geometric optimization problems that
lend themselves to solution via repeated sampling of smaller subproblems: they called these LP-type
problems. Over the years, many other problems were identified as LP-type problems and several
abstractions and methods were proposed [2, 3, 11, 33, 37]. A powerful sampling scheme, devised by
Clarkson [17] for linear programming, works particularly well for geometric optimization problems in
small number of variables. Examples of applications include convex and linear programming, integer
linear programming, the problem of computing the minimum-volume ball or ellipsoid enclosing a
given point set in Rn , and the problem of finding the distance of two convex polytopes in Rn . In
2008, Gärtner, Matoušek, Rüst and Škovroň [31] invented violator spaces and showed they give a
much more general framework to work with LP-type problems. In fact, violator spaces include all
prior abstractions and were proven in [42] to be the most general framework in which Clarkson’s
sampling converges to a solution. Let us begin with the key definition of a violator space.
Definition 3.1 ([31]). A violator space is a pair (H, V), where H is a finite set and V a mapping
2H → 2H , such that the following two axioms hold:
Consistency: G ∩ V(G) = ∅ holds for all G ⊆ H, and
Locality: V(G) = V(F ) holds for all F ⊆ G ⊆ H such that G ∩ V(F ) = ∅.
Example 3.2 (Example 2.2, continued). To illustrate our definition, we consider Example 2.2 of s
real plane curves {f1 , . . . , fs } = H.
A violator operator for testing the existence of a real point of intersection of a subset F ⊂ H of the
curves should capture the real intersection property. One possible way to define it is the following
map Vreal : 2H → 2H :
Vreal (F ) = {h ∈ H : V R (F ) ) V R (F ∪ {h})} ,
where V R (F ) is the set of common real intersection points of F , in other words, the real algebraic
variety of F . Note that, by definition, Vreal (F ) = ∅ if the curves in F have no real points of intersection. Before explaining why Vreal captures the real intersection property correctly (see Example 3.6),
let us show that the set (H, Vreal ) is a violator space according to Definition 3.1.
Consistency holds by definition of Vreal : for any h ∈ F , V R (F ) = V R (F ∪{h}), and so h 6∈ Vreal (F ).
To show locality, we begin by showing the auxiliary fact that V R (F ) = V R (G). The inclusion
V R (G) ⊆ V R (F ) is direct. This is because for any S ′ ⊆ S it is always the case that V R (S ′ ) ⊇ V R (S).
We need only show V R (F ) ⊆ V R (G). Recall F ⊆ G ⊆ H and G ∩ Vreal (F ) = ∅. Consider g ∈ G.
By the assumption, g does not violate F , and the proper containment V R (F ) ) V R (F ∪ {g}) does
not hold. Therefore the equality V R (F ) = V R (F ∪ {g}) must hold. By iteratively adding elements
of G r F to F and repeating the argument, we conclude that indeed V R (F ) = V R (G).
Finally, we argue that V R (F ) = V R (G) implies Vreal (F ) = Vreal (G). We first show Vreal (F ) ⊂
Vreal (G). Take h ∈ Vreal (F ). Then V R (G) = V R (F ) ) V R (F ∪ {h}) = V R (F ) ∩ V R ({h}) =
V R (G)∩V R ({h}) = V R (G∪{h}). The last and third-to-last equalities follow from the fact that for any
two sets S1 , S2 , one always has V R (S1 ∪S2 ) = V R (S1 )∩V R (S2 ). The containment Vreal (F ) ⊃ Vreal (G)
follows in a similar argument. Thus (H, Vreal ) is a violator space.
Every violator space comes equipped with three important components: a notion of basis, its
combinatorial dimension, and a primitive test procedure. We begin with the definition of a basis of
a violator space, analogous to the definition of a basis of a linear programming problem: a minimal
set of constraints that defines a solution space.
Definition 3.3 ([31, Definition 7]). Consider a violator space (H, V ). B ⊆ H is a basis if B ∩V(F ) 6=
∅ holds for all proper subsets F ⊂ B. For G ⊆ H, a basis of G is a minimal subset B of G with
V(B) = V(G).
It is very important to note that a violator operator can capture algebraic problems of interest
as long as the basis for that violator space corresponds to a basis of the algebraic object we study.
Violator space bases come with a natural combinatorial invariant, related to Helly numbers we
discussed earlier.
Definition 3.4 ([31, Definition 19]). The size of a largest basis of a violator space (H, V ) is called
the combinatorial dimension of the violator space and denoted by δ = δ(H, V ).
A crucial property was proved in [31]: knowing the violations V(G) for all G ⊆ H is enough to
compute the largest bases. To do so, one can utilize Clarkson’s randomized algorithm to compute
a basis of a violator space (H, V) with m = |H|. The results about the runtime and the size of the
sets involved are summarized below. The primitive operation, used as black box in all stages of the
algorithm, is the violation test primitive.
Definition 3.5. Given a violator space (H, V), some set G ( H, and some element h ∈ H \ G, the
primitive test decides whether h ∈ V(G).
The running example illustrates these three key ingredients.
Example 3.6 (Example 3.2, continued). In the example of s real plane curves, the violator operator
we defined detects whether the polynomials have a real point of intersection. Note that a basis would
be a (minimal) set of curves B = {fi1 , . . . , fiδ }, for some δ < s, such that either the curves in B
have no real point of intersection, or the real points of intersection of the curves in B are the real
intersection of all of H = {f1 , . . . , fs }. If the set F has no real intersection point, then Vreal (F ) = ∅ by
definition, so that set F could be a basis in the sense that it is a certificate of infeasibility for this real-intersection problem. If, on the other hand, F does have a real intersection point, and Vreal (F ) = ∅,
then this means that F is a basis in the sense that the curves in F capture the intersections of all of
H. The combinatorial dimension for general H is provided by Corollary 2.3, and it equals δ = \binom{d+2}{2}.
However, special structure of the curves in H may imply a smaller combinatorial dimension.
The primitive query simply checks, given fi ∈ H and a candidate subset G ⊆ H, whether the
set of real points of intersection of G ∪ {fi } is smaller than the set of real points of intersection of
the curves in G alone. The role of the primitive query is therefore not to find a basis directly, but
to check, instead, whether a given candidate subset G can be a basis of H. This can be done by
checking whether fi ∈ Vreal (G) for all fi ∈ H \ G. Clearly, given the primitive test, a basis for H can
be found by simply testing all sets of size at most δ, but that would be a waste because the number
of times one would need to call the primitive would be O(|H|δ+1 ).
As we will see, this brute-force approach can be avoided. Namely, in our current example, the randomized algorithm from Theorem 3.7 below will only sample subsets of δ = \binom{d+2}{2} curves from the set {f_1 , . . . , f_s }, and find a basis of the violator space of size δ in the sense explained above.
The sampling method in [17] avoids a full brute-force approach. It is presented in two stages,
referred to as Clarkson’s first and second algorithm. We outline these below.
Clarkson’s first algorithm, in the first iteration, draws a small random sample R ⊂ G, calls the
second stage to calculate the basis C of R, and returns C if it is already a basis for the larger subset
G. If C is not already a basis, but the elements of G \ C violating R are few, it adds those elements
to a growing set of violators W , and repeats the process with C being calculated as the basis of the
set W ∪ R for a new randomly chosen small R ⊂ G \ W . The crucial point here is that |R| is much
smaller than |G| and, consequently, it acts as a Helly number of sorts.
Clarkson’s second algorithm (Basis2) iteratively picks a random small (6δ² elements) subset R of G, finds a basis C for R by exhaustively testing each possible subset (BruteForce), taking advantage
of the fact that the sample R is very small, and then calculates the violators of G \ C. At each
iteration, elements that appear in bases with small violator sets get a higher probability of being
selected.
This idea is very important: we are biasing the sampling process, so that some constraints will
be more likely to be chosen. This is accomplished by considering every element h of the set G as
having a multiplicity m(h); the multiplicity of a set is the sum of the multiplicities of its elements.
The process is repeated until a basis of G is found, i.e. until V(G \ C) is empty.
Again, as described above, all one needs is to be able to answer the Primitive query: Given
G ⊂ H and h ∈ H \ G, decide whether h ∈ V (G). The runtime is given in terms of the combinatorial
dimension δ(H, V ) and the size of H. The key result we will use in the rest of the paper concerns
the complexity of finding a basis:
Theorem 3.7. [31, Theorem 27] Using Clarkson’s algorithms, a basis of H of a violator space (H, V)
can be found by answering the primitive query an expected O(δ|H| + δ^{O(δ)}) times.
It is very important to note that, in both stages of Clarkson’s method, the query h ∈ V(C)
is answered via calls to the primitive as a black box. In our algebraic applications, the primitive
computation requires solving a small-size subsystem (e.g., via Gröbner bases or numerical algebraic
geometry methods), or an ideal membership query applied to the ideal generated by a small subset
of the given polynomials. On the other hand, the combinatorial dimension relates to the Helly
number of the problem which is usually a number that is problem-dependent and requires non-trivial
mathematical results.
Algorithm 1: Clarkson’s first algorithm
input : G ⊆ H, δ: combinatorial complexity of H
output: B, a basis for G
if |G| ≤ 9δ² then
    return Basis2(G)
else
    W ← ∅
    repeat
        R ← random subset of G \ W with ⌊δ√|G|⌋ elements.
        C ← Basis2(W ∪ R)
        V ← {h ∈ G \ C s.t. h ∈ V(C)}
        if |V | ≤ 2√|G| then
            W ← W ∪ V
        end
    until V = ∅
end
return C.
Algorithm 2: Clarkson’s second algorithm: Basis2(G)
input : G ⊆ H; δ: combinatorial complexity of H.
output: B: a basis of G
if |G| ≤ 6δ² then
    return BruteForce(G)
else
    repeat
        R ← random subset of G with 6δ² elements.
        C ← BruteForce(R)
        V ← {h ∈ G \ C s.t. h ∈ V(C)}
        if m(V ) ≤ m(G)/3δ then
            for h ∈ V do
                m(h) ← 2m(h)
            end
        end
    until V = ∅
end
return C.
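For concreteness, here is a compact Python sketch (ours, not the authors' code) of the two algorithms above. The only problem-specific ingredient is the black-box violation test violates(h, C), which returns True exactly when h ∈ V(C); the helper names and the multiplicity-weighted sampler are our own, and the elements of G are assumed to be hashable.

import math
import random
from itertools import combinations

def violators(G, C, violates):
    return [h for h in G if h not in C and violates(h, C)]

def brute_force(G, delta, violates):
    # BruteForce: test all candidate bases of size at most delta
    for k in range(delta + 1):
        for B in combinations(G, k):
            if not violators(G, list(B), violates):
                return list(B)
    return list(G)

def weighted_sample(G, mult, k, rng):
    # draw k distinct elements with probability proportional to multiplicity m(h)
    pool, chosen = list(G), []
    for _ in range(min(k, len(pool))):
        h = rng.choices(pool, weights=[mult[p] for p in pool], k=1)[0]
        pool.remove(h)
        chosen.append(h)
    return chosen

def basis2(G, delta, violates, rng):
    # Algorithm 2
    if len(G) <= 6 * delta**2:
        return brute_force(G, delta, violates)
    mult = {h: 1 for h in G}
    while True:
        R = weighted_sample(G, mult, 6 * delta**2, rng)
        C = brute_force(R, delta, violates)
        V = violators(G, C, violates)
        if not V:
            return C
        if sum(mult[h] for h in V) <= sum(mult.values()) / (3 * delta):
            for h in V:
                mult[h] *= 2      # bias future samples toward observed violators

def clarkson(G, delta, violates, rng=random.Random(0)):
    # Algorithm 1
    if len(G) <= 9 * delta**2:
        return basis2(G, delta, violates, rng)
    W = []
    while True:
        pool = [h for h in G if h not in W]
        R = rng.sample(pool, min(int(delta * math.sqrt(len(G))), len(pool)))
        C = basis2(W + R, delta, violates, rng)
        V = violators(G, C, violates)
        if not V:
            return C
        if len(V) <= 2 * math.sqrt(len(G)):
            W += V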
In the two sections that follow we show how violator spaces naturally arise in non-linear algebra
of polynomials.
4. A violator space for solving overdetermined systems
We discuss our random sampling approach to solve large-size (non-linear) polynomial systems by
applying Clarkson’s algorithm. In particular, we prove Theorem 1.1 as a corollary of Theorem 4.11.
This result is motivated by, and extends, Helly-type theorems for varieties from [21] and [28], which
we use to show that the above algorithms apply to large dense homogeneous systems as well (Corollary 4.7).
First, we define a violator space that captures (in the sense explained in the previous section)
solvability of a polynomial system.
Definition 4.1. [Violator Space for solvability of polynomial systems] Let S ⊂ H be finite subsets
of polynomials in R. Define the violator operator Vsolve : 2H → 2H to record the set of polynomials
in H which do not vanish on the variety V(S). Formally,
Vsolve (S) = {f ∈ H : V(S) is not contained in V(f )}.
Lemma 4.2. The pair (H, Vsolve ) is a violator space.
Proof. Note that Vsolve (S) ∩ S = ∅ by definition of Vsolve (S), and thus the operator satisfies the
consistency axiom.
To show locality, suppose that F ( G ⊂ H and G ∩ Vsolve (F ) = ∅. Since F ( G we know that
V(G) ⊆ V(F ). On the other hand, by definition, G ∩ Vsolve (F ) = ∅ implies that V(F ) ⊆ V(g) for all g ∈ G. Thus V(F ) is contained in ∩_{g∈G} V(g) = V(G). But then the two varieties are equal.
To complete the argument we show that V(F ) = V(G) implies Vsolve (F ) = Vsolve (G). If h ∈
Vsolve (F ) then V(h) cannot contain V(F ) = V(G), thus h ∈ Vsolve (G) too. The argument is symmetric, hence Vsolve (F ) = Vsolve (G).
It follows from the definition that the operator Vsolve gives rise to a violator space for which a
basis B of G ⊂ H is a set of polynomials such that V(B) = V(G). Therefore, a basis B ⊂ G will
either be a subset of polynomials that has no solution and as such be a certificate of infeasibility of
the whole system G, or it will provide a set of polynomials that are sufficient to find all common
solutions of G, i.e., the variety V(G).
Next, we need a violation primitive test that decides whether h ∈ Vsolve (F ), as in Definition 3.5.
By the definition above, this is equivalent to asking whether h vanishes on all irreducible components
of the algebraic variety V(F ). As is well known, the points of V(F ) where the polynomial h does
not vanish correspond to the variety associated with the saturation ideal ((F ) : h∞ ). Thus, we may
use ideal saturations for the violation primitive. For completeness, we recall the following standard
definitions. The saturation of the ideal (F ) with respect to f , denoted by ((F ) : f^∞ ), is defined to be the ideal of polynomials g ∈ R with f^m g ∈ (F ) for some m > 0. This operation removes from
the variety V(F ) the irreducible components on which the polynomial f vanishes. Recall that every
variety can be decomposed into irreducible components (cf. [18, Section 4.6] for example). The
corresponding algebraic operation is the primary decomposition of the ideal defining this variety.
Lemma 4.3 (e.g. [5, Chapter 4]). Let ∩_{i=1}^{m} Q_i be a minimal primary decomposition for the ideal I. The saturation ideal (I : f^∞ ) equals ∩_{f ∉ √Q_i} Q_i .
Proof. It is known that (∩_{i=1}^{m} Q_i : f^∞ ) = ∩_{i=1}^{m} (Q_i : f^∞ ). We observe further that (Q_i : f^∞ ) = Q_i if f does not belong to √Q_i , and (Q_i : f^∞ ) = (1) otherwise.
This allows us to set up the primitive query for (H, Vsolve ). However we do not need to calculate
the decomposition explicitly, but can instead carry it out using elimination ideals via Gröbner bases,
as explained for example in [18, Exercise 4.4.9].
Observation 4.4. The primitive query for (H, Vsolve ) is simply the saturation test explained above.
Remark 4.5. There is an obvious reformulation of these two ingredients that is worth stating explicitly. Namely, since a basis B for the violator space (H, Vsolve ) is a set of polynomials such that V(B) = V(H), the strong Nullstellensatz implies that √(B) = √(H). Thus a basis determines the ideal of the input system up to radicals, and we could have named the violator operator Vsolve ≡ Vradical instead. Furthermore, a polynomial h vanishing on all irreducible components of the algebraic variety V(F ) is equivalent to h ∈ √(F ), i.e., h belonging to the radical of the ideal (F ). In particular, the primitive query for Vsolve can also be stated as the radical ideal membership test. This test can be implemented using Gröbner bases, as explained for example in [18, Proposition 4.2.8]: h ∈ √(F ) if and only if 1 ∈ (F, 1 − yh) ⊆ K[x_0 , . . . , x_n , y]. Therefore, computation of one Gröbner basis of the ideal (F, 1 − yh) suffices to carry out this test.
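For illustration, a minimal SymPy sketch (ours) of this radical membership primitive follows; the function name in_radical is hypothetical.

import sympy as sp

def in_radical(h, F, vars_):
    # h is in the radical of (F) iff 1 lies in (F, 1 - y*h), i.e., the Groebner basis is {1}
    y = sp.Symbol('_y')
    G = sp.groebner(list(F) + [1 - y*h], *vars_, y, order='lex')
    return sp.S.One in G.exprs

x, z = sp.symbols('x z')
print(in_radical(x, [x**2], (x, z)))   # True: x is in the radical of (x^2)
print(in_radical(z, [x**2], (x, z)))   # False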
Finally, we solve the problem of finding a combinatorial dimension for Vsolve . For this, consider,
as a warm up, the simpler situation where we have a Helly-type theorem for hypersurfaces defined
by homogeneous polynomials. This was proved by Motzkin [39] and then later reproved by Deza
and Frankl [21], and it provides us with a combinatorial dimension for guaranteeing that a largescale homogeneous system has a solution. Its proof relies on thinking of the polynomial ring R as a
K-vector space (see also the discussion before Definition 4.9).
Lemma 4.6 ([21], Corollary 2). Let f_1 , . . . , f_m ⊂ R be a system of homogeneous polynomials, that is, f_i ∈ [R]_{d_i} , and define d = max{d_i }. Suppose that every subset of p = \binom{n+d}{d} polynomials {f_{i_1} , . . . , f_{i_p} } ⊂ {f_1 , . . . , f_m } has a solution. Then the entire system {f_1 , . . . , f_m } does as well.
Lemma 4.6 provides the combinatorial dimension that, along with the variety membership primitive from Observation 4.4, allows us to apply Clarkson’s algorithms to the violator space (H, Vsolve ).
Corollary 4.7. Let (f_1 , . . . , f_m ) ⊂ R be an ideal generated by m homogeneous polynomials in n + 1 variables of degree at most d; f_i ∈ [R]_{d_i} and d = max{d_i }. Let δ = \binom{n+d}{d}. Then there is an adaptation of Clarkson’s sampling algorithm that, in an expected O(δm + δ^{O(δ)}) number of calls to the primitive query 4.4, computes {f_{i_1} , . . . , f_{i_δ} } such that V(f_1 , . . . , f_m ) = V(f_{i_1} , . . . , f_{i_δ} ).
In particular, this algorithm is linear in the number of input equations m, and a randomized
polynomial time algorithm when the number of variables n + 1 and the largest degree d are fixed.
Furthermore, we can extend it to actually solve a large system: once a basis B = {fi1 , . . . , fiδ } for the
space ({f1 , . . . , fm }, Vsolve ) is found, then we can use any computer algebra software (e.g. [6, 32, 46])
to solve fi1 = · · · = fiδ = 0.
Note that Lemma 4.6 can be thought of as a statement about the complexity of Hilbert’s Nullstellensatz. If (f_1 , . . . , f_m ) = R (i.e., V(f_1 , . . . , f_m ) = ∅), then there exists a subset of size δ = \binom{n+d}{d} polynomials {f_{i_1} , . . . , f_{i_δ} } such that V(f_{i_1} , . . . , f_{i_δ} ) = ∅ as well. In particular, there is a Nullstellensatz certificate with that many elements. The dimension \binom{n+d}{d} is, in fact, only an upper bound, attainable only by dense systems. However, in practice, many systems are very large but sparse, and
possibly non-homogeneous. Let us highlight again that the notion of ‘sparsity’ we consider is captured by a low-rank property of the system of polynomial equations, made explicit below in terms of
the coefficient matrix. This is crucially different from the usual considerations of monomial supports
(Newton polytopes) of the system; instead, we look at the coefficients of the input polynomials - that
is, we linearize the problem and consider the related vector spaces, as illustrated in the following
example.
Example 4.8. Consider the following system consisting of two types of polynomials: polynomials of
the form x_i^2 − 1 for i = 1, . . . , n, and polynomials of the form x_i + x_j for the pairs {i, j : i ≢ j mod 2}, along with the additional pair i = 1, j = 3. This system has m = n^2/4 + n + 1 equations, and the
interesting situation is when the number of variables is a large even number, that is, n = 2k for any
large integer k. This system of polynomials generates the 2-coloring ideal of a particular n-vertex
non-chordal graph. (See [20] and references therein for our motivation to consider this particular
system.)
Consider a concrete graph. Take the n-cycle with all possible even chords, and one extra edge
{1, 3}. Thus the pairs {i, j} are indexed by edges of the graph G on n nodes where all odd-numbered
vertices are connected to all even-numbered vertices, and with one additional edge {1, 3}.
We wish to decide if the system has a solution, but since there are n^2/4 + n + 1 many polynomials,
we would like to try to avoid computing a Gröbner basis of this ideal. Instead, we search for a
subsystem of some specific size that determines the same variety. It turns out that the system
actually has no solution. Indeed, a certificate for infeasibility is a random subsystem consisting of
the first n quadratic equations, n − 1 of the edge equations xi + xj with {i, j : i 6≡ j mod 2}, and
the additional equation x1 + x3 . For example, the first n − 1 edge polynomials will do to construct
a 2n-sized certificate of this form. Why is the number n + (n − 1) + 1 = 2n so special?
To answer this question, let us linearize the problem: to each of the polynomials f associate
a coefficient (column) vector coeff(f ) ∈ C^{2n+1} whose coordinates are indexed by the monomials appearing in the system x_1^2 , . . . , x_n^2 , 1, x_1 , . . . , x_n . Putting all these column vectors in one matrix produces the coefficient matrix of the system of the form
\begin{pmatrix} I_n & 0 \\ -\mathbf{1} & 0 \\ 0 & E \end{pmatrix},
where In is the n × n identity, −1 is the row vector with all entries −1, and E is the vertex-edge
incidence matrix of the graph G. Since it is known that the rank of an edge-incidence matrix of an
n-vertex connected graph is n − 1, the rank of this matrix is δ = n + (n − 1) + 1 = 2n.
Remarkably, the magic size of the infeasibility certificate equals the rank of this coefficient matrix.
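The claim can be checked numerically; the following sketch (ours, not from the paper) assembles the coefficient matrix of this system for a small even n and verifies that its rank is 2n.

import numpy as np

def coeff_matrix_2coloring(n):
    # rows: x_1^2..x_n^2 (indices 0..n-1), the constant 1 (row n), x_1..x_n (rows n+1..2n)
    edges = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1) if (i - j) % 2 != 0]
    edges.append((1, 3))                        # the extra edge {1, 3}
    cols = []
    for i in range(1, n + 1):                   # columns for x_i^2 - 1
        col = np.zeros(2 * n + 1); col[i - 1] = 1.0; col[n] = -1.0
        cols.append(col)
    for i, j in edges:                          # columns for x_i + x_j
        col = np.zeros(2 * n + 1); col[n + i] = 1.0; col[n + j] = 1.0
        cols.append(col)
    return np.column_stack(cols)

M = coeff_matrix_2coloring(8)
print(M.shape, np.linalg.matrix_rank(M))        # rank 2n = 16 for n = 8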
This motivating example suggests that the desired Helly-type number of this problem is captured
by a natural low-rank property of the system. To define it precisely, let us revisit the extension of
the Veronese embedding to non-homogeneous polynomials explained in the Introduction. Here we
adopt the notation from [7, Section 2] and consider polynomials in R of degree up to d as a K-vector space denoted by C_d^{n+1}. The vector space C_d^{n+1} has dimension \binom{d+n+1}{n+1}, which, of course, equals the number of monomials in n + 1 variables of (total) degree from 0 to d. In this way, any polynomial
f ∈ R is represented by a (column) vector, coeff(f ) ∈ C_d^{n+1} , whose entries are the coefficients of f . Thus, any system S ⊂ R defines a matrix with |S| columns, each of which is an element of C_d^{n+1} .
Definition 4.9. A system S ⊂ R is said to have rank D if dimK hSi = D, where hSi is the vector
subspace of C_d^{n+1} generated by the coefficients of the polynomials in S.
We need to also make the notion of Helly-type theorems more precise in the setting of varieties.
Definition 4.10 (Adapted from Definition 1.1. in [28]). A set S ⊂ R is said to have the D-Helly property if for every nonempty subset S0 ⊂ S, one can find p1 , . . . , pD ∈ S0 with V(S0 ) = V(p1 , . . . , pD ).
The following result, which implies Theorem 1.1, is an extension of [28] to non-homogeneous
systems. It also implies (the contrapositive of) Lemma 4.6 when restricted to homogeneous systems,
in the case when the system has no solution. The proof follows that of [28], although we remove the
homogeneity assumption. We include it here for completeness.
Theorem 4.11. Any polynomial system S ⊂ R of rank D has the D-Helly property.
In other words, for all subsets P ⊂ S, there exist p1 , . . . , pD ∈ P such that V(P) = V(p1 , . . . , pD ).
Proof. Let P ⊂ S be an arbitrary subset of polynomials, and denote by hPi ⊂ Cdn+1 the vector
subspace it generates. Let d0 = dimK hPi. We need to find polynomials p1 , . . . , pD such that
V(p1 , . . . , pD ) = V(P). Note that d0 ≤ D, of course, so it is sufficient to consider the case P = S.
Choose a vector space basis hp1 , . . . , pD i = hPi = hSi. It suffices to show V(p1 , . . . , pD ) ⊆ V(S);
indeed, the inclusions V(S) ⊆ V(P) ⊆ V(p1 , . . . , pD ) already hold.
Suppose, on the contrary, there exists x = (x_0 , . . . , x_n ) ∈ C^{n+1} and p ∈ S such that p(x) ≠ 0 but p_i (x) = 0 for all i = 1, . . . , D. Since the p_i ’s generate ⟨S⟩ as a vector space, there exist constants γ_i ∈ K with p = \sum_i γ_i p_i , implying that p(x) = 0, a contradiction.
The proof above is constructive: to find a subset p1 , . . . , pD ∈ S0 , one only needs to compute a
vector space basis for hS0 i. Thus, linear algebra (i.e., Gaussian elimination) can construct this subset
in time O(|S0 |3 ). The sampling algorithm based on violator spaces is more efficient.
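For illustration, a sketch (ours) of this constructive step follows; it reuses the hypothetical build_coeff_matrix helper sketched in the Introduction and selects the polynomials indexed by the pivot columns of the reduced row echelon form.

def spanning_subset(polys, vars_, d):
    M = build_coeff_matrix(polys, vars_, d)   # one column per polynomial (see the earlier sketch)
    _, pivots = M.rref()                      # pivot columns index a vector space basis
    return [polys[j] for j in pivots]         # D = rank(M) polynomials with the same variety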
Proof of Theorem 1.1. From Lemma 4.2, we know that ({f1 , . . . , fm }, Vsolve ) is a violator space.
Theorem 4.11 shows that it has a combinatorial dimension, and Observation 4.4 shows that there
exists a way to answer the primitive test. Having these ingredients, Theorem 3.7 holds and it is
possible for us to apply Clarkson’s Algorithm again.
Remark 4.5 provides the following interpretation of Theorem 4.11:
Corollary 4.12. Let I = (f_1 , . . . , f_m ) ⊂ R and let D = dim_K ⟨f_1 , . . . , f_m ⟩. Then, for all subsets P of the generators f_1 , . . . , f_m , there exist p_1 , . . . , p_D ∈ P such that √(P) = √(p_1 , . . . , p_D ).
5. A violator space for finding generating sets of small cardinality
In this section, we apply the violator space approach to obtain a version of Clarkson’s algorithm
for calculating small generating sets of general homogeneous ideals as defined on page 3. As in
Section 4, this task rests upon three ingredients: the appropriate violator operator, understanding
the combinatorial dimension for this problem, and a suitable primitive query which we will use as a
black box. As before, fixing the definition of the violator operator induces the meaning of the word
‘basis’, as well as the construction of the black-box primitive.
To determine the natural violator space for the ideal generation problem, let I ⊂ R be a homogeneous ideal, H some initial generating set of I, and define the operator VSmallGen as follows.
Definition 5.1. [Violator Space for Homogeneous Ideal Generators] Let S ⊂ H be finite subsets of
R. We define the operator VSmallGen : 2H → 2H to record the set of polynomials in H that are not
in the ideal generated by the polynomials in S. Formally,
VSmallGen (S) = {f ∈ H : (S, f ) ⊋ (S)}.
Equivalently, the operator can be viewed as VSmallGen (S) = {f ∈ H : f ∉ (S)}.
Lemma 5.2. The pair (H, VSmallGen ) is a violator space.
Proof. Note that VSmallGen (S) ∩ S = ∅ by definition of VSmallGen (S), and thus the operator satisfies
the consistency axiom.
To show locality, suppose that F ⊊ G ⊂ H and G ∩ VSmallGen (F ) = ∅. Since F ⊊ G, we have (F ) ⊆ (G). On the other hand, G ∩ VSmallGen (F ) = ∅ implies that G ⊆ (F ), which in turn implies that (G) ⊆ (F ). Hence the two ideals are equal. Because (G) = (F ), we can show that VSmallGen (F ) = VSmallGen (G). Note first that (G) = (F ) holds if and only if (G, h) = (F, h) for all polynomials h ∈ H. Finally, h ∈ VSmallGen (F ) if and only if (G, h) = (F, h) ⊋ (F ) = (G), and this chain of equalities and the strict containment holds if and only if h ∈ VSmallGen (G). Therefore, locality holds as well and VSmallGen is a violator space operator.
It is clear from the definition that (H, VSmallGen ) is a violator space for which the basis of G ⊂ H
is a minimal generating set of the ideal (G).
The next ingredient in this problem is the combinatorial dimension: the size of the largest minimal
generating set. This natural combinatorial dimension already exists in commutative algebra, namely,
it equals a certain Betti number. (Recall that Betti numbers are the ranks βi,j of modules in the
minimal (graded) free resolution of the ring R/I; see, for example, [24, Section 1B (pages 5-9)].)
Specifically, the number β0,j is defined as the number of elements of degree j required among any set of minimal generators of I. The (0-th) total Betti number of R/I, which we will denote by β(R/I), simply equals Σj β0,j, and is then the total number of minimal generators of the ideal I. It is well known that while I has many generating sets, every minimal generating set has the same cardinality, namely β(R/I). In conclusion, it is known that
Observation 5.3. The combinatorial dimension for (H, VSmallGen ) is the (0-th) total Betti number
of the ideal I = (H); in symbols, β(R/I) = δ(H, VSmallGen ).
Although it may be difficult to exactly compute β(R/I) in general, a natural upper bound for
β(R/I) is the Betti number for any of its initial ideals (the standard inequality holds by upper-semicontinuity; see e.g. [38, Theorem 8.29]). In particular, if H is known to contain a Gröbner
basis with respect to some monomial order, then the combinatorial dimension can be estimated by
computing the minimal generators of an initial ideal of (H), which is a monomial ideal problem and
therefore easy. In general, however, we only need β(R/I) < |H| for the proposed algorithms to be
efficient.
The last necessary ingredient is the primitive query for VSmallGen .
Observation 5.4. The primitive query for VSmallGen , deciding if h ∈ VSmallGen (G) given h ∈ H
and G ⊂ H, is an ideal membership test.
Of course, the answer to the query is usually Gröbner-based, but, as before, the size of the
subsystems G on which we call the primitive query is small: O(δ^2). In fact, it is easy to see that
many small Gröbner computations for ideal membership cost less than the state-of-the-art, which
includes at least one large Gröbner computation.
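As a concrete illustration of this primitive query, the following sketch (Python with SymPy; the function name, the toy generators, and the choice of monomial order are our own assumptions, not the paper's code) tests whether a polynomial h lies in the ideal generated by a small subsystem G by reducing h modulo a Gröbner basis of G.

import sympy as sp

def violates(h, G, variables, order="grevlex"):
    # Illustrative sketch (not the paper's implementation).
    # Primitive query for VSmallGen: return True iff h is NOT in the ideal (G).
    if not G:                      # the ideal generated by the empty set is (0)
        return sp.simplify(h) != 0
    gb = sp.groebner(G, *variables, order=order)
    _, remainder = gb.reduce(h)    # divide h by the Groebner basis
    return remainder != 0          # nonzero remainder <=> h not in (G)

# Toy usage (hypothetical generators):
x, y = sp.symbols("x y")
G = [x**2 - y, y**2]
print(violates(x**4, G, (x, y)))   # False: x**4 = (x**2 + y)*(x**2 - y) + y**2
print(violates(x + y, G, (x, y)))  # True: x + y is not in (G)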
Proof of Theorem 1.2. From Lemma 5.2 we know that (H, VSmallGen ) is a violator space, and we have shown that it has a combinatorial dimension and a way to answer the primitive test. With these ingredients, Theorem 3.7 applies and we can use Clarkson's algorithm.
Remark 5.5. Intuitively, the standard algorithm for finding minimal generators needs to at least compute a Gröbner basis for an ideal generated by |H| polynomials, and in fact it is much worse than that. One can simplify this by skipping the computation of useless S-pairs (e.g., as in [30]), but the overall improvement is not by an order of magnitude. The algorithm remains doubly exponential in the size of H for general input. In contrast, our randomized algorithm distributes the computation into many small Gröbner basis calculations, where “many” means no more than O(β|H| + β^β), and “small” means the ideal is generated by only O(β^2) polynomials.
To conclude, in a forthcoming paper we will study further the structure of our violator spaces
and discuss the use of more LP-type methods for the same algorithmic problems. We also intend to
present some experimental results for the sampling techniques we discussed here. We expect better
performance of the randomized methods versus the traditional deterministic algorithms.
Acknowledgements
We are grateful to Nina Amenta, Robert Ellis, Diane Maclagan, Pablo Soberón, Cynthia Vinzant,
and three anonymous referees for many useful comments and references. The first author is grateful
to IMA, the Institute of Mathematics and its Applications of the Univ. of Minnesota for the support
received when this paper was being written. He was also supported by a UC MEXUS grant. This
work was supported by the NSF grant DMS-1522662.
References
[1] 4ti2 team. 4ti2: a software package for algebraic, geometric and combinatorial problems on linear spaces.
[2] N. Amenta. Bounded boxes, Hausdorff distance, and a new proof of an interesting Helly-type theorem. In Proceedings of the 10th Annual Symposium on Computational Geometry (SCG), pages 340–347. ACM Press, 1994.
[3] N. Amenta. Helly theorems and generalized linear programming. Discrete and Computational Geometry, 12:241–
261, 1994.
[4] Nina Amenta, Jesús A. De Loera, and Pablo Soberón. Helly’s theorem: New variations and applications. Preprint.
arXiv:1508.07606. math.MG (math.CO).
[5] M. F. Atiyah and I. G. Macdonald. Introduction to commutative algebra. Addison-Wesley Publishing Co., Reading,
Mass.-London-Don Mills, Ont., 1969.
[6] D. J. Bates, J. D. Hauenstein, A. J. Sommese, and C. W. Wampler. Bertini: Software for numerical algebraic
geometry. Available at bertini.nd.edu with permanent doi: dx.doi.org/10.7274/R0H41PB5.
[7] K. Batselier, P. Dreesen, and B. De Moor. The geometry of multivariate polynomial division and elimination.
SIAM Journal on Matrix Analysis and Applications, 34(1):102–125, 2013.
[8] C. Beltrán and L.M. Pardo. On Smale’s 17th problem: a probabilistic positive solution. Foundations of Computational Mathematics, 8(1):1–43, 2008.
[9] C. Beltrán and L.M Pardo. Smale’s 17th problem: average polynomial time to compute affine and projective
solutions. J. Amer. Math. Soc., 22(2):363–385, 2009.
[10] E. R. Berlekamp. Factoring polynomials over large finite fields. Mathematics of Computation, 24(111):713–735,
1970.
[11] H. Björklund, S. Sandberg, and S. Vorobyov. A discrete subexponential algorithm for parity games. In Proc. 20th
Annual Symposium on Theoretical Aspects of Computer Science (STACS), pages 663–674. Springer-Verlag, 2003.
[12] Y. Brise and B. Gärtner. Clarkson’s algorithm for violator spaces. In 21st Canadian Conference on Computational
Geometry (CCCG2009), pages 9–12, 2009.
[13] B. Buchberger. Ein Algorithmus zum Auffinden der Basiselemente des Restklassenringes nach einem nulldimensionalen Polynomideal. PhD thesis, Leopold-Franzens University, 1965.
[14] B. Buchberger. Bruno Buchberger’s PhD thesis 1965: An algorithm for finding the basis elements of the residue
class ring of a zero dimensional polynomial ideal (English translation). Journal of Symbolic Computation, 41(34):475–511, 2006.
[15] P. Bürgisser and F. Cucker. On a problem posed by Steve Smale. Annals of Mathematics, 174(3):1785–1836, 2011.
[16] D. Cifuentes and P. Parrilo. Exploiting chordal structure in polynomial ideals: a Gröbner bases approach. Preprint:
arXiv:1411.1745v1 [cs.SC].
[17] K. L. Clarkson. Las Vegas algorithms for linear and integer programming. Journal of the ACM, 42(2):488–499,
1995.
[18] D. Cox, J.B. Little, and D. O’Shea. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic
Geometry and Commutative Algebra. Springer, 2007.
[19] L. Danzer, B. Grünbaum, and V. Klee. Helly’s theorem and its relatives. In Proc. Sympos. Pure Math., Vol. VII,
pages 101–180. Amer. Math. Soc., Providence, R.I., 1963.
[20] J. A. De Loera, S. Margulies, M. Pernpeintner, E. Riedl, D. Rolnick, G. Spencer, D. Stasi, and J. Swenson. Graphcoloring ideals: Nullstellensatz certificates, Gröbner bases for chordal graphs, and hardness of Gröbner bases. In
International Symposium on Symbolic and Algebraic Computation (ISSAC), 2015.
[21] M. Deza and P. Frankl. A Helly type theorem for hypersurfaces. J. Combin. Theory Ser. A, 45(1):27–30, 1987.
[22] T.W. Dube. The structure of polynomial ideals and Gröbner bases. SIAM Journal of Computing, 19(4):750–773,
1990.
[23] J. Eckhoff. Helly, Radon, and Carathéodory type theorems. In Handbook of convex geometry, Vol. A, B, pages
389–448. North-Holland, Amsterdam, 1993.
[24] D. Eisenbud. The geometry of syzygies, volume 229 of Graduate Texts in Mathematics. Springer-Verlag, New York,
2005. A second course in commutative algebra and algebraic geometry.
[25] J.-C. Faugére, P. Gaudry, L. Huot, and G. Renault. Polynomial systems solving by fast linear algebra. arXiv
preprint arXiv:1304.6039, 2013.
[26] J.-C. Faugére, P. Gianni, D. Lazard, and T. Mora. Efficient computation of zero dimensional Gröbner bases by
change of ordering. Journal of Symbolic Computation, 16(4):329–344, 1993.
[27] J.-C. Faugére, P.-J. Spaenlehauer, and J. Svartz. Sparse Gröbner bases: the unmixed case. arXiv preprint
arXiv:1402.7205, 2014.
[28] P. Frankl. Helly-type theorems for varieties. European J. Combin., 10(3):243–245, 1989.
[29] S. Gao. Counting Zeros over Finite Fields Using Gröbner Bases. PhD thesis, MS Thesis in Logic and Computation,
2009.
[30] S. Gao, V. M Rodrigues, and J. Stroomer. Gröbner basis structure of finite sets of points. Preprint. Available on
http://www.math.clemson.edu/~sgao/papers/GBstr.pdf. Last accessed: November 2015., 2003.
[31] B. Gärtner, J. Matoušek, L. Rüst, and P. Škovroň. Violator spaces: structure and algorithms. Discrete Appl.
Math., 156(11):2124–2141, 2008.
[32] D. R. Grayson and M. E. Stillman. Macaulay2, a software system for research in algebraic geometry. Available
at http://www.math.uiuc.edu/Macaulay2/.
[33] N. Halman. Discrete and lexicographic Helly theorems and their relations to LP-type problems. PhD thesis, Tel-Aviv
University, 2004.
[34] E. Helly. Über mengen konvexer Körper mit gemeinschaftlichen Punkten. Jahresbericht der Deutschen
Mathematiker-Vereinigung, 32(175–176), 1923.
[35] Y. N. Lakshman. On the complexity of computing a Gröbner basis for the radical of a zero dimensional ideal. In
Proceedings of the twenty-second annual ACM Symposium on Theory of Computing, pages 555–563. ACM, 1990.
[36] J. Matoušek. Lectures on discrete geometry, volume 212 of Graduate Texts in Mathematics. Springer-Verlag, New
York, 2002.
[37] J. Matoušek, M. Sharir, and E. Welzl. A subexponential bound for linear programming. Algorithmica, 16(4–5):498–
516, 1996.
[38] E. Miller and B. Sturmfels. Combinatorial commutative algebra. Graduate Texts in Mathematics. Springer, 2005.
[39] T. S. Motzkin. A proof of Hilbert’s Nullstellensatz. Math. Z., 63:341–344, 1955.
[40] I. Ramella. Algoritmi di Computer Algebra relativi agli ideali di punti dello spazio proiettivo. PhD thesis, Universita
degli studi di Napoli “Federico II”, 1990.
[41] M. Sharir and E. Welzl. A combinatorial bound for linear programming and related problems. In Proc. 9th Symposium on Theoretical Aspects of Computer Science (STACS), volume 577 of Lecture Notes in Computer Science,
pages 569–579. Springer-Verlag, 1992.
[42] P. Škovroň. Abstract models of optimization problems. PhD thesis, Charles University, Prague, 2007.
[43] R. Solovay and V. Strassen. A fast Monte-Carlo test for primality. SIAM Journal of Computing, 6(1), 1977.
[44] D.A. Spielman and S.H. Teng. Smoothed analysis: an attempt to explain the behavior of algorithms in practice.
Communications of the ACM, 52(10):76–84, 2009.
[45] B. Sturmfels. Sparse elimination theory. Mathematical Sciences Institute, Cornell University, 1991.
[46] J. Verschelde. PHCpack: a general-purpose solver for polynomial systems by homotopy continuation. Available at
http://homepages.math.uic.edu/∼jan/download.html.
[47] R. Wenger. Helly-type theorems and geometric transversals. In Handbook of discrete and computational geometry,
CRC Press Ser. Discrete Math. Appl., pages 63–82. CRC, Boca Raton, FL, 1997.
J. De Loera, Department of Mathematics, University of California, Davis.
E-mail address: [email protected]
S. Petrović, Applied Mathematics Department, Illinois Institute of Technology.
E-mail address: [email protected]
D. Stasi, Applied Mathematics Department, Illinois Institute of Technology.
E-mail address: [email protected]
A new bio-inspired method for remote sensing
imagery classification
Amghar Yasmina Teldja*, Fizazi Hadria†
The problem of supervised classification of a satellite image is considered to be the task of grouping pixels into a number of regions that are homogeneous in intensity space. This paper proposes a novel approach that combines a radial basis function clustering network with a growing neural gas with utility factor classifier, yielding improved solutions compared with those obtained by previous networks. The technique pursues a double objective: first, the development of a method to perform satellite image classification, and second, an implementation that addresses the issue of the number of nodes in the hidden layer of the classic radial basis function network. Results demonstrating the effectiveness of the proposed technique are provided for numerical remote sensing imagery. Moreover, the remotely sensed image of Oran city in Algeria has been classified using the proposed technique to establish its utility.
I. INTRODUCTION
For remote sensing applications, classification is an important task which partitions the pixels in the images into
homogeneous regions, each of which corresponds to some particular landcover type. The problem of pixel classification is
often posed as clustering in the intensity space.
Some natural phenomena implement original heuristics for solving problems whose solutions are difficult to find deterministically with classical algorithms. Further, these heuristics are robust: the failure of a single constructive component does not compromise the heuristic as a whole.
One source of bio-inspired computing is the behavior of social insects: a population of simple agents, interacting and communicating indirectly through their environment, acts as a massively parallel algorithm for solving a given task, such as foraging, crowd movement, task division and prey capture.
Indeed, for some years, many studies have revealed the effectiveness of the stochastic approach based on ant colonies for solving various problems: combinatorial optimization problems, such as robotics [1] and the traveling salesman problem [2], and ant-based algorithms, the main ones constituting the ant colony optimization (ACO) metaheuristic, which builds configurations incrementally [3], [4].
Ants are able to solve complex problems collectively, such as finding the shortest path between two points in a rugged environment. The collective capacity of the ants to find the shortest path is mainly due to the fact that the shorter the path, the more quickly an ant returns to the nest along it, redepositing pheromone; more ants are then attracted to this path and reinforce it. [2]
Classification is also one of these problems, for which ants suggest very interesting heuristics to data-processing specialists. Building on existing work, we contribute here to the study of ant-based classification from the viewpoint of knowledge discovery, with the goal of solving real problems. In such problems, we consider that a domain expert has collected a data set and would like to be offered a partition of the data into relevant classes. [1]
Artificial immune systems form a rather recent research area compared with other computational models that take biology as a starting point for solving various problems. Their biological counterparts are adaptive defense systems, able to create a very large variety of cells and molecules capable of specifically recognizing and eliminating a practically unlimited number of foreign invaders. These cells and molecules interact in a precisely adaptable dynamic network whose complexity rivals that of the nervous system. [5]
This work is placed within the framework of supervised classification, assuming that the number and the parameters of the classes are known. The application is carried out using a neural network: the radial basis function network including the growing neural gas with utility factor (RBF-GNGU or RBFU) [6], whose task is to process the image pixels according to the various component categories of the western area of Oran city. Indeed, this area consists of many components (vegetation, water, …), which is the source of a great variability of reflectance, expressed both between different pixels and inside a single pixel, whose reflectance can then be a mixture of several components.
II. ANT COLONY
Biologists have long been intrigued by the behavior of insect colonies: ants, bees, termites, and so on. Each individual of the colony is a priori independent and is not directed along one path or another. It is helped by the community in its development and, in return, it contributes to the proper functioning of the colony. The colony is self-controlled through mechanisms that are simple to study. [7]
Walking from the nest to the food source and back, the ants deposit on the ground an odorous substance called "pheromone". This substance creates a chemical trail; other ants can detect pheromones through sensors on their antennae and follow the same path.
Pheromones act as a path marker: when ants choose their path, they tend to choose the track with the highest concentration of pheromone. This allows them to find their way back to the nest on the return trip. Moreover, these scents can be used by other ants to find food sources discovered by their peers. [7]
Consequently, this behavior allows the emergence of shortest paths, provided that the pheromone trails are used by an
entire colony of ants (see Fig. 1 and Fig. 2).
Fig. 1. Behavior at beginning
Fig. 2. Behavior after 10 min
This behavior allows finding the shortest path to food when the pheromone trails are used by the entire colony. In other words, when multiple marked paths are at the disposal of an ant, it can find the shortest path to its destination. This essential observation is the basis of all the methods that we develop further below [7].
Ants lay pheromone on the path to the food source and back to the nest. Initially, the choice is random, but quickly the short branch becomes the most marked, because the ants that use it reach the nest faster and are statistically more likely to take it again when they return to the food source.
From this behavior was born the ant colony optimization (ACO) metaheuristic. ACO is a class of algorithms whose first member, called Ant System, was initially proposed by Colorni, Dorigo and Maniezzo [8]. Several algorithms based on or inspired by the ant colony optimization metaheuristic have been proposed to tackle continuous optimization problems [9]. The first Ant System was proposed in the early nineties, and since then several studies have been performed to apply this paradigm to real problems. Several researchers have explored the idea of applying it to image processing. [10]
Some features of the behavior of artificial ants are inspired by nature: artificial ants deposit pheromone on the arcs of the graph that they traverse, and choose their path randomly depending on the quantity of pheromone deposited on the incident arcs. In addition, the quantity of pheromone is gradually reduced, simulating an evaporation phenomenon that avoids premature convergence. [2]
Artificial ants also have other characteristics which do not find their counterpart in nature. In particular, they may have a memory, which allows them to keep track of their past actions. In most cases, the ants deposit a pheromone trail only after completing a full path, and not incrementally while they move.
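To illustrate these two mechanisms (pheromone-biased random path choice and evaporation), here is a minimal sketch in Python; the toy graph, the parameters alpha and rho, and the function names are our own illustrative assumptions, not part of the method described in this paper.

import random

def choose_next_node(current, neighbors, pheromone, alpha=1.0):
    # Illustrative sketch: pick the next node with probability proportional to pheromone^alpha.
    weights = [pheromone[(current, n)] ** alpha for n in neighbors]
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for node, w in zip(neighbors, weights):
        acc += w
        if r <= acc:
            return node
    return neighbors[-1]

def evaporate(pheromone, rho=0.1):
    # Reduce every trail by a factor (1 - rho), simulating evaporation.
    for edge in pheromone:
        pheromone[edge] *= (1.0 - rho)

# Toy usage on a tiny graph:
pheromone = {("A", "B"): 1.0, ("A", "C"): 3.0}
print(choose_next_node("A", ["B", "C"], pheromone))  # "C" is chosen about 75% of the time
evaporate(pheromone)
print(pheromone)                                     # trails decayed by 10%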
Here, the work revolves around the modeling of a particular type of ant: Pachycondyla apicalis. This study stems from the work of Dominique Fresneau [11] on the original foraging strategy of this species of ponerine ant. We therefore start by presenting the biological characteristics that can be exploited in algorithmic modeling, with the aim of applying them to our optimization problem.
A. Pachycondyla apicalis ant
Pachycondyla apicalis is a common and conspicuous insect in many Neotropical forests. Most observations and collection
records are of single foragers on the ground or on low vegetation. This species has been observed nesting in rotting wood on or
near the ground, in the ground, and in the root mass of large ficus trees within one meter of the ground.
In wet and dry forested lowlands, Pachycondyla apicalis is one of the most common and conspicuous ants. Foraging
workers are extremely fast and run over the surface of trails in a nervous, erratic manner, with the antennae rapidly vibrating.
Their behavior is reminiscent of pompilid wasps.
Colonies are small, containing fewer than 200 workers [12], [13], and monogynous [14].
Foraging is done individually, without recruitment, and individual foragers over time show strong fidelity to a particular
area. Tandem-running has been observed during nest relocation. Orientation is probably visual. An optimal foraging model has been tested using Pachycondyla apicalis, concluding that foraging in the observed colonies is sub-optimal. A group of computer scientists
have used the foraging behavior of Pachycondyla apicalis as a model for creating an internet search algorithm [15].
Our interest in these ants comes from the fact that they use simple principles, at both global and local levels, to find their prey. From their nest, they generally cover a given surface by partitioning it into individual hunting sites. [11]
B. Local behavior of the ant
Fig. 3. (a): Search for hunting sites, (b): Local exploration around the site s1
In figure 3a, the ant constitutes a list of three hunting sites s1, s2 and s3 in the vicinity of the nest, within a maximum distance Asite of the nest. In figure 3b, the squares represent the explorations conducted within a radius Alocal around s1.
Initially, and after every move of the nest, each ant a_i leaves the nest to build a list of p hunting sites, which it stores (see fig. 3a). A hunting site is a point of the search space S constructed by a random exploration operator of amplitude Asite in the vicinity of the nest. The ant will then perform a local exploration around one of its hunting sites (see fig. 3b). [11]
Initially, when the interest of the sites is unknown, the ant selects a site s randomly among the p sites at its disposal. The exploration consists in building a point s' of S in the local neighborhood of s with an exploration operator of amplitude Alocal. The ant captures a prey if this local exploration finds a better value of f, i.e., if f(s') < f(s); an improvement of f thus models the capture of a prey. Each time an ant is able to improve f(s), it stores s' instead of s, and the next exploration will take place in the local neighborhood of s'. If the local exploration is unsuccessful, at the next exploration the ant will choose a site randomly among the p sites it has in memory. When a site has been explored more than Plocal times without having yielded a prey, it is definitively forgotten and will be replaced by a new site at the next departure from the nest, i.e., at the next iteration. The parameter Plocal represents a local patience. [11]
Fig. 4. Automaton representing the individual behavior of an ant foraging
The automaton of figure 4 summarizes the individual foraging behavior. p is the number of hunting sites the ant stores at any given time, and the failure counter gives the number of successive failures encountered on a memorized site. [11]
The main steps in the simulation of the Pachycondyla apicalis ant are given by the following algorithm:
API()
  Randomly choose the initial location of the nest
  t ← 0                              /* index of the current iteration */
  While the stopping condition is not satisfied do
    For every ant a_i, i = 1, . . . , n, do
      API-Foraging(a_i)
    If the nest must be moved then
      nest ← best solution reached so far
      Clear the memory of all ants
    t ← t + 1
  Return the best solution found and its value of f
The stopping condition can take several forms [11]: either the best solution has not been improved for a given number of iterations, or t has reached a limit value, or a given number of evaluations of f has been reached.
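The following is a minimal, self-contained Python sketch of this foraging scheme for minimizing a numerical function; all names, the uniform exploration operator, the single-ant simplification and the parameter defaults are our own illustrative assumptions, not the authors' implementation.

import random

def explore(point, amplitude, bounds):
    # Illustrative sketch: create a point in the neighborhood of `point` (uniform perturbation).
    return [min(max(x + random.uniform(-amplitude, amplitude) * (hi - lo), lo), hi)
            for x, (lo, hi) in zip(point, bounds)]

def api_minimize(f, bounds, n_sites=3, a_site=0.5, a_local=0.05,
                 p_local=10, n_moves=20, n_forages=50):
    # Single-ant sketch of the API (Pachycondyla apicalis) search.
    nest = [random.uniform(lo, hi) for lo, hi in bounds]
    best, best_val = nest, f(nest)
    for _ in range(n_moves):                      # nest relocations
        sites = [explore(nest, a_site, bounds) for _ in range(n_sites)]
        failures = [0] * n_sites
        for _ in range(n_forages):                # local forages from this nest
            i = random.randrange(n_sites)
            candidate = explore(sites[i], a_local, bounds)
            if f(candidate) < f(sites[i]):        # prey captured: keep the better point
                sites[i], failures[i] = candidate, 0
            else:
                failures[i] += 1
                if failures[i] > p_local:         # local patience exceeded: forget the site
                    sites[i] = explore(nest, a_site, bounds)
                    failures[i] = 0
            if f(sites[i]) < best_val:
                best, best_val = sites[i], f(sites[i])
        nest = best                               # move the nest to the best solution found
    return best, best_val

# Toy usage: minimize the sphere function on [-5, 5]^2.
print(api_minimize(lambda v: sum(x * x for x in v), [(-5, 5), (-5, 5)]))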
III. IMMUNE SYSTEM
The basic functioning of the biological immune system consists in identifying antigens through the affinity between receptor cells and the antigens, whether the latter are complete or decomposed; from this idea the "Artificial Immune System", commonly known as "AIS", was created.
To be used in computing, the different components of the biological immune system, in particular immune cells and antigens, must be modeled in digital form as data.
For the modeled cells to be able to recognize the "antigens", the concept of affinity is taken up again and applied to the data-processing cells. B and T lymphocytes carry on their surface a great number of receptors; each one is able to recognize only one type of antigen, and all the receptors of the same cell are identical.
In the AIS, whose application domain is data processing, we will no longer speak of B or T cells or their respective receptors, but of an antibody in general, to facilitate modeling and representation. [5]
The selection concept is included in the AIS:
• Positive selection: it ensures that all the T cells leaving the thymus recognize the MHC (Major Histocompatibility Complex) of the self.
• Negative selection: it ensures that T cells (T lymphocytes) recognizing the self cells too strongly as antigens do not leave the thymus, in order to prevent autoimmune diseases.
• Clonal selection (CS): this is the theory explaining how the immune system interacts with the antigen. This theory applies to both B and T cells. The only difference is that B cells undergo somatic hypermutation during proliferation, in contrast to T cells. The AIS are inspired by this theory; since only B cells are able to mutate to optimize the immune response, only those cells interest us.
This optimization comes from the fact that B cells, upon contact with an antigen, proliferate and give several clones, and each clone is mutated. This mutation is used to find clones of the mother cell with a greater affinity for the antigen. In the successive mutations, there is a risk of facing a local optimum problem, in which no better clone can be found even over several generations of cells. The human body avoids this problem by adding to these clones some newly created elements, which have no relation with the mother cell. This addition of "random" elements solves the problem, since the new elements replace existing ones whenever they have a greater affinity for the antigen. These cells are represented as bit vectors of length L. [5]
Figure 5 shows the clonal selection algorithm, which proceeds as follows:
(1) Randomly generate a defined number of receptors,
(2) Select from these receptors the n best,
(3) Clone the selected population,
(4) Mutate the cloned population,
(5) Filter the obtained population again, keeping only the best elements (memory cells),
(6) Replace a portion of these cells by other, randomly regenerated detectors.
The introduction of these new detectors avoids the local optima problem. The cycle then starts again with the selection of the n best elements [5].
Fig. 5. Clonal selection algorithm
The mutation of the detectors is performed by changing one or more bits of the vector representing the cell by another
value. The main differences between the clonal selection algorithms are the methods used in the random generation of
detectors, the mutation and the affinity between the detectors and antigens. [5]
A. Principle of clonal selection
When a B lymphocyte receptor identifies an antigen with a certain affinity, the lymphocyte is selected to proliferate. This growth results in the production of a clone of cells of a single type. Due to mutation, the cells within a clone are similar but have slight differences, and can recognize the antigen that triggered the immune response.
This whole process of mutation and selection is known as the maturation of the immune response. The B lymphocytes with high antigenic affinities are selected to become long-lived memory cells. These memory cells predominate in future responses to the same or a similar antigen.
Other important features of clonal selection, relevant from the cellular viewpoint, are:
1. An antigen is recognized by several B cells. The rate of proliferation of each B cell is proportional to its affinity with the selected antigen (the higher the affinity, the greater the number of offspring (clones) produced, and vice versa).
2. Conversely, the mutation undergone by each B cell during reproduction is inversely proportional to the affinity of the B cell receptor with the antigen (the higher the affinity, the smaller the mutation, and vice versa).
B. Clonal selection algorithm
Clonal selection algorithms are often used in optimization applications, since B cells become more and more affine to the antigens, and in intrusion detection applications, where the undesirable elements cannot all be listed and are extremely varied. [5]
This whole process of mutation and selection is known as the clonal selection algorithm, originally called CSA (Clonal Selection Algorithm) or CLONALG (Clonal Algorithm); it is inspired by the following elements of the clonal selection theory:
• Maintenance of a specific memory location.
• Selection and cloning of most stimulated antibodies.
• The death of unstimulated antibodies.
• Affinity maturation (mutation).
• Re-selection of clones proportional to the affinity of the antigen. Generation and maintenance of diversity.
a) Main elements of the clonal selection algorithm:
• Antigen: in the AIS, the antigen represents the problem and its constraints.
• Antibodies: the antibodies represent candidate solutions of the problem.
• Antibody/antigen affinity: a measure of the strength of the binding between the antigen and an antibody.
• Mutation (affinity maturation): random change of the antibody value.
• Memory cell (antibody memory): the memory cells store the best antibodies.
b) CLONALG algorithm:
1) Initialization: in this initialization step, the algorithm generates a random antibody population with size N. Then the
population is divided into two components, a memory antibodies section m, and a remaining group r.
2) Generation: the algorithm proceeds to the execution of a number of iterations to expose the system to all known
antigens. One round of exposition or iteration is considered as a generation. The number of generations G performed by the
system is predefined by the user.
• Antigen selection: an antigen is selected randomly, without replacement (within the current generation), from the population of antigens.
• Exposure: the system is exposed to the selected antigen. The affinity is calculated for all the antibodies directed against
the antigen. The affinity is a measure of similarity, it depends on the problem. The Hamming distance is most often used in
this case.
• Selection: a set of n antibodies is selected from the entire population of antibodies with the highest affinity for the
antigen.
• Cloning: the set of selected antibodies is then cloned proportionally to their affinity (cloning based on the affinity ranking).
• Affinity maturation (mutation): the clone (antigen duplicated) is then subjected to a process of affinity maturation to
respond better to the antigen.
• Clone exposition: the clone is exposed to the antigen, and affinity measures are calculated.
• Candidacy: one or several antibodies of the clone having the highest affinity are selected as candidate memory antibodies for replacement in m. If the affinity of a candidate is higher than that of the current memory cell, the latter is replaced.
• Replacement: the d individuals with the lowest affinity in the remaining antibody population r are replaced by new random antibodies.
3) Output: once training is complete, the memory component m is considered as the solution of the algorithm. This solution may be the best individual or the collective association of all the elements of the population.
The CLONALG algorithm applied to pattern recognition uses the following formulas to calculate its parameters:
• Number of clones: created from each antibody, it is calculated as follows:

  Nc_i = round(β · N / i)

where β is a clonal factor, N is the size of the antibody population, and i is the antibody rank, with i ∈ [1, n].
The total number of clones prepared for each exposure of the system to an antigen is calculated as follows:

  Nc = Σ_{i=1}^{n} round(β · N / i)

where Nc is the total number of clones and n is the number of selected antibodies.
• Affinity: a measure of similarity between two binary strings of equal length. The distance most often used is the Hamming distance. This simple measure counts the number of positions at which the two strings differ, and is calculated as follows:

  D = Σ_{i=1}^{L} δ_i,   with δ_i = 1 if ab_i ≠ ag_i and δ_i = 0 if ab_i = ag_i,

where D is the Hamming distance, ab is the antibody, ag is the antigen and L is the length of the binary string.
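As an illustration only, here is a compact Python sketch of one CLONALG generation for binary antibodies, using the clone-count and Hamming-affinity formulas above; all function names, the mutation scheme and the parameter defaults are our own assumptions and not the authors' implementation.

import random

def affinity(ab, ag):
    # Affinity = L minus the Hamming distance (higher is better).
    return sum(1 for a, g in zip(ab, ag) if a == g)

def mutate(ab, rate):
    # Flip each bit with probability `rate`.
    return [1 - b if random.random() < rate else b for b in ab]

def clonalg_generation(population, antigen, n_select=5, beta=2.0, d_replace=2):
    # Illustrative sketch of one CLONALG generation: select, clone, mutate, re-select, replace.
    N, L = len(population), len(antigen)
    ranked = sorted(population, key=lambda ab: affinity(ab, antigen), reverse=True)
    clones = []
    for i, ab in enumerate(ranked[:n_select], start=1):
        n_clones = round(beta * N / i)                 # Nc_i = round(beta * N / i)
        rate = 1.0 / (1 + affinity(ab, antigen))       # higher affinity -> smaller mutation
        clones += [mutate(ab, rate) for _ in range(n_clones)]
    # Re-select the best individuals among parents and clones.
    pool = sorted(population + clones, key=lambda ab: affinity(ab, antigen), reverse=True)
    new_pop = pool[:N]
    # Replace the d worst individuals by random antibodies (diversity).
    new_pop[-d_replace:] = [[random.randint(0, 1) for _ in range(L)] for _ in range(d_replace)]
    return new_pop

# Toy usage: evolve 12-bit antibodies towards a fixed antigen.
antigen = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
pop = [[random.randint(0, 1) for _ in range(12)] for _ in range(10)]
for _ in range(20):
    pop = clonalg_generation(pop, antigen)
print(max(affinity(ab, antigen) for ab in pop))   # approaches 12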
IV. RESULTS
Classification is a central problem in pattern recognition. For each of the problems encountered (classification, interpretation, segmentation, …), certain characteristics must be detected in order to extract information that can be exploited later.
The classification improves significantly when the input vector of the connectionist network includes not only the three components of the image pixels in the selected color space (RGB), but also the average and standard deviation of these components in the immediate vicinity of the pixel to be classified, thereby improving the sampling.
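A minimal sketch of such an input vector is shown below (Python with NumPy; the 3×3 window size and the function name are our own assumptions): for each pixel it concatenates the RGB values with the per-channel mean and standard deviation of the surrounding window.

import numpy as np

def pixel_feature_vector(image, row, col, half_window=1):
    # Illustrative sketch: return [R, G, B, mean_R, mean_G, mean_B, std_R, std_G, std_B]
    # for the pixel at (row, col) of an HxWx3 image.
    h, w, _ = image.shape
    r0, r1 = max(0, row - half_window), min(h, row + half_window + 1)
    c0, c1 = max(0, col - half_window), min(w, col + half_window + 1)
    window = image[r0:r1, c0:c1, :].reshape(-1, 3).astype(float)
    pixel = image[row, col, :].astype(float)
    return np.concatenate([pixel, window.mean(axis=0), window.std(axis=0)])

# Toy usage on a random 5x5 RGB patch:
img = np.random.randint(0, 256, size=(5, 5, 3), dtype=np.uint8)
print(pixel_feature_vector(img, 2, 2))   # 9-dimensional input vector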
The quality of the classification is evaluated using three parameters: the error, the confusion rate and the classification rate.
A. Description of the experimental data
The remote sensing image used is a 7-band multispectral image acquired by the Landsat 5 TM satellite, with a 30 m × 30 m resolution. The studied scene represents the western area of Oran city in Algeria and is composed of twelve classes (see Fig. 6):
Fig. 6. Classes of the studied image
(1): Sea, (2): Surf, (3): Sand and bare soil, (4): Truck farming, (5): Cultivation of cereals, (6): Fallows, (7): Forest, (8): Scrub, (9): Urban, (10): Burnt land, (11): Sabkha 1 and (12): Sabkha 2.
B. Settings algorithms
1) CLONALG :
• Parameters to fix: NbrLar: number of pixels (antibodies) of low affinity replaced, NbrLar = 0, NbrTra: number of
pixels trained for each sample, NbrTra = 21, Coe: coefficient of cloning: Coe = 10.
• Parameters to vary: Gen: number of generations, Ncm: number of pixels of better affinity selected for cloning and
mutation.
2) API :
• Parameters to fix: NbrNeur: number of neurons in the hidden layer, Auto: the learning automatically stops when the
results stabilize.
• Parameters to vary: NbrAnt: number of ants, Plocal: patience of ants, NbrSit: number of hunting sites, NbrItr: number
of iterations.
C. Tests on AntImuClass
The tests performed aim to evaluate the quality of the AntImuClass network, which hybridizes the CLONALG and API algorithms in the learning and optimization phases of the RBFU network, and to test the influence of the various parameters on the classification of satellite images.
1) Tests on CLONALG
To test the performance of the CLONALG algorithm, we vary one parameter at a time while the remaining parameters are fixed, in order to study the influence of each one on the neural network, and thus on the classification rate and therefore on the satellite image classification. The result of each test is given by the corresponding figure.
During this first test, we fix the API parameters as follows: (Asite, Alocal, Plocal, NbrNeur, NbrSit, NbrItr, NbrAnt) = {0.1,
0.01, 15, 12, 12, 100, 24}.
Note: we decided to take a number of iterations equal to 100. At first sight this number seems fairly large, but keeping it fixed does not affect the results throughout the tests, as we shall see later.
a) Influence of the number of generations
For this test we fix the number of pixels of better affinity selected for cloning and mutation (Ncm) to 10, and we vary the
number of generations (Gen). The test results are illustrated by the following table:
TABLE I. INFLUENCE OF THE NUMBER OF GENERATIONS ON THE CLASSIFICATION RATE
Gen                   10        20        30        40
Classification rate   93.25 %   95.07 %   95.07 %   95.01 %
Based on the above table, we find that when the number of generations is at its lowest (Gen = 1), we obtain an unsatisfactory classification rate (equal to 87.23%), but when we increase the number of generations, the classification rate increases quickly. With 20 generations, the rate stabilizes, and after 30 generations a decrease in the rate is noticed.
With 20 and with 30 generations we obtain a classification rate equal to 95.07%, and even if a priori we should choose 20 generations to save time (since increasing the amount of computation increases the processing time), we choose 30 generations for the rest of the tests, since with this number there is less confusion between classes. The following figures show the results of the classification obtained with these two numbers of generations (see fig. 7 and fig. 9).
Fig. 7. Classified image with Gen = 20, rate= 95.07%
Fig. 8. Legend of the classified images
Fig. 9. Classified image with Gen = 30, rate= 95.07%
Unlike in the previous figure, with 10 additional generations the algorithm assigns all pixels of the Sabkha 1 and Sabkha 2 classes to the right class. Ten generations may seem a small number at first, but in the end they lead to a much better assignment of pixels to the right class. As in the previous classification, the confusion between the Scrub and Forest classes persists.
b) Influence of the number of pixels of best affinity
Having reached the maximum classification rate by varying the number of generations, we considered other variations, including the number of pixels of best affinity. We now fix the number of generations Gen to 30 and vary the number of pixels of best affinity selected for cloning and mutation (Ncm). The results are shown in the following table:
TABLE II. INFLUENCE OF THE NUMBER OF PIXELS OF BETTER AFFINITY ON THE CLASSIFICATION RATE
Ncm                   5         10        15        20        35
Classification rate   92.50 %   95.27 %   94.54 %   93.03 %   93.44 %
We note that with a number of pixels of best affinity selected for cloning (Ncm) equal to 10, the classification rate is at its maximum.
The following figure shows the satellite image obtained with a classification rate equal to 94.54% (see Fig. 10); the number of confusions in this figure is considerably larger than in the image whose classification rate equals 95.27% (Figure 11).
Fig. 10. Classified image with rate = 94.54 %
Fig. 11. Classified image with rate= 95.27 %
In figure 12, even though the classification rate is quite high, the confusion between the Sabkha 1, Sabkha 2 and Sand and bare soil classes is the one that stands out most.
Consequently, with 10 pixels of best affinity the network offers the best classification result.
2) Test on API
From the previous tests, it seems clear that with the CLONALG parameters equal to 10 for Ncm and to 30 for Gen, we get
the best results.
In the following and throughout the tests, we fix the parameters of the CLONALG algorithm, i.e. (NbrLar, NbrTra, Coe, Gen, Ncm), respectively, as follows: {0, 21, 10, 30, 10}.
a) Influence of the number of ants
To see whether the number of ants has an impact on the classification of satellite images, and to determine the degree of this impact, we vary the number of ants while the other algorithm parameters are fixed in advance: (Asite, Alocal, Plocal, NbrNeur, NbrSit, NbrItr) is set to the values (0.1, 0.01, 15, 12, 12, 35).
TABLE III. INFLUENCE OF THE NUMBER OF ANTS ON THE CLASSIFICATION RATE
NbrAnt           5         10        20        24        30        35
RBFU-API rate    90.61 %   89.95 %   90.31 %   90.23 %   90.03 %   87.45 %
RBFU-APIh rate   91.36 %   93.09 %   94.54 %   95.47 %   95.25 %   93.36 %
The results obtained throughout this test show that the rates are not stable, since an increase in the number of ants does not necessarily mean an increase in the classification rate. However, it is with a number of ants equal to 20 that the classification rate (RBF-API) is at its maximum.
At this stage we introduce the heterogeneity factor into the RBF-API network, and all rates of this test increase, without exception. This time, however, it is not with a number of ants equal to 20 that we obtain the highest rate, but with a number equal to 24.
Fig. 12. Classified image with a rate = 94.67 %
b) Influence of the patience of ants
The influence of the patience of ants on the classification rate of satellite images is described in the following table, where
we vary the patience of ants.
TABLE IV. INFLUENCE OF THE PATIENCE OF ANTS ON THE CLASSIFICATION RATE
Plocal           2         5         15        20        30        35        50
RBFU-API rate    90.04 %   89.25 %   91.77 %   90.36 %   92.24 %   90.65 %   92.02 %
RBFU-APIh rate   90.46 %   93.96 %   96.19 %   93.63 %   94.28 %   91.55 %   95.17 %
Again the best results were obtained at different levels (Plocal is equal to 30 for RBF-API and to 15 for RBF-APIh).
Fig. 13. Classified image with a rate = 92.24 %
c) Influence of the number of hunting sites
As done previously, we fix all the algorithm parameters except the number of sites, which we vary, and we note the changes as follows.
TABLE V. INFLUENCE OF THE NUMBER OF SITES ON THE CLASSIFICATION RATE
NbrSit           5         10        12        20        35        50
RBFU-API rate    54.85 %   87.33 %   89.74 %   87.55 %   83.15 %   77.56 %
RBFU-APIh rate   62.65 %   87.93 %   96.87 %   87.99 %   88.23 %   89.41 %
Fig. 14. Classified image with a rate = 89.74 %
Unlike in the other figures, in figure 14 we note that many pixels were not classified; even worse, they were detected as unknown pixels, which means that the algorithm failed to decide in which class (category) to put them.
Fig. 15. Classified image with a rate = 96.87 %
For the first time since the early tests, the best results were obtained at the same level for both the homogeneous population (API) and the heterogeneous one (APIh), namely for a number of sites equal to 12. However, it is the APIh result that interests us most, because it is through this diversity that we increase the quality of the results.
d) Influence of number of iterations
We vary the number of iterations and note the changes.
TABLE VI. INFLUENCE OF THE NUMBER OF ITERATIONS ON THE CLASSIFICATION RATE
NbrItr           15        25        30        35        40
RBFU-API rate    89.63 %   91.28 %   91.78 %   92.00 %   92.00 %
RBFU-APIh rate   92.89 %   94.93 %   95.07 %   97.02 %   95.26 %
We note that it is not necessary to go up to 50 iterations, because with only 35 iterations we obtain the same results.
As in the previous test, the best results were obtained at the same level, 35 iterations; in addition, up to this point, the classification rate increases as the number of iterations increases.
Fig. 16. Classified image with rate = 92.00 %
Fig. 17. Classified image with rate = 97.02 %
In conclusion, AntImuClass performs better than RBF alone [6] but requires a longer execution time. We also note that the time limit given to our algorithms was set with reference to the ACO algorithms which, unlike the local search of GNGU [6], converge rather slowly; GNGU was therefore disadvantaged by this time limit.
Finally, the parameters of the AntImuClass algorithm - CLONALG and API combined - that offer the best results are: (NbrNeur, NbrAnt, NbrSit, Plocal, NbrItr, NbrLar, NbrTra, Coe, Gen, Ncm) = {12, 24, 15, 12, 35, 0, 21, 10, 30, 10}.
V. CONCLUSION
In this paper we addressed the problem of remote sensing imagery classification through a hybrid network, named AntImuClass: a hybridized RBF-GNGU network based on an optimization by clonal selection (immune system) coupled with a learning phase based on an ant colony of the Pachycondyla apicalis type.
This algorithm was tested on satellite images from the Landsat 5 TM satellite of the western region of Oran city, where it showed promising results. Thus, a natural biological mechanism of self-government of autonomous agents can manage complex problems in real time with great efficiency.
The presented model is quite flexible and may incorporate several variations. One can, for example, consider that the environment contains many food sources, each in a set amount, which decreases with the visits of the ants. These quantities can be incorporated into the optimal control model through the reward of the source states. The reward then evolves over time, and the pheromone traces, which are constantly updated, follow this change of the optimal value function. One can also use different settings for the outward and return journeys of the ants: this could model different strategies depending on whether or not the ants carry food.
In conclusion, we showed once again that bio-inspired computing demonstrates its effectiveness on problems like classification, and is able to classify relevant data bases.
REFERENCES
[1] N. Monmarché, G. Venturini and M. Slimane, “Clustering by a population of artificial ants”, Computer Science
Laboratory, Tours University. 2000.
[2] C. Solnon, “Ant-P-solver: a constraint solver based on artificial ants”, LISI Laboratory. University of Lyon. 2000.
[3] C. Solnon, “Contributions to solving practical combinatorial problems: ants and graphs”, Claude Bernard University,
Lyon. 2005.
[4] C. Solnon, “Problem solving and combinatorial optimization by ant colony”, Claude Bernard University, Lyon. 2006.
[5] M. Gharbi, “Artificial immune systems optimization”, European Center for Virtual Reality. Virtual Ecosystem and
Biology. 2006.
[6] Y.T. Amghar, “Optimization of a connectionist model for processing of optical data”, University of Sciences and
Technology of Oran, Algeria. 2009.
[7] A. Costanzo, T. V. Luong and G. Marill, “Ant Colony Optimization”. 2006.
[8] V. Maniezzo, L. M. Gambardella and F. D. Luigi, “Ant Colony Optimization”, Optimization Techniques in Engineering.
Springer-Verlag. 2004.
[9] T. Liao, M. A. Montes de Oca, D. Aydin, T. Stützle and M. Dorigo, “An Incremental Ant Colony Algorithm with Local
Search for Continuous Optimization”, IRIDIA - Technical Report Series Technical Report. 2011.
[10] A.V. Alvarenga, “Artificial Ant Colony: Features and applications on medical image segmentation”, Laboratory of
Ultrasound, Nat. Inst. of Metrol, Stand. & Ind. Quality (Inmetro), Rio de Janeiro, Brazil. 2011.
[11] N. Monmarchè, “Artificial ant algorithms: applications to the classification and optimization”, François Rabelais
University. Tours. 2000.
[12] D. Fresneau, “Individual foraging and path fidelity in a ponerine ant”, Insectes Sociaux, 32:109-116. 1985.
[13] S. Goss, D. Fresneau, J. L. Deneubourg, J. P. Lachaud, and J. Valenzuela Gonzalez, “Individual foraging in the ant
Pachycondyla apicalis”, Oecologia 80:65-69. 1989
[14] V. Dietemann and C. Peeters, “Queen influence on the shift from trophic to reproductive eggs laid by workers of
Pachycondyla apicalis”, Insectes Sociaux. V : 47, pages 223-228. 2000.
[15] N. Monmarché, G. Venturini and M. Slimane. “On how Pachycondyla apicalis ants suggest a new search algorithm”,
Computer Science Laboratory of the University of Tours, School of Engineering in Computer Science for Industry
Tours, France. 2000.
* Y.T Amghar, Science and Techniques Preparatory School of Oran, ALGERIA
E-mail address: [email protected]
†
H. Fizazi, University of Science and Technology of Oran, ALGERIA
E-mail address: [email protected]
Automated Algorithm Selection on Continuous
Black-Box Problems By Combining Exploratory
Landscape Analysis and Machine Learning
arXiv:1711.08921v1 [stat.ML] 24 Nov 2017
Pascal Kerschke and Heike Trautmann
Abstract—In this paper, we build upon previous work on designing informative and efficient Exploratory Landscape Analysis
features for characterizing problems’ landscapes and show their
effectiveness in automatically constructing algorithm selection
models in continuous black-box optimization problems.
Focussing on algorithm performance results of the COCO
platform of several years, we construct a representative set of
high-performing complementary solvers and present an algorithm selection model that manages to outperform the single best solver of the portfolio by a factor of two. Acting on the assumption
that the function set of the Black-Box Optimization Benchmark is
representative enough for practical applications the model allows
for selecting the best suited optimization algorithm within the
considered set for unseen problems prior to the optimization itself
based on a small sample of function evaluations. Note that such
a sample can even be reused for the initial algorithm population
so that feature costs become negligible.
Index Terms—Automated Algorithm Selection, Black-Box Optimization, Exploratory Landscape Analysis, Machine Learning,
Single-Objective Continuous Optimization.
I. I NTRODUCTION
ALTHOUGH the Algorithm Selection Problem (ASP, [1])
has been introduced more than four decades ago, there
only exist few works (e.g., [2], [3]), which perform algorithm
selection in the field of continuous optimization. Independent
of the underlying domain, the goal of the ASP can be described
as follows: given a set of optimization algorithms A, often
denoted algorithm portfolio, and a set of problem instances I,
one wants to find a model m : I → A that selects the best
algorithm A ∈ A from the portfolio for an unseen problem
instance I ∈ I. Albeit there already exists a plethora of
optimization algorithms – even when only considering singleobjective, continuous optimization problems – none of them
can be considered to be superior to all the other ones across
all optimization problems. Hence, it is very desirable to find a
sophisticated selection mechanism, which automatically picks
the portfolio’s best solver for a given problem.
Within other optimization domains, such as the well-known
Travelling Salesperson Problem, feature-based algorithm selectors have already shown their capability of outperforming
the respective state-of-the-art optimization algorithm(s) by
combining machine learning techniques and problem dependent features [4], [5]. As schematized in Figure 1, we now
transfer the respective idea of using instance-specific features
Pascal Kerschke and Heike Trautmann are with the Group of Information
Systems and Statistics at the University of Münster, 48149 Münster, Germany
(e-mail: {kerschke, trautmann}@uni-muenster.de).
Fig. 1. Schematic view of how Exploratory Landscape Analysis (ELA) can
be used for improving the automated algorithm selection process.
to single-objective continuous optimization problems based
on Exploratory Landscape Analysis (ELA, [6]) for leveraging
solver complementarity of a well-designed algorithm portfolio.
As we show within this work, the integration of ELA results
in strong performance improvements (averaged across the
entire benchmark) over any of the portfolio’s single solvers.
More precisely, our selector requires only half of the resources
needed by the portfolio’s single best solver. Hence, our model
strongly reduces the gap towards the idealistic – and thus, from
a realistic point of view unreachable – virtual best solver.
A more detailed overview of Exploratory Landscape Analysis, as well as an introduction into flacco – an R toolbox for
computing a variety of such landscape features enhanced by a
graphical user interface – is given in Section II. In Section III,
we give more insights into the COCO platform and afterwards
describe our experimental setup, including the generation of
the considered algorithm portfolio, in Section IV. An analysis
of our found algorithm selection models is given in Section V
and Section VI concludes our work.
II. E XPLORATORY L ANDSCAPE A NALYSIS
While problem-dependent (landscape) features can in general be computed for any optimization problem (e.g., [7],
[8], [9], [10], [11]), we will only consider single-objective,
continuous optimization problems within this work. For this
domain, Mersmann et al. introduced a sophisticated approach
for characterizing a problem’s landscape by means of numerical feature values and called it “Exploratory Landscape
Analysis” [6]. Within their work, they designed a total of 50
numerical measures and grouped them into six categories of
so-called “low-level” features: convexity, curvature, level set,
local search, meta model and y-distribution features. These
numbers were then used to characterize eight so-called “high-level” properties, such as the degree of multimodality, the
separability, the basin size homogeneity or the number of
plateaus. However, given that these properties (a) require
expert knowledge and consequently can not be computed automatically, and (b) are categorical and thus, make it impossible
to e.g., distinguish problems by their degree of multimodality,
the introduction of the low-level features can be seen as
a major step towards automatically computable landscape
features and hence automated algorithm selection.
A. Background
Already in the years before the term ELA was introduced,
researchers have tried to characterize a problem’s landscape
by numerical values: Jones and Forrest assessed a problem’s
difficulty by means of a fitness distance correlation [12],
Lunacek and Whitley introduced a dispersion metric [13],
Malan and Engelbrecht quantified the landscape’s ruggedness [14] and Müller and Sbalzarini performed fitness-distance
analyses [15].
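For illustration, here is a minimal Python sketch of two of these early landscape measures, fitness-distance correlation and a simple dispersion metric, computed from a random sample of evaluated points; the exact definitions used by the cited authors differ in details, so this is only an approximation of the idea, and all names are our own.

import numpy as np

def fitness_distance_correlation(X, y):
    # Correlation between fitness values and distances to the best sampled point.
    best = X[np.argmin(y)]
    dists = np.linalg.norm(X - best, axis=1)
    return np.corrcoef(y, dists)[0, 1]

def dispersion(X, y, quantile=0.1):
    # Ratio of the mean pairwise distance among the best `quantile` of points
    # to the mean pairwise distance of the whole sample.
    def mean_pairwise(points):
        diffs = points[:, None, :] - points[None, :, :]
        d = np.linalg.norm(diffs, axis=-1)
        n = len(points)
        return d.sum() / (n * (n - 1))
    k = max(2, int(quantile * len(X)))
    best_idx = np.argsort(y)[:k]
    return mean_pairwise(X[best_idx]) / mean_pairwise(X)

# Toy usage: a 50 * d sample of the sphere function in d = 2 dimensions.
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(100, 2))
y = np.sum(X**2, axis=1)
print(fitness_distance_correlation(X, y), dispersion(X, y))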
However, after Bischl et al. [2] have shown the potential
of landscape analysis by using ELA features for algorithm
selection, a manifold of new landscape features has been
introduced. Abell et al. used hill climbing characteristics [16],
Muñoz et al. measured a landscape’s information content [17]
and Morgan and Gallagher analyzed the problems with the
help of length scale features [18]. More recently, Malan
et al. characterized constraint optimization problems [19],
and Shirakawa and Nagao introduced an entire bag of local
landscape features [20].
In other works, research groups, which also include this paper’s authors, have designed features based on a discretization
of the continuous search space into a grid of cells [21], and
successfully employed the nearest better clustering approach
for distinguishing funnel-shaped landscapes with a global
structure from problems, whose local optima are aligned in
a more random manner [22]. This particularly facilitates the
decision of the class of algorithms which is most suited for
the problem at hand. We also showed that even low budgets
of 50 × d observations (d being the problem dimensionality)
– i.e., a sample size that is close to the size of an evolutionary algorithm’s initial population – is sufficient for such a
distinction [23]. In consequence, the evaluation of this initial
sample, which is required for the landscape features, would
come without any additional costs, given that the evolutionary
algorithm would have to evaluate those points anyways.
B. Flacco
In the previous subsection, we have provided an overview
of numerous landscape features. Unfortunately, those features
were usually – if at all – implemented in different programming languages, such as Python [24], Matlab [25] or
R [26], making it extremely complicated to use all of them
within a single experiment. This obstacle has been solved
(for R-users) with the development of flacco [27], an R-package for feature-based landscape-analysis of continuous
and constrained optimization problems. The package (currently) provides a collection of more than 300 landscape
features (including the majority of the ones from above),
distributed across a total of 17 different feature sets. In
addition, the package comes with several visualization techniques, which should help to facilitate the understanding of
the underlying problem landscapes [28]. One can either use
the package’s stable release from CRAN1 or its developmental
version from GitHub2. Note that the latter also provides further
information on the usage of flacco, as well as a link to its
online-tutorial3.

Fig. 2. Screenshot of the platform-independent GUI of flacco, which is publicly hosted at http://www.flacco.shinyapps.io/flacco.
Being aware of possible obstacles for non-R-users, we
recently developed a web-hosted and hence platform-independent graphical user interface (GUI)4 of flacco [29].
A screenshot of that GUI is displayed in Figure 2. The GUI
provides a slim, and thus user-friendly, version of flacco.
In its left part (highlighted in grey), the user needs to provide
information on the function that should be optimized – either
by selecting one of the single-objective optimization problems
available in the R-package smoof [30], configuring one of
the BBOB-functions, manually defining a function, or by
uploading an evaluated set of points. On the right side, one
then can either compute a specific feature set or visualize
certain aspects of the optimization problem and/or its features.
III. EXPLORATORY DATA ANALYSIS OF COCO
Instead of executing the optimization algorithms ourselves,
we use the results from COCO5 [31], which is a platform
for COmparing Continuous Optimization algorithms on the
Black-Box Optimization Benchmark (BBOB [32], [33]). The
1 https://cran.r-project.org/package=flacco
2 https://github.com/kerschke/flacco
3 http://kerschke.github.io/flacco-tutorial/site/
4 http://www.flacco.shinyapps.io/flacco
5 http://coco.gforge.inria.fr/
platform provides a collection of the performance results of
129 optimization algorithms, which have been submitted to
the BBOB workshops at the GECCO conference6 of the years
2009, 2010, 2012, 2013 and 2015. Using the results published
on this platform comes with the advantage that over the years
basically all important classes of optimization algorithms,
including current state-of-the-art optimizers, have been submitted. Thus, the experiments can be considered representative and
comprehensive.
A. General Setup of the COCO-Platform
The competition settings were quite similar across the
years: per dimension d ∈ {2, 3, 5, 10, 20, 40} and function
FID ∈ {1, . . . , 24}, the participants had to submit the results
for a total of 15 problem instances. As shown below, only five
instances (IIDs 1 to 5) were part of each competition, while
the remaining ten instances changed per year:
• 2009: IIDs 1 to 5 (with 3 replications each)
• 2010: IIDs 1 to 15 (1 replication each)
• 2012: IIDs 1 to 5, 21 to 30 (1 replication each)
• 2013: IIDs 1 to 5, 31 to 40 (1 replication each)
• 2015: IIDs 1 to 5, 41 to 50 (1 replication each)
For each pair of problem instance i ∈ I and optimization
algorithm A ∈ A, the submitted data contains a log of the
performed number of function evaluations and the corresponding achieved fitness value, enabling an a posteriori evaluation
of the solver’s performance. More precisely, this data allows
one to decide whether the solver was successful7 in approximating
the (known) global optimum of instance i ∈ I up to a
precision value ε ∈ {101 , 100 , . . . , 10−7 } and also, how many
function evaluations F Ei (ε) were performed until the solver
(un)successfully terminated. These values were then used to
compute the solver’s Expected Runtime (ERT) [32]:

ERT(ε) = Σ_i FE_i(ε) / Σ_i Success_i(ε).
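To make the definition concrete, the following Python sketch (our own illustration, not code from the COCO platform) computes the ERT from a list of per-instance run records; the data layout is an assumption.

```python
def expected_runtime(runs):
    """Compute ERT(eps) = sum_i FE_i(eps) / sum_i Success_i(eps).

    `runs` is a list of per-instance records; each record stores the number of
    function evaluations performed until the solver terminated ('fe') and whether
    the target precision eps was reached ('success'). Both values are assumed to
    refer to the same precision threshold eps.
    """
    total_evaluations = sum(r["fe"] for r in runs)
    successes = sum(1 for r in runs if r["success"])
    if successes == 0:
        return float("inf")  # no successful run: the ERT is undefined/infinite
    return total_evaluations / successes


# Hypothetical toy data for one BBOB problem (three runs on different instances)
runs = [{"fe": 215, "success": True},
        {"fe": 545, "success": True},
        {"fe": 91882, "success": False}]
print(expected_runtime(runs))  # total evaluations divided by the two successes
```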
Although the aforementioned setup comes with some drawbacks, we will still use it as it has been well-established
in the community and thus allows for wide comparability.
Nevertheless, we would like to address its pitfalls as well:
(a) Given the strong variability across the instances’ objective values, it is not straightforward to use a fixed absolute
precision value for comparing the solvers across all instances.
Instead, it might be more reasonable to use a relative threshold.
(b) Computing the ERT based on different instances – even if
they belong to the same BBOB problem – is very questionable
as the instances can have completely different landscapes:
if the original instance is a highly multimodal function, the
transformed instance8 could – if at all – consist of only
a few peaks. Therefore, we strongly encourage future setups to ask for
several solver runs on the same instance, i.e., to
perform (at least five to ten) replicated runs, and then to evaluate
the performance results per instance rather than per function,
allowing for ERT computations on both the function and the instance level.
6 http://sig.sigevo.org/index.html/tiki-index.php?page=GECCOs
7 Success_i(ε) is 1 if the solver finds a solution x∗ ∈ [−5, +5]^d whose fitness value f(x∗) lies within [y_opt, y_opt + ε]; otherwise Success_i(ε) = 0.
8 By definition, instances are identical up to rotation, scaling and shifts [33].
B. Preprocessing COCO’s Solver Performances
We will use the performance results of the 129 optimization
algorithms available at COCO9 . However, in order to work
with a valid data base, we first performed some sanity checks
on the platform’s data before combining it in a joint data base.
While the submitted results of all 29 (2009) and 23 (2010)
solvers of the first two competitions passed these checks, in
the following years, only 22 of 28 (2012), 16 of 31 (2013) and
13 of 18 (2015) submissions did so. The invalid submissions
partially used settings of the previous years. However, in order
to use the most general portfolio of solvers, we only considered
the submissions for IIDs 1 to 5 with only one run per instance,
as this is the only set of instances that was used across all five
BBOB competitions. Fortunately, the performances of all 129
solvers could be considered for our experiments, because even
the problematic submissions from above had valid data for the
first five instances.
IV. EXPERIMENTAL SETUP
A. Algorithm Performance Data
For each of the 129 solvers we computed the ERT per
tuple of problem dimension d ∈ {2, 3, 5, 10}, BBOB problem
respectively function FID ∈ {1, . . . , 24} and problem instance
IID ∈ {1, . . . , 5} (if multiple runs exist, we only considered
the first run), resulting in a total of 61 920 observations. The
ERTs were computed for a precision threshold of ε = 10−2 ,
because smaller values led to too many unsuccessful runs.
Even for this chosen precision, only approximately 67% of all
(considered) runs terminated successfully.
B. Instance Feature Data
Each of the ELA feature sets was computed using a so-called improved Latin hypercube design [34] consisting of
50 × d observations, which were sampled across the decision
space, i.e., [−5, +5]d . The feature sets were then computed
using the R-package flacco [27] for all four problem dimensions, 24 BBOB problems and five problem instances that were
used by the performance data (see Section IV-A). For each of
these 480 observations, we calculated the six classical ELA
feature sets (convexity, curvature, levelset, local search, metamodel and y-distribution) [6], as well as the basic, (cell mapping) angle10 [21], dispersion [13], information content [17],
nearest better clustering [22] and principal component features,
resulting in a total of 102 features per problem instance.
Although being conscious of the resulting information loss,
we aggregate each feature across the five problem instances
(per BBOB problem) via the median of the respective feature
values, in order to map our feature data to the 96 observations
(24 problems, four dimensions) of the performance data.
9 An overview of all submitted optimization algorithms along with their descriptions can be found at http://coco.gforge.inria.fr/doku.php?id=algorithms.
10 The cell mapping angle features were computed using three blocks per
dimension, as larger values would result in too many empty cells due to the
“curse of dimensionality”.
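As an illustration of this aggregation step, the following Python/pandas sketch (with a hypothetical file name and column layout) collapses the per-instance feature values to one median value per BBOB problem and dimension.

```python
import pandas as pd

# Hypothetical feature table: one row per (dim, fid, iid) and one column per ELA feature.
features = pd.read_csv("ela_features.csv")

feature_cols = [c for c in features.columns if c not in ("dim", "fid", "iid")]

# Median over the five instances of every BBOB problem: 480 rows -> 96 rows,
# matching the granularity of the performance data (24 functions x 4 dimensions).
aggregated = (features
              .groupby(["dim", "fid"], as_index=False)[feature_cols]
              .median())
```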
C. Constructing the Algorithm Portfolio
For meaningfully addressing the algorithm selection task,
the choice of the underlying algorithm portfolio is crucial. Ideally, the considered set should be as small and as complementary as possible and should include state-of-the-art optimizers.
For this purpose, we ranked the solvers per considered BBOB
problem based on ERT performance. We then constructed four
solver sets (one per dimension), each of them containing the
solvers that ranked within the “Top 3” of at least one of the
24 functions of the respective dimension. Based on these four
solver sets – consisting of between 37 and 41 solvers each –
a portfolio of 12 solvers was constructed by only considering
optimizers that belonged to each of the four sets.
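A rough sketch of this construction in Python/pandas is given below; the input layout (a long table with one ERT per solver and problem) is an assumption, and ties are ignored for simplicity, so the result may differ slightly from a rank-based selection.

```python
import pandas as pd

# Hypothetical long-format table with columns: dim, fid, solver, ert
perf = pd.read_csv("ert_per_problem.csv")

portfolio = None
for dim, per_dim in perf.groupby("dim"):
    top_solvers = set()
    for fid, per_fid in per_dim.groupby("fid"):
        # solvers with one of the three smallest ERTs on at least one function
        top_solvers |= set(per_fid.nsmallest(3, "ert")["solver"])
    # keep only solvers that appear in the per-dimension set of every dimension
    portfolio = top_solvers if portfolio is None else portfolio & top_solvers

print(sorted(portfolio))  # 12 solvers in the paper's setting
```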
The 12 optimization algorithms from the found portfolio can
be grouped into four categories and are summarized below.
1) Deterministic Optimization Algorithms (2): The two
solvers of this category are variants of the Brent-STEP algorithm11 [37]. It performs axis-parallel searches and chooses
the next iteration’s search dimension either using a round-robin
(BSrr, [38]) or a quadratic interpolation strategy (BSqi, [38]).
2) Multi-Level Approaches (5): The origin of most solvers
belonging to this category is the multi level single linkage
method (MLSL, [39], [40]). It is a stochastic, multistart, global
optimizer that relies on random sampling and local searches.
Aside from MLSL itself, some of its variants also belong to our
portfolio: an interior-point version for constrained nonlinear
problems (fmincon, [39]), a quasi-Newton version, which
approximates the Hessian using BFGS [41] (fminunc, [39]),
and a hybrid variant whose most important improvements
are related to its sampling phase (HMLSL, [39]). The final
optimizer belonging to this group is the multilevel coordinate
search (MCS, [42]), which splits the search space into smaller
boxes – each containing a known observation – and then starts
local searches from promising boxes.
3) Variants of the CMA-ES (4): In 2001, Hansen introduced
one of the most popular evolution strategies: the Covariance
Matrix Adaptation Evolution Strategy (CMA-ES) [43] with cumulative step-size adaptation (CMA-CSA, [44]). It led to a
plethora of variants [45], including the following three solvers
from our portfolio: (1) IPOP400D [46], a restart version of
the CMA-ES with an increasing population size (IPOP-CMA-ES, [47]) and a maximum of 400 × (d + 2) function evaluations. (2) A hybrid CMA (HCMA, [48]), which combines
a bi-population self-adaptive surrogate-assisted CMA-ES12
(BIPOP-s∗ aACM-ES-k, [50]), STEP [35] and NEWUOA [51]
to benefit from surrogate models and line searches simultaneously. (3) A sequential, model-based algorithm configuration
(SMAC, [52]) procedure applied to the BBOB problems
(SMAC-BBOB, [53]). It uses Gaussian processes (GP, [54])
to model the expected improvement function and then
performs one run of DIRECT [55] (with 10 × d evaluations)
and ten runs of the classical CMA-ES [43] (with 100 × d
evaluations) on the expected improvement function.
11 The Brent-STEP algorithm itself accelerates the global line search method
STEP (“select the easiest point”) [35] by using Brent’s method [36].
12 A BIPOP-CMA-ES [49] is a multistart CMA-ES with equal budgets for
two interlaced restart strategies: one with an increasing population size and
one with varying small population sizes.
4) Others (1): The final optimizer from our portfolio is
called OptQuest/NLP (OQNLP, [39], [56]). It is a commercial, heuristic, multistart algorithm that was designed to find
the global optima of smooth constrained nonlinear programs
(NLPs) and mixed integer nonlinear programs (MINLPs). The
algorithm uses the OptQuest Callable Library (OCL, [57]) to
generate candidate starting points for a local NLP solver.
D. Machine Learning Algorithms
We considered three classes of supervised learning strategies
for training our algorithm selection models: (1) A classification approach, which simply tries to predict the best-performing optimizer13 and hence completely ignores the
magnitude of performance differences between the best and the
remaining portfolio solvers. (2) A regression approach, which
trains a separate model for the performances of each optimizer
and afterwards predicts the solver with the best predicted
performance. (3) In addition to these well-known strategies, we
also considered the so-called pairwise regression, which led
to promising results in other works (e.g., [4], [5]). In contrast
to modelling the performances directly (as in (2)), it
models the performance differences for each solver pair and
afterwards predicts the solver whose predicted performance
difference was the highest, compared to all other solvers.
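The selection logic of the pairwise-regression approach can be sketched as follows (Python; the models dictionary and its scikit-learn-like predict interface are assumptions made for illustration).

```python
import itertools
import numpy as np

def select_solver_pairwise(models, x, solvers):
    """Pick a solver from pairwise difference models.

    Assumption: models[(a, b)] predicts relERT(a) - relERT(b) for the problem
    described by feature vector x (lower relERT is better). Every solver's
    predicted advantage over all other solvers is accumulated and the solver
    with the largest total advantage is returned.
    """
    advantage = {s: 0.0 for s in solvers}
    for a, b in itertools.combinations(solvers, 2):
        diff = float(models[(a, b)].predict(np.asarray(x).reshape(1, -1))[0])
        advantage[a] -= diff   # a negative difference favours solver a
        advantage[b] += diff
    return max(advantage, key=advantage.get)
```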
The algorithm selectors were trained in R [26] using the R-package mlr [58]. For each class of the supervised learning
approaches, we considered recursive partitioning and regression trees (rpart, [59]), kernel-based support vector machines (ksvm, [60]), random forests (randomForest, [61])
and extreme gradient boosting (xgboost, [62]). Additionally, we also tried multivariate adaptive regression splines
(mars, [63]) in case of the (paired) regression approaches.
Note that the SVM’s inverse kernel width sigma was the
only hyperparameter that was (automatically) configured – using the sigest function from the R-package kernlab [60].
All other hyperparameters were used in their default settings:
the SVMs used Gaussian radial basis kernels and the random
forests were constructed using 500 trees, whose split points
were sampled uniformly at random from ⌊√p⌋ (classification) or
max{⌊p/3⌋, 1} (regression / paired regression) features, with
p being the data set’s number of (ELA) features.
E. Feature Selection Strategies
The algorithm selectors are initially trained using all 102
features. However, using all of the features simultaneously
likely causes lots of noise and/or redundancy, which could lead
to poorly performing algorithm selectors. Furthermore, some
of the feature sets, namely, the convexity, curvature and local
search features from the classical ELA features [6], require
additional function evaluations on top of the costs for the
initial design. In order to overcome these obstacles, we used
the following four feature selection strategies.
1) sffs: A greedy forward-backward selection, which starts
with an empty feature set and iteratively alternates between
greedily adding and/or removing features as long as this results
in an improvement of the model’s performance.
2) sfbs: This strategy works analogously to the previous
one, but starts with the full set of all 102 features and
afterwards alternates between removing and adding features.
3) (10 + 5)-GA: A genetic algorithm with population size
10 and 5 offsprings per generation. Here, the selected features
are represented as a 102-dimensional bit string, where a value
of 1 at position k implies that the k-th feature is selected.
The GA runs for a maximum of 100 generations and selects
the features by performing random bit flips – with a (default)
mutation rate of 5% and a crossover rate of 50%.
4) (10 + 50)-GA: A modified version of the previous GA,
using a tenfold of offsprings (50 instead of 5) per generation.
13 Ties are broken via random uniform sampling among the tied solvers.

Fig. 3. Boxplot (left) and violin plot (right) visualizing the ratios (on a log-scale) between the ERTs of the portfolio’s best solvers and the best ones from COCO. The labeled outliers within the left image, coincidentally depicting all problems with a ratio greater than two, indicate the BBOB problems that were difficult to solve for the 12 algorithms from the considered portfolio. Note that two instances, FID 18 in 10D (ratio: 2.251) and FID 20 in 10D (ratio: 2.451), have not been labeled within the plot to account for better readability. Labeled outliers: 63.746 (FID 24, 5D), 4.788 (FID 24, 10D), 3.977 (FID 17, 2D), 2.991 (FID 23, 3D), 2.890 (FID 20, 5D), 2.567 (FID 23, 2D), 2.309 (FID 07, 2D), 2.243 (FID 22, 2D).
F. Performance Assessment
In lieu of using the ERT itself, we will use the relative
ERT (relERT) for our experiments. While the former strongly
biases the algorithm selectors towards multimodal and higher-dimensional problems due to much larger amounts of used
function evaluations, the relERTs, which are also used within
the BBOB competitions, allow a fair comparison of the solvers
across the problems and dimensions by normalizing each
solver’s ERT with the ERT of the best solver for the respective
BBOB problem (of that dimension). Instead of scaling each
performance with the respective best ERT of all 129 solvers,
we used the best performance from the 12 solvers of the
considered portfolio as this is our set of interest in this study.
As some solvers did not even solve a single instance from
some of the BBOB functions, the respective ERTs and relERTs
were not defined. We thus imputed the missing relERTs using
the well-known PAR10 score (e.g., [64]). That is, each of the
problematic values is replaced with a penalty value (36 690.3)
that is the tenfold of the highest valid relative ERT; for all
other values, the respective (finite) relERTs are used.
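The normalization and imputation step can be written down compactly; the sketch below (Python/NumPy, our own illustration) assumes a problems-by-solvers matrix of ERTs in which undefined entries are marked as NaN.

```python
import numpy as np

def relative_erts(ert, penalty_factor=10):
    """relERT matrix with PAR10-style imputation of undefined entries.

    ert: 2D array (problems x solvers); np.nan marks solvers that did not solve
    a single instance of the respective problem. Each row is divided by its best
    finite ERT, and missing values are replaced by ten times the largest finite
    relERT (36 690.3 in the paper's data).
    """
    ert = np.asarray(ert, dtype=float)
    best = np.nanmin(ert, axis=1, keepdims=True)   # best solver per problem
    rel = ert / best                               # finite relERTs are >= 1
    penalty = penalty_factor * np.nanmax(rel)
    return np.where(np.isnan(rel), penalty, rel)
```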
For each of the supervised learning approaches (see Section IV-D), the algorithm selectors are evaluated using the
mean relative ERT, which averages the relERTs of the predictions (including the costs for the initial design) on the
corresponding test data, i.e., a subset of the BBOB problems.
In order to obtain more realistic and reliable estimates of the
selectors’ performances, they were assessed using leave-one-(function)-out cross-validation. That is, per algorithm selector
we train a total of 96 submodels. Each of them uses only 95
of the 96 BBOB problems for training and the remaining one
for testing. Note that each problem was used exactly once as
test data. Consequently, within each iteration/fold of the leave-one-(function)-out cross-validation, exactly one problem (i.e.,
the respective test data) is completely kept out of the modeling
phase and only used for assessing the respective submodel’s
performance. The average of the resulting 96 relERTs is then
used as our algorithm selector’s performance.
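A minimal sketch of this evaluation loop is shown below (Python/NumPy); the fit and predict_choice callbacks stand in for an arbitrary selector and are assumptions, and the feature costs are omitted for brevity.

```python
import numpy as np

def leave_one_problem_out(features, relert, fit, predict_choice):
    """Leave-one-(function)-out estimate of a selector's mean relERT.

    features: (96, p) array of ELA features, relert: (96, n_solvers) array.
    fit(X, Y) returns a trained selector; predict_choice(model, x) returns
    the index of the chosen solver for a single problem.
    """
    n = len(features)
    picked = np.empty(n)
    for i in range(n):                          # one BBOB problem per fold
        train = np.arange(n) != i
        model = fit(features[train], relert[train])
        choice = predict_choice(model, features[i])
        picked[i] = relert[i, choice]           # feature costs could be added here
    return picked.mean()
```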
Following common practices in algorithm selection
(e.g., [64]), we compare the performances of our algorithm
selectors with two baselines: the virtual best solver (VBS) and
the single best solver (SBS). The virtual best solver, sometimes
also called oracle or perfect selector, provides a lower bound
for the selectors as it shows the performance that one could
(theoretically) achieve on the data, when always selecting the
best performing algorithm per instance. Given that the relERT
has a lower bound of 1 and that at least one solver achieves
that perfect performance per instance, the VBS has to be 1
by definition. Nevertheless, it is quite obvious that algorithm
selectors usually do not reach such an idealistic performance
given their usually imperfect selections and/or the influence of
the costs for computing the landscape features.
The second baseline, the SBS, represents the (aggregated)
performance of the (single) best solver from the portfolio.
Consequently, this baseline is much more important than
the VBS, because an algorithm selector is only useful if its
performance (including feature costs) is better than the SBS.
Note that in principle, the SBS could either be determined
on the entire data set or, for (leave-one-out) cross-validation,
be computed per fold. However, within our experiments, both
approaches result in identical performances.
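Determining the SBS on the entire data set then reduces to a few lines, sketched here in Python/NumPy (illustrative only).

```python
import numpy as np

def single_best_solver(relert, solver_names):
    """Return the portfolio solver with the lowest mean relERT across all problems."""
    mean_relert = np.asarray(relert).mean(axis=0)          # one value per solver
    best = int(np.argmin(mean_relert))
    return solver_names[best], float(mean_relert[best])    # e.g., ('HCMA', 30.37)
```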
V. RESULTS
A. Analyzing the Algorithm Portfolio
Here, we examine the constructed portfolio to get more
insights into the data provided by COCO. In a first step,
we compare the best performances of the portfolio solvers
to the best performances of all 129 solvers. For 51 of the
96 considered BBOB functions over the dimensions, the
portfolio’s VBS is identical to the VBS of all COCO-solvers.
The violin plot on the right side of Figure 3 – which can
be understood as a boxplot, whose shape represents a kernel
density estimation of the underlying observations – indicates
that the ratios between the best ERTs from the portfolio and
all COCO-solvers (per BBOB problem) are usually close to
one. More precisely, only ten BBOB problems, depicted by
black dots in the boxplot of Figure 3, have a ratio greater than
TABLE I
SUMMARY OF THE RELATIVE EXPECTED RUNTIME OF THE 12 PORTFOLIO SOLVERS AND THE TWO ALGORITHM SELECTORS (INCLUDING COSTS FOR FEATURE COMPUTATION) SHOWN PER DIMENSION AND BBOB GROUP. THE SINGLE BEST SOLVER (SBS) FOR EACH OF THESE COMBINATIONS IS PRINTED IN BOLD AND THE OVERALL SBS (HCMA) IS HIGHLIGHTED IN GREY. VALUES IN bold italic INDICATE THAT THE RESPECTIVE ALGORITHM SELECTOR WAS BETTER THAN ANY OF THE SINGLE PORTFOLIO SOLVERS. FURTHER DETAILS ON THIS SUMMARY ARE GIVEN WITHIN THE TEXT.
[Table I body: relative expected runtimes of the 12 portfolio solvers (BSqi, BSrr, CMA-CSA, fmincon, fminunc, HCMA, HMLSL, IPOP400D, MCS, MLSL, OQNLP, SMAC-BBOB) and the two algorithm selection models (AS-Model #1 and #2), listed per dimension (2, 3, 5, 10) and BBOB group (F1–F5, F6–F9, F10–F14, F15–F19, F20–F24, all); the individual numeric entries are not recoverable from the extracted text.]
two, and only three of them – the 2D-version of Schaffers F7
Function (FID 17, [33]), as well as the 5D- and 10D-versions
of the Lunacek bi-Rastrigin Function (FID 24, [33]) – have
ratios worse than three. Consequently, we can conclude that if
we find a well-performing algorithm selector, its suggested
optimization algorithm likely requires less than twice the
number of function evaluations compared to the best algorithm
from the entire COCO platform.
As mentioned in Section IV-F, we had to impute some of the
performances, because not all optimizers solved at least one
instance per BBOB problem. More precisely, HCMA was the
only one to find at least one valid solution for all 96 problems,
followed by HMLSL (88), CMA-CSA (87), OQNLP (72),
fmincon (71), MLSL (69), fminunc (68), IPOP400D and MCS
(67), BSqi (52), BSrr (50) and SMAC-BBOB (18). Hence, the
mean relERT of HCMA (30.37) was by far the lowest of all
considered solvers, making this solver the clear SBS.
However, as shown in Table I, HCMA is not superior
to the remaining solvers across all problems. Independent
of the problem dimensions, the Brent-STEP variants BSqi
and BSrr outperform the other optimizers on the separable
problems (FIDs 1 to 5). Similarly, the multilevel methods
MLSL, fmincon and HMLSL are superior on the unimodal
functions with high conditioning (FIDs 10 to 14), and the
multimodal problems with adequate global structure (FIDs 15
to 19) are solved fastest by MCS (in 2D) and CMA-CSA (3D
to 10D). The remaining problems, i.e., functions with low or
moderate conditioning (FIDs 6 to 9), as well as multimodal
functions with a weak global structure (FIDs 20 to 24) do
not show such obvious patterns. Nevertheless one can see that
HCMA, HMLSL, OQNLP and sometimes also MLSL perform
quite promising on these function groups.
The fact that HCMA often is inferior to the other 11 solvers
clearly indicates that the data contains sufficient potential for
improving over the SBS – e.g., with a reasonable algorithm
selector. This finding is supported by the small number of
points located exactly on the diagonal of the scatterplots in
Figure 4, which depict the ERT-pairs of the two baseline algorithms for all 24 BBOB problems and per problem dimension.
Noticeably, though not very surprising, the VBS and SBS
always have higher ERTs on the multimodal problems (FIDs
15 to 24) than on the unimodal problems (FIDs 1 to 14).
Also, the shape of the cloud of observations indicates that
the problems’ complexity grows with their dimension: while the
2D-observations are mainly clustered in the lower left corner
(indicating rather easy problems), the cloud stretches along the
entire diagonal for higher-dimensional problems. Furthermore
it is obvious that two problems, FIDs 1 (Sphere, indicated
TABLE II
COMPARISON OF THE NUMBER OF FUNCTION EVALUATIONS REQUIRED BY THE PORTFOLIO’S SINGLE BEST SOLVER, I.E., HCMA, FOR SOLVING THE 20 INSTANCES OF FID 4 (Büche-Rastrigin) UP TO A PRECISION OF 10^−2.

Dim  IID  Gap to Optimum  # Function Evaluations  Success?  ERT of VBS (Solver)  ERT of SBS (relERT)
 2    1      8.44e-03            215                TRUE       98.8 (BSqi)          405.2 (4.1)
 2    2      6.91e-03            545                TRUE
 2    3      8.84e-05            440                TRUE
 2    4      5.44e-03            248                TRUE
 2    5      4.29e-03            578                TRUE
 3    1      4.48e-03        976 690                TRUE      219.6 (BSrr)    387 784.5 (1 765.9)
 3    2      9.99e-01        570 925                FALSE
 3    3      9.62e-04            781                TRUE
 3    4      1.72e-03          1 373                TRUE
 3    5      8.24e-03          1 369                TRUE
 5    1      6.52e-03          2 048                TRUE      486.2 (BSrr)       25 197.0 (51.8)
 5    2      8.09e-04          2 248                TRUE
 5    3      1.00e+00         91 882                FALSE
 5    4      2.16e-03          2 382                TRUE
 5    5      8.54e-03          2 228                TRUE
10    1      3.75e-03          4 253                TRUE    1 067.8 (BSrr)        4 286.6 (4.0)
10    2      6.72e-03          4 495                TRUE
10    3      2.58e-03          4 632                TRUE
10    4      2.79e-03          4 032                TRUE
10    5      5.43e-03          4 021                TRUE
(The two ERT columns are reported once per dimension, aggregated over the five instances.)

Fig. 4. Scatterplots comparing the expected runtime (ERT) (shown on a log-scale) of the virtual best solver (VBS) and the single best solver (SBS), i.e., HCMA, across the 24 BBOB functions per problem dimension: 2D (top left), 3D (top right), 5D (bottom left) and 10D (bottom right).
by ◦) and 5 (Linear Slope), are consistently solved very fast
(independent of their dimensionality), whereas FID 24 (Lunacek
Bi-Rastrigin, 4) is always the most difficult problem for the
two baseline algorithms.
Looking at the performances of HCMA across all 96 problems, one problem stands out: the three-dimensional version of
the Büche-Rastrigin function (FID 4). On this (usually rather
simple) function, HCMA has a relERT of 1765.9, compared to
at most 51.8 on the non-3D-versions of this problem and 249.1
as the worst relERT of all remaining 95 BBOB problems. We
therefore had a closer look at the submitted data of HCMA
across FID 4 and for all dimensions. The respective data
is summarized in Table II. Comparing the results of the 20
optimization runs with each other, three of them seem to be
clear outliers. However, it remains unclear whether they were
caused by the algorithm itself – it could for instance be stuck in
a local optimum without terminating – or whether the outliers
were caused by some technical issue on the platform.
B. Analyzing the Best Algorithm Selector
After gaining more insights into the underlying data, we
use the performance and feature data for training a total
of 70 algorithm selectors: 14 machine learning algorithms
(see Section IV-D), each of them using either one of the
four feature selection strategies described in Section IV-E or
no feature selection at all. The classification-based support
vector machine, which employed a greedy forward-backward
feature selection strategy (sffs), achieved the best performance.
While the single best solver (HCMA) had a mean relERT
of 30.37, the algorithm selector (denoted as Model 1) had
a mean relERT of 16.67 (including feature costs) and 13.60
(excluding feature costs). Therefore, the selector is able to pick
an optimizer from the portfolio that solves the respective
problem on average twice as fast.
Looking at the scatterplots shown in the top row of Figure 5,
one can see that – with the exception of the 2D Rastrigin
function (FID 3, +) – Model 1 predicts, on all instances, an optimizer that performs at least as well as HCMA. Admittedly,
this comparison is not quite fair, as the ERTs shown within
the respective row do not consider the costs for computing the
landscape features. Hence, a more realistic comparison of the
selector and the SBS is shown in the second row of Figure 5.
The fact that Model 1 performs worse than HCMA on some
of the problems is negligible as the losses only occur on rather
simple – and thus, quickly solvable – problems.
The allocated budget for computing the landscape features
was 50 × d, i.e., between 100 and 500 function evaluations
depending on the problem dimension. On the other hand, the
ERT of the portfolio’s VBS lies between 4.4 and 7 371 411.
These numbers (a) explain why Model 1 performs poorly on
rather simple problems, and (b) support the thesis that the costs
for computing the landscape features only have a small impact
on the total costs when solving rather complex problems.
However, one should be aware of the fact that the number
of required function evaluations for the initial design is in the
range of the common size of initial algorithm populations, so
that in practice no additional feature costs would occur.
These findings are also visible in the upper row of Figure 6,
which compares the relERTs of HCMA and Model 1 (on a
log-scale) across all 96 BBOB problems (distinguished by
FID and dimension). For better visibility of the performance
differences, the area between the relERTs of the SBS (depicted
by •) and Model 1 (N) is highlighted in grey. One major
contribution of our selector obviously is to successfully avoid
using HCMA on the three-dimensional version of FID 4.
Moreover, one can see that the SBS mainly dominates our
selector on the separable BBOB problems (FIDs 1 to 5),
whereas our selector achieved equal or better performances
on the majority of the remaining problems.
Eight features resulted from the feature selection approach
and were included in Model 1: three features from the y-distribution feature set (the skewness, kurtosis and number
of peaks of a kernel-density estimation of the problems’
objective values, [6]), one levelset feature (the ratio of mean
misclassification errors when using an LDA and MDA, [6]),
two information content features (the maximum information
content and the settling sensitivity, [17]), one cell mapping
feature (the standard deviation of the distances between each
cell’s center and worst observation, [21]) and one of the basic
features (the best fitness value within the sample).
TABLE III
COMPARISON OF THE PREDICTED SOLVERS. EACH ROW SHOWS HOW OFTEN THE RESPECTIVE BEST SOLVER WAS PREDICTED AS FMINCON, HCMA, HMLSL OR MLSL BY THE SELECTORS (MODEL 1 / MODEL 2).

True Best Solver        Predicted Solver (Model 1 / Model 2)
Solver         #      fmincon      HCMA       HMLSL      MLSL
BSqi           6       3 / 2       3 / 2      0 / 2      0 / 0
BSrr           6       1 / 2       5 / 4      0 / 0      0 / 0
CMA-CSA        7       0 / 1       7 / 3      0 / 3      0 / 0
fmincon       12       0 / 4       8 / 4      0 / 1      4 / 3
fminunc        6       1 / 2       4 / 3      0 / 0      1 / 1
HCMA          14       0 / 3      14 / 11     0 / 0      0 / 0
HMLSL         11       3 / 3       7 / 4      0 / 0      1 / 4
IPOP400D       7       0 / 0       7 / 3      0 / 3      0 / 1
MCS            4       2 / 1       2 / 0      0 / 3      0 / 0
MLSL          12       4 / 2       8 / 6      0 / 2      0 / 2
OQNLP          6       0 / 1       6 / 2      0 / 2      0 / 1
SMAC-BBOB      5       0 / 0       5 / 2      0 / 1      0 / 2
Σ             96      14 / 21     76 / 44     0 / 17     6 / 14
C. Improving the Currently Best Algorithm Selector
Although Model 1 already performs better than the SBS,
we tried to improve the previous model by making use of
results from previous works. More precisely, we start with
the set of features that was employed by Model 1 and try
to improve that model by adding the meta-model and nearest
better clustering features. The rationale behind this approach is
that (a) these feature sets – as shown in [22] – are very useful
for characterizing a landscape’s global structure, and (b) the
feature set of Model 1 was derived from a greedy approach.
Based on the combination of the three feature sets from
above, additional feature selections were performed. Two of
the new selectors – namely the ones constructed using the
greedy sfbs-approach (mean relERT: 14.27) and the (10+50)-GA (14.24) – performed better than Model 1. The latter model
is from here on denoted as “Model 2”.
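For completeness, a compact sketch of such a (mu + lam)-GA over feature bit strings is given below (Python/NumPy). The evaluate callback, the uniform crossover operator and the survival scheme are our own assumptions; the paper only specifies the population sizes, the 5% mutation rate and the 50% crossover rate.

```python
import numpy as np

rng = np.random.default_rng(42)

def ga_feature_selection(evaluate, n_features, mu=10, lam=50,
                         generations=100, mutation_rate=0.05, crossover_rate=0.5):
    """(mu + lam)-GA over bit strings; a 1 at position k selects the k-th feature.

    evaluate(mask) is assumed to return the mean relERT of an algorithm selector
    trained on the selected features (lower is better).
    """
    pop = rng.integers(0, 2, size=(mu, n_features))
    fit = np.array([evaluate(ind) for ind in pop])
    for _ in range(generations):
        children = []
        for _ in range(lam):
            a, b = pop[rng.choice(mu, size=2, replace=False)]
            child = np.where(rng.random(n_features) < crossover_rate, a, b)  # uniform crossover
            flips = rng.random(n_features) < mutation_rate                   # random bit flips
            children.append(np.where(flips, 1 - child, child))
        pop = np.vstack([pop, children])
        fit = np.concatenate([fit, [evaluate(c) for c in children]])
        survivors = np.argsort(fit)[:mu]                                     # (mu + lam) survival
        pop, fit = pop[survivors], fit[survivors]
    return pop[0], fit[0]
```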
As displayed in Table I, Model 2 not only has the best
overall performance, but also performs best on the multimodal
functions with a weak global structure (FIDs 20 to 24). This
effect is also visible in the bottom row of Figure 5, where the
problems of the respective BBOB group (colored in pink) are
located either in the left half of the respective plots (for 2D
and 3D) or on the diagonal itself (5D and 10D).
Also, Figures 5 and 6 reveal that Model 2 differs more often
from the SBS than Model 1. This finding is also supported by
Table III, which summarizes how often each of the solvers
was predicted and how often it was the best solver. Although
HCMA is only in roughly 15% of the problems (14 of 96)
the best choice, Model 1 selects it in approximately 80% and
Model 2 in 46%. Interestingly, the two models only predict
three and four different solvers, respectively.
The different behavior of the two selectors is likely caused
by the differences in the used features. While three of the features are identical for both selectors – the previously described
angle and levelset features, as well as the skewness of the
objective values – the remaining six features of Model 2 are
meta-model and nearest better clustering (NBC) features. The
two meta-model features measure (1) the smallest absolute,
non-intercept coefficient of a linear model (without interactions) and (2) the adjusted model fit (R^2_adj) of a quadratic
model (without interactions) [6]. The remaining four NBC
features measure (1) the ratio of the standard deviations of
the distances among the nearest neighbours and the nearest
better neighbours, (2) the ratio of the respective arithmetic
means, (3) the correlation between the distances of the nearest
neighbours and the nearest better neighbours, and (4) the socalled indegree, i.e., the correlation between fitness value and
the count of observations to whom the current observation is
the nearest better neighbour [22].
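The sketch below (Python, using NumPy and SciPy) illustrates one loose reading of these four NBC quantities on a given sample; it is not the flacco implementation, and details such as the handling of the best point or the direction of the ratios are our own assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def nbc_features(X, y):
    """Rough sketch of the four nearest-better-clustering features described above.

    X: (n, d) sample of decision vectors, y: (n,) fitness values (minimization).
    """
    D = cdist(X, X)
    np.fill_diagonal(D, np.inf)
    nn_dist = D.min(axis=1)                    # distance to the nearest neighbour
    nb_dist = np.full(len(X), np.nan)
    nb_index = np.full(len(X), -1)
    for i in range(len(X)):
        better = np.where(y < y[i])[0]         # strictly better observations
        if better.size == 0:                   # the best point has no better neighbour
            continue
        j = better[np.argmin(D[i, better])]
        nb_dist[i], nb_index[i] = D[i, j], j
    valid = ~np.isnan(nb_dist)
    indegree = np.bincount(nb_index[valid], minlength=len(X))
    return {
        "nb_ratio_sd": np.std(nn_dist) / np.std(nb_dist[valid]),
        "nb_ratio_mean": np.mean(nn_dist) / np.mean(nb_dist[valid]),
        "nb_dist_cor": np.corrcoef(nn_dist[valid], nb_dist[valid])[0, 1],
        "nb_indegree_cor": np.corrcoef(y, indegree)[0, 1],
    }
```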
VI. CONCLUSION
We showed that sophisticated machine learning techniques
combined with informative exploratory landscape features
form a powerful tool for automatically constructing algorithm
selection models for unseen optimization problems. In our
setting of continuous black-box optimization, we were able
to reduce the mean relative ERT of the single best solver
of our portfolio by half, relying only on a quite small set of
ELA features and function evaluations. A specific R-package
enhanced by a user-friendly graphical user interface allows for
transferring the presented methodology to other (continuous)
optimization scenarios differing from the BBOB setting.
Of course, the quality and usability of such models heavily
relies on the quality of the algorithm benchmark as well as on
the representativeness of the included optimization problems.
While we considered the well-known and commonly accepted
BBOB workshop setting, we are aware of possible shortcomings and will extend our research to include
other comprehensive benchmarks, once available, as well as practical
applications.
Finally, we are working on extending the methodological approach
to constrained and multi-objective optimization problems in
order to increase the applicability to real-world scenarios.
ACKNOWLEDGMENT
The authors acknowledge support from the European Research Center for Information Systems14 (ERCIS).
14 https://www.ercis.org/
Fig. 5. Scatterplots of the ERT-pairs (shown on a log-scale) of the SBS (i.e., HCMA) and Model 1 without considering the costs for computing the features (top row), Model 1 including the feature costs (middle), and Model 2 including the feature costs (bottom).
REFERENCES
[1] J. R. Rice, “The Algorithm Selection Problem,” Advances in Computers,
vol. 15, pp. 65 – 118, 1975.
[2] B. Bischl, O. Mersmann, H. Trautmann, and M. Preuss, “Algorithm
Selection Based on Exploratory Landscape Analysis and Cost-Sensitive
Learning,” in Proceedings of the 14th Annual Conference on Genetic
and Evolutionary Computation (GECCO). ACM, 2012, pp. 313 – 320.
[3] M. A. Muñoz Acosta, Y. Sun, M. Kirley, and S. K. Halgamuge,
“Algorithm Selection for Black-Box Continuous Optimization Problems:
A Survey on Methods and Challenges,” Information Sciences, vol. 317,
pp. 224 – 245, 2015. [Online]. Available: http://www.sciencedirect.
com/science/article/pii/S0020025515003680
[4] L. Kotthoff, P. Kerschke, H. H. Hoos, and H. Trautmann, “Improving the
State of the Art in Inexact TSP Solving Using Per-Instance Algorithm
Selection,” in Proceedings of 9th International Conference on Learning
and Intelligent Optimization (LION), ser. Lecture Notes in Computer
Science, C. Dhaenens, L. Jourdan, and M.-E. Marmion, Eds., vol. 8994.
Springer, January 2015, pp. 202 – 217.
[5] P. Kerschke, L. Kotthoff, J. Bossek, H. H. Hoos, and H. Trautmann,
“Leveraging TSP Solver Complementarity through Machine Learning,”
Evolutionary Computation Journal, (accepted).
[6] O. Mersmann, B. Bischl, H. Trautmann, M. Preuss, C. Weihs, and
G. Rudolph, “Exploratory Landscape Analysis,” in Proceedings of the
[7]
[8]
[9]
[10]
[11]
[12]
13th Annual Conference on Genetic and Evolutionary Computation
(GECCO). ACM, 2011, pp. 829 – 836.
O. Mersmann, B. Bischl, H. Trautmann, M. Wagner, J. Bossek, and
F. Neumann, “A Novel Feature-Based Approach to Characterize Algorithm Performance for the Traveling Salesperson Problem,” Annals of
Mathematics and Artificial Intelligence, vol. 69, pp. 151 – 182, October
2013.
F. Hutter, L. Xu, H. H. Hoos, and K. Leyton-Brown, “Algorithm Runtime
Prediction: Methods & Evaluation,” Artificial Intelligence Journal, vol.
206, pp. 79 – 111, 2014.
G. Ochoa, S. Verel, F. Daolio, and M. Tomassini, “Local Optima
Networks: A New Model of Combinatorial Fitness Landscapes,” in
Recent Advances in the Theory and Application of Fitness Landscapes,
ser. Emergence, Complexity and Computation. Springer, 2014, pp.
233 – 262.
J. Pihera and N. Musliu, “Application of Machine Learning to Algorithm
Selection for TSP,” in Proceedings of the IEEE 26th International
Conference on Tools with Artificial Intelligence (ICTAI). IEEE press,
2014.
F. Daolio, A. Liefooghe, S. Verel, H. Aguirre, and K. Tanaka, “Problem
features vs. algorithm performance on rugged multi-objective combinatorial fitness landscapes,” Evolutionary Computation, 2016.
T. Jones and S. Forrest, “Fitness Distance Correlation as a Measure
of Problem Difficulty for Genetic Algorithms,” in Proceedings of the
Fig. 6. Comparison of the relERT (shown on a log-scale) of the single best solver (HCMA, depicted as •), and the best two algorithm selectors: a kernel-based SVM whose features were selected using a greedy forward-backward strategy (Model 1) and an extension of the previous model, which performed a (10 + 50)-GA feature selection on top (Model 2). The performances are shown separately per dimension and selector (top: Model 1, bottom: Model 2). For better visibility, the areas between the curves are highlighted grey and the five BBOB groups are separated from each other by vertical dot-dashed lines.
[13]
[14]
[15]
[16]
[17]
[18]
[19]
[20]
[21]
[22]
6th International Conference on Genetic Algorithms (ICGA). Morgan
Kaufmann Publishers Inc., 1995, pp. 184 – 192. [Online]. Available:
http://dl.acm.org/citation.cfm?id=657929
M. Lunacek and L. D. Whitley, “The Dispersion Metric and the CMA
Evolution Strategy,” in Proceedings of the 8th Annual Conference on
Genetic and Evolutionary Computation (GECCO). ACM, 2006, pp.
477 – 484.
K. M. Malan and A. P. Engelbrecht, “Quantifying Ruggedness of
Continuous Landscapes Using Entropy,” in Proceedings of the IEEE
Congress on Evolutionary Computation (CEC). IEEE, 2009, pp. 1440 –
1447.
C. L. Müller and I. F. Sbalzarini, “Global Characterization of the
CEC 2005 Fitness Landscapes Using Fitness-Distance Analysis,” in
Proceedings of the European Conference on the Applications of Evolutionary Computation (EvoApplications), ser. Lecture Notes in Computer
Science. Springer, April 2011, pp. 294 – 303.
T. Abell, Y. Malitsky, and K. Tierney, “Features for Exploiting BlackBox Optimization Problem Structure,” in Proceedings of 7th International Conference on Learning and Intelligent Optimization (LION),
G. Nicosia and P. Pardalos, Eds. Springer, January 2013, pp. 30 –
36.
M. A. Muñoz Acosta, M. Kirley, and S. K. Halgamuge, “Exploratory
Landscape Analysis of Continuous Space Optimization Problems Using
Information Content,” IEEE Transactions on Evolutionary Computation,
vol. 19, no. 1, pp. 74 – 87, 2015.
R. Morgan and M. Gallagher, “Analysing and Characterising Optimization Problems Using Length Scale,” Soft Computing, pp. 1 – 18, 2015.
K. M. Malan, J. F. Oberholzer, and A. P. Engelbrecht, “Characterising
Constrained Continuous Optimisation Problems,” in Proceedings of the
IEEE Congress on Evolutionary Computation (CEC). IEEE, 2015, pp.
1351–1358.
S. Shirakawa and T. Nagao, “Bag of local landscape features for fitness
landscape analysis,” Soft Computing, vol. 20, no. 10, pp. 3787–3802,
2016.
P. Kerschke, M. Preuss, C. I. Hernández Castellanos, O. Schütze,
J.-Q. Sun, C. Grimme, G. Rudolph, B. Bischl, and H. Trautmann,
“Cell Mapping Techniques for Exploratory Landscape Analysis,” in
EVOLVE – A Bridge between Probability, Set Oriented Numerics, and
Evolutionary Computation V. Springer, 2014, pp. 115 – 131.
P. Kerschke, M. Preuss, S. Wessing, and H. Trautmann, “Detecting
Funnel Structures by Means of Exploratory Landscape Analysis,” in
[23]
[24]
[25]
[26]
[27]
[28]
[29]
[30]
[31]
[32]
[33]
[34]
Proceedings of the 17th Annual Conference on Genetic and Evolutionary
Computation (GECCO). ACM, 2015, pp. 265 – 272.
——, “Low-Budget Exploratory Landscape Analysis on Multiple Peaks
Models,” in Proceedings of the 18th Annual Conference on Genetic and
Evolutionary Computation (GECCO). ACM, 2016.
G. VanRossum and The Python Development Team, The Python Language Reference – Release 3.5.0. Python Software Foundation, 2015.
MATLAB, Version 8.2.0 (R2013b). Natick, MA, USA: The MathWorks
Inc., 2013.
R Core Team, R: A Language and Environment for Statistical
Computing, R Foundation for Statistical Computing, Vienna, Austria,
2017. [Online]. Available: http://www.R-project.org/
P. Kerschke, flacco: Feature-Based Landscape Analysis of Continuous
and Constrained Optimization Problems, 2017, R-package version 1.6.
[Online]. Available: https://cran.r-project.org/package=flacco
P. Kerschke and H. Trautmann, “The R-Package FLACCO for Exploratory Landscape Analysis with Applications to Multi-Objective
Optimization Problems,” in Proceedings of the IEEE Congress on
Evolutionary Computation (CEC). IEEE, 2016.
C. Hanster and P. Kerschke, “flaccogui: Exploratory Landscape Analysis
for Everyone,” in Proceedings of the 19th Annual Conference on Genetic
and Evolutionary Computation (GECCO) Companion. ACM, 2017.
J. Bossek, smoof: Single and Multi-Objective Optimization Test
Functions, 2016, R-package version 1.4. [Online]. Available: https:
//github.com/jakobbossek/smoof
N. Hansen, A. Auger, O. Mersmann, T. Tušar, and D. Brockhoff,
“COCO: A Platform for Comparing Continuous Optimizers in a BlackBox Setting,” CoRR, vol. abs/1603.08785, 2016. [Online]. Available:
http://arxiv.org/abs/1603.08785
N. Hansen, A. Auger, S. Finck, and R. Ros, “Real-Parameter
Black-Box Optimization Benchmarking 2009: Experimental Setup,”
INRIA, Tech. Rep. RR-6828, 2009. [Online]. Available: https:
//hal.inria.fr/inria-00362649v3/document
S. Finck, N. Hansen, R. Ros, and A. Auger, “Real-Parameter
Black-Box Optimization Benchmarking 2010: Presentation of the
Noiseless Functions,” University of Applied Science Vorarlberg,
Dornbirn, Austria, Tech. Rep., 2010. [Online]. Available: http:
//coco.lri.fr/downloads/download15.03/bbobdocfunctions.pdf
B. Beachkofski and R. Grandhi, “Improved Distributed Hypercube
Sampling,” in Proceedings of the 43rd AIAA/ASME/ASCE/AHS/ASC
[35]
[36]
[37]
[38]
[39]
[40]
[41]
[42]
[43]
[44]
[45]
[46]
[47]
[48]
[49]
[50]
[51]
[52]
Structures, Structural Dynamics, and Materials Conference, AIAA paper
2002-1274. American Institute of Aeronautics and Astronautics, 2002.
S. Swarzberg, G. Seront, and H. Bersini, “S.T.E.P.: The Easiest Way to
Optimize a Function,” in Proceedings of the First IEEE Congress on
Evolutionary Computation (CEC). IEEE, June 1994, pp. 519 – 524.
R. P. Brent, Algorithms for Minimization Without Derivatives. Courier
Corporation, 2013.
P. Baudiš and P. Pošı́k, “Global Line Search Algorithm Hybridized with
Quadratic Interpolation and its Extension to Separable Functions,” in
Proceedings of the 17th Annual Conference on Genetic and Evolutionary
Computation (GECCO). New York, NY, USA: ACM, 2015, pp. 257 –
264.
P. Pošı́k and P. Baudiš, “Dimension Selection in Axis-Parallel BrentStep Method for Black-Box Optimization of Separable Continuous
Functions,” in Proceedings of the 17th Annual Conference on
Genetic and Evolutionary Computation (GECCO). New York,
NY, USA: ACM, 2015, pp. 1151 – 1158. [Online]. Available:
http://pasky.or.cz/sci/dimsel.pdf
L. Pál, “Comparison of Multistart Global Optimization Algorithms on
the BBOB Noiseless Testbed,” in Proceedings of the 15th Annual
Conference on Genetic and Evolutionary Computation (GECCO).
ACM, July 2013, pp. 1153 – 1160. [Online]. Available: http:
//coco.gforge.inria.fr/lib/exe/fetch.php?media=pdf2013:w0303-pal.pdf
A. H. G. Rinnooy Kan and G. T. Timmer, “Stochastic Global Optimization Methods Part II: Multi Level Methods,” Mathematical Programming, vol. 39, no. 1, pp. 57–78, 1987.
C. G. Broyden, “The Convergence of a Class of Double-Rank Minimization Algorithms 2. The New Algorithm,” Journal of Applied
Mathematics, vol. 6, no. 3, pp. 222 – 231, 1970.
W. Huyer and A. Neumaier, “Benchmarking of MCS on the Noiseless
Function Testbed,” in Proceedings of the 11th Annual Conference
on Genetic and Evolutionary Computation (GECCO). ACM, July
2009. [Online]. Available: http://www.mat.univie.ac.at/%7Eneum/ms/
mcs exact.pdf
N. Hansen and A. Ostermeier, “Completely Derandomized SelfAdaptation in Evolution Strategies,” Evolutionary Computation Journal,
vol. 9, no. 2, pp. 159 – 195, 2001.
A. Atamna, “Benchmarking IPOP-CMA-ES-TPA and IPOP-CMAES-MSR on the BBOB Noiseless Testbed,” in Proceedings of the
17th Annual Conference on Genetic and Evolutionary Computation
(GECCO). New York, NY, USA: ACM, 2015, pp. 1135 – 1142.
[Online]. Available: https://www.lri.fr/%7Eatamna/atamna BBOB 15.
pdf
S. van Rijn, H. Wang, M. van Leeuwen, and T. Bäck, “Evolving the
Structure of Evolution Strategies,” in Proceedings of the IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, December
2016.
A. Auger, D. Brockhoff, and N. Hansen, “Benchmarking the Local
Metamodel CMA-ES on the Noiseless BBOB’2013 Test Bed,”
in Proceedings of the 15th Annual Conference on Genetic and
Evolutionary Computation (GECCO). ACM, July 2013, pp. 1225 –
1232. [Online]. Available: http://coco.gforge.inria.fr/lib/exe/fetch.php?
media=pdf2013:w0313-auger.pdf
A. Auger and N. Hansen, “A Restart CMA Evolution Strategy with
Increasing Population Size,” in Proceedings of the IEEE Congress on
Evolutionary Computation (CEC), vol. 2. IEEE, September 2005, pp.
1769 – 1776.
I. Loshchilov, M. Schoenauer, and M. Sebag, “Bi-Population CMA-ES
Algorithms with Surrogate Models and Line Searches,” in Proceedings
of the 15th Annual Conference on Genetic and Evolutionary
Computation (GECCO). ACM, July 2013, pp. 1177 – 1184.
[Online]. Available: http://coco.gforge.inria.fr/lib/exe/fetch.php?media=
pdf2013:w0306-loshchilov.pdf
N. Hansen, “Benchmarking a BI-Population CMA-ES on the BBOB2009 Function Testbed,” in Proceedings of the 11th Annual Conference
on Genetic and Evolutionary Computation (GECCO) Companion: Late
Breaking Papers. ACM, July 2009, pp. 2389 – 2396.
I. Loshchilov, M. Schoenauer, and M. Sebag, “Intensive Surrogate Model
Exploitation in Self-adaptive Surrogate-assisted CMA-ES (saACM-ES),”
in Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO). ACM, July 2013, pp. 439 – 446.
M. Powell, “The NEWUOA Software for Unconstrained Optimization
Without Derivatives,” Large-scale Nonlinear Optimization, pp. 255 –
297, 2006.
F. Hutter, H. H. Hoos, and K. Leyton-Brown, “Sequential Model-Based
Optimization for General Algorithm Configuration,” in Proceedings of
5th International Conference on Learning and Intelligent Optimization
[53]
[54]
[55]
[56]
[57]
[58]
[59]
[60]
[61]
[62]
[63]
[64]
(LION), C. A. Coello Coello, Ed. Springer, January 2011, pp. 507 –
523.
F. Hutter, H. Hoos, and K. Leyton-Brown, “An Evaluation of Sequential
Model-Based Optimization for Expensive Blackbox Functions,” in
Proceedings of the 15th Annual Conference on Genetic and
Evolutionary Computation (GECCO). ACM, July 2013, pp. 1209 –
1216. [Online]. Available: http://coco.gforge.inria.fr/lib/exe/fetch.php?
media=pdf2013:w0311-hutter.pdf
C. E. Rasmussen and C. K. Williams, Gaussian Processes for Machine
Learning. MIT press Cambridge, 2006, vol. 1.
D. R. Jones, C. D. Perttunen, and B. E. Stuckman, “Lipschitzian
Optimization without the Lipschitz Constant,” Journal of Optimization
Theory and Applications, vol. 79, no. 1, pp. 157 – 181, 1993.
Z. Ugray, L. Lasdon, J. Plummer, F. Glover, J. Kelly, and R. Martı́,
“Scatter Search and Local NLP Solvers: A Multistart Framework for
Global Optimization,” INFORMS Journal on Computing, vol. 19, no. 3,
pp. 328 – 340, 2007.
M. Laguna and R. Martı́, “The OptQuest Callable Library,” in Optimization Software Class Libraries. Springer, 2003, pp. 193 – 218.
B. Bischl, M. Lang, L. Kotthoff, J. Schiffner, J. Richter, Z. M. Jones,
and G. Casalicchio, mlr: Machine Learning in R, 2016, R-package
version 2.9. [Online]. Available: https://github.com/mlr-org/mlr
T. Therneau, B. Atkinson, and B. Ripley, rpart: Recursive
Partitioning and Regression Trees, 2017, r package version 4.1-11.
[Online]. Available: https://CRAN.R-project.org/package=rpart
A. Karatzoglou, A. Smola, K. Hornik, and A. Zeileis, “kernlab
– An S4 Package for Kernel Methods in R,” Journal of Statistical
Software, vol. 11, no. 9, pp. 1 – 20, 2004. [Online]. Available:
http://www.jstatsoft.org/v11/i09/
A. Liaw and M. Wiener, “Classification and Regression by
randomForest,” R News, vol. 2, no. 3, pp. 18 – 22, 2002.
[Online]. Available: http://CRAN.R-project.org/doc/Rnews/
T. Chen, T. He, M. Benesty, V. Khotilovich, and Y. Tang, xgboost:
Extreme Gradient Boosting, 2017, R-package version 0.6-4. [Online].
Available: https://CRAN.R-project.org/package=xgboost
S original by Trevor Hastie and Robert Tibshirani. Original R port by
Friedrich Leisch and Kurt Hornik and Brian D. Ripley., mda: Mixture
and Flexible Discriminant Analysis, 2016, R-package version 0.4-9.
[Online]. Available: https://CRAN.R-project.org/package=mda
B. Bischl, P. Kerschke, L. Kotthoff, T. M. Lindauer, Y. Malitsky,
A. Fréchette, H. H. Hoos, F. Hutter, K. Leyton-Brown, K. Tierney,
and J. Vanschoren, “ASlib: A Benchmark Library for Algorithm
Selection,” Artificial Intelligence Journal, vol. 237, pp. 41 – 58,
2016. [Online]. Available: http://www.sciencedirect.com/science/article/
pii/S0004370216300388
Pascal Kerschke (http://erc.is/p/kerschke) is a research associate at the group of Information Systems
and Statistics at the University of Muenster, Germany. Prior to his doctoral studies, he has received
a B.Sc. degree in Data Analysis and Management
(2010) and a M.Sc. degree in Data Sciences (2013)
from the TU Dortmund University, Germany.
His current research interests are algorithm selection (for continuous and TSP problems), as well
as Exploratory Landscape Analysis for single- and
multi-objective (Black-Box) optimization problems.
Heike Trautmann (http://erc.is/p/trautmann) is head
of the Information Systems and Statistics group at
the University of Muenster, Germany and a director
of the European Research Center for Information
Systems (ERCIS).
Her current research activities are focused on
algorithm selection and benchmarking concepts, data
science, multi-objective (evolutionary), as well as
combinatorial optimization.
| 8 |
A Decentralized Framework for Real-Time Energy Trading in
Distribution Networks with Load and Generation Uncertainty
arXiv:1705.02575v1 [] 7 May 2017
Shahab Bahrami and M. Hadi Amini
email: [email protected], [email protected]
Abstract—The proliferation of small-scale renewable generators and price-responsive loads makes it a challenge for distribution network operators (DNOs) to schedule the controllable loads
of the load aggregators and the generation of the generators
in real-time. Additionally, the high computational burden and
violation of the entities’ (i.e., load aggregators’ and generators’)
privacy make a centralized framework impractical. In this paper,
we propose a decentralized energy trading algorithm that can be
executed by the entities in a real-time fashion. To address the
privacy issues, the DNO provides the entities with proper control
signals using the Lagrange relaxation technique to motivate them
towards an operating point with maximum profit for entities.
To deal with uncertainty issues, we propose a probabilistic load
model and robust framework for renewable generation. The
performance of the proposed algorithm is evaluated on an IEEE
123-node test feeder. When compared with a benchmark of not
performing load management for the aggregators, the proposed
algorithm benefits both the load aggregators and generators
by increasing their profit by 17.8% and 10.3%, respectively.
When compared with a centralized approach, our algorithm
converges to the solution of the DNO's centralized problem within 50 iterations per time slot, with a significantly lower running time.
Keywords: price-responsive load, generation uncertainty, distributed algorithm, trading market.
I. INTRODUCTION
One goal of the emerging smart grid is to move distribution
systems towards a smarter and more secure network through
integrating two-way communication infrastructure. The information exchange provides distribution network operators
(DNOs) with sophisticated management and monitoring systems to perform complex analyses and automated operations
in near real-time. Furthermore, drivers such as distribution
organizations have accelerated the expansion of applications
for smart grid technologies, such as smart meters, and integration of renewable energy generators. Resulting benefits include
a more efficient use of electric appliances in households to
reduce the energy bill payment for the load aggregators and
lower operation cost for the generators, as well as a higher
flexibility for the DNO to enhance the system’s technical
operation; thereby reaching a triple-win condition.
The DNO is responsible for optimal power flow (OPF)
analysis. There are challenges in solving the OPF problem
by the DNO. First, the OPF can be computationally difficult to solve, especially when the number of decision variables increases with the participation of the price-responsive load aggregators in the energy market. Second, a centralized solution may violate the entities' privacy, as the load aggregators' demand information and the generators' cost parameters must be revealed to the DNO. Third, the DNO is uncertain about the load demand
and renewable generation ahead of time.
There have been some efforts in the literature to tackle
the above-mentioned challenges. We divide the related works
into three main threads. The first thread is concerned with
decentralized energy management programs for a market with
multiple suppliers and multiple users. Mechanisms such as the
multi-level game methods [1], Stackelberg game [2], dual
decomposition method [3], supply bidding framework [4],
and hierarchical bidding [5] have been used. However, these
approaches did not consider the constraints imposed by the
topology and operation of the distribution network. The second
thread is concerned with including the power flow equations in
the decentralized energy management procedure. To achieve
this goal, different techniques such as convex relaxation [6]–
[9], quadratic programming [10], alternating direction method
of multipliers (ADMM) [11]–[13], and Lagrange relaxation
method [14], [15] have been used. These studies, however,
did not consider the uncertainties in the renewable generators
and load demand. Furthermore, these studies mainly focused
on off-line algorithms, which are applicable in day-ahead
markets. The third thread is concerned with the online operation
of distribution systems using different mechanisms such as
real-time closed-loop control [16], differential evolution optimization [17], online gradient method [18], projected gradient
descent [19], online mirror descent [20], and graph theory-based approaches [21]. These works, however, did not discuss
how to consider the uncertainty in the load demand for users
in smart distribution networks.
In this paper, we focus on designing a distributed algorithm
for an electricity trading market with renewable energy generators and price-responsive residential load aggregators. In each
time slot (e.g., every hour), the load aggregators and generators
use the communication infrastructure in the smart grid to
exchange information with the DNO and jointly maximize
their profit, while considering the uncertainty in the future
demand and renewable generation. The privacy of each entity
is protected in the proposed framework, as the generators and
load aggregators solve their own profit maximization problem
using the locally available information. The main challenges
that we address are tackling the uncertainty in the load demand
and renewable generation, as well as determining the proper
control signals communicated between the DNO, generators,
and load aggregators that enforce the proposed distributed
algorithm to converge to the solution of the DNO’s centralized
problem with the objective of maximizing the social welfare.
The main contributions of this paper are as follows:
• Uncertainty Issues and Risk Evaluation: To address the
uncertainty in the load demand, we propose a probabilistic load estimation for the electric appliances of
the residential users served by each load aggregator. It
enables each load aggregator to schedule the electric
appliances of its users in real-time, while taking into
account the impacts of its decision on the load profile
in the upcoming time slots. We also consider an adaptive
robust decision making framework for the renewable generators to optimize the risk of power shortage based on an
adjustable confidence level. It enables the generators to
limit their cost for compensating the generation shortage.
It also enables the DNO to prevent high voltage drop
caused by the shortage in the total renewable generation.
• Distributed Algorithm Design: To protect the privacy of
the load aggregators and generators, as well as to address
the computational complexity of the DNO’s centralized
problem, we propose a decentralized algorithm that can
be executed by the entities in real-time. Each entity needs to share only limited information to meet its local
objective, while satisfying the physical constraints of the
linearized ac power flow in the distribution network.
• Performance Evaluation: Simulations are performed on
an IEEE 123-bus test feeder with 10 generators and 113
load aggregators. When compared with the benchmark of
not performing load management, the proposed algorithm
benefits the load aggregators and generators by increasing
their profit by 17.8% and 10.3% on average, respectively.
Furthermore, it helps generators to reduce the peak-to-average generation ratio by 13%. Our algorithm converges to the solution of the centralized problem with
a significantly lower execution time.
The rest of this paper is organized as follows. Section II
introduces the system model. Section III formulates the DNO’s
centralized and decentralized problems. A decentralized algorithm is proposed. Section IV provides the simulation results,
followed by Section V that concludes the paper. Appendices
A−F can be found in the supplementary document.
II. SYSTEM MODEL
Consider an electricity market with a set $\mathcal{N}$ of $N \triangleq |\mathcal{N}|$ load aggregators and a set $\mathcal{G}$ of $G \triangleq |\mathcal{G}|$ generators scattered
in a distribution network. Each load aggregator is responsible
for managing the load demand of its electricity users. Each
generator sells electricity to the market. The load aggregators
and generators use a two-way communication infrastructure
to exchange information with the DNO. The entities are also
connected to each other through the electric power distribution
network. The DNO is a neutral entity responsible for monitoring the power flow in the network. For simplicity in the
problem formulation, we assume that each bus in the network
has exactly one load aggregator or one generator. If both load
aggregator and generator are connected to the same bus, we
divide that bus into two buses connected to each other through
a line with zero impedance. If neither load aggregator nor
generator is connected to a bus, we propose to add a virtual
load aggregator with zero demand for that bus. This enables us to denote the set of buses by $\mathcal{N} \cup \mathcal{G}$ and to refer to a load aggregator or a generator by its bus index. We use the notation $\mathcal{L} \subseteq (\mathcal{N} \cup \mathcal{G}) \times (\mathcal{N} \cup \mathcal{G})$ to denote the set of branches.
The overall trading horizon is denoted by $\mathcal{H} \triangleq \{1, \dots, H\}$,
where H is the number of time slots with equal length (e.g.,
each time slot is one hour). Notice that the load management
decision of a load aggregator in the current time slot affects its
demand in the upcoming time slots. Meanwhile, the generators
need to match their generation level with the changes in the
load demand. Hence, generators also need to modify their
generation for the current and upcoming time slots. To avoid
an abuse of notations, hereafter, we use index h for a time slot
in general and use index t specifically for the current time slot.
The general idea of this paper for implementing a real-time
energy trading can be summarized as follows. At the beginning
of the current time slot t, the entities optimize the demand and
generation profiles over the period Ht = {t, . . . , H} ⊆ H, but
apply only the obtained decision for the current time slot t.
The scheduling procedure is performed with uncertainty about
the load demand and renewable generation in the upcoming
time slots h > t. Hence, the entities repeat the optimization
procedure at the beginning of the next time slot to update
their scheduling decision with the revealed demand/generation
information. We aim to answer two key questions:
Q.1 How do the entities interact with the DNO to determine
their optimal load and generation profiles in the current time
slot with the locally available information?
Q.2 How do the entities address the lack of information about
the demand and generation in the upcoming time slots?
A. Load Aggregator’s Model
In this subsection, we address questions Q.1 and Q.2 for residential load aggregators by modeling the electric appliances
and providing a probabilistic load estimation technique.
1) Users’ Appliances Model: Load aggregator i ∈ N is
responsible for scheduling its users’ electric appliances. An
electric appliance is either asleep or awake in the current time
slot $t$. Let $\mathcal{A}_i^{\text{asleep}}(t)$ and $\mathcal{A}_i^{\text{awake}}(t)$ denote the sets of asleep and awake appliances in the current time slot $t$, respectively. An awake appliance $a \in \mathcal{A}_i^{\text{awake}}(t)$ is available to be scheduled for operation, i.e., the load aggregator schedules the power consumption profile $\mathbf{e}_{a,i}(t) = (e_{a,i}(h),\ h \in \mathcal{H}_t)$.
The awake appliance $a \in \mathcal{A}_i^{\text{awake}}(t)$ provides the load aggregator $i$ with its scheduling horizon, utility function, and type
using the smart meter inside the household. The scheduling
horizon Ha,i ⊆ Ht defines the time interval over the upcoming
time slots, in which the appliance should be scheduled. The
utility function Ua,i (ea,i (t)) is used to model the satisfaction
of the customer in monetary units from using the appliance.
It is generally an increasing and concave function of the
consumption profile ea,i (t). The type of appliance depends
on its specifications and the customer’s preferences. Inspired
by the work in [22], we consider three types of appliances:
• The appliance a with type 1 should be operated within
the scheduling horizon Ha,i and turned off in other time slots.
Examples include the electric vehicle (EV) and dish washer.
Let $\mathcal{A}_i^1(t) \subseteq \mathcal{A}_i^{\text{awake}}(t)$ denote the set of appliances with type 1 that are awake in the current time slot $t$. We have
\begin{align}
& e_{a,i}(h) = 0, && a \in \mathcal{A}_i^1(t),\ i \in \mathcal{N},\ h \notin \mathcal{H}_{a,i}, \tag{1a}\\
& e_{a,i}^{\min}(h) \le e_{a,i}(h) \le e_{a,i}^{\max}(h), && a \in \mathcal{A}_i^1(t),\ i \in \mathcal{N},\ h \in \mathcal{H}_{a,i}, \tag{1b}\\
& E_{a,i}^{\min} \le \textstyle\sum_{h \in \mathcal{H}_{a,i}} e_{a,i}(h) \le E_{a,i}^{\max}, && a \in \mathcal{A}_i^1(t),\ i \in \mathcal{N}. \tag{1c}
\end{align}
The utility obtained from using a type 1 appliance depends on the total power consumption. The utility can be expressed as $U_{a,i}(\mathbf{e}_{a,i}(t)) = U_{a,i}\big(\sum_{h \in \mathcal{H}_{a,i}} e_{a,i}(h)\big)$, e.g.,
\begin{equation*}
U_{a,i}\Big(\textstyle\sum_{h \in \mathcal{H}_{a,i}} e_{a,i}(h)\Big) = \kappa_{a,i}\, f\Big(\textstyle\sum_{h \in \mathcal{H}_{a,i}} e_{a,i}(h) - E_{a,i}^{\min}\Big)
\end{equation*}
with a concave function $f(\cdot)$ and nonnegative constant $\kappa_{a,i}$.
• The appliances of type 2 can be operated in time slots
out of the scheduling horizon, but the customer attains a
relatively low utility, e.g., TV and personal computer. Let
$\mathcal{A}_i^2(t) \subseteq \mathcal{A}_i^{\text{awake}}(t)$ denote the set of appliances of type 2 that are awake in the current time slot $t$. We have
\begin{align}
& e_{a,i}(h) \ge 0, && a \in \mathcal{A}_i^2(t),\ i \in \mathcal{N},\ h \notin \mathcal{H}_{a,i}, \tag{2a}\\
& e_{a,i}^{\min}(h) \le e_{a,i}(h) \le e_{a,i}^{\max}(h), && a \in \mathcal{A}_i^2(t),\ i \in \mathcal{N},\ h \in \mathcal{H}_{a,i}, \tag{2b}\\
& E_{a,i}^{\min} \le \textstyle\sum_{h \in \mathcal{H}_{a,i}} e_{a,i}(h) \le E_{a,i}^{\max}, && a \in \mathcal{A}_i^2(t),\ i \in \mathcal{N}. \tag{2c}
\end{align}
The utility function for type 2 appliances depends on both
the amount of power consumption and the time of consuming
the power, i.e., the customer would gain different benefits
from consuming the same amount of power at different
times, e.g., watching
the favorite TV program. We have $U_{a,i}(\mathbf{e}_{a,i}(t)) = \sum_{h \in \mathcal{H}_t} U_{a,i}(e_{a,i}(h), h)$. As a concrete example, the utility function
\begin{equation*}
U_{a,i}(\mathbf{e}_{a,i}) = \textstyle\sum_{h \in \mathcal{H}_{a,i}} \kappa_{a,i}(h)\, f\big(e_{a,i}(h) - e_{a,i}^{\min}\big) + \sum_{h \notin \mathcal{H}_{a,i}} \kappa'_{a,i}(h)\, f\big(e_{a,i}(h)\big)
\end{equation*}
with a concave function $f(\cdot)$ and time-dependent nonnegative coefficients $\kappa_{a,i}(h)$ and $\kappa'_{a,i}(h)$, $\kappa'_{a,i}(h) \ll \kappa_{a,i}(h)$, is a viable candidate.
• The appliances of type 3 can be operated out of the
scheduling horizon without any constraint on their total power
consumption, such as lighting and refrigerators. Let $\mathcal{A}_i^3(t) \subseteq \mathcal{A}_i^{\text{awake}}(t)$ denote the set of appliances of type 3 that are awake in the current time slot $t$. We have
\begin{align}
& e_{a,i}(h) \ge 0, && a \in \mathcal{A}_i^3(t),\ i \in \mathcal{N},\ h \notin \mathcal{H}_{a,i}, \tag{3a}\\
& e_{a,i}^{\min}(h) \le e_{a,i}(h) \le e_{a,i}^{\max}(h), && a \in \mathcal{A}_i^3(t),\ i \in \mathcal{N},\ h \in \mathcal{H}_{a,i}. \tag{3b}
\end{align}
The utility $U_{a,i}(\mathbf{e}_{a,i})$ attained by the customer from using an appliance of type 3 depends on the amount of power consumption $\mathbf{e}_{a,i}(t)$ within the scheduling horizon $\mathcal{H}_{a,i}$, but not on the time of consumption. The customer attains a relatively low utility outside interval $\mathcal{H}_{a,i}$. The function
\begin{equation*}
U_{a,i}(\mathbf{e}_{a,i}) = \textstyle\sum_{h \in \mathcal{H}_{a,i}} \kappa_{a,i}\, f\big(e_{a,i}(h) - e_{a,i}^{\min}\big) + \sum_{h \notin \mathcal{H}_{a,i}} \kappa'_{a,i}\, f\big(e_{a,i}(h)\big)
\end{equation*}
with a concave function $f(\cdot)$ and nonnegative constants $\kappa_{a,i}$ and $\kappa'_{a,i}$, $\kappa'_{a,i} \ll \kappa_{a,i}$, is a viable candidate.
The total utility of load aggregator i in the current time slot
$t$ with decision vector $\mathbf{e}_i(t) = (\mathbf{e}_{a,i}(t),\ a \in \mathcal{A}_i^{\text{awake}}(t))$ is
\begin{equation}
U_i(\mathbf{e}_i(t)) = \textstyle\sum_{a \in \mathcal{A}_i^{\text{awake}}(t)} U_{a,i}(\mathbf{e}_{a,i}(t)), \qquad i \in \mathcal{N}. \tag{4}
\end{equation}
2) Load Estimation: The actual wake-up times of the
appliances are not available to the load aggregator in advance.
To address this lack of information, load aggregator i can
collect the sleep-awake historical data record of each appliance
and estimate the probability pa,i (h) that each appliance a ∈ Ai
becomes awake at each time slot h ∈ H. In appendix A, we
show the conditional probability pa,i (h | t) that the appliance
becomes awake in an upcoming time slot h > t, given that it
has not become awake until the current time slot, t, is
\begin{equation}
p_{a,i}(h \mid t) = \frac{p_{a,i}(h)}{1 - \sum_{h'=1}^{t} p_{a,i}(h')}. \tag{5}
\end{equation}
A load aggregator has no information about the scheduling
horizon, users’ utility, and type of the appliances ahead of
time. For decision making at the current time slot t, we consider the worst-case scenario, in which the electric appliances
that become awake in the upcoming time slots h > t should
be operated once they become awake without any control
on power consumption, i.e., $e_{a,i}^{\min}(h) = e_{a,i}^{\max}(h) = e_{a,i}^{\text{nom}}$ and $E_{a,i}^{\min} = E_{a,i}^{\max} = E_{a,i}^{\text{nom}}$. The payment of the load aggregator in the worst-case scenario is an upper bound for its actual payment. Hence, minimizing the worst-case payment implies reducing the risk of a high payment. For the current time slot $t$, the worst-case expected electric demand $l_i^{\text{asleep}}(h)$ of the currently sleeping appliances in an upcoming time slot $h > t$ is
\begin{equation}
l_i^{\text{asleep}}(h) = \sum_{a \in \mathcal{A}_i^{\text{asleep}}(t)} e_{a,i}^{\text{nom}}(h) \Bigg[ \sum_{h' = \max\{t+1,\, h - T_a + 1\}}^{h} p_{a,i}(h' \mid t) \Bigg], \tag{6}
\end{equation}
where parameter $T_a = E_{a,i}^{\text{nom}} / e_{a,i}^{\text{nom}}$ is the operation duration of the appliance $a \in \mathcal{A}_i^{\text{asleep}}(t)$ that becomes awake in upcoming time slot $h > t$. The value of $\sum_{h' = \max\{t+1,\, h - T_a + 1\}}^{h} p_{a,i}(h' \mid t)$ is equal to the probability that a currently sleeping appliance $a$ is operating in the upcoming time slot $h > t$.
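As an illustration, the following minimal Python sketch evaluates the conditional wake-up probabilities in (5) and the worst-case expected demand of a single sleeping appliance in (6); the probability table, nominal consumption, and operation duration are hypothetical placeholders, and time slots are 0-indexed for simplicity.

```python
import numpy as np

def conditional_wake_prob(p, t):
    """Conditional wake-up probabilities p_{a,i}(h | t) as in (5).

    p : 1-D float array, p[h] is the estimated probability that the appliance
        wakes up in slot h; t is the index of the current slot.
    """
    p = np.asarray(p, dtype=float)
    denom = 1.0 - p[:t + 1].sum()          # 1 - sum of wake probabilities up to slot t
    cond = np.zeros_like(p)
    cond[t + 1:] = p[t + 1:] / denom       # defined only for upcoming slots h > t
    return cond

def worst_case_asleep_demand(p, e_nom, T_a, t, H):
    """Worst-case expected demand of one sleeping appliance, as in (6)."""
    cond = conditional_wake_prob(p, t)
    l = np.zeros(H)
    for h in range(t + 1, H):
        lo = max(t + 1, h - T_a + 1)       # appliance is still running if it woke in [lo, h]
        l[h] = e_nom * cond[lo:h + 1].sum()
    return l
```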
For the given current time slot t, we use the notation
$\mathbf{l}_i(t) = (l_i(h),\ h \in \mathcal{H}_t)$ to denote the profile of active power consumption of the users during time interval $\mathcal{H}_t$. We have
\begin{equation}
l_i(h) = l_i^{\text{asleep}}(h) + \textstyle\sum_{a \in \mathcal{A}_i^{\text{awake}}(t)} e_{a,i}(h), \qquad h \in \mathcal{H}_t. \tag{7}
\end{equation}
To model the reactive power consumption of load aggregator $i$, we consider a constant power factor $\mathrm{PF}_i$. The reactive power for load aggregator $i$ in time $h$ is $q_i(h) = l_i(h)\sqrt{(1-\mathrm{PF}_i^2)/\mathrm{PF}_i^2}$.
3) Local Optimization Problem: Constraints (1a)−(7) define the feasible set Ei (t) for decision vector ei (t) of load
aggregator i in the current time slot t. Load aggregator i aims
to maximize the profit $\pi_i^{\text{agg}}(\mathbf{e}_i(t))$, which includes the total utility in (4) minus the payment to the DNO over period $\mathcal{H}_t$. The DNO provides load aggregator $i$ with the price $\rho_i(h)$ for a unit of active power in each time slot $h$. We assume that load aggregators do not pay for the reactive power. We have
\begin{equation}
\pi_i^{\text{agg}}(\mathbf{e}_i(t)) = U_i(\mathbf{e}_i(t)) - \textstyle\sum_{h \in \mathcal{H}_t} l_i(h)\,\rho_i(h), \qquad i \in \mathcal{N}. \tag{8}
\end{equation}
Load aggregator $i$ solves the following optimization problem in time slot $t$ to determine decision vector $\mathbf{e}_i(t)$:
\begin{align}
\underset{\mathbf{e}_i(t)}{\text{maximize}} \quad & \pi_i^{\text{agg}}(\mathbf{e}_i(t)) \tag{9a}\\
\text{subject to} \quad & \mathbf{e}_i(t) \in \mathcal{E}_i(t). \tag{9b}
\end{align}
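For concreteness, the following minimal sketch solves a toy instance of the aggregator's local problem (9) for a single type-1 appliance using the CVXPY modeling package; the horizon length, prices, scheduling window, and utility parameters are hypothetical placeholders, not values from this paper.

```python
import cvxpy as cp
import numpy as np

H_t = 12                                    # remaining slots in the horizon (hypothetical)
rho = np.random.uniform(20, 60, H_t)        # hypothetical prices rho_i(h) from the DNO
l_asleep = np.random.uniform(0, 1, H_t)     # worst-case demand of sleeping appliances, eq. (6)

e = cp.Variable(H_t, nonneg=True)           # consumption profile of one type-1 appliance
window = np.zeros(H_t)
window[2:8] = 1                             # hypothetical scheduling horizon H_{a,i}
e_max, E_min, E_max, kappa = 3.0, 4.0, 10.0, 50.0

constraints = [e <= e_max * window,         # (1a)-(1b): zero outside the window, bounded inside
               cp.sum(e) >= E_min,          # (1c)
               cp.sum(e) <= E_max]

utility = kappa * cp.sqrt(cp.sum(e) - E_min + 1e-6)        # concave f(.) in the type-1 utility
payment = cp.sum(cp.multiply(rho, l_asleep + e))           # payment on the total load, (7)-(8)
prob = cp.Problem(cp.Maximize(utility - payment), constraints)
prob.solve()
print(e.value)
```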
B. Generator’s Model
In this subsection, we address questions Q.1 and Q.2 for the
generators by modeling the conventional and renewable units
and providing a robust optimization technique for renewables.
1) Conventional Unit: In general, the generation cost function of the conventional unit of generator j ∈ G in time
slot $h$ is an increasing convex function of $p_j^{\text{conv}}(h)$ [23]. The class of quadratic functions $C_j(p_j^{\text{conv}}(h)) = a_{j2}\,(p_j^{\text{conv}}(h))^2 + a_{j1}\,p_j^{\text{conv}}(h) + a_{j0}$ is well known. For the given current time slot $t$, generator $j$ offers the profiles of active and reactive powers $\mathbf{p}_j^{\text{conv}}(t) = (p_j^{\text{conv}}(h),\ h \in \mathcal{H}_t)$ and $\mathbf{q}_j^{\text{conv}}(t) = (q_j^{\text{conv}}(h),\ h \in \mathcal{H}_t)$ for the current and upcoming time slots.
2) Renewable Unit: Without loss of generality, we assume
that the renewable plants are operated at unity power factor.
Given the current time slot $t$, generator $j$ with a renewable unit offers an active power profile $\mathbf{p}_j^{\text{ren}}(t) = (p_j^{\text{ren}}(h),\ h \in \mathcal{H}_t)$ for its renewable unit. To prevent non-credible high renewable generation offers, the DNO charges generator $j$ by the unit price $\beta_j(h)$ (\$/MW) for generation shortage in time slot $h$. To cope with the uncertainty issues, we consider a robust decision making framework for generators with renewable units. Generator $j$ can use its historical data record to forecast an uncertainty bound $[p_j^{\min,\text{ren}}(h),\ p_j^{\max,\text{ren}}(h)]$ for its actual renewable generation in time slot $h \in \mathcal{H}$. Generator $j$ considers the cost $\Gamma_j(\mathbf{p}_j^{\text{ren}}(t))$ of the worst-case scenario for generation shortage as follows:
\begin{equation}
\Gamma_j(\mathbf{p}_j^{\text{ren}}(t)) \triangleq \textstyle\sum_{h \in \mathcal{H}_t} \beta_j(h)\,\big(p_j^{\text{ren}}(h) - p_j^{\min,\text{ren}}(h)\big). \tag{10}
\end{equation}
The feasible set for the renewable generation profile $\mathbf{p}_j^{\text{ren}}(t)$ can be defined based on all scenarios that satisfy $p_j^{\text{ren}}(h) \in [p_j^{\min,\text{ren}}(h),\ p_j^{\max,\text{ren}}(h)]$. However, it is very conservative and possibly inefficient to take into account all possible scenarios. Inspired by the work in [24], we consider an adaptive robust model. In the current time slot $t$, the uncertainty space for the generation profile $\mathbf{p}_j^{\text{ren}}(t)$ in the time interval $\mathcal{H}_t$ is defined as
\begin{equation}
\mathcal{P}_j^{\text{ren}}(t) = \bigg\{ \mathbf{p}_j^{\text{ren}}(t) \;\Big|\; p_j^{\text{ren}}(h) \in [p_j^{\min,\text{ren}}(h),\ p_j^{\max,\text{ren}}(h)],\ h \in \mathcal{H}_t,\ \sum_{h \in \mathcal{H}_t} \frac{p_j^{\max,\text{ren}}(h) - p_j^{\text{ren}}(h)}{p_j^{\max,\text{ren}}(h) - p_j^{\min,\text{ren}}(h)} \le \Delta_j(t) \bigg\}, \tag{11}
\end{equation}
where $0 \le \Delta_j(t) \le |\mathcal{H}_t|$ is the confidence level parameter for generator $j$ in the current time slot $t$. The space defined in (11) is a singleton, corresponding to the least-conservative scenario $p_j^{\text{ren}}(h) = p_j^{\max,\text{ren}}(h),\ h \in \mathcal{H}_t$, when $\Delta_j(t) = 0$. As $\Delta_j(t)$ increases, the size of the uncertainty set enlarges, and the resulting robust solution is more conservative. The space includes all possible scenarios when $\Delta_j(t) = |\mathcal{H}_t|$. In [24], $\Delta_j(t)$ is known and fixed, whereas we consider parameter $\Delta_j(t)$ as a variable that should be optimized by generator $j$.
3) Local Optimization Problem: For a given current time
slot $t$, generator $j$ decides on the generation profile $\boldsymbol{\psi}_j(t) = (\mathbf{p}_j^{\text{conv}}(t), \mathbf{q}_j^{\text{conv}}(t), \mathbf{p}_j^{\text{ren}}(t))$ and the confidence level $\Delta_j(t)$. The objective of generator $j$ is to maximize its profit $\pi_j^{\text{gen}}(\boldsymbol{\psi}_j(t))$, which is the revenue from selling active and reactive powers minus the generation cost and the financial risk in (10). That is,
\begin{equation}
\pi_j^{\text{gen}}(\boldsymbol{\psi}_j(t)) = \sum_{h \in \mathcal{H}_t} \Big( \big(p_j^{\text{conv}}(h) + p_j^{\text{ren}}(h)\big)\rho_j(h) + q_j^{\text{conv}}(h)\,\varrho_j(h) - C_j(p_j^{\text{conv}}(h)) \Big) - \Gamma_j(\mathbf{p}_j^{\text{ren}}(t)). \tag{12}
\end{equation}
The problem for generator $j \in \mathcal{G}$ in the current time slot $t$ is
\begin{align}
\underset{\boldsymbol{\psi}_j(t),\, \Delta_j(t)}{\text{maximize}} \quad & \pi_j^{\text{gen}}(\boldsymbol{\psi}_j(t)) \tag{13a}\\
\text{subject to} \quad & p_j^{\min,\text{conv}} \le p_j^{\text{conv}}(h) \le p_j^{\max,\text{conv}}, \quad h \in \mathcal{H}_t, \tag{13b}\\
& q_j^{\min,\text{conv}} \le q_j^{\text{conv}}(h) \le q_j^{\max,\text{conv}}, \quad h \in \mathcal{H}_t, \tag{13c}\\
& \mathbf{p}_j^{\text{ren}}(t) \in \mathcal{P}_j^{\text{ren}}(t). \tag{13d}
\end{align}
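A minimal CVXPY sketch of the generator's local problem (13), including the box and budget constraints of the uncertainty space (11), is given below; all numerical limits, prices, and penalties are hypothetical placeholders rather than values used in our simulations.

```python
import cvxpy as cp
import numpy as np

H_t = 12
rho = np.random.uniform(20, 60, H_t)        # hypothetical active-power prices
varrho = np.random.uniform(1, 5, H_t)       # hypothetical reactive-power prices
beta = np.random.uniform(5, 15, H_t)        # shortage penalties beta_j(h) from the DNO
a2, a1, a0 = 0.02, 10.0, 5.0                # quadratic cost coefficients (hypothetical)
p_min_ren = np.random.uniform(0.0, 1.0, H_t)
p_max_ren = p_min_ren + np.random.uniform(0.5, 2.0, H_t)

p_conv = cp.Variable(H_t)
q_conv = cp.Variable(H_t)
p_ren = cp.Variable(H_t)
Delta = cp.Variable(nonneg=True)            # confidence level, optimized jointly as in the text

revenue = cp.sum(cp.multiply(rho, p_conv + p_ren) + cp.multiply(varrho, q_conv))
cost = cp.sum(a2 * cp.square(p_conv) + a1 * p_conv + a0)
shortage = cp.sum(cp.multiply(beta, p_ren - p_min_ren))     # worst-case risk, eq. (10)

constraints = [p_conv >= 0, p_conv <= 30,                   # (13b), hypothetical limits
               q_conv >= -10, q_conv <= 10,                 # (13c)
               p_ren >= p_min_ren, p_ren <= p_max_ren,      # box part of (11)
               cp.sum(cp.multiply(1.0 / (p_max_ren - p_min_ren),
                                  p_max_ren - p_ren)) <= Delta,   # budget part of (11)
               Delta <= H_t]                                # 0 <= Delta <= |H_t|

prob = cp.Problem(cp.Maximize(revenue - cost - shortage), constraints)
prob.solve()
```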
C. DNO’s Model
We address Q.1 and Q.2 for the DNO in the following.
1) Linearized ac Power Flow: We consider balanced distribution networks. The model for unbalanced networks is left for future work. The ac power flow equations
are nonlinear and nonconvex. An alternative is to use a
linearized ac power flow. Let p(h) = (pb (h), b ∈ N ∪ G) and
q(h) = (qb (h), b ∈ N ∪ G) denote the vectors of injected
active power pb (h) and reactive power qb (h) to all buses
b ∈ N ∪ G in time h. Let |vb (h)| and θb (h) denote the
voltage magnitude and phase angle of bus b in time h. We
define the grid-wide vectors θ(h) = (θb (h), b ∈ N ∪ G) and
|v(h)| = (|vb (h)|, b ∈ N ∪ G) in time slot h. Let Grs and
Brs denote the real and reactive parts of the entry (r, s) in
bus admittance matrix Y . Let brr and grr denote the shunt
susceptance and conductance at bus r. In Appendix B, we
show that the linearized ac power flow can be expressed as
\begin{equation}
\begin{bmatrix} \mathbf{p}(h) \\ \mathbf{q}(h) \end{bmatrix}
= \underbrace{\begin{bmatrix} -B' & G' \\ -G' & -B \end{bmatrix}}_{\Lambda}
\begin{bmatrix} \boldsymbol{\theta}(h) \\ |\mathbf{v}(h)| \end{bmatrix}, \tag{14}
\end{equation}
where the diagonal elements $(r,r)$ of matrices $B$ and $B'$ are $B_{rr}$ and $B_{rr} - b_{rr}$, respectively. The non-diagonal elements $(r,s)$ of both $B$ and $B'$ are $B_{rs}$. The diagonal elements $(r,r)$ of matrices $G$ and $G'$ are $G_{rr}$ and $G_{rr} - g_{rr}$, respectively. The non-diagonal elements $(r,s)$ of both $G$ and $G'$ are $G_{rs}$.
In Appendix C, we show that, in time slot $h$, the linearized active and reactive power flow through line $(r,s) \in \mathcal{L}$ with resistance $R_{rs}$ and reactance $X_{rs}$ can be calculated as
\begin{align}
p_{rs}(h) &= \frac{R_{rs}\big(|v_r(h)| - |v_s(h)|\big) + X_{rs}\big(\theta_r(h) - \theta_s(h)\big)}{R_{rs}^2 + X_{rs}^2}, \tag{15a}\\
q_{rs}(h) &= \frac{X_{rs}\big(|v_r(h)| - |v_s(h)|\big) - R_{rs}\big(\theta_r(h) - \theta_s(h)\big)}{R_{rs}^2 + X_{rs}^2}. \tag{15b}
\end{align}
The apparent power flow $s_{rs}(h) = \sqrt{p_{rs}^2(h) + q_{rs}^2(h)}$ is upper bounded by $s_{rs}^{\max}$, which implies that the feasible real and reactive powers are bounded by a circle. To linearize the constraint, a piecewise approximation of the boundary by a regular polygon with central angle $\alpha$ can be used. In Appendix D, we obtain the following constraints
\begin{equation}
p_{rs}(h)\cos(m\alpha) + q_{rs}(h)\sin(m\alpha) \le s_{rs}^{\max}, \tag{16}
\end{equation}
where $m = 0, \dots, 2\pi/\alpha$. For each bus $b$, we also have
\begin{equation}
v_b^{\min} \le |v_b(h)| \le v_b^{\max}. \tag{17}
\end{equation}
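The following Python sketch shows how the linearized model (14) can be assembled and evaluated numerically; the block arrangement of $\Lambda$ follows the reconstruction above, the sign conventions may differ from those in the appendix, and the slack-bus handling required to make $\Lambda$ invertible in practice is omitted.

```python
import numpy as np

def linearized_power_flow(Y, b_sh, g_sh, p, q):
    """Assemble Lambda of (14) and recover (theta, |v|) from injections (p, q).

    Y is the bus admittance matrix; b_sh and g_sh are the shunt susceptances
    and conductances; p and q are the injected active/reactive power vectors.
    A slack bus is normally fixed before solving; a least-squares solve is
    used here so the sketch runs even if Lambda is singular.
    """
    G, B = Y.real, Y.imag
    Gp = G - np.diag(g_sh)                 # G': shunt conductance removed from the diagonal
    Bp = B - np.diag(b_sh)                 # B': shunt susceptance removed from the diagonal
    Lam = np.block([[-Bp, Gp],
                    [-Gp, -B]])            # block arrangement reconstructed from (14)
    x = np.linalg.lstsq(Lam, np.concatenate([p, q]), rcond=None)[0]
    n = len(p)
    return x[:n], x[n:], Lam               # theta, |v|, Lambda
```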
2) DNO’s Centralized Optimization Problem: The DNO
considers the impact of renewable generation shortage on the
technical operation of the network. As a concrete example, the
risk of voltage drop at different buses is a viable choice for the
DNO. The DNO considers function $\Gamma^{\text{DNO}}(\mathbf{p}^{\text{ren}}(t))$ of the grid-wide renewable generation $\mathbf{p}^{\text{ren}}(t)$ for the voltage variations as
\begin{equation}
\Gamma^{\text{DNO}}(\mathbf{p}^{\text{ren}}(t)) \triangleq \sum_{h \in \mathcal{H}_t} \sum_{b \in \mathcal{N} \cup \mathcal{G}} \big(|v_b(h)| - |\widehat{v}_b(h)|\big), \tag{18}
\end{equation}
where $|\widehat{v}_b(h)|$ is the voltage magnitude of bus $b$ in the worst-case scenario, when all renewable generators' outputs are $p_j^{\min,\text{ren}}(h)$ in time slot $h \in \mathcal{H}_t$. The uncertainty space that the DNO considers for renewable generator $j$ is defined as in (11).
We consider the objective of maximizing the social welfare for the DNO. Considering the grid-wide vectors $\mathbf{e}(t)$, $\mathbf{p}^{\text{conv}}(t)$, and $\mathbf{p}^{\text{ren}}(t)$, the DNO's objective function is
\begin{equation}
f^{\text{DNO}}(\mathbf{e}(t), \mathbf{p}^{\text{conv}}(t), \mathbf{p}^{\text{ren}}(t)) \triangleq \sum_{i \in \mathcal{N}} U_i(\mathbf{e}_i(t)) - \sum_{h \in \mathcal{H}_t} \sum_{j \in \mathcal{G}} C_j(p_j^{\text{conv}}(h)) - \vartheta^c\, \Gamma^{\text{DNO}}(\mathbf{p}^{\text{ren}}(t)), \tag{19}
\end{equation}
where $\vartheta^c$ is a positive weighting coefficient. We formulate the DNO's centralized problem as
\begin{align}
\underset{\substack{\mathbf{e}(t),\, \mathbf{p}^{\text{conv}}(t),\, \mathbf{q}^{\text{conv}}(t),\\ \mathbf{p}^{\text{ren}}(t),\, \boldsymbol{\Delta}(t),\, |\mathbf{v}(t)|,\, \boldsymbol{\theta}(t)}}{\text{maximize}} \quad & f^{\text{DNO}}(\mathbf{e}(t), \mathbf{p}^{\text{conv}}(t), \mathbf{p}^{\text{ren}}(t)) \tag{20a}\\
\text{subject to} \quad & \text{constraints (9b), (13b)--(13d), (14)--(18).} \tag{20b}
\end{align}
Problem (20) has a concave objective function and linear
constraints due to the concavity of the load aggregators’ utility
function, the convexity of the generation cost function, and the
linearity of the ac power flow model in (14). Hence, we have:
Theorem 1 The optimal solution to the DNO’s centralized
problem in (20) is unique.
To solve the centralized problem (20), the DNO needs the
information about the load aggregators’ utilities, generators’
generation cost, and renewable units’ forecast data. However,
this information may not be available to the DNO. Instead, we develop a decentralized algorithm by showing that the DNO can determine $\rho_b(h)$ and $\varrho_b(h)$, $b \in \mathcal{N} \cup \mathcal{G}$, as well as
the penalties βj (h), j ∈ G for h ∈ Ht such that when the load
aggregators solve (9) and generators solve (13), the resulting
solution coincides with the unique solution of problem (20).
III. DECENTRALIZED ALGORITHM DESIGN
Given the current time slot t, the decision vector of load
aggregator i is load profile ei (t) of the awake appliances.
Further, the decision vector of generator j is the generation
profile $\boldsymbol{\psi}_j(t) = (\mathbf{p}_j^{\text{conv}}(t), \mathbf{q}_j^{\text{conv}}(t), \mathbf{p}_j^{\text{ren}}(t))$ and the confidence
level ∆j (t). The DNO influences the entities by using the
nodal prices ρ(t), ̺(t), and penalties β(t).
One well-known technique to determine the proper values
of the nodal prices ρ(t), ̺(t), and penalties β(t) is to
formulate the partial Lagrangian relaxation of the DNO’s
centralized problem (20) [14], [15], [25]. Let λb (h) and
γb (h), b ∈ N ∪ G, h ∈ H denote the Lagrange multipliers
associated with the equality constraints for the injected active
power pb (h) and reactive power qb (h) in (14). We move these
constraints with their Lagrange multipliers to the objective
function of the centralized problem in (20). In equation (S-12) of Appendix E, we obtain the objective function $f^{\text{DNO}}_{\text{Lag}}(\cdot)$ of the relaxed problem. Due to the convexity of problem (20) and the linearity of the constraints, the strong duality condition (Slater's condition) is satisfied if a feasible solution exists [26, Ch. 5]. Thus, the optimal solution to the relaxed
problem is equal to the optimal solution to the primal problem
(20). Using the relaxed problem enables us to determine the
price signals ρ(t), ̺(t), and β(t) based on the Lagrangian
decomposition technique, such that the market equilibrium
among load aggregators and generators coincides with the
optimal solution of the centralized problem (20).
Theorem 2 The equilibrium of the energy market coincides
with the unique solution to the DNO’s centralized problem in
(20) if and only if for i ∈ N , j ∈ G, h ∈ Ht the DNO sets
\begin{align}
\rho_i(h) &= \lambda_i(h) + \gamma_i(h)\sqrt{(1-\mathrm{PF}_i^2)/\mathrm{PF}_i^2}, && i \in \mathcal{N},\ h \in \mathcal{H}_t, \tag{21a}\\
\rho_j(h) &= \lambda_j(h), && j \in \mathcal{G},\ h \in \mathcal{H}_t, \tag{21b}\\
\varrho_j(h) &= \gamma_j(h), && j \in \mathcal{G},\ h \in \mathcal{H}_t, \tag{21c}\\
\beta_j(h) &= \vartheta^c \textstyle\sum_{b \in \mathcal{N} \cup \mathcal{G}} \Lambda^{-1}(|\mathcal{N} \cup \mathcal{G}| + b,\, j), \tag{21d}
\end{align}
where $\Lambda^{-1}(|\mathcal{N} \cup \mathcal{G}| + b,\, j)$ is the entry $(|\mathcal{N} \cup \mathcal{G}| + b,\, j)$ of the inverse of matrix $\Lambda$ in (14).
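For example, given the matrix $\Lambda$ of (14), the penalties in (21d) can be evaluated as in the following short sketch; the bus count and the weighting coefficient are assumed inputs.

```python
import numpy as np

def shortage_penalties(Lam, n_buses, theta_c):
    """Penalties beta_j from (21d): theta_c * sum_b Lambda^{-1}(n_buses + b, j)."""
    Lam_inv = np.linalg.inv(Lam)
    lower_rows = Lam_inv[n_buses:, :]          # rows indexed |N u G| + b
    return theta_c * lower_rows.sum(axis=0)    # one penalty per bus column j
```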
The proof can be found in Appendix E. It suggests a decentralized algorithm to determine the solution to problem (20).
We propose Algorithm 1, which can be executed by the load aggregators, generators, and DNO in real-time.
Algorithm 1 Decentralized Energy Market Trading Algorithm.
1: Set $k := 1$ and $\xi_1 = \xi_2 := 10^{-2}$.
2: If $t = 1$
3:   Each load aggregator $i \in \mathcal{N}$ randomly initializes its users' appliance load profile $\mathbf{e}_i^1(t)$.
4:   Each generator $j \in \mathcal{G}$ randomly initializes its conventional generation profiles $\mathbf{p}_j^{\text{conv},1}(t)$ and $\mathbf{q}_j^{\text{conv},1}(t)$.
5:   Each generator $j$ with renewable units initializes $\Delta_j^1(t) = 0$ and sets the presumed generation levels to $p_j^{\text{ren},1}(h) = p_j^{\max,\text{ren}}(h)$ for $h \in \mathcal{H}_t$.
6:   The DNO sets $|v_b^1(h)| = 1$ pu, $\theta_b^1 = 0$, and $\lambda_b^1(h) = \gamma_b^1(h) = 0$, $b \in \mathcal{N} \cup \mathcal{G}$, $h \in \mathcal{H}_t$.
7: Else if $t > 1$
8:   Load aggregators, generators, and the DNO initialize their decision variables with their values at the equilibrium of the previous time slot $t-1$.
9: End if
10: Repeat
11:   Each load aggregator $i$ and generator $j$ sends its load profile $\mathbf{l}_i^k(t)$ and generation profiles $\mathbf{p}_j^{\text{conv},k}(t)$, $\mathbf{q}_j^{\text{conv},k}(t)$, and $\mathbf{p}_j^{\text{ren},k}(t)$ to the DNO.
12:   The DNO obtains the updated vector $\boldsymbol{\phi}^{k+1}(t) = (\theta_b^{k+1}(t), |v_b^{k+1}(t)|, \lambda_b^{k+1}(t), \gamma_b^{k+1}(t),\ b \in \mathcal{N} \cup \mathcal{G})$ using (22).
13:   The DNO uses (21a)--(21d) to compute the updated values of the control signals $\boldsymbol{\rho}^{k+1}(t)$, $\boldsymbol{\varrho}^{k+1}(t)$, and $\boldsymbol{\beta}^{k+1}(t)$, and sends the control signals to the corresponding entity in each bus.
14:   Each load aggregator $i$ updates its controllable load profile $\mathbf{e}_i^{k+1}(t)$ by solving its local problem (9).
15:   Each generator $j$ updates its generation profile $\boldsymbol{\psi}_j^k(t)$ and decision variable $\Delta_j^k(t)$ by solving its local problem in (13).
16:   $k := k+1$. The step size is updated.
17: Until $|\theta_b^k(t) - \theta_b^{k-1}(t)| \le \xi_1$ and $\big||v_b^k(t)| - |v_b^{k-1}(t)|\big| \le \xi_2$, $b \in \mathcal{N} \cup \mathcal{G}$.
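The market-trading loop can be summarized by the following Python skeleton, in which the entities' local problems (9) and (13) and the price computation (21) are abstracted into the callable grad_lagrangian; the step-size rule and tolerance are illustrative choices consistent with the nonsummable diminishing step size used in the algorithm, and all names are hypothetical.

```python
import numpy as np

def run_market_step(phi0, grad_lagrangian, project, max_iter=200, tol=1e-2):
    """Skeleton of the market-trading phase of Algorithm 1 for one time slot.

    phi0            : initial stacked vector (theta, |v|, lambda, gamma)
    grad_lagrangian : callable returning the gradient of the relaxed objective
                      at phi, after collecting the entities' latest profiles
    project         : projection onto the feasible set of (20b)
    """
    phi = phi0.copy()
    for k in range(1, max_iter + 1):
        eps_k = 1.0 / np.sqrt(k)                               # nonsummable diminishing step size
        phi_next = project(phi + eps_k * grad_lagrangian(phi)) # update (22)
        if np.max(np.abs(phi_next - phi)) <= tol:              # stopping rule of Line 17
            return phi_next, k
        phi = phi_next
    return phi, max_iter
```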
In Algorithm 1, when the current time slot $t$ begins, each load aggregator $i$ determines the power consumption profile $\mathbf{e}_{a,i}(t) = (e_{a,i}(h),\ h \in \mathcal{H}_t)$ of all awake appliances $a \in \mathcal{A}_i^{\text{awake}}(t)$ over the time slots $h \in \mathcal{H}_t$. Each generator $j$ obtains the profiles of active and reactive powers $\mathbf{p}_j^{\text{conv}}(t) = (p_j^{\text{conv}}(h),\ h \in \mathcal{H}_t)$ and $\mathbf{q}_j^{\text{conv}}(t) = (q_j^{\text{conv}}(h),\ h \in \mathcal{H}_t)$ of the conventional unit and the generation profile $\mathbf{p}_j^{\text{ren}}(t) = (p_j^{\text{ren}}(h),\ h \in \mathcal{H}_t)$. The entities use the obtained scheduling decision for upcoming time slots $h \ge t+1$ as an initial decision in the next time slot $t+1$.
In each time slot t, Algorithm 1 is executed in an iterative
fashion. Let k denote the iteration index. Our algorithm
involves the initiation phase and market trading phase.
• Initiation phase: Lines 1 to 9 describe the initiation phase.
• Market trading phase: The loop involving Lines 10 to 17
describes this phase, which includes the following parts:
a) Information exchange: In Line 11, each load aggregator i
uses (7) to obtain its demand profile $\mathbf{l}_i^k(t) = (l_i^k(h),\ h \in \mathcal{H}_t)$ and sends it to the DNO. Each generator $j$ sends the profiles $\mathbf{p}_j^{\text{conv},k}(t)$, $\mathbf{q}_j^{\text{conv},k}(t)$, and $\mathbf{p}_j^{\text{ren},k}(t)$ to the DNO.
b) DNO's update: In Line 12, the DNO receives the information from the entities and obtains the updated vector $\boldsymbol{\phi}^{k+1}(t) = (\theta_b^{k+1}(t), |v_b^{k+1}(t)|, \lambda_b^{k+1}(t), \gamma_b^{k+1}(t),\ b \in \mathcal{N} \cup \mathcal{G})$ as
\begin{equation}
\boldsymbol{\phi}^{k+1}(t) = \Big[ \boldsymbol{\phi}^{k}(t) + \epsilon^k \nabla_{\boldsymbol{\phi}^k(t)} f^{\text{DNO}}_{\text{Lag}}(\cdot) \Big]^{+}, \tag{22}
\end{equation}
where $\nabla$ is the gradient operator and $[\cdot]^{+}$ is the projection onto the feasible set defined by the constraints in (20b). Recall that $f^{\text{DNO}}_{\text{Lag}}(\cdot)$ is the objective function of the DNO's relaxed problem, which is given in equation (S-12) in Appendix E. The DNO uses (21a)--(21d) to compute the updated prices $\boldsymbol{\rho}^{k+1}(t)$ and $\boldsymbol{\varrho}^{k+1}(t)$ and penalties $\boldsymbol{\beta}^{k+1}(t)$ for all buses.
c) Load aggregator's update: When load aggregator $i$ receives the control signals $\rho_i^{k+1}(h),\ h \in \mathcal{H}_t$, from the DNO, in Line 14 it updates its controllable load profile $\mathbf{e}_i^{k+1}(t)$ by solving its local problem (9), which is convex and can be efficiently solved at each iteration. Note that the utility function in (8) is a concave function.
d) Generator's update: When generator $j$ receives the signals $\rho_j^{k+1}(t)$, $\varrho_j^{k+1}(t)$, and $\beta_j^{k+1}(t)$, it updates its generation profile $\boldsymbol{\psi}_j^k(t) = (\mathbf{p}_j^{\text{conv},k}(t), \mathbf{q}_j^{\text{conv},k}(t), \mathbf{p}_j^{\text{ren},k}(t))$ and decision variable $\Delta_j^k(t)$ by solving its local problem in (13). This problem is a linear problem and can be solved efficiently by the generator using its local information about its conventional and renewable units.
e) Step size update: We use a nonsummable diminishing step size. In Line 16, the step size is updated.
We emphasize that in Algorithm 1, the DNO needs to consider one bus (e.g., the substation bus) as the slack bus.
Figure 1. Load demand profiles over 24 hours in buses 17, 23, 90, and 110 with and without appliances scheduling.
Figure 2. The profit for load aggregators with and without load scheduling.
Figure 3. (a) The generation of the conventional unit of generator 15. (b) The PV panel historical data samples. (c) The offered output power of the PV panel of generator 15 in the market over the day.
IV. PERFORMANCE EVALUATION
In this section, we evaluate the performance of our proposed
decentralized algorithm on an IEEE 123-node test feeder. The
original test system is unbalanced. For all unbalanced case studies, we construct a balanced test system by ignoring the phase-to-phase admittances and replacing all multi-phase lines with a one-phase line with the average inductance and resistance of the phases. The data for the test system can be found in [27]. We consider the configuration where all switches are open. The slack bus is the substation bus. The trading horizon is one day with $H = 24$ time slots. We add 10 generators at different buses and assume that each load aggregator serves between 100 and 500 users. In Appendix F, we provide the simulation setup [28], [29]. For the benchmark scenario, we consider a system without a demand response program for all load aggregators; thus, users operate their appliances right after they become awake. We perform simulations using Matlab R2016b on a PC with an Intel(R) Core(TM) i7-3770K CPU at 3.50 GHz.
1) Load aggregators’ strategy: Each load aggregator executes Algorithm 1 to schedule the appliances of its users. Fig.
1 shows the load profile of the load aggregators in buses 17,
23, 90, and 110 in the benchmark scenario and the scenario
with load scheduling. Peak shaving can be observed in the
load profiles. Since Algorithm 1 is executed in real-time, the
load aggregators can only modify the demand for upcoming
time slots using the revealed information about the awake
appliances in the current time slot and the estimated load
demand for future time slots. Results for all load aggregators
verify that by executing Algorithm 1, the peak load demand is
reduced by 14.5% in average. Load scheduling is performed
by each load aggregator with the goal of increasing the profit
in (8). Fig. 2 shows that the profit of the load aggregators 17,
23, 90, and 110 is increased. Specifically, results show that
the profit for all load aggregators is increased by 17.8% on
average, since they can benefit from the price fluctuations by
modifying the operation of their users’ appliances.
2) Generators’ strategy: On the other hand, generators can
benefit from the users’ load scheduling to reduce their peak
generation, and thus their generation cost during peak hours.
For example, Fig. 3 (a) shows the active output power profile
from the conventional unit of the generator in bus 15. The
peak generation level is reduced from 25 MW to 20 MW (i.e.
20% reduction). Generator 15 also has a PV panel with the
historical generation record shown in Fig. 3 (b). The generator executes Algorithm 1 and responds to the penalties $\beta_j(t)$ from the DNO in each time slot to set the least-risk generation level for its PV unit. Fig. 3 (c) shows the offers of generator 15 over the day. Note that the offers may not be equal to the actual PV panel's generation in real-time, but the generation profile in Fig. 3 (c) results in the optimal risk of energy shortage for the PV panel in bus 15. We also show the generation profile of the conventional unit of generator 60 in Fig. 4 (a). The reduction in peak generation can be observed. This generator has a wind turbine with the historical generation record shown in Fig. 4 (b). Fig. 4 (c) shows the offers for the wind turbine in bus 60.
Figure 4. (a) The generation of the conventional unit of generator 60. (b) The wind turbine historical data samples. (c) The presumed output power of the wind turbine of generator 60.
Figure 5. (a) The offered output power of the PV panel of generator 15 with different values of coefficient $\vartheta^c$. (b) The offered output power of the wind turbine of generator 60 with different values of coefficient $\vartheta^c$.
Figure 6. The PAR in the generation of the generators with and without demand response.
Figure 7. The profit of the generators with and without demand response.
The offers for renewable units' generation mainly depend
on the conservativeness of the DNO. In particular, when the
weight coefficient ϑc in (20a) is large, the DNO is risk-averse
and forces the generators to prevent generation shortage in
their renewable units. On the other hand, small coefficient
ϑc means the DNO encourages the generators to offer higher
amounts of renewable generation. As an example, Figs. 5 (a)
and (b) show the renewable generation profiles of generators
15 and 60 for different values of ϑc . When ϑc increases from 5
to 50, the generation levels decrease, since the penalties βj (t)
in (21d) increase. Hence, the generators offer lower renewable
generations to reduce their cost of generation shortage.
To quantify the peak shaving, we consider the PAR of the
generation. Fig. 6 shows that the PAR is reduced for the
generators by 13% on average. A lower PAR means a lower
generation cost, and thus a higher profit. Fig. 7 confirms that
the generators’ profit is increased by 10.3% on average.
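The PAR used here is simply the ratio of the peak to the average generation over the day, e.g.:

```python
import numpy as np

def peak_to_average_ratio(generation):
    """Peak-to-average ratio (PAR) of a generation profile over one day."""
    generation = np.asarray(generation, dtype=float)
    return generation.max() / generation.mean()

# illustrative values only: the PAR drops when the evening peak is shaved
print(peak_to_average_ratio([12, 14, 25, 18]), peak_to_average_ratio([14, 16, 20, 19]))
```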
3) Algorithm convergence: We study the required number
of iterations for convergence, which can be interpreted as an
indicator of the number of message exchanges among the load
aggregators, generators and DNO. The angle and magnitude
of the voltage of the buses depend on all generators’ and
load aggregators’ decision variables. Thus, the convergence
of these variables is a viable indicator of the convergence
of all decision variables in the system. Since the values of
the voltage angles of all buses can be added by a constant,
we illustrate the convergence of the phase angle difference
between the voltages of the buses at the end nodes of the
lines. As an example, we consider time slot 1 and we provide the convergence of δ14 (1) − δ11 (1), δ70 (1) − δ71 (1),
δ34 (1) − δ13 (1), and δ72 (1) − δ76 (1) in Fig. 8 (a). We also
show the voltage magnitude of the buses 19, 38, 76, and 30
in Fig. 8 (b). We can observe that 50 iterations are enough
for convergence. The average running time of the algorithm over 100 random initial conditions is 5 seconds. The small number of iterations and the low running time make the proposed algorithm implementable for real-time interactions among entities in an energy market.
We also evaluate the running time of Algorithm 1 for larger
test systems to show its scalability. Meanwhile, we compare
its running time with a centralized algorithm, where the DNO
solves problem (20). We use the MOSEK solver to
solve the DNO’s centralized problem (20). The control signals
in (21a)−(21d) are determined in order to obtain the same
solution for both the centralized and decentralized algorithms,
and simulation results confirm this. We provide the average
running time of Algorithm 1 and the centralized approach for
six test systems (all can be found in [27] except the system
with 1500 buses, which is a part of the 8500-bus test system) in Fig. 9. We can observe that the centralized algorithm suffers from a high running time due to a large number of decision
variables and constraints. On the other hand, Algorithm 1
is executed by each entity to solve its own optimization
problem with its locally available information in a distributed
fashion. Hence, the number of decision variables for each
entity becomes independent of the size of the test system. The
overall running time of Algorithm 1 increases almost linearly
with the number of buses due to the increase in the required
number of iterations for convergence in larger test systems.
We also compare the performance of Algorithm 1 for the
scenario with uncertainty in the load demand and renewable
generations and the scenario with complete information. As an
example, we consider the load profile of the load aggregator
110 and the generation profile of generator 15 with a PV panel
in Figs. 10 (a) and (b). The lack of information makes the load
aggregators more conservative, since they consider the worst case for the electric appliances in the upcoming time slots. In contrast, when the load aggregator has complete information,
it can better manage the electric appliances especially during
the peak hours. A lower peak demand for the load aggregators
results in a lower peak in the generation level of the conventional units. The conventional unit may provide more power
during the times when the PV panel has high generation (e.g.
around 6 pm), as the PV generation is known and the generator
does not take risk to offer a high renewable generation.
Figure 8. The convergence of phase differences and voltage magnitudes.
Figure 9. The running time of the centralized and decentralized algorithms.
Figure 10. (a) Load scheduling with uncertainty and complete information. (b) Generation profile with uncertainty and complete information.
V. CONCLUSION
In this paper, we proposed a real-time decentralized algorithm for energy trading among load aggregators and generators. Our proposed approach considers the uncertainty at both generation and demand sides. In our model, the DNO sends control signals to the entities to encourage them towards
optimizing their objectives independently, while meeting the
physical constraints of the power network. This study uses a linearized ac optimal power flow formulation to increase the accuracy of the obtained solution. Further, we solve the problem in a real-time fashion to obtain the most recent optimal solution for each time step. To evaluate the performance of the proposed decentralized algorithm, we used an IEEE 123-bus test feeder connected to several renewable generators. Although we considered the stochastic nature of renewables as well as the ac power flow formulation, our algorithm converges within 50 iterations. We evaluated the price-responsive load profiles and generation values and showed that our method can benefit the load aggregators by increasing their profit by 17.8%, and the generators by reducing the PAR by 13% and increasing their profit by 10.3%. Our algorithm also benefits the DNO by preserving the entities' privacy and requiring a lower computational time compared to the centralized approach.
REFERENCES
[1] B. Chai, J. Chen, Z. Yang, and Y. Zhang, “Demand response management with multiple utility companies: A two-level game approach,” IEEE
Trans. on Smart Grid, vol. 5, no. 2, pp. 722–731, Mar. 2014.
[2] S. Maharjan, Q. Zhu, Y. Zhang, S. Gjessing, and T. Basar, “Dependable
demand response management in the smart grid: A stackelberg game
approach,” IEEE Trans. on Smart Grid, vol. 4, no. 1, pp. 120–132, Mar.
2013.
[3] R. Deng, Z. Yang, F. Hou, M.-Y. Chow, and J. Chen, “Distributed realtime demand response in multiseller-multibuyer smart distribution grid,”
IEEE Trans. on Power Systems, vol. PP, no. 99, pp. 1–11, Oct. 2014.
[4] F. Kamyab et al., “Demand response program in smart grid using supply
function bidding mechanism,” IEEE Trans. on Smart Grid, vol. 7, no. 3,
pp. 1277 – 1284, 2016.
[5] M. Parvania, M. Fotuhi-Firuzabad, and M. Shahidehpour, “ISO’s optimal
strategies for scheduling the hourly demand response in day-ahead
markets,” IEEE Trans. on Power Systems, vol. 29, no. 6, pp. 2636–2645,
2014.
[6] L. Gan, N. Li, U. Topcu, and S. H. Low, “Exact convex relaxation
of optimal power flow in radial networks,” IEEE Trans. on Automatic
Control, vol. 60, no. 1, pp. 72–87, 2015.
[7] W. Shi, N. Li, X. Xie, C. Chu, and R. Gadh, “Optimal residential demand
response in distribution networks,” IEEE Journal on Selected Areas in
Comm., vol. 32, no. 7, pp. 1441–1450, Jun. 2014.
[8] N. Li, L. Gan, L. Chen, and S. Low, “An optimization-based demand
response in radial distribution networks,” in Proc. of IEEE Globecom,
Anaheim, CA, 2012.
[9] E. Dall’Anese, H. Zhu, and G. B. Giannakis, “Distributed optimal power
flow for smart microgrids,” IEEE Trans. on Smart Grid, vol. 4, no. 3,
pp. 1464–1475, Sept. 2013.
[10] A. G. Bakirtzis and P. N. Biskas, “A decentralized solution to the DCOPF of interconnected power systems,” IEEE Trans. on Power Systems,
vol. 18, no. 3, pp. 1007–1013, Aug. 2003.
[11] T. Erseghe, “Distributed optimal power flow using ADMM,” IEEE
Trans. on Power Systems, vol. 29, no. 5, pp. 2370–2380, Sept. 2014.
[12] Q. Peng and S. H. Low, “Distributed algorithm for optimal power flow
on a radial network,” in Proc. of IEEE Conf. on Decision and Control,
Dec. 2014, pp. 167–172.
[13] S. Magnusson, P. C. Weeraddana, and C. Fischione, “A distributed
approach for the optimal power-flow problem based on ADMM and
sequential convex approximations,” IEEE Trans. on Control of Network
Systems, vol. 2, no. 3, pp. 238–253, Sept. 2015.
[14] J. M. Arroyo and F. D. Galiana, “Energy and reserve pricing in security
and network-constrained electricity markets,” IEEE Trans. on Power
Systems, vol. 20, no. 2, pp. 634–643, May 2005.
[15] S. Mhanna, A. C. Chapman, and G. Verbič, “A fast distributed algorithm
for large-scale demand response aggregation,” IEEE Trans. on Smart
Grid, vol. 7, no. 4, pp. 2094–2107, 2016.
[16] M. J. Dolan, E. M. Davidson, I. Kockar, G. W. Ault, and S. D. McArthur,
“Distribution power flow management utilizing an online optimal power
flow technique,” IEEE Trans. on Power Systems, vol. 27, no. 2, pp. 790–
799, May 2012.
[17] E. Belic, N. Lukac, K. Dezelak, B. Zalik, and G. Stumberger, “GPU-based online optimization of low voltage distribution network operation,”
accepted for publication in IEEE Trans. on Smart Grid, 2017.
[18] L. Gan and S. H. Low, “An online gradient algorithm for optimal power
flow on radial networks,” IEEE J. on Selected Areas in Comm., vol. 34,
no. 3, pp. 625–638, Mar. 2016.
[19] A. Hauswirth, S. Bolognani, G. Hug, and F. Dörfler, “Projected gradient
descent on Riemannian manifolds with applications to online power
system optimization,” in Proc. of Allerton Conf. on Communications,
Control and Computing, Sept. 2016, pp. 225–232.
[20] S.-J. Kim, G. B. Giannakis, and K. Y. Lee, “Online optimal power flow
with renewables,” in Proc. of Asilomar Conf. on Signals, Systems and
Computers, Nov. 2014, pp. 355–360.
[21] D. Mehta, A. Ravindran, B. Joshi, and S. Kamalasadan, “Graph theory
based online optimal power flow control of power grid with distributed
flexible ac transmission systems (d-facts) devices,” in Proc. of North
American Power Symposium (NAPS), Oct. 2015, pp. 1–6.
[22] N. Li, L. Chen, and S. H. Low, “Optimal demand response based on
utility maximization in power networks,” in IEEE Power and Energy
Society General Meeting, Jul. 2011, pp. 1–8.
[23] T. Li and M. Shahidehpour, “Price-based unit commitment: a case of
lagrangian relaxation versus mixed integer programming,” IEEE Trans.
on Power Systems, vol. 20, no. 4, pp. 2015–2025, 2005.
[24] D. Bertsimas, E. Litvinov, X. A. Sun, J. Zhao, and T. Zheng, “Adaptive
robust optimization for the security constrained unit commitment problem,” IEEE Trans. on Power Systems, vol. 28, pp. 52–63, Feb. 2013.
[25] G. Hug-Glanzmann and G. Andersson, “Decentralized optimal power
flow control for overlapping areas in power systems,” IEEE Trans. on
Power Systems, vol. 24, no. 1, pp. 327–336, Feb. 2009.
[26] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[27] [Online]. Available: https://ewh.ieee.org/soc/pes/dsacom/testfeeders/
[28] Independent Electricity System Operator (IESO). [Online]. Available:
http://www.ieso.ca
[29] [Online]. Available: http://www.torontohydro.com/sites/electricsystem/residential/yourb
Decentralized Online Learning with Kernels
Alec Koppel§ , Santiago Paternain? , Cédric Richard† and Alejandro Ribeiro?
arXiv:1710.04062v1 [math.OC] 11 Oct 2017
Abstract
We consider multi-agent stochastic optimization problems over reproducing kernel Hilbert spaces (RKHS).
In this setting, a network of interconnected agents aims to learn decision functions, i.e., nonlinear statistical
models, that are optimal in terms of a global convex functional that aggregates data across the network, with
only access to locally and sequentially observed samples. We propose solving this problem by allowing each agent
to learn a local regression function while enforcing consensus constraints. We use a penalized variant of functional
stochastic gradient descent operating simultaneously with low-dimensional subspace projections. These subspaces
are constructed greedily by applying orthogonal matching pursuit to the sequence of kernel dictionaries and weights.
By tuning the projection-induced bias, we propose an algorithm that allows for each individual agent to learn, based
upon its locally observed data stream and message passing with its neighbors only, a regression function that is
close to the globally optimal regression function. That is, we establish that with constant step-size selections agents’
functions converge to a neighborhood of the globally optimal one while satisfying the consensus constraints as
the penalty parameter is increased. Moreover, the complexity of the learned regression functions is guaranteed to
remain finite. On both multi-class kernel logistic regression and multi-class kernel support vector classification with
data generated from class-dependent Gaussian mixture models, we observe stable function estimation and state of
the art performance for distributed online multi-class classification. Experiments on the Brodatz textures further
substantiate the empirical validity of this approach.
I. INTRODUCTION
We consider decentralized online optimization problems: a network G = (V, E) of agents aims to
minimize a global objective that is a sum of local convex objectives available only to each node.
The problem is online and distributed because data samples upon which the local objectives depend
are sequentially and locally observed by each agent. In this setting, agents aim to make inferences as well as a centralized entity that has access to all data in advance. Instead of assuming
agents seek a common parameter vector w ∈ Rp , we focus on the case where agents seek to learn a
common decision function f(x) that belongs to a reproducing kernel Hilbert space (RKHS). Such functions represent, e.g., nonlinear statistical models [2] or trajectories in a continuous space [3]. Learning in multi-agent settings arises predominantly in two technological settings: industrial-scale machine learning, where
optimizing statistical model parameters is decentralized across a parallel processing architecture to attain
computational speedup; and networked intelligent systems such as sensor networks [4], multi-robot teams
[5], [6], and the Internet of Things [7], [8]. In the latter setting, decentralized processing is justified, as opposed
to using a fusion center when the communication cost of centralization exceeds the cost of distributed
information protocols. This is true of multi-agent systems with streaming data considered here.
Efforts to develop optimization tools for multi-agent online learning have thus far been restricted to
the case where each agent learns a linear statistical model [9] or a task-driven dictionary [10] that is
as good as one with data aggregated across the network. However, these efforts exclude the state of the
art tools for statistical learning based on nonlinear interpolators: namely, kernel methods [11], [12] and
neural networks [13], [14]. We note that instabilities associated with non-convexity which are only a
minor issue in centralized settings [15] become both theoretically and empirically difficult to overcome
The work in this paper is supported by NSF CCF-1017454, NSF CCF-0952867, ONR N00014-12-1-0997, ARL MAST CTA, and ASEE SMART. Part of the results in this paper appeared in [1].
§
Computational and Information Sciences Directorate, U.S. Army Research Laboratory, Adelphi, MD, 20783. Email:
[email protected]
?
Department of ESE, University of Pennsylvania, 200 South 33rd Street, Philadelphia, PA 19104. Email: {spater, aribeiro}@seas.upenn.edu
†
Laboratory Lagrange - UMR CNRS 7293, Observatory of the French Riviera University of Nice Sophia-Antipolis, Nice, France, 06108
in settings with consensus constraints [10], and therefore efforts to extend neural network learning to
multi-agent online learning likely suffer the same drawbacks.1 Therefore, we focus on extending kernel
methods to decentralized online settings, motivated both by its advantageous empirical performance, as
well as the theoretical and practical benefits of the fact that the optimization problem defined by their
training is convex. This stochastic convex problem, however, is defined over an infinite dimensional space,
and therefore it is not enough to solve the optimization problem, but one must also solve it in an optimally
sparse way. Doing so in multi-agent settings is the goal of this work.
To contextualize our solution methodology, consider centralized vector-valued stochastic convex programming, which has classically been solved with stochastic gradient descent (SGD) [16]. SGD involves
descending along the negative of the stochastic gradient rather than the true gradient to avoid the fact that
computing the gradient of the average objective has complexity comparable to the training sample size,
which could be infinite. In contrast, the setting considered in this work is a stochastic program defined
over a function space, which is in general an intractable variational inference problem. However, when
the function space is a RKHS [17], the Representer Theorem allows us to transform a search over an
infinite space into one over a set of weights and data samples [18]. Unfortunately, the feasible set of the
resulting problem has complexity comparable to the sample size N , and thus is intractable for N → ∞
[19]. Compounding this problem is that the storage required to construct the functional generalization of
SGD is comparable to the iteration index of the algorithm, which is untenable for online settings.
Efforts to mitigate the complexity of the function representation (“the curse of kernelization”) have
been previously developed. These combine functional extensions of stochastic gradient method with
compressions of the function parameterization independently of the optimization problem to which they are
applied [20]–[24] or approximate the kernel during training [25]–[29], and at best converge on average.
In contrast, a method was recently proposed that combines greedily constructed [30] sparse subspace
projections with functional stochastic gradient method and guarantees exact convergence to the minimizer
of the average risk functional. This technique, called parsimonious online learning with kernels (POLK),
tailors the parameterization compression to preserve the descent properties of the underlying RKHS-valued
stochastic process [31], and inspires the approach considered here.
In this work, we extend the ideas in [31] to multi-agent settings. Multiple tools from distributed
optimization may be used to do so; however, we note that the Representer Theorem [18] has not been
established for general stochastic saddle point problems in RKHSs. Therefore, we adopt an approximate
primal-only approach based on penalty methods [32], [33], which in decentralized optimization is known
as distributed gradient descent (DGD). Using functional stochastic extensions of DGD, together with the
greedy Hilbert subspace projections designed in POLK, we develop a method such that each agent, through
its local data stream and message passing with only its neighbors, learns a memory-efficient approximation
to the globally optimal regression function with probability 1. Such global stability guarantees are in
contrast to specialized results for multi-agent kernel learning [34], [35] and alternative distributed online
nonlinear function estimation methods such as dictionary learning [10], [15], [36] or neural networks [14],
which suffer from instability due to the non-convexity of the optimization problem their training defines.
The rest of the paper is organized as follows. In Section II we clarify the problem setting of stochastic
programming in RKHSs in the centralized and decentralized case. In Section III, we propose a new
penalty functional that permits deriving a decentralized online method for kernel regression without any
complexity bottleneck by making use of functional stochastic gradient method (Section III-A) combined
with greedy subspace projections (Section III-B). In Section IV we present our main theoretical results,
which establish that the function sequence of each agent generated by the proposed technique converges
to a neighborhood of the globally optimal function with probability 1. In Section V, we present numerical
examples of decentralized online multi-class kernel logistic regression and kernel support vector machines
with data generated from Gaussian mixtures, and observe a state of the art trade-off between Lyapunov stability and statistical accuracy. We then apply the resulting method to the benchmark Brodatz texture dataset [37] and observe state of the art decentralized online multi-class classification performance.

Footnote 1: In general, globally convergent decentralized online training of neural networks is an open problem, whose solution requires fundamentally new approaches to stochastic global optimization.
II. PROBLEM FORMULATION
A. Decentralized Functional Stochastic Programming
Consider the problem of expected risk minimization, where the goal is to learn a regressor that minimizes
a loss function quantifying the merit of a statistical model averaged over a data set. We focus on the case
when the number of training examples N is very large or infinite. In this work, input-output examples,
(xn , yn ), are i.i.d. realizations drawn from a stationary joint distribution over the random pair (x, y) ∈
X × Y, where X ⊂ Rp and Y ⊂ R. Here, we consider finding regressors that are not vector valued
parameters, but rather functions f˜ ∈ H in a hypothesized function class H, which allows for learning
nonlinear statistical models rather than generalized linear models that rarely achieve satisfactory statistical
error rates in practice [12], [38]. The merit of the function f˜ is evaluated by the convex loss function
` : H×X ×Y → R that quantifies the merit of the estimator f˜(x̃) evaluated at feature vector x̃. This loss is
averaged over all possible training examples to define the statistical loss L̃(f˜) := Ex,y [`(f˜(x), y)], which
we combine with a Tikhonov regularizer to construct the regularized loss R̃(f̃) := L̃(f̃) + (λ/2)‖f̃‖²_H [39], [40]. We then define the optimal function as

f̃* = argmin_{f̃∈H} R̃(f̃) := argmin_{f̃∈H} E_{x̃,ỹ}[ℓ(f̃(x̃), ỹ)] + (λ/2)‖f̃‖²_H .   (1)
In this work, we focus on extensions of the formulation in (1) to the case where data is scattered across
an interconnected network that represents, for instance, robotic teams [10], communication systems [41],
or sensor networks [4]. To do so, we define a symmetric, connected, and directed network G = (V, E)
with |V| = V nodes and |E| = E edges and denote as ni := {j : (i, j) ∈ E} the neighborhood of agent
i. For simplicity we assume that the number of edges E is even. Each agent i ∈ V observes a local data
sequence as realizations (xi,n , yi,n ) from random pair (xi , yi ) ∈ X × Y and seeks to learn a common
globally optimal regression function f . This setting may be mathematically captured by associating to
each node i a convex loss functional `i : H × X × Y → R that quantifies the merit of the estimator fi (xi )
evaluated at feature vector xi , and defining the goal for each node as the minimization of the common
global loss
f* = argmin_{f∈H} Σ_{i∈V} E_{x_i,y_i}[ℓ_i(f(x_i), y_i)] + (λ/2)‖f‖²_H   (2)
Observe that this global loss is a network-wide average (scaled by V ) of all local losses, and therefore the
minimizers of (1) and (2) coincide when (xi , yi ) have a common joint distribution for each i. However,
in multi-agent optimization, this is not generally the case, thus when selecting a regression function f
with only local data, different agents will learn a different decision function f_i* that is not optimal
as compared to one selected in a centralized manner, i.e., with the data gathered by all agents. To
overcome this limitation we allow message passing between agents and we impose a consensus constraint
on the regression function among neighbors fi = fj , (i, j) ∈ E. Thus we consider the nonparametric
decentralized stochastic program:
h
X
i λ
∗
2
f = argmin
Exi ,yi `i (fi x), yi + kfi kH
2
{fi }⊂H
i∈V
such that
fi = fj , (i, j) ∈ E
(3)
We further define the product Hilbert space H^V of functions aggregated over the network whose elements
are stacked functions f (·) = [f1 (·); · · · ; fV (·)] that yield vectors of length V when evaluated at local
random vectors f(x) = [f_1(x_1); · · · ; f_V(x_V)] ∈ R^V. Moreover, define the stacked random vectors x = [x_1; · · · ; x_V] ∈ X^V ⊂ R^{Vp} and y = [y_1; · · · ; y_V] ∈ R^V that represent, for instance, V labels or physical measurements.
The goal of this paper is to develop an algorithm to solve (3) in distributed online settings where nodes
do not know the distribution of the random pair (xi , yi ) but observe local independent training examples
(xi,n , yi,n ) sequentially.
B. Function Estimation in Reproducing Kernel Hilbert Spaces
The optimization problem in (1), and hence (3), is intractable in general, since it defines a variational
inference problem integrated over the unknown joint distribution P(x, y). However, when H is equipped
with a reproducing kernel κ : X × X → R (see [12], [42]), a function estimation problem of the form
(1) may be reduced to a parametric form via the Representer Theorem [19], [43]. Thus, we restrict the
Hilbert space in Section II-A to be one equipped with a kernel κ that satisfies for all functions f˜ : X → R
in H:
(i) ⟨f̃, κ(x_i, ·)⟩_H = f̃(x_i),   (ii) H = span{κ(x_i, ·)}   (4)
for all xi ∈ X . Here h·, ·iH denotes the Hilbert inner product for H. Further assume that the kernel
is positive semidefinite, i.e. κ(xi , x0i ) ≥ 0 for all xi , x0i ∈ X . Function spaces of this type are called
reproducing kernel Hilbert spaces (RKHS).
In (4), property (i) is the reproducing property (via Riesz Representation Theorem [43]). Replacing f˜
by κ(x0i , ·) in (4) (i) yields hκ(x0i , ·), κ(xi , ·)iH = κ(xi , x0i ) which is the origin of the term “reproducing
kernel.” This property induces a nonlinear transformation of the input space X : denote by φ(·) a nonlinear
map of the feature space that assigns to each xi the kernel function κ(·, xi ). The reproducing property
yields that the inner product of the image of distinct feature vectors xi and x0i under the map φ requires
only kernel evaluations: hφ(xi ), φ(x0i )iH = κ(xi , x0i ) (the ’kernel trick’).
Moreover, property (4) (ii) states that functions f˜ ∈ H may be written as a linear combination of kernel
evaluations. For kernelized and regularized empirical risk minimization (ERM), the Representer Theorem
[17], [18] establishes that the optimal f˜ in hypothesized function class H admit an expansion in terms of
kernel evaluations only over training examples
f̃(x_i) = Σ_{n=1}^{N} w_{i,n} κ(x_{i,n}, x_i) ,   (5)

where w_i = [w_{i,1}, · · · , w_{i,N}]^T ∈ R^N denotes a set of weights. The upper index N in (5) is referred to as the model order, and for ERM the model order and training sample size are equal. Common choices of κ include the polynomial and radial basis kernels, i.e., κ(x_i, x'_i) = (x_i^T x'_i + b)^d and κ(x_i, x'_i) = exp{−‖x_i − x'_i‖²_2 / (2d²)}, respectively, where x_i, x'_i ∈ X.
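To make the kernel parameterization concrete, the following minimal sketch (an illustrative addition; the function names and the Gaussian bandwidth value are our own choices, not part of the development above) evaluates the polynomial and radial basis kernels and a kernel expansion of the form (5).

import numpy as np

def polynomial_kernel(x, x_prime, b=1.0, degree=2):
    """Polynomial kernel (x^T x' + b)^degree."""
    return (np.dot(x, x_prime) + b) ** degree

def gaussian_kernel(x, x_prime, bandwidth=0.6):
    """Radial basis (Gaussian) kernel exp(-||x - x'||^2 / (2 d^2))."""
    return np.exp(-np.linalg.norm(x - x_prime) ** 2 / (2.0 * bandwidth ** 2))

def evaluate_expansion(x, dictionary, weights, kernel=gaussian_kernel):
    """Evaluate f(x) = sum_n w_n * kappa(d_n, x), the Representer Theorem form (5).

    dictionary: array of shape (M, p) whose rows are kernel dictionary elements.
    weights:    array of shape (M,).
    """
    return sum(w * kernel(d, x) for d, w in zip(dictionary, weights))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((5, 2))     # 5 dictionary points in R^2
    w = rng.standard_normal(5)          # associated weights
    x = np.array([0.1, -0.2])
    print(evaluate_expansion(x, D, w))  # scalar prediction f(x)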
Suppose, for the moment, that we have access to N i.i.d. realizations of the random pairs (xi , yi ) for
each agent i such that the expectation in (3) is computable, and we further ignore the consensus constraint.
Then the objective in (3) becomes:
f* = argmin_{f∈H^V} (1/N) Σ_{n=1}^{N} Σ_{i∈V} ℓ(f_i(x_{i,n}), y_{i,n}) + (λ/2)‖f_i‖²_H   (6)
Then, by substituting the Representer Theorem [cf. (5)] into (3), we obtain that optimizing in HV reduces
to optimizing over the set of N V weights:
f* = argmin_{{w_i}⊂R^N} (1/N) Σ_{n=1}^{N} Σ_{i∈V} ℓ_i(w_i^T κ_{X_i}(x_{i,n}), y_{i,n}) + (λ/2) w_i^T K_{X_i,X_i} w_i ,   (7)
where we have defined the Gram (or kernel) matrix KXi ,Xi ∈ RN ×N , with entries given by the kernel
evaluations between xi,m and xi,n as [KXi ,Xi ]m,n = κ(xi,m , xi,n ). We further define the vector of kernel
evaluations κXi (·) = [κ(xi,1 , ·) . . . κ(xi,N , ·)]T , which are related to the kernel matrix as KXi ,Xi =
[κXi (xi,1 ) . . . κXi (xi,N )]. The dictionary of training points associated with the kernel matrix is defined as
Xi = [xi,1 , . . . , xi,N ].
By exploiting the Representer Theorem, we transform a nonparametric infinite dimensional optimization problem in HV (6) into a finite N V -dimensional parametric problem (7). Thus, for empirical risk
minimization, the RKHS provides a principled framework to solve nonparametric regression problems as
a search over RV N for an optimal set of coefficients.
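As a concrete, if simplified, instance of the finite-dimensional problem (7) (our illustration, with the square loss substituted so that the optimal weights have a closed form; the helper names are hypothetical), the sketch below builds the Gram matrix and recovers kernel ridge regression weights.

import numpy as np

def gaussian_kernel_matrix(X, Z, bandwidth=0.6):
    """Gram matrix K[m, n] = kappa(X[m], Z[n]) for the Gaussian kernel."""
    sq_dists = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(Z ** 2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def kernel_ridge_weights(X, y, lam=1e-3, bandwidth=0.6):
    """Minimize (1/N)||K w - y||^2 + (lam/2) w^T K w over w in R^N.

    Setting the gradient to zero gives w = (K + (N lam / 2) I)^{-1} y,
    i.e., the familiar kernel ridge regression solution for the square loss.
    """
    N = X.shape[0]
    K = gaussian_kernel_matrix(X, X, bandwidth)
    return np.linalg.solve(K + 0.5 * N * lam * np.eye(N), y)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(50, 2))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(50)
    w = kernel_ridge_weights(X, y)
    # Predict at new points: f(z) = kappa_X(z)^T w
    Z = rng.uniform(-1, 1, size=(5, 2))
    print(gaussian_kernel_matrix(Z, X) @ w)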
However, to solve problems of the form (6) when training examples (xi,n , yi,n ) become sequentially
available or their total number N is not finite, the objective in (6) becomes an expectation over random
pairs (xi , yi ) as [11]
f* = argmin_{w_i∈R^I, {x_{i,n}}_{n∈I}} Σ_{i∈V} E_{x_i,y_i}[ℓ_i( Σ_{n∈I} w_{i,n} κ(x_{i,n}, x_i), y_i )] + (λ/2) Σ_{n,m∈I} w_{i,n} w_{i,m} κ(x_{i,m}, x_{i,n}) ,   (8)
where we substitute the Representer Theorem generalized to the infinite sample-size case established in
[19] into the objective (3) with I as some countably infinite indexing set. That is, as the data sample size
N → ∞, the representation of fi becomes infinite as well. Thus, our goal is to solve (8) in an approximate
manner such that each fi admits a finite representation near fi∗ , while satisfying the consensus constraints
fi = fj for (i, j) ∈ E (which were omitted for the sake of discussion between (6) - (8)).
III. ALGORITHM DEVELOPMENT
We turn to developing an online, iterative, and decentralized method for solving (3) when the functions
{fi }i∈V are elements of a RKHS, as detailed in Section II-B. To exploit the properties of this function
space, we require the applicability of the Representer Theorem [cf. (5)], but this result holds for any
regularized minimization problem with a convex functional. Thus, we may address the consensus constraint
fi = fj , (i, j) ∈ E in (3) by enforcing approximate consensus on estimates fi (xi ) = fj (xi ) in expectation.
This specification may be met by introducing the penalty functional
ψ_c(f) = Σ_{i∈V} E_{x_i,y_i}[ℓ_i(f_i(x_i), y_i)] + (λ/2)‖f_i‖²_H + (c/2) Σ_{j∈n_i} E_{x_i}[f_i(x_i) − f_j(x_i)]²   (9)
The reasoning for the definition (9) rather than one that directly addresses the consensus constraint
deterministically is given in Remark 1, and is best motivated by the algorithm derivation that follows. For future
reference, we also define the local penalty as
ψ_{i,c}(f_i) = E_{x_i,y_i}[ℓ_i(f_i(x_i), y_i)] + (λ/2)‖f_i‖²_H + (c/2) Σ_{j∈n_i} E_{x_i}[f_i(x_i) − f_j(x_i)]²   (10)

and we observe from (9) - (10) that ψ_c(f) = Σ_i ψ_{i,c}(f_i). Further define f_c* = argmin_{f∈H^V} ψ_c(f). We
note that in the vector-valued decision variable case, other techniques to address the constraint in (3) are
possible such as primal-dual methods [9] or dual methods [44], but the Representer Theorem has not been
established for RKHS-valued stochastic saddle point problems. It is an open question whether expressions
of the form (5) apply to problems with general functional constraints, but this matter is beyond the scope
of this work. Therefore, these other approaches which make use of Lagrange duality do not readily extend
to the nonparametric setting considered here.
Algorithm 1 Greedy Projected Penalty Method
Require: {x_t, y_t, η_t, ε_t}_{t=0,1,2,...}
initialize f_{i,0}(·) = 0, D_{i,0} = [], w_0 = [], i.e., the initial dictionary and coefficients are empty for each i ∈ V
for t = 0, 1, 2, . . . do
  loop in parallel for agent i ∈ V
    Observe local training example realization (x_{i,t}, y_{i,t})
    Send obs. x_{i,t} to nodes j ∈ n_i, receive scalar f_{j,t}(x_{i,t})
    Receive obs. x_{j,t} from nodes j ∈ n_i, send f_{i,t}(x_{j,t})
    Compute unconstrained stochastic gradient step [cf. (22)]
      f̃_{i,t+1}(·) = (1 − η_t λ) f_{i,t} − η_t ∇_{f_i} ψ̂_{i,c}(f_i(x_{i,t}), y_{i,t})
    Update params: D̃_{i,t+1} = [D_{i,t}, x_{i,t}], w̃_{i,t+1} [cf. (23)]
    Greedily compress function using matching pursuit
      (f_{i,t+1}, D_{i,t+1}, w_{i,t+1}) = KOMP(f̃_{i,t+1}, D̃_{i,t+1}, w̃_{i,t+1}, ε_t)
  end loop
end for
A. Functional Stochastic Gradient Method
Given that the data distribution P(x, y) is unknown, minimizing ψc (f ) directly via variational inference is not possible. Rather than postulate a specific distribution for (x, y), we only assume access to
sequentially available (streaming) independent and identically distributed samples (xt , yt ) from their joint
density. Then, we may wield tools from stochastic approximation to minimize (9), which in turn yields
a solution to (3). Begin by defining, ψ̂c (f (xt ), yt ), the stochastic approximation of the penalty function
ψc (f ), evaluated at a realization (xt , yt ) of the stacked random pair (x, y):
ψ̂_c(f(x_t), y_t) = Σ_{i∈V} ℓ_i(f_i(x_{i,t}), y_{i,t}) + (λ/2)‖f_i‖²_H + (c/2) Σ_{j∈n_i} (f_i(x_{i,t}) − f_j(x_{i,t}))²   (11)
and the local instantaneous penalty function ψ̂i,c (fi (xi,t ), yi,t ) similarly. To compute the functional stochastic gradient of ψc (f ) evaluated at a sample point (xt , yt ), we first address the local loss `i (fi xi,t ), yi,t )
in (11) as [22], [31]:
∇_{f_i} ℓ_i(f_i(x_{i,t}), y_{i,t})(·) = [∂ℓ_i(f_i(x_{i,t}), y_{i,t}) / ∂f_i(x_{i,t})] · [∂f_i(x_{i,t}) / ∂f_i](·)   (12)
where we have applied the chain rule. Now, define the short-hand notation
`0i (fi (xi,t ), yi,t ) := ∂`i (fi (xi,t ), yi,t )/∂fi (xi,t )
for the derivative of `i (f (xi,t ), yi,t ) with respect to its first scalar argument fi (xi,t ) evaluated at xi,t . To
evaluate the second term on the right-hand side of (12), differentiate both sides of the expression defining
the reproducing property of the kernel [cf. (4)(i)] with respect to fi to obtain
∂f_i(x_{i,t}) / ∂f_i = ∂⟨f_i, κ(x_{i,t}, ·)⟩_H / ∂f_i = κ(x_{i,t}, ·)   (13)
Then, given (12) - (13), we may compute the overall gradient of the instantaneous penalty function
ψ̂c (f (xt ), yt ) in (11) as
∇_f ψ̂_c(f(x_t), y_t) = vec[ ℓ'_i(f_i(x_{i,t}), y_{i,t}) κ(x_{i,t}, ·) + λ f_i + c Σ_{j∈n_i} (f_i(x_{i,t}) − f_j(x_{i,t})) κ(x_{i,t}, ·) ]   (14)
where on the right-hand side of (14), we have defined the vector stacking notation vec[·] to denote the
stacking of V component-wise functional gradients, each associated with function fi , i ∈ V, and used the
fact that the variation of the instantaneous approximate of the cross-node term, [fi (xi )−fj (xi )]2 , by the
same reasoning as (12) - (13), is 2[fi (xi,t )−fj (xi,t )]κ(xi,t , ·). With this computation in hand, we present
the stochastic gradient method for the λ-regularized multi-agent expected risk minimization problem in
(3) as
f_{t+1} = (1 − η_t λ) f_t − η_t vec[ ℓ'_i(f_{i,t}(x_{i,t}), y_{i,t}) κ(x_{i,t}, ·) + c Σ_{j∈n_i} (f_{i,t}(x_{i,t}) − f_{j,t}(x_{i,t})) κ(x_{i,t}, ·) ] ,   (15)
where ηt > 0 is an algorithm step-size either chosen as diminishing with O(1/t) or a small constant –
see Section IV. We may glean from (15) that the update for the network-wide function f_t decouples into ones for each agent i ∈ V, using the node-separability of the penalty ψ_c(f) = Σ_i ψ_{i,c}(f_i), i.e.,

f_{i,t+1} = (1 − η_t λ) f_{i,t} − η_t [ ℓ'_i(f_{i,t}(x_{i,t}), y_{i,t}) κ(x_{i,t}, ·) + c Σ_{j∈n_i} (f_{i,t}(x_{i,t}) − f_{j,t}(x_{i,t})) κ(x_{i,t}, ·) ] .   (16)
We further require that, given λ > 0, the step-size satisfies ηt < 1/λ and the global sequence is initialized
as f0 = 0 ∈ HV . With this initialization, the Representer Theorem (5) implies that, at time t, the function
fi,t admits an expansion in terms of feature vectors xi,t observed thus far as
f_{i,t}(x) = Σ_{n=1}^{t−1} w_{i,n} κ(x_{i,n}, x) = w_{i,t}^T κ_{X_{i,t}}(x) .   (17)
On the right-hand side of (17) we have introduced the notation Xi,t = [xi,1 , . . . , xi,t−1 ] ∈ Rp×(t−1) ,
κXi,t (·) = [κ(xi,1 , ·), . . . , κ(xi,t−1 , ·)]T , and wi,t = [wi,1 , . . . wi,t−1 ] ∈ Rt−1 . Moreover, observe that the
kernel expansion in (17), taken together with the functional update (15), yields the fact that performing the
stochastic gradient method in HV amounts to the following V parallel parametric updates on the kernel
dictionaries Xi and coefficients wi :
X_{i,t+1} = [X_{i,t}, x_{i,t}] ,
[w_{i,t+1}]_u = { (1 − η_t λ)[w_{i,t}]_u   for 0 ≤ u ≤ t − 1 ;
                 −η_t ( ℓ'_i(f_{i,t}(x_{i,t}), y_{i,t}) + c Σ_{j∈n_i} (f_{i,t}(x_{i,t}) − f_{j,t}(x_{i,t})) ) }   (18)
where the second case on the last line of (18) is for u = t. Observe that this update causes Xi,t+1 to have
one more column than X_{i,t}. We define the model order as the number of data points M_{i,t} in the dictionary of agent i at time t (the number of columns of X_{i,t}). FSGD is such that M_{i,t} = t − 1, and hence grows
unbounded with iteration index t. Next we address this intractable memory growth such that we may
execute stochastic descent through low-dimensional projections of the stochastic gradient, inspired by
[31]. First, we clarify the motivation for the choice of the penalty function (9).
Remark 1 In principle, it is possible to address the RKHS-valued consensus constraint in (3) directly,
through primal-only stochastic methods, by introducing the penalty function
ψ̃_c(f) = Σ_{i∈V} E_{x_i,y_i}[ℓ_i(f_i(x_i), y_i)] + (λ/2)‖f_i‖²_H + (c/2) Σ_{j∈n_i} ‖f_i − f_j‖²_H   (19)
Observe, however, that FSGD applied to (19), using comparable reasoning to that which leads to (16)
from (9), yields
f_{i,t+1} = (1 − η_t λ) f_{i,t} − η_t [ ℓ'_i(f_{i,t}(x_{i,t}), y_{i,t}) κ(x_{i,t}, ·) + c Σ_{j∈n_i} (f_{i,t} − f_{j,t}) ] .   (20)
Unfortunately, we cannot inductively define a parametric representation of (20) for node i in terms of its
own kernel dictionaries and weights independently of the entire function associated to node j, since the
last term in (20) lives directly in the Hilbert space. Thus, to implement (20) each agent would need to store
the entire kernel dictionary and weights of all its neighbors at each step, which is impractically costly. The
use of (9) rather than (19) is further justified by the fact that, under a hypothesis regarding the mean transformation of the local data spaces, E_{x_i}[κ(x_i, ·)], consensus with respect to the Hilbert norm, in addition to the mean square sense, is achieved when the penalty coefficient c → ∞ (see Section IV for details).
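Before introducing projections, the following sketch (ours; it assumes the square loss and that the neighbor evaluations f_{j,t}(x_{i,t}) have already been received as scalars through message passing) illustrates the parametric form (18) of one agent's unprojected functional stochastic gradient step.

import numpy as np

def gaussian_kernel_vec(D, x, bandwidth=0.6):
    """kappa_D(x): kernel evaluations between dictionary rows and a point x."""
    if D.size == 0:
        return np.zeros(0)
    return np.exp(-np.sum((D - x) ** 2, axis=1) / (2.0 * bandwidth ** 2))

def fsgd_step(D_i, w_i, x_it, y_it, neighbor_vals, eta, lam, c, bandwidth=0.6):
    """One unprojected FSGD update, cf. (18), for the square loss l(z, y) = (z - y)^2 / 2.

    D_i: (M, p) kernel dictionary, w_i: (M,) coefficients,
    neighbor_vals: list of scalars f_{j,t}(x_{i,t}) received from neighbors j.
    Returns the grown dictionary and coefficient vector.
    """
    f_it = gaussian_kernel_vec(D_i, x_it, bandwidth) @ w_i   # current estimate f_{i,t}(x_{i,t})
    loss_grad = f_it - y_it                                  # l'(f(x), y) for the square loss
    penalty_grad = c * sum(f_it - fj for fj in neighbor_vals)
    new_D = np.vstack([D_i, x_it]) if D_i.size else x_it[None, :]
    new_w = np.concatenate([(1.0 - eta * lam) * w_i,
                            [-eta * (loss_grad + penalty_grad)]])
    return new_D, new_w

if __name__ == "__main__":
    p = 2
    D_i, w_i = np.zeros((0, p)), np.zeros(0)
    rng = np.random.default_rng(2)
    for t in range(5):
        x_it, y_it = rng.standard_normal(p), rng.standard_normal()
        D_i, w_i = fsgd_step(D_i, w_i, x_it, y_it,
                             neighbor_vals=[0.0, 0.0],  # placeholder neighbor estimates
                             eta=0.1, lam=1e-3, c=0.1)
    print(D_i.shape, w_i)  # dictionary grows by one atom per sample (the curse of kernelization)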
B. Sparse Subspace Projections
To mitigate the complexity growth noted in Section III-A, we approximate the function sequence (15) by one that is orthogonally projected onto subspaces H_D ⊆ H that consist only of functions that can be represented using some dictionary D = [d_1, . . . , d_M] ∈ R^{p×M}, i.e., H_D = {f : f(·) = Σ_{n=1}^{M} w_n κ(d_n, ·) = w^T κ_D(·)} = span{κ(d_n, ·)}_{n=1}^{M}, and {d_n} ⊂ {x_u}_{u≤t}. For convenience we define κ_D(·) = [κ(d_1, ·) . . . κ(d_M, ·)], and K_{D,D} as the resulting kernel matrix from this dictionary. We enforce function parsimony by selecting dictionaries D_i with M_{i,t} ≪ O(t) for each i [31].
To be specific, we propose replacing the local update (16), in which the dictionary grows at each iteration, by its projection onto the subspace H_{D_{i,t+1}} = span{κ(d_{i,n}, ·)}_{n=1}^{M_{t+1}} as

f_{i,t+1} = argmin_{f∈H_{D_{i,t+1}}} ‖ f − ( f_{i,t} − η_t ∇_{f_i} ψ̂_{i,c}(f_i(x_{i,t}), y_{i,t}) ) ‖²_H
         := P_{H_{D_{i,t+1}}} [ (1 − η_t λ) f_{i,t} − η_t ( ∇_{f_i} ℓ_i(f_{i,t}(x_{i,t}), y_{i,t}) + c Σ_{j∈n_i} (f_{i,t}(x_{i,t}) − f_{j,t}(x_{i,t})) κ(x_{i,t}, ·) ) ] ,   (21)
where we define the projection operator P onto subspace HDi,t+1 ⊂ H by the update (21).
Coefficient update The update (21), for a fixed dictionary Di,t+1 ∈ Rp×Mt+1 , yields one in the coefficient
space only. This fact may be observed by defining the un-projected stochastic gradient step starting at
function fi,t parameterized by dictionary Di,t and coefficients wi,t :
f˜i,t+1 = fi,t − ηt ∇fi ψ̂i,c (fi (xi,t ), yi,t ) .
(22)
This update may be represented using dictionary and weights
D̃_{i,t+1} = [D_{i,t}, x_{i,t}] ,
[w̃_{i,t+1}]_u = { (1 − η_t λ)[w_{i,t}]_u   for 0 ≤ u ≤ t − 1 ;
                 −η_t ( ℓ'_i(f_{i,t}(x_{i,t}), y_{i,t}) + c Σ_{j∈n_i} (f_{i,t}(x_{i,t}) − f_{j,t}(x_{i,t})) ) }   (23)
where the last coefficient is for u = t. Note that D̃i,t+1 has M̃ = Mi,t +1 columns, which is also the length
of w̃i,t+1 . For a fixed Di,t+1 , the stochastic projection (21) is a least-squares update on the coefficient
vector: the Representer Theorem allows us to rewrite (21) in terms of kernel expansions as in Section 3.2
of [31], which yields
w_{i,t+1} = K_{D_{i,t+1} D_{i,t+1}}^{−1} K_{D_{i,t+1} D̃_{i,t+1}} w̃_{i,t+1} ,   (24)
where we define the cross-kernel matrix KDi,t+1 ,D̃i,t+1 whose (n, m)th entry is given by κ(di,n , d̃i,m ). The
other kernel matrices KD̃i,t+1 ,D̃i,t+1 and KDi,t+1 ,Di,t+1 are defined similarly. Observe that Mi,t+1 is the
number of columns in Di,t+1 , while M̃i = Mi,t + 1 is the number of columns in D̃t+1 [cf. (23)]. Given
that the local projection of f̃_{i,t+1} onto the stochastic subspace H_{D_{i,t+1}}, for a fixed node-specific dictionary D_{i,t+1}, is a least-squares problem, we now detail how the kernel dictionary D_{i,t+1} is selected from past data {x_{i,u}, y_{i,u}}_{u≤t}.
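A brief numerical sketch of the coefficient update (24) follows (our illustration; the small jitter added to the kernel solve for numerical stability is an implementation assumption, not part of (24)).

import numpy as np

def gaussian_cross_kernel(A, B, bandwidth=0.6):
    """Cross-kernel matrix with entries kappa(a_n, b_m) for rows of A and B."""
    sq = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def project_coefficients(D_new, D_tilde, w_tilde, bandwidth=0.6, jitter=1e-8):
    """Least-squares coefficients for the projection of f_tilde onto H_{D_new}, cf. (24):

        w = K_{D_new, D_new}^{-1} K_{D_new, D_tilde} w_tilde.
    """
    K_dd = gaussian_cross_kernel(D_new, D_new, bandwidth)
    K_dt = gaussian_cross_kernel(D_new, D_tilde, bandwidth)
    return np.linalg.solve(K_dd + jitter * np.eye(len(D_new)), K_dt @ w_tilde)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    D_tilde = rng.standard_normal((6, 2))     # dictionary after the gradient step, cf. (23)
    w_tilde = rng.standard_normal(6)
    D_new = D_tilde[:4]                       # a candidate pruned dictionary
    print(project_coefficients(D_new, D_tilde, w_tilde))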
Dictionary Update The selection procedure for the kernel dictionary Di,t+1 is based upon greedy compression [45]: function f˜i,t+1 defined by the stochastic gradient method without projection is parameterized
by dictionary D̃i,t+1 [cf. (23)] of model order M̃i = Mi,t + 1. We form Di,t+1 by selecting a subset of
Mi,t+1 columns from D̃i,t+1 that best approximate f˜i,t+1 in terms of Hilbert norm error, which may be
done by executing kernel orthogonal matching pursuit (KOMP) [30], [46] with error tolerance ε_t to find
a kernel dictionary matrix Di,t+1 based on the one which adds the latest sample point D̃i,t+1 . This choice
is due to the fact that we can tune its stopping criterion to guarantee stochastic descent, and guarantee
the model order of the learned function remains finite – see Section IV for details.
We now describe the variant of KOMP we propose using, called Destructive KOMP with Pre-Fitting
(see [46], Section 2.3). Begin with an input a candidate function f˜ of model order M̃ parameterized by
kernel dictionary D̃ ∈ Rp×M̃ and coefficients w̃ ∈ RM̃ . The method then approximates f˜ by a function
f ∈ H with a lower model order. Initially, this sparse approximation is the original function f = f˜ so
that its dictionary is initialized with that of the original function D = D̃, with corresponding coefficients
w = w̃. Then, the algorithm sequentially removes dictionary elements from the initial dictionary D̃,
yielding a sparse approximation f of f̃, until the error threshold ‖f − f̃‖_H ≤ ε_t is violated, in which case
it terminates. See Appendix A for further details.
We summarize the key steps of the proposed method in Algorithm 1 for solving (3) while maintaining a
finite model order, thus allowing for the memory-efficient learning of nonparametric regression functions
online in multi-agent systems. The method, Greedy Projected Penalty Method, executes the stochastic
projection of the functional stochastic gradient iterates onto sparse subspaces HDi,t+1 stated in (21). Initial
functions are set to null, f_{i,0} = 0, i.e., each agent has an empty dictionary D_{i,0} = [] and coefficient vector w_{i,0} = []. The notation [] is used to denote the empty matrix or vector of respective size p × 0 or 0. Then, at each step, given an independent training example (x_{i,t}, y_{i,t}) and step-size η_t, we compute the unconstrained functional stochastic gradient iterate (22) with respect to the instantaneous penalty function (11), which admits the parameterization D̃_{i,t+1} and w̃_{i,t+1} as stated in (23). These parameters are then fed into KOMP with approximation budget ε_t, such that (f_{i,t+1}, D_{i,t+1}, w_{i,t+1}) = KOMP(f̃_{i,t+1}, D̃_{i,t+1}, w̃_{i,t+1}, ε_t).
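The sketch below (a schematic of ours, not the reference implementation) strings these steps together for a toy network with the square loss; the dictionary-capping rule used here is a crude stand-in for the KOMP compression, which is detailed in Appendix A.

import numpy as np

def kvec(D, x, bw=0.6):
    """kappa_D(x) for the Gaussian kernel; an empty dictionary evaluates to 0."""
    return np.exp(-np.sum((D - x) ** 2, axis=1) / (2 * bw ** 2)) if len(D) else np.zeros(0)

def agent_step(D, w, x, y, nbr_vals, eta, lam, c, max_atoms):
    """One agent iteration in the spirit of Algorithm 1 for the square loss.

    The capping rule below (drop the smallest-magnitude coefficient when the
    dictionary exceeds max_atoms) is a crude stand-in for KOMP.
    """
    f_x = kvec(D, x) @ w if len(D) else 0.0
    grad = (f_x - y) + c * sum(f_x - v for v in nbr_vals)      # l' plus penalty term, cf. (18)
    D = np.vstack([D, x]) if len(D) else x[None, :]
    w = np.concatenate([(1 - eta * lam) * w, [-eta * grad]])
    if len(w) > max_atoms:                                      # stand-in for KOMP pruning
        keep = np.sort(np.argsort(np.abs(w))[1:])
        D, w = D[keep], w[keep]
    return D, w

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    V, p, T = 4, 2, 500
    ring = {i: [(i - 1) % V, (i + 1) % V] for i in range(V)}   # a toy ring network
    dicts = [np.zeros((0, p)) for _ in range(V)]
    weights = [np.zeros(0) for _ in range(V)]
    for t in range(T):
        xs = rng.uniform(-1, 1, size=(V, p))
        ys = np.sin(3 * xs[:, 0]) + 0.05 * rng.standard_normal(V)
        # message passing: each neighbor evaluates its current function at x_i
        vals = [[kvec(dicts[j], xs[i]) @ weights[j] if len(weights[j]) else 0.0
                 for j in ring[i]] for i in range(V)]
        for i in range(V):
            dicts[i], weights[i] = agent_step(dicts[i], weights[i], xs[i], ys[i],
                                              vals[i], eta=0.3, lam=1e-4, c=0.5,
                                              max_atoms=25)
    print([len(w) for w in weights])   # bounded model orders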
IV. CONVERGENCE ANALYSIS
We turn to establishing that the method presented in Algorithm 1 converges with probability 1 to the
minimizer of the penalty function ψc (f ) [cf. (9)] when attenuating algorithm step-sizes are used, and to
a neighborhood of the minimizer along a subsequence when constant step-sizes are used. Moreover, for
the latter case, the kernel dictionary that parameterizes the regression function f_i for each agent i remains
finite in the worst case. This analysis is an application of Section IV of [31], but these results, together
with the properties of the penalty function ψc (f ) allow us to establish bounds on the deviation for each
individual in the network from the common globally optimal regression function.
Before analyzing the proposed method developed in Section III, we define key quantities to simplify
the analysis and introduce standard assumptions which are necessary to establish convergence. Define the
local projected stochastic functional gradient associated with the update in (21) as
∇̃_{f_i} ψ̂_{i,c}(f_{i,t}(x_{i,t}), y_{i,t}) = ( f_{i,t} − P_{H_{D_{i,t+1}}}[ f_{i,t} − η_t ∇_{f_i} ψ̂_{i,c}(f_{i,t}(x_{i,t}), y_{i,t}) ] ) / η_t   (25)
such that the local update of Algorithm 1 [cf. (21)] may be expressed as a stochastic descent using projected functional gradients, f_{i,t+1} = f_{i,t} − η_t ∇̃_{f_i} ψ̂_{i,c}(f_{i,t}(x_{i,t}), y_{i,t}). The definitions of (25) and the local stochastic gradient ∇_{f_i} ψ̂_{i,c}(f_{i,t}(x_{i,t}), y_{i,t}) may be stacked to analyze the global convergence behavior of the algorithm. For further reference, we define the stacked projected functional stochastic gradient of the penalty function as ∇̃_f ψ̂_c(f_t(x_t), y_t) = [∇̃_{f_1} ψ̂_{1,c}(f_{1,t}(x_{1,t}), y_{1,t}); · · · ; ∇̃_{f_V} ψ̂_{V,c}(f_{V,t}(x_{V,t}), y_{V,t})]. Then the stacked global update of the algorithm is

f_{t+1} = f_t − η_t ∇̃_f ψ̂_c(f_t(x_t), y_t) .   (26)
Moreover, observe that the stochastic functional gradient in (14), based upon the fact that (xt , yt ) are
independent and identically distributed realizations of the random pair (x, y), is an unbiased estimator of
the true functional gradient of the penalty function ψc (f ) in (9), i.e.
E[ ∇_f ψ̂_c(f(x_t), y_t) | F_t ] = ∇_f ψ_c(f)   (27)
for all t. In (27), we denote as Ft the sigma algebra which measures the algorithm history for times
u ≤ t, i.e., F_t = {x_u, y_u}_{u=1}^{t}. Next, we formally state technical conditions on the loss functions, data
domain, and stochastic approximation errors that are necessary to establish convergence.
Assumption 1 The feature space X ⊂ Rp and target domain Y ⊂ R are compact, and the kernel map
may be bounded as
sup_{x∈X} √κ(x, x) = X < ∞   (28)
Assumption 2 The local losses `i (fi (x), y) are convex and differentiable with respect to the first (scalar)
argument fi (x) on R for all x ∈ X and y ∈ Y. Moreover, the instantaneous losses `i : H × X × Y → R
are Ci -Lipschitz continuous for all z ∈ R for a fixed y ∈ Y
|`i (z, y) − `i (z 0 , y)| ≤ Ci |z − z 0 |
(29)
with C := maxi Ci as the largest modulus of continuity.
Assumption 3 The projected functional gradient of the instantaneous penalty function defined by stacking
(25) has finite conditional second moments:
E[ ‖∇̃_f ψ̂_c(f_t(x_t), y_t)‖²_H | F_t ] ≤ σ²   (30)
Assumption 1 holds in most settings by the data domain itself, and justifies the bounding of the loss.
Taken together, these conditions permit bounding the optimal function fc∗ in the Hilbert norm, and imply
that the worst-case model order is guaranteed to be finite. Variants of Assumption 2 appear in the analysis
of stochastic descent methods in the kernelized setting [47], [48], and the assumption is satisfied for supervised learning problems such as logistic regression, support vector machines with the squared hinge loss, the square loss,
among others. Moreover, it is standard in the analysis of descent methods (see [49]). Assumption 3 is
common in stochastic methods, and ensures that the stochastic approximation error has finite variance.
Next we establish a few auxiliary results needed in the proof of the main results. Specifically, we
introduce a proposition which quantifies the error due to sparse projections in terms of the ratio of the
compression budget to the learning rate.
Proposition 1 Given independent realizations (x_t, y_t) of the random pair (x, y), the difference between the stacked projected stochastic functional gradient and its un-projected variant, defined by (25) and (14), respectively, is bounded as

‖∇̃_f ψ̂_c(f_t(x_t), y_t) − ∇_f ψ̂_c(f(x_t), y_t)‖_H ≤ ε_t V / η_t   (31)

where η_t > 0 denotes the algorithm step-size and ε_t > 0 is the approximation budget parameter of Algorithm 2.
Proof: See Appendix B.
With the error induced by sparse projections quantified, we may now shift focus to analyzing the
Hilbert-norm sub-optimality of the stacked iterates generated by Algorithm 1. Specifically, we have a
descent property of the sequence {ft }.
Lemma 1 (Stochastic Descent) Consider the sequence {f_t} generated by Algorithm 1 with f_0 = 0. Under Assumptions 1-3, the following expected descent relation holds:

E[ ‖f_{t+1} − f_c*‖²_H | F_t ] ≤ ‖f_t − f_c*‖²_H − 2η_t [ψ_c(f_t) − ψ_c(f_c*)] + 2ε_t V ‖f_t − f_c*‖_H + η_t² σ²   (32)
Proof: See Appendix C.
Now that Lemma 1 establishes a descent-like property, we may apply the proof of Theorem 1 in [31]
to kft − fc∗ kH with diminishing step-sizes. Thus we have the following corollary.
Corollary 1 Consider the sequence {ft } generated by Algorithm 1 with f0 = 0 and regularizer λ > 0.
Under Assumptions 1-3 and the hypothesis that the projection sets HDi,t in (21) are intersected with some
finite Hilbert-norm ball kf kH ≤ D for all t, with diminishing step-sizes and compression budget, i.e.,
Σ_{t=0}^{∞} η_t = ∞ ,   Σ_{t=0}^{∞} η_t² < ∞ ,   ε_t = η_t² ,   (33)
such that ηt < 1/λ, the sequence converges exactly to the minimizer of the penalty [cf. (9)]: ft → fc∗ with
probability 1.
To attain exact convergence to the minimizer of the penalty, f_c*, we require the compression budget determining the error ε_t incurred by sparse projections to approach null. This means that to have exact
convergence, we require the function representation to require an increasing amount of memory which is,
in the limit, of infinite complexity. In contrast, when constant step-size and compression budget are used,
then the algorithm settles to a neighborhood, as we state next.
Theorem 1 The sequence {ft } generated by Algorithm 1 with f0 = 0 and regularizer λ > 0, under
Assumptions 1-3, with constant step-size selection η_t = η < 1/λ and constant compression budget ε_t = ε = Kη^{3/2} for a positive constant K, converges to a neighborhood of f_c* with probability 1:

lim inf_t ‖f_t − f_c*‖_H ≤ (√η / λ) [ KV + √(K²V² + λσ²) ] = O(√η)   a.s.   (34)
Proof: See Appendix D.
Empirically, the use of constant step-sizes has the effect of maintaining consistent algorithm adaptivity in
the face of new data, at the cost of losing exact convergence. But this drawback is more than compensated
for by the fact that in this case we may apply Theorem 3 of [31], which guarantees the model order of
the function sequence remains finite and, in the worst case, is related to the covering number of the data domain.
[Figure 1 panels: (a) Gaussian Mixtures data; (b) Logistic Decision surface; (c) Hinge Decision surface.]
Fig. 1: Visualizations of the Gaussian mixture data set (Figure 1a) as in [24] and the learned low-memory multi-class kernel
logistic regressor of a randomly chosen agent in the network (Figure 1b), which attains 95.2% classification accuracy on a
hold-out test set. Curved black lines denote decision boundaries between classes; dotted lines denote confidence intervals; bold
black dots denote kernel dictionary elements associated to an arbitrary i ∈ V. Kernel dictionary elements concentrate at peaks
of the Gaussian clusters and near points of overlap between classes. In Figure 1c we plot the resulting decision surface learned
by kernel SVM which attains 95.7% accuracy – the state of the art.
Corollary 2 Denote ft ∈ HV as the stacked function sequence defined by Algorithm 1 with constant
step-size η_t = η < 1/λ and approximation budget ε = Kη^{3/2}, where K > 0 is an arbitrary positive scalar. Let M_t be the model order of the stacked function f_t, i.e., the number of columns of the dictionary D_t
which parameterizes ft . Then there exists a finite upper bound M ∞ such that, for all t ≥ 0, the model
order is always bounded as Mt ≤ M ∞ .
Thus, only constant step-sizes attain a reasonable tradeoff between performance relative to fc∗ and the
complexity of storing the function sequence {ft }: in this setting, we obtain approximate convergence to
fc∗ while ensuring the memory requirements are always finite, as stated in Corollary 2.
We are left to analyze the goodness of the solution fc∗ as an approximation of the solution of the original
problem (3). In particular, we establish consensus in the mean square sense. Let us start by establishing
that the penalty term is bounded by p*/c, where p* is the primal value of the optimization problem (3)
and c is the barrier parameter introduced in (9).
Proposition 2 Let Assumptions 1 - 3 hold. Let fc∗ be the minimizer of the penalty function (9) and let p∗
be the primal optimal value of (3). Then, it holds that
(1/2) Σ_{i∈V} Σ_{j∈n_i} E_{x_i}[f_{c,i}*(x_i) − f_{c,j}*(x_i)]² ≤ p*/c .   (35)
Proof: See Appendix E.
Proposition 2 establishes a relationship between the choice of penalty parameter c and constraint
satisfaction. This result may be used to attain convergence in mean square of each individual agent’s
regression function to ones which coincide with one another. Under an additional hypothesis, we obtain
exact consensus, as we state next.
Theorem 2 Let Assumptions 1-3 hold. Let f_c* be the minimizer of the penalty function (9). Then, suppose the penalty parameter c in (9) approaches infinity, c → ∞, and that the node-pair differences f_{i,c}* − f_{j,c}* are not orthogonal to the mean transformation E_{x_i}[κ(x_i, ·)] of the local input spaces x_i for all (i, j) ∈ E. Then f_{i,c}* = f_{j,c}* for all (i, j) ∈ E.
Proof: As a consequence of Proposition 2, taking the limit of (35) as c tends to infinity yields consensus in the L2 sense, i.e.,

lim_{c→∞} (1/2) Σ_{i∈V} Σ_{j∈n_i} E_{x_i}[f_{c,i}*(x_i) − f_{c,j}*(x_i)]² = 0,   (36)
[Figure 2 panels: (a) Global objective vs. samples processed; (b) Disagreement vs. samples processed; (c) Model order M_{i,t} vs. samples processed.]
Fig. 2: In Fig. 2a, we plot the global objective Σ_{i∈V} E_{x_i,y_i}[ℓ_i(f_{i,t}(x_i), y_i)] versus the number of samples processed, and observe convergence. In Fig. 2b we display the Hilbert-norm network disagreement Σ_{(i,j)∈E} ‖f_{i,t} − f_{j,t}‖²_H with a penalty parameter c that doubles every 200 samples. As c increases, agents attain consensus. In Fig. 2c, we plot the model order of a randomly chosen agent's regression function, which stabilizes to 18 after 162 samples.
which, by pulling the limit outside the sum in (36), yields
lim_{c→∞} E_{x_i}[f_{c,i}*(x_i) − f_{c,j}*(x_i)]² = 0 ,   (37)
for all (i, j) ∈ E. Consensus in the mean square sense is a less stringent constraint than equality in the Hilbert norm as desired in (3). In particular, for any (i, j) ∈ E, if f_i = f_j, then consensus in the mean square sense is satisfied as well. Then, apply the reproducing property of the kernel (4)(i) to write

0 = lim_{c→∞} E_{x_i} |⟨f_{c,i}* − f_{c,j}*, κ(x_i, ·)⟩_H|
  ≥ lim_{c→∞} |E_{x_i} ⟨f_{c,i}* − f_{c,j}*, κ(x_i, ·)⟩_H|
  = lim_{c→∞} |⟨f_{c,i}* − f_{c,j}*, E_{x_i} κ(x_i, ·)⟩_H|   (38)
where in the previous expression we pull the absolute value outside the expectation, and in the latter we apply linearity of the expectation. Thus, (38) implies consensus is achieved with respect to the Hilbert norm, whenever the function differences f_{c,i}* − f_{c,j}* are not orthogonal to E_{x_i}[κ(x_i, ·)], the mean of the transformation of the local input data x_i.
V. NUMERICAL EXPERIMENTS
We consider the task of kernel logistic regression (KLR) (Section V-A) from multi-class training data
scattered across a multi-agent system in two settings: classification of data from a Gaussian mixture model
and texture classification. In Section V-B, we consider kernel support vector machines (KSVM).2
A. Kernel Logistic Regression
For KLR, the merit of a particular regressor for agent i is quantified by its contribution to the class-conditional probability. We define a set of class-specific functions f_{i,d} : X → R, and denote them jointly
as fi ∈ HD , where {1, . . . , D} denotes the set of classes. Then, define the probabilistic model
P(y_i = d | x_i) := exp(f_{i,d}(x_i)) / Σ_{d'} exp(f_{i,d'}(x_i)) .   (39)
(Footnote 2: We thank Garrett Warnell and Ethan Stump of the U.S. Army Research Laboratory for invaluable assistance in the algorithm implementation.)
which models the odds ratio of a sample being in class d versus all others. The negative log likelihood
defined by (39) is the instantaneous loss (see, e.g., [50]) at sample (xi,n , yi,n ):
`i (fi , xi,n , yi,n ) = −log P (yi = yi,n |xi,n ).
(40)
For a given set of activation functions, the classification decision d̃ for x_i is given by the maximum likelihood estimate, i.e., d̃ = argmax_{d∈{1,...,D}} f_{i,d}(x_i).
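For concreteness, the following brief sketch (ours) evaluates the negative log-likelihood (40) of the softmax model (39) and its gradient with respect to the vector of class activations.

import numpy as np

def klr_loss_and_grad(activations, label):
    """Negative log-likelihood (40) of the softmax model (39) and its gradient
    with respect to the class activations f_{i,1..D}(x)."""
    z = activations - np.max(activations)        # shift for numerical stability
    probs = np.exp(z) / np.sum(np.exp(z))
    loss = -np.log(probs[label])
    grad = probs.copy()
    grad[label] -= 1.0                           # d loss / d f_d = P(d | x) - 1{d = y}
    return loss, grad

if __name__ == "__main__":
    f_x = np.array([0.2, -1.0, 0.7, 0.1, 0.0])   # activations for D = 5 classes
    loss, grad = klr_loss_and_grad(f_x, label=2)
    print(loss, grad, int(np.argmax(f_x)))       # maximum-likelihood class decision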
Gaussian Mixture Model Following [24], [31], we generate a data set from Gaussian mixture models, which consists of N = 5000 feature-label pairs for training and 2500 for testing. Each label y_n was drawn uniformly at random from the label set. The corresponding feature vector x_n ∈ R^p was then drawn from a planar (p = 2), equitably-weighted Gaussian mixture model, i.e., x | y ∼ (1/3) Σ_{j=1}^{3} N(µ_{y,j}, σ²_{y,j} I), where σ²_{y,j} = 0.2 for all values of y and j. The means µ_{y,j} are themselves realizations of their own
Gaussian distribution with class-dependent parameters, i.e., µy,j ∼ N (θ y , σy2 I), where {θ 1 , . . . , θ D } are
equitably spaced around the unit circle, one for each class label, and σy2 = 1.0. We fix the number of
classes D = 5, meaning that the feature distribution has, in total, 15 distinct modes. The data is plotted
in Figure 1a.
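The sketch below (ours) reproduces the data-generation recipe just described; the random seed and the uniform choice of mixture component within each class are implementation assumptions.

import numpy as np

def make_gaussian_mixture_data(N=5000, D=5, components=3, sigma_feat=0.2,
                               sigma_mean=1.0, seed=0):
    """Planar Gaussian mixture data: class anchors theta_y equitably spaced on the
    unit circle, component means mu_{y,j} ~ N(theta_y, sigma_mean * I), and
    x | y drawn from one of the class's components with covariance sigma_feat * I."""
    rng = np.random.default_rng(seed)
    thetas = np.stack([(np.cos(2 * np.pi * d / D), np.sin(2 * np.pi * d / D))
                       for d in range(D)])
    mus = thetas[:, None, :] + np.sqrt(sigma_mean) * rng.standard_normal((D, components, 2))
    y = rng.integers(0, D, size=N)
    j = rng.integers(0, components, size=N)
    x = mus[y, j] + np.sqrt(sigma_feat) * rng.standard_normal((N, 2))
    return x, y

if __name__ == "__main__":
    X, y = make_gaussian_mixture_data()
    print(X.shape, np.bincount(y))   # (5000, 2) features, roughly balanced labels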
Each agent in a V = 20 network observes a unique stream of training examples from this common data
set. Here the communications graph is a random network: edges are generated between node pairs with probability 1/5, repeatedly until we obtain a connected graph, which is then symmetrized. We run Algorithm 1 with the entire training set fed to each agent in a streaming fashion; a Gaussian kernel is used with bandwidth d = 0.6, with constant learning rate η = 3, compression budget chosen as ε = Kη^{3/2} with parsimony constant K = 0.04, mini-batch size 32, and regularizer λ = 10^{−6}. The penalty coefficient
is initialized as c = 0.01 and doubled after every 200 training examples.
We plot the results of this implementation in Figures 1b and 2. In Figure 2a, we plot the global objective Σ_{i∈V} E_{x_i,y_i}[ℓ_i(f_{i,t}(x_i), y_i)] relative to the number of training examples processed, and observe stable convergence to a global minimum. In Figure 2b we display the Hilbert-norm network disagreement Σ_{(i,j)∈E} ‖f_{i,t} − f_{j,t}‖²_H versus observed sample points. Since each regression function is initialized as null,
initially the disagreement is trivially null, but it remains small over the function sample path as model
training occurs. Moreover, the model order of an arbitrarily chosen agent i = 15 versus samples processed
is given in Figure 2c: observe that the model order stabilizes after only a couple hundred training examples
to 18, which is only a couple more than 15, the number of modes of the joint data density function. The
resulting decision surface of node 15 is given in Figure 1b, which achieves 95.2% classification accuracy
on the test set which is comparable to existing centralized batch approaches (see Table 2 of [31]) to kernel
logistic regression.
Texture Classification We generated the Brodatz data set using a subset of the images provided in
[37]. Specifically, we used 13 texture images (i.e. D=13), and from them generated a set of 256 textons
[51]. Next, for each overlapping patch of size 24-pixels-by-24-pixels within these images, we took the
feature to be the associated p = 256-dimensional texton histogram. The corresponding label was given
by the index of the image from which the patch was selected. We then randomly selected N = 10000
feature-label pairs for training and 5000 for testing. Each agent in a network with V = 5 observes a unique stream of training examples from this common data set. Here the communication graph is a random network: edges are generated between node pairs with probability 1/5, repeatedly until we obtain a connected graph, which is then symmetrized. To train the classifier we run Algorithm 1 for ten epochs: in each epoch we feed the entire training set to each agent in a streaming fashion. A Gaussian kernel is used with bandwidth σ² = 0.1, with constant learning rate η = 4, compression budget ε = Kη^{3/2} with parsimony constant K = 0.04, mini-batch size 32, and regularizer λ = 10^{−5}. The penalty coefficient is set to c = 0.02.
We plot the results of this experiment in Figure 3. In Figure 3a we display the global objective Σ_{i∈V} E_{x_i,y_i}[ℓ_i(f_{i,t}(x_i), y_i)] relative to the number of observed examples, and observe convergence to a global minimum. In Figure 3b we plot the Hilbert-norm network disagreement Σ_{(i,j)∈E} ‖f_{i,t} − f_{j,t}‖²_H.
Since the initial regression function is null for all agents, the disagreement is zero and, as observed in Figure
[Figure 3 panels: (a) Global objective vs. samples processed; (b) Disagreement vs. samples processed; (c) Model order M_{i,t} vs. samples processed.]
Fig. 3: In Fig. 3a, we plot the global objective Σ_{i∈V} E_{x_i,y_i}[ℓ_i(f_{i,t}(x_i), y_i)] versus the number of samples processed, and observe convergence. In Fig. 3b we display the Hilbert-norm network disagreement Σ_{(i,j)∈E} ‖f_{i,t} − f_{j,t}‖²_H with a penalty parameter c = 0.02. In Fig. 3c, we plot the model order of a randomly chosen agent's regression function, which stabilizes to 4299.
3b it remains small over the training. Moreover, the model order of an agent chosen at random versus
samples processed is given in Figure 3c. The resulting decision function achieves 93.5% classification accuracy over the test set, which is comparable with the accuracy of the centralized version (95.6%) [31]. However, the model order required is more than twice the model order in the centralized case (4358 on average vs. 1833 [31]). Compared to other distributed classification algorithms, the current algorithm outperforms them; for instance, D4L achieves around 75% classification accuracy [10].
B. Kernel Support Vector Machines
Now we address the problem of training a multi-class kernel support vector machine online in a multi-agent system. The merit of a particular regressor is defined by its ability to maximize its classification
margin, which may be formulated by first defining a set of class-specific activation functions fi,d : X → R,
and denote them jointly as fi ∈ HD . In Multi-KSVM, points are assigned the class label of the activation
function that yields the maximum response. KSVM is trained by taking the instantaneous loss ` to be
the multi-class hinge function which defines the margin separating hyperplane in the kernelized feature
space, i.e.,
`i (fi , xn , yn ) = max(0, 1 + fi,r (xn ) − fi,yn (xn )),
(41)
where r = argmax_{d'≠y_n} f_{i,d'}(x_n). See [50] for further details.
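As with the logistic case, the brief sketch below (ours) evaluates the multi-class hinge loss (41) and one valid subgradient with respect to the class activations.

import numpy as np

def multiclass_hinge_loss_and_subgrad(activations, label):
    """Multi-class hinge loss (41): max(0, 1 + f_r(x) - f_y(x)) with
    r = argmax_{d != y} f_d(x), and one valid subgradient in the activations."""
    scores = activations.copy()
    scores[label] = -np.inf                       # exclude the true class
    r = int(np.argmax(scores))
    margin = 1.0 + activations[r] - activations[label]
    loss = max(0.0, margin)
    grad = np.zeros_like(activations)
    if margin > 0.0:                              # hinge is active
        grad[r] = 1.0
        grad[label] = -1.0
    return loss, grad

if __name__ == "__main__":
    f_x = np.array([0.3, 0.9, -0.2, 0.5, 0.1])
    print(multiclass_hinge_loss_and_subgrad(f_x, label=3))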
We consider an implementation where each agent in a V = 20 network observes a unique stream of
training examples from the Gaussian mixtures data set (see Figure 1a). Moreover, the communications graph is fixed as a random network: edges are generated between node pairs with probability 1/5, repeatedly until we obtain a connected graph, which is then symmetrized. We run Algorithm 1 with the entire training set fed to each agent in a streaming fashion; a Gaussian kernel is used with bandwidth σ̃² = 0.6, with constant learning rate η = 3, compression budget chosen as ε = Kη^{3/2} with parsimony constant K = 0.04, mini-batch size 32, and regularizer λ = 10^{−6}. The penalty coefficient is initialized as
c = 0.01 and doubled after every 200 training examples.
We plot the results of this implementation in Figures 1c and 4. In Figure 4a, we observe that the global objective Σ_{i∈V} E_{x_i,y_i}[ℓ_i(f_{i,t}(x_i), y_i)] converges stably to a global minimum as the number of samples processed increases. In Figure 4b we display the Hilbert-norm network disagreement Σ_{(i,j)∈E} ‖f_{i,t} − f_{j,t}‖²_H versus observed sample points. Since each regression function is initialized as null, initially the
disagreement is trivially null, but it remains small over the function sample path as model training occurs,
and periodically spikes when the penalty parameter is increased. Moreover, the model order of an arbitrarily
[Figure 4 panels: (a) Global objective vs. samples processed; (b) Disagreement vs. samples processed; (c) Model order M_{i,t} vs. samples processed.]
Fig. 4: In Fig. 4a, we plot the global objective Σ_{i∈V} E_{x_i,y_i}[ℓ_i(f_{i,t}(x_i), y_i)] versus the number of samples processed, and observe convergence, albeit more noisily than for the differentiable logistic loss. In Fig. 4b we display the Hilbert-norm network disagreement Σ_{(i,j)∈E} ‖f_{i,t} − f_{j,t}‖²_H with a penalty parameter c that doubles every 200 samples. As c increases, agents attain consensus with respect to the Hilbert norm. In Fig. 4c, we plot the model order of a randomly chosen agent's regression function, which stabilizes to 22 after 354 samples. Here we obtain a slightly higher complexity classifier that achieves slightly better accuracy.
chosen agent i = 6 versus samples processed is given in Figure 4c: the model order stabilizes after only
a couple hundred training examples to 22, which is only a couple more than 15, the number of modes
of the joint data density function. The resulting decision surface of node 6 is given in Figure 1c, which
achieves 95.7% classification accuracy, which is approximately state of the art.
VI. CONCLUSION
In this paper, we extended the ideas in [31] to multi-agent settings with the intent of developing a
method such that a network of autonomous agents, based on their local data stream, may learn a kernelized
statistical model which is optimal with respect to information aggregated across the entire network. To do
so, we proposed an unusual penalty function whose structure is amenable to efficient parameterizations
when developing stochastic approximation-based updates. By applying functional stochastic gradient
method to this node-separable penalty combined with greedily constructed subspace projections, we
obtain a decentralized online algorithm for memory-efficient nonparametric function approximation that
is globally convergent. We obtain a controllable trade-off between optimality and memory requirements
through the design of the greedy subspace projections. Moreover, for large penalty parameter selections,
agents achieve consensus.
The empirical performance of this protocol, the Greedy Projected Penalty Method, yields state of the art
statistical accuracy for a team of interconnected agents learning from streaming data for both multi-class
kernel logistic regression and multi-class kernel support vector machines problems. These results provide
a mathematical and empirical foundation for accurate and stable multi-agent statistical inference in online
settings while preserving memory-efficiency.
APPENDIX A: DETAILS OF MATCHING PURSUIT
The removal procedure is as follows: at each step, a single dictionary element j of D is selected to
be removed which contributes the least to the Hilbert-norm error minf ∈HD\{j} kf˜ − f kH of the original
function f˜, when dictionary D is used. Since at each stage the kernel dictionary is fixed, this amounts
to a computation involving weights w ∈ R^{M−1} only; that is, the error of removing dictionary point d_j is computed for each j as γ_j = min_{w_{I\{j}} ∈ R^{M−1}} ‖f̃(·) − Σ_{k∈I\{j}} w_k κ(d_k, ·)‖_H. We use the notation w_{I\{j}} to
denote the entries of w ∈ RM restricted to the sub-vector associated with indices I \ {j}. Then, we define
the dictionary element which contributes the least to the approximation error as j ∗ = argminj γj . If the
error incurred by removing this kernel dictionary element exceeds the given compression budget γ_{j*} > ε_t,
Algorithm 2 Kernel Orthogonal Matching Pursuit (KOMP)
Require: function f̃ defined by dict. D̃ ∈ R^{p×M̃}, coeffs. w̃ ∈ R^{M̃}, approx. budget ε_t > 0
initialize f = f̃, dictionary D = D̃ with indices I, model order M = M̃, coeffs. w = w̃
while candidate dictionary is non-empty, I ≠ ∅, do
  for j = 1, . . . , M̃ do
    Find minimal approximation error with dictionary element d_j removed:
      γ_j = min_{w_{I\{j}} ∈ R^{M−1}} ‖f̃(·) − Σ_{k∈I\{j}} w_k κ(d_k, ·)‖_H
  end for
  Find index minimizing approx. error: j* = argmin_{j∈I} γ_j
  if minimal approx. error exceeds threshold, γ_{j*} > ε_t:
    stop
  else
    Prune dictionary D ← D_{I\{j*}}
    Revise set I ← I \ {j*}, model order M ← M − 1
    Update weights w defined by current dictionary D:
      w = argmin_{w∈R^M} ‖f̃(·) − w^T κ_D(·)‖_H
  end
end while
return f, D, w of model order M ≤ M̃ such that ‖f − f̃‖_H ≤ ε_t
the algorithm terminates. Otherwise, this dictionary element dj ∗ is removed, the weights w are revised
based on the pruned dictionary as w = argminw∈RM kf˜(·) − wT κD (·)kH , and the process repeats as long
as the current function approximation is defined by a nonempty dictionary. This procedure is summarized
in Algorithm 2.
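A compact numerical sketch of this destructive pruning loop is given below (our illustration mirroring Algorithm 2 for a Gaussian kernel; the jitter term in the kernel solves and the stopping guard at a single remaining atom are implementation assumptions).

import numpy as np

def gram(A, B, bw=0.6):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * bw**2))

def hilbert_error(D_tilde, w_tilde, D_sub, bw=0.6, jitter=1e-8):
    """min_w || f_tilde - sum_k w_k kappa(d_k, .) ||_H for a candidate sub-dictionary D_sub."""
    K_ss = gram(D_sub, D_sub, bw) + jitter * np.eye(len(D_sub))
    K_st = gram(D_sub, D_tilde, bw)
    w = np.linalg.solve(K_ss, K_st @ w_tilde)                  # least-squares fit in H
    # ||f_tilde - g||^2 expanded purely in kernel evaluations
    err2 = (w_tilde @ gram(D_tilde, D_tilde, bw) @ w_tilde
            - 2 * w @ K_st @ w_tilde + w @ K_ss @ w)
    return np.sqrt(max(err2, 0.0)), w

def komp(D_tilde, w_tilde, eps, bw=0.6):
    """Destructive KOMP with pre-fitting, in the spirit of Algorithm 2."""
    idx = list(range(len(D_tilde)))
    w = w_tilde.copy()
    while len(idx) > 1:
        errors = [hilbert_error(D_tilde, w_tilde, D_tilde[idx[:k] + idx[k+1:]], bw)[0]
                  for k in range(len(idx))]
        k_star = int(np.argmin(errors))
        if errors[k_star] > eps:                               # budget would be violated: stop
            break
        idx.pop(k_star)                                        # prune the least useful atom
        _, w = hilbert_error(D_tilde, w_tilde, D_tilde[idx], bw)
    return D_tilde[idx], w

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    D_t = rng.standard_normal((10, 2))
    w_t = rng.standard_normal(10) * 0.1
    D, w = komp(D_t, w_t, eps=0.05)
    print(len(D), "atoms retained out of", len(D_t))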
APPENDIX B: PROOF OF PROPOSITION 1
Consider the square-Hilbert-norm difference of the stacked projected stochastic gradient ∇̃_f ψ̂_c(f_t(x_t), y_t) and its un-projected variant ∇_f ψ̂_c(f_t(x_t), y_t), defined in (25) and (14), respectively,

‖∇̃_f ψ̂_c(f_t(x_t), y_t) − ∇_f ψ̂_c(f(x_t), y_t)‖²_H
  = ‖ vec[ ( f_{i,t} − P_{H_{D_{i,t+1}}}[ f_{i,t} − η_t ∇_{f_i} ψ̂_{i,c}(f_{i,t}(x_{i,t}), y_{i,t}) ] ) / η_t ] − vec[ ∇_{f_i} ψ̂_{i,c}(f_{i,t}(x_{i,t}), y_{i,t}) ] ‖²_H
  ≤ V max_{i∈V} ‖ ( f_{i,t} − P_{H_{D_{i,t+1}}}[ f_{i,t} − η_t ∇_{f_i} ψ̂_{i,c}(f_{i,t}(x_{i,t}), y_{i,t}) ] ) / η_t − ∇_{f_i} ψ̂_{i,c}(f_{i,t}(x_{i,t}), y_{i,t}) ‖²_H   (42)
where we apply the fact that the functional gradient is a concatenation of functional gradients associated
with each agent in (42) for the first equality, and for the second inequality we consider the worst-case
estimate across the network. Now, let’s focus on the term inside the Hilbert-norm on the right-hand side.
Multiply and divide ∇fi ψ̂i,c (fi,t (xi,t ), yi,t ), the last term, by ηt , and reorder terms to write
‖ ( f_{i,t} − P_{H_{D_{i,t+1}}}[ f_{i,t} − η_t ∇_{f_i} ψ̂_{i,c}(f_{i,t}(x_{i,t}), y_{i,t}) ] ) / η_t − ∇_{f_i} ψ̂_{i,c}(f_{i,t}(x_{i,t}), y_{i,t}) ‖²_H
  = ‖ (1/η_t)( f_{i,t} − η_t ∇_{f_i} ψ̂_{i,c}(f_{i,t}(x_{i,t}), y_{i,t}) ) − (1/η_t) P_{H_{D_{i,t+1}}}[ f_{i,t} − η_t ∇_{f_i} ψ̂_{i,c}(f_{i,t}(x_{i,t}), y_{i,t}) ] ‖²_H
  = (1/η_t²) ‖ f̃_{i,t+1} − f_{i,t+1} ‖²_H   (43)
where we have substituted the definition of f˜i,t+1 and fi,t+1 in (22) and (21), respectively, and pulled
the nonnegative scalar ηt outside the norm. Now, observe that the KOMP residual stopping criterion in
Algorithm 2 is ‖f̃_{i,t+1} − f_{i,t+1}‖_H ≤ ε_t, which we may apply to the last term on the right-hand side of
(43). This result with the inequality (42) yields (31).
APPENDIX C: PROOF OF LEMMA 1
Begin by considering the square of the Hilbert-norm difference between ft+1 and fc∗ = argmin ψc (f )
which minimizes (9), and expand the square to write
‖f_{t+1} − f_c*‖²_H = ‖f_t − η_t ∇̃_f ψ̂_c(f_t(x_t), y_t) − f_c*‖²_H
  = ‖f_t − f_c*‖²_H − 2η_t ⟨f_t − f_c*, ∇̃_f ψ̂_c(f_t(x_t), y_t)⟩_H + η_t² ‖∇̃_f ψ̂_c(f_t(x_t), y_t)‖²_H   (44)
Add and subtract the functional stochastic gradient of the penalty function ∇f ψ̂c (ft (xt ), yt) defined in (14)
to the second term on the right-hand side of (44) to obtain
‖f_{t+1} − f_c*‖²_H = ‖f_t − f_c*‖²_H − 2η_t ⟨f_t − f_c*, ∇_f ψ̂_c(f_t(x_t), y_t)⟩_H − 2η_t ⟨f_t − f_c*, ∇̃_f ψ̂_c(f_t(x_t), y_t) − ∇_f ψ̂_c(f_t(x_t), y_t)⟩_H + η_t² ‖∇̃_f ψ̂_c(f_t(x_t), y_t)‖²_H   (45)
We deal with the third term on the right-hand side of (45), which represents the directional error
associated with the sparse stochastic projections, by applying the Cauchy-Schwarz inequality together
with Proposition 1 to obtain
‖f_{t+1} − f_c*‖²_H ≤ ‖f_t − f_c*‖²_H − 2η_t ⟨f_t − f_c*, ∇_f ψ̂_c(f_t(x_t), y_t)⟩_H + 2ε_t V ‖f_t − f_c*‖_H + η_t² ‖∇̃_f ψ̂_c(f_t(x_t), y_t)‖²_H   (46)

Now compute the expectation of (46) conditional on the algorithm history F_t:

E[ ‖f_{t+1} − f_c*‖²_H | F_t ] ≤ ‖f_t − f_c*‖²_H + 2ε_t V ‖f_t − f_c*‖_H + η_t² σ² − 2η_t ⟨f_t − f_c*, ∇_f ψ_c(f_t)⟩_H   (47)
where we have applied the fact that the stochastic functional gradient in (14) is an unbiased estimator [cf.
(27)] for the functional gradient of the penalty function in (9), as well as the fact that the variance of the functional projected stochastic gradient is finite, as stated in (30) (Assumption 3). Observe that since ψ_c(f)
is an expectation of a convex function, it is also convex, which allows us to write
ψc (ft ) − ψc (fc∗ ) ≤ hft − fc∗ , ∇f ψc (ft )iH ,
(48)
which we substitute into the second term on the right-hand side of the relation given in (47) to obtain
E[ ‖f_{t+1} − f_c*‖²_H | F_t ] ≤ ‖f_t − f_c*‖²_H − 2η_t [ψ_c(f_t) − ψ_c(f_c*)] + 2ε_t V ‖f_t − f_c*‖_H + η_t² σ² .   (49)
Thus the claim in Lemma 1 is valid.
APPENDIX D: PROOF OF THEOREM 1
The use of the regularizer (λ/2)kf k2H in (9) implies that the penalty is λ-strongly convex in f ∈ H,
yielding
(λ/2) ‖f_t − f_c*‖²_H ≤ ψ_c(f_t) − ψ_c(f_c*)   (50)
Substituting the relation (50) into the second term on the right-hand side of the expected descent relation
stated in Lemma 1, with constant step-size η_t = η and budget ε_t = ε, yields

E[ ‖f_{t+1} − f_c*‖²_H | F_t ] ≤ (1 − ηλ)‖f_t − f_c*‖²_H + 2εV ‖f_t − f_c*‖_H + η²σ² .   (51)
The expression in (51) may be used to construct a stopping stochastic process, which tracks the suboptimality of ‖f_t − f_c*‖²_H until it reaches a specific threshold, as in the proof of Theorem 2 of [31]. In doing so, we obtain convergence to a neighborhood. We may define a stochastic process δ_t that qualifies as a supermartingale, i.e., E[δ_{t+1} | F_t] ≤ δ_t, by considering (51) and solving for the appropriate threshold by analyzing when the following holds true:

E[ ‖f_{t+1} − f_c*‖²_H | F_t ] ≤ (1 − ηλ)‖f_t − f_c*‖²_H + 2εV ‖f_t − f_c*‖_H + η²σ² ≤ ‖f_t − f_c*‖²_H ,   (52)
which may be rearranged to obtain the sufficient condition
−ηλ ‖f_t − f_c*‖²_H + 2εV ‖f_t − f_c*‖_H + η²σ² ≤ 0 .   (53)
Note that (53) defines a quadratic polynomial in kft − fc∗ kH , which, using the quadratic formula, has roots
‖f_t − f_c*‖_H = [ εV ± √(ε²V² + λη³σ²) ] / (λη)   (54)
Observe (53) is a downward-opening polynomial in kft − fc∗ kH which is nonnegative. Thus, focus on
the positive root, substituting the approximation budget selection ε = Kη^{3/2}, to define the radius of convergence as

∆ := [ εV + √(ε²V² + λη³σ²) ] / (λη) = (√η / λ) [ KV + √(K²V² + λσ²) ]   (55)
(55) allows us to construct a stopping process: define δt as
δ_t = ‖f_t − f_c*‖_H × 1{ min_{u≤t} ( −ηλ ‖f_u − f_c*‖²_H + 2εV ‖f_u − f_c*‖_H + η²σ² ) > ∆ }   (56)
where 1{E} denotes the indicator process of event E ∈ Ft . Note that δt ≥ 0 for all t, since both kft −f ∗ kH
and the indicator function are nonnegative. The rest of the proof applies the same reasoning as that of
Theorem 2 in [31]: in particular, given the definition (56), either min_{u≤t} −ηλ‖f_u − f_c*‖²_H + 2εV‖f_u − f_c*‖_H + η²σ² > ∆ holds, in which case we may compute the square root of the condition in (52) to write

E[δ_{t+1} | F_t] ≤ δ_t   (57)

Alternatively, min_{u≤t} −ηλ‖f_u − f_c*‖²_H + 2εV‖f_u − f_c*‖_H + η²σ² ≤ ∆, in which case the indicator function is null for all s ≥ t from the use of the minimum inside the indicator in (56). Thus in either case, (57) is valid, implying δ_t converges almost surely to null; as a consequence, either lim_{t→∞} ‖f_t − f_c*‖_H − ∆ = 0 or the indicator function is null for large t, i.e., lim_{t→∞} 1{ min_{u≤t} −ηλ‖f_u − f_c*‖²_H + 2εV‖f_u − f_c*‖_H + η²σ² > ∆ } = 0 almost surely. Therefore, we obtain that
lim inf_{t→∞} ‖f_t − f_c*‖_H ≤ ∆ = (√η / λ) [ KV + √(K²V² + λσ²) ]   a.s. ,   (58)
as stated in Theorem 1.
APPENDIX E: PROOF OF PROPOSITION 2
Let f_c* be the minimizer of ψ_c(f) defined in (9) and f* be the solution of the problem (3). Since the former is the minimizer of ψ_c(f), it holds that

ψ_c(f_c*) ≤ ψ_c(f*) = Σ_{i∈V} E_{x_i,y_i}[ℓ_i(f_i*(x_i), y_i)] + (λ/2)‖f_i*‖²_H + (c/2) Σ_{j∈n_i} E_{x_i}[f_i*(x_i) − f_j*(x_i)]² ,   (59)
where the equality follows from the definition of ψ_c(f) in (9). Since f* is a solution to the problem (3), it satisfies f_i* = f_j* for all (i, j) ∈ E; thus
Exi [fi∗ (xi ) − fj∗ (xi )]2 = 0 ,
(60)
for all (i, j) ∈ E. As a consequence, replacing ψ_c(f_c*) by its expression in the first equality in (59) and rearranging terms yields a bound on the constraint violation of f_c* as

(1/2) Σ_{i∈V} Σ_{j∈n_i} E_{x_i}[f_{c,i}*(x_i) − f_{c,j}*(x_i)]² ≤ (1/c) ( R(f*) − R(f_c*) ) ,   (61)
where R(f ) is the global regularized objective in (2), i.e.,
R(f) = Σ_{i∈V} E_{x_i,y_i}[ℓ_i(f_i(x_i), y_i)] + (λ/2)‖f_i‖²_H .   (62)
The fact that by definition p∗ = R(f ∗ ) yields (35).
REFERENCES
[1] A. Koppel, S. Paternain, C. Richard, and A. Ribeiro, “Decentralized efficient nonparametric stochastic optimization,” in Signal and Information Processing (GlobalSIP), 2017 IEEE Global Conference on (to appear). IEEE, 2017.
[2] M. Anthony and P. L. Bartlett, Neural network learning: Theoretical foundations. Cambridge University Press, 2009.
[3] Z. Marinho, B. Boots, A. Dragan, A. Byravan, G. J. Gordon, and S. Srinivasa, “Functional gradient motion planning in reproducing
kernel hilbert spaces,” in Proceedings of Robotics: Science and Systems, Ann Arbor, MI, July 2016.
[4] R. J. Kozick and B. M. Sadler, “Source localization with distributed sensor arrays and partial spatial coherence,” IEEE Transactions
on Signal Processing, vol. 52, no. 3, pp. 601–616, 2004.
[5] A. Koppel, J. Fink, G. Warnell, E. Stump, and A. Ribeiro, “Online learning for characterizing unknown environments in ground robotic
vehicle models,” in Proc. Int. Conf. Intelligent Robots and Systems.
[6] M. Schwager, P. Dames, D. Rus, and V. Kumar, “A multi-robot control policy for information gathering in the presence of unknown
hazards,” in Robotics Research. Springer, 2017, pp. 455–472.
[7] J. Liu, Q. Chen, and H. D. Sherali, “Algorithm design for femtocell base station placement in commercial building environments,” in
INFOCOM, 2012 Proceedings IEEE. IEEE, 2012, pp. 2951–2955.
[8] A. Ghosh and S. Sarkar, “Pricing for profit in internet of things,” in Information Theory (ISIT), 2015 IEEE International Symposium
on. IEEE, 2015, pp. 2211–2215.
[9] A. Koppel, F. Jakubiec, and A. Ribeiro, “A saddle point algorithm for networked online convex optimization,” IEEE Trans. Signal
Process., p. 15, Oct 2015.
[10] A. Koppel, G. Warnell, E. Stump, and A. Ribeiro, “D4l: Decentralized dynamic discriminative dictionary learning,” IEEE Trans. Signal
and Info. Process. over Networks, vol. (submitted), June 2017, available at http://www.seas.upenn.edu/ aribeiro/wiki.
[11] K. Slavakis, P. Bouboulis, and S. Theodoridis, “Online learning in reproducing kernel hilbert spaces,” Signal Processing Theory and
Machine Learning, pp. 883–987, 2013.
[12] J.-B. Li, S.-C. Chu, and J.-S. Pan, Kernel Learning Algorithms for Face Recognition. Springer, 2014.
[13] S. Haykin, “Neural networks: A comprehensive foundation,” 1994.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural
information processing systems, 2012, pp. 1097–1105.
[15] J. Mairal, F. Bach, and J. Ponce, “Task-driven dictionary learning,” Pattern Analysis and Machine Intelligence, IEEE Transactions on,
vol. 34, no. 4, pp. 791–804, 2012.
[16] H. Robbins and S. Monro, “A stochastic approximation method,” Ann. Math. Statist., vol. 22, no. 3, pp. 400–407, 09 1951.
[17] G. Kimeldorf and G. Wahba, “Some results on tchebycheffian spline functions,” Journal of mathematical analysis and applications,
vol. 33, no. 1, pp. 82–95, 1971.
[18] B. Schölkopf, R. Herbrich, and A. J. Smola, “A generalized representer theorem,” Subseries of Lecture Notes in Computer Science
Edited by JG Carbonell and J. Siekmann, p. 416.
[19] V. Norkin and M. Keyzer, “On stochastic optimization and statistical learning in reproducing kernel hilbert spaces by support vector
machines (svm),” Informatica, vol. 20, no. 2, pp. 273–292, 2009.
[20] Y. Engel, S. Mannor, and R. Meir, “The kernel recursive least-squares algorithm,” IEEE Transactions on Signal Processing, vol. 52,
no. 8, pp. 2275–2285, Aug 2004.
[21] W. Liu, P. P. Pokharel, and J. C. Principe, “The kernel least-mean-square algorithm,” Signal Processing, IEEE Transactions on, vol. 56,
no. 2, pp. 543–554, 2008.
[22] J. Kivinen, A. J. Smola, and R. C. Williamson, “Online Learning with Kernels,” IEEE Transactions on Signal Processing, vol. 52, pp.
2165–2176, August 2004.
[23] O. Dekel, S. Shalev-Shwartz, and Y. Singer, “The forgetron: A kernel-based perceptron on a fixed budget,” in Advances in Neural Information Processing Systems 18. MIT Press, 2006, pp. 259–266. [Online]. Available: http://research.microsoft.com/apps/pubs/default.aspx?id=78226
[24] J. Zhu and T. Hastie, “Kernel Logistic Regression and the Import Vector Machine,” Journal of Computational and Graphical Statistics,
vol. 14, no. 1, pp. 185–205, 2005.
[25] B. Dai, B. Xie, N. He, Y. Liang, A. Raj, M.-F. F. Balcan, and L. Song, “Scalable kernel methods via doubly stochastic gradients,” in
Advances in Neural Information Processing Systems, 2014, pp. 3041–3049.
[26] T. Le, V. Nguyen, T. D. Nguyen, and D. Phung, “Nonparametric budgeted stochastic gradient descent,” in Artificial Intelligence and
Statistics, 2016, pp. 654–572.
[27] T. Le, T. Nguyen, V. Nguyen, and D. Phung, “Dual space gradient descent for online learning,” in Advances in Neural Information
Processing Systems, 2016, pp. 4583–4591.
[28] J. Lu, S. C. Hoi, J. Wang, P. Zhao, and Z.-Y. Liu, “Large scale online kernel learning,” Journal of Machine Learning Research, vol. 17,
no. 47, p. 1, 2016.
[29] D. Calandriello, A. Lazaric, and M. Valko, “Second-order kernel online convex optimization with adaptive sketching,” in International
Conference on Machine Learning, 2017.
[30] Y. Pati, R. Rezaiifar, and P. Krishnaprasad, “Orthogonal Matching Pursuit: Recursive Function Approximation with Applications to
Wavelet Decomposition,” in Proceedings of the Asilomar Conference on Signals, Systems and Computers, 1993.
[31] A. Koppel, G. Warnell, E. Stump, and A. Ribeiro, “Parsimonious online learning with kernels via sparse projections in function space,”
arXiv preprint arXiv:1612.04111, 2016.
[32] B. Johansson, T. Keviczky, M. Johansson, and K. Johansson, “Subgradient methods and consensus algorithms for solving convex
optimization problems,” in Proc. of the 47th IEEE Conference on Decision and Control, Cancun, Mexico, 2008, pp. 4185–4190.
[33] S. Ram, A. Nedic, and V. Veeravalli, “Distributed stochastic subgradient projection algorithms for convex optimization,” J Optimiz.
Theory App., vol. 147, no. 3, pp. 516–545, Sep. 2010.
[34] X. Nguyen, M. J. Wainwright, and M. I. Jordan, “Nonparametric decentralized detection using kernel methods,” IEEE Transactions on
Signal Processing, vol. 53, no. 11, pp. 4053–4066, 2005.
[35] P. A. Forero, A. Cano, and G. B. Giannakis, “Consensus-based distributed support vector machines,” Journal of Machine Learning
Research, vol. 11, no. May, pp. 1663–1707, 2010.
[36] P. Chainais and C. Richard, “Learning a common dictionary over a sensor network,” in Computational Advances in Multi-Sensor
Adaptive Processing (CAMSAP), 2013 IEEE 5th International Workshop on. IEEE, 2013, pp. 133–136.
[37] P. Brodatz, Textures: A Photographic Album for Artists and Designers. Dover, 1966.
[38] S. Mukherjee and S. K. Nayar, “Automatic generation of rbf networks using wavelets,” Pattern Recognition, vol. 29, no. 8, pp. 1369–
1383, 1996.
[39] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan, “Learnability, stability and uniform convergence,” Journal of Machine
Learning Research, vol. 11, no. Oct, pp. 2635–2670, 2010.
[40] T. Evgeniou, M. Pontil, and T. Poggio, “Regularization networks and support vector machines,” Advances in computational mathematics,
vol. 13, no. 1, pp. 1–50, 2000.
[41] A. Ribeiro, “Ergodic stochastic optimization algorithms for wireless communication and networking,” IEEE Transactions on Signal
Processing, vol. 58, no. 12, pp. 6369–6386, 2010.
[42] K. Müller, T. Adali, K. Fukumizu, J. C. Principe, and S. Theodoridis, “Special issue on advances in kernel-based learning for
signal processing [from the guest editors],” IEEE Signal Process. Mag., vol. 30, no. 4, pp. 14–15, 2013. [Online]. Available:
http://dx.doi.org/10.1109/MSP.2013.2253031
[43] R. Wheeden and A. Zygmund, Measure and Integral: An Introduction to Real Analysis, ser. Chapman & Hall/CRC Pure and Applied Mathematics. Taylor & Francis, 1977. [Online]. Available: https://books.google.com/books?id=YDkDmQ hdmcC
[44] T. Suzuki, “Dual averaging and proximal gradient descent for online alternating direction multiplier method,” in Proc. 30th Int. Conf.
Machine Learning, vol. 28, no. 1, Atlanta, GA, USA, Jun. 16-21 2013, pp. 392–400.
[45] D. Needell, J. Tropp, and R. Vershynin, “Greedy signal recovery review,” in Signals, Systems and Computers, 2008 42nd Asilomar
Conference on. IEEE, 2008, pp. 1048–1050.
[46] P. Vincent and Y. Bengio, “Kernel matching pursuit,” Machine Learning, vol. 48, no. 1, pp. 165–187, 2002.
[47] M. Pontil, Y. Ying, and D. xuan Zhou, “Error analysis for online gradient descent algorithms in reproducing kernel hilbert spaces,”
Tech. Rep., 2005.
[48] Y. Ying and D. X. Zhou, “Online regularized classification algorithms,” IEEE Transactions on Information Theory, vol. 52, no. 11, pp.
4775–4788, Nov 2006.
[49] Y. Nesterov, “Introductory lectures on convex programming volume i: Basic course,” 1998.
[50] K. Murphy, Machine Learning: A Probabilistic Perspective. MIT press, 2012.
[51] T. Leung and J. Malik, “Representing and Recognizing the Visual Appearance of Materials using Three-dimensional Textons,” International Journal of Computer Vision, vol. 43, no. 1, pp. 29–44, 1999.
A note on the gap between rank and border rank
Jeroen Zuiddam
arXiv:1504.05597v2 [] 23 Mar 2017
Centrum Wiskunde & Informatica, Science Park 123, Amsterdam
Abstract
We study the tensor rank of the tensor corresponding to the algebra of n-variate
complex polynomials modulo the dth power of each variable. As a result we
find a sequence of tensors with a large gap between rank and border rank, and
thus a counterexample to a conjecture of Rhodes. At the same time we obtain a
new lower bound on the tensor rank of tensor powers of the generalised W-state
tensor. In addition, we exactly determine the tensor rank of the tensor cube of
the three-party W-state tensor, thus answering a question of Chen et al.
Keywords: tensor rank, border rank, algebraic complexity theory, quantum
information theory, W-state.
2010 MSC: 68Q17, 15A69, 16Z05
1. Introduction
Let V1 , . . . , Vk be finite-dimensional complex vector spaces and consider the
vector space V := V1 ⊗· · ·⊗Vk of k-tensors. A tensor of the form v1 ⊗v2 ⊗· · ·⊗vk
in V is called simple. The tensor rank $R(t)$ of a tensor $t \in V$ is the smallest number r such that t can be written as a sum of r simple tensors. The border rank $\underline{R}(t)$ of t is the smallest number r such that t is the limit, in the Euclidean topology, of a sequence of tensors in V of rank at most r. Clearly, $\underline{R}(t) \le R(t)$, and already in the small space $\mathbb{C}^2\otimes\mathbb{C}^2\otimes\mathbb{C}^2$ there exist tensors with $\underline{R}(t) < R(t)$.
Tensor rank plays a fundamental role in various problems in modern applied
mathematics. One famous example is the problem of deciding the complexity of
matrix multiplication [1]. We refer to [2] for more examples of applications of
tensor rank. While the tensor rank of a 2-tensor (matrix rank) can be efficiently
computed, computing the tensor rank of a k-tensor is NP-hard when k ≥ 3
[3, 4, 5]. The border rank notion is important for at least the following two
reasons. Unlike tensor rank, border rank is defined by polynomial equations.
One approach for computing a lower bound for the tensor rank of a tensor is
thus to find the relevant border rank equation and then verify that the tensor
does not satisfy the equation. This strategy was for example used in [6]. On the
other hand, border rank upper bounds can in some situations be turned into
Email address: [email protected] (Jeroen Zuiddam)
nontrivial rank upper bounds, especially when one is interested in the asymptotic
behaviour of tensor rank when taking large tensor powers of a fixed tensor. This
idea is crucial in, for example, the laser method of Strassen [7] and all later
improvements of this method, see for example [8].
This paper is motivated by the following basic question about tensor rank
and border rank.
Problem 1. What is the maximal ratio $R(t)/\underline{R}(t)$ for a k-tensor t in $(\mathbb{C}^n)^{\otimes k}$?
Our main result is the following lower bound. Let e0 , e1 be the standard
basis of C2 . Define Wk to be the tensor e1 ⊗ e0 ⊗ · · · ⊗ e0 + e0 ⊗ e1 ⊗ · · · ⊗ e0 +
· · · + e0 ⊗ · · · ⊗ e0 ⊗ e1 living in (C2 )⊗k . This tensor is known as the generalised
W-state tensor in quantum information theory.
Theorem 2. Let $k \ge 3$. There exists an explicit sequence of k-tensors $t_n \in (\mathbb{C}^{2^n})^{\otimes k}$ such that
$\frac{R(t_n)}{\underline{R}(t_n)} \ge k - o(1),$
when n goes to infinity. Namely, let $t_n = W_k^{\otimes n} \in (\mathbb{C}^{2^n})^{\otimes k}$, the n-fold tensor Kronecker product of $W_k$. Then $\underline{R}(t_n) = 2^n$ and $R(t_n) \ge k\cdot 2^n - o(2^n)$, when n goes to infinity.
We obtain Theorem 2 by applying a tensor rank lower bound of Bläser to the tensor corresponding to the algebra $A_{d,n} := \mathbb{C}[x_1,\dots,x_n]/(x_1^d,\dots,x_n^d)$ of n-variate complex polynomials modulo the dth power of each variable. This in turn leads to the aforementioned lower bound on the tensor rank of tensor powers of the generalised W-state tensor $W_k$. Our bound improves the lower bound $R(W_k^{\otimes n}) \ge (k-1)\cdot 2^n - k + 2$ of Chen et al. [9].
We note that it is a major open problem to find explicit tensors $t \in (\mathbb{C}^n)^{\otimes 3}$ with $R(t) \ge (3+\varepsilon)n$ for any $\varepsilon > 0$ [10]. There are explicit tensors $t \in (\mathbb{C}^n)^{\otimes 3}$ known with $R(t) \ge (3-o(1))n$ when n goes to infinity; see [11, Theorem 2].
Related work. De Silva and Lim show that for a 3-tensor t the difference between tensor rank and border rank, $R(t) - \underline{R}(t)$, can be arbitrarily large [12]. However, their result implies a lower bound of only 3/2 on the maximal ratio $R(t)/\underline{R}(t)$ for t a 3-tensor.
Allman et al. give explicit tensors Kn in Cn ⊗ Cn ⊗ Cn of border rank n
and rank 2n − 1 [13]; a rank to border rank ratio that converges to 2. They
provide references to other tensors with similar rank and border rank behaviour.
We note that the tensor Kn is essentially the tensor of the algebra C[x]/(xn ).
It was conjectured by Rhodes that the rank of a tensor in Cn ⊗ Cn ⊗ Cn of
border rank n is at most 2n − 1 [14, Conjecture 0]. Theorem 2 shows that this
conjecture is false.
Independently of the author and with different techniques, Landsberg and
Michałek have recently constructed a sequence of 3-tensors with a ratio of rank
to border rank converging to 5/2, thus also disproving the above conjecture [15].
As is also mentioned in [15], we note that for any k ≥ 3, the tensor Wk ∈
(C2 )⊗k has border rank 2 and rank k, thus giving a rank to border rank ratio
of k/2, see the proof of Theorem 2.
As pointed out by an anonymous reviewer, a lower bound on the maximal ratio between rank and border rank for 3-tensors (k = 3), similar to the one in Theorem 2 of this paper, can also be obtained as follows. Let $I_d(x_1,\dots,x_n) \subseteq \mathbb{C}[x_1,\dots,x_n]$ be the ideal generated by all monomials of degree d. Bläser [11] proves that the tensor corresponding to the algebra $P_{n,d}$ defined as $\mathbb{C}[x_1,\dots,x_n]/I_{d+1}(x_1,\dots,x_n)$ has tensor rank $R(P_{n,d}) \ge (3-o(1))\dim(P_{n,d})$. Moreover, the ideal $I_{d+1}(x_1,\dots,x_n)$ is a so-called monomial ideal and is therefore “smoothable”. It turns out (see [16]) that associative unital algebras defined by smoothable ideals (like $P_{n,d}$) have “minimal border rank”, which in this case means that $\underline{R}(P_{n,d}) = \dim P_{n,d}$. Combining these observations yields, for any $d > 1$, an explicit sequence of 3-tensors $t_n \in (\mathbb{C}^{\dim(P_{n,d})})^{\otimes 3}$ such that $R(t_n)/\underline{R}(t_n) \ge 3 - o(1)$ when n goes to infinity. Note that the algebra $P_{n,d}$ is slightly different from the algebra $A_{d,n}$ that we study here.
Very little is known about general upper bounds on the rank to border rank
ratio. We are only aware of the following bound that can be deduced from a
result of Lehmkuhl and Lickteig [17] and Proposition 15.26 in [1]. For any tensor
t ∈ Cn ⊗ Cn ⊗ Cn we have R(t)/ R(t) ≤ 2 · 9(n−1) R(t) + 1.
Outline. This paper is organised as follows. First we introduce the algebra Ad,n
and for the corresponding tensor study its border rank and tensor rank. Then
we observe that this tensor specialises to powers of the W-state tensor, yielding
the gap between rank and border rank given above. Finally, we compute the
tensor rank of the tensor cube of the three-party W-state tensor.
2. The algebra Ad,n
Many examples of interesting 3-tensors come from algebras. A complex
algebra is a complex vector space V together with a multiplication defined by a
bilinear map φ : V × V → V . An algebra is called associative if φ(φ(u, v), w) =
φ(u, φ(v, w)) for all u, v, w ∈ V . An algebra is called unital if there is an element
e ∈ V such that φ(e, v) = φ(v, e) = v for all v ∈ V . Let e1 , e2 , . . . be a basis
of V and e∗1 , e∗2 , . . . the dual basis. We can naturally view the algebra (V, φ) as a
tensor in $V \otimes V \otimes V$ by
$\varphi \mapsto \sum_{i,j,k} e_k^*\bigl(\varphi(e_i, e_j)\bigr)\, e_i \otimes e_j \otimes e_k,$
called the structure tensor. In this way we can speak about the tensor rank and
border rank of an algebra. There are many results on the tensor rank and border
rank of algebras, in particular of the algebra of n × n matrices, for which we
refer to [1] and [18]. For results on the tensor rank and border rank of general
tensors we refer to [2]. In this section we will study the complex associative
unital algebra
$A_{d,n} := (\mathbb{C}[x]/(x^d))^{\otimes n} = \mathbb{C}[x_1,\dots,x_n]/(x_1^d,\dots,x_n^d),$
of n-variate complex polynomials modulo the dth power of each variable.
2.1. Border rank
A tensor t in V1 ⊗ · · · ⊗ Vk is called 1-concise if there does not exist a
proper subspace U1 ⊆ V1 such that t ∈ U1 ⊗ V2 ⊗ · · · ⊗ Vk . Similarly, we define
i-conciseness for i ∈ {2, . . . , k}. A tensor is called concise if it is i-concise for
all i. We can think of a concise tensor as a tensor that “uses” all dimensions
of the local spaces $V_i$. Tensors of unital algebras are concise. For a concise tensor t in $V_1\otimes\cdots\otimes V_k$ the border rank is at least $\max_i \dim V_i$ [1, Lemma 15.23]. The following proposition is a direct consequence of the well-known fact that $\underline{R}(\mathbb{C}[x]/(x^d)) = d$; see [1, Example 15.20].
Proposition 3. $\underline{R}(A_{d,n}) = d^n$.
Proof. The algebra $A_{d,n}$ is unital. Therefore, the corresponding tensor $A_{d,n} \in \mathbb{C}^{d^n}\otimes\mathbb{C}^{d^n}\otimes\mathbb{C}^{d^n}$ is concise. This implies that $\underline{R}(A_{d,n}) \ge d^n$. On the other hand, border rank is submultiplicative under tensor products, so $\underline{R}(A_{d,n}) = \underline{R}\bigl((\mathbb{C}[x]/(x^d))^{\otimes n}\bigr) \le \underline{R}(\mathbb{C}[x]/(x^d))^n = d^n$.
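To make the conciseness argument concrete, the following small Python sketch (illustrative code added here, not part of the original paper; it assumes only numpy) builds the structure tensor of $A_{d,1} = \mathbb{C}[x]/(x^d)$ in the monomial basis, forms tensor Kronecker powers to obtain the structure tensor of $A_{d,n}$, and checks that all three flattenings have full rank $d^n$, which is exactly the conciseness used in the proof above.

import numpy as np

def structure_tensor_xd(d):
    # structure tensor of C[x]/(x^d) in the basis 1, x, ..., x^{d-1}:
    # T[i, j, k] = 1 iff x^i * x^j = x^k in the quotient, i.e. k = i + j < d
    T = np.zeros((d, d, d))
    for i in range(d):
        for j in range(d - i):
            T[i, j, i + j] = 1.0
    return T

def tensor_kron(S, T):
    # tensor (Kronecker) product of two 3-tensors: the structure tensor of the
    # tensor product of the corresponding algebras
    a, b = S.shape[0], T.shape[0]
    return np.einsum('ijk,pqr->ipjqkr', S, T).reshape(a*b, a*b, a*b)

def flattening_ranks(T):
    # ranks of the three one-versus-rest flattenings; the tensor is concise
    # exactly when each rank equals the corresponding local dimension
    n0, n1, n2 = T.shape
    return (np.linalg.matrix_rank(T.reshape(n0, n1*n2)),
            np.linalg.matrix_rank(np.moveaxis(T, 1, 0).reshape(n1, n0*n2)),
            np.linalg.matrix_rank(np.moveaxis(T, 2, 0).reshape(n2, n0*n1)))

d, n = 2, 3
A = structure_tensor_xd(d)
for _ in range(n - 1):
    A = tensor_kron(A, structure_tensor_xd(d))
print(A.shape, flattening_ranks(A))   # (8, 8, 8) and ranks (8, 8, 8), i.e. d^n = 8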
2.2. Rank lower bound
Let (V, φ) be a complex finite-dimensional associative unital algebra. A subspace $I \subseteq V$ is called a left-ideal if $\varphi(V, I) = I$. A left-ideal I is called nilpotent if $I^n = \{0\}$ for some positive integer n. The nilradical of V is the sum of all nilpotent left-ideals in V.
Theorem 4 ([18, Theorem 7.4]). Let A be a finite-dimensional complex associative unital algebra and let N be the nilradical of A. For any integer $m \ge 1$,
$R(A) \ge \dim(A) - \dim(N^{2m-1}) + 2\dim(N^m).$
We will apply Theorem 4 to the algebra Ad,n . Let us first look at a small
example.
Example 5. Consider the algebra $A := A_{2,2} = \mathbb{C}[x_1,x_2]/(x_1^2, x_2^2)$ of dimension 4. The elements of A are of the form $\alpha + \beta x_1 + \gamma x_2 + \delta x_1 x_2$ with $\alpha, \beta, \gamma, \delta \in \mathbb{C}$. The nilradical $N \subseteq A$ is the subspace spanned by $\{x_1, x_2, x_1x_2\}$, and hence has dimension 3. The square of the nilradical, $N^2$, is spanned by $x_1x_2$ and hence has dimension 1. Taking m = 1, Theorem 4 gives $R(A) \ge 4 - 3 + 2\cdot 3 = 7$.
We use extended binomial coefficients to get a handle on the dimension of powers of the radical of $A_{d,n}$. Let $\binom{n}{b}_d$ be the number of ways to put b balls into n containers with at most d balls per container. This equals the number of monomials of degree b in $\mathbb{C}[x_1,\dots,x_n]/(x_1^{d+1},\dots,x_n^{d+1})$.
Lemma 6. For any $0 \le q < 1/2$,
$\sum_{b=0}^{\lfloor qnd\rfloor} \binom{n}{b}_d \Big/ (d+1)^n \to 0 \quad\text{as}\quad n \to \infty.$
Proof. Fix d. The limit
$h_d(\rho) := \lim_{n\to\infty} \frac{1}{n}\ln\binom{n}{\rho n}_d, \quad 0 \le \rho \le d,$
exists; the function $h_d$ is strictly concave, unimodal, and reaches its maximum $\ln(d+1)$ at the point $\rho = d/2$ [19]. Let $0 \le q < 1/2$ and let $\rho := qd$. For all $\rho < d/2$, $h_d(\rho) < \ln(d+1)$, so there is an $\varepsilon > 0$ such that $h_d(\rho) + \varepsilon < \ln(d+1)$. For n big enough,
$\binom{n}{\rho n}_d \le \exp\bigl((h_d(\rho)+\varepsilon)n\bigr)$
and thus
$\sum_{b=0}^{\lfloor qnd\rfloor}\binom{n}{b}_d \Big/ (d+1)^n \le (qnd+1)\exp\bigl((h_d(\rho)+\varepsilon-\ln(d+1))n\bigr),$
which goes to zero as n goes to infinity.
We note that the case d = 1 can also easily be obtained from the well-known inequality
$\sum_{b=0}^{\lfloor qn\rfloor}\binom{n}{b} \le 2^{H(q)n},$
where $H(q) = -q\log_2 q - (1-q)\log_2(1-q)$ is the binary entropy of q; see for example [20, Lemma 16.19].
Proposition 7. Let $n \ge 1$, $d \ge 2$ be integers. Then
$R(A_{d,n}) \ge 2d^n + \max_{m\ge 1}\Bigl[\sum_{b=0}^{2m-2}\binom{n}{b}_{d-1} - 2\sum_{b=0}^{m-1}\binom{n}{b}_{d-1}\Bigr] \ge 3d^n - o(d^n).$
Proof. The nilradical N of $A_{d,n}$ is the ideal generated by $x_1,\dots,x_n$; that is, N is the subspace of $A_{d,n}$ of elements with zero constant term. The mth power $N^m$ is the subspace spanned by the monomials of degree at least m, hence the dimension of $N^m$ equals $d^n - \sum_{b=0}^{m-1}\binom{n}{b}_{d-1}$. Theorem 4 then gives, for any $m \ge 1$,
$R(A_{d,n}) \ge d^n - \Bigl(d^n - \sum_{b=0}^{2m-2}\binom{n}{b}_{d-1}\Bigr) + 2\Bigl(d^n - \sum_{b=0}^{m-1}\binom{n}{b}_{d-1}\Bigr) = 2d^n + \sum_{b=0}^{2m-2}\binom{n}{b}_{d-1} - 2\sum_{b=0}^{m-1}\binom{n}{b}_{d-1}.$
If $2m-1 \le n(d-1)$, then
$\sum_{b=0}^{2m-2}\binom{n}{b}_{d-1} = d^n - \sum_{b=2m-1}^{n(d-1)}\binom{n}{b}_{d-1} = d^n - \sum_{b=0}^{n(d-1)-(2m-1)}\binom{n}{b}_{d-1},$
where the last equality uses the symmetry $\binom{n}{b}_{d-1} = \binom{n}{n(d-1)-b}_{d-1}$, so
$R(A_{d,n}) \ge 3d^n - \sum_{b=0}^{n(d-1)-(2m-1)}\binom{n}{b}_{d-1} - 2\sum_{b=0}^{m-1}\binom{n}{b}_{d-1}.$
One checks that for any n large enough there exists an $m \ge 1$ such that $2m-1 \le n(d-1)$, $n(d-1)-(2m-1) < \tfrac12 n(d-1)$ and $m-1 < \tfrac12 n(d-1)$. Therefore, with Lemma 6, we obtain the inequality $R(A_{d,n}) \ge 3d^n - o(d^n)$.
For computations, the following lemma is useful.
Lemma 8. For integers $b \ge 1$, $n \ge 1$, $d \ge 2$,
$\binom{n}{b}_{d-1} = \sum_{i=0}^{\min(n,\lfloor b/d\rfloor)} (-1)^i \binom{n}{i}\binom{b+n-1-i\cdot d}{n-1}.$
Proof. Let $X := \{$ways to put b balls into n containers$\}$ and for $j \in [n]$ let $A_j := \{$ways to put b balls into n containers such that container j has at least d balls$\} \subset X$. By the inclusion-exclusion principle [21, Proposition 1.13], the number of elements of X which lie in none of the subsets $A_j$ is
$\sum_{I\subseteq\{1,\dots,n\}} (-1)^{|I|}\Bigl|\bigcap_{j\in I} A_j\Bigr| = \sum_{I\subseteq\{1,\dots,n\}} (-1)^{|I|}\binom{b+n-1-|I|\cdot d}{n-1}.$
Now use that there are $\binom{n}{|I|}$ subsets of size $|I|$ in $\{1,\dots,n\}$. The statement about the special case d = 2 follows immediately from the definition.
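Lemma 8 makes the bound of Proposition 7 straightforward to evaluate. The short Python sketch below (added for illustration; the function names are ours, not the paper's) computes the extended binomial coefficients via the inclusion-exclusion formula of Lemma 8 and then the lower bound of Proposition 7; for instance, it returns 7 for (d, n) = (2, 2) and 18 for (d, n) = (3, 2), matching Table 1 below.

from math import comb

def ext_binom(n, b, d):
    # C(n, b)_{d-1}: number of ways to put b balls into n containers with at
    # most d-1 balls per container, via the formula of Lemma 8
    return sum((-1)**i * comb(n, i) * comb(b + n - 1 - i*d, n - 1)
               for i in range(min(n, b // d) + 1))

def prop7_bound(d, n):
    # lower bound on R(A_{d,n}) from Proposition 7
    candidates = []
    for m in range(1, n*(d - 1) + 2):
        candidates.append(sum(ext_binom(n, b, d) for b in range(2*m - 1))
                          - 2*sum(ext_binom(n, b, d) for b in range(m)))
    return 2 * d**n + max(candidates)

print(prop7_bound(2, 2), prop7_bound(3, 2))   # 7 and 18, as in Table 1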
In the table below we list some values of the lower bound of Proposition 7.
d \ n      1      2      3       4       5       6
  2        3      7      15      33      68      141
  3        5      18     57      182     576     1773
  4        7      33     142     601     2507    10356
  5        9      53     285     1509    7824    40329
  6        11     78     501     3166    19782   121971
Table 1: Lower bounds for $R(A_{d,n})$ from Proposition 7. The bold numbers are known to be sharp, see Theorem 15.
2.3. Rank upper bound
It is well-known that upper bounds on border rank imply upper bounds on
rank. Proposition 3 implies the following upper bound on R(Ad,n ). We will not
use the upper bound later, but it provides some context for the lower bound of
Proposition 7.
Proposition 9. $R(A_{d,n}) \le (nd+1)d^n$.
Proof. The statement follows from the proof of Theorem 5 in [22], using that the
error degree in the degeneration of the dth unit tensor to Ad,n is d [1, Example
15.20].
3. Generalised W-state tensor
In quantum information theory, the generalised W-state tensor Wk is the
tensor in (C2 )⊗k defined by
Wk := e1 ⊗ e0 ⊗ · · · ⊗ e0 + e0 ⊗ e1 ⊗ · · · ⊗ e0 + · · · + e0 ⊗ · · · ⊗ e0 ⊗ e1 .
It is not hard to check that, in a particular basis, the tensor of the algebra $A_{2,1} = \mathbb{C}[x]/(x^2)$ equals $W_3$. Therefore, $R(A_{2,n}) = R(W_3^{\otimes n})$ and $\underline{R}(A_{2,n}) = \underline{R}(W_3^{\otimes n})$. By the following proposition, lower bounds for $R(W_3^{\otimes n})$ give lower bounds for $R(W_k^{\otimes n})$.
Proposition 10 ([9]). $R(W_k^{\otimes n}) \ge R(W_3^{\otimes n}) + (k-3)(2^n - 1)$.
Theorem 11.
$R(W_k^{\otimes n}) \ge (k-1)2^n + \max_{m\ge 1}\Bigl[\sum_{b=0}^{2m-2}\binom{n}{b} - 2\sum_{b=0}^{m-1}\binom{n}{b}\Bigr] - (k-3) \ge k\cdot 2^n - o(2^n).$
Proof. Combine Proposition 10 with Proposition 7 for $A_{2,n}$.
Chen et al. give the lower bound $R(W_k^{\otimes n}) \ge (k-1)2^n - k + 2$, which they obtain by combining the lower bound $R(A_{2,n}) \ge 2^{n+1} - 1$ with Proposition 10 [9]. Theorem 11 improves the lower bound of Chen et al. The best upper bound so far is $R(W_k^{\otimes n}) \le (n(k-1)+1)2^n$ [22].
Below we list some values of the lower bound of Theorem 11. The first two columns are, in fact, sharp [9]. In Section 5 we will prove the equality $R(W_3^{\otimes 3}) = 16$. Therefore, the lower bound of Theorem 11 is not sharp in general.
k \ n     1     2     3     4      5      6      7      8      9      10
  3       3     7     15    33     68     141    297    601    1230   2544
  4       4     10    22    48     99     204    424    856    1741   3567
  5       5     13    29    63     130    267    551    1111   2252   4590
  6       6     16    36    78     161    330    678    1366   2763   5613
  7       7     19    43    93     192    393    805    1621   3274   6636
  8       8     22    50    108    223    456    932    1876   3785   7659
  9       9     25    57    123    254    519    1059   2131   4296   8682
  10      10    28    64    138    285    582    1186   2386   4807   9705
Table 2: Lower bounds for $R(W_k^{\otimes n})$ from Theorem 11. The bold numbers are known to be sharp [9].
4. Gap between rank and border rank
Our main result, Theorem 2, follows easily from Theorem 11.
Proof of Theorem 2. By Proposition 3, $\underline{R}(W_k^{\otimes n}) = 2^n$. By Theorem 11, therefore,
$\frac{R(W_k^{\otimes n})}{\underline{R}(W_k^{\otimes n})} \ge k - \frac{o(2^n)}{2^n},$
when n goes to infinity.
5. Tensor cube of the W-state tensor
It is known that the tensor rank of $W := W_3$ equals 3 and the tensor rank of the tensor square $W^{\otimes 2}$ equals 7; see [23, Lemma 3]. For the tensor cube $W^{\otimes 3}$ the tensor rank was known to be either 15 or 16 [9, Theorem 4]. We will prove the following.
Theorem 12. The tensor rank of W ⊗3 equals 16.
In the following, algebra means complex finite-dimensional associative algebra.
Let (V, φ) be an algebra. A subspace I ⊂ V is called a two-sided ideal if
φ(I, V ) = φ(V, I) = I. A two-sided ideal I is called maximal if for all two-sided
ideals J with I ⊆ J ⊆ V we have J = I or J = V . Similarly for left-ideals.
Theorem 13 (Alder-Strassen bound [1, Theorem 17.14]). Let A be an algebra
with t maximal two-sided ideals. Then R(A) ≥ 2 dim A − t.
Definition 14. Let A be an algebra with t maximal two-sided ideals. We say
A has minimal rank if R(A) = 2 dim A − t.
There is a structural description of the algebras of minimal rank [24]. We will
only need the following special case. A simply generated algebra is an algebra of
the form C[x]/(f ) for some nonconstant polynomial f ∈ C[x]. A generalised null
algebra is an algebra A such that there exist nilpotent elements w1 , . . . , ws ∈ A
with wi wj = 0 if i 6= j and A = C[w1 , . . . , ws ]. A local algebra is an algebra
with a unique maximal left-ideal. The radical rad A of A is the intersection of
all maximal left-ideals of A. The radical is a two-sided nilpotent ideal (see for
example [25]).
Theorem 15 ([1, Theorem 17.38]). A commutative local algebra is of minimal
rank if and only if it is a simply generated algebra or a generalised null algebra.
Lemma 16 (Nakayama’s lemma). Let A be an algebra such that $A/\operatorname{rad} A \cong \mathbb{C}$. Then A can be generated as an algebra by $p := \dim \operatorname{rad} A/(\operatorname{rad} A)^2$ elements in $\operatorname{rad} A$; that is, there are $w_1,\dots,w_p \in \operatorname{rad} A$ such that $A = \mathbb{C}\{w_1,\dots,w_p\}$. This p is minimal.
We repeat a proof found in [26].
Proof. Let $N := \operatorname{rad} A$. Let $w_1,\dots,w_p \in N$ be such that $w_1 + N^2, \dots, w_p + N^2$ is a $\mathbb{C}$-basis for $N/N^2$. One can show by induction that for any $r \ge 1$,
$\{w_{i_1}\cdots w_{i_r} + N^{r+1} \mid 1 \le i_1,\dots,i_r \le p\}$
generates $N^r/N^{r+1}$ as a vector space over $\mathbb{C}$. Using that $A/N \cong \mathbb{C}$ (so $A = \mathbb{C}\oplus N$) and that N is nilpotent, we get $A = \mathbb{C}\{w_1,\dots,w_p\}$.
Suppose p is not minimal. Then there is a $q < p$ and a surjective morphism of algebras $\phi: \mathbb{C}\{X_1,\dots,X_q\} \twoheadrightarrow A$. Without loss of generality, we may assume that $\phi(X_i)$ is in N, since otherwise we can use the decomposition $A = \mathbb{C}\oplus N$ to map $\phi(X_i)$ to N. The set $\{\phi(X_i) + N^2 \mid 1 \le i \le q\}$ is too small to generate $N/N^2$, and $\phi$ maps monomials of degree $\ge 2$ in $\mathbb{C}\{X_1,\dots,X_q\}$ to $N^2$. Therefore, $\phi$ is not surjective, a contradiction.
Proof of Theorem 12. The W-state tensor is the structure tensor of the algebra $W := \mathbb{C}[x]/(x^2)$. Consider the algebra
$A := W^{\otimes 3} = \mathbb{C}[x,y,z]/(x^2, y^2, z^2).$
It is not hard to see that A is a local algebra with maximal ideal $(x, y, z)$. Let N be the radical $\operatorname{rad} A = (x, y, z)$. By the Alder–Strassen bound (Theorem 13) we have
$R(A) \ge 2\dim A - 1 = 15.$
We will show that A is not of minimal rank. The following type of argument
has been used before by Büchi to compute ranks of certain local algebras of
dimension at most 5 [26]. Suppose A has minimal rank. By Nakayama’s lemma
(Lemma 16), the algebra A can be generated by dim rad A/(rad A)2 = 3 elements
and no fewer. Therefore, by Theorem 15 our algebra A is a generalised null
algebra. Hence there are elements x1 , x2 , x3 ∈ N with x1 x2 = 0, x2 x3 = 0,
and x1 x3 = 0 such that (x1 + N 2 , x2 + N 2 , x3 + N 2 ) is a basis of N/N 2 . On the
other hand, (x + N 2 , y + N 2 , z + N 2 ) is a basis of N/N 2 . Therefore, there are
elements Aij ∈ C and pi ∈ N 2 with
x1 = A11 x + A12 y + A13 z + p1 ,
x2 = A21 x + A22 y + A23 z + p2 ,
x3 = A31 x + A32 y + A33 z + p3 ,
and $\det A \ne 0$. We may assume that $A_{11}$ is nonzero. We have relations
0 = x1 x2 = (A11 A22 + A12 A21 )xy + (A11 A23 + A13 A21 )xz
+ (A12 A23 + A13 A22 )yz + terms in N 3 ,
0 = x1 x3 = (A11 A32 + A12 A31 )xy + (A11 A33 + A13 A31 )xz
+ (A12 A33 + A13 A32 )yz + terms in N 3 .
Let
f1 := A11 A22 + A12 A21 ,
g1 := A11 A32 + A12 A31 ,
f2 := A11 A23 + A13 A21 ,
g2 := A11 A33 + A13 A31 ,
f3 := A12 A23 + A13 A22 ,
g3 := A12 A33 + A13 A32 .
Then we can rewrite the relations as 0 = f1 = f2 = f3 = g1 = g2 = g3 .
With the following Sage code we can compute the syzygy module of the ideal
$I := (\det(A), f_1, f_2, f_3, g_1, g_2, g_3) \subseteq \mathbb{C}[A_{ij}]$.
R = PolynomialRing(QQ, 3, var_array="a")
A = matrix(R, 3, 3, lambda i,j: "a%d%d" % (i,j))
var("a","b","c")
y = A * vector([a,b,c])
I = R.ideal(det(A),
"a00*a11+a01*a10","a00*a12+a02*a10","a01*a12+a02*a11",
"a00*a21+a01*a20","a00*a22+a02*a20","a01*a22+a02*a21")
L = I.syzygy_module()
print L.str()
One of the syzygies is
− A11 det(A) = (A13 A31 − A11 A33 )f1 + (−3A12 A31 − A11 A32 )f2 + 0 · f3
+ 2A11 A23 g1 + 2A12 A21 g2 + 0 · g3 ,
implying det(A) = 0, which is a contradiction.
Acknowledgements. The author is grateful to Matthias Christandl, Markus Bläser
and Florian Speelman for helpful discussions. Part of this work was done while
the author was visiting the Simons Institute for the Theory of Computing, UC
Berkeley and the Workshop on Algebraic Complexity Theory 2015, Saarbrücken.
This work is supported by the Netherlands Organisation for Scientific Research
(NWO), through the research programme 617.023.116, and by the European
Commission, through the SIQS project.
References
[1] P. Bürgisser, M. Clausen, M. A. Shokrollahi, Algebraic complexity theory,
Vol. 315 of Grundlehren der Mathematischen Wissenschaften, SpringerVerlag, Berlin, 1997. doi:10.1007/978-3-662-03338-8.
[2] J. M. Landsberg, Tensors: geometry and applications, Vol. 128 of Graduate
Studies in Mathematics, American Mathematical Society, Providence, RI,
2012.
[3] J. Håstad, Tensor rank is NP-complete, J. Algorithms 11 (4) (1990) 644–654.
doi:10.1016/0196-6774(90)90014-6.
[4] C. J. Hillar, L.-H. Lim, Most tensor problems are NP-hard, J. ACM 60 (6)
(2013) Art. 45, 39. doi:10.1145/2512329.
[5] Y. Shitov, How hard is the tensor rank? (2016). arXiv:1611.01559.
[6] J. D. Hauenstein, C. Ikenmeyer, J. M. Landsberg, Equations for lower
bounds on border rank, Exp. Math. 22 (4) (2013) 372–383. doi:10.1080/
10586458.2013.825892.
[7] V. Strassen, Relative bilinear complexity and matrix multiplication, J. Reine
Angew. Math. 375/376 (1987) 406–443. doi:10.1515/crll.1987.375-376.
406.
[8] F. Le Gall, Powers of tensors and fast matrix multiplication, in: ISSAC
2014—Proceedings of the 39th International Symposium on Symbolic and
Algebraic Computation, ACM, New York, 2014, pp. 296–303. doi:10.1145/
2608628.2608664.
[9] L. Chen, E. Chitambar, R. Duan, Z. Ji, A. Winter, Tensor rank and
stochastic entanglement catalysis for multipartite pure states, Phys. Rev.
Lett. 105 (20) (2010) 200501. doi:10.1103/PhysRevLett.105.200501.
[10] M. Bläser, Explicit tensors, in: Perspectives in computational complexity,
Vol. 26 of Progr. Comput. Sci. Appl. Logic, Birkhäuser/Springer, Cham,
2014, pp. 117–130. doi:10.1007/978-3-319-05446-9_6.
[11] M. Bläser, Improvements of the Alder-Strassen bound: algebras with nonzero
radical, in: Automata, languages and programming, Vol. 2076 of Lecture
Notes in Comput. Sci., Springer, Berlin, 2001, pp. 79–91. doi:10.1007/
3-540-48224-5_7.
[12] V. de Silva, L.-H. Lim, Tensor rank and the ill-posedness of the best lowrank approximation problem, SIAM J. Matrix Anal. Appl. 30 (3) (2008)
1084–1127. doi:10.1137/06066518X.
[13] E. S. Allman, P. D. Jarvis, J. A. Rhodes, J. G. Sumner, Tensor rank,
invariants, inequalities, and applications, SIAM J. Matrix Anal. Appl. 34 (3)
(2013) 1014–1045. doi:10.1137/120899066.
[14] E. Ballico, A. Bernardi, Stratification of the fourth secant variety of Veronese
varieties via the symmetric rank, Adv. Pure Appl. Math. 4 (2) (2013) 215–
250. doi:10.1515/apam-2013-0015.
[15] J. M. Landsberg, M. Michałek, Abelian tensors (2016). doi:10.1016/j.
matpur.2016.11.004.
[16] M. Bläser, V. Lysikov, On degeneration of tensors and algebras, in: 41st International Symposium on Mathematical Foundations of Computer Science
(MFCS 2016), Vol. 58, 2016, pp. 19:1–19:11. doi:10.4230/LIPIcs.MFCS.
2016.19.
[17] T. Lehmkuhl, T. Lickteig, On the order of approximation in approximative
triadic decompositions of tensors, Theoret. Comput. Sci. 66 (1) (1989) 1–14.
doi:10.1016/0304-3975(89)90141-2.
[18] M. Bläser, Lower bounds for the bilinear complexity of associative algebras,
Comput. Complexity 9 (2) (2000) 73–112. doi:10.1007/PL00001605.
[19] N.-E. Fahssi, Polynomial triangles revisited (2012). arXiv:1202.0228.
[20] J. Flum, M. Grohe, Parameterized complexity theory, Texts in Theoretical
Computer Science. An EATCS Series, Springer-Verlag, Berlin, 2006.
[21] S. Jukna, Extremal combinatorics: with applications in computer science,
Springer Science & Business Media, 2011.
[22] P. Vrana, M. Christandl, Asymptotic entanglement transformation between
W and GHZ states, J. Math. Phys. 56 (2) (2015) 022204, 12. doi:10.1063/
1.4908106.
[23] N. Yu, E. Chitambar, C. Guo, R. Duan, Tensor rank of the tripartite state $|W\rangle^{\otimes n}$, Phys. Rev. A 81 (1) (2010) 014301. doi:10.1103/PhysRevA.
81.014301.
[24] M. Bläser, A complete characterization of the algebras of minimal bilinear
complexity, SIAM J. Comput. 34 (2) (2004/05) 277–298. doi:10.1137/
S0097539703438277.
[25] R. S. Pierce, Associative algebras, Vol. 88 of Graduate Texts in Mathematics,
Springer-Verlag, New York-Berlin, 1982, studies in the History of Modern
Science, 9.
[26] W. Büchi, Über eine klasse von algebren minimalen ranges, Ph.D. thesis,
Universität Zürich (1984).
Efficient Estimation of Partially Linear Models for Spatial Data over Complex Domains
arXiv:1605.08737v2 [] 3 Jun 2016
Li Wang^a, Guannan Wang^b, Min-Jun Lai^c and Lei Gao^a
^a Iowa State University, ^b College of William & Mary and ^c University of Georgia
Abstract: In this paper, we study the estimation of partially linear models for spatial data distributed over complex domains. We use bivariate splines over triangulations to represent the
nonparametric component on an irregular two-dimensional domain. The proposed method is formulated as a constrained minimization problem which does not require constructing finite elements
or locally supported basis functions. Thus, it allows an easier implementation of piecewise polynomial representations of various degrees and various smoothness over an arbitrary triangulation.
Moreover, the constrained minimization problem is converted into an unconstrained minimization
via a QR decomposition of the smoothness constraints, which leads to a penalized least squares
method to estimate the model. The estimators of the parameters are proved to be asymptotically
normal under some regularity conditions. The estimator of the bivariate function is consistent, and
its rate of convergence is also established. The proposed method enables us to construct confidence
intervals and permits inference for the parameters. The performance of the estimators is evaluated
by two simulation examples and by a real data analysis.
Key words and phrases: Bivariate splines, Penalty, Semiparametric regression, Spatial data, Triangulation.
1. Introduction
In many geospatial studies, spatially distributed covariate information is available. For
example, geographic information systems may contain measurements obtained from satellite
images at some locations. These spatially explicit data can be useful in the construction
and estimation of regression models, but sometimes they are distributed over irregular twodimensional (2-D) domains that may have complex boundaries or holes inside. It is well
known that many conventional smoothing tools suffer from the problem of “leakage” across
the complex domains, which refers to the poor estimation over difficult regions by smoothing inappropriately across boundary features, such as peninsulas; see excellent discussions in
Ramsay (2002) and Wood, Bravington and Hedley (2008). In this paper, we propose to use
bivariate splines (smooth piecewise polynomial functions over a triangulation of the domain
of interest) to model spatially explicit datasets which enable us to overcome the “leakage”
problem and find the estimation more accurately.
Address for correspondence: Li Wang, Department of Statistics and the Statistical Laboratory, Iowa State
University, Ames, IA, USA. Email: [email protected]
We focus here on the partially linear model (PLM), popularized by Hardle, Liang and
Gao (2000), for data randomly distributed over 2-D domains. To be more specific, let Xi =
(Xi1 , Xi2 )T be the location of i-th point, i = 1, . . . , n, which ranges over a bounded domain
Ω ⊆ R2 of arbitrary shape, for example, a domain with polygon boundary. Let Yi be the
response variable and Zi = (Zi1 , . . . , Zip )T be the predictors at location Xi . Suppose that
{(Zi , Xi , Yi )}ni=1 satisfies the following model
$Y_i = Z_i^T\beta + g(X_i) + \epsilon_i, \quad i = 1,\dots,n,$  (1.1)
where $\beta = (\beta_1,\dots,\beta_p)^T$ are unknown parameters, g(·) is some unknown but smooth bivariate function, and the $\epsilon_i$'s are i.i.d. random noises with $E(\epsilon_i) = 0$ and $\operatorname{Var}(\epsilon_i) = \sigma^2$. Each $\epsilon_i$ is independent of $X_i$ and $Z_i$. In many situations, our main interest is in estimating and making inference for the regression parameters β, which provide measures of the effect of the covariates Z after adjusting for the location effect of X.
If g(·) is a univariate function, model (1.1) becomes a typical PLM. In the past three
decades, flexible and parsimonious PLMs have been extensively studied and widely used in
many statistical applications, from biostatistics to econometrics, from engineering to social science; see Chen, Liang and Wang (2011), Huang, Zhang and Zhou (2007), Liang and Li (2009),
Liu, Wang and Liang (2011), Ma, Song and Wang (2013), Ma and Yang (2011), Wang, et al
(2011), Wang, et al (2014), Zhang, Cheng and Liu (2011) for some recent works on PLMs.
When g(·) is a bivariate function, there are two popular estimation tools: bivariate P-splines
Marx and Eilers (2005) and thin plate splines Wood (2003). Later, Xiao, Li and Ruppert (2013)
proposed an efficient sandwich smoother, which has a tensor product structure that simplifies
an asymptotic analysis and can be fast computed. Their method has been applied to quantifying the lifetime circadian rhythm of physical activity Xiao, et al (2015). The application to
spatial data analysis over complex domains, however, has been hampered due to the scarcity
of bivariate smoothing tools that are not only computationally efficient but also theoretically
reliable to solve the problem of “leakage” across the domain. Traditional smoothing methods
in practical data analysis, such as kernel smoothing, wavelet-based smoothing, tensor product
splines and thin plate splines, usually perform poorly for those data, since they do not take
into account the shape of the domain and also smooth across concave boundary regions.
There are several challenges when going from rectangular domains to irregular domains
with complex boundaries or holes. Some efforts have recently been devoted to studying the
smoothing over irregular domains, and significant progress has been made. Most of them are
based on the roughness penalization approach Green and Silverman (1994). To deal with
irregular domains, Wang and Ranalli (2007) applied low-rank thin-plate splines defined as
functions of the geodesic distance instead of the Euclidean distance. Eilers (2006) utilized the
Schwarz-Christoffel transform to convert the complex domains to regular domains. To solve the
“leakage” problem, in a pioneering paper, Ramsay (2002) suggested a penalized least squares
approach with a Laplacian penalty and transformed the problem to that of solving a system of
partial differential equations (PDEs). Wood, Bravington and Hedley (2008) provided an elegant
solution and developed the soap film smoothing estimator for smoothing over difficult regions
that can be represented by a low-rank basis and one or two quadratic penalties. Recently,
Sangalli, Ramsay and Ramsay (2013) extended the method in Ramsay (2002) to the PLMs,
which allows for spatially distributed covariate information to be in the models. The data
smoothing problem in Sangalli, Ramsay and Ramsay (2013) is solved using finite element
method (FEM), a method mainly developed and used to solve PDEs. Although their method
is practically useful, the theoretical properties of the estimation are not investigated.
In this paper, we tackle the estimation problem differently from Sangalli, Ramsay and
Ramsay (2013). Instead of using FEM, we approximate the nonparametric function g(·) by
using the bivariate splines over triangulations in Lai and Schumaker (2007). An important
feature of this approach is that it uses splines for applications without constructing locally
supported splines or finite elements and without computing the dimension. This method has
been shown to be more efficient and flexible than the conventional FEM in data fitting problems
and solving PDEs; see Awanou, Lai and Wenston (2005), Ettinger, Guillas and Lai (2015),
Guillas and Lai (2010), Lai and Wang (2013) and Liu, Guillas and Lai (2015). For example,
the users can choose spline functions of flexible degrees and various smoothness across any
given domain. Another advantage is that the linear systems arising in this approach are more
easy to assemble than those from the finite elements or locally supported spline basis functions.
The linear systems are sparser than that from any macro-FEM method. In addition, due to
the great scalability, the assembling can be computed in parallel.
To the best of the authors’ knowledge, statistical aspects of smoothing for PLMs by using
bivariate splines have not been discussed in the literature so far. This paper presents the
first attempt at investigating the asymptotic properties of the PLMs for data distributed on
a non-rectangular complex region. We study the asymptotic properties of the least squares
estimators of β and g(·) by using bivariate splines over triangulations with a penalty term.
We show that our estimator of β is root-n consistent and asymptotically normal, although the
convergence rate of the estimator of the nonparametric component g(·) is slower than root-n.
A standard error formula for the estimated coefficients is provided and tested to be accurate
enough for practical purposes. Hence, the proposed method enables us to construct confidence
intervals for the regression parameters. We also obtain the convergence rate for the functional
estimator of g(·). We show, by using numerical studies, that our method is competitive with
existing methods such as the soap film smoother and the thin-plate regression spline.
The rest of the paper is organized as follows. In Section 2, we give a brief review of the
triangulations and propose our estimation method based on penalized bivariate splines. We
also discuss the details on how to choose the penalty parameters. Section 3 is devoted to the
asymptotic analysis of the proposed estimators. Section 4 provides a detailed simulation to
compare several methods in two different scenarios and explores the estimation and prediction
accuracy. Section 5 studies a real dataset on house values over California. Some concluding
remarks are given in Section 6. Technical details are provided in Appendix A.
2. Triangulations and Penalized Spline Estimators
Our estimation method is based on penalized bivariate splines over triangulations. The
idea is to approximate the function g(·) by bivariate splines that are piecewise polynomial
functions over a 2D triangulated domain. We use this approximation to construct least squares
estimators of the linear and nonlinear components of the model with a penalization term. In the rest of this section, we describe the background of triangulations and B-form bivariate splines, and introduce the penalized spline estimators.
2.1. Triangulations
Triangulation is an effective strategy to handle data distribution on irregular regions with
complex boundaries and/or interior holes. Recently, it has attracted substantial recent attention in many applied areas, such as geospatial studies, numerical solutions of PDEs, image
enhancements, and computer aided geometric design. See, for example, the recent comprehensive book by Lai and Schumaker (2007) and the article by Lai (2008).
We use τ to denote a triangle, which is the convex hull of three points not located on one line. A collection $\triangle = \{\tau_1,\dots,\tau_N\}$ of N triangles is called a triangulation of $\Omega = \cup_{i=1}^N \tau_i$ provided that if a pair of triangles in $\triangle$ intersect, then their intersection is either a common vertex or a common edge. In general, any kind of polygon shapes can be used for the partition of Ω. In
this paper we consider triangulations of Ω because any polygonal domain of arbitrary shape
can be partitioned into finitely many triangles. In the following, we assume that all Xi s are
inside triangles of 4. That is, they are not on edges or vertices of triangles in 4. Otherwise,
we can simply count them twice or multiple times if any observation is located on an edge or
at a vertex of 4. Given a triangle τ ∈ 4, we let |τ | be its longest edge length, and denote the
size of 4 by |4| = max{|τ |, τ ∈ 4}, i.e., the length of the longest edge of 4. Furthermore, let
ρτ be the radius of the largest circle inscribed in τ . We measure the quality of a triangulation
4 by δ4 = maxτ ∈4 |τ |/ρτ < ∞, which is equivalent to the smallest angle of 4. The study in
Lai and Schumaker (2007) shows that the approximation of spline spaces over 4 is dependent
on δ4 , i.e., the larger the δ4 is, the worse the spline approximation is. In the rest of the paper,
we restrict our attention to the triangulations satisfying δ4 < δ for a positive constant δ.
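The shape quality $\delta_\triangle$ used here is easy to compute triangle by triangle; a minimal Python sketch (ours, added for illustration only) is:

import numpy as np

def triangle_quality(v1, v2, v3):
    # returns (longest edge |tau|, inradius rho_tau, ratio |tau| / rho_tau)
    v1, v2, v3 = map(np.asarray, (v1, v2, v3))
    a = np.linalg.norm(v2 - v3)
    b = np.linalg.norm(v1 - v3)
    c = np.linalg.norm(v1 - v2)
    s = (a + b + c) / 2.0                                  # semiperimeter
    area = np.sqrt(max(s*(s - a)*(s - b)*(s - c), 0.0))    # Heron's formula
    rho = area / s                                         # inradius of the inscribed circle
    longest = max(a, b, c)
    return longest, rho, longest / rho

# delta of a triangulation is the maximum of the third component over its triangles
print(triangle_quality((0, 0), (1, 0), (0, 1)))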
2.2. B-form bivariate splines
In this section we give a brief introduction to the bivariate splines. More in-depth description can be found in Lai and Schumaker (2007), Lai (2008), as well as Zhou and Pan (2014)
and the details of the implementation are provided in Awanou, Lai and Wenston (2005). Let $\tau = \langle v_1, v_2, v_3\rangle$ be a non-degenerate (i.e., with non-zero area) triangle with vertices $v_1$, $v_2$ and $v_3$. Then for any point $v \in \mathbb{R}^2$, there is a unique representation of the form $v = b_1v_1 + b_2v_2 + b_3v_3$ with $b_1 + b_2 + b_3 = 1$, where $b_1$, $b_2$ and $b_3$ are called the barycentric coordinates of the point v relative to the triangle τ. The Bernstein polynomials of degree d relative to triangle τ are defined as
$B^{\tau,d}_{ijk}(v) = \frac{d!}{i!\,j!\,k!}\, b_1^i b_2^j b_3^k, \quad i+j+k = d.$
Then for any $\tau \in \triangle$, we can write the polynomial piece of a spline s restricted to τ as
$s|_\tau = \sum_{i+j+k=d} \gamma^\tau_{ijk} B^{\tau,d}_{ijk},$
where the coefficients $\gamma^\tau = \{\gamma^\tau_{ijk},\, i+j+k = d\}$ are called the B-coefficients of s.
For a nonnegative integer r, let $C^r(\Omega)$ be the collection of all r-th continuously differentiable functions over Ω. Given a triangulation $\triangle$, let $\mathcal{S}^r_d(\triangle) = \{s \in C^r(\Omega): s|_\tau \in \mathbb{P}_d(\tau),\, \tau\in\triangle\}$ be a spline space of degree d and smoothness r over the triangulation $\triangle$, where $\mathbb{P}_d$ is the space of all polynomials of degree less than or equal to d. For notational simplicity, let $\mathcal{S} = \mathcal{S}^r_{3r+2}(\triangle)$ for a fixed smoothness $r \ge 1$; such a spline space has the optimal approximation order (rate of convergence) for noise-free datasets; see Lai and Schumaker (1998) and Lai and Schumaker (2007).
For notational simplicity, let $\{B_\xi\}_{\xi\in\mathcal{K}}$ be the set of degree-d bivariate Bernstein basis polynomials for $\mathcal{S}$, where $\mathcal{K}$ stands for an index set of K Bernstein basis polynomials. Then for any function $s \in \mathcal{S}$, we can represent it using the following basis expansion:
$s(x) = \sum_{\xi\in\mathcal{K}} B_\xi(x)\gamma_\xi = B(x)^T\gamma,$  (2.1)
where $\gamma^T = (\gamma_\xi, \xi\in\mathcal{K})$ is the spline coefficient vector. To meet the smoothness requirement of the splines, we need to impose some linear constraints on the spline coefficients γ in (2.1). We require that γ satisfies $H\gamma = 0$, with H being the matrix for all smoothness conditions across shared edges of triangles, which depends on r and the structure of the triangulation.
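To illustrate the B-form just described, the following Python sketch (our own illustrative code, not the authors' implementation) computes barycentric coordinates with respect to a triangle and evaluates a degree-d Bernstein basis polynomial; this is all that is needed to assemble the evaluation matrix B introduced in the next subsection.

import numpy as np
from math import factorial

def barycentric(v, v1, v2, v3):
    # barycentric coordinates (b1, b2, b3) of v w.r.t. the triangle <v1, v2, v3>,
    # obtained by solving b1*v1 + b2*v2 + b3*v3 = v together with b1 + b2 + b3 = 1
    A = np.array([[v1[0], v2[0], v3[0]],
                  [v1[1], v2[1], v3[1]],
                  [1.0,   1.0,   1.0]])
    return np.linalg.solve(A, np.array([v[0], v[1], 1.0]))

def bernstein(i, j, k, b):
    # Bernstein basis polynomial B^{tau,d}_{ijk} of degree d = i + j + k,
    # evaluated at barycentric coordinates b = (b1, b2, b3)
    d = i + j + k
    coef = factorial(d) / (factorial(i) * factorial(j) * factorial(k))
    return coef * b[0]**i * b[1]**j * b[2]**k

# example: all degree-2 basis functions on a reference triangle, at its centroid
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
b = barycentric((1/3, 1/3), *tri)
vals = [bernstein(i, j, 2 - i - j, b) for i in range(3) for j in range(3 - i)]
print(sum(vals))   # Bernstein polynomials of a fixed degree sum to 1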
2.3. Penalized Spline Estimators
To define the penalized spline method, for any direction $x_j$, j = 1, 2, let $D^q_{x_j}f(x)$ denote the q-th order derivative in the direction $x_j$ at the point $x = (x_1, x_2)$. Let
$E_\upsilon(f) = \sum_{\tau\in\triangle}\int_\tau \sum_{i+j=\upsilon}\binom{\upsilon}{i}\bigl(D^i_{x_1}D^j_{x_2}f\bigr)^2\,dx_1dx_2$  (2.2)
be the energy functional for a fixed integer $\upsilon \ge 1$. Although all partial derivatives up to the chosen order υ can be included in the penalty of (2.2), for simplicity, in the remaining part of the paper we use $\upsilon = 2$; one can study the similar problem for general $\upsilon \ge 2$. When $\upsilon = 2$,
$E_2(f) = \int_\Omega \bigl[(D^2_{x_1}f)^2 + 2(D_{x_1}D_{x_2}f)^2 + (D^2_{x_2}f)^2\bigr]\,dx_1dx_2,$  (2.3)
which is similar to the thin-plate spline penalty (Green and Silverman, 1994) except the latter
is integrated over the entire plane R2 . Sangalli, Ramsay and Ramsay (2013) used a different
roughness penalty from (2.3), specifically, they use the integral of the square of the Laplacian
R
of f , that is, λ Ω (Dx21 f + Dx22 f )2 dx1 dx2 . Both forms of penalties are invariant with respect
to Euclidean transformations of spatial co-ordinates, thus, the bivariate smoothing does not
depend on the choice of the coordinate system.
Given $\lambda > 0$ and $\{(Z_i, X_i, Y_i)\}_{i=1}^n$, we consider the following minimization problem:
$\min_{\beta,\, s\in\mathcal{S}}\; \sum_{i=1}^n \bigl(Y_i - Z_i^T\beta - s(X_i)\bigr)^2 + \lambda E_\upsilon(s).$  (2.4)
Let $Y = (Y_1,\dots,Y_n)^T$ be the vector of n observations of the response variable. Denote by $X_{n\times 2} = \{(X_{i1}, X_{i2})\}_{i=1}^n$ the location design matrix and by $Z_{n\times p} = \{(Z_{i1},\dots,Z_{ip})\}_{i=1}^n$ the collection of all covariates. Denote by B the $n\times K$ evaluation matrix of Bernstein basis polynomials whose i-th row is given by $B_i^T = \{B_\xi(X_i), \xi\in\mathcal{K}\}$. Then the minimization problem in (2.4) reduces to
$\min_{\beta,\gamma} L(\beta,\gamma) = \min_{\beta,\gamma} \|Y - Z\beta - B\gamma\|^2 + \lambda\gamma^T P\gamma \quad\text{subject to}\quad H\gamma = 0,$  (2.5)
where P is the block diagonal penalty matrix satisfying $\gamma^T P\gamma = E_\upsilon(B\gamma)$.
To solve the constrained minimization problem (2.5), we first remove the constraint via a QR decomposition of the transpose of the constraint matrix H. Specifically, we write
$H^T = QR = (Q_1\; Q_2)\begin{pmatrix} R_1 \\ R_2 \end{pmatrix},$  (2.6)
where Q is an orthogonal matrix and R is an upper triangular matrix, the submatrix $Q_1$ is the first r columns of Q, where r is the rank of the matrix H, and $R_2$ is a matrix of zeros. It is easy to see the following result.
Lemma 1. Let $Q_1$, $Q_2$ be submatrices as in (2.6). Let $\gamma = Q_2\theta$ for a vector θ of appropriate size. Then $H\gamma = 0$. On the other hand, if $H\gamma = 0$, then there exists a vector θ such that $\gamma = Q_2\theta$.
Proof. By (2.6), we have $H^T = Q_1R_1$ since $R_2 = 0$. That is, $H = R_1^TQ_1^T$. Thus,
$H\gamma = HQ_2\theta = R_1^TQ_1^TQ_2\theta = 0$
since $Q_1^TQ_2 = 0$. On the other hand, if
$0 = H\gamma = R_1^TQ_1^T\gamma,$
we have $Q_1^T\gamma = 0$ since $R_1$ is invertible. Thus, γ is in the orthogonal complement of the space spanned by the columns of $Q_1$; that is, γ is in the space spanned by the columns of $Q_2$. Thus, there exists a vector θ such that $\gamma = Q_2\theta$. This completes the proof.
The problem (2.5) is now converted to a conventional penalized regression problem without any constraints:
$\min_{\beta,\theta}\; \|Y - Z\beta - BQ_2\theta\|^2 + \lambda(Q_2\theta)^T P(Q_2\theta).$
For a fixed penalty parameter λ, we have
$\begin{pmatrix} \hat\beta \\ \hat\theta \end{pmatrix} = \left\{ \begin{pmatrix} Z^TZ & Z^TBQ_2 \\ Q_2^TB^TZ & Q_2^TB^TBQ_2 \end{pmatrix} + \lambda\begin{pmatrix} 0 & 0 \\ 0 & Q_2^TPQ_2 \end{pmatrix} \right\}^{-1} \begin{pmatrix} Z^TY \\ Q_2^TB^TY \end{pmatrix}.$
Letting
$V = \begin{pmatrix} V_{11} & V_{12} \\ V_{21} & V_{22} \end{pmatrix} = \begin{pmatrix} Z^TZ & Z^TBQ_2 \\ Q_2^TB^TZ & Q_2^T(B^TB + \lambda P)Q_2 \end{pmatrix},$  (2.7)
we have
$\begin{pmatrix} \hat\beta \\ \hat\theta \end{pmatrix} = V^{-1}\begin{pmatrix} Z^TY \\ Q_2^TB^TY \end{pmatrix}.$
Next let us write
$V^{-1} \equiv U = \begin{pmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{pmatrix} = \begin{pmatrix} U_{11} & -U_{11}V_{12}V_{22}^{-1} \\ -U_{22}V_{21}V_{11}^{-1} & U_{22} \end{pmatrix},$  (2.8)
where
$U_{11}^{-1} = V_{11} - V_{12}V_{22}^{-1}V_{21} = Z^T\bigl[I - BQ_2\{Q_2^T(B^TB+\lambda P)Q_2\}^{-1}Q_2^TB^T\bigr]Z,$  (2.9)
$U_{22}^{-1} = V_{22} - V_{21}V_{11}^{-1}V_{12} = Q_2^T\bigl[B^T\{I - Z(Z^TZ)^{-1}Z^T\}B + \lambda P\bigr]Q_2.$  (2.10)
Then the minimizers of (2.7) can be given precisely as follows:
$\hat\beta = U_{11}Z^T\bigl[I - BQ_2V_{22}^{-1}Q_2^TB^T\bigr]Y = U_{11}Z^T\bigl[I - BQ_2\{Q_2^T(B^TB+\lambda P)Q_2\}^{-1}Q_2^TB^T\bigr]Y,$
$\hat\theta = U_{22}Q_2^TB^T\bigl[I - ZV_{11}^{-1}Z^T\bigr]Y = U_{22}Q_2^TB^T\bigl[I - Z(Z^TZ)^{-1}Z^T\bigr]Y.$
Therefore, one obtains the estimators for γ and g(·), respectively:
$\hat\gamma = Q_2\hat\theta = Q_2U_{22}Q_2^TB^T\bigl[I - Z(Z^TZ)^{-1}Z^T\bigr]Y, \qquad \hat g(x) = B(x)^T\hat\gamma = \sum_{\xi\in\mathcal{K}} B_\xi(x)\hat\gamma_\xi.$  (2.11)
The fitted values at the n data points are
$\hat Y = Z\hat\beta + B\hat\gamma = S(\lambda)Y,$
where the smoothing or hat matrix is
$S(\lambda) = ZU_{11}Z^T\bigl[I - BQ_2V_{22}^{-1}Q_2^TB^T\bigr] + BQ_2U_{22}Q_2^TB^T\bigl[I - ZV_{11}^{-1}Z^T\bigr].$
In nonparametric regression, the trace tr(S(λ)) of smoothing matrix S(λ) is often called the
degrees of freedom of the model fit. It has the rough interpretation as the equivalent number
of parameters and can be thought as a generalization of the definition in linear regression.
Finally, we can estimate the variance of the error term, $\sigma^2$, by
$\hat\sigma^2 = \frac{\|Y - \hat Y\|^2}{n - \mathrm{tr}(S(\lambda))}.$  (2.12)
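A minimal numerical sketch of the estimation steps in this subsection is given below (illustrative Python/numpy code written for this exposition, not the authors' software; the inputs Y, Z, B, H and P are assumed to be already assembled as defined above). It removes the constraint $H\gamma = 0$ through the QR decomposition of $H^T$ and then solves the resulting penalized least squares problem; the returned hat matrix also gives the degrees of freedom tr(S(λ)) and the variance estimate (2.12).

import numpy as np

def plm_bivariate_spline_fit(Y, Z, B, H, P, lam):
    # penalized least squares fit of (2.5):
    # Y: (n,) responses; Z: (n, p) covariates; B: (n, K) Bernstein evaluation
    # matrix; H: smoothness constraint matrix; P: (K, K) penalty matrix
    Q, _ = np.linalg.qr(H.T, mode='complete')   # QR of H^T as in (2.6)
    r = np.linalg.matrix_rank(H)                # for a rank-deficient H, a pivoted
    Q2 = Q[:, r:]                               # QR or an SVD null space is safer

    BQ2 = B @ Q2
    V = np.block([[Z.T @ Z,   Z.T @ BQ2],
                  [BQ2.T @ Z, Q2.T @ (B.T @ B + lam * P) @ Q2]])   # (2.7)
    rhs = np.concatenate([Z.T @ Y, BQ2.T @ Y])
    sol = np.linalg.solve(V, rhs)

    p = Z.shape[1]
    beta_hat = sol[:p]
    gamma_hat = Q2 @ sol[p:]

    X = np.hstack([Z, BQ2])                     # fitted values are X V^{-1} X' Y,
    S = X @ np.linalg.solve(V, X.T)             # i.e. the hat matrix S(lambda)
    return beta_hat, gamma_hat, S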
2.4. Choosing the Triangulation
Triangulation has been extensively investigated in the past few decades, and various packages have been developed. For example, one can use the “Delaunay” algorithm to find a
triangulation; see MATLAB program delaunay.m or MATHEMATICA function DelaunayTriangulation. “Triangle” (Shewchuk, 1996) is also widely used in many applications, and one can
download it for free from http://www.cs.cmu.edu/ quake/triangle.html. It is a C++ program for
two-dimensional mesh generation and construction of Delaunay triangulations. “DistMesh” is
another method to generate unstructured triangular and tetrahedral meshes; see the DistMesh
generator on http://persson.berkeley.edu/distmesh/. A detailed description of the program is
provided by Persson and Strang (2004). We used our own triangulation code in simulation
studies and real data analysis below.
As is usual with the one-dimensional (1-D) penalized least squares (PLS) splines, the
number of knots is not important given that it is above some minimum depending upon the
Efficient Estimation of Partially Linear Models for Spatial Data over Complex Domains
9
degree of the smoothness; see Li and Ruppert (2008). For bivariate PLS splines, Lai and Wang
(2013) also observed that the number of triangles N is not very critical, provided N is larger
than some threshold. In fact, one of the main advantages of using PLS splines over discrete
least squares (DLS) splines is the flexibility of choosing knots in the 1-D setting and choosing
triangles in the 2-D setting. For DLS splines, one has to have large enough sample according to
the requirement of the degree of splines on each subinterval in the 1-D case or each triangle in
the 2-D case to guarantee that a solution can be found. However, there is no such requirement
for PLS splines. When the smoothness r ≥ 1, the only requirement for bivariate PLS splines is
that there is at least one triangle containing three points which are not in one line (Lai, 2008).
Also, PLS splines perform similar to DLS splines as long as the penalty parameter λ is very
small. So in summary, the proposed bivariate PLS splines are very flexible and convenient for
data fitting, even for smoothing sparse and unevenly sampled data.
In practice, we recommend that the user first constructs a polygon domain Ω containing all
the design points of the data and makes a simple triangulation 40 of Ω by hand or computer,
then refines 40 several times to have a triangulation of desired size.
2.5. Penalty Parameter Selection
Selecting a suitable value of smoothing parameter λ is critical to good model fitting. A
large value of λ enforces a smoother fitted function with potentially larger fitting errors, while
a small value yields a rougher fitted function and potentially smaller fitting errors. Since the
in-sample fitting errors can not gauge the prediction property of the fitted function, one should
target a criterion function that mimics the out-of-sample performance of the fitted model. The
generalized cross-validation (GCV) (Craven and Wahba, 1979; Wahba, 1990) is such a criterion
and is widely used for choosing the penalty parameter. We choose the smoothing parameter λ
by minimizing the following generalized cross-validation (GCV) criterion
$\mathrm{GCV}(\lambda) = \frac{n\|Y - S(\lambda)Y\|^2}{\{n - \mathrm{tr}(S(\lambda))\}^2},$
over a grid of values of λ. We use the 10-point grid where the values of log10 (λ) are equally
spaced between −6 and 7.
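For concreteness, a small helper along the following lines (again an illustrative sketch; it assumes the hypothetical fitting function from the previous code block) carries out the grid search for λ described here.

import numpy as np

def select_lambda_gcv(Y, Z, B, H, P, n_grid=10, log10_range=(-6.0, 7.0)):
    # choose the penalty parameter by minimizing GCV(lambda) over a log10 grid
    n = Y.shape[0]
    best = None
    for lam in 10.0 ** np.linspace(log10_range[0], log10_range[1], n_grid):
        _, _, S = plm_bivariate_spline_fit(Y, Z, B, H, P, lam)
        resid = Y - S @ Y
        gcv = n * np.sum(resid**2) / (n - np.trace(S))**2
        if best is None or gcv < best[0]:
            best = (gcv, lam)
    return best[1]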
3. Asymptotic Results
This section studies the asymptotic properties for the proposed estimators. To discuss these properties, we first introduce some notation. For any function f over the closure of the domain Ω, denote by ‖f‖_∞ = sup_{x∈Ω} |f(x)| the supremum norm of f and by |f|_{υ,∞} = max_{i+j=υ} ‖D^i_{x1} D^j_{x2} f(x)‖_∞ the maximum norm of all the υth order derivatives of f over Ω. Let
W^{ℓ,∞}(Ω) = {f on Ω : |f|_{k,∞} < ∞, 0 ≤ k ≤ ℓ}   (3.1)
be the standard Sobolev space. For any j = 1, ..., p, let zj be the coordinate mapping that maps z to its j-th component, so that zj(Zi) = Zij, and let
hj = argmin_{h∈L2} ‖zj − h‖²_{L2} = argmin_{h∈L2} E{(Zij − h(Xi))²}   (3.2)
be the orthogonal projection of zj onto L2.
Before we state the results, we make the following assumptions:
(A1) The random variables Zij are bounded, uniformly in i = 1, ..., n and j = 1, ..., p.
(A2) The eigenvalues of E{(1, Ziᵀ)ᵀ(1, Ziᵀ) | Xi} are bounded away from 0.
(A3) The noise ε satisfies lim_{η→∞} E{ε² I(ε² > η)} = 0.
Assumptions (A1)–(A3) are typical in the semiparametric smoothing literature; see, for instance, Huang, Zhang and Zhou (2007) and Wang et al. (2011). The purpose of Assumption (A2) is to ensure that the vector (1, Ziᵀ) is not multicollinear. We next introduce some assumptions on the properties of the true bivariate function in model (1.1) and the bivariate spline space introduced in Section 2.2.
(C1) The bivariate functions hj(·), j = 1, ..., p, and the true function g(·) in model (1.1) belong to W^{ℓ+1,∞}(Ω) in (3.1) for an integer ℓ ≥ 2.
(C2) For every s ∈ S and every τ ∈ △, there exists a positive constant F1, independent of s and τ, such that
F1 ‖s‖_{∞,τ} ≤ { Σ_{Xi∈τ, i=1,...,n} s(Xi)² }^{1/2},   for all τ ∈ △.   (3.3)
(C3) Let F2 be the largest among the numbers of observations in triangles τ ∈ △ in the sense that
{ Σ_{Xi∈τ, i=1,...,n} s(Xi)² }^{1/2} ≤ F2 ‖s‖_{∞,τ},   for all τ ∈ △,   (3.4)
where ‖s‖_{∞,τ} denotes the supremum norm of s over the triangle τ. The constants F1 and F2 defined in (3.3) and (3.4) satisfy F2/F1 = O(1).
(C4) The number N of the triangles and the sample size n satisfy that N = Cnγ for some
constant C > 0 and 1/(` + 1) ≤ γ ≤ 1/3.
(C5) The penalty parameter λ satisfies λ = o(n^{1/2} N^{-1}).
Condition (C1) describes the requirement for the true bivariate function as usually used
in the literature of nonparametric or semiparametric estimation. Condition (C2) ensures the
existence of a least squares spline. Although one can get a decent penalized least squares spline
fitting with F1 = 0 for some triangles, we need (C2) to study the convergence of bivariate
penalized least squares splines. Condition (C3) suggests that we should not put too many
observations in one triangle. Condition (C4) requires that the number of triangles is above
some minimum depending upon the degree of the spline, which is similar to the requirement
of Li and Ruppert (2008) in the univariate case. It also ensures the asymptotic equivalence of the theoretical and empirical inner products/norms defined at the beginning of Appendix A.1.
Condition (C5) is required to reduce the bias of the bivariate spline approximation through
“under smoothing” and “choosing smaller λ”.
To avoid confusion, in the following let β0 and g0 be the true parameter value and the true function in model (1.1). The following theorem states that the rate of convergence of β̂ is root-n and that β̂ is asymptotically normal.
Theorem 1. Suppose Assumptions (A1)-(A3) and (C1)-(C5) hold. Then the estimator β̂ is asymptotically normal, that is,
(nΣ)^{1/2}(β̂ − β0) → N(0, I),
where I is the p × p identity matrix,
Σ = σ^{-2} E{(Zi − Z̃i)(Zi − Z̃i)ᵀ}   (3.5)
with Z̃i = {h1(Xi), ..., hp(Xi)}ᵀ, for hj(·) defined in (3.2), j = 1, ..., p. In addition, Σ can be consistently estimated by
Σn = (1/(n σ̂²)) Σ_{i=1}^n (Zi − Ẑi)(Zi − Ẑi)ᵀ = (1/(n σ̂²)) (Z − Ẑ)ᵀ(Z − Ẑ),   (3.6)
where Ẑi is the i-th column of Ẑᵀ = Zᵀ B Q2 V22^{-1} Q2ᵀ Bᵀ and σ̂² is given by (2.12).
The results in Theorem 1 enable us to construct confidence intervals for the parameters.
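For instance, a Wald-type interval follows from approximating Var(β̂) by (nΣn)⁻¹. The short Python sketch below (an illustration with hypothetical inputs, not part of the paper) shows the computation.

```python
# Wald confidence intervals implied by Theorem 1 and the estimate Sigma_n in (3.6).
import numpy as np
from scipy import stats

def wald_ci(beta_hat, Sigma_n, n, level=0.95):
    cov = np.linalg.inv(n * Sigma_n)          # approximate Var(beta_hat)
    se = np.sqrt(np.diag(cov))
    z = stats.norm.ppf(0.5 + level / 2.0)
    return np.column_stack([beta_hat - z * se, beta_hat + z * se]), se
```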
The next theorem provides the global convergence of the nonparametric estimator gb(·).
Theorem 2. Suppose Assumptions (A1)-(A3) and (C1)-(C5) hold. Then the bivariate penalized estimator ĝ(·) in (2.11) is consistent with the true function g0 and satisfies
‖ĝ − g0‖_{L2} = OP( λ/(n|△|³) |g0|_{2,∞} + {1 + λ/(n|△|⁵)} (F2/F1) |△|^{ℓ+1} |g0|_{ℓ+1,∞} + 1/(√n |△|) ).
The proofs of the above two theorems are given in the Appendix. We notice that the rate of convergence given in Theorem 2 is the same as that obtained in Lai and Wang (2013) for nonparametric spline regression without the covariate information.
4. Simulation
In this section, we carry out two numerical studies to assess the performance of our proposed method. We compare the performance of bivariate penalized splines over triangulations
(BPST) with filtered kriging (KRIG), thin plate splines (TPS), soap film smoothing (SOAP)
in Wood, Bravington and Hedley (2008), linear finite elements (LFE) and quadratic finite
elements (QFE) in Sangalli, Ramsay and Ramsay (2013).
Example 1. In this example, we consider a modified horseshoe domain with the surface test
function g(·) used by Wood, Bravington and Hedley (2008) and Sangalli, Ramsay and Ramsay
(2013). In particular, for 201 × 501 grid points over the domain, we simulate data as follows: the response variable Y is generated from the following PLM,
Y = β1 Z1 + β2 Z2 + g(X1, X2) + ε.
The random error ε is generated from an N(0, σ_ε²) distribution with σ_ε = 0.5. In addition, we set the parameters as β1 = −1, β2 = 1. For the design of the explanatory variables Z1 and Z2, two scenarios are considered based on the relationship between the location variables (X1, X2) and the covariates (Z1, Z2). Under both scenarios, Z1 ∼ uniform[−1, 1]. On the other hand, the variable Z2 = cos[π(ρ(X1² + X2²) + (1 − ρ)U)], where U ∼ uniform[−1, 1] and is independent of (X1, X2) as well as Z1. We consider both the independent design (ρ = 0.0) and the dependent design (ρ = 0.7) in this example. Under both scenarios, 100 Monte Carlo replicates are generated. For each replication, we randomly sample n = 200 locations uniformly from the grid points inside the horseshoe domain.
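The covariate design above is easy to reproduce; the following Python sketch generates one replicate, with the surface g and the horseshoe domain left as a placeholder function and a placeholder set of locations (both are assumptions, since only the published references define them exactly).

```python
# Simulate one replicate of the PLM design used in Example 1.
import numpy as np

def simulate_plm(X1, X2, g_fun, rho=0.7, beta=(-1.0, 1.0), sigma_eps=0.5, rng=None):
    rng = np.random.default_rng(rng)
    n = len(X1)
    Z1 = rng.uniform(-1.0, 1.0, n)
    U = rng.uniform(-1.0, 1.0, n)
    Z2 = np.cos(np.pi * (rho * (X1**2 + X2**2) + (1.0 - rho) * U))
    eps = rng.normal(0.0, sigma_eps, n)
    Y = beta[0] * Z1 + beta[1] * Z2 + g_fun(X1, X2) + eps
    return Y, np.column_stack([Z1, Z2])
```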
Figure 1 (a) and (b) show the surface and the contour map of the true function g(·), respectively. Figure 1 (c) demonstrates the sampled location points of replicate 1, and Figure 1 (d) and (e) illustrate two different triangulations used in the BPST method. In the first triangulation (△1), we use 91 triangles and 74 vertices, while in the second one (△2) we use 109 triangles and 95 vertices.
[Figure 1 about here]
To implement the TPS and KRIG methods, we use the R package fields under the standard
implementation setting of Furrer, Nychka and Sain (2011). For KRIG, we try different
covariance structures, and we choose the Matérn covariance with smoothness parameter ν = 1,
which gives the best prediction. For the SOAP method, we implement it by using R package
mgcv (Wood, Bravington and Hedley, 2008) with 32 interior knots. In addition, a rank 39
(40-knot) cyclic penalized cubic regression spline is used as the boundary curve. For all the
methods requiring a smoothing or roughness parameter, GCV is used to choose the values of
the parameter.
To see the accuracy of the estimators, we compute the root mean squared error (RMSE) for each of the components based on 100 Monte Carlo samples. Table 1 shows the RMSEs of the estimates of the parameters β1, β2 and σε, as well as the average mean squared prediction error (MSPE) of the nonlinear function g(·) over all the grid points in the horseshoe-shaped domain. From Table 1, one sees that the SOAP and BPST methods give very comparable estimates of the parameters β1 and β2, while the BPST method produces the best prediction of the nonlinear function g(·), regardless of the choice of triangulation.
[Table 1 about here]
Figure 2 shows the estimated functions via different methods for the first replicate. For the test function in Figure 2, the BPST estimate looks visually better than the other estimates. In addition, one sees that there is a “leakage effect” in the KRIG and TPS estimates, and the SOAP, LFE, QFE and BPST methods improve on the model fitting of KRIG and TPS. The poor performance of KRIG and TPS is because they do not take the complex boundary into account and smooth across the gap inappropriately. In addition, from Figure 2, one also sees that the BPST estimators based on △1 and △2 are very similar, which agrees with our findings for penalized splines that the number of triangles is not very critical for the fitting as long as it is large enough.
[Figure 2 about here]
Next we test the accuracy of the standard error formula in (3.6) for β̂1 and β̂2 , and the
results are listed in Table 2. The standard deviations of the estimated parameters are computed based on 100 replications, which can be regarded as the true standard errors (column
labeled “SEmc ”) and compared with the mean and median of the 100 estimated standard errors
calculated using (3.6) (columns labeled “SEmean ” and “SEmedian ”, respectively). The column
labeled “SEmad ” is the interquartile range of the 100 estimated standard errors divided by
1.349, which is a robust estimate of the standard deviation. From Table 2 one observes that
the averages or medians of the standard errors calculated using the formula are very close
to the true standard deviations, which confirms the accuracy of the proposed standard error
formula.
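For completeness, the robust spread measure used in the column "SEmad" is simply the interquartile range of the replicate standard-error estimates divided by 1.349; a minimal sketch:

```python
# Robust estimate of the standard deviation of the replicate standard errors.
import numpy as np

def robust_sd(se_estimates):
    q75, q25 = np.percentile(se_estimates, [75, 25])
    return (q75 - q25) / 1.349
```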
[Table 2 about here]
Finally, regarding computing time, it takes about 60 seconds to run BPST in one simulation replicate, even with 2000 observations over 319 triangles, on an x64 PC with an Intel dual-core i5 processor. Overall, the proposed algorithm is fast to compute.
Example 2. In this example, we consider a rectangular domain, [0, 1]², where there is no irregular shape or complex boundary problem. In this case, classical methods for spatial data analysis, such as KRIG and TPS, will not encounter any difficulty. We obtain the true signal and noisy observation for each coordinate pair lying on a 101 × 101 grid over [0, 1]² using the following model:
Y = Zᵀβ + g(X1, X2) + ξ(X1, X2) + ε,
where β = (−1, 1)ᵀ and g(x1, x2) = 5 sin(2π(x1² + x2²)). The random error ε is generated from an N(0, σ_ε²) distribution with σ_ε = 0.5, and the process ξ(·) is generated from a stationary Gaussian random field with the Matérn(1,1) covariance structure. The components of Z and ε are standard normal, and Z, ξ and ε are independent of each other. Similar to Example 1, we simulate Z1 ∼ uniform[−1, 1] and Z2 = cos[π(ρ(X1² + X2²) + (1 − ρ)U)], where ρ = 0.0 or 0.7, and U ∼ uniform[−1, 1] is independent of (X1, X2) and Z1. Next we take 100 Monte Carlo random samples of size n = 200 from the 101 × 101 grid points.
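The stationary spatial process ξ(·) can be drawn as follows; note that the exact parameterisation behind the paper's "Matérn(1,1)" label is not spelled out here, so the length scale and smoothness below are assumptions used purely for illustration.

```python
# Sample a stationary Gaussian random field with a Matern covariance at given coords.
import numpy as np
from sklearn.gaussian_process.kernels import Matern

def sample_matern_field(coords, length_scale=1.0, nu=1.0, rng=None):
    rng = np.random.default_rng(rng)
    K = Matern(length_scale=length_scale, nu=nu)(coords)          # covariance matrix
    L = np.linalg.cholesky(K + 1e-10 * np.eye(len(coords)))       # jitter for stability
    return L @ rng.standard_normal(len(coords))
```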
Figure 4 (a) and (b) display the true sinusoidal surface and the contour map, respectively. Figure 4 (c) represents the Matérn structure. We use the triangulations in Figure 4 (e) and (f), which contain 8 triangles and 9 vertices, and 32 triangles and 25 vertices, respectively. In addition, the points in Figure 4 (d) demonstrate the sampled location points of replicate 1.
[Figure 4 about here]
Similar to the study in Example 1, we also compare the proposed BPST estimator with the estimators from the KRIG, TPS, SOAP, LFE and QFE methods, which are implemented in the same way as in Example 1. Specifically, for KRIG, we choose the true Matérn covariance with smoothness parameter ν = 1. To see the accuracy of the estimators, we compute the RMSEs of the coefficient estimators and of the estimator of σ_ε. To see the overall prediction accuracy, for each replication we make predictions over the 101 × 101 grid points on the domain using the different methods and compare the predicted values with the true observations of Y at these grid points; we report the average mean squared prediction error (MSPE) over all replications.
All the results are summarized in Table 3. As expected, KRIG works pretty well since the
true covariance structure is used in the KRIG fitting. When ρ = 0, TPS, KRIG and BPST
all perform very well. When ρ = 0.7, KRIG and BPST are the best among all the estimators,
and both of them outperform the rest of the estimators. In both scenarios, the differences
between BPST and KRIG are almost unnoticeable. One also notices that, compared with the
FEM estimators (LFE and QFE), our BPST estimator shows much better performance in terms of both estimation and prediction, because BPST provides a more flexible and easier construction of splines with piecewise polynomials of various degrees and smoothness than the FEM method. Finally, as pointed out in Wood, Bravington and Hedley (2008), the standard (linear) FEM method requires a very fine triangulation in order to reach a certain approximation power; however, BPST does not need such a strict fineness requirement, as it uses piecewise polynomials of degree ≥ 5, yielding a higher-order approximation power. In fact, we have used 64 times more triangles in the FEM than for the BPST in our simulation. That is, the BPST is computationally more efficient than the FEM for approximating smooth functions.
[Table 3 about here]
Table 4 lists the accuracy results of the standard error formula in (3.6) for β̂1 and β̂2 using
BPST. From Table 4, one sees that the estimated standard errors based on sample size n = 200
are very accurate.
[Table 4 about here]
5. Application to California House Value Data
In this section we apply the proposed method to analyze the California house value data from the StatLib repository. The data appeared in Pace and Barry (1997). The spatial data consist of information on all the block groups in California, defined by the centroids of census enumeration areas. In the data set, a block group on average includes 1425.5 individuals living in a geographically compact area, and there are 20,640 block groups in the final data.
In this paper, we study how different features and factors influence the real estate property
prices. The response variable is the median house value (Value). The investigated factors
include the median income (MedInc), median house age (Age), the average number of bedrooms
(AveBedrms), housing density as reflected by the number of households (Hhd) in each block,
and the average occupancy in each household (AveOccup). It is obvious that the location
of a house is very crucial for making an accurate prediction, so we also include the latitude
(Latitude) and longitude (Longitude) of the block. We model the median house value using
the following PLM:
log(Value) = β0 + β1 MedInc + β2 log(Age) + β3 log(AveBedrms)
+β4 log(AveOccup) + β5 log(Hhd) + g(Latitude, Longitude).
(5.1)
To fit model (5.1), we use six different methods: KRIG, TPS, SOAP, LFE, QFE, and BPST.
16
Li Wang, Guannan Wang, Min-Jun Lai and Lei Gao
For comparison, we also consider the purely linear model without using the spatial information:
log(Value) = β0 + β1 MedInc + β2 log(Age) + β3 log(AveBedrms)
+β4 log(AveOccup) + β5 log(Hhd),
(5.2)
and fit it using the ordinary linear least squares (OLS) method.
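The baseline fit (5.2) is a plain linear regression; as a point of reference, the following Python sketch (not the authors' code; the data frame column names are assumptions matching the variable labels above) computes the OLS estimates and their standard errors.

```python
# OLS fit of the purely linear model (5.2) for the California housing data.
import numpy as np

def fit_ols(df):
    """df is assumed to be a pandas DataFrame with the columns used in (5.2)."""
    X = np.column_stack([
        np.ones(len(df)),
        df["MedInc"],
        np.log(df["Age"]),
        np.log(df["AveBedrms"]),
        np.log(df["AveOccup"]),
        np.log(df["Hhd"]),
    ])
    y = np.log(df["Value"])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se
```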
In the following, we report our estimation results for the linear and nonlinear components in model (5.1) obtained using the KRIG, TPS, SOAP, LFE, QFE and BPST methods, together with the OLS fit of model (5.2). Table 6
summarizes the coefficient estimation results of all procedures, where the BPST confidence
intervals are constructed based on the standard error (s.e.) calculated using (3.6). Note that
the coefficient on “log(Age)” is positive, 0.165 (s.e. = 0.006), in the fitted linear regression model (5.2) via OLS, which is clearly against common sense in the residential real estate market. Using the PLM (5.1) with the spatial coordinates, we obtain negative coefficients for “log(Age)” regardless of which spatial method is employed. The coefficient for the variable “log(AveBedrms)” is −0.037 with a standard error of 0.018 using OLS. In contrast, all the semiparametric methods unanimously suggest that the coefficient for “log(AveBedrms)” is positive in the PLM (5.1) after the location effect is controlled for. In addition, the coefficient on “log(Hhd)” is positive, 0.088, and very significant (s.e. = 0.004) in the OLS fit, but it is statistically insignificant when we apply the PLM (5.1) to the data. In summary, compared with the OLS method, our results from the PLM are more consistent with intuition about the real estate market. The counter-intuitive phenomena in OLS are perhaps due to the misspecification of model (5.2), which completely ignores the location of the property.
[Table 6 about here]
Based on the median house values, we classify the houses in the dataset into six different groups: (1) less than 50K, (2) 50K–100K, (3) 100K–150K, (4) 150K–200K, (5) 200K–300K, and (6) greater than 300K; these groups are plotted in Figure 5. We plot the estimated median
house values using different methods in Figure 6, where different colors are used to represent
the value of houses as in Figure 5. All the plots in Figures 5 and 6 show that expensive houses
are clustered around the major cities and inland house values are lower than coastal house
values. The OLS significantly underestimates the coastal enclaves of expensive houses and
overestimates the house values in the central valley; see Figure 6 (a). In contrast, the PLM
methods in Figure 6 (b)-(g) provide much more accurate estimates.
[Figures 5, 6 about here]
Figure 7 further demonstrates the differences between the estimated house values and
the observed house values using seven different methods. As seen in Figure 7, some methods,
especially the OLS, have difficulties in estimating the house values in major metropolitan areas
and the central valley. The proposed BPST method dramatically reduces the estimation errors
in those areas.
[Figure 7 about here]
To evaluate different methods, we also estimate the out-of-sample prediction errors of each
method using the 5-fold cross validation. We randomly split all the observations into five
roughly equal-sized parts. For each k = 1, . . . , 5, we leave out part k, fit the model to the
other four parts (combined), and then obtain predictions for the left-out kth part. Table 5
summarizes the mean squared prediction errors of the logarithm of the median house value
based on different methods. As expected, incorporating the spatial information dramatically
reduces the prediction errors, and the proposed BPST method provides the most accurate
prediction on the median housing values among all the methods.
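The cross-validation scheme described above is generic; a minimal Python sketch follows, with `fit` and `predict` as placeholders for any of the candidate estimators (the callables and their interfaces are assumptions for illustration).

```python
# 5-fold cross-validated mean squared prediction error.
import numpy as np

def five_fold_mspe(X, y, fit, predict, n_folds=5, rng=None):
    rng = np.random.default_rng(rng)
    folds = rng.permutation(len(y)) % n_folds      # random, roughly equal-sized parts
    errors = []
    for k in range(n_folds):
        tr, te = folds != k, folds == k
        model = fit(X[tr], y[tr])                  # fit on the other four parts
        pred = predict(model, X[te])               # predict the left-out part
        errors.append(np.mean((y[te] - pred) ** 2))
    return float(np.mean(errors))
```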
[Table 5 about here]
6. Concluding Remarks
In this paper, we have considered PLMs for analyzing spatial data. We introduce a framework of bivariate penalized splines over triangulations for the semiparametric estimation. Our work differs from existing works in four major aspects.
First, the proposed estimator solves the problem of “leakage” across complex domains, from which many conventional smoothing tools suffer. The numerical results of the simulation show that our method is very effective on both regular and irregular domains.
Secondly, we provide new statistical theory for estimating the PLM for data distributed over complex spatial domains. It is shown that our estimates of both the parametric and nonparametric parts of the model enjoy excellent asymptotic properties. In particular, we have shown that our estimates of the coefficients in the parametric part are asymptotically normal, and we have derived the convergence rate of the nonparametric component under regularity
conditions. We have also provided a standard error formula for the estimated parameters and
our simulation studies show that the standard errors are estimated with good accuracy. The
theoretical results provide measures of the effect of covariates after adjusting for the location
effect. In addition, they give valuable insights into the accuracy of our estimate of the PLM
and permit joint inference for the parameters.
Thirdly, compared with the conventional FEM, our method provides a more flexible and easier construction of splines with piecewise polynomials of various degrees and various orders of smoothness.
Finally, the proposed method greatly enhances the application of PLMs to spatial data analysis. We do not require the data to be evenly distributed or to lie on a regularly-spaced grid. When we have regions of sparse data, PLS splines provide a more convenient tool for data fitting than DLS splines, since the roughness penalty helps regularize the estimation. Our estimation is computationally fast and efficient, since the estimator can be formulated as a penalized regression problem and computed using a QR decomposition.
This paper leaves open the problem of how to choose a triangulation for a given data set.
An optimal triangulation 4 enables us to achieve the best approximation of the given data
using a spline space S(4) of a fixed degree and fixed smoothness. We leave the problem for
our future exploration.
Acknowledgment
The first author’s research was supported in part by National Science Foundation grants
DMS-11-06816 and DMS-15-42332, and the third author’s research was supported in part by
National Science Foundation grant DMS-15-21537.
Appendices
A.1. Preliminaries
In the following, we use c, C, c1, c2, C1, C2, etc. as generic constants, which may be different even in the same line. For functions f1 and f2 on Ω × R^p, we define the empirical inner product and norm as ⟨f1, f2⟩_n = n^{-1} Σ_{i=1}^n f1(Xi, Zi) f2(Xi, Zi) and ‖f1‖²_n = ⟨f1, f1⟩_n. If f1 and f2 are L2-integrable, we define the theoretical inner product and theoretical L2 norm as ⟨f1, f2⟩_{L2} = E{f1(Xi, Zi) f2(Xi, Zi)} and ‖f1‖²_{L2} = ⟨f1, f1⟩_{L2}. Furthermore, let ‖·‖_{Eυ} be the norm introduced by the inner product ⟨·, ·⟩_{Eυ}, where
⟨g1, g2⟩_{Eυ} = ∫_Ω { Σ_{i+j=υ} (D^i_{x1} D^j_{x2} g1)² }^{1/2} { Σ_{i+j=υ} (D^i_{x1} D^j_{x2} g2)² }^{1/2} dx1 dx2
for g1 and g2 on Ω.
Lemma A.1. [Lai and Schumaker (2007)] Let {Bξ}_{ξ∈K} be the Bernstein polynomial basis for the spline space S with smoothness r, where K stands for an index set. Then there exist positive constants c, C depending on the smoothness r and the shape parameter δ such that
c|△|² Σ_{ξ∈K} |γξ|² ≤ ‖ Σ_{ξ∈K} γξ Bξ ‖²_{L2} ≤ C|△|² Σ_{ξ∈K} |γξ|²
for all γξ, ξ ∈ K.
With the above stability condition, Lai and Wang (2013) established the following uniform
rate at which the empirical inner product approximates the theoretical inner product.
Lemma A.2. [Lemma 2 of the Supplement of Lai and Wang (2013)] Let g1 = Σ_{ξ∈K} cξ Bξ and g2 = Σ_{ζ∈K} c̃ζ Bζ be any spline functions in S. Under Condition (C4), we have
sup_{g1,g2∈S} |⟨g1, g2⟩_n − ⟨g1, g2⟩_{L2}| / (‖g1‖_{L2} ‖g2‖_{L2}) = OP{(N log n)^{1/2} / n^{1/2}}.
Lemma A.3. [Corollary of Theorem 6 in Lai (2008)] Assume g(·) is in the Sobolev space W^{ℓ+1,∞}(Ω). For any bi-integer (α1, α2) with 0 ≤ α1 + α2 ≤ υ, there exists a unique spline fit g*(·) ∈ S such that
‖D^{α1}_{x1} D^{α2}_{x2}(g − g*)‖_∞ ≤ C (F2/F1) |△|^{ℓ+1−α1−α2} |g|_{ℓ+1,∞},
where C is an absolute constant depending on r and δ.
For any smooth bivariate function g(·) and λ > 0, define
s_{λ,g} = argmin_{s∈S} Σ_{i=1}^n {g(Xi) − s(Xi)}² + λ Eυ(s),   (A.1)
the penalized least squares spline of g(·). Then s_{0,g} is the nonpenalized spline estimator of g(·).
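Computationally, (A.1) is a generalised ridge problem in the spline coefficients; a minimal Python sketch (with basis matrix B, penalty matrix P and the sampled values g(Xi) supplied by the user) is:

```python
# Coefficients of the penalised least squares spline in (A.1).
import numpy as np

def pls_spline_coefficients(B, P, g_values, lam):
    # argmin_c ||g - B c||^2 + lam * c' P c
    return np.linalg.solve(B.T @ B + lam * P, B.T @ g_values)
```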
Lemma A.4. Assume g(·) is in the Sobolev space W^{ℓ+1,∞}(Ω). For any bi-integer (α1, α2) with 0 ≤ α1 + α2 ≤ υ, there exists an absolute constant C depending on r and δ such that, with probability approaching 1,
‖D^{α1}_{x1} D^{α2}_{x2}(g − s_{0,g})‖_∞ ≤ C (F2/F1) |△|^{ℓ+1−α1−α2} |g|_{ℓ+1,∞}.
Proof. Note that
‖D^{α1}_{x1} D^{α2}_{x2}(g − s_{0,g})‖_∞ ≤ ‖D^{α1}_{x1} D^{α2}_{x2}(g − g*)‖_∞ + ‖D^{α1}_{x1} D^{α2}_{x2}(g* − s_{0,g})‖_∞ ≤ ‖D^{α1}_{x1} D^{α2}_{x2}(g − g*)‖_∞ + ‖D^{α1}_{x1} D^{α2}_{x2}(s_{0,g*−g})‖_∞.
The desired result follows from Lemma A.3, the projection bound result in Golitschek and Schumaker (2002), and the differentiation properties of bivariate splines over triangulations given in Lai and Schumaker (2007).
Lemma A.5. Suppose g(·) is in the Sobolev space W^{ℓ+1,∞}(Ω), and let s_{λ,g} be its penalized spline estimator defined in (A.1). Under Conditions (C2) and (C3),
‖g − s_{λ,g}‖_n = OP( (F2/F1)|△|^{ℓ+1}|g|_{ℓ+1,∞} + λ/(n|△|²) { |g|_{υ,∞} + (F2/F1)|△|^{ℓ+1−υ}|g|_{ℓ+1,∞} } ).
Proof. Note that s_{λ,g} is characterized by the orthogonality relations
n⟨g − s_{λ,g}, u⟩_n = λ⟨s_{λ,g}, u⟩_{Eυ},   for all u ∈ S,   (A.2)
while s_{0,g} is characterized by
⟨g − s_{0,g}, u⟩_n = 0,   for all u ∈ S.   (A.3)
By (A.2) and (A.3), n⟨s_{0,g} − s_{λ,g}, u⟩_n = λ⟨s_{λ,g}, u⟩_{Eυ} for all u ∈ S. Replacing u by s_{0,g} − s_{λ,g} yields that
n‖s_{0,g} − s_{λ,g}‖²_n = λ⟨s_{λ,g}, s_{0,g} − s_{λ,g}⟩_{Eυ}.   (A.4)
Thus, by the Cauchy-Schwarz inequality,
n‖s_{0,g} − s_{λ,g}‖²_n ≤ λ‖s_{λ,g}‖_{Eυ} ‖s_{0,g} − s_{λ,g}‖_{Eυ} ≤ λ‖s_{λ,g}‖_{Eυ} sup_{f∈S, ‖f‖_n≠0} { ‖f‖_{Eυ}/‖f‖_n } ‖s_{0,g} − s_{λ,g}‖_n.
Similarly, using (A.4), we have
n‖s_{0,g} − s_{λ,g}‖²_n = λ{ ⟨s_{λ,g}, s_{0,g}⟩_{Eυ} − ⟨s_{λ,g}, s_{λ,g}⟩_{Eυ} } ≥ 0.
Therefore, by the Cauchy-Schwarz inequality,
‖s_{λ,g}‖²_{Eυ} ≤ ⟨s_{λ,g}, s_{0,g}⟩_{Eυ} ≤ ‖s_{λ,g}‖_{Eυ} ‖s_{0,g}‖_{Eυ},
which implies that ‖s_{λ,g}‖_{Eυ} ≤ ‖s_{0,g}‖_{Eυ}. Therefore,
‖s_{0,g} − s_{λ,g}‖_n ≤ n^{-1} λ ‖s_{0,g}‖_{Eυ} sup_{f∈S, ‖f‖_n≠0} { ‖f‖_{Eυ}/‖f‖_n }.
By Lemma A.4, with probability approaching 1,
‖s_{0,g}‖_{Eυ} ≤ C1 A_Ω { |g|_{υ,∞} + Σ_{α1+α2=υ} ‖D^{α1}_{x1} D^{α2}_{x2}(g − s_{0,g})‖_∞ } ≤ C2 A_Ω { |g|_{υ,∞} + (F2/F1)|△|^{ℓ+1−υ}|g|_{ℓ+1,∞} },   (A.5)
where A_Ω denotes the area of Ω. By Markov's inequality, for any f ∈ S, ‖f‖_{Eυ} ≤ C|△|^{-2} ‖f‖_{L2}. Lemma A.2 implies that sup_{f∈S} { ‖f‖_n / ‖f‖_{L2} } ≥ 1 − OP{(N log n)^{1/2}/n^{1/2}}. Thus, we have
sup_{f∈S, ‖f‖_n≠0} { ‖f‖_{Eυ}/‖f‖_n } ≤ C|△|^{-2} [ 1 − OP{(N log n)^{1/2}/n^{1/2}} ]^{-1/2} = OP(|△|^{-2}).   (A.6)
Therefore,
‖s_{0,g} − s_{λ,g}‖_n = OP( λ/(n|△|²) { |g|_{υ,∞} + (F2/F1)|△|^{ℓ+1−υ}|g|_{ℓ+1,∞} } ),
and
‖g − s_{λ,g}‖_n ≤ ‖g − s_{0,g}‖_n + ‖s_{0,g} − s_{λ,g}‖_n.
By Lemma A.4,
‖g − s_{0,g}‖_n ≤ ‖g − s_{0,g}‖_∞ = OP( (F2/F1)|△|^{ℓ+1}|g|_{ℓ+1,∞} ).
Thus, the desired result is established.
Lemma A.6. Under Assumptions (A1), (A2), (C4), (C5), there exist constants 0 < cU <
CU < ∞, such that with probability approaching 1 as n → ∞, cU Ip×p ≤ nU11 ≤ CU Ip×p , where
U11 is given in (2.8).
Proof. Denote by
Γλ = n^{-1} Bᵀ B + n^{-1} λ P = [ n^{-1} Σ_{i=1}^n Bξ(Xi) Bζ(Xi) + (λ/n) ⟨Bξ, Bζ⟩_{Eυ} ]_{ξ,ζ∈K}
a symmetric positive definite matrix. Then for V22 defined in (2.7), we can rewrite it as V22 = n Q2ᵀ Γλ Q2. Let αmin(λ) and αmax(λ) be the smallest and largest eigenvalues of Γλ. As shown in the proof of Theorem 2 in the Supplement of Lai and Wang (2013), there exist positive constants 0 < c3 < C3 such that under Conditions (C4) and (C5), with probability approaching 1, we have
c3|△|² ≤ αmin(λ) ≤ αmax(λ) ≤ C3{ |△|² + λ/(n|△|²) }.
Therefore, we have
c4{ |△|² + λ/(n|△|²) }^{-1} ‖a‖² ≤ n aᵀ V22^{-1} a = aᵀ (Q2ᵀ Γλ Q2)^{-1} a ≤ C4 |△|^{-2} ‖a‖².
Thus, by Assumption (A2), we have with probability approaching 1
c5{ |△|² + λ/(n|△|²) }^{-1} |△|² ‖a‖² ≤ aᵀ V12 V22^{-1} V21 a = aᵀ Zᵀ B Q2 V22^{-1} Q2ᵀ Bᵀ Z a ≤ C5 ‖a‖².   (A.7)
According to (2.8) and (2.9), we have
(nU11)^{-1} = n^{-1}(V11 − V12 V22^{-1} V21) = n^{-1}(Zᵀ Z − V12 V22^{-1} V21).
The desired result follows from Assumptions (A1), (A2) and (A.7).
A.2. Proof of Theorem 1
Let μi = Ziᵀ β0 + g0(Xi), μ = (μ1, ..., μn)ᵀ, and let ε = (ε1, ..., εn)ᵀ. Define
β̃_μ = U11 Zᵀ ( I − B Q2 V22^{-1} Q2ᵀ Bᵀ ) μ,   (A.8)
β̃_ε = U11 Zᵀ ( I − B Q2 V22^{-1} Q2ᵀ Bᵀ ) ε.   (A.9)
Then β̂ − β0 = (β̃_μ − β0) + β̃_ε.
Lemma A.7. Under Assumptions (A1), (A2) and (C1)-(C5), ‖β̃_μ − β0‖ = oP(n^{-1/2}) for β̃_μ in (A.8).
Proof. Let g0 = (g0(X1), ..., g0(Xn))ᵀ. It is clear that
β̃_μ − β0 = U11 Zᵀ ( I − B Q2 V22^{-1} Q2ᵀ Bᵀ ) g0 = U11 Zᵀ [ g0 − B Q2 {Q2ᵀ(Bᵀ B + λP)Q2}^{-1} Q2ᵀ Bᵀ g0 ] = nU11 A,
where A = (A1, ..., Ap)ᵀ, with
Aj = n^{-1} Zjᵀ [ g0 − B Q2 {Q2ᵀ(Bᵀ B + λP)Q2}^{-1} Q2ᵀ Bᵀ g0 ]
for Zjᵀ = (Z1j, ..., Znj). Next we derive the order of Aj, 1 ≤ j ≤ p, as follows. For any gj ∈ S, by (A.2) we have
Aj = ⟨zj, g0 − s_{λ,g0}⟩_n = ⟨zj − gj, g0 − s_{λ,g0}⟩_n + (λ/n) ⟨s_{λ,g0}, gj⟩_{Eυ}.
For any j = 1, ..., p, let hj(·) be the function h(·) that minimizes E{Zij − h(Xi)}², as defined in (3.2). According to Lemma A.3, there exists a function h̃j ∈ S satisfying
‖h̃j − hj‖_∞ ≤ C (F2/F1) |△|^{ℓ+1} |hj|_{ℓ+1,∞};   (A.10)
then
Aj = ⟨zj − hj, g0 − s_{λ,g0}⟩_n + ⟨hj − h̃j, g0 − s_{λ,g0}⟩_n + (λ/n) ⟨s_{λ,g0}, h̃j⟩_{Eυ} = A_{j,1} + A_{j,2} + A_{j,3}.
Since hj satisfies ⟨zj − hj, ψ⟩_{L2(Ω)} = 0 for any ψ ∈ L2(Ω), E(A_{j,1}) = 0. According to Proposition 1 in Lai and Wang (2013),
‖g0 − s_{λ,g0}‖_∞ = OP( (F2/F1)|△|^{ℓ+1}|g0|_{ℓ+1,∞} + λ/(n|△|³) { |g0|_{2,∞} + (F2/F1)|△|^{ℓ−1}|g0|_{ℓ+1,∞} } ).
Next,
Var(A_{j,1}) = n^{-2} Σ_{i=1}^n E[ {Zij − hj(Xi)}² {g0(Xi) − s_{λ,g0}(Xi)}² ] ≤ (‖g0 − s_{λ,g0}‖²_∞ / n) ‖zj − hj‖²_{L2},
which together with E(A_{j,1}) = 0 implies that
|A_{j,1}| = OP( (F2/(n^{1/2}F1))|△|^{ℓ+1}|g0|_{ℓ+1,∞} + λ/(n^{3/2}|△|³) { |g0|_{2,∞} + (F2/F1)|△|^{ℓ−1}|g0|_{ℓ+1,∞} } ).
The Cauchy-Schwarz inequality, Lemma A.5 and (A.10) imply that
|A_{j,2}| ≤ ‖hj − h̃j‖_n ‖g0 − s_{λ,g0}‖_n = OP( (F2/F1)|△|^{ℓ+1}|hj|_{ℓ+1,∞} ) × OP( (F2/F1)|△|^{ℓ+1}|g0|_{ℓ+1,∞} + λ/(n|△|²) { |g0|_{2,∞} + (F2/F1)|△|^{ℓ−1}|g0|_{ℓ+1,∞} } ).
Finally, by (A.5), we have
|A_{j,3}| ≤ (λ/n) ‖s_{λ,g0}‖_{Eυ} ‖h̃j‖_{Eυ} ≤ (λ/n) ‖s_{0,g0}‖_{Eυ} ‖h̃j‖_{Eυ}
≤ (λ/n) C1 { |g0|_{2,∞} + (F2/F1)|△|^{ℓ−1}|g0|_{ℓ+1,∞} } { |hj|_{2,∞} + (F2/F1)|△|^{ℓ−1}|hj|_{ℓ+1,∞} }.
Combining all the above results yields that
|Aj| = OP( n^{-1/2} [ (F2/F1)|△|^{ℓ+1}|g0|_{ℓ+1,∞} + λ/(n|△|³) { |g0|_{2,∞} + (F2/F1)|△|^{ℓ−1}|g0|_{ℓ+1,∞} } ] )
for j = 1, ..., p. By Assumptions (C3)-(C5), |Aj| = oP(n^{-1/2}) for j = 1, ..., p. In addition, we have nU11 = OP(1) according to Lemma A.6. Therefore, ‖β̃_μ − β0‖ = oP(n^{-1/2}).
Lemma A.8. Under Assumptions (A1)-(A3) and (C1)-(C5), as n → ∞,
[ Var( β̃_ε | {(Xi, Zi), i = 1, ..., n} ) ]^{-1/2} β̃_ε −→ N(0, I_{p×p}),
where β̃_ε is given in (A.9).
Proof. Note that
β̃_ε = U11 Zᵀ ( I − B Q2 V22^{-1} Q2ᵀ Bᵀ ) ε.
For any b ∈ R^p with ‖b‖ = 1, we can write bᵀ β̃_ε = Σ_{i=1}^n αi εi, where
αi² = n^{-2} bᵀ (nU11) ( Zi − V12 V22^{-1} Q2ᵀ Bi ) ( Ziᵀ − Biᵀ Q2 V22^{-1} V21 ) (nU11) b,
and, conditioning on {(Xi, Zi), i = 1, ..., n}, the αi εi are independent. By Lemma A.6, we have
max_{1≤i≤n} αi² ≤ C n^{-2} max_{1≤i≤n} { ‖Zi‖² + ‖V12 V22^{-1} Q2ᵀ Bi‖² },
where for any a ∈ R^p,
aᵀ V12 V22^{-1} Q2ᵀ Bi = n^{-1} aᵀ V12 (Q2ᵀ Γλ Q2)^{-1} Q2ᵀ Bi ≤ C n^{-1} |△|^{-2} aᵀ Zᵀ B Bi,
and the j-th component of n^{-1} Zᵀ B Bi is n^{-1} Σ_{i'=1}^n Zi'j Σ_{ξ∈K} Bξ(Xi') Bξ(Xi). Using Assumptions (A1) and (A2), we have E{ n^{-1} Σ_{i'=1}^n Zi'j Σ_{ξ∈K} Bξ(Xi') Bξ(Xi) }² = O(1) for large n; thus, with probability approaching 1,
max_{1≤i≤n} n^{-1} Σ_{i'=1}^n Σ_{ξ∈K} Zi'j Bξ(Xi') Bξ(Xi) = OP(1),
max_{1≤i≤n} ‖V12 V22^{-1} Q2ᵀ Bi‖² = OP(|△|^{-2}).
Therefore, max_{1≤i≤n} αi² = OP( n^{-2} |△|^{-2} ). Next, with probability approaching 1,
Σ_{i=1}^n αi² = Var( bᵀ β̃_ε | {(Xi, Zi), i = 1, ..., n} )
= bᵀ U11 Zᵀ ( I − B Q2 V22^{-1} Q2ᵀ Bᵀ ) ( I − B Q2 V22^{-1} Q2ᵀ Bᵀ ) Z U11 b σ²
= n^{-1} bᵀ (nU11) { n^{-1} Σ_{i=1}^n (Zi − Ẑi)(Zi − Ẑi)ᵀ } (nU11) b σ²,   (A.11)
where Ẑi is the i-th column of Zᵀ B Q2 V22^{-1} Q2ᵀ Bᵀ. Using Lemma A.6 again, we have Σ_{i=1}^n αi² ≥ c n^{-1}. So max_{1≤i≤n} αi² / Σ_{i=1}^n αi² = OP( n^{-1} |△|^{-2} ) = oP(1) from Assumption (C4). By the Lindeberg-Feller CLT, we have Σ_{i=1}^n αi εi / ( Σ_{i=1}^n αi² )^{1/2} −→ N(0, 1). Then the desired result follows.
For any j = 1, ..., p and λ > 0, define
s_{λ,zj} = argmin_{s∈S} Σ_{i=1}^n {zj(Xi) − s(Xi)}² + λ Eυ(s),   (A.12)
where zj is the coordinate mapping that maps z to its j-th component.
Lemma A.9. Under Assumptions (A1), (A2), (C2) and (C3), for s_{λ,zj} defined in (A.12), ‖s_{0,zj} − s_{λ,zj}‖_n = OP( λ n^{-1} |△|^{-5} ), j = 1, ..., p.
Proof. Note that
n⟨zj − s_{λ,zj}, u⟩_n = λ⟨s_{λ,zj}, u⟩_{Eυ},   ⟨zj − s_{0,zj}, u⟩_n = 0,   for all u ∈ S.
Inserting u = s_{0,zj} − s_{λ,zj} in the above yields that
n‖s_{0,zj} − s_{λ,zj}‖²_n = λ⟨s_{λ,zj}, s_{0,zj} − s_{λ,zj}⟩_{Eυ} = λ( ⟨s_{λ,zj}, s_{0,zj}⟩_{Eυ} − ⟨s_{λ,zj}, s_{λ,zj}⟩_{Eυ} ).
By the Cauchy-Schwarz inequality, ‖s_{λ,zj}‖²_{Eυ} ≤ ⟨s_{λ,zj}, s_{0,zj}⟩_{Eυ} ≤ ‖s_{λ,zj}‖_{Eυ} ‖s_{0,zj}‖_{Eυ}, which implies
‖s_{λ,zj}‖_{Eυ} ≤ ‖s_{0,zj}‖_{Eυ}.   (A.13)
By (A.6), we have for large n
n‖s_{0,zj} − s_{λ,zj}‖²_n ≤ λ‖s_{λ,zj}‖_{Eυ} ‖s_{0,zj} − s_{λ,zj}‖_n × OP(|△|^{-2});
thus, ‖s_{0,zj} − s_{λ,zj}‖_n ≤ ‖s_{0,zj}‖_{Eυ} × OP( λ n^{-1} |△|^{-2} ). Markov's inequality implies that
‖s_{0,zj}‖_{Eυ} ≤ (C1/|△|²) ‖s_{0,zj}‖_∞.   (A.14)
Note that ‖s_{0,zj}‖_∞ ≤ C|△|^{-2} max_{ξ∈K} n^{-1} Σ_{i=1}^n Bξ(Xi) Zij with probability approaching one. According to Assumptions (A1) and (A2), max_{ξ∈K} n^{-1} Σ_{i=1}^n Bξ(Xi) Zij = OP(|△|). The desired result follows.
Lemma A.10. Under Assumptions (A1)-(A3) and (C1)-(C5), for the covariance matrix Σ defined in (3.5), we have c*_Σ I_p ≤ Σ ≤ C*_Σ I_p, and Var( β̃_ε | {(Xi, Zi), i = 1, ..., n} ) = n^{-1} Σ^{-1} + oP(1).
Proof. According to (A.11),
Var( β̃_ε | {(Xi, Zi), i = 1, ..., n} ) = n^{-1} (nU11) { n^{-1} Σ_{i=1}^n (Zi − Ẑi)(Zi − Ẑi)ᵀ } (nU11) σ².
By the definition of U11^{-1} in (2.9), we have
(nU11)^{-1} = n^{-1} Σ_{i=1}^n Zi (Zi − Ẑi)ᵀ = [ ⟨zj, zj' − s_{λ,zj'}⟩_n ]_{1≤j,j'≤p}.
As in the proof of Lemma A.7, let h̃j ∈ S and hj satisfy (A.10). Then we have
⟨zj, zj' − s_{λ,zj'}⟩_n = ⟨zj − h̃j, zj' − s_{λ,zj'}⟩_n + (λ/n) ⟨s_{λ,zj'}, h̃j⟩_{Eυ}.   (A.15)
By (A.5), (A.13) and (A.14), we have
⟨s_{λ,zj'}, h̃j⟩_{Eυ} ≤ ‖s_{λ,zj'}‖_{Eυ} ‖h̃j‖_{Eυ} ≤ ‖s_{0,zj'}‖_{Eυ} ‖h̃j‖_{Eυ} ≤ (C/|△|²) ‖s_{0,zj'}‖_∞ ‖h̃j‖_{Eυ} ≤ (C C*/|△|³) { |hj|_{2,∞} + (F2/F1)|△|^{ℓ+1−υ}|hj|_{ℓ+1,∞} }.
Note that
⟨zj − h̃j, zj' − s_{λ,zj'}⟩_n = ⟨zj − hj, zj' − hj'⟩_n + ⟨hj − h̃j, hj' − h̃j'⟩_n + ⟨zj − hj, hj' − h̃j'⟩_n + ⟨hj − h̃j, zj' − hj'⟩_n + ⟨zj − hj, h̃j' − s_{λ,zj'}⟩_n + ⟨hj − h̃j, h̃j' − s_{λ,zj'}⟩_n.   (A.16)
According to (A.10), the second term on the right side of (A.16) satisfies
⟨hj − h̃j, hj' − h̃j'⟩_n ≤ ‖hj − h̃j‖_∞ ‖hj' − h̃j'‖_∞ = oP(1).
By Lemma A.2 and (A.10), the third term on the right side of (A.16) satisfies
⟨zj − hj, hj' − h̃j'⟩_n ≤ { ‖zj − hj‖_{L2} (1 + oP(1)) } ‖hj' − h̃j'‖_∞ = oP(1).
Similarly, we have
⟨hj − h̃j, zj' − hj'⟩_n = oP(1).
From the triangle inequality, we have
‖h̃j − s_{λ,zj}‖_n ≤ ‖h̃j − hj‖_n + ‖hj − s_{0,zj}‖_n + ‖s_{0,zj} − s_{λ,zj}‖_n.
According to (A.10) and Lemma A.9, we have
‖h̃j − s_{λ,zj}‖_n ≤ ‖hj − s_{0,zj}‖_n + oP(1).
Define h*_{j,n} = argmin_{h∈S} ‖zj − h‖_{L2}; then, from the triangle inequality, we have
‖hj − s_{0,zj}‖_n ≤ ‖hj − h*_{j,n}‖_n + ‖h*_{j,n} − s_{0,zj}‖_n.
Note that ‖hj − h*_{j,n}‖_{L2} = oP(1), and Lemma A.2 implies that ‖hj − h*_{j,n}‖_n = oP(1). Next note that ‖s_{0,zj} − h*_{j,n}‖²_{L2} = ‖zj − s_{0,zj}‖²_{L2} − ‖zj − h*_{j,n}‖²_{L2} and ‖zj − s_{0,zj}‖_n ≤ ‖zj − h*_{j,n}‖_n. Using Lemma A.2 again, we have
‖s_{0,zj} − h*_{j,n}‖²_{L2} = oP( ‖zj − h*_{j,n}‖²_{L2} ) + oP( ‖zj − s_{0,zj}‖²_{L2} ).
Since there exists a constant C such that ‖zj − h*_{j,n}‖_{L2} ≤ C, we have
‖zj − s_{0,zj}‖_{L2} ≤ ‖zj − h*_{j,n}‖_{L2} + ‖h*_{j,n} − s_{0,zj}‖_{L2} ≤ C + ‖h*_{j,n} − s_{0,zj}‖_{L2}.
Therefore, ‖h*_{j,n} − s_{0,zj}‖_{L2} = oP(1), and Lemma A.2 implies that ‖h*_{j,n} − s_{0,zj}‖_n = oP(1). As a consequence,
‖s_{0,zj} − hj‖_n = oP(1).   (A.17)
For the fifth term, by Lemma A.2 and (A.17), we have
⟨zj − hj, h̃j' − s_{λ,zj'}⟩_n ≤ { ‖zj − hj‖_{L2} (1 + oP(1)) } { ‖hj − s_{0,zj}‖_n + oP(1) } = oP(1).
Similarly, for the sixth term, we have
⟨hj − h̃j, h̃j' − s_{λ,zj'}⟩_n ≤ ‖hj − h̃j‖_n { ‖hj − s_{0,zj}‖_n + oP(1) } = oP(1).   (A.18)
Combining the above results from (A.15) to (A.18) gives that
⟨zj, zj' − s_{λ,zj'}⟩_n = ⟨zj − hj, zj' − hj'⟩_n + oP(1).
Therefore,
(nU11)^{-1} = n^{-1} Σ_{i=1}^n (Zi − Z̃i)(Zi − Z̃i)ᵀ + oP(1) = E[ (Zi − Z̃i)(Zi − Z̃i)ᵀ ] + oP(1).
Hence,
Var( β̃_ε | {(Xi, Zi), i = 1, ..., n} ) = n^{-1} Σ^{-1} + oP(1).
A.3. Proof of Theorem 2
Let HZ = I − Z(Zᵀ Z)^{-1} Zᵀ; then
θ̂ = U22 Q2ᵀ Bᵀ HZ Y = U22 Q2ᵀ Bᵀ HZ g0 + U22 Q2ᵀ Bᵀ HZ ε = θ̃_μ + θ̃_ε.
According to Lemma A.3, ‖s_{0,g0} − g0‖_∞ ≤ C (F2/F1) |△|^{ℓ+1} |g0|_{ℓ+1,∞}. Denote by γ0 = Q2 θ0 the spline coefficients of s_{0,g0}. Then we have the following decomposition: θ̂ − θ0 = (θ̃_μ − θ0) + θ̃_ε.
Note that
θ̃_μ − θ0 = U22 Q2ᵀ Bᵀ HZ g0 − θ0 = U22 Q2ᵀ Bᵀ HZ (g0 − B Q2 θ0) − λ U22 Q2ᵀ P Q2 θ0.
According to (2.10), for any a,
aᵀ U22^{-1} a = aᵀ Q2ᵀ ( Bᵀ HZ B + λP ) Q2 a.
Since HZ is idempotent, its eigenvalues πj are either 0 or 1. Without loss of generality we can arrange the eigenvalues in decreasing order so that πj = 1 for j = 1, ..., m and πj = 0 for j = m + 1, ..., n. Therefore, we have
aᵀ (nU22)^{-1} a = n^{-1} Σ_{j=1}^m πj aᵀ Q2ᵀ Bᵀ ej ejᵀ B Q2 a + (λ/n) aᵀ Q2ᵀ P Q2 a,
where ej is the indicator vector, a zero vector except for an entry of one at position j. Using Markov's inequality, we have
(λ/n) Eυ( Σ_{ξ∈K} aξ Bξ ) ≤ (λ/n) (C1/|△|²) C2 ‖a‖².
Thus, by Conditions (C4) and (C5), n aᵀ U22 a ≤ C|△|^{-2}. Next,
‖U22 Q2ᵀ Bᵀ HZ (g0 − B Q2 θ0)‖ ≤ C^{1/2} |△|^{-1} n^{-1} ‖Bᵀ HZ (g0 − B Q2 θ0)‖
≤ C^{1/2} |△|^{-1} n^{-1} { Σ_{ξ∈K} (Bξᵀ HZ (g0 − B Q2 θ0))² }^{1/2} = OP( (F2/F1) |△|^ℓ |g0|_{ℓ+1,∞} ),
and
λ ‖U22 Q2ᵀ P Q2 θ0‖ ≤ (Cλ/(n|△|⁴)) ‖s_{0,g0}‖_{Eυ} ≤ (Cλ/(n|△|⁴)) { |g0|_{2,∞} + (F2/F1)|△|^{ℓ−1}|g0|_{ℓ+1,∞} }.
Thus,
‖θ̃_μ − θ0‖ = OP( λ/(n|△|⁴) |g0|_{2,∞} + {1 + λ/(n|△|⁵)} (F2/F1) |△|^ℓ |g0|_{ℓ+1,∞} ).
For any α with ‖α‖ = 1, we write αᵀ θ̃_ε = Σ_{i=1}^n αi εi, where the αi are the entries of the row vector αᵀ U22 Q2ᵀ Bᵀ HZ, so that Σ_{i=1}^n αi² = αᵀ U22 Q2ᵀ Bᵀ HZ B Q2 U22 α. Following the same arguments as those in Lemma A.8, we have max_{1≤i≤n} αi² = OP( n^{-2} |△|^{-4} ). Thus, ‖θ̃_ε‖ ≤ |△|^{-1} |αᵀ θ̃_ε| = |△|^{-1} | Σ_{i=1}^n αi εi | = OP( |△|^{-2} n^{-1/2} ). Therefore,
‖θ̂ − θ0‖ = OP( λ/(n|△|⁴) |g0|_{2,∞} + {1 + λ/(n|△|⁵)} (F2/F1) |△|^ℓ |g0|_{ℓ+1,∞} + 1/(√n |△|²) ).
Observing that ĝ(x) = B(x) γ̂ = B(x) Q2 θ̂, we have
‖ĝ − g0‖_{L2} ≤ ‖ĝ − s_{0,g0}‖_{L2} + C (F2/F1) |△|^{ℓ+1} |g0|_{ℓ+1,∞}.
According to Lemma A.1, we have
‖ĝ − g0‖_{L2} ≤ C { |△| ‖γ̂ − γ0‖ + (F2/F1) |△|^{ℓ+1} |g0|_{ℓ+1,∞} }
= OP( λ/(n|△|³) |g0|_{2,∞} + {1 + λ/(n|△|⁵)} (F2/F1) |△|^{ℓ+1} |g0|_{ℓ+1,∞} + 1/(√n |△|) ).
The proof is completed.
References
Awanou, G. and Lai, M. J. and Wenston, P. (2005). The multivariate spline method for
scattered data fitting and numerical solutions of partial differential equations. Wavelets
and splines: Athens 2005 24–74.
Chen, R. and Liang, H. and Wang, J. (2011). On determination of linear components in
additive models. Journal of Nonparametric Statistics 23 367–383.
Craven, P. and Wahba, G. (1979). Smoothing noisy data with spline functions. Numerische
Mathematik 31 377–403.
Eilers, P. (2006). P-spline smoothing on difficult domains. [online] Available at
http://www.statistik.lmu.de/sfb386/workshop/smcs2006/slides/eilers.pdf
Ettinger, B. and Guillas, S. and Lai, M. J. (2015). Bivariate splines for functional regression
models with application to ozone forecasting. Environmetrics Accepted.
Furrer, R., Nychka, D. and Sain, S. (2011). Package ‘fields’. R package version 6.6.1.
[online] Available at http://cran.r-project.org/web/packages/fields/fields.pdf.
Golitschek, M. and Schumaker, L. L. (2002). Bounds on projections onto bivariate polynomial
spline spaces with stable local bases. Constructive Approximation 18 241–254.
Green, P. J. and Silverman, B. W. (1994). Nonparametric regression and generalized linear
models. Chapman and Hall, London.
Guillas, S. and Lai, M. J. (2010). Bivariate splines for spatial functional regression models.
Journal of Nonparametric Statistics 22 477–497.
Härdle and W., Liang, H. and Gao, J. T. (2000). Partially linear models. Heidelberg: Springer
Physica-Verlag.
Huang, J. Z. and Zhang, L. and Zhou, L. (2007). Efficient estimation in marginal partially
linear models for longitudinal/clustered data using splines. Scandinavian Journal of
Statistics 34 451–477.
Lai, M. J. (2008). Multivariate splines for data fitting and approximation. Conference Proceedings of the 12th Approximation Theory 210–228.
Lai, M. J. and Schumaker, L. L. (2007). Spline functions on triangulations. Cambridge
University Press.
Lai, M. J. and Schumaker, L. L. (1998). Approximation power of bivariate splines. Advances
in Computational Mathematics 29 595-623.
Lai, M. J. and Wang, L. (2013). Bivariate penalized splines for regression. Statistica Sinica
23 1399-1417.
Li, Y. and Ruppert, D. (2008). On the asymptotics of penalized splines. Biometrika 95
415–436.
Liang, H. and Li, R. (2009). Variable selection for partially linear models with measurement
errors. Journal of the American Statistical Association 104 234–248.
Liu, X. and Guillas, S. and Lai, M. J. (2015). Efficient spatial modeling using the SPDE
approach with bivariate splines. Journal of Computational and Graphical Statistics Accepted.
Liu, X. and Wang, L. and Liang, H. (2011). Estimation and variable selection for semiparametric additive partial linear models. Statistica Sinica 21 1225–1248.
Ma, S. and Song, Q. and Wang, L. (2013). Simultaneous variable selection and estimation in
semiparametric modeling of longitudinal/clustered data. Bernoulli 19 252–274.
Ma, S. and Yang, L. (2011). Spline-backfitted kernel smoothing of partially linear additive
model. Journal of Statistical Planning and Inference 141 204–219.
Marx, B. and Eilers, P. (2005) Multidimensional penalized signal regression. Technometrics
47 13–22.
Pace, R. K. and Barry, R. (1997) Sparse spatial autoregressions. Statistics & Probability
Letters 33 291–297.
Persson, P. O. and Strang, G. (2004). A simple mesh generator in MATLAB. SIAM Review
46 329–345.
Ramsay, T. (2002). Spline smoothing over difficult regions. Journal of the Royal Statistical
Society, Series B 64 307–319.
Sangalli, L. and Ramsay, J. and Ramsay, T. (2013). Spatial spline regression models. Journal
of the Royal Statistical Society, Series B 75 681–703.
Shewchuk, J. R. (1996). Applied Computational Geometry: Towards Geometric Engineering.
Springer-Verlag.
Wahba, G. (1990). Spline models for observational data. SIAM Publications, Philadelphia.
Wang, L. and Liu, X. and Liang, H. and Carroll, R. (2011). Estimation and variable selection
for generalized additive partial linear models. Annals of Statistics 39 1827–1851
Wang, H. and Ranalli, M. G. (2007). Low-rank smoothing splines on complicated domains.
Biometrics 63 209–217.
Wang, L. and Xue, L. and Qu, A. and Liang, H. (2014). Estimation and model selection in
generalized additive partial linear models for correlated data with diverging number of
covariates. Annals of Statistics 42 592–624.
Wood, S. N. (2003). Thin plate regression splines. Journal of the Royal Statistical Society,
Series B 65 95–114.
Wood, S. N. and Bravington, M. V. and Hedley, S. L. (2008). Soap film smoothing. Journal
of the Royal Statistical Society, Series B 70 931–955.
Xiao, L. and Huang, L. and Schrack, J. A. and Ferrucci, L. and Zipunnikov, V. and Crainiceanu,
C. (2015). Quantifying the life-time circadian rhythm of physical activity: a covariatedependent functional approach. Biostatistics 16 352–367.
Xiao, L. and Li, Y. and Ruppert, D. (2013). Fast bivariate P-splines: the sandwich smoother.
Journal of the Royal Statistical Society, Series B 75 577–599.
Zhang, H. and Cheng, G. and Liu, Y. (2011). Linear or nonlinear? Automatic structure
discovery for partially linear models. Journal of American Statistical Association 106
1099–1112.
Zhou, L. and Pan, H. (2014). Smoothing noisy data for irregular regions using penalized
bivariate splines on triangulations. Computational Statistics 29 263–281.
Table 1: Root mean squared errors of the estimates in Example 1.

ρ     Method       β1       β2       σε       g(·)
0.0   KRIG         0.0850   0.0591   0.0497   0.4240
      TPS          0.0801   0.0541   0.0415   0.3268
      SOAP         0.0706   0.0511   0.0284   0.2164
      LFE          0.0712   0.0515   0.0305   0.1713
      QFE          0.0720   0.0520   0.0341   0.1800
      BPST (△1)    0.0705   0.0510   0.0285   0.1686
      BPST (△2)    0.0702   0.0509   0.0283   0.1669
0.7   KRIG         0.0867   0.0589   0.0454   0.4207
      TPS          0.0788   0.0565   0.0377   0.3224
      SOAP         0.0705   0.0545   0.0241   0.2016
      LFE          0.0746   0.0537   0.0288   0.1673
      QFE          0.0770   0.0533   0.0341   0.1765
      BPST (△1)    0.0706   0.0556   0.0237   0.1648
      BPST (△2)    0.0702   0.0549   0.0236   0.1624
Table 2: Standard error estimates of the BPST (△2) coefficients in Example 1.

ρ     Parameter   SEmc     SEmean   SEmedian   SEmad
0.0   β1          0.0702   0.0660   0.0662     0.0032
      β2          0.0501   0.0539   0.0536     0.0027
0.7   β1          0.0723   0.0660   0.0661     0.0037
      β2          0.0551   0.0540   0.0540     0.0028
Table 3: Root mean squared errors of the estimates in Example 2.

ρ     Method       β1       β2       σε       MSPE
0.0   KRIG         0.0901   0.0754   0.3369   0.7700
      TPS          0.0835   0.0690   0.0656   0.7648
      SOAP         0.1106   0.0920   0.3223   0.9140
      LFE          0.1130   0.0930   0.4999   0.9649
      QFE          0.0940   0.0821   0.1223   0.8802
      BPST (△1)    0.0839   0.0699   0.0633   0.7647
      BPST (△2)    0.0799   0.0703   0.0425   0.7652
0.7   KRIG         0.0983   0.0733   0.3353   0.7552
      TPS          0.0906   0.0690   0.0612   0.8758
      SOAP         0.1269   0.0882   0.3404   0.9116
      LFE          0.1075   0.1067   0.4999   0.9496
      QFE          0.0988   0.0766   0.1316   0.8694
      BPST (△1)    0.0893   0.0692   0.0556   0.7574
      BPST (△2)    0.0891   0.0681   0.0450   0.7553
Table 4: Standard error estimates of the BPST (△2) coefficients in Example 2.

ρ     Parameter   SEmc     SEmean   SEmedian   SEmad
0.0   β1          0.0803   0.0828   0.0821     0.0059
      β2          0.0701   0.0673   0.0673     0.0054
0.7   β1          0.0893   0.0893   0.0843     0.0046
      β2          0.0679   0.0681   0.0683     0.0054
Table 5: 5-fold cross validation prediction errors for California housing data.

Method   Prediction Error (in log scale)
OLS      0.157
KRIG     0.086
TPS      0.083
SOAP     0.080
LFE      0.067
QFE      0.059
BPST     0.054
Table 6: Coefficients. Estimates and confidence intervals (CI) are reported in the California house value study.

Variable           OLS                     KRIG     TPS      SOAP     LFE      QFE      BPST
MedInc             0.202 (s.e. = 0.001)    0.094    0.148    0.085    0.139    0.122    0.114 (s.e. = 0.001)
  (95% CI)         (0.200, 0.204)          NA       NA       NA       NA       NA       (0.112, 0.116)
log(Age)           0.165 (s.e. = 0.005)    −0.039   −0.022   −0.037   −0.038   −0.040   −0.035 (s.e. = 0.004)
  (95% CI)         (0.155, 0.175)          NA       NA       NA       NA       NA       (−0.043, −0.027)
log(AveBedrms)     −0.037 (s.e. = 0.018)   0.083    0.133    0.068    0.112    0.100    0.095 (s.e. = 0.016)
  (95% CI)         (−0.073, −0.001)        NA       NA       NA       NA       NA       (0.062, 0.128)
log(AveOccup)      −0.465 (s.e. = 0.022)   −0.081   −0.276   −0.063   −0.249   −0.169   −0.133 (s.e. = 0.018)
  (95% CI)         (−0.509, −0.421)        NA       NA       NA       NA       NA       (−0.169, −0.097)
log(Hhd)           0.088 (s.e. = 0.004)    −0.003   0.024    −0.004   0.011    0.006    0.003 (s.e. = 0.003)
  (95% CI)         (0.080, 0.096)          NA       NA       NA       NA       NA       (−0.002, 0.008)

Note: the standard errors of β̂ are not available for the KRIG, TPS, SOAP, LFE and QFE methods, so the corresponding CIs are not available and are marked “NA”.
Figure 1: Example 1: (a) true function g(·); (b) contour map of the true function g(·); (c) sampled location points of replicate 1; (d) first triangulation (△1) and (e) second triangulation (△2) over the domain.
Figure 2: Contour maps for the estimators (ρ = 0.0) in Example 1: (a) KRIG; (b) TPS; (c) SOAP; (d) LFE; (e) QFE; (f) BPST (△1) and (g) BPST (△2).
Figure 3: Contour maps for the estimators (ρ = 0.7) in Example 1: (a) KRIG; (b) TPS; (c) SOAP; (d) LFE; (e) QFE; (f) BPST (△1) and (g) BPST (△2).
Figure 4: Example 2: (a) true function g(·); (b) contour map of g(·); (c) Gaussian random field ξ(·) and (d) sampled location points of replicate 1; (e) first triangulation (△1); and (f) second triangulation (△2) over the domain.
Figure 5: The median house values: (a) data location points (the colors of the dots indicate different values of the houses); (b) domain triangulation.
Figure 6: Scatter plots for the estimated house values using methods: (a) OLS; (b) KRIG; (c) TPS; (d) SOAP; (e) LFE; (f) QFE and (g) BPST.
Figure 7: Scatter plots for the differences between the estimated house values and the observed house values using methods: (a) OLS; (b) KRIG; (c) TPS; (d) SOAP; (e) LFE; (f) QFE and (g) BPST.
arXiv:1506.05607v5 [] 22 Aug 2017
1
Unbounded-Time Analysis of Guarded LTI
Systems with Inputs by Abstract
Acceleration
Dario Cattaruzza Alessandro Abate Peter Schrammel Daniel Kroening
Abstract—Reachability analysis of continuous and
discrete time systems is a hard problem that has
seen much progress in the last decades. In many
cases the problem has been reduced to bisimulations
with a number of limitations in the nature of the
dynamics, soundness, or time horizon. In this article
we focus on sound safety verification of UnboundedTime Linear Time-Invariant (LTI) systems with inputs
using reachability analysis. We achieve this by using
Abstract Acceleration, which over-approximates the
reach tube of a system over unbounded time by using
abstraction. The technique is applied to a number of
models and the results show good performance when
compared to state-of-the-art tools.
I. Introduction
Linear loops are a ubiquitous programming pattern. Linear loops iterate over continuous variables, which are updated with a linear transformation. Linear loops may be guarded, i.e., terminate if a given linear condition holds. Inputs from the environment can be modelled by means of non-deterministic choices within the loop. These features make linear loops expressive enough to capture the dynamics of many hybrid dynamical models. The usage of such models in safety-critical embedded systems makes linear loops a fundamental target for formal methods.
Many high-level requirements for embedded control systems can be modelled as safety properties, i.e. deciding reachability of certain bad states, in which the system exhibits unsafe behaviour. Bad states may, in linear loops, be encompassed by guard assertions.
Reachability in linear programs, however, is a formidable challenge for automatic analysers: the problem is undecidable despite the restriction to linear transformations (i.e., linear dynamics) and linear guards.
The goal of this article is to push the frontiers of unbounded-time reachability analysis: we aim at devising a method that is able to reason soundly about unbounded trajectories. We present a new approach for performing abstract acceleration. Abstract acceleration [26], [27], [34] approximates the effect of an arbitrary number of loop iterations (up to infinity) with a single, non-iterative transfer function that is applied to the entry state of the loop (i.e., to the set of initial conditions of the linear dynamics). This article extends the work in [34] to systems with non-deterministic inputs, elaborating the details omitted in [39].
The key contributions of this article are:
1) We present a new technique to include inputs (non-determinism) in the abstract acceleration of general linear loops.
2) We introduce the use of support functions in complex spaces, in order to increase the precision of previous abstract acceleration methods.
Since the transformations A and B are linear, and
vector sums preserve convexity, the sets Xn =
II. Preliminaries
A. Linear Loops with Inputs
Simple linear loops are programs expressed in the
form:
while(Gx ≤ h) x := Ax + Bu,
ψ := Gx ≤ h is a linear constraint on the states (with
G ∈ Rr×p and h ∈ Rr ), u ∈ Rq is a non-deterministic
input, and A ∈ R p×p and B ∈ R p×q are linear transformations characterising the dynamics of the system.
In particular, the special instance where ψ = ⊤ (i.e.,
“while true”) represents a time-unbounded loop with
no guards, for which the discovery of a suitable
invariant (when existing) is paramount. As evident
at a semantical level, this syntax can be interpreted
as the dynamics of a discrete-time LTI model with
inputs, under the presence of a guard set which, for
ease of notation, we denote as G = {x | Gx ≤ h}.
In the remaining of this work we will also use the
notation Mi,∗ to represent the rows of a matrix and
M∗, j its columns.
B. Model Semantics
The traces of the model starting from an initial
set X0 ⊆ R p , with inputs restricted to U ⊆ Rq , are
u0
u1
u2
sequences x0 −→ x1 −→ x2 −→ . . ., where x0 ∈ X0
and ∀k ≥ 0, xk+1 = τ(xk , uk ), where
as the union of the reachable sets over n iterations.
S
Moreover, X̂ = n≥0 τn (X0 , U) extends the previous
notion over an unbounded time horizon.
C. Support Functions
1) Support Function Definition: A support function is a convex function on R p which describes
the distance of a supporting hyperplane for a given
geometrical set in R p .
Support functions may be used to describe a set by
defining the distance of its convex hull with respect
to the origin, given a number of directions. More
specifically, the distance from the origin to the hyperplane that is orthogonal to the given direction and
that touches its convex hull at its farthest. Finitely
sampled support functions are template polyhedra in
which the directions are not fixed, which helps avoiding wrapping effects [25]. The larger the number of
directions provided, the more precisely represented
the set will be.
In more detail, given a direction v ∈ R p , the
support function of a non-empty set X ⊆ R p in the
direction of v is defined as
ρX : R p → R,
where x · v =
vectors.
states reached from X by τ in one step:
(2)
We furthermore denote the set of states reached from
X0 via τ in n steps (n-reach set), for n ≥ 0:
0
τ (X0 , U) = X0
n
(4)
(1)
We extend the notation above to convex sets of
states and inputs (X and U), and denote the set of
τ(X, U) = {τ(x, u) | x ∈ X, u ∈ U}
We define the n-reach tube
[
X̂n = τ̂n (X0 , U) =
τk (X0 , U)
k∈[0,n]
where x ∈ R p is a valuation on the state variables,
τ(xk , uk ) = { Axk + Buk | Gxk ≤ h ∧ uk ∈ U}
τn (X0 , U) are also convex.
Pp
i=0
ρX (v) = sup{x · v : x ∈ X} .
xi vi is the dot product of the two
Support functions do not exclusively apply to
convex polyhedra, but in fact to any set X ⊆ R p
represented by a general assertion θ(X). We will
restrict ourselves to the use of convex polyhedra, in
which case the support function definition translates
to solving the linear program
n−1
τ (X0 , U) = τ(τ
(X0 , U) ∩ G, U)
(3)
ρX (v) = max{x · v | Cx ≤ d} .
(5)
3
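As a concrete illustration of this definition, the short sketch below (illustrative only; it assumes the Eigen 3 headers are available and uses a polytope given by its vertices rather than by the constraints Cx ≤ d of (5), for which a bounded polyhedron yields the same value) evaluates ρ_X(v) as the largest dot product over the vertex set.

// Illustrative sketch: support function of a polytope given in vertex
// representation, rho_X(v) = max over the vertices x of x . v.
#include <Eigen/Dense>
#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

double support(const std::vector<Eigen::VectorXd>& V, const Eigen::VectorXd& v) {
  double best = -std::numeric_limits<double>::infinity();
  for (const auto& x : V) best = std::max(best, x.dot(v));
  return best;
}

int main() {
  // The unit square [0,1]^2 described by its four vertices.
  std::vector<Eigen::VectorXd> V;
  for (double a : {0.0, 1.0})
    for (double b : {0.0, 1.0}) {
      Eigen::VectorXd x(2);
      x << a, b;
      V.push_back(x);
    }
  Eigen::VectorXd v(2);
  v << 1.0, 2.0;
  std::cout << support(V, v) << "\n";   // prints 3, attained at the vertex (1,1)
  return 0;
}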
2) Support Functions Properties: Several properties of support functions allow us to reduce operational complexity. The most significant are [23]:

ρ_{kX}(v) = ρ_X(kv) = k ρ_X(v),  k ≥ 0
ρ_{AX}(v) = ρ_X(A^T v),  A ∈ R^{p×p}
ρ_{X_1⊕X_2}(v) = ρ_{X_1}(v) + ρ_{X_2}(v)
ρ_X(v_1 + v_2) ≤ ρ_X(v_1) + ρ_X(v_2)
ρ_{conv(X_1∪X_2)}(v) = max{ρ_{X_1}(v), ρ_{X_2}(v)}
ρ_{X_1∩X_2}(v) ≤ min{ρ_{X_1}(v), ρ_{X_2}(v)}

As can be seen from their structure, some of these properties reduce complexity to lower-order polynomial or even to constant time, by turning matrix-matrix multiplications (O(p^3)) into matrix-vector ones (O(p^2)), or into scalar multiplications.

3) Support Functions in Complex Spaces: The literature does not, as far as we have found, describe the use of support functions in complex spaces. Since this is relevant to our technique, we extend the definition of support functions to encompass their operation on complex spaces. A support function in a complex vector field is a transformation

ρ_X(v) : C^p → R,    ρ_X(v) = sup{|x · v| | x ∈ X ⊆ C^p, v ∈ C^p}.

The dot product used here is the Euclidean internal product of the vectors, which is commonly defined in the complex space as

a · b = Σ_{i=1}^{p} a_i b_i,    a, b ∈ C^p.

We are interested in the norm of the complex value, which is a 1-norm given our definition of the dot product:

|a · b| = |re(a · b)| + |im(a · b)|.

Returning to our support function properties, we now have

ρ_X(r e^{iθ} v) = r ρ_X(e^{iθ} v),

which is consistent with the real case when θ = 0. The reason why e^{iθ} cannot be extracted out is because it is a rotation, and therefore follows the same rules as a matrix multiplication,

ρ_X(e^{iθ} v) ≜ ρ_X([cos θ  sin θ; −sin θ  cos θ] v).

Since matrices using pseudo-eigenvalues are real, all other properties remain the same. An important note is that when using pseudo-eigenvalues, conjugate eigenvector pairs must also be converted into two separate real eigenvectors, corresponding to the real and the imaginary parts of the pair.

III. The Polyhedral Abstract Domain

A. Convex Polyhedra

A polyhedron is a topological element in R^p with flat polygonal (2-dimensional) faces. Each face corresponds to a hyperplane that creates a halfspace, and the intersections of these hyperplanes are the edges of the polyhedron. A polyhedron is said to be convex if its surface does not intersect itself and a line segment joining any two points of its surface is contained in the interior of the polyhedron. Convex polyhedra are better suited than general polyhedra as an abstract domain, mainly because they have a simpler representation and operations over convex polyhedra are in general easier than for general polyhedra. There are a number of properties of convex polyhedra that make them ideal for abstract interpretation of continuous spaces, including their ability to reduce an uncountable set of real points into a countable set of faces, edges and vertices. Convex polyhedra retain their convexity across linear transformations, and are functional across a number of operations because they have a dual representation [21]. The mechanism to switch between these two representations is given in Section III-B5.

1) Vertex Representation: Since every edge in the polyhedron corresponds to a line between two vertices and every face corresponds to the area enclosed
by a set of co-planar edges, a full description of the polyhedron is obtained by simply listing its vertices. Since linear operations retain the topological properties of the polyhedron, performing these operations on the vertices is sufficient to obtain a complete description of the transformed polyhedron (defined by the transformed vertices). Formally, a polyhedron is a set V ∈ R^p such that v ∈ V is a vertex of the polyhedron.

2) Inequality Representation: The dual of the vertex representation is the face representation. Each face corresponds to a bounding hyperplane of the polyhedron (with the edges being the intersection of two hyperplanes and the vertices the intersection of 3 or more), and is described mathematically as a function of the vector normal to the hyperplane. If we examine this description closely, we can see that it corresponds to the support function of the vector normal to the hyperplane. Given this description we formalise the following: a convex polyhedron is a topological region in R^p described by the set

X = {x ∈ R^p | Cx ≤ d, C ∈ R^{m×p}, d ∈ R^m},

where the rows C_{i,∗} for i ∈ [1, m] correspond to the transposed vectors normal to the faces of the polyhedron, and d_i for i ∈ [1, m] their support functions. For simplicity of expression, we will extend the use of the support function operator as follows:

ρ'_X : R^{m×p} → R^m,    ρ'_X(M^T) = [ρ_X(M^T_{1,∗})  ρ_X(M^T_{2,∗})  ···  ρ_X(M^T_{m,∗})]^T.

B. Operations on Convex Polyhedra

Several operations of interest can be performed on convex polyhedra.

1) Translation: Given a vertex representation V and a translation vector t, the transformed polyhedron is

V' = {v' = v + t | v ∈ V}.

Given an inequality representation X and a translation vector t, the transformed polyhedron corresponds to

X' = {x | Cx ≤ d + Ct}.

2) Linear Transformation: Given a vertex representation V and a linear transformation L, the transformed polyhedron is

V' = LV.

Given an inequality representation X and a linear transformation L, the transformed polyhedron corresponds to

X' = {x | C(L^+)^T x ≤ ρ'_X(L^+ C^T)},

where L^+ represents the pseudo-inverse of L. In the case when the inverse L^{−1} exists, then

X' = {x | C(L^{−1})^T x ≤ d}.

From this we can conclude that linear transformations are better handled by a vertex representation, except when the inverse of the transformation exists and is known a priori. This work makes use of this last case to avoid continuously swapping representations.

3) Set Sums: The addition of two polyhedra is a slightly more complex matter. The resulting set is one such that for all possible combinations of points inside both original polyhedra, the sum is contained in the result. This operation is commonly known as the Minkowski sum, namely

A ⊕ B = {a + b | a ∈ A, b ∈ B}.

Given two vertex representations V_1 and V_2, the resulting polyhedron is

V = conv(V_1 ⊕ V_2),
where conv(·) is the convex hull of the set of vertices contained in the Minkowski sum. Let

X_1 = {x | C_1 x ≤ d_1},    X_2 = {x | C_2 x ≤ d_2}

be two sets; then

X = X_1 ⊕ X_2 = {x | Cx ≤ d},  where  C = [C_1; C_2],  d = [d_1 + ρ'_{X_2}(C_1^T); d_2 + ρ'_{X_1}(C_2^T)].

Because these sets correspond to systems of inequalities, they may be reduced by removing redundant constraints. Note that if C_1 = C_2 then

X = X_1 ⊕ X_2 = {x | C_1 x ≤ d_1 + d_2}.

4) Set Hadamard Product:

Lemma 1. Given two vertex representations V_1 and V_2, the resulting polyhedron

V = V_1 ∘ V_2 = conv({v = v_1 ∘ v_2 | v_1 ∈ V_1, v_2 ∈ V_2}),

where ∘ represents the Hadamard (coefficient-wise) product of the vectors, contains all possible combinations of products between elements of each set.

Proof: Given a convex set X, we have

X' = {x_{ij} | x_i, x_j ∈ X, x_{ij} = t x_i + (1−t) x_j, t ∈ [0, 1]} ⊆ X.

Given x_i ∈ X, y_j ∈ Y, z_{i,j} = x_i ∘ y_j ∈ Z:

x_{ij} ∈ X', y_k ∈ Y, z_{i,k}, z_{j,k} ∈ Z ⇒ z_{ij,k} ∈ Z,
x_{ij} ∈ X, y_m, y_n ∈ Y, z_{ij,m}, z_{ij,n} ∈ Z ⇒ z_{ij,mn} ∈ Z.

This proves that given v_{11}, v_{12} ∈ V_1, v_{21}, v_{22} ∈ V_2 and u, t ∈ [0, 1],

(t v_{11} + (1 − t) v_{12}) ∘ (u v_{21} + (1 − u) v_{22}) ∈ V.

5) Vertex Enumeration: The vertex enumeration problem corresponds to the algorithm required to obtain a list of all vertices of a polyhedron given an inequality description of its bounding hyperplanes. Given the duality of the problem, it is also possible to find the bounding hyperplanes given a vertex description if the chosen algorithm exploits this duality. In this case the description of V is given in the form of a matrix inequality V x ≤ [1 1 ··· 1]^T with V = [v_1 ··· v_m]^T, v_i ∈ V. Similarly, A can be described as a set containing each of its rows. At the time of writing, there are two algorithms that efficiently solve the vertex enumeration problem: lrs is a reverse search algorithm, while cdd follows the double description method. In this work we use the cdd algorithm for convenience in implementation (the original cdd was developed for floats, whereas lrs uses rationals). The techniques presented here can be applied to either.

Let

C = {x | Ax ≥ 0, A ∈ R^{n×p}, x ∈ R^p}

be the polyhedral cone represented by A. The pair (A, V) is said to be a double description pair if

C = {λ^T V | V ∈ R^p, λ ∈ R^{|V|}_{≥0}}.

V is called the generator of X. Each element in V lies in the cone of X, and its minimal form (smallest m) has a one-to-one correspondence with the extreme rays of X if the cone is pointed (i.e., it has a vertex at the origin). This last condition can be ensured by translating a polyhedral description so that it includes the origin, and then translating the vertices back once they have been discovered (see Section III-B).

We will also point out that

{x | Ax ≤ b} = {x' | [−A  b] x' ≥ 0},  where x ∈ R^p and x' = [x; 1] ∈ R^{p+1}.

The vertex enumeration algorithm starts by finding a base C_K which contains a number of vertices of
the polyhedron. This can be done by pivoting over a number of different rows in A and selecting the feasible visited points, which are known to be vertices of the polyhedron (pivoting p times will ensure at least one vertex is visited if the polyhedron is non-empty). C_K is represented by A_K, which contains the rows used for the pivots. The base C_K is then iteratively expanded to C_{K+i} by exploring the i-th row of A until C_K = C. The corresponding pairs (A_{K+i}, V_{K+i}) are constructed using the information from (A_K, V_K) as follows. Let A_K ∈ R^{n_K×p}, A_{i,∗} ∈ R^{1×p}, V_K ∈ R^p, and let

H_i^+ = {x | A_{i,∗} x > 0},    (6)
H_i^− = {x | A_{i,∗} x < 0},    (7)
H_i^0 = {x | A_{i,∗} x = 0}    (8)

be the spaces outside, inside, and on the i-th hyperplane, and

V_K^+ = {v_j ∈ H_i^+},    (9)
V_K^− = {v_j ∈ H_i^−},    (10)
V_K^0 = {v_j ∈ H_i^0}    (11)

the existing vertices lying in each of these spaces. Then

V_{K+i} = V_K^+ ∪ V_K^− ∪ V_K^0 ∪ V_K^i,    (12)
V_K^i = {(A_{i,∗} v^+) v^− − (A_{i,∗} v^−) v^+ | v^− ∈ V^−, v^+ ∈ V^+}.    (13)

For the proof see [21].

IV. Abstract Matrices in Abstract Acceleration

A. Acceleration Techniques

Acceleration of a transition system is a method that seeks to precisely describe the transition relations over a number of steps using a concise description of the overall transition between the first and final step. Namely, it looks for a direct formula to calculate the postimage of a loop from the initial states of the loop. Formally, given the dynamics in equation (1), an acceleration formula aims at computing the reach set (3) using a function f such that f(·) = τ^n(·). In the case of systems without inputs, this equation is x_n = A^n x_0. We will use this property and others derived from it to calculate our abstract matrices.

B. Overview of the Algorithm

The basic steps required to evaluate a reach tube using abstract acceleration can be seen in Figure 1.

1) The process starts by doing an eigendecomposition of the dynamics (A) in order to transform the problem into a simpler one.
2) A variety of off-the-shelf tools may be used, but since larger problems require numerical algorithms for scalability, a second step involves upper-bounding the error in order to obtain sound results. In such cases, all subsequent steps must be performed using interval arithmetic.
3) The inverse of the generalised eigenvectors must be calculated soundly.
4) The problem gets transformed into canonical form by multiplying both sides of the equation by S^{−1}: X'_k = J X'_{k−1} ∩ G' + U', where X' = S^{−1}X, U' = S^{−1}BU and G' = {x | GSx ≤ h}.
5) We calculate the number of iterations as explained in Section VI. If there are no guards, we use n = ∞. It is worth noting that this number need not be exact: if we overapproximate the number of iterations, the resulting reach tube will overapproximate the desired one.
6) We overapproximate the dynamics of the variable inputs (for parametric or no inputs this step will be ignored) using the techniques described in Section V-D.
7) We calculate the abstract dynamics using the techniques described in Section V-A.
Fig. 1. Block diagram describing the different steps used to calculate the abstract reach tube of a system.
8) We evaluate the vertices of the combined input-initial eigenspace to be used as the source for the reach tube calculation.
9) We use a sound simplex algorithm to evaluate the convex set product of the abstract dynamics (used as the tableau) and the initial set (whose vertices are used as the objective functions, alongside a set of template directions for the result).
10) Since we have calculated our result in the eigenspace, we transform the reach tube back into the normal space by multiplying by S.

C. Computation of Abstract Matrices

We define the abstract matrix A^n as an over-approximation of the union of the powers of the matrix A^k, such that A^n ⊇ ∪_{k∈[0,n]} A^k, and its application to the initial set X_0:

X̂♯_n = A^n X_0 ⊇ X̂_n.    (14)

Next we explain how to compute such an abstract matrix. For simplicity, we first describe this computation for matrices A with real eigenvalues, whereas the extension to the complex case will be addressed in Section IV-D. Similar to [34], we first have to compute the Jordan normal form of A. Let A = SJS^{−1}, where J is the normal Jordan form of A, and S is made up of the corresponding eigenvectors. We can then easily compute A^n = S J^n S^{−1}, where

J^n = diag(J_1^n, …, J_r^n),    (15)

and, for s ∈ [1, r], J_s^n is upper triangular, with λ_s^n on the diagonal and C(n,j) λ_s^{n−j} on the j-th upper diagonal; its first row is therefore

[λ_s^n,  C(n,1) λ_s^{n−1},  …,  C(n,p_s−1) λ_s^{n−p_s+1}],    (16)

where C(n,k) denotes the binomial coefficient.

The abstract matrix A^n is computed as an abstraction over a set of vectors m_k ∈ R^p, k ∈ [1, n], of entries of J^k. Let I_s = [1 0 ··· 0] ∈ R^{p_s}. The vector m_k is obtained by the transformation ϕ^{−1}:

m_k = [I_1 J_1^k  ···  I_r J_r^k] ∈ R^p,    (17)

such that J^k = ϕ(m_k).

If J is diagonal [34], then m_k equals the vector of powers of the eigenvalues [λ_1^k, …, λ_r^k]. An interval abstraction can thus be simply obtained by computing the intervals [min{λ_s^0, λ_s^n}, max{λ_s^0, λ_s^n}], s ∈ [1, r]. We observe that the spectrum of the interval matrix σ(A^n) (defined intuitively) is an over-approximation of ∪_{k∈[0,n]} σ(A^k).

In the case of the s-th Jordan block J_s with non-trivial geometric multiplicity p_s (λ_i = λ_{i−1} = …), observe that the first row of J_s^n contains all (possibly) distinct entries of J_s^n. Hence, in general, the vector section m_s is the concatenation of the (transposed) first row vectors [λ_s^n, C(n,1) λ_s^{n−1}, …, C(n,p_s−1) λ_s^{n−p_s+1}]^T of J_s^n.

Since the transformation ϕ transforms the vector m into the shape (16) of J^n, it is called a matrix shape [34]. We then define the abstract matrix as

A^n = {S ϕ(m) S^{−1} | Φm ≤ f},    (18)

where the constraint Φm ≤ f is synthesised from intervals associated to the individual eigenvalues and
to their combinations. More precisely, we compute polyhedral relations: for any pair of eigenvalues (or binomials) within J, we find an over-approximation of the convex hull containing the points

{m_k | k ∈ [1, n]} ⊆ {m | Φm ≤ f}.

D. Abstract Matrices in Complex Spaces

To deal with complex numbers in eigenvalues and eigenvectors, [34] employs the real Jordan form for conjugate eigenvalues λ = r e^{iθ} and λ* = r e^{−iθ} (θ ∈ [0, π]), so that

[λ  0; 0  λ*]  is replaced by  r [cos θ  sin θ; −sin θ  cos θ].

Although this equivalence will be of use once we evaluate the progression of the system, calculating powers under this notation is often more difficult than handling directly the original matrices with complex values.

In Section IV-C, in the case of real eigenvalues, we have abstracted the entries in the power matrix J_s^n by ranges of eigenvalues [min{λ_s^0 ··· λ_s^n}, max{λ_s^0 ··· λ_s^n}]. In the complex case we can do something similar by rewriting eigenvalues into polar form λ_s = r_s e^{iθ_s} and abstracting by

[min{r_s^0 ··· r_s^n}, max{r_s^0 ··· r_s^n}] e^{i[0, min(θ_s, 2π)]}.

V. General Abstract Acceleration with Inputs

A. Using Support Functions on Abstract Acceleration

As an improvement over [34], the rows in Φ and f (see (18)) are synthesised by discovering support functions in these sets. The freedom of directions provided by these support functions results in an improvement over the logahedral abstractions used in previous works (see Figures 2 - 5). The mechanism by which this works follows the convex properties of the exponential progression. There are four cases to consider^1:

^1 These explain in detail the procedure alluded to in [8].

Fig. 2. Polyhedral faces from an R^2 subspace, plotting (λ_1^n, λ_2^n) with λ_1 = 2, λ_2 = 3, 1 ≤ n ≤ 5. Bold purple lines represent supports found by this article. The dotted grey and dashed red polytopes show logahedral approximations (box and octagon) used in [34]. Note the scales (sloped dashed lines are parallel to the x = y line, and the dashed red polytope hides two small faces, yielding an octagon).

1) Positive Real Eigenvalues: The exponential curve is cut along the diagonal between the maximum and minimum eigenvalues to create a support function for the corresponding hyperplane. A third point taken from the curve is used to test the direction of the corresponding template vector. An arbitrary number of additional hyperplanes are selected by picking pairs of adjacent points in the curve and creating the corresponding support function, as shown in Figure 2.

2) Complex Conjugate Eigenvalue pairs: In the case of complex conjugate pairs, the eigenvalue map corresponds to a logarithmic spiral. In this case, we must first extract the number of iterations required for a full cycle. For convergent eigenvalues, only the first n iterations have an effect on the support functions, while in the divergent case only the last n iterations are considered. Support functions are found for adjacent pairs, checking for the location of the origin point (first point for convergent eigenvalues, last for divergent eigenvalues).
Fig. 3. Polyhedral faces from an R^2 complex conjugate subspace, plotting (λ_1^n, λ_2^n) with λ_1 = 0.8 + 0.4i, λ_2 = 0.8 − 0.4i, 1 ≤ n ≤ 14. Bold purple lines represent supports found by this article. The blue dotted line shows the support function that excludes the origin (n = 1), which is replaced by the support function projecting from said origin.

Fig. 4. Polyhedral faces from an R^2 Jordan block subspace, plotting (λ_1^n, λ_2^n) with λ_1 = 0.8, λ_2 = 0.8, 1 ≤ n ≤ 15. Bold purple lines represent supports found by this article. The blue dotted line shows the support function that excludes the origin (n = 1), which is replaced by the support function projecting from said origin.

If the origin falls outside the support function, we look for an interpolant point that closes the spiral tangent to the origin. This last check is performed as a binary search over the remaining points in the circle (whose supporting planes would exclude the origin) to achieve maximum tightness (see Figure 3).

3) Equal Eigenvalues: When two eigenvalues are the same, the resulting support functions are those orthogonal to the x = y plane, intersecting the square created by the maximum and minimum values.

4) Jordan Blocks of size > 1: In the case of eigenvalues with geometric multiplicities > 1, the shape of the function is similar to its corresponding unit size block. In the convergent case, since the convexity can be sharp, it is important to find the apex of the upper diagonals in order to minimise the over-approximation. See Figure 4.

5) Negative Eigenvalues and mixed types: When mapping a positive real eigenvalue to a complex conjugate or negative one, we must account for both sides of the axis on the latter. These form mirror images that are merged during the abstraction. To make matters simple, we use the magnitude of a complex eigenvalue and evaluate whether the dynamics are concave or convex with respect to the mirroring plane. See Figure 5. Note that if both eigenvalues are negative and/or conjugate pairs from a different pair, the mirror image would be taken on both axes, resulting in a hyperrectangle. For a tighter bound in the purely convergent case, we find the convex hull of a point cloud for a small time horizon and merge it with the hyperrectangle for the infinite time horizon thereon.

An additional drawback of [34] is that calculating the exact Jordan form of any matrix is computationally expensive and hard to achieve for large-dimensional matrices. We will instead use numerical algorithms in order to get an approximation of the Jordan normal form and account for numerical errors. In particular, if we examine the nature of (14), we find out that the numerical operations are not iterative, therefore the errors do not accumulate with
time. We use properties of eigenvalues to relax f by finding the maximum error in the calculations, which can be determined by computing the norm δ_max = |A − ŜĴŜ^{−1}|, where Ĵ and Ŝ are the numerically calculated eigenvalues and eigenvectors of A. The notation above is used to represent the matrices as interval matrices, and all operations are performed using interval arithmetic with outward rounding in order to ensure soundness. In the following we will presume exact results and use the regular notation to describe the algorithms. The constraints Φm ≤ f are then computed by considering the ranges of eigenvalues λ_s ± δ_max (represented in Fig. 2 as the diameter of the blue dots). The outward relaxation of the support functions (f), which follows a principle similar to that introduced in [22], reduces the tightness of the over-approximation, but ensures the soundness of the abstract matrix A^n obtained. It is also worth noting that the transformation matrices into and from the eigenspace will also introduce over-approximations due to the intervals. One can still use exact arithmetic with a noticeable improvement over previous work; however, for larger-scale systems the option of using floating-point arithmetic, while taking into account errors and meticulously setting rounding modes, provides a 100-fold plus improvement, which can make a difference towards rendering verification practically feasible. For a full description of the numerical processes described here see [9].

Fig. 5. Polyhedral faces from an R^2 subspace, with different convexities (curves shown for λ_2 > |λ_1| > 1, λ_2 = |λ_1|, |λ_1| > λ_2 > 1 and |λ_1| > 1 > λ_2; the blue and orange plots are convex w.r.t. the λ_2^n-axis, whereas the green and brown are concave). Dotted purple lines represent supports for some of these layouts.

B. Abstract Matrices in Support Functions

Since we are describing operations using abstract matrices and support functions, we briefly review the nature of these operations and the properties that the support functions retain within this domain. Let X ∈ R^p be a space and A ∈ R^{p×p} an abstract matrix for the same space. From the definition we have

A = ∪ {S ϕ(m) S^{−1} : Φm ≤ f},

which leads to

ρ_{AX}(v) = ρ_{Sϕ(m)S^{−1}X}(v) = ρ_{ϕ(m)S^{−1}X}(S^T v),    (19)

where

ρ_{ϕX}(v) = sup{ρ_ϕ(x ∘ v) | x ∈ X}    (20)

and

ρ_ϕ(v) = sup{m · ϕ^{−1}(v) | Φm ≤ f}.    (21)

Here, x ∘ y is the Hadamard product, where (x ∘ y)_i = x_i y_i, and ϕ^{−1}(·) is the reverse operation of ϕ(·), used to align the elements of v with the elements in m. In the case of conjugate pairs this is equivalent to multiplying the vector section by [1  1; 1  −1], and in the case of a Jordan block by an upper-triangular matrix of all ones.

We may also define

ρ_{AX}(v) = sup{ρ_{aX}(v), ∀a ∈ A}
         = sup{S ϕ(m) S^{−1} x · v, ∀x ∈ X}
         = sup{ϕ(m) S^{−1} x · S^T v, ∀x ∈ X}
         = sup{ρ_ϕ(S^{−1} x ∘ S^T v), ∀x ∈ X}.    (22)

In order to simplify the nomenclature we write

ρ_{AX}(v) = ρ_X(A^T v).    (23)
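As a small numerical illustration of the identity just introduced (a sketch only, stated for an ordinary matrix rather than an abstract matrix, and assuming Eigen 3), the support function of the image AX of a vertex-represented set can be evaluated either by mapping the vertices or by mapping the direction, and the two agree.

// Illustrative check of rho_{AX}(v) = rho_X(A^T v) on a vertex-represented set
// (here A is an ordinary matrix, not an abstract one).
#include <Eigen/Dense>
#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

static double support(const std::vector<Eigen::VectorXd>& V, const Eigen::VectorXd& v) {
  double best = -std::numeric_limits<double>::infinity();
  for (const auto& x : V) best = std::max(best, x.dot(v));
  return best;
}

int main() {
  // Vertices of the unit square.
  std::vector<Eigen::VectorXd> V;
  for (double a : {0.0, 1.0})
    for (double b : {0.0, 1.0}) {
      Eigen::VectorXd x(2);
      x << a, b;
      V.push_back(x);
    }

  Eigen::MatrixXd A(2, 2);
  A << 0.8, 0.4,
      -0.4, 0.8;                                  // some linear map
  Eigen::VectorXd v(2);
  v << 1.0, 2.0;

  std::vector<Eigen::VectorXd> AV;                // image of the vertices under A
  for (const auto& x : V) AV.push_back(A * x);

  double lhs = support(AV, v);                    // rho_{AX}(v)
  double rhs = support(V, A.transpose() * v);     // rho_X(A^T v)
  std::cout << lhs << " == " << rhs << "\n";
  return 0;
}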
following representation:
C. Acceleration of Parametric Inputs
Let us now consider the following overapproximation for τ on sets:
n−1
X
k=0
♯
τ (X0 , U) = AX0 ⊕ BU
(24)
Unfolding (3) (ignoring the presence of the guard set
G for the time being), we obtain
Xn = An X0 ⊕
X
Ak BU
k∈[0,n−1]
What is left to do is to further simplify the sum
P
k
k∈[0,n−1] A BU. We can exploit the following simple results from linear algebra.
Lemma 2. If I − A is invertible, then
n−1
X
k=0
λ =
k
n
1−λn
1−λ
λ=1
λ,1
⇒ (I − An )(I − A)−1 = SDn S−1
−1k 1 − λni
d(λi , n, k) =
k + 1 (1 − λi )k+1
! n− j−1
k
X
λi
−1k− j n
+
k
−
j
j
−
1
(1
− λi )k− j
j=1
0
j<i
n
i = j ∧ λi = 1
1−λni
i = j ∧ λi , 1
1−λi
Dni, j =
0
gm(λi ) = 1
n+1
λi = 1
k+1
d(λi , n, j − i)
λi , 1
(25)
where gm(·) is the geometric multiplicity of the given
Ak = (I − An )(I − A)−1
eigenvalue.
. If furthermore lim An = 0, then
n→∞
lim
n→∞
n
X
k=0
D. Acceleration of Variable Inputs
k
A = (I − A)
−1
The result in the previous section can be only
.
This lemma presents a difficulty in the nature of A.
The inverse (I − A)−1 , does not exist for eigenvalues
directly applied under restricted conditions in the
case of variable inputs. For instance whenever ∀k >
0, uk = uk−1 . In order to generalise it (in particular to
non-constant inputs), we will over-approximate BU
of 1, i.e. we need 1 < σ( A), where σ(A) is the over the eigenspace by a semi-spherical enclosure
spectrum (the set of all the eigenvalues) of matrix A. with centre u′c and radius U ′ . To this end, we first
b
In order to overcome this problem, we introduce the rewrite
eigen-decomposition of A = SJS−1 , (and trivially
U J′ = S−1 BU = {u′c } ⊕ Ud′ ,
I = SIS−1 ), and by the distributive and transitive
properties we obtain
where u′c is the centre of the interval hull of U J′ :
n
(I − A )(I − A)
−1
n
−1 −1
= S(I − J )(I − J) S
.
1
1
= (ρU′J (vi ) + ρU′J (−vi )) | vi j =
0
2
j=i
.
This allows us to accelerate the eigenvalues individj,i
Pn−1 k
ually, using the property k=0 1 = n for eigenvalues
of λ = 1. Using the properties above, and trans- We then over-approximate Ud′ via Ub′ , by the maxiu′ci
lating the problem into the generalised eigenspace
to accounting for unit eigenvalues, we obtain the
mum radius in the directions of the complex eigenvalues as (cf. illustration in Figure 6). Let
40
Λ = {λi | i ∈ [1, p], λ∗i , λi−1 }
λi < Λ
λi , λ∗i+1
of a vector by removing the elements where λi < Λ.
Extending this to matrices we have
Fb : R
o×pb
→R
Fb (C) = Cb where (Cb )i,∗ = fb (Ci,∗ )
0
0, 0
λi = λ∗i+1
and red(·) is a function that reduces the dimension
o×p
u′c
λ2
0
|v |
fb (v) = red(vb ) where (vb )i =
q i
2
vi + v2i+1
Ud′
20
fb : R p → R pb such that
U′
−20
−40
Ub′
−40
−20
0
20
40
λ1
(26)
Finally
Fig. 6. Relaxation of an input set within a complex subspace, in
order to make it invariant to matrix rotations. Dashed lines and
curves denote translated quantities onto the origin.
Ud′ = {u | C′u u ≤ d′u }
Ud′ ⊆ Ub′ = {u | Fb (C′u ) fb (u) ≤ fb (d′u )}
BU ⊆ Ub ⊕ Uc | Ub = SUb′ and Uc = {Su′c }
(27)
Since the description of Ub′ is no longer polyhedral
in R p , we will also create a semi-spherical overapproximation J b of J in the directions of the complex eigenvectors, in a similar way as we generated
Ub′ for Ud′ . More precisely,
J b1
..
where
J b =
.
n
J br
λ j ∈ J b s = |λi | ∈ J s ∩ Λ
∀s ∈ [1 r]
gm(J b s ) = gm(J s )
(31)
Shifting our attention from reach sets to tubes,
we can now over-approximate the reach tube by
Theorem 1. The abstract acceleration
(28)
Jordan block.
Definition 1. Given a matrix A = SJS−1 and a
vector x, we define the following operations:
Fb∗ ( A, x) = S fb−1 Fb (J) fb (S−1 x)
(29)
′
′
−1
Fb ( A, x) = fb Fb (J) fb (x )
(30)
Finally, we refer to the accelerated sets
o
n
Ubn = Fb∗ ((I − An ), Fb∗ ((I − A)−1 , u)) | u ∈ Ub
n
Ucb
= Ucn ⊕ Ubn
n
Xn ⊆ An X0 ⊕ Ucb
abstract acceleration of the three summands in (31),
as follows.
where gm(·) is the geometric multiplicity of the
Ucn = (I − An )(I − A)−1 Uc
Returning to our original equation for the n-reach
set, we obtain2
τ♯n (X0 , U) =def An X0 ⊕ Bn Uc ⊕ Bnb Ub
(32)
is an over-approximation of the n-reach tube, namely
X̂n ⊆ τ♯n (X0 , U).
Proof: The proof is derived from that in [34]
for An X0 , and extends it as in the developments
presented above.
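The acceleration of the constant-input term above ultimately rests on the matrix geometric series of Lemma 2. A small numerical check of that identity (illustrative only; it assumes Eigen 3 and a matrix for which I − A is invertible, i.e. no eigenvalue equal to 1) is straightforward:

// Illustrative check of Lemma 2: sum_{k=0}^{n-1} A^k = (I - A^n)(I - A)^{-1}
// whenever I - A is invertible.
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::Matrix2d A;
  A << 0.97, 0.10,
      -0.05, 1.00;                                    // sample dynamics
  const int n = 20;

  Eigen::Matrix2d sum = Eigen::Matrix2d::Zero();      // explicit sum of powers
  Eigen::Matrix2d Ak  = Eigen::Matrix2d::Identity();
  for (int k = 0; k < n; ++k) { sum += Ak; Ak *= A; } // afterwards Ak = A^n

  Eigen::Matrix2d I = Eigen::Matrix2d::Identity();
  Eigen::Matrix2d accelerated = (I - Ak) * (I - A).inverse();

  std::cout << "difference norm: " << (sum - accelerated).norm() << "\n"; // ~0
  return 0;
}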
E. Combining Abstract Matrices
One important property of the abstract matrices
An , Bn and Bnb is that they are correlated. In the
that ∀U b′ , U c′ , U d′ ; ∃U b , U c , U d : U b′ = S−1 BU b so
that
= S−1 BU c and U d′ = S−1 BU d . Hence, this inclusion is
also valid in the original state space.
2 Note
U c′
case of parametric inputs, this correlation is linear
and described by the acceleration defined in Lemma
with J b defined by (28). This model provides a
tighter over-approximation than (32) since the accel-
(2). In the case of Bnb this relationship is not linear
(see Eq. 27). However, we can still find a linear over-
erated dynamics of the inputs are now constrained
by the accelerated dynamics of the system.
approximation of the correlation between Bnb and An
based on the time steps k. Given two orthonormal
p
q
spaces X ∈ R ∧ U ∈ R and a transition equation
The most important task remaining is how to
Xk+1 = AXk + BU,
calculate the number of iterations dealing with the
presence of the guard set G.
which is related to
Given a convex polyhedral guard expressed as the
assertion {x | Gx ≤ h}, we define Gi,∗ as the ith row
ρXk+1 (v) = ρ AXk (v) + ρ BU (v),
we define a space
x
′
X =
|
x
∈
X,
u
∈
U
Bu
of G and hi as the corresponding element of h. We
denote the normal vector to the ith face of the guard
as gi = GTi . The distance of the guard to the origin
is thus γi = | hgi | .
so that
with
i
Given a convex set X, we may now describe
its position with respect to each face of the guard
T
A v
= ρX ′ DT v′ ,
ρXk+1 (v) = ρXk′
k
v
A
D =
0
0
,
I
Accelerating Xk+1 , we obtain
through the use of its support function alongside
the normal vector of the hyperplane (for clarity, we
v
v = .
v
assume the origin to be inside set X):
′
ρX ( gi ) ≤ γi ,
ρXn (v) = ρ An X0 (v) + ρ(I− An )(I− A)−1 BU (v) = ρX0′ DnT v′ ,
with
n
A
n
D =
0
0
(I − An )(I − A)−1
in the case of parametric inputs. More generally, the
diagonal elements of Dn correspond to the diagonal
P
k
elements of An and n−1
k=0 A B, which means we can
construct
An 0
n
(33)
D =
| ρXn (v) = ρX0′ (DnT v′ ).
0 Bn
We can then apply this abstraction to (27) and obtain:
ρXn (v) =
′
ρX0′ (DnT
b v )
An
n
Db =
0
VI. Abstract Acceleration with Guards:
Estimation of the number of Iterations
where
v
0
, v′ =
Bnb
fb (v)
Bnb = SFb−1 (I − Jbn )(I − J b )−1 Fb (S−1 )
(34)
inside the hyperplane,
−ρX (−gi ) ≥ γi ,
outside the hyperplane.
Applying this to equation (31) we obtain:
ρXn ( gi ) = ρX0 ( Ani T gi ) + ρUcbn ( gi ) ≤ γi
(35)
ρXn (−gi ) = ρX0 (−A
(36)
ni T
gi ) + ρUcbn (−gi ) ≤ −γi
From the inequalities above we can determine
up to which number of iterations ni the reach tube
remains inside the corresponding hyperplane, and
starting from which iteration ni the corresponding
reach set goes beyond the guard:
In order for a reach set to be inside the guard
it must therefore be inside all of its faces, and we
can ensure it is fully outside of the guard set when
it is fully beyond any of them. Thus, we have n =
min{ ni }, and n = min{ ni }.
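A minimal sketch of this per-face test (illustrative only; plain C++ over a vertex-represented set, with the guard row given directly as g · x ≤ h rather than through the normalised distance γ_i used above) reads as follows.

// Illustrative guard test via support functions on a vertex-represented set X:
//   X is inside face i  if  max_{x in X} g_i . x <= h_i      (rho_X(g_i) <= h_i)
//   X is beyond face i  if  min_{x in X} g_i . x >= h_i      (-rho_X(-g_i) >= h_i)
#include <cstdio>
#include <vector>

struct Vec2 { double x, y; };
static double dot(const Vec2& a, const Vec2& b) { return a.x * b.x + a.y * b.y; }

int main() {
  std::vector<Vec2> X = { {1, 1}, {2, 1}, {1, 3}, {2, 3} };   // a small box
  Vec2 g = {1.0, 0.0};                                        // one guard row: x <= h
  double h = 5.0;

  double hi = -1e300, lo = 1e300;
  for (const auto& p : X) {
    double d = dot(p, g);
    if (d > hi) hi = d;
    if (d < lo) lo = d;
  }

  if (hi <= h)      std::puts("set is inside this guard face");
  else if (lo >= h) std::puts("set is fully beyond this guard face");
  else              std::puts("set crosses this guard face");
  return 0;
}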
We have not however discussed why these two
cases are important. Looking at the transition in
equation (1), we can easily derive that if Gx_k ≰ h
the postimage of all subsequent iterations is empty.
Therefore, any overapproximation henceforth will
diagonal. In such a case, the progression of the
support function in each direction is monotonically
only add imprecision. We will use the bounds n and
n to create a tighter overapproximation. Let
increasing (or decreasing) and it is therefore very
easy to find a bound for its progression. We note
X̂n♯ = An X0 ⊕ Bn U
that the envelope of rotating dynamics will always
contain the true dynamics and is therefore a sound
(n-reachtube)
Xn♯ = An X0 ⊕ Bn U
(n-reachset)
X̂n♯ | n = τ An−n−1 Xn♯ ⊕ Bn U ∩ G, U
X̂n♯
X̂n♯ | n
∪
overapproximation. We will initially assume that γi
is positive and then extend to the general case.
X̂n♯
(37)
o
n
This double step prevents the set x | x ∈ X̂n♯ , x < Xn♯
to be included in further projections, thus reducing
=
Let ρX0 ( AnT gi ) = ρX0′ (J nT g′i ) such that
g′i = S−1 gi
X0 = {x | CX0 x ≤ dX0 }
the size of the overapproximation.
X0′ = S−1 X0 = {x | SC X0 x ≤ dX0 }
Computing the maximum ni such that (35) is
satisfied is not easy because the unknown ni occurs
in the exponent of the equation. However, since an
intersection with the guard set will always return a
sound over-approximation, we do not need a precise
value. We can over-approximate it by decomposing
gi into the generalised eigenspace of A. Let gi =
Pp
−1
or
j=1 ki j v j +res( gi ), where v j are row vectors of S
−1
−S such that ki j ≥ 0, and res( gi ) is the component
Let
Λσ = {λi : i ∈ [1, p],
p
fσ (v) : R → R
i−1
^
(λ∗i , λ j ∧ λi , λ j )}
j=1
pb
of gi that lies outside the range of S. Notice that
since S has an inverse, it is full rank and therefore
fσ (v) = red(vσ )
0
r
P
where (vσ )i =
v2j
j∈[1,p]∧(λ j =λi ∨λ j =λ∗i )
res( gi ) = 0 and subsequently not relevant. It is also
Fσ (C) = Cσ where (Cσ )i,∗ = fσ (Ci,∗ ).
important to note that S is the matrix of generalised
eigenvectors of A and therefore we are expressing
our guard in the generalised eigenspace of A.
p
p
X
X
nT
nT
ρX0 ( A gi ) = ρX0 ki j A v j ≤
ki j ρX0 AnT v j
j=1
j=1
(38)
A. Overestimating the Iterations of a loop without
inputs
We start by looking into the approximation of
the inside bound (i.e. the iterations for which the
reachtube remains fully inside the guard). Since
rotating dynamics and Jordan shapes will have a
complex effect on the behaviour of the equation, we
seek to transform the Jordan form into a real positive
λi < Λσ
λi ∈ Λσ
Fσ : Ro×p → Ro×r
and red(·) is a function that reduces the dimension of a vector by removing the elements where
λi ∉ Λσ. This reduction is not strictly necessary, but it enables a faster implementation by reducing dimensionality. Correspondingly, given J =
diag ({J s | s ∈ [1, r]})
σ1
0
J σ =
0
0
0
σ2
0
0
···
···
..
.
···
0
σr
0
0
(39)
where σ s = ||J s ||2 is the maximum singular value
(hence the induced norm [36]) of the Jordan block
J s.
Finally, let
x′c =
1
vi j =
0
1
(ρX ′ (vi ) + ρX0′ (−vi )),
2 0
j=i
j,i
Xσ′ = {x | Fσ (SC X0 ) fσ (x) ≤ fσ (dX0 − SCX0 x′c )}
′
′
) | Xcσ
= { fσ (x′c )} ⊕ Xσ′
X0′ ⊆ fσ−1 (Xcσ
(40)
Proof: This follows from the developments unfolded above. Notice that the sequence ni is monotonically increasing, before it breaks the inequality. As
such any local minimum represents a sound underapproximation of the number of loop iterations. Note
that in the case where γi ≤ 0 we must first translate
the system coordinates such that γi > 0. This is
and vσ = fσ (v).
Using eigenvalue and singular value properties, we
obtain ρX0 ( AnT v j ) ≤ σ j n ρXcσ ((vσ ) j ) | j ∈ [1, r], and
therefore:
p
X
ρX0 ( AnT gi ) ≤
ki j σnj ρXcσ ((vσ ) j ))
(41)
j=1
Since we have no inputs, ρUcn ( gi ) + ρUbn ( gi ) = 0,
hence we may solve for ni :
ρX0 ( Ani T gi ) ≤
p
X
j=1
n
ki j σ j i ρXcσ ((vσ ) j ) ≤ γi
(42)
simply done by replacing x′ = x + c and operating
over the resulting system where γi′ = ρ c ( gi ) + γi .
Mathematically this is achieved as follows: first
we get c by finding the center of the interval hull
of G (if G is open in a given direction we may pick
any number in that direction for the corresponding
row of c). Next we
xk A Ac
=
1
0
1
where
To separate the divergent element of the dynamics
from the convergent one, let us define
ki j = max ki j ρXcσ ((vσ ) j ) , 0
transform the dynamics into
xk−1 B
x
+
uk | k−1
1
0
1
x G
G =
|
1 0
′
Gc xk−1
1
1
∈ G′
h
≤
1
σ = max ({σ s | s ∈ [1, p]}) .
This step will allow us to track effectively which
B. Underestimating the Iterations of a loop without
trajectories are likely to hit the guard and when, since
it is only the divergent element of the dynamics that
inputs
can increase the reach tube in a given direction.
Replacing (42), we obtain
p
X
σj
σ
!n
In order to apply a similar technique to (36) we
must find an equivalent under-approximation. In the
case of equation (42), the σ_j ensure that the equation
(43)
diverges faster than the real dynamics, hence the
iteration found is an upper bound to the desired
which allows to finally formulate an iteration scheme
iteration. In this case we want the opposite, hence
we look for a model where the dynamics diverge
σ
n
j=1
ki j
≤ γi ,
for approximating n.
Proposition 1. An iterative under-approximation of
the number of iterations n can be computed by
starting with ni = 0 and iterating over
p
!ni
X
σ
j
, (44)
ni ≥ n = logσ (γi ) − logσ ki j
σ
j=1
substituting ni = n on the right-hand side until we
meet the inequality.
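A compact sketch of this fixed-point iteration (illustrative only, plain C++; the coefficients k_ij, the singular values σ_j, their maximum σ > 1 and the distance γ_i are assumed to be already available and positive, as in (42)-(44)):

// Illustrative iteration for Proposition 1: starting from n = 0, repeatedly
// evaluate n := log_sigma(gamma) - log_sigma( sum_j k_j (sigma_j / sigma)^n )
// until it stabilises, giving an under-approximation of the iteration count.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
  const double gamma = 300.0;                  // distance of the guard face
  const double sigma = 2.0;                    // dominant (maximum) singular value
  std::vector<double> sig = {2.0, 1.2, 0.8};   // sigma_j per (generalised) eigenspace
  std::vector<double> k   = {1.5, 0.7, 0.3};   // non-negative coefficients k_ij

  double n = 0.0;
  for (int it = 0; it < 100; ++it) {
    double s = 0.0;
    for (std::size_t j = 0; j < k.size(); ++j) s += k[j] * std::pow(sig[j] / sigma, n);
    double next = (std::log(gamma) - std::log(s)) / std::log(sigma);  // log base sigma
    if (next <= n + 1e-9) break;   // the sequence is monotone; stop at the fixed point
    n = next;
  }
  std::printf("under-approximated number of iterations: %.0f\n", std::floor(n));
  return 0;
}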
slower. In this case it is easy to demonstrate that
λb j = |λ j | represents these slower dynamics.
ni T
ρX0 (−A
gi ) ≤
p
X
j=1
ki j λb j ni ρXcσ (−(vσ ) j ) ≤ −γi (45)
which reduces to
σ
n
p
X
j=1
k−i j
λb j
σ
!n
+σ
n
p
X
j=1
k+i j ≤ −γi ,
(46)
where
k−i j = min ki j ρXcσ (−(vσ ) j ) , 0
k+i j = max ki j ρXcσ (−(vσ ) j ) , 0
cosines and removing the constants:
!
p
ni X
((n − ni )θ j − αi j )2
ci j 1 −
max σ
2
j=1
p
X
⇒ min ci j ((n − ni )θ j − αi j )2
j=1
p
X
2
2
⇒ min ci j θ j (n − ni ) + ci j αi j θ j (n − ni )
j=1
An additional consideration must also be made
regarding the rotational nature of the dynamics. In
the previous case we did not care about the rotational
alignment of the set Xn with respect to the vector
gi , because any rotation would move the set inside
the guard. In this case, although the magnitude of
the resulting vector is greater than the required one,
the rotation may cause it to be at an angle that
keeps the set inside the guard. We must therefore
account for the rotating dynamics in order to find
the point where the angles align with the guard. In
order to do this, let us first fix the magnitudes of
the powered eigenvalues, in the case of convergent
dynamics we will assume they have converged a full
The solution to this equation is
Pp
j=1 ci j αi j θ j
| n ∈ [ni , ni + nθ ]
n = ni − P p
2 j=1 ci j θ2j
(47)
The second part of the equation is expected to be
a positive value. When this is not the case, the
dominating dynamics will have a rotation θ_j ≥ π/2.
In such cases we must explicitly evaluate the set of
up to 4 iterations after ni . If the resulting
bound does
T
not satisfy the original inequality: ρX0 Ani gi ≥ γi ,
we replace ni = n until it does 3 .
Proposition 2. An iterative under-approximation of
the number of iterations n can be computed by
rotation in to make our equation strictly divergent.
starting with ni ′ = 0 and iterating over
Let θ = min{θ j | j ∈ [1, p]}, where θ j are the angles
p
!ni ′ X
p
X
λ
2π
b
j
′
of the complex conjugate eigenvalues. Let nθ = θ
+
ni ≤ n = logσ (γi ) − logσ k−i j
k+i j
σ
j=1
j=1
be the maximum number of iterations needed for
′
T
any of the dynamics to complete a full turn. Then
(48)
ni = ni ′ + k | ρX0 A(ni +k) gi ≥ γi
ni +nθ
ni +n
≤ |λ j |
| |λi | ≤ 1, n ∈ 0nθ .
at any given turn |λ j |
where k is the result of equation (47). we substitute
This means that any bound we find on the iterations
for ni = n on the right-hand side until we break
will be necessarily smaller than the true value. Our
the inequality, and then find k such that the second
problem becomes the solution to:
inequality holds.
max σ
−1
ni
p
X
j=1
ci j cos((n − ni )θ j − αi j )
αi j = cos ( gi · v j )
n
− λb j i
ki j σ
ci j =
k− λb j ni +nθ
ij σ
Since we are explicitly verifying the inequality,
there is no further proof required.
C. Estimating the Iterations of a loop with inputs
For the case with inputs, we will use the same
|λ j | ≥ 1
|λ j | < 1
The problem is simplified by underapproximating the
paradigm explained in the previous section after performing a mutation that transforms the system with
3 this
is a tighter value than work shown on previous versions
(2π)m
, where
of this paper where we overapproximated using nθ = Q
j θj
m is the number of conjugate pairs.
inputs into an over-approximating system without
inputs.
′
′
Let Xcσ
, Ucσ
be the corresponding sets of initial
states and inputs obtained by applying equation (40)
′
′
to X0′ and U J′ , and let U Jσ
= (I − J σ )−1 Ucσ
. The
accelerated resulting system may be represented by
the equations
′
′
⊕ (I − J nσ )U Jσ′
(Xcσ
)n = J nσ Xcσ
nT
′ ) (v) = ρX ′
J nT
ρ(Xcσ
σ v + ρU ′Jσ (v) − ρU ′Jσ J σ v
n
cσ
(49)
′
Xcσ
,u
Proof: Since the elements of the sums are
convergent, we have
ni ≥ nk ⇒ kni ≥ knk i.e. |kni | ≤ |knk |
⇒ logσ kni + kn ≥ logσ kni + kn
k
i
which means that nk in equation (51) is smaller than
or n in equation (44) (nk ≤ n ≤ ni | ni ≥ nk ).
In the case of equation (48), the explicit evaluation
of the guard at each cycle executes the behaviour
described here.
′
U Jσ
}
Let us now define (XU)σ = {x−u | x ∈
∈
which allows us to translate the system into
(50)
ρ((XU)′σ )n (v) = ρ(XU)′σ J nT
σ v
which has the same shape as the equations in the
E. Maintaining Geometric Multiplicity
A second step in optimising the number of itera-
previous section. We may now apply the techniques
tions comes from adding granularity to the bounding
semi-spherical abstraction by retaining the geometric
described above to find the bounds on the iterations.
multiplicity using the matrix J b .
D. Narrowing the estimation of the iterations
Lemma 3. Given a matrix A with eigenvalues
{λ s | s ∈ [1, r]}, where each eigenvalue λ s has a
The estimations above are very conservative, but
we may use further techniques to obtain tighter
bounds on the number of iterations. In the first
instance we note that we have eliminated all negative
terms in the sums in equation (44). Reinstating these
geometric multiplicity p s and corresponding generalised eigenvectors {v s,i | i ∈ [1, p s]},
∀n ≥ 0, An vis = λns v s,i +
terms can cause us to lose monotonicity, but we
may still create an iterative approach by fixing the
negative value at intermediate stages. Let ni be our
existing bound for the time horizon before reaching
σ ni
σ ni
P
P
a guard, and kn = pj=1 ki j σj , kni = pj=1 ki j σj
i
the corresponding negative and positive terms of the
equation. We may now find upper and lower bounds
for ni by replacing the equation
ni ≥ nk = logσ (γi ) − logσ kni + kn
k
(51)
where nk is the bound found in the previous stage.
Some stages of this process will provide an unsound
result, but they will also provide an upper bound to
our number of iterations. In fact, every second stage
will provide a monotonically increasing sound bound
which will be tighter than the one in equation (44).
i−1
X
j=1
j
λn−
s
j−1
Y
(n − k)v s,i− j
k=0
j−1
Q
(n
−
k)
i−1
X k=0
n
v
= λ s v s,i +
(52)
s,i− j
j
λs
j=1
Proof: By definition, given an eigenvector v s
of A, then Avs = λ s v s [32]. Similarly a generalised eigenvector v s,i of A satisfies the equation
( A − λ s I) v s,i = v s,i−1 and vs,1 = vs hence
Av s,i = λ s vs,i + v s,i−1
An v s,1 = λns v s,1
An v s,i = An−1 (λ s v s,i + v s,i−1 ) = λ s An−1 vs,i + An−1 v s,i−1
= λ2s An−2 v s,i + λ s An−2 v s,i−1 + An−1 vs,i−1
= · · · = λns v s,i +
n−1
X
j=0
λ sj An− j−1 v s,i−1
10,000
From here we recursively expand the formula for
An− j−1 v s,i−1 and obtain:
An v s,i = λns vs,i +
n−1
X
j−1
λ sj λn−
v s,i−1
s
+
j=0
n−1 X
n−2
X
λks An−k−2 v s,i−2
ρ( g)
1,000
n( fσ , fσ )
n( fb , fb )
n( fσ , fb )
100
n( fb , fb )
j=0 k=0
= λns vs,i + nλn−1
s v s,i−1 + n
n−2
X
λ sj An− j−2 vs,i−2
10
j=0
= · · · = λns v s,i +
i−1
X
j
λn−
s
j=1
j−1
Y
(n − k)v s,i− j
1
Let i′ denote the position of fb (λ j ) within the
block J bs it belongs to, such that its corresponding
generalised eigenvector is identified as vbs,i′ = fb (v j ).
Then
ρX0′ (J nT g′i )
pb
X
≤
ki j ρX0 J nb T fb (v j )
j=1
k−1
Q
′
(n − m)
pb
i −1
X
X
m=0
′ −k
v
≤
ki j λb nj ρX0 vbs,i′ +
bs,i
k
λ
b
j
j=1
k=1
k−1
Q
′
(n
−
m)
p
i
−1
b
X
X m=0
′ −k
v
ρ
≤
ki j λb nj ρX0 vbs,i′ +
bs,i
X
0
k
λ
b
j
j=1
k=1
≤
pb
X
ki′ j0 λb nj
j=1
+
i′
X
ki′ jm λb nj
p sY
−i′ −1
m=0
m=1
(n − m)
(53)
In order to manage the product on the right-hand
side we use slightly different techniques for over- and
under-approximations. For ni we first find an upper
bound n′i using equation (44) and ki j = ki′ j 0 + ki′ j m
and then do a second iteration using ki j = ki′ j 0 +
′
p s −i
Q−1 ′
ki′ j m
(ni − m) which ensures the true value is
m=0
under the approximation. In the case of ni , we also
ki′ j 0
start with ki j =
iterative process.
+
ki′ j m
0
2
4
and update it during the
8
6
iteration
k=0
10
Fig. 7. Progression of the support function of a system for a given guard. Blue dots are real values. The dashed green line overapproximates the progression using singular values (Sec. VI-A), the dashed yellow line underapproximates it using eigenvalue norms (Sec. VI-B), whereas the continuous purple lines represent the tighter overapproximation maintaining the geometric multiplicity (Sec. VI-E). We can see how the purple line finds a better bound for n_i, while the n_i bound is conservative for both approaches. Mind the logarithmic scale.
Let us look at the following example:
3 0 0 0 0 0
0 2 1 0 0 0
0 0 2 0 0 0
J = 0 0 0 −2 0 0
0 0 0 0 −1 1
S =
0
0
1
0
0
0
0
0
0
3
−4
0
0
0
J σ =
x′0 =
0
0
0
0
1
0
0
0
3
0
0
0
0
2
0
3
0
0
0
1
Gx ≤ 300 | G=
G= 1
−1
0
0
0
1
0
0
0
−1
0
0
0
0
1
1
0
0
0
0
0
1
0
0
0
√
2
1
1
1
1
3
−3
1
1
2
1
2
4
1
4
−3
1
T
S
The progression of the system along the support
function and corresponding bounds as described in
the previous section are shown in figure 7
we assume that part of the system is coded, and
further assume that it is possible to discretise the
1,000,000
100,000
physical environment for simulation. Algorithm 1
shows a pseudo-code fragment for the temperature
ρ( g)
10,000
1,000
n( fb , fb )
control problem. We use the read function to
100
Algorithm 1 Temperature Control Loop
States: temp=temperature, heat=heat output.
n( fb , fb )
10
1
Inputs: set=set-point, amb=ambient temperature.
0
5
10
15
20
iteration
Fig. 8. Progression of the support function of a rotational system
for a given guard. Blue dots are real values (negative values are
missing due to the log scale). Continuous purple lines represent
the overapproximation. The steep vertical line at 19 is due to the
alignment of the rotations with the guard at this point. The point
at iteration 14 appears below the line because of the higher point
at iteration 9. The model will either find that this boundary was
met at iteration 9 or push it forward to 19.
1:
temp=5+read(35);
2:
heat=read(1);
while(temp< 400 && heat< 300)
3:
4:
5:
0
0
0
0
0
0
0
0
1.1e0.5i
0
0
0
0
0
0
1.1e−0.5i
we get the results in figure 8. In this case we can
see that the rotational dynamics force an increase of
the initially calculated iteration to account for the
effects of the rotation.
F. Case Study
We have selected a known benchmark to illustrate
the discussed procedure: the room temperature control problem [17]. The temperature (variable temp)
of a room is controlled to a user-defined set point
(set), which can be changed at any time through a
heating (heat) element, and is affected by ambient
temperature (amb) that is out of the control of the
system.
We formalise the description of such a system both
via a linear loop and via hybrid dynamics. Observe
that since such a system may be software controlled,
amb=5+read(35);
7:
set=read(300);
temp=.97 temp + .02 amb + .1 heat;
8:
heat=heat + .05 set;
6:
9:
Changing the eigenvalues to:
2e−0.2i
0
0
0
0
2e0.2i
0
0
√ −0.3i
0
2e
0
0
√ 0.3i
J = 0
2e
0
0
0
0
0
0
{
}
represent non-deterministic values between 0 and
the maximum given as argument. Alternatively, this
loop corresponds to the following hybrid dynamical
model:
[temp; heat]_{k+1} = [0.97  0.1; −0.05  1] [temp; heat]_k + [0.02  0; 0  0.05] [amb; set]_k,

with initial condition

[temp; heat]_0 ∈ [[5, 40]; [0, 1]],

non-deterministic inputs

[amb; set]_k ∈ [[5, 40]; [0, 300]],

and guard set

G = { [temp; heat] | [1  0; 0  1] [temp; heat] < [400; 300] }.
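For intuition, the loop dynamics above can also be simulated directly; the following sketch (illustrative only, plain C++, with the non-deterministic reads replaced by uniform pseudo-random sampling) executes the concrete model and is independent of the abstract-acceleration analysis that follows.

// Illustrative simulation of the temperature control model:
//   [temp; heat]_{k+1} = [0.97 0.1; -0.05 1][temp; heat]_k
//                      + [0.02 0; 0 0.05][amb; set]_k
// with amb in [5,40], set in [0,300] and guard temp < 400, heat < 300.
#include <cstdio>
#include <random>

int main() {
  std::mt19937 rng(42);
  std::uniform_real_distribution<double> amb_d(5.0, 40.0), set_d(0.0, 300.0);
  std::uniform_real_distribution<double> temp0(5.0, 40.0), heat0(0.0, 1.0);

  double temp = temp0(rng), heat = heat0(rng);     // initial condition
  int k = 0;
  while (temp < 400.0 && heat < 300.0) {           // guard set G
    double amb = amb_d(rng), set = set_d(rng);     // non-deterministic inputs
    double t = 0.97 * temp + 0.10 * heat + 0.02 * amb;
    double h = -0.05 * temp + 1.00 * heat + 0.05 * set;
    temp = t; heat = h; ++k;
  }
  std::printf("guard left after %d iterations (temp = %.1f, heat = %.1f)\n",
              k, temp, heat);
  return 0;
}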
In this model the variables are continuous and
take values over the real line, whereas within the
code they are represented as long double precision floating-point values, with precision of ±10−19 ,
moreover the error of the approximate Jordan form
computation results in δmax < 10−17 . Henceforth we
300
focus on the latter description, as in the main text of
this work. The eigen-decomposition of the dynamics
200
A = S JS −1 ⊆ SJS−1 where
0.798 ± 10−14 0.173 ± 10−15
S =
0 ± 10−19
0.577 ± 10−14
0.985 ± 10−16 0.069 ± 10−17
J =
−0.069 ± 10−17 0.985 ± 10−16
1.253 ± 10−12 −0.376 ± 10−13
−1
.
S =
0 ± 10−18
1.732 ± 10−12
The discussed over-approximations of the reach-sets
indicate that the temperature variable intersects the
guard at iteration n = 32. Considering the pseudoeigenvalue matrix (described in the extended version
heat
is (the values are rounded to three decimal places):
heat = 300
100
0
0
100
200
temp
300
400
Fig. 9. The abstractly accelerated tube (yellow, dashed boundary),
representing an over-approximation of the thermostat reach tube
(dark blue). The set of initial conditions is shown in black,
whereas successive reach sets are shown in white. The guards
and the reach set that crosses them are close to the boundary in
red.
for the case of complex eigenvalues) along these
iterations, we use Equation (18) to find that the corre-
in Figure 9, where for the sake of clarity we display
sponding complex pair remains within the following
boundaries:
only 8 directions of the 16 constraints. This results
in a rather tight over-approximation that is not much
A32
r
=
−i
r
B32 =
−i
i
r
i
r
0.4144
0.0691
0.1082
0.9159
1
0
1
6.145
<
<
<
<
<
<
<
<
r
i
r+i
i−r
r
i
r+i
i−r
<
<
<
<
<
<
<
<
0.985
0.7651
1.247
0.9389
13.41
17.98
29.44
6.514
looser than the convex hull of all reach sets obtained
by [20] using the given directions. In Figure 9, we
can see the initial set in black colour, the collection
of reach sets in white, the convex hull of all reach
sets in dark blue (as computed by [20]), and finally
the abstractly accelerated set in light yellow (dashed
lines). The outer lines represent the guards.
The reach tube is calculated by multiplying these
abstract matrices with the initial sets of states and inputs, as described in Equation (32), by the following
inequalities:
[5 40]
[5 40]
32
#
32
X̂32 =A
+ B
[0 1]
[0 300]
−24.76 <
temp
< 394.5
"
#
heat
< 253
temp −30.21 <
=
−40.85
<
temp
+
heat
< 616.6
heat
−86.31 < temp − heat < 843.8
G. Calculating the Number of Iterations for Continuous Dynamics
Following the same steps as in Lemma 3 we
develop an equivalent for continuous dynamics.
The negative values represent the lack of restriction
Lemma 4. Given a matrix A with eigenvalues
in the code on the lower side and correspond to system cooling (negative heating). The set is displayed
{λ s | s ∈ [1, r]}, where each eigenvalue λ s has a
geometric multiplicity p s and corresponding gener-
alised eigenvectors {v_{s,i} | i ∈ [1, p_s]},

∀t ≥ 0,  e^{At} v_{s,i} = e^{λ_s t} v_{s,i} + Σ_{j=1}^{i−1} t^j e^{λ_s t} v_{s,i−j}
                        = e^{λ_s t} ( v_{s,i} + Σ_{j=1}^{i−1} t^j v_{s,i−j} ).    (54)

Proof: The proof derives again from the Taylor expansion:

e^{At} v_{s,i} = Σ_{k=0}^{∞} (t^k / k!) A^k v_{s,i}
             = Σ_{k=0}^{∞} (t^k / k!) ( λ_s^k v_{s,i} + k λ_s^{k−1} v_{s,i−1} )
             = Σ_{k=0}^{∞} (t^k / k!) λ_s^k v_{s,i} + Σ_{k=0}^{∞} (t^k / k!) k λ_s^{k−1} v_{s,i−1}
             = e^{λ_s t} ( v_{s,i} + t v_{s,i−1} ).    (55)

The rest of the proof follows the same expansion as in Lemma 3.

Given the similarity of equation (54) with (52), we may apply exactly the same techniques described in Section VI-E to the continuous case.

VII. Experimental Results

The algorithm has been implemented in C++ using the eigen-algebra package (v3.2), with double precision floating-point arithmetic, and has been tested on a 1.6 GHz core 2 duo computer.

A. Comparison with other unbounded-time approaches.

In a first experiment we have benchmarked our implementation against the tools InterProc [33] and Sting [12]. We have tested these tools on different scenarios, including guarded/unguarded, stable/unstable and complex/real loops with inputs (details in Table I).^4 It is important to note that in many instances, InterProc (due to the limitations of widening) and Sting (due to the inexistence of tight polyhedral, inductive invariants) are unable to infer finite bounds at all.

^4 The tool and the benchmarks are available from http://www.cprover.org/LTI/.

Table II gives the comparison of our implementation using different levels of precision (long double, 256 bit, and 1024 bit floating-point precision) with the original abstract acceleration for linear loops without inputs (J) [34] (where inputs are fixed to constants). This shows that our implementation gives tighter over-approximations on most benchmarks (column 'improved'). While on a limited number of instances the current implementation is less precise (Fig. 2 gives a hint why this is happening), the overall increased precision is owed to lifting the limitation on directions caused by the use of logahedral abstractions.

At the same time, our implementation is faster – even when used with 1024 bit floating-point precision – than the original abstract acceleration (using rationals). The fact that many bounds have improved with the new approach, while speed has increased by several orders of magnitude, provides evidence of the advantages of the new approach.

The speed-up is due to the faster Jordan form computation, which takes between 2 and 65 seconds for [34] (using the ATLAS package), whereas our implementation requires at most one second. For the last two benchmarks, the polyhedral computations blow up in [34], whereas our support function approach shows only moderately increasing runtimes. The increase of speed is owed to multiple factors, as detailed in Table III. The difference of using long double precision floating-point vs. arbitrary precision arithmetic is negligible, as all results in the given examples match exactly to 9 decimal places. Note that, as explained above, soundness can be ensured by appropriate rounding in the floating-point computations.

B. Comparison with bounded-time approaches.

In a third experiment, we compare our method with the LGG algorithm [28] used by SpaceEx [20]. In order to set up a fair comparison we have provided the implementation of the native algorithm
TABLE I
Experimental comparison of unbounded-time analysis tools with inputs

name           | type    | dim | inputs | bounds | improved      | analysis time [sec]
               |         |     |        |        | IProc |  Sti  | IProc |  Sti  |  J+I
parabola i1    | ¬s,¬c,g |  2  |   1    |   80   |  +25  |  +28  | 0.007 |  237  | 0.049
parabola i2    | ¬s,¬c,g |  2  |   1    |   80   |  +24  |  +35  | 0.008 |  289  | 0.072
cubic i1       | ¬s,¬c,g |  3  |   1    |  120   |  +44  |  +50  | 0.015 |  704  | 0.097
cubic i2       | ¬s,¬c,g |  3  |   1    |  120   |  +35  |  +55  | 0.018 |  699  | 0.124
oscillator i0  | s,c,¬g  |  2  |   0    |   56   |  +24  |  +24  | 0.004 | 0.990 | 0.021
oscillator i1  | s,c,¬g  |  2  |   0    |   56   |  +24  |  +24  | 0.004 | 1.060 | 0.024
inv pendulum   | s,c,¬g  |  4  |   0    |   16   |   +8  |   +8  | 0.009 | 0.920 | 0.012
convoyCar2 i0  | s,c,¬g  |  3  |   2    |   12   |   +9  |   +9  | 0.007 | 0.160 | 0.043
convoyCar3 i0  | s,c,¬g  |  6  |   2    |   24   |  +15  |  +15  | 0.010 | 0.235 | 0.513
convoyCar3 i1  | s,c,¬g  |  6  |   2    |   24   |  +15  |  +15  | 0.024 | 0.237 | 0.901
convoyCar3 i2  | s,c,¬g  |  6  |   2    |   24   |  +15  |  +15  | 0.663 | 0.271 | 1.416
convoyCar3 i3  | s,c,¬g  |  6  |   2    |   24   |  +15  |  +15  | 0.122 | 0.283 | 2.103

type: s – stable loop, c – complex eigenvalues, g – loops with guard; dim: system dimension (variables); bounds: nb. of half-planes defining the polyhedral set; IProc is [33]; Sti is [12]; J+I is this work; improved: number of bounds newly detected by J+I over the existing tools (IProc, Sti).
in [28]. We have run both methods on the convoyCar example [34] with inputs, which presents an unguarded, scalable, stable loop with complex dynamics, and focused on octahedral abstractions. For convex reach sets, the approximations computed by abstract acceleration are quite tight in comparison to those computed by the LGG algorithm. However, storing finite disjunctions of convex polyhedra, the LGG algorithm is able to generate non-convex reach tubes, which are arguably more proper in case of oscillating or spiralling dynamics. Still, in many applications abstract acceleration can provide a tight over-approximation of the convex hull of those non-convex reach sets.

Table IV gives the results of this comparison. For simplicity, we present only the projection of the bounds along the variables of interest. As expected, the LGG algorithm performs better in terms of tightness, but its runtime increases with the number of iterations. Our implementation of LGG using convex polyhedra with octagonal templates is slower than the abstractly accelerated version even for small time horizons (our implementation of LGG requires ∼4 ms for each iteration on a 6-dimensional problem with octagonal abstraction). This can be improved by the use of zonotopes, or by careful selection of the directions along the eigenvectors, but this comes at a cost on precision. Even when finding combinations that outperform our approach, this will only allow the time horizon of the LGG approach to be slightly extended before matching the analysis time from abstract acceleration, and the reachable states will still remain unknown beyond the extended time horizon.

The evident advantage of abstract acceleration is its speed over finite horizons without much precision loss, and of course the ability to prove properties for unbounded-time horizons.

C. Scalability

Finally, in terms of scalability, we have an expected O(n^3) complexity worst-case bound (from the matrix multiplications in equation (32)). We have parameterised the number of cars in the convoyCar example [34] (also seen in Table II), and experimented with up to 33 cars (each car after
name            type      dim  bounds   tighter     looser     J (jcf)           mpfr+ (jcf)    mpfr    ld
parabola i1     ¬s,¬c,g    3     80     +4 (5%)     0 (0%)       2.51 (2.49)     0.16 (0.06)    0.097   0.007
parabola i2     ¬s,¬c,g    3     80     +4 (5%)     0 (0%)       2.51 (2.49)     0.26 (0.06)    0.101   0.008
cubic i1        ¬s,¬c,g    4    120      0 (0%)     0 (0%)       2.47 (2.39)     0.27 (0.20)    0.110   0.013
cubic i2        ¬s,¬c,g    4    120      0 (0%)     0 (0%)       2.49 (2.39)     0.32 (0.20)    0.124   0.014
oscillator i0   s,c,¬g     2     56      0 (0%)    -1 (2%)       2.53 (2.52)     0.12 (0.06)    0.063   0.007
oscillator i1   s,c,¬g     2     56      0 (0%)    -1 (2%)       2.53 (2.52)     0.12 (0.06)    0.078   0.008
inv pendulum    s,c,¬g     4     12     +8 (50%)    0 (0%)      65.78 (65.24)    0.24 (0.13)    0.103   0.012
convoyCar2 i0   s,c,¬g     5     12     +9 (45%)    0 (0%)       5.46 (4.69)     3.58 (0.22)    0.258   0.005
convoyCar3 i0   s,c,¬g     8     24    +10 (31%)   -2 (6%)      24.62 (11.98)    3.11 (1.01)    0.552   0.051
convoyCar3 i1   s,c,¬g     8     24    +10 (31%)   -2 (6%)      23.92 (11.98)    4.94 (1.01)    0.890   0.121
convoyCar3 i2   s,c,¬g     8     24    +10 (31%)   -2 (6%)    1717.00 (11.98)    6.81 (1.01)    1.190   0.234
convoyCar3 i3   s,c,¬g     8     24    +10 (31%)   -2 (6%)    1569.00 (11.98)    8.67 (1.01)    1.520   0.377

The "tighter"/"looser" columns give the improvement in bounds; the J, mpfr+, mpfr and ld columns give the analysis time in seconds.
type: s – stable loop, c – complex eigenvalues, g – loop with guard; dim: system dimension (including fixed inputs); bounds: nb. of half-planes defining the polyhedral set; improved: number of bounds (and percentage) that were tighter (better) or looser (worse) than [34]; J is [34]; mpfr+ is this article using 1024-bit mantissas (e < 10⁻¹⁵²); mpfr uses a 256-bit mantissa (e < 10⁻⁴⁴); ld uses a 64-bit mantissa (e < 10⁻¹¹); here e is the accumulated error of the dynamical system; jcf: time taken to compute the Jordan form.
TABLE II: Experimental comparison with previous work
C. Scalability

Finally, in terms of scalability, we have an expected O(n³) worst-case complexity bound (from the matrix multiplications in equation (32)). We have parameterised the number of cars in the convoyCar example [34] (also seen in Table II), and experimented with up to 33 cars (each car after the first requires 3 variables, so that for example (33 − 1) × 3 = 96 variables), and have adjusted the initial states/inputs sets. We report an average of 10 runs for each configuration. These results demonstrate that our method scales to industrial-size problems.

# of variables    3      6      12     24     48    96
runtime (s)       0.004  0.031  0.062  0.477  5.4   56

The time-bounded analysis is in most cases unsound, since it cannot reason about the unbounded-time case (we note that a proof of the existence of a fix-point for the given horizon would restore such soundness, but many tools do not attempt to find such a proof, which is left to the user). Unbounded-time solutions are therefore preferred when such soundness is required, although they are often either less precise or slower than their bounded counterparts.

VIII. Related Work

There are several approaches that solve the safety problem for the linear case and for other cases such as hybrid systems. They are broadly divided into two categories due to their inherent nature, namely time-bounded and unbounded reachability analysis.

Optimisation                                     Speed-up
Eigen vs. ATLAS                                  2–10
Support functions vs. generators                 2–40
long double vs. multiple precision arithmetic    5–200
interval vs. regular arithmetic                  .2–.5
Total                                            4–80000

TABLE III: Performance improvements by feature

A. Time-Bounded Reachability Analysis

The first approach is to surrender exhaustive analysis over the infinite time horizon, and to restrict the exploration to system dynamics up to some given finite time bound. Bounded-time reachability is decidable, and decision procedures for the resulting satisfiability problem have made much progress in the past decade. The precision of the bounded analysis is offset by the price of uncertainty: behaviours beyond the given time bound are not considered, and may thus violate a safety requirement. Representatives are STRONG [15], HySon [7], CORA [1], HYLAA [3] and SpaceEx [20].
                    this article                    LGG
                    100 iterations  unbounded       100 iterations  200 iterations  300 iterations
run time            166 ms          166 ms          50 ms           140 ms          195 ms
car acceleration    [-0.820 1.31]   [-1.262 1.31]   [-0.815 1.31]   [-0.968 1.31]   [-0.968 1.31]
car speed           [-1.013 5.11]   [-4.515 6.15]   [-1.013 4.97]   [-3.651 4.97]   [-3.677 4.97]
car position        [43.7 83.4]     [40.86 91.9]    [44.5 83.4]     [44.5 88.87]    [44.5 88.87]

TABLE IV: Comparison on the convoyCar2 benchmark between this work and the LGG algorithm [28]
Set-based simulation methods generalise guaranteed integration [6], [37] from enclosing intervals to relational domains. They use precise abstractions with low computational cost to over-approximate sets of reachable states up to a given time horizon. Early tools used polyhedral sets (HyTech [31] and PHAVer [19]), polyhedral flow-pipes [10], ellipsoids [5] and zonotopes [24]. A breakthrough was achieved by [25], [28], with the representation of convex sets using template polyhedra and support functions. This method is implemented in the tool SpaceEx [20], which can handle dynamical systems with hundreds of variables. Although it may use exact arithmetic to maintain soundness, it performs computations using floating-point numbers: this is a deliberate choice to boost performance which, although quite reasonable, renders its implementation numerically unsound, so that it does not provide genuine formal guarantees. In fact, most tools using eigendecomposition over a large number of variables (more than 10) are numerically unsound due to the use of unchecked floating-point arithmetic. Another breakthrough in performance was achieved by HYLAA [3], which was the first tool to solve high-order problems with hundreds and thousands of dimensions. Other approaches use specialised constraint solvers (HySAT [18], iSAT [16]) or SMT encodings [11], [29] for bounded model checking of hybrid automata.

B. Unbounded Reachability Analysis

The second approach, epitomised in static analysis methods [30], explores unbounded-time horizons. It employs conservative over-approximations to achieve completeness and decidability over infinite time horizons.

Unbounded techniques attempt to infer a loop invariant, i.e., an inductive set of states that includes all reachable states. If the computed invariant is disjoint from the set of bad states, this proves that the latter are unreachable and hence that the loop is safe. However, analysers frequently struggle to obtain an invariant that is precise enough at acceptable computational cost. The problem is evidently exacerbated by non-determinism in the loop, which corresponds to the case of open systems. Prominent representatives of this analysis approach include Passel [35], Sting [12], and abstract interpreters such as Astrée [4] and InterProc [33]. Early work in this area used implementations of abstract interpretation and widening [13], which are still the foundations of most modern tools. The work in [30] uses abstract interpretation with convex polyhedra over piecewise-constant differential inclusions. Dang and Gawlitza [14] employ optimisation-based (max-strategy iteration) analysis with linear templates for hybrid systems with linear dynamics. Relational abstractions [38] use ad-hoc "loop summarisation" of flow relations, while abstract acceleration focuses on linear relations analysis [26], [27], which is common in program analysis.
C. Abstract Acceleration

Abstract acceleration [26], [27], [34] captures the effect of an arbitrary number of loop iterations with a single, non-iterative transfer function that is applied to the entry state of the loop (i.e., to the set of initial conditions of the linear dynamics). Abstract acceleration has been extended from its original version to encompass inputs over reactive systems [40], but restricted to subclasses of linear loops, and later to general linear loops but without inputs [34]. The work presented in this article lifts these limitations by presenting abstract acceleration for general linear loops with inputs [8], developing numeric techniques for scalability and extending the domain to continuous-time systems.

IX. Conclusions and Future Work

We have presented an extension of the Abstract Acceleration paradigm to guarded LTI systems (linear loops) with inputs, overcoming the limitations of existing work dealing with closed systems. We have shown that the new approach decisively outperforms state-of-the-art tools for unbounded-time reachability analysis in both precision and scalability. The new approach is capable of handling general unbounded-time safety analysis for large-scale open systems with reasonable precision and fast computation times. Conditionals inside loops and nested loops are out of the scope of this paper.

Future work includes extending the approach to non-linear dynamics, which we believe can be explored via hybridisation techniques [2], and formalising the framework for general hybrid models with multiple guards and location-dependent dynamics, with the aim of accelerating transitions across guards rather than integrating individual accelerations on either side of the guards.

Acknowledgments. We would like to thank Colas Le Guernic for his constructive suggestions and comments on the paper.

References

[1] M. Althoff. An introduction to CORA 2015. In ARCH@CPSWeek, pages 120–151, 2015.
[2] E. Asarin, T. Dang, and A. Girard. Hybridization methods for the analysis of nonlinear systems. Acta Informatica, 43(7):451–476, 2007.
[3] S. Bak and P. S. Duggirala. Hylaa: A tool for computing simulation-equivalent reachability for linear systems. In Proceedings of the 20th International Conference on Hybrid Systems: Computation and Control, HSCC 2017, Pittsburgh, PA, USA, April 18-20, 2017, pages 173–178, 2017.
[4] B. Blanchet, P. Cousot, R. Cousot, J. Feret, L. Mauborgne, A. Miné, D. Monniaux, and X. Rival. A static analyzer for large safety-critical software. In PLDI, pages 196–207. ACM, 2003.
[5] O. Botchkarev and S. Tripakis. Verification of hybrid systems with linear differential inclusions using ellipsoidal approximations. In HSCC, LNCS, pages 73–88. Springer, 2000.
[6] O. Bouissou. Analyse statique par interprétation abstraite de systèmes hybrides. PhD thesis, École Polytechnique, 2008.
[7] O. Bouissou, S. Mimram, and A. Chapoutot. HySon: Set-based simulation of hybrid systems. In Rapid System Prototyping (RSP), 2012 23rd IEEE International Symposium on,
pages 79–85. IEEE, 2012.
[8] D. Cattaruzza, A. Abate, P. Schrammel, and D. Kroening.
Unbounded-time analysis of guarded LTI systems with inputs by abstract acceleration. In SAS, volume 9291 of LNCS,
pages 312–331. Springer, 2015.
[9] D. Cattaruzza, A. Abate, P. Schrammel, and D. Kroening.
Sound numerical computations in abstract acceleration. In
International Workshop on Numerical Software Verification,
pages 38–60. Springer, 2017.
[10] A. Chutinan and B. H. Krogh. Computing polyhedral
approximations to flow pipes for dynamic systems. In CDC,
pages 2089–2094. IEEE Computer Society, 1998.
[11] A. Cimatti, S. Mover, and S. Tonetta. SMT-based verification of hybrid systems. In AAAI Conference on Artificial
Intelligence. AAAI Press, 2012.
[12] M. A. Colón, S. Sankaranarayanan, and H. B. Sipma. Linear
invariant generation using non-linear constraint solving. In
CAV, pages 420–432. Springer, 2003.
[13] P. Cousot and R. Cousot. Abstract interpretation: A unified
lattice model for static analysis of programs by construction
or approximation of fixpoints. In POPL, pages 238–252,
1977.
[14] T. Dang and T. M. Gawlitza. Template-based unbounded
time verification of affine hybrid automata. In APLAS,
LNCS, pages 34–49. Springer, 2011.
[15] Y. Deng, A. Rajhans, and A. A. Julius. STRONG: A trajectory-based verification toolbox for hybrid systems. In Quantitative Evaluation of Systems, volume 8054 of LNCS, pages 165–168. Springer, 2013.
[16] A. Eggers, M. Fränzle, and C. Herde. SAT Modulo ODE:
A direct SAT approach to hybrid systems. In ATVA, volume
5311 of LNCS, pages 171–185. Springer, 2008.
[17] A. Fehnker and F. Ivancic. Benchmarks for hybrid systems
verification. In HSCC, pages 326–341. Springer, 2004.
[18] M. Fränzle and C. Herde. HySAT: An efficient proof engine
for bounded model checking of hybrid systems. Formal
Methods in System Design, 30(3):179–198, 2007.
[19] G. Frehse. PHAVer: Algorithmic verification of hybrid
systems past HyTech. In HSCC, volume 3414 of LNCS,
pages 258–273. Springer, 2005.
[20] G. Frehse, C. L. Guernic, A. Donzé, R. Ray, O. Lebeltel, R. Ripado, A. Girard, T. Dang, and O. Maler. SpaceEx: Scalable verification of hybrid systems. In CAV, volume 6806 of LNCS, pages 379–395. Springer, 2011.
[21] K. Fukuda and A. Prodon. Double description method
revisited. In Combinatorics and computer science, pages
91–111. Springer, 1996.
[22] S. Gao, J. Avigad, and E. M. Clarke. δ-complete decision
procedures for satisfiability over the reals. In Automated
Reasoning, pages 286–300. Springer, 2012.
[23] P. K. Ghosh and K. V. Kumar. Support function representation of convex bodies, its application in geometric
computing, and some related representations. Computer
Vision and Image Understanding, 72:379–403, 1998.
[24] A. Girard. Reachability of uncertain linear systems using
zonotopes. In HSCC, volume 3414 of LNCS, pages 291–
305. Springer, 2005.
[25] A. Girard, C. L. Guernic, and O. Maler. Efficient computation of reachable sets of linear time-invariant systems with
inputs. In HSCC, volume 3927 of LNCS, pages 257–271.
Springer, 2006.
[26] L. Gonnord and N. Halbwachs. Combining widening and
acceleration in linear relation analysis. In SAS, LNCS, pages
144–160. Springer, 2006.
[27] L. Gonnord and P. Schrammel. Abstract acceleration in
linear relation analysis. Science of Computer Programming,
93(Part B):125–153, 2014.
[28] C. L. Guernic and A. Girard. Reachability analysis of hybrid
systems using support functions. In CAV, volume 5643 of
LNCS, pages 540–554. Springer, 2009.
[29] S. Gulwani and A. Tiwari. Constraint-based approach for
analysis of hybrid systems. In CAV, volume 5123 of LNCS,
pages 190–203. Springer, 2008.
[30] N. Halbwachs, P. Raymond, and Y.-E. Proy. Verification of
linear hybrid systems by means of convex approximations.
In SAS, volume 864 of LNCS, pages 223–237. Springer,
1994.
[31] T. A. Henzinger, P.-H. Ho, and H. Wong-Toi. HyTech: A
model checker for hybrid systems. Journal on Software
Tools for Technology Transfer, 1(1-2):110–122, 1997.
[32] R. A. Horn and C. R. Johnson. Matrix analysis. Cambridge
university press, 2012.
[33] B. Jeannet. Interproc analyzer for recursive programs with numerical variables, 2010. http://pop-art.inrialpes.fr/interproc/interprocweb.cgi.
[34] B. Jeannet, P. Schrammel, and S. Sankaranarayanan. Abstract acceleration of general linear loops. In POPL, pages
529–540. ACM, 2014.
[35] T. T. Johnson and S. Mitra.
Passel: A verification
tool for parameterized networks of hybrid automata, 2012.
https://publish.illinois.edu/passel-tool/.
[36] P. Lancaster and M. Tismenetsky. The Theory of Matrices.
Academic Press, 2nd edition, 1984.
[37] R. Löhner. Einschließung der Lösung gewöhnlicher Anfangs- und Randwertaufgaben und Anwendungen. PhD thesis,
Universität Karlsruhe, 1988.
[38] S. Sankaranarayanan and A. Tiwari. Relational abstractions
for continuous and hybrid systems. In CAV, volume 6806
of LNCS, pages 686–702. Springer, 2011.
[39] P. Schrammel. Unbounded-time reachability analysis of
hybrid systems by abstract acceleration. In Embedded
Software, pages 51–54. IEEE, 2015.
[40] P. Schrammel and B. Jeannet. Applying abstract acceleration
to (co-)Reachability analysis of reactive programs. Journal
of Symbolic Computation, 47(12):1512–1532, 2012.
IAS/Park City Mathematics Series
Volume 00, Pages 000–000
S 1079-5634(XX)0000-0
Four lectures on probabilistic methods for data science

Roman Vershynin

arXiv:1612.06661v2 [math.PR] 4 Nov 2017

Abstract. Methods of high-dimensional probability play a central role in applications for statistics, signal processing, theoretical computer science and related fields. These lectures present a sample of particularly useful tools of high-dimensional probability, focusing on the classical and matrix Bernstein's inequality and the uniform matrix deviation inequality. We illustrate these tools with applications for dimension reduction, network analysis, covariance estimation, matrix completion and sparse signal recovery. The lectures are geared towards beginning graduate students who have taken a rigorous course in probability but may not have any experience in data science applications.
Contents

1. Lecture 1: Concentration of sums of independent random variables
   1.1 Sub-gaussian distributions
   1.2 Hoeffding's inequality
   1.3 Sub-exponential distributions
   1.4 Bernstein's inequality
   1.5 Sub-gaussian random vectors
   1.6 Johnson-Lindenstrauss Lemma
   1.7 Notes
2. Lecture 2: Concentration of sums of independent random matrices
   2.1 Matrix calculus
   2.2 Matrix Bernstein's inequality
   2.3 Community recovery in networks
   2.4 Notes
3. Lecture 3: Covariance estimation and matrix completion
   3.1 Covariance estimation
   3.2 Norms of random matrices
   3.3 Matrix completion
   3.4 Notes
4. Lecture 4: Matrix deviation inequality
   4.1 Gaussian width
   4.2 Matrix deviation inequality
   4.3 Deriving Johnson-Lindenstrauss Lemma
   4.4 Covariance estimation
   4.5 Underdetermined linear equations
   4.6 Sparse recovery
   4.7 Notes

Received by the editors November 7, 2017.
Partially supported by NSF Grant DMS 1265782 and U.S. Air Force Grant FA9550-14-1-0009.
1. Lecture 1: Concentration of sums of independent random variables
These lectures present a sample of modern methods of high dimensional probability and illustrate these methods with applications in data science. This sample
is not comprehensive by any means, but it could serve as a point of entry into
a branch of modern probability that is motivated by a variety of data-related
problems.
To get the most out of these lectures, you should have taken a graduate course
in probability, have a good command of linear algebra (including the singular
value decomposition) and be familiar with very basic concepts of functional analysis (familiarity with Lp norms should be enough).
All of the material of these lectures is covered more systematically, at a slower
pace, and with a wider range of applications, in my forthcoming textbook [60].
You may also be interested in two similar tutorials: [58] is focused on random
matrices, and a more advanced text [59] discusses high-dimensional inference
problems.
It should be possible to use these lectures for a self-study or group study. You
will find here many places where you are invited to do some work (marked in
the text e.g. by “check this!”), and you are encouraged to do it to get a better
grasp of the material. Each lecture ends with a section called “Notes” where you
will find bibliographic references of the results just discussed, as well as various
improvements and extensions.
We are now ready to start.
Probabilistic reasoning has a major impact on modern data science. There are
roughly two ways in which this happens.
• Randomized algorithms, which perform some operations at random, have
long been developed in computer science and remain very popular. Randomized algorithms are among the most effective methods – and sometimes the only known ones – for many data problems.
• Random models of data form the usual premise of statistical analysis. Even
when the data at hand is deterministic, it is often helpful to think of it as a
random sample drawn from some unknown distribution (“population”).
In these lectures, we will encounter both randomized algorithms and random
models of data.
1.1. Sub-gaussian distributions Before we start discussing probabilistic methods, we will introduce an important class of probability distributions that forms a
natural “habitat” for random variables in many theoretical and applied problems.
These are sub-gaussian distributions. As the name suggests, we will be looking
at an extension of the most fundamental distribution in probability theory – the
gaussian, or normal, distribution N(µ, σ).
It is a good exercise to check that the standard normal random variable X ∼
N(0, 1) satisfies the following basic properties:
Tails: P{|X| ≥ t} ≤ 2 exp(−t²/2) for all t ≥ 0.
Moments: ‖X‖_p := (E|X|^p)^{1/p} = O(√p) as p → ∞.
MGF of square:¹ E exp(cX²) ≤ 2 for some c > 0.
MGF: E exp(λX) = exp(λ²/2) for all λ ∈ R.
All these properties tell the same story from four different perspectives. It is
not very difficult to show (although we will not do it here) that for any random
variable X, not necessarily Gaussian, these four properties are essentially equivalent.
Proposition 1.1.1 (Sub-gaussian properties). For a random variable X, the following properties are equivalent.²
Tails: P{|X| ≥ t} ≤ 2 exp(−t²/K₁²) for all t ≥ 0.
Moments: ‖X‖_p ≤ K₂√p for all p ≥ 1.
MGF of square: E exp(X²/K₃²) ≤ 2.
Moreover, if E X = 0 then these properties are also equivalent to the following one:
MGF: E exp(λX) ≤ exp(λ²K₄²) for all λ ∈ R.
Random variables that satisfy one of the first three properties (and thus all of
them) are called sub-gaussian. The best K3 is called the sub-gaussian norm of X, and
is usually denoted kXkψ2 , that is
  ‖X‖_ψ₂ := inf{ t > 0 : E exp(X²/t²) ≤ 2 }.
One can check that k · kψ2 indeed defines a norm; it is an example of the general
concept of the Orlicz norm. Proposition 1.1.1 states that the numbers Ki in all four
properties are equivalent to kXkψ2 up to absolute constant factors.
Example 1.1.2. As we already noted, the standard normal random variable X ∼
N(0, 1) is sub-gaussian. Similarly, arbitrary normal random variables X ∼ N(µ, σ)
are sub-gaussian. Another example is a Bernoulli random variable X that takes
values 0 and 1 with probabilities 1/2 each. More generally, any bounded random
variable X is sub-gaussian. On the contrary, Poisson, exponential, Pareto and
¹MGF stands for moment generating function.
²The parameters Kᵢ > 0 appearing in these properties can be different. However, they may differ from each other by at most an absolute constant factor. This means that there exists an absolute constant C such that property 1 implies property 2 with parameter K₂ ≤ CK₁, and similarly for every other pair of properties.
Cauchy distributions are not sub-gaussian. (Verify all these claims; this is not
difficult.)
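To make the sub-gaussian norm concrete, here is a minimal numerical sketch (not part of the original lectures) that estimates ‖X‖_ψ₂ by bisecting on the defining condition E exp(X²/t²) ≤ 2, with the expectation replaced by an empirical average; the bracket, sample size and tolerance are arbitrary choices.

```python
import numpy as np

def subgaussian_norm(sample, tol=1e-4):
    """Estimate ||X||_psi2 = inf{t > 0 : E exp(X^2/t^2) <= 2} by bisection,
    replacing the expectation with an empirical average over `sample`."""
    def mgf_of_square(t):
        return np.mean(np.exp(sample**2 / t**2))
    lo, hi = 1e-3, 100.0              # bracket assumed to contain the norm
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mgf_of_square(mid) <= 2:
            hi = mid                  # condition holds, try a smaller t
        else:
            lo = mid
    return hi

rng = np.random.default_rng(0)
rademacher = rng.choice([-1.0, 1.0], size=100_000)
gaussian = rng.standard_normal(100_000)
# For a Rademacher variable E exp(X^2/t^2) = e^{1/t^2}, so the norm is
# 1/sqrt(ln 2) ~ 1.20; for N(0,1) the exact value is sqrt(8/3) ~ 1.63.
print(subgaussian_norm(rademacher), subgaussian_norm(gaussian))
```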
1.2. Hoeffding’s inequality You may remember from a basic course in probability that the normal distribution N(µ, σ) has a remarkable property: the sum of
independent normal random variables is also normal. Here is a version of this
property for sub-gaussian distributions.
Proposition 1.2.1 (Sums of sub-gaussians). Let X₁, ..., X_N be independent, mean zero, sub-gaussian random variables. Then Σ_{i=1}^N Xᵢ is sub-gaussian, and

  ‖Σ_{i=1}^N Xᵢ‖²_ψ₂ ≤ C Σ_{i=1}^N ‖Xᵢ‖²_ψ₂,

where C is an absolute constant.³
3
Proof. Let us bound the moment generating function of the sum for any λ ∈ R:

  E exp(λ Σ_{i=1}^N Xᵢ) = Π_{i=1}^N E exp(λXᵢ)              (using independence)
                        ≤ Π_{i=1}^N exp(Cλ²‖Xᵢ‖²_ψ₂)        (by the last property in Proposition 1.1.1)
                        = exp(λ²K²),  where K² := C Σ_{i=1}^N ‖Xᵢ‖²_ψ₂.

Using again the last property in Proposition 1.1.1, we conclude that the sum S = Σ_{i=1}^N Xᵢ is sub-gaussian, and ‖S‖_ψ₂ ≤ C₁K where C₁ is an absolute constant. The proof is complete.
Let us rewrite Proposition 1.2.1 in a form that is often more useful in applications, namely as a concentration inequality. To do this, we simply use the first property in Proposition 1.1.1 for the sum Σ_{i=1}^N Xᵢ. We immediately get the following.
Theorem 1.2.2 (General Hoeffding's inequality). Let X₁, ..., X_N be independent, mean zero, sub-gaussian random variables. Then, for every t ≥ 0 we have

  P{ |Σ_{i=1}^N Xᵢ| ≥ t } ≤ 2 exp( −ct² / Σ_{i=1}^N ‖Xᵢ‖²_ψ₂ ).
Hoeffding’s inequality controls how far and with what probability a sum of
independent random variables can deviate from its mean, which is zero.
³In the future, we will always denote positive absolute constants by C, c, C₁, etc. These numbers do not depend on anything. In most cases, one can get good bounds on these constants from the proof, but the optimal constants for each result are rarely known.
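As a quick illustration (not from the original text), the following sketch compares the empirical tail of a sum of independent Rademacher (±1) variables with a Hoeffding-type bound; for ±1 variables the classical explicit constant 2 exp(−t²/2N) is used in place of the unspecified absolute constant c above, and the parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials, t = 1000, 20_000, 60.0

# Sums of N independent Rademacher (+/-1) variables: mean zero, sub-gaussian.
X = rng.choice([-1.0, 1.0], size=(trials, N))
sums = X.sum(axis=1)

empirical = np.mean(np.abs(sums) >= t)
# Classical Hoeffding bound for +/-1 summands: P(|S| >= t) <= 2 exp(-t^2 / (2N)).
bound = 2 * np.exp(-t**2 / (2 * N))
print(f"empirical tail {empirical:.4f} <= bound {bound:.4f}")
```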
1.3. Sub-exponential distributions Sub-gaussian distributions form a sufficiently
wide class of distributions. Many results in probability and data science are
proved nowadays for sub-gaussian random variables. Still, as we noted, there
are some natural random variables that are not sub-gaussian. For example, the
square X2 of a normal random variable X ∼ N(0, 1) is not sub-gaussian. (Check!)
To cover examples like this, we will introduce the similar but weaker notion of
sub-exponential distributions.
Proposition 1.3.1 (Sub-exponential properties). For a random variable X, the following properties are equivalent, in the same sense as in Proposition 1.1.1.
Tails: P |X| > t 6 2 exp(−t/K1 ) for all t > 0.
Moments: kXkp 6 K2 p for all p > 1.
MGF of the square: E exp(|X|/K3 ) 6 2.
Moreover, if E X = 0 then these properties imply the following one:
MGF: E exp(λX) 6 exp(λ2 K24 ) for |λ| 6 1/K4 .
Just like we did for sub-gaussian distributions, we call the best K3 the subexponential norm of X and denote it by kXkψ1 , that is
kXkψ1 := inf {t > 0 : E exp(|X|/t) 6 2} .
In particular, squares of sub-gaussian random variables are sub-exponential. Indeed, inspecting the definitions you will quickly see that

(1.3.2)  ‖X²‖_ψ₁ = ‖X‖²_ψ₂.

(Check!)
1.4. Bernstein’s inequality A version of Hoeffding’s inequality for sub-exponential
random variables is called Bernstein’s inequality. You may naturally expect to see
a sub-exponential tail bound in this result. So it may come as a surprise that
Bernstein’s inequality actually has a mixture of two tails – sub-gaussian and subexponential. Let us state and prove the inequality first, and then we will comment
on the mixture of the two tails.
Theorem 1.4.1 (Bernstein's inequality). Let X₁, ..., X_N be independent, mean zero, sub-exponential random variables. Then, for every t ≥ 0 we have

  P{ |Σ_{i=1}^N Xᵢ| ≥ t } ≤ 2 exp[ −c · min( t² / Σ_{i=1}^N ‖Xᵢ‖²_ψ₁ , t / maxᵢ ‖Xᵢ‖_ψ₁ ) ].
Proof. For simplicity, we will assume that K = 1 and only prove the one-sided bound (without the absolute value); the general case is not much harder. Our approach will be based on bounding the moment generating function of the sum S := Σ_{i=1}^N Xᵢ. To see how the MGF can be helpful here, choose λ > 0 and use Markov's inequality to get

(1.4.2)  P{S ≥ t} = P{exp(λS) ≥ exp(λt)} ≤ e^{−λt} E exp(λS).

Recall that S = Σ_{i=1}^N Xᵢ and use independence to express the right side of (1.4.2) as

  e^{−λt} Π_{i=1}^N E exp(λXᵢ).

(Check!) It remains to bound the MGF of each term Xᵢ, and this is a much simpler task. If we choose λ small enough so that

(1.4.3)  0 < λ ≤ c / maxᵢ ‖Xᵢ‖_ψ₁,

then we can use the last property in Proposition 1.3.1 to get

  E exp(λXᵢ) ≤ exp( Cλ²‖Xᵢ‖²_ψ₁ ).

Substitute into (1.4.2) and conclude that

  P{S ≥ t} ≤ exp( −λt + Cλ²σ² ),  where σ² = Σ_{i=1}^N ‖Xᵢ‖²_ψ₁.

The left side does not depend on λ while the right side does. So we can choose λ that minimizes the right side subject to the constraint (1.4.3). When this is done carefully, we obtain the tail bound stated in Bernstein's inequality. (Do this!)
Now, why does Bernstein’s inequality have a mixture of two tails? The subexponential tail should of course be there. Indeed, even if the entire sum consisted
of a single term Xi , the best bound we could hope for would be of the form
exp(−ct/kXi kψ1 ). The sub-gaussian term could be explained by the central limit
theorem, which states that the sum becomes approximately normal as the
number of terms N increases to infinity.
Remark 1.4.4 (Bernstein’s inequality for bounded random variables). Suppose the
random variables Xi are uniformly bounded, which is a stronger assumption than
being sub-gaussian. Then there is a useful version of Bernstein’s inequality, which
unlike Theorem 1.4.1 is sensitive to the variances of Xi ’s. It states that if K > 0 is
such that |Xi | 6 K almost surely for all i, then, for every t > 0, we have
P
(1.4.5)
σ2
PN
2
i=1 E Xi
N
X
i=1
Xi > t 6 2 exp −
t2 /2
.
σ2 + CKt
Here
=
is the variance of the sum. This version of Bernstein’s
inequality can be proved in essentially the same way as Theorem 1.4.1. We will
not do it here, but a stronger Theorem 2.2.1, which is valid for matrix-valued
random variables Xi , will be proved in Lecture 2.
To compare this with Theorem 1.4.1, note that σ² + CKt ≤ 2 max(σ², CKt). So we can state the probability bound (1.4.5) as

  2 exp[ −c · min( t²/σ² , t/K ) ].
Just like before, here we also have a mixture of two tails, sub-gaussian and
sub-exponential. The sub-gaussian tail is a bit sharper than in Theorem 1.4.1,
since it depends on the variances rather than sub-gaussian norms of Xi . The subexponential tail, on the other hand, is weaker, since it depends on the sup-norms
rather than the sub-exponential norms of Xi .
1.5. Sub-gaussian random vectors The concept of sub-gaussian distributions
can be extended to higher dimensions. Consider a random vector X taking values
in Rn . We call X a sub-gaussian random vector if all one-dimensional marginals of X,
i.e., the random variables hX, xi for x ∈ Rn , are sub-gaussian. The sub-gaussian
norm of X is defined as

  ‖X‖_ψ₂ := sup_{x ∈ Sⁿ⁻¹} ‖⟨X, x⟩‖_ψ₂,
where Sn−1 denotes the unit Euclidean sphere in Rn .
Example 1.5.1. Examples of sub-gaussian random distributions in Rn include the
standard normal distribution N(0, In ) (why?), the uniform distribution on the
√
centered Euclidean sphere of radius n, the uniform distribution on the cube
{−1, 1}n , and many others. The last example can be generalized: a random vector
X = (X1 , . . . , Xn ) with independent and sub-gaussian coordinates is sub-gaussian,
with kXkψ2 6 C maxi kXi kψ2 .
1.6. Johnson-Lindenstrauss Lemma Concentration inequalities like Hoeffding’s
and Bernstein’s are successfully used in the analysis of algorithms. Let us give
one example for the problem of dimension reduction. Suppose we have some data
that is represented as a set of N points in Rn . (Think, for example, of n gene
expressions of N patients.)
We would like to compress the data by representing it in a lower dimensional
space Rᵐ instead of Rⁿ with m ≪ n. By how much can we reduce the dimension without losing the important features of the data?
The basic result in this direction is the Johnson-Lindenstrauss Lemma. It states
that a remarkably simple dimension reduction method works – a random linear
map from Rn to Rm with
m ∼ log N,
see Figure 1.6.3. The logarithmic function grows very slowly, so we can usually
reduce the dimension dramatically.
What exactly is a random linear map? Several models are possible to use.
Here we will model such a map using a Gaussian random matrix – an m × n
matrix A with independent N(0, 1) entries. More generally, we can consider an
m × n matrix A whose rows are independent, mean zero, isotropic4 and subgaussian random vectors in Rn . For example, the entries of A can be independent
Rademacher entries – those taking values ±1 with equal probabilities.
⁴A random vector X ∈ Rⁿ is called isotropic if E XXᵀ = Iₙ.
Theorem 1.6.1 (Johnson-Lindenstrauss Lemma). Let X be a set of N points in Rn
and ε ∈ (0, 1). Consider an m × n matrix A whose rows are independent, mean zero,
isotropic and sub-gaussian random vectors in Rⁿ. Rescale A by defining the "Gaussian random projection"⁵

  P := (1/√m) A.
Assume that
m > Cε−2 log N,
where C is an appropriately large constant that depends only on the sub-gaussian norms of
the vectors Xi . Then, with high probability (say, 0.99), the map P preserves the distances
between all points in X with error ε, that is
(1.6.2)
(1 − ε)kx − yk2 6 kPx − Pyk2 6 (1 + ε)kx − yk2
for all x, y ∈ X.
Figure 1.6.3. Johnson-Lindenstrauss Lemma states that a random projection of N data points from dimension n to dimension m ∼ log N
approximately preserves the distances between the points.
Proof. Take a closer look at the desired conclusion (1.6.2). By linearity, Px − Py = P(x − y). So, dividing the inequality by ‖x − y‖₂, we can rewrite (1.6.2) in the following way:

(1.6.4)  1 − ε ≤ ‖Pz‖₂ ≤ 1 + ε for all z ∈ T,

where

  T := { (x − y)/‖x − y‖₂ : x, y ∈ X distinct points }.

It will be convenient to square the inequality (1.6.4). Using that 1 + ε ≤ (1 + ε)² and 1 − ε ≥ (1 − ε)², we see that it is enough to show that

(1.6.5)  1 − ε ≤ ‖Pz‖₂² ≤ 1 + ε for all z ∈ T.

By construction, the coordinates of the vector Pz = (1/√m) Az are (1/√m)⟨Xᵢ, z⟩. Thus we can restate (1.6.5) as

(1.6.6)  | (1/m) Σ_{i=1}^m ⟨Xᵢ, z⟩² − 1 | ≤ ε for all z ∈ T.

Results like (1.6.6) are often proved by combining concentration and a union bound. In order to use concentration, we first fix z ∈ T. By assumption, the random variables ⟨Xᵢ, z⟩² − 1 are independent; they have zero mean (use isotropy to check this!), and they are sub-exponential (use (1.3.2) to check this). Then Bernstein's inequality (Theorem 1.4.1) gives

  P{ | (1/m) Σ_{i=1}^m ⟨Xᵢ, z⟩² − 1 | ≥ ε } ≤ 2 exp(−cε²m).

(Check!)

Finally, we can unfix z by taking a union bound over all possible z ∈ T:

(1.6.7)  P{ max_{z∈T} | (1/m) Σ_{i=1}^m ⟨Xᵢ, z⟩² − 1 | ≥ ε } ≤ Σ_{z∈T} P{ | (1/m) Σ_{i=1}^m ⟨Xᵢ, z⟩² − 1 | ≥ ε } ≤ |T| · 2 exp(−cε²m).

By definition of T, we have |T| ≤ N². So, if we choose m ≥ Cε⁻² log N with an appropriately large constant C, we can make (1.6.7) bounded by 0.01. The proof is complete.

⁵Strictly speaking, this P is not a projection since it maps Rⁿ to a different space Rᵐ.
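The following short sketch (not part of the lectures) carries out this dimension reduction with a Gaussian random matrix and checks the distortion of pairwise distances empirically; the synthetic data, the value of ε, and the constant 4 in the choice of m are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n, eps = 100, 1000, 0.2
m = int(np.ceil(4 * eps**-2 * np.log(N)))   # m ~ eps^-2 log N; the constant 4 is a guess

X = rng.standard_normal((N, n))             # N data points in R^n
A = rng.standard_normal((m, n))             # Gaussian random matrix
P = A / np.sqrt(m)                          # the rescaled "Gaussian random projection"
Y = X @ P.T                                 # projected points in R^m

# Ratio of projected to original pairwise distances, over all distinct pairs.
i, j = np.triu_indices(N, k=1)
orig = np.linalg.norm(X[i] - X[j], axis=1)
proj = np.linalg.norm(Y[i] - Y[j], axis=1)
ratios = proj / orig
print(m, ratios.min(), ratios.max())        # expected to lie roughly in [1-eps, 1+eps]
```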
1.7. Notes The material presented in Sections 1.1–1.5 is basic and can be found
e.g. in [58] and [60] with all the proofs. Bernstein’s and Hoeffding’s inequalities
that we covered here are two basic examples of concentration inequalities. There
are many other useful concentration inequalities for sums of independent random
variables (e.g. Chernoff’s and Bennett’s) and for more general objects. The textbook [60] is an elementary introduction into concentration; the books [10, 38, 39]
offer more comprehensive and more advanced accounts of this area.
The original version of Johnson-Lindenstrauss Lemma was proved in [31]. The
version we gave here, Theorem 1.6.1, was stated with probability of success 0.99,
but an inspection of the proof gives probability 1 − 2 exp(−cε2 m) which is much
better for large m. A great variety of ramifications and applications of JohnsonLindenstrauss lemma are known, see e.g. [2, 4, 7, 10, 34, 42].
2. Lecture 2: Concentration of sums of independent random matrices
In the previous lecture we proved Bernstein’s inequality, which quantifies how
a sum of independent random variables concentrates about its mean. We will
now study an extension of Bernstein’s inequality to higher dimensions, which
holds for sums of independent random matrices.
2.1. Matrix calculus The key idea of developing a matrix Bernstein’s inequality
will be to use matrix calculus, which allows us to operate with matrices as with
scalars – adding and multiplying them of course, but also comparing matrices
and applying functions to matrices. Let us explain this.
We can compare matrices to each other using the notion of being positive semidefinite. Let us focus here on n × n symmetric matrices. If A − B is a positive semidefinite matrix,⁶ which we denote A − B ≽ 0, then we say that A ≽ B (and, of course, B ≼ A). This defines a partial order on the set of n × n symmetric matrices. The term "partial" indicates that, unlike the real numbers, there exist n × n symmetric matrices A and B that cannot be compared. (Give an example where neither A ≽ B nor B ≽ A!)
Next, let us guess how to measure the magnitude of a matrix A. The magnitude of a scalar a ∈ R is measured by the absolute value |a|; it is the smallest non-negative number t such that

  −t ≤ a ≤ t.

Extending this reasoning to matrices, we can measure the magnitude of an n × n symmetric matrix A by the smallest non-negative number t such that⁷

  −tIₙ ≼ A ≼ tIₙ.
The smallest t is called the operator norm of A and is denoted kAk. Diagonalizing
A, we can see that
(2.1.1)
kAk = max{|λ| : λ is an eigenvalue of A}.
With a little more work (do it!), we can see that kAk is the norm of A acting as a
linear operator on the Euclidean space (Rn , k · k2 ); this is why kAk is called the
operator norm. Thus kAk is the smallest non-negative number M such that
kAxk2 6 Mkxk2
for all x ∈ Rn .
Finally, we will need to be able to take functions of matrices. Let f : R → R
be a function and X be an n × n symmetric matrix. We can define f(X) in two
equivalent ways. The spectral theorem allows us to represent X as
  X = Σ_{i=1}^n λᵢ uᵢuᵢᵀ,

where λᵢ are the eigenvalues of X and uᵢ are the corresponding eigenvectors. Then we can simply define

  f(X) := Σ_{i=1}^n f(λᵢ) uᵢuᵢᵀ.
Note that f(X) has the same eigenvectors as X, but the eigenvalues change under
the action of f. An equivalent way to define f(X) is using power series. Suppose
the function f has a convergent power series expansion about some point x0 ∈ R,
⁶Recall that a symmetric real n × n matrix M is called positive semidefinite if xᵀMx ≥ 0 for any vector x ∈ Rⁿ.
⁷Here and later, Iₙ denotes the n × n identity matrix.
i.e.

  f(x) = Σ_{k=1}^∞ aₖ(x − x₀)ᵏ.

Then one can check that the following matrix series converges⁸ and defines f(X):

  f(X) = Σ_{k=1}^∞ aₖ(X − X₀)ᵏ.

(Check!)
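Here is a minimal sketch (not part of the text) of the first, spectral definition of f(X): diagonalize a symmetric matrix and apply f to its eigenvalues; the choice f = exp and the sanity checks are arbitrary.

```python
import numpy as np

def matrix_function(f, X):
    """Apply a scalar function f to a symmetric matrix X via the spectral
    decomposition X = sum_i lambda_i u_i u_i^T, as in Section 2.1."""
    eigvals, eigvecs = np.linalg.eigh(X)       # X is symmetric, so eigh applies
    return (eigvecs * f(eigvals)) @ eigvecs.T  # sum_i f(lambda_i) u_i u_i^T

rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5))
X = (B + B.T) / 2                              # a random symmetric matrix

expX = matrix_function(np.exp, X)
# Sanity checks: exp(X) exp(-X) = I, and applying log recovers X.
print(np.allclose(expX @ matrix_function(np.exp, -X), np.eye(5)))
print(np.allclose(matrix_function(np.log, expX), X))
```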
2.2. Matrix Bernstein’s inequality We are now ready to state and prove a remarkable generalization of Bernstein’s inequality for random matrices.
Theorem 2.2.1 (Matrix Bernstein’s inequality). Let X1 , . . . , XN be independent, mean
zero, n × n symmetric random matrices, such that kXi k 6 K almost surely for all i.
Then, for every t ≥ 0 we have

  P{ ‖Σ_{i=1}^N Xᵢ‖ ≥ t } ≤ 2n · exp( − (t²/2) / (σ² + Kt/3) ).

Here σ² = ‖Σ_{i=1}^N E Xᵢ²‖ is the norm of the "matrix variance" of the sum.
The scalar case, where n = 1, is the classical Bernstein’s inequality we stated
in (1.4.5). A remarkable feature of matrix Bernstein’s inequality, which makes
it especially powerful, is that it does not require any independence of the entries (or
the rows or columns) of Xᵢ; all that is needed is that the random matrices Xᵢ be
independent from each other.
In the rest of this section we will prove matrix Bernstein’s inequality, and give
a few applications in this and next lecture.
Our proof will be based on bounding the moment generating function (MGF) E exp(λS) of the sum S = Σ_{i=1}^N Xᵢ. Note that to exponentiate the matrix λS in order to define the matrix MGF, we rely on the matrix calculus that we introduced in Section 2.1.

If the terms Xᵢ were scalars, independence would yield the classical fact that the MGF of a sum is the product of MGFs, i.e.

(2.2.2)  E exp(λS) = E Π_{i=1}^N exp(λXᵢ) = Π_{i=1}^N E exp(λXᵢ).
But for matrices, this reasoning breaks down badly, for in general
eX+Y 6= eX eY
even for 2 × 2 symmetric matrices X and Y. (Give a counterexample!)
Fortunately, there are some trace inequalities that can often serve as proxies for
the missing equality eX+Y = eX eY . One of such proxies is the Golden-Thompson
8The convergence holds in any given metric on the set of matrices, for example in the metric given by
the operator norm. In this series, the terms (X − X0 )k are defined by the usual matrix product.
inequality, which states that
(2.2.3)
tr(eX+Y ) 6 tr(eX eY )
for any n × n symmetric matrices X and Y. Another result, which we will actually
use in the proof of matrix Bernstein’s inequality, is Lieb’s inequality.
Theorem 2.2.4 (Lieb’s inequality). Let H be an n × n symmetric matrix. Then the
function
f(X) = tr exp(H + log X)
is concave⁹ on the space of n × n symmetric matrices.
Note that in the scalar case, where n = 1, the function f in Lieb’s inequality is
linear and the result is trivial.
To use Lieb’s inequality in a probabilistic context, we will combine it with
the classical Jensen’s inequality. It states that for any concave function f and a
random matrix X, one has10
(2.2.5)
E f(X) 6 f(E X).
Using this for the function f in Lieb’s inequality, we get
E tr exp(H + log X) 6 tr exp(H + log E X).
And changing variables to X = eZ , we get the following:
Lemma 2.2.6 (Lieb’s inequality for random matrices). Let H be a fixed n × n symmetric matrix and Z be an n × n symmetric random matrix. Then
E tr exp(H + Z) 6 tr exp(H + log E eZ ).
Lieb’s inequality is a perfect tool for bounding the MGF of a sum of indepenP
dent random variables S = N
random
i=1 Xi . To do this, let us condition on the
P
variables X1 , . . . , XN−1 . Apply Lemma 2.2.6 for the fixed matrix H := N−1
i=1 λXi
and the random matrix Z := λXN , and afterwards take the expectation with respect to X1 , . . . , XN−1 . By the law of total expectation, we get
E tr exp(λS) 6 E tr exp
h N−1
X
i
λXi + log E eλXN .
i=1
Next, apply Lemma 2.2.6 in a similar manner for H :=
and Z := λXN−1 , and so on. After N times, we obtain:
PN−2
i=1
λXi + log E eλXN
9Formally, concavity of f means that f(λX + (1 − λ)Y) > λf(X) + (1 − λ)f(Y) for all symmetric
matrices X and Y and all λ ∈ [0, 1].
10Jensen’s inequality is usually stated for a convex function g and a scalar random variable X, and
it reads g(E X) 6 E g(X). From this, inequality (2.2.5) for concave functions and random matrices
easily follows (Check!).
Lemma 2.2.7 (MGF of a sum of independent random matrices). Let X₁, ..., X_N be independent n × n symmetric random matrices. Then the sum S = Σ_{i=1}^N Xᵢ satisfies

  E tr exp(λS) ≤ tr exp[ Σ_{i=1}^N log E e^{λXᵢ} ].

Think of this inequality as a matrix version of the scalar identity (2.2.2). The main difference is that it bounds the trace of the MGF¹¹ rather than the MGF itself. You may recall from a course in probability theory that the quantity log E e^{λXᵢ} that appears in this bound is called the cumulant generating function of Xᵢ.

¹¹Note that the order of expectation and trace can be swapped using linearity.
Lemma 2.2.7 reduces the complexity of our task significantly, for it is much
easier to bound the cumulant generating function of each single random variable
Xi than to say something about their sum. Here is a simple bound.
Lemma 2.2.8 (Moment generating function). Let X be an n × n symmetric random matrix. Assume that E X = 0 and ‖X‖ ≤ K almost surely. Then, for all 0 < λ < 3/K we have

  E exp(λX) ≼ exp( g(λ) E X² ),  where g(λ) = (λ²/2) / (1 − λK/3).

Proof. First, check that the following scalar inequality holds for 0 < λ < 3/K and |x| ≤ K:

  e^{λx} ≤ 1 + λx + g(λ)x².

Then extend it to matrices using matrix calculus: if 0 < λ < 3/K and ‖X‖ ≤ K then

  e^{λX} ≼ I + λX + g(λ)X².

(Do these two steps carefully!) Finally, take the expectation and recall that E X = 0 to obtain

  E e^{λX} ≼ I + g(λ) E X² ≼ exp( g(λ) E X² ).

In the last inequality, we use the matrix version of the scalar inequality 1 + z ≤ e^z that holds for all z ∈ R. The lemma is proved.
Proof of Matrix Bernstein's inequality. We would like to bound the operator norm of the random matrix S = Σ_{i=1}^N Xᵢ, which, as we know from (2.1.1), is the largest eigenvalue of S by magnitude. For simplicity of exposition, let us drop the absolute value from (2.1.1) and just bound the maximal eigenvalue of S, which we denote λ_max(S). (Once this is done, we can repeat the argument for −S to reinstate the absolute value. Do this!) So, we are to bound

  P{λ_max(S) ≥ t} = P{ e^{λ·λ_max(S)} ≥ e^{λt} }            (multiply by λ > 0 and exponentiate)
                 ≤ e^{−λt} E e^{λ·λ_max(S)}                 (by Markov's inequality)
                 = e^{−λt} E λ_max(e^{λS})                  (check!)
                 ≤ e^{−λt} E tr e^{λS}                      (max of eigenvalues is bounded by the sum)
                 ≤ e^{−λt} tr exp[ Σ_{i=1}^N log E e^{λXᵢ} ]  (use Lemma 2.2.7)
                 ≤ tr exp[ −λt + g(λ)Z ]                    (by Lemma 2.2.8),

where Z := Σ_{i=1}^N E Xᵢ².

It remains to optimize this bound in λ. The minimum is attained for λ = t/(σ² + Kt/3). (Check!) With this value of λ, we conclude

  P{λ_max(S) ≥ t} ≤ n · exp( − (t²/2) / (σ² + Kt/3) ).

This completes the proof of Theorem 2.2.1.
Bernstein's inequality gives a powerful tail bound for ‖Σ_{i=1}^N Xᵢ‖. This easily implies a useful bound on the expectation:

Corollary 2.2.9 (Expected norm of sum of random matrices). Let X₁, ..., X_N be independent, mean zero, n × n symmetric random matrices, such that ‖Xᵢ‖ ≤ K almost surely for all i. Then

  E ‖Σ_{i=1}^N Xᵢ‖ ≲ σ √(log n) + K log n,

where σ = ‖Σ_{i=1}^N E Xᵢ²‖^{1/2}.
Proof. The link from tail bounds to expectation is provided by the basic identity

(2.2.10)  E Z = ∫₀^∞ P{Z > t} dt,

which is valid for any non-negative random variable Z. (Check it!) Integrating the tail bound given by matrix Bernstein's inequality, you will arrive at the expectation bound we claimed. (Check!)
Notice that the bound in this corollary has mild, logarithmic, dependence on
the ambient dimension n. As we will see shortly, this can be an important feature
in some applications.
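As a rough numerical illustration (not from the lectures), the following sketch sums independent random sign-symmetric matrices, for which E Xᵢ² = nIₙ and ‖Xᵢ‖ ≤ n, and compares the average operator norm of the sum with the right-hand side of Corollary 2.2.9 with the hidden absolute constant set to 1; all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n, N, trials = 50, 200, 100

def random_sign_symmetric(n):
    """A symmetric matrix with +/-1 entries, mean zero and E X^2 = n*I."""
    U = rng.choice([-1.0, 1.0], size=(n, n))
    return np.triu(U) + np.triu(U, 1).T

norms = []
for _ in range(trials):
    S = sum(random_sign_symmetric(n) for _ in range(N))
    norms.append(np.linalg.norm(S, 2))           # operator (spectral) norm

# For these matrices sigma^2 = ||sum_i E X_i^2|| = N*n and ||X_i|| <= K = n.
sigma, K = np.sqrt(N * n), n
bound = sigma * np.sqrt(np.log(n)) + K * np.log(n)   # right-hand side, constant 1
print(np.mean(norms), bound)
```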
2.3. Community recovery in networks Matrix Bernstein’s inequality has many
applications. The one we are going to discuss first is for the analysis of networks.
A network can be mathematically represented by a graph, a set of n vertices
with edges connecting some of them. For simplicity, we will consider undirected
graphs where the edges do not have arrows. Real world networks often tend to
have clusters, or communities – subsets of vertices that are connected by unusually
many edges. (Think, for example, about a friendship network where communities
form around some common interests.) An important problem in data science is
to recover communities from a given network.
We are going to explain one of the simplest methods for community recovery,
which is called spectral clustering. But before we introduce it, we will first of all
place a probabilistic model on the networks we consider. In other words, it will be
convenient for us to view networks as random graphs whose edges are formed at
random. Although not all real-world networks are truly random, this simplistic
model can motivate us to develop algorithms that may empirically succeed also
for real-world networks.
The basic probabilistic model of random graphs is the Erdös-Rényi model.
Definition 2.3.1 (Erdös-Rényi model). Consider a set of n vertices and connect every
pair of vertices independently and with fixed probability p. The resulting random graph
is said to follow the Erdös-Rényi model G(n, p).
The Erdös-Rényi random model is very simple. But it is not a good choice if
we want to model a network with communities, for every pair of vertices has the
same chance to be connected. So let us introduce a natural generalization of the
Erdös-Rényi random model that does allow for community structure:
Definition 2.3.2 (Stochastic block model). Partition a set of n vertices into two subsets
(“communities”) with n/2 vertices each, and connect every pair of vertices independently
with probability p if they belong to the same community and q < p if not. The resulting
random graph is said to follow the stochastic block model G(n, p, q).
Figure 2.3.3 illustrates a simulation of a stochastic block model.
Figure 2.3.3. A network generated according to the stochastic block
model G(n, p, q) with n = 200 nodes and connection probabilities
p = 1/20 and q = 1/200.
Suppose we are shown one instance of a random graph generated according
to a stochastic block model G(n, p, q). How can we find which vertices belong to
which community?
The spectral clustering algorithm we are going to explain will do precisely this.
It will be based on the spectrum of the adjacency matrix A of the graph, which is
the n × n symmetric matrix whose entries Aij equal 1 if the vertices i and j are
connected by an edge, and 0 otherwise.12
The adjacency matrix A is a random matrix. Let us compute its expectation first. This is easy, since the entries of A are Bernoulli random variables. If i and j belong to the same community then E Aᵢⱼ = p, and otherwise E Aᵢⱼ = q. Thus E A has block structure: for example, if n = 4 then E A looks like this:

  E A = [ p p q q
          p p q q
          q q p p
          q q p p ].

(For illustration purposes, we grouped the vertices from each community together.)
You will easily check that E A has rank 2, and the non-zero eigenvalues and the corresponding eigenvectors are

(2.3.4)  λ₁(E A) = ((p + q)/2)·n with v₁(E A) = (1, 1, 1, 1)ᵀ;  λ₂(E A) = ((p − q)/2)·n with v₂(E A) = (1, 1, −1, −1)ᵀ.

(Check!)
The eigenvalues and eigenvectors of E A tell us a lot about the community
structure of the underlying graph. Indeed, the first (larger) eigenvalue,

  d := ((p + q)/2)·n,

is the expected degree of any vertex of the graph.¹³
us whether there is any community structure at all (which happens when p 6= q
and thus λ2 (E A) 6= 0). The first eigenvector v1 is not informative of the structure
of the network at all. It is the second eigenvector v2 that tells us exactly how to
separate the vertices into the two communities: the signs of the coefficients of v2
can be used for this purpose.
Thus if we know E A, we can recover the community structure of the network
from the signs of the second eigenvector. The problem is that we do not know
E A. Instead, we know the adjacency matrix A. If, by some chance, A is not far
from E A, we may hope to use the A to approximately recover the community
structure. So is it true that A ≈ E A? The answer is yes, and we can prove it
using matrix Bernstein’s inequality.
12For convenience, we call the vertices of the graph 1, 2, . . . , n.
13The degree of the vertex is the number of edges connected to it.
Theorem 2.3.5 (Concentration of the stochastic block model). Let A be the adjacency matrix of a G(n, p, q) random graph. Then

  E ‖A − E A‖ ≲ √(d log n) + log n.

Here d = (p + q)n/2 is the expected degree.
Proof. Let us sketch the argument. To use matrix Bernstein's inequality, let us break A into a sum of independent random matrices

  A = Σ_{i,j: i≤j} Xᵢⱼ,

where each matrix Xᵢⱼ contains a pair of symmetric entries of A, or one diagonal entry.¹⁴ Matrix Bernstein's inequality obviously applies for the sum

  A − E A = Σ_{i≤j} (Xᵢⱼ − E Xᵢⱼ).

Corollary 2.2.9 gives¹⁵

(2.3.6)  E ‖A − E A‖ ≲ σ √(log n) + K log n,

where σ² = ‖Σ_{i≤j} E(Xᵢⱼ − E Xᵢⱼ)²‖ and K = maxᵢⱼ ‖Xᵢⱼ − E Xᵢⱼ‖. It is a good exercise to check that

  σ² ≲ d and K ≤ 2.

(Do it!) Substituting into (2.3.6), we complete the proof.

¹⁴Precisely, if i ≠ j, then Xᵢⱼ has all zero entries except the (i, j) and (j, i) entries, which can potentially equal 1. If i = j, the only non-zero entry of Xᵢⱼ is the (i, i) entry.
¹⁵We will liberally use the notation ≲ to hide constant factors appearing in the inequalities. Thus, a ≲ b means that a ≤ Cb for some constant C.
How useful is Theorem 2.3.5 for community recovery? Suppose that the network is not too sparse, namely

  d ≫ log n.

Then

  ‖A − E A‖ ≲ √(d log n)  while  ‖E A‖ = λ₁(E A) = d,

which implies that

  ‖A − E A‖ ≪ ‖E A‖.

In other words, A nicely approximates E A: the relative error of approximation is small in the operator norm.
At this point one can apply classical results from the perturbation theory for
matrices, which state that since A and E A are close, their eigenvalues and eigenvectors must also be close. The relevant perturbation results are Weyl’s inequality
for eigenvalues and Davis-Kahan’s inequality for eigenvectors, which we will not
reproduce here. Heuristically, what they give us is

  v₂(A) ≈ v₂(E A) = (1, 1, −1, −1)ᵀ.
Then we should expect that most of the coefficients of v2 (A) are positive on one
community and negative on the other. So we can use v2 (A) to approximately
recover the communities. This method is called spectral clustering:
Spectral Clustering Algorithm. Compute v2 (A), the eigenvector corresponding to
the second largest eigenvalue of the adjacency matrix A of the network. Use the signs of
the coefficients of v2 (A) to predict the community membership of the vertices.
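The following compact sketch (not part of the lectures) runs this pipeline end to end: it samples a stochastic block model G(n, p, q) as in Definition 2.3.2 and applies the spectral clustering algorithm above; the parameters mirror Figure 2.3.3, and the agreement score accounts for the fact that the communities can only be recovered up to relabeling.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, q = 200, 1/20, 1/200                      # parameters as in Figure 2.3.3

# Ground-truth communities: first n/2 vertices vs. the rest.
labels = np.array([0] * (n // 2) + [1] * (n // 2))

# Sample a G(n, p, q) adjacency matrix: edge probability p within, q across.
probs = np.where(labels[:, None] == labels[None, :], p, q)
A = np.triu(rng.random((n, n)) < probs, 1)
A = (A + A.T).astype(float)                     # symmetric, zero diagonal

# Spectral clustering: signs of the eigenvector of the second largest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(A)            # eigenvalues in ascending order
v2 = eigvecs[:, -2]
pred = (v2 > 0).astype(int)

# Score the labeling up to a global flip of the two communities.
agreement = max(np.mean(pred == labels), np.mean(pred != labels))
print(f"correctly classified fraction: {agreement:.2f}")
```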
We saw that spectral clustering should perform well for the stochastic block
model G(n, p, q) if it is not too sparse, namely if the expected degrees satisfy
  d = (p + q)n/2 ≫ log n.
A more careful analysis along these lines, which you should be able to do
yourself with some work, leads to the following more rigorous result.
Theorem 2.3.7 (Guarantees of spectral clustering). Consider a random graph generated according to the stochastic block model G(n, p, q) with p > q, and set a = pn,
b = qn. Suppose that
(2.3.8)  (a − b)² ≫ log(n) · (a + b).
Then, with high probability, the spectral clustering algorithm recovers the communities
up to o(n) misclassified vertices.
Note that condition (2.3.8) implies that the expected degrees are not too small, namely d = (a + b)/2 ≫ log(n) (check!). It also ensures that a and b are sufficiently different: recall that if a = b the network is an Erdös-Rényi graph without
any community structure.
2.4. Notes The idea to extend concentration inequalities like Bernstein’s to matrices goes back to R. Ahlswede and A. Winter [3]. They used Golden-Thompson
inequality (2.2.3) and proved a slightly weaker form of matrix Bernstein’s inequality than we gave in Section 2.2. R. Oliveira [48, 49] found a way to improve
this argument and gave a result similar to Theorem 2.2.1. The version of matrix
Bernstein’s inequality we gave here (Theorem 2.2.1) and a proof based on Lieb’s
inequality is due to J. Tropp [52].
The survey [53] contains a comprehensive introduction of matrix calculus, a
proof of Lieb’s inequality (Theorem 2.2.4), a detailed proof of matrix Bernstein’s
inequality (Theorem 2.2.1) and a variety of applications. A proof of GoldenThompson inequality (2.2.3) can be found in [8, Theorem 9.3.7].
In Section 2.3 we scratched the surface of an interdisciplinary area of network analysis. For a systematic introduction into networks, refer to the book
[47]. Stochastic block models (Definition 2.3.2) were introduced in [33]. The
community recovery problem in stochastic block models, sometimes also called
community detection problem, has been in the spotlight in the last few years. A
vast and still growing body of literature exists on algorithms and theoretical results for community recovery, see the book [47], the survey [22], papers such as
[9, 29, 30, 32, 37, 46, 61] and the references therein.
A concentration result similar to Theorem 2.3.5 can be found in [48]; the argument there is also based on matrix concentration. This theorem is not quite
optimal. For dense networks, where the expected degree d satisfies d & log n, the
concentration inequality in Theorem 2.3.5 can be improved to
√
(2.4.1)
E kA − E Ak . d.
This improved bound goes back to the original paper [21] which studies the simpler Erdös-Rényi model but the results extend to stochastic block models [17]; it
can also be deduced from [6, 32, 37].
If the network is relatively dense, i.e. d & log n, one can improve the guarantee
(2.3.8) of spectral clustering in Theorem 2.3.7 to
  (a − b)² ≫ (a + b).
All one has to do is use the improved concentration inequality (2.4.1) instead of
Theorem 2.3.5. Furthermore, in this case there exist algorithms that can recover
the communities exactly, i.e. without any misclassified vertices, and with high
probability, see e.g. [1, 17, 32, 43].
For sparser networks, where d log n and possibly even d = O(1), relativelyfew algorithms were known until recently, but now there exist many approaches that provably recover communities in sparse stochastic block models,
see e.g. [9, 17, 29, 30, 37, 46, 61].
3. Lecture 3: Covariance estimation and matrix completion
In the last lecture, we proved matrix Bernstein’s inequality and gave an application for network analysis. We will spend this lecture discussing a couple of
other interesting applications of matrix Bernstein’s inequality. In Section 3.1 we
will work on covariance estimation, a basic problem in high-dimensional statistics.
In Section 3.2, we will derive a useful bound on norms of random matrices, which
unlike Bernstein’s inequality does not require any boundedness assumptions on
the distribution. We will apply this bound in Section 3.3 for a problem of matrix
completion, where we are shown a small sample of the entries of a matrix and
asked to guess the missing entries.
3.1. Covariance estimation Covariance estimation is a problem of fundamental
importance in high-dimensional statistics. Suppose we have a sample of data
points X1 , . . . , XN in Rn . It is often reasonable to assume that these points are
independently sampled from the same probability distribution (or “population”)
which is unknown. We would like to learn something useful about this distribution.
Denote by X a random vector that has this (unknown) distribution. The most
basic parameter of the distribution is the mean E X. One can estimate E X from the sample by computing the sample mean (1/N) ∑_{i=1}^N Xi . The law of large numbers guarantees that the estimate becomes tight as the sample size N grows to infinity, i.e.
    (1/N) ∑_{i=1}^N Xi → E X as N → ∞.
The next most basic parameter of the distribution is the covariance matrix
Σ := E(X − E X)(X − E X)T .
This is a higher-dimensional version of the usual notion of variance of a random
variable Z, which is
Var(Z) = E(Z − E Z)2 .
The eigenvectors of the covariance matrix Σ are called the principal components.
Principal components that correspond to large eigenvalues of Σ are the directions
in which the distribution of X is most extended, see Figure 3.1.1. These are often
the most interesting directions in the data. Practitioners often visualize the high-dimensional data by projecting it onto the span of a few (maybe two or three) of
such principal components; the projection may reveal some hidden structure of
the data. This method is called Principal Component Analysis (PCA).
Figure 3.1.1. Data points X1 , . . . , XN sampled from a distribution in Rn
and the principal components of the covariance matrix.
One can estimate the covariance matrix Σ from the sample by computing the
sample covariance
    ΣN := (1/N) ∑_{i=1}^N (Xi − E Xi )(Xi − E Xi )T .
Again, the law of large numbers guarantees that the estimate becomes tight as the sample size N grows to infinity, i.e. ΣN → Σ as N → ∞.
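To make these estimators concrete, here is a small numerical sketch in Python (my own illustration, not part of the lecture; the data model, dimensions and seed are arbitrary assumptions). It draws a sample, forms the sample covariance ΣN, and extracts the leading principal components used in PCA.

import numpy as np

# Illustrative sketch only: synthetic mean-zero Gaussian data with a known covariance.
rng = np.random.default_rng(0)
n, N = 50, 2000                                  # dimension and sample size (arbitrary)
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
Sigma = U @ np.diag([25.0, 9.0] + [1.0] * (n - 2)) @ U.T

X = rng.multivariate_normal(np.zeros(n), Sigma, size=N)   # rows are the samples X_1,...,X_N

Sigma_N = (X.T @ X) / N                          # sample covariance (mean-zero model)
eigvals, eigvecs = np.linalg.eigh(Sigma_N)       # eigenvalues in ascending order
top2 = eigvecs[:, -2:]                           # two leading principal components
projected = X @ top2                             # 2-D projection used for visualization (PCA)

print("estimation error ||Sigma_N - Sigma||:", np.linalg.norm(Sigma_N - Sigma, 2))
print("leading sample eigenvalues:", eigvals[-2:])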
But how large should the sample size N be for covariance estimation? Generally, one can not have N < n for dimension reasons. (Why?) We are going to
show that
N ∼ n log n
is enough. In other words, covariance estimation is possible with just logarithmic
oversampling.
For simplicity, we shall state the covariance estimation bound for mean zero
distributions. (If the mean is not zero, we can estimate it from the sample and
subtract. Check that the mean can be accurately estimated from a sample of size
N = O(n).)
Theorem 3.1.2 (Covariance estimation). Let X be a random vector in Rn with covariance matrix Σ. Suppose that
(3.1.3)    kXk22 . E kXk22 = tr Σ almost surely.
Then, for every N > 1, we have
    E kΣN − Σk . kΣk ( √(n log n / N) + (n log n)/N ).
Before we pass to the proof, let us note that Theorem 3.1.2 yields the covariance estimation result we promised. Let ε ∈ (0, 1). If we take a sample of size
    N ∼ ε−2 n log n,
then we are guaranteed covariance estimation with a good relative error:
E kΣN − Σk 6 εkΣk.
Proof. Apply matrix Bernstein's inequality (Corollary 2.2.9) for the sum of independent random matrices Xi XiT − Σ and get
(3.1.4)    E kΣN − Σk = (1/N) E k ∑_{i=1}^N (Xi XiT − Σ) k . (1/N) ( σ √(log n) + K log n ),
where
    σ2 = k ∑_{i=1}^N E(Xi XiT − Σ)2 k = N k E(XXT − Σ)2 k
and K is chosen so that
    kXXT − Σk 6 K almost surely.
It remains to bound σ and K. Let us start with σ. We have
    E(XXT − Σ)2 = E kXk22 XXT − Σ2    (check by expanding the square)
                ⪯ tr(Σ) · E XXT       (drop Σ2 and use (3.1.3))
                = tr(Σ) · Σ.
Thus
    σ2 . N tr(Σ)kΣk.
Next, to bound K, we have
    kXXT − Σk 6 kXk22 + kΣk    (by triangle inequality)
              . tr Σ + kΣk     (using (3.1.3))
              6 2 tr Σ =: K.
Substitute the bounds on σ and K into (3.1.4) and get
    E kΣN − Σk . (1/N) ( √(N tr(Σ)kΣk log n) + tr(Σ) log n ).
To complete the proof, use that tr Σ 6 nkΣk (check this!) and simplify the bound.
Remark 3.1.5 (Low-dimensional distributions). Much fewer samples are needed
for covariance estimation for low-dimensional, or approximately low-dimensional,
distributions. To measure approximate low-dimensionality we can use the notion
of the stable rank of Σ1/2 . The stable rank of a matrix A is defined as the square of the ratio of the Frobenius to operator norms:16
    r(A) := kAk2F / kAk2 .
The stable rank is always bounded by the usual, linear algebraic rank,
    r(A) 6 rank(A),
and it can be much smaller. (Check both claims.)
Our proof of Theorem 3.1.2 actually gives
    E kΣN − Σk 6 kΣk ( √(r log n / N) + (r log n)/N )
where
    r = r(Σ1/2 ) = tr Σ / kΣk .
(Check this!) Therefore, covariance estimation is possible with
    N ∼ r log n
samples.
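As a quick numerical companion (my own sketch, not from the notes; the example matrix is an arbitrary choice), the stable rank can be computed directly from its definition and compared with the linear algebraic rank.

import numpy as np

def stable_rank(A: np.ndarray) -> float:
    # r(A) = ||A||_F^2 / ||A||^2, the squared ratio of Frobenius to operator norm.
    return (np.linalg.norm(A, "fro") / np.linalg.norm(A, 2)) ** 2

rng = np.random.default_rng(1)
# One dominant direction plus small noise: full rank, but stable rank close to 1.
A = np.outer(rng.standard_normal(100), rng.standard_normal(100))
A += 0.01 * rng.standard_normal((100, 100))

print("stable rank:", stable_rank(A))             # close to 1
print("rank       :", np.linalg.matrix_rank(A))   # typically 100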
Remark 3.1.6 (The boundedness condition). It is a good exercise to check that if
we remove the boundedness condition (3.1.3), a nontrivial covariance estimation
is impossible in general. (Show this!) But how do we know whether the boundedness condition holds for data at hand? We may not, but we can enforce this
condition by truncation. All we have to do is to discard 1% of data points with
largest norms. (Check this accurately, assuming that such truncation does not
change the covariance significantly.)
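The truncation idea can be sketched as follows (my own illustration; the heavy-tailed data model and the 1% threshold are assumptions used only for demonstration): drop the points with the largest norms before forming the sample covariance.

import numpy as np

def truncate_largest_norms(X: np.ndarray, frac: float = 0.01) -> np.ndarray:
    # Discard the fraction `frac` of rows with the largest Euclidean norms.
    norms = np.linalg.norm(X, axis=1)
    return X[norms <= np.quantile(norms, 1.0 - frac)]

rng = np.random.default_rng(2)
X = rng.standard_t(df=3, size=(5000, 20))       # heavy-tailed sample (illustrative)
X_trunc = truncate_largest_norms(X, frac=0.01)  # enforce the boundedness condition
Sigma_N = (X_trunc.T @ X_trunc) / len(X_trunc)  # covariance estimate on the truncated data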
16The Frobenius norm of an n × m matrix, sometimes also called the Hilbert-Schmidt norm, is defined as kAkF = ( ∑_{i=1}^n ∑_{j=1}^m A2ij )1/2 . Equivalently, for an n × n symmetric matrix, kAkF = ( ∑_{i=1}^n λi (A)2 )1/2 , where λi (A) are the eigenvalues of A. Thus the stable rank of A can be expressed as r(A) = ∑_{i=1}^n λi (A)2 / maxi λi (A)2 .
3.2. Norms of random matrices We have worked a lot with the operator norm of
matrices, denoted kAk. One may ask whether there exists a formula that expresses kAk in terms of the entries Aij . Unfortunately, there is no such formula. The operator norm is a more difficult quantity in this respect than the Frobenius norm, which as we know can be easily expressed in terms of the entries: kAkF = ( ∑_{i,j} A2ij )1/2 .
If we cannot express kAk in terms of the entries, can we at least get a good estimate? Let us consider n × n symmetric matrices for simplicity. In one direction, kAk is always bounded below by the largest Euclidean norm of the rows Ai :
(3.2.1)    kAk > max_i kAi k2 = max_i ( ∑_j A2ij )1/2 .
(Check!) Unfortunately, this bound is sometimes very loose, and the best possible upper bound is
(3.2.2)    kAk 6 √n · max_i kAi k2 .
(Show this bound, and give an example where it is sharp.)
Fortunately, for random matrices with independent entries the bound (3.2.2)
can be improved to the point where the upper and lower bounds almost match.
Theorem 3.2.3 (Norms of random matrices without boundedness assumptions).
Let A be an n × n symmetric random matrix whose entries on and above the diagonal are
independent, mean zero random variables. Then
    E max_i kAi k2 6 E kAk 6 C log n · E max_i kAi k2 ,
where Ai denote the rows of A.
In words, the operator norm of a random matrix is almost determined by the
norm of the rows.
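A small experiment illustrates this (my own sketch, not from the text; the Gaussian model and size are arbitrary): for a symmetric random matrix, the operator norm and the largest row norm are of the same order, in line with Theorem 3.2.3.

import numpy as np

rng = np.random.default_rng(3)
n = 500
G = rng.standard_normal((n, n))
A = np.triu(G) + np.triu(G, 1).T     # symmetric; entries on and above the diagonal independent

op_norm = np.linalg.norm(A, 2)                 # ||A||
max_row = np.linalg.norm(A, axis=1).max()      # max_i ||A_i||_2

print("operator norm   :", op_norm)            # roughly 2*sqrt(n) for this model
print("largest row norm:", max_row)            # roughly sqrt(n); same order up to log factors
print("ratio           :", op_norm / max_row)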
Our proof of this result will be based on matrix Bernstein’s inequality – more
precisely, Corollary 2.2.9. There is one surprising point. How can we use matrix
Bernstein’s inequality, which applies only for bounded distributions, to prove
a result like Theorem 3.2.3 that does not have any boundedness assumptions?
We will do this using a trick based on conditioning and symmetrization. Let us
introduce this technique first.
Lemma 3.2.4 (Symmetrization). Let X1 , . . . , XN be independent, mean zero random
vectors in a normed space and ε1 , . . . , εN be independent Rademacher random variables.17
Then
    (1/2) E k ∑_{i=1}^N εi Xi k 6 E k ∑_{i=1}^N Xi k 6 2 E k ∑_{i=1}^N εi Xi k .
17This means that the random variables εi take values ±1 with probability 1/2 each. We require that all random variables we consider here, i.e. {Xi , εi : i = 1, . . . , N}, are jointly independent.
Proof. To prove the upper bound, let (Xi0 ) be an independent copy of the random vectors (Xi ), i.e. just different random vectors with the same joint distribution as
(Xi ) and independent from (Xi ). Then
    E k ∑_i Xi k = E k ∑_i Xi − E ∑_i Xi0 k    (since E ∑_i Xi0 = 0 by assumption)
                 6 E k ∑_i Xi − ∑_i Xi0 k      (by Jensen's inequality)
                 = E k ∑_i (Xi − Xi0 ) k .
The distribution of the random vectors Yi := Xi − Xi0 is symmetric, which means that the distributions of Yi and −Yi are the same. (Why?) Thus the distribution of the random vectors Yi and εi Yi is also the same, for all we do is change the signs of these vectors at random and independently of the values of the vectors. Summarizing, we can replace Xi − Xi0 in the sum above with εi (Xi − Xi0 ). Thus
    E k ∑_i Xi k 6 E k ∑_i εi (Xi − Xi0 ) k
                 6 E k ∑_i εi Xi k + E k ∑_i εi Xi0 k    (using triangle inequality)
                 = 2 E k ∑_i εi Xi k    (the two sums have the same distribution).
This proves the upper bound in the symmetrization inequality. The lower bound
can be proved by a similar argument. (Do this!)
Proof of Theorem 3.2.3. We already mentioned in (3.2.1) that the lower bound in Theorem 3.2.3 is trivial. The proof of the upper bound will be based on matrix Bernstein's inequality.
First, we decompose A in the same way as we did in the proof of Theorem 2.3.5.
Thus we represent A as a sum of independent, mean zero, symmetric random
matrices Zij each of which contains a pair of symmetric entries of A (or one
diagonal entry):
    A = ∑_{i,j: i6j} Zij .
Apply the symmetrization inequality (Lemma 3.2.4) for the random matrices Zij
and get
(3.2.5)    E kAk = E k ∑_{i6j} Zij k 6 2 E k ∑_{i6j} Xij k ,
where we set
Xij := εij Zij
and εij are independent Rademacher random variables.
Now we condition on A. The random variables Zij become fixed values and all
randomness remains in the Rademacher random variables εij . Note that Xij are
(conditionally) bounded almost surely, and this is exactly what we have lacked to
apply matrix Bernstein’s inequality. Now we can do it. Corollary 2.2.9 gives18
(3.2.6)    Eε k ∑_{i6j} Xij k . σ √(log n) + K log n,
where σ2 = k ∑_{i6j} Eε X2ij k and K = maxi6j kXij k.
A good exercise is to check that
    σ . max_i kAi k2 and K . max_i kAi k2 .
(Do it!) Substituting into (3.2.6), we get
    Eε k ∑_{i6j} Xij k . log n · max_i kAi k2 .
Finally, we unfix A by taking expectation of both sides of this inequality with
respect to A and using the law of total expectation. The proof is complete.
We stated Theorem 3.2.3 for symmetric matrices, but it is simple to extend it
to general m × n random matrices A. The bound in this case becomes
(3.2.7)    E kAk 6 C log(m + n) · ( E max_i kAi k2 + E max_j kAj k2 )
where Ai and Aj denote the rows and columns of A. To see this, apply Theorem 3.2.3 to the (m + n) × (m + n) symmetric random matrix
    [ 0   A ]
    [ AT  0 ] .
(Do this!)
3.3. Matrix completion Consider a fixed, unknown n × n matrix X. Suppose we
are shown m randomly chosen entries of X. Can we guess all the missing entries?
This important problem is called matrix completion. We will analyze it using the
bounds on the norms of random matrices we just obtained.
Obviously, there is no way to guess the missing entries unless we know something extra about the matrix X. So let us assume that X has low rank:
rank(X) =: r ≪ n.
The number of degrees of freedom of an n × n matrix with rank r is O(rn).
(Why?) So we may hope that
(3.3.1)
m ∼ rn
observed entries of X will be enough to determine X completely. But how?
Here we will analyze what is probably the simplest method for matrix completion. Take the matrix Y that consists of the observed entries of X while all
unobserved entries are set to zero. Unlike X, the matrix Y may not have small
18We stick a subscript ε to the expected value to remember that this is a conditional expectation, i.e.
we average only with respect to εi .
26
Four lectures on probabilistic methods for data science
rank. Compute the best rank r approximation19 of Y. The result, as we will show,
will be a good approximation to X.
But before we show this, let us define sampling of entries more rigorously.
Assume each entry of X is shown or hidden independently of others with fixed
probability p. Which entries are shown is decided by independent Bernoulli
random variables
    δij ∼ Ber(p) with p := m/n2 ,
which are often called selectors in this context. The value of p is chosen so that
among n2 entries of X, the expected number of selected (known) entries is m.
Define the n × n matrix Y with entries
Yij := δij Xij .
We can assume that we are shown Y, for it is a matrix that contains the observed
entries of X while all unobserved entries are replaced with zeros. The following
result shows how to estimate X based on Y.
Theorem 3.3.2 (Matrix completion). Let X̂ be a best rank r approximation to p−1 Y.
Then
(3.3.3)    (1/n) E kX̂ − XkF 6 C log(n) √(rn/m) kXk∞ .
Here kXk∞ = maxi,j |Xij | denotes the maximum magnitude of the entries of X.
Before we prove this result, let us understand what this bound says about the
quality of matrix completion. The recovery error is measured in the Frobenius
norm, and the left side of (3.3.3) is
    (1/n) kX̂ − XkF = ( (1/n2) ∑_{i,j=1}^n |X̂ij − Xij |2 )1/2 .
Thus Theorem 3.3.2 controls the average error per entry in the mean-squared sense.
To make the error small, let us assume that we have a sample of size
    m & rn log2 n,
which is slightly larger than the ideal size we discussed in (3.3.1). This makes C log(n) √(rn/m) = o(1) and forces the recovery error to be bounded by o(1)kXk∞ .
Summarizing, Theorem 3.3.2 says that the expected average error per entry is much
smaller than the maximal magnitude of the entries of X. This is true for a sample of
almost optimal size m. The smaller the rank r of the matrix X, the fewer entries
of X we need to see in order to do matrix completion.
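Before turning to the proof, here is a minimal sketch of the completion procedure described above (my own illustration; the matrix, the constant in the sample size and the seed are arbitrary assumptions): observe entries through Bernoulli selectors, rescale by 1/p, and truncate the SVD at rank r.

import numpy as np

rng = np.random.default_rng(4)
n, r = 500, 3
m = r * n * int(np.log(n)) ** 2              # about r n log^2 n observed entries (illustrative)
p = m / n**2                                 # sampling probability

X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # unknown rank-r matrix
delta = rng.random((n, n)) < p               # selectors delta_ij ~ Ber(p)
Y = np.where(delta, X, 0.0)                  # observed entries; zeros elsewhere

# Best rank-r approximation of p^{-1} Y via a truncated singular value decomposition.
U, s, Vt = np.linalg.svd(Y / p, full_matrices=False)
X_hat = (U[:, :r] * s[:r]) @ Vt[:r, :]

print("average per-entry error:", np.linalg.norm(X_hat - X, "fro") / n)
print("largest entry of X     :", np.abs(X).max())

In line with Theorem 3.3.2, the printed average error should come out much smaller than the largest entry of X.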
Proof of Theorem 3.3.2. Step 1: The error in the operator norm. Let us first bound
the recovery error in the operator norm. Decompose the error into two parts using
19The best rank r approximation of an n × n matrix A is a matrix B of rank r that minimizes the operator norm kA − Bk or, alternatively, the Frobenius norm kA − BkF (the minimizer turns out to be the same). One can compute B by truncating the singular value decomposition A = ∑_{i=1}^n si ui viT of A as follows: B = ∑_{i=1}^r si ui viT , where we assume that the singular values si are arranged in non-increasing order.
triangle inequality:
kX̂ − Xk 6 kX̂ − p−1 Yk + kp−1 Y − Xk.
Recall that X̂ is a best approximation to p−1 Y. Then the first part of the error is
smaller than the second part, i.e. kX̂ − p−1 Yk 6 kp−1 Y − Xk, and we have
(3.3.4)    kX̂ − Xk 6 2 kp−1 Y − Xk = (2/p) kY − pXk.
The entries of the matrix Y − pX,
(Y − pX)ij = (δij − p)Xij ,
are independent and mean zero random variables. Thus we can apply the bound
(3.2.7) on the norms of random matrices and get
(3.3.5)    E kY − pXk 6 C log n · ( E max_{i∈[n]} k(Y − pX)i k2 + E max_{j∈[n]} k(Y − pX)j k2 ).
All that remains is to bound the norms of the rows and columns of Y − pX.
This is not difficult if we note that they can be expressed as sums of independent
random variables:
    k(Y − pX)i k22 = ∑_{j=1}^n (δij − p)2 X2ij 6 ∑_{j=1}^n (δij − p)2 · kXk2∞ ,
and similarly for columns. Taking expectation and noting that E(δij − p)2 =
Var(δij ) = p(1 − p), we get20
(3.3.6)    E k(Y − pX)i k2 6 (E k(Y − pX)i k22 )1/2 6 √(pn) kXk∞ .
This is a good bound, but we need something stronger in (3.3.5). Since the maximum appears inside the expectation, we need a uniform bound, which will say
that all rows are bounded simultaneously with high probability.
Such uniform bounds are usually proved by applying concentration inequalities followed by a union bound. Bernstein’s inequality (1.4.5) yields
    P { ∑_{j=1}^n (δij − p)2 > tpn } 6 exp(−ctpn) for t > 3.
(Check!) This probability can be further bounded by n−ct using the assumption
that m = pn2 > n log n. A union bound over n rows leads to
    P { max_{i∈[n]} ∑_{j=1}^n (δij − p)2 > tpn } 6 n · n−ct for t > 3.
Integrating this tail, we conclude using (2.2.10) that
    E max_{i∈[n]} ∑_{j=1}^n (δij − p)2 . pn.
20The first bound below that compares the L1 and L2 averages follows from Hölder’s inequality.
(Check!) And this yields the desired bound on the rows,
    E max_{i∈[n]} k(Y − pX)i k2 . √(pn),
which is an improvement of (3.3.6) we wanted. We can do similarly for the columns. Substituting into (3.3.5), this gives
    E kY − pXk . log(n) √(pn) kXk∞ .
Then, by (3.3.4), we get
(3.3.7)    E kX̂ − Xk . log(n) √(n/p) kXk∞ .
Step 2: Passing to Frobenius norm. Now we will need to pass from the
operator to Frobenius norm. This is where we will use for the first (and only)
time the rank of X. We know that rank(X) 6 r by assumption and rank(X̂) 6 r
by construction, so rank(X̂ − X) 6 2r. There is a simple relationship between the
operator and Frobenius norms:
    kX̂ − XkF 6 √(2r) kX̂ − Xk.
(Check it!) Take expectation of both sides and use (3.3.7); we get
    E kX̂ − XkF 6 √(2r) E kX̂ − Xk . log(n) √(rn/p) kXk∞ .
Dividing both sides by n, we can rewrite this bound as
    (1/n) E kX̂ − XkF . log(n) √(rn/(pn2)) kXk∞ .
But pn2 = m by definition of the sampling probability p. This yields the desired
bound (3.3.3).
3.4. Notes Theorem 3.1.2 on covariance estimation is a version of [58, Corollary 5.52], see also [36]. The logarithmic factor is in general necessary. This
theorem is a general-purpose result. If one knows some additional structural information about the covariance matrix (such as sparsity), then fewer samples may
be needed, see e.g. [12, 16, 40].
A version of Theorem 3.2.3 was proved in [51] in a more technical way. Although the logarithmic factor in Theorem 3.2.3 can not be completely removed in
general, it can be improved. Our argument actually gives
    E kAk 6 C √(log n) · E max_i kAi k2 + C log n · E max_{ij} |Aij |.
Using different methods, one can save an extra √(log n) factor and show that
    E kAk 6 C E max_i kAi k2 + C √(log n) · E max_{ij} |Aij |
(see [6]) and
    E kAk 6 C √(log n · log log n) · E max_i kAi k2 ,
see [55]. (The results in [6, 55] are stated for Gaussian random matrices; the two
bounds above can be deduced by using conditioning and symmetrization.) The
surveys [6, 58] and the textbook [60] present several other useful techniques to
bound the operator norm of random matrices.
The matrix completion problem, which we discussed in Section 3.3, has attracted a lot of recent attention. E. Candes and B. Recht [14] showed that one can
often achieve exact matrix completion, thus computing the precise values of all
missing values of a matrix, from m ∼ rn log2 (n) randomly sampled entries. For
exact matrix completion, one needs an extra incoherence assumption that is not
present in Theorem 3.3.2. This assumption basically excludes matrices that are
simultaneously sparse and low rank (such as a matrix all of whose entries but one
are zero – it would be extremely hard to complete it, since sampling will likely
miss the non-zero entry). Many further results on exact matrix completion are
known, e.g. [15, 18, 28, 56].
Theorem 3.3.2 with a simple proof is borrowed from [50]; see also the tutorial
[59]. This result only guarantees approximate matrix completion, but it does not
have any incoherence assumptions on the matrix.
4. Lecture 4: Matrix deviation inequality
In this last lecture we will study a new uniform deviation inequality for random matrices. This result will be a far-reaching generalization of the Johnson-Lindenstrauss Lemma we proved in Lecture 1.
Consider the same setup as in Theorem 1.6.1, where A is an m × n random matrix whose rows are independent, mean zero, isotropic and sub-gaussian random
vectors in Rn . (If you find it helpful to think in terms of concrete examples, let
the entries of A be independent N(0, 1) random variables.) Like in the Johnson-Lindenstrauss Lemma, we will be looking at A as a linear transformation from
Rn to Rm , and we will be interested in what A does to points in some set in Rn .
This time, however, we will allow for infinite sets T ⊂ Rn .
Let us start by analyzing what A does to a single fixed vector x ∈ Rn . We have
    E kAxk22 = E ∑_{j=1}^m hAj , xi2    (where AjT denote the rows of A)
             = ∑_{j=1}^m E hAj , xi2    (by linearity)
             = m kxk22                  (using isotropy of Aj ).
Further, if we assume that concentration about the mean holds here (and in fact,
it does), we should expect that
(4.0.1)    kAxk2 ≈ √m kxk2
with high probability.
Similarly to Johnson-Lindenstrauss Lemma, our next goal is to make (4.0.1)
hold simultaneously over all vectors x in some fixed set T ⊂ Rn . Precisely, we
may ask – how large is the average uniform deviation:
(4.0.2)    E sup_{x∈T} | kAxk2 − √m kxk2 | ?
This quantity should clearly depend on some notion of the size of T : the larger
T , the larger should the uniform deviation be. So, how can we quantify the size
of T for this problem? In the next section we will do precisely this – introduce a
convenient, geometric measure of the sizes of sets in Rn , which is called Gaussian
width.
4.1. Gaussian width
Definition 4.1.1. Let T ⊂ Rn be a bounded set, and g be a standard normal random
vector in Rn , i.e. g ∼ N(0, In ). Then the quantities
    w(T ) := E sup_{x∈T} hg, xi    and    γ(T ) := E sup_{x∈T} | hg, xi |
are called the Gaussian width of T and the Gaussian complexity of T , respectively.
Gaussian width and Gaussian complexity are closely related. Indeed,21
(4.1.2)    w(T ) = (1/2) w(T − T ) = (1/2) E sup_{x,y∈T} hg, x − yi = (1/2) E sup_{x,y∈T} | hg, x − yi | = (1/2) γ(T − T ).
(Check these identities!)
Gaussian width has a natural geometric interpretation. Suppose g is a unit
vector in Rn . Then a moment’s thought reveals that supx,y∈T hg, x − yi is simply
the width of T in the direction of g, i.e. the distance between the two hyperplanes
with normal g that touch T on both sides as shown in Figure 4.1.3. Then 2w(T )
can be obtained by averaging the width of T over all directions g in Rn .
Figure 4.1.3. The width of a set T in the direction of g.
21The set T − T is defined as {x − y : x, y ∈ T }. More generally, given two sets A and B in the
same vector space, the Minkowski sum of A and B is defined as A + B = {a + b : a ∈ A, b ∈ B}.
This reasoning is valid except where we assumed that g is a unit vector. Instead, for g ∼ N(0, In ) we have E kgk22 = n and
    kgk2 ≈ √n with high probability.
(Check both these claims using Bernstein's inequality.) Thus, we need to scale by the factor √n. Ultimately, the geometric interpretation of the Gaussian width becomes the following: w(T ) is approximately √n/2 larger than the usual, geometric
width of T averaged over all directions.
A good exercise is to compute the Gaussian width and complexity for some simple sets, such as the unit balls of the `p norms in Rn , which we denote Bnp = {x ∈ Rn : kxkp 6 1}. In particular, we have
(4.1.4)    γ(Bn2 ) ∼ √n,    γ(Bn1 ) ∼ √(log n).
For any finite set T ⊂ Bn2 , we have
(4.1.5)    γ(T ) . √(log |T |).
The same holds for Gaussian width w(T ). (Check these facts!)
A look at these examples reveals that the Gaussian width captures some non-obvious geometric qualities of sets. Of course, the fact that the Gaussian width of the unit Euclidean ball Bn2 is of order √n is not surprising: the usual, geometric width in all directions is 2 and the Gaussian width is about √n times that. But it may be surprising that the Gaussian width of the `1 ball Bn1 is much smaller,
and so is the width of any finite set T (unless the set has exponentially large
cardinality). As we will see later, Gaussian width nicely captures the geometric
size of “the bulk” of a set.
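Gaussian width is also easy to approximate by Monte Carlo, which is a useful way to check the examples above numerically. The following sketch (mine; the dimension and trial count are arbitrary) uses the support functions of the `2 and `1 balls.

import numpy as np

def gaussian_width(support_fn, n: int, trials: int = 2000, seed: int = 5) -> float:
    # Monte Carlo estimate of w(T) = E sup_{x in T} <g, x>, given the support function of T.
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((trials, n))
    return float(np.mean([support_fn(g) for g in G]))

n = 200
# sup_{||x||_2 <= 1} <g, x> = ||g||_2   and   sup_{||x||_1 <= 1} <g, x> = ||g||_inf.
w_B2 = gaussian_width(lambda g: np.linalg.norm(g, 2), n)
w_B1 = gaussian_width(lambda g: np.linalg.norm(g, np.inf), n)

print("w(B2) ~", w_B2, " compare sqrt(n)     =", np.sqrt(n))
print("w(B1) ~", w_B1, " compare sqrt(log n) =", np.sqrt(np.log(n)))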
4.2. Matrix deviation inequality Now we are ready to answer the question we
asked in the beginning of this lecture: what is the magnitude of the uniform deviation (4.0.2)? The answer is surprisingly simple: it is bounded by the Gaussian
complexity of T . The proof is not too simple however, and we will skip it (see the
notes after this lecture for references).
Theorem 4.2.1 (Matrix deviation inequality). Let A be an m × n matrix whose rows
Ai are independent, isotropic and sub-gaussian random vectors in Rn . Let T ⊂ Rn be a
fixed bounded set. Then
    E sup_{x∈T} | kAxk2 − √m kxk2 | 6 CK2 γ(T )
where K = maxi kAi kψ2 is the maximal sub-gaussian norm22 of the rows of A.
Remark 4.2.2 (Tail bound). It is often useful to have results that hold with high
probability rather than in expectation. There exists a high-probability version of
the matrix deviation inequality, and it states the following. Let u > 0. Then the
22A definition of the sub-gaussian norm of a random vector was given in Section 1.5. For example, if
A is a Gaussian random matrix with independent N(0, 1) entries, then K is an absolute constant.
event
(4.2.3)    sup_{x∈T} | kAxk2 − √m kxk2 | 6 CK2 [γ(T ) + u · rad(T )]
holds with probability at least 1 − 2 exp(−u2 ). Here rad(T ) is the radius of T , defined as
    rad(T ) := sup_{x∈T} kxk2 .
Since rad(T ) . γ(T ) (check!) we can continue the bound (4.2.3) by
. K2 uγ(T )
for all u > 1. This is a weaker but still a useful inequality. For example, we can
use it to bound all higher moments of the deviation:
(4.2.4)    ( E sup_{x∈T} | kAxk2 − √m kxk2 |p )1/p 6 Cp K2 γ(T )
where Cp 6 C √p for p > 1. (Check this using Proposition 1.1.1.)
Remark 4.2.5 (Deviation of squares). It is sometimes helpful to bound the deviation of the square kAxk22 rather than kAxk2 itself. We can easily deduce the deviation of squares by using the identity a2 − b2 = (a − b)2 + 2b(a − b) for a = kAxk2
and b = √m kxk2 . Doing this, we conclude that
(4.2.6)    E sup_{x∈T} | kAxk22 − m kxk22 | 6 CK4 γ(T )2 + CK2 √m rad(T ) γ(T ).
(Do this calculation using (4.2.4) for p = 2.) We will use this bound in Section 4.4.
Matrix deviation inequality has many consequences. We will explore some of
them now.
4.3. Deriving Johnson-Lindenstrauss Lemma We started this lecture by promising a result that is more general than Johnson-Lindenstrauss Lemma. So let us
show how to quickly derive Johnson-Lindenstrauss from the matrix deviation
inequality. Theorem 1.6.1 from Theorem 4.2.1.
Assume we are in the situation of the Johnson-Lindenstrauss Lemma (Theorem 1.6.1). Given a set X ⊂ Rn , consider the normalized difference set
    T := { (x − y)/kx − yk2 : x, y ∈ X distinct vectors } .
Then T is a finite subset of the unit sphere of Rn , and thus (4.1.5) gives
    γ(T ) . √(log |T |) 6 √(log |X|2 ) . √(log |X|).
Matrix deviation inequality (Theorem 4.2.1) then yields
    sup_{x,y∈X} | kA(x − y)k2 / kx − yk2 − √m | . √(log N) 6 ε √m
with high probability, say 0.99. (To pass from expectation to high probability, we
can use Markov’s inequality. To get the last bound, we use the assumption on m
in Johnson-Lindenstrauss Lemma.)
Multiplying both sides by kx − yk2 /√m, we can write the last bound as follows. With probability at least 0.99, we have
    (1 − ε) kx − yk2 6 (1/√m) kAx − Ayk2 6 (1 + ε) kx − yk2 for all x, y ∈ X.
This is exactly the consequence of Johnson-Lindenstrauss lemma.
The argument based on matrix deviation inequality, which we just gave, can
be easily extended for infinite sets. It allows one to state a version of Johnson-Lindenstrauss lemma for general, possibly infinite, sets, which depends on the
Gaussian complexity of T rather than cardinality. (Try to do this!)
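As a numerical companion to this derivation (my own sketch; the point set, ε and the constant in m are arbitrary assumptions), one can project a finite set by a Gaussian matrix scaled by 1/√m and verify that pairwise distances are preserved within a factor 1 ± ε.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)
n, num_points, eps = 1000, 40, 0.2
m = int(np.ceil(8 * np.log(num_points) / eps**2))   # m ~ eps^-2 log|X| (illustrative constant)

X = rng.standard_normal((num_points, n))             # the finite set X in R^n
A = rng.standard_normal((m, n))                      # Gaussian random matrix

ratios = []
for i, j in combinations(range(num_points), 2):
    d_orig = np.linalg.norm(X[i] - X[j])
    d_proj = np.linalg.norm(A @ (X[i] - X[j])) / np.sqrt(m)
    ratios.append(d_proj / d_orig)

print("smallest and largest distortion:", min(ratios), max(ratios))  # expect within [1-eps, 1+eps]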
4.4. Covariance estimation In Section 3.1, we introduced the problem of covariance estimation, and we showed that
N ∼ n log n
samples are enough to estimate the covariance matrix of a general distribution
in Rn . We will now show how to do better if the distribution is sub-gaussian.
(Recall Section 1.5 for the definition of sub-gaussian random vectors.) In this case,
we can get rid of the logarithmic oversampling and the boundedness condition
(3.1.3).
Theorem 4.4.1 (Covariance estimation for sub-gaussian distributions). Let X be a
random vector in Rn with covariance matrix Σ. Suppose X is sub-gaussian, and more
specifically
(4.4.2)
k hX, xi kψ2 . k hX, xi kL2 = kΣ1/2 xk2
for any x ∈ Rn .
Then, for every N > 1, we have
    E kΣN − Σk . kΣk ( √(n/N) + n/N ).
This result implies that if, for ε ∈ (0, 1), we take a sample of size
    N ∼ ε−2 n,
then we are guaranteed covariance estimation with a good relative error:
E kΣN − Σk 6 εkΣk.
Proof. Since we are going to use Theorem 4.2.1, we will need to first bring the
random vectors X, X1 , . . . , XN to the isotropic position. This can be done by a
suitable linear transformation. You will easily check that there exists an isotropic
random vector Z such that
X = Σ1/2 Z.
(For example, if Σ has full rank, set Z := Σ−1/2 X. Check the general case.) Similarly,
we can find independent and isotropic random vectors Zi such that
Xi = Σ1/2 Zi ,
i = 1, . . . , N.
The sub-gaussian assumption (4.4.2) then implies that
kZkψ2 . 1.
(Check!) Then
    kΣN − Σk = kΣ1/2 RN Σ1/2 k where RN := (1/N) ∑_{i=1}^N Zi ZiT − In .
The operator norm of a symmetric n × n matrix A can be computed by maximizing the quadratic form over the unit sphere: kAk = maxx∈Sn−1 | hAx, xi |. (To
see this, recall that the operator norm is the biggest eigenvalue of A in magnitude.) Then
    kΣN − Σk = max_{x∈Sn−1} | hΣ1/2 RN Σ1/2 x, xi | = max_{x∈T} | hRN x, xi |
where T is the ellipsoid
    T := Σ1/2 Sn−1 .
Recalling the definition of RN , we can rewrite this as
    kΣN − Σk = max_{x∈T} | (1/N) ∑_{i=1}^N hZi , xi2 − kxk22 | = (1/N) max_{x∈T} | kAxk22 − N kxk22 | ,
where A denotes the N × n matrix whose rows are the vectors ZiT .
Now we apply the matrix deviation inequality for squares (4.2.6) and conclude
that
    E kΣN − Σk . (1/N) ( γ(T )2 + √N rad(T ) γ(T ) ).
(Do this calculation!) The radius and Gaussian width of the ellipsoid T are easy to compute:
    rad(T ) = kΣk1/2 and γ(T ) 6 (tr Σ)1/2 .
Substituting, we get
    E kΣN − Σk . (1/N) ( tr Σ + √(N kΣk tr Σ) ).
To complete the proof, use that tr Σ 6 nkΣk (check this!) and simplify the bound.
Remark 4.4.3 (Low-dimensional distributions). Similarly to Section 3.1, we can
show that much fewer samples are needed for covariance estimation of low-dimensional sub-gaussian distributions. Indeed, the proof of Theorem 4.4.1 actually yields
(4.4.4)    E kΣN − Σk 6 kΣk ( √(r/N) + r/N )
where
    r = r(Σ1/2 ) = tr Σ / kΣk
is the stable rank of Σ1/2 . This means that covariance estimation is possible with
N∼r
samples.
4.5. Underdetermined linear equations We will give one more application of
the matrix deviation inequality – this time, to the area of high dimensional inference. Suppose we need to solve a severely underdetermined system of linear
equations: say, we have m equations in n ≫ m variables. Let us write it in the
matrix form as
y = Ax
where A is a given m × n matrix, y ∈ Rm is a given vector and x ∈ Rn is an
unknown vector. We would like to compute x from A and y.
When the linear system is underdetermined, we can not find x with any accuracy, unless we know something extra about x. So, let us assume that we do
have some a-priori information. We can describe this situation mathematically by
assuming that
x∈K
where K ⊂ Rn is some known set in Rn that describes anything that we know
about x a-priori. (Admittedly, we are operating on a high level of generality here.
If you need a concrete example, we will consider it in Section 4.6.)
Summarizing, here is the problem we are trying to solve. Determine a solution
x = x(A, y, K) to the underdetermined linear equation y = Ax as accurately as
possible, assuming that x ∈ K.
A variety of approaches to this and similar problems were proposed during the
last decade; see the notes after this lecture for pointers to some literature. The one
we will describe here is based on optimization. To do this, it will be convenient
to convert the set K into a function on Rn which is called the Minkowski functional
of K. This is basically a function whose level sets are multiples of K. To define it
formally, assume that K is star-shaped, which means that together with any point
x, the set K must contain the entire interval that connects x with the origin; see
Figure 4.5.1 for illustration. The Minkowski functional of K is defined as
    kxkK := inf { t > 0 : x/t ∈ K } , x ∈ Rn .
If the set K is convex and symmetric about the origin, kxkK is actually a norm on
Rn . (Check this!)
Figure 4.5.1. The set on the left (whose boundary is shown) is star-shaped; the set on the right is not.
Now we propose the following way to solve the recovery problem: solve the
optimization program
(4.5.2)    min kx 0 kK subject to y = Ax 0 .
Note that this is a very natural program: it looks at all solutions to the equation
y = Ax 0 and tries to “shrink” the solution x 0 toward K. (This is what minimization of Minkowski functional is about.)
Also note that if K is convex, this is a convex optimization program, and thus
can be solved effectively by one of the many available numeric algorithms.
The main question we should now be asking is – would the solution to this
program approximate the original vector x? The following result bounds the
approximation error for a probabilistic model of linear equations. Assume that A
is a random matrix as in Theorem 4.2.1, i.e. A is an m × n matrix whose rows Ai
are independent, isotropic and sub-gaussian random vectors in Rn .
Theorem 4.5.3 (Recovery by optimization). The solution x̂ of the optimization program (4.5.2) satisfies23
    E kx̂ − xk2 . w(K)/√m ,
where w(K) is the Gaussian width of K.
Proof. Both the original vector x and the solution x̂ are feasible vectors for the
optimization program (4.5.2). Then
    kx̂kK 6 kxkK    (since x̂ minimizes the Minkowski functional)
          6 1      (since x ∈ K).
Thus both x̂, x ∈ K.
We also know that Ax̂ = Ax = y, which yields
(4.5.4)
A(x̂ − x) = 0.
Let us apply matrix deviation inequality (Theorem 4.2.1) for T := K − K. It
gives
    E sup_{u,v∈K} | kA(u − v)k2 − √m ku − vk2 | . γ(K − K) = 2w(K),
where we used (4.1.2) in the last identity. Substitute u = x̂ and v = x here. We
may do this since, as we noted above, both these vectors belong to K. But then
the term kA(u − v)k2 will equal zero by (4.5.4). It disappears from the bound, and
we get
    E √m kx̂ − xk2 . w(K).
Dividing both sides by √m we complete the proof.
23Here and in other similar results, the notation . will hide possible dependence on the sub-gaussian
norms of the rows of A.
Theorem 4.5.3 says that a signal x ∈ K can be efficiently recovered from
m ∼ w(K)2
random linear measurements.
4.6. Sparse recovery Let us illustrate Theorem 4.5.3 with an important specific
example of the feasible set K.
Suppose we know that the signal x is sparse, which means that only a few
coordinates of x are nonzero. As before, our task is to recover x from the random
linear measurements given by the vector
y = Ax,
where A is an m × n random matrix. This is a basic example of sparse recovery
problems, which are ubiquitous in various disciplines.
The number of nonzero coefficients of a vector x ∈ Rn , or the sparsity of x,
is often denoted kxk0 . This is similar to the notation for the `p norm kxkp = ( ∑_{i=1}^n |xi |p )1/p , and for a reason. You can quickly check that
(4.6.1)    kxk0 = lim_{p→0} (kxkp )p .
(Do this!) Keep in mind that neither kxk0 nor kxkp for 0 < p < 1 are actually
norms on Rn , since they fail triangle inequality. (Give an example.)
Let us go back to the sparse recovery problem. Our first attempt to recover x
is to try the following optimization problem:
(4.6.2)    min kx 0 k0 subject to y = Ax 0 .
This is sensible because this program selects the sparsest feasible solution. But
there is an implementation caveat: the function f(x) = kxk0 is highly non-convex
and even discontinuous. There is simply no known algorithm to solve the optimization problem (4.6.2) efficiently.
To overcome this difficulty, let us turn to the relation (4.6.1) for an inspiration.
What if we replace kxk0 in the optimization problem (4.6.2) by kxkp with p > 0?
The smallest p for which f(x) = kxkp is a genuine norm (and thus a convex
function on Rn ) is p = 1. So let us try
(4.6.3)    min kx 0 k1 subject to y = Ax 0 .
This is a convexification of the non-convex program (4.6.2), and a variety of numeric convex optimization methods are available to solve it efficiently.
We will now show that `1 minimization works nicely for sparse recovery. As
before, we assume that A is a random matrix as in Theorem 4.2.1.
Theorem 4.6.4 (Sparse recovery by optimization). Assume that an unknown vector
x ∈ Rn has at most s non-zero coordinates, i.e. kxk0 6 s. The solution x̂ of the
optimization program (4.6.3) satisfies
    E kx̂ − xk2 . √(s log n / m) kxk2 .
Proof. Since kxk0 6 s, Cauchy-Schwarz inequality shows that
(4.6.5)    kxk1 6 √s kxk2 .
(Check!) Denote the unit ball of the `1 norm in Rn by Bn1 , i.e. Bn1 := {x ∈ Rn : kxk1 6 1}. Then we can rewrite (4.6.5) as the inclusion
    x ∈ √s kxk2 · Bn1 =: K.
Apply Theorem 4.5.3 for this set K. We noted the Gaussian width of Bn1 in (4.1.4), so
    w(K) = √s kxk2 · w(Bn1 ) 6 √s kxk2 · γ(Bn1 ) 6 √s kxk2 · √(log n).
Substitute this in Theorem 4.5.3 and complete the proof.
Theorem 4.6.4 says that an s-sparse signal x ∈ Rn can be efficiently recovered
from
m ∼ s log n
random linear measurements.
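To connect the `1 program (4.6.3) with practice, here is a small sketch (my own illustration; the parameters, measurement model and LP reformulation details are assumptions) that solves basis pursuit as a linear program with scipy.optimize.linprog.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
n, s = 200, 5
m = int(4 * s * np.log(n))                 # about s log n measurements (illustrative constant)

x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# Basis pursuit  min ||x||_1  s.t.  A x = y,  as an LP in the variables (x, t):
#   minimize sum(t)  subject to  -t <= x <= t  and  A x = y.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)],      #  x - t <= 0
                 [-np.eye(n), -np.eye(n)]])    # -x - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * (2 * n), method="highs")
x_hat = res.x[:n]

print("recovery error ||x_hat - x||_2:", np.linalg.norm(x_hat - x_true))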
4.7. Notes For a more thorough introduction to Gaussian width and its role in
high-dimensional estimation, refer to the tutorial [59] and the textbook [60]; see
also [5]. Related to Gaussian complexity is the notion of Rademacher complexity
of T , obtained by replacing the coordinates of g by independent Rademacher (i.e.
±1 symmetric) random variables. Rademacher complexity of classes of functions
plays an important role in statistical learning theory, see e.g. [44]
Matrix deviation inequality (Theorem 4.2.1) is borrowed from [41]. In the special case where A is a Gaussian random matrix, this result follows from the work
of G. Schechtman [57] and could be traced back to results of Gordon [24–27].
In the general case of sub-gaussian distributions, earlier variants of Theorem 4.2.1 were proved by B. Klartag and S. Mendelson [35], S. Mendelson, A. Pajor
and N. Tomczak-Jaegermann [45] and S. Dirksen [20].
Theorem 4.4.1 for covariance estimation can be proved alternatively using more
elementary tools (Bernstein’s inequality and ε-nets), see [58]. However, no known
elementary approach exists for the low-rank covariance estimation discussed in
Remark 4.4.3. The bound (4.4.4) was proved by V. Koltchinskii and K. Lounici
[36] by a different method.
In Section 4.5, we scratched the surface of a recently developed area of sparse
signal recovery, which is also called compressed sensing. Our presentation there
essentially follows the tutorial [59]. Theorem 4.6.4 can be improved: if we take
m & s log(n/s)
measurements, then with high probability the optimization program (4.6.3) recovers the unknown signal x exactly, i.e.
x̂ = x.
First results of this kind were proved by J. Romberg, E. Candes and T. Tao [13]
and a great number of further developments followed; refer e.g. to the book [23]
and the chapter in [19] for an introduction into this research area.
Acknowledgement
I am grateful to the referees who made a number of useful suggestions, which
led to better presentation of the material in this chapter.
References
[1] E. Abbe, A. S. Bandeira, G. Hall. Exact recovery in the stochastic block model, IEEE Transactions on
Information Theory 62 (2016), 471–487. 19
[2] D. Achlioptas, Database-friendly random projections: Johnson-Lindenstrauss with binary coins, Journal
of Computer and System Sciences, 66 (2003), 671–687. 9
[3] R. Ahlswede, A. Winter, Strong converse for identification via quantum channels, IEEE Trans. Inf.
Theory 48 (2002), 569–579. 18
[4] N. Ailon, B. Chazelle, Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform,
Proceedings of the 38th Annual ACM Symposium on Theory of Computing. New York: ACM
Press, 2006. pp. 557–563. 9
[5] D. Amelunxen, M. Lotz, M. McCoy, J. Tropp, Living on the edge: phase transitions in convex programs
with random data, Inf. Inference 3 (2014), 224–294. 38
[6] A. Bandeira, R. van Handel, Sharp nonasymptotic bounds on the norm of random matrices with independent entries, Ann. Probab. 44 (2016), 2479–2506. 19, 28, 29
[7] R. Baraniuk, M. Davenport, R. DeVore, M. Wakin, A simple proof of the restricted isometry property
for random matrices, Constructive Approximation, 28 (2008), 253–263. 9
[8] R. Bhatia, Matrix Analysis. Graduate Texts in Mathematics, vol. 169. Springer, Berlin, 1997. 18
[9] C. Bordenave, M. Lelarge, L. Massoulie, Non-backtracking spectrum of random graphs: community detection and non-regular Ramanujan graphs, Annals of Probability, to appear. 19
[10] S. Boucheron, G. Lugosi, P. Massart, Concentration inequalities. A nonasymptotic theory of independence. With a foreword by Michel Ledoux. Oxford University Press, Oxford, 2013. 9
[11] O. Bousquet, S. Boucheron, G. Lugosi, Introduction to statistical learning theory, in: Advanced
Lectures on Machine Learning, Lecture Notes in Computer Science 3176, pp.169–207, Springer
Verlag 2004.
[12] T. Cai, R. Zhao, H. Zhou, Estimating structured high-dimensional covariance and precision matrices:
optimal rates and adaptive estimation, Electron. J. Stat. 10 (2016), 1–59. 28
[13] E. Candes, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly
incomplete frequency information, IEEE Trans. Inform. Theory 52 (2006), 489–509. 39
[14] E. Candes, B. Recht, Exact Matrix Completion via Convex Optimization, Foundations of Computational Mathematics 9 (2009), 717–772. 29
[15] E. Candes, T. Tao, The power of convex relaxation: near-optimal matrix completion, IEEE Trans. Inform.
Theory 56 (2010), 2053–2080. 29
[16] R. Chen, A. Gittens, J. Tropp, The masked sample covariance estimator: an analysis using matrix concentration inequalities, Inf. Inference 1 (2012), 2–20. 28
[17] P. Chin, A. Rao, and V. Vu, Stochastic block model and community detection in the sparse graphs: A spectral algorithm with optimal rate of recovery, preprint, 2015. 19
[18] M. Davenport, Y. Plan, E. van den Berg, M. Wootters, 1-bit matrix completion, Inf. Inference 3
(2014), 189–223. 29
[19] M. Davenport, M. Duarte, Yonina C. Eldar, Gitta Kutyniok, Introduction to compressed sensing, in:
Compressed sensing. Edited by Yonina C. Eldar and Gitta Kutyniok. Cambridge University Press,
Cambridge, 2012. 39
[20] S. Dirksen, Tail bounds via generic chaining, Electron. J. Probab. 20 (2015), 1–29. 38
[21] U. Feige, E. Ofek, Spectral techniques applied to sparse random graphs, Random Structures Algorithms 27 (2005), 251–275. 19
[22] S. Fortunato, Santo; D. Hric, Community detection in networks: A user guide. Phys. Rep. 659 (2016),
1–44. 19
[23] S. Foucart, H. Rauhut, A mathematical introduction to compressive sensing. Applied and Numerical
Harmonic Analysis. Birkhäuser/Springer, New York, 2013. 39
[24] Y. Gordon, Some inequalities for Gaussian processes and applications, Israel J. Math. 50 (1985), 265–289.
38
[25] Y. Gordon, Elliptically contoured distributions, Prob. Th. Rel. Fields 76 (1987), 429–438. 38
[26] Y. Gordon, On Milman’s inequality and random subspaces which escape through a mesh in Rn , Geometric aspects of functional analysis (1986/87), Lecture Notes in Math., vol. 1317, pp. 84–106.
38
[27] Y. Gordon, Majorization of Gaussian processes and geometric applications, Prob. Th. Rel. Fields 91
(1992), 251–267. 38
[28] D. Gross, Recovering low-rank matrices from few coefficients in any basis, IEEE Trans. Inform. Theory
57 (2011), 1548–1566. 29
[29] O. Guedon, R. Vershynin, Community detection in sparse networks via Grothendieck’s inequality, Probability Theory and Related Fields 165 (2016), 1025–1049. 19
[30] A. Javanmard, A. Montanari, F. Ricci-Tersenghi, Phase transitions in semidefinite relaxations, PNAS,
April 19, 2016, vol. 113, no.16, E2218–E2223. 19
[31] W. B. Johnson, J. Lindenstrauss, Extensions of Lipschitz mappings into a Hilbert space. In Beals,
Richard; Beck, Anatole; Bellow, Alexandra; et al. Conference in modern analysis and probability
(New Haven, Conn., 1982). Contemporary Mathematics. 26. Providence, RI: American Mathematical Society, 1984. pp. 189–206. 9
[32] B. Hajek, Y. Wu, J. Xu, Achieving exact cluster recovery threshold via semidefinite programming, IEEE
Transactions on Information Theory 62 (2016), 2788–2797. 19
[33] P. W. Holland, K. B. Laskey, S. Leinhardt, Stochastic blockmodels: first steps, Social Networks 5 (1983), 109–137. 19
[34] D. Kane, J. Nelson, Sparser Johnson-Lindenstrauss Transforms, Journal of the ACM 61 (2014): 1. 9
[35] B. Klartag, S. Mendelson, Empirical processes and random projections, J. Funct. Anal. 225 (2005),
229–245. 38
[36] V. Koltchinskii, K. Lounici, Concentration inequalities and moment bounds for sample covariance operators, Bernoulli 23 (2017), 110–133. 28, 38
[37] C. Le, E. Levina, R. Vershynin, Concentration and regularization of random graphs, Random Structures and Algorithms, to appear. 19
[38] M. Ledoux, The concentration of measure phenomenon. American Mathematical Society, Providence,
RI, 2001. 9
[39] M. Ledoux, M. Talagrand, Probability in Banach spaces. Isoperimetry and processes. Springer-Verlag,
Berlin, 1991. 9
[40] E. Levina, R. Vershynin, Partial estimation of covariance matrices, Probability Theory and Related
Fields 153 (2012), 405–419. 28
[41] C. Liaw, A. Mehrabian, Y. Plan, R. Vershynin, A simple tool for bounding the deviation of random matrices on geometric sets, Geometric Aspects of Functional Analysis, Lecture Notes in Mathematics,
Springer, Berlin, to appear. 38
[42] J. Matoušek, Lectures on discrete geometry. Graduate Texts in Mathematics, 212. Springer-Verlag,
New York, 2002. 9
[43] F. McSherry, Spectral partitioning of random graphs, Proc. 42nd FOCS (2001), 529–537. 19
[44] S. Mendelson, A few notes on statistical learning theory, in: Advanced Lectures on Machine Learning, S. Mendelson, A.J. Smola (Eds.) LNAI 2600, pp. 1–40, 2003. 38
[45] S. Mendelson, A. Pajor, N. Tomczak-Jaegermann, Reconstruction and subgaussian operators in asymptotic geometric analysis. Geom. Funct. Anal. 17 (2007), 1248–1282. 38
[46] E. Mossel, J. Neeman, A. Sly, Belief propagation, robust reconstruction and optimal recovery of block
models. Ann. Appl. Probab. 26 (2016), 2211–2256. 19
[47] M. E. Newman, Networks. An introduction. Oxford University Press, Oxford, 2010. 19
[48] R. I. Oliveira, Concentration of the adjacency matrix and of the Laplacian in random graphs with independent edges, unpublished (2010), arXiv:0911.0600. 18, 19
[49] R. I. Oliveira, Sums of random Hermitian matrices and an inequality by Rudelson, Electron. Commun.
Probab. 15 (2010), 203–212. 18
[50] Y. Plan, R. Vershynin, E. Yudovina, High-dimensional estimation with geometric constraints, Information and Inference 0 (2016), 1–40. 29
[51] S. Riemer, C. Schütt, On the expectation of the norm of random matrices with non-identically distributed
entries, Electron. J. Probab. 18 (2013), no. 29, 13 pp. 28
[52] J. Tropp, User-friendly tail bounds for sums of random matrices. Found. Comput. Math. 12 (2012),
389–434. 18
[53] J. Tropp, An introduction to matrix concentration inequalities. Found. Trends Mach. Learning 8 (2015),
10-230. 18
[54] R. van Handel, Structured random matrices. in: IMA Volume “Discrete Structures: Analysis and
Applications”, Springer, to appear.
[55] R. van Handel, On the spectral norm of Gaussian random matrices, Trans. Amer. Math. Soc., to appear.
29
[56] B. Recht, A simpler approach to matrix completion, J. Mach. Learn. Res. 12 (2011), 3413–3430. 29
[57] G. Schechtman, Two observations regarding embedding subsets of Euclidean spaces in normed spaces,
Adv. Math. 200 (2006), 125–135. 38
[58] R. Vershynin, Introduction to the non-asymptotic analysis of random matrices. Compressed sensing,
210–268, Cambridge University Press, Cambridge, 2012. 2, 9, 28, 29, 38
[59] R. Vershynin, Estimation in high dimensions: a geometric perspective. Sampling Theory, a Renaissance,
3–66, Birkhauser Basel, 2015. 2, 29, 38
[60] R. Vershynin, High-Dimensional Probability. An Introduction with Applications in Data Science. Cambridge University Press, to appear. 2, 9, 29, 38
[61] H. Zhou, A. Zhang, Minimax Rates of Community Detection in Stochastic Block Models, Annals of
Statistics, to appear. 19
Department of Mathematics, University of Michigan, 530 Church Street, Ann Arbor, MI 48109,
U.S.A.
E-mail address: [email protected]
NON-GORENSTEIN ISOLATED SINGULARITIES OF GRADED
COUNTABLE COHEN-MACAULAY TYPE
arXiv:1307.6206v1 [] 23 Jul 2013
BRANDEN STONE
Abstract. In this paper we show a partial answer to a question of C. Huneke
and G. Leuschke (2003): Let R be a standard graded Cohen-Macaulay ring of
graded countable Cohen-Macaulay representation type, and assume that R has
an isolated singularity. Is R then necessarily of graded finite Cohen-Macaulay
representation type? In particular, this question has an affirmative answer for
standard graded non-Gorenstein rings as well as for standard graded Gorenstein rings of minimal multiplicity. Along the way, we obtain a partial classification of graded Cohen-Macaulay rings of graded countable Cohen-Macaulay
type.
1. Motivation and Introduction
For motivational purposes, we first consider local Cohen-Macaulay rings. Throughout, a local Cohen-Macaulay ring is said to have finite Cohen-Macaulay type (respectively, countable Cohen-Macaulay type) if it has only finitely (respectively, countably)
many isomorphism classes of indecomposable maximal Cohen-Macaulay modules.
In [2], M. Auslander showed that local rings of finite Cohen-Macaulay type must
have an isolated singularity. It was later shown in [4] that this was true for standard
graded rings as well. Concerning countable Cohen-Macaulay type, it was shown in
the excellent case by C. Huneke and G. Leuschke that a local ring of countable
Cohen-Macaulay type does not necessarily have an isolated singularity, but the
singular locus has dimension at most one [12](the graded version of this is found
in [19]). Given this result, it is natural to try and find a local Cohen-Macaulay
ring of countable (and infinite) Cohen-Macaulay type with an isolated singularity.
Examining the known examples of countable type rings with an isolated singularity,
C. Huneke and G. Leuschke found they were actually finite type! This led to the
following question:
Question 1.1 ([12]). Let R be a complete local Cohen-Macaulay ring of countable
Cohen-Macaulay representation type, and assume that R has an isolated singularity.
Is R then necessarily of finite Cohen-Macaulay representation type?
In this paper, we are mainly concerned with the graded version of this question
(see Question 1.2). In Corollary 6.1, we are able to show a positive answer for
standard graded non-Gorenstein rings as well as for graded Gorenstein rings of
minimal multiplicity. The main strategy is to classify the rings of graded countable
Cohen-Macaulay type and compare this list with the classification of graded finite
Cohen-Macaulay type in [9].
The author was partially funded by the NSF grant, Kansas Partnership for Graduate Fellows
in K-12 Education (DGE-0742523).
The sections of this paper break the problem up into cases based on the dimension
of the ring, with Section 5 discussing the issues surrounding Gorenstein rings. The
results are summarized in Section 6.
1.1. Preliminaries and known results. We say
that a ring R is standard graded
if, as an abelian group, it has a decomposition R = ⊕_{i>0} Ri such that Ri Rj ⊆ Ri+j
for all i, j > 0, R = R0 [R1 ], and R0 is a field. Further, we will always assume that
a standard graded ring is Noetherian. Unless otherwise stated, we will denote
(R, m, k) by
the standard graded ring with m being the irrelevant maximal ideal,
∞
that is m = i=1 Ri , and k := R0 = R/m being an uncountable algebraically closed
field of characteristic zero. We let e(R) be the multiplicity of the maximal ideal m.
When R is a d-dimensional Cohen-Macaulay ring, if e(R) = dimk (m/m2 )−dim R+1
then R is said to have minimal multiplicity. In the case when k is infinite, the
following are equivalent:
(i) R has minimal multiplicity;
(ii) there exists a regular sequence x1 , . . . , xd such that m2 = (x1 , . . . , xd )m;
(iii) the h-vector of R is of the form (1, n).
Here we define the h-vector of R to be the sequence (dimk (R/(x1 , . . . , xd ))i )i>0
where x1 , . . . , xd is a homogeneous regular sequence of degree one.
A standard graded Cohen-Macaulay ring (R, m, k) has graded finite Cohen-Macaulay
type (respectively, graded countable Cohen-Macaulay type) if it has only finitely (respectively, countably) many indecomposable, maximal Cohen-Macaulay modules
up to a shift in degree.
The graded version of Question 1.1 is as follows.
Question 1.2. Let R be a standard graded Cohen-Macaulay ring of graded countable Cohen-Macaulay representation type, and assume that R has an isolated singularity. Is R then necessarily of graded finite Cohen-Macaulay representation type?
Remark 1.3. It is known that if the completion of a standard graded ring R with
respect to the maximal ideal has finite (respectively countable) type, then R must
have graded finite (respectively countable) type (see [4, Prop 8 and 9] or [20, Corollary 2.5]). Hence any affirmative answer to Question 1.1 can be passed to a positive
answer of Question 1.2.
With this in mind, Question 1.2 has a positive answer if the ring is a hypersurface. In particular, it was shown in [14, 6] that if R is a complete hypersurface
containing an algebraically closed field k (of characteristic different from 2), then
R is of finite Cohen-Macaulay type if and only if R is the local ring of a simple
hypersurface singularity in the sense of [1]. For example, if we let k = C, then
R is one of the complete ADE singularities over C. That is, R is isomorphic to
kJx, y, z2 , . . . , zr K/(f ), where f is one of the following polynomials:
(An ) : xn+1 + y 2 + z22 + · · · + zr2 , n > 1;
(Dn ) : xn−1 + xy 2 + z22 + · · · + zr2 , n > 4;
(E6 ) : x4 + y 3 + z22 + · · · + zr2 ;
(E7 ) : x3 y + y 3 + z22 + · · · + zr2 ;
(E8 ) : x5 + y 3 + z22 + · · · + zr2 .
NON-GORENSTEIN, GRADED COUNTABLE COHEN-MACAULAY TYPE
3
It was further shown in [6] that a complete hypersurface singularity over an algebraically closed uncountable field k has (infinite) countable Cohen-Macaulay type
if and only if it is isomorphic to one of the following:
(1.1)
(1.2)
(A∞ ) : kJx, y, z2 , . . . , zr K/(y 2 + z22 + · · · + zr2 );
(D∞ ) : kJx, y, z2 , . . . , zr K/(xy 2 + z22 + · · · + zr2 ).
As both (A∞ ) and (D∞ ) are non-isolated singularities, if a hypersurface has an
isolated singularity and is countable type then it must have finite type.
In 2011, R. Karr and R. Wiegand [13, Theorem 1.4] showed the one dimensional
case as well. This was under the assumption that the integral closure of the ring R
is finitely generated as an R-module. We recover these results in Section 3.
2. Zero Dimensional Rings
It is well known that a zero dimensional equicharacteristic local ring R is a
hypersurface if and only if R is of finite Cohen-Macaulay type [11, Satz 1.5]. We
show the graded countable analog to this statement in Proposition 2.1.
Proposition 2.1. Let (R, m, k) be a 0-dimensional standard graded Cohen-Macaulay
ring. Further assume that k is an uncountable field. Then the following are equivalent:
(1) R is of graded finite Cohen-Macaulay type;
(2) R is of graded countable Cohen-Macaulay type;
(3) R is a hypersurface ring.
Proof. The implication (1) implies (2) is straight forward. To show (2) implies
(3), assume that R is not a hypersurface. Thus there must be two linear forms
a, b ∈ m \ m2 that are basis elements of m/m2 . By [20, Lemma 4.1], there are
uncountably many distinct homogeneous ideals {Iα }α∈k in R. In this context, we
have that Iα = (a + αb)R. Consider the graded indecomposable maximal CohenMacaulay modules {R/Iα }α∈k . As each of these modules have different annihilators,
we know they are not isomorphic. A contradiction as we assumed that R was of
graded countable type.
To prove that (3) implies (1), we consider the m-adic completion of R and then
apply [11, Satz 1.5] to see that the completion is of finite Cohen-Macaulay type.
By Remark 1.3, we know that R also has graded finite Cohen-Macaulay type.
We thus have a complete classification of graded countable Cohen-Macaulay
type for zero dimensional standard graded rings. Further, Proposition 2.1 gives a
positive answer to Question 1.2 for zero dimensional standard graded rings with
uncountable residue field.
3. One Dimensional Rings
In the one-dimensional case, Question 1.2 has a positive answer as shown by R.
Karr and R. Wiegand [13, Theorem 1.4]. In this section, we examine the DrozdRoı̆ter conditions and give a classification of one dimensional reduced standard
graded rings of graded countable Cohen-Macaulay type. Thus retrieving the result
of R. Karr and R. Wiegand.
4
B. STONE
3.1. Finite Type and the Drozd-Roı̆ter conditions. As detailed by N. Cimen,
R. Wiegand, and S. Wiegand [7], if (R, m, k) is a one dimensional, Noetherian,
reduced, local Cohen-Macaulay ring such that the integral closure of R, say S,
is finitely generated as an R-module, then we know precisely when R has finite
Cohen-Macaulay type. This happens when the following conditions occur:
DR1 S is generated by 3 elements as an R-module;
DR2 the intersection of the maximal R-submodules of S/R is cyclic as an R-module.
These conditions are called the Drozd-Roı̆ter conditions. It was further shown in [7,
Proposition 1.12] that the Drozd-Roı̆ter conditions are equivalent to the following:
(dr1)  dim_k(S/mS) ≤ 3;
(dr2)  dim_k (R + mS)/(R + m^2 S) ≤ 1.
To get a different grasp of what it means to satisfy the Drozd-Roı̆ter conditions, we state another set of equivalent conditions. This result appears in the next proposition, where λ denotes length as an R-module and a bar denotes the integral closure of an ideal.
Proposition 3.1. Let (R, m, k) be a one-dimensional, Noetherian, reduced, local
Cohen-Macaulay ring with finite integral closure and uncountable residue field k.
Let x be a minimal reduction of the maximal ideal m. The Drozd-Roı̆ter conditions
are equivalent to the following:
(3.1)  e(R) ≤ 3;
(3.2)  λ(m^2/xm) ≤ 1.
Proof. To show that (3.1) holds, we will show that e(R) = dimk (S/mS) where S
is the integral closure of R. As x is a reduction of m, we know that xS is also a
reduction of mS. But this holds if and only if mS ⊆ \overline{xS}. As principal ideals are integrally closed in S, we have that
xS ⊆ mS ⊆ \overline{xS} = xS,
and hence xS = mS. By assumption, S is finitely generated as an R-module. Therefore we have that the map S → S defined by multiplication by x is an injection.
Hence, we have the following commutative diagram in Figure 1.
[Figure 1 shows a commutative diagram with exact rows 0 → R →(·x) R → R/xR → 0 and 0 → S →(·x) S → S/xS → 0, the vertical maps being induced by the inclusion R ⊆ S.]
Figure 1. A commutative diagram where K, C′, and C are the respective kernel and cokernels.
Consider the exact sequence in the right side of Figure 1,
(3.3)  0 → K → R/xR → S/xS → C → 0.
Notice by the Snake Lemma applied to Figure 1 that λ(K) = λ(C); here λ represents length as R-modules. As C and K have the same length, (3.3) shows that
R/xR and S/xS also have the same length. Since x is a minimal reduction of the
maximal ideal, we know that e(R) = λ(R/xR). Therefore,
e(R) = λ(R/xR) = λ(S/xS) = λ(S/mS) = dimk (S/mS).
In order to show (3.2), first notice that
(3.4)  (R + mS)/(R + m^2 S) ≅ mS/(m^2 S + (R ∩ mS)) ≅ mS/(m^2 S + m) ≅ xS/(x^2 S + m).
For simplicity, we define B as follows,
B := dim_k (R + mS)/(R + m^2 S) = λ( xS/(x^2 S + m) ).
We now consider the short exact sequence
0 → (m^2 S + m)/m^2 S → mS/m^2 S → mS/(m^2 S + m) → 0.
Rewriting the two terms on the left gives us
(3.5)  (m^2 S + m)/m^2 S ≅ m/(m^2 S ∩ m) ≅ m/m^2;
(3.6)  mS/m^2 S ≅ S/mS = S/xS.
Combining (3.5) and (3.6) with the above short exact sequence yields
(3.7)  λ(S/xS) = λ(m/m^2) + B.
On the other hand, consider the following short exact sequence
(3.8)  0 → (m^2 + xR)/xR → R/xR → R/(m^2 + xR) → 0,
along with the isomorphisms
(3.9)  (m^2 + xR)/xR ≅ m^2/(xR ∩ m^2) = m^2/xm.
Note that the equality in (3.9) can be justified as follows. Since xS = mS, we know that m^2 = m^2 S ∩ R = x^2 S ∩ R. If y ∈ xR ∩ m^2, then y = xr ∈ m^2 = x^2 S ∩ R. This forces r ∈ xS ∩ R = m. Hence y ∈ xm. Equality follows as xm ⊆ xR ∩ m^2.
Computing length in (3.8) gives us
λ(R/xR) = λ(m^2/xm) + λ(R/(m^2 + xR)).
We can repeat the above steps with the following short exact sequence and isomorphisms:
0 → (m^2 + xR)/m^2 → R/m^2 → R/(m^2 + xR) → 0;
(m^2 + xR)/m^2 ≅ xR/(m^2 ∩ xR) = xR/mxR ≅ R/m.
Once again, if we compute the length, we have that
(3.10)  λ(R/m^2) = 1 + λ(R/(m^2 + xR)).
Combining (3.7) and (3.10) with the fact that λ(R/xR) = λ(S/xS) and λ(m/m^2) = λ(R/m^2) − 1, we have
λ(m^2/xm) + λ(R/(m^2 + xR)) = λ(m/m^2) + B
                             = λ(R/m^2) − 1 + B
                             = 1 + λ(R/(m^2 + xR)) − 1 + B.
Simplifying, we see that λ(m^2/xm) = B. We now have
e(R) = dim_k(S/mS);
λ(m^2/xm) = dim_k((R + mS)/(R + m^2 S)).
Applying (dr1) and (dr2) yields the desired result.
Given Proposition 3.1, we can construct a couple of examples.
Example 3.2. Consider the ring R = k[[t^3, t^7]]. This is a one-dimensional domain with e(R) = 3. The element t^3 is a minimal reduction of the maximal ideal (t^3, t^7)R. If we compute the length, we see that λ(m^2/t^3 m) = 2. Hence, by Proposition 3.1, we have that R is not of finite type.
Example 3.3. Let R = k[[x, y]]/(x^3 y − xy^3). This ring is one-dimensional and reduced. If we compute the multiplicity, we find that e(R) = 4. Thus we immediately have from Proposition 3.1 that R is not of finite type. Nonetheless, computing the length, we find that λ(m^2/(x + 2y)m) = 1.
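A quick way to see the multiplicity computation in Example 3.3 (this check is ours, using the standard fact that the multiplicity of a reduced plane curve k[[x, y]]/(f) equals the order of f):

e(R) = ord(x^3 y − xy^3) = ord( xy(x − y)(x + y) ) = 4.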
3.2. Graded Countable Type. In [20, Theorem 5.3], one dimensional standard
graded rings of graded countable type were shown to either be of minimal multiplicity or a hypersurface. Turning to the case of minimal multiplicity, we find the
following.
Theorem 3.4. Let (R, m, k) be a standard graded one-dimensional Cohen-Macaulay ring with uncountable residue field k. If the h-vector of R is (1, n) with n ≥ 3, then R is not of graded countable Cohen-Macaulay type.
Proof. Let x be a minimal homogeneous reduction of the maximal ideal m, and let x, u, v, w, c_4, · · · , c_n ∈ m be elements of a minimal k-basis of m/m^2. By assumption n ≥ 3, so we are guaranteed at least four elements in the basis of m/m^2. Without loss of generality, we assume that w is the fourth basis element. Assume that
there is a graded isomorphism
Iα = (x, u + αv) ' (x, u + βv) = Iβ
where α, β are elements of k. As dim(R) = 1, these ideals are graded indecomposable maximal Cohen-Macaulay modules. Since this isomorphism is graded of
degree 0, we have that
x 7→ δ1 x + δ2 (u + βv)
u + αv 7→ δ3 x + δ4 (u + βv)
where det(δi ) is a unit and δi are elements of k. We have that
(3.11)
0 = δ3 x2 + δ4 x(u + βv) − δ1 x(u + αv) − δ2 (u + αv)(u + βv).
Notice that (u + αv)(u + βv) is an element of m2 . As R is of minimal multiplicity,
we have that xm = m2 . Hence we can view elements of m2 as elements of xm. In
particular we view u^2, uv, v^2 in the following way:
u^2 = x(a_{10}x + a_{11}u + a_{12}v + a_{13}w + a_{14}c_4 + · · · + a_{1n}c_n),
uv  = x(a_{20}x + a_{21}u + a_{22}v + a_{23}w + a_{24}c_4 + · · · + a_{2n}c_n),
v^2 = x(a_{30}x + a_{31}u + a_{32}v + a_{33}w + a_{34}c_4 + · · · + a_{3n}c_n),
that is, (u^2, uv, v^2)^t = x · A · b,
where the matrix A = (a_{ij}), 1 ≤ i ≤ 3, 0 ≤ j ≤ n. Since u^2, uv, v^2 are homogeneous elements, the grading forces the entries of A to be elements of k. Further, if we let
Φ = (1, α + β, αβ)   and   b = (x, u, v, w, c_4, · · · , c_n)^t,
then we can use matrix notation to write
(u + αv)(u + βv) = u^2 + (α + β)uv + αβv^2 = x · Φ · A · b.
We can now cancel the x in equation (3.11) and rewrite it as
(3.12)  0 = ( δ_3 − ΦA_0δ_2,  δ_4 − δ_1 − ΦA_1δ_2,  βδ_4 − αδ_1 − ΦA_2δ_2,  −ΦA_3δ_2,  −ΦA_4δ_2,  . . . ,  −ΦA_nδ_2 )^t · b
where Ai are the columns of the matrix A. All of the elements in the coefficient
matrix of (3.12) are elements of k and hence equal zero as x, u, v, w, c4 , . . . , cn form
a k-basis.
At this point we focus on δ2 . If δ2 6= 0, then the fact that ΦA3 δ2 = 0 implies
that
(3.13)
a13 + (α + β)a23 + αβa33 = 0
in the field k. As the aij are independent of our choice of α and β, Equation (3.13)
shows that every α, β such that Iα ' Iβ is a solution to
f (X, Y ) = a13 + (X + Y )a23 + XY a33 ∈ k[X, Y ].
This forces f (X, Y ) to be identically zero, a contradiction. Hence there are uncountably many Iα that are not isomorphic.
If we let δ_2 = 0, then Equation (3.12) becomes the relation
(3.14)  δ_3 x + (δ_4 − δ_1)u + (βδ_4 − αδ_1)v = 0.
As x, u, v are independent over k, we have that the coefficients are zero. In particular δ_4 − δ_1 = 0. Since δ_1δ_4 − δ_2δ_3 ≠ 0, we know that δ_1 = δ_4 ≠ 0. Thus the fact that βδ_4 − αδ_1 = 0 implies that α = β. Hence there are uncountably many non-isomorphic ideals I_α.
Given the above results, we are now ready to characterize one-dimensional standard graded rings of graded countable Cohen-Macaulay type.
Corollary 3.5. Let (R, m, k) be a one-dimensional standard graded Cohen-Macaulay
ring with uncountable residue field k. If R is of graded countable Cohen-Macaulay
type, then R is either of minimal multiplicity with h-vector (1, 2), or is isomorphic
to one of the following hypersurfaces:
(1) k[x];
(2) k[x, y]/(xy);
(3) k[x, y]/(xy(x + y));
(4) k[x, y]/(xy 2 );
(5) k[x, y]/(y 2 ).
Proof. A direct application of [20, Theorem 5.3] and Theorem 3.4 show that R is
either a hypersurface ring, or has minimal multiplicity with h-vector (1, 2).
Concerning the hypersurfaces, items (1)-(3) have graded finite Cohen-Macaulay
type as can be seen from [6] or [9]. The hypersurfaces (4) and (5) are not graded
finite Cohen-Macaulay type, but their completions are the one dimensional (A∞ )
and (D∞ ) hypersurface singularities shown in (1.1) and (1.2) on page 3. It was
shown by R. Buchweitz, G. Greuel, and F. Schreyer in [6] that these are the only
hypersurfaces that are countable but not finite Cohen-Macaulay type. Hence by
Remark 1.3, the rings (4) and (5) are of graded countable Cohen-Macaulay type.
Corollary 3.6. Let (R, m, k) be a one-dimensional standard graded Cohen-Macaulay
ring with uncountable residue field k. If R is of graded countable Cohen-Macaulay
type then e(R) 6 3.
Proof. As shown in [20, Corollary 4.5], the possible h-vectors are (1), (1, n), or
(1, n, 1) for some integer n. Combining this with Corollary 3.5, we know that the
possible h-vectors of R are the following:
(1), (1, 1), (1, 2), (1, 1, 1).
Hence we have that e(R) ≤ 3.
An obvious improvement to Corollary 3.5 would be to classify the rings whose
h-vector is (1, 2). Currently, there are two known examples of one-dimensional
standard graded Cohen-Macaulay rings of graded countable Cohen-Macaulay type
having h-vector (1, 2):
(3.15)  k[x, y, z]/(xy, yz, z^2);
(3.16)  k[x, y, z]/det_2 [ x y z ; y z x ].
The completion of (3.15) is the local endomorphism ring of (x, y)R where R = k[[x, y]]/(xy^2), the one-dimensional (D_∞) hypersurface. This endomorphism ring is known to be of countable Cohen-Macaulay type (see [15, Theorem 1.5] or [16, Example 14.23]) and thus by Remark 1.3, (3.15) is of graded countable Cohen-Macaulay type. It is worth noting that (3.15) is not reduced.
As for (3.16), this is the only one-dimensional ring of finite type and h-vector (1, 2) found in the list of graded finite Cohen-Macaulay type rings [9]. Given this
result, if we assume our ring to have an isolated singularity, then we have a positive
answer to Question 1.2.
Corollary 3.7. Let (R, m, k) be a one-dimensional, reduced, standard graded Cohen-Macaulay ring with uncountable residue field k. Further assume that R has a finite integral closure. If R is of graded countable Cohen-Macaulay type, then R is of graded finite type and isomorphic to one of the following:
(1) k[x];
(2) k[x, y]/(xy);
(3) k[x, y]/(xy(x + y));
(4) k[x, y, z]/det_2 [ x y z ; y z x ].
Proof. By Remark 1.3, we can pass to the completion. Since local rings of minimal multiplicity with h-vector (1, 2) have λ(m^2/xm) ≤ 1, we can apply Proposition 3.1 and Corollary 3.5 to obtain the desired result.
4. Non-Gorenstein Rings of Dimension at least Two
We are unable to classify standard graded rings of graded countable type with
dimension larger than one. However, if we assume an isolated singularity, we have
a positive answer to Question 1.2 in the non-Gorenstein case.
4.1. Non-Gorenstein Rings of Dimension Two. As shown in [9], two dimensional standard graded Cohen-Macaulay rings with an isolated singularity must be
a domain and have minimal multiplicity. In [8], it can be seen that standard graded
Cohen-Macaulay domains of minimal multiplicity have the following classification:
(i) quadratic hypersurfaces (not necessarily nondegenerate);
(ii) cones over the Veronese embedding P^2 ↪ P^5, i.e., rings isomorphic to
k[x_0, x_1, . . . , x_n] / det_2 [ x_0 x_1 x_2 ; x_1 x_3 x_4 ; x_2 x_4 x_5 ],  where n ≥ 5;
(iii) rational normal scrolls.
The ring defined in (i) is Gorenstein and is discussed in Section 5. The ring in (ii)
is of graded finite (hence countable) Cohen-Macaulay type when n = 5 as can be
seen in [9] or [21, Example 17.6.1]. For n > 6, the dimension of the singular locus is
larger than 1, hence they are not of graded countable type. However when n = 6,
it is unclear if the ring is graded countable type or not. Notice that this case does
not concern Question 1.2 as the ring does not have an isolated singularity, but does
play a role in a general classification. Further, the dimension of each ring in (ii) is
at least three. As such, it is left to consider the rational normal scrolls defined as
follows.
Definition 4.1. Let 0 ≤ a_0 ≤ a_1 ≤ · · · ≤ a_k be given integers and let {x_{ij} | 0 ≤ j ≤ a_i, 0 ≤ i ≤ k} be a set of variables over a field k. Then, take the ideal I of the polynomial ring S = k[x_{ij} | 0 ≤ j ≤ a_i, 0 ≤ i ≤ k] generated by all the 2 × 2-minors of the matrix:
[ x_{00} x_{01} · · · x_{0,a_0−1} | · · · | x_{k0} x_{k1} · · · x_{k,a_k−1} ;
  x_{01} x_{02} · · · x_{0,a_0}   | · · · | x_{k1} x_{k2} · · · x_{k,a_k}  ].
Define the graded ring R to be the quotient S/I with deg(xij ) = 1 for all i, j, and
call R the scroll of type (a0 , a1 , . . . , ak ).
Notice that the polynomial ring k[x_0, x_1, . . . , x_N] is a rational normal scroll of type (0, · · · , 0, 1), where N = Σ_{i=0}^{k} a_i + k + 1. Further, a scroll is of type (0, · · · , 0, 1) if and only if it is regular (for basic properties of scrolls, see [8, 17]). In general, it is known that scrolls are integrally closed Cohen-Macaulay domains of dimension k + 2.
Proposition 4.2. Let (R, m, k) be a standard graded Cohen-Macaulay ring that is
not Gorenstein and dim(R) = 2. Further assume that R has an isolated singularity
and that k is an uncountable field. If R is of graded countable Cohen-Macaulay
type, then R is of graded finite type and is isomorphic to
k[x_0, x_1, . . . , x_n] / det_2 [ x_0 · · · x_{n−1} ; x_1 · · · x_n ],
where n ≥ 2.
Proof. It was shown in [9] that two dimensional, non-Gorenstein rings with an
isolated singularity have minimal multiplicity.
By the above discussion, we know that R must be isomorphic to a two dimensional scroll. The only non-Gorenstein scrolls of dimension two are of type (m)
where m > 2. These are the rings listed in the statement of Proposition 4.2. It
is known that these rings are of graded finite Cohen-Macaulay type (see [9] or [3,
Theorem 2.3]).
Proposition 4.2 gives a partial (positive) answer to Question 1.2. Ideally though,
we would like to remove the isolated singularity condition from the hypothesis of
Proposition 4.2. Doing so would show that graded countable Cohen-Macaulay type
implies graded finite Cohen-Macaulay type for non-Gorenstein two dimensional
rings.
4.2. Non-Gorenstein Rings of Dimension at least Three. Let (R, m, k) be a
standard graded Cohen-Macaulay ring of graded countable Cohen-Macaulay type,
that is not Gorenstein and dim R > 3. By [20, Proposition 4.6], we know that R
must be a domain and have minimal multiplicity. Thus we only have to consider
the three classes of rings listed in Section 4.1 in order to obtain the following.
Proposition 4.3. Let (R, m, k) be a standard graded Cohen-Macaulay ring with
uncountable residue field k and an isolated singularity. Further assume that R is
not Gorenstein and dim R > 3. If R is of graded countable Cohen-Macaulay type,
then R is of graded finite type and is isomorphic to one of the following rings:
(1) k[x_1, . . . , x_5] / det_2 [ x_1 x_2 x_4 ; x_2 x_3 x_5 ];
(2) k[x_1, . . . , x_6] / det_2 (sym 3 × 3).
Proof. Consider the classes of rings listed in Section 4.1. We can ignore the rings
in (i) as they are Gorenstein. The rings in (ii) only have an isolated singularity
when n = 5. This ring is known to be of graded finite Cohen-Macaulay type [9].
As such, we list it above.
Concerning the scrolls in (iii), M. Auslander and I. Reiten classify the scrolls of
finite type in [5]. In particular, they show that if k > 1 and R is not of type (1, 1)
or (1, 2), then R has |k| many indecomposable graded Cohen-Macaulay modules,
up to shifts. As it is, the graded scrolls of type (1, 1) and (1, 2) are the only graded
scrolls of dimension at least 3 that have graded countable Cohen-Macaulay type.
Notice that a scroll of type (1, 1) is Gorenstein and hence we are left with the scroll
of type (1, 2).
Proposition 4.3 gives another case where Question 1.2 has a positive answer.
Concerning the general classification, in order to remove the isolated singularity
condition from the hypothesis, one only needs to determine the graded Cohen-Macaulay type of
(4.1)  k[x_0, x_1, . . . , x_6] / det_2 [ x_0 x_1 x_2 ; x_1 x_3 x_4 ; x_2 x_4 x_5 ].
Knowing this information would give a complete classification for non-Gorenstein
rings of dimension at least three.
5. Gorenstein Rings
Not much is known about Gorenstein rings of graded countable Cohen-Macaulay
type. However, it is well known that standard graded Gorenstein rings of graded
finite Cohen-Macaulay type are hypersurfaces [11, Satz 1.2]. This fact is heavily
exploited in the classification of standard graded Cohen-Macaulay rings of graded
finite Cohen-Macaulay type [9]. The countable analog of this fact is still unknown.
Conjecture 5.1. A Gorenstein ring of countable Cohen-Macaulay type is a hypersurface.
Using the concept of super-stretched, Conjecture 5.1 was shown to be true for
standard graded rings of dimension at most one [20], but the conjecture remains
open for higher dimensions. By the results of [14, 6] on hypersurfaces, a positive result to Conjecture 5.1 would completely classify Gorenstein rings of graded
countable Cohen-Macaulay type.
If we restrict to rings of minimal multiplicity, then we are only dealing with
hypersurfaces, as h-vectors of Gorenstein rings are symmetric. As such, by [14, 6]
we have a complete characterization. In particular, consider a standard graded
Gorenstein ring (R, m, k) of minimal multiplicity, i.e. the h-vector is (1, 1). Hence
if R is of graded countable Cohen-Macaulay type, then R is isomorphic to one of
the following hypersurfaces:
(A_1) : k[x_1, . . . , x_n]/(x_1^2 + · · · + x_n^2), n ≥ 1;
(A_∞) : k[x_1, . . . , x_n]/(x_2^2 + · · · + x_n^2), n ≥ 1.
Remark 5.2. If we further assume that a standard graded Gorenstein ring of minimal multiplicity has an isolated singularity, then graded countable Cohen-Macaulay
type implies graded finite Cohen-Macaulay type. To see this, notice that only (A1 )
has a zero dimensional singular locus.
It is worth noting that standard graded rings of dimension at least 2 and with
graded finite Cohen-Macaulay type have minimal multiplicity. This can be seen by
examining the list of graded rings of graded finite Cohen-Macaulay type in [9]. With
these results, we ask the following question.
Question 5.3. If (R, m, k) is a standard graded Gorenstein ring of graded countable
Cohen-Macaulay type and dim(R) > 2, is R necessarily of minimal multiplicity?
A positive answer to Question 5.3 would ultimately force the ring to be a hypersurface and affirm Conjecture 5.1. Further it would show that Question 1.2 has an
affirmative answer for Gorenstein rings of graded countable Cohen-Macaulay type.
Remark 5.4. Since Conjecture 5.1 is still open, the natural place to look is at complete intersections. According to [20], we know that standard graded complete intersections of graded countable Cohen-Macaulay type are either a hypersurface or defined by two quadrics. With this fact, one could use tools from representation
theory [18] and the analysis of the h-vector, to show that the ring is indeed a
hypersurface no matter what the dimension. As these techniques are outside the
scope of the paper, we leave it here as a remark.
6. Summary of Results
In Corollary 6.1 we summarize the previous results in relation to Question 1.2. As
noted earlier, proving Conjecture 5.1 would give a complete answer to the question.
In the rest of the section, we consider what can be shown when the ring in question
does not necessarily have an isolated singularity. In particular, we state a partial
classification of standard graded rings of graded countable Cohen-Macaulay type.
Corollary 6.1. Let (R, m, k) be a standard graded ring of graded countable Cohen-Macaulay type. Further assume that R has an isolated singularity and that k is an
uncountable algebraically closed field of characteristic zero. Then R is of graded
finite type if one of the following hold:
(1) dim(R) 6 1;
(2) R is not Gorenstein;
(3) R is of minimal multiplicity.
Proof. If the dimension of R is at most 1, then we can directly apply Proposition
2.1 and Corollary 3.7. For non-Gorenstein rings of dimension at least 2, we can
apply Propositions 4.2 and 4.3. If R is Gorenstein of minimal multiplicity and
dim(R) > 2, then we have graded finite Cohen-Macaulay type from Remark 5.2
and the discussion preceding it.
6.1. Partial classification in the general case. Throughout this paper, we have
obtained a partial classification of standard graded rings of graded countable CohenMacaulay type. For uniformity’s sake, we state the partial classification in the same
format as the classification of graded finite Cohen-Macaulay type found in [9]. As
such, notice that rings listed in the arbitrary dimension section are omitted from
the other cases. In particular, the A∞ hypersurface is only listed in the arbitrary
dimension section. For more details on rings of countable Cohen-Macaulay type,
see [16].
Arbitrary Dimension: The following rings are shown to be graded countable type
in [9, 14, 6]. This list is not known to be complete.
k[x_1, . . . , x_n], n ≥ 1;
k[x_1, . . . , x_n]/(x_1^2 + · · · + x_n^2), n ≥ 1;
k[x_1, . . . , x_n]/(x_2^2 + · · · + x_n^2), n ≥ 1.
Dimension Zero: As seen in Proposition 2.1, graded countable type is the same
as graded finite type. By [9], we have the following complete list.
k[x]/(x^m), m ≥ 1.
Dimension One: We do not have a complete list of dimension one graded countable type rings. However, by Corollary 3.7 and the discussion before it, we know
these rings must have h-vector (1, 2) or be isomorphic to
k[x, y]/(xy);
k[x, y]/(xy(x + y));
k[x, y]/(xy^2);
k[x, y, z]/det_2 [ x y z ; y z x ];
k[x, y, z]/(xy, yz, z^2).
Dimension Two: According to Proposition 4.2, the non-Gorenstein graded countable type rings with an isolated singularity are known. Besides the ring given in
the Arbitrary Dimension section, we are not aware of any other graded countable
type ring with a non-isolated singularity. Thus we state what was already recorded
in [9].
k[x_1, . . . , x_{n+1}] / det_2 [ x_1 · · · x_n ; x_2 · · · x_{n+1} ],  n ≥ 2.
Dimension Three: If we restrict the proof of Proposition 4.3 to three dimensional
rings, we can omit the isolated singularity condition. As such, we know the rings
below are the only non-Gorenstein graded countable type rings of dimension three.
As the Gorenstein case is open, it is not known if this is a complete list or not.
k[x_1, . . . , x_5] / det_2 [ x_1 x_2 x_4 ; x_2 x_3 x_5 ];
k[x_1, . . . , x_6] / det_2 (sym 3 × 3).
Remark 6.2. According to the remarks after Proposition 4.3, for non-Gorenstein
rings of dimension at least four, the only possible ring is (4.1).
7. Acknowledgements
The author would like to thank the anonymous referee for valuable feedback and
suggestions regarding the content of this paper. For instance, the referee pointed
out the existence of the ring (3.15) as well as some discrepancies in the initial
version of Section 4. Further, the author is especially thankful to Craig Huneke
for several useful conversations. Additionally, results in this note were inspired by
many Macaulay2 [10] computations.
References
[1] V. I. Arnol′d. Critical points of smooth functions. In Proceedings of the International Congress of Mathematicians (Vancouver, B. C., 1974), Vol. 1, pages 19–39. Canad. Math. Congress, Montreal, Que., 1975.
[2] Maurice Auslander. Isolated singularities and existence of almost split sequences. In Representation theory, II (Ottawa, Ont., 1984), volume 1178 of Lecture Notes in Math., pages
194–242. Springer, Berlin, 1986.
[3] Maurice Auslander and Idun Reiten. Almost split sequences for Z-graded rings. In Singularities, representation of algebras, and vector bundles (Lambrecht, 1985), volume 1273 of
Lecture Notes in Math., pages 232–243. Springer, Berlin, 1987.
[4] Maurice Auslander and Idun Reiten. Cohen-Macaulay modules for graded Cohen-Macaulay
rings and their completions. In Commutative algebra (Berkeley, CA, 1987), volume 15 of
Math. Sci. Res. Inst. Publ., pages 21–31. Springer, New York, 1989.
[5] Maurice Auslander and Idun Reiten. The Cohen-Macaulay type of Cohen-Macaulay rings.
Adv. in Math., 73(1):1–23, 1989.
[6] R.-O. Buchweitz, G.-M. Greuel, and F.-O. Schreyer. Cohen-Macaulay modules on hypersurface singularities. II. Invent. Math., 88(1):165–182, 1987.
[7] Nuri Cimen, Roger Wiegand, and Sylvia Wiegand. One-dimensional rings of finite representation type. In Abelian groups and modules (Padova, 1994), volume 343 of Math. Appl., pages
95–121. Kluwer Acad. Publ., Dordrecht, 1995.
[8] David Eisenbud and Joe Harris. On varieties of minimal degree (a centennial account). In
Algebraic geometry, Bowdoin, 1985 (Brunswick, Maine, 1985), volume 46 of Proc. Sympos.
Pure Math., pages 3–13. Amer. Math. Soc., Providence, RI, 1987.
[9] David Eisenbud and Jürgen Herzog. The classification of homogeneous Cohen-Macaulay rings
of finite representation type. Math. Ann., 280(2):347–352, 1988.
[10] Daniel R. Grayson and Michael E. Stillman. Macaulay2, a software system for research in
algebraic geometry. Available at http://www.math.uiuc.edu/Macaulay2/.
[11] Jürgen Herzog. Ringe mit nur endlich vielen Isomorphieklassen von maximalen, unzerlegbaren
Cohen-Macaulay-Moduln. Math. Ann., 233(1):21–34, 1978.
[12] Craig Huneke and Graham J. Leuschke. Local rings of countable Cohen-Macaulay type. Proc.
Amer. Math. Soc., 131(10):3003–3007 (electronic), 2003.
[13] Ryan Karr and Roger Wiegand. Direct-sum behavior of modules over one-dimensional
rings. In Commutative algebra—Noetherian and non-Noetherian perspectives, pages 251–275.
Springer, New York, 2011.
[14] Horst Knörrer. Cohen-Macaulay modules on hypersurface singularities. I. Invent. Math.,
88(1):153–164, 1987.
[15] Graham J. Leuschke and Roger Wiegand. Local rings of bounded Cohen-Macaulay type.
Algebr. Represent. Theory, 8(2):225–238, 2005.
[16] Graham J. Leuschke and Roger Wiegand. Cohen-Macaulay representations, volume 181 of
Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI,
2012.
[17] Rosa M. Miró-Roig. The representation type of rational normal scrolls. Rend. Circ. Mat.
Palermo (2), 62(1):153–164, 2013.
[18] Greg Stevenson. Subcategories of singularity categories via tensor actions. arXiv:1105.4698, May 2012.
[19] Branden Stone. Super-stretched and graded countable Cohen-Macaulay type. 2012. Thesis
(Ph.D.)–University of Kansas.
[20] Branden Stone. Super-stretched and graded countable Cohen-Macaulay type. arXiv:1301.3593, Jan 2013.
[21] Yuji Yoshino. Cohen-Macaulay modules over Cohen-Macaulay rings, volume 146 of London
Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 1990.
Branden Stone, Mathematics Program, Bard College/Bard Prison Initiative, P.O.
Box 5000, Annandale-on-Hudson, NY 12504
E-mail address: [email protected]
Optimized M2L Kernels for the Chebyshev Interpolation based Fast
Multipole Method
arXiv:1210.7292v2 [cs.NA] 20 Nov 2012
Matthias Messner, Berenger Bramas, Olivier Coulaud, Eric Darve
November 21, 2012
Abstract
A fast multipole method (FMM) for asymptotically smooth kernel functions (1/r, 1/r4 , Gauss and
Stokes kernels, radial basis functions, etc.) based on a Chebyshev interpolation scheme has been introduced in [5]. The method has been extended to oscillatory kernels (e.g., Helmholtz kernel) in [12].
Beside its generality this FMM turns out to be favorable due to its easy implementation and its high
performance based on intensive use of highly optimized BLAS libraries. However, one of its bottlenecks
is the precomputation of the multipole-to-local (M2L) operator, and its higher number of floating point
operations (flops) compared to other FMM formulations. Here, we present several optimizations for that
operator, which is known to be the costliest FMM operator. The most efficient ones do not only reduce
the precomputation time by a factor up to 340 but they also speed up the matrix-vector product. We
conclude with comparisons and numerical validations of all presented optimizations.
1
Introduction
The fast multipole method (FMM) is a method first designed in [7] to reduce the cost of matrix-vector
products from O(N 2 ) to O(N ) or O(N log N ) depending on the underlying kernel function. Most FMM
variants have been developed and optimized for specific kernel functions [13, 8, 3, 2]. However, some have
a formulation that is independent of the kernel function [14, 10, 6, 5, 12]. The paper at hand addresses the
optimization of one of these formulations, the so called black-box FMM (bbFMM), presented in [5]. It is
based on the approximation of the kernel function via a Chebyshev interpolation and is a black-box scheme
for kernel functions that are asymptotically smooth, e.g., 1/(r2 + c2 )n/2 with r = |x − y|, c a given parameter
and n ∈ N. The bbFMM has been extended to the directional FMM (dFMM) for oscillatory kernels in [12].
It is suitable for any kernel function of the type g(r)eıkr where g(r) is an asymptotically smooth function
(ı2 = −1 is the imaginary unit and k the wavenumber).
The main idea of the FMM is to properly separate near-field (|x − y| → 0) and far-field (|x − y| → ∞).
The near-field is evaluated directly and the far-field can be approximated and thus computed efficiently. In
this paper, we will denote M2L the operator that transforms a multipole expansion into a local expansion.
The M2L operator is the costliest step in the method, since it needs to be applied to all clusters in the
interaction list, that is 189 times for each cluster for a typical configuration in bbFMM. In this paper we
focus on various optimizations of this operator for both the bbFMM and the dFMM.
First, we address the optimization proposed in [5, 12]. In that paper, the M2L operator between a cluster
and its interaction list is further compressed using a singular value decomposition (SVD). The key idea is that
the SVD provides the smallest possible rank given an error ε (therefore leading to the smallest computational
cost). In [5], a single SVD is used to compress all the M2L operators at a given level using a single
singular vector basis. However, the singular values of individual M2L operators decay at different rates; it
is often the case for example that cluster pairs that are separated by a small distance have higher rank than
pairs that are separated by a greater distance. If we denote w the diameter (or width) of a cluster and d
the distance between two clusters, then the smallest distance corresponds to d_min = 2w while the largest is d_max = 3√3 w. The ratio d_max/d_min = 3√3/2 is in fact relatively large. This can be taken advantage of to
further compress the M2L operator on a cluster-pair basis leading to an even smaller computational cost
than the original algorithm [5]. This point is investigated in detail in this paper.
Another bottleneck in the FMM of [5] is the precomputation time of the M2L operators. We therefore
introduce a new set of optimizations that exploit symmetries in the arrangement of the M2L operators. For
example, for bbFMM, this allows us to express all M2L operators (316 = 7^3 − 3^3) as permutations of a subset
with only 16 unique M2L operators. These permutations correspond to reflections across various planes (for
example x = 0, y = 0, z = 0, etc). Table 5 for example reports reductions in precomputation time by a
factor of about 50–200x.
Besides drastically reducing precomputation time and memory requirement this approach paves also the
road for various algorithmic improvements. Modern processors use memory cache to enhance performance
and, as a result, schemes that block data (thereby reducing the amount of data traffic between main memory
and the floating point units) can result in much higher performance. In our case, the use of symmetries
allows calling highly optimized matrix-matrix product implementations (instead of matrix-vector) that have
better cache reuse. Overall, this may result in acceleration by a factor 4–6x for smooth kernels (Tables 6–8).
We also present results using oscillatory kernels of the type mentioned above (g(r)eıkr ). In that case, the
acceleration is more modest and ranges from about 1.2 to 2.7x (Table 9–11).
In this paper we therefore focus both on performance and precomputation time. Both are important
factors for the FMM: the precomputation is crucial if we are interested in only one matrix-vector product,
while in other cases, such as the solution of a linear system, the fast application of matrix-vector products
is central.
The paper is organized as follows. In Sec. 2, we briefly recall the bbFMM and the dFMM and introduce
notations that are needed for explanations later in this paper. In Sec. 3, we address the separation of nearand far-field, introduce the notion of transfer vector to uniquely identify M2L operators, and describe how
the kernel (smooth or oscillatory) affects the interaction list. We start Sec. 4 with a brief recall of the known
optimization of the M2L operator (see [5, 12]) and suggest further improvements. Then, we present a new
set of optimizations and explain them in details. These include using a low-rank approximation of M2L for
individual interactions (individual cluster pairs), along with the definition of symmetries and permutations
and how they are used to reduce the computational cost. Finally, in Sec. 5, we present numerical results.
We compare all variants for bbFMM and dFMM with three datasets corresponding to a sphere, an oblate
spheroid, and a prolate spheroid. The measured running time of the FMM, along with the precomputation
time, and its accuracy as a function of various parameters are reported. The efficiency of the algorithms
as a function of the target accuracy is considered. We also performed tests using two different kinds of
linear algebra libraries for the matrix-matrix products and other operations (an implementation of BLAS
and LAPACK vs the Intel MKL library).
2
Pairwise particle interactions
The problem statement reads as follows. Assume the cells X and Y contain source particles {y_j}_{j=1}^{N} and target particles {x_i}_{i=1}^{M}, respectively. Compute the interactions
(1)  f_i = Σ_{j=1}^{N} K(x_i, y_j) w_j   for i = 1, . . . , M.
The kernel function K(x, y) describes the influence of the source particles onto the target particles. The cost
of directly evaluating the summation in Eqn. (1) grows like O(M N), which becomes prohibitive as M, N → ∞ and is why we need a fast summation scheme.
2.1
Fast summation schemes based on Chebyshev interpolation
For a detailed derivation and error analysis of the FMM based on Chebyshev interpolation we refer the reader
to [5, 12]. We adapt most of their notations and repeat only concepts which are necessary to understand
explanations hereafter.
Let the function f : R^3 → C be approximated by a Chebyshev interpolation scheme as
(2)  f(x) ∼ Σ_{|α|≤ℓ} S_ℓ(x, x̄_α) f(x̄_α)
with the 3-dimensional multi-index α = (α1 , α2 , α3 ) and |α| = max(α1 , α2 , α3 ) with αi ∈ (1, . . . , ℓ). The
interpolation points x̄α = (x̄α1 , x̄α2 , x̄α3 ) are given by the tensor-product of the Chebyshev roots x̄αi of the
Chebyshev polynomial of first kind T_ℓ(x) = cos(ℓ arccos x) with x ∈ [−1, 1]. The interpolation operator reads as
(3)
Sℓ (x, x̄α ) = Sℓ (x1 , x̄α1 ) Sℓ (x2 , x̄α2 ) Sℓ (x3 , x̄α3 ).
For the interpolation on arbitrary intervals, we need the affine mapping Φ : [−1, 1] → [a, b]. We omit it
hereafter for the sake of readability.
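For concreteness, the sketch below evaluates the one-dimensional building blocks of Eqns. (2)-(3): the Chebyshev roots x̄_m and the interpolation polynomial S_ℓ. The closed form S_ℓ(x, y) = 1/ℓ + (2/ℓ) Σ_{k=1}^{ℓ−1} T_k(x)T_k(y) is the one commonly used with this scheme (cf. [5]); treat it, and the NumPy implementation, as an illustrative assumption of ours rather than part of this paper.

```python
import numpy as np

def cheb_roots(ell):
    # Roots of the Chebyshev polynomial T_ell on [-1, 1]
    m = np.arange(1, ell + 1)
    return np.cos((2 * m - 1) * np.pi / (2 * ell))

def S(x, xbar, ell):
    # 1D interpolation polynomial S_ell(x_i, xbar_m), returned as a matrix over all pairs
    k = np.arange(1, ell)
    Tk_x = np.cos(np.outer(np.arccos(np.atleast_1d(x)), k))      # T_k(x)
    Tk_xb = np.cos(np.outer(np.arccos(np.atleast_1d(xbar)), k))  # T_k(xbar)
    return 1.0 / ell + (2.0 / ell) * Tk_x @ Tk_xb.T

# 1D version of Eqn. (2): interpolate f on [-1, 1] at ell Chebyshev points
ell, f = 10, (lambda x: 1.0 / (1.0 + 25.0 * x**2))
xb = cheb_roots(ell)
x = np.linspace(-1.0, 1.0, 5)
print(np.abs(S(x, xb, ell) @ f(xb) - f(x)).max())  # interpolation error at sample points
```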
2.1.1
Black–box FMM (bbFMM)
If two cells X and Y are well separated, we know from [5] that asymptotically smooth kernel functions can be interpolated as
(4)  K(x, y) ∼ Σ_{|α|≤ℓ} S_ℓ(x, x̄_α) Σ_{|β|≤ℓ} K(x̄_α, ȳ_β) S_ℓ(y, ȳ_β).
We insert the above approximation into Eqn. (1) and obtain
(5)  f_i ∼ Σ_{|α|≤ℓ} S_ℓ(x_i, x̄_α) Σ_{|β|≤ℓ} K(x̄_α, ȳ_β) Σ_{j=1}^{N} S_ℓ(y_j, ȳ_β) w_j ,
which we split up into a three-stage fast summation scheme.
1. Particle to moment (P2M) or moment to moment (M2M) operator: equivalent source values are anterpolated at the interpolation points ȳ_β ∈ Y by
(6)  W_β = Σ_{j=1}^{N} S(y_j, ȳ_β) w_j   for |β| ≤ ℓ.
2. Moment to local operator (M2L): target values are evaluated at the interpolation points x̄_α ∈ X by
(7)  F_α = Σ_{|β|≤ℓ} K(x̄_α, ȳ_β) W_β   for |α| ≤ ℓ.
3. Local to local (L2L) or local to particle (L2P) operator: target values are interpolated at final points x_i ∈ X by
(8)  f_i ∼ Σ_{|α|≤ℓ} S(x_i, x̄_α) F_α   for i = 1, . . . , M.
Recall, the cells X and Y are well separated and thus all contributions of fi can be computed via the above
presented fast summation scheme (no direct summation is necessary).
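As an illustration of how the three stages fit together, here is a one-dimensional analogue of Eqns. (6)-(8) for a single well-separated cell pair, reusing cheb_roots and S from the sketch above. It only exposes the structure P2M → M2L → L2P; the actual method works on 3D tensor grids inside a multilevel octree, so all names and parameters below are our own toy choices.

```python
import numpy as np

def fmm_pair(kernel, xs, ys, w, ell, cx, cy, width):
    """P2M (6), M2L (7) and L2P (8) for one well-separated pair of 1D cells."""
    ref = cheb_roots(ell)                      # interpolation nodes on [-1, 1]
    xb = cx + 0.5 * width * ref                # nodes mapped into the target cell X
    yb = cy + 0.5 * width * ref                # nodes mapped into the source cell Y
    W = S((ys - cy) / (0.5 * width), ref, ell).T @ w     # P2M: W_beta
    F = kernel(xb[:, None], yb[None, :]) @ W             # M2L: F_alpha
    return S((xs - cx) / (0.5 * width), ref, ell) @ F    # L2P: f_i

kern = lambda x, y: 1.0 / np.abs(x - y)        # asymptotically smooth 1/r kernel
ys = np.random.uniform(-0.5, 0.5, 200)         # sources in Y (center 0, width 1)
xs = np.random.uniform(1.5, 2.5, 200)          # targets in X (center 2, width 1)
w = np.random.rand(200)
f_fast = fmm_pair(kern, xs, ys, w, ell=8, cx=2.0, cy=0.0, width=1.0)
f_ref = kern(xs[:, None], ys[None, :]) @ w     # direct summation, Eqn. (1)
print(np.abs(f_fast - f_ref).max() / np.abs(f_ref).max())
```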
2.1.2
Directional FMM (dFMM)
Whenever we deal with oscillatory kernel functions, such as the Helmholtz kernel, the wavenumber k comes
into play. Depending on the diameter of the cells X and Y and the wavenumber they are either in the
low-frequency or in the high-frequency regime. In the low-frequency regime the fast summation schemes of
the dFMM and the bbFMM are the same. In the high-frequency regime the fast summation scheme becomes
directional. From [12] we know that any oscillatory kernel function K(x, y) = G(x, y)eık|x−y| , where G(x, y)
is an asymptotically smooth function, can be rewritten as
(9)  K(x, y) = K^u(x, y) e^{ıku·(x−y)}   with   K^u(x, y) = G(x, y) e^{ık(|x−y|−u·(x−y))}.
We assume that the cells X and Y of width w are centered at c_x and c_y and c_y lies in a cone of direction u being centered at c_x (think of the domain around X being virtually subdivided in cones given by directional unit vectors {u_c}_{c=1}^{C}, where C is determined by their aperture). If the cell pair (X, Y) satisfies the separation
criterion O(kw^2) and the cone-aperture criterion O(1/kw), the error of the Chebyshev interpolation of the kernel function
(10)  K^u(x, y) ∼ Σ_{|α|≤ℓ} S_ℓ(x, x̄_α) Σ_{|β|≤ℓ} K^u(x̄_α, ȳ_β) S_ℓ(y, ȳ_β)
decays exponentially in the interpolation order ℓ (independent of the wavenumber k; see [11, 12]). We insert the above interpolated kernel function in Eqn. (1) and obtain
(11)  f_i ∼ e^{ıku·x_i} Σ_{|α|≤ℓ} S_ℓ(x_i, x̄_α) e^{−ıku·x̄_α} Σ_{|β|≤ℓ} K(x̄_α, ȳ_β) e^{ıku·ȳ_β} Σ_{j=1}^{N} S_ℓ(y_j, ȳ_β) e^{−ıku·y_j} w_j
for all i = 1, . . . , M . Similarly as with the bbFMM, a three-stage fast summation scheme for oscillatory
kernels in the high-frequency regime can be constructed.
1. Particle to moment (P2M) or moment to moment (M2M) operator: equivalent source values are anterpolated at the interpolation points ȳ_β ∈ Y by
(12)  W_β^u = e^{ıku·ȳ_β} Σ_{j=1}^{N} S(y_j, ȳ_β) e^{−ıku·y_j} w_j   for |β| ≤ ℓ.
2. Moment to local operator (M2L): target values are evaluated at the interpolation points x̄_α ∈ X by
(13)  F_α^u = Σ_{|β|≤ℓ} K(x̄_α, ȳ_β) W_β^u   for |α| ≤ ℓ.
3. Local to local (L2L) or local to particle (L2P) operator: target values are interpolated at final points x_i ∈ X by
(14)  f_i ∼ e^{ıku·x_i} Σ_{|α|≤ℓ} S(x_i, x̄_α) e^{−ıku·x̄_α} F_α^u   for i = 1, . . . , M.
Even though the bbFMM and the dFMM are here presented as single-level schemes, they are usually
implemented as multilevel schemes. Strictly speaking, the steps one and three of both schemes are the P2M
and L2P operators. Let us briefly recall the directional M2M and L2L operators of dFMM: based on the criterion O(1/kw), the aperture of the cones at the upper level is about half the aperture at the lower level. Due to a nested cone construction along octree levels, we preserve the accuracy of the Chebyshev interpolation within the multilevel scheme. For a detailed description of all operators we refer to [5, 12]. Note the similarity of the M2L operators (step two of both schemes) of bbFMM and dFMM. In fact, the only difference in their implementation is that in the bbFMM case we have one loop over all cell pairs, whereas in the dFMM case we have two loops: the outer loop over all existing cones of direction {u_c}_{c=1}^{C} and the inner loop over all cell pairs lying in the current cone. In this paper, we focus on the M2L operators
and their efficient numerical treatment.
3
M2L operators
The first step of any FMM consists in a proper separation of near- and far-field. After that, the near-field
is evaluated directly and the far-field efficiently using a fast summation scheme. In this section, we focus
on the first step. The union of near- and far-field of a target cell X is spatially restricted to the near-field
of its parent cell. Algorithm 1 explains how these interactions are computed for dFMM [12]. The recursive
partitioning starts with the two root cells X and Y of the octrees for source and target particles. If a pair
of cells satisfies the separation criterion in the high- or low-frequency regime, Y is a far-field interaction of
X. Else, if they are at the leaf level of the octree, Y is a near-field interaction of X. If none is true, the cell
is subdivided and the tests are repeated recursively. In line 3 in Alg. 1 we use the term directional far-field,
Algorithm 1 Separate near- and far-field in the low- and high-frequency regime
1: function SeparateNearAndFarField(X, Y )
2:    if (X, Y ) are admissible in the high-frequency regime then
3:        add Y to the directional far-field of X; return
4:    else if (X, Y ) are admissible in the low-frequency regime then
5:        add Y to the far-field of X; return
6:    else if (X, Y ) are leaves then
7:        add Y to the near-field of X; return
8:    else
9:        for all Xchild ∈ X and all Ychild ∈ Y do
10:           SeparateNearAndFarField(Xchild, Ychild)
11:       end for
12:    end if
13: end function
a concept explained in detail in [12]: In the high-frequency regime the far-field is subdivided into cones of
direction u needed by the directional kernel function K u (x, y). Each source cell Y is assigned to a cone and
there are as many directional far-fields as there are cones.
3.1
Transfer vectors
In order to address interactions uniquely we introduce transfer vectors t = (t1 , t2 , t3 ) with t ∈ Z3 . They
describe the relative positioning of two cells X and Y and are computed as t = (cx −cy )/w where cx and cy
denote the centers of the cells X and Y and w their width. In the following we use transfer vectors to
uniquely identify the M2L operator that computes the interaction between a target cell X and a source cell
Y . In matrix notation an M2L operator reads as Kt of size ℓ3 × ℓ3 and the entries are computed as
(Kt )m(α)n(β) = K(x̄α , ȳβ )
(15)
with the interpolation points x̄α ∈ X and ȳβ ∈ Y . Bijective mappings
m, n : {1, . . . , ℓ} × {1, . . . , ℓ} × {1, . . . , ℓ} → {1, . . . , ℓ3 }
(16)
with m−1 (m(α)) = α and n−1 (n(β)) = β provide unique indices to map back and forth between multi-indices
of x̄α and ȳβ and rows and columns of Kt . We choose them to be m(α) = (α1 − 1) + (α2 − 1)ℓ + (α3 − 1)ℓ2 + 1
and same for n(β).
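The flattening map m(α) (and equally n(β)) is easy to get off by one; below is a tiny sketch of the stated formula together with an inverse. The inverse and all function names are our own additions, not spelled out in the text.

```python
from itertools import product

def multi_to_flat(alpha, ell):
    # m(alpha) = (a1 - 1) + (a2 - 1)*ell + (a3 - 1)*ell^2 + 1, with alpha_i in {1, ..., ell}
    a1, a2, a3 = alpha
    return (a1 - 1) + (a2 - 1) * ell + (a3 - 1) * ell**2 + 1

def flat_to_multi(m, ell):
    # inverse mapping, so that flat_to_multi(multi_to_flat(alpha)) == alpha
    r = m - 1
    return (r % ell + 1, (r // ell) % ell + 1, r // ell**2 + 1)

ell = 4
assert all(flat_to_multi(multi_to_flat(a, ell), ell) == a
           for a in product(range(1, ell + 1), repeat=3))
```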
3.2
Interaction lists
Very commonly fast multipole methods are used for translation invariant kernel functions K(x, y) = K(x +
v, y + v) for any v ∈ R3 . Because of that and because of the regular arrangement of interpolation points x̄
and ȳ in uniform octrees it is sufficient to identify unique transfer vectors at each level of the octree and to
compute the respective M2L operators. In the following we refer to such sets of unique transfer vectors as
interaction lists T ⊂ Z3 .
If we consider asymptotically smooth kernel functions the near-field is limited to transfer vectors satisfying |t| ≤ √3; this leads to 3^3 = 27 near-field interactions (see [5]). In a multi-level scheme, these 27 near-field interactions contain all 6^3 = 216 near- and far-field interactions of its eight child-cells. Far-field interactions are given by transfer vectors that satisfy |t| > √3. This leads to a maximum of 6^3 − 3^3 = 189 interactions per cell and we end up with the usual overall complexity of O(N) of fast multipole methods for asymptotically smooth kernel functions. The union of all possible far-field interactions of the eight child cells gives 7^3 − 3^3 = 316 interactions. That is also the largest possible number of different M2L operators that have to be computed per octree level. Most asymptotically smooth kernel functions happen also to be homogeneous, K(αr) = α^n K(r) of degree n. In other words, if we scale the distance r = |x − y| between source and target by a factor of α, the resulting potential is scaled by α^n, where n is a constant that depends on the kernel function. The advantage
of homogeneous kernel functions is that the M2L operators need to be precomputed only at one level and can
simply be scaled and used on other levels. This affects the precomputation time and the required memory.
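The counting above is easy to reproduce; here is a short sketch of ours that enumerates the far-field transfer vectors of one octree level for a smooth kernel:

```python
from itertools import product

# Far-field transfer vectors of one level: all t in {-3,...,3}^3 that are not
# one of the 3^3 near-field vectors with max |t_i| <= 1.
T = [t for t in product(range(-3, 4), repeat=3) if max(abs(c) for c in t) > 1]
print(len(T))  # 316 = 7^3 - 3^3
```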
If we consider oscillatory kernel functions we need to distinguish between the low- and high-frequency
regime [12]. The admissibility criteria in the low-frequency regime are the same as those for asymptotically
smooth kernel functions. However, in the high-frequency regime the threshold distance between near-field
and far-field is O(kw2 ); nonetheless, as shown in [12] we end up with the usual complexity of O(N log N )
of fast multipole methods for oscillatory kernel functions. It depends on the wavenumber k (a property
of the kernel function). Thus, the size of near- and far-field is not a priori known as it is in the case of
asymptotically smooth kernel functions. Table 1 summarizes the number of near and far-field interactions
to be computed depending on different kernel functions.
Table 1: Number of near- and far-field interactions depends on the kernel function

  type of kernel function   | cells in near-field | cells in far-field
  smooth                    | ≤ 27 per leaf       | ≤ 316 per level
  smooth and homogeneous    | ≤ 27 per leaf       | ≤ 316 for all levels
  oscillatory               | depends on k        | depends on k

4
Optimizing the M2L operators
In all fast multipole methods the M2L operator adds the largest contribution to the overall computational
cost: for bbFMM it grows like O(N ) and for dFMM like O(N log N ). In this section, we first briefly recall
the optimization that was used up to now and suggest an improvement. Then, we present a new set of
optimizations that exploit the symmetries in the arrangement of the M2L operators.
4.1
Single low-rank approximation (SA)
In the following we explain the basics of the optimization used in [5, 12] and we refer to it as the SA variant
hereafter. The idea is based on the fact that all M2L operators Kt with t ∈ T can be assembled as a single
big matrix in two ways: either as a row of matrices K(row) = [K1 , . . . , Kt , . . . , K|T | ] or as a column of matrices
K(col) = [K1 ; . . . ; Kt ; . . . ; K|T | ] of M2L operators. The cardinality |T | gives the number of transfer vectors in
the interaction list T . Next, both big matrices are compressed using truncated singular value decompositions
(SVD) of accuracy ε as
K(row) ∼ UΣV∗ and K(col) ∼ AΓB∗
(17)
with the unitary matrices U, B of size ℓ^3 × r and V, A of size |T|ℓ^3 × r and the r singular values in Σ, Γ. With a few algebraic transformations each M2L operator can be expressed as
K_t ∼ U C_t B^∗ ,
where C_t = U^∗ K_t B of size r × r is computed as
(18)  C_t = Σ V_t^∗ B   or   C_t = U^∗ A_t Γ.
The advantage of this representation is that the cost of applying the M2L operator gets reduced from
applying a matrix of size ℓ3 × ℓ3 to a matrix of only r × r. Moreover, less memory is required. However, the
precomputation time grows cubically with the accuracy of the method due to the complexity of the SVD.
In [12] the SVD has been substituted by the adaptive cross approximation (ACA) followed by a truncated
SVD [1]. The precomputation time has been cut down drastically due to the linear complexity of the ACA.
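A minimal NumPy sketch of the SA idea (Eqns. (17)-(18)) follows. The random matrices merely stand in for kernel evaluations, and the rank-selection rule relative to the largest singular value is our assumption; the paper prescribes a truncated SVD (or ACA followed by a truncated SVD) with accuracy ε.

```python
import numpy as np

def sa_compress(Ks, eps):
    """Ks: list of M2L matrices (each ell^3 x ell^3). Returns U, B and the r x r matrices C_t."""
    U, s_row, _ = np.linalg.svd(np.hstack(Ks), full_matrices=False)   # K^(row)
    _, s_col, Bh = np.linalg.svd(np.vstack(Ks), full_matrices=False)  # K^(col)
    r = max(int(np.sum(s_row > eps * s_row[0])), int(np.sum(s_col > eps * s_col[0])))
    U, B = U[:, :r], Bh[:r, :].conj().T
    return U, B, [U.conj().T @ K @ B for K in Ks]                     # C_t = U* K_t B

Ks = [np.random.rand(27, 27) for _ in range(10)]    # toy stand-ins for the K_t
U, B, Cs = sa_compress(Ks, eps=1e-3)
err = max(np.linalg.norm(U @ C @ B.conj().T - K) / np.linalg.norm(K) for C, K in zip(Cs, Ks))
print(U.shape, err)
```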
4.1.1
SA with recompression (SArcmp)
If we use the SA variant the achieved low-rank r is the same for all M2L operators given by Ct . To a large
extent, r is determined by the greatest individual low-rank of the M2L operators. This means that most of
the matrices Ct of size r × r have effectively a smaller low-rank rt ≤ r. We exploit this fact by individually
approximating them as
(19)  C_t ∼ Ū_t V̄_t^∗   with Ū_t, V̄_t of size r × r_t and the constraint r_t < r/2.
Without the constraint the low-rank representation is less efficient than the original representation. The
effects of the recompression are studied in Sec. 5.2.
4.2
Individual low-rank approximation (IA)
As opposed to the SA approach, an individual low-rank approximation of the M2L operators as
(20)  K_t ∼ U_t V_t^∗   with U_t, V_t of size ℓ^3 × r_t
directly leads to the optimal low-rank representation of each of them. As in the previous section, the
approximation can be performed by either a truncated SVD or the ACA followed by a truncated SVD (note,
the rank r_t in Eqns. (19) and (20) may be similar but need not be the same). All these variants (SA,
SArcmp and IA) still require the approximation and storage of all M2L operators. In terms of time and
memory however, it would be desirable to come up with a method that requires the approximation and the
storage of a subset of operators only. Let us present a set of optimizations that fulfill these two requests.
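For comparison with the SA sketch above, the individual variant of Eqn. (20) amounts to one truncated SVD per operator (again only our sketch; the paper also allows ACA followed by a truncated SVD):

```python
import numpy as np

def ia_compress(Ks, eps):
    """One truncated SVD per M2L operator: K_t ~ U_t V_t^* with individual rank r_t."""
    out = []
    for K in Ks:
        U, s, Vh = np.linalg.svd(K, full_matrices=False)
        r_t = int(np.sum(s > eps * s[0]))
        out.append((U[:, :r_t] * s[:r_t], Vh[:r_t, :].conj().T))
    return out

UVs = ia_compress(Ks, eps=1e-3)        # Ks as in the SA sketch above
print([u.shape[1] for u, v in UVs])    # the individual ranks r_t
```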
4.2.1
Symmetries and permutations
Here, we illustrate how the full set of M2L operators can be expressed by a subset only. The idea is based on
symmetries in the arrangement of M2L operators and exploits the uniform distribution of the interpolation
points. We start by presenting the idea using a model example. After generalizing this idea, we demonstrate
that M2L operators can be expressed as permutations of others.
Model example The target cell X in Fig. 1 has three interactions Yt with the transfer vectors t ∈
{(2, 1), (1, 2), (−2, 1)}. We choose the reference domain to be given by t1 ≥ t2 ≥ 0. The goal is to express the
M2L operators of all interactions via M2L operators of interactions that lie in the reference domain only. In
our example this is the interaction with the transfer vector (2, 1). The two transfer vectors (1, 2) and (−2, 1)
can be shown to be reflections along the lines given by t1 = 0 and t1 = t2 , respectively. We easily find that
any t ∈ Z2 can be expressed as a reflection (or a combination of reflections) of transfer vectors that satisfy
t1 ≥ t2 ≥ 0.
We claim that any reflection of a transfer vector corresponds to a permutation of the respective M2L
operator. Recall that the evaluation of K(x̄α , ȳβ ) gives the entry from row m(α) and column n(β) of the
M2L operator. K(2,1) is the only M2L operator whose transfer vector satisfies t1 ≥ t2 ≥ 0. The multi-indices
are constructed as presented in Fig. 1. As can be checked, the entry (K(2,1) )mn is not the same as (K(1,2) )mn
or (K(−2,1) )mn . However, if we use the permuted multi-indices from Fig. 2a for K(−2,1) or those from Fig. 2b
for K(1,2) they are the same. The logic behind this can be summarized as follows.
[Figure 1 depicts the target cell X and the source cells Y_(2,1), Y_(1,2) and Y_(−2,1), each carrying a 3 × 3 grid of interpolation points indexed by the multi-indices α (in X) and β (in the Y cells), together with the axial symmetry line t_1 = 0 and the diagonal symmetry line t_1 = t_2.]
Figure 1: Axial and diagonal symmetries of interactions. The interpolation points x̄α and ȳβ are indexed by
the multi-indices α and β, respectively (interpolation order ℓ = 3). The only transfer vector that satisfies
t1 ≥ t2 ≥ 0 is t = (2, 1). In that case, we claim that K(2,1) is the only M2L operator we need to compute.
The direction of the arrows indicates the growth of the respective multi-index component.
• If an axial symmetry is given by t1 = 0 as shown in Fig. 2a, we invert the corresponding component of
the multi-index as
α ← (ℓ − (α1 − 1), α2 ) and β ← (ℓ − (β1 − 1), β2 ).
(21)
• If the diagonal symmetry is given by t1 = t2 as shown in Fig. 2b, we swap the corresponding components
as
α ← (α2 , α1 ) and β ← (β2 , β1 ).
(22)
Sometimes it is necessary to combine axial and diagonal permutations. Take as example the transfer vector (−1, 2): we need to flip it along t1 = 0 and then along t1 = t2 to get (2, 1). Note that reflections are
non commutative, i.e., the order of their application matters. This is also true for permutations of the M2L
operators.
Generalization Let us extend the above concept to the three dimensional case. We start by introducing
three axial and three diagonal symmetries in Z3 .
• Axial symmetry planes are given by t1 = 0, t2 = 0 and t3 = 0 (see Fig. 3). Each of the three planes
divides Z3 in two parts, i.e., the negative part ti < 0 and the positive part ti ≥ 0. By combining all
three planes Z3 is divided into octants. In the following we use Z3+ , i.e., the octant with t1 , t2 , t3 ≥ 0
as reference octant.
• Diagonal symmetry planes are given by t1 = t2 , t1 = t3 and t2 = t3 (see Fig. 4). In Z3 there are six
diagonal symmetries; however, we restrict ourselves to the symmetries affecting the reference octant
Z3+ .
By combining the three diagonal symmetries and the three axial symmetries we obtain the cone shown in Fig. 5. We refer to it as Z^3_sym; it is given by
(23)  Z^3_sym = { t ∈ Z^3 : t_1 ≥ t_2 ≥ t_3 ≥ 0 }.
[Figure 2 shows the 3 × 3 multi-index grids of the cells X and Y_(−2,1) (panel a) and of X and Y_(1,2) (panel b), with the multi-indices relabelled as α = (ℓ − (α_1 − 1), α_2), β = (ℓ − (β_1 − 1), β_2) in panel (a) and α = (α_2, α_1), β = (β_2, β_1) in panel (b).]
(a) Invert the component α1 and β1 due to the axial symmetry
t1 = 0.
(b) Swap the components α1 ↔ α2 and β1 ↔ β2 due
to the diagonal symmetry t1 = t2 .
Figure 2: The direction of the arrows indicates the growth of the respective multi-index component such
that the M2L operators K(−2,1) and K(1,2) become the same as K(2,1) . In other words, the mapping from
the arrows in Fig. 1 to the arrows here is analog to the mapping of the multi-indices and corresponds to the
permutations of K(2,1) such that it coincides with K(−2,1) and K(1,2) .
By its means we can identify the subset of transfer vectors Tsym ⊂ T ⊂ Z3 as
Tsym = T ∩ Z3sym
(24)
such that all others T \Tsym can be expressed as reflections of this fundamental set. Next, we claim that
these symmetries are also useful for M2L operators.
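Continuing the enumeration sketch from Sec. 3.2, restricting the 316 bbFMM transfer vectors to the cone Z^3_sym leaves exactly the 16 operators mentioned in the introduction (our own check):

```python
from itertools import product

T = [t for t in product(range(-3, 4), repeat=3) if max(abs(c) for c in t) > 1]
T_sym = [t for t in T if t[0] >= t[1] >= t[2] >= 0]
print(len(T), len(T_sym))  # 316 and 16
```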
Permutation matrices Any reflection of a transfer vector along a symmetry plane determines the permutation of its associated M2L operator as
(25)  K_t = P_t K_{p(t)} P_t^⊤ .
The permutation matrix Pt depends on the transfer vector t ∈ T . We also need the surjective mapping
p : T → Tsym ; it associates every transfer vector in T to exactly one in Tsym . The left application of Pt ,
essentially, corresponds to the permutation of α and its right application to the permutation of β, affecting
rows (respectively columns) of the original matrix Kp(t) . Note, the permutation matrices Pt depend only on
the transfer vector t. How do we construct them? For some t we introduce axial and diagonal permutations
πtA and πtD that read as follows.
• Axial symmetries: multi-index permutations are computed as
(26)  π_t^A(α_1, α_2, α_3) = (ᾱ_1, ᾱ_2, ᾱ_3)   with   ᾱ_i = α_i if t_i ≥ 0, and ᾱ_i = ℓ − (α_i − 1) otherwise.
There exist 8 different possibilities that correspond to the octants presented in Fig. 3. Note, πtA (α) = α
is only true for transfer vectors with t1 , t2 , t3 ≥ 0.
Figure 3: Three axial symmetry planes split Z^3 in octants. The reference octant is given by t_1, t_2, t_3 ≥ 0.
[Figure 4 shows three panels: (a) t_1 = t_2, (b) t_1 = t_3, (c) t_2 = t_3.]
Figure 4: Three diagonal symmetry planes in the reference octant.
Figure 5: The final cone Z3sym (t1 ≥ t2 ≥ t3 ≥ 0) is obtained by combining axial and diagonal symmetries.
10
• Diagonal symmetries: multi-index permutations are computed as
(27)  π_t^D(α_1, α_2, α_3) = (α_i, α_j, α_k)   such that |t_i| ≥ |t_j| ≥ |t_k|.
There exist 6 different possibilities that correspond to the 6 different cones if we consider Fig. 5. Note
again, πtD (α) = α is only true for transfer vectors with t1 ≥ t2 ≥ t3 ≥ 0.
Given these multi-index permutations and the mapping functions m(α) and n(β) we can define a permutation
matrix Pt of size ℓ3 × ℓ3 . Its entries are 0 except in column j the entry i = m(πt (m−1 (j))) is 1. Let us go
through the computation of this index: first, we compute the multi-index α = m−1 (j), then, we permute the
multi-index ᾱ = π_t(α) and last, we compute the row-index i = m(ᾱ). Permutation matrices may be written as
(28)  P_t = ( e_{m(π_t(m^{−1}(0)))}, e_{m(π_t(m^{−1}(1)))}, . . . , e_{m(π_t(m^{−1}(ℓ^3 − 1)))} ),
where e_j denotes a column unit vector of length ℓ^3 with 1 in the jth position and 0 elsewhere. Permutation matrices are orthogonal, P_t P_t^⊤ = I; hence the inverse exists and can be written as P_t^{−1} = P_t^⊤. Note that the combination of permutations is non-commutative. Given the permutations π_t^A and π_t^D we set up P_t^A and P_t^D and construct the permutation matrix as
(29)  P_t = P_t^D P_t^A .
The permutation for the multi-index β is the same.
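To make the construction concrete, the following is a minimal NumPy sketch (illustrative only, not the authors' C++ implementation) of the mapping p(t) and of an index array that realizes P_t; the linear ordering m of the multi-indices and all function names are assumptions introduced here.

import itertools
import numpy as np

def reference_vector(t):
    # p(t): absolute values of the components of t, sorted decreasingly (Eqns. (23)-(24))
    return tuple(sorted((abs(c) for c in t), reverse=True))

def permutation_indices(t, ell):
    # idx[j] = m(pi_t(m^{-1}(j))), i.e. column j of P_t carries a 1 in row idx[j] (Eqn. (28))
    def m(alpha):  # multi-index in (1..ell)^3 -> linear index; the ordering is an illustrative choice
        a1, a2, a3 = alpha
        return (a1 - 1) * ell * ell + (a2 - 1) * ell + (a3 - 1)
    order = sorted(range(3), key=lambda i: -abs(t[i]))  # diagonal permutation, Eqn. (27)
    idx = np.empty(ell ** 3, dtype=int)
    for alpha in itertools.product(range(1, ell + 1), repeat=3):
        axial = tuple(a if t[i] >= 0 else ell - (a - 1)  # axial permutation, Eqn. (26)
                      for i, a in enumerate(alpha))
        idx[m(alpha)] = m(tuple(axial[i] for i in order))
    return idx

# Applying K_t = P_t K_{p(t)} P_t^T (Eqn. (25)) to a multipole expansion w then reads:
#   idx = permutation_indices(t, ell)
#   w_t = w[idx]                              # permute (Eqn. (30))
#   f_t = K_ref @ w_t                         # apply the reference operator (Eqn. (31))
#   f = np.empty_like(f_t); f[idx] = f_t      # un-permute (Eqn. (32))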
4.2.2 IA with symmetries (IAsym)
By exploiting the above-introduced symmetries we end up with an optimization we refer to as the IAsym variant. We individually approximate and store only M2L operators with t ∈ Tsym and express all others via permutations as shown in Eqn. (25). The IAsym variant for an arbitrary transfer vector t ∈ T consists of the following three steps.
1. Permute multipole expansions:   w_t = P_t^⊤ w   (30)
2. Compute permuted local expansions:   f_t = K_{p(t)} w_t   (31)
3. Un-permute local expansions:   f = P_t f_t   (32)
Note that the permutation matrix is not applied to the actual M2L operator (it remains unchanged, as can be seen in step 2). Its application is implemented as a reordering of vector entries (steps 1 and 3). Depending on whether the M2L operator exists in its full-rank or in its low-rank representation (see Eqn. (20)), the application corresponds to one or two matrix-vector products. In the following we introduce a blocking scheme that leads to a faster execution on a computer.
4.2.3 IAsym with blocking (IAblk)
We know from Sec. 4.2.1 that, based on the consideration of symmetries and permutations, many interactions share the same M2L operators. This paves the way for blocking schemes. Essentially, the idea is to substitute many matrix-vector products by a few matrix-matrix products. Blocking schemes do not change the overall complexity of the algorithm, but they allow for the use of highly optimized matrix-matrix product implementations. These achieve much higher peak performance than optimized matrix-vector product implementations due to better cache reuse and less memory traffic [4, 9].
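As a toy illustration of this substitution, the following hypothetical NumPy sketch applies one shared operator once to many gathered expansions instead of once per expansion; all sizes and names are made up for the example and are not taken from the library.

import numpy as np

rng = np.random.default_rng(0)
ell3, n = 125, 64                                    # e.g. ell = 5 and 64 interactions sharing one operator
K = rng.standard_normal((ell3, ell3))                # one (reference) M2L operator
ws = [rng.standard_normal(ell3) for _ in range(n)]   # permuted multipole expansions

fs = [K @ w for w in ws]                             # individual application: n matrix-vector products

W = np.column_stack(ws)                              # gather the expansions as columns
F = K @ W                                            # blocked application: a single matrix-matrix product
assert np.allclose(F[:, 0], fs[0])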
In our concrete case, we use the blocking scheme to block multipole and local expansions. Instead
of permuting them and applying the M2L operators individually (matrix-vector products), we assemble
those that share the same M2L operator as distinct matrices to which we then apply the M2L operators (matrix-matrix products). Algorithm 2 explains this in detail. We need the matrices Wp and Fp of size ℓ^3 × np for p = 1, . . . , |Tsym|. Their columns store the permuted multipole and the resulting (also permuted) local expansions. The values for np indicate how many interactions in T share the same M2L operator as interactions in Tsym; in other words, n1 + · · · + np + · · · + n|Tsym| = |T| is true. In the case of bbFMM this is a priori known, since the full interaction list is a priori given (see Sec. 3.2). That is not the case for dFMM and the values for np have to be determined during a precomputation step. We also need counters cp to indicate the position of the currently processed expansions in Wp and Fp. As opposed to IAsym, here we split up the single loop over all interactions into three loops. In the first one, we assemble the set of matrices Wp. At the end cp ≤ np is true for all p. In the second loop, we perform at most |Tsym| matrix-matrix products. And in the last loop, we increment the local expansion with the expansions from all Fp.

Algorithm 2 Blocking scheme with |Tsym| matrix-matrix products
 1: function BlockedM2L(target cell X and all far-field interactions IY)
 2:     allocate Fp and Wp for p = 1, . . . , |Tsym|
 3:     retrieve f from X
 4:     set all cp = 0
 5:     for all source cells Y in IY do
 6:         retrieve w from Y and compute t from cell-pair (X, Y)
 7:         column cp(t) of Wp(t) gets P_t^⊤ w            ⊲ Permute multipole expansions
 8:         increment cp(t)
 9:     end for
10:     for all {Kp} do
11:         Fp ← Kp Wp                                    ⊲ Compute permuted local expansions
12:     end for
13:     set all cp = 0
14:     for all source cells Y in IY do
15:         compute t from cell-pair (X, Y)
16:         retrieve ft from column cp(t) of Fp(t)
17:         increment cp(t)
18:         f ← f + Pt ft                                 ⊲ Permute permuted local expansions
19:     end for
20: end function
Blocking along multiple target cells Algorithm 2 proposes to use the blocking scheme for all interactions of only one target cell. In the worst case no M2L operator is shared and the algorithm coincides with
IAsym. Moreover, the size of the matrices Wp and Fp might vary to a large extent. That is why we pursued
the blocking idea further to come up with a more efficient scheme. Instead of using individual np we choose
it to be a constant n for all p = 1, . . . , |Tsym|. Then we keep on blocking expansions using interaction lists of multiple (as opposed to one) target cells. Once cp = n is true for some p, we apply the M2L operator as Fp = Kp Wp where Wp, Fp are both of size ℓ^3 × n. In our numerical studies we use this blocking scheme with
n = 128.
5 Numerical results
In the previous sections we introduced various optimizations of the M2L operators for bbFMM and dFMM.
As representative kernel functions we use
the Laplace kernel K(x, y) = 1 / (4π|x − y|) and the Helmholtz kernel K(x, y) = e^{ık|x−y|} / (4π|x − y|).   (33)
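Written out as code, these two kernels read as follows (a small NumPy sketch; x and y are 3-vectors and k is the wavenumber).

import numpy as np

def laplace_kernel(x, y):
    # smooth kernel of Eqn. (33)
    return 1.0 / (4.0 * np.pi * np.linalg.norm(np.asarray(x) - np.asarray(y)))

def helmholtz_kernel(x, y, k=1.0):
    # oscillatory kernel of Eqn. (33); k = 1 is used in the studies below
    r = np.linalg.norm(np.asarray(x) - np.asarray(y))
    return np.exp(1j * k * r) / (4.0 * np.pi * r)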
In the numerical studies, hereafter, we use the same parameter settings as are used in the respective publications, and for dFMM we use the wavenumber k = 1. All computations are performed on a single CPU, an Intel Core i7–2760QM @ 2.40GHz × 8, with 8GB shared memory. We used the compiler gcc 4.6.3 with the flags “-O2 -ffast-math”. The source files (C++) of the program we used can be downloaded via
http://github.com/burgerbua/dfmm; they are distributed under the “BSD 2-Clause” license.
M2L optimizations We show and analyze results for the following eight variants:
1. NA: the full interaction list T is represented by full-rank (not approximated) M2L operators
2. NAsym: the reduced interaction list Tsym is represented by full-rank (not approximated) M2L operators
3. NAblk: same as NAsym but with additional blocking of multipole and local expansions
4. SA: the variant presented in [5] and briefly sketched in Sec. 4.1
5. SArcmp: same as SA but with additional recompression of all Ct
6. IA: same as NA but with low-rank (individually approximated) M2L operators
7. IAsym: same as NAsym but with low-rank (individually approximated) M2L operators
8. IAblk: same as NAblk but with low-rank (individually approximated) M2L operators (see Alg. 2)
Moreover, we study two different low-rank approximation schemes for the SA and IA variants: on one
hand we use a truncated singular value decomposition (SVD) and on the other hand the adaptive cross
approximation (ACA) followed by a truncated SVD [1].
Example geometries We use the three benchmark examples shown in Fig. 6 and described in the listing
below. The depth of the octree is chosen such that near- and far-field are balanced, i.e., we want the fastest
possible matrix-vector product.
1. The sphere from Fig. 6a is contained in the bounding-box 64 × 64 × 64; 168 931 particles are randomly
scattered on its surface. The octree has 6 levels.
2. The oblate sphere from Fig. 6b is contained in the bounding-box 6.4 × 64 × 64; 125 931 particles are
randomly scattered on its surface. The octree has 6 levels.
3. The prolate sphere from Fig. 6c is contained in the bounding-box 6.4 × 6.4 × 64; 119 698 particles are
randomly scattered on its surface. The octree has 7 levels.
In an approximate sense, the sphere is a three-dimensional, the oblate sphere a two-dimensional and the prolate sphere a one-dimensional object in R^3. We choose these three geometries to study their influence on the performance of the dFMM. Table 2 shows the size of the full and the reduced interaction lists (|T| and
|Tsym |) per level (lf stands for low-frequency, hf for high-frequency regime) for all three geometries. The size
of the interaction lists clearly grows with the dimensionality of the geometry. We report on the impact of
this behavior later.
                      sphere                          oblate sphere                   prolate sphere
           3(hf)   4(hf)   5(hf)   6(hf)     3(hf)   4(hf)   5(hf)   6(hf)     4(hf)   5(hf)   6(hf)   7(lf)
|T|         668   18710    2666     418        60    2336    1502     400       214     738     382     424
|Tsym|       20     518      93      21         6     203      89      21        35      61      21      22

Table 2: Size of interaction lists per level for dFMM (hf and lf stand for the high- and low-frequency regime, respectively)
Figure 6: The example geometries are centered at (0, 0, 0): (a) sphere, (b) oblate spheroid, (c) prolate spheroid.
5.1 Accuracy of the method
Both, the bbFMM and the dFMM, have two approximations: 1) the interpolation of the kernel functions
determined by interpolation order ℓ, and 2) the subsequent low-rank approximation of the M2L operators
determined by the target accuracy ε. The final relative error is a result of both approximations. We compute
it as
ε_{L2} = ( Σ_{i∈M} |f_i − f̄_i|^2 / Σ_{i∈M} |f_i|^2 )^{1/2}   (34)
where M is the number of particles x in an arbitrary reference cluster at the leaf level; f and f¯ are the exact
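A one-function sketch of this error measure (illustrative NumPy, not the benchmark code; f_exact and f_approx stand for the arrays f and f̄ over the reference cluster):

import numpy as np

def relative_l2_error(f_exact, f_approx):
    # relative L2 error of Eqn. (34)
    f_exact = np.asarray(f_exact)
    diff = f_exact - np.asarray(f_approx)
    return np.linalg.norm(diff) / np.linalg.norm(f_exact)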
and approximate results, respectively. In Fig. 7 we compare achieved accuracies for the bbFMM and the
dFMM with the IAblk variant (other variants produce identical results). Both plots show the behavior of the
relative error εL2 depending on the interpolation order ℓ and the target accuracy ε. Evident is the matching
result between the left and right figure. All curves show an initial plateau and then, after a sharp knee, a slope of roughly 1. The knee occurs approximately at (ℓ, ε) = (Acc, 10^{−Acc}). In the rest of the paper we use
this convention to describe the accuracy Acc of bbFMM and dFMM. The low-rank approximations for the
computations whose accuracies are shown in Fig. 7 were conducted with the ACA followed by a truncated
SVD. By just using a truncated SVD we obtain identical results.
5.2 Reducing the cost with the SArcmp variant
The cost of applying an approximate M2L operator mainly depends on its rank k. In Tab. 3 we compare
the average rank of M2L operators for the SA and the IA variants at all levels that have expansions. Let us
explain how we computed the average rank for SA. Recall, when we use that variant, all M2L operators from
one interaction list possess the same rank; the bbFMM and the dFMM in the low-frequency regime have one
and the dFMM in the high-frequency regime has potentially multiple (directional) interaction lists per level.
Thus, the average rank per level is the average of all ranks used in all interactions at that level.
The application of one M2L operator from SA and IA requires O(k^2), respectively O(2kℓ^3), operations.
Note, the ranks k of the M2L operators are different; for a given accuracy Acc they are normally significantly
lower for IA than for SA. This can be seen in Tab. 3. The large ranks at level 7 (first level in the low-frequency
regime) of SA are noteworthy. There, the lower bound for the separation criterion is given by the usual low-frequency criterion saying that non-touching cells are admissible. Hence, the smallest possible transfer
Figure 7: Accuracies for the prolate sphere from Fig. 6c: relative error ε_L2 versus target accuracy ε_ACA for interpolation orders ℓ = 3, . . . , 9. (a) bbFMM for the smooth kernel function; (b) dFMM for the oscillatory kernel function. The target accuracy ε_ACA refers to the accuracy of the approximate M2L operators (see Eqn. (20)). Here, we used the ACA followed by a truncated SVD.
                  SA                                        IA
Acc      4(hf)   5(hf)   6(hf)   7(lf)          4(hf)   5(hf)   6(hf)   7(lf)
3          9.8    12.3    12.8      19            5.4     5.7     5.7     5.0
5         21.7    30.8    39.2      71           11.3    12.3    13.5    12.6
7         38.2    58.6    80.0     163           18.9    21.5    24.7    24.1
9         57.8    96.5   138.7     296           28.6    33.2    39.7    40.1

Table 3: Comparison of average ranks k for the SA and IA variants of the dFMM (prolate sphere)
vectors have a length of min_{t∈T} |t| = 2. The slowly decaying singular values of associated M2L operators are responsible for the large ranks. On the other hand, the upper bound for the separation criterion coincides with the lower bound of level 6 (parent level), which is in the high-frequency regime. Hence, the largest possible transfer vectors have a length of max_{t∈T} |t| ∼ 4k. M2L operators whose transfer vectors are in that
range have much faster decaying singular values. This fact explains the efficiency of SArcmp (individual
recompression of each M2L operator).
In Tab. 4 we analyze the cost savings of SArcmp compared to SA. The left values in each column multiplied by 10^6 give the overall number of floating point operations per level for SA. The right values (in brackets) indicate the respective cost ratio of SArcmp to SA. The recompression reduces the cost remarkably
(see also Fig. 8). At the low-frequency level 7, SArcmp reduces the cost by more than a factor of 2. This is
almost twice as much as in high frequency levels. For the impact on timing results we refer to Sec. 5.4.
Acc      4(hf)               5(hf)                 6(hf)                  7(lf)
3         0.8   (0.98)       20.3   (0.93)         31.5   (0.93)          204.3   (0.62)
5         4.9   (0.97)      161.4   (0.89)        339.3   (0.86)         2889.3   (0.47)
7        19.3   (0.97)      687.8   (0.88)       1590.2   (0.83)        15420.0   (0.40)
9        55.0   (0.97)     2138.2   (0.87)       5237.6   (0.83)        51505.7   (0.38)

Each entry: cost(SA)/10^6   (cost(SArcmp)/cost(SA))

Table 4: Comparison of cost in terms of floating point operations for SA and SArcmp (prolate sphere)
In Fig. 8 we compare the cost of SA, SArcmp and IA for bbFMM and dFMM for the prolate sphere and
accuracy Acc = 9. The bbFMM in Fig. 8a has expansions at the levels 2 − 7. The reason for the jump
between levels 4 and 5 is that the levels 2 − 4 have a maximum of |T | = 16 and the levels 5 − 7 a maximum
of |T | = 316 (common maximum size for bbFMM) possible M2L operators. The jump in the case of dFMM
in Fig. 8b at level 5 can be explained in the same way: if we look at Tab. 2 we see that |T | is about twice as
large as it is at the other levels. The levels 4, 5, 6 of dFMM are in the high-frequency regime. There, the cost
of IA is approximately 12, 5, 3 times greater than the cost of SA. However, at level 7 (low-frequency regime)
of dFMM and at the levels 5 − 7 of bbFMM the cost of IA is only about 2/3 compared to SA. The reason is
the size of the interaction lists |T |. The larger they become the larger the span between the smallest and the
largest rank and that favors individual approximations. The SArcmp is computationally the least expensive
variant. Similar results are obtained for all other accuracies.
5.3 Speeding up the precomputation with IA variants
The bottleneck of SA and SArcmp is the relatively large precomputation time. This is a minor issue in the
case of homogeneous kernel functions. In that case the approximated M2L operators can be stored on the disk
and loaded if needed for further computations. However, if we deal with non-homogeneous kernel functions,
such as the Helmholtz kernel, IAsym is the way to go. In Tab. 5 we compare the precomputation time of
SA, IA and IAsym (we do not report on SArcmp; due to the additional recompression its precomputation
time is higher than the one for SA). For the low-rank approximation we use a truncated SVD or the ACA
followed by a truncated SVD. In both cases we end up with the same rank. We get remarkable speedups in
the precomputation. Let us look at an extreme example: for the sphere and an accuracy Acc = 7 we have
tSVD = 4740.9 s for SA versus tACA = 1.8 s for IAsym. This corresponds to a speedup greater than 2600.
For the oblate and prolate sphere we obtain similar results.
5.4 Comparison of all M2L variants
In the previous sections we have revealed the impact of SArcmp and IAsym on the computational cost and
the precomputation time of the M2L operators, respectively. In this section we compare all variants and
focus on their impact on memory requirement and running time (of applying M2L operators during the
Figure 8: Comparison of the M2L cost (floating point operations) growth per octree level for the SA, SArcmp and IA variants of (a) bbFMM and (b) dFMM (Acc = 9 and prolate sphere).
actual matrix-vector product). Tables 6–8 (respectively, 9–11) present running times of the M2L operators
for bbFMM (respectively, dFMM) and all three geometries. The upper set of rows reports on results obtained
with a BLAS and LAPACK implementation (libblas 1.2.20110419-2, which is sub-optimal in our case). The
lower set shows the times obtained with the Intel MKL [9] (Intel MKL 10.3.11.339 (intel64)), which proved
faster for our purpose.
Impact of using symmetries on the memory requirement Missing times in the tables indicate that
the required memory exceeded the available memory of 8 GB. Clearly, the NA variant (no approximation, no
use of symmetries) requires the most memory. On the other hand, IAsym and IAblk are the most memory
friendly variants. Both require only memory for storing the low-rank representations of M2L operators from
Tsym (see Tab. 2 for a comparison of T and Tsym ).
Impact of the blocking scheme on runtime Bold numbers indicate the fastest variants. Two results
attract our attention. 1) If we look at the times in the Tabs. 6–11 we notice that in four cases (bbFMM
with the sphere, the oblate sphere and the prolate sphere and dFMM with the prolate sphere) IAblk, and
in two cases (dFMM with the sphere and the oblate sphere) SArcmp is fastest. To be more general, IAblk
wins at levels having non-directional expansions (all levels of bbFMM and low-frequency levels of dFMM)
and SArcmp at levels having directional expansions (high-frequency levels of dFMM). Why is that? The
reason follows from the cost studies in Sec. 5.2. Let us take for example Acc = 5. Recall, we measure the computational cost based on the size of the approximate M2L operators, i.e., O(k^2) for SA and O(2kℓ^3) for IA. The respective ranks k are given in Tab. 3. The ratio of these costs for SA to IA is 0.46 at the high-frequency level 6 and 1.60 at the low-frequency level 7 (with the Acc = 5 ranks from Tab. 3 and ℓ = 5: 39.2^2/(2 · 13.5 · 5^3) ≈ 0.46 and 71^2/(2 · 12.6 · 5^3) ≈ 1.60). As a matter of fact, SA wins at high-frequency levels and IA at low-frequency levels. Even savings of 47% at the low-frequency level due to the recompression of
SArcmp (see Tab. 4) are not sufficient to outweigh the advantage of the blocking scheme of IAblk. If there
is no low-frequency level, such as for the sphere in Tab. 9 and the oblate sphere in Tab. 10, the SArcmp
outperforms all other variants. For example, if we would repeat the computations for the prolate sphere
with an octree of depth 6 (no low-frequency level) the resulting timing patterns would follow those from
Acc
SA
tSVD [s] tACA [s]
tSVD [s]
IA
tACA [s]
IAsym
tSVD [s] tACA [s]
sphere
3
4
5
6
7
7.0
75.0
317.2
1336.4
4740.9
4.6
20.2
69.1
197.0
435.9
6.2
37.0
188.8
790.5
-
1.9
4.9
12.4
28.6
-
0.2
1.1
5.6
22.9
84.0
0.1
0.2
0.4
0.9
1.8
0.4
1.1
2.7
6.4
13.3
29.0
-
0.1
0.5
2.8
11.1
40.9
129.3
369.2
0.0
0.1
0.2
0.5
0.9
2.0
3.9
0.3
0.7
2.1
5.0
11.6
25.0
50.2
0.1
0.4
2.5
15.7
66.0
322.3
807.4
0.0
0.1
0.1
0.4
0.8
1.7
3.7
oblate sphere
3
4
5
6
7
8
9
1.4
13.4
59.7
299.4
1021.2
-
0.9
3.8
13.5
39.4
97.6
217.7
444.6
1.2
7.1
37.0
150.1
551.0
1751.4
prolate sphere
3
4
5
6
7
8
9
0.6
7.5
64.7
358.2
1374.8
-
0.7
4.1
18.3
73,3
204.3
549.3
1273.2
0.8
5.3
31.0
199.4
808.9
-
Table 5: Precomputation times (SVD versus ACA) for the SA, IA and IAsym variants. Missing numbers
mean that the available memory of 8 GB has not been sufficient for the respective computation.
the sphere and the oblate sphere (the overall application time would increase too, since the tree-depth is
based on our choice of balancing near- and far-field, i.e., the shortest overall application time). 2) Evident
is the wide margin in speedup between variants that use blocking and those that do not. If we use the MKL (as opposed to libblas) for NA, NAsym, SA, SArcmp, IA and IAsym we end up with 1.5 − 2 times faster application times. However, if we use the MKL for NAblk and IAblk we achieve 3 − 4 times faster application times. Even though these speedups are greatest with the MKL library, they highlight the benefits of the blocking scheme presented in Alg. 2.
Varying growth of application times In Fig. 9 we visualize the different growth rates of M2L application
times for the bbFMM with increasing accuracies Acc. We are interested in the growth rates due to algorithmic
changes. That is why we only study those variants that do not use blocking. Since no approximation is
involved, the times for NAsym grow the fastest. The times for SA grow slower but still faster than those for IAsym. SArcmp features the slowest growth; it is the optimal variant in terms of computational cost (see
Tab. 4).
6 Conclusion
The fast multipole method based on Chebyshev interpolation, first presented in [5] for smooth kernel functions
(bbFMM) and extended in [12] to oscillatory ones (dFMM), is a convenient-to-use algorithm due to its easy
Acc
NA
NAsym
NAblk
SA
SArcmp
IA
IAsym
IAblk
0.4
1.3
3.4
9.0
20.2
40.7
74.0
0.4
1.4
3.6
8.7
18.4
35.9
68.6
0.3
0.7
1.8
4.2
8.6
16.2
28.4
0.4
0.8
2.0
4.0
8.2
16.2
32.9
0.2
0.4
1.0
1.7
3.5
6.0
9.9
libblas 1.2.20110419-2
3
4
5
6
7
8
9
0.7
3.6
14.2
42.8
102.7
229.3
-
0.8
3.8
12.8
38.2
101.7
234.2
484.5
0.4
1.8
5.9
16.5
40.7
89.7
180.0
0.5
1.4
4.6
11.5
25.5
47.8
83.8
0.3
0.9
2.2
4.8
9.6
18.3
30.0
Intel MKL 10.3.11.339 (intel64)
3
4
5
6
7
8
9
0.3
1.8
9.4
28.2
72.6
127.6
-
0.3
1.2
6.3
14.1
57.7
117.0
260.5
0.2
0.6
1.9
4.8
12.0
25.6
50.8
0.2
0.6
2.0
6.9
19.3
34.3
60.4
0.4
0.7
1.4
2.6
5.8
12.0
20.6
0.4
0.7
2.0
5.0
12.7
24.6
44.6
Table 6: M2L timings for the bbFMM (sphere). In this table and below as well, bold numbers correspond to
the smallest entry in a row. In some cases, two columns use bold font when the running times are sufficiently
close that the difference is not significant.
Acc
NA
NAsym
NAblk
SA
SArcmp
IA
IAsym
IAblk
0.2
0.6
1.6
4.0
9.0
18.2
32.4
0.2
0.6
1.6
3.9
9.0
16.5
30.4
0.1
0.3
0.8
1.9
3.8
7.2
12.8
0.2
0.4
0.9
1.8
4.4
7.2
14.2
0.1
0.2
0.4
0.8
1.6
2.7
4.4
libblas 1.2.20110419-2
3
4
5
6
7
8
9
0.3
1.6
6.4
19.1
46.8
102.8
-
0.4
1.6
5.8
16.7
44.8
101.9
212.2
0.2
0.8
2.6
7.4
18.3
40.1
81.1
0.2
0.8
1.9
5.4
11.2
21.3
37.0
0.1
0.4
0.9
2.1
4.5
8.4
13.5
Intel MKL 10.3.11.339 (intel64)
3
4
5
6
7
8
9
0.1
0.7
4.5
13.1
33.7
57.9
117.4
0.1
0.5
2.0
6.7
27.4
52.7
118.2
0.1
0.3
0.9
2.2
5.4
11.5
22.9
0.1
0.3
1.0
3.2
8.2
14.6
27.3
0.2
0.3
0.7
1.2
2.7
5.2
9.1
0.2
0.3
0.9
2.6
5.7
10.9
20.0
Table 7: M2L timings for bbFMM (oblate sphere)
Acc
NA
NAsym
NAblk
SA
SArcmp
IA
IAsym
IAblk
0.1
0.4
1.2
2.6
7.2
11.6
21.1
0.1
0.4
1.0
2.4
5.2
10.8
19.9
0.1
0.2
0.6
1.4
2.9
4.9
8.7
0.1
0.2
0.6
1.1
2.4
5.5
11.2
0.1
0.1
0.3
0.5
1.0
1.7
2.8
libblas 1.2.20110419-2
3
4
5
6
7
8
9
0.2
1.0
4.0
12.0
29.3
68.4
131.7
0.2
1.1
4.0
11.8
31.4
71.4
137.3
0.1
0.5
1.9
4.7
11.7
25.5
50.7
0.1
0.4
1.2
3.4
6.9
13.1
22.3
0.1
0.3
0.6
1.3
2.8
5.2
8.6
Intel MKL 10.3.11.339 (intel64)
3
4
5
6
7
8
9
0.1
0.4
2.9
8.2
20.5
35.7
73.3
0.1
0.3
1.3
4.2
16.3
34.1
73.8
0.1
0.2
0.6
1.4
3.4
7.2
14.4
0.1
0.2
0.6
2.1
5.9
9.1
17.6
0.1
0.2
0.4
0.7
1.7
3.7
5.9
0.1
0.2
0.6
1.5
3.7
7.1
12.8
Table 8: M2L timings for bbFMM (prolate sphere)
Acc
NA
NAsym
NAblk
SA
SArcmp
IA
IAsym
IAblk
3.2
8.0
19.8
43.0
-
2.9
8.1
19.4
42.8
86.5
3.5
8.3
19.7
40.4
78.1
2.4
5.9
14.0
29.7
60.8
2.2
4.8
10.0
17.6
30.8
libblas 1.2.20110419-2
3
4
5
6
7
6.3
25.7
113.0
-
5.9
25.1
89.5
275.9
-
4.8
20.1
71.4
202.2
-
2.0
4.3
7.4
14.5
-
2.0
3.9
7.1
11.6
-
Intel MKL 10.3.11.339 (intel64)
3
4
5
6
7
4.8
21.5
109.1
-
3.9
17.1
61.5
162.2
-
2.3
7.5
22.8
58.8
-
1.6
3.3
6.3
11.6
-
1.7
3.4
6.3
9.8
-
3.2
6.7
10.6
35.1
-
Table 9: M2L timings for dFMM (sphere); high-frequency leaf level
Acc
NA
NAsym
NAblk
SA
SArcmp
IA
IAsym
IAblk
0.9
2.7
6.9
15.9
32.1
59.2
-
0.9
2.6
6.8
15.9
32.3
59.6
105.1
1.1
2.7
6.9
14.8
28.9
52.7
90.1
0.8
1.8
4.5
10.3
21.8
42.0
79.6
0.6
1.3
3.1
5.9
11.0
18.8
30.9
libblas 1.2.20110419-2
3
4
5
6
7
8
9
1.8
8.9
31.4
91.8
-
1.8
8.6
30.6
95.9
228.4
-
1.6
6.8
24.8
71.6
176.6
-
0.6
1.3
2.8
5.5
9.6
16.4
25.5
0.6
1.1
2.3
4.1
7.3
11.2
21.9
Intel MKL 10.3.11.339 (intel64)
3
4
5
6
7
8
9
1.4
7.2
25.8
62.5
-
1.0
5.0
20.7
56.7
141.3
-
0.7
2.4
7.9
20.4
48.8
-
0.4
0.9
2.2
4.3
7.8
13.6
21.0
0.4
1.0
1.9
3.4
5.5
9.5
14.7
0.7
2.0
5.5
12.6
24.7
45.3
-
Table 10: M2L timings for dFMM (oblate sphere); high-frequency leaf level
Acc
NA
NAsym
NAblk
SA
SArcmp
IA
IAsym
IAblk
0.3
0.9
2.4
6.0
12.1
24.3
43.5
0.3
0.9
2.4
5.6
11.8
23.3
43.5
0.3
1.0
2.5
5.7
11.1
21.2
37.2
0.2
0.6
1.5
3.4
7.5
15.8
32.1
0.2
0.4
1.0
2.0
3.8
7.0
11.6
libblas 1.2.20110419-2
3
4
5
6
7
8
9
0.6
3.2
11.9
32.5
80.3
-
0.6
2.8
10.0
30.8
77.0
-
0.5
2.3
8.7
25.1
61.1
-
0.3
1.0
2.7
6.5
13.2
24.8
47.5
0.2
0.5
1.2
2.6
5.1
9.1
18.6
Intel MKL 10.3.11.339 (intel64)
3
4
5
6
7
8
9
0.3
2.1
8.7
20.2
48.4
-
0.3
1.4
5.8
17.9
46.5
-
0.2
0.7
2.5
6.7
16.3
-
0.2
0.5
2.0
5.1
10.3
16.6
31.6
0.2
0.4
1.0
2.0
4.0
7.2
14.8
0.2
0.6
2.0
4.6
9.6
18.3
33.7
Table 11: M2L timings for dFMM (prolate sphere); low-frequency leaf level
Figure 9: M2L application time [s] versus accuracy Acc for NAsym, SA, SArcmp and IAsym for bbFMM, taken from the Tabs. 6, 7 and 8; (s) stands for sphere and (ps) for prolate sphere.
implementation and its kernel function independence. In this paper we investigated algorithms to reduce the
running time of the M2L operator. We proposed several optimizations and studied their respective strengths
and weaknesses.
On one hand we proposed SArcmp, which uses an individual recompression of the suboptimally approximated M2L operators obtained via SA (the variant presented in [5]). We have shown that this new variant
reduces the computational cost noticeably. In some settings it even provides the fastest M2L application
times. On the other hand we also proposed a new set of optimizations based on an individual low-rank
approximation of the M2L operators; we refer to them as IA variants. As opposed to SA they directly lead
to the optimal low-rank representation for each operator. The overall number of flops is greater than for
SArcmp (which is strictly a lower bound on the number of flops). However, the advantage of the individual
treatment of the M2L operators is that we can exploit symmetries in their arrangement. This means that
all operators can be expressed as permutations of a subset. For example, in the case of the bbFMM (in
which the full interaction list has a constant size), we need to approximate and store only 16 instead of
316 operators. The remaining ones can be expressed as permutations thereof. This has a great impact on
the precomputation time and the memory requirement. Moreover it allows us to express (again in the case
of the bbFMM) the at most 189 matrix-vector products (applications of the M2L operators) as at most 16
matrix-matrix products. We referred to this approach as the IAblk variant. It can then take advantage of
highly optimized implementations of matrix-matrix operations (e.g., the MKL [9]).
Let us conclude by comparing SArcmp and IAblk, the two variants that have the fastest running times.
IAblk wins if we compare precomputation time, required memory and runtime at levels having non-directional
expansions (bbFMM and low-frequency levels in dFMM). SArcmp wins only if we compare the runtime at
levels having directional expansions (high-frequency levels in dFMM). However, in order to identify the
optimal variant we have to distinguish two potential uses of the FMM as a numerical scheme to perform
fast matrix-vector products. 1) If we are interested in the result of a single matrix-vector product, a quick
precomputation is essential. However, 2) if we are looking for the iterative solution of an equation system
(many matrix-vector products), a fast running time of the M2L operator is crucial. Let us explain this
with an example. We take dFMM (with MKL) with accuracy Acc = 5 for the sphere. IAblk wins if we
are interested in the former use. The precomputation takes 0.4 s versus 69.1 s (for SArcmp) and the M2L
application takes 10.0 s versus 6.3 s, which sums up to 10.4 s versus 75.4 s. All other operators (P2P, P2M,
M2M, L2L and L2P) have nearly the same runtime in both cases, and their runtimes are negligible compared
to M2L. Looking at the latter use, SArcmp starts being faster if the iterative solution requires more than 19
matrix-vector products. For higher accuracies this threshold rises, e.g., for Acc = 6 it lies at 26 matrix-vector
products. In the case of bbFMM, IAblk is always optimal.
References
[1] M. Bebendorf. Hierarchical LU decomposition based preconditioners for BEM. Computing, 74:225–247,
2005.
[2] H. Cheng, W. Y. Crutchfield, Z. Gimbutas, L. Greengard, J. F. Ethridge, J. Huang, V. Rokhlin,
N. Yarvin, and J. Zhao. A wideband fast multipole method for the Helmholtz equation in three dimensions. Journal of Computational Physics, 216(1):300–325, 2006.
[3] E. Darve and P. Havé. Efficient fast multipole method for low-frequency scattering. Journal of Computational Physics, 197(1):341–363, 2004.
[4] J. J. Dongarra, J. Du Croz, S. Hammarling, and I. S. Duff. A set of level 3 basic linear algebra
subprograms. ACM Trans. Math. Softw., 16(1):1–17, March 1990.
[5] W. Fong and E. Darve. The black-box fast multipole method. Journal of Computational Physics, 228
(23):8712–8725, 2009.
[6] Z. Gimbutas and V. Rokhlin. A generalized fast multipole method for nonoscillatory kernels. SIAM
Journal on Scientific Computing, 24(3):796–817, March 2002.
[7] L. Greengard and V. Rokhlin. A fast algorithm for particle simulations. Journal of Computational
Physics, 73:325–348, 1987.
[8] L. Greengard and V. Rokhlin. A new version of the fast multipole method for the Laplace equation in
three dimensions. Acta Numerica, 6:229–269, 1997.
[9] Intel. Intel Math Kernel Library (Intel MKL) 10.3, 2012. URL http://software.intel.com/en-us/articles/intel-mkl/. [Online; accessed 28-August-2012].
[10] P. G. Martinsson and V. Rokhlin. An accelerated kernel-independent fast multipole method in one
dimension. SIAM Journal on Scientific Computing, 29(3):1160–1178, May 2007.
[11] J. C. Mason and D. C. Handscomb. Chebyshev Polynomials. Chapman & Hall/CRC, 2003.
[12] M. Messner, M. Schanz, and E. Darve. Fast directional multilevel summation for oscillatory kernels
based on chebyshev interpolation. Journal of Computational Physics, 231(4):1175 – 1196, 2012.
[13] V. Rokhlin. Diagonal forms of translation operators for the Helmholtz equation in three dimensions.
Applied and Computational Harmonic Analysis, 1:82–93, 1993.
[14] L. Ying, G. Biros, and D. Zorin. A kernel-independent adaptive fast multipole algorithm in two and
three dimensions. Journal of Computational Physics, 196(2):591 – 626, 2004.
The Lottery Ticket Hypothesis: Training Pruned
Neural Networks
arXiv:1803.03635v1 [cs.LG] 9 Mar 2018
Jonathan Frankle
MIT CSAIL
[email protected]
Michael Carbin
MIT CSAIL
[email protected]
Abstract
Recent work on neural network pruning indicates that, at training time, neural
networks need to be significantly larger in size than is necessary to represent the
eventual functions that they learn. This paper articulates a new hypothesis to explain
this phenomenon. This conjecture, which we term the lottery ticket hypothesis,
proposes that successful training depends on lucky random initialization of a
smaller subcomponent of the network. Larger networks have more of these “lottery
tickets,” meaning they are more likely to luck out with a subcomponent initialized
in a configuration amenable to successful optimization.
This paper conducts a series of experiments with XOR and MNIST that support
the lottery ticket hypothesis. In particular, we identify these fortuitously-initialized
subcomponents by pruning low-magnitude weights from trained networks. We then
demonstrate that these subcomponents can be successfully retrained in isolation
so long as the subnetworks are given the same initializations as they had at the
beginning of the training process. Initialized as such, these small networks reliably
converge successfully, often faster than the original network at the same level
of accuracy. However, when these subcomponents are randomly reinitialized or
rearranged, they perform worse than the original network. In other words, large
networks that train successfully contain small subnetworks with initializations
conducive to optimization.
The lottery ticket hypothesis and its connection to pruning are a step toward
developing architectures, initializations, and training strategies that make it possible
to solve the same problems with much smaller networks.
1 Introduction
Recent work on neural network pruning (e.g., [4, 3, 2]) indicates that neural networks can be dramatically simplified once trained. After training is complete, upwards of 90% of weights can be pruned
without reducing accuracy. If a network can be so pruned, then the function that it learned could
have been represented by a far smaller network than that used during training. However, researchers
believe1 that these smaller networks cannot be trained as readily as their larger counterparts, in spite
of the fact that they are demonstrably capable of representing the desired functions. In this paper,
we contend that it is indeed possible to train these smaller networks directly. In fact, these small,
trainable networks are embedded within the larger models that we typically train.
This paper articulates a possible explanation for the disconnect between a neural network’s representation capacity and its trainability. This conjecture, which we term the lottery ticket hypothesis,
1
[4] mentions that “CNNs contain fragile co-adapted features” and that “gradient descent is able to find
a good solution when the network is initially trained, but not after re-initializing some layers and retraining
them...When we retrain the pruned layers, we should keep the surviving parameters instead of re-initializing
them.”
states that training succeeds for a given network if one of its subnetworks has been randomly initialized such that it could be trained in isolation—independent of the rest of the network—to high
accuracy in at most the number of iterations necessary to train the original network. We refer to these
fortuitously-initialized networks as winning tickets.
Subnetworks and winning tickets. From the perspective of the lottery ticket hypothesis, a network’s
initialization procedure can be thought of as drawing many samples from a distribution over initialized
subnetworks. Ideally, the procedure manages to draw a subnetwork with the right architecture and
weight initializations for optimization to succeed (a winning ticket). If a network of size m (where
size is measured in units or weights) is being trained to solve a problem but a network of size n
(where n ≤ m) is sufficient to represent the function to be learned, then the lottery ticket hypothesis
views the original network as containing $\binom{m}{n}$ overlapping subnetworks. If the larger network is able
to train successfully, it is because one of these subnetworks lucked into an initialization amenable to
optimization.
Metaphorically, training a network larger than is necessary to represent the function to be learned is
like buying many lottery tickets. Larger networks have combinatorially more subcomponents that
could facilitate successful training (i.e., more lottery tickets). The initialization strategy determines
which of these subcomponents are well-situated for optimization to succeed (i.e., which tickets are
winners). If a subcomponent is initialized favorably (i.e., the network picked a winning ticket),
training succeeds.
Identifying winning tickets. In this paper, we demonstrate that it is possible to automatically identify
winning tickets by making a small but critical modification to the experiment of Han et al. [4]. We
prune a trained neural network’s smallest weights (as measured by their magnitudes after training)
in the same manner as Han et al.; the set of connections that survives this pruning process is the
architecture of a winning ticket as anticipated by the lottery ticket hypothesis. Unique to our work,
the winning ticket’s weights are the values to which these connections were initialized before training
began. Where Han et al. aimed to compress networks during the training process, our goal is to
find small networks that can be trained independently from the start. We show that a winning ticket
extracted in this fashion, when initialized with the original weights from before training, can be
trained successfully in isolation at least as fast as (and typically faster than) the full network.
Methodology. To empirically assess the lottery ticket hypothesis, we use the following procedure to
extract winning tickets from fully-connected networks of a variety of sizes for MNIST and, as an
illustrative example, from small networks for XOR. This procedure is identical to Han et al.’s pruning
process with the addition of the crucial last step: resetting the weights to their original values from
before training.
1. Randomly initialize a neural network.
2. Train the network until it converges.
3. Prune a fraction of the network.
4. To extract the winning ticket, reset the weights of the remaining portion of the network to
their values from (1) (i.e., the initializations they received before training began).
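The four steps above can be condensed into a short sketch (illustrative NumPy pseudocode, not the authors' implementation; the train function and the per-layer weight dictionary are placeholders introduced here):

import numpy as np

def extract_winning_ticket(init_weights, train, prune_fraction):
    # steps 1-2: start from the stored random initializations and train to convergence
    trained = train(init_weights)
    ticket = {}
    for name, w_trained in trained.items():
        # step 3: prune the lowest-magnitude weights within each layer
        k = int(prune_fraction * w_trained.size)
        threshold = np.sort(np.abs(w_trained).ravel())[k]
        mask = np.abs(w_trained) >= threshold
        # step 4: reset the surviving connections to their original initializations
        ticket[name] = mask * init_weights[name]
    return ticket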
If successful training really does rely on fortuitous initialization of a subcomponent of the network
and pruning really does reveal this winning ticket, then the lottery ticket hypothesis predicts that
the pruned network—when reset to the original initializations from before training—will train
successfully at sizes too small for a randomly-initialized or a randomly-configured network to do so.
Research questions. To test the lottery ticket hypothesis, we evaluate the following research
questions:
How effectively do winning tickets train in comparison to the original network and to randomly sampled networks of similar size? For a variety of network configurations and perturbations of winning
tickets, we measure both convergence times and test accuracy once the network has converged.
How big are winning tickets relative to the size of the original network? By training on networks
of various sizes, we explore whether the size of a winning ticket remains constant for a particular
learning problem or grows in proportion to the size of the larger network from which it is derived.
        10 Units   8 Units   6 Units   4 Units   2 Units
DB        98.5      96.8      92.5      78.3      49.1
ZL        92.9      87.5      76.8      55.3      17.6

Figure 1: Success rates of 1000 random XOR networks, each with the specified number of hidden units. DB = percent of trials that found the correct decision boundary. ZL = percent of trials that reached zero loss.
How sensitive are our results to particular pruning strategies? We test two broad classes of strategies:
pruning hidden units with the lowest-magnitude incoming and/or outgoing weights (for XOR) and
individually pruning the lowest-magnitude weights (for MNIST). We also study whether networks
can be pruned in a single step or whether they must repeatedly be pruned, retrained, and reset in an
iterative process.
Results. Our experimental results support the lottery ticket hypothesis.
XOR. We trained a simple network with one hidden layer to learn XOR. The minimal architecture
capable of representing XOR, a hidden layer with two randomly-initialized units, reached zero loss
18% of the time. In contrast, when a network with ten hidden units that reached zero loss was
iteratively pruned down to a two-unit winning ticket, the winning ticket reached zero loss 80% of the
time when trained with its original initializations.
MNIST. Up to a certain point, winning tickets derived by pruning converged faster than and at least as
accurately as the original network; after this point, convergence times and accuracy gradually and
then rapidly dropped off. In a single step, we could prune networks by 75% while still finding winning
tickets that, on average, converged 38% faster than the original network and matched its accuracy.
Pruning iteratively by 20% at a time, winning tickets 91% smaller than the original network converged
on average 39% faster. Networks iteratively pruned by up to 98% on average still converged as fast as
the original network while maintaining accuracy. When winning tickets were randomly reinitialized
or their weights were randomly rearranged, convergence times increased and accuracy decreased as
compared to the original network. Depending on the metric of winning ticket size, winning tickets
grew either gradually or marginally with network size.
Contributions and implications.
• We propose the lottery ticket hypothesis as a new perspective on neural network training.
• We further posit that pruning uncovers the winning tickets that the lottery ticket hypothesis
predicts, leading to an algorithm for extracting winning tickets from trained networks.
• We apply this algorithm to empirically evaluate these conjectures on small networks. The
evidence we find supports both the lottery ticket hypothesis and our contention that pruning
can extract winning tickets.
Although this paper focuses mainly on measurement, it has important implications for our understanding of training. The increased representation power of large networks is not necessarily required for
gradient descent to learn functions with small representations. Lurking within these large networks
are small, fortuitously-initialized winning tickets that are both more efficient to train (as a product of
their size) and faster to converge (as a product of their initialization). By examining the initializations
and architectures of successful winning tickets, we might find new ways of designing networks that
are smaller but equally-capable (if not superior).
2 Learning the XOR Function
The XOR function is among the simplest examples that distinguish neural networks from linear
classifiers. Before presenting our results for MNIST, we summarize the lottery ticket hypothesis
as it applies to this simple computation. The XOR function has four data points: the coordinates
(0, 0), (0, 1), (1, 0), and (1, 1). The first and last points should be placed in class 0 and the middle
two points in class 1. Geometrically, this problem requires a nonlinear decision boundary. In this
experiment, we consider the family of fully connected networks for XOR with two input units, one
hidden layer (ReLU activation), and one output unit (sigmoid activation).
                      10 Units        4 Units (Pruned)    2 Units (Pruned)
Pruning Strategy      DB     ZL       DB      ZL          DB      ZL
One-shot Product      99.2   93.3     98.0    90.3        82.4    65.3
Input Magnitude       98.9   93.5     97.9    92.2        83.8    76.5
Output Magnitude      99.0   93.3     96.9    85.9        78.6    56.1
Product               98.5   92.9     97.6    90.3        91.5    79.4

Figure 2: Success rates of different pruning strategies on 1000 trials each. DB and ZL defined as in Figure 1. The pruned columns include only those runs for which both the original ten-unit network and the pruned winning ticket found the right decision boundary or reached zero loss. The first row of the table was obtained by pruning in one shot; all subsequent rows involved pruning iteratively.
Although a network of this form with two hidden units is sufficient to perfectly represent the
XOR function,2 the probability that a standard training approach—one that randomly initializes the
network’s weights and then applies gradient descent—correctly learns XOR for a network with two
hidden units is low relative to that for a larger network.
Figure 1 contains the overall success rates (percent of networks that found the right decision boundary
or reached zero loss). In 1000 training runs, a network with two hidden units learned a correct
decision boundary in only 49.1% of trials. Cross-entropy loss reached 0 (meaning the network
learned to output a hard 0 or 1) in only 17.6% of trials. Meanwhile, an otherwise identical network
outfitted with ten hidden units learned the decision boundary in 98.5% of trials and reached 0 loss in
92.9% of trials. Figure 1 charts the loss for these and other hidden layer sizes.3
To put the central question of this paper in the concrete terms of the XOR problem, why do we need
to start with a neural network with ten hidden units to ensure that training succeeds when a much
smaller neural network with two hidden units can represent the XOR function perfectly? We propose
the lottery ticket hypothesis as an explanation for this phenomenon.
The Lottery Ticket Hypothesis. Training succeeds for a given network if one of its subnetworks
(a “winning ticket”) has been randomly initialized such that it can be trained in isolation to high
accuracy in at most the number of iterations necessary to train the original network.
According to the lottery ticket hypothesis, successful networks with a large number of parameters
(e.g., the XOR network with ten hidden units) should contain winning tickets comprising a small
number of fortuitously-initialized weights on which training will still succeed.
2.1 Methodology
To test the lottery ticket hypothesis with the XOR function, we instantiated the experiment from Part
1 with the following details:
1. Randomly initialize a network with ten hidden units.
2. Train for 10,000 iterations on the entire training set.
3. Prune a certain number of hidden units according to a particular pruning heuristic.
4. To extract the winning ticket, reset the pruned network to the original initializations.
The first three steps extract the architecture of the winning ticket; the crucial final step extracts the
corresponding initializations. We ran this experiment with two different classes of pruning strategies.
One-shot pruning involves pruning the network in a single pass. For example, one-shot pruning
a network by 80% would involve removing 80% of its units after it has been trained. In contrast,
iterative pruning involves repeating steps (2) through (4) several times, removing a small portion of
2
Example satisfying weights for the first layer: $\begin{bmatrix} n & -n \\ -n & n \end{bmatrix}$. Satisfying weights for the output unit: $[\,1 \;\; 1\,]$. Satisfying bias for the output unit: −n/2. n ≥ 1. As n grows, the output approaches a hard 1 or 0.
3
Weights were sampled from a normal distribution centered at 0 with standard deviation 0.1; all values more than two standard deviations from the mean were discarded and resampled. Biases were initialized to 0. The network was trained for 10,000 iterations.
the units (in our case, two units) on each iteration. We find iterative pruning to be more effective for
extracting smaller winning tickets; Han et al. [4] found the same for compressing large networks
while maintaining accuracy.
We consider three different heuristics for determining which hidden units should be pruned:
• Input magnitude: remove the hidden unit with the smallest average input weight magnitudes.
• Output magnitude: remove the hidden unit with the smallest output weight magnitude.
• Magnitude product: remove the hidden unit with the smallest product of the magnitude of
its output weight and the sum of the magnitudes of its input weights.
The magnitude product heuristic achieved the best results, so we use it unless otherwise stated.
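For illustration, the magnitude-product score of every hidden unit can be computed in one line (a sketch; W1 is assumed to be the 2 × h input-to-hidden weight matrix and w2 the length-h vector of output weights, both hypothetical names):

import numpy as np

def magnitude_product_scores(W1, w2):
    # |output weight| times the sum of |input weights| per hidden unit; prune the smallest
    return np.abs(w2) * np.abs(W1).sum(axis=0)

# example: drop the weakest of the h units
# keep = np.argsort(magnitude_product_scores(W1, w2))[1:]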
2.2 Results
One-shot pruning. We generated 1000 networks with ten hidden units and pruned them down to
both four and two hidden units using the magnitude product heuristic. The results of doing so appear
in the first row of Figure 2. The winning tickets with two hidden units found the correct decision
boundary 82.4% of the time (up from 49.1% for randomly-initialized networks with two hidden units)
and reached zero loss 65.3% of the time (up from 17.6% of the time for a random network).
Iterative pruning. We conducted the iterative version of the pruning experiment 1000 times, starting
with networks containing ten hidden units that were eventually pruned down (in two unit increments)
to networks containing a candidate winning ticket of just two hidden units. Of the 93.5% of ten
hidden unit networks that reached zero loss, 84.9% 4 had a two hidden unit winning ticket that also
reached zero loss (as compared to 17.6% of randomly-intialized two hidden unit networks). Likewise,
of the 98.9% of ten hidden unit networks that found the correct decision boundary, 92.8% had a
two-unit winning ticket that did the same (as compared to 49.1% of randomly-initialized two hidden
unit networks). The four hidden unit winning tickets almost identically mirror the performance of the
original ten hidden unit network. They found the correct decision boundary and reached zero loss
respectively in 99% and 97% of cases where the ten hidden unit network did so. Both of these pruned
trials appear in Figure 2 (under the Magnitude Product row).
These experiments indicate that, although iterative pruning is more computationally demanding than
one-shot pruning, it finds winning tickets at a higher rate than one-shot pruning. More importantly,
they also confirm that networks with ten hidden units can be pruned down to winning tickets of two
hidden units that, when initialized to the same values as they were in the original network, succeed in
training far more frequently than a randomly initialized network with two hidden units. The winning
tickets with four hidden units succeed nearly as frequently as the ten unit networks from which
they derive. Both of these results support the lottery ticket hypothesis—that large networks contain
smaller, fortuitously-initialized winning tickets amenable to successful optimization.
In addition to the magnitude-product pruning heuristic, we also tested the input magnitude and output
magnitude heuristics. The results of doing so appear in Figure 2. The magnitude product heuristic
outperformed both. We posit that this success is due to the fact that, in the XOR case, where all input values are either 0 or 1, the product of input and output weight magnitudes should mimic the activation of the unit (and therefore its influence on the output).
3 MNIST (One-shot Pruning)
In this section and those that follow, we explore the lottery ticket hypothesis as applied to the MNIST
dataset. Here, we analyze the behavior of one-shot pruning; in the following section, we show the
additional power that iterative pruning offers.
4
These numbers are derived from the last row of Figure 2. 93.5% of networks with ten hidden units reached
zero loss. 79.4% of networks started with ten units, reached zero loss, and were pruned into two-unit networks
that also reached zero loss. 79.4% of 93.5% is 84.9%.
3.1 Methodology
We trained and pruned a network with two fully-connected layers. We used the LeNet-300-100
architecture [6], which has 784 input units (corresponding to the pixels of the 28x28 images in
MNIST), a fully-connected hidden layer with 300 units, a fully-connected hidden layer with 100
units, and ten fully-connected output units (one for each class). The hidden units have ReLU
activation functions, and the output units have softmax activation functions. By default, biases were
initialized to 0 and weights were randomly sampled from a normal distribution with mean 0 and
standard deviation 0.1 (values more than two standard deviations from the mean were discarded and
resampled). Networks were optimized using stochastic gradient descent with a learning rate of 0.05.
This section follows the experimental template from Section 1:
1. Randomly initialize the network.
2. Train for 50,000 iterations on 100-example mini-batches from the training data
3. Prune a certain percentage of the weights from within each hidden layer, removing those
with the lowest magnitudes.
4. To extract the winning ticket, reset the values of the weights of the pruned network to their
original initializations from before training.
The pruning strategy we follow for MNIST removes individual weights rather than entire units. In
preliminary experiments, we found this strategy to be more effective (Srinivas and Babu explore
pruning by unit in [12]). We use the simplest weight-by-weight pruning heuristic possible: remove
those weights with the lowest magnitudes within each hidden layer (just as in [4]). Weights connecting
to the output layer are pruned by half of the percentage at which the rest of the network is pruned to
avoid severing connectivity to the output units.
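A sketch of this per-layer, one-shot magnitude pruning (illustrative NumPy code; the dictionary layout and the layer name used to detect the output layer are assumptions, not the authors' code):

import numpy as np

def one_shot_masks(weights, prune_pct):
    # prune the lowest-magnitude weights within each layer; the output layer is
    # pruned at half the rate to avoid severing connectivity to the output units
    masks = {}
    for name, w in weights.items():
        pct = prune_pct / 2 if name == "output" else prune_pct
        cutoff = np.percentile(np.abs(w), pct)
        masks[name] = np.abs(w) > cutoff
    return masks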
3.2 Results
Figure 3: The test set accuracy on MNIST as training proceeds. These charts are zoomed into the
very highest levels of accuracy. Each curve shows the average progression of five trials of training at
the specified pruning level. Percents are the percent of the weights in each layer that remain after
pruning. The error bars show the minimum and maximum values of any one of those five trials. Dots
signify the moment when the corresponding colored line has converged, with error bars showing the
earliest and latest convergence times amongst the five trials.
Pruning’s most substantial impact was on convergence times. When pruned to between 75% and
25% of the size of the original network, the winning tickets converged on average at least 25% faster
while accuracy remained on average within 0.15% of the original network’s accuracy. The winning
ticket that was pruned to 25% of the original size converged on average 38% faster than the original
network. Further pruning caused convergence times to slowly rise and accuracy to drop.
Figure 3 shows the test set accuracy and convergence behavior of winning tickets pruned to different
levels as they were trained.5 Each curve is the average of five different runs starting from distinct,
5
We define convergence as the moment at which the 1000-iteration moving average of test accuracy changed
by less than 0.002 for 1000 consecutive iterations. We measured test accuracy every 100 iterations. According to
this definition of convergence, the one-shot-pruned winning tickets improved their test accuracy by an average of
0.0019 (standard deviation 0.00085) after convergence. We acknowledge that determining convergence times is
an imprecise art, but this metric seems to adequately characterize the behavior of convergence for our purposes.
Figure 4: The test set accuracy on MNIST as training proceeds for winning tickets of various sizes
and for winning tickets whose weights have been randomly reinitialized (control experiment 1).
randomly initialized networks; error bars indicate the minimum and maximum value that any run
took on at each point in the training process. The dots indicate the average convergence times for the
curve in the corresponding color; error bars indicate the minimum and maximum convergence times.
The left graph in Figure 3 shows that, for the first few pruning levels, convergence times decrease
and accuracy increases. A winning ticket comprising 90% of the weights from the original network
converges slightly faster than the original network but slower than a winning ticket with 75% of the
original weights. This pattern continues until the network is pruned to about 55% of the original size,
after which (as the right graph in Figure 3 shows) convergence times flatten and then, after about
35%, increase. When the winning ticket is between 10% and 15% of the original size of the network,
it returns to the performance of the unpruned network.
In the terms of the lottery ticket hypothesis, we attribute improving convergence times to the removal
of unnecessary, noisy parts of the network as pruning hones in on the winning ticket. Convergence
times reach a tipping point as pruning begins to remove weights that are essential to the winning
ticket, after which convergence times increase and accuracy decreases.
The lottery ticket hypothesis also predicts that this behavior is largely attributable to a confluence of
initialization and architecture. To test this conjecture, we ran two control experiments: (1) retain the
winning ticket’s architecture but randomize its weights and (2) retain the winning ticket’s weights but
randomize its architecture.
Control experiment 1. This experiment evaluates the extent to which initialization is a necessary
component of a winning ticket. Figure 4 shows this experiment. The curves for the original network
and winning tickets that are 75% and 35% of the original network’s size are the same as in Figure 3.
Two curves have been added for the control experiments. Each control experiment entailed training
a network that used a winning ticket’s architecture but randomly reinitialized its weights from the
original initialization distribution (N (0, 0.1)). We trained three control experiments for each winning
ticket, so the control curves are the average of 15 experiments. Unlike the winning tickets, the control
experiments converged on average more slowly than the original network, simultaneously achieving
lower levels of accuracy. These differences were substantial: the average 35% and 25% winning
tickets converged 1.91 and 2.28 times as fast as the corresponding average controls.
The error bars on convergence times reflect that the control trials exhibited a much wider variance in
behavior. For example, the earliest-converging of the 35% control trials converged faster than the
average unpruned network; however, the average 35% control trial converged 27% slower than the
average original network.
This experiment further supports the lottery ticket hypothesis’ emphasis on fortuitous initialization.
Using the same pruned architecture, the original initialization not only withstood but benefited from
pruning, while performance of the reinitialized network immediately suffered and steadily diminished
as the network was further pruned. This outcome mirrors, on a larger scale, the result of the XOR
experiment, in which networks with many hidden units could be pruned to smaller winning tickets
that found the right decision boundary at a much higher rate than randomly-initialized small networks.
Figure 5 provides a broader perspective on these patterns across all of the levels to which we pruned.
The left graph shows the convergence time in relation to the percentage of the network remaining after
pruning. The blue line is the average of the five winning ticket trials at each level. The convergence
Figure 5: The convergence times (left) and accuracies (right) when running the MNIST pruning
experiment to various degrees of pruning. The blue line is the average of five trials with different
starting initializations that prune and reuse the original initialization. Each of the multicolored lines
represents three randomly reinitialized control trials (one for each trial with the original initialization).
The error bars are the minimum and maximum value any trial takes on at each interval.
Figure 6: The convergence times and accuracy for the five winning tickets at each level of pruning
(blue line), the 15 trials where the winning ticket weights were reinitialized (orange line), and the 15
trials where the winning ticket weights were maintained but shuffled within each layer (green line).
time initially decreases before leveling off between 55% and 35% and then slowly climbing again.
In contrast, the multicolored lines, which represent the groups of control trials for each winning
ticket, steadily require longer to converge as more of the network is pruned. In the control experiment,
the error bars are much larger, suggesting wider variation in convergence times compared to the
consistent convergence times of the winning tickets.
The right graph in Figure 5 provides important context: how accurate are the networks at the moment
they converge? The average trial that used the original initialization (the blue line) maintains accuracy
within 0.15% of the original network when pruned down to 15%, after which accuracy drops off. In
contrast, the accuracy of the average control trial drops below this level when the network has been
pruned by about 30%, falling precipitously when pruned by 15% (0.6% below the original network’s
accuracy).
This experiment supports the lottery ticket hypothesis’ prediction that fortuitous initialization is a
necessary ingredient to make a winning ticket. The winning ticket’s structure alone is insufficient to
explain its success.
Control experiment 2. This experiment evaluates the extent to which architecture is a necessary
component of a winning ticket. For each winning ticket at each level of pruning, we randomly shuffled
the locations of the weights in each hidden layer while maintaining their original initializations. The
results of doing so appear in Figure 6. Just as in Figure 5, the blue line traces winning tickets pruned
to various sizes. The orange line is the average of all 15 of the trials from control experiment 1
(reinitializing the winning tickets). The green line is the average of all 15 of the trials from control
experiment 2 (shuffling the winning tickets without reinitializing). The convergence times for the two
control experiments are similar: they start increasing immediately and increase more rapidly as the
network gets smaller. The accuracy of control experiment 2 drops off slightly earlier than control
experiment 1, which itself dropped off before the winning ticket did.
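A sketch of one reading of this shuffle (ours; the function name and toy arrays are illustrative): within a layer, the surviving initial values are kept but assigned to randomly chosen positions, so the initialization distribution is preserved while the placement of connections is destroyed.

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffled_control(original_init, mask):
    """Control experiment 2: keep the winning ticket's surviving initial values but
    place them at randomly chosen positions within the same layer."""
    surviving = original_init.ravel()[mask.ravel().astype(bool)]
    new_positions = rng.choice(original_init.size, size=surviving.size, replace=False)
    shuffled = np.zeros(original_init.size)
    shuffled[new_positions] = rng.permutation(surviving)   # same values, new locations
    return shuffled.reshape(original_init.shape)

layer_init = rng.normal(0.0, 0.1, size=(300, 100))
layer_mask = rng.random((300, 100)) < 0.168                # e.g. pruned to 16.8%
control2_init = shuffled_control(layer_init, layer_mask)
```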
This experiment supports the lottery ticket hypothesis’ prediction that winning tickets emerge from a
combination of initialization and structure. Neither initialization (control experiment 1) nor structure
(control experiment 2) alone is sufficient to explain the better performance of the winning tickets.
Figure 7: The convergence times and accuracy of winning tickets extracted from fully-connected
networks for MNIST using one-shot pruning (orange) and iterative pruning (blue). Note that the
x-axis is logarithmic.
Summary. The first notable result from this set of experiments is that, even when pruned to sizes
much smaller than the original network, winning tickets are still able to converge at all. This supports
the core prediction of the lottery ticket hypothesis: pruning reveals smaller subcomponents that were
originally initialized such that they can train successfully in isolation. Not only do these networks
train successfully, but they converge faster than and maintain the accuracy of the networks from which
they derive. Furthermore, winning tickets emerge from a confluence of both fortuitous initialization
and structure.
4 MNIST (Iterative Pruning)
In the XOR experiment in Section 2, iterative pruning [4]—repeatedly training, pruning, reinitializing,
and pruning again—arrived at winning tickets that were more likely to train successfully. In this
section, we find that iterative pruning makes it possible to extract winning tickets from our MNIST
network that are far smaller than those generated by one-shot pruning.
4.1 Methodology
We use the same experimental setup (network architecture, initialization strategy, and optimization
strategy) as in Section 3. We follow a similar procedure repetitively in order to iteratively prune.
1. Randomly initialize the network.
2. Train for 50,000 iterations on 100-example mini-batches from the training data.
3. Prune 20% of the weights from within each hidden layer and 10% of the weights in the
output layer, removing those with the lowest magnitudes.
4. Reset the weight values of the pruned network to their initializations from before training.
5. Repeat steps (2) through (4) until the network has been pruned to the desired size. The result
of the last iteration of (4) is the winning ticket.
We iteratively prune the incoming weights of the first and second layers of the network by 20% and
the weights of the output layer by 10%.6 We start with a network with two fully-connected hidden
layers of 300 and 100 hidden units and prune the network until just under 1% of the original
weights remains.
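The procedure above can be written as a short loop. The sketch below is our illustration of those steps, not the original experiment code; `train` is a stand-in for 50,000 iterations of SGD on MNIST mini-batches and is not implemented here.

```python
import numpy as np

def prune_by_magnitude(trained, mask, rate):
    """Step 3: zero out the lowest-magnitude `rate` fraction of still-unpruned weights."""
    alive = np.abs(trained[mask])
    k = int(rate * alive.size)
    if k == 0:
        return mask
    threshold = np.sort(alive)[k]
    return mask & (np.abs(trained) >= threshold)

def iterative_pruning(initial_weights, train, rounds, rates):
    """initial_weights: dict layer name -> array of initial values (step 1).
    train: stand-in for training the masked network for 50,000 iterations (step 2);
    it takes (initial_weights, masks) and returns the trained weights per layer.
    rates: dict layer name -> per-round pruning rate (0.2 hidden, 0.1 output here)."""
    masks = {name: np.ones_like(w, dtype=bool) for name, w in initial_weights.items()}
    for _ in range(rounds):                                   # step 5: repeat
        trained = train(initial_weights, masks)               # step 2
        masks = {name: prune_by_magnitude(trained[name], masks[name], rates[name])
                 for name in masks}                           # step 3
        # Step 4 is implicit: the next round trains from `initial_weights` again,
        # so surviving weights are reset to their values from before training.
    return {name: initial_weights[name] * masks[name] for name in masks}, masks
```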
Comparison to one-shot pruning. Figure 7 shows the difference in convergence times and accuracy
between one-shot pruning (orange) and iterative pruning (blue). (Note that the x-axis is logarithmic
in Figure 7 and in most figures in this section.)
The average iteratively pruned winning tickets initially reach lower convergence times. These
convergence times flatten when the original network is pruned to between 41% (36% faster than
the original network) and 8.5% (38% faster than the original network) of the original network size,
as compared to between 55% (44% faster than the original network) and 40% (41% faster than the
6. As mentioned in Section 3, we prune the output layer at a lower rate to reduce the chances of severing
connectivity to any of the output units.
Figure 8: The convergence times and accuracy of winning tickets extracted by iteratively pruning and
control experiments. The blue line is the average of five winning tickets. The orange line is control
experiment 1: winning tickets that have been reinitialized. The green line is control experiment 2:
winning tickets whose weights were randomly shuffled. The red line is the performance of one-shot
pruning. The locations where the control trials cut off are those at which, according to our metric,
they no longer converged.
original network) for one-shot pruning. The average iteratively pruned network returns to the original
convergence time when pruned to 1.8% (as compared to between 5% and 10% for one-shot pruning).
Likewise, accuracy actually increases slightly for many winning tickets, returning to the original
network’s accuracy at a winning ticket size of 2.8% on average. In contrast, one-shot pruning begins
to drop off when the winning ticket is 15% of the size of the original network.
Although iterative pruning can extract much smaller winning tickets than one-shot pruning, it is far
more costly to find those winning tickets. Extracting a winning ticket with one-shot pruning requires
training the original network a single time, regardless of how much the network is pruned. In contrast,
iteratively pruning a network by 20% at each iteration until it is about 5% of the original network’s
size requires training the network 14 times. However, since our goal is to understand the behavior
of winning tickets rather than to find them efficiently, iterative pruning’s compelling advantage is
that it is able to extract smaller winning tickets that maintain convergence and accuracy performance,
placing a tighter upper-bound on the size of a network’s winning ticket.
4.2 Results
In this section, we re-run the control experiments from Section 3.1. Just as before, we aim to explore
the extent to which architecture and initialization are responsible for a winning ticket’s ability to
continue to converge at such small sizes. Figure 8 contains the average results of performing control
experiment 1 (randomly reinitializing the winning ticket’s weights) in orange and control experiment
2 (randomly shuffling the winning ticket’s weights) in green. For comparison, the curve in red is the
performance of one-shot pruning.
Control experiment 1. Just as with one-shot pruning, average convergence times for control
experiment 1 begin increasing as soon as the network is pruned and continue to grow at a steady
rate. The error bars in Figure 8 reflect that convergence times vary widely for pruned networks that
are reinitialized. The average control trial’s accuracy begins dropping off (more than 0.15% lower
than the original network's accuracy) when the network is pruned to about 31%, whereas the average
iteratively pruned network drops below this level when pruned to 1.5%. Just as with the one-shot
experiment, this control trial indicates that initialization plays a critical role in making a winning
ticket.
Control experiment 2. Average convergence times for control trial 2 increase steadily in a pattern
similar to those from control trial 1. Error bars indicate that these convergence times similarly vary
widely. Accuracy begins dropping off earlier, at about 51.2%, and more steeply, potentially suggesting
that architecture might be more important than initialization.
Summary. The control experiments for iterative pruning put the results from Section 3 in sharper
relief. Iterative pruning makes it possible to extract smaller winning tickets than from one-shot
pruning that reach lower convergence times than the original network while maintaining or exceeding
its level of accuracy. The control experiments show that both initialization and network architecture
Figure 9: The distributions of the initializations of weights that survived iterative pruning across ten
iterative pruning runs. The graphs contain the initializations of the network pruned to (left to right)
100%, 51.2%, 16.8%, and 5.50%. The blue, orange, and green lines are the distributions of initial
weights for the first hidden layer, second hidden layer, and output layer, respectively.
factor into creating a winning ticket, with control trial 2 suggesting that network architecture
might be slightly more important.
The experiments on XOR and MNIST support the lottery ticket hypothesis. Embedded within larger
networks are small subcomponents that are fortuitously initialized in a manner conducive to successful
training. We extracted the winning ticket architectures by pruning and determined the corresponding
initializations by resetting the winning ticket's connections to their original values from before training.
These networks not only trained successfully but, in the case of iterative pruning with the MNIST
network, converged faster and more accurately. Meanwhile, neither architecture nor initialization
alone could entirely account for this result.
We next investigate the architecture and initializations of these small winning tickets (Section 5) and
the behavior of winning tickets when subjected to a wider variety of parameters (Section 6).
5 Examining Winning Tickets
In this section, we briefly explore the internal structure of winning tickets that result from iteratively pruning our MNIST network. We have already found evidence to support the claim that winning
tickets arise out of a confluence of architecture and initialization. What exactly do those architectures
and initializations look like?
Initializations. Figure 9 shows the initialization distributions for winning tickets at four different
levels of pruning. Note that these are the values of the winning ticket’s weights from before training.
The graph in the upper left contains the initial weights of the entire network (no pruning), which are
initialized according to a normal distribution with mean 0 and standard deviation 0.1. The graph
in the upper right contains the weights after iteratively pruning the network down to 51.2% of its
original size. The blue, orange, and green lines are the distributions of initial
hidden layer, second hidden layer, and output layer, respectively.
At 51.2%, the remaining weights already show the impact of pruning. The first and second hidden
layer’s distributions are bimodal, with two peaks mirrored opposite 0. Since these distributions plot
the original initializations of the weights that survive the pruning process (i.e., the weights before
training), these distributions were created by removing samples from a formerly normal distribution.
These peaks on the 51.2% graph appear to be the left and right tails of the original normal distribution.
The missing weights have been pruned.
Interestingly, pruning occurs after training and these are graphs of weights before training. In other
words, for these distributions to emerge, small weights from before training must have remained
small after training. The second hidden layer (orange) retains more of its center than the first hidden
layer, indicating that those weights likely moved more during training. The output distribution (green)
more closely resembles the original normal distribution, indicating that its weights probably moved
significantly during training. One other contributing factor to the output distribution is that we prune
it at a slower rate, meaning that the effects of pruning may take longer to appear.
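The distributions in Figure 9 can be recomputed from the stored initializations and the final masks. A small sketch (ours; the stand-in mask below prunes on the initial magnitudes themselves, which, per the observation above that small weights tend to stay small, yields the same hollowed-out bimodal shape):

```python
import numpy as np

def surviving_init_histogram(initial_weights, mask, bins=50):
    """Histogram (density) of the *initial* values of the weights that survive pruning."""
    survivors = initial_weights[mask.astype(bool)]
    return np.histogram(survivors, bins=bins, range=(-0.4, 0.4), density=True)

rng = np.random.default_rng(0)
init = rng.normal(0.0, 0.1, size=(784, 300))
# Stand-in mask: prune on the initial magnitudes (illustrative only).
mask = np.abs(init) > np.quantile(np.abs(init), 1 - 0.168)   # keep ~16.8% of weights
density, edges = surviving_init_histogram(init, mask)
print("density near zero is empty:", density[len(density) // 2] == 0.0)
```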
Figure 10: For each unit in the current layer, how many units in the previous layer connect to it?
The left graph is for the first hidden layer. The middle graph is for the second hidden layer. The
right graph is for the output layer. The blue, orange, and green lines are for the winning tickets when
iteratively pruned to 80%, 16.8%, and 5.50%, respectively. Each point on the line represents a single
unit; the units have been sorted in descending order by the number of connections they have. These
data points were collected over ten trials.
The pattern for 51.2% pruning plays out in more extreme form at 16.8% (lower left graph) and 5.50%
(lower right graph). The middles of the first and second hidden layer distributions continue to get
hollowed out, and the same happens (albeit more slowly) to the output distribution. Even with these
extreme-looking input distributions, the corresponding networks converged faster than the original
network and retained the same accuracy.
Considering the extent to which the particular pruning strategy we pursued left its imprint on the
winning tickets, it is worth considering the impact that other pruning strategies would have and, more
broadly, whether the winning tickets we found are a product of the pruning strategy we pursued
or whether the pruning strategy we pursued happens to exploit a deeper reality of the way neural
networks behave.
Architecture. As the network is pruned, it becomes sparser. Figure 10 shows the distributions of
surviving connections aggregated across ten trials7 between a unit in layer n and units in layer n − 1
when the network is pruned to 80% (blue), 16.8% (orange), and 5.50% (green) of its original size.
The left, middle, and right graphs show the first hidden layer, second hidden layer, and output layer.
When pruned to 80%, the network remains almost fully-connected, with only slight differences
between the units with the most and least connections. As the network is further pruned, units in
the first hidden layer continue to have a roughly equal number of connections to units in the input
layer. Even when the network is pruned to 5.50%, only a small fraction of the hidden units in the first
layer have been eliminated entirely. The second hidden layer becomes less evenly connected as more
weights are pruned. By the time the network is pruned to 5.50%, nearly a third of the units in the
second hidden layer have been fully disconnected, and there is a steep decline from the best-connected
units to the worst-connected. The output layer shows a less severe slope, likely because every output
unit serves a clear function and because we prune the output layer at a slower rate.
The winning tickets are quite sparse. Even when the network is pruned by nearly 95%, only a fraction
of the units have been eliminated entirely. No units maintain a large number of connections after
pruning; instead, nearly all units retain a proportionally small number of connections.
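The per-unit connection counts plotted in Figure 10 come directly from the pruning masks. A brief sketch of that computation (ours; the uniformly random mask below merely exercises the function and will not disconnect units the way real pruning does):

```python
import numpy as np

def incoming_connections(mask):
    """For each unit in the current layer, count surviving connections from the
    previous layer, sorted in descending order (the quantity plotted in Figure 10)."""
    counts = mask.astype(int).sum(axis=0)      # mask shape: (prev_units, curr_units)
    return np.sort(counts)[::-1]

rng = np.random.default_rng(0)
mask_hidden2 = rng.random((300, 100)) < 0.055   # second hidden layer pruned to 5.50%
counts = incoming_connections(mask_hidden2)
print("best-connected unit:", counts[0],
      "fully disconnected units:", int((counts == 0).sum()))
```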
6 Exploring MNIST Parameters
This section explores the sensitivity of the MNIST results to the parameters of the lottery ticket
experiment. Namely, we explore the role that initialization and network size play in the properties of
the winning tickets that emerge.
6.1 Initialization
Although our default network was initialized from a normal distribution with mean 0 and standard
deviation 0.1, we experimented with several other standard deviations to explore the effect of larger
7. We also removed any edges that did not have a path to an output unit.
Figure 11: The convergence times and accuracy for groups of five winning tickets initialized with
various standard deviations ≥ 0.1.
Figure 12: The convergence times and accuracy for groups of five winning tickets initialized with
various standard deviations ≤ 0.1.
and smaller weights on the behavior of winning tickets. One might expect that our pruning strategy
would be especially vulnerable to initializing network weights too large: by selecting for the highest-magnitude weights, it might exacerbate exploding gradients. Likewise, it might be resilient to
initializing network weights too small, since it will select for the largest weights after training.
In this section, we present the results using the one-shot pruning strategy. The results for iterative
pruning were similar.
Figure 11 shows the convergence times and accuracy for winning tickets of networks initialized with
standard deviations larger than 0.1 (0.2, 0.4, and 0.8). As expected, convergence times increase and
accuracy decreases as the standard deviations increase. We did not explore whether the extent to
which this behavior resulted from exploding gradients or weaknesses in the pruning strategy.
Figure 12 contains the same information for winning tickets of networks initialized with standard
deviations smaller than 0.1. A standard deviation of 0.1 produces the fastest convergence times but
cedes a certain amount of accuracy in doing so. In contrast, a standard deviation of 0.025 causes
winning tickets to converge more slowly but to higher-accuracy optima. This behavior suggests that
there are sweet spots for both convergence times (stddev=0.1) and accuracy (stddev=0.025) and a
tradeoff-space in between.
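The sweep behind Figures 11 and 12 is just the one-shot experiment repeated over initialization scales. A minimal sketch (ours; `run_one_shot_experiment` is a placeholder for the full training-and-pruning pipeline and is not implemented here):

```python
import numpy as np

def init_network(sizes, sigma, seed=0):
    """Fully-connected MNIST network (784-300-100-10) initialized from N(0, sigma)."""
    rng = np.random.default_rng(seed)
    return [rng.normal(0.0, sigma, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def sweep_init_scale(run_one_shot_experiment, sigmas=(0.025, 0.05, 0.1, 0.2, 0.4, 0.8)):
    """Repeat the one-shot lottery ticket experiment over initialization scales."""
    results = {}
    for sigma in sigmas:
        weights = init_network([784, 300, 100, 10], sigma)
        results[sigma] = run_one_shot_experiment(weights)  # -> (convergence time, accuracy)
    return results
```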
6.2 Network Size
We experimented with increasing the size from the default network (layers of 300 and 100 hidden
units) in order to determine whether there is a fixed winning ticket size for a particular learning
problem, or whether larger networks naturally beget larger winning tickets. We consider two possible
definitions of the “size” of a network’s winning ticket:
• A winning ticket is the minimal network that minimizes convergence time. Since convergence times initially decrease with pruning, this heuristic looks for the winning ticket with
the lowest possible convergence time.
• A winning ticket is the minimal network that retains the accuracy of the original network.
Accuracy remains relatively flat as smaller and smaller winning tickets are created; it then
Figure 13: The convergence times and accuracy for groups of five winning tickets extracted from
networks of various sizes with the one-shot pruning strategy. Error bars were elided to improve
readability. The legend contains the size of the network (e.g., 300-100 means a network with hidden
layers of 300 and 100 units). All networks were initialized with a standard deviation of 0.05.
Figure 14: The convergence times and accuracy for groups of five winning tickets extracted iteratively
from networks of various sizes. Error bars were elided to improve readability. All networks were
initialized with a standard deviation of 0.05.
reaches a tipping point and drops off rapidly. This definition considers the winning ticket to
be the last moment before this accuracy drop-off takes place.
One-shot pruning. We trained networks whose sizes were multiples of the original network size.
The results of doing so and applying the one-shot pruning strategy appear in Figure 13, which plots
convergence times and accuracy according to the number of weights in the winning ticket.
According to the convergence-based definition of a winning ticket, the winning ticket sizes increase
gradually with the size of the network. The LeNet-300-100 architecture appears to reach this point at
about 140,000 weights, whereas the LeNet-600-300 does so at about 200,000 weights. The same pattern
holds for the larger architectures. Larger networks are capable of representing more sophisticated
functions, so pruning larger networks may produce different network architectures that exploit this
additional representation capacity to converge faster. Indeed, the larger the network, the lower the
convergence times its winning tickets were able to achieve and the larger the size at which it reached
them.
The accuracy-based definition of a winning ticket agreed. As the bottom graph of Figure 13 illustrates,
the accuracy of larger networks dropped off steeply at slightly earlier times than the accuracy of
smaller networks. However, these differences were quite small—on the order of tens of thousands of
weights. Although winning ticket size does seem to increase with network size by this definition, the
changes were very slight and winning ticket sizes were close to uniform.
Iterative pruning. As Figure 14 reflects, the convergence and accuracy trends for iteratively pruning
larger networks remain the same as in the one-shot case. Larger networks reach their minimum
convergence times at gradually larger sizes, but accuracy plummets in unison. There are two key
differences worth noting in the iterative case.
Figure 15: The convergence times and accuracy of winning tickets extracted by iteratively pruning by
different rates on each iteration. Error bars have been elided for readability. Note that the x-axis is
not logarithmic.
First, both minimum convergence times and accuracy dropoffs occur at much smaller network sizes
than in the one-shot experiments. This result coincides with the other iterative experiments, which
demonstrate that iterative pruning creates winning tickets that can be pruned to much smaller sizes
before convergence times increase and accuracy diminishes. Whereas the accuracy dropoff took
place when the networks had about 150,000 weights in the one-shot experiments, it occurs when the
iteratively-derived winning tickets have just tens of thousands of weights.
Second, each of the accuracy graphs has a small bulge upwards just before dropping off, indicating
that accuracy actually increases slightly when the winning tickets are smallest. These bulges occur at
the same winning ticket size in all cases, regardless of the initial size of the network.
Summary. The analysis in this subsection leaves many open questions for future research. Although
we do not undertake extensive analysis of the internal structure of winning tickets in this study,
comparing equally-sized winning tickets derived from networks of different sizes would shed light
on the extent to which the winning tickets themselves are similar or different between various initial
network sizes.
6.3 Exploring Iterative Pruning Rates
Choosing the exact rate at which to prune on each iteration of iterative pruning entails balancing
the performance of the resulting winning ticket with the number of iterations necessary to extract
that winning ticket. Figure 15 shows the convergence times and accuracy for the LeNet-300-100
architecture iteratively pruned at different rates on each iteration. (Note that the x-axis is not
logarithmic.) This experiment can be thought of as exploring middle grounds between one-shot
pruning and iteratively pruning at a small rate.
Although pruning by a larger percentage (e.g., 70% or 50%) on each iteration reaches smaller winning
tickets faster, those winning tickets are pruned too aggressively and fail to match the convergence
times or accuracy of winning tickets pruned more slowly. On the other end of the spectrum, iteratively
pruning by 10% appears to achieve the best convergence times and accuracy but would require
training the network 28 times to extract a winning ticket 5% of the original network’s size. For our
experiments, we prune by 20%, which balances performance with the amount of training required.
6.4 Weight Resetting
Before each training iteration of our iterative pruning approach, we reset the weights of the unpruned
connections to their original values from before training. Doing so is part of our experiment to
evaluate the lottery ticket hypothesis: exploring how well winning tickets obtained by pruning train
in isolation. We conjecture that resetting before each training iteration makes it easier to find small
winning tickets. In effect, each iteration is a recursive pruning problem in which a subnetwork that
trains effectively when starting from the original initializations must be pruned to a slightly smaller
network that does the same.
In contrast, Han et al. [4] interleave training and pruning without ever resetting weights. After a
round of training, low-magnitude weights are pruned and then training continues based on the trained
Figure 16: The convergence times and accuracy of winning tickets extracted by iteratively pruning
using weight resetting between iterations (our strategy, in blue) and by continuing to use the trained
weights after pruning (Han et al.'s strategy [4], in orange).
weights. These differences in approaches reflect two different goals: Han et al. want to produce the
smallest possible trained network, while we wish to find a pruned network that trains successfully
from the start.
Figure 16 shows the convergence times and accuracy achieved by the winning tickets extracted using
these two pruning strategies. To simulate Han et al.’s strategy, we iteratively trained a network, pruned
low-magnitude weights, and continued training using the trained weights. At each iteration, we
copied the resulting network, reset its weights to the original initializations, and trained the network
to obtain the results in Figure 16.
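The two strategies differ only in which weights the next round of training starts from. A compact sketch of that distinction (ours, not either paper's code; `train` again stands in for a full round of training):

```python
import numpy as np

def prune_round(trained, mask, rate=0.2):
    """Remove the lowest-magnitude `rate` fraction of the still-unpruned weights."""
    alive = np.abs(trained[mask])
    threshold = np.sort(alive)[int(rate * alive.size)]
    return mask & (np.abs(trained) >= threshold)

def iterate(initial, train, rounds, reset=True):
    """reset=True  -> our strategy: every round retrains from the original initialization.
    reset=False -> Han et al.-style: every round continues from the trained weights."""
    mask = np.ones_like(initial, dtype=bool)
    start = initial
    for _ in range(rounds):
        trained = train(start, mask)       # stand-in for a full round of training
        mask = prune_round(trained, mask)
        start = initial if reset else trained
    return start * mask, mask
```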
As Figure 16 shows, Han et al.'s pruning strategy is quite effective at finding small networks that train
successfully, although our strategy of resetting weights at each iteration maintains lower convergence
times and higher accuracy for slightly longer. However, since Figure 16 is on a logarithmic scale,
these differences appear only at very small network sizes.
7 Related Work
Pruning. LeCun et al. first explored pruning as a way to reduce the size of neural networks [7]; they
pruned based on the second derivative of the loss function with respect to each weight. Hassibi et
al. [5] build on this approach.
More recently, Han et al. [4, 3, 2] showed that these techniques could be used to substantially reduce
the size of modern image-recognition networks. Since then, a rich variety of neural network pruning
approaches have emerged (e.g., pruning smallest weights [4], pruning units in a Bayesian fashion [9],
pruning entire convolutional filters [8, 10], fusing redundant units to increase network diversity [11]).
The goal of this literature on pruning is to compress trained neural networks, reducing the size of
a large model such that it can run efficiently on a restricted computational platform (e.g., a mobile
device) without sacrificing accuracy. In contrast, we aim to make it possible to train small neural
networks from the start.
In [4] and follow-up work, network compression takes place in three iterative steps. First, a large
network is trained. Second, weights or units are pruned according to a heuristic. Third, the network
is further trained using the already-trained weights. Han et al. find that, without this third retraining
step, network performance drops off much earlier in the pruning process. Han et al. also caution that
the pruned network should not be re-initialized after training, but do not consider reusing the values
to which the surviving weights were initialized in the original network as we do.
Our work builds off of the literature on pruning by shedding light on the mechanisms that make
pruning possible. The fact that networks can be pruned while maintaining accuracy indicates that the
function to be learned can be represented with a much smaller network than the one used for training.
We aim to understand why pruning is possible and investigate whether small networks can be trained
directly (rather than pruning large networks to smaller sizes after training).
The lottery ticket hypothesis posits that large networks have small, fortuitously-initialized subnetworks
that facilitate successful training. From this point of view, neural network pruning finds these winning
tickets. To evaluate the lottery ticket hypothesis on small, fully-connected networks, we leverage Han
et al.’s experimental approach, except that we make a crucial modification: after pruning we reset
each weight to its original value.
Our results explain or complement those of Han et al. The lottery ticket hypothesis offers insight
into why Han et al. are able to prune networks. Many of the trends we see (e.g., that the accuracy
of iteratively-pruned winning tickets drops off at very small winning ticket sizes or that the original
initializations of pruned networks take on a bimodal distribution) parallel those that Han et al. find
when continuing to train pruned networks based on their trained weights.
Dropout. Dropout [13] creates a smaller subnetwork for each training iteration by randomly removing a subset of the units (or weights). At inference-time, a unit’s activation is reduced by the
probability that it was dropped out. Intuitively, dropout is intended to reduce overfitting and improve
generalization by forcing units to remain robust to changes in the network. Follow-up work on
dropout [1] has characterized training with dropout as “perform[ing] gradient descent...with respect
to...the ensemble of all possible subnetworks” and inference with dropout as approximately computing
the average over this ensemble.
In the terminology of dropout, our experiment aims to discover a single, particularly successful
member of this ensemble of subnetworks. Our dropout heuristic is that, after training the network
once without dropout, we drop out the lowest k% of weights (by magnitude after training) with
probability 1 and all other weights with probability 0. In other words, we perform an extremely
aggressive, coarse-grained form of dropout based on examining the results of training the network
once without dropout.
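Stated as code, the "extremely aggressive, coarse-grained" dropout described above is a deterministic mask computed once from the trained weights (our sketch, not an actual dropout layer; the value of k is illustrative):

```python
import numpy as np

def coarse_grained_dropout_mask(trained_weights, k=0.85):
    """Drop the lowest k fraction of weights (by magnitude after training) with
    probability 1 and keep every other weight with probability 1."""
    threshold = np.quantile(np.abs(trained_weights), k)
    return np.abs(trained_weights) >= threshold
```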
However, our goal is different. Dropout is designed to regularize a network during training, a process
that can be used to produce sparse networks. We aim to directly find small (and, in the case of the
networks we found, sparse) networks that can be trained from start to finish without removing further
weights.
Our broader formulation of the lottery ticket hypothesis does closely relate to dropout’s notion of
ensemble learning. The lottery ticket hypothesis views a randomly-initialized large network as a
collection of a combinatorial number of small networks (i.e., lottery tickets) of which one (i.e., the
winning ticket) must be initialized fortuitously to enable training to succeed. From this point of view,
a large network begins with the possibility of coalescing toward one of an exponential number of
subnetworks, and gradient descent drives it toward the subnetwork comprising the winning ticket that
we find.
8 Limitations
This work is limited in several ways. We only examine fully-connected networks, and for two of the
smallest possible examples (XOR and MNIST). We do not consider convolutional networks or larger
networks that better reflect real-world examples.8 Our evidence for the lottery ticket hypothesis is
purely experimental; we do not offer any theoretical analysis to formally support this claim. Finally,
although we analyze the structure and initialization distributions of winning tickets for MNIST, we
have yet to devise a way to turn these observations into useful strategies for training smaller networks.
We anticipate exploring these avenues in future work and updating this paper as we do so.
9 Conclusions and Future Work
This paper proposes a new hypothesis to explain why large neural networks are amenable to substantial
pruning yet the pruned networks cannot be trained effectively from scratch. This conjecture, known as
the lottery ticket hypothesis, holds that training succeeds when a subcomponent of the larger network
is randomly initialized in a fashion that is suitable for optimization. Furthermore, it conjectures
that pruning uncovers these winning tickets. To empirically evaluate this hypothesis, we devised an
experiment based on the work of Han et al. [4] where, after pruning a trained network, remaining
weights are reset to their original initializations. If the lottery ticket hypothesis holds and pruning
8. However, preliminary experiments with CIFAR10 on a convolutional network reflect the same behavior
described in this paper for MNIST.
uncovers these winning tickets, then these pruned networks should train successfully in isolation
when reset to their original initializations.
On XOR, we found that winning tickets derived from larger networks were able to learn the decision
boundary and reach zero loss far more frequently than those that were randomly initialized. On
MNIST, winning tickets converged more quickly and reached higher accuracy than the original
network. Control experiments supported the claim that winning tickets represent a confluence of
fortuitous initialization and network architecture.
This paper articulates a new perspective on neural network training and supports that view empirically.
Now that a foundation has been laid, there are numerous research directions to further evaluate the
lottery ticket hypothesis and exploit this perspective to improve network design and training.
Larger examples. The largest network that we examine is a fully-connected network for MNIST.
Repeating the experiments outlined in this paper for a convolutional network, larger networks, and
harder learning tasks would make it possible to understand whether the lottery ticket hypothesis holds
more generally and how it manifests in these settings.
Understanding winning tickets. This paper focuses mainly on the behavioral properties of the
lottery ticket hypothesis, pruning, and winning tickets. One logical next step is to systematically
analyze the architectures and initializations of lottery tickets. To what extent are winning tickets
unique artifacts created by randomly initializing large networks and getting lucky? To what extent
is there common structure between multiple winning tickets for the same task? What do winning
tickets tell us about the functions that neural networks learn for particular tasks?
Lottery ticket networks. The lottery ticket hypothesis and the existence of winning tickets demonstrate that small networks can be trained from start to finish. The most concrete follow-up to this
work would be to exploit the lessons learned by leveraging winning tickets to develop new network
architectures and initialization regimes that allow smaller networks to be trained for a wider variety
of learning tasks. Doing so could reduce the amount of computation needed to train neural networks.
References
[1] Pierre Baldi and Peter J Sadowski. 2013. Understanding dropout. In Advances in neural
information processing systems. 2814–2822.
[2] Song Han, Huizi Mao, and William J. Dally. 2015. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding. CoRR abs/1510.00149
(2015). arXiv:1510.00149 http://arxiv.org/abs/1510.00149
[3] Song Han, Huizi Mao, and William J Dally. 2015. A deep neural network compression pipeline:
Pruning, quantization, Huffman encoding. arXiv preprint arXiv:1510.00149 10 (2015).
[4] Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems.
1135–1143.
[5] Babak Hassibi, David G Stork, and Gregory J Wolff. 1993. Optimal brain surgeon and general
network pruning. In Neural Networks, 1993., IEEE International Conference on. IEEE, 293–
299.
[6] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning
applied to document recognition. Proc. IEEE 86, 11 (1998), 2278–2324.
[7] Yann LeCun, John S Denker, and Sara A Solla. 1990. Optimal brain damage. In Advances in
neural information processing systems. 598–605.
[8] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. 2016. Pruning
filters for efficient convnets. arXiv preprint arXiv:1608.08710 (2016).
[9] Christos Louizos, Karen Ullrich, and Max Welling. 2017. Bayesian compression for deep
learning. In Advances in Neural Information Processing Systems. 3290–3300.
[10] Jian-Hao Luo, Jianxin Wu, and Weiyao Lin. 2017. Thinet: A filter level pruning method for
deep neural network compression. arXiv preprint arXiv:1707.06342 (2017).
[11] Zelda Mariet and Suvrit Sra. 2015. Diversity Networks: Neural Network Compression Using
Determinantal Point Processes. arXiv preprint arXiv:1511.05077 (2015).
[12] Suraj Srinivas and R Venkatesh Babu. 2015. Data-free parameter pruning for deep neural
networks. arXiv preprint arXiv:1507.06149 (2015).
[13] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.
2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of
Machine Learning Research 15, 1 (2014), 1929–1958.
| 2 |
arXiv:1211.3445v2 [] 9 Sep 2013
K-GROUPS FOR RINGS OF FINITE COHEN–MACAULAY TYPE
HENRIK HOLM
Abstract. For a local Cohen–Macaulay ring R of finite CM-type, Yoshino has
applied methods of Auslander and Reiten to compute the Grothendieck group
K0 of the category mod R of finitely generated R-modules. For the same type of
rings, we compute in this paper the first Quillen K-group K1 (mod R). We also
describe the group homomorphism R∗ → K1 (mod R) induced by the inclusion
functor proj R → mod R and illustrate our results with concrete examples.
1. Introduction
Throughout this introduction, R denotes a commutative noetherian local Cohen–
Macaulay ring. The lower K-groups of R are known: K0 (R) ∼= Z and K1 (R) ∼= R∗ .
For n ∈ {0, 1} the classical K-group Kn (R) of the ring coincides with Quillen's K-group Kn (proj R) of the exact category of finitely generated projective R-modules;
and if R is regular, then Quillen's resolution theorem shows that the inclusion
functor proj R → mod R induces an isomorphism Kn (proj R) ∼= Kn (mod R). If R is
non-regular, then these groups are usually not isomorphic. The groups Kn (mod R)
are often denoted Gn (R) and they are classical objects of study called the G-theory
of R. A celebrated result of Quillen is that G-theory is well-behaved under (Laurent)
polynomial extensions: Gn (R[t]) ∼= Gn (R) and Gn (R[t, t−1 ]) ∼= Gn (R) ⊕ Gn−1 (R).
Auslander and Reiten [4] and Butler [9] computed K0 (mod Λ) for an Artin algebra Λ of finite representation type. Using similar techniques, Yoshino [32] computed
K0 (mod R) in the case where R has finite (as opposed to tame or wild) CM-type:
Theorem (Yoshino [32, thm. (13.7)]). Assume that R is henselian and that it
has a dualizing module. If R has finite CM-type, then there is a group isomorphism,
K0 (mod R) ∼= Coker Υ ,
where Υ : Zt → Zt+1 is the Auslander–Reiten homomorphism from (2.3).
We mention that Yoshino's result is as much a contribution to algebraic K-theory as it is to the representation theory of the category MCM R of maximal
Cohen–Macaulay R-modules. Indeed, the inclusion functor MCM R → mod R induces an isomorphism Kn (MCM R) ∼= Kn (mod R) for every n. The theory of maximal Cohen–Macaulay modules, which originates from algebraic geometry and integral representations of finite groups, is a highly active area of research.
In this paper, we build upon results and techniques of Auslander and Reiten [4],
Bass [7], Lam [20], Leuschke [21], Quillen [24], Vaserstein [28, 29], and Yoshino [32]
2010 Mathematics Subject Classification. 13C14, 13D15, 19B28.
Key words and phrases. Auslander–Reiten sequence; Bass’ universal determinant group; finite
Cohen–Macaulay type; maximal Cohen–Macaulay module; Quillen’s K-theory.
to compute the group K1 (mod R) when R has finite CM-type. Our main result is
Theorem (2.12); it asserts that there is an isomorphism,
K1 (mod R) ∼= AutR (M )ab /Ξ ,
where M is any representation generator of the category of maximal Cohen–Macaulay R-modules and AutR (M )ab is the abelianization of its automorphism group.
The subgroup Ξ is more complicated to describe; it is determined by the Auslander–
Reiten sequences and defined in (2.10). Observe that in contrast to K0 (mod R), the
group K1 (mod R) is usually not finitely generated.
We also prove that if one writes M = R ⊕ M ′ , then the group homomorphism
R∗ ∼= K1 (proj R) → K1 (mod R) induced by the inclusion functor proj R → mod R
can be identified with the map
\[
\lambda : R^{*} \longrightarrow \operatorname{Aut}_R(M)_{\mathrm{ab}}/\Xi
\qquad\text{given by}\qquad
r \longmapsto
\begin{pmatrix} r\,1_R & 0 \\ 0 & 1_{M'} \end{pmatrix}.
\]
The paper is organized as follows: In Section 2 we formulate our main result,
Theorem (2.12). This theorem is not proved until Section 8, and the intermediate Sections 3 (on the Gersten–Sherman transformation), 4 (on Auslander’s and
Reiten’s theory for coherent pairs), 5 (on Vaserstein’s result for semilocal rings), 6
(on certain equivalences of categories), and 7 (on Yoshino’s results for the abelian
category Y) prepare the ground.
In Sections 9 and 10 we apply our main theorem to compute the group K1 (mod R)
and the homomorphism λ : R∗ → K1 (mod R) in some concrete examples. E.g. for
the simple curve singularity R = k[[T^2, T^3]] we obtain K1 (mod R) ∼= k[[T]]^* and show
that the homomorphism λ : k[[T^2, T^3]]^* → k[[T]]^* is the inclusion. It is well-known
that if R is artinian with residue field k, then one has K1 (mod R) ∼= k^*. We apply
Theorem (2.12) to confirm this isomorphism for the ring R = k[X]/(X^2) of dual
numbers and to show that the homomorphism λ : R^* → k^* is given by a + bX 7→ a^2.
We end this introduction by mentioning a related preprint [23] of Navkal. Although the present work and the paper of Navkal have been written completely
independently (this fact is also pointed out in the latest version of [23]), there is a
significant overlap between the two manuscripts: Navkal’s main result [23, thm. 1.2]
is the existence of a long exact sequence involving the G-theory of the rings R and
EndR (M )op (where M is a particular representation generator of the category of
maximal Cohen–Macaulay R-modules) and the K-theory of certain division rings.
In Section 5 in loc. cit., Navkal applies his main result to give some description of
the group K1 (mod R) for the ring R = k[[T 2 , T 2n+1 ]] where n > 1. We point out
that the techniques used in this paper and in Navkal’s work are quite different.
2. Formulation of the Main Theorem
Let R be a commutative noetherian local Cohen–Macaulay ring. By mod R we
denote the abelian category of finitely generated R-modules. The exact categories
of finitely generated projective modules and of maximal Cohen–Macaulay modules
over R are written proj R and MCM R, respectively. The goal of this section is to
state our main Theorem (2.12); its proof is postponed to Section 8.
(2.1) Setup. Throughout this paper, (R, m, k) is a commutative noetherian local
Cohen–Macaulay ring satisfying the following assumptions.
(1) R is henselian.
(2) R admits a dualizing module.
(3) R has finite CM-type, that is, up to isomorphism, there are only finitely many
non-isomorphic indecomposable maximal Cohen–Macaulay R-modules.
Note that (1) and (2) hold if R is m-adically complete. Since R is henselian, the
category mod R is Krull–Schmidt by [32, prop. (1.18)]; this fact will be important
a number of times in this paper.
Set M0 = R and let M1 , . . . , Mt be a set of representatives for the isomorphism
classes of non-free indecomposable maximal Cohen–Macaulay R-modules. Let M
be any representation generator of MCM R, that is, a finitely generated R-module
such that addR M = MCM R (where addR M denotes the category of R-modules
that are isomorphic to a direct summand of some finite direct sum of copies of M ).
For example, M could be the square-free module
(2.1.1)
M = M0 ⊕ M1 ⊕ · · · ⊕ Mt .
We denote by E = EndR (M ) the endomorphism ring of M .
It follows from [32, thm. (4.22)] that R is an isolated singularity, and hence by
loc. cit. thm. (3.2) the category MCM R admits Auslander–Reiten sequences. Let
(2.1.2)
0 −→ τ (Mj ) −→ Xj −→ Mj −→ 0
(1 ≤ j ≤ t)
be the Auslander–Reiten sequence in MCM R ending in Mj , where τ is the Auslander–Reiten translation.
(2.2) Remark. The one-dimensional Cohen–Macaulay rings of finite CM-type are
classified by Cimen [10, 11], Drozd and Roı̆ter [12], Green and Reiner [18], and
Wiegand [30, 31]. The two-dimensional complete Cohen–Macaulay rings of finite
CM-type that contains the complex numbers are classified by Auslander [2], Esnault
[14], and Herzog [19]. They are the invariant rings R = C[[X, Y ]]G where G is a nontrivial finite subgroup of GL2 (C). In this case, M = C[[X, Y ]] is a representation
generator for MCM R which, unlike the one in (2.1.1), need not be square-free.
(2.3) Definition. For each Auslander–Reiten sequence (2.1.2) we have
n
n
n
Xj ∼
= M0 0j ⊕ M1 1j ⊕ · · · ⊕ Mt tj
for uniquely determined n0j , n1j , . . . , ntj > 0. Consider the element,
τ (Mj ) + Mj − n0j M0 − n1j M1 − · · · − ntj Mt ,
in the free abelian group ZM0 ⊕ ZM1 ⊕ · · · ⊕ ZMt , and write this element as,
y0j M0 + y1j M1 + · · · + ytj Mt ,
where y0j , y1j , . . . , ytj ∈ Z. Define the Auslander–Reiten matrix Υ as the (t + 1) × t
matrix with entries in Z whose j’th column is (y0j , y1j , . . . , ytj ). When Υ is viewed
as a homomorphism of abelian groups Υ : Zt → Zt+1 (elements in Zt and Zt+1 are
viewed as column vectors), we refer to it as the Auslander–Reiten homomorphism.
(2.4) Example. Let R = C[[X, Y, Z]]/(X 3 + Y 4 + Z 2 ). Besides M0 = R there are
exactly t = 6 non-isomorphic indecomposable maximal Cohen–Macaulay modules,
and the Auslander–Reiten sequences have the following form,
0 −→ M1 −→ M2 −→ M1 −→ 0
0 −→ M2 −→ M1 ⊕ M3 −→ M2 −→ 0
0 −→ M3 −→ M2 ⊕ M4 ⊕ M6 −→ M3 −→ 0
0 −→ M4 −→ M3 ⊕ M5 −→ M4 −→ 0
0 −→ M5 −→ M4 −→ M5 −→ 0
0 −→ M6 −→ M0 ⊕ M3 −→ M6 −→ 0 ;
see [32, (13.9)]. The 7×6 Auslander–Reiten matrix Υ is therefore given by
\[
\Upsilon =
\begin{pmatrix}
 0 &  0 &  0 &  0 &  0 & -1 \\
 2 & -1 &  0 &  0 &  0 &  0 \\
-1 &  2 & -1 &  0 &  0 &  0 \\
 0 & -1 &  2 & -1 &  0 & -1 \\
 0 &  0 & -1 &  2 & -1 &  0 \\
 0 &  0 &  0 & -1 &  2 &  0 \\
 0 &  0 & -1 &  0 &  0 &  2
\end{pmatrix}.
\]
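As a concrete check on Definition (2.3): the sixth Auslander–Reiten sequence above is 0 → M6 → M0 ⊕ M3 → M6 → 0, so τ(M6) = M6 and X6 ∼= M0 ⊕ M3, and the element from Definition (2.3) is
\[
\tau(M_6) + M_6 - M_0 - M_3 \;=\; -\,M_0 - M_3 + 2\,M_6\,,
\]
which gives the sixth column (−1, 0, 0, −1, 0, 0, 2) of Υ; the remaining columns are obtained in the same way.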
In this case, the Auslander–Reiten homomorphism Υ : Z6 → Z7 is clearly injective.
One hypothesis in our main result, Theorem (2.12) below, is that the Auslander–
Reiten homomorpism Υ over the ring R in question is injective. We are not aware
of an example where Υ is not injective. The following lemma covers the situation of
the rational double points, that is, the invariant rings R = k[[X, Y ]]G , where k is an
algebraically closed field of characteristic 0 and G is a non-trivial finite subgroup
of SL2 (k); see [5].
(2.5) Lemma. Assume that R is complete, integrally closed, non-regular, Gorenstein, of Krull dimension 2, and that the residue field k is algebraically closed. Then
the Auslander–Reiten homomorphism Υ is injective.
Proof. Let 1 ≤ j ≤ t be given and consider the expression
τ (Mj ) + Mj − n0j M0 − n1j M1 − · · · − ntj Mt = y0j M0 + y1j M1 + · · · + ytj Mt
in the free abelian group ZM0 ⊕ ZM1 ⊕ · · · ⊕ ZMt , see Definition (2.3). Let Γ be
the Auslander–Reiten quiver of MCM R. We recall from [5, thm. 1] that the arrows
in Γ occur in pairs ◦ ⇄ ◦, and that collapsing each pair to an undirected edge
gives an extended Dynkin diagram ∆̃. Moreover, removing the vertex corresponding
to M0 = R and any incident edges gives a Dynkin graph ∆.
Now, Xj has a direct summand Mk if and only if there is an arrow Mk → Mj in Γ.
Also, the Auslander–Reiten translation τ satisfies τ (Mj ) = Mj by [5, proof of thm.
1]. Combined with the structure of the Auslander–Reiten quiver, this means that
\[
y_{kj} =
\begin{cases}
\;\;2 & \text{if } k = j\,,\\
-1 & \text{if there is an edge } M_k \text{ --- } M_j \text{ in } \widetilde{\Delta}\,,\\
\;\;0 & \text{otherwise}\,.
\end{cases}
\]
Hence the t × t matrix Υ0 with (y1j , . . . , ytj ) as j’th column, where 1 ≤ j ≤ t, is the
Cartan matrix of the Dynkin graph ∆; cf. [8, def. 4.5.3]. This matrix is invertible
by [15, exer. (21.18)]. Deleting the first row (y01 , . . . , y0t ) in the Auslander–Reiten
matrix Υ, we get the invertible matrix Υ0 , and consequently, Υ : Zt → Zt+1 determines an injective homomorphism.
For a group G we denote by Gab its abelianization, i.e. Gab = G/[G, G], where
[G, G] is the commutator subgroup of G.
We refer to the following as the tilde construction. It associates to every automorphism α : X → X of a maximal Cohen–Macaulay module X an automorphism
α̃ : M q → M q of the smallest power q of the representation generator M such that
X is a direct summand of M q .
(2.6) Construction. The chosen representation generator M for MCM R has the
form M = M0^{m0} ⊕ · · · ⊕ Mt^{mt} for uniquely determined integers m0 , . . . , mt > 0. For
any module X = M0^{n0} ⊕ · · · ⊕ Mt^{nt} in MCM R, we define natural numbers,
\[
q = q(X) = \min\{\, p \in \mathbb{N} \mid p\,m_j \geq n_j \text{ for all } 0 \leq j \leq t \,\}, \qquad
v_j = v_j(X) = q\,m_j - n_j \geq 0\,,
\]
and a module Y = M0^{v0} ⊕ · · · ⊕ Mt^{vt} in MCM R. Let ψ : X ⊕ Y → M^q be the R-isomorphism that maps an element
\[
((x_0, \ldots, x_t), (y_0, \ldots, y_t)) \in X \oplus Y = (M_0^{n_0} \oplus \cdots \oplus M_t^{n_t}) \oplus (M_0^{v_0} \oplus \cdots \oplus M_t^{v_t})\,,
\]
where x_j ∈ M_j^{n_j} and y_j ∈ M_j^{v_j}, to the element
\[
((z_{01}, \ldots, z_{t1}), \ldots, (z_{0q}, \ldots, z_{tq})) \in M^q = (M_0^{m_0} \oplus \cdots \oplus M_t^{m_t})^q\,,
\]
where z_{j1}, . . . , z_{jq} ∈ M_j^{m_j} are given by (z_{j1}, . . . , z_{jq}) = (x_j, y_j) ∈ M_j^{q m_j} = M_j^{n_j + v_j}.
Now, given α in AutR (X), we define α̃ to be the uniquely determined element
in AutR (M q ) that makes the following diagram commutative,
\[
\begin{array}{ccc}
X \oplus Y & \xrightarrow[\cong]{\;\psi\;} & M^q \\
{\alpha \oplus 1_Y}\,\downarrow\,{\cong} & & {\cong}\,\downarrow\,{\tilde{\alpha}} \\
X \oplus Y & \xrightarrow[\cong]{\;\psi\;} & M^q\,.
\end{array}
\]
The automorphism α̃ of M q has the form α̃ = (α̃ij ) for uniquely determined endomorphisms α̃ij of M , that is, α̃ij ∈ E = EndR (M ). Hence α̃ = (α̃ij ) can naturally
be viewed as an invertible q × q matrix with entries in E.
(2.7) Example. Let M = M0 ⊕ · · · ⊕ Mt and X = Mj . Then q = 1 and
Y = M0 ⊕ · · · ⊕ Mj−1 ⊕ Mj+1 ⊕ · · · ⊕ Mt .
The isomorphism ψ : X ⊕ Y → M maps (xj , (x0 , . . . , xj−1 , xj+1 , . . . , xt )) in X ⊕ Y
to (x0 , . . . , xj−1 , xj , xj+1 , . . . , xt ) in M . Therefore, for α ∈ AutR (X) = AutR (Mj ),
Construction (2.6) yields the following automorphism of M ,
α̃ = ψ(α ⊕ 1Y )ψ −1 = 1M0 ⊕ · · · ⊕ 1Mj−1 ⊕ α ⊕ 1Mj+1 ⊕ · · · ⊕ 1Mt ,
which is an invertible 1 × 1 matrix with entry in E = EndR (M ).
The following result on Auslander–Reiten sequences is quite standard. We provide a few proof details along with the appropriate references.
(2.8) Proposition. Let there be given Auslander–Reiten sequences in MCM R,
0 −→ τ (M ) −→ X −→ M −→ 0
and
0 −→ τ (M ′ ) −→ X ′ −→ M ′ −→ 0 .
If α : M → M ′ is a homomorphism, then there exist homomorphisms β and γ that
make the following diagram commutative,
\[
\begin{array}{ccccccccc}
0 & \longrightarrow & \tau(M) & \longrightarrow & X & \longrightarrow & M & \longrightarrow & 0 \\
 & & \downarrow{\scriptstyle\gamma} & & \downarrow{\scriptstyle\beta} & & \downarrow{\scriptstyle\alpha} & & \\
0 & \longrightarrow & \tau(M') & \longrightarrow & X' & \longrightarrow & M' & \longrightarrow & 0\,.
\end{array}
\]
Furthermore, if α is an isomorphism, then so are β and γ.
Proof. Write ρ : X → M and ρ′ : X ′ → M ′ . It suffices to prove the existence of β
such that ρ′ β = αρ, because then the existence of γ follows from diagram chasing.
As 0 → τ (M ′ ) → X ′ → M ′ → 0 is an Auslander–Reiten sequence, it suffices by
[32, lem. (2.9)] to show that αρ : X → M ′ is not a split epimorphism. Suppose that
there do exist τ : M ′ → X with αρτ = 1M ′ . Hence α is a split epimorphism. As
M is indecomposable, α must be an isomorphism. Thus ρτ α = α−1 (αρτ )α = 1M ,
which contradicts the fact that ρ is not a split epimorphism.
The fact that β and γ are isomorphisms if α is so follows from [32, lem. (2.4)].
The choice requested in the following construction is possible by Proposition (2.8).
(2.9) Construction. Choose for each 1 6 j 6 t and every α ∈ AutR (Mj ) elements
βj,α ∈ AutR (Xj ) and γj,α ∈ AutR (τ (Mj )) that make the next diagram commute,
0
(2.9.1)
/ τ (Mj )
∼
= γj,α
0
/ τ (Mj )
/ Xj
/ Mj
∼
= βj,α
∼
= α
/ Xj
/ Mj
/0
/ 0;
here the row(s) is the j’th Auslander–Reiten sequence (2.1.2).
As shown in Lemma (5.1), the endomorphism ring E = EndR (M ) of the chosen
representation generator M is semilocal, that is, E/J(E) is semisimple. Thus, if
the ground ring R, and hence also the endomorphism ring E, is an algebra over the
residue field k and char(k) ≠ 2, then a result by Vaserstein [29, thm. 2] yields that
the canonical homomorphism θ_E : E^*_{ab} → K_1^C(E) is an isomorphism. Here K_1^C(E)
is the classical K1-group of the ring E; see (3.1). Its inverse,
\[
\theta_E^{-1} = \det\nolimits_E : K_1^C(E) \longrightarrow E^{*}_{\mathrm{ab}} = \operatorname{Aut}_R(M)_{\mathrm{ab}}\,,
\]
is called the generalized determinant map. The details are discussed in Section 5.
We are now in a position to define the subgroup Ξ of AutR (M )ab that appears in
our main Theorem (2.12) below.
(2.10) Definition. Let (R, m, k) be a ring satisfying the hypotheses in Setup (2.1).
Assume, in addition, that R is an algebra over k and that one has char(k) ≠ 2.
Define a subgroup Ξ of AutR (M )ab as follows.
– Choose for each 1 6 j 6 t and each α ∈ AutR (Mj ) elements βj,α ∈ AutR (Xj )
and γj,α ∈ AutR (τ (Mj )) as in Construction (2.9).
– Let α̃, β̃j,α , and γ̃j,α be the invertible matrices with entries in E obtained by
applying the tilde construction (2.6) to α, βj,α , and γj,α .
Let Ξ be the subgroup of AutR (M )ab generated by the elements
(detE α̃)(detE β̃j,α )−1 (detE γ̃j,α ) ,
where j ranges over {1, . . . , t} and α over AutR (Mj ).
A priori the definition of the group Ξ involves certain choices. However, it follows
from Proposition (8.8) that Ξ is actually independent of the choices made.
(2.11) Remark. In specific examples it is convenient to consider the simplest possible representation generator M = M0 ⊕ M1 ⊕ · · · ⊕ Mt . In this case, Example (2.7)
shows that α̃ and γ̃j,α are 1×1 matrices with entries in E, that is, α̃, γ̃j,α ∈ E ∗ ,
and consequently detE α̃ = α̃ and detE γ̃j,α = γ̃j,α as elements in E^{*}_{ab}.
We are now in a position to state our main result.
(2.12) Theorem. Let (R, m, k) be a ring satisfying the hypotheses in Setup (2.1).
Assume that R is an algebra over its residue field k with char(k) ≠ 2, and that the
Auslander–Reiten homomorphism Υ : Zt → Zt+1 from Definition (2.3) is injective.
Let M be any representation generator of MCM R. There is an isomorphism,
K1 (mod R) ∼= AutR (M )ab /Ξ ,
where Ξ is the subgroup of AutR (M )ab given in Definition (2.10).
Furthermore, if inc : proj R → mod R is the inclusion functor and M = R ⊕ M ′ ,
then K1 (inc) : K1 (proj R) → K1 (mod R) may be identified with the homomorphism,
r1R
0
λ : R∗ −→ AutR (M )ab /Ξ
given by
r 7−→
.
0
1M ′
As mentioned in the introduction, the proof of Theorem (2.12) spans Sections 3
to 8. Applications and examples are presented in Sections 9 and 10. The interested
reader could go ahead and read Sections 9–10 right away, since these sections are
practically independent of 3–8.
3. The Gersten–Sherman Transformation
To prove Theorem (2.12), we need to compare and/or identify various K-groups.
The relevant definitions and properties of these K-groups are recalled below. The
(so-called) Gersten–Sherman transformation is our most valuable tool for comparing
K-groups, and the main part of this section is devoted to this natural transformation. Readers who are familiar with K-theory may skip this section altogether.
In the following, the Grothendieck group functor is denoted by G.
(3.1) Let A be a unital ring.
The classical K_0-group of A is defined as K^C_0(A) = G(proj A), that is, the
Grothendieck group of the category of finitely generated projective A-modules.
The classical K_1-group of A is defined as K^C_1(A) = GL(A)_ab, that is, the
abelianization of the infinite (or stable) general linear group; see e.g. Bass [7, chap. V].
(3.2) Let C be any category. Its loop category ΩC is the category whose objects are
pairs (C, α) with C ∈ C and α ∈ Aut_C(C). A morphism (C, α) → (C′, α′) in ΩC is
a commutative diagram in C,

    C ──ψ──→ C′
    │≅ α      │≅ α′
    C ──ψ──→ C′ .
(3.3) Let C be a skeletally small exact category. Its loop category ΩC is also skeletally
small, and it inherits a natural exact structure from C. Bass' K_1-group (also called
Bass' universal determinant group) of C, which we denote by K^B_1(C), is the
Grothendieck group of ΩC, that is G(ΩC), modulo the subgroup generated by all
elements of the form

    (C, α) + (C, β) − (C, αβ) ,

where C ∈ C and α, β ∈ Aut_C(C); see the book of Bass [7, chap. VIII§1] or Rosenberg
[25, def. 3.1.6]. For (C, α) in ΩC we denote by [C, α] its image in K^B_1(C).
(3.4) For every C in C one has [C, 1_C] + [C, 1_C] = [C, 1_C 1_C] = [C, 1_C] in K^B_1(C).
Consequently, [C, 1_C] is the neutral element in K^B_1(C).
(3.5) For a unital ring A there is by [25, thm. 3.1.7] a natural isomorphism,

    η_A : K^C_1(A) ──≅──→ K^B_1(proj A) .

The isomorphism η_A maps ξ ∈ GL_n(A) to the class [A^n, ξ] ∈ K^B_1(proj A). Here
ξ is viewed as an automorphism of the row space A^n (a free left A-module), that
is, ξ acts by multiplication from the right.
The inverse map η_A^{-1} acts as follows. Let [P, α] be in K^B_1(proj A). Choose any Q
in proj A and any isomorphism ψ : P ⊕ Q → A^n with n ∈ N. In K^B_1(proj A) one has

    [P, α] = [P, α] + [Q, 1_Q] = [P ⊕ Q, α ⊕ 1_Q] = [A^n, ψ(α ⊕ 1_Q)ψ^{-1}] .

The automorphism ψ(α ⊕ 1_Q)ψ^{-1} of (the row space) A^n can be identified with a
matrix β ∈ GL_n(A). The action of η_A^{-1} on [P, α] is now β's image in K^C_1(A).
(3.6) Quillen defines in [24] functors K^Q_n from the category of skeletally small exact
categories to the category of abelian groups. More precisely, K^Q_n(C) = π_{n+1}(BQC, 0),
where Q is Quillen's Q-construction and B denotes the classifying space.
The functor K^Q_0 is naturally isomorphic to the Grothendieck group functor G; see
[24, §2 thm. 1]. For a ring A there is a natural isomorphism K^Q_1(proj A) ≅ K^C_1(A);
see for example Srinivas [27, cor. (2.6) and thm. (5.1)].
Gersten sketches in [17, §5] the construction of a natural transformation ζ : K^B_1 → K^Q_1
of functors on the category of skeletally small exact categories. The details of
this construction were later given by Sherman [26, §3], and for this reason we refer to
ζ as the Gersten–Sherman transformation¹. Examples due to Gersten and Murthy
[17, prop. 5.1 and 5.2] show that for a general skeletally small exact category C,
the homomorphism ζ_C : K^B_1(C) → K^Q_1(C) is neither injective nor surjective. For the
exact category proj A, where A is a ring, it is known that K^B_1(proj A) and K^Q_1(proj A)
are isomorphic; indeed, they are both isomorphic to the classical K-group K^C_1(A);
see (3.5) and (3.6). Therefore, a natural question arises: is ζ_{proj A} an isomorphism?
Sherman answers this question affirmatively in [26, pp. 231–232]; in fact, in loc. cit.
Theorem 3.3 it is proved that ζ_C is an isomorphism for every semisimple exact
category, that is, an exact category in which every short exact sequence splits. We
note these results of Gersten and Sherman for later use.

¹ In the papers by Gersten [17] and Sherman [26], the functor K^B_1 is denoted by K^det_1.
Q
(3.7) Theorem. There exists a natural transformation ζ : KB
1 → K1 , which we call
the Gersten–Sherman transformation, of functors on the category of skeletally small
Q
exact categories such that ζproj A : KB
1 (proj A) → K1 (proj A) is an isomorphism for
every ring A.
We will also need the next result on the Gersten–Sherman transformation. Recall
that a length category is an abelian category in which every object has finite length.
(3.8) Theorem. If A is a skeletally small length category with only finitely many
simple objects (up to isomorphism), then ζ_A : K^B_1(A) → K^Q_1(A) is an isomorphism.
Proof. We begin with a general observation. Given skeletally small exact categories
C_1 and C_2, there are exact projection functors p_j : C_1 × C_2 → C_j (j = 1, 2). From the
"elementary properties" of Quillen's K-groups listed in [24, §2], it follows that the
homomorphism (K^Q_1(p_1), K^Q_1(p_2)) : K^Q_1(C_1 × C_2) → K^Q_1(C_1) ⊕ K^Q_1(C_2) is an
isomorphism. A similar argument shows that (K^B_1(p_1), K^B_1(p_2)) is an isomorphism. Since
ζ : K^B_1 → K^Q_1 is a natural transformation, it follows that ζ_{C_1 × C_2} is an isomorphism
if and only if ζ_{C_1} and ζ_{C_2} are isomorphisms.
Denote by A_ss the full subcategory of A consisting of all semisimple objects. Note
that A_ss is a Serre subcategory of A, and hence A_ss is itself an abelian category.
Let i : A_ss ↪ A be the (exact) inclusion and consider the commutative diagram,

    K^B_1(A_ss) ──K^B_1(i), ≅──→ K^B_1(A)
      │ ζ_{A_ss}                   │ ζ_A
    K^Q_1(A_ss) ──K^Q_1(i), ≅──→ K^Q_1(A) .

Since A is a length category, Bass' and Quillen's devissage theorems [7, VIII§3 thm.
(3.4)(a)] and [24, §5 thm. 4] show that K^B_1(i) and K^Q_1(i) are isomorphisms. Hence,
it suffices to argue that ζ_{A_ss} is an isomorphism. By assumption there is a finite set
{S_1, . . . , S_n} of representatives of the isomorphism classes of simple objects in A.
Note that every object A in A_ss has a unique decomposition A = S_1^{a_1} ⊕ · · · ⊕ S_n^{a_n},
where a_1, . . . , a_n ∈ N_0; we used here the assumption that A has finite length to conclude
that the cardinal numbers a_i must be finite. Since one has Hom_A(S_i, S_j) = 0 for
i ≠ j, it follows that there is an equivalence of abelian categories,

    A_ss ≃ (add S_1) × · · · × (add S_n) .

Consider the ring D_i = End_A(S_i)^op. As S_i is simple, Schur's lemma gives that D_i is
a division ring. It is easy to see that the functor Hom_A(S_i, −) : A → Mod D_i induces
an equivalence add S_i ≃ proj D_i. By Theorem (3.7) the maps ζ_{proj D_1}, . . . , ζ_{proj D_n} are
isomorphisms, so it follows from the equivalence above, and the general observation
at the beginning of the proof, that ζ_{A_ss} is an isomorphism, as desired.
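As a quick illustrative check of these identifications (not needed for the sequel, and assuming only the classical fact that K_1 of a field is its unit group via the determinant): for a field k every short exact sequence in proj k splits, and the inverse of η_k from (3.5) followed by the determinant gives an isomorphism

\[
  K^{B}_{1}(\operatorname{proj} k)\;\cong\;K^{C}_{1}(k)\;\cong\;k^{*},
  \qquad [k^{n},\xi]\;\longmapsto\;\det\xi ,
\]

compatible with ζ_{proj k} being an isomorphism as in Theorem (3.7).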
Note that in this section, superscripts “C” (for classical), “B” (for Bass), and
“Q” (for Quillen) have been used to distinguish between various K-groups. In the
rest of the paper, K-groups without superscripts refer to Quillen’s K-groups.
4. Coherent Pairs
We recall a few results and notions from the paper [4] by Auslander and Reiten
which are central in the proof of our main Theorem (2.12). Throughout this section,
A denotes a skeletally small additive category.
(4.1) Definition. A pseudo (or weak) kernel of a morphism g : A → A′ in A is a
morphism f : A″ → A in A such that gf = 0, and which satisfies that every diagram
in A as below can be completed (but not necessarily in a unique way):

              B
        ∃ ↙   │ h   ↘ 0
    A″ ──f──→ A ──g──→ A′ .

We say that A has pseudo kernels if every morphism in A has a pseudo kernel.
(4.2) Observation. Let A be a full additive subcategory of an abelian category
M. An A-precover of an object M ∈ M is a morphism u : A → M with A ∈ A
with the property that for every morphism u′ : A′ → M with A′ ∈ A there exists
a (not necessarily unique) morphism v : A′ → A such that uv = u′ . Following [13,
def. 5.1.1] we say that A is precovering (or contravariantly finite) in M if every
object M ∈ M has an A-precover. In this case, A has pseudo kernels. Indeed, if
i : K → A is the kernel in M of g : A → A′ in A, and if f : A′′ → K is an A-precover
of K, then if : A′′ → A is a pseudo kernel of g.
(4.3) Definition. Let B be a full additive subcategory of A. Auslander and Reiten
[4] call (A, B) a coherent pair if A has pseudo kernels in the sense of Definition (4.1),
and B is precovering in A.
If (A, B) is a coherent pair then also B has pseudo kernels by [4, prop. 1.4(a)].
(4.4) Definition. Write Mod A for the abelian category of additive contravariant
functors A → Ab, where Ab is the category of abelian groups. Denote by mod A
the full subcategory of Mod A consisting of finitely presented functors.
(4.5) If the category A has pseudo kernels then mod A is abelian, and the inclusion
functor mod A → Mod A is exact, see [4, prop. 1.3].
If (A, B) is a coherent pair, see (4.3), then the exact restriction Mod A → Mod B
maps mod A to mod B by [4, prop. 1.4(b)]. In this case, there are functors,

    (4.5.1)    Ker r ──i──→ mod A ──r──→ mod B ,
where r is the restriction and i the inclusion functor. The kernel of r, that is,
Ker r = {F ∈ mod A | F (B) = 0 for all B ∈ B } ,
is a Serre subcategory of the abelian category mod A. The quotient (mod A)/(Ker r),
in the sense of Gabriel [16], is equivalent to the category mod B, and the canonical
functor mod A → (mod A)/(Ker r) may be identified with r. These assertions are
proved in [4, prop. 1.5]. Therefore (4.5.1) induces by Quillen’s localization theorem
[24, §5 thm. 5] a long exact sequence of K-groups,

    (4.5.2)
    · · · → K_n(Ker r) ──K_n(i)──→ K_n(mod A) ──K_n(r)──→ K_n(mod B) → · · ·
       · · · → K_0(Ker r) ──K_0(i)──→ K_0(mod A) ──K_0(r)──→ K_0(mod B) → 0 .
5. Semilocal Rings
A ring A is semilocal if A/J(A) is semisimple. Here J(A) is the Jacobson radical
of A. If A is commutative then this definition is equivalent to A having only finitely
many maximal ideals; see Lam [20, prop. (20.2)].
(5.1) Lemma. Let R be a commutative noetherian semilocal ring, and let M ≠ 0
be a finitely generated R-module. Then the ring EndR (M ) is semilocal.
Proof. As R is commutative and noetherian, EndR (M ) is a module-finite R-algebra.
Since R is semilocal, the assertion now follows from [20, prop. (20.6)].
(5.2) Denote by A^* the group of units in a ring A, and let ϑ_A : A^* → K^C_1(A) be the
composite of the group homomorphisms,

    (5.2.1)    A^* ≅ GL_1(A) ↪ GL(A) ↠ GL(A)_ab = K^C_1(A) .

Some authors refer to ϑ_A as the Whitehead determinant. If A is semilocal, then
ϑ_A is surjective by Bass [7, V§9 thm. (9.1)]. As the group K^C_1(A) is abelian one has
[A^*, A^*] ⊆ Ker ϑ_A, and we write θ_A : A^*_ab → K^C_1(A) for the induced homomorphism.
Vaserstein [28] showed that the inclusion [A^*, A^*] ⊆ Ker ϑ_A is strict for the semilocal
ring A = M_2(F_2), where F_2 is the field with two elements. In [28, thm. 3.6(a)]
it is shown that if A is semilocal, then Ker ϑ_A is the subgroup of A^* generated by
elements of the form (1 + ab)(1 + ba)^{-1} where a, b ∈ A and 1 + ab ∈ A^*.
If A is semilocal, that is, A/J(A) is semisimple, then by the Artin–Wedderburn
theorem there is an isomorphism of rings,

    A/J(A) ≅ M_{n_1}(D_1) × · · · × M_{n_t}(D_t) ,

where D_1, . . . , D_t are division rings, and n_1, . . . , n_t are natural numbers, all of which
are uniquely determined by A. The next result is due to Vaserstein [29, thm. 2].
(5.3) Theorem. Let A be semilocal and write A/J(A) ≅ M_{n_1}(D_1) × · · · × M_{n_t}(D_t).
If none of the M_{n_i}(D_i)'s is M_2(F_2), and at most one of the M_{n_i}(D_i)'s is M_1(F_2) = F_2,
then one has Ker ϑ_A = [A^*, A^*]. In particular, ϑ_A induces an isomorphism,

    θ_A : A^*_ab ──≅──→ K^C_1(A) .
(5.4) Remark. Note that if A is a semilocal ring which is an algebra over a field k
with characteristic ≠ 2, then the hypothesis in Theorem (5.3) is satisfied.
If A is a commutative semilocal ring, then Ker ϑ_A and the commutator subgroup
[A^*, A^*] = {1} are identical, i.e. the surjective homomorphism ϑ_A = θ_A : A^* → K^C_1(A)
is an isomorphism. Indeed, the determinant homomorphisms det_n : GL_n(A) → A^*
induce a homomorphism det_A : K^C_1(A) → A^* that evidently satisfies det_A θ_A = 1_{A^*}.
Since θ_A is surjective, it follows that θ_A is an isomorphism with θ_A^{-1} = det_A.
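To make the last observation concrete (an added illustration; A denotes a commutative semilocal ring as in the remark): for a unit u ∈ A^* the class θ_A(u) ∈ K^C_1(A) is represented by the 1×1 matrix (u), so

\[
  \det\nolimits_{A}\bigl(\theta_{A}(u)\bigr)=\det(u)=u ,
\]

which is the identity det_A θ_A = 1_{A^*} used above.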
(5.5) Definition. Let A be a ring for which the homomorphism θ_A : A^*_ab → K^C_1(A)
from (5.2) is an isomorphism; for example, A could be a commutative semilocal ring
or a noncommutative semilocal ring satisfying the assumptions in Theorem (5.3).
The inverse θ_A^{-1} is denoted by det_A, and we call it the generalized determinant.
(5.6) Remark. Let ξ be an m × n matrix and let χ be an n × p matrix with entries in a
ring A. Denote by "·" the product M_{m×n}(A^op) × M_{n×p}(A^op) → M_{m×p}(A^op). Then

    (ξ · χ)^T = χ^T ξ^T ,

where χ^T ξ^T is computed using the product M_{p×n}(A) × M_{n×m}(A) → M_{p×m}(A).
Thus, transposition (−)^T : GL_n(A^op) → GL_n(A) is an anti-isomorphism (this is also
noted in [7, V§7]), which induces an isomorphism (−)^T : K^C_1(A^op) → K^C_1(A).
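As a small added arithmetic check of the displayed rule, take the 1×2 matrix ξ = (a  b) and the 2×1 matrix χ = (c  d)^T with entries in a ring A, viewed over A^op, and write ∗ for multiplication in A^op, so that a ∗ c = ca. Then

\[
  (\xi\cdot\chi)^{T}=(a\ast c+b\ast d)^{T}=ca+db
  =\begin{pmatrix} c & d\end{pmatrix}\begin{pmatrix} a\\ b\end{pmatrix}
  =\chi^{T}\xi^{T},
\]

in agreement with (ξ · χ)^T = χ^T ξ^T.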
−1
(5.7) Lemma. Let A be a ring for which the generalized determinant detA = θA
exists; cf. Definition (5.5). For every invertible matrix ξ with entries in A one has
an equality detAop (ξ T ) = detA (ξ) in the abelian group (Aop )∗ab = A∗ab .
Proof. Clearly, there is a commutative diagram,
(Aop )∗ab
A∗ab
∼
= θAop
θA ∼
=
KC
(A)
1
∼
=
(−)T
op
/ KC
(A
),
1
−1
−1
T
It follows that one has θA
= θA
, that is, detAop ◦ (−)T = detA .
op ◦ (−)
6. Some Useful Functors
Throughout this section, A is a ring and M is a fixed left A-module. We denote by
E = EndA (M ) the endomorphism ring of M . Note that M = A,E M has a natural
left-A-left-E–bimodule structure.
(6.1) There is a pair of adjoint functors,

    Hom_A(M, −) : Mod A ⇄ Mod(E^op) : − ⊗_E M .

It is easily seen that they restrict to a pair of quasi-inverse equivalences,

    Hom_A(M, −) : add_A M ≃ proj(E^op) : − ⊗_E M .
Auslander referred to this phenomenon as projectivization; see [6, I§2].
Let F ∈ Mod(addA M ), that is, F : addA M → Ab is a contravariant additive functor, see Definition (4.4). The compatible E-module structure on the given A-module
M induces an E op -module structure on the abelian group F M which is given by
zα = (F α)(z) for α ∈ E and z ∈ F M .
(6.2) Proposition. There are quasi-inverse equivalences of abelian categories,

    e_M : Mod(add_A M) ≃ Mod(E^op) : f_M ,

where e_M (evaluation) and f_M (functorfication) are defined as follows,

    e_M(F) = F M    and    f_M(Z) = Z ⊗_E Hom_A(−, M)|_{add_A M} ,

for F in Mod(add_A M) and Z in Mod(E^op). They restrict to quasi-inverse equivalences
between categories of finitely presented objects,

    e_M : mod(add_A M) ≃ mod(E^op) : f_M .
Proof. For Z in Mod(E^op) the canonical isomorphism

    Z ──≅──→ Z ⊗_E E = Z ⊗_E Hom_A(M, M) = e_M f_M(Z)

is natural in Z. Thus, the functors id_{Mod(E^op)} and e_M f_M are naturally isomorphic.
For F in Mod(add_A M) there is a natural transformation,

    (6.2.1)    f_M e_M(F) = F M ⊗_E Hom_A(−, M)|_{add_A M} ──δ──→ F ;

for X in add_A M the homomorphism δ_X : F M ⊗_E Hom_A(X, M) → F X is given by
z ⊗ ψ ↦ (F ψ)(z). Note that δ_M is an isomorphism as it may be identified with
the canonical isomorphism F M ⊗_E {}_E E ──≅──→ F M in Ab. As the functors in (6.2.1)
are additive, it follows that δ_X is an isomorphism for every X ∈ add_A M, that is,
δ is a natural isomorphism. Since (6.2.1) is natural in F, the functors f_M e_M and
id_{Mod(add_A M)} are naturally isomorphic.
It is straightforward to verify that the functors e_M and f_M map finitely presented
objects to finitely presented objects.
(6.3) Observation. In the case M = A one has E = End_A(M) = A^op, and therefore
Proposition (6.2) yields an equivalence f_A : mod A → mod(proj A) given by

    X ↦ X ⊗_{A^op} Hom_A(−, A)|_{proj A} .

It is easily seen that the functor f_A is naturally isomorphic to the functor given by

    X ↦ Hom_A(−, X)|_{proj A} .

We will usually identify f_A with this functor.
(6.4) Definition. The functor y_M : add_A M → mod(add_A M) which for X ∈ add_A M
is given by y_M(X) = Hom_A(−, X)|_{add_A M} is called the Yoneda functor.
Let A be a full additive subcategory of an abelian category M. If A is closed
under extensions in M, then A has a natural induced exact structure. However,
one can always equip A with the trivial exact structure. In this structure, the
"exact sequences" (sometimes called conflations) are only the split exact ones. When
viewing A as an exact category with the trivial exact structure, we denote it A_0.
(6.5) Lemma. Assume that A is commutative and noetherian and let M ∈ mod A.
Set E = End_A(M) and assume that E^op has finite global dimension. For the exact
Yoneda functor y_M : (add_A M)_0 → mod(add_A M), see (6.4), the homomorphisms
K_n(y_M), where n ≥ 0, and K^B_1(y_M) are isomorphisms.
Proof. By application of K_n to the commutative diagram,

    (add_A M)_0 ──Hom_A(M,−), ≃──→ proj(E^op)
       │ y_M                          │ inc
    mod(add_A M) ─────e_M, ≃─────→ mod(E^op) ,

it follows that K_n(y_M) is an isomorphism if and only if K_n(inc) is an isomorphism.
The latter holds by Quillen's resolution theorem [24, §4 thm. 3], since E^op has finite
global dimension. A similar argument shows that K^B_1(y_M) is an isomorphism; this
time one needs to apply Bass' resolution theorem, see [7, VIII§4 thm. (4.6)].
Since K0 may be identified with the Grothendieck group functor, cf. (3.6), the
following result is well-known. In any case, it is straightforward to verify.
(6.6) Lemma. Assume that mod A is Krull–Schmidt. Let N = N_1^{n_1} ⊕ · · · ⊕ N_s^{n_s}
be a finitely generated A-module, where N_1, . . . , N_s are non-isomorphic indecomposable
A-modules and n_1, . . . , n_s > 0. The homomorphism of abelian groups,

    ψ_N : ZN_1 ⊕ · · · ⊕ ZN_s ──→ K_0((add_A N)_0) ,

given by N_j ↦ [N_j], is an isomorphism.
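For instance (an added illustration anticipating Sections 7–10, where mod R is Krull–Schmidt under the hypotheses of Setup (2.1)): if the indecomposable maximal Cohen–Macaulay R-modules are exactly R and m, then for N = R ⊕ m the lemma gives

\[
  K_{0}\bigl((\operatorname{add}_{R} N)_{0}\bigr)\;\cong\;\mathbb{Z}[R]\oplus\mathbb{Z}[\mathfrak m]\;\cong\;\mathbb{Z}^{2}.
\]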
7. The Abelian Category Y
By the assumptions in Setup (2.1), the ground ring R has a dualizing module.
It follows from Auslander and Buchweitz [3, thm. A] that MCM R is precovering
in mod R. Actually, in our case MCM R equals addR M for some finitely generated
R-module M (a representation generator), and it is easily seen that every category
of this form is precovering in mod R. By Observation (4.2) we have a coherent pair
(MCM R, proj R), which by (4.5) yields a Gabriel localization sequence,
    (7.0.1)    Y = Ker r ──i──→ mod(MCM R) ──r──→ mod(proj R) .
Here r is the restriction functor, Y = Ker r, and i is the inclusion. Since an additive
functor vanishes on proj R if and only if it vanishes on R, one has
Y = {F ∈ mod(MCM R) | F (R) = 0} .
The following two results about the abelian category Y are due to Yoshino. The first
result is [32, (13.7.4)]; the second is (proofs of) [32, lem. (4.12) and prop. (4.13)].
(7.1) Theorem. Every object in Y has finite length, i.e. Y is a length category.
(7.2) Theorem. Consider for 1 6 j 6 t the Auslander–Reiten sequence (2.1.2) ending in Mj . The functor Fj , defined by the following exact sequence in mod(MCM R),
0 −→ HomR (−, τ (Mj )) −→ HomR (−, Xj ) −→ HomR (−, Mj ) −→ Fj −→ 0 ,
is a simple object in Y. Conversely, every simple functor in Y is naturally isomorphic
to Fj for some 1 6 j 6 t.
(7.3) Proposition. Let i : Y → mod(MCM R) be the inclusion functor from (7.0.1)
and let Υ : Z^t → Z^{t+1} be the Auslander–Reiten homomorphism; see Definition (2.3).
The homomorphisms K0 (i) and Υ are isomorphic.
Proof. We claim that the following diagram of abelian groups is commutative,

    ZM_1 ⊕ · · · ⊕ ZM_t ─────Υ─────→ ZM_0 ⊕ ZM_1 ⊕ · · · ⊕ ZM_t
       │≅ ϕ                               │≅ ψ_M
       │                              K_0((MCM R)_0)
       │                                  │≅ K_0(y_M)
    K_0(Y) ─────────K_0(i)─────────→ K_0(mod(MCM R)) .

The homomorphism ϕ is defined by M_j ↦ [F_j], where F_j ∈ Y is described in (7.2).
From Theorems (7.1) and (7.2) and the proof of Rosenberg [25, thm. 3.1.8(1)] (or
the proof of Theorem (3.8)), it follows that ϕ is an isomorphism. The module M
is a representation generator for MCM R, see (2.1), and ψ_M is the isomorphism
given in Lemma (6.6). Finally, y_M is the Yoneda functor from Definition (6.4).
By Leuschke [21, thm. 6] the ring E^op, where E = End_R(M), has finite global
dimension, and thus Lemma (6.5) implies that K_0(y_M) is an isomorphism.
From the definitions of the relevant homomorphisms, it is straightforward to see
that the diagram is commutative; indeed, both K_0(i)ϕ and K_0(y_M)ψ_M Υ map a
generator M_j to the element [F_j] ∈ K_0(mod(MCM R)).
8. Proof of the Main Theorem
Throughout this section, we fix the notation in Setup (2.1). Thus, R is a commutative noetherian local Cohen–Macaulay ring satisfying conditions (2.1)(1)–(3),
M is any representation generator of MCM R, and E is its endomorphism ring.
We shall frequently make use of the Gabriel localization sequence (7.0.1), and i
and r always denote the inclusion and the restriction functor in this sequence.
(8.1) Remark. Let C be an exact category. As in the paragraph preceding Lemma (6.5),
we denote by C_0 the category C equipped with the trivial exact structure.
Note that the identity functor id_C : C_0 → C is exact and the induced homomorphism
K^B_1(id_C) : K^B_1(C_0) → K^B_1(C) is surjective; indeed, one has K^B_1(id_C)[C, α] = [C, α].
(8.2) Lemma. Consider the restriction functor r : mod(MCM R) → mod(proj R) and
identity functor id_{MCM R} : (MCM R)_0 → MCM R. The homomorphisms K^B_1(r) and
K^B_1(id_{MCM R}) are isomorphic; in particular, K^B_1(r) is surjective by Remark (8.1).
Proof. Consider the commutative diagram of exact categories and exact functors,

    (MCM R)_0 ──id_{MCM R}──→ MCM R
       │ y_M                      │ j
       │                       mod R
       │                          │≃ f_R
    mod(MCM R) ──────r──────→ mod(proj R) ,

where y_M is the Yoneda functor from Definition (6.4), j is the inclusion, and f_R is
the equivalence from Observation (6.3). We will prove the lemma by arguing that
the vertical functors induce isomorphisms on the level of K^B_1.
The ring E^op has finite global dimension by Leuschke [21, thm. 6], and hence
Lemma (6.5) gives that K^B_1(y_M) is an isomorphism. Since f_R is an equivalence,
K^B_1(f_R) is obviously an isomorphism. To argue that K^B_1(j) is an isomorphism,
we apply Bass' resolution theorem [25, thm. 3.1.14]. We must check that the
subcategory MCM R of mod R satisfies conditions (1)–(3) in loc. cit. Condition (1)
follows as MCM R is precovering in mod R. As R is Cohen–Macaulay, every module
in mod R has a resolution of finite length by modules in MCM R, see [32, prop. (1.4)];
thus condition (2) holds. Condition (3) requires that MCM R is closed under kernels
of epimorphisms; this is well-known from e.g. [32, prop. (1.3)].
Next we show some results on the Gersten–Sherman transformation; see Sect. 3.
(8.3) Lemma. ζ_C : K^B_1(C) → K_1(C) is an isomorphism for C = mod(MCM R).
Proof. As ζ is a natural transformation, there is a commutative diagram,

    K^B_1(proj(E^op)) ──K^B_1(inc)──→ K^B_1(mod(E^op)) ──K^B_1(f_M)──→ K^B_1(mod(MCM R))
       │ ζ_{proj(E^op)}                 │ ζ_{mod(E^op)}                  │ ζ_{mod(MCM R)}
    K_1(proj(E^op)) ───K_1(inc)───→ K_1(mod(E^op)) ───K_1(f_M)───→ K_1(mod(MCM R)) ,

where f_M : mod(E^op) → mod(MCM R) is the equivalence from Proposition (6.2)
and inc is the inclusion of proj(E^op) into mod(E^op).
From Leuschke [21, thm. 6], the noetherian ring E^op has finite global dimension.
Hence Bass' and Quillen's resolution theorems, [7, VIII§4 thm. (4.6)] (see also
Rosenberg [25, thm. 3.1.14]) and [24, §4 thm. 3], imply that K^B_1(inc) and K_1(inc) are
isomorphisms. Since f_M is an equivalence, K^B_1(f_M) and K_1(f_M) are isomorphisms
as well. Consequently, ζ_{mod(MCM R)} is an isomorphism if and only if ζ_{proj(E^op)} is an
isomorphism, and the latter holds by Theorem (3.7).
The goal is to compute Quillen's K-group K_1(mod R) for the ring R in question.
For our proof of Theorem (2.12), it is crucial that this group can be naturally identified
with Bass' K-group K^B_1(mod R). To put Proposition (8.4) in perspective, we
remind the reader that the Gersten–Sherman transformation ζ_{mod A} is not surjective
for the ring A = ZC_2; see [17, prop. 5.1].
(8.4) Proposition. If the Auslander–Reiten homomorphism from Definition (2.3)
is injective, then the following assertions hold:
(a) The homomorphism ζ_{mod R} : K^B_1(mod R) → K_1(mod R) is an isomorphism.
(b) There is an exact sequence,

    K^B_1(Y) ──K^B_1(i)──→ K^B_1(mod(MCM R)) ──K^B_1(r)──→ K^B_1(mod(proj R)) ──→ 0 .
Proof. The Gabriel localization sequence (7.0.1) induces by (4.5) a long exact sequence
of Quillen K-groups,

    · · · → K_1(Y) ──K_1(i)──→ K_1(mod(MCM R)) ──K_1(r)──→ K_1(mod(proj R))
                ──→ K_0(Y) ──K_0(i)──→ · · ·

By Proposition (7.3), we may identify K_0(i) with the Auslander–Reiten homomorphism,
which is assumed to be injective. Therefore, the bottom row in the following
commutative diagram of abelian groups is exact,

    K^B_1(Y) ──K^B_1(i)──→ K^B_1(mod(MCM R)) ──K^B_1(r)──→ K^B_1(mod(proj R)) ──→ 0
      │≅ ζ_Y                │≅ ζ_{mod(MCM R)}               │ ζ_{mod(proj R)}
    K_1(Y) ───K_1(i)───→ K_1(mod(MCM R)) ───K_1(r)───→ K_1(mod(proj R)) ──→ 0 .

The vertical homomorphisms are given by the Gersten–Sherman transformation;
see Section 3. It follows from Theorems (7.1) and (7.2) that Y is a length category
with only finitely many simple objects; thus ζ_Y is an isomorphism by Theorem (3.8).
And ζ_{mod(MCM R)} is an isomorphism by Lemma (8.3). Since ri = 0, it follows that
K^B_1(r)K^B_1(i) = 0 holds, and a diagram chase now shows that Im K^B_1(i) = Ker K^B_1(r).
Furthermore K^B_1(r) is surjective by Lemma (8.2). This proves part (b).
The Five Lemma now implies that ζ_{mod(proj R)} is an isomorphism. Since the
category mod(proj R) is equivalent to mod R, see Observation (6.3), it follows that
ζ_{mod R} is an isomorphism as well. This proves (a).
We will also need the following classical notion.
(8.5) Definition. Let M be an abelian category, and let M be an object in M. A
projective cover of M is an epimorphism ε : P ։ M in M, where P is projective,
such that every endomorphism α : P → P satisfying εα = ε is an automorphism.
(8.6) Lemma. Let there be given a commutative diagram,

    P ──ε──↠ M
    │α        │ϕ
    P ──ε──↠ M

in an abelian category M, where ε : P ↠ M is a projective cover of M. If ϕ is an
automorphism, then α is an automorphism.
Proof. As P is projective and ε is an epimorphism, there exists β : P → P such
that εβ = ϕ^{-1}ε. By assumption one has εα = ϕε. Hence εαβ = ϕεβ = ϕϕ^{-1}ε = ε,
and similarly, εβα = ε. As ε is a projective cover, we conclude that αβ and βα are
automorphisms of P, and thus α must be an automorphism.
The following lemma explains the point of the tilde construction (2.6).
(8.7) Lemma. Consider the isomorphism η_{E^op} : K^C_1(E^op) → K^B_1(proj(E^op)) in (3.5).
Let X ∈ MCM R and α ∈ Aut_R(X) be given, and let α̃ be the invertible matrix with
entries in E obtained by applying Construction (2.6) to α. There is an equality,

    η_{E^op}(α̃^T) = [ Hom_R(M, X), Hom_R(M, α) ] .

Proof. Write (M, −) for Hom_R(M, −), and let ψ : X ⊕ Y ──≅──→ M^q be as in
Construction (2.6). The R-module isomorphism ψ induces an isomorphism of E^op-modules,

    (M, X) ⊕ (M, Y) = (M, X ⊕ Y) ──(M,ψ), ≅──→ (M, M^q) ≅ E^q .

Consider the automorphism of the free E^op-module E^q given by

    (M, ψ)((M, α) ⊕ 1_{(M,Y)})(M, ψ)^{-1} = (M, ψ(α ⊕ 1_Y)ψ^{-1}) = (M, α̃) .

We view elements in the R-module M^q as columns and elements in E^q as rows. The
isomorphism E^q ≅ (M, M^q) identifies a row vector β = (β_1, . . . , β_q) ∈ E^q with the
R-linear map β^T : M → M^q whose coordinate functions are β_1, . . . , β_q. The coordinate
functions of (M, α̃)(β^T) = α̃ ∘ β^T are the entries in the column α̃β^T, where
the matrix product used is M_{q×q}(E) × M_{q×1}(E) → M_{q×1}(E). Thus, the action of
(M, α̃) on a row β ∈ E^q is the row (α̃β^T)^T ∈ E^q. In view of Remark (5.6) one has
(α̃β^T)^T = β · α̃^T, where "·" is the product M_{1×q}(E^op) × M_{q×q}(E^op) → M_{1×q}(E^op).
Consequently, over the ring E^op, the automorphism (M, α̃) of the E^op-module E^q
acts on row vectors by multiplication with α̃^T from the right. These arguments
show that η_{E^op}^{-1} applied to [(M, X), (M, α)] is α̃^T; see (3.5).
(8.8) Proposition. Suppose, in addition to the blanket assumptions for this section,
that R is an algebra over its residue field k and that char(k) ≠ 2. Then there
is a group isomorphism,

    σ : Aut_R(M)_ab ──≅──→ K^B_1(mod(MCM R)) ,

given by

    α ↦ [ Hom_R(−, M)|_{MCM R} , Hom_R(−, α)|_{MCM R} ] .

Furthermore, there is an equality,

    σ(Ξ) = Im K^B_1(i) .

Here Ξ is the subgroup of Aut_R(M)_ab given in (2.10), and i : Y → mod(MCM R) is
the inclusion functor from the Gabriel localization sequence (7.0.1).
Proof. We define σ to be the composite of the following isomorphisms,

    (8.8.1)
    Aut_R(M)_ab = E^*_ab = (E^op)^*_ab
        ──θ_{E^op}, ≅──→  K^C_1(E^op)
        ──η_{E^op}, ≅──→  K^B_1(proj(E^op))
        ──K^B_1(inc), ≅──→  K^B_1(mod(E^op))
        ──K^B_1(f_M), ≅──→  K^B_1(mod(MCM R)) .

The ring E, and hence also its opposite ring E^op, is semilocal by Lemma (5.1).
By assumption, R is a k-algebra, and hence so is E^op. Thus, in view of Remark (5.4)
and the assumption char(k) ≠ 2, we get the isomorphism θ_{E^op} from Theorem (5.3).
It maps α ∈ Aut_R(M)_ab to the image of the 1×1 matrix (α) ∈ GL(E^op) in K^C_1(E^op).
The isomorphism η_{E^op} is described in (3.5); it maps ξ ∈ GL_n(E^op) to the class
[(E_E)^n, ξ] ∈ K^B_1(proj(E^op)).
The third map in (8.8.1) is induced by the inclusion inc : proj(E^op) → mod(E^op).
By Leuschke [21, thm. 6] the noetherian ring E^op has finite global dimension and
hence Bass' resolution theorem [7, VIII§4 thm. (4.6)], or Rosenberg [25, thm. 3.1.14],
implies that K^B_1(inc) is an isomorphism. It maps an element [P, α] ∈ K^B_1(proj(E^op))
to [P, α] ∈ K^B_1(mod(E^op)).
The fourth and last isomorphism K^B_1(f_M) in (8.8.1) is induced by the equivalence
f_M : mod(E^op) → mod(MCM R) from Proposition (6.2).
Thus, σ is an isomorphism that maps an element α ∈ Aut_R(M)_ab to the class

    [ E_E ⊗_E Hom_R(−, M)|_{MCM R} , (α ·) ⊗_E Hom_R(−, M)|_{MCM R} ] ,

which is evidently the same as the class

    [ Hom_R(−, M)|_{MCM R} , Hom_R(−, α)|_{MCM R} ] .
It remains to show the equality σ(Ξ) = Im K^B_1(i). By the definition (8.8.1) of σ,
this is tantamount to showing that K^B_1(inc)η_{E^op}θ_{E^op}(Ξ) = K^B_1(f_M)^{-1}(Im K^B_1(i)). As
e_M is a quasi-inverse of f_M, see Proposition (6.2), we have K^B_1(f_M)^{-1} = K^B_1(e_M),
and hence we need to show the equality

    (8.8.2)    K^B_1(inc)η_{E^op}θ_{E^op}(Ξ) = K^B_1(e_M)(Im K^B_1(i)) .

By Definition (2.10), the group Ξ is generated by all elements of the form

    ξ_{j,α} := (det_E α̃)(det_E β̃_{j,α})^{-1}(det_E γ̃_{j,α}) ∈ E^*_ab

for j ∈ {1, . . . , t} and α ∈ Aut_R(M_j); here β_{j,α} ∈ Aut_R(X_j) and γ_{j,α} ∈ Aut_R(τ(M_j))
are choices of automorphisms such that the diagram (2.9.1) is commutative. It
follows from Lemma (5.7) that

    ξ_{j,α} = (det_{E^op} α̃^T)(det_{E^op} β̃_{j,α}^T)^{-1}(det_{E^op} γ̃_{j,α}^T) ∈ (E^op)^*_ab .

By Definition (5.5) the homomorphism det_{E^op} is the inverse of θ_{E^op}, and consequently
the group θ_{E^op}(Ξ) is generated by the elements

    ξ′_{j,α} := θ_{E^op}(ξ_{j,α}) = α̃^T (β̃_{j,α}^T)^{-1} γ̃_{j,α}^T ∈ K^C_1(E^op) .

Thus η_{E^op}θ_{E^op}(Ξ) is generated by the elements ξ″_{j,α} := η_{E^op}(ξ′_{j,α}) ∈ K^B_1(proj(E^op)),
and it follows from Lemma (8.7) that

    ξ″_{j,α} = [Hom_R(M, M_j), Hom_R(M, α)] − [Hom_R(M, X_j), Hom_R(M, β_{j,α})]
             + [Hom_R(M, τ(M_j)), Hom_R(M, γ_{j,α})] .

Thus, the group K^B_1(inc)η_{E^op}θ_{E^op}(Ξ) on the left-hand side in (8.8.2) is generated by
the elements K^B_1(inc)(ξ″_{j,α}). Note that K^B_1(inc)(ξ″_{j,α}) is nothing but ξ″_{j,α} viewed as an
element in K^B_1(mod(E^op)). We have reached the following conclusion:

    The group K^B_1(inc)η_{E^op}θ_{E^op}(Ξ) is generated by the elements ξ″_{j,α},
    where j ranges over {1, . . . , t} and α over all automorphisms of M_j.

To give a useful set of generators of the group K^B_1(e_M)(Im K^B_1(i)) on the right-hand
side in (8.8.2), recall from Theorems (7.1) and (7.2) that every element in Y
has finite length and that the simple objects in Y are, up to isomorphism, exactly
the functors F_1, . . . , F_t. Thus, by [25, (proof of) thm. 3.1.8(2)] the group K^B_1(Y) is
generated by all elements of the form [F_j, ϕ], where j ∈ {1, . . . , t} and ϕ is an
automorphism of F_j. It follows that the group Im K^B_1(i) is generated by the elements
K^B_1(i)([F_j, ϕ]). Note that K^B_1(i)([F_j, ϕ]) is nothing but [F_j, ϕ] viewed as an element
in K^B_1(mod(MCM R)). By definition of the functor e_M, see Prop. (6.2), one has

    λ_{j,ϕ} := K^B_1(e_M)([F_j, ϕ]) = [F_j M, ϕ_M] .

We have reached the following conclusion:

    The group K^B_1(e_M)(Im K^B_1(i)) is generated by the elements λ_{j,ϕ},
    where j ranges over {1, . . . , t} and ϕ over all automorphisms of F_j.

With the descriptions of the generators ξ″_{j,α} and λ_{j,ϕ} at hand, we are now in a
position to prove the identity (8.8.2).
Consider an arbitrary generator ξ″_{j,α} in the group K^B_1(inc)η_{E^op}θ_{E^op}(Ξ). Recall from
Theorem (7.2) that there is an exact sequence in mod(MCM R),

    0 → Hom_R(−, τ(M_j)) → Hom_R(−, X_j) → Hom_R(−, M_j) → F_j → 0 .

Thus, the commutative diagram (2.9.1) in MCM R induces a commutative diagram
in mod(MCM R) with exact rows,

    0 → Hom_R(−, τ(M_j)) → Hom_R(−, X_j) → Hom_R(−, M_j) → F_j → 0
          │≅ Hom_R(−,γ_{j,α})   │≅ Hom_R(−,β_{j,α})   │≅ Hom_R(−,α)   ⋮≅ ϕ
    0 → Hom_R(−, τ(M_j)) → Hom_R(−, X_j) → Hom_R(−, M_j) → F_j → 0 ,

where ϕ is the uniquely determined natural endotransformation of F_j that makes
this diagram commutative. Note that ϕ is an automorphism by the Five Lemma,
and thus [F_j, ϕ] is a well-defined element in K^B_1(mod(MCM R)). The diagram above
is an exact sequence in the loop category Ω(mod(MCM R)), see (3.2) and (3.3), so
in the group K^B_1(mod(MCM R)) there is an equality:

    [F_j, ϕ] = [Hom_R(−, M_j), Hom_R(−, α)] − [Hom_R(−, X_j), Hom_R(−, β_{j,α})]
             + [Hom_R(−, τ(M_j)), Hom_R(−, γ_{j,α})] .

Applying the homomorphism K^B_1(e_M) to this equality, we get λ_{j,ϕ} = ξ″_{j,α}. These
arguments show that every generator ξ″_{j,α} has the form λ_{j,ϕ} for some ϕ, and hence
the inclusion "⊆" in (8.8.2) is established.
Conversely, consider an arbitrary generator λ_{j,ϕ} in the group K^B_1(e_M)(Im K^B_1(i)).
As the category MCM R is a Krull–Schmidt variety in the sense of Auslander [1, II,
§2], it follows by [1, II, prop. 2.1(b,c)] and [1, I, prop. 4.7] that Hom_R(−, M_j) ↠ F_j
is a projective cover in mod(MCM R) in the sense of Definition (8.5). In particular,
ϕ lifts to a natural transformation ψ of Hom_R(−, M_j), which must be an automorphism
by Lemma (8.6). Thus we have a commutative diagram in mod(MCM R),

    Hom_R(−, M_j) ──↠ F_j
       ⋮≅ ψ             │≅ ϕ
    Hom_R(−, M_j) ──↠ F_j .

As the Yoneda functor y_M : MCM R → mod(MCM R) is fully faithful, see [32, lem.
(4.3)], there exists a unique automorphism α of M_j such that ψ = Hom_R(−, α). For
this particular α, the arguments above show that λ_{j,ϕ} = ξ″_{j,α}. Thus every generator
λ_{j,ϕ} has the form ξ″_{j,α} for some α, and hence the inclusion "⊇" in (8.8.2) holds.
(8.9) Observation. For any commutative noetherian local ring R, there is an
isomorphism ρ_R : R^* ──≅──→ K^B_1(proj R) given by the composite of

    R^* ──θ_R, ≅──→ K^C_1(R) ──η_R, ≅──→ K^B_1(proj R) .

The first map is described in (5.2); it is an isomorphism by Srinivas [27, exa. (1.6)].
The second isomorphism is discussed in (3.5). Thus, ρ_R maps r ∈ R^* to [R, r1_R].
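As an added consistency check, ρ_R is a group homomorphism precisely because of the defining relations of K^B_1 recorded in (3.3): for r, s ∈ R^* one has

\[
  \rho_{R}(rs)=[R,\,rs\cdot 1_{R}]=[R,\,r1_{R}]+[R,\,s1_{R}]=\rho_{R}(r)+\rho_{R}(s).
\]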
We are finally in a position to prove the main result.
Proof of Theorem (2.12). By Proposition (8.4) we can identify K_1(mod R) with the
group K^B_1(mod R). Recall that i and r denote the inclusion and restriction functors
from the localization sequence (7.0.1). By the relations that define K^B_1(mod R), see
(3.3), there is a homomorphism π_0 : Aut_R(M) → K^B_1(mod R) given by α ↦ [M, α].
Since K^B_1(mod R) is abelian, π_0 induces a homomorphism π, which is displayed as
the upper horizontal map in the following diagram,

    (8.9.1)
    Aut_R(M)_ab ────────π────────→ K^B_1(mod R)
       │≅ σ                            │≅ K^B_1(f_R)
    K^B_1(mod(MCM R)) ──K^B_1(r)──→ K^B_1(mod(proj R)) .

Here σ is the isomorphism from Proposition (8.8), and the isomorphism K^B_1(f_R)
is induced by the equivalence f_R from Observation (6.3). The diagram (8.9.1) is
commutative; indeed, K^B_1(r)σ and K^B_1(f_R)π both map α ∈ Aut_R(M)_ab to the class

    [ Hom_R(−, M)|_{proj R} , Hom_R(−, α)|_{proj R} ] .

By Lemma (8.2) the homomorphism K^B_1(r) is surjective, and hence so is π. Exactness
of the sequence in Proposition (8.4)(b) and commutativity of the diagram
(8.9.1) show that Ker π = σ^{-1}(Im K^B_1(i)). Therefore Proposition (8.8) implies that
there is an equality Ker π = Ξ, and it follows that π induces an isomorphism,

    π̂ : Aut_R(M)_ab / Ξ ──≅──→ K^B_1(mod R) .

This proves the first assertion in Theorem (2.12).
To prove the second assertion, let inc : proj R → mod R denote the inclusion functor.
Note that the Gersten–Sherman transformation identifies the homomorphisms
K_1(inc) and K^B_1(inc); indeed ζ_{proj R} is an isomorphism by Theorem (3.7) and ζ_{mod R}
is an isomorphism by Proposition (8.4)(a). Thus, we must show that K^B_1(inc) can be
identified with the homomorphism λ : R^* → Aut_R(M)_ab/Ξ given by r ↦ r1_R ⊕ 1_{M′}
(recall that we have written M = R ⊕ M′). To this end, consider the isomorphism
ρ_R : R^* → K^B_1(proj R) from Observation (8.9) given by r ↦ [R, r1_R]. The fact that
K^B_1(inc) and λ are isomorphic maps now follows from the diagram,

    R^* ────────λ────────→ Aut_R(M)_ab / Ξ
     │≅ ρ_R                    │≅ π̂
    K^B_1(proj R) ──K^B_1(inc)──→ K^B_1(mod R) ,

which is commutative. Indeed, for r ∈ R^* one has

    (π̂λ)(r) = [M, r1_R ⊕ 1_{M′}] = [R, r1_R] + [M′, 1_{M′}] = [R, r1_R] = (K^B_1(inc)ρ_R)(r) ,

where the penultimate equality is by (3.4).
9. Abelianization of Automorphism Groups
To apply Theorem (2.12), one must compute AutR (M )ab , i.e. the abelianization
of the automorphism group of the representation generator M . In Proposition (9.6)
we compute AutR (M )ab for the R-module M = R ⊕ m, which is a representation
generator for MCM R if m happens to be the only non-free indecomposable maximal
Cohen–Macaulay module over R. Specific examples of rings for which this is the
case will be studied in Section 10. Throughout this section, A denotes any ring.
(9.1) Definition. Let N1 , . . . , Ns be A-modules, and set N = N1 ⊕ · · · ⊕ Ns . We
view elements in N as column vectors.
For ϕ ∈ AutA (Ni ) we denote by di (ϕ) the automorphism of N which has as its
diagonal 1N1 , . . . , 1Ni−1 , ϕ, 1Ni+1 , . . . , 1Ns and 0 in all other entries.
For i 6= j and µ ∈ HomA (Nj , Ni ) we denote by eij (µ) the automorphism of N
with diagonal 1N1 , . . . , 1Ns , and whose only non-trivial off-diagonal entry is µ in
position (i, j).
(9.2) Lemma. Let N_1, . . . , N_s be A-modules and set N = N_1 ⊕ · · · ⊕ N_s. If 2 ∈ A
is a unit, if i ≠ j, and if µ ∈ Hom_A(N_j, N_i), then e_{ij}(µ) is a commutator in Aut_A(N).
Proof. The commutator of ϕ and ψ in Aut_A(N) is [ϕ, ψ] = ϕψϕ^{-1}ψ^{-1}. It is easily
verified that e_{ij}(µ) = [e_{ij}(µ/2), d_j(−1_{N_j})] if i ≠ j.
The idea in the proof above is certainly not new. It appears, for example, already
in Litoff [22, proof of thm. 2] in the case s = 2. Of course, if s ≥ 3 then e_{ij}(µ) is a
commutator even without the assumption that 2 is a unit; see e.g. [25, lem. 2.1.2(c)].
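For readers who want to see the matrix identity behind Lemma (9.2) spelled out (an added verification in the case s = 2, i = 1, j = 2, with matrices written with respect to N_1 ⊕ N_2):

\[
  \bigl[\,e_{12}(\tfrac{\mu}{2}),\,d_{2}(-1_{N_2})\bigr]
  =\begin{pmatrix}1 & \mu/2\\ 0 & 1\end{pmatrix}
   \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}
   \begin{pmatrix}1 & -\mu/2\\ 0 & 1\end{pmatrix}
   \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}
  =\begin{pmatrix}1 & \mu\\ 0 & 1\end{pmatrix}
  =e_{12}(\mu),
\]

using e_{12}(µ/2)^{-1} = e_{12}(−µ/2) and d_2(−1_{N_2})^{-1} = d_2(−1_{N_2}).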
(9.3) Lemma. Let X and Y be non-isomorphic A-modules with local endomorphism
rings. Let ϕ, ψ ∈ End_A(X) and assume that ψ factors through Y. Then one
has ψ ∉ Aut_A(X). Furthermore, ϕ ∈ Aut_A(X) if and only if ϕ + ψ ∈ Aut_A(X).
Proof. Write ψ = ψ″ψ′ with ψ′ : X → Y and ψ″ : Y → X. If ψ is an automorphism,
then ψ″ is a split epimorphism and hence an isomorphism, as Y is indecomposable.
This contradicts the assumption that X and Y are not isomorphic. The second
assertion now follows as Aut_A(X) is the set of units in the local ring End_A(X).
(9.4) Proposition. Let N1 , . . . , Ns be pairwise non-isomorphic A-modules with
local endomorphism rings. An endomorphism
α = (αij ) ∈ EndA (N1 ⊕ · · · ⊕ Ns ) with αij ∈ HomA (Nj , Ni )
is an automorphism if and only if α11 , α22 , . . . , αss are automorphisms.
Furthermore, every α in AutA (N ) can be written as a product of automorphisms
of the form di (·) and eij (·), cf. Definition (9.1).
Proof. "Only if": Assume that α = (α_{ij}) is an automorphism with inverse β = (β_{ij}),
and let i ∈ {1, . . . , s} be given. In the local ring End_A(N_i) one has 1_{N_i} = Σ_{j=1}^{s} α_{ij}β_{ji},
and hence one of the terms α_{ij}β_{ji} must be an automorphism. As α_{ij}β_{ji} is not an
automorphism for j ≠ i, see Lemma (9.3), it follows that α_{ii}β_{ii} is an automorphism.
In particular, α_{ii} has a right inverse and β_{ii} has a left inverse, and since the ring
End_A(N_i) is local this means that α_{ii} and β_{ii} are both automorphisms.
"If": By induction on s ≥ 1. The assertion is trivial for s = 1. Now let s > 1.
Assume that α_{11}, α_{22}, . . . , α_{ss} are automorphisms. Recall the notation from (9.1).
By composing α with e_{s1}(−α_{s1}α_{11}^{-1}) · · · e_{31}(−α_{31}α_{11}^{-1}) e_{21}(−α_{21}α_{11}^{-1}) from the left
and with e_{12}(−α_{11}^{-1}α_{12}) e_{13}(−α_{11}^{-1}α_{13}) · · · e_{1s}(−α_{11}^{-1}α_{1s}) from the right, one gets an
endomorphism of the form

    α′ = [ α_{11}  0 ]  =  d_1(α_{11}) [ 1_{N_1}  0 ] ,
         [   0    β ]                  [   0     β ]

where β ∈ End_A(N_2 ⊕ · · · ⊕ N_s) is an (s − 1) × (s − 1) matrix with diagonal entries
given by α_{jj} − α_{j1}α_{11}^{-1}α_{1j} for j = 2, . . . , s. By applying Lemma (9.3) to the situation
ϕ = α_{jj} − α_{j1}α_{11}^{-1}α_{1j} and ψ = α_{j1}α_{11}^{-1}α_{1j}, it follows that the diagonal entries in
β are all automorphisms. By the induction hypothesis, β is now an automorphism
and can be written as a product of automorphisms of the form d_i(·) and e_{ij}(·).
Consequently, the same is true for α′, and hence also for α.
(9.5) Corollary. Assume that 2 ∈ A is a unit and let N_1, . . . , N_s be pairwise non-isomorphic
A-modules with local endomorphism rings. The homomorphism,

    ∆ : Aut_A(N_1) × · · · × Aut_A(N_s) ──→ Aut_A(N_1 ⊕ · · · ⊕ N_s) ,

given by ∆(ϕ_1, . . . , ϕ_s) = d_1(ϕ_1) · · · d_s(ϕ_s), induces a surjective homomorphism,

    ∆_ab : Aut_A(N_1)_ab ⊕ · · · ⊕ Aut_A(N_s)_ab ──→ Aut_A(N_1 ⊕ · · · ⊕ N_s)_ab .

Proof. By Proposition (9.4) every element in Aut_A(N_1 ⊕ · · · ⊕ N_s) is a product of
automorphisms of the form d_i(·) and e_{ij}(·). As 2 ∈ A is a unit, Lemma (9.2) yields
that every element of the form e_{ij}(·) is a commutator; thus in Aut_A(N_1 ⊕ · · · ⊕ N_s)_ab
every element is a product of elements of the form d_i(·), so ∆_ab is surjective.
As noted above, Lemma (9.2), and consequently also Corollary (9.5), holds without
the assumption that 2 ∈ A is a unit provided that s ≥ 3.
In the following, we write [ · ]m : R ։ R/m = k for the quotient homomorphism.
(9.6) Proposition. Let (R, m, k) be any commutative local ring such that 2 ∈ R
is a unit. Assume that m is not isomorphic to R and that the endomorphism ring
End_R(m) is commutative and local. There is an isomorphism of abelian groups,

    δ : Aut_R(R ⊕ m)_ab ──≅──→ k^* ⊕ Aut_R(m) ,

given by

    [ α_{11}  α_{12} ]
    [ α_{21}  α_{22} ]  ↦  ( [α_{11}(1)]_m , α_{11}α_{22} − α_{21}α_{12} ) .
Proof. First note that the image of any homomorphism α : m → R is contained in m.
Indeed, if Im α ⊈ m, then u = α(a) is a unit for some a ∈ m, and thus α(u^{-1}a) = 1.
It follows that α is surjective, and hence a split epimorphism as R is free. Since m
is indecomposable, α must be an isomorphism, which is a contradiction.
Therefore, given an endomorphism,

    [ α_{11}  α_{12} ]                       [ Hom_R(R, R)   Hom_R(m, R) ]
    [ α_{21}  α_{22} ]  ∈  End_R(R ⊕ m)  =  [ Hom_R(R, m)   Hom_R(m, m) ] ,

we may by (co)restriction view the entries α_{ij} as elements in the endomorphism
ring End_R(m). As this ring is assumed to be commutative, the determinant map

    End_R(R ⊕ m) ──→ End_R(m)    given by    (α_{ij}) ↦ α_{11}α_{22} − α_{21}α_{12}

preserves multiplication. If (α_{ij}) ∈ Aut_R(R ⊕ m), then Proposition (9.4) implies
that α_{11} ∈ Aut_R(R) and α_{22} ∈ Aut_R(m), and thus α_{11}α_{22} ∈ Aut_R(m). By applying
Lemma (9.3) to ϕ = α_{11}α_{22} − α_{21}α_{12} and ψ = α_{21}α_{12} we get ϕ ∈ Aut_R(m), and
hence the determinant map is a group homomorphism Aut_R(R ⊕ m) → Aut_R(m).
The map Aut_R(R ⊕ m) → k^* defined by (α_{ij}) ↦ [α_{11}(1)]_m is also a group
homomorphism. Indeed, entry (1, 1) in the product (α_{ij})(β_{ij}) is α_{11}β_{11} + α_{12}β_{21}. Here
α_{12} is a homomorphism m → R, and hence α_{12}β_{21}(1) ∈ m by the arguments in the
beginning of the proof. Consequently one has

    [(α_{11}β_{11} + α_{12}β_{21})(1)]_m = [(α_{11}β_{11})(1)]_m = [α_{11}(1)β_{11}(1)]_m = [α_{11}(1)]_m [β_{11}(1)]_m .

These arguments and the fact that the groups k^* and Aut_R(m) are abelian show
that the map δ described in the proposition is a well-defined group homomorphism.
Evidently, δ is surjective; indeed, for [r]_m ∈ k^* and ϕ ∈ Aut_R(m) one has

    δ [ r1_R     0     ]  =  ([r]_m , ϕ) .
      [  0    r^{-1}ϕ ]

To show that δ is injective, assume that α ∈ Aut_R(R ⊕ m)_ab with δ(α) = ([1]_m, 1_m).
By Corollary (9.5) we can assume that α = (α_{ij}) is a diagonal matrix. We write
α_{11} = r1_R for some unit r ∈ R. Since one has δ(α) = ([r]_m, rα_{22}), we conclude that
r ∈ 1 + m and α_{22} = r^{-1}1_m, that is, α has the form

    α = [ r1_R      0      ]    with    r ∈ 1 + m .
        [  0    r^{-1}1_m ]

Thus, proving injectivity of δ amounts to showing that every automorphism α of the
form above belongs to the commutator subgroup of Aut_R(R ⊕ m). As r − 1 ∈ m, the
map (r − 1)1_R gives a homomorphism R → m. Since r(r^{-1} − 1) = 1 − r ∈ m and
r ∉ m, it follows that r^{-1} − 1 ∈ m. Thus (r^{-1} − 1)1_R gives another homomorphism
R → m. If ι : m ↪ R denotes the inclusion, then one has²

    [ r1_R     0      ]     [      1_R        0   ] [ 1_R  ι   ] [    1_R      0   ] [ 1_R  −r^{-1}ι ]
    [  0   r^{-1}1_m ]  =  [ (r^{-1}−1)1_R  1_m ] [ 0    1_m ] [ (r−1)1_R   1_m ] [ 0      1_m   ] .

The right-hand side of this equality is a product of matrices of the form e_{ij}(·), and since
2 ∈ R is a unit the desired conclusion now follows from Lemma (9.2).

² The identity comes from the standard proof of Whitehead's lemma; see e.g. [27, lem. (1.4)].
10. Examples
We begin with a trivial example.
(10.1) Example. If R is regular, then there are isomorphisms,

    K_1(mod R) ≅ K_1(proj R) ≅ K^C_1(R) ≅ R^* .

The first isomorphism is by Quillen's resolution theorem [24, §4 thm. 3], the second
one is mentioned in (3.6), and the third one is well-known; see e.g. [27, exa. (1.6)].
Theorem (2.12) confirms this result: indeed, as M = R is a representation generator
for MCM R = proj R, one has Aut_R(M)_ab = R^*. As there are no Auslander–Reiten
sequences in this case, the subgroup Ξ is generated by the empty set, so Ξ = 0.
We now illustrate how Theorem (2.12) applies to compute K_1(mod R) for the ring
R = k[X]/(X^2). The answer is well-known to be k^*; indeed, for any commutative
artinian local ring R with residue field k one has K_1(mod R) ≅ k^* by [24, §5 cor. 1].
(10.2) Example. Let R = k[X]/(X^2) be the ring of dual numbers over a field k
with char(k) ≠ 2. Denote by inc : proj R → mod R the inclusion functor. The
homomorphism K_1(inc) may be identified with the map,

    µ : R^* ──→ k^*    given by    a + bX ↦ a^2 .
Proof. The maximal ideal m = (X) is the only non-free indecomposable maximal
Cohen–Macaulay R-module, so M = R ⊕ m is a representation generator for MCM R;
see (2.1.1). There is an isomorphism k → End_R(m) of R-algebras given by a ↦ a1_m,
in particular, End_R(m) is commutative. Via this isomorphism, k^* corresponds to
Aut_R(m). The Auslander–Reiten sequence ending in m is

    0 ──→ m ──ι──→ R ──X──→ m ──→ 0 ,

where ι is the inclusion. The Auslander–Reiten homomorphism Υ = (−1, 2)^T : Z → Z^2
is injective, so Theorem (2.12) can be applied. Note that for every a1_m ∈ Aut_R(m),
where a ∈ k^*, there is a commutative diagram,

    0 ──→ m ──ι──→ R ──X──→ m ──→ 0
          │≅ a1_m    │≅ a1_R    │≅ a1_m
    0 ──→ m ──ι──→ R ──X──→ m ──→ 0 .

Applying the tilde construction (2.6) to the automorphisms a1_m and a1_R one gets

    \widetilde{a1_m} = [ 1_R    0   ]        and        \widetilde{a1_R} = [ a1_R   0  ] ;
                       [  0   a1_m ]                                       [  0    1_m ]

see Example (2.7). In view of Definition (2.10) and Remark (2.11), the subgroup Ξ
of Aut_R(R ⊕ m)_ab is therefore generated by all elements of the form

    ξ_a := (\widetilde{a1_m})(\widetilde{a1_R})^{-1}(\widetilde{a1_m}) = [ a^{-1}1_R     0      ]        where a ∈ k^* .
                                                                         [     0      a^2 1_m ]

Denote by ω the composite of the isomorphisms,

    Aut_R(R ⊕ m)_ab ──δ, ≅──→ k^* ⊕ Aut_R(m) ──≅──→ k^* ⊕ k^* ,

where δ is the isomorphism from Proposition (9.6). As ω(ξ_a) = (a^{-1}, a) we get that
ω(Ξ) = {(a^{-1}, a) | a ∈ k^*} and thus ω induces the first group isomorphism below,

    Aut_R(R ⊕ m)_ab / Ξ ──ω, ≅──→ (k^* ⊕ k^*)/ω(Ξ) ──χ, ≅──→ k^* ;

the second isomorphism is induced by the surjective homomorphism k^* ⊕ k^* → k^*,
given by (b, a) ↦ ba, whose kernel is exactly ω(Ξ). In view of Theorem (2.12) and
the isomorphisms ω and χ above, it follows that K_1(mod R) ≅ k^*.
Theorem (2.12) asserts that K_1(inc) may be identified with the homomorphism

    λ : R^* ──→ Aut_R(R ⊕ m)_ab / Ξ    given by    r ↦ [ r1_R   0  ] .
                                                       [  0    1_m ]

It remains to note that the isomorphism χω identifies λ with the homomorphism µ
described in the example; indeed, one has χωλ = µ.
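To spell out the last identification (an added verification using only the formulas above): write r = a + bX ∈ R^* with a ∈ k^*. Since X acts as zero on m = (X), the endomorphism r1_m equals a1_m and corresponds to a ∈ k^*, so

\[
  \lambda(r)=\begin{pmatrix} r1_R & 0\\ 0 & 1_{\mathfrak m}\end{pmatrix},\qquad
  \omega(\lambda(r))=\bigl([r]_{\mathfrak m},\,r1_{\mathfrak m}\bigr)=(a,a),\qquad
  \chi(a,a)=a^{2}=\mu(r).
\]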
Example (10.2) shows that for R = k[X]/(X^2) the canonical homomorphism,

    R^* ≅ K_1(proj R) ──K_1(inc)──→ K_1(mod R) ≅ k^* ,

is not an isomorphism. It turns out that if k is algebraically closed with characteristic
zero, then there exists a non-canonical isomorphism between R^* and k^*.
(10.3) Proposition. Let R = k[X]/(X^2) where k is an algebraically closed field
of characteristic p ≥ 0. The following assertions hold.
(a) If p > 0, then the groups R^* and k^* are not isomorphic.
(b) If p = 0, then there exists a (non-canonical) group isomorphism R^* ≅ k^*.
Proof. There is a group isomorphism R^* → k^* ⊕ k^+ given by a + bX ↦ (a, b/a),
where k^+ denotes the underlying abelian group of the field k.
"(a)": Let ϕ = (ϕ_1, ϕ_2) : k^* → k^* ⊕ k^+ be any group homomorphism. As k is
algebraically closed, every element x ∈ k^* has the form x = y^p for some y ∈ k^*.
Therefore ϕ(x) = ϕ(y^p) = ϕ(y)^p = (ϕ_1(y), ϕ_2(y))^p = (ϕ_1(y)^p, pϕ_2(y)) = (ϕ_1(x), 0),
which shows that ϕ is not surjective.
"(b)": Since p = 0 the abelian group k^+ is divisible and torsion free. Therefore
k^+ ≅ Q^{(I)} for some index set I. There exist algebraic field extensions of Q of any
finite degree, and these are all contained in the algebraically closed field k. Thus
|I| = dim_Q k must be infinite, and it follows that |I| = |k|.
The abelian group k^* is also divisible, but it has torsion. Write k^* ≅ T ⊕ (k^*/T),
where T = {x ∈ k^* | ∃ n ∈ N : x^n = 1} is the torsion subgroup of k^*. For the divisible
torsion free abelian group k^*/T one has k^*/T ≅ Q^{(J)} for some index set J. It is
not hard to see that |J| must be infinite, and hence |J| = |k^*/T|. As |T| = ℵ_0 it
follows that |k| = |k^*| = ℵ_0 + |J| = |J|.
Since |J| = |k| = |I| one gets k^* ≅ T ⊕ Q^{(J)} ≅ T ⊕ Q^{(J)} ⊕ Q^{(I)} ≅ k^* ⊕ k^+.
The artinian ring R = k[X]/(X^2) from Example (10.2) has length ℓ = 2, and
this power is also involved in the description of the homomorphism µ = K_1(inc).
The next result shows that this is no coincidence. As Proposition (10.4) might be
well-known to experts, and since we do not really need it, we do not give a proof.
(10.4) Proposition. Let (R, m, k) be a commutative artinian local ring of length ℓ.
The group homomorphism R^* ≅ K_1(proj R) → K_1(mod R) ≅ k^* induced by the
inclusion inc : proj R → mod R is the composition of the homomorphisms,

    R^* ──π──→ k^* ──(·)^ℓ──→ k^* ,

where π : R ↠ R/m = k is the canonical quotient map and (·)^ℓ is the ℓ'th power.
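For R = k[X]/(X^2) this is consistent with Example (10.2), as a quick added check confirms: here ℓ = 2, and for a unit a + bX one has

\[
  \bigl(\pi(a+bX)\bigr)^{\ell}=a^{2}=\mu(a+bX).
\]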
Our next example is a non-artinian ring, namely the simple curve singularity of
type (A_2) studied by e.g. Herzog [19, Satz 1.6] and Yoshino [32, prop. (5.11)].
(10.5) Example. Let R = k[[T^2, T^3]] where k is an algebraically closed field with
char(k) ≠ 2. Denote by inc : proj R → mod R the inclusion functor. The homomorphism
K_1(inc) may be identified with the inclusion map,

    µ : R^* = k[[T^2, T^3]]^* ↪ k[[T]]^* .

Proof. The maximal ideal m = (T^2, T^3) is the only non-free indecomposable maximal
Cohen–Macaulay R-module, so M = R ⊕ m is a representation generator for
MCM R; see (2.1.1). Even though T is not an element in R = k[[T^2, T^3]], multiplication
by T is a well-defined endomorphism of m. Thus there is a ring homomorphism,

    χ : k[[T]] ──→ End_R(m)    given by    h ↦ h1_m .

It is not hard to see that χ is injective. To prove that it is surjective, i.e. that one
has End_R(m)/k[[T]] = 0, note that there is a short exact sequence of R-modules,

    0 ──→ k[[T]]/R ──→ End_R(m)/R ──→ End_R(m)/k[[T]] ──→ 0 .
To see that End_R(m)/k[[T]] = 0, it suffices to argue that the R-module End_R(m)/R
is simple. As noted in the beginning of the proof of Proposition (9.6), the inclusion
m ↪ R induces an isomorphism End_R(m) ≅ Hom_R(m, R), so by applying
Hom_R(−, R) to the short exact sequence 0 → m → R → k → 0, it follows that

    End_R(m)/R ≅ Ext^1_R(k, R) .

The latter module is isomorphic to k since R is a 1-dimensional Gorenstein ring.
Note that via the isomorphism χ, the group k[[T]]^* corresponds to Aut_R(m).
The Auslander–Reiten sequence ending in m is

    0 ──→ m ──(1, −T)^T──→ R ⊕ m ──(T^2, T)──→ m ──→ 0 .

Since the Auslander–Reiten homomorphism Υ = (−1, 1)^T : Z → Z^2 is injective,
Theorem (2.12) can be applied. We regard elements in R ⊕ m as column vectors. Let
α = h1_m ∈ Aut_R(m), where h ∈ k[[T]]^*, be given. Write h = f + gT for some f ∈ R^*
and g ∈ R. It is straightforward to verify that there is a commutative diagram,

    0 ──→ m ──(1, −T)^T──→ R ⊕ m ──(T^2, T)──→ m ──→ 0
          │≅ γ = (f−gT)1_m     │≅ β = [ f     g ]     │≅ α = (f+gT)1_m
          │                    │      [ gT^2  f ]     │
    0 ──→ m ──(1, −T)^T──→ R ⊕ m ──(T^2, T)──→ m ──→ 0 .

Note that β really is an automorphism; indeed, its inverse is given by

    β^{-1} = (f^2 − g^2T^2)^{-1} [   f     −g ]
                                 [ −gT^2    f ] .

We now apply the tilde construction (2.6) to α, β, and γ; by Example (2.7) we get:

    α̃ = [ 1_R     0       ] ,    β̃ = β ,    and    γ̃ = [ 1_R      0      ] .
         [  0   (f+gT)1_m ]                             [  0   (f−gT)1_m ]

In view of Definition (2.10) and Remark (2.11), the subgroup Ξ of Aut_R(R ⊕ m)_ab
is therefore generated by all the elements

    ξ_h := α̃ β̃^{-1} γ̃ = (f^2 − g^2T^2)^{-1} [        f             −g(f − gT)     ]
                                             [ −gT^2(f + gT)    f(f^2 − g^2T^2) ] .

Denote by ω the composite of the isomorphisms,

    Aut_R(R ⊕ m)_ab ──δ, ≅──→ k^* ⊕ Aut_R(m) ──1 ⊕ χ^{-1}, ≅──→ k^* ⊕ k[[T]]^* ,

where δ is the isomorphism from Proposition (9.6). Note that δ(ξ_h) = ([f]_m, 1_m) =
(h(0), 1_m) and hence ω(ξ_h) = (h(0), 1). It follows that ω(Ξ) = k^* ⊕ {1} and thus ω
induces a group isomorphism,

    ω : Aut_R(R ⊕ m)_ab / Ξ ──≅──→ (k^* ⊕ k[[T]]^*)/ω(Ξ) = k[[T]]^* .

In view of this isomorphism, Theorem (2.12) shows that K_1(mod R) ≅ k[[T]]^*.
Theorem (2.12) also asserts that K_1(inc) may be identified with the homomorphism

    λ : R^* ──→ Aut_R(R ⊕ m)_ab / Ξ    given by    f ↦ [ f1_R   0  ] .
                                                       [  0    1_m ]

It remains to note that the isomorphism ω identifies λ with the inclusion map µ
described in the example; indeed, one has ωλ = µ.
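To make the final identification explicit (an added verification using δ from Proposition (9.6) and χ from above): for f ∈ R^* = k[[T^2, T^3]]^*,

\[
  \omega(\lambda(f))
  =(1\oplus\chi^{-1})\,\delta\!\begin{pmatrix} f1_R & 0\\ 0 & 1_{\mathfrak m}\end{pmatrix}
  =(1\oplus\chi^{-1})\bigl([f]_{\mathfrak m},\,f1_{\mathfrak m}\bigr)
  =\bigl(f(0),\,f\bigr),
\]

which equals f in the quotient (k^* ⊕ k[[T]]^*)/ω(Ξ) = k[[T]]^* because ω(Ξ) = k^* ⊕ {1}; this is exactly the inclusion µ.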
Acknowledgements
It is a pleasure to thank Peter Jørgensen without whom this paper could not
have been written. However, Peter did not wish to be a coauthor of this manuscript.
We are also grateful to Marcel Bökstedt and Charles A. Weibel for valuable input
on the Gersten–Sherman transformation, and to Christian U. Jensen for pointing
out Proposition (10.3). Finally, we thank Viraj Navkal and the anonymous referee
for their thoughtful comments and for making us aware of the paper [23].
References
[1] Maurice Auslander, Representation theory of Artin algebras. I, II, Comm. Algebra 1 (1974),
177–268; ibid. 1 (1974), 269–310.
[2] Maurice Auslander, Rational singularities and almost split sequences, Trans. Amer. Math. Soc. 293
(1986), no. 2, 511–531.
[3] Maurice Auslander and Ragnar-Olaf Buchweitz, The homological theory of maximal CohenMacaulay approximations, Mém. Soc. Math. France (N.S.) (1989), no. 38, 5–37, Colloque en
l’honneur de Pierre Samuel (Orsay, 1987).
[4] Maurice Auslander and Idun Reiten, Grothendieck groups of algebras and orders, J. Pure
Appl. Algebra 39 (1986), no. 1-2, 1–51.
[5] Maurice Auslander and Idun Reiten, Almost split sequences for rational double points, Trans. Amer. Math. Soc. 302
(1987), no. 1, 87–97.
[6] Maurice Auslander, Idun Reiten, and Sverre O. Smalø, Representation theory of Artin algebras, Cambridge Studies in Advanced Mathematics, vol. 36, Cambridge University Press,
Cambridge, 1995.
[7] Hyman Bass, Algebraic K-theory, W. A. Benjamin, Inc., New York-Amsterdam, 1968.
[8] David J. Benson, Representations and cohomology. I, second ed., Cambridge Studies in Advanced Mathematics, vol. 30, Cambridge University Press, Cambridge, 1998, Basic representation theory of finite groups and associative algebras.
[9] Michael C. R. Butler, Grothendieck groups and almost split sequences, Integral representations and applications (Oberwolfach, 1980), Lecture Notes in Math., vol. 882, Springer,
Berlin, 1981, pp. 357–368.
[10] Nuri Cimen, One-dimensional rings of finite Cohen-Macaulay type, ProQuest LLC, Ann
Arbor, MI, 1994, Thesis (Ph.D.)–The University of Nebraska - Lincoln.
[11] Nuri Cimen, One-dimensional rings of finite Cohen-Macaulay type, J. Pure Appl. Algebra 132
(1998), no. 3, 275–308.
[12] Ju. A. Drozd and Andrei V. Roı̆ter, Commutative rings with a finite number of indecomposable integral representations, Izv. Akad. Nauk SSSR Ser. Mat. 31 (1967), 783–798.
[13] Edgar E. Enochs and Overtoun M. G. Jenda, Relative homological algebra, de Gruyter Expositions in Mathematics, vol. 30, Walter de Gruyter & Co., Berlin, 2000.
[14] Hélène Esnault, Reflexive modules on quotient surface singularities, J. Reine Angew. Math.
362 (1985), 63–71.
[15] William Fulton and Joe Harris, Representation theory, Graduate Texts in Mathematics, vol.
129, Springer-Verlag, New York, 1991, A first course, Readings in Mathematics.
[16] Pierre Gabriel, Des catégories abéliennes, Bull. Soc. Math. France 90 (1962), 323–448.
[17] Stephen M. Gersten, Higher K-theory of rings, Algebraic K-theory, I: Higher K-theories
(Proc. Conf. Seattle Res. Center, Battelle Memorial Inst., 1972), Springer, Berlin, 1973,
pp. 3–42. Lecture Notes in Math., Vol. 341.
[18] Edward L. Green and Irving Reiner, Integral representations and diagrams, Michigan Math.
J. 25 (1978), no. 1, 53–84.
[19] Jürgen Herzog, Ringe mit nur endlich vielen Isomorphieklassen von maximalen, unzerlegbaren Cohen-Macaulay-Moduln, Math. Ann. 233 (1978), no. 1, 21–34.
[20] Tsit-Yuen Lam, A first course in noncommutative rings, second ed., Graduate Texts in
Mathematics, vol. 131, Springer-Verlag, New York, 2001.
[21] Graham J. Leuschke, Endomorphism rings of finite global dimension, Canad. J. Math. 59
(2007), no. 2, 332–342.
[22] Oscar Litoff, On the commutator subgroup of the general linear group, Proc. Amer. Math.
Soc. 6 (1955), 465–470.
[23] Viraj Navkal, K’-theory of a local ring of finite Cohen-Macaulay type, preprint (2012)
arXiv:1108.2000v2 [math.KT] (see http://www.math.ucla.edu/˜viraj/ for the latest version).
[24] Daniel Quillen, Higher algebraic K-theory. I, Algebraic K-theory, I: Higher K-theories (Proc.
Conf., Battelle Memorial Inst., Seattle, Wash., 1972), Springer, Berlin, 1973, pp. 85–147.
Lecture Notes in Math., Vol. 341.
[25] Jonathan Rosenberg, Algebraic K-theory and its applications, Graduate Texts in Mathematics, vol. 147, Springer-Verlag, New York, 1994.
[26] Clayton Sherman, Group representations and algebraic K-theory, Algebraic K-theory, Part
I (Oberwolfach, 1980), Lecture Notes in Math., vol. 966, Springer, Berlin, 1982, pp. 208–243.
[27] Vasudevan Srinivas, Algebraic K-theory, second ed., Progress in Mathematics, vol. 90,
Birkhäuser Boston Inc., Boston, MA, 1996.
[28] Leonid N. Vaserstein, On the stabilization of the general linear group over a ring, Math.
USSR-Sb. 8 (1969), 383–400.
[29] ———, On the Whitehead determinant for semi-local rings, J. Algebra 283 (2005), no. 2, 690–699.
[30] Roger Wiegand, Noetherian rings of bounded representation type, Commutative algebra
(Berkeley, CA, 1987), Math. Sci. Res. Inst. Publ., vol. 15, Springer, New York, 1989, pp. 497–
516.
[31] ———, One-dimensional local rings with finite Cohen-Macaulay type, Algebraic geometry and its applications (West Lafayette, IN, 1990), Springer, New York, 1994, pp. 381–389.
[32] Yuji Yoshino, Cohen-Macaulay modules over Cohen-Macaulay rings, London Mathematical
Society Lecture Note Series, vol. 146, Cambridge University Press, Cambridge, 1990.
Department of Mathematical Sciences, Faculty of Science, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen Ø, Denmark
E-mail address: [email protected]
URL: http://www.math.ku.dk/~holm/
GP-ILQG: Data-driven Robust Optimal Control for
Uncertain Nonlinear Dynamical Systems
arXiv:1705.05344v1 [cs.RO] 15 May 2017
Gilwoo Lee
Siddhartha S. Srinivasa
Matthew T. Mason ∗
Abstract
As we aim to control complex systems, use of a simulator in model-based reinforcement learning is becoming more common. However, it has been challenging
to overcome the Reality Gap, which comes from nonlinear model bias and susceptibility to disturbance. To address these problems, we propose a novel algorithm that
combines a data-driven system identification approach (Gaussian Processes) with a
Differential-Dynamic-Programming-based robust optimal control method (Iterative
Linear Quadratic Control). Our algorithm uses the simulator’s model as the mean
function for a Gaussian Process and learns only the difference between the simulator’s prediction and actual observations, making it a natural hybrid of simulation
and real-world observation. We show that our approach quickly corrects incorrect
models, comes up with robust optimal controllers, and transfers its acquired model
knowledge to new tasks efficiently.
1 Introduction
As we aim to control more complex robotic systems autonomously, simulators are being more
frequently used for training in model-based reinforcement learning [7]. A simulator allows us to
explore various policies without damaging the robot, and is also capable of generating a large amount
of synthetic data with little cost and time.
However, we often observe that simulator-based policies perform poorly in real world, due to model
discrepancy between the simulation and the real world. This discrepancy arises from two fundamental
challenges: (1) system identification to match the simulation model with the real world requires
the exploration of a large state space at the risk of damaging the robot, and (2) even with good
system identification, there is still discrepancy due to the limitations of a simulator’s ability to render
real-world physics.
Stochastic optimal control algorithms attempt to partially address this issue by artificially injecting
noise into the simulation during training [5, 24], or by explicitly modeling multiplicative noise [22,
20].
If the task domain is predefined, exploration can be limited to task-specific trajectories, and system
identification can be coupled with trajectory optimization [21]. Some recent works have suggested
policy training with multiple models, which results in a policy robust to model variance [11, 3].
While these methods have shown some successful results, they are still limited by the expressiveness
of the simulator’s model. If the true model is outside of the simulator’s model space, little can be
guaranteed.
Thus, although these algorithms produce more robust policies, they fail to address the fundamental
issue: there is unknown but structured model discrepancy between the simulation and the real world.
In this paper, we propose a novel algorithm that addresses both model bias and multiplicative noise.
Our work is based on the following key insight:
∗
gilwool, siddh, [email protected]. All authors are affiliated with Robotics Institute, Carnegie
Mellon University.
Figure 1: GP-ILQG overview.
Explicitly correcting model bias and incorporating the correction as well as our
uncertainty of the correction in optimal control enables lifelong learning of the
system and robust control under uncertainty.
Our algorithm iterates over simulation-based optimal control, real-world data collection, and model
learning, as illustrated in Figure 1. Starting from a potentially incorrect model given by the simulator,
we obtain a control policy, with which we collect data in the real world. This data feeds into model
learning, during which we correct model bias and estimate our uncertainty of the correction. Both the
correction and its uncertainty are incorporated into computing a robust optimal control policy, which
then gets used to collect more data.
Our approach improves any simulator beyond the scope of its model space to match real-world
observations and produces an optimal control policy robust to model uncertainty and multiplicative
noise. The improved simulator uses previous real-world observations to infer the true model when it
explores previously visited space, but when it encounters a new region, it relies on the simulator’s
original model. Due to this hybrid nature, our algorithm shows faster convergence to the optimal
policy than a pure data-driven approach [13] or a pure simulation-based approach. Moreover, as it
permanently improves the simulator, it shows even faster convergence in new tasks in similar task
domain.
2 Related Work
Most model-based reinforcement learning has both model learning (system identification) and policy
optimization components [7]. The data for a model comes either from real world or simulation, and
is combined to construct a model via nonlinear function approximators such as Locally Weighted
Regression [2], Gaussian Processes [17], or Neural Networks [12]. Once the model is built, a
typical policy gradient method computes the derivatives of the cost function with respect to control
parameters [4, 8].
If an analytic model is given, e.g., via equations of motion or as a simulator2 , one can use classical
optimal control techniques such as Differential Dynamic Programming (DDP) [6], which compute a
reference trajectory as well as linear feedback control law. For robustness, Iterative Linear Quadratic
Gaussian Control (ILQG) [22] or H-∞ Control [20] can be used to incorporate multiplicative noise.
Variants of [22] have been used to generate guiding policies for data-driven RL methods [8, 28].
Recently, there have been some attempts to combine DDP or ILQG with data-driven models by
replacing analytical models with locally linear models [10, 25] or nonlinear models [13, 15, 26, 14]
learned by Gaussian Processes or Neural Networks.
2 We consider black-box simulators as analytical models, as derivatives can be taken by finite differencing.
The goal of our algorithm is closely aligned with those of [27] and [1]. [27] has proposed a framework
in which the robot maintains two controllers, one for the simulator and another for the real world, and
aims to narrow the difference. [1] assumes a deterministic real world, constructs an optimal policy
based on the simulator’s deterministic model, and evaluates its performance in the real world, while
successively augmenting the simulator’s model with time-dependent corrections based on real-world
observations. Our method considers a stochastic system and the correction is not time-dependent.
3 Approach
We work with stochastic nonlinear dynamical systems, whose evolution is described by the stochastic
differential equation
dx = f (x, u) dt + F(x, u) dω
(1)
where the applied control u ∈ Rm transitions the state x ∈ Rn of the system according to a linear
sum of the system dynamics f ∈ Rn and state-dependent amplification F ∈ Rn×p of Brownian noise
increment dω ∈ Rp .
We assume that we have full knowledge of F and partial knowledge of f, e.g., from a simulator. We represent f as the sum of the simulated component fsim and an unknown residual ε,
f(x, u) = fsim(x, u) + ε(x, u).
Then, in a discrete setting with fixed step size δt, (1) can be rewritten as the following:
∆x = (fsim + ε)δt + F ξF √δt,   (2)
where ξF ∼ N(0, I_{p×p}) and √δt appears because the noise covariance increases linearly with time.
We omit x and u for clarity.
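As a concrete illustration of the discretization in (2) (not part of the original text), the following minimal Python sketch performs one Euler–Maruyama step; f_sim and F are placeholder callables for the simulator dynamics and the noise amplification matrix.

import numpy as np

def em_step(x, u, f_sim, F, dt, rng):
    """One Euler-Maruyama step of dx = f_sim(x,u) dt + F(x,u) dw."""
    Fxu = F(x, u)                                  # n x p noise amplification
    xi = rng.standard_normal(Fxu.shape[1])         # xi_F ~ N(0, I_pxp)
    return x + f_sim(x, u) * dt + Fxu @ xi * np.sqrt(dt)   # sqrt(dt) scales the Brownian increment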
Nonzero model bias results in a nonzero difference between the model's prediction fsim δt and the actual ∆x. From (2), the difference is equivalent to
∆x − fsim δt = ε δt + F ξF √δt.
In expectation,
E[ε δt] = E[∆x − fsim δt − F ξF √δt] = E[∆x − fsim δt],
as the mean of ξF is 0.
In order to correct the nonzero model bias, we approximate ε δt from data. From rollouts of trajectories, we collect a set of {x, u, ∆x − fsim δt} tuples. Assuming that the covariance of ε δt grows linearly with δt, we use a Gaussian Process (GP) [17], which estimates ε δt as a Gaussian distribution:
ε(x, u)δt ∼ N( ε̂(x, u)δt, Σ(x, u)δt ),
whose derivation we provide in Section 3.1. Let Σ(x, u) be decomposed into G(x, u) G(x, u)^T, with G ∈ R^{n×n}. Now (2) can be rewritten as the following:
∆x = (fsim + ε̂)δt + F ξF √δt + G ξG √δt,   (3)
where ξG ∼ N(0, I_{n×n}).
The original ILQG handles stochastic systems without uncertainty, but the above formulation makes
it straightforward to extend ILQG to incorporate uncertainty, which we refer to as Robust-ILQG (see
Section 3.2).
Our approach is summarized in Algorithm 1. Initially, with zero data and no policy, we start by using ILQG3 to obtain a locally optimal policy for the suboptimal model provided by fsim. Then, with rollouts of the computed policy, we compute the error between f and fsim, with which we train a GP that estimates ε. We run Robust-ILQG again, but with the updated dynamics. This process is repeated until the policy converges.
3 Robust-ILQG is equivalent to ILQG in the absence of ε.
Algorithm 1 GP-ILQG
Require: π, fsim, F, D
  if D ≠ ∅ then
    [ε̂, G] ← GP(D)
  end if
  if π == NIL then
    π ← Robust-ILQG(fsim + ε̂, F, G)
  end if
  while ∆π > γ do
    Collect {(x, u, ∆x − fsim δt)} with rollouts of π.
    D ← D ∪ {(x, u, ∆x − fsim δt)}
    [ε̂, G] ← GP(D)
    π ← Robust-ILQG(fsim + ε̂, F, G)
  end while
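The sketch below is a schematic Python rendering of Algorithm 1, not an implementation from the paper; gp_fit, robust_ilqg, rollout and policy_change are assumed placeholder callables for the GP regression, the Robust-ILQG solver, real-world data collection and the policy-change metric ∆π.

def gp_ilqg(f_sim, F, D, gp_fit, robust_ilqg, rollout, policy_change, gamma=1e-3):
    eps_hat, G = (None, None)
    if D:                                          # D != {}: fit the residual GP
        eps_hat, G = gp_fit(D)
    pi = robust_ilqg(f_sim, eps_hat, F, G)         # initial (possibly model-biased) policy
    delta_pi = float("inf")
    while delta_pi > gamma:
        D += rollout(pi)                           # collect (x, u, dx - f_sim*dt) tuples
        eps_hat, G = gp_fit(D)                     # re-fit residual mean and uncertainty
        pi_new = robust_ilqg(f_sim, eps_hat, F, G)
        delta_pi = policy_change(pi, pi_new)       # stop once the policy change is small
        pi = pi_new
    return pi, D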
Figure 2: Use of a Gaussian Process to correct an incorrect model. Near previous observations, the model's prediction is corrected to match the target. When the test input is far from the prior observations, predictions resort to the incorrect model. Shaded area indicates the 95% confidence interval. (Curves shown: Incorrect Model + GP, Incorrect Model, Target, Observation.)
One major advantage of our approach is that it is straightforward to use the improved model for
a new task. Given observations learned from a previous task, Algorithm 1 starts with an estimate of ε. Whenever the algorithm explores a previously visited state-control space, the learner corrects
any simulator error and provides smaller uncertainty covariance; in a new region, the algorithm
relies on the original model fsim with larger uncertainty covariance than the explored region. This
results in a more accurate model, and as Robust-ILQG takes into account uncertainty, it also results
in a more robust policy than those that rely only on the simulator or on a locally learned dynamics.
When using only a partially-correct simulated model, the policy generated is always limited by the
simulator’s accuracy; when using a locally learned dynamics, the robot knows very little outside
previously explored regions. Our algorithm, which combines both, quickly outperforms the simulator-only approach and requires less data than a purely data-driven one.
3.1 Gaussian Process with the Simulator as a Mean Function
Gaussian Process Regression [17] is a nonlinear regression technique that has been successfully used
in many model-based reinforcement learning approaches [4, 13]. A Gaussian Process is a stochastic
process in which any finite subset is jointly Gaussian. Given a set of inputs and corresponding outputs {x_i, y_i}_{i=1}^n, the distribution of {y_i}_{i=1}^n is defined by the covariance given by a kernel function k(x_i, x_j).
We use a variant of Gaussian Processes that uses a nonzero mean function. With f : X → Y as the mean function, the prediction for a test input x becomes the following:
E[y] = f(x) + k_{xX}^T K^{-1} (Y − f(X)),
var(y) = k_{xx} − k_{xX}^T K^{-1} k_{xX},
where k_{xX} is the covariance between the test input x and the training set X, and K is the covariance matrix among the elements of X. In this formulation, the GP provides a posterior distribution given f(x) and observations.
Using the simulator as the mean function for a Gaussian Process allows us to combine the simulator
and real-world observations smoothly. Near previous observations, the simulator’s prediction is
corrected to match the target. When the test input is far from the previous observations, the predictions
resort to the simulator. See Figure 2 for an illustration.
As we have both x and u as the input, we define x̃ = [x^T u^T]^T to be the input and ∆x to be the output, and use fsim as the mean function. Then, the GP serves to correct the error, ∆x − fsim δt. We use the ARD (Automatic Relevance Determination) squared exponential kernel function:
k(x̃_i, x̃_j) = σ_f² exp( −(x̃_i − x̃_j)^T Λ^{-1} (x̃_i − x̃_j) ),
where σf2 is the signal variance and Λ controls the characteristic length of each input dimension. These
hyperparameters are optimized to maximize log-likelihood of data using conventional optimization
methods. For multiple dimensions of Y, we train one GP per dimension and treat each dimension as independent.
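A minimal sketch of GP regression with a nonzero mean function, following the two prediction formulas above, is given below; the kernel hyperparameters and the jitter term are illustrative choices, not the paper's exact implementation.

import numpy as np

def ard_kernel(A, B, sf2, lengthscales):
    d = (A[:, None, :] - B[None, :, :]) / lengthscales
    return sf2 * np.exp(-np.sum(d**2, axis=-1))

def gp_predict(X, Y, f_mean, x_star, sf2=1.0, ell=None, noise=1e-6):
    ell = np.ones(X.shape[1]) if ell is None else ell
    K = ard_kernel(X, X, sf2, ell) + noise * np.eye(len(X))
    k_star = ard_kernel(X, x_star[None, :], sf2, ell)[:, 0]
    alpha = np.linalg.solve(K, Y - f_mean(X))           # K^{-1} (Y - f(X))
    mean = f_mean(x_star[None, :])[0] + k_star @ alpha  # posterior mean, anchored on f
    var = sf2 - k_star @ np.linalg.solve(K, k_star)     # posterior variance
    return mean, var

Using the simulator fsim as f_mean reproduces the behaviour of Figure 2: near data the posterior tracks the observations, far from data it falls back to the simulator.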
3.2 Robust-ILQG
In order to address both uncertainty and stochasticity in computing an optimal policy, we propose a
new algorithm, which we refer to as Robust Iterative Linear Quadratic Gaussian Control (Robust-ILQG). Robust-ILQG is an extension of the original ILQG [22], a variant of Differential Dynamic Programming (DDP) for stochastic systems. It approximates the value function to second order and the dynamics to first order. The main difference
between ILQG and Robust-ILQG is that the latter takes into account uncertainty in dynamics, as we
will see in the following derivation.
We use a discretized version of the continuous system, with a fixed step size δt. From the equation
∆x = f(x, u)δt + F(x, u) ξF √δt = (fsim + ε̂)δt + F ξF √δt + G ξG √δt,   (4)
the state x' at the next timestep can be written as
x' = x + (fsim + ε̂)δt + F ξF √δt + G ξG √δt.   (5)
Given a trajectory {x_0, u_0, · · · , x_T}, the total cost is given as
J(x_0) = E[ l_T(x_T) + Σ_{i=0}^{T−1} l(x_i, u_i) ],   (6)
where lT is the final cost and l is the running cost. Our objective is to find a deterministic policy
π : X → U that minimizes this total cost.
We consider the local variation of the value function with respect to changes in x and u. The Bellman equation gives us the following:
V(x + δx) = l(x + δx, u + δu*) + E[ V'(x' + δx'*) ],   (7)
where V is the value function at a particular timestep and V' is the value function at the next timestep. δx'* refers to the variation of the next state given an optimal control change δu*.
Analogous to ILQR, which does not have stochastic components, we can analytically derive the locally optimal improvement δu* by taking a first-order approximation of the dynamics and a second-order approximation of the value function.
The local deviation of the next state from its nominal (deterministic) state can be written as the following:
δx' = A δx + B δu + C ξF + D ξG,   (8)
where A and B linearize the deterministic component of the dynamics at the current time step, defined as
A = I + (fsim_x + ε̂_x)δt,   B = (fsim_u + ε̂_u)δt,
and C and D capture the deviation from the nominal (deterministic) state caused by the stochastic terms,
C = (F + Fx ⊗ δx + Fu ⊗ δu) √δt,
D = (G + Gx ⊗ δx + Gu ⊗ δu) √δt,
where ⊗ denotes the tensor-vector multiplication defined in Section 6, such that Fx ⊗ δx, Gx ⊗ δx are matrices of size n × p and Fu ⊗ δu, Gu ⊗ δu are of size n × m.
Assuming that the variation of the value at the next timestep can be written in a quadratic form,
V'(x' + δx') = s' + s'^T δx' + (1/2) δx'^T S' δx',   (9)
the local variation of the cost-to-go at the current time step can be written as⁴
V(x + δx) = l(x, u) + l_x^T δx + (1/2) δx^T l_xx δx + l_u^T δu + (1/2) δu^T l_uu δu + E[ V'(x' + δx') ].
Note that, when taking the expectation of s'^T δx', the stochasticity and uncertainty terms disappear as the means of ξF and ξG are zero, while for the second-order terms such as δx'^T (·) δx', their covariances must be taken into account.⁵
Expanding the above equation with the definitions of the partial derivatives,
V(x + δx) = l + s' + s'^T (Aδx + Bδu) + l_x^T δx + l_u^T δu + (1/2)[ (Aδx + Bδu)^T S' (Aδx + Bδu) + δx^T l_xx δx + δu^T l_uu δu ] + (1/2)[ tr(C C^T S') + tr(D D^T S') ],   (10)
where the trace terms can be re-written as the following:
tr(C C^T S') = tr( Σ_i (F^(i) + Fx^(i) + Fu^(i))(F^(i) + Fx^(i) + Fu^(i))^T S' ) = Σ_i (F^(i) + Fx^(i) + Fu^(i))^T S' (F^(i) + Fx^(i) + Fu^(i)),
where the superscript (i) denotes the i-th column of the corresponding matrix and the subscripts denote its partial derivatives.
4 We assume that l_ux is zero.
5 E.g., for a random variable ξ ∼ N(0, Σ), E[ξ^T S ξ] = tr(ΣS).
Then, combining the like terms together, (10) becomes
V(x + δx) = q + q^T δx + (1/2) δx^T Q δx + g^T δu + δx^T G^T δu + (1/2) δu^T H δu,
with
q (scalar) = l + s' + (δt/2) Σ_i F^(i)T S' F^(i) + (δt/2) Σ_j G^(j)T S' G^(j),
q^T = s'^T A + l_x^T + δt Σ_i F^(i)T S' Fx^(i) + δt Σ_j G^(j)T S' Gx^(j),
Q = A^T S' A + l_xx + δt Σ_i Fx^(i)T S' Fx^(i) + δt Σ_j Gx^(j)T S' Gx^(j),
g^T = s'^T B + l_u^T + δt Σ_i F^(i)T S' Fu^(i) + δt Σ_j G^(j)T S' Gu^(j),
G^T = A^T S' B + δt Σ_i Fx^(i)T S' Fu^(i) + δt Σ_j Gx^(j)T S' Gu^(j),
H = B^T S' B + l_uu + δt Σ_i Fu^(i)T S' Fu^(i) + δt Σ_j Gu^(j)T S' Gu^(j).
Minimizing this with respect to δu gives the optimal δu*:
δu* = −H^{-1} g − H^{-1} G δx.
Plugging this δu* back into (10), we get a quadratic form of V(x + δx):
V(x + δx) = s + s^T δx + (1/2) δx^T S δx,
with
S = Q − G^T H^{-1} G,
s = q − G^T H^{-T} g   (vector term),
s = q − (1/2) g^T H^{-T} g   (scalar term).
Note that, in the absence of uncertainty terms, this is equivalent to iLQG as introduced in [22], and further, in the absence of both uncertainty and stochasticity, this is equivalent to iLQR [9].
To make a local improvement of the nominal trajectory, we perform a backward pass to update the value function and the optimal δu* for each timestep, starting with S_T = l_xx^(T), s_T = l_x^(T), s_T = l_T. During a forward pass, the nominal trajectory is updated by applying δu = −αH^{-1}g − H^{-1}Gδx to the deterministic part of the system. We use backtracking line-search to find α that minimizes the total cost.
For a long-horizon task, Model Predictive Control can be used with Robust-ILQG, which is to run Robust-ILQG repeatedly over a short horizon. We use this approach in our second experiment in Section 4.2.
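The following is a condensed Python sketch of the backward pass described above. For brevity the derivative terms Fx, Fu, Gx, Gu are dropped, so only the zeroth-order stochasticity and uncertainty columns F and G enter through the trace terms; it is an illustrative simplification, not the paper's full recursion.

import numpy as np

def backward_pass(A, B, F, G, lx, lu, lxx, luu, l, lT_x, lT_xx, lT, dt):
    T = len(A)
    S, s_vec, s_sc = lT_xx, lT_x, lT                     # terminal conditions
    k_ff, K_fb = [None] * T, [None] * T
    for t in reversed(range(T)):
        q_sc = l[t] + s_sc + 0.5 * dt * (np.trace(F[t].T @ S @ F[t])
                                         + np.trace(G[t].T @ S @ G[t]))
        q = A[t].T @ s_vec + lx[t]
        Q = A[t].T @ S @ A[t] + lxx[t]
        g = B[t].T @ s_vec + lu[t]
        Gm = B[t].T @ S @ A[t]                           # cross term (G in the text)
        H = B[t].T @ S @ B[t] + luu[t]
        Hinv = np.linalg.inv(H)
        k_ff[t], K_fb[t] = -Hinv @ g, -Hinv @ Gm         # delta_u = k_ff + K_fb @ delta_x
        S = Q - Gm.T @ Hinv @ Gm
        s_vec = q - Gm.T @ Hinv @ g
        s_sc = q_sc - 0.5 * g @ Hinv @ g
    return k_ff, K_fb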
4 Experiments and Analysis
We consider two simulated tasks: cart-pole swing-up and quadrotor control. For each task, one set
of model parameters was used as the “simulator”, and another set of model parameters was used
7
as the “real-world.” Multiplicative noise was added to dynamics and Gaussian noise was added to
observation. We compare GP-ILQG’s performance with three optimal control algorithms: (1) ILQG
using the “real-world” model, (2) ILQG using the incorrect “simulator” model, (3) Probabilistic DDP
(PDDP) [13], which is a variant of ILQG that relies only on data. For both GP-ILQG and PDDP, the
same set of data and same GP implementation were used at each iteration.
As data gets large, computing Gaussian Process becomes computationally expensive, and thus
it is common to use a subset of the data for computing the mean and covariance [17]. In our
experiments, we keep all observations and uniformly sub-sample 300 data points at each iteration6 .
GP hyperparameters are optimized with the Gaussian Process for Machine Learning Toolbox [18].
If the learner’s prediction error for a validation set is higher than that of its previous learner, we
re-sample and re-train.
Figure 3: Cartpole swing-up. (a) Task 1 cost; (b) Task 2 cost; (c) Task 1 control (torque) trajectory. Curves shown: GP-ILQG, PDDP, Correct Model, Incorrect Model; normalized cost is plotted against training iterations, torque (N) against timesteps (0.04 s/step).
4.1 Cart-Pole Swing-Up
In the cart-pole problem, the state is defined as [x, ẋ, θ, θ̇], where x is the position of the cart along
the x−axis and θ is the angle of the pole from the vertical downright position. Control input is the
x-directional force (N).
The model parameters for the “simulator” and “real-world” model are given in Section 6. The
real-world model has a 30% longer pole.
We run two tasks in this experiment. In the first task, the initial state is [0, π/4, 0, 0]. Figure 3(a) shows the normalized cost for the first task. While both GP-ILQG and PDDP converge to the optimal performance, GP-ILQG converges much more quickly, within the first 2 iterations.
The difference between GP-ILQG and PDDP is more noticeable in the second task (Figure 3(b)),
which starts from a different initial state [0, −π/4, 0, 0]. Both GP-ILQG and PDDP use the learner
used in the previous task, but the initial cost for PDDP is significantly higher than GP-ILQG. We
believe that this is because both algorithms explore an unexplored region in the first few iterations.
While GP-ILQG relies on the simulator’s inaccurate model in the unexplored region, PDDP has no
information to make meaningful advancement until enough data is collected.
What is more noticeable is the improved performance of GP-ILQG over the simulator-based ILQG.
The latter’s suboptimal policy results in significantly higher cost in general. Figure 3(c) shows the
control sequences generated by the final policies of the four algorithms. GP-ILQG’s control sequence
is almost identical to the optimal sequence, and PDDP closely follows the optimal sequence as well.
However, the simulator-based ILQG’s control sequence is quite different due to its incorrect model.
6 For better results, more advanced techniques such as Sparse Pseudo-input GP [19] or Sparse Spectral Gaussian Process [16] can be used.
Figure 4: Quadrotor control. (a) Task 1 cost; (b) Task 2 cost; (c) Task 1 height (z) trajectory. Curves shown: GP-ILQG, Correct Model, Incorrect Model; normalized cost is plotted against iterations, height (m) against timesteps (0.02 s/step).
4.2 Quadrotor
We use the quadrotor model introduced in [23]. The model has a 12-dimensional state, x = [p, v, r, w]^T, where p (m) and v (m/s) refer to the quadrotor's position and velocity in 3D space, r is the orientation (rotation about axis r by angle ‖r‖), and w (rad/s) is the angular velocity. It has 4 control inputs, u = [u1, u2, u3, u4]^T, which represent the force (N) exerted by the four rotors. The dynamics is given as the following:
ṗ = v,
v̇ = −g e3 + ( (Σ_i ui) exp([r]) e3 − kv v ) / m,
ṙ = w + (1/2)[r]w + (1 − (1/2)‖r‖ / tan((1/2)‖r‖)) [r]² w / ‖r‖²,
ẇ = J^{-1}( ρ(u2 − u4) e1 + ρ(u3 − u1) e2 + km(u1 − u2 + u3 − u4) e3 − [w] J w ),
where ei are the standard basis vectors, g = 9.8 m/s² is gravity, kv is the drag coefficient of the rotors, m (kg) is the mass, J (kg m²) is the moment of inertia matrix, ρ (m) is the distance between the center of mass and the center of the rotors, and km is a constant relating the force of the rotors to their torque. [·] refers to the skew-symmetric cross-product matrix. The model parameters used are included in Section 6.
The real-world model is 40% heavier than the simulator’s model.
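As an illustration (not code from the paper), a NumPy implementation of the quadrotor dynamics, following the equations as reconstructed above and using the "real world" parameters of Section 6.2.2, could look as follows.

import numpy as np
from scipy.linalg import expm

def skew(a):
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0.0]])

def quadrotor_dynamics(x, u, m=0.7, kv=0.15, km=0.025, rho=0.17, J=0.05 * np.eye(3), g=9.8):
    p, v, r, w = x[0:3], x[3:6], x[6:9], x[9:12]
    e1, e2, e3 = np.eye(3)                         # standard basis vectors
    nr = np.linalg.norm(r)
    p_dot = v
    v_dot = -g * e3 + (np.sum(u) * expm(skew(r)) @ e3 - kv * v) / m
    if nr < 1e-8:                                  # small-angle limit of the r-kinematics
        r_dot = w
    else:
        r_dot = (w + 0.5 * skew(r) @ w
                 + (1 - 0.5 * nr / np.tan(0.5 * nr)) * (skew(r) @ skew(r) @ w) / nr**2)
    w_dot = np.linalg.solve(J, rho * (u[1] - u[3]) * e1 + rho * (u[2] - u[0]) * e2
                            + km * (u[0] - u[1] + u[2] - u[3]) * e3 - skew(w) @ (J @ w))
    return np.concatenate([p_dot, v_dot, r_dot, w_dot])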
We evaluate the performance of two tasks. The quadrotor starts at a position near (0m, 0m, 5m)
with zero velocity. In the first task, the goal is to move the quadrotor forward to (4m, 0m, 5m) in 4
seconds. The second task is to drive the quadrotor to (2m, 1m, 7m) in 4 seconds. The cost function
was set to track a straight line from the initial position to the goal, with higher cost for maintaining
the height.
In this experiment, we were not able to run PDDP to convergence with the same data used in GP-ILQG. We believe that this arises from the same problem we saw in the second task of cart-pole: PDDP has insufficient data to infer the unexplored state-space. We note that the original PDDP algorithm requires random trajectories as its initial data set instead of random variations of a single nominal trajectory. While our experiment does not indicate that PDDP is incapable of this task7, it highlights our algorithm's data efficiency. Even with the small set of task-specific data, GP-ILQG converges in the first two iterations in the initial task (Figure 4(a)) and converges immediately to the optimal policy in the second task (Figure 4(b)). Figure 4(c) compares the trajectories generated by the
three algorithms. It shows that while our algorithm closely tracks the desired height, the simulator’s
suboptimal controller fails to recover from the vertical drop due to its incorrect mass model.
7 A similar experiment with quadrotor control was shown to be successful in [15].
5 Conclusion
In this paper, we proposed a novel algorithm that combines real-world data with a simulator’s model to
improve real-world performance of simulation-based optimal control. Our approach uses a Gaussian
Process to correct a simulator’s nonlinear model bias beyond the scope of its model space while
incorporating the uncertainty of our estimate in computing a robust optimal control policy. Through
simulated experiments, we have shown that our approach converges to the optimal performance
within a few iterations and is capable of generalizing the learned dynamics for new tasks.
Although our algorithm is capable of correcting significant model errors, it is limited by the quality
of the initial policy based on the simulator’s incorrect model. For example, the simulator’s model
can be sufficiently different from the true model such that the initial policy results in catastrophic
damage to the robot. Our algorithm is incapable of measuring this initial uncertainty, although it can
be improved by providing an initial set of expert-generated trajectories. We leave this as a future
research direction.
6 Appendix
6.1 Stochasticity and uncertainty terms
We define the partial derivatives for the stochasticity and uncertainty terms as the following:
Fx ⊗ δx ≜ [ Fx^(1) δx, ..., Fx^(p) δx ],   (11)
Gx ⊗ δx ≜ [ Gx^(1) δx, ..., Gx^(n) δx ],   (12)
where Fx^(i), Gx^(j) refer to the partial derivatives of the i-th and j-th columns of F and G with respect to x.
6.2 Models used in experiments
We have used the following model parameters for the cart-pole and quadrotor experiments. “Simulator”
refers to the incorrect model in the simulator. “Real World” is the model used during rollouts and
policy evaluation; its model parameters are not revealed to our algorithm.
6.2.1 Cartpole

Parameter     Simulator   Real World
Cart Mass     1 kg        1 kg
Pole Mass     1 kg        1 kg
Pole Length   1 m         1.3 m

6.2.2 Quadrotor
kv is a constant relating the velocity to an opposite force, caused by rotor drag and induced inflow. m
(kg) is the mass, J (kg m2 ) is the moment of inertia matrix, ρ (m) is the distance between the center
of mass and the center of the rotors.
Parameter   Simulator   Real World
kv          0.15        0.15
km          0.025       0.025
m           0.5         0.7
J           0.05 I      0.05 I
ρ           0.17        0.17
References
[1] P. Abbeel, M. Quigley, and A. Y. Ng. Using inaccurate models in reinforcement learning. In
W. W. Cohen and A. Moore, editors, Proceedings of the 23th International Conference on
10
Machine Learning (ICML-06), pages 1–8, 2006. URL http://www.machinelearning.org/
proceedings/icml2006/001_Using_Inaccurate_Mod.pdf.
[2] C. G. Atkeson, A. W. Moorey, and S. Schaalz. Locally weighted learning. Artif Intell Rev, 11
(1-5):11–73, 1997.
[3] A. Boeing and T. Bräunl. Leveraging multiple simulators for crossing the reality gap. In
Control Automation Robotics & Vision (ICARCV), 2012 12th International Conference on,
pages 1113–1119. IEEE, 2012.
[4] M. Deisenroth and C. E. Rasmussen. Pilco: A model-based and data-efficient approach to policy
search. In Proceedings of the 28th International Conference on machine learning (ICML-11),
pages 465–472, 2011.
[5] D. Huh and E. Todorov. Real-time motor control using recurrent neural networks. In Adaptive
Dynamic Programming and Reinforcement Learning, 2009. ADPRL’09. IEEE Symposium on,
pages 42–49. IEEE, 2009.
[6] D. H. Jacobson and D. Q. Mayne. Differential dynamic programming. 1970.
[7] J. Kober, J. A. Bagnell, and J. Peters. Reinforcement learning in robotics: A survey. The
International Journal of Robotics Research, 32(11):1238–1274, 2013.
[8] S. Levine and V. Koltun. Guided policy search. In ICML (3), pages 1–9, 2013.
[9] W. Li and E. Todorov. Iterative linear quadratic regulator design for nonlinear biological
movement systems.
[10] D. Mitrovic, S. Klanke, and S. Vijayakumar. Adaptive optimal feedback control with learned
internal dynamics models. In From Motor Learning to Interaction Learning in Robots, pages
65–84. Springer, 2010.
[11] I. Mordatch, N. Mishra, C. Eppner, and P. Abbeel. Combining model-based policy search with
online model learning for control of physical humanoids. In Robotics and Automation (ICRA),
2016 IEEE International Conference on, pages 242–248. IEEE, 2016.
[12] K. S. Narendra and K. Parthasarathy. Identification and control of dynamical systems using
neural networks. IEEE Transactions on neural networks, 1(1):4–27, 1990.
[13] Y. Pan and E. Theodorou. Probabilistic differential dynamic programming. In Advances in
Neural Information Processing Systems, pages 1907–1915, 2014.
[14] Y. Pan and E. A. Theodorou. Data-driven differential dynamic programming using gaussian
processes. In American Control Conference (ACC), 2015, pages 4467–4472. IEEE, 2015.
[15] Y. Pan, X. Yan, E. Theodorou, and B. Boots. Scalable reinforcement learning via trajectory
optimization and approximate gaussian process regression. NIPS Workshop on Advances in
Approximate Bayesian Inference, 2015.
[16] J. Quiñonero-Candela, C. E. Rasmussen, A. R. Figueiras-Vidal, et al. Sparse spectrum gaussian
process regression. Journal of Machine Learning Research, 11(Jun):1865–1881, 2010.
[17] C. E. Rasmussen. Gaussian processes for machine learning. 2006.
[18] C. E. Rasmussen and H. Nickisch. Gaussian processes for machine learning (gpml) toolbox.
Journal of Machine Learning Research, 11(Nov):3011–3015, 2010.
[19] E. Snelson and Z. Ghahramani. Sparse gaussian processes using pseudo-inputs. Advances in
neural information processing systems, 18:1257, 2006.
[20] A. A. Stoorvogel. The h-infinity control problem: A state space approach. 1993.
[21] J. Tan, Z. Xie, B. Boots, and C. K. Liu. Simulation-based design of dynamic controllers for
humanoid balancing. In Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International
Conference on, pages 2729–2736. IEEE, 2016.
[22] E. Todorov and W. Li. A generalized iterative lqg method for locally-optimal feedback control of
constrained nonlinear stochastic systems. In American Control Conference, 2005. Proceedings
of the 2005, pages 300–306. IEEE, 2005.
[23] J. van den Berg. Extended lqr: Locally-optimal feedback control for systems with non-linear
dynamics and non-quadratic cost. In Robotics Research, pages 39–56. Springer, 2016.
[24] J. M. Wang, D. J. Fleet, and A. Hertzmann. Optimizing walking controllers for uncertain inputs
and environments. In ACM Transactions on Graphics (TOG), volume 29, page 73. ACM, 2010.
[25] A. Yamaguchi and C. G. Atkeson. Differential dynamic programming with temporally decomposed dynamics. In Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International
Conference on, pages 696–703. IEEE, 2015.
[26] A. Yamaguchi and C. G. Atkeson. Neural networks and differential dynamic programming for
reinforcement learning problems. In Robotics and Automation (ICRA), 2016 IEEE International
Conference on, pages 5434–5441. IEEE, 2016.
[27] J. C. Zagal, J. Ruiz-del Solar, and P. Vallejos. Back to reality: Crossing the reality gap in
evolutionary robotics. In IAV 2004 the 5th IFAC Symposium on Intelligent Autonomous Vehicles,
Lisbon, Portugal, 2004.
[28] T. Zhang, G. Kahn, S. Levine, and P. Abbeel. Learning deep control policies for autonomous
aerial vehicles with mpc-guided policy search. In Robotics and Automation (ICRA), 2016 IEEE
International Conference on, pages 528–535. IEEE, 2016.
On Bounds of Spectral Efficiency of Optimally
Beamformed NLOS Millimeter Wave Links
arXiv:1708.04257v2 [] 23 Nov 2017
Rakesh R T, Student Member, IEEE,
Debarati Sen, Member, IEEE, Goutam Das
Abstract—Beamforming is an indispensable feature for millimeter wave (mmWave) wireless communications in order to
compensate for the severe path loss incurred due to high
frequency operation. In this paper, we introduce a novel framework to evaluate the spectral efficiency (SE) of non-line-ofsight (NLOS) mmWave links with optimal analog beamforming.
Optimality here implies the joint selection of antenna beams at
the transmitter and receiver which simultaneously maximize the
received power. We develop a mathematical framework based
on the extended Saleh-Valenzuela channel model to embody the
impact of optimal analog beamforming into the performance
metrics for NLOS mmWave links. Practical mmWave channels
are characterized by sparsity in terms of number of multi-path
components; we exploit this feature to derive upper and lower
bounds on SE of beamformed directional links. Simulation results
reveal that the proposed approach is fairly accurate to model
beamformed links in most practical operating scenarios. We also
study the impact of overhead due to antenna beam training on
the throughput (TP) of a link and obtain an approximate solution
for optimal antenna half power beamwidth which maximizes TP.
Index Terms—MmWave Communication, Directional Antenna,
Optimal Analog Beamforming, Spectral Efficiency.
I. INTRODUCTION
RECENT advances in technology have paved the way
for emergence of wideband millimeter wave (mmWave)
communications providing a viable option to meet the future demand for multi-Gbps data rates [1]. However, high
frequency mmWave transmission incurs significantly large
path loss during signal propagation, and thereby limits the
transmission range. To overcome this bottleneck, directional
antennas with beamforming capability are employed for signal
transmission and/or reception [2]. The objective of beamforming protocol is to steer the antenna beams at the transmitter
and receiver nodes of a link such that the transmission rate
is maximized [2]. This is achieved by optimizing the signalto-noise ratio (SNR) or signal-to-interference plus noise ratio
(SINR) [3] at the receiver.
Beamforming protocols essentially enable spatial filtering
of multi-path signal components based on the defined optimality criteria [3]. The quality and reliability of the link
therefore depends on the beamformed directional channel and
in this context, statistical modeling of beamformed directional
channels is essential to accurately obtain mmWave network
performance metrics such as coverage probability, spectral
efficiency (SE) etc. The schemes proposed in [4], [5] which evaluate the performance of mmWave networks with analog beamforming [3] simply model the beamformed directional channel by a random gain component assuming that the channel is frequency flat. This is similar to the model used for conventional sub-6 GHz systems where the channel gain is obtained as the product of a Rayleigh or Nakagami-m random variable, which accounts for the small scale fading effect, and a path loss term that models the large scale fading effect. Similarly, a recent work on coverage analysis for mmWave line-of-sight (LOS) links with analog beamforming [6] approximates the beamformed directional channel by a random gain component based on the uniformly random single path (UR-SP) assumption. However, in non-LOS (NLOS) mmWave channels the power contents of the multi-path components [7] are comparable, and thus the modeling approaches considered for beamformed directional channels in the existing literature are not applicable. Therefore, a new mathematical framework is required which embodies the impact of optimal beamforming for performance study of NLOS mmWave links.
Copyright ©2015 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to [email protected].
The authors are with G. S. Sanyal School of Telecommunications, Indian Institute of Technology, Kharagpur, W.B., India. E-mail: [rakeshrt, debarati, gdas]@gssst.iitkgp.ernet.in
In this paper, we develop a mathematical framework to
statistically model NLOS mmWave links with optimal analog
beamforming in order to evaluate the SE of noise limited
NLOS mmWave links. We assume that the optimal transmitter-receiver antenna beam pair is chosen from a set of non-overlapping antenna beams spanning the 360° azimuth space
such that the received signal power is maximized. The omnidirectional propagation characteristics of the channel is represented by the extended Saleh-Valenzuela (S-V) spatial channel
model [7]–[9]. We utilize this model to derive lower and upper
bounds on SE of optimally beamformed mmWave links. We
further note that SE of a noise limited link can be enhanced
by using high resolution antenna beams albeit at the cost
of significant training overhead due to the associated analog
beamforming protocol [10]. The trade-off between training
overhead and throughput (TP) for indoor mmWave networks
is investigated through simulations in [10]. We propose a
mathematical framework to quantify TP as a function of SE
and training overhead. Moreover, the analysis also helps to
determine requirements for the design of antenna beamforming protocols in terms of an optimal antenna half power
beamwidth (HPBW) which maximizes TP and specifies the
feasible region of operation in terms of antenna HPBW so
that links are able to identify optimal antenna beam pairs. The
paper has three main contributions: (i) we introduce a novel
modeling approach to study the statistical behavior of optimal
analog beamforming in NLOS mmWave links, (ii) we obtain
tractable lower and upper bounds on SE of a NLOS mmWave
link utilizing the sparsity in practical mmWave channels, and
(iii) we provide a design insight for mmWave communication
systems by obtaining an approximate solution for optimal
antenna HPBW which maximizes TP for a given analog
beamforming protocol under a set of channel parameters.
II. SYSTEM MODEL
As shown in Fig. 1, we consider a system model consisting
of an outdoor mmWave link with the transmitter and receiver
nodes separated by a distance d. The nodes are assumed
to be equipped with directional antennas with beamforming
capability. We further assume that direct LOS connectivity
between the transmitter and receiver is blocked and hence
the beamformed link is established through NLOS multi-path
components (Fig. 1). We approximate the antenna radiation
pattern by a sectored model [4] with zero side lobe gain.
Let θ3dB,t and θ3dB,r (in degrees) denote the antenna HPBW of the transmitter and receiver, respectively. The transmitter and receiver main lobe antenna gain values can approximately be calculated as Gm,t = 360/θ3dB,t and Gm,r = 360/θ3dB,r [4],
respectively. We further assume that the nodes select the
optimal antenna beam pair that maximizes the SNR at the
receiver node (out of Mt and Mr number of non-overlapping
beams at the transmitter and the receiver nodes, respectively).
Fig. 1: A typical node deployment scenario.
We adopt a frequency flat equivalent of the extended S-V channel model for our analysis. The total power received from L multi-path components can thus be expressed as
P = PT c d^{−α} Σ_{l=1}^{L} |h_l|² GT(Θref − Θl) GR(Φref − Φl),   (1)
where PT represents the transmit power, c denotes the intercept point from the path loss formula, and α denotes the
path loss exponent. |hl | is the small scale fading amplitude
which is generally modeled as a Rayleigh or Rice random
variable [7], [8]. GT (.) and GR (.) represent the antenna gain
of the transmitter and receiver antennas respectively with
corresponding antenna pointing angles Θref and Φref . Θref
and Φref are defined as the angle between the maximal gain
direction of the antenna main lobe and the line segments Tx-A
and Rx-C at the transmitter and receiver, respectively, as shown
in Fig. 1. Θl and Φl are the angle of departure (AOD) and
angle of arrival (AOA) of the l-th multi-path component. The
number of multi-path components denoted by L is a random
variable with its average value denoted by λ0 [7]. Assuming
a sectored radiation pattern model and unit transmit power, the signal power received by the i-th antenna beam pair can be obtained from (1) as Pi = c d^{−α} Σ_{l∈Li} |h_l|² Gm,t Gm,r,
where Li denotes the set of multi-path components which
are located inside the antenna main lobes of the transmitter
and receiver corresponding to the i-th antenna beam pair.
We assume that the cardinality of the set Li (card(Li )) is
a Poisson random variable with average number of multi-path
components λd = λ0 /B, where B denotes the total number
of available transmitter-receiver beam pairs (B = Mt Mr ). It
should be noted that in practice the average received signal
power varies with antenna beam orientation angle [7], and
therefore λd as well as α are functions of antenna beam
orientation angle. As of now due to lack of availability
of empirical data to capture this variation, we assume λd
and α to be constant [7]–[9] which incidentally also lends
mathematical tractability for analysis. It may also be noted
that λd and α could be obtained by making use of the
analytical model reported in our prior work [11]. However,
this modeling approach is presently out of scope of this paper.
In this paper, the small scale fading gain |h_l| is assumed to be Nakagami-m distributed with mean power equal to 1/λ0, which ensures that E_{L,|h_l|²}[ Σ_{l=1}^{L} |h_l|² ] ≈ 1, where E[.] denotes the expectation operator. The received power corresponding to the i-th antenna beam pair can thus be expressed as Pi = c d^{−α} λ0^{−1} Σ_{l∈Li} |g_l|² Gm,t Gm,r, with E[ |g_l|² ] = 1.
l∈Li
In practice mmWave multi-path components are sparse in
time as well as the angular dimension [8]. Consequently,
the probability of receiving multiple propagation components
inside the antenna main lobe is negligible and therefore with
most of the practical antenna radiation patterns, card(Li ) ≤
1, ∀i. Therefore, the presence of the multi-path component
inside a pair of antenna beams can be modeled by a Bernoulli
random variable with success probability p. The value of p can
be computed as, p = 1 − exp(−λd ). Based on this approximation, the received power corresponding to i-th antenna beam
2
−α
pair is simplified as Pi = Πi (p) |g| Gm,t Gm,r λ−1
,
0 cd
where Πi (.) denotes the Bernoulli random variable corresponding to the i-th antenna beam pair with success probability
p, i.e., Πi (p) = 1 with probability p; Πi (p) = 0 with
probability 1−p. The optimal transmitter and receiver antenna
beams (thick lined sectors in Fig. 1) are jointly selected based
on the maximum received signal power criteria, and therefore
the optimal received signal power is calculated as,
Popt = max (P1 , P2 , ..., PB ) .
(2)
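The following short Monte Carlo sketch (not part of the paper) illustrates this system model: paths fall into one of B beam pairs with Poisson counts, Nakagami-m power gains are drawn per path, and the pair with the largest received power is selected as in (2). It assumes Mt = Mr = √B, so that the sectored gains are Gm,t = Gm,r = √B; all numeric inputs are placeholders.

import numpy as np

def optimal_power(B, lam0, m, c_dalpha, rng):
    Gm = np.sqrt(B)                          # Gm,t = Gm,r = 360/theta_3dB = Mt = sqrt(B)
    lam_d = lam0 / B                         # average number of paths per beam pair
    L_i = rng.poisson(lam_d, size=B)         # card(L_i) for every beam pair
    P = np.array([c_dalpha * Gm * Gm *
                  rng.gamma(m, 1.0 / (m * lam0), size=n).sum() for n in L_i])
    return P.max()                           # Eq. (2): jointly pick the best beam pair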
III. CALCULATION OF SPECTRAL EFFICIENCY
In this section, we first derive an upper bound on SE
of an optimally beamformed mmWave NLOS link (included
in Section III-A) by assuming Nakagami-m fading for each
multi-path component. In addition, we present simplified upper and lower bounds on SE in Section III-A. Finally, we obtain
an expression for link throughput in Section III-B which
determines the fraction of SE useful for communication after
accounting for the antenna beam training overhead.
A. SE under the extended S-V channel model with the assumption of Nakagami-m distributed |g|
The variability in the power received by each antenna beam pair is essentially due to the parameters |g|² and Πi(p). Therefore, the power maximization in (2) is equivalent to the calculation of the normalized received signal power corresponding to the optimal antenna beam pair, i.e., P'opt = max(P'1, P'2, ..., P'B), where P'i = Πi(p)|g|², i ∈ {1, ..., B}. In this section, we first evaluate the cumulative distribution function (CDF) of P'opt. We note that P'i for the i-th antenna beam pair is a mixed random variable, since |g|² and Πi(p) are continuous and discrete random variables, respectively. Accordingly, P'i is a continuous random variable if Πi(p) = 1, and a discrete random variable if Πi(p) = 0. This condition also implies that P'opt is a continuous random variable if ∃i ∈ {1, ..., B} with Πi(p) = 1. Therefore, we proceed with the derivation of the CDF of P'opt in two exclusive parts; the first deals with the continuous case (∃i ∈ {1, ..., B} with Πi(p) = 1) and the second deals with the discrete case (Πi(p) = 0, ∀i ∈ {1, ..., B}) only. Hence, the CDF of P'opt, given ∃i ∈ {1, ..., B} with Πi(p) = 1, is
F_{P'opt}(P*) = Prob[ (P'1 ≤ P*) ∩ (P'2 ≤ P*) ∩ ... ∩ (P'B ≤ P*) | ∃i ∈ {1, ..., B} with Πi(p) = 1 ],   (3)
where Prob(.) represents the probability of the given event. Since P'1, ..., P'B are independent and identically distributed, Prob(P'1 ≤ P*) = ... = Prob(P'B ≤ P*) = Prob(P' ≤ P*). Hence, (3) can be simplified based on Bayes' rule as
F_{P'opt}(P*) = [ Σ_{i=1}^{B} C(B,i) (1 − p)^{B−i} p^i Prob(P' ≤ P*)^i ] / [ 1 − (1 − p)^B ],   (4)
where C(B,i) denotes the binomial coefficient. Without loss of generality, we calculate Prob(P'i ≤ P*) using the probability density function (PDF) of a Gamma random variable X, defined as fX(x) = m^m x^{m−1} e^{−mx} / Γ(m). The CDF of P'i, given ∃i with Πi(p) = 1, is calculated as
Prob(P'i ≤ P*) = ∫_0^{P*} m^m x^{m−1} e^{−mx} / Γ(m) dx = γ(m, mP*) / Γ(m),   (5)
where γ(x, y) denotes the lower incomplete Gamma function with parameters x and y. Substituting Prob(P'i ≤ P*) in (4) results in
F_{P'opt}(P*) = (1 − p)^B { [ 1 + (p/(1 − p)) γ(m, mP*)/Γ(m) ]^B − 1 } / [ 1 − (1 − p)^B ].   (6)
We note that (6) is intractable owing to the incomplete Gamma function. For further simplification for the computation of SE, we explore the possibility to approximate γ(m, mP*)/Γ(m) and [ 1 + (p/(1 − p)) γ(m, mP*)/Γ(m) ]^B. Since P* varies from 0 to ∞, only loose approximations are possible for [ 1 + (p/(1 − p)) γ(m, mP*)/Γ(m) ]^B which can aid the evaluation of SE. Also, due to the possibly large values for B (for example, antenna HPBWs of 33° and 15° at the transmitter and receiver correspond to B = 121 and B = 625, respectively), any approximation for the incomplete Gamma function may lead to significant error in F_{P'opt}(P*). The only option is to minimize the error, and therefore we apply a tighter approximation, γ(m, mP*)/Γ(m) ≤ (1 − e^{−aP*})^m with a = mΓ(m + 1)^{−1/m} [4]. The bound on F_{P'opt}(P*) is therefore achieved by introducing this approximation in (6). Further, the discrete probability component of the CDF of P'opt is determined by the condition Πi(p) = 0, ∀i ∈ {1, ..., B}. Hence, Prob(Πi(p) = 0, ∀i ∈ {1, ..., B}) = (1 − p)^B. The PDF of P'opt, f_{P'opt}(P*), is obtained as
f_{P'opt}(P*) ≤ [ m a p B (1 − p)^{B−1} / (1 − (1 − p)^B) ] (1 − e^{−aP*})^{m−1} [ 1 + (p/(1 − p))(1 − e^{−aP*})^m ]^{B−1} e^{−aP*}.   (7)
SE is calculated using the following formula,
SE = E[ ln(1 + ρ P'opt) ] = (1 − p)^B ln(1) + [ 1 − (1 − p)^B ] ∫_0^∞ ln(1 + ρP*) f_{P'opt}(P*) dP*,   (8)
where ρ = λ0^{−1} Gm,t Gm,r c d^{−α} / σ², with σ² denoting the noise power. The upper bound on SE is obtained by substituting (7) in (8),
SE ≤ ∫_0^∞ ln(1 + ρP*) m a p B (1 − p)^{B−1} (1 − e^{−aP*})^{m−1} [ 1 + (p/(1 − p))(1 − e^{−aP*})^m ]^{B−1} e^{−aP*} dP*.   (9)
The integration in (9) can be evaluated by replacing m with m̂ = ⌊m⌋. Hence, (9) is modified into
SE ≤ â m̂ (1 − p)^B Σ_{i=1}^{B} C(B,i) i (p/(1 − p))^i Σ_{j=0}^{m̂i−1} C(m̂i−1, j) (−1)^j ∫_0^∞ ln(1 + ρP*) e^{−â(1+j)P*} dP*
   = â m̂ (1 − p)^B Σ_{i=1}^{B} C(B,i) i (p/(1 − p))^i Σ_{j=0}^{m̂i−1} C(m̂i−1, j) (−1)^j [ e^{â(1+j)/ρ} / (â(1+j)) ] E1( â(1+j)/ρ ),   (10)
where E1(.) denotes the exponential integral function and â = m̂ Γ(m̂ + 1)^{−1/m̂}. To provide further insights into the system design, we evaluate simplified upper and lower bounds for SE. The upper bound on SE is derived by substituting m = 1 in (7) (equivalent to the Rayleigh assumption for |g|), i.e.,
f_{P'opt}(P*) = [ pB / (1 − (1 − p)^B) ] e^{−P*} (1 − p e^{−P*})^{B−1} ≤ [ pB / (1 − (1 − p)^B) ] exp(−λ0 e^{−P*}) e^{−P*}.   (11)
The last step in (11) is obtained from p ≈ λ0/B (for large B), followed by the relation (1 − (λ0/B)x)^{B−1} ≤ e^{−λ0 x} [12]. Applying the inequality exp(−λ0 e^{−P*}) ≤ 1 − (1 − e^{−λ0}) e^{−P*} in (11), the upper bound on SE is evaluated using (8) as
SE ≤ pB [ e^{1/ρ} E1(1/ρ) − ((1 − e^{−λ0})/2) e^{2/ρ} E1(2/ρ) ].   (12)
A closed form lower bound on SE can be derived by ignoring the small scale fading of the individual multi-path components. Based on this simplification, P'opt becomes a discrete random variable. Specifically, P'opt = 1 with probability 1 − (1 − p)^B and 0 with probability (1 − p)^B. Therefore, the lower bound on SE is determined as
SE ≥ (1 − p)^B ln(1) + [ 1 − (1 − p)^B ] ln(1 + ρ) = [ 1 − (1 − p)^B ] ln(1 + ρ).   (13)
Interestingly, the bounds expressed in (12) and (13) can be simplified further for highly sparse mmWave channels. Such channels are envisaged when the transmitter-receiver distance d is fairly large; in fact, it has been reported that the number of detectable multi-path components at the receiver decreases with transmission distance (since the power level of most multi-path components is below the noise floor due to excessive propagation loss at mmWave frequencies) [13]. Based on the inequality e^x E1(x) ≤ ln(1 + 1/x) [12] and small λ0, (12) is approximated as SE ≤ pB ln(1 + ρ) ≤ λd B ln(1 + ρ) ≤ λ0 ln(1 + ρ). Similarly, the lower bound is approximated as SE ≥ (1 − e^{−λ0}) ln(1 + ρ) ≈ λ0 ln(1 + ρ), which converges with the upper bound.
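As a numerical cross-check (not part of the paper), the short Python sketch below evaluates the simplified bounds (12) and (13) as reconstructed above and compares them against a Monte Carlo estimate of E[ln(1 + ρ P'opt)] under the Rayleigh (m = 1) and Bernoulli-sparsity simplifications; parameter values are placeholders.

import numpy as np
from scipy.special import exp1

def se_bounds(B, lam0, rho):
    p = 1.0 - np.exp(-lam0 / B)
    upper = p * B * (np.exp(1/rho) * exp1(1/rho)
                     - 0.5 * (1 - np.exp(-lam0)) * np.exp(2/rho) * exp1(2/rho))   # Eq. (12)
    lower = (1 - (1 - p)**B) * np.log(1 + rho)                                    # Eq. (13)
    return lower, upper

def se_monte_carlo(B, lam0, rho, n=200000, seed=0):
    rng = np.random.default_rng(seed)
    pi = rng.random((n, B)) < (1 - np.exp(-lam0 / B))     # Bernoulli beam-pair occupancy
    g2 = rng.exponential(1.0, size=(n, B))                # |g|^2 with m = 1 (Rayleigh)
    p_opt = (pi * g2).max(axis=1)                         # normalized P'_opt
    return np.mean(np.log(1 + rho * p_opt))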
B. Computation of throughput by accounting for antenna beam training overhead
Generally, SE can be enhanced by operating with a large B, which essentially increases the antenna gain. However, analog beamforming protocols require a fixed training time (with large B, the training time increases) to identify the antenna beam pair which maximizes SE. The training overhead reduces the opportunity of nodes to communicate due to the limited residual duration for data transmission. This overhead is expected to be significant for an outdoor environment since the channel changes frequently and beamforming needs to be repeatedly performed to discover strong multi-path components. In this section, we quantify the TP of a link by associating the antenna beam training overhead with SE, and derive an approximate value of the optimal antenna HPBW which maximizes TP. Let To denote the antenna beam training duration and let the total duration due to antenna beam training plus data transmission be T. In the present context, T can be the same as the coherence time of the channel, Tc. We define TP as
TP = (1 − To/T) SE.   (14)
To evaluate TP for a practical network scenario, we consider the Multiple Sector ID Capture (MIDC)1 scheme enabled antenna beamforming protocol specified in the IEEE 802.11ad standard [15]. In the recent past, several commercial products compliant with the IEEE 802.11ad standard have been released for outdoor communications [16], although the standard was originally proposed for indoor communications. We also note that the standard allows nodes to employ an antenna beam tracking mechanism to track the channel variations due to mobility [15]. The change in the direction of arrival of the strongest multi-path component is identified by sending a channel estimation sequence appended to the data frames.
1 The antenna beamforming protocols may also identify sub-optimal antenna beam pairs as the solution. However, experimental evaluations confirm that the MIDC-based protocol obtains the optimal antenna beam pair with fairly high probability [14].
However, continuous antenna beam tracking results
in a reduction of the data transmission duration, which eventually degrades the TP of the link. Interestingly, we observe that antenna beamforming (though it requires more search time compared to the antenna beam tracking mechanism) allows data transmission for a longer duration in comparison to the antenna beam tracking mechanism, since it does not require any prior knowledge of the channel. As such, a judicious selection of the analog beamforming and beam tracking mechanisms is required for communications in highly mobile environments. However, an analysis based on this observation is presently out of the scope of this paper, and we take into account analog beamforming only. To derive TP, we assume the same number of antenna beams at the transmitter and receiver nodes (Mt = Mr = √B) and To = 2(2√B + Nb²)Tf [14], where Tf represents the transmission duration of the control frame for antenna beam training and Nb = 4 [14]. Further, for analytical tractability, the lower bound on SE in this section is considered with (1 − p)^B = exp(−λ0) and ρ = BK, where K = λ0^{−1} c d^{−α} / σ². Based on this parameter setting, TP in (14) is modified as
TP = [ 1 − 2(2√B + Nb²) Tf / T ] (1 − e^{−λ0}) ln(1 + BK).   (15)
The optimal value of B is determined by equating the first derivative of TP from (15) to zero, and hence we obtain
(1 + B̂*K) ln(1 + B̂*K) / (K √B̂*) = 1/Ft − (2√B̂* + Nb²),   (16)
where Ft = 2Tf/T and B̂* is the optimal value of B, which can be found by numerically solving (16). However, it is possible to derive an approximate closed form expression for B̂* by applying the simplification (1 + x)ln(1 + x) ≈ x√x in (16), which results in
Ft √K B̂* + 2Ft √B̂* + Nb² Ft − 1 = 0.   (17)
It is interesting to note that the approximation in (16) leads to a quadratic equation in √B̂*. Therefore,
√B̂* = [ −Ft + √( (1 − √K Nb²) Ft² + √K Ft ) ] / ( Ft √K ).   (18)
Correspondingly, the optimal antenna HPBW for the transmitter and receiver nodes is determined as θ*3dB,t = θ*3dB,r = θ*3dB ≈ 360/√B̂*.
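The following Python sketch (illustrative only, with placeholder parameter values) evaluates the closed-form approximation (18) as reconstructed above and checks it against a brute-force maximization of the throughput expression (15).

import numpy as np

def throughput(B, K, Tf, T, lam0, Nb=4):            # Eq. (15)
    return (1 - 2 * (2 * np.sqrt(B) + Nb**2) * Tf / T) * (1 - np.exp(-lam0)) * np.log(1 + B * K)

def optimal_beam_count(K, Tf, T, Nb=4):             # Eq. (18)
    Ft = 2.0 * Tf / T
    sqrtB = (-Ft + np.sqrt((1 - np.sqrt(K) * Nb**2) * Ft**2 + np.sqrt(K) * Ft)) / (Ft * np.sqrt(K))
    return sqrtB**2, 360.0 / sqrtB                  # (B*, optimal HPBW in degrees)

# Example: K = (c d^-alpha / sigma^2) / lam0, Tf = 5 us, coherence time T = 2 ms (assumed).
K, Tf, T, lam0 = 0.01 / 1.9, 5e-6, 2e-3, 1.9
B_star, theta_star = optimal_beam_count(K, Tf, T)
B_grid = np.arange(1, 5000)
B_best = B_grid[np.argmax(throughput(B_grid, K, Tf, T, lam0))]   # numerical check of (16)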
IV. PERFORMANCE EVALUATION
Extensive Monte Carlo numerical simulations were performed to validate the analysis presented in the preceding section. We set the simulation parameters as c d^{−α}/σ² = 0.01 and Tf = 5 µs [15], and a variable number of antenna beam pairs B is considered. We consider values of λ0 spanning from 1 to 3.5, including the experimentally reported values λ0 = 1.9 [7] and 3.3 [13]. Firstly, we compare the simulated plot for SE and the plots for the analytically evaluated bounds on SE in Fig. 2(a) for varying λ0 and arbitrarily chosen
m = 3.2. The simulated SE plot is generated by averaging
the capacity evaluated for 10^5 realizations of the channel. In the
n-th realization of the channel, the capacity C_n is calculated as
C_n = log_2(1 + P_opt/σ^2), based on the criteria given in (2),
where the power received in the i-th antenna beam pair is determined
as P_i = c d^{−α} Σ_{l∈L_i} |h_l|^2 G_{m,t} G_{m,r}. The variables card(L_i)
and |h_l| are generated randomly based on their respective
PDFs (as discussed in the System Model section). Further,
we choose two different values for B in the simulation. From
Fig. 2(a), the maximum error between the upper bound derived
in (10) and the simulated SE is found to be approximately 7.2%
and 9.6% for B = 625 and B = 121, respectively. Moreover,
the plots in Fig. 2(a) also reveal that the upper bound from
(12) and the lower bound from (13) are tight bounds in the lower
λ_0 regime (for example, the lower and upper bounds show
errors of 2.8% and 6.1%, respectively, at λ_0 = 1.25), which
indicates that the derived bounds on SE are fairly accurate for
highly sparse mmWave channels.

Fig. 2: (a) Comparison of the simulated plot for SE and the plots for the analytically evaluated bounds on SE for varying λ_0 and m = 3.2; (b) comparison of the simulated plot for SE and the plots for the analytically evaluated bounds on SE for varying B, with m = 3.2 and λ_0 = 1.9; (c) comparison of the simulated plot for SE and the plots for the analytically evaluated bounds on SE for varying K_dB with λ_0 = 1.9.
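The following sketch mimics the Monte Carlo procedure described above. It is only an illustrative stand-in: the number of multi-path components per beam pair is drawn here as Poisson with mean λ_0/B, the per-path power as Gamma (Nakagami-m power), and the combined beamforming gain is taken as B, whereas the actual PDFs, gains, and beam-selection criterion are those defined in the paper's System Model section. The number of realizations is also reduced from the paper's 10^5 for speed.

```python
import numpy as np

rng = np.random.default_rng(1)
B, lam0, m = 625, 1.9, 3.2
cd_alpha_over_sigma2 = 0.01          # c*d^-alpha / sigma^2 (from the simulation setup)
G = B                                # combined Tx/Rx beam gain G_{m,t}*G_{m,r}, assumed
n_real = 2_000                       # reduced from 1e5 for a quick run

se = []
for _ in range(n_real):
    n_paths = rng.poisson(lam0 / B, size=B)                        # card(L_i) per beam pair (assumed PDF)
    h2 = [rng.gamma(m, 1.0 / m, size=k).sum() for k in n_paths]    # sum of |h_l|^2 per beam pair (assumed PDF)
    P = cd_alpha_over_sigma2 * G * np.array(h2)                    # per-beam-pair SNR
    se.append(np.log2(1 + P.max()))                                # select the best beam pair
print(f"Simulated SE ~ {np.mean(se):.2f} bps/Hz for B = {B}")
```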
Fig. 3: Radiation pattern of different patch antennas with corresponding sectored models for (a) θ_3dB = 33°, (b) θ_3dB = 14.5°.

Fig. 4: Effect of channel dynamics on TP.
Further, we compare the simulated plot for SE and the
plots for the analytically evaluated bounds on SE in Fig. 2(b)
for varying B with m = 3.2 and λ_0 = 1.9. We exclude the plot for the SE bound obtained in (12), since the
corresponding error is significantly large at λ_0 = 1.9, as
observed from Fig. 2(a). For comparison, we also illustrate
the result generated for the scenario where the directional
channel is simply modeled as a Nakagami-m (m = 2) random
variable, an assumption made in [4], [5]. As can be seen from
Fig. 2(b), the error between the upper bound in (10) and the
simulated SE is reduced from approximately 8.7% to 4.6%
for B = 100 (θ_3dB = θ_3dB,t = θ_3dB,r = 36°) and B = 1000
(θ_3dB = θ_3dB,t = θ_3dB,r = 11.4°). We also observe that the
error due to the lower bound from (13) is as large as 15.6% for
B = 100. However, for higher values of B, the error is seen
to reduce to 6.8%, which indicates that the closed-form lower
bound may also be applicable for SE evaluation of optimally
beamformed links at higher values of B, or equivalently while
operating with low-resolution antenna beams.
The plot for simulated SE assuming Rician distributed |hl |
is presented in Fig. 2(c). In the same figure we also plot the
bounds on SE (analytical) for varying Rician shape parameter,
KdB and λ0 = 1.9. Experimentally reported values of KdB [8]
were used in the simulation. The analytical plot corresponding
to the upper bound in (10) is generated for the values of
m determined from chosen KdB using (2.54) in [17]. An
omni-directional channel is generated in each iteration of the
simulation with Rician distributed multi-path amplitude for
adopted shape parameter KdB . Antenna radiation pattern is
then applied (based on discrete set of antenna pointing directions) to obtain optimally beamformed directional channel.
Two different patch antenna radiation patterns with antenna
HPBW θ_3dB = θ_3dB,t = θ_3dB,r = 14.5° and 33°, depicted
in Fig. 3, are used for the simulation. Correspondingly, we choose
B = 625 and B = 121 for the analytical plots, assuming
that each node employs the same antenna HPBW (14.4°
and 32.73°, respectively) for communication. As shown in
Fig. 2(c), a maximum error of 8.2% and 18.5% is observed
for B = 625 and B = 121, respectively as a result of the
joint impact of the simplification of antenna radiation pattern
with sectored model and the approximations adopted for the
evaluation of the bound. We also note that the maximum error
between the simulated SE and the lower bound on SE is only
10.2% for B = 121.
Finally in Fig. 4, we plot simulated TP of a link with SE
determined using the simulation procedure used for Fig. 2(a)
and Fig. 2(b). We use T = Tc for the simulation. To the
best of our knowledge, an investigation pertaining to channel
dynamics in an outdoor millimeter wave environment is unavailable in the literature. Therefore, T_c is analytically determined by
assuming that the channel dynamics is only due to the motion
of transmitter or receiver node alone (related mathematical
expressions are available in [18]). In Fig. 4, we plot TP
for four scenarios. For three scenarios, one of the nodes is
assumed to be carried by a moving person, and in the last
scenario, the node is assumed to be located in a moving vehicle
(accordingly, the velocity of the node varies from v = 1 m/s to
v = 11.1 m/s (i.e., 3.6 km/hr to 40 km/hr)). As shown in Fig. 4,
TP initially increases and then reduces owing to the increasing
training overhead due to antenna beamforming. Moreover,
when velocity of the node increases, the maximum achievable
TP reduces. As evident from Fig. 4, there also exists a range of
B for which TP of the link becomes negative (corresponding
values of TP are truncated to zero in Fig. 4). We note that, in this range of B, it
is not possible to complete the beamforming procedure within
the duration of T_c, a fact also evident from (15). Consequently,
for nodes employing a fixed antenna radiation pattern, the beamforming protocol may have to choose a sub-optimal antenna
beam pair so that those nodes are able to commence data
transmission before the channel starts changing significantly.
This effect is severe for highly mobile environments, as shown
by the plot corresponding to v = 11.1 m/s, which reveals
that the identification of optimal antenna beam pairs is not
possible regardless of the value of B. We also calculate
the approximated θ*_3dB using the formula θ*_3dB ≈ 360°/√B̂*. The
calculated values are θ*_3dB ≈ 13.16°, 18.32°, and 23.93°,
respectively, for v = 1, 1.5, and 2 m/s, and the corresponding
simulated values are θ*_3dB = 13.57°, 19.88°, and 26.98°.
Thus, the analytical framework presented in Section III-B can
serve as a design tool for beamforming protocols for outdoor
mmWave communications.
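A rough feel for the mobility effect can be obtained with the sketch below. It estimates the coherence time with the common rule of thumb T_c ≈ 9/(16π f_D), which is an assumption of this sketch (the expressions in [18] may differ), and then evaluates TP from (15) with T = T_c. Carrier frequency, λ_0, and K are also illustrative assumptions.

```python
import numpy as np

c_light = 3e8
fc = 60e9                      # carrier frequency (Hz), assumed
T_f = 5e-6                     # beam-training control-frame duration (s)
N_b = 4
lam0 = 1.9                     # average MPC parameter, assumed
K = 0.01 / lam0                # K = c*d^-alpha/(lambda_0*sigma^2), assumed

def throughput(B, v):
    """TP from (15) with T set to a rule-of-thumb coherence time."""
    fD = v * fc / c_light                     # maximum Doppler shift
    Tc = 9.0 / (16.0 * np.pi * fD)            # coherence time (rule of thumb, assumed)
    overhead = 2 * (2 * np.sqrt(B) + N_b ** 2) * T_f / Tc
    return max(0.0, 1 - overhead) * (1 - np.exp(-lam0)) * np.log(1 + B * K)

for v in (1.0, 2.0, 11.1):
    B_grid = np.arange(25, 2000, 25)
    tp = [throughput(B, v) for B in B_grid]
    B_best = B_grid[int(np.argmax(tp))]
    print(f"v = {v:4.1f} m/s: best B ~ {B_best}, TP ~ {max(tp):.3f} (same units as (15))")
```

As in Fig. 4, the achievable TP and the best B both shrink as the velocity grows, and at vehicular speeds the training overhead can exceed the coherence time for all B.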
V. CONCLUSION
In this paper, we present a novel methodology to incorporate
optimal analog beamforming into the framework for evaluation
of bounds on SE of NLOS mmWave links. We establish that
the simplistic assumption of Rayleigh or Nakagami-m probability distribution for the beamformed directional channel gain
is inadequate to characterize NLOS mmWave beamformed
channels. In addition, we also investigate the effect of antenna
beam training overhead on throughput of a link, and identify
the necessary conditions for its maximization. The evaluation
of throughput based on a standard antenna beamforming
protocol reveals that for a highly mobile environment, it may
not be possible to identify optimal antenna beam pairs which
maximize SE, and as such the nodes may end up operating
with sub-optimal antenna beams.
As future work, it would be interesting to explore the
scenario where the average number of multi-path components per
antenna beam and the path loss exponent are considered to be
functions of the antenna beam orientation angle.
ACKNOWLEDGMENT
The author would like to thank Mr. Shajahan Kutty for
proofreading this manuscript, and also expresses gratitude to
Dr. Saswati Ghosh for providing simulated antenna radiation
patterns for analysis.
REFERENCES
[1] T. S. Rappaport, R. W. Heath Jr, R. C. Daniels, and J. N. Murdock, Millimeter Wave Wireless Communications. Prentice Hall Communications
Engineering and Emerging Technologies Series, 2014.
[2] S. Kutty and D. Sen, “Beamforming for millimeter wave communications:
An inclusive survey”, IEEE Commun. Surveys Tuts., vol. 18, no. 2, pp.
949-973, 2016.
[3] J. Qiao, X. Shen, J. W. Mark, and Y. He, “MAC-layer concurrent
beamforming protocol for indoor millimeter-wave networks”, IEEE Trans.
Veh. Technol., vol. 64, no. 1, pp. 327-338, 2015.
[4] T. Bai and R. W. Heath, “Coverage and rate analysis for millimeter-wave
cellular networks”, IEEE Trans. Wireless Commun., vol. 14, no. 2, pp.
1100-1114, 2015.
[5] M. Di Renzo, W. Lu, and P. Guan, “The intensity matching approach: A
tractable stochastic geometry approximation to system-level analysis of
cellular networks”, IEEE Trans. Wireless Commun., vol. 15, no. 9, pp.
5963-5983, 2016.
[6] X. Yu and J. Zhang et al., “Coverage Analysis for Millimeter Wave
Networks: The Impact of Directional Antenna Arrays”, IEEE J. Select.
Areas Commun., vol. 35, no. 7, 2017.
[7] M. R. Akdeniz, and Y. Liu et al., “Millimeter wave channel modeling
and cellular capacity evaluation”, IEEE J. Select. Areas Commun., vol.
32, no. 6, pp. 1164-1179, 2014.
[8] M. K. Samimi and G. R. MacCartney et al., “28 GHz millimeter-wave
ultrawideband small-scale fading models in wireless channels”, in IEEE
83rd Vehicular Technology Conference (VTC Spring), 2016, pp. 1-6.
[9] C.-H. Chen and C.-R. Tsai et al., “Compressive Sensing (CS) Assisted Low-Complexity Beamspace Hybrid Precoding for MillimeterWave MIMO Systems”, IEEE Trans. Signal Process., vol. 65, no. 6, pp.
1412-1424, 2017.
[10] H. Shokri-Ghadikolaei, L. Gkatzikis, and C. Fischione, “Beam-searching
and transmission scheduling in millimeter wave communications,” in
Proc. IEEE International Conf. on Comm., 2015, pp. 1292-1297.
[11] R. T. Rakesh, G. Das, and D. Sen, “An Analytical Model for Millimeter
Wave Outdoor Directional Non-Line-of-Sight Channels”, in Proc. IEEE
International Conf. on Comm., 2017, pp. 1-6.
[12] M. Abramowitz and I. A. Stegun, Handbook of mathematical functions:
with formulas, graphs, and mathematical tables. Courier Corporation,
1964.
[13] T. S. Rappaport and G. R. MacCartney et al., “Wideband millimeterwave propagation measurements and channel models for future wireless
communication system design,” IEEE Trans. Commun., vol. 63, no. 9,
pp. 3029-3056, 2015.
[14] K. Hosoya and N. Prasad et al., “Multiple sector ID capture (MIDC): A
novel beamforming technique for 60-GHz band multi-Gbps WLAN/PAN
systems,” IEEE Trans. Antennas Propag., vol. 63, no. 1, pp. 81-96, 2015.
[15] “Part 11: Wireless LAN Medium Access Control (MAC) and Physical
Layer (PHY) Specifications Amendment 3: Enhancements for Very High
Throughput in the 60 GHz Band,” December 2012.
[16] http://www.bluwirelesstechnology.com/product/.
[17] G. L. Stüber, Principles of mobile communication. Springer, 2001.
[18] P. F. Smulders, “Statistical characterization of 60-GHz indoor radio
channels,” IEEE Trans. Antennas Propag., vol. 57, no. 10, pp. 2820-2829,
2009.
Millimeter Wave Communications for Future
Mobile Networks
arXiv:1705.06072v1 [] 17 May 2017
Ming Xiao, Senior Member, IEEE, Shahid Mumtaz, Senior Member, IEEE,
Yongming Huang, Senior Member, IEEE, Linglong Dai, Senior Member, IEEE, Yonghui Li, Senior Member, IEEE,
Michail Matthaiou, Senior Member, IEEE, George K. Karagiannidis, Fellow, IEEE, Emil Björnson, Member, IEEE,
Kai Yang, Senior Member, IEEE, Chih-Lin I., Senior Member, IEEE, Amitava Ghosh, Fellow, IEEE
Abstract—Millimeter wave (mmWave) communications have
recently attracted large research interest, since the huge available
bandwidth can potentially lead to rates of multiple Gbps (gigabit
per second) per user. Though mmWave can be readily used
in stationary scenarios such as indoor hotspots or backhaul, it
is challenging to use mmWave in mobile networks, where the
transmitting/receiving nodes may be moving, channels may have
a complicated structure, and the coordination among multiple
nodes is difficult. To fully exploit the high potential rates of
mmWave in mobile networks, lots of technical problems must
be addressed. This paper presents a comprehensive survey of
mmWave communications for future mobile networks (5G and
beyond). We first summarize the recent channel measurement
campaigns and modeling results. Then, we discuss in detail recent
progress in multiple input multiple output (MIMO) transceiver
design for mmWave communications. After that, we provide an
overview of solutions for multiple access and backhauling,
followed by an analysis of coverage and connectivity. Finally,
progress in the standardization and deployment of mmWave
for mobile networks is discussed.
Index Terms—Millimeter wave communications, mobile networks, channel model, MIMO beamforming, multiple access,
standardization.
I. INTRODUCTION AND BACKGROUND

With the fast development of electronic devices and
computer science, various emerging applications (e.g.,
virtual reality, augmented reality, big data analytics, artificial
intelligence, three-dimensional (3D) media, ultra-high-definition video transmission, etc.) have entered our society and
created significant growth in the data volume of wireless networks. Meanwhile, mobile networks have become
indispensable to our society as a key service for personal computing devices. One of the main characteristics of future mobile networks (5G and beyond) is the unprecedented
M. Xiao is with the department of information science and engineering,
School of Electrical Engineering, Royal Institute of Technology, Sweden.
Email: [email protected]; Shahid Mumtaz, is with Campus Universitario de
Santiago, Portugal, Email: [email protected]; Yongming Huang, is with
Southeast University, China, Email: [email protected]; Linglong Dai is
with Tsinghua University, China, Email: [email protected]; Yonghui
Li, University of Sydney, Australia, Email: [email protected]
Michail Matthaiou is with Queen’s University Belfast, UK, Email:
[email protected]; George K. Karagiannidis is with Aristotle University
of Thessaloniki, Greece, Email: [email protected]; Emil Björnson is with
Linköping University, Sweden, Email: [email protected]; Kai Yang is with
TongJi University, China, [email protected]; Chih-Lin I., China Mobile,
China, Email: [email protected]; Amitava Ghosh, Nokia, USA, Email:
[email protected]
Ming Xiao would like to thank Mr. Zhengquan Zhang for valuable
discussion and drawing some of the tables.
traffic volumes, with huge area spectral efficiency (hundreds
of bit/s/Hz/km2 ) and the very high throughput per device
(multiple Gbps). For instance, it is predicted that the world
monthly traffic of smartphones will be about 50 petabytes in
2021 [1], which is about 12 times the traffic in 2016.
In order to meet these requirements, the research and
deployment for the future mobile networks [2]–[4] have
already been launched. Since 2013, the national-level 5G
research organizations and projects (including European Union
(EU) 5GPPP/METIS, China IMT-2020 (5G) Promotion Group,
Korea 5G Forum, and Japan ARIB) have been set up one
after the other to achieve the 2020 technical targets. In
2015, ITU-R officially named 5G systems as IMT-2020,
and released recommendation on its framework and overall
objectives. Currently, Phase-1 of 5G is being standardized
in 3GPP (http://www.3gpp.org/news-events/3gpp-news). Fig. 1
illustrates potential usage scenarios and capabilities of IMT2020 [4]. Note that 5G is envisaged not only to expand and
support diverse usage scenarios and applications that will
continue beyond the current networks, but also to support a
broad variety of new application scenarios, including:
1) Enhanced Mobile BroadBand (eMBB);
2) Massive machine type communications (mMTC);
3) Ultra Reliable Low Latency Communication (URLLC).
It is expected that IMT-2020 can provide the following
eight key performance indicators (KPIs) [4]: greater than
10 Gbit/s peak data rate, 100 Mbit/s user-experienced data
rate, 3x spectrum efficiency, greater than 100 Mbps cell edge
rates, 10 Mbit/s/m^2 area traffic capacity, 100x network energy
efficiency, 1 ms over-the-air latency, support for 500 km/h mobility, and 10^6 devices/km^2 connection density [2]. The multiplicative
improvements are measured with respect to IMT-Advanced.
Recently, EU has also launched Beyond 5G research within the
H2020 framework (ICT 2017-09 Call), where a key technology
foundation is to use the millimeter wave (mmWave) bands
from 30 GHz to 300 GHz, and also THz frequency bands.
To achieve the magnificent objectives and visions listed
above, several key enabling technologies have been identified,
such as mmWave communications, massive multiple-input
and multiple-output (MIMO), small cell deployment, fullduplex relaying, D2D communications, interference management techniques, dynamic TDD with self-backhauling and
novel access technologies. Many of these technologies have
complementary benefits and need to be combined to achieve all
the key capabilities of 5G.

Fig. 1. 5G usage scenarios and key capabilities of IMT-2020, as compared
to IMT-Advanced [2].

For example, mmWave communications [4], [5], [7], [9], [12], [13] are widely considered one of the most
important technologies to achieve 10 Gbit/s peak data rates.
This is because there is a large amount of bandwidth available
in the mmWave bands, and expanding the bandwidth is an
efficient approach to enhance system capacity. In particular,
the channel capacity of an additive white Gaussian noise
channel operating over B Hz is
C = B log_2(1 + P/(N_0 B)),    (1)
where P is the signal power and N0 is the noise power
spectral density [15]. Hence, the capacity increases linearly
with the bandwidth B, if we also let P grow proportionally
to B. Since P is limited by regulations in practice, mmWave
communication is particularly well-suited for scenarios with
good channel conditions, such as short-range small cell access
and line-of-sight backhauling in mobile networks [16]–[19].
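A quick numerical illustration of (1) is given below: with P held fixed, capacity grows only logarithmically once the bandwidth B pushes the link into the noise-limited regime. The power and noise-density values are assumptions chosen purely for illustration, not values from the survey.

```python
import numpy as np

P = 1.0            # received signal power (W), assumed
N0 = 1e-9          # noise power spectral density (W/Hz), assumed

for B in (20e6, 100e6, 1e9, 2e9):       # 20 MHz ... 2 GHz
    C = B * np.log2(1 + P / (N0 * B))
    print(f"B = {B/1e6:7.0f} MHz -> C = {C/1e9:6.2f} Gbit/s "
          f"(SNR = {10*np.log10(P/(N0*B)):5.1f} dB)")
```

The output shows the sub-linear growth of C with B at fixed P, which is why large mmWave bandwidths pay off most in good-SNR scenarios such as short-range access and line-of-sight backhaul.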
Self-backhaul, where the same wireless spectrum is shared
between access and backhaul [20], [23], can provide flexible and cost-efficient solutions to overcome the difficulty of
deploying dedicated backhaul, especially in an ultra dense
network (UDN) [24]–[26]. It refers to a set of solutions where
small base stations (BSs) without dedicated backhaul connect
to other BSs that have dedicated backhaul link, by utilizing a
similar radio access technology as the one used by the user
equipments (UEs) to access the network.
The study of mmWave can trace back to more than 100
years ago. For instance, the experiments at wavelengths
as short as 5 and 6 mm were performed by Bose and
Lebedew in the 1890s [27]. As for its application in radio
communications, the mmWave mobile communications
were originally invented in the 1990s and early 2000s,
including system design and channel measurements [28]–
[30]. Due to the fact that the spectral resource below
6 GHz is becoming scarce while abundant bandwidth
is available in the mmWave bands, recently large efforts have
been devoted to mmWave communications research and
mmWave wireless local area networks (WLAN); for example,
IEEE 802.11ad technology operating at 60 GHz, is already
available. The more challenging development of mmWave
mobile communications is ongoing and is what this paper
focuses on. Samsung first achieved 1 Gb/s data transmission
at 28 GHz in May 2013. Google also put substantial
research efforts into mmWave communications. Verizon
has submitted applications to the Federal Communications
Commission (FCC) to obtain special temporary authorization
(STA) to test mmWave communications technology at
28 GHz. T-Mobile is also expected to obtain STA at
28 GHz and 39 GHz. Nokia in collaboration with National
Instruments achieved a peak rate of 15 Gbps using their
proof-of-concept system at 73GHz band in April 2015.
To further promote the development of mmWave mobile
communications, Millimetre-wave evolution for backhaul and
access (MiWEBA) (http://www.miweba.eu/, a joint EU-Japan
project), Beyond 2020 heterogeneous wireless networks
with mmWave small cell access and backhauling (MiWaves)
(http://www.miwaves.eu/), and mmWave based mobile radio
access network for fifth generation integrated communications
(mmMAGIC) (https://5g-mmmagic.eu/) projects have been
initiated by the EU. In Japan, to prepare for the 2020 Tokyo
Olympics, DOCOMO and Ericsson tested at 15 GHz to
reach rates of 4.5 Gbps in outdoor environments and at
70 GHz to reach rates of 2 Gbps in indoor environments
(https://www.nttdocomo.co.jp/english/info/media_center/pr/2015/030203.html).
In China, the Ministry of Science and
Technology (MOST) has supported a few projects (through
Project 863) on mmWave in mobile networks, and
RF chips for 60 GHz and 42-48 GHz have been produced
(http://www.most.gov.cn/kjbgz/201609/t20160923_127867.htm).
Huawei and China mobile also demonstrated the Ka-band
(26.5–40GHz) mobile access with 20Gbps rates in the Mobile
World Congress 2017.
The mmWave communication has a rich history. Recently
the main research interest on mmWave is shifting from local
area networks to mobile networks. The main focus of this
paper is on development of mmWave mobile communications.
We acknowledge the initial contribution in this field in the
1990s and 2000s including channel measurement and system
design, but we particularly concentrate on its new research
contributions for future mmWave mobile networks. Note that
though there have been many research progresses in the
area, the existing literature mainly focuses on addressing
specific technical challenges. A comprehensive survey, which
organically combines these numerous but disjointed works and
provides a summary of the latest research progress, is largely
missing.
Note that references [7], [12]–[14], [16] gave excellent
overviews of mmWave mobile networks from
different aspects. Reference [7] is a pioneering overview paper
on the mmWave for mobile networks. Yet, as published in
2013, [7] did not contain the technology development of
very recent years. Moreover, some topics, such as mmWave
MIMO, multiple access, backhaul and standardization, are not
included in [7]. Reference [12] is also an overview paper
on mmWave 5G published in early 2014, which did not include recent progress on the topic. Moreover, topics of mmWave MIMO and standardization are not included in [12]. Reference [16] gives a good brief overview of mmWave for 5G. However, being a magazine paper, the introduction in [16] does not include technical details. [14] is a recent overview paper on signal processing for mmWave MIMO. Yet, reference [14] is focused on mmWave MIMO, and other topics, e.g., recent developments in channel measurement/modeling, multiple access, standardization, and field tests, are missing. Reference [13], published in 2014, discussed mmWave mobile communications in terms of channel modeling, modulation, and network architecture. Yet, [13] did not discuss the topics of mmWave MIMO, standardization, and multiple access. Different from these papers, we provide a systematic survey on the main progress and technical content of mmWave communications for mobile networks, and further discuss some related research issues and challenges. In particular, we include the recent technical development of related topics, including the accepted papers of this IEEE JSAC special issue.

The rest of the paper is organized as follows. We first give an overview of the key challenges and technical potentials of mmWave communications in future mobile networks in Section II. Then, we discuss mmWave propagation characteristics and channel modeling in Section III, and then present MIMO design for mmWave communications in Section IV. Then, we discuss the multiple access technologies, in-band backhauling, and the coverage performance in Section V. We present the progress of the standardization and deployment of mmWave mobile networks in Section VI. The conclusions are finally given in Section VII.

II. KEY CHALLENGES AND TECHNICAL POTENTIALS

Next, we will describe some important challenges and potential gains from using mmWave communications in mobile networks.

A. Main Technical Challenges

Despite the theoretical potentials for extremely high data rates, there are several key technical challenges for using mmWave in mobile networks, including severe pathloss, high penetration loss, high power consumption, blockage due to shadowing, hardware impairments, etc. In what follows, we will give a brief introduction to these topics.

Pathloss: In free-space transmission, the power of the received signal (outside the Kirchhoff area) can be determined by the Friis transmission formula [7]:

P_r(d) = P_t G_t G_r (λ/(4π))^2 d^{−n},    (2)

where P_t is the transmit power and G_t and G_r are the antenna gains of the transmitter and receiver, respectively. Moreover, λ is the wavelength, d is the transmission distance, and the pathloss exponent n equals 2 in free space. The formula in (2) can also be used to, approximately, describe the power of the received signal in non-free-space propagation as well, by making channel measurements and then finding a suitable value of n that approximately describes the pathloss measurements. The value of n is usually in the range from 2 to 6. There are also refined models, e.g., for cellular networks, and n can in some scenarios be smaller than 2 [31].

Fig. 2. Atmospheric and molecular absorption in different frequency bands [35].

The wavelength of mmWave signals is much shorter than that of conventional microwave communication signals, operating at carrier frequencies below 6 GHz. Hence, the pathloss of mmWave signals is much higher than that of microwave signals, if all other conditions including the antenna gains are the same. Although the pathloss of mmWave is generally quite high, it is feasible to communicate over the distances that are common in urban mobile networks, such as a few hundreds of meters [7] or even a few kilometers [32]. By using directive antennas, it has been demonstrated that 10 km communication ranges are possible under clean air conditions [32]. If the air is not clean, the rain attenuation and atmospheric/molecular absorption increase the pathloss and limit the communication range [33], [34]. The impact of these factors varies with the carrier frequency; for instance, the atmospheric and molecular absorption are shown in Fig. 2 and the rain attenuation is shown in Fig. 3.

Penetration loss: The pathloss discussion assumes line-of-sight (LoS) communications, but the high penetration loss is compounded in non-line-of-sight (NLoS) scenarios. In indoor environments, although the penetration losses for clear glass and dry walls are relatively low for 28 GHz signals (comparable to microwave bands), the penetration losses for brick and tinted glass are high for 28 GHz signals (about 28 dB and 40 dB), which is much higher than at microwave bands [37], [112]. The penetration losses are typically larger at higher frequencies. Hence, it is difficult to cover indoor areas with mmWave nodes deployed outside, and vice-versa, due to high penetration loss.

High power consumption: In addition to the challenges imposed by high pathloss, (1) shows that the transmit power
4
elsewhere. This idealized radiation pattern, often referred as
the “flat-top model”, was used in [22], [38]–[40] for systemlevel performance analysis. However, in practice, the radiation
patterns are more complicated and implementation-dependent;
the main-lobe gain is not constant and the side-lobe radiation
is non-zero. The effect of side-lobe radiation and the gradual
reduction of main-lobe gain caused by beam misalignment
cannot be ignored. The maximum beamforming gain, which
can be achieved only if the main-lobes of the transmitter
and receiver are perfectly aligned, is rare due to practical
implementation constraints. In 3GPP, a more practical twodimensional directional antenna pattern [41] is adopted, where
the antenna gain G(θ), with respect to the relative angle θ to
its boresight, is given by
Rain Attenuation (dB/km)
102
101
100
0.25 mm/h
2.5 mm/h
12.5 mm/h
25 mm/h
50 mm/h
100 mm/h
150 mm/h
200 mm/hr
10-1
10-2
50
100
150
200
250
300
Frequency (GHz)
Fig. 3. Rain attenuation in different frequency bands [35].
needs to increase with the bandwidth if the signal-to-noise
ratio (SNR) should remain intact [14]. Alternatively, directive
antennas or MIMO technology can be used to direct the
signal power spatially, which leads to an array gain and
latter also provides the flexibility of spatial multiplexing
[36]. MIMO/beamforming is considered essential for mmWave
communications, particularly since the short wavelength at
mmWave frequencies makes it possible to fit many halfwavelength spaced antennas into a small area. MIMO arrays
are normally fully digital for sub 6 GHz systems, where
each antenna requires a dedicated radio-frequency (RF) chain,
including power amplifier (PA), low noise amplifier (LNA),
data converter (ADC/DACs), mixer, etc. However, realizing a
fully digital MIMO implementation at mmWave frequencies
is a non-trivial task, using current circuit design technology
[14]. Having hundreds (or even thousands) of antennas, each
supported by a separate RF chain, requires a very compact
circuit implementation. Moreover, due to the high bandwidth,
the PAs and data converters are expensive and power consuming. Hence, it appears that fully digital mmWave MIMO
implementations are currently infeasible from a cost-efficiency
perspective. This is likely to change in the future, but, in
the meantime, alternative low RF-complexity architectures
have received much attention from the research community.
In particular, hybrid analog/digital architectures are being
considered, where the corresponding signal processing techniques must be redesigned to enable channel estimation and
a good tradeoff between the spectral efficiency and energy
consumption/hardware cost [37].
Narrow beamwidth and side-lobes: To increase the transmission distance for mmWave, an array gain can be obtained
by using directional antennas, MIMO, and beamforming.
Consequently, the beamwidth of mmWave signals is normally
narrow. When modeling the directivity, the radiation patterns
are usually modeled in an idealized fashion, e.g., a constant
large antenna gain within the narrow main-lobe and zero
G(θ) = G_m · 10^{−(3/10)(2θ/ω)^2},   for |θ| ≤ θ_m/2,
G(θ) = G_s,   for θ_m/2 ≤ |θ| ≤ π,    (3)
where ω denotes the half-power (3 dB) beamwidth and θ_m is
the main-lobe beamwidth. G_m and G_s represent the maximum
main-lobe gain and averaged side-lobe gain, respectively.
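The short sketch below evaluates the sectored gain model of (3) in linear scale. The particular values of G_m, G_s, ω, and θ_m are illustrative assumptions, not parameters taken from [41].

```python
import numpy as np

G_m = 10 ** (18 / 10)       # main-lobe gain (18 dBi), assumed
G_s = 10 ** (-2 / 10)       # averaged side-lobe gain (-2 dBi), assumed
omega = np.deg2rad(10)      # half-power (3 dB) beamwidth
theta_m = 2.6 * omega       # main-lobe beamwidth, assumed relation

def gain(theta):
    """Antenna gain at relative angle theta (radians) from boresight, per (3)."""
    theta = abs(theta)
    if theta <= theta_m / 2:
        return G_m * 10 ** (-(3 / 10) * (2 * theta / omega) ** 2)
    return G_s

for deg in (0, 5, 10, 30, 90):
    g = gain(np.deg2rad(deg))
    print(f"theta = {deg:3d} deg -> G = {10 * np.log10(g):6.1f} dBi")
```

Note that at θ = ω/2 the gain drops by exactly 3 dB, consistent with ω being the half-power beamwidth.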
Eq. (3) models how the signal changes with the beamwidth
and receiving angles. The narrow beamwidth has a two-fold
impact. While the pros will be discussed later, for the cons,
narrow beamwidth leads to higher sensitivity to misalignment
between the transmitter and the receiver, especially in mobile
networks that should support high mobility. The reason for
beam misalignment can be coarsely divided into two categories: 1) Imperfection of existing antenna and beamforming
techniques [42]–[44], such as the analog beamforming impairments, array perturbations, oscillator locking-range based
phase error, and the direction-of-arrival (DoA) estimation errors. Moreover, limited feedback may also cause the transmitter having only partial channel information and thus beaming
misalignment [45]. 2) Mobility of communication UE [46],
[47], which invokes tracking error and system reaction delay.
Hardware impairments and design challenges: In addition to the above challenges, practical transceiver hardware is
impaired by phase noise (PN), non-linear PAs, I/Q imbalance,
and limited ADC resolution [48]. These effects limit the channel capacity [49], particularly when high spectral efficiency
is envisioned. On the other hand, it was proved in [50] that
MIMO communication links are less affected by hardware
impairments than single-antenna links.
In mmWave communication systems, mixers are applied for
signal up-conversion at the transmitter and down-conversion at
the receiver using local oscillators to generate carrier signals
operating at the desired carrier frequency. However, due to the
random deviation of the output signal frequency around the
carrier, it is infeasible that both oscillators at the transmitter
and the receiver operate exactly at the same carrier frequency.
Such a mismatch can be described by PN since the frequency
offset yields a random phase difference for the time domain
samples. Due to the high carrier frequency, mmWave communication systems are more sensitive to PN than conventional
ones.
Another important hardware impairment in mmWave is
the nonlinear PAs, since it is challenging to provide linear
amplification to a signal with very wide bandwidth. In practice,
each amplifier has a non-linear behavior, e.g., input signals
of large amplitude are clipped and different frequencies are
amplified differently. For the modeling of such non-linear
characteristics, the modified Rapp model [51], is commonly
used to describe the input-output relationship. Denoting Vi
the input voltage level of the PA, the output voltage level
Vo is described by the amplitude modulation to amplitude
modulation (AM-AM),
V_o = r V_i / (1 + (|r V_i|/V_s)^{2p})^{1/(2p)},    (4)
where r is the small signal gain, Vs is the limiting output
amplitude, and p controls the smoothness of the transition from
linear operation to saturated operation. There are also other
models, such as the Saleh model [52].
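A minimal sketch of the modified Rapp AM-AM characteristic in (4) is given below. The parameter values (r, V_s, p) are illustrative assumptions, not values from [51].

```python
import numpy as np

r = 16.0        # small-signal gain, assumed
Vs = 1.0        # limiting (saturation) output amplitude, assumed
p = 1.1         # smoothness factor, assumed

def rapp_am_am(Vi):
    """Output amplitude of the PA for input amplitude Vi, per (4)."""
    return (r * Vi) / (1 + (np.abs(r * Vi) / Vs) ** (2 * p)) ** (1 / (2 * p))

for Vi in (0.001, 0.01, 0.05, 0.1, 0.5):
    print(f"Vi = {Vi:6.3f} -> Vo = {rapp_am_am(Vi):.3f} "
          f"(linear gain would give {r * Vi:.3f})")
```

The output shows the transition from nearly linear amplification at small inputs to clipping near V_s at large inputs, which is the behavior (4) is meant to capture.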
Moreover, the higher carrier frequencies and larger bandwidths of
mmWave communication systems cause many technical challenges in the design of circuit components and antennas.
In [53], the authors discussed in depth the challenges in
the design of mmWave CMOS radios, ranging from
device-level challenges to architecture-level challenges. In
particular, phase noise and IQ imbalance also pose severe
technical challenges in realizing mmWave RF circuits
[54]–[56]. In [57], the authors have discussed and summarized
the latest research progress on integrated circuits for mmWave
communication systems. They have specifically summarized in detail a few key technologies, including RF power
amplifiers, mixers, high-speed analog-to-digital converters, on-chip and in-package antennas, and 60 GHz voltage-controlled
oscillators.
B. Technical Potentials
Having listed the main challenges, we should not forget the
main reasons for using mmWave communications.
Large continuous unused bandwidth: Compared to microwave communications, one of the major benefits of
mmWave communications is the availability of large bandwidth, though wider bandwidth does not always lead to
higher rates in the noise-limited region [8]. Currently, the
available bandwidth for mobile networks (2G, 3G, 4G and
LTE-Advanced spectrum) is globally smaller than 780 MHz
and each major wireless provider has only a total of about
200 MHz spectrum [7]. This bandwidth is not sufficient for
providing rates of Gbps to multiple devices, since a huge
per-device spectral efficiency would be required. However, in
mmWave bands, there are large chunks of bandwidth available
for future mobile networks. As shown in Fig. 4, in the
mmWave bands, the potentially available bandwidth can be
more than 150 GHz [5], even excluding unfavorable bands
such as the 60 GHz oxygen absorption band (57–64 GHz) and
the water vapor (H2 O) absorption band (164–200 GHz). With
150 GHz of spectrum, a low spectral efficiency of 1 b/s/Hz is
sufficient to deliver a rate of 150 Gbps. Having a low spectral
efficiency simplifies the implementation and makes the end
performance less affected by hardware impairments. The large
unused frequency bands have therefore already attracted lots
of interest. For instance, in October 2003, the FCC announced
that the 71–76 GHz, 81–86 GHz, and 92–95 GHz frequency
bands (collectively referred to as the E-band) will be available for
ultra-high-speed data communication including point-to-point
WLAN, mobile backhaul, and broadband Internet access. A
total of 12.9 GHz bandwidth is available in the E-band (60–90 GHz). More recently, in July 2016, the FCC dedicated large
bandwidths in mmWave bands for future cutting-edge wireless
communications, namely, the 64–71 GHz unlicensed band (plus
the previous 57–64 GHz) and the 27.5–28.35 and 37–40 GHz licensed
bands.

Fig. 4. Spectrum usage in mmWave bands.
Short wavelength and narrow beamwidth: Contrary to
signals at sub-6 GHz bands, the mmWave signal has much
shorter wavelength, which facilitates packing a large number
of antennas into an array of compact size [6], [7]. This
greatly expands the application range of large-scale antenna
communications in the future mobile networks [9], [10]. At
the same time, when having many antenna elements, the
beamwidth is narrow [11]. The positive side of this property is
the higher security against eavesdropping and jamming, and
a larger resilience against co-user interference. This implies
that the spectrum can be reused frequently in space, so many
interfering point-to-point MIMO systems (or multiuser MIMO
systems) can be deployed in a limited spatial region.
III. CHANNEL MEASUREMENTS AND MODELING
A. Millimeter Wave Measurement Campaigns
Since the wavelength of mmWave bands is far shorter than
in microwave bands below 6 GHz, the parameters for radio
channel models will be quite different. Thus, understanding
the mmWave propagation characteristics is the first task to
design and develop mmWave communication systems. In
general, parameters such as pathloss, delay spread, shadowing, and angular spread are used to characterize the radio
propagation, which can be obtained through analyzing the data
collected by various channel measurement campaigns in different environments. Extensive channel measurement campaigns
TABLE I
SUMMARY OF MMWAVE MEASUREMENT CAMPAIGNS
Frequency
(GHz)
10
11
15
16
17
26
28
37
38
40
41
55
57
60
72
73
81-86
82
10
11, 16, 28, 38
73
26
Scenario
Site
Parameters
Ref.
Street canyon
Outdoor to indoor
Urban outdoor
Street canyon
Street canyon
Office
Airport
Outdoor to indoor
Urban outdoor
Outdoor to indoor
Urban steet and Sidewalk
Street canyon
Street canyon
Open square
Airport
Indoor (lab) and outdoor urban
Dense urban
Indoor and outdoor to indoor
Dense urban
Outdoor to indoor
Office
Laboratory
Station and airport
Urban
Urban steet and Sidewalk
Urban (Campus)
Urban (Campus)
Urban (Campus)
Indoor (lab)
Street canyon
Urban (street)
Urban (street)
Street canyon
Street canyon
Office
Shopping mall
Outdoor to indoor
Indoor (lab)
Urban (Campus)
Hospital
Outdoor courtyard and in vehicle
Street canyon
Street canyon
Urban
Office
Dense urban
Dense urban
Office
Roof-to-street
and street canyon
Street canyon
Office
Indoor hotspot
Indoor
rural
Indoor
Berlin
Belfort
Ishigaki
Helsinki
Stockholm
Large-scale parameters, time variance, and frequency dependence
Penetration losses, delay spread, and time variation
Pathloss, RMS delay spread, shadow, and power delay profile
Pathloss, delay spread, and angular spread
Pathloss and delay spread
Pathloss, directional spread, and delay spread
Pathloss, delay spread, and angular spread
Penetration loss and RMS delay spread
Pathloss
Penetration losses, delay spread, and time variation
Pathloss
Delay spread, angular spread, and pathloss
Large-scale parameters, time variance, and frequency dependence
Pathloss, delay spread, and angular spread
Pathloss, delay spread, and angular spread
Penetration loss, pathloss, reflectivity, and AoA
Pathloss, RMS delay spread, shadow, PDP, AoA, and AoD
Penetration loss and reflection coefficients
Outage
Excess loss, received power
Pathloss, RMS delay spread, and shadow
RMS delay spread and PDP
Pathloss and shadow
Pathloss, RMS delay spread, angle spread, AoA, and AoD
Pathloss
Pathloss, RMS delay spread, and PDP
Outage statistics
Pathloss, RMS delay spread, shadow, PDP, AOA, outage statistics
Penetration loss, reflectivity
Large-scale parameters, time variance, and frequency dependence
Pathloss, fading envelope, coherence bandwidth
Mean delay, delay spread, delay interval and delay window
Delay spread, angular spread, and pathloss
Pathloss and delay spread
Pathloss, directional spread, and delay spread
Direction
Penetration loss and RMS delay spread
Penetration loss
Pathloss, RMS delay spread, and PDP
Pathloss, delay spread, and PDP
Pathloss, RMS delay spread, PDP, and AoA
Pathloss
Pathloss and delay spread
Angular, RSS
Penetration loss, delay spread, PDP
Pathloss, delay spread, shadow, PDP, AoA, and AoD
Outage
Pathloss, RMS delay spread and shadow
Frequency response, impulse responses, and delay
[58]
[58]
[59]
[58]
[58]
[58]
[58]
[58]
[60]
[58]
[61]
[58]
[58]
[58]
[58]
[62]
[63]–[65]
[66]
[67]
[68]
[69], [70]
[71]
[72]
[73], [74]
[61]
[75]
[76]
[77]
[62]
[58]
[78]
[79]
[58]
[58]
[58]
[58]
[58]
[62]
[75]
[80]
[81]
[82]
[83]
[84]
[85]
[65], [86]
[67]
[69], [70]
[87]
Large-scale parameters, time variance, and frequency dependence
Delay and angular spread
Large-scale parameters, pathloss model
Power delay, azimuth, elevation profile
pathloss model
pathloss, shadow fading, and coherence bandwidth
[58]
[58]
[88]
[89]
[90]
[91]
Helsinki
Stockholm
Tokyo
Belfort
Tokyo
Helsinki
Berlin
Helsinki
Helsinki
Dallas
New York
New York
New York
Göteborg
New York
Seoul
Daejeon
Tokyo
Austin
Austin
Austin
Berlin
London
Oslo
Helsinki
Stockholm
University of Bristol
Stockholm
Austin
Japan
Berlin
Berlin
Aachen
New York
New York
New York
New York
Otaniemi
and Kaisaniemi (Helsinki)
Berlin
CEA-Leti
Oulu
Jinan, China
Virginia
BeiJing
TABLE II
SUMMARY OF MMWAVE MEASUREMENT RESULTS ON DIRECTIONAL (D) AND OMNIDIRECTIONAL (O) PATHLOSS, DELAY SPREAD AND SHADOWING
Freq.
(GHz)
11
11
11
28
Environment
Scenario
Site
Macro cellular
Micro cellular
Outdoor hotspot
Outdoor cellular
LoS/NLoS
LoS
LoS
LoS/NLoS
28
Dense urban
LoS/NLoS
28
Office
LoS/NLoS
28
Office
LoS/NLoS
28
Office
LoS/NLoS
28
Office
LoS/NLoS
28
28
28
28
28
Station
Airport
Urban
Urban street
Urban street
LoS/NLoS
LoS/NLoS
NLoS
LoS/NLoS
LoS/NLoS
28
Urban street
LoS/NLoS
38
38
38
38
38
38
38
38
55
60
60
60
60
72
Outdoor Cellular
Outdoor Cellular
Outdoor Cellular
Outdoor Cellular
Outdoor Cellular
Outdoor Cellular
Outdoor Cellular
Outdoor Cellular
Urban (street)
Hospital
Hospital
Hospital
Street canyon
Indoor
LoS/NLoS
LoS/NLoS
LoS/NLoS
LoS/NLoS
LoS/NLoS
LoS/NLoS
LoS/NLoS
LoS/NLoS
LoS/NLoS
LoS
LoS
LoS
LoS/NLoS
NLoS
73
Office
LoS/NLoS
73
Office
LoS/NLoS
73
Office
LoS/NLoS
73
Office
LoS/NLoS
73
73
73
73
73
Outdoor cellular
Outdoor cellular
Outdoor cellular
Outdoor cellular
Dense urban
LoS/NLoS
LoS/NLoS
LoS/NLoS
LoS/NLoS
LoS/NLoS
73
Dense urban
LoS/NLoS
Indoor
Indoor
rural
Hall
LoS/NLoS
Ishigaki
Ishigaki
Ishigaki
New York
(Manhattan)
New York
(Manhattan)
New York
(Brooklyn)
New York
(Brooklyn)
New York
(Brooklyn)
New York
(Brooklyn)
Seoul
Seoul
Daejeon
Daejeon
New York
(Manhattan)
New York
(Manhattan)
Austin
Austin
Austin
Austin
Austin
Austin
Austin
Austin
London
Japan
Japan
Japan
Berlin
New York
(Brooklyn)
New York
(Brooklyn)
New York
(Brooklyn)
New York
(Brooklyn)
New York
(Brooklyn)
New York
New York
New York
New York
New York
(Manhattan)
New York
(Manhattan)
Oulu
JiNan
Virginia
BeiJing
10
11/16/28/38
73
26
LoS/NLoS
LoS
Tx/Rx Antenna
(Height (m) etc.)
(O), Tx: 28, Rx: 3
(O), Tx: 8, Rx: 3
(O), Tx: 3, Rx: 3
(D), 24.5-dBi Tx/Rx
PL exp.
2.6/3.4
2.7
2.2
2.55/5.76
(O), Tx:7; 17, Rx: 1.5
2.1/3.4
(D), V-V, 15-dBi, Tx: 2.5, Rx: 1.5
1.7/4.5
(O), V-V, 15-dBi, Tx: 2.5, Rx: 1.5
1.1/2.7
(D), V-H, 15-dBi, Tx: 2.5, Rx: 1.5
4.1/5.1
(O), V-H, 15-dBi, Tx: 2.5, Rx: 1.5
Shadowing
(dB)
4.4/6.7
2.5
2.5
8.66/9.02
[59]
[59]
[59]
[7], [63]
3.6/9.7
[65]
2.6/11.6
[69]
1.7/9.6
[69]
8.0/10.9
[69]
2.5/3.6
3.0/9.4
[69]
(D), Tx: 8, Rx: 1.5
(D), Tx: 8, Rx: 1.5
(O), 24.5-dBi Tx:15
(O), Tx:15, Rx: 1.6
(O), Tx:15, Rx: 1.6
2.15/4.06
2.17/3.55
3.53
1.90/3.15
1.81/3.03
1.19/10.67
1.33/7.61
6.69
0.63/22.09
2.05/17.99
[72]
[72]
[73]
[74]
[74]
(O), Tx:15, Rx: 1.6
1.87/2.97
1.74/15.92
[74]
(D), 25-dBi TX and 13.3-dBi RX
(D), 25-dBi TX and 13.3-dBi RX
(D), 25-dBi TX and 13.3-dBi RX
(D), 25-dBi TX and 13.3-dBi RX
(D), 25-dBi TX and 25-dBi RX
(D), 25-dBi TX and 25-dBi RX
(D), 25-dBi TX and 25-dBi RX
(D), 25-dBi TX and 25-dBi RX
(D), Tx: 10, Rx: 2
Tx (O): 1.31, Rx (D): 2.04
(O), Tx : 2.30, Rx: 2.04
(O), Tx: 0.75, Rx: 1.58/2.29
(O)
20-dBi Tx/Rx: 1.61
2.13/2.54
2.16/2.52
2.03/2.40
2.74/2.97
2.25/3.29
2.38/3.20
2.01/2.70
2.99/4.17
3.6/10.4
1.34
1.86
2.23
2.02/3.06
10.1
17.3
4.8
17.5
13.5
15.1
5.3
16.5
8.14/7.74
8.78/7.82
5.31/5.27
12.46/11.16
6.51/11.63
14.12/8.97
6.56/5.54
13.92/8.90
[77]
[77]
[77]
[77]
[77]
[77]
[77]
[77]
[78]
[80]
[80]
[80]
[83]
[85]
(D), V-V, 15-dBi, Tx: 2.5, Rx: 1.5
1.7/5.3
3.3/13.3
2.1/15.6
[69]
(O), V-V, 15-dBi, Tx: 2.5, Rx: 1.5
1.3/3.2
1.9/11.3
[69]
(D), V-H, 15-dBi, Tx: 2.5, Rx: 1.5
4.7/6.4
9.0/15.8
[69]
(O), V-H, 15-dBi, Tx: 2.5, Rx: 1.5
3.5/4.6
6.3/9.7
[69]
(D), Tx: 7, Rx: 4
(D), Tx: 7, Rx: 2
(D), Tx: 17, Rx: 4
(D), Tx: 17, Rx: 2
(O), Tx:7; 17, Rx: 2
2.68/4.72
2.77/4.64
2.52/3.90
2.32/3.76
2.0/3.3
6.79/9.52
5.33/8.32
4.00/10.81
2.71/8.81
5.2/7.6
[86]
[86]
[86]
[86]
[65]
(O), Tx:7; 17, Rx: 4.06
2.0/3.5
4.2/7.9
[65]
Tx: 2, Rx: 1.6
Tx: 2.6, Rx: 1.45
Tx:110, Rx: 1.6-2
Tx: 2.5, Rx: 2
1.4/3.3
1.4/2.6
[88]
[89]
[90]
[91]
2.16/2.75
Delay spread
(ns)
4.1/18.4
12.8/18.7
22.29
6.7
6.3
3.8
21.2/10.3
17.4/30.4
9.1/10.2/10.6/8.5
16.08
1.7/6.7
0.098
Ref.
[58]–[87], covering potential mmWave mobile communication
bands, like 10, 28, 38, 60, and 82 GHz, have already been
performed. Table I summarizes the performed measurement
campaigns, where in particular NYU WIRELESS and the EU
mmMAGIC project make contributions.1 The researchers from
NYU WIRELESS have performed many measurements at 28,
38, 60, 72 and 73 GHz since year 2011, and obtained abundant
measurement results. Based on various measurement results
(including those from NYU WIRELESS) and ray tracing
analysis, 3GPP [21], [99] also provides new modeling features
like: i) dynamic LoS/non-LoS (NLoS) blockage 2 , spatial
consistency and penetration modeling, ii) extension of power
delay/angle profiles, and iii) the pathloss model based on onemeter reference distance. The EU mmMAGIC project plans
more than 60 single-frequency measurement campaigns, covering eight frequency bands from 6 to 100 GHz, under typical
environments and scenarios, including urban micro-cellular
(UMi) (street canyon, open square), indoor (office, shopping
mall, airport), outdoor-to-indoor (O2I) and two scenarios with
very high user densities (stadium and metro station) [58].
1) Pathloss and Shadowing: Pathloss and shadowing are
the two most important large-scale characteristics of the radio
channel, which have been reported for various environments
both in LoS and NLoS cases. It is common to determine
the pathloss exponent that best fits the measurements to Friis
transmission formula in (2), while the shadowing parameters
are used to model the random deviations from this model.
Table II summarizes the main mmWave measurement results
for directional and omnidirectional pathloss and shadowing.
2) Power Delay Profile and Delay Spread: The shape of
the power delay profile (PDP) in measurements is a single exponentially decaying spectrum or a superposition of multiple such spectra. The
delay spread denotes the extent of the multipath power spread
over the PDP, which is an important parameter that determines
the inter-symbol interference in single-carrier transmission
and the frequency-flatness of the subcarriers in multi-carrier
transmission. In Table II, we also summarize the delay spread
of various measurement results.
B. Millimeter Wave Channel Modeling
Channel models are important for evaluating and analyzing
the performance in system-level simulations. For mmWave,
some recent channel modeling results are reported in [92]–
[98]. For more general mobile and wireless networks, several
channel models have been proposed continuously, as illustrated in Fig. 5. To give a historical perspective, we give a
brief introduction as follows (even though some models are
not for mmWave).
1) 3GPP Spatial Channel Model and SCM-Extended: The
3GPP Spatial Channel Model (SCM) [41] was proposed in
2003 and supports six delay paths with 5 MHz bandwidth in
the 2 GHz frequency band under three scenarios: Suburban
Macro (SMa), Urban Macro (UMa), and UMi. The SCM-Extended model [99] further extended the SCM, by supporting
bandwidths of up to 100 MHz.

1 Note that there are many excellent measurement campaigns and results.
Due to space limits, we can only list a part of the existing literature here. The
results, especially those before 2012, are mostly omitted.
2 COST 259 also includes the LoS/NLoS blockage probability model.

Fig. 5. Available channel models for mobile and wireless networks [58].
2) WINNER I/II/+ Model: As the EU flagship mobile
technological projects for 4G, the WINNER I/II/+ projects
developed several channel models for mobile networks. The
WINNER I model [100] is an antenna-independent model, in
which different antenna configurations and different element
patterns can be used. In order to support more scenarios, such
as outdoor-to-indoor and indoor-to-outdoor, and elevation in
indoor scenarios, the WINNER II model [101] was developed. It includes scenario-dependent polarization modeling,
and thus improves the accuracy for cross-polarized MIMO
antennas. The parameter tables were reviewed and additional
measurements were done in order to cover the complete 1–
6 GHz frequency range. As a major upgrade of the WINNER
II model, the WINNER+ model [102] supports 3D propagation
effects. WINNER II and WINNER+ models are also antenna
independent since they are based on double-directional channel
representations.
3) 3GPP 3D Model [103], [104]: This model is defined
in the 2 GHz band at a relatively small bandwidth of 10 MHz.
It has consolidated parameters for the two most commonly
used scenarios: UMa and UMi, which are further split into
LoS, NLoS and O2I propagation. The core part of this model,
i.e., the small-scale fading (SSF) model, is identical to the
WINNER+ model. Thus, the same parameters can be used
and similar functionality is provided. More recently, 3GPP
TR38.900 [98] summarizes the recent channel modeling results of 3GPP for the band above 6 GHz and up to 100 GHz.
The models in [98] are comprehensive including street canyon,
open area, rooftop, indoor, backhaul, D2D/V2V and stadium
etc.
4) COST 273/2100 Model: The COST 273 model [105]
was an evolution of the earlier COST 259 channel model
towards mobile broadband by using MIMO technologies. The
model is based on the concept of geometrically located multipath clusters in 2D propagation environment to model the interrelationship between Angle-of-Arrivals (AoAs) and Angleof-Departures (AoDs). This concept is effective to keep spatial
consistency, and can evaluate the performance of MIMO
beamforming and multi-cell transmission more accurately. As
an evolution of the COST 273 model, the COST 2100 model
[106] used visibility regions (VRs) introduced in COST 259 to
model the scenario-variation. These VRs make the evaluation
of multi-cell and heterogeneous transmissions more practical
by considering the VR of BSs from each UE. This model also
extends the multipath clusters to 3D propagation environments.
5) QuaDRiGa Model [107]: As an open source implementation of the 3GPP-3D channel model, the QuaDRiGa
channel model is further extended with the features of spatial
consistency (to accurately evaluate the performance of massive MIMO) and multi-cell transmissions by exploiting the
approach in SCM-E and COST 273.
6) IEEE 802.11ad Model [108]: This model was developed
in 2010 to support indoor short-range communications, such
as in offices and homes using 60 GHz unlicensed band. The
model is quasi-deterministic (QD): specular components, such
as the LoS path and single and double bounce reflections are
modeled deterministically in 3D propagation environments,
while other contributions are modeled stochastically as random
components in the cluster. One of the important features is the
support for blockage.
7) MiWEBA Model [109]: The model is an extension of
the IEEE802.11ad channel model towards outdoor access,
backhaul/fronthaul, and device-to-device (D2D) scenarios. The
approach is QD, where specular components are modeled deterministically, while other components are modeled stochastically. In this way, beamforming and path-blocking models can
be supported. Furthermore, the first 60 GHz pathloss model in
an UMi environment was developed in MiWEBA. The effect
of the ground reflection paths traveling closely in space to the
LoS path has been found to be of high significance.
8) IMT-Advanced Model [110]: This model consists of
a primary module and an extension module (not specific
to mmWave), where the former is based on the WINNER
II channel model, while the latter enhances the support for
variable BS antenna heights, street widths, and city structures.
9) METIS Model [111]: This model consists of a map-based (deterministic) model, a stochastic model, and a hybrid
model as a combination of both. The stochastic model extends
the WINNER+ and 3GPP-3D models to support 3D shadowing
maps, mmWave parameters, direct sampling of the power
angular spectrum, and frequency dependent pathloss models.
Based on extensive measurement campaigns, lists of channel
parameters for <6 GHz and 50 to 70 GHz bands are available.
10) mmMAGIC Model [58]: This is a statistical channel
model for link- and system-level simulations that is designed
for the entire frequency range from 6 to 100 GHz for a large
variety of scenarios. The model uses the theoretical approach
of the existing 3GPP 3D channel model. However, it extends
it in the following ways: i) the spatial accuracy regarding
path and sub-path distributions is substantially improved, ii)
a realistic non-uniform distribution of sub-path amplitudes is
included, iii) sub-paths can be modeled using spherical waves;
iv) there is consistency over the frequency range (6−100 GHz),
v) there are frequency-dependent antenna models; vi) continuous
variations over time are provided, vii) mmWave-specific random
blockage, clustering and scattering objects are being modeled,
and viii) the reflection from the ground or the floor is modeled.
11) 3GPP-like 5G Model [22], [112]: In [112], the outdoor
model is established for the bands from 6 GHz to 100 GHz.
An initial 3D channel model which includes: 1) typical deployment scenarios for urban microcells and urban macrocells,
and 2) a baseline model for incorporating pathloss, shadow
fading, LoS probability, penetration, and blockage models
for the typical scenarios. Various processing methodologies
such as clustering and antenna decoupling algorithms are also
included in [112]. In [21], [22], the indoor model is established
for office and shopping mall environments. The measurement
results show that the smaller wavelengths introduce an increased sensitivity of the propagation models to the scale of
the environment and show some frequency dependence of the
pathloss as well as increased occurrence of blockage.
IV. MIMO DESIGN FOR MMWAVE COMMUNICATIONS
While conventional microwave communications, operating
below 6 GHz, can cover many users in wide coverage areas,
mmWave communications mainly provide local-area coverage
and thus feature fewer users. Moreover, there is limited
scattering in outdoor mmWave communications, in contrast
to the rich scattering in conventional microwave communications [7]. Due to these differences, mmWave MIMO systems
have different constraints and requirements, requiring different
transceiver designs. In this section, we first describe hardware
architectures for mmWave MIMO systems for use in mobile networks. We then discuss how to design the signal processing techniques, including channel estimation, channel tracking, and precoding/combining, for mmWave MIMO systems.
A. MIMO Architectures
The conventional MIMO system is fully digital, where all
the signal processing techniques are performed at baseband
as shown in Fig. 6 (a) [36]. As explained in Section II,
for mmWave communication with many antennas and high
bandwidth, the conventional fully digital MIMO architecture
requires many energy-intensive RF chains, leading to unaffordable energy consumption and hardware cost. Therefore,
although the fully digital architecture most likely will be
available at some point in the future, alternative architectures
are required for emerging mmWave mobile networks.
To reduce the implementation complexity, a fully analog
architecture has been adopted in indoor mmWave communications, such as 60 GHz WLAN [114]. As shown in Fig. 6
Fig. 6. MIMO architectures: (a) fully digital architecture; (b) fully analog
architecture; (c) hybrid architecture.
(b), only one RF chain is employed to transmit a single
data stream, and the analog circuit (e.g., realized by analog
phase-shifters) is utilized to partially adjust the signals (e.g.,
phases of signals) to achieve an array gain. The advantage
of the fully analog architecture is that it only requires one
RF chain, leading to quite low hardware cost and energy
consumption [114]. However, since the analog circuit can
only partly adjust the signals, it is hard to adjust the beam
to the channel conditions and this leads to a considerable
performance loss [14], particularly for mobile users. In addition, the fully analog architecture can only support single-stream
transmission, which cannot achieve the multiplexing gain to
improve spectral efficiency [14].
New MIMO architectures need to be designed to balance
between the benefits of fully digital and fully analog architectures. The hybrid analog-digital architecture is the key
solution [122]–[124]; the hybrid architecture was agreed to be
deployed in future 5G systems at the 3GPP RAN1 meeting
in June 2016 [125], as described later. The hybrid architecture
can be considered an extension of the fully analog architecture
to the multi-stream scenario. As shown in Fig. 6 (c), the key
idea is to divide the conventional digital signal processing
(e.g., precoding and combining) of large size into two parts:
a large-size analog signal processing (realized by analog
circuits) and a dimension-reduced digital signal processing
(requiring a small number of RF chains). Since there is
often only a small number of effective scatterers at mmWave
frequencies, each user has a MIMO channel matrix with
low rank [7]. Hence, the optimal number of data streams is
generally much smaller than the number of antennas. Since the
number of streams determines the minimum required number
of RF chains, this number can be significantly reduced by
the hybrid architecture, leading to reduced cost and energy
consumption.
The analog circuits of the hybrid architecture can be implemented by different circuit networks, leading to different
hardware constraints [14], [115], [116]. Next, we describe
the three typical implementation networks that are illustrated
in Fig. 7. The choice of architecture affects not only the
signal processing design but also the performance of mmWave
MIMO systems.
N1) Fully-connected network with phase-shifters: In this
network, each RF chain is connected to all antennas via phase-shifters, as shown in Fig. 7 (a) [117]. Hence, a highly directive
signal can be achieved by adjusting the phases of transmitted
signals on all antennas [117]. By employing such a network,
all elements of the analog precoder/combiner have the same
fixed amplitude.3 The main implementation difficulties are the addition of the analog signals at each antenna and the selection of a set of phase-shifts that are suitable over the entire bandwidth.
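To make the fixed-amplitude (phase-only) constraint of this network concrete, the following minimal Python sketch, with an assumed 64-element half-wavelength uniform linear array and an arbitrary target direction, builds a phase-shifter-only beamforming vector and checks the array gain it achieves toward that direction.

```python
import numpy as np

def ula_response(n_ant, theta):
    """Response of a half-wavelength-spaced ULA toward angle theta (radians)."""
    k = np.arange(n_ant)
    return np.exp(1j * np.pi * k * np.sin(theta)) / np.sqrt(n_ant)

n_ant = 64                                 # assumed array size
theta0 = np.deg2rad(20.0)                  # assumed target direction
a = ula_response(n_ant, theta0)

# Phase-shifter beamformer: every entry has the same fixed amplitude 1/sqrt(N);
# only the phases are adjustable (the N1 hardware constraint).
f_rf = np.exp(1j * np.angle(a)) / np.sqrt(n_ant)

array_gain = np.abs(np.vdot(f_rf, a)) ** 2 * n_ant
print(f"array gain toward the target direction: {array_gain:.1f} (ideal: {n_ant})")
```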
N2) Sub-connected network with phase-shifters: In this
network, each RF chain is only connected to a subarray
via phase-shifters, as shown in Fig. 7 (b) [113], [121]. Due
to this limitation, for each RF chain, only the transmitted
signals on a subset of antennas can be adjusted. Therefore,
compared with N1, the achieved array gain and directivity
are reduced proportionally to the number of subarrays [113].
However, this network might be preferred in practice, since
there is no need to add analog signals at the antenna inputs
and the number of phase-shifters required in this network
is also significantly reduced. Moreover, it has been shown
to achieve the performance close to that of N1 [121]. This
network imposes two hardware constraints [121]: i) the analog
precoder/combiner should be a block diagonal matrix; and
ii) all the nonzero elements of the analog precoder/combiner
have the same fixed amplitude.
N3) Lens antenna array: An alternative quite different from
the networks discussed above is to utilize a lens antenna array,
as shown in Fig. 7 (c) [126]. The lens antenna array (a feed
antenna array placed beneath the lens) can realize the functions
of signal emitting and phase-shifting simultaneously [126].
3 Here we assume the resolution of the phase-shifters is sufficiently high, so that the phase of the transmitted signal can be arbitrarily adjusted. In practice, phase-shifters with finite resolution will incur phase noise and degrade the performance of hybrid precoding and combining. In this case, how to design the signal processing techniques is an interesting topic of future research, and some initial works can be found in [118]–[120].
Fig. 8. Beam-training: (a) Wide beamwidth; (b) Narrow beamwidth.
Fig. 7. Analog circuit with different networks: (a) Fully-connected network with phase-shifters (N1); (b) Sub-connected network with phase-shifters (N2); (c) Lens antenna array (N3).
It can concentrate the signals from different propagation
directions (beams) on different feed antennas. As the scattering
for outdoor mmWave communications is not rich [7], the
number of effective propagation paths is usually limited, and
the channel power will be concentrated on only a small number
of beams. Therefore, the selecting network can be used to
significantly reduce the MIMO dimension as well as the
number of RF chains without major performance loss [126].
With careful design, the lens antenna array can excite several
orthogonal beams spanning the whole space. If the directions
of channel paths coincide with the directions of the orthogonal
beams, an array gain similar to N1 can be achieved4 [126].
Moreover, compared with the phase-shifter network in N1 (including a large number of phase-shifters, power splitters/combiners, and signal/control lines), the hardware cost and energy consumption incurred by the lens antenna array in N3 are relatively
low [128], [251]. Essentially, the lens antenna array plays
the role of a discrete Fourier transform (DFT).5 Therefore,
in this network, each column of the analog precoder/combiner
is restricted to be a DFT column [126].
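The DFT interpretation can be illustrated with a toy numerical example. The sketch below assumes an idealized two-path channel whose directions fall exactly on the DFT grid (a simplifying assumption; off-grid paths leak power into neighboring beams), transforms a ULA channel into the beamspace, and shows that essentially all of the channel power lands in two beams; this sparsity is what allows a small switching network to keep only a few beams.

```python
import numpy as np

n_ant = 64
rng = np.random.default_rng(0)

# Unitary DFT matrix: its columns play the role of the orthogonal lens beams.
dft = np.fft.fft(np.eye(n_ant)) / np.sqrt(n_ant)

# Toy channel: two paths whose directions coincide with two DFT beams.
path_beams = rng.choice(n_ant, size=2, replace=False)
gains = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
h = dft[:, path_beams] @ gains                  # spatial-domain channel

h_beam = dft.conj().T @ h                       # beamspace (DFT-domain) channel
power = np.abs(h_beam) ** 2
top2 = np.sort(power)[-2:].sum() / power.sum()
print(f"fraction of channel power in the 2 strongest beams: {top2:.3f}")
```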
B. Channel Estimation with Hybrid Architecture
Channel state information (CSI) is essential to benefit from
the array gain provided by multiple antennas. Acquiring CSI
is particularly challenging in mobile networks, where the
channels can change rapidly. With a fully digital receiver, the use of pilot transmission is the most efficient way to acquire CSI [6], [130]. The channel estimation is more complicated in a hybrid mmWave architecture, since we cannot extract the actual received signals on all antennas simultaneously. We will now discuss alternative methods for acquiring CSI in a hybrid architecture.
4 If not, power leakage will happen, leading to some performance loss after the selecting network [127].
5 It is worth pointing out that mathematically, N3 is equivalent to the structure based on the Butler matrix proposed in [129], which aims to improve the performance of antenna selection for microwave MIMO with high correlation.
To exemplify the signal processing, we consider a single-user multi-stream system. The transmitter employs $N_T$ antennas and $N_{RF}^T$ RF chains, the receiver employs $N_R$ antennas and $N_{RF}^R$ RF chains, and $N_D \le \min(N_{RF}^T, N_{RF}^R)$ parallel data streams are transmitted from the transmitter to the receiver. In
this subsection, we mainly consider narrowband systems. Note
that for broadband systems, the analog circuit is fixed for the
whole bandwidth. As a result, the analog signal processing
(e.g., precoding) cannot be adaptively adjusted according to
different frequencies. This will lead to more challenges in
signal processing design, which requires future investigation,
but some pioneering works can be found in [137], [140]. The
narrowband system model of the hybrid architecture can be presented as [117]
$$ y = W^H H F s + W^H n, \qquad (5) $$
where $s$ and $y$, both of size $N_D \times 1$, are the transmitted and received signal vectors, respectively, $H$ of size $N_R \times N_T$ is the mmWave MIMO channel, which can be modeled as described in Section III, and $n$ of size $N_R \times 1$ is the noise vector. $F = F_A F_D$ of size $N_T \times N_D$ is the hybrid precoder, where $F_A$ of size $N_T \times N_{RF}^T$ is the analog precoder and $F_D$ of size $N_{RF}^T \times N_D$ is the digital precoder. Similarly, $W = W_A W_D$ of size $N_R \times N_D$ is the hybrid combiner, where $W_A$ of size $N_R \times N_{RF}^R$ is the analog combiner and $W_D$ of size $N_{RF}^R \times N_D$ is the digital combiner.
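The following numpy sketch simply instantiates the model in (5) with arbitrarily chosen small dimensions (the antenna counts, RF-chain counts, and noise level are illustrative assumptions, not taken from any cited system) to make the matrix sizes concrete.

```python
import numpy as np

rng = np.random.default_rng(1)
N_T, N_R, N_RF_T, N_RF_R, N_D = 32, 16, 4, 4, 2   # assumed example dimensions

H = (rng.normal(size=(N_R, N_T)) + 1j * rng.normal(size=(N_R, N_T))) / np.sqrt(2)

# Analog parts: unit-modulus entries (phase-shifter constraint); digital parts: unconstrained.
F_A = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_T, N_RF_T))) / np.sqrt(N_T)
F_D = rng.normal(size=(N_RF_T, N_D)) + 1j * rng.normal(size=(N_RF_T, N_D))
W_A = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_R, N_RF_R))) / np.sqrt(N_R)
W_D = rng.normal(size=(N_RF_R, N_D)) + 1j * rng.normal(size=(N_RF_R, N_D))

F, W = F_A @ F_D, W_A @ W_D                        # N_T x N_D and N_R x N_D
s = rng.normal(size=(N_D,)) + 1j * rng.normal(size=(N_D,))
n = (rng.normal(size=(N_R,)) + 1j * rng.normal(size=(N_R,))) * 0.1

y = W.conj().T @ H @ F @ s + W.conj().T @ n        # equation (5)
print(F.shape, W.shape, y.shape)                   # (32, 2) (16, 2) (2,)
```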
The precoder and combiner matrices should be selected
based on the current channel realization H, but estimating
this matrix is non-trivial [141]. Firstly, due to the lack of
antenna gain before the establishment of the transmission link,
the SNR for channel estimation can be quite low. Secondly,
the number of RF chains in the hybrid architecture is usually much smaller than the number of antennas (i.e., $N_{RF}^T \ll N_T$ and $N_{RF}^R \ll N_R$), so we cannot simultaneously obtain the sampled signals on all receive antennas. As a result, the traditional
channel estimation schemes [138], [139] requiring the sampled
signals on all antennas will involve unaffordable pilot overhead
in a hybrid architecture. To solve this problem, two dominant
categories of channel estimation schemes have been proposed
in the mmWave literature.
The key idea of the first category is to reduce the dimension
of channel estimation problem, by dividing it into two steps.
In the first step, it performs the beam-training between the
transmitter and receiver to obtain the analog precoder FA and
analog combiner $W_A$. In the second step, the effective channel matrix $W_A^H H F_A$ in the analog domain is estimated by classical algorithms, such as least squares (LS) [139]. Note that the size of $W_A^H H F_A$ is $N_D \times N_D$, which is much smaller than that of the original channel matrix (i.e., $N_D \ll N_T, N_R$). Therefore, the
pilot overhead in the second step is relatively low, which is
proportional to $N_D$. The remaining difficulty lies in how to design an efficient beam-training scheme to find the optimal $F_A$ and $W_A$.
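As a concrete, deliberately simplified illustration of the second step, the sketch below assumes that $F_A$ and $W_A$ have already been fixed by beam-training, uses one RF chain per stream, and estimates the small effective channel by least squares from $N_D$ orthogonal pilot vectors; all dimensions and the noise level are assumed for illustration. The harder problem remains obtaining good $F_A$ and $W_A$ in the first place.

```python
import numpy as np

rng = np.random.default_rng(2)
N_T, N_R, N_D = 32, 16, 2                 # assumed example dimensions

H = (rng.normal(size=(N_R, N_T)) + 1j * rng.normal(size=(N_R, N_T))) / np.sqrt(2)

# F_A, W_A are assumed to have been fixed already by beam-training.
F_A = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_T, N_D))) / np.sqrt(N_T)
W_A = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_R, N_D))) / np.sqrt(N_R)
H_eff = W_A.conj().T @ H @ F_A            # small effective channel

S = np.eye(N_D)                           # N_D orthogonal pilot vectors
noise = 0.01 * (rng.normal(size=(N_D, N_D)) + 1j * rng.normal(size=(N_D, N_D)))
Y = H_eff @ S + noise                     # pilots observed through the analog stages

H_eff_ls = Y @ np.linalg.pinv(S)          # least-squares estimate
print(np.linalg.norm(H_eff_ls - H_eff) / np.linalg.norm(H_eff))
```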
To achieve this goal, two primary approaches have been
proposed. The first one is to extend the traditional single-beam
training schemes standardized in IEEE 802.11ad/802.15.3c
to multi-beam training [47], [142]. For example, the single-beam training scheme in IEEE 802.11ad consists of three
phases [142]: i) Sector level sweep (SLS): the wide beamwidth
is considered at first, as shown in Fig. 8 (a). The transmitter
and receiver try all possible wide beam pairs. With the channel
feedback to indicate the largest received SNR [143], [144], the
best beam pair (according to some criterion) can be selected
for the next phase; ii) Beam-refinement protocol (BRP): the
narrow beamwidth is considered as shown in Fig. 8 (b), and
the beams are trained in a similar way within the previously selected wide beam pair; iii) Beam tracking: a periodic beam-refinement is performed for the time-varying channels. By repeating the above procedure for each RF chain to select its corresponding beams, we can obtain the optimal FA and WA.
This hierarchical search could significantly reduce the training
overhead, but its performance heavily depends on the designed
training beam codebook. For the fully analog architecture,
forming a wide beam usually requires the deactivation of some
antennas and thus reduces the total transmit power. For the hybrid architecture, in contrast, it is possible to design a wide beam without reducing the transmit power [145], [146]. Another
scheme can be found in [164], where a novel codebook-based
beam-training together with the following hybrid precoding
design was proposed. Specifically, an RF codebook, denoted
as FC B , is used to specify the possible set of RF beamforming vectors by considering the practical limitation of phaseshifters. Then, the original hybrid precoding problem is transformed into a joint codeword selection and precoding design
problem, which is essentially a group sparsity constrained
optimization problem and only requires effective channels
HFC B . Based on that, a beam-training procedure is proposed
to obtain effective channels with less signaling feedback by
utilizing the beam-domain sparse property of mmWave channels, and efficient algorithms are developed for maximizing
both the spectral efficiency and the energy efficiency. The
second approach is to employ algorithms developed from
machine learning to realize the beam-training, since it can be
considered as a combinatorial problem with a finite number
of beams to be searched. For example, in [147], a tabu search
(TS)-based beam-training scheme is proposed. It first selects
an initial beam pair, and defines the neighbors of this pair
(e.g., only one RF chain changes its beam and the others stay fixed). Then, the training procedure is only executed within this much smaller neighbor set to find a pair which:
i) enjoys the best performance; ii) is not the tabu beam pair
according to some criteria. After a small number of iterations,
it can obtain FA and WA with satisfying performance. Other
algorithms, such as local search algorithm [148], can be also
used. They can usually significantly reduce the pilot overhead,
but the robustness cannot be guaranteed, since these algorithms
are performed in a random way.
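As a point of reference for these training strategies, the toy sketch below implements the simplest possible alternative, an exhaustive sweep over DFT transmit/receive beam codebooks that keeps the pair with the largest received power; the hierarchical and search-based schemes above aim to approach this choice with far fewer probings. The dimensions and the random channel are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N_T, N_R = 32, 16

H = (rng.normal(size=(N_R, N_T)) + 1j * rng.normal(size=(N_R, N_T))) / np.sqrt(2)

# DFT codebooks of candidate transmit and receive beams.
C_T = np.fft.fft(np.eye(N_T)) / np.sqrt(N_T)
C_R = np.fft.fft(np.eye(N_R)) / np.sqrt(N_R)

# Exhaustive sweep: probe every (transmit beam, receive beam) pair.
power = np.abs(C_R.conj().T @ H @ C_T) ** 2      # grid of |w^H H f|^2 values
i_rx, i_tx = np.unravel_index(np.argmax(power), power.shape)
print(f"best pair: tx beam {i_tx}, rx beam {i_rx}, gain {power[i_rx, i_tx]:.2f}")
print(f"number of probings: {N_T * N_R}")
```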
The second category of channel estimation schemes is to
exploit the sparsity of mmWave MIMO channels. Instead of
estimating the effective channel matrix $W_A^H H F_A$ of small size,
it can directly obtain the complete channel matrix H with
low pilot overhead. To explain the basic idea, we consider
the simplest case where both the transmitter and receiver use
one RF chain for channel estimation6 [141]. In time slot $m$, the transmitter uses a hybrid precoder $f_m$ of size $N_T \times 1$ to transmit the pilot $s_m$ to the receiver. The received pilot $y_m$ at the receiver, using a hybrid combiner $w_m$ of size $N_R \times 1$, can be presented as
$$ y_m = \sqrt{\rho}\, w_m^H H f_m s_m + w_m^H n = \sqrt{\rho}\, \big(f_m^T \otimes w_m^H\big)\, \mathrm{vec}(H) + w_m^H n. \qquad (6) $$
Note that the mmWave MIMO channel matrix $H$ can be well approximated by the extended virtual channel model with sufficiently quantized AoAs/AoDs [141]. Therefore, $\mathrm{vec}(H)$ can be rewritten as $\mathrm{vec}(H) = A_D h_b$, where $h_b = \mathrm{vec}(\tilde{H}_b)$ of size $G^2 \times 1$ is a sparse channel vector, and $G$ is the number of quantized AoAs/AoDs. The position of each nonzero element in $h_b$ indicates the AoA and AoD of one channel path, and the value of the nonzero element is the corresponding complex gain of this path. $A_D$ of size $N_T N_R \times G^2$ is the dictionary matrix of quantized AoAs/AoDs. After receiving $M$ pilots, we have
$$ y = \sqrt{\rho} \begin{bmatrix} f_1^T \otimes w_1^H \\ f_2^T \otimes w_2^H \\ \vdots \\ f_M^T \otimes w_M^H \end{bmatrix} A_D h_b + n_{\mathrm{eff}} = \sqrt{\rho}\, \Psi A_D h_b + n_{\mathrm{eff}}, \qquad (7) $$
where $y = [y_1, y_2, \cdots, y_M]^T$ and $n_{\mathrm{eff}}$ is the effective noise vector.
To estimate hb from (7), there are two primary approaches
proposed in the literature. The first one is to combine the idea
of beam-training with sparse signal recovery. For example,
in [141], an adaptive channel estimation scheme is proposed.
This scheme divides the total channel estimation problem
into several subproblems, each of which only considers one
channel path. For each channel path, it first starts with a
coarse AoA/AoD grid, and determines which angle range the AoA and AoD of this path belong to by employing the OMP algorithm. Then, the AoA/AoD grid around the determined
angle range is narrowed and the AoA and AoD of this path
are further refined also by the OMP algorithm. In each step,
fm and wm are designed based on the corresponding AD
(which varies due to different AoA/AoD grid in each step)
to make the effective sensing matrix $\Psi A_D$ have a fixed gain in a specific angle range. This requirement is the same as that of the hierarchical beam-training introduced above. Therefore, a good multi-resolution codebook such as [145], [146] can greatly improve the performance. It has been shown that the adaptive channel estimation scheme enjoys high accuracy with reduced pilot overhead. Some improved schemes following a similar idea can be found in [146], [149].
6 When multiple RF chains are used, the transmitted pilots on different RF chains can be designed to be orthogonal to simplify the channel estimation at the receiver.
The second
approach is to regard (7) as a classical sparse signal recovery
problem [150]–[152]. Then, it designs fm and wm to make the
sensing matrix ΨAD enjoy sufficiently low mutual coherence,
which is crucial to achieve high signal recovery accuracy in
CS theory [153]. In [154], several design methods have been
proposed to achieve this goal. Finally, (7) can be solved by many efficient CS algorithms, such as OMP and LASSO [153],
and the structural sparsity of mmWave MIMO channels (e.g.,
common support or partial common support) can be also
exploited to further reduce the pilot overhead and improve
the recovery accuracy [155].
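To give a flavor of this sparse-recovery formulation, the self-contained sketch below generates a toy sparse beamspace vector and recovers it from far fewer random pilot measurements than unknowns with a basic OMP loop. The dimensions, the random sensing matrix (standing in for $\Psi A_D$), and the noise level are illustrative assumptions rather than a reproduction of any cited algorithm.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Basic orthogonal matching pursuit: recover a sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    x = np.zeros(Phi.shape[1], dtype=complex)
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(Phi.conj().T @ residual)))
        support.append(idx)
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x_s
    x[support] = x_s
    return x

rng = np.random.default_rng(4)
G2, M, L = 256, 64, 3                      # grid size G^2, pilots M, channel paths L

h_b = np.zeros(G2, dtype=complex)          # sparse beamspace channel vector
paths = rng.choice(G2, size=L, replace=False)
h_b[paths] = rng.normal(size=L) + 1j * rng.normal(size=L)

# Random measurement matrix standing in for sqrt(rho)*Psi*A_D in (7).
Phi = (rng.normal(size=(M, G2)) + 1j * rng.normal(size=(M, G2))) / np.sqrt(M)
y = Phi @ h_b + 0.01 * (rng.normal(size=M) + 1j * rng.normal(size=M))

h_hat = omp(Phi, y, sparsity=L)
print(np.linalg.norm(h_hat - h_b) / np.linalg.norm(h_b))
```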
C. Channel Tracking

Since the mmWave channel varies over time in mobile networks, the conventional real-time channel estimation should be executed frequently. This has two important consequences [156]: i) In mmWave bands, the user mobility leads to huge Doppler effects and very limited channel coherence time. This means that the mmWave MIMO channels will vary quickly, even if we consider the short symbol duration associated with the wide bandwidth; ii) In hybrid mmWave implementations, there is not enough time to continuously redo the beam-training from scratch. Hence, channel-tracking exploiting the temporal correlation of the time-varying channels is preferable [156]. In this subsection, we describe two main categories of channel-tracking schemes.

The first channel-tracking approach is an improved version of beam-training. The key idea is to select several candidate beam pairs instead of only the optimal beam pair, during each beam-training procedure [142], [157]. For example, when the beam pair with the highest SNR is selected, the beam pairs achieving the second and third highest SNRs are also retained. When the channel varies, only the candidate beam pairs are tested and switched on to keep the SNR above a certain threshold. If all the candidate beam pairs fail, the complete beam-training will be executed again. This idea involves quite low complexity, which makes it easy to implement. Thus, it has been applied in current commercial mmWave communications, such as WLAN (IEEE 802.11ad) [114]. However, this idea is only efficient for single-stream data transmission. For multi-stream data transmission, the beam training itself will incur high pilot overhead, not to mention the search among all candidate beams for all data streams.

An alternative solution to track the mmWave MIMO channel is to utilize the geometric relationship between the transmitter and receiver to track the LoS path of the channel. Specifically, in [156], a priori-aided channel-tracking scheme was proposed. By considering a motion model, this scheme first excavates a temporal variation law of the AoA and AoD of the LoS path. After that, by combining the temporal variation law with the sparse structure of mmWave MIMO channels, it utilizes the obtained channels in the previous time slots to predict the prior information, i.e., the support of the channel, in the following time slot without channel estimation. Finally, with the known supports, the time-varying channels can be tracked with a low pilot overhead. Related schemes can be found in [160], [161]. Such schemes can perform well for the LoS path. For the NLoS paths caused by complicated scattering, it is difficult to analyze the geometrical relationship. As a result, a more promising solution in practice is to utilize the idea of the second category to track the LoS path. Then, by eliminating the influence of this path, the NLoS paths can be tracked following the idea of classical Kalman filters [162].

To realize mmWave mobile networks, the efficiency of the beam-tracking schemes must be carefully tested in real environments, to understand which mobility speeds can be supported and which channel characteristics can reliably be exploited to aid the tracking procedure.

D. Hybrid Precoding and Combining
After the CSI has been acquired, we can design the precoding and combining in the hybrid architecture to achieve
the multiplexing gain and array gain offered by MIMO.
However, the precoder/combiner design is considerably different from the precoding/combining optimizations in [163]
for the conventional fully digital MIMO systems, due to the
special hardware constraints in hybrid architecture of mmWave
systems. In this subsection, we discuss how to design the
hybrid precoder $F = F_A F_D$ and combiner $W = W_A W_D$ for the N1–N3 architectures discussed in Section IV-A with perfect CSI,
which helps understand the fundamental limits of the mmWave
MIMO with a hybrid architecture. The practically more relevant case of imperfect CSI is largely open and deserves much
attention in the upcoming years; in particular, because it is
well-known from conventional MIMO communications that
precoding/combining schemes that work well under perfect
CSI can be widely different from the schemes that work well
in practice.
Let us for simplicity consider a non-fading channel H.
Specifically, we focus on the optimization problem of hybrid
precoding and combining with the aim of maximizing the
achievable rate, given by [117]
$$ F_{\mathrm{opt}}, W_{\mathrm{opt}} = \arg\max_{F, W}\; \log_2 \Big| I + \rho R_n^{-1} W^H H F F^H H^H W \Big|, \quad \text{s.t.}\ F_A \in \mathcal{F},\ W_A \in \mathcal{W},\ \|F\|_F^2 \le N_D, \qquad (8) $$
where Rn is the noise/interference covariance matrix, while
F and W are the sets with all possible analog precoders
and combiners satisfying the hardware constraints (which
are different for N1-N3 as discussed in Section IV-A), respectively. Generally, obtaining the optimal solution to (8)
is a non-trivial task, since the constraint sets F and W make the problem much different from that in conventional fully digital communication. To address
(8), several hybrid precoding/combining schemes have been
proposed in the literature to achieve feasible solutions.
We first discuss the hybrid precoding and combining
schemes for the architecture N1. One effective approach is
to decompose the original optimization problem into several
subproblems, and each subproblem is approximated as a
convex one and then solved by standard convex optimization
algorithms. Particularly, a spatially sparse precoding scheme
based on orthogonal matching pursuit (OMP) has been
proposed in [117], which can fully exploit the sparsity of
mmWave MIMO channel and achieve the near-optimal performance. Using a similar idea, there are some other advanced
schemes proposed for N1 [164], [166], [167].
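The common thread of these schemes can be sketched as follows: treat the unconstrained (fully digital) precoder as a target, greedily pick analog beamforming vectors from a candidate dictionary, and fit the digital precoder by least squares. The Python sketch below is a deliberately simplified rendering of this projection idea, with DFT columns as an assumed dictionary and arbitrary small dimensions; it is not a faithful reproduction of the algorithm in [117].

```python
import numpy as np

rng = np.random.default_rng(5)
N_T, N_R, N_RF, N_D = 32, 16, 4, 2

H = (rng.normal(size=(N_R, N_T)) + 1j * rng.normal(size=(N_R, N_T))) / np.sqrt(2)
_, _, Vh = np.linalg.svd(H)
F_opt = Vh.conj().T[:, :N_D]                    # unconstrained precoder target

A = np.fft.fft(np.eye(N_T)) / np.sqrt(N_T)      # dictionary of candidate analog beams
support, F_res = [], F_opt.copy()
for _ in range(N_RF):
    # Pick the dictionary column most correlated with the current residual.
    corr = np.linalg.norm(A.conj().T @ F_res, axis=1)
    support.append(int(np.argmax(corr)))
    F_A = A[:, support]
    F_D, *_ = np.linalg.lstsq(F_A, F_opt, rcond=None)
    F_res = F_opt - F_A @ F_D

# Normalize to meet the total power constraint ||F_A F_D||_F^2 <= N_D.
F_D *= np.sqrt(N_D) / np.linalg.norm(F_A @ F_D, 'fro')
print("approximation error:", np.linalg.norm(F_opt - F_A @ F_D, 'fro'))
```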
For the subconnected architecture N2, a few hybrid precoding solutions have been developed for maximizing the
achievable rate or the spectral efficiency [121], [168]. More
recently, the energy efficiency optimization was studied and
a solution was proposed in [165]. Since the subconnected
architecture adopts a subarray structure, it is natural to solve
the complicated hybrid precoding problem by decomposing it
into several subproblems and optimizing them in an alternating
manner. In particular, a successive interference cancellation
(SIC)-based precoding scheme was proposed in [121], which
enjoys higher energy efficiency than the spatially sparse precoding scheme. The hybrid precoding problem with energy
efficiency as the objective is more complicated. In [165],
this optimization problem is solved by jointly exploiting the
interference alignment and fractional programming. First, the
analog precoder and combiner are optimized via the alternating
direction optimization method. Then, the digital precoder and
combiner without hardware constraints are obtained based on
the effective channel matrix $W_A^H H F_A$.
Finally, for the hybrid precoding/combining schemes for
N3, the main difficulty lies in the designs of $F_A$ and $W_A$, as $F_D$ and $W_D$ can be easily obtained based on $W_A^H H F_A$.
Therefore, it is essential to design an appropriate selecting
matrix to select DFT columns (beams) to form FA and WA .
Following this idea, a magnitude maximization (MM) beam-selection scheme was proposed in [169], where several beams with large power are selected. Alternatively, an interference-aware beam-selection scheme was proposed in [127]. The key
idea is to classify all users into two user groups according
to the potential inter-beam interference. For users with low
inter-beam interference, it directly selects the beams with large
power, while for users with severe inter-beam interference,
a low-complexity incremental algorithm was proposed. More
beam-selection schemes for N3 can be found in [170], [171].
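A minimal magnitude-based beam selection can be sketched as follows, using a toy multi-user beamspace channel with assumed dimensions (the interference-aware refinements of [127] are not reproduced): each user keeps the beam(s) in which its channel carries the most power, and the union of the selected beams determines which switches and RF chains are activated.

```python
import numpy as np

rng = np.random.default_rng(6)
n_beams, n_users, beams_per_user = 64, 4, 1

# Toy beamspace channels: each user's power is concentrated on a few beams.
H_beam = 0.05 * (rng.normal(size=(n_beams, n_users)) + 1j * rng.normal(size=(n_beams, n_users)))
for u in range(n_users):
    strong = rng.choice(n_beams, size=2, replace=False)
    H_beam[strong, u] += rng.normal(size=2) + 1j * rng.normal(size=2)

# Magnitude-maximization selection: per user, keep the strongest beam(s).
selected = set()
for u in range(n_users):
    order = np.argsort(np.abs(H_beam[:, u]) ** 2)[::-1]
    selected.update(order[:beams_per_user].tolist())

sel = sorted(selected)
H_reduced = H_beam[sel, :]                       # reduced-dimension channel
kept = np.linalg.norm(H_reduced) ** 2 / np.linalg.norm(H_beam) ** 2
print(f"beams kept: {len(sel)} of {n_beams}, channel power retained: {kept:.2f}")
```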
It is worth pointing out that although we have only discussed the single-user multi-stream scenario, the schemes designed for N3 can be extended to the multi-user multi-stream scenario.
In principle, we can replace the digital SVD precoder by the
classical zero-forcing (ZF) precoder to suppress multi-user
interference. However, such an extension is not straightforward
for the schemes designed for N1 and N2. To tackle this
problem, in [172], a two-stage precoding scheme was proposed
for architecture N1. In the first stage, FA is searched from a
predefined codebook to maximize the desired signal power
of each user. In the second stage, a digital precoder similar
to ZF precoder is designed to cancel multi-user interference.
The multi-user hybrid precoding scheme for N2, however, is
still an open problem requiring further research efforts.
Fig. 9. 1-bit ADC based architecture at the receiver.
E. Low-Resolution ADC Based Architecture
Besides the promising hybrid analog-digital architecture
discussed above, some other advanced architectures have also
been proposed to reduce the RF complexity of mmWave
MIMO systems. The low-resolution (e.g., 1-bit) ADC based
architecture7 is one typical example, as shown in Fig. 9 [131]. Different from the hybrid analog-digital architecture, which aims to reduce the number of RF chains, the key idea of the low-resolution ADC based architecture is to replace the high-resolution (e.g., 15-bit) ADCs by low-resolution ADCs to
reduce the energy consumption and hardware cost, while the
total number of RF chains or ADCs is still the same as that
in the fully digital architecture [131].
The main advantage of the low-resolution ADC based architecture is that it can significantly reduce the energy consumption and hardware cost, since it only requires a small number of comparators in each ADC [132]. Moreover, this architecture can also simplify other circuit modules such as the automatic gain control (AGC) [132]. It has been proved that the low-resolution ADC based architecture can achieve capacity-approaching performance in the low and medium SNR regions [133]. However, due to the severe quantization effect of low-resolution ADCs, the capacity of this architecture is usually limited in the high SNR region [133]. In addition, the nonlinearity of the quantization also imposes new challenges on the signal processing designs. Take the 1-bit ADC as an example: the sampled signal in the digital domain can only
be one of the two discrete values instead of one continuous
value. To solve this problem, some promising solutions have
been proposed. For example, in [134], a channel estimation
method using expectation-maximization (EM) algorithm is
proposed for low-resolution ADC based architecture to find the
maximum a posteriori probability estimate. In [135], a sumproduct-algorithm (SPA) based signal detector is designed by
utilizing the concept of clustered factor graph. In [136], the
authors develop an efficient algorithm for optimal Bayesian
data detection in the mmWave OFDM system with lowresolution ADCs. A power allocation (PA) scheme is also
proposed to minimize the average symbol error rate in [136].
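The following toy sketch, with an assumed scalar channel and QPSK symbols, simply shows what 1-bit quantization of the in-phase and quadrature components does to the received samples; this hard nonlinearity is what the EM-, SPA-, and Bayesian-detection methods above are designed to cope with.

```python
import numpy as np

def one_bit_adc(y):
    """1-bit quantization applied separately to the real and imaginary parts."""
    return np.sign(np.real(y)) + 1j * np.sign(np.imag(y))

rng = np.random.default_rng(7)
h = 0.8 + 0.3j                                   # assumed toy scalar channel
symbols = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)   # QPSK
s = rng.choice(symbols, size=8)
y = h * s + 0.1 * (rng.normal(size=8) + 1j * rng.normal(size=8))

r = one_bit_adc(y)
print("unquantized:", np.round(y, 2))
print("1-bit output:", r)                        # only the four values +/-1 +/- 1j survive
```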
Nevertheless, besides these works above, there are still some open issues for the low-resolution ADC based architecture [131], e.g., how to design signal processing for broadband mmWave MIMO channels, which require future investigation.
7 Note that the DACs at the transmitter usually consume less power than ADCs for mmWave MIMO systems. Therefore, employing low-resolution ADCs instead of DACs is more promising to reduce the energy consumption and hardware cost [131].
F. Recent Progress
In the current special issue of IEEE JSAC, the signal
processing schemes on mmWave MIMO are investigated
in [174]–[189]. Specifically, in [174], a receive antenna selection (RAS)-aided spatial modulation scheme is proposed to reduce the RF complexity, and an iterative algorithm is proposed
to search the antenna index with low complexity. In [175],
a channel estimation scheme is proposed by utilizing the
structured sparsity of mmWave MIMO channel, and a training
sequence (TS) is designed by the genetic algorithm. In [176],
a low complexity non-iterative interference alignment (IA)
scheme for multi-cell mmWave MIMO systems is proposed.
In [177], a CANDECOMP/PARAFAC decomposition-based
channel estimation scheme is proposed for wideband mmWave
MIMO by utilizing the tensor theory. The Cramer-Rao bounds
of the estimated channel parameters are also derived to show
the advantages of the proposed scheme. In [178], a new
antipodal curvedly tapered slot antenna is designed to generate
circularly polarized field. It can achieve high gains with low
hardware complexity in E-band and W-band. [179] proposes a
single-user multi-beam transmission scheme in the beamspace,
and the corresponding multi-beam selection, cooperative beam
tracking, multi-beam power allocation, and synchronization
are investigated. [180] develops a low-complexity channel estimation for hybrid mmWave MIMO systems, and investigates
the achievable sum-rate of zero forcing with the estimated
channel under the consideration of system imperfection. [181]
investigates the hybrid precoding for wideband mmWave
MIMO, and proposes a unified heuristic design for two different hybrid precoding architectures, i.e., the fully-connected
and the partially-connected architectures, to maximize the
overall spectral efficiency. [182] proposes a novel unified
hybrid precoding design for fully- and partially-connected
hybrid architectures from the view of energy efficiency instead
of spectral efficiency. [183] characterizes the gains of pilot
precoding and combining in terms of channel estimation quality and achievable data rate. [184] considers the sub-28 GHz
communications that do not exhibit enough directivity and
selectivity, and tackles the sum-rate maximization problem
based on the concept of difference of log and trace (DLT)
bound. [185] analyzes the achievable rate and energy efficiency
of hybrid precoding receivers with low resolution ADC, and
shows it robustness to small automatic gain control imperfections. [186] investigates the hybrid precoding design with antenna selection, and decomposes the whole problem into three
sub-problems which are solved via an alternating optimization
method. [187] proposes a hybrid precoding scheme based on
Kronecker decomposition for multi-cell multi-user mmWave
MIMO systems. [188] investigates beamforming training for
partially-connected hybrid architectures, and proposes two
multi-resolution time-delay codebooks. In [189], fundamental
limits of beam alignments are studied under different search
approaches (exhaustive or hierarchical).
V. ACCESS, BACKHAULING, AND COVERAGE
A. Multiple-access Technologies
Multiple-access technologies are necessary for supporting
multiple users in mobile networks, and they have been widely
investigated in the lower frequency bands. Different multiple
access technologies have been utilized in practical systems,
including frequency division multiple access (FDMA), time
division multiple access (TDMA), code division multiple
access (CDMA), and orthogonal frequency division multiple
access (OFDMA). These multiple-access technologies are also
applicable to mmWave, but in different flavors than in lower
frequency bands due to the increased complexity caused by
the greatly increased bandwidth, and the different channel
characteristics in mmWave bands, e.g., highly directional
transmissions. More importantly, as described in Section IV,
the shorter wavelengths at mmWave frequencies make MIMO
technology suitable for mmWave, since it is possible to use
more antennas in the same physical space. One of the key
advantages of MIMO is the spatial resolution and beamforming capability that it provides. In this subsection, major non-orthogonal multiple-access technologies that exploit the spatial or power domains will be discussed in the context of mmWave. This includes spatial division multiple access (SDMA), non-orthogonal multiple access (NOMA), and finally random access.
1) SDMA: In addition to the orders-of-magnitude larger
bandwidths, as compared to conventional systems operating
below 6 GHz, a multiplexing gain can be achieved by multiplexing users in the spatial domain. This technology, known as SDMA, was introduced in the 1990s [192], and Massive
MIMO is the latest branch of this tree [36]. 802.11ac was
the first major wireless standard that integrated SDMA [193].
SDMA can ideally increase the sum rate proportionally to
the number of multiplexed users, provided that the BS is
equipped with at least as many RF transceiver chains as
all the users have in total. Due to the highly directional
transmissions in mmWave communication systems, users from
different directions may be well separated using different
spatial beams, which is also known as favorable propagation
[191]. The design of transmit precoding/beamforming and
receive combining for SDMA is generally a mature topic
[194], [195], but the special channel estimation and channel
characteristics of hybrid systems require some further work on
this topic [196], [197].
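As a baseline illustration of spatial multiplexing, the sketch below applies zero-forcing precoding at an assumed fully digital BS with i.i.d. toy channels, so that each simultaneously served single-antenna user sees (ideally) only its own stream; in a hybrid mmWave implementation the same digital step would operate on the effective channel behind the analog beams.

```python
import numpy as np

rng = np.random.default_rng(8)
n_ant, n_users = 16, 4

# Toy downlink channel: one row per single-antenna user.
H = (rng.normal(size=(n_users, n_ant)) + 1j * rng.normal(size=(n_users, n_ant))) / np.sqrt(2)

# Zero-forcing precoder: pseudo-inverse of the multi-user channel, power-normalized.
F = np.linalg.pinv(H)
F /= np.linalg.norm(F, 'fro')

effective = H @ F                                 # ideally a (scaled) identity matrix
leakage = np.abs(effective - np.diag(np.diag(effective)))
print("max inter-user leakage:", leakage.max())  # ~0: users are spatially separated
```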
One key difference between systems operating in mmWave
band and at lower frequencies is the coverage area, which
is substantially smaller in mmWave. Hence, there are generally
fewer users to multiplex by SDMA in mmWave. Nevertheless,
one of the critical challenges of SDMA is how to serve
multiple users when the number of users is larger than the
number of antennas. It is necessary to group users so that users
from different groups may access the BS at the same time,
while not causing significant interference to each other. After
realizing the user grouping, user scheduling should also be
considered to select the users from different groups to access
the BS at the same time. Users in the same group can be
served orthogonally (or semi-orthogonally) in time, frequency,
or code domain. A few effective scheduling algorithms have
been proposed in [202]–[210] to enhance throughput.
2) NOMA: An alternative to the non-orthogonal spatial
multiplexing provided by SDMA is to perform non-orthogonal
multiple access (NOMA) in the power domain. This approach
is considered as one of the candidates for improving the
spectral efficiency and connectivity density in 5G [223]–
[227]. At the BS, different signals intended for different
users are superimposed on each other after classic channel
coding and modulation. Multiple users share the same timefrequency resources, and are then detected at the receivers
by successive interference cancellation (SIC). In this way,
the spectral efficiency can be enhanced at the cost of an
increased receiver complexity compared to conventional orthogonal multiple access (OMA), where the potential interference is treated as noise. NOMA has been considered in
mmWave communications in the recent literature. Specifically,
the performance of NOMA in mmWave communications was
evaluated in [228] and [229], and simulation results have
shown that NOMA can achieve better channel capacity than
OMA in both uplink and downlink mmWave communication
systems. Furthermore, the capacity performance of NOMA-mmWave-massive-MIMO systems was investigated in [230],
and simulation results indicated that enormous capacity improvement can be achieved compared to the existing LTE
systems. In addition, sum rate and outage probabilities of
mmWave-NOMA systems with random beamforming were
analyzed in [231], where two users can be served by NOMA
in each beam. Furthermore, a transmission scheme that uses
NOMA in mmWave beamspace MIMO has been proposed
in [232]. By using intra-beam superposition coding and SIC
under the framework of NOMA in the proposed beamspace
MIMO-NOMA system, more than one user can be simultaneously supported in each beam, which is different from
conventional SDMA, where only the signal intended for one
user is transmitted in each beam. Consequently, the number of
served users can be significantly increased for a given number
of beams and RF chains, and the system achievable sum
rate can be also improved. Besides, beam division multiple
access (BDMA) with per-beam synchronization (PBS) in time
and frequency for wideband massive MIMO transmission
over mmWave/Terahertz (THz) bands has been proposed in
[233]; beam scheduling for both UL and DL BDMA and a greedy beam scheduling algorithm have also been developed.
Additionally, the design challenges of NOMA in mmWave due to beamforming were investigated in [234]. Note that with the high pathloss in mmWave communications, the interference experienced by the users in NOMA can be significantly reduced [228]. Considering this harmony between the mmWave channel characteristics and the principle of NOMA, the use of NOMA for mmWave communications is a research direction that deserves further investigation.
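The power-domain principle behind NOMA can be illustrated with a two-user toy computation (single-antenna links; the channel gains and the power split are arbitrary illustrative assumptions): the weak user is allocated most of the power and treats the strong user's signal as noise, while the strong user first decodes and cancels the weak user's signal (SIC) before decoding its own.

```python
import numpy as np

# Toy two-user downlink NOMA example (all values are illustrative assumptions).
g_strong, g_weak = 10.0, 1.0          # channel power gains |h|^2 / noise power
p_strong, p_weak = 0.2, 0.8           # power split (more power to the weak user)

# Weak user: decodes its own signal, treating the strong user's signal as noise.
r_weak = np.log2(1 + g_weak * p_weak / (g_weak * p_strong + 1))

# Strong user: first decodes and removes the weak user's signal (SIC), then its own.
r_strong = np.log2(1 + g_strong * p_strong)

# OMA baseline: orthogonal time sharing with full power in each half.
r_oma_weak = 0.5 * np.log2(1 + g_weak)
r_oma_strong = 0.5 * np.log2(1 + g_strong)

print(f"NOMA rates (bps/Hz): strong {r_strong:.2f}, weak {r_weak:.2f}")
print(f"OMA  rates (bps/Hz): strong {r_oma_strong:.2f}, weak {r_oma_weak:.2f}")
```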
3) Random access: This is primarily used for initial access
and handover, which is very important in system design.
Since random access cannot fully benefit from beamforming
due to the lack of information on the best transmit-receive
beam pair, the design of the random access channel becomes
more challenging in mmWave communication systems [211]–
[213]. An overview of random access in mmWave mobile
networks has been presented in [211], where the important
issues for the design of a random access channel with respect
to initial access, handover, uplink-downlink configuration, and
scheduling have been discussed in detail.
Some related works focus on lower frequencies in ad hoc wireless network scenarios or,
more recently, on the 60 GHz IEEE 802.11ad WLAN and
WPAN scenarios [214]. While the authors in [215] considered an exhaustive method which sequentially scans the 360
degree angular space, the authors in [216] proposed the use
of directional cell discovery procedure where the angular
space is scanned in time-varying random directions using
synchronization signals periodically transmitted by the base
stations. In [217], different scanning and signaling methods
were compared, with respect to the initial access design
options, in order to assess the respective access delays and
system overheads. The analysis shows that low-resolution fully
digital architectures have significant benefits when compared
with single-stream analog beamforming. In order to reduce
the delay due to exhaustive search procedures, the authors
in [211], [217], [218] implemented a faster user discovery
method which employs a two-stage hierarchical procedure,
while the use of context information about user and/or BS
positions, provided by a separate control plane, was considered
in [214], [219]. In a refinement to the method in [219], the
procedure to capture the effects of position inaccuracy and
obstacles was implemented in [220]. In addition, the use
of booster cells which operate at mm-Waves and under the
coverage of a microwave-based anchor cell was proposed by
the authors in [221]. In this arrangement, the booster BS gets
information about user locations from the anchor BS, enabling
it to directly steer the transmit beams towards the user position.
Finally, the authors in [222] showed that the performance of
analog beamforming degrades in the presence of errors in the
available context information provided during the initial cell
search procedures.
B. Backhauling
To meet the aggressive 5G KPIs discussed in Section I,
an UDN deployment is considered as a promising system
architecture to enable Gbps user experience and seamless
coverage in mobile networks [235], [236]. In other words,
many small-cell BSs are densely deployed as hotspots (e.g., in
office buildings, shopping malls, residential apartments) that
would greatly offload the macro cells. Hence, the backhaul
between the macro-cell BS and the associated small-cell BSs
should provide large bandwidth with reliable link transmission.
Besides, energy efficiency and deployment cost are also key
considerations for operators. It has been demonstrated that
backhaul links with 1–10 Gbps capacity are required to effectively
support UDN [235]. Conventional optical fiber supports large
data rates and reliable link transmission, but its application
to UDN as backhaul may not be an economical choice for
operators due to the restriction of deployment and installation
[237]. Hence, wireless backhaul is more attractive to overcome
the geographical constraints. Microwave backhaul in sub-6
GHz and 20 GHz bands has been successfully deployed in current mobile networks. However, the available licensed spectrum in these bands is limited and insufficient to meet the demand of 5G backhauling, and thus mmWave with wider
bandwidth is preferred.
Using mmWave bands for backhauling in UDN is desirable
due to the following reasons [237].
• Large bandwidth: The large amount of underutilized
mmWave including unlicensed V-band (57–67 GHz) and
lightly licensed E-band (71–76 GHz and 81–86 GHz) (the
specific regulation may vary from country to country)
can provide potential GHz transmission bandwidth. For
example, more than 1 Gbps backhaul capacity can be
supported over a 250 MHz channel in the E-band [238].
• Reduced interference: The coverage distance for E-band
is up to several km due to rain attenuation, while that for
V-band is about 50–700 m due to both the rain and oxygen
attenuation. Due to high pathloss, mmWave is suitable
for UDN, where improved frequency reuse and reduced
inter-cell interference are expected. It should be pointed
out that rain attenuation is not a big issue for mmWave
used for backhaul in UDN. For example, if we consider
very heavy rainfall of 25 mm/h, the rain attenuation is
only around 2 dB in the E-band for a backhaul link of
200 m [238].
• In-band backhauling: The backhaul and user access
links are conventionally carried out in different frequency
bands. Multiplexing backhaul and access on the same
mmWave frequency band, also named in-band backhaul
[235], has obvious cost benefits from the hardware and
frequency reuse perspective. However, as in any in-band backhauling scenario, the interference between the
backhaul and the user access links must be controlled;
for example, by beamforming and signal processing.
C. Coverage and Connectivity
Since mmWave signals in general have high penetration
loss, they are very sensitive to the blockage by walls and other
objects. In [242], a mathematical framework for modeling the
random blockages that occur when users are moving around
was developed by leveraging concepts from random shape
theory. In this model, both BSs and users are assumed to form
a homogeneous Poisson point process (PPP) on the plane.
Random buildings are modeled as rectangles with random
sizes and orientations whose centers form a PPP on the plane.
Let $K$ denote the total number of blockages crossing a link. As shown in [242], $K$ follows a Poisson distribution with mean $\beta R + p$, where $\beta = 2\lambda(\mathbb{E}[W] + \mathbb{E}[L])/\pi$, $p = \lambda \mathbb{E}[L]\mathbb{E}[W]$, and $\mathbb{E}[L]$ and $\mathbb{E}[W]$ represent the average length and width of a random blockage building, respectively. Then, the probability that a link of length $R$ is free from blockage decays exponentially with $R$ and is given by
$$ P(K = 0) = e^{-(\beta R + p)}. \qquad (9) $$
It should be noted that in practical mmWave systems the
parameters β and p in (9) need to be obtained through
experimental fitting.
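For a quick numerical feel of (9), the snippet below evaluates the blockage-free (LoS) probability for a few link lengths; the building density and the average building dimensions are purely illustrative assumptions and would in practice come from the experimental fitting mentioned above.

```python
import numpy as np

lam = 3e-4           # assumed building density (buildings per m^2)
EL, EW = 15.0, 10.0  # assumed average building length and width (m)

beta = 2 * lam * (EW + EL) / np.pi
p = lam * EL * EW

for R in (25, 50, 100, 200):                       # link lengths in meters
    p_los = np.exp(-(beta * R + p))                # equation (9)
    print(f"R = {R:4d} m: P(LoS) = {p_los:.3f}")
```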
A visible LoS region of location X is defined as the set of
locations which can be connected to X with a LoS link. If
we consider the case where the blockages are impenetrable,
the average size of a visible region in a cellular network can
be shown to be $2\pi e^{-p}/\beta^2$ [242], and the average number of BSs which have a LoS link with a user is $2\pi\mu e^{-p}/\beta^2$, where $\mu$ is the density of the BS PPP. Therefore, in order
to achieve acceptable coverage, we can either increase the
density of the BS deployment or reduce the physical length of
the communication links between two nodes in the network by
deploying more intermediate relays. The paper [173] systematically studies the blockage problem of mmWave networks.
The impact of reflection and relaying are also investigated. It is
shown that if the LoS signals are blocked, the reflection signals
may be useful in some regions, while in other regions relaying
is preferred. Reference [173] also studies the optimal routing
problem in multiple relay mmWave networks. Recently, an
optimal opportunistic strategy for access point deployment is
proposed in [247] to balance the overhead and capacity
under random blockage.
[243] takes a stochastic geometry approach to the connectivity of mmWave networks with multi-hop relaying. It is
shown that multi-hop relaying can greatly improve the connectivity compared to the single-hop mmWave transmission.
The results also show that to obtain near-optimal connectivity
the relaying route window should be about the size of the
obstacle buildings. The coverage probability is an important
performance metric in a mmWave cellular network. It is
defined as the probability that the destination is able to receive
a signal with an SNR above a certain threshold $T$:
$$ P_c(T) = \Pr(\mathrm{SNR} > T). \qquad (10) $$
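Definition (10) can be evaluated directly by Monte-Carlo simulation. The sketch below does so for a deliberately simple single-link model, an assumed mean SNR with Rayleigh (exponential-power) fading, merely to make the definition operational; it does not reproduce the stochastic-geometry settings analyzed in the cited works.

```python
import numpy as np

rng = np.random.default_rng(9)
n_trials = 100_000
mean_snr = 10.0                                   # assumed average SNR (linear scale)

# Toy model: exponentially distributed instantaneous SNR (Rayleigh fading).
snr = mean_snr * rng.exponential(size=n_trials)

for T_dB in (0, 5, 10):
    T = 10 ** (T_dB / 10)
    p_cov = np.mean(snr > T)                      # empirical version of (10)
    print(f"threshold {T_dB:2d} dB: coverage probability = {p_cov:.3f}")
```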
To further improve the coverage, [244] studies the possibility
of BS cooperation in the downlink of mmWave networks in a
stochastic geometry framework. It is shown that cooperation
among randomly located BSs can effectively increase the
coverage probability.
By using tools from stochastic geometry, a general and
tractable framework for coverage analysis with arbitrary distributions for interference power and arbitrary antenna patterns
was developed and applied to mmWave ad hoc networks
exploiting directional antenna arrays in [245]. It is shown
that the coverage probabilities of mmWave ad hoc networks
increase as a non-decreasing concave function with the antenna
array size. Numerical results also show that large-scale antenna
arrays are required for satisfactory coverage in mm-wave
networks. To further enhance the connectivity and session
continuity, multi-connectivity strategies were developed to
leverage multiple simultaneous small cell connections in ultradense urban deployments of mmWave networks in [246].
The benefits of multi-connectivity strategies were investigated
by taking into account: (i) the intricacies of mmWave radio
propagation in realistic urban environments; (ii) the dynamic
mmWave link blockage due to human mobility; and (iii)
the multi-connectivity network behavior to preserve session
continuity. Results show that even simpler multi-connectivity
schemes bring notable improvements to session-level mmWave
operation in realistic environments.
To achieve a robust coverage and connectivity of mmWave
networks, a heterogeneous mm-wave network architecture
consisting of small-cell BSs operating at mmWave bands and
macro-cell BSs, operating at microwave frequencies, should be
considered in practice. The macro-cells will be used as a signaling network for cell discovery and signaling transmission to
guarantee full coverage and provide reliable control channels.
The small-cell BSs form a data subnetwork that provides high
data rates using mmWave bands for the users that they cover.
VI. STANDARDIZATION AND DEPLOYMENT
Despite the huge potential and large number of benefits of using mmWave in mobile networks, there is still much skepticism, particularly from investors, as to whether the technology is suitable for cellular coverage and mobility scenarios. Due to the great potential of mmWave communications as described in previous sections, 3GPP is working towards standardizing mmWave in the 5G New Radio (NR) interface. More specifically, 3GPP is working on Release 14 (to be finalized around June 2017), which will include the channel modeling for radio above 6 GHz. From the second half of 2017, 3GPP will work on Release 15, which will deliver the first set of 5G standards. Commercial use of the unlicensed mmWave bands is not new territory; these bands have been used since the 1980s. However, the existing standards and applications have only been for static ultra-high-definition multimedia data transfer. Specifically, IEEE
802.11ad/aj for wireless local area network (WLAN), IEEE
802.15.3c and ECMA-387 for wireless personal area network
(WPAN), and WirelessHD for video area networking (VAN) are standards at unlicensed 60 GHz and 45 GHz for providing
short-range point-to-point (P2P) communications.
In this section, we will introduce the key features of the
aforementioned existing and in-development mmWave standards, followed by an overview of the plan for commercial
cellular deployment.
A. 3GPP’s New Radio at mmWave Band
1) Vision and use cases: 5G is envisioned to provide
orders-of-magnitude improvements in the peak data rate,
network capacity, latency, availability, and reliability over
legacy networks. Meanwhile, the deployment should cooperate
seamlessly with legacy networks and provide fundamental
shifts in cost and energy efficiency [120]. As discussed in
Section I, 3GPP has defined three use cases for 5G NR since
2016; namely, eMBB, mMTC and URLLC. eMBB is targeted
for mobile broadband services that require extraordinary data
rates; mMTC is the basis for connectivity in Internet of Things
(IoT); and URLLC is needed for applications which have
stringent latency and reliability requirements. 3GPP has also
identified its interest in the frequency bands above 6 GHz
and up to 100 GHz for NR. The channel model for above
6 GHz is defined in 3GPP Specification 38.900/38.901 [21],
[98]. Network operators around the world are exploring the
possibility of using parts of this spectrum for licensed mobile communications to meet diversified services from their
customers. mmWave is naturally envisioned to provide eMBB
services where high-speed data transmission is needed. This is
particularly important in small cells and dense urban scenarios.
Other applications are backhaul for last-mile fiber replacement,
wireless fronthauls, etc. However, the research has not been
very clear on how mmWave affects the latency and reliability
aspects.
2) mmWave and massive MIMO: The 5G vision is pushing
for a fundamental change in the mobile networks, starting with
the lowest physical layer (PHY). The two core PHY technologies that will set 5G NR apart from previous radio access
technologies (RATs), are mmWave and massive MIMO. Both
put forth different paradigms that make a break with many
current understandings in the wireless propagation, signal
processing, device manufacturing and eventual network design
[113]. Moreover, there is a clear trend that mmWave and
massive MIMO will work in cooperation to ensure a successful
5G operation. The great SDMA capability of massive MIMO
is ideal in the frequency bands below 6 GHz, where a large
area is covered and many users with NLoS channels need
to be served simultaneously. The high capacity of mmWave
communications is ideal for hotspots that need not guarantee
coverage or support mobility, but can provide great service to
the LoS users that are static or moving at pedestrian speed. In
3GPP, the maximum number of the RF chains for the NR BS
and UE is determined as 32 and 8 respectively. The maximum
number of antenna elements can go up to 1024 at the BS and
64 at the UE (for 70 GHz), respectively.
3) Hybrid beamforming Architecture: At the PHY layer,
the hybrid analog-digital beamforming technique was agreed
to be used in 5G systems at the 3GPP RAN1 meeting in 2016
[125]. As discussed in Section IV, the hybrid architecture may
be used by both the BSs and UEs. It is worth noting that
the hybrid beamforming would not only be used to boost the
data rate for data transmission, but also be used at the control
channel to enhance cell coverage. Thereafter, it is deemed necessary to have Physical Uplink Control Channel (PUCCH) to
include CSI that is related to analog beamforming information.
As for the Physical Downlink Control Channel (PDCCH), it is
desirable to have a common design for high and low frequency
bands. Both uniform antenna arrays (i.e., antenna elements
with the same polarization from multiple panels are uniformly
distributed in horizontal and vertical dimensions, respectively)
and non-uniform antenna arrays [250] (i.e., antenna elements
with same polarization from multiple panels are not uniformly
distributed in horizontal or vertical dimension) should be
supported in 5G systems. This means that a flexible design
of the hybrid beamforming is needed, which is not limited to
calibrated arrays of a particular geometry. Spatial beams can be
generated from separate antenna panels (or different sections
of the antenna array) to serve different users. Meanwhile, it
is possible to allocate different transmission tasks (i.e., data
channel and control channel) to each subarray. Nevertheless,
fully digital beamforming is not disregarded by industry, and
low-resolution beamforming could work at the lower end of
the mmWave spectrum with manageable complexity.
4) PHY layer design: Some common understandings on
the way forward for the standardization of the PHY layer
for mmWave communications have been agreed upon. More
TABLE III
SUMMARY OF MMWAVE PERFORMANCE MEASUREMENT CAMPAIGNS
Huawei: frequency bands 28 GHz [252] and 73 GHz [253]; architecture digital (@28 GHz) and analog (@73 GHz); peak throughput 35 Gbps (@73 GHz).
QUALCOMM: frequency band 28 GHz [254]; hybrid architecture; peak throughput 10 Gbps.
Ericsson: frequency band 28 GHz [257]; hybrid architecture; peak throughput 14 Gbps.
Samsung: frequency band 28 GHz [256]; hybrid architecture; peak throughput 7.5 Gbps.
Nokia: frequency bands 28 GHz and 73 GHz [255]; architecture hybrid (@28 GHz) and RF (@73 GHz); peak throughput 11 Gbps.
detailed discussion and specifications are still open and are
expected to be finalized in the 3GPP Release 15 in 2018.
Research should focus on, but not be limited to, the following areas: i) Beam-management, which includes the beamsweeping procedure, beam-selection based on CSI feedback or
beam-reciprocity assumptions, beam-tracking and recovery for
mobility support, etc.; ii) the corresponding Reference Signal
(RS) design for beam-management; and iii) Control channel
design.
Beam-sweeping is a main method for the analog part of
the hybrid beamforming, and its procedure design has a
big impact on the system implementation. Open questions include, to name a few, how to design an efficient and reliable sweeping procedure during the initial access stage, how to perform beam-tracking for a UE as the propagation channel changes (due to fast fading or mobility), and how to maintain a connection (i.e., beam-recovery) in the case of link failure and/or blockage.
A well designed beam-selection mechanism can ensure good
signal strength to reap the benefits of data transmission at the
mmWave frequencies with a wide bandwidth. The accuracy of
the selection is dependent on the analog beam-feedback, the
digital precoding matrix feedback and the Channel Quality
Index (CQI) feedback. The common understanding is that the
UE will report measurement results on different BS transmit
beams to aid beam-selection at the BS.
B. Prototypes and Deployment plan
There have been tremendous efforts from various organizations (vendors, operators, universities, etc.) in building hardware platforms for mmWave channel measurement (as described in Section II) and for proof-of-concept prototyping. This is a necessary step to test different frequency bands, different use cases, and their respective KPIs before mass deployment. In 2016 alone, there were many feasibility and functionality tests of potential key techniques proposed for mmWave. A large quantity of measurement data has been accumulated, which is vital for clarifying the system architecture and the PHY design, as well as for the subsequent standardization and commercialization of mmWave communication. Table III lists some known operational prototypes and their key system parameters.
Current measurements are largely focused on four areas of
interest.
1) Basic throughput performance: the maximum number of reported spatial streams is two (not counting dual-polarization of antenna elements) and the maximum spectral efficiency is less than 20 bps/Hz (see the short calculation after this list).
2) NLoS transmission: metal and concrete walls pose high penetration loss; heavy foliage causes severe shadowing loss; reflection loss is reasonable, but diffraction loss is high.
3) Coverage: indoor ≤ 100 m; urban outdoor: ≤ 350 m;
outdoor-to-indoor: ≤ 20 m.
4) Beam-tracking: it can be done for a single user at pedestrian speed (15 km/h) or low vehicular speed (40 km/h); spatial multi-user tracking is still lacking.
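The short calculation below relates the peak rates reported in Table III to spectral efficiency via rate = spectral efficiency × bandwidth; the channel bandwidths used here are assumed values for illustration, not the vendors' actual test configurations.

# Hypothetical back-of-the-envelope check: peak rate = spectral efficiency * bandwidth.
# The bandwidth values below are assumptions, not the reported test settings.
for rate_gbps, bw_ghz in [(7.5, 0.8), (14.0, 1.0), (35.0, 2.0)]:
    se = rate_gbps / bw_ghz  # bps/Hz, since Gbps / GHz
    print(f"{rate_gbps} Gbps over an assumed {bw_ghz} GHz channel -> {se:.1f} bps/Hz")

Under these assumed bandwidths, all of the resulting spectral efficiencies fall below the 20 bps/Hz figure quoted in item 1).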
The next phase of testing will focus on feasibility in more difficult use cases for a single channel, for instance, an outdoor BS serving indoor UEs, maximum outdoor coverage tests, high-mobility/vehicular scenarios, beam-tracking for multiple UEs, dense urban services, multi-cell networking, and core network support. This will require consumer equipment to be developed and ready for such tests, so that end-to-end performance evaluation becomes possible. It is most likely that mass deployment of mmWave technology for mobile networks will occur after 2020, since progress still largely hinges on the standardization process and the resolution of the aforementioned challenges.
VII. CONCLUSIONS
Despite its high potential to provide multi-Gbps rates, many technical challenges have to be solved for mmWave communications to become a mainstream technology in mobile networks. In recent years, large efforts have been made to tackle the various challenges and many excellent results have been reported. This tutorial paper has summarized the recent technical progress in mmWave communications for mobile networks, including channel measurement/modeling, MIMO design, multiple access, performance analysis, standardization, and deployment. Many directions for future research have also been identified. From our point of view, finding effective solutions for applying mmWave technology in high-mobility environments, enabling enhanced transmission distance, combating hardware impairments, and achieving high energy-efficiency are very important and interesting challenges to tackle. From a broader perspective, the RF implementation of mmWave technology is very important, and a long-term goal should be to obtain a cost-efficient fully digital implementation.
REFERENCES
[1] Ericsson AB, “Traffic exploration tool, interactive online tool”, Available
at: http://www.ericsson.com/TET/trafficView/loadBasicEditor.ericsson8.
[2] “IMT vision-framework and overall objectives of the future development
of IMT for 2020 and beyond,” ITU-R M. 2083-0, Sep. 2015.
[3] Afif Osseiran et al., “Scenarios for 5G mobile and wireless communications: the vision of the METIS project,” IEEE Commun. Mag., vol. 52,
no. 5, pp. 26–35, May 2014.
[4] J. G. Andrews, S. Buzzi, W. Choi, S. V. Hanly, A. Lozano, A. C. Soong,
and J. C. Zhang, “What will 5G be?” IEEE J. Sel. Areas Commun.,
vol. 32, no. 6, pp. 1065–1082, Jun. 2014.
[5] Z. Pi and F. Khan, “An introduction to millimeter-wave mobile broadband
systems,” IEEE Commun. Mag., vol. 49, no. 6, pp. 101–107, Jun. 2011.
[6] E. Björnson, E. G. Larsson, and T. L. Marzetta, “Massive MIMO: Ten
myths and one critical question,” IEEE Commun. Mag., vol. 54, no. 2,
pp. 114–123, Feb. 2016.
[7] T. S. Rappaport et al., “Millimeter wave mobile communications for 5G
cellular: It will work!” IEEE Access, vol. 1, pp. 335–349, May 2013.
[8] J. Du and R. A. Valenzuela, “How Much Spectrum is Too Much in
Millimeter Wave Wireless Access” IEEE J. Selected Area on Commun.,
2017 (to appear).
[9] F. Boccardi, R. W. Heath, A. Lozano, T. L. Marzetta, and P. Popovski,
“Five disruptive technology directions for 5G,” IEEE Commun. Mag., vol.
52, no. 2, pp. 74–80, Feb. 2014.
[10] F. W. Vook, E. Visotsky, T. A. Thomas and A. Ghosh, “Performance
Characteristics of 5G mmWave Wireless-to-the-Home,” In Proc. Asilomar
2016.
[11] H. Huang, C. B. Papadias, S. Venkatesan,“MIMO Communication for
Cellular Networks” Springer US, 2012.
[12] S. Rangan, T. S. Rappaport, and E. Erkip, “Millimeter-wave cellular
wireless networks: Potentials and challenges,” Proc. IEEE, vol. 102, no.
3, pp. 366–385, Mar. 2014.
[13] A. Ghosh, et. al., “Millimeter-wave Enhanced Local Area Systems: A
High-Data-Rate Approach for Future Wireless Networks,” IEEE Journal
on Selected Areas in Commun., vol. 32, no. 6, pp. 1152–1163, June 2014.
[14] R. W. Heath, N. Gonzalez-Prelcic, S. Rangan, W. Roh, and A. Sayeed,
“An overview of signal processing techniques for millimeter wave MIMO
systems,” IEEE J. Sel. Top. Signal Process., vol. 10, no. 3, pp. 436–453,
Apr. 2016.
[15] C. E. Shannon, “A mathematical theory of communication,” Bell Sys.
Tech. J., vol. 27, no. 3, pp. 379–423, Jul. 1948.
[16] C. Dehos, J. L. Gonzalez, A. D. Domenico, D. Ktenas, and L. Dussopt,
“Millimeter-wave access and backhauling: The solution to the exponential
data traffic increase in 5G mobile communications systems?” IEEE
Commun. Mag., vol. 52, no. 9, pp. 88–95, Sep. 2014.
[17] X. Ge, H. Cheng, M. Guizani, and T. Han, “5G wireless backhaul
networks: Challenges and research advances,” IEEE Netw., vol. 28, no.
6, pp. 6–11, Jun. 2014.
[18] Z. Gao, L. Dai, D. Mi, Z. Wang, M. A. Imran, and M. Z. Shakir,
“MmWave massive-MIMO-based wireless backhaul for the 5G ultradense network,” IEEE Wireless Commun., vol. 22, no. 5, pp. 13–21, Oct.
2015.
[19] Z. Pi, J. Choi, and R. Heath, “Millimeter-wave gigabit broadband
evolution toward 5G: Fixed access and backhaul,” IEEE Commun. Mag.,
vol. 54, no. 4, pp. 138–144, Apr. 2016.
[20] S. Singh, M. N. Kulkarni, A. Ghosh, and J. G. Andrews, “Tractable
model for rate in self-backhauled millimeter wave cellular networks,”
IEEE J. Sel. Areas Commun., vol. 33, no. 10, pp. 2196–2211, Oct. 2015.
[21] 3GPP TR 38.901, http://www.3gpp.org/DynaReport/38901.htm.
[22] K. Haneda, et. al. “Indoor 5G 3GPP-like Channel Models for Office
and Shopping Mall Environments,” in Proc. 2016 IEEE International
Conference on Communications Workshops (ICCW), May 2016.
[23] R.-A. Pitaval, O. Tirkkonen, R. Wichman, K. Pajukoski, E. Lahetkangas,
and E. Tiirola, “Full-duplex self-backhauling for small-cell 5G networks,”
IEEE Wireless Commun., vol. 22, no. 5, pp. 83–89, Oct. 2015.
[24] R. Baldemair et al., “Ultra-dense networks in millimeter-wave frequencies,” IEEE Commun. Mag., vol. 53, no. 1, pp. 202–208, Jan. 2015.
[25] X. Ge, S. Tu, G. Mao, C. Wang, and T. Han, “5G ultra-dense cellular
networks,” IEEE Wireless Commun., vol. 23, no. 1, pp. 72–79, Feb. 2016.
[26] S. Samarakoon, M. Bennis, W. Saad, M. Debbah, and M. Latva-aho,
“Ultra dense small cell networks: Turning density into energy efficiency,”
IEEE J. Sel. Areas Commun., vol. 34, no. 5, pp. 1267–1280, May 2016.
[27] D. T. Emerson, “The work of Jagadis Chandra Bose: 100 years of
millimeter-wave research,” IEEE Trans. Microwave Theory and Tech.,
vol. 45, no. 12, pp. 2267–2273, Dec. 1997.
[28] R. E. Ziemer, “An overview of millimeter wave communications,”in
European Micorwave Conference, 1985.
[29] H. H. Hmimy, S. C. Gupta, “Performance of frequency-hopped NPCSMA for broad-band personal communication services (B-PCS) at millimeter waves in an urban mobile radio environment,”IEEE Trans. Vehi.
Tech., vol. 49, no. 1, pp. 90–97, Jan. 1999.
[30] H. Xu, V. Kukshya, T. S. Rappaport, “Spatial and temporal characteristics of 60-GHz indoor channels,”IEEE Journal of Selected Area in
Communications, vol. 20, no. 3, pp. 620–630, Apr. 2002.
[31] X. Zhang and J. G. Andrews, “Downlink Cellular Network Analysis
With Multi-Slope Path Loss Models,” IEEE Trans. Commun., vol. 63,
no. 6, pp. 1881–1894, May 2015.
[32] G. R. MacCartney, S. Sun, T. S. Rappaport et al., “Millimeter wave
wireless communications: New Results for Rural Connectivity,” in Proc.
All Things Cellular 16: 5th workshop on All things cellular proceedings,
in conjunction with ACM mobiCom, Oct. 2016.
[33] C. Kourogiorgas, S. Sagkriotis, and A. D. Panagopoulos, “Coverage and
outage capacity evaluation in 5G millimeter wave cellular systems: Impact
of rain attenuation,” in Proc. 9th European Conf. Antennas Propagation
(EuCAP), Apr. 2015, pp. 1–5.
[34] Y. P. Zhang, P. Wang, and A. Goldsmith, “Rainfall effect on the
performance of millimeter-wave MIMO systems,” IEEE Trans. Wireless
Commun., vol. 14, no. 9, pp. 4857–4866, Sep. 2015.
[35] “E-Band Technology,” E-Band Communications, available: http://www.e-band.com/index.php?id=86.
[36] F. Rusek, D. Persson, B. K. Lau, E. G. Larsson, T. L. Marzetta,
O. Edfors, and F. Tufvesson, “Scaling up MIMO: Opportunities and
challenges with very large arrays,” IEEE Signal Process. Mag., vol. 30,
no. 1, pp. 40–60, Jan. 2013.
[37] A. Ghosh, “The 5G mmWave radio revolution,” Microwave Journal,
vol. 59, no. 9, pp. 22–36, Sep. 2016.
[38] J. E. Wieselthier, G. D. Nguyen, and A. Ephremides, “Energy-limited
wireless networking with directional antennas: The case of sessionbased multicasting,” in IEEE 21st Annual Joint Conference of the IEEE
Computer and Communications Societies (INFOCOM 2002), Jun. 2002,
pp. 190–199.
[39] I. Kang, R. Poovendran, and R. Ladner, “Power-efficient broadcast routing in adhoc networks using directional antennas: Technology dependence
and convergence issues,” University of Washington, Washington, USA,
Tech. Rep. UWEETR-2003-0015, 2003.
[40] S. Singh, R. Mudumbai, and U. Madhow, “Interference analysis for
highly directional 60-Ghz mesh networks: The case for rethinking
medium access control,” IEEE/ACM Transa. Net, vol. 19, no. 5, pp. 1513–
1527, May 2011.
[41] UMTS: Spatial channel model for Multiple Input Multiple Output
(MIMO) simulations. ETSI 3rd Generation Partnership Project (3GPP).
Sophia Antipolis Cedex, France. 3GPP TR 25.996, V12.0.0.
[42] J. Yu, Y.-D. Yao, A. F. Molisch, and J. Zhang, “Performance evaluation
of CDMA reverse links with imperfect beamforming in a multicell
environment using a simplified beamforming model,” IEEE Trans Veh.
Tech., vol. 55, no. 3, pp. 1019–1031, Mar. 2006.
[43] J. Shen and L. W. Pearson, “The phase error and beam-pointing error in
coupled oscillator beam-steering arrays,” IEEE Trans. Antennas Propag.,
vol. 53, no. 1, pp. 386–393, Jan. 2005.
[44] H. Li, Y.-D. Yao, and J. Yu, “Outage probabilities of wireless systems
with imperfect beamforming,” IEEE Trans Veh. Tech., vol. 55, no. 5, pp.
1503–1515, 2006.
[45] T. Kim, B. Clerckx, D. J. Love and S. J. Kim, “Limited Feedback
Beamforming Systems for Dual-Polarized MIMO Channel,” IEEE Trans.
Wireless Commun., vol. 9, no. 11, pp. 3425–3439, Nov. 2010.
[46] A. W. Doff, K. Chandra, and R. V. Prasad, “Sensor assisted movement
identification and prediction for beamformed 60 GHz links,” in 12th
Annual IEEE Consumer Communications and Networking Conference
(CCNC), 2015, pp. 648–653.
[47] S. Hur, T. Kim, D. J. Love, J. V. Krogmeier, T. Thomas, A. Ghosh et al.,
“Millimeter wave beamforming for wireless backhaul and access in small
cell networks,” IEEE Trans. Commun., vol. 61, no. 10, pp. 4391–4403,
Oct. 2013.
[48] C.-S. Choi, Y. Shoji, H. Harada, R. Funada, S. Kato, K. Maruhashi,
I. Toyoda, and K. Takahashi, “RF impairment models for 60GHz-band
SYS/PHY simulation,” Tech. Rep., IEEE 802.15-06-0477-01-003c, Nov.
2006.
[49] E. Björnson, P. Zetterberg, M. Bengtsson, and B. Ottersten, “Capacity
limits and multiplexing gains of MIMO channels with transceiver impairments,” IEEE Commun. Lett., vol. 17, no. 1, pp. 91–94, Jan. 2013.
[50] E. Björnson, M. Matthaiou, and M. Debbah, “Massive MIMO with nonideal arbitrary arrays: Hardware scaling laws and circuit-aware design,”
IEEE Trans. Wireless Commun., vol. 14, no. 8, pp. 4353–4368, Aug.
2015.
[51] C. Rapp, “Effects of HPA-nonlinearity on a 4-DPSK/OFDM-signal for
a digital sound broadcasting system,” in Proc. of the Second European
Conference on Satellite Communications, Liege, Belgium, Oct. 1991.
[52] A. Saleh, “Frequency-independent and frequency-dependent nonlinear
models of TWT amplifiers,” IEEE Trans. Commun., vol. 29, no. 11, pp.
1715–1720, Nov. 1981.
[53] B. Razavi, “Design of Millimeter-Wave CMOS Radios: A Tutorial,”
IEEE Transactions on Circuits and Systems, vol. 56, no. 1, Jan. 2009.
[54] S. K. Yong, P. Xia, and A. Valdes-Garcia, “60GHz Technology for Gbps
WLAN and WPAN, John Wiley & Sons, 2011.
[55] A. M. Niknejad and H. Hashemi, “Mm-Wave Silicon Technology, 60
GHz and Beyond, Springer, 2008.
[56] B. Razavi, “Design considerations for direct-conversion receivers,” IEEE
Transactions on Circuits and Systems II, vol. 44, no. 6, pp. 428-435, June
1997.
[57] T. S. Rappaport, J. N. Murdock and F. Gutierrez, “State of the art in
60-GHz integrated circuits and systems for wireless communications,”
Proceedings of the IEEE, vol. 99, no. 8, pp. 1390–1436, Aug. 2011.
[58] mmMAGIC Deliverable D2.1, “Measurement Campaigns and Initial
Channel Models for Preferred Suitable Frequency Ranges,” Mar. 2016
[Online]. Available: https://5g-mmmagic.eu/results/#deliverables.
[59] M. Kim, J. Takada, Y. Chang, J. Shen, and Y. Oda, “Large scale
characteristics of urban cellular wideband channels at 11 GHz,” in Proc.
9th European Conf. Ant. Prop (EuCAP), 2015, pp. 1–4.
[60] H. Masui, M. Ishii, K. Sakawa, H. Shimizu, T. Kobayashi, and M.
Akaike, “Microwave path-loss characteristics in urban LOS and NLOS
environments,” in Proc. IEEE Vehi. Technol. Conf. (VTC Spring), 2001,
pp. 395–398.
[61] M. Sasaki, W. Yamada, T. Sugiyama, M. Mizoguchi, and T. Imai, “Path
loss characteristics at 800 MHz to 37 GHz in urban street microcell
environment,” in Proc. 9th European Conf. Ant. Prop. (EuCAP), 2015,
pp. 1–4.
[62] S. Rajagopal, S. Abu-Surra, and M. Malmirchegini, “Channel feasibility
for outdoor non-line-of-sight mmWave mobile communication,” in Proc.
IEEE Veh. Technol. Conf. (VTC Fall), 2012, pp. 1–6.
[63] Y. Azar et al., “28 GHz propagation measurements for outdoor cellular
communications using steerable beam antennas in New York city,” in
Proc. IEEE Int. Conf. Commun. (ICC), 2013, pp. 5143–5147.
[64] M. Samimi et al., “28 GHz angle of arrival and angle of departure analysis for outdoor cellular communications using steerable beam antennas in
New York city,” in Proc. IEEE Veh. Technol. Conf. (VTC Spring), 2013,
pp. 1–6.
[65] G. R. MacCartney, M. K. Samimi, and T. S. Rappaport, “Omnidirectional path loss models in New York City at 28 GHz and 73 GHz,”
in Proc. IEEE Int. Symp. Personal, Indoor and Mobile Radio Commun.
(PIMRC), 2014, pp. 227–231.
[66] H. Zhao et al., “28 GHz millimeter wave cellular communication
measurements for reflection and penetration loss in and around buildings
in New York city,” in Proc. IEEE Int. Conf. Commun. (ICC), 2013, pp.
5163–5167.
[67] S. Nie, G. R. MacCartney, S. Sun, and T. S. Rappaport, “28 GHz and
73 GHz signal outage study for millimeter wave cellular and backhaul
communications,” in Proc. IEEE Int. Conf. Commun. (ICC), 2014, pp.
4856–4861.
[68] C. Larsson, F. Harrysson, B.-. Olsson, and J.-E. Berg, “An outdoor-toindoor propagation scenario at 28 GHz,” in Proc. 8th European Conf.
Ant. Prop (EuCAP), 2014, pp. 3301–3304.
[69] S. Deng, M. K. Samimi, and T. S. Rappaport, “28 GHz and 73 GHz
millimeter-wave indoor propagation measurements and path loss models,”
in Proc. IEEE Int. Conf. Commun. Workshop (ICCW), 2015, pp. 1244–
1250.
[70] G. R. Maccartney, T. S. Rappaport, S. Sun, and S. Deng, “Indoor
office wideband millimeter-wave propagation measurements and channel
models at 28 and 73 GHz for ultra-dense 5G wireless networks,” IEEE
Access, vol. 3, pp. 2388–2424, Oct. 2015.
[71] X. Wu, Y. Zhang, C. Wang, G. Goussetis, E-H. M. Aggoune, and M.
M. Alwakeel, “28 GHz indoor channel measurements and modelling in
laboratory environment using directional antennas,” in Proc. 9th European
Conf. Ant. Prop (EuCAP), 2015, pp. 1–5.
[72] M. Kim, J. Liang, H. Kwon, and J. Lee, “Path loss measurement at
indoor commercial areas using 28GHz channel sounding system,” in Proc.
17th Int. Conf. Advanced Commun. Technol. (ICACT), 2015, pp. 535–538.
[73] S. Hur, Y. Cho, T. Kim, J. Park, A. F. Molisch, K. Haneda, and M. Peter,
“Wideband spatial channel model in an urban cellular environments at 28
GHz,” in Proc. 9th European Conf. Ant. Prop (EuCAP), 2015, pp. 1–5.
[74] S. Hur et al., “Proposal on millimeter-wave channel modeling for 5G
cellular system,” IEEE J. Sel. Topics Signal Process., vol. 10, no. 3, pp.
454–469, Mar. 2016.
[75] T. S. Rappaport, E. Ben-Dor, J. N. Murdock, and Y. Qiao, “38 GHz and
60 GHz angle-dependent propagation for cellular & peer-to-peer wireless
communications,” in Proc. IEEE Int. Commun. Conf. (ICC), 2012, pp.
4568–4573.
[76] J. N. Murdock, E. Ben-Dor, Y. Qiao, J. I. Tamir, and T. S. Rappaport, “A
38 GHz cellular outage study for an urban outdoor campus environment,”
in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), 2012, pp. 3085–
3090.
[77] T. S. Rappaport, F. Gutierrez, E. Ben-Dor, J. N. Murdock, Y. Qiao,
and J. I. Tamir, “Broadband millimeter-wave propagation measurements
and models using adaptive-beam antennas for outdoor urban cellular
communications,” IEEE Trans. Antenna. Propag., vol. 61, no. 4, pp.
1850–1859, Apr. 2013.
[78] H. J. Thomas, R. S. Cole, and G. L. Siqueira, “An experimental study
of the propagation of 55 GHz millimeter waves in an urban mobile radio
environment,” IEEE Trans. Veh. Technol., vol. 43, no. 1, pp. 140–146,
Jan. 1994.
[79] G. Lovnes, J. J. Reis, and R. H. Raekken, “Channel sounding measurements at 59 GHz in city streets,” in Proc. IEEE 5th Int. Symp. Personal,
Indoor, and Mobile Radio Commun. (PIMRC), 1994, pp. 496–500.
[80] M. Kyro, K. Haneda, J. Simola, K. Nakai, K. i. Takizawa, H. Hagiwara,
and P. Vainikainen, “Measurement based path loss and delay spread
modeling in hospital environments at 60 GHz,” IEEE Trans. Wireless
Commun., vol. 10, no. 8, pp. 2423–2427, Aug. 2011.
[81] E. Ben-Dor, T. S. Rappaport, Y. Qiao, and S. J. Lauffenburger,
“Millimeter-wave 60 GHz outdoor and vehicle AOA propagation measurements using a broadband channel sounder,” in Proc. IEEE Global
Telecommun. Conf. (GLOBECOM), 2011, pp. 1–6.
[82] W. Keusgen, R. J. Weiler, M. Peter, M. Wisotzki, and B. Göktepe,
“Propagation measurements and simulations for millimeter-wave mobile
access in a busy urban environment,” in Proc. 39th Int. Conf. Infrared,
Millimeter, and Terahertz waves (IRMMW-THz), 2014, pp. 1–3.
[83] R. J. Weiler, M. Peter, T. Kühne, M. Wisotzki, and W. Keusgen,
“Simultaneous millimeter-wave multi-band channel sounding in an urban
access scenario,” in Proc. 9th European Conf. Ant. Prop. (EuCAP), 2015,
pp. 1–5.
[84] L. Simic, N. Perpinias, and M. Petrova, “60 GHz outdoor urban
measurement study of the feasibility of multi-Gbps mm-wave cellular
networks,” http://arxiv.org/abs/1603.02584.
[85] S. Nie, G. R. MacCartney, S. Sun, and T. S. Rappaport, “72 GHz
millimeter wave indoor measurements for wireless and backhaul communications,” in Proc. IEEE 24th Int. Symp. Personal, Indoor, Mobile
Radio Commun. (PIMRC), 2013, pp. 2429–2433.
[86] G. R. MacCartney and T. S. Rappaport, “73 GHz millimeter wave
propagation measurements foroutdoor urban mobile and backhaul communications in New York city,” in Proc. IEEE Int. Conf. Commun. (ICC),
2014, pp. 4862–4867.
[87] M. Kyrö, S. Ranvier, V.-M. Kolmonen, K. Haneda, and P. Vainikainen,
“Long range wideband channel measurements at 81-86 GHz frequency
range,” in Proc. 4th European Conf. Ant. Prop., 2010, pp. 1–5.
[88] A. Roivainen, C. Ferreira Dias, N. Tervo, V. Hovinen, M. Sonkki, and
M. Latva-aho, “Geometry-based stochastic channel model for two-story
lobby environment at 10 GHz,” in IEEE Trans. on Antennas Propagation,
vol. 64, no. 9, pp 3990-4003, Sept. 2016.
[89] J. Huang, C. X. Wang, R. Feng, J. Sun, W. Zhang and Y. Yang,
“Multi-Frequency MmWave Massive MIMO Channel Measurements and
Characterization for 5G Wireless Communication Systems,” in IEEE
Journals on Selected Areas in Communications, June 2017.
[90] G. R. MacCartney and T. S. Rappaport, “Rural Macrocell Path Loss
Models for Millimeter Wave Wireless Communications,” in IEEE Journals on Selected Areas in Communications, June 2017.
[91] B. Ai, K. Guan, R. He, J. Li, G. Li, D. He, Z. Zhong, and K. M. Huq,
“On Indoor Millimeter Wave Massive MIMO Channels: Measurement
and Simulation,” in IEEE Journals on Selected Areas in Communications,
June 2017.
[92] A. I. Sulyman, A. T. Nassar, M. K. Samimi, G. R. Maccartney, T. S.
Rappaport, and A. Alsanie, “Radio propagation path loss models for 5G
cellular networks in the 28 GHz and 38 GHz millimeter-wave bands,”
IEEE Commun. Mag., vol. 52, no. 9, pp. 78–86, Sept. 2014.
[93] M. R. Akdeniz, Y. Liu, M. K. Samimi, S. Sun, S. Rangan, T. S.
Rappaport, and E. Erkip, “Millimeter wave channel modeling and cellular
capacity evaluation,” IEEE J. Sel. Areas Commun., vol. 32, no. 6, pp.
1164–1179, Jun. 2014.
[94] G. R. Maccartney, T. S. Rappaport, M. K. Samimi, and S. Sun,
“Millimeter-wave omnidirectional path loss data for small cell 5G channel
modeling,” IEEE Access, vol. 3, pp. 1573–1580, Aug. 2015.
[95] T. S. Rappaport, G. R. MacCartney, M. K. Samimi, and S. Sun, “Wideband millimeter-wave propagation measurements and channel models for
future wireless communication system design,” IEEE Trans. Commun.,
vol. 63, no. 9, pp. 3029–3056, Sept. 2015.
[96] M. K. Samimi, T. S. Rappaport, and G. R. MacCartney, “Probabilistic
omnidirectional path loss models for millimeter-wave outdoor communications,” IEEE Wireless Commun. Lett., vol. 4, no. 4, pp. 357–360, Aug.
2015.
[97] S. Sun et al., “Investigation of prediction accuracy and parameter
stability of large-scale propagation path loss models for 5G wireless
communications,” IEEE Trans. Veh. Technol., vol. 65, no. 5, pp. 2843–
2860, May 2016.
[98] 3GPP TR38.900: http://www.3gpp.org/ftp/Specs/archive/38_series/38.900
/38900-100.zip
[99] D. S. Baum, J. Hansen, and J. Salo, “An interim channel model for
beyond-3G systems: extending the 3GPP spatial channel model (SCM),”
in Proc. IEEE Veh. Technol. Conf. (VTC Spring), 2005, pp. 3132–3136.
[100] “Final Report on Link Level and System Level Channel Models,”
WINNER Deliverable 5.4, IST-2003-507581, Nov. 2005.
[101] “WINNER II Channel Models,” WINNER II, Deliverable 1.1.2, IST4-027756, Sep. 2007.
[102] “WINNER+ Final Channel Models,” CP5-026 WINNER+, Deliverable
5.3, Jun. 2010.
[103] Study on 3D channel model for LTE, 3GPP TR36.873, June 2015.
[104] A. Kammoun, H. Khanfir, Z. Altman, M. Debbah, and M. Kamoun,
“Preliminary results on 3D channel modeling: From theory to standardization,” IEEE J. Sel. Areas Commun., vol. 32, no. 6, pp. 1219–1229,
June 2014.
[105] L. Correia, “Mobile broadband multimedia networks,” Elsevier, 2006,
ch. 6.8: The COST 273 MIMO channel model, pp. 364–383.
[106] L. Liu et al., “The COST 2100 MIMO channel model,” IEEE Commun.
Mag., vol. 19, no. 6, pp. 92–99, Dec. 2012.
[107] S. Jaeckel, L. Raschkowski, K. Borner, and L. Thiele, “QuaDRiGa:
A 3-D multi-cell channel model with time evolution for enabling virtual
fieldtrials,” IEEE Trans. Antenna Propag., vol. 62, no. 6, pp. 3242–3256,
Jun. 2014.
[108] A. Maltsev et al., “Channel models for 60 GHz WLAN systems,” Jan.
2010.
[109] “Channel modeling and characterization,” FP7-ICT-608637, MiWEBA,
Deliverable 5.1, Jun. 2014.
[110] “Guidelines for evaluation of radio interface technologies for IMTAdvanced,” ITU-R M.2135-1, Dec. 2009.
[111] “METIS Channel Models,” ICT-317669 METIS, Deliverable 1.4, Jul.
2015.
[112] K. Haneda et. al., “5G 3GPP-like Channel Models for Outdoor Urban
Microcellular and Macro cellular Environments,” IEEE Vehicular Technical Conferences (VTC), 2016.
[113] S. Han, C.-L. I, Z. Xu, and C. Rowell, “Large-scale antenna systems
with hybrid precoding analog and digital beamforming for millimeter
wave 5G,” IEEE Commun. Mag., vol. 53, no. 1, pp. 186–194, Jan. 2015.
[114] J. Kim and I. Lee, “802.11 WLAN: History and new enabling MIMO
techniques for next generation standards,” IEEE Commun. Mag., vol. 53,
no. 3, pp. 134–140, Mar. 2015.
[115] R. Mendez-Rial, C. Rusu, A. Alkhateeb, N. González-Prelcic, and
R. W. Heath, “Channel estimation and hybrid combining for mmWave:
Phase shifters or switches?” in Proc. ITA Workshops, Feb. 2015, pp. 90–
97.
[116] R. Méndez-Rial, C. Rusu, N. González-Prelcic, A. Alkhateeb, and
R. W. Heath, “Hybrid MIMO architectures for millimeter wave communications: Phase shifters or switches?” IEEE Access, vol. 4, pp. 247–267,
Jan. 2016.
[117] O. El Ayach, S. Rajagopal, S. Abu-Surra, Z. Pi, and R. W. Heath,
“Spatially sparse precoding in millimeter wave MIMO systems,” IEEE
Trans. Wireless Commun., vol. 13, no. 3, pp. 1499–1513, Mar. 2014.
[118] F. Sohrabi and W. Yu, “Hybrid beamforming with finite-resolution
phase shifters for large-scale MIMO systems,” in Proc. IEEE SPAWC
Workshops, Jul. 2015, pp. 136–140.
[119] A. Alkhateeb, Y.-H. Nam, J. Zhang, and R. W. Heath, “Massive MIMO
combining with switches,” IEEE Wireless Commun. Lett., vol. 5, no. 3,
pp. 232–235, Jun. 2016.
[120] X. Gao, L. Dai, Y. Sun, S. Han, and C.-L. I., “Machine learning inspired
energy-efficient hybrid precoding for mmwave massive MIMO systems,”
in Proc. IEEE ICC’17, Paris, France, May 2017.
[121] X. Gao, L. Dai, S. Han, C.-L. I, and R. W. Heath, “Energy-efficient
hybrid analog and digital precoding for mmWave MIMO systems with
large antenna arrays,” IEEE J. Sel. Areas Commun., vol. 34, no. 4, pp.
998–1009, Apr. 2016.
[122] X. Zhang, A. F. Molisch, and S.-Y. Kung, “Variable-phase-shift-based
RF-baseband codesign for MIMO antenna selection," IEEE Trans. Signal
Process., vol. 53, no. 11, pp. 4091-4103, Nov. 2005.
[123] P. Sudarshan, N. B. Mehta, A. F. Molisch, and J. Zhang, “Channel
statistics-based RF pre-processing with antenna selection," IEEE Trans.
Wireless Commun., vol. 5, no. 12, pp. 3501-3511, Dec. 2006.
[124] V. Venkateswaran and A. Veen, “Analog beamforming in MIMO communications with phase shift networks and online channel estimation,"
IEEE Trans. Signal Process., vol. 58, no. 8, pp. 4131-4143, Aug. 2010.
[125] 3GPP, “Final report of 3GPP TSG RAN WG1 #85,” 2016, available
at: http://www.3gpp.org.
[126] J. Brady, N. Behdad, and A. Sayeed, “Beamspace MIMO for
millimeter-wave communications: System architecture, modeling, analysis, and measurements,” IEEE Trans. Ant. and Propag., vol. 61, no. 7,
pp. 3814–3827, Jul. 2013.
[127] X. Gao, L. Dai, Z. Chen, Z. Wang, and Z. Zhang, “Near-optimal
beam selection for beamspace mmWave massive MIMO systems,” IEEE
Commun. Lett., vol. 20, no. 5, pp. 1054–1057, May 2016.
[128] N. Behdad and A. Sayeed, “Continuous aperture phased MIMO: Basic
theory and applications," in Proc. Allerton Conference, Sep. 2010, pp.
1196-1203.
[129] A. F. Molisch and X. Zhang, “FFT-based hybrid antenna selection
schemes for spatially correlated MIMO channels," IEEE Commun. Lett.,
vol. 8, no. 1, pp. 36-38, Jan. 2004.
[130] A. Adhikary, J. Nam, J.-Y. Ahn, and G. Caire, “Joint spatial division
and multiplexing: The large-scale array regime," IEEE Trans. Inf. Theory,
vol. 59, no. 10, pp. 6441-6463, Oct. 2013.
[131] A. Alkhateeb, J. Mo, N. Gonzalez-Prelcic, and R. W. Heath, “MIMO
precoding and combining solutions for millimeter-wave systems," IEEE
Commun. Mag., vol. 52, no. 12, pp. 122-131, Dec. 2014.
[132] B. Le, T. W. Rondeau, J. H. Reed, and C. W. Bostian, “Analog-todigital converters," IEEE Signal Process. Mag., vol. 22, no. 6, pp. 69-77,
Nov. 2005.
[133] J. Mo and R. W. Heath, “Capacity analysis of one-bit quantized MIMO
systems with transmitter channel state information," IEEE Trans. Signal
Process., vol. 63, no. 20, pp. 5498-5512, Oct. 2015.
[134] A. Mezghani, F. Antreich, and J. Nossek, “Multiple parameter estimation with quantized channel output," in Proc. ITG Workshop on Smart
Antennas, Feb. 2010, pp. 143-150.
[135] S. Wang, Y. Li, and J. Wang, “Multiuser detection in massive spatial
modulation MIMO with low-resolution ADCs," IEEE Trans. Wireless
Commun., vol. 14, no. 4, pp. 2156-2168, Apr. 2015.
[136] H. Wang, C. K. Wen and S. Jin, “Bayesian Optimal Data Detector for
mmWave OFDM System with Low-Resolution ADC," to appear IEEE J.
Selected Areas on Commun., 2017.
[137] Z. Gao, C. Hu, L. Dai, and Z. Wang, “Channel estimation for
millimeter-wave massive MIMO with hybrid precoding over frequencyselective fading channels,” IEEE Commun. Lett., vol. 20, no. 6, pp. 1259–
1262, Jun. 2016.
[138] J. Kotecha and A. Sayeed, “Transmit signal design for optimal estimation of correlated MIMO channels,” IEEE Trans. Signal Process., vol. 52,
no. 2, pp. 546–557, Feb. 2004.
[139] M. Biguesh and A. Gershman, “Training-based MIMO channel estimation: A study of estimator tradeoffs and optimal training signals,” IEEE
Trans. Signal Process., vol. 54, no. 3, pp. 884–893, Mar. 2006.
[140] A. Alkhateeb and R. W. Heath, “Frequency selective hybrid precoding
for limited feedback millimeter wave systems,” IEEE Trans. Wireless
Commun., vol. 64, no. 5, pp. 1801–1818, May 2016.
[141] A. Alkhateeb, O. El Ayach, G. Leus, and R. W. Heath, “Channel
estimation and hybrid precoding for millimeter wave cellular systems,”
IEEE J. Sel. Top. Signal Process., vol. 8, no. 5, pp. 831–846, Oct. 2014.
[142] J. Wang, Z. Lan, C.-W. Pyo, T. Baykas, C.-S. Sum, M. A. Rahman,
J. Gao, R. Funada, F. Kojima, H. Harada et al., “Beam codebook based
beamforming protocol for multi-Gbps millimeter-wave WPAN systems,”
IEEE J. Sel. Areas Commun.
[143] W. Shen, L. Dai, B. Shim, S. Mumtaz, and Z. Wang, “Joint CSIT
acquisition based on low-rank matrix completion for FDD massive MIMO
systems,” IEEE Commun. Lett., vol. 19, no. 12, pp. 2178–2181, Dec.
2015.
[144] W. Shen, L. Dai, Y. Shi, B. Shim, and Z. Wang, “Joint channel
training and feedback for FDD massive MIMO systems,” IEEE Trans.
Veh. Technol., vol. 65, no. 10, pp. 8762–8767, Oct. 2016.
[145] J. Zhang, Y. Huang, Q. Shi, J. Wang, and L. Yang, “Codebook
design for beam alignment in millimeter wave communication systems,”
submitted for publication.
[146] Z. Xiao, T. He, P. Xia, and X.-G. Xia, “Hierarchical codebook design
for beamforming training in millimeter-wave communication,” IEEE
Trans. Wireless Commun., vol. 15, no. 5, pp. 3380–3392, May 2016.
[147] X. Gao, L. Dai, C. Yuen, and Z. Wang, “Turbo-like beamforming based
on tabu search algorithm for millimeter-wave massive MIMO systems,”
IEEE Trans. Veh. Technol., vol. 65, no. 7, pp. 5731–5737, Jul. 2016.
[148] T. Datta, N. Srinidhi, A. Chockalingam, and B. S. Rajan, “Randomrestart reactive tabu search algorithm for detection in large-MIMO systems,” IEEE Commun. Lett., vol. 14, no. 12, pp. 1107–1109, Dec. 2010.
[149] T. Kim and D. J. Love, “Virtual AoA and AoD estimation for sparse
millimeter wave MIMO channels,” in Proc. SPAWC Workshops, Jun.
2015, pp. 146–150.
[150] X. Gao, L. Dai, S. Han, C.-L. I, and X. Wang, “Reliable beamspace
channel estimation for millimeter-wave massive MIMO systems with lens
antenna array,” to appear in IEEE Trans. Wireless Commun., 2017.
[151] A. Alkhateeb, G. Leus, and R. W. Heath Jr, “Compressed sensing
based multi-user millimeter wave systems: How many measurements are
needed?” in Proc. ICASSP, Apr. 2015, pp. 2909–2913.
[152] K. Venugopal, A. Alkhateeb, N. G. Prelcic, and R. W. Heath, “Channel
Estimation for Hybrid Architecture Based Wideband Millimeter Wave
Systems," IEEE J. Sel. Areas Commun., 2017, (To appear).
[153] J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Trans. Inf. Theory, vol. 53,
no. 12, pp. 4655–4666, Dec. 2007.
[154] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak, “Compressed
channel sensing: A new approach to estimating sparse multipath channels,” Proc. IEEE, vol. 98, no. 6, pp. 1058–1076, Jun. 2010.
[155] Z. Gao, L. Dai, and Z. Wang, “Structured compressive sensing based
superimposed pilot design in downlink large-scale MIMO systems,”
Electron. Lett., vol. 50, no. 12, pp. 896–898, Jun. 2014.
[156] L. Dai and X. Gao, “Priori-aided channel tracking for millimeter-wave
beamspace massive MIMO systems,” in Proc. IEEE RADIO, Jul. 2016,
pp. 1493–1496.
[157] M. Cudak, T. Kovarik, T. A. Thomas, A. Ghosh, Y. Kishiyama, and
T. Nakamura, “Experimental mmWave 5G cellular system,” in Proc. IEEE
Globecom Workshops, Dec. 2014, pp. 377–381.
[158] C. Zhang, D. Guo, and P. Fan, “Tracking angles of departure and arrival
in a mobile millimeter wave channel,” in Proc. IEEE ICC, May 2016,
pp. 1–6.
[159] N. Kabaoğlu, “Target tracking using particle filters with support vector
regression,” IEEE Trans. Veh. Technol., vol. 58, no. 5, pp. 2569–2573,
Jun. 2009.
[160] Y. Zhou, P. C. Yip, and H. Leung, “Tracking the direction-of-arrival
of multiple moving targets by passive arrays: Asymptotic performance
analysis,” IEEE Trans. Signal Process., vol. 47, no. 10, pp. 2644–2654,
Oct. 1999.
[161] X. Gao, L. Dai, T. Xie, X. Dai, and Z. Wang, “Fast channel tracking for
terahertz beamspace massive MIMO systems,” to appear in IEEE Trans.
Veh. Technol., 2017.
[162] R. A. Iltis, “A tracking mode receiver for joint channel estimation and
detection of asynchronous CDMA signals,” Conference Record of the
Thirty-Third Asilomar Conference on Signals, Systems, and Computers,
vol. 2, 1999
[163] T. Yoo and A. Goldsmith, “Capacity and power allocation for fading
MIMO channels with channel estimation error,” IEEE Trans. Inf. Theory,
vol. 52, no. 5, pp. 2203–2214, Apr. 2006.
[164] S. He, J. Wang, Y. Huang, B. Ottersten, and W. Hong, “Codebook based
hybrid precoding for millimeter wave multiuser systems,” submitted for
publication.
[165] S. He, C. Qi, Y. Wu, and Y. Huang, “Energy-efficient transceiver design
for hybrid sub-array architecture MIMO systems,” to appear in IEEE
Access, 2017.
[166] F. Sohrabi and W. Yu, “Hybrid digital and analog beamforming design
for large-scale antenna arrays,” IEEE J. Sel. Top. Signal Process., vol. 10,
no. 3, pp. 501–513, Apr. 2016.
[167] X. Yu, J.-C. Shen, J. Zhang, and K. B. Letaief, “Alternating minimization algorithms for hybrid precoding in millimeter wave MIMO systems,”
IEEE J. Sel. Top. Signal Process., vol. 10, no. 3, pp. 485–500, Mar. 2016.
[168] O. El Ayach, R. W. Heath, S. Rajagopal, and Z. Pi, “Multimode
precoding in millimeter wave MIMO transmitters with multiple antenna
sub-arrays,” in Proc. IEEE GLOBECOM, Dec. 2013, pp. 3476–3480.
[169] A. Sayeed and J. Brady, “Beamspace MIMO for high-dimensional
multiuser communication at millimeter-wave frequencies,” in Proc. IEEE
GLOBECOM, Dec. 2013, pp. 3679–3684.
[170] P. Amadori and C. Masouros, “Low RF-complexity millimeter-wave
beamspace-MIMO systems by beam selection,” IEEE Trans. Commun.,
vol. 63, no. 6, pp. 2212–2222, Jun. 2015.
[171] J. Hogan and A. Sayeed, “Beam selection for performance-complexity
optimization in high-dimension MIMO systems,” in Proc. CISS, Mar.
2016, pp. 337–342.
[172] A. Alkhateeb, G. Leus, and R. W. Heath, “Limited feedback hybrid
precoding for multi-user millimeter wave systems,” IEEE Trans. Wireless
Commun., vol. 14, no. 11, pp. 6481–6494, Nov. 2015.
[173] G. Yang, J. Du, and M. Xiao, “Maximum throughput path selection
with random blockage for indoor 60 GHz relay networks,” IEEE Trans.
Commun., vol. 63, no. 10, pp. 3511–3524, Oct. 2015.
[174] P. Yang, Y. Xiao, Y. Guan, Z. Liu, S. Li, and W. Xiang, “Adaptive SMMIMO for mmWave communications with reduced RF chains," IEEE J.
Sel. Areas Commun., 2017 (To Appear).
[175] X. Ma, F. Yang, S. Liu, J. Song, and Z. Han, “Design and optimization
on training sequence for mmWave communications: A new approach
for sparse channel estimation in massive MIMO," IEEE J. Sel. Areas
Commun., 2017 (To Appear).
[176] C. Wang, C. Qin, Y. Yao, and Y. Li, and W. Wang, “Low complexity
interference alignment for mmWave MIMO channels in three-cell mobile
network," IEEE J. Sel. Areas Commun., 2017 (To Appear)..
[177] Z. Zhou, J. Fang, L. Yang, H. Li, Z. Chen, and R. S. Blum, “Low-rank
tensor decomposition-aided channel estimation for millimeter wave MI
MO-OFDM systems," IEEE J. Sel. Areas Commun., 2017 (To Appear).
[178] Y. Yao, X. Cheng, C. Wang, J. Yu, and X. Chen, “Wideband circularly polarized antipodal curvedly tapered slot antenna array for 5G applications," IEEE J. Sel. Areas Commun., 2017 (To Appear).
[179] Q. Xue, X. Fang, and C. Wang, “Beamspace SU-MIMO for future
millimeter wave wireless communications," IEEE J. Sel. Areas Commun.,
2017 (To Appear).
[180] L. Zhao, D. W. Kwan Ng, and J. Yuan, “Multi-user precoding and
channel estimation for hybrid millimeter wave systems," IEEE J. Sel.
Areas Commun., 2017 (To Appear).
[181] F. Sohrabi and W. Yu, “Hybrid analog and digital beamforming
for mmWave OFDM large-scale antenna arrays," IEEE J. Sel. Areas
Commun., 2017 (To Appear).
[182] C. G. Tsinos, S. Maleki, S. Chatzinotas, and Björn Ottersten, “On
the energy-efficiency of hybrid analog-digital transceivers for single- and
multi-carrier large antenna array systems," IEEE J. Sel. Areas Commun.,
2017 (To Appear).
[183] N. N. Moghadam, H. Shokri-Ghadikolaei, G. Fodor, M. Bengtsson,
and C. Fischione, “Pilot precoding and combining in multiuser MIMO
networks," IEEE J. Sel. Areas Commun., 2017, (To Appear).
[184] H. Ghauch, T. Kim, M. Bengtsson, and M. Skoglund, “Sum-rate maximization in sub-28 GHz millimeter-wave MIMO interfering networks,"
IEEE J. Sel. Areas Commun., 2017, (To Appear).
[185] K. Roth and J. A. Nossek, “Achievable rate and energy efficiency
of hybrid and digital beamforming receivers with low resolution ADC,"
IEEE J. Sel. Areas Commun., 2017, (To Appear).
[186] X. Zhai, Y. Cai, Q. Shi, M. Zhao, G. Y. Li, and B. Champagne, “Joint
transceiver design with antenna selection for large-scale MU-MIMO
mmWave systems," IEEE J. Sel. Areas Commun., 2017, (To Appear).
[187] G. Zhu, K. Huang, V. K. N. Lau, B. Xia, X. Li, and S. Zhang,
“Hybrid beamforming via the kronecker decomposition for the millimeterwave massive MIMO systems," IEEE J. Sel. Areas Commun., 2017, (To
Appear).
[188] C. Lin, G. Y. Li, and L. Wang, “Subarray-based coordinated beamforming training for mmWave and sub-THz communications," IEEE J.
Sel. Areas Commun., 2017, (To Appear).
[189] C. Liu, M. Li, S. Hanly, I. Collings and P. Whiting, “Millimeter wave
beamforming alignment: large deviations analysis and design insights,"
IEEE J. Sel. Areas Commun., 2017, (To Appear).
[190] T. Bai, A. Alkhateeb, and R. W. Heath, “Coverage and capacity of
millimeter-wave cellular networks,” IEEE Commun. Mag., vol. 52, no. 9,
pp. 70–77, Sep. 2014.
[191] H. Q. Ngo, E. G. Larsson, and T. L. Marzetta, “Aspects of favorable
propagation in massive MIMO,” European Signal Processing Conference
(EUSIPCO), Sep. 2014, pp. 76–80.
[192] R. H. Roy and B. Ottersten, “Spatial division multiple access wireless
communication systems,” US Patent 5515378, 1991.
[193] IEEE Computer Society, “Wireless LAN medium access control
(MAC) and physical layer (PHY) specifications: Enhancements for very
high throughput for operation in bands below 6GHz,” IEEE P802.11ac,
Draft 0.1, Jan. 2011.
[194] E. Björnson, E. Jorswieck, “Optimal resource allocation in coordinated
multi-cell systems,” Foundations and Trends in Communications and
Information Theory, vol. 9, no. 2, pp. 113–381, 2013.
[195] T. L. Marzetta, E. G. Larsson, H. Yang, H. Q. Ngo, “Fundamentals of
Massive MIMO,” Cambridge University Press, 2016.
[196] G. Kwon and H. Park, “A joint scheduling and millimeter wave
hybrid beamforming system with partial side information,” in Proc. IEEE
International Conference on Communications (IEEE ICC’16), May 2016,
pp. 1–6.
[197] S. Sun, T. S. Rappaport, R. W. Heath, A. Nix, and S. Rangan,
“MIMO for millimeter-wave wireless communications: Beamforming,
spatial multiplexing, or both?,” IEEE Commun. Mag., vol. 52, no. 12,
pp. 110-121, Dec. 2016.
[198] C. Yiu and S. Singh, “Empirical capacity of mmWave WLANs,” IEEE
J. Sel. Areas Commun., vol. 27, no. 8, pp. 1479–1487, Oct. 2009.
[199] A. Adhikary, E. Al Safadi, M. K. Samimi, R. Wang, G. Caire, T. S.
Rappaport, and A. F. Molisch, “Joint spatial division and multiplexing
for mm-wave channels,” IEEE J. Sel. Areas Commun., vol. 32, no. 6, pp.
1239–1255, Jun. 2014.
[200] C. Sun, X. Gao, S. Jin, M. Matthaiou, Z. Ding, and C. Xiao, “Beam division multiple access transmission for massive MIMO communications,”
IEEE Trans. Commun., vol. 63, no. 6, pp. 2170–2184, Jun. 2015.
[201] P. Cao and J. S. Thompson, “Practical multi-user transmission design in
millimeter wave cellular networks: Is the joint SDMA-TDMA technique
the answer?,” in Proc. IEEE International Workshop on Signal Processing
Advances in Wireless Communications (IEEE SPAWC’16), Jul. 2016, pp.
1–5.
[202] C. Zhang, Y. Huang, Y. Jing, S. Jin, and L. Yang, “Sum-Rate Analysis
for Massive MIMO Downlink with Joint Statistical Beamforming and
User Scheduling,” IEEE Transactions on Wireless Communications, vol. 16, no. 4, pp. 2181–2194, 2017.
[203] H. Miao, M. Faerber, M. Fresia, and V. Frascolla, “Joint beamfrequency multiuser scheduling for millimeter-wave downlink multiplexing,” in Proc. IEEE Vehicular Technology Conference (IEEE VTC
Spring’16), May 2016, pp. 1–5.
[204] C.-S. Sum, Z. Lan, M. A. Rahman, J. Wang, T. Baykas, R. Funada,
H. Harada, and S. Kato, “A multi-Gbps millimeter-wave WPAN system
based on STDMA with heuristic scheduling,” in Proc. IEEE Global
Communications Conference (IEEE GLOBECOM’09), Dec. 2009, pp. 1–6.
[205] C.-S. Sum and H. Harada, “Scalable heuristic STDMA scheduling
scheme for practical multi-Gbps millimeter-wave WPAN and WLAN
systems,” IEEE Trans. Wireless Commun., vol. 11, no. 7, pp. 2658–2669,
Jul. 2012.
[206] J. Qiao, L. X. Cai, X. Shen, and J. W. Mark, “STDMA-based scheduling algorithm for concurrent transmissions in directional millimeter wave
networks,” in Proc. IEEE International Conference on Communications
(IEEE ICC’12), Jun. 2012, pp. 5221–5225.
[207] Z. Yan, B. Li, X. Zuo, and M. Yang, “A heuristic clique based STDMA
scheduling algorithm for spatial concurrent transmission in mmWave
networks,” in Proc. IEEE Wireless Communications and Networking
Conference (IEEE WCNC’15), Mar. 2015, pp. 1036–1041.
[208] Q. Xue, X. Fang, M. Xiao, and Y. Li, “Multi-user millimeter wave
communications with nonorthogonal beams,” to appear in IEEE Trans
Veh. Technol., 2017.
[209] G. Yan and D. Liu, “A simple adaptive STDMA scheduling scheme in
mmWave wireless networks,” in Proc. IEEE International Conference on
Communications, Circuits and Systems (IEEE ICCCAS’13), Nov. 2016,
pp. 1-5.
[210] C. Li, R. Cai, and D. Liu, “A suboptimal STDMA scheduling for
concurrent transmissions in mmWave wireless networks,” in Proc. IEEE
International Conference on Signal Processing, Communications and
Computing (IEEE ICSPCC’14), Aug. 2014, pp. 137-141.
[211] C. Jeong, J. Park, and H. Yu, “Random access in millimeter-wave
beamforming cellular networks: issues and approaches,” IEEE Commun.
Mag., vol. 53, no. 1, pp. 180-185, Jan. 2015.
[212] N. Giatsoglou, K. Ntontin, E. Kartsakli, A. Antonopoulos, and C.
Verikoukis, “D2D-Aware device caching in mmWave-cellular networks,”
to appear in IEEE J. Sel. Areas Commun., 2017.
[213] M. Polese, M. Giordani, M. Mezzavilla, S. Rangan, and M. Zorzi,
“Improved handover through dual connectivity in 5G mmWave mobile
networks,” to appear in IEEE J. Sel. Areas Commun., 2017.
[214] H. Shokri-Ghadikolaei, C. Fischione, G. Fodor, P. Popovski, and M.
Zorzi, “Millimeter wave cellular networks: A MAC layer perspective,”
IEEE Transactions on Communications, vol. 63, no. 10, pp. 3437- 3458,
Oct 2015.
[215] M. Giordani, M. Mezzavilla, S. Rangan, and M. Zorzi, “MultiConnectivity in 5G mmwave cellular networks,” in Proc. 15th Annual Mediterranean Ad Hoc Networking Workshop (MED-HOC-NET) (MedHoc-Net
16), Vilanova i la Geltru, Barcelona, Spain, Jun. 2016.
[216] C. Barati, S. Hosseini, S. Rangan, P. Liu, T. Korakis, S. Panwar, and
T. Rappaport, “Directional cell discovery in millimeter wave cellular
networks,” IEEE Transactions on Wireless Communications, vol. 14, no.
12, pp. 6664-6678, Dec 2015.
[217] C. N. Barati, S. A. Hosseini, M. Mezzavilla, S. Rangan, T. Korakis, S.
S. Panwar, and M. Zorzi, “Directional initial access for millimeter wave
cellular systems,” CoRR, vol. abs/1511.06483, 2015. [Online]. Available:
http://arxiv.org/abs/1511.06483
[218] V. Desai, L. Krzymien, P. Sartori, W. Xiao, A. Soong, and A.
Alkhateeb, “Initial beamforming for mmWave communications,” in 48th
Asilomar Conference on Signals, Systems and Computers, pp. 1926–1930, Nov. 2014.
[219] A. Capone, I. Filippini, and V. Sciancalepore, “Context information for
fast cell discovery in mm-Wave 5G networks,” in Proc. of 21th European
Wireless Conference, May 2015.
[220] A. Capone, I. Filippini, V. Sciancalepore, and D. Tremolada, “Obstacle
avoidance cell discovery using mm-Waves directive antennas in 5G
networks,” in Proc. IEEE 26th Annual International Symposium on
Personal, Indoor, and Mobile Radio Communications (PIMRC), pp. 2349–2353, Aug. 2015.
[221] Q. Li, H. Niu, G. Wu, and R. Hu, “Anchor-booster based heterogeneous
networks with mmwave capable booster cells,” in Proc. EEE Globecom
Workshops (GC Wkshps), Dec 2013, pp. 93-98.
[222] W. B. Abbas and M. Zorzi, “Context information based initial cell
search for millimeter wave 5G cellular networks,” in Proc. of 25th
European Conference on Networks and Communications, EuCNC, 2016.
[223] K. Higuchi and A. Benjebbour, “Non-orthogonal multiple access
(NOMA) with successive interference cancellation for future radio access,” IEICE Trans. Commun., vol. E98-B, no. 3, pp. 403-414, Mar. 2015.
[224] Z. Ding, Z. Yang, P. Fan, and H. V. Poor, “On the performance of
non-orthogonal multiple access in 5G systems with randomly deployed
users,” IEEE Signal Process. Lett., vol. 21, no. 12, pp. 1501-1505, Dec.
2014.
[225] L. Dai, B. Wang, Y. Yuan, S. Han, C.-L. I, and Z. Wang, “Nonorthogonal multiple access for 5G: Solutions, challenges, opportunities,
and future research trends,” IEEE Commun. Mag., vol. 53, no. 9, pp.
74-81, Sep. 2015.
[226] D. Tse and P. Viswanath, Fundamentals of Wireless Communication.
Cambridge: Cambridge University Press, 2005.
[227] B. Wang, L. Dai, Z. Wang, N. Ge, and S. Zhou, “Spectrum and energy
efficient beamspace MIMO-NOMA for millimeter-wave communications
using lens antenna array,” submitted for publication, 2017.
[228] S. A. R. Naqvi, and S. A. Hassan, “Combining NOMA and mmWave
technology for cellular communication,” in Proc. IEEE Vehicular Technology Conference (IEEE VTC Fall’16), Sep. 2016, pp. 1-5.
[229] A. S. Marcano, and H. L. Christiansen, “Performance of nonorthogonal multiple access (NOMA) in mmWave wireless communications for 5G networks,” in Proc. IEEE International Conference on
Computing, Networking and Communications (IEEE ICNC’17), Jan.
2017, pp. 969-974.
[230] D. Zhang, Z. Zhou, C. Xu, Y. Zhang, J. Rodriguez, and T. Sato,
“Capacity analysis of non-orthogonal multiple access with mmwave
massive MIMO systems,” to appear in IEEE J. Sel. Areas Commun.,
2017.
[231] Z. Ding, P. Fan, and H. V. Poor, “Random beamforming in millimeterwave NOMA networks,” to appear in IEEE Access, 2017.
[232] B. Wang, L. Dai, Z. Wang, N. Ge, and S. Zhou, “Spectrum and energy
efficient beamspace MIMO-NOMA for millimeter-wave communications
using lens antenna array,” submitted to IEEE J. Sel. Areas Commun.
[233] L. You, X. Gao, G. Y. Li, X.-G. Xia, and N. Ma, “BDMA for
millimeter-wave/terahertz massive MIMO transmission with per-beam
synchronization,” to appear in IEEE J. Sel. Areas Commun., 2017.
[234] Z. Xiao, L. Dai, P. Xia, J. Choi, and X. Xia, “Millimeter-Wave
communication with non-orthogonal multiple access for 5G,” submitted
to IEEE Wireless Commun. Mag.
[235] C.-L. I, C. Rowell, S. Han, Z. Xu, G. Li, and Z. Pan, “Toward green
and soft: A 5G perspective,” IEEE Commun. Mag., vol. 52, no. 2, pp.
66–73, Feb. 2014.
[236] W. Feng, Y. Wang, D. Lin, N. Ge, J. Lu and S. Li, “When mmWave
Communications Meet Network Densification: A Scalable Interference
Coordination Perspective," IEEE J. Sel. Areas Commun., 2017 (to appear).
[237] R. Taori and A. Sridharan, “Point-to-multipoint in-band mmWave
backhaul for 5G networks,” IEEE Wireless Commun., vol. 53, no. 1, pp.
195–201, Jan. 2015.
[238] L. Wei, R.Q. Hu, Y. Qian, and G. Wu, “Key elements to enable
millimeter wave communications for 5G wireless systems,” IEEE Wireless
Commun., vol. 21, no. 6, pp. 136–43, Dec. 2014.
[239] L. Song, Y. Li and Z. Han, “Game-theoretic resource allocation for
full-duplex communications,” IEEE Commun. Mag., vol. 23, no. 3, pp.
50–56, Jun. 2016.
[240] Q. Li, G. Li, W. Lee, M. I. Lee, D. Mazzarese, B. Clerckx, and Z. Li,
“MIMO techniques in WiMAX and LTE: A feature overview,” IEEE
Commun. Mag., vol. 48, no. 5, pp. 86–92, May 2010.
[241] P. Wang, Y. Li, L. Song, and B. Vucetic, “Multi-gigabit millimeter wave
wireless communications for 5G: from fixed access to cellular networks,”
IEEE Commun. Mag., vol. 53, no. 1, pp. 168–178, Jan. 2015.
[242] T. Bai, R. Vaze, and R. W. Heath, “Analysis of blockage effects on
urban cellular networks,” IEEE Trans. Wireless Commun., vol. 13, no. 9,
pp. 5070–5083, Sep. 2014.
[243] X. Lin and J. G. Andrews, “Connectivity of millimeter wave networks
with multi-hop relaying,” IEEE Wireless Commun. Lett., vol. 4, no. 2, pp.
209–212, Apr. 2015.
[244] D. Maamari, N. Devroye, and D. Tuninetti, “Coverage in mmwave
cellular networks with base station co-operation,” IEEE Trans. Wireless
Commun., vol. 15, no. 4, pp. 2981–2994, Apr. 2016.
[245] X. Yu, J. Zhang, M. Haenggi, and K. B. Letaief, “Coverage Analysis for
Millimeter Wave Networks: The Impact of Directional Antenna Arrays,”
to appear in IEEE J. Sel. Areas Commun., 2017.
[246] V. Petrov, D. Solomitckii, A. Samuylov, M. A. Lema, M. Gapeyenko,
D. Moltchanov, S. Andreev, V. Naumov, K. Samouylov, M. Dohler, and Y.
Koucheryavy, “Dynamic Multi-Connectivity Performance in Ultra-Dense Urban mmWave Deployments,” to appear in IEEE J. Sel. Areas Commun.,
2017.
[247] D. Ramirez, L. Huang, and B. Aazhang, “On Opportunistic mmWave Networks with Blockage,” to appear in IEEE J. Sel. Areas Commun.,
2017.
[248] H. Zhang, S. Huang, C. Jiang, K. Long, V. C. M. Leung, and H.
Vincent Poor, “Energy efficient user association and power allocation in
millimeter wave based ultra dense networks with energy harvesting base
stations,” to appear in IEEE J. Sel. Areas Commun., 2017.
[249] L. Wang, K.-K. Wong, R. W. Heath, and J. Yuan, “Wireless powered
dense cellular networks: How many small cells do we need?” to appear
in IEEE J. Sel. Areas Commun., 2017.
[250] Z. C. Phyo and A. Taparugssanagorn, “Hybrid analog-digital downlink
beamforming for massive MIMO system with uniform and non-uniform
linear arrays,” in Proc. 2016 13th International Conference on Electrical
Engineering/Electronics, Computer, Telecommunications and Information
Technology (ECTI-CON), Jun. 2016, pp. 1–6.
[251] J. Brady, N. Behdad, and A. Sayeed, “Beamspace MIMO for
millimeter-wave communications: System architecture, modeling, analysis, and measurements," IEEE Trans. Ant. and Propag., vol. 61, no. 7,
pp. 3814-3827, Jul. 2013.
[252] https://www.huawei.eu/blog/millimetre-wave-key-technology-5g.
[253] Huawei Technologies Co., Ltd. Huawei to Bring 73 GHz
mmWave Mu-MIMO live Demo to Deutsche Telekom. Available online: http://www.huawei.com/en/news/2016/2/73GHzmm-WaveMu-MIM-livedemo (accessed on 30 May 2016).
[254] Branda, M., Qualcomm Research Demonstrates Robust mmWave Design for 5G. Available online: https://www.qualcomm.com/news/onq/2015/11/19/qualcomm-researchdemonstrates-robust-mmwavedesign- 5g (accessed on 30 May 2016).
[255] Nokia Networks. Nokia Networks Showcases 5G speed of
10Gbps with NI at the Brooklyn 5G Summit. Available online:
http://networks.nokia.com/news-events/press-room/press-releases/nokianetworksshowcases5g-speed-of-10gbps-with-ni-at-the-brooklyn-5gsummit (accessed on 30 May 2016).
[256] Samsung Electronics Co., Ltd. Samsung Electronics and Deutsche
Telekom Demonstrate World's First End-to-End 5G Solution at
Mobile World Congress 2016. Available online: https://news.samsung.
com/global/samsung-electronics-and-deutsche-telekom-demonstrateworlds-first-end-to-end-5g-solutionatmobile-world-congress-2016
(accessed on 30 May 2016).
[257] https://www.ericsson.com/news/2076554.
Under review as a conference paper at ICLR 2017
DEEP UNSUPERVISED CLUSTERING WITH GAUSSIAN MIXTURE VARIATIONAL AUTOENCODERS

arXiv:1611.02648v2 [cs.LG] 13 Jan 2017

Nat Dilokthanakul1,∗, Pedro A. M. Mediano1, Marta Garnelo1, Matthew C. H. Lee1, Hugh Salimbeni1, Kai Arulkumaran2 & Murray Shanahan1
1 Department of Computing, 2 Department of Bioengineering, Imperial College London, London, UK
∗ [email protected]
ABSTRACT
We study a variant of the variational autoencoder model (VAE) with a Gaussian
mixture as a prior distribution, with the goal of performing unsupervised clustering through deep generative models. We observe that the known problem of
over-regularisation that has been shown to arise in regular VAEs also manifests
itself in our model and leads to cluster degeneracy. We show that a heuristic
called minimum information constraint that has been shown to mitigate this effect in VAEs can also be applied to improve unsupervised clustering performance
with our model. Furthermore we analyse the effect of this heuristic and provide
an intuition of the various processes with the help of visualizations. Finally, we
demonstrate the performance of our model on synthetic data, MNIST and SVHN, showing that the obtained clusters are distinct, interpretable, and achieve unsupervised clustering performance competitive with the state of the art.
1 INTRODUCTION
Unsupervised clustering remains a fundamental challenge in machine learning research. While long-established methods such as k-means and Gaussian mixture models (GMMs) (Bishop, 2006) still lie
at the core of numerous applications (Aggarwal & Reddy, 2013), their similarity measures are limited to local relations in the data space and are thus unable to capture hidden, hierarchical dependencies in latent spaces. Alternatively, deep generative models can encode rich latent structures. While
they are not often applied directly to unsupervised clustering problems, they can be used for dimensionality reduction, with classical clustering techniques applied to the resulting low-dimensional
space (Xie et al., 2015). This is an unsatisfactory approach as the assumptions underlying the dimensionality reduction techniques are generally independent of the assumptions of the clustering
techniques.
Deep generative models try to estimate the density of observed data under some assumptions about
its latent structure, i.e., its hidden causes. They allow us to reason about data in more complex
ways than in models trained purely through supervised learning. However, inference in models with
complicated latent structures can be difficult. Recent breakthroughs in approximate inference have
provided tools for constructing tractable inference algorithms. As a result of combining differentiable models with variational inference, it is possible to scale up inference to datasets of sizes that
would not have been possible with earlier inference methods (Rezende et al., 2014). One popular
algorithm under this framework is the variational autoencoder (VAE) (Kingma & Welling, 2013;
Rezende et al., 2014).
In this paper, we propose an algorithm to perform unsupervised clustering within the VAE framework. To do so, we postulate that generative models can be tuned for unsupervised clustering by
making the assumption that the observed data is generated from a multimodal prior distribution, and,
correspondingly, construct an inference model that can be directly optimised using the reparameterization trick. We also show that the problem of over-regularisation in VAEs can severely affect the
performance of clustering, and that it can be mitigated with the minimum information constraint
introduced by Kingma et al. (2016).
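As a minimal illustration of the reparameterization trick referred to above, the following sketch shows how a sample from a diagonal Gaussian can be expressed as a deterministic function of the distribution parameters and external noise, which is what makes direct gradient-based optimisation of the inference model possible; the encoder outputs used here are placeholder values, and the multimodal (mixture) construction of our model is introduced later in the paper.

# Minimal NumPy illustration of the reparameterisation trick for a diagonal
# Gaussian: a sample z ~ N(mu, diag(sigma^2)) is rewritten as a deterministic
# function of (mu, log_var) and external noise eps ~ N(0, I), so gradients can
# flow to the encoder parameters. The encoder outputs below are placeholders.
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([0.5, -1.0])          # encoder mean output (placeholder)
log_var = np.array([-0.2, 0.3])     # encoder log-variance output (placeholder)

eps = rng.standard_normal(mu.shape)           # noise drawn outside the model
z = mu + np.exp(0.5 * log_var) * eps          # z is differentiable w.r.t. mu and log_var
print(z)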
1.1 RELATED WORK
Unsupervised clustering can be considered a subset of the problem of disentangling latent variables,
which aims to find structure in the latent space in an unsupervised manner. Recent efforts have
moved towards training models with disentangled latent variables corresponding to different factors
of variation in the data. Inspired by the learning pressure in the ventral visual stream, Higgins et al.
(2016) were able to extract disentangled features from images by adding a regularisation coefficient
to the lower bound of the VAE. As with VAEs, there is also effort going into obtaining disentangled
features from generative adversarial networks (GANs) (Goodfellow et al., 2014). This has been recently achieved with InfoGANs (Chen et al., 2016a), where structured latent variables are included
as part of the noise vector, and the mutual information between these latent variables and the generator distribution is then maximised as a mini-max game between the two networks. Similarly,
Tagger (Greff et al., 2016), which combines iterative amortized grouping and ladder networks, aims
to perceptually group objects in images by iteratively denoising its inputs and assigning parts of the
reconstruction to different groups. Johnson et al. (2016) introduced a way to combine amortized
inference with stochastic variational inference in an algorithm called structured VAEs. Structured
VAEs are capable of training deep models with a GMM as the prior distribution. Shu et al. (2016) introduced a VAE with a multimodal prior, where they optimise the variational approximation to the standard variational objective, showing its performance on a video prediction task.
The work that is most closely related to ours is the stacked generative semi-supervised model
(M1+M2) by Kingma et al. (2014). One of the main differences is the fact that their prior distribution is a neural network transformation of both continuous and discrete variables, with Gaussian
and categorical priors respectively. The prior for our model, on the other hand, is a neural network
transformation of Gaussian variables, which parametrise the means and variances of a mixture of
Gaussians, with categorical variables for the mixture components. Crucially, Kingma et al. (2014)
apply their model to semi-supervised classification tasks, whereas we focus on unsupervised clustering. Therefore, our inference algorithm is more specific to the latter.
We compare our results against several orthogonal state-of-the-art techniques in unsupervised clustering with deep generative models: deep embedded clustering (DEC) (Xie et al., 2015), adversarial autoencoders (AAEs) (Makhzani et al., 2015) and categorical GANs (CatGANs) (Springenberg,
2015).
2 VARIATIONAL AUTOENCODERS
VAEs are the result of combining variational Bayesian methods with the flexibility and scalability
provided by neural networks (Kingma & Welling, 2013; Rezende et al., 2014). Using variational inference it is possible to turn intractable inference problems into optimisation problems (Wainwright
& Jordan, 2008), and thus expand the set of available tools for inference to include optimisation
techniques as well. Despite this, a key limitation of classical variational inference is the need for
the likelihood and the prior to be conjugate in order for most problems to be tractably optimised,
which in turn can limit the applicability of such algorithms. Variational autoencoders introduce the
use of neural networks to output the conditional posterior (Kingma & Welling, 2013) and thus allow
the variational inference objective to be tractably optimised via stochastic gradient descent and standard backpropagation. This technique, known as the reparametrisation trick, was proposed to enable
backpropagation through continuous stochastic variables. While under normal circumstances backpropagation through stochastic variables would not be possible without Monte Carlo methods, this
is bypassed by constructing the latent variables through the combination of a deterministic function
and a separate source of noise. We refer the reader to Kingma & Welling (2013) for more details.
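As a minimal illustration of the reparametrisation trick described above (not taken from the paper's implementation), the following numpy sketch draws a latent sample as a deterministic function of the recognition-network outputs µ and log σ² and an independent noise source, which is what allows gradients to propagate through the sampling step.

```python
import numpy as np

def reparameterise(mu, log_var, rng=np.random):
    """Return z = mu + sigma * eps with eps ~ N(0, I).

    All randomness lives in eps, so z is a deterministic, differentiable
    function of the encoder outputs (mu, log_var)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Example: a batch of 4 two-dimensional latent codes.
z = reparameterise(np.zeros((4, 2)), np.zeros((4, 2)))
```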
3 GAUSSIAN MIXTURE VARIATIONAL AUTOENCODERS
In regular VAEs, the prior over the latent variables is commonly an isotropic Gaussian. This choice
of prior causes each dimension of the multivariate Gaussian to be pushed towards learning a separate
continuous factor of variation from the data, which can result in learned representations that are
structured and disentangled. While this allows for more interpretable latent variables (Higgins et al.,
2016), the Gaussian prior is limited because the learnt representation can only be unimodal and does
not allow for more complex representations. As a result, numerous extensions to the VAE have been
developed, where more complicated latent representations can be learned by specifying increasingly
complex priors (Chung et al., 2015; Gregor et al., 2015; Eslami et al., 2016).
In this paper we choose a mixture of Gaussians as our prior, as it is an intuitive extension of the unimodal Gaussian prior. If we assume that the observed data is generated from a mixture of Gaussians,
inferring the class of a data point is equivalent to inferring which mode of the latent distribution the
data point was generated from. While this gives us the possibility to segregate our latent space into
distinct classes, inference in this model is non-trivial. It is well known that the reparametrisation
trick which is generally used for VAEs cannot be directly applied to discrete variables. Several possibilities for estimating the gradient of discrete variables have been proposed (Glynn, 1990; Titsias
& Lázaro-Gredilla, 2015). Graves (2016) also suggested an algorithm for backpropagation through
GMMs. Instead, we show that by adjusting the architecture of the standard VAE, our estimator of
the variational lower bound of our Gaussian mixture variational autoencoder (GMVAE) can be optimised with standard backpropagation through the reparametrisation trick, thus keeping the inference
model simple.
3.1 GENERATIVE AND RECOGNITION MODELS
Consider the generative model p_{β,θ}(y, x, w, z) = p(w) p(z) p_β(x|w, z) p_θ(y|x), where an observed sample y is generated from a set of latent variables x, w and z under the following process:

w ∼ N(0, I)    (1a)
z ∼ Mult(π)    (1b)
x | z, w ∼ ∏_{k=1}^{K} N( µ_{z_k}(w; β), diag(σ²_{z_k}(w; β)) )^{z_k}    (1c)
y | x ∼ N( µ(x; θ), diag(σ²(x; θ)) ) or B( µ(x; θ) )    (1d)

where K is a predefined number of components in the mixture, and µ_{z_k}(·; β), σ²_{z_k}(·; β), µ(·; θ), and σ²(·; θ) are given by neural networks with parameters β and θ, respectively. That is, the observed sample y is generated from a neural network observation model parametrised by θ and the continuous latent variable x. Furthermore, the distribution of x|w is a Gaussian mixture with means and variances specified by another neural network model parametrised by β and with input w.
More specifically, the neural network parameterised by β outputs a set of K means µ_{z_k} and K variances σ²_{z_k}, given w as input. A one-hot vector z is sampled from the mixing probability π, which chooses one component from the Gaussian mixture. We set the parameter π_k = K^{-1} to make z uniformly distributed. The generative and variational views of this model are depicted in Fig. 1.
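A minimal sketch of the generative process in Eqs. 1a–1d is given below; the single-layer networks standing in for the β- and θ-models, the layer sizes and the Bernoulli observation model are illustrative assumptions only, not the architectures used in the experiments (those are listed in Appendix A).

```python
import numpy as np

rng = np.random.default_rng(0)
K, D_w, D_x, D_y = 10, 150, 200, 784          # illustrative sizes only

def mlp(d_in, d_out):
    """A random single-layer network standing in for a trained one."""
    W, b = rng.normal(size=(d_in, d_out)) / np.sqrt(d_in), np.zeros(d_out)
    return lambda h: np.tanh(h @ W + b)

beta_mu = [mlp(D_w, D_x) for _ in range(K)]       # w -> K component means   (Eq. 1c)
beta_logvar = [mlp(D_w, D_x) for _ in range(K)]   # w -> K component log-variances
theta = mlp(D_x, D_y)                             # x -> Bernoulli logits    (Eq. 1d)

def sample_gmvae():
    w = rng.normal(size=D_w)                      # Eq. 1a
    z = rng.integers(K)                           # Eq. 1b, uniform pi_k = 1/K
    mu_k = beta_mu[z](w)
    sigma_k = np.exp(0.5 * beta_logvar[z](w))
    x = mu_k + sigma_k * rng.normal(size=D_x)     # Eq. 1c
    p_y = 1.0 / (1.0 + np.exp(-theta(x)))         # Eq. 1d (Bernoulli case)
    y = rng.binomial(1, p_y)
    return y, x, w, z
```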
[Figure 1: graphical models over w, z, x, y with parameters β, θ (generative) and φ_x, φ_w (variational)]
Figure 1: Graphical models for the Gaussian mixture variational autoencoder (GMVAE) showing the generative model (left) and the variational family (right).
3.2 INFERENCE WITH THE RECOGNITION MODEL
The generative model is trained with the variational inference objective, i.e. the log-evidence lower bound (ELBO), which can be written as

L_ELBO = E_q[ log( p_{β,θ}(y, x, w, z) / q(x, w, z | y) ) ]    (2)

We assume the mean-field variational family q(x, w, z|y) as a proxy to the posterior, which factorises as q(x, w, z|y) = ∏_i q_{φ_x}(x_i|y_i) q_{φ_w}(w_i|y_i) p_β(z_i|x_i, w_i), where i indexes over data points. To simplify further notation, we will drop i and consider one data point at a time. We parametrise each variational factor with the recognition networks φ_x and φ_w that output the parameters of the variational distributions and specify their form to be Gaussian posteriors. We derive the z-posterior, p_β(z|x, w), as:

p_β(z_j = 1 | x, w) = p(z_j = 1) p(x | z_j = 1, w) / ∑_{k=1}^{K} p(z_k = 1) p(x | z_k = 1, w)
                    = π_j N(x | µ_j(w; β), σ_j(w; β)) / ∑_{k=1}^{K} π_k N(x | µ_k(w; β), σ_k(w; β))    (3)

The lower bound can then be written as

L_ELBO = E_{q(x|y)}[ log p_θ(y|x) ] − E_{q(w|y) p(z|x,w)}[ KL( q_{φ_x}(x|y) || p_β(x|w, z) ) ]
         − KL( q_{φ_w}(w|y) || p(w) ) − E_{q(x|y) q(w|y)}[ KL( p_β(z|x, w) || p(z) ) ]    (4)
We refer to the terms in the lower bound as the reconstruction term, conditional prior term, w-prior
term and z-prior term respectively.
3.2.1 THE CONDITIONAL PRIOR TERM
The reconstruction term can be estimated by drawing Monte Carlo samples from q(x|y), where the gradient can be backpropagated with the standard reparameterisation trick (Kingma & Welling, 2013). The w-prior term can be calculated analytically.
Importantly, by constructing the model this way, the conditional prior term can be estimated using Eqn. 5 without the need to sample from the discrete distribution p(z|x, w):

E_{q(w|y) p(z|x,w)}[ KL( q_{φ_x}(x|y) || p_β(x|w, z) ) ] ≈
    (1/M) ∑_{j=1}^{M} ∑_{k=1}^{K} p_β(z_k = 1 | x^{(j)}, w^{(j)}) KL( q_{φ_x}(x|y) || p_β(x | w^{(j)}, z_k = 1) )    (5)
Since p_β(z|x, w) can be computed for all z with one forward pass, the expectation over it can be calculated in a straightforward manner and backpropagated as usual. The expectation over q_{φ_w}(w|y) can be estimated with M Monte Carlo samples and the gradients can be backpropagated via the reparameterisation trick. This method of calculating the expectation is similar to the marginalisation approach of Kingma et al. (2014), with a subtle difference: Kingma et al. (2014) need multiple forward passes to obtain each component of the z-posterior. Our method requires wider output layers of the neural network parameterised by β, but needs only one forward pass. Both methods scale linearly with the number of clusters.
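A minimal numpy sketch of this estimator is given below; it assumes a uniform mixing distribution and diagonal Gaussian factors, and the array shapes are illustrative rather than those of the actual recognition networks.

```python
import numpy as np

def diag_gauss_logpdf(x, mu, var):
    """Log-density of a diagonal Gaussian, summed over dimensions."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

def diag_gauss_kl(mu0, var0, mu1, var1):
    """KL( N(mu0, diag var0) || N(mu1, diag var1) ), summed over dimensions."""
    return 0.5 * np.sum(np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0, axis=-1)

def conditional_prior_term(x_samples, mu_q, var_q, mix_mu, mix_var):
    """Monte Carlo estimate of Eq. 5 for a single data point.

    x_samples      : (M, D) samples x^(j) ~ q(x|y), one per sample w^(j)
    mu_q, var_q    : (D,) parameters of q_phi_x(x|y)
    mix_mu, mix_var: (M, K, D) component parameters produced from each w^(j)
    Assumes uniform mixing probabilities pi_k = 1/K, so they cancel in Eq. 3.
    """
    M, K, _ = mix_mu.shape
    total = 0.0
    for j in range(M):
        # z-posterior for all K components in a single pass (Eq. 3)
        logp = diag_gauss_logpdf(x_samples[j], mix_mu[j], mix_var[j])   # (K,)
        resp = np.exp(logp - logp.max())
        resp /= resp.sum()
        # responsibility-weighted KL terms
        kl = diag_gauss_kl(mu_q, var_q, mix_mu[j], mix_var[j])          # (K,)
        total += resp @ kl
    return total / M
```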
3.3 THE KL COST OF THE DISCRETE LATENT VARIABLE
The most unusual term in our ELBO is the z-prior term. The z-posterior calculates the clustering
assignment probability directly from the value of x and w, by asking how far x is from each of
the cluster positions generated by w. Therefore, the z-prior term can reduce the KL divergence
between the z-posterior and the uniform prior by concurrently manipulating the position of the
clusters and the encoded point x. Intuitively, it would try to merge the clusters by maximising
the overlap between them, and moving the means closer together. This term, similar to other KL-regularisation terms, is in tension with the reconstruction term, and is expected to be over-powered
as the amount of training data increases.
3.4 THE OVER-REGULARISATION PROBLEM
The possible overpowering effect of the regularisation term on VAE training has been described
numerous times in the VAE literature (Bowman et al., 2015; Sønderby et al., 2016; Kingma et al.,
2016; Chen et al., 2016b). As a result of the strong influence of the prior, the obtained latent representations are often overly simplified and poorly represent the underlying structure of the data. So
far there have been two main approaches to overcome this effect: one solution is to anneal the KL
term during training by allowing the reconstruction term to train the autoencoder network before
slowly incorporating the regularization from the KL term (Sønderby et al., 2016). The other main
approach involves modifying the objective function by setting a cut-off value that removes the effect of the KL term when it is below a certain threshold (Kingma et al., 2016). As we show in the
experimental section below, this problem of over-regularisation is also prevalent in the assignment
of the GMVAE clusters and manifests itself in large degenerate clusters. While we show that the
second approach suggested by Kingma et al. (2016) does indeed alleviate this merging phenomenon,
finding solutions to the over-regularization problem remains a challenging open problem.
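As a minimal sketch of the first (annealing) heuristic, a warm-up schedule simply scales the KL contribution from 0 to 1 over the early epochs; the linear schedule below is an assumption for illustration, not the exact schedule of Sønderby et al. (2016).

```python
def kl_weight(epoch, warmup_epochs=100):
    """Linear KL warm-up: the regulariser is switched on gradually."""
    return min(1.0, epoch / warmup_epochs)

def annealed_objective(reconstruction, kl_terms, epoch):
    """Lower bound with the KL contribution annealed during early training."""
    return reconstruction - kl_weight(epoch) * sum(kl_terms)
```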
4 EXPERIMENTS
The main objective of our experiments is not only to evaluate the accuracy of our proposed model,
but also to understand the optimisation dynamics involved in the construction of meaningful, differentiated latent representations of the data. This section is divided into three parts:
1. We first study the inference process in a low-dimensional synthetic dataset, and focus in
particular on how the over-regularisation problem affects the clustering performance of the
GMVAE and how to alleviate the problem;
2. We then evaluate our model on an MNIST unsupervised clustering task; and
3. We finally show generated images from our model, conditioned on different values of the
latent variables, which illustrate that the GMVAE can learn disentangled, interpretable latent representations.
Throughout this section we make use of the following datasets:
• Synthetic data: We create a synthetic dataset mimicking the presentation of Johnson et al.
(2016), which is a 2D dataset with 10,000 data points created from the arcs of 5 circles.
• MNIST: The standard handwritten digits dataset, composed of 28x28 grayscale images
and consisting of 60,000 training samples and 10,000 testing samples (LeCun et al., 1998).
• SVHN: A collection of 32x32 images of house numbers (Netzer et al., 2011). We use
the cropped version of the standard and the extra training sets, adding up to a total of
approximately 600,000 images.
4.1 SYNTHETIC DATA
We quantify clustering performance by plotting the magnitude of the z-prior term described in Eqn. 6
during training. This quantity can be thought of as a measure of how much different clusters overlap.
Since our goal is to achieve meaningful clustering in the latent space, we would expect this quantity
to go down as the model learns the separate clusters.
L_z = − E_{q(x|y) q(w|y)}[ KL( p_β(z|x, w) || p(z) ) ]    (6)
Empirically, however, we have found this not to be the case. The latent representation that our model converges to merges all classes into the same large cluster instead of representing information about the different clusters, as can be seen in Figs. 2d and 3a. As a result, each data point is equally likely to belong to any of the clusters, rendering our latent representation completely uninformative with respect to the class structure.
We argue that this phenomenon can be interpreted as the result of over-regularisation by the z-prior term. Given that this quantity is driven up by the optimisation of the KL term in the lower bound,
it reaches its maximum possible value of zero, as opposed to decreasing with training to ensure
encoding of information about the classes. We suspect that the prior has too strong an influence in the initial training phase and drives the model parameters into a poor local optimum that the reconstruction term struggles to escape later on.
This observation is conceptually very similar to the over-regularisation problem encountered in regular VAEs and we thus hypothesize that applying similar heuristics should help alleviate the problem.
We show in Fig. 2f that by using the previously mentioned modification to the lower-bound proposed by Kingma et al. (2016), we can avoid the over-regularisation caused by the z-prior. This is
achieved by maintaining the cost from the z-prior at a constant value λ until it exceeds that threshold.
Formally, the modified z-prior term is written as:
L'_z = − max( λ, E_{q(x|y) q(w|y)}[ KL( p_β(z|x, w) || p(z) ) ] )    (7)
This modification suppresses the initial tendency of the z-prior to merge all clusters, thus allowing them to spread out until the z-prior cost is high enough. At that point its effect is significantly
reduced and is mostly limited to merging individual clusters that are overlapping sufficiently. This
can be seen clearly in Figs. 2e and 2f. The former shows the clusters before the z-prior cost is
taken into consideration, and as such the clusters have been able to spread out. Once the z-prior is
activated, clusters that are very close together will be merged as seen in Fig. 2f.
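The corresponding change to the objective is small; the sketch below shows the scalar form of Eq. 7, with the caveat that in an automatic-differentiation framework the clamping must be implemented so that no gradient flows through the term while the KL estimate is below λ.

```python
def modified_z_prior_term(kl_z, lam):
    """Eq. 7: the z-prior cost is held at the constant lambda until the
    Monte Carlo KL estimate kl_z exceeds it, so it exerts no pressure
    to merge clusters early in training."""
    return -max(lam, kl_z)
```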
Finally, in order to illustrate the benefits of using neural networks for the transformation of the distributions, we compare the density learned by our model (Fig. 2b) with that of a regular GMM (Fig. 2c) in data space. As illustrated by the figures, the GMVAE allows for a much richer, and thus more accurate, representation than regular GMMs, and is therefore more successful at modelling non-Gaussian data.
[Figure 2 panels: (a) Data points in data space; (b) Density of GMVAE; (c) Density of GMM; (d) Latent space, at poor optimum; (e) Latent space, clusters spreading; (f) Latent space, at convergence]
Figure 2: Visualisation of the synthetic dataset: (a) Data is distributed with 5 modes in the 2-dimensional data space. (b) GMVAE learns a density model that can model the data using a mixture
of non-Gaussian distributions in the data space. (c) GMM cannot represent the data as well because
of the restrictive Gaussian assumption. (d) GMVAE, however, suffers from over-regularisation and
can result in poor minima when looking at the latent space. (e) Using the modification to the ELBO
(Kingma et al., 2016) allows the clusters to spread out. (f) As the model converges the z-prior term
is activated and regularises the clusters in the final stage by merging excessive clusters.
[Figure 3 panels: z-prior term against training epoch, (a) with the normal ELBO and (b) with the modification]
Figure 3: Plot of the z-prior term: (a) Without the information constraint, the GMVAE suffers from over-regularisation as it converges to a poor optimum that merges all clusters together to avoid the KL cost. (b) Before reaching the threshold value (dotted line), the gradient from the z-prior term can be turned off to prevent the clusters from being pulled together (see text for details). By the time the threshold value is reached, the clusters are sufficiently separated. At this point the activated gradient from the z-prior term only merges clusters that overlap heavily. Even after activating its gradient, the value of the z-prior continues to decrease as it is over-powered by other terms that lead to meaningful clusters and a better optimum.
4.2 UNSUPERVISED IMAGE CLUSTERING
We now assess the model’s ability to represent discrete information present in the data on an image clustering task. We train a GMVAE on the MNIST training dataset and evaluate its clustering
performance on the test dataset. To compare the cluster assignments given by the GMVAE with the
true image labels we follow the evaluation protocol of Makhzani et al. (2015), which we summarise
here for clarity. In this method, we find the element of the test set with the highest probability of
belonging to cluster i and assign that label to all other test samples belonging to i. This is then
repeated for all clusters i = 1, ..., K, and the assigned labels are compared with the true labels to
obtain an unsupervised classification error rate.
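A minimal sketch of this evaluation protocol is given below (our own paraphrase in code, with hypothetical array names); it takes the matrix of cluster-membership probabilities for the test set and the ground-truth labels, and returns the unsupervised classification accuracy.

```python
import numpy as np

def unsupervised_accuracy(probs, true_labels):
    """Evaluation protocol of Makhzani et al. (2015), as described above.

    probs       : (N, K) array, probs[n, k] = probability that test point n
                  belongs to cluster k.
    true_labels : (N,) ground-truth labels of the test set.

    Each cluster k inherits the label of the test point most confidently
    assigned to it; every point is then labelled by its most probable
    cluster and compared with the ground truth.
    """
    N, K = probs.shape
    cluster_label = np.empty(K, dtype=true_labels.dtype)
    for k in range(K):
        cluster_label[k] = true_labels[np.argmax(probs[:, k])]
    predicted = cluster_label[np.argmax(probs, axis=1)]
    return np.mean(predicted == true_labels)
```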
While we observe the cluster degeneracy problem when training the GMVAE on the synthetic
dataset, the problem does not arise with the MNIST dataset. We thus optimise the GMVAE using the ELBO directly, without the need for any modifications. A summary of the results obtained
on the MNIST benchmark with the GMVAE as well as other recent methods is shown in Table 1.
We achieve classification scores that are competitive with the state-of-the-art techniques1 , except for
adversarial autoencoders (AAE). We suspect the reason for this is, again, related to the KL terms in
the VAE’s objective. As indicated by Hoffman et al., the key difference in the adversarial autoencoders objective is the replacement of the KL term in the ELBO by an adversarial loss that allows the
latent space to be manipulated more carefully (Hoffman & Johnson, 2016). Details of the network
architecture used in these experiments can be found in Appendix A.
Empirically, we observe that increasing the number of Monte Carlo samples and the number of
clusters makes the GMVAE more robust to initialisation and more stable as shown in Fig. 4. If
fewer samples or clusters are used then the GMVAE can occasionally converge faster to poor local
minima, missing some of the modes of the data distribution.
1
It is worth noting that shortly after our initial submission, Rui Shu published a blog post
(http://ruishu.io/2016/12/25/gmvae/) with an analysis on Gaussian mixture VAEs. In addition to providing
insightful comparisons to the aforementioned M2 algorithm, he implements a version that achieves competitive clustering scores using a comparably simple network architecture. Crucially, he shows that model M2
does not use discrete latent variables when trained without labels. The reason this problem is not as severe
in the GMVAE might possibly be the more restrictive assumptions in the generative process, which helps the
optimisation, as argued in his blog.
Table 1: Unsupervised classification accuracy for MNIST with different numbers of clusters (K) (reported as percentage of correct labels)

Method                         K    Best Run   Average Run
CatGAN (Springenberg, 2015)    20   90.30      -
AAE (Makhzani et al., 2015)    16   -          90.45 ± 2.05
AAE (Makhzani et al., 2015)    30   -          95.90 ± 1.13
DEC (Xie et al., 2015)         10   84.30      -
GMVAE (M = 1)                  10   87.31      77.78 ± 5.75
GMVAE (M = 10)                 10   88.54      82.31 ± 3.75
GMVAE (M = 1)                  16   89.01      85.09 ± 1.99
GMVAE (M = 10)                 16   96.92      87.82 ± 5.33
GMVAE (M = 1)                  30   95.84      92.77 ± 1.60
GMVAE (M = 10)                 30   93.22      89.27 ± 2.50
[Figure 4: test accuracy against epoch for K = 10, 16 and M = 1, 10]
Figure 4: Clustering accuracy with different numbers of clusters (K) and Monte Carlo samples (M): after only a few epochs, the GMVAE converges to a solution. Increasing the number of clusters improves the quality of the solution considerably.
4.2.1 IMAGE GENERATION
So far we have argued that the GMVAE picks up natural clusters in the dataset, and that these
clusters share some structure with the actual classes of the images. Now we train the GMVAE with
K = 10 on MNIST to show that the learnt components in the distribution of the latent space actually
represent meaningful properties of the data. First, we note that there are two sources of stochasticity
in play when sampling from the GMVAE, namely
1. Sampling w from its prior, which will generate the means and variances of x through a
neural network β; and
2. Sampling x from the Gaussian mixture determined by w and z , which will generate the
image through a neural network θ.
In Fig. 5a we explore the latter option by setting w = 0 and sampling multiple times from the resulting Gaussian mixture. Each row in Fig. 5a corresponds to samples from a different component of
the Gaussian mixture, and it can be clearly seen that samples from the same component consistently
result in images from the same class of digit. This confirms that the learned latent representation
contains well differentiated clusters, and exactly one per digit. Additionally, in Fig. 5b we explore
the sensitivity of the generated image to the Gaussian mixture components by smoothly varying
w and sampling from the same component. We see that while z reliably controls the class of the
generated image, w sets the “style” of the digit.
Finally, in Fig. 6 we show images sampled from a GMVAE trained on SVHN, showing that the
GMVAE clusters visually similar images together.
(a) Varying z
(b) Varying w
Figure 5: Generated MNIST samples: (a) Each row contains 10 randomly generated samples
from different Gaussian components of the Gaussian mixture. The GMVAE learns a meaningful
generative model where the discrete latent variables z correspond directly to the digit values in an
unsupervised manner. (b) Samples generated by traversing around the w space; each position of w corresponds to a specific style of the digit.
Figure 6: Generated SVHN samples: Each row corresponds to 10 samples generated randomly
from different Gaussian components. GMVAE groups together images that are visually similar.
5 CONCLUSION
We have introduced a class of variational autoencoders in which one level of the latent encoding
space has the form of a Gaussian mixture model, and specified a generative process that allows
us to formulate a variational Bayes optimisation objective. We then discuss the problem of over-regularisation in VAEs. In the context of our model, we show that this problem manifests itself in
the form of cluster degeneracy. Crucially, we show that this specific manifestation of the problem
can be solved with standard heuristics.
We evaluate our model on unsupervised clustering tasks using popular datasets and achieve competitive results compared to the current state of the art. Finally, we show via sampling from the
generative model that the learned clusters in the latent representation correspond to meaningful features of the visible data. Images generated from the same cluster in latent space share relevant
high-level features (e.g. correspond to the same MNIST digit) while being trained in an entirely
unsupervised manner.
It is worth noting that GMVAEs can be stacked by allowing the prior on w to be a Gaussian mixture
distribution as well. A deep GMVAE could scale much better with the number of clusters, given that it would be combinatorial with regard to both the number of layers and the number of clusters per layer. As
such, while future research on deep GMVAEs for hierarchical clustering is a possibility, it is crucial
to also address the enduring optimisation challenges associated with VAEs in order to do so.
ACKNOWLEDGMENTS
We would like to acknowledge the NVIDIA Corporation for the donation of a GeForce GTX Titan Z
used in our experiments. We would like to thank Jason Rolfe, Rui Shu and the reviewers for useful
comments. Importantly, we would also like to acknowledge that the variational family which we
used throughout this version of the paper was suggested by an anonymous reviewer.
REFERENCES
Charu C Aggarwal and Chandan K Reddy. Data clustering: algorithms and applications. CRC
Press, 2013.
Christopher M Bishop. Pattern recognition and machine learning. 2006.
Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets.
arXiv preprint arXiv:1606.03657, 2016a.
Xi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya
Sutskever, and Pieter Abbeel. Variational lossy autoencoder. arXiv preprint arXiv:1611.02731,
2016b.
J. Chung, K. Kastner, L. Dinh, K. Goel, A. Courville, and Y. Bengio. A Recurrent Latent Variable
Model for Sequential Data. ArXiv e-prints, June 2015.
SM Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, Koray Kavukcuoglu, and Geoffrey E
Hinton. Attend, infer, repeat: Fast scene understanding with generative models. arXiv preprint
arXiv:1603.08575, 2016.
PW Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the
ACM, 33(10):75–84, 1990.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,
Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
Alex Graves. Stochastic backpropagation through mixture density distributions. arXiv preprint
arXiv:1607.05690, 2016.
Klaus Greff, Antti Rasmus, Mathias Berglund, Tele Hotloo Hao, Jürgen Schmidhuber, and Harri
Valpola. Tagger: Deep unsupervised perceptual grouping. arXiv preprint arXiv:1606.06724,
2016.
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Rezende, and Daan Wierstra. Draw: A recurrent
neural network for image generation. In Proceedings of The 32nd International Conference on
Machine Learning, pp. 1462–1471, 2015.
I. Higgins, L. Matthey, X. Glorot, A. Pal, B. Uria, C. Blundell, S. Mohamed, and A. Lerchner. Early
Visual Concept Learning with Unsupervised Deep Learning. ArXiv e-prints, June 2016.
Matthew D. Hoffman and Matthew J. Johnson. Elbo surgery: yet another way to carve up the
variational evidence lower bound. Workshop in Advances in Approximate Bayesian Inference,
NIPS, 2016.
Matthew J Johnson, David Duvenaud, Alexander B Wiltschko, Sandeep R Datta, and Ryan P Adams.
Composing graphical models with neural networks for structured representations and fast inference. arXiv preprint arXiv:1603.06277, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised
learning with deep generative models. In Advances in Neural Information Processing Systems,
pp. 3581–3589, 2014.
Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse
autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to
document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders.
arXiv preprint arXiv:1511.05644, 2015.
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading
digits in natural images with unsupervised feature learning. 2011.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and
approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
R. Shu, J. Brofos, F. Zhang, M. Ghavamzadeh, H. Bui, and M. Kochenderfer. Stochastic video
prediction with conditional density estimation. In European Conference on Computer Vision
(ECCV) Workshop on Action and Anticipation for Visual Learning, 2016.
Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther.
How to train deep variational autoencoders and probabilistic ladder networks. arXiv preprint
arXiv:1602.02282, 2016.
Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative
adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
Michalis Titsias and Miguel Lázaro-Gredilla. Local expectation gradients for black box variational
inference. In Advances in Neural Information Processing Systems, pp. 2638–2646, 2015.
Martin J Wainwright and Michael I Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis.
arXiv preprint arXiv:1511.06335, 2015.
A NETWORK PARAMETERS
For optimisation, we use Adam (Kingma & Ba, 2014) with a learning rate of 10^{-4} and standard hyperparameter values β_1 = 0.9, β_2 = 0.999 and ε = 10^{-8}. The model architectures used in our experiments are shown in Tables A.1, A.2 and A.3.
Table A.1: Neural network architecture models of q_φ(x, w): The hidden layers are shared between q(x) and q(w), except the output layer, where the neural network is split into 4 output streams, 2 with dimension N_x and the other 2 with dimension N_w. We exponentiate the variance components to keep their value positive. An asterisk (*) indicates the use of batch normalization and a ReLU nonlinearity. For convolutional layers, the numbers in parentheses indicate stride-padding.

Dataset     Input   Hidden                                                     Output
Synthetic   2       fc 120 ReLU, 120 ReLU                                      Nw = 2, Nw = 2 (Exp), Nx = 2, Nx = 2 (Exp)
MNIST       28x28   conv 16x6x6* (1-0), 32x6x6* (1-0), 64x4x4* (2-1), 500*     Nw = 150, Nw = 150 (Exp), Nx = 200, Nx = 200 (Exp)
SVHN        32x32   conv 64x4x4* (2-1), 128x4x4* (2-1), 246x4x4* (2-1), 500*   Nw = 150, Nw = 150 (Exp), Nx = 200, Nx = 200 (Exp)
Table A.2: Neural network architecture models of p_β(x|w, z): The output layers are split into 2K streams of output, where K streams return mean values and the other K streams output variances of all the clusters.

Dataset     Input   Hidden        Output
Synthetic   2       fc 120 Tanh   {Nx = 2} × 2K
MNIST       150     fc 500 Tanh   {Nx = 200} × 2K
SVHN        150     fc 500 Tanh   {Nx = 200} × 2K
Table A.3: Neural network architecture models of p_θ(y|x): The network outputs are Gaussian parameters for the synthetic dataset and Bernoulli parameters for MNIST and SVHN, where we use the logistic function to keep the value of the Bernoulli parameters between 0 and 1. An asterisk (*) indicates the use of batch normalization and a ReLU nonlinearity. For convolutional layers, the numbers in parentheses indicate stride-padding.

Dataset     Input   Hidden                                                         Output
Synthetic   2       fc 120 ReLU, 120 ReLU                                          {2} × 2
MNIST       200     500*, full-conv 64x4x4* (2-1), 32x6x6* (1-0), 16x6x6* (1-0)    28x28 (Sigmoid)
SVHN        200     500*, full-conv 246x4x4* (2-1), 128x4x4* (2-1), 64x4x4* (2-1)  32x32 (Sigmoid)
Probabilistic Numerical Methods for
PDE-constrained Bayesian Inverse Problems
Jon Cockayne∗    Chris Oates†    Tim Sullivan‡    Mark Girolami§
arXiv:1701.04006v1 [stat.ME] 15 Jan 2017
January 17, 2017
This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations.
This construction enables the solution of Bayesian inverse problems while
accounting for the impact of the discretisation of the forward problem. In
particular, this drives statistical inferences to be more conservative in the
presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse
problems. This method is tested on a challenging inverse problem with a
nonlinear forward model.
1 Introduction
Partial differential equations (PDEs) are challenging problems which often have no analytical solution and must be solved numerically. In the style of Probabilistic Numerics
(PN) [7], in this work we describe methods for probabilistically modelling the uncertainty in the true solution arising from the numerical approximation. This uncertainty
can be thought of as arising from finite computation, as formalised in the Information
Complexity literature [11]; in solving a problem numerically, we are forced to discretise
some aspect of it. In the present work we model this uncertainty as arising from taking
a finite number of evaluations of the forcing terms of the system of PDEs.
One of the core principles of probabilistic numerics is that, in complex procedures in
which multiple numerical approximations must be composed to produce a final result,
the uncertainty from each procedure can combine in a nontrivial way which can lead
to incorrect inferences. The example we take here is that of PDE constrained Bayesian
∗ University of Warwick, [email protected]
† University of Technology Sydney, [email protected]
‡ Free University of Berlin and Zuse Institute Berlin, [email protected]
§ Imperial College London and Alan Turing Institute, [email protected]
inverse problems, in which we wish to estimate parameters of a PDE model in a Bayesian
framework, based on observations of a system which is believed to be described by the
underlying PDE. In such problems it has been shown that employing an inaccurate PDE
solver in the sampling can lead to incorrect inferences in the inverse problem [3].
There has been recent interest in construction of probabilistic solvers for PDEs. Work
by [3] constructs a nonparametric posterior distribution for ODEs and PDEs by injecting
noise into standard numerical solvers in such a way as to maintain the convergence
properties of these solvers. In [8], the authors discuss a meshless method which is
similar to the method discussed herein by modelling the forcing of the PDE. This is
developed in [9], which discusses a methodology for probabilistic solution of PDEs by
an hierarchical game-theoretic argument. These latter two approaches do not examine
application to inverse problems, however.
Work from [1] discusses the interpretation of symmetric collocation as the mean function of a Gaussian process prior after conditioning on observed values of the forcing, but
applies this methodology predominantly to stochastic differential equations.
1.1 Structure of the Paper
We begin by introducing the concept of a probabilistic meshless method and giving some
theoretical results related to it. We then show how the posterior measure over the forward
solution of the PDE can be propagated to the posterior measure over parameters in a
Bayesian inverse problem. Finally we present some numerical results for a challenging
nonlinear inverse problem given by the steady-state Allen–Cahn equations.
Proofs for the presented theorems are omitted, and can be found in [2].
2 The Probabilistic Meshless Method
We now introduce the concept of a probabilistic meshless method (PMM). Consider
an open, bounded subset D of Rd with Lipschitz boundary ∂D. We seek a solution
u ∈ H(D), some Hilbert space of functions defined over D, of the following system of
operator equations
Au(x) = g(x),    x ∈ D
Bu(x) = b(x),    x ∈ ∂D.    (1)
Here A : H(D) → HA (D) and B : H(D) → HB (D) with g ∈ HA (D) and b ∈ HB (D). A
is associated with a partial differential operator and B is associated with the boundary
conditions of the system. For notational simplicity we restrict attention to systems of
two operators, however the methods discussed can be generalised to an arbitrary number
of operator equations.
We proceed in a Bayesian setting by placing a prior measure Πu on u, and determining
its posterior distribution based on a finite number of observations of the system given in
Eq. 1. In this work we focus on the most direct observations of said system; namely, we
choose sets of design points {x_{i,A}} = X_0^A ⊂ D and {x_{j,B}} = X_0^B ⊂ ∂D, for i = 1, ..., m_A and j = 1, ..., m_B. We then evaluate the right-hand side corresponding to each of the operators in the system at these points: g = [g(x_{i,A})], b = [b(x_{j,B})].
It remains to specify our prior distribution. Here we choose a Gaussian process prior
Πu = GP(m, k). Recall that a Gaussian Process is characterised by its mean function m
and its covariance function k, and the property that, if u ∼ GP(m, k) then for any set
of points {xi } ⊂ Rd , i = 1, . . . , n
u(X) ∼ N(µ, Σ),    [µ]_i = m(x_i),    [Σ]_{ij} = k(x_i, x_j).
As is common in the literature we will use a centred Gaussian process prior; Πu =
GP(0, k). Define
L = [A; B],    L̄ = [Ā  B̄],

and furthermore, for sets X = {x_i}, i = 1, ..., N and Y = {y_j}, j = 1, ..., M, let K(X, Y) denote the Gram matrix of k applied to X and Y, [K(X, Y)]_{ij} = k(x_i, y_j). Similarly [AK(X, Y)]_{ij} = A k(x_i, y_j), etc. Then

LL̄K(X_0, X_0) = [ AĀK(X_0^A, X_0^A)   AB̄K(X_0^A, X_0^B)
                   BĀK(X_0^B, X_0^A)   BB̄K(X_0^B, X_0^B) ]

LK(X_0, X) = [ AK(X_0^A, X)
               BK(X_0^B, X) ]

L̄K(X, X_0) = [ ĀK(X, X_0^A)   B̄K(X, X_0^B) ]
Here X is to be interpreted as a set of points at which we evaluate those functions drawn
from the posterior distribution, in contrast with X0 = X0A ∪X0B which is the set of points
at which evaluations of the forcing terms are taken.
Proposition 1 (Probabilistic Meshless Method). Assume A and B are linear operators. Then the posterior distribution Π_u^{g,b} over the solution of the PDE, conditional on the data g, b, is such that, for u ∼ Π_u^{g,b}, we have

u(X) ∼ N(µ, Σ)
µ = L̄K(X, X_0) [LL̄K(X_0, X_0)]^{-1} [g^T  b^T]^T
Σ = K(X, X) − L̄K(X, X_0) [LL̄K(X_0, X_0)]^{-1} LK(X_0, X)    (2)
Note that the mean function in Eq. 2 is the same as the numerical solution to the PDE
that would be obtained using the method of symmetric collocation [5].
Thus far we have not discussed the choice of prior covariance k. There are several interesting choices in the literature. Work in [8] proposes use of a covariance which encodes
information about the system through its Green’s function; [2] examined the properties
of this choice in more detail. However, reliance on the Green’s function, which is not
in general available in closed-form for complex systems, is a significant drawback. In
practice we will generally posit a prior covariance directly by examining the system in
question and selecting a prior which encodes a suitable level of differentiability.
We now present a theoretical result describing the rate of convergence of the posterior measure Π_u^{g,b}. Denote by ρ the differential order of the PDE, that is, the maximum number of derivatives of u required. Furthermore denote by β the smoothness of the prior, the number of weak derivatives that almost surely exist under the prior measure. Lastly, define h to be the "fill distance" of the design points X_0:

h = sup_{x ∈ D} min_{x' ∈ X_0} ||x − x'||_2

Theorem 2 (Rate of Convergence). For a ball B_ε(u_0) of radius ε centred on the true solution u_0 of the system (1):

Π_u^{g,b}( B_ε(u_0)^c ) = O( h^{2β−2ρ−d} )

where ^c denotes the set complement.
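The fill distance has no closed form for a general point set, but it can be approximated by taking the supremum over a dense candidate grid covering D; the small sketch below (not from the paper) does exactly that.

```python
import numpy as np

def fill_distance(X0, D_grid):
    """Approximate h = sup_{x in D} min_{x0 in X0} ||x - x0||_2,
    with the supremum taken over a dense grid D_grid covering D."""
    d2 = np.sum((D_grid[:, None, :] - X0[None, :, :]) ** 2, axis=-1)
    return np.sqrt(d2.min(axis=1).max())

# Example on D = (0, 1): 10 design points, 1000 candidate grid points.
X0 = np.random.rand(10, 1)
D_grid = np.linspace(0, 1, 1000)[:, None]
print(fill_distance(X0, D_grid))
```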
2.1 Illustrative Example: The Forward Problem
We conclude this section by examining the performance of the probabilistic meshless
method for a simple 1-dimensional PDE. Consider the system
−∇²u(x) = sin(2πx),    x ∈ (0, 1)
u(x) = 0,    x ∈ {0, 1}

the solution to which can be computed by direct integration to be u(x) = (2π)^{-2} sin(2πx). We compute the PMM solution to this PDE with a varying number of design points. In this setting the Green's function for the system is available explicitly, and so we used its associated prior covariance as suggested in [8]; full details are available in [2].
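To make Prop. 1 concrete for this example, the sketch below assembles the blocks of Eq. 2 and evaluates the posterior mean and covariance on a test grid. It is an illustrative approximation only: it uses a squared-exponential covariance with an arbitrary length-scale in place of the Green's-function covariance used for the figures, and the derivative formulas are specific to that kernel. The resulting mean coincides with the symmetric collocation solution, and Tr(Σ) shrinks as m_A grows.

```python
import numpy as np

ell = 0.2                                     # assumed length-scale (illustrative)

def k(x, y):
    """Squared-exponential covariance between two point sets."""
    r = x[:, None] - y[None, :]
    return np.exp(-r**2 / (2 * ell**2))

def Ak(x, y):
    """A = -d^2/dx^2 applied to the first argument of k; since this kernel
    is even in x - y, the same formula serves for the second argument."""
    r = x[:, None] - y[None, :]
    return (1 / ell**2 - r**2 / ell**4) * k(x, y)

def AAk(x, y):
    """A applied to both arguments of k."""
    r = x[:, None] - y[None, :]
    return (3 / ell**4 - 6 * r**2 / ell**6 + r**4 / ell**8) * k(x, y)

def pmm_posterior(x_test, m_A=20):
    xa = np.linspace(0, 1, m_A + 2)[1:-1]     # interior design points X_0^A
    xb = np.array([0.0, 1.0])                 # boundary design points X_0^B
    rhs = np.concatenate([np.sin(2 * np.pi * xa), np.zeros(2)])   # [g; b]

    G = np.block([[AAk(xa, xa), Ak(xa, xb)],  # L Lbar K(X_0, X_0)
                  [Ak(xb, xa),  k(xb, xb)]])
    G += 1e-10 * np.eye(G.shape[0])           # jitter for numerical stability

    LbarK = np.hstack([Ak(x_test, xa), k(x_test, xb)])   # Lbar K(X, X_0)
    LK = np.vstack([Ak(xa, x_test), k(xb, x_test)])      # L K(X_0, X)

    mu = LbarK @ np.linalg.solve(G, rhs)
    Sigma = k(x_test, x_test) - LbarK @ np.linalg.solve(G, LK)
    return mu, Sigma

x_test = np.linspace(0, 1, 101)
mu, Sigma = pmm_posterior(x_test)
print(np.abs(mu - np.sin(2 * np.pi * x_test) / (2 * np.pi) ** 2).max())
```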
Samples from the posterior distribution can be seen in Fig. 1; note how, even with 20
design points, there is still significant posterior uncertainty. Convergence plots as the
number of design points is increased are shown in Fig. 2.
3 Application to Bayesian Inverse Problems
We now turn to an examination of how the PMM, constructed in the previous section,
can be applied in Bayesian inverse problems. We now have a system in which we assume
the operator A depends upon some parameter θ, which we emphasise in the below
system:
A_θ u(x) = g(x),    x ∈ D
Bu(x) = b(x),    x ∈ ∂D.
[Figure 1 panels: posterior mean, truth and posterior samples of u(x) over x ∈ [0, 1], left and right panels]
Figure 1: Samples from the posterior distribution over the unknown solution to a one-dimensional PDE, with mA = 10 (left) and mA = 40 (right).
[Figure 2 panels: ||µ − u|| (left) and Tr(Σ) (right) against m_A on logarithmic axes]
Figure 2: Convergence of mean function (left) and posterior covariance trace (right) as
the number of design points mA is increased.
In a Bayesian inverse problem we place a prior distribution over θ, θ ∼ Π_θ, and seek to determine its posterior distribution Π_θ^y based on data y collected at locations {x_i} ⊂ D, i = 1, ..., n. Further details on Bayesian inverse problems can be found in [10].
Such a posterior distribution is usually intractable and must be investigated by sampling, which involves solution of the underlying system of PDEs as the sampler visits different values of θ. We assume that the data is obtained by direct observation of the solution u at these locations, corrupted with Gaussian noise

y_i = u(x_i) + ξ_i

where ξ ∼ N(0, Γ). Our likelihood is thus given by

p(y | θ, u) = N(y; u, Γ)    (3)

where u, y are each vectors in R^n, with [u]_i = u(x_i; θ) and [y]_i = y_i.
Since the solution u to the PDE system is inaccessible, it is common to replace u with an approximation û obtained by some numerical scheme. We instead use the PMM as the forward solver, obtaining a measure Π_u^{g,b} describing our uncertainty. We may then marginalise u in Eq. 3 over this measure to obtain

p_PN(y | θ) = ∫ p(y | θ, u) Π_u^{g,b}(du) = N( y; µ(θ), Γ + Σ(θ) )    (4)
where µ(θ), Σ(θ) are as in Prop. 1, and we have emphasised the dependence on θ. This is
thus similar to the standard approach of replacing u with û in Eq. 3, but we compensate
for the inaccuracy of the forward solver with an additive covariance term Σ incorporating
the uncertainty in the posterior distribution for the forward problem.
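Evaluating Eq. 4 inside a sampler or over a grid of θ values then amounts to a Gaussian log-density with an inflated covariance; the helper below is a sketch of that computation (via a Cholesky factorisation), not the authors' code.

```python
import numpy as np

def pn_loglik(y, mu_theta, Sigma_theta, Gamma):
    """log N(y; mu(theta), Gamma + Sigma(theta)), as in Eq. 4.

    Compared with the plug-in likelihood, the solver covariance
    Sigma(theta) simply inflates the observation noise covariance Gamma.
    """
    C = Gamma + Sigma_theta
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L, y - mu_theta)
    n = y.size
    return (-0.5 * (alpha @ alpha)
            - np.sum(np.log(np.diag(L)))
            - 0.5 * n * np.log(2 * np.pi))
```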
We now present a result which guarantees consistency in the inverse problem when
we replace the likelihood in Eq. 3 with that in Eq. 4.
Proposition 3 (Inverse Problem Consistency). Let Π_{θ,PN}^y be the posterior distribution which uses the PN likelihood given in Eq. 4. Assume that the posterior distribution Π_θ^y contracts such that Π_θ^y → δ(θ_0) as n → ∞, a Dirac measure centred on the true value of θ, θ_0. Then Π_{θ,PN}^y contracts such that Π_{θ,PN}^y → δ(θ_0) provided

h = o( n^{-1/(β−ρ−d/2)} )
3.1 Illustrative Example: The Inverse Problem
We now return to the previous illustrative example to demonstrate the use of a probabilistic solver in the inverse problem. Consider the system

−∇ · (θ∇u(x)) = sin(2πx),    x ∈ (0, 1)
u(x) = 0,    x ∈ {0, 1}
[Figure 3 panels: posterior densities over θ for m_A = 4, 8, 16, 32, 64; left panel "Probabilistic", right panel "Standard"]
Figure 3: Posterior distributions over θ with varying numbers of design points, on the
left using the PMM, and on the right the standard approach of using a plug-in
estimate for the PDE solution, here given by symmetric collocation.
with the goal of inferring the parameter θ. Data y_i was generated from the explicit solution to this problem with θ = 1 at locations x = 0.25, 0.75, and corrupted with Gaussian noise with distribution N(0, 0.01²).
In Fig. 3 we compare posterior distributions for θ generated with the PMM versus the
standard approach of plugging a numerical solution of the PDE into the likelihood and
ignoring discretisation error. The numerical method used in the standard approach was
symmetric collocation, the most natural comparison. Note that when using collocation
the posterior distributions are peaked and biased for small mA , and that the posterior
uncertainty does not appear to depend on the number of design points. Conversely when
using the probabilistic method we see that for small mA the posterior distributions are
wide and flat, while as mA increases the distributions peak and centre on the true value
of θ. Thus, with a standard numerical method the posteriors over θ do not take into
account the quality of the numerical solver used; for poor forward solvers based on
coarse discretisations, the posteriors produced are as confident as those produced with
a fine, accurate numerical solver. With a probabilistic forward solver the variance in
the forward solver is propagated into the inverse problem, resulting in robust inferences
even when the discretisation is coarse.
4 A Nonlinear Example
We now present an application of the methods discussed herein to a nonlinear partial
differential equation known as the steady-state Allen–Cahn system, a model from mathematical physics describing the motion of boundaries between phases in iron alloys. This
[Figure 4 panels: "Negative Stable", "Unstable" and "Positive Stable" solutions on the unit square]
Figure 4: Solutions to the Allen–Cahn system for δ = 0.04
is given by

−δ∇²u + δ^{-1}(u³ − u) = 0,    x ∈ (0, 1)²
u = +1,    x_1 ∈ {0, 1}, x_2 ∈ (0, 1)
u = −1,    x_2 ∈ {0, 1}, x_1 ∈ (0, 1)    (5)
We phrase this as an inverse problem for determining δ. This system is noteworthy
for the fact that it does not admit a unique solution; the three solutions to this system
for δ = 0.04 are shown in Fig. 4. These were generated using the deflation technique
described in [4].
Since this is a nonlinear system the posterior distribution will not be Gaussian, and
we must resort to sampling techniques to explore the posterior distribution. In brief, we
introduce a latent function z and rearrange the system as follows:
−δ∇²u − δ^{-1}u = z    (6)
δ^{-1}u³ = −z    (7)

Note that by adding Eq. 6 and Eq. 7 we return to the original equation describing the interior dynamics given in Eq. 5. However, Eq. 7 is monotonic and thus invertible; by inverting it we arrive at a new system:

−δ∇²u − δ^{-1}u = z
u = (−δz)^{1/3}
This system is equivalent to the original system but, importantly, is linear. Thus by the
introduction of z we are able to arrive at a new system which can be solved using the
PMM.
It remains to describe z, a latent function whose value is unknown. We seek to marginalise z in the likelihood

p(y | δ) = ∫ p(z | δ) ∫ p(y | u) Π_u^{g,b,z}(du) dz    (8)

where Π_u^{g,b,z} is now additionally conditioned on a known value for z. This integral is intractable. However, when sampling from the posterior distribution over δ by pseudo-marginal MCMC, it is sufficient to produce an unbiased estimate of this quantity. This is accomplished by importance sampling; we assume an improper prior p(z | δ) ∝ 1 and approximate Eq. 8 by the Monte Carlo estimate

p(y | δ) ≈ (1/M) ∑_{i=1}^{M} [ ∫ p(y | u) Π_u^{g,b,z_i}(du) ] / r(z_i | y, δ)

for z_i ∼ r(z | y, δ).
The importance distribution r(z|y, δ) is chosen by solving the original system in Eq. 5
using the techniques described in [4], with a coarse finite-element solver. This gives
estimates {û1 , û2 , û3 } for the solution given a value of δ. By applying Eq. 7 to these
estimates we obtain estimates of three values of z; {ẑ1 , ẑ2 , ẑ3 }.
To handle the multimodality in the solutions we extend the state-space of the inverse
problem to include the solution index j. The importance distribution is constructed as
a Gaussian distribution
z ∼ GP(ẑj , k)
with r(z|y, δ, j) thus the appropriate multivariate Gaussian density after the field for
z has been discretised. Discretisation points are necessarily chosen to match X0A , the
design points for u in the interior of the domain.
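A generic pseudo-marginal Metropolis–Hastings loop around such an unbiased likelihood estimate is sketched below; the prior and the estimator shown are synthetic stand-ins so that the snippet is self-contained, and are not those of the Allen–Cahn experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_marginal_mh(loglik_hat, log_prior, delta0, n_iter=5000, step=0.005):
    """Pseudo-marginal Metropolis-Hastings with a random-walk proposal.

    loglik_hat(delta) returns the log of a non-negative unbiased estimate
    of p(y | delta) (here, the importance-sampling estimate of Eq. 8).
    The estimate attached to the current state is recycled, which is what
    makes the chain target the exact posterior despite the noise.
    """
    delta, ll = delta0, loglik_hat(delta0)
    chain = []
    for _ in range(n_iter):
        prop = delta + step * rng.normal()
        lp_prop = log_prior(prop)
        if np.isfinite(lp_prop):
            ll_prop = loglik_hat(prop)
            log_accept = ll_prop + lp_prop - ll - log_prior(delta)
            if np.log(rng.uniform()) < log_accept:
                delta, ll = prop, ll_prop
        chain.append(delta)
    return np.array(chain)

# Synthetic stand-ins, purely so the sketch runs end-to-end:
log_prior = lambda d: 0.0 if 0.02 < d < 0.15 else -np.inf
loglik_hat = lambda d: -0.5 * ((d - 0.04) / 0.005) ** 2 + 0.05 * rng.normal()
samples = pseudo_marginal_mh(loglik_hat, log_prior, delta0=0.05)
```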
For application of the PMM we choose a squared-exponential prior covariance

k(x, x') = exp( −‖x − x'‖_2² / (2ℓ²) )
which is known to describe infinitely-differentiable functions. This choice is motivated
by the high differential order required by the PDE; since we must be able to apply both
the operator and the adjoint to the kernel, in this case we require that the covariance
be twice differentiable in each argument, which amounts to a four-times differentiable
covariance if the covariance chosen is isotropic.
The length-scale hyper-parameter ℓ was incorporated into the MCMC procedure, endowed with a half-Cauchy hyper-prior as recommended in [6]. The parameter of interest
δ was endowed with a uniform prior over the interval (0.02, 0.15), in which the PDE was
empirically found to consistently have three solutions.
Posterior distributions for δ generated using this methodology are shown in Fig. 5;
these are compared with posterior distributions generated using a finite-element forward
solver. In the finite-element case we see a more extreme version of the bias shown in
Fig. 3 for coarse grids, whereas when using a probabilistic forward solver the posteriors
are once again wider to account for an inaccurate forward solver.
We should also comment on the comparison to the finite-element method here; in the previous example the comparison was to the symmetric collocation method for solving PDEs, and in that case the comparison is more direct, as the solution for the PDE in symmetric collocation is simply the posterior mean from the PMM. In this case we use a
[Figure 5 panels: posterior densities over δ; left panel "Probabilistic" (PMM at five discretisation settings), right panel "Standard" (FEM with 5x5, 10x10, 25x25 and 50x50 meshes)]
Figure 5: Posterior distributions for δ obtained by use of the technique described herein
(left) versus a standard Finite Element solver that does not model discretisation error (right).
finite-element solver both to highlight the fact that the behaviour witnessed when using
symmetric collocation is not unique to that solver, and because in existing methods for
finding the multiple solutions to the Allen-Cahn equation the base numerical method
applied is the finite-element method. Furthermore we note that as the underlying numerical method becomes arbitrarily accurate, the posterior inferences made in the inverse
problem should be invariant to the forward solver used.
5 Discussion
We have shown how to construct probabilistic models for the solution of partial differential equations, which quantify the uncertainty arising from numerical discretisation of
the system. We have further shown how the uncertainty in the forward problem can be
propagated into posteriors over parameters in inverse problems. This allows robust inferences to be made in inverse problems, even when the numerical scheme used to solve the
forward problem is inaccurate, which is useful in cases where obtaining highly accurate
solutions is computationally expensive, or where we are willing to tolerate less certain
inferences in exchange for fast computation. In particular we have illustrated how this
might be used to make inferences in nonlinear systems where a variety of phenomena, such as a non-unique solution, could cause a numerical solver to fail.
Immediate extensions to this work lie in examining evolutionary systems in which the
solution is additionally a function of time; the added complexity from the additional
dimension demands more focussed attention. We also seek to examine a more generic
approach for sampling from posterior distributions for nonlinear PDEs. Furthermore we
note that the observations we have chosen for the forward problem are only one possible
choice; another attractive option is given by Galerkin schemes for approximating PDEs,
by choosing our observations to be Galerkin projections.
Lastly we seek to explore other choices of prior. The Gaussian measure is an unrealistic
option in general, as it penalises extreme values and prevents encoding such simple
properties as positivity of solutions.
6 Acknowledgements
TJS was supported by the Free University of Berlin within the Excellence Initiative of the
German Research Foundation (DFG). MG was supported by EPSRC [EP/J016934/1,
EP/K034154/1], an EPSRC Established Career Fellowship, the EU grant [EU/259348]
and a Royal Society Wolfson Research Merit Award.
The authors would like to thank John Skilling for useful discussion, Patrick Farrell
for providing code used in generating these results and François-Xavier Briol for helpful
feedback. In addition they express gratitude to the developers of the Python libraries
Autograd and GPyOpt.
An Efficient Manifold Algorithm for Constructive
Interference based Constant Envelope Precoding
arXiv:1706.02900v1 [] 9 Jun 2017
Fan Liu, Student Member, IEEE, Christos Masouros, Senior Member, IEEE, Pierluigi Vito
Amadori, Student Member, IEEE, and Huafei Sun
Abstract—In this letter, we propose a novel manifold-based
algorithm to solve the constant envelope (CE) precoding problem
with interference exploitation. For a given power budget, we
design the precoded symbols subject to the CE constraints,
such that the constructive effect of the multi-user interference
(MUI) is maximized. While the objective for the original problem
is non-differentiable on the complex plane, we consider the
smooth approximation of its real representation, and map it onto
a Riemannian manifold. By using the Riemannian conjugate
gradient (RCG) algorithm, a local minimizer can be efficiently
found for the problem. The complexity of the algorithm is
analytically derived in terms of floating-point operations (flops)
per iteration. Numerical results show that the proposed algorithm
outperforms the conventional methods on both symbol error rate
and computational complexity.
Index Terms—Constant envelope, MU-MISO downlink, massive MIMO, manifold optimization.
I. I NTRODUCTION
As one of the most promising approaches in 5G technology, massive multi-input-multi-output (mMIMO) communication systems are expected to provide significant benefits
over conventional MIMO systems by employing much larger
antenna arrays [1], [2]. Nevertheless, such systems face numerous challenges brought by the increasing number of antennas,
e.g., higher hardware costs and power consumption, which
may delay its deployment in future 5G systems. Hence, cheap
and efficient RF power amplifiers (PA) are required for making
the technology realizable in practical scenarios.
It is important to note that most power-efficient PAs are
built from non-linear components; therefore, waveforms with
low peak-to-average-power-ratio (PAPR) are needed to avoid
signal distortions when the PA is operated at the saturation
region [3]. Pioneered by [4], [5], the constant envelope precoding (CEP) has been proposed as an enabling solution,
where the MUI is minimized subject to the CE constraints.
The optimization in [5] is a non-convex non-linear least
square (NLS) problem, and is solved by sequential gradient
descent (GD) method, which converges to a local minimum. To
Manuscript received ***. This work was supported by the Engineering and
Physical Sciences Research Council (EPSRC) project EP/M014150/1 and the
China Scholarship Council (CSC).
F. Liu is with the School of Information and Electronics, Beijing Institute
of Technology, Beijing, 100081, China, and is also with the Department of
Electronic and Electrical Engineering, University College London, London,
WC1E 7JE, UK (e-mail: [email protected]).
C. Masouros and P. V. Amadori are with the Department of Electronic and
Electrical Engineering, University College London, London, WC1E 7JE, UK
(e-mail: [email protected], [email protected]).
H. Sun is with the School of Mathematics and Statistics, Beijing Institute
of Technology, Beijing, 100081, China (email: [email protected]).
further improve the performance, a cross-entropy optimization
(CEO) solver is introduced in [6]. More recently, by using
the fact that the feasible region of the CE problem can be
geometrically viewed as a complex circle manifold, a RCG
algorithm is proposed by [7], where the NLS problem is
solved with much lower complexity than both GD and CEO.
While the interference reduction (IR) methods in above works
are relatively straightforward, their performance is strongly
dependent on the constellation energy [5], which is difficult
to optimally set in advance. In addition, IR approaches ignore
that MUI is known to the base station (BS) in general, and thus
can be utilized as a source of useful power. Realizing these
facts, the previous work [8] considers a novel CEP approach
with the concept of constructive interference (CI) [9], which
can overcome the above drawbacks. Due to the CE constraints,
the CI-CEP problem is non-convex, but can be solved using
CEO solver as well. Moreover, by relaxing the constraints,
the CI-CEP problem becomes convex, thus can be solved by
standard numerical tools. However, both of the above methods
inevitably demand a large amount of computation.
Based on the previous works on manifold optimizations
[10], [11], we consider a manifold-based algorithm to solve
the CI-CEP problem in this letter. Since the objective is not
complex differentiable, we first equivalently transform the
problem into its real representation, and use a smooth upper bound to obtain a differentiable approximation. By viewing
the feasible region as an oblique manifold, a RCG algorithm
is employed to find a local minimizer of the problem. Unlike
the relaxed convex problem in [8], the proposed algorithm is
guaranteed to yield precoded symbols with exactly constant
envelopes, and has better performance than the methods of
[8] in terms of both symbol error rate (SER) and complexity.
II. S YSTEM M ODEL
We consider a multi-user multi-input-single-output (MU-MISO) downlink scenario where an N-antenna BS transmits
signals to M single-antenna users. The received signal vector
is given as
y = HT x + w,
(1)
where y = [y_1, y_2, ..., y_M]^T ∈ C^{M×1} with y_m being the received symbol for the m-th user,
x = [x_1, x_2, ..., x_N]^T ∈ C^{N×1} represents the transmitted symbols,
w = [w_1, w_2, ..., w_M]^T ∈ C^{M×1} ∼ CN(0, N_0 I) is the Gaussian noise,
and H = [h_1, h_2, ..., h_M] ∈ C^{N×M} is the channel matrix, with h_m being the channel vector for the m-th
user. Without loss of generality, the channel is assumed to be
Rayleigh fading, i.e., each entry of H follows an i.i.d. complex
Gaussian distribution with zero-mean, and is perfectly known
to the BS. The transmitted signal is expected to have constant
envelope, which is
x_n = √(P_T/N) e^{jθ_n}, ∀n,    (2)
where PT is the total transmit power, θn is the phase of the
n-th transmitted symbol.
Assume that the desired symbol for the m-th user is s_m = √E_m e^{jφ_m}, where E_m and φ_m denote the power and the phase of the symbol, respectively. The received symbol for the m-th user can be written as

y_m = s_m + (h_m^T x − s_m) + w_m,    (3)
where the second term represents the interfering signal for the
user. The total MUI power is then given by
P_MUI = Σ_{m=1}^{M} |h_m^T x − s_m|² = ||H^T x − s||²,    (4)

where s = [s_1, s_2, ..., s_M]^T is the desired symbol vector.
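As a quick illustration of (1)-(4), the following NumPy sketch draws a Rayleigh channel, builds a constant-envelope transmit vector with random phases, and evaluates the resulting MUI power. It is only a toy check of the model equations; the sizes, power budget, and noise level are illustrative assumptions, not values taken from this letter.

```python
# Toy check of the system model (1)-(4): constant-envelope transmit vector,
# received signals, and total MUI power. All sizes/values here are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, M, PT, N0 = 8, 3, 1.0, 0.1
L, u = 4, 1.0                                    # QPSK with amplitude u

H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
s = u * np.exp(1j * 2 * np.pi * rng.integers(L, size=M) / L)   # desired PSK symbols

theta = rng.uniform(0, 2 * np.pi, size=N)        # arbitrary CE phases, eq. (2)
x = np.sqrt(PT / N) * np.exp(1j * theta)

w = np.sqrt(N0 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = H.T @ x + w                                  # received vector, eq. (1)

mui = H.T @ x - s                                # per-user interfering term in (3)
P_mui = np.sum(np.abs(mui) ** 2)                 # total MUI power, eq. (4)
print(P_mui, np.allclose(P_mui, np.linalg.norm(H.T @ x - s) ** 2))
```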
III. P ROBLEMS F ORMULATION

Aiming at minimizing the MUI power, the conventional CEP approaches are designed to solve the following optimization problem [5]

min_x ||H^T x − s||²   s.t. |x_n| = √(P_T/N), ∀n.    (5)

Problem (5) is an NLS problem, which is obviously non-convex and has multiple local minima. Fortunately, it has been proven that most of the local minima yield small values [5], and can be obtained by a variety of approaches [5]–[7]. However, it should be highlighted that by treating all the interference as harmful, these techniques ignore the fact that MUI can be employed as a green signal power source to benefit the symbol demodulation. This was first proposed by [12], where the MUI is classified into constructive and destructive parts. CI-based beamformers aim at minimizing destructive and exploiting constructive interference, which enables a relaxed feasible region for the optimization [9]. Based on this, the previous work [8] focuses on maximizing the constructive effect of the MUI to achieve CE precoding, where PSK modulations are employed. We refer the reader to the above literature for detailed discussions. Here we recapture the CI-CEP problem in [8] as follows

min_x max_m |Im(t_m)| − Re(t_m) tan ψ
s.t. |x_n| = √(P_T/N), ∀n,
     t_m = (h_m^T x − s_m) e^{−jφ_m}, ∀m,    (6)

where s_m = u e^{jφ_m}, ψ = π/L, u is the amplitude of the PSK symbols, and L is the PSK modulation order. The above problem can be solved by CEO suboptimally, and has been further relaxed as a convex problem by replacing the equality constraints on x_n with inequalities, i.e., |x_n| ≤ √(P_T/N), ∀n. Such a convex approximation problem can be efficiently solved by numerical solvers, e.g., the CVX toolbox. The results are then normalized to obtain transmitted symbols with constant envelopes [8]. Nevertheless, using CEO or CVX to solve (6) requires significant computational resources. In the next section, we propose a manifold-based optimization technique to solve (6), which has much lower complexity.

IV. P ROPOSED A LGORITHM BASED ON O BLIQUE M ANIFOLD

Since Re(·) and Im(·) are not complex differentiable, we formulate the real representation of (6). First we rewrite t_m as

t_m = (h_m^T x − s_m) e^{−jφ_m} = h̃_m^T x − u,    (7)

where h̃_m = h_m e^{−jφ_m}. We then separate the real and imaginary parts of the complex notations as follows

H̃ = H̃_R + jH̃_I,  h̃_m = h̃_{Rm} + jh̃_{Im},  x = x_R + jx_I,    (8)

where H̃ = [h̃_1, h̃_2, ..., h̃_M]. It follows that

Re(t_m) = h̃_{Rm}^T x_R − h̃_{Im}^T x_I − u,   Im(t_m) = h̃_{Im}^T x_R + h̃_{Rm}^T x_I.    (9)

By using the fact that |a| = max(a, −a), and denoting β = tan ψ, we have

|Im(t_m)| − Re(t_m) tan ψ = max(g_{2m−1}, g_{2m}) − uβ,    (10)

where

g_{2m−1} = (h̃_{Im} − β h̃_{Rm})^T x_R + (h̃_{Rm} + β h̃_{Im})^T x_I,
g_{2m}   = (β h̃_{Im} − h̃_{Rm})^T x_I − (h̃_{Im} + β h̃_{Rm})^T x_R.    (11)

Denoting X̃ = √(N/P_T) [x_R, x_I]^T, the real representation of the problem can be written compactly as follows

min_X̃ max_i g_i   s.t. (X̃^T X̃)_{nn} = 1, n = 1, 2, ..., N,    (12)

where i = 1, 2, ..., 2M. It is clear that the feasible region of (12) can be given as

M = {X̃ ∈ R^{2×N} : (X̃^T X̃)_{nn} = 1, ∀n}.    (13)

We say that M forms a manifold, and X̃ is a point on M. To be more specific, M is a 2N-dimensional oblique manifold [13]. In Riemannian geometry, a manifold is defined as a set of points endowed with a locally Euclidean structure near each point. Given a point p on M, a tangent vector at p is defined as a vector that is tangent to some smooth curve on M through p. The set of all such vectors at p forms the tangent space, denoted by T_p M, which is a Euclidean space. Specifically, the tangent space at X̃ is given as

T_X̃ M = {U ∈ R^{2×N} : (X̃^T U)_{nn} = 0, ∀n}.    (14)
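To make the change of variables concrete, the short sketch below maps a constant-envelope vector x to the 2×N real matrix X̃ of (12), verifies the unit-column constraint (13), and projects an arbitrary matrix onto the tangent space of (14). Variable names and the test values are illustrative assumptions.

```python
# Real representation X~ of a CE vector and the oblique-manifold conditions (13)-(14).
import numpy as np

rng = np.random.default_rng(1)
N, PT = 8, 1.0
x = np.sqrt(PT / N) * np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # CE vector, eq. (2)

X = np.sqrt(N / PT) * np.vstack([x.real, x.imag])    # X~ = sqrt(N/PT) [x_R, x_I]^T, 2 x N
print(np.allclose(np.diag(X.T @ X), 1.0))            # unit-norm columns, eq. (13)

U = rng.standard_normal((2, N))                       # arbitrary direction
U_t = U - X * np.sum(X * U, axis=0, keepdims=True)    # projection onto T_X~ M, cf. (14)/(20)
print(np.allclose(np.diag(X.T @ U_t), 0.0))           # tangency condition of (14)
```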
Fig. 1. Riemannian conjugate gradient algorithm.

If the tangent spaces of a manifold are equipped with a smoothly varying inner product, the manifold is called a Riemannian manifold [14]. Accordingly, the family of inner products is called the Riemannian metric, which allows the existence of rich geometric structure on the manifold. Here we use the usual Euclidean inner product as the metric, which is ⟨U, V⟩_X̃ = tr(U^T V), where U, V ∈ T_X̃ M. The algorithm that we employ is the so-called Riemannian conjugate gradient (RCG) algorithm [15], which needs to first compute the gradient of the objective. Since the objective in (12) is still not differentiable, we consider the well-known smooth log-sum-exp upper bound f(X̃) for the max function, which is

g_max ≤ f(X̃) = ε log( Σ_i exp(g_i/ε) ) ≤ g_max + ε log(2M),    (15)

where ε > 0 is some small positive number. The gradient of f(X̃) is thus given as

∇_X̃ f = [∂f/∂x̃_1, ∂f/∂x̃_2, ..., ∂f/∂x̃_N],    (16)

where x̃_n is the n-th column of X̃. Noting that x_R = √(P_T/N) X̃^T(:, 1) and x_I = √(P_T/N) X̃^T(:, 2), which are the first and second columns of X̃^T respectively, we have

∂x_R/∂x̃_n = √(P_T/N) [e_n, 0],   ∂x_I/∂x̃_n = √(P_T/N) [0, e_n],    (17)

where e_n ∈ R^{N×1} has all-zero entries except that its n-th entry equals 1. Based on (17), the n-th column of the gradient is given by

∂f/∂x̃_n = √(P_T/N) · ( Σ_{m=1}^{M} [a_{n,m}, b_{n,m}] exp(g_{2m−1}/ε) + [c_{n,m}, d_{n,m}] exp(g_{2m}/ε) ) / ( Σ_{i=1}^{2M} exp(g_i/ε) ),    (18)

where a_{n,m}, b_{n,m}, c_{n,m} and d_{n,m} denote the (n, m)-th entry of the following matrices

A = H̃_I − β H̃_R,  B = −H̃_I − β H̃_R,  C = H̃_R + β H̃_I,  D = H̃_R − β H̃_I.    (19)

In the RCG algorithm, (16) is called the Euclidean gradient, and can be used to compute the Riemannian gradient, which is defined as the tangent vector belonging to T_X̃ M that indicates the steepest ascent direction of f(X̃). It can be viewed as the orthogonal projection of the Euclidean gradient onto the tangent space [16], which is given as

grad f(X̃) = P_X̃(∇_X̃ f) = ∇_X̃ f − X̃ diag(X̃^T ∇_X̃ f),    (20)

where P_X̃(·) denotes the projector and diag(·) sets all off-diagonal entries of a matrix to zero. At the k-th iteration, the descent direction Π_k is obtained as

Π_k = −grad f(X̃_k) + µ_k P_X̃_k(Π_{k−1}).    (21)

Here the projector is used as the vector transport, which maps a vector from one tangent space to another. µ_k is given by the Riemannian version of the Polak-Ribière formula, which is

µ_k = ⟨grad f(X̃_k), grad f(X̃_k) − P_X̃_k(grad f(X̃_{k−1}))⟩_X̃_k / ⟨grad f(X̃_{k−1}), grad f(X̃_{k−1})⟩_X̃_{k−1}.    (22)

The (k+1)-th update is thus given by

X̃_{k+1} = R_X̃_k(δ_k Π_k),    (23)

where R_X̃_k(·) is called the retraction, which maps a point on T_X̃_k M to M with a local rigidity condition that preserves gradients at X̃_k [16], and is given as

R_X̃_k(δ_k Π_k) = [ (X̃_k + δ_k Π_k)_1 / ||(X̃_k + δ_k Π_k)_1||, ..., (X̃_k + δ_k Π_k)_N / ||(X̃_k + δ_k Π_k)_N|| ],    (24)

where (X̃_k + δ_k Π_k)_n is the n-th column of the matrix X̃_k + δ_k Π_k, and the stepsize δ_k is obtained by backtracking line search algorithms, e.g., the Armijo rule. Fig. 1 shows a single iteration of the RCG algorithm on M, which is also summarized in Algorithm 1.

Note that the complexity of Algorithm 1 mainly comes from line 4 and line 5, where 16N² + 14MN + 18M + 16N and 4N² + 6N flops are required respectively, leading to a total complexity of O(N²) for each iteration. By contrast, the complexities of GD and RCG-IR are O(MN²) and O(MN) per iteration [5], [7], respectively. For CEO, the complexity is O(KMN) in each iteration [8], where K stands for the number of random samples, which may be much larger than M and N. While RCG-IR requires less computation than the proposed algorithm, the latter brings a significant performance gain, as we will show in the next section, and therefore offers a favourable performance-complexity tradeoff.
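The inequality in (15) is easy to verify numerically. The short sketch below evaluates the smooth surrogate for a random set of 2M metrics g_i and checks that it is sandwiched between g_max and g_max + ε log(2M); the sizes and the value of ε are illustrative assumptions.

```python
# Numerical check of the log-sum-exp bound (15): g_max <= f <= g_max + eps*log(2M).
import numpy as np

rng = np.random.default_rng(2)
M, eps = 10, 0.05
g = rng.standard_normal(2 * M)                       # stand-ins for the 2M metrics of (11)

g_max = g.max()
f = eps * np.log(np.sum(np.exp((g - g_max) / eps))) + g_max   # stable evaluation of (15)
print(g_max <= f <= g_max + eps * np.log(2 * M))     # prints True
```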
Fig. 2. Numerical results. (a) Average execution time vs. number of users for different algorithms; (b) SER vs. SNR for different algorithms; (c) SER vs.
User for different algorithms.
Algorithm 1 RCG for CI-based CEP
Input: s, H, ∆ > 0, kmax > 0.
Output: Local minimizer X̃∗ for (12).
1. Initialize X̃_0 ∈ M randomly, set Π_0 = −grad f(X̃_0), k = 0,
while k ≤ k_max and ||grad f(X̃_k)||_F ≥ ∆ do
    2. k = k + 1,
    3. Compute the stepsize δ_{k−1} by the Armijo rule, and set X̃_k using the retraction defined in (23),
    4. Compute µ_k by (22),
    5. Compute Π_k by (21).
end while
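For readers who want to experiment with Algorithm 1, the following self-contained NumPy sketch implements one possible version of the RCG iteration on the oblique manifold, using the smoothed objective (15), the projection (20), the Polak-Ribière coefficient (22), the column-normalising retraction (24), and an Armijo backtracking line search. It is a minimal illustration written from the equations above, not the authors' code; the function name, the random test setup, and the Armijo constants are assumptions.

```python
# Minimal RCG sketch for the CI-CEP problem (12), following (15)-(24).
# Function/parameter names and the toy setup are illustrative assumptions.
import numpy as np

def rcg_ci_cep(H, s, u, psi, PT, eps=0.01, k_max=200, delta=1e-4, seed=0):
    """Return a constant-envelope vector x minimising the smoothed CI metric."""
    rng = np.random.default_rng(seed)
    N, M = H.shape
    beta = np.tan(psi)
    Ht = H * np.exp(-1j * np.angle(s))[None, :]      # h~_m = h_m e^{-j phi_m}
    HtR, HtI = Ht.real, Ht.imag
    # g_{2m-1} = A[:,m].x_R + C[:,m].x_I ;  g_{2m} = B[:,m].x_R + D[:,m].x_I
    A, C = HtI - beta * HtR, HtR + beta * HtI
    B, D = -(HtI + beta * HtR), beta * HtI - HtR
    scale = np.sqrt(PT / N)                          # x_R = scale*X[0], x_I = scale*X[1]

    def f_and_grad(X):
        xR, xI = scale * X[0], scale * X[1]
        g = np.empty(2 * M)
        g[0::2], g[1::2] = A.T @ xR + C.T @ xI, B.T @ xR + D.T @ xI
        z = np.exp((g - g.max()) / eps)
        w = z / z.sum()                              # softmax weights of the surrogate (15)
        f = eps * np.log(z.sum()) + g.max()
        G = scale * np.vstack([A @ w[0::2] + B @ w[1::2],
                               C @ w[0::2] + D @ w[1::2]])   # Euclidean gradient
        return f, G

    proj = lambda X, U: U - X * np.sum(X * U, axis=0, keepdims=True)   # projection (20)
    retract = lambda Y: Y / np.linalg.norm(Y, axis=0, keepdims=True)   # retraction (24)

    X = retract(rng.standard_normal((2, N)))
    f, G = f_and_grad(X)
    rgrad = proj(X, G)
    P = -rgrad
    for _ in range(k_max):
        if np.linalg.norm(rgrad) < delta:
            break
        step, slope = 1.0, np.sum(rgrad * P)         # Armijo backtracking line search
        while True:
            X_new = retract(X + step * P)
            f_new, G_new = f_and_grad(X_new)
            if f_new <= f + 1e-4 * step * slope or step < 1e-10:
                break
            step *= 0.5
        rgrad_new = proj(X_new, G_new)
        mu = np.sum(rgrad_new * (rgrad_new - proj(X_new, rgrad))) / np.sum(rgrad * rgrad)
        P = -rgrad_new + mu * proj(X_new, P)         # conjugate direction, (21)-(22)
        X, f, rgrad = X_new, f_new, rgrad_new
    return scale * (X[0] + 1j * X[1])                # CE vector: |x_n| = sqrt(PT/N)

# Toy usage: N = 16 antennas, M = 4 users, QPSK (psi = pi/4), unit power budget.
rng = np.random.default_rng(1)
N, M, L, PT, u = 16, 4, 4, 1.0, 1.0
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
s = u * np.exp(2j * np.pi * rng.integers(L, size=M) / L)
x = rcg_ci_cep(H, s, u, psi=np.pi / L, PT=PT)
print(np.allclose(np.abs(x), np.sqrt(PT / N)))       # constant-envelope check
```

By construction the returned vector satisfies the CE constraint exactly, since the retraction keeps every column of X̃ on the unit circle.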
V. N UMERICAL R ESULTS
In this section, numerical results based on Monte Carlo
simulations have been provided to compare the performance of
different algorithms. We consider the following 6 algorithms:
• The proposed RCG algorithm for CI (RCG-CI);
• Convex relaxation for CI (CVX-CI) [8];
• Cross-entropy optimization for CI (CEO-CI) [8];
• RCG algorithm for IR (RCG-IR) [7];
• Gradient descent algorithm for IR (GD-IR) [5];
• Cross-entropy optimization for IR (CEO-IR) [6].
Without loss of generality, we use QPSK modulation for all the
approaches. We set u = 1, ∀m, which is a common assumption
in the related literature because the optimal u is
difficult to determine for IR methods [5], [6], while an arbitrary
u can be accepted by CI methods [8]. We also assume that
PT = 1, N = 64 for all the algorithms, and each entry of the
channel H follows a standard complex Gaussian distribution,
i.e., hn,m ∼ CN (0, 1) , ∀n, ∀m. For CEO methods, we use
the same parameter configuration as [8], which is T = 1000
(the number of iterations), K = 500 (the number of initialized
random samples), ρ = 0.05 (quantile), α = 0.08 (the smooth
parameter). For GD-IR, the number of iterations is set as 50.
While the analytic complexity per iteration of the most
algorithms has already been given, we compare the overall
complexity in terms of average execution time in Fig. 2 (a)
since it is difficult to specify the complexity of the CVX-CI
approach. The simulation is performed on a computer with an Intel
Core i7-4790 CPU at 3.6 GHz and 32 GB RAM. As expected,
the RCG methods require the least execution time to solve the
problem, while the other methods need much more. Although the
proposed RCG-CI algorithm is more complex than RCG-IR
per iteration, the total time needed is still comparable
with the latter. More importantly, RCG-CI is robust to an
increasing number of users because its complexity is mainly determined
by the number of BS antennas.
In Fig. 2 (b), we show the error performance of all 6
approaches in terms of SER with increasing transmit signal-to-noise ratio (SNR), where M = 20 and SNR = P_T/N_0. Note that
all the IR methods show negligible difference under the given
parameter configuration, and all the CI methods outperform
the IR methods thanks to the utilization of the MUI power.
It is worth noting that the proposed RCG-CI has the best
performance among all the 6 approaches with 2dB gain over
IR methods, and 1dB gain against the CVX-CI algorithm.
We further consider the error performance with increased
number of users in Fig. 2 (c), where the SNR is fixed at 8dB
with the number of users ranging from 12 to 24. It can be
observed that the SER becomes worse as the number of users
grows, due to the reduction of the Degrees of Freedom (DoFs).
Once again, we see that the proposed RCG-CI achieves the
lowest SER among all the approaches, and the CI methods
achieve far better performance than the IR methods, while the
latter remain at an SER of about 10^−2 for all user numbers.
VI. C ONCLUSION
A low-complexity manifold optimization algorithm has been
introduced to solve the CEP problem with the exploitation
of the MUI power. By viewing the feasible region of the
optimization as an oblique manifold, the proposed method can
efficiently find a near-optimal solution using the Riemannian
conjugate gradient algorithm. Numerical results show that the
proposed RCG-CI algorithm outperforms the existing 5 other
approaches in terms of error performance, with a comparable
complexity to the fastest RCG-IR algorithm. It is further
shown that when the DoFs of the system are limited, the
proposed RCG-CI still performs far better than other methods.
R EFERENCES
[1] E. G. Larsson, O. Edfors, F. Tufvesson, and T. L. Marzetta, “Massive
MIMO for next generation wireless systems,” IEEE Communications
Magazine, vol. 52, no. 2, pp. 186–195, February 2014.
[2] T. L. Marzetta, “Noncooperative cellular wireless with unlimited numbers of base station antennas,” IEEE Transactions on Wireless Communications, vol. 9, no. 11, pp. 3590–3600, November 2010.
[3] V. Mancuso and S. Alouf, “Reducing costs and pollution in cellular
networks,” IEEE Communications Magazine, vol. 49, no. 8, pp. 63–71,
August 2011.
[4] S. K. Mohammed and E. G. Larsson, “Single-user beamforming in largescale MISO systems with per-antenna constant-envelope constraints: The
doughnut channel,” IEEE Transactions on Wireless Communications,
vol. 11, no. 11, pp. 3992–4005, November 2012.
[5] ——, “Per-antenna constant envelope precoding for large multi-user
MIMO systems,” IEEE Transactions on Communications, vol. 61, no. 3,
pp. 1059–1071, March 2013.
[6] J. C. Chen, C. K. Wen, and K. K. Wong, “Improved constant envelope
multiuser precoding for massive MIMO systems,” IEEE Communications Letters, vol. 18, no. 8, pp. 1311–1314, Aug 2014.
[7] J. C. Chen, “Low-PAPR precoding design for massive multiuser MIMO
systems via riemannian manifold optimization,” IEEE Communications
Letters, vol. 21, no. 4, pp. 945–948, April 2017.
[8] P. V. Amadori and C. Masouros, “Constant envelope precoding by
interference exploitation in phase shift keying-modulated multiuser
transmission,” IEEE Transactions on Wireless Communications, vol. 16,
no. 1, pp. 538–550, Jan 2017.
[9] C. Masouros and G. Zheng, “Exploiting known interference as green
signal power for downlink beamforming optimization,” IEEE Transactions on Signal Processing, vol. 63, no. 14, pp. 3628–3640, July 2015.
[10] X. Duan, H. Sun, L. Peng, and X. Zhao, “A natural gradient descent
algorithm for the solution of discrete algebraic lyapunov equations based
on the geodesic distance,” Applied Mathematics and Computation, vol.
219, no. 19, pp. 9899–9905, 2013.
[11] C. Li, E. Zhang, L. Jiu, and H. Sun, “Optimal control on special euclidean group via natural gradient algorithm,” Science China Information
Sciences, vol. 59, no. 11, p. 112203, 2016.
[12] C. Masouros and E. Alsusa, “Dynamic linear precoding for the exploitation of known interference in MIMO broadcast systems,” IEEE
Transactions on Wireless Communications, vol. 8, no. 3, pp. 1396–1404,
March 2009.
[13] N. Boumal, B. Mishra, P.-A. Absil, and R. Sepulchre, “Manopt, a
Matlab toolbox for optimization on manifolds,” The Journal of Machine
Learning Research, vol. 15, no. 1, pp. 1455–1459, 2014.
[14] P. Petersen, Riemannian geometry. Springer, 1998, vol. 171.
[15] N. Boumal, “Optimization and estimation on manifolds,” Ph.D. dissertation, Université catholique de Louvain, February 2014.
[16] P.-A. Absil, R. Mahony, and R. Sepulchre, Optimization algorithms on
matrix manifolds. Princeton University Press, 2009.
Under review as a conference paper at ICLR 2018
R EINFORCEMENT L EARNING A LGORITHM S ELECTION
Romain Laroche1 and Raphaël Féraud2
1
Microsoft Research Maluuba, Montréal, Canada
2
Orange Labs, Lannion, France
arXiv:1701.08810v3 [stat.ML] 14 Nov 2017
A BSTRACT
This paper formalises the problem of online algorithm selection in the context of
Reinforcement Learning. The setup is as follows: given an episodic task and a
finite number of off-policy RL algorithms, a meta-algorithm has to decide which
RL algorithm is in control during the next episode so as to maximize the expected
return. The article presents a novel meta-algorithm, called Epochal Stochastic
Bandit Algorithm Selection (ESBAS). Its principle is to freeze the policy updates
at each epoch, and to leave a rebooted stochastic bandit in charge of the algorithm
selection. Under some assumptions, a thorough theoretical analysis demonstrates
its near-optimality considering the structural sampling budget limitations. ESBAS
is first empirically evaluated on a dialogue task where it is shown to outperform
each individual algorithm in most configurations. ESBAS is then adapted to a true
online setting where algorithms update their policies after each transition, which
we call SSBAS. SSBAS is evaluated on a fruit collection task where it is shown to
adapt the stepsize parameter more efficiently than the classical hyperbolic decay,
and on an Atari game, where it improves the performance by a wide margin.
1
I NTRODUCTION
Reinforcement Learning (RL, Sutton & Barto (1998)) is a machine learning framework for optimising
the behaviour of an agent interacting with an unknown environment. For the most practical problems,
such as dialogue or robotics, trajectory collection is costly and sample efficiency is the main key
performance indicator. Consequently, when applying RL to a new problem, one must carefully choose
in advance a model, a representation, an optimisation technique and their parameters. Facing the
complexity of choice, RL and domain expertise is not sufficient. Confronted to the cost of data, the
popular trial and error approach shows its limits.
We develop an online learning version (Gagliolo & Schmidhuber, 2006; 2010) of Algorithm Selection
(AS, Rice (1976); Smith-Miles (2009); Kotthoff (2012)). It consists in testing several algorithms on
the task and in selecting the best one at a given time. For clarity, throughout the whole article, the
algorithm selector is called a meta-algorithm, and the set of algorithms available to the meta-algorithm
is called a portfolio. The meta-algorithm maximises an objective function such as the RL return.
Beyond the sample efficiency objective, the online AS approach also addresses four practical
problems for online RL-based systems. First, it improves robustness: if an algorithm fails to terminate,
or outputs an aberrant policy, it will be dismissed and others will be selected instead. Second,
convergence guarantees and empirical efficiency may be united by covering the empirically efficient
algorithms with slower algorithms that have convergence guarantees. Third, it enables curriculum
learning: shallow models control the policy in the early stages, while deep models discover the best
solution in late stages. And fourth, it allows the objective function to be defined as something other than an RL return.
A fair algorithm selection implies a fair budget allocation between the algorithms, so that they can
be equitably evaluated and compared. In order to comply with this requirement, the reinforcement
algorithms in the portfolio are assumed to be off-policy, and are trained on every trajectory, regardless of
which algorithm controls it. Section 2 provides a unifying view of RL algorithms that allows information sharing between algorithms, whatever their state representations and optimisation techniques.
It also formalises the problem of online selection of off-policy RL algorithms.
Next, Section 3 presents the Epochal Stochastic Bandit AS (ESBAS), a novel meta-algorithm
addressing the online off-policy RL AS problem. Its principle is to divide the time-scale into epochs
of exponential length inside which the algorithms are not allowed to update their policies. During
each epoch, the algorithms have therefore a constant policy and a stochastic multi-armed bandit
can be in charge of the AS with strong pseudo-regret theoretical guarantees. A thorough theoretical
analysis provides for ESBAS upper bounds. Then, Section 4 empirically evaluates ESBAS on a
dialogue task where it is shown to outperform each individual algorithm in most configurations.
Afterwards, in Section 5, ESBAS, which is initially designed for a growing batch RL setting, is
adapted to a true online setting where algorithms update their policies after each transition, which we
call SSBAS. It is evaluated on a fruit collection task where it is shown to adapt the stepsize parameter
more efficiently than the classical hyperbolic decay, and on Q*bert, where running several DQN with
different network size and depth in parallel allows to improve the final performance by a wide margin.
Finally, Section 6 concludes the paper with prospective ideas of improvement.
2 ALGORITHM SELECTION FOR RL

2.1 UNIFYING VIEW OF RL ALGORITHMS

The goal of this section is to enable information sharing between algorithms, even though they are considered as black boxes. We propose to share their trajectories expressed in a universal format: the interaction process.

Figure 1: RL framework: after performing action a(t), the agent perceives observation o(t + 1) and receives reward r(t + 1).
Reinforcement Learning (RL) consists in learning through trial and error to control an agent behaviour
in a stochastic environment: at each time step t ∈ N, the agent performs an action a(t) ∈ A, and then
perceives from its environment a signal o(t) ∈ Ω called observation, and receives a reward r(t) ∈ R,
bounded between Rmin and Rmax . Figure 1 illustrates the RL framework. This interaction process is
not Markovian: the agent may have an internal memory.
In this article, the RL problem is assumed to be episodic. Let us introduce two time scales with
different notations. First, let us define meta-time as the time scale for AS: at one meta-time τ
corresponds a meta-algorithm decision, i.e. the choice of an algorithm and the generation of a full
episode controlled with the policy determined by the chosen algorithm. Its realisation is called
a trajectory. Second, RL-time is defined as the time scale inside a trajectory, at one RL-time t
corresponds one triplet composed of an observation, an action, and a reward.
Let E denote the space of trajectories. A trajectory ε_τ ∈ E collected at meta-time τ is formalised
as a sequence of (observation, action, reward) triplets: ε_τ = ⟨o_τ(t), a_τ(t), r_τ(t)⟩_{t∈[1,|ε_τ|]} ∈ E,
where |ε_τ| is the length of trajectory ε_τ. The objective is, given a discount factor 0 ≤ γ < 1,
to generate trajectories with high discounted cumulative reward, also called return, and noted
µ(ε_τ) = Σ_{t=1}^{|ε_τ|} γ^{t−1} r_τ(t). Since γ < 1 and R is bounded, the return is also bounded. The trajectory
set at meta-time T is denoted by D_T = {ε_τ}_{τ∈[1,T]} ∈ E^T. A sub-trajectory of ε_τ until RL-time t is
called the history at RL-time t and written ε_τ(t) with t ≤ |ε_τ|. The history records what happened in
episode ε_τ until RL-time t: ε_τ(t) = ⟨o_τ(t'), a_τ(t'), r_τ(t')⟩_{t'∈[1,t]} ∈ E.
The goal of each RL algorithm α is to find a policy π* : E → A which yields optimal expected
returns. Such an algorithm α is viewed as a black box that takes as an input a trajectory set D ∈ E⁺,
where E⁺ is the ensemble of trajectory sets of undetermined size: E⁺ = ∪_{T∈N} E^T, and that outputs
a policy π_D^α. Consequently, an RL algorithm is formalised as follows: α : E⁺ → (E → A).
Such a high level definition of the RL algorithms allows to share trajectories between algorithms: a
trajectory as a sequence of observations, actions, and rewards can be interpreted by any algorithm
in its own decision process and state representation. For instance, RL algorithms classically rely
on an MDP defined on an explicit or implicit state space representation S_D^α thanks to a projection
Φ_D^α : E → S_D^α. Then, α trains its policy π_{D_T}^α on the trajectories projected on its state space
representation. Off-policy RL optimisation techniques compatible with this approach are numerous in
the literature (Watkins, 1989; Ernst et al., 2005; Mnih et al., 2013). As well, any post-treatment of the
state set, any alternative decision process (Lovejoy, 1991), and any off-policy algorithm may be used.
The algorithms are defined here as black boxes and the considered meta-algorithms will be indifferent
to how the algorithms compute their policies, granted they satisfy the off-policy assumption.
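As an illustration of this black-box view, the sketch below fixes a universal trajectory format (lists of observation-action-reward triplets) and a minimal off-policy algorithm interface mapping a trajectory set to a policy. The class, type aliases, and the toy tabular learner are illustrative assumptions, not an API from this paper.

```python
# Minimal Python rendering of the unifying view: a universal trajectory format
# shared by all algorithms, and an algorithm as a black box E+ -> (E -> A).
# Names and the toy tabular Q-learner are illustrative assumptions.
from typing import Callable, Hashable, List, Tuple

Triplet = Tuple[Hashable, Hashable, float]        # (observation, action, reward)
Trajectory = List[Triplet]
Policy = Callable[[Trajectory], Hashable]         # history -> next action

class TabularQAlgorithm:
    """Off-policy black box: consumes any trajectory set, outputs a policy."""
    def __init__(self, actions, gamma: float = 0.95, lr: float = 0.1):
        self.actions, self.gamma, self.lr = list(actions), gamma, lr

    def train(self, trajectories: List[Trajectory]) -> Policy:
        q: dict = {}
        for traj in trajectories:                 # learns from trajectories generated
            for i, (obs, act, rew) in enumerate(traj):     # by *any* behaviour policy
                nxt = traj[i + 1][0] if i + 1 < len(traj) else None
                best_next = max((q.get((nxt, a), 0.0) for a in self.actions), default=0.0)
                target = rew + (self.gamma * best_next if nxt is not None else 0.0)
                old = q.get((obs, act), 0.0)
                q[(obs, act)] = old + self.lr * (target - old)

        def policy(history: Trajectory) -> Hashable:
            obs = history[-1][0] if history else None      # here the state is just o(t)
            return max(self.actions, key=lambda a: q.get((obs, a), 0.0))

        return policy                              # an element of (E -> A)
```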
2.2
O NLINE ALGORITHM SELECTION
The online learning approach is tackled in this article: different algorithms are experienced and evaluated during the data collection. Since it boils down to a classical exploration/exploitation trade-off, multi-armed bandits (Bubeck & Cesa-Bianchi, 2012) have been used for combinatorial search AS (Gagliolo & Schmidhuber, 2006; 2010) and evolutionary algorithm meta-learning (Fialho et al., 2010). The online AS problem for off-policy RL is novel and we define it as follows:

• D ∈ E⁺ is the current trajectory set;
• P = {α_k}_{k∈[1,K]} is the portfolio of off-policy RL algorithms;
• µ : E → R is the objective function, generally set as the RL return.

Pseudo-code 1: Online RL AS setting
Data: D_0 ← ∅: trajectory set
Data: P ← {α_k}_{k∈[1,K]}: algorithm portfolio
Data: µ : E → R: the objective function
for τ ← 1 to ∞ do
    Select σ(D_{τ−1}) = σ(τ) ∈ P;
    Generate trajectory ε_τ with policy π_{D_{τ−1}}^{σ(τ)};
    Get return µ(ε_τ);
    D_τ ← D_{τ−1} ∪ {ε_τ};
end
Pseudo-code 1 formalises the online RL AS setting. A meta-algorithm is defined as a function from
a trajectory set to the selection of an algorithm: σ : E + → P. The meta-algorithm is queried at
each meta-time τ = |D_{τ−1}|+1, with input D_{τ−1}, and it outputs algorithm σ(D_{τ−1}) = σ(τ) ∈ P,
controlling with its policy π_{D_{τ−1}}^{σ(τ)} the generation of the trajectory ε_τ in the stochastic environment.
The final goal is to optimise the cumulative expected return. It is the expectation of the sum of rewards
obtained after a run of T trajectories:

E_σ[ Σ_{τ=1}^{T} µ(ε_τ) ] = E_σ[ Σ_{τ=1}^{T} Eµ_{D_{τ−1}^σ}^{σ(τ)} ],    (1)
with Eµ_D^α = E_{π_D^α}[µ(ε)] as a condensed notation for the expected return of policy π_D^α, trained on
trajectory set D by algorithm α. Equation 1 transforms the cumulative expected return into two nested
expectations. The outside expectation Eσ assumes the meta-algorithm σ fixed and averages over
the trajectory set generation and the corresponding algorithms policies. The inside expectation Eµ
assumes the policy fixed and averages over its possible trajectories in the stochastic environment.
Nota bene: there are three levels of decision: meta-algorithm σ selects algorithm α that computes
policy π that is in control. In this paper, the focus is at the meta-algorithm level.
2.3
M ETA - ALGORITHM EVALUATION
In order to evaluate the meta-algorithms, let us formulate two additional notations. First, the optimal
expected return Eµ∗∞ is defined as the highest expected return achievable by a policy of an algorithm
in portfolio P. Second, for every algorithm α in the portfolio, let us define σ α as its canonical metaalgorithm, i.e. the meta-algorithm that always selects algorithm α: ∀τ , σ α (τ ) = α. The absolute
pseudo-regret ρσabs (T ) defines the regret as the loss for not having controlled the trajectory with an
optimal policy:

ρ_abs^σ(T) = T·Eµ_∞^* − E_σ[ Σ_{τ=1}^{T} Eµ_{D_{τ−1}^σ}^{σ(τ)} ].    (2)
It is worth noting that an optimal meta-algorithm will unlikely yield a null regret because a large
part of the absolute pseudo-regret is caused by the sub-optimality of the algorithm policies when
the trajectory set is still of limited size. Indeed, the absolute pseudo-regret considers the regret for
not selecting an optimal policy: it takes into account both the pseudo-regret of not selecting the best
algorithm and the pseudo-regret of the algorithms for not finding an optimal policy. Since the metaalgorithm does not interfere with the training of policies, it ought not account for the pseudo-regret
related to the latter.
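As a small numerical illustration of definitions (2) and (3) above and below, the sketch here computes the absolute and short-sighted pseudo-regrets from a table of synthetic expected returns Eµ^α for each algorithm at each meta-time, together with a selection sequence. The return curves, the optimal return value, and the selection rule are made-up assumptions used only to exercise the formulas.

```python
# Synthetic illustration of the pseudo-regret definitions (2) and (3).
# The expected-return table and the selection sequence are made-up numbers.
import numpy as np

T = 1000
taus = np.arange(1, T + 1)
# Assumed expected returns of two algorithms' current policies at each meta-time:
returns = np.vstack([1.0 - 1.0 / np.sqrt(taus),      # algorithm 0: fast early learner
                     1.2 - 5.0 / np.sqrt(taus)])     # algorithm 1: better asymptotically
mu_star = 1.2                                        # optimal expected return Eµ*_inf

sigma = np.where(taus < 100, 0, 1)                   # some meta-algorithm's selections
selected = returns[sigma, taus - 1]

rho_abs = T * mu_star - selected.sum()               # absolute pseudo-regret, eq. (2)
rho_ss = (returns.max(axis=0) - selected).sum()      # short-sighted pseudo-regret, eq. (3)
print(round(rho_abs, 1), round(rho_ss, 1))
```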
2.4
R ELATED WORK
Related to AS for RL, Schweighofer & Doya (2003) use meta-learning to tune a fixed RL algorithm
in order to fit observed animal behaviour, which is a very different problem to ours. In Cauwet et al.
(2014); Liu & Teytaud (2014), the RL AS problem is solved with a portfolio composed of online
RL algorithms. The main limitation of these works lies in the fact that on-policy algorithms
were used, which prevents them from sharing trajectories among algorithms (Cauwet et al., 2015).
Meta-learning specifically for the eligibility trace parameter has also been studied in White & White
(2016). Wang et al. (2016) study the learning process of RL algorithms and selects the best one for
learning faster on a new task. This work is related to batch AS.
An intuitive way to solve the AS problem is to consider algorithms as arms in a multi-armed bandit
setting. The bandit meta-algorithm selects the algorithm controlling the next trajectory ε and the
objective function µ(ε) constitutes the reward of the bandit. The aim of prediction with expert advice
is to minimise the regret against the best expert of a set of predefined experts. When the experts learn
during time, their performances evolve and hence the sequence of expert rewards is non-stationary.
The exponential weight algorithms (Auer et al., 2002b; Cesa-Bianchi & Lugosi, 2006) are designed
for prediction with expert advice when the sequence of rewards of experts is generated by an oblivious
adversary. This approach has been extended for competing against the best sequence of experts by
adding in the update of weights a forgetting factor proportional to the mean reward (see Exp3.S in
Auer et al. (2002b)), or by combining Exp3 with a concept drift detector Allesiardo & Féraud (2015).
The exponential weight algorithms have been extended to the case where the rewards are generated
by any sequence of stochastic processes of unknown means (Besbes et al., 2014). The stochastic
bandit algorithm such as UCB can be extended to the case of switching bandits using a discount
factor or a window to forget the past Garivier & Moulines (2011). This class of switching bandit
algorithms is not designed for experts that learn and hence evolve at each time step.
3 EPOCHAL STOCHASTIC BANDIT

ESBAS description – To solve the off-policy RL AS problem, we propose a novel meta-algorithm called Epochal Stochastic Bandit AS (ESBAS). Because of the non-stationarity induced by the algorithm learning, the stochastic bandit cannot directly select algorithms. Instead, the stochastic bandit can choose fixed policies. To comply with this constraint, the meta-time scale is divided into epochs inside which the algorithms' policies cannot be updated: the algorithms optimise their policies only when epochs start, in such a way that the policies are constant inside each epoch. As a consequence, and since the returns are bounded, at each new epoch the problem can rigorously be cast into an independent stochastic K-armed bandit Ξ, with K = |P|.

Pseudo-code 2: ESBAS with UCB1
Data: D_0, P, µ: the online RL AS setting
for β ← 0 to ∞ do
    for α_k ∈ P do
        π_{D_{2^β−1}}^k: policy learnt by α_k on D_{2^β−1}
    end
    n ← 0, ∀α_k ∈ P, n_k ← 0, and x_k ← 0
    for τ ← 2^β to 2^{β+1} − 1 do
        α_{kmax} = argmax_{α_k∈P} ( x_k + ξ √(log(n)/n_k) )
        Generate trajectory ε_τ with policy π_{D_{2^β−1}}^{kmax}
        x_{kmax} ← (n_{kmax} x_{kmax} + µ(ε_τ)) / (n_{kmax} + 1)
        n_{kmax} ← n_{kmax} + 1 and n ← n + 1
    end
end
stochastic K-armed bandit Ξ, with K = |P|.
The ESBAS meta-algorithm is formally sketched in Pseudo-code 2 embedding UCB1 Auer et al.
(2002a) as the stochastic K-armed bandit Ξ. The meta-algorithm takes as an input the set of algorithms
in the portfolio. Meta-time scale is fragmented into epochs of exponential size. The β th epoch lasts
2β meta-time steps, so that, at meta-time τ = 2β , epoch β starts. At the beginning of each epoch,
the ESBAS meta-algorithm asks each algorithm in the portfolio to update their current policy. Inside
an epoch, the policy is never updated anymore. At the beginning of each epoch, a new Ξ instance is
reset and run. During the whole epoch, Ξ selects at each meta-time step the algorithm in control of
the next trajectory.
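A compact Python sketch of Pseudo-code 2 is given below: epochs of doubling length, policies frozen inside each epoch, and a freshly reset UCB1 bandit choosing the algorithm that controls every trajectory. The portfolio interface (train(trajectories) -> policy), the environment stub run_episode, and the exploration constant ξ are illustrative assumptions.

```python
# Sketch of ESBAS (Pseudo-code 2): freeze policies per epoch, run UCB1 over algorithms.
# `algos[k].train(trajectories)` -> policy and `run_episode(policy)` -> (trajectory, return)
# are assumed interfaces for this illustration.
import math, random

def esbas(algos, run_episode, n_epochs=10, xi=math.sqrt(2)):
    data, returns = [], []
    for beta in range(n_epochs):
        policies = [alg.train(data) for alg in algos]     # policies fixed for the epoch
        counts = [0] * len(algos)                         # fresh bandit at each epoch
        means = [0.0] * len(algos)
        n = 0
        for _ in range(2 ** beta):                        # epoch beta lasts 2^beta steps
            untried = [k for k, c in enumerate(counts) if c == 0]
            if untried:
                k = random.choice(untried)                # pull each arm once first
            else:
                k = max(range(len(algos)),
                        key=lambda j: means[j] + xi * math.sqrt(math.log(n) / counts[j]))
            traj, ret = run_episode(policies[k])          # trajectory controlled by algo k
            data.append(traj)                             # shared with every algorithm
            means[k] = (counts[k] * means[k] + ret) / (counts[k] + 1)
            counts[k] += 1
            n += 1
            returns.append(ret)
    return returns
```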
Theoretical analysis – ESBAS intends to minimise the regret for not choosing the algorithm
yielding the maximal return at a given meta-time τ . It is short-sighted: it does not intend to optimise
the algorithms' learning. We define the short-sighted pseudo-regret as follows:

ρ_ss^σ(T) = E_σ[ Σ_{τ=1}^{T} ( max_{α∈P} Eµ_{D_{τ−1}^σ}^α − Eµ_{D_{τ−1}^σ}^{σ(τ)} ) ].    (3)
The short-sighted pseudo-regret depends on the gaps ∆α
β : the difference of expected return between
the best algorithm during epoch β and algorithm α. The smallest non null gap at epoch β is noted
∆†β . We write its limit when β tends to infinity with ∆†∞ .
Based on several assumptions, three theorems show that ESBAS absolute pseudo-regret can be
expressed in function of the absolute pseudo-regret of the best canonical algorithm and ESBAS shortsighted pseudo-regret. They also provide upper bounds on the ESBAS short-sighted pseudo-regret.
The full theoretical analysis can be found in the supplementary material, Section B. We provide here
an intuitive overlook of its results. Table 1 numerically reports those bounds for a two-fold portfolio,
depending on the nature of the algorithms. It must be read by line. According to the first column:
the order of magnitude of ∆†β , the ESBAS short-sighted pseudo-regret bounds are displayed in the
second column, and the third and fourth columns display the ESBAS absolute pseudo-regret bounds,
also depending on the order of magnitude of ρ_abs^{σ*}(T).
Regarding the short-sighted upper bounds, the main result appears in the last line, when the algorithms converge to policies with different performance: ESBAS logarithmically converges to the best algorithm with a regret in O(log²(T)/∆†_∞). Also, one should notice that the first two bounds are obtained by summing the gaps. This means that the algorithms are almost equally good and their gap goes beyond the threshold of distinguishability. This threshold is structurally at ∆†_β ∈ O(1/√T). The impossibility to determine which is the better algorithm is interpreted in Cauwet et al. (2014) as a budget issue. The meta-time necessary to distinguish through evaluation arms that are ∆†_β apart takes Θ(1/(∆†_β)²) meta-time steps. As a consequence, if ∆†_β ∈ O(1/√T), then 1/(∆†_β)² ∈ Ω(T). However, the budget, i.e. the length of epoch β starting at meta-time T = 2^β, equals T.

Additionally, the absolute upper bounds are logarithmic in the best case and still inferior to O(√T) in the worst case, which compares favorably with those of discounted UCB and Exp3.S in O(√(T log(T))) and Rexp3 in O(T^{2/3}), or the RL with Policy Advice's regret bounds of O(√T log(T)).
Table 1: Bounds on ρ_ss^ESBAS(T) and ρ_abs^ESBAS(T) given various settings for a two-fold portfolio AS.

∆†_β | ρ_ss^ESBAS(T) | ρ_abs^ESBAS(T) if ρ_abs^{σ*}(T) ∈ O(log(T)) | ρ_abs^ESBAS(T) if ρ_abs^{σ*}(T) ∈ O(T^{1−c*})
Θ(1/T) | O(log(T)) | O(log(T)) | O(T^{1−c*})
Θ(T^{−c†}), c† ≥ 0.5 | O(T^{1−c†}) | O(T^{1−c†}) | O(T^{1−c*})
Θ(T^{−c†}), c† < 0.5 | O(T^{c†} log(T)) | O(T^{c†} log(T)) | O(T^{1−c*}) if c† < 1 − c*; O(T^{c†} log(T)) if c† ≥ 1 − c*
Θ(1) | O(log²(T)/∆†_∞) | O(log²(T)/∆†_∞) | O(T^{1−c*})

4 ESBAS DIALOGUE EXPERIMENTS
ESBAS is particularly designed for RL tasks when it is impossible to update the policy after every
transition or episode. Policy update is very costly in most real-world applications, such as dialogue
systems (Khouzaimi et al., 2016) for which a growing batch setting is preferred (Lange et al.,
2012). ESBAS practical efficiency is therefore illustrated on a dialogue negotiation game (Laroche
& Genevay, 2016) that involves two players: the system ps and a user pu . Their goal is to find an
agreement among 4 alternative options. At each dialogue, for each option η, players have a private
uniformly drawn cost νηp ∼ U[0, 1] to agree on it. Each player is considered fully empathetic to the
other one. The details of the experiment can be found in the supplementary material, Section C.1.1.
Figure 2: The figures on the top plot the performance over time; the figures on the bottom show the ESBAS selection ratios over the epochs. Panels: (2a)/(2b) simple vs simple-2; (2c)/(2d) simple-2 vs constant-1.009; (2e)/(2f) 8 learners.
All learning algorithms use Fitted-Q Iteration (Ernst et al., 2005), with a linear parametrisation and an ε_β-greedy exploration: ε_β = 0.6^β, β being the epoch number. Several algorithms
differing by their state space representation Φα are considered: simple, fast, simple-2, fast-2, n-ζ{simple/fast/simple-2/fast-2}, and constant-µ. See Section C.1.2 for their full descriptions.
The algorithms and ESBAS are playing with a stationary user simulator built through Imitation
Learning from real-human data. All the results are averaged over 1000 runs. The performance figures
plot the curves of algorithms individual performance σ α against the ESBAS portfolio control σ ESBAS
in function of the epoch (the scale is therefore logarithmic in meta-time). The performance is the
average return of the RL problem. The ratio figures plot the average algorithm selection proportions
of ESBAS at each epoch. We define the relative pseudo regret as the difference between the ESBAS
absolute pseudo-regret and the absolute pseudo-regret of the best canonical meta-algorithm. Relative
pseudo-regrets have a 95% confidence interval of about ±6 ≈ ±1.5 × 10^−4 per trajectory. Extensive
numerical results are provided in Table 2 of the supplementary material.
Figures 2a and 2b plot the typical curves obtained with ESBAS selecting from a portfolio of two
learning algorithms. On Figure 2a, the ESBAS curve tends to reach more or less the best algorithm
in each point as expected. Surprisingly, Figure 2b reveals that the algorithm selection ratios are not
very strong in favour of one or another at any time. Indeed, the variance in trajectory set collection
makes simple better on some runs until the end. ESBAS proves to be efficient at selecting the best
algorithm for each run and unexpectedly obtains a negative relative pseudo-regret of -90. Figures
2c and 2d plot the typical curves obtained with ESBAS selecting from a portfolio constituted of a
learning algorithm and an algorithm with a deterministic and stationary policy. ESBAS succeeds in
remaining close to the best algorithm at each epoch and saves 5361 return value for not selecting
the constant algorithm, but overall yields a regret for not using only the best algorithm. ESBAS also
performs well on larger portfolios of 8 learners (see Figure 2e) with negative relative pseudo-regrets:
−10, even if the algorithms are, on average, almost selected uniformly as Figure 2f reveals. Each
individual run may present different ratios, depending on the quality of the trained policies. ESBAS
also offers some curriculum learning, but more importantly, early bad policies are avoided.
Algorithms with a constant policy do not improve over time and the full reset of the K-multi armed
bandit urges ESBAS to unnecessarily explore again and again the same underachieving algorithm.
One easy way to circumvent this drawback is to use this knowledge and to not reset their arms. By
operating this way, when the learning algorithm(s) start(s) outperforming the constant one, ESBAS
simply neither exploits nor explores the constant algorithm anymore. Without arm reset for constant
algorithms, ESBAS’s learning curve follows perfectly the learning algorithm’s learning curve when
this one outperforms the constant algorithm and achieves strong negative relative pseudo-regrets.
Again, the interested reader may refer to Table 2 in supplementary material for the numerical results.
5
S LIDING S TOCHASTIC BANDIT
In this section, we propose to adapt ESBAS to a true online setting where algorithms update their
policies after each transition. The stochastic bandit is now trained on a sliding window with the last
τ /2 selections. Even though the arms are not stationary over this window, there is the guarantee of
eventually forgetting the oldest arm pulls. This algorithm is called SSBAS for Sliding Stochastic
Bandit AS. Despite the lack of theoretical convergence bounds, we demonstrate on two domains
and two different meta-optimisation tasks that SSBAS does exceptionally well, outperforming all
algorithms in the portfolio by a wide margin.
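A sliding-window variant can be sketched in a few lines: the bandit statistics are recomputed over only the most recent half of the selections, so that old pulls of an arm are eventually forgotten, while the algorithms keep updating after every trajectory. The interfaces below are the same illustrative assumptions as in the ESBAS sketch above.

```python
# Sketch of SSBAS: UCB1 statistics computed over a sliding window of the last tau/2 picks.
# `algos[k].train(data)` -> policy and `run_episode(policy)` -> (trajectory, return)
# are assumed interfaces, as in the ESBAS sketch above.
import math, random

def ssbas(algos, run_episode, n_steps=1000, xi=math.sqrt(2)):
    data, history = [], []                     # history holds (selected k, return)
    for tau in range(1, n_steps + 1):
        window = history[-max(1, tau // 2):]   # only the last tau/2 selections count
        counts = [sum(1 for k, _ in window if k == j) for j in range(len(algos))]
        means = [(sum(r for k, r in window if k == j) / c) if c else 0.0
                 for j, c in enumerate(counts)]
        untried = [j for j, c in enumerate(counts) if c == 0]
        if untried:
            k = random.choice(untried)
        else:
            n = len(window)
            k = max(range(len(algos)),
                    key=lambda j: means[j] + xi * math.sqrt(math.log(n) / counts[j]))
        policy = algos[k].train(data)          # true online setting: retrain every step
        traj, ret = run_episode(policy)
        data.append(traj)
        history.append((k, ret))
    return history
```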
5.1
G RIDWORLD DOMAIN
The goal here is to demonstrate that SSBAS can perform efficient hyperparameter optimisation on a simple tabular domain: a 5x5 gridworld
problem (see Figure 3), where the goal is to collect the fruits placed at
each corner as fast as possible. The episodes terminate when all fruits
have been collected or after 100 transitions. The objective function µ
used to optimise the stochastic bandit Ψ is no longer the RL return,
but the time spent to collect all the fruits (200 if they were not all collected).
The agent has 18 possible positions and there are 2⁴ − 1 = 15 non-terminal fruit configurations, resulting in 270 states. The action set is
A = {N, E, S, W}. The reward function mean is 1 when eating a fruit, 0 otherwise. The reward function is corrupted with a strong Gaussian white noise of variance ζ² = 1.
Figure 3: gridworld
The portfolio is composed of 4 Q-learning algorithms varying from each other by their learning rates: {0.001, 0.01, 0.1, 0.5}. They all have the same linearly annealing ε_τ-greedy exploration.
The selection ratios displayed in Figure 5 show that SSBAS selected the algorithm with the highest
(0.5) learning rate in the first stages, enabling to propagate efficiently the reward signal through the
visited states, then, overtime preferentially chooses the algorithm with a learning rate of 0.01, which
is less sensible to the reward noise, finally, SSBAS favours the algorithm with the finest learning rate
(0.001). After 1 million episodes, SSBAS enables to save half a transition per episode on average
as compared to the best fixed learning rate value (0.1), and two transitions against the worst fixed
learning rate in the portfolio (0.001). We also compared it to the efficiency of a linearly annealing
learning rate: 1/(1 + 0.0001τ): SSBAS performs under 21 steps on average after 10^5, while the
linearly annealing learning rate algorithm still performs a bit over 21 steps after 10^6 steps.
5.2
ATARI DOMAIN : Q* BERT
We investigate here AS for deep RL on the Arcade Learning Environment
(ALE, Bellemare et al. (2013)) and more precisely the game Q*bert (see
a frame on Figure 4), where the goal is to step once on each block. Then
a new similar level starts. In later levels, one needs to step twice on
each block, and even later stepping again on the same blocks will cancel
the colour change. We used three different settings of DQN instances:
small uses the setting described in Mnih et al. (2013), large uses the
setting in Mnih et al. (2015), and finally huge uses an even larger network
(see Section C.2 in the supplementary material for details). DQN is
known to reach a near-human level performance at Q*bert. Our SSBAS
instance runs 6 algorithms with 2 different random initialisations of each
DQN setting. Disclaimer: contrary to the other experiments, each curve is the result of a single run, and the improvement might be aleatory. Indeed, the DQN training is very long and SSBAS needs to train all the models in parallel. A more computationally-efficient solution might be to use the same architecture as Osband et al. (2016).
Figure 4: Q*bert
Figure 6 reveals that SSBAS experiences a slight delay keeping in touch with the best setting
performance during the initial learning phase, but, surprisingly, finds a better policy than the single
algorithms in its portfolio and than the ones reported in the previous DQN articles. We observe that
the large setting is surprisingly by far the worst one on the Q*bert task, implying the difficulty to
predict which model is the most efficient for a new task. SSBAS allows to select online the best one.
7
Under review as a conference paper at ICLR 2018
Figure 5: gridworld ratios (3000 runs). [Legend: Q-Learning with learning rates 0.001, 0.01, 0.1, and 0.5.]
Figure 6: Q*bert performance per episode (1 run). [Legend: small network, large network, huge network, SSBAS.]

6
C ONCLUSION
In this article, we tackle the problem of selecting online off-policy RL algorithms. The problem
is formalised as follows: from a fixed portfolio of algorithms, a meta-algorithm learns which one
performs the best on the task at hand. Fairness of algorithm evaluation is granted by the fact that the
RL algorithms learn off-policy. ESBAS, a novel meta-algorithm, is proposed. Its principle is to divide
the meta-time scale into epochs. Algorithms are allowed to update their policies only at the start of
each epoch. As the policies are constant inside each epoch, the problem can be cast into a stochastic
multi-armed bandit. An implementation is detailed and a theoretical analysis leads to upper bounds
on the regrets. ESBAS is designed for the growing batch RL setting. This limited online setting is
required in many real-world applications where updating the policy requires a lot of resources.
Experiments are first led on a negotiation dialogue game, interacting with a human data-built
simulated user. In most settings, not only ESBAS demonstrates its efficiency to select the best
algorithm, but it also outperforms the best algorithm in the portfolio thanks to curriculum learning,
and variance reduction similar to that of Ensemble Learning. Then, ESBAS is adapted to a full online
setting, where algorithms are allowed to update after each transition. This meta-algorithm, called
SSBAS, is empirically validated on a fruit collection task where it performs efficient hyper-parameter
optimisation. SSBAS is also evaluated on the Q*bert Atari game, where it achieves a substantial
improvement over the single algorithm counterparts.
We interpret ESBAS/SSBAS’s success at reliably outperforming the best algorithm in the portfolio as
the result of the four following potential added values. First, curriculum learning: ESBAS/SSBAS
selects the algorithm that is the most fitted to the data size. This property allows, for instance, the use of
shallow algorithms when only a little data is available and deep algorithms once a lot has been collected. Second,
diversified policies: ESBAS/SSBAS computes and experiments several policies. Those diversified
policies generate trajectories that are less redundant, and therefore more informational. As a result, the
policies trained on these trajectories should be more efficient. Third, robustness: if one algorithm fails
at finding good policies, it will soon be discarded. This property prevents the agent from repeating
again and again the same obvious mistakes. Fourth and last, run adaptation: of course, there has to be
an algorithm that is the best on average for one given task at one given meta-time. But depending on
the variance in the trajectory collection, it did not necessarily train the best policy for each run. The
ESBAS/SSBAS meta-algorithm tries and selects the algorithm that is the best at each run. Some of
those properties are inherited by algorithm selection similarity with ensemble learning (Dietterich,
2002). Wiering & Van Hasselt (2008) uses a vote amongst the algorithms to decide the control of the
next transition. Instead, ESBAS/SSBAS selects the best performing algorithm.
Regarding the portfolio design, it mostly depends on the available computational power per sample
ratio. For practical implementations, we recommend to limit the use of two highly demanding
algorithms, paired with several faster algorithms that can take care of first learning stages, and to use
algorithms that are diverse regarding models, hypotheses, etc. Adding two algorithms that are too
similar adds inertia, while they are likely to not be distinguishable by ESBAS/SSBAS. More detailed
recommendations for building an efficient RL portfolio are left for future work.
R EFERENCES
Robin Allesiardo and Raphaël Féraud. Exp3 with drift detection for the switching bandit problem.
In Proceedings of the 2nd IEEE International Conference on the Data Science and Advanced
Analytics (DSAA), pp. 1–7. IEEE, 2015.
Jean-Yves Audibert and Sébastien Bubeck. Best Arm Identification in Multi-Armed Bandits. In
Proceedings of the 23th Conference on Learning Theory (COLT), Haifa, Israel, June 2010.
Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit
problem. Machine Learning, 2002a. doi: 10.1023/A:1013689704352.
Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E Schapire. The nonstochastic multiarmed
bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002b.
Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:
253–279, 2013.
Omar Besbes, Yonatan Gur, and Assaf Zeevi. Stochastic multi-armed-bandit problem with nonstationary rewards. In Advances in neural information processing systems, pp. 199–207, 2014.
Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multiarmed bandit problems. Foundations and Trends in Machine Learning, 2012. doi: 10.1561/
2200000024.
Marie-Liesse Cauwet, Jialin Liu, and Olivier Teytaud. Algorithm portfolios for noisy optimization:
Compare solvers early. In Learning and Intelligent Optimization. Springer, 2014.
Marie-Liesse Cauwet, Jialin Liu, Baptiste Rozière, and Olivier Teytaud. Algorithm Portfolios for
Noisy Optimization. ArXiv e-prints, November 2015.
Nicolo Cesa-Bianchi and Gábor Lugosi. Prediction, learning, and games. Cambridge university
press, 2006.
Thomas G. Dietterich. Ensemble learning. The handbook of brain theory and neural networks, 2:
110–125, 2002.
Damien Ernst, Pierre Geurts, and Louis Wehenkel. Tree-based batch mode reinforcement learning.
Journal of Machine Learning Research, 2005.
Eyal Even-Dar, Shie Mannor, and Yishay Mansour. Pac bounds for multi-armed bandit and markov
decision processes. In Computational Learning Theory. Springer, 2002.
Álvaro Fialho, Luis Da Costa, Marc Schoenauer, and Michele Sebag. Analyzing bandit-based
adaptive operator selection mechanisms. Annals of Mathematics and Artificial Intelligence, 2010.
Matteo Gagliolo and Jürgen Schmidhuber. Learning dynamic algorithm portfolios. Annals of
Mathematics and Artificial Intelligence, 2006.
Matteo Gagliolo and Jürgen Schmidhuber. Algorithm selection as a bandit problem with unbounded
losses. In Learning and Intelligent Optimization. Springer, 2010.
Aurélien Garivier and Eric Moulines. On Upper-Confidence Bound Policies for Switching Bandit
Problems, pp. 174–188. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011. ISBN 978-3-642-24412-4.
doi: 10.1007/978-3-642-24412-4_16. URL http://dx.doi.org/10.1007/978-3-642-24412-4_16.
Hatim Khouzaimi, Romain Laroche, and Fabrice Lefevre. Optimising turn-taking strategies with
reinforcement learning. In Proceedings of the 16th Annual Meeting of the Special Interest Group
on Discourse and Dialogue (Sigdial), 2015.
Hatim Khouzaimi, Romain Laroche, and Fabrice Lefèvre. Reinforcement learning for turn-taking
management in incremental spoken dialogue systems. In Proceedings of the 25th International
Joint Conference on Artificial Intelligence (IJCAI), pp. 2831–2837, 2016.
Lars Kotthoff. Algorithm selection for combinatorial search problems: A survey. arXiv preprint
arXiv:1210.7959, 2012.
Sascha Lange, Thomas Gabel, and Martin Riedmiller. Batch reinforcement learning. In Reinforcement
learning, pp. 45–73. Springer, 2012.
Romain Laroche and Aude Genevay. A negotiation dialogue game. In Proceedings of the 7th
International Workshop on Spoken Dialogue Systems (IWSDS), Finland, 2016.
Jialin Liu and Olivier Teytaud. Meta online learning: experiments on a unit commitment problem.
In Proceedings of the 22nd European Symposium on Artificial Neural Networks, Computational
Intelligence and Machine Learning (ESANN), 2014.
William S. Lovejoy. Computationally feasible bounds for partially observed markov decision
processes. Operational Research, 1991.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan
Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint
arXiv:1312.5602, 2013.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare,
Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control
through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
Rémi Munos, Tom Stepleton, Anna Harutyunyan, and Marc G. Bellemare. Safe and efficient
off-policy reinforcement learning. CoRR, abs/1606.02647, 2016.
Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via
bootstrapped dqn. In Proceedings of the 29th Advances in Neural Information Processing Systems
(NIPS), 2016.
John R. Rice. The algorithm selection problem. Advances in Computers, 1976.
Nicolas Schweighofer and Kenji Doya. Meta-learning in reinforcement learning. Neural Networks,
2003.
Kate A. Smith-Miles. Cross-disciplinary perspectives on meta-learning for algorithm selection. ACM
Computational Survey, 2009.
Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction (Adaptive
Computation and Machine Learning). The MIT Press, March 1998. ISBN 0262193981.
Jane X. Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, Rémi Munos,
Charles Blundell, Dharshan Kumaran, and Matt Botvinick. Learning to reinforcement learn. CoRR,
abs/1611.05763, 2016.
C.J.C.H. Watkins. Learning from Delayed Rewards. PhD thesis, Cambridge University, Cambridge
(England), May 1989.
Martha White and Adam White. Adapting the trace parameter in reinforcement learning. In
Proceedings of the 15th International Conference on Autonomous Agents and Multi-Agent Systems
(AAMAS). International Foundation for Autonomous Agents and Multiagent Systems, 2016.
Marco A Wiering and Hado Van Hasselt. Ensemble algorithms in reinforcement learning. IEEE
Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 38(4):930–936, 2008.
A GLOSSARY

Symbol | Designation | First use
t | Reinforcement learning time aka RL-time | Section 2
τ, T | Meta-algorithm time aka meta-time | Section 2
a(t) | Action taken at RL-time t | Figure 1
o(t) | Observation made at RL-time t | Figure 1
r(t) | Reward received at RL-time t | Figure 1
A | Action set | Section 2
Ω | Observation set | Section 2
Rmin | Lower bound of values taken by R | Section 2
Rmax | Upper bound of values taken by R | Section 2
ετ | Trajectory collected at meta-time τ | Section 2
|X| | Size of finite set/list/collection X | Section 2
Ja, bK | Ensemble of integers comprised between a and b | Section 2
E | Space of trajectories | Section 2
γ | Discount factor of the decision process | Section 2
µ(ετ) | Return of trajectory ετ aka objective function | Section 2
DT | Trajectory set collected until meta-time T | Section 2
ετ(t) | History of ετ until RL-time t | Section 2
π | Policy | Section 2
π* | Optimal policy | Section 2
α | Algorithm | Section 2
E+ | Ensemble of trajectory sets | Section 2
S_D^α | State space of algorithm α from trajectory set D | Section 2
Φ_D^α | State space projection of algorithm α from trajectory set D | Section 2
π_D^α | Policy learnt by algorithm α from trajectory set D | Section 2
P | Algorithm set aka portfolio | Section 2.2
K | Size of the portfolio | Section 2.2
σ | Meta-algorithm | Section 2.2
σ(τ) | Algorithm selected by meta-algorithm σ at meta-time τ | Section 2.2
E_{x0}[f(x0)] | Expected value of f(x) conditionally to x = x0 | Equation 1
Eµ_D^α | Expected return of trajectories controlled by policy π_D^α | Equation 1
Eµ*_∞ | Optimal expected return | Section 2.3
σ^α | Canonical meta-algorithm exclusively selecting algorithm α | Section 2.3
ρ^σ_abs(T) | Absolute pseudo-regret | Definition 1
O(f(x)) | Set of functions that get asymptotically dominated by κf(x) | Section 3
κ | Constant number | Theorem 3
Ξ | Stochastic K-armed bandit algorithm | Section 3
β | Epoch index | Section 3
ξ | Parameter of the UCB algorithm | Pseudo-code 2
ρ^σ_ss(T) | Short-sighted pseudo-regret | Definition 2
∆ | Gap between the best arm and another arm | Theorem 2
† | Index of the second best algorithm | Theorem 2
∆†_β | Gap of the second best arm at epoch β | Theorem 2
⌊x⌋ | Rounding of x at the closest integer below | Theorem 2
σ^ESBAS | The ESBAS meta-algorithm | Theorem 2
Θ(f(x)) | Set of functions asymptotically dominating κf(x) and dominated by κ′f(x) | Table 1
σ* | Best meta-algorithm among the canonical ones | Theorem 3
Symbol | Designation | First use
p, ps, pu | Player, system player, and (simulated) user player | Section C.1.1
η | Option to agree or disagree on | Section C.1.1
ν_η^p | Cost of booking/selecting option η for player p | Section C.1.1
U[a, b] | Uniform distribution between a and b | Section C.1.1
sf | Final state reached in a trajectory | Section C.1.1
R_ps(sf) | Immediate reward received by the system player at the end of the dialogue | Section C.1.1
REFPROP(η) | Dialogue act consisting in proposing option η | Section C.1.1
ASKREPEAT | Dialogue act consisting in asking the other player to repeat what he said | Section C.1.1
ACCEPT(η) | Dialogue act consisting in accepting proposition η | Section C.1.1
ENDDIAL | Dialogue act consisting in ending the dialogue | Section C.1.1
SERsu | Sentence error rate of system ps listening to user pu | Section C.1.1
score_asr | Speech recognition score | Section C.1.1
N(x, v) | Normal distribution of centre x and variance v² | Section C.1.1
REFINSIST | REFPROP(η), with η being the last proposed option | Section C.1.1
REFNEWPROP | REFPROP(η), with η being the best option that has not been proposed yet | Section C.1.1
ACCEPT | ACCEPT(η), with η being the last understood option proposition | Section C.1.1
ε_β | ε-greedy exploration in function of epoch β | Section C.1.2
Φα | Set of features of algorithm α | Section C.1.2
φ0 | Constant feature: always equal to 1 | Section C.1.2
φasr | ASR feature: equal to the last recognition score | Section C.1.2
φdif | Cost feature: equal to the difference of cost of proposed and targeted options | Section C.1.2
φt | RL-time feature | Section C.1.2
φnoise | Noise feature | Section C.1.2
simple | FQI with Φ = {φ0, φasr, φdif, φt} | Section C.1.2
fast | FQI with Φ = {φ0, φasr, φdif} | Section C.1.2
simple-2 | FQI with Φ = {φ0, φasr, φdif, φt, φasrφdif, φtφasr, φdifφt, φ²asr, φ²dif, φ²t} | Section C.1.2
fast-2 | FQI with Φ = {φ0, φasr, φdif, φasrφdif, φ²asr, φ²dif} | Section C.1.2
n-1-simple | FQI with Φ = {φ0, φasr, φdif, φt, φnoise} | Section C.1.2
n-1-fast | FQI with Φ = {φ0, φasr, φdif, φnoise} | Section C.1.2
n-1-simple-2 | FQI with Φ = {φ0, φasr, φdif, φt, φnoise, φasrφdif, φtφnoise, φasrφt, φdifφnoise, φasrφnoise, φdifφt, φ²asr, φ²dif, φ²t, φ²noise} | Section C.1.2
n-1-fast-2 | FQI with Φ = {φ0, φasr, φdif, φt} | Section C.1.2
constant-µ | Non-learning algorithm with average performance µ | Section C.1.2
ζ | Number of noisy features added to the feature set | Section C.1.2
P(x|y) | Probability that X = x conditionally to Y = y | Equation 35
B THEORETICAL ANALYSIS
The theoretical aspects of algorithm selection for reinforcement learning in general, and Epochal
Stochastic Bandit Algorithm Selection in particular, are thoroughly detailed in this section. The
proofs of the Theorems are provided in Sections E, F, and G. We recall and formalise the absolute
pseudo-regret definition provided in Section 2.3.
Definition 1 (Absolute pseudo-regret). The absolute pseudo-regret ρ^σ_abs(T) compares the meta-algorithm's expected return with the optimal expected return:

    ρ^σ_abs(T) = T Eµ*_∞ − E_σ[ Σ_{τ=1}^{T} Eµ^{σ(τ)}_{D^σ_{τ−1}} ].        (4)
B.1 ASSUMPTIONS
The theoretical analysis is hindered by the fact that algorithm selection not only directly influences the
return distribution, but also the trajectory set distribution and therefore the policies learnt by the
algorithms for subsequent trajectories, which in turn indirectly affects the future expected returns. In
order to allow policies to be compared through the trajectory sets they are derived from, our analysis
relies on two assumptions.
Assumption 1 (More data is better data). The algorithms train better policies with a larger trajectory
set on average, whatever the algorithm that controlled the additional trajectory:
    ∀D ∈ E+, ∀α, α′ ∈ P,   Eµ^α_D ≤ E_{α′}[ Eµ^α_{D∪ε_{α′}} ].        (5)
Assumption 1 states that algorithms are off-policy learners and that additional data cannot lead to
performance degradation on average. An algorithm that is not off-policy could be biased by a specific
behavioural policy and would therefore transgress this assumption.
Assumption 2 (Order compatibility). If an algorithm trains a better policy with one trajectory set
than with another, then it remains the same, on average, after collecting an additional trajectory from
any algorithm:
    ∀D, D′ ∈ E+, ∀α, α′ ∈ P,   Eµ^α_D < Eµ^α_{D′}  ⇒  E_{α′}[ Eµ^α_{D∪ε_{α′}} ] ≤ E_{α′}[ Eµ^α_{D′∪ε_{α′}} ].        (6)
Assumption 2 states that a performance relation between two policies trained on two trajectory sets
is preserved on average after adding another trajectory, whatever the behavioural policy used to
generate it. From these two assumptions, Theorem 1 provides an upper bound in order of magnitude
as a function of the worst algorithm in the portfolio. It is verified for any meta-algorithm σ.
Theorem 1 (Not worse than the worst). The absolute pseudo-regret is bounded by the worst algorithm
absolute pseudo-regret in order of magnitude:
    ∀σ,   ρ^σ_abs(T) ∈ O( max_{α∈P} ρ^{σ^α}_abs(T) ).        (7)
Contrary to what the name of Theorem 1 suggests, a meta-algorithm might be worse than the worst
algorithm (and, similarly, it can be better than the best algorithm), but not in order of magnitude.
Its proof is rather involved for such an intuitive result because, in order to control all the possible
outcomes, one needs to translate the selections of algorithm α by meta-algorithm σ into the canonical
meta-algorithm σ^α's view.
B.2 SHORT-SIGHTED PSEUDO-REGRET ANALYSIS OF ESBAS
ESBAS intends to minimise the regret for not choosing the best algorithm at a given meta-time τ. It
is short-sighted: it does not intend to optimise the algorithms' learning.
Definition 2 (Short-sighted pseudo-regret). The short-sighted pseudo-regret ρ^σ_ss(T) is the difference between the expected return of the immediately best algorithm and that of the selected one:

    ρ^σ_ss(T) = E_σ[ Σ_{τ=1}^{T} ( max_{α∈P} Eµ^α_{D^σ_{τ−1}} − Eµ^{σ(τ)}_{D^σ_{τ−1}} ) ].        (8)
Theorem 2 (ESBAS short-sighted pseudo-regret). If the stochastic multi-armed bandit Ξ guarantees
a regret of order of magnitude O(log(T )/∆†β ), then:
    ρ^{σ^ESBAS}_ss(T) ∈ O( Σ_{β=0}^{⌊log(T)⌋} β/∆†_β ).        (9)
Theorem 2 expresses in order of magnitude an upper bound for the short-sighted pseudo-regret of
ESBAS. But first, let us define the gaps: ∆^α_β = max_{α′∈P} Eµ^{α′}_{D^{σ^ESBAS}_{2^β−1}} − Eµ^α_{D^{σ^ESBAS}_{2^β−1}}. It is the difference
of expected return between the best algorithm during epoch β and algorithm α. The smallest non-null
gap at epoch β is noted ∆†_β = min_{α∈P, ∆^α_β>0} ∆^α_β. If ∆†_β does not exist, i.e. if there is no non-null
gap, the regret is null.
Several upper bounds in order of magnitude on ρ^σ_ss(T) can easily be deduced from Theorem 2,
depending on the order of magnitude of ∆†_β. See the corollaries in Section F.1, Table 1, and more
generally Section 3 for a discussion.
B.3 ESBAS ABSOLUTE PSEUDO-REGRET ANALYSIS
The short-sighted pseudo-regret optimality depends on the meta-algorithm itself. For instance, a poor
deterministic algorithm might be optimal at meta-time τ but yield no new information, implying the
same situation at meta-time τ + 1, and so on. Thus, a meta-algorithm that exclusively selects the
deterministic algorithm would achieve a short-sighted pseudo-regret equal to 0, but selecting other
algorithms is, in the long run, more efficient. Theorem 2 is nevertheless a necessary step towards the
absolute pseudo-regret analysis.
The absolute pseudo-regret can be decomposed into the absolute pseudo-regret of the best canonical
meta-algorithm (i.e. the one that finds the best policy), the regret for not always selecting the best
algorithm (and therefore potentially not learning as fast), and the short-sighted regret: the regret for
not gaining the returns granted by the best algorithm. This decomposition leads to Theorem 3, which
provides an upper bound on the absolute pseudo-regret as a function of the best canonical
meta-algorithm and the short-sighted pseudo-regret.
But first let us introduce the fairness assumption. The fairness of budget distribution has been
formalised in Cauwet et al. (2015). It is the property stating that every algorithm in the portfolio has
as many resources as the others, in terms of computational time and data. It is an issue in most online
AS problems, since the algorithm that has been selected the most has the most data, and is therefore
likely to be the most advanced one. A way to circumvent this issue is to select the algorithms equally,
but, in an online setting, the goal of AS is precisely to select the best algorithm as often as possible.
Our answer is to require that all algorithms in the portfolio learn off-policy, i.e. without bias induced
by the behavioural policy used in the learning dataset. By assuming that all algorithms learn off-policy,
we allow information sharing (Cauwet et al., 2015) between algorithms: they share the trajectories
they generate. As a consequence, we can assume that every algorithm, the least or the most selected
one, will learn from the same trajectory set. Therefore, the control unbalance does not directly lead
to unfairness in the algorithms' performances: all algorithms learn equally from all trajectories. However,
unbalance might still remain in the exploration strategy if, for instance, an algorithm benefits more
from the exploration it has chosen than from the exploration chosen by another algorithm. For analysis
purposes, Theorem 3 assumes the fairness of AS:
Assumption 3 (Learning is fair). If one trajectory set is better than another for training one given
algorithm, it is the same for other algorithms.
    ∀α, α′ ∈ P, ∀D, D′ ∈ E+,   Eµ^α_D < Eµ^α_{D′}  ⇒  Eµ^{α′}_D ≤ Eµ^{α′}_{D′}.        (10)
Theorem 3 (ESBAS absolute pseudo-regret upper bound). Under Assumption 3, if the stochastic
multi-armed bandit Ξ guarantees that the best arm has been selected in the T first episodes at least
T/K times, with high probability 1 − δT, where δT ∈ O(1/T), then:

    ∃κ > 0, ∀T ≥ 9K²,   ρ^{σ^ESBAS}_abs(T) ≤ (3K + 1) ρ^{σ*}_abs(T/3K) + ρ^{σ^ESBAS}_ss(T) + κ log(T),        (11)

where meta-algorithm σ* selects exclusively algorithm α* = argmin_{α∈P} ρ^{σ^α}_abs(T).
Successive and Median Elimination (Even-Dar et al., 2002) and Upper Confidence Bound (Auer et al.,
2002a) under some conditions (Audibert & Bubeck, 2010) are examples of appropriate Ξ satisfying
both conditions stated in Theorems 2 and 3. Again, see Table 1 and more generally Section 3 for a
discussion of those bounds.
C EXPERIMENTAL DETAILS
C.1 DIALOGUE EXPERIMENTS DETAILS
C.1.1 THE NEGOTIATION DIALOGUE GAME
ESBAS practical efficiency is illustrated on a dialogue negotiation game (Laroche & Genevay, 2016)
that involves two players: the system ps and a user pu. Their goal is to find an agreement among
4 alternative options. At each dialogue, for each option η, players have a private, uniformly drawn
cost ν_η^p ∼ U[0, 1] to agree on it. Each player is considered fully empathetic to the other one. As a
result, if the players come to an agreement, the system's immediate reward at the end of the dialogue
is R_ps(sf) = 2 − ν_η^{ps} − ν_η^{pu}, where sf is the state reached by player ps at the end of the dialogue
and η is the agreed option; if the players fail to agree, the final immediate reward is R_ps(sf) = 0;
and finally, if one player misunderstands and agrees on a wrong option, the system gets the cost of
selecting option η without the reward of successfully reaching an agreement: R_ps(sf) = −ν_η^{ps} − ν_{η′}^{pu}.
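As a worked example of the reward definition above, here is a small, illustrative Python helper; the outcome labels and argument names are placeholders consistent with the description, not code from the paper.

```python
def dialogue_reward(cost_system, cost_user, outcome):
    """Final reward of the system player for one negotiation dialogue.
    cost_system / cost_user are the relevant option costs of each player
    (for the agreed, or misunderstood, option as described above)."""
    if outcome == "agreement":          # both players agreed on the same option
        return 2.0 - cost_system - cost_user
    if outcome == "failure":            # no agreement was reached
        return 0.0
    if outcome == "misunderstanding":   # agreed on a wrongly understood option
        return -cost_system - cost_user
    raise ValueError(outcome)

# Example: option costs drawn uniformly in [0, 1]
print(dialogue_reward(0.2, 0.4, "agreement"))   # 1.4
```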
Players act each in turn, starting randomly with one or the other. They have four possible actions.
First, REFPROP(η): the player makes a proposition: option η. If any option was previously proposed
by the other player, the player refuses it. Second, ASKREPEAT: the player asks the other player to
repeat its proposition. Third, ACCEPT(η): the player accepts option η that was understood to be
proposed by the other player. This act ends the dialogue either way: whether the understood proposition
was the right one or not. Fourth, ENDDIAL: the player does not want to negotiate anymore and ends
the dialogue with a null reward.
Understanding through speech recognition of system ps is assumed to be noisy: with a sentence error
rate of probability SERsu = 0.3, an error is made, and the system understands a random option instead
of the one that was actually pronounced. In order to reflect human-machine dialogue asymmetry, the
simulated user always understands what the system says: SERus = 0. We adopt the way Khouzaimi
et al. (2015) generate speech recognition confidence scores: score_asr = 1/(1 + e^{−X}), where
X ∼ N(x, 0.2). If the player understood the right option, x = 1; otherwise x = 0.
The system, and therefore the portfolio algorithms, have their action set restrained to five non-parametric
actions: REFINSIST ⇔ REFPROP(η_{t−1}), η_{t−1} being the option last proposed by the system;
REFNEWPROP ⇔ REFPROP(η), η being the preferred option after η_{t−1}; ASKREPEAT;
ACCEPT ⇔ ACCEPT(η), η being the last understood option proposition; and ENDDIAL.
C.1.2 LEARNING ALGORITHMS
All learning algorithms use Fitted-Q Iteration (Ernst et al., 2005), with a linear parametrisation and an
ε_β-greedy exploration: ε_β = 0.6^β, β being the epoch number. Six algorithms differing by their state
space representation Φα are considered (an illustrative sketch of these feature maps follows the list):
• simple: state space representation of four features: the constant feature φ0 = 1, the last recognition
score feature φasr, the difference between the cost of the proposed option and the next best option
φdif, and finally an RL-time feature φt = 0.1t/(0.1t + 1). Φα = {φ0, φasr, φdif, φt}.
• fast: Φα = {φ0, φasr, φdif}.
• simple-2: state space representation of ten second-order polynomials of simple features.
Φα = {φ0, φasr, φdif, φt, φ²asr, φ²dif, φ²t, φasrφdif, φasrφt, φtφdif}.
• fast-2: state space representation of six second-order polynomials of fast features.
Φα = {φ0, φasr, φdif, φ²asr, φ²dif, φasrφdif}.
• n-ζ-{simple/fast/simple-2/fast-2}: versions of the previous algorithms with ζ additional features
of noise, randomly drawn from the uniform distribution in [0, 1].
• constant-µ: the algorithm follows a deterministic policy of average performance µ without
exploration nor learning. Those constant policies are generated with simple-2 learning from
a predefined batch of limited size.
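To make the state-space representations concrete, here is a minimal, illustrative Python sketch of the simple, fast, and noisy feature maps described above; the field names of the observation and the exact scaling are assumptions for the example, not taken from the paper's code.

```python
import numpy as np

def phi_simple(obs, t):
    """Feature map of the 'simple' algorithm: constant, ASR score,
    cost difference, and a saturating RL-time feature."""
    phi_0 = 1.0                                          # constant feature
    phi_asr = obs["asr_score"]                           # last recognition score
    phi_dif = obs["proposed_cost"] - obs["best_cost"]    # cost difference
    phi_t = 0.1 * t / (0.1 * t + 1.0)                    # RL-time feature
    return np.array([phi_0, phi_asr, phi_dif, phi_t])

def phi_fast(obs, t):
    """Feature map of the 'fast' algorithm: same as 'simple' without phi_t."""
    return phi_simple(obs, t)[:3]

def phi_noisy(obs, t, zeta, rng):
    """n-zeta variants: append zeta uniformly drawn noise features."""
    return np.concatenate([phi_simple(obs, t), rng.uniform(0.0, 1.0, size=zeta)])
```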
C.1.3 EVALUATION PROTOCOL
In all our experiments, ESBAS has been run with UCB parameter ξ = 1/4. We consider 12 epochs.
The first and second epochs last 20 meta-time steps, then their lengths double at each new epoch,
for a total of 40,920 meta-time steps and as many trajectories. γ is set to 0.9. The algorithms and
ESBAS are playing with a stationary user simulator built through Imitation Learning from real human
data. All the results are averaged over 1000 runs. The performance figures plot the curves of the
algorithms' individual performance σ^α against the ESBAS portfolio control σ^ESBAS as a function of
the epoch (the scale is therefore logarithmic in meta-time). The performance is the average
reinforcement learning return: it equals γ^{|ε|} R_ps(sf) in the negotiation game. The ratio figures plot
the average algorithm selection proportions of ESBAS at each epoch. We define the relative pseudo-regret
as the difference between the ESBAS absolute pseudo-regret and the absolute pseudo-regret
of the best canonical meta-algorithm. All relative pseudo-regrets, as well as the gain for not having
chosen the worst algorithm in the portfolio, are provided in Table 2. Relative pseudo-regrets have a
95% confidence interval of about ±6, i.e. ±1.5 × 10−4 per trajectory.
C.1.4 ASSUMPTIONS TRANSGRESSIONS
Several results show that, in practice, the assumptions are transgressed. Firstly, we observe that
Assumption 3 is transgressed. Indeed, it states that if a trajectory set is better than another for a
given algorithm, then it is the same for the other algorithms. Still, this assumption infringement does
not seem to harm the experimental results. It even seems to help in general: while this assumption
is consistent with curriculum learning, it is inconsistent with the run adaptation property advanced in
Section 6, which states that an algorithm might be the best on some runs and another one on other
runs.
And secondly, off-policy reinforcement learning algorithms exist, but in practice, we use state space
representations that distort their off-policy property (Munos et al., 2016). However, experiments do
not reveal any obvious bias related to the off/on-policiness of the trajectory set the algorithms train
on.
C.2 Q*BERT EXPERIMENT DETAILS
The three DQN networks (small, large, and huge) are built in a similar fashion, with relu activations
at each layer except for the output layer, which is linear, with the RMSprop optimizer (ρ = 0.95 and
ε = 10−7), and with He uniform initialisation. The hyperparameters used for training them are also
the same and equal to the ones presented in the table hereinafter:

hyperparameter | value
minibatch size | 32
replay memory size | 1 × 10^6
agent history length | 4
target network update frequency | 5 × 10^4
discount factor | 0.99
action repeat | 20
update frequency | 20
learning rate | 2.5 × 10−4
exploration parameter | 5 × t−1 × 10−6
replay start size | 5 × 10−4
no-op max | 30
Only their shapes differ (an illustrative sketch follows this list):
• small has a first convolution layer with a 4x4 kernel and a 2x2 stride, and a second convolution layer with a 4x4 kernel and a 2x2 stride, followed by a dense layer of size 128, and
finally the output layer is also dense.
• large has a first convolution layer with a 8x8 kernel and a 4x4 stride, and a second convolution
layer with a 4x4 kernel and a 2x2 stride, followed by a dense layer of size 256, and finally
the output layer is also dense.
• huge has a first convolution layer with a 8x8 kernel and a 4x4 stride, a second convolution
layer with a 4x4 kernel and a 2x2 stride, and a third convolution layer with a 3x3 kernel and
a 1x1 stride, followed by a dense layer of size 512, and finally the output layer is also dense.
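As an illustration, here is a hedged Keras-style sketch of the three network shapes described above. The input shape, the action-set size, and the convolutional filter counts are assumptions (the paper does not specify them here); the optimizer settings follow the table above.

```python
import tensorflow as tf

def build_q_network(size="small", input_shape=(84, 84, 4), num_actions=6):
    """Sketch of the small/large/huge Q-networks; filter counts, input shape,
    and action-set size are assumptions, not taken from the paper."""
    def conv(kernel, stride):
        return tf.keras.layers.Conv2D(32, kernel_size=kernel, strides=stride,
                                      activation="relu",
                                      kernel_initializer="he_uniform")
    if size == "small":
        body, dense_units = [conv(4, 2), conv(4, 2)], 128
    elif size == "large":
        body, dense_units = [conv(8, 4), conv(4, 2)], 256
    else:  # "huge"
        body, dense_units = [conv(8, 4), conv(4, 2), conv(3, 1)], 512
    model = tf.keras.Sequential(
        [tf.keras.layers.InputLayer(input_shape=input_shape)] + body + [
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(dense_units, activation="relu",
                                  kernel_initializer="he_uniform"),
            tf.keras.layers.Dense(num_actions, activation="linear"),
        ])
    model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=2.5e-4,
                                                        rho=0.95, epsilon=1e-7),
                  loss=tf.keras.losses.Huber())
    return model
```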
D EXTENDED RESULTS OF THE DIALOGUE EXPERIMENTS
Portfolio | w. best | w. worst
simple-2 + fast-2 | 35 | -181
simple + n-1-simple-2 | -73 | -131
simple + n-1-simple | 3 | -2
simple-2 + n-1-simple-2 | -12 | -38
all-4 + constant-1.10 | 21 | -2032
all-4 + constant-1.11 | -21 | -1414
all-4 + constant-1.13 | -10 | -561
all-4 | -28 | -275
all-2-simple + constant-1.08 | -41 | -2734
all-2-simple + constant-1.11 | -40 | -2013
all-2-simple + constant-1.13 | -123 | -799
all-2-simple | -90 | -121
fast + simple-2 | -39 | -256
simple-2 + constant-1.01 | 169 | -5361
simple-2 + constant-1.11 | 53 | -1380
simple-2 + constant-1.11 | 57 | -1288
simple + constant-1.08 | 54 | -2622
simple + constant-1.10 | 88 | -1565
simple + constant-1.14 | -6 | -297
all-4 + all-4-n-1 + constant-1.09 | 25 | -2308
all-4 + all-4-n-1 + constant-1.11 | 20 | -1324
all-4 + all-4-n-1 + constant-1.14 | -16 | -348
all-4 + all-4-n-1 | -10 | -142
all-2-simple + all-2-n-1-simple | -80 | -181
4*n-2-simple | -20 | -20
4*n-3-simple | -13 | -13
8*n-1-simple-2 | -22 | -22
simple-2 + constant-0.97 (no reset) | 113 | -7131
simple-2 + constant-1.05 (no reset) | 23 | -3756
simple-2 + constant-1.09 (no reset) | -19 | -2170
simple-2 + constant-1.13 (no reset) | -16 | -703
simple-2 + constant-1.14 (no reset) | -125 | -319
Table 2: ESBAS pseudo-regret after 12 epochs (i.e. 40,920 trajectories) compared with the best and
the worst algorithms in the portfolio, as a function of the algorithms in the portfolio (described in the
first column). The '+' character is used to separate the algorithms. all-4 means all four learning
algorithms described in Section C.1.2: simple + fast + simple-2 + fast-2. all-4-n-1 means the same four
algorithms with one additional feature of noise. Finally, all-2-simple means simple + simple-2 and
all-2-n-1-simple means n-1-simple + n-1-simple-2. In the second column, the redder the colour, the
worse ESBAS performs in comparison with the best algorithm. Inversely, the greener the colour of
the number, the better ESBAS performs in comparison with the best algorithm. If the number is
neither red nor green, the difference between the portfolio and the best algorithm is insignificant and
they perform equally well; it is already an achievement for ESBAS to be as good as the best. In the
third column, the bluer the cell, the weaker the worst algorithm in the portfolio. One can notice that
positive regrets are always triggered by a very weak worst algorithm in the portfolio. In these cases,
ESBAS did not manage to outperform the best algorithm in the portfolio, but it can still be credited
with having efficiently dismissed the very weak algorithms in the portfolio.
E NOT WORSE THAN THE WORST

Theorem 1 (Not worse than the worst). The absolute pseudo-regret is bounded by the worst algorithm's absolute pseudo-regret in order of magnitude:

    ∀σ,   ρ^σ_abs(T) ∈ O( max_{α∈P} ρ^{σ^α}_abs(T) ).        (7)

Proof. From Definition 1:

    ρ^σ_abs(T) = T Eµ*_∞ − E_σ[ Σ_{τ=1}^{T} Eµ^{σ(τ)}_{D^σ_{τ−1}} ],        (12a)

    ρ^σ_abs(T) = T Eµ*_∞ − Σ_{α∈P} E_σ[ Σ_{i=1}^{|sub_α(D^σ_T)|} Eµ^α_{D^σ_{τ^α_i − 1}} ],        (12b)

    ρ^σ_abs(T) = Σ_{α∈P} E_σ[ |sub_α(D^σ_T)| Eµ*_∞ − Σ_{i=1}^{|sub_α(D^σ_T)|} Eµ^α_{D^σ_{τ^α_i − 1}} ],        (12c)

where sub_α(D) is the subset of D with all the trajectories generated with algorithm α, where τ^α_i is the index of the i-th trajectory generated with algorithm α, and where |S| is the cardinality of finite set S. By convention, let us state that Eµ^α_{D^σ_{τ^α_i − 1}} = Eµ*_∞ if |sub_α(D^σ_T)| < i. Then:

    ρ^σ_abs(T) = Σ_{α∈P} Σ_{i=1}^{T} E_σ[ Eµ*_∞ − Eµ^α_{D^σ_{τ^α_i − 1}} ].        (13)

To conclude, let us prove by mathematical induction the following inequality:

    E_σ[ Eµ^α_{D^σ_{τ^α_i − 1}} ] ≥ E_{σ^α}[ Eµ^α_{D^{σ^α}_{i−1}} ].

It is true by vacuity for i = 0: both left and right terms equal Eµ^α_∅. Now let us assume the property true for i and prove it for i + 1:

    E_σ[ Eµ^α_{D^σ_{τ^α_{i+1} − 1}} ] = E_σ[ Eµ^α_{ D^σ_{τ^α_i − 1} ∪ ε_α ∪ ( D^σ_{τ^α_{i+1} − 1} \ D^σ_{τ^α_i} ) } ],        (14a)

    E_σ[ Eµ^α_{D^σ_{τ^α_{i+1} − 1}} ] = E_σ[ Eµ^α_{ D^σ_{τ^α_i − 1} ∪ ε_α ∪ ⋃_{τ=τ^α_i + 1}^{τ^α_{i+1} − 1} ε_{σ(τ)} } ],        (14b)

    E_σ[ Eµ^α_{D^σ_{τ^α_{i+1} − 1}} ] = E_σ[ Eµ^α_{ D^σ_{τ^α_i − 1} ∪ ε_α ∪ ⋃_{τ=1}^{τ^α_{i+1} − τ^α_i − 1} ε_{σ(τ^α_i + τ)} } ].        (14c)

If |sub_α(D^σ_T)| ≥ i + 1, by applying the mathematical induction assumption, then by applying Assumption 2, and finally by applying Assumption 1 recursively, we infer that:

    E_σ[ Eµ^α_{D^σ_{τ^α_i − 1}} ] ≥ E_{σ^α}[ Eµ^α_{D^{σ^α}_{i−1}} ],        (15a)

    E_σ[ Eµ^α_{D^σ_{τ^α_i − 1} ∪ ε_α} ] ≥ E_{σ^α}[ Eµ^α_{D^{σ^α}_{i−1} ∪ ε_α} ],        (15b)

    E_σ[ Eµ^α_{D^σ_{τ^α_i − 1} ∪ ε_α} ] ≥ E_{σ^α}[ Eµ^α_{D^{σ^α}_i} ],        (15c)

    E_σ[ Eµ^α_{ D^σ_{τ^α_i − 1} ∪ ε_α ∪ ⋃_{τ=1}^{τ^α_{i+1} − τ^α_i − 1} ε_{σ(τ^α_i + τ)} } ] ≥ E_{σ^α}[ Eµ^α_{D^{σ^α}_i} ].        (15d)

If |sub_α(D^σ_T)| < i + 1, the same inequality is straightforwardly obtained, since, by convention, Eµ^α_{D^σ_{τ^α_{i+1} − 1}} = Eµ*_∞, and since, by definition, ∀D ∈ E+, ∀α ∈ P, Eµ*_∞ ≥ Eµ^α_D.

The mathematical induction proof is complete. This result leads to the following inequalities:

    ρ^σ_abs(T) ≤ Σ_{α∈P} Σ_{i=1}^{T} E_{σ^α}[ Eµ*_∞ − Eµ^α_{D^{σ^α}_{i−1}} ],        (16a)

    ρ^σ_abs(T) ≤ Σ_{α∈P} ρ^{σ^α}_abs(T),        (16b)

    ρ^σ_abs(T) ≤ K max_{α∈P} ρ^{σ^α}_abs(T),        (16c)

which leads directly to the result:

    ∀σ,   ρ^σ_abs(T) ∈ O( max_{α^k∈P} ρ^{σ^k}_abs(T) ).        (17)

This proof may seem to the reader rather complex for such an intuitive and loose result, but algorithm selection σ and the algorithms it selects may interact in tricky ways. For instance, selecting algorithm α only when the collected trajectory set contains misleading examples (i.e. with worse expected return than with an empty trajectory set) implies that the following unintuitive inequality is always true: Eµ^α_{D^σ_{τ−1}} ≤ Eµ^α_{D^{σ^α}_{τ−1}}. In order to control all the possible outcomes, one needs to translate the selections of algorithm α into σ^α's view.
F ESBAS SHORT-SIGHTED PSEUDO-REGRET UPPER ORDER OF MAGNITUDE

Theorem 2 (ESBAS short-sighted pseudo-regret). If the stochastic multi-armed bandit Ξ guarantees a regret of order of magnitude O(log(T)/∆†_β), then:

    ρ^{σ^ESBAS}_ss(T) ∈ O( Σ_{β=0}^{⌊log(T)⌋} β/∆†_β ).        (9)

Proof. By simplification of notation, Eµ^α_β = Eµ^α_{D^{σ^ESBAS}_{2^β − 1}}. From Definition 2:

    ρ^{σ^ESBAS}_ss(T) = E_{σ^ESBAS}[ Σ_{τ=1}^{T} ( max_{α∈P} Eµ^α_{D^{σ^ESBAS}_{τ−1}} − Eµ^{σ^ESBAS(τ)}_{D^{σ^ESBAS}_{τ−1}} ) ],        (18a)

    ρ^{σ^ESBAS}_ss(T) = E_{σ^ESBAS}[ Σ_{τ=1}^{T} ( max_{α∈P} Eµ^α_{β_τ} − Eµ^{σ^ESBAS(τ)}_{β_τ} ) ],        (18b)

    ρ^{σ^ESBAS}_ss(T) ≤ E_{σ^ESBAS}[ Σ_{β=0}^{⌊log₂(T)⌋} Σ_{τ=2^β}^{2^{β+1} − 1} ( max_{α∈P} Eµ^α_β − Eµ^{σ^ESBAS(τ)}_β ) ],        (18c)

    ρ^{σ^ESBAS}_ss(T) ≤ Σ_{β=0}^{⌊log₂(T)⌋} ρ^{σ^ESBAS}_ss(β),        (18d)

where β_τ is the epoch of meta-time τ. A bound on the short-sighted pseudo-regret ρ^{σ^ESBAS}_ss(β) for each epoch β can then be obtained from the stochastic bandit Ξ regret bounds in O(log(T)/∆):

    ρ^{σ^ESBAS}_ss(β) = E_{σ^ESBAS}[ Σ_{τ=2^β}^{2^{β+1} − 1} ( max_{α∈P} Eµ^α_β − Eµ^{σ^ESBAS(τ)}_β ) ],        (19a)

    ρ^{σ^ESBAS}_ss(β) ∈ O( log(2^β)/∆_β ),        (19b)

    ρ^{σ^ESBAS}_ss(β) ∈ O( β/∆_β ),        (19c)

    ⇔ ∃κ₁ > 0,   ρ^{σ^ESBAS}_ss(β) ≤ κ₁β/∆_β,        (19d)

where

    1/∆_β = Σ_{α∈P} 1/∆^α_β,        (19e)

and where

    ∆^α_β = +∞ if Eµ^α_β = max_{α′∈P} Eµ^{α′}_β,   and   ∆^α_β = max_{α′∈P} Eµ^{α′}_β − Eµ^α_β otherwise.        (19f)

Since we are interested in the order of magnitude, we can once again only consider the upper bound of 1/∆_β:

    1/∆_β ∈ O( ⋃_{α∈P} 1/∆^α_β ),        (20a)

    1/∆_β ∈ O( max_{α∈P} 1/∆^α_β ),        (20b)

    ⇔ ∃κ₂ > 0,   1/∆_β ≤ κ₂/∆†_β,        (20c)

where the second best algorithm at epoch β such that ∆†_β > 0 is noted α†_β. Injected into Equation 18d, it becomes:

    ρ^{σ^ESBAS}_ss(T) ≤ κ₁κ₂ Σ_{β=0}^{⌊log₂(T)⌋} β/∆†_β,        (21)

which proves the result.
F.1 COROLLARIES OF THEOREM 2

Corollary 1. If ∆†_β ∈ Θ(1), then ρ^{σ^ESBAS}_ss(T) ∈ O( log²(T)/∆†_∞ ), where ∆†_∞ = µ*_∞ − µ†_∞ > 0.

Proof. ∆†_β ∈ Ω(1) means that only one algorithm α* converges to the optimal asymptotic performance µ*_∞ and that there exists ∆†_∞ = µ*_∞ − µ†_∞ > 0 such that ∀ε₂ > 0, ∃β₁ ∈ N such that ∀β ≥ β₁, ∆†_β > ∆†_∞ − ε₂. In this case, the following bound can be deduced from Equation 21:

    ρ^{σ^ESBAS}_ss(T) ≤ κ₄ + Σ_{β=β₁}^{⌊log(T)⌋} κ₁κ₂ β/(∆†_∞ − ε₂),        (22a)

    ρ^{σ^ESBAS}_ss(T) ≤ κ₄ + κ₁κ₂ log²(T)/(2(∆†_∞ − ε₂)),        (22b)

where κ₄ is a constant equal to the short-sighted pseudo-regret before epoch β₁:

    κ₄ = ρ^{σ^ESBAS}_ss(2^{β₁} − 1).        (23)

Equation 22b directly leads to the corollary.

Corollary 2. If ∆†_β ∈ Θ(β^{−m†}), then ρ^{σ^ESBAS}_ss(T) ∈ O( log^{m†+2}(T) ).

Proof. If ∆†_β decreases no faster than polynomially in epochs, which implies decreasing polylogarithmically in meta-time, i.e. ∃κ₅ > 0, ∃m† > 0, ∃β₂ ∈ N such that ∀β ≥ β₂, ∆†_β > κ₅β^{−m†}, then, from Equation 21:

    ρ^{σ^ESBAS}_ss(T) ≤ κ₆ + Σ_{β=β₂}^{⌊log(T)⌋} κ₁κ₂ β/(κ₅β^{−m†}),        (24a)

    ρ^{σ^ESBAS}_ss(T) ≤ κ₆ + Σ_{β=β₂}^{⌊log(T)⌋} (κ₁κ₂/κ₅) β^{m†+1},        (24b)

    ρ^{σ^ESBAS}_ss(T) ≤ κ₆ + (κ₁κ₂/κ₅) log^{m†+2}(T),        (24c)

where κ₆ is a constant equal to the short-sighted pseudo-regret before epoch β₂:

    κ₆ = ρ^{σ^ESBAS}_ss(2^{β₂} − 1).        (25)

Equation 24c directly leads to the corollary.

Corollary 3. If ∆†_β ∈ Θ(T^{−c†}), then ρ^{σ^ESBAS}_ss(T) ∈ O( T^{c†} log(T) ).

Proof. If ∆†_β decreases no faster than a fractional power of meta-time T, then ∃κ₇ > 0, 0 < c† < 1, ∃β₃ ∈ N such that ∀β ≥ β₃, ∆†_β > κ₇T^{−c†}, and therefore, from Equation 21:

    ρ^{σ^ESBAS}_ss(T) ≤ κ₈ + Σ_{β=β₃}^{⌊log(T)⌋} κ₁κ₂ β/(κ₇τ^{−c†}),        (26a)

    ρ^{σ^ESBAS}_ss(T) ≤ κ₈ + Σ_{β=β₃}^{⌊log(T)⌋} κ₁κ₂ β/(κ₇(2^β)^{−c†}),        (26b)

    ρ^{σ^ESBAS}_ss(T) ≤ κ₈ + Σ_{β=β₃}^{⌊log(T)⌋} (κ₁κ₂/κ₇) β 2^{c†β},        (26c)

where κ₈ is a constant equal to the short-sighted pseudo-regret before epoch β₃:

    κ₈ = ρ^{σ^ESBAS}_ss(2^{β₃} − 1).        (27)

The sum in Equation 26c is solved as follows:

    Σ_{i=i₀}^{n} i x^i = x Σ_{i=i₀}^{n} i x^{i−1},        (28a)

    Σ_{i=i₀}^{n} i x^i = x Σ_{i=i₀}^{n} d(x^i)/dx,        (28b)

    Σ_{i=i₀}^{n} i x^i = x d( Σ_{i=i₀}^{n} x^i )/dx,        (28c)

    Σ_{i=i₀}^{n} i x^i = x d( (x^{n+1} − x^{i₀})/(x − 1) )/dx,        (28d)

    Σ_{i=i₀}^{n} i x^i = x/(x − 1)² ( (x − 1)n x^n − x^n − (x − 1)i₀ x^{i₀−1} + x^{i₀} ).        (28e)

This result, injected into Equation 26c, implies that ∀ε₃ > 0, ∃T₁ ∈ N, ∀T ≥ T₁:

    ρ^{σ^ESBAS}_ss(T) ≤ κ₈ + ( κ₁κ₂(1 + ε₃)2^{c†} / (κ₇(2^{c†} − 1)) ) log(T) 2^{c† log(T)},        (29a)

    ρ^{σ^ESBAS}_ss(T) ≤ κ₈ + ( κ₁κ₂(1 + ε₃)2^{c†} / (κ₇(2^{c†} − 1)) ) T^{c†} log(T),        (29b)

which proves the corollary.
G ESBAS ABSOLUTE PSEUDO-REGRET BOUND

Theorem 3 (ESBAS absolute pseudo-regret upper bound). Under Assumption 3, if the stochastic multi-armed bandit Ξ guarantees that the best arm has been selected in the T first episodes at least T/K times, with high probability 1 − δT, where δT ∈ O(1/T), then:

    ∃κ > 0, ∀T ≥ 9K²,   ρ^{σ^ESBAS}_abs(T) ≤ (3K + 1) ρ^{σ*}_abs(T/3K) + ρ^{σ^ESBAS}_ss(T) + κ log(T),        (11)

where meta-algorithm σ* selects exclusively algorithm α* = argmin_{α∈P} ρ^{σ^α}_abs(T).

Proof. The ESBAS absolute pseudo-regret is written with the following notation simplifications: D_{τ−1} = D^{σ^ESBAS}_{τ−1} and k_τ = σ^ESBAS(τ):

    ρ^{σ^ESBAS}_abs(T) = T Eµ*_∞ − E_{σ^ESBAS}[ Σ_{τ=1}^{T} Eµ^{σ^ESBAS(τ)}_{D^{σ^ESBAS}_{τ−1}} ],        (30a)

    ρ^{σ^ESBAS}_abs(T) = T Eµ*_∞ − E_{σ^ESBAS}[ Σ_{τ=1}^{T} Eµ^{k_τ}_{D_{τ−1}} ].        (30b)

Let σ* denote the algorithm selection selecting exclusively α*, and α* be the algorithm minimising the algorithm absolute pseudo-regret:

    α* = argmin_{α^k∈P} ρ^{σ^k}_abs(T).        (31)

Note that σ* is the optimal constant algorithm selection at horizon T, but it is not necessarily the optimal algorithm selection: there might exist, and there probably exists, a non-constant algorithm selection yielding a smaller pseudo-regret.

The ESBAS absolute pseudo-regret ρ^{σ^ESBAS}_abs(T) can be decomposed into the pseudo-regret for not having followed the optimal constant algorithm selection σ* and the pseudo-regret for not having selected the algorithm with the highest return, i.e. between the pseudo-regret on the trajectory and the pseudo-regret on the immediate optimal return:

    ρ^{σ^ESBAS}_abs(T) = T Eµ*_∞ − E_{σ^ESBAS}[ Σ_{τ=1}^{T} Eµ*_{sub*(D_{τ−1})} ]
                         + E_{σ^ESBAS}[ Σ_{τ=1}^{T} Eµ*_{sub*(D_{τ−1})} ] − E_{σ^ESBAS}[ Σ_{τ=1}^{T} Eµ^{k_τ}_{D_{τ−1}} ],        (32)

where Eµ*_{sub*(D_{τ−1})} is the expected return of policy π*_{sub*(D_{τ−1})}, learnt by algorithm α* on trajectory set sub*(D_{τ−1}), which is the trajectory subset of D_{τ−1} obtained by removing all trajectories that were not generated with algorithm α*.

The first line of Equation 32 can be rewritten as follows:

    T Eµ*_∞ − E_{σ^ESBAS}[ Σ_{τ=1}^{T} Eµ*_{sub*(D_{τ−1})} ] = Σ_{τ=1}^{T} ( Eµ*_∞ − E_{σ^ESBAS}[ Eµ*_{sub*(D_{τ−1})} ] ).        (33)

The key point in Equation 33 is to evaluate the size of sub*(D_{τ−1}).

On the one side, Assumption 3 of fairness states that one algorithm learns as fast as any other over any history. The asymptotically optimal algorithm(s) when τ → ∞ is (are) therefore the same one(s) whatever the algorithm selection is. On the other side, let 1 − δ_τ denote the probability that, at time τ, the following inequality is true:

    |sub*(D_{τ−1})| ≥ ⌊(τ − 1)/3K⌋.        (34)

With probability δ_τ, inequality 34 is not guaranteed and nothing can be inferred about Eµ*_{sub*(D_{τ−1})}, except that it is bounded below by Rmin/(1 − γ). Let E^{τ−1}_{3K} be the subset of E^{τ−1} such that ∀D ∈ E^{τ−1}_{3K}, |sub*(D)| ≥ ⌊(τ − 1)/3K⌋. Then, δ_τ can be expressed as follows:

    δ_τ = Σ_{D∈E^{τ−1}\E^{τ−1}_{3K}} P(D|σ^ESBAS).        (35)

With these new notations:

    Eµ*_∞ − E_{σ^ESBAS}[ Eµ*_{sub*(D_{τ−1})} ] ≤ Eµ*_∞ − Σ_{D∈E^{τ−1}_{3K}} P(D|σ^ESBAS) Eµ*_{sub*(D)} − δ_τ Rmin/(1 − γ),        (36a)

    Eµ*_∞ − E_{σ^ESBAS}[ Eµ*_{sub*(D_{τ−1})} ] ≤ (1 − δ_τ)Eµ*_∞ − Σ_{D∈E^{τ−1}_{3K}} P(D|σ^ESBAS) Eµ*_{sub*(D)} + δ_τ ( Eµ*_∞ − Rmin/(1 − γ) ).        (36b)

Let us consider E+(α, N), the set of all sets D such that |sub_α(D)| = N and such that the last trajectory in D was generated by α. Since ESBAS, with Ξ, a stochastic bandit with regret in O(log(T)/∆), guarantees that all algorithms will eventually be selected an infinity of times, we know that:

    ∀α ∈ P, ∀N ∈ N,   Σ_{D∈E+(α,N)} P(D|σ^ESBAS) = 1.        (37)

By applying Assumption 2 recursively, one demonstrates that:

    Σ_{D∈E+(α,N)} P(D|σ^ESBAS) Eµ^α_{sub_α(D)} ≥ Σ_{D∈E^N} P(D|σ^α) Eµ^α_D,        (38a)

    Σ_{D∈E+(α,N)} P(D|σ^ESBAS) Eµ^α_{sub_α(D)} ≥ E_{σ^α}[ Eµ^α_{D^{σ^α}_N} ].        (38b)

One also notices the following piece-wise domination from applying Assumption 1 recursively:

    (1 − δ_τ)Eµ*_∞ − Σ_{D∈E^{τ−1}_{3K}} P(D|σ^ESBAS) Eµ*_{sub*(D)} = Σ_{D∈E^{τ−1}_{3K}} P(D|σ^ESBAS) ( Eµ*_∞ − Eµ*_{sub*(D)} ),        (39a)

    (1 − δ_τ)Eµ*_∞ − Σ_{D∈E^{τ−1}_{3K}} P(D|σ^ESBAS) Eµ*_{sub*(D)} ≤ Σ_{D∈E+(α*,⌊(τ−1)/3K⌋), |D|≤τ−1} P(D|σ^ESBAS) ( Eµ*_∞ − Eµ*_{sub*(D)} ),        (39b)

    (1 − δ_τ)Eµ*_∞ − Σ_{D∈E^{τ−1}_{3K}} P(D|σ^ESBAS) Eµ*_{sub*(D)} ≤ Σ_{D∈E+(α*,⌊(τ−1)/3K⌋)} P(D|σ^ESBAS) ( Eµ*_∞ − Eµ*_{sub*(D)} ),        (39c)

    (1 − δ_τ)Eµ*_∞ − Σ_{D∈E^{τ−1}_{3K}} P(D|σ^ESBAS) Eµ*_{sub*(D)} ≤ Eµ*_∞ − Σ_{D∈E+(α*,⌊(τ−1)/3K⌋)} P(D|σ^ESBAS) Eµ*_{sub*(D)}.        (39d)

Then, by applying the results from Equations 38b and 39d into Equation 36b, one obtains:

    Eµ*_∞ − E_{σ^ESBAS}[ Eµ*_{sub*(D_{τ−1})} ] ≤ Eµ*_∞ − E_{σ*}[ Eµ^{α*}_{D^{σ*}_{⌊(τ−1)/3K⌋}} ] + δ_τ ( Eµ*_∞ − Rmin/(1 − γ) ).        (40)

Next, the terms in the first line of Equation 32 are bounded as follows:

    T Eµ*_∞ − E_{σ^ESBAS}[ Σ_{τ=1}^{T} Eµ*_{sub*(D_{τ−1})} ] ≤ T Eµ*_∞ − E_{σ*}[ Σ_{τ=1}^{T} Eµ^{α*}_{D^{σ*}_{⌊(τ−1)/3K⌋}} ] + ( Eµ*_∞ − Rmin/(1 − γ) ) Σ_{τ=1}^{T} δ_τ,        (41a)

    T Eµ*_∞ − E_{σ^ESBAS}[ Σ_{τ=1}^{T} Eµ*_{sub*(D_{τ−1})} ] ≤ ( T/⌊T/3K⌋ ) ρ^{σ*}_abs(⌊T/3K⌋) + ( Eµ*_∞ − Rmin/(1 − γ) ) Σ_{τ=1}^{T} δ_τ.        (41b)

Again, for T ≥ 9K²:

    T Eµ*_∞ − E_{σ^ESBAS}[ Σ_{τ=1}^{T} Eµ*_{sub*(D_{τ−1})} ] ≤ (3K + 1) ρ^{σ*}_abs(T/3K) + ( Eµ*_∞ − Rmin/(1 − γ) ) Σ_{τ=1}^{T} δ_τ.        (42)

Regarding the first term in the second line of Equation 32, from applying Assumption 2 recursively:

    E_{σ^ESBAS}[ Eµ*_{sub*(D_τ)} ] ≤ E_{σ^ESBAS}[ Eµ*_{D_τ} ],        (43a)

    E_{σ^ESBAS}[ Eµ*_{sub*(D_τ)} ] ≤ E_{σ^ESBAS}[ max_{α∈P} Eµ^α_{D_τ} ].        (43b)

From this observation, one directly concludes the following inequality:

    E_{σ^ESBAS}[ Σ_{τ=1}^{T} Eµ*_{sub*(D_τ)} ] − E_{σ^ESBAS}[ Σ_{τ=1}^{T} Eµ^{k_τ}_{D_τ} ] ≤ E_{σ^ESBAS}[ Σ_{τ=1}^{T} ( max_{α∈P} Eµ^α_{D_τ} − Eµ^{k_τ}_{D_τ} ) ],        (44a)

    E_{σ^ESBAS}[ Σ_{τ=1}^{T} Eµ*_{sub*(D_τ)} ] − E_{σ^ESBAS}[ Σ_{τ=1}^{T} Eµ^{k_τ}_{D_τ} ] ≤ ρ^{σ^ESBAS}_ss(T).        (44b)

Injecting the results from Equations 42 and 44b into Equation 32 provides the result:

    ρ^{σ^ESBAS}_abs(T) ≤ (3K + 1) ρ^{σ*}_abs(T/3K) + ρ^{σ^ESBAS}_ss(T) + ( Eµ*_∞ − Rmin/(1 − γ) ) Σ_{τ=1}^{T} δ_τ.        (45)

We recall here that the stochastic bandit algorithm Ξ was assumed to guarantee trying the best algorithm α* at least N/K times with high probability 1 − δ_N, with δ_N ∈ O(N^{−1}). Now, we show that at any time, the longest stochastic bandit run (i.e. the epoch that experienced the biggest number of pulls) lasts at least N = τ/3: at epoch β_τ, the meta-time spent on the epochs preceding epoch β_τ − 1 is Σ_{β=0}^{β_τ−2} 2^β = 2^{β_τ−1} − 1; the meta-time spent on epoch β_τ − 1 is equal to 2^{β_τ−1}; the meta-time spent on epoch β_τ is either below 2^{β_τ−1}, in which case the meta-time spent on epoch β_τ − 1 is higher than τ/3, or the meta-time spent on epoch β_τ is over 2^{β_τ−1} and therefore higher than τ/3. Thus, ESBAS is guaranteed to try the best algorithm α* at least τ/3K times with high probability 1 − δ_τ, where δ_τ ∈ O(τ^{−1}). As a result:

    ∃κ₃ > 0,   ρ^{σ^ESBAS}_abs(T) ≤ (3K + 1) ρ^{σ*}_abs(T/3K) + ρ^{σ^ESBAS}_ss(T) + ( Eµ*_∞ − Rmin/(1 − γ) ) Σ_{τ=1}^{T} κ₃/τ,        (46)

    ∃κ > 0,   ρ^{σ^ESBAS}_abs(T) ≤ (3K + 1) ρ^{σ*}_abs(T/3K) + ρ^{σ^ESBAS}_ss(T) + κ log(T),        (47)

with κ = κ₃( Eµ*_∞ − Rmin/(1 − γ) ), which proves the theorem.
| 2 |
Interference Model Similarity Index and Its Applications to mmWave Networks: Extended version
arXiv:1710.02659v1, 7 Oct 2017
Hossein Shokri-Ghadikolaei, Student Member, IEEE,
Carlo Fischione, Member, IEEE, and Eytan Modiano, Fellow, IEEE
Abstract
In wireless communication networks, interference models are routinely used for tasks such as
performance analysis, optimization, and protocol design. These tasks are heavily affected by the accuracy
and tractability of the interference models. Yet, quantifying the accuracy of these models remains a major
challenge. In this paper, we propose a new index for assessing the accuracy of any interference model
under any network scenario. Specifically, the index quantifies the ability of any interference model
to correctly predict harmful interference events, that is, link outages. We consider
specific wireless scenarios of both conventional sub-6 GHz and millimeter-wave (mmWave) networks
and demonstrate how our index yields insights into the possibility of simplifying the set of dominant
interferers, replacing a Nakagami or Rayleigh random fading by an equivalent deterministic channel,
and ignoring antenna sidelobes. Our analysis reveals that in highly directional antenna settings with
obstructions, even simple interference models (such as the classical protocol model) are accurate, while
with omnidirectional antennas, more sophisticated and complex interference models (such as the classical
physical model) are necessary. We further use the proposed index to develop a simple interference model
for mmWave networks that can significantly simplify design principles of the important procedures for
wireless communication, such as beamforming, interference management, scheduling, and topology
control. Our new approach makes it possible to adopt the simplest interference model of adequate
accuracy for every wireless network.
H. Shokri-Ghadikolaei and C. Fischione are with KTH Royal Institute of Technology, Stockholm, Sweden (E-mails: {hshokri, carlofi}@kth.se).
E. Modiano is with the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, USA (E-mail: [email protected]).
A preliminary version of this paper [1] has been accepted for presentation at the IEEE International Conference on Communications (ICC), 2016.
Index Terms
Wireless communications, interference model, performance analysis, millimeter wave networks.
I. INTRODUCTION
Due to the shared nature of the wireless medium, interference plays a critical role in the design
and performance analysis of wireless networks, where the intended signal is combined with
other undesired wireless signals transmitted on the same (time, frequency, spatial) channel. The
receiver typically decodes the received signal by canceling parts of the interference and treating
the rest as noise. Successful decoding at the receiver depends on the desired signal strength, the
ambient noise level accumulated over the operating bandwidth, and the interference level. The
signal-to-interference-plus-noise ratio (SINR) is a common metric to evaluate the outage probability
(or the probability of successful decoding) of a transmission. However, performance analysis
using the SINR expression is complex, as it depends on the transmission strategies (transmission
power, antenna pattern, and medium access control (MAC) protocol), the often unknown or
hard-to-estimate random channel attenuation, the receiver design, and the (often partially unknown)
network topology. Due to this overwhelming complexity, the design and analysis of wireless networks
based on the actual SINR expression, while accurate, is very challenging. This difficulty is
further exacerbated in millimeter-wave (mmWave) networks, where penetration loss, first-order
reflection, and the antenna pattern introduce further elements of randomness [2]–[4]. This motivates
developing different techniques to mathematically model (abstract) various components of the
SINR, e.g., the transmission strategy, wireless channel, and network topology.
A. Related Works and Motivations
Define an interference model as a set of deterministic or stochastic functions that model various
components of the SINR expression. There have been many attempts in the literature to design
interference models (equivalently, to approximate the SINR expression) that accurately capture
the effect of interference, while being tractable for the mathematical analysis. These interference
models largely try to answer the following questions under various network settings:
Q1. How can we model the set of interferers whose contributions to the aggregated interference
term are dominant?
Q2. How can we simplify the transmission/reception and propagation models to enhance the tractability of the interference model with a marginal loss in its accuracy?
Answering Q1 demands a careful balance between the accuracy and the simplicity of the interference model. Considering the effects of more interferers in the SINR model generally increases the
accuracy but also the complexity. In this regard, the simplest model is the primary interference
model [5], wherein an outage event occurs only if two communication links share a common
endpoint. In other words, the only interference component in this model is self-interference that
leads to a half-duplex operating mode. Interference range model (IRM) is an attempt to improve
the accuracy of the primary interference model [6], where an outage event occurs if the closest
interferer is located no farther than a certain distance of the receiver, called the interference
range. By setting this distance to 0, the IRM can be reduced to the primary interference model.
A modified version of IRM is the protocol model (PRM), formalized by the seminal work of
Gupta and Kumar [7]. The only modification is that the interference range, instead of being a
constant value as in the IRM, depends on the received power from the intended transmitter and a
minimum SINR threshold for successful decoding. Although the IRM and PRM are very simple,
they fail to capture the effect of interference aggregation (i.e., the sum of the interference power
from multiple interferers). It might be that, while there is no interferer inside the interference
range, the aggregated interference from several transmitters outside the interference range drives
the perceived SINR below the threshold. Thus, these models are generally considered to be overly
simplistic. Nonetheless, due to their mathematical tractability, the IRM (including the primary
interference model) and PRM are extensively adopted for the performance analysis and for the
system design; e.g., transport capacity [7]–[9], delay [10], [11], fairness [12], throughput [13]–
[15], topology control [16], [17], routing [18], and backoff design [19].
To alleviate the aforementioned problem of IRM and PRM, the interference ball model (IBM)
considers the aggregated impacts of near-field interferers, located no farther than a certain
distance. The price is higher complexity of the IBM compared to the IRM and PRM. Nonetheless,
the IBM has been extensively adopted in the performance evaluation of wireless networks [4],
[20]–[22]. The topological interference model (TIM) [23] is a natural extension of the IBM that
considers the aggregated impact of all the transmitters whose individual interference level at
the receiver side is not below a certain threshold. In other words, this model neglects weak
links based on the "topological" knowledge. The TIM is adopted for capacity and degree-of-freedom analysis [23], [24]. The most accurate and complex answer to Q1 is the physical
model (PhyM) [7], which considers the aggregated interference of all transmitters in the entire
network.1 The PhyM, also known as the SINR model, is adopted mostly at the physical layer;
e.g., beamforming design [26]–[28], capacity evaluation [7], [29], [30], power control [31], [32],
coverage analysis [4], energy efficiency characterization [33], and spectrum sharing [34].
The answer to Q2 depends heavily on the transmission and reception strategies and propagation
environment. For instance, approximating the random wireless channel gain with its first moment
(average) is a common technique to simplify the SINR expression and to design MAC and
networking layers [14], [17], [21], [35]–[38]. Reference [39] replaced a Nakagami fading channel
by a Rayleigh one for mathematical tractability and numerically concluded from its Fig. 5 that
such approximation preserves the main properties of the rate coverage performance. Yet, the
impact of these mathematical approximations on the accuracy of the performance analysis is not
well understood. Recently, [40] considers the impact of such approximation on the scheduling.
In particular, the authors show that, if we design scheduling for n transmitters based on a proper
non-fading channel model (deterministic approximation of the random channel gain), the network
throughput will be within O(log n) of that of the optimal scheduler, designed based on the actual
random channel gains. This result, however, is limited to the Rayleigh fading model. As another
example, for mmWave communications with many antenna elements, [14] and [41] assume no
emissions from the antenna sidelobe, which affects the SINR distribution. This assumption is
relaxed in [4], where the antenna sidelobe is modeled by a small constant value, adding further
complexity into the interference model. As a result, the final derivations, while being more
accurate, are less tractable and provide less insights. However, without having a mathematical
framework that allows assessing the impact of neglecting antenna sidelobes, it is not clear which
approach better balances the simplicity-accuracy tradeoff of mathematical analysis.
1 Under very special network settings (e.g., a homogeneous Poisson field of interferers exhibiting Rayleigh fading), the PhyM may be mathematically more tractable than both the PRM and IBM [25]; however, the PRM and IBM are still more desirable models for protocol design and for network optimization [22].
The proper choice of interference model depends on many parameters such as the receiver design,
antenna directionality, network topology, channel model, and the choice of medium access
protocol [6], [35], [36]. To the best of our knowledge, there has been no systematic method
to analyze the accuracy of various interference models, choose the proper interference model,
and quantify the amount of error due to adopting other interference models for a given network
scenario. The accuracy of different interference models has been mostly evaluated qualitatively,
without fully understanding the mutual impacts of different parameters of the physical, medium
access, and network layers. This qualitative analysis, however, is often overly simplistic, and may
result in the use of interference models that are only marginally more accurate, yet significantly
more complex than needed. As we will show throughout this paper, in certain settings of relevant
practical interest, even the simplest interference models are sufficiently accurate and can be used
to provide significant insights into the network performance and to enable efficient protocol
design.
B. Contributions
In this paper, we substantially extend the preliminary version of this study [1] and propose a new
framework to assess the distance of two arbitrary SINR distributions. We use this framework to
develop an interference model similarity index that takes on real values between 0 and 1, where
higher values correspond to higher similarity. This index builds a universal method to assess
the accuracy of any interference model under any network scenario. In other words, instead of
introducing a new interference model or a new approach to analyze SINR distribution, we propose
a novel framework to investigate the accuracy of the existing interference models. Therefore,
our study is complementary to the rich literature of interference analysis.
To exemplify the abilities of the proposed index, we mathematically evaluate it for the PRM and
IBM under three scenarios: (i) Rayleigh fading channel and omnidirectional communications (a
typical sub-6 GHz system); and (ii) Rayleigh fading channel and directional communications; and
(iii) deterministic wireless channel, directional communications, and existence of impenetrable
obstacles in the environment (a typical mmWave system). Although the applications of the
proposed index is general and goes beyond the examples provided in this paper, we use these
examples to illustrate fundamental properties of this index and also to provide insights on the
mutual effects of various network parameters on the accuracy of the interference model, thus
commenting on the proper model for a given network scenario.
In the first example scenario, served as a baseline, we derive a closed-form expression for
the accuracy index. We show that the accuracy of the IBM monotonically increases with the
interference range, at the expense of an increased complexity. In contrast, we show that there
is no such monotonic improvement in the accuracy of PRM. Thereby, we find the optimal
interference range that maximizes the accuracy of the PRM.
In the second example scenario, we show that both the PRM and IBM are significantly more
accurate with directional antennas. Further, in the third example scenario, we show significant
accuracy improvement of both PRM and IBM due to deterministic channel, directionality, and
also blockage. As these conditions hold in mmWave networks, we show that the PRM can be
used in the analysis of mmWave networks to significantly improve the mathematical tractability
of the problem, with a negligible loss in the analysis accuracy. We further use this index to
observe marginal impacts of the first-order reflection and sidelobe transmissions on the accuracy
of the interference model, which inspire us to propose a tractable and accurate interference model
for mmWave networks.
Furthermore, we use the proposed framework to investigate the feasibility of modeling a random
fading channel with a deterministic channel. We show that if the spatial distribution of the
transmitters follows a Poisson point process on the plane and if the path-loss exponent is 2, then
the average of the fading random variable (see footnote 2) is among the best constant approximations of the
random fading channel to analyze any ergodic function of the SINR (e.g., transport capacity,
throughput, and delay).
Throughout the paper, we show how the proposed index can increase our understanding of the
mutual interactions among the accuracy of the performance evaluation and various network
parameters and modeling techniques. We also signify how we can rigorously develop simple
interference models of adequate accuracy to simplify design principles of the main functions
of wireless communications such as beamforming, interference management, scheduling, and
topology control.
2 Rigorously speaking, the fading should be an absolutely continuous random variable, which holds for almost all wireless channels.
The rest of this paper is organized as follows. In Section II, we introduce our interference model
similarity index, and investigate it under various network scenarios in Sections III–VI. Future
works are presented in Section VII, and the paper is concluded in Section VIII.
II. INTERFERENCE MODEL SIMILARITY INDEX
A. Interference Model
We define a link as a pair of a transmitter and its intended receiver, where transmitter (receiver) i refers to the transmitter (receiver) of link i. Without loss of generality and for brevity, we assume that there is no interference cancellation, so all unintended transmitters act as potential interferers to any receiver. Consider a reference receiver and label its intended transmitter by subscript 0. Denote by I the set of its interferers (all active transmitters excluding the intended transmitter), by p_i the transmission power of transmitter i, by σ the power of the white Gaussian noise, by d_i the distance between transmitter i and the reference receiver, and by g_i^{Ch} the channel gain between transmitter i and the reference receiver. We denote by g_i^{Tx} the antenna gain at transmitter i toward the reference receiver, and by g_i^{Rx} the antenna gain at the reference receiver toward transmitter i. Then, the SINR at the reference receiver is
\[
\gamma = \frac{p_0\, g_0^{\mathrm{Tx}} g_0^{\mathrm{Ch}} g_0^{\mathrm{Rx}}}{\sum_{k\in\mathcal{I}} p_k\, g_k^{\mathrm{Tx}} g_k^{\mathrm{Ch}} g_k^{\mathrm{Rx}} + \sigma}\,.
\]
The SINR depends on the transmission powers, antenna patterns, set of active transmitters, channel model, and network topology. Let β > 0 denote the SINR threshold corresponding to a certain target bit error rate. An outage on the reference link occurs when γ < β. Different interference models attempt to approximate the outage probability by ignoring certain components of the interference (see questions Q1 and Q2 in Section I-A). In particular, the IRM, PRM, IBM, TIM, and PhyM characterize the set of interferers I. Neglecting various components of the channel model translates into different distributions for g_i^{Ch}. Power allocation affects p_i, and various scheduling protocols further affect I.
B. Formal Definition of the Similarity Index
Consider a reference interference model y under a given set of parameters/functions describing the wireless network. Define γ^y as the SINR of a reference receiver under this model. We define a binary hypothesis test, where hypotheses H_0 and H_1 denote the absence and presence of outage under reference model y, respectively. That is,
\[
\begin{cases}
H_0\,, & \text{if } \gamma^y \ge \beta\,,\\
H_1\,, & \text{if } \gamma^y < \beta\,.
\end{cases}
\tag{1}
\]
We consider a test interference model x under any set of parameters/functions describing our wireless network, which are not necessarily equal to those of the reference model y. These differences result in a possible deviation of the SINR of the reference receiver under x, denoted by γ^x, from γ^y. From the outage point of view, irrespective of the differences between the individual parameters/functions of x and y, we say model x is similar to model y if it gives exactly the same outage result as y. Assume interference model x is a detector of outage events under y. To evaluate the performance of this detector compared to reference model y, we can use the notions of false alarm and miss-detection. A false alarm corresponds to the event that x predicts outage under hypothesis H_0 (i.e., y declares no harmful interference), whereas a miss-detection corresponds to the event that x fails to predict outage under hypothesis H_1. Now, the performance of any interference model x can be evaluated using the false alarm and miss-detection probabilities, namely p_{fa}^{x|y} and p_{md}^{x|y}. Formally,
\[
p_{\mathrm{fa}}^{x|y} = \Pr\!\left[\gamma^x < \beta \mid \gamma^y \ge \beta\right], \qquad
p_{\mathrm{md}}^{x|y} = \Pr\!\left[\gamma^x \ge \beta \mid \gamma^y < \beta\right].
\tag{2}
\]
The false alarm and miss-detection probabilities quantify the similarity of any interference model x in detecting outage events compared to any reference model y. Next, we define our index as a convex combination of these probabilities.
Definition 1 (Interference Model Similarity Index). For any constant 0 ≤ ξ ≤ 1, any SINR threshold β, any test interference model x, and any reference interference model y, we define the similarity of x to y at β as
\[
S_{\beta,\xi}(x\|y) = \xi\left(1 - p_{\mathrm{fa}}^{x|y}\right) + (1-\xi)\left(1 - p_{\mathrm{md}}^{x|y}\right)
= 1 - \xi\, p_{\mathrm{fa}}^{x|y} - (1-\xi)\, p_{\mathrm{md}}^{x|y}\,,
\tag{3}
\]
where p_{fa}^{x|y} and p_{md}^{x|y} are given in (2). Notice that the random variables γ^x and γ^y must have a common support.
S_{β,ξ}(x‖y) is a unit-less quantity ranging within [0, 1], where higher values represent higher similarity between x and y in capturing outage events at SINR threshold β. Setting ξ = Pr[γ^y ≥ β], the quantity ξ p_{fa}^{x|y} + (1 − ξ) p_{md}^{x|y} is the average error in detecting the outage events; therefore, S_{β,Pr[γ^y ≥ β]}(x‖y) is the probability that interference model x makes the same decision as reference interference model y in detecting the outage events.
Remark 1 (Accuracy of an Interference Model). Let reference model y perfectly capture the outage events in reality, namely, model y does not make any approximation or simplification. The accuracy of any interference model x is then S_{β,ξ}(x‖y), and we call it the accuracy index throughout the paper.
The proposed index is a universal metric that can be used to quantify the accuracy of any interference model proposed in the literature, as we exemplify in the following sections.
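As a concrete illustration of Definition 1, the index can be estimated from paired SINR samples of the test and reference models obtained over the same network realizations. The following Python sketch is a minimal Monte Carlo estimator of (2) and (3); it is not part of the paper's analysis, and the function and variable names are ours.

```python
import numpy as np

def similarity_index(gamma_x, gamma_y, beta, xi=None):
    """Monte Carlo estimate of S_{beta,xi}(x||y) from paired SINR samples.

    gamma_x, gamma_y : SINR samples (linear scale) under the test model x and
                       the reference model y, drawn from the same realizations.
    beta             : SINR threshold (linear scale).
    xi               : weight of the false-alarm term; if None, use
                       xi = Pr[gamma_y >= beta] as in the paper.
    """
    gamma_x, gamma_y = np.asarray(gamma_x), np.asarray(gamma_y)
    h0 = gamma_y >= beta              # no outage under the reference model
    h1 = ~h0                          # outage under the reference model
    if xi is None:
        xi = h0.mean()
    # false alarm: x declares outage although y does not, Eq. (2)
    p_fa = np.mean(gamma_x[h0] < beta) if h0.any() else 0.0
    # miss-detection: x misses an outage declared by y, Eq. (2)
    p_md = np.mean(gamma_x[h1] >= beta) if h1.any() else 0.0
    return 1.0 - xi * p_fa - (1.0 - xi) * p_md   # Eq. (3)
```

With ξ left at its default value Pr[γ^y ≥ β], the returned value is the probability that x makes the same outage decision as y, as discussed after Definition 1.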
C. Comparison to the Existing Statistical Distance Measures
The interference model similarity index formulated in (3) measures the distance (Footnote 3) between the PDF of γ^x and that of γ^y. Let f_X denote the PDF of random variable X. In the following, we highlight three main advantages of our index with respect to the existing standard distance measures, such as the Bhattacharyya distance and the Kullback-Leibler (KL) divergence [42].
First, the existing standard distance measures mostly map the distance between f_{γ^x} and f_{γ^y} over their entire support to only one real value. It might be that two distributions are very similar in the meaningful range of SINR values (0–10 dB), but very different outside this range. Still, the classical statistical distance measures may report a large distance between the two distributions, as they compare f_{γ^x} to f_{γ^y} over the entire SINR range. This is a misleading result that may unnecessarily discourage the use of the simplified interference model x in practice. Our similarity index, in contrast, allows us to investigate whether or not x is accurate at any given SINR threshold.
Second, both the Bhattacharyya distance and the KL divergence may fail in a comparative analysis. In particular, f_{γ^y} might be more similar to f_{γ^x} than to f_{γ^z} in a point-wise comparison, yet the Bhattacharyya distance and the KL divergence of f_{γ^x} from f_{γ^y} may be higher than those of f_{γ^z} from f_{γ^y}, as shown in the following toy example.
Example 1. Consider discrete random variables X, Y, and Z with common support {1, 2, 3} and probability mass functions

  t        1      2      3
  fX(t)    0.05   0.25   0.7
  fY(t)    0.1    0.45   0.45
  fZ(t)    0.25   0.2    0.55

Footnote 3: Rigorously speaking, our similarity index is not a distance measure, as it does not satisfy the subadditivity property. Moreover, we are measuring similarity, which is in general a decreasing function of the distance.
Then, we have the following metrics (fX is the reference in the KL divergence):

  Distributions    Euclidean distance    Bhattacharyya distance    KL divergence
  fX, fY           0.324                 0.033                     0.059
  fX, fZ           0.255                 0.045                     0.098
In this example, neither the Bhattacharyya distance nor the KL divergence identifies the higher point-wise similarity of Z to X compared with that of Y to X.
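For reproducibility, the metrics of Example 1 can be recomputed in a few lines. The reported KL values appear to correspond to a base-10 logarithm with fX as the reference distribution; this is an inference from the reported numbers, not something stated explicitly in the text.

```python
import numpy as np

fX = np.array([0.05, 0.25, 0.70])
fY = np.array([0.10, 0.45, 0.45])
fZ = np.array([0.25, 0.20, 0.55])

def euclidean(p, q):
    return np.sqrt(np.sum((p - q) ** 2))

def bhattacharyya_distance(p, q):
    return -np.log(np.sum(np.sqrt(p * q)))       # natural logarithm

def kl_divergence(p, q):
    return np.sum(p * np.log10(p / q))           # base-10 logarithm (our assumption)

for name, f in (("fY", fY), ("fZ", fZ)):
    print(name,
          round(euclidean(fX, f), 3),             # 0.324, 0.255
          round(bhattacharyya_distance(fX, f), 3),# 0.033, 0.045
          round(kl_divergence(f, fX), 3))         # 0.059, 0.098
```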
Last but not least, unlike the existing statistical distance metrics, which are not necessarily intended for communication systems, our similarity index is developed for these systems, so it has a physical meaning and can provide practical insights. Specifically, setting ξ = Pr[γ^y ≥ β], our index S_{β,ξ}(x‖y) evaluates the probability that outage events are correctly detected under interference model x.
Note that other distance metrics may still be useful to evaluate the accuracy of an interference model, and they may also have some relationship to our proposed index; see the following remark as an example.
Remark 2 (Relationship to the Bhattacharyya Coefficient). Let ξ = Pr[γ^y ≥ β]. By noting that S_{β,Pr[γ^y ≥ β]}(x‖y) is the probability of having no hypothesis detection error and following [42, Equation (48)], we get
\[
\frac{3}{2} - \xi - \rho\sqrt{\xi(1-\xi)} \;\le\; S_{\beta,\xi}(x\|y) \;\le\; 1 - \xi + \sqrt{\frac{1}{4} - \xi(1-\xi)\rho^2}\,,
\tag{4}
\]
where ρ = ∫ √(f_{γ^x}(t) f_{γ^y}(t)) dt is the Bhattacharyya coefficient.
D. Applications of the Interference Model Similarity Index
In the following, we provide two classes of illustrative examples where our index can be used either to simplify the mathematical analysis or to justify the existing interference models. The use cases of our index, however, go beyond these examples.
1) Simplifying the Set of Interferers: This is one of the first steps in choosing an interference model for performance analysis, protocol design, and network optimization. With omnidirectional transmission/reception and without interference cancellation, an outage occurs under
• the PRM: if there is an active transmitter no farther than an interference range r_PRM = (1 + Δ)d_0, where Δ is a constant positive real value [7];
• the IBM: if the SINR due to all active transmitters located no farther than an interference range r_IBM is less than β [22];
• the TIM: if the SINR due to all active transmitters with strong links (with individual channel gains higher than ε) toward the receiver is less than β [23]; and
• the PhyM: if the SINR due to all active transmitters is less than β [7].
To present a unified view, we associate three random variables a_k^{PRM}, a_k^{IBM}, and a_k^{TIM} with the link between each transmitter k ∈ I and the typical receiver. a_k^{PRM} is set to +∞ if d_k ≤ (1 + Δ)d_0, and otherwise to 0. a_k^{IBM} is set to 1 if d_k ≤ r_IBM, and otherwise to 0. Finally, a_k^{TIM} is set to 1 if g_k^{Ch} > ε, and otherwise to 0. We define a virtual channel gain for these interference models as
\[
g_k^{x} = a_k^{x}\, g_k^{\mathrm{Ch}}\,, \qquad \text{for interference model } x\,,
\tag{5}
\]
where x is a label denoting PRM, IBM, TIM, or PhyM, and a_k^{PhyM} ≜ 1. Apart from the virtual channel gain, all other parameters of interference models x and y are identical. The SINR at the typical receiver under interference model x is then given by
\[
\gamma^{x} = \frac{p_0\, g_0^{\mathrm{Tx}} g_0^{\mathrm{Ch}} g_0^{\mathrm{Rx}}}{\sum_{k\in\mathcal{I}} p_k\, g_k^{\mathrm{Tx}} g_k^{x} g_k^{\mathrm{Rx}} + \sigma}\,.
\tag{6}
\]
The design of many key functions of a wireless network, such as scheduling [43] or power allocation [31], needs an estimate of (6). To this end, a receiver may need to coordinate with a set of interferers to estimate their individual instantaneous contributions to the SINR expression, namely p_k g_k^{Tx} g_k^{x} g_k^{Rx} for all k ∈ I. The PhyM may imply that every receiver should coordinate with all the interferers in the entire network (global information), whose cost, complexity, and delay may be unaffordable in many networking scenarios. Using the IBM implies that each node should coordinate with all transmitters within a certain radius (local information), and the PRM necessitates coordination only with the closest unintended transmitter, which is appealing from energy and protocol-overhead perspectives. Our proposed index gives quantitative insight into the accuracy of various interference models used for protocol development and for network optimization, and allows the use of the right interference model for a given channel model and network scenario.
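The unified view of (5)–(6) translates directly into a simple numerical routine. The following sketch computes the SINR of the typical receiver under the PRM, IBM, TIM, or PhyM from the virtual channel gains; the default values of Δ, r_IBM, and ε are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def sinr_under_model(model, signal, p, g_tx, g_ch, g_rx, d, d0, noise,
                     delta=1.0, r_ibm=80.0, eps=1e-13):
    """SINR (6) at the typical receiver under interference model `model`.

    signal : received power of the intended link, p0 * g0^Tx * g0^Ch * g0^Rx.
    p, g_tx, g_ch, g_rx, d : arrays over the interferers k in I (powers, antenna
        gains, channel gains, distances); channel gains assumed strictly positive.
    model in {"PRM", "IBM", "TIM", "PhyM"}.
    """
    d = np.asarray(d, dtype=float)
    g_ch = np.asarray(g_ch, dtype=float)
    if model == "PRM":    # a_k = +inf inside the vulnerable region, else 0
        a = np.where(d <= (1.0 + delta) * d0, np.inf, 0.0)
    elif model == "IBM":  # a_k = 1 inside the interference ball, else 0
        a = (d <= r_ibm).astype(float)
    elif model == "TIM":  # a_k = 1 for strong links (g_ch > eps), else 0
        a = (g_ch > eps).astype(float)
    else:                 # PhyM: a_k = 1 for every interferer
        a = np.ones_like(d)
    virtual_gain = a * g_ch                                          # Eq. (5)
    interference = np.sum(np.asarray(p) * np.asarray(g_tx)
                          * virtual_gain * np.asarray(g_rx))
    return signal / (interference + noise)                           # Eq. (6)
```

Note that, under the PRM, the resulting SINR is zero whenever an interferer lies inside the vulnerable region (its virtual gain is +∞), so the outage test γ^{PRM} < β reduces to checking for an interferer within (1 + Δ)d_0 of the receiver.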
2) Simplifying the Channel Model: Our accuracy index can be used to adopt tractable channel models (g_k^{Ch} for every transmitter k) of adequate accuracy. This is especially important for mmWave networks, where LoS and non-LoS conditions have different channel models, the non-LoS (blockage) probability follows a rather complicated function, the LoS channel may follow a Nakagami fading distribution in general, and realistic antenna patterns might be complicated non-linear functions. Various works have tried to simplify those complications without a rigorous analysis of the validity of such simplifications. For instance, [14] assumed impenetrable obstacles (so communication occurs only under LoS conditions) and neglected the antenna side lobes, [4] approximated the stochastic non-LoS probability function by a deterministic LoS ball in which there is no obstacle within a certain range of the receiver and there are no LoS links outside that ball, and [39] replaced the Nakagami fading channel by a Rayleigh fading channel that facilitates mathematical analysis. Due to the lack of a systematic approach to simplifying the channel model, the understanding of the cross-layer dynamics between the MAC and physical layers of most existing standards is a largely open problem, and the existing frameworks, such as the one in [44], are usually not mathematically tractable.
In the following, we illustrate the utility of our index in four example scenarios. Although our index is not limited to these example scenarios, we simplify some parameters of the system model to avoid unnecessary complications. In the first three examples, we focus on simplifying the set of interferers for various network settings and derive closed-form expressions for the accuracy index to highlight its fundamental properties. In the last example scenario, we use our index to numerically assess the accuracy of various approaches to simplifying the channel model.
For the rest of this paper, without loss of generality, we assume ξ = Pr[γ^y ≥ β], so S_{β,ξ}(x‖y) evaluates the probability of correct decisions under interference model x.
III. EXAMPLE SCENARIO 1: RAYLEIGH FADING CHANNEL WITH OMNIDIRECTIONAL COMMUNICATIONS
Consider a wireless network with Rayleigh fading channels and omnidirectional transmission/reception. Assume that the PhyM perfectly captures the outage events. In this section, we evaluate the accuracy of the IBM, PRM, and TIM (see Section II-D, where we recalled the definitions of these prominent models) for such a scenario.
We consider a reference receiver (called the typical receiver) at the origin of the polar coordinate system, and its intended transmitter at distance d_0. We consider a homogeneous Poisson network of interferers (unintended transmitters) on the plane with intensity λ_t. We assume that all the transmitters are active with transmission power p (no power control), and that there is no interference cancellation, which are natural assumptions in personal and local area networks. With omnidirectional transmission and reception, there are no antenna gains, so g_k^{Tx} = g_k^{Rx} = 1 for k ∈ I ∪ {0}. Note that, under these assumptions, the PhyM is more tractable for coverage and rate analyses than the other models (PRM, IBM, and TIM) [25]; however, we still use this example to derive a closed-form expression for the new accuracy index and thereby illustrate its fundamental properties, which hold in general. Nonetheless, even in this network setting, the PRM and IBM are more appealing than the PhyM for protocol design and for network optimization [22].
We define by B(θ, r_in, r_out) a geometrical annulus sector with angle θ, inner radius r_in, and outer radius r_out, centered at the location of the typical receiver (the origin of the polar coordinate system). To model the wireless channel, we consider a constant attenuation c at the reference distance of 1 m, a distance-dependent attenuation with exponent α, and a Rayleigh fading component h. To avoid the physically unreasonable singularity that arises at the origin under power-law attenuation, we change the path-loss exponent to α 1_{d_i > a}, where 1_A is the indicator function taking value 1 when condition A holds and zero otherwise. This modified power-law model implies that the signal of all transmitters located outside a disk with radius a is attenuated by the traditional power law, whereas the transmitters inside this disk observe no channel attenuation. Therefore, the channel gain between transmitter i at radial distance d_i and the typical receiver is g_i^{Ch} = c h_i d_i^{−α 1_{d_i > a}}. To avoid unnecessary complications while illustrating the utility of our index, we eliminate shadow fading from our channel model.
We are now ready to illustrate the utility of our proposed index using the SINR expression (6).
A. Accuracy of the Interference Ball Model
For mathematical tractability, we assume that r_IBM ≥ a and d_0 ≥ a; the extension to the general case is straightforward. The false alarm probability can be reformulated as
\[
p_{\mathrm{fa}}^{\mathrm{IBM|PhyM}} = \Pr\!\left[\gamma^{\mathrm{IBM}} < \beta \mid \gamma^{\mathrm{PhyM}} \ge \beta\right]
= \frac{\Pr\!\left[\gamma^{\mathrm{IBM}} < \beta\right]\Pr\!\left[\gamma^{\mathrm{PhyM}} \ge \beta \mid \gamma^{\mathrm{IBM}} < \beta\right]}{1 - \Pr\!\left[\gamma^{\mathrm{PhyM}} < \beta\right]}\,.
\tag{7}
\]
Although the PhyM considers the impact of all the interferers in the entire network, the IBM considers only the effects of the near-field ones. Consequently, γ^{PhyM} ≤ γ^{IBM}, and thus Pr[γ^{PhyM} ≥ β | γ^{IBM} < β] = 0 in the numerator of (7). This results in p_{fa}^{IBM|PhyM} = 0.
For the miss-detection probability, we have
\[
p_{\mathrm{md}}^{\mathrm{IBM|PhyM}} = \Pr\!\left[\gamma^{\mathrm{IBM}} \ge \beta \mid \gamma^{\mathrm{PhyM}} < \beta\right]
= 1 - \Pr\!\left[\gamma^{\mathrm{IBM}} < \beta \mid \gamma^{\mathrm{PhyM}} < \beta\right]
= 1 - \frac{\Pr\!\left[\gamma^{\mathrm{IBM}} < \beta\right]\Pr\!\left[\gamma^{\mathrm{PhyM}} < \beta \mid \gamma^{\mathrm{IBM}} < \beta\right]}{\Pr\!\left[\gamma^{\mathrm{PhyM}} < \beta\right]}
= 1 - \frac{\Pr\!\left[\gamma^{\mathrm{IBM}} < \beta\right]}{\Pr\!\left[\gamma^{\mathrm{PhyM}} < \beta\right]}\,,
\tag{8}
\]
where the last equality follows from γ^{PhyM} ≤ γ^{IBM}. In Appendix A, we have derived
\[
\Pr\!\left[\gamma^{\mathrm{IBM}} < \beta\right] = 1 - \exp\Bigg( -\frac{\sigma\beta d_0^{\alpha}}{pc}
- \pi\lambda_t\, \mathbb{E}_h\Big[ a^2\big(1 - e^{-\beta d_0^{\alpha} h}\big)
+ r_{\mathrm{IBM}}^2\big(1 - e^{-\beta d_0^{\alpha} h\, r_{\mathrm{IBM}}^{-\alpha}}\big)
- a^2\big(1 - e^{-\beta d_0^{\alpha} h\, a^{-\alpha}}\big)
+ \big(\beta d_0^{\alpha} h\big)^{2/\alpha}\,\Gamma\!\Big(1-\tfrac{2}{\alpha},\, \beta d_0^{\alpha} h\, r_{\mathrm{IBM}}^{-\alpha}\Big)
- \big(\beta d_0^{\alpha} h\big)^{2/\alpha}\,\Gamma\!\Big(1-\tfrac{2}{\alpha},\, \beta d_0^{\alpha} h\, a^{-\alpha}\Big)\Big]\Bigg),
\tag{9}
\]
and
\[
\Pr\!\left[\gamma^{\mathrm{PhyM}} < \beta\right] = 1 - \exp\Bigg( -\frac{\sigma\beta d_0^{\alpha}}{pc}
- \pi\lambda_t\, \mathbb{E}_h\Big[ a^2\big(1 - e^{-\beta d_0^{\alpha} h}\big)
- a^2\big(1 - e^{-\beta d_0^{\alpha} h\, a^{-\alpha}}\big)
+ \big(\beta d_0^{\alpha} h\big)^{2/\alpha}\,\Gamma\!\Big(1-\tfrac{2}{\alpha}\Big)
- \big(\beta d_0^{\alpha} h\big)^{2/\alpha}\,\Gamma\!\Big(1-\tfrac{2}{\alpha},\, \beta d_0^{\alpha} h\, a^{-\alpha}\Big)\Big]\Bigg),
\tag{10}
\]
where Γ(·,·) is the (upper) incomplete Gamma function, Γ(·) is the Gamma function, E_h denotes expectation over the random variable h, and the probability density function of h is f_h(x) = e^{−x}. Substituting (9) and (10) into (8), the miss-detection probability can be found. Also, from (3), the accuracy of the interference ball model, S_{β,ξ}(IBM‖PhyM), follows. A simple extension of our analysis gives the accuracy index when d_0 is a random variable. Recall that the purpose of this section is to illustrate only the utility of our index; investigating more practical system models is a subject of our future work; see for instance [45].
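The expressions (9) and (10) are straightforward to evaluate numerically. The sketch below approximates the expectation over h ~ Exp(1) by Monte Carlo averaging and then obtains p_{md}^{IBM|PhyM} from (8) and the accuracy index from (3); the numeric values follow the simulation parameters of Section III-C converted to linear scale, while the sample size and seed are our own assumptions.

```python
import numpy as np
from scipy.special import gamma as Gamma, gammaincc

def upper_inc_gamma(s, x):
    # unregularized upper incomplete Gamma(s, x); valid here since s = 1 - 2/alpha > 0
    return gammaincc(s, x) * Gamma(s)

def outage_probabilities(beta, d0, alpha, a, lam_t, p, c, sigma, r_ibm,
                         n_samples=200_000, seed=0):
    """Evaluate Pr[gamma^IBM < beta] (Eq. (9)) and Pr[gamma^PhyM < beta]
    (Eq. (10)), approximating E_h[.] for h ~ Exp(1) by Monte Carlo averaging."""
    rng = np.random.default_rng(seed)
    h = rng.exponential(1.0, n_samples)
    s = beta * d0**alpha * h                        # recurring term beta*d0^alpha*h
    e = 1.0 - 2.0 / alpha
    common = (a**2 * (1 - np.exp(-s))
              - a**2 * (1 - np.exp(-s * a**-alpha))
              - s**(2/alpha) * upper_inc_gamma(e, s * a**-alpha))
    ibm = common + r_ibm**2 * (1 - np.exp(-s * r_ibm**-alpha)) \
                 + s**(2/alpha) * upper_inc_gamma(e, s * r_ibm**-alpha)
    phym = common + s**(2/alpha) * Gamma(e)
    noise = sigma * beta * d0**alpha / (p * c)
    pr_ibm = 1 - np.exp(-noise - np.pi * lam_t * ibm.mean())
    pr_phym = 1 - np.exp(-noise - np.pi * lam_t * phym.mean())
    return pr_ibm, pr_phym

# Section III-C parameters in linear scale (mW): 20 dBm power, 22.7 dB reference
# attenuation, -111 dBm noise, beta = 5 dB, d_t = 80 m
pr_ibm, pr_phym = outage_probabilities(beta=10**0.5, d0=20.0, alpha=3.6, a=1.0,
                                       lam_t=1/80**2, p=100.0, c=10**-2.27,
                                       sigma=10**-11.1, r_ibm=80.0)
p_md = 1 - pr_ibm / pr_phym                 # Eq. (8); p_fa = 0 for the IBM
accuracy = 1 - pr_phym * p_md               # Eq. (3) with xi = Pr[gamma^PhyM >= beta]
```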
Result 1 (Perfect Interference Ball Model). For any constant 0 ≤ ξ ≤ 1 and any β, S_{β,ξ}(IBM‖PhyM) → 1 as r_IBM → ∞.
Proof: We know that p_{fa}^{IBM|PhyM} = 0 for any constant 0 ≤ ξ ≤ 1 and any β. Moreover, as r_IBM increases, Pr[γ^{IBM} < β] tends to Pr[γ^{PhyM} < β]. Considering (8), p_{md}^{IBM|PhyM} asymptotically goes to zero as r_IBM → ∞. With a zero false alarm probability and an asymptotically zero miss-detection probability, the proof is concluded from (3).
Result 1 indicates that the IBM becomes more accurate with larger r_IBM, and it can be arbitrarily accurate for sufficiently large r_IBM. The price, however, is a more complicated IBM, as its evaluation at a receiver demands coordination with more interferers (Footnote 4). Also, negotiation with other transmitters (e.g., for MAC layer design) within this larger r_IBM becomes more challenging in terms of power consumption, signaling overhead, delay, and processing overhead.
B. Accuracy of the Protocol Model
We now consider the PRM and first note that
\[
p_{\mathrm{fa}}^{\mathrm{PRM|PhyM}} = 1 - \frac{\big(1 - \Pr\!\left[\gamma^{\mathrm{PRM}} < \beta\right]\big)\big(1 - \Pr\!\left[\gamma^{\mathrm{PhyM}} < \beta \mid \gamma^{\mathrm{PRM}} \ge \beta\right]\big)}{1 - \Pr\!\left[\gamma^{\mathrm{PhyM}} < \beta\right]}\,,
\tag{11}
\]
and that
\[
p_{\mathrm{md}}^{\mathrm{PRM|PhyM}} = \frac{\big(1 - \Pr\!\left[\gamma^{\mathrm{PRM}} < \beta\right]\big)\Pr\!\left[\gamma^{\mathrm{PhyM}} < \beta \mid \gamma^{\mathrm{PRM}} \ge \beta\right]}{\Pr\!\left[\gamma^{\mathrm{PhyM}} < \beta\right]}\,.
\tag{12}
\]
In the last two equations, note that Pr[γ^{PhyM} < β] is derived in (10). In the following, we derive Pr[γ^{PRM} < β] and Pr[γ^{PhyM} < β | γ^{PRM} ≥ β].
The event γ^{PRM} < β occurs if there is at least one interferer inside B(2π, 0, r_PRM). As I is a homogeneous Poisson point process with intensity λ_t, we have
\[
\Pr\!\left[\gamma^{\mathrm{PRM}} < \beta\right] = 1 - \exp\!\big(-\lambda_t \pi r_{\mathrm{PRM}}^2\big)\,.
\tag{13}
\]
Footnote 4: Note that, for the special settings of this section, considering the impact of all interferers (PhyM) simplifies the analysis. However, this does not hold in general, e.g., if we change the spatial distribution of the interferers to a determinantal point process.
In Appendix A, we have also derived
\[
\Pr\!\left[\gamma^{\mathrm{PhyM}} < \beta \mid \gamma^{\mathrm{PRM}} \ge \beta\right] = 1 - \exp\Bigg( -\frac{\sigma\beta d_0^{\alpha}}{pc}
- \pi\lambda_t\, \mathbb{E}_h\Big[ -r_{\mathrm{PRM}}^2\big(1 - e^{-\beta d_0^{\alpha} h\, r_{\mathrm{PRM}}^{-\alpha}}\big)
+ \big(\beta d_0^{\alpha} h\big)^{2/\alpha}\,\Gamma\!\Big(1-\tfrac{2}{\alpha}\Big)
- \big(\beta d_0^{\alpha} h\big)^{2/\alpha}\,\Gamma\!\Big(1-\tfrac{2}{\alpha},\, \beta d_0^{\alpha} h\, r_{\mathrm{PRM}}^{-\alpha}\Big)\Big]\Bigg).
\tag{14}
\]
Substituting (11)–(14) into (3), we can find S_{β,Pr[γ^{PhyM} ≥ β]}(PRM‖PhyM) for a Rayleigh fading channel with omnidirectional transmission/reception.
Result 2 (Miss-detection–False Alarm Tradeoff). Consider the protocol model of interference with a Rayleigh fading channel. Increasing the interference range r_PRM increases the false alarm probability and reduces the miss-detection probability. Decreasing the interference range reduces the false alarm probability and increases the miss-detection probability.
Proof: Pr[γ^{PRM} < β] is a strictly increasing function of r_PRM, see (13). Considering the expressions of the false alarm and miss-detection probabilities given in (11) and (12), the proof concludes.
Result 3 (Asymptotic Accuracy of the Protocol Model). Consider Equations (3) and (11)–(13). For any 0 ≤ ξ ≤ 1 and any β > 0, we have the following asymptotic results:
\[
r_{\mathrm{PRM}} \to a,\ a \to 0 \;\Rightarrow\; p_{\mathrm{fa}}^{\mathrm{PRM|PhyM}} \to 0\,,\quad p_{\mathrm{md}}^{\mathrm{PRM|PhyM}} \to 1\,,\quad S_{\beta,\xi}(\mathrm{PRM}\|\mathrm{PhyM}) \to \xi\,,
\]
\[
r_{\mathrm{PRM}} \to \infty \;\Rightarrow\; p_{\mathrm{fa}}^{\mathrm{PRM|PhyM}} \to 1\,,\quad p_{\mathrm{md}}^{\mathrm{PRM|PhyM}} \to 0\,,\quad S_{\beta,\xi}(\mathrm{PRM}\|\mathrm{PhyM}) \to 1 - \xi\,.
\]
Result 3 further confirms the tradeoff between the miss-detection and false alarm probabilities.
C. Numerical Illustrations
To illustrate the accuracy index in Scenario 1 via Monte Carlo simulations, we consider a spatial Poisson network of interferers and obstacles with densities λ_t and λ_o per unit area. The length of the typical link is d_0 = 20 m. We simulate a traditional outdoor microwave network [4] with average attenuation c = 22.7 dB at the reference distance a = 1 m, path-loss exponent α = 3.6, and noise power σ = −111 dBm (around 2 MHz bandwidth). We consider p = 20 dBm transmission
power and β = 5 dB as the minimum SINR threshold. For ease of illustration, we define the average inter-transmitter distance as d_t = 1/√λ_t. This distance directly relates to the inter-site distance in cellular networks, and also reflects the transmitter density of the network.

Fig. 1: Impact of the interference range on the accuracy of interference models under Rayleigh fading channel and omnidirectional communications. (a) Error probabilities; (b) Accuracy index.
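Alternatively to evaluating the closed-form expressions, the error probabilities can be estimated by directly simulating the network, as done for Fig. 1. The sketch below drops a Poisson field of interferers in a finite window, computes γ^{PhyM} and the PRM outage indicator for each drop, and estimates p_fa, p_md, and the accuracy index of the PRM; the window radius, number of drops, and seed are illustrative assumptions.

```python
import numpy as np

def simulate_prm_accuracy(r_prm, beta_db=5.0, d0=20.0, alpha=3.6, a=1.0,
                          dt=80.0, p_dbm=20.0, c_db=-22.7, sigma_dbm=-111.0,
                          window=1000.0, n_drops=5000, seed=1):
    """Monte Carlo estimate of p_fa, p_md, and the accuracy index of the PRM
    in Scenario 1 (Rayleigh fading, omnidirectional antennas)."""
    rng = np.random.default_rng(seed)
    beta, p = 10**(beta_db / 10), 10**(p_dbm / 10)        # linear scale, mW
    c, sigma = 10**(c_db / 10), 10**(sigma_dbm / 10)
    lam_t = 1.0 / dt**2
    n_h0 = n_h1 = n_fa = n_md = 0
    for _ in range(n_drops):
        n = rng.poisson(lam_t * np.pi * window**2)
        r = window * np.sqrt(rng.uniform(size=n))          # interferer distances
        h = rng.exponential(1.0, n)
        interf = np.sum(p * c * h * np.where(r > a, r, 1.0)**-alpha)
        sinr = p * c * rng.exponential(1.0) * d0**-alpha / (interf + sigma)
        outage_phym = bool(sinr < beta)
        outage_prm = bool(np.any(r <= r_prm))              # PRM outage test
        if outage_phym:
            n_h1 += 1
            n_md += (not outage_prm)
        else:
            n_h0 += 1
            n_fa += outage_prm
    xi = n_h0 / n_drops
    p_fa = n_fa / max(n_h0, 1)
    p_md = n_md / max(n_h1, 1)
    return p_fa, p_md, 1 - xi * p_fa - (1 - xi) * p_md
```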
Fig. 1 illustrates the impact of the interference range on the accuracy of both the IBM and the PRM under Scenario 1. From Fig. 1(a), increasing r_PRM increases p_{fa}^{PRM|PhyM} and reduces p_{md}^{PRM|PhyM}, highlighted as the tradeoff between the miss-detection and false alarm probabilities in Result 2. This tradeoff may lead to an increase (see d_t = 30) or a decrease (see d_t = 80) of the accuracy index of the PRM with the interference range. The IBM has a zero false alarm probability, not depicted in Fig. 1(a) for the sake of clarity. Moreover, as stated in Result 1, p_{md}^{IBM|PhyM} decreases with r_IBM, leading to a more accurate IBM, as can be confirmed in Fig. 1(b). Note that, with the same transmitter density and interference range, the PRM has a lower miss-detection probability than the IBM; however, the better false alarm performance of the IBM leads to fewer errors in detecting outage events and therefore a higher accuracy index, see Fig. 1(b). The TIM, not depicted in the figure, has a very high accuracy in all simulations. In particular, with ε = −130 dB, its accuracy is about 0.99. However, the corresponding TIM considers many interferers inside an irregular geometrical shape, which substantially decreases the tractability of the resulting interference model.
Fig. 2 shows the accuracy of the IBM and PRM under Scenario 1 against the average inter-transmitter distance. Again, we observe an enhancement in the accuracy of the IBM with r_IBM, whereas the accuracy index of the PRM shows a more complicated behavior as a function of r_PRM. By adopting the optimal r_PRM that maximizes the accuracy index, as shown in Fig. 2(b), we can maintain a good performance for the PRM.

Fig. 2: Impact of transmitter density on the accuracy of the interference models under Rayleigh fading channel and omnidirectional communications. The accuracy of the TIM with ε = −130 dB is higher than 0.98. (a) Interference ball model; (b) Protocol model of interference.

Both interference models are very accurate
at extremely dense transmitter deployments. The main reason is the very high interference level (ξ = Pr[γ^{PhyM} ≥ β] is almost 0 in this case), implying that the accuracy index is determined only by the miss-detection probability. Increasing the transmitter density by reducing d_t decreases the miss-detection probability of both the IBM and the PRM, see Fig. 1(a), improving their accuracy. For ultra-sparse transmitter deployments, again, both interference models work accurately, as ξ goes to 1 in this case and therefore only the false alarm probability determines the accuracy index. This probability is zero for the IBM, and it takes smaller values (asymptotically zero) for the PRM as d_t grows, see Fig. 1(a). Finally, the TIM with ε = −130 dB, not shown in Fig. 2, has a very high accuracy in modeling the interference; its accuracy over the same range of d_t is higher than 0.98.
Fig. 3 shows the KL divergence of f_{γ^{IBM}} from f_{γ^{PhyM}} and also their Bhattacharyya distance for the same setting as Fig. 2(a), where lower values translate into a higher accuracy of the IBM. From this figure, both the KL divergence and the Bhattacharyya distance can identify the higher accuracy of the IBM with r_IBM = 60 m. However, they both fail to show that the performance of the IBM with r_IBM = 20 m converges to that with r_IBM = 60 m once the network gets sparser. Moreover, calculating these measures entails almost the same mathematical/numerical complexity as our similarity index. For these reasons, we investigate only our accuracy index in the rest of the paper, though one may incorporate those metrics in our proposed interference-model similarity analysis framework.
Fig. 3: The KL divergence (labeled "KL") of the distribution of γ^{IBM} from that of γ^{PhyM} and their Bhattacharyya distance (labeled "BD"), corresponding to the accuracy index values of Fig. 2(a). Lower values translate into higher similarity between the two distributions.

Fig. 4: Impact of the SINR threshold on the accuracy of interference models under Rayleigh fading channel and omnidirectional communications.
Fig. 4 illustrates the accuracy index against the SINR threshold. Increasing the SINR threshold
generally increases the sensitivity of the interference model to any approximation error in x.
IV. EXAMPLE SCENARIO 2: RAYLEIGH FADING CHANNEL, DIRECTIONALITY, AND OBSTACLES
In this section, we analyze the accuracy of the IBM and PRM in modeling a wireless network with Rayleigh fading channels, where all transmitters and receivers use directional communications to boost the link budget and to reduce multiuser interference. We also consider impenetrable obstacles. The application areas of this scenario include the modeling and performance evaluation of mmWave networks, where directional communication is inevitable and the extreme penetration loss of most solid materials (e.g., 20–35 dB due to the human body [46]) justifies the impenetrable obstacle assumption. In Section VI-B, we comment on the impact of assuming impenetrable obstacles on the accuracy of the interference model.
Note that interference is not the primary limitation of mmWave networks, especially if we take an average over all possible realizations of a random topology [4], [41]. However, even if mmWave networks are noise-limited in a statistical sense (that is, averaging the interference over some time or some topologies), there are significant realizations of network topologies at given times in which some transmitters cause strong interference. We cannot rely on noise-limited arguments, which are valid over some time horizons, when we have to optimize resource allocation or routing in real time. In the following two sections, we show that special characteristics of mmWave networks, such as blockage and deafness, can be exploited to substantially simplify the interference model, so as to develop efficient scheduling and routing algorithms, which may otherwise be impossible. In fact, our results provide, for the first time, mathematical justifications for the use of simpler interference models in mmWave networks, as extensively done in the literature [14], [15], [17], [47]–[51].
We assume a homogeneous Poisson network of interferers as in Section III. If there is no obstacle on the link between transmitter i and the typical receiver located at the origin, we say that transmitter i is in line-of-sight (LoS) condition with respect to the typical receiver; otherwise it is in non-LoS condition. We assume that the transmitter of every link is spatially aligned with its intended receiver, so there is no beam-searching phase [52]. We model the antenna pattern by an ideal sector model [4], where the antenna gain is a constant in the main lobe and another, smaller constant in the side lobe. We assume the same operating beamwidth θ for all devices in both transmission and reception modes. Then, the antenna gain for each transmitter/receiver is [52, Equation (3)]
\[
\begin{cases}
\dfrac{2\pi - (2\pi - \theta)z}{\theta}\,, & \text{inside the main lobe}\,,\\[2mm]
z\,, & \text{inside the side lobe}\,,
\end{cases}
\tag{15}
\]
where 0 ≤ z ≪ 1 is the side lobe gain. For mathematical tractability, we assume a negligible side lobe gain (i.e., z = 0) throughout this section, and numerically assess the impact of this simplification in Section VI-B.
Consider the link between transmitter i and receiver j with distance d_ij. It has been shown that, with a random number of obstacles, each having random location and size, this link is in LoS condition with probability e^{−ǫλ_o d_ij}, where λ_o is the intensity of the obstacles and ǫ is a constant that depends on the average size of the obstacles in the environment [53]. Due to the exponential decrease of the LoS probability with the link length (also see [54, Fig. 4]), very far interferers are most likely blocked. For mathematical simplicity, we assume independent LoS conditions between the typical receiver and all other transmitters, and also impenetrable obstacles. Nonetheless, the following analysis can be extended to the more realistic blockage models introduced in [41]. Notice that we use this simplified model to investigate the effects of directionality and blockage on the accuracy of the interference models and to characterize fundamental properties of the proposed accuracy index. The exact value of the accuracy index with a more realistic mmWave channel can readily be computed numerically under any system model, as we highlight in the next sections.
To evaluate the accuracy of the IBM and PRM, we first notice that an unintended transmitter can cause a significant interference contribution at the typical receiver only if: (a) the typical receiver is inside its main lobe, (b) it is in LoS condition with respect to the typical receiver, and (c) it is inside the main lobe of the typical receiver. Due to the random deployment of the transmitters and receivers, the probability that the typical receiver is located inside the main lobe of a transmitter is θ/2π. Moreover, we have independent LoS events between the typical receiver and the individual transmitters. Therefore, the interferers for which conditions (a)–(b) hold follow an inhomogeneous Poisson point process I with intensity λ_I(r) = λ_t θ e^{−ǫλ_o r}/2π at radial distance r. Condition (c) restricts the angular region in which a potential interferer must be located to contribute to the interference observed by the typical receiver. We note that I ∩ B(θ, 0, r_PRM) is the set of potential interferers inside the vulnerable region of the PRM, shown by red triangles in Fig. 5, and I ∩ B(θ, r_PRM, ∞) is the set of potential interferers outside that region, shown by green circles in Fig. 5. Also, I ∩ B(θ, 0, r_IBM) is the set of potential interferers for the IBM (near-field interferers).
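The thinning argument above can be mirrored in simulation: starting from a homogeneous Poisson field of transmitters, a transmitter is kept as a potential interferer only if conditions (a)–(c) hold. A minimal sketch follows; the finite window r_max is an assumption made purely for simulation purposes.

```python
import numpy as np

def sample_potential_interferers(lam_t, lam_o, eps, theta, r_max, rng):
    """Radial distances of the potential interferers of the typical receiver in
    Scenario 2, obtained by thinning a homogeneous Poisson field of transmitters:
    a transmitter at distance r is kept with probability
    (theta/2pi) * exp(-eps*lam_o*r) (conditions (a)-(b)), and only if it falls in
    the receiver's main lobe, an angular sector of width theta (condition (c))."""
    n = rng.poisson(lam_t * np.pi * r_max**2)
    r = r_max * np.sqrt(rng.uniform(size=n))            # homogeneous PPP, radial part
    phi = rng.uniform(0.0, 2*np.pi, size=n)             # homogeneous PPP, angular part
    aligned = rng.uniform(size=n) < theta / (2*np.pi)   # (a) receiver inside Tx main lobe
    los = rng.uniform(size=n) < np.exp(-eps*lam_o*r)    # (b) LoS towards the receiver
    in_sector = phi < theta                             # (c) inside the Rx main lobe
    return r[aligned & los & in_sector]

# sanity check: the mean count over many runs approaches Lambda_{B(theta,0,r_max)} of (16)
rng = np.random.default_rng(0)
counts = [len(sample_potential_interferers(1/50**2, 0.008, 1.0, np.deg2rad(20),
                                           200.0, rng)) for _ in range(2000)]
```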
Fig. 5: Illustration of the vulnerable area.
A. Impact of Directionality and Blockage
Before deriving the accuracy of the IBM and PRM, we first evaluate the impact of directionality and blockage on the number of interferers. We define by Λ_{B(θ,0,R)} the measure of the region B(θ, 0, R), i.e., the average number of interferers inside the region. We have
\[
\Lambda_{B(\theta,0,R)} = \theta \int_0^{R} \lambda_I(r)\, r\, \mathrm{d}r
= \frac{\theta^2 \lambda_t}{2\pi \epsilon^2 \lambda_o^2}\Big(1 - (1 + \epsilon\lambda_o R)\, e^{-\epsilon\lambda_o R}\Big)\,.
\tag{16}
\]
Then, for any real R > 0, the number of potential interferers inside the region B(θ, 0, R), denoted by N_{B(θ,0,R)}, is a Poisson random variable with probability mass function
\[
\Pr\!\left[N_{B(\theta,0,R)} = n\right] = e^{-\Lambda_{B(\theta,0,R)}}\, \frac{\Lambda_{B(\theta,0,R)}^{n}}{n!}\,.
\tag{17}
\]
Result 4 (Impact of Directionality). Consider (16), and let ǫλ_o → 0. The average number of potential interferers converges to
\[
\frac{\theta^2 \lambda_t}{4\pi}\, R^2 = \frac{\theta}{2\pi}\, \lambda_t\, \frac{\theta R^2}{2}\,.
\tag{18}
\]
To interpret Result 4, with no obstacles in the environment (ǫλ_o → 0), we have a homogeneous Poisson network of interferers with density λ_t θ/2π. Therefore, the average number of interferers over B(θ, 0, R) is the product of this density and the area of B(θ, 0, R), which is θR²/2. It can be concluded that adopting narrower beams reduces the average number of potential interferers within a certain distance R; however, the total number of potential interferers still tends to infinity almost surely as R → ∞.
Result 5 (Impact of Blockage). Consider (16), and let R → ∞. The average number of potential interferers converges to
\[
\frac{\theta^2 \lambda_t}{2\pi \epsilon^2 \lambda_o^2}\,,
\tag{19}
\]
which is finite whenever ǫλ_o > 0.
Result 5 implies that any receiver observes a finite number of potential interferers almost surely if there is non-negligible blockage. This unique feature holds for the mmWave bands, as most obstacles can severely attenuate the signals (Footnote 5). Therefore, not only do farther transmitters contribute less to the aggregate interference (due to the higher path loss), they are also thinned by directionality and blockage, such that only a finite number of spatially close transmitters can cause non-negligible interference to any receiver. Note that these few interferers may still cause strong interference if they are located very close to the receiver. The point is that the thinning process due to directionality and blockage makes the SINR distribution under the PhyM closer to that of the IBM, which considers only the near-field interferers. To elaborate, we characterize the average number of far-field interferers in the following.
Proposition 1 (Measure of Far-Field Interferers). Let θ be the operating beamwidth, λ_t the density of the transmitters, λ_o the density of the obstacles, and ǫ > 0 a constant. Then, the average number of interferers located inside B(θ, R, ∞) is
\[
\Lambda_{B(\theta,R,\infty)} = \frac{\theta^2 \lambda_t}{2\pi \epsilon^2 \lambda_o^2}\, (1 + \epsilon\lambda_o R)\, e^{-\epsilon\lambda_o R}\,,
\tag{20}
\]
and the probability of having no far-field interferer is
\[
\Pr\!\left[N_{B(\theta,R,\infty)} = 0\right] = e^{-\Lambda_{B(\theta,R,\infty)}}\,.
\tag{21}
\]
Proof: To prove (20), we only need to compute θ ∫_R^∞ λ_I(r) r dr. Moreover, by substituting Λ_{B(θ,R,∞)} into (17) with n = 0, we conclude (21).
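Equations (16), (20), and (21) are simple enough to evaluate directly, e.g., to find the radius R_κ used in the discussion of Fig. 6. The grid-based search and the parameter names in the small sketch below are ours.

```python
import numpy as np

def measure_near(theta, lam_t, lam_o, eps, R):
    """Average number of potential interferers in B(theta, 0, R), Eq. (16)."""
    x = eps * lam_o * R
    return theta**2 * lam_t / (2*np.pi*(eps*lam_o)**2) * (1 - (1 + x)*np.exp(-x))

def measure_far(theta, lam_t, lam_o, eps, R):
    """Average number of far-field interferers in B(theta, R, inf), Eq. (20)."""
    x = eps * lam_o * R
    return theta**2 * lam_t / (2*np.pi*(eps*lam_o)**2) * (1 + x) * np.exp(-x)

def radius_for_capture(theta, lam_t, lam_o, eps, kappa, r_grid):
    """Smallest R on r_grid with Pr[no far-field interferer] >= 1 - kappa, Eq. (21)."""
    probs = np.exp(-measure_far(theta, lam_t, lam_o, eps, r_grid))
    hits = np.nonzero(probs >= 1 - kappa)[0]
    return float(r_grid[hits[0]]) if hits.size else None

# example: 20-degree beams, d_t = 50 m (lam_t = 1/2500), eps*lam_o = 0.008, kappa = 1%
r_grid = np.linspace(1.0, 1000.0, 4000)
R_kappa = radius_for_capture(np.deg2rad(20.0), lam_t=1/50**2, lam_o=0.008,
                             eps=1.0, kappa=0.01, r_grid=r_grid)
```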
From Proposition 1, the average number of far-field interferers decreases exponentially with the distance. Consequently, from (21), the probability of having no far-field interferer approaches one exponentially fast with the distance. Fig. 6 shows the probability of having at least one far-field interferer as a function of the distance. By defining an arbitrary threshold κ on this probability, we can find a distance R_κ after which the probability of having far-field interferer(s) is arbitrarily close to 0 (less than κ). This suggests that, by setting r_IBM = R_κ, the IBM captures all the interferers with probability at least 1 − κ, for any arbitrarily small κ. Recall that the neglected interferers, if any, are far-field, and their contributions to the total interference are suppressed by the significant distance-dependent path loss. All these facts result in the following conclusion.
Footnote 5: In conventional microwave systems, where the transmission is less sensitive to blockage, the number of potential interferers is almost surely infinite, as highlighted in Result 4.
Fig. 6: Probability of having at least one far-field interferer as a function of the distance. Simulation parameters are similar to those of Fig. 1.
Result 6. Directionality and blockage can substantially increase the accuracy of the interference ball model.
A similar accuracy improvement can be argued for the PRM, as we numerically illustrate in the next subsections.
B. Accuracy of the Interference Ball Model
Assume r_IBM ≥ a and d_0 ≥ a. Using arguments similar to those in Section III-A, it is straightforward to show that p_{fa}^{IBM|PhyM} = 0. To find the miss-detection probability, in Appendix B we derive
\[
\Pr\!\left[\gamma^{\mathrm{IBM}} < \beta\right] = 1 - \exp\Bigg( -\frac{\sigma\theta^2\beta d_0^{\alpha}}{4pc\pi^2}
- \frac{\theta^2\lambda_t}{2\pi}\, \mathbb{E}_h\bigg[ \big(1 - e^{-\beta d_0^{\alpha} h}\big)\, \frac{1 - (\epsilon\lambda_o a + 1)e^{-\epsilon\lambda_o a}}{\epsilon^2\lambda_o^2}
+ \int_{a}^{r_{\mathrm{IBM}}} \big(1 - e^{-\beta d_0^{\alpha} h r^{-\alpha}}\big)\, e^{-\epsilon\lambda_o r}\, r\, \mathrm{d}r \bigg]\Bigg),
\tag{22}
\]
and
\[
\Pr\!\left[\gamma^{\mathrm{PhyM}} < \beta\right] = 1 - \exp\Bigg( -\frac{\sigma\theta^2\beta d_0^{\alpha}}{4pc\pi^2}
- \frac{\theta^2\lambda_t}{2\pi\epsilon^2\lambda_o^2}\, \mathbb{E}_h\bigg[ 1 - e^{-\beta d_0^{\alpha} h}\big(1 - (\epsilon\lambda_o a + 1)e^{-\epsilon\lambda_o a}\big)
- \epsilon^2\lambda_o^2 \int_{a}^{\infty} e^{-\beta d_0^{\alpha} h r^{-\alpha}}\, e^{-\epsilon\lambda_o r}\, r\, \mathrm{d}r \bigg]\Bigg),
\tag{23}
\]
and substitute them into (8). Then, S_{β,ξ}(IBM‖PhyM) can be found using (22), (23), (8), and then (3). Similar to Result 1, for any 0 ≤ ξ ≤ 1, S_{β,ξ}(IBM‖PhyM) → 1 as r_IBM → ∞.
C. Accuracy of the Protocol Model
To derive the accuracy of the PRM, we need Pr[γ^{PhyM} < β], Pr[γ^{PRM} < β], and Pr[γ^{PhyM} < β | γ^{PRM} ≥ β], which we substitute into (11) and (12). Pr[γ^{PhyM} < β] is derived in (23).
The event γ^{PRM} < β implies that |I ∩ B(θ, 0, r_PRM)| ≥ 1, namely, there is at least one potential interferer inside B(θ, 0, r_PRM). Considering (17), the probability of this event is Pr[N_{B(θ,0,r_PRM)} ≥ 1], thus
\[
\Pr\!\left[\gamma^{\mathrm{PRM}} < \beta\right] = 1 - \exp\!\big(-\Lambda_{B(\theta,0,r_{\mathrm{PRM}})}\big)\,.
\tag{24}
\]
The event γ^{PRM} ≥ β implies that there is no potential interferer inside B(θ, 0, r_PRM). Assuming r_PRM ≥ a, it is easy to find Pr[γ^{PhyM} < β | γ^{PRM} ≥ β]:
\[
\Pr\!\left[\gamma^{\mathrm{PhyM}} < \beta \mid \gamma^{\mathrm{PRM}} \ge \beta\right] = 1 - \exp\Bigg( -\frac{\sigma\theta^2\beta d_0^{\alpha}}{4pc\pi^2}
- \frac{\theta^2\lambda_t}{2\pi}\, \mathbb{E}_h\bigg[ \int_{r_{\mathrm{PRM}}}^{\infty} \big(1 - e^{-\beta d_0^{\alpha} h r^{-\alpha}}\big)\, e^{-\epsilon\lambda_o r}\, r\, \mathrm{d}r \bigg]\Bigg).
\tag{25}
\]
Substituting (23)–(25) into (11) and (12) gives the accuracy index of the PRM. Note that Results 2 and 3 hold here as well.
D. Numerical Illustrations
To numerically illustrate the accuracy index in Scenario 2, we use the same simulation environment as in Section III-C. We independently and randomly mark some wireless links as blocked by obstacles, with the exponential blockage probability and ǫλ_o = 0.008 [53]. We then assume infinite penetration loss for the blocked links, and use the large-scale LoS path-loss model at 28 GHz [2, Table I]. The system bandwidth is 1 GHz (noise power σ = −84 dBm). Without loss of generality, we assume r_PRM = 40 m and r_IBM = 80 m.

Fig. 7: Accuracy of IBM and PRM under Rayleigh fading channel and directional communications with obstruction.
Fig. 7 illustrates the impact of the operating beamwidth and the average inter-transmitter distance on the accuracy index of both the IBM and the PRM under Scenario 2. As expected, the IBM outperforms the PRM. More importantly, directionality and blockage improve the accuracy of both interference models. We show in the following section that changing the underlying channel model from a Rayleigh fading model to a deterministic model further enhances their accuracies. Moreover, the accuracy of the TIM with ε = −130 dB, not depicted in Fig. 7 for the sake of clarity, is nearly 1 in our simulations. Notice that a simplified interference model (e.g., the PRM, IBM, or TIM) may not be sufficiently accurate for all ranges of parameters; still, its accuracy is substantially improved by directionality and blockage, as highlighted by Results 4–6.
V. EXAMPLE SCENARIO 3: DETERMINISTIC CHANNEL, DIRECTIONALITY, AND OBSTACLES
In this section, we investigate how accurately the IBM and PRM can model a wireless network with directional communications, blockage, and a deterministic wireless channel. The last assumption generally holds in mmWave networks, where the sparse scattering characteristic of mmWave frequencies, along with narrow-beam operation, makes the mmWave channel more deterministic than that of microwave systems with rich scattering environments and omnidirectional operation [54].
A. Accuracy of the Interference Ball Model
Again, it is straightforward to show that p_{fa}^{IBM|PhyM} = 0. However, unlike the previous cases, we cannot derive a closed-form expression for the miss-detection probability, and consequently for the accuracy index. In Appendix C, we have derived upper bounds on the miss-detection probability using the Chernoff bound.
B. Accuracy of the Protocol Model
Again, the deterministic wireless channel prohibits deriving closed-form expressions for the false alarm and miss-detection probabilities. Nevertheless, we can show that both Results 2 and 3 hold here. Moreover, we have the following result:
Result 7 (Zero False Alarm Probability). Under the deterministic channel model, the false alarm probability is zero for any r_PRM ≤ ζ^{−1/α}, where
\[
\zeta = \frac{d_0^{-\alpha}}{\beta} - \frac{\sigma}{pc}\left(\frac{\theta}{2\pi}\right)^{2}.
\tag{26}
\]
Proof: The SINR that the typical receiver experiences due to the transmission of the intended transmitter and of an unintended transmitter located at distance r_PRM is
\[
\frac{pc\left(\frac{2\pi}{\theta}\right)^{2} d_0^{-\alpha}}{pc\left(\frac{2\pi}{\theta}\right)^{2} r_{\mathrm{PRM}}^{-\alpha} + \sigma}\,.
\]
Comparing this SINR expression to β, we find that any interferer located at a distance r_PRM less than
\[
\left(\frac{d_0^{-\alpha}}{\beta} - \frac{\sigma}{pc}\left(\frac{\theta}{2\pi}\right)^{2}\right)^{-1/\alpha} = \zeta^{-1/\alpha}
\]
can cause packet loss at the typical receiver, namely γ^{PhyM} < β. Now, consider the general expression of the false alarm probability,
\[
p_{\mathrm{fa}}^{\mathrm{PRM|PhyM}} = \Pr\!\left[\gamma^{\mathrm{PRM}} < \beta \mid \gamma^{\mathrm{PhyM}} \ge \beta\right]
= \frac{\Pr\!\left[\gamma^{\mathrm{PRM}} < \beta\right]\Pr\!\left[\gamma^{\mathrm{PhyM}} \ge \beta \mid \gamma^{\mathrm{PRM}} < \beta\right]}{1 - \Pr\!\left[\gamma^{\mathrm{PhyM}} < \beta\right]}\,.
\]
For r_PRM ≤ ζ^{−1/α}, Pr[γ^{PhyM} ≥ β | γ^{PRM} < β] = 0, since the event γ^{PRM} < β implies that there is at least one interferer inside B(θ, 0, r_PRM). This interferer ensures γ^{PhyM} < β, so p_{fa}^{PRM|PhyM} = 0 for r_PRM ≤ ζ^{−1/α}.
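Result 7 gives an explicit design rule for choosing the PRM interference range under a deterministic (e.g., LoS mmWave) channel. The sketch below computes ζ^{−1/α} from (26); the numeric values in the example call are loosely based on the 28 GHz parameters used later in Section VI-B and are meant only as an illustration.

```python
import numpy as np

def max_zero_fa_range(d0, beta_db, p, c, sigma, theta, alpha):
    """Largest r_PRM guaranteeing a zero false-alarm probability under the
    deterministic channel model, namely zeta^{-1/alpha} with zeta from Eq. (26)."""
    beta = 10**(beta_db / 10)
    zeta = d0**-alpha / beta - (sigma / (p * c)) * (theta / (2*np.pi))**2
    if zeta <= 0:       # the link fails on noise alone; the result does not apply
        return None
    return zeta**(-1.0 / alpha)

# example: d0 = 20 m, beta = 5 dB, 20 dBm power (mW), 61.4 dB reference attenuation,
# -84 dBm noise (mW), 20-degree beams, alpha = 2 (illustrative assumptions)
r_max = max_zero_fa_range(d0=20.0, beta_db=5.0, p=100.0, c=10**-6.14,
                          sigma=10**-8.4, theta=np.deg2rad(20.0), alpha=2.0)
```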
The following proposition characterizes bounds on the accuracy index for Example Scenario 3 (mmWave networks).
Proposition 2. For ξ = Pr[γ^{PhyM} ≥ β] and any 0 < r_PRM ≤ ζ^{−1/α}, we have
\[
\Pr\!\left[\gamma^{\mathrm{PRM}} < \beta\right] \;\le\; S_{\beta,\xi}(\mathrm{PRM}\|\mathrm{PhyM}) \;\le\; 1\,,
\]
where Pr[γ^{PRM} < β] is given in (24).
We have provided a proof of this proposition, along with other bounds, in Appendix C. We also have the following scaling-law results:
Result 8 (Scaling Laws for the PRM). The following scaling laws are implied by Proposition 2 and the inequality e^x ≥ 1 + x for any x ≥ 0:
• Scaling with θ: For any constant r_PRM no larger than ζ^{−1/α}, lim_{θ→0} S_{β,ξ}(PRM‖PhyM) ≥ 1 − e^{−θ²C}, for some constant C ≥ 0.
• Scaling with λ_t: For any constant r_PRM no larger than ζ^{−1/α}, lim_{λ_t→∞} S_{β,ξ}(PRM‖PhyM) ≥ 1 − e^{−λ_t C}, for some constant C ≥ 0.
• Scaling with λ_o: For any constant r_PRM no larger than ζ^{−1/α}, lim_{λ_o→0} S_{β,ξ}(PRM‖PhyM) ≥ 1 − e^{−C} for some constant C ≥ 0, and lim_{λ_o→∞} S_{β,ξ}(PRM‖PhyM) ≥ 1 − e^{−λ_o^{-2} D} for some constant D ≥ 0.
Due to the lack of space and the complexity of the analysis, we leave the scaling laws of the IBM for a future publication. In Appendix C, we have used the Chernoff bound to bound Pr[γ^{PRM} < β], which is the first step toward deriving scaling laws for the IBM.
C. Numerical Illustrations
Using a setting similar to that of Section IV-D, Fig. 8 shows the accuracy index of both the IBM and the PRM under Scenario 3 against d_t. Comparing this figure to Fig. 7, we observe that directionality and blockage further boost the accuracy index when the wireless channel is deterministic. Surprisingly, the PRM is accurate enough to motivate adopting this model, instead of the PhyM, TIM, and even IBM, for the analysis and design of mmWave networks. For relatively narrow (pencil) beams (e.g., θ = 10°–20°), which may be used in wireless backhauling applications, the accuracy of the PRM in detecting outage events is almost 1 in all our simulations. Compared to the PRM, the PhyM and IBM respectively have less than 5% and 2% higher accuracy in modeling the interference and detecting the outage events, but with substantially higher complexity. This complexity often results in limited (mostly intractable) mathematical analysis and little insight. More interestingly, the relative difference between the average rate of the typical link computed by the PRM and that computed by the PhyM, namely E[log₂(1 + γ^x)] and E[log₂(1 + γ^y)], is
less than 0.002%, implying that the simple PRM is accurate enough for the analysis of long-term performance metrics (such as throughput and delay).

Fig. 8: Accuracy of IBM and PRM under deterministic channel and directional communications. r_PRM = ζ^{−1/α}, where ζ is given in (26), and r_IBM = 2r_PRM. The relative difference between the average per-user rate computed by the PRM and that computed by the PhyM is less than 0.002%.
Fig. 8, together with Results 4–7, supports the validity of the previously proposed pseudo-wired model [14], at least for sparse networks such as mmWave mesh networks [55]. This highlights the importance of having quantitative (not only qualitative) insight into the accuracy of the different interference models we may face in different wireless networks. Thereby, we can adopt a simple yet sufficiently accurate model for link-level and system-level performance analysis.
So far, we have observed how we can simplify the set of dominant interferers and how much accuracy loss these simplifications entail under three network scenarios. Besides the set of interferers I, computing the SINR expression requires modeling the wireless channel and the antenna patterns. More accurate models generally reduce the tractability of the SINR expression and therefore of the interference model. In the next section, we analyze the possibility of adopting simple models for the wireless channel and for the antenna pattern.
VI. EXAMPLE SCENARIO 4: IMPACT OF OTHER COMPONENTS OF THE SINR EXPRESSION
In this section, we analyze the accuracy loss due to simplifying the wireless channel model and the antenna pattern in the SINR expression. In particular, we use the proposed accuracy index to investigate the feasibility of modeling a random fading channel with a constant value without
affecting the long-term performance of the real system (with random fading).

Fig. 9: Impact of modeling a fading channel by a deterministic one on the accuracy of the resulting interference model (d_t = 80 m).

The importance of this scenario stems from the fact that numerous studies develop protocols and optimize the network based on deterministic wireless channels, yet no study focuses on the accuracy and validity of
this underlying model. In the following, we comment on what this deterministic channel gain
should be to maximize its similarity to the actual random wireless channel. We then use the
proposed accuracy index to assess the impact of neglecting the reflections, assuming impenetrable
obstacles, and neglecting sidelobe gain of the directional antenna on the accuracy of the resulting
interference model. We consider the PhyM for both x and y throughout this section.
A. Approximating a Fading Channel with a Deterministic One
To design many protocols for wireless networks (such as power control, scheduling, and routing), it is often preferable to use deterministic channel gains that depend only on the distances among the transmitters and receivers [14], [17], [36]–[38]. In this subsection, we investigate the accuracy of approximating the fading gain between transmitter i and the reference receiver (h_i) in y by a deterministic value c_0 in x. After this approximation, the channel gain in x becomes g_i^{Ch} = c c_0 d_i^{−α}, and all other parameters of x are identical to those of y. For the sake of simplicity, we consider omnidirectional communications without blockage, as in Section III.
Using the same simulation setup as in Section III, we numerically find the c_0 in x that gives the highest similarity between x and y, averaged over all β ∈ [0, 10] dB. Fig. 9 shows the accuracy index, obtained with the optimal c_0, for Rayleigh and Nakagami fading. Moreover, we report in Table I the Bhattacharyya coefficient between the SINR distribution of x and that of y, and also the relative difference in the corresponding average throughput.

TABLE I: Accuracy of the mathematical analysis when we replace fading channels with a deterministic one (d_t = 80 m). "AI" refers to our accuracy index, shown also in Fig. 9, "BC" refers to the Bhattacharyya coefficient of the SINR distributions of x and y, and "TD" refers to the deviation of the throughput obtained by interference model x from that of y.

  Fading type         Metric   α = 2    α = 3    α = 4    α = 5
  Rayleigh            AI       0.68     0.881    0.939    0.956
                      BC       0.275    0.048    0.014    0.005
                      TD       13%      9.3%     6.7%     4.5%
  Nakagami (m = 3)    AI       0.951    0.985    0.995    0.998
                      BC       0.01     0.004    0.003    0.002
                      TD       5.8%     4.1%     3.2%     2%
  Nakagami (m = 9)    AI       0.997    0.9991   0.9996   0.9999
                      BC       0.001    0.0008   0.0006   0.0003
                      TD       1.4%     1%       0.7%     0.3%

From Fig. 9 and Table I, interference
model x (with the deterministic channel) becomes more similar to y (with the fading channel) as the path-loss exponent grows. This higher similarity manifests itself in higher accuracy indices, lower Bhattacharyya coefficients, and lower errors in the rate analysis. Moreover, approximating a random wireless channel gain with Rayleigh fading and a small path-loss exponent (outdoor environment) by a constant value (Footnote 6) may lead to non-negligible inaccuracy in the final throughput analysis (up to 13% error in our example). However, a Nakagami-m fading channel with high m can be well approximated by a deterministic channel gain, substantially simplifying the mathematical analysis and protocol development. The error due to this approximation reduces with m. To highlight the importance of this observation, we note that directional communications will be widely applied in future wireless networks [56]; therefore, Nakagami-m fading channels will play a major role in those networks. For mmWave communications, for instance, we are already using narrow beams [3], [55], which result in a high m in the corresponding Nakagami-m fading channel. The following conjecture
states how we can approximate a Nakagami-m fading channel by a deterministic channel gain.
Footnote 6: We observed in our simulations that c_0 = E_h[h^{2/α}] = Γ(1 + 2/α) is roughly the optimal constant that provides the highest similarity index for a Rayleigh fading channel. Notice that it is the (2/α)-th moment of the random variable h.
Conjecture 1. Consider a 2D network. Assume that the wireless channel attenuation consists of a constant attenuation at a reference distance, a distance-dependent attenuation with path-loss exponent α, and a random fading h. If h has a Nakagami-m distribution with m ≥ 3, the wireless
channel can be well approximated by a deterministic LoS channel without a significant drop in the accuracy of the resulting interference model or in the analysis of ergodic performance metrics such as spectral efficiency, energy efficiency, throughput, and delay. If h has a Rayleigh fading distribution, replacing h by its (2/α)-th moment, namely E_h[h^{2/α}], results in a sufficiently accurate analysis of the ergodic performance metrics.
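Conjecture 1 and Footnote 6 can be probed numerically by replacing the per-link fading variables with the constant c_0 = Γ(1 + 2/α) and comparing an ergodic metric under the two channel models. The Monte Carlo sketch below compares E[log₂(1 + SINR)] for a Poisson field of interferers; the finite window, the near-field clipping at 1 m, and all numeric values are our own simplifying assumptions.

```python
import numpy as np
from math import gamma

def ergodic_rate(alpha, fading, d0=20.0, lam_t=1/80**2, p=1.0, c=1.0,
                 sigma=1e-9, window=1000.0, n_drops=5000, seed=2):
    """E[log2(1 + SINR)] for a Poisson field of interferers, with either
    Rayleigh fading on every link ('rayleigh') or the constant surrogate gain
    c0 = Gamma(1 + 2/alpha) ('constant') suggested by Conjecture 1 / Footnote 6."""
    rng = np.random.default_rng(seed)
    c0 = gamma(1 + 2.0 / alpha)
    rates = np.empty(n_drops)
    for i in range(n_drops):
        n = rng.poisson(lam_t * np.pi * window**2)
        r = np.maximum(window * np.sqrt(rng.uniform(size=n)), 1.0)  # no singularity
        if fading == "rayleigh":
            h0, h = rng.exponential(1.0), rng.exponential(1.0, n)
        else:                                   # deterministic surrogate channel
            h0, h = c0, np.full(n, c0)
        sinr = p * c * h0 * d0**-alpha / (np.sum(p * c * h * r**-alpha) + sigma)
        rates[i] = np.log2(1 + sinr)
    return float(rates.mean())

# the two values should be close for alpha = 2, per Conjecture 1 / Footnote 6
print(ergodic_rate(2.0, "rayleigh"), ergodic_rate(2.0, "constant"))
```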
In the following, we further exemplify the proposed index to investigate the accuracy drop due
to simplifying other parameters of the SINR expression.
B. Other Components of the SINR
In this subsection, we focus on mmWave networks and propose a very simple yet accurate interference model. In particular, we consider a PRM wherein we assume that (i) obstacles are impenetrable, (ii) there are no reflections, and (iii) there are no sidelobe transmissions/receptions. Although these assumptions do not generally hold in practice, we show that this simple interference model can be a very accurate abstraction of real mmWave networks. Previously, references [14], [38] used this interference model for performance evaluation and protocol development for mmWave networks. Therefore, the discussions of this subsection are a complementary study to those works.
We consider a random number of obstacles in the environment, each with penetration loss l_o. The obstacles are assumed to have a rectangular shape whose centers follow a spatial Poisson distribution with density λ_o on the plane, independent of the Poisson process of the interferers. To each rectangle, we associate a random width drawn independently and uniformly from [0, 4] meters, a random length drawn independently and uniformly from [0, 3] meters, and a random orientation drawn independently and uniformly from [0, 2π]. The obstacles can represent small buildings, human bodies, and cars. We independently and randomly mark some obstacles as reflectors with reflection coefficient r ≤ 1. Without loss of generality, we mark the obstacles as reflectors with probability 0.1. We also assume that the links can be established either by the direct path or by a first-order reflected path. We consider a large-scale path-loss model at 28 GHz [2], which consists of a constant attenuation, a distance-dependent attenuation, and a
large-scale log-normal fading. Besides these attenuation sources, we consider the penetration and reflection losses.

TABLE II: Effects of assuming infinite penetration loss, no reflection, and no sidelobe gain on the accuracy of the resulting interference model. The shown parameters are for reference model y. The SINR threshold is β = 5 dB, d_t = 1/√λ_t, and d_o = 1/√λ_o.

  Experiment   l_o [dB]   r      z [dB]   θ      d_t [m]   d_o [m]   Accuracy
  1            10         0.63   -10      20°    50        20        0.9998
  2            10         0.74   -10      40°    30        20        0.9992
  3            20         0.9    -10      40°    50        50        0.9993
  4            10         0.74   -10      20°    50        50        0.9614
  5            20         0.74   -10      20°    30        50        0.9856
  6            20         0.74   -10      20°    30        20        0.9588
  7            15         0.74   -5       20°    50        20        0.9235
  8            15         0.74   -5       20°    20        50        0.7090
  9            15         0.74   -10      40°    30        20        0.9311
  10           25         0.9    -10      10°    30        30        0.8810
  11           15         0.63   -15      30°    50        50        0.9473
  12           15         0.74   -10      20°    100       50        0.9718

Consider path k between transmitter i and the reference receiver. Let d_ik
be the distance of this path (path length), n_k the number of obstacles in this path, l_o the penetration loss due to each obstacle in dB, and l_r = −10 log(r) the reflection loss in dB. Let 1_k denote an indicator function that takes the value 1 if path k contains a reflector, and 0 otherwise. Then, the channel gain of the k-th path between transmitter i and the reference receiver is modeled as
\[
g_{ik}^{\mathrm{Ch}}\,[\mathrm{dB}] = -61.4 - 20\log(d_{ik}) + 1_k\, 10\log(r) - n_k\, l_o - X\,,
\tag{27}
\]
where X is a zero-mean i.i.d. Gaussian random variable with standard deviation 5.8 [2]. Note that the atmospheric absorption is almost negligible (0.15 dB/km) at 28 GHz [46]. Moreover, changing the carrier frequency will change the parameters of channel model (27) without affecting the generality of the results of this subsection. Again, we consider the ideal sector antenna pattern, formulated in (15), at all transmitters and receivers.
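As a companion to (27) and (15), the following sketch assembles the received power of a single path in dB; the obstacle count, reflection flag, and link parameters in the example are placeholders, not values taken from the paper.

```python
import numpy as np

def channel_gain_db(d, n_obstacles, has_reflection, l_o, r_coef, rng):
    """Channel gain of one path at 28 GHz following Eq. (27): constant attenuation,
    distance-dependent attenuation, first-order reflection loss, penetration
    losses, and log-normal shadowing."""
    shadowing = rng.normal(0.0, 5.8)          # large-scale log-normal fading [2]
    return (-61.4 - 20*np.log10(d)
            + (10*np.log10(r_coef) if has_reflection else 0.0)
            - n_obstacles * l_o - shadowing)

def mainlobe_gain_db(theta, z=0.0):
    """Ideal sector antenna main-lobe gain, Eq. (15), in dB."""
    return 10*np.log10((2*np.pi - (2*np.pi - theta)*z) / theta)

# illustrative link budget of a 20 m LoS link with 20-degree beams and 20 dBm power
rng = np.random.default_rng(3)
theta = np.deg2rad(20.0)
rx_power_dbm = (20.0
                + 2*mainlobe_gain_db(theta)   # aligned Tx and Rx beams
                + channel_gain_db(20.0, 0, False, l_o=20.0, r_coef=0.74, rng=rng))
```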
We consider a realistic reference physical model y with finite penetration loss (l_o < ∞), first-order reflections (r > 0), and non-zero antenna side lobes (z > 0). We execute several experiments in which we change the type of the reflectors, the type of the obstacles, the side lobe gain, the operating beamwidth, and the average numbers of interferers and obstacles. We execute four sets of experiments. For each experiment, we compute the average accuracy index over 10^5 random topologies and report the results in Table II.
Effects of the no-reflection assumption: In the first set of experiments (1–3), we consider three materials for the reflectors: drywall with reflection coefficient 0.63, clear glass with reflection coefficient 0.74, and tinted glass with reflection coefficient 0.9 [57]. All parameters of interference models x and y are identical (reported in the table), except that r = 0 in x. From Table II, the accuracy index is near 1 for all scenarios. The accuracy marginally decreases with the density of the transmitters, yet it is high enough for typical transmitter densities (d_t > 30 m in downlink cellular networks). Increasing the operating beamwidth has a similar effect to increasing the transmitter density.
Effects of the infinite penetration loss assumption: In the second set of experiments (4–6), we consider different penetration losses for the obstacles. All parameters of x and y are identical (reported in the table), except that l_o = ∞ in x. From the table, the accuracy index decreases with the density of the obstacles, as more obstacles correspond to a larger source of error in x. The assumption of impenetrable obstacles is more accurate for higher penetration loss values. Moreover, denser mmWave networks (d_t = 30) are less sensitive to assuming infinite penetration loss. The main reason is that densifying the network increases the probability of having interferers in LoS condition with respect to the reference receiver. The contribution of those non-blocked interferers to the aggregate interference dominates that of the blocked interferers.
Effects of no side lobe gains assumption: In the third set of experiments (7–9), we investigate
the impact of neglecting antenna side lobes z in interference model x. All parameters of x and y
are similar (reported in the table), except that z = 0 in x. As expected, neglecting a higher z lowers
the accuracy of x, and this error increases also with the number of interferers in the network.
Unlike the previous parameters, neglecting side lobe gain may lead to a large deviation of x from
y. From the numerical results, not shown in this paper due to the space limitations, if we have
either a typical dense network with dt = 30 or enough side lobe suppression (at least 10 dB),
we are safe to ignore side lobe gains from the interference model. Increasing the operating
beamwidth increases the chance of observing an aligned interferer (which contributes to the
link budget with its main lobe gain). As such interferers have a dominant role in the aggregated
interference term, increasing the operating beamwidth can improve the accuracy of x.
(Footnote: using scheduling, we can reduce mutual interference by controlling the number of simultaneously
active transmitters. Therefore, the number of transmitters in the environment is not necessarily equal to the
number of interferers; rather, it is usually much higher than that.)
Joint effects of all parameters: In the last set of experiments (10–12), we analyze the joint effects of
all those parameters by considering infinite penetration loss, zero reflection coefficient, and zero
side lobe gain in x. Other parameters of x are similar to those of y, reported in Table II. From
the results, our simple interference model x is sufficiently accurate for typical mmWave network
scenarios. On the negative side, a larger number of interferers magnifies the small error due to
neglecting antenna side lobes. This magnified error, together with other approximations, leads
to a 12% error in detecting the outage event by the simplified interference model in Experiment 10.
On the positive side, this higher transmitter density reduces the error due to both neglecting
reflection and assuming impenetrable obstacles.
VII. FUTURE DIRECTIONS
Throughout this paper, we highlighted the tradeoff between the accuracy and mathematical
tractability of the interference models, and exemplified the use of our accuracy index to optimize such tradeoff for different wireless network scenarios, with specific reference to mmWave
networks. Although we have simplified system models of the examples to avoid unnecessary
complications, our index poses no limitation to these example scenarios. We have recently
used this index to assess the accuracy of a simple interference model for a mmWave cellular
network [45]. Two future directions can be envisioned from this paper.
First, one may use our accuracy index to simplify the existing and develop new interference
models for various network settings. In particular, illustrative examples of this paper were
more suitable for ad hoc networks, and evaluating the generality of the resulting insights is
an interesting future research line. Moreover, our proposed index can be used to assess the
accuracy of different blockage models like one-ball [53], two-ball [4], cone [41], and queue-based models [58], and even to develop novel accurate yet tractable models.
Second, we can extend the index itself. In this paper, we have defined the similarity index for any
interference model x based on its ability to correctly predict the outage events; see Definition 1.
To generalize our approach, one may aim at measuring the similarity based on any other functions
of SINR. For example, given some alternatives for one function inside the SINR expression (e.g., a different set
of interferers or different antenna models), one may use an extension of our approach to identify
which of them better balances the accuracy-complexity tradeoff for a throughput/delay analysis.
VIII. CONCLUSION
We developed a new mathematical framework to address very fundamental questions in analysis
and design of wireless networks: how accurate different interference models are and how to select
the right one. We proposed a new accuracy index that quantifies the ability of any interference
model in correctly predicting outage events, under any network setting. We analytically and
numerically illustrated the use of our index via many example scenarios. In particular, we
evaluated the accuracy of the prominent techniques that model the set of dominant interferers.
We then showed that directional antenna and obstructions (basic characteristics of mmWave
networks) substantially enhance the accuracy of any interference model, making the simple
classical protocol model accurate enough for analysis and optimization of such networks. Furthermore, we measured the accuracy of approximating a random fading wireless channel with
a deterministic channel. We conjectured that a Nakagami-m fading channel with m ≥ 3 can
be well approximated by a deterministic value without introducing a significant gap in the
ergodic performance metrics (e.g., throughput and delay), whereas such a gap is generally non-negligible under Rayleigh fading channels. Finally, we showed the surprisingly high accuracy of a
simple interference model that assumes (i) infinite penetration loss, (ii) no reflection, and (iii)
no antenna side lobes in modeling a typical mmWave network where none of those assumptions
hold.
APPENDIX A: DERIVING COMPONENTS OF EXAMPLE SCENARIO 1

A. The Interference Ball Model

Let Eh denote expectation over random variable h. From (5) and (6),

  Pr[γ^IBM < β] = Pr[ pc h0 d0^{−α} / ( Σ_{k∈I∩B(2π,0,rIBM)} pc hk dk^{−α·1B(2π,0,a)} + σ ) < β ]

  = 1 − Pr[ h0 ≥ β d0^α Σ_{k∈I∩B(2π,0,rIBM)} hk dk^{−α·1B(2π,0,a)} + σβd0^α/(pc) ]

  = 1 − E_{I,h} exp( −β d0^α Σ_{k∈I∩B(2π,0,rIBM)} hk dk^{−α·1B(2π,0,a)} − σβd0^α/(pc) )

  = 1 − exp( −σβd0^α/(pc) ) · E_I Π_{k∈I∩B(2π,0,rIBM)} Eh exp( −β d0^α hk dk^{−α·1B(2π,0,a)} ) ≜ 1 − exp( −σβd0^α/(pc) ) · A .      (28)

From probability generating functionals, we have

  A = exp( −2πλt ∫_0^∞ 1B(2π,0,rIBM)(r) [ 1 − Eh e^{−βd0^α h r^{−α·1B(2π,0,a)}} ] r dr )

    = exp( −2πλt Eh ∫_0^∞ 1B(2π,0,rIBM)(r) ( 1 − e^{−βd0^α h r^{−α·1B(2π,0,a)}} ) r dr )

    = exp( −2πλt Eh [ ∫_0^a ( 1 − e^{−βd0^α h} ) r dr + ∫_a^{rIBM} ( 1 − e^{−βd0^α h r^{−α}} ) r dr ] )

  (⋆)= exp( −πλt Eh [ a² ( 1 − e^{−βd0^α h} ) + rIBM² ( 1 − e^{−βd0^α h rIBM^{−α}} ) − a² ( 1 − e^{−βd0^α h a^{−α}} )
        + (βd0^α h)^{2/α} Γ( 1 − 2/α , βd0^α h rIBM^{−α} ) − (βd0^α h)^{2/α} Γ( 1 − 2/α , βd0^α h a^{−α} ) ] ) ,      (29)

where Γ(·,·) is the incomplete Gamma function, (⋆) is derived using integration by parts, and the
probability density function of h is fh(x) = e^{−x}. To find Pr[γ^PhyM < β], we only need to
evaluate Pr[γ^IBM < β] at rIBM → ∞, that is,

  Pr[γ^PhyM < β] = lim_{rIBM→∞} Pr[γ^IBM < β]

  = 1 − exp( −σβd0^α/(pc) − πλt Eh [ a² ( 1 − e^{−βd0^α h} ) − a² ( 1 − e^{−βd0^α h a^{−α}} )
        + (βd0^α h)^{2/α} Γ( 1 − 2/α ) − (βd0^α h)^{2/α} Γ( 1 − 2/α , βd0^α h a^{−α} ) ] ) ,      (30)

where Γ(·) is the Gamma function.
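As a sanity check, (30) can be evaluated numerically. The sketch below is our own code with assumed parameter names; it requires α > 2 (so that 1 − 2/α > 0), draws exponential samples for h, and averages the bracketed term inside the exponent.

```python
import numpy as np
from scipy.special import gamma, gammaincc

def upper_incomplete_gamma(s, x):
    """Unregularized upper incomplete gamma function Γ(s, x), s > 0."""
    return gamma(s) * gammaincc(s, x)

def outage_prob_phym(beta, d0, alpha, a, lam_t, sigma, pc, n_mc=200_000, seed=0):
    """Monte Carlo evaluation of Pr[γ_PhyM < β] in (30): the expectation over
    h ~ Exp(1) sits inside the exponent, so the bracket is averaged first."""
    rng = np.random.default_rng(seed)
    h = rng.exponential(1.0, n_mc)
    c = beta * d0**alpha * h                        # β d0^α h
    s = 1.0 - 2.0 / alpha
    bracket = (a**2 * (1.0 - np.exp(-c))
               - a**2 * (1.0 - np.exp(-c * a**(-alpha)))
               + c**(2.0 / alpha) * (gamma(s) - upper_incomplete_gamma(s, c * a**(-alpha))))
    noise_term = sigma * beta * d0**alpha / pc
    return 1.0 - np.exp(-noise_term - np.pi * lam_t * bracket.mean())
```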
B. The Protocol Model
Event γ^PRM ≥ β implies that there is no interferer inside B(2π, 0, rPRM). Assuming rPRM ≥ a
and d0 ≥ a, and following similar steps as in (28) and (29), we have

  Pr[ γ^PhyM < β | γ^PRM ≥ β ] = 1 − exp( −2πλt Eh ∫_0^∞ 1B(2π,rPRM,∞)(r) ( 1 − e^{−βd0^α h r^{−α·1B(2π,0,a)}} ) r dr − σβd0^α/(pc) )

  = 1 − exp( −σβd0^α/(pc) − 2πλt Eh ∫_{rPRM}^∞ ( 1 − e^{−βd0^α h r^{−α}} ) r dr )

  = 1 − exp( −σβd0^α/(pc) − πλt Eh [ −rPRM² ( 1 − e^{−βd0^α h rPRM^{−α}} ) + (βd0^α h)^{2/α} Γ( 1 − 2/α )
        − (βd0^α h)^{2/α} Γ( 1 − 2/α , βd0^α h rPRM^{−α} ) ] ) .      (31)

APPENDIX B: DERIVING COMPONENTS OF EXAMPLE SCENARIO 2
A. The Interference Ball Model
We have

  Pr[γ^IBM < β] = Pr[ (2π/θ)² pc h0 d0^{−α} / ( Σ_{k∈I∩B(θ,0,rIBM)} (2π/θ)² pc hk dk^{−α·1B(2π,0,a)} + σ ) < β ]

  = 1 − E_{I,h} exp( −β d0^α Σ_{k∈I∩B(θ,0,rIBM)} hk dk^{−α·1B(2π,0,a)} − σθ²βd0^α/(4pcπ²) )

  = 1 − exp( −σθ²βd0^α/(4pcπ²) ) · E_I Π_{k∈I∩B(θ,0,rIBM)} Eh exp( −βd0^α hk dk^{−α·1B(2π,0,a)} ) ≜ 1 − exp( −σθ²βd0^α/(4pcπ²) ) · B ,      (32)

where

  B = exp( −∫_0^∞ 1B(θ,0,rIBM)(r) [ 1 − Eh e^{−βd0^α h r^{−α·1B(2π,0,a)}} ] θ λI(r) r dr )

    = exp( −(θ²/2π) λt Eh ∫_0^∞ 1B(θ,0,rIBM)(r) ( 1 − e^{−βd0^α h r^{−α·1B(2π,0,a)}} ) e^{−ǫλo r} r dr )

    = exp( −(θ²/2π) λt Eh [ ∫_0^a ( 1 − e^{−βd0^α h} ) e^{−ǫλo r} r dr + ∫_a^{rIBM} ( 1 − e^{−βd0^α h r^{−α}} ) e^{−ǫλo r} r dr ] )

    = exp( −(θ²/2π) λt Eh [ ( 1 − e^{−βd0^α h} ) ( 1 − (ǫλo a + 1) e^{−ǫλo a} ) / (ǫ²λo²) + ∫_a^{rIBM} ( 1 − e^{−βd0^α h r^{−α}} ) e^{−ǫλo r} r dr ] ) .      (33)

Also, we have

  Pr[γ^PhyM < β] = lim_{rIBM→∞} Pr[γ^IBM < β]

  = 1 − exp( −σθ²βd0^α/(4pcπ²) − (θ²λt/2π) Eh [ ( 1 − e^{−βd0^α h} ) ( 1 − (ǫλo a + 1) e^{−ǫλo a} ) / (ǫ²λo²)
        + ∫_a^∞ ( 1 − e^{−βd0^α h r^{−α}} ) e^{−ǫλo r} r dr ] ) .      (34)
B. The Protocol Model
We have

  Pr[ γ^PhyM < β | γ^PRM ≥ β ] = 1 − exp( −Eh θ ∫_0^∞ 1B(θ,rPRM,∞)(r) ( 1 − e^{−βd0^α h r^{−α·1B(2π,0,a)}} ) λI(r) r dr − σθ²βd0^α/(4pcπ²) )

  = 1 − exp( −σθ²βd0^α/(4pcπ²) − (θ²λt/2π) Eh ∫_{rPRM}^∞ ( 1 − e^{−βd0^α h r^{−α}} ) e^{−ǫλo r} r dr ) .      (35)

APPENDIX C: DERIVING BOUNDS FOR EXAMPLE SCENARIO 3
In this appendix, we derive bounds on the miss-detection probability in Example Scenario 3.
A. The Interference Ball Model
To derive an upper bound on the miss-detection probability, we substitute a lower bound of
Pr[γ^IBM < β] and an upper bound of Pr[γ^PhyM < β] into (8) of the main manuscript. Recall the
definition of ζ in (26). For any real positive τ,

  Pr[γ^IBM < β] (⋆)= Pr[ Σ_{k∈I} dk^{−α·1B(2π,0,a)} 1B(θ,0,rIBM) > ζ ] = 1 − Pr[ Σ_{k∈I} dk^{−α·1B(2π,0,a)} 1B(θ,0,rIBM) ≤ ζ ]

  (⋆⋆)≥ 1 − inf_{τ>0} { e^{τζ} E_I exp( −τ Σ_{k∈I} dk^{−α·1B(2π,0,a)} 1B(θ,0,rIBM) ) } ≜ 1 − inf_{τ>0} e^{τζ} · C ,      (36)

where (⋆) is due to (6) in the main manuscript, and (⋆⋆) follows from the Chernoff bound and
the probability generating functionals. Here,

  C = exp( −θ ∫_0^{rIBM} ( 1 − e^{−τ r^{−α·1B(2π,0,a)}} ) λI(r) r dr )

    = exp( −(θ²/2π) λt [ ∫_0^a ( 1 − e^{−τ} ) e^{−ǫλo r} r dr + ∫_a^{rIBM} ( 1 − e^{−τ r^{−α}} ) e^{−ǫλo r} r dr ] )

    = exp( −(θ²λt/2π) [ ( 1 − e^{−τ} + (1 + ǫλo a) e^{−ǫλo a − τ} − (1 + ǫλo rIBM) e^{−ǫλo rIBM} ) / (ǫ²λo²)
        − ∫_a^{rIBM} e^{−ǫλo r − τ r^{−α}} r dr ] ) .      (37)

Using a similar technique, we use the Chernoff bound to find an exponentially decreasing bound on
the tail distribution of γ^PhyM as

  Pr[γ^PhyM < β] = Pr[ Σ_{k∈I} dk^{−α·1B(2π,0,a)} 1B(θ,0,∞) > ζ ]

  ≤ inf_{τ>0} { e^{−τζ} E_I exp( τ Σ_{k∈I} dk^{−α·1B(2π,0,a)} 1B(θ,0,∞) ) }

  = inf_{τ>0} exp( −τζ − (θ²λt/2π) [ ( 1 − e^{τ} + (1 + ǫλo a) e^{−ǫλo a + τ} ) / (ǫ²λo²) − ∫_a^∞ e^{−ǫλo r + τ r^{−α}} r dr ] ) .      (38)
Note that bounds (36) and (38) are derived using the Chernoff bound. However, easier but looser
bounds can be found using Markov and Chebyshev inequalities. These bounds can be readily
derived by direct application of Campbell's theorem [25].
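As an illustration, the Chernoff bound in (38) can be evaluated by a one-dimensional minimisation over τ. The sketch below is our own and assumes the reconstructed form of (38) above, with parameter names chosen for readability.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def chernoff_bound_phym(zeta, theta, lam_t, lam_o, eps, a, alpha):
    """Numerical Chernoff upper bound (38) on Pr[γ_PhyM < β], minimised over τ > 0
    (the search is done over log τ for numerical stability)."""
    k = theta**2 * lam_t / (2.0 * np.pi)

    def log_bound(tau):
        integral, _ = quad(lambda r: r * np.exp(-eps * lam_o * r + tau * r**(-alpha)), a, np.inf)
        bracket = ((1.0 - np.exp(tau) + (1.0 + eps * lam_o * a) * np.exp(-eps * lam_o * a + tau))
                   / (eps * lam_o)**2 - integral)
        return -tau * zeta - k * bracket

    res = minimize_scalar(lambda t: log_bound(np.exp(t)), bounds=(-10.0, 5.0), method="bounded")
    return float(np.exp(log_bound(np.exp(res.x))))
```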
B. The Protocol Model
We can also find bounds and scaling laws for the accuracy index of the PRM in Example Scenario 3. To this end, we use the following proposition.

Proposition 3. For ξ = Pr[γ^PhyM ≥ β], we have

  max{ Pr[γ^IBM < β] , Pr[γ^PhyM ≥ β] } ≤ S_{β,ξ}(IBM‖PhyM) ≤ 1 .      (39)

Also, for any 0 < rPRM ≤ ζ^{−1/α}, where ζ is defined in (26) of the manuscript, we have

  max{ Pr[γ^PRM < β] , Pr[γ^PhyM ≥ β] } ≤ S_{β,ξ}(PRM‖PhyM) ≤ 1 ,      (40)

where

  Pr[γ^PRM < β] = 1 − exp( −(θ²λt/(2πǫ²λo²)) ( 1 − (1 + ǫλo rPRM) e^{−ǫλo rPRM} ) ) .

Proof: For the IBM, the upper bound is trivial. To derive the lower bound, from (3) of the
manuscript we have

  S_{β,ξ}(IBM‖PhyM) = 1 − ξ p_fa^{IBM|PhyM} − (1 − ξ) p_md^{IBM|PhyM} (⋆)= 1 − (1 − ξ) p_md^{IBM|PhyM} (⋆⋆)= Pr[γ^PhyM ≥ β] + Pr[γ^IBM < β] ,      (41)

where (⋆) is because p_fa^{IBM|PhyM} = 0 for any rIBM ≥ 0, and (⋆⋆) is due to (8) of the manuscript.
Then, (39) follows.

To derive the lower bound for the PRM, we first note that Pr[γ^PhyM < β | γ^PRM ≥ β] ≤ Pr[γ^PhyM < β]
for any rPRM > 0. Therefore, from (12) of the manuscript,

  p_md^{PRM|PhyM} ≤ 1 − Pr[γ^PRM < β] = exp( −(θ²λt/(2πǫ²λo²)) ( 1 − (1 + ǫλo rPRM) e^{−ǫλo rPRM} ) ) .      (42)

Then, from (3) we have

  S_{β,ξ}(PRM‖PhyM) = 1 − ξ p_fa^{PRM|PhyM} − (1 − ξ) p_md^{PRM|PhyM} (⋆)= 1 − (1 − ξ) p_md^{PRM|PhyM} ≥ 1 − p_md^{PRM|PhyM}

  (⋆⋆)≥ Pr[γ^PRM < β] = 1 − exp( −(θ²λt/(2πǫ²λo²)) ( 1 − (1 + ǫλo rPRM) e^{−ǫλo rPRM} ) ) ,      (43)

where (⋆) is because p_fa^{PRM|PhyM} = 0 for any rPRM ≤ ζ^{−1/α}, where ζ is defined in Eqn. (27) of
the manuscript (see Result 7 of the manuscript), and (⋆⋆) is due to Eqn. (42). Moreover,

  S_{β,ξ}(PRM‖PhyM) = 1 − (1 − ξ) p_md^{PRM|PhyM} (⋆)= 1 − Pr[γ^PRM ≥ β] Pr[γ^PhyM < β | γ^PRM ≥ β]

  ≥ 1 − Pr[γ^PhyM < β | γ^PRM ≥ β] ≥ 1 − Pr[γ^PhyM < β] ,      (44)

where (⋆) is from (12). Combining (43) and (44) results in (40).
REFERENCES
[1] H. Shokri-Ghadikolaei, C. Fischione, and E. Modiano, “On the accuracy of interference models in wireless communications,” in Proc. IEEE International Conference on Communications (ICC), 2016.
[2] M. Akdeniz, Y. Liu, M. Samimi, S. Sun, S. Rangan, T. Rappaport, and E. Erkip, “Millimeter wave channel modeling and
cellular capacity evaluation,” IEEE J. Sel. Areas Commun., vol. 32, no. 6, pp. 1164–1179, Jun. 2014.
[3] H. Shokri-Ghadikolaei, C. Fischione, G. Fodor, P. Popovski, and M. Zorzi, “Millimeter wave cellular networks: A MAC
layer perspective,” IEEE Trans. Commun., vol. 63, no. 10, pp. 3437–3458, Oct. 2015.
[4] M. Di Renzo, “Stochastic geometry modeling and analysis of multi-tier millimeter wave cellular networks,” IEEE Trans.
Wireless Commun., vol. 14, no. 9, pp. 5038–5057, Sept. 2015.
[5] A. Ephremides, J. E. Wieselthier, and D. J. Baker, “A design concept for reliable mobile radio networks with frequency
hopping signaling,” Proc. IEEE, vol. 75, no. 1, pp. 56–73, 1987.
[6] A. Iyer, C. Rosenberg, and A. Karnik, “What is the right model for wireless channel interference?” IEEE Trans. Wireless
Commun., vol. 8, no. 5, pp. 2662–2671, May 2009.
[7] P. Gupta and P. R. Kumar, “The capacity of wireless networks,” IEEE Trans. Inform. Theory, vol. 46, no. 2, pp. 388–404,
Mar. 2000.
[8] B. Liu, Z. Liu, and D. Towsley, “On the capacity of hybrid wireless networks,” in Proc. IEEE International Conference
on Computer Communications (INFOCOM), vol. 2, 2003, pp. 1543–1552.
[9] P. Kyasanur and N. H. Vaidya, “Capacity of multichannel wireless networks under the protocol model,” IEEE/ACM Trans.
Netw., vol. 17, no. 2, pp. 515–527, Apr. 2009.
[10] A. E. Gamal, J. Mammen, B. Prabhakar, and D. Shah, “Throughput-delay trade-off in wireless networks,” in Proc. IEEE
International Conference on Computer Communications (INFOCOM), 2004, pp. 464–475.
[11] A. El Gamal, J. Mammen, B. Prabhakar, and D. Shah, “Optimal throughput-delay scaling in wireless networks–part I: The
fluid model,” IEEE Trans. Inform. Theory, vol. 52, no. 6, pp. 2568–2592, Jun. 2006.
[12] T. Nandagopal, T.-E. Kim, X. Gao, and V. Bharghavan, “Achieving MAC layer fairness in wireless packet networks,” in
Proc. ACM International Conference on Mobile Computing and Networking (MobiCom), 2000, pp. 87–98.
[13] K. Xu, M. Gerla, and S. Bae, “How effective is the IEEE 802.11 RTS/CTS handshake in ad hoc networks?” in Proc. IEEE
Global Communications Conference (GLOBECOM), 2002, pp. 72–76.
[14] S. Singh, R. Mudumbai, and U. Madhow, “Interference analysis for highly directional 60-GHz mesh networks: The case
for rethinking medium access control,” IEEE/ACM Trans. Netw., vol. 19, no. 5, pp. 1513–1527, Oct. 2011.
[15] Y. Xu, H. Shokri-Ghadikolaei, and C. Fischione, “Distributed association and relaying with fairness in millimeterwaves
networks,” IEEE Trans. Wireless Commun., vol. 15, no. 12, pp. 7955–7970, Dec. 2016.
[16] M. K. Marina, S. R. Das, and A. P. Subramanian, “A topology control approach for utilizing multiple channels in multi-radio
wireless mesh networks,” Computer Networks, vol. 54, no. 2, pp. 241–256, Feb. 2010.
[17] T. Stahlbuhk, B. Shrader, and E. Modiano, “Topology control for wireless networks with highly-directional antennas,” in
Proc. IEEE International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt),
2016.
[18] K. Jain, J. Padhye, V. N. Padmanabhan, and L. Qiu, “Impact of interference on multi-hop wireless network performance,”
Wireless Networks, vol. 11, no. 4, pp. 471–487, 2005.
[19] G. D. Celik, G. Zussman, W. F. Khan, and E. Modiano, “MAC for networks with multipacket reception capability and
spatially distributed nodes,” IEEE Trans. Mobile Comput., vol. 9, no. 2, pp. 226–240, Aug. 2010.
[20] S. P. Weber, X. Yang, J. G. Andrews, and G. De Veciana, “Transmission capacity of wireless ad hoc networks with outage
constraints,” IEEE Trans. Inform. Theory, vol. 51, no. 12, pp. 4091–4102, Dec. 2005.
[21] S. P. Weber, J. G. Andrews, X. Yang, and G. De Veciana, “Transmission capacity of wireless ad hoc networks with
successive interference cancellation,” IEEE Trans. Inform. Theory, vol. 53, no. 8, pp. 2799–2814, Aug. 2007.
[22] L. B. Le, E. Modiano, C. Joo, and N. B. Shroff, “Longest-queue-first scheduling under SINR interference model,” in Proc.
ACM International Symposium on Mobile Ad hoc Networking and Computing (MobiHoc), 2010, pp. 41–50.
[23] S. Jafar, “Topological interference management through index coding,” IEEE Trans. Inf. Theory, vol. 60, no. 1, pp. 529–568,
Jan. 2014.
[24] X. Yi and D. Gesbert, “Topological interference management with transmitter cooperation,” IEEE Trans. Wireless Commun.,
vol. 61, no. 11, pp. 6107–6130, Nov. 2015.
[25] M. Haenggi, Stochastic Geometry for Wireless Networks. Cambridge University Press, 2013.
[26] M. Schubert and H. Boche, “Solution of the multiuser downlink beamforming problem with individual SINR constraints,”
IEEE Trans. Veh. Technol., vol. 53, no. 1, pp. 18–28, Jan. 2004.
[27] H. Dahrouj and W. Yu, “Coordinated beamforming for the multicell multi-antenna wireless system,” IEEE Trans. Wireless
Commun., vol. 9, no. 5, pp. 1748–1759, May 2010.
[28] N. N. Moghadam, H. Shokri-Ghadikolaei, G. Fodor, M. Bengtsson, and C. Fischione, “Pilot precoding and combining in
multiuser MIMO networks,” IEEE J. Sel. Areas Commun., vol. 35, no. 7, pp. 1632–1648, Jul. 2017.
[29] M. Sharif and B. Hassibi, “On the capacity of MIMO broadcast channels with partial side information,” IEEE Trans.
Inform. Theory, vol. 51, no. 2, pp. 506–522, Feb. 2005.
[30] F. Baccelli and B. Blaszczyszyn, Stochastic Geometry and Wireless Networks, Volume II–Applications. NOW publishers,
2009.
[31] F. Rashid-Farrokhi, L. Tassiulas, and K. Liu, “Joint optimal power control and beamforming in wireless networks using
antenna arrays,” IEEE Trans. Commun., vol. 46, no. 10, pp. 1313–1324, Oct. 1998.
[32] V. Chandrasekhar, J. G. Andrews, T. Muharemovic, Z. Shen, and A. Gatherer, “Power control in two-tier femtocell
networks,” IEEE Trans. Wireless Commun., vol. 8, no. 8, pp. 4316–4328, Aug. 2009.
[33] H. Q. Ngo, E. G. Larsson, and T. L. Marzetta, “Energy and spectral efficiency of very large multiuser MIMO systems,”
IEEE Trans. Commun., vol. 61, no. 4, pp. 1436–1449, Feb. 2013.
[34] H. Shokri-Ghadikolaei, F. Boccardi, C. Fischione, G. Fodor, and M. Zorzi, “Spectrum sharing in mmWave cellular networks
via cell association, coordination, and beamforming,” IEEE J. Sel. Areas Commun., vol. 34, no. 11, pp. 2902–2917, Nov.
2016.
[35] P. Cardieri, “Modeling interference in wireless ad hoc networks,” IEEE Commun. Surveys Tuts., vol. 12, no. 4, pp. 551–572,
Fourth Quarter 2010.
[36] L. Badia, A. Erta, L. Lenzini, and M. Zorzi, “A general interference-aware framework for joint routing and link scheduling
in wireless mesh networks,” IEEE Netw., vol. 22, no. 1, pp. 32–38, Jan. 2008.
[37] L. Chen, S. H. Low, M. Chiang, and J. C. Doyle, “Cross-layer congestion control, routing and scheduling design in ad
hoc wireless networks,” in Proc. IEEE International Conference on Computer Communications (INFOCOM), Apr. 2006.
[38] S. Singh, F. Ziliotto, U. Madhow, E. Belding, and M. Rodwell, “Blockage and directivity in 60 GHz wireless personal area
networks: From cross-layer model to multihop MAC design,” IEEE J. Sel. Areas Commun., vol. 27, no. 8, pp. 1400–1413,
Oct. 2009.
[39] A. K. Gupta, J. G. Andrews, and R. W. Heath, “On the feasibility of sharing spectrum licenses in mmwave cellular
systems,” IEEE Trans. Commun., vol. 64, no. 9, pp. 3981–3995, Jul. 2016.
[40] J. Dams, M. Hoefer, and T. Kesselheim, “Scheduling in wireless networks with Rayleigh-fading interference,” IEEE Trans.
Mobile Comput., vol. 14, no. 7, pp. 1503–1514, Jul. 2015.
[41] H. Shokri-Ghadikolaei and C. Fischione, “The transitional behavior of interference in millimeter wave networks and its
impact on medium access control,” IEEE Trans. Commun., vol. 62, no. 2, pp. 723–740, Feb. 2016.
[42] T. Kailath, “The divergence and Bhattacharyya distance measures in signal selection,” IEEE Trans. Commun., vol. 15,
no. 1, pp. 52–60, Feb. 1967.
[43] E. Modiano, D. Shah, and G. Zussman, “Maximizing throughput in wireless networks via gossiping,” in Proc. ACM
SIGMETRICS Performance Evaluation Review, Jun. 2006, pp. 27–38.
[44] P. Di Marco, C. Fischione, F. Santucci, and K. H. Johansson, “Modeling IEEE 802.15.4 networks over fading channels,”
IEEE Trans. Wireless Commun., vol. 13, no. 10, pp. 5366–5381, Oct. 2014.
[45] X. Jiang, H. Shokri-Ghadikolaei, C. Fischione, and Z. Pang, “A simplified interference model for outdoor millimeter wave
networks,” in Proc. EAI International Wireless Internet Conference (EAI WICOM), 2016.
[46] S. Rangan, T. Rappaport, and E. Erkip, “Millimeter wave cellular wireless networks: Potentials and challenges,” Proc.
IEEE, vol. 102, no. 3, pp. 366–385, Mar. 2014.
[47] S. Singh, M. Mudumbai, and U. Madhow, “Distributed coordination with deaf neighbors: Efficient medium access for
60GHz mesh networks,” in Proc. IEEE International Conference on Computer Communications (INFOCOM), 2010.
[48] X. An and R. Hekmat, “Directional MAC protocol for millimeter wave based wireless personal area networks,” in Proc.
IEEE Vehicular Technology Conference (VTC Spring), 2008, pp. 1636–1640.
[49] I. K. Son, S. Mao, M. X. Gong, and Y. Li, “On frame-based scheduling for directional mmWave WPANs,” in Proc. IEEE
International Conference on Computer Communications (INFOCOM), 2012, pp. 2149–2157.
[50] V. Petrov, M. Komarov, D. Moltchanov, J. M. Jornet, and Y. Koucheryavy, “Interference and SINR in millimeter wave and
terahertz communication systems with blocking and directional antennas,” IEEE Trans. Wireless Commun., vol. 16, no. 3,
pp. 1791–1808, Mar. 2017.
[51] J. García-Rois, F. Gómez-Cuba, M. R. Akdeniz, F. J. González-Castaño, J. C. Burguillo, S. Rangan, and B. Lorenzo,
“On the analysis of scheduling in dynamic duplex multihop mmwave cellular systems,” IEEE Trans. Wireless Commun.,
vol. 14, no. 11, pp. 6028–6042, Nov. 2015.
[52] H. Shokri-Ghadikolaei, L. Gkatzikis, and C. Fischione, “Beam-searching and transmission scheduling in millimeter wave
communications,” in Proc. IEEE International Conference on Communications (ICC), 2015, pp. 1292–1297.
[53] T. Bai, R. Vaze, and R. Heath, “Analysis of blockage effects on urban cellular networks,” IEEE Trans. Wireless Commun.,
vol. 13, no. 9, pp. 5070–5083, Sept. 2014.
[54] T. S. Rappaport, G. R. MacCartney, M. K. Samimi, and S. Sun, “Wideband millimeter-wave propagation measurements and
channel models for future wireless communication system design,” IEEE Trans. Commun., vol. 63, no. 9, pp. 3029–3056,
Sept. 2015.
[55] F. Yaghoubi, J. Chen, A. Rostami, and L. Wosinska, “Mitigation of rain impact on microwave backhaul networks,” in
Proc. IEEE International Conference on Communications (ICC) Workshop, 2016.
[56] F. Boccardi, R. Heath, A. Lozano, T. L. Marzetta, and P. Popovski, “Five disruptive technology directions for 5G,” IEEE
Commun. Mag., vol. 52, no. 2, pp. 74–80, Feb. 2014.
[57] H. Zhao, R. Mayzus, S. Sun, M. Samimi, J. K. Schulz, Y. Azar, K. Wang, G. N. Wong, F. Gutierrez, and T. S. Rappaport,
“28 GHz millimeter wave cellular communication measurements for reflection and penetration loss in and around buildings
in New York city,” in Proc. IEEE International Conference on Communications (ICC), 2013, pp. 5163–5167.
[58] R. Congiu, H. Shokri-Ghadikolaei, C. Fischione, and F. Santucci, “On the relay-fallback tradeoff in millimeter wave
wireless system,” in Proc. IEEE International Conference on Computer Communications (INFOCOM) Workshop, 2016,
pp. 622–627.
[Figures (plot data omitted in this text version):
- Accuracy index vs. SINR threshold [dB] for θ = 10° and θ = 30°, exact and bound.
- Accuracy index vs. obstacle density for θ = 5°, 15°, 30°.
- Detector performance (Pmd, Pfa and upper bounds, accuracy index and bound) vs. transmitter density.
- Accuracy index vs. transmitter density for θ = 10° and θ = 20°, exact and bound.
- Detector performance (Pmd and upper bounds UB1, UB2) vs. operating beamwidth for λt = 0.1, 1, 10.
- Accuracy index vs. average inter-transmitter distance [m] for rPRM = 20, 60, 120 and the optimal rPRM.
- Accuracy index vs. penetration loss [dB] for several (λt, λo) pairs.
- Accuracy index vs. reflection coefficient for several (λt, θ) pairs.
- Accuracy index vs. sidelobe gain [dB] for several (λt, θ, β) settings.
- Accuracy index vs. average inter-transmitter distance [m] for the IBM and PRM at θ = 10°, 30°.
- Accuracy index of the IBM and IRM vs. SINR threshold, interference range, and transmitter density.]
arXiv:1504.00923v1 [cs.CL] 3 Apr 2015
A UNIFIED DEEP NEURAL NETWORK FOR SPEAKER AND LANGUAGE RECOGNITION
Fred Richardson, Douglas Reynolds (MIT Lincoln Laboratory, Lexington, MA)
Najim Dehak (MIT CSAIL, Cambridge, MA)
ABSTRACT
Learned feature representations and sub-phoneme posteriors
from Deep Neural Networks (DNNs) have been used separately to produce significant performance gains for speaker
and language recognition tasks. In this work we show how
these gains are possible using a single DNN for both speaker
and language recognition. The unified DNN approach is
shown to yield substantial performance improvements on
the 2013 Domain Adaptation Challenge speaker recognition
task (55% reduction in EER for the out-of-domain condition)
and on the NIST 2011 Language Recognition Evaluation
(48% reduction in EER for the 30s test condition).
Index Terms: i-vector, DNN, bottleneck features, speaker
recognition, language recognition
(This work was sponsored by the Department of Defense under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States Government.)
1. INTRODUCTION
The impressive gains in performance obtained using deep
neural networks (DNNs) for automatic speech recognition
(ASR) [1] have motivated the application of DNNs to other
speech technologies such as speaker recognition (SR) and
language recognition (LR) [2, 3, 4, 5, 6, 7, 8, 9]. Two general
methods of applying DNN’s to the SR and LR tasks have
been shown to be effective. The first or “direct” method uses
a DNN trained as a classifier for the intended recognition
task. In the direct method the DNN is trained to discriminate between speakers for SR [5] or languages for LR [4].
The second or “indirect” method uses a DNN trained for a
different purpose to extract data that is then used to train a
secondary classifier for the intended recognition task. Applications of the indirect method have used a DNN trained for
ASR to extract frame-level features [2, 3, 10], accumulate a
multinomial vector [7] or accumulate multi-modal statistics
[6, 8] that were then used to train an i-vector system [11, 12].
The unified DNN approach described in this work uses
two of the indirect methods described above. The first indirect method (“bottleneck”) uses frame-level features extracted
from a DNN with a special bottleneck layer [13] and the second indirect method (“DNN-posterior”) uses posteriors extracted from a DNN to accumulate multi-modal statistics [6].
The features and the statistics from both indirect methods are
then used to train four different i-vector systems: one for each
task (SR and LR) and each method (bottleneck and DNNposterior). A key point in the unified approach is that a single
DNN is used for all four of these i-vector systems. Additionally, we will examine the feasibility of using a single i-vector
extractor for both SR and LR.
2. I-VECTOR CLASSIFIER FOR SR AND LR
Over the past 5 years, state-of-the-art SR and LR performance
has been achieved using i-vector based systems [11]. In addition to using an i-vector classifier as a baseline approach for
our experiments, we will also show how phonetic-knowledge
rich DNN feature representations and posteriors can be incorporated into the i-vector classifier framework providing significant performance improvements. In this section we provide a high-level description of the i-vector approach (for a
detailed description see, for example, [11, 14]).
In Figure 1 we show a simplified block diagram of ivector extraction and scoring. An audio segment is first
processed to find the locations of speech in the audio (speech
activity detection) and to extract acoustic features that convey
speaker/language information. Typically, 20-dimensional mel-frequency cepstral coefficients (MFCC) and derivatives are
used for SR, and 56-dimensional static cepstra plus shifted-delta cepstra (SDC) are used for LR, analyzed at 100 feature vectors/second. Using a Universal Background Model
(UBM), essentially a speaker/language-independent Gaussian
mixture model (GMM), the per-mixture posterior probability
of each feature vector (“GMM-posterior”) is computed and
used, along with the feature vectors in the segment, to accumulate zeroth, first, and second order sufficient statistics
(SS). These SSs are then transformed into a low dimensional
i-vector representation (typically 400-600 dimensions) using
a total variability matrix, T. The i-vector is whitened by subtracting a global mean, m, scaled by the inverse square root
of a global covariance matrix, W, and then normalized to unit
length [14]. Finally, a score between a model and test i-vector
is computed. The simplest scoring function is the cosine dis-
tance between the i-vector representing a speaker/language
model (average of i-vectors from the speaker’s/language’s
training segments) and the i-vector representing the test segment. The current state-of-the-art scoring function, called
Probabilistic Linear Discriminant Analysis (PLDA) [14],
requires a within-class matrix Σwc , characterizing how ivectors from a single speaker/language vary, and an across
class matrix Σac , characterizing how i-vectors between different speakers/languages vary.
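As a small illustration of the scoring step, the sketch below whitens and length-normalises an i-vector and computes a cosine-distance score; computing W^{-1/2} via a Cholesky factor of W^{-1} is one possible choice, and all names are ours.

```python
import numpy as np

def whiten_and_normalize(w, m, W):
    """Whiten an i-vector (subtract global mean, scale by an inverse square root of W)
    and project it onto the unit sphere (length normalisation)."""
    L = np.linalg.cholesky(np.linalg.inv(W))   # L L^T = W^{-1}, so L^T(w - m) has identity covariance
    v = L.T @ (w - m)
    return v / np.linalg.norm(v)

def cosine_score(model_ivec, test_ivec):
    """Cosine-distance score between a model i-vector and a test i-vector."""
    return float(model_ivec @ test_ivec /
                 (np.linalg.norm(model_ivec) * np.linalg.norm(test_ivec)))
```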
Collectively, the UBM, T, W, m, Σwc , and Σac are
known as the system’s hyper-parameters and must be estimated before a system can enroll and/or score any data. The
UBM, T, W, and m represent general feature distributions
and total variance of statistics and i-vectors, so unlabeled data
from the desired audio domain (i.e., telephone, microphone,
etc.) can be used to estimate them. The Σwc and Σac matrices, however, each require a large collection of labeled data
for training. For SR, Σwc and Σac typically require thousands
of speakers each of whom contributes tens of samples to the
data set. For LR, the enrollment samples from each desired
languages, which typically hundreds of samples from many
different speakers, can be used to estimate Σwc and Σac .
By far the most computationally expensive part of an ivector system is extracting the i-vectors themselves. An efficient approach for performing both SR and LR on the same
data is to use the same i-vectors. This may be possible if both
systems use the same feature extraction, UBM, and T matrices. There may be some tradeoff in performance however
since the UBM, T matrix, and signal processing will not be
specialized for SR or LR.

Fig. 1. Simplified block diagram of i-vector extraction and scoring.

3. DEEP NEURAL NETWORK CLASSIFIER FOR SPEECH APPLICATIONS

3.1. DNN architecture

Fig. 2. Example DNN architecture
A DNN, like a multi-layer perceptron (MLP), consists of an
input layer, several hidden layers and an output layer. Each
layer has a fixed number of nodes and each sequential pair of
layers are fully connected with a weight matrix. The activations of nodes on a given layer are computed by transforming the output of the previous layer with the weight matrix:
a(i) = M(i) x(i−1) . The output of a given layer is then computed by applying an “activation function” x(i) = h(i) (a(i) )
(see Figure 2). Commonly used activation function include
the sigmoid, the hyperbolic tangent, rectified linear units and
even a simple linear transformation. Note that if all the activation functions in the network are linear then the stacked
matrices reduce to a single matrix multiply.
The type of activation function used for the output layer
depends on what the DNN is used for. If the DNN is trained
as a regression the output activation function is linear and the
objective function is the mean squared error between the output and some target data. If the DNN is trained as a classifier
then the output activation function is the soft-max and the objective function is the cross entropy between the output and
the true class labels. For a classifier, each output node of the
DNN classifier corresponds to a class and the output is an estimate of the posterior probability of the class given the input
data.
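The forward pass described above can be written in a few lines. The sketch below is our own (biases are omitted, matching the notation a^(i) = M^(i) x^(i-1)); it uses sigmoid hidden layers and a soft-max output, so the output vector can be read as class (senone) posterior estimates.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dnn_forward(x, weight_matrices):
    """Forward pass a^(i) = M^(i) x^(i-1), x^(i) = h^(i)(a^(i)): sigmoid hidden layers
    followed by a soft-max output layer that estimates class posteriors."""
    for M in weight_matrices[:-1]:
        x = sigmoid(M @ x)
    return softmax(weight_matrices[-1] @ x)
```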
3.2. DNN Training for ASR
DNN classifiers can be used as acoustic models in ASR systems to compute the posterior probability of a sub-phonetic
unit (a “senone”) given an acoustic observation. Observations, or feature vectors, are extracted from speech data at a
fixed sample rate using a spectral technique such as filterbank
analysis, MFCC, or perceptual linear prediction (PLP) coefficients. Decoding is preformed using a hidden Markov model
(HMM) and the DNN to find the most likely sequence of
senones given the feature vectors (this requires using Bayes’
rule to convert the DNN posteriors to likelihoods). Training the DNN requires a significant amount of manually transcribed speech data [1]. The senone labels are derived from
the transcriptions using a phonetic dictionary and a state-of-the-art GMM/HMM ASR system. Generally speaking, a refined set of phonotactic units aligned using a high-performing
ASR system is required to train a high-performing DNN system [1].
DNN training is essentially the same as traditional MLP
training. The most common approach uses stochastic gradient descent (SGD) with a mini-batch for updating the DNN
parameters throughout a training pass or “epoch”. The back-propagation algorithm is used to estimate the gradient of the
DNN parameters for each mini-batch. Initializing the DNN is
critical, but it has been shown that a random initialization is
adequate for speech applications where there is a substantial
amount of data [15]. A held out validation data set is used
to estimate the error rate after each training epoch. The SGD
algorithm uses a heuristic learning rate parameter that is adjusted in accordance with a scheduling algorithm which monitors the validation error rate at each epoch. Training ceases
when the error rate can no longer be reduced.
In the past, training neural networks with more than 2 hidden layers proved to be problematic. Recent advances in fast
and affordable computing hardware, optimization software
and initialization techniques have made it possible to train
much deeper networks. A typical DNN for ASR will have
5 or more hidden layers each with the same number of nodes
- typically between 500 and 3,000 [1]. The number of output
senones varies from a few hundred to tens of thousands [15].
3.3. DNN bottleneck features
A DNN can also be used as a means of extracting features for
use by a secondary classifier - including another DNN [16].
This is accomplished by sampling the activation of one of the
DNN’s hidden layers and using this as a feature vector. For
some classifiers the dimensionality of the hidden layer is too
high and some sort of feature reduction is necessary like LDA
or PCA. In [13], a dimension reducing linear transformation is
optimized as part of the DNN training by using a special bottleneck hidden layer that has fewer nodes (see Figure 2). The
bottleneck layer uses a linear activation so that it behaves very
much like a LDA or PCA transformation on the activation of
the previous layer. The bottleneck DNN used in this work is
the same system described in [13]. In theory any layer can be
used as a bottleneck layer, but in our work we have chosen
to use the second to last layer with the hope that the output
posterior prediction will not be too adversely affected by the
loss of information at the bottleneck.
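A minimal sketch of this feature-extraction step is shown below: stacked input frames are propagated through the trained network, and the pre-activation of the (linear) bottleneck layer is returned as the frame-level feature vector. The function and argument names are ours, not those of the toolkit used in [13].

```python
import numpy as np

def extract_bottleneck(frames, weights, biases, bottleneck_layer,
                       hidden_act=lambda a: 1.0 / (1.0 + np.exp(-a))):
    """Propagate stacked input frames (T, D_in) through the DNN and return the
    activations of the linear bottleneck layer as frame-level features."""
    x = frames
    for i, (w, b) in enumerate(zip(weights, biases)):
        a = x @ w + b
        if i == bottleneck_layer:
            return a                 # linear bottleneck: no nonlinearity applied
        x = hidden_act(a)
    raise ValueError("bottleneck_layer index beyond network depth")
```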
3.4. DNN stats extraction for an i-vector system
A typical i-vector system uses zeroth, first and second order
statistics generated using a GMM. Statistics are accumulated
by first estimating the posterior of each GMM component
density for a frame (the “occupancy”) and using these posteriors as weights for accumulating the statistics for each component of the mixture distribution. The zeroth order statistics are
the total occupancies for an utterance across all GMM components and the first order statistics are the weighted sum of
the means per a component. The i-vector is then computed
using a dimension reducing transformation that is non-linear
with respect to the zeroth order statistics.
An alternate approach to extracting statistics has been proposed in [6]. Statistics are accumulated in the same way as
for the GMM but class posteriors from the DNN are used in
place of GMM component posteriors. Once the statistics have
been accumulated, the i-vector extraction is performed in the
same way as it is from the GMM-based statistics. This approach has been shown to give significant gains for both SR and LR [6, 7, 17].
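A sketch of the statistics accumulation with DNN posteriors in place of GMM occupancies is given below (variable names are our own); the subsequent i-vector extraction is unchanged.

```python
import numpy as np

def dnn_sufficient_stats(features, posteriors):
    """Zeroth- and first-order sufficient statistics for i-vector extraction, with
    per-frame DNN senone posteriors taking the place of GMM component posteriors.
    features:   (T, D) acoustic feature vectors
    posteriors: (T, C) per-frame senone posteriors from the DNN"""
    n = posteriors.sum(axis=0)       # zeroth order: (C,) total occupancy per senone
    f = posteriors.T @ features      # first order:  (C, D) posterior-weighted feature sums
    return n, f
```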
4. EXPERIMENT SETUP

4.1. Corpora

Three different corpora are used in our experiments. The DNN itself is trained using a 100 hour subset of Switchboard 1 [18]. The 100 hour Switchboard subset is defined in the example system distributed with Kaldi [19]. The SR systems were trained and evaluated using the 2013 Domain Adaptation Challenge (DAC13) data [20]. The LR systems were evaluated on the NIST 2011 Language Recognition Evaluation (LRE11) data [21]. Details on the LR training and development data can be found in [22].

4.2. System configuration

4.2.1. Commonalities
All systems use the same speech activity segmentation generated using a GMM based speech activity detector (GMM
SAD). The i-vector system uses MAP and PPCA to estimate
the T matrix. Scoring is performed using PLDA [14]. With
the exception of the input features or multi-modal statistics,
the i-vector systems are identical and use a 2048 component
GMM UBM and a 600 dimensional i-vector subspace. All
LR systems use the discriminative backend described in [22].
4.2.2. Baseline systems
The front-end feature extraction for the baseline LR system uses 7 static cepstra appended with 49 SDC. Unlike the
front-end described in [22], vocal tract length normalization
(VTLN) and feature domain nuisance attribute projection
(fNAP) are not used. The front-end for the baseline SR system uses 20 MFCCs including C0 and their first derivatives
for a total of 40 features.
4.2.3. DNN system
The DNN was trained using 4,199 state cluster (“senone”)
target labels generated using the Kaldi Switchboard 1 “tri4a”
example system [19]. The DNN front-end uses 13 Gaussianized PLP coefficients and their first and second order derivatives (39 features) stacked over a 21 frame window (10 frames
to either side of the center frame) for a total of 819 input features. The GMM SAD segmentation is applied to the stacked
features.
The DNN has 7 hidden layers of 1024 nodes each with
the exception of the 6th bottleneck layer which has 64 nodes.
All hidden layers use a sigmoid activation function with the
exception of 6th layer which is linear[13]. The DNN training is preformed on an nVidia Tesla K40 GPU using custom
software developed at MIT/CSAIL.
Table 1. In-domain DAC13 results
Features     Posteriors   EER(%)   DCF*1000
MFCC         GMM          2.71     0.404
MFCC         DNN          2.27     0.336
Bottleneck   GMM          2.00     0.269
Bottleneck   DNN          2.79     0.388

Table 2. Out-of-domain DAC13 results
Features     Posteriors   EER(%)   DCF*1000
MFCC         GMM          6.18     0.642
MFCC         DNN          3.27     0.427
Bottleneck   GMM          2.79     0.342
Bottleneck   DNN          3.97     0.454

5. EXPERIMENT RESULTS

5.1. Speaker recognition experiments

Two sets of experiments were run on the DAC13 corpora: “in-domain” and “out-of-domain”. For both sets of experiments, the UBM and T hyper-parameters are trained on Switchboard (SWB) data. The other hyper-parameters (the W, m, Σwc and Σac) are trained on 2004-2008 speaker recognition evaluation (SRE) data for the in-domain experiments and SWB data for the out-of-domain experiments (see [20] for more details). Tables 1 and 2 summarize the results for the in-domain and out-of-domain experiments with the first row of each table corresponding to the baseline system. While the DNN-posterior technique with MFCCs gives a significant gain over the baseline system for both sets of experiments, as also reported in [6] and [17], an even greater gain is realized using bottleneck features with a GMM. Unfortunately, using both bottleneck features and DNN-posteriors degrades performance.

5.2. Language recognition experiments

The experiments run on the LRE11 task are summarized in Table 3, with the first row corresponding to the baseline system and the last row corresponding to a fusion of 5 “post-evaluation” systems (see [22] for details). Bottleneck features with GMM posteriors outperform the other system configurations, including the 5 system fusion. Interestingly, bottleneck features with DNN-posteriors show more of an improvement over the baseline system than in the speaker recognition experiments.

Table 3. LRE11 results (Cavg)
Features       Posteriors   30s    10s    3s
SDC            GMM          5.26   10.7   20.9
SDC            DNN          4.00   8.21   19.5
Bottleneck     GMM          2.76   6.55   15.9
Bottleneck     DNN          3.79   7.71   18.2
5-way fusion   —            3.27   6.67   17.1

5.3. Cross-task i-vector Extraction

Table 4 shows the performance on the DAC13 and LRE11 tasks when extracting i-vectors using parameters from one of the two systems. As expected, there is a degradation in performance for the mis-matched task, but the degradation is less on the DAC13 SR task using the LRE11 LR hyper-parameters. These results motivate further research in developing a unified i-vector extraction system for both SR and LR by careful UBM/T training data selection.

Table 4. Cross-task DNN-bottleneck feature i-vector systems
UBM/T    DAC13 in-domain           LRE11 30s
DAC13    2.00% EER / 0.269 DCF     6.12 Cavg
LRE11    2.68% EER / 0.368 DCF     2.76 Cavg

6. CONCLUSIONS
This paper has presented a DNN bottleneck feature extractor that is effective for both speaker and language recognition and produces significant performance gains over stateof-the-art MFCC/SDC i-vector approaches as well as more
recent DNN-posterior approaches. For the speaker recognition DAC13 task, the new DNN bottleneck features decreased
in-domain EER by 26% and DCF by 33% and out-of-domain
EER by 55% and DCF by 47%. The out-of-domain results
are particularly interesting since no in-domain data was used
for DNN training or hyper-parameter adaptation. On LRE11,
the same bottleneck features decreased EERs at 30s, 10s, and
3s test durations by 48%, 39%, and 24%, respectively, and
even out performed a 5 system fusion of acoustic and phonetic
based recognizers. A final set of experiments demonstrated
that it may be possible to use a common i-vector extractor for
a unified speaker and language recognition system. Although
not presented here, it was also observed that recognizers using the new DNN bottleneck features produced much better
calibrated scores as measured by CLLR metrics.
The DNN bottleneck features, in essence, are the learned
feature representation from which the DNN posteriors are derived. Experimentally, it appears that using the learned feature representation is better than using just the output posteriors with SR or LR features, but combining the DNN bottleneck features and DNN posteriors degrades performance.
This may be because we are able to train a better suited posterior estimator (UBM) with data more matched to the task data.
Since we are working with new features, future research will
examine whether there are more effective classifiers to apply
than i-vectors. Other future research will explore the sensitivity of the bottleneck features to the DNN’s configuration, and
training data quality and quantity.
Acknowledgments
The authors would like to thank Patrick Cardinal, Yu Zhang
and Ekapol Chuangsuwanich at MIT CSAIL for sharing their
DNN expertise and GPU optimized DNN training software.
7. REFERENCES
[1] Geoffrey Hinton, L. Deng, D. Yu, G. E. Dahl, A.-R. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen,
T. N. Sainath, and B. Kingsbury, “Deep neural networks
for acoustic modeling in speech recognition,” IEEE Signal Processing Magazine, pp. 82–97, November 2012.
[2] Y. Song, B. Jiang, Y. Bao, S. Wei, and L.-R. Dai, “Ivector representation based on bottleneck features for
language identification,” IEEE Electronics Letters, pp.
1569–1580, 2013.
[12] N. Dehak, P. Torres-Carrasquillo, D. Reynolds, and
R. Dehak, “Language recognition via ivectors and dimensionality reduction,” in Proc. of Interspeech, 2011,
pp. 857–860.
[13] Y. Zhang, E. Chuangsuwanich, and J. Glass, “Extracting deep neural network bottleneck features using lowrank matrix factorization,” in Proc. of ICASSP, 2014,
pp. 185–189.
[14] D. Garcia-Romero and C. Y. Espy-Wilson, “Analysis
of i-vector length normalization in speaker recognition
systems,” in Proc. of Interspeech, 2011, pp. 249–252.
[3] P. Matejka, L. Zhang, T. Ng, H. S. Mallidi, O. Glembek, J. Ma, and B. Zhang, “Neural network bottleneck
features for language identification,” in Proc. of IEEE
Odyssey, 2014, pp. 299–304.
[15] L. Deng, G. Hinton, and B. Kingsbury, “New types of
deep neural network learning for speech recognition and
related applications: An overview,” in Proc. of ICASSP,
2013.
[4] I. Lopez-Moreno, J. Gonzalez-Dominguez, O. Plchot,
D. Martinez, J. Gonzalez-Rodriguez, and P. Moreno,
“Automatic language identification using deep neural
networks,” in Proc. of ICASSP, 2014, pp. 5374–5378.
[16] K. Vesely, M. Karafiat, and F. Grezl, “Convolutive bottleneck network features for LVCSR,” in Proc. of IEEE
ASRU, 2011, pp. 42–47.
[5] T. Yamada, L. Wang, and A. Kai, “Improvement of
distant-talking speaker identification using bottleneck
features of dnn,” in Proc. of Interspeech, 2013, pp.
3661–3664.
[6] Y. Lei, N. Scheffer, L. Ferrer, and M. McLaren, “A novel
scheme for speaker recognition using a phoneticallyaware deep neural network,” in Proc. of ICASSP, 2014,
pp. 1714–1718.
[7] Y. Lei, L. Ferrer, A. Lawson, M. McLaren, and
N. Scheffer, “Application of convolutional neural networks to language identification in noisy conditions,” in
Proc. of IEEE Odyssey, 2014, pp. 287–292.
[8] P. Kenny, V. Gupta, T. Stafylakis, P. Ouellet, and
J. Alam, “Deep neural networks for extracting baumwelch statistics for speaker recognition,” in Proc. of
IEEE Odyssey, 2014, pp. 293–298.
[9] O. Ghahabi and J. Hernando, “I-vector modeling with
deep belief networks for multi-session speaker recognition,” in Proc. of IEEE Odyssey, 2014, pp. 305–310.
[10] A. K. Sarkar, C.-T. Do, V.-B. Le, and C. Barras, “Combination of cepstral and phonetically discriminative features for speaker verification,” IEEE Signal Processing
Letters, vol. 21, no. 9, pp. 1040–1044, Sept. 2014.
[11] N. Dehak, P. Kenny, R. Dehak, P. Ouellet, and P. Dumouchel, “Front end factor analysis for speaker verification,” IEEE Trans. Acoust., Speech, Signal Processing,
vol. 19, no. 4, pp. 788–798, may 2011.
[17] D. Garcia-Romero, X. Zhang, A McCree, and D. Povey,
“Improving speaker recognition performance in the domain adaptation challenge using deep neural networks,”
in Proc. of IEEE SLT Workshop, 2014.
[18] J. Godfrey, E. Holliman, and J. McDaniel, “Switchboard: Telephone speech corpus for research and development,” in Proc. of ICASSP, 1992, pp. 517–520.
[19] D. Povey, A. Ghoshal, G. Boulianne, L. Burget,
O. Glembek, N. Goel, M. Hannemann, P. Motlicek,
Y. Qian, P. Schwarz, J Silovsky, G. Stemmer, and
K. Vesel, “The kaldi speech recognition toolkit,” in
Proc. of IEEE ASRU, 2011.
[20] S. H. Shum, D. A. Reynolds, D. Garcia-Romero, and
A. McCree, “Unsupervised clustering approaches for
domain adaptation in speaker recognition systems,” in
Proc. of IEEE Odyssey, 2014, pp. 265–272.
[21] “The 2011 NIST language recognition evaluation plan,” 2011.
[22] E. Singer, P. Torres-Carrasquillo, D. Reynolds, A. McCree, F. Richardson, N. Dehak, and D. Sturim, “The
MITLL NIST LRE 2011 language recognition system,” in
Proc. of IEEE Odyssey, 2011, pp. 209–215.
Fitness, Apprenticeship, and Polynomials
arXiv:1612.03539v1 [math.AG] 12 Dec 2016
Bernd Sturmfels
Abstract This article discusses the design of the Apprenticeship Program at the
Fields Institute, held 21 August–3 September 2016. Six themes from combinatorial
algebraic geometry were selected for the two weeks: curves, surfaces, Grassmannians, convexity, abelian combinatorics, parameters and moduli. The activities were
structured into fitness, research and scholarship. Combinatorics and concrete computations with polynomials (and theta functions) empowers young scholars in algebraic geometry, and it helps them to connect with the historic roots of their field. We
illustrate our perspective for the threefold obtained by blowing up six points in P3 .
1 Design
A thematic program on Combinatorial Algebraic Geometry took place at the Fields
Institute, Toronto, Canada, during the Fall Semester 2016. The program organizers
were David Cox, Megumi Harada, Diane Maclagan, Gregory Smith, and Ravi Vakil.
As part of this semester, the Clay Mathematics Institute funded the “Apprenticeship Weeks”, held 21 August–3 September 2016. This article discusses the design
and mathematical scope of this fortnight. The structured activities took place in the
mornings and afternoons on Monday, Wednesday, and Friday, as well as the mornings on Tuesday and Thursday. The posted schedule was identical for both weeks:
MWF 9:00–9:30: Introduction to today’s theme
MWF 9:30–11:15: Working on fitness problems
MWF 11:15–12:15: Solutions to fitness problems
MWF 14:00–14:30: Dividing into research teams
MWF 14:30–17:00: Team work on projects
MWF 17:00–18:00: Teams present findings
TuTh 9:00–12:00: Discussion of the scholarship theme

Bernd Sturmfels: Department of Mathematics, University of California, Berkeley, CA, 94720, United States of
America and Max-Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany,
e-mail: [email protected] or [email protected]
The term “fitness” is an allusion to physical exercise. In order to improve physical
fitness, many of us go to the gym. A personal trainer can greatly enhance that experience. The trainer develops your exercise plan and he pushes you beyond previously
perceived limits. The trainer makes you sweat a lot, he ensures that you use exercise
equipment correctly, and he helps you to feel good about yourself afterwards. In the
context of team sports, the coach plays that role. She works towards the fitness of
the entire team, where every player will contribute to the best of their abilities.
The six fitness sessions were designed to be as intense as those in sports. Ten
problems were posted for each session, and these were available online two or three
days in advance. By design, these demanding problems were open-ended and probed
a different aspect of the theme. Section 3 of this article contains the complete list of
problems, along with a brief discussion and references that contain some solutions.
The “apprentices” were about 40 early-career mathematicians, graduate students
and postdocs, coming from a wide range of backgrounds. An essential feature of the
Apprenticeship Weeks was the effort to build teams, and to promote collaboration
as much as possible. This created an amazing sense of community within the group.
At 9:00am on each Monday, Wednesday or Friday, a brief introduction was given
to each fitness question. We formed ten teams to work on the problems. At 11:15am
we got together again, and one person from each team gave a brief presentation on
what had been discussed and discovered. Working on a challenging problem, with
a group of new collaborators, for less than two hours created a very intense and
stimulating experience. A balanced selection process ensured that each participant
had the opportunity to present for their team at least once.
At 2:00pm the entire group re-assembled and they discussed research-oriented
problems for the afternoons. This was conducted in the style of the American Institute of Mathematics (AIM), whereby one of the participants serves as the discussion leader, and only that person is allowed to touch the blackboard. This led to
an ample supply of excellent questions, some a direct continuation of the morning
fitness problems, and others only vaguely inspired by these. Again, groups were
formed for the afternoon, and they engaged in learning and research. Computations
and literature search played a big role, and a lot of teaching went on in the groups.
Tuesday and Thursdays were discussion days. Here the aim was to create a sense
of scholarship among the participants. The morning of these days involved studying
various software packages, classical research papers from the 19-th and early 20th centuries, and the diverse applications of combinatorial algebraic geometry. The
prompts are given in Section 2. The afternoons on discussion days were unstructured
to allow the participants time to ponder, probe, and write up their many new ideas.
2 Scholarship Prompts
Combinatorial algebraic geometry is a field that, by design, straddles mathematical boundaries. One aim is to study algebraic varieties with special combinatorial
features. At its roots, this field is about systems of polynomial equations in several
variables, and about symmetries and other special structures in their solution sets.
Section 5 offers a concrete illustration of this perspective for a system of polynomials in 32 variables. The objects of combinatorial algebraic geometry are amenable
to a wide range of software tools, which are now used widely among the researchers.
Another point we discussed is the connection to problems outside of pure mathematics. A new field, Applied Algebraic Geometry, has arisen in the past decade. The
techniques used there often connect back to 19th and early 20th century work in algebraic geometry, which is much more concrete and combinatorial than many recent
developments. And, even for her study of current abstract theories, an apprentice
may benefit from knowing the historic origins that have inspired the development of
algebraic geometry. Understanding these aspects, by getting hands-on experiences
and by studying original sources, was a focus in this part of the program.
In what follows we replicate the hand-outs for the four TuTh mornings. The
common thread can be summarized as: back to the roots. These were given to the
participants as prompts for explorations and discussions. For several of the participants, it was their first experience with software for algebraic geometry. For others,
it offered a first opportunity to read an article that was published over 100 years ago.
Tuesday, August 23: Software
Which software tools are most useful for performing computations in
Combinatorial Algebraic Geometry ? Why?
Many of us are familiar with Macaulay2. Some of us are familiar with
Singular. What are your favorite packages within these systems?
Lots of math is supported by general-purpose computer algebra systems such as
Sage, Maple, Mathematica, or Magma. Do you use any of these regularly? For
research or for teaching? How often and in which context?
Other packages that are useful for our community include Bertini, PHCpack,
4ti2, Polymake, Normaliz, GFan. What are these and what do they do? Who
developed them and why?
Does visualization matter in algebraic geometry?
Have you tried software like Surfex?
Which software tool do you want to learn today?
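For participants who have never opened any of these systems, even a three-line session can serve as a warm-up. The following Macaulay2 lines are an added illustration, not part of the original hand-out; the example (the twisted cubic) and the commands shown are generic choices rather than a prescribed exercise.

    R = QQ[x,y,z,w];
    I = minors(2, matrix{{x,y,z},{y,z,w}});   -- the ideal of the twisted cubic curve in P^3
    dim I, degree I                            -- expect 2 and 3: the cone over a cubic curve
    betti resolution I                         -- its graded Betti numbers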
Thursday, August 25: The 19th Century
Algebraic Geometry has a deep and distinguished history that goes back hundreds
of years. Combinatorics entered the scene a bit more recently.
Young scholars interested in algebraic geometry are strongly encouraged to familiarize themselves with the literature from the 19th century. Dig out papers from
that period and read them! Go for the original sources. Some are in English. Do not
be afraid of languages like French, German, Italian.
Today we form groups. Each group will explore the life and work of one mathematician, with focus on what he has done in algebraic geometry. Identify one key
paper written by that author. Then present your findings.
Here are some suggestions, listed alphabetically:
• Alexander von Brill
• Arthur Cayley
• Michel Chasles
• Luigi Cremona
• Georges Halphen
• Otto Hesse
• Ernst Kummer
• Max Noether
• Julius Plücker
• Bernhard Riemann
• Friedrich Schottky
• Hermann Schubert
• Hieronymus Zeuthen
Tuesday, August 30: Applications
Recent years have seen a lot of interest in applications of algebraic geometry,
outside of core pure mathematics. An influential event was a research year 2006-07 at the IMA in Minneapolis. Following a suggestion by Doug Arnold (then IMA
director and SIAM president), it led to the creation of the SIAM activity group
in Algebraic Geometry, and (ultimately) to the SIAM Journal on Applied Algebra
and Geometry. The reader is referred to these resources for further information.
These interactions with the sciences and engineering have been greatly enhanced by
the interplay with Combinatorics and Computation seen here at the Fields Institute.
However, the term “Algebraic Geometry” has to be understood now in a broad sense.
Today we form groups. Each group will get familiar with one field of application,
and they will select one paper in Applied Algebraic Geometry that represents an
interaction with that field. Read your paper and then present your findings. Here are
some suggested fields, listed alphabetically:
• Approximation Theory
• Bayesian Statistics
• Chemical Reaction Networks
• Coding Theory
• Combinatorial Optimization
• Computer Vision
• Cryptography
• Game Theory
• Geometric Modeling
• Machine Learning
• Maximum Likelihood Inference
• Neuroscience
• Phylogenetics
• Quantum Computing
• Semidefinite Programming
• Systems Biology
Thursday, September 1: The Early 20th Century
One week ago we examined the work of some algebraic geometers from the 19th
century. Today, we move on to the early 20th century, to mathematics that was published prior to World War II. You are encouraged to familiarize yourselves with
the literature from the period 1900-1939. Dig out papers from that period and read
them! Go for the original sources. Some are written in English. Do not be afraid of
languages like French, German, Italian, Russian.
Each group will explore the life and work of one mathematician, with focus on
what (s)he has done in algebraic geometry during that period. Identify one key paper
written by that author. Then present your findings.
Here are some suggestions, listed alphabetically:
• Eugenio Bertini
• Guido Castelnuovo
• Wei-Liang Chow
• Arthur B. Coble
• Wolfgang Gröbner
• William V.D. Hodge
• Wolfgang Krull
• Solomon Lefschetz
• Frank Morley
• Francis S. Macaulay
• Amalie Emmy Noether
• Ivan Georgievich Petrovsky
• Virginia Ragsdale
• Gaetano Scorza
• Francesco Severi
3 Fitness Prompts
This section presents the six worksheets for the morning sessions on Mondays,
Wednesdays and Fridays. These prompts inspired most of the articles in this volume. Specific pointers to dates refer to events that took place at the Fields Institute.
The next section contains notes for each problem, offering references and solutions.
Monday, August 22: Curves
1. Which genus can a smooth curve of degree 6 in P3 have? Give examples.
2. Let f (x) = (x − 1)(x − 2)(x − 3)(x − 6)(x − 7)(x − 8) and consider the genus
2 curve y^2 = f(x). Where is it in the moduli space M2? Compute the Igusa
invariants. Draw the Berkovich skeleton for the field of 5-adic numbers.
3. The tact invariant of two plane conics is the polynomial of bidegree (6, 6) in
the 6 + 6 coefficients which vanishes when the conics are tangent. Compute this
invariant explicitly. How many terms does it have?
4. Bring’s curve lives in a hyperplane in P4 . It is defined by xi0 +xi1 +xi2 +xi3 +xi4 = 0
for i = 1, 2, 3. What is its genus? Determine all tritangent planes of this curve.
5. Let X be a curve of degree d and genus g in P3 . The Chow form of X defines a
hypersurface in the Grassmannian Gr(1, P3 ). Points are lines that meet X. Find
the dimension and (bi)degree of its singular locus.
6. What are the equations of the secant varieties of elliptic normal curves?
7. Let XP be the toric variety defined by a 3-dimensional lattice polytope, as in
Milena Hering’s July 18-22 course. Intersect XP with two general hyperplanes to
get a curve. What is the degree and genus of that curve?
8. A 2009 article by Sean Keel and Jenia Tevelev presents Equations for M 0,n .
Write these equations in Macaulay2 format for n = 5 and n = 6. Can you see
the ψ -classes (seen in Renzo Cavalieri’s July 18-22 course) in these coordinates?
9. Review the statement of Torelli’s Theorem for genus 3. Using Sage or Maple,
compute the 3 × 3 Riemann matrix of the Fermat quartic {x^4 + y^4 + z^4 = 0}. How
can you recover the curve from that matrix?
10. The moduli space M7 of genus 7 curves has dimension 18. What is the codimension of the locus of plane curves? Hint: Singularities are allowed.
Wednesday, August 24: Surfaces
1. A nondegenerate surface in Pn has degree at least n − 1. Prove this fact and determine all surfaces of degree n − 1. Give their equations.
2. How many lines lie on a surface obtained by intersecting two quadratic hypersurfaces in P4 ? Find an instance where all lines are defined over Q.
3. What is the maximum number of singular points on an irreducible quartic surface
in P3 ? Find a surface and compute its projective dual.
4. Given a general surface of degree d in P3 , the set of its bitangent lines is a surface
in Gr(1, P3 ). Determine the cohomology class (or bidegree) of that surface.
5. Pick two random circles C1 and C2 in R3 . Compute their Minkowski sum C1 +C2
and their Hadamard product C1 ⋆ C2 . Try other curves.
6. Let X be the surface obtained by blowing up five general points in the plane.
Compute the Cox ring of X. Which of its ideals describe points on X?
7. The incidences among the 27 lines on a cubic surface define a 10-regular graph.
Compute the complex of independent sets in this graph.
8. The Hilbert scheme of points on a smooth surface is smooth. Why? How many
torus-fixed points are there on the Hilbert scheme of 20 points in P2 ? What can
you say about the graph that connects them?
9. State the Hodge Index Theorem. Verify this theorem for cubic surfaces in P3 , by
explicitly computing the matrix for the intersection pairing.
10. List the equations of one Enriques surface. Verify its Hodge diamond.
Friday, August 26: Grassmannians
1. Find a point in Gr(3, 6) with precisely 16 non-zero Plücker coordinates. As in
June Huh’s July 18-22 course, determine the Chow ring of its matroid.
2. The coordinate ring of the Grassmannian Gr(3, 6) is a cluster algebra of finite
type. What are the cluster variables? List all the clusters.
3. Consider two general surfaces in P3 whose degrees are d and e respectively. How
many lines in P3 are bitangent to both surfaces?
4. The rotation group SO(n) is an affine variety in the space of real n × n-matrices.
Can you find a formula for the degree of this variety?
5. The complete flag variety for GL(4) is a six-dimensional subvariety of P3 × P5 ×
P3 . Compute its ideal and determine its tropicalization.
6. Classify all toric ideals that arise as initial ideals for the flag variety above. For
each such toric degeneration, compute the Newton-Okounkov body.
7. The Grassmannian Gr(4, 7) has dimension 12. Four Schubert cycles of codimension 3 intersect in a finite number of points. How large can that number be?
Exhibit explicit cycles whose intersection is reduced.
8. The affine Grassmannian and the Sato Grassmannian are two infinite-dimensional
versions of the Grassmannian. How are they related?
9. The coordinate ring of the Grassmannian Gr(2, 7) is Z7 -graded. Determine the
Hilbert series and the multidegree of Gr(2, 7) for this grading.
10. The Lagrangian Grassmannian parametrizes n-dimensional isotropic subspaces
in C2n . Find a Gröbner basis for its ideal. What is a ‘doset’?
Monday, August 29: Convexity
1. The set of nonnegative binary sextics is a closed full-dimensional convex cone in
Sym6 (R2 ) ≃ R7 . Determine the face poset of this convex cone.
2. Consider smooth projective toric fourfolds with eight invariant divisors. What is
the maximal number of torus-fixed points of any such variety?
3. Choose three general ellipsoids in R3 and compute the convex hull of their union.
Which algebraic surfaces contribute to the boundary?
4. Explain how the Alexandrov-Fenchel Inequalities (for convex bodies) can be derived from the Hodge Index Theorem (for algebraic surfaces).
5. The blow-up of P3 at six general points is a threefold that contains 32 special
surfaces (exceptional classes). What are these surfaces? Which triples intersect?
Hint: Find a 6-dimensional polytope that describes the combinatorics.
6. Prove that every face of a spectrahedron is an exposed face.
7. How many combinatorial types of reflexive polytopes are there in dimension 3?
In dimension 4? Draw pictures of some extreme specimen.
8. A 4 × 4-matrix has six off-diagonal 2 × 2-minors. Their binomial ideal in 12
variables has a unique toric component. Determine the f-vector of the polytope
(with 12 vertices) associated with this toric variety.
9. Consider the Plücker embedding of the real Grassmannian Gr(2, 5) in the unit
sphere in R10 . Describe its convex hull. Hint: Calibrations, Orbitopes.
10. Examine Minkowski sums of three tetrahedra in R3 . What is the maximum number of vertices such a polytope can have? How to generalize?
Wednesday, August 31: Abelian Combinatorics
1. The intersection of two quadratic surfaces in P3 is an elliptic curve. Explain its
group structure in terms of geometric operations in P3 .
2. A 2006 paper by Keiichi Gunji gives explicit equations for all abelian surfaces
in P8 . Verify his equations in Macaulay2. How to find the group law?
3. Experiment with Swierczewski’s Sage code for the numerical evaluation of the
Riemann theta function θ (τ ; z). Verify the functional equation.
4. Theta functions with characteristics θ[ε, ε′](τ; z) are indexed by two binary vectors ε, ε′ ∈ {0, 1}^g. They are odd or even. How many of each?
5. Fix the symplectic form ⟨x, y⟩ = x1 y4 + x2 y5 + x3 y6 + x4 y1 + x5 y2 + x6 y3 on the
64-element vector space (F2)^6. Determine all isotropic subspaces.
6. Explain the combinatorics of the root system of type E7 . How would you choose
coordinates? How many pairs of roots are orthogonal?
7. In 1879 Cayley published a paper in Crelle’s journal titled Algorithms for ...
What did he do? How does it relate the previous two exercises?
8. The regular matroid R10 defines a degeneration of abelian 5-folds. Describe its
periodic tiling on R5 and secondary cone in the 2-nd Voronoi decomposition.
Explain the application to Prym varieties due to Gwena.
9. Consider the Jacobian of the plane quartic curve defined over Q2 by
41x^4 + 1530x^3y + 3508x^3z + 1424x^2y^2 + 2490x^2yz
− 2274x^2z^2 + 470xy^3 + 680xy^2z − 930xyz^2 + 772xz^3
+ 535y^4 − 350y^3z − 1960y^2z^2 − 3090yz^3 − 2047z^4
Compute its limit in Alexeev’s moduli space for the 2-adic valuation.
10. Let Θ be the theta divisor on an abelian threefold X. Find n = dim H^0(X, kΘ).
What is the smallest integer k such that kΘ is very ample? Can you compute (in
Macaulay2) the ideal of the corresponding embedding X ↪ P^{n−1}?
Friday, September 2: Parameters and Moduli
1. Write down (in Macaulay2 format) the two generators of the ring of invariants
for ternary cubics. For which plane cubics do both invariants vanish?
2. Fix a Z-grading on the polynomial ring S = C[a, b, c, d] defined by deg(a) = 1,
deg(b) = 4, deg(c) = 5, and deg(d) = 9. Classify all homogeneous ideals I such
that S/I has Hilbert function identically equal to 1.
3. Consider the Hilbert scheme of eight points in affine 4-space A4 . Identify a point
that is not in the main component. List its ideal generators.
4. Let X be the set of all symmetric 4 × 4-matrices in R4×4 that have an eigenvalue
of multiplicity ≥ 2. Compute the C-Zariski closure of X.
5. Which cubic surfaces in P3 are stable? Which ones are semi-stable?
6. In his second lecture on August 15, Valery Alexeev used six lines in P2 to construct a certain moduli space of K3 surfaces with 15 singular points. List the most
degenerate points in the boundary of that space.
7. Find the most singular point on the Hilbert scheme of 16 points in A3 .
8. The polynomial ring C[x, y] is graded by the 2-element group Z/2Z where
deg(x) = 1 and deg(y) = 1. Classify all Hilbert functions of homogeneous ideals.
9. Consider all threefolds obtained by blowing up six general points in P3 . Describe
their Cox rings and Cox ideals. How can you compactify this moduli space?
10. The moduli space of tropical curves of genus 5 is a polyhedral space of dimension
12. Determine the number of i-faces for i = 0, 1, 2, . . . , 12.
4 Notes, Solutions and References
Solutions to several of the sixty fitness problems can be found in the 16 articles of
this volume. The articles are listed as the first 16 entries in our References. They
will be published in the order in which they are cited in this section. In what follows
we also offer references for other problems that did not lead to articles in this book.
Notes on Curves
1. Castelnuovo classified the degree and genus pairs (d, g) for all smooth curves in
Pn . This was extended to characteristic p by Ciliberto [25]. For n = 3, d = 6, the
possible genera are g = 0, 1, 2, 3, 4. The Macaulay2 package RandomCurves
can compute examples. The Hartshorne-Rao module [50] plays a key role.
2. See Section 2 in the article by Bolognese, Brandt and Chua [1]. The approach
using Igusa invariants was developed by Helminck in [32].
3. The tact invariant has 3210 terms, by [57, Example 2.7].
4. See Section 2.1 in the article by Harris and Len [2]. The analogous problem for
bitangents of plane quartics is discussed by Chan and Jiradilok [3].
5. This is solved in the article by Kohn, Nødland and Tripoli [4].
6. Following Fisher [29], elliptic normal curves are defined by the 4 × 4-subpfaffians
of the Klein matrix, and their secant varieties are defined by its larger subpfaffians.
7. The degree of a projective toric variety XP is the volume of its lattice polytope P.
The genus of a complete intersection in XP was derived by Khovanskii in 1978. We
recommend the tropical perspective offered by Steffens and Theobald in [53, §4.1].
8. See the article by Monin and Rana [5] for a solution up to n = 6.
9. See [26] for how to compute the forward direction of the Torelli map of an arbitrary plane curve. For computing the backward direction in genus 3 see [61, §5.2].
10. Trinodal sextics form a 16-dimensional family; their codimension in M7 is two.
This is a result due to Severi, derived by Castryck and Voight in [24, Theorem 2.1].
Notes on Surfaces
1. This was solved by Del Pezzo in 1886. Eisenbud and Harris [27] give a beautiful
introduction to the theory of varieties of minimal degree, including their equations.
2. This is a del Pezzo surface of degree 4. It has 16 lines. To make them rational,
map P2 into P4 via a Q-basis for the cubics that vanish at five rational points in P2.
3. The winner, with 16 singular points, is the Kummer surface [34]. It is self-dual.
4. This is solved in the article by Kohn, Nødland and Tripoli [4].
5. See Section 5 in the article by Friedenberg, Oneto and Williams [6].
6. This is the del Pezzo surface in Problem 2. Its Cox ring is a polynomial ring in
16 variables modulo an ideal generated by 20 quadrics. Ideal generators that are
universal over the base M0,5 are listed in [47, Proposition 2.1]. Ideals of points on
the surface are torus translates of the toric ideal of the 5-dimensional demicube D5 .
For six points in P2 we refer to Bernal, Corey, Donten-Bury, Fujita and Merz [7].
7. This is the clique complex of the Schläfli graph. The f-vector of this simplicial
complex is (27, 216, 720, 1080, 648, 72). The Schläfli graph is the edge graph of the
E6 -polytope, denoted 221 , which is a cross section of the Mori cone of the surface.
8. The torus-fixed points on Hilb20 (P2 ) are indexed by ordered triples of partitions
(λ1, λ2, λ3) with |λ1| + |λ2| + |λ3| = 20. The number of such triples equals 341,649 (see the short computation after these notes).
The graph that connects them is a variant of the graph for the Hilbert scheme of
points in the affine plane. The latter was studied by Hering and Maclagan in [33].
9. The signature of the intersection pairing is (1, r − 1) where r is the rank of the
Picard group. This is r = 7 for the cubic surface. From the analysis in Problem 7,
we can get various symmetric matrices that represent the intersection pairing.
10. See the article by Bolognese, Harris and Jelisiejew [8].
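The count in note 8 can be verified with a one-line computation. The following Macaulay2 snippet is an added illustration, not taken from the article; it simply convolves the partition numbers.

    -- ordered triples of partitions (l1,l2,l3) with |l1|+|l2|+|l3| = 20
    sum(0..20, a -> sum(0..20-a, b -> #(partitions a) * #(partitions b) * #(partitions(20-a-b))))
    -- expected value: 341649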
Notes on Grassmannians
1. See the article by Wiltshire-Gordon, Woo and Zajaczkowska [9].
2. In addition to the 20 Plücker coordinates pi jk , one needs two more functions,
namely p123 p456 − p124 p356 and p234 p561 − p235 p461 . The six boundary Plücker coordinates p123 , p234 , p345 , p456 , p561 , p612 are frozen. The other 16 coordinates are
the cluster variables for Gr(3, 6). This was derived by Scott in [51, Theorem 6].
3. This is worked out in the article by Kohn, Nødland and Tripoli [4].
4. This is the main result of Brandt, Bruce, Brysiewicz, Krone and Robeva [10].
5. See the article by Bossinger, Lamboglia, Mincheva and Mohammadi [11].
6. See the article by Bossinger, Lamboglia, Mincheva, Mohammadi [11].
7. The maximum number is 8. This is obtained by taking the partition (2, 1) four
times. For this problem, and many other Schubert problems, instances exist where
all solutions are real. See the works of Sottile, specifically [52, Theorem 3.9 (iv)].
8. The Sato Grassmannian is more general than the affine Grassmannian. These are
studied, respectively, in integrable systems and in geometric representation theory.
9. A formula for the Z^n-graded Hilbert series of Gr(2, n) is given by Witaszek [63,
§3.3]. For an introduction to multidegrees see [40, §8.5]. Try the Macaulay2 commands Grassmannian and multidegree; a small example follows these notes. Escobar and Knutson [12] determine the multidegree of a variety that is important in computer vision.
10. The coordinate ring of the Lagrangian Grassmannian is an algebra with straightening law over a doset. This stands for double poset. See the exposition in [48, §3].
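Following the suggestion in note 9, here is a small Macaulay2 session (an added illustration; it uses the default single grading rather than the Z^n-grading, which would require a Degrees option on the ring).

    -- the Pluecker ideal of Gr(2,7)
    I = Grassmannian(1, 6, CoefficientRing => QQ);   -- Gr(2,7) embedded in P^20
    dim I, degree I
    multidegree I    -- with the default grading; the finer Z^7-grading is not set up here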
Notes on Convexity
1. The face lattice of the cone of non-negative binary forms of degree d is described
in Barvinok’s textbook [20, §II.11]. In more variables this is much more difficult.
2. This seems to be an open problem. For seven invariant divisors, this was resolved
by Gretenkort et al. [30]. Note the conjecture stated in the last line of that paper.
3. We refer to Nash, Pir, Sottile and Ying [13] and to the youtube video The Convex Hull of Ellipsoids by Nicola Geismann, Michael Hemmer, and Elmar Schömer.
4. We refer to Ewald’s textbook, specifically [28, §IV.5 and §VII.6].
5. The relevant polytope is the 6-dimensional demicube; its 32 vertices correspond
to the 32 special divisors. See the notes for Problem 9 in Parameters and Moduli.
6. This was first proved by Ramana and Goldman in [43, Corollary 1].
7. Kreuzer and Skarke [37] classified such reflexive polytopes up to lattice isomorphism. There are 4319 in dimension 3, and there are 473800776 in dimension 4.
Lars Kastner classified the list of 4319 into combinatorial types. He found that there
are 558 combinatorial types of reflexive 3-polytopes. They have up to 14 vertices.
8. This 6-dimensional polytope is obtained from the direct product of two identical regular tetrahedra by removing the four pairs of corresponding vertices. It is the
convex hull of the points ei ⊕ ej in R4 ⊕ R4 where i, j ∈ {1, 2, 3, 4} with i ≠ j. Using
the software Polymake, we find its f-vector to be (12, 54, 110, 108, 52, 12); a sketch of this computation follows these notes.
9. The faces of the Grassmann orbitopes conv(Gr(2, n)) for n ≥ 5 are described in
[49, Theorem 7.3]. It is best to start with the easier case n = 4 in [49, Example 7.1].
10. The maximum number of vertices is 38, by the formula of Karavelas et al. in
[35, §6.1, equation (49)]. A definitive solution to the problem of characterizing face
numbers of Minkowski sums of polytopes was given by Adiprasito and Sanyal [17].
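The f-vector in note 8 can also be checked inside Macaulay2 with the Polyhedra package. The lines below are an added illustration (the text used Polymake), and the exact shape of the output of fVector may differ slightly with the package's conventions.

    needsPackage "Polyhedra";
    pairsIJ = select(toList((0,0)..(3,3)), ij -> ij#0 != ij#1);
    pts = apply(pairsIJ, ij -> apply(8, k -> if k == ij#0 or k == 4 + ij#1 then 1 else 0));
    P = convexHull transpose matrix pts;   -- the 12 points e_i + e_(4+j) with i != j
    dim P, fVector P                       -- expect dimension 6 and faces (12, 54, 110, 108, 52, 12)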
Notes on Abelian Combinatorics
1. A beautiful solution was written up by Qiaochu Yuan when he was a high school
student; see [62]. The idea is to simultaneously diagonalize the two quadrics, then
project their intersection curve into the plane, thereby obtaining an Edwards curve.
2. This is a system of 9 quadrics and 3 cubics, derived from Coble’s cubic as in [45,
Theorem 3.2]. Using theta functions as in [45, Lemma 3.3], one gets the group law.
3. See [61] and compare with Problem 9 in Curves.
4. For the 2^{2g} pairs (ε, ε′), we check whether ε · ε′ is even or odd. There are
2^{g−1}(2^g + 1) even theta characteristics and 2^{g−1}(2^g − 1) odd theta characteristics.
5. The number of isotropic subspaces of (F2 )6 is 63 of dimension 1, it is 315 in
dimension 2, and it is 135 in dimension 3. The latter are the Lagrangians [46, §6].
6. The root system of type E7 has 63 positive roots. They are discussed in [46, §6].
7. Cayley gives a bijection between the 63 positive roots of E7 with the 63 non-zero
vectors in (F2 )6 . Two roots have inner product zero if and only if the corresponding
vectors in (F2 )6 \{0} are orthogonal in the setting of Problem 5. See [46, Table 1].
8. This refers to Gwena’s article [31]. Since the matroid R10 is not co-graphic, the
corresponding tropical abelian varieties are not in the Schottky locus of Jacobians.
9. This fitness problem is solved in the article by Bolognese, Brandt and Chua [1].
Chan and Jiradilok [3] study an important special family of plane quartics.
10. The divisor kΘ is very ample for k = 3. This embeds any abelian threefold into
P26 . For products of three cubic curves, each in P2 , this gives the Segre embedding.
Notes on Parameters and Moduli
1. The solution can be found, for instance, on the website
http://math.stanford.edu/∼notzeb/aronhold.html
The two generators have degree 4 and 6. The quartic invariant is known as the Aronhold invariant and it vanishes when the ternary cubic is a sum of three cubes of
linear forms. Both invariants vanish when the cubic curve has a cusp.
2. This refers to extra irreducible components in toric Hilbert schemes [42]. These
schemes were first introduced by Arnold [19], who coined the term A-graded algebras. Theorem 10.4 in [54] established the existence of an extra component for
A = (1347). We ask to verify the second entry in Table 10-1 on page 88 of [54].
3. Cartwright et al. [22] showed that the Hilbert scheme of eight points in A4 has
two irreducible components. An explicit point in the non-smoothable component is
given in the article by Douvropoulos, Jelisiejew, Nødland and Teitler [14].
4. At first, it is surprising that X has codimension 2. The point is that we work
over the real numbers R. The analogous set over C is the hypersurface of a sum-of-squares polynomial. The C-Zariski closure of X is a nice variety of codimension 2.
The defining ideal and its Hilbert-Burch resolution are explained in [56, §7.1].
5. This is an exercise in Geometric Invariant Theory [41]. A cubic surface is stable if
and only if it has at most ordinary double points (A1 singularities). For semi-stable
surfaces, A2 singularities are allowed. For an exposition see [44, Theorem 3.6]; this
is E. Reinecke’s Bachelor thesis, written under the supervision of D. Huybrechts.
6. This is the moduli space of stable hyperplane arrangements [18], here for the case
of six lines in P2 . The precise space depends on a choice of parameters [18, §5.7].
For some natural parameters, this is the tropical compactification associated with
the tropical Grassmannian Gr(3, 6), so the most degenerate points correspond to the
seven generic types of tropical planes in 5-space, shown in [38, Figure 5.4.1].
7. See [55, Theorem 2.3].
8. For each partition, representing a monomial ideal in C[x, y], we count the odd and
even boxes in its Young diagram. The resulting Hilbert functions h : Z/2Z → N are
(h(even), h(odd)) = (k^2 + m, k(k + 1) + m) or ((k + 1)^2 + m, k(k + 1) + m), where
k, m ∈ N. This was contributed by Dori Bejleri. For more details see [21, §1.3].
9. The blow-up of P^{n−3} at n points is a Mori dream space. Its Cox ring has 2^{n−1}
generators, constructed explicitly by Castravet and Tevelev in [23]. These form a
Khovanskii basis [36], by [60, Theorem 7.10]. The Cox ideal is studied in [59].
Each point on its variety represents a rank two stable quasiparabolic vector bundle
on P1 with n marked points. The relevant moduli space is M0,n .
10. The moduli space of tropical curves of genus 5 serves as the first example in
the article by Lin and Ulirsch [15]. The article by Kastner, Shaw and Winz [16]
discusses state-of-the-art software tools for computing with such polyhedral spaces.
5 Polynomials
The author of this article holds the firm belief that algebraic geometry concerns the
study of solution sets to systems of polynomial equations. Historically, geometers
explored curves and surfaces that are zero sets of polynomials. It is the insights
gained from these basic figures that have led, over the course of centuries, to the
profound depth and remarkable breadth of contemporary algebraic geometry. However, many of the current theories are now far removed from explicit varieties, and
polynomials are nowhere in sight. What we are advocating is for algebraic geometry to take an outward-looking perspective. Our readers should be aware of the
wealth of applications in the sciences and engineering, and be open to a “back to
the basics” approach in both teaching and scholarship. From this perspective, the
interaction with combinatorics can be particularly valuable. Indeed, combinatorics
is known to some as the “nanotechnology of mathematics”. It is all about explicit
objects, those that can be counted, enumerated, and dissected with laser precision.
And, these objects include some beautiful polynomials and the ideals they generate.
The following example serves as an illustration. We work in a polynomial ring
Q[p] in 32 variables, one for each subset of {1, 2, 3, 4, 5, 6} whose cardinality is odd:
p1 , p2 , . . . , p6 , p123 , p124 , p125 , . . . , p356 , p456 , p12345, p12346 , . . . , p23456.
The polynomial ring Q[p] is Z7 -graded by setting degree(pσ ) = e0 + ∑i∈σ ei , where
e0 , e1 , . . . , e6 is the standard basis of Z7 . Let X be a 5 × 6-matrix of variables, and let
I be the kernel of the ring map Q[p] → Q[X] that takes the variables pσ to the determinant of the submatrix of X with column indices σ and row indices 1, 2, . . . , |σ |.
The ideal I is prime and Z7 -graded. It has multiple geometric interpretations.
First of all, it describes the partial flag variety of points in 2-planes in hyperplanes
in P5 . This flag variety lives in P5 × P19 × P5 , thanks to the Plücker embedding. Its
projection into the factor P19 is the Grassmannian Gr(3, 6) of 2-planes in P5 . Flag
varieties are studied by Bossinger, Lamboglia, Mincheva and Mohammadi in [11].
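Readers who wish to experiment can rebuild the ideal I directly from this description. The following Macaulay2 sketch is an added illustration of the construction just described; the variable names are arbitrary, the Z^7-grading from the text is not set up, and the kernel computation may take a while.

    -- the odd-minors map and its kernel (illustrative sketch)
    odds = select(subsets toList(0..5), s -> odd(#s));   -- the 32 subsets of odd cardinality
    S = QQ[y_(1,1)..y_(5,6)];
    X = genericMatrix(S, y_(1,1), 5, 6);                 -- a generic 5 x 6 matrix
    P = QQ[apply(odds, s -> p_s)];                       -- one variable p_s per odd subset
    f = map(S, P, apply(odds, s -> det submatrix(X, toList(0..#s-1), s)));
    I = trim kernel f;                                   -- this elimination may take some time
    numgens I                                            -- expect 66 minimal quadratic generators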
But, let the allure of polynomials now speak for itself. Our ideal I has 66 minimal
quadratic generators. Sixty generators are unique up to scaling in their degree:
degree                      ideal generator
(2, 0, 0, 1, 1, 1, 1)       p3 p456 − p4 p356 + p5 p346 − p6 p345
(2, 0, 1, 0, 1, 1, 1)       p2 p456 − p4 p256 + p5 p246 − p6 p245
· · ·                       · · ·
(2, 1, 1, 1, 1, 0, 0)       p1 p234 − p2 p134 + p3 p124 − p4 p123
(2, 0, 1, 1, 1, 1, 2)       p256 p346 − p246 p356 + p236 p456
· · ·                       · · ·
(2, 2, 1, 1, 1, 1, 0)       p125 p134 − p124 p135 + p123 p145
(2, 1, 1, 1, 1, 2, 2)       p156 p23456 − p256 p13456 + p356 p12456 − p456 p12356
· · ·                       · · ·
(2, 2, 2, 1, 1, 1, 1)       p123 p12456 − p124 p12356 + p125 p12346 − p126 p12345
The other six minimal generators live in degree (2, 1, 1, 1, 1, 1, 1). These are the 4-term Grassmann-Plücker relations, like p126 p345 − p125 p346 + p124 p356 − p123 p456.
Here is an alternate interpretation of the ideal I. It defines a variety of dimension
15 = (6 choose 2) in P31 known as the spinor variety. In this guise, I encodes the algebraic
relations among the principal subpfaffians of a skew-symmetric 6 × 6-matrix. Such
subpfaffians are indexed with the subsets of {1, 2, 3, 4, 5, 6} of even cardinality. The
trick is to fix a natural bijection between even and odd subsets. This variety is similar
to the Lagrangian Grassmannian seen in fitness problem # 10 on Grassmannians.
At this point, readers who like combinatorics and computations may study I.
Can you compute the tropical variety of I? Which of its maximal cones are prime
in the sense of Kaveh and Manon [36, Theorem 1]? These determine Khovanskii
bases for Q[p]/I and hence toric degenerations of the spinor variety in P31 . Their
combinatorics is recorded in a list of Newton-Okounkov polytopes with 32 vertices.
Each of these polytopes comes with a linear projection to the 6-dimensional
demicube, which is the convex hull in R7 of the 32 points deg(pσ ). We saw this
demicube in fitness problem # 5 on Convexity, whose theme we turn to shortly.
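The demicube itself is easy to inspect by computer. The following lines are an added Macaulay2/Polyhedra illustration, not part of the original text; they build the convex hull of the 32 degree vectors deg(pσ) = e0 + Σ_{i∈σ} ei and confirm its dimension and vertex count.

    needsPackage "Polyhedra";
    odds = select(subsets toList(1..6), s -> odd(#s));
    degs = apply(odds, s -> apply(toList(0..6), i -> if i == 0 or member(i, s) then 1 else 0));
    D = convexHull transpose matrix degs;
    dim D, numColumns vertices D    -- expect the 6-dimensional demicube with 32 vertices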
It is the author’s opinion that Khovanskii bases deserve more attention than the
Newton-Okounkov bodies they give rise to. The former are the algebraic manifestation of a toric degeneration. These must be computed and verified. Looking at a
Khovanskii basis through the lens of convexity reveals the Newton-Okounkov body.
We now come to a third, and even more interesting, geometric interpretation of
our 66 polynomials. It has to do with Cox rings, and their Khovanskii bases, similar
to those in the article by Bernal, Corey, Donten-Bury, Fujita and Merz. We begin by
replacing the generic 5 × 6-matrix X by one that has the special form in [23, (1.2)]:
X =
    u1^2 x1    u2^2 x2    u3^2 x3    u4^2 x4    u5^2 x5    u6^2 x6
    u1 y1      u2 y2      u3 y3      u4 y4      u5 y5      u6 y6
    u1 v1 x1   u2 v2 x2   u3 v3 x3   u4 v4 x4   u5 v5 x5   u6 v6 x6
    v1 y1      v2 y2      v3 y3      v4 y4      v5 y5      v6 y6
    v1^2 x1    v2^2 x2    v3^2 x3    v4^2 x4    v5^2 x5    v6^2 x6 .
Now, the polynomial ring Q[X] gets replaced by k[x1 , x2 , . . . , x6 , y1 , y2 , . . . , y6 ] where
k is the field extension of Q generated by the entries of a 2 × 6-matrix of scalars:
U =
    u1 u2 u3 u4 u5 u6
    v1 v2 v3 v4 v5 v6 .          (1)
We assume that the 2 × 2-minors of U are non-zero. Let J denote the kernel of the
odd-minors map k[p] → k[X] as before. The ideal J is also Z7 -graded and it strictly
contains the ideal I. Castravet and Tevelev [23, Theorem 1.1] proved that k[p]/J is
the Cox ring of the blow-up of P^3 (over k) at six points. These points are Gale dual to U.
We refer to J as the Cox ideal of that rational threefold whose Picard group Z7 furnishes the grading. The affine variety in A^32 (over k) defined by J is 10-dimensional (it is the
universal torsor). Quotienting by a 7-dimensional torus action yields our threefold.
The same story for blowing up five points in P2k is problem # 6 on Surfaces.
In [59] we construct the Cox ideal by duplicating the ideal of the spinor variety:
16
Bernd Sturmfels
J = I + u ∗ I.          (2)
Here u is a vector in (k∗)^32 that is derived from U. The ideal u ∗ I is obtained from I
by scaling the variables pσ with the coordinates of u. In particular, the Cox ideal J is
minimally generated by 132 quadrics. Now, there are two generators in each of the
sixty Z7 -degrees in our table, and there are 12 generators in degree (2, 1, 1, 1, 1, 1, 1).
Following [60, Example 7.6], we fix the rational function field k = Q(t) and set
U =
    1    t    t^2  t^3  t^4  t^5
    t^5  t^4  t^3  t^2  t    1 .
The ring map k[p] → k[X] now maps the variables pσ like this:
p1 ↦ x1
p123 ↦ x1 y2 x3 t^6 − (x1 x2 y3 + y1 x2 x3) t^7 + (y1 x2 x3 + x1 x2 y3) t^9 − x1 y2 x3 t^10
p12345 ↦ x1 y2 x3 y4 x5 t^10 − (y1 x2 x3 y4 x5 + x1 y2 x3 x4 y5 + · · · + x1 x2 y3 y4 x5) t^11 + · · ·
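These expansions can be reproduced directly. The lines below are an added Macaulay2 illustration (not code from [60]); they specialize the matrix X over Q[t], which suffices for checking the displayed image of p123.

    R = QQ[t];
    S = R[x_1..x_6, y_1..y_6];
    u = apply(6, i -> t^i);        -- first row of U:  1, t, ..., t^5
    v = apply(6, i -> t^(5-i));    -- second row of U: t^5, ..., t, 1
    X = matrix {apply(6, i -> (u#i)^2 * x_(i+1)),
                apply(6, i -> (u#i) * y_(i+1)),
                apply(6, i -> (u#i) * (v#i) * x_(i+1)),
                apply(6, i -> (v#i) * y_(i+1)),
                apply(6, i -> (v#i)^2 * x_(i+1))};
    det submatrix(X, {0}, {0})          -- x_1, the image of p1
    det submatrix(X, {0,1,2}, {0,1,2})  -- the image of p123 displayed above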
Here is a typical example of a Z7 -degree with two minimal ideal generators:
(2, 1, 1, 1, 1, 0, 0)       p1 p234 − p2 p134 + p3 p124 − p4 p123
(2, 1, 1, 1, 1, 0, 0)       t^4 p1 p234 − t^2 p2 p134 + t^2 p1 p234 + p4 p123
The algebra generators pσ form a Khovanskii basis for k[p]/J with respect to the
t-adic valuation. The toric algebra resulting from this flat family is generated by the
underlined monomials. Its toric ideal in(J) is generated by 132 binomial quadrics:
degree                      pair of binomial generators for in(J)
(2, 0, 0, 1, 1, 1, 1)       p3 p456 − p4 p356,   p5 p346 − p6 p345
(2, 0, 1, 0, 1, 1, 1)       p2 p456 − p4 p256,   p5 p246 − p6 p245
· · ·                       · · ·
(2, 1, 1, 1, 1, 0, 0)       p1 p234 − p2 p134,   p3 p124 − p4 p123
(2, 0, 1, 1, 1, 1, 2)       p6 p23456 − p236 p456,   p246 p356 − p256 p346
· · ·                       · · ·
(2, 2, 1, 1, 1, 1, 0)       p1 p12345 − p123 p145,   p124 p135 − p125 p134
(2, 1, 1, 1, 1, 2, 2)       p156 p23456 − p256 p13456,   p356 p12456 − p456 p12356
· · ·                       · · ·
(2, 2, 2, 1, 1, 1, 1)       p123 p12456 − p124 p12356,   p125 p12346 − p126 p12345
These 132 binomials define a toric variety that is a degeneration of our universal
torsor. The ideal in(J) is relevant in both biology and physics. It represents the
Jukes-Cantor model in phylogenetics [58] and the Wess-Zumino-Witten model in
conformal field theory [39]. Beautiful polynomials can bring the sciences together.
Let us turn to another fitness problem. The past three pages offered a capoeira
approach to # 9 in Parameters and Moduli. The compactification is that given by the
tropical variety of the universal Cox ideal, to be computed as in [45, 47]. The base
space is M0,6, with points represented by 2 × 6-matrices U as in (1). We encountered several themes that are featured in other articles in this book: flag varieties,
Grassmannians, Zn -gradings, Cox rings, Khovanskii bases, and toric ideals. The
connection to spinor varieties was developed in the article [59] with Mauricio Velasco. The formula (2) is derived in [59, Theorem 7.4] for the blow-up of P^{n−3} at n
points when n ≤ 8. It is still a conjecture for n ≥ 9. On your trail towards solving
such open problems, fill your backpack with polynomials. They will guide you.
Acknowledgements This article benefited greatly from comments by Lara Bossinger, Fatemeh
Mohammadi, Emre Sertöz, Mauricio Velasco and an anonymous referee. The apprenticeship program at the Fields Institute was supported by the Clay Mathematics Institute. The author also
acknowledges partial support from the Einstein Foundation Berlin, MPI Leipzig, and the US National Science Foundation (DMS-1419018).
References
1. Barbara Bolognese, Madeline Brandt, and Lynn Chua: From curves to tropical Jacobians and
back, in Combinatorial Algebraic Geometry (eds. G.G. Smith and B. Sturmfels), to appear.
2. Corey Harris and Yoav Len: Tritangent planes to space sextics: the algebraic and tropical
stories, op. cit.
3. Melody Chan and Pakawut Jiradilok: Theta characteristics of tropical K4 -curves, op. cit.
4. Kathlén Kohn, Bernt Ivar Utstøl Nødland, and Paolo Tripoli: Secants, bitangents, and their
congruences, op. cit.
5. Leonid Monin and Julie Rana: Equations of M0,n , op. cit.
6. Netanel Friedenberg, Alessandro Oneto, and Robert Williams: Minkowski sums and
Hadamard products of algebraic varieties, op. cit.
7. Martha Bernal, Daniel Corey, Maria Donten-Bury, Naoki Fujita, and Georg Merz: Khovanskii
bases of Cox-Nagata rings and tropical geometry, op. cit.
8. Barbara Bolognese, Corey Harris, and Joachim Jelisiejew: Equations and tropicalization of
Enriques surfaces, op. cit.
9. John D. Wiltshire-Gordon, Alexander Woo, and Magdalena Zajaczkowska: Specht polytopes
and Specht matroids, op. cit.
10. Madeline Brandt, DJ Bruce, Taylor Brysiewicz, Robert Krone, and Elina Robeva: The degree
of SO(n), op. cit.
11. Lara Bossinger, Sara Lamboglia, Kalina Mincheva, and Fatemeh Mohammadi: Computing
toric degenerations of flag varieties, op. cit.
12. Laura Escobar and Allen Knutson: The multidegree of the multi-image variety, op. cit.
13. Evan D. Nash, Ata Firat Pir, Frank Sottile, and Li Ying: The convex hull of two circles in R3 ,
op. cit.
14. Theodosios Douvropoulos, Joachim Jelisiejew, Bernt Ivar Utstøl Nødland, and Zach Teitler:
The Hilbert scheme of 11 points in A3 is irreducible, op. cit.
15. Bo Lin and Martin Ulirsch: Towards a tropical Hodge bundle, op. cit.
16. Lars Kastner, Kristin Shaw, and Anna-Lena Winz: Computing sheaf cohomology in Polymake, op. cit.
17. Karim Adiprasito and Raman Sanyal: Relative Stanley–Reisner theory and Upper Bound Theorems for Minkowski sums, Publications mathématiques de l’IHÉS 124 (2016) 99–163.
18. Valery Alexeev: Moduli of weighted hyperplane arrangements, Advanced Courses in Mathematics, CRM Barcelona, Birkhäuser/Springer, Basel, 2015.
19. Vladimir I. Arnold: A-graded algebras and continued fractions, Communications on Pure and
Applied Mathematics 42 (1989) 993-1000.
20. Alexander Barvinok: A Course in Convexity, Graduate Studies in Mathematics 54, American
Mathematical Society, Providence, RI, 2002.
21. Dori Bejleri and Gjergji Zaimi: The topology of equivariant Hilbert schemes,
arXiv:1512.05774.
22. Dustin Cartwright, Daniel Erman, Mauricio Velasco, and Bianca Viray: Hilbert schemes of 8
points, Algebra Number Theory 3 (2009) 763–795.
23. Ana-Maria Castravet and Jenia Tevelev: Hilbert’s 14th problem and Cox rings, Compositio
Mathematica 142 (2006) 1479–1498.
24. Wouter Castryck and John Voight: On nondegeneracy of curves, Algebra Number Theory 6
(2012) 1133–1169.
25. Ciro Ciliberto: On the degree and genus of smooth curves in a projective space, Advances in
Mathematics 81 (1990) 198–248.
26. Bernard Deconinck and Mark van Hoeij: Computing Riemann matrices of algebraic curves,
Physica D 152/153 (2001) 28–46.
27. David Eisenbud and Joe Harris: On varieties of minimal degree (a centennial account), Algebraic Geometry, Bowdoin, 1985, Proc. Sympos. Pure Math. 46, Part 1, Amer. Math. Soc.,
Providence, RI, 1987, pp. 3–13.
28. Günter Ewald: Combinatorial Convexity and Algebraic Geometry, Graduate Texts in Mathematics, 168, Springer-Verlag, New York, 1996
29. Tom Fisher: Pfaffian presentations of elliptic normal curves, Trans. Amer. Math. Soc. 362
(2010) 2525–2540.
30. Jörg Gretenkort, Peter Kleinschmidt, and Bernd Sturmfels: On the existence of certain smooth
toric varieties, Discrete and Computational Geometry 5 (1990) 255–262.
31. Tawanda Gwena: Degenerations of cubic threefolds and matroids, Proceedings of the American Mathematical Society 133 (2005) 1317–1323.
32. Paul Helminck: Tropical Igusa invariants and torsion embeddings, arXiv:1604.03987.
33. Milena Hering and Diane Maclagan: The T-graph of a multigraded Hilbert scheme, Experimental Mathematics 21 (2012) 280–297.
34. Ronald W.H. Hudson: Kummer's Quartic Surface, Cambridge University Press, 1905.
35. Menelaos Karavelas, Christos Konaxis and Eleni Tzanaki: The maximum number of faces of
the Minkowski sum of three convex polytopes, J. Comput. Geom. 6 (2015) 21–74.
36. Kiumars Kaveh and Christopher Manon: Khovanskii bases, Newton-Okounkov polytopes and
tropical geometry of projective varieties, arXiv:1610.00298.
37. Maximilian Kreuzer and Harald Skarke: Complete classification of reflexive polyhedra in
four dimensions, Advances in Theoretical and Mathematical Physics 4 (2000) 1209–1230.
38. Diane Maclagan and Bernd Sturmfels: Introduction to Tropical Geometry, Graduate Studies
in Mathematics, Vol 161, American Mathematical Society, 2015.
39. Christopher Manon: The algebra of SL3 (C) conformal blocks, Transformation Groups 4
(2013) 1165–1187.
40. Ezra Miller and Bernd Sturmfels: Combinatorial Commutative Algebra, Graduate Texts in
Mathematics, 227, Springer-Verlag, New York, 2004.
41. David Mumford, John Fogarty, and Frances Kirwan: Geometric Invariant Theory, Ergebnisse
der Mathematik und ihrer Grenzgebiete, vol. 34, third edition, Springer, Berlin, 1994.
42. Irena Peeva and Mike Stillman: Toric Hilbert schemes, Duke Math. J. 111 (2002) 419–449.
43. Motakuri Ramana and Alan Goldman: Some geometric results in semidefinite programming,
Journal of Global Optimization 7 (1995) 33–50.
44. Emanuel Reinecke: Moduli Space of Cubic Surfaces, Bachelorarbeit Mathematik, Universität
Bonn, July 2012.
45. Qingchun Ren, Steven Sam, and Bernd Sturmfels: Tropicalization of classical moduli spaces,
Mathematics in Computer Science 8 (2014) 119–145.
46. Qingchun Ren, Steven Sam, Gus Schrader, and Bernd Sturmfels: The universal Kummer
threefold, Experimental Mathematics 22 (2013) 327–362.
47. Qingchun Ren, Kristin Shaw, and Bernd Sturmfels: Tropicalization of Del Pezzo surfaces,
Advances in Mathematics 300 (2016) 156–189.
48. James Ruffo: Quasimaps, straightening laws, and quantum cohomology for the Lagrangian
Grassmannian, Algebra Number Theory 2 (2008) 819–858.
49. Raman Sanyal, Frank Sottile and Bernd Sturmfels: Orbitopes, Mathematika 57 (2011) 275–
314.
50. Frank-Olaf Schreyer: Computer aided unirationality proofs of moduli spaces, Handbook of
moduli, Vol. III, 257–280, Adv. Lect. Math. (ALM), 26, Int. Press, Somerville, MA, 2013.
51. Joshua Scott: Grassmannians and cluster algebras, Proc. London Math. Soc. 92 (2006) 345–
380.
52. Frank Sottile: Real Schubert calculus: polynomial systems and a conjecture of Shapiro and
Shapiro, Experimental Mathematics 9 (2000) 161–182.
53. Reinhard Steffens and Thorsten Theobald: Combinatorics and genus of tropical intersections
and Ehrhart theory, SIAM J. Discrete Math. 24 (2010) 17–32.
54. Bernd Sturmfels: Gröbner Bases and Convex Polytopes, American Mathematical Society,
University Lectures Series, No 8, Providence, Rhode Island, 1996.
55. Bernd Sturmfels: Four counterexamples in combinatorial algebraic geometry, Journal of Algebra 230 (2000) 282–294.
56. Bernd Sturmfels: Solving Systems of Polynomial Equations, American Mathematical Society,
CBMS Regional Conferences Series, No 97, Providence, Rhode Island, 2002.
57. Bernd Sturmfels: The Hurwitz form of a projective variety, Journal of Symbolic Computation
79 (2017) 186–196.
58. Bernd Sturmfels and Seth Sullivant: Toric ideals of phylogenetic invariants, Journal of Computational Biology 12 (2005) 204–228.
59. Bernd Sturmfels and Mauricio Velasco: Blow-ups of Pn−3 at n points and spinor varieties,
Journal of Commutative Algebra 2 (2010) 223–244.
60. Bernd Sturmfels and Zhiqiang Xu: Sagbi bases of Cox-Nagata rings, J. Eur. Math. Soc. 12
(2010) 429–459.
61. Christopher Swierczewski and Bernard Deconinck: Riemann theta functions in Sage with
applications, Mathematics and Computers in Simulation 127 (2016) 263–272.
62. Qiaochu Yuan: Explicit equations in the plane for elliptic curves as space quartics, paper for
the Intel Talent Search 2008, https://math.berkeley.edu/∼qchu/Intel.pdf.
63. Jakub Witaszek: The degeneration of the Grassmannian into a toric variety and the calculation
of the eigenspaces of a torus action, Journal of Algebraic Statistics 6 (2015) 62–79.
| 0 |
PROGRAMMING REQUESTS/RESPONSES WITH
GREATFREE IN THE CLOUD ENVIRONMENT
Bing Li
Department of Computer Science and Engineering, Xi'An Technological University,
China
[email protected]
ABSTRACT
Programming requests/responses with GreatFree is an efficient programming technique to implement distributed
polling in the cloud computing environment. GreatFree is a distributed programming environment
through which diverse distributed systems can be established through programming rather than
configuring or scripting. GreatFree emphasizes the importance of programming since it offers developers
the opportunities to leverage their distributed knowledge and programming skills. Additionally,
programming is the unique way to construct creative, adaptive and flexible systems to accommodate
various distributed computing environments. With the support of GreatFree code-level Distributed
Infrastructure Patterns, Distributed Operation Patterns and APIs, the difficult procedure is accomplished
in a programmable, rapid and highly-patterned manner, i.e., the programming behaviors are simplified
as the repeatable operation of Copy-Paste-Replace. Since distributed polling is one of the fundamental
techniques to construct distributed systems, GreatFree provides developers with relevant APIs and
patterns to program requests/responses in the novel programming environment.
KEYWORDS
Code-Level Design Patterns, Cloud Programming, Distributed Systems, Highly-Patterned Development
Environment
1. INTRODUCTION
Programming requests/responses with GreatFree is an efficient programming technique to
implement distributed polling in the cloud computing environment. Distributed polling [1] is an
indispensable technique to construct distributed systems. When doing that with GreatFree, the
procedure becomes straightforward since developers are not required to take care of underlying
tough techniques. Using the rich APIs and patterns of DIP (Distributed Infrastructure Patterns)
and DOP (Distributed Operation Patterns) supported by GreatFree, the tough programming
skills are turned into simplified behaviors, the operation of CPR (Copy-Paste-Replace).1
GreatFree is a software development environment to construct cloud computing systems in
diverse distributed computing environments through programming rather than configuring or
scripting. Programming is defined as the procedure to implement a practical software system
with essential domain knowledge and required programming skills. In the case of cloud
programming, it is necessary for developers to perceive distributed knowledge as well as
corresponding programming expertise. Although the overhead is high, it is the unique way to
construct creative, adaptive and flexible systems with high-quality for various distributed
computing environments. To lower the burden to program distributed systems, GreatFree
provides developers with various code-level distributed patterns (DIP and DOP) and rich APIs.
The research is sponsored by the Ministry of Education, Shaanxi Province, China. The number of the funding is
14JK1358.
Different from the object-oriented ones [2] which are independent of computing environments
and the other distributed ones [3] which should be implemented by developers for specific
environments, the code-level patterns in GreatFree specify the generic, mature and executable
programs in the relatively fixed forms suitable to most distributed computing environments. For
that, developers’ effort is lowered even though they implement a distributed system through
programming rather than scripting and configuring.
To make use of them, developers need to perform the system-level programming first. That is,
they determine the infrastructure of their distributed systems by choosing the most suitable
GreatFree DIP according to their distributed knowledge. Then, if the chosen DIP does not
exactly fulfill some requirements of the particular computing environment, they adapt it with
GreatFree DOP and APIs. After that, the system-level
programming is completed and it constructs a high quality system foundation for developers to
perform the application-level programming using GreatFree DOP and APIs further.
When programming with the operation of CPR, two concepts, i.e., the class reference and the
instance reference, need to be identified by developers. Like most object-oriented
programming approaches, GreatFree is implemented in the language of Java [4] and naturally supports
object-oriented programming. The class reference specifies the classes that need to be
created newly for a new feature. The instance reference specifies the class instances that need
to be created newly for a new feature. In the case of programming requests/responses through
CPR, one sample request/response should be chosen from DIP to follow. Those references can
be retrieved according to the sample. Once those references are available, developers can
create new classes or instances respectively through the straightforward operation of CPR in the
corresponding patterns of DOP.
The paper is organized in the following sections. Section 1 gives a brief introduction to the
technique of programming requests/responses with GreatFree. Section 2 presents the related
work to implement distributed polling in distributed computing environments and makes a
rough comparison. Section 3 talks about the procedures to program cloud systems with
GreatFree. Section 4 explains the details to program requests/responses with GreatFree through
one case. Section 5 talks about the evaluation environment of the programming technique and
discusses the potential future work of the technique.
2. RELATED WORK
Since distributed polling is one of the fundamental functions for any cloud systems, it is
required to implement it with efficient approaches. Nowadays, there are three categories of
solutions to the issue, including the traditional-languages-based solutions, the framework-based
solutions [5] and the new-languages-based solutions [6][7].
2.1 Traditional Languages Based Solutions
To adapt to the requirements of distributed computing environments, many new APIs have been
proposed to assist developers. Traditional languages are defined as those
programming languages that were originally established on the synchronous and standalone
computing environment. In that obsolete environment, programming languages aim to solve
problems which can be processed in a sequential fashion within a single computer. That
is the case because the underlying CPU is designed in such a way and because computing resources
were limited in terms of processing power and main memory when those
languages were initially proposed.
Nowadays it is common to design asynchronous and distributed computing systems rather than
synchronous and standalone ones. For that, many new APIs and techniques have to be put
forward to accommodate them to the changes afterward. Although those techniques have
succeeded in various application domains, it is well known that it is difficult to learn, grasp and
program with them. The primary reason is due to the fact that each of them is founded on a
synchronous and standalone computing environment such that the main purpose of those
techniques is to transform the synchronous and standalone programming mechanism to the
asynchronous and distributed one. Unfortunately, the transformation is visible to developers, i.e.,
they are enforced to take care of the heavy overhead before implementing their final high level
applications. Because of the difficulty, only a small portion of developers would like to work
with those techniques. It is often heard that one developer knows the concept of threading [8]
whereas she never programs with it.
As one of the fundamental techniques of distributed computing, distributed polling is not
compatible with synchronous and standalone languages either. Various issues have to be
resolved by developers themselves, such as communication [9], serialization [9], concurrency [8],
scaling [10], and so forth. In addition, since traditional programming
languages are rarely chosen as the development approach for distributed systems, it becomes almost
impossible to program requests/responses or distributed polling with them.
2.2 Frameworks Based Solutions
Because of the difficulty to program with traditional programming languages to implement
distributed polling as well as distributed systems, many frameworks [5] are proposed to hide
distributed computing environments from developers. In consequence, for the issue of
distributed polling, developers have no idea whether it happens in a standalone computer or a
distributed environment when working on those frameworks.
The solution aims to convert a physically asynchronous and distributed computing system to a
logically synchronous and standalone one for developers on the application level. Unluckily, it
consists of numerous mutations in a distributed environment. According to the property of highlevel applications, distributed systems are categorized into chatting, e-commerce, video, storage,
search, social networks, and etc. With respect to transmission protocols, they are roughly
categorized into the messaging one as well as the streaming one. It is reasonable to classify
distributed environments into the one for heavyweight data and the one for lightweight data. It
is also possible to identify distributed systems with the employed distributed models, such as the
client/server one or the peer-to-peer one. Another perspective to observe distributed systems is
to investigate its topology, such as the centralized one as well as the decentralized one. The
scale is also an important indicator to describe distributed circumstances, such as the small-scale
one and the large-scale one. Some systems work within a stable computing environment
whereas others are located in a churning one. It frequently happens over the Internet. Another
case over the Internet is that the huge differences exist among conceptual computing nodes in
terms of their computing capacities. For that, the environment is identified as the homogeneous
one and the heterogeneous one. The most difficult system might be the one that is dominated by
human beings other than computers only [11][12]. The one is named as the social computing
system [11][12] instead of the machine based one. In practice, it is usual that the above
characters coexist in one particular environment such that it results in a more complicated
computing system.
Because of the complexity of distributed environments, it is impossible to propose a system that
hides all of the underlying techniques such that those frameworks focus on one specific domain
that is widely engaged in popular applications. Within the narrow environment, the framework
handles limited underlying techniques to transform an asynchronous and distributed system to a
synchronous and standalone one. Thus, developers can implement their final applications
through configuring or scripting rapidly through specifying their high level application
requirements only within those dedicated frameworks. Although the approach is efficient,
developers lose the opportunity to exploit their knowledge and skills to establish a system that
exactly matches their functional and non-functional requirements. When a small incompatible
obstacle is detected in the environment, it is difficult for them to handle. As for the issue of
distributed polling, it is virtualized into the technique of conceptually synchronous method
invocation. Developers cannot be aware of any distributed issues when using the technique.
Similarly, it brings advantages as well as disadvantages.
2.3 New Languages Based Solutions
In recent years, some new languages [6][7] were proposed to adapt to the varieties of distributed
computing environments. They attempt to provide developers with a new programming
methodology that is asynchronous and distributed by nature. The philosophy is absolutely
different from that of traditional ones, which are synchronous and standalone by default. When
programming with those languages, the code is executed asynchronously without any explicit
effort. That is, one subprogram is called through asynchronous messaging instead of
synchronous invocation. Such a fundamental programming model [6][7] is valid in both
the local and distributed environments without any revisions. Using those languages, distributed
polling is completely a built-in feature such that developers are not required to program it.
With such languages, developers are able to program general distributed systems rather than working within one specific circumstance, since the languages strive to be generic in the domain of distributed computing. Although this approach obtains the advantage of rapid development, the technology of the new languages introduces some disadvantages as well. First of all, one potential problem of such a solution is that it forces developers to depart from their traditional programming experience and work in a new programming context, which changes the language expressions as well as the primary methodologies. Developers are forced to think in the new asynchronous manner defined by those languages.
Additionally, the pursuit of the new languages is identical to that of the framework-based solutions. Both of them intend to hide underlying techniques from developers in order to raise the development speed and guarantee the quality of the final system. Different from the approach of providing application-specific script languages, the new languages claim to support a concurrent and distributed programming model, such as the Actor model [6][7], which does not exist in traditional languages by default. This succeeds in speeding up the development procedure, but it loses the possibility of remaining open to changes. For example, Gossip [6] is employed as the protocol for Akka [6] to maintain the distributed nodes. In many distributed environments, however, it is more practical to allow designers to deal with that issue themselves, since each environment usually needs its own appropriate management algorithms.
More critically, just like traditional programming languages, the methodology of the new languages assumes that the world can always be abstracted into a universal fundamental model. Once the model is available, any scenario in the computing environment can be programmed through a uniform approach. Unfortunately, when confronted with the complexity of the Internet-based distributed environment, the assumption can hardly be correct all the time. For example, a large-scale and social-oriented system [11][12] requires a dedicated routing algorithm [13] to manage the overall topology and even the states of each node. The algorithm has to deal with heterogeneous computing resources on a global scale by involving social theories [11][12]. Thus, the Akka built-in registry service has to be abandoned. In consequence, the high-level multicasting needs to be redesigned since it relies on the dedicated routing algorithm. Once that happens, the overall infrastructure of Akka is overwhelmed such that the Actor model becomes useless.
2.4 Comparisons
Different from all of the existing solutions, GreatFree proposes a series of patterns and APIs [14] to assist developers in building distributed systems. First of all, those APIs resolve the most fundamental distributed problems that exist in diverse environments, such that developers do not need to implement them from scratch with traditional programming languages. In addition, with the support of code-level patterns, including DIP and DOP, the programming procedure is conducted in a highly-patterned development environment to reach high efficiency and productivity. Finally, GreatFree does not believe that an abstract universal model exists across the various distributed computing environments, especially over the Internet. To provide developers with a convenient programming environment, the APIs and patterns serve as building blocks that are adaptive to diverse cases. Because those APIs and patterns are designed in a reasonable way, developers can compose them into arbitrary distributed systems with simplified behaviors, according to their own distributed knowledge and programming skills.
3. PROGRAMMING REQUESTS/RESPONSES WITH GREATFREE
Programming requests/responses with GreatFree is a fundamental technique to implement the request-response-based remote interaction, i.e., distributed polling, between two remote computers within a distributed computing environment. Such an interaction is indispensable for constructing any type of distributed system, including the currently popular one, i.e., the cloud.
3.1 The Philosophy
Different from traditional approaches, GreatFree provides developers with a highly-patterned environment to program requests/responses. First of all, developers implement requests/responses rapidly with GreatFree APIs [5] and code-level design patterns [5] without taking care of underlying techniques. During the procedure, they just need to perform simplified behaviors, i.e., the operation of CPR (copying, pasting, and replacing). In addition, with GreatFree, developers do not work in a virtualized standalone environment in which underlying techniques are invisible. Instead, at a minimum, they should be aware of the basics of the computing environment; it is preferred that developers have sufficient knowledge about the Internet-based distributed environment in order to propose reasonable solutions. Meanwhile, they retain their programming skills when utilizing GreatFree APIs and patterns. Such a methodology not only guarantees high programming efficiency but also provides developers with sufficient flexibility to deal with various distributed problems using their own proficiency.
3.2 The Procedure of GreatFree Cloud Programming
The procedure of GreatFree programming is divided into two phases, the system-level phase and the application-level phase. The first one constructs the underlying foundation of the overall system that suits one particular distributed environment. The second one implements the upper application over the foundation established by the first phase.
3.2.1 The System-Level Programming
When confronted with the problem of programming within a distributed environment, developers first need to consider which DIP (Distributed Infrastructure Pattern) is suitable to the domain. The DIP of GreatFree covers the most common cases of distributed computing environments in the real world. If the chosen DIP fits perfectly, developers can start to implement their upper-level applications directly with GreatFree DOP and APIs, and the system-level phase is terminated.
However, because of the rich mutations of distributed computing environments over the Internet, it is possible that the chosen DIP fulfills only partial requirements. In that case, developers have to extend the system-level programming phase in order to update the chosen DIP to accommodate the requirements of the particular environment using GreatFree DOP and APIs. One extreme case is that no DIP is suitable to the requirements of one particular distributed computing environment. If so, the development effort rises to a large extent, since the particular infrastructure needs to be constructed heavily with GreatFree DOP and APIs.
Only after the updated DIP meets the requirements of the specific circumstance is the system-level programming accomplished; then it is time to implement the applications upon it.
3.2.2 The Application-Level Programming
After the first phase is performed, a high-quality system foundation is constructed that fits the current distributed environment to a high degree. Thereafter, developers can program their upper applications. During this step, DOP and APIs are employed further to reach the final goals on the high level.
3.3 The Steps to Program Requests/Responses
As one of the mandatory techniques for constructing distributed systems, programming requests/responses with GreatFree is performed in a highly-patterned development environment. During the procedure, developers work with a limited number of GreatFree APIs and DOP only. More importantly, with the support of the highly-patterned environment, programming behaviors are simplified into repeatable operations, i.e., CPR, through identifying and following class references and instance references. This means that, in the extreme case, developers can implement a pair of request/response even if they have no idea about the GreatFree methodology.
3.3.1 Choosing a Pair of Sample Request/Response
As a highly-patterned development environment, GreatFree supports rich code-level patterns in the form of DIP, DOP and APIs. This indicates that developers are able to follow existing code to implement their own systems. To program requests/responses, developers can choose the counterpart code from the environment in each step.
The first step is to choose one pair of request/response from the DIP developers work on. Since a DIP is a code-level system designed for one particular distributed environment and the technique of distributed polling is indispensable in any circumstance, it is convenient to locate one sample from the DIP. In addition, to help developers learn the development environment, some practical samples are implemented as open source. Therefore, it is convenient to choose one pair of sample request/response on the code level to start the programming.
3.3.2 Creating the Request/Response
To ease the design, after the sample request/response is chosen, developers do not need to program the new request/response from scratch. Instead, since each pair of request/response encloses the data to be sent, developers just need to retain the request/response pattern of SM (Server Message) [14] and replace the existing data with the data they intend to send. If the data is primitive, the replacement is performed straightforwardly. If the data is self-defined, i.e., a class, it is necessary to make it serializable since it is required to be transmitted within a distributed environment.
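The following is a minimal sketch of such a self-defined data type; the class name and fields are contrived for illustration and are not part of GreatFree, but the point is that anything enclosed in a request/response must implement java.io.Serializable so that it can cross the network:

import java.io.Serializable;

public class SensorReading implements Serializable
{
	private static final long serialVersionUID = 1L;

	// Contrived fields of the self-defined data to be enclosed in a response.
	private double value;
	private long timestamp;

	public SensorReading(double value, long timestamp)
	{
		this.value = value;
		this.timestamp = timestamp;
	}

	public double getValue()
	{
		return this.value;
	}

	public long getTimestamp()
	{
		return this.timestamp;
	}
}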
3.3.3 Retrieving Class References
A class reference is defined as an independent class that references a distributed message in the GreatFree development environment. Such a class is programmed anew for a particular distributed computing environment in a predefined code-level pattern. Class references can be easily retrieved for developers to follow through CPR. In the case of programming requests/responses, the class references exist at the server side only, including a thread that processes the incoming request concurrently and a thread creator that is injected into the underlying thread pool to create new instances of the thread in case the existing ones cannot deal with the volume of incoming requests. It is convenient for developers to retrieve the class references, i.e., the thread and the thread creator, with any programming tools.
3.3.4 Retrieving Instance References
An instance reference specifies a newly created class instance in an existing component, where the components are the essential modules in a DIP that keep the particular infrastructure working in a high-quality manner. Similar to class references, instance references are written in a code-level highly-patterned form such that it is convenient for developers to follow them through CPR. In the case of programming requests/responses, the instance references exist at both the server side and the client side. At the server side, the instance reference is written in the pattern of RD (Request Dispatcher) [14] in the component of the server dispatcher, which is structured in the pattern of SD (Server Dispatcher) [14] as well. At the client side, the instance reference is located in the pattern of RR (Remote Reader) [14]. The pattern exists in the component of the client reader, which is placed in the pattern of CR (Client Reader) [14].
One special case is that an instance reference is not declared explicitly with an instance name in the sample code, but is instead referenced through type casting, the new keyword and so forth. This happens frequently in order to simplify the syntax. Such a reference is called an implicit instance reference. In contrast, the one declared with a name is called an explicit instance reference. Whichever type of instance reference it is, the programming behavior is always the operation of CPR. Usually, an implicit instance reference is even more straightforward to follow since it is referenced independently in the form of the class.
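The distinction can be illustrated with the following contrived, self-contained snippet; none of the names below come from GreatFree, only the referencing styles matter:

import java.util.ArrayList;
import java.util.List;

public class ReferenceStyles
{
	public static void main(String[] args)
	{
		// Explicit instance reference: the instance is declared with its own name.
		String explicitMessage = buildMessage();
		System.out.println(explicitMessage.length());

		// Implicit instance references: the instances are referenced through the
		// new keyword or a type cast without ever receiving a declared name.
		System.out.println(new StringBuilder("request").reverse());
		List<Object> raw = new ArrayList<Object>();
		raw.add("response");
		System.out.println(((String)raw.get(0)).toUpperCase());
	}

	private static String buildMessage()
	{
		return "request";
	}
}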
3.3.5 Creating the Thread and Its Creator
The thread is responsible for processing incoming requests concurrently at the server side. In GreatFree, each type of request/response has one dedicated thread. Programming such a thread from scratch is a time-consuming and error-prone task. With the support of GreatFree, developers are only required to follow, through CPR, the class references retrieved earlier. To do that, developers first create a new class named in a proper convention. Then, they keep the pattern through the operations of copying/pasting from the class reference. At the least, they have to replace the sample request/response and the sample processing code with their own.
Another class reference is the thread creator that accompanies the newly created thread. This reference is created anew as well. Fortunately, the procedure is straightforward through CPR, i.e., keeping the pattern and replacing the sample request/response and the sample thread.
3.3.6 Updating the Server Dispatcher
After the thread and its creator are created, they should be injected into the server side to process incoming requests. To achieve this in GreatFree, developers are required to perform the operation of CPR on the instance reference in the patterns of RD (Request Dispatcher) and SD (Server Dispatcher) at the server side. The instance reference contains the samples of the request/response, the thread and the thread creator. To do that through CPR, developers first create a new instance for the new request/response following the instance reference, and replace its components, i.e., the samples of the request/response, the thread and the thread creator, with the new classes created following the corresponding class references in the relevant patterns. Then, for each place where the instance reference is located, the operation of CPR is performed again in the patterns of RD and SD.
3.3.7 Updating the Client Reader
Compared with the server side, the client side is simple since it is not necessary to take into account the issue of processing a high volume of incoming requests. Instead, the client's only task is to send the request to the server and wait until the response from the server is received. Thus, at the client side, the instance reference contains the sample request/response only. Similar to what is performed on the server side, developers just need to create a new instance containing the new request/response following the instance reference and then perform the operation of CPR at each location of the instance reference in the pattern of CR (Client Reader).
4. CASE STUDY
To program requests/responses, it is suggested to follow the order of the request/response, the server side and then the client side. No matter which step is being worked on, developers' programming behaviors are simplified as CPR. The steps are presented with a simple example: sending one request from one client to one server to retrieve certain information, and holding on at the client until the response enclosing the retrieved information is received.
4.1 The Request/Response
To start the programming, one pair of sample request/response should be located from the DIP or any other open-source sample project. For example, one sample pair, WeatherRequest.java/WeatherResponse.java, is chosen from the relevant Java package of the DIP and shown in Listing 1 and Listing 2, respectively. This is the request/response that developers can follow to design their own through CPR.
public class WeatherRequest extends ServerMessage
{
public WeatherRequest()
{
super(MessageType.WEATHER_REQUEST);
}
}
Listing 1. The Code of WeatherRequest.java
public class WeatherResponse extends ServerMessage
{
// Declare the instance of Weather. 02/15/2016, Bing Li
private Weather weather;
public WeatherResponse(Weather weather)
{
super(MessageType.WEATHER_RESPONSE);
this.weather = weather;
}
public Weather getWeather()
{
return this.weather;
}
}
Listing 2. The Code of WeatherResponse.java
Following that sample through CPR, the new request/response, i.e., TestRequest.java/TestResponse.java, is listed in Listing 3 and Listing 4, respectively.
public class TestRequest extends ServerMessage
{
private String request;
public TestRequest(String request)
{
super(MessageType.TEST_REQUEST);
this.request = request;
}
public String getRequest()
{
return this.request;
}
}
Listing 3. The Code of TestRequest.java
public class TestResponse extends ServerMessage
{
private String response;
public TestResponse(String response)
{
super(MessageType.TEST_RESPONSE);
this.response = response;
}
public String getResponse()
{
return this.response;
}
}
Listing 4. The Code of TestResponse.java
Comparing the structure of WeatherRequest/WeatherResponse with that of TestRequest/TestResponse, they are identical except for the data to be exchanged. The WeatherRequest encloses nothing whereas the TestRequest contains only one String, request. More importantly, both of them are written in the pattern of SM (Server Message). Correspondingly, the WeatherResponse encloses the self-defined data type, Weather, whereas the TestResponse just includes contrived data of the type String. Because of the identical structure of requests/responses in GreatFree, it is convenient to create new ones through the simplified behaviors, i.e., CPR. This shows that GreatFree is a highly-patterned programming environment in which CPR can be performed.
4.2 The Class References to the Sample Request/Response
To continue programming requests/responses using the approach of CPR, it is necessary to find the additional samples to follow. The sample code is easily retrieved within the DIP source code according to the request. If an appropriate IDE (Integrated Development Environment) [15] is used, the procedure becomes even more convenient. For example, if Eclipse [15] is the chosen IDE, developers can retrieve the corresponding sample by searching for the references to WeatherRequest, as shown in Figure 1. Through this approach, developers can locate all of the relevant code of the sample request. Each piece of that code should be followed through CPR at the server side as well as the client side. Figure 2 presents the references to WeatherResponse.
Figure 1. The References to the Sample Request, WeatherRequest
Figure 2. The References to the Sample Response, WeatherResponse
All of the references retrieved in this case are summarized in Table 1. All of them are the sample code developers need to follow for the newly created request/response, TestRequest/TestResponse. The references to the messages exist on both sides, the client and the server, since the request and the response are the messages exchanged between the client and the server.
Table 1. The References to the Sample Request/Response, WeatherRequest/WeatherResponse
Class | Explanation | Side
ClientReader | The client side that sends requests to the server side and waits until a response is received | Client
ClientUI | The UI presenter at the client side | Client
MessageConfig | The configurations of messages | N/A
WeatherStream | The output stream that sends the response to the client | N/A
MyServerDispatcher | The dispatcher that is responsible for dispatching received messages to corresponding threads such that those messages are processed concurrently | Server
WeatherThread | The thread that deals with the request of WeatherRequest | Server
WeatherThreadCreator | The creator that creates a new instance of WeatherThread in case the existing instances are too busy | Server
The next task is to determine the new classes to be created for TestRequest/TestResponse. Table 1 lists seven classes, i.e., ClientReader, ClientUI, MessageConfig, WeatherStream, MyServerDispatcher, WeatherThread and WeatherThreadCreator. Although their names are not necessarily given in any convention, it is highly suggested to append the suffixes -Reader, -UI, -Config, -Stream, -ServerDispatcher, -Thread and -ThreadCreator at the end of each of them. On the other hand, the prefix of each class reference preferably exhibits the request/response. Hence, according to the names of the classes, it is convenient to infer that three of them, WeatherStream, WeatherThread and WeatherThreadCreator, are designed particularly for the corresponding request/response, whereas the other four are existing classes for any messages. The ones that are designed particularly for WeatherRequest/WeatherResponse are called class references. Afterward, according to the class references to the sample request/response, developers are able to create new classes for TestRequest/TestResponse, i.e., TestStream, TestRequestThread and TestRequestThreadCreator, which are named in the same convention as their respective counterparts, i.e., WeatherStream, WeatherThread and WeatherThreadCreator. Table 2 and Table 3 list the class references to WeatherRequest/WeatherResponse and their counterparts, which should be programmed anew by developers.
Table 2. The Class References to WeatherRequest and Their Counterparts for TestRequest
Type | Class References to WeatherRequest | Class References to TestRequest
Request | WeatherRequest | TestRequest
Stream | WeatherStream | TestStream
Thread | WeatherThread | TestRequestThread
Thread Creator | WeatherThreadCreator | TestRequestThreadCreator
Table 3. The Class References to WeatherResponse and Their Counterparts for TestResponse
Type | Class References to WeatherResponse | Class References to TestResponse
Response | WeatherResponse | TestResponse
Thread | WeatherThread | TestRequestThread
Thread Creator | WeatherThreadCreator | TestRequestThreadCreator
In short, Table 1 helps developers locate all of the additional sample code to follow. When programming the new classes, developers should perform CPR on each of the class reference counterparts based on Table 2 and Table 3.
4.3 The Instance References to the Sample Request/Response
Besides class references to the sample request/response, it is required to follow instance
references. In accordance with Table 1, there are four existing classes, ClientReader, ClientUI,
MessageConfig and MyServerDispatcher, which are not designed particularly for the
request/response of WeatherRequest/WeatherResponse. Instead, they are used to process any
requests/responses at the client side and the server side, respectively. Inside the four classes,
some class instances reference the WeatherRequest/WeatherResponse. Table 4 lists the instance
references to WeatherRequest/WeatherResponse in the four classes.
Table 4. The Instance References to WeatherRequest/WeatherResponse and Their Counterparts to TestRequest/TestResponse

Class Instance | Type | Location | Counterpart
Implicit reference to WeatherRequest | WeatherRequest | ClientReader | TestRequest
Implicit reference to WeatherResponse | WeatherResponse | ClientReader | TestResponse
weatherResponse | WeatherResponse | ClientUI | testResponse
NO_WEATHER_RESPONSE | WeatherResponse | MessageConfig | NO_TEST_RESPONSE
weatherRequestDispatcher | RequestDispatcher<WeatherRequest, WeatherStream, WeatherResponse, WeatherThread, WeatherThreadCreator> | MyServerDispatcher | testRequestDispatcher
With respect to Table 4, developers need to program the counterpart instance references, including the implicit instance references in ClientReader, testResponse in ClientUI, NO_TEST_RESPONSE in MessageConfig and testRequestDispatcher in MyServerDispatcher, by following the sample instance references with the approach of CPR.
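The paper does not show the contents of MessageType or MessageConfig, so the following is only an illustrative sketch of what the counterpart entries might look like; the numeric type codes, the int type of the codes, and the sentinel value are all assumptions:

public class MessageType
{
	// Existing codes such as WEATHER_REQUEST/WEATHER_RESPONSE are omitted here;
	// the values below are contrived for illustration only.
	public final static int TEST_REQUEST = 101;
	public final static int TEST_RESPONSE = 102;
}

public class MessageConfig
{
	// A contrived sentinel returned by ClientReader when no response is received;
	// the actual definition in GreatFree is not shown in the paper.
	public final static TestResponse NO_TEST_RESPONSE = new TestResponse("");
}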
4.4 The Server Side
For a pair of request/response, most of the effort is usually spent on the server side since it has to deal with a potentially high volume of access load from incoming requests.
4.4.1 The Stream
According to Table 2, one class reference of WeatherRequest is WeatherStream. Besides creating the messages themselves, one additional effort is required when programming requests/responses: a new class, TestStream, shown in Listing 6, must be created following its counterpart, WeatherStream, shown in Listing 5, to complete the message programming. TestStream is used to send the response to the client at the server side.
public class WeatherStream extends OutMessageStream<WeatherRequest>
{
public WeatherStream(ObjectOutputStream out, Lock lock, WeatherRequest message)
{
super(out, lock, message);
}
}
Listing 5. The code of WeatherStream.java
The structure of WeatherStream is simple and it is straightforward to follow it through CPR.
Thus, the counterpart, TestStream, is created as shown in Listing 6.
public class TestStream extends OutMessageStream<TestRequest>
{
public TestStream(ObjectOutputStream out, Lock lock, TestRequest message)
{
super(out, lock, message);
}
}
Listing 6. The code of TestStream.java
4.4.2 The Thread
To process requests concurrently, a thread, e.g., TestRequestThread, should be designed. Its counterpart, WeatherThread, which is one class reference of WeatherRequest, is shown in Listing 7. It should be noted that one important pattern, RDWC (Request Double-While-Concurrency) [14], is used in the thread. This is one of the DOP in GreatFree. When programming a thread and attempting to keep it under the control of a pooling algorithm [14], it is required to employ this pattern.
public class WeatherThread extends RequestQueue<WeatherRequest, WeatherStream, WeatherResponse>
{
public WeatherThread(int maxTaskSize)
{
super(maxTaskSize);
}
public void run()
{
WeatherStream request;
WeatherResponse response;
while (!this.isShutdown())
{
// The loop detects whether the queue is empty or not. 02/15/2016, Bing Li
while (!this.isEmpty())
{
// Dequeue a request. 02/15/2016, Bing Li
request = this.getRequest();
// Initialize an instance of WeatherResponse. 02/15/2016, Bing Li
response = new WeatherResponse(WeatherDB.SERVER().getWeather());
try
{
// Respond the response to the remote client. 02/15/2016, Bing Li
this.respond(request.getOutStream(), request.getLock(), response);
}
catch (IOException e)
{
e.printStackTrace();
}
// Dispose the messages after the responding is performed. 02/15/2016, Bing Li
this.disposeMessage(request, response);
}
try
{
this.holdOn(ServerConfig.REQUEST_THREAD_WAIT_TIME);
}
catch (InterruptedException e)
{
e.printStackTrace();
}
}
}
}
Listing 7. The code of WeatherThread.java
After CPR, the thread of TestRequestThread is shown in Listing 8.
public class TestRequestThread extends RequestQueue<TestRequest, TestStream, TestResponse>
{
public TestRequestThread(int maxTaskSize)
{
super(maxTaskSize);
}
public void run()
{
TestStream request;
TestResponse response;
// The outer loop runs until the thread is shut down. 02/15/2016, Bing Li
while (!this.isShutdown())
{
// The loop detects whether the queue is empty or not. 02/15/2016, Bing Li
while (!this.isEmpty())
{
// Dequeue a request. 02/15/2016, Bing Li
request = this.getRequest();
// Initialize an instance of TestResponse. 02/15/2016, Bing Li
response = new TestResponse("response");
try
{
// Respond the response to the remote client. 02/15/2016, Bing Li
this.respond(request.getOutStream(), request.getLock(), response);
}
catch (IOException e)
{
e.printStackTrace();
}
// Dispose the messages after the responding is performed. 02/15/2016, Bing Li
this.disposeMessage(request, response);
}
try
{
this.holdOn(ServerConfig.REQUEST_THREAD_WAIT_TIME);
}
catch (InterruptedException e)
{
e.printStackTrace();
}
}
}
}
Listing 8. The code of TestRequestThread.java
Comparing the code of TestRequestThread with its counterpart, WeatherThread, the structure,
i.e., the pattern of RDWC, is identical. The distinctions occur in the request/response types and
the operations to process the messages. It proves further that GreatFree is a highly-patterned
programming environment.
4.4.3 The Thread Creator
The thread creator is employed by the underlying thread pool to create new instances of the thread to process requests when the existing ones can hardly deal with the access load. As a class reference to WeatherRequest as well as WeatherResponse, the sample code of WeatherThreadCreator is shown in Listing 9.
public class WeatherThreadCreator implements RequestThreadCreatable<WeatherRequest,
WeatherStream, WeatherResponse, WeatherThread>
{
@Override
public WeatherThread createRequestThreadInstance(int taskSize)
{
return new WeatherThread(taskSize);
}
}
Listing 9. The code of WeatherThreadCreator.java
The corresponding code to be programmed is named TestRequestThreadCreator. To program it with CPR, the structure remains, whereas the request, the stream, the response and the thread are replaced with TestRequest, TestStream, TestResponse and TestRequestThread. The code is shown in Listing 10.
public class TestRequestThreadCreator implements RequestThreadCreatable<TestRequest,
TestStream, TestResponse, TestRequestThread>
{
@Override
public TestRequestThread createRequestThreadInstance(int taskSize)
{
return new TestRequestThread(taskSize);
}
}
Listing 10. The code of TestRequestThreadCreator.java
4.4.4 The Server Dispatcher
The server dispatcher is the most important component for a distributed node acting as a server. It implements the primary underlying techniques of an independent computing unit in a distributed environment. After the request/response and the thread are available, they need to be embedded into the server dispatcher to finish the programming on the server side.
Different from the previous programming, which creates new classes following the class references to one pair of request/response, developers are in this case required to revise the existing server dispatcher by following the instance references to the sample request/response. According to Table 4, the instance reference to WeatherRequest/WeatherResponse in MyServerDispatcher, weatherRequestDispatcher, is the sample code to be followed by its counterpart. Developers just need to create a new instance, testRequestDispatcher, in the server dispatcher. Fortunately, the entire procedure is still straightforward to perform in a highly-patterned environment through CPR.
Listing 11 presents the code of the server dispatcher, MyServerDispatcher, which derives from the API, ServerMessageDispatcher. To save space, only the relevant instance references are shown in the listing.
public class MyServerDispatcher extends ServerMessageDispatcher<ServerMessage>
{
……
private RequestDispatcher<WeatherRequest, WeatherStream, WeatherResponse, WeatherThread,
WeatherThreadCreator> weatherRequestDispatcher;
……
public MyServerDispatcher(int threadPoolSize, long threadKeepAliveTime, int schedulerPoolSize,
long schedulerKeepAliveTime)
{
super(threadPoolSize, threadKeepAliveTime, schedulerPoolSize, schedulerKeepAliveTime);
……
this.weatherRequestDispatcher = new RequestDispatcher.RequestDispatcherBuilder
<WeatherRequest, WeatherStream, WeatherResponse, WeatherThread, WeatherThreadCreator>()
.poolSize(ServerConfig.REQUEST_DISPATCHER_POOL_SIZE)
.keepAliveTime(ServerConfig.REQUEST_DISPATCHER_THREAD_ALIVE_TIME)
.threadCreator(new WeatherThreadCreator())
.maxTaskSize(ServerConfig.MAX_REQUEST_TASK_SIZE)
.dispatcherWaitTime(ServerConfig.REQUEST_DISPATCHER_WAIT_TIME)
.waitRound(ServerConfig.REQUEST_DISPATCHER_WAIT_ROUND)
.idleCheckDelay(ServerConfig.REQUEST_DISPATCHER_IDLE_CHECK_DELAY)
.idleCheckPeriod(ServerConfig.REQUEST_DISPATCHER_IDLE_CHECK_PERIOD)
.scheduler(super.getSchedulerPool())
.build();
……
}
// Shut down the server message dispatcher. 09/20/2014, Bing Li
public void shutdown() throws InterruptedException
{
……
// Dispose the weather request dispatcher. 02/15/2016, Bing Li
this.weatherRequestDispatcher.dispose();
……
// Shutdown the derived server dispatcher. 11/04/2014, Bing Li
super.shutdown();
}
// Process the available messages in a concurrent way. 09/20/2014, Bing Li
public void consume(OutMessageStream<ServerMessage> message)
{
// Check the types of received messages. 11/09/2014, Bing Li
switch (message.getMessage().getType())
{
……
// If the message is the one of weather requests. 11/09/2014, Bing Li
case MessageType.WEATHER_REQUEST:
// Check whether the weather request dispatcher is ready. 02/15/2016, Bing Li
if (!this.weatherRequestDispatcher.isReady())
{
// Execute the weather request dispatcher concurrently. 02/15/2016, Bing Li
super.execute(this.weatherRequestDispatcher);
}
// Enqueue the instance of WeatherRequest into the dispatcher for
// concurrent responding. 02/15/2016, Bing Li
this.weatherRequestDispatcher.enqueue(new WeatherStream(message.getOutStream(),
message.getLock(), (WeatherRequest)message.getMessage()));
break;
……
}
}
}
Listing 11. The code of MyServerDispatcher.java
According to the code in Listing 11, the instance reference to WeatherRequest/WeatherResponse, weatherRequestDispatcher, is written in the syntax of Generics [16]. It encloses five components, i.e., WeatherRequest, WeatherStream, WeatherResponse, WeatherThread and WeatherThreadCreator, all of which are the class references to WeatherRequest/WeatherResponse. Thus, when programming its counterpart, testRequestDispatcher, developers should first copy the instance reference and replace the instance references as well as the class references with their counterparts, respectively. The operations need to be repeated for every line where the instance reference emerges. The entire CPR procedure is performed in a straightforward manner.
During these steps, all of the previously followed sample code, i.e., the class references and instance references, coexists in the server dispatcher. Together they form some of the important DOP patterns, i.e., the Server Dispatcher (SD) [14] and the Request Dispatcher (RD) [14]. That is why developers are able to update the code with simplified behaviors, i.e., CPR.
After all of the above steps, the programming on the server side is done and the code is updated as shown in Listing 12.
public class MyServerDispatcher extends ServerMessageDispatcher<ServerMessage>
{
……
private RequestDispatcher<WeatherRequest, WeatherStream, WeatherResponse, WeatherThread,
WeatherThreadCreator> weatherRequestDispatcher;
private RequestDispatcher<TestRequest, TestStream, TestResponse, TestRequestThread,
TestRequestThreadCreator> testRequestDispatcher;
……
public MyServerDispatcher(int threadPoolSize, long threadKeepAliveTime, int schedulerPoolSize,
long schedulerKeepAliveTime)
{
super(threadPoolSize, threadKeepAliveTime, schedulerPoolSize, schedulerKeepAliveTime);
……
this.weatherRequestDispatcher = new RequestDispatcher.RequestDispatcherBuilder
<WeatherRequest, WeatherStream, WeatherResponse, WeatherThread, WeatherThreadCreator>()
.poolSize(ServerConfig.REQUEST_DISPATCHER_POOL_SIZE)
.keepAliveTime(ServerConfig.REQUEST_DISPATCHER_THREAD_ALIVE_TIME)
.threadCreator(new WeatherThreadCreator())
.maxTaskSize(ServerConfig.MAX_REQUEST_TASK_SIZE)
.dispatcherWaitTime(ServerConfig.REQUEST_DISPATCHER_WAIT_TIME)
.waitRound(ServerConfig.REQUEST_DISPATCHER_WAIT_ROUND)
.idleCheckDelay(ServerConfig.REQUEST_DISPATCHER_IDLE_CHECK_DELAY)
.idleCheckPeriod(ServerConfig.REQUEST_DISPATCHER_IDLE_CHECK_PERIOD)
.scheduler(super.getSchedulerPool())
.build();
this.testRequestDispatcher = new RequestDispatcher.RequestDispatcherBuilder
<TestRequest, TestStream, TestResponse, TestRequestThread, TestRequestThreadCreator>()
.poolSize(ServerConfig.REQUEST_DISPATCHER_POOL_SIZE)
.keepAliveTime(ServerConfig.REQUEST_DISPATCHER_THREAD_ALIVE_TIME)
.threadCreator(new TestRequestThreadCreator())
.maxTaskSize(ServerConfig.MAX_REQUEST_TASK_SIZE)
.dispatcherWaitTime(ServerConfig.REQUEST_DISPATCHER_WAIT_TIME)
.waitRound(ServerConfig.REQUEST_DISPATCHER_WAIT_ROUND)
.idleCheckDelay(ServerConfig.REQUEST_DISPATCHER_IDLE_CHECK_DELAY)
.idleCheckPeriod(ServerConfig.REQUEST_DISPATCHER_IDLE_CHECK_PERIOD)
.scheduler(super.getSchedulerPool())
.build();
……
}
public void shutdown() throws InterruptedException
{
……
// Dispose the weather request dispatcher. 02/15/2016, Bing Li
this.weatherRequestDispatcher.dispose();
this.testRequestDispatcher.dispose();
……
// Shutdown the derived server dispatcher. 11/04/2014, Bing Li
super.shutdown();
}
// Process the available messages in a concurrent way. 09/20/2014, Bing Li
public void consume(OutMessageStream<ServerMessage> message)
{
// Check the types of received messages. 11/09/2014, Bing Li
switch (message.getMessage().getType())
{
……
// If the message is the one of weather requests. 11/09/2014, Bing Li
case MessageType.WEATHER_REQUEST:
// Check whether the weather request dispatcher is ready. 02/15/2016, Bing Li
if (!this.weatherRequestDispatcher.isReady())
{
// Execute the weather request dispatcher concurrently. 02/15/2016, Bing Li
super.execute(this.weatherRequestDispatcher);
}
// Enqueue the instance of WeatherRequest into the dispatcher for
// concurrent responding. 02/15/2016, Bing Li
this.weatherRequestDispatcher.enqueue(new WeatherStream(message.getOutStream(),
message.getLock(), (WeatherRequest)message.getMessage()));
break;
case MessageType.TEST_REQUEST:
System.out.println("TEST_REQUEST received @" + Calendar.getInstance().getTime());
// Check whether the test request dispatcher is ready. 02/15/2016, Bing Li
if (!this.testRequestDispatcher.isReady())
{
// Execute the test request dispatcher concurrently. 02/15/2016, Bing Li
super.execute(this.testRequestDispatcher);
}
// Enqueue the instance of TestRequest into the dispatcher for
// concurrent responding. 02/15/2016, Bing Li
this.testRequestDispatcher.enqueue(new TestStream(message.getOutStream(),
message.getLock(), (TestRequest)message.getMessage()));
break;
……
}
}
}
Listing 12. The code of MyServerDispatcher.java after CPR
4.5 The Client Side
The last step of programming requests/responses is to update the instance references at the client side. To do that, the programming effort is spent on two portions, i.e., following the implicit instance references and following the explicit instance references.
4.5.1 Following the Implicit Instance References
According to Table 4, only implicit references exist in the code of ClientReader. To program the client side, it is still necessary to perform the operation of CPR on those implicit instance references. Before any updates are made, the code of ClientReader is shown in Listing 13. As with MyServerDispatcher in Listing 11, only the relevant instance references are presented; the others are omitted to save space.
public class ClientReader
{
……
public static WeatherResponse getWeather()
{
try
{
return (WeatherResponse)(RemoteReader.REMOTE().read(NodeID.DISTRIBUTED().getKey(),
ServerConfig.SERVER_IP, ServerConfig.SERVER_PORT, new WeatherRequest()));
}
catch (ClassNotFoundException | RemoteReadException | IOException e)
{
e.printStackTrace();
}
return MessageConfig.NO_WEATHER_RESPONSE;
}
……
}
Listing 13. The Code of ClientReader.java
Following the implicit instance references to WeatherRequest/WeatherResponse is even more convenient, since those references are usually used for type casting or for creating new instances independently, so the code is simpler. To program their counterparts for TestRequest/TestResponse, developers just copy the lines where the implicit instances emerge and replace each of them. During the procedure, the programming is performed within one DOP, RR (Remote Reader).
After that, the code is updated as shown in Listing 14.
public class ClientReader
{
……
public static WeatherResponse getWeather()
{
try
{
return (WeatherResponse)(RemoteReader.REMOTE().read(NodeID.DISTRIBUTED().getKey(),
ServerConfig.SERVER_IP, ServerConfig.SERVER_PORT, new WeatherRequest()));
}
catch (ClassNotFoundException | RemoteReadException | IOException e)
{
e.printStackTrace();
}
return MessageConfig.NO_WEATHER_RESPONSE;
}
public static TestResponse getResponse(String request)
{
try
{
return (TestResponse)(RemoteReader.REMOTE().read(NodeID.DISTRIBUTED().getKey(),
ServerConfig.SERVER_IP, ServerConfig.SERVER_PORT, new TestRequest(request)));
}
catch (ClassNotFoundException | RemoteReadException | IOException e)
{
e.printStackTrace();
}
return MessageConfig.NO_TEST_RESPONSE;
}
……
}
Listing 14. The Code of ClientReader After CPR
4.5.2 Following the Explicit Instance References
The last step is to program by following the explicit instance references. In this case, only one explicit instance reference to WeatherResponse exists, in the code of ClientUI at the client side. Since it is necessary to display the response on the screen, this step takes a little effort. In a more general case that does not need to present the response, this step can be skipped.
The code of ClientUI before programming is shown in Listing 15.
public class ClientUI
{
……
public void send(int option)
{
……
WeatherResponse weatherResponse;
Weather weather;
// Check the option to interact with the polling server. 09/21/2014, Bing Li
switch (option)
{
……
// If the GET_WEATHER option is selected, send the request message to the
// remote server. 02/18/2016, Bing Li
case MenuOptions.GET_WEATHER:
weatherResponse = ClientReader.getWeather();
weather = weatherResponse.getWeather();
System.out.println("Temperature: " + weather.getTemperature());
System.out.println("Forcast: " + weather.getForecast());
System.out.println("Rain: " + weather.isRain());
System.out.println("How much rain: " + weather.getHowMuchRain());
System.out.println("Time: " + weather.getTime());
break;
……
}
}
}
Listing 15. The Code of ClientUI.java
The procedure to follow the explicit instance reference to WeatherResponse at the client side is identical to that at the server side. After CPR, the updated code of ClientUI.java is shown in Listing 16.
public class ClientUI
{
……
public void send(int option)
{
……
WeatherResponse weatherResponse;
Weather weather;
TestResponse testResponse;
// Check the option to interact with the polling server. 09/21/2014, Bing Li
switch (option)
{
……
// If the GET_WEATHER option is selected, send the request message to the
// remote server. 02/18/2016, Bing Li
case MenuOptions.GET_WEATHER:
weatherResponse = ClientReader.getWeather();
weather = weatherResponse.getWeather();
System.out.println("Temperature: " + weather.getTemperature());
System.out.println("Forcast: " + weather.getForecast());
System.out.println("Rain: " + weather.isRain());
System.out.println("How much rain: " + weather.getHowMuchRain());
System.out.println("Time: " + weather.getTime());
break;
case MenuOptions.REQUEST_TEST:
testResponse = (TestResponse)ClientReader.getResponse("request");
System.out.println(testResponse.getResponse());
break;
……
}
}
}
Listing 16. The Code of ClientUI.java after CPR
4.6 Testing
Now it is time to run the code of distributed polling implemented with the technique of programming requests/responses in GreatFree. It is necessary to program some code that invokes the client reader; to save space, it is omitted in this paper. The detailed information can be found in Chapter 4 of the book, Programming Clouds With GreatFree [14].
The server should be started up first and then the client is executed. A new option, Request Test, is displayed in the menu. After the option is selected, a message, "TEST_REQUEST", is displayed on the server side (Figure 3). This indicates that distributed polling has been injected into the DIP successfully. At the client side, the "response" is received and displayed (Figure 4).
Figure 3. The message, “TEST_REQUEST”, of the new request, TestRequest, is displayed on
the server side
Figure 4. The client side after the new request is sent and the response is received
5. EVALUATION AND FUTURE WORK
The GreatFree cloud programming environment was implemented during a large research project, the New World Wide Web [13]. The project proposes many new fundamental algorithms that need to be implemented. To implement them, it is impossible to employ any existing framework for distributed computing environments. Instead, a traditional generic programming language, i.e., Java SE [4], has to be selected as the primary development approach. Because the system is huge, the work has been taking a couple of years. All of the source code, i.e., the DIP, DOP and APIs, is tested on that platform. Furthermore, the development environment is taught as the primary content of the Cloud Programming classes in the undergraduate and graduate programs of Xi'an Technological University, and some students program with the tool to complete their research projects. The New World Wide Web project has now reached more than 550,000 lines of code. It is designed to enclose an unlimited number of computers as clusters, to be as scalable as possible in order to deal with the potentially high volume of access from users around the globe. Additionally, since the development tool was introduced, the implementation efficiency has risen noticeably. Because it is necessary to confront the most difficult distributed environment, i.e., the large-scale heterogeneous social distributed computing circumstance [13], this further shows that GreatFree has sufficient flexibility to deal with the extreme case without high implementation overhead.
The issue of programming diverse distributed systems, especially over the Internet, remains tough. For that reason, it is necessary to move forward based on the current achievements. Since no existing programming language is well suited to the current complicated distributed computing environments over the Internet, it is feasible to propose a new language that is asynchronous and distributed by nature rather than synchronous and standalone by default. Moreover, different from the existing ones, such as Akka [6] and Erlang [7], the new language intends to avoid the pursuit of a universal abstract model. Instead, the new language attempts to propose a series of novel keywords and semantics that are transformed from the DIP, DOP and APIs. Finally, the new language does not consider it necessary to change the current syntax; it will conform to the conventions of traditional programming languages, like C, C++ and Java, in order to fit most developers' habits.
REFERENCES
[1] Andrew S. Tanenbaum, Maarten Van Steen. 2007. Section 4.3, Message-Oriented Communication,
Chapter 4, Communication, Distributed Systems Principles Paradigms, 2nd Edition. Pearson
Prentice Hall, ISBN: 0-13-239227-5.
[2] Steven John Metsker. 2002. Design Patterns Java Workbook. Addison Wesley, ISBN: 978-1-49194483-7.
[3] Douglas Schmidt, Michael Stal, Hans Rohnert and Frank Buschmann. 2000. Pattern-Oriented
Software Architecture, Patterns for Concurrent and Networked Objects, Volume 2. John Wiley &
Sons Ltd., ISBN: 0-471-60695-2.
[4] Bruce Eckel. 2006. Thinking in Java. Prentice Hall, ISBN: 0-13-187248-6.
[5] Bing Li. 2016. GreatFree: The Java APIs and Idioms to Program Large-Scale Distributed Systems.
International Journal of Advanced Information Technology, Volume 6, No. 1, Pages 1-22.
[6] Hector Veiga Ortiz, Piyush Mishra. 2017. Akka Cookbook. Packt Publishing, ISBN: 978-1-78528818-0.
[7] Joe Armstrong. 2013. Programming Erlang Software for a Concurrent World, 2nd Edition. The
Pragmatic Bookshelf, ISBN-13: 978-1-937785-53-6.
[8] Brian Goetz, Tim Peierls, Joshua Bloch, Joseph Bowbeer, David Holmes and Doug Lea. 2006. Java
Concurrency in Practice. Addison-Wesley, ISBN: 978-0-321-34960-6.
[9] Elliotte Rusty Harold. 2014. Java Network Programming. O'Reilly Media, ISBN: 978-1-449-35767-2.
[10] Andrew S. Tanenbaum, Maarten Van Steen. 2007. Section 4.3, Message-Oriented Communication,
Chapter 4, Communication, Distributed Systems Principles Paradigms, 2nd Edition. Pearson
Prentice Hall, ISBN: 0-13-239227-5.
[11] Yilei Shao. 2007. Exploring Social Networks in Computing Systems. PhD Dissertation, Princeton
University.
[12] Nitin Agarwal. 2009. Social Computing in Blogosphere. PhD Dissertation, Arizona State University.
[13] Bing Li. 2017. The Five Layers of the Internet on the Computing Level. To appear in the
International Conference on Advanced Information Technologies and Applications (ICAITA 2017).
[14] Bing Li. 2017. Programming Clouds with GreatFree. To appear. Book. URL:
https://github.com/greatfree/Programming-Clouds.
[15] Eclipse. 2017. URL: https://www.eclipse.org/ide/.
[16] Bruce Eckel. 2006. Generics, Thinking in Java, Pages 439-534. Prentice Hall, ISBN: 0-13-187248-6.
Evolutionary Multimodal Optimization: A Short Survey
Ka-Chun Wong (Department of Computer Science, University of Toronto)
arXiv:1508.00457v1 [] 3 Aug 2015
August 4, 2015
Real-world problems often have multiple distinct solutions. For instance, optical engineers need to tune the recording parameters to obtain as many optimal solutions as possible for multiple trials in the varied-line-spacing holographic grating design problem. Unfortunately, most traditional optimization techniques focus on finding a single optimal solution. They need to be applied several times, and even then all solutions are not guaranteed to be found. Thus the multimodal optimization problem was proposed, in which we are interested not only in a single optimal point but also in the others. With strong parallel search capability, evolutionary algorithms are shown to be particularly effective in solving this type of problem. In particular, the evolutionary algorithms for multimodal optimization usually not only locate multiple optima in a single run, but also preserve their population diversity throughout a run, resulting in their global optimization ability on multimodal functions. In addition, the techniques for multimodal optimization are borrowed as diversity-maintenance techniques for other problems. In this chapter, we describe and review the state-of-the-art evolutionary algorithms for multimodal optimization in terms of methodology, benchmarking, and application.
1 Introduction and Background
Since the genetic algorithm was proposed by John H. Holland [9] in the early 1970s, researchers have been exploring the power of evolutionary algorithms [47], for instance, in biological pattern discovery [48] and computer vision [51]. In particular, their function optimization capability has been highlighted [6] because of their high adaptability to different non-convex function landscapes, to which we cannot apply traditional optimization techniques.
Real-world problems often have multiple distinct solutions [45, 46]. For instance, optical engineers need to tune the recording parameters to obtain as many optimal solutions as possible for multiple trials in the varied-line-spacing holographic grating design problem because the design constraints are too difficult to be expressed and solved in mathematical forms [31]. Unfortunately, most traditional optimization techniques focus on finding a single optimal solution. They need to be applied several times, and even then all solutions are not guaranteed to be found. Thus the multimodal optimization problem was proposed. In that
problem, we are interested in not only a single optimal point, but also the others. Given an objective function, an algorithm is expected to find all optimal points in a single run. With strong parallel search capability, evolutionary algorithms are shown to be particularly effective in solving this type of problem [6]: given a function f : X → R, we would like to find all global and local maxima (or minima) of f in a single run. Although the objective is clear, it is not easy to satisfy in practice because some problems may have too many optima to be located. Nonetheless, it is still of great interest to researchers how these problems can be solved, because the algorithms for multimodal optimization usually not only locate multiple optima in a single run, but also preserve their population diversity throughout a run, resulting in their global optimization ability on multimodal functions. Moreover, the techniques for multimodal optimization are usually borrowed as diversity-maintenance techniques for other problems [42, 50].
2 Problem Definition
The multimodal optimization problem definition depends on the type of optimization (minimization or maximization). They are similar in principle and defined as follows:
2.1 Minimization
In this problem, given f : X → R, we would like to find all global and local minima of f in a single run.
Definition 1 (Local Minimum [41]): A (local) minimum x̂_l ∈ X of an (objective) function f : X → R is an input element with f(x̂_l) ≤ f(x) for all x neighboring x̂_l. If X ⊆ R^N, we can write: ∀x̂_l ∃ ε > 0 : f(x̂_l) ≤ f(x) ∀x ∈ X, |x − x̂_l| < ε.
Definition 2 (Global Minimum [41]): A global minimum x̂_g ∈ X of an (objective) function f : X → R is an input element with f(x̂_g) ≤ f(x) ∀x ∈ X.
2.2 Maximization
In this problem, given f : X → R, we would like to find all global and local maxima of f in a single run.
Definition 3 (Local Maximum [41]): A (local) maximum x̂_l ∈ X of an (objective) function f : X → R is an input element with f(x̂_l) ≥ f(x) for all x neighboring x̂_l. If X ⊆ R^N, we can write: ∀x̂_l ∃ ε > 0 : f(x̂_l) ≥ f(x) ∀x ∈ X, |x − x̂_l| < ε.
Definition 4 (Global Maximum [41]): A global maximum x̂_g ∈ X of an (objective) function f : X → R is an input element with f(x̂_g) ≥ f(x) ∀x ∈ X.
3 Methodology
In the literature, different evolutionary methods have been proposed for multimodal optimization. In this section, we discuss them and categorize them into different methodologies.
3.1 Preselection
In 1970, the doctoral thesis by Cavicchio introduced different methods for genetic algorithms [3]. In particular, the preselection scheme was proposed to maintain the population diversity. In this scheme, the children compete with their parents for survival. If a child has a fitness (measured by an objective function) higher than its parent, the parent is replaced by the child in the next generation.
3.2 Crowding
In 1975, the work by De Jong [13] introduced the crowding technique to increase the chance of locating multiple optima. In the crowding technique, each child is compared to a random sub-population of cf members of the existing parent population (cf is the crowding factor). The parent member which is most similar to the child (measured by a distance metric) is selected. If the child has a higher fitness than the selected parent member, then it replaces that member in the population. Besides genetic algorithms, Thomsen has also incorporated the crowding technique [13] into differential evolution (CrowdingDE) for multimodal optimization [40]. In his study, the crowding factor is set to the population size and Euclidean distance is adopted as the dissimilarity metric; the smaller the distance, the more similar two individuals are, and vice versa. Although intensive computation accompanies it, the technique can effectively transform differential evolution into an algorithm specialized for multimodal optimization. In 2012, CrowdingDE was investigated and extended by Wong et al., demonstrating competitive performance even when compared to other state-of-the-art methods [49].
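A minimal sketch of the crowding replacement rule is given below for a maximization problem; the real-coded representation and the squared Euclidean distance are assumptions made only for illustration:

import java.util.Random;

public class CrowdingReplacement
{
	// Compare the child to cf randomly chosen parents, find the most similar one,
	// and replace it only if the child has a better (higher) fitness.
	static void crowdingReplace(double[][] pop, double[] fit, double[] child,
			double childFit, int cf, Random rng)
	{
		int nearest = -1;
		double best = Double.MAX_VALUE;
		for (int k = 0; k < cf; k++)
		{
			int i = rng.nextInt(pop.length);
			double d = 0.0;
			for (int j = 0; j < child.length; j++)
			{
				double diff = pop[i][j] - child[j];
				d += diff * diff;	// squared Euclidean distance
			}
			if (d < best)
			{
				best = d;
				nearest = i;
			}
		}
		if (childFit > fit[nearest])
		{
			pop[nearest] = child.clone();
			fit[nearest] = childFit;
		}
	}
}

In CrowdingDE, the crowding factor equals the population size and the child is compared against every individual (the random sampling above would then simply become a full scan), at the cost of the intensive computation mentioned above.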
3.3 Fitness Sharing
In 1989, Goldberg and Richardson proposed a fitness-sharing niching technique as a diversity preserving strategy to solve the multimodal optimization problem [7]. They proposed
a shared fitness function, instead of an absolute fitness function, to evaluate the fitness of
a individual in order to favor the growth of the individuals which are distinct from the
others. The shared fitness function is defined as follows:
f 0 (xi ) = Shared F itness =
Actual F itness
f (xi )
= PN
Degree of Sharing
j=1 sh(d(xi , xj ))
where f 0 (xi ) is the shared fitness of the ith individual xi ; f (xi ) is the actual fitness of the
ith individual xi ; d(xi , xj ) is the distance function between the two individuals xi and xj ;
3
sh(d) is the sharing function. With this technique, a population can be prevented from being dominated by a particular type of individual. Nonetheless, a careful adjustment of the sharing function sh(d) is needed, because it relates the fitness domain f(x_i) to the distance domain d(x_i, x_j), which are supposed to be independent of each other.
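As an illustration, the sketch below computes shared fitness values in Python/NumPy; the triangular sharing function with a niche radius sigma_share is only one common choice and is an assumption here, since the text above does not fix sh(d):

import numpy as np

def shared_fitness(population, raw_fitness, sigma_share=0.1, alpha=1.0):
    # population: (N, D) array; raw_fitness: length-N array of f(x_i).
    # Assumed sharing function: sh(d) = 1 - (d / sigma_share)^alpha if
    # d < sigma_share, else 0.
    diffs = population[:, None, :] - population[None, :, :]
    d = np.linalg.norm(diffs, axis=2)                  # pairwise distances
    sh = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)                       # degree of sharing (>= 1)
    return raw_fitness / niche_count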
3.4 Species Conserving
The species conserving genetic algorithm (SCGA) [17] is a technique for evolving parallel subpopulations for multimodal optimization. Before each generation starts, the algorithm selects a set of species seeds which bypass the subsequent procedures and are saved into the next generation. The algorithm then divides the population into several species based on a dissimilarity measure. The fittest individual is selected as the species seed for each species. After the identification of species seeds, the population undergoes the usual genetic algorithm operations: selection, crossover, and mutation. As these operations may threaten the survival of less fit species, the saved species seeds are copied back to the population at the end of each generation.
To determine the species seeds in a population, the algorithm first sorts the population in decreasing fitness order. Once sorted, it picks the fittest individual as the first species seed and forms a species region around it. The next fittest individual is then tested as to whether it lies in an existing species region. If not, it is selected as a species seed and another species region is created around it; otherwise, it is not selected. Similar operations are applied to the remaining individuals, which are subsequently checked against all existing species seeds.
To copy the species seeds back to the population after the genetic operations have been executed, the algorithm needs to scan all the individuals in the current population and identify the species to which each belongs. Once this is identified, the algorithm replaces the worst individual (lowest fitness) of a species with that species' seed. If no individual can be found in a species for replacement, the algorithm replaces the worst not-yet-replaced individual in the whole population. In short, the main idea is to preserve population diversity by preserving the fittest individual of each species.
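The seed-selection step can be sketched as follows (Python/NumPy; the species distance sigma_s is an algorithm parameter, and taking the species region as a ball of radius sigma_s/2 around each seed follows the usual convention rather than anything stated above):

import numpy as np

def select_species_seeds(population, fitnesses, sigma_s):
    # population: (N, D) array; fitnesses: length-N array (maximization).
    order = np.argsort(-fitnesses)   # indices sorted by decreasing fitness
    seeds = []
    for idx in order:
        x = population[idx]
        # A new seed is created only if the individual lies outside the
        # species region of every existing seed.
        if all(np.linalg.norm(x - population[s]) >= sigma_s / 2.0 for s in seeds):
            seeds.append(idx)
    return seeds                     # indices of the species seeds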
3.5 Covariance Matrix Adaptation
Evolution strategies are an effective method for numerical optimization. In recent years, the variant CMA-ES (Covariance Matrix Adaptation Evolution Strategy) has shown remarkable success [8]. To extend its capability, niching techniques have been introduced to cope with multimodal functions [35]. For instance, a concept called adaptive individual niche radius has been proposed to solve the niche radius problem commonly found in speciation algorithms [34].
3.6 Multiobjective Approach
At this point, we would like to note that the reader should not confuse evolutionary multimodal optimization (the main theme of this chapter) with evolutionary multiobjective optimization. The former aims at solving a single function for multiple optima, while the latter aims at solving multiple functions for Pareto-front solutions. Nonetheless, the techniques involved are related. In particular, Deb and Saha demonstrated that, by decomposing a single multimodal objective function problem into a bi-objective problem, they can solve a multimodal function using an evolutionary multiobjective optimization algorithm [4]. Briefly, they keep the original multimodal objective function as the first objective, while gradient information is used to define peaks in the second objective.
3.7 Ensemble
As mentioned in the previous sections, different niching algorithms have been proposed over the past years. Each algorithm has its own characteristics and design philosophy behind it. Although this makes a thorough comparison difficult, it is a double-edged sword: such a vast number of algorithms provides us with a "Swiss army knife" for optimization on different problems. In particular, Yu and Suganthan proposed an ensemble method to combine those algorithms into a powerful method called Ensemble of Niching Algorithms (ENA) [52]. An extension of this work can be found in [32].
3.8 Others
Researchers have explored many other ways to deal with the problem. These methods include: clearing [29], repeated iterations [1], species-specific explosion [43], traps [15], learning automata [25], honey bee foraging behavior [39], dynamic niching [27], spatially-structured clearing [5], cooperative artificial immune networks [21], particle swarm optimization [10, 19, 14, 22], and island models [2]. In particular, Stoean et al. have proposed a topological species conservation algorithm in which the proper topological separation into subpopulations gives it an advantage over the existing radius-based algorithms [38]. Comparison studies were conducted by Singh et al. [36], Kronfeld et al. [16], and Yu et al. [53]. Though different methods were proposed in the past, they are all based on the same fundamental idea: to strike an optimal balance between convergence and population diversity in order to locate multiple optima simultaneously in a single run [37, 44].
4 Benchmarking
4.1 Benchmark Functions
There are many multimodal functions proposed for benchmarking in the past literature. In particular, the following five benchmark functions are widely adopted: Deb's 1st function [43], the Himmelblau function [1], the Six-hump Camel Back function [24], the Branin function [24], and the Rosenbrock function [33]. In addition, five more benchmark functions (PP1 to PP5) can be found in [43, 23]. For more rigorous comparisons, the IEEE Congress on Evolutionary Computation (CEC) usually releases a test suite for multimodal optimization every year; more than 15 test functions can be found there [20].
4.2 Performance Metrics
Several performance metrics have been proposed in the past literature [23, 18, 17, 40]. Among them, the Peak Ratio (PR) and the Average Minimum Distance to the Real Optima (D) [23, 43] are commonly adopted.
• A peak is considered found when there exists an individual in the last population within 0.1 Euclidean distance of the peak. The Peak Ratio is then calculated using equation (3):

    Peak Ratio = (Number of peaks found) / (Total number of peaks)        (3)
• The average minimum distance to the optima (D) is calculated using equation (4):

    D = ( Σ_{i=1}^{n} min_{indiv ∈ pop} d(peak_i, indiv) ) / n             (4)

where n is the number of peaks, indiv denotes an individual, peak_i is the ith peak, pop denotes the last population, and d(peak_i, indiv) denotes the distance between peak_i and indiv.
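Both metrics are straightforward to compute once the true optima are known; the following Python/NumPy sketch evaluates them for one run (the 0.1 radius mirrors the criterion above and is passed as a parameter):

import numpy as np

def peak_metrics(known_peaks, final_population, radius=0.1):
    # known_peaks: (n, D) array of real optima; final_population: (N, D) array.
    dists = np.linalg.norm(known_peaks[:, None, :] - final_population[None, :, :],
                           axis=2)              # (n, N) pairwise distances
    nearest = dists.min(axis=1)                 # closest individual per peak
    peak_ratio = np.mean(nearest <= radius)     # equation (3)
    avg_min_distance = nearest.mean()           # equation (4)
    return peak_ratio, avg_min_distance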
As different algorithms perform different operations in one generation, it is unfair to set the termination condition as a number of generations. Alternatively, it is also unfair to adopt CPU time, because it substantially depends on the implementation details of the different algorithms, for instance the sorting techniques used to find elitists and the programming languages used. In contrast, fitness function evaluation is always the performance bottleneck [28]¹. Thus the number of fitness function evaluations is suggested as the running or termination condition for convergence analysis.
¹ For instance, over ten hours are needed to evaluate a calculation in computational fluid dynamics [11].
4.3 Statistical Tests
Since evolutionary multimodal optimization is stochastic in nature, multiple runs are needed to evaluate each method on each test function. The means and standard deviations of the performance metrics are usually reported for a fair comparison. To justify the results, statistical tests are usually adopted to assess the statistical significance, for instance t-tests, Mann-Whitney U-tests (MWU), and Kolmogorov-Smirnov tests (KS).
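For example, the per-run metric values of two algorithms can be compared as in the following sketch, which uses the existing SciPy implementations of the MWU and KS tests; the 0.05 significance level is the one used later in this chapter:

from scipy.stats import mannwhitneyu, ks_2samp

def compare_runs(metric_a, metric_b, alpha=0.05):
    # metric_a, metric_b: per-run values of one metric (e.g. peak ratio)
    # collected over repeated independent runs of two algorithms.
    _, p_mwu = mannwhitneyu(metric_a, metric_b, alternative='two-sided')
    _, p_ks = ks_2samp(metric_a, metric_b)
    return {'MWU significant': p_mwu < alpha, 'KS significant': p_ks < alpha}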
5 Application
Holographic gratings have been widely used in optical instruments for aberration correction. In particular, the Varied-Line-Spacing (VLS) holographic grating is distinguished by its capability of eliminating high-order aberrations in diffractive optical systems. It is commonly used in high-resolution spectrometers and monochromators. A recording optical system of a VLS holographic grating is outlined in [31].
5.1 Problem Modelling
The core components of the optical system are listed as follows [30]:
    M1, M2 : two spherical mirrors
    C, D   : two coherent point sources
    G      : a grating blank
In this system, there are two light point sources C and D. They emit light rays which are then reflected by mirrors M1 and M2, respectively. After the reflection, the light rays are projected onto the grating blank G. More details are given in [31, 26]. The objective of the design is to find several sets of design variables (or recording parameters [31]) that form the expected groove shape of G (or the distribution of groove density [30]). The design variables are listed as follows:
    γ   : the incident angle of the ray O1O
    ηC  : the incident angle of the ray CO1
    δ   : the incident angle of the ray O2O
    ηD  : the incident angle of the ray DO2
    pC  : the distance between C and M1 (CO1)
    qC  : the distance between M1 and G (O1O)
    pD  : the distance between D and M2 (DO2)
    qD  : the distance between M2 and G (O2O)
Mathematically, the goal is to minimize the definite integral of the squared error between the expected groove density and the practical groove density [31]:

    min J = ∫_{-w0}^{w0} (n_p − n_e)² dw

where w0 is the half-width of the grating, n_p is the practical groove density, and n_e is the expected groove density. These two groove densities are complicated functions of the design variables. Ling et al. have further derived the above formula into a simpler one [30]:

    min J = r1² + w0²(2 r1 r3 + r2²)/3 + w0⁴(r3² + 2 r2 r4)/5 + w0⁶ r4²/7

    r1 = j10/λ0 − n0,            r2 = j20/λ0 − n0 b2,
    r3 = 3 j30/(2 λ0) − n0 b3,   r4 = j40/(2 λ0) − n0 b4
where j10, j20, j30 and j40 are functions of the design variables (denoted n10, n20, n30 and n40, respectively, in [26]). Theoretically, the above objective is simple and clear. Unfortunately, there are many other auxiliary optical components in practice, whose constraints are too difficult to express and solve in mathematical form. An optimal solution is therefore not necessarily a feasible and favorable solution, and optical engineers often need to tune the design variables and find as many optimal solutions as possible for multiple trials. Multimodal optimization becomes necessary for this design problem.
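Once the geometry-dependent terms j10-j40 have been computed from the eight recording parameters (their full expressions are given in [26, 30] and are assumed here to be provided by a separate routine), the simplified merit function can be evaluated as in this Python sketch:

def grating_objective(j10, j20, j30, j40, w0, lambda0, n0, b2, b3, b4):
    # Simplified merit function J of Ling et al. [30]; j10..j40 correspond
    # to n10..n40 in [26] and depend on the eight design variables.
    r1 = j10 / lambda0 - n0
    r2 = j20 / lambda0 - n0 * b2
    r3 = 3.0 * j30 / (2.0 * lambda0) - n0 * b3
    r4 = j40 / (2.0 * lambda0) - n0 * b4
    return (r1 ** 2
            + w0 ** 2 * (2 * r1 * r3 + r2 ** 2) / 3.0
            + w0 ** 4 * (r3 ** 2 + 2 * r2 * r4) / 5.0
            + w0 ** 6 * r4 ** 2 / 7.0)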
5.2 Performance Measurements
As the objective function is an unknown landscape, the exact optima information is not available, so the previous performance metrics cannot be adopted. We propose two new performance metrics in this section. The first one is the best fitness, which is the fitness value of the fittest individual in the last population. The second one is the number of distinct peaks, where a distinct peak is considered found when there exists an individual in the last population whose fitness value is below a threshold of 0.0001 and no other individual within 0.1 Euclidean distance has already been counted as a peak. The threshold is chosen to be 0.0001 because the fitness values of the solutions found in [31] are around this order of magnitude. The distance is chosen to be 0.1 because this value has already been used for deciding which peaks are found in the peak ratio [43, 23]. Nonetheless, it is undeniable that such a threshold may not be ideal for this application because the landscape is unknown, although the value of 0.1 is the best choice we can adopt in this study.
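The counting rule for distinct peaks can be written down directly; the sketch below (Python/NumPy) assumes the problem is a minimization, so the "fitness" here is the objective value J that must fall below the threshold:

import numpy as np

def count_distinct_peaks(final_population, final_objectives,
                         threshold=1e-4, min_distance=0.1):
    # A solution counts as a new distinct peak if its objective value is
    # below the threshold and it lies at least min_distance (Euclidean)
    # away from every peak accepted so far.
    peaks = []
    for x, f in zip(final_population, final_objectives):
        if f >= threshold:
            continue
        if all(np.linalg.norm(np.asarray(x) - p) >= min_distance for p in peaks):
            peaks.append(np.asarray(x))
    return len(peaks)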
5.3 Parameter Setting
CrowdingDE-STL [49], CrowdingDE-TL [49], CrowdingDE-SL [49], the Crowding Genetic Algorithm (CrowdingGA) [13], CrowdingDE [40], the Fitness Sharing Genetic Algorithm (SharingGA) [7], SharingDE [40], the Species Conserving Genetic Algorithm (SCGA) [17], SDE [18], and UN [12] are selected for illustrative purposes in this application.
Table 1: Results for all algorithms tested on the VLS holographic grating design problem (50 runs)

Measurement             CrowdingDE-STL [49]  CrowdingDE-TL [49]  CrowdingDE-SL [49]  CrowdingGA [13]  CrowdingDE [40]
Mean of Best Fitness    8.29E-08             7.17E-07            1.18E-07            9.02E-06         3.66E-06
StDev of Best Fitness   2.91E-07             5.01E-06            4.04E-07            3.40E-05         2.31E-05
Mean of Peaks Found     41.42                45.54               43.38               8.94             41.98
StDev of Peaks Found    13.07                9.00                10.69               5.04             14.26

Measurement             SharingGA [7]        SharingDE [40]      SDE [18]            SCGA [17]        UN [12]
Mean of Best Fitness    1.87E+04             1.74E+02            1.13E+00            1.24E+02         9.19E-04
StDev of Best Fitness   6.82E+04             2.65E+02            1.56E+00            4.59E+02         3.15E-03
Mean of Peaks Found     0.0                  0.0                 0.06                0.02             7.22
StDev of Peaks Found    0.0                  0.0                 0.31                0.14             3.92
All the algorithms were run up to a maximum of 10000 fitness function evaluations. The above performance metrics were obtained by taking the average and standard deviation of 50 runs. The groove density parameters followed the settings in [31]: n0 = 1.400 × 10³ (line/mm), b2 = 8.2453 × 10⁻⁴ (1/mm), b3 = 3.0015 × 10⁻⁷ (1/mm²) and b4 = 0.0000 × 10⁻¹⁰ (1/mm³). The half-width w0 was 90 mm. The radii of the spherical mirrors M1 and M2 were 1000 mm. The recording wavelength (λ0) was 413.1 nm. The population size of all the algorithms was set to 50. The previous settings remained the same, except for the algorithm-specific parameters: the species distance of SDE and SCGA was set to 1000; the scaling factor and niche radius of SharingDE and SharingGA were set to 1 and 1000, respectively; the discount factor of the temporal locality was set to 0.5; and the survival selection method of the non-crowding algorithms was set to binary tournament [12].
5.4 Results
The results are tabulated in Table 1. It can be observed that CrowdingDE-STL achieves the best fitness, whereas CrowdingDE-TL achieves the best number of peaks found. To compare the algorithms rigorously, statistical tests have also been used; the results are depicted in Figure 1. One can observe that there are some statistically significant performance differences among the algorithms. In particular, the CrowdingDE-based methods are shown to produce results statistically different from CrowdingGA, SharingGA, SharingDE, SDE, and SCGA. Some configurations obtained after a run of CrowdingDE-STL on this problem are depicted in Figure 2. It can be seen that they are distinctly different and feasible configurations with which optical engineers can perform multiple trials after the single run.
6 Discussion
To conclude, we have briefly reviewed the state-of-the-art methods of evolutionary multimodal optimization from different perspectives in this chapter. Different evolutionary multimodal optimization methodologies were described. To compare them fairly, we described different benchmarking techniques such as performance metrics, test functions, and statistical tests. An application to Varied-Line-Spacing (VLS) holographic grating design was presented to demonstrate the real-world applicability of evolutionary multimodal optimization. Nonetheless, we would like to note several current limitations of evolutionary multimodal optimization, as well as possible remedies, at the end of this chapter.
First, most of the past studies focus only on low-dimensional test functions for benchmarking; more high-dimensional test functions should be incorporated in the future. Second, we would like to point out that evolutionary multimodal optimization is about far more than just finding multiple optima: the algorithms for multimodal optimization usually not only locate multiple optima in a single run, but also preserve their population diversity throughout a run, resulting in global optimization ability on multimodal functions. Moreover, the techniques for multimodal optimization are often borrowed as diversity-maintenance techniques for other problems. Third, the computational complexities of these methods are usually very high compared with other methods, since they involve population diversity maintenance, which implies that the related survival operators need to take the other individuals into account, resulting in additional time complexity.
References
[1] David Beasley, David R. Bull, and Ralph R. Martin. A sequential niche technique for
multimodal function optimization. Evol. Comput., 1(2):101–125, 1993.
[2] Mourad Bessaou, Alain Pétrowski, and Patrick Siarry. Island model cooperating with
speciation for multimodal optimization. In PPSN VI: Proceedings of the 6th International Conference on Parallel Problem Solving from Nature, pages 437–446, London,
UK, 2000. Springer-Verlag.
[3] Daniel Joseph Cavicchio. Adaptive search using simulated evolution. PhD thesis, University of Michigan, Ann Arbor, MI, USA, 1970.
[4] Kalyanmoy Deb and Amit Saha. Finding multiple solutions for multimodal optimization problems using a multi-objective evolutionary approach. In Proceedings of
the 12th Annual Conference on Genetic and Evolutionary Computation, GECCO ’10,
pages 447–454, New York, NY, USA, 2010. ACM.
[5] G. Dick. Automatic identification of the niche radius using spatially-structured clearing methods. In 2010 IEEE Congress on Evolutionary Computation (CEC), pages 1
–8, july 2010.
[6] David E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1989.
[7] David E. Goldberg and Jon Richardson. Genetic algorithms with sharing for multimodal function optimization. In Proceedings of the Second International Conference
on Genetic algorithms and their application, pages 41–49, Hillsdale, NJ, USA, 1987.
L. Erlbaum Associates Inc.
[8] Nikolaus Hansen and Andreas Ostermeier. Completely derandomized self-adaptation
in evolution strategies. Evol. Comput., 9:159–195, June 2001.
[9] John H. Holland. Adaptation in natural and artificial systems. MIT Press, Cambridge,
MA, USA, 1992.
[10] Zhen Ji, Huilian Liao, Yiwei Wang, and Q.H. Wu. A novel intelligent particle optimizer
for global optimization of multimodal functions. In CEC 2007. IEEE Congress on
Evolutionary Computation, 2007, pages 3272 –3275, sept. 2007.
[11] Yaochu Jin, M. Olhofer, and B. Sendhoff. A framework for evolutionary optimization
with approximate fitness functions. IEEE Transactions on Evolutionary Computation,
6(5):481–494, 2002.
[12] Kenneth A. De Jong. Evolutionary Computation. A Unified Approach. MIT Press,
Cambridge, MA, USA, 2006.
[13] Kenneth Alan De Jong. An analysis of the behavior of a class of genetic adaptive
systems. PhD thesis, University of Michigan, Ann Arbor, MI, USA, 1975.
[14] Yau-Tarng Juang, Shen-Lung Tung, and Hung-Chih Chiu. Adaptive fuzzy particle
swarm optimization for global optimization of multimodal functions. Information
Sciences, 181(20):4539 – 4549, 2011. Special Issue on Interpretable Fuzzy Systems.
[15] Naoya Karatsu, Yuichi Nagata, Isao Ono, and Shigenobu Kobayashi. Globally multimodal function optimization by real-coded genetic algorithms using traps. In 2010
IEEE Congress on Evolutionary Computation (CEC), pages 1–8, 2010.
[16] M. Kronfeld and A. Zell. Towards scalability in niching methods. In 2010 IEEE
Congress on Evolutionary Computation (CEC), pages 1 –8, july 2010.
[17] Jian Ping Li, Marton E. Balazs, Geoffrey T. Parks, and P. John Clarkson. A species
conserving genetic algorithm for multimodal function optimization. Evol. Comput.,
10(3):207–234, 2002.
[18] Xiaodong Li. Efficient differential evolution using speciation for multimodal function
optimization. In GECCO ’05: Proceedings of the 2005 conference on Genetic and
evolutionary computation, pages 873–880, New York, NY, USA, 2005. ACM.
[19] Xiaodong Li. Niching without niching parameters: Particle swarm optimization using
a ring topology. IEEE Transactions on Evolutionary Computation, 14(1):150 –169,
feb. 2010.
[20] Xiaodong Li, Ke Tang, Mohammad N. Omidvar, Zhenyu Yang, and Kai Qin. Benchmark functions for the cec 2013 special session and competition on large-scale global
optimization, 2013.
[21] Li Liu and Wenbo Xu. A cooperative artificial immune network with particle swarm
behavior for multimodal function optimization. In CEC 2008. (IEEE World Congress
on Computational Intelligence). IEEE Congress on Evolutionary Computation, 2008,
pages 1550 –1555, june 2008.
[22] Lili Liu, Shengxiang Yang, and Dingwei Wang. Force-imitated particle swarm optimization using the near-neighbor effect for locating multiple optima. Information
Sciences, In Press, Uncorrected Proof:–, 2010.
[23] Rodica I. Lung, Camelia Chira, and D. Dumitrescu. An agent-based collaborative
evolutionary model for multimodal optimization. In GECCO ’08: Proceedings of the
2008 GECCO conference companion on Genetic and evolutionary computation, pages
1969–1976, New York, NY, USA, 2008. ACM.
[24] Zbigniew Michalewicz. Genetic algorithms + data structures = evolution programs
(3rd ed.). Springer-Verlag, London, UK, 1996.
[25] K. Najim and A.S. Poznyak. Multimodal searching technique based on learning automata with continuous input and changing number of actions. IEEE Transactions
on Systems, Man, and Cybernetics, Part B: Cybernetics, 26(4):666 –673, aug. 1996.
[26] Takeshi Namioka and Masato Koike. Aspheric wave-front recording optics for holographic gratings. Appl. Opt., 34(13):2180–2186, 1995.
[27] A. Nickabadi, M.M. Ebadzadeh, and R. Safabakhsh. Evaluating the performance of
dnpso in dynamic environments. pages 2640 –2645, oct. 2008.
[28] Yew S. Ong, Prasanth B. Nair, and Andrew J. Keane. Evolutionary optimization of
computationally expensive problems via surrogate modeling. AIAA Journal, 41(4):687
–696, 2003.
[29] A. Petrowski. A clearing procedure as a niching method for genetic algorithms. In
Proceedings of IEEE International Conference on Evolutionary Computation, 1996,
pages 798–803, Nagoya, Japan, May 1996.
[30] Ling Qing, Wu Gang, and Wang Qiuping. Restricted evolution based multimodal
function optimization in holographic grating design. In The 2005 IEEE Congress
on Evolutionary Computation, 2005, volume 1, pages 789–794, Edinburgh, Scotland,,
September 2005.
[31] Ling Qing, Wu Gang, Yang Zaiyue, and Wang Qiuping. Crowding clustering genetic
algorithm for multimodal function optimization. Appl. Soft Comput., 8(1):88–95, 2008.
[32] Bo-Yang Qu and P.N. Suganthan. Novel multimodal problems and differential evolution with ensemble of restricted tournament selection. In 2010 IEEE Congress on
Evolutionary Computation (CEC), pages 1 –7, july 2010.
[33] Yun-Wei Shang and Yu-Huang Qiu. A note on the extended rosenbrock function.
Evol. Comput., 14(1):119–126, 2006.
[34] Ofer Shir and Thomas Back. Niche radius adaptation in the cma-es niching algorithm.
In Thomas Runarsson, Hans-Georg Beyer, Edmund Burke, Juan Merelo-Guervos,
L. Whitley, and Xin Yao, editors, Parallel Problem Solving from Nature - PPSN IX,
volume 4193 of Lecture Notes in Computer Science, pages 142–151. Springer Berlin /
Heidelberg, 2006.
[35] Ofer M. Shir, Michael Emmerich, and Thomas Bäck. Adaptive niche radii and niche
shapes approaches for niching with the cma-es. Evol. Comput., 18:97–126, March
2010.
[36] Gulshan Singh and Dr. Kalyanmoy Deb. Comparison of multi-modal optimization
algorithms based on evolutionary algorithms. In GECCO ’06: Proceedings of the 8th
annual conference on Genetic and evolutionary computation, pages 1305–1312, New
York, NY, USA, 2006. ACM.
[37] M. Srinivas and L.M. Patnaik. Adaptive probabilities of crossover and mutation in
genetic algorithms. IEEE Transactions on Systems, Man and Cybernetics, 24(4):656
–667, apr. 1994.
[38] C. Stoean, M. Preuss, R. Stoean, and D. Dumitrescu. Multimodal optimization by
means of a topological species conservation algorithm. IEEE Transactions on Evolutionary Computation, 14(6):842 –864, dec. 2010.
[39] K. Sundareswaran and V.T. Sreedevi. Development of novel optimization procedure
based on honey bee foraging behavior. pages 1220 –1225, oct. 2008.
[40] R. Thomsen. Multimodal optimization using crowding-based differential evolution.
In CEC2004. IEEE Congress on Evolutionary Computation, 2004, volume 2, pages
1382–1389, June 2004.
[41] Thomas Weise. Global Optimization Algorithms - Theory and Application. Thomas Weise, July 16, 2007 edition, July 2007. Available online at http://www.itweise.de/projects/book.pdf.
[42] Ka-Chun Wong, Tak-Ming Chan, Chengbin. Peng, Yue Li, and Zhaolei Zhang. DNA
motif elucidation using belief propagation. Nucleic Acids Res., 41(16):e153, Sep 2013.
[43] Ka-Chun Wong, Kwong-Sak Leung, and Man-Hon Wong. An evolutionary algorithm
with species-specific explosion for multimodal optimization. In GECCO ’09: Proceedings of the 11th Annual conference on Genetic and evolutionary computation, pages
923–930, New York, NY, USA, 2009. ACM.
[44] Ka-Chun Wong, Kwong-Sak Leung, and Man-Hon Wong. Effect of spatial locality
on an evolutionary algorithm for multimodal optimization. In EvoApplications 2010,
Part I, LNCS 6024. Springer-Verlag, 2010.
[45] Ka-Chun Wong, Kwong-Sak Leung, and Man Hon Wong. Protein structure prediction
on a lattice model via multimodal optimization techniques. In GECCO ’10: Proceedings of the 12th annual conference on Genetic and evolutionary computation, pages
155–162, New York, NY, USA, 2010. ACM.
[46] Ka-Chun Wong, Yue Li, Chengbin. Peng, and Zhaolei Zhang. SignalSpider: probabilistic pattern discovery on multiple normalized ChIP-Seq signal profiles. Bioinformatics,
Sep 2014.
[47] Ka-Chun Wong, Chengbin Peng, Yue Li, and Tak-Ming Chan. Herd clustering: A synergistic data clustering approach using collective intelligence. Applied Soft Computing,
23:61–75, 2014.
[48] Ka-Chun Wong, Chengbin Peng, Man-Hon Wong, and Kwong-Sak Leung. Generalizing and learning protein-dna binding sequence representations by an evolutionary
algorithm. Soft Computing - A Fusion of Foundations, Methodologies and Applications, 15:1631–1642, 2011. 10.1007/s00500-011-0692-5.
[49] Ka-Chun Wong, Chun-Ho Wu, Ricky KP Mok, Chengbin Peng, and Zhaolei Zhang.
Evolutionary multimodal optimization using the principle of locality. Information
Sciences, 194:138–170, 2012.
[50] Ka-Chun Wong and Zhaolei Zhang.
SNPdryad: predicting deleterious nonsynonymous human SNPs using only orthologous protein sequences. Bioinformatics,
Jan 2014.
[51] Chun-Ho Wu, Na Dong, Waihung Ip, Ching-Yuen Chan, Kei-Leung Yung, and
Zengqiang Chen. Chaotic hybrid algorithm and its application in circle detection.
In EvoWorkshops, pages 302–311, 2010.
[52] E. L. Yu and P. N. Suganthan. Ensemble of niching algorithms. Inf. Sci., 180:2815–
2833, August 2010.
[53] E. L. Yu and P.N. Suganthan. Empirical comparison of niching methods on hybrid
composition functions. In CEC 2008. (IEEE World Congress on Computational Intelligence). IEEE Congress on Evolutionary Computation, 2008, pages 2194 –2201, june
2008.
[Figure 1 panels: (a) Best Fitness, (b) Best Fitness, (c) Peaks Found, (d) Peaks Found]
Figure 1: For Table 1, we depict the statistical significance test results for the pairwise
performance differences between all algorithms tested on the VLS holographic grating design problem by the Mann-Whitney U-test (MWU) and the two-sample Kolmogorov-Smirnov test (KS) with p = 0.05. Each sub-figure corresponds to the performance comparison using a metric
by a statistical test. The vertical axis is the same as the horizontal axis. Each algorithm is
represented by a number on each axis. The numbering of the algorithms follows the order
in Table 1. For instance, 1 refers to CrowdingDE-STL, 2 refers to CrowdingDE-TL......10
refers to UN. The color of each block represents whether the algorithm indicated by the
horizontal axis shows a performance different from the algorithm indicated by the vertical
axis in a statistically significant way. The black color denotes the p-values higher than
0.05 whereas the white color denotes the p-values lower than 0.05. The even numbered
sub-figures are the results obtained by MWU, whereas the odd numbered sub-figures are
the results obtained by KS.
Figure 2: Configurations obtained by a single run of CrowdingDE-STL on the VLS holographic grating design problem. It can be seen that they are totally different and feasible
configurations with which optical engineers can feel free to perform multiple trials after
the single run.
Chained Multi-stream Networks Exploiting Pose, Motion, and Appearance for
Action Classification and Detection
arXiv:1704.00616v2 [] 26 May 2017
Mohammadreza Zolfaghari , Gabriel L. Oliveira, Nima Sedaghat, and Thomas Brox
University of Freiburg
Freiburg im Breisgau, Germany
{zolfagha,oliveira,nima,brox}@cs.uni-freiburg.de
Abstract
General human action recognition requires understanding of various visual cues. In this paper, we propose a network architecture that computes and integrates the most
important visual cues for action recognition: pose, motion, and the raw images. For the integration, we introduce a Markov chain model which adds cues successively.
The resulting approach is efficient and applicable to action
classification as well as to spatial and temporal action localization. The two contributions clearly improve the performance over respective baselines. The overall approach
achieves state-of-the-art action classification performance
on HMDB51, J-HMDB and NTU RGB+D datasets. Moreover, it yields state-of-the-art spatio-temporal action localization results on UCF101 and J-HMDB.
Figure 1: The chained multi-stream 3D-CNN sequentially
refines action class labels by analyzing motion and pose
cues. Pose is represented by human body parts detected by
a deep network. The spatio-temporal CNN can capture the
temporal dynamics of pose. Additional losses on Y_Pose and Y_OF are used for training. The final output of the network, Y_RGB, is provided at the end of the chain.
1. Introduction
Human action recognition is a complex task in computer vision, because the variety of possible actions is large and there are multiple visual cues that play an important role. In contrast to object recognition, action recognition involves not only the detection of one or multiple persons, but also the awareness of other objects potentially involved in the action, the pose of the person, and their motion.
Actions can span various time intervals; making good use of videos and their temporal context is therefore a prerequisite for solving the task to its full extent [38, 37].
The success of convolutional networks in recognition has
also influenced action recognition. Due to the importance of
multiple visual cues, as shown by Jhuang et al. [12], multistream architectures have been most popular. This trend was
initiated by Simonyan and Zisserman [33], who proposed a
simple fusion of the action class scores obtained with two
separate convolutional networks, where one was trained on
raw images and the other on optical flow. The relative success of this strategy shows that deep networks for action
recognition cannot directly infer the relevant motion cues
from the raw images, although, in principle, the network
could learn to compute such cues.
In this paper, we propose a three-stream architecture that
also includes pose, see Figure 1. Existing approaches model
the temporal dynamics of human postures with hand-crafted
features. We rather propose to compute the position of human body parts with a fast convolutional network. Moreover, we use a network architecture with spatio-temporal
convolutions [37]. This combination can capture temporal
dynamics of body parts over time, which is valuable to improve action recognition performance, as we show in dedicated experiments. The pose network also yields the spatial
localization of the persons, which allows us to apply the
approach to spatial action localization in a straightforward
manner.
The second contribution is on the combination of the
multiple streams, as also illustrated in Figure 1. The combination is typically done by summation of scores, by a linear classifier, or by early or late concatenation of features
within the network. In this paper, we propose the integration of different modalities via a Markov chain, which leads
to a sequential refinement of action labels. We show that
such sequential refinement is beneficial over independent
training of streams. At the same time, the sequential chain
imposes an implicit regularization. This makes the architecture more robust to over-fitting – a major concern when
jointly training very large networks. Experiments on multiple benchmarks consistently show the benefit of the sequential refinement approach over alternative fusion strategies.
Since actions may span different temporal resolutions,
we analyze videos at multiple temporal scales. We demonstrate that combining multiple temporal granularity levels
improves the capability of recognizing different actions. In
contrast to some other state-of-the-art strategies to analyze
videos over longer time spans, e.g., temporal segmentation networks [43], the architecture still allows the temporal localization of actions by providing actionness scores of
frames using a sliding window over video. We demonstrate
this flexibility by applying the approach also to temporal
and spatio-temporal action detection. Compared to previous spatio-temporal action localization methods, which are
typically based on region proposals and action tubes, the
pose network in our approach directly provides an accurate
person localization at no additional computational costs.
Therefore, it consistently outperforms the previous methods
in terms of speed and mean average precision.
2. Related work
Feature based approaches. Many traditional works in the field of action recognition focused on designing features to discriminate action classes [17, 40, 5, 16]. These features were encoded with high order encodings, e.g., bag of words (BoW) [35] or Fisher vector based encodings [31], to produce a global representation for the video and to train a classifier on the action labels. Recent research showed that most of these approaches are not only computationally expensive, but they also fail at capturing context and high-level information.
CNN based approaches. Deep learning has enabled the replacement of hand-crafted features by learned features, and the learning of whole tasks end-to-end. Several works employed deep architectures for video classification [24, 37, 41]. Thanks to their hierarchical feature representation, deep networks learn to capture localized features as well as context cues and can exploit high-level information from large scale video datasets. Baccouche et al. [2] first used a 3D CNN to learn spatio-temporal features from video and in the next step employed an LSTM to classify video sequences. More recently, several CNN based works presented efficient deep models for action recognition [6, 29, 37]. Tran et al. [37] employed a 3D architecture to learn spatio-temporal features from videos.
Fusion of multiple modalities. Simonyan and Zisserman [33] proposed a two-stream CNN to capture the complementary information from appearance and motion, each modality in an independent stream. Feichtenhofer et al. [8] investigated in detail the optimal position within a convolutional network to combine the separate streams. Park et al. [28] proposed a gated fusion approach. In a similar spirit, Wang et al. [46] presented an adaptive fusion approach, which uses two regularization terms to learn fusion weights. In addition to optical flow, some works made use of other modalities like audio [46], warped flow [43], and object information [11] to capture complementary information for video classification. In the present work, we introduce a new, flexible technique for early or late fusion via a Markov chain and show that it outperforms previous fusion methods.
Pose feature based methods. The temporal dynamics of body parts over time provides strong information about the performed action. Thus, this information has been employed for action recognition in several works [4, 19, 39]. Cheron et al. [4] used pose information to extract high-level features from appearance and optical flow. They showed that using pose information for video classification is highly effective. Wang et al. [39] used data mining techniques to obtain a representation for each video and finally used a bag-of-words model to classify the videos. In the present work, we compute the human body layout efficiently with a deep network and learn the relevant spatio-temporal pose features within one of the streams of our action classification network.
3. Inputs to the Network
We rely on three input cues: the raw RGB images, optical flow, and human pose in the form of human body part
segmentation. All inputs are provided as spatio-temporal
inputs covering multiple frames.
3.1. Optical Flow
We compute the optical flow with the method from
Zach et al. [48], which is a reliable variational method that
runs sufficiently fast. We convert the x-component and y-component of the optical flow to a 3-channel RGB image by stacking the two components and their magnitude [29]. The flow
and magnitude values in the image are multiplied by 16 and
quantized into the [0,255] interval [18, 29, 42, 43].
3.2. Body Part Segmentation
Encoder-decoder architectures with an up-convolutional
part have been used successfully for semantic segmentation
tasks [23, 22, 30, 3, 27], depth estimation [20] and optical
Figure 2: Human body part segmentation architecture. Convolutions are shown in green, pooling in blue, feature map dropout
in brown, up-convolutional layers in red and softmax in yellow.
flow estimation [7]. For this work, we make use of Fast-Net
[27], a network for human body part segmentation, which
will provide our action recognition network with body pose
information. Figure 2 illustrates the architecture of FastNet. The encoder part of the network is initialized with the
VGG network [34]. Skip connections from the encoder to
the decoder part ensure the reconstruction of details in the
output up to the original input resolution.
We trained the Fast-Net architecture on the J-HMDB
[12] and the MPII [1] action recognition datasets. J-HMDB
provides body part segmentation masks and joint locations,
while MPII provides only joint locations. To make body
part masks compatible across datasets, we apply the following methodology, which only requires annotation for the
joint locations. First, we derive a polygon for the torso from
the joint locations around that area. Secondly, we approximate the other parts by ellipses scaled consistently based on
the torso area and the distance between the respective joints;
see second column of Fig. 3. We convert the body part segmentation into a 3 channel RGB image, mapping each label
to a correspondent pre-defined RGB value.
To the best of our knowledge, we are the first to train
a convolutional network on body part segmentation for the
purpose of action recognition. Figure 3 shows exemplary
results of the body part segmentation technique on J-HMDB
and MPII datasets. Clearly, the network provides good accuracy on part segmentation and is capable of handling images with multiple instances. The pose estimation network
has a resolution of 150×150 and runs at 33 fps.
Figure 3: Qualitative results on J-HMDB and MPII datasets
(task with 15 body parts). First column: Input image. Second column: Ground truth. Third column: Result predicted with Fast-Net. First two rows correspond to results
on J-HMDB and the last ones on MPII.
4. Action Recognition Network
4.1. Multi-stream Fusion with a Markov Chain
To integrate information from the different inputs we rely on the model of a multi-stream architecture [33], i.e., each input cue is fed to a separate convolutional network stream that is trained on action classification. The innovation in our approach is the way we combine these streams. In contrast to previous works, we combine features from the different streams sequentially: starting with the human body part stream, we refine the evidence for an action class with the optical flow stream, and finally apply a refinement by the RGB stream.
We use the assumption that the class predictions are conditionally independent due to the different input modalities. Consequently, the joint probability over all input streams factorizes into the conditional probabilities over the separate input streams.
In a Markov chain, given a sequence of inputs X = {X1, X2, ..., XS}, we wish to predict the output sequence Y = {Y1, Y2, ..., YS} such that P(Y|X) is maximized. Due
At each subsequent stage s ≥ 2, we obtain a refined prediction Ys by combining the hidden state and the predictions from the previous stage:

    h2 = f([h1, 3DCNN(XOF), (Y1)])
    P(Y2 | X, Y<2) = softmax(Net2(h2))                    (4)

    h3 = f([h2, 3DCNN(XRGB), (Y1, Y2)])
    P(Y3 | X, Y<3) = softmax(Net3(h3))
In the proposed model, at each stage, the next prediction
is made conditioned on all previous predictions and the new
input. Therefore, when training the network, the prediction
of the output class label does not only depend on the input,
but also on the previous state. Thus, the network in that
stream will learn complementary features to refine the class
labels from the previous streams. With this chaining and
joint training, the information at the previous stages serves
as the present belief for the predictions at the current stage,
as shown in Figure 4-right. This sequential improvement
of the class label enables the combination of multiple cues
within a large network, while keeping the risk of over-fitting
low.
This is in contrast to the fusion approaches that combine features from different, independently trained streams.
In such a case, the different streams are not enforced to
learn complementary features. In the other extreme, approaches that train all streams jointly but not sequentially,
are more prone to over-fitting, because the network is very
large, and, in such case, lacks the regularization via the separate streams and their additional losses.
It should be expected that the ordering of the sequence
plays a role for the final performance. We compared different ordering options in our experiments and report them
in the following section. The ordering that starts with the
pose as input and ends with the RGB image yielded the best
results.
It is worth noting that the concept of sequential fusion
could be applied to any layer of the network. Here we
placed the fusion after the first fully-connected layer, but
the fusion could also be applied to the earlier convolutional
layers.
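As an illustration of this design, the following schematic PyTorch sketch implements a three-stage chain over pose, optical flow, and RGB features; the backbone modules, feature dimension, and layer sizes are placeholders for illustration and are not the exact configuration described in this paper:

import torch
import torch.nn as nn

class ChainedFusion(nn.Module):
    # Schematic three-stream chain: pose -> optical flow -> RGB.
    def __init__(self, backbones, feat_dim, num_classes):
        super().__init__()
        self.backbones = nn.ModuleList(backbones)   # one 3D CNN per modality
        self.hidden = nn.ModuleList([
            nn.Linear(feat_dim, feat_dim),                           # stage 1
            nn.Linear(2 * feat_dim + num_classes, feat_dim),         # stage 2
            nn.Linear(2 * feat_dim + 2 * num_classes, feat_dim)])    # stage 3
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(3)])
        self.relu = nn.ReLU()

    def forward(self, x_pose, x_of, x_rgb):
        outputs, h = [], None
        for s, x in enumerate([x_pose, x_of, x_rgb]):
            feat = self.backbones[s](x)             # (B, feat_dim) features
            if s == 0:
                h = self.relu(self.hidden[s](feat))
            else:
                # Concatenate previous hidden state, new features, and all
                # previous class predictions before refining.
                h = self.relu(self.hidden[s](torch.cat([h, feat] + outputs, dim=1)))
            outputs.append(self.heads[s](h))        # per-stage class scores
        return outputs

During training, a classification loss would be attached to each element of the returned list, while only the last prediction is used at test time.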
Figure 4: Baseline fusion architecture (left) and the proposed approach (right). In the chained architecture, there
is a separate loss function for each stream. The final class
label is obtained at the end of the chain (rightmost prediction).
to the Markov property, P(Y|X) can be decomposed:

    P(Y|X) = P(Y1|X) ∏_{s=2}^{S} P(Ys | X, Y1, ..., Y_{s−1})            (1)
For the state s ∈ {1, ..., S}, we denote by hs the hidden state of that stream. We use deep networks to model the likelihood in (1):

    hs = f([h_{s−1}, 3DCNN(Xs), (Y1, ..., Y_{s−1})])
    P(Ys | X, Y<s) = softmax(Nets(hs)),                                  (2)

where f is a non-linearity unit (ReLU), h_{s−1} denotes the hidden state from the previous stream, and Ys is the prediction of stream s. For the 3DCNN(·), we use the convolutional part of the network presented in Figure 5 to encapsulate the information in the input modality, and Nets is the fully connected part in Figure 5.
At each fusion stage, we concatenate the output of the
function 3DCNN(·) with the hidden state and the outputs from the previous stream and apply the non-linearity
f before feeding them to Nets . Finally, at the output part, we use Nets to predict action labels from hs .
With the softmax(·) function we convert these scores into
(pseudo-)probabilities.
Using the above notation, we consider the input modalities as X = {Xpose, XOF, XRGB}, and Xs = {x_t}_{t=1}^{T}, where x_t is the t-th frame in Xs and T is the total number of frames in Xs. At the stage s = 1, by considering X1 = Xpose we start with an initial hidden state and obtain an initial prediction (see Figure 4-right):

    h1 = 3DCNN(Xpose)
    P(Y1|X) = softmax(Net1(h1))                                          (3)
4.2. Network Configuration
In all streams, we use the C3D architecture [37] as the
base architecture, which has 17.5M parameters. The network has 8 three-dimensional convolution layers with kernel size of 3×3×3 and stride 1, 5 three-dimensional pooling
layers with kernel size of 2×2×2 and stride 2 and two fully
connected layers followed by a softmax; see Figure 5. Each
stream is connected with the next stream via layer FC6; see
Figure 4-right. Each stream takes 16 frames as input.
Figure 5: Base architecture used in each stream of the action recognition network. The convolutional part is a 3DCNN
architecture. We define the remaining fully connected layers as Nets.
4.3. Training
poral windows across a video and 10 crop scores per clip.
Apart from averaging, we also tested a multi-resolution approach, which we call multi-granular (MG), where we trained separate networks for three different temporal resolutions. These are assembled as (1) 16 consecutive frames, (2) 16 frames from a temporal window of 32 frames with a sample rate of 2, and (3) 16 frames sampled randomly from the entire video. For the final score, we take the average over the scores produced by these temporal resolution networks. This approach extends the temporal context that the network can see, which can be useful for more complex actions with longer duration.
In case of temporal action detection, we localize the action in time by thresholding the score provided for each frame. Clearly, the MG approach is not applicable here. In addition to the action score, the human body part network also helps in temporal localization: we do not detect an action as long as no human is detected. More details on the spatio-temporal action detection are provided in the experimental section and in the supplemental material.
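The basic per-video scoring procedure described above can be sketched as follows (Python/NumPy); clip_score_fn stands for a call into the trained network, and padding the last short clip by repeating its final frame is an assumption for illustration rather than a detail specified in the paper:

import numpy as np

def video_score(frames, clip_score_fn, clip_len=16, stride=8):
    # frames: (T, H, W, C) array for one video; clip_score_fn returns a
    # class-score vector for one clip_len-frame clip.
    scores = []
    for start in range(0, max(len(frames) - clip_len + 1, 1), stride):
        clip = frames[start:start + clip_len]
        if len(clip) < clip_len:                 # pad a too-short final clip
            pad = np.repeat(clip[-1:], clip_len - len(clip), axis=0)
            clip = np.concatenate([clip, pad], axis=0)
        scores.append(clip_score_fn(clip))
    return np.mean(scores, axis=0)               # average over temporal windows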
The network weights are learned using mini-batch
stochastic gradient descent (SGD) with a momentum of
0.9 and weight decay of 5e−4 . We jointly optimize the
whole network without truncating gradients and update the
weights of each stream based on the full gradient including the contribution from the following stream. We initialize the learning rate with 1e−4 and decrease it by a factor
of 10 every 2k iterations for J-HMDB, 20k for UCF101 and NTU,
and at multiple steps for HMDB51. The maximum number
of iterations was 20k for J-HMDB, 40k for HMDB51 and
60k for the UCF101 and NTU datasets. We initialize the
weights of all streams with an RGB network pre-trained on
the large-scale Sports-1M dataset [14].
We split each video into clips of 16 frames with an overlap of 8 frames and feed each clip individually into the network stream with size of 16 × 112 × 112. We apply corner
cropping as a form of data augmentation to the training data.
Corner cropping extracts regions from the corners and the
center of the image. It helps to prevent the network from
being biased towards the center area of the input. Finally, we resize
these cropped regions to the size of 112 × 112. In each iteration, all streams take the same clip from the video with the
same augmentation but with different modalities as input.
We used Caffe [13] and an NVIDIA Titan X GPU to run
our experiments. The training time for the J-HMDB dataset
was ∼ 10 hours for the full network.
5. Experiments
5.1. Datasets
UCF-101 [36] contains more than 2 million frames in
more than 13, 000 videos, which are divided into 101 human action classes. The dataset is split into three folds
and each split contains about 8000 videos for training.
The UCF101 dataset also comes with a subset for spatiotemporal action detection.
HMDB51 [15] contains 6766 videos divided into 51 action classes, each with at least 101 samples. The evaluation
follows the same protocol used for UCF-101.
J-HMDB contains a subset of videos from the HMDB
dataset, for which it provides additional annotation, in particular optical flow and joint localization [12]. Thus, it is
well-suited for evaluating the contribution of optical flow,
body part segmentation, and the fusion of all cues via a
4.4. Temporal Processing of the Whole Video
At test time, we feed the architecture with a temporal
window of 16 frames. The stride over the video is 8. Each
set of inputs is randomly selected for cropping operations,
which are 4 corners and 1 center crop for the original image and their horizontal flipping counterpart. We extract
scores before the softmax normalization in the last stream
(Y RGB).
In case of action classification, the final score of a video
is calculated by taking the average of scores over all tem5
Streams
1
RGB+OF
3 w/o GT
3 with GT
Variant
RGB
OF
Pose
Pose (GT)
baseline
chained
chained+MG
baseline
chained
chained+MG
baseline
chained
UCF101
84.2%
79.6%
56.9%
87.1%
88.9%
89.1%
90.4%
91.3%
-
HMDB
53.3%
45.2%
36.0%
55.6%
61.7%
66.0%
57.5%
62.1%
71.1%
-
J-HMDB
60.8%
61.9%
45.5%
56.8%
62.7%
72.8%
70.2%
79.1%
72.0%
83.2%
Datasets
Methods
TS Fusion [8]
LTC [38]
Two-stream [33]
TSN [43]
CPD [26]
Multi-Granular [18]
M-fusion [28]
KVMF [49]
P-CNN [4]
Action tubes [9]
TS R-CNN [29]
MR-TS R-CNN [29]
Ours (chained)
Table 1: The value of different cues and their integration
for action recognition on the UCF101, HMDB51, and JHMDB datasets (split 1). Adding optical flow and pose
is always beneficial. Integration via the proposed Markov
chain clearly outperforms the baseline fusion approach. In
all cases, the accuracy achieved with estimated optical flow
and body parts almost reaches the upper bound performance
when providing ground truth values for those inputs.
UCF101
HMDB51
J-HMDB
92.5%
91.7%
88.0%
94.2%
92.3%
90.8%
89.1%
93.1%
91.1%
65.4%
64.8%
59.4%
69.4%
66.2%
63.6%
54.9%
63.3%
69.7%
61.1%
62.5%
70.5%
71.1%
76.1%
Table 2: Comparison to the state of the art on UCF101,
HMDB51, and J-HMDB datasets (over all three splits).
is lost due to erroneous optical flow and pose estimates. Surprisingly, the difference between the results is rather small,
showing that the network does not suffer much from imperfect estimates. This conclusion can be drawn independently
of the fusion method.
Finally, the temporal multi-granularity fusion (MG) further improves results. Especially on HMDB51, there is a
large benefit.
Markov chain. The dataset comprises 21 human actions.
The complete dataset has 928 clips and 31838 frames.
There are 3 folds for training and testing for this dataset.
The videos in J-HMDB are trimmed and come with bounding boxes. Thus, it can be used also as a benchmark for
spatial action localization.
NTU RGB+D is a recent action recognition dataset that
is quite large and provides depth and pose ground truth
[32]. It contains more than 56000 sequences and 4 million
frames. NTU provides 60 action classes and 3D coordinates
for 25 joints. Additionally, the high intra-class variations
make NTU one of the most challenging datasets.
5.2.1 Comparison with the state-of-the-art
Table 2 compares the proposed network to the state of the art in action classification. In contrast to Table 1, the comparison does not show the direct influence of single contributions anymore, since this table compares whole systems that are based on quite different components. Many of these systems also use other feature extraction approaches, such as improved dense trajectories (IDT), which generally have a positive influence on the results, but also make the system more complicated and harder to control. Our network outperforms the state of the art on J-HMDB, NTU, and HMDB51. Also, on the UCF101 dataset our approach is on par with the current state of the art while it does not rely on any additional hand-crafted features. In the two-stream case (RGB+OF), if we replace the 3DCNN network by the TSN approach [43], we obtain a classification accuracy of 94.05% on UCF101 (over 3 splits), which is the state of the art also on this dataset. However, the TSN approach does not allow for action detection anymore.
Finally, we ran the network on the recent NTU RGB+D dataset, which is larger and more challenging than the previous datasets. The dataset is popular for the evaluation of methods that are based on human body pose. Clearly, the result of our network, shown in Table 3, compares favorably
5.2. Action Classification
Table 1 shows that fusion with the sequential Markov
chain model outperforms the baseline fusion consistently
across all datasets. The baseline fusion is shown in Figure 4
and can be considered a strong baseline. It consists of fusing the multiple modalities through feature concatenation
followed by a set of fully connected layers. The network is
trained jointly.
Adding pose leads to a substantial improvement over the
two-stream version. This confirms that pose plays an important role as complementary modality for action recognition
tasks. Again, the Markov chain fusion is advantageous with
a large margin.
For the J-HMDB dataset, ground truth for optical flow
and pose is available and can be provided to the method.
While not being relevant in practice, running the recognition with this ground truth shows on how much performance
Table 3: Comparison to literature on the NTU RGB+D benchmark.

Methods                            Cross Subject %
Deep LSTM [32]                     60.7%
P-LSTM [32]                        62.93%
HOG^2 [25]                         32.2%
FTP DS [10]                        60.23%
ST-LSTM [21]                       69.2%
Ours (Pose)                        67.8%
Ours (RGB+OF+Pose - Baseline)      76.9%
Ours (RGB+OF+Pose - Chained)       80.8%

Table 4: Impact of chain order on the performance (clip accuracy) on UCF101 and HMDB51 datasets (split1). "O" = Optical flow, "P" = Pose and "R" = RGB.

Dataset    OPR     ORP     RPO     ROP     PRO     POR
HMDB51     59.8%   57.3%   54.8%   54.1%   56.4%   60.0%
UCF101     86.8%   86.2%   84.3%   84.7%   85.1%   87.1%

Table 5: Sequential improvement of classification accuracy on UCF101, HMDB51 and J-HMDB datasets (Split1) by adding modalities to the chained network.

Dataset    Y Pose   Y OF    Y RGB
UCF101     55.7%    83.0%   90.4%
HMDB51     40.9%    56.4%   62.1%
J-HMDB     47.1%    65.3%   79.1%

Table 6: Classification performance for different fusion locations on UCF101, HMDB51 and J-HMDB datasets (split1).

Fusion Location   UCF101   HMDB51   J-HMDB
FC7               89.8%    61.3%    73.9%
FC6               89.6%    62.1%    79.1%

Table 7: Effect of the temporal window size. Using more frames as input to the network consistently increases classification performance.

Dataset            Clip length   Accuracy
J-HMDB (RGB)       4             44.8%
J-HMDB (RGB)       8             49.6%
J-HMDB (RGB)       12            58.7%
J-HMDB (RGB)       16            60.8%
NTU RGB+D (Pose)   16            61.6%
NTU RGB+D (Pose)   32            67.8%
5.2.4 Effect of clip length
We analyzed the effect of the size of the temporal window
on the action recognition performance. Larger windows
clearly improve the accuracy on all datasets; see Table 7.
For the J-HMDB dataset (RGB modality) we use a temporal window ranging from 4 to 16 frames every 4 frames.
The highest accuracy is obtained with a 16 frames clip size.
Based on the J-HMDB minimum video size, 16 is the highest possible time frame to be explored. We also tested multiple temporal resolutions for the NTU dataset (pose modality). Again, we obtained the best results for the network
with the larger clip length as input.
The conducted experiments confirm that by increasing the
length of the clip, we decrease the chance of getting unrelated parts of an action in a video. In addition, with longer
sequences, 3D convolutions can better exploit their ability
to capture abstract spatio-temporal features for recognizing
actions.
to the existing methods. As a result, the used pose estimation network is competitive with pose estimates obtained from depth images, and our way of integrating this information with the raw images and optical flow is advantageous.
5.2.2 Ordering of modalities in the Markov chain
Table 4 shows an analysis on how the order of the modalities
affects the final classification accuracy. Clearly, the ordering has an effect. The proposed ordering starting with the
pose and then adding the optical flow and the RGB images
performed best, but there are alternative orders that do not
perform much worse.
Table 5 quantifies the improvement in accuracy when
adding a modality. Clearly, each additional modality improves the results.
the outcome of the study by Feichtenhofer et al. [8], where
the last convolutional layer worked best.
5.3. Action Detection
To demonstrate the generality of our approach, we also show results on action detection on UCF101 and J-HMDB.
Many of the top performing methods for action classification are not applicable to action detection, because they integrate information over time in a complex manner, are too
slow, or are unable to spatially localize the action.
This is different for our approach, which is efficient and
can be run in a sliding window manner over time and provides good spatial localization via the human body part segmentation. In order to create temporally consistent spatial
detections, we link action bounding boxes over time to pro-
5.2.3 Fusion location
In principle the chained fusion can be applied to any layer in
the network. We studied the effect of this choice. In contrast
to the large scale evaluation in Feichtenhofer et al. [8], we
tested only two locations: FC6 and FC7. Table 6 shows
a clear difference only on the J-HMDB dataset. There it
seems that an earlier fusion, at a level where the features
are not too abstract yet, is advantageous. This is similar to the outcome of the study by Feichtenhofer et al. [8], where fusion at the last convolutional layer worked best.
[Figure 6 graphic: 3D CNN per-frame scores for the RGB, OF, Pose and chained streams; ground truth vs. prediction (Basketball); pose-based spatial localization and temporal localization.]
Figure 6: Scheme for spatio-temporal action detection. The
chained network provides action class scores and body part
segmentations per frame. From these we compute action
tubes and their actionness scores; see the supplemental material for details.
Figure 7: Qualitative results on the action detection task.
The first two rows correspond to detections on UCF101,
the last ones on J-HMDB. Ground truth bounding boxes are
shown in green and detections in red. Our spatial localization is accurate and robust to unusual pose.
We use the frame-level action classification scores to
make predictions at the tube level. Figure 6 schematically
outlines the detection procedure.
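For illustration, a minimal sketch of this step is given below. It is not the authors' implementation (whose linking details are in the supplemental material); it assumes a single tracked person box per frame, and all function and variable names are ours.

import numpy as np

def tube_prediction(frame_boxes, frame_scores):
    """Turn per-frame detections into one action-tube prediction.

    frame_boxes:  list of [x1, y1, x2, y2] person boxes, one per frame
                  (a single tracked person per frame is assumed).
    frame_scores: list of per-frame class-score vectors from the
                  chained network (np.ndarray of shape [num_classes]).
    Returns the tube (the box sequence), its averaged class scores,
    and the predicted class index.
    """
    tube = list(frame_boxes)                       # one box per frame
    tube_scores = np.mean(np.stack(frame_scores), axis=0)
    predicted_class = int(np.argmax(tube_scores))
    return tube, tube_scores, predicted_class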
We also present a set of qualitative action detection experiments for the UCF101 and J-HMDB datasets. Figure 7 shows several examples in which we can robustly localize the action, even in the presence of unusual poses, illumination changes, uncommon viewpoints and motion blur. Additional results exploring failure cases are provided in the supplementary material.
Following recent works on action detection [9, 44, 29],
we report video-AP. A detection is considered correct if the
intersection over union (IoU) with the ground-truth is above
a threshold δ and the action label is predicted correctly.
The IoU between two tubes is defined as the IoU over the temporal domain, multiplied by the average of the per-frame box IoUs over all overlapping frames. Video-AP measures the area under the precision-recall curve of the action tube predictions.
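As an illustrative helper (our own sketch, not code from the paper), the tube-level IoU just defined can be computed as follows, with a tube represented as a mapping from frame index to box:

def box_iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def tube_iou(tube_a, tube_b):
    """Spatio-temporal IoU between two tubes (dicts {frame_index: box}).

    Following the definition in the text, the result is the temporal IoU
    of the two frame sets multiplied by the mean box IoU over the frames
    where both tubes are present.
    """
    fa, fb = set(tube_a), set(tube_b)
    overlap = fa & fb
    if not overlap:
        return 0.0
    t_iou = len(overlap) / len(fa | fb)
    s_iou = sum(box_iou(tube_a[f], tube_b[f]) for f in overlap) / len(overlap)
    return t_iou * s_iou

A detection is then counted as correct when this tube IoU exceeds the threshold δ and the predicted label matches the ground truth.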
Table 8 and Table 9 show the video mAP results on spatial and spatio-temporal action detection with different IoU
thresholds on J-HMDB and UCF101 (split1) datasets respectively. Although we did not optimize our approach for
action detection, we obtain state-of-the-art results on both
datasets. Moreover, the approach is fast: spatial detection
runs at a rate of 31 fps and spatio-temporal detection with
10 fps. Compared to the recent works [9, 45, 29, 47], our
detection framework has two desirable properties: (1) the
pose network directly provides a single detection box per
person, which causes a large speed-up; (2) the classification
Table 8: Spatial action detection results (video mAP) on the J-HMDB dataset, for IoU thresholds δ = 0.1, 0.2, 0.3, 0.4, 0.5. Actionness [42]: 56.4; ActionTubes [9]: 53.3; Weinzaepfel et al. [44]: 63.1, 63.5, 62.2, 60.7; Peng et al. [29]: 74.3, 73.1; Ours: 78.81, 78.20, 77.12, 75.05, 73.47 (methods listing fewer values report only a subset of the thresholds). Across all IoU thresholds, our model outperforms the state of the art.
Table 9: Spatio-temporal action detection results (video mAP) on the UCF101 dataset (split1), for IoU thresholds δ = 0.05, 0.1, 0.2, 0.3. Weinzaepfel et al. [44]: 54.28, 51.68, 46.77, 37.82; Yu et al. [47]: 42.80; Peng et al. [29]: 54.46, 50.39, 42.27, 32.70; Weinzaepfel et al. [45]: 62.8, 45.4; Ours: 65.22, 59.52, 47.61, 38.00 (methods listing fewer values report only a subset of the thresholds). Across all IoU thresholds, our model outperforms the state of the art.
takes advantage of three modalities and the chained fusion,
which yields highly accurate per-frame scores.
6. Conclusion
We have proposed a network architecture that integrates
multiple cues sequentially via a Markov chain model. We
have shown that this sequential fusion clearly outperforms
other ways of fusion, because it can consider the mutual
dependencies of cues during training while avoiding overfitting due to very large network models. Our approach provides state-of-the-art performance on all four challenging
action classification datasets UCF101, HMDB51, J-HMDB
and NTU RGB+D while not using any additional handcrafted features. Moreover, we have demonstrated the value
of a reliable pose representation estimated via a fast convolutional network. Finally, we have shown that the approach
generalizes also to spatial and spatio-temporal action detection, where we obtained state-of-the-art results as well.
7. Acknowledgements
We acknowledge funding by the ERC Starting Grant
VideoLearn and the Freiburg Graduate School of Robotics.
References
[1] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. 2d
human pose estimation: New benchmark and state of the
art analysis. Conference on Computer Vision and Pattern
Recognition (CVPR), 2014. 3
[2] M. Baccouche, F. Mamalet, C. Wolf, C. Garcia, and
A. Baskurt. Sequential deep learning for human action
recognition. In Proceedings of the Second International
Conference on Human Behavior Unterstanding, HBU’11,
pages 29–39, 2011. 2
[3] V. Badrinarayanan, A. Kendall, and R. Cipolla. Segnet: A
deep convolutional encoder-decoder architecture for image
segmentation. arXiv preprint arXiv: 1511.00561, 2015. 2
[4] G. Chéron, I. Laptev, and C. Schmid. P-CNN: Pose-based
CNN Features for Action Recognition. In ICCV, 2015. 2, 6
[5] N. Dalal and B. Triggs. Histograms of oriented gradients for
human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern
Recognition (CVPR’05) - Volume 1 - Volume 01, CVPR ’05,
pages 886–893, Washington, DC, USA, 2005. IEEE Computer Society. 2
[6] A. Diba, A. M. Pazandeh, and L. V. Gool. Efficient twostream motion and appearance 3d cnns for video classification. CoRR, abs/1608.08851, 2016. 2
[7] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazrba,
V. Golkov, P. v.d. Smagt, D. Cremers, and T. Brox. Flownet:
Learning optical flow with convolutional networks. In IEEE
International Conference on Computer Vision (ICCV), Dec
2015. 3
[8] C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional
two-stream network fusion for video action recognition. In
IEEE Conference on Computer Vision and Pattern Recognition, 2016. 2, 6, 7
[9] G. Gkioxari and J. Malik. Finding action tubes. 2015. 6, 8
[10] J. F. Hu, W.-S. Zheng, J. Lai, and J. Zhang. Jointly learning heterogeneous features for rgb-d activity recognition. In Computer Vision and Pattern Recognition (CVPR) (In press), 2015. 7
[11] M. Jain, J. C. van Gemert, and C. G. M. Snoek. What do 15,000 object categories tell us about classifying and localizing actions? In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 46–55, June 2015. 2
[12] H. Jhuang, J. Gall, S. Zuffi, C. Schmid, and M. J. Black. Towards understanding action recognition. In International Conf. on Computer Vision (ICCV), pages 3192–3199, 2013. 1, 3, 6
[13] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014. 5
[14] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014. 5
[15] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: a large video database for human motion recognition. In Proceedings of the International Conference on Computer Vision (ICCV), 2011. 5
[16] I. Laptev. On space-time interest points. Int. J. Comput. Vision, 64(2-3):107–123, Sept. 2005. 2
[17] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pages 2278–2324, 1998. 2
[18] Q. Li, Z. Qiu, T. Yao, T. Mei, Y. Rui, and J. Luo. Action recognition by learning deep multi-granular spatio-temporal video representation. In Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval, ICMR '16, pages 159–166, New York, NY, USA, 2016. ACM. 2, 6
[19] I. Lillo, J. C. Niebles, and A. Soto. A hierarchical pose-based approach to complex action understanding using dictionaries of actionlets and motion poselets. CoRR, abs/1606.04992, 2016. 2
[20] F. Liu, C. Shen, G. Lin, and I. D. Reid. Learning depth from single monocular images using deep convolutional neural fields. IEEE Trans. Pattern Anal. Mach. Intell., 38(10):2024–2039, 2016. 2
[21] J. Liu, A. Shahroudy, D. Xu, and G. Wang. Spatio-temporal LSTM with trust gates for 3D human action recognition, pages 816–833. Springer International Publishing, 2016. 7
[22] W. Liu, A. Rabinovich, and A. C. Berg. Parsenet: Looking wider to see better. arXiv preprint arXiv:1506.04579, 2015. 2
[23] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. CVPR, Nov. 2015. 2
[24] B. Mahasseni and S. Todorovic. Regularizing long short term memory with 3d human-skeleton sequences for action recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. 2
[25] E. Ohn-Bar and M. M. Trivedi. Joint angles similarities and
hog2 for action recognition. In Proceedings of the 2013
IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW ’13, pages 465–470, Washington,
DC, USA, 2013. IEEE Computer Society. 7
[26] K. Ohnishi, M. Hidaka, and T. Harada. Improved dense trajectory with cross streams. In Proceedings of the 2016 ACM
on Multimedia Conference, MM ’16, pages 257–261, New
York, NY, USA, 2016. ACM. 6
[27] G. L. Oliveira, W. Burgard, and T. Brox. Efficient deep models for monocular road segmentation. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),
2016. 2, 3
[28] E. Park, X. Han, T. L. Berg, and A. C. Berg. Combining
multiple sources of knowledge in deep cnns for action recognition. In WACV, pages 1–8. IEEE Computer Society, 2016.
2, 6
[29] X. Peng and C. Schmid. Multi-region two-stream R-CNN
for action detection. In ECCV 2016 - European Conference
on Computer Vision, Amsterdam, Netherlands, Oct. 2016. 2,
6, 8
[30] O. Ronneberger, P.Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention
(MICCAI), volume 9351 of LNCS, pages 234–241. Springer,
2015. 2
[31] J. Sanchez, F. Perronnin, T. E. J. Mensink, and J. Verbeek.
Image classification with the fisher vector: Theory and practice. International Journal of Computer Vision, 2013. 2
[32] A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang. Ntu rgb+d:
A large scale dataset for 3d human activity analysis. In The
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. 6, 7
[33] K. Simonyan and A. Zisserman. Two-stream convolutional
networks for action recognition in videos. In Advances in
Neural Information Processing Systems, 2014. 1, 2, 3, 6
[34] K. Simonyan and A. Zisserman. Very deep convolutional
networks for large-scale image recognition. ICLR, 2015. 3
[35] J. Sivic and A. Zisserman. Video google: A text retrieval
approach to object matching in videos. In Proceedings of the
Ninth IEEE International Conference on Computer Vision Volume 2, ICCV ’03, pages 1470–, Washington, DC, USA,
2003. IEEE Computer Society. 2
[36] k. Soomro, A. Roshan Zamir, and M. Shah. UCF101: A
dataset of 101 human actions classes from videos in the wild.
In CRCV-TR-12-01, 2012. 5
[37] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri.
Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the 2015 IEEE International
Conference on Computer Vision (ICCV), pages 4489–4497,
2015. 1, 2, 4
[38] G. Varol, I. Laptev, and C. Schmid. Long-term temporal
convolutions for action recognition. CoRR, abs/1604.04494,
2016. 1, 6
[39] C. Wang, Y. Wang, and A. L. Yuille. An approach to posebased action recognition. In Proceedings of the 2013 IEEE
Conference on Computer Vision and Pattern Recognition,
CVPR ’13, pages 915–922, Washington, DC, USA, 2013.
IEEE Computer Society. 2
H. Wang and C. Schmid. Action recognition with improved
trajectories. In Proceedings of the 2013 IEEE International
Conference on Computer Vision, ICCV ’13, pages 3551–
3558, Washington, DC, USA, 2013. IEEE Computer Society.
2
L. Wang, Y. Qiao, and X. Tang. Action recognition with
trajectory-pooled deep-convolutional descriptors. In CVPR,
pages 4305–4314, 2015. 2
L. Wang, Y. Qiao, X. Tang, and L. Van Gool. Actionness estimation using hybrid fully convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 2708–2717, 2016. 2, 8
L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and
L. Val Gool. Temporal segment networks: Towards good
practices for deep action recognition. In ECCV, 2016. 2, 6
P. Weinzaepfel, Z. Harchaoui, and C. Schmid. Learning to
track for spatio-temporal action localization. In ICCV 2015
- IEEE International Conference on Computer Vision, pages
3164–3172, Santiago, Chile, Dec. 2015. IEEE. 8
P. Weinzaepfel, X. Martin, and C. Schmid. Towards weaklysupervised action localization. CoRR, abs/1605.05197,
2016. 8
Z. Wu, Y. Jiang, X. Wang, H. Ye, X. Xue, and J. Wang.
Fusing multi-stream deep networks for video classification.
CoRR, abs/1509.06086, 2015. 2
G. Yu and J. Yuan. Fast action proposals for human action
detection and search. In 2015 IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), pages 1302–1311,
June 2015. 8
C. Zach, T. Pock, and H. Bischof. A duality based approach
for realtime tv-l1 optical flow. In Proceedings of the 29th
DAGM Conference on Pattern Recognition, pages 214–223,
Berlin, Heidelberg, 2007. Springer-Verlag. 2
W. Zhu, J. Hu, G. Sun, X. Cao, and Y. Qiao. A key volume mining deep framework for action recognition. In 2016
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1991–1999, June 2016. 6
| 2 |
Simultaneously Color-Depth Super-Resolution with
Conditional Generative Adversarial Network
arXiv:1708.09105v2 [] 8 Nov 2017
Lijun Zhao, Huihui Bai, Member, IEEE, Jie Liang, Senior Member, IEEE, Bing Zeng, Fellow, IEEE,
Anhong Wang, Member, IEEE, and Yao Zhao, Senior Member, IEEE
Abstract—Recently, Generative Adversarial Network (GAN)
has found wide applications in style transfer, image-to-image translation and image super-resolution. In this paper, a color-depth conditional GAN is proposed to concurrently resolve the
problems of depth super-resolution and color super-resolution
in 3D videos. Firstly, given the low-resolution depth image and
low-resolution color image, a generative network is proposed to
leverage mutual information of color image and depth image to
enhance each other in consideration of the geometry structural
dependency of color-depth image in the same scene. Secondly,
three loss functions, including data loss, total variation loss, and
8-connected gradient difference loss are introduced to train this
generative network in order to keep generated images close to the
real ones, in addition to the adversarial loss. Experimental results
demonstrate that the proposed approach produces high-quality
color image and depth image from low-quality image pair, and
it is superior to several other leading methods. In addition, we use the same neural network framework to resolve the problems of image smoothing and edge detection at the same time.
Index Terms—Depth image, color image, generative
adversarial network, super-resolution, image smoothing, edge
detection.
I. INTRODUCTION
LOW-RESOLUTION and noisy images are a persistent nuisance for a variety of practical applications, such as image and video display and surveillance, to name a few. In order to enlarge an image's resolution and enhance the quality of the super-resolved image, a tremendous amount of work has been devoted to color super-resolution (SR) for several decades [1, 2]. Recently, several
convolutional neural network (CNN) based methods such as
[3–5] have reported better super-resolution results and have
an order of magnitude lower complexity than previous
methods.
One of the earliest CNN-based super-resolution works is the three-layer convolutional neural network named SRCNN [6]. Later, the deconvolution operation is used in [3] to
directly learn the projection from low resolution (LR) image
L. Zhao, H. Bai, Y. Zhao are with Institute Information Science, Beijing
Jiaotong University, Beijing, 100044, P. R. China, e-mail: 15112084, hhbai,
[email protected].
J. Liang is with School of Engineering Science, Simon Fraser University,
ASB 9843, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada, email:[email protected]
B. Zeng is with Institute of Image Processing, University of Electronic
Science and Technology of China, Chengdu, Sichuan 611731, China, email:[email protected]
A. Wang is with Institute of Digital Media & Communication, Taiyuan
University of Science and Technology, Taiyuan, 030024, P. R. China, email:wah [email protected]
to high-resolution (HR) image. In [4], an efficient sub-pixel
convolution layer is introduced to learn a series of filters to
project the final LR feature maps into HR image. Different
from the shallow neural network in [3, 4, 6], a very deep
neural network is presented in [5] to learn the image residuals with extremely high learning rates. However, these methods' objective functions are usually the mean squared SR error, so their SR output images often lack high-frequency details when the up-sampling factor is large.
In [7, 8], a generative adversarial network is proposed to
infer photo-realistic images in terms of the perceptual loss.
In addition to the single image SR, image SR aided with its
neighboring viewpoint’s high/low resolution image has also
been widely explored to further improve the quality. For
instance, high-frequency information from the neighboring
full-resolution views and corresponding depth image are used
to enhance the low-resolution view images in [9]. Besides mixed resolutions, multiple LR stereo observations can be leveraged to increase the image resolution [10].
Due to depth information’s facilities to many real-world
applications, depth SR problems have been widely explored
in recent years. When only LR depth image is given, this SR
problem is called single depth super-resolution. But, if the
LR depth image is available accompanied with corresponding
HR color image, researchers often name this kind problem
of SR after joint depth SR or color image-guided SR. In
[11], by searching a list of HR candidate patches from the
database to match with the LR patches, the problem of depth
SR is transformed into Markov random field (MRF) labeling
problem to reconstruct the full HR image. After that, single
depth SR is decomposed as two-step procedures: first the HR
edge map is synthesized with HR patches according to the
MRF optimization problem; and then a modified joint
bilateral filtering is employed to achieve image up-sampling
with this HR edge map [12]. More recently, depth super-resolution has been formulated as a boundary compensation problem by learning multiple residual dictionaries, followed by an adaptive depth map refinement that removes the ringing artifacts around edges after multiple residual compensations [13].
Since the HR color image can easily be obtained by consumer camera sensors in most cases, the color image can be used as prior information for upscaling the LR depth image, under the assumption of structural similarity between the color image and the depth image. Here, we classify
joint depth SR approaches into three classes: filtering-based
methods, optimization methods and CNN-based SR methods.
For example, bilateral filtering and guided image filtering are
often used to get the interpolation weights to resolve the
problem of depth SR [14–16]. The joint bilateral filter in [14] uses the color image as a prior to guide the up-sampling from low quality to high quality. Meanwhile, bilateral
filtering is iteratively used to refine the input low-resolution
depth image in [15] to improve the spatial resolution and
depth precision. In order to prevent texture-copy artifacts
from color image and against the inherent noisy nature of
real-time depth data, an adaptive multi-lateral up-sampling
filter in [16] is described to up-sample depth information. In [17], a more advanced filter, called guided filtering, is proposed, whose aim is to transfer structure from a guidance image into a target image. From this perspective, depth super-resolution can be treated as structure-transfer filtering.
The second class of joint depth super-resolution methods
often build their model by converting SR problems into the
convex and non-convex optimization with different prior
knowledge to regularize the objective function. For example,
a MRF-based model [18], which consists of data term and
smoothness prior term, is built up to align the discontinuities
of depth image with color image’s boundaries. However, this
model always suffers from the texture-copy artifacts and
depth bleeding artifacts, when color image could not be used
sufficiently during depth image super-resolution. Thus, to
sharpen depth boundaries and to prevent depth bleeding, a
nonlocal means term is incorporated into the MRF model to
help local structure to be preserved [19]. To suppress
texture-copy artifacts and reduce computational cost, variable
bandwidth weighting scheme [20] is used into the MRF
model to adjust the guidance weights based on depth local
smoothness. These methods such as [19, 20] implicitly put
the inconsistency between the depth image and the color
image into the smoothness term of the MRF model. Later, a unified framework was proposed that casts guided interpolation as a global weighted least squares optimization [21].
In [22], the higher order regularization is used to formulate
depth image up-sampling as a convex optimization problem.
In [23], a static and dynamic filter (SDF) is designed to
address the problem of guided image filtering by using
structural information jointly from the guidance image and
input image.
Although these recent advanced techniques achieve appealing performance, they are built on complex optimization algorithms with hand-designed objective functions, which are computationally expensive and limit their practical application.
Lately, deep joint image filtering framework based on
convolutional neural network is proposed in [24] to
adaptively transfer co-occurrence information from the
guidance image to the target images. Meanwhile, in order to
adaptively up-sample depth image’s small-scale and
large-scale structures, a multi-scale guided convolutional
network is trained in high-frequency domain for up-sampling
depth map [25].
In many cases, an LR depth image is available together with an HR color image of the dynamic scene, so the majority of SR works put their emphasis on HR color image aided depth super-resolution. However, they often overlook the significance of simultaneous depth and color image SR with deep learning. As a matter of fact, this task
image SR with deep learning. As a matter of fact, this task
is very significant for several 3D video applications, such as
3D video compression, 3D scene reconstruction. For
example, the 3D-HEVC [26] has leveraged the
full-resolution color video and depth video with multi-view
video plus depth (MVD) format to compress 3D video. If
the techniques of simultaneous depth and color SR can be
put into the 3D-HEVC framework, apparently their coding
efficiency can be greatly improved. From our investigation,
we find some works such as [27] have embedded the
CNN-based SR into HEVC coder to achieve significant bits
saving, so the research of simultaneous depth and color
image SR is a meaningful and important topic for both
industry and academia.
Recently, generative adversarial networks [28] have been used to generate high-quality images for tasks such as super-resolution, image style transfer, and image-to-image translation [8]. In [8], a perceptual loss function
is applied on the tasks of image transformation such as
image style transfer by training feed-forward networks. In
[29], a general solution to the problem of image-to-image
translation is proposed to finish a lot of tasks, such as
synthesizing a new image from the label map, reconstructing
a scene image from an edge map and so on.
Following the works of [8, 29], we propose to use
color-depth conditional generative adversarial network
(CDcGAN) to deal with challenging tasks of both color SR
and depth SR at the same time. Our generative network
consists of five components: color enhancement subnetwork,
depth enhancement subnetwork, color-depth feature merge
subnetwork, color image reconstruction subnetwork and
depth image reconstruction subnetwork. First, we
respectively enhance the quality of low-quality color-depth
image in the first two subnetworks and then these features
are incorporated by color-depth feature merge subnetwork,
which is inspired by the literature [24]. But, there are some
differences between them, which will be detailed later. After
that, color feature and/or depth feature feed into the last two
reconstruction subnetwork in addition to the merged
depth-color features in order to produce HR color-depth
images at the same time. Secondly, one discriminator is used
to distinguish real color image from the generated color
image. The reason for training the depth reconstruction subnetwork without a discriminator is that the depth image is not displayed directly on the screen; rather, it serves as the scene's geometry information that guides the rendering of virtual views, with each pixel of the depth image representing the distance between the camera and the object. Thus, only three auxiliary losses are added to
regularize the depth image SR. Thirdly, in our generative network, three additional losses, namely the data loss, the Total Variation (TV) loss, and the 8-connected gradient difference loss, are also used for the color image in addition to the adversarial loss, so as to
ensure that image pairs produced by the generator are similar
to the true image pairs.
3
The rest of our paper is organized as follows. First, our
approach is given in Section 2. After experimental results are
evaluated in Section 3, we draw a conclusion in Section 4.
II. PROPOSED METHOD
A. Networks Architecture
Given the LR color-depth image pair (c, d), we propose to
use the conditional GAN to generate the HR color-depth
image pair (x, y). To the best of our knowledge, this is the
first deep learning-based color-depth super-resolution
scheme. As illustrated in Fig. 1 and Fig. 2, our conditional
generative adversarial network consists of a generator
network (G) and a discriminator network(D). First, the
low-resolution color-depth images are initialized by Bicubic
interpolation to be full-resolution. Then, our proposed
generative network feeds the low-quality yet full-resolution color image c and depth image d into the color enhancement subnetwork (S1) and the depth enhancement subnetwork (S2), respectively, to pre-filter the low-quality images, as
displayed in Fig. 1. In addition, the enhanced depth image
and enhanced color image are fed into color-depth feature
merge subnetwork (S3). Finally, the first two subnetworks
features and color-depth merged features are leveraged to
reconstruct HR color image and depth image with color
reconstruction subnetwork (S4) and depth reconstruction
subnetwork (S5) respectively. In particular, the generator G
has two subnetworks S4 and S5 to produce image pair
(x, y) from the low-quality image pair (c, d). In the
reconstruction subnetwork S4, the feature maps from S1, S2, and S3 are used to generate the HR color image. The HR depth image, however, is reconstructed only from the enhanced depth features of S2 and the merged features of the feature merge subnetwork S3, without the enhanced color features. In other words, the skip-connection is chosen
for both color and depth reconstruction; for the depth reconstruction, however, only one skip-connection is used, so that the depth features are affected only by the depth features and by the mutual features shared by the color and depth images, considering the sensitivity of depth super-resolution to the texture details of the color image. In [30], the skip-connection has been
successfully used for a semantic segmentation network.
Here, a similar idea about skip-connection is shared by
color-depth image super-resolution.
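As an illustration of this wiring, the following PyTorch sketch shows how the S1/S2 features, the merged S3 features, and the two reconstruction branches could be connected with the asymmetric skip-connections described above. It is a sketch only: the paper's implementation is in TensorFlow, and the layer shapes used here are placeholders rather than the paper's exact ones.

import torch
import torch.nn as nn

class CDcGANGeneratorSketch(nn.Module):
    """Illustrative wiring of the five subnetworks:
    S4 (color reconstruction) sees the S1, S2 and S3 features,
    while S5 (depth reconstruction) sees only the S2 and S3 features.
    The subnetwork bodies here are single placeholder conv layers."""
    def __init__(self, feat=48):
        super().__init__()
        self.s1 = nn.Sequential(nn.Conv2d(3, feat, 3, padding=1), nn.ReLU())        # color enhancement
        self.s2 = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU())        # depth enhancement
        self.s3 = nn.Sequential(nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU()) # feature merge
        self.s4 = nn.Conv2d(3 * feat, 3, 3, padding=1)                               # color reconstruction
        self.s5 = nn.Conv2d(2 * feat, 1, 3, padding=1)                               # depth reconstruction

    def forward(self, c, d):
        f_c = self.s1(c)
        f_d = self.s2(d)
        f_m = self.s3(torch.cat([f_c, f_d], dim=1))
        x = self.s4(torch.cat([f_c, f_d, f_m], dim=1))  # HR color
        y = self.s5(torch.cat([f_d, f_m], dim=1))       # HR depth (no color skip)
        return x, y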
In each subnetwork, we use three convolutional layers similar to [6]. The advantage of this design lies in the middle convolutional layer with a spatial kernel size of 1x1, which greatly decreases the number of network parameters while preserving the nonlinearity of the network. The three convolutional layers of S1/S2 are 9x9x1x96, 1x1x96x48, and 5x5x48x1, respectively. The S3 convolutional layers are 9x9x2x64, 1x1x64x32, and 5x5x64x2. The S4/S5 convolutional layers are 9x9x3x9, 1x1x96x48, and 5x5x48x1. The convolutional layers of S1, S2, and S3 are operated with a stride of 1 and a padding of 1, while those of S4 and S5 are processed without padding in order to keep the output image the same size as the ground truth image. All the convolutional layers are followed by the ReLU activation function, except the last convolutional layer of each of S1, S2, S3, S4, and S5.
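A minimal sketch of one enhancement subnetwork with the S1/S2 layer sizes is given below. PyTorch is used here for brevity rather than the authors' TensorFlow, and the padding values (4, 0, 2) are our assumption, chosen so that the spatial size is preserved.

import torch
import torch.nn as nn

class EnhanceSubnet(nn.Module):
    """Three-layer enhancement subnetwork in the spirit of S1/S2
    (SRCNN-like: 9x9 -> 1x1 -> 5x5, channels 1 -> 96 -> 48 -> 1)."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 96, kernel_size=9, stride=1, padding=4),
            nn.ReLU(inplace=True),
            nn.Conv2d(96, 48, kernel_size=1, stride=1, padding=0),
            nn.ReLU(inplace=True),
            # no activation after the last layer, as stated in the text
            nn.Conv2d(48, out_ch, kernel_size=5, stride=1, padding=2),
        )

    def forward(self, x):
        return self.body(x)

# example: pre-filter a bicubically upsampled depth image
depth = torch.randn(1, 1, 64, 64)
enhanced = EnhanceSubnet()(depth)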
Except the SRCNN network [6], there are many other
choices for each component of our generative network. For
example, modified VGG deep networks [31], can be used for
S1 and S2 subnetworks. In [32], all the convolution kernel
size in spatial domain is 3x3 for image de-noising with
modified VGG deep networks. There are some alternative
networks for S1 and S2 subnetworks of our generator, such
as the Encoder-decoder network or U-net used in [29]. In
addition, the two reconstruction subnetworks of our network could alternatively use deconvolution or sub-pixel convolution layers to reduce the number of parameters and the corresponding computational cost [3, 4]. In order to form a general neural network for image processing, neither deconvolution nor sub-pixel convolution is used in this paper, because full-resolution image processing with a deep neural network is more general than low-resolution image super-resolution, and the SR problem can easily be initialized with the full-resolution image as the input of the neural network.
Note that the part of the network composed of the S1, S2 and S3 subnetworks in our generator looks like the network in [24], but its functionality is different: in [24], CNN_G is used to extract the edge information of the corresponding color image and CNN_T is used to initially reconstruct the depth image. In our generator, the S1 and S2 subnetworks are regarded as pre-filtering networks for the low-quality color and depth images, respectively, and they are followed by the feature merge subnetwork and the reconstruction subnetworks.
As depicted in Fig. 1, the discriminator is a three-layer
convolutional neural network. The parameters are
respectively 4x4x3x64 with stride=2, 4x4x64x64 with
stride=2, and 5x5x64x1 with stride=1. In the discriminator
network, the first two convolution layers are followed by
Leaky ReLU activation function, while the last layer is
activated by sigmoid function.
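A corresponding sketch of the three-layer discriminator is given below. The LeakyReLU slope, the padding values, and the use of the 3-channel color image alone as input are our assumptions; conditioning on (c, d) as in D(c, d, .) would simply add input channels.

import torch
import torch.nn as nn

class DiscriminatorSketch(nn.Module):
    """Three-layer discriminator following the stated layer sizes:
    4x4x3x64 (stride 2), 4x4x64x64 (stride 2), 5x5x64x1 (stride 1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, kernel_size=5, stride=1, padding=2),
            nn.Sigmoid(),  # per-patch probability of the image being real
        )

    def forward(self, x):
        return self.net(x)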
The generated color image x produced by G is used to fool the discriminator D. The discriminator is trained to distinguish the false images from the true ones, so that the generator learns to create more realistic images. Note that only one discriminator is used in our generative adversarial network. We could, in principle, use two discriminators to distinguish the true image pairs from the false ones. However, the depth image is not viewed directly, and preserving its accuracy is the main goal of depth super-resolution, so no adversarial loss is applied to the depth reconstruction subnetwork (S5).
B. Objective Function
In our objective function, there is no adversarial loss for
the depth image. Instead, three auxiliary losses are
considered to make the generated depth image close to the
truth image. Contrary to depth images, which only contain
sharp boundaries and some flat or piece-wise smooth
regions, color images usually have more informative textural
details. So it is important for the color images to be more realistic compared to the true image, especially when the up-sampling factor is very large.
Fig. 1. The diagram of the color-depth conditional generative adversarial network (CDcGAN).
Fig. 2. The workflow of the color-depth conditional generative adversarial network (CDcGAN).
In summary, the objective of our model can be expressed as
G^* = \min_G \max_D \; \alpha \cdot L_{CDcGAN}(G, D) + \left( L_{data}(G) + L_{TV}(G) + L_{GD}(G) \right),   (1)
where L_{CDcGAN}(G, D) is the adversarial loss and the other terms are the three auxiliary losses; all of them are defined below. The parameter \alpha adjusts the relative contribution of the GAN loss and the three auxiliary losses to color-depth super-resolution.
C. Adversarial Loss
For brevity, we denote the true color image data's distribution and the generated color image data's distribution as p_{data}(c_g) and p_{data}(x), while p_z is the input noise's distribution. As shown in Fig. 1, the generator G(c, d, z) is a mapping from the LR image pair (c, d) to the HR image pair (c_g, d_g). D(c, d, c_g) is the probability that c_g comes from the true image data rather than from data produced by the generator G(c, d, z), while the probability assigned to data from G(c, d, z) is D(c, d, G(c, d, z)). In our model, the adversarial loss is expressed as
L_{CDcGAN}(G, D) = \mathbb{E}_{c_g \sim p_{data}(c_g)} [\log D(c, d, c_g)] + \mathbb{E}_{x \sim p_{data}(x),\, z \sim p_z} [\log (1 - D(c, d, G(c, d, z)))],   (2)
in which z is random noise.

D. Auxiliary Losses for Regularization
In our objective function, three auxiliary losses are included:
data loss, TV loss, and 8-connected gradient difference loss,
which are leveraged to make image pair (x, y) produced by
the generator G to be similar enough to the true image pair
(c_g, d_g). The vectors of (x, y) and (c_g, d_g) are denoted (X, Y) and (C_g, D_g). As in the traditional TV model, the data loss keeps the output values consistent with the ground truth, while the TV loss L_{TV}(G) encourages correlation between each output value and its neighboring pixels in order to keep the generated images smooth and robust against noise. Our data loss function L_{data}(G), which includes both the color image's data loss and the depth image's data loss, is defined as
L_{data}(G) = \frac{1}{M \cdot N} \sum_i \left( \| X(i) - C_g(i) \|_L + \| Y(i) - D_g(i) \|_L \right),   (3)
where || · ||L represents the L norm and the size of true image
pair is M · N .
Our TV loss function is defined as
L_{TV}(G) = \frac{1}{M \cdot N} \sum_i \left( \| \nabla_x X(i) \|_L + \| \nabla_y X(i) \|_L + \| \nabla_x Y(i) \|_L + \| \nabla_y Y(i) \|_L \right),   (4)
where ∇x , and ∇y are the gradients in the x-direction and
y-direction.
The 8-neighboring gradient difference (GD) loss is used to make the gradient information of the generated image pair similar to that of the ground truth images in the gradient domain. The 8-neighboring GD loss is defined as
L_{GD}(G) = \frac{1}{M \cdot N} \sum_i \left( \sum_{k \in \Omega} \| \nabla_k X(i) - \nabla_k C_g(i) \|_L + \sum_{k \in \Omega} \| \nabla_k Y(i) - \nabla_k D_g(i) \|_L \right),   (5)
where \Omega is the 8-neighborhood of each pixel and \nabla_k denotes the difference between a pixel and its k-th neighboring pixel.
It has been reported in the literature [33] that the L2 loss tends to produce blurred images. In [25, 29], a traditional loss such as the L1 distance is added to the GAN objective, so that the generator aims not only to fool the discriminator but also to move the generated samples towards the ground truth in an L1 sense. Thus, in the proposed network, the L1 norm is used for the three auxiliary losses to keep the generated samples close to the real ones and to make the generated images sharp rather than blurred.
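For illustration, the three auxiliary losses with the L1 norm can be sketched as follows. These are our own PyTorch helpers, not the authors' code, and the mean over pixels is used in place of the explicit 1/(M*N) normalization.

import torch

OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
           ( 0, -1),          ( 0, 1),
           ( 1, -1), ( 1, 0), ( 1, 1)]

def directional_diff(img, dy, dx):
    """x(i) - x(i_k), where i_k is the neighbour at offset (dy, dx);
    computed on the region where both pixels lie inside the image.
    `img` is an (N, C, H, W) tensor."""
    h, w = img.shape[2], img.shape[3]
    y0, y1 = max(-dy, 0), h - max(dy, 0)
    x0, x1 = max(-dx, 0), w - max(dx, 0)
    center = img[:, :, y0:y1, x0:x1]
    neighbor = img[:, :, y0 + dy:y1 + dy, x0 + dx:x1 + dx]
    return center - neighbor

def data_loss(x, y, cg, dg):
    # Eq. (3): L1 distance between the generated and ground-truth pairs
    return (x - cg).abs().mean() + (y - dg).abs().mean()

def tv_loss(x, y):
    # Eq. (4): L1 norm of horizontal and vertical gradients of both outputs
    loss = 0.0
    for img in (x, y):
        loss = loss + directional_diff(img, 0, 1).abs().mean() \
                    + directional_diff(img, 1, 0).abs().mean()
    return loss

def gradient_difference_loss(x, y, cg, dg):
    # Eq. (5): match gradients towards all 8 neighbours of each pixel
    loss = 0.0
    for out, ref in ((x, cg), (y, dg)):
        for dy, dx in OFFSETS:
            loss = loss + (directional_diff(out, dy, dx)
                           - directional_diff(ref, dy, dx)).abs().mean()
    return loss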
E. Other Applications
Our proposed network is not restricted to the task of depth-color super-resolution. In fact, similar networks can be specifically designed for different tasks, e.g., simultaneous
edge detection and semantic segmentation, concurrent edge
detection and image smoothing, and even finishing these three
tasks at the same time, when corresponding networks have one
input (e.g., color image) or two inputs (e.g. color image and
depth image).
For image smoothing and edge detection at the same time, we change the number of output feature maps of each subnetwork's last convolutional layer, and each convolutional layer is padded so that its output has the same size as its input features. For example, if the two inputs of our network have three channels and six channels respectively, then the numbers of output feature maps of the last convolutional layers in S1 and S2 / in S4 and S5 will be 3 and 6, respectively. Here, one input image is the color image, while the other is composed of the gradient maps of the color image in the vertical and horizontal directions. Note that training for image smoothing uses the adversarial loss, content loss, TV loss, and gradient loss, whereas training for edge detection only employs the content loss, because the edge information is vulnerable to the regularization of the adversarial loss, TV loss, and gradient loss.
In fact, the framework can be extended to multiple inputs and multiple outputs within one network. In many cases, several images of a scene with different modalities or different lighting conditions are observed at the same time, so one or more images in other modalities are desired to be generated when the images in some modalities are known.
III. EXPERIMENTAL RESULTS AND ANALYSIS
To validate the efficiency of the proposed architecture for
super-resolution, we compare our method with the Bicubic
interpolation method and SRCNN [6] for color image SR. In
addition, for depth super-resolution, not only the results of
single depth SR with SRCNN [6] are given, we also compare
our approach with several joint depth super-resolution
methods such as GIF [17], FGS [21], RGIF [23], TGV [22], RGDR [20], HQDU [19], MRF [18]. Three measurements of
image quality, e.g. Peak Signal-to-Noise Ratio (PSNR),
Structural SIMilarity (SSIM) index, and image sharpness
[33], are used for the comparison of different methods.
Finally, we also use our architecture to learn filters for
simultaneously image smoothing and edge detection.
A. Implementation details
Our architecture for simultaneous color-depth image super-resolution is implemented in TensorFlow [34] and contains about 200 thousand parameters, of which the generator uses only 92,358. We train our neural network with 100,000
color-depth patches with size 32 × 32 from 90 color-depth
image pairs. In our training dataset, 52 color-depth image pairs
come from the Middlebury dataset, and the remaining colordepth image pairs are got from the MPI Sintel color-depth
dataset. In our model, α is set to 0.002. We train our model for 30 epochs using Adam with beta1 = 0.5, beta2 = 0.999, and a learning rate of 0.0002. Note that the hyper-parameters beta1 and beta2 of Adam control the exponential decay rates of the moving averages; details can be found in [35]. During training, the parameters of the discriminator D are updated by Adam, followed by the update of the generator's parameters. After alternately training the generator G and the discriminator D towards a Nash equilibrium, the generator G is able to produce high-quality images, as shown in Fig. 2.
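A schematic training step matching these settings is sketched below. It is not the authors' TensorFlow code: the generator G, discriminator D and data batch are assumed to exist, the explicit noise input z and the (c, d) conditioning of D are omitted, and the auxiliary loss helpers are those sketched above.

import torch

def cdcgan_train_step(G, D, batch, opt_g, opt_d, alpha=0.002):
    """One alternating update: discriminator first, then generator.
    `batch` holds the low-quality inputs (c, d) and the HR targets (cg, dg)."""
    c, d, cg, dg = batch
    bce = torch.nn.BCELoss()

    # 1) discriminator update on real vs. generated color images
    x, y = G(c, d)
    opt_d.zero_grad()
    real = D(cg)
    fake = D(x.detach())
    loss_d = bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))
    loss_d.backward()
    opt_d.step()

    # 2) generator update: weighted adversarial loss plus auxiliary losses
    opt_g.zero_grad()
    fake = D(x)
    adv = bce(fake, torch.ones_like(fake))
    aux = data_loss(x, y, cg, dg) + tv_loss(x, y) + gradient_difference_loss(x, y, cg, dg)
    loss_g = alpha * adv + aux
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# optimizers with the stated hyper-parameters
# opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
# opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))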
In order to further validate the efficiency of our
architecture, we use the same architecture in Fig. 1 to learn
filters for image smoothing and edge detection at the same
time. We use the BSDS500 dataset from the Berkeley Computer Vision Group and the corresponding smoothed images produced by L0 gradient minimization [36] as our training data for image smoothing and edge detection. We augment the training data by rotating the images. Specifically, 100,000 patches of size
64 × 64 are extracted from these augmented data. We train
our model for image smoothing and edge detection with 10
epochs. The other training parameters are set the same as the
ones described above in simultaneous color and depth
super-resolution.
B. The objective and visual quality comparison for super-resolution
We use five standard 3D sequences to show the efficiency
of the proposed method. The first 100 frames of five testing color-depth sequences are used: Love Bird (denoted L), Book Arrival (B), and Newspaper (N) with resolution 768x1024, and Shark (S) and Undo Dancer (U) with resolution 1920x1088. The objective quality of both color SR and depth SR is evaluated in terms of PSNR, SSIM, and sharpness. The comparative results are displayed in Tables I-III, where PSNR, SSIM, and sharpness are denoted M1, M2, and M3, respectively.
TABLE I
THE OBJECTIVE COMPARISON OF COLOR IMAGE SUPER-RESOLUTION FOR 2X AND 4X UP-SAMPLING FACTORS
(M1 = PSNR, M2 = SSIM, M3 = sharpness)

2x up-sampling:
Metric  Seq    Bicubic   SRCNN [6]   CDcGAN
M1      B      38.5      40.58       40.05
M1      L      39.21     38.79       39.19
M1      N      40.66     38.32       41.22
M1      U      33.77     34.04       35.55
M1      S      38.89     38.87       39.93
M1      Ave.   38.21     38.12       39.19
M2      B      0.926     0.934       0.928
M2      L      0.97      0.967       0.969
M2      N      0.968     0.965       0.97
M2      U      0.881     0.89        0.906
M2      S      0.958     0.958       0.967
M2      Ave.   0.941     0.943       0.948
M3      B      45.41     45.98       45.88
M3      L      46.82     46.55       46.72
M3      N      46.82     46.65       47.12
M3      U      43.55     43.69       44.21
M3      S      46.48     46.38       47.04
M3      Ave.   45.82     45.85       46.19

4x up-sampling:
Metric  Seq    Bicubic   SRCNN [6]   CDcGAN
M1      B      32.71     34.11       34.49
M1      L      33.31     28.75       34.01
M1      N      33.35     25.54       34.86
M1      U      29.89     26.1        31.49
M1      S      33.64     32.55       34.44
M1      Ave.   32.58     29.41       33.86
M2      B      0.833     0.849       0.86
M2      L      0.882     0.85        0.891
M2      N      0.894     0.869       0.906
M2      U      0.732     0.729       0.77
M2      S      0.875     0.859       0.889
M2      Ave.   0.843     0.831       0.863
M3      B      43.53     43.77       43.93
M3      L      44.23     43.23       43.98
M3      N      44.09     43.59       44.24
M3      U      41.95     41.82       42.38
M3      S      44.55     43.93       44.54
M3      Ave.   43.67     43.27       43.81
TABLE II
T HE OBJECTIVE COMPARISON OF DEPTH SUPER - RESOLUTION FOR 2 X UP - SAMPLING FACTOR
Seq
B
L
N
U
S
Ave.
B
L
N
U
S
Ave.
B
L
N
U
S
Ave.
M
1
2
3
Bicubic
41.53
48.97
43.12
45.88
39.54
43.81
0.980
0.993
0.986
0.996
0.967
0.985
50.42
55.25
51.88
57.91
50.98
53.29
SRCNN
[6]
44.37
50.34
46.12
49.37
40.49
46.14
0.984
0.994
0.989
0.996
0.965
0.986
50.98
55.22
52.43
58.9
50.72
53.65
GIF
[17]
32.91
41.08
33.44
45.88
34.33
37.53
0.885
0.964
0.904
0.985
0.924
0.933
47.37
52.56
48.56
54.96
49.8
50.65
FGS
[21]
34.85
43.49
35.90
45.42
37.75
39.48
0.906
0.974
0.923
0.994
0.949
0.949
47.94
53.23
49.06
57.5
51.35
51.82
RGIF
[23]
39.54
47.13
40.52
45.72
37.21
42.02
0.952
0.984
0.961
0.992
0.944
0.967
49
53.82
50.17
56.34
50.28
51.92
TGV
[22]
38.37
46.09
38.91
44.26
37.95
41.11
0.968
0.987
0.965
0.993
0.955
0.974
49.33
54.05
50.32
56.23
50.69
52.12
RGDR
[20]
36.48
42.34
37.31
46.13
38.24
40.10
0.907
0.966
0.925
0.991
0.935
0.945
49.21
53.53
50.01
55.93
51.48
52.03
HQDU
[22]
37.74
46.24
38.74
43.36
37.43
40.70
0.958
0.988
0.967
0.990
0.940
0.968
49.05
54.05
50.25
56.02
50.79
52.03
MRF
[18]
36.70
44.92
37.61
43.51
36.77
39.90
0.944
0.983
0.956
0.993
0.946
0.964
48.44
53.49
49.63
56.46
50.43
51.69
CDcGAN
46.35
54.07
47.13
50.45
42.64
48.13
0.990
0.995
0.992
0.999
0.970
0.989
53
56.46
53.72
64.67
52.97
56.16
TABLE III
T HE OBJECTIVE COMPARISON OF DEPTH SUPER - RESOLUTION FOR 4 X UP - SAMPLING FACTOR
Seq
B
L
N
U
S
Ave.
B
L
N
U
S
Ave.
B
L
N
U
S
Ave.
M
1
2
3
Bicubic
37.91
45.54
39.22
42.33
36.42
40.28
0.951
0.984
0.964
0.991
0.943
0.967
48.22
53.35
49.58
55.30
49.57
51.21
SRCNN
[6]
39.70
46.42
40.80
38.59
27.66
38.63
0.957
0.985
0.967
0.971
0.657
0.908
48.43
53.24
49.75
53.74
49.62
50.95
GIF
[17]
32.82
40.98
33.35
40.82
34.00
36.39
0.883
0.964
0.903
0.985
0.920
0.931
47.37
52.55
48.54
54.89
49.79
50.63
FGS
[21]
31.19
40.83
32.23
42.77
35.23
36.45
0.871
0.964
0.892
0.991
0.928
0.929
47.58
52.95
48.59
55.94
50.65
51.14
From Table I, it can be seen that the PSNR, SSIM and sharpness measurements of the proposed CDcGAN are better than those of the other approaches for 2x and 4x color super-resolution.
RGIF
[23]
37.24
45.01
38.46
42.55
35.83
39.82
0.937
0.979
0.950
0.989
0.935
0.958
48.06
53.10
49.36
54.92
49.74
51.04
TGV
[22]
35.14
43.58
35.42
42.67
35.44
38.45
0.933
0.976
0.927
0.989
0.938
0.953
47.85
52.95
48.76
55.34
50.30
51.04
RGDR
[20]
34.42
41.04
35.16
42.68
35.60
37.78
0.890
0.962
0.909
0.989
0.923
0.935
48.21
53.06
49.08
54.97
50.59
51.18
HQDU
[22]
33.92
42.54
34.37
40.35
34.56
37.15
0.913
0.973
0.927
0.987
0.928
0.945
47.60
52.73
48.66
54.33
49.79
50.62
MRF
[18]
33.01
41.36
33.56
40.07
33.93
36.39
0.900
0.969
0.917
0.986
0.918
0.938
47.64
52.70
48.73
54.50
49.78
50.67
CDcGAN
41.72
47.43
41.88
51.23
37.85
44.02
0.962
0.986
0.969
0.997
0.952
0.973
49.89
54.23
50.81
58.19
50.97
52.82
As displayed in Figs. 3, 4, and 5, the proposed method keeps the generated images sharp and the visual quality is more satisfactory compared to SRCNN [6]. In summary, our
Fig. 5. The SR results for color image with 4x scaling factor. (a) the first frame of Undo Dancer, (b) the close-ups of (a), (c-e) the close-ups of the results respectively with Bicubic interpolation, SRCNN [6], and the proposed CDcGAN.
Fig. 3. The SR results for color image with 4x scaling factor. (a) the
first frame of Book Arrival, (b) the close-ups of (a), (c-e) the closeups of the results respectively with Bicubic interpolation, SRCNN
[6], and the proposed CDcGAN
Fig. 4. The SR results for color image with 4x scaling factor. (a) the
first frame of Shark, (b) the close-ups of (a), (c-e) the close-ups of
the results respectively with Bicubic interpolation, SRCNN [6] and
the proposed CDcGAN
method has better visual performance on image reconstruction and is robust to noise. This benefits from the TV loss, which keeps the flat regions of the generated color image smooth, and from the gradient difference loss, which keeps the color image close to the ground truth color image in the gradient domain. Thus, the generated color images better obey the real samples' distribution when the conditional GAN is used for color super-resolution.
We compare the proposed approach with nine methods
including Bicubic interpolation, SRCNN [6], GIF [17], FGS
[21], RGIF [23], TGV [22], RGDR [20], HQDU [19], MRF
[18]. The SRCNN method [6] takes only the LR depth image as input. For the joint SR methods GIF [17], FGS [21],
RGIF [23], TGV [22], RGDR [20], HQDU [19], MRF [18],
we use both low-resolution depth image and the ground-truth
HR color image to get the results of depth super-resolution
with the codes provided by the authors. As described above,
our CDcGAN uses the low-resolution depth image and
low-resolution color image as the input of our network. The
objective quality comparison results for depth super-resolution are presented in Tables II and III. From these tables, it can be found that the PSNR, SSIM, and sharpness measurements of the proposed approach are better than those of SRCNN [6]. The improved depth sharpness benefits depth image
applications, such as depth-based image rendering, scene’s
foreground extraction. In addition, from panel (d) of Figs. 6, 7, and 8, it can clearly be seen that severe artifacts exist in the SR images produced by SRCNN [6], whereas the proposed method does not suffer from this problem. The objective and subjective quality of the proposed method is also better than that of several recent joint optimization and filtering methods, including GIF [17], FGS [21], RGIF [23], TGV [22], RGDR [20], HQDU [19], MRF [18], even though these methods use the HR color image. From panels (f-m) of Figs. 6, 7, and 8, it can be seen that most of these methods still suffer from texture-copy and bleeding artifacts, owing to the sensitivity of the depth SR problem to textural details and weak boundaries in the color image.
C. The visual comparison of architecture’s application on
image smoothing and edge detection
In [37], deep edge-aware filter is proposed to achieve the
tasks of learning different image smoothing approaches such
as L0 gradient minimization [36]. It uses a deep
Fig. 6. The SR results for depth image with 4x scaling factor. (a) the
first frame of Book Arrival, (b) the close-ups of (a), (c-l) the closeups of the results respectively with Bicubic interpolation, SRCNN
[6], GIF [17], FGS [21], RGIF [23], TGV [22], RGDR [20], HQDU
[19], MRF [18], and the proposed CDcGAN
convolutional neural network to learn various filtering in the
gradient domain. Different from [37], we use the proposed
network to learn image smoothing filtering in both image
domain and gradient domain to finish the tasks of image
smoothing and edge detection. We use the learned gradient
information for image smoothing in the gradient domain
according to [37], where the detailed operations can be found. As displayed in Fig. 9(g-j) and Fig. 10(g-j), our image smoothing results in both the gradient domain and the image domain are very close to those of L0 gradient minimization in the gradient domain [36]. It has also been reported that their deep edge-aware filters have some problems, such as unsatisfactory approximation of a few edge-preserving operators when learning the filters in the image domain, but our architecture does not have this
problem, due to the usage of the TV loss and the gradient difference loss with the L1 norm. So the extended application to the tasks of simultaneous image smoothing and edge detection validates our architecture's flexibility and generality.
Fig. 7. The SR results for depth image with 4x scaling factor. (a) the
first frame of Shark, (b) the close-ups of (a), (c-l) the close-ups of
the results respectively with Bicubic interpolation, SRCNN [6], GIF
[17], FGS [21], RGIF [23], TGV [22], RGDR [20], HQDU [19],
MRF [18], and the proposed CDcGAN
IV. CONCLUSION
In this paper, a color-depth conditional generative adversarial network is trained to achieve color and depth super-resolution concurrently. Three auxiliary losses are used as complementary regularization terms, in addition to the adversarial loss, to train our networks so that the generated images stay close to the ground truth images. Moreover, we also apply our architecture to concurrently resolving the problems of image smoothing and edge detection.
Fig. 9. The results of image smoothing and edge detection. (a-i) the first rows are the full-resolution image; the second rows are the close-ups located in the red rectangle; (a) the input image, (b-c) input image's edges of (a) in the horizontal and vertical direction, (d) the smoothed image with the L0 gradient minimization approach [36], (e-f) the edges of (d) in the horizontal and vertical direction, (g) the smoothed image using the output edges generated by the proposed CDcGAN, (h-i) the output edges generated by the proposed CDcGAN, (j) the smoothed image generated by the proposed CDcGAN.
Fig. 8. The SR results for depth image with 4x scaling factor. (a) the
first frame of Undo Dancer, (b) the close-ups of (a), (c-l) the closeups of the results respectively with Bicubic interpolation, SRCNN
[6], GIF [17], FGS [21], RGIF [23], TGV [22], RGDR [20], HQDU
[19], MRF [18], and the proposed CDcGAN
REFERENCES
[1] S. Park, M. Park, and M. Kang, “Superresolution image reconstruction: a technical
overview,” IEEE signal processing magazine,
vol. 20, no. 3, pp. 21–36, 2003.
[2] C. Yang, C. Ma, and M. Yang, “Single-image
super-resolution: A benchmark,” in European
Conference on Computer Vision, 2014, pp. 372–
386.
[3] C. Dong, C. C. Loy, and X. Tang, “Accelerating
the super-resolution convolutional neural
network,” in European Conference on Computer
Vision, 2016, pp. 391–407.
[4] W. Shi, J. Caballero, F. Huszr, J. Totz,
A. Aitken, R. Bishop, and Z. Wang, “Realtime single image and video super-resolution
using an efficient sub-pixel convolutional neural
network,” in IEEE Conference on Computer
Vision and Pattern Recognition, 2016, pp. 1874–
1883.
[5] J. Kim, J. K. Lee, and K. M. Lee, "Accurate image super-resolution using very deep convolutional networks," in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1646–1654.
[6] C. Dong, C. Loy, and X. Tang, "Image super-resolution using deep convolutional networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295–307, 2016.
[7] C. Ledig, L. Theis, F. Huszr, J. Caballero, A. Cunningham, A. Acosta, and W. Shi, "Photo-realistic single image super-resolution using a generative adversarial network," in arXiv: 1609.04802, 2016.
[8] J. Johnson, A. Alahi, and F. Li, "Perceptual losses for real-time style transfer and super-resolution," in European Conference on
Fig. 10. The results of image smoothing and edge detection. (a-i)
the first rows are the full-resolution image; the second rows are
the close-ups located in the red rectangle; (a) the input image,
(b-c) input image’s edges of (a) in the horizonal and vertical
direction, (d) the smoothed image with L0 gradient minimization
approach [36], (e-f) the edges of (d) in the horizonal and vertical
direction, (g) the smoothed image using outputted edges generated
by the proposed CDcGAN, (h-i) the outputted edges generated by
the proposed CDcGAN, (j) the smoothed image generated by the
proposed CDcGAN
Computer Vision, 2016, pp. 694–711.
[9] D. Garcia, C. Dorea, and R. de Queiroz, “Super
resolution for multiview images using depth
information,” IEEE Transactions on Circuits
and Systems for Video Technology, vol. 22,
no. 9, pp. 1249–1256, 2012.
[10] Z. Jin, T. Tillo, C. Yao, J. Xiao, and Y. Zhao,
“Virtual-view-assisted video super-resolution
and enhancement,” IEEE Transactions on
Circuits and Systems for Video Technology,
vol. 26, no. 3, pp. 467–478, 2016.
[11] A. Mac, N. Campbell, A. Nair, and G. Brostow,
“Patch based synthesis for single depth image
super-resolution,” in European Conference on
Computer Vision, 2012, pp. 71–84.
[12] J. Xie, R. Feris, and M. Sun, “Edge-guided
single depth image super resolution,” IEEE
Transactions on Image Processing, vol. 25,
no. 1, pp. 428–438, 2016.
[13] L. Zhao, H. Bai, J. Liang, A. Wang, and
Y. Zhao, “Single depth image super-resolution
with multiple residual dictionary learning and
refinement,” in IEEE International Conference
on Multimedia and Expo, 2017, pp. 739–744.
[14] J. Kopf, M. Cohen, D. Lischinski, and M. Uyttendaele, "Joint bilateral upsampling," ACM Transactions on Graphics, vol. 26, no. 3, pp. 96–96, 2007.
[15] Q. Yang, R. Yang, J. Davis, and D. Nistr, "Spatial-depth super resolution for range images," in IEEE Conference on Computer Vision and Pattern Recognition, 2007, pp. 1–8.
[16] D. Chan, H. Buisman, C. Theobalt, and S. Thrun, "A noise-aware filter for real-time depth upsampling," in Workshop on Multi-camera and Multi-modal Sensor Fusion Algorithms and Applications, 2008.
[17] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, 2013.
[18] J. Diebel and S. Thrun, "An application of markov random fields to range sensing," in Neural Information Processing Systems, 2005, pp. 291–298.
[19] J. Park, H. Kim, Y. Tai, M. Brown, and I. Kweon, "High-quality depth map upsampling and completion for rgb-d cameras," IEEE Transactions on Image Processing, vol. 23, no. 12, pp. 5559–5572, 2014.
[20] W. Liu, X. Chen, J. Yang, and Q. Wu, "Robust color guided depth map restoration," IEEE Transactions on Image Processing, vol. 26, no. 1, pp. 315–327, 2017.
[21] Y. Li, D. Min, M. N. Do, and J. Lu, "Fast guided global interpolation for depth and motion," in European Conference on Computer Vision, 2016, pp. 717–733.
[22] D. Ferstl, C. Reinbacher, R. Ranftl, M. Rther, and H. Bischof, "Image guided depth upsampling using anisotropic total generalized variation," in IEEE International Conference on Computer Vision, 2013, pp. 993–1000.
[23] B. Ham, M. Cho, and J. Ponce, "Robust guided image filtering using nonconvex potentials," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
[24] Y. Li, J. Huang, N. Ahuja, and M. Yang, "Deep joint image filtering," in European Conference on Computer Vision, 2016, pp. 154–169.
[25] T. Hui, C. Loy, and X. Tang, "Depth map super-resolution by deep multi-scale guidance," in European Conference on Computer Vision, 2016, pp. 353–369.
[26] G. Tech, Y. Chen, K. Mller, J. Ohm, A. Vetro, and Y. Wang, "Overview of the multiview and 3d extensions of high efficiency video coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 26, no. 1, pp. 35–49, 2016.
[27] Y. Li, D. Liu, H. Li, L. Li, and F. Wu,
“Convolutional neural network-based block upsampling for intra frame coding,” in arXiv:
1702.06728, 2017.
[28] I. Goodfellow, J. Pouget, M. Mirza, B. Xu,
D. Warde, S. Ozair, and Y. Bengio, “Generative
adversarial nets,” in Neural Information
Processing Systems, 2014, pp. 2672–2680.
[29] P. Isola, J. Zhu, T. Zhou, and A. Efros, “Imageto-image translation with conditional adversarial
networks,” in arXiv: 1611.07004, 2016.
[30] E. Shelhamer, J. Long, and T. Darrell,
“Fully convolutional networks for semantic
segmentation,” IEEE transactions on pattern
analysis and machine intelligence, vol. 39, no. 4,
pp. 640–651, 2017.
[31] K. Simonyan and A. Zisserman, “Very deep
convolutional networks for large-scale image
recognition,” in arXiv: 1409.1556, 2014.
[32] K. Zhang, W. Zuo, Y. Chen, D. Meng, and
L. Zhang, “Beyond a gaussian denoiser:
Residual learning of deep cnn for image
denoising,” IEEE Transactions on Image
Processing, 2017.
[33] M. Mathieu, C. Couprie, and Y. LeCun,
“Deep multi-scale video prediction beyond
mean square error,” in arXiv: 1511.05440, 2015.
[34] M. Abadi, A. Agarwal, P. Barham, E. Brevdo,
Z. Chen, C. Citro, and et al., “Tensorflow:
Large-scale machine learning on heterogeneous
distributed systems,” in arXiv:1603.04467,
2016.
[35] D. Kingma and J. Ba, “Adam: A method for
stochastic optimization,” in arXiv:1412.6980,
2014.
[36] L. Xu, C. Lu, Y. Xu, and J. Jia, “Image
smoothing via l 0 gradient minimization,” ACM
Transactions on Graphics, vol. 30, no. 6, p. 174,
2011.
[37] L. Xu, J. Ren, Q. Yan, R. Liao, and J. Jia, “Deep
edge-aware filters,” in International Conference
on Machine Learningn, 2015, pp. 1669–1678.
| 1 |
Robust Kernelized Multi-View Self-Representations for Clustering by Tensor
Multi-Rank Minimization
Yanyun Qu, Jinyan Liu
Fujian Key Laboratory of Sensing and Computing for Smart City,
School of Information Science and Engineering, Xiamen University, China
[email protected], [email protected]
Yuan Xie, Wensheng Zhang
State Key Laboratory of Intelligent Control and Management of Complex Systems,
Institute of Automation, Chinese Academy of Sciences
[email protected], [email protected]
arXiv:1709.05083v1 [] 15 Sep 2017
Abstract
Most recently, tensor-SVD has been applied to multi-view self-representation clustering and has achieved promising results in many real-world applications such as face clustering, scene clustering and generic object clustering. However, tensor-SVD based multi-view self-representation clustering was originally proposed to solve the clustering problem for multiple linear subspaces, leading to unsatisfactory results when dealing with non-linear subspaces. To handle data clustering from non-linear subspaces, a kernelization method is designed by mapping the data from the original input space to a new feature space in which the transformed data can be clustered by a multiple-linear-subspace clustering method. In this paper, we build an optimization model for the kernelized multi-view self-representation clustering problem. We also develop a new efficient algorithm based on the alternating direction method and infer a closed-form solution. Since all the subproblems can be solved exactly, the proposed optimization algorithm is guaranteed to obtain the optimal solution. In particular, the original tensor-based multi-view self-representation clustering problem is a special case of our approach and can be solved by our algorithm. Experimental results on several popular real-world clustering datasets demonstrate that our approach achieves state-of-the-art performance.
1 Introduction
With the advent of big data, clustering has attracted more and more attention. In the past decades, great progress has been made in clustering methods, and spectral clustering-based methods have become the mainstream.
There are two critical factors in spectral clustering, one is
the subspace representation, and the other is the affinity matrix construction. The early spectral clustering methods focus on how to construct the affinity matrix (Ng, Jordan, and
Weiss 2002). Recently, more and more concerns are given
to subspace representation for clustering. Sparse subspace
clustering (Elhamifar and Vidal 2013) tries to find a sparse
representation for an instance, and the affinity matrix is constructed in the sparse subspace. The low-rank representation
(LRR) method (Liu et al. 2013) is a typical subspace clustering method, and it pays attention to a self-representation
subspace, and the affinity matrix is constructed based on the self-representation coefficients.
Copyright © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Spectral clustering has been extended to the multi-view
domain for machine learning tasks. In fact, many scientific
data have heterogeneous features, which are collected from
different domains or generated from different feature extractors. Here, we can give two typical application scenarios: 1) web-pages can be represented by using both pagetext and hyperlinks pointing to them; 2) images can be described by different kinds of features, such as color, edge
and texture. Each type of feature is regarded as a particular view, and multi-view features are combined for clustering. It has been proved that multi-view features could improve the clustering performance (Xu, Tao, and Xu 2013;
Luo et al. 2015).
Extensive studies are made in multi-view clustering methods. The existing multi-view clustering methods are divided
into three categories (Xu, Tao, and Xu 2013): 1) graph-based
approaches, 2) co-training or co-regularized approaches, 3)
subspace learning approaches. We will focus on the last category in this paper. Most existing multi-view subspace clustering (MSC) methods grasp the clustering cues only in a
2-order way and neglect the high order information, so that
they may not make the best of the consensus property and
the complementary property. Xie et al. (Xie et al. 2016)
made the multi-view features form a tensor, and utilized the
self-representation strategy as a constraint and enforced the
tensor multi-rank minimization on the objective function for
clustering. It is referred to as t-SVD-MSC. This method has
achieved the state-of-the-art multi-view clustering results on
several popular image datasets.
However, it was originally proposed to handle data from multiple linear subspaces in the input space, and it may not be suitable for data from non-linear subspaces.
Inspired by the work (Xie et al. 2016; Xiao et al. 2016),
we could use kernel-induced mapping to transform the data
from the original input space to a feature space, in which the
data reside in the union of multiple linear subspaces. Therefore, a new kernelized version of tensor-SVD multi-view
self-representation model is proposed for clustering. We
rewrite the optimization problem of tensor multi-view selfrepresentation for clustering in the kernelized version, where
the data matrix X appears only in the form of inner product.
In order to solve this optimization problem, whose objective function is convex but nonsmooth, we adopt the alternating direction method (ADM) (Lin, Liu, and Su 2011; Boyd et al. 2011; Yang and Zhang 2011) with multiple blocks of variables. ADM is theoretically guaranteed to achieve the global optimum.
The main contributions are threefold. 1) We first
present a kernel version of multi-view self-representation
subspace clustering, namely Kt-SVD-MSC, to deal with
the case of non-linear subspaces. An optimization model
is made for the kernel tensor based multi-view selfrepresentation, and MSC is solved in the kernel induced
feature subspace. 2) We develop a new optimization algorithm based on ADM to solve the optimization problem of
Kt-SVD-MSC. We infer the analytical solutions for the subproblems with respect to l2,1 norm and obtain the closedform solutions. More interestingly, the original t-SVD-MSC
becomes the special case of our method. Thus, the proposed
algorithm can be used to solve the optimization problem of
t-SVD-MSC. In addition, the kernel trick makes the computation efficient. 3) The proposed method achieves breakthrough clustering results on Scene-15 which is a challenging clustering dataset and achieves the state-of-the-art results on the other testing datasets, such as face clustering
and generic object clustering, which demonstrates the effectiveness of our method.
The rest of this paper is organized as follows. The t-SVD-MSC method is briefly reviewed in Section 2. The proposed Kt-SVD-MSC is detailed in Section 3. Experimental results are shown in Section 4, and conclusions are given in Section 5.
2 Brief review of Tensor-SVD based multi-view subspace clustering
In this section, we briefly review the general procedure of self-representation based subspace clustering methods. The low-rank representation method (LRR) (Liu et al. 2013) is a typical self-representation based subspace clustering method, and many self-representation methods are variants of LRR (Liu et al. 2013). Suppose X = [x1, x2, . . . , xN] ∈ R^{d×N} is the data matrix, whose columns are sample vectors of d dimensions in the feature space. LRR can be formulated as the following optimization problem,
min_{Z,E} λ||E||_{2,1} + ||Z||_*,  s.t.  X = XZ + E,   (1)
where Z = [z1, z2, . . . , zN] ∈ R^{N×N} is the coefficient matrix with each zi being the new coding of sample xi, || ||_* is the nuclear norm, || ||_{2,1} denotes the l2,1 norm of a matrix, and E is the corresponding noise matrix. After obtaining the self-representation matrix Z, the affinity matrix is constructed as
A = (1/2)(|Z| + |Z^T|),   (2)
where | | represents the element-wise absolute operator. Then, spectral clustering is applied to the affinity matrix A to generate the clustering result.
LRR can be extended to multi-view clustering. Let X(v) denote the feature matrix of the v-th view and Z(v) denote the v-th learned subspace representation. The multi-view subspace clustering problem can be reformulated as
min_{Z(v),E(v)} Σ_{v=1}^{V} (λ||E(v)||_{2,1} + ||Z(v)||_*),  s.t.  X(v) = X(v)Z(v) + E(v), v = 1, . . . , V,   (3)
where V denotes the number of views and E(v) is the noise matrix corresponding to the v-th view. From (3), multi-view clustering can use LRR in a direct and simple way. After obtaining the multi-view subspace representations, the affinity matrix can be constructed as A = (1/V) Σ_{v=1}^{V} (|Z(v)| + |Z(v)T|)/2.
The limitation of (3) is that it treats each subspace representation independently, neglecting the complementarity of different views. Thus, a tensor based clustering method is proposed in which a tensor nuclear norm (TNN) is used as a regularization term instead of the matrix nuclear norm. The tensor nuclear norm is defined as
||X||_⊛ := Σ_{i=1}^{min(n1,n2)} Σ_{k=1}^{n3} |S_f(i, i, k)|,   (4)
where X is a tensor. The tensor singular value decomposition is X = U ∗ S ∗ V^T, where U and V are orthogonal tensors of size n1 × n1 × n3 and n2 × n2 × n3 respectively, and S is an f-diagonal tensor of size n1 × n2 × n3, as shown in Fig. 1.
Figure 1: The t-SVD of an n1 × n2 × n3 tensor.
Figure 2: The rotated coefficient tensor in our approach.
The tensor multi-rank minimization problem for multi-view self-representation clustering is formulated as
min_{Z(v),E(v)} λ||E||_{2,1} + ||Z||_⊛,  s.t.  X(v) = X(v)Z(v) + E(v), v = 1, . . . , V,  Z = Φ(Z(1), Z(2), . . . , Z(V)),  E = [E(1); E(2); . . . ; E(V)],   (5)
where the function Φ(·) constructs the tensor Z by stacking
different representations Z(v) into a 3-mode tensor. As illustrated in Fig. 2, we rotate its dimensionality to N × V × N. In Fig. 2, the self-representation coefficient (the mode-1 fiber marked by the gray bar in Tensor A) is transformed into the mode-3 fiber marked by the gray bar in Tensor B. The tensor notation is detailed in (Xie et al. 2016).
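To make the common final step of the formulations above concrete, the following is a minimal NumPy/scikit-learn sketch (our own illustration, not the authors' Matlab code) that builds the averaged affinity matrix from per-view self-representation matrices and feeds it to spectral clustering; the representation matrices Z_views are assumed to be given.

import numpy as np
from sklearn.cluster import SpectralClustering

def affinity_from_representations(Z_views):
    """Average the symmetrized absolute representation matrices over all views."""
    V = len(Z_views)
    N = Z_views[0].shape[0]
    A = np.zeros((N, N))
    for Z in Z_views:
        A += (np.abs(Z) + np.abs(Z.T)) / 2.0
    return A / V

def cluster_from_affinity(A, n_clusters):
    """Run spectral clustering on a precomputed (non-negative, symmetric) affinity matrix."""
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            assign_labels="kmeans", random_state=0)
    return sc.fit_predict(A)

# Example with random placeholder representations for V = 3 views and N = 60 samples.
rng = np.random.default_rng(0)
Z_views = [rng.standard_normal((60, 60)) * 0.05 for _ in range(3)]
labels = cluster_from_affinity(affinity_from_representations(Z_views), n_clusters=5)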
3 The Proposed Method
In this section, we give the problem statement of kernelized multi-view self-representation by tensor multi-rank minimization. The framework of the proposed method is illustrated in Fig. 3.
Let X(v) denote the v-th view feature matrix whose columns are the feature vectors of the samples in the original space. We transform a feature vector from the original space to a non-linear feature space by using a kernel-induced latent mapping function. In the feature space, t-SVD-MSC is implemented on the new feature matrix; in other words, self-representation is implemented on the new feature vectors in the v-th view feature space. Then the V view-specific subspace representation matrices are obtained, denoted by Z(1), · · · , Z(V). They are merged to construct a 3-order tensor Z. According to t-SVD-MSC (Xie et al. 2016), this tensor needs to be rotated so as to keep the self-representation coefficients in the Fourier domain. After that, the rotated tensor Z̃ is efficiently updated by tensor-SVD based tensor nuclear norm minimization in order to grasp the high-order information of the multi-view representation. Subsequently, each Z(v) is updated by the self-representation method. These operations are executed iteratively until convergence is achieved. The challenging task is how to solve for Z(v) in the kernel-induced feature space. In this paper, we utilize the kernel strategy to avoid finding the explicit non-linear mapping function and the high computational complexity in a high-dimensional space.
Following existing kernel-based methods (Cristianini, Shawe-Taylor, and Kandola 2002; Hofmann, Schölkopf, and Smola 2008), for the v-th view, let K(v) ∈ R^{N×N} denote the kernel matrix, and let ker(v)(x, y) denote the kernel function; we have
K(v)_{ij} = ker(v)(x(v)_i, x(v)_j),  ∀i, j = 1, . . . , N,   (6)
where the kernel function ker(v)(x, y) induces a mapping ψ(v) : R^d → F(v) (usually implicitly defined), in which F(v) is referred to as the new feature space (Shawe-Taylor and Cristianini 2004). Given the mapping ψ(v), the function ker(v)(·, ·) can be rewritten as
ker(v)(x, y) = ψ(v)(x)^T ψ(v)(y).   (7)
Let us define Ψ(v)(X(v)) = [ψ(v)(x(v)_1), . . . , ψ(v)(x(v)_N)]; then the kernel matrix of the v-th view can be calculated as
K(v) = Ψ(v)(X(v))^T Ψ(v)(X(v)).   (8)
Furthermore, the reconstruction error in the feature space F(v) can be expressed as
E(v) = Ψ(v)(X(v)) − Ψ(v)(X(v))Z(v).   (9)
Thus, (5) can be converted to the non-linear version of t-SVD-MSC in the feature space F(v) as
min_{Z(v)} λ Σ_{v=1}^{V} ||Ψ(v)(X(v)) − Ψ(v)(X(v))Z(v)||_{2,1} + ||Z||_⊛,  Z = Φ(Z(1), Z(2), . . . , Z(V)).   (10)
The error term can be rewritten as
||Ψ(v)(X(v)) − Ψ(v)(X(v))Z(v)||_{2,1} = ||Ψ(v)(X(v))P(v)||_{2,1} = Σ_{i=1}^{N} (p(v)_i^T K(v) p(v)_i)^{1/2},   (11)
where P(v) = I − Z(v) ∈ R^{N×N} and P(v) = [p(v)_1, . . . , p(v)_N]. Let us define h(v)(P(v)) = Σ_{i=1}^{N} sqrt(p(v)_i^T K(v) p(v)_i). Consequently, problem (10) can be reformulated as
min_{Z(v),P(v)} λ Σ_{v=1}^{V} h(v)(P(v)) + ||Z||_⊛,  s.t.  Z = Φ(Z(1), Z(2), . . . , Z(V)),  P(v) = I − Z(v), v = 1, . . . , V.   (12)
Obviously, after converting the non-linear version (10) into the formulation in (12), we can solve this optimization problem by using the kernel trick, without computing an explicit non-linear mapping function Ψ(v)(X(v)) and without high computational complexity.
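As an illustration of the kernel-side computations in (8) and (11), here is a small NumPy sketch (an illustrative assumption-laden snippet, not the paper's implementation) that forms a linear or Gaussian kernel matrix and evaluates the error term h(v)(P(v)) for a given representation matrix Z.

import numpy as np

def kernel_matrix(X, kind="linear", gamma=None):
    """X has one sample per column (d x N). Returns the N x N Gram matrix."""
    if kind == "linear":
        return X.T @ X
    if kind == "gaussian":
        sq = np.sum(X**2, axis=0)
        d2 = sq[:, None] + sq[None, :] - 2.0 * (X.T @ X)
        gamma = gamma if gamma is not None else 1.0 / X.shape[0]
        return np.exp(-gamma * np.maximum(d2, 0.0))
    raise ValueError("unknown kernel")

def h_value(K, Z):
    """Evaluate h(P) = sum_i sqrt(p_i^T K p_i) with P = I - Z, cf. Eq. (11)."""
    P = np.eye(Z.shape[0]) - Z
    vals = np.einsum("ji,jk,ki->i", P, K, P)   # p_i^T K p_i for each column i
    return float(np.sum(np.sqrt(np.maximum(vals, 0.0))))

X = np.random.default_rng(1).standard_normal((20, 50))   # d = 20 features, N = 50 samples
K = kernel_matrix(X, kind="gaussian", gamma=0.1)
print(h_value(K, np.zeros((50, 50))))                     # with Z = 0, h(P) = sum_i sqrt(K_ii)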
3.1 Optimization
In this subsection, we solve problem (12) by using the alternating direction method (ADM) (Lin, Liu, and Su 2011). Following the Augmented Lagrange Multiplier (ALM) method (Lin, Chen, and Ma 2010), we introduce multiple blocks of Lagrange multipliers, namely the matrices Yv (v = 1, 2, · · · , V) and the tensor W, and introduce the auxiliary tensor variable G to replace Z in the TNN regularization term (Xie et al. 2016). The optimization problem can then be transformed into minimizing the following unconstrained problem:
L(Z(1), . . . , Z(V); P(1), . . . , P(V); Y1, . . . , YV; W; G)
= ||G||_⊛ + λ Σ_{v=1}^{V} h(v)(P(v)) + Σ_{v=1}^{V} [ ⟨Yv, I(v) − Z(v) − P(v)⟩ + (µ/2)||I(v) − Z(v) − P(v)||_F^2 ] + ⟨W, Z − G⟩ + (ρ/2)||Z − G||_F^2,   (13)
where µ and ρ are the penalty parameters, which are adjusted by an adaptive updating strategy as suggested in (Lin, Liu, and Su 2011).
ADM is adopted for updating Z(v), P(v), G and Yv (v = 1, 2, · · · , V). The detailed procedure for the v-th view can be partitioned into four alternating steps.
Figure 3: The framework of our method. a) Feature extraction for multi-view representation, obtaining the multi-view features
{X(v) }Vv=1 , b) Feature transformation from the input space to the new feature space, c) Multi-view self-representation in the
kernel induced feature space, d) Tensor construction by stacking the kernel multi-view self-representation {Z(v) }Vv=1 , and then
solving the subspace representations optimal solution by using t-SVD based tensor multi-rank minimization
1. Z(v)-subproblem: When P and G are fixed, we define Φ^{-1}_(v)(W) = W(v) and Φ^{-1}_(v)(G) = G(v), and then we solve the following subproblem for updating the subspace representation Z(v):
min_{Z(v)} ⟨Yv, I − Z(v) − P(v)⟩ + (µ/2)||I − Z(v) − P(v)||_F^2 + ⟨W(v), Z(v) − G(v)⟩ + (ρ/2)||Z(v) − G(v)||_F^2.   (14)
By setting the derivative of (14) to zero, the closed-form solution for Z(v) can be obtained as
Z(v)* = (µI + Yv + ρG(v) − µP(v) − W(v))/(µ + ρ).   (15)
The solution is denoted by Z(v)_{t+1}.
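A one-line NumPy sketch of the closed-form update (15) (illustrative only; the variable names are ours):

import numpy as np

def update_Z(Y_v, P_v, G_v, W_v, mu, rho):
    """Closed-form Z-update of Eq. (15): Z = (mu*I + Y + rho*G - mu*P - W) / (mu + rho)."""
    N = P_v.shape[0]
    return (mu * np.eye(N) + Y_v + rho * G_v - mu * P_v - W_v) / (mu + rho)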
2. P(v)-subproblem: After obtaining Z(v)_{t+1}, we solve for P(v) by fixing the other variables:
min_{P(v)} λ h(v)(P(v)) + ⟨Yv, I − Z(v) − P(v)⟩ + (µ/2)||I − Z(v) − P(v)||_F^2
= min_{P(v)} λ h(v)(P(v)) + (µ/2)||I − Z(v) − P(v) + Yv/µ||_F^2
= min_{P(v)} λ h(v)(P(v)) + (µ/2)||P(v) − C(v)||_F^2,   (16)
where C(v) = I − Z(v) + Yv/µ, and we have dropped the terms that are irrelevant to P(v).
Note that it is nontrivial to solve the problem in (16), where the objective function is convex but nonsmooth. In order to explain how to solve this second subproblem, we write the SVD of the v-th view kernel matrix as
K(v) = V(v) Σ(v)2 V(v)T,   (17)
where Σ(v) = diag(σ(v)_1, . . . , σ(v)_{rK}, 0, . . . , 0) and rK is the rank of the matrix K(v). We define c(v)_i (respectively, p(v)_i) as the i-th column of C(v) (respectively, P(v)) and the scalar τ = µ/λ. Let us define t(v)_u = V_{K(v)}^T c(v)_i ∈ R^{rK}, where V_{K(v)} ∈ R^{N×rK} is formed by the first rK columns of V(v). We then have the solution p(v)_i according to (Xiao et al. 2016):
p(v)*_i = p̂(v),                     if ||[1/σ(v)_1, . . . , 1/σ(v)_{rK}] ◦ t(v)_u|| > 1/τ,
p(v)*_i = c(v)_i − V_{K(v)} t(v)_u,  otherwise,   (18)
where ◦ is the element-wise multiplication operator (a ◦ b is a new vector whose i-th element is a_i b_i), and the vector p̂(v) is defined as
p̂(v) = c(v)_i − V_{K(v)} ([σ(v)2_1/(α(v) + σ(v)2_1), . . . , σ(v)2_{rK}/(α(v) + σ(v)2_{rK})]^T ◦ t(v)_u),   (19)
where α(v) is a positive scalar satisfying
t(v)T_u diag({σ(v)2_i/(α(v) + σ(v)2_i)^2}_{1≤i≤rK}) t(v)_u = 1/τ^2.   (20)
In particular, when ||[1/σ(v)_1, . . . , 1/σ(v)_{rK}] ◦ t(v)_u|| > 1/τ, the equation in (20) (with respect to α) has a unique positive root, which can be obtained by the bisection method (Burden and Faires 2011).
3. G-subproblem: When Z(v) (v = 1, 2, . . . , V) are fixed, we solve the following subproblem in order to update the tensor G:
G* = argmin_G ||G||_⊛ + (ρ/2)||G − (Z + (1/ρ)W)||_F^2.   (21)
Let us define F = Z + (1/ρ)W. Following (Xie et al. 2016), let fft(F, [ ], 3) denote the Fourier transform along the third dimension and ifft(G_f, [ ], 3) denote the inverse Fourier transform along the third dimension. The solution of the optimization problem (21) can be achieved through Algorithm 1; the details can be found in (Xie et al. 2016).
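Before the listings below, here is a compact NumPy sketch of the per-slice shrinkage used in the t-SVD based tensor nuclear norm minimization (cf. Algorithm 1 below); it is a simplified illustration under our own naming, using plain soft-thresholding of the singular values of each frontal slice in the Fourier domain rather than the paper's exact thresholding constant.

import numpy as np

def tsvd_tnn_prox(F, tau):
    """Approximately solve argmin_G ||G||_tnn + (1/(2*tau))*||G - F||_F^2
    by shrinking the singular values of each frontal slice of fft(F, axis=2)."""
    n1, n2, n3 = F.shape
    Ff = np.fft.fft(F, axis=2)
    Gf = np.zeros_like(Ff)
    for j in range(n3):
        U, s, Vh = np.linalg.svd(Ff[:, :, j], full_matrices=False)
        s_shrunk = np.maximum(s - tau, 0.0)          # soft-thresholding of singular values
        Gf[:, :, j] = (U * s_shrunk) @ Vh
    return np.real(np.fft.ifft(Gf, axis=2))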
Algorithm 1: t-SVD based Tensor Nuclear Norm Minimization
Input: Observed tensor F ∈ R^{n1×n2×n3}
Output: Tensor G
1 F_f = fft(F, [ ], 3);
2 for j = 1 : n3 do
3   [U_f^(j), S_f^(j), V_f^(j)] = SVD(F_f^(j));
4   J_f^(j) = diag{(1 − τ0 / S_f^(j)(i, i))_+}, i = 1, . . . , min(n1, n2);
5   S_{f,τ0}^(j) = S_f^(j) J_f^(j);
6   G_f^(j) = U_f^(j) S_{f,τ0}^(j) V_f^(j)T;
7 end
8 G = ifft(G_f, [ ], 3);
9 Return tensor G.

4. Updating the Lagrange multipliers Y(v), W, µ, and ρ:
Yv* = Yv + µ(I − Z(v) − P(v)),   (22)
W* = W + ρ(Z − G),   (23)
µ = min(ηµ, µmax),   (24)
ρ = min(ηρ, ρmax),   (25)
where µmax and ρmax are preset parameters.
We summarize the algorithm for Kt-SVD-MSC in Algorithm 2.

Algorithm 2: Kt-SVD-MSC
Input: Multi-view feature matrices X(1), X(2), . . . , X(V), λ, cluster number C, and the SVD of K(v)
Output: Clustering results C
1 Initialize Z(v) = Yv = 0, v = 1, . . . , V; P(v) = In; G = W = 0; µ = ρ = 10^{−5}, η = 2, µmax = ρmax = 10^{13}, ε = 10^{−7};
2 while not converged do
3   for v = 1 : V do
4     // Solving Z
5     Update {Z(v)}_{v=1}^{V} by using (15);
6   end
7   // Solving P
8   Update P by solving (18);
9   for v = 1 : V do
10    // Solving Y
11    Update {Y(v)}_{v=1}^{V} by using (22);
12  end
13  Obtain Z = Φ(Z(1), Z(2), . . . , Z(V));
14  // Solving G
15  Update G by using Algorithm 1;
16  Update W by using (23);
17  Update parameters µ and ρ by using (24) and (25), respectively;
18  Check the convergence conditions: ||I − Z(v) − P(v)||_∞ < ε and ||Z(v) − G(v)||_∞ < ε;
19 end
20 Obtain the affinity matrix A = (1/V) Σ_{v=1}^{V} (|Z(v)| + |Z(v)T|)/2;
21 Apply the spectral clustering method with the affinity matrix A;
22 Return clustering result C.

1 https://cvc.yale.edu/projects/yalefaces/yalefaces.html
2 https://cvc.yale.edu/projects/yalefacesB/yalefacesB.html
3 http://www.uk.research.att.com/facedatabase.html
4 http://www-cvr.ai.uiuc.edu/ponce_grp/data/
5 http://www.cs.columbia.edu/CAVE/software/softlib/

4 Experimental Results and Analysis
In this section, in order to evaluate the proposed method, we implement it on six challenging image clustering datasets: Yale¹, Extended YaleB², ORL³, Notting-Hill (Zhang et al. 2009), Scene-15⁴, and COIL-20⁵. The first three are typical face datasets, and Notting-Hill is a video face dataset; Scene-15 is for scene clustering, and COIL-20 is for generic object clustering. Table 1 shows the details of the six datasets, such as the number of images and the number of classes. We use six popular criteria to evaluate a clustering method: Normalized Mutual Information (NMI), Accuracy (ACC), Adjusted Rand index (AR), F-score, Precision and Recall. For all the criteria, the higher the score, the better the performance. On all datasets, we use the average
of ten runs as the final results. In the Section 1 of supplemental material, we present more clustering results, such
as the mean and standard deviation of the six criteria. All
experiments are implemented in Matlab on a workstation
with a 4GHz CPU, 32GB RAM, and a TITAN X GPU (12GB memory). To promote the culture of reproducible research, the source code accompanying this work can be accessed at http://pan.baidu.com/s/1bp4JUqz (password: m9qo).
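The following is a brief scikit-learn/SciPy sketch (our own illustration, not the authors' evaluation code) of two of the criteria above: NMI, and clustering accuracy computed with the usual Hungarian matching between predicted and ground-truth labels.

import numpy as np
from sklearn.metrics import normalized_mutual_info_score
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Best accuracy over all one-to-one matchings of cluster labels to classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes, clusters = np.unique(y_true), np.unique(y_pred)
    cost = np.zeros((clusters.size, classes.size), dtype=int)
    for i, c in enumerate(clusters):
        for j, k in enumerate(classes):
            cost[i, j] = -np.sum((y_pred == c) & (y_true == k))
    row, col = linear_sum_assignment(cost)
    return -cost[row, col].sum() / y_true.size

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]
print(normalized_mutual_info_score(y_true, y_pred), clustering_accuracy(y_true, y_pred))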
Table 1: Statistics of different test datasets
Dataset          Images  Objective       Clusters
Yale             165     Face            15
Extended YaleB   640     Face            10
ORL              400     Face            10
Notting-Hill     4660    Face            5
Scene-15         4485    Scene           15
COIL-20          1440    Generic Object  20
We take different strategies for clustering to deal with different applications. As for face clustering, three types of features are extracted: intensity, LBP (Ojala, Pietikainen, and
Maenpaa 2002), and Gabor (Lades et al. 1993). The LBP
feature is extracted from an image with the sampling size
of 8 pixels and the blocking number of 7 × 8. The Gabor
feature is extracted with only one scale λ = 4 at four orientations θ = {0°, 45°, 90°, 135°}. Thus, the LBP feature is of 3304 dimensions, and the Gabor feature is of 6750 dimensions.
As for Scene-15, three types of features are extracted: Pyramid histograms of visual words (PHOW)(Bosch, Zisserman,
and Munoz 2007), Pairwise rotation invariant co-occurrence
local binary pattern feature (PRI-CoLBP)(Qi et al. 2014),
and CENsus TRansform hISTogram (CENTRIST)(Wu and
Rehg 2011). PHOW is extracted with dense sampling step
of 8 pixels and 300 words and is of 1800 dimensions. PRICoLBP is of 1180 dimensions. Three level features are extracted to form CENTRIST, which contains 1, 5, 25 blocks
respectively. CENTRIST is of 1240 dimensions. COIL-20
uses the same features as those used for face clustering, and the features are extracted from normalized images of size 32 × 32.
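As an illustration of the block-wise LBP features described above, here is a short scikit-image sketch; the sampling and block parameters follow the text, but the exact descriptor construction used by the authors may differ.

import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_histogram(image, P=8, R=1, blocks=(7, 8), bins=59):
    """Concatenate per-block LBP histograms (non-rotation-invariant uniform patterns)."""
    lbp = local_binary_pattern(image, P, R, method="nri_uniform")
    h, w = lbp.shape
    bh, bw = h // blocks[0], w // blocks[1]
    feats = []
    for i in range(blocks[0]):
        for j in range(blocks[1]):
            patch = lbp[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(patch, bins=bins, range=(0, bins))
            feats.append(hist)
    return np.concatenate(feats).astype(float)

img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)  # stand-in for a face image
print(block_lbp_histogram(img).shape)   # 7 * 8 * 59 = 3304 dimensions, matching the text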
In the experiments, we implement a linear kernel on each
view on Yale and ORL and implement linear kernels and
Gaussian kernels on different views on Extended YaleB,
Scene-15, Notting-Hill and COIL-20. We compare our approach with eight representative clustering methods: the
standard spectral clustering algorithm with the most informative view (SPCbest), LRR algorithm with the most informative view (LRRbest), robust multi-view spectral clustering via low-rank and sparse decomposition (RMSC)(Xia
et al. 2014), diversity-induced multi-view subspace clustering (DiMSC) (Cao et al. 2015), low-rank tensor constrained
multi-view subspace clustering (LTMSC) (Zhang et al.
2015), the tensor-SVD multi-view clustering method with
tensor multi-rank minimization (t-SVD-MSC)(Xie et al.
2016), the tensor-SVD with unrotated coefficient tensor (UtSVD-MSC), and exclusivity-consistency regularized MSC
(ECMSC) (Wang et al. 2017). The first two methods are
the single view baselines. The RMSC, DiMSC, and LTMSC
are three state-of-the-art methods in multi-view clustering.
As for ECMSC, only clustering results on Yale, Extended
YaleB, ORL were published. Thus, we run the code provided
by the authors on Notting Hill, Scene-15 and COIL-20 and
obtain the clustering results by tuning the parameters. The
comparison results are shown in Table 2 ∼ 7. We compare
our approach with the eight methods on each dataset. Moreover, confusion matrices of the t-SVD-MSC and the proposed method is shown in Fig. 1 in supplemental material,
in which the superiority of the proposed method can be easily observed. On the six real-world datasets, our approach
achieves the best clustering performance in terms of the
six criteria. The clustering performance of Kt-SVD-MSC is
higher by (0.0917, 0.1052, 0.1425, 0.1290, 0.1438, 0.1140)
than t-SVD-MSC (Xie et al. 2016) and is higher
by (0.1937, 0.2255, 0.3295, 0.2935, 0.3453, 0.2243) than
ECMSC (Wang et al. 2017) in terms of the average gain
of clustering performance on the six datasets. The results
demonstrate that the kernel version of tensor-SVD is helpful
for self-representation multi-view clustering, and the proposed method achieves the best performance among nine
compared methods.
Furthermore, we investigate how the parameter λ influences the clustering performance in terms of NMI and ACC.
In (12), the proposed model contains only one tuning parameter λ which balances the effects of the two parts. And
the choice of λ is set empirically on the testing data of each
dataset. In Fig. 4, we plot the curves of NMI and ACC with
the change of the parameter λ. In each curve, a red circle denotes the λ value where the clustering performance(NMI or
ACC) achieves the highest value. It also demonstrates that
most settings of λ make NMI or ACC higher than those of the other compared methods. Thus, Fig. 4 implies that our approach is stable with regard to λ.
Figure 4: The effect of the parameter λ on NMI and ACC on the Scene-15 dataset. (a) Effect of λ on NMI; (b) effect of λ on ACC.
Thanks to ADM and the inferred closed-form solution,
the proposed optimization model Kt-SVD-MSC converges
fast. In order to discuss the convergence of our approach,
we define the reconstruction error as (26) and matching error
as (27). We plot two curves for the two errors on Scene-15.
The reconstruction error curve is drawn as the solid line and the matching error curve as the dotted line. Observing how the reconstruction error and the matching error change with the number of iterations in Fig. 5, we find that both curves drop sharply to approximately zero within 15 iterations and become flat thereafter. Thus, Fig. 5 demonstrates that the proposed Kt-SVD-MSC converges fast.
Reconerror = (1/V) Σ_{v=1}^{V} ||X(v) − X(v)Z(v) − E(v)||_∞,   (26)
Matcherror = (1/V) Σ_{v=1}^{V} ||Z(v) − G(v)||_∞.   (27)
Figure 5: Convergence curve on Scene-15.

5 Conclusions
In this paper, we proposed a kernel version of self-representation multi-view clustering with tensor multi-rank minimization. A new optimization model for the robust kernel tensor-SVD representation is built, and the ADM algorithm is applied to solve this optimization problem. We provide closed-form solutions for the l2,1-norm-related subproblems so that they can be solved efficiently. Extensive experimental results on three types of applications (face clustering, scene clustering, and generic object clustering) clearly demonstrate that the proposed self-representation multi-view clustering method based on kernelized tensor-SVD is effective and efficient. Among the nine compared clustering methods, our approach achieves breakthrough clustering performance on Scene-15, which is a challenging clustering dataset, and achieves state-of-the-art clustering results on the other datasets.

Table 2: Clustering results (mean) on Yale. We set λ = 0.21 in the proposed Kt-SVD-MSC.
Method        NMI    ACC    AR     F-score  Precision  Recall
SPCbest       0.654  0.618  0.440  0.475    0.457      0.500
LRRbest       0.709  0.697  0.512  0.547    0.529      0.567
RMSC          0.684  0.642  0.485  0.517    0.500      0.535
DiMSC         0.727  0.709  0.535  0.564    0.543      0.586
LTMSC         0.765  0.741  0.570  0.598    0.569      0.629
Ut-SVD-MSC    0.756  0.733  0.584  0.610    0.591      0.630
t-SVD-MSC     0.953  0.963  0.910  0.915    0.904      0.927
ECMSC         0.773  0.771  0.590  0.617    0.584      0.653
Kt-SVD-MSC    0.987  0.982  0.973  0.975    0.971      0.979

Table 3: Clustering results (mean) on Extended YaleB. We set λ = 0.05 in the proposed Kt-SVD-MSC.
Method        NMI    ACC    AR     F-score  Precision  Recall
SPCbest       0.360  0.366  0.225  0.308    0.296      0.310
LRRbest       0.627  0.615  0.451  0.508    0.481      0.539
RMSC          0.157  0.210  0.060  0.155    0.151      0.159
DiMSC         0.636  0.615  0.453  0.504    0.481      0.534
LTMSC         0.637  0.626  0.459  0.521    0.485      0.539
Ut-SVD-MSC    0.479  0.470  0.274  0.350    0.327      0.375
t-SVD-MSC     0.667  0.652  0.500  0.550    0.514      0.590
ECMSC         0.759  0.783  0.544  0.597    0.513      0.718
Kt-SVD-MSC    0.893  0.896  0.813  0.832    0.821      0.842

Table 4: Clustering results (mean) on ORL. We set λ = 0.01 in the proposed Kt-SVD-MSC.
Method        NMI    ACC    AR     F-score  Precision  Recall
SPCbest       0.884  0.725  0.655  0.664    0.610      0.728
LRRbest       0.895  0.773  0.724  0.731    0.701      0.754
RMSC          0.872  0.723  0.645  0.654    0.607      0.709
DiMSC         0.940  0.838  0.802  0.807    0.764      0.856
LTMSC         0.930  0.795  0.750  0.768    0.766      0.837
Ut-SVD-MSC    0.874  0.765  0.666  0.675    0.643      0.708
t-SVD-MSC     0.993  0.970  0.967  0.968    0.946      0.991
ECMSC         0.947  0.854  0.810  0.821    0.783      0.859
Kt-SVD-MSC    0.994  0.971  0.972  0.972    0.956      0.991
Table 5: Clustering results (mean) on Notting-Hill. We set λ = 0.009 in the proposed Kt-SVD-MSC.
Method        NMI    ACC    AR     F-score  Precision  Recall
SPCbest       0.723  0.816  0.712  0.775    0.780      0.776
LRRbest       0.579  0.794  0.558  0.653    0.672      0.636
RMSC          0.585  0.807  0.496  0.603    0.621      0.586
DiMSC         0.799  0.837  0.787  0.834    0.822      0.827
LTMSC         0.779  0.868  0.777  0.825    0.830      0.814
Ut-SVD-MSC    0.837  0.933  0.847  0.880    0.900      0.861
t-SVD-MSC     0.900  0.957  0.900  0.922    0.937      0.907
ECMSC         0.817  0.767  0.679  0.764    0.637      0.954
Kt-SVD-MSC    0.973  0.992  0.981  0.985    0.989      0.981
Table 6: Clustering results (mean) on Scene-15. We set λ = 0.0018 in the proposed Kt-SVD-MSC.
Method        NMI    ACC    AR     F-score  Precision  Recall
SPCbest       0.421  0.437  0.270  0.321    0.314      0.329
LRRbest       0.426  0.445  0.272  0.324    0.316      0.333
RMSC          0.564  0.507  0.394  0.437    0.425      0.450
DiMSC         0.269  0.300  0.117  0.181    0.173      0.190
LTMSC         0.571  0.574  0.424  0.465    0.452      0.479
Ut-SVD-MSC    0.555  0.510  0.375  0.422    0.389      0.460
t-SVD-MSC     0.858  0.812  0.771  0.788    0.743      0.839
ECMSC         0.463  0.457  0.303  0.357    0.318      0.408
Kt-SVD-MSC    0.966  0.984  0.967  0.969    0.971      0.968
Table 7: Clustering results (mean) on COIL-20. We set λ = 0.0009 in the proposed Kt-SVD-MSC.
Method        NMI    ACC    AR     F-score  Precision  Recall
SPCbest       0.806  0.672  0.619  0.640    0.596      0.692
LRRbest       0.829  0.761  0.720  0.734    0.717      0.751
RMSC          0.800  0.685  0.637  0.656    0.620      0.698
DiMSC         0.846  0.778  0.732  0.745    0.739      0.751
LTMSC         0.860  0.804  0.748  0.760    0.741      0.776
Ut-SVD-MSC    0.841  0.788  0.732  0.746    0.731      0.760
t-SVD-MSC     0.884  0.830  0.786  0.800    0.785      0.808
ECMSC         0.942  0.782  0.781  0.794    0.695      0.925
Kt-SVD-MSC    0.992  0.990  0.983  0.984    0.984      0.985
References
[Bosch, Zisserman, and Munoz 2007] Bosch, A.; Zisserman,
A.; and Munoz, X. 2007. Image classification using random
forests and ferns. In Computer Vision, 2007. ICCV 2007.
IEEE 11th International Conference on, 1–8. IEEE.
[Boyd et al. 2011] Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.;
and Eckstein, J. 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning 3(1):1–
122.
[Burden and Faires 2011] Burden, R., and Faires, J. 2011.
The bisection method. Numerical Analysis 48–56.
[Cao et al. 2015] Cao, X.; Zhang, C.; Fu, H.; Liu, S.; and
Zhang, H. 2015. Diversity-induced multi-view subspace
clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, 586–594.
[Cristianini, Shawe-Taylor, and Kandola 2002] Cristianini,
N.; Shawe-Taylor, J.; and Kandola, J. S. 2002. Spectral
kernel methods for clustering. In Advances in neural
information processing systems, 649–655.
[Elhamifar and Vidal 2013] Elhamifar, E., and Vidal, R.
2013. Sparse subspace clustering: Algorithm, theory, and
applications. IEEE transactions on pattern analysis and machine intelligence 35(11):2765–2781.
[Hofmann, Schölkopf, and Smola 2008] Hofmann,
T.;
Schölkopf, B.; and Smola, A. J. 2008. Kernel methods in
machine learning. The annals of statistics 1171–1220.
[Lades et al. 1993] Lades, M.; Vorbruggen, J. C.; Buhmann,
J.; Lange, J.; Von Der Malsburg, C.; Wurtz, R. P.; and Konen, W. 1993. Distortion invariant object recognition in the
dynamic link architecture. IEEE Transactions on computers
42(3):300–311.
[Lin, Chen, and Ma 2010] Lin, Z.; Chen, M.; and Ma, Y.
2010. The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint
arXiv:1009.5055.
[Lin, Liu, and Su 2011] Lin, Z.; Liu, R.; and Su, Z. 2011.
Linearized alternating direction method with adaptive
penalty for low-rank representation. In Advances in Neural
Information Processing Systems 612–620.
[Liu et al. 2013] Liu, G.; Lin, Z.; Yan, S.; Sun, J.; Yu, Y.; and
Ma, Y. 2013. Robust recovery of subspace structures by lowrank representation. IEEE Transactions on Pattern Analysis
and Machine Intelligence 35(1):171–184.
[Luo et al. 2015] Luo, Y.; Tao, D.; Wen, Y.; Kotagiri, R.; and
Xu, C. 2015. Tensor canonical correlation analysis for
multi-view dimension reduction. IEEE Trans. on Knowledge and Data Engineering 27(11):3111–3124.
[Ng, Jordan, and Weiss 2002] Ng, A. Y.; Jordan, M. I.; and
Weiss, Y. 2002. On spectral clustering: Analysis and an
algorithm. In Advances in neural information processing
systems, 849–856.
[Ojala, Pietikainen, and Maenpaa 2002] Ojala,
T.;
Pietikainen, M.; and Maenpaa, T. 2002. Multiresolution gray-scale and rotation invariant texture classification
with local binary patterns. IEEE Transactions on pattern
analysis and machine intelligence 24(7):971–987.
[Qi et al. 2014] Qi, X.; Xiao, R.; Li, C.-G.; Qiao, Y.; Guo,
J.; and Tang, X. 2014. Pairwise rotation invariant co-
occurrence local binary pattern. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(11):2199–2213.
[Shawe-Taylor and Cristianini 2004] Shawe-Taylor, J., and
Cristianini, N. 2004. Kernel methods for pattern analysis.
Cambridge university press.
[Wang et al. 2017] Wang, X.; Guo, X.; Lei, Z.; Zhang, C.;
and Li, S. Z. 2017. Exclusivity-consistency regularized
multi-view subspace clustering. CVPR.
[Wu and Rehg 2011] Wu, J., and Rehg, J. M. 2011. Centrist: A visual descriptor for scene categorization. IEEE
transactions on pattern analysis and machine intelligence
33(8):1489–1501.
[Xia et al. 2014] Xia, R.; Pan, Y.; Du, L.; and Yin, J. 2014.
Robust multi-view spectral clustering via low-rank and
sparse decomposition. In AAAI, 2149–2155.
[Xiao et al. 2016] Xiao, S.; Tan, M.; Xu, D.; and Dong, Z. Y.
2016. Robust kernel low-rank representation. IEEE transactions on neural networks and learning systems 27(11):2268–
2281.
[Xie et al. 2016] Xie, Y.; Tao, D.; Zhang, W.; Zhang, L.;
and Yanyun, Q. 2016. On unifying multi-view selfrepresentations for clustering by tensor multi-rank minimization. arXiv preprint arXiv:1610.07126.
[Xu, Tao, and Xu 2013] Xu, C.; Tao, D.; and Xu, C. 2013. A
survey on multi-view learning. Computer Science.
[Yang and Zhang 2011] Yang, J., and Zhang, Y. 2011. Alternating direction algorithms for l1 -problems in compressive
sensing. SIAM journal on scientific computing 33(1):250–
278.
[Zhang et al. 2009] Zhang, Y.-F.; Xu, C.; Lu, H.; and Huang,
Y.-M. 2009. Character identification in feature-length films
using global face-name matching. IEEE Transactions on
Multimedia 11(7):1276–1288.
[Zhang et al. 2015] Zhang, C.; Fu, H.; Liu, S.; Liu, G.; and
Cao, X. 2015. Low-rank tensor constrained multiview subspace clustering. In Proceedings of the IEEE International
Conference on Computer Vision, 1582–1590.
arXiv:1401.5438v1 [math.CA] 21 Jan 2014
On the Reduction of Singularly-Perturbed
Linear Differential Systems∗
Moulay Barkatou, Suzy S. Maddah
XLIM UMR 7252 ; DMI
University of Limoges; CNRS
123, Avenue Albert Thomas
87060 Limoges, France
[email protected]
[email protected]
†
Hassan Abbas
Laboratory of Mathematics
Lebanese University, Beirut, Lebanon
[email protected]
February 17, 2018
Abstract
In this article, we recover singularly-perturbed linear differential
systems from their turning points and reduce their parameter singularity’s rank to its minimal integer value. Our treatment is Moser-based;
that is to say it is based on the reduction criterion introduced for linear
singular differential systems in [Moser, 1960]. Such algorithms have
proved their utility in the symbolic resolution of the systems of linear functional equations [Barkatou, 1989; Barkatou et al., 2008, 2009],
giving rise to the package ISOLDE [Barkatou et al., 2013], as well as in
the perturbed algebraic eigenvalue problem [Jeannerod et al., 1999].
In particular, we generalize the Moser-based algorithm described in
[Barkatou, 1995]. Our algorithm, implemented in the computer algebra system Maple, paves the way for efficient symbolic resolution of
singularly-perturbed linear differential systems as well as further applications of Moser-based reduction over bivariate (differential) fields
[Barkatou et al., 2014].
∗ Submitted to ISSAC’14, Kobe, Japan
† Enrolled under a joint PhD program with the Lebanese University
Keywords: Moser-based reduction, Perturbed linear differential systems,
Turning points, Computer algebra.
1 Introduction
Let K be a commutative field of characteristic zero equipped with a derivation δ, that is a map δ : K → K satisfying
δ(f + g) = δf + δg and δ(f g) = δ(f )g + f δ(g) for all f, g ∈ K.
We denote by C its field of constants, V a K-vector space of dimension n, and
A an element in K n×n . Set ∆ = δ − A, then ∆ is a δ- differential operator
acting on V , that is, a C-linear endomorphism of V satisfying the Leibniz
condition:
∀f ∈ K, v ∈ V ∆(f v) = δ(f )v + f ∆(v).
Let O = C[[x, ǫ]], the ring of formal power series in x and ǫ over the field of
complex numbers, where x is a complex variable and ǫ is a small parameter.
Denote by K its field of fractions equipped with δ = ǫ d/dx. Let Ox = C[[x]],
Kx be its fields of fractions, and h, p be integers (h ≥ 0). Thereby, ∆ is the
differential operator associated to a singularly-perturbed linear differential
system. Denoting by Y an unknown n-dimensional column vector, the former
is written as
ǫ dY/dx = A(x, ǫ)Y = ǫ^{−h} x^{−p} Σ_{k=0}^{∞} A_k(x) ǫ^k Y.   (1)
Such systems have countless applications which are traced back to the year
1817 and their study encompasses a vast body of literature (see, e.g. [Chen,
1984; Lin, 1966; McHugh, 1971; Wasow, 1985] and references therein). However, their symbolic resolution is still open to investigation.
Clearly, system (1) is a singular perturbation of the widely studied linear
singular system of differential equations (see, e.g., [Balser, 2000; Hsieh et al.,
1999] and references therein). The latter is obtained by setting δ = x d/dx and
K = C((x)), the univariate field of formal Laurent power series. Hence ∆ is
associated to
x dY/dx = A(x)Y = x^{−p} Σ_{k=0}^{∞} A_k x^k Y.   (2)
Contrary to system (1), there exist efficient algorithms contributing to the
formal reduction of system (2) (see, e.g [Barkatou, 1997; Pflugel , 2000]),
that is the algorithmic procedure that computes a change of basis w.r.t.
which A(x) has normal form facilitating the construction of formal solutions.
Among which, rank reduction algorithms play an eminent role as they reduce
the nonnegative integer p, called the Poincaré rank, to its minimal integer
value, the true Poincaré rank (see, e.g., [Barkatou et al., 2009; Levelt, 1991]).
In particular, if the latter is null then system (2) is said to be regular singular,
that is to say, in any small sector, its solutions grow at most as an algebraic
function. Otherwise, it is irregular singular.
However, the classical formal simplification of system (2) begins with the
reduction of its leading coefficient matrix A0 to its Jordan form. Hence, in
addition to the usual difficulties encountered within the formal reduction itself, additional ones arise for system (1) since its leading coefficient matrix
A0 (x) is a matrix-valued function rather than a constant one. In particular, if it has multiple eigenvalues then they might coalesce (see, e.g., the
example page 223, [Wasow, 1985]). In classical textbooks, points where the
Jordan form of A0 (x) is unstable, that is to say either the multiplicity of the
eigenvalues or the degrees of the elementary divisors are not constant in a
neighborhood of such points, are referred to as turning points (see, e.g., page
57, [Wasow, 1979]). The behavior of solutions of differential systems around
such points is far more complicated than that of system (2) around an irregular singularity. In fact, the neighborhood of such a point is decomposed into
a finite number of x-dependent neighborhoods in each of which the solution
behaves quite differently though it may still be asymptotically accomplished
(see, e.g., [Iwano et al., 1963]). In particular, if h ≤ −1 then the solution of
system (1) can be sought upon presenting the solution as a power series in
ǫ. The latter can then be inserted into (1) and the like powers of ǫ equated.
This reduces the problem to solving recursively a set of non-homogeneous
linear singular differential systems over Kx , the first of which is system (2)
(see, e.g. page 52, [Chen, 1984] and page 101, [Wasow, 1979]). Hence, it
seems plausible to investigate whether h rather than p can be diminished for
such systems.
The methods proposed in the literature of system (1) either exclude turning point [Chen, 1984] or are not algorithmic throughout. Moreover, they
make an essential use of the so-called Arnold-Wasow form. For the univariate
case, system (2), the research advanced profoundly in the last two decades
making use of methods of modern algebra and topology. The former classical approach is substituted by efficient algorithms (see, e.g., [Barkatou, 1997;
Barkatou et al., 2013, 2009; Pflugel , 2000] and references therein). It was
the hope of Wasow [Wasow, 1985], in his 1985 treatise summing up contemporary research directions and results on system (1), that techniques of
system (2) be generalized to tackle the problems of system (1).
In this article, we are interested in recovering system (1) from its turning
points and decoupling it into a set of systems of lower dimensions for each of
which h has a minimal integer value.
Given system (2), Moser defined two rational numbers:
m(A) = max(0, p + rank(A0)/n)   (3)
and
µ(A) = min{ m(T^{−1}∆T) | T ∈ GL(V) }.   (4)
It follows that system (2) is regular whenever µ(A) ≤ 1. For m(A) > 1, he
3
proved that m(A) > µ(A) if and only if the polynomial
θ(λ) := xrank(A0 ) det(λI +
A0
+ A1 )|x=0
x
vanishes identically in λ. In this case, system (2) (resp. A(x)) is said to
be Moser-reducible and m(A) can be diminished by applying a coordinate
transformation Y = T Z where T ∈ GL(V ) of the form
T = (P0 + P1 x) diag(1, . . . , 1, x, . . . , x)
where P0 , P1 are constant matrices and det(P0 ) 6= 0 [Theorems 1 and 2,
page 381, [Moser, 1960]]. This notion and algorithms developed (see, e.g.
[Barkatou et al., 2009]), were generalized as well to linear functional matrix
equations in [Barkatou et al., 2008], a particular case of which is system (2).
This gave rise to the package ISOLDE [Barkatou et al., 2013], written in the
computer algebra system Maple and dedicated to their symbolic resolution.
Moser’s reduction criterion was also borrowed from the theory of differential systems to investigate efficient algorithmic resolution of the perturbed
algebraic eigenvalue-eigenvector problem in [Jeannerod et al., 1999]. This
problem is another prominent example in the univariate case, for which δ is
the zero map and K = C((ǫ)). Thereby, ∆ is just a linear operator in the
standard way and A(ǫ) is the widely studied perturbation of the constant
matrix A0 (see, e.g., [Kato, 1980]).
However, despite their utility and efficiency in these univariate cases,
Moser-based algorithms are not considered yet over bivariate fields. In particular, the second author of this article developed in [Barkatou, 1995] a
d
Moser-based algorithm for the differential systems associated to δ = x dx
and
K = Q(x), the field of rational functions in x. This algorithm is the one we
generalize here to system (1). Although Moser’s motivation for introducing
these rational numbers for system (2) was to distinguish regular singular systems in the sense of Poincaré, we show hereby that they serve as well for the
reduction of h to its minimal integer value.
This article is organized as follows: In Section 2, we give preliminaries. In
Section 3, we present a Moser-based method of recovery from turning points.
In Section 4, we introduce a Moser-based reduction algorithm to diminish h
to its minimal integer value. Our main results are Theorem 1 and Proposition 1. In Subection 4.3, we give an outline of a generalization of Levelt’s
rank reduction algorithm of [Levelt, 1991] and illustrate by an example its
comparison to the Moser-based one. Finally, we conclude in Section 5 and
point out some prospects of further research. We remark however that as
our interest is formal, reference to asymptotic behavior and/or convergence is
dropped. One may consult in the latter direction, e.g., [Balser, 2000; Wasow,
1985].
4
2
Preliminaries
Consider system (1). Without loss of generality, we can assume that Ak (x) ∈
Kxn×n and the leading coefficient matrix A0 (x) is a nonzero element of Oxn×n
otherwise h and p can be readjusted. We refer to A00 := A(0, 0) as the
leading constant matrix. The nonnegative integer h will be referred to as
the rank of singularity in ǫ (ǫ-rank, in short). We assume that system (1)
has at most one turning point otherwise the region of study can be shrinked.
Moreover, this turning point is placed at the origin, otherwise, a translation
in the independent variable can be performed. We denote by Ik and Ok,l the
identity and zero matrices respectively of prescribed dimensions.
Let T ∈ GLn (K) then the change of the dependent variable Y = T Z in
(1) gives
∞
X
dZ
Ak (x)ǫk Z.
(5)
ǫ
= Ã(x, ǫ)Z = ǫ−h̃ x−p̃
dx
k=0
where Ã(x, ǫ) ∈ K n×n and
T [A] := Ã(x, ǫ) = T −1 A(x, ǫ)T − ǫ T −1
d
T.
dx
(6)
Systems (1) and (5) are called equivalent. In fact, T is a change of basis and
Ã(x, ǫ) is the matrix of T −1 ∆T .
Remark 1 Given system (1) and an equivalent system (5).
• If h > 0 and T (x) ∈ GLn (Kx ) then it follows from (6)
x−p̃ (Ã0 (x) + Ã1 (x)ǫ) = T −1 x−p (A0 (x) + A1 (x)ǫ)T.
Hence, since we’ll be interested solely in Ã0 (x) and/or Ã1 (x), it suffices
to investigate T −1 AT , the similarity term of T [A]. In particular, if
T (x) ∈ GLn (Ox ) then p = p̃ as well.
• If T = diag(ǫα1 , . . . , ǫαn ), where α1 , . . . , αn are integers, then δT is a
zero matrix. Hence, T [A] = T −1 AT .
A classical tool in perturbation theory is the so-called splitting which separates off the existing distinct coalescence patterns. Whenever the leading
constant matrix A00 admits at least two distinct eigenvalues, there exists
a coordinate transformation T ∈ GLn (O) which block-diagonalizes System
(1) so that it can be decoupled into subsystems of lower dimensions whose
leading constant matrices have a unique distinct eigenvalue each (see, e.g.,
Theorem XII − 4 − 1 page 381, [Hsieh et al., 1999]).
Remark 2 In (Theorem 2.3 − 1, page 17, [Wasow, 1985]), we have a generalization of the splitting to the well-behaved case which refers to A0 (x) being
5
holomorphically similar to its holomorphic Jordan form. In particular, its
eigenvalues don’t coalesce in the region of study. However, we limit ourselves
to the weaker version given above as we aim to give a general discussion
which doesn’t exclude turning points.
We now consider one of these resulting subsystems and assume its leading
constant matrix is in jordan normal form with a unique eigenvalue γ ∈ C.
This can be always attained by a constant transformation. Upon applying
the so-called eigenvalue shifting, i.e.
Z
−h−1
γx−p dx)Z,
Y = exp(ǫ
it is easy to verify from (6) that the resulting system has a nilpotent leading
constant matrix. Hence, without loss of generality, we can assume that system (1) is such that A00 is nilpotent. Clearly, it doesn’t follow that A0 (x) is
nilpotent and we deviate here from the classical treatment of system (2) as
we may encounter turning points.
The subscripts are to be ommitted and x, ǫ dropped from the notation
whenever ambiguity is not likely to arise, e.g. A(x, ǫ) , A0 (x), A1 (x) will be
denoted by A, A0 , A1 respectively. We set r = rank(A0 (x)).
3
Recovery from Turning Points
We arrive at this section with A0 (x) of system (1) being nonnilpotent in
contrary to the leading constant matrix A00 . We show that by Moser-based
reduction of A0 (x) itself and ramification in x, that is a readjustment x = ts
with s a positive integer, we can modify radically the nilpotency of A00 , i.e.
arrive at a system for which splitting can be applied.
The motivation behind considering such a general form of system (1)
rather than that for which p ≤ 0 is to be justified in this section. In fact,
system (1) will undergo a sequence of transformations which might introduce
or elevate the order of the pole in x. This elevation is inevitable and is
introduced identically in the classical treatment of such systems. However,
the order of poles of Ak (x), k = 0, 1, 2, . . . grows at worst linearly with k
which maintains an asymptotic validity of the formal construction [page 60,
[Wasow, 1979]].
By Remark (1), the discussion is restricted to the similarity term T −1 AT
of T [A]. Hence, A0 (x) can be treated as a perturbation of A00 .
Proposition 1 Given system (1)
ǫ
∞
X
dY
= A(x, ǫ)Y = ǫ−h x−p
Ak (x)ǫk Y
dx
k=0
6
where A0 (x) = A00 + A01 x + A02 x2 + . . . such that A00 is a nilpotent constant
matrix. If A0 (x) has a nonzero eigenvalue then there exist a positive integer
s and a T ∈ GLn (Kx ) such that by setting x = ts , Y = T Z, we have in (5)
Ã0 (t) := T −1 A0 (t)T = Ã00 + Ã01 t + Ã02 t2 + . . .
where Ã00 has a nonzero constant eigenvalue.
Proof. The eigenvalues of A0 (x) admit a formal expansion in the fractional
powers of x in the neighborhood of x = 0 (see, e.g. [Kato,P1980]). We are
j/s
interested only in their first nonzero terms. Let µ(x) = ∞
be a
j=0 µj x
nonzero eigenvalue of A0 (x) whose leading exponent, i.e. smallest j/s for
which µj 6= 0 and j, s are coprime, is minimal among the other nonzero
eigenvalues. Then, without loss of generality, we can assume that s = 1
otherwise we set x = ts . Now, let T ∈ GLn (Kx ) such that Ã0 (x) is Moserirreducible in x i.e.
θ(λ) = xrank(Ã00 ) det(λI +
Ã00
+ Ã01 )|x=0
x
doesn’t vanish identically in λ. Then there are n − degθ eigenvalues of A0 (x)
whose leading exponents lie in [0, 1[, and deg θ eigenvalues for which the
leading exponent is greater or equal 1 [?]in [Jeannerod et al., 1999]]. But
µ(x) is an eigenvalue of Ã0 (x) with a minimal leading exponent and hence
it is among those whose leading exponents lie in [0, 1[. By our assumption,
s = 1 and hence the leading exponent of µ(x) is zero and µ0 6= 0. Since µ0 is
an eigenvalue of Ã00 , it follows that the latter is nonnilpotent.
Remark 3 The eigenvalues of A0 (x) are the roots of the algebraic scalar
equation f (x, µ) = det(A(x) − µIn ) = 0 and can be computed by NewtonPuiseux algorithm. The linear transformation T ∈ GLn (Kx ) can be computed
efficiently via ISOLDE [Barkatou et al., 2013].
We illustrate our approach by an Example from [page 88, [Wasow, 1979]].
We remark however that the technique proposed in the former, adapted
from [Turrittin, 1952] and [Iwano et al., 1963], debutes with the reduction
of A(x, ǫ) to its Arnold-Wasow form as mentioned above and then constructing the so-called characteristic polygon. This particular example was given
initially in that for simplicity and hence, the transformations computed by
our algorithm, which doesn’t require reduction to this form, do not deviate
hereby from those in the former.
0 1 0
Example 1 Let ǫ dY
= ǫ−1 0 0 1 Y . Clearly, we have a truning point
dx
ǫ 0 x
at x = 0 and
0 1 0
A0 (x) = 0 0 1 .
0 0 x
7
A00 is nilpotent in Jordan form while the eigenvalues of A0 (x) are 0, 0, x
whence s = 1. A simple calculation shows that A0 (x) is Moser-reducible in
x (θ(λ) ≡ 0). Let T = diag(1, x, x2 ) then upon setting Y = T Z we get
0 1 0
0 0 0
dZ
= ǫ−1 x{0 0 1 + 0 0 0 x−3 ǫ +
ǫ
dx
0 0 1
1 0 0
0 0
0
0 −1 0 x−2 ǫ2 }Z.
0 0 −2
The constant leading coefficient matrix of the resulting matrix is no longer
nilpotent. Hence the system can be decoupled into two subsystems upon setting
Z = T W where
1 0 1
−1 −1 0
T = 0 1 1 + −1 −1 −2 x−3 ǫ + O(x−6 ǫ2 ).
0 0 1
−1 −1 0
The resulting equivalent system then consists of the two uncoupled lower dimension systems where W = [W1 , W2 ]T :
dW1
0 1
−1 0 −3
−1
ǫ
= ǫ x{
+
x ǫ+
0 0
−1 0
dx
1
−1
x−6 ǫ2 + O(x−9 ǫ3 )}W1 .
1 −1 + x4
dW2
= ǫ−1 x{1 + x−3 ǫ + (1 + 2x4 )x−6 ǫ2 + O(x−9 ǫ3 )}W2 .
dx
For the former subsystem, A0 (x) and A00 are now simultaneously nilpotent.
As for the latter, the exponential part is 21 ǫ−2 x2 (1 + O(x−3 ǫ)) which exhibits
that the eigenvalue x of the leading coefficient matrix of the system we started
with is recovered as expected.
ǫ
4
ǫ-Rank Reduction
Following Section 3, we can now assume without loss of generality that A0 (x)
and A00 are simultaneously nilpotent. The system is recovered from its turning points and no further reduction can be attained via Splitting. In analogy
to modern techniques for system (2), we investigate ǫ-rank reduction of system (1).
8
4.1
ǫ-Reduction Criteria
Following (3) and (4), we define respectively the ǫ-Moser rank and ǫ-Moser
Invariant of system (1) as the rational numbers:
mǫ (A) =
µǫ (A) =
r
) and
n
min {mǫ (T [A])|T ∈ GLn (K) }.
max (0, h +
Definition 1 System (1) (the matrix A respectively) is called ǫ-reducible in
the sense of Moser (ǫ-reducible, in short) if mǫ (A) > µǫ (A), otherwise it is
said to be ǫ-irreducible.
Remark 4 This definition is not to be mixed neither with the usual sense
of reduced system in the literature, i.e. the system (2) obtained from system
(1) (ǫ = 0); nor with the usual sense of Moser-reduced systems as for system
(2), i.e. the systems whose p is minimal.
In particular, if mǫ (A) ≤ 1 then h = 0 and no further reduction is to be
sought via this approach.
4.2
ǫ-Reduction Algorithm
We follow the algorithmic description of [Barkatou, 1995]. The main result
of this section is the following theorem which is to be proved after giving its
useful building blocks.
Theorem 1 Given system (1)
∞
X
dY
−h −p
ǫ
Ak (x)ǫk Y
= A(x, ǫ)Y = ǫ x
dx
k=0
such that rank(A0 (x)) = r and mǫ (A) > 1. A necessary and sufficient
condition for the system to be ǫ-reducible (in the sense of Moser), i.e. the
existence of a T (x, ǫ) ∈ Gln (K) such that r(Ã0 (x)) < r, is that the polynomial
θ(λ) := ǫr det(λI +
A0
+ A1 )|ǫ = 0
ǫ
vanishes identically in λ.
Moreover, T (x, ǫ) can always be chosen to be a product of unimodular transformations in GLn (Ox ) and polynomial transformations of the form diag(ǫα1 , . . . , ǫαn )
where α1 , . . . , αn are nonnegative integers.
9
Lemma 1 There exists a unimodular matrix U(x) in GLn (Ox ) such that the
leading coefficient matrix Ã0 (x) of U[A] have the following form
11
Ã0 O
Ã0 =
(7)
Ã21
O
0
11
Ã0
11
where Ã0 is a square matrix of dimension r and
is a n × r matrix of
Ã21
0
full column rank r.
Proof. By Remark 1, Ã0 = U −1 A0 U hence it suffices to search a similarity
transformation U(x). Since Ox is a Principal Ideal Domain (the ideals of Ox
are of the form xk O), it is well known that one can construct unimodular
transformations Q(x), R(x) lying in GLn (Ox ) such that the matrix QA0 R
has the Smith Normal Form
Q A0 R = diag(xβ1 , . . . , xβr , 0, . . . , 0))
where detR(0) 6= 0, detQ(0) 6= 0 , and β1 , . . . , βr in Z with 0 ≤ β1 ≤ β2 ≤
· · · ≤ βr .
It follows that we can compute a unimodular matrix R(x) in GLn (Ox ) so that
its n − r last columns form a basis of ker(A0 ). Hence, we set U(x) = R(x).
Remark 5 In practice, U(x) can be obtained by performing gaussian elimination on the columns of A0 (x) taking as pivots the elements of minimum
order (valuation) in x as already suggested in [Barkatou et al., 2006].
Hence, we can suppose without loss of generality that A_0(x) consists of r independent columns and (n − r) zero columns. We partition A_1(x) conformally with A_0, i.e. A_1 = \begin{pmatrix} A_1^{11} & A_1^{12} \\ A_1^{21} & A_1^{22} \end{pmatrix}, and consider

G_λ(A) = \begin{pmatrix} A_0^{11} & A_1^{12} \\ A_0^{21} & A_1^{22} + λ I_{n−r} \end{pmatrix}.   (8)
We illustrate our progress with the following simple example.

Example 2 Given ǫ \frac{dY}{dx} = A(x, ǫ) Y where

A = ǫ^{-2} \begin{pmatrix} ǫ & -x^3 ǫ & (1+x)ǫ & 0 \\ x^2 & xǫ & 0 & -2xǫ \\ -x & 0 & 0 & 2ǫ \\ 0 & 2 & 0 & ǫ^2 \end{pmatrix}.

Clearly, A_0(x) is nilpotent of rank 2 and

G_λ(A) = \begin{pmatrix} 0 & 0 & x+1 & 0 \\ x^2 & 0 & 0 & -2x \\ -x & 0 & λ & 2 \\ 0 & 2 & 0 & λ \end{pmatrix}.   (9)
This consideration of G_λ(A) gives an ǫ-reduction criterion equivalent to θ(λ), as demonstrated in the following lemma.

Lemma 2 det(G_λ(A)) vanishes identically in λ if and only if θ(λ) does.
Proof. Let D(ǫ) = diag(ǫ I_r, I_{n−r}); then we can write ǫ^{-1} A = N D^{-1} where N := N(x, ǫ) ∈ K^{n×n} has no poles in ǫ. Set D_0 = D(0) and N_0 = N(x, 0). Then we have

det(G_λ(A)) = det(N_0 + λ D_0) = det(N + λ D)|_{ǫ=0}
 = \Big( det\big(\frac{A}{ǫ} + λ I_n\big) det(D) \Big)\Big|_{ǫ=0}
 = \Big( det\big(\frac{A_0}{ǫ} + A_1 + λ I_n\big) \, ǫ^r \Big)\Big|_{ǫ=0} = θ(λ).
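The criterion of Lemma 2 can be checked mechanically. The following sympy sketch does so for Example 2; the matrix entries are those reconstructed above, so they should be treated as an assumption rather than as verified data.

```python
import sympy as sp

x, eps, lam = sp.symbols('x epsilon lambda')

# Bracketed part M of A = eps**(-2) * M in Example 2 (entries as reconstructed above).
M = sp.Matrix([
    [eps,  -x**3*eps, (1 + x)*eps,  0],
    [x**2,  x*eps,     0,          -2*x*eps],
    [-x,    0,         0,           2*eps],
    [0,     2,         0,           eps**2],
])
A0 = M.applyfunc(lambda e: e.coeff(eps, 0))   # leading coefficient matrix A_0(x)
A1 = M.applyfunc(lambda e: e.coeff(eps, 1))   # next coefficient matrix A_1(x)
r = A0.rank()                                 # expect r = 2

theta = sp.simplify(eps**r * (lam*sp.eye(4) + A0/eps + A1).det())
print(r, sp.simplify(theta.subs(eps, 0)))     # theta(lambda) == 0, so A is eps-reducible
```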
Proposition 2 If mǫ(A) > 1 and det(G_λ(A)) ≡ 0 then there exists a unimodular matrix Q(x) in GL_n(O_x) with det Q(x) = ±1 such that the matrix G_λ(Q[A]) has the form

G_λ(Q[A]) = \begin{pmatrix} A_0^{11} & U_1 & U_2 \\ V_1 & W_1 + λ I_{n−r−ρ} & W_2 \\ M_1 & M_2 & W_3 + λ I_ρ \end{pmatrix},   (10)

where 0 ≤ ρ ≤ n − r, W_1, W_3 are square matrices of order (n − r − ρ) and ρ respectively, and

rank\begin{pmatrix} A_0^{11} & U_1 \\ M_1 & M_2 \end{pmatrix} = rank\begin{pmatrix} A_0^{11} & U_1 \end{pmatrix},   (11)

rank\begin{pmatrix} A_0^{11} & U_1 \end{pmatrix} < r.   (12)
Our procedure is that of [Barkatou, 1995] except for the properties of M1 ,
M2 , and W3 . The nullity of M1 and M2 in the former is replaced by the
weaker condition (11) here. Otherwise, the unimodularity of Q(x) cannot
be guaranteed. Moreover, this refinement avoids unnecessary computations.
The bridge between both descriptions is established in Remark 6 before proceeding to the proof of the Proposition.
Remark 6 Suppose that G_λ(Q[A]) has the form (10) and there exists a transformation T(x) ∈ GL_n(K_x) such that G_λ(T[Q[A]]) has the form

G_λ(T[Q[A]]) = \begin{pmatrix} A_0^{11} & U_1 & U_2 \\ V_1 & W_1 + λ I_{n−r−ρ} & W_2 \\ O & O & W̃_3 + λ I_ρ \end{pmatrix},

where 0 ≤ ρ ≤ n − r, W_1, W̃_3 are square matrices of order (n − r − ρ) and ρ respectively, and W̃_3 is upper triangular with zero diagonal. Then,

det G_λ(T[Q[A]]) = λ^ρ det \begin{pmatrix} A_0^{11} & U_1 \\ V_1 & W_1 + λ I_{n−r−ρ} \end{pmatrix}.

If det(G_λ(A)) ≡ 0 then we have as well det(G_λ(Q[A])) = det(G_λ(T[Q[A]])) ≡ 0. Hence,

det \begin{pmatrix} A_0^{11} & U_1 \\ V_1 & W_1 + λ I_{n−r−ρ} \end{pmatrix} = 0.   (13)

We shall denote by G_0^{(ρ)} the matrix G_0^{(ρ)} = \begin{pmatrix} A_0^{11} & U_1 \\ V_1 & W_1 \end{pmatrix}.
Proof. (Proposition 2) Since det(Gλ (A)) ≡ 0 then in particular, det(Gλ=0 (A)) =
0. The matrix Gλ=0 (A) is thus singular. Let E (respectively F ) be the vector
space spanned by the first r (resp. last n − r) rows of Gλ=0 (A). We have
dim(E + F ) = rank(Gλ=0 (A)) < n.
If dim(E) < r then set ρ = 0. Otherwise, since
dim(E + F ) = dim(E) + dim(F ) − dim(E ∩ F ) < n,
it follows that either dim(F ) < n − r or dim(E ∩ F ) > 0. In both cases,
there exists at least a row vector W1 (x) = (w1 , . . . , wn ) with entries in C((x))
in the left null space of Gλ=0 (A), such that wi 6= 0 for some r + 1 ≤ i ≤ n.
We can assume without loss of generality that W1 (x) has its entries in Ox .
Indeed, this assumption can be guaranteed by a construction as in Lemma 1.
Let V1 (x) be the nonzero row vector whose entries vi are such that vi = wi
for r + 1 ≤ i ≤ n and zero elsewhere. It can be attained that vn = 1 upon
replacing A by P [A] where P is a permutation matrix which exchanges the
rows of A so that the pivot is taken to have the minimum order (valuation) in
x. Let R1 (x) be the matrix whose nth row is formed by V1 (x), the other rows
being those of the identity matrix In . Note that R1 (x) is a lower triangular
matrix in GLn (Ox ) with det R1 = 1. Now put A(1) = R1−1 [A]. Thus, Gλ (A(1) )
has the form (10) with (11) and ρ = 1.
By (13), the matrix G_0^{(1)}(x) is singular, so if condition (12) does not occur then one can find, by the same argument as above, a vector V_2(x) in O_x whose (n−1)-th component is 1 such that V_2 G_0^{(1)} = 0. If R_2(x) denotes the matrix whose (n−1)-th row is formed by (V_2(x), 0), the others being those of the identity matrix I_n, and A^{(2)} = R_2^{-1}[A^{(1)}], then the matrix G_λ(A^{(2)}) has the form (10) with (11) and ρ = 2.
A finite number of applications of this process yields an equivalent matrix A^{(ρ)} with (11) for which either (12) occurs or ρ = n − r. But in the latter case one has, again by Remark 6 and (13), that det A_0^{11} = 0, and so (12) occurs. The matrix Q(x) we are seeking is obtained as a product of permutation matrices or lower triangular matrices whose determinant is 1, and so its determinant is ±1.
We consider again Example 2.

Example* 1 A simple calculation shows that det(G_λ(A)) ≡ 0, hence A is ǫ-reducible. From (9), for λ = 0, we have the singular matrix

G_{λ=0}(A) = \begin{pmatrix} 0 & 0 & x+1 & 0 \\ x^2 & 0 & 0 & -2x \\ -x & 0 & 0 & 2 \\ 0 & 2 & 0 & 0 \end{pmatrix}.   Let Q = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix},

hence Q[A] = ǫ^{-2} \begin{pmatrix} ǫ & -x^3 ǫ & 0 & (1+x)ǫ \\ x^2 & xǫ & -2xǫ & 0 \\ 0 & 2 & ǫ^2 & 0 \\ -x & 0 & 2ǫ & 0 \end{pmatrix}.

Moreover, G_λ(Q[A]) has the form (10) with ρ = 1 and r = 2. In fact,

G_λ(Q[A]) = \begin{pmatrix} 0 & 0 & 0 & 1+x \\ x^2 & 0 & -2x & 0 \\ 0 & 2 & λ & 0 \\ -x & 0 & 2 & λ \end{pmatrix}.
Proposition 3 If mǫ(A) > 1 and G_λ(A) has the form (10) with conditions (11) and (12) satisfied, then system (1) (resp. A) is ǫ-reducible and the reduction can be carried out with the so-called shearing transformations S = diag(ǫ I_r, I_{n−r−ρ}, ǫ I_ρ) whenever ρ ≠ 0, and S = diag(ǫ I_r, I_{n−r}) otherwise.

Proof. We partition A(x, ǫ) as follows

A = \begin{pmatrix} A^{11} & A^{12} & A^{13} \\ A^{21} & A^{22} & A^{23} \\ A^{31} & A^{32} & A^{33} \end{pmatrix}

where A^{11}, A^{22}, A^{33} are of dimensions r, n − r − ρ, and ρ respectively. It is easy to verify then that

Ã(x, ǫ) = S^{-1} A S − S^{-1} δS = S^{-1} A S = \begin{pmatrix} A^{11} & ǫ^{-1} A^{12} & A^{13} \\ ǫ A^{21} & A^{22} & ǫ A^{23} \\ A^{31} & ǫ^{-1} A^{32} & A^{33} \end{pmatrix}.

Hence, the new leading coefficient matrix is

Ã_0 = \begin{pmatrix} A_0^{11} & U_1 & O \\ O & O & O \\ M_1 & M_2 & O \end{pmatrix}

and rank(Ã_0) = rank\begin{pmatrix} A_0^{11} & U_1 \end{pmatrix} < r.
Example* 2 Let S = diag(ǫ, ǫ, 1, ǫ); then

S[Q[A]] = ǫ^{-2} \begin{pmatrix} ǫ & -x^3 ǫ & 0 & (1+x)ǫ \\ x^2 & xǫ & -2x & 0 \\ 0 & 2ǫ & ǫ^2 & 0 \\ -x & 0 & 2 & 0 \end{pmatrix}.

It is clear that the leading term has rank 1 < r.
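As a quick sanity check of Proposition 3 on this example, the following sympy sketch conjugates Q[A] by the shearing S and confirms that the new leading coefficient matrix has rank 1 (entries again as reconstructed above).

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')

# Bracketed part of Q[A] and the shearing S = diag(eps, eps, 1, eps) from Example* 1 and 2.
MQ = sp.Matrix([
    [eps,  -x**3*eps, 0,        (1 + x)*eps],
    [x**2,  x*eps,   -2*x*eps,   0],
    [0,     2,        eps**2,    0],
    [-x,    0,        2*eps,     0],
])
S = sp.diag(eps, eps, 1, eps)
MS = sp.simplify(S.inv() * MQ * S)           # S is constant in x, so no derivative term
A0_new = MS.applyfunc(lambda e: e.coeff(eps, 0))
print(A0_new.rank())                         # expect 1 < r = 2
```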
Now, we are ready to give the proof of Theorem 1.

Proof. (Theorem 1) We first prove that the condition is necessary. In fact, suppose that such a T exists; then we have det(λI + \frac{A}{ǫ}) = det(λI + \frac{Ã}{ǫ}). It then suffices to see that

ǫ^r det\Big(λI + \frac{A}{ǫ}\Big)\Big|_{ǫ=0} = ǫ^r det\Big(λI + \frac{Ã}{ǫ}\Big)\Big|_{ǫ=0} = ǫ^{r−m} P(x, ǫ)|_{ǫ=0}

where m ≤ rank(Ã_0) < r and P(x, ǫ) ∈ K has no poles in ǫ. Moreover, it is evident that θ(λ) = 0 since

ǫ^r det\Big(λI + \frac{A}{ǫ}\Big)\Big|_{ǫ=0} = ǫ^r det\Big(λI + \frac{A_0}{ǫ} + A_1 + \sum_{k=2}^{\infty} A_k ǫ^{k−1}\Big)\Big|_{ǫ=0}.
As for the sufficiency of the theorem, by Lemma 1, we can assume that A0 (x)
is in the form (7). Then, Gλ (A) is constructed as in (8). Since θ(λ) = 0
and mǫ (A) > 1, it follows from Lemma 2 that det(Gλ (A)) = 0 and A is
ǫ-reducible. Hence, the matrix S[Q[A]] where S, Q are as in Propositions 2
and 3 respectively, has the desired property.
Remark 7 The ǫ-reducibility of A implies that the rank of the leading coefficient matrix can be reduced without necessarily reducing the ǫ-rank of the system. If the ǫ-reduction criterion holds for a sufficient number of equivalent systems, then repeated application of such transformations results in an equivalent system whose leading coefficient matrix has rank zero, hence h is reduced by one (e.g. Example 3). At this point, the discussion restarts from the nature of the eigenvalues of the leading constant matrix.
Example 3 Given ǫ \frac{dY}{dx} = A(x, ǫ) Y where

A = ǫ^{-3} \begin{pmatrix} 2xǫ^3 & 3x^2 ǫ^4 & 2xǫ^2 & (2x+1)ǫ^5 \\ 0 & ǫ^4 & 0 & 0 \\ 0 & 0 & ǫ^2 & 0 \\ -2x & 0 & 0 & 0 \end{pmatrix}.

The gauge transformation Y = T Z computed by our algorithm results in an equivalent ǫ-irreducible system whose ǫ-rank is diminished by two, as illustrated below:

T = \begin{pmatrix} 0 & ǫ^3 & 0 & 0 \\ 0 & 0 & ǫ & 0 \\ ǫ^3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}   and   T[A] = ǫ^{-1} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 2x & 2xǫ & 3x^2 & 2x+1 \\ 0 & 0 & ǫ^2 & 0 \\ 0 & -2xǫ & 0 & 0 \end{pmatrix}.
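The reduction claimed in this example can be verified symbolically. The sketch below (sympy; entries as reconstructed above) applies the gauge transformation and confirms that the transformed system has ǫ-rank 1, i.e. the ǫ-rank dropped from 3 by two.

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')

# A = eps**(-3) * M and the gauge transformation T from Example 3.
M = sp.Matrix([
    [2*x*eps**3, 3*x**2*eps**4, 2*x*eps**2, (2*x + 1)*eps**5],
    [0,          eps**4,        0,          0],
    [0,          0,             eps**2,     0],
    [-2*x,       0,             0,          0],
])
T = sp.Matrix([
    [0,      eps**3, 0,   0],
    [0,      0,      eps, 0],
    [eps**3, 0,      0,   0],
    [0,      0,      0,   1],
])
TA = sp.simplify(T.inv() * (eps**-3 * M) * T)   # T is constant in x: no derivative term
print(sp.simplify(eps * TA))                    # all entries are free of poles in eps,
                                                # so T[A] = eps**(-1) * (regular matrix)
```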
Algorithm 1 Moser-based ǫ-Rank Reduction of System (1)
Input: A(x, ǫ) of system (1)
Output: T(x, ǫ) ∈ GL_n(K) and an equivalent system given by T[A] which is ǫ-irreducible in the sense of Moser.
T ← I_n ;
h ← ǫ-rank of A;
U(x) ← unimodular transformation computed in Lemma 1 and Remark 5 such that U^{-1} A_0 U has form (7);
T ← T U;  A ← U^{-1} A U − ǫ U^{-1} dU/dx ;
while det(G_λ(A)) = 0 and h > 0 do
  Q(x), ρ ← Proposition 2;
  S(ǫ) ← Proposition 3;
  P ← Q S;
  T ← T P;
  A ← P^{-1} A P − ǫ P^{-1} dP/dx ;
  h ← ǫ-rank of A;
  U(x) ← Lemma 1 and Remark 5;
  T ← T U;
  A ← U^{-1} A U − ǫ U^{-1} dU/dx ;
end while.
return (T, A).
4.3 Example of Comparison with Levelt’s Approach
While Moser defined two rational numbers in [Moser, 1960], Levelt investigated the existence of stationary sequences of free lattices in [Levelt, 1991].
Since Moser-based and Levelt’s algorithms serve the same utility, i.e. rank
reduction of system (2), it is natural to question their comparison. The cost
analysis in the univariate case of the Moser-based algorithm described in
[Barkatou, 1995] and that of Levelt’s [Levelt, 1991] was given in [page 108,
[LeRoux, 2006]]. Both algorithms turn out to have an identical cost which
suggests a further experimental study so that they can be well-compared.
The latter was generalized in [Barkatou et al., 2006] to a certain class of
Pfaffian systems over bivariate fields. Consequently, following arguments
similar to those of [Barkatou et al., 2006] whenever the leading coefficient
matrix is nilpotent and to Section 3 of this article whenever it’s not, Levelt’s
algorithm seems adaptable to system (1) as well. Such an attempt would
give rise to Algorithm 2.
Algorithm 2 Generalization of Levelt’s Rank Reduction to System (1)
Input: A(x, ǫ) of system (1);
Output: T(x, ǫ) ∈ GL_n(K) computed via Levelt’s approach and an equivalent system given by T[A] which is ǫ-irreducible in the sense of Moser.
T ← I_n ;
i ← 0;
h ← ǫ-rank of A;
while i < n − 1 and h > 0 do
  r ← rank(A_0);
  U(x) ← unimodular transformation of Lemma 1 and Remark 5 such that U^{-1} A_0 U has the form (7);
  S(x) ← diag(ǫ I_r, I_{n−r});
  P ← U S;
  T ← T P;
  A ← P^{-1} A P − ǫ P^{-1} ∂P/∂x ;
  h̃ ← ǫ-rank of A;
  if h̃ < h then
    i ← 0;
  else
    i ← i + 1;
  end if
  h ← h̃;
end while.
return (T, A).
It is clear that Algorithm 2 coincides with Algorithm 1 for ρ = 0. For ρ > 0, which frequently occurs for matrices of dimension greater than 3, we ran simple examples of systems (1). Despite the identical cost of both algorithms, these examples showed that Algorithm 2 dramatically complicates the coefficient matrices of the system under reduction. One factor in this complication stems from the weak termination criterion of this algorithm. However, even upon adjoining Moser’s termination criterion (given by θ(λ)) to this algorithm, as suggested in [Section 5 of [Barkatou et al., 2006]], the result remains less satisfying than that of Algorithm 1. We exhibit here a selected example over O = C[[x, ǫ]]. Additional examples along with our implementation are available online at [Maddah, 2014].
= ǫ−h x−p A(x, ǫ)Y where
Example 4 Let ǫ dY
dx
0
0 0 ǫx(ǫx + 3) ǫ3 (x + 9) ǫx2
x
0 0
0
9ǫ2 x2
0
2
6
x
−1 0 ǫx(ǫx + 1)
0
0
.
A= 3
2
x
+
x
−x
0
ǫx
0
0
0
0 x
−3ǫ
0
0
x3 − 1 5 0
0
0
0
It is easily verified that A is Moser-reducible in ǫ. Furthermore, we have
mǫ (A) = h+ 36 and µǫ (A) = h+ 62 . Hence, it suffices to run only one reduction
step, for which the rank of the leading coefficient matrix is dropped by one.
We give in the following the transformation T (x, ǫ) and T [A] as computed
by our Moser-based algorithm, the generalization of Levelt’s algorithm with
Moser’s reducibility criterion, and Levelt’s algorithm respectively. For the
latter, T and T [A] are to be found at [Maddah, 2014] due to the lack of space
here. However we illustrate the dramatic growth of their coefficients by listing
one entry of each.
ǫ 0 0 0 0 0
0 ǫ 0 0 0 0
0 0 ǫ 0 0 0
• T =
0 0 0 0 0 ǫ and
0 0 0 1 0 0
0 0 0 0 ǫ 0
0
0 0 ǫ2 (x + 9) x2 ǫ ǫx(ǫx + 3)
x
0 0
9x2 ǫ
0
0
2
6
x
−1 0
0
0 ǫx(ǫx + 1)
T [A] =
2
0
0
ǫx
0
0
−3ǫ
3
x −1
5 0
0
0
0
2
2
x(x + 1) −x 0
0
0
xǫ
3
ǫ 0
0
0 0 −9xǫ2
0 0 −1/3x2 ǫ2 ǫ2 0
0
0 0
0
0
ǫ
0
and
• T =
2
0 ǫ
−1/3ǫx
0
0
0
0 0
0
0 0
1
0 0
ǫ
0 0
0
T [A] = [ãij ] including entries having 2-digit coefficients and degree 8 in
x, e.g.,
ã12 = −x(27ǫ2 − ǫx − 3)
1 2
ã13 = 3 x (−x + 27ǫ)
ã41 = ǫx4 − ǫx + 3
ã52 = ǫ2 x(ǫx6 + 1)
ã = −(1/3)ǫ2 x8
53
• T = [tij ], T [A] = [ãij ] have entries with 4-digit and 10-digit coefficients.
Degrees in x surpass 8, e.g.
2
2 5 3
2
11
10
t23 = 1458ǫ x (x + 3x + 2) /(243x + 1944x
+3627x9 + 1026x8 + 1944x7 − 8820x6 + 432x5
−4860x4 − 72x3 − 652x2 − 72x − 324)
25
24
23
ã13 = (39366ǫx + 1220346ǫx + 14565420ǫx +
83731482ǫx22 − 1003833x23 + 236309724ǫx21
−25095825x22 + 708588ǫ2x19 + 299723976ǫx20
−237858120x21 + 17714700ǫ2x18 + 191306610ǫx19
−1080181170x20 + 143134776ǫ2x17 + 140023890ǫx18
−2462664465x19 + 462261816ǫ2x16 − 236799612ǫx17
−3116776563x18 + 553879620ǫ2x15 + 98626896ǫx16
−4040830962x17 + 384763284ǫ2x14 − 334491552ǫx15
−5062097592x16 + 750333456ǫ2x13 + 149450184ǫx14
−3027609900x15 + 681679152ǫ2x12 − 469371996ǫx13
14
2 11
12
−4012634700x + 460735776ǫ x − 88155972ǫx
−1297170936x13 + 804260016ǫ2x10 − 332366796ǫx11
−786840174x12 + 245153952ǫ2x9 − 90553302ǫx10
−404146368x11 + 383522040ǫ2x8 − 73322280ǫx9
+451532556x10 + 111422304ǫ2x7 − 9084492ǫx8
−104014116x9 + 74305512ǫ2x6 − 14703120ǫx7
+222024672x8 + 17635968ǫ2x5 − 7039224ǫx6
−17230320x7 − 23184ǫ2 x4 − 8030664ǫx5 + 88635312x6
−414720ǫ2 x3 − 3070548ǫx4 − 294176x5
−1819584ǫ2 x2 + 32403312x4 + 419904ǫ2x + 1692576x3
+944784ǫ2 + 3895776x2 + 524880x + 944784)/(243x11
+1944x10 + 3627x9 + 1026x8 + 1944x7 − 8820x6
+432x5 − 4860x4 − 72x3 − 652x2 − 72x − 324)2 ;
5 Conclusion and Further Investigations
We proposed a Moser-based algorithm which recovers a singularly-perturbed
linear differential system from its turning points and reduces its ǫ-rank to
its minimal integer value. A complementary step to attain the full formal
reduction would be to find the ramification in the parameter which renders
the general case to the case discussed here in a recursive process. One approach is that based on analysis by a Newton polygon and applied to system
(2) in [Barkatou, 1997] and the scalar case of system (1) in [Macutan, 1999].
The sufficient number of coefficient matrices in computations is still to be
investigated.
In the usual treatment of turning points, a restraining index χ is defined
and updated at every reduction step to observe the growth of the order of
poles in the Ak (x)’s (see, e.g. [Wasow, 1985, 1979] and references therein).
This restraining index plays a role in the asymptotic interpretation of the
formal solutions. As demonstrated in Section 3, the transformations we apply
to recover from turning points are polynomial (shearings). Hence, the growth
of the pole orders is bounded and can be anticipated a priori. The insight this gives into the restraining index is to be investigated.
Examples comparing this algorithm with a generalization of Levelt’s favor the former. However, this suggests that Levelt’s algorithm be generalized
to system (1) and that a bit complexity study comparing both algorithms
be held alongside. Furthermore, it motivates generalization of the Moserbased algorithms over differential bivariate fields, e.g. Pfaffian systems in
two variables [Barkatou et al., 2014].
An additional field of investigation is the two-parameter algebraic eigenvalue problem as a generalization of the one parameter case investigated via
Moser-based approach in [Jeannerod et al., 1999]. In fact, the main role in
the reduction process is reserved to the similarity term of T [A]. Hence, the
discussion of such problems is not expected to deviate from the discussion
presented here in the differential case.
References
M. Barkatou, S.S. Maddah, and H. Abbas. Formal Reduction of a class of
Pfaffian Systems in Two Variables. Submitted to ISSAC’14. A preliminary
version is available at arXiv.
W. Balser. Formal Power Series and Linear Systems of Meromorphic Ordinary Differential Equations. Springer-Verlag, New York, 2000.
M. Barkatou. An algorithm to compute the exponential part of a formal
fundamental matrix solution of a linear differential system. Journal of App.
Alg. in Eng. Comm. and Comp., 8(1):1-23, 1997.
M. Barkatou. A Rational Version of Moser’s Algorithm. In Proceedings of the
International Symposium on Symbolic and Algebraic Computation, pages
297-302. ACM Press, July 1995.
M. Barkatou. On the reduction of Linear Systems of Difference Equations.
In Proceedings of the International Symposium on Symbolic and Algebraic
Computation, pages 1-6. ACM Press, USA, 1989.
M. Barkatou, G. Broughton, and E. Pflugel. Regular Systems of Linear Functional Equations and Applications. In Proceedings of the International
Symposium on Symbolic and Algebraic Computation, pages15-22. ACM
Press, 2008.
M. Barkatou and E. Pflugel. ISOLDE, Integration of Systems of Ordinary
Linear Differential Equations. Available at: http://isolde.sourceforge.net/
M. Barkatou and E. Pflugel. On the Moser-and super-reduction algorithms
of systems of linear differential equations and their complexity. Journal of
Sym. Comput., 44 (8), 1017-1036, 2009.
M. Barkatou and N. LeRoux. Rank Reduction of a class of Pfaffian Systems in
Two Variables. In Proceedings of the International Symposium on Symbolic
and Algebraic Computation, pages 204-211. ACM Press, 2006.
G. Chen. Solutions Formelles de Systèmes d’Equations Différentièlles
Linèaires Ordinaires Homogènes. PhD Thesis. Université Joseph Fourier.
Grenoble 1. 1984.
P.F. Hsieh and Y. Sibuya. Basic theory of Ordinary Differential Equations.
Springer. NewYork, USA, 1999.
M. Iwano and Y. Sibuya. Reduction of the Order of a Linear Ordinary Differential Equation Containing a Small Parameter. 1963
C.P. Jeannerod and E. Pflugel. A Reduction Algorithm for Matrices Depending on a Parameter. In Proceedings of the International Symposum
on Symbolic and Algebraic Computation, Pages 121-128. ACM Press, USA
1999.
T. Kato. Perturbation Theory for Linear Operators. Springer. Berlin. 1980.
N. LeRoux. Solutions formelles d’équations aux dérivées partielles. Ph.D.
Thesis. University of Limoges. 2006.
A.H.M. Levelt. Stabilizing Differential Operators: a method for Computing
Invariants at Irregular Singularities. Differential Equations and Computer
Algebra, M. Singer (ed.), pages 181-228, 1991.
C. C. Lin, The theory of hydrodynamic stability. Cambridge Univ. Press.
Cambridge, 1966.
Y.O. Macutan. Formal Solutions of Scalar Singularly-Perturbed Linear Differential Equations. In Proceedings of the International Symposum on Symbolic and Algebraic Computation, Pages 113-120. ACM Press, USA 1999.
S.S. Maddah. http://www.unilim.fr/pages perso/suzy.maddah/
J.A.M. McHugh. An historical Survey of Ordinary Linear Differential Equations with a Large Parameter and Turning Points. Archive for History of
Exact Sciences, 7(4): pp 277-324,1971.
J. Moser. The Order of a Singularity in Fuchs’ Theory. Mathematische
Zeitschrift, 72:379- 398, 1960.
E. Pflugel. Effective Formal Reduction of Linear Differential Systems. Journal
of App. Alg. in Eng. Comm. and Comp., 10, 153-187, 2000.
H. L. Turrittin. Asymptotic Expansions of Solutions of Systems of Ordinary
Differential Equations. Contributions to the Theory of Nonlinear Oscillations II. Ann. of Math. Studies. No 29: 81-116. 1952.
W. Wasow. Linear Turning Point Theory. Springer-Verlag. 1985.
W. Wasow. Topics in the Theory of Linear Ordinary Differential Equations
Having Singularities with respect to a Parameter. Institut de Recherche
Mathématique Avancée. Université Louis Pasteur. Strasbourg. 1979.
| 0 |
1
Uniform Recovery Bounds for Structured Random
Matrices in Corrupted Compressed Sensing
arXiv:1706.09087v3 [] 7 Feb 2018
Peng Zhang, Lu Gan, Cong Ling and Sumei Sun
Abstract—We study the problem of recovering an s-sparse
signal x⋆ ∈ Cn from corrupted measurements y = Ax⋆ +z⋆ +w,
where z⋆ ∈ Cm is a k-sparse corruption vector whose nonzero
entries may be arbitrarily large and w ∈ Cm is a dense noise
with bounded energy. The aim is to exactly and stably recover
the sparse signal with tractable optimization programs. In this
paper, we prove the uniform recovery guarantee of this problem
for two classes of structured sensing matrices. The first class
can be expressed as the product of a unit-norm tight frame
(UTF), a random diagonal matrix and a bounded columnwise orthonormal matrix (e.g., partial random
√ circulant matrix).
When the UTF is bounded (i.e. µ(U) ∼ 1/ m), we prove that
with high probability, one can recover an s-sparse signal exactly
and stably by l1 minimization programs even if the measurements
are corrupted by a sparse vector, provided m = O(s log2 s log 2 n)
and the sparsity level k of the corruption is a constant fraction
of the total number of measurements. The second class considers
randomly sub-sampled orthonormal matrix (e.g., random Fourier
matrix). We prove the uniform recovery guarantee provided that
the corruption is sparse on certain sparsifying domain. Numerous
simulation results are also presented to verify and complement
the theoretical results.
Index Terms—Compressed sensing, corruption, dense noise,
unit-norm tight frames.
I. I NTRODUCTION
The theory of compressed sensing has been widely studied and applied in various promising applications over the
recent years [1]–[5]. It provides an efficient way to recover a
sparse signal from a relatively small number of measurements.
Specifically, an s-sparse signal x⋆ is measured through
y = Ax⋆ + w,
(1)
where A ∈ Cm×n is referred to as the sensing matrix,
y ∈ Cm is the measurement vector and w ∈ Cm represents
the noise vector with the noise level kwk2 ≤ ε. It has been
shown that if A satisfies the restricted isometry property (RIP)
and ε is small, the recovered signal x̂ obtained by l1 norm
minimization is close to the true x⋆ , i.e. kx̂ − x⋆ k ≤ Cε with
C being a small numerical constant. Many types of sensing
matrices have been proven to satisfy the RIP condition. For
P. Zhang was with the Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK (e-mail:
[email protected]). He is now with the Institute for Infocomm
Research, A∗ STAR, Singapore, 138632, Singapore (e-mail: [email protected]).
C. Ling is with the Department of Electrical and Electronic Engineering,
Imperial College London, London, SW7 2AZ, UK (e-mail: [email protected]) .
L. Gan is with the College of Engineering, Design and Physical Science,
Brunel University, London, UB8 3PH, UK (e-mail: [email protected]).
S. Sun is with the Institute for Infocomm Research, A∗ STAR, Singapore,
138632, Singapore (e-mail: [email protected]).
example, random Gaussian/Bernoulli matrices satisfy the RIP
with high probability if m ≥ O(s log(n/s)) [1], [3], whereas
structured sensing matrices consisting of either randomly
subsampled orthonormal matrix [6] or modulated unit-norm
tight frames [7] have the RIP with high probability when m
is about O(s log4 n)1 .
This standard compressed sensing problem has been generalized to cope with the recovery of sparse signals when some
unknown entries of the measurement vector y are severely
corrupted. Mathematically, we have
y = Ax⋆ + z⋆ + w,
⋆
(2)
m
where z ∈ C is an unknown sparse vector. To reconstruct
x⋆ from the measurement vector y, the following penalized
l1 norm minimization has been proposed:
min_{x, z}  ‖x‖_1 + λ ‖z‖_1   s.t.   ‖y − (Ax + z)‖_2 ≤ ε.   (3)
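For illustration, problem (3) can be prototyped directly with a convex-programming toolbox. The sketch below uses CVXPY with real-valued data and a Gaussian sensing matrix purely as a stand-in; the dimensions, the choice λ = 1 and the noise bound are illustrative assumptions, not values prescribed here.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, s, k = 128, 256, 8, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)          # stand-in sensing matrix
x_true = np.zeros(n); x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
z_true = np.zeros(m); z_true[rng.choice(m, k, replace=False)] = 10 * rng.standard_normal(k)
w = 1e-3 * rng.standard_normal(m)                     # dense noise
y = A @ x_true + z_true + w

x, z, lam = cp.Variable(n), cp.Variable(m), 1.0
prob = cp.Problem(cp.Minimize(cp.norm1(x) + lam * cp.norm1(z)),
                  [cp.norm(y - (A @ x + z), 2) <= np.linalg.norm(w)])
prob.solve()
print(np.linalg.norm(x.value - x_true), np.linalg.norm(z.value - z_true))
```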
In [10], it was shown that random Gaussian matrices can
provide uniform recovery guarantees to this problem (3). In
other words, a single random draw of a Gaussian matrix A is
able to stably recover all s-sparse signals x⋆ and all k-sparse
corruptions z⋆ simultaneously with high probability. On the
other hand, for structured sensing matrices, the nonuniform
recovery guarantees2 can be proved for randomly subsampled
orthonormal matrix [11] and its generalized model - bounded
orthonormal systems3 [10]. Very recently, the uniform recovery guarantee for bounded orthonormal systems is shown in
[13].
In this paper, we prove the uniform recovery guarantee for
two different corrupted sensing models. In the first model, the
measurement matrix is based on randomly modulated unitnorm frames [7] and the corruption is sparse on the identity
basis. It is noted that the measurement matrix in the first model
does not consist of a random subsampling operator, e.g., the
partial random circulant matrix [14]. For the second model,
we consider
y = Ax⋆ + Hz⋆ + w,
(4)
where A represents a randomly subsampled orthonormal matrix, and the corruption Hz⋆ is assumed to be sparse on
1 Recent works [8] [9] for subsampled Fourier matrices show that the factor
log4 n can be reduced to log3 n.
2 A nonuniform recovery result only states that a fixed pair of sparse signal
and sparse corruption can be recovered with high probability using a random
draw of the matrix. Sometimes, the signs of the non-zero coefficients of the
sparse vector (and corruption) can be chosen at random to further simplify
arguments. Uniform recovery is stronger than nonuniform recovery. (see [5,
Chapter 9.2] [6, Section 3.1] for more details.)
3 See [12] for the construction of the generalized model.
2
certain bounded domain (e.g., a discrete Fourier transform
(DFT) matrix). Our results imply that many structured sensing
matrices can be employed in the corrupted sensing model
to ensure the exact and stable recovery of both x⋆ and z⋆ ,
even when the sparsity of the corruption is up to a constant
fraction of the total number of measurements. Thanks to
the uniform recovery guarantee, our results can address the
adversarial setting, which means that exact and stable recovery
is still guaranteed even when x⋆ , z⋆ and w are selected
given knowledge of the sensing matrix A. In addition, our
analysis results are also applicable to demonstrate the recovery
guarantee when the corrupted sensing problem is solved via
nonconvex optimization.
A. Potential Applications
The problem of recovering sparse signal x⋆ and sparse
corruption z⋆ from the measurement vector y arise from many
applications, where the compressed measurements may be
corrupted by impulse noise.
For example, in a sensor network, each sensor node measures the same signal x⋆ independently before sending the
outcome to the center hub for analysis. In this setting, each
sensor makes the measurement hai , x⋆ i, and the resultant
measurement vector is Ax⋆ by arranging each ai as the rows
of A [11], [15]. However, in practice, some sensor readings
can be anomalous from the rest. These outliers could be caused
by individually malfunctioned sensors, or due to some unusual
phenomena or event happening in certain areas of the network
[16] [17]. This anomaly effect can be modeled by a sparse
vector z⋆ . Mathematically, we have y = Ax⋆ + z⋆ + w,
where z⋆ represents the outlier regions and w stands for
possible small noise in the data transmission. Our results make
it possible to recover both the underlying signal and detect the
outlier regions simultaneously, which could be very useful for
network monitoring.
Another application of sparse signal recovery from sparsely
corrupted measurements is error correction in joint sourcechannel coding. In [11], [18], [19], compressed sensing has
been exploited as a joint source-channel coding strategy for
its efficient encoding and robust error correcting performance.
For a signal f that is sparse in the domain Ψ, i.e., f = Ψx⋆ ,
it can be encoded by a linear projection y = Φf = Ax⋆
with A = ΦΨ. Existing works have investigated the situations
where the encoded signal y is sent through either an erasure
channel [18] or a gross error channel [11], [19]. Our results
can not only be applied in these scenarios, but also provide
a new design on the encoding matrix with uniform recovery
guarantee.
In some scenarios, the measurement noise may be sparse or
compressible in some sparsifying basis. One example is the
recovery of video or audio signal that are corrupted by narrowband interference (NBI) due to improper designed equipment
[20], [21]. Electric hum as a typical impairment is sparse
in the Fourier basis. Another example is the application of
compressed sensing to reduce the number of samples in convolution systems with deterministic sequences (e.g., m-sequence,
Golay sequence). Such convolution systems are widely used in
communications, ultrasound and radar [22], [23]. In practice,
the measurements may be affected by frequency domain
interference or multi-tone jamming [24]. For instance, in CSbased OFDM channel estimation [25]–[28], suppose x is the
channel response and that the pilot sequence g is constructed
from Golay sequences, the time-domain received signal can be represented as [28], y = \sqrt{n/m} \, R_{Ω'} F^* diag(g) F x + w, where R_{Ω'} is a random subsampling operator and F denotes the
DFT matrix. The recovery performance can be guaranteed
by noticing that the sensing model is a subsampled version
of the orthonormal matrix F∗ diag(g)F. However, in OFDMbased powerline communications, the NBI due to intended
or unintended narrow-band signals can severely contaminate
the transmitted OFDM signal. The time-domain NBI vector is
sparse in the Fourier basis [29], [30]. Our results cover these
settings, and therefore, provide a CS-based method to jointly
estimate the signal of interest and the NBI.
B. Notations and Organization of the paper
For an n-element vector a, we denote by ai , (i ∈ [n] =
{0, ..., n − 1}), the i-th element of this vector. We represent a
sequence of vectors by a0 , ..., an−1 and a column vector with
q ones by 1q . The sparsity of a vector can be measured by its
best s-term approximation error,
σ_s(a)_p = \inf_{‖ã‖_0 ≤ s} ‖a − ã‖_p ,
where k · kp is the standard lp norm on vectors. For a
matrix A, Ajk denotes the element on its j-th row and kth column. The vector obtained by taking the j-th row (k-th
column) of A is represented by A(j,:) (A(:,k) ). We denote by
A0 , ..., An−1 a sequence of matrices. A−1 and A∗ represent
the inverse and the conjugate transpose of A. The Frobenius
norm and the
p operator norm of matrix A are denoted by
kAkF =
tr(A∗ A) and kAk2→2 = supkxk2 =1 kAxk2
respectively. We write A . B if there is an absolute constant
c such that A ≤ cB. We denote A ∼ B if c1 A ≤ B ≤ c2 A
for absolute constants c1 and c2 .
The coherence µ(A) of an ñ × n matrix A describes the
maximum magnitude of the elements of A, i.e., µ(A) =
max1≤j≤ñ |Ajk |. For a unitary matrix Ψ ∈ Cn×n , we have
√1
n
1≤k≤n
≤ µ(Ψ) ≤ 1.
The rest of the paper is organized as follows. We start by
reviewing some key notions and results in compressed sensing
in Section II. In Section III, we prove the uniform recovery
guarantee for two classes of structured random matrices. In
Section IV, we conduct a series of simulations to reinforce
our theoretical results. Conclusion is given in Section V. We
defer most of the proofs to the Appendices.
II. P RELIMINARIES
A. RIP and structured sensing matrices
The restricted isometry property (RIP) is a sufficient condition that guarantees uniform and stable recovery of all s-sparse
vectors via nonlinear optimization (e.g. l1 -minimization). For a
3
matrix A ∈ Cm×n and s < n, the restricted isometry constant
δs is defined as the smallest number such that
(1 − δs )kxk22 ≤ kAxk22 ≤ (1 + δs )kxk22 ,
holds for all s-sparse vectors x. Alternatively, the restricted
isometry constant of A can be written as
δ_s = \sup_{x ∈ D_{s,n}} \big| ‖Ax‖_2^2 − ‖x‖_2^2 \big| ,   (5)
where Ds,n = {x ∈ Cn : kxk2 ≤ 1, kxk0 ≤ s}.
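For very small instances the constant δ_s can be evaluated by brute force, which is sometimes useful for building intuition; the general computation is combinatorial and intractable, so the sizes below are illustrative only.

```python
import numpy as np
from itertools import combinations

def rip_constant(A, s):
    """Brute-force delta_s: max over all s-column submatrices A_S of ||A_S^* A_S - I||_2."""
    n = A.shape[1]
    delta = 0.0
    for S in combinations(range(n), s):
        G = A[:, S].conj().T @ A[:, S]
        delta = max(delta, np.linalg.norm(G - np.eye(s), 2))
    return delta

rng = np.random.default_rng(0)
m, n, s = 20, 30, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
print(rip_constant(A, s))
```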
Among the many structured sensing matrices that satisfy the
RIP, two classes have been found to be applicable in various
scenarios. One is the randomly subsampled orthonormal systems [6], which encompass structured sensing matrices like
partial random Fourier [2], convolutional CS [28], [31] and
spread spectrum [32]. The other is the UDB framework which
consists of a unit-norm tight frame (UTF), a random diagonal
matrix and a bounded column-wise orthonormal matrix [7].
Popular sensing matrices under this framework include partial
random circulant matrices [14], random demodulation [33],
random probing [34] and compressive multiplexing [35].
B. Recovery Condition
We review the definition of generalized RIP, which is
useful to establish robustness and stability of the optimization
algorithm.
Definition II.1. [10, Definition 2.1] For any matrix Θ ∈
Cr×(n+m) , it has the (s, k)-RIP with constant δs,k if δs,k is
the smallest value of δ such that
(1 − δ)(‖x‖_2^2 + ‖z‖_2^2) ≤ \Big\| Θ \begin{pmatrix} x \\ z \end{pmatrix} \Big\|_2^2 ≤ (1 + δ)(‖x‖_2^2 + ‖z‖_2^2)   (6)
holds for any x ∈ Cn with kxk0 ≤ s and any z ∈ Cm with
kzk0 ≤ k.
Here, the generalized RIP is termed as the (s, k)-RIP for
convenience. We note that the (s, k)-RIP is more stringent
than the conventional RIP. In other words, the fact that a
sensing matrix A satisfies the RIP does not mean that the
associated matrix Θ = [A, I] would satisfy the (s, k)-RIP.
The recovery performance of the penalized optimization (3)
can be guaranteed by the following result.
Theorem II.2. [13, Theorem 3.7] Suppose y = Ax⋆ + z⋆ + w and Θ = [A, I] ∈ C^{m×(n+m)} has the (2s, 2k)-RIP constant δ_{2s,2k} satisfying

δ_{2s,2k} < \frac{1}{\sqrt{1 + \frac{1}{2\sqrt{2}} + \frac{\sqrt{2}}{η}}} ,   with   η = \frac{s + λ^2 k}{\min\{s, λ^2 k\}}.

Then for x⋆ ∈ C^n, z⋆ ∈ C^m, and w ∈ C^m with ‖w‖_2 ≤ ε, the solution (x̂, ẑ) to the penalized optimization problem (3) satisfies

‖x̂ − x⋆‖_1 + ‖ẑ − z⋆‖_1 ≤ c_1 \big( σ_s(x⋆)_1 + λ σ_k(z⋆)_1 \big) + c_2 \sqrt{s + λ^2 k} \, ε,

‖x̂ − x⋆‖_2 + ‖ẑ − z⋆‖_2 ≤ c_3 (1 + η^{1/4}) \Big( \frac{σ_s(x⋆)_1}{\sqrt{s}} + \frac{σ_k(z⋆)_1}{\sqrt{k}} \Big) + c_4 (1 + η^{1/4}) \, ε,
where the constants c1 , c2 , c3 , c4 depend on δ2s,2k only.
We note that similar theorem has been proven in [10] when
both the signal and corruption are vectors with exact sparsity.
The above result not only relaxes the requirement on the
(2s, 2k)-RIP constant, but also guarantees stable recovery of
inexactly sparse signals and corruptions. Therefore, for either
sparse or compressible signals and corruptions, the key to
establish the recovery guarantee for a sensing matrix is to
prove the (s, k)-RIP.
III. M AIN R ESULTS
In this section, we prove the (s, k)-RIP for two classes of
structured sensing matrices. This result can then be combined
with Theorem II.2 to prove the recovery guarantee. In addition,
the extension to the recovery via nonconvex optimization is
presented. Last but not least, we compare the main theorems
to existing literature where relevant.
A. Randomly modulated unit-norm tight frames
We prove the uniform recovery guarantees for the class
of structured sensing matrices that can be written as A
=
e where U ∈ Cm×ñ is a UTF with µ(U) ∼ 1/√m,
UDB,
D = diag(ξ) is a diagonal matrix with ξ being a lengthñ random vector with independent, zero-mean, unit-variance,
e ∈ Cñ×n , ñ ≥ n, represents
and L-subgaussian entries, and B
e ∗B
e = I.
a column-wise orthonormal matrix, i.e. B
The following result presents a bound on the required number of measurements m such that the corresponding matrix
Θ has the (s, k)-RIP constant satisfying δs,k ≤ δ for any
δ ∈ (0, 1).
Theorem III.1. Suppose y = Ax⋆ + z⋆ + w with√ Θ =
e and µ(U) ∼ 1/ m. If,
[A, I] ∈ Cm×(n+m) , A = UDB
for δ ∈ (0, 1),
e log2 s log2 ñ,
m ≥ c5 δ −2 sñµ2 (B)
m ≥ c6 δ −2 k log2 k log2 ñ,
where c5 and c6 are some absolute constants, then with
2
probability at least 1 − 2ñ− log s log ñ , the (s, k)-RIP constant
of Θ satisfies δs,k ≤ δ.
Proof. The (s, k)-RIP constant δs,k can be equivalently expressed as
2
x
Θ
δs,k = sup
(7)
− kxk22 − kzk22 ,
z 2
(x,z)∈T
4
where T := {(x, z) : kxk22 + kzk22 = 1, kxk0 ≤ s, kzk0 ≤
k, x ∈ Cn , z ∈ Cm }. With Θ = [A, I], the RIP-constant can
be further reduced to
δs,k = sup
(x,z)∈T
≤ sup
(x,z)∈T
|
kAxk22 + kzk22 + 2hAx, zi − kxk22 − kzk22
kAxk22 − kxk22 + 2 sup |hAx, zi|
{z
}
δ1
|
(x,z)∈T
{z
δ2
(8)
}
Our aim is to derive bounds on the number of measurements
m such that for any δ ∈ (0, 1) the RIP-constant δs,k is upper
bounded by δ. We have
kAxk22 − kxk22
δ1 ≤ sup
x∈Ds,n
(9)
with supx∈Ds,n kAxk22 − kxk22 being the restricted isometry
constant in the standard RIP definition (5). Then, by [7,
Theorem III.2], we reach the following result.
Suppose, for any δ ∈ (0, 1),
2
e
m ≥ 4c1 δ −2 sñµ2 (B)(log
s log2 ñ),
then δ1 ≤ δ/2 holds with probability at least 1 −
2
ñ−(log ñ)(log s) .
Therefore, proof of the (s, k)-RIP is reduced to bounding
the inner product term δ2 .
e zi| = 2 sup |z∗ UDBx|
e
δ2 = 2 sup |hUDBx,
(x,z)∈T
(x,z)∈T
e
= 2 sup |z Udiag(Bx)ξ|
= 2 sup |hv, ξi|,
∗
(x,z)∈T
(10)
v∈Av
∗
e
where v = (z∗ Udiag(Bx))
, and
Av := {v : kxk22 + kzk22 = 1, kxk0 ≤ s, kzk0 ≤ k}.
(11)
The following lemma is proved in Appendix A.
Lemma III.2. Suppose ξ is a length-ñ random vector with
independent, zero-mean, unit-variance, and L-subgaussian
entries. For any δ ∈ (0, 1), if
e log2 s log2 ñ
m ≥ c5 δ −2 sñµ2 (B)
m ≥ c6 δ −2 k log2 k log2 ñ,
then supv∈Av |hv, ξi| ≤ δ/2 holds with probability exceeding
1 − exp(− log2 s log2 ñ), where c5 and c6 are some constants
depending only on L.
Combining (10) with Lemma III.2, we have, for any
τ > 0, δ2 ≤ cδ holds with probability exceeding 1 −
exp(− log2 s log2 ñ) for some constant c.
Finally, Theorem III.1 can be obtained by combining
the above results. Suppose, for any δ ∈ (0, 1), m ≥
2
e
c5 δ −2 sñµ2 (B)(log
s log2 ñ), m ≥ c6 δ −2 k log2 m log2 ñ and
√
µ(U ) ∼ 1/ m, then we have δs,k ≤ δ1 + δ2 ≤ δ with
probability exceeding
1 − ñ−(log ñ log
2
s)
= 1 − ñ−(log ñ log
= 1 − 2ñ− log
2
− exp(−c log2 s log2 ñ)
2
s)
s log ñ
− ñ−c log
2
s log ñ
The uniform recovery guarantee can be obtained by combining Theorem II.2 and III.1.
e is a bounded
A few remarks are in order. First, when B
√
e
column-wise orthonormal matrix, i.e., µ(B) ∼ 1/ ñ, the
bound on the sparsity of x⋆ can be relaxed to kx⋆ k0 ≤
Cm/(log2 ñ log2 m). The sparsity kz⋆ k0 is always a constant
fraction of the total number of measurements m regardless
e When kwk2 = 0,
the magnitude of the coherence µ(B).
Theorem III.1 implies that a sparse signal can be exactly
recovered by tractable l1 minimization even if some parts of
the measurements are arbitrarily corrupted.
Second, the proposed class of structured sensing matrices
is equivalent to the UDB framework
√ [7] but with an additional requirement of µ(U) ∼ 1/ m. The UDB framework
has been proved to support uniform recovery guarantees for
conventional CS problem, while with the extra condition it
is now shown to provide uniform recovery guarantees for the
CS with sparse corruptions problem. Theorem III.1 holds for
many existing and new structured sensing matrices as long as
e
they can be decomposed into A = UDB.
One application of the UDB framework is to simplify
the mask design in double random phase encoding (DRPE)
for optical image encryption. Consider an image f that is
sparse in the domain Ψ, i.e., f = Ψx⋆ , DRPE is based
on random masks placed in the input and Fourier planes
of the optical system [36], [37] . p
Mathematically, the mean
∗
surements can be written as y =
m RΩ F Λ1 FΛ2 f + w,
n
m
where RΩ : C → C represents an arbitrary/deterministic
subsampling operator with Ω being the set of selected row
indices, Λ1 and Λ2 are random diagonal matrices. By the
UDB framework, the random diagonal matrix Λ2 can be
replaced by a deterministic diagonal matrix constructed from a
Golay sequence g. The reason is that the measurement model
p
n
RΩ F∗ Λ1 Fdiag(g)Ψx⋆ can be decomposed into a UTF
pm
n
∗
m RΩ F , a random diagonal matrix Λ1 , and a orthonormal
matrix Fdiag(g)Ψ whose coherence is proven to be bounded
for many orthonormal transforms Ψ, e.g., DCT, Haar wavelet
[7, Lemma IV.2]. When the measurements are corrupted by
impulse noise due to detector plane impairment, our theorem
above provides a recovery guarantee on the image.
Furthermore, the UDB framework emcompasses some popular structured sensing matrices, e.g., partial random circulant
matrices [14] and random probing [34]. To elaborate, consider
the partial random circulant matrices
1
A = √ R Ω Cǫ
m
where Cǫ denotes the circulant matrix generated from ǫ.
Suppose ǫ = F∗ ξ, where F is a normalized DFT matrix and ξ
is a length-n random vector with independent, zero-mean, unitvariance,
entries. Let D = diag(ξ),
p nwe have∗
p nand sub-Gaussian
RΩ F∗ DF. It can be observed that U = m
RΩ F
A= m
is a UTF and B = F is a unitary matrix. Hence, Theorem
III.1 implies that any sparse signal x and sparse corruption
z can be faithfully recovered from the measurement model
y = √1m RΩ Cǫ x⋆ + z⋆ + w by the penalized recovery
5
algorithm. The sparse recovery from partial random circulant
measurements can be applied in many common deconvolution
tasks, such as radar [38] and coded aperture imaging [39]. In
practice, where the measurements can be corrupted by impulsive noise due to bit errors in transmission, faulty memory
locations, and buffer overflow [40], our theorem guarantees
the recovery of both the signal of interest and the corruption.
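The factorization used here is easy to confirm numerically. The sketch below builds a partial random circulant matrix and checks A = \sqrt{n/m} R_Ω F^* D F; it assumes the circulant is generated with ǫ as its first column (if the circulant is defined by rows instead, the roles of F and F^* swap), and the dimensions are illustrative.

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(0)
n, m = 64, 32
F = np.fft.fft(np.eye(n)) / np.sqrt(n)          # unitary DFT matrix
xi = rng.choice([-1.0, 1.0], size=n)            # random +/-1 (sub-Gaussian) diagonal
eps_seq = F.conj().T @ xi                       # generator sequence: eps = F* xi
C = circulant(eps_seq)                          # circulant matrix with first column eps
Omega = rng.choice(n, size=m, replace=False)    # arbitrary row subsampling
A1 = C[Omega, :] / np.sqrt(m)                   # (1/sqrt(m)) R_Omega C_eps
A2 = np.sqrt(n / m) * (F.conj().T @ np.diag(xi) @ F)[Omega, :]
print(np.allclose(A1, A2))                      # expect True
```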
In some situations, the proposed framework can still provide
reliable recovery guarantee even if the corruption is sparse in
some basis. Suppose the corruption is sparse under some fixed
and known orthonormal transformation H, i.e. H∗ H = I. We
consider the measurement model
y = Ax⋆ + Hz⋆ + w.
(12)
It is clear that this setting can be reduced to
H∗ y = H∗ Ax⋆ + z⋆ + H∗ w.
(13)
e := UD
b B,
e where U
b = H∗ U
Notice that H∗ A = H∗ UDB
is still a UTF
due to the orthogonality of H. Therefore, if
b ∼ 1/√m, Theorem III.1 still holds in this measurement
µ(U)
model.
B. Randomly sub-sampled orthonormal system
Next, we consider the corrupted sensing measurement
model for randomly sub-sampled orthonormal system. We
prove the uniform recovery guarantee for such matrices provided that the corruption is sparse on certain sparsifying
domain. Suppose λ ∈ Rn is a random Bernoulli vector
with i.i.d. entries such that P(λi = 1) = m
n ∀i ∈ [n] and
Ω′ = {i : λi = 1} with |Ω′ | = M , the random sampling
operator RΩ′ ∈ RM×n is a collection of the i-th row of an ndimensional identity matrix for all i ∈ Ω′ . Here, M is random
with mean value m. The observation model is
y = Ax⋆ + Hz⋆ + w,
p
(14)
n
where A = M
RΩ′ G, G ∈ Cn×n is an orthonormal
√ basis
M×M
and H ∈ C
is a unitary matrix with µ(H) ∼ 1/ M .
From our analysis in previous subsection, the uniform
recovery performance can be guaranteed as long as the associated matrix Θ satisfies the (s, k)-RIP. Since the matrix A
satisfies the standard RIP, the problem of proving the (s, k)RIP is again reduced to bounding the inner product term
sup(x,z)∈T |hAx, Hzi|. Detail proof of the following result
is given in Appendix B.
When
G is a bounded orthonormal basis, i.e., µ(G) ∼
√
1/ n, the bound on the sparsity of x⋆ can be relaxed to
m ≥ O(δ −2 s log2 s log2 n, δ −2 k log2 k log2 n), which implies
that a sparse signal can be exactly recovered by tractable l1
minimization even if the measurements are affected by corruption sparse on some bounded domain. A bounded orthonormal
basis can include the Fourier transform or the Hadamard
transform. In addition, in CS-based OFDM where the pilot is
generated from a Golay sequence and a random subsampler is
employed at the receiver (Section I-A), the effective orthonor√
mal basis is also bounded, i.e., µ(F∗ diag(g)F) ∼ 1/ n [28].
C. Nonconvex optimization
We have shown the (s, k)-RIP for two popular classes
of structured sensing matrices, and proven the performance
guarantee for the recovery of the sparse signal and corruption
via the l1 -norm minimization algorithm (3). However, our
(s, k)-RIP analysis on the structured sensing matrices is also
applicable to proving the recovery guarantee for nonconvex
optimization. Consider the following formulation of the problem
y = Ax⋆ + Hz⋆ ,
(15)
It was demonstrated in [41] that the unique minimizer of the
lp minimization problem (0 < p < 1)
min kxkpp + νkzkpp
x,z
s.t.
Ax + Hz = y,
(16)
is exactly the pair (x⋆ , z⋆ ) if the combined matrix [A, H]
satisfies the (s, k)-RIP, where ν is the regularization parameter.
In addition, the lp minimization approach still provides stable
recovery even when there is additional dense noise as long as
the (s, k)-RIP holds [41], [42]. The lp minimization problem
can be numerically solved via an iteratively reweighted least
squares (IRLS) method [43]. However, [41] only considers
the sensing model with A being random Gaussian matrices
and H being an identity matrix. With our (s, k)-RIP analysis,
many structured sensing matrices can be employed to provide
exact/stable recovery performance for this corrupted sensing
problem via lp minimization.
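As an illustration of the reweighting idea (a generic IRLS sketch for the equality-constrained problem (16), not the specific implementation of [43]), one can iterate weighted minimum-norm solves; in practice a decreasing schedule for the smoothing parameter delta is commonly used.

```python
import numpy as np

def irls_lp(A, H, y, p=0.5, nu=1.0, iters=50, delta=1e-8):
    """Sketch of IRLS for  min ||x||_p^p + nu*||z||_p^p  s.t.  A x + H z = y  (real-valued)."""
    m, n = A.shape
    M = np.hstack([A, H])                      # combined matrix [A, H]
    u = np.linalg.lstsq(M, y, rcond=None)[0]   # least-squares initialization
    for _ in range(iters):
        w = (np.abs(u) ** 2 + delta) ** (p / 2 - 1)   # standard lp reweighting
        w[n:] *= nu                                   # regularization weight on the z block
        Winv = 1.0 / w
        K = M * Winv                                  # = M @ diag(Winv)
        u = Winv * (M.T @ np.linalg.solve(K @ M.T, y))  # weighted min-norm solution of M u = y
    return u[:n], u[n:]
```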
D. Comparison with related literature
In this part, we compare our main results with related
literature.
1) Sparse signal, sparse corruption: [10] proved that
sensing matrices with independent Gaussian entries provide
Theorem III.3. Suppose y p
= Ax⋆ + Hz⋆ + w with √
Θ = uniform recovery guarantee for corrupted CS by solving (3) for
n
M×(n+M)
RΩ′ G and µ(H) ∼ 1/ M . all vectors x⋆ and z⋆ satisfying kxk0 ≤ αm/(log(n/m) + 1)
[A, H] ∈ C
,A= M
If, for δ ∈ (0, 1),
and kz⋆ k0 ≤ αm. The difference is that our theorems come
with a tighter requirement on the sparsity of x⋆ and the
m ≥ max(c7 δ −2 snµ2 (G) log2 s log2 n, c8 δ 2 s log4 n, 2c9 log n), sparsity of z⋆ , which is a compensation on the employment
m ≥ c10 δ −2 knµ2 (G) log2 k log2 n,
of structured measurements.
2
[10] also proved the recovery guarantee for structured
m ≤ c11 δ n,
sensing matrices that belong to the framework proposed
where {ci }i=7,...,11 are constants, then with probability at least in [12]. Here, faithful recovery is guaranteed provided that
2
1−2ñ− log s log n −n−c9 , the (s, k)-RIP constant of Θ satisfies kxk0 ≤ αm/(µ2 log2 n) and kz⋆ k0 ≤ βm/µ2 , where µ is the
δs,k ≤ δ.
coherence of the sensing matrix. [11] considered the corrupted
6
CS with sensing matrices that are randomly subsampled orthonormal matrix, and proved similar results. It is noted that
the requirements on the sparsity of x⋆ in these works seem less
strict than that in our results. However, in both [10] and [11],
performance guarantees of their structured sensing matrices
rely on the assumption that the support set of x⋆ or z⋆ is
fixed and the signs of the signal are independently and equally
likely to be 1 or −1 [10, Section 1.3.2] [11, Section II.B]
(i.e. a nonuniform recovery guarantee). While in our theorem,
two classes of structured sensing matrices (including randomly
subsampled orthgonal matrix) are shown to provide uniform
recovery guarantee for corrupted CS.
We note that recently the uniform recovery guarantee for
bounded orthonormal systems is proven in [13]. The bounded
orthonormal systems is more general than the random subsampled orthonormal matrix considered in our second class.
However, the corruption models are different: the corruption
vector in [13] is sparse in time domain, whereas our theorem
considers
corruption in sparsifying domain with µ(H) ∼
√
1/ M . Due to this difference in the corruption model, the
techniques used to prove the (s, k)-RIP (specifically, bound the
inner product sup(x,z)∈T |hAx, Hzi|) are essentially different.
2) Structured signal, structured corruption: In a recent
work [44], sensing with random Gaussian measurements for
general structured signals and corruptions (including sparse
vectors, low rank matrix, sign vectors and etc) has been
proven. However, our study departs from it in the following aspects: [44] proved a nonuniform recovery guarantee
for the recovering of sparse signals from sparse corruptions
and dense noise. In our paper, we established a uniform
recovery guarantee for the corresponding problem. Moreover,
[44] considered random Gaussian matrices, while we propose
structured sensing matrices.
We have shown that a large class of structured sensing
matrices can provide faithful recovery for the sparse sensing
with sparse corruption. Whether such structured measurements
can be applied in a general corrupted sensing problem (e.g.
structured signal with structured corruption) is still open.
Extension of our measurement framework to the general
corrupted sensing problem is interesting for further study.
Other works related to the recovery of signals from corrupted measurements include [20], [45]–[51]. However, their
models are different from the one in our paper.
Remark III.4. We note that the value
p of the regularization
parameter can be chosen as λ = s/k. In practice, when
no a priori knowledge on the sparsity levels of the signal and
the corruption is available, λ can usually be taken by cross
validation. On the other hand, if it is known a priori that
the corruption (the signal) is very sparse, one can increase
(decrease) the value of λ to improve the overall recovery performance. Similar discussion on the theoretical and practical
settings of the regularization parameter has also been noted in
[10, Section 1.3.3], [11, Section II.E, Section VII], [44, Section
III.B]. In addition, an iteratively reweighted l1 minimization
method can be used to adaptively improve the setting of λ in
practice [13].
IV. N UMERICAL S IMULATIONS
In this section, we verify and reinforce the theoretical
results of Section III with a series of simulations. We present
experiments to test the recovery performance of the penalized
recovery algorithm for the proposed structured sensing matrices. In each experiment, we used the CVX Matlab package
[52], [53] to specify and solve the convex recovery programs.
Two different ways of generating sparse vectors are considered:
• Gaussian setting: the nonzero entries are drawn from
a Gaussian distribution and their locations are chosen
uniformly at random,
• Flat setting: the magnitudes of nonzero entries are 1 and
their locations are chosen uniformly at random.
A. Penalized Recovery
This experiment is to investigate the empirical recovery
performance of the penalized recovery algorithm (3) when
the dense noise is zero. Here, the sensing matrix (Mtx-I)
A = UDB of size m × n with m = 256 and n = 512
is constructed as below.
1) Arbitrarily select m = 256 rows from a 512 × 512
Hadamard matrix√to form a new matrix, which is then
normalized by 1/ m to form the UTF U.
2) The diagonal entries of the diagonal matrix D are i.i.d.
Bernoulli random variables.
3) B is a normalized Hadamard matrix.
We vary the signal sparsity and the corruption sparsity with
s ∈ [1, 100] and k ∈ {10, 20, 30}. For each pair of (s, k),
we draw a sensing matrix as described above and perform the
following experiment 100 times:
1) Generate x⋆ with sparsity s by the Gaussian setting
2) Generate z⋆ with sparsity k by the Gaussian setting
3) Solve (3) by setting λ = 1
4) Declare success if4
kx̂ − x⋆ k2 /kx⋆ k2 + kẑ − z⋆ k2 /kz⋆ k2 < 10−3
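For reference, the two sparse-vector settings and the success criterion above translate directly into code; the ±1 signs used for the Flat setting are an assumption, since the text only fixes the magnitudes.

```python
import numpy as np

def sparse_vector(dim, sparsity, setting, rng):
    """Generate a sparse test vector under the 'gaussian' or 'flat' setting."""
    v = np.zeros(dim)
    support = rng.choice(dim, sparsity, replace=False)
    if setting == "gaussian":
        v[support] = rng.standard_normal(sparsity)
    else:                                        # flat: unit-magnitude nonzeros
        v[support] = rng.choice([-1.0, 1.0], sparsity)
    return v

def success(x_hat, x_true, z_hat, z_true, tol=1e-3):
    """Success criterion used in the experiments."""
    return (np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
            + np.linalg.norm(z_hat - z_true) / np.linalg.norm(z_true)) < tol
```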
The fraction of successful recovery averaged over the 100 iterations is presented in Fig. 1a. To demonstrate the performance
for signals and corruptions that do not have i.i.d. signs, the
experiment is repeated by generating the sparse vectors x⋆
and z⋆ based on the Flat setting as shown in Fig. 1b. It can be
seen that in both scenarios the performance improves as the
sparsity of the corruption decreases.
Next, we demonstrate the performance of the penalized recovery algorithm when the sensing matrix is from a randomly
subsampled orthonormal matrix. The sensing matrix (Mtx-II)
A is a collection of randomly selected M = 256 rows
p from
a 512 × 512 Hadamard matrix, and normalized by n/M .
The corruption is Hz, where H is an M × M normalized
Hadamard matrix. For each pair of (s, k), we repeat the above
steps 100 times to obtain the probability of success (see Fig.
2). It is noted that the recovery performance of Mtx-I is better
than that of Mtx-II. This seems consistent with our theoretical
analysis as the random subsampled orthonormal matrix shows
4 This
criterion indicates that both x⋆ and z⋆ have been faithfully recovered.
7
1
1
Fig. 1. Probability of success as a function of the signal sparsity s using penalized recovery with signal dimension n = 512, number of measurements
m = 256, and the corruption sparsity k = {10, 20, 30} for Mtx-I.
1
1
Fig. 2. Probability of success as a function of the signal sparsity s using penalized recovery with signal dimension n = 512, number of measurements
m = 256, and the corruption sparsity k = {10, 20, 30} for Mtx-II.
more stringent recovery condition than the UDB framework
(see Theorem III.1 and III.3). However, since the (s, k)-RIP
is a sufficient condition for the recovery guarantee, it may not
fully reflect the performance gap between the two classes of
structured sensing matrices. Further investigation based on a
necessary and sufficient condition for the recovery guarantee
of the corrupted CS problem is a difficult, but interesting open
question.
B. Stable recovery
We study the stability of the penalized recovery algorithms
when the dense noise is nonzero, i.e., ε 6= 0, and compare
the performance of structured sensing matrix (Mtx-I) with
random Gaussian sensing matrix. In this experiment, the 256by-512 sensing matrix (Mtx-I) is constructed as in previous
subsection. We fix the signal and corruption sparsity levels at
s = 10 and k = 10 respectively. The dense noise w consists
of i.i.d. Bernoulli entries with amplitude ε. We vary the noise
level with ε ∈ [0, 0.1], and perform the following experiment
100 times for each ε:
8
of the set. For a metric space (T, d), the covering number
N (T, d, u) is the minimal number of open balls of radius u
needed to cover (T, d). A subset T of T is called a u-net of
T if every point x ∈ T can be approximated to within u by
some point x̄ ∈ T , i.e., d(x, x̄) ≤ u. The minimal cardinality
of T is equivalent to the covering number N (T, d, u). The pth moment (or the Lp -norm) of a random variable is denoted
by kXkLp = (E|X|p )1/p .
We aim to upper bound the variable ∆ := supv∈Av |hv, ξi|
which is the supremum of a stochastic process with the index
set Av . To complete the proof, we require the following
important result due to Krahmer et al.:
3
Theorem A.1. [14, Theorem 3.5 (a)] Let A be a set of
matrices, and let ξ be a random vector whose entries ξj are
independent, mean 0, variance 1, and L-subgaussian random
variables. Set
dF (A) = sup kSkF ,
d2→2 (A) = sup kSk2→2 ,
S∈A
Fig. 3. Empirical recovery error versus the noise level ε.
S∈A
1) Generate x⋆ with s = 10 by the Gaussian setting
2) Generate z⋆ with k = 10 by the Gaussian setting
3) Solve penalized recovery (p-rec) algorithm (3) by setting
λ=1
4) Record the empirical recovery error kx̂ − x⋆ k2 + kẑ −
z⋆ k2
An average recovery error is then obtained for each ε. Fig. 3
depicts the average error with varying noise levels. The results
in Theorems II.2 and III.1 imply that the recovery errors are
bounded by the noise level ε up to some constants. Fig. 3
clearly shows this linear relationship. In addition, we repeat the
above experiments with an iteratively reweighted least squares
approach [43] using p = 0.5. As shown in Fig. 3, the structured
sensing matrix is still able to exhibit stable performance by the
nonconvex optimization algorithm.
V. C ONCLUSION
We have studied a generalized CS problem where the
measurement vector is corrupted by both sparse noise and
dense noise. We have proven that structured random matrices
encompassed in the UDB framework or the randomly subsampled orthonormal system can satisfy the sufficient condition,
i.e., the (s, k)-RIP. These structured matrices can therefore be
applied to provide faithful recovery of both the sparse signal
and the corruption by the penalized optimization algorithm as
well as the nonconvex optimization algorithm. Our simulations
have clearly illustrated and reinforced our theoretical results.
A PPENDIX A
P ROOF OF L EMMA III.2
Throughout the proof in this and the following sections, C
and c denote an absolute constant whose values may change
from occurrence to occurrence.
A metric space is denoted by (T, d), where T is a set
and d is the notion of distance (metric) between elements
S∈A
NA (ξ) := sup kSξk2 ,
E = γ2 (A, k · k2→2 ) + dF (A).
Then, for every p ≥ 1,
kNA (ξ)kLp ≤ C(E +
√
pd2→2 (A)),
(17)
where C is a constant depends only on L.
Here, NA (ξ) represents the supremum of certain stochastic
processes indexed by a set of matrices A. The above Proposition implies that NA (ξ) can be bounded by three parameters:
the suprema of Frobenius norms dF (A), the suprema of
operator norms d2→2 (A) and a γ2 -functional γ2 (A, k · k2→2 ),
which can be bounded in terms of the covering numbers
N (A, k · k2→2 , u) as below.
γ_2(A, ‖·‖_{2→2}) ≤ c \int_0^{d_{2→2}(A)} \sqrt{\log N(A, ‖·‖_{2→2}, u)} \, du,
where the integral is known as Dudley integral or entropy
integral [54].
We can transfer the estimates on the moment (17) to a tail
bound by the standard estimate due to Markov’s inequality
(see [5, Proposition 7.15]).
Proposition A.2. Following the definitions in Theorem A.1, for t ≥ 1,
$$P\bigl(N_{\mathcal{A}}(\xi) \ge C E + C\, d_{2\to2}(\mathcal{A})\, t\bigr) \le \exp(-t^2). \qquad (18)$$
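For completeness, one standard route from the moment bound (17) to this tail bound — our own expansion of the cited argument, not reproduced from [5] — is Markov's inequality: since $\mathbb{E}\,N_{\mathcal{A}}(\xi)^p \le C^p\bigl(E + \sqrt{p}\,d_{2\to2}(\mathcal{A})\bigr)^p$,
$$P\Bigl(N_{\mathcal{A}}(\xi) \ge e\,C\bigl(E + \sqrt{p}\,d_{2\to2}(\mathcal{A})\bigr)\Bigr) \le \frac{\mathbb{E}\,N_{\mathcal{A}}(\xi)^p}{\bigl(e\,C(E + \sqrt{p}\,d_{2\to2}(\mathcal{A}))\bigr)^p} \le e^{-p},$$
and choosing p = t² (absorbing the factor e into the constant) recovers (18).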
It can be observed that ∆ can be expressed in the form of N_A(ξ), where S and A are replaced by v and A_v, respectively. Now, we only need to estimate the parameters d_F(A_v), d_{2→2}(A_v) and γ₂(A_v, ‖·‖_{2→2}) before bounding ∆ using Proposition A.2. Since A_v is a set of vectors, we have d_F(A_v) = d_{2→2}(A_v) and γ₂(A_v, ‖·‖_{2→2}) = γ₂(A_v, ‖·‖₂).
For any vector x ∈ D_{s,n}, we denote by x_s the length-s vector that retains only the non-zero elements of x. Correspondingly, for any vector b ∈ C^n, we denote by b_s the length-s vector that retains only the elements whose indices coincide with those of the non-zero elements of x. We
have, for any v ∈ A_v,
$$\begin{aligned}
\|v\|_2 = \|z^* U\,\mathrm{diag}(\widetilde{B}x)\|_2
&\le \|z^* U\|_2\,\|\widetilde{B}x\|_\infty
 = \sqrt{\tfrac{\tilde n}{m}}\,\|z\|_2 \max_{j\in[\tilde n]}|\langle \widetilde{B}_{(j,:)}, x\rangle| \\
&= \sqrt{\tfrac{\tilde n}{m}}\,\|z\|_2 \max_{j\in[\tilde n]}|\langle \widetilde{B}^s_{(j,:)}, x_s\rangle|
 \le \sqrt{\tfrac{\tilde n}{m}}\,\mu(\widetilde{B})\sqrt{s}\,\|x\|_2\|z\|_2
 \le \tfrac{1}{2}\sqrt{\tfrac{\tilde n}{m}}\,\mu(\widetilde{B})\sqrt{s},
\end{aligned}$$
where the last inequality is due to ‖x‖₂² + ‖z‖₂² = 1. Therefore,
$$d_F(\mathcal{A}_v) = d_{2\to2}(\mathcal{A}_v) \le \tfrac{1}{2}\sqrt{\tfrac{\tilde n}{m}}\,\mu(\widetilde{B})\sqrt{s}. \qquad (19)$$
Following the same steps, we can alternatively obtain, for any v ∈ A_v,
$$\|v\|_2 = \|z^* U\,\mathrm{diag}(\widetilde{B}x)\|_2 \le \|z^* U\|_\infty\,\|\widetilde{B}x\|_2 \le \tfrac{1}{2}\,\mu(U)\sqrt{k}.$$
This provides another upper bound,
$$d_F(\mathcal{A}_v) = d_{2\to2}(\mathcal{A}_v) \le \tfrac{1}{2}\,\mu(U)\sqrt{k}. \qquad (20)$$
We note that both (19) and (20) are valid bounds, and they are not comparable to each other since the relationship between s and k is unknown. It will be clear later that both bounds are useful for computing the entropy integrals; in particular, (19) and (20) are used for computing I₁ and I₂, respectively (as in (23)).
Next, we bound the γ₂-functional γ₂(A_v, ‖·‖₂) by estimating the covering numbers N(A_v, ‖·‖₂, u). The derivation is divided into two steps.
Step 1. Decompose N(A_v, ‖·‖₂, u). Let D₁ = {x ∈ C^n : ‖x‖₂² ≤ 1, ‖x‖₀ ≤ s} and define the semi-norm ‖·‖_{K₁} as
$$\|x\|_{K_1} = \|U\,\mathrm{diag}(\widetilde{B}x)\|_{2\to2} \quad \forall x \in \mathbb{C}^n. \qquad (21)$$
For the metric space (D₁, ‖·‖_{K₁}), we take D̄₁ to be a u/2-net of D₁ with |D̄₁| = N(D₁, ‖·‖_{K₁}, u/2). Let D₂ = {z ∈ C^m : ‖z‖₂² ≤ 1, ‖z‖₀ ≤ k} and define the semi-norm ‖·‖_{K₂} as
$$\|z\|_{K_2} = \|\widetilde{B}^*\,\mathrm{diag}(U^* z)\|_{2\to2} \quad \forall z \in \mathbb{C}^m. \qquad (22)$$
For the metric space (D₂, ‖·‖_{K₂}), we take D̄₂ to be a u/2-net of D₂ with |D̄₂| = N(D₂, ‖·‖_{K₂}, u/2).
Now, let Ā_v = {(z̄^* U diag(B̃ x̄))^* : x̄ ∈ D̄₁, z̄ ∈ D̄₂} and remark that |Ā_v| ≤ |D̄₁||D̄₂|. It remains to show that for all v ∈ A_v there exists v̄ ∈ Ā_v with ‖v − v̄‖₂ ≤ u.
For any v = (z^* U diag(B̃ x))^* ∈ A_v, there exists v̄ = (z̄^* U diag(B̃ x̄))^* ∈ Ā_v with x̄ ∈ D̄₁ and z̄ ∈ D̄₂ obeying ‖x − x̄‖_{K₁} ≤ u/2 and ‖z − z̄‖_{K₂} ≤ u/2. This gives
$$\begin{aligned}
\|v - \bar v\|_2 &= \|z^* U\,\mathrm{diag}(\widetilde{B}x) - \bar z^* U\,\mathrm{diag}(\widetilde{B}\bar x)\|_2 \\
&= \|z^* U\,\mathrm{diag}(\widetilde{B}x) - z^* U\,\mathrm{diag}(\widetilde{B}\bar x) + z^* U\,\mathrm{diag}(\widetilde{B}\bar x) - \bar z^* U\,\mathrm{diag}(\widetilde{B}\bar x)\|_2 \\
&\le \|z^* U\,\mathrm{diag}(\widetilde{B}(x - \bar x))\|_2 + \|(z - \bar z)^* U\,\mathrm{diag}(\widetilde{B}\bar x)\|_2 \\
&= \|z^* U\,\mathrm{diag}(\widetilde{B}(x - \bar x))\|_2 + \|\bar x^* \widetilde{B}^*\,\mathrm{diag}(U^*(z - \bar z))\|_2 \\
&\overset{(a)}{\le} \|z\|_2\,\|U\,\mathrm{diag}(\widetilde{B}(x - \bar x))\|_{2\to2} + \|\bar x\|_2\,\|\widetilde{B}^*\,\mathrm{diag}(U^*(z - \bar z))\|_{2\to2} \\
&\le \|x - \bar x\|_{K_1} + \|z - \bar z\|_{K_2} \le u,
\end{aligned}$$
where (a) is due to the fact that ‖z‖₂ ≤ 1 and ‖x̄‖₂ ≤ 1. Hence,
$$N(\mathcal{A}_v, \|\cdot\|_2, u) \le |\bar{\mathcal{A}}_v| \le N(D_1, \|\cdot\|_{K_1}, u/2)\,N(D_2, \|\cdot\|_{K_2}, u/2).$$
The γ₂-functional γ₂(A_v, ‖·‖₂) can now be estimated by
$$\begin{aligned}
\gamma_2(\mathcal{A}_v, \|\cdot\|_2) &\le c \int_0^{d_{2\to2}(\mathcal{A}_v)} \sqrt{\log N(\mathcal{A}_v, \|\cdot\|_2, u)}\, du \\
&\lesssim \underbrace{\int_0^{d_{2\to2}(\mathcal{A}_v)} \sqrt{\log N(D_1, \|\cdot\|_{K_1}, u/2)}\, du}_{I_1}
 + \underbrace{\int_0^{d_{2\to2}(\mathcal{A}_v)} \sqrt{\log N(D_2, \|\cdot\|_{K_2}, u/2)}\, du}_{I_2}. \qquad (23)
\end{aligned}$$
Step 2. Estimate the covering numbers N(D₁, ‖·‖_{K₁}, u/2) and N(D₂, ‖·‖_{K₂}, u/2) and the entropy integrals. We estimate each covering number in two different ways. For small values of u, we use a volumetric argument. For large values of u, we use the Maurey method ([14, Lemma 4.2], or [5, Problem 12.9]). Then, the resulting covering number estimates can be used to compute the entropy integrals I₁ and I₂. Similar techniques for covering number estimation and entropy integral computation have been used in the CS literature, e.g., [6], [7], [14], [55].
From [7, Equation (28)] and (19), we have
$$I_1 \lesssim \sqrt{\frac{s\tilde n}{m}}\,\mu(\widetilde{B})\,(\log s)(\log \tilde n). \qquad (24)$$
It remains to estimate N(D₂, ‖·‖_{K₂}, u/2) and compute I₂.
1) Small u. We observe that D₂ is a subset of the union of $\binom{m}{k}$ unit Euclidean balls B₂ᵏ,
$$B_2^k := \{z \in \mathbb{C}^m : \|z\|_2 \le 1,\ |\mathrm{supp}(z)| \le k\}. \qquad (25)$$
For any z ∈ D₂,
$$\|z\|_{K_2} = \|\widetilde{B}^*\,\mathrm{diag}(U^* z)\|_{2\to2} \le \|U^* z\|_\infty \le \max_{i\in[n]} |\langle U_{(:,i)}, z\rangle| \le \mu(U)\|z\|_1 \le \mu(U)\sqrt{k}\,\|z\|_2 \le \sqrt{\tfrac{k}{m}}\,\|z\|_2, \qquad (26)$$
where the last step is due to the assumption that µ(U) ∼ 1/√m.
Therefore,
$$N(D_2, \|\cdot\|_{K_2}, u/2) \le \binom{m}{k}\, N(B_2^k, \|\cdot\|_{K_2}, u/2)
 \le \binom{m}{k}\, N\Bigl(B_2^k, \sqrt{\tfrac{k}{m}}\,\|\cdot\|_2, u/2\Bigr)
 \le \Bigl(\frac{em}{k}\Bigr)^{k} \Bigl(1 + \frac{4\sqrt{k/m}}{u}\Bigr)^{k}, \qquad (27)$$
where the last inequality is an application of [6, Proposition 10.1] and [5, Lemma C.5].
2) Large u. For any z ∈ D₂, we have ‖z‖₁ ≤ √k ‖z‖₂ ≤ √k, which gives
$$D_2 \subset \sqrt{k}\,B_1^m := \{z \in \mathbb{C}^m : \|z\|_1 \le \sqrt{k}\}.$$
Then,
$$N(D_2, \|\cdot\|_{K_2}, u/2) \le N(\sqrt{k}\,B_1^m, \|\cdot\|_{K_2}, u/2) = N(B_1^m, \|\cdot\|_{K_2}, u/(2\sqrt{k})).$$
Based on the Maurey method, for 0 < u < (1/2)µ(U)√k, the covering number can be estimated by [6, Lemma 8.3]
$$\sqrt{\log N(D_2, \|\cdot\|_{K_2}, u/2)} \lesssim \sqrt{k}\,\mu(U)\sqrt{\log \tilde n}\,\sqrt{\log (m u^{-1})}
 \le \sqrt{\frac{k}{m}}\,\sqrt{\log \tilde n}\,\sqrt{\log (m u^{-1})}. \qquad (28)$$
We note that the estimation based on the Maurey method depends on the range of the parameter u (see [6, Lemma 8.3]), which is the reason why we employ different bounds ((19) and (20)) when computing the entropy integrals I₁ and I₂.
We now combine the results (27) and (28) to estimate the entropy integral I₂: we apply the first bound for 0 < u ≤ (1/10)√(1/m), and the second bound for (1/10)√(1/m) < u ≤ d_{2→2}(A_v) = (1/2)√(k/m).
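Written out explicitly (a restatement of the split just described, added for the reader's convenience), this is
$$I_2 \le \int_0^{\frac{1}{10}\sqrt{1/m}} \sqrt{\log N(D_2, \|\cdot\|_{K_2}, u/2)}\, du + \int_{\frac{1}{10}\sqrt{1/m}}^{\frac{1}{2}\sqrt{k/m}} \sqrt{\log N(D_2, \|\cdot\|_{K_2}, u/2)}\, du,$$
with (27) bounding the integrand on the first interval and (28) on the second.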
This yields
$$I_2 \lesssim \sqrt{\frac{k}{m}}\,\log \tilde n \,\log k. \qquad (29)$$
Combining (23), (24) and (29),
$$\gamma_2(\mathcal{A}_v, \|\cdot\|_2) \lesssim \sqrt{\frac{s\tilde n}{m}}\,\mu(\widetilde{B})(\log s)(\log \tilde n) + \sqrt{\frac{k}{m}}\,\log \tilde n \log k. \qquad (30)$$
Finally, we are ready to complete the proof by applying Proposition A.2. Under the assumptions on m and p, for δ ∈ (0, 1),
$$m \ge c_1 \delta^{-2} s \tilde n\, \mu^2(\widetilde{B}) \log^2 s \log^2 \tilde n, \qquad m \ge c_2 \delta^{-2} k \log k \log \tilde n, \qquad (31)$$
we have, by (19),
$$d_F(\mathcal{A}_v) = d_{2\to2}(\mathcal{A}_v) \lesssim \frac{\delta}{\log s \log \tilde n}, \qquad \gamma_2(\mathcal{A}_v, \|\cdot\|_2) \lesssim \delta.$$
By substituting the above results into Proposition A.2 (let t = log s log ñ), one obtains
$$P\Bigl(\sup_{v\in\mathcal{A}_v} |\langle v, \xi\rangle| \le c\delta\Bigr) \ge 1 - \exp(-\log^2 s \log^2 \tilde n). \qquad (32)$$
The proof is completed by incorporating the constant c into c₁, c₂.

APPENDIX B
PROOF OF THEOREM III.3

Recall that in the measurement model y = Ax⋆ + Hz⋆ + w, A = √(n/M) R_{Ω′} G is a randomly sub-sampled unitary matrix and H ∈ C^{M×M} is a unitary matrix with µ(H) ∼ 1/√M. The following lemma from [56] is needed.

Lemma B.1 (Theorem 3.3 [56]). For the matrix A = √(n/m) R_{Ω′} G, if for δ ∈ (0, 1),
$$m \ge c\delta^{-2} s \log^4 n,$$
then with probability at least 1 − n^{−log³ n} the restricted isometry constant δ_s of A satisfies δ_s ≤ δ.

The (s, k)-RIP associated with Θ = [A, H] can be bounded by
$$\delta_{s,k} \le \underbrace{\sup_{(x,z)\in T} \bigl|\,\|Ax\|_2^2 - \|x\|_2^2\,\bigr|}_{\delta_1} + 2\,\underbrace{\sup_{(x,z)\in T} |\langle Ax, Hz\rangle|}_{\delta_2}, \qquad (33)$$
where T := {(x, z) : ‖x‖₂² + ‖z‖₂² = 1, ‖x‖₀ ≤ s, ‖z‖₀ ≤ k, x ∈ C^n, z ∈ C^m}.

By Lemma B.1, δ₁ ≤ δ/2 holds with probability 1 − n^{−log³ n} for any δ ∈ (0, 1) if m ≥ cδ^{−2} s log⁴ n.

Define a random vector d ∈ C^n with i.i.d. entries satisfying λ_i = √(m(n−m)/n²)·d_i + m/n. Assume Λ = diag(λ) and H*R_{Ω′} = U′. We have
$$\begin{aligned}
\delta_2 &= 2 \sup_{(x,z)\in T} \Bigl|\Bigl\langle \sqrt{\tfrac{n}{M}}\, R_{\Omega'} G x,\, H z\Bigr\rangle\Bigr|
 = 2 \sup_{(x,z)\in T} \sqrt{\tfrac{n}{M}}\, \bigl|z^* U' \Lambda G x\bigr|
 = 2 \sup_{(x,z)\in T} \sqrt{\tfrac{n}{M}}\, \bigl|z^* U' \mathrm{diag}(Gx)\, \lambda\bigr| \\
&\le 2 \sup_{(x,z)\in T} \tfrac{1}{2}\sqrt{\tfrac{n}{M}}\, \bigl|z^* U' \mathrm{diag}(Gx)\, d\bigr|
 + 2 \sup_{(x,z)\in T} \sqrt{\tfrac{m^2}{Mn}}\, \bigl|z^* U' G x\bigr| \;=:\; t_1 + t_2,
\end{aligned}$$
where the last inequality is due to the fact that √(m(n−m)/n²) ≤ 1/2 for any m ≤ n.
Since λ is a random Bernoulli vector with i.i.d. entries, by construction d is a length-n random vector with independent, zero-mean, unit-variance, L-subgaussian entries. Hence, the bound for t₁ can be formulated as the supremum of a stochastic process with index set A_r, where r = √(n/M) z* U′ diag(Gx) and A_r := {r : ‖x‖₂² + ‖z‖₂² = 1, ‖x‖₀ ≤ s, ‖z‖₀ ≤ k}. For any r ∈ A_r,
$$\begin{aligned}
\|r\|_2 &= \sqrt{\tfrac{n}{M}}\, \|z^* U'\,\mathrm{diag}(Gx)\|_2
 \le \sqrt{\tfrac{n}{M}}\, \|z^* H^* R_{\Omega'}\|_2\, \|Gx\|_\infty \\
&= \sqrt{\tfrac{n}{M}}\, \|z\|_2 \max_{j\in[n]} |\langle G_{(j,:)}, x\rangle|
 \le \tfrac{1}{2}\sqrt{\tfrac{n}{M}}\, \mu(G)\sqrt{s},
\end{aligned}$$
$$\begin{aligned}
\|r\|_2 &= \sqrt{\tfrac{n}{M}}\, \|z^* U'\,\mathrm{diag}(Gx)\|_2
 \le \sqrt{\tfrac{n}{M}}\, \|z^* H^*\|_\infty\, \|R_{\Omega'} G x\|_2 \\
&\le \sqrt{\tfrac{n}{M}}\, \mu(H)\sqrt{k}\, \|G_{(\Omega',:)}\, x\|_2
 \le \tfrac{1}{2}\sqrt{n}\, \mu(G)\mu(H)\sqrt{k}.
\end{aligned}$$
Therefore,
$$d_F(\mathcal{A}_r) \le \tfrac{1}{2}\sqrt{\tfrac{n}{M}}\, \mu(G)\sqrt{s}, \qquad
d_F(\mathcal{A}_r) \le \tfrac{1}{2}\sqrt{n}\, \mu(G)\mu(H)\sqrt{k}.$$
By following the same proof steps as in Appendix A, we have
$$P\Bigl(\sup_{r\in\mathcal{A}_r} |\langle r, d\rangle| \le c\delta\Bigr) \ge 1 - \exp(-\log^2 s \log^2 n) \qquad (34)$$
provided that
$$M \ge c\delta^{-2} s n \mu^2(G) \log^2 s \log^2 n, \qquad M \ge c\delta^{-2} k n \mu^2(G) \log^2 k \log^2 n.$$
Bernstein's inequality [57, Theorem A.1.13] gives, for any ν > 0,
$$P(M > (1-\nu)m) \ge 1 - \exp\Bigl(-\frac{m\nu^2}{2}\Bigr). \qquad (35)$$
Hence, if
$$m \ge \frac{1}{1-\nu}\, c\delta^{-2} s n \mu^2(G) \log^2 s \log^2 n, \qquad
m \ge \frac{1}{1-\nu}\, c\delta^{-2} k n \mu^2(G) \log^2 k \log^2 n,$$
then
$$P\Bigl(\sup_{r\in\mathcal{A}_r} |\langle r, d\rangle| \le c\delta\Bigr) \ge 1 - \exp(-\log^2 s \log^2 n) - \exp\Bigl(-\frac{m\nu^2}{2}\Bigr).$$
By assuming m ≥ 2c′ log n and ν = √(2c′ log n / m), the above probability of success can be written as
$$P\Bigl(\sup_{r\in\mathcal{A}_r} |\langle r, d\rangle| \le c\delta\Bigr) \ge 1 - n^{-\log^2 s \log n} - n^{-c'}. \qquad (36)$$
For the second term, we have
$$t_2 = \sup_{(x,z)\in T} \frac{2m}{\sqrt{Mn}}\, |z^* U' G x| \le \frac{2m}{\sqrt{Mn}}\, \|z\|_2\|x\|_2 \le \frac{m}{\sqrt{Mn}},$$
where the last inequality is due to ‖x‖₂² + ‖z‖₂² = 1. Therefore, t₂ ≤ δ/2 for any δ ∈ (0, 1) if M ∼ m and m ≲ δ²n. By Bernstein's inequality, this condition can be satisfied with probability exceeding 1 − n^{−c′} as long as m ≤ cδ²n. Theorem III.3 is proved by combining the above results.
REFERENCES
[1] E. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact
signal reconstruction from highly incomplete frequency information,”
IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, Feb 2006.
[2] E. Candes and T. Tao, “Near-optimal signal recovery from random
projections: Universal encoding strategies?” IEEE Trans. Inf. Theory,
vol. 52, no. 12, pp. 5406 –5425, Dec. 2006.
[3] D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52,
no. 4, pp. 1289–1306, Apr. 2006.
[4] Y. C. Eldar and G. Kutyniok, Compressed sensing: theory and applications. Cambridge University Press, 2012.
[5] S. Foucart and H. Rauhut, A mathematical introduction to compressive
sensing. Springer, 2013.
[6] H. Rauhut, “Compressive sensing and structured random matrices,”
Theoretical foundations and numerical methods for sparse recovery,
vol. 9, pp. 1–92, 2010.
[7] P. Zhang, L. Gan, S. Sun, and C. Ling, “Modulated unit-norm tight
frames for compressed sensing,” IEEE Trans. Signal Process., vol. 63,
no. 15, pp. 3974–3985, Aug 2015.
[8] J. Bourgain, “An improved estimate in the restricted isometry problem,”
in Geometric Aspects of Functional Analysis. Springer, 2014, pp. 65–
70.
[9] I. Haviv and O. Regev, “The restricted isometry property of subsampled Fourier matrices,” in Geometric Aspects of Functional Analysis.
Springer, 2017, pp. 163–179.
[10] X. Li, “Compressed sensing and matrix completion with constant
proportion of corruptions,” Constructive Approximation, vol. 37, no. 1,
pp. 73–99, 2013.
[11] N. Nguyen and T. Tran, “Exact recoverability from dense corrupted
observations via l1 -minimization,” IEEE Trans. Inf. Theory, vol. 59,
no. 4, pp. 2017–2035, April 2013.
[12] E. J. Candes and Y. Plan, “A probabilistic and RIPless theory of
compressed sensing,” IEEE Trans. Inf. Theory, vol. 57, no. 11, pp. 7235–
7254, 2011.
[13] B. Adcock, A. Bao, A. Narayan et al., “Compressed sensing with
sparse corruptions: Fault-tolerant sparse collocation approximations,”
arXiv preprint arXiv:1703.00135, 2017.
[14] F. Krahmer, S. Mendelson, and H. Rauhut, “Suprema of chaos processes
and the restricted isometry property,” Communications on Pure and
Applied Mathematics, 2014.
[15] J. Haupt, W. U. Bajwa, M. Rabbat, and R. Nowak, “Compressed sensing
for networked data,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 92–
101, 2008.
[16] C. Franke and M. Gertz, “ORDEN: Outlier region detection and exploration in sensor networks,” in Proceedings of the 2009 ACM SIGMOD
International Conference on Management of data. ACM, 2009, pp.
1075–1078.
[17] Y. Zhang, N. Meratnia, and P. Havinga, “Outlier detection techniques for
wireless sensor networks: A survey,” Commun. Surveys Tuts., vol. 12,
no. 2, pp. 159–170, 2010.
[18] Z. Charbiwala, S. Chakraborty, S. Zahedi, Y. Kim, M. Srivastava,
T. He, and C. Bisdikian, “Compressive oversampling for robust data
transmission in sensor networks,” in INFOCOM, 2010 Proceedings
IEEE, March 2010, pp. 1–9.
[19] Z. Li, F. Wu, and J. Wright, “On the systematic measurement matrix
for compressed sensing in the presence of gross errors,” in Data
Compression Conference (DCC), 2010. IEEE, 2010, pp. 356–365.
[20] J. N. Laska, M. Davenport, R. G. Baraniuk et al., “Exact signal recovery
from sparsely corrupted measurements through the pursuit of justice,” in
Signals, Systems and Computers, 2009 Conference Record of the FortyThird Asilomar Conference on. IEEE, 2009, pp. 1556–1560.
[21] C. Studer, P. Kuppinger, G. Pope, and H. Bolcskei, “Recovery of sparsely
corrupted signals,” IEEE Trans. Inf. Theory, vol. 58, no. 5, pp. 3115–
3130, 2012.
[22] B. M. Popovic, “Synthesis of power efficient multitone signals with flat
amplitude spectrum,” IEEE Trans. Commun., vol. 39, no. 7, pp. 1031–
1033, 1991.
[23] J. A. Davis and J. Jedwab, “Peak-to-mean power control in OFDM,
Golay complementary sequences, and Reed-Muller codes,” IEEE Trans.
Inf. Theory, vol. 45, no. 7, pp. 2397–2417, 1999.
[24] B. Levitt, “FH/MFSK performance in multitone jamming,” IEEE J. Sel.
Areas Commun., vol. 3, no. 5, pp. 627–643, 1985.
[25] C. R. Berger, S. Zhou, J. C. Preisig, and P. Willett, “Sparse channel
estimation for multicarrier underwater acoustic communication: From
subspace methods to compressed sensing,” IEEE Trans. Signal Process.,
vol. 58, no. 3, pp. 1708–1721, 2010.
[26] J. Haupt, W. U. Bajwa, G. Raz, and R. Nowak, “Toeplitz compressed
sensing matrices with applications to sparse channel estimation,” IEEE
Trans. Inf. Theory, vol. 56, no. 11, pp. 5862–5875, 2010.
[27] J. Meng, W. Yin, Y. Li, N. T. Nguyen, and Z. Han, “Compressive sensing
based high-resolution channel estimation for OFDM system,” IEEE J.
Sel. Topics Signal Process., vol. 6, no. 1, pp. 15–25, 2012.
[28] K. Li, L. Gan, and C. Ling, “Convolutional compressed sensing using
deterministic sequences,” IEEE Trans. Signal Process., vol. 61, no. 3,
pp. 740–752, 2013.
[29] D. Umehara, H. Nishiyori, and Y. Morihiro, “Performance evaluation
of CMFB transmultiplexer for broadband power line communications
under narrowband interference,” in Power Line Communications and Its
Applications, 2006 IEEE International Symposium on. IEEE, 2006, pp.
50–55.
[30] A. Gomaa and N. Al-Dhahir, “A sparsity-aware approach for NBI
estimation in MIMO-OFDM,” IEEE Trans. Wireless Commun., vol. 10,
no. 6, pp. 1854–1862, 2011.
[31] J. Romberg, “Compressive sensing by random convolution,” SIAM J.
Imaging Sciences, vol. 2, no. 4, pp. 1098–1128, 2009.
[32] G. Puy, P. Vandergheynst, R. Gribonval, and Y. Wiaux, “Universal and
efficient compressed sensing by spread spectrum and application to
realistic Fourier imaging techniques,” EURASIP J. Advances in Signal
Processing, vol. 2012, no. 1, pp. 1–13, 2012.
[33] J. A. Tropp, J. N. Laska, M. F. Duarte, J. K. Romberg, and R. G.
Baraniuk, “Beyond Nyquist: Efficient sampling of sparse bandlimited
signals,” IEEE Trans. Inf. Theory, vol. 56, no. 1, pp. 520–544, 2010.
[34] J. Romberg and R. Neelamani, “Sparse channel separation using random
probes,” Inverse Problems, vol. 26, no. 11, p. 115015, 2010.
[35] J. P. Slavinsky, J. N. Laska, M. A. Davenport, and R. G. Baraniuk,
“The compressive multiplexer for multi-channel compressive sensing,”
in IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP),
2011, pp. 3980–3983.
[36] Y. Rivenson, A. Stern, and B. Javidi, “Single exposure super-resolution
compressive imaging by double phase encoding,” Optics express, vol. 18,
no. 14, pp. 15 094–15 103, 2010.
[37] J. Li, J. S. Li, Y. Y. Pan, and R. Li, “Compressive optical image
encryption,” Scientific reports, vol. 5, 2015.
[38] M. A. Herman and T. Strohmer, “High-resolution radar via compressed
sensing,” IEEE Trans. Signal Process., vol. 57, no. 6, pp. 2275–2284,
2009.
[39] R. F. Marcia, C. Kim, J. Kim, D. J. Brady, and R. M. Willett, “Fast
disambiguation of superimposed images for increased field of view,” in
Image Processing, 2008. ICIP 2008. 15th IEEE International Conference
on. IEEE, 2008, pp. 2620–2623.
[40] D.-S. Pham and S. Venkatesh, “Improved image recovery from compressed data contaminated with impulsive noise,” IEEE Trans. Image
Process., vol. 21, no. 1, pp. 397–405, 2012.
[41] M. Filipović, “Reconstruction of sparse signals from highly corrupted
measurements by nonconvex minimization,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014.
[42] R. Saab, R. Chartrand, and O. Yilmaz, “Stable sparse approximations via
nonconvex optimization,” in Acoustics, Speech and Signal Processing,
2008. ICASSP 2008. IEEE International Conference on. IEEE, 2008,
pp. 3885–3888.
[43] R. Chartrand and W. Yin, “Iteratively reweighted algorithms for compressive sensing,” in IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP). IEEE, 2008, pp. 3869–3872.
[44] R. Foygel and L. Mackey, “Corrupted sensing: Novel guarantees for
separating structured signals,” IEEE Trans. Inf. Theory, vol. 60, no. 2,
pp. 1223–1247, Feb 2014.
[45] Y. Chen, C. Caramanis, and S. Mannor, “Robust sparse regression
under adversarial corruption,” in Proceedings of the 30th International
Conference on Machine Learning (ICML-13), 2013, pp. 774–782.
[46] C. Studer and R. G. Baraniuk, “Stable restoration and separation of
approximately sparse signals,” Applied and Computational Harmonic
Analysis, vol. 37, no. 1, pp. 12–35, 2014.
[47] G. Pope, A. Bracher, and C. Studer, “Probabilistic recovery guarantees
for sparsely corrupted signals,” IEEE Trans. Inf. Theory, vol. 59, no. 5,
pp. 3104–3116, May 2013.
[48] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, “Robust face
recognition via sparse representation,” IEEE Trans. Pattern Anal. Mach.
Intell., vol. 31, no. 2, pp. 210–227, 2009.
[49] J. N. Laska, P. T. Boufounos, M. A. Davenport, and R. G. Baraniuk,
“Democracy in action: Quantization, saturation, and compressive sensing,” Applied and Computational Harmonic Analysis, vol. 31, no. 3, pp.
429–443, 2011.
[50] Y. Chi, “Convex relaxations of spectral sparsity for robust superresolution and line spectrum estimation,” in Wavelets and Sparsity XVII,
vol. 10394. International Society for Optics and Photonics, 2017, p.
103941G.
[51] C. Fernandez-Granda, G. Tang, X. Wang, and L. Zheng, “Demixing
sines and spikes: Robust spectral super-resolution in the presence of
outliers,” arXiv preprint arXiv:1609.02247, 2016.
[52] M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex
programming, version 2.1,” http://cvxr.com/cvx, Mar. 2014.
[53] ——, “Graph implementations for nonsmooth convex programs,”
in Recent Advances in Learning and Control, ser. Lecture Notes
in Control and Information Sciences, V. Blondel, S. Boyd, and
H. Kimura, Eds.
Springer-Verlag Limited, 2008, pp. 95–110,
http://stanford.edu/~boyd/graph_dcp.html.
[54] M. Talagrand, The generic chaining. Springer, 2005, vol. 154.
[55] A. Eftekhari, H. L. Yap, C. J. Rozell, and M. B. Wakin, “The restricted
isometry property for random block diagonal matrices,” arXiv preprint
arXiv:1210.3395, 2012.
[56] M. Rudelson and R. Vershynin, “On sparse reconstruction from Fourier
and Gaussian measurements,” Communications on Pure and Applied
Mathematics, vol. 61, no. 8, pp. 1025–1045, 2008.
[57] N. Alon and J. H. Spencer, The probabilistic method. John Wiley &
Sons, 2004.
A short note on Simulation and Abstraction
Chris Hankin
Institute for Security Science and Technology
Imperial College London, UK
[email protected]
This short note is written in celebration of David Schmidt’s sixtieth birthday. He has now been active
in the program analysis research community for over thirty years and we have enjoyed many interactions with him. His work on characterising simulations between Kripke structures using Galois
connections was particularly influential in our own work on using probabilistic abstract interpretation to study Larsen and Skou’s notion of probabilistic bisimulation. We briefly review this work and
discuss some recent applications of these ideas in a variety of different application areas.
1
Introduction
Since his earliest contributions on state transition machines for lambda calculus expressions [13], David
Schmidt has been at the forefront of research in programming language theory, particularly program
analysis.
His work on program analysis started through his collaboration with Neil Jones. No doubt partly
inspired by Patrick and Radhia Cousot’s work on abstract interpretation [2], he has made a study of
various aspects of Galois connections. An early contribution was [7]. A later example, which was
influential in our own work, was [15] where he shows how to characterise simulation relations using
Galois Connections.
The early work of our group was also inspired by the Cousots [1]. Over the last fourteen years,
we have been working on the analysis of probabilistic and quantitative programming languages and
systems [12]. This has led to a framework called probabilistic abstract interpretation (PAI). Rather than
using lattices and Galois Connections, PAI uses Hilbert Spaces and Moore-Penrose Pseudo Inverses.
Inspired by [15], we have used PAI to characterise probabilistic bisimulation [8, 10]; we demonstrated
the probabilistic analogue of Dave’s earlier results which amounted to characterising Larsen and Skou’s
probabilistic bisimulation [5] using Moore-Penrose Pseudo Inverses. A major feature of our approach is
that it becomes natural to introduce a notion of approximate bisimulation – this has proved to be very
useful in studies of language-based security [9].
The author first met David Schmidt nearly thirty years ago. He visited Imperial College whilst
developing the work in [7]; his visit coincided with our own work on the ideas in [1]. We found a lot
to talk about and it was the start of many further interactions. In addition to his scientific contributions,
he has always given time to developing text books; his [14] was used to educate several generations of
Imperial students in the principles of programming language design. The author has also worked with
Dave on many programme committees; perhaps the high point was when they were co-General Chairs
of POPL in 2001. Dave has had a long and successful career and we wish him many more years. The
author would like to add his congratulations to Dave on this important milestone.
A. Banerjee, O. Danvy, K.-G. Doh, J. Hatcliff (Eds):
David A. Schmidt’s 60th Birthday Festschrift
EPTCS 129, 2013, pp. 337–340, doi:10.4204/EPTCS.129.20
2
Simulation and Galois Connections
In [15], Dave addresses, inter alia, the question of characterising simulation relations between Kripke
structures using Galois connections.
Recall that we define (L, α , γ , M) to be a Galois connection between the complete lattices (L, ⊑) and
(M, ⊑) if and only if
α : L → M and γ : M → L are monotone functions
that satisfy:
γ ◦ α ⊒ idL
α ◦ γ ⊑ idM
For Kripke structures C = ⟨Σ_C, →_C, I_C⟩ and A = ⟨Σ_A, →_A, I_A⟩, a binary relation R ⊆ Σ_C × Σ_A is a simulation of C by A (C ⊳_R A) if, for every c ∈ Σ_C, a ∈ Σ_A:
if c R a and c →_C c′ then ∃a′ ∈ Σ_A [a →_A a′ and c′ R a′].
In the framework studied in [15], the concrete Kripke structures are often infinite state structures
representing programs and the abstract Kripke structures are some program analysis. In this setting,
we should abstract sets of states of the concrete structure to a single state in the abstract structure.
Given a Galois connection, (α : P(ΣC ) → ΣA , γ : ΣA → P(ΣC )), we can construct the relation, R(α ,γ ) ⊆
P(ΣC ) × ΣA :
S R(α ,γ ) a if and only if α (S) ⊑A a
This can be shown to be the suitable basis for a simulation relation.
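For illustration, here is a toy example of our own (not taken from [15]): take concrete states to be integers, abstract values to be signs, and check membership in R(α,γ) directly from its definition, since the relation only needs α and the abstract order.

```python
# Toy illustration: sign abstraction of sets of integers (hypothetical example).
# Abstract lattice: bot <= neg, zero, pos <= top.
ELEMS = ["bot", "neg", "zero", "pos", "top"]
LEQ = {("bot", a) for a in ELEMS} | {(a, a) for a in ELEMS} \
      | {(a, "top") for a in ["neg", "zero", "pos"]}

def leq(a, b):                       # the abstract order ⊑_A
    return (a, b) in LEQ

def alpha(S):                        # α : P(Z) -> A, best abstract description of S
    if not S:
        return "bot"
    signs = {"neg" if x < 0 else "zero" if x == 0 else "pos" for x in S}
    return signs.pop() if len(signs) == 1 else "top"

def related(S, a):                   # S R_(α,γ) a  iff  α(S) ⊑_A a
    return leq(alpha(S), a)

print(related({1, 2, 3}, "pos"))     # True:  α({1,2,3}) = pos ⊑ pos
print(related({-1, 2}, "pos"))       # False: α({-1,2}) = top ⋢ pos
print(related({-1, 2}, "top"))       # True
```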
Whilst [15] achieves much more than described here, it was these ideas that inspired our own work
on probabilistic bisimulation (originally introduced by Larsen and Skou [5]) for reactive systems (also
called fully probabilistic systems):
Definition 1 A probabilistic bisimulation is an equivalence relation ∼b on states of a probabilistic transition system satisfying for all actions a ∈ A:
p ∼b q and p →a π ⇔ q →a ρ and π ∼b ρ .
where π and ρ are distributions of states.
3
Probabilistic Bisimulation and Moore-Penrose Pseudo Inverses
In [10, 8] we introduced an approximate version of bisimulation and confinement where the approximation can be used as a measure ε for the information leakage of the system under analysis. We represented
probabilistic transition systems by linear operators, i.e. by their transition matrices M. In the case
of probabilistic programs and systems these matrices M are the usual well known stochastic matrices
which are the generators of the corresponding Markov chains (for details see [10, 8]).
We then showed that two systems M1 and M2 are bisimilar if there exist simplified, or abstracted,
versions of M1 and M2 , represented by matrices M#1 and M#2 , such that M#1 = M#2 . In the probabilistic
abstract interpretation setting that we use, a bounded linear operator and its Moore-Penrose Pseudo
Inverse are the analogue of the adjoined pair of monotonic functions in a Galois insertion. The abstract
systems are obtained by lumping states, i.e. by identifying each concrete state si with a class C j of states
which are all behavioural equivalent to each other.
Concretely, we compute this via n × m matrices K (where n is the number of concrete states and m
the number of abstract classes) with Kij = 1 iff si ∈ C j and 0 otherwise. We refer to such matrices which
have exactly one entry 1 in each row while all other entries are 0 as classification matrices, and denote
the set of all classification matrices by K . The abstract systems are then given by M#i = K†i Mi Ki with
Ki some classification matrix and † constructing the so called Moore-Penrose pseudo-inverse – in the
case of classification matrices K† can be constructed as the row-normalised transpose of K.
The problem of showing that two systems M1 and M2 are behaviourally equivalent, i.e. are (probabilistically) bisimilar, is now translated into finding two classification matrices Ki ∈ K such that
M#1 = K†1 M1 K1 = K†2 M2 K2 = M#2 .
In case that two systems are not bisimilar we can still define a quantity ε which describes how
(non-)bisimilar the two systems are. This ε is formally defined in terms of the norm of a linear operator
representing the partition induced by the ‘minimal’ bisimulation on the set of the states of a given system,
i.e. the one minimising the observational difference between the system’s components (see again [10]
for further details, in particular regarding labeled probabilistic transition systems):
Definition 2 Let M₁ and M₂ be the matrix representations of two probabilistic transition systems. We say that M₁ and M₂ are ε-bisimilar, denoted by M₁ ∼_b^ε M₂, iff
$$\inf_{K_1, K_2 \in \mathcal{K}} \|K_1^\dagger M_1 K_1 - K_2^\dagger M_2 K_2\| = \varepsilon,$$
where ‖·‖ denotes an appropriate norm, e.g., the supremum norm ‖·‖_∞.
In [10] we show that, when ε = 0 this gives the standard notion of probabilistic bisimulation.
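A small NumPy sketch of these constructions may help. The transition matrices below are our own toy examples, and the quantity computed is for one fixed pair of classification matrices rather than the infimum over all of them; the pseudo-inverse of a classification matrix is taken as its row-normalised transpose, as described above.

```python
# Sketch: abstracting a probabilistic transition system by lumping states,
# using a classification matrix K and its Moore-Penrose pseudo-inverse K†.
import numpy as np

def pinv_classification(K):
    Kt = K.T.astype(float)
    return Kt / Kt.sum(axis=1, keepdims=True)    # row-normalised transpose

def abstract(M, K):
    return pinv_classification(K) @ M @ K        # M# = K† M K

# Toy 4-state chain in which states {0,1} and {2,3} behave alike.
M1 = np.array([[0.00, 0.50, 0.25, 0.25],
               [0.50, 0.00, 0.25, 0.25],
               [0.25, 0.25, 0.00, 0.50],
               [0.25, 0.25, 0.50, 0.00]])
K = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])    # lump {0,1} -> class 0, {2,3} -> class 1

print(abstract(M1, K))                            # 2x2 lumped chain [[0.5, 0.5], [0.5, 0.5]]

# A perturbed system and the resulting observational difference for this choice of K
# (supremum norm of the difference of the abstracted matrices).
M2 = M1 + 0.01 * np.array([[0, 1, -1, 0], [1, 0, 0, -1], [-1, 0, 0, 1], [0, -1, 1, 0]])
eps = np.abs(abstract(M1, K) - abstract(M2, K)).max()
print(eps)
```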
4
Conclusion
This short note has sketched some early work by David which forms part of his deep study of the use
of Galois connections and relations in program analysis. Our own work on characterising probabilistic
bisimulation using probabilistic abstract interpretation has found a number of applications, including:
• the detection and removal of timing channels in probabilistic transition systems [11] – we study a
concept called probabilistic time bisimilarity and use it to detect timing channels;
• the detection of sub-communities in social media [6] – we evaluate a number of algorithms including one using the notion of stability from [3] which effectively lumps nodes together if their
mutual interactions are “stronger” than interactions outside the group; and
• the abstraction of stochastic and Bayesian games to provide decision support in cyber security [4] –
where we hope to apply probabilistic abstract interpretation directly to the underlying probabilistic
transition systems in the games, thereby developing a principled way of reducing the state spaces
to achieve tractability of game solutions.
We look forward to discussing some of this work with David in the future but, in the meantime, reiterate
our best wishes on this important anniversary.
5
Acknowledgements
Much of the work discussed above was done in collaboration with Alessandra Di Pierro and Herbert
Wiklicky. More recently, I have enjoyed working on the application of these ideas to other areas with
Erwan Le Martelot and Pasquale Malacaria.
References
[1] Geoffrey L. Burn, Chris Hankin & Samson Abramsky (1986): Strictness Analysis for Higher-Order Functions. Sci. Comput. Program. 7(3), pp. 249–278, doi:10.1016/0167-6423(86)90010-9.
[2] Patrick Cousot & Radhia Cousot (1977): Abstract Interpretation: A Unified Lattice Model for Static
Analysis of Programs by Construction or Approximation of Fixpoints. In: POPL, ACM, pp. 238–252,
doi:10.1145/512950.512973.
[3] Jean-Charles Delvenne, Sophia Yaliraki & Mauricio Barahona (2010): Stability of graph communities across
time scales. Proc. Nat. Acad. Sci. 107(29), pp. 12755–12760, doi:10.1073/pnas.0903215107.
[4] Chris Hankin & Pasquale Malacaria (2013): Payoffs, Intensionality and Abstraction in Games. In: Computation, Logic, Games, and Quantum Foundations - The Many Facets of Samson Abramsky, Lecture Notes in
Computer Science 7860, Springer, doi:10.1007/978-3-642-38164-5-6.
[5] Kim Guldstrand Larsen & Arne Skou (1989): Bisimulation Through Probabilistic Testing. In: POPL, ACM,
pp. 344–352, doi:10.1145/75277.75307.
[6] Erwan Le Martelot & Chris Hankin (2013): Fast Multi-Scale Detection of Relevant Communities in LargeScale Networks. Computer Journal, doi:10.1093/comjnl/bxt002.
[7] Austin Melton, David A. Schmidt & George E. Strecker (1986): Galois Connections and Computer
Science Applications. In: CTCS, Lecture Notes in Computer Science 240, Springer, pp. 299–312,
doi:10.1007/3-540-17162-2-130.
[8] Alessandra Di Pierro, Chris Hankin & Herbert Wiklicky (2003): Quantitative Relations and Approximate
Process Equivalences. In: CONCUR, Lecture Notes in Computer Science 2761, Springer, pp. 498–512,
doi:10.1007/978-3-540-45187-7-33.
[9] Alessandra Di Pierro, Chris Hankin & Herbert Wiklicky (2004): Approximate Non-interference. Journal of
Computer Security 12(1), pp. 37–82.
[10] Alessandra Di Pierro, Chris Hankin & Herbert Wiklicky (2005): Measuring the confinement of probabilistic
systems. Theor. Comput. Sci. 340(1), pp. 3–56, doi:10.1016/j.tcs.2005.03.002.
[11] Alessandra Di Pierro, Chris Hankin & Herbert Wiklicky (2011): Probabilistic timing covert channels: to
close or not to close? Int. J. Inf. Sec. 10(2), pp. 83–106, doi:10.1007/s10207-010-0107-0.
[12] Alessandra Di Pierro & Herbert Wiklicky (2000): Concurrent constraint programming: towards probabilistic
abstract interpretation. In: PPDP, ACM, pp. 127–138, doi:10.1145/351268.351284.
[13] David A. Schmidt (1980): State transition machines for lambda calculus expressions. In: SemanticsDirected Compiler Generation, Lecture Notes in Computer Science 94, Springer, pp. 415–440,
doi:10.1007/3-540-10250-7-32.
[14] David A. Schmidt (1986): Denotational semantics: a methodology for language development. Allyn and
Bacon.
[15] David A. Schmidt (1999): Binary relations for abstraction and refinement. In: Workshop on Refinement and
Abstraction.
An Online Development Environment for
Answer Set Programming
Elias Marcopoulos¹, Christian Reotutar², and Yuanlin Zhang³
¹ Department of Computer Science, Tufts University, USA
² Department of Computer Science, Johns Hopkins University, USA
³ Texas Tech University, Lubbock, TX, USA
[email protected], [email protected], [email protected]
arXiv:1707.01865v1 [cs.OH] 20 Jun 2017
Abstract. Recent progress in logic programming (e.g., the development
of the Answer Set Programming paradigm) has made it possible to teach
it to general undergraduate and even high school students. Given the
limited exposure of these students to computer science, the complexity
of downloading, installing and using tools for writing logic programs
could be a major barrier for logic programming to reach a much wider
audience. We developed an online answer set programming environment
with a self contained file system and a simple interface, allowing users to
write logic programs and perform several tasks over the programs.
1
Introduction
Answer Set Programming (ASP) [8] is becoming a dominating language in the
knowledge representation community [15,12] because it has offered elegant and
effective solutions not only to classical Artificial Intelligence problems but also
to many challenging application problems. Thanks to its simplicity and clarity
in both informal and formal semantics, Answer Set Programming provides a
“natural” modeling of many problems. At the same time, the fully declarative
nature of ASP also cleared a major barrier to teach logic programming, as the
procedural features of classical logic programming systems such as PROLOG are
taken as the source of misconceptions in students’ learning of Logic Programming
[16].
ASP has been taught to undergraduate students, in the course of Artificial
Intelligence, at Texas Tech for more than a decade. We believe ASP has become
mature enough to be a language for us to introduce programming and problem
solving to high school students. We have offered many sessions to students at
New Deal High School and a three week long ASP course to high school students
involved in the TexPREP program (http://www.math.ttu.edu/texprep/). In our
teaching practice, we found that ASP is well accepted by the students and the
students were able to focus on problem solving, instead of the language itself. The
students were able to write programs to answer questions about the relationships
(e.g., parent, ancestor) amongst family members and to find solutions for Sudoku
problems.
However, we have some major issues while using existing tools: installation
of the tools to computers at a lab or at home is complex, and the existing
tools are sensitive to the local settings of a computer. As a result, the flow of
teaching the class was often interrupted by the problems associated with the use
of the tools. Strong technical support needed for the management and use of the
tools is prohibitive for teaching ASP to general undergraduate students or K-12
students.
During our teaching practice, we also found the need for a more vivid presentation of the results of a logic program (more than just querying the program or
getting the answer sets of the program). We also noted observations in literature
that multimedia and visualization play a positive role in promoting students’
learning [9,3].
To overcome the issues related to software tool management and use, we have
designed and built an online development environment for Answer Set Programming. The environment provides an editor for users to edit their programs, an
online file system for them to store and retrieve their program and a few simple
buttons allows querying the program inside the editor or getting answer sets of
the program. The environment uses SPARC [2] as the ASP language. SPARC
is designed to further facilitate the teaching of logic programming by introducing sorts (or types) which simplify the difficult programming concept of domain
variables in classical ASP systems such as Clingo [7] and help programmers to
identify errors early thanks to sort information. Initial experiment of teaching
SPARC to high school students is promising [18]. To promote students’ interests
and learning, our environment also introduces predicates for students to present
their solutions to problems in a more visually straightforward and exciting manner (instead of the answer sets which are simply a set of literals). The URL for
the online environment is http://goo.gl/ukSZET.
The rest of the paper is organized as follows. Section 2 recalls SPARC. The
design and implementation of the online environment are presented in Section 3.
The design and rendering of the drawing and animation predicates are presented
in Section 4. The paper is concluded by Section 5.
2
Answer Set Programming Language – SPARC
SPARC is an Answer Set Programming language which allows for the explicit
representation of sorts. A SPARC program consists of three sections: sorts, predicates and rules. We will use the map coloring problem as an example to illustrate
SPARC: can the USA map be colored using red, green and blue such that no
two neighboring states have the same color?
The first step is to identify the objects and their sorts in the problem. For
example, the three colors are important and they form the sort of color for this
problem. In SPARC syntax, we use #color = {red, green, blue} to represent the
objects and their sort. The sorts section of the SPARC program is
sorts % the keyword to start the sorts section
#color = {red,green,blue}.
#state = {texas, colorado, newMexico, ......}.
The next step is to identify relations in the problem and declare in the predicates section the sorts of the parameters of the predicates corresponding to the
relations. The predicates section of the program is
predicates % the keyword to start the predicates section
% neighbor(X, Y) denotes that state X is a neighbor of state Y.
neighbor(#state, #state).
% ofColor(X, C) denotes that state X has color C
ofColor(#state, #color).
The last step is to identify the knowledge needed in the problem and translate
it into rules. The rules section of a SPARC program consists of rules in the typical
ASP syntax. The rules section of a SPARC program will include the following.
rules % the keyword to start the rules section
% Texas is a neighor of Colorado
neighbor(texas, colorado).
% The neighbor relation is symmetric
neighbor(S1, S2) :- neighbor(S2, S1).
% Any state has one of the three colors: red, green and blue
ofColor(S, red) | ofColor(S, green) | ofColor(S, blue).
% No two neighbors have the same color
:- ofColor(S1, C), ofColor(S2, C), neighbor(S1, S2), S1 != S2.
3 Online Development Environment Design and Implementation
3.1 Environment Design
The principle of the design is that the environment, with the simplest possible
interface, should provide full support, from writing programming to getting the
answer sets of the program, for teaching Answer Set Programming.
The design of the interface is shown in Figure 1. It consists of 3 components:
1) the editor to edit a program, 2) the file navigation system and 3) the operations
over the program.
Fig. 1. User Interface of the System (the red numbers indicate the areas/components
in the interface)
One can edit a SPARC program directly inside the editor which has syntax
highlighting features (area 1). The file inside the editor can be saved by clicking
the “Save” button (2.4). The files and folders are displayed in the area 2.1. The
user can traverse them using the mouse like traversing a file system on a typical
operating system. Files can be deleted and their names can be changed. To
create a folder or a file, one clicks the “New” button (2.3). The panel showing
files/folders can be toggled by clicking the “Directory” button (2.2) (so that users
can have more space for the editing or result area (4)). To ask queries to the
program inside the editor, one can type a query (a conjunction of literals) in the
text box (3.1) and then press the “Submit” button (3.1). The answer to the query
will be shown in area 4. For a ground query (i.e., a query without variables), the
answer is yes if every literal in the query is in every answer set of the program,
is no if the complement (p and ¬p, where p is an atom, are complements) of
some literal is in every answer set of the program, and unknown otherwise. For instance, for the map coloring program of Section 2, the answer to the query neighbor(texas, colorado) is yes, while the answer to ofColor(texas, red) is unknown, since texas is colored red in some answer sets but not in all of them. An answer to a query with variables is a set of ground terms for the variables in the query such that the answer to the query resulting from replacing the variables by the corresponding ground terms is yes. Formal definitions of queries and answers to queries can be found in Section 2.2 of [8]. To see the answer sets of a program, click the “Get Answer Sets” button (3.2). When the “Execute” button (3.3) is clicked, the drawing and animation atoms in the answer set of the program will be rendered in the display area (4). (For now, when there is more than one answer set, the environment displays an error.)
A user can only access the full interface discussed above after login. The user
will log out by clicking the “Logout” button (5). Without login, the interface is
much simpler, with all the file navigation related functionalities invisible. Such
an interface is convenient for testing or doing a quick demo of a SPARC program.
3.2
Implementation
The architecture of the online environment follows that of a typical web
application. It consists of a front end component and a back end component.
The front end provides the user interface and sends users’ request to the back
end, and the back end fulfills the request and returns results, if needed, back
to the front end. After getting the results from the back end, the front end will
update the interface correspondingly (e.g., display query answers to the result
area). Details about the components and their interactions are given below.
Front End. The front end is implemented by HTML and JavaScript. The editor
in our front end uses ACE which is an embeddable (to any web page) code
editor written in JavaScript (https://ace.c9.io/). The panel for file/folder
navigation is based on JavaScript code by Yuez.me.
Back End and Interactions between the Front End and the Back End.
The back end is mainly implemented using PHP and is hosted on the server side.
It has three components: 1) file system management, 2) inference engine and 3)
drawing/animation rendering.
The file system management uses a database to manage the files and
folders of all users of the environment. The ER diagram of the system is shown
below:
Fig. 2. The ER diagram for file/folder management. Most names have a straightforward meaning. The Folderurl and Fileurl above refer to the full path of the folder/file
in the file system.
The SPARC files are saved in the server file system, not in a database table.
The sharing is managed by the sharing information in the relevant database
tables. In our implementation, we use mySQL database system.
The file management system gets request such as creating a new file/folder,
deleting a file, saving a file, getting the files and folders, etc, from the front end.
It then updates the tables and local file system correspondingly and returns the
needed results to the front end. After the front end gets the results, it will update
the graphical user interface (e.g., display the program returned from the back
end inside the editor) if needed.
The inference engine gets the request of answering a query or obtaining
all answer sets of a program. It calls the SPARC solver [2] to find all answer
sets. Then in terms of these answer sets, it returns requested information to the
front end. After the front end gets the response from the back end, it will show
the result in the display area of the web page.
Details of the design and implementation of drawing/animation rendering can be found in Section 4.2.
4 Drawing and Animation Design and Implementation
4.1 Drawing and Animation Design
To allow programmers to create drawings and animations using SPARC, we
simply design two predicates, called display predicates: one for drawing and one
for animation. The atoms using these predicates are called display atoms. To use
these atoms in a SPARC program, a programmer needs to include sorts (e.g.,
sort of colors, fonts and numbers) and the corresponding predicate declaration
which are predefined. In the following, we only focus on the atoms and their use
for drawing and animation.
Drawing. A drawing predicate is of the form: draw(c) where c is called a drawing command. Intuitively the atom containing this predicate draws texts and
graphics as instructed by the command c. By drawing a picture, we mean a
shape is drawn with a style. We define a shape as either text or a geometric line
or curve. Also, a style specifies the physical visual properties of the shape it is
applied to. For example, visual properties include color, thickness, and font. For
modularity, we introduce style names, which are labels that can be associated
with different styles so that the style may be reused without being redefined. A
drawing is completed by associating this shape and style to a certain position in
the canvas, which is simply the display board. Note, the origin of the coordinate
system is at the top left corner of the canvas.
Here is an example of drawing a red line from point (0, 0) to (2, 2). First,
we introduce a style name redline and associate it to the red color by the
style command line color(redline, red). With this defined style we then
draw the red line by the shape command draw line(redline, 0, 0, 2, 2).
Style commands and shape commands form all drawing commands. The SPARC
program rules to draw the given line are
draw(line color(redline, red)).
draw(draw line(redline, 0, 0, 2, 2)).
The style commands of our system include the following:
linewidth(sn, t) specifies that lines drawn with style name sn should be
drawn with a line thickness t. textfont(sn, fs, ff) specifies that text drawn
with style name sn should be drawn with a font size fs and a font family ff.
linecap(sn, c) specifies that lines drawn with style name sn should be drawn
with a capping c, such as an arrowhead. textalign(sn, al) specifies that text
drawn with style name sn should be drawn with an alignment on the page al.
line color(sn, c) specifies that lines drawn with style name sn should be
drawn with a color c. textcolor(sn, c) specifies that text drawn with style
name sn should be drawn with a color c.
The shape commands include the following:
draw line(sn, xs, ys, xe, ye) draws a line from starting point (xs, ys)
to ending point (xe, ye) with style name sn; draw quad curve(sn, xs, ys,
bx, by, xe, ye) draws a quadratic Bezier curve, with style name sn, from the
current point (xs, ys) to the end point (xe, ye) using the control point (bx,
by); draw bezier curve(sn, xs, ys, b1x, b1y, b2x, b2y, xe, ye) draws
a cubic Bezier curve, using style name sn, from the current point (xs, ys) to
the end point (xe, ye) using the control points (b1x, b1y) and (b2x, b2y);
draw arc curve(sn, xs, ys, r, sa, se) draws an arc using style name sn
and the arc is centered at (xs, ys) with radius r starting at angle sa and ending
at angle se going in the clockwise direction; draw text(sn, x, xs, ys) prints
value of x as text to screen from point (xs, ys) using style name sn.
Animation. A frame, a basic concept in animation, is defined as a drawing.
When a sequence of frames, whose content is normally relevant, is shown on the
screen in rapid succession (usually 24, 25, 30, or 60 frames per second), a fluid
animation is seemingly created. To design an animation, a designer will specify
the drawing for each frame. Given that the order of frames matters, we give
a frame a value equal to its index in a sequence of frames. We introduce the
animate predicate animate(c, i) which indicates a desire to draw a picture at
the ith frame using drawing command c and i starts from 0. The frames will be
shown on the screen at a rate of 60 frames per second, and the ith frame will be
showed at time (i ∗ 1/60) (in a unit of second) from the start of the animation
for a duration of 1/60 of a second.
As an example, we would like to elaborate on an animation where a red box
(with side length of 10 pixels) moves from the point (1, 70) to (200, 70). We will
create 200 frames with the box (whose bottom left corner is) at point (i + 1, 70)
in ith frame.
Let the variable I be of a sort called frame, defined from 0 to some large
number. In every frame I, we specify the drawing styling redline:
animate(line color(redline, red), I).
To make a box at the I th frame, we need to draw the box’s four sides using
the style associated with style name redline. The following describes the four
sides of a box at any frame: bottom - (I +1, 70) to (I +1+10, 70), left - (I +1, 70)
to (I + 1, 60), top - (I + 1, 60) to (I + 1 + 10, 60) and right - (I + 1 + 10, 60) to
(I + 1 + 10, 70). Hence we have the rules
animate(draw line(redline,I+1,70,I+11,70),I).
animate(draw line(redline,I+1,70,I+1,60),I).
animate(draw line(redline,I+1,60,I+11,60),I).
animate(draw line(redline,I+11,60,I+11,70),I).
Note that the drawing predicate produces the intended drawing throughout
all the frames creating a static drawing. On the other hand, the animate predicate
produces a drawing only for a specific frame.
4.2
Algorithm and Implementation
We first define our input and output: The input to the main algorithm is a
SPARC program P . The output is an HTML5 program segment containing a
canvas element which will be rendered by an Internet browser. A key part of our
algorithm is to render the display atoms (specified in the answer set of P ) using
canvas methods.
HTML5 canvas element is used to draw graphics via scripting using JavaScript.
In the following, we will use an example to demonstrate how a drawing command
is implemented by JavaScript code using canvas methods. Consider again
draw(line color(redline, red)).
draw(draw line(redline, 0, 0, 2, 2)).
When we render the shape command draw line, we need to know the meaning of the redline style. From the style command line color, we know it
means red. We first create an object ctx for a given canvas (simply identified
by a name) where we would like to render the display atoms. The object offers methods to render the graphics in the canvas. We then use the following
JavaScript code to implement the shape command to draw a line from (0,0) to
(2,2):
ctx.beginPath();
ctx.moveTo(0,0);
ctx.lineTo(2,2);
ctx.stroke();
To make the line in red color, we have to insert the following JavaScript
statement before the ctx.stroke() in the code above:
ctx.strokeStyle="red";
The meaning of the canvas methods in the code above is straightforward. We
don’t explain them further. Now we are in a position to present the algorithm.
Algorithm:
– Input: a SPARC program P with display predicates.
– Output: a HTML program segment which allows the rendering of the display
atoms in the answer set of P in an Internet Browser.
– Steps:
1. Call SPARC solver to obtain an answer set S of P .
2. Let script be an array of empty strings. script[i] will hold the JavaScript
statements to render the graphics for ith frame.
3. For each display atom a in S,
• If any error is found in the display atoms, present an error to the
user detailing the incorrect usage of the atoms.
• If a contains a shape command, let its style name be sn, find all style
commands defining sn. For each style command, translate it into the
corresponding JavaScript code Ps on modifying the styling of the
canvas pen. Then translate the shape command into JavaScript code
Pr that renders that command. Let Pd be the proper combination
of Ps and Pr to render a.
∗ if a is an drawing atom, append Pd to script[i] for every frame i
of the animation.
∗ if a is an animation atom, let i be the frame referred to in a.
Append Pd to script[i].
4. Formulate the output HTML program segment as follows:
• add, to P , the canvas element <canvas id="myCanvas" width="500"
height="500"> </canvas>.
• add, to P , the script element <script> </script> whose content
includes
∗ the JavaScript code to associate the drawings in this script element with the canvas element above.
∗ an array drawings initialized by the content of script array.
∗ Javascript code executing the statements in drawings[i] when
the time to show frame i starts.
End of algorithm.
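As a rough, self-contained illustration of Steps 2–3, the following Python sketch is our own simplification (not the actual server-side Java implementation); it handles only the line color and draw line commands, written here with underscores as identifiers.

```python
# Simplified sketch: turn display atoms from an answer set into per-frame
# JavaScript drawing statements, for line_color / draw_line commands only.
def render(atoms, n_frames):
    styles = {}                                     # style name -> colour
    script = ["" for _ in range(n_frames)]          # script[i]: JS for frame i
    for name, args, frame in atoms:                 # first pass: collect style commands
        if name == "line_color":
            styles[args[0]] = args[1]
    for name, args, frame in atoms:                 # second pass: shape commands
        if name == "draw_line":
            sn, xs, ys, xe, ye = args
            js = (f'ctx.beginPath(); ctx.moveTo({xs},{ys}); ctx.lineTo({xe},{ye}); '
                  f'ctx.strokeStyle="{styles.get(sn, "black")}"; ctx.stroke();')
            # draw/1 atoms (frame is None) apply to every frame; animate/2 to one frame
            targets = range(n_frames) if frame is None else [frame]
            for i in targets:
                script[i] += js
    return script

# Example: the red-line drawing atoms from Section 4.1, rendered as a one-frame script.
atoms = [("line_color", ("redline", "red"), None),
         ("draw_line", ("redline", 0, 0, 2, 2), None)]
print(render(atoms, 1)[0])
```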
Implementation. The “Execute” button in the webpage (front end) of the online SPARC environment is for programmers to render the display atoms in the
answer set of their programs. The Java program implementing our algorithm
above is at the server side. When the “Execute” button is clicked, the programmer’s SPARC program will be sent to the server side and the algorithm will be
invoked with the program. The output (i.e., the canvas and script elements) of
the algorithm will be sent back to the front end and the JavaScript in the front
end will catch the output and insert it into the result display area of the front
web page (See Figure 1). The Internet browser will then automatically render
the updated web page and the drawing or animation will be rendered as a result.
Example SPARC programs with drawing and animation can be found at
https://goo.gl/nLD4LD.
5
Discussion and Related Work
As ASP has been applied to more and more problems, the importance of ASP
software development tools has been realized by the community. Some integrated
development environment (IDE) tools, e.g., APE [19], ASPIDE[6], iGROM[10]
and SeaLion [17] have previously been developed. They provide a graphical user
interface for users to carry out a sequence of tasks from editing an ASP program
to debugging that program, easing the use of ASP significantly. However, the
target audience of these tools is experienced software developers. Compared with
the existing environments, our environment is online, self contained (i.e., fully
independent of the users’ local computers) and provides a very simple interface,
focusing on teaching only. The interface is operable by any person who is able
to use a typical web site and traverse a local file system.
As for drawing and animation, our work is based on the work of Cliffe et
al. [4]. They are the first to introduce, to ASP, a design of display predicates
and to render drawings and animations using the program ASPviz. Our drawing commands are similar to theirs. The syntax of their animation atoms is not
clear from their paper. It seems (from examples on github at goo.gl/kgUzJK
accessed on 4/30/2017) that multiple answer sets may be needed to produce an
animation. In our work we use a design where the programmers are allowed to
draw at any frame (specifying a range of the frames) and the real time difference between two neighboring frames is 1/60 second. Another clear difference is
that our implementation is online while theirs is a standalone software. A more
recent system, Kara, a standalone software by Kloimullner et al. [11], deals with
drawing only. Another system ARVis [1] offers method to visualize the relations
between answer sets of a given program. We also note an online environment for
IDP (which is a knowledge representation paradigm close to ASP) by Dasseville
and Janssens [5]. It also utilizes a very simple interface for the IDP system and
allows drawing and animation using IDP through IDPD3 (a library to visualize
models of logic theories) by Lapauw et al. [14]. In addition to drawing and animation, IDPD3 allows users’ interaction with the IDP program (although in a
limited manner in its current implementation), which is absent from most other
systems including ours. Our environment is also different from the online IDP
environment in that ours targets ASP and offers an online file system. Both
DLV and Clingo offer online environments (http://asptut.gibbi.com/ and
http://potassco.sourceforge.net/clingo.html respectively) which provide
an editor and a window to show the output of the execution of dlv and clingo
command, but provide no other functionalities. We also noted the SWISH
(http://lpsdemo.interprolog.com) which offers an online environment for
Prolog and a more recent computer language Logic-based Production Systems
[13]. A unique functionality of our online environment is to query a program.
It allows to teach (particular to general students) basics of Logic Programming
without first touching the full concept of answer sets.
In our earlier outreach to a local high school, we needed an experienced student to communicate with the school lab several times before the software could finally be installed on their computers. A carefully drafted document was prepared for students to install the software on their own computers, yet unexpected issues still arose during lab sessions or when students used or installed the software at home. These difficulties made it almost impossible to carry out the outreach successfully. With the availability of our online environment, we
only need to focus on the teaching content of ASP without worrying about the
technical support. We hope our environment, and other online environments, for
knowledge representation systems will expand the teaching of knowledge repre-
sentation to a much wider audience in the future. The drawing and animation features are new additions to the online environment and were not tested in high school teaching. We used drawing and animation in a senior-year course – special topics in AI – in spring 2017. Students demonstrated interest in drawing and animation and were able to produce interesting animations. We also
noted that it can be very slow for ASP solvers to produce the answer set of an
animation program when the ground program is big.
In the future, it will be interesting to have a more rigorous evaluation of the
online environment.
6 Acknowledgments
The authors were partially supported by the National Science Foundation (grant#
CNS-1359359). We thank Evgenii Balai, Mbathio Diagne, Michael Degraw, Peter Lee, Maede Rayatidamavandi, Crisel Suarez, Edward Wertz and Shao-Lon
Yeh for their contribution to the implementation of the environment. We thank
Michael Gelfond and Yinan Zhang for their input and help.
References
1. Ambroz, T., Charwat, G., Jusits, A., Wallner, J.P., Woltran, S.: ARVis: Visualizing
relations between answer sets. In: International Conference on Logic Programming
and Nonmonotonic Reasoning. pp. 73–78. Springer (2013)
2. Balai, E., Gelfond, M., Zhang, Y.: Towards answer set programming with sorts. In:
Logic Programming and Nonmonotonic Reasoning, 12th International Conference,
LPNMR 2013, Corunna, Spain, September 15-19, 2013. Proceedings. pp. 135–147
(2013), http://dx.doi.org/10.1007/978-3-642-40564-8 14
3. Clark, D., Nelson, B., Sengupta, P., D'Angelo, C.: Rethinking science learning
through digital games and simulations: Genres, examples, and evidence. In: Learning science: Computer games, simulations, and education workshop sponsored by
the National Academy of Sciences, Washington, DC (2009)
4. Cliffe, O., De Vos, M., Brain, M., Padget, J.: Aspviz: Declarative visualisation and
animation using answer set programming. In: International Conference on Logic
Programming. pp. 724–728. Springer (2008)
5. Dasseville, I., Janssens, G.: A web-based ide for idp. arXiv preprint
arXiv:1511.00920 (2015)
6. Febbraro, O., Reale, K., Ricca, F.: ASPIDE: integrated development environment for answer set programming. In: Logic Programming and Nonmonotonic
Reasoning - 11th International Conference, LPNMR 2011, Vancouver, Canada,
May 16-19, 2011. Proceedings. pp. 317–330 (2011), http://dx.doi.org/10.1007/
978-3-642-20895-9 37
7. Gebser, M., Kaufmann, B., Kaminski, R., Ostrowski, M., Schaub, T., Schneider, M.: Potassco: The potsdam answer set solving collection. Ai Communications
24(2), 107–124 (2011)
8. Gelfond, M., Kahl, Y.: Knowledge Representation, Reasoning, and the Design of
Intelligent Agents. Cambridge University Press (2014)
9. Guzdial, M.: Use of collaborative multimedia in computer science classes. ACM
SIGCSE Bulletin 33(3), 17–20 (2001)
10. iGROM: http://igrom.sourceforge.net/
11. Kloimüllner, C., Oetsch, J., Pührer, J., Tompits, H.: Kara: A system for visualising and visual editing of interpretations for answer-set programs. In: Applications
of Declarative Programming and Knowledge Management, pp. 325–344. Springer
(2013)
12. Kowalski, R.: Logic programming. Computational Logic, Volume 9 (Handbook of
the History of Logic) (2014)
13. Kowalski, R., Sadri, F.: Programming in logic without logic programming. Theory
and Practice of Logic Programming 16(03), 269–295 (2016)
14. Lapauw, R., Dasseville, I., Denecker, M.: Visualising interactive inferences with
idpd3. arXiv preprint arXiv:1511.00928 (2015)
15. McIlraith, S.: What’s hot in knowledge representation and reasoning. Talk in
the AAAI-12 SUBAREA SPOTLIGHTS TRACK on Knowledge Representation
(2011)
16. Mendelsohn, P., Green, T., Brna, P.: Programming languages in education: The
search for an easy start. Psychology of programming pp. 175–200 (1990)
17. Oetsch, J., Pührer, J., Tompits, H.: The SeaLion has landed: An IDE for answer-set
programming – preliminary report. In: Applications of Declarative Programming and
Knowledge Management, pp. 305–324. Springer (2013)
18. Reyes, M., Perez, C., Upchurch, R., Yuen, T., Zhang, Y.: Using declarative programming in an introductory computer science course for high school students. In:
Thirtieth AAAI Conference on Artificial Intelligence (2016)
19. Sureshkumar, A., De Vos, M., Brain, M., Fitch, J.: APE: an ansprolog* environment. Proc. SEA 7, 101–115 (2007)
| 2 |
The exp-log normal form of types
Decomposing extensional equality and representing terms compactly
Danko Ilik
arXiv:1502.04634v4 [cs.LO] 30 Jun 2016
Inria & LIX, Ecole Polytechnique
91128 Palaiseau Cedex, France
[email protected]
Abstract
Lambda calculi with algebraic data types lie at the core of functional
programming languages and proof assistants, but conceal at least
two fundamental theoretical problems already in the presence of
the simplest non-trivial data type, the sum type. First, we do not
know of an explicit and implemented algorithm for deciding the
beta-eta-equality of terms—and this in spite of the first decidability
results proven two decades ago. Second, it is not clear how to decide
when two types are essentially the same, i.e. isomorphic, in spite of
the meta-theoretic results on decidability of the isomorphism.
In this paper, we present the exp-log normal form of types—
derived from the representation of exponential polynomials via the
unary exponential and logarithmic functions—that any type built
from arrows, products, and sums, can be isomorphically mapped
to. The type normal form can be used as a simple heuristic for
deciding type isomorphism, thanks to the fact that it is a systematic
application of the high-school identities.
We then show that the type normal form allows to reduce the
standard beta-eta equational theory of the lambda calculus to a
specialized version of itself, while preserving the completeness of
equality on terms.
We end by describing an alternative representation of normal
terms of the lambda calculus with sums, together with a Coq-implemented converter into/from our new term calculus. The difference from the only other previously implemented heuristic for
deciding interesting instances of eta-equality, by Balat, Di Cosmo,
and Fiore, is that we exploit the type information of terms substantially, and this often allows us to obtain a canonical representation
of terms without performing sophisticated term analyses.
1. Introduction
The lambda calculus is a notation for writing functions. Be it simply-typed or polymorphic, it is also often presented as the core of modern
functional programming languages. Yet, besides functions as first-class objects, another essential ingredient of these languages is algebraic data types, which typing systems supporting only the →-type
and polymorphism do not model directly. A natural model for the
core of functional languages should at least include direct support
for the simplest case of variant types, sums, and of records, i.e. product
types. But, unlike the theory of the {→}-typed lambda calculus, the
theory of the {→, +, ×}-typed one is not all roses.
Canonicity of normal terms and η-equality A first problem is
canonicity of normal forms of terms. Take, for instance, the term
λxy.yx of type τ + σ → (τ + σ → ρ) → ρ, and three of its η-long
representations,
λx.λy.yδ(x, z.ι1 z, z.ι2 z)
λx.λy.δ(x, z.y(ι1 z), z.y(ι2 z))
λx.δ(x, z.λy.y(ι1 z), z.λy.y(ι2 z)),
where δ is a pattern matching construct, i.e. a case-expression
analysing the first argument, with branches of the pattern matching
given via the variable z in the second and third argument.
These three terms are all equal with respect to the standard
equational theory =βη of the lambda calculus (Figure 1), but why
should we prefer any one of them over the others to be a canonical
representative of the class of equal terms?
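For readers who prefer to see these as programs, here is a small illustration (ours, not taken from the accompanying development) of the three η-long forms as Coq functions over concrete sum types; all three are pointwise equal to fun x y => y x, provable by case analysis on x, yet they are syntactically different normal forms.
(* v1, v2, v3 render the three eta-long terms above; delta becomes match,
   iota_1/iota_2 become inl/inr. *)
Definition v1 {A B C : Type} : A + B -> (A + B -> C) -> C :=
  fun x y => y (match x with inl z => inl z | inr z => inr z end).
Definition v2 {A B C : Type} : A + B -> (A + B -> C) -> C :=
  fun x y => match x with inl z => y (inl z) | inr z => y (inr z) end.
Definition v3 {A B C : Type} : A + B -> (A + B -> C) -> C :=
  fun x => match x with
           | inl z => fun y => y (inl z)
           | inr z => fun y => y (inr z)
           end.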
Or, consider the following two terms of type (τ1 → τ2 ) →
(τ3 → τ1 ) → τ3 → τ4 + τ5 → τ2 (example taken from (Balat et al.
2004)):
λxyzu.x(yz)
λxyzu.δ(δ(u, x1 .ι1 z, x2 .ι2 (yz)), y1 .x(yy1 ), y2 .xy2 ).
These terms are βη-equal, but can one easily notice the equality? In
order to do so, since both terms are β-normal, one would need to do
non-trivial β- and η-expansions (see Example 2 in Section 4).
For the lambda calculus over the restricted language of types—
when the sum type is absent—these problems do not exist, since
β-normalization followed by an η-expansion is deterministic and
produces a canonical representative for any class of βη-equal terms.
Deciding =βη for that restricted calculus amounts to comparing
canonical forms up to syntactic identity.
In the presence of sums, we only have a notion of canonical interpretation of terms in the category of sheaves for the Grothendieck
topology over the category of constrained environments (Altenkirch
et al. 2001), as well as the sophisticated normal form of terms due to
Balat, Di Cosmo, and Fiore which is not canonical (unique) syntactically (Balat et al. 2004). Balat et al. also provide an implementation
of a type-directed partial evaluator that normalizes terms to their
normal form, and this represented up to now the only implemented
heuristic for deciding βη-equality—it is not a full decision procedure, because the normal forms are not canonical. We shall discuss
these and the other decidability results some more in Related work
of Section 5.
Treating full βη-equality is hard, even if, in practice, we often
only need to treat special cases of it, such as certain commuting
conversions.
2. The exp-log normal form of types
The trouble with sums starts already at the level of types. Namely,
when we consider types built from function spaces, products, and
disjoint unions (sums),
τ, σ ::= χi | τ → σ | τ × σ | τ + σ,
where χi are atomic types (or type variables), it is not always clear
when two given types are essentially the same one. More precisely,
it is not known how to decide whether two types are isomorphic (Ilik
2014). Although the notion of isomorphism can be treated abstractly
in Category Theory, in bi-Cartesian closed categories, and without
committing to a specific term calculus inhabiting the types, in the
language of the standard syntax and equational theory of lambda
calculus with sums (Figure 1), the types τ and σ are isomorphic
when there exist coercing lambda terms M : σ → τ and N : τ → σ
such that
Recognizing isomorphic types If we leave aside the problems of
canonicity of and equality between terms, there is a further problem
at the level of types that makes it hard to determine whether two
type signatures are essentially the same one. Namely, although for
each of the type languages {→, ×} and {→, +} there is a very
simple algorithm for deciding type isomorphism, for the whole of
the language {→, +, ×} it is only known that type isomorphism
is decidable when types are to be interpreted as finite structures,
and that without a practically implementable algorithm in sight (Ilik
2014).
The importance of deciding type isomorphism for functional
programming was recognized early on by Rittri (Rittri 1991),
who proposed to use it as a criterion for searching over a library
of functional subroutines. Two types being isomorphic means that
one can switch programs and data back and forth between the types
without loss of information. Recently, type isomorphisms have also
become popular in the community around homotopy type theory.
It is embarrassing that there are no algorithms for deciding type
isomorphism for such a ubiquitous type system. Finally, even
if finding an implementable decision procedure for the full type
language {→, +, ×} were hard, might we simply be able to cover
fragments that are important in practice?
λx.M (N x) =βη λx.x
and
λy.N (M y) =βη λy.y.
In other words, data/programs can be converted back and forth
between τ and σ without loss of information.
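As a concrete illustration (our own, standard example, not one from the paper), the currying isomorphism (τ × σ → ρ) ∼= (τ → σ → ρ) is witnessed by the following pair of Coq coercions, whose two round trips are βη-equal to the identity:
(* curry'/uncurry' witness the isomorphism; composing them either way is
   beta-eta equal to fun x => x, so no information is lost. *)
Definition curry' {A B C : Type} (f : A * B -> C) : A -> B -> C :=
  fun a b => f (a, b).
Definition uncurry' {A B C : Type} (g : A -> B -> C) : A * B -> C :=
  fun p => g (fst p) (snd p).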
The problem of isomorphism is in fact closely related to the
famous Tarski High School Identities Problem (Burris and Yeats
2004; Fiore et al. 2006). What is important for us here is that types
can be seen as just arithmetic expressions: if the type τ → σ is
denoted by the binary arithmetic exponentiation σ τ , then every
type ρ denotes at the same time an exponential polynomial ρ. The
difference with ordinary polynomials is that the exponent can now
also contain a (type) variable, while exponentiation in ordinary
polynomials is always of the form σ^n for a concrete n ∈ N, i.e.
σ^n = σ × · · · × σ (n times). Moreover, we have that
τ ∼= σ implies N+ |= τ = σ,
that is, type isomorphism implies that arithmetic equality holds for
any substitution of variables by positive natural numbers.
This hence provides a procedure for proving non-isomorphism:
given two types, prove they are not equal as exponential polynomials,
and that means they cannot possibly be isomorphic. But, we are
interested in a positive decision procedure. Such a procedure exists
for both the languages of types {→, ×} and {×, +}, since then we
have an equivalence:
τ ∼= σ iff N+ |= τ = σ.
Organization of this paper In this paper, we shall be treating
the two kinds of problems explained above simultaneously, not as
completely distinct ones: traditionally, studies of canonical forms
and deciding equality on terms have used very little of the type
information annotating the terms (with the exceptions mentioned in
the concluding Section 5).
We shall start by introducing in Section 2 a normal form for
types—called the exp-log normal form (ENF)—that preserves the
isomorphism between the source and the target type; we shall also
give an implementation, a purely functional one, that can be used as
a heuristic procedure for deciding isomorphism of two types.
Even if reducing a type to its ENF does not present a complete
decision procedure for isomorphism of types, we shall show in
the subsequent Section 3 that it has dramatic effects on the theory
of βη-equality of terms. Namely, one can reduce the problem of
showing equality for the standard =βη relation to the problem of
showing it for a new equality theory =eβη (Figure 2)—this latter
being a specialization of =βη . That is, a complete axiomatization
of βη-equality that is a strict subset of the currently standard one is
possible.
In Section 4, we shall go further and describe a minimalist
calculus of terms—compact terms at ENF type—that can be used
as an alternative to the usual lambda calculus with sums. With its
properties of a syntactic simplification of the latter (for instance,
there is no lambda abstraction), the new calculus allows a more
canonical representation of terms. We show that, for a number of
interesting examples, converting lambda terms to compact terms
and comparing the obtained terms for syntactic identity provides a
simple heuristic for deciding =βη .
The paper is accompanied by a prototype normalizing converter
between lambda- and compact terms implemented in Coq.
Indeed, in these cases type isomorphism can not only be decided, but
also effectively built. In the case of {×, +}, the procedure amounts
to transforming the type to disjunctive normal form, or the (nonexponential) polynomial to canonical form, while in that of {→, ×},
there is a canonical normal form obtained by type transformation
that follows currying (Rittri 1991).
Given that it is not known whether one can find such a canonical
normal form for the full language of types (Ilik 2014), what we can
hope to do in practice is to find at least a pseudo-canonical normal
form. We shall now define such a type normal form.
The idea is to use the decomposition of the binary exponential
function σ τ through unary exponentiation and logarithm. This is
a well known transformation in Analysis, where for the natural
logarithm and Euler’s number e we would use
σ τ = eτ ×log σ
also written σ τ = exp(τ × log σ).
The systematic study of such normal forms by Du Bois-Reymond
described in the book (Hardy 1910) served us as inspiration.
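To give a one-line taste of how this decomposition will be used (our own illustration), it immediately recovers the familiar isomorphism distributing an arrow over a product:
(σ × ρ)^τ = exp(τ log(σ × ρ)) = exp(τ (log σ + log ρ)) = exp(τ log σ) × exp(τ log ρ) = σ^τ × ρ^τ,
that is, τ → σ × ρ ∼= (τ → σ) × (τ → ρ).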
But how exactly are we to go about using this equality for
types when it uses logarithms i.e. transcendental numbers? Luckily,
we do not have to think of real numbers at all, because what is
described above can be seen through the eyes of abstract Algebra, in
M, N ::= xτ | (M τ →σ N τ )σ | (π1 M τ ×σ )τ | (π2 M τ ×σ )σ | δ(M τ +σ , xτ1 .N1ρ , xσ2 .N2ρ )ρ |
|(λxτ .M σ )τ →σ | hM τ , N σ iτ ×σ | (ι1 M τ )τ +σ | (ι2 M σ )τ +σ
(λx.N )M =β N {M/x}
(β→ )
πi hM1 , M2 i =β Mi
(β× )
δ(ιi M , x1 .N1 , x2 .N2 ) =β Ni {M/xi }
(β+ )
x 6∈ FV(N )
N =η λx.N x
N =η hπ1 N , π2 N i
(η→ )
(η× )
N {M/x} =η δ(M, x1 .N {ι1 x1 /x}, x2 .N {ι2 x2 /x})
x1 , x2 6∈ FV(N )
(η+ )
Figure 1. Terms of the {→, +, ×}-typed lambda calculus and axioms of the equational theory =βη between typed terms.
exponential fields, as a pair of mutually inverse homomorphisms exp
and log between the multiplicative and additive group, satisfying
exp(τ1 + τ2 ) = exp τ1 × exp τ2
exp(log τ ) = τ
log(τ1 × τ2 ) = log τ1 + log τ2
log(exp τ ) = τ.
In other words, exp and log can be considered as macro expansions
rather than unary type constructors. Let us take the type τ + σ →
(τ + σ → ρ) → ρ from Section 1, assuming for simplicity that
τ, σ, ρ are atomic types. It can be normalized in the following way:
τ + σ → (τ + σ → ρ) → ρ =
= (ρ^(ρ^(τ+σ)))^(τ+σ)
= exp((τ + σ) log[exp{exp((τ + σ) log ρ) log ρ}])
= exp((τ + σ) log[exp{exp(τ log ρ) exp(σ log ρ) log ρ}])
= exp((τ + σ) exp(τ log ρ) exp(σ log ρ) log ρ)
= exp(τ exp(τ log ρ) exp(σ log ρ) log ρ) exp(σ exp(τ log ρ) exp(σ log ρ) log ρ)
= ρ^(τ ρ^τ ρ^σ) × ρ^(σ ρ^τ ρ^σ)
= (τ ×(τ → ρ)×(σ → ρ) → ρ)×(σ×(τ → ρ)×(σ → ρ) → ρ).
As the exp-log transformation of arrow types is at the source
of this type normalization procedure, we call the obtained normal
form the exp-log normal form (ENF). We believe the link to abstract
algebra is well worth keeping in mind, since it may give rise to
further cross-fertilization between mathematics and the theory of
programming languages. However, from the operational point of
view, all this transformation does is that it prioritizes and orients
the high-school identities,
(f + g) + h = f + (g + h)   (1)
(f g)h = f (gh)   (2)
f (g + h) = f g + f h   (3)
(f + g)h = f h + gh   (4)
f^(g+h) = f^g f^h   (5)
(f g)^h = f^h g^h   (6)
(f^g)^h = f^(hg)   (7)
all of which are valid as type isomorphisms. We can thus also
compute the isomorphic normal form of the type directly, for
instance for the second example of Section 1:
(τ1 → τ2 ) → (τ3 → τ1 ) → τ3 → τ4 + τ5 → τ2 =
= (((τ2^(τ4+τ5))^τ3)^(τ1^τ3))^(τ2^τ1)
= τ2^(τ4 τ3 τ1^τ3 τ2^τ1) × τ2^(τ5 τ3 τ1^τ3 τ2^τ1)
= (τ4 × τ3 × (τ3 → τ1 ) × (τ1 → τ2 ) → τ2 ) × (τ5 × τ3 × (τ3 → τ1 ) × (τ1 → τ2 ) → τ2 ).
Of course, some care needs to be taken when applying the rewrite
rules, in order for the procedure to be deterministic, like giving precedence to the type rewrite rules and normalizing sub-expressions. To
be precise, we provide a purely functional Coq implementation below. This is just one possible implementation of the rewriting rules,
but being purely functional and structurally recursive (i.e. terminating) it allows us to understand the restrictions imposed on types in
normal form, as it proves the following theorem.
Theorem 1. If τ is a type in exp-log normal form, then τ ∈ ENF,
where
ENF ∋ e ::= c | d,
where
DNF ∋ d, di ::= c1 + (c2 + (· · · + cn ) · · · ),   n ≥ 2
CNF ∋ c, ci ::= (c1 → b1 ) × (· · · × (cn → bn ) · · · ),   n ≥ 0
Base ∋ b, bi ::= p | d,
and p denotes atomic types (type variables).
Assuming a given set of atomic types,
Parameter Proposition : Set.
the goal is to map the unrestricted language of types, given by the
inductive definition,1
Inductive Formula : Set :=
| prop : Proposition → Formula
| disj : Formula → Formula → Formula
| conj : Formula → Formula → Formula
| impl : Formula → Formula → Formula.
1 May the reader forgive us for the implicit use of the Curry-Howard
correspondence in the Coq code snippets, where we refer to types and type
constructors as formulas and formula constructors.
into the exp-log normal form which fits in the following inductive
signature.
| two c0 c1 ⇒ dnf (two (ntimes c c0) (ntimes c c1))
| dis c0 d0 ⇒ dnf match distrib0 c d0 with
| cnf c1 ⇒ two (ntimes c c0) c1
| dnf d1 ⇒ dis (ntimes c c0) d1
end
end.
Inductive CNF : Set :=
| top
| con : CNF → Base → CNF → CNF
with DNF : Set :=
| two : CNF → CNF → DNF
| dis : CNF → DNF → DNF
with Base : Set :=
| prp : Proposition → Base
| bd : DNF → Base.
Definition distrib1 (c : CNF)(e : ENF) : ENF :=
match e with
| cnf a ⇒ cnf (ntimes c a)
| dnf b ⇒ distrib0 c b
end.
Fixpoint explog0 (d : Base)(d2 : DNF) {struct d2} : CNF
:=
match d2 with
| two c1 c2 ⇒ ntimes (con c1 d top) (con c2 d top)
| dis c d3 ⇒ ntimes (con c d top) (explog0 d d3)
end.
Inductive ENF : Set :=
| cnf : CNF → ENF
| dnf : DNF → ENF.
The con c1 b c2 constructor corresponds to bc1 c2 or the type
(c1 → b) × c2 from Theorem 1. The normalization function, enf (·),
Definition explog1 (d : Base)(e : ENF) : CNF :=
match e with
| cnf c ⇒ con c d top
| dnf d1 ⇒ explog0 d d1
end.
Fixpoint enf (f : Formula) {struct f } : ENF :=
match f with
| prop p ⇒ cnf (p2c p)
| disj f0 f1 ⇒ dnf (nplus (enf f0) (enf f1))
| conj f0 f1 ⇒ distrib (enf f0) (enf f1)
| impl f0 f1 ⇒ cnf (explogn (enf2cnf (enf f1)) (enf f0))
end.
Fixpoint distribn (d : DNF)(e2 : ENF) {struct d} : ENF
:=
match d with
| two c c0 ⇒ dnf (nplus (distrib1 c e2) (distrib1 c0 e2))
| dis c d0 ⇒ dnf (nplus (distrib1 c e2) (distribn d0 e2))
end.
is defined using the following fixpoints:
Definition distrib (e1 e2 : ENF) : ENF :=
match e1 with
| cnf a ⇒ distrib1 a e2
| dnf b ⇒ distribn b e2
end.
nplus which makes a flattened n-ary sum out of two given n-ary
sums, i.e. implements the +-associativity rewriting (1),
ntimes which is analogous to ‘nplus’, but for products, implementing (2),
distrib which performs the distributivity rewriting, (3) and (4), and
Fixpoint explogn (c:CNF)(e2:ENF) {struct c} : CNF :=
match c with
| top ⇒ top
| con c1 d c2 ⇒
ntimes (explog1 d (distrib1 c1 e2)) (explogn c2 e2)
end.
explogn which performs the rewriting involving exponentiations,
(5), (6), and (7).
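As a small usage sketch (ours; the accompanying development does not necessarily define such a function), the normalization function can be turned into the isomorphism heuristic discussed above simply by comparing normal forms:
(* A minimal sketch of the heuristic: report two types as isomorphic when
   their exp-log normal forms coincide.  It assumes the Formula/ENF
   definitions and the enf fixpoint of this section are in scope, and uses
   plain Leibniz equality on ENF; concrete instances can be inspected with
   Eval compute in (enf t). *)
Definition enf_iso_heuristic (t1 t2 : Formula) : Prop :=
  enf t1 = enf t2.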
Fixpoint nplus1 (d : DNF)(e2 : ENF) {struct d} : DNF :=
match d with
| two c c0 ⇒ match e2 with
| cnf c1 ⇒ dis c (two c0 c1)
| dnf d0 ⇒ dis c (dis c0 d0)
end
| dis c d0 ⇒ dis c (nplus1 d0 e2)
end.
Definition p2c : Proposition → CNF :=
fun p ⇒ con top (prp p) top.
Definition b2c : Base → CNF :=
fun b ⇒
match b with
| prp p ⇒ p2c p
| bd d ⇒ con top (bd d) top
end.
Definition nplus (e1 e2 : ENF) : DNF :=
match e1 with
| cnf a ⇒ match e2 with
| cnf c ⇒ two a c
| dnf d ⇒ dis a d
end
| dnf b ⇒ nplus1 b e2
end.
Fixpoint enf2cnf (e:ENF) {struct e} : CNF :=
match e with
| cnf c ⇒ c
| dnf d ⇒ b2c (bd d)
end.
Fixpoint ntimes (c1 c2 : CNF) {struct c1} : CNF :=
match c1 with
| top ⇒ c2
| con c10 d c13 ⇒ con c10 d (ntimes c13 c2)
end.
From the inductive characterization of the previous theorem, it
is immediate to notice that the exp-log normal form (ENF) is in fact
a combination of disjunctive- (DNF) and conjunctive normal forms
(CNF), and their extension to also cover the function type. We shall
now apply this simple and loss-less transformation of types to the
equational theory of terms of the lambda calculus with sums.
Fixpoint distrib0 (c : CNF)(d : DNF) : ENF :=
match d with
3. βη-Congruence classes at ENF type
to compacting the βη-congruence class to a single point, a canonical
normal term of type enf (τ ).
Assuming τ, σ, τi are base types, the canonical representatives
for the two βη-congruence classes of Section 1 are
The virtue of type isomorphisms is that they preserve the equational
theory of the term calculus: an isomorphism between τ and σ is
witnessed by a pair of lambda terms
T :σ→τ
hλx.(π1 (π2 x))(π1 x), λx.(π2 (π2 x))(π1 x)i
S:τ →σ
and
such that
and
λx.T (Sx) =βη λx.x and λy.S(T y) =βη λy.y.
Therefore, when τ ∼
= σ, and σ happens to be more canonical than τ —
in the sense that to any βη-equivalence class of type τ corresponds
a smaller one of type σ—one can reduce the problem of deciding
βη-equality at τ to deciding it for a smaller subclass of terms.
hλx.(π1 x)((π1 π2 x)(π1 π2 π2 x)), λx.(π1 x)((π1 π2 x)(π1 π2 π2 x))i.
Note that, unlike (Balat et al. 2004), we do not need any sophisticated term analysis to derive a canonical form in this kind of cases.
One may either apply the standard terms witnessing the isomorphisms by hand, or use our normalizer described in Section 4.
The natural place to pick a canonical representative is thus the
βη-congruence class of terms at the normal type, not the class at
the original type! Moreover, beware that even if it may be tempting
to map a canonical representative along isomorphic coercions back
to the original type, the obtained representative may not be truly
canonical since there is generally more than one way to specify the
terms S and T that witness a type isomorphism.
Of course, not always can all sum types be eliminated by type
isomorphism, and hence not always can a class be compacted to a
single point in that way. Nevertheless, even in the case where there
are still sums remaining in the type of a term, the ENF simplifies
the set of applicable =βη -axioms.
We can use it to get a restricted set of equations, =eβη , shown
in Figure 2, which is still complete for proving full βη-equality, as
made precise in the following theorem.
In the case when σ = enf (τ ), the equivalence classes at type σ will
not be larger than their original classes at τ , since the main effect of
the reduction to exp-log normal form is to get rid of as many sum
types on the left of an arrow as possible, and it is known that for the
{×, →}-typed lambda calculus one can choose a single canonical
η-long β-normal representative out of a class of βη-equal terms.
Thus, from the perspective of type isomorphisms, we can observe the partition of the set of terms of type τ into =βη -congruence
classes as projected upon different parallel planes in three dimensional space, one plane for each type isomorphic to τ . If we choose
to observe the planes for τ and enf (τ ), we may describe the situation by the following figure.
[Figure: the =βη -congruence classes at type τ and at type enf (τ ), drawn as projections on two parallel planes.]
Theorem 2. Let P, Q be terms of type τ and let S : τ →
enf (τ ) , T : enf (τ ) → τ be a witnessing pair of terms for
the isomorphism τ ∼
= enf (τ ). Then, P =βη Q if and only if
SP =eβη SQ and if and only if T (SP ) =βη T (SQ).
Proof. Since the set of terms of ENF type is a subset of all typable
terms, it suffices to show that all =βη -equations that apply to terms
of ENF type can be derived already by the =eβη -equations.
Notice first that ηλe and ηπe are special cases of η+ , so, in fact,
the only axiom missing from =eβη is η+ itself,
N {M/x} =eη δ(M, x1 .N {ι1 x1 /x}, x2 .N {ι2 x2 /x})
(x1 , x2 6∈ FV(N )),
when N is of type c; the case of N of type d is covered directly by the η+e -axiom. We thus show that the η+ -axiom is derivable from the =eβη -ones by induction on c.
Case for N of type (c → b) × c0 .
The dashed circle depicts the compaction, if any, of a congruence
class achieved by coercing to ENF type. The single point depicts
the compaction to a singleton set, the case where a unique canonical
representative of a class of βη-terms exists.
We do not claim that the plane of enf (τ ) is always the best
possible plane to choose for deciding =βη . Indeed, for concrete
base types there may well be further type isomorphisms to apply
(think of the role of the unit type 1 in (1 → τ + σ) → ρ) and hence
a better plane than the one for enf (τ ). However, it is a reasonably
good default choice.
For the cases of types where the sum can be completely eliminated, such as the two examples of Section 1, the projection amounts
N {M/x}
e
=eη hπ1 (N {M/x}), π2 (N {M/x})i by η×
=h(π1 N ){M/x}, (π2 N ){M/x}i
=eη hδ(M, x1 .(π1 N ){ι1 x1 /x}, x2 .(π1 N ){ι2 x2 /x}),
δ(M, x1 .(π2 N ){ι1 x1 /x}, x2 .(π2 N ){ι2 x2 /x})i by IH
=eβη hπ1 (δ(M, x1 .N {ι1 x1 /x}, x2 .N {ι2 x2 /x})),
π2 (δ(M, x1 .N {ι1 x1 /x}, x2 .N {ι2 x2 /x}))i by ηπe
e
=eη δ(M, x1 .N {ι1 x1 /x}, x2 .N {ι2 x2 /x}) by η×
M, N ::= xe | (M c→b N c )b | (π1 M (c→b)×c0 )c→b | (π2 M (c→b)×c0 )c0 | δ(M c+d , xc1 .N1e , xd2 .N2e )e
| (λxc .M b )c→b | hM b→c , N c0 i(b→c)×c0 | (ι1 M c )c+d | (ι2 M d )c+d
(λxc .N b )M =eβ N {M/x}
e
(β→
)
πi hM1b→c , M2c0 i =eβ Mi
e
(β×
)
δ(ιi M , x1 .N1 , x2 .N2 )e =eβ Ni {M/xi }
e
(β+
)
N c→b =eη λx.N x
N
(c→b)×c0
=eη
d
=eη
=eη
b
N {M /x}
πi δ(M, x1 .N1 , x2 .N2 )
x 6∈ FV(N )
e
(η→
)
e
(η×
)
hπ1 N , π2 N i
δ(M, x1 .N {ι1 x1 /x}, x2 .N {ι2 x2 /x})
b
x1 , x2 6∈ FV(N )
δ(M, x1 .πi N1 , x2 .πi N2 )c
e
(η+
)
(ηπe )
λy.δ(M, x1 .N1 , x2 .N2 ) =eη δ(M, x1 .λy.N1 , x2 .λy.N2 )c→b
y 6∈ FV(M )
(ηλe )
Figure 2. Lambda terms of ENF type and the equational theory =eβη .
Case for N of type c → b.
N {M/x}
=eη λy.(N {M/x})y
the two term calculi. First, we represent the usual lambda calculus
with sums.
e
by η→
Inductive ND : list Formula → Formula → Set :=
| hyp : ∀ {Gamma A},
ND (A :: Gamma) A
| wkn : ∀ {Gamma A B},
ND Gamma A → ND (B :: Gamma) A
| lam : ∀ {Gamma A B},
ND (A :: Gamma) B → ND Gamma (impl A B)
| app : ∀ {Gamma A B},
ND Gamma (impl A B) → ND Gamma A → ND Gamma B
| pair : ∀ {Gamma A B},
ND Gamma A → ND Gamma B → ND Gamma (conj A B)
| fst : ∀ {Gamma A B},
ND Gamma (conj A B) → ND Gamma A
| snd : ∀ {Gamma A B},
ND Gamma (conj A B) → ND Gamma B
| inl : ∀ {Gamma A B},
ND Gamma A → ND Gamma (disj A B)
| inr : ∀ {Gamma A B},
ND Gamma B → ND Gamma (disj A B)
| cas : ∀ {Gamma A B C},
ND Gamma (disj A B) →
ND (A :: Gamma) C → ND (B :: Gamma) C →
ND Gamma C.
= λy.(N y){M/x} for y 6∈ FV (N {M/x})
e
=eη λy.δ(M, x1 .(N y){ι1 x1 /x}, x2 .(N y){ι2 x2 /x}) by η+
e
=η δ(M, x1 .(λy.N y){ι1 x1 /x}, x2 .(λy.N y){ι2 x2 /x}) by ηλe
e
=eη δ(M, x1 .N {ι1 x1 /x}, x2 .N {ι2 x2 /x}) by η→
The transformation of terms to ENF type thus allows us to simplify
the (up to now) standard axioms of =βη . The new axioms are
complete for =βη in spite of them being only special cases of
the old ones. A notable feature is that we get to disentangle the lefthand side and right-hand side of the equality axioms: for instance,
the right-hand side of β→ -axiom can no longer overlap with the
left-hand side of the η+ -axiom, due to typing restrictions on the
term M .
One could get rid of ηπe and ηλe if one had a version of λ-calculus resistant to these permuting conversions. The syntax of
such a lambda calculus would further be simplified if, instead of
binary, one had n-ary sums and products. In that case, there would
be no need for variables of sum type at all (currently they can only
be introduced by the second branch of δ). We would in fact get a
calculus with only variables of type c → b, and that would still
be suitable as a small theoretical core of functional programming
languages.
The constructors are self-explanatory, except for hyp and wkn, which
are in fact used to denote de Bruijn indices: hyp denotes 0, while
wkn is the successor. For instance, the term λxyz.y is represented
as lam (lam (lam (wkn hyp))) i.e. lam (lam (lam 1)).
DeBruijn indices creep in as the simplest way to work with
binders in Coq, and although they may reduce readability, they solve
the problem with α-conversion of terms.
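As a further small example (ours, with hypothetical names), the K combinator fun x y => x is encoded with one weakening, since the context grows at its head every time lam binds a new hypothesis:
(* A sketch assuming the ND definition above is in scope; A and B are
   arbitrary formulas.  The outer variable x is one wkn away from the
   innermost hypothesis. *)
Section ND_example.
  Variables A B : Formula.
  Definition K_comb : ND nil (impl A (impl B A)) :=
    lam (lam (wkn hyp)).
End ND_example.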
Next is our compact representation of terms, defined by the
following simultaneous inductive definition of terms at base type
(HSb), together with terms at product type (HSc). These latter are
simply finite lists of HSb-terms.
4. A compact representation of terms at ENF type
It is the subject of this section to show that the desiderata for a
more canonical calculus from the previous paragraph can in fact be
achieved. We shall define a new representation of lambda terms, that
we have isolated as the most compact syntax possible during the
formal Coq development of a normalizer of terms at ENF type. The
description of the normalizer itself will be left for the second part
of this section, Subsection 4.1. In the first part of the section, we
shall demonstrate the value of representing terms in our calculus on
a number of examples. Comparing our normal form for syntactical
identity provides a first such heuristic for deciding =βη in the
presence of sums.
Before we continue with the presentation of the new calculus, for
the sake of precision, we give the formal representation of terms of
Inductive HSc : CNF → Set :=
| tt : HSc top
| pair : ∀ {c1 b c2}, HSb c1 b → HSc c2 → HSc (con c1 b c2)
follows:
with HSb : CNF → Base → Set :=
| app : ∀ {p c0 c1 c2},
HSc (explogn c1 (cnf (ntimes c2 (con c1 (prp p) c0)))) →
HSb (ntimes c2 (con c1 (prp p) c0)) (prp p)
| cas : ∀ {d b c0 c1 c2 c3},
HSc (explogn c1 (cnf (ntimes c2 (con c1 (bd d) c0)))) →
HSc (explogn (explog0 b d)
(cnf (ntimes c3 (ntimes c2 (con c1 (bd d) c0))))) →
HSb (ntimes c3 (ntimes c2 (con c1 (bd d) c0))) b
| wkn : ∀ {c0 c1 b1 b},
HSb c0 b → HSb (con c1 b1 c0) b
| inl two : ∀ {c0 c1 c2},
HSc (explogn c1 (cnf c0)) → HSb c0 (bd (two c1 c2))
| inr two : ∀ {c0 c1 c2},
HSc (explogn c2 (cnf c0)) → HSb c0 (bd (two c1 c2))
| inl dis : ∀ {c0 c d},
HSc (explogn c (cnf c0)) → HSb c0 (bd (dis c d))
| inr dis : ∀ {c0 c d},
HSb c0 (bd d) → HSb c0 (bd (dis c d)).
c0 ⇒ (c1 → b1 ) × · · · × (cn → bn ) ≡
(c1 × c0 → b1 ) × · · · × (cn × c0 → bn )
(c1 + · · · + cn ) ⇒ b ≡ (c1 → b) × · · · × (cn → b)
Note that the usage of d ⇒ b in the typing rule for δ allows the
number of premises contained in Q to be determined by the size
of the sum d.
The typing rules also rely on an implicit variable convention, where a
variable xn actually denotes the variable whose deBruijn index is n
(we start counting from 0).
Variables as deBruijn indices: The variable xn in the rules for
xn P and δ(xn P, Q) represent the hypothesis c1 → p and
c1 → d. For concrete c2 , c3 , the subscript n means that the
variable represents the n-th hypothesis of the form c → b,
counting from left to right and starting from 0, in the context of
the term P , or the n + 1-st, in the context of the term Q.
We shall motivate our syntax in comparison to the syntax of the
lambda calculus from Figure 2, by considering in order all term
constructors of the latter.
For a more human-readable notation of our calculus, we are
going to use the following one,
P, Q ::= hM1 , . . . , Mn i
xe Since e ∈ ENF, either e = c ∈ CNF or e = d ∈ DNF.
Variables of type d only appear as binders in the second branch
of δ, so if we have n-ary instead of binary δ’s, the only type a
variable x could have will be a c. But, since c is always of the
form (c1 → b1 ) × · · · × (cn → bn ), a variable xc could be
written as a tuple of n variables xi of types ci → bi . Moreover,
as we want our terms to always be η-expanded, and ci → bi is
an arrow type, we will not have a separate syntactic category of
terms for variables xi in the new calculus, but they will rather
be encoded/merged with either the category of applications
xi P (when b is an atomic type p), or the category of case
analysis δ(xi P, Q) (when b ∈ DNF), the two new constructors
explained below.
(n ≥ 0)
M, Mi ::= xn P | δ(xn P, Q) | wM | ι1 P | ι2 P | ι01 P | ι02 M,
with typing rules as follows:
M1 : (c1 ` b1 ) · · · Mn : (cn ` bn )
hM1 , . . . , Mn i : (c1 → b1 ) × · · · × (cn → bn )
P : (c2 × (c1 → p) × c0 ⇒ c1 )
xn P : (c2 × (c1 → p) × c0 ` p)
P : (c2 × (c1 → d) × c0 ⇒ c1 )
Q : (c3 × c2 × (c1 → d) × c0 ⇒ (d ⇒ b))
δ(xn P, Q) : (c3 × c2 × (c1 → d) × c0 ` b)
M c→b N c We shall only need this term constructor at type b = p,
since if b ∈ DNF the term M N would not be η-long (we want it
to be represented by a δ(M N, · · · )). As we realized during our
Coq development, we shall only need the case M = x, as there
will be no other syntactic element of type c → p (there will be
no projections πi left, while the δ will only be necessary at type
b). In particular, the application xhi can be used to represent the
old category of variables, where hi is the empty tuple of unit
type 1 (the nullary product).
M : (c0 ` b)
wM : ((c1 → b1 ) × c0 ` b)
P : (c0 ⇒ c1 )
ι1 P : (c0 ` c1 + c2 )
P : (c0 ⇒ c)
ι01 P : (c0 ` c + d)
P : (c0 ⇒ c2 )
ι1 P : (c0 ` c1 + c2 )
(π1 M (c→b)×c0 )c→b If M is η-expanded, as we want all terms to
be, this term would only create a β-redex, and so will not be a
part of the new syntax, as we are building a syntax for β-normal
and η-long terms.
M : (c0 ` d)
.
ι02 M : (c0 ` c + d)
The typing rules above involve two kinds of typing judgments.
(π2 M (c→b)×c0 )c0 When product types are represented as n-ary,
the same reasoning as for π1 applies, so π2 will not be part of
the new syntax.
Judgments at base type: Denoted M : (c ` b), this is the main
judgment kind, the conclusion of all but the first typing rule.
It says that M is a term of type b (i.e. either an atomic p or a
disjunction type d) in the typing context c. This context c takes
over the place of the usual context Γ and allows only hypotheses
(variables) of type ci → bi to be used inside the term M .
δ(M c+d , xc1 .N1e , xd2 .N2e )e This constructor is only needed at the
e
type e = b, a consequence of the fact that the η+
-axiom is
e
e
specialized to type b: the axioms ηπ and ηλ will not expressible
in the new syntax, since it will not contain πi , as we saw, and it
will not contain λ, as we shall see. We will also only need the
scrutinee M to be of the form xN , as in the case of application;
this additional restriction was not possible to see upfront, but
only once we used Coq to analyze the terms needed for the
normalizer.
The new constructor δ(xn P, Q) is thus like the old δ(xn P, · · · ),
except that Q regroups in the form of an n-ary tuple all the
Judgments at product type: Denoted P : c or Q : c, this kind of
judgment is only the conclusion of the first typing rule, whose
sole purpose is to make a tuple of base type judgments.
However, the judgments at product type are used as hypotheses
in the other typing rules, where their role is to allow n premises
to the typing rule. For this usage, they are disguised as the macroexpansions c1 ⇒ c2 or c ⇒ (d ⇒ b). Implemented by the Coq
fixpoints explogn and explog1, these macro expansions work as
Example 2. This is the second example from the introduction
(Example 6.2.4.2 from (Balat et al. 2004)). The βη-equal terms
possible branches of the pattern matching (sum types will also
be n-ary, not binary like before).
(λxc .M b )c→b This terms constructor is already severely restricted
(for instance only one variable x can be abstracted), thanks to the
restrictions on the left- and right-hand sides of the function type.
But, as we found out during the Coq development, somewhat
to our surprise, there is no need for λ-abstraction in our syntax.
When reverse-normalizing from our calculus to the standard
lambda calculus (see the six examples below), λ’s can be
reconstructed thanks to the typing information.
hM
b→c
λxyzu.x(yz)
(13)
λxyzu.δ(u, x1 .x(yz), x2 .x(yz))
(14)
λxyzu.δ(u, x1 .δ(ι1 z, y1 .x(yy1 ), y2 .xy2 ),
x2 .δ(ι2 yz, y1 .x(yy1 ), y2 .xy2 ))
(15)
λxyzu.δ(δ(u, x1 .ι1 z, x2 .ι2 (yz)), y1 .x(yy1 ), y2 .xy2 )
c0 (b→c)×c0
This constructor will be maintained, corre,N i
sponding to the only typing rule with conclusion a judgment
of product type, but it will become n-ary, hM1 , . . . , Mn i. In
particular, we may have the nullary tuple hi of the null product
type (i.e. unit type 1).
(16)
at type
(a → b) → (c → a) → c → (d + e) → b,
are all normalized to the compact term
hx3 (x2 x1 ), x3 (x2 x1 )i
(ι1 M c )c+d , (ι2 M d )c+d These constructors will be maintained, but
will be duplicated: the new ι1 , ι2 will only be used to construct
a binary sum c1 + c2 (this is the base case of sum constructors
which must be at least binary by construction), while the ι01 , ι02
will be used to construct sums of the form c + d.
(17)
at the ENF type
(d × c × (c → a) × (a → b) → b)×
(e × c × (c → a) × (a → b) → b),
which can then be reverse-normalized to (14), if desired.
We shall now show a number of examples that our compact
term representation manages to represent canonically. We will also
show cases where βη-equality cannot be decided by bringing
terms to the compact normal form. For simplicity, all type variables
(a, b, c, d, e, f, g, p, q, r, s, i, j, k, l) are assumed to be of atomic
type, none of them denoting members of Base, CNF, and DNF,
anymore, and for the rest of this subsection.
A reviewer once remarked that the two previous examples can
be handled just by a CPS transformation. While our implementation
will be based on continuations, the reason why these examples are
handled by our method is not continuations, but rather the fact that
all sum types can be eliminated, allowing us to choose a canonical
term in the compact representation of the {→, ×}-typed lambda
calculus.
Convention 1. We shall adopt the convention of writing the type
1 → p as p (1 is the unit type i.e. the nullary product type), writing
the application to a nullary pair xn hi as xn , and writing a singleton
pair hM i as just M . Hence, for instance, an application of some
term M to a singleton pair, containing an application of a term N to
a nullary pair, M hN hii, will be written as the more readable M N
corresponding to the usual λ-calculus intuitions.
Example 3 (Commuting conversions). The left and right hand sides
of the common commuting conversions,
λxyzu.δ(u, v1 .yv1 , v2 .zv2 )x =βη
=βη λxyzu.δ(u, v1 .(yv1 )x, v2 .(zv2 )x),
λxyzuv.δ(δ(x, x1 .yx1 , x2 .zx2 ), w1 .uw1 , w2 .vw2 ) =βη
=βη λxyzuv.δ(x, x1 .δ(yx1 , w1 .uw1 , w2 .vw2 ),
x2 .δ(zx2 , w1 .uw1 , w2 .vw2 ))
Example 1. This is the first example from the introduction, concerning the relative positions of λ’s, δ’s, and applications. The βη-equal
terms
λx.λy.yδ(x, z.ι1 z, z.ι2 z)
λx.λy.δ(x, z.y(ι1 z), z.y(ι2 z))
(8)
(9)
λx.δ(x, z.λy.y(ι1 z), z.λy.y(ι2 z))
(10)
λx.λy.yx
(11)
(18)
(19)
of types
s → (p → s → r) → (q → s → r) → (p + q) → r
and
(p + q) → (p → r + s) → (q → r + s) →
(r → a) → (s → a) → a,
at type
are normalized to the compact terms
(p + q) → ((p + q) → r) → r,
hx2 hx3 , x0 i, x1 hx3 , x0 ii,
are all normalized to the same canonical representation
hx0 x2 , x1 x2 i
(18’)
of ENF type
(12)
(p × (s × q → r) × (s × p → r) × s → r)×
(q × (s × q → r) × (s × p → r) × s → r)
at the ENF type
and
((p → r) × (q → r) × p → r)×
((p → r) × (q → r) × q → r)
hδ(x3 x4 , hx2 x0 , x1 x0 i), δ(x2 x4 , hx2 x0 , x1 x0 i)i,
(19’)
of ENF type
which can be reverse-normalized back to (9). However, the point
is not that (9) is somehow better than the other 3 terms, but that a
canonical representation should be sought at the ENF type, not the
original type! This remark is valid in general, and in particular for
the other examples below.
((s → a) × (r → a) × (q → r + s) × (p → r + s) × p → a)
× ((s → a) × (r → a) × (q → r + s) × (p → r + s) × q → a),
which can be reverse-normalized to the right-hand sides of (18), and
(19), respectively, if desired.
Example 4 (Eta equations). Both the left- and the right-hand sides
of the eta rules (represented as closed terms),
λx.x =βη λxy.xy
λx.x =βη λx.hπ1 x, π2 xi
of type
(f → g) → (h → g) → i → (i → f + h) → g
(20)
(21)
are normalized to two different compact representations:
λxy.xy =βη λxy.δ(y, x1 .x(ι1 x1 ), x2 .x(ι2 x2 ))
(22)
λxyz.δ(z, z1 .λu.xz1 , z2 .λu.yz2 ) =βη λxyzu.δ(z, z1 .xz1 , z2 .yz2 )
(23)
(i → f + h) × i × (h → g) × (f → g) → g
Example 6. The following βη-equal terms,
of types
((p + q) → r) → ((p + q) → r)
(p → s) → (q → s) → (p + q) → r → s
(22)
(23)
(p → s × r) → (q → s × r) → (p + q) → s
(24)
(p → s × r) → (q → s × r) → (p + q) → r
(25)
(27’)
which can then be reverse-normalized to the starting lambda terms
themselves.
λxyz.π2 δ(z, z1 .xz1 , z2 .yz2 ) =βη λxyz.δ(z, z1 .π2 xz1 , z2 .π2 yz2 )
(25)
(20)
(21)
(26’)
of ENF type
λxyz.π1 δ(z, z1 .xz1 , z2 .yz2 ) =βη λxyz.δ(z, z1 .π1 xz1 , z2 .π1 yz2 )
(24)
(p → p) → (p → p)
(p × q) → (p × q)
δ(x0 x1 , hx4 x0 , x3 x0 i)
δ(x0 x1 , hδ(x1 x2 , hx5 x0 , x4 x0 i), x3 x0 i),
λxyzuv.δ(zv, x1 .ι1 x, x2 .δ(uv, y1 .ι2 y, y2 .ι1 x))
(28)
λxyzuv.δ(uv, y1 .δ(zv, x1 .ι1 x, x2 .ι2 y), y2 .ι1 x),
(29)
of type
k → l → (f → g + h) → (f → i + j) → f → k + l
are normalized to two different compact representations:
are mapped to the same compact term
hδ(x2 x0 , hι1 x5 , δ(x3 x1 , hι2 x5 , ι1 x6 i)i)i
(28’)
hδ(x1 x0 , hδ(x2 x1 , hι1 x6 , ι2 x5 i), ι1 x5 i)i,
(29’)
of ENF type
x1 x0
(20’)
hx0 , x1 i
(21’)
f × (f → i + j) × (f → g + h) × l × k → k + l
hx1 x0 , x2 x0 i
hx3 x1 , x2 x1 i
(22’)
(23’)
which can then be reverse-normalized to the starting lambda terms
themselves.
hx3 x0 , x1 x0 i
hx4 x0 , x2 x0 i,
(24’)
(25’)
p × (p → p) → p
(20’)
Comparison to the examples covered by the heuristic of (Balat
et al. 2004) In addition to Example 2 that was borrowed from
(Balat et al. 2004), other examples that can be covered from
that paper are examples 4.2.1– 4.2.4 and Example 4.3.1. In these
examples, not only are the input and the output of their TDPE
represented uniquely, but also, in the cases when there are two
distinct output normal forms according to Balat et al., our normalizer
unifies the two normal forms into one, shown below:
of ENF types
(p × q → p) × (p × q → q)
(p × (p → r) × (q → r) → r)×
(q × (p → r) × (q → r) → r)
(r × p × (q → s) × (p → s) → s)×
(r × q × (q → s) × (p → s) → s)
(21’)
x0
hι1 x0 , ι2 x0 i
(22’)
hι01 hx0 , x1 i, ι02 ι01 hx0 , x1 i, ι02 ι02 ι1 hx0 , x1 i, ι02 ι02 ι2 hx0 , x1 ii
(4.2.3)
(23’)
(p × (q → s) × (q → r) × (p → s) × (p → r) → s)×
(q × (q → s) × (q → r) × (p → s) × (p → r) → s) (24’)
hι01 hx1 , x0 i, ι02 ι02 ι1 hx1 , x0 i, ι02 ι01 hx1 , x0 i, ι02 ι02 ι2 hx1 , x0 ii
(4.2.4)
(p × (q → s) × (q → r) × (p → s) × (p → r) → r)×
(q × (q → s) × (q → r) × (p → s) × (p → r) → r), (25’)
hι02 ι02 ι1 x1 , ι02 ι01 x1 i.
Finally, as we shall see in the following two examples, our conversion to compact form does not guarantee a canonical representation for terms that are equal with respect to the strong forms of
βη-equality used to duplicate subterms (Example 5) or change the
order of case analysis of subterms (Example 6). Although such term
transformations might not be desirable in the setting of real programming languages, for they change the order of evaluation, in a
pure effect-free setting like a proof assistant, such transformation
would be handy to have.
Example 5. The following βη-equal terms,
0
0
0
0
λxyzu.δ(uz, w.δ(uz, w .xw , w .yw ), w.yw),
(4.3.1)
Canonical representations could be obtained in these examples,
because it was possible to represent the input and output terms in
the fragment of the compact calculus which does not include δ’s
(although it still involves sum types).
On the other hand, there are also the examples where the input
and output are not unified by our procedure.2 Examples 4.3.2
and 4.3.3 are not handled because we do not permute the order of
case analyses (as shown by our Example 6); Example 6.2.4.2 is not
handled because we do not analyze if a subterm has been used twice
in a term or not (as shown also by our Example 5); examples 4.3.4
and 4.4 are not even executable in our implementation, because we
do not have a special treatment of the atomic empty type.
Of course, there is nothing stopping us from applying the
program transformations that would allow to handle this kind of
and reverse-normalizing these compact terms produces always the
right-hand side of the corresponding equation involving lambda
terms.
λxyzu.δ(uz, w.xw, w.yw)
(4.2.1)
(4.2.2)
2 We
shall not reproduce the compact representation for these examples in
this paper, but they are available for inspection in the Coq formalization
accompanying it.
(26)
(27)
cases—or nothing stopping Balat et al. from first applying our typedirected normalization procedure before performing their heuristic
to unify the different normal outputs that they sometimes get. The
point is that these two methodologies are orthogonal and they would
ideally be used in combination inside a real-world application; for
more comments about the two approaches, see Section 5.
4.1
(∀ {w’’}, le w’ w’’ → f w’’ x → X w’’ x0) → X w’ x0.
Next, the necessary forcing fixpoints are defined: bforces, cforces,
and dforces, which are used to construct the type of the continuation monad corresponding to Base, CNF, and DNF, respectively;
sforces is used for constructing the type of the continuation monad
corresponding to non-normalized types.
A converter for the compact term representation
In the remaining part of this section, we explain the high-level structure of our prototype normalizer of lambda terms into compact terms
and vice versa. The full Coq implementation of the normalizer, together with the examples considered above, is given as a companion
to this paper. This is only one possible implementation, using continuations, but all the previous material of this paper was written as
generically as possible, so that it is useful if other implementation
techniques are attempted in the future, such as rewriting based on
evaluation contexts (i.e. the first-order reification of continuations),
or abstract machines.
In a nutshell, our implementation is a type-directed partial
evaluator, written in continuation-passing style, with an intermediate
phase between the evaluation and reification phases, that allows to
map a ‘semantic’ representation of a term from a type to its ENF
type, and vice versa. Such partial evaluators can be implemented
very elegantly, and with getting certain correctness properties for
free, using the GADTs from Ocaml’s type system, as shown by
the recent work of Danvy, Keller, and Puech (Danvy et al. 2015).
Nevertheless, we had chosen to carry out our implementation in Coq,
because that allowed us to perform a careful interactive analysis of
the necessary normal forms—hence the compact calculus introduced
in the first part of this section.
Usual type-directed partial evaluation (TDPE), aka normalization-by-evaluation (NBE), proceeds in two phases. First an evaluator is
defined which takes the input term and obtains its semantic representation, and then a reifier is used to map the semantic representation
into an output syntactic term. Our TDPE uses an intermediate phase
between the two phases, a phase where type isomorphisms are
applied to the semantic domain so that the narrowing down of a
class of equal terms, described in Section 2, is performed on the
semantic annotation of a term.
The semantics that we use is defined by a continuation monad
over a forcing structure, together with forcing fixpoints that map
the type of the input term into a type of the ambient type theory.
The forcing structure is an abstract signature (Coq module type),
requiring a set K of possible worlds, a preorder relation on worlds,
le, an interpretation of atomic types, pforces, and X, the return type
of the continuation monad.
Fixpoint bforces (w:K)(b:Base) {struct b} : Set :=
match b with
| prp p ⇒ pforces w p
| bd d ⇒ dforces w d
end
with cforces (w:K)(c:CNF) {struct c} : Set :=
match c with
| top ⇒ unit
| con c1 b c2 ⇒
(∀ w’, le w w’ → Cont cforces w’ c1 → Cont bforces w’ b)
× (Cont cforces w c2)
end
with dforces (w:K)(d:DNF) {struct d} : Set :=
match d with
| two c1 c2 ⇒ (Cont cforces w c1) + (Cont cforces w c2)
| dis c1 d2 ⇒ (Cont cforces w c1) + (Cont dforces w d2)
end.
Fixpoint eforces (w:K)(e:ENF) {struct e} : Set :=
match e with
| cnf c ⇒ cforces w c
| dnf d ⇒ dforces w d
end.
Fixpoint sforces (w:K)(F:Formula) {struct F} : Set :=
match F with
| prop p ⇒ pforces w p
| disj F G ⇒ (Cont sforces w F) + (Cont sforces w G)
| conj F G ⇒ (Cont sforces w F) × (Cont sforces w G)
| impl F G ⇒ ∀ w’,
le w w’ → (Cont sforces w’ F) → (Cont sforces w’ G)
end.
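For orientation (our own unfolding, stated over the abstract forcing structure; not part of the accompanying development), the clause for implication indeed gives the expected Kripke-style function space over future worlds, and this holds by pure computation:
(* A small sanity check, assuming we are inside the context where K, le,
   pforces, Cont and the fixpoints above are defined. *)
Lemma sforces_impl_unfold (w : K) (p q : Proposition) :
  sforces w (impl (prop p) (prop q)) =
  (forall w', le w w' -> Cont sforces w' (prop p) -> Cont sforces w' (prop q)).
Proof. reflexivity. Qed.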
Given these definitions, we can write an evaluator for compact
terms, actually two simultaneously defined evaluators evalc and
evalb, proceeding by induction on the input term.
Theorem evalc {c} : (HSc c → ∀ {w}, Cont cforces w c)
with evalb {b c0} : (HSb c0 b → ∀ {w},
Cont cforces w c0 → Cont bforces w b).
Module Type ForcingStructure.
Parameter K : Set.
Parameter le : K → K → Set.
Parameter pforces : K → Proposition → Set.
Parameter Answer : Set.
Parameter X : K → Answer → Set.
End ForcingStructure.
An evaluator for usual lambda terms can also be defined, by
induction on the input term. A helper function lforces analogous to
the list map function for sforces is necessary.
The continuation monad is polymorphic and instantiable by
a forcing fixpoint f and a world w. It ensures that the preorder
relation is respected; intuitively, this has to do with preserving the
monotonicity of context free variables: we cannot ‘forget’ a free
variable i.e. contexts cannot decrease.
Fixpoint
lforces (w:K)(Gamma:list Formula) {struct Gamma} :
Set :=
match Gamma with
| nil ⇒ unit
| cons A Gamma0 ⇒
Cont sforces w A × lforces w Gamma0
end.
Definition Cont {class:Set}(f :K→class→Set)(w:K)(x:class)
:= ∀ (x0:Answer), ∀ {w’}, le w w’ →
Inductive le : CNF → CNF → Set :=
| le refl : ∀ {w}, le w w
| le cons : ∀ {w1 w2 c b},
le w1 w2 → le w1 (con c b w2).
Theorem eval {A Gamma} : ND Gamma A → ∀ {w},
lforces w Gamma → Cont sforces w A.
The novelty of our implementation (besides isolating the compact term calculus itself), in comparison to previous type-directed
partial evaluators for the lambda calculus with sums, consists in
showing that one can go back and forth between the semantic annotation at a type F and the semantic annotation of the normal form
enf (F ). The proof of this statement needs a number of auxiliary
lemmas that we do not mention in the paper. We actually prove two
statements simultaneously, f2f and f2f’, declared as follows.
Definition le := le .
Definition le refl : ∀ {w}, le w w.
Definition pforces := fun w p ⇒ HSb w (prp p).
Definition Answer := Base.
Definition X := fun w b ⇒ HSb w b.
End structureHS.
Theorem f2f :
(∀ F, ∀ w, Cont sforces w F → Cont eforces w (enf F))
with f2f’ :
(∀ F, ∀ w, Cont eforces w (enf F) → Cont sforces w F).
Using the instantiated forcing structures, we can provide reification functions for terms of the lambda calculus,
Theorem sreify : (∀ F w, Cont sforces w F → ND w F)
with sreflect : (∀ F w, ND w F → Cont sforces w F).
As one can see from their type signatures, f2f and f2f’ provide a link
between the semantics of the standard lambda calculus for sums
(ND) and the semantics of our compact calculus(HSc/HSb).
We move forward to describing the reification phase. In this
phase, two instantiations of a forcing structure are needed. Unlike
the evaluators, which can work over an abstract forcing structure,
the reifiers need concrete instantiations built from the syntax of the
term calculus in order to produce syntactic normal forms.
The first instantiation is a forcing structure for the standard
lambda calculus with sums. The set of worlds is the set of contexts
(lists of types), the preorder on worlds is defined as the prefix relation
on contexts, the forcing of an atomic type p is the set of terms of type
p in the context w, and the answer type of the continuation monad
is the set of terms of type F in the context w. One could be more
precise, and instantiate the answer type by the set of normal/neutral
terms, like it has been done in most other implementations of TDPE,
and in our own prior works, but for the sake of simplicity, we do not
make that distinction in this paper.
and for our compact terms:
Theorem creify :
(∀ c w, Cont cforces w c → HSc (explogn c (cnf w)))
with creflect : (∀ c w, Cont cforces (ntimes c w) c)
with dreify : (∀ d w, Cont dforces w d → HSb w (bd d))
with dreflect : (∀ d c1 c2 c3,
HSc (explogn c1 (cnf (ntimes c3 (con c1 (bd d) c2)))) →
Cont dforces (ntimes c3 (con c1 (bd d) c2)) d).
The reifier for atomic types, preify, is not listed above, because it is
simply the ‘run’ operation on the continuation monad. As usually in
TDPE, every reification function required its own simultaneously
defined reflection function.
Finally, one can combine the reifiers, the evaluators, and the functions f2f and f2f’, in order to obtain both a normalizing converter of
lambda terms into compact terms (called nbe in the Coq implementation), and a converter of compact terms into lambda terms (called
ebn in the Coq implementation). One can, if one desires, also define
only a partial evaluator of lambda terms and only a partial evaluator
of compact terms.
Module structureND <: ForcingStructure.
Definition K := list Formula.
Inductive le : list Formula → list Formula → Set :=
| le refl : ∀ {w}, le w w
| le cons : ∀ {w1 w2 F},
le w1 w2 → le w1 (cons F w2).
Definition le := le .
5. Conclusion
Summary of our results We have brought into relation two distinct fundamental problems of the lambda calculi underlying modern
functional programming languages, one concerning identity of types,
and the other concerning identity of terms, and we have shown how
improved understanding of the first problem can lead to improved
understanding of the second problem.
We started by presenting a normal form of types, the explog normal form, that is a systematic ordering of the high-school
identities allowing for a type to be mapped to normal form. This
can be used as a simple heuristic for deciding type isomorphism, a
first such result for the type language {→, +, ×}. We beleive that
the link established to analysis and abstract algebra (the exp-log
decomposition produces a pair of homomorphisms between the
additive and the multiplicative group in an exponential field) may
also be beneficial to programming languages theory in the future.
The typing restrictions imposed to lambda terms in exp-log
normal form allowed us to decompose the standard axioms for =βη
into a proper and simpler subset of themselves, =eβη . As far as we
are aware, this simpler axiomatization has not been isolated before.
Definition le refl : ∀ {w}, le w w.
Definition pforces := fun w p ⇒ ND w (prop p).
Definition Answer := Formula.
Definition X := fun w F ⇒ ND w F.
End structureND.
The second instantiation is a forcing structure for our calculus
of compact terms. The set of worlds is the same as the set of CNFs,
because our context are simply CNFs, the preorder is the prefix
relation on CNFs, the forcing of atomic types are terms of atomic
types, and the answer type of the continuation monad is the set of
terms at base type.
Module structureHS <: ForcingStructure.
Definition K := CNF.
Even more pleasingly, the new axiomatization disentangles the old
one, in the sense that left-hand sides and right-hand sides of the
equality axioms can no longer overlap.
Finally, we ended by giving a compact calculus of terms that
can be used as a more canonical alternative to the lambda calculus
when modeling the core of functional programming languages: the
new syntax does not allow for the η-axioms of Figure 2 even to be stated, with the exception of η+^e that is still present, albeit with
a restricted type. As our method exploits type information, it is
orthogonal to the existing approaches that rely on term analysis
(discussed below), and hence could be used in addition to them; we
hope that it may one day help with addressing the part of η-equality
that is still beyond decision procedures. We also implemented and
described a prototype converter from/to standard lambda terms.
In the future, we would like to derive declarative rules to describe
more explicitly the extent of the fragment of =βη decided by our
heuristic, although implicitly that fragment is determined by the
reduction to ENF congruence classes explained in Section 3. It
should be noted that in this respect our heuristic is no less explicitly described than the only other published one (Balat et al. 2004)
(reviewed below).
Related work  Dougherty and Subrahmanyam (Dougherty and Subrahmanyam 1995) show that the equational theory of terms (morphisms) for almost bi-Cartesian closed categories is complete with respect to the set-theoretic semantics. This presents a generalization of Friedman’s completeness theorem for the simply typed lambda calculus without sums (Cartesian closed categories) (Friedman 1975).
Ghani (Ghani 1995) proves βη-equality of terms of the lambda calculus with sum types to be decidable, first proceeding by rewriting and eta-expansion, and then checking equality up to commuting conversions by interpreting terms as finite sets of quasi-normal forms; no canonical normal forms are obtained.
When sums are absent, the existence of a confluent and strongly normalizing rewrite system proves the existence of canonical normal forms, and then decidability is a simple check of syntactic identity of canonical forms. Nevertheless, even in the context when sums are absent, one may be interested in getting term representations that are canonical modulo type isomorphism, as in the recent works of Díaz-Caro, Dowek, and Martínez López (Díaz-Caro and Dowek 2015; Díaz-Caro and Martínez López 2016).
Altenkirch, Dybjer, Hofmann, and Scott (Altenkirch et al. 2001) give another proof of decidability of βη-equality for the lambda calculus with sums by carrying out a normalization-by-evaluation argument in category theory. They provide a canonical interpretation of the syntax in the category of sheaves for the Grothendieck topology over the category of constrained environments, and they claim that one can obtain an algorithm for a decision procedure by virtue of the whole development being formalizable in extensional Martin-Löf type theory.
In the absence of η+ (Dougherty 1993), or for the restriction of η+ to N being a variable (Di Cosmo and Kesner 1993), a confluent and strongly normalizing rewrite system exists, hence canonicity of normal forms for such systems follows.
In (Balat et al. 2004), Balat, Di Cosmo, and Fiore present a notion of normal form which is a syntactic counterpart to the notion of normal forms in sheaves of Altenkirch, Dybjer, Hofmann, and Scott. However, the forms are not canonical, as there may be two different syntactic normal forms corresponding to a single semantic one. They also say they believe (without further analysis or proof) that one can get canonical normal forms if one considers an ordering of nested δ-expressions.
The normal forms of Balat et al. are sophisticated and determining if something is a normal form relies on comparing sub-terms up to a congruence relation ≈ on terms; essentially, this congruence allows one to identify terms such as the ones of our Example 5 and Example 6. For determining if something is a normal form, in addition to the standard separation of neutral vs normal terms, one uses three additional criteria: (A) δ-expressions that appear under a lambda abstraction must only case-analyze terms involving the abstracted variable; (B) no two terms which are equal modulo ≈ can be case-analyzed twice; in particular, no term can be case-analyzed twice; (C) no case analysis can have its two branches equal modulo ≈. To enforce condition A, particularly powerful control operators, set/cupto, are needed in the implementation, requiring a patch of the OCaml toplevel. Using our compact terms instead of lambda terms should help get rid of condition A (hence set/cupto), since as we showed, keeping a constructor for λ’s in the representation of normal forms is not necessary. On the other hand, we could profit from implementing checks such as B and C in our implementation; however, our goal was to see how far we can get in a purely type-directed way without doing any program analysis.
A final small remark about this line of work: in (Balat 2009), Balat used the word “canonical” to name his normal forms, but this does not preserve the usual meaning of that word, as shown in the previous article (Balat et al. 2004).
Lindley (Lindley 2007) presents another proof of decidability of βη-equality for the lambda calculus with sums, based on an original decomposition of the η+ -axiom into four axioms involving evaluation contexts (the proof of this decomposition, Proposition 1, is unfortunately only sketched); the proof of decidability uses rewriting modulo the congruence relation ≈ of Balat et al.
Scherer (Scherer 2015) reinterprets Lindley’s rewriting approach to decidability in the setting of the structural proof theory of maximal multi-focusing, where he brings it in relation to the technique of preemptive rewriting (Chaudhuri et al. 2008). Scherer seems to derive canonicity of his normal forms for natural deduction from Lindley’s results, although the latter does not seem to show that canonical forms are a result of his rewriting decidability result.
The idea to apply type isomorphism in order to capture equality of terms has been used before (Ahmad et al. 2010), but only implicitly. Namely, in the focusing approach to sequent calculi (Liang and Miller 2007), one gets a more canonical representation of terms (proofs) by grouping all so-called asynchronous proof rules into blocks called asynchronous phases. However, while all asynchronous proof rules are special kinds of type isomorphisms, not all possible type isomorphisms are accounted for by the asynchronous blocks: sequent calculi apply asynchronous proof rules superficially, by looking at the top-most connectives, but normalizing sequents (formulas) to their exp-log normal form applies proof rules deeply inside the proof tree. Our approach can thus also be seen as moving focusing proof systems in the direction of so-called deep inference systems.
References
A. Ahmad, D. Licata, and R. Harper. Deciding coproduct equality with
focusing. Manuscript, 2010.
T. Altenkirch, P. Dybjer, M. Hofmann, and P. Scott. Normalization by
evaluation for typed lambda calculus with coproducts. In Logic in
Computer Science, 2001. Proceedings. 16th Annual IEEE Symposium on,
pages 303–310, 2001.
V. Balat. Keeping sums under control. In Workshop on Normalization by
Evaluation, pages 11–20, Los Angeles, United States, Aug. 2009.
V. Balat, R. Di Cosmo, and M. Fiore. Extensional normalisation and
type-directed partial evaluation for typed lambda calculus with sums.
In Proceedings of the 31st ACM SIGPLAN-SIGACT Symposium on
Principles of Programming Languages, POPL ’04, pages 64–76, New
York, NY, USA, 2004. ACM.
S. N. Burris and K. A. Yeats. The saga of the high school identities. Algebra
Universalis, 52:325–342, 2004.
K. Chaudhuri, D. Miller, and A. Saurin. Canonical Sequent Proofs via
Multi-Focusing, pages 383–396. Springer US, Boston, MA, 2008. ISBN
978-0-387-09680-3.
O. Danvy, C. Keller, and M. Puech. Typeful Normalization by Evaluation.
In P. L. Hugo Herbelin and M. Sozeau, editors, 20th International
Conference on Types for Proofs and Programs (TYPES 2014), volume 39
of Leibniz International Proceedings in Informatics (LIPIcs), pages 72–
88, Dagstuhl, Germany, 2015. Schloss Dagstuhl–Leibniz-Zentrum fuer
Informatik. ISBN 978-3-939897-88-0.
R. Di Cosmo and D. Kesner. A confluent reduction for the extensional
typed λ-calculus with pairs, sums, recursion and terminal object. In
A. Lingas, R. Karlsson, and S. Carlsson, editors, Automata, Languages
and Programming, volume 700 of Lecture Notes in Computer Science,
pages 645–656. Springer Berlin Heidelberg, 1993.
A. Dı́az-Caro and G. Dowek. Simply typed lambda-calculus modulo type
isomorphisms. Draft at https://hal.inria.fr/hal-01109104, 2015.
A. Dı́az-Caro and P. E. Martı́nez López. Isomorphisms considered as equalities: Projecting functions and enhancing partial application through an
implementation of λ+ . In IFL 2015: Symposium on the implementation
and application of functional programming languages. ACM, 2016. To
appear. Preprint at arXiv:1511.09324.
D. Dougherty. Some lambda calculi with categorical sums and products. In
Rewriting Techniques and Applications, pages 137–151. Springer, 1993.
D. J. Dougherty and R. Subrahmanyam. Equality between functionals
in the presence of coproducts. In Proceedings of the 10th Annual
IEEE Symposium on Logic in Computer Science, LICS ’95, pages 282–,
Washington, DC, USA, 1995. IEEE Computer Society.
M. Fiore, R. D. Cosmo, and V. Balat. Remarks on isomorphisms in typed
lambda calculi with empty and sum types. Annals of Pure and Applied
Logic, 141:35–50, 2006.
H. Friedman. Equality between functionals. In Logic Colloquium ’73,
volume 453 of Lecture Notes in Mathematics, pages 22–37. Springer,
1975.
N. Ghani. βη-equality for coproducts. In Typed Lambda Calculi and
Applications, pages 171–185. Springer, 1995.
G. H. Hardy. Orders of Infinity. The ‘Infinitärcalcül’ of Paul Du Bois-Reymond. Cambridge Tracts in Mathematics and Mathematical Physics.
Cambridge University Press, 1910.
D. Ilik. Axioms and decidability for type isomorphism in the presence of
sums. In Proceedings of the Joint Meeting of the Twenty-Third EACSL
Annual Conference on Computer Science Logic (CSL) and the Twenty-Ninth Annual ACM/IEEE Symposium on Logic in Computer Science
(LICS), CSL-LICS ’14, pages 53:1–53:7, New York, NY, USA, 2014.
ACM.
C. Liang and D. Miller. Focusing and polarization in intuitionistic logic. In
J. Duparc and T. A. Henzinger, editors, Computer Science Logic, volume
4646 of Lecture Notes in Computer Science, pages 451–465. Springer
Berlin Heidelberg, 2007. ISBN 978-3-540-74914-1.
S. Lindley. Extensional rewriting with sums. In S. R. Della Rocca, editor,
Typed Lambda Calculi and Applications, volume 4583 of Lecture Notes
in Computer Science, pages 255–271. Springer Berlin Heidelberg, 2007.
M. Rittri. Using types as search keys in function libraries. J. Funct. Program.,
1(1):71–89, 1991.
G. Scherer. Multi-Focusing on Extensional Rewriting with Sums. In T. Altenkirch, editor, 13th International Conference on Typed Lambda Calculi
and Applications (TLCA 2015), volume 38 of Leibniz International Proceedings in Informatics (LIPIcs), pages 317–331, Dagstuhl, Germany,
2015. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. ISBN 978-3-939897-87-3.
Random Access Communication for Wireless
Control Systems with Energy Harvesting Sensors
arXiv:1801.10141v1 [math.OC] 30 Jan 2018
Miguel Calvo-Fullana, Carles Antón-Haro, Javier Matamoros, and Alejandro Ribeiro
Abstract—In this paper, we study wireless networked control
systems in which the sensing devices are powered by energy
harvesting. We consider a scenario with multiple plants, where
the sensors communicate their measurements to their respective
controllers over a shared wireless channel. Due to the shared
nature of the medium, sensors transmitting simultaneously can
lead to packet collisions. In order to deal with this, we propose
the use of random access communication policies and, to this end,
we translate the control performance requirements to successful
packet reception probabilities. The optimal scheduling decision
is to transmit with a certain probability, which is adaptive to
plant, channel and battery conditions. Moreover, we provide
a stochastic dual method to compute the optimal scheduling
solution, which is decoupled across sensors, with only some of the
dual variables needing to be shared between nodes. Furthermore,
we also consider asynchronicity in the values of the variables
across sensor nodes and provide theoretical guarantees on the
stability of the control systems under the proposed random access
mechanism. Finally, we provide extensive numerical results that
corroborate our claims.
Index Terms—Energy harvesting, networked control systems,
random access communication.
I. INTRODUCTION
The rapid pace of development of technologies such as
robotic automation, smart homes, autonomous transportation,
and the internet of things is causing a dramatic increase in the
average number of sensors in modern control systems. Usually,
the previously mentioned technologies rely on networked control systems, and tend to incorporate wireless sensing devices
to perform the monitoring of physical processes. These sensors
might be deployed in large quantities and over large areas,
making the replacement of their batteries a difficult and costly
task. This has led to an increasing interest in alternative ways
of powering wireless devices. An important technology that
has recently emerged as capable of alleviating the limitations
imposed by traditional battery operation is Energy Harvesting
(EH). The use of energy harvesting technologies allows the
devices to obtain energy from their environment (with common
sources being solar, wind or kinetic energy [2]). In turn, this
removes some of the limitations imposed by traditional battery
This work is supported by ARL DCIST CRA W911NF-17-2-0181 and the
Intel Science and Technology Center for Wireless Autonomous Systems.
M. Calvo-Fullana, and A. Ribeiro are with the Department of Electrical and
Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104,
USA (e-mail: [email protected]; [email protected]).
C. Antón-Haro, and J. Matamoros are with the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC/CERCA), 08860 Castelldefels, Barcelona,
Spain (e-mail: [email protected]; [email protected]).
This work has been presented in part at the 2017 American Control
Conference (ACC) [1].
operation and grants an increase to the expected lifetime of the
devices.
The study of communication systems powered by energy
harvesting has recently received considerable attention. Current results available in the literature range from throughput maximization [3]–[6] and source-channel coding [7]–[10] to estimation [11]–[13], among others (see [14] for a comprehensive overview). However, in general, limited attention has been
given to the use of energy harvesting technologies in control
applications. Most of the works currently available deal with
the estimation of dynamical systems with sensors powered by
energy harvesting [15]–[18]. Nonetheless, closed-loop system stability under energy harvesting constraints has received less attention, and has been addressed only for single-plant scenarios [19], [20].
In this paper, we consider the multi-plant problem of
scheduling communication between sensor nodes and their respective controllers. For control systems with classically powered sensor nodes (i.e., not energy harvesting), the scheduling
problem in wireless networked control systems has been previously studied in several forms. The most common approach to
this problem is the design of centralized scheduling policies.
In such setup, in order to avoid packet collisions between the
transmissions of the nodes, there exists an overseeing entity
specifying which sensor is allowed to transmit at a given time
slot. These types of policies might be static [21], [22] or of a
more dynamic nature, where centralized decisions can be taken
based on plant state information [23] or others. Decentralized
policies have received less attention, with the authors in [24]
proposing a random access mechanism that adapts to channel
conditions.
In our case, we study the scenario in which the sensors are
powered by energy harvesting, and we focus on the design of
decentralized scheduling policies. We consider the coexistence
of multiple plants, with sensor nodes transmitting their measurements to their controllers over a shared wireless medium.
Due to this, multiple sensors accessing the medium at the same
time can cause collisions, leading to the unsuccessful reception
of the sensor measurements by the controller. To mitigate
this, we propose to use a random access communication
scheme. First, we abstract the required control performance
into a required successful packet reception probability. Under
this abstraction, a Lyapunov function of each control loop is
required to decrease at a given average rate. Then, we pose
the random access mechanism as a stochastic optimization
problem where the required successful reception probabilities
act as constraints of the optimization problem. Then, the
energy harvesting constraints are introduced into the problem
in an average manner, and we modify the formulation to allow us to ensure time slot to time slot causality in the stochastic framework. To solve the optimization problem, we resort to a primal-dual stochastic subgradient method [25]. At a given time slot, the resulting scheduling decision is to transmit with a certain probability, which is adaptive to the plant, channel and battery conditions. The resulting policy requires minimal coordination, with only some of the dual variables being shared between the sensors. Furthermore, we consider the possibility of asynchronism between the sensor nodes (i.e., nodes with outdated dual information) and provide theoretical guarantees that ensure the stability of all control loops under these conditions when using our proposed scheme. Finally, we validate our policy by means of simulations, which illustrate its ability to adapt to environmental conditions and ensure the stability of all control loops.
The rest of the paper is organized as follows. In Section II we introduce the system model and provide details on its control, communication and control performance aspects. Section III develops the proposed random access communication scheme and we discuss how to adapt it to deal with energy harvesting. In Section IV we introduce the algorithm used to obtain the random access communication policy. The stability of the system under the proposed policy is studied in Section V. After this, we devote Section VI to simulations assessing the performance of the proposed random access mechanism. Finally, we provide some concluding remarks in Section VII.
II. SYSTEM MODEL
Fig. 1. System model.
Consider the system model shown in Figure 1. This scenario consists of M different plants, which have their system state measured by sensor nodes powered by energy harvesting. The energy harvesting process imposes causality constraints on the transmission capabilities of the sensors, as sensors cannot transmit if they have not harvested sufficient energy. The measurements collected by the sensor nodes have then to be wirelessly transmitted to their respective controllers in order to ensure plant stability. However, the wireless medium over which the sensors transmit is shared. This implies that multiple sensors transmitting simultaneously can lead to packet collisions, with the consequent lack of packet delivery. It is our objective to design transmission policies that adapt to the wireless medium and the energy harvesting process of the sensors, and are capable of stabilizing all control loops.
A. Control Model
We consider a group of M plants and use xi [t] ∈ Rni to
denote the state of the i-th plant at time t. Plant dynamics
are dictated by a linear time-invariant system in which plant
control is contingent on the successful reception of information
from the sensors. Define then the indicator variable γi [t] ∈
{0, 1} to signify with the value γi [t] = 1 that the transmission
of the i-th sensor at the t-th time slot has been successfully
received by the i-th controller. If information is successfully
received, we have γi [t] = 1, in which case the controller closes
the loop and the state evolves according to the closed loop
dynamics described by the matrix Ac,i ∈ Rni ×ni . If, on the
contrary, γi [t] = 0, the state evolves in open loop as described
by the matrix Ao,i ∈ Rni ×ni . We then have that the state xi [t]
evolves according to the switched linear dynamics

x_i[t+1] = A_{c,i} x_i[t] + w_i[t]  if γ_i[t] = 1,   and   x_i[t+1] = A_{o,i} x_i[t] + w_i[t]  if γ_i[t] = 0,   (1)

where w_i[t] corresponds to independent and identically distributed (i.i.d.) Gaussian noise with covariance C_i. The design
of the controllers is not the focus of this paper. The matrices
are assumed given and are such that the closed loop matrix
Ac,i produces stable dynamics. The open loop matrix Ao,i
may produce stable or unstable dynamics but the problem is
of most interest when the open loop dynamics are unstable.
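To make the switched dynamics (1) concrete, the following Python sketch simulates a single scalar plant under an i.i.d. Bernoulli reception process; the closed- and open-loop gains, the noise variance and the reception probability are illustrative assumptions rather than values taken from the paper.

import numpy as np

# Sketch of the switched dynamics (1) for one scalar plant. A_c (stable
# closed loop), A_o (unstable open loop), noise variance C and reception
# probability p are assumed values for illustration only.
A_c, A_o, C, p = 0.5, 1.2, 0.1, 0.7
rng = np.random.default_rng(0)
x = np.zeros(1000)
for t in range(999):
    gamma = rng.random() < p              # gamma_i[t]: packet received this slot
    A = A_c if gamma else A_o             # closed- vs open-loop dynamics
    x[t + 1] = A * x[t] + rng.normal(scale=np.sqrt(C))   # disturbance w_i[t]
print("empirical second moment of the state:", np.mean(x ** 2))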
B. Communication Model
In the control model we have defined the variables γi [t] to
signify the successful reception of the sensor measurements.
This is a random variable whose distribution is dependent on
the chosen communication policy. We consider a time-slotted
communication model. At every time slot t, the i-th sensor
node decides to transmit with probability zi [t] ∈ [0, 1], where
we denote zi [t] as the scheduling variable. Then, if multiple
sensors transmit during the same time slot, we consider that
a collision occurs with probability qc ∈ [0, 1]. If a collision occurs, then none of the colliding packets are received.
Therefore, the probability of the i-th sensor node transmitting
at time slot t and not colliding with any other transmission is given by z_i[t] ∏_{j≠i} (1 − q_c z_j[t]). Apart from collisions,
packet loss can also occur due to incorrect decoding. The
probability of successful decoding is dependent on the channel
conditions at the i-th link during time slot t, which we denote
by hi [t]. Channel states are considered independent across the
M systems. Further, we consider a block fading model [26],
whereby the channel states hi [t] are i.i.d. over time slots and
constant during a time slot. The probability of successfully
decoding a packet given the channel state is denoted by
q(hi [t]), which is a continuous and strictly increasing function
q : R+ → [0, 1] (We show in Fig. 2 a typical decoding
function). Also, for notational compactness, we define q_i[t] ≜ q(h_i[t]). Then, the probability of successful reception γ_i[t] is given by

Pr(γ_i[t] = 1) = q_i[t] z_i[t] ∏_{j≠i} (1 − q_c z_j[t]).   (2)

Fig. 2. Probability of decoding as a function of the channel state.

This expression simply corresponds to the successful decoding probability multiplied by the probability of transmitting without colliding. Also, we assume that sensor nodes have knowledge of their channel state before transmitting (in practice, this is usually achieved with pilot signals [26]).
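As an illustration of the reception model, the sketch below evaluates (2) for a small network; the decoding map q(·) is only a placeholder with the stated shape (continuous, strictly increasing, mapping R_+ into [0, 1]), and all numerical values are assumptions.

import numpy as np

def q(h):
    # Placeholder decoding probability: continuous, strictly increasing, in [0, 1].
    return 1.0 - np.exp(-h)

def reception_prob(i, z, h, qc):
    # Probability that sensor i transmits, does not collide, and is decoded, cf. (2).
    others = np.prod([1.0 - qc * z[j] for j in range(len(z)) if j != i])
    return q(h[i]) * z[i] * others

z = np.array([0.4, 0.3, 0.5])   # scheduling variables z_i[t] (assumed)
h = np.array([1.5, 0.8, 2.0])   # channel states h_i[t] (assumed)
print([round(reception_prob(i, z, h, qc=0.9), 3) for i in range(3)])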
C. Control Performance
The control loop of each plant is closed with a probability
given by equation (2). Since it is our objective to design
communication policies that satisfy a desired control performance, we aim to establish a relationship between the control
performance and the probability of successful reception. We
can do so by the following proposition.
Proposition 1 (Control performance abstraction [24]). Consider the switched system described by (1) with γi [t] given
by a sequence of i.i.d. Bernoulli random variables, and the
quadratic Lyapunov function V_i(x_i) = x_i^T P_i x_i, with P_i ∈ R^{n_i×n_i} positive definite. Then the function V_i(x_i) decreases
at an average rate ρ_i < 1, in the sense that

E[V_i(x_i[t+1]) | x_i[t]] ≤ ρ_i V_i(x_i[t]) + tr(P_i C_i),   (3)

if and only if Pr(γ_i[t] = 1) ≥ p_i, where p_i is given by

p_i = min { θ ≥ 0 : θ A_{c,i}^T P_i A_{c,i} + (1 − θ) A_{o,i}^T P_i A_{o,i} ⪯ ρ_i P_i }.   (4)
Proof. By particularizing the function V_i(x_i) = x_i^T P_i x_i with the system dynamics (1), we can write

E[V_i(x_i[t+1]) | x_i[t]] = x_i[t]^T A_{c,i}^T P_i A_{c,i} x_i[t] Pr(γ_i[t] = 1) + x_i[t]^T A_{o,i}^T P_i A_{o,i} x_i[t] Pr(γ_i[t] = 0) + tr(P_i C_i).   (5)

Then, by substituting this expression in the left-hand side of the average decrease inequality (3), we obtain

x_i[t]^T A_{c,i}^T P_i A_{c,i} x_i[t] Pr(γ_i[t] = 1) + x_i[t]^T A_{o,i}^T P_i A_{o,i} x_i[t] Pr(γ_i[t] = 0) ≤ ρ_i x_i[t]^T P_i x_i[t].   (6)

Since this condition needs to hold for all x_i[t], we can equivalently rewrite it as the linear matrix inequality

A_{c,i}^T P_i A_{c,i} Pr(γ_i[t] = 1) + A_{o,i}^T P_i A_{o,i} (1 − Pr(γ_i[t] = 1)) ⪯ ρ_i P_i,   (7)

where we have also used the fact that Pr(γ_i[t] = 0) = 1 − Pr(γ_i[t] = 1). Then, the values of Pr(γ_i[t] = 1) satisfying this inequality define a convex set with a minimum value p_i, so the condition is equivalent to Pr(γ_i[t] = 1) ≥ p_i.
This proposition allows us to establish a connection between
the control performance and the packet transmissions. By
solving the semidefinite program (4), we obtain the successful
reception probabilities pi that allow us to satisfy the required
control performance. Then, we simply need to design communication policies that satisfy Pr(γi [t] = 1) ≥ pi for all
systems.
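The semidefinite program (4) can be handled by any off-the-shelf SDP solver. The following sketch uses cvxpy as one possible tool; the plant matrices and the Lyapunov matrix P_i are placeholders, and the code is a minimal rendering of (4), not the implementation behind the results in the paper.

import cvxpy as cp
import numpy as np

def min_success_prob(Ac, Ao, P, rho):
    # Smallest reception probability p_i for which the LMI in (4) holds.
    theta = cp.Variable(nonneg=True)
    lhs = theta * (Ac.T @ P @ Ac) + (1 - theta) * (Ao.T @ P @ Ao)
    prob = cp.Problem(cp.Minimize(theta), [rho * P - lhs >> 0, theta <= 1])
    prob.solve()
    return theta.value

# Placeholder plant data: stable closed loop, unstable open loop.
Ac = np.array([[0.5, 0.1], [0.0, 0.4]])
Ao = np.array([[1.1, 0.2], [0.0, 1.05]])
print(min_success_prob(Ac, Ao, P=np.eye(2), rho=0.95))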
III. RANDOM ACCESS COMMUNICATION
We aim to design communication policies that satisfy the
successful packet reception probabilities given by Proposition 1. Under an assumption of ergodic processes, the successful packet reception probabilities are given by the long
term behavior of expression (2). Hence, in order to stabilize
the control system to the required control performance, the
scheduling variables zi [t] need to satisfy the following long
term constraint

p_i ≤ lim_{t→∞} (1/t) ∑_{l=1}^{t} q_i[l] z_i[l] ∏_{j≠i} (1 − q_c z_j[l]).   (8)

Since we are working under the assumption of ergodicity, we can write the previous limit as the expected value over channel realizations. That is,

p_i ≤ E[ q_i z_i ∏_{j≠i} (1 − q_c z_j) ],   (9)

and, since scheduling decisions are independent across nodes, we can further rewrite the previous expression as

p_i ≤ E[q_i z_i] ∏_{j≠i} (1 − E[q_c z_j]).   (10)
Aside from the stabilization of all control loops, we also
want to minimize the number of times that a sensor node
accesses the medium. We do this by introducing the objective function ∑_{i=1}^{M} E[z_i^2]. Then, we formulate the following optimization problem

minimize_{z_i ∈ Z}  ∑_{i=1}^{M} E[z_i^2]   (11a)

subject to  p_i ≤ E[q_i z_i] ∏_{j≠i} (1 − E[q_c z_j]),  i = 1, . . . , M,   (11b)

where Z := {z_i : R_+ → [0, 1]} is the set of functions of the channel state taking values in [0, 1]. Notice that, while scheduling
decisions are statistically independent across sensors, solving
the previous optimization problem requires it being done in
a centralized manner (as constraint (11b) is coupled across
sensors). Nonetheless, we can separate the problem in a per
sensor manner by taking the logarithm of constraint (11b) as
follows
minimize_{z_i ∈ Z, s_ij ∈ [0,1]}  ∑_{i=1}^{M} E[z_i^2]   (12a)

subject to  log(p_i) ≤ log(s_ii) + ∑_{j≠i} log(1 − s_ij),  i = 1, . . . , M   (12b)

            s_ii ≤ E[q_i z_i],  i = 1, . . . , M   (12c)

            s_ij ≥ E[q_c z_j],  i = 1, . . . , M, j ≠ i   (12d)

where we have introduced the auxiliary variables s_ii and s_ij and converted the logarithm of the product into a sum of logarithms. Solving the optimization problem (12) is equivalent to solving (11). Under the assumption that this problem is strictly feasible, that is, that there exist schedules z_i capable of satisfying p_i < E[ q_i z_i ∏_{j≠i} (1 − q_c z_j) ], the goal is then to design an algorithm such that the instantaneous scheduling decisions z_i[t] satisfy E[z_i[t]] = z_i.
A. Random Access Communication with Energy Harvesting
We have proposed a random access optimization problem
that allows us to stabilize all control loops. However, the
formulation previously introduced does not account for either
the energy consumption nor the energy harvesting process.
We consider that the i-th sensor at time slot t acquires e_i[t] units of energy and stores it in a battery of finite capacity b_i^max. Further, we assume the energy harvesting process to be stationary with mean E[e_i[t]]. We consider that the sensor nodes consume one unit of energy per channel access, hence the scheduling variable also represents the power consumption of a medium access. Then, in order to ensure that the sensor nodes only use the energy available in their batteries, we have the following energy causality constraint

z_i[t] ≤ b_i[t],   (13)

where b_i[t] is the battery state of node i at time t. Further, the battery of the nodes evolves according to the following dynamics

b_i[t+1] = [ b_i[t] − z_i[t] + e_i[t] ]_0^{b_i^max},   (14)

where [·]_0^{b_i^max} denotes the projection onto the interval [0, b_i^max]. However, these constraints are coupled across time slots and cannot be directly introduced into the stochastic optimization problem (12). In order to circumvent this, we consider the long-term behavior of the energy causality constraints (13), which, by recursively substituting the battery dynamics (14) into (13), can be written as follows

lim_{t→∞} (1/t) ∑_{l=1}^{t} z_i[l] ≤ lim_{t→∞} (1/t) ∑_{l=1}^{t} e_i[l].   (15)

That is, in the long term, the energy spent is dominated by the harvested energy. Then, due to the ergodicity of the scheduling variables z_i[t] and the energy harvesting process e_i[t], the previous expression (15) can simply be written as an expectation with respect to the channel states h_i[t] and the energy harvesting process e_i[t], as follows

E[z_i] ≤ E[e_i].   (16)
This constraint simply implies that, on average, the energy
spent for transmitting has to be lower than the harvested energy. Then, by introducing constraint (16) into the optimization
problem (12) we have the following problem
minimize_{z_i ∈ Z, s_ij ∈ [0,1]}  ∑_{i=1}^{M} E[z_i^2]   (17a)

subject to  log(p_i) ≤ log(s_ii) + ∑_{j≠i} log(1 − s_ij),  i = 1, . . . , M   (17b)

            s_ii ≤ E[q_i z_i],  i = 1, . . . , M   (17c)

            s_ij ≥ E[q_c z_j],  i = 1, . . . , M, j ≠ i   (17d)

            E[z_i] ≤ E[e_i],  i = 1, . . . , M.   (17e)
However, substituting the time slot to time slot constraints
(13) by the average ones (16) does not ensure that they are
satisfied at each time slot. This means that solutions to the
optimization problem (17) do not necessarily satisfy the energy
causality constraints zi [t] ≤ bi [t]. To overcome this problem,
and ensure causality, we introduce the following modified
problem formulation
minimize_{z_i ∈ Z, s_ij ∈ [0,1], y_ij ∈ [0, ȳ_ij]}  ∑_{i=1}^{M} E[z_i^2] + ∑_{i=1}^{M} ∑_{j=1}^{M} E[ν̄_ij y_ij]   (18a)

subject to  log(p_i) ≤ log(s_ii) + ∑_{j≠i} log(1 − s_ij),  i = 1, . . . , M   (18b)

            s_ii ≤ E[q_i z_i] + y_ii,  i = 1, . . . , M   (18c)

            s_ij ≥ E[q_c z_j] − y_ij,  i = 1, . . . , M, j ≠ i   (18d)

            E[z_i] ≤ E[e_i],  i = 1, . . . , M.   (18e)

This optimization problem has been modified by the introduction of the auxiliary variables y_ij in constraints (18c) and (18d), as well as in the objective function. The auxiliary variable y_ij is forced to take values in the interval y_ij ∈ [0, ȳ_ij], where ȳ_ij is a system-dependent constant. The term ∑_{i=1}^{M} ∑_{j=1}^{M} E[ν̄_ij y_ij] in the objective function uses the constant ν̄_ij, whose value is an upper bound on the Lagrange multipliers of constraints (18c) and (18d). This modified problem formulation allows us to ensure that, even though the energy constraint (18e) is in average form, the energy causality constraints are satisfied on a time slot to time slot basis, as we will show in the upcoming sections.
IV. RANDOM ACCESS ALGORITHM
In this section, we aim to solve the optimization problem
(18). For notational compactness, let us define the vector z =
{zi , sij , yij } collecting all the primal variables and the vector
λ = {φi , νij , βi } collecting the dual variables. Further, we
collect the implicit primal variable constraints in the set X ,
{zi ∈ Z, sij ∈ [0, 1], yij ∈ [0, ȳij ]}. Then, the Lagrangian of
problem (18) can be written as
L(z, λ) = ∑_{i=1}^{M} E[z_i^2] + ∑_{i=1}^{M} ∑_{j=1}^{M} E[ν̄_ij y_ij]
        + ∑_{i=1}^{M} φ_i ( log(p_i) − log(s_ii) − ∑_{j≠i} log(1 − s_ij) )
        + ∑_{i=1}^{M} ν_ii ( s_ii − E[q_i z_i] − y_ii )
        + ∑_{i=1}^{M} ∑_{j≠i} ν_ij ( E[q_c z_j] − y_ij − s_ij )
        + ∑_{i=1}^{M} β_i ( E[z_i] − E[e_i] ).   (19)

The Lagrange dual function of this problem is given by

g(λ) = min_{z ∈ X} L(z, λ).   (20)

Note that, while the primal problem is infinite dimensional, the dual problem has a finite number of variables (the dual variables). Furthermore, for this problem, the duality gap can be shown to be zero [27]. Hence, we resort to a dual subgradient method to solve the optimization problem. However, the sensor nodes have no knowledge of the probability distribution over which the expectation is taken. In order to overcome this, we substitute the random variables by their instantaneous values, which are known by the sensors. Finally, by reordering the Lagrangian (19), the scheduling variables z_i[t] are given by the following minimization

z_i[t] := arg min_{z_i ∈ [0,1]} z_i ( z_i − ν_ii[t] q_i[t] + q_c ∑_{j≠i} ν_ji[t] + β_i[t] ),   (21)

which is separated across sensors and leads to the following closed-form solution

z_i[t] := [ (1/2) ( ν_ii[t] q_i[t] − q_c ∑_{j≠i} ν_ji[t] − β_i[t] ) ]_0^1.   (22)

The resulting optimal scheduling policy is to transmit at time slot t with the probability given by (22). This is a policy that dynamically adapts to the time-varying conditions of the system. Namely, the dual variables ν_ii[t] and ν_ji[t] depend on the stability of all the plants, the q_i[t] variable is dependent on the channel state h_i[t], and the dual variables β_i[t] depend on the energy harvesting process e_i[t]. The rest of the primal variables s_ii[t] and s_ij[t] can be found by the minimizations

s_ii[t] := arg min_{s_ii ∈ [0,1]} −φ_i[t] log(s_ii) + ν_ii[t] s_ii,   (23)

s_ij[t] := arg min_{s_ij ∈ [0,1]} −φ_i[t] log(1 − s_ij) − ν_ij[t] s_ij,   (24)

which, again, are separated across sensors and have the following closed-form solutions

s_ii[t] := [ φ_i[t] / ν_ii[t] ]_0^1,   s_ij[t] := [ 1 − φ_i[t] / ν_ij[t] ]_0^1.   (25)

In a similar way, the auxiliary variables y_ij[t] can be computed as the solution of the minimization

y_ij[t] := arg min_{y_ij ∈ [0, ȳ_ij]} y_ij ( ν̄_ij − ν_ij[t] ),   (26)

which is a thresholding condition. The auxiliary variable y_ij[t] takes the value y_ij[t] := 0 if ν_ij[t] ≤ ν̄_ij and y_ij[t] := ȳ_ij if ν_ij[t] > ν̄_ij. Next, since the dual function is concave, we can perform a subgradient ascent on the dual domain. The corresponding dual variable updates are given by

φ_i[t+1] := [ φ_i[t] + ε ( log(p_i) − log(s_ii[t]) − ∑_{j≠i} log(1 − s_ij[t]) ) ]^+   (27)

ν_ii[t+1] := [ ν_ii[t] + ε ( s_ii[t] − z_i[t] q_i[t] − y_ii[t] ) ]^+   (28)

ν_ij[t+1] := [ ν_ij[t] + ε ( q_c z_i[t] − s_ij[t] − y_ij[t] ) ]^+   (29)

β_i[t+1] := [ β_i[t] + ε ( z_i[t] − e_i[t] ) ]^+   (30)
where, in order to have an algorithm that can be run in an online manner, we have considered a fixed step size ε. For notational compactness, we also write the dual update in concatenated vector form as λ[t+1] := [λ[t] + ε s[t]]^+, where s[t] corresponds to the stochastic subgradient. The steps of the resulting random access mechanism are summarized in Algorithm 1.
Also, it is important to note that we can establish a parallel relationship between the dual variables β_i[t] associated with the energy constraint E[z_i] ≤ E[e_i] and the actual battery state b_i[t]. This relationship is given by the expression β_i[t] = ε (b_i^max − b_i[t]). Hence, a mirrored symmetry (scaled by the step size ε) exists between these variables. This relationship will be crucial in ensuring the energy causality of the algorithm, as we show next.
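A minimal per-slot rendering of the primal updates (22), (25), (26) and the ε-scaled dual updates (27)–(30) is sketched below for a single sensor with one interfering neighbor. The step size, the bounds ν̄ and ȳ, and the way the coupling term ∑_{j≠i} ν_ji[t] is obtained are assumptions made for illustration only.

import numpy as np

def clip01(x):
    # Projection onto [0, 1], i.e., [ . ]_0^1.
    return float(np.clip(x, 0.0, 1.0))

def sensor_slot(state, q_i, e_i, p_i, qc, nu_ji_sum, eps, nu_bar, y_bar):
    # One time slot of the per-sensor primal-dual updates; a single
    # interfering neighbor j is kept for brevity.
    phi, nu_ii, nu_ij, beta = state
    z = clip01(0.5 * (nu_ii * q_i - qc * nu_ji_sum - beta))          # (22)
    s_ii = clip01(phi / nu_ii) if nu_ii > 0 else 1.0                 # (25)
    s_ij = clip01(1.0 - phi / nu_ij) if nu_ij > 0 else 0.0           # (25)
    y_ii = 0.0 if nu_ii <= nu_bar else y_bar                         # (26)
    y_ij = 0.0 if nu_ij <= nu_bar else y_bar                         # (26)
    phi = max(0.0, phi + eps * (np.log(p_i) - np.log(max(s_ii, 1e-9))
                                - np.log(max(1.0 - s_ij, 1e-9))))    # (27)
    nu_ii = max(0.0, nu_ii + eps * (s_ii - z * q_i - y_ii))          # (28)
    nu_ij = max(0.0, nu_ij + eps * (qc * z - s_ij - y_ij))           # (29)
    beta = max(0.0, beta + eps * (z - e_i))                          # (30)
    return z, (phi, nu_ii, nu_ij, beta)

state = (0.0, 1.0, 1.0, 0.0)
z, state = sensor_slot(state, q_i=0.8, e_i=0.3, p_i=0.6, qc=0.9,
                       nu_ji_sum=0.5, eps=0.05, nu_bar=5.0, y_bar=110.0)
print("transmission probability for this slot:", z)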
A. Energy Causality
Now, we turn our attention to the study of the conditions
required to satisfy the causality constraints, i.e., zi [t] ≤ bi [t]
for all time slots. We have introduced the modified problem
formulation (18) to help achieve this. First, we show that
this problem formulation allows us to upper bound the dual
variables νij [t] over all time slots.
Proposition 2. Let the upper bound ȳ_ij of the auxiliary variables y_ij satisfy the inequality ȳ_ij ≥ (1/ε)(ν̄_ij + 2ε). Then, the dual variables ν_ij[t] are upper bounded by ν_ij[t] ≤ ν̄_ij + ε for all time slots t.
Algorithm 1 Random access scheduling algorithm.
1: Initialize: Set the dual variables to φ_i[0] := 0, ν_ij[0] := 0, and β_i[0] := ε (b_i^max − b_i[0]).
2: Step 1: Medium access decision
3:   z_i[t] := [ (1/2) ( ν_ii[t] q_i[t] − q_c ∑_{j≠i} ν_ji[t] − β_i[t] ) ]_0^1
4: Step 2: Other primal variables
5:   s_ii[t] := [ φ_i[t] / ν_ii[t] ]_0^1 and s_ij[t] := [ 1 − φ_i[t] / ν_ij[t] ]_0^1
6: Step 3: Auxiliary variable
7:   y_ij[t] := arg min_{y_ij ∈ [0, ȳ_ij]} y_ij ( ν̄_ij − ν_ij[t] )
8: Step 4: The sensor updates the dual variables
9:   φ_i[t+1] := [ φ_i[t] + ε ( log(p_i) − log(s_ii[t]) − ∑_{j≠i} log(1 − s_ij[t]) ) ]^+
10:  ν_ii[t+1] := [ ν_ii[t] + ε ( s_ii[t] − z_i[t] q_i[t] − y_ii[t] ) ]^+
11:  ν_ij[t+1] := [ ν_ij[t] + ε ( q_c z_i[t] − s_ij[t] − y_ij[t] ) ]^+
12:  β_i[t+1] := [ β_i[t] + ε ( z_i[t] − e_i[t] ) ]^+
13: Step 5: Set t := t + 1 and go to Step 1.
Proof. The dual variable ν_ij[t] is updated according to the equations

ν_ii[t+1] := [ ν_ii[t] + ε ( s_ii[t] − z_i[t] q_i[t] − y_ii[t] ) ]^+   (31)

ν_ij[t+1] := [ ν_ij[t] + ε ( q_c z_i[t] − s_ij[t] − y_ij[t] ) ]^+,   (32)

where the subgradient terms are upper bounded by 1, namely s_ii[t] − z_i[t] q_i[t] − y_ii[t] ≤ 1 and q_c z_i[t] − s_ij[t] − y_ij[t] ≤ 1 for all time slots t. Hence, the maximum increase of these dual variables in a given time slot is |ν_ij[t+1] − ν_ij[t]| ≤ ε for all i, j and t. Overall, the maximum value that the dual variables ν_ij[t] can take is controlled by the y_ij[t] term. As long as y_ij[t] = 0 the dual variables can increase in value, until ν_ij[t] = ν̄_ij + ε and the auxiliary variable condition in (26) is triggered, leading to the y_ij[t] term taking the value y_ij[t] = ȳ_ij. Then, the next update of the dual variable is given by

ν_ij[t+1] ≤ [ ν̄_ij + ε + ε − ε y_ij[t] ]^+ ≤ [ ν̄_ij + 2ε − ε (1/ε)(ν̄_ij + 2ε) ]^+ = 0.   (33)

Since after this event the dual variables take the value zero, the dual variables ν_ij[t] are necessarily upper bounded by ν_ij[t] ≤ ν̄_ij + ε for all time slots t.
This proposition states that by ensuring the correct value of
the parameter ȳij (which we can select freely), an upper bound
on νij [t] can be established. Then, by further appropriately
selecting the battery size b_i^max of the nodes, we can ensure that energy use is causal to the energy harvested.
Proposition 3 (Energy Causality). Let the battery capacity of the i-th sensor satisfy b_i^max ≥ (1/ε) ν̄_ii + 1 and let ȳ_ij ≥ (1/ε)(ν̄_ij + 2ε). Then, Algorithm 1 satisfies the energy consumption causality constraints z_i[t] ≤ b_i[t] for all time slots.
Proof. In order to satisfy the energy causality constraints z_i[t] ≤ b_i[t], it suffices to verify that no transmission occurs when there is no energy left in the battery. This implies that the scheduling variable z_i[t] has to take the value zero when the battery b_i[t] is empty. By equation (22), it suffices to satisfy ν_ii[t] q_i[t] − q_c ∑_{j≠i} ν_ji[t] − β_i[t] ≤ 0 when b_i[t] = 0. Note that the battery state b_i[t] and the battery multipliers β_i[t] are related by the expression β_i[t] = ε (b_i^max − b_i[t]). Hence, the battery being empty, b_i[t] = 0, implies the battery multipliers taking the value β_i[t] = ε b_i^max. Therefore, the condition to be satisfied is ν_ii[t] q_i[t] − q_c ∑_{j≠i} ν_ji[t] − ε b_i^max ≤ 0. Since q_i[t] ≤ 1 and q_c ≥ 0, we can further rewrite this inequality as ν_ii[t] − ε b_i^max ≤ 0. Then, by Proposition 2, we have the upper bound on the dual variables ν_ij[t] ≤ ν̄_ij + ε. This allows us to further rewrite the inequality as ν̄_ii + ε − ε b_i^max ≤ 0. Then, the battery size b_i^max ≥ (1/ε) ν̄_ii + 1 ensures this inequality and, hence, that the energy constraints z_i[t] ≤ b_i[t] are satisfied for all time slots.
According to this proposition, by choosing a sufficiently large battery size b_i^max we can make the energy consumption causal to the energy harvesting process. This is due to the
modified problem formulation proposed in (18). In the original
problem (17), a dual ascent algorithm can lead to the dual
variables becoming arbitrarily large. This is not the case
when introducing the auxiliary variables yij , as shown by
Proposition 2. Then, by Proposition 3, the bound on the dual
variables allows us to establish conditions on the battery size
that ensure energy causality.
B. Asynchronous Operation
In order for Algorithm 1 to properly function, the i-th
sensor node requires the dual variables νij [t] for j 6= i of
the other nodes. This is needed in order to compute the
optimal scheduling variable zi [t] (cf. equation (22)). In order
to ensure the robustness of our algorithm, we take into account
the notion of asynchronicity in the data shared across the
sensor nodes. This is to say that we consider the possibility of
different nodes having different (out of date) values of the dual
variables shared by the other nodes. Since nodes are powered
by energy harvesting, this might happen when a node is unable
to transmit or receive data due to lack of energy. Also, the
consideration of asynchronicity includes the practical case in
which the sensor nodes simply attach the value of their dual
variable to the packet containing the measurement. Therefore,
only sharing their dual variable when they need to transmit a
measurement to their controller. To consider this, we introduce
the asynchronicity model of [28] into our analysis.
Let us define the set T i ⊆ Z+ of all time slots in which
the i-th node is capable of receiving and sending information.
Then, we define a function π i [t], that for a given node and
time slot, returns the most recent time slot at which the node was available. Namely,

π^i[t] := max { t̂ | t̂ < t, t̂ ∈ T^i }.   (34)

In a similar manner, we then define the function π_j^i[t] := π^j(π^i[t]) to denote the most recent time slot at which the i-th node has received information sent by the j-th node. Then, at time slot t, the i-th node has knowledge of a possibly outdated vector of dual variables ν_ij, given by

ν̃_i[t] = ( ν_i1[π_1^i[t]], . . . , ν_iM[π_M^i[t]] ).   (35)

Further, we will denote by λ̃ the vector formed by the collection of the outdated duals together with the rest of the dual variables. Following this, we can incorporate the asynchronism into the dual variable update by defining the asynchronous stochastic subgradient s̃_{ν_ij}[t], corresponding to the ν_ij variable. We do so as follows:

s̃_{ν_ij}[t] = s_{ν_ij}[t] if t ∈ T^j, and s̃_{ν_ij}[t] = 0 otherwise.   (36)
This simply means that, if t ∈ T^j, the ascent direction given by the subgradient s_{ν_ij}[t] is available; otherwise, the dual variable is not updated. Then, we can concatenate the subgradients of all dual variables into an asynchronous stochastic subgradient vector s̃[t]. Afterwards, the dual variable update is simply given by the usual expression but with the asynchronous stochastic subgradient, i.e., λ[t+1] := [λ[t] + ε s̃[t]]^+.
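The bookkeeping behind the asynchronous model can be sketched as follows: each node records the last slot at which it actually received the other nodes’ multipliers, which is the role of π_j^i[t] in (34)–(35), and the largest observed staleness plays the role of the bound B in Assumption 4 below. The availability pattern and all numerical values are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(1)
M, T = 3, 200
# last[i, j]: most recent slot at which node i received node j's dual variables.
last = np.zeros((M, M), dtype=int)
for t in range(1, T):
    available = rng.random(M) < 0.8          # t in T^j: node j is up this slot
    for i in range(M):
        for j in range(M):
            if i != j and available[i] and available[j]:
                last[i, j] = t               # node i refreshes its copy of node j's duals
staleness = (T - 1) - last[~np.eye(M, dtype=bool)]
print("worst observed staleness (plays the role of B):", staleness.max())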
V. STABILITY ANALYSIS
In this section, we analyze the stability of the systems when
operating under the proposed random access communication
scheme. In order to do this, we leverage on the fact that
the proposed scheme is a stochastic subgradient algorithm.
Hence, we rely on duality theory arguments to show that the
iterates generated by Algorithm 1 satisfy the constraints of
the optimization problem (18) almost surely. Then, we further
show that if the constants ν̄_ij are chosen to upper bound the optimal Lagrange multipliers ν_ij^*, then the iterates generated
by Algorithm 1 also satisfy the constraints of the optimization
problem (17) (i.e., without the auxiliary variables). In turn, this
guarantees by Proposition 1 the stability of all control loops.
First, in order to ensure the convergence of Algorithm 1, we
need to assume an upper bound on the asynchronicity between
the sensor nodes.
Assumption 4. There exists an upper bound 0 < B < ∞ to
the asynchronicity between nodes, such that for all time t and
nodes i, j we have
Proposition 5. Given dual variables λ[t], the conditional expected value E[s[t] | λ[t]] of the stochastic subgradient s[t] is a subgradient of the dual function. Namely, for any λ,

E[s^T[t] | λ[t]] (λ[t] − λ) ≤ g(λ[t]) − g(λ).   (38)
Proof. We intend to show that the expected value of the
stochastic subgradient s[t] given λ[t] is a subgradient of the
dual function g(λ). To do this, we take the Lagrangian (19)
of optimization problem (18), given by
X
X
X
M
2
M
M
L(z, λ) =
i=1 E zi +
i=1
j=1 E ν̄ij yij
X
X
M
+
j6=i log (1 − sij )
i=1 φi log (pi ) − log (sii ) −
X
M
+
i=1 νii sii − E qi zi − yii
X
X
M
+
j6=i νij E qc zj − yij − sij
i=1
X
M
+
(39)
i=1 βi E zi − E ei .
Then, take the dual function at time t, denoted by g(λ[t])
and remember that the dual function is given by g(λ) =
minz∈X L(z, λ). The primal variables that minimize this dual
function are obtained by the primal minimization of Algorithm
1, namely, zi [t], sij [t] and yij [t], given by equations (22), (25),
and (26), respectively. Then, we write the dual function at time
t as
X
X
X
M
2
M
M
g(λ[t]) =
i=1 E zi [t] +
i=1
j=1 E ν̄ij yij [t]+
X
X
M
j6=i log (1 − sij [t])
i=1 φi [t] log (pi ) − log (sii [t]) −
X
M
+
i=1 νii [t] E sii [t] − qi [t]zi [t] − yii [t]
X
X
M
+
j6=i νij [t] E qc zj [t] − yij [t] − sij [t]
i=1
X
M
+
(40)
i=1 βi [t] E zi [t] − ei [t] ,
where, due to its linearity, we have moved the expectation E[·]
out of the subgradients. Now, by compacting the Lagrange
multipliers into a vector λ[t] and the subgradients to s[t], we
can rewrite the dual function at time t as
X
X
X
M
2
M
M
g(λ[t]) =
i=1 E zi [t] +
i=1
j=1 E ν̄ij yij [t]
T
+ E s [t]|λ[t] λ[t].
(41)
Further, for any arbitrary λ, the dual function g(λ) can be
bounded as
X
X
X
M
2
M
M
g(λ) ≤
i=1 E zi [t] +
i=1
j=1 E ν̄ij yij [t]
T
+ E s [t]|λ[t] λ.
(42)
(37)
Then, by subtracting expression (42) from (41), we obtain
E sT [t]|λ[t] λ[t] − λ ≤ g(λ[t]) − g(λ),
(43)
This assumption simply implies that nodes are at most B
time slots out of synchronism and it is required to ensure the
convergence of the variables. Now, we proceed to show the
convergence of Algorithm 1. We start by recalling a common
property of the subgradient method.
The previous proposition states that, on average, the stochastic subgradient s[t] is an ascent direction of the dual function
g(λ[t]). Then, the next step is to quantify the average reduction
in distance to the optimal dual variables λ? that occurs in a
dual variable update step.
max {0, t − B + 1} ≤ πji [t] ≤ t.
which is the desired inequality.
Lemma 6. Let E ks[t]k2 |λ[t] ≤ S 2 be a bound on the second
moment of the norm of the stochastic subgradients s[t]. The
dual updates of Algorithm 1, satisfy the following inequality
2
2
E kλ? −λ[t + 1] |λ[t] ≤ 1 − m + 22 LB λ? − λ[t]
+ 2 S 2 + 22 LBS 2 − g(λ? ) − g(λ[t]) ,
(44)
where the constant L > 0 corresponds to the L-Lipschitz
continuity of the gradients of the dual function g(λ) and
m > 0 to the strong concavity constant of the dual function
g(λ).
Proof. Let us consider the squared distance between the dual
iterates λ at time t+1 and their optimal value, i.e., λ? −λ[t+
2
+
1] . By means of the dual update λ[t + 1] = [λ[t] + s̃[t]] ,
we can rewrite this expression as
+ 2
2
λ? − λ[t + 1] = λ? − λ[t] + s̃[t]
2
≤ λ? − λ[t] − s̃[t]
(45)
where we have further upper bounded the expression by the
nonexpansive property of the nonnegative projection. Then,
we expand the square norm, yielding the expressions
λ? − λ[t + 1]
2
≤ λ? − λ[t]
2
+ 2 s̃[t]
2
− 2s̃T [t] λ? − λ[t] . (46)
We can further rewrite this inequality
by adding and subtract
ing the term 2sT [t] λ? − λ[t] to expression (46), leading to
the following
λ? − λ[t + 1]
2
2
2
≤ λ? − λ[t] + 2 s̃[t]
T ?
+ 2 s[t] − s̃[t]
λ − λ[t] − 2sT [t] λ? − λ[t] . (47)
By applying the Cauchy-Schwarz inequality to the third term
on the right hand side, we further rewrite the expression as
λ? − λ[t + 1]
2
+ 2 s[t] − s̃[t]
≤ λ? − λ[t]
2
+ 2 s̃[t]
2
λ? − λ[t] − 2sT [t] λ? − λ[t] . (48)
2
≤ λ? − λ[t]
+ 2L λ[t] − λ̃[t]
+ 2 s̃[t]
2
λ? − λ[t] − 2sT [t] λ? − λ[t] (49)
Now, we bound the difference λ[t] − λ̃[t] between the dual
variables λ[t] and their asynchronous counterpart λ̃[t]. Recall
that Assumption 4 states that there exists an asynchronicity
limit of B time slots between the global and asynchronous
variables. This means that the asynchronous dual variables
λ̃[t] are, at most, B subgradient steps out of synchronism.
We can then bound the difference by λ[t] − λ̃[t] ≤
Pt−1
Pt−1
l=t−B−1 s̃[l] , where we have also
l=t−B−1 s̃[l] ≤
applied the triangle inequality. Then, we write
?
λ − λ[t + 1]
2
?
≤ λ − λ[t]
+ 22 L
2
t−1
X
+
l=t−B−1
?
2
s̃[t]
s̃[l]
− 2sT [t] λ − λ[t] .
λ? − λ[t + 1]
2
≤ λ? − λ[t]
2
t−1
X
+ 22 L
2
+ 2 s̃[t]
2
s̃[l] + λ? − λ[t]
l=t−B−1
?
− 2s [t] λ − λ[t] .
T
2
λ? − λ[t]
!
(50)
2
(51)
Rearranging the terms
λ? − λ[t + 1]
2
≤ λ? − λ[t]
+ 2 s̃[t]
2
2
+ 22 LB λ? − λ[t]
+ 22 L
t−1
X
s̃[l]
2
2
l=t−B−1
T
?
− 2s [t] λ − λ[t]
(52)
Then, separate the last term on the right hand side, and take
−sT [t] λ? − λ[t] and note that we can rewrite it as − s? −
T ?
s[t]
λ − λ[t] . Then, by strong concavity we can bound
2
this term by −m λ? − λ[t] . Now, we take the expectation
conditioned on λ[t] on both sides of the previous inequality
2
2
E kλ? − λ[t + 1] |λ[t] ≤ 1 − m + 22 LB λ? − λ[t]
+ 2 E ks̃[t]k2 |λ[t] + 22 L
t−1
X
l=t−B−1
− E sT [t]|λ[t] λ? − λ[t]
E ks̃[l]k2 |λ[t]
(53)
And
then, by applying the subgradient bound given by
E ks[t]k2 |λ[t] ≤ S 2 , and particularizing Proposition 5 with
λ = λ? , we have
2
2
E kλ? −λ[t + 1] |λ[t] ≤ 1 − m + 22 LB λ? − λ[t]
+ 2 S 2 + 22 LBS 2 − g(λ? ) − g(λ[t]) ,
(54)
which gives us the desired inequality.
2
Then, we can further bound the expression by relying on the
L-Lipschitz continuity of the subgradients of the dual function
λ? − λ[t + 1]
The third term on the right hand side can be further expanded
by making use of the inequality kukkvk ≤ kuk2 + kvk2 ,
leading to the following expression
The previous lemma holds on average, while we are interested in establishing convergence almost surely. We leverage
on this lemma and resort to a supermartingale argument to
show that Algorithm 1 converges to a neighborhood of the
optimal solution of the dual function.
Lemma 7. Let E ks[t]k2 |λ[t] ≤ S 2 be a bound on the
second moment of the norm of the stochastic subgradients
s[t]. Further, consider the dual updates of Algorithm 1, with
step size ≤ m/(2LB). Then, assume that the dual variable λ[T ] is given for an arbitrary time T and define as
λbest [t] := arg minλ[l] g(λ[l]) the dual variable leading to the
best value of the of the dual function for the interval l ∈ [T, t].
Then, we have
lim g(λbest [t]|λ[T ]) ≥ g(λ? ) − 2 S 2 − 22 LBS 2 a.s. (55)
t→∞
Proof. Let T = 0 for simplicity of exposition. Then, define
the sequence α[t] corresponding to a stopped process tracking
the dual distance ||λ* − λ[t]||^2 until the optimality gap g(λ*) − g(λ[t]) falls below S^2 + 2LBS^2. Namely,
2
α[t] : = λ? − λ[t]
I g(λ? ) − g(λbest [t]) > S 2 + 2LBS 2 ,
(56)
where I{·} is the indicator function. In a similar manner define
the sequence β[t] as follows
β[t] : = g(λ? ) − g(λ[t]) − 2 S 2 − 22 LBS 2
I g(λ? ) − g(λbest [t]) > S 2 + 2LBS 2 .
(57)
Now, let F[t] be the filtration measuring α[t], β[t] and λ[t].
Since α[t] and β[t] are completely determined by λ[t], and
λ[t] is a Markov process, conditioning on F[t] is equivalent
to conditioning on λ[t]. Hence, we can write the expectation
E [α[t]|F[t]] = E [α[t]|λ[t]]. Now, consider this expectation at
time t + 1, given by
2
E α[t + 1]|λ[t] = E λ? − λ[t + 1]
?
I g(λ ) − g(λbest [t + 1]) > S 2 + 2LBS 2 |λ[t] . (58)
By noting that the indicator term is lower or equal than 1, we
can upper bound this expression as
2
(59)
E α[t + 1]|λ[t] ≤ E λ? − λ[t + 1] |λ[t] .
Then, by application of Lemma 6 we have
E α[t + 1]|λ[t] ≤ 1 − m + 22 LB λ? − λ[t]
+ 2 S 2 + 22 LBS 2 − g(λ? ) − g(λ[t]) .
2
(60)
By making use of the definitions of α[t] and β[t], given by
equations (56) and (57), we can rewrite the previous expression
as
E α[t + 1]|λ[t] ≤ 1 − m + 22 LB α[t] − β[t].
(61)
Now, since the step size chosen is ≤ m/(2LB), this means
that the factor multiplying
the process α[t] is upper bounded
by 1 − m + 22 LB ≤ 1. Therefore, we can further write
the expectation as
E α[t + 1]|λ[t] ≤ α[t] − β[t].
(62)
Since by definition the processes α[t] and β[t] are nonnegative,
the supermartingale convergence theorem [29, Theorem 5.2.9]
states that the sequence α[t] converges almost surely, and that the sum ∑_{t=1}^{∞} β[t] is almost surely finite. This carries the
implication that lim inf t→∞ β[t] = 0. Given the definition
of the sequence β[t], this implies that limt→∞ g(λbest [t]) ≥
g(λ? ) − 2 S 2 − 22 LBS 2 almost surely.
Now, it suffices to show that the iterates generated by the
algorithm are almost surely feasible to the original problem
(17) if the constants ν̄_ij are chosen to be an upper bound of the optimal ν_ij^* multipliers.
Proposition 8 (Auxiliary Feasibility). Assume there exist feasible primal variables z_i, s_ij and y_ij such that, for some ξ > 0, we have log(p_i) − log(s_ii) − ∑_{j≠i} log(1 − s_ij) < −ξ, s_ii − E[q_i z_i] − y_ii < −ξ, E[q_c z_j] − y_ij − s_ij < −ξ and E[z_i] − E[e_i] < −ξ. Then, the sequences generated by Algorithm 1 satisfy the constraints (18b)−(18e) almost surely.
Proof. First, we start by upper bounding the value of the dual
variables. We collect the feasible primal variables in a vector
ẑ = {zi , sij , yij }. Then, given feasible primal variables ẑ we
bound the value of the dual function g(λ). Recall that the dual
function is defined as g(λ) = minz∈X L(z, λ), then for any
feasible ẑ, we necessarily have g(λ) ≤ L(ẑ, λ). Hence, we
can write
X
X
X
M
2
M
M
g(λ) ≤
i=1 E zi +
i=1
j=1 E ν̄ij yij
X
X
M
+
j6=i log (1 − sij )
i=1 φi log (pi ) − log (sii ) −
X
M
+
i=1 νii sii − E qi zi − yii
X
X
M
+
j6=i νij E qc zj − yij − sij
i=1
X
M
+
(63)
i=1 βi E zi − E ei .
Since
P we have a constant ξ > 0 such that log(pi ) − log(sii ) −
j6=i log (1 − sij ) < −ξ, sii − E qi zi − yii < −ξ, E qc zj −
yij − sij < −ξ and E zi − E ei < −ξ. We can simplify the
bound on the dual function to the following inequality
g(λ) ≤
M
X
i=1
E zi2 +
M X
M
X
i=1 j=1
E ν̄ij yij − ξλT 1.
(64)
Then, we simply reorder the previous expression to establish
an upper bound on the dual variables,
M
M
M X
X
1 X
2
λ≤
E ν̄ij yij − g(λ) ,
(65)
E zi +
ξ i=1
i=1 j=1
where the inequality is taken component-wise for all elements
of the vector λ with respect to the scalar on the right hand
side of the inequality. Then, by Lemma 7 we can certify the
existence of a time t ≥ T1 such that g(λ[t]) ≥ g(λ? ) − 2 S 2 −
22 LBS 2 . Hence, we can write
M
M
M X
X
1 X
λ[t] ≤
E ν̄ij yij
E zi2 +
ξ i=1
i=1 j=1
− g(λ? ) + 2 S 2 + 22 LBS 2
(66)
for some t ≥ T1. Now, consider the feasibility conditions of the optimization problem with the auxiliary variables (18b)−(18e), which are given by the long-term behavior of the following inequalities
lim_{t→∞} (1/t) Σ_{l=1}^t ( log(pi) − log(sii[l]) − Σ_{j≠i} log(1 − sij[l]) ) ≤ 0,   (67)
lim_{t→∞} (1/t) Σ_{l=1}^t ( sii[l] − qi[l]zi[l] − yii[l] ) ≤ 0,   (68)
lim_{t→∞} (1/t) Σ_{l=1}^t ( qc zj[l] − yij[l] − sij[l] ) ≤ 0,   (69)
lim_{t→∞} (1/t) Σ_{l=1}^t ( zi[l] − ei[l] ) ≤ 0.   (70)
These inequalities simply correspond to the stochastic subgradients of the optimization problem (18). Therefore, we can write the feasibility conditions in a condensed form as lim_{t→∞} (1/t) Σ_{l=1}^t s[l] ≤ 0. Now, we consider the dual updates of the problem, given by λ[t + 1] = [λ[t] + εs̃[t]]⁺. Since the projection onto the nonnegative orthant can only increase the iterate, we can lower bound λ[t + 1] by
λ[t + 1] ≥ λ[t] + εs̃[t] ≥ λ[1] + ε Σ_{l=1}^t s̃[l] ≥ ε Σ_{l=1}^t s̃[l],   (71)
where we have removed the projection and further lower bounded the expression by recursively substituting the dual updates. Now, we proceed to prove feasibility of the constraints
of the auxiliary problem (18). In order to do this, we proceed by contradiction. Assume that the conditions (67)−(70) are infeasible. This means that there exist a time T2 and a constant δ > 0 such that (1/t) Σ_{l=1}^t s[l] ≥ δ for all t ≥ T2. Substituting this expression in the dual update bound (71), we have that λ[t + 1] ≥ δt. Then, we can freely choose a time t ≥ T2 such that
λ[t] > (1/ξ) ( Σ_{i=1}^M E[zi²] + Σ_{i=1}^M Σ_{j=1}^M E[ν̄ij yij] − g(λ⋆) + ε²S² + 2ε²LBS² ).   (72)
However, the upper bound established in (66) contradicts this
expression. Therefore, the inequalities (67)−(70) are satisfied
almost surely, which implies that the constraints (18b)−(18e)
of the auxiliary optimization problem (18) are almost surely
satisfied.
This proposition allows us to certify that the constraints of
the problem with the auxiliary variables are satisfied. However,
this does not ensure stability. Nonetheless, we show that if
the parameters ν̄ij are chosen so as to upper bound the optimal
dual variables νij , then the optimal auxiliary variables are
zero, and the proposed algorithm also satisfies the constraints
(17b)−(17e) of the original problem without the auxiliary
variables.
Theorem 9 (Stability). Assume there exist feasible primal variables zi and sij, such that for some ξ > 0, we have log(pi) − log(sii) − Σ_{j≠i} log(1 − sij) < −ξ, sii − E[qi zi] < −ξ, E[qc zj] − sij < −ξ and E[zi] − E[ei] < −ξ. Further, let ν̄ij be an upper bound to the optimal νij multipliers. Then, the scheduling decisions zi[t] generated by Algorithm 1 satisfy the successful packet transmission requirement
lim_{t→∞} (1/t) Σ_{l=1}^t qi[l]zi[l] Π_{j≠i} (1 − qc zj[l]) > pi,   (73)
which ensures the stability of all control loops.
Proof. To verify this, subtract the Lagrangian of the optimization problem with the auxiliary variables (18) from that of the original problem (17). This difference is given by
L(z, λ) − L̂(z, λ) = Σ_{i=1}^M Σ_{j=1}^M (ν̄ij − νij − θij + µij) yij − Σ_{i=1}^M Σ_{j=1}^M µij ȳij,   (74)
where θij ≥ 0 and µij ≥ 0 are the Lagrange multipliers of the implicit constraints yij ≥ 0 and yij ≤ ȳij, respectively. To certify the equivalence between the optimization problems (17) and (18), we need to certify that (74) is zero for the optimal values. This implies that there must exist Lagrange multipliers such that ν̄ij − νij − θij + µij = 0 and µij = 0 for all i, j. Since ν⋆ij ≤ ν̄ij, we can find multipliers satisfying these constraints by letting µ⋆ij = 0 and θ⋆ij = ν̄ij − ν⋆ij. Then, L(z, λ) − L̂(z, λ) = 0, which implies the solution of both problems is equal. Since lim_{t→∞} (1/t) Σ_{l=1}^t yij[l] = y⋆ij and y⋆ij = 0, the primal variables zi and sij almost surely satisfy the original constraints of the optimization problem without the auxiliary variables, given by (17b)−(17e). Since constraints (17b)−(17d) are equivalent to the constraint pi < E[qi zi Π_{j≠i} (1 − qc zj)], by Proposition 1 we have that Algorithm 1 generates schedules zi[t] that ensure the stability of all control loops.
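To make the requirement (73) concrete, the following Python sketch checks the long-run successful-transmission frequency for recorded schedules. The schedules, decoding indicators, and access rates below are synthetic placeholders, not outputs of Algorithm 1; only qc and the required probabilities come from the paper's simulation setup.

```python
import numpy as np

# Check the long-run successful-transmission requirement (73) on synthetic schedules.
rng = np.random.default_rng(1)

M, T = 2, 10_000
qc = 0.25                                 # collision probability used in Section VI
p_req = np.array([0.3453, 0.2769])        # required success probabilities

z = (rng.random((T, M)) < 0.40).astype(float)   # hypothetical transmit decisions z_i[l]
q = (rng.random((T, M)) < 0.85).astype(float)   # hypothetical decoding indicators q_i[l]

success = np.empty((T, M))
for i in range(M):
    others = [j for j in range(M) if j != i]
    # q_i z_i * prod_{j != i} (1 - qc * z_j), the summand in (73)
    success[:, i] = q[:, i] * z[:, i] * np.prod(1.0 - qc * z[:, others], axis=1)

long_run = success.mean(axis=0)           # (1/t) * sum_l of the summand above
print("long-run success:", long_run)
print("requirement met :", long_run > p_req)
```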
Theorem 9 states that if there exist schedules zi [t] capable of
stabilizing the plants, then Algorithm 1 generates them. Note
that the optimal auxiliary variables yij take the zero value
and are therefore not necessary in the long run to satisfy the
constraints of the original optimization problem. As previously
discussed, they have been introduced in order to provide a way
to bound the dual variables and allow us to enforce causality
in the energy consumption.
VI. N UMERICAL R ESULTS
In this section, we study the performance of the proposed
random access scheme with energy harvesting sensors. We
consider a scalar control system, with M = 2 plants over
T = 10,000 time slots. The plant dynamics are given by
Ao,1 = 1.1 and Ac,1 = 0.15 for the first plant, and Ao,2 =
1.05 and Ac,2 = 0.1 for the second one. Hence, the first
system is slightly more unstable than the second one. Further,
we consider both systems to be perturbed by i.i.d. zero-mean Gaussian noise. Also, we assume the same performance
requirement for both plants, given by the Lyapunov function
Vi(xi[t]) = xi²[t] and an expected decrease rate of ρi = 0.8.
With regard to the communication aspects, we consider a system where the channel states hi[t] are i.i.d. exponential random variables with mean E[hi[t]] = 2, and the decoding probability q(hi[t]) is given by the function shown in Figure 2. Since the communication medium is shared, we consider that packet collisions occur with probability qc = 0.25. Moreover, we consider the sensing devices to be powered by an energy harvesting process of rate E[ei] = 0.5, and that they store the collected energy in batteries of size b_i^max = 20. Finally, the parameters of the algorithm are chosen as ȳij = 25, ν̄ij = 19, and step size ε = 1.
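For convenience, the parameters of this setup can be collected as follows; this is only a minimal sketch, and the dictionary layout and key names are my own, with only the numerical values taken from the text above.

```python
# Simulation parameters of Section VI, gathered in one place (key names are illustrative).
sim_params = {
    "num_plants": 2,
    "horizon": 10_000,            # time slots T
    "A_open":  [1.10, 1.05],      # open-loop dynamics Ao,i
    "A_closed": [0.15, 0.10],     # closed-loop dynamics Ac,i
    "lyapunov_rate": 0.8,         # required decrease rate rho_i
    "channel_mean": 2.0,          # E[h_i[t]] of the exponential channel
    "collision_prob": 0.25,       # qc
    "harvest_rate": 0.5,          # E[e_i]
    "battery_size": 20,           # b_i^max
    "y_bar": 25, "nu_bar": 19,    # algorithm parameters
    "step_size": 1.0,             # epsilon
}
print(sim_params)
```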
A. System Dynamics
We start by studying the evolution of the system dynamics.
In Figure 3, we plot the evolution of the plant state at each
time slot. As expected, since our proposed policy stabilizes
both plants, the system state oscillates around the zero value.
Furthermore, this figure illustrates that System 1 is slightly
more unstable than System 2.

Fig. 3. Evolution of the plant state at each time slot.

Fig. 4. Energy stored in the batteries at each time slot.

Fig. 5. Evolution of the system control performance.

This is evidenced by the
somewhat more pronounced peaks of instability, and higher
variance of the plant state x1 [t]. In a similar manner, this
behavior is also shown in Fig. 4. In this figure, we have plotted
the evolution of the battery state of the sensing devices of
both systems. Since the first plant is slightly more unstable
than the second plant, the energy consumption of the sensor
in the first system is slightly more pronounced. Also, by taking
a look at this figure, one can expect the first system to have
larger battery requirements than the second system. Intuitively,
since System 1 is more unstable, its sensor node requires a
larger battery to counteract its instability. The extent of this
requirement will become more apparent once we take a look
at the values of the dual variables.
B. Stability
In order to study the stability of the plants under our proposed policy, we look at the long term evolution of the system
states. First, we look at the evolution of the system control
performance. By our design, we require the Lyapunov function
Vi(xi[t]) = xi²[t] to decrease at an average rate of ρi = 0.8.
Fig. 6. Average energy balance in the system over time, given by the expression (1/t) Σ_{l=0}^t (ei[l] − zi[l]).

Iterating expression (3) in Proposition 1, we have that the limit
of the control performance is upper bounded in the long run by
the term tr(Pi Wi )/(1 − ρi ). By particularizing this expression
to our parameters, we expect the control performance in the
limit to be below tr(Pi Wi )/(1 − ρi ) = 1/(1 − 0.8) = 5. We
plot in Figure 5 the system control evolution. As expected,
both systems are asymptotically stable and the control performance converges to approximately (1/T) Σ_{t=0}^T ‖xi[t]‖² ≈ 4
for both plants.
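The following minimal Python sketch reproduces this kind of check: it computes the running average control performance and compares it against the long-run bound tr(Pi Wi)/(1 − ρi) = 5. The state trajectory here is synthetic noise standing in for the closed-loop states produced by the scheduling policy, so the numbers are illustrative only.

```python
import numpy as np

# Running control performance (1/T) * sum_t ||x_i[t]||^2 versus the long-run bound.
rng = np.random.default_rng(2)

rho = 0.8
bound = 1.0 / (1.0 - rho)                  # tr(P_i W_i)/(1 - rho_i) = 5 in this setup

x = rng.normal(scale=2.0, size=10_000)     # hypothetical scalar plant state x_i[t]
running_avg = np.cumsum(x ** 2) / np.arange(1, x.size + 1)

print("final running average:", running_avg[-1])
print("below the bound?     :", running_avg[-1] < bound)
```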
Another interesting measure to consider is the energy
balance of the systems. We denote by energy balance the
difference between the harvested energy and the consumed
energy. Thus, the average energy balance of the i-th system at time t is given by (1/t) Σ_{l=0}^t (ei[l] − zi[l]). We plot this measure in Figure 6. As expected, since both sensors are powered by energy harvesting processes of the same mean E[ei] = 0.5 and System 1 is more unstable than System 2,
the energy balance of the first system is lower. Also, note
that the lower bound on the energy balance is zero, since
the total energy spent has to necessarily be lower or equal
to the total energy harvested.

Fig. 7. Average evolution of the dual variables νij over time.

Fig. 8. Average transmission probabilities.

This allows us to interpret the
energy balance as a measure of how much more control
performance can be obtained with the same energy harvesting
process. For example, System 1 has an energy balance of
approximately 0.05 units. Since we have assumed a unitary
power consumption, this means that System 1 has energy to
support an increase by 0.05 of its transmit probability. In the
same manner, System 2 can support an increase of around
0.14 of its transmit probability.
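A minimal sketch of this energy-balance computation is given below; the harvesting and scheduling sequences are synthetic, with rates only roughly matching the text, so the output is illustrative rather than a reproduction of Figure 6.

```python
import numpy as np

# Average energy balance (1/t) * sum_l (e_i[l] - z_i[l]) on synthetic sequences.
rng = np.random.default_rng(3)

T = 10_000
e = (rng.random(T) < 0.5).astype(float)    # unit-energy arrivals, E[e_i] = 0.5
z = (rng.random(T) < 0.44).astype(float)   # hypothetical transmissions (unit cost)

balance = np.cumsum(e - z) / np.arange(1, T + 1)
print("long-run energy balance:", balance[-1])   # roughly E[e_i] - E[z_i]
```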
Now, we take a look at the dual variables. As we have
thoroughly discussed in previous sections, the selection of
the ν̄ij parameters is crucial to the proper operation of the
algorithm. Specifically, these parameters should be chosen
such that their corresponding optimal dual variables are upper
bounded by them. In Figure 7 we plot the time averaged dual
variables νij , where the average over time of these variables
converges to their optimal value. First, the choice of ν̄ij = 19
for all i, j satisfies the required condition, i.e., being an upper
bound of the optimal dual variables. Further, by taking a closer
look at these dual variables, we can gain some insight into the
behavior of the system. First, note that the variables νii are associated with the constraint sii ≤ E[qi zi] + yii and, hence,
represent the requirement of plant i to transmit its state. By
looking at expression (22), corresponding to the closed-form
solution of the primal zi [t], a larger value of the dual variable
νii [t] leads to a larger value of the scheduling variable zi [t].
Thus, since System 1 is more unstable than System 2, we
have that ν11 > ν22. In a similar way, the variables νij for j ≠ i are associated with the constraint sij ≥ E[qc zj] − yij and represent, at node i, the interference-adjusted need to transmit
of the other plants j 6= i. Therefore, in our two-plant scenario,
a large value of ν21 [t] leads to a lower z1 [t]. And in a similar
manner as previously, since System 1 is more unstable than
System 2, and the systems interfere symmetrically, we have
that ν12 > ν21 . Also, when previously evaluating Fig. 4, we
expected System 1 to have higher battery requirements due
to its higher instability. Now, as per Proposition 3, which states that the required battery size b_i^max is proportional to the dual variable ν̄ii through the inequality b_i^max ≥ (1/ε)ν̄ii + 1, we
see that System 1 requires a larger ν̄ii value, and hence, a
larger battery.
C. Communication
We turn our attention to the communication aspects of
the proposed policy. As we have discussed previously, System 1 is slightly more unstable than System 2. Since we
are requiring for both plants an expected decrease rate of
ρi = 0.8 for a Lyapunov function Vi(xi[t]) = xi²[t], by
Proposition 1 this translates to required successful transmission probabilities of p1 ≈ 0.3453 and p2 ≈ 0.2769,
respectively. As expected, the less stable system requires a
higher successful transmission probability. In Figure 8 we have plotted the resulting transmission probabilities during our simulation. We look at three different probabilities: (i) the required transmission probabilities, given by pi; (ii) the actual transmission probabilities, pi^TX ≜ (1/t) Σ_{l=0}^t zi[l]; and (iii) the successful reception probabilities, pi^RX, given by (1/t) Σ_{l=0}^t qi[l]zi[l] Π_{j≠i} (1 − qc zj[l]).
While the required probabilities are p1 ≈ 0.3453 and p2 ≈ 0.2769, we have that the actual successful reception probabilities are p1^RX = 0.3607 and p2^RX = 0.2827. These probabilities are above the required ones and, hence, ensure the stability of the systems. However, the probabilities at which the sensors try to access the medium are higher, p1^TX = 0.4446 and p2^TX = 0.3558, respectively. This is due to the effects
of the transmission medium. Packets can be lost if collisions
occur, and they might not be decoded properly if the channel
conditions are not sufficiently favorable (cf. Figure 2).
The overall effect of the transmission medium is better displayed in Figure 9, where we show the transmission schedules
from t = 1050 to t = 1100. In this figure, the bars represent
the probability of successful decoding for a given time slot,
and the stems represent an access to the medium. Further,
collisions are represented by a red dot. From this plot, it is
clear that the sensor node tends to access the medium when the
channel conditions are favorable (i.e., the decoding function
qi[t] takes values closer to 1).

Fig. 9. Transmission schedules from t = 1050 to t = 1100. Bars represent the decoding probability qi[t], stems denote a transmission taking place, and a red dot denotes the occurrence of a collision.

Also, this figure evidences that collisions happen with a sufficiently low chance since, due to
the independence assumption of the channel states between
sensors, access does not happen at the same time very often.
VII. C ONCLUSIONS
In this work, we have designed a random access communication scheme for energy harvesting sensors in wireless control
systems. We have considered a scenario with multiple plants
sharing a wireless communication medium. Under these conditions, we have shown that the optimal scheduling decision is
to transmit with a certain probability, which is adaptive to the
time-varying channel, battery and plant conditions. In order to
compute the optimal policy, we have provided an algorithm
based on a stochastic dual method. The proposed algorithm is
decentralized, where the sensors only need to share some of
their dual variables. Furthermore, we have provided theoretical
guarantees on the stability of all control loops under the
proposed policy, including the consideration of asynchronicity in the information shared between the nodes. Finally, we
have validated by means of simulations the performance of the
proposed scheme. The numerical results show that the random
access policy is capable of stabilizing all control loops while
also satisfying the energy constraints imposed by the energy
harvesting process.
R EFERENCES
[1] M. Calvo-Fullana, C. Antón-Haro, J. Matamoros, and A. Ribeiro,
“Random access policies for wireless networked control systems with
energy harvesting sensors,” in American Control Conference (ACC),
2017. IEEE, 2017, pp. 3042–3047.
[2] R. J. Vullers, R. Schaijk, H. J. Visser, J. Penders, and C. V. Hoof,
“Energy harvesting for autonomous wireless sensor networks,” IEEE
Solid-State Circuits Magazine, vol. 2, no. 2, pp. 29–38, 2010.
[3] J. Yang and S. Ulukus, “Optimal packet scheduling in an energy harvesting communication system,” IEEE Transactions on Communications,
vol. 60, no. 1, pp. 220–230, 2012.
[4] K. Tutuncuoglu and A. Yener, “Optimum transmission policies for
battery limited energy harvesting nodes,” IEEE Transactions on Wireless
Communications, vol. 11, no. 3, pp. 1180–1189, 2012.
[5] O. Ozel, K. Tutuncuoglu, J. Yang, S. Ulukus, and A. Yener, “Transmission with energy harvesting nodes in fading wireless channels: Optimal
policies,” IEEE Journal on Selected Areas in Communications, vol. 29,
no. 8, pp. 1732–1743, 2011.
[6] C. K. Ho and R. Zhang, “Optimal energy allocation for wireless
communications with energy harvesting constraints,” IEEE Transactions
on Signal Processing, vol. 60, no. 9, pp. 4808–4818, 2012.
[7] M. Calvo-Fullana, J. Matamoros, and C. Antón-Haro, “Reconstruction
of correlated sources with energy harvesting constraints in delayconstrained and delay-tolerant communication scenarios,” IEEE Transactions on Wireless Communications, vol. 16, no. 3, pp. 1974–1986,
2017.
[8] P. Castiglione and G. Matz, “Energy-neutral source-channel coding with
battery and memory size constraints,” IEEE Transactions on Communications, vol. 62, no. 4, pp. 1373–1381, 2014.
[9] O. Orhan, D. Gunduz, and E. Erkip, “Source-channel coding under
energy, delay, and buffer constraints,” IEEE Transactions on Wireless
Communications, vol. 14, no. 7, pp. 3836–3849, July 2015.
[10] P. Castiglione, O. Simeone, E. Erkip, and T. Zemen, “Energy management policies for energy-neutral source-channel coding,” IEEE Transactions on Communications, vol. 60, no. 9, pp. 2668–2678, 2012.
[11] G. Yang, V. Y. Tan, C. K. Ho, S. H. Ting, and Y. L. Guan, “Wireless
compressive sensing for energy harvesting sensor nodes,” IEEE Transactions on Signal Processing, vol. 61, no. 18, pp. 4491–4505, 2013.
[12] S. Knorn, S. Dey, A. Ahlén, and D. E. Quevedo, “Distortion minimization in multi-sensor estimation using energy harvesting and energy
sharing.” IEEE Trans. Signal Processing, vol. 63, no. 11, pp. 2848–2863,
2015.
[13] M. Calvo-Fullana, J. Matamoros, and C. Antón-Haro, “Sensor selection
and power allocation strategies for energy harvesting wireless sensor
networks,” IEEE Journal on Selected Areas in Communications, vol. 34,
no. 12, pp. 3685–3695, 2016.
[14] S. Ulukus, A. Yener, E. Erkip, O. Simeone, M. Zorzi, P. Grover, and
K. Huang, “Energy harvesting wireless communications: A review of
recent advances,” IEEE Journal on Selected Areas in Communications,
vol. 33, no. 3, pp. 360–381, 2015.
[15] A. Nayyar, T. Başar, D. Teneketzis, and V. V. Veeravalli, “Optimal
strategies for communication and remote estimation with an energy
harvesting sensor,” IEEE Transactions on Automatic Control, vol. 58,
no. 9, pp. 2246–2260, 2013.
[16] M. Nourian, A. S. Leong, and S. Dey, “Optimal energy allocation for
kalman filtering over packet dropping links with imperfect acknowledgments and energy harvesting constraints,” IEEE Transactions on
Automatic Control, vol. 59, no. 8, pp. 2128–2143, 2014.
[17] Y. Li, F. Zhang, D. E. Quevedo, V. Lau, S. Dey, and L. Shi, “Power
control of an energy harvesting sensor for remote state estimation,” IEEE
Transactions on Automatic Control, vol. 62, no. 1, pp. 277–290, 2017.
[18] J. Huang, D. Shi, and T. Chen, “Event-triggered state estimation with
an energy harvesting sensor,” IEEE Transactions on Automatic Control,
2017.
[19] N. J. Watkins, K. Gatsis, C. Nowzari, and G. J. Pappas, “Battery
management for control systems with energy harvesting sensors,” in
IEEE Conference on Decision and Control. IEEE, 2017.
[20] ——, “Stability of control systems with feedback from energy harvesting
sensors,” arXiv preprint arXiv:1712.02847, 2017.
[21] W. Zhang, M. S. Branicky, and S. M. Phillips, “Stability of networked
control systems,” IEEE Control Systems, vol. 21, no. 1, pp. 84–99, 2001.
[22] D. Hristu-Varsakelis, “Feedback control systems as users of a shared
network: Communication sequences that guarantee stability,” in Decision
and Control, 2001. Proceedings of the 40th IEEE Conference on, vol. 4.
IEEE, 2001, pp. 3631–3636.
[23] G. C. Walsh, H. Ye, and L. G. Bushnell, “Stability analysis of networked
control systems,” IEEE transactions on control systems technology,
vol. 10, no. 3, pp. 438–446, 2002.
[24] K. Gatsis, A. Ribeiro, and G. J. Pappas, “Random access design for
wireless control systems,” arXiv preprint arXiv:1605.00627, 2016.
[25] A. Ribeiro, “Ergodic stochastic optimization algorithms for wireless
communication and networking,” IEEE Transactions on Signal Processing, vol. 58, no. 12, pp. 6369–6386, 2010.
[26] A. Goldsmith, Wireless communications. Cambridge University Press,
2005.
[27] A. Ribeiro, “Optimal resource allocation in wireless communication
and networking,” EURASIP Journal on Wireless Communications and
Networking, vol. 2012, no. 1, p. 272, 2012.
[28] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods. Prentice Hall, Englewood Cliffs, NJ, 1989, vol. 23.
[29] R. Durrett, Probability: Theory and Examples. Cambridge University Press, 2010.
Strategic Topology Switching for Security–Part I:
Consensus & Switching Times
arXiv:1711.11183v2 [] 6 Apr 2018
Yanbing Mao, Emrah Akyol, and Ziang Zhang
Abstract—This two-part paper considers the strategic topology
switching for the second-order multi-agent systems under attack.
In Part I, we propose a strategy on switching times that enables
the strategic topology-switching algorithm proposed in Part II [1]
to reach the second-order consensus in the absence of attacks.
The control protocol introduced is governed only by the relative
positions of agents. Based on the stability of switched linear
systems without stable subsystems, and the period of the multi-agent systems under fixed topology, the strategy on the dwell time
of topology-switching signal is derived. Employing this strategy
and a finite-time consensus algorithm, a decentralized topologyswitching algorithm is proposed. The primary advantages of the
algorithm in achieving consensus are: (i) the control protocol
relies only on relative position measurements, which means that
no velocity measurements are needed; (ii) it has no constraint on
the magnitude of coupling strength. Simulations are provided
to verify the effectiveness of strategic topology switching in
achieving the second-order consensus.
Index Terms—Multi-agent system, second-order consensus,
strategic topology switching, dwell time.
I. INTRODUCTION

THE consensus of multi-agent systems with the first-order dynamics is a well-studied theoretical problem (see e.g., [2]–[5]) with many practical applications including decentralized computation [6], distributed optimization [7], [8], power sharing for droop-controlled inverters in islanded microgrids [9], clock synchronization for sensor networks [10], and more. However, current and emerging systems, such as connected vehicles [2], [5], spacecraft [11], robots [12] and electrical power networks [13], rely on the second-order dynamics. This observation, together with the fact that the consensus algorithms designed for the first-order multi-agent systems cannot be directly applied to those with the second-order dynamics, is the main motivation of this work.

A second-order multi-agent system consists of a population of n agents whose dynamics are governed by the following equations:
ẋi(t) = vi(t),   (1a)
v̇i(t) = ui(t),  i = 1, . . . , n   (1b)
where xi(t) ∈ R is the position, vi(t) ∈ R is the velocity, and ui(t) ∈ R is the control protocol of agent i.

This first part of the two-part paper proposes a strategy on switching times that enables the strategic topology-switching algorithm (explained in detail in the second part of this series [1]) to reach the second-order consensus in the absence of attacks. In the following, we present the definition of second-order consensus in this context.

Definition 1: [14] The second-order consensus in the multi-agent system (1) is achieved if and only if the following holds for any initial condition:
lim_{t→∞} |xi(t) − xj(t)| = 0,   (2a)
lim_{t→∞} |vi(t) − vj(t)| = 0,  ∀i, j = 1, . . . , n.   (2b)

Y. Mao, E. Akyol and Z. Zhang are with the Department of Electrical and Computer Engineering, Binghamton University–SUNY, Binghamton, NY, 13902 USA (e-mail: {ymao3,eakyol,zhangzia}@binghamton.edu).

A. Related Work

Among the few studies [5], [14], [15] on this problem, two commonly studied control protocols are
ui(t) = α Σ_{j=1}^n aij (xj(t) − xi(t)) − β vi(t),   (3)
and
ui(t) = α Σ_{j=1}^n aij (xj(t) − xi(t)) + β Σ_{j=1}^n aij (vj(t) − vi(t)),   (4)
where aij is the entry of the (weighted) adjacency matrix describing the structure of the undirected/directed control network, and α and β are the coupling strengths. In the past several years, based on the two consensus protocols (3) and (4), some more realistic problems have been addressed. For example, Mei et al. [16] propose an adaptive control gain to relax the conditions on the coupling strengths and obtain a fully distributed consensus algorithm; Qin et al. [17] study the leaderless consensus and leader-following consensus, and report some lower bounds for the coupling strengths; to deal with the problem of limited interaction ranges, Song and You [18] propose range-based varying weights in the coupling matrix. The conditions of the well-studied second-order consensus are summarized in Table I. The control protocols considered therein all require individual or relative velocity measurements. Yu et al. [19] propose a sampled-position-based control protocol that does not need velocity measurements. However, it has a constraint on the magnitudes of the coupling strengths.
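As an aside, the behavior of protocol (4) on an undirected graph is easy to reproduce numerically. The following Python sketch is a minimal forward-Euler simulation on a 4-node path graph; the graph, the gains α and β, the step size, and the initial conditions are illustrative choices of mine and are not taken from the paper.

```python
import numpy as np

# Minimal forward-Euler simulation of protocol (4) on a 4-node path graph.
n, dt, steps = 4, 0.01, 20_000
alpha, beta = 1.0, 1.0

A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3)]:        # undirected path graph
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A               # graph Laplacian

x = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([1.0, 2.0, 3.0, 4.0])

for _ in range(steps):
    u = -alpha * L @ x - beta * L @ v        # protocol (4) written in vector form
    x, v = x + dt * v, v + dt * u

print("position spread:", x.max() - x.min())  # both spreads shrink toward zero,
print("velocity spread:", v.max() - v.min())  # i.e., second-order consensus
```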
B. Motivation of Part-I Paper
In practice, many network topologies are non-static, such
as the communication networks of mobile agents [3] and
brain networks [20].

Table I
CONDITIONS OF SECOND-ORDER CONSENSUS

Reference    Constraint on                                        Protocol
[14], [15]   magnitude of coupling strength                       Directed (3)
[16], [17]   magnitude of coupling strength                       Directed weighted (3)
[16]         magnitude of coupling strength                       Directed weighted (4)
[18]         magnitude of coupling strength, initial conditions   Directed weighted (3)
[5]          NONE                                                 Undirected (4)

Recent studies of dynamical networks
have highlighted the important role played by the network
topology [21]–[26]. For example, Menck et al. [21] find in
numerical simulations of artificially generated power grids
that tree-like connection schemes, so-called dead ends and
dead trees, can strongly diminish the stability; Schultz et
al. [24] show how the addition of links can change the
synchronization properties of the network; Mao et al. [25]
propose strategic topology switching to reveal zero-dynamics
attack in the first-order multi-agent systems. In power grids,
a certain group of generators can be cut off or connected to
prevent cascading failure [22]; and in the context of power
systems, topology switching is equivalent to actively tripping
or re-closing transmission lines, or adjusting active power
output or transmission line reactance to improve the linear
stability of the power system [27]. With the advances in
wireless communication networks, it has become more feasible
to set the topology of the communication network as a control
variable [28], as carried out in this work.
In past studies, unintentional topology changes in a networked control system are commonly handled as disturbances [3], [29]–[32] to the system. To the best of our knowledge, active/strategic topology switching for networked control
systems has not been systemically studied, with exceptions
being [25], [26], [33]. However, the centralized topologyswitching algorithm studied in [33] and the strategic topologyswitching algorithm proposed in [25] are only applicable to the
first-order multi-agent systems. Moreover, for the second-order
multi-agent systems, the state-dependent topology-switching
algorithm studied in [26] rules out only the Chattering Zeno
behavior, i.e., it still has the Genuinely Zeno behavior. In this
paper, the time-dependent topology switching algorithm derived for the second-order multi-agent systems has neither the
Chattering Zeno behavior nor the Genuinely Zeno behavior.
However, an intriguing question regarding the dynamic
topology is whether the well-studied control protocols can be
simplified significantly, and the advantage of control protocols
under fixed topology can still be maintained. This Part-I paper
answers that question affirmatively.
Security concerns regarding networked cyber-physical systems pose an existential threat to their wide-deployment, see
e.g., Stuxnet malware attack and Maroochy Shire Council
Sewage control incident [34]. The “networked” aspect exacerbates the difficulty of detecting and preventing aforementioned
attacks since centralized measurement (sensing) and control
are not feasible for such large-scale systems [35]. One of the
fundamental problems of security in multi-agent systems is
detection of stealthy attacks. Recent experiments of stealthy
false-data injection attacks on networked control systems [36] showed that changes in system dynamics could be used to reveal
stealthy attacks. To make changes in the system dynamics
in order to reveal zero-dynamics attack, Teixeira et al. [36]
considered the method of modifying input or output matrices.
Obviously, topology switching is another way of making
changes in the dynamics of a multi-agent system. Before using topology switching to reveal the zero-dynamics attack, however, one must investigate whether, under the simplified control protocol that needs only relative position measurements, such changes in the system dynamics can destroy system stability in the absence of attacks. The strategy on switching times proposed in this Part-I paper addresses this problem completely.
Another interesting question, pertaining to the detectability of zero-dynamics attacks by strategic topology switching, is studied explicitly in Part II of this two-part paper [1].
C. Contributions of Part-I Paper and Preview of Part-II Results
The two-part paper comprises a study of a strategic
topology-switching algorithm for the second-order multi-agent
system under attack. Part I provides a basis for this strategic
algorithm: when the topology should strategically switch such
that the agents can have the ability of reaching consensus in the
absence of attacks. The contribution of this paper is twofold,
which can be summarized as follows:
• We propose a control protocol using only measurements of relative positions for the second-order multi-agent system. Based on the stability of switched linear systems without stable subsystems, and the period of the multi-agent systems under fixed topology, in Section IV we obtain a strategy on the dwell time of the topology-switching signal that enables consensus in the absence of attacks.
• Based on the strategy on switching times, through employing a finite-time consensus algorithm, we propose a decentralized topology-switching algorithm in Section V. In achieving the second-order consensus, the algorithm has no constraint on the magnitude of coupling strength.
Using the obtained strategy on switching times for the simplified control protocol, which needs only the measurements of relative positions, Part II [1] will focus on the strategy on switching topologies, i.e., the problem of which topologies to switch to in order to reveal zero-dynamics attacks. In revealing zero-dynamics attacks, the strategy allows the system operator or the defender to have no knowledge of the misbehaving-agent set or the attack-starting time.
D. Organization of Part-I Paper
Section II presents the notions and terminology. Problem
formulation is given in Section III. In Section IV, we derive the
strategy on switching times. Section V presents a decentralized
topology-switching algorithm. Numerical examples are given
in Section VI. Finally, Section VII concludes this Part-I paper.
3
II. N OTIONS AND T ERMINOLOGY
Lemma 5 (Stability of Switched Systems without Stable
Subsystems [42]): Consider switched linear system
A. Notations
We use P > 0 (≥, <, ≤ 0) to denote a positive definite (positive semi-definite,√negative definite, negative semi-definite)
matrix P . Let i = −1 be the imaginary unit. Rn and Rm×n
denote the set of n-dimensional real vectors and the set of
m × n-dimensional real matrices, respectively. N represents
the set of natural numbers and N0 = N ∪ {0}. Let I and 0
be the identity matrix and the zero matrix with compatible
dimension. 1n ∈ Rn and 0n ∈ Rn denote the vector with all
ones and the vector with all zeros, respectively. The superscript
‘⊤’ stands for matrix transpose.
The interaction among n agents is modeled by an undirected
graph G = (V, E), where V = {1, 2, . . . , n} is the set of
vertices that represent n agents and E ⊂ V × V is the set of
edges of the graph G. The adjacency matrix A = [aij ] ∈ Rn×n
of the undirected graph G is defined as aij = (i, j) ∈ E, where
aij = aji = 1 if agents i and j interact with each other, and
aij = aji = 0 otherwise. Assume that there are no self-loops,
i.e., for any i ∈ V, aii = 0. A path is a sequence of connected
edges in a graph. A graph is a connected graph if there is a
path between every pair of vertices.
where z (t) ∈ R , Aσ(t) ∈ R
and σ(t) ∈ S. Given
scalars α > 0, 0 < β < 1, 0 < τbmin ≤ τbmax , if there exists
a set of matrices Pr,q > 0, q = 0, 1, . . . , L, r ∈ S, such that
∀q = 0, 1, . . . , L − 1, ∀r, s ∈ S, we have
q
A⊤
r Pr,q+1 + Pr,q+1 Ar + Ψr − αPr,q+1 < 0,
Lemma 3 (Proposition 1.3.3 in [40]): Let G be a connected
graph with diameter d. Then G has at least d + 1 distinct
eigenvalues, at least d + 1 distinct Laplace eigenvalues, and at
least d + 1 distinct signless Laplace eigenvalues.
Lemma 4: [41] The determinant of the Vandermonde matrix
1
1
···
1
a1
a2
···
an
2
a21
a
·
·
·
a2n
2
H , a31
∈ Rn×n ,
a32
···
a3n
..
..
..
..
.
.
.
.
n−1
n−1
n−1
a1
a2
· · · an
is
det (H) = (−1)
n2 −n
2
∏
i<j
(ai − aj ).
(6)
(8)
(9)
A⊤
r Pr,L
(10)
(11)
ln β + αb
τmax < 0,
(12)
+ Pr,L Ar − αPr,L < 0,
Ps,0 − βPr,L ≤ 0, s ̸= r,
−P
L(P
)
r,q+1
r,q
where Ψqr =
. Then, the system (7) is globally
τbmin
uniformly asymptotically stable under any switching signal
σ(t) satisfies (23).
Lemma 6 (Simplified Finite-Time Consensus Algorithm without External Disturbances [43]): Consider the multi-agent
system
n
∑
j=1
The following definition and auxiliary lemmas will be used
throughout this paper.
Definition 2 (Path Graph [37]): The path graph Pn is a tree
with two nodes of vertex degree 1, and the other n − 2 nodes
of vertex degree 2.
Lemma 1: [38] If the undirected graph G is connected, then
its Laplacian L ∈ Rn×n has a simple zero eigenvalue (with
eigenvector 1n ) and all its other eigenvalues are positive and
real.
Lemma 2: [39] The Laplacian of a path graph Pn has the
eigenvalues as
(
)
(k − 1) π
, k = 1, . . . , n.
(5)
λk = 2 − 2 cos
n
m×m
q
A⊤
r Pr,q + Pr,q Ar + Ψr − αPr,q < 0,
ṙi = α̃
B. Preliminaries
(7)
ż (t) = Aσ(t) z (t) ,
m
m̄
bij (rj −ri ) n̄ + β̃
n
∑
j=1
p̄
bij (rj −ri ) q̄ , i = 1,. . ., n (13)
where α̃ > 0 and β̃ > 0 are the coupling strengths, bij is
the element of the coupling matrix that describes topology
of an undirected connected communication network and its
corresponding Laplacian matrix is denoted as LA , the odd
numbers m̄ > 0, n̄ > 0, p̄ > 0 and q̄ > 0 that satisfy m̄ > n̄
and p̄ < q̄. Its global finite-time consensus can be achieved,
i.e.,
n
ri (t) −
1∑
ri (0) = 0, ∀t ≥ P, i = 1, . . . , n.
n i=1
Further, the setting time P is bounded by
( m̄−n̄
)
1
n 2n̄
n̄
1 q̄
P <
+
.
λ2 (LA )
α̃ m̄ − n̄ β̃ q̄ − p̄
(14)
(15)
III. P ROBLEM F ORMULATION
The second-order multi-agent system (1) under a control protocol that uses only relative position measurements and involves topology switching is described by
ẋi(t) = vi(t),   (16a)
v̇i(t) = ui(t) = γ Σ_{j=1}^n aij^{σ(t)} (xj(t) − xi(t)),  i = 1, . . . , n   (16b)
where
(i) γ > 0 is the coupling strength;
(ii) σ(t) : [t0 , ∞) → S , {1, 2, . . . , s}, s ∈ N, is
the switching signal of the interaction topology of the
communication network, i.e., σ(t) = pk ∈ S for
t ∈ [tk, tk+1) means the pk-th topology is activated over the time interval [tk, tk+1), k ∈ N0;
(iii) aij^{pk} is the entry of the adjacency matrix that describes the activated pk-th topology of the undirected communication graph over the time interval [tk, tk+1), k ∈ N0.
For the undirected topology, aij^{σ(t)} = aji^{σ(t)}, it should be easy to verify from (16b) that Σ_{i=1}^n v̇i(t) = 0, ∀t ≥ 0, which implies that the average position
x̄(t) = (1/n) Σ_{i=1}^n xi(0) + (1/n) Σ_{i=1}^n vi(0) t,   (17)
proceeds with the constant velocity
v̄ = (1/n) Σ_{i=1}^n vi(t) = (1/n) Σ_{i=1}^n vi(0).   (18)
If the second-order consensus is achieved, the individual velocities will converge asymptotically to the average of the initial velocities, i.e., lim_{t→∞} |vi(t) − v̄| = 0, i = 1, . . . , n. Based on relations (17) and (18), we define the following fluctuation terms:
x̃i(t) = xi(t) − x̄(t),   (19a)
ṽi(t) = vi(t) − v̄.   (19b)
The dynamics (16) can now be expressed equivalently as
x̃̇i(t) = ṽi(t),   (20a)
ṽ̇i(t) = γ Σ_{j=1}^n aij^{σ(t)} (x̃j(t) − x̃i(t)),  i = 1, . . . , n.   (20b)
It follows from (17), (18), and (19) that
1n⊤ x̃(t) = 0, ∀t ≥ 0,   (21)
1n⊤ ṽ(t) = 0, ∀t ≥ 0.   (22)
In practice, high switching frequency can result in high switching cost, which is undesirable. In the context of attack detection by strategic topology switching, low switching frequency is also undesirable since it can result in attacks going undetected for a long time. Taking these remarks into account, we impose the minimum and maximum dwell times on the topology-switching signal σ(t).
Assumption 1: For the second-order multi-agent system (20), the minimum dwell time τmin and maximum dwell time τmax of the topology-switching signal satisfy
τmin ≤ tk+1 − tk ≤ τmax,  ∀k ∈ N0.   (23)
In Section IV, we first show that the multi-agent system (20) under fixed topology, i.e., σ(t) = p ∈ S for t ∈ [0, ∞), is oscillating and has a period. This implies that the multi-agent system (16) under fixed topology cannot achieve the second-order consensus even for large coupling strength. Then, using the stability of switched linear systems (Lemma 5) and the period of the system (20) under fixed topology, a strategy on switching times is derived, which enables the multi-agent system (20) to achieve the second-order consensus without constraint on the magnitude of the coupling strength γ. Based on the strategy on switching times, through employing the finite-time consensus algorithm (Lemma 6), a decentralized topology-switching algorithm is proposed in Section V.

IV. STRATEGY ON SWITCHING TIMES

The objective of this section is to propose a strategy on switching times that enables the multi-agent system (20) to achieve the second-order consensus without constraint on the magnitude of the coupling strength γ.
A. Periodically Oscillating under Fixed Topology
The following lemmas show some important properties of
the system (20) under fixed topology, which will be used to
derive feasible decentralized topology-switching algorithm.
Lemma 7: Consider the following system
x̃̇(t) = −γL ∫₀ᵗ x̃(τ) dτ + ṽ(0),  t ≥ 0   (24)
where γ > 0, L ∈ R^{n×n} is the Laplacian matrix of a connected undirected graph, and x̃(t) ∈ Rⁿ and ṽ(t) = x̃̇(t) ∈ Rⁿ satisfy (21) and (22), respectively. The solutions x̃i(t), i = 1, . . . , n, are obtained as
x̃i(t) = Σ_{l=2}^n qli ( ql⊤ x̃(0) cos(√(γλl) t) + (ql⊤ ṽ(0)/√(γλl)) sin(√(γλl) t) ),  t ≥ 0   (25)
where λl (λ1 = 0) are the nonzero eigenvalues of L associated with the orthogonal eigenvectors ql = [ql1, . . . , qln]⊤ ∈ Rⁿ, l = 2, . . . , n.
Proof: See Appendix A.
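Lemma 7 can be checked numerically. The following Python sketch integrates the fluctuation dynamics (20) under one fixed path-graph topology and compares the result with the closed-form expression (25) at a sample node; the graph, coupling strength, step size, and initial data are illustrative choices of mine.

```python
import numpy as np

# Numerical check of Lemma 7 on a 4-node path graph.
gamma, dt, steps = 2.0, 1e-4, 20_000
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

x0 = np.array([1.0, 2.0, 3.0, 4.0]); v0 = np.array([1.0, 2.0, 3.0, 4.0])
xt = x0 - x0.mean()                         # fluctuations satisfy (21)
vt = v0 - v0.mean()                         # fluctuations satisfy (22)
x_tilde0, v_tilde0 = xt.copy(), vt.copy()

for _ in range(steps):                      # forward Euler on (20)
    xt, vt = xt + dt * vt, vt - dt * gamma * (L @ xt)

t = steps * dt
lam, Q = np.linalg.eigh(L)                  # ascending eigenvalues, lam[0] ~ 0
x1_formula = sum(
    Q[0, l] * (Q[:, l] @ x_tilde0 * np.cos(np.sqrt(gamma * lam[l]) * t)
               + (Q[:, l] @ v_tilde0) / np.sqrt(gamma * lam[l])
               * np.sin(np.sqrt(gamma * lam[l]) * t))
    for l in range(1, 4)                    # skip the zero eigenvalue, as in (25)
)
print("numerical x~_1(t):", xt[0])
print("formula   x~_1(t):", x1_formula)     # agree up to integration error
```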
Lemma 7 implies that the system (24), with x̃ (t) and
ṽ (t) satisfying (21) and (22), can be viewed as one class
of coupled oscillators. However, given some initial conditions
and topologies, the inadmissible energy functions, i.e., nonzero
constant positive functions, of the system (24) are common.
Take the positive function F(t) = γ x̃⊤(t)Lx̃(t)/2 + ṽ⊤(t)ṽ(t)/2 as an example. Differentiating it along the solutions of (24) yields Ḟ(t) = γṽ⊤(t)Lx̃(t) + ṽ⊤(t)ṽ̇(t) = γṽ⊤(t)Lx̃(t) − γṽ⊤(t)Lx̃(t) ≡ 0, which means that given any nonzero initial condition, the function remains a nonzero constant over time, which is inadmissible. Obviously, such inadmissible energy functions
are undesirable since they cannot capture the oscillating property of the system (24). Therefore, it is nontrivial to provide
a guide to construct an admissible energy function.
Lemma 8: Consider the function
F(t) = ϖ x̃⊤(t)x̃(t)/2 + ṽ⊤(t)ṽ(t)/2,   (26)
with x̃̇(t) = ṽ(t) ∈ Rⁿ. Along the solutions (25) of the
system (24), if the Laplacian matrix L has distinct eigenvalues
and ϖ satisfies
0 < ϖ ̸= γλi (L) , ∀i = 2, . . . , n,
(27)
where λ1 (L) , . . . , λn (L) are the corresponding eigenvalues
of L. Then, for any nonzero initial condition, there never exists
a nonzero scalar φ such that
F (t) = φ ̸= 0, ∀t ≥ 0.
Proof: See Appendix B.
(28)
Remark 1: For the undirected communication network considered in this paper, there indeed exist many topologies whose associated Laplacian matrices have distinct eigenvalues. Noticing that 0 ≤ (k − 1)π/n < π, ∀k = 1, . . . , n, Lemma 2 implies that the Laplacian matrix of a path graph has distinct eigenvalues. Moreover, Lemma 3 provides a guide to design such topologies with distinct Laplacian eigenvalues.
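The role of the function (26) is easy to visualize numerically. The minimal Python sketch below evaluates F(t) with ϖ = 8.1 (a value satisfying (27) for γ = 2 on a 4-node path graph) along a trajectory of (20), and shows that F(t) is not constant, which is the property Lemma 8 guarantees; the step size and initial data are illustrative assumptions.

```python
import numpy as np

# Evaluate the energy function (26) along a trajectory of (20) under a fixed path graph.
gamma, varpi, dt, steps = 2.0, 8.1, 1e-3, 6_000
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A              # path graph: distinct Laplacian eigenvalues

x = np.array([1.0, 2.0, 3.0, 4.0]); x -= x.mean()
v = np.array([1.0, 2.0, 3.0, 4.0]); v -= v.mean()

F = []
for _ in range(steps):
    F.append(0.5 * varpi * x @ x + 0.5 * v @ v)   # F(t) of (26)
    x, v = x + dt * v, v - dt * gamma * (L @ x)   # forward Euler on (20)

print("min F(t):", min(F))
print("max F(t):", max(F))    # the two differ, so F(t) is not constant over time
```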
with
(31)
ξ < α,
− ln β
0 < τbmax <
,
α
) L
mT ( − 1
− β L −1
,
0 < τbmax +
2
α−ξ
ξ=
max
{1 − γλi (Lr ), −1 + γλi (Lr )} ,
r∈S,i=1,...,n
B. Dwell Time
The system under switching topology (20) can be modeled
as a switched linear system (7). Lemma 7 shows that each
subsystem of the switched system (20), i.e., the multi-agent
system (20) under each fixed topology, is not stable. The
equilibrium point of the multi-agent system (20) is (x̃∗ , ṽ ∗ ) =
(0n , 0n ). Hence, the problem of strategic topology switching
studied in the following sections would be designing the
stabilizing switching rule for the switched systems without
stable subsystems. Let us first recall a technical lemma regarding the stability of switched linear systems without stable
subsystems–Lemma 5, which will be used to derive a strategy
on switching times that enables agents to achieve consensus
without constraint on the magnitude of coupling strength.
However, without exploring and exploiting additional system
information, the conditions in Lemma 5 are not feasible for
the system (20).
Proposition 1: For the multi-agent system (20), the conditions in Lemma 5 are infeasible.
Proof: See Appendix C.
Let σ(t) = g ∈ S for t ∈ [tk , tk+1 ), k ∈ N0 .∫ Then, the
t
dynamics (20) can be rewritten as x̃˙ (t) = −γLg tk x̃ (τ )dτ
+ ṽ(tk ), t ∈ [tk , tk+1 ). Therefore, Lemma 7 implies that the
multi-agent agent system (20) under each fixed topology has
a period T such that
{
ṽi (t) = ṽi (t + T ) , i = 1, . . . , n
(29)
x̃i (t) = x̃i (t + T ) , ∀t ∈ [tk , tk+1 ) , ∀k ∈ N0 .
Remark 2: The solution (25) implies that to obtain a
small period T , the control designer needs the knowledge
of the eigenvalues and eigenvectors of the Laplacian matrix
that describes the network structure. However, for large-scale
networked systems, the network structure may not be available
to the designer. Fortunately, the well-developed distributed
eigenvalue and eigenvector estimation algorithms [44], [45]
well address this issue.
The period T can be used to make Lemma 5 applicable
to the multi-agent system (20) to derive a strategy on the
switching times.
Theorem 1: Consider the second-order multi-agent system (20). For the given period T satisfying (29), scalars
1 > β > 0, α > 0 and L ∈ N, if the dwell time τ , the
minimum dwell time τmin and the maximum dwell time τmax
satisfy
1
(β − L −1)
mT
L
< τmin ≤ τ ≤ τmax , τbmax +
, m ∈ N (30)
α−ξ
2
(32)
(33)
(34)
where γ is the coupling strength of the multi-agent system (20), and λi (Lr ) , . . . , λn (Lr ) are the corresponding
eigenvalues of the Laplacian matrix Lr . Then, the secondorder consensus is achieved.
Proof: We note that the multi-agent system (20) can
be described by switched system (7) where z (t) =
⊤
[x̃1 (t) , . . . , x̃n (t) , ṽ1 (t) , . . . , ṽn (t)] ∈ R2n and
[
]
0
I
Aσ(t) =
.
(35)
−γLσ(t) 0
For each activated topology of system (20), let us consider
the positive definite matrix
[
]
P̂r,q
0
Pr,q =
> 0,
(36)
0
P̂r,q
where
q
P̂r,q = β − L hI, q = 0, . . . , L, ∀r ∈ S
(37)
with h being a positive scalar. It follows from (37) that
1
P̂r,q = β L P̂r,q+1 , q = 0, . . . , L − 1, ∀r ∈ S,
(38)
Substituting the matrices Pr,q (36) and Aσ(t) (35)
conditions (8), (9) and (10), respectively, yields
[
]
Qr,q
(I − γLr )P̂r,q
∆
Rr,q =
< 0,
(I − γLr )P̂r,q
Qr,q
]
[
Q̆r,q
(I − γLr )P̂r,q+1
∆
< 0,
R̆r,q =
(I − γLr )P̂r,q+1
Q̆r,q
[
]
−αP̂r,L
(I − γLr )P̂r,L
∆
Sr,L =
< 0,
(I − γLr )P̂r,L
−αP̂r,L
into
P̂s,0 = β P̂r,L ,
∀r ̸= s ∈ S.
(39)
(40)
(41)
(42)
where
L
(P̂r,q+1 − P̂r,q ) − αP̂r,q ,
τmin
L
(P̂r,q+1 − P̂r,q ) − αP̂r,q+1 .
=
τmin
Qr,q =
(43)
Q̆r,q
(44)
Let W be the matrix that is orthogonal to the
∆
symmetric matrix Lr , for which W ⊤ Lr W = Λr =
diag {0, λ2 (Lr ) , . . . , λn (Lr )}. Considering the matrices Pr,q
and P̂r,q , in (36) and (37), the conditions (40), (41) and (42)
can be equivalently expressed in term of eigenvalue as
L
(P̂r,q+1 − P̂r,q ) − αP̂r,q ± (1 − γΛr ) P̂r,q < 0,
(45)
τmin
L
(P̂r,q+1 − P̂r,q )−αP̂r,q+1 ±(1 − γΛr ) P̂r,q+1 < 0, (46)
τmin
− αP̂r,L ± (1 − γΛr ) P̂r,L < 0.
(47)
6
In view of Lemma 5, to prove the second-order consensus,
it suffices to verify that the conditions (8)–(12) are satisfied,
as carried out in the following four steps.
Step One: It follows from (36), (38), and (39) that Ps,0 =
βPr,L , r ̸= s ∈ S. Thus, the condition (11) in Lemma 5 holds.
Step Two: To obtain Lemma 5, the considered discretized
Lyapunov function for mode r ∈ S in [42] is
{
(q)
z ⊤(t) Pr (ζ)z(t) , t ∈ Nk,q , q = 0, 1,. . . ,L−1
Vr (t) = ⊤
(48)
z (t) Pr,L z (t) , t ∈ [tk + τmin , tk+1 )
(q)
where Pr (ζ) = (1 − ζ) Pr,q + ζPr,q+1 with 0 < ζ < 1,
min
Nk,q = [tk + θq , tk + θq+1 ), θq+1 = (q+1)τ
, Pr,q > 0,
L
q = 0, 1, . . . , L − 1. In [42], the purposes of the condition (12)
is to ensure that
Vσ(tk ) (tk + τbmax ) ≤ β ∗ Vσ(t− ) (tk ) ,
k
(49)
with 1 > β > 0. Considering the right-hand side of (30), i.e.,
τmax , τbmax + mT
2 , from (48) one has
∗
(50)
T
= Vr (tk + τbmax + m )
2
T
T
⊤
= z (tk + τbmax + m )Pr,L z(tk + τbmax + m ), ∀r ∈ S.
2
2
Lemma 7 shows that system (20) under fixed topology has
period T , for which (29) holds. Then, noting that Pr,L > 0
and the solutions (25), we have
T
T
z ⊤ (tk + τbmax + m )Pr,L z(tk + τbmax + m )
(51)
2
2
= z ⊤ (tk + τbmax )Pr,L z(tk + τbmax ) , Vr (tk + τbmax ), ∀r ∈ S.
Vr (tk + τmax )
Combining (50) with (51) yields
Vr (tk + τmax ) = Vr (tk + τbmax ), ∀r ∈ S.
(52)
We note that the condition (32) is equivalent to αb
τmax +
ln β < 0, which corresponds to the condition (12) in Lemma 5.
Without loss of generality, let σ (tk ) = r ∈ S. Then, it follows
from (49) and (52) that
Vσ(tk ) (tk +τmax ) = Vσ(tk ) (tk +b
τmax ) ≤ β ∗ Vσ(t− ) (tk ) . (53)
k
From (53), we conclude that mT
2 , m ∈ N, which is imposed
on the right-hand side of (30), is to maintain the original
goal of the condition (12) through keeping (53) holding, while
ensuring τmax ≥ τmin . This also means that it is the period T
that makes Lemma 5 applicable to system (20).
Step Three: Since P̂r,L > 0, condition (31) implies
0 > −αP̂r,L + ξ P̂r,L , while condition (34) implies ξ ≥
± (1 − γΛr ). Thus, 0 > −αP̂r,L + ξ P̂r,L > −αP̂r,L ±
(1 − γΛr ) P̂r,L . From (47), the condition (10) in Lemma 5
is satisfied.
Step Four: It follows from (31) and the left-hand side
of (30) that
1
(α − ξ) τmin
+ 1 > β− L .
(54)
L
Considering h > 0, from (38) and (54) one has 1 +
1
P̂
(α−ξ)τmin
> P̂r,q+1 = β − L , which is equivalent to
L
Since 1 > β > 0, relation (38) implies that P̂r,q < P̂r,q+1 .
Condition (31) is equivalent to α − ξ > 0. Therefore, (55)
implies
L
(P̂r,q+1−P̂r,q )−(α−ξ) P̂r,q+1 < 0, q = 0,. . . ,L − 1. (56)
τmin
According to (34), 0 > −αP̂r,q+1 + ξ P̂r,q+1 > −αP̂r,q+1 ±
(1 − γΛr ) P̂r,q+1 and 0 > −αP̂r,q + ξ P̂r,q > −αP̂r,q ±
(1 − γΛr ) P̂r,q , which together with (55) and (56) imply (45)
and (46), respectively. Thus, the conditions (8) and (9) in
Lemma 5 hold and the proof is now completed.
Remark 3: Theorem 1 shows that the strategy on switching
times enables the system (20) that its control protocol uses
only the relative position measurements to achieve the secondorder consensus. Moreover, the strategy has no constraint on
the magnitude of coupling strength in achieving consensus,
which maintains the advantage of the control protocol (4)
studied in [5].
The conditions (30) and (34) in Theorem 1 imply that
the coupling strength affects the minimum dwell time and
maximum dwell time of topology switching signals, which
may further affect the convergence speed to consensus.
Remark 4: For the problem of security in networked systems, in the situation where the defender or the system operator has no knowledge of the attack-starting time, Theorem 1
indicates when the system dynamics should have changes
(caused by topology switching) to reveal zero-dynamics attack [36], so that the changes do not destroy the system
stability in the absence of attacks.
V. D ECENTRALIZED T OPOLOGY S WITCHING
The finite-time consensus algorithm–Lemma 6, can be used
to estimate the global coordinators precisely in finite time.
It is employed to derive a decentralized topology-switching
algorithm. Furthermore, from (15) we can see that through
adjusting the control gains α̃ and β̃, we obtain any desirable
setting time ∞ > P > 0.
Let us adjust parameters α̃ > 0 and β̃ > 0 in the
algorithm (13), such that
( m̄−n̄
)
1
n 2n̄
n̄
1 q̄
+
< τmin .
(57)
λ2 (LA )
α̃ m̄ − n̄ β̃ q̄ − p̄
Therefore, the setting time P in (15) satisfies P < τmin .
Condition (57) together with (14) and (15) show that if we
input individual data
Fi (tk ) =
1
ϖ 2
x̃i (tk ) + ṽi2 (tk ) .
2
2
and Ḟi (tk ) to the corresponding agent i in the algorithm (13)
at time tk , at time tk + τmin each agent in algorithm (13) will
output the exact global coordinators:
n
Fi (tk + τmin ) =
τmin
(P̂r,q+1 − P̂r,q )−(α−ξ) P̂r,q < 0, q = 0, . . . , L−1. (55)
1∑
∆ 1
Fi (tk ) = F (tk ),
n i=1
n
(59)
n
r,q
L
(58)
Ḟi (tk + τmin ) =
1∑
∆ 1
Ḟi (tk ) = Ḟ (tk ).
n i=1
n
(60)
7
Based on Lemma 8 and Theorem 1, through employing the
finite-time consensus algorithm (13), we propose the following
Algorithm 1, which is a decentralized topology-switching
algorithm.
Algorithm 1: Decentralized Topology-Switching Algorithm
Input: Topology set S that includes more than one
connected topology, at least one of which has
distinct Laplacian eigenvalues; individual
functions Fi (tk−1 ) (58) with ϖ satisfying (27);
initial time tk−1 = 0; initial topology Gσ(tk−1 ) ;
dwell time τ generated by Theorem 1;
loop-stopping criteria δ ≥ 0.
1 while F (tk−1 ) > δ do
2
Input individuals Fi (tk ) and Ḟi (tk ) to agent i in the
finite-time consensus algorithm (13) at time tk ;
3
Output F (tk ) by (59) and Ḟ (tk ) by (60) from the
finite-time consensus algorithm (13) to the agents
in (20) at time tk + τmin ;
4
Run until tk+1 ← tk + τ ;
5
if Ḟ (tk ) = 0 then
6
Switch the topology of network (20b) to σ(tk+1 )
that satisfies:
• σ(tk+1 ) ̸= σ(tk ),
• Lσ(tk+1 ) has distinct eigenvalues.
7
else
8
Switch the topology of network (20b) to σ(tk+1 )
that satisfies:
• σ(tk+1 ) ̸= σ(tk ),
9
end
10
Update the topology-switching time: tk−1 ← tk ;
11
Update the topology-switching time: tk ← tk+1 .
12 end
Theorem 2: Consider the system (20). If its topologyswitching signal is generated by Algorithm 1, then the following properties hold:
(i)
if the loop-stopping criteria δ = 0 (in Line 1 of
Algorithm 1), the agents achieve the second-order
consensus;
(ii)
if the loop-stopping criteria δ > 0, the agents achieve
the second-order consensus under admissible consensus error δ through finitely topology switching, i.e.,
F (tk̄ ) ≤ δ with 0 < k̄ < ∞ and F (t) given by (26).
Proof of Theorem 2: We first prove property (i). The
loop-stopping criteria δ = 0 means topology will stop switching when F (tk ) = 0. The definition of F (t) in (26) implies
that lim F (t) = 0 is equivalent to (2). This analysis means
t→∞
the topology switching will not stop until the second-order
consensus is achieved. Since the provided dwell time τ in
Input of Algorithm 1 is generated by Theorem 1, we conclude
that property (i) holds.
We now show that property (ii) holds. Assume the function
F (t) is a non-zero constant over time. If F (0) = φ > δ,
from (28) we have F (t) ≡ φ > δ for ∀t ≥ 0. Thus, Line 1
Table II
C ANDIDATE T OPOLOGY S ET S = {1∗ , 2∗ }
Index σ(t)
1∗
2∗
σ(t)
a12
1
1
σ(t)
a13
0
1
σ(t)
a14
0
0
σ(t)
a23
1
0
σ(t)
a24
0
1
σ(t)
a34
1
1
of Algorithm 1 implies that in this situation the topology will
never stop switching regardless of whether the consensus is
achieved. The objective of Line 5 and Line 6 in Algorithm 1
is to switch to a topology whose associated Laplacian matrix
has distinct eigenvalues when Ḟ (tk ) = 0. Hence, by Lemma 8,
F (t) cannot be a constant over time if Ḟ (tk ) = 0. Obviously,
if Ḟ (tk ) ̸= 0, F (t) cannot be constant over time. Therefore,
by Lemma 8 we conclude property (ii) under Algorithm 1.
Remark 5 (Nonzero Loop-Stopping δ): We note that, in real
world applications of consensus algorithms, such as decentralized computation and distributed optimization, objectives must
be achieved in finite time, rather than in infinite horizon. This
explains motivation behind the nonzero loop-stopping δ (property (ii) in Theorem 2), which achieves “nearly consensus” in
finite time.
VI. S IMULATION
The simulations on a second-order system with n = 4
agents will be presented to demonstrate the effectiveness of
the proposed topology switching algorithm. Initial position and
⊤
velocity conditions are set as x(0) = v(0) = [1, 2, 3, 4] .
To illustrate the effectiveness of the topology-switching
algorithm–Algorithm 1, in achieving the second-order consensus, the topology set S, which is described in Table II,
provided to Algorithm 1 includes only two topologies: the
path graph and the ring graph. The eigenvalues of Laplacian
matrices of the two topologies in Table II are computed as
[λ1 (L1 ) , λ2 (L1 ) , λ3 (L1 ) , λ4 (L1 )] = [0, 0.6, 2, 3.4] , (61)
[λ1 (L2 ) , λ2 (L2 ) , λ3 (L2 ) , λ4 (L2 )] = [0, 2, 2, 4] .
(62)
Lemma 7 implies that the states of the system (20) under
fixed topology keep oscillating, which means that even with
a very large coupling strength, the second-order cannot be
achieved in the situation of fixed topology. This is verified by
the trajectories in Figure 1, where the coupling strength is set
as large as γ = 100. Figure 1 also shows that the states of the
system (20) under each fixed connected topology are periodic.
A. Second-Order Consensus
Let us set the coupling strength as small as γ = 2. Using
the eigenvalues computed in (61) and (62), and the state solutions (25), the period is calculated as T ≈ 6. Considering (61)
and (62), we use (34) to obtain ξ = 7. Following (31), set
α = 7.5 and let β = 0.4 and L = 1. It follows from (32) and
the left-hand side of (30) that τ̂max < 0.1222 and τmin > 3.
Let m = 1. Following (30) and (33), we choose the dwell time
τ = T2 + 0.1 = 3.1. Then, under Algorithm 1, trajectories of
position differences and velocity differences of system (16)
are shown in Figures 2 (a) and (b), respectively. Figure 2
8
(a) Under Fixed Topology 1*, Coupling Strenth
350
(a)
= 100
4
Position Differences
300
V(t)
250
200
150
100
50
2
0
-2
0
0
5
10
Time
(b) Under Fixed Topology 2*, Coupling Strenth
15
-4
0
= 100
50
100
1000
800
6
600
4
400
200
0
0
5
10
15
Time
∑
Figure 1.
Trajectories of V (t) =
(xi (t) − xj (t))2 +
i<j
∑
(vi (t) − vj (t))2 : (i) the second-order consensus cannot be achieved
i<j
under fixed topology, (ii) the system (20) has period under fixed topology.
shows that in the absence of attacks, Algorithm 1 succeeds
in achieving the second-order consensus. Thus, property (i) in
Theorem 2 is verified.
B. Finitely Topology Switching
For the condition (27), we consider ϖ
>
max
{γλi (Lr )}. From γ = 2 and the eigenvalues
i=2,...,n,r=1,2
in (61) and (62), we choose ϖ = 8.1. Let the loop-stopping
criteria δ = 2. Then, under Algorithm 1, the trajectory of
F (t) given in (26) and the topology-switching signal are
shown in Figures 3 (a) and (b), respectively. Figure 3 well
illustrates property (ii) in Theorem 2.
VII. C ONCLUSION
This Part-I paper explains how to take the network topology as a control variable for the second-order multi-agent system. The obtained results highlight the merits of topology switching in achieving the second-order consensus: (i) the control protocol does not need velocity measurements, and (ii) the topology-switching algorithm has no constraint on the magnitude of the coupling strength. The strategy on switching times provides a basis for the strategic topology switching studied in the Part-II paper [1]: when the topology of the multi-agent system should switch to make changes in the system dynamics in order to reveal the zero-dynamics attack, such that the changes do not destroy the agents' ability to reach consensus in the absence of attacks.

Figure 2. In the absence of attacks, trajectories of position differences and velocity differences: the second-order consensus is achieved by Algorithm 1.
APPENDIX A
PROOF OF LEMMA 7

By Lemma 1, we arrange the eigenvalues of L in increasing order as 0 = λ1 < λ2 ≤ . . . ≤ λn. Since L is a real symmetric matrix, there exists an orthogonal matrix Q ≜ [q1; . . . ; qn] ∈ R^{n×n} with qi = [qi1, qi2, . . . , qin]⊤ ∈ R^n, i = 1, . . . , n, such that

Q⊤ = Q⁻¹,   (63)
q11 = q12 = . . . = q1n,   (64)
Q⊤ L Q = diag{0, λ2, . . . , λn} ≜ Λ.   (65)
We denote X(s) = L{x̃(t)}, where L(·) stands for the Laplace transform operator. The Laplace transform of the dynamics (24) can be obtained as

sX(s) − x̃(0) = −(γ/s) L X(s) + (1/s) ṽ(0),   (66)

which is equivalent to

X(s) = (s²I + γL)⁻¹ (s x̃(0) + ṽ(0)).   (67)

Let Λ̃(s) = diag{1/s, s/(s² + γλ2), . . . , s/(s² + γλn)} and Λ̄(s) = diag{1/s², 1/(s² + γλ2), . . . , 1/(s² + γλn)}. It follows from (63) and (65) that

sI/(s²I + γL) = sI/(Q(s²I + γΛ)Q⊤) = Q Λ̃(s) Q⊤,   (68)
I/(s²I + γL) = I/(Q(s²I + γΛ)Q⊤) = Q Λ̄(s) Q⊤.   (69)

Let Xi(s), i = 1, . . . , n, be the ith element of X(s). From (67), (68), and (69),

Xi(s) = q1i q1⊤ ((1/s) x̃(0) + (1/s²) ṽ(0)) + Σ_{l=2}^{n} (s/(s² + γλl)) qli ql⊤ (x̃(0) + (1/s) ṽ(0)).   (70)

It follows from (21), (22) and (64) that

q1⊤ x̃(0) = 0,   (71)
q1⊤ ṽ(0) = 0.   (72)

Combining (70) with (71) and (72) yields

Xi(s) = Σ_{l=2}^{n} (s/(s² + γλl)) qli ql⊤ x̃(0) + Σ_{l=2}^{n} (1/(s² + γλl)) qli ql⊤ ṽ(0).

Considering γ > 0 and λl > 0, l = 2, . . . , n, the solution (25) can be obtained immediately from the inverse Laplace transform of Xi(s).

Figure 3. Trajectory of F(t) and the topology-switching signal σ(t): after fifteen topology switches, the energy function F(t) is under the preset admissible error δ = 2, i.e., F(t15) < δ = 2. Panels: (a) the positive function F(t), with loop-stopping criterion δ = 2; (b) the switching signal σ(t), taking values 1* and 2*.
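The inverse Laplace transform that closes the proof of Lemma 7 involves only the two scalar transforms s/(s² + γλl) and 1/(s² + γλl); they can be checked symbolically. The sketch below abbreviates γλl by a single positive symbol a, an abbreviation introduced here purely for illustration.

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
a = sp.symbols('a', positive=True)  # stands in for gamma * lambda_l > 0

f1 = sp.inverse_laplace_transform(s / (s**2 + a), s, t)
f2 = sp.inverse_laplace_transform(1 / (s**2 + a), s, t)
# expected: cos(sqrt(a)*t) and sin(sqrt(a)*t)/sqrt(a)
# (depending on the SymPy version, a Heaviside(t) factor may appear)
print(sp.simplify(f1))
print(sp.simplify(f2))
```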
APPENDIX B
PROOF OF LEMMA 8

Let us assume that (28) holds, from which we obtain

∂^d F(t)/∂t^d = 0, ∀d ∈ N, ∀t ≥ 0.   (73)

It follows from the dynamics (24) that

x̃̈(t) = −γL x̃(t), ∀t ≥ 0,   (74)
ṽ̈(t) = −γL ṽ(t), ∀t ≥ 0.   (75)

The rest of the proof is divided into two steps.
Step One: For the dynamics (24), relations (73), (74), and (75) imply

∂^d F(t)/∂t^d = ∂^{d−1} Ḟ(t)/∂t^{d−1}
= ∂^{d−1}/∂t^{d−1} (ṽ⊤(t)(ϖI − γL) x̃(t))
= ∂^{d−2}/∂t^{d−2} (2 (ṽ⊤(t)(ϖI − γL) ṽ(t) − x̃⊤(t) γL (ϖI − γL) x̃(t)))
= ∂^{d−4}/∂t^{d−4} ((−2)³ (ṽ⊤(t) γL (ϖI − γL) ṽ(t) − x̃⊤(t)(γL)² (ϖI − γL) x̃(t)))
= ∂^{d−2m}/∂t^{d−2m} ((−1)^{m−1} 2^{2m−1} (ṽ⊤(t)(γL)^{m−1}(ϖI − γL) ṽ(t) − x̃⊤(t)(γL)^m (ϖI − γL) x̃(t)))
= 0, ∀m ∈ N, ∀d > 2m ∈ N, ∀t ≥ 0,

which then implies

x̃⊤(t)(γL)^m (ϖI − γL) x̃(t) = ṽ⊤(t)(γL)^{m−1}(ϖI − γL) ṽ(t), ∀m ∈ N, ∀t ≥ 0.   (76)

It follows from (76), (63), and (65) that

x̃⊤(t) Q γ^m Λ^m (ϖI − γΛ) Q⊤ x̃(t) = ṽ⊤(t) Q γ^{m−1} Λ^{m−1} (ϖI − γΛ) Q⊤ ṽ(t), ∀m ∈ N, ∀t ≥ 0,   (77)

where Λ is given in (65), and Q = [q1; . . . ; qn] ∈ R^{n×n} is an orthogonal matrix of the real symmetric matrix L, where q1, q2, . . . , qn are orthogonal vectors that correspond to the eigenvalues 0 = λ1, λ2, . . . , λn of L. Let

x̂(t) = Q⊤ x̃(t),   (78)
v̂(t) = Q⊤ ṽ(t).   (79)

Considering (71) and (72), from (78) and (79), one has x̂1(t) = 0 and v̂1(t) = 0, respectively. Hence, x̂(t) and v̂(t) can be rewritten as

x̂(t) = [0, x̂2(t), x̂3(t), . . . , x̂n(t)]⊤ ∈ R^n,   (80)
v̂(t) = [0, v̂2(t), v̂3(t), . . . , v̂n(t)]⊤ ∈ R^n.   (81)
Noting that γ > 0 and the matrix Λ given in (65), we conclude that the relation (77) is equivalent to

Σ_{i=2}^{n} γ λi^m (ϖ − γλi) x̂i²(t) = Σ_{i=2}^{n} λi^{m−1} (ϖ − γλi) v̂i²(t), ∀m ∈ N, ∀t ≥ 0,

which is also equivalent to

Σ_{i=2}^{n} λi^{m−1} (ϖ − γλi) (γλi x̂i²(t) − v̂i²(t)) = 0, ∀m ∈ N, ∀t ≥ 0.   (82)

Let us denote

ẑi(t) = (ϖ − γλi) (γλi x̂i²(t) − v̂i²(t)), i = 2, . . . , n,   (83)

and the Vandermonde matrix

H = [ 1, 1, · · · , 1, 1; λ2, λ3, · · · , λ_{n−1}, λn; λ2², λ3², · · · , λ_{n−1}², λn²; ⋮ ; λ2^{n−2}, λ3^{n−2}, · · · , λ_{n−1}^{n−2}, λn^{n−2} ] ∈ R^{(n−1)×(n−1)}.   (84)

Considering (83) and (84), one obtains from (82) that

H ẑ(t) = 0_{n−1},   (85)

where ẑ(t) = [ẑ2(t), . . . , ẑn(t)]⊤ ∈ R^{n−1}.
The condition that L has distinct eigenvalues implies that the elements λl, l = 2, . . . , n, in the Vandermonde matrix H given by (84) are also distinct. Hence, by Lemma 4, det(H) ≠ 0. Therefore, one concludes that the solution of (85) is ẑ(t) = 0_{n−1}. Furthermore, considering the condition (27), one obtains from (83) that γλi x̂i²(t) = v̂i²(t), ∀i = 2, . . . , n, ∀t ≥ 0, which is equivalent to

v̂(t) ≡ ∆ x̂(t), ∀t ≥ 0,   (86)

where

∆ = diag{0, ±√(γλ2), . . . , ±√(γλn)} ∈ R^{n×n}.   (87)

Step Two: In view of the dynamics (24), it follows from (26), (65), (73), (74), (78), (79), and (86) that

∂^m F(t)/∂t^m = ∂^{m−1} Ḟ(t)/∂t^{m−1}
= ∂^{m−1}/∂t^{m−1} (ϖ ṽ⊤(t) x̃(t) + ṽ⊤(t) ṽ̇(t))
= ∂^{m−1}/∂t^{m−1} (ṽ⊤(t)(ϖI − γL) x̃(t))
= ∂^{m−1}/∂t^{m−1} (ṽ⊤(t) Q (ϖI − γΛ) Q⊤ x̃(t))
= ∂^{m−1}/∂t^{m−1} (v̂⊤(t)(ϖI − γΛ) x̂(t))
= ∂^{m−1}/∂t^{m−1} (x̂⊤(t) ∆ (ϖI − γΛ) x̂(t))
= ∂^{m−2}/∂t^{m−2} (2 v̂⊤(t) ∆ (ϖI − γΛ) x̂(t))
= ∂^{m−2}/∂t^{m−2} (2 x̂⊤(t) ∆² (ϖI − γΛ) x̂(t))
= 2^{m−1} x̂⊤(t) ∆^m (ϖI − γΛ) x̂(t)
= 0, ∀m ∈ N, ∀t ≥ 0,   (88)

which can be expressed equivalently in terms of the entries of the matrices Λ in (65) and ∆ in (87) as

Σ_{i=2}^{n} (±√(γλi))^m (ϖ − γλi) x̂i²(t) ≡ 0, ∀m ∈ N, ∀t ≥ 0.   (89)

Let m belong to the set of even numbers. Noting the defined Vandermonde matrix H in (84), from (89) one has

H Λ̆ z̄(t) = 0_{n−1}, ∀t ≥ 0,   (90)

where Λ̆ = diag{λ2, . . . , λn} ∈ R^{(n−1)×(n−1)} and z̄(t) = [z̄2(t), . . . , z̄n(t)]⊤ ∈ R^{n−1} with

z̄i(t) = (ϖ − γλi) x̂i²(t), i = 2, . . . , n.   (91)

As obtained in the previous step (Step One) of the proof, det(H) ≠ 0. Since det(Λ̆) ≠ 0, one has det(HΛ̆) = det(H) det(Λ̆) ≠ 0. Therefore, the solution of (90) is z̄(t) ≡ 0_{n−1}, ∀t ≥ 0. We note that the condition (27) is equivalent to ϖ − γλi ≠ 0, ∀i = 2, . . . , n, which together with (91) implies that the obtained solution z̄(t) = 0_{n−1}, ∀t ≥ 0, is equivalent to x̂i²(t) = 0, ∀t ≥ 0, ∀i = 2, . . . , n. Now, considering (86), one has v̂i²(t) = 0, ∀t ≥ 0, ∀i = 2, . . . , n. Finally, noting that the orthogonal matrix Q is full-rank, from (78), (79), (80), and (81) one concludes that x̃(t) = 0n and ṽ(t) = 0n, ∀t ≥ 0, which contradicts (28). Thus, the proof is complete.

APPENDIX C
PROOF OF PROPOSITION 1

The proof can be finished by contradiction. Let us assume that the conditions in Lemma 5 are feasible along the solutions of the dynamics (20). We write the multi-agent system (20) in the form of the switched system (7), where Aσ(t) is given by (35). We now consider a positive definite matrix

Pr,q = [ Gr,q, Vr,q; Vr,q⊤, Sr,q ] > 0,

where Gr,q, Vr,q, Sr,q ∈ R^{n×n}. Then, from (35), we have

Γr,q ≜ Ar⊤ Pr,q + Pr,q Ar = [ −γLr Vr,q⊤ − γVr,q Lr, Gr,q − γLr Sr,q; Gr,q − γSr,q Lr, Vr,q + Vr,q⊤ ].   (92)

If Γr,q is negative definite, we have −γLr Vr,q⊤ − γVr,q Lr < 0 and Vr,q + Vr,q⊤ < 0. For an undirected topology, it follows from Lemma 1 that Lr ≥ 0. Thus, Vr,q + Vr,q⊤ < 0 contradicts −γLr Vr,q⊤ − γVr,q Lr < 0. Therefore, Γr,q cannot be negative definite. Then, noting that Γr,q is a real and symmetric matrix, one concludes that Γr,q has at least one non-negative real eigenvalue.
Let Φr,q be an orthogonal matrix of Γr,q (92) such that Φr,q⊤ Γr,q Φr,q = diag{λr,q^1, λr,q^2, . . . , λr,q^{2n}} ≜ Xr^q with λr,q^1 ≥ 0, where λr,q^1, . . . , λr,q^{2n} are the corresponding eigenvalues of the matrix Γr,q (92). Then, by denoting Ψ̆r^q = Φr,q⊤ Ψr^q Φr,q and P̆r,q = Φr,q⊤ Pr,q Φr,q, the conditions (8) and (11) are expressed equivalently as

Xr^q + Ψ̆r^q − α P̆r,q < 0, ∀r ∈ S, ∀q = 0, 1, . . . , L − 1,   (93)
P̆s,0 − β P̆r,L ≤ 0, ∀s ≠ r ∈ S.   (94)

Let p̆r,q be the element located at the first row and the first column of the matrix P̆r,q, i.e., [P̆r,q]_{1,1} = p̆r,q. Considering Ψ̆r^q = L(P̆r,q+1 − P̆r,q)/τmin and noting that the first element of Xr^q is non-negative, i.e., λr,q^1 ≥ 0, from (93) and (94) we conclude that (8) and (11) imply

(L/τmin)(p̆r,q+1 − p̆r,q) < α p̆r,q, ∀r ∈ S, q = 0, 1, . . . , L − 1,   (95)
p̆s,0 ≤ β p̆r,L, ∀s ≠ r ∈ S.   (96)

Since P̆r,q, ∀r ∈ S, ∀q = 0, 1, . . . , L, is positive definite, p̆r,q > 0, ∀r ∈ S, ∀q = 0, 1, . . . , L. Thus, (95) is equivalent to

τmin > (L/α)(p̆r,q+1/p̆r,q − 1), ∀q = 0, . . . , L − 1, ∀r ∈ S.   (97)

Condition (12) is equivalent to τmax < −(ln β)/α, which together with (97) and τmax ≥ τmin yields

−ln β > L (p̆r,q+1/p̆r,q − 1), ∀q = 0, . . . , L − 1, ∀r ∈ S.   (98)

Condition (96) implies

p̆r,L/p̆s,0 ≥ 1/β > 1 and p̆s,L/p̆r,0 ≥ 1/β > 1,   (99)

which further implies

p̆s,0⁻¹ p̆r,0⁻¹ p̆s,L p̆r,L = ∏_{q=0}^{L−1} p̆s,q⁻¹ p̆r,q⁻¹ p̆s,q+1 p̆r,q+1 ≥ β^{−2} > 1, ∀r ≠ s ∈ S.

Considering (99), one can pick a number q̂ ∈ {0, 1, . . . , L − 1} such that p̆s,q̂⁻¹ p̆r,q̂⁻¹ p̆s,q̂+1 p̆r,q̂+1 ≥ β^{−2/L} > 1, ∀r ≠ s ∈ S, which also implies that one of the indices s and r, say r, satisfies

p̆r,q̂⁻¹ p̆r,q̂+1 ≥ β^{−1/L} > 1, r ∈ S.   (100)

Combining (98) with (100) yields −ln β > L(β^{−1/L} − 1), which is equivalent to

β < e^{L(1 − β^{−1/L})}, β ∈ (0, 1).   (101)

For (101), let us consider the function

g̃(β) = e^{L(1 − β^{−1/L})} − β, β ∈ (0, 1).   (102)

It is straightforward to verify from (102) that g̃(0) = 0 and g̃(1) = 0, which imply that under any fixed L ∈ N < ∞, the function g̃(β) has at least one extreme point over the interval (0, 1); hence, g̃(β) > 0 could hold somewhere in (0, 1) only if g̃ is positive at such an extreme point. The derivative of g̃(β) (102) with respect to β ∈ (0, 1) is obtained as

∂g̃(β)/∂β = e^{L(1 − β^{−1/L})} β^{−(1/L + 1)} − 1.   (103)

For ∂g̃(β∗)/∂β∗ = 0, one obtains from (103) that e^{L(1 − β∗^{−1/L})} = β∗^{(1/L + 1)}, substituting which into (102) yields that at the extreme point g̃(β∗) = β∗^{(1/L + 1)} − β∗ = β∗(β∗^{1/L} − 1) < 0. Then, noting g̃(0) = 0 and g̃(1) = 0, we conclude that under any fixed L ∈ N < ∞, g̃(β) < 0 for all β ∈ (0, 1). Therefore, (101) never holds (an illustration is given in Figure 4, where we take L = 1 as an example), which is a contradiction, and this completes the proof.

Figure 4. Take L = 1 as an example: e^{(1 − β^{−1})} < β for all β ∈ (0, 1).
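The final step, namely that g̃(β) in (102) is negative on (0, 1) so that (101) can never hold, is also easy to confirm numerically for a few values of L; a minimal check:

```python
import numpy as np

def g_tilde(beta, L):
    return np.exp(L * (1.0 - beta ** (-1.0 / L))) - beta

betas = np.linspace(1e-4, 1.0 - 1e-4, 200000)
for L in (1, 2, 5, 10):
    assert g_tilde(betas, L).max() < 0.0  # g_tilde(beta) < 0 on (0, 1), so (101) never holds
print("g_tilde(beta) < 0 on (0, 1) for the tested values of L")
```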
ACKNOWLEDGMENT
The authors are grateful to Professor Sadegh Bolouki for
valuable discussions.
REFERENCES
[1] Y. Mao, E. Akyol, and Z. Zhang, “Strategic topology
switching for security-part ii: detection & switching topologies,”
https://arxiv.org/abs/1711.11181, 2017.
[2] R. Olfati-Saber and R. M. Murray, “Consensus problems in networks
of agents with switching topology and time-delays,” IEEE Transactions
on automatic control, vol. 49, no. 9, pp. 1520–1533, 2004.
[3] A. Jadbabaie, J. Lin, and A. S. Morse, “Coordination of groups of mobile
autonomous agents using nearest neighbor rules,” IEEE Transactions on
automatic control, vol. 48, no. 6, pp. 988–1001, 2003.
[4] J. A. Fax and R. M. Murray, “Information flow and cooperative control
of vehicle formations,” IEEE transactions on automatic control, vol. 49,
no. 9, pp. 1465–1476, 2004.
[5] W. Ren and E. Atkins, “Distributed multi-vehicle coordinated control
via local information exchange,” International Journal of Robust and
Nonlinear Control, vol. 17, no. 10-11, pp. 1002–1033, 2007.
[6] J. N. Tsitsiklis, “Problems in decentralized decision making and computation,” Massachusetts Institute of Technology, Laboratory for Information and Decision Systems, Tech. Rep., 1984.
[7] A. Nedić and A. Ozdaglar, “Distributed subgradient methods for multiagent optimization,” IEEE Transactions on Automatic Control, vol. 54,
no. 1, pp. 48–61, 2009.
[8] Z. Zhang and M.-Y. Chow, “Convergence analysis of the incremental cost consensus algorithm under different communication network
topologies in a smart grid,” IEEE Transactions on Power Systems,
vol. 27, no. 4, pp. 1761–1768, 2012.
[9] L.-Y. Lu and C.-C. Chu, “Consensus-based droop control synthesis for
multiple dics in isolated micro-grids,” IEEE Transactions on Power
Systems, vol. 30, no. 5, pp. 2243–2256, 2015.
[10] Q. Li and D. Rus, “Global clock synchronization in sensor networks,”
IEEE Transactions on computers, vol. 55, no. 2, pp. 214–226, 2006.
[11] A. Abdessameud and A. Tayebi, “Attitude synchronization of a group
of spacecraft without velocity measurements,” IEEE Transactions on
Automatic Control, vol. 54, no. 11, pp. 2642–2648, 2009.
[12] S.-J. Chung and J.-J. E. Slotine, “Cooperative robot control and concurrent synchronization of lagrangian systems,” IEEE Transactions on
Robotics, vol. 25, no. 3, pp. 686–700, 2009.
[13] B. B. Johnson, S. V. Dhople, A. O. Hamadeh, and P. T. Krein, “Synchronization of nonlinear oscillators in an lti electrical power network,”
IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 61,
no. 3, pp. 834–844, 2014.
[14] W. Yu, G. Chen, and M. Cao, “Some necessary and sufficient conditions for second-order consensus in multi-agent dynamical systems,”
Automatica, vol. 46, no. 6, pp. 1089–1095, 2010.
[15] J. Zhu, Y.-P. Tian, and J. Kuang, “On the general consensus protocol of
multi-agent systems with double-integrator dynamics,” Linear Algebra
and its Applications, vol. 431, no. 5-7, pp. 701–715, 2009.
[16] J. Mei, W. Ren, and J. Chen, “Distributed consensus of second-order
multi-agent systems with heterogeneous unknown inertias and control
gains under a directed graph,” IEEE Transactions on Automatic Control,
vol. 61, no. 8, pp. 2019–2034, 2016.
[17] J. Qin, C. Yu, and B. D. Anderson, “On leaderless and leader-following
consensus for interacting clusters of second-order multi-agent systems,”
Automatica, vol. 74, pp. 214–221, 2016.
[18] X. Ai, S. Song, and K. You, “Second-order consensus of multi-agent
systems under limited interaction ranges,” Automatica, vol. 68, pp. 329–
333, 2016.
[19] W. Yu, W. X. Zheng, G. Chen, W. Ren, and J. Cao, “Second-order
consensus in multi-agent dynamical systems with sampled position
data,” Automatica, vol. 47, no. 7, pp. 1496–1503, 2011.
[20] D. S. Bassett, N. F. Wymbs, M. A. Porter, P. J. Mucha, and S. T.
Grafton, “Cross-linked structure of network evolution,” Chaos: An
Interdisciplinary Journal of Nonlinear Science, vol. 24, no. 1, p. 013112,
2014.
[21] P. J. Menck, J. Heitzig, J. Kurths, and H. J. Schellnhuber, “How dead
ends undermine power grid stability,” Nature communications, vol. 5, p.
3969, 2014.
[22] P. Kundur, J. Paserba, V. Ajjarapu, G. Andersson, A. Bose, C. Canizares,
N. Hatziargyriou, D. Hill, A. Stankovic, C. Taylor et al., “Definition and
classification of power system stability ieee/cigre joint task force on
stability terms and definitions,” IEEE transactions on Power Systems,
vol. 19, no. 3, pp. 1387–1401, 2004.
[23] S. H. Moghaddam and M. R. Jovanovic, “Topology design for
stochastically-forced consensus networks,” IEEE Transactions on Control of Network Systems, DOI: 10.1109/TCNS.2017.2674962, 2017.
[24] P. Schultz, T. Peron, D. Eroglu, T. Stemler, G. M. R. Ávila, F. A.
Rodrigues, and J. Kurths, “Tweaking synchronization by connectivity
modifications,” Physical Review E, vol. 93, no. 6, p. 062211, 2016.
[25] Y. Mao, E. Akyol, and Z. Zhang, “Strategic topology switching for
security of multi-agent systems,” Accepted by NERCCS 2018: First
Northeast Regional Conference on Complex Systems, 2018.
[26] Y. Mao and Z. Zhang, “Second-order consensus for multi-agent systems
by state-dependent topology switching,” Accepted by 2018 American
Control Conference, 2018.
[27] E. Mallada and A. Tang, “Improving damping of power networks: Power
scheduling and impedance adaptation,” in Decision and Control and
European Control Conference (CDC-ECC), 2011 50th IEEE Conference
on. IEEE, 2011, pp. 7729–7734.
[28] S. K. Mazumder, Wireless networking based control. Springer, 2011.
[29] W. Ren and R. W. Beard, “Consensus seeking in multiagent systems
under dynamically changing interaction topologies,” IEEE Transactions
on automatic control, vol. 50, no. 5, pp. 655–661, 2005.
[30] H. E. Psillakis, “Consensus in networks of agents with unknown highfrequency gain signs and switching topology,” IEEE Transactions on
Automatic Control, vol. 62, no. 8, pp. 3993–3998, 2017.
[31] P. Lin and Y. Jia, “Consensus of a class of second-order multiagent systems with time-delay and jointly-connected topologies,” IEEE
Transactions on Automatic Control, vol. 55, no. 3, pp. 778–784, 2010.
[32] S. Su and Z. Lin, “Distributed consensus control of multi-agent systems
with higher order agent dynamics and dynamically changing directed interaction topologies,” IEEE Transactions on Automatic Control, vol. 61,
no. 2, pp. 515–519, 2016.
[33] G. Xie and L. Wang, “Consensus control for networks of dynamic agents
via active switching topology,” in International Conference on Natural
Computation. Springer, 2005, pp. 424–433.
[34] A. A. Cárdenas, S. Amin, and S. Sastry, “Research challenges for the
security of control systems.” in HotSec, 2008.
[35] F. Pasqualetti, F. Dörfler, and F. Bullo, “Attack detection and identification in cyber-physical systems,” IEEE Transactions on Automatic
Control, vol. 58, no. 11, pp. 2715–2729, 2013.
[36] A. Teixeira, D. Pérez, H. Sandberg, and K. H. Johansson, “Attack models
and scenarios for networked control systems,” in Proceedings of the
1st international conference on High Confidence Networked Systems.
ACM, 2012, pp. 55–64.
[37] J. L. Gross and J. Yellen, Graph theory and its applications. CRC
press, 2005.
[38] C. Godsil and G. F. Royle, Algebraic graph theory. Springer Science
& Business Media, 2013, vol. 207.
[39] D. A. Spielman, “Spectral graph theory (lecture 5: rings, paths, and Cayley graphs),” Online lecture notes, http://www.cs.yale.edu/homes/spielman/561/, September 16, 2014.
[40] A. E. Brouwer and W. H. Haemers, Spectra of graphs. Springer Science
& Business Media, 2011.
[41] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis. Cambridge University Press, New York, 1991.
[42] W. Xiang and J. Xiao, “Stabilization of switched continuous-time
systems with all modes unstable via dwell time switching,” Automatica,
vol. 50, no. 3, pp. 940–945, 2014.
[43] Z. Zuo and L. Tie, “Distributed robust finite-time nonlinear consensus
protocols for multi-agent systems,” International Journal of Systems
Science, vol. 47, no. 6, pp. 1366–1375, 2016.
[44] A. Gusrialdi and Z. Qu, “Distributed estimation of all the eigenvalues and eigenvectors of matrices associated with strongly connected
digraphs,” IEEE control systems letters, vol. 1, no. 2, pp. 328–333, 2017.
[45] A. Y. Kibangou et al., “Distributed estimation of laplacian eigenvalues
via constrained consensus optimization problems,” Systems & Control
Letters, vol. 80, pp. 56–62, 2015.
| 3 |
3D Trajectory Reconstruction of Dynamic Objects Using Planarity Constraints
Sebastian Bullinger1 , Christoph Bodensteiner1 , Michael Arens1 and Rainer Stiefelhagen2
arXiv:1711.06136v1 [] 16 Nov 2017
1 Fraunhofer IOSB
2 Karlsruhe Institute of Technology
{sebastian.bullinger,christoph.bodensteiner,michael.arens}@iosb.fraunhofer.de
[email protected]
Abstract
We present a method to reconstruct the three-dimensional trajectory of a moving instance of a known object category in monocular video data. We track the two-dimensional shape of objects on pixel level exploiting instance-aware semantic segmentation techniques and optical flow cues. We apply Structure from Motion techniques to object and background images to determine for each frame camera poses relative to object instances and background structures. By combining object and background camera pose information, we restrict the object trajectory to a one-parameter family of possible solutions. We compute a ground representation by fusing background structures and corresponding semantic segmentations. This allows us to determine an object trajectory consistent with image observations and the reconstructed environment model. Our method is robust to occlusion and handles temporarily stationary objects. We show qualitative results using drone imagery. Due to the lack of suitable benchmark datasets we present a new dataset to evaluate the quality of reconstructed three-dimensional object trajectories. The video sequences contain vehicles in urban areas and are rendered using the path-tracing render engine Cycles to achieve realistic results. We perform a quantitative evaluation of the presented approach using this dataset. Our algorithm achieves an average reconstruction-to-ground-truth distance of 0.31 meter. The dataset will be publicly available on our website1.

1. Introduction
1.1. Trajectory Reconstruction
The reconstruction of three-dimensional object motion trajectories is important for autonomous systems and augmented reality applications. There are different platforms like drones or wearable systems where one wants to achieve this task with a minimal number of devices in order to reduce weight or lower production costs. We propose an approach to reconstruct three-dimensional object motion trajectories using a single camera as sensor.
The reconstruction of object motion trajectories in monocular video data captured by moving cameras is a challenging task, since in general it cannot be solely solved exploiting image observations. Each observed object motion trajectory is scale ambiguous. Additional constraints are required to identify a motion trajectory consistent with background structures. [23, 13, 3] assume that the camera is mounted on a driving vehicle, i.e. the camera has a specific height and a known pose. [16, 28, 17] solve the scale ambiguity by making assumptions about object and camera motion trajectories. We follow Ozden's principle of non-accidental motion trajectories [16] and introduce a new object motion constraint exploiting semantic segmentation and terrain geometry to compute consistent object motion trajectories.
In many scenarios objects cover only a minority of pixels in video frames. This increases the difficulty of reconstructing object motion trajectories using image data. In such cases current state-of-the-art Structure from Motion (SfM) approaches treat moving object observations most likely as outliers and reconstruct background structures instead. Previous works, e.g. [11, 12], tackle this problem by considering multiple video frames to determine moving parts in the video. They apply motion segmentation or keypoint tracking to detect moving objects. These kinds of approaches are vulnerable to occlusion and require objects to move in order to separate them from background structures.
Our method exploits recent results in instance-aware semantic segmentation and rigid Structure from Motion techniques. Thus, our approach extends naturally to stationary objects. In addition, we do not exploit specific camera pose constraints like a fixed camera-ground-angle or a fixed camera-ground-distance. We evaluate the presented object motion trajectory reconstruction algorithm in UAV scenarios, where such constraints are not valid.
1 Project page: URL
struction related research.
1.4. Paper Overview
The paper is organized as follows. Section 2 describes
the structure and the components of the proposed pipeline.
In section 2.1 we derive an expression for a one-parameter
family of possible object motion trajectories combining object and background reconstruction results. Section 2.2 describes a method to approximate the ground locally. In section 2.3 we describe a method to compute consistent object
motion trajectories. In section 4 we provide an qualitative
and quantitative evaluation of the presented algorithms using drone imagery and rendered video data. Section 5 concludes the paper.
1.2. Related Work
Semantic segmentation or scene parsing is the task of
providing semantic information at pixel-level. Early semantic segmentation approaches using ConvNets, e.g. Farabet
et al. [5], exploit patchwise training. Long et al. [21] applied Fully Convolutinal Networks for semantic segmentation, which are trained end-to-end. Recently, [4, 14, 9] proposed instance-aware semantic segmentation approaches.
The field of Structure from Motion (SfM) can be divided
into iterative and global approaches. Iterative or sequential
SfM methods [22, 27, 15, 24, 20] are more likely to find
reasonable solutions than global SfM approaches [15, 24].
However, the latter are less prone to drift.
The determination of the correct scale ratio between object and background reconstruction requires additional constraints. Ozden et al. [16] exploit the non-accidentalness
principle in the context of independently moving objects.
Yuan et al. [28] propose to reconstruct the 3D object trajectory by assuming that the object motion is perpendicular to the normal vector of the ground plane. Kundu et al.
[11] exploit motion segmentation with multibody VSLAM
to reconstruct the trajectory of moving cars. They use an
instantaneous constant velocity model in combination with
Bearing only Tracking to estimate consistent object scales.
Park et al. propose an approach in [17] to reconstruct the
trajectory of a single 3D point tracked over time by approximating the motion using a linear combination of trajectory
basis vectors. Previous works, like [16, 28, 11, 17] show
only qualitative results.
2. Object Motion Trajectory Reconstruction
The pipeline of our approach is shown in Fig. 1. The
input is an ordered image sequence. We track twodimensional object shapes on pixel level across video sequences following the approach presented in [2]. In contrast
to [2], we identify object shapes exploiting the instanceaware semantic segmentation method presented in [14] and
associate extracted object shapes of subsequent frames using the optical flow approach described in [10]. Without
the loss of generality, we describe motion trajectory reconstructions of single objects. We apply SfM [15, 20] to object and background images as shown in Fig. 1. Object
images denote images containing only color information of
single object instance. Similarly, background images show
only background structures. We combine object and background reconstructions to determine possible, visually identical, object motion trajectories. We compute a consistent
object motion trajectory exploiting constraints derived from
reconstructed terrain ground geometry.
1.3. Contribution
2.1. Object Trajectory Representation
The core contributions of this work are as follows. (1)
We present a new framework to reconstruct the threedimensional trajectory of moving instances of known object categories in monocular video data leveraging sateof-the-art semantic segmentation and structure from motion approaches. (2) We propose a novel method to compute object motion trajectories consistent to image observations and background structures. (3) In contrast to previous work, we quantitatively evaluate the reconstructed object motion trajectories. (4) We created a new object motion trajectory benchmark dataset due to the lack of publicly
available video data of moving objects with suitable ground
truth data. The dataset consists of photo-realistic rendered
videos of urban environments. It includes animated vehicles
as well as set of predefined camera and object motion trajectories. 3D vehicle and environmental models used for rendering serve as ground truth. (5) We will publish the dataset
and evaluation scripts to foster future object motion recon-
In order to estimate a consistent object motion trajectory
we apply SfM simultaneously to object and background images as shown in Fig. 1. We denote the corresponding SfM
(o)
results with sf m(o) and sf m(b) . Let oj ∈ P (o) and
(b)
bk ∈ P (b) denote the 3D points contained in sf m(o) or
(o)
sf m(b) , respectively. The superscripts o and b in oj and
(b)
bk describe the corresponding coordinate frame. The variables j and k are the indices of points in the object or the
background point cloud, respectively. We denote the reconstructed intrinsic and extrinsic parameters of each registered input image as virtual camera. Each virtual camera
in sf m(o) and sf m(b) corresponds to a certain frame from
which object and background images are extracted. We associate virtual cameras in sf m(o) with the corresponding
virtual cameras in sf m(b) and vice versa. In the following, we consider only camera pairs, whose virtual cameras
2
Input Frames
Semantic
Segmentation and
Object Tracking
Object Segmentations
Background
Segmentations
SfM
Ground Segmentations
SfM
Background
SfM Result
Object SfM Result
Object Trajectory Family
Trajectory Family
Computation
Background
SfM Result
Scale Estimation
and Trajectory
Computation
Ground Representation
Consistent Object
Motion Trajectory
Ground
Computation
Figure 1: Overview of the Trajectory Reconstruction Pipeline.
are contained in sf m(o) and sf m(b) . Because of missing
image registrations this may not be the case for all virtual
cameras.
We reconstruct the object motion trajectory by combining
information of corresponding virtual cameras. For any virtual camera pair of an image with index i the object SfM
result sf m(o) contains information of object point positions
(o)
(o)
oj relative to virtual cameras with camera centers ci and
(o)
(o)
rotations Ri . We express each object point oj
(i)
coordinates oj of camera i using equation (1)
(i)
(o)
oj = Ri
(o)
(o)
· (oj − ci ).
described according to equation (3).
(b)
(b)
(b)
T (b)
(i)
· oj .
T (b)
(o)
· Ri
(o)
(o)
· (oj − ci )
:= ci(b) + r · v(b)
j,i
(3)
with
(b)
T (b)
vj,i = Ri
in camera
(o)
· Ri
(o)
(o)
(b)
(b)
· (oj − ci ) = oj,i − ci .
(4)
Given the scale ratio r, we can recover the full object motion
trajectory computing equation (4) for each virtual camera
(b)
pair. We use oj,i of all cameras and object points as object
motion trajectory representation. The ambiguity mentioned
in section 1 is expressed by the unknown scale ratio r.
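A minimal numpy sketch of this one-parameter family (cf. equation (3)): given the virtual camera pair of frame i in the object and background reconstructions and a candidate scale ratio r, the object points are mapped into the background frame. All names and toy values here are illustrative assumptions, not code from the described pipeline.

```python
import numpy as np

def object_points_in_background(o_obj, R_i_o, c_i_o, R_i_b, c_i_b, r):
    """Map object-reconstruction points into the background frame for a scale ratio r."""
    v = (R_i_b.T @ R_i_o @ (o_obj - c_i_o).T).T   # direction terms v_{j,i}^(b), cf. eq. (4)
    return c_i_b + r * v                          # candidate positions o_{j,i}^(b), cf. eq. (3)

# toy example: identity rotations, shifted camera centers, two object points
R_i_o = R_i_b = np.eye(3)
c_i_o = np.zeros(3)
c_i_b = np.array([10.0, 0.0, 2.0])
o_obj = np.array([[0.5, 0.0, 1.0], [1.0, 0.2, 1.5]])
for r in (0.5, 1.0, 2.0):                         # each r yields one member of the trajectory family
    print(r, object_points_in_background(o_obj, R_i_o, c_i_o, R_i_b, c_i_b, r))
```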
(1)
The background SfM result sf m(b) contains the camera
(b)
(b)
center ci and the corresponding rotation Ri , which provide pose information of the camera with respect to the reconstructed background. Note, that the camera coordinate
systems of virtual cameras in sf m(o) and sf m(b) are equiv(b)
(b)
alent. We use ci and Ri to transform object points to the
background coordinate system using equation (2)
oj,i = ci + Ri
(b)
oj,i = ci + r · Ri
2.2. Terrain Ground Approximation
Further camera or object motion constraints are required
to determine the scale ratio r introduced in equation (4).
In contrast to previous work [16, 28, 17, 13, 23, 3] we assume that the object category of interest moves on top of
the terrain. We exploit semantic segmentation techniques
to estimate an approximation of the ground surface of the
scene. We apply the ConvNet presented in [21] to determine ground categories like street or grass for all input images on pixel level. We consider only stable background
points, i.e. 3D points that are observed at least four times.
We determine for each 3D point a ground or non-ground
label by accumulating the semantic labels of corresponding
keypoint measurement pixel positions. This allows us to determine a subset of background points, which represent the
ground of the scene. We approximate the ground surface
locally using plane representations. For each frame i we
(2)
In general, the scale ratio of object and background reconstruction does not match due to the scale ambiguity of
SfM reconstructions [8]. We tackle this problem by treating the scale of the background as reference scale and by
introducing a scale ratio factor r to adjust the scale of object point coordinates. The overall transformation of object
(o)
points given in object coordinates oj to object points in
(b)
the background coordinate frame system oj,i of camera i is
3
Equation (8) allows us to determine the scale ratio r
between object and background reconstruction using the
extrinsic parameters of two cameras and corresponding
ground approximations.
use corresponding estimated camera parameters and object
point observations to determine a set of ground points Pi
close to the object. We build a kd-tree containing all ground
measurement positions of the current frame. For each object point observation, we determine the numb closest background measurements. In our experiments, we set numb to
50. Let cardi be the cardinality of Pi . While cardi is less
than numb , we add the next background observation of each
point measurement. This results in an equal distribution of
local ground points around the vehicle. We apply RANSAC
[6] to compute a local approximation of the ground surface
using Pi . Each plane is defined by a corresponding normal
vector ni and an arbitrary point pi lying on the plane.
2.3.2
The accuracy of the estimated scale ratio r in equation (8)
is subject to the condition of the parameters of the particular view pair. For instance, if the numerator or denominator is close to zero, small errors in the camera poses or
ground approximations may result in negative scale ratios.
In addition, wrongly estimated local plane normal vectors
may disturb camera-plane distances. We tackle these problems by combining two different view pair rankings. The
first ranking uses for each view pair the difference of the
camera-plane distances, i.e. the numerator in equation (8).
The second ranking reflects the quality of the local ground
approximation w.r.t. the object reconstruction. For a view
pair with well reconstructed local planes the variance of the
corresponding scale ratios is small. This allows for the determination of ill conditioned view pairs. The second ranking uses the scale ratio difference to order the view pairs.
We sort the view pairs by weighting both ranks equally. Let
vp denote the view pair with the lowest overall rank. The final scale ratio is determined by using a least squares method
w.r.t. all equations of vp.
2.3. Scale Estimation using Constant Distance Constraints
In section 2.3, we exploit priors of object motion to improve the robustness of the reconstructed object trajectory.
We assume that the object of interest moves on a locally
planar surface. In this case the distance of each object point
(b)
oj,i to the ground is constant for all cameras i. The reconstructed trajectory shows this property only for the true
scale ratio and non-degenerated camera motion. For example, a degenerate case occurs when the camera moves exactly parallel to a planar object motion. For a more detailed
discussion of degenerated camera motions see [16].
2.3.1
Scale Ratio Estimation using a Single View Pair
We use the term view to denote cameras and corresponding local ground planes. The signed distance of an object
(b)
point oj,i to the ground plane can be computed according
to equation (5)
(b)
dj,i = ni · (oj,i − pi ),
3. Virtual Object Motion Trajectory Dataset
To quantitatively evaluate the quality of the reconstructed object motion trajectory we require accurate object
and environment models as well as object and camera poses
at each time step. The simultaneous capturing of corresponding ground truth data with sufficient quality is difficult
to achieve. For example, one could capture the environment
geometry with LIDAR sensors and the camera / object pose
with an additional system. However, the registration and
synchronization of all these different modalities is a complex and cumbersome process. The result will contain noise
and other artifacts like drift. To tackle these issues we exploit virtual models. Previously published virtually generated and virtually augmented datasets, like [18, 19, 7, 25],
provide data for different application domains and do not include three-dimensional ground truth information. We build
a virtual world including an urban environment, animated
vehicles as well as predefined vehicle and camera motion
trajectories. This allows us to compute spatial and temporal
error free ground truth data. We exploit procedural generation of textures to avoid artificial repetitions. Thus, our
dataset is suitable for evaluating SfM algorithms.
(5)
where pi is a point on the local ground plane and ni is the
corresponding normal vector. If the object moves on top of
the approximated terrain ground the distance dj,i should be
independent of a specific camera i. We substitute dj,i with
dj in equation (5). This allows us to combine equation (5)
of the same point and different cameras.
(b)
(b)
ni · (oj,i − pi ) = ni0 · (oj,i0 − pi0 ).
(6)
Substituting equation (3) in equation (6) results in (7)
(b)
(b)
(b)
(b)
ni · (ci + r · vj,i − pi ) = ni0 · (ci0 + r · vj,i0 − pi0 ) (7)
Solving equation (7) for r yields equation (8)
(b)
r=
(b)
ni0 · (ci0 − pi0 ) − ni · (ci − pi )
(b)
(b)
(ni · vj,i − ni0 · vj,i0 )
.
Scale Ratio Estimation using View Pair Ranking
(8)
4
(a) Environment Model from (b) Environment Rendered from (c) Car Model with Motion Path
a Bird’s Eye Perspective (in a Bird’s Eye Perspective (Ren- in Street Scene (in Blender)
Blender)
dered)
(d) Rendered Street Scene
Figure 2: Example Scenes from the Virtual Object Trajectory Dataset.
(a) Input Frame 0
(b) Input Frame 100
(c) Input Frame 200
(d) Reconstructed
(Top View)
Trajectory
Figure 3: Car Trajectory Reconstruction using 250 frames with a resolution of 1920 x 1080 pixels captured by a DJI drone.
3.1. Virtual World
quences cover a high variety of object and camera poses.
The object trajectories reflect common vehicle motions include vehicle acceleration, different curve types and motion
on changing slopes. We use the path-tracing render engine
Cycles [1] to achieve photo realistic rendering results. We
observed that the removal of artificial path-tracing artifacts
using denoising is crucial to avoid degenerated SfM reconstructions.
The dataset includes 6D object and camera poses for each
frame as well as ground truth meshes of corresponding vehicle models. In contrast to measured ground truth data, virtual ground truth data is free of noise and shows no spatial
registration or temporal synchronization inaccuracies. The
dataset contains semantic segmentations of objects, ground
and background to separate the reconstruction task from
specific semantic segmentation and tracking approaches. In
addition to the virtual data, the dataset also includes the
computed reconstruction results. We will make our evaluation scripts publicly available to foster future analysis of
object trajectory estimation.
We used Blender [1] to create a virtual world consisting of a city surrounded by a countryside. We exploit procedural generation to compute textures of large surfaces,
like streets and sidewalks, to avoid degenerated Structure
from Motion results caused by artificial texture repetitions.
The virtual world includes different assets like trees, traffic lights, streetlights, phone booths, bus stops and benches.
We collected a set of publicly available vehicle assets to
populate the scenes. We used skeletal animation, also referred to as rigging, for vehicle animation. This includes
wheel rotation and steering w.r.t. the motion trajectory as
well as consistent vehicle placement on uneven ground surfaces. The animation of wheels is important to avoid unrealistic wheel point triangulations. We adjusted the scale of
vehicles and virtual environment using Blender’s unit system. This allows us to set the virtual space in relation to the
real world. The extent of the generated virtual world corresponds to one square kilometer. We exploit environment
mapping to achieve realistic illumination. With Blender’s
built-in tools, we defined a set of camera and object motion
trajectories. This allows us to determine the exact 3D pose
of cameras and vehicles at each time step.
4. Experiments and Evaluation
We show qualitative results and quantitative evaluations
using real drone footage and virtual generated drone imagery, respectively. Fig. 3 shows the reconstruction of a car
motion trajectory using images captured by a DJI drone.
Fig. 4 shows an example of the virtual object trajectory
dataset. Fig. 4(j) and Fig. 4(i) show the object point cloud
transformed into the virtual world coordinate frame system.
3.2. Trajectory Dataset
We use the previously created virtual world to build a
new object trajectory dataset1 . The dataset consists of 35
sequences capturing five vehicles in different urban scenes.
Fig. 2 shows some example images. The virtual video se5
(a) Ground Truth Object Path
(b) Object Model in Blender
(c) Rendered Scene
(d) Object Segmentation
(e) Background Segmentation
(f) Ground Segmentation
(g) Object Reconstruction
(h) Background Reconstruction
(i) Reconstructed Object Motion (j) Reconstructed Object Motion (k) Ground Truth Model at Dif- (l) Ground Truth Model at DifferTrajectory
Trajectory
ferent Frames Overlayed With ent Frames Overlayed With ReReconstruction
construction
Figure 4: Car trajectory reconstruction using 60 virtual frames with a resolution of 1920 x 1080 pixels.
truth coordinates. For each sequence we define the trajectory error as the average trajectory-point-mesh distance.
Fig. 5 shows for each sequence the trajectory error in meter. The average trajectory error per vehicle using the full
dataset is shown in table 1. Overall, we achieve a trajectory
error of 0.31 meter. The error of the object trajectory reconstructions reflects four types of computational inaccuracies:
deviations of camera poses w.r.t. object and background
point clouds, wrong triangulated object points as well as
scale ratio discrepancies. Fig. 6 compares the estimated
scale ratios of the proposed and the baseline method w.r.t.
the reference scale ratio. The reference scale ratio computation is described in section 4.4. The overall estimated scale
ratio deviation w.r.t. the reference scale per vehicle is shown
in table 1. The provided reference scale ratios are subject to
the registration described in section 4.3. Wrongly reconstructed background camera poses may influence the reference scale ratio. The van object reconstruction was only
partial successful on the sequences crossing, overtaking and
steep street. The SfM algorithm registered 19%, 60% and
98% of the images, respectively. The object reconstruction
of the smart model contained 74% of the crossing input object images. Here, we use the subset of registered images to
perform the evaluation. The camera and the object motion
in bumpy road simulate a sequence close to a degenerated
case, i.e. equation (8) is ill conditioned for all view pairs.
Fig. 4(k) and Fig. 4(l) show the overlay result of transformed points and the corresponding virtual ground truth
model. To segment the two-dimensional shape of objects of
interest we follow the approach presented in [2]. In contrast to [2], we used [14] and [10] to segment and track
visible objects, respectively. We considered the following SfM pipelines for object and background reconstructions: Colmap [20], OpenMVG [15], Theia [24] and VisualSfM [27]. Our object trajectory reconstruction pipeline
uses Colmap for object and OpenMVG for background reconstructions, since Colmap and OpenMVG created in our
experiments the most reliable object and background reconstructions.
4.1. Virtual Object Motion Trajectory Evaluation
We use the dataset presented in section 3 to quantitatively evaluate the proposed object motion trajectory reconstruction approach. The evaluation is based on object, background and ground segmentations included in the dataset.
This allows us to show results independent from the performance of specific instance segmentation and tracking approaches. We compare the proposed method with the baseline presented in section 4.2 using 35 sequences contained
in the dataset. We automatically register the reconstructed
object trajectory to the ground truth using the method described in section 4.3. We compute the shortest distance of
each object trajectory point to the object mesh in ground
6
Trajectory Error in meter
Lancer (Baseline)
Lancer (Ours)
2
Lincoln (B.)
Lincoln (O.)
Smart (B.)
Smart (O.)
Golf (B.)
Golf (O.)
Van (B.)
Van (O.)
1
0
Right Curves
Left Curves
Crossing
Overtaking
Bridge
Steep Street
Bumpy Road
Deviation w.r.t. Reference
Figure 5: Quantitative evaluation of the trajectory error in meter computed by the methods constant distance (ours) and
intersection (baseline). We evaluate seven different vehicle trajectories (right curves, left curves, crossing, overtaking, bridge,
steep street and bumpy road) and five different vehicle models (Lancer, Lincoln Navigator, Smart, Golf and Van). The
cropped values of the baseline are 6.3m, 4m and 4.8m.
Lancer (Baseline)
Lancer (Ours)
0.4
Lincoln (B.)
Lincoln (O.)
Smart (B.)
Smart (O.)
Golf (B.)
Golf (O.)
Van (B.)
Van (O.)
0.2
0
Right Curves
Left Curves
Crossing
Overtaking
Bridge
Steep Street
Bumpy Road
Figure 6: Quantitative evaluation of the scale ratio of the methods constant distance (ours) and intersection (baseline). The
provided values are the deviations w.r.t. reference scales.
4.2. Scale Estimation Baseline: Intersection Constraints
The intersection parameter ri corresponds to the point being
closest to the ground surface, i.e. a point at the bottom of the
object. Plugging ri in equation (3) for camera i places the
object point cloud on top of the ground surface. Thus, the
smallest ray-plane-intersection-parameter ri is a value close
to the real object-to-background-scale. Finally, we use the
median s of all image scale ratio factors as scale ratio factor
to reconstruct the trajectory as computed in equation (11)
The baseline is motivated by the fact, that some reconstructed points of the bottom of an object should lie in the
proximity of the ground surface of the environment. Consider for example the reconstructed 3D points corresponding to the wheels of a car. This approach works only if
at least one camera-point-ray of an arbitrary point in the
object point cloud intersects the ground surface. For each
(b)
camera we generate a set of vectors vj,i pointing from the
(b)
r = med({ri |i ∈ {1, . . . , nI }}),
where med denotes the median and nI the number of images. We do not consider invalid image scale ratios ri , i.e.
cameras which have no camera-object-point-rays intersecting the ground representation.
(b)
camera center ci towards each object point oj,i . For non(b)
orthogonal direction vectors vj,i and normal vectors ni
we compute the ray plane intersection parameter for each
camera-object-point-pair according to equation (9)
4.3. Registration of Background Reconstruction
and Virtual Environment
(b)
rj,i =
(pi − ci ) · ni
(b)
vj,i · ni
.
(9)
A common approach to register different coordinate systems is to exploit 3D-3D correspondences. To determine
points in the virtual environment corresponding to background reconstruction points one could create a set of rays
from each camera center to all visible reconstructed back-
We compute the smallest ray-plane-intersection parameter
for each image i.
ri = min({rj,i |j ∈ {1, . . . , |P (o) |}})
(11)
(10)
7
Scale Ratio
Est. Type
Intersection (Baseline)
Constant Distance (Ours)
Average Scale Ratio Deviation w.r.t. Reference
Lancer Lincoln Smart Golf
Van
0.05
0.07
0.01
0.08
0.13
0.04
0.04
0.04
0.06
0.08
Lancer
0.42
0.20
Average Trajectory Error
Lincoln Smart Golf
0.53
0.25
0.95
0.23
0.33
0.33
Van
1.68
0.47
Table 1: Summary of the conducted evaluation. The second column shows the deviation of the estimated scale ratio w.r.t to
the reference scale ratio per vehicle. The third column contains the average distances of the full dataset in meter. Overall, the
trajectory error of the baseline and our approach is 0.77m and 0.31m.
ground points. The corresponding environment points are
defined by the intersection of these rays with the mesh of
the virtual environment. Due to the complexity of our environment model this computation is in terms of memory and
computational effort quite expensive. Instead, we use the
algorithm presented in [26] to estimate a similarity transformation Ts between the cameras contained in the background reconstruction and the virtual cameras used to render the corresponding video sequence. This allows us to
perform 3D-3D-registrations of background reconstructions
and the virtual environment as well as to quantitatively evaluate the quality of the reconstructed object motion trajectory. We use the camera centers as input for [26] to compute an initial reconstruction-to-virtual-environment transformation. Depending on the shape of the camera trajectory
there may be multiple valid similarity transformations using camera center positions. In order to find the semantically correct solution we enhance the original point set
with camera pose information, i.e. we add points reflect(b)
T (b)
ing up vectors ui = Ri · (0, 1, 0)T and forward vectors
(b)
T (b)
f i = Ri · (0, 0, 1)T . For the reconstructed cameras, we
adjust the magnitude of these vectors using the scale computed during the initial similarity transformation. We add
(b)
(b)
the corresponding end points of up ci + m · ui as well
(b)
(b)
as viewing vectors ci + m · f i to the camera center point
set. Here, m denotes the corresponding magnitude.
plicitly contains information about the scale ratio r(bv) between background reconstruction and virtual environment.
To compute r(ov) we use corresponding pairs of object reconstruction and virtual cameras. We use the extrinsic parameters of the object reconstruction camera to transform
all 3D points in the object reconstruction into camera coordinates. Similarly, the object mesh with the pose of the
corresponding frame is transformed into the camera coordinates leveraging the extrinsic camera parameters of the
corresponding virtual camera. The ground truth pose and
shape of the object mesh is part of the dataset. In camera
coordinates we generate rays from the camera center (i.e.
(i)
the origin) to each 3D point oj in the object reconstruc-
4.4. Reference Scale Ratio Computation
ref
Equation (14) shows that r(ob)
depends on the quality of the
cameras in the background reconstruction and may slightly
differ from the true scale ratio.
(i)
tion. We determine the shortest intersection mj of each
ray with the object mesh in camera coordinates. This allows us to compute r(ov) according to equation (13)
(i)
r(ov) = med({med({
kmj k
(i)
koj k
|j ∈ {1, . . . , nJ }}
(13)
|i ∈ {1, . . . , nI }}).
ref
This allows us to compute the reference scale ratios r(ob)
using equation (14)
ref
r(ob)
= r(ov) · r(bv) −1 .
As explained in section 4.1 the presented average trajectory errors in Fig. 5 are subject to four different error
sources. To evaluate the quality of the scale ratio estimation between object and background reconstruction we provide corresponding reference scale ratios. The scale ratios
between object reconstruction, background reconstruction
and virtual environment are linked via the relation shown in
equation (12)
r(ov) = r(ob) · r(bv) ,
(12)
(14)
5. Conclusions
This paper presents a pipeline to reconstruct the threedimensional trajectory of moving objects using monocular
video data. We propose a novel constraint to estimate consistent object motion trajectories. In contrast to previous
approaches, the presented scale estimation constraint is robust to occlusion and extents naturally to stationary objects.
Due to the lack of 3D object motion trajectory benchmark
datasets with suitable ground truth data, we present a new
virtual dataset to quantitatively evaluate object motion trajectories. The dataset contains rendered videos of urban en-
where r(ov) and r(bv) are the scale ratios between object
and background reconstructions and virtual environment,
respectively. The scale ratios r(ob) in Fig. 6 express the
spatial relation of object and background reconstructions.
The similarity transformation Ts defined in section 4.3 im8
vironments and accurate ground truth data including semantic segmentations, object meshes as well as object and camera poses for each frame. The proposed algorithm achieves
an average reconstruction-to-ground-truth distance of 0.31
m evaluating 35 trajectories. In future work, we will analyze
breakdown points of the proposed pipeline in more detail.
This includes minimal object sizes, object occlusions and
degeneracy cases. In addition, we intend to integrate previously published scale estimation approaches. These will
serve together with the dataset1 as benchmark references for
future object motion trajectory reconstruction algorithms.
[14]
[15]
[16]
[17]
References
[1] Blender Online Community. Blender - a 3d modelling and
rendering package, 2016.
[2] S. Bullinger, C. Bodensteiner, and M. Arens. Instance flow
based online multiple object tracking. In IEEE International
Conference on Image Processing (ICIP), 2017.
[3] F. Chhaya, N. D. Reddy, S. Upadhyay, V. Chari, M. Z. Zia,
and K. M. Krishna. Monocular reconstruction of vehicles:
Combining SLAM with shape priors. In 2016 IEEE International Conference on Robotics and Automation, ICRA
2016, Stockholm, Sweden, May 16-21, 2016, pages 5758–
5765, 2016.
[4] J. Dai, K. He, and J. Sun. Instance-aware semantic segmentation via multi-task network cascades. In IEEE Conference
on Computer Vision and Pattern Recognition (CVPR), 2016.
[5] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning
hierarchical features for scene labeling. IEEE Transactions
on Pattern Analysis and Machine Intelligence, August 2013.
[6] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM,
24(6):381–395, June 1981.
[7] A. Gaidon, Q. Wang, Y. Cabon, and E. Vig. Virtual worlds as
| 1 |
An Efficient Algorithm for Computing High-Quality Paths amid
Polygonal Obstacles∗
Pankaj K. Agarwal†
Kyle Fox‡
Oren Salzman§
arXiv:1706.02939v1 [cs.CG] 9 Jun 2017
June 12, 2017
Abstract
We study a path-planning problem amid a set O of obstacles in R2 , in which we wish to compute a
short path between two points while also maintaining a high clearance from O; the clearance of a point
is its distance from a nearest obstacle in O. Specifically, the problem asks for a path minimizing the
reciprocal of the clearance integrated over the length of the path. We present the first polynomial-time
approximation scheme for this problem. Let n be the total number of obstacle vertices and let ε ∈ (0, 1].
Our algorithm computes in time O((n²/ε²) log(n/ε)) a path of total cost at most (1 + ε) times the cost of the
optimal path.
1 Introduction
Motivation. Robot motion planning deals with planning a collision-free path for a moving object in an
environment cluttered with obstacles [6]. It has applications in diverse domains such as surgical planning and
computational biology. Typically, a high-quality path is desired where quality can be measured in terms of
path length, clearance (distance from nearest obstacle at any given time), or smoothness, to mention a few
criteria.
Problem statement. Let O be a set of polygonal obstacles in the plane, consisting of n vertices in total.
A path γ for a point robot moving in the plane is a continuous function γ : [0, 1] → R². Let ‖pq‖ denote the Euclidean distance between two points p, q. The clearance of a point p, denoted by clr(p) := min_{o∈O} ‖po‖, is the minimal Euclidean distance between p and an obstacle (clr(p) = 0 if p lies in an obstacle).
We use the following cost function, as defined by Wein et al. [17], that takes both the length and the
clearance of a path γ into account:
µ(γ) := ∫_γ 1/clr(γ(τ)) dτ.    (1)
This criterion is useful in many situations because we wish to find a short path that does not pass too close to the obstacles due to safety requirements. For two points p, q ∈ R², let π(p, q) be the minimal cost¹ of any
path between p and q.
∗ A preliminary version of this work appeared in the Proceedings of the 27th Annual ACM-SIAM Symposium on Discrete
Algorithms. Most of this work was done while O. Salzman was a student at Tel Aviv University. Work by P.K. Agarwal and K.
Fox was supported in part by NSF under grants CCF-09-40671, CCF-10-12254, CCF-11-61359, IIS-14-08846, and CCF-15-13816,
and by Grant 2012/229 from the U.S.-Israel Binational Science Foundation. Work by O. Salzman was supported in part by the
Israel Science Foundation (grant no.1102/11), by the German-Israeli Foundation (grant no. 1150-82.6/2011), by the Hermann
Minkowski–Minerva Center for Geometry at Tel Aviv University and by the National Science Foundation IIS (#1409003), Toyota
Motor Engineering & Manufacturing (TEMA), and the Office of Naval Research.
† Duke University, [email protected]
‡ Duke University, [email protected]
§ Carnegie Mellon University, [email protected]
1 Wein et al. assume the minimal-cost path exists. One can formally prove its existence by taking the limit of paths
approaching the infimum cost.
The (approximate) minimal-cost path problem is defined as follows: Given the set of obstacles O in R2 , a
real number ε ∈ (0, 1], a start position s and a target position t, compute a path between s and t with cost at
most (1 + ε) · π(s, t).
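To make the cost function in (1) concrete, the following Python sketch numerically approximates µ(γ) for a polyline path amid polygonal obstacles by subdividing each segment and summing (piece length)/(clearance at the piece midpoint). The obstacle representation (a list of edges given as point pairs) and the subdivision resolution are illustrative assumptions of ours, not part of the algorithm developed in this paper.

import math

def point_segment_dist(p, a, b):
    # Euclidean distance from point p to the segment ab.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def clearance(p, obstacle_edges):
    # clr(p): distance from p to the nearest obstacle feature.
    return min(point_segment_dist(p, a, b) for a, b in obstacle_edges)

def path_cost(polyline, obstacle_edges, steps_per_segment=100):
    # Approximates mu(gamma) = integral of 1/clr along the path, equation (1).
    total = 0.0
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        seg_len = math.hypot(x1 - x0, y1 - y0)
        h = seg_len / steps_per_segment
        for i in range(steps_per_segment):
            t = (i + 0.5) / steps_per_segment   # midpoint rule
            mid = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
            total += h / clearance(mid, obstacle_edges)
    return total

# Example: a unit-square obstacle and a two-segment path around it.
square = [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1)), ((0, 1), (0, 0))]
print(path_cost([(-1, -1), (2, -1), (2, 2)], square))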
Related work. There is extensive work in robotics and computational geometry on computing shortest
collision-free paths for a point moving amid a set of planar obstacles, and by now optimal O(n log n) algorithms
are known; see Mitchell [12] for a survey and [5, 10] for recent results. There is also work on computing
paths with the minimum number of links [13]. A drawback of these paths is that they may touch obstacle
boundaries and therefore their clearance may be zero. Conversely, if maximizing the distance from the
obstacles is the optimization criterion, then the path can be computed by constructing a maximum spanning
tree in the Voronoi diagram of the obstacles (see Ó’Dúnlaing and Yap [14]). Wein et al. [16] considered
the problem of computing shortest paths that have clearance at least δ for some parameter δ. However,
this measure does not quantify the tradeoff between the length and the clearance, and the optimal path
may be very long. Wein et al. [17] suggested the cost function defined in equation (1) to find a balance between minimizing the path length and maximizing its clearance. They devise an approximation algorithm
to compute near-optimal paths under this metric for a point robot moving amidst polygonal obstacles in
the plane. Their approximation algorithm runs in time polynomial in 1/ε, n, and Λ where ε is the maximal
additive error, n is the number of obstacle vertices, and Λ is (roughly speaking) the total cost of the edges in
the Voronoi diagram of the obstacles; for the exact definition of Λ, see [17]. Note that the running time of
their algorithm is exponential in the worst-case, because the value of Λ may be exponential as a function of
the input size. We are not aware of any polynomial-time approximation algorithm for this problem. It is not
known whether the problem of computing the optimal path is NP-hard.
The problem of computing shortest paths amid polyhedral obstacles in R3 is NP-hard [3], and a few
heuristics have been proposed in the context of sampling-based motion planning in high dimensions (a widely
used approach in practice [6]) to compute a short path that has some clearance; see, e.g., [15].
Several other bicriteria measures have been proposed in the context of path planning amid obstacles in R2 ,
which combine the length of the path with curvature, the number of links in the path, the visibility of the
path, etc. (see e.g. [1, 4, 11] and references therein). We also note a recent work by Cohen et al. [7], which is
in some sense dual to the problem studied here: Given a point set P and a path γ, they define the cost of γ
to be the integral of clearance along the path, and the goal is to compute a minimal-cost path between two
given points. They present an approximation algorithm whose running time is near-linear in the number of
points.
Our contribution. We present an algorithm that, given O, s, t, and ε ∈ (0, 1], computes in time O((n²/ε²) log(n/ε)) a path from s to t whose cost is at most (1 + ε)π(s, t).
As in [17], our algorithm is based on sampling, i.e., it employs a weighted geometric graph G = (V, E)
with V ⊂ R2 and s, t ∈ V and computes a minimal-cost path in G from s to t. However, we prove a number
of useful properties of optimal paths that enable us to sample far fewer points and construct a graph of size O((n²/ε²) log(n/ε)).
We first compute the Voronoi diagram V of O and then refine each Voronoi cell into constant-size cells.
We refer to the latter as the refined Voronoi diagram of O and denote it by Ṽ. We prove in Section 3 the
existence of a path γ from s to t whose cost is O(π(s, t)) and that has the following useful properties: (i) for
every cell T ∈ Ṽ, γ ∩ int(T ) is a connected subpath and the clearances of all points in this subpath are the
same; we describe these subpaths as well-behaved ; (ii) for every edge e ∈ Ṽ, there are O(1) points, called
anchor points, that depend only on the two cells incident to e with the property that either γ intersects e
transversally (i.e., γ ∩ e is a single point) or the endpoints of γ ∩ e are anchor points. We use anchor points to
propose a simple O(n)-approximation algorithm (Section 4.1). We then use anchor points and the existence of
well-behaved paths to choose a set of O(n) sample points on each edge of Ṽ and construct a planar graph G
with O(n2 ) vertices and edges so that the optimal path in G from s to t has cost O(π(s, t)) (Section 4.2).
Finally, we prove additional properties of optimal paths to construct the final graph with O((n²/ε²) log(n/ε)) edges (Section 4.3). Unlike Wein et al. [17], we do not connect every pair of sample points on the boundary of a cell T of Ṽ. Instead, we construct a small-size spanner within T which ensures that the number of edges in the graph is only O((n²/ε²) log(n/ε)) and not O(n³/ε²).
Figure 1: The Voronoi diagram and the refined Voronoi diagram of a set of obstacles (dark red). (a) Voronoi diagram: Voronoi edges are depicted by solid black lines. (b) Refined Voronoi diagram: Voronoi edges are depicted by dashed black lines. Green solid lines and blue dotted lines represent edges of type (i) and type (ii), respectively. A representative cell T is depicted in light blue.
2 Preliminaries
Recall that O is a set of polygonal obstacles in the plane consisting of n vertices in total. We refer to the
edges and vertices of O as its features. Given a point p and a feature o, let ψo(p) be the closest point to p on o, so that ‖po‖ = ‖pψo(p)‖. If a path γ contains two points p and q, we let γ[p, q] denote the subpath of γ
between p and q. Let F = cl(R2 \ O) denote the free space. We assume in this paper that the free space F is
bounded. This assumption can be enforced by placing a sufficiently large bounding box around O and the
points s and t.
Voronoi diagram and its refinement. The Voronoi cell of a polygon feature o, denoted by V(o), is the
set of points in F for which o is a closest feature of O. Cell V(o) is star shaped, and the interiors of Voronoi
cells of two different features are disjoint. The Voronoi diagram of features of O, denoted by V, is the planar
subdivision of F induced by the Voronoi cells of features in O. The edge between the Voronoi cells of a vertex
and an edge feature is a parabolic arc, and between two vertex or two edge features, it is a line segment. See
Figure 1(a). The Voronoi diagram has total complexity O(n). See [2] for details.
For any obstacle feature o and for any point x along any edge on ∂V(o), the function ‖xψo(x)‖ is convex.
We construct the refined Voronoi diagram Ṽ by adding the following edges to each Voronoi cell V(o) and
refining it into constant-size cells:
(i) the line segments xψo (x) between each obstacle feature o and every vertex x on V(o) and
(ii) for each edge e of V(o), the line segment xψo (x), where x ∈ e is the point that minimizes kxψo (x)k.
We also add a line segment from the obstacle feature o closest to s (resp. t) that initially follows ψo (s)s
(resp. ψo (t)t) and ends at the first Voronoi edge it intersects. Note that some edges of type (i) may already
be present in the Voronoi diagram V. We say that an edge in Ṽ is an internal edge if it separates two cells
incident to the same polygon. Other edges are called external edges.
Clearly, the complexity of Ṽ is O(n). Moreover, each cell T in Ṽ is incident to a single obstacle feature o
and has three additional edges. One edge is external, and it is a monotone parabolic arc or line segment. The
other two edges are internal edges on T , and they are both line segments. For each cell T , let κT denote the
external edge of T , let αT and βT denote the shorter and longer internal edges of T respectively, and let uT
and vT denote the vertices connecting αT and βT to κT respectively. See Figure 1(b).
For any value c > 0, the set of points in Voronoi cell T of clearance c, if nonempty, forms a connected
arc η which is a circular arc centered at o if o is a vertex and a line segment parallel to o if o is an edge. One
endpoint of η lies on βT and the other on αT or κT .
Figure 2: Different examples of optimal paths (blue) among different types of obstacles (red). (a) Point obstacle. (b) Line obstacle. (c) Line obstacle (degenerate). (d) Polygonal obstacles. In (d), the Voronoi diagram of the obstacles is depicted by dashed black lines.
Properties of optimal paths. We list several properties of our cost function. For detailed explanations
and proofs, the reader is referred to Wein et al. [17]. Let s = rs·e^{iθs} be a start position and t = rt·e^{iθt} be a target position.
(P1) Let o be a point obstacle with O = {o}, and assume without loss of generality that o lies at the origin
and 0 ≤ θs ≤ θt ≤ π. The optimal path between s and t (see Figure 2(a)) is a logarithmic spiral centered
on o, and its cost is
π(s, t) = √((θt − θs)² + (ln rt − ln rs)²).    (2)
(P2) Let o be a line obstacle with O = {o}, and assume without loss of generality that o is supported by the
line y = 0, 0 ≤ θs ≤ θt ≤ π, and rs = rt = r. The optimal path between s and t (see Figure 2(b)) is a
circular arc with its center at the origin, and its cost2 is
π(s, t) = ln((1 − cos θt)/sin θt) − ln((1 − cos θs)/sin θs) = ln tan(θt/2) − ln tan(θs/2).    (3)
(P3) Let o be an obstacle with O = {o} and s on the line segment between ψo (t) and t. The optimal path
between s and t (see Figure 2(c)) is a line segment, and its cost is
π(s, t) = ln clr(t) − ln clr(s).    (4)
(P4) The minimal-cost path γ between two points p and q on an edge e of V is the piece of e between p
and q. Moreover, there is a closed-form formula describing the cost of γ.
(P5) Since each point within a single Voronoi cell is closest to exactly one obstacle feature, we may conclude
the following: Given a set of obstacles, the optimal path connecting s and t consists of a sequence of
circular arcs, pieces of logarithmic spirals, line segments, and pieces of Voronoi edges. Each member
of this sequence begins and ends on an edge or vertex of Ṽ (see Figure 2(d)).
² The original equation describing the cost of the optimal path in the vicinity of a line obstacle had the obstacle on x = 0, and it contained a minor inaccuracy in its calculation. We present the correct cost in (3).
The following lemma follows immediately from (1) and (2).
Lemma 2.1. Let p and q be two points such that clr(p) ≤ clr(q). The following properties hold:
(i) We have π(p, q) ≥ ln(clr(q)/clr(p)). If p and q lie in the same Voronoi cell of an obstacle feature o and if p lies on the line segment qψo(q), then the bound is tight.
(ii) If there is a single point obstacle o located at the origin, p = rp·e^{iθp} and q = rq·e^{iθq} with 0 ≤ θp ≤ θq ≤ π,
then π(p, q) ≥ θq − θp . If rp = rq (namely, p and q are equidistant to o), then the bound is tight.
An immediate corollary of Lemma 2.1 is the following:
Corollary 2.2. Let p and q be two points on the boundary of a Voronoi cell T . Let w be another point on
the same edge of T as p, and let clr(p) ≤ clr(w) ≤ clr(q). Then π(p, w) ≤ π(p, q).
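As an illustration of properties (P1)–(P3) and the lower bound of Lemma 2.1, the following sketch evaluates the closed-form costs (2)–(4) for a single point obstacle at the origin and a single line obstacle supported by y = 0. The function and variable names are ours, not the paper's.

import math

def cost_point_obstacle(rs, ts, rt, tt):
    # (P1): optimal cost between s = rs*e^{i*ts} and t = rt*e^{i*tt}
    # around a point obstacle at the origin (a logarithmic spiral), equation (2).
    return math.hypot(tt - ts, math.log(rt) - math.log(rs))

def cost_line_obstacle(ts, tt):
    # (P2): optimal cost between two points equidistant from the line y = 0,
    # at polar angles 0 < ts <= tt < pi (a circular arc), equation (3).
    return abs(math.log(math.tan(tt / 2)) - math.log(math.tan(ts / 2)))

def cost_radial(clr_s, clr_t):
    # (P3): optimal cost when s lies on the segment from psi_o(t) to t, equation (4).
    return math.log(clr_t) - math.log(clr_s)

# Lemma 2.1(i): pi(p, q) >= ln(clr(q)/clr(p)); tight for radial segments.
rs, ts, rt, tt = 1.0, 0.2, 3.0, 1.1
assert cost_point_obstacle(rs, ts, rt, tt) >= math.log(rt / rs) - 1e-12
print(cost_point_obstacle(rs, ts, rt, tt), cost_line_obstacle(ts, tt), cost_radial(rs, rt))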
Model of computation. We are primarily concerned with the combinatorial time complexity of our
algorithm. Therefore, we assume a model of computation that allows us to evaluate basic trigonometric and
algebraic expressions, such as the ones given above, in constant time. Our model also allows us to find the
roots of a constant-degree polynomial in constant time.
3 Well-behaved Paths
Let T be a cell of Ṽ incident to obstacle feature o, and let p and q be two points on ∂T .
We define a well-behaved path between p and q, denoted by γ(p, q), whose cost is at most 11π(p, q) and that can be computed in O(1) time. We first define γ(p, q), then analyze its cost, and finally prove an additional property of γ(p, q) that allows us to compute it in O(1) time.
If both p and q lie on the same edge of ∂T or neither of them lies on the edge βT, then we define γ(p, q) to be the unique path from p to q along ∂T that does not intersect o. If one of p and q, say, p, lies on βT, then γ(p, q) is somewhat more involved, because the path along ∂T can be quite expensive. Instead, we let γ(p, q) enter the interior of T. For a point w ∈ βT, let ηw be the maximal path in T of clearance clr(w) beginning at w, i.e., its image is the set of points {z ∈ T | clr(z) = clr(w)}. By the discussion in Section 2, ηw is a line segment or a circular arc with w as one of its endpoints. Let w̄ be the other endpoint of ηw. We define the path
λ(p; w) = pw ◦ ηw
to be the segment pw followed by the arc ηw. We refer to w as the anchor point of λ(p; w). Let wp∗ be the anchor point on edge βT of clearance greater than clr(p) that minimizes the cost of λ(p; w). Namely,
wp∗ = argmin{µ(λ(p; w)) : w ∈ βT, clr(w) ≥ clr(p)}.
We now define
γ(p, q; w) = λ(p; w) ◦ γ(w̄, q)   and   γ(p, q) = γ(p, q; wp∗).
See Figure 3.
Figure 3: Components of the well-behaved path γ(p, q).
The next two lemmas bound the cost of γ(p, q).
Lemma 3.1.
(i) If p and q lie on the same edge of ∂T , then µ(γ(p, q)) = π(p, q).
(ii) If neither p nor q lies on βT , then µ(γ(p, q)) ≤ 3π(p, q).
Figure 4: Different cases considered in the proof of Lemma 3.2. We use Π∗[p, q] to denote the minimal-cost path between p and q. (a) Case 1: q ∈ κT. (b) Case 2: q, w̄ ∈ αT. (c) Case 3: q ∈ αT, w̄ ∈ κT.
Proof.
(i) If p and q lie on the same edge e of ∂T , then γ(p, q) ⊆ e, and by (P4), γ(p, q) is the optimal path
between p and q. Hence, the claim follows.
(ii) Suppose p ∈ αT and q ∈ κT . Path γ(p, q) travels along αT from p to uT , and then along κT from uT to q.
By Corollary 2.2 and the fact that uT is the lowest clearance point on κT , we have π(p, uT ) ≤ π(p, q).
By the triangle inequality, we have that
π(uT , q) ≤ π(uT , p) + π(p, q) ≤ 2π(p, q).
Finally,
µ(γ(p, q)) = π(p, uT ) + π(uT , q) ≤ 3π(p, q).
Lemma 3.2. If p ∈ βT and q ∉ βT, then µ(γ(p, q)) ≤ 11π(p, q).
Proof. Let w be any point of βT such that clr(w) ≥ clr(p). We begin by proving µ(γ(p, q; w)) ≤ 4µ(λ(p; w)) +
3π(p, q). Later, we will show µ(λ(p; wp∗ )) ≤ 2π(p, q), proving the lemma.
To prove the first claim, we consider different cases depending on the edges of ∂T that contain w̄ and q.
See Figure 4.
Case 1: q ∈ κT . In this case, γ(w̄, q) ⊆ κT , and therefore µ(γ(w̄, q)) = π(w̄, q). By the triangle inequality,
π(w̄, q) ≤ µ(λ(p; w)) + π(p, q).
Case 2: q, w̄ ∈ αT . In this case, γ(w̄, q) ⊆ αT , and therefore µ(γ(w̄, q)) = π(w̄, q). Again, π(w̄, q) ≤
µ(λ(p; w)) + π(p, q).
Case 3: q ∈ αT , w̄ ∈ κT . In this case, γ(w̄, q) first travels along κT from w̄ to uT and then along αT
from uT to q. Since clr(uT ) ≤ clr(w),
π(q, uT ) ≤ π(q, p) + π(p, w) ≤ π(p, q) + µ(λ(p; w))
by Corollary 2.2. Furthermore, by the triangle inequality,
π(w̄, uT ) ≤ µ(λ(p; w)) + π(p, q) + π(q, uT )
≤ 2µ(λ(p; w)) + 2π(p, q).
Hence, µ(γ(w̄, q)) = π(w̄, uT ) + π(uT , q) ≤ 3µ(λ(p; w)) + 3π(p, q).
Figure 5: Different cases considered in the proof of Lemma 3.3, Case 1: o is a vertex. (a) Case 1(a): w̄∗ ∈ αT. (b) Case 1(b)(i): w̄∗ ∈ κT; κT is a line segment. (c) Case 1(b)(ii): w̄∗ ∈ κT; κT is a parabolic arc.
Since µ(γ(w̄, q)) ≤ 3µ(λ(p; w)) + 3π(p, q) in all three cases, µ(γ(p, q; w)) ≤ 4µ(λ(p; w)) + 3π(p, q) as claimed.
Let c∗ be the maximum clearance of a point on the optimal path between p and q (if there are multiple
optimal paths between p and q, choose one of them arbitrarily). Let w ∈ βT be the point of clearance
min{c∗ , clr(vT )}. We now prove that µ(λ(p; w)) ≤ 2π(p, q).
We first note that clr(p) ≤ clr(w) ≤ c∗ . Therefore, by Corollary 2.2, µ(pw) = π(p, w) ≤ π(p, q). Next, we
argue that µ(ηw ) ≤ π(p, q). Indeed, if o is a polygon edge, then ηw is the Euclidean shortest path between
any pair of points on βT and one of αT or κT whose clearance never exceeds c∗ . It also (trivially) has the
highest clearance of any such path. If o is a polygon vertex, then ηw spans a shorter angle relative to o than
any other path whose clearance never exceeds c∗ . By Lemma 2.1(ii), the cost of any such path from βT to
one of αT or κT is at least this angle, and by (P1), the cost of ηw is exactly this lower bound. Either way,
any path between p and q also goes between βT and one of αT or κT , so we conclude that µ(ηw ) ≤ π(p, q).
Hence, µ(λ(p; w)) ≤ 2π(p, q). In particular, µ(λ(p; wp∗ )) ≤ 2π(p, q), and µ(γ(p, q; wp∗ )) ≤ 11π(p, q).
If p ∈ βT and q ∉ βT, then computing γ(p, q) requires computing the anchor point w that minimizes µ(λ(p; w)). We show that the point wp∗ ∈ βT that defines γ(p, q) is either p itself or a point that only
depends on the geometry of ∂T and not on p or q.
Lemma 3.3. There exist two points wαT and wκT on βT , such that for any p ∈ βT , wp∗ ∈ {p, wαT , wκT }.
Furthermore, these two points can be computed in O(1) time.
Proof. There are several cases to consider depending on whether o is a vertex or an edge, whether the point w̄∗
lies on κT or αT , and whether κT is a line segment or a parabolic arc. Depending on the geometry of T , we
define wαT and wκT accordingly. In each case, we parameterize the anchor point w on βT appropriately and
show that wp∗ ∈ {p, wαT , wκT }. For simplicity, for a parameterized anchor point w(t), we use η(t), λ(t), and
µ(t) to denote ηw(t) , λ(p; w(t)), and µ(λ(p; w(t))), respectively. We now describe each case:
Case 1: o is a vertex. Without loss of generality, assume that o lies at the origin, edges αT and βT intersect
the line y = 0 at the origin with angles θα and θβ respectively, and θβ > θα ≥ 0. In this case, ηw , the
constant-clearance path anchored at w ∈ βT , is a circular arc. We consider two cases depending on whether w̄∗
lies on αT or κT . See Figure 5.
Case 1(a): w̄∗ ∈ αT . We parameterize the anchor point w by its clearance, i.e. clr(w(t)) = t, and the feasible
range of t is [clr(p), clr(uT )]. If w̄(t) ∈ αT , then the cost of η(t) is simply θβ − θα , and
µ(t) = ln(clr(w(t))/clr(p)) + θβ − θα = ln(t/clr(p)) + θβ − θα.
Therefore, µ is minimized in the range [clr(p), clr(uT )] for t = clr(p), so w∗ = p in this case.
Case 1(b): w̄∗ ∈ κT . We parameterize w with the angle θ of the segment ow̄. We call θ feasible if clr(w(θ)) ≥
clr(p) and θα ≤ θ ≤ θβ . We divide this case further into two subcases:
Figure 6: Different cases considered in the proof of Lemma 3.3, Case 2: o is an edge. (a) Case 2(a): w̄∗ ∈ αT. (b) Case 2(b)(i): w̄∗ ∈ κT; κT is a line segment. (c) Case 2(b)(ii): w̄∗ ∈ κT; κT is a parabolic arc.
Case 1(b)(i): κT is a line segment. Without loss of generality, κT is supported by the line x = a. The equation
of the line in polar coordinates is r = a/ cos θ. We have θβ ≤ π/2. Restricting ourselves to feasible values
of θ, we have
µ(θ) = ln(clr(w(θ))/clr(p)) + θβ − θ = ln((a/cos θ)/clr(p)) + θβ − θ.
Taking the derivative, we obtain
(d/dθ) µ(θ) = tan θ − 1.
This expression is negative for θ = 0, positive near θ = π/2, and it has at most one root within feasible
values of θ, namely at θ = π/4. Therefore, µ(θ) is minimized when either clr(w(θ)) = clr(p) or θ = θ∗ =
min{max{π/4, θα }, θβ }. We pick wκT = w(θ∗ ).
Case 1(b)(ii): κT is a parabolic arc. Without loss of generality, the parabola supporting κT is equidistant
between o and the line x = 2a. The equation of the parabola in polar coordinates is r = 2a/(1 + cos θ). We
have θβ ≤ π. The polar coordinates of w(θ) are w(θ) = (2a/(1 + cos θ), θβ).
Restricting ourselves to feasible values of θ, we have
µ(θ) = ln(clr(w(θ))/clr(p)) + θβ − θ = ln((2a/(1 + cos θ))/clr(p)) + θβ − θ.
Here,
(d/dθ) µ(θ) = sin θ/(1 + cos θ) − 1 = tan(θ/2) − 1.
Again, the expression is negative for θ = 0, positive for θ near π, and it has at most one root within
feasible values of θ, namely at θ = π/2. Therefore, µ(θ) is minimized when either clr(w(θ)) = clr(p) or
θ = θ∗ = min{max{π/2, θαT }, θβT }. We pick wκT = w(θ∗ ).
Case 2: o is an edge. Without loss of generality, o lies on the line y = 0, the edge αT lies on the line x = xα ,
the edge βT lies on the line x = xβ , and xβ > xα ≥ 0. In this case, ηw is a horizontal segment. We again
consider two cases. See Figure 6.
Case 2(a): w̄∗ ∈ αT . As in Case 1(a), we parameterize the anchor point w by its clearance, i.e., clr(w(t)) = t,
and the feasible range of t is [clr(p), clr(uT )]. Restricting ourselves to feasible values of w(t), we have
µ(t) = ln(clr(w(t))/clr(p)) + ‖η(t)‖/clr(w(t)) = ln(t/clr(p)) + (xβ − xα)/t.
We have
(d/dt) µ(t) = 1/t − (xβ − xα)/t².
This expression is negative for t near 0, positive for large t, and it has at most one root within feasible values
of t, namely at t = xβ − xα . If xβ − xα ≤ clr(uT ), then µ(t) is minimized when either clr(w(t)) = clr(p)
or t = t∗ = xβ − xα . If xβ − xα > clr(uT ), then w̄∗ ∈ κT , so assume that xβ − xα ≤ clr(uT ). We
pick wαT = w(t∗ ).
Case 2(b): w̄∗ ∈ κT . We parameterize w by the x-coordinate of w̄. We call t feasible if clr(w(t)) ≥ clr(p) and
t ∈ [xα , xβ ]. There are two subcases.
Case 2(b)(i): κT is a line segment. Without loss of generality, the line supporting κT intersects o at the origin
with angle θ. Restricting ourselves to feasible values of t, we have
µ(t) = ln(clr(w(t))/clr(p)) + ‖η(t)‖/clr(w(t)) = ln((t tan θ)/clr(p)) + (xβ − t)/(t tan θ).
We see
(d/dt) µ(t) = 1/t − xβ/(t² tan θ) = (t tan θ − xβ)/(t² tan θ).
This expression is negative for t near 0, positive for large t, and it has at most one root within feasible
values of t, namely at t = xβ / tan θ. Therefore, µ(t) is minimized when either clr(w(t)) = clr(p) or t = t∗ =
min{max{xβ / tan θ, xα }, xβ }. We pick wκT = w(t∗ ); note that clr(wκT ) = t∗ tan θ.
Case 2(b)(ii): κT is a parabolic arc. Without loss of generality, the parabola supporting κT is equidistant
between o and a point located at (0, 2a). Therefore, the parabola is described by the equation y = x2 /(4a) + a.
Restricting ourselves to feasible values of t, we have
µ(t) = ln(clr(w(t))/clr(p)) + ‖η(t)‖/clr(w(t)) = ln((t²/(4a) + a)/clr(p)) + (xβ − t)/(t²/(4a) + a).
We have
(d/dt) µ(t) = (2t³ + 4at² + 8a(a − xβ)t − 16a³)/(t² + 4a²)².
This expression is negative for t near 0 and positive for large t. The derivative of the numerator is 6t² + 8at + 8a(a − xβ), which has at most one positive root. Therefore, the numerator has at most one positive local maximum or minimum. We see (d/dt) µ(t) goes from negative to positive around exactly one positive root (which may not be feasible), and µ(t) has one minimum at a positive value of t. Let t0 be this root of (d/dt) µ(t). Value µ(t) is minimized when either clr(w(t)) = clr(p) or t = t∗ = min{max{t0, xα}, xβ}. We pick wκT = w(t∗); note that clr(wκT) = (t∗)²/(4a) + a.
We note that in all cases w∗ ∈ {p, wκT , wαT } for some choices of wκT and wαT that depend only on the
geometry of T and not on p. Note that no subcase of Case 1 required picking a concrete wαT , so if o is a
vertex, we let wαT be an arbitrary point on βT . In every case, wκT and wαT can be computed in O(1) time.
We conclude the proof of the lemma.
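The case analysis above reduces to a handful of one-dimensional minimizations. Below is a minimal sketch of the candidate anchor parameters for Cases 1(b)(i), 1(b)(ii), 2(a), and 2(b)(i) (Case 2(b)(ii) would additionally require finding the positive root of a cubic); all names are ours and the functions only return the clamped minimizers derived above.

import math

def clamp(x, lo, hi):
    return min(max(x, lo), hi)

# Case 1(b)(i): point obstacle, kappa_T supported by the line x = a.
def theta_star_line(theta_alpha, theta_beta):
    return clamp(math.pi / 4, theta_alpha, theta_beta)

# Case 1(b)(ii): point obstacle, kappa_T a parabolic arc r = 2a/(1 + cos theta).
def theta_star_parabola(theta_alpha, theta_beta):
    return clamp(math.pi / 2, theta_alpha, theta_beta)

# Case 2(a): edge obstacle; unconstrained minimizer of ln(t/clr(p)) + (x_beta - x_alpha)/t.
def t_star_alpha(x_alpha, x_beta):
    return x_beta - x_alpha

# Case 2(b)(i): edge obstacle, kappa_T a line through the origin with angle theta.
def t_star_kappa_line(x_alpha, x_beta, theta):
    return clamp(x_beta / math.tan(theta), x_alpha, x_beta)

print(theta_star_line(0.1, 1.2), t_star_kappa_line(0.5, 2.0, math.pi / 3))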
4 Approximation Algorithms
In this section, we propose a near-quadratic-time (1 + ε)-approximation algorithm for computing the minimal-cost path between two points s, t ∈ F amid O. We assume that clr(s) ≤ clr(t) throughout this section. We
first give a high-level overview of the algorithm and then describe each step in detail. Throughout this section,
let Π∗ denote a minimal-cost (s, t)-path.
High-level description. Our algorithm begins by computing the refined Voronoi diagram Ṽ of O. The
algorithm then works in three stages. The first stage computes an O(n)-approximation of d∗ = µ(s, t), i.e., it
returns a value d˜ such that d∗ ≤ d˜ ≤ cnd∗ for some constant c > 0. By augmenting Ṽ with a linear number
of additional edges, each a constant-clearance path between two points on the boundary of a cell of Ṽ, the
Figure 7: Edges added within cell T for the O(n)-approximation algorithm: (a) point-obstacle cell, (b) line-obstacle cell; (c) visualization of the proof of Lemma 4.1 (the (p, q)-path γ).
algorithm constructs a graph G1 with O(n) vertices and edges, and it computes a minimal-cost path from s
to t in G1 .
Equipped with the value d̃, the second stage computes an O(1)-approximation of d∗. For a given d ≥ 0,
this algorithm constructs a graph G2 = G2 [d] by sampling O(n) points on the boundary of each cell T of Ṽ
and connecting these sample points by adding O(n) edges (besides the boundary of T ), each of which is again
a constant-clearance path. The resulting graph G2 is planar and has O(n2 ) edges total, so a minimal-cost path
in G2 from s to t can be computed in O(n2 ) time [9]. We show that if d ≥ d∗ , then the cost of the optimal
path from s to t in G2 is O(d). Therefore, if d ∈ [d∗ , 2d∗ ], the cost of the optimal path is O(d∗ ). Using the
value of d̃, we run the above procedure for O(log n) different values of d, namely d ∈ {d̃/2^i | 0 ≤ i ≤ ⌈log cn⌉}, and return the least costly path among them. Let d̂ be the cost of the path returned.
Finally, using the value d̂, the third stage samples O(n/ε) points on the boundary of each cell T of Ṽ and
connects each point to O((1/ε) log(n/ε)) other points on the boundary of T by an edge. Unlike the last two
stages, each edge is no longer a constant-clearance path but it is a minimal-cost path between its endpoints
lying inside T . The resulting graph G3 has O(n2 /ε) vertices and O((n2 /ε2 ) log(n/ε)) edges. The overall
algorithm returns the minimal-cost path in G3 . Anchor points and well-behaved paths play a pivotal role in
each stage of the algorithm.
4.1 Computing an O(n)-approximation
Here, we describe a near-linear time algorithm to obtain an O(n)-approximation of d∗ . We augment Ṽ
with O(n) additional edges as described below to create the graph G1 = (V1 , E1 ).
We do the following for each cell T of Ṽ. We compute anchor points wαT and wκT as described in Lemma 3.3.
Let sT be the point on βT of clearance min{clr(vT ), clr(s)}. Set WT = {wαT , wκT , sT }, ET = {ηw |w ∈ WT },
and W̄T = {w̄αT , w̄κT , s̄T }. Vertex set V1 is the set of Voronoi vertices plus the set WT ∪ W̄T for all cells
T ∈ Ṽ3 . Next, for each edge e of Ṽ, we add the portion of e between two consecutive vertices of V1 as an edge
of E1 , and for each cell T ∈ Ṽ we also add ET to E1 . See Figure 7(a) and (b). (Note that if sT = vT then ηsT
is a trivial path and there is no need to add ηsT to E1 . Paths ηwαT and ηwκT may be trivial as well.) The
cost µ(e) for each edge e ∈ E1 is computed using (1) or the equations of Wein et al. [17] for Voronoi edges.
By construction, |V1 | = O(n) and |E1 | = O(n). We compute and return, in O(n log n) time, an optimal path
from s to t in G1 .
Lemma 4.1. Graph G1 contains an s, t-path of cost at most O(n) · d∗ .
Proof. Let Π∗ be an optimal path from s to t. We will deform Π∗ into another path Π̃ from s to t of cost
O(n) · d∗ that enters or exits the interior of a cell of Ṽ only at the vertices of V1 and follows an arc of ET in
the interior of the cell T . By construction, Π̃ will be a path in G1 which will imply the claim.
By construction s, t ∈ V1 . Let Π denote the current path that we have obtained by deforming Π∗ . Let
T ∈ Ṽ be the first cell (along Π) such that Π enters the interior of T but int(T ) ∩ Π is not an arc of ET . Let p
(resp. q) be the first (resp. last) point of Π ∩ T . If both p and q lie on the same edge of T or neither of them
3 Note that as we consider each cell independently, we actually consider each edge e twice as it is adjacent to two cells and
add vertices on e for each cell independently. The set of vertices put on e is the union of these two sets. Considering each edge
twice does not change the complexity of the algorithm or its analysis, and doing so simplifies the algorithm’s description.
Figure 8: Samples placed on the edges of a cell T of Ṽ. The sampled regions are depicted in purple. For the
constant-factor approximation algorithm, samples are placed on βT only.
lies on βT , we replace Π[p, q] with the well-behaved path γ(p, q), because γ(p, q) ⊆ ∂T in this case. Suppose
p ∈ βT and q ∉ βT (the other case is symmetric). We replace Π[p, q] with Π̃[p, q] = psT ◦ γ(sT, q), i.e., the segment psT ⊆ βT followed by the well-behaved path γ(sT, q). By Lemma 3.3, cl(int(T) ∩ γ(sT, q)) ∈ ET.
We repeat the above step until no such cell T is left. Since the above step is performed at most once for
each cell of Ṽ, we obtain the final path Π̃ in O(n) steps.
We now bound the cost of Π̃. If Π[p, q] is replaced by γ(p, q), then by Lemma 3.1, µ(γ(p, q)) ≤ 3µ(Π[p, q]).
On the other hand, if p ∈ βT and q ∉ βT, then µ(Π̃[p, q]) = µ(psT) + µ(γ(sT, q)). By the triangle inequality,
π(sT , q) ≤ π(sT , p) + π(p, q), and by Lemma 3.2,
µ(γ(sT , q)) ≤ 11π(sT , q) ≤ 11(π(sT , p) + π(p, q)).
By Corollary 2.2, π(p, sT ) = µ(psT ) ≤ d∗ . Putting everything together,
µ(Π̃[p, q]) ≤ 12π(p, sT) + 11π(p, q) ≤ 12d∗ + 11π(p, q) = O(d∗).    (5)
Summing over all O(n) steps, the cost of Π̃ is O(n) · d∗ .
We thus obtain the following.
Theorem 4.2. Let O be a set of polygonal obstacles in the plane, and let s, t be two points outside O. There
exists an O(n log n)-time O(n)-approximation algorithm for computing the minimal-cost path between s and t.
4.2 Computing a constant-factor approximation
Recall that, given an estimate d of the cost d∗ of the optimal path, we construct a planar graph G2 = G2 [d]
by sampling points along the edges of the refined Voronoi diagram Ṽ. The sampling procedure here can be
thought of as a warm-up for the more-involved sampling procedure given in Section 4.3.
Let T be a Voronoi cell of Ṽ. Let vT− and vT+ be the points on βT with clearance min{clr(vT ), clr(t)/ exp(d)}
and min{clr(vT ), clr(s) · exp(d)}, respectively. We refer to the segment β̂T = vT− vT+ ⊆ βT as the marked
portion of βT . By (4), µ(β̂T ) = O(d). We place sample points on β̂T , its endpoints always being sampled, so
that the cost between consecutive samples is exactly d/n (except possibly at one endpoint). Given a sample point p on an edge of Ṽ, it is straightforward to compute the coordinates of the sample point p′ on the same edge such that π(p, p′) = c for any c > 0. Simply use the formula for the cost along a Voronoi edge given
in [17, Corollary 8]. We emphasize that the points are separated evenly by cost; the samples are not uniformly
placed by Euclidean distance along the edge; see Figure 8.
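A minimal sketch of placing samples separated evenly by cost rather than by Euclidean length: given a black-box function cost(a, x) for the cost of the piece of an edge between parameters a and x (monotone in x), the next sample is found by binary search. The edge parameterization, the tolerance, and the example cost function are our own illustrative choices.

import math

def next_sample(cost, a, b, step, tol=1e-9):
    # Smallest x in (a, b] with cost(a, x) >= step, or b if the remainder is cheaper.
    if cost(a, b) < step:
        return b
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cost(a, mid) < step:
            lo = mid
        else:
            hi = mid
    return hi

def sample_edge(cost, a, b, step):
    # Samples a = x_0 < x_1 < ... <= b with cost(x_i, x_{i+1}) approximately step.
    samples = [a]
    while samples[-1] < b:
        samples.append(next_sample(cost, samples[-1], b, step))
    return samples

# Example with the radial cost of (P3): cost between clearances x and y is ln(y/x),
# so sampling with step ln 2 doubles the clearance between consecutive samples.
print(sample_edge(lambda x, y: math.log(y / x), 1.0, 8.0, math.log(2)))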
For each cell T ∈ Ṽ, let WT be the set of sample points on βT plus the anchor points wκT and wαT . For each
point w ∈ WT , we compute the constant-clearance arc ηw . Let ET = {ηw |w ∈ WT } and W̄T = {w̄|w ∈ WT }
be the set of other endpoints of arcs in ET . Set V2 is the set of vertices of Ṽ plus the set WT ∪ W̄T over all
cells in Ṽ. For each edge of Ṽ, we add the portions between consecutive sample vertices of V2 to E2 , and
we also add ET , over all cells T ∈ Ṽ, to E2 . The cost of each edge in E2 is computed as before. We have
|V2 |, |E2 | = O(n2 ), and G2 can be constructed in O(n2 ) time.
The refined Voronoi diagram Ṽ is planar. Every edge ηw added to create G2 stays within a single cell
of Ṽ and has constant clearance. Therefore, no new crossings are created during its construction, and G2 is
planar as well. We compute the minimal-cost path from s to t in G2 , in O(n2 ) time, using the algorithm of
Henzinger et al. [9].
Lemma 4.3. If d ≥ d∗ , then Π∗ ∩ βT ⊆ β̂T for any cell T ∈ Ṽ.
Proof. Let pmin be the point where Π∗ attains the minimal clearance. Clearly, π(s, t) ≥ π(s, pmin ) + π(pmin , t).
Using this observation together with Lemma 2.1(i) and the assumption that clr(s) ≤ clr(t), we conclude that
the clearance of any point on Π∗ is at least clr(t)/ exp(d∗ ) ≥ clr(t)/ exp(d). A similar argument implies the
clearance of any point on Π∗ is at most clr(s) · exp(d). Hence, Π∗ ∩ βT ⊆ β̂T .
Lemma 4.4. For d ≥ d∗ , graph G2 [d] contains an s, t-path of cost O(d).
Proof. We deform the optimal path Π∗ into a path Π̃ of G2 in the same way as in the proof of Lemma 4.1
except for the following twist. As in Lemma 4.1, let p (resp. q) be the first (resp. last) point on Π∗ in a cell
T of Ṽ. If p ∈ βT and q ∉ βT, let p′ be a sample point on βT such that π(p, p′) ≤ d/n; the existence of p′ follows from Lemma 4.3. We replace Π∗[p, q] with Π̃T = pp′ ◦ γ(p′, q), i.e., p′ replaces the role of sT in the proof of Lemma 4.1. Since π(p, p′) ≤ d/n, we have
µ(Π̃T ) ≤ 11π(p, q) + O(d/n).
Summing over all steps in the deformation of Π∗ and using the fact d ≥ d∗ = µ(Π∗ ), we obtain µ(Π̃) = O(d).
It is clear from the construction that Π̃ is a path in G2 .
For our constant-factor approximation algorithm, we perform an exponential search over the values of
path costs. Let d̃ ≤ cnd∗ be the cost of the path returned by the O(n)-approximation algorithm (Section 4.1). For each i from 0 to ⌈log cn⌉, we choose di = d̃/2^i. We run the above procedure to construct a graph G2[di]
and compute a minimal-cost path Πi in the graph. Let ∆i = µ(Πi ). We compute k = argmini ∆i and return
Πk .
Fix integer î so d∗ ≤ dî ≤ 2d∗ . By Lemma 4.4, we have
∆k ≤ ∆î = O(dî ) = O(d∗ ).
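A schematic rendering of this exponential search: assuming a routine build_and_route(d) that constructs G2[d] and returns the cost of the cheapest s–t path in it (or math.inf if none exists), we try d̃/2^i for i = 0, …, ⌈log(cn)⌉ and keep the best. The routine name, its interface, and the base-2 reading of the logarithm are placeholders of ours.

import math

def constant_factor_estimate(d_tilde, c, n, build_and_route):
    # d_tilde: cost returned by the O(n)-approximation; c: the constant of Lemma 4.1.
    best = math.inf
    for i in range(math.ceil(math.log2(c * n)) + 1):
        d_i = d_tilde / 2 ** i
        best = min(best, build_and_route(d_i))
    return best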
Theorem 4.5. Let O be a set of polygonal obstacles in the plane, and let s, t be two points outside O. There
exists an O(n2 log n) time O(1)-approximation algorithm for computing the minimal-cost path between s and t.
4.3 Computing the final approximation
Finally, let d be the estimate returned by our constant factor approximation algorithm so that d∗ ≤ d ≤ cd∗
for some constant c. We construct a graph G3 = (V3 , E3 ) by sampling O(n/ε) points along each edge of Ṽ
and connecting (a certain choice of) O((n/ε²) log(n/ε)) pairs of sample points on the boundary of each cell of Ṽ by “locally optimal” paths. We guarantee |V3| = O(n²/ε) and |E3| = O((n²/ε²) log(n/ε)). We compute and return, in O((n²/ε²) log(n/ε)) time, a minimal-cost path in G3 [8].
Vertices of G3. Let c̲ = clr(t)/exp(d) and c̄ = clr(s) · exp(d). Let T be a cell of Ṽ. For each edge of T,
we mark at most two connected portions, each of cost O(d). We refer to each marked portion as an edgelet.
We sample points on each edgelet so that two consecutive samples lie at cost εd/n apart; endpoints of each
edgelet are always included in the sample. The total number of samples placed on ∂T is O(n/ε). We now
describe the edgelets of T .
Let u⁻T, u⁺T be points on αT of clearance min{clr(uT), c̲} and min{clr(uT), c̄}, respectively. Similarly, let v⁻T, v⁺T be points on βT of clearance min{clr(vT), c̲} and min{clr(vT), c̄}, respectively. The edgelets on αT and βT are the segments u⁻T u⁺T and v⁻T v⁺T, respectively. Next, we mark (at most) two edgelets on κT: if µ(κT) ≤ 2d, then the entirety of κT is a single edgelet; otherwise, let u′T ∈ κT be the point such that µ(κT[uT, u′T]) = 2d. Let v′T ∈ κT be the point of clearance clr(v⁺T). If µ(κT[u′T, v′T]) ≤ 4d, then κT[uT, v′T] is the only edgelet on κT. Otherwise, let v″T ∈ κT be the point such that clr(v″T) ≤ clr(v′T) and µ(κT[v′T, v″T]) = 4d; κT has two edgelets κT[uT, u′T] and κT[v′T, v″T]. See Figure 8. We repeat this procedure for all cells of Ṽ. Set V3 is the set of all samples placed on the edges of Ṽ. We have |V3| = O(n²/ε).
Figure 9: Vertices in S(p) which are used to construct the set of edges of G3.
The edges of G3 . Let T be a cell of Ṽ incident to obstacle feature o. We say two points p and q in T
are locally reachable from one another if the minimal-cost path from p to q relative only to o lies within T .
Equivalently, the minimal-cost path relative to o is equal to the minimal-cost path relative to O.
Let p ∈ ∂T be a sample point. We compute a subset S(p) ⊆ V3 of candidate neighbors of p in G3 . Let
N (p) ⊆ S(p) be the subset of these points that are locally reachable from p. We connect p to each point
q ∈ N (p) by an edge in E3 of cost π(p, q). By definition, the minimal-cost path between p and q lies inside T .
Finally, as in G1 and G2 , we add the portion of each edge of Ṽ between two sample points as an edge of E3 .
We now describe how we construct S(p). Let ξ be an edgelet of ∂T such that p and ξ do not lie on the
same edge of T . We first define a shadow point p̆ of p. If p ∈ αT ∪ κT , then p̆ = p. If p ∈ βT , and ξ ∈ κT
(resp. ξ ∈ αT ), then p̆ = p if clr(p) ≥ clr(wκT ) (resp. clr(p) ≥ clr(wαT )), and p̆ = wκT (resp. wαT ) otherwise.
Let ←q (resp. →q) be the sample point on ξ of highest (resp. lowest) clearance less (resp. more) than p̆, if such a point exists. Exactly one of ←q or →q may not exist if no point of clearance p̆ exists on ξ; in this case, the construction implies that ←q is the endpoint of ξ of higher clearance or →q is the endpoint of ξ of lower clearance. If ←q exists, we add ←q to S(p). We iteratively walk along sample points of ξ in decreasing order of clearance starting with the first sample point encountered after ←q. For each non-negative integer i, we add the point qi encountered at step ⌊(1 + ε)^i⌋ of the walk until we reach an endpoint of ξ. Similarly, if →q exists, we add to S(p) the point →q and perform the walk along points of greater clearance. See Figure 9.
Finally, we add the two endpoints of ξ to S(p). We repeat this step for all edgelets on ∂T . Since ξ has O(n/ε)
sample points and ∂T has at most four edgelets, |S(p)| = O((1/ε) log(n/ε)), and S(p) can be constructed in
O(|S(p)|) time.
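The walk that defines S(p) only needs indices of the form ⌊(1 + ε)^i⌋ into the list of samples of ξ ordered by clearance; below is a small sketch. The list indexing, the function names, and the toy example are ours, and the sketch is only meant to show why |S(p)| = O((1/ε) log(n/ε)).

def geometric_indices(m, eps):
    # Indices floor((1 + eps)^i), i = 0, 1, ..., clipped to a list of length m.
    out, i = [], 0
    while True:
        idx = int((1 + eps) ** i)
        if idx >= m:
            break
        if not out or idx != out[-1]:
            out.append(idx)
        i += 1
    return out

def candidate_neighbors(samples_by_clearance, start, eps):
    # Walk from position `start` toward the end of the edgelet, keeping the
    # geometrically spaced samples plus the endpoint (cf. the construction of S(p)).
    tail = samples_by_clearance[start:]
    picked = [tail[j] for j in geometric_indices(len(tail), eps)]
    return [tail[0]] + picked + [tail[-1]]

print(candidate_neighbors(list(range(40)), 3, 0.5))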
Analysis. It is clear from the construction that |V3| = O(n²/ε), |E3| = O((n²/ε²) log(n/ε)), and that G3 can be constructed in time O((n²/ε²) log(n/ε)). By using Dijkstra’s algorithm with Fibonacci heaps [8], a minimal-cost path in G3 can be computed in O((n²/ε²) log(n/ε)) time. So it remains to prove that the algorithm returns a path of cost at most (1 + O(ε))π(s, t). By rescaling ε, we can thus compute a path from s to t of cost at most (1 + ε)π(s, t) in time O((n²/ε²) log(n/ε)).
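For completeness, a minimal Dijkstra sketch over a weighted graph such as G3 (using a binary heap via heapq rather than Fibonacci heaps, which is asymptotically slightly weaker but simpler; the adjacency-list format is an assumption of ours):

import heapq

def dijkstra(adj, s, t):
    # adj: dict mapping a vertex to a list of (neighbor, nonnegative cost) pairs.
    dist = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

print(dijkstra({"s": [("a", 1.0), ("t", 5.0)], "a": [("t", 1.5)]}, "s", "t"))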
Lemma 4.6. Let Π∗ be a minimal-cost path from s to t. For every edge e ∈ Ṽ, Π∗ ∩ e lies inside the marked
portion of e.
Proof. Fix an edge e ∈ Ṽ, and let q ∈ Π∗ ∩ e. We aim to prove q lies inside the marked portion of e. Recall,
d ≥ d∗ . The proof of Lemma 4.3 already handles the case of e being an internal edge.
Now, suppose e is an external edge. We assume µ(e) > 2d; otherwise, the proof is trivial. We have e ∈ κT
and e ∈ κT′ for two adjacent Voronoi cells T and T′. By construction, point s lies outside the interior of T ∪ T′. Therefore, Π∗[s, q] intersects at least one internal edge incident to T or T′ at some point p. Without
loss of generality, assume that internal edge belongs to T . We have two cases.
Case 1: p ∈ αT . Since clr(p) ≤ clr(uT ) ≤ clr(q), we have π(p, uT ) ≤ d∗ . By triangle inequality,
π(uT , q) ≤ π(uT , p) + π(p, q) ≤ 2d∗ ≤ 2d.
Case 2: p ∈ βT . By (4), π(vT+ , p) ≤ 2d. Recall from the proof of Lemma 4.3 that the clearance of any
point on Π∗ is at most clr(s) · exp(d). We defined ηvT+ as the line segment or circular arc with vT+ as one of its endpoints; vT′ is its other endpoint. One can easily verify µ(ηvT+) ≤ d∗; see the proof of Lemma 3.2. By the triangle inequality,
π(vT′, q) ≤ µ(ηvT+) + π(vT+, p) + π(p, q) ≤ 2d∗ + 2d ≤ 4d.
Figure 10: Case 1 of proof of Lemma 4.7: a cell of a point obstacle in Ṽ and the optimal path between points a and b (blue) in the original (left) and transformed plane (right), respectively.
Figure 11: Case 1 of proof of Lemma 4.7: the set of locally reachable points (purple) from the point p in the transformed plane; the solid part of ℓp is gp. (a) p ∈ αT. (b) p ∈ κT.
Now, we prove a property of locally reachable points from a fixed point which will be crucial for our
analysis.
Lemma 4.7. Let T be a cell of Ṽ, and let p ∈ ∂T . For every edge e of T , the set of points on e locally
reachable from p, if non-empty, is a connected portion of e and contains an endpoint of e.
Proof. Let o be the feature of O associated with T . We consider two main cases.
Case 1: o is a vertex. Without loss of generality, o lies at the origin, edge αT intersects the line y = 0 at the
origin with angle θα , edge βT intersects the line y = 0 at the origin with angle θβ , and θβ > θα ≥ 0. We
consider a map f : R2 → R2 taking points to what we refer to as the transformed plane. Given in polar
coordinates point (r, θ), the map f is defined as f (r, θ) = (θ, ln r). For a point x ∈ R2 , let x∗ = f (x), and for
a point set X ⊆ R2 , let X ∗ = {f (x)|x ∈ X}. Both αT and βT become vertical rays in the transformed plane
going to −∞. Further, it is straightforward to show that κT becomes a convex curve in the transformed plane
when restricted to values of θ such that θα ≤ θ ≤ θβ. Therefore, T∗ is a semi-bounded pseudo-trapezoid. By
(P1) in Section 2 (see also [17]), the minimal-cost path with respect to o between two points a, b ∈ T maps
to the line segment a∗ b∗ . So a and b are locally reachable if a∗ b∗ ⊆ T ∗ , i.e., a∗ and b∗ are visible from each
other (see Figure 11).
For a point p∗ ∈ (∂T)∗, let Vp∗ ⊆ T∗ be the set of points of T∗ visible from p∗, ℓp the line tangent to κ∗T from p∗ (if it exists), and ζp = ℓp ∩ κ∗T. Note that ℓp is well defined, because either p∗ ∈ κ∗T or the x-monotone convex curve κ∗T lies entirely to the left or to the right of p∗. The closure of ∂Vp∗ \ (∂T)∗ consists of a line segment gp = a∗b∗ ⊂ ℓp. If p ∉ κT, then ζp is one endpoint of gp and the other endpoint lies on α∗T or β∗T. In either case, for any edge e ∈ ∂T, if (e∗ \ {p∗}) ∩ Vp∗ ≠ ∅, then it is a connected arc and contains one of the endpoints of e∗, as claimed.
Figure 12: Illustration of properties of Cp in Case 2 of proof of Lemma 4.7.
Case 2: o is an edge. Without loss of generality, o lies on the line y = 0, the edge αT lies on the line x = xα ,
the edge βT lies on the line x = xβ , and xβ > xα ≥ 0. There is no equally convenient notion of the transformed
plane for edge feature o, but we are still able to use similar arguments to those given in Case 1.
In this case, for two points a, b ∈ T , the minimal-cost path with respect to o from a to b is the circular
arc with a and b as its endpoints and centered at the x-axis (see (P2) in Section 2). Therefore, a and b are
locally reachable if this circular arc does not cross κT .
Fix a point p = (xp, yp) ∈ ∂T. If p ∈ αT ∪ βT, then all points on the edge of T containing p are locally
reachable, and if p ∈ κT then no point on κT \ {p} is locally reachable from p. So we will focus on edges of T
that do not contain p.
Let Cp denote the one-parameter family of circles that pass through p and that are centered at the x-axis.
For any q ∈ T \ {p}, there is a unique circle Cq ∈ Cp that passes through q. We parameterize the circles in Cp
with the x-coordinate of its center, i.e., C(t) ∈ Cp for t ∈ (−∞, ∞) and is centered at (t, 0). Let C + (t) (resp.
C − (t)) be the circular arc of C(t) lying to the right (resp. left) of the line x = xp . See Figure 12(a). The
following properties of Cp are easily verified:
(a) For t < t′, C+(t) (resp. C−(t′)) lies in the interior of C(t′) (resp. C(t)); see Figure 12(a).
(b) If a circle C ∈ Cp intersects κT at two points, say, r− and r+, then there is another circle C′ ∈ Cp that is tangent to κT between r− and r+; see Figure 12(b).
(c) A circle in Cp intersects αT or βT in at most one point.
(d) There is at most one circle C ∈ Cp that is tangent to κT .
Properties (a) and (b) are straightforward; (c) follows from (a) and a continuity argument; (d) follows
from (a), (c), and the convexity of κT .
If there is no circle in Cp that is tangent to κT then for any point q ∈ T , the arc Cq [p, q] lies inside T , so
every point in T is locally reachable, and the lemma follows.
Next, assume there is a circle C0 ∈ Cp that is tangent to κT at a point rp . By (d), C0 is the only such
circle. There are three cases:
(i) If p ∉ αT, then points in int(C) ∩ αT are locally reachable from p by property (a).
(ii) Similarly, if p ∉ βT, then the points in int(C) ∩ βT are locally reachable, again by property (a).
(iii) If p ∈ αT (resp. p ∈ βT), then the points in κT[uT, rp] (resp. κT[rp, vT]) are locally reachable from p by properties (c) and (d).
Figure 13: Visualization of proof of Lemma 4.8.
Hence, in each case at most one connected portion of an edge e of T is locally reachable from p, and it
contains one endpoint of e.
Lemma 4.8. Graph G3 contains an s, t-path of cost at most (1 + O(ε))d∗ .
Proof. Once again, we deform the optimal path Π∗ into a path Π̃ of G3 as in Lemmas 4.1 and 4.4. Let Π
denote the current path that we have obtained by deforming Π∗ . Let T ∈ Ṽ be the first cell such that Π enters
int(T ) but int(T ) ∩ Π is not an arc of E3 . Let p ∈ Π be the first point (on ∂T ) at which Π enters in int(T ),
and let q be the next point on Π ∩ ∂T , i.e., int(Π[p, q]) ⊂ int(T ). If both p and q lie on the same edge e of T ,
we replace Π∗ [p, q] with the portion of e between p and q, denoted by Π̃T ; note that µ(Π̃T ) = π(p, q).
Now, suppose p ∈ βT and q ∈ κT. The other cases are similar. Points p and q are locally reachable from each other. By Lemmas 4.6 and 4.7, there exists a sample point p′ locally reachable from q on βT such that π(p, p′) ≤ εd/n. We have π(p′, q) ≤ π(p, q) + εd/n. Suppose there exists a point q′ ∈ S(p′) on κT locally reachable from p′ such that π(q, q′) ≤ εd/n. Let a be the minimal-cost path from p′ to q′. In this case, we replace Π∗[p, q] with Π̃T = pp′ ◦ a ◦ γ(q′, q). We have µ(Π̃T) ≤ π(p, q) + 4εd/n.
Finally, suppose there is no locally reachable q′ as described above. As in Section 3, let w̄∗ denote the first intersection of the well-behaved path γ(p′, q) with κT. Recall our algorithm adds sample points along several edgelets of length O(d) such that each pair of samples lies at cost εd/n apart. By Lemma 4.6, point q lies on one of these edgelets ξ.
By Lemma 3.3 and construction, either w̄∗ ∈ ξ and w̄∗ lies between consecutive sample points of ξ we denoted as ←q and →q, or w̄∗ ∉ ξ and exactly one of ←q or →q exists at an endpoint of ξ. By construction, each existing point of ←q and →q is in S(p′). Let q− ∈ {←q, →q} be the first sample point of ξ encountered as we walk along κT from w̄∗, past q, and to an endpoint of κT. We claim there exists at least one additional sample point of ξ other than q− encountered during this walk, and we denote q0 as the first of these sample points. Indeed, if q0 does not exist, then w̄∗ ∈ ξ and q lies between ←q and →q. At least one of them is locally reachable from p′ by Lemma 4.7, which contradicts the assumption that q is at least εd/n cost away from any sample point of ξ ∩ S(p′) locally reachable from p′. By a similar argument, we claim q does not lie between q0 and w̄∗.
Recall, our algorithm adds samples qi to S(p) spaced geometrically away from one of ←q and →q in the direction of q; point q0 is one of these samples. These samples also include one endpoint of ξ. Let qk, qk+1 be two consecutive sample points of S(p) such that q lies between them. By Lemma 4.7, at least one of qk and qk+1 is locally reachable from p′. Let q′ be this locally reachable point. Let a be the minimal-cost path from p′ to q′. As before, we replace Π∗[p, q] with Π̃T = pp′ ◦ a ◦ γ(q′, q). See Figure 13.
Let δ = π(q− , q)n/(εd). Value δ is an upper bound on the number of samples in ξ between q− and q.
We have ⌊(1 + ε)^k⌋ ≤ δ ≤ ⌊(1 + ε)^{k+1}⌋. In particular δ ≤ (1 + ε)^{k+1}, which implies δ − ⌊(1 + ε)^k⌋ ≤ εδ + 1. Similarly, ⌊(1 + ε)^{k+1}⌋ − δ ≤ εδ. By Lemma 3.2, π(q−, q) ≤ 11µ(p′, q). We have
π(q, q′) ≤ (εδ + 1) · (εd/n) ≤ (π(q−, q) · εn/(εd) + 1) · (εd/n) = επ(q−, q) + εd/n ≤ 11επ(p′, q) + εd/n.
We have π(p′, q′) ≤ π(p′, q) + π(q, q′) ≤ (1 + 11ε) · π(p′, q) + εd/n. Therefore, in all three cases we have
µ(Π̃T ) ≤ (1 + O(ε))π(p, q) + O(εd/n).
Summing over all steps in the deformation of Π∗ and using the fact d∗ ≤ d ≤ cd∗ for a constant c, we
obtain µ(Π̃) = (1 + O(ε))d∗ . As before, it is clear from the construction that Π̃ is a path in G3 .
We conclude with our main theorem.
Theorem 4.9. Let O be a set of polygonal obstacles in the plane with n vertices total, and let s, t be two
points outside O. Given a parameter ε ∈ (0, 1], there exists an O((n²/ε²) log(n/ε))-time approximation algorithm
for the minimal-cost path problem between s and t such that the algorithm returns an s, t-path of cost at
most (1 + ε)π(s, t).
5 Discussion
In this paper, we present the first polynomial-time approximation algorithm for the problem of computing
minimal-cost paths between two given points (when using the cost defined in (1)). One immediate open
problem is to improve the running time of our algorithm to be near-linear. A possible approach would be to
refine the notion of anchor points so it suffices to put only O(log n) additional points on each edge of the
refined Voronoi diagram.
Finally, there are other natural interesting open problems that we believe should be addressed. The first
is to determine if the problem at hand is NP-hard. When considering the complexity of such a problem, one
needs to consider both the algebraic complexity and the combinatorial complexity. In this case we suspect
that the algebraic complexity may be high because of the cost function we consider. However, we believe
that combinatorial complexity, defined analogously to the number of “edge sequences”, may be small. The
second natural interesting open problem calls for extending our algorithm to compute near-optimal paths
amid polyhedral obstacles in R3 .
References
[1] P. K. Agarwal and H. Wang. Approximation algorithms for curvature-constrained shortest paths. SIAM
J. Comput. 30(6):1739–1772, 2000.
[2] F. Aurenhammer, R. Klein, and D. Lee. Voronoi Diagrams and Delaunay Triangulations. World Scientific,
2013.
[3] J. F. Canny and J. H. Reif. New lower bound techniques for robot motion planning problems. Proc.
28th Annu. Symp. Found. Comput. Sci., pp. 49–60. 1987.
[4] D. Z. Chen, O. Daescu, and K. S. Klenk. On geometric path query problems. Int. J. Comput. Geometry
Appl. 11(6):617–645, 2001.
[5] D. Z. Chen and H. Wang. Computing shortest paths among curved obstacles in the plane. Proc. 29th
Annu. Symp. Comput. Geom., pp. 369–378. 2013.
17
[6] H. Choset, K. M. Lynch, S. Hutchinson, G. Kantor, W. Burgard, L. E. Kavraki, and S. Thrun. Principles
of Robot Motion: Theory, Algorithms, and Implementation. MIT Press, June 2005.
[7] M. B. Cohen, B. T. Fasy, G. L. Miller, A. Nayyeri, D. R. Sheehy, and A. Velingker. Approximating
nearest neighbor distances. Proc. 14th Symp. Algo. Data Struct., pp. 200–211. 2015.
[8] M. L. Fredman and R. E. Tarjan. Fibonacci heaps and their uses in improved network optimization
algorithms. J. ACM 34(3):596–615, 1987.
[9] M. R. Henzinger, P. N. Klein, S. Rao, and S. Subramanian. Faster shortest-path algorithms for planar
graphs. J. Comput. Syst. Sci. 55(1):3–23, 1997.
[10] J. Hershberger, S. Suri, and H. Yıldız. A near-optimal algorithm for shortest paths among curved
obstacles in the plane. Proc. 29th Annu. Symp. Comput. Geom., pp. 359–368. 2013.
[11] N. Lebeck, T. Mølhave, and P. K. Agarwal. Computing highly occluded paths on a terrain. Proc. 21st
Annu. ACM SIGSPATIAL Int. Conf. Adv. Geo. Info. Sys., pp. 14–23. 2013.
[12] J. S. B. Mitchell. Shortest paths and networks. Handbook of Discrete and Computational Geometry,
Second Edition., pp. 607–641. 2004.
[13] J. S. B. Mitchell, G. Rote, and G. J. Woeginger. Minimum-link paths among obstacles in the plane.
Algorithmica 8(5&6):431–459, 1992.
[14] C. Ó’Dúnlaing and C. Yap. A “retraction” method for planning the motion of a disc. J. Algo. 6(1):104–111,
1985.
[15] G. Song, S. Miller, and N. M. Amato. Customizing PRM roadmaps at query time. Proc. IEEE Int.
Conf. Robotics Automat., pp. 1500–1505. 2001.
[16] R. Wein, J. P. van den Berg, and D. Halperin. The visibility-voronoi complex and its applications.
Comput. Geom. 36(1):66–87, 2007.
[17] R. Wein, J. P. van den Berg, and D. Halperin. Planning high-quality paths and corridors amidst obstacles.
I. J. Robotics Res. 27(11-12):1213–1231, 2008.
| 8 |
Wireless Expanders
Shirel Attali∗
Merav Parter†
David Peleg‡
Shay Solomon§
Abstract
arXiv:1802.07177v1 [] 20 Feb 2018
This paper introduces an extended notion of expansion suitable for radio networks. A graph
G = (V, E) is said to be an (αw , βw )-wireless expander if for every subset S ⊆ V s.t. |S| ≤ αw · |V |,
there exists a subset S 0 ⊆ S s.t. there are at least βw · |S| vertices in V \S that are adjacent in G
to exactly one vertex in S 0 . The main question we ask is the following: to what extent are ordinary
expanders also good wireless expanders? We answer this question in a nearly tight manner. On
the positive side, we show that any (α, β)-expander with maximum degree ∆ and β ≥ 1/∆ is also an
(αw , βw )-wireless expander for βw = Ω(β/ log(2·min{∆/β, ∆·β})). Thus the wireless expansion can be
smaller than the ordinary expansion by at most a factor that is logarithmic in min{∆/β, ∆ · β}, which,
in turn, depends on the average degree rather than the maximum degree of the graph. In particular, for
low arboricity graphs (such as planar graphs), the wireless expansion matches the ordinary expansion
up to a constant factor. We complement this positive result by presenting an explicit construction of
a “bad” (α, β)-expander for which the wireless expansion is βw = O(β/ log(2 · min{∆/β, ∆ · β})).
We also analyze the theoretical properties of wireless expanders and their connection to unique
neighbor expanders, and then demonstrate their applicability: Our results (both the positive and the
negative) yield improved bounds for the spokesmen election problem that was introduced in the seminal
paper of Chlamtac and Weinstein [7] to devise efficient broadcasting for multihop radio networks. Our
negative result yields a significantly simpler proof than that from the seminal paper of Kushilevitz and
Mansour [11] for a lower bound on the broadcast time in radio networks.
∗ The Weizmann Institute of Science. Email: [email protected].
† The Weizmann Institute of Science. Email: [email protected].
‡ The Weizmann Institute of Science. Email: [email protected].
§ IBM Research. Email: [email protected].
1 Introduction
1.1 Background and motivation
An expander is a sparse graph that has strong connectivity properties [10]. There are several definitions
for expanders, with natural connections between them. We focus on the following combinatorial definition.
Expanders: Let G = (V, E) be an undirected graph. For a set S ⊂ V , let Γ(S) denote the set of
neighbors of the vertices of S, and define Γ− (S) = Γ(S) \ S. We say that G is an (α, β) expander, for
positive parameters α and β, if |Γ− (S)| ≥ β · |S| for every S ⊆ V s.t. |S| ≤ α · |V |.
One of the main advantages of expanders is that they enable fast and effective dissemination of
information from a small group of vertices to the outside world. This property becomes less immediate
when we consider using the expansion property in the context of wireless communication networks. Such
networks can be represented by a specific kind of graphs, called radio networks [8]. A radio network is an
undirected (multihop) network of processors that communicate in synchronous rounds in the following
manner. In each step, a processor can either transmit or keep silent. A processor receives a message in
a given step if and only if it keeps silent and precisely one of its neighbors transmits in this step. If none
of its neighbors transmits, it hears nothing. If more than one neighbor (including itself) transmits in a
given step, then none of the messages is received. In this case we say that a collision occurred. It is
assumed that the effect at processor u of more than one of its neighbors transmitting is the same as of
no neighbor transmitting, i.e., a node cannot distinguish a collision from silence.
The usual definition of expanders is not enough to ensure fast message propagation in radio networks.
Consider, for example, a radio network C + consisting of a complete graph C with one more vertex s0 ,
the source, connected to two vertices x and y from C. Obviously this is a good expander, but in this
case, after the first step of broadcast, if all the vertices that received the message (i.e., the three vertices
s0 , x and y) transmit it simultaneously to all their neighbors, then no one will hear it. This motivates
considering another definition of expanders, namely, unique neighbor expanders (or unique expanders, in
short) [2].
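To make the reception rule and the C + example concrete, here is a small Python sketch (ours, not part of the paper) of a single round of the radio model; the complete graph C is taken on four vertices purely for illustration.

```python
from typing import Dict, List, Set

def radio_round(adj: Dict[int, List[int]], transmitters: Set[int]) -> Set[int]:
    """One synchronous round of the radio model: a vertex receives the message iff
    it is silent and exactly one of its neighbors transmits; collisions and silence
    are indistinguishable and deliver nothing."""
    receivers = set()
    for v, neighbors in adj.items():
        if v in transmitters:
            continue  # a transmitting vertex does not receive in the same round
        if sum(1 for u in neighbors if u in transmitters) == 1:
            receivers.add(v)
    return receivers

# C+ : complete graph C on {1,2,3,4} plus a source s0 = 0 attached to x = 1, y = 2.
adj = {0: [1, 2], 1: [0, 2, 3, 4], 2: [0, 1, 3, 4], 3: [1, 2, 4], 4: [1, 2, 3]}
print(radio_round(adj, {0, 1, 2}))  # set(): if s0, x and y all transmit, nobody hears
print(radio_round(adj, {1}))        # {0, 2, 3, 4}: a single transmitter is heard
```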
Unique neighbor expanders: Let G = (V, E) be an undirected graph. We say that G is an (αu , βu )-unique neighbor expander if for every S ⊆ V s.t. |S| ≤ αu · |V |, there are at least βu · |S| vertices in V \S
that are adjacent in G to exactly one vertex in S.
Clearly, if G is a unique expander with good parameters, then broadcasting on it can be fast (again,
by requiring all the vertices that received the message to send it to all their neighbors). Unfortunately,
it seems that unique neighbor expansion might be hard to come by. For example, while the graph C +
described above is a good (ordinary) expander, it is clearly not a good unique expander, as can be realized
by considering the set S = {x, y, s0 }. (In general, ordinary expanders might have rather small unique
neighbor expansion, as will be shown soon.) In addition, explicit constructions of unique expanders are
rather scarce and known only for a limited set of parameters [2, 6].
The key observation triggering the current paper is that the property required from unique expanders
might be stronger than necessary. This is because there is no reason to require all the vertices that received
the message to send it. Rather, it may be enough to pick a subset X of this set, that has a large set of
unique neighbors, and require only the vertices of X to transmit. This may be an attractive alternative
since such a property may be easier to guarantee than unique neighbor expansion, and therefore may be
achievable with better parameters α and β. (Note, e.g., that this property holds for our example graph
C + .) This observation thus motivates our definition for a new variant of expanders.
Wireless expanders: Let G = (V, E) be an undirected graph. We say that G is an (αw , βw )-wireless
expander if for every S ⊆ V s.t. |S| ≤ αw · |V |, there exists a subset S 0 ⊆ S s.t. there are at least βw · |S|
vertices in V \S that are adjacent to exactly one vertex in S 0 .
In this paper we are interested in investigating the properties of wireless expanders and the relationships between these graphs and the classes of ordinary expanders and unique neighbor expanders. We
ask the following questions: by how much does the relaxed definition of wireless expanders (compared to
unique neighbor expanders) help us in providing expanders with better parameters that are suitable for
radio network communication? More specifically, given an (α, β)-expander, can we prove that it is also
an (αw , βw )-wireless expander with αw = f (α, β) and βw = g(α, β), for some functions f and g?
1.2 Our Contribution
We present several results relating the parameters of the different notions of expanders. We begin by
investigating the relationships between ordinary expanders and the more strict notion of unique neighbor
expanders.
• Let G = (V, E) be a d-regular graph that is an (αu , βu )-unique neighbor expander, and let λ = λ2
denote the second largest eigenvalue of its adjacency matrix, given by auv = 1 if (u, v) ∈ E and auv =
0 otherwise. Then G is an (α, β)-expander with α = αu and β ≥ (1 − 1/d) · βu + (d − λ)/d · (1 − αu ).
• Suppose G = (V, E) is an (α, β)-expander with maximum degree ∆. Then it is also an (αu , βu )-unique
(α, β) bipartite expander whose unique expansion is βu ≤ 2β − ∆.
We then turn to consider our new relaxed notion of wireless expander. Our key contribution is in providing
nearly tight characterization for the relation between ordinary expanders and wireless expanders. On the
positive side, using the probabilistic method, we show:
Theorem 1.1 (Positive Result) For every ∆ ≥ 1, β ≥ 1/∆, every (α, β)-expander G with maximum degree ∆ is also an (αw , βw )-wireless expander with αw ≥ α and
βw = Ω(β/ log(2 · min{∆/β, ∆ · β})).
Our probabilistic argument has some similarity to the known decay method [5], which is a standard
technique for coping with collisions in radio networks. Roughly speaking, in the decay protocol of [5],
time is divided into phases of log n rounds and in the ith round of each phase, each node that holds a
message transmits it with probability 2−i . Hence, each node that has a neighbor that holds a message,
receives it within O(log n) phases. We use the idea of the decay method to show the existence of a subset
S 0 ⊆ S with a large unique neighborhood in Γ(S).
An important feature of our argument is that it bounds the deviation of the wireless expansion from the
ordinary expansion as a function of the average-degree rather than the maximum degree. As β gets closer
to ∆ or to 1/∆, this finer dependence leads to significantly better results than what could be achieved using
the standard decay argument; our argument is also arguably simpler than the standard decay argument.
As a technical note, we use the probabilistic method to prove a lower bound of Ω(β/ log(2 · ∆/β)) on
βw , and then we push it up to the bound of Theorem 1.1 via a separate deterministic argument. As
a corollary, for the important family of low arboricity graphs, which includes planar graphs and more
generally graphs excluding a fixed minor, the wireless expansion matches the ordinary expansion up to
a constant factor. (Indeed, the arboricity is at least min{∆/β, ∆ · β}; see Section 2.1 for the definition
of arboricity.) In particular, this shows that radio broadcast in low arboricity graphs can be done much
more efficiently than what was previously known!
Beyond the probabilistic argument, we also provide explicit deterministic arguments that obtain better
parameters (by a constant factor); these are deferred to the appendix.
We also show that asymptotically, no tighter connection can be established:
Theorem 1.2 (Negative Result) There exists an (α, β)-expander with maximum degree ∆, whose
wireless expansion is βw = O(β/ log(2 · min{∆/β, ∆ · β})).
The explicit construction of this bad graph example is perhaps the most technically challenging result of
this paper. Our explicit construction has interesting connections to related constructions that have been
studied in the context of broadcast in radio networks [3, 11]. For instance, our “core graph” from Section
4.3.1 is reminiscent of a fundamental construction from [3]. However, while the construction of [3] is
implicit (using the probabilistic method), our construction is explicit and can be viewed in some sense
as the deterministic counterpart of [3]; moreover, our construction is arguably much simpler than that
of [3]. We view this explicit construction as the technical highlight of our work, and anticipate that it
will find further applications.
An additional application of both our positive and negative results is to the Spokesman Election
problem introduced in the seminal paper of [7], where given a bipartite graph G = (S, N, E), the goal
is to compute a subset S 0 ⊆ S with the maximum number of unique neighbors Γ1 (S 0 ) in N . More
specifically, we provide tight bounds for this problem, which apply to any expansion and average degree
parameters, whereas the previous result of [7] applies only to one specific (very large) expansion parameter
and only with respect to the maximum degree (rather than the average degree, which is a finer measure).
In Section 4.2.1, we provide a detailed comparison to the bounds obtained by [7].
Finally, another application of our negative result, and of our explicit core graph in particular, is
in the context of broadcast lower bounds in radio networks. In their seminal paper, Kushilevitz and
Mansour [11] proved that there exist networks in which the expected time to broadcast a message is
Ω(D log(n/D)), where D is the network diameter and n is the number of vertices, and this lower bound
is tight for any D = Ω(log n) due to a highly nontrivial upper bound by Czumaj and Rytter [9]. Since
the upper bound of [9] holds with high probability, it implies that the lower bound Ω(D log(n/D)) of [11]
also holds with high probability. Newport [12] presented an interesting alternative proof to the one by
Kushilevitz and Mansour. Although short and elegant, Newport’s proof relies on two fundamental results
in this area, due to Alon et al. [1] and Alon et al. [3] – Lemma 3.1 in [12] – whose proof is intricate.
Also, as with Kushilevitz and Mansour’s proof, Newport only proves an expected lower bound on the
broadcast time, with the understanding that a high probability bound follows from [9]. By unwinding
the ingredients of Newport’s proof, the resulting proof (especially for a high probability bound on the
broadcast time) is long and intricate. Using the properties of our explicit core graph construction, we
derive a simple and self-contained proof for the same lower bound, arguably much simpler than that
of [11, 12]. An important advantage of our proof over [11, 12] is that it gives a high probability bound on
the broadcast time directly, i.e., without having to take a detour through the upper bound of [9].
Summarizing, besides the mathematical appeal of wireless expanders and their connections to well-studied types of expanders, we demonstrate that they find natural applications in the well-studied area
of radio networks. We anticipate that a further study of wireless expanders will reveal additional applications, also outside the scope of radio networks, and we thus believe it is of fundamental importance.
1.3 Organization
In Section 2 we introduce the notation and definitions used throughout. We investigate the relations
between ordinary expanders and unique neighbor expanders in Section 3. Section 4 is devoted to our
new notion of wireless expanders, where we present nearly tight characterization for the relation between
ordinary expanders and wireless expanders. We start (Section 4.1) with describing our basic framework;
the positive and negative results are presented in Section 4.2 and Section 4.3, respectively. (As mentioned,
some positive results are deferred to the appendix. These improve on the parameters provided in Section
4.2 by constant factors, using explicit deterministic arguments.) Our results for the Spokesman Election
problem [7] are given in Section 4.2.1. Finally, Section 5 is devoted to our alternative lower bound proof
of Ω(D log(n/D)) on the broadcast time in radio networks.
2 Preliminaries
2.1 Graph Notation
For an undirected graph G = (V, E), a vertex v ∈ V and a subset S ⊆ V , denote the set of v’s neighbors
in G by Γ(v) = {u | (u, v) ∈ E}, and let Γ(S) = ∪v∈S Γ(v) be the neighborhood of a vertex set S in
G (including neighbors that belong to S itself), and Γ− (S) = Γ(S) \ S be the set of neighbors external
to S. Also define Γ(v, S) = Γ(v) ∩ S as the neighbors of v in the subset S. The expansion of S is the
ratio |Γ− (S)|/|S|. The unique-neighborhood of S, denoted by Γ1 (S), is the set of vertices outside S that
have a unique neighbor from S. The unique-neighbor expansion of S is the ratio |Γ1 (S)|/|S|. Let S 0
be an arbitrary subset of S. The S-excluding neighborhood of S 0 , denoted by ΓS (S 0 ), is the set of all
vertices outside S that have at least one neighbor from S 0 . Similarly, the S-excluding unique-neighborhood
of S 0 , denoted by Γ1S (S 0 ), is the set of all vertices outside S that have a unique neighbor from S 0 . In
particular, Γ1 (S) = Γ1S (S). The wireless expansion of S is the maximum ratio |Γ1S (S 0 )|/|S| over all
subsets S 0 of S. For two sets S, T ⊂ V , let e(S, T ) be the set of edges connecting S and T . For vertex
v ∈ V , let deg(v) = degG (v) denote the degree of v in G, i.e., the number of v’s neighbors, and let
∆(G) = max{deg(v) | v ∈ V } be the maximum degree over all the vertices in G. For a set S ⊂ V and
a vertex v ∈ V , let deg(v, S) = |Γ(v) ∩ S| be the number of v’s neighbors that are in S. For two vertices
v, u ∈ V , let d(u, v) be the distance between u and v (i.e., the length of the shortest path connecting
them), and let D = D(G) = max{d(u, v) | u, v ∈ V } be the diameter of the graph, i.e. the maximum
distance between any two vertices.
We use the combinatorial definition for (vertex) expansion, which requires that every (not too large)
set of vertices of the graph has a relatively large set of neighbors. Specifically, an n-vertex graph G is
called an (α, β) vertex expander for positive parameters α and β, if every subset S ⊆ V s.t. |S| ≤ αn has
many external neighbors, namely, |Γ− (S)| ≥ β · |S|. The (ordinary) expansion β(G) of G is defined as the
minimum expansion over all vertex sets S ⊆ V of size |S| ≤ αn, namely, β(G) = min{|Γ− (S)|/|S| | S ⊆
V, |S| ≤ αn}. A similar definition appears in the literature for bipartite graphs, namely, a bipartite graph
G = (L, R, E) with sides L and R, such that every edge from E ⊂ L × R connects one vertex of L and
one vertex of R is called an (α, β) bipartite vertex expander if every subset S ⊂ L s.t. |S| ≤ α|L| has
at least β|S| neighbors in R. It is usually assumed that the two sides L and R of the bipartition are of
(roughly) the same size.
A graph G = (V, E) has arboricity η = η(G) if
η = max{|E(U )|/(|U | − 1) | U ⊆ V },
where E(U ) = {(u, v) ∈ E | u, v ∈ U }. Thus the arboricity is the same (up to a factor of 2) as the
maximum average degree over all induced subgraphs of G. It is easy to see that for any (α, β)-expander
with maximum degree ∆, the arboricity is at least min{∆/β, ∆ · β}.
2.2 Unique Neighbor and Wireless Expanders
Let us now define formally the notions of unique and wireless expanders. Let G = (V, E) be an n-vertex
undirected graph. We say that G is an (αu , βu )-unique expander [2] if for every S ⊆ V s.t. |S| ≤ αu n, there
are at least βu · |S| vertices in V \S that are adjacent to exactly one vertex in S, namely, |Γ1 (S)| ≥ βu · |S|.
The unique-neighbor expansion βu (G) of G is defined as the minimum unique-neighbor expansion over
all vertex sets S ⊆ V with |S| ≤ αu n, namely,
βu (G) = min{|Γ1 (S)|/|S| | S ⊆ V, |S| ≤ αu n}.
We say that G is an (αw , βw )-wireless expander if for every S ⊆ V s.t. |S| ≤ αw n, there exists a
subset S 0 ⊆ S s.t. there are at least βw · |S| vertices in V \S that are adjacent in G to exactly one vertex
in S 0 , i.e., |Γ1S (S 0 )| ≥ βw · |S|. The wireless expansion βw (G) of G is defined as the minimum wireless
expansion over all sets S ⊆ V with |S| ≤ αw n, namely,
βw (G) = min{max{|Γ1S (S 0 )|/|S| | S 0 ⊆ S} | S ⊆ V, |S| ≤ αw n}.
In our arguments, we usually fix α and study the relations between the β-values for different notions of
expanders. The following connection is easy to verify.
Observation 2.1 If α = αu = αw , then β(G) ≥ βw (G) ≥ βu (G).
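To make the three quantities concrete, the following brute-force Python sketch (ours; exponential time, so only suitable for toy graphs) computes β(G), βw(G) and βu(G) directly from the definitions above, so that Observation 2.1 can be checked on small examples such as the graph C + from the introduction.

```python
from itertools import chain, combinations

def subsets(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))

def expansions(adj, alpha):
    """Brute-force beta(G), beta_w(G), beta_u(G) over all nonempty S with
    |S| <= alpha*|V|, straight from the definitions.  Exponential time: toy use only."""
    V = set(adj)
    beta = beta_w = beta_u = float("inf")
    for S in map(set, subsets(V)):
        if len(S) > alpha * len(V):
            continue
        ext = set().union(*(adj[v] for v in S)) - S                    # Gamma^-(S)
        uniq = {u for u in ext if sum(v in S for v in adj[u]) == 1}    # Gamma^1(S)
        best = max(len({u for u in ext if sum(v in Sp for v in adj[u]) == 1})
                   for Sp in map(set, subsets(S)))                     # best S' <= S
        beta, beta_w, beta_u = (min(beta, len(ext) / len(S)),
                                min(beta_w, best / len(S)),
                                min(beta_u, len(uniq) / len(S)))
    return beta, beta_w, beta_u    # Observation 2.1: beta >= beta_w >= beta_u

# The graph C+ from the introduction: complete graph on {1,2,3,4} plus source 0.
adj = {0: {1, 2}, 1: {0, 2, 3, 4}, 2: {0, 1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}}
print(expansions(adj, alpha=0.6))  # beta = beta_w = 2/3, while beta_u = 0
```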
3 Relations between β and βu
Let G = (V, E) be a d-regular undirected graph and let A = AG = (auv )u,v∈V be its adjacency matrix
given by auv = 1 if (u, v) ∈ E and auv = 0 otherwise. Since G is d-regular, the largest eigenvalue of A is
d, corresponding to the all-1 eigenvector (as 1/d · A is a stochastic matrix). Let λ = λ2 denote the second
largest eigenvalue of G.
Lemma 3.1 If a d-regular graph G = (V, E) is an (αu , βu )-unique expander, then it is also an (α, β)-expander with α = αu and β ≥ (1 − 1/d) · βu + (d − λ) · (1 − αu )/d.
Proof: Alon and Spencer [4] prove that every partition of the set of vertices V into two disjoint subsets
A and B satisfies |e(A, B)| ≥ (d − λ) · |A| · |B|/|V |. In our case (i.e. A = S, B = V \ S = S̄, and
|S| ≤ αu · |V |) we get that
|e(S, S̄)| ≥ (d − λ) · |S| · |S̄|/|V | ≥ (d − λ) · |S| · (|V | − αu · |V |)/|V | = (d − λ) · |S| · (1 − αu ).
Moreover, by the unique-neighbor expansion property, there exists a set U of at least βu · |S| vertices in Γ− (S) that
have a unique neighbor in S. From uniqueness, we have |e(S, U )| = |U | ≥ βu |S|. Thus, there are at least
(d − λ) · |S| · (1 − αu ) − |U | edges in e(S, S̄) that are not connected to the vertices in U (i.e., they lie in e(S, S̄ \ U )).
Now, because G is d-regular, we get that there exist at least |U | + ((d − λ) · |S| · (1 − αu ) − |U |)/d vertices
in Γ− (S). Hence, we get
|Γ− (S)| ≥ |U | + ((d − λ) · |S| · (1 − αu ) − |U |)/d
= (1 − 1/d) · |U | + ((d − λ) · (1 − αu )/d) · |S|
≥ (1 − 1/d) · βu |S| + ((d − λ) · (1 − αu )/d) · |S|
= ((1 − 1/d) · βu + (d − λ) · (1 − αu )/d) · |S|,
thus G is an (α, β)-expander with α ≥ αu and β ≥ (1 − 1/d) · βu + (d − λ) · (1 − αu )/d.
It is known (and easy to verify) that ordinary expanders whose expansion is close to the (maximum)
degree in the graph are also good unique expanders, or formally:
Lemma 3.2 Suppose G = (V, E) is an (α, β)-expander with maximum degree ∆. Then it is also a unique
(αu , βu )-expander, with αu = α and βu ≥ 2β − ∆.
Remark. Substituting β = (1 − ε)∆ (for ε ≤ 1/2), we obtain βu ≥ (1 − 2ε)∆.
The lower bound 2β − ∆ on the unique-neighbor expansion βu provided by Lemma 3.2 is meaningful
only when β is larger than ∆/2. The following example shows that this lower bound 2β − ∆ is tight.
Lemma 3.3 For any ∆ and β such that ∆/2 ≤ β ≤ ∆, there is an (α, β) bipartite expander Gbad =
(S, N, E) with maximum degree ∆ whose unique expansion is βu ≤ 2β − ∆.
Proof: Construct the graph Gbad as follows. Let S = {v1 , . . . , vs }, with s = |S|, and suppose that
each vertex vi ∈ S has exactly ∆ neighbors, all of which are in N . (For technical convenience, we define
v0 = vs and vs+1 = v1 ; that is, the vertices v1 and vs are no different from the other vertices: they should
not be viewed as “endpoints”, but rather as part of an implicit “cycle”.) Moreover, for each i = 1, . . . , s,
the vertices vi and vi+1 have exactly ∆ − β common neighbors; that is, |Γ(vi ) ∩ Γ(vi+1 )| = ∆ − β. More
concretely, writing Γ(vi ) = {v_i^1 , . . . , v_i^∆ }, we have that
Γ(vi ) ∩ Γ(vi+1 ) = {v_i^{β+1} , . . . , v_i^∆ } = {v_{i+1}^1 , . . . , v_{i+1}^{∆−β} }.
In other words, the “last” ∆ − β neighbors v_i^{β+1} , . . . , v_i^∆ of vi are the “first” ∆ − β neighbors v_{i+1}^1 , . . . , v_{i+1}^{∆−β}
of vi+1 , respectively. (See Figure 1 for an illustration.)
Figure 1: An illustration of a worst-case scenario for the unique-neighbor expansion.
This means that for each i = 1, . . . , s, the first (resp., last) ∆ − β neighbors of vi are also neighbors of
vi−1 (resp, vi+1 ). The remaining ∆ − 2(∆ − β) = 2β − ∆ neighbors of vi , however, are uniquely covered by
vi . It follows that the number of vertices in the neighborhood of S that are uniquely covered by vertices
from S is equal to s(2β − ∆). Consequently, the unique neighbor expansion βu is 2β − ∆, as claimed.
Noting that the ordinary expansion is β completes the proof of the lemma.
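The construction is easy to reproduce in code. The following Python sketch (ours) builds Gbad for illustrative values of s, ∆ and β and reports the unique-cover ratios for two choices of the transmitting set: all of S, and every second vertex of S (the choice analyzed in Remark (1) below).

```python
def build_G_bad(s, Delta, beta):
    """Bipartite graph from Lemma 3.3: S = {0,...,s-1}; consecutive vertices
    (cyclically) share their last/first Delta-beta neighbors.  Assumes
    Delta/2 <= beta <= Delta.  Returns the neighbor list of each S-vertex."""
    assert Delta // 2 <= beta <= Delta
    nbrs = {}
    for i in range(s):
        row = []
        for j in range(Delta):
            if j < beta:
                row.append((i, j))                      # neighbor owned by v_i
            else:
                row.append(((i + 1) % s, j - beta))     # shared with v_{i+1}
        nbrs[i] = row
    return nbrs

def unique_cover(nbrs, chosen):
    """Vertices of N with exactly one neighbor in the chosen subset of S."""
    count = {}
    for i in chosen:
        for u in nbrs[i]:
            count[u] = count.get(u, 0) + 1
    return {u for u, c in count.items() if c == 1}

s, Delta, beta = 6, 10, 7                               # illustrative values only
g = build_G_bad(s, Delta, beta)
print(len(unique_cover(g, range(s))) / s)        # 2*beta - Delta = 4 per vertex of S
print(len(unique_cover(g, range(0, s, 2))) / s)  # every second vertex: Delta/2 = 5
```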
Remarks. (1) The meaning of Lemma 3.3 is that a graph with high (ordinary) expansion may have
unique neighbor expansion of zero. For example, in the graph Gbad described in the proof of Lemma 3.3,
the unique-neighbor expansion is 2β − ∆, but the wireless expansion is at least max{2β − ∆, ∆/2}. To
see that, let S 0 be a subset of S and suppose S 0 = S1 ∪ S2 ∪ . . . ∪ Sk such that each Si is a sequence
of consecutive vertices, i.e., using the previous notations, for Si of size l, Si = {vj , .., vj+l } for some
index 1 ≤ j ≤ s. Suppose also that between every two sets Si and Sj there is at least one vertex
that is not in S 0 (in other words, we cannot expand Si to a longer sequence in S 0 ). Therefore, to
compute βw , it is enough to compute the expansion parameter for each Si = {vj , . . . , vj+l }. Consider
two options for choosing the set S 00 ⊂ Si . The first choice is to take S 00 = Si . Then we get an expansion
of f (l) = (l∆ − 2(l − 1)(∆ − β))/l = ((2 − l)∆ + 2(l − 1)β)/l. The second choice is to take into
S 00 every second vertex in the sequence of Si . Then we get an expansion of g(l) = l∆/(2l) if l is
even, and g(l) = (l + 1)∆/(2l) if l is odd. (In the case where S 0 = S we get in the first choice an
expansion of f (l) = l(2β − ∆)/l = 2β − ∆ and in the second an expansion of g(l) = (l − 1)∆/(2l)).
Thus, βw ≥ min{max{g(l), f (l)} | l > 0}. As f (l) and g(l) are both decreasing functions, we get
that βw ≥ max{lim_{l→∞} g(l), lim_{l→∞} f (l)} = max{2β − ∆, ∆/2}. This calculation also shows that if
β = ∆/2, then the unique-neighbor expansion becomes 0, but the wireless expansion becomes ∆/2.
(2) Although the bipartite graph used in the proof of Lemma 3.3 is an ordinary bipartite expander
(according to the definition given in Section 2.1), note that the sizes of the two sides S and N differ by
a factor of β. Also, it does not provide an ordinary non-bipartite expander, because the expansion is
achieved only on one side, from S towards N . Nevertheless, one can plug this “bad” bipartite graph on
top of an ordinary (α, β)-expander with a possibly good unique-neighbor expansion, so that the graph
resulting from this tweak is an ordinary (α, β)-expander with a unique-neighbor expansion bounded by
2β − ∆. Notice, however, that the maximum degree in the resulting graph, denoted by ∆0 , may be as
large as the sum of the maximum degrees of the “bad” bipartite graph and the (α, β)-expander that we
started from. For example, if ∆0 = 2∆, then the unique-neighbor expansion of the resulting graph is
bounded by 2β − ∆ = 2β − ∆0 /2. Since we apply a similar tweak in Section 4.3 (in the context of wireless
expansion rather than unique expansion), we omit the exact details of this rather simple tweak from the
extended abstract.
4 Bounds on Wireless Expansion
4.1 Our Framework
Consider an arbitrary (ordinary) (α, β)-expander G. As shown in Section 3, the unique-neighbor expansion βu provided by G may be zero even if the ordinary expansion β is high. In what follows we
demonstrate that the wireless expansion βw (G) of G cannot be much lower than its ordinary expansion
β(G). Moreover, we prove asymptotically tight bounds on the ratio β(G)/βw (G). This yields a strong
separation between the unique-neighbor expansion and the wireless expansion, which provides a natural motivation for studying wireless expanders, particularly in applications where we are given a fixed
expander network (that cannot be changed).
First let us observe that by Obs. 2.1, Lemma 3.2 yields the following bound on βw .
Lemma 4.1 Suppose G = (V, E) is an (α, β)-expander with maximum degree ∆. Then it is also a
wireless (αw , βw )-expander with αw = α and βw ≥ 2β − ∆.
Throughout what follows, we simplify the discussion by focusing attention to an arbitrary bipartite
graph GS = (S, N, ES ) with sides S and N , such that |N | ≥ β · |S|. We assume that no vertex of GS is
isolated, i.e., all vertex degrees are at least 1.
Note that this bipartition can be thought of as representing all edges in the original graph G that
connect an arbitrary vertex set S with its neighborhood N = Γ− (S). While in G there might be edges
internal to S and/or N , ignoring these edges has no effect whatsoever on the expansion bounds.
Our goal is to show the existence of a subset S 0 of S in the graph GS , whose S-excluding unique-neighborhood Γ1S (S 0 ) is not much smaller than the entire neighborhood N of S. Of course, this would
imply that the wireless expansion of an arbitrary set S in G (of any size) is close to its ordinary expansion,
yielding the required result.
4.2 Positive Results: Ordinary Expanders are Good Wireless Expanders
Let δS (resp., δN ) be the average degree of the set S (resp., N ) in the graph GS . That is,
δS = Σu∈S deg(u, N )/|S| and δN = Σu′∈N deg(u′, S)/|N |. Clearly, δN , δS ≥ 1. In this section, we show that
βw can be bounded from below as a function of min{δS , δN }.
We begin by considering an (α, β)-expander G for β ≥ 1. We now show:
Lemma 4.2 For every β, ∆ ≥ 1, there exists a subset S ∗ ⊆ S, satisfying that
|Γ1S (S ∗ )| = Ω(|N |/ log 2δN ) = Ω(β/ log 2δN ) · |S|. Hence, βw = Ω(β/ log 2(∆/β)).
Proof: Since β ≥ 1, we have |S| ≤ |N |, 1 ≤ δN ≤ δS and δN ≤ ∆/β. The proof relies on the
probabilistic method. First, consider the set N 0 of all vertices from N with degree at most 2δN . Note
that |N 0 | ≥ |N |/2 and that all vertices of N 0 have positive degree. We now divide the subset N 0 into
k = blog 2δN c subsets depending on their degree in S, where the ith subset Ni consists of all vertices
u ∈ N 0 with deg(u, S) ∈ [2i , 2i+1 ). Let Nj be the largest subset among these k subsets. We have that
|Nj | ≥ |N |/k = Ω(|N |/ log 2δN ) = Ω(|N |/ log 2(∆/β)). We next show that there exists a subset S ∗ ⊆ S
such that Γ1S (S ∗ ) contains a constant fraction of the vertices of Nj .
Consider a random subset S 0 ⊆ S obtained by sampling each vertex u ∈ S independently with
probability 1/2j . For every vertex u ∈ Nj , let X(u) ∈ {0, 1} be the indicator random variable that takes
value 1 if u has exactly one neighbor in S 0 . As deg(u, S) ∈ [2j , 2j+1 ), we have that
E(X(u)) = P(X(u) = 1) = (deg(u, S)/2^j ) · (1 − 1/2^j )^{deg(u,S)−1} ≥ (1 − 1/2^j )^{2^{j+1}−1} ≥ e^{−3}.
Hence, Σu∈Nj E(X(u)) = Ω(|Nj |) = Ω(β|S|/ log 2(∆/β)). We get that the expected number of vertices
in N that are uniquely covered by a random subset S 0 is Ω(β|S|/ log 2(∆/β)). Hence, there exists a subset
S ∗ ⊆ S with |Γ1S (S ∗ )| = Ω(β/ log 2(∆/β)) · |S|. The lemma follows.
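The probabilistic argument translates directly into a simple randomized procedure. The following Python sketch (ours; a Monte Carlo mirror of the proof rather than an exact algorithm) buckets the low-degree vertices of N by their degree in S, picks the largest bucket Nj, and samples S with probability 2^{-j}, keeping the best of a few independent trials.

```python
import random
from collections import defaultdict

def decay_style_subset(nbrs_of_N, S, trials=50, seed=0):
    """Heuristic mirroring the proof of Lemma 4.2 (a sketch, not the paper's code).
    nbrs_of_N maps each vertex u in N to its set of neighbors in S.
    Returns a subset S' of S together with its unique neighborhood in N."""
    rng = random.Random(seed)
    deg = {u: len(nb) for u, nb in nbrs_of_N.items() if nb}
    avg = sum(deg.values()) / len(deg)
    low = [u for u, d in deg.items() if d <= 2 * avg]        # the set N'
    buckets = defaultdict(list)
    for u in low:
        buckets[deg[u].bit_length() - 1].append(u)           # deg in [2^j, 2^{j+1})
    j = max(buckets, key=lambda k: len(buckets[k]))          # largest bucket N_j
    best_S, best_unique = set(), set()
    for _ in range(trials):
        Sp = {v for v in S if rng.random() < 0.5 ** j}       # keep v w.p. 2^{-j}
        unique = {u for u in nbrs_of_N if len(nbrs_of_N[u] & Sp) == 1}
        if len(unique) > len(best_unique):
            best_S, best_unique = Sp, unique
    return best_S, best_unique
```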
In Appendix A, we provide a sequence of deterministic arguments that obtain better bounds for βw (by
constant factors) compared to the probabilistic argument shown above.
We now turn to consider the case β < 1. In this case the bound on the wireless expansion depends
on δS , namely, on the average degree in the larger set S. We show:
Lemma 4.3 For every ∆ ≥ 1 and β ∈ [1/∆, 1), there exists a subset S ∗ ⊆ S, satisfying that |Γ1S (S ∗ )| =
Ω(β/ log δS ) · |S|. Since δS ≤ β · ∆, we have βw = Ω(β/ log 2(∆ · β)).
Proof: Let S 0 ⊆ S be the set of all vertices u ∈ S with deg(u, N ) ≤ 2δS , and note that |S 0 | ≥ |S|/2. Let
N 0 = Γ− (S 0 ) be the set of neighbors of S 0 in N . By the expansion of G, we have |N 0 | ≥ β · |S 0 | ≥ β|S|/2.
We now claim that there exists a subset S 00 ⊆ S 0 satisfying Γ− (S 00 ) = N 0 and |S 00 | ≤ |N 0 |. To see this,
initially set S 00 to be empty. Iterate over the vertices of S 0 and add a vertex u ∈ S 0 to S 00 only if it
covers a new vertex of N 0 (i.e., it has a new neighbor in N 0 that has not been covered before). Then
|S 00 | ≤ |N 0 | and hence in the induced bipartite graph G0 with sides S 00 and N 0 , the expansion measure
β’, with β 0 = |N 0 |/|S 00 |, is at least 1. The average degree of a vertex u ∈ N 0 in the graph G0 is bounded
by |E(G0 )|/|N 0 | ≤ 2δS · |S 00 |/|N 0 | ≤ 2δS . Employing the argument of Lemma 4.2 on the bipartite graph
G0 , we get that there exists a subset S ∗ ⊆ S 00 satisfying |Γ1S 00 (S ∗ )| = Ω(|N 0 |/ log 4δS ) = Ω(β/ log 2δS )|S|.
Since δS ≤ ∆ · β, it follows that βw = Ω(β/ log 2(∆ · β)).
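The pruning step in the proof above, which replaces S 0 by a subset S 00 with Γ− (S 00 ) = N 0 and |S 00 | ≤ |N 0 |, is the following one-pass greedy, sketched in Python (ours).

```python
def greedy_covering_subset(S_prime, nbrs):
    """Keep a vertex of S' only if it covers a not-yet-covered neighbor.  The kept
    set S'' then satisfies Gamma(S'') = Gamma(S') and |S''| <= |Gamma(S')|.
    nbrs[v] is the set of neighbors (in N) of each v in S'."""
    covered, kept = set(), []
    for v in S_prime:
        if nbrs[v] - covered:        # v contributes at least one new neighbor
            kept.append(v)
            covered |= nbrs[v]
    return kept, covered
```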
Theorem 1.1 follows from Lemmas 4.2 and 4.3.
4.2.1 Relation to the Spokesman Election problem [7]
Motivated by broadcasting in multihop radio networks, Chlamtac and Weinstein [7] defined the spokesmen
election problem. In this problem, given a bipartite graph G = (S, N, E), the goal is to compute a subset
S 0 ⊆ S with the maximum number of unique neighbors Γ1 (S 0 ) in N . This problem was shown in [8]
to be NP-hard. In [7], an approximation scheme is presented that computes a subset S 0 ⊆ S with
|Γ1 (S 0 )| ≥ |N |/ log |S|, and this approximation scheme was then used to devise efficient broadcasting
algorithms for multihop radio networks.
The bounds provided in Lemmas 4.2 and 4.3 refine and strengthen the bound of [7]. Our bounds
show that |Γ1 (S 0 )| cannot be smaller than |N | by more than a factor that is logarithmic in 2 min{δN , δS },
which depends on the average degree in G, whereas the bound of [7] did not preclude the possibility of
|Γ1 (S 0 )| being smaller than |N | by a factor of log |S|. Note that min{δN , δS } is always upper bounded by
|S|, but can be much smaller than it. In particular, min{δN , δS } is always low in low arboricity graphs
(even if the maximum degree is huge), regardless of |S|.
We remark that our randomized approach of choosing the subset S 0 ⊆ S is extremely simple, and in
particular, it yields a much simpler solution to the Spokesman Election problem than that of [7]. Since
the solution to this problem was used in [7] to devise efficient broadcasting algorithms for multihop radio
networks, our solution can be used to obtain simpler broadcasting algorithms for multihop radio networks
than those of [7].
In the next section (Section 4.3), we show that our positive results for (α, β)-expanders are essentially
the best that one can hope for, by providing a “bad” expander example. A bad graph expander example
for the related Spokesman Election problem was given in [7], but our graph example is stronger than that
of [7] in several ways, and is based on completely different ideas. The graph example of [7] is tailored
for the somewhat degenerate case where |N | = Ω(|S|!), whence N is exponentially larger than S, thus
the expansion of the bad graph (and the degree) is huge. In addition, in their example, one cannot
uniquely cover more than |N |/ log(|S|) = |N |/ log log |N | vertices of N , leaving a big gap between their
positive and negative results. Our bad graph example, in contrast, works for any expansion parameter
β. Moreover, similarly to our positive result, the bounds implied by our negative result depend on the
average degree of the graph rather than the maximum degree or the size of S. In particular, by taking β
to be constant and ∆ to be sufficiently large, our graph example shows that one cannot cover more than
|N |/ log |N | vertices of N , which not only matches our positive result, but also closes the gap left by [7].
4.3 Negative Results: Worst-Case Expanders
In this section we present a “bad graph” expander construction. The description of our construction is
given in three stages. First, in Section 4.3.1 we construct a bipartite graph GS = (S, N, ES ) with sides
S and N that satisfies two somewhat contradictory requirements: On the one hand, for every subset S 0
of S, |Γ(S 0 )| ≥ log 2|S| · |S 0 |. Hence the ordinary expansion of GS , denoted by β, is at least log 2|S|.
On the other hand, for every subset S 0 of S, |Γ1S (S 0 )| ≤ (2/ log 2|S|) · |N |. Hence the wireless expansion
of GS , denoted βw , satisfies βw ≤ β(2/ log 2|S|). Although this graph is an ordinary bipartite expander
(according to the definition given in Section 2.1), note that the size of N is greater than that of S by a
factor of log 2|S|. Also, it does not provide an ordinary non-bipartite expander, because the expansion
is achieved only on one side, from S towards N . Nevertheless, it provides the core of our worst-case
expander, and is henceforth referred to as the core graph. Next, in Section 4.3.2 we describe a generalized
core graph G∗S = (S ∗ , N ∗ , ES∗ ) with an arbitrary expansion β ∗ , while preserving the same upper bound
on the wireless expansion. Finally, in Section 4.3.3 we plug the generalized core graph on top of an
ordinary expander G(V, E) with a possibly good wireless expansion, such that N ∗ ⊆ V and S ∗ ∩ V = ∅,
and demonstrate that the resulting graph G̃ = (V ∪ S ∗ , E ∪ ES∗ ) is an ordinary expander with a similar
expansion but a poor wireless expansion. While the generalized core graph is bipartite, the ordinary
expander G that we started from does not have to be bipartite. If the original expander G is bipartite,
we can ensure that the expander resulting from our modification will also be bipartite.
4.3.1 The Core Graph
Lemma 4.4 For any integer s ≥ 1, there is a bipartite graph GS = (S, N, ES ) such that:
1. s := |S| and |N | = s log 2s.
2. Each vertex in S has degree 2s − 1.
3. The maximum degree ∆N of a vertex in N is s, and the average degree δN of a vertex in N is at
most 2s/ log 2s.
4. For every subset S 0 of S, |Γ(S 0 )| ≥ log 2s · |S 0 |. (Hence the ordinary expansion, denoted β, is at
least log 2s.)
5. For every subset S 0 of S, |Γ1S (S 0 )| ≤ 2s = (2/ log 2s) · |N |. (Hence the wireless expansion, denoted
βw , satisfies βw ≤ β(2/ log 2s).)
Proof: We assume for simplicity that s is an integer power of 2, which may affect the bounds in the
statements of the lemma by at most a small constant. To describe the edge set ES of GS , consider a
perfect binary tree TS with s leaves (and s − 1 internal vertices). We identify each leaf z of TS with a
unique vertex of S. Each vertex v of TS is associated with a set Nv of vertices from N ; all these vertex sets
are pairwise disjoint, and we have N = ∪v∈TS Nv . For a vertex v at level i of the tree, i = 0, 1, . . . , log s,
the set Nv contains s/2i vertices. Thus the sizes of these vertex sets decrease geometrically with the
level, starting with the set Nrt at the root rt that consists of s vertices, and ending with singletons at
the leaves. Denote by Ni the union of the sets Nv over all i-level vertices in TS . For all i = 0, 1, . . . , log s,
we have |Ni | = s, hence |N | = s log 2s. For a leaf z in TS , let A(z) denote the set of its ancestors in TS
(including z itself), and let N̂z = ∪w∈A(z) Nw . Define E(z) = {(z, v) | v ∈ N̂z }. Then ES = ∪z∈S E(z).
(See Fig. 2 for an illustration.)
Figure 2
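The construction is straightforward to reproduce. A compact Python sketch (ours), assuming s is a power of two and numbering the tree TS in heap order: the tree vertex at level i owns s/2^i fresh vertices of N, and each leaf is joined to the blocks owned by all of its ancestors.

```python
def build_core_graph(log_s):
    """Core graph G_S of Lemma 4.4 for s = 2**log_s leaves.
    Returns a dict mapping each leaf z in S = {0,...,s-1} to its neighbor list in N;
    vertices of N are labelled (tree_node, copy_index), tree nodes in heap order."""
    s = 2 ** log_s
    owned = {}                        # tree node -> its block N_v of fresh N-vertices
    for node in range(1, 2 * s):      # heap numbering: root = 1, leaves = s..2s-1
        level = node.bit_length() - 1
        owned[node] = [(node, c) for c in range(s >> level)]   # |N_v| = s / 2^level
    nbrs = {}
    for leaf in range(s):
        node, row = s + leaf, []
        while node >= 1:              # walk up from the leaf to the root
            row += owned[node]
            node //= 2
        nbrs[leaf] = row              # degree = s + s/2 + ... + 1 = 2s - 1
    return nbrs

g = build_core_graph(3)               # s = 8, purely illustrative
assert all(len(row) == 2 * 8 - 1 for row in g.values())             # property 2
assert len({u for row in g.values() for u in row}) == 8 * (3 + 1)   # |N| = s log 2s
```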
Observation 4.5 There is an edge between vertex z ∈ S and vertex v ∈ N iff the unique vertex w in TS
such that v ∈ Nw is an ancestor of z in TS .
Note that the degree of each vertex z ∈ S, namely |E(z)|, is equal to Σ_{i=0}^{log s} 2^i = 2s − 1. On the other
hand, the degrees of vertices in N are not uniform. For a vertex v in TS , each vertex in Nv is incident on
the descendant leaves of v. This means that if v is at level i of TS , then all vertices in Nv have degree
2^{log s−i} = s/2^i . Hence, the maximum degree ∆N of a vertex in N is s and the average degree δN of a
vertex in N is given by
δN = (1/|N |) · Σ_{i=0}^{log s} |Ni |(s/2^i ) = (1/|N |) · Σ_{i=0}^{log s} s^2/2^i ≤ 2s^2/(s log 2s) = 2s/ log 2s.
Next, we lower bound the expansion β of the graph GS . Fix an arbitrary set S 0 ⊆ S of size k, for any
1 ≤ k ≤ s, and consider the set of k leaves in TS identified with S 0 , denoted by s1 , . . . , sk . Recall that
the level of the root rt is 0, the level of its children is 1, etc., the level of the leaves of TS is log s; in
what follows we say that a vertex has inverse-level j if its level in TS is log s − j. For each vertex v at
inverse-level j in TS , the associated vertex set Nv has size 2j . Next, we distinguish between inverse-levels
at most blog kc and higher inverse-levels. For any inverse-level 0 ≤ j ≤ blog kc, the number of ancestors
of the k leaves s1 , . . . , sk in the tree TS is at least k/2j , hence the union of the corresponding vertex sets is
of size at least k. (The lower bound is realized when the k leaves are consecutive to each other in TS .) For
each inverse-level higher than blog kc, the number of ancestors of the k leaves s1 , . . . , sk may be as small
as 1, but the vertex set associated with such an ancestor is of size at least k. It follows that the union of
the corresponding vertex sets at each level is lower bounded by k, and so the union of the vertex sets of all
ancestors of the k leaves s1 , . . . , sk over all levels is at least (log s+1)·k. By Observation 4.5, all the vertices
in this union are neighbors of the vertices in S 0 , thereby yielding |Γ(S 0 )| ≥ (log s + 1) · k = log 2s · |S 0 |. It
follows that β ≥ log 2s.
It remains to upper bound the wireless expansion βw of the graph GS . Fix an arbitrary set S 0 ⊆ S,
and recall that Γ1S (S 0 ) denotes the set of all vertices outside S that have a single neighbor from S 0 .
For a vertex v in TS , let D(v) denote the set of its descendants in TS (including v itself), and let
Ňv = ∪w∈D(v) Nw . We argue that for any vertex v at inverse-level j, for j = 0, 1, . . . , log s, it holds that
|Γ1S (S 0 ) ∩ Ňv | ≤ 2j+1 − 1. The proof is by induction on j. Basis j = 0: in this case v is a leaf, hence
Ňv = Nv is a singleton, and so |Γ1S (S 0 ) ∩ Ňv | ≤ 1 = 2j+1 − 1. Induction step: Assume the correctness of the
statement for all smaller values of j, and prove it for j. Consider an arbitrary vertex v at level j, and
denote its left and right children by vL and vR , respectively. Suppose first that S 0 contains at least one
leaf zL from the subtree of vL and at least one leaf zR from the subtree of vR . By Observation 4.5, every
vertex in Nv is incident to both zL and zR , hence no vertex of Nv belongs to Γ1S (S 0 ). It follows that
Γ1S (S 0 ) ∩ Ňv = (Γ1S (S 0 ) ∩ ŇvL ) ∪ (Γ1S (S 0 ) ∩ ŇvR ). By the induction hypothesis, we conclude that
|Γ1S (S 0 ) ∩ Ňv | = |Γ1S (S 0 ) ∩ ŇvL | + |Γ1S (S 0 ) ∩ ŇvR |
≤ 2 · (2j − 1) ≤ 2j+1 − 1 .
We henceforth assume that no leaf in the subtree of either vL or vR , without loss of generality vL ,
belongs to S 0 . Hence, by Observation 4.5 again, no vertex of ŇvL belongs to Γ(S 0 ) ⊇ Γ1S (S 0 ), which gives
Γ1S (S 0 ) ∩ Ňv = (Γ1S (S 0 ) ∩ Nv ) ∪ (Γ1S (S 0 ) ∩ ŇvR ). Obviously |(Γ1S (S 0 ) ∩ Nv )| ≤ |Nv | = 2j . By the induction
hypothesis, we obtain |Γ1S (S 0 ) ∩ Ňv | = |Γ1S (S 0 ) ∩ Nv | + |Γ1S (S 0 ) ∩ ŇvR | ≤ 2j + 2j − 1 = 2j+1 − 1. This
completes the proof of the induction.
Since Ňrt = N , applying the induction statement for the root rt of TS yields
|Γ1S (S 0 )| = |Γ1S (S 0 ) ∩ Ňrt | ≤ 2log s+1 − 1 ≤ 2s = (2/ log 2s) · |N |.
It follows that βw ≤ β(2/ log 2s), which completes the proof of the lemma.
4.3.2 The Core Graph with Arbitrary Expansion
Notice that the expansion of the graph provided by Lemma 4.4 is logarithmic in the size of its vertex set
and also in the maximum and average degree (both in S and in N ). In what follows we show how to
construct a generalized core graph that has an arbitrary expansion.
Lemma 4.6 For any integer ∆∗ ≥ 1 and any β ∗ satisfying (2e)/∆∗ ≤ β ∗ ≤ ∆∗ /(2e) (where e is the
base of the natural logarithm), there exists a bipartite graph G∗S = (S ∗ , N ∗ , ES∗ ) with sides S ∗ and N ∗ of
maximum degree ∆∗ , such that
1. |S ∗ | ≤ ∆∗ /2, |N ∗ | = β ∗ · |S ∗ |.
2. For every subset S 0 of S ∗ , |Γ(S 0 )| ≥ β ∗ · |S 0 |. (Thus, ordinary expansion is at least β ∗ .)
3. For every subset S 0 of S ∗ ,
|Γ1S ∗ (S 0 )| ≤ (4/ log(min{∆∗ /β ∗ , ∆∗ ·β ∗ }))·|N ∗ |. (Hence the wireless expansion, denoted βw , satisfies
βw ≤ β ∗ (4/ log(min{∆∗ /β ∗ , ∆∗ · β ∗ })).)
To prove Lemma 4.6, we first present the following two lemmas which generalize Lemma 4.4 to get an
arbitrary expansion.
Lemma 4.7 For any integer s ≥ 1 and any β > log 2s, there exists a bipartite graph ĜS = (S, N̂ , ÊS )
such that
1. s := |S| and |N̂ | = s · β.
2. Each vertex in S has degree (2s − 1) · (β/ log 2s).
3. The maximum degree ∆N̂ of a vertex in N̂ is s, and the average degree δN̂ of a vertex in N̂ is at
most 2s/ log 2s.
4. For every subset S 0 of S, |Γ(S 0 )| ≥ β · |S 0 |. (Hence the ordinary expansion is at least β.)
5. For every subset S 0 of S, |Γ1S (S 0 )| ≤ 2s·(β/ log 2s) = (2/ log 2s)·|N̂ |. (Hence the wireless expansion,
denoted βw , satisfies βw ≤ β(2/ log 2s).)
Proof: We assume for simplicity that k = β/ log 2s is an integer, and modify the construction used
to prove Lemma 4.4 by creating k copies v1 , . . . , vk for each vertex v in N . Thus each vertex set Nv
is “expanded” by a factor of k; denote the expanded vertex set by N̂v . The vertex set N̂ of ĜS is the
union of all
S copies of all vertices in N , or in other words, it is the union of all the expanded vertex sets,
i.e., N̂ = v∈TS N̂v . The edge set ÊS of ĜS is obtained by translating each edge (v, u) in the original
graph GS , where v ∈ N , into the k edges (v1 , u), . . . , (vk , u) in ĜS . Other than this modification, the
construction remains intact. Note that S remains unchanged, and the degree of vertices in N̂ is the same
as the degree of vertices in N in the original graph GS (both the maximum and average degree). On the
other hand, we now have |N̂ | = (s log 2s) · (β/ log 2s) = s · β. Moreover, the expansion increases from at
least log 2s to at least β, and the degree of vertices in S increases from 2s − 1 to (2s − 1) · (β/ log 2s).
Finally, note that for every subset S 0 of S, |Γ1S (S 0 )| increases by a factor of β/ log 2s, hence |Γ1S (S 0 )| is at
most 2s · (β/ log 2s) = (2/ log 2s) · |N̂ |, thus the wireless expansion βw satisfies βw ≤ β(2/ log 2s).
Lemma 4.8 For any integer s ≥ 1 and any β ≤ log 2s, there exists a bipartite graph ǦS = (Š, N, ĚS )
with sides Š and N , such that
1. |Š| = s · (log 2s/β) and |N | = s log 2s.
2. Each vertex in Š has degree 2s − 1.
3. The maximum degree ∆N of a vertex in N is s · (log 2s/β), and the average degree δN of a vertex
in N is at most 2s/β.
4. For every subset S 0 of Š, |Γ(S 0 )| ≥ β · |S 0 |. (Hence the ordinary expansion is at least β.)
5. For every subset S 0 of Š, |Γ1Š (S 0 )| ≤ 2s = (2/ log 2s) · |N |. (Hence the wireless expansion, denoted
βw , satisfies βw ≤ β(2/ log 2s).)
Proof: We assume for simplicity that k = log 2s/β is an integer, and modify the construction used to
prove Lemma 4.4 by creating k copies v1 , . . . , vk for each vertex v in S. The vertex set Š of ǦS is the union
of all copies of all vertices in S, and the edge set ĚS is obtained by translating each edge (v, u) in the
original graph GS , where v ∈ S, into the k edges (v1 , u), . . . , (vk , u) in ǦS . Other than this modification,
the construction remains intact. Note that N remains unchanged, and the degree of vertices in Š is the
same as the degree of vertices in S in the original graph GS (both the maximum and average degree).
On the other hand, we now have |Š| = s · (log 2s/β). Moreover, the expansion decreases from at least
log 2s to at least β, and the degree of vertices in N increases by a factor of log 2s/β. Finally, note that
for every subset S 0 of Š, |Γ1Š (S 0 )| remains at most 2s = (2/ log 2s) · |N |, thus the wireless expansion βw
remains unchanged, satisfying βw ≤ β(2/ log 2s).
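Both lemmas use the same mechanical operation: duplicate every vertex on one side k times, copying its edges. A Python sketch (ours), written for the adjacency-list representation of the core-graph sketch in Section 4.3.1, where S-vertices are keys and N-vertices are list entries:

```python
def duplicate_N_side(nbrs, k):
    """Lemma 4.7-style blow-up: replace each N-vertex u by k copies (u, 0..k-1),
    translating every edge (z, u) into the k edges (z, (u, c))."""
    return {z: [(u, c) for u in row for c in range(k)] for z, row in nbrs.items()}

def duplicate_S_side(nbrs, k):
    """Lemma 4.8-style blow-up: replace each S-vertex z by k copies (z, 0..k-1),
    each inheriting z's neighbor list unchanged."""
    return {(z, c): list(row) for z, row in nbrs.items() for c in range(k)}
```

In both operations the degrees on the untouched side are preserved, while the other side grows by exactly the factor k, which is all the two lemmas need.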
We are now ready to complete the proof of Lemma 4.6.
Proof: [Lemma 4.6] Since β ∗ ≤ ∆∗ /(2e), we may write ∆∗ = 2s · (β ∗ / log 2s), for s ≥ e. Suppose first
that β ∗ > log 2s. In this case we take G∗S to be the graph provided by Lemma 4.7 for ⌈s⌉ and β = β ∗ ;
we assume for simplicity that s is an integer, but this assumption has a negligible effect. The maximum
degree in the graph is (2s − 1) · (β ∗ / log 2s), which is bounded by ∆∗ := 2s · (β ∗ / log 2s). This in particular
yields ∆∗ ≥ 2s, and so |S ∗ | = s ≤ ∆∗ /2. We also have |N ∗ | = β ∗ · |S ∗ |. The second assertion follows
immediately from Lemma 4.7(4). It remains to prove the third assertion. Lemma 4.7(5) implies that for
every subset S 0 of S ∗ , |Γ1S ∗ (S 0 )| ≤ 2s · (β ∗ / log 2s) = (2/ log 2s) · |N ∗ |. Observe that
min{∆∗ /β ∗ , ∆∗ · β ∗ } = ∆∗ /β ∗ = 2s/ log 2s ≤ 2s.
Hence 2/ log 2s ≤ 2/ log(min{∆∗ /β ∗ , ∆∗ · β ∗ }), which implies that
|Γ1S ∗ (S 0 )| ≤ (2/ log 2s) · |N ∗ |
≤ (2/ log(min{∆∗ /β ∗ , ∆∗ · β ∗ })) · |N ∗ | .
We henceforth assume that β ∗ ≤ log 2s. Since β ∗ ≥ (2e)/∆∗ , we may write ∆∗ = 2s0 · (log 2s0 /β ∗ ), for
s0 ≥ e/2. Next, we argue that β ∗ ≤ log 2s0 . Since β ∗ ≤ log 2s and as ∆∗ is equal to both 2s · (β ∗ / log 2s)
and 2s0 · (log 2s0 /β), it follows that
((2s0 )/(2s)) log(2s0 ) log(2s) = (β ∗ )2 ≤ log2 (2s).
Thus (2s0 ) · log(2s0 ) ≤ (2s) · log(2s), and so s0 ≤ s. Next, we prove that (2s0 )/ log(2s0 ) ≤ (2s)/ log(2s)
by taking logarithms of both sides and noting that the function f (x) = x − log x is monotone
increasing for x > log e and that s ≥ s0 ≥ e/2. Rearranging, we get
(β ∗ )2 = ((2s0 )/(2s)) log(2s0 ) log(2s) ≤ log2 (2s0 ), thus β ∗ ≤ log 2s0 .
In this case we take G∗S to be the graph provided by Lemma 4.8 for ⌈s0 ⌉ and β = β ∗ ; we again assume
for simplicity that s0 is an integer, but this assumption has a negligible effect. The maximum degree
in the graph is max{2s0 − 1, s0 · (log 2s0 /β)}, which is bounded by ∆∗ := 2s0 · (log 2s0 /β ∗ ). Note that
|S ∗ | = s0 · (log 2s0 /β ∗ ) = ∆∗ /2 and |N ∗ | = s0 log 2s0 = β ∗ · |S ∗ |. The second assertion follows immediately
from Lemma 4.8(4). It remains to prove the third assertion. Lemma 4.8(5) implies that for every subset
S 0 of S ∗ , |Γ1S ∗ (S 0 )| ≤ 2s0 = (2/ log 2s0 ) · |N ∗ |. Observe that
min{∆∗ /β ∗ , ∆∗ · β ∗ } ≤ ∆∗ · β ∗ = 2s0 · log 2s0 .
Hence
2/ log 2s0 = 4/ log((2s0 )2 ) ≤ 4/ log(2s0 · log 2s0 )
≤ 4/ log(min{∆∗ /β ∗ , ∆∗ · β ∗ }),
which implies that
|Γ1S ∗ (S 0 )| ≤ (2/ log 2s0 ) · |N ∗ |
≤ (4/ log(min{∆∗ /β ∗ , ∆∗ · β ∗ })) · |N ∗ |.
4.3.3 Worst-Case Expanders
Let G be an arbitrary (α, β)-expander on n vertices with maximum degree ∆, and let 0 < ε < 1/2 be
a “blow-up” parameter. That is, ε will determine the extent by which the parameters of interest blow
up due to the modification that we perform on the original graph G to obtain poor wireless expansion.
There is a tradeoff between the wireless expansion and the other parameters: The stronger our upper
bound on the wireless expansion is, the larger the blow-up in the other parameters becomes.
For technical reasons, we require that ∆ · β ≥ 1/(1 − ε²). We start by constructing the generalized
core graph G∗S = (S ∗ , N ∗ , ES∗ ) provided by Lemma 4.6 for ∆∗ = ε · ∆ and expansion β ∗ = β/ε, thus
yielding |S ∗ | ≤ ∆∗ /2 = ε∆/2 and |N ∗ | = β ∗ · |S ∗ | = (β/ε) · |S ∗ |. Our worst-case expander G̃ is obtained
by plugging G∗S on top of G. The vertices of S ∗ are not part of the original vertex set of G, but are rather
new vertices added to it. The vertices of N ∗ are chosen arbitrarily from V (G).
Remark. If G is a bipartite expander, expanding from the left side L to the right side R, and if we want
G̃ to remain bipartite and to expand from L̃ to R̃, then L̃ will be defined as the union of L and S ∗ , and
R̃ will be defined as the union of R and a dummy vertex set of the same size as S ∗ , to guarantee that
|L̃| = |R̃|.
In what follows we analyze the properties of G̃. Denoting the number of vertices in G̃ by ñ, we have
n ≤ ñ ≤ n + 2|S ∗ | ≤ n + 2ε(∆/2) ≤ (1 + ε) · n. Write ∆̃ = (1 + ε) · ∆, and note that the maximum
degree in G̃ is bounded by ∆ + ∆∗ ≤ ∆ + ε · ∆ = ∆̃.
Claim 4.9 G̃ is an ordinary (α̃, β̃)-expander, where β̃ = (1 − ε) · β, α̃ = (1 − ε) · α.
Proof: Since ñ < (1 + ε) · n and as α̃ = (1 − ε) · α, it follows that α̃ · ñ ≤ (1 − ε)α · (1 + ε) · n =
(1 − ε²)α · n < α · n. Consider an arbitrary set X of at most α̃ · ñ ≤ α · n vertices from G̃. By Lemma 4.6(2),
the expansion in G∗S is at least β ∗ = β/ε, hence |Γ− (X ∩ S ∗ )| ≥ (β/ε) · |X ∩ S ∗ |. If |X ∩ S ∗ | ≥ ε · |X|,
then we have |Γ− (X)| ≥ |Γ− (X ∩ S ∗ )| ≥ (β/ε) · |X ∩ S ∗ | ≥ (β/ε) · (ε · |X|) = β · |X| > β̃ · |X|.
Otherwise, |X \ S ∗ | ≥ (1 − ε) · |X|, and as the expansion in G is at least β, we have
|Γ− (X)| ≥ |Γ− (X \ S ∗ )| ≥ β · |X \ S ∗ | ≥ β · (1 − ε) · |X| = β̃ · |X|.
Recall that ∆ · β ≥ 1/(1 − ε²), and note that ∆̃ · β̃ = (1 + ε)∆ · (1 − ε)β ≥ 1. We also have
that ∆̃/β̃ > ∆/β ≥ 1. Hence the term log(min{∆̃/β̃, ∆̃ · β̃}) is non-negative, and the upper bound
O(β̃/(ε³ · log(min{∆̃/β̃, ∆̃ · β̃}))) in the following claim is well-defined.
Claim 4.10 The wireless expansion β̃w of G̃ satisfies β̃w = O(β̃/(ε³ · log(min{∆̃/β̃, ∆̃ · β̃}))).
Proof: Note that β̃w is trivially upper bounded by β, thus the claim holds vacuously whenever
ε³ · log(min{∆̃/β̃, ∆̃ · β̃}) < 2. We may henceforth assume that ε³ · log(min{∆̃/β̃, ∆̃ · β̃}) ≥ 2, which implies
that both ∆̃/β̃ and ∆̃ · β̃ are at least 2^{2/ε³}. Since ε < 1/2, it follows that
∆∗ · β ∗ = ∆ · β ≥ (∆̃/(1 + ε)) · (β̃/(1 − ε)) ≥ ∆̃ · β̃ ≥ 2^{2/ε³} ≥ 2e
and
∆∗ /β ∗ = ε²(∆/β) ≥ ε²(∆̃/(1 + ε))/(β̃/(1 − ε)) = ε²((1 − ε)/(1 + ε)) · (∆̃/β̃) ≥ ε²((1 − ε)/(1 + ε)) · 2^{2/ε³} ≥ 2e.
In particular, we have (2e)/∆∗ ≤ β ∗ ≤ ∆∗ /(2e), as required in Lemma 4.6. Since all edges adjacent to
the vertices of S ∗ belong to the core graph G∗S with parameters ∆∗ and β ∗ , Lemma 4.6(3) implies that
for every subset S 0 of S ∗ ,
|Γ1S ∗ (S 0 )| ≤ (4/ log(min{∆∗ /β ∗ , ∆∗ · β ∗ })) · |N ∗ |
≤ (4(1 + ε)/(ε²(1 − ε) · log(min{∆̃/β̃, ∆̃ · β̃}))) · |N ∗ |
≤ (12/(ε² · log(min{∆̃/β̃, ∆̃ · β̃}))) · |N ∗ |
= (12/(ε³ · log(min{∆̃/β̃, ∆̃ · β̃}))) · β · |S ∗ |
≤ (24/(ε³ · log(min{∆̃/β̃, ∆̃ · β̃}))) · β̃ · |S ∗ |.
(It is easily verified that the third and last inequalities hold for ε < 1/2.) The bottom-line constant 24
can be improved; we did not try to optimize it.
We derive the following corollary, which implies the existence of expanders with worst possible wireless
expansion. The bound on the wireless expansion is tight in the entire range of parameters, disregarding
constants and dependencies on ε.
Corollary 4.11 For any n, ∆, β and 0 < ε < 1/2 such that ∆ · β ≥ 1/(1 − ε²), if there exists an ordinary
(α, β)-expander G on n vertices with maximum degree ∆, then there exists an (α̃, β̃)-expander G̃ on
ñ vertices with maximum degree ∆̃ and wireless expansion β̃w , where: (1) ∆ ≤ ∆̃ ≤ (1 + ε) · ∆; (2)
n ≤ ñ ≤ (1 + ε) · n; (3) β̃ = (1 − ε) · β; (4) α̃ = (1 − ε) · α; and (5) β̃w = O(β̃/(ε³ · log(min{∆̃/β̃, ∆̃ · β̃}))).
One may use Corollary 4.11 in conjunction with known constructions of explicit expanders (such as
Ramanujan graphs), which achieve near-optimal expansion for any degree parameter. Taking ε to be a
sufficiently small constant thus completes the proof of Theorem 1.2.
5 A tight lower bound on the broadcast time in radio networks
In this section we provide a simple proof for obtaining a tight lower bound of Ω(D log(n/D)) on the
broadcast time in radio networks, which holds both in expectation and with high probability.
Consider our core bipartite graph GS = (S, N, ES ) from Lemma 4.4, with sides S and N , where
s = |S| and |N | = s log 2s. Suppose that we connect an additional vertex rt to all vertices of S and
initiate a (radio) broadcast at rt in the resulting graph. By Lemma 4.4(5), one cannot uniquely cover
more than 2s vertices (i.e., a (2/(log 2s))-fraction) of N using any subset S 0 ⊆ S. It follows that at any
round after the first, the broadcast may reach at most 2s new vertices of N , which yields the following
corollary.
Corollary 5.1 The number of rounds needed for the broadcast to reach a (2i/(log 2s))-fraction of N is
at least 1 + i, for any 0 ≤ i ≤ ((log 2s)/2).
Next, we construct a graph G of diameter Θ(D), for an arbitrary parameter D = Ω(log n), in which
the number of rounds needed to complete a broadcast is Ω(D log(n/D)).
The core graph GS has |S| + |N | = s(1 + log 2s) = s(log 4s) vertices. We take D/2 copies of this
graph, denoted by G_S^1 , G_S^2 , . . . , G_S^{D/2} , each containing roughly n/D vertices. Thus we take s so that
n/D ≈ s(log 4s), and so log s = Θ(log(n/D)). Denote the sides of GiS by S i and N i . We connect the root
rt = rt0 to all vertices of S 1 , and for each 1 ≤ i ≤ D/2, we randomly sample a vertex from N i , denoted
by rti , and connect it (unless i = D/2) to all vertices of S i+1 . This completes the construction of the
graph G. It is easy to verify that the diameter of G is Θ(D), and to be more accurate, the diameter is
D + 2. In what follows we assume that none of the processors associated with the vertices of the graph
initially have any topological information on the graph (except for its size and diameter). This rather
standard assumption was also required in the proof of Kushilevitz and Mansour [11].
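A Python sketch (ours) of the chained construction; it reuses build_core_graph from the sketch in Section 4.3.1 and only produces the graph together with the randomly sampled relay vertices rt_i, which is all the argument below needs.

```python
import random

def build_lower_bound_graph(log_s, D, seed=0):
    """Chain of D/2 copies of the core graph (see build_core_graph above): a root
    'rt0' is attached to all of S^1, and in each copy i a relay rt_i is sampled
    uniformly from N^i and attached to all of S^{i+1}.  Returns (adjacency, relays)."""
    rng = random.Random(seed)
    core = build_core_graph(log_s)                    # maps S = {0..s-1} to N labels
    adj, relays = {"rt0": set()}, ["rt0"]
    for i in range(1, D // 2 + 1):
        S_i = [("S", i, z) for z in core]
        N_i = sorted({("N", i, u) for row in core.values() for u in row})
        for v in S_i + N_i:
            adj.setdefault(v, set())
        for z, row in core.items():                   # copy the core graph's edges
            for u in row:
                adj[("S", i, z)].add(("N", i, u))
                adj[("N", i, u)].add(("S", i, z))
        for v in S_i:                                 # previous relay feeds all of S^i
            adj[relays[-1]].add(v)
            adj[v].add(relays[-1])
        relays.append(rng.choice(N_i))                # rt_i, uniform over N^i
    return adj, relays
```

Combined with the radio_round sketch from the introduction (after adapting the adjacency format), this makes it possible to simulate broadcast protocols on G and observe the Ω(D log(n/D)) behavior empirically.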
Consider a broadcast initiated at rt. We make the following immediate observation.
Observation 5.2 The message must reach rti−1 before reaching rti , for 1 ≤ i ≤ D/2.
Denote by Ri the random variable for the number of rounds needed for the message to be sent from rti−1
to rti , for each i, and let R be the random variable for the number of rounds needed to send the message
from rt to rtD/2 . We thus have R = R1 + R2 + . . . + RD/2 .
By Corollary 5.1, the number of rounds needed for the broadcast message to reach half of the vertices
of N 1 (from rt = rt0 ) is at least ((log 2s)/4) + 1 = Θ(log(n/D)). Since rt1 was sampled randomly
from all vertices N 1 and as none of the processors have any topological information on the graph, rt1
received this message within this many rounds with probability at most 1/2, hence R1 = Ω(log(n/D))
with constant probability. By Observation 5.2, the only way for the message to reach any vertex of
S 2 , and later rt2 , is via rt1 , hence we can repeat this argument, and carry it out inductively. Since
the D/2 variables R1 , R2 , . . . , RD/2 are independently and identically distributed, and as D = Ω(log n)
(where the constant hiding in the Ω-notation is sufficiently large), a Chernoff bound implies that P(R =
Ω(D log(n/D))) ≥ 1 − n−c , where c is a constant as big as needed. For the expectation bound, note
that E(Ri ) > (log 2s)/4 = Ω(log(n/D)) by Corollary 5.1, for each i, and by linearity of expectation
we obtain E(R) = E(R1 ) + E(R2 ) + . . . + E(RD/2 ) = Ω(D log(n/D)). (The assumption that
D = Ω(log n) is used for deriving the high probability bound but not the expectation bound.)
Acknowledgments.
We are grateful to Mohsen Ghaffari for the useful discussions on the probabilistic arguments of Section
4.2.
References
[1] N. Alon, A. Bar-Noy, N. Linial, and D. Peleg. A lower bound for radio broadcast. J. Comput. Syst. Sci.,
43(2):290–298, 1991.
[2] N. Alon and M. R. Capalbo. Explicit unique-neighbor expanders. In Proc. 43rd FOCS, pages 73–82, 2002.
[3] N. Alon, M. Ghaffari, B. Haeupler, and M. Khabbazian. Broadcast throughput in radio networks: routing
vs. network coding. In Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms,
pages 1831–1843. SIAM, 2014.
[4] N. Alon and J. Spencer. The Probabilistic Method. John Wiley, 1992.
[5] R. Bar-Yehuda, O. Goldreich, and A. Itai. On the time-complexity of broadcast in radio networks: An
exponential gap between determinism and randomization. In Proceedings of the Sixth Annual ACM Symposium
on Principles of Distributed Computing, Vancouver, British Columbia, Canada, August 10-12, 1987, pages
98–108, 1987.
[6] O. Becker. Symmetric unique neighbor expanders and good LDPC codes. Discrete Applied Mathematics,
211:211–216, 2016.
[7] I. Chlamtac. The wave expansion approach to broadcasting in multihop radio networks. IEEE Transactions
on Communications, 39(3):426–433, 1991.
[8] I. Chlamtac and S. Kutten. On broadcasting in radio networks–problem analysis and protocol design. IEEE
Transactions on Communications, 33(12):1240–1246, 1985.
[9] A. Czumaj and W. Rytter. Broadcasting algorithms in radio networks with unknown topology. J. Algorithms,
60(2):115–143, 2006.
[10] S. Hoory, N. Linial, and A. Wigderson. Expander graphs and their applications. Bulletin of the American
Mathematical Society, 43(4):439–561, 2006.
[11] E. Kushilevitz and Y. Mansour. An Ω(D log(N/D)) lower bound for broadcast in radio networks. SIAM Journal on Computing, 27(3):702–712, 1998.
[12] C. C. Newport. Radio network lower bounds made easy. In Distributed Computing - 28th International
Symposium, DISC 2014, Austin, TX, USA, October 12-15, 2014. Proceedings, pages 258–272, 2014.
Appendix
A  Deterministic and Constructive Analysis with Improved Bounds
A.1  Bounds depending on the maximum degree
A.1.1  A naive approach
In this section we provide a simple argument showing that when the maximum degree is small, the wireless
expansion βw is not much smaller than the ordinary expansion β. Recall that we consider an arbitrary
bipartite graph GS = (S, N, ES ) with sides S and N , such that |N | = β · |S|. We assume that no vertex
of GS is isolated, i.e., all vertex degrees are at least 1. In what follows we define s = |S|, γ = |N |.
Lemma A.1 In GS = (S, N, ES ), if the maximum degree is ∆, then there is a subset S 0 of S with
|Γ1S (S 0 )| ≥ γ/∆.
Proof: We describe a procedure for computing vertex sets Suni ⊆ S and Nuni ⊆ N , such that |Nuni | ≥ γ/∆
and every vertex of Nuni has a unique neighbor in Suni .
Initialize Nuni = Suni = ∅, Ntmp = N, Stmp = S. At each step of the procedure, the sets Nuni and
Suni (respectively, Ntmp and Stmp ) grow (resp., shrink). The procedure maintains the following invariant
throughout.
Invariant:
(I1) Stmp ∪ Suni ⊆ S and Stmp ∩ Suni = ∅.
(I2) Ntmp ∪ Nuni ⊆ N and Ntmp ∩ Nuni = ∅.
(I3) Every vertex of Nuni has a unique neighbor in Suni .
(I4) Every vertex of Ntmp has at least one neighbor in Stmp , but has no neighbor in Suni .
For a vertex x ∈ Ntmp , recall that Γ(x, Stmp ) is the set of neighbors of x in Stmp . At each step we
pick a vertex v ∈ Ntmp minimizing |Γ(v, Stmp )|, i.e., a vertex with a minimum number of neighbors in
Stmp . (By invariant (I4), we have |Γ(v, Stmp )| ≥ 1.) Let Qv be the set of all vertices in Ntmp that
are incident on at least one vertex of Γ(v, Stmp ). By the choice of v, for any vertex u in Qv satisfying
Γ(u, Stmp ) ⊆ Γ(v, Stmp ), we must have Γ(u, Stmp ) = Γ(v, Stmp ). We partition Qv into two subsets Q0v and
Q00v , where Q0v contains all vertices u for which Γ(u, Stmp ) = Γ(v, Stmp ) and Q00v contains the remaining
vertices of Qv (all of which must have a neighbor in Stmp \ Γ(v, Stmp )). Obviously we have Q0v ⊇ {v}, so
|Q0v | ≥ 1.
We start by moving an arbitrary vertex w of Γ(v, Stmp ) from Stmp to Suni ; note that w is incident
on all vertices of Q0v . Then we remove all other vertices of Γ(v, Stmp ) from Stmp , which prevents these
vertices from entering Suni later on, thus guaranteeing that all vertices in Q0v will have w as their unique
neighbor in Suni . Subsequently, all vertices of Q0v are moved from Ntmp to Nuni . (See Figure 3 for an
illustration.)
In addition, to prevent violating invariant (I4) now and invariant (I3) in the future, all neighbors of w
that belong to Q00v are removed from Ntmp (they are incident to w which has just moved to Suni , and they
might have neighbors in Stmp that will be moved to Suni later on). It is clear that the first three invariants
(I1) − (I3) continue to hold following this step. As for invariant (I4), consider an arbitrary vertex u of
Ntmp at the beginning of this step. We know that u had no neighbors in Suni at the beginning of the
step. If u is not a neighbor of w, then u has no neighbors in Suni at the end of the step either. Otherwise, if u is a neighbor of w, then its only new neighbor in Suni at the end of the step is w and u was removed
from Ntmp (it either moves to Nuni if it belongs to Q0v , or it is removed altogether if it belongs to Q00v ).
This shows that every vertex u of Ntmp has no neighbor in Suni at the end of the step. Next, if u has a
neighbor outside Γ(v, Stmp ), then this neighbor remains in Stmp following the step (since only the vertices
Figure 3: An illustration of a single step of the procedure. The dashed lines represent edges that connect
vertices in Qv with vertices in Stmp , where v is a vertex in Ntmp minimizing |Γ(v, Stmp )|. The vertices in
Q0v are colored black, and they move from Ntmp to Nuni ; the vertices in Q00v are colored green, and they are
removed from Ntmp ; the vertices in Γ(v, Stmp ) are colored red, and they are removed from Stmp , except for w
which moves to Suni .
of Γ(v, Stmp ) are removed from Stmp during the step). Otherwise, we have Γ(u, Stmp ) ⊆ Γ(v, Stmp ), which
by the choice of v implies that Γ(u, Stmp ) = Γ(v, Stmp ). By definition, u ∈ Q0v , and is thus removed from
Ntmp during the step. This shows that at the end of this step, every vertex of Ntmp has at least one
neighbor in Stmp , so (I4) holds.
This procedure terminates once Ntmp = ∅. By invariant (I3), every vertex of Nuni has a unique
neighbor in Suni . At each step of the procedure, we move |Q0v | ≥ 1 vertices from Ntmp to Nuni , all of which
are neighbors of some vertex w ∈ Γ(v, Stmp ), and remove some of the other (at most ∆ − 1) neighbors of
w from Ntmp . Consequently, at least one vertex among every ∆ vertices removed from Ntmp must move
to Nuni . Since initially we have Ntmp = N , it follows that |Nuni | ≥ γ/∆.
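The procedure above can be phrased as a short greedy loop; the following Python sketch is illustrative only (it assumes adj[v] is the set of S-neighbors of a vertex v ∈ N, no vertex of N is isolated, and the helper name unique_neighbor_set is ours, not the paper's).

```python
def unique_neighbor_set(S, N, adj):
    """Greedy procedure from the proof of Lemma A.1 (illustrative sketch).
    adj[v] is the set of S-neighbors of a vertex v in N; no vertex of N is isolated.
    Returns (S_uni, N_uni) so that every vertex of N_uni has a unique neighbor in S_uni."""
    S_tmp, N_tmp = set(S), set(N)
    S_uni, N_uni = set(), set()
    while N_tmp:
        # pick v in N_tmp with the fewest neighbors in S_tmp
        v = min(N_tmp, key=lambda u: len(adj[u] & S_tmp))
        nbrs_v = adj[v] & S_tmp
        w = next(iter(nbrs_v))                               # the neighbor of v kept in S_uni
        Q_prime = {u for u in N_tmp if adj[u] & S_tmp == nbrs_v}  # Q'_v: same trace as v
        S_uni.add(w)
        S_tmp -= nbrs_v                                      # other vertices of Gamma(v, S_tmp) are dropped
        N_uni |= Q_prime                                     # Q'_v now has w as its unique S_uni-neighbor
        N_tmp -= Q_prime
        N_tmp = {u for u in N_tmp if w not in adj[u]}        # neighbors of w in Q''_v are discarded
    return S_uni, N_uni
```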
Note that the proof of this lemma takes into account the maximum degree ∆S of a vertex in S, rather
than the maximum degree ∆ in the entire graph.
Corollary A.2 Suppose G is an (α, β)-expander with maximum degree ∆. Then it is also an (αw, βw)-wireless expander, with αw = α and βw ≥ β/∆.
A.1.2  Procedure Partition
Our next goal is to strengthen Corollary A.2. In this section we describe a procedure, hereafter named
Procedure Partition, which lies at the core of our lower bounds on the wireless expansion. This procedure
is then employed in various scenarios to conclude that the wireless expansion is close to the ordinary
expansion. The procedure partitions N into Nuni , Nmany , Ntmp and S into Suni and Stmp , such that the
following conditions hold. (In what follows we refer to these conditions as the “partition conditions”.)
(P1) Every vertex of Nuni has a unique neighbor in Suni .
(P2) Every vertex of Ntmp has at least one neighbor in Stmp , but has no neighbor in Suni .
(P3) |Nuni | ≥ |Nmany |.
(P4) Either Ntmp = ∅, or |Etmp | ≤ 2|Euni | holds, where Euni (resp., Etmp ) denotes the set of edges
connecting all vertices in Stmp with vertices in Nuni (resp., Ntmp ).
At the outset, we initialize Nuni = Nmany = Suni = ∅, Ntmp = N, Stmp = S. At each step of the
procedure, the sets Nuni and Suni grow and the set Ntmp and Stmp shrink. The set Nmany also grows, but
not necessarily at each step; it contains “junk” vertices that once belonged to Nuni , but were removed
from Nuni due to new vertices added to Suni .
The first three aforementioned conditions are maintained throughout the execution of the procedure.
(Notice that initially all three of them hold trivially.) On the other hand, condition (P 4) is required to
hold only when the procedure terminates.
For a vertex x ∈ Stmp , denote by Ntmp (x) (resp., Nuni (x)) the set of neighbors of x in Ntmp (resp.,
Nuni ).
At each step we pick a vertex v ∈ Stmp maximizing gain(v) := |Ntmp (v)| − 2|Nuni (v)|. Assuming
gain(v) > 0, we move v from Stmp to Suni ; to preserve condition (P 1), we move the vertices of Nuni (v)
from Nuni to Nmany . Next, we move all vertices of Ntmp (v) from Ntmp to Nuni . Since gain(v) > 0,
condition (P 3) holds. The reason condition (P 2) holds is because once a vertex of Stmp moves to Suni ,
all its neighbors in Ntmp are moved to Nuni . Obviously the sets Nuni , Nmany , Ntmp (resp., Suni , Stmp ) form
a partition of N (resp., S).
Procedure Partition terminates once Stmp becomes empty or once gain(v) ≤ 0 for all v ∈ Stmp . In
the former case, condition (P 2) implies that Ntmp = ∅, and we are done. In the latter case, we have
|Ntmp(v)| ≤ 2|Nuni(v)| for any v ∈ Stmp, yielding
|Etmp| = Σ_{v∈Stmp} |Ntmp(v)| ≤ Σ_{v∈Stmp} 2|Nuni(v)| = 2|Euni|.   (1)
(See Figure 4 for an illustration.)
Figure 4: An illustration of the edge sets Euni and Etmp which connect Stmp to Nuni and Ntmp , respectively.
The vertices in Ntmp (v) and Nuni (v), as well as the edges connecting them to v, are colored red; here we have
gain(v) = |Ntmp (v)| − 2|Nuni (v)| = −2.
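A minimal Python sketch of Procedure Partition, assuming adjacency is given from the S side (adj[v] is the set of N-neighbors of v ∈ S); the function name and data layout are illustrative, not part of the paper.

```python
def procedure_partition(S, N, adj):
    """Sketch of Procedure Partition (Section A.1.2); adj[v] is the set of
    N-neighbors of a vertex v in S.  Returns the partition
    (S_uni, S_tmp, N_uni, N_many, N_tmp) satisfying (P1)-(P4)."""
    S_tmp, S_uni = set(S), set()
    N_tmp, N_uni, N_many = set(N), set(), set()
    while S_tmp:
        # gain(v) = |N_tmp(v)| - 2|N_uni(v)|
        v = max(S_tmp, key=lambda u: len(adj[u] & N_tmp) - 2 * len(adj[u] & N_uni))
        gain = len(adj[v] & N_tmp) - 2 * len(adj[v] & N_uni)
        if gain <= 0:
            break                          # condition (P4) now holds: |E_tmp| <= 2|E_uni|
        S_tmp.remove(v)
        S_uni.add(v)
        demoted = adj[v] & N_uni           # to preserve (P1) these become "junk" vertices
        N_uni -= demoted
        N_many |= demoted
        promoted = adj[v] & N_tmp          # neighbors of v in N_tmp get v as unique neighbor
        N_tmp -= promoted
        N_uni |= promoted
    return S_uni, S_tmp, N_uni, N_many, N_tmp
```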
A.1.3  Constructive lower bound for βw in terms of the average degree
Let N = Γ−(S) and γ = |N|, and denote by δ the average degree of a vertex in N, i.e., δ = (1/γ) Σ_{v∈N} deg(v, S). (As all vertex degrees are at least 1, we have δ ≥ 1.)
We next show a lower bound that takes into account the average degree δ rather than the maximum
degree ∆.
Lemma A.3 In the graph GS there exists a subset S 0 of S with |Γ1S (S 0 )| ≥ γ/(8δ).
Proof: Denote by N 2δ the set of vertices of N = Γ− (S) with degree at most 2δ. Observe that at least
half the vertices of N have degree at most twice the average, implying that |N 2δ | ≥ γ/2. We apply
Procedure Partition, but consider the vertex set N^{2δ} rather than N. Thus we obtain a partition of N^{2δ} rather than N into N^{2δ}_uni, N^{2δ}_many, N^{2δ}_tmp and a partition of S into S_uni and S_tmp satisfying the partition conditions (P1)–(P4). Next, we show that |N^{2δ}_uni| ≥ γ/(8δ).
Suppose first that the procedure terminates because N^{2δ}_tmp = ∅. By partition condition (P3), 2|N^{2δ}_uni| ≥ |N^{2δ}_uni| + |N^{2δ}_many| = |N^{2δ}|. It follows that
|N^{2δ}_uni| ≥ |N^{2δ}|/2 ≥ γ/4 ≥ γ/(4δ).   (2)
We henceforth assume that |Etmp| ≤ 2|Euni|. By definition, each vertex in N^{2δ} has at most 2δ neighbors in S. Condition (P1) implies that each vertex in N^{2δ}_uni has a single neighbor in S_uni, so it has at most 2δ − 1 neighbors in S_tmp, yielding |Euni| ≤ (2δ − 1)|N^{2δ}_uni|. By condition (P2), each vertex of N^{2δ}_tmp is incident on at least one edge of Etmp, and so |Etmp| ≥ |N^{2δ}_tmp|. It follows that
|N^{2δ}_tmp| ≤ |Etmp| ≤ 2|Euni| ≤ (4δ − 2)|N^{2δ}_uni|.
Hence,
4δ · |N^{2δ}_uni| = (2 + (4δ − 2))|N^{2δ}_uni| ≥ |N^{2δ}_uni| + |N^{2δ}_many| + |N^{2δ}_tmp| = |N^{2δ}| ≥ γ/2,
which yields |N^{2δ}_uni| ≥ γ/(8δ).
For every S ⊂ V denote by δS the average degree of a vertex in N = Γ−(S), i.e., δS = (1/|N|) Σ_{v∈N} deg(v, S), and denote δ̄ = max{δS : S ⊂ V, |S| ≤ αn}.
Corollary A.4 Let G = (V, E) be an (α, β)-expander. Then
(1) G is an (αw , βw )-wireless expander with αw = α and βw ≥ β/(8δ̄) ≥ β/(8∆), where ∆ is the maximum
degree in the graph.
(2) In the regime β ≥ 1, we have δS ≤ ∆/β, for every S such that |S| ≤ αn, thus δ̄ ≤ ∆/β and we get
βw ≥ β 2 /(8∆).
A.1.4  “Convenient” degree constraints
The following lemmas show that if many vertices in Γ− (S) have roughly the same degree, then the
ordinary expansion β and the wireless expansion βw of GS are roughly the same.
Lemma A.5 In GS , for any c > 1 and any i ∈ {1, 2, . . . , logc |S|}, there is a subset S 0 of S with
|Γ1S (S 0 )| ≥ |N (i) |/2(1 + c), where N (i) denotes the set of vertices in N with degree in [ci−1 , ci ) for i <
logc |S| and for i = logc |S| is the set of vertices in N with degree in [ci−1 , ci ] = [|S|/c, |S|].
Corollary A.6 In GS, for any c > 1 there is a subset S′ of S such that
|Γ^1_S(S′)| ≥ (log_2 c / (2(1 + c) log_2 ∆)) · γ.
Proof: The previous lemma implies also that for every c > 1 and every i ∈ {1, 2, . . . , log_c ∆} (rather than log_c |S|), there is a subset S′ of S with |Γ^1_S(S′)| ≥ |N^(i)|/(2(1 + c)), where N^(i) denotes the set of vertices in N with degree in [c^{i−1}, c^i) for i < log_c ∆ and, for i = log_c ∆, the set of vertices in N with degree in [c^{i−1}, c^i] = [∆/c, ∆]. Observe that there exists an index j ∈ {1, 2, . . . , log_c ∆} s.t. |N^(j)| ≥ γ/log_c ∆; for this index j, we get |Γ^1_S(S′)| ≥ |N^(j)|/(2(1 + c)) ≥ γ/(2(1 + c) log_c ∆).
The maximum of f (c) = log2 c/(2(1 + c)) is attained at c ≈ 3.59112 and equals ≈ 0.20087, hence we
get the following.
Corollary A.7 Let G = (V, E) be an (α, β)-expander with maximum degree ∆. Then it is also an
(αw, βw)-wireless expander with αw ≥ α and βw ≥ (0.20087/log_2 ∆) · β.
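The numerical constants above can be reproduced with a quick grid search; the sketch below is purely illustrative.

```python
import math

def f(c):
    # f(c) = log2(c) / (2(1 + c)), the quantity maximized above
    return math.log2(c) / (2 * (1 + c))

grid = [1 + k / 10000.0 for k in range(1, 200001)]   # c ranging over (1, 21]
c_star = max(grid, key=f)
print(round(c_star, 4), round(f(c_star), 5))          # ~3.5911 and ~0.20087
```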
A.2  Bounds depending on the average degree
Recall that δ denotes the average degree of a vertex in N = Γ− (S). In case δ is known, we can state a
stronger bound than that of Corollary A.7, using δ in place of ∆.
Corollary A.8 In GS , for any c > 1 and t > 1 there is a subset S 0 of S such that
|Γ^1_S(S′)| ≥ (1 − 1/t) · (1/(2(1 + c) log_c(tδ))) · γ.
Corollary A.9 In GS, for every ε > 0 and for sufficiently large¹ δ, there is a subset S′ of S such that
|Γ^1_S(S′)| ≥ (2.0087/((1 + ε) log_2 δ)) · γ.
Corollary A.10 Let G = (V, E) be an (α, β)-expander with maximum degree ∆ and let ε > 0. Suppose that for every S, δS is large² enough. Then G is also an (αw, βw)-wireless expander with αw ≥ α and
βw ≥ (2.0087/((1 + ε) log_2 δ̄)) · β.
Proof: Given S ⊂ V with |S| ≤ α|V|, write γ = |Γ−(S)| and let GS = (S, Γ−(S), e(S, Γ−(S))). Note that as G is an (α, β)-expander, γ ≥ β|S|, and by Corollary A.9 there is a subset S′ of S with
|Γ^1_S(S′)| ≥ (2.0087/((1 + ε) log_2 δS)) · γ ≥ (2.0087/((1 + ε) log_2 δS)) · β|S| ≥ (2.0087/((1 + ε) log_2 δ̄)) · β|S|.
Hence βw ≥ (2.0087/((1 + ε) log_2 δ̄)) · β.
Lemma A.11 Suppose there exists c > 1 and t > 1 such that for every subset N 0 of N in GS of
sufficiently large size (say, of size at least (γ/2)(1 − 1/t)), the average degree δ 0 of a vertex in N 0 is at
least tδ/c. Then there is a subset S′ of S such that
|Γ^1_S(S′)| ≥ (γ/(2(1 + c))) · (1 − 1/t).
Proof: We apply Procedure Partition, but consider the vertex set N^{tδ} of vertices in N = Γ−(S) with degree at most tδ. Thus we obtain a partition of N^{tδ} rather than N into N^{tδ}_uni, N^{tδ}_many, N^{tδ}_tmp and a partition of S into S_uni and S_tmp satisfying the partition conditions (P1)–(P4). Next, we show that |N^{tδ}_uni| ≥ |N^{tδ}|/(2(1 + c)). This completes the proof, as |N^{tδ}| ≥ γ(1 − 1/t).
If |N^{tδ}_tmp| < (γ/2)(1 − 1/t), then since |N^{tδ}| ≥ γ(1 − 1/t) and by using partition condition (P3) we get 2|N^{tδ}_uni| ≥ |N^{tδ}_uni| + |N^{tδ}_many| ≥ (γ/2)(1 − 1/t), hence |N^{tδ}_uni| ≥ (γ/4)(1 − 1/t) ≥ (γ/(2(1 + c)))(1 − 1/t).
¹ A δ that satisfies ln(δ) − ln(ln δ) − ln(1 + ε) − 1 ≥ 0 is enough.
² I.e., δS satisfies ln(δS) − ln(ln δS) − ln(1 + ε) − 1 ≥ 0.
Otherwise, if |N^{tδ}_tmp| ≥ (γ/2)(1 − 1/t), it is in particular nonempty and it must hold that |Etmp| ≤ 2|Euni|. By definition, each vertex in N^{tδ} has at most tδ neighbors in S. Condition (P1) implies that each vertex in N^{tδ}_uni has a single neighbor in S_uni, so it has at most tδ − 1 neighbors in S_tmp, yielding |Euni| ≤ (tδ − 1)|N^{tδ}_uni|. By condition (P2), each vertex of N^{tδ}_tmp is incident only on edges of Etmp. Since |N^{tδ}_tmp| ≥ (γ/2)(1 − 1/t), the average degree in this set is at least tδ/c. Therefore, |Etmp| ≥ (tδ/c)|N^{tδ}_tmp|. It follows that
(tδ/c)|N^{tδ}_tmp| ≤ |Etmp| ≤ 2|Euni| ≤ 2(tδ − 1)|N^{tδ}_uni|.
Hence
2(1 + c) · (tδ/c) · |N^{tδ}_uni| ≥ (2 · (tδ/c) + 2(tδ − 1)) · |N^{tδ}_uni| ≥ (tδ/c) · (|N^{tδ}_uni| + |N^{tδ}_many| + |N^{tδ}_tmp|) = (tδ/c) · |N^{tδ}|,
which yields
|N^{tδ}_uni| ≥ |N^{tδ}|/(2(1 + c)).
Corollary A.12 Let G = (V, E) be an (α, β)-expander and suppose there exists c > 1 and t > 1 such
that for every subset S of V of size |S| ≤ αn and for every subset M of Γ− (S) of sufficiently large size
(say, of size at least (|Γ− (S)|/2)(1 − 1/t)), the average degree δ 0 of a vertex in M is at least (tδS )/c. Then
G is also an (αw , βw )-wireless expander with αw ≥ α and
βw ≥ (β/(4(1 + c))) · (1 − 1/t).
Proof: The proof follows similar lines as those in the proof of Corollary A.10.
A.2.1  Near-optimal bounds
Lemma A.13 In GS there is a subset S 0 of S with |Γ1S (S 0 )| ≥ γ/(9 log(2δ)).
Proof: We prove the existence of vertex sets Suni ⊆ S and Nuni ⊆ N = Γ− (S), such that |Nuni | ≥
γ/(9 log(2δ)) and every vertex of Nuni has a unique neighbor in Suni . The proof is by induction on γ, for
all values of δ ≥ 1. (Since δ ≥ 1, we have log(2δ) ≥ 1.)
Basis: γ ≤ 9. Let v be an arbitrary vertex of S with at least one neighbor in N , let Suni = {v}, and let
Nuni be the (non-empty) neighborhood of v. We thus have |Nuni | ≥ 1 ≥ γ/(9 log(2δ)).
Induction step: Assume the correctness of the statement for all smaller values of γ, and prove it for γ.
We apply Procedure Partition (with the bipartite graph induced by the sets S and N ). If the procedure
terminates because Ntmp = ∅, then we have |Nuni | ≥ |N |/2 (cf. Equation (2)).
We henceforth assume that Ntmp 6= ∅, i.e., γ 0 = |Ntmp | ≥ 1. In particular, it must hold that |Etmp | ≤
2|Euni |. Denote by δ 0 the average degree of a vertex in Ntmp , counting only neighbors that belong to
Stmp . By partition condition (P 2), the entire neighborhood of Ntmp is contained in Stmp ; confusing as it
might be, we do not make use of this property here. We do use, however, another property guaranteed by
partition condition (P 2): Each vertex of Ntmp has at least one neighbor in Stmp , which implies that δ 0 ≥ 1,
thus log(2δ 0 ) ≥ 1. Since Ntmp is non-empty, it must hold that |Etmp | ≥ 1. Hence |Euni | ≥ |Etmp |/2 ≥ 1/2,
yielding |Euni | ≥ 1. Consequently, we have |Nuni | ≥ 1, which in turn yields 1 ≤ γ 0 ≤ γ − 1.
vi
Suppose first that γ′/log(2δ′) ≥ γ/log(2δ). By the induction hypothesis for γ′ (restricting ourselves to the subgraph of GS induced by the vertex sets Stmp and Ntmp), we conclude that there is a subset S̃ of Stmp with |Γ^1_{Stmp}(S̃) ∩ Ntmp| ≥ γ′/(9 log(2δ′)), yielding
|Γ^1_S(S̃)| ≥ |Γ^1_{Stmp}(S̃) ∩ Ntmp| ≥ γ′/(9 log(2δ′)) ≥ γ/(9 log(2δ)).
We may henceforth assume that
γ′/log(2δ′) < γ/log(2δ).   (3)
Observe that |Euni| + |Etmp| ≤ |ES| = δ · γ. By definition, |Etmp| = δ′ · γ′. It follows that
3δ′ · γ′ = 3|Etmp| ≤ 2(|Euni| + |Etmp|) ≤ 2δ · γ,
yielding
log(2δ′) ≤ log(2δ) + log(γ/3) − log(γ′/2).   (4)
Plugging Equation (4) into Equation (3), we obtain
γ′ < (γ/log(2δ)) · (log(2δ) + log(γ/3) − log(γ′/2)).   (5)
We may assume that |Nuni| < γ/9, as otherwise |Nuni| ≥ γ/9 ≥ γ/(9 log(2δ)) and we are done. By partition condition (P3), |Nuni| ≥ |Nmany|. Hence γ = |Nuni| + |Nmany| + γ′ ≤ 2|Nuni| + γ′, yielding (γ − γ′)/2 ≤ |Nuni| < γ/9. Hence (2/3)(γ/γ′) ≤ 6/7, which gives
log(γ/3) − log(γ′/2) ≤ log(6/7) ≤ −2/9.
It follows that
(γ/log(2δ)) · (log(2δ) + log(γ/3) − log(γ′/2)) ≤ (γ/log(2δ)) · (log(2δ) − 2/9).   (6)
Plugging Equation (6) into Equation (5) gives
γ′ ≤ (γ/log(2δ)) · (log(2δ) − 2/9) = γ − 2γ/(9 log(2δ)) ≤ 2|Nuni| + γ′ − 2γ/(9 log(2δ)),
implying that |Nuni| ≥ γ/(9 log(2δ)).
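The numerical step log(γ/3) − log(γ′/2) ≤ log(6/7) ≤ −2/9 used above (and again in the proof of Corollary A.15 below) is tight and assumes, as is standard here, that log denotes the base-2 logarithm; under that assumption it can be checked in one line (illustrative):

```python
import math

# sanity check for the step log(6/7) <= -2/9, with log taken to base 2
assert math.log2(6 / 7) <= -2 / 9     # -0.2224... <= -0.2222...
```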
Corollary A.14 Let G = (V, E) be an (α, β)-expander. Then,
(1) G is an (αw , βw )-wireless expander with αw = α and βw ≥ β/(9 log(2δ̄)) ≥ β/(9 log(2∆)), where ∆
is the maximum degree in the graph.
(2) In the regime β ≥ 1, we have δS ≤ ∆/β, thus δ̄ ≤ ∆/β, and hence βw ≥ β/(9 log(2∆/β)).
Corollary A.15 In GS there is a subset S′ of S with
|Γ^1_S(S′)| ≥ min{ γ/(9 log δ), γ/20 }.
Proof: We prove that if δ < 2 then there is a subset S 0 of S with |Γ1S (S 0 )| ≥ γ/20 and if δ ≥ 2 then
there is a subset S 0 of S with |Γ1S (S 0 )| ≥ γ/(9 log δ). The proof is by induction on γ.
Basis: γ ≤ 9. If δ < 2, by Lemma A.13 there is a subset S 0 of S with |Γ1S (S 0 )| ≥ γ/(9 log(2δ)) > γ/18 ≥
γ/20. For δ ≥ 2, let v be an arbitrary vertex of S with at least one neighbor in N , let S 0 = {v}, then
|Γ(v)| = |Γ1S (S 0 )| ≥ 1 ≥ γ/(9 log δ).
Induction step: Assume the correctness of the statement for all smaller values of γ, and prove it for γ.
If δ < 2, then the same proof holds as in the basis case. Assume now δ ≥ 2 and therefore log δ ≥ 1. We
apply Procedure Partition (with the bipartite graph induced by the sets S and N ). If the procedure
terminates because Ntmp = ∅, then we have |Nuni | ≥ |N |/2 (cf. Equation (2)).
We henceforth assume that Ntmp 6= ∅, i.e., γ 0 = |Ntmp | ≥ 1. In particular, it must hold that |Etmp | ≤
2|Euni |. Denote by δ 0 the average degree of a vertex in Ntmp , counting only neighbors that belong
to Stmp . By partition condition (P 2) each vertex of Ntmp has at least one neighbor in Stmp , which
implies that δ 0 ≥ 1, thus log(2δ 0 ) ≥ 1. Since Ntmp is non-empty, it must hold that |Etmp | ≥ 1. Hence
|Euni | ≥ |Etmp |/2 ≥ 1/2, yielding |Euni | ≥ 1. Consequently, we have |Nuni | ≥ 1, which in turn yields
1 ≤ γ 0 ≤ γ − 1.
There are two cases. The first case is when δ′ < 2. By Lemma A.13, there is a subset S′ of S s.t.
|Γ^1_S(S′)| ≥ |Γ^1_{Stmp}(S′) ∩ Ntmp| ≥ γ′/(9 log(2δ′)) ≥ γ′/18.   (7)
If γ′ < γ(9/10), then as γ = |Nuni| + |Nmany| + γ′ ≤ 2|Nuni| + γ′, we get |Nuni| ≥ γ/20. So we can assume γ′ ≥ γ(9/10), and by Equation (7),
|Γ^1_S(S′)| ≥ γ′/18 ≥ γ/20.
The second case is when δ′ ≥ 2 and therefore log δ′ ≥ 1.
Suppose first that γ′/log δ′ ≥ γ/log δ. By the induction hypothesis for γ′ (restricting ourselves to the subgraph of GS induced by the vertex sets Stmp and Ntmp), we conclude that there is a subset S̃ of Stmp with |Γ^1_{Stmp}(S̃) ∩ Ntmp| ≥ γ′/(9 log δ′), yielding
|Γ^1_S(S̃)| ≥ |Γ^1_{Stmp}(S̃) ∩ Ntmp| ≥ γ′/(9 log δ′) ≥ γ/(9 log δ).
We may henceforth assume that
γ′/log δ′ < γ/log δ.   (8)
Observe that |Euni| + |Etmp| ≤ |ES| = δ · γ. By definition, |Etmp| = δ′ · γ′. It follows that
3δ′ · γ′ = 3|Etmp| ≤ 2(|Euni| + |Etmp|) ≤ 2δ · γ,
yielding
log δ′ ≤ log δ + log(γ/3) − log(γ′/2).   (9)
Plugging Equation (9) into Equation (8), we obtain
γ′ < (γ/log δ) · (log δ + log(γ/3) − log(γ′/2)).   (10)
We may assume that |Nuni| < γ/9, as otherwise |Nuni| ≥ γ/9 ≥ γ/(9 log δ) and we are done. By partition condition (P3), |Nuni| ≥ |Nmany|. Hence γ = |Nuni| + |Nmany| + γ′ ≤ 2|Nuni| + γ′, yielding (γ − γ′)/2 ≤ |Nuni| < γ/9. Hence (2/3)(γ/γ′) ≤ 6/7, which gives
log(γ/3) − log(γ′/2) ≤ log(6/7) ≤ −2/9.
It follows that
(γ/log δ) · (log δ + log(γ/3) − log(γ′/2)) ≤ (γ/log δ) · (log δ − 2/9).   (11)
Plugging Equation (11) into Equation (10) gives
γ′ ≤ (γ/log δ) · (log δ − 2/9) = γ − 2γ/(9 log δ) ≤ 2|Nuni| + γ′ − 2γ/(9 log δ),
implying that |Nuni| ≥ γ/(9 log δ).
By Lemma A.13 and Corollaries A.8 and A.15 we get the following result. Denote
MG(x) = max{ min{1/(9 log x), 1/20}, 1/(9 log(2x)), max{(1 − 1/t)(2.0087/log(tx)) : t > 1} }.
Corollary A.16 In GS , there is a subset S 0 of S with |Γ1S (S 0 )| ≥ γ · MG (δ) .
Observation A.17 max{min{γ/(9 log δ), γ/20}, γ/(9 log(2δ))} is given by γ/(9 log(2δ)) if δ ≤ 2^{11/9}, by γ/20 if 2^{11/9} ≤ δ ≤ 2^{20/9}, and by γ/(9 log δ) otherwise.
Moreover, for every ε > 0, if δ satisfies ln(δ) − ln(ln δ) − ln(1 + ε) − 1 ≥ 0, then max{γ(1 − 1/t)(2.0087/log(tδ)) : t > 1} = γ · 2.0087/((1 + ε) log δ). In that case, max{γ/(9 log δ), max{γ(1 − 1/t)(1/(2(1 + c) log_c(tδ))) : t > 1}} ≥ γ · 2.0087/((1 + ε) log δ) if and only if ε < 17.0783; i.e., to understand which expression is the maximum, we need to take ε0 = min{ε : ln(δ) − ln(ln δ) − ln(1 + ε) − 1 ≥ 0} and then check whether ε0 < 17.0783 or not.
Let G = (V, E) be an (α, β)-expander, and for every S in V , denote γS = |Γ− (S)|. As G is an
(α, β)-expander, γS ≥ β|S|. Then, Corollary A.16 yields the following bound on βw .
Lemma A.18 Let G = (V, E) be an (α, β)-expander. Then,
(1) G is an (αw , βw )-wireless expander with αw = α and βw ≥ β · MG (δ̄).
(2) In the regime β ≥ 1, we have δS ≤ ∆/β, thus δ̄ ≤ ∆/β, and hence βw ≥ β · MG (∆/β).
Proof: Let S ⊂ V s.t. |S| ≤ αn, and let GS = (S, Γ−(S), ES) be the corresponding graph. Then, by Corollary A.16, there is a subset S′ of S with |Γ^1_S(S′)| ≥ γS · MG(δS) ≥ β|S| · MG(δS). Now, MG(x) is a decreasing function, and as δS ≤ δ̄, we get that MG(δS) ≥ MG(δ̄) and thus |Γ^1_S(S′)| ≥ β|S| · MG(δ̄). Moreover, in the regime β ≥ 1, we have δS ≤ ∆/β, thus δ̄ ≤ ∆/β, and hence |Γ^1_S(S′)| ≥ β|S| · MG(∆/β).
The bounds presented in Section A.2 on βw are functions of δ̄ (like the inequality βw ≥ β/(9 log(2δ̄))
that we proved in Corollary A.14). These bounds are usually hard to use, since in most cases we cannot
give an evaluation of δ̄. But there are cases in which we can evaluate δ̄, and get a better lower bound for
βw than β/(9 log(2∆)). One such example is the class of bounded arboricity graphs.
| 8 |
A CHARACTERIZATION FOR ASYMPTOTIC DIMENSION GROWTH
arXiv:1612.06638v2 [math.MG] 6 Jan 2017
GOULNARA ARZHANTSEVA, GRAHAM A. NIBLO, NICK WRIGHT, AND JIAWEN ZHANG
Abstract. We give a characterization for asymptotic dimension growth. We
apply it to CAT(0) cube complexes of finite dimension, giving an alternative proof
of N. Wright’s result on their finite asymptotic dimension. We also apply our new
characterization to geodesic coarse median spaces of finite rank and establish
that they have subexponential asymptotic dimension growth. This strengthens a
recent result of J. S̆pakula and N. Wright.
1. Introduction
The concept of asymptotic dimension was first introduced by Gromov [15]
in 1992 as a coarse analogue of the classical topological covering dimension.
It started to attract much attention in 1998 when Yu proved that the Novikov
higher signature conjecture holds for groups with finite asymptotic dimension
(FAD) [30]. A lot of groups and spaces are known to have finite asymptotic
dimension. Among those are, for instance, finitely generated abelian groups, free
groups of finite rank, Gromov hyperbolic groups [14, 24], mapping class groups
[5], CAT(0) cube complexes of finite dimension [29], see [3] for an excellent survey
of these and other results. Recently Behrstock, Hagen and Sisto introduced the
powerful new notion of hierarchically hyperbolic spaces and showed that these
have finite asymptotic dimension [1], recovering a number of the above results,
including notably mapping class groups and a number of CAT(0) cube complexes.
On the other hand, there are many groups and spaces with infinite asymptotic
dimension. Examples are the wreath product Z≀Z, the Grigorchuk group [27], the
Thompson groups, etc. Generalizing FAD, Dranishnikov defined the asymptotic
dimension growth for a space [13]; if the asymptotic dimension growth function
is eventually constant then the space has FAD. Dranishnikov showed that the
wreath product of a finitely generated nilpotent group with a finitely generated
FAD group has polynomial asymptotic dimension growth. He also showed that
polynomial asymptotic dimension growth implies Yu’s Property A, and, hence,
the coarse Baum-Connes Conjecture, provided the space has bounded geometry
[31]. Later, Ozawa [22] extended this result to spaces with subexponential growth;
see also [21]. Bell analyzed how the asymptotic dimension function is affected by
various group-theoretical constructions [4].
2010 Mathematics Subject Classification. 20F65, 20F67, 20F69, 51F99.
Key words and phrases. Asymptotic dimension growth, CAT(0) cube complex, coarse median
space, mapping class group.
Partially supported by the European Research Council (ERC) grant of Goulnara ARZHANTSEVA, no. 259527 and the Sino-British Fellowship Trust by Royal Society.
In this paper, we give an alternative characterization for the asymptotic dimension growth function which is inspired by Brown and Ozawa’s proof of Property A
for Gromov’s hyperbolic groups, [9, Theorem 5.3.15], which is in turn inspired by
[17]. We use this to study two notable examples: CAT(0) cube complexes of finite
dimension and coarse median spaces of finite rank.
The techniques used to study these examples are developments of those used
by S̆pakula and Wright [28] to establish Property A for uniformly locally finite
coarse median spaces of finite rank. As a byproduct, we obtain a new proof
of finite asymptotic dimension for CAT(0) cube complexes which allows one to
explicitly construct the required controlled covers. This compares with Wright’s
original proof, [29], which is discussed below.
CAT(0) cube complexes are a nice class of non-positively curved spaces, first
studied by Gromov who gave a purely combinatorial condition for recognizing
the non-positive curvature of cube complexes [14]. Many well-known groups
act properly on CAT(0) cube complexes. For instance, right-angled Artin groups,
many small cancellation groups, and Thompson’s groups admit such actions. This
makes it possible to deduce properties of these groups from the corresponding
properties of the CAT(0) cube complexes.
In 2010, Wright [29] proved that the asymptotic dimension of a CAT(0) cube
complex X is bounded by its dimension. He proved this by constructing a family
of ε−Lipschitz cobounded maps to CAT(0) cube complexes of (at most) the same
dimension indexed by ε > 0. We use our characterization for finite asymptotic
dimension to give a direct proof of this result. Namely, we construct uniformly
bounded covers with suitable properties. Being more explicit, this proof loses,
however, the sharp bound on the asymptotic dimension. Thus, we give an alternative proof of the following non-quantitative variant of Wright’s theorem:
Theorem 1.1. Let X be a CAT(0) cube complex of finite dimension, then X has finite
asymptotic dimension.
The key point in our approach is to analyse the normal cube path distance on
the cube complex, introduced by Niblo and Reeves [18]. We consider the ball
with respect to the normal cube path distance rather than to the ordinary edge-path distance. We decompose such a ball into intervals and use induction on
the dimension in order to construct some “separated” net satisfying a suitable
consistency property. In the process, we give a detailed analysis of normal balls
and normal spheres (i.e. balls and spheres with respect to the normal cube path
distance). See Section 4 for all details.
Our second application is to coarse median spaces. They were introduced
by Bowditch as a coarse variant of classical median spaces [6]. The notion of a
coarse median group leads to a unified viewpoint on several interesting classes
of groups, including Gromov’s hyperbolic groups, mapping class groups, and
CAT(0) cubical groups. Bowditch showed that hyperbolic spaces are exactly
coarse median spaces of rank 1, and mapping class groups are examples of coarse
median spaces of finite rank [6]. He also established interesting properties for
coarse median spaces such as Rapid Decay, the property of having quadratic
Dehn function, etc.
Intuitively, a coarse median space is a metric space equipped with a ternary
operator (called the coarse median), in which every finite subset can be approximated by a finite median algebra. In these approximations the coarse median
is approximated by an actual median with the distortion being controlled by the
metric. This extends Gromov’s observation that in a δ-hyperbolic space finite
subsets can be well approximated by finite trees.
Recently, S̆pakula and Wright proved that a coarse median space with finite
rank and at most exponential volume growth has Property A [28]. Following
their proof and using our characterization for asymptotic dimension growth, we
obtain the following result:
Theorem 1.2. Let X be a geodesic coarse median space with finite rank and at most
exponential volume growth, then X has subexponential asymptotic dimension growth.
Hierarchically hyperbolic spaces are examples of coarse median spaces, see [2],
hence our theorem is broader in scope, though with a weaker conclusion, than the
finite asymptotic dimension result proven in [1]. We expect the following general
result.
Conjecture 1.3. Every geodesic coarse median space with finite rank has finite asymptotic
dimension.
By a result of Ozawa [22], subexponential asymptotic dimension growth implies
Property A, thus, our theorem strengthens the result of [28].
The paper is organized as follows. In Section 2, we give some preliminaries
on asymptotic dimension growth, CAT(0) cube complexes, and coarse median
spaces. In Section 3, we provide a characterization of the asymptotic dimension
growth function, and, as a special case, give a characterization of finite asymptotic
dimension. Sections 4 and 5 deal with CAT(0) cube complexes: in Section 4, we
study normal balls and spheres which are essential in our approach to prove
Theorem 1.1 in Section 5. Section 6 deals with the coarse median case, and we
prove Theorem 1.2 there.
2. Preliminaries
2.1. Asymptotic Dimension. The notion of asymptotic dimension was first introduced by Gromov in 1993 [15] as a coarse analogue of the classical Lebesgue
topological covering dimension. See also [3].
Let (X, d) be a metric space and r > 0. We call a family U = {Ui } of subsets in X
r−disjoint, if for any U , U′ in U, d(U, U′) > r, where d(U, U′) = inf {d(x, x′) : x ∈
U, x′ ∈ U′ }. We write
⊔_{r−disjoint} Ui
for the union of {Ui }. A family V is said to be uniformly bounded, if mesh(V) =
sup {diam(V) : V ∈ V} is finite. Let U = {Ui } be a cover of X and r > 0. We define
the r−multiplicity of U, denoted by mr (U), to be the minimal integer n such that
for any x ∈ X, the ball B(x, r) intersects at most n elements of U. As usual, m(U)
denotes the multiplicity of a cover U, that is the maximal number of elements of
U with a non-empty intersection. A number λ > 0 is called a Lebesgue number of
U, if for every subset A ⊆ X with diameter 6 λ, there exists an element U ∈ U
such that A ⊆ U. The Lebesgue number L(U) of the cover U is defined to be the
infimum of all Lebesgue numbers of U.
Definition 2.1 ([15]). We say that the asymptotic dimension of a metric space X does not exceed n and we write asdim X ≤ n, if for every r > 0, the space X can be covered by n + 1 subspaces X0, X1, . . . , Xn, and each Xi can be further decomposed into some r−disjoint uniformly bounded subspaces:
X = ∪_{i=0}^{n} Xi,   Xi = ⊔_{r−disjoint, j∈N} X_i^j,   and sup_{i,j} diam X_i^j < ∞.
We say asdim X = n, if asdim X ≤ n and asdim X is not less than n.
Here are basic examples of spaces and groups with finite asymptotic dimension.
Example 2.2 ([20], [24]).
1) asdim Zn = n for all n ∈ N, where Z is the group of integers;
2) Gromov’s δ-hyperbolic spaces, e.g., word hyperbolic groups, have finite asymptotic
dimension.
From the definition, it is easy to see that the asymptotic dimension of a subspace
is at most that of the ambient space. There are other equivalent definitions of
asymptotic dimension. We list one here for a later use, and guide the reader to [3]
for others.
Proposition 2.3 ([3]). Let X be a metric space, then asdim X ≤ n if and only if for any r > 0, there exists a uniformly bounded cover U of X, such that mr(U) ≤ n + 1.
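For a finite metric space the r-multiplicity in Proposition 2.3 can be computed by brute force; the small sketch below is illustrative only (the function name and signature are ours).

```python
def r_multiplicity(points, cover, dist, r):
    """m_r(U) for a finite metric space: the maximum, over x, of the number of
    cover elements met by the ball B(x, r).  `cover` is a list of sets of
    points and `dist` a symmetric distance function (illustrative sketch)."""
    def meets_ball(U, x):
        return any(dist(x, y) <= r for y in U)
    return max(sum(1 for U in cover if meets_ball(U, x)) for x in points)

# example on {0,...,9} with the usual metric and the cover by pairs {0,1},{2,3},...
pts = range(10)
cov = [{2 * i, 2 * i + 1} for i in range(5)]
assert r_multiplicity(pts, cov, lambda a, b: abs(a - b), 1) == 2
```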
2.2. Asymptotic Dimension Growth.
Let us consider the direct sum of infinitely many copies of the integers: G = ⊕_∞ Z. Since for any n ∈ N, the group Z^n is contained in G, by the above mentioned results, G has infinite asymptotic
dimension. In order to deal with such groups/spaces, Dranishnikov studied the
following concept as a generalization of the property of having a finite asymptotic
dimension.
Definition 2.4 ([13]). Let (X, d) be a metric space. Define a function
adX (λ) = min{m(U) : U is a cover of X, L(U) > λ} − 1,
which is called the asymptotic dimension function of X.
Note that adX is monotonic and
lim adX (λ) = asdim (X).
λ→∞
Like in the case of the volume function, the growth type of the asymptotic
dimension function is more essential than the function itself. Recall that for
f, g : R+ → R+, we write f ≼ g, if there exists k ∈ N, such that for any x > k, f(x) ≤ kg(kx + k) + k. We write f ≈ g if both f ≼ g and g ≼ f. It is clear that “≈” is
an equivalence relation. We define the growth type of f to be the ≈-equivalence
class of f . Define the asymptotic dimension growth of X to be the growth type of
adX .
By a result of Bell and Dranishnikov, the growth type of the asymptotic dimension function is a quasi-isometric invariant.
Proposition 2.5 ([4, 13]). Let X and Y be two discrete metric spaces with bounded
geometry. If X and Y are quasi-isometric, then adX ≈ adY . In particular, the asymptotic
dimension growth is well-defined for finitely generated groups.
We give an alternative (equivalent) definition of the asymptotic dimension
growth that is used in our characterization.
Lemma 2.6. Let X be a metric space, and define
ãdX(λ) = min{ mλ(U) : U is a cover of X } − 1.
Then ãdX ≈ adX.
Proof. Given λ > 0, suppose U is a cover of X with L(U) > λ. For any U ∈ U,
define the inner λ−neighborhood of U to be
N−λ (U) = X \ Nλ (X \ U),
where Nλ denotes the usual λ-neighborhood of the set, and we define
N−λ (U) = {N−λ (U) : U ∈ U}.
Since L(U) > λ, N−λ (U) is still a cover of X. By definition, it is obvious that
f X adX .
mλ(N−λ (U)) 6 m(U), which yields ad
Conversely suppose U is a cover of X. Consider Nλ (U), which has Lebesgue
number not less than λ. It is easy to show m(Nλ(U)) ≤ mλ(U), which implies adX ≼ ãdX.
By the preceding lemma, we can use either adX or ãdX as the definition for the asymptotic dimension function. Recall that if there exists a polynomial (subexponential) function f such that adX ≼ f, then X is said to have polynomial
(subexponential) asymptotic dimension growth.
Dranishnikov has shown that polynomial asymptotic dimension growth implies Yu’s Property A, and he gave a class of groups having such property.
Proposition 2.7 ([13]). Let N be a finitely generated nilpotent group and G be a finitely
generated group with finite asymptotic dimension. Then the wreath product N ≀ G has
polynomial asymptotic dimension growth. In particular, Z ≀ Z has polynomial asymptotic
dimension growth.
2.3. CAT(0) Cube Complexes. We recall basic notions and results on the structure
of CAT(0) cube complexes. We omit some details and most of the proofs but direct
the readers to [8, 12, 14, 18, 26] for more information.
A cube complex is a polyhedral complex in which each cell is isometric to a Euclidean cube and the gluing maps are isometries. The dimension of the complex is
the maximum of the dimensions of the cubes. For a cube complex X, we can associate it with the intrinsic pseudo-metric dint , which is the minimal pseudo-metric
on X such that each cube embeds isometrically. When X has finite dimension, dint
is a complete geodesic metric on X. See [8] for a general discussion on polyhedral
complex and the associated intrinsic metric.
There is also another metric associated with X. Let X(1) be the 1-skeleton of X,
that is a graph with the vertex set V = X(0) . We equip V with the edge-path metric
d, which is the minimal number of edges in a path connecting two given vertices.
Clearly, when X(1) is connected, d is a geodesic metric on V. For x, y ∈ V, the
interval is defined by [x, y] = {z ∈ V : d(x, y) = d(x, z) + d(z, y)}, that is, it consists of
all points on any geodesic between x and y.
A geodesic metric space (X, d) is CAT(0) if all geodesic triangles in X are slimmer
than the comparison triangle in the Euclidean plane. For a cube complex (X, dint ),
Gromov has given a combinatorial characterization of the CAT(0) condition [14]:
X is CAT(0) if and only if it is simply connected and the link of each vertex is a
flag complex (see also [8]).
Another characterization of the CAT(0) condition was obtained by Chepoi [12]
(see also [25]): a cube complex X is CAT(0) if and only if for any x, y, z ∈ V,
the intersection [x, y] ∩ [y, z] ∩ [z, x] consists of a single point µ(x, y, z), which is
called the median of x, y, z. In this case, we call the graph X(1) a median graph;
and V equipped with the ternary operator m is indeed a median algebra [16]. In
particular, the following equations hold: ∀x, y, z, u, v ∈ V,
M1. µ(x, x, y) = x;
M2. µ(σ(x), σ(y), σ(z)) = µ(x, y, z), where σ is any permutation of {x, y, z};
M3. µ(µ(x, y, z), u, v) = µ(µ(x, u, v), µ(y, u, v), z).
Obviously, µ(x, y, z) ∈ [x, y], and [x, y] = {z ∈ V : µ(x, y, z) = z}.
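As a concrete illustration (not from the paper): in the standard cubulation of Z^n the median of three vertices is simply the coordinatewise median, which makes the axioms M1–M3 easy to check by hand.

```python
def median_zn(x, y, z):
    """Median of three vertices of Z^n (standard cubulation): the coordinatewise median."""
    return tuple(sorted(t)[1] for t in zip(x, y, z))

# M1 and M2 are immediate to check on examples:
assert median_zn((0, 3), (0, 3), (5, 1)) == (0, 3)                     # mu(x, x, y) = x
assert median_zn((0, 3), (5, 1), (2, 2)) == median_zn((5, 1), (2, 2), (0, 3))
```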
Lemma 2.8. Let x, y, z, w ∈ V such that z, w ∈ [x, y]. Then z ∈ [x, w] implies w ∈ [z, y].
Proof. Since z ∈ [x, w] and w ∈ [x, y], we have µ(z, x, w) = z and µ(x, w, y) = w. So,
µ(z, w, y) = µ(µ(z, x, w), w, y) = µ(µ(z, w, y), µ(x, w, y), w) = µ(µ(z, w, y), w, w) = w,
which implies w ∈ [z, y].
Lemma 2.9. For x, y, z ∈ V and d(z, y) = 1, [x, z] ⊆ [x, y] or [x, y] ⊆ [x, z].
Proof. By Chepoi’s result [12], X(1) is a median graph, hence it is weakly modular
(see [12]). So d(x, y) ≠ d(x, z), which implies d(x, y) = d(x, z) + 1 or d(x, z) = d(x, y) + 1,
i.e. [x, z] ⊆ [x, y] or [x, y] ⊆ [x, z].
A CAT(0) cubical complex X can be equipped with a set of hyperplanes [11,
18, 19, 26]. Each hyperplane does not intersect itself, and divides the space into
two halfspaces. Given two hyperplanes h, k, if the four possible intersections of
halfspaces are all nonempty, then we say h crosses k, denoted by h ⋔ k. This
occurs if and only if h and k cross a common cube C (also denoted by h ⋔ C). Furthermore [26], given a maximal collection of pairwise intersecting hyperplanes,
there exists a unique cube which all of them cross. Thus, the dimension of X is
the maximal number of pairwise intersecting hyperplanes. We can also define
intervals in the language of hyperplanes: [x, y] consists of points which lie in all
halfspaces containing x and y.
We call a subset Y ⊆ V convex, if for any x, y ∈ Y, [x, y] ⊆ Y. Obviously,
halfspaces are convex since any geodesic crosses a hyperplane at most once [18,
26]. This also implies
d(x, y) = ♯{ hyperplane h : h separates x from y}.
2.4. Coarse Median Spaces. According to Gromov, hyperbolic spaces can be
considered locally as a coarse version of trees, in the sense that every finite subset
can be approximated by a finite tree in a controlled way [14]. If one wants to
approximate a space locally by finite median algebras (graphs), this would turn
to the definition of coarse median spaces introduced by Bowditch. See [6, 7, 32]
for details.
Definition 2.10 ([6]). Let (X, ρ) be a metric space, and µ : X3 → X be a ternary
operation. We say that (X, ρ, µ) is a coarse median space and µ is a coarse median on
X, if the following conditions hold:
C1. There exist constants K, H(0) > 0 such that ∀a, b, c, a′, b′ , c′ ∈ X,
ρ(µ(a, b, c), µ(a′, b′, c′)) ≤ K(ρ(a, a′) + ρ(b, b′) + ρ(c, c′)) + H(0).
C2. There exists a function H : N → [0, +∞) with the following property. For a finite subset A ⊆ X with 1 ≤ |A| ≤ p, there exists a finite median algebra (Π, ρΠ, µΠ), and maps π : A → Π, λ : Π → X such that ∀x, y, z ∈ Π, a ∈ A,
ρ(λµΠ(x, y, z), µ(λx, λy, λz)) ≤ H(p),
and
ρ(a, λπa) ≤ H(p).
We refer to K, H as the parameters of (X, ρ, µ). Furthermore, if there exists η ∈ N such that we can always choose the median algebra Π in condition (C2) above of rank at most η, then we say X has (coarse) rank at most η.
A finitely generated group is said to be coarse median if some Cayley graph has
a coarse median.
Note that, by definition, a coarse median on a group is not required to be
equivariant under the group action.
Remark 2.11. According to Bowditch, without loss of generality, we may always
assume that µ satisfies the median axioms M1 and M2: for all a, b, c ∈ X,
M1. µ(a, a, b) = a;
M2. µ(a, b, c) = µ(b, c, a) = µ(b, a, c).
A large class of groups and spaces have been shown to be coarse median,
including Gromov’s hyperbolic groups, right-angled Artin groups, mapping class
groups, CAT(0) cube complexes, etc. [6]. Bowditch has proved that coarse median
groups have Property of Rapid Decay [7], quadratic Dehn’s function [6], etc.
This yielded a unified way to prove these properties for the above-listed groups.
Recently, S̆pakula and Wright have proved that coarse median spaces of finite
rank and of at most exponential volume growth have Yu’s Property A [28].
3. Characterization for asymptotic dimension growth
In this section, we establish a characterization for asymptotic dimension growth
and obtain several interesting consequences of this main result. For instance, we
get a characterization for a group to have finite asymptotic dimension.
Theorem 3.1. Let (X, d) be a discrete metric space, and f : R+ → R+ be a function. Then the following are equivalent.
(1) adX ≼ f;
(2) There exists a function g : R+ → R+ which has the same growth type as f , such that ∀l ∈ N, ∀k = 1, 2, . . . , 3l, ∀x ∈ X, we can assign a subset S(x, k, l) ⊆ X, satisfying:
i) ∀l ∈ N, ∃Sl > 0, such that S(x, k, l) ⊆ B(x, Sl ) for all k = 1, . . . , 3l;
ii) ∀l ∈ N, ∀k, k′ with 1 ≤ k ≤ k′ ≤ 3l, ∀x ∈ X, we have S(x, k, l) ⊆ S(x, k′ , l);
iii) ∀x, y ∈ X with d(x, y) ≤ l, we have:
• S(x, k − d(x, y), l) ⊂ S(x, k, l) ∩ S(y, k, l), for k = d(x, y) + 1, . . . , 3l;
• S(x, k + d(x, y), l) ⊃ S(x, k, l) ∪ S(y, k, l), for k = 1, . . . , 3l − d(x, y);
iv) ∀l ∈ N, ∀k = 1, 2, . . . , 3l, ∀x ∈ X, we have ♯S(x, k, l) ≤ g(l).
Proof.
• 1 ⇒ 2: By Lemma 2.6, we can assume that there exists a function g : R+ → R+ with g ≈ f such that ∀l ∈ N, there exists a uniformly bounded cover U of X with m3l(U) ≤ g(l). Suppose U = {Ui : i ∈ I}, and choose xi ∈ Ui . For k = 1, 2, . . . , 3l and x ∈ X, we define
S(x, k, l) = {xi : B(x, k) ∩ Ui ≠ ∅}.
Now let us check the four properties in Condition 2.
i) If B(x, k) ∩ Ui ≠ ∅, we can assume y ∈ B(x, k) ∩ Ui . Now d(y, xi) ≤ mesh(U), so d(x, xi) ≤ k + mesh(U) ≤ 3l + mesh(U). In other words, S(x, k, l) ⊆ B(x, 3l + mesh(U)).
ii) It is immediate, by our definition of the sets S(x, k, l).
iii) ∀x, y ∈ X with d(x, y) ≤ l, ∀k = d(x, y) + 1, . . . , 3l, we have:
S(x, k − d(x, y), l) = {xi : B(x, k − d(x, y)) ∩ Ui ≠ ∅}.
Now if B(x, k − d(x, y)) ∩ Ui ≠ ∅, we can assume z ∈ B(x, k − d(x, y)) ∩ Ui , i.e. z ∈ Ui and d(z, x) ≤ k − d(x, y). So d(z, y) ≤ k, i.e. z ∈ B(y, k) ∩ Ui . So B(y, k) ∩ Ui ≠ ∅, which implies
S(x, k − d(x, y), l) ⊂ S(x, k, l) ∩ S(y, k, l).
On the other hand, ∀k′ = 1, . . . , 3l − d(x, y), suppose xj ∈ S(x, k′ , l) ∪ S(y, k′ , l). We can assume that xj ∈ S(y, k′ , l), i.e. B(y, k′ ) ∩ Uj ≠ ∅, which implies B(x, k′ + d(x, y)) ∩ Uj ≠ ∅. So we have:
S(x, k′ + d(x, y), l) ⊃ S(x, k′ , l) ∪ S(y, k′ , l).
iv) ∀l ∈ N, ∀k = 1, 2, . . . , 3l, ∀x ∈ X, we have
♯S(x, k, l) = ♯{xi : B(x, k) ∩ Ui ≠ ∅} ≤ m3l(U) ≤ g(l).
• 2 ⇒ 1: ∀l ∈ N, let H = ∪_{x∈X} S(x, l, l). Also, ∀h ∈ H, we define Ah = {y : h ∈ S(y, l, l)}. We define Ul = {Ah : h ∈ H}. Since ∀x ∈ X, if we take h ∈ S(x, l, l), then x ∈ Ah . So Ul is a cover of X. Since ∃Sl > 0 such that S(x, l, l) ⊆ B(x, Sl ), we know that d(h, y) ≤ Sl for all y ∈ Ah , which implies mesh(Ul) ≤ Sl .
Finally, let us analyse ml(Ul). ∀x ∈ X, consider h ∈ H with B(x, l) ∩ Ah ≠ ∅. Take y ∈ B(x, l) ∩ Ah , i.e. d(y, x) ≤ l and h ∈ S(y, l, l). Now by the assumptions in Condition 2, we have
S(y, l, l) ⊆ S(x, l + d(x, y), l) ⊆ S(x, 2l, l).
So ml(Ul) ≤ ♯S(x, 2l, l) ≤ g(l). Finally, by Lemma 2.6, we have
adX ≈ ãdX ≼ g ≈ f.
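The direction 1 ⇒ 2 is entirely constructive; for a finite metric space it can be written in a few lines of Python. The sketch below is illustrative only (the names sets_from_cover and reps are ours) and simply implements S(x, k, l) = {x_i : B(x, k) ∩ U_i ≠ ∅}.

```python
def sets_from_cover(points, cover, dist, l):
    """Direction (1) => (2) of Theorem 3.1, for a finite space (illustrative):
    fix a point x_i in each cover element U_i and set
    S(x, k, l) = {x_i : B(x, k) meets U_i}, for k = 1, ..., 3l."""
    reps = [next(iter(U)) for U in cover]            # a chosen representative x_i of each U_i
    S = {}
    for x in points:
        for k in range(1, 3 * l + 1):
            S[(x, k)] = {reps[i] for i, U in enumerate(cover)
                         if any(dist(x, y) <= k for y in U)}
    return S
```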
Taking in the preceding theorem a constant function f , we obtain a characterization for finite asymptotic dimension.
Corollary 3.2. Let (X, d) be a discrete metric space, n ∈ N. Then the following are
equivalent:
(1) asdim X ≤ n;
(2) ∀l ∈ N, ∀k = 1, 2, . . . , 3l, ∀x ∈ X, we can assign a subset S(x, k, l) ⊆ X, satisfying:
i) ∀l ∈ N, ∃Sl > 0, such that S(x, k, l) ⊆ B(x, Sl ) for all k = 1, . . . , 3l;
ii) ∀l ∈ N, ∀k, k′ with 1 ≤ k ≤ k′ ≤ 3l, ∀x ∈ X, we have S(x, k, l) ⊆ S(x, k′ , l);
iii) ∀x, y ∈ X with d(x, y) ≤ l, we have:
• S(x, k − d(x, y), l) ⊂ S(x, k, l) ∩ S(y, k, l), for k = d(x, y) + 1, . . . , 3l;
• S(x, k + d(x, y), l) ⊃ S(x, k, l) ∪ S(y, k, l), for k = 1, . . . , 3l − d(x, y);
iv) ∀l ∈ N, ∀k = 1, 2, . . . , 3l, ∀x ∈ X, we have ♯S(x, k, l) ≤ n + 1.
Now we turn to the case when X is a graph, and obtain a characterization for
finite asymptotic dimension which is easier to check.
Corollary 3.3. Given a graph X = (V, E) with vertices V and edges E, and equipped with
the edge-path length metric d, then the following are equivalent:
(1) asdim X ≤ n;
(2) ∀l ∈ N, ∀k = 1, 2, . . . , 3l, ∀x ∈ X, we can assign a subset S(x, k, l) ⊆ X, satisfying:
i) ∀l ∈ N, ∃Sl > 0, such that S(x, k, l) ⊆ B(x, Sl ) for all k = 1, . . . , 3l;
ii) ∀l ∈ N, ∀k, k′ with 1 ≤ k ≤ k′ ≤ 3l, ∀x ∈ X, we have S(x, k, l) ⊆ S(x, k′ , l);
iii) ∀x, y ∈ X with d(x, y) = 1 (i.e. with x and y connected by an edge), we have:
S(y, k, l) ⊆ S(x, k + 1, l) for all k = 1, 2, . . . , 3l − 1;
iv) ∀l ∈ N, ♯S(x, 2l, l) ≤ n + 1.
Remark 3.4. The only distinction between the above two corollaries is that in Corollary 3.3, assumption 2/iii) is required only for endpoints of an edge rather than
for an arbitrary pair of points as in Corollary 3.2. We point out that the preceding corollaries can be generalized to the case of arbitrary asymptotic dimension
growth. We will not use such a generalization, so we omit it.
Proof of Corollary 3.3.
1 ⇒ 2 is implied directly by Corollary 3.2, so we focus on 2 ⇒ 1.
Following the proof of 2 ⇒ 1 in Theorem 3.1, ∀l ∈ N, let H = ∪_{x∈X} S(x, l, l). And ∀h ∈ H, define Ah = {y : h ∈ S(y, l, l)}. Define Ul = {Ah : h ∈ H}. Since ∀x ∈ X, if we take h ∈ S(x, l, l), then x ∈ Ah . So Ul is a cover of X. Since ∃Sl > 0 such that S(x, l, l) ⊆ B(x, Sl ), we know d(h, y) ≤ Sl for all y ∈ Ah , which implies mesh(Ul) ≤ Sl . Finally, let us analyse ml(Ul). ∀x ∈ X, consider h ∈ H with B(x, l) ∩ Ah ≠ ∅. Take y ∈ B(x, l) ∩ Ah , i.e. d(y, x) ≤ l and h ∈ S(y, l, l). By the definition of the edge-path length metric d, we know that there exists a sequence of vertices y = y0 , y1 , . . . , yk = x such that yi ∈ V for all i = 0, 1, . . . , k, d(yi , yi+1) = 1 for all i = 0, 1, . . . , k − 1, and k ≤ l. Now by the hypothesis, we know:
S(y, l, l) ⊆ S(y1 , l + 1, l) ⊆ S(y2 , l + 2, l) ⊆ . . . ⊆ S(yk , l + k, l) = S(x, k + l, l) ⊆ S(x, 2l, l).
So {h ∈ H : B(x, l) ∩ Ah ≠ ∅} ⊆ S(x, 2l, l), which implies ml(Ul) ≤ ♯S(x, 2l, l) ≤ n + 1.
4. Normal cube path and normal distance
In the next two sections, we focus on CAT(0) cube complexes, and prove Theorem 1.1. We prove it by constructing a uniformly bounded cover with suitable
properties. Such a construction relies deeply on the analysis of normal balls and
spheres, which we give in this section.
Normal cube paths, which were introduced by Niblo and Reeves in [18] play
a key role in the construction of the cover. They determine a distance function
on the vertices and the balls and spheres defined in terms of this distance are
essential in our proof of Theorem 1.1.
Throughout this section we fix a CAT(0) cube complex X with a fixed vertex
x0 . The 1-skeleton X(1) of X is a graph with vertex set V = X(0) , and edge set E,
which gives us the edge metric d on V. This is the restriction of the ℓ1 metric to the
0-skeleton.
4.1. Normal cube paths. Given a cube C ∈ X, we denote by St(C) the union of all
cubes which contain C as a subface.
Definition 4.1 ([18]). Let {Ci }ni=0 be a sequence of cubes such that each cube has
dimension at least 1, and Ci−1 ∩ Ci consists of a single point, denoted by vi .
• Call {Ci }ni=0 a cube path, if Ci is the (unique) cube of minimal dimension
containing vi and vi+1 , i.e. vi and vi+1 are diagonally opposite vertices of Ci .
Define v0 to be the vertex of C0 diagonally opposite to v1 , and vn+1 to be the
vertex of Cn diagonally opposite to vn . The so-defined vertices {vi}_{i=0}^{n+1} are called the vertices of the cube path, and we say the cube path is from v0 to
vn+1 .
• The length of a cube path is the number of the cubes in the sequence.
• A cube path is called normal if Ci ∩ St(Ci−1 ) = vi .
Normal cube paths in CAT(0) cube complexes behave like geodesics in trees.
More precisely, in [18], the existence and uniqueness of normal cube paths connecting any pair of vertices is established. See also [23].
Proposition 4.2 ([18]). For any two vertices x, y ∈ V, there exists a unique normal cube
path from x to y. (Note that the order is important here since in general normal cube
paths are not reversible).
Proposition 4.3 ([18]). The intersection of a normal cube path and a hyperplane is
connected. In other words, a normal cube path crosses a hyperplane at most once.
Proposition 4.4 ([18], [10]). Let {Ci}_{i=0}^{n} and {Dj}_{j=0}^{m} be two normal cube paths in X, and let {vi}_{i=0}^{n+1} and {wj}_{j=0}^{m+1} be the vertices of these normal cube paths. If d(v0 , w0 ) ≤ 1 and d(vn+1 , wm+1 ) ≤ 1, then for all k, we have d(vk , wk ) ≤ 1.
We omit the proofs for the above three propositions, the readers can find them
in the original paper. However, let us recall the construction of the normal cube
path from x to y as follows: consider all the hyperplanes separating x from y
and adjacent to x. The key fact is that these hyperplanes all cross a unique cube
adjacent to x lying in the interval from x to y. This cube is defined to be the first
cube on the normal cube path; then one proceeds inductively to construct the
required normal cube path.
We will also need the following lemma, abstracted from [18].
Lemma 4.5. Let {Ci }ni=0 be the normal cube path, and h be a hyperplane. If h ⋔ Ci , then
∃ a hyperplane k, such that k ⋔ Ci−1 and h does not intersect with k.
Proof. Otherwise, ∀k ⋔ Ci−1 , we have h ⋔ k. Now by Lemma 2.15 in [18], we know
that there exists a cube C ∈ X, such that all such k ⋔ C and h ⋔ C, and Ci−1 is a face
of C. Moreover, C contains an edge e of Ci since h ⋔ C. So St(Ci−1 ) ∩ Ci contains e,
which is a contradiction to the definition of normal cube path.
Now for any two vertices of X, we consider all the hyperplanes separating them,
with a partial order by inclusion. More explicitly, for any x, y ∈ V, let H(x, y) be
the set of hyperplanes separating x and y. For any h ∈ H(x, y), let h− be the
halfspace containing x. Define h ≤ k if h− ⊆ k− . Note that the definition depends
on the vertices we choose, and we may change them under some circumstances,
but still write h− for abbreviation. To avoid ambiguity, we point out the vertices
if necessary. We write h < k to mean a strict containment h− ( k− .
Lemma 4.6. For any h, k ∈ H(x, y), h and k do not intersect if and only if h ≤ k or k ≤ h.
Proof. We only need to show the necessity. Let {Ci }ni=0 be the normal cube path
from x to y, and assume h ⋔ Ci , k ⋔ Cj . Since h and k do not intersect, i ≠ j.
Assume i < j. Obviously, x ∈ h− ∩ k− , and y ∈ h+ ∩ k+ . Since h ⋔ Ci and k ⋔ C j ,
by Proposition 4.3, vi+1 ∈ h+ ∩ k− . Since h does not intersect with k, we have
h− ∩ k+ = ∅, which implies h− ⊆ k− .
Combining the above two lemmas, we have the following result on the existence
of chains in H(x, y).
Proposition 4.7. Let {Ci }ni=0 be the normal cube path from x to y, and h be a hyperplane
such that h ⋔ Cl . Then there exists a chain of hyperplanes h0 < h1 < · · · < hl−1 , hl = h
such that hi ⋔ Ci .
Proof. By Lemma 4.5, there exists a hyperplane k, such that k ⋔ Cl−1 and h does
not intersect with k. Define hl−1 = k. Inductively, we can define a sequence of
hyperplanes as required. Then the conclusion follows by Lemma 4.6.
Finally, we give a lemma used in the proof of the consistency part of our main
theorem.
Lemma 4.8. Let x0 , x, y ∈ V with [x0 , y] ⊆ [x0 , x], and let x′ , y′ be the n-th vertices on the normal cube paths from x0 to x and from x0 to y, respectively. If x′ ≠ y′ , then x′ ∉ [x0 , y].
Proof. Otherwise, x′ ∈ [x0 , y]. By the construction of the normal cube path, we
know x′ is also the n−th vertex on the normal cube path from x0 to y, since
y ∈ [x0 , x]. In other words, x′ = y′ , which is a contradiction to the assumption.
4.2. Normal metric. We define a new metric on V = X(0) using normal cube
paths [18, 23].
Definition 4.9. For any x, y ∈ V, define dnor (x, y) to be the length of the normal
cube path from x to y. We call dnor the normal metric on V.
One needs to verify that dnor is indeed a metric. It is easy to see that dnor (x, y) ≥ 0,
and dnor (x, y) = 0 if and only if x = y. Note that the normal cube path from x to y
is not the one from y to x in general, so the symmetric relation is not that obvious.
In order to show the symmetric relation and the triangle inequality, we give the
following characterization.
Lemma 4.10. For x, y ∈ V, let < be the relation defined as above. Then
dnor (x, y) = sup{m + 1 : h0 < h1 < · · · < hm , hi ∈ H(x, y)}.
Proof. Suppose {Ci }ni=0 is the normal cube path from x to y, so dnor (x, y) = n + 1.
Denote the right hand side of the equality in the lemma by n′ . Now for any chain
h0 < h1 < · · · < hm in H(x, y), by Proposition 4.3, hi intersects with just one cube, denoted by Ck(i). Obviously, if h, k ⋔ Ci, then h ⋔ k. So k(i) ≠ k(j) if i ≠ j, which implies m ≤ n, so n′ ≤ n.
On the other hand, for any h ⋔ Cn , by Proposition 4.7, we have a chain of
hyperplanes h0 < h1 < · · · < hn−1 < hn = h, such that hi ⋔ Ci , which implies
n 6 n′ .
Proposition 4.11. dnor is indeed a metric on V.
Proof. By Lemma 4.10, H(x, y) = H(y, x), and as posets they carry opposite orders.
One can thus deduce dnor (x, y) = dnor (y, x). For x, y, z ∈ V, H(x, y)△H(y, z) =
H(x, z), where △ is the symmetric difference operation. The inclusions of H(x, y) ∩
H(x, z) into H(x, y) and H(y, z) ∩ H(x, z) into H(y, z) are both order preserving, and
therefore, by Lemma 4.10, we have dnor (x, z) 6 dnor (x, y) + dnor (y, z).
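As a concrete illustration (a sketch we add here for the reader, not part of the original argument), consider the standard cubulation of Z², where the hyperplanes are the "walls" between consecutive integer values of each coordinate. Two separating hyperplanes fail to be transverse exactly when they are parallel, so by Lemma 4.10 the normal metric reduces to the ℓ∞ distance. A short Python sketch (all names are illustrative) checks this on an example:

def separating_hyperplanes(x, y):
    # Hyperplanes of Z^2 separating x from y, encoded as (coordinate, position):
    # (c, i) is the wall between i and i+1 in coordinate c.
    hps = []
    for c in range(2):
        lo, hi = sorted((x[c], y[c]))
        hps.extend((c, i) for i in range(lo, hi))
    return hps

def longest_chain(hps):
    # Two separating hyperplanes are comparable (nested) iff they are parallel,
    # so a longest chain is a largest family of parallel separating hyperplanes.
    return max(sum(1 for (c, _) in hps if c == axis) for axis in range(2))

x, y = (0, 0), (3, 5)
d_nor = longest_chain(separating_hyperplanes(x, y))
print(d_nor, max(abs(x[0] - y[0]), abs(x[1] - y[1])))   # both equal 5

In this special case the edge metric is the ℓ¹ distance, while the normal metric computed above is the ℓ∞ distance, which already illustrates that the two metrics are genuinely different.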
4.3. Normal balls and normal spheres. Recall that for any two points x, y in
V = X(0) , the interval between them is
[x, y] = {z ∈ V : d(x, y) = d(x, z) + d(z, y)}.
In other words, [x, y] is the set of vertices on the union of all the edge geodesics
from x to y. A subset Y ⊆ V is called convex, if for any x, y ∈ Y, [x, y] ⊆ Y.
Now let B(x, n) be the closed ball in the edge metric with centre x ∈ V and
radius n. Generally, B(x, n) is not convex (for example, take X = Z2 ). However, as
we will see, for the normal metric balls are convex. More precisely, we define the
normal ball with centre x ∈ V and radius n to be
Bnor (x, n) = {y ∈ V : dnor (x, y) 6 n}
and the normal sphere with centre x ∈ V and radius n to be
Snor (x, n) = {y ∈ V : dnor (x, y) = n}.
Lemma 4.12. Bnor (x, n) is convex for all x ∈ V and n ∈ N.
Proof. Given z, w ∈ Bnor(x, n) and a geodesic γ from z to w, suppose γ ⊄ Bnor(x, n). Let u be the first vertex on γ which is not in Bnor(x, n), so that dnor(x, u) = n + 1, and let z′ be the vertex preceding u on γ; then dnor(x, z′) = n (since dnor(z′, u) = 1). Since d(z′, u) = 1, there exists a unique hyperplane h separating z′ from u, so H(x, u) = H(x, z′) ⊔ {h}. Now according to Lemma 4.10, there exists a chain h0 < · · · < hn−1 < h in H(x, u) with hi ∈ H(x, z′). Since every geodesic intersects any hyperplane at most once (see for example [26]), w ∈ h+, which implies that h0 < · · · < hn−1 < h is also a chain in H(x, w). This contradicts dnor(x, w) ≤ n, by Lemma 4.10.
Since the intersection of two convex sets is still convex, we have the following
corollary.
Corollary 4.13. For any x ∈ V and n ∈ N, the set [x0 , x] ∩ Bnor (x0 , n) is convex.
It is well known that for a convex subset Y in a CAT(0) cube complex and a point v ∉ Y, there is a unique point in Y which is closest to v (see, for example, [8]).
This statement is true both for the intrinsic CAT(0) metric on the cube complex
and the edge metric on the vertex set, and we have a similar statement for the
normal distance:
Proposition 4.14. There exists a unique point v ∈ [x0 , x] ∩ Bnor (x0 , n) such that
[x0 , x] ∩ Bnor (x0 , n) ⊆ [x0 , v].
The point v is characterized by:
d(x0 , v) = max{d(x0 , v′) : v′ ∈ [x0 , x] ∩ Bnor (x0 , n)}.
Furthermore, if dnor (x0 , x) > n, then v ∈ [x0 , x] ∩ Snor (x0 , n), which implies that v is
also the unique point in [x0 , x] ∩ Snor (x0 , n) such that
d(x0 , v) = max{d(x0 , v′ ) : v′ ∈ [x0 , x] ∩ Snor (x0 , n)}.
Proof. If there exist distinct z, w ∈ [x0, x] ∩ Bnor(x0, n) such that d(x0, z) = d(x0, w) attains the maximum, consider the median m = µ(z, w, x). By Corollary 4.13, m ∈ [z, w] ⊆ [x0, x] ∩ Bnor(x0, n), so d(m, x0) = d(z, x0) = d(w, x0). Moreover m ∈ [z, x] ∩ [w, x], so m = z = w, which is a contradiction.
By Corollary 4.13, [x0, v] ⊆ [x0, x] ∩ Bnor(x0, n). Conversely, for any u ∈ [x0, x] ∩ Bnor(x0, n), let m = µ(u, v, x) ∈ [u, v]. By Corollary 4.13, m ∈ [x0, x] ∩ Bnor(x0, n). Moreover m ∈ [v, x], so d(m, x0) ≥ d(v, x0), which implies m = v by the choice of v, i.e. µ(u, v, x) = v, so v ∈ [u, x]. Now by Lemma 2.8, u ∈ [x0, v].
Now for x, n satisfying dnor(x0, x) > n, if v ∈ [x0, x] ∩ Bnor(x0, n − 1), take a geodesic γ from v to x, and let v = y0, y1, . . . , yk = x be the vertices on γ. Since dnor(x0, x) > n, x ≠ v, which implies k > 0. Now consider y1: since y1 ∈ [v, x], d(x0, v) < d(x0, y1). By the definition of v, we know y1 ∉ [x0, x] ∩ Bnor(x0, n), so y1 ∉ Bnor(x0, n). However, since d(v, y1) = dnor(v, y1) = 1, we have
dnor(x0, y1) ≤ dnor(x0, v) + dnor(v, y1) ≤ n,
which is a contradiction.
To use the above proposition more flexibly, we give another characterization of
v, which can also be viewed as an alternative definition of v. In the rest of this
subsection, we fix x ∈ V and n ∈ N with dnor (x0 , x) > n.
Proposition 4.15. Let {Ci}_{i=0}^{N} be the normal cube path from x0 to x, and let ṽ = vn be the n-th vertex on this normal cube path. Then ṽ = v, where v is the point provided by Proposition 4.14.
To prove this result, let us focus on subsets in H(x0 , x). Recall that H(x0 , x) is
endowed with the relation 6, as defined prior to Lemma 4.6.
Definition 4.16. A subset A ⊆ H(x0 , x) is called closed (under <), if ∀h ∈ A and
k < h, k ∈ A.
Lemma 4.17. Let ṽ be the n-th vertex on the normal cube path from x0 to x, then H(x0 , ṽ)
is maximal in the following sense: for any closed A ⊆ H(x0 , x) which contains chains only
with lengths at most n, A ⊆ H(x0 , ṽ).
Proof. We proceed by induction on n. Suppose that the lemma holds for n − 1, and let v′ be the (n − 1)-th vertex on the normal cube path from x0 to x. Let A ⊆ H(x0, x) be closed and contain only chains of length at most n, and let h0 < h1 < · · · < hm be a maximal chain in A. If m ≤ n − 2, then the closed set {h ∈ A : h ≤ hm} contains only chains of length at most n − 1; by induction, it is contained in H(x0, v′) ⊆ H(x0, ṽ). Now for m = n − 1: similarly, {h ∈ A : h ≤ hn−2} ⊆ H(x0, v′), which implies hi ⋔ Ci for i = 0, 1, . . . , n − 2. So hn−1 ⋔ Ck for some k ≥ n − 1. If k ≠ n − 1, then by Proposition 4.7 and the closedness of A, we get a chain in A with length greater than n, which is a contradiction. So hn−1 ⋔ Cn−1, i.e. hn−1 ∈ H(x0, ṽ).
Proof of Proposition 4.15. By Proposition 4.14, ṽ ∈ [x0 , x] ∩ Bnor (x0 , n) ⊆ [x0 , v], which
implies H(x0 , ṽ) ⊆ H(x0 , v). However, H(x0 , v) is closed and contains chains
only with lengths at most n according to Lemma 4.10, so H(x0 , v) ⊆ H(x0 , ṽ) by
Lemma 4.17, which implies H(x0 , v) = H(x0 , ṽ). So H(v, ṽ) = H(x0 , v)△H(x0 , ṽ) = ∅,
which implies v = ṽ.
Finally, we characterize those points in [x0 , x] which lie in the intersection [x0 , x]∩
Snor (x0 , n). This will be used in the next subsection to decompose [x0 , x] ∩ Snor (x0 , n)
into a union of intervals.
Let Cn−1 be the n-th cube on the normal cube path from x0 to x, and let v = ṽ be the n-th vertex on this cube path, as above. Let Hn be the set of all hyperplanes intersecting Cn−1.
Proposition 4.18. For w ∈ [x0 , x], the following are equivalent:
1) w ∈ [x0 , x] ∩ Snor (x0 , n);
2) ∃h ∈ Hn , such that h crosses the last cube on the normal cube path from x0 to w;
3) ∃h ∈ Hn , such that h separates w from x0 and w ∈ [x0 , v].
Proof.
• 1) ⇒ 3): By Proposition 4.14, w ∈ [x0 , v]. Since dnor (x0 , w) = n, by
Lemma 4.10, the maximum length of chains in H(x0 , w) is n. Take such
a chain h0 < h1 < · · · < hn−1 in H(x0 , w) ⊆ H(x0 , v). Obviously, hi intersects
with different cubes, which implies hi ⋔ Ci . So hn−1 ∈ Hn , and it separates
w from x0 .
• 3) ⇒ 2): Since h separates w from x0, h must cross some cube C on the normal cube path from x0 to w. Since h ∈ Hn, we know there is a chain h0 < h1 < · · · < hn−1 = h in H(x0, v), which is also a chain in H(x0, w). So h cannot cross the first n − 1 cubes of the normal cube path from x0 to w. If h does not cross the last cube, then dnor(x0, w) > n. However, w ∈ [x0, v] implies H(x0, w) ⊆ H(x0, v), so by Lemma 4.10, dnor(x0, v) > n, which is a contradiction since dnor(x0, v) = n.
• 2) ⇒ 1): This is immediate, by Lemma 4.10.
We have another description for Hn , which is implied by Proposition 4.7 directly.
Lemma 4.19. For h ∈ H(x0 , x), h ∈ Hn if and only if the maximal length of chains in
{k ∈ H(x0 , x) : k 6 h} is n.
4.4. Decomposition of [x0 , x] ∩ Snor (x0 , n). We want to decompose the set [x0 , x] ∩
Snor (x0 , n) so that we can proceed by the induction on dimension in the proof of
Theorem 1.1.
Throughout this subsection, we fix x ∈ V and n ∈ N with dnor (x0 , x) > n, and let
v be as defined in Proposition 4.14. At the end of the preceding subsection, we
have defined Hn to be the set of all hyperplanes intersecting with Cn−1 , where {Ci }
is the normal cube path from x0 to x.
Now we decompose [x0 , x]∩Snor (x0 , n) into a union of intervals with dimensions
lower than [x0 , x], and the number of these intervals can be controlled by the
dimension of [x0 , x]. This will make it possible to do induction on the dimension.
Definition 4.20. For h ∈ Hn , we define
Fh = {w ∈ [x0 , x] ∩ Snor (x0 , n) : h separates w from x0 }.
By Proposition 4.18, we immediately obtain the following two lemmas.
Lemma 4.21.
Fh = {w ∈ [x0 , x] : h crosses the last cube on the normal cube path from x0 to w}
= {w ∈ [x0 , v] : h separates w from x0 }.
Lemma 4.22. [x0, x] ∩ Snor(x0, n) = ⋃_{h∈Hn} Fh.
By definition, we know
Fh = [x0 , x] ∩ Bnor (x0 , n) ∩ {v′ : h separates v′ from x0 },
which implies that Fh is convex. Moreover, we will show that Fh is actually an
interval.
Lemma 4.23. Let xh ∈ Fh be the point minimising d(x0 , xh ). Then Fh = [xh , v].
Proof. Since Fh is convex and xh , v ∈ Fh , so [xh , v] ⊆ Fh . On the other hand, ∀z ∈ Fh ,
let m = µ(x0 , z, xh ). So, m ∈ Fh and d(x0 , m) 6 d(x0 , xh ). By the choice of xh , we know
that d(x0 , m) = d(x0 , xh ), which implies m = xh , so xh ∈ [x0 , z]. By Proposition 4.14,
xh , z ∈ [x0 , v]. Thus, by Lemma 2.8, z ∈ [xh , v].
Proposition 4.24. [x0, x] ∩ Snor(x0, n) = ⋃_{h∈Hn} [xh, v], and dim[xh, v] < dim[x0, x].
Proof. We only need to show dim[xh, v] < dim[x0, x]. For any hyperplane k crossing [xh, v], by Proposition 4.18, k ⋔ h; since h itself does not cross [xh, v] (all of Fh lies on one side of h), any family of pairwise crossing hyperplanes meeting [xh, v] can be enlarged by h to a pairwise crossing family meeting [x0, x]. So dim[xh, v] < dim[x0, x].
Now we give another characterization for xh , which is useful in the proof of the
consistency condition of Theorem 1.1.
Lemma 4.25. Let xh be the closest point to x0 on Fh , then xh is the unique point in
Bnor (x0 , n) such that h separates x0 from xh , and for any hyperplane k ⋔ h, k does not
separate xh from x0 .
Proof. Since xh ∈ Fh , we have xh ∈ [x0 , x] ∩ Bnor (x0 , n) and h separates x0 from xh .
Now for any hyperplane k ⋔ h, if k separates xh from x0 , we have xh ∈ h+ ∩ k+ and
x0 ∈ h− ∩ k− . Choose x̃h ∈ [x0 , xh ] such that x̃h ∈ h+ ∩ k− . Since k does not separate
x̃h from x0 , so d(x0 , x̃h) < d(x0 , xh). However, by Lemma 4.21, x̃h ∈ Fh , which is a
contradiction.
It remains to show that xh is the unique point satisfying these conditions. Otherwise, let x̂h be another point satisfying the hypothesis in the lemma with x̂h ≠ xh. Let k be a hyperplane separating x̂h from xh, and assume xh ∈ k−. Obviously, k ≠ h. If k ⋔ h, then by hypothesis k does not separate xh from x0, nor x̂h from x0, which is a contradiction since k separates x̂h from xh. So k does not cross h, which implies h− ⊊ k− by Lemma 4.6. However, x̂h ∈ k+, so by Lemma 4.10, dnor(x0, xh) < dnor(x0, x̂h). This is a contradiction since dnor(x0, xh) ≥ n, as h separates x0 from xh.
4.5. Špakula and Wright's Construction. We conclude this section with a recent application of normal cube paths, which were invoked by Špakula and Wright [28] in order to provide a new proof that finite dimensional CAT(0) cube complexes have Yu's Property A. The key to their proof is the construction of a family of maps hl with the property that, for any interval and any neighbourhood of an endpoint of the interval, the maps push that neighbourhood into the interval itself. These maps were defined in terms of normal cube paths as follows:
Definition 4.26 (The h maps). Given l ∈ N, we define hl : X → X as follows.
For x ∈ X, let hl (x) be the 3l−th vertex on the normal cube path from x to x0 if
dnor (x, x0 ) > 3l; and let it be x0 if dnor (x, x0 ) < 3l.
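In the special case of the standard cubulation of Z^d the maps hl admit a completely explicit description, since the normal cube path towards the basepoint moves every coordinate by one unit per cube. The following Python sketch (an illustration we add here, not taken from [28]; function and variable names are ours) computes hl in that case by truncating each coordinate difference at 3l:

def h_l(x, x0, l):
    # x, x0: tuples of integer coordinates in Z^d; returns the 3l-th vertex on
    # the normal cube path from x to x0 (or x0 itself if the path is shorter).
    step = 3 * l
    out = []
    for xi, x0i in zip(x, x0):
        diff = xi - x0i
        move = min(step, abs(diff))
        out.append(xi - move if diff > 0 else xi + move)
    return tuple(out)

print(h_l((7, -2, 4), (0, 0, 0), l=1))   # (4, 0, 1): each coordinate moved by at most 3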
Lemma 4.27 ([28]). Let hl be defined as above and y ∈ B(x, 3l). Then hl (y) ∈ [x0 , x].
Proof. We only need to show that every halfspace containing x and x0 contains
also z = hl (y). For any hyperplane h such that one of the associated halfspaces,
say h+ , contains x and x0 , either y ∈ h+ or y ∈ h− . In the former case, z ∈ h+ , so we
only need to check the case that h separates x, x0 from y.
Denote by C0 , C1 , . . . , Cm the normal cube path from y to x0 , and denote by
y = v0 , v1, . . . , vm = x0 the vertices on this cube path. We shall argue that any
hyperplane separating y from x, x0 is “used” within the first d(x, y) steps on the
cube path. Suppose that the cube Ci does not cross any hyperplane h with h
separating y from x, x0. Hence every hyperplane k ⋔ Ci separates y, x from x0 , vi+1 .
If there were a hyperplane l separating y from x, x0 that is not crossed by C0, . . . , Ci, then necessarily l separates y, vi+1 from x, x0, hence l crosses all the hyperplanes k crossing Ci. This contradicts the maximality of this step on the normal cube path. Thus, there is no
such l, and so all the hyperplanes h separating y from x, x0 must be crossed within
the first d(x, y) steps.
Since z is the 3l−th vertex on the cube path and d(x, y) 6 3l, all the hyperplanes h
separating y from x, x0 must have been crossed before z. Thus, any such h actually
also separates y from x, x0, z.
We will use the remarkable properties of the h maps to construct the S sets
defined in our characterization of finite asymptotic dimension in the next section.
5. Finite dimensional CAT(0) cube complexes
Throughout this section, we fix a CAT(0) cube complex X of finite dimension η, equipped with a basepoint x0 ∈ X. We will make use of the characterization
obtained in Corollary 3.3 in order to prove Theorem 1.1.
5.1. Constructing the sets S(x, k, l). By Corollary 3.3, in order to prove X has
finite asymptotic dimension, we need to find a constant N ∈ N such that ∀l ∈ N,
∀k = 1, 2, . . . , 3l, ∀x ∈ X, we can assign a subset S(x, k, l) ⊆ X, satisfying:
i) ∀l ∈ N, ∃ Sl > 0 such that S(x, k, l) ⊆ B(x, Sl) for all k = 1, . . . , 3l;
ii) ∀l ∈ N, ∀k, k′ with 1 ≤ k ≤ k′ ≤ 3l, ∀x ∈ X, S(x, k, l) ⊆ S(x, k′, l);
iii) ∀x, y ∈ X with d(x, y) = 1, S(y, k, l) ⊆ S(x, k + 1, l) for all k = 1, 2, . . . , 3l − 1;
iv) ∀l ∈ N, ♯S(x, 2l, l) ≤ N + 1.
Now for l ∈ N, k = 1, 2, . . . , 3l, and x ∈ X, we define
S̃(x, k, l) = hl(B(x, k)).
It is easy to show that {S̃(x, k, l)} satisfies i) to iii), but it does not satisfy iv) above, so we need some modification. Intuitively, we construct S(x, k, l) as a uniformly separated net in S̃(x, k, l). To be more precise, we require the following lemma.
Lemma 5.1. There exist two constants N, K only depending on the dimension η, such
that ∀l ∈ N, ∀x ∈ V, there are subsets Cx ⊆ [x0 , x], and maps px : [x0 , x] → P(Cx ), where
P(Cx ) denotes the power set of Cx , satisfying:
• If d(x, y) = 1 and y ∈ [x0, x], then Cx ∩ [x0, y] = Cy, and px|[x0,y] = py;
• For z ∈ [x0, x] and w ∈ px(z), we have d(z, w) ≤ Kl;
• ∀z ∈ [x0, x], ♯(B(z, Ml) ∩ Cx) ≤ N, where M = 3η + 3 + K.
We postpone the proof of the above lemma and first show how to use it to
construct S(x, k, l) (and, hence, to conclude the proof of Theorem 1.1).
Proof of Theorem 1.1. Let N, K be the constants in Lemma 5.1. ∀l ∈ N, ∀k = 1, 2, . . . , 3l, ∀x ∈ X, let S̃(x, k, l) = hl(B(x, k)) be as above; by Lemma 4.27, we know S̃(x, k, l) ⊆ [x0, x]. Now we define
S(x, k, l) = ⋃_{z ∈ S̃(x,k,l)} px(z),
and the only thing left to complete the proof is to verify the conditions in Corollary 3.3.
i) By the definition of hl, we know that ∀y ∈ B(x, k), d(y, hl(y)) ≤ 3ηl. So for any z ∈ S̃(x, k, l), d(z, x) ≤ (3η + 3)l. For such z and any w ∈ px(z), by Lemma 5.1, we know d(z, w) ≤ Kl, which implies:
S(x, k, l) ⊆ B(x, (3η + 3 + K)l) = B(x, Ml).
ii) ∀l ∈ N, ∀k, k′ with 1 ≤ k ≤ k′ ≤ 3l, ∀x ∈ X, we have S̃(x, k, l) ⊆ S̃(x, k′, l). Now, immediately by the definition, S(x, k, l) ⊆ S(x, k′, l).
iii) ∀x, y ∈ X with d(x, y) = 1, by Lemma 2.9, y ∈ [x0, x] or x ∈ [x0, y]. Assume the former. Let k = 1, 2, . . . , 3l − 1. Obviously, S̃(y, k, l) ⊆ S̃(x, k + 1, l), so we have
S(y, k, l) = ⋃ py(S̃(y, k, l)) = ⋃ px|[x0,y](S̃(y, k, l)) = ⋃ px(S̃(y, k, l)) ⊆ ⋃ px(S̃(x, k + 1, l)) = S(x, k + 1, l).
Here we use the first part of Lemma 5.1 in the second equation. On the other hand, S̃(x, k, l) ⊆ S̃(y, k + 1, l), so we have
S(x, k, l) = ⋃ px(S̃(x, k, l)) ⊆ ⋃ px(S̃(y, k + 1, l)) = ⋃ px|[x0,y](S̃(y, k + 1, l)) = ⋃ py(S̃(y, k + 1, l)) = S(y, k + 1, l).
Here we use the first part of Lemma 5.1 in the fourth equality.
iv) By i), we know that S(x, k, l) ⊆ B(x, Ml) for all k = 1, 2, . . . , 3l. Hence, by
definition, S(x, k, l) ⊆ B(x, Ml) ∩ Cx . Now by the third part of Lemma 5.1, we
have ♯S(x, k, l) 6 N.
The last thing is to prove Lemma 5.1. We use the analysis in Section 4 to construct Cx and px inductively. Recall that in Section 4 (Proposition 4.24), for any l, n ∈ N, and any x ∈ X, we have
[x0, x] ∩ Snor(x0, nl) = ⋃_{h∈Hnl} [xh, v],
with ♯Hnl ≤ η and dim[xh, v] < dim[x0, x]. In order to carry out induction on the dimension of [x0, x], we require a stronger version of Lemma 5.1, which is more flexible on the choice of endpoints of intervals. More explicitly, we have
Lemma 5.2. There exist two constants N, K only depending on the dimension η, such
that ∀l ∈ N, ∀x̄, x ∈ V, ∃Cx̄,x ⊆ [x̄, x], and a map px̄,x : [x̄, x] → P(Cx̄,x ) satisfying:
• If d(x, y) = 1 and y ∈ [x̄, x], then Cx̄,x ∩ [x̄, y] = Cx̄,y, and px̄,x|[x̄,y] = px̄,y;
• For z ∈ [x̄, x] and w ∈ px̄,x(z), we have d(z, w) ≤ Kl;
• ∀z ∈ [x̄, x], ♯(B(z, Ml) ∩ Cx̄,x) ≤ N, where M = 3η + 3 + K.
It is obvious that Lemma 5.1 is implied by Lemma 5.2 (one just needs to take
x̄ = x0 ). Now we prove Lemma 5.2.
Proof of Lemma 5.2. Fix an l ∈ N. We will carry out induction on dim[x̄, x].
Given any x̄, x ∈ V with dim[x̄, x] = 1, we define
Cx̄,x = {y ∈ [x̄, x] : dnor (x̄, y) ∈ lN},
where lN = {0, l, 2l, 3l, . . .}. Since dim[x̄, x] = 1, [x̄, x] is indeed isometric to an
interval in R. We define px̄,x : [x̄, x] → P(Cx̄,x ) as follows: for any y ∈ [x̄, x], px̄,x (y)
consists of a single point which is at distance l⌊dnor (x̄, y)/l⌋ from x̄ in [x̄, y], where
⌊·⌋ is the function of taking integer part. Now it is obvious that
• If d(x, y) = 1 and y ∈ [x̄, x], then Cx̄,x ∩ [x̄, y] = Cx̄,y , and px̄,x |[x̄,y] = px̄,y ;
• For any z ∈ [x̄, x] and w ∈ px̄,x (z), we have d(z, w) 6 l;
• ∀z ∈ [x̄, x], ♯ B(z, Ml) ∩ Cx̄,x 6 3M.
Suppose for any x̄, x ∈ V with dim[x̄, x] 6 η − 1, we have defined Cx̄,x ⊆ [x̄, x]
and a map px̄,x : [x̄, x] → P(Cx̄,x ) satisfying:
• If d(x, y) = 1 and y ∈ [x̄, x], then Cx̄,x ∩ [x̄, y] = Cx̄,y , and px̄,x |[x̄,y] = px̄,y ;
• For z ∈ [x̄, x] and w ∈ px̄,x(z), we have d(z, w) ≤ ((η − 1)η/2) l;
• ∀z ∈ [x̄, x], ♯(B(z, Ml) ∩ Cx̄,x) ≤ (3M)^(η−1) (η − 1)!.
Now we focus on x̄, x ∈ V with dim[x̄, x] = η. For any n ∈ N with nl ≤ dnor(x̄, x), by Proposition 4.24,
[x̄, x] ∩ Snor(x̄, nl) = ⋃_{h∈H^x_{nl}} F^x_h = ⋃_{h∈H^x_{nl}} [xh, v^x_{nl}],
where v^x_{nl} is the farthest point from x̄ in [x̄, x] ∩ Snor(x̄, nl), H^x_{nl} is the set of hyperplanes crossing the nl-th cube of the normal cube path from x̄ to x, and we also have dim[xh, v^x_{nl}] < dim[x̄, x]. By induction, C_{xh, v^x_{nl}} and p_{xh, v^x_{nl}} have already been defined.
Now we define
C^n_{x̄,x} = ⋃_{h∈H^x_{nl}} C_{xh, v^x_{nl}},
and
C_{x̄,x} = ⋃_{n=0}^{⌊dnor(x̄,x)/l⌋} C^n_{x̄,x}.
For any z ∈ [x̄, x], let z̃ be the nl-th vertex on the normal cube path from x̄ to z, where n = ⌊dnor(x̄, z)/l⌋, so dnor(z̃, z) ≤ l, which implies d(z̃, z) ≤ ηl, and
z̃ ∈ [x̄, x] ∩ Snor(x̄, nl) = ⋃_{h∈H^x_{nl}} [xh, v^x_{nl}].
Now define
px̄,x(z) = ⋃ { p_{xh, v^x_{nl}}(z̃) : h ∈ H^x_{nl} and z̃ ∈ [xh, v^x_{nl}] },
and we need to verify that the requirements hold for Cx̄,x and px̄,x.
First, suppose d(x, y) = 1 and y ∈ [x̄, x], and let h′ be the hyperplane separating x from y. Given n ∈ N such that [x̄, y] ∩ Snor(x̄, nl) ≠ ∅, by Proposition 4.15, v^x_{nl} is the nl-th vertex on the normal cube path from x̄ to x, and v^y_{nl} is the nl-th vertex on the normal cube path from x̄ to y. Due to the fellow-traveller property, Proposition 4.4, d(v^x_{nl}, v^y_{nl}) ≤ 1. By Proposition 4.14, we have
v^y_{nl} ∈ Snor(x̄, nl) ∩ [x̄, y] ⊆ Snor(x̄, nl) ∩ [x̄, x] ⊆ [x̄, v^x_{nl}].
Recall that H(z, w) denotes the set of all hyperplanes separating z from w. Obviously,
H(x̄, x) = H(x̄, y) ∪ {h′ },
which implies H^y_{nl} ⊆ H^x_{nl} ⊆ H^y_{nl} ∪ {h′}, by Lemma 4.10 and Lemma 4.19.
If h′ ∈ H^x_{nl}, then F^x_{h′} ∩ [x̄, y] = ∅ by Proposition 4.18. On the other hand, ∀h ∈ H^y_{nl}, by Lemma 4.25, yh is the unique point in Bnor(x̄, nl) such that h separates x̄ from yh, and for any hyperplane k ⋔ h, k does not separate yh from x̄. This implies yh = xh since H^y_{nl} ⊆ H^x_{nl}, so we can apply the induction hypothesis to the new "base" point yh = xh and the pair v^x_{nl}, v^y_{nl}, since d(v^x_{nl}, v^y_{nl}) ≤ 1 and v^y_{nl} ∈ [xh, v^x_{nl}]. This implies
C_{xh, v^x_{nl}} ∩ [xh, v^y_{nl}] = C_{xh, v^y_{nl}}.
Since C_{xh, v^x_{nl}} ⊆ [xh, v^x_{nl}], we have
C_{xh, v^x_{nl}} ∩ [x̄, y] = C_{xh, v^x_{nl}} ∩ [xh, v^x_{nl}] ∩ [x̄, y] = C_{xh, v^x_{nl}} ∩ [xh, µ(xh, v^x_{nl}, y)].
Claim: µ(xh, v^x_{nl}, y) = v^y_{nl}. Indeed, if v^x_{nl} = v^y_{nl}, then it holds trivially. If v^x_{nl} ≠ v^y_{nl}, then by Lemma 4.8, v^x_{nl} ∉ [x̄, y]. Since d(v^x_{nl}, v^y_{nl}) = 1, either v^x_{nl} ∈ [v^y_{nl}, y] or v^y_{nl} ∈ [v^x_{nl}, y]. The former cannot hold since [v^y_{nl}, y] ⊆ [x̄, y], so v^y_{nl} ∈ [v^x_{nl}, y], which implies
v^y_{nl} ∈ [xh, y] ∩ [xh, v^x_{nl}] ∩ [v^x_{nl}, y],
i.e. v^y_{nl} = µ(xh, v^x_{nl}, y).
By the claim,
C_{xh, v^x_{nl}} ∩ [x̄, y] = C_{xh, v^x_{nl}} ∩ [xh, v^y_{nl}] = C_{xh, v^y_{nl}}.
Now for the above n, we have
C^n_{x̄,x} ∩ [x̄, y] = ⋃_{h∈H^x_{nl}} (C_{xh, v^x_{nl}} ∩ [x̄, y]) = ⋃_{h∈H^y_{nl}} (C_{xh, v^x_{nl}} ∩ [x̄, y]) = ⋃_{h∈H^y_{nl}} C_{xh, v^y_{nl}} = ⋃_{h∈H^y_{nl}} C_{yh, v^y_{nl}} = C^n_{x̄,y}.
Since C^n_{x̄,x} ⊆ [x̄, x] ∩ Snor(x̄, nl), we have
C_{x̄,x} ∩ [x̄, y] = ⋃_{n=0}^{⌊dnor(x̄,x)/l⌋} (C^n_{x̄,x} ∩ [x̄, y]) = ⋃_{n: [x̄,x]∩Snor(x̄,nl)≠∅} (C^n_{x̄,x} ∩ [x̄, y]) = ⋃_{n: [x̄,y]∩Snor(x̄,nl)≠∅} (C^n_{x̄,x} ∩ [x̄, y]) = ⋃_{n: [x̄,y]∩Snor(x̄,nl)≠∅} C^n_{x̄,y} = C_{x̄,y}.
∀z ∈ [x̄, y], one needs to show that px̄,x(z) = px̄,y(z). Let z̃ be the nl-th vertex on the normal cube path from x̄ to z, where n = ⌊dnor(x̄, z)/l⌋. By the analysis above, we know
d(v^x_{nl}, v^y_{nl}) ≤ 1,  v^y_{nl} ∈ [v^x_{nl}, y],  xh = yh,  H^y_{nl} ⊆ H^x_{nl} ⊆ H^y_{nl} ∪ {h′}.
For h ∈ H^x_{nl} with z̃ ∈ [xh, v^x_{nl}], we have h ∈ H^y_{nl}, i.e. h ≠ h′, since z̃ ∈ [x̄, y]. Now for such h,
z̃ ∈ [x̄, y] ∩ [xh, v^x_{nl}] = [xh, v^y_{nl}] = [yh, v^y_{nl}],
where the first equation comes from the claim above. Inductively, we know that for such h,
p_{xh, v^x_{nl}}(z̃) = p_{yh, v^y_{nl}}(z̃).
Now by definition,
px̄,x(z) = ⋃ { p_{xh, v^x_{nl}}(z̃) : h ∈ H^x_{nl} and z̃ ∈ [xh, v^x_{nl}] }
= ⋃ { p_{xh, v^x_{nl}}(z̃) : h ∈ H^y_{nl} and z̃ ∈ [xh, v^x_{nl}] }
= ⋃ { p_{yh, v^y_{nl}}(z̃) : h ∈ H^y_{nl} and z̃ ∈ [yh, v^y_{nl}] }
= px̄,y(z).
Second, for any z ∈ [x̄, x] and w ∈ px̄,x(z), assume that w ∈ p_{xh, v^x_{nl}}(z̃) for some h ∈ H^x_{nl} and z̃ ∈ [xh, v^x_{nl}] as in the definition. By induction, we know d(z̃, w) ≤ ((η − 1)η/2) l since dim[xh, v^x_{nl}] ≤ η − 1. So
d(z, w) ≤ d(z, z̃) + d(z̃, w) ≤ ηl + ((η − 1)η/2) l = (η(η + 1)/2) l.
Third, for any z ∈ [x̄, x], consider B(z, Ml) ∩ Cx̄,x. Suppose n ∈ N satisfies (B(z, Ml) ∩ Cx̄,x) ∩ Snor(x̄, nl) ≠ ∅, so B(z, Ml) ∩ (⋃_{h∈H^x_{nl}} [xh, v^x_{nl}]) ≠ ∅, which means there exists some h ∈ H^x_{nl} such that B(z, Ml) ∩ [xh, v^x_{nl}] ≠ ∅. For such n and h, let z′ = µ(z, xh, v^x_{nl}) ∈ [xh, v^x_{nl}]. Obviously,
B(z, Ml) ∩ [xh, v^x_{nl}] ⊆ B(z′, Ml) ∩ [xh, v^x_{nl}].
By induction, we have
♯(B(z, Ml) ∩ C_{xh, v^x_{nl}}) ≤ ♯(B(z′, Ml) ∩ C_{xh, v^x_{nl}}) ≤ (3M)^(η−1) (η − 1)!.
Now for the above z, there exist at most 3M values of n such that B(z, Ml) ∩ [x̄, x] ∩ Snor(x̄, nl) ≠ ∅; and for each such n, since ♯H^x_{nl} ≤ η, there exist at most η hyperplanes h such that B(z, Ml) ∩ [x̄, x] ∩ [xh, v^x_{nl}] ≠ ∅. So we have
♯(B(z, Ml) ∩ Cx̄,x) ≤ Σ_{n: B(z,Ml)∩[x̄,x]∩Snor(x̄,nl)≠∅} ♯(B(z, Ml) ∩ C^n_{x̄,x})
≤ Σ_{n as above} Σ_{h∈H^x_{nl}} ♯(B(z, Ml) ∩ C_{xh, v^x_{nl}})
≤ 3M · η · (3M)^(η−1) (η − 1)! = (3M)^η η!.
Now we take K = (η − 1)η/2 and N = (3M)^η η! = (3K + 9η + 9)^η η!; then the lemma holds for these constants.
6. Coarse median spaces
In this section, we discuss the coarse median case, and prove Theorem 1.2. We
fix a coarse median space X with geodesic metric ρ and coarse median µ with
parameters K, H and finite rank η. The definitions and notations are the same as
in Section 2.4. According to Remark 2.11, we also assume that the coarse median
µ satisfies M1 and M2. We recall:
Theorem 6.1 ([28]). Any geodesic uniformly locally finite coarse median space of finite
rank and at most exponential growth has Property A.
Our result, Theorem 1.2, says that any coarse median space as above has subexponential asymptotic dimension growth. Thus, combining with Ozawa’s result [22], our theorem yields a strengthening of Theorem 6.1.
To prove Theorem 1.2, we use several notations and lemmas from [28]. We use
the notation x ∼s y for ρ(x, y) 6 s. Given r > 0 and a, b ∈ X, the coarse interval
[a, b]r is defined to be:
[a, b]r = {z ∈ X : µ(a, b, z) ∼r z}.
By a result of Bowditch [7], there exists a constant λ > 0 depending only on the
parameter K, H, such that for all x, y, z ∈ X, µ(x, y, z) ∈ [x, y]λ .
Also recall that the median axiom M3 holds in the coarse median case up to a
constant γ > 0 depending only on the parameters K, H: for all x, y, z, u, v ∈ X, we
have
µ(µ(x, y, z), u, v) ∼γ µ(µ(x, u, v), µ(y, u, v), z).
Actually we can take γ = 3K(3K + 2)H(5) + (3K + 2)H(0).
Given r, t, κ > 0, denote
L1(r) = (K + 1)r + Kλ + γ + 2H(0),
L2(r, κ) = (K + 1)r + κ + H(0), and
L3(r, t) = 3^η K^η r t + r.
We need the following lemmas from [28].
Lemma 6.2 ([28]). Let X be a coarse median space, r > 0, and let a, b ∈ X, x ∈ [a, b]λ .
Then [a, x]r ⊂ [a, b]L1 (r) .
Lemma 6.3 ([28]). Let X be a geodesic coarse median space of rank at most η. For every
κ > 0 and t > 0, there exists rt > 0, such that for all r > rt , a, b ∈ X, there exists
h ∈ [a, b]L1 (r) , such that
• ρ(a, h) 6 L3 (r, t), and
• B(a, rt) ∩ [a, b]κ ⊂ [a, h]L2 (r,κ) .
Lemma 6.4 ([28]). Let X be a coarse median space. Fix κ > 0. There exist constants
α, β > 0 depending only on the parameters of the coarse median structure and κ, such
that the following holds: let a, b, h, m ∈ X and r > 0 satisfy m ∈ [a, h]L2 (r,κ) , h ∈ [a, b]L1 (r) .
Then p = µ(m, b, h) satisfies ρ(h, p) 6 αr + β.
Proof of Theorem 1.2. The proof is based on the construction used in [28] to prove
property A, and for the readers’ convenience, we give a sketch of their proof.
In fact we will verify the stronger conditions on the S sets required to apply
Theorem 3.1. Fix a base point x0 ∈ X, and let α, β be the constants from Lemma
6.4. First apply Lemma 6.3 for κ = λ and all t ∈ N to obtain a sequence rt ∈ N,
such that the conclusion of the lemma holds. Furthermore, we can choose the rt
inductively to arrange that the sequence N ∋ t ↦ lt = (t rt − H(0)) / (3K) is increasing.
Now fix x ∈ X, t ∈ N, and k ∈ {1, 2, . . . , 3lt}. For any y ∈ B(x, k), Lemma 6.3
applied for a = y, b = x0 and r = rt , produces a point h y ∈ [y, x0 ]L1 (rt ) . We define
S(x, k, lt ) = {h y ∈ X : y ∈ B(x, k)}.
We need to verify these sets satisfy Condition 2 in the statement of Proposition 3.1,
i.e. we need to show there exists a subexponential function f : R → R, satisfying:
i) ∀t ∈ N, ∃St > 0, such that S(x, k, lt ) ⊆ B(x, St ) for all k = 1, . . . , 3lt;
ii) ∀t ∈ N, ∀k, k′ with 1 6 k 6 k′ 6 3lt, ∀x ∈ X, we have S(x, k, lt ) ⊆ S(x, k′ , lt);
iii) ∀x, y ∈ X with ρ(x, y) 6 lt , we have:
• S(x, k − ρ(x, y), lt) ⊂ S(x, k, lt ) ∩ S(y, k, lt ), for k = ρ(x, y) + 1, . . . , 3lt;
• S(x, k + ρ(x, y), lt) ⊃ S(x, k, lt ) ∩ S(y, k, lt ), for k = 1, . . . , 3lt − ρ(x, y);
iv) ∀t ∈ N, ∀k = 1, 2, . . . , 3lt, ∀x ∈ X, we have ♯S(x, k, lt ) 6 f (lt ).
By the construction, ii) and iii) hold naturally. For i), by Lemma 6.3, we know
S(x, k, lt ) ⊆ B(x, lt + L3 (rt , t)). The only thing left is to find a subexponential function
f such that condition iv) holds. The following argument follows totally from the
proof in [28], and we omit some calculation. The readers can turn to their original
paper for more details.
Take y ∈ B(x, k), with the notation as above. Denote m y = µ(x, y, x0). Then by
Lemma 6.3, one can deduce that m y ∈ [y, h y ]L2 (rt ,λ) . Now since h y ∈ [y, x0 ]L1 (rt ) ,
Lemma 6.4 implies the point p y = µ(m y , x0 , h y ) ∈ [m y , x0 ]λ satisfies ρ(h y , p y ) 6
αrt + β. As m y = µ(x, y, x0) ∈ [x, x0 ]λ , Lemma 6.2 now implies p y ∈ [x, x0]L1 (λ) .
Consequently, we have ρ(x, py) ≤ 3lt + 3^η K^η t rt + rt + αrt + β, which depends linearly on lt. Now by Proposition 9.8 in [7], the number of possible points py is bounded by P(lt) for some polynomial P depending only on H, K, η and uniform local finiteness of X. Since X has at most exponential growth, it follows that ♯S(x, k, lt) is at most P(lt) c′ c^(rt) for some constants c, c′ > 1. Take f(lt) = P(lt) c′ c^(rt) and recall that in the limit rt/lt → 0. We extend f to a function on R+ by setting f(r) := f(lt) for r ∈ (lt−1, lt].
This completes the proof.
References
[1] J. Behrstock, M.F. Hagen, and A. Sisto. Asymptotic dimension and small-cancellation for
hierarchically hyperbolic spaces and groups. arXiv:1512.06071, 2015.
[2] J. Behrstock, M.F. Hagen, and A. Sisto. Hierarchically hyperbolic spaces ii: combination
theorems and the distance formula. arXiv:1509.00632, 2015.
[3] G. Bell and A. N. Dranishnikov. Asymptotic dimension in Bȩdlewo. Topology Proc, 38:209–236,
2011.
[4] G. C. Bell. Growth of the asymptotic dimension function for groups. arXiv:math/0504577,
2005.
[5] M. Bestvina, K. Bromberg, and K. Fujiwara. The asymptotic dimension of mapping class
groups is finite. arXiv:1006.1939, 2010.
[6] B. H. Bowditch. Coarse median spaces and groups. Pacific Journal of Mathematics, 261(1):53–93,
2013.
[7] B. H. Bowditch. Embedding median algebras in products of trees. Geometriae Dedicata,
170(1):157–176, 2014.
[8] M. R. Bridson and A. Häfliger. Metric spaces of non-positive curvature, volume 319 of Grundlehren
der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. SpringerVerlag, Berlin, 1999.
[9] N. P. Brown and N. Ozawa. C∗ -algebras and finite-dimensional approximations, volume 88 of
Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2008.
[10] S. Campbell and G. A. Niblo. Hilbert space compression and exactness of discrete groups.
Journal of functional analysis, 222(2):292–305, 2005.
[11] I. Chatterji and G. A. Niblo. From wall spaces to CAT(0) cube complexes. International Journal
of Algebra and Computation, 15(05n06):875–885, 2005.
[12] V. Chepoi. Graphs of some CAT(0) complexes. Advances in Applied Mathematics, 24(2):125–179,
2000.
[13] A. N. Dranishnikov. Groups with a polynomial dimension growth. Geometriae Dedicata,
119(1):1–15, 2006.
[14] M. Gromov. Hyperbolic groups. In Essays in group theory, pages 75–263. Springer, 1987.
[15] M. Gromov. Asymptotic invariants of infinite groups. Geometric group theory, Vol.2, volume
182 of London Math. Soc. Lecture Note Ser., pages 1–295, 1993.
[16] J. R. Isbell. Median algebra. Transactions of the American Mathematical Society, 260(2):319–362,
1980.
[17] V. A. Kaimanovich. Boundary amenability of hyperbolic spaces. In Discrete geometric analysis,
volume 347 of Contemp. Math., pages 83–111. Amer. Math. Soc., Providence, RI, 2004.
[18] G. A. Niblo and L. D. Reeves. The geometry of cube complexes and the complexity of their
fundamental groups. Topology, 37(3):621–633, 1998.
[19] B. Nica. Cubulating spaces with walls. Algebr. Geom. Topol, 4:297–309, 2004.
[20] P. W. Nowak and G. Yu. Large scale geometry. 2012.
[21] I. Oppenheim. An intermediate quasi-isometric invariant between subexponential asymptotic dimension growth and Yu’s property A. Internat. J. Algebra Comput., 24(6):909–922, 2014.
[22] N. Ozawa. Metric spaces with subexponential asymptotic dimension growth. International
Journal of Algebra and Computation, 22(02):1250011, 2012.
[23] L. D. Reeves. Biautomatic structures and combinatorics for cube complexes. PhD thesis, University
of Melbourne, 1995.
[24] J. Roe. Hyperbolic groups have finite asymptotic dimension. Proceedings of the American
Mathematical Society, 133(9):2489–2490, 2005.
[25] M. Roller. Poc sets, median algebras and group actions. an extended study of Dunwoody’s
construction and Sageev’s theorem. Southampton Preprint Archive, 1998.
[26] M. Sageev. Ends of group pairs and non-positively curved cube complexes. Proceedings of the
London Mathematical Society, 3(3):585–617, 1995.
[27] J. Smith. The asymptotic dimension of the first Grigorchuk group is infinity. Revista Matemática
Complutense, 20(1):119–121, 2007.
[28] J. Špakula and N. J. Wright. Coarse medians and property A. arXiv:1602.06084, 2016.
[29] N. J. Wright. Finite asymptotic dimension for CAT(0) cube complexes. Geometry & Topology,
16(1):527–554, 2012.
[30] G. Yu. The Novikov conjecture for groups with finite asymptotic dimension. The Annals of
Mathematics, 147(2):325–355, 1998.
[31] G. Yu. The coarse Baum-Connes conjecture for spaces which admit a uniform embedding
into Hilbert space. Inventiones Mathematicae, 139(1):201–240, 2000.
[32] R. Zeidler. Coarse median structures on groups. Master’s thesis, University of Vienna, Vienna,
Austria, 2013.
Universität Wien, Fakultät für Mathematik, Oskar-Morgenstern-Platz 1, 1090 Wien, Austria.
E-mail address: [email protected]
School of Mathematics, University of Southampton, Highfield, SO17 1BJ, United Kingdom.
E-mail address: [email protected],[email protected],[email protected]
Variable selection with Hamming loss
Butucea, C.1,2 , Stepanova, N.A.3 , and Tsybakov, A.B.2
arXiv:1512.01832v4 [] 10 Mar 2017
1 Université Paris-Est Marne-la-Vallée, LAMA (UMR 8050), UPEM, UPEC, CNRS, F-77454, Marne-la-Vallée, France
2 CREST, ENSAE, Université Paris Saclay, 3, avenue P. Larousse, 92245 Malakoff Cedex, France
3 School of Mathematics and Statistics, Carleton University, Ottawa, Ontario, K1S 5B6 Canada
March 13, 2017
Abstract
We derive non-asymptotic bounds for the minimax risk of variable selection under
expected Hamming loss in the Gaussian mean model in Rd for classes of s-sparse vectors
separated from 0 by a constant a > 0. In some cases, we get exact expressions for the nonasymptotic minimax risk as a function of d, s, a and find explicitly the minimax selectors.
These results are extended to dependent or non-Gaussian observations and to the problem
of crowdsourcing. Analogous conclusions are obtained for the probability of wrong recovery of the sparsity pattern. As corollaries, we derive necessary and sufficient conditions
for such asymptotic properties as almost full recovery and exact recovery. Moreover, we
propose data-driven selectors that provide almost full and exact recovery adaptively to the
parameters of the classes.
Keywords:
adaptive variable selection, almost full recovery, exact recovery, Hamming
loss, minimax selectors, nonasymptotic minimax selection bounds, phase transitions.
1
Introduction
In recent years, the problem of variable selection in high-dimensional regression models has
been extensively studied from the theoretical and computational viewpoints. In making effective high-dimensional inference, sparsity plays a key role. With regard to variable selection in
sparse high-dimensional regression, the Lasso, Dantzig selector, other penalized techniques as
well as marginal regression were analyzed in detail; see, for example, [20, 27, 24, 19, 23, 25, 21,
12, 15] and the references cited therein. Several other recent papers deal with sparse variable
selection in nonparametric regression; see, for example, [17, 5, 10, 14, 8].
In this paper, we study the problem of variable selection in the Gaussian sequence model
Xj = θj + σξj,  j = 1, . . . , d,    (1)
where ξ1 , . . . , ξd are i.i.d. standard Gaussian random variables, σ > 0 is the noise level, and
θ = (θ1 , . . . , θd ) is an unknown vector of parameters to be estimated. We assume that θ is
(s, a)-sparse, which is understood in the sense that θ belongs to one of the following sets:
Θ_d(s, a) = {θ ∈ R^d : there exists a set S ⊆ {1, . . . , d} with s elements such that |θj| ≥ a for all j ∈ S, and θj = 0 for all j ∉ S}
or
Θ+_d(s, a) = {θ ∈ R^d : there exists a set S ⊆ {1, . . . , d} with s elements such that θj ≥ a for all j ∈ S, and θj = 0 for all j ∉ S}.
Here, a > 0 and s ∈ {1, . . . , d} are given constants.
We study the problem of selecting the relevant components of θ, that is, of estimating the
vector
η = η(θ) = (I(θj ≠ 0))_{j=1,...,d},
where I(·) is the indicator function. As estimators of η, we consider any measurable functions η̂ = η̂(X1, . . . , Xd) of (X1, . . . , Xd) taking values in {0, 1}^d. Such estimators will be called selectors. We characterize the loss of a selector η̂ as an estimator of η by the Hamming distance between η̂ and η, that is, by the number of positions at which η̂ and η differ:
|η̂ − η| := Σ_{j=1}^{d} |η̂j − ηj| = Σ_{j=1}^{d} I(η̂j ≠ ηj).
Here, η̂j and ηj = ηj(θ) are the jth components of η̂ and η = η(θ), respectively. The expected Hamming loss of a selector η̂ is defined as Eθ|η̂ − η|, where Eθ denotes the expectation with respect to the distribution Pθ of (X1, . . . , Xd) satisfying (1). Another well-known risk measure is the probability of wrong recovery Pθ(Ŝ ≠ S(θ)), where Ŝ = {j : η̂j = 1} and S(θ) = {j : ηj(θ) = 1}. It can be viewed as the Hamming distance with an indicator loss and is related to the expected Hamming loss as follows:
Pθ(Ŝ ≠ S(θ)) = Pθ(|η̂ − η| ≥ 1) ≤ Eθ|η̂ − η|.    (2)
In view of the last inequality, bounding the expected Hamming loss provides a stronger result
than bounding the probability of wrong recovery.
Most of the literature on variable selection in high dimensions focuses on the recovery of the
sparsity pattern, that is, on constructing selectors such that the probability Pθ (Sb 6= S(θ)) is
close to 0 in some asymptotic sense (see, for example, [20, 27, 24, 19, 23, 25, 21]). These papers
consider high-dimensional linear regression settings with deterministic or random covariates.
In particular, for the sequence model (1), one gets that if a > Cσ√(log d) for some C > 0 large enough, then there exist selectors such that Pθ(Ŝ ≠ S(θ)) tends to 0, while this is not the case if a < cσ√(log d) for some c > 0 small enough. More insight into variable selection was provided
in [12, 15] by considering a Hamming risk close to the one we have defined above. Assuming
that s ∼ d1−β for some β ∈ (0, 1), the papers [12, 15] establish an asymptotic in d “phase
diagram” that partitions the parameter space into three regions called the exact recovery,
almost full recovery, and no recovery regions. This is done in a Bayesian setup for the linear
regression model with i.i.d. Gaussian covariates and random θ. Note also that in [12, 15] the
knowledge of β is required to construct the selectors, so that in this sense the methods are not
adaptive. The selectors are of the form η̂j = I(|Xj| ≥ t) with threshold t = τ(β)σ√(log d) for
some function τ (·) > 0. More recently, these asymptotic results were extended to a combined
minimax - Bayes Hamming risk on a certain class of vectors θ in [16].
The present paper makes further steps in the analysis of variable selection with a Hamming
loss initiated in [12, 15]. Unlike [12, 15], we study the sequence model (1) rather than Gaussian
regression and analyze the behavior of the minimax risk rather than that of the Bayes risk with
a specific prior. Furthermore, we consider not only s ∼ d1−β but general s and derive nonasymptotic results that are valid for any sample size. Remarkably, we get an exact expression
for the non-asymptotic minimax risk and find explicitly the minimax selectors. Finally, we
construct data-driven selectors that are simultaneously adaptive to the parameters a and s.
Specifically, we consider the minimax risk
inf_{η̃} sup_{θ∈Θ} (1/s) Eθ|η̃ − η|    (3)
for Θ = Θ_d(s, a) and Θ = Θ+_d(s, a), where inf_{η̃} denotes the infimum over all selectors η̃. In Section 2, for both classes Θ = Θ+_d(s, a) and Θ = Θ_d(s, a) we find the exact values of the
minimax risks and derive minimax selectors for any fixed d, s, a > 0 such that s < d. For
Θ = Θd (s, a) we also propose another selector attaining the minimax risk up to the factor 2.
Interestingly, the thresholds that correspond to the minimax optimal selectors do not have
the classical form Aσ√(log d) for some A > 0; the optimal threshold is a function of a and s.
Analogous minimax results are obtained for the risk measured by the probability of wrong
recovery Pθ (Sb 6= S(θ)). Section 3 considers extensions of the non-asymptotic exact minimax
theorems of Section 2 to settings with non-Gaussian or dependent observations. In Section 4,
as asymptotic corollaries of these results, we establish sharp conditions under which exact
and almost full recovery are achievable. Section 5 is devoted to the construction of adaptive
selectors that achieve almost full and exact recovery without the knowledge of the parameters
a and s. Most of the proofs are given in the Appendix.
Finally, note that quite recently several papers have studied the expected Hamming loss
in other problems of variable selection. Asymptotic behavior of the minimax risk analogous
to (3) for classes Θ different from the sparsity classes that we consider here was analyzed in
[8] and without the normalizing factor 1/s in [14]. Oracle inequalities for Hamming risks in
the problem of multiple classification under sparsity constraints are established in [22]. The
paper [26] introduces an asymptotically minimax approach based on the Hamming loss in the
problem of community detection in networks.
2
Non-asymptotic minimax selectors
In what follows, we assume that s < d. We first consider minimax variable selection for the class Θ+_d(s, a). For this class, we will use a selector η̂+ with the components
η̂j+ = I(Xj ≥ t),  j = 1, . . . , d,    (4)
where the threshold is defined by
t = a/2 + (σ²/a) log(d/s − 1).    (5)
Set
Ψ+(d, s, a) = (d/s − 1) Φ(−a/(2σ) − (σ/a) log(d/s − 1)) + Φ(−a/(2σ) + (σ/a) log(d/s − 1)),
where Φ(·) denotes the standard Gaussian cumulative distribution function.
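For orientation, the threshold (5) and the value Ψ+(d, s, a) are easy to evaluate numerically. The following short Python sketch is an illustration we add here (the parameter values d, s, a are hypothetical, not taken from the paper):

import math

def Phi(z):                       # standard Gaussian cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def threshold(d, s, a, sigma=1.0):        # the threshold t in (5)
    return a / 2 + (sigma**2 / a) * math.log(d / s - 1)

def psi_plus(d, s, a, sigma=1.0):         # the quantity Psi^+(d, s, a)
    u = math.log(d / s - 1)
    return (d / s - 1) * Phi(-a / (2 * sigma) - (sigma / a) * u) \
           + Phi(-a / (2 * sigma) + (sigma / a) * u)

d, s, a = 1000, 10, 3.0
print(threshold(d, s, a))   # note: depends on a and s, not only on log d
print(psi_plus(d, s, a))    # normalized minimax risk on Theta_d^+(s, a)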
Theorem 2.1. For any a > 0 and s < d the selector η̂+ in (4) with the threshold t defined in (5) satisfies
sup_{θ∈Θ+_d(s,a)} (1/s) Eθ|η̂+ − η| ≤ Ψ+(d, s, a).    (6)
The proof is given in the Appendix.
The next theorem gives a lower bound on the minimax risk showing that the upper bound
in Theorem 2.1 is tight.
Theorem 2.2. For any a > 0 and s < d we have
inf_{η̃} sup_{θ∈Θ+_d(s,a)} (1/s) Eθ|η̃ − η| ≥ Ψ+(d, s, a),
where inf_{η̃} denotes the infimum over all selectors η̃.
The proof is given in the Appendix.
As a straightforward corollary of Theorems 2.1 and 2.2, we obtain that the estimator η̂+ is minimax in the exact sense for the class Θ+_d(s, a) and the minimax risk satisfies
inf_{η̃} sup_{θ∈Θ+_d(s,a)} (1/s) Eθ|η̃ − η| = sup_{θ∈Θ+_d(s,a)} (1/s) Eθ|η̂+ − η| = Ψ+(d, s, a).    (7)
Remarkably, this holds under no assumptions on d, s, a except for, of course, some minimal
conditions under which the problem ever makes sense: a > 0 and s < d. An analogous non-asymptotic minimax result is valid for the class
Θ−_d(s, a) = {θ ∈ R^d : there exists a set S ⊆ {1, . . . , d} with s elements such that θj ≤ −a for all j ∈ S, and θj = 0 for all j ∉ S}.
We omit details here.
Next, consider the class Θ_d(s, a). A direct analog of η̂+ for Θ_d(s, a) is a selector η̂ with the components
η̂j = I(|Xj| ≥ t),  j = 1, . . . , d,    (8)
where the threshold t is defined in (5). Set
Ψ(d, s, a) = (d/s − 1) Φ(−a/(2σ) − (σ/a) log(d/s − 1)) + Φ(−(a/(2σ) − (σ/a) log(d/s − 1))_+),
where x_+ = max(x, 0). Note that
Ψ(d, s, a) ≤ Ψ+(d, s, a).    (9)
We have the following bound.
Theorem 2.3. For any a > 0 and s < d the selector η̂ in (8) with the threshold t defined in
(5) satisfies
sup_{θ∈Θ_d(s,a)} (1/s) Eθ|η̂ − η| ≤ 2Ψ(d, s, a).    (10)
The proof is given in the Appendix.
For the minimax risk on the class Θd (s, a), we have the following corollary, which is an
immediate consequence of Theorems 2.2, 2.3, and inequality (9).
Corollary 2.1. For any a > 0 and s < d the selector η̂ in (8) with the threshold t defined in
(5) satisfies
sup_{θ∈Θ_d(s,a)} Eθ|η̂ − η| ≤ 2 inf_{η̃} sup_{θ∈Θ_d(s,a)} Eθ|η̃ − η|.    (11)
Thus, the risk of the thresholding estimator (8) cannot be greater than the minimax risk
over the class Θd (s, a) multiplied by 2.
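The bound of Theorem 2.3 is easy to probe by simulation. The following Monte Carlo sketch is an illustration we add here (parameter values are hypothetical); it compares the empirical normalized Hamming risk of the selector (8) with the bound 2Ψ(d, s, a):

import numpy as np

rng = np.random.default_rng(0)
d, s, a, sigma = 1000, 10, 3.0, 1.0
t = a / 2 + (sigma**2 / a) * np.log(d / s - 1)      # threshold (5)

theta = np.zeros(d)
theta[:s] = a * np.sign(rng.standard_normal(s))     # s entries of magnitude a, random signs
eta = (theta != 0).astype(int)

losses = []
for _ in range(2000):
    X = theta + sigma * rng.standard_normal(d)      # model (1)
    eta_hat = (np.abs(X) >= t).astype(int)          # selector (8)
    losses.append(np.sum(eta_hat != eta))
print(np.mean(losses) / s)   # to be compared with 2 * Psi(d, s, a)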
We turn now to exact minimax variable selection over the class Θ_d(s, a). Consider a selector η̄ = (η̄1, . . . , η̄d) with the components
η̄j = I( log cosh(aXj/σ²) ≥ t ),  j = 1, . . . , d,    (12)
where the threshold is defined by
t = a²/(2σ²) + log(d/s − 1).    (13)
Set
Ψ(d, s, a) = (d/s − 1) P( e^(−a²/(2σ²)) cosh(aξ/σ) ≥ d/s − 1 ) + P( e^(−a²/(2σ²)) cosh(aξ/σ + a²/σ²) < d/s − 1 ),
where ξ denotes a standard Gaussian random variable. Our aim is to show that Ψ(d, s, a) is
the minimax risk of variable selection under the Hamming loss over the class Θd (s, a) and that
it is achieved by the selector in (12). We first prove that Ψ(d, s, a) is an upper bound on the
maximum risk of the selector (12).
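As an aside (an illustration we add, not from the paper), the selector (12)-(13) can be computed as follows; the statistic log cosh is evaluated in a numerically stable way, and the parameter names are ours:

import numpy as np

def selector_12(X, a, s, sigma=1.0):
    d = X.shape[0]
    t = a**2 / (2 * sigma**2) + np.log(d / s - 1)         # threshold (13)
    z = a * X / sigma**2
    stat = np.logaddexp(z, -z) - np.log(2.0)              # = log cosh(z), stable for large |z|
    return (stat >= t).astype(int)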
Theorem 2.4. For any a > 0 and s < d the selector η̄ in (12) with the threshold t defined in (13) satisfies
sup_{θ∈Θ_d(s,a)} (1/s) Eθ|η̄ − η| ≤ Ψ(d, s, a).
The next theorem establishes the lower bound on the minimax risk showing that the upper bound in Theorem 2.4 cannot be improved.
Theorem 2.5. For any a > 0 and s < d we have
inf_{η̃} sup_{θ∈Θ_d(s,a)} (1/s) Eθ|η̃ − η| ≥ Ψ(d, s, a),
where inf_{η̃} denotes the infimum over all selectors η̃.
Next, we discuss a connection to the Bayesian setting. It is not hard to check that, for
each j, the minimax optimal selector η̂j+ coincides with the Bayes test of the null hypothesis
H0 : θj = 0 against the alternative H1 : θj = a with prior probabilities 1 − s/d and s/d,
respectively. This can be also seen from the proof of Theorem 2.2. Furthermore, the minimax
risk on Θ+
d (s, a) is exactly equal to the risk of the corresponding Bayes selector as shown in
the next proposition. Analogous result holds for the class Θd (s, a) but we do not state it here
for the sake of brevity.
Proposition 2.1. Let µ be the uniform distribution on the set Θ′ of all θ in Θ+_d(s, a) such that s components θj of θ are equal to a and the remaining d − s components are 0. Then,
inf_{η̃} sup_{θ∈Θ+_d(s,a)} Eθ|η̃ − η| = inf_{η̃} ∫ Eθ|η̃ − η| µ(dθ).    (14)
The proof is given in the Appendix.
Finally, we show how the above non-asymptotic minimax results can be extended to the
probability of wrong recovery. For any selector η̃, we denote by S_{η̃} the selected set of indices: S_{η̃} = {j : η̃j = 1}. A selector η̃ = (η̃1, . . . , η̃d) will be called separable if its jth component η̃j
depends only on Xj for all j = 1, . . . , d. We denote by T the set of all separable selectors.
Theorem 2.6. For any a > 0 and s < d the selectors η̂ in (8) and η̂+ in (4) with the threshold t defined in (5), and the selector η̄ in (12) with the threshold t defined in (13) satisfy
sup_{θ∈Θ+_d(s,a)} Pθ(S_{η̂+} ≠ S(θ)) ≤ sΨ+(d, s, a),    (15)
sup_{θ∈Θ_d(s,a)} Pθ(S_{η̄} ≠ S(θ)) ≤ sΨ(d, s, a)    (16)
and
sup_{θ∈Θ_d(s,a)} Pθ(S_{η̂} ≠ S(θ)) ≤ 2sΨ(d, s, a).    (17)
Furthermore,
inf_{η̃∈T} sup_{θ∈Θ+_d(s,a)} Pθ(S_{η̃} ≠ S(θ)) ≥ sΨ+(d, s, a) / (1 + sΨ+(d, s, a)),    (18)
and
inf_{η̃∈T} sup_{θ∈Θ_d(s,a)} Pθ(S_{η̃} ≠ S(θ)) ≥ sΨ(d, s, a) / (1 + sΨ(d, s, a)).    (19)
The proof is given in the Appendix.
Although Theorem 2.6 does not provide the exact minimax solution, it implies sharp
minimaxity in asymptotic sense. Indeed, an interesting case is when the minimax risk in
Theorem 2.6 goes to 0 as d → ∞. Assuming that s and a are functions of d, this corresponds
to sΨ+ (d, s, a) → 0 as d → ∞. In this natural asymptotic setup, the upper and lower bounds
of Theorem 2.6 for both classes Θ+
d (s, a) and Θd (s, a) are sharp. We discuss this issue in
Section 4, cf. Theorem 4.5.
Remark 2.1. Papers [12, 15, 16] use a different Hamming loss defined in terms of vectors of signs. In our setting, this would mean considering not |η̂ − η| but the following loss: Σ_{j=1}^{d} I(sign(θ̂j) ≠ sign(θj)), where θ̂j is an estimator of θj and sign(x) = I(x > 0) − I(x < 0).
Theorems of this section are easily adapted to such a loss, but in this case the corresponding
expressions for the non-asymptotic risk contain additional terms and we do not obtain exact minimax solutions as above. On the other hand, these additional terms are smaller than
Ψ(d, s, a) and Ψ+ (d, s, a), and in the asymptotic analysis, such as the one performed in Sections 4 and 5, can often be neglected. Thus, in many cases, one gets the same asymptotic
results for both losses. We do not discuss this issue in more detail here.
3
Generalizations and extensions
Before proceeding to asymptotic corollaries, we discuss some generalizations and extensions of
the non-asymptotic results of Section 2.
3.1
Dependent observations
It is easy to see that Theorems 2.1 and 2.3 do not use any information on the dependence
between the observations and thus remain valid for dependent Xj . Furthermore, a minimax
optimality property holds under dependence as well. To be specific, denote by Nd (θ, Σ) the
d-dimensional Gaussian distribution with mean θ and covariance matrix Σ. Assume that the
distribution P of (X1 , . . . , Xd ) belongs to the class
P+_d(s, a, σ²) = {N_d(θ, Σ) : θ ∈ Θ+_d(s, a), σ_ii = σ² for all i = 1, . . . , d},
where we denote by σii the diagonal entries of Σ. Note that, for distributions in this class,
Σ can be any covariance matrix with constant diagonal elements.
Theorem 3.1. For any a > 0 and s < d, and for the selector η̂ + in (4) with the threshold t
defined in (5) we have
inf_{η̃} sup_{P∈P+_d(s,a,σ²)} E_P|η̃ − η| = sup_{P∈P+_d(s,a,σ²)} E_P|η̂+ − η| = sΨ+(d, s, a),
where inf_{η̃} denotes the infimum over all selectors η̃, and E_P denotes the expectation with respect to P.
Proof. The upper bound Ψ+ (d, s, a) on the minimax risk follows from the fact that the proofs
of Theorems 2.1 and 2.3 are not affected by the dependence. Indeed, both the selector and the Hamming loss act coordinate-wise. The lower bound Ψ+(d, s, a) on the minimax risk
follows from Theorem 2.2 after observing that the maximum over Pd+ (s, a, σ 2 ) is greater than
the maximum over the subfamily of Gaussian vectors with independent entries {Nd (θ, σ 2 Id ) :
θ ∈ Θ+
d (s, a)}, where Id is the d × d identity matrix.
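A quick Monte Carlo sketch of the phenomenon described in Theorem 3.1 (an illustration we add; all parameter values are hypothetical), using equicorrelated noise with unit variances:

import numpy as np

rng = np.random.default_rng(1)
d, s, a, sigma, rho = 500, 5, 3.0, 1.0, 0.5
t = a / 2 + (sigma**2 / a) * np.log(d / s - 1)          # threshold (5)
theta = np.zeros(d); theta[:s] = a
eta = (theta > 0).astype(int)

losses = []
for _ in range(2000):
    z = rng.standard_normal(d)
    common = rng.standard_normal()
    xi = np.sqrt(1 - rho) * z + np.sqrt(rho) * common   # unit variances, correlation rho
    eta_hat = (theta + sigma * xi >= t).astype(int)     # selector (4)
    losses.append(np.sum(eta_hat != eta))
print(np.mean(losses))   # to be compared with s * Psi^+(d, s, a)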
An interesting consequence of Theorem 3.1 and of (7) is that the model with independent
Xj is the least favorable model, in the exact non-asymptotic sense, for the problem of variable
selection with Hamming loss on the class of vectors Θ+
d (s, a).
This fact was also noticed and discussed in [13] for the detection problem. That paper
considers the Gaussian model with covariance matrix Σ that is not necessarily a diagonal matrix. It is shown that faster detection rates are achieved in the case of dependent observations
(under some assumptions) than in the case of independent data. It would be interesting to
extend these results to the variable selection problem in hand.
3.2
Non-Gaussian models
As a building block for extension to non-Gaussian observations, we first consider the following
simple model. We observe independent random variables X1 , . . . , Xd with values in a measurable space (X , U ) such that s among them are distributed according to the probability
distribution P1 and the other d − s are distributed according to the probability distribution
P0 . We assume that P0 6= P1 . Let f0 and f1 be densities of P0 and P1 with respect to some
dominating measure. Denote by η = (η1 , . . . , ηd ) the vector such that ηj = 1 if the distribution
of Xj is P1 and ηj = 0 if it is P0 . Let Θd (s, f0 , f1 ) be the set of all such vectors η. Consider
the selector η̂ = (η̂1, . . . , η̂d), where
η̂j = I( log (f1/f0)(Xj) ≥ log(d/s − 1) ),  j = 1, . . . , d.    (20)
Theorem 3.2. For any s < d, the selector η̂ in (20) satisfies
sup_{η∈Θ_d(s,f0,f1)} E|η̂ − η| = inf_{η̃} sup_{η∈Θ_d(s,f0,f1)} E|η̃ − η| = sΨ,    (21)
where inf_{η̃} denotes the infimum over all selectors, and
Ψ = P1( log (f1/f0)(X) < log(d/s − 1) ) + (d/s − 1) P0( log (f1/f0)(X) ≥ log(d/s − 1) ).    (22)
Proof. The proof of the upper bound supη∈Θd (s,f0 ,f1 ) E|η̂ − η| ≤ sΨ is obvious. The proof
of the lower bound inf η̃ supη∈Θd (s,f0 ,f1 ) E|η̃ − η| ≥ sΨ follows the same lines as the proof of
Theorem 2.2 with the only difference in the definition of probability measures. We replace
the Gaussian distributions centered at 0 and a by the distributions P0 and P1 , respectively.
With this change, the Bayesian version of the Neyman-Pearson lemma leads to the optimal
test statistic T ∗ of the form
T*(X) = I( (s/d) f1(X) / ((1 − s/d) f0(X)) > 1 ),
and, respectively, to the lower bound (22) on the minimax risk.
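For concreteness, a generic implementation of the selector (20) might look as follows (a sketch we add here; log_f0 and log_f1 are hypothetical vectorized log-density callables, and additive constants common to both cancel in the log-likelihood ratio):

import numpy as np

def lr_selector(X, log_f0, log_f1, d, s):
    thresh = np.log(d / s - 1)
    return (log_f1(X) - log_f0(X) >= thresh).astype(int)

# Example with Gaussian log-densities (up to a common constant), as in Example 1 below:
sigma, a0, a1 = 1.0, 0.0, 3.0
log_f0 = lambda x: -0.5 * ((x - a0) / sigma) ** 2
log_f1 = lambda x: -0.5 * ((x - a1) / sigma) ** 2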
Suppose now that instead of two measures P0 and P1 we have a parametric family of
probability measures {Pa , a ∈ U } where U ⊆ R. Let f a be a density of Pa with respect to
some dominating measure. Recall that the family {fa , a ∈ U } is said to have the Monotone
Likelihood Ratio (MLR) property if, for all a0 , a1 in U such that a0 < a1 , the log-likelihood
ratio log(fa1 (X)/fa0 (X)) is an increasing function of X. In particular, this implies, cf. [18,
Lemma 3.4.2] that {fa , a ∈ U } is a stochastically ordered family, i.e.,
Fa(x) ≥ Fa′(x) for all x, if a < a′,    (23)
where Fa is the cumulative distribution function corresponding to fa . Using these facts, we
generalize the non-asymptotic results of the previous section in two ways. First, we allow for
not necessarily Gaussian distributions and second, instead of the set of parameters Θ+_d(s, a), we consider the following set with two restrictions:
Θ+_d(s, a0, a1) = {θ ∈ R^d : there exists a set S ⊆ {1, . . . , d} with s elements such that θj ≥ a1 for all j ∈ S, and θj ≤ a0 for all j ∉ S},
where a0 < a1. In what follows, we use the notation fj = f_{a_j}, j = 0, 1.
Theorem 3.3. Let {fa , a ∈ U } be a family with the MLR property, and let a0 , a1 ∈ U be such
that a0 < a1 . Then, for any s < d, the selector η̂ in (20) with f0 = fa0 and f1 = fa1 satisfies
sup_{θ∈Θ+_d(s,a0,a1)} Eθ|η̂ − η| = inf_{η̃} sup_{θ∈Θ+_d(s,a0,a1)} Eθ|η̃ − η| = sΨ,
where inf η̃ denotes the infimum over all selectors and Ψ is given in (22).
Proof. We have
sup_{θ∈Θ+_d(s,a0,a1)} (1/s) Eθ|η̂ − η| = sup_{a≥a1} P_a( log (f1/f0)(X) < log(d/s − 1) ) + sup_{a≤a0} (d/s − 1) P_a( log (f1/f0)(X) ≥ log(d/s − 1) ) = Ψ,
where the last equality is due to the monotonicity of log (f1/f0)(X) and to the stochastic order property (23). The proof of the lower bound
inf_{η̃} sup_{θ∈Θ+_d(s,a0,a1)} (1/s) Eθ|η̃ − η| ≥ Ψ
follows from the fact sup_{θ∈Θ+_d(s,a0,a1)} Eθ|η̃ − η| ≥ sup_{η∈Θ_d(s,f0,f1)} E|η̃ − η| and Theorem 3.2.
Example 1. Let fa be the Gaussian N(a, σ²) density with some σ² > 0, and let a0 < a1. For f1 = f_{a1} and f0 = f_{a0}, the log-likelihood ratio
log (f1/f0)(X) = X (a1 − a0)/σ² − (a1² − a0²)/(2σ²)
is increasing in X. By Theorem 3.3, the minimax optimal selector η̂ on the class Θ+_d(s, a0, a1) is a vector with components
η̂j = I(Xj ≥ t(a0, a1)),  j = 1, . . . , d,    (24)
where
t(a0, a1) = (a1 + a0)/2 + σ² log(d/s − 1)/(a1 − a0).
Note that for a0 = 0 it coincides with the selector in (4) with a = a1, which is minimax optimal on Θ+_d(s, a1). Moreover, the minimax risk only depends on a0 and a1 through the difference δ = a1 − a0:
Ψ = Φ(−δ/(2σ) + (σ/δ) log(d/s − 1)) + (d/s − 1) Φ(−δ/(2σ) − (σ/δ) log(d/s − 1)).
Example 2 Let Pa be the Bernoulli distribution B(a) with parameter a ∈ (0, 1), and
0 < a0 < a1 < 1. Denoting by fa the density of Pa with respect to the counting measure we
have, for f1 = fa1 and f0 = fa0 ,
log
a
f1
1 − a0
1 − a1
1
(X) = X log
+ log
f0
1 − a1 a0
1 − a0
which is increasing in X for 0 < a_0 < a_1 < 1. The minimax optimal selector η̂ on the class Θ_d^+(s, a_0, a_1) is a vector with components η̂_j in (24) where the threshold t(a_0, a_1) is given by

t(a_0, a_1) = ( log(d/s − 1) − log((1 − a_1)/(1 − a_0)) ) / log( a_1(1 − a_0) / ((1 − a_1) a_0) ).
Note that the minimax selector η̂j differs from the naive selector η̂jn = Xj . Indeed since
Xj ∈ {0, 1} we have η̂j = 1 if either Xj = 1 or t(a0 , a1 ) ≤ 0, and η̂j = 0 if either Xj = 0 or
t(a0 , a1 ) > 1. The value Ψ in the minimax risk has the form
Ψ = P_{a_1}( X < t(a_0, a_1) ) + (d/s − 1) P_{a_0}( X ≥ t(a_0, a_1) )
  = d/s − 1                      if t(a_0, a_1) ≤ 0,
  = 1 − a_1 + a_0 (d/s − 1)      if 0 < t(a_0, a_1) < 1,
  = 1                            if t(a_0, a_1) ≥ 1.
In the asymptotic regime when d → ∞ and s → ∞, the minimax risk sΨ can converge to
0 only when the parameters d, s, a0 , a1 are kept such that 0 < t(a0 , a1 ) < 1, and in addition
(1 − a1 )s → 0, a0 (d − s) → 0. Thus, the risk can converge to 0 only when the Bernoulli
probabilities a1 and a0 tend sufficiently fast to 1 and to 0, respectively.
Example 3. Let Pa be the Poisson distribution with parameter a > 0, and let a1 > a0 > 0.
Denoting by fa the density of Pa with respect to the counting measure we have
log (f_1/f_0)(X) = X log(a_1/a_0) − a_1 + a_0,
which is increasing in X. The components of the minimax optimal selector η̂ are given by (24)
with
t(a_0, a_1) = ( log(d/s − 1) + a_1 − a_0 ) / log(a_1/a_0).
Note that t(a0 , a1 ) > 0 as soon as d/s ≥ 2 and a1 > a0 > 0. The minimax risk has the form
Ψ = Pa1 (X < t(a0 , a1 )) + (d/s − 1)Pa0 (X ≥ t(a0 , a1 )).
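For the Bernoulli and Poisson families of Examples 2 and 3, the thresholds have the closed forms given above. The following Python sketch evaluates them; the numerical values are illustrative assumptions.

```python
import numpy as np

def bernoulli_threshold(a0, a1, d, s):
    # Threshold on X_j in {0, 1} obtained from the Bernoulli log-likelihood ratio.
    num = np.log(d / s - 1) - np.log((1 - a1) / (1 - a0))
    den = np.log(a1 * (1 - a0) / ((1 - a1) * a0))
    return num / den

def poisson_threshold(a0, a1, d, s):
    # Threshold on X_j for Poisson(a0) vs Poisson(a1), a1 > a0 > 0.
    return (np.log(d / s - 1) + a1 - a0) / np.log(a1 / a0)

# Illustrative check: the selector in both cases is eta_hat_j = 1{ X_j >= t }.
print(bernoulli_threshold(0.05, 0.9, d=1000, s=10))
print(poisson_threshold(1.0, 6.0, d=1000, s=10))
```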
3.3 Crowdsourcing with sparsity constraint
The problem of crowdsourcing with two classes is a clustering problem that can be formalized
as follows, cf. [11]. Assume that m workers provide class assignments for d items. The class
assignment Xij of the ith worker for the jth item is assumed to have a Bernoulli distribution
B(ai0 ) if the jth item belongs to class 0, and a Bernoulli distribution B(ai1 ) if it belongs to class
1. Here, a_{i0}, a_{i1} ∈ (0, 1) and a_{i0} ≠ a_{i1} for i = 1, . . . , m. The observations (Xij , i = 1, . . . , m, j =
1, . . . , d) are assumed to be jointly independent. Thus, each vector Xj = (X1j , . . . , Xmj ) is
distributed according to P0 or to P1 where each of these two measures is a product of Bernoulli
measures, and P0 ≠ P1 . We assume that there are s vectors Xj with distribution P1 , and
d − s vectors Xj with distribution P0 . The aim is to recover the binary vector of class labels
η = (η1 , . . . , ηd ) based on the observations X = (X1 , . . . , Xd ). Here, ηj ∈ {0, 1} satisfies ηj = k
if the jth item belongs to class k ∈ {0, 1}. Thus, we are in the framework of Theorem 3.2 with
a particular form of the log-likelihood ratio
log (f_1/f_0)(X_j) = Σ_{i=1}^m [ X_{ij} log( a_{i1}(1 − a_{i0}) / ((1 − a_{i1}) a_{i0}) ) + log( (1 − a_{i1})/(1 − a_{i0}) ) ],     (25)
where fk is the density of Pk , k ∈ {0, 1}. The following corollary is an immediate consequence
of Theorem 3.2.
Corollary 3.1. Let s < d, a_{i0}, a_{i1} ∈ (0, 1) and a_{i0} ≠ a_{i1} for i = 1, . . . , m. Then, the selector η̂ in (20) with log (f_1/f_0)(X_j) defined in (25) is minimax optimal on the class Θ_d(s, f_0, f_1). The corresponding minimax risk is given in (22).
Thus, a selector which is minimax optimal in the exact non-asymptotic sense is explicitly
given by formula (20). For suitable combinations of parameters d, s, ai0 , ai1 , the exact value of
the minimax risk Ψ can be further analyzed to obtain asymptotics of interest. Gao et al. [11] have studied a setting of the crowdsourcing problem which is different from the one we consider here. They did not assume sparsity s, and instead of the class Θ_d(s, f_0, f_1) of s-sparse binary sequences, they considered the class of all possible binary sequences {0, 1}^d. For this class, Gao et al. [11] did not derive the exact minimax solution but rather analyzed specific asymptotics of the minimax risk inf_{η̃} sup_{η∈{0,1}^d} d^{−1} E|η̃ − η| from a large deviations perspective.
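As a concrete illustration of Corollary 3.1, the following Python sketch computes the log-likelihood ratio (25) and the resulting selector. It assumes, as in the proofs above, that the selector (20) thresholds the log-likelihood ratio at log(d/s − 1), and that the worker parameters are known; all numerical values are illustrative.

```python
import numpy as np

def crowdsourcing_selector(X, a0, a1, s):
    """Select the s-sparse binary label vector from worker votes.

    X  : (m, d) array of 0/1 votes (X[i, j] = vote of worker i on item j).
    a0 : (m,) vote probabilities under class 0;  a1 : (m,) under class 1.
    Returns eta_hat_j = 1{ log(f1/f0)(X_j) >= log(d/s - 1) }.
    """
    m, d = X.shape
    slope = np.log(a1 * (1 - a0) / ((1 - a1) * a0))   # per-worker weight
    offset = np.log((1 - a1) / (1 - a0))
    llr = X.T @ slope + offset.sum()                   # log (f1/f0)(X_j), cf. (25)
    return (llr >= np.log(d / s - 1)).astype(int)

# Illustrative usage with assumed worker accuracies:
rng = np.random.default_rng(1)
m, d, s = 5, 200, 10
a0, a1 = np.full(m, 0.2), np.full(m, 0.8)
eta = np.zeros(d, dtype=int); eta[:s] = 1
X = rng.binomial(1, np.where(eta == 1, a1[:, None], a0[:, None]))
print(np.abs(crowdsourcing_selector(X, a0, a1, s) - eta).sum())  # Hamming loss
```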
4 Asymptotic analysis. Phase transitions
In this section, we conduct the asymptotic analysis of the problem of variable selection. The
results are derived as corollaries of the minimax bounds of Section 2. We will assume that
d → ∞ and that parameters a = ad and s = sd depend on d.
The first two asymptotic properties we study here are exact recovery and almost full recovery. We use this terminology following [12, 15] but we define these properties in a different
way, as asymptotic minimax properties for classes of vectors θ. The papers [12, 15] considered
a Bayesian setup with random θ and studied a linear regression model with i.i.d. Gaussian
regressors rather than the sequence model (1).
The study of exact recovery and almost full recovery will be done here only for the classes
Θ_d(s_d, a_d). The corresponding results for the classes Θ_d^+(s_d, a_d) or Θ_d^−(s_d, a_d) are completely
analogous. We do not state them here for the sake of brevity.
Definition 4.1. Let (Θd (sd , ad ))d≥1 be a sequence of classes of sparse vectors.
• We say that exact recovery is possible for (Θd (sd , ad ))d≥1 if there exists a selector η̂
such that
lim_{d→∞} sup_{θ∈Θ_d(s_d,a_d)} E_θ|η̂ − η| = 0.     (26)
In this case, we say that η̂ achieves exact recovery.
• We say that almost full recovery is possible for (Θd (sd , ad ))d≥1 if there exists a
selector η̂ such that
lim_{d→∞} sup_{θ∈Θ_d(s_d,a_d)} (1/s_d) E_θ|η̂ − η| = 0.     (27)
In this case, we say that η̂ achieves almost full recovery.
It is of interest to characterize the sequences (sd , ad )d≥1 , for which exact recovery and
almost full recovery are possible. To describe the impossibility of exact or almost full recovery,
we need the following definition.
Definition 4.2. Let (Θd (sd , ad ))d≥1 be a sequence of classes of sparse vectors.
• We say that exact recovery is impossible for (Θd (sd , ad ))d≥1 if
lim inf_{d→∞} inf_{η̃} sup_{θ∈Θ_d(s_d,a_d)} E_θ|η̃ − η| > 0,     (28)
• We say that almost full recovery is impossible for (Θd (sd , ad ))d≥1 if
lim inf_{d→∞} inf_{η̃} sup_{θ∈Θ_d(s_d,a_d)} (1/s_d) E_θ|η̃ − η| > 0,     (29)
where inf η̃ denotes the infimum over all selectors.
The following general characterization theorem is a straightforward corollary of the results
of Section 2.
Theorem 4.1.
(i) Almost full recovery is possible for (Θd (sd , ad ))d≥1 if and only if
Ψ+(d, s_d, a_d) → 0 as d → ∞.     (30)
In this case, the selector η̂ defined in (8) with threshold (5) achieves almost full recovery.
(ii) Exact recovery is possible for (Θd (sd , ad ))d≥1 if and only if
s_d Ψ+(d, s_d, a_d) → 0 as d → ∞.     (31)
In this case, the selector η̂ defined in (8) with threshold (5) achieves exact recovery.
Although this theorem gives a complete solution to the problem, conditions (30) and (31)
are not quite explicit. Intuitively, we would like to get “phase transition” values a^*_d such that exact (or almost full) recovery is possible for a_d greater than a^*_d and is impossible for a_d smaller than a^*_d. Our aim now is to find such “phase transition” values. We first do it in the
almost full recovery framework.
The following bounds for the tails of the Gaussian distribution will be useful:

√(2/π) e^{−y^2/2} / ( y + √(y^2 + 4) ) ≤ (1/√(2π)) ∫_y^∞ e^{−u^2/2} du ≤ √(2/π) e^{−y^2/2} / ( y + √(y^2 + 8/π) ) < e^{−y^2/2} / ( √(2π) y ),     (32)

for all y ≥ 0. These bounds are an immediate consequence of formula 7.1.13 in [3] with x = y/√2.
Furthermore, we will need some non-asymptotic bounds for the expected Hamming loss
that will play a key role in the subsequent asymptotic analysis. They are given in the next
theorem.
Theorem 4.2. Assume that s < d/2.

(i) If

a^2 ≥ σ^2 ( 2 log((d − s)/s) + W ) for some W > 0,     (33)

then the selector η̂ defined in (8) with threshold (5) satisfies

sup_{θ∈Θ_d(s,a)} E_θ|η̂ − η| ≤ (2 + √(2π)) s Φ(−∆),     (34)

where ∆ is defined by

∆ = W / ( 2 √( 2 log((d − s)/s) + W ) ).     (35)

(ii) If a > 0 is such that

a^2 ≤ σ^2 ( 2 log((d − s)/s) + W ) for some W > 0,     (36)

then

inf_{η̃} sup_{θ∈Θ_d(s,a)} E_θ|η̃ − η| ≥ s Φ(−∆),     (37)

where the infimum is taken over all selectors η̃ and ∆ > 0 is defined in (35).
The proof is given in the Appendix.
The next theorem is an easy consequence of Theorem 4.2. It describes a “phase transition”
for ad in the problem of almost full recovery.
Theorem 4.3. Assume that lim sup_{d→∞} s_d/d < 1/2.

(i) If, for all d large enough,

a_d^2 ≥ σ^2 ( 2 log((d − s_d)/s_d) + A_d √(2 log((d − s_d)/s_d)) )

for an arbitrary sequence A_d → ∞, as d → ∞, then the selector η̂ defined by (8) and (5) achieves almost full recovery:

lim_{d→∞} sup_{θ∈Θ_d(s_d,a_d)} (1/s_d) E_θ|η̂ − η| = 0.

(ii) Moreover, if there exists A > 0 such that for all d large enough the reverse inequality holds:

a_d^2 ≤ σ^2 ( 2 log((d − s_d)/s_d) + A √(2 log((d − s_d)/s_d)) ),

then almost full recovery is impossible:

lim inf_{d→∞} inf_{η̃} sup_{θ∈Θ_d(s_d,a_d)} (1/s_d) E_θ|η̃ − η| ≥ Φ(−A/2) > 0.

Here, inf_{η̃} is the infimum over all selectors η̃.
The proof is given in the Appendix.
Under the natural sparsity assumption that
d/s_d → ∞ as d → ∞,     (38)

Theorem 4.3 shows that the “phase transition” for almost full recovery occurs at the value a_d = a^*_d, where

a^*_d = σ √( 2 log((d − s_d)/s_d) ) (1 + o(1)).     (39)
Furthermore, Theorem 4.3 details the behavior of the o(1) term here.
We now state a corollary of Theorem 4.3 under simplified assumptions.
Corollary 4.1. Assume that (38) holds and set

a_d = σ √( 2(1 + δ) log(d/s_d) ), for some δ > 0.

Then the selector η̂ defined by (8) with threshold t = σ √( 2(1 + ε(δ)) log(d/s_d) ), where ε(δ) > 0 depends only on δ, achieves almost full recovery.
In the particular case of s_d = d^{1−β}(1 + o(1)) for some β ∈ (0, 1), condition (38) is satisfied. Then log(d/s_d) = β(1 + o(1)) log d and it follows from Corollary 4.1 that for a_d = σ √(2β(1 + δ) log d) the selector with components η̂_j = I( |X_j| > σ √(2β(1 + ε) log d) ) achieves almost full recovery. This is in agreement with the findings of [12, 15] where an analogous particular case of s_d was considered for a different model and the Bayesian definition of almost full recovery.
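A quick numerical illustration of this phase transition can be obtained by simulation. The following Python sketch estimates the normalized Hamming risk of the selector (8) with threshold (5) at multiples of the critical level (39); the values of d, s and the multiples are illustrative assumptions.

```python
import numpy as np

def critical_value_afr(d, s, sigma=1.0):
    # Phase-transition level (39) for almost full recovery (up to the o(1) term).
    return sigma * np.sqrt(2 * np.log((d - s) / s))

def hamming_risk(d, s, a, sigma=1.0, n_rep=200, rng=None):
    # Monte Carlo estimate of E|eta_hat - eta| / s for theta with s entries equal to a.
    rng = np.random.default_rng(rng)
    t = a / 2 + sigma**2 * np.log(d / s - 1) / a   # threshold (5)
    losses = []
    for _ in range(n_rep):
        theta = np.zeros(d); theta[:s] = a
        x = theta + sigma * rng.standard_normal(d)
        eta_hat = (np.abs(x) >= t).astype(int)
        losses.append(np.abs(eta_hat - (theta != 0)).sum() / s)
    return np.mean(losses)

d = 10_000
s = int(d ** 0.5)                                   # beta = 1/2, illustrative
a_star = critical_value_afr(d, s)
for c in (0.8, 1.0, 1.5):
    print(c, hamming_risk(d, s, c * a_star))
```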
We now turn to the problem of exact recovery. First, notice that if lim sup_{d→∞} s_d < ∞, the properties of exact recovery and almost full recovery are equivalent. Therefore, it suffices to consider exact recovery only when s_d → ∞ as d → ∞. Under this assumption, the “phase transition” for a_d in the problem of exact recovery is described in the next theorem.
Theorem 4.4. Assume that s_d → ∞ as d → ∞, and lim sup_{d→∞} s_d/d < 1/2.

(i) If

a_d^2 ≥ σ^2 ( 2 log((d − s_d)/s_d) + W_d )

for all d large enough, where the sequence W_d is such that

lim inf_{d→∞} W_d / ( 4 ( log(s_d) + √(log(s_d) log(d − s_d)) ) ) ≥ 1,     (40)

then the selector η̂ defined by (8) and (5) achieves exact recovery:

lim_{d→∞} sup_{θ∈Θ_d(s_d,a_d)} E_θ|η̂ − η| = 0.     (41)

(ii) If the complementary condition holds:

a_d^2 ≤ σ^2 ( 2 log((d − s_d)/s_d) + W_d )

for all d large enough, where the sequence W_d is such that

lim sup_{d→∞} W_d / ( 4 ( log(s_d) + √(log(s_d) log(d − s_d)) ) ) < 1,     (42)

then exact recovery is impossible, and moreover we have

lim inf_{d→∞} inf_{η̃} sup_{θ∈Θ_d(s_d,a_d)} E_θ|η̃ − η| = ∞.

Here, inf_{η̃} is the infimum over all selectors η̃.
The proof is given in the Appendix.
Some remarks are in order here. First of all, Theorem 4.4 shows that the “phase transition” for exact recovery occurs at W_d = 4( log(s_d) + √(log(s_d) log(d − s_d)) ), which corresponds to the critical value a_d = a^*_d of the form

a^*_d = σ ( √(2 log(d − s_d)) + √(2 log s_d) ).     (43)

This value is greater than the critical value a^*_d for almost full recovery, cf. (39), which is intuitively quite clear. The optimal threshold (5) corresponding to (43) has a simple form:

t^*_d = a^*_d/2 + (σ^2/a^*_d) log( d/s_d − 1 ) = σ √(2 log(d − s_d)).

For example, if s_d = d^{1−β}(1 + o(1)) for some β ∈ (0, 1), then a^*_d ∼ σ(1 + √(1 − β)) √(2 log d). In this particular case, Theorem 4.4 implies that if a_d = σ(1 + √(1 − β)) √(2(1 + δ) log d) for some δ > 0, then exact recovery is possible and the selector with threshold t = σ √(2(1 + ε) log d) for some ε > 0 achieves exact recovery. This is in agreement with the results of [12, 15] where an analogous particular case of s_d was considered for a different model and the Bayesian definition of exact recovery. For our model, even a sharper result is true; namely, a simple universal threshold t = σ √(2 log d) guarantees exact recovery adaptively in the parameters a and s. Intuitively, this is suggested by the form of t^*_d. The precise statement is given in Theorem 5.1 below.
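The following Python sketch illustrates this universal-threshold behaviour numerically; the values of d, s and the multiples of the critical level (43) are illustrative assumptions.

```python
import numpy as np

def exact_recovery_rate(d, s, a, sigma=1.0, n_rep=200, rng=None):
    # Fraction of repetitions in which the support is recovered exactly
    # by the universal threshold t = sigma * sqrt(2 log d).
    rng = np.random.default_rng(rng)
    t = sigma * np.sqrt(2 * np.log(d))
    hits = 0
    for _ in range(n_rep):
        theta = np.zeros(d); theta[:s] = a
        x = theta + sigma * rng.standard_normal(d)
        eta_hat = np.abs(x) >= t
        hits += np.array_equal(eta_hat, theta != 0)
    return hits / n_rep

d, s = 10_000, 100                                          # s = d^(1/2), illustrative
a_star = np.sqrt(2 * np.log(d - s)) + np.sqrt(2 * np.log(s))  # critical value (43), sigma = 1
print(exact_recovery_rate(d, s, 1.1 * a_star))
print(exact_recovery_rate(d, s, 0.8 * a_star))
```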
Finally, we state an asymptotic corollary of Theorem 2.6 showing that the selector η̂ considered above is sharp in the asymptotically minimax sense with respect to the risk defined as
the probability of wrong recovery.
Theorem 4.5. Assume that exact recovery is possible for the classes (Θ_d(s_d, a_d))_{d≥1} and (Θ_d^+(s_d, a_d))_{d≥1}, that is, condition (31) holds. Then, for the selectors η̂ and η̂^+ defined by (8), (4) and (5), and for the selector η̄ defined by (12) and (13), we have

lim_{d→∞} sup_{θ∈Θ_d^+(s_d,a_d)} P_θ(S_{η̂^+} ≠ S(θ)) / ( s_d Ψ+(d, s_d, a_d) ) = lim_{d→∞} inf_{η̃∈T} sup_{θ∈Θ_d^+(s_d,a_d)} P_θ(S_{η̃} ≠ S(θ)) / ( s_d Ψ+(d, s_d, a_d) ) = 1,

lim_{d→∞} sup_{θ∈Θ_d(s_d,a_d)} P_θ(S_{η̄} ≠ S(θ)) / ( s_d Ψ(d, s_d, a_d) ) = lim_{d→∞} inf_{η̃∈T} sup_{θ∈Θ_d(s_d,a_d)} P_θ(S_{η̃} ≠ S(θ)) / ( s_d Ψ(d, s_d, a_d) ) = 1,

and

lim sup_{d→∞} sup_{θ∈Θ_d(s_d,a_d)} P_θ(S_{η̂} ≠ S(θ)) / ( s_d Ψ+(d, s_d, a_d) ) ≤ 2.
Note that the threshold (5) depends on the parameters s and a, so that the selectors
considered in all the results above are not adaptive. In the next section, we propose adaptive
selectors that achieve almost full recovery and exact recovery without the knowledge of s and a.
Remark 4.1. Another procedure of variable selection is the exhaustive search estimator of the support S(θ) defined as

S̃ = argmax_{C⊆{1,...,d}: |C|=s} Σ_{j∈C} X_j.

This estimator was studied by Butucea et al. [7]. The selection procedure can be equivalently stated as choosing the indices j corresponding to the s largest order statistics of the sample (X_1, . . . , X_d). In [7, Theorem 2.5], it was shown that, on the class Θ_d^+(s_d, a_d), the probability of wrong recovery P_θ(S̃ ≠ S(θ)) tends to 0 as d → ∞ under a stronger condition on (s_d, a_d) than (31). The rate of this convergence was not analyzed there. If we denote by η_{S̃} the selector with components I(j ∈ S̃) for j from 1 to d, it can be proved that E|η_{S̃} − η| ≤ 2E|η̂^+ − η| and thus the risk of η_{S̃} is within a factor 2 of the minimax risk over the class Θ_d^+(s, a). Thus, it does not enjoy the non-asymptotic sharp optimality that we have established for the selector defined by (4) and (5) over the class Θ_d^+(s, a) and for the selector defined by (12) and (13) over the class Θ_d(s, a).
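In code, the exhaustive search estimator reduces to keeping the s largest observations; a minimal Python sketch (not from the original text) is:

```python
import numpy as np

def top_s_selector(x, s):
    # Exhaustive-search estimator: keep the indices of the s largest observations,
    # which is equivalent to maximizing sum_{j in C} X_j over all |C| = s.
    support = np.argsort(x)[-s:]
    eta = np.zeros(x.size, dtype=int)
    eta[support] = 1
    return eta
```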
5 Adaptive selectors
In this section, we consider the asymptotic setup as in Section 4 and construct the selectors
that provide almost full and exact recovery adaptively, that is, without the knowledge of a
and s.
As discussed in Section 4, the issue of adaptation for exact recovery is almost trivial.
Indeed, the expressions for minimal value a∗d , for which exact recovery is possible (cf. (43)),
and for the corresponding optimal threshold t^*_d suggest that taking a selector with the universal threshold t = σ √(2 log d) is enough to achieve exact recovery simultaneously for all values (a_d, s_d) for which exact recovery is possible. This point is formalized in the next theorem.
Theorem 5.1. Assume that s_d → ∞ as d → ∞ and that lim sup_{d→∞} s_d/d < 1/2. Let the sequence (a_d)_{d≥1} be above the phase transition level for exact recovery, that is, a_d ≥ a^*_d for all d, where a^*_d is defined in (43). Then the selector η̂ defined by (8) with threshold t = σ √(2 log d) achieves exact recovery.
The proof of this theorem is given in the Appendix.
We now turn to the problem of adaptation for almost full recovery. Ideally, we would like to
construct a selector that achieves almost full recovery for all sequences (sd , ad )d≥1 for which
almost full recovery is possible. We have seen in Section 4 that this includes a much broader
range of values than in case of exact recovery. Thus, using the adaptive selector of Theorem 5.1
for almost full recovery does not give a satisfactory result, and we have to take a different
approach.
Following Section 4, we will use the notation

a_0(s, A) ≜ σ ( 2 log((d − s)/s) + A √(log((d − s)/s)) )^{1/2}.

As shown in Section 4, it makes sense to consider the classes Θ_d(s, a) only when a ≥ a_0(s, A) with some A > 0, since for other values of a almost full recovery is impossible. Only such classes will be studied below.

In the asymptotic setup of Section 4 we have used the assumption that d/s_d → ∞ (the sparsity assumption), which is now transformed into the condition

s_d ∈ S_d ≜ {1, 2, . . . , s^*_d}, where s^*_d is an integer such that d/s^*_d → ∞ as d → ∞.     (44)
Assuming sd to be known, we have shown in Section 4 that almost full recovery is achievable for
all a ≥ a0 (sd , Ad ), where Ad tends to infinity as d → ∞. The rate of growth of Ad was allowed
to be arbitrarily slow there, cf. Theorem 4.3. However, for adaptive estimation considered in
this section we will need the following mild assumption on the growth of Ad :
A_d ≥ c_0 ( log log( d/s^*_d − 1 ) )^{1/2},     (45)
where c0 > 0 is an absolute constant. In what follows, we will assume that s∗d ≤ d/4, so that
the right-hand side of (45) is well-defined.
Consider a grid of points {g_1, . . . , g_M} on S_d, where g_j = 2^{j−1} and M is the maximal integer such that g_M ≤ s^*_d. For each g_m, m = 1, . . . , M, we define a selector

η̂(g_m) = (η̂_j(g_m))_{j=1,...,d} ≜ ( I( |X_j| ≥ w(g_m) ) )_{j=1,...,d},

where

w(s) = σ √( 2 log( d/s − 1 ) ).

Note that w(s) is monotonically decreasing. We now choose the “best” index m, for which g_m is near the true (but unknown) value of s, by the following data-driven procedure:

m̂ = min{ m ∈ {2, . . . , M} : Σ_{j=1}^d I( w(g_k) ≤ |X_j| < w(g_{k−1}) ) ≤ τ g_k for all k ≥ m },     (46)

where

τ = ( log(d/s^*_d − 1) )^{−1/7},

and we set m̂ = M if the set in (46) is empty. Finally, we define an adaptive selector as η̂^ad = η̂(g_{m̂}).
This adaptive procedure is quite natural in the sense that it can be related to the Lepski method or to wavelet thresholding, which are widely used for adaptive estimation. Indeed, as in wavelet methods, we consider dyadic blocks determined by the grid points g_j. The value Σ_{j=1}^d I( w(g_k) ≤ |X_j| < w(g_{k−1}) ) is the number of observations within the kth block. If this number is too small (below a suitably chosen threshold) we decide that the block corresponds to pure noise and it is rejected; in other words, this k is not considered as a good candidate for m̂. This argument is analogous to wavelet thresholding. We start from the largest k (equivalently, smallest w(g_k)) and perform this procedure until we find the first block which is not rejected. The corresponding value k determines our choice of m̂ as defined in (46).
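For concreteness, the following Python sketch implements the data-driven choice (46) and the resulting adaptive selector η̂^ad. It is a direct transcription of the definitions above (with σ and s^*_d passed as inputs), not an optimized implementation, and the values in the usage part are illustrative assumptions.

```python
import numpy as np

def adaptive_selector(x, s_star, sigma=1.0):
    """Adaptive selector eta_hat^ad: threshold |X_j| at w(g_{m_hat}), m_hat from (46)."""
    d = x.size
    w = lambda s: sigma * np.sqrt(2 * np.log(d / s - 1))      # threshold w(s)
    tau = np.log(d / s_star - 1) ** (-1 / 7)
    grid, g = [], 1                                            # dyadic grid g_j = 2^(j-1)
    while g <= s_star:
        grid.append(g)
        g *= 2
    M = len(grid)
    # counts[k-2] = number of |X_j| falling in the k-th block, k = 2..M
    counts = [np.sum((w(grid[k - 1]) <= np.abs(x)) & (np.abs(x) < w(grid[k - 2])))
              for k in range(2, M + 1)]
    m_hat = M                                                  # default if the set in (46) is empty
    for m in range(2, M + 1):
        if all(counts[k - 2] <= tau * grid[k - 1] for k in range(m, M + 1)):
            m_hat = m
            break
    return (np.abs(x) >= w(grid[m_hat - 1])).astype(int)

# Illustrative usage (assumed values):
rng = np.random.default_rng(2)
d, s, a = 2000, 20, 4.0
x = np.zeros(d); x[:s] = a; x += rng.standard_normal(d)
print(adaptive_selector(x, s_star=d // 4).sum())
```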
Theorem 5.2. Let c0 ≥ 16. Then the selector η̂ ad adaptively achieves almost full recovery in
the following sense:
lim_{d→∞} sup_{θ∈Θ_d(s_d,a_d)} (1/s_d) E_θ|η̂^ad − η| = 0     (47)

for all sequences (s_d, a_d)_{d≥1} such that (44) holds and a_d ≥ a_0(s_d, A_d), where A_d satisfies (45).
Remark 5.1. Another family of variable selection methods originates from the theory of multiple testing. These are, for example, the Benjamini-Hochberg, Benjamini-Yekutieli or SLOPE
procedures. We refer to [6] for a recent overview and comparison of these techniques. They
have the same structure as the exhaustive search procedure in that they keep only the largest
order statistics. The difference is that the value s (which is usually not known in practice) is
replaced by an estimator ŝ obtained from comparing the ith order statistic of (|X1 |, . . . , |Xd |)
with a suitable normal quantile depending on i. The analysis of these methods in the literature
is focused on the evaluation of false discovery rate (FDR). Asymptotic power calculations for
the Benjamini-Hochberg procedure are given in [4]. To the best of our knowledge, the behavior
of the risk P_θ(S̃ ≠ S(θ)) and of the Hamming risk, even from a simple consistency perspective,
was not studied.
Remark 5.2. In this paper, the variance σ was supposed to be known. Extension to the case
of unknown σ can be treated as described, for example, in [9]. Namely, we replace σ in the
definition of the threshold w(s) by a statistic σ̂ defined in [9, Section 3]. As shown in [9,
Proposition 1], this statistic is such that σ ≤ σ̂ ≤ C ′ σ with high probability provided that
s ≤ d/2, and d ≥ d0 for some absolute constants C ′ > 1, d0 ≥ 1. Then, replacing σ by σ̂
in the expression for w(s), one can show that Theorem 5.2 remains valid with this choice of
w(s) independent of σ, up to a change in numerical constants in the definition of the adaptive
procedure. With this modification, we obtain a procedure which is completely data-driven and
enjoys the property of almost full recovery under the mild conditions given in Theorem 5.2. The
same modification can be done in Theorem 5.1. Namely, under the assumptions of Theorem 5.1
and a_d ≥ c′ a^*_d, where c′ ≥ 1 is a numerical constant, the selector η̂ defined by (8) with threshold t = σ̂ √(2 log d) achieves exact recovery when σ is unknown.
Remark 5.3. In this section, the problem of adaptive variable selection was considered only for the classes Θ_d(s_d, a_d). The corresponding results for the classes Θ_d^+(s_d, a_d) and Θ_d^−(s_d, a_d) are completely analogous. We do not state them here for the sake of brevity.
6 Appendix
Proof of Theorem 2.3. We have, for any t > 0,

|η̂ − η| = Σ_{j: η_j=0} η̂_j + Σ_{j: η_j=1} (1 − η̂_j) = Σ_{j: η_j=0} I( |σξ_j| ≥ t ) + Σ_{j: η_j=1} I( |σξ_j + θ_j| < t ).

Now, for any θ ∈ Θ_d(s, a) and any t > 0,

E( I( |σξ_j + θ_j| < t ) ) ≤ P( |θ_j| − |σξ_j| < t ) ≤ P( |ξ| > (a − t)/σ ) = P( |ξ| > (a − t)_+/σ ),

where ξ denotes a standard Gaussian random variable. Thus, for any θ ∈ Θ_d(s, a),

(1/s) E_θ|η̂ − η| ≤ (d/s − 1) P( |ξ| ≥ t/σ ) + P( |ξ| > (a − t)_+/σ ) = 2Ψ(d, s, a).     (48)

Note that the inequality here is valid for any t > 0, not necessarily for t defined in (5).
Proof of Theorem 2.1. Arguing as in the proof of Theorem 2.3, we obtain

|η̂^+ − η| = Σ_{j: η_j=0} I( σξ_j ≥ t ) + Σ_{j: η_j=1} I( σξ_j + θ_j < t ),

and E( I( σξ_j + θ_j < t ) ) ≤ P( ξ < (t − a)/σ ). Thus, for any θ ∈ Θ_d^+(s, a),

(1/s) E_θ|η̂^+ − η| ≤ (d/s − 1) P( ξ ≥ t/σ ) + P( ξ < (t − a)/σ ) = Ψ+(d, s, a).
Proof of Theorem 2.2. An estimator η̄ = (η̄1 , . . . , η̄d ) of η (not necessarily a selector) will
be called separable if η̄j depends only on Xj for all j = 1, . . . , d. First note that instead of
considering all selectors, it suffices to prove the lower bound for the class of separable estimators
η̄ with components η̄j ∈ [0, 1]. Indeed, for any selector ηe, using Jensen’s inequality, we obtain
E_θ|η̃ − η| = Σ_{j=1}^d E_θ|η̃_j − η_j| = Σ_{j=1}^d E_{j,θ_j} E_{{θ_i, i≠j}} |η̃_j − η_j| ≥ Σ_{j=1}^d E_{j,θ_j}|η̄_j − η_j|,

where η̄_j = E_{{θ_i, i≠j}}(η̃_j), and the symbols E_{j,θ_j} and E_{{θ_i, i≠j}} stand for the expectations over the distributions of X_j and (X_1, . . . , X_{j−1}, X_{j+1}, . . . , X_d), respectively. Clearly, η̄_j depends only on X_j and takes on values in [0, 1]. Thus,

inf_{η̃} sup_{θ∈Θ_d^+(s,a)} (1/s) E_θ|η̃ − η| ≥ inf_{η̄∈T_[0,1]} sup_{θ∈Θ_d^+(s,a)} (1/s) Σ_{j=1}^d E_{j,θ_j}|η̄_j − η_j|     (49)

where T_[0,1] is the class of all separable estimators η̄ with components η̄_j ∈ [0, 1].

Let Θ′ be the set of all θ in Θ_d^+(s, a) such that s components θ_j of θ are equal to a and the remaining d − s components are 0. Denote by |Θ′| = (d choose s) the cardinality of Θ′. Then, for any η̄ ∈ T_[0,1] we have

sup_{θ∈Θ_d^+(s,a)} (1/s) Σ_{j=1}^d E_{j,θ_j}|η̄_j − η_j| ≥ (1/(s|Θ′|)) Σ_{θ∈Θ′} Σ_{j=1}^d E_{j,θ_j}|η̄_j − η_j|     (50)
= (1/(s|Θ′|)) Σ_{j=1}^d [ Σ_{θ∈Θ′: θ_j=0} E_{j,0}(η̄_j) + Σ_{θ∈Θ′: θ_j=a} E_{j,a}(1 − η̄_j) ]
= (1/s) Σ_{j=1}^d [ (1 − s/d) E_{j,0}(η̄_j) + (s/d) E_{j,a}(1 − η̄_j) ]
≥ (d/s) inf_{T∈[0,1]} [ (1 − s/d) E_0(T) + (s/d) E_a(1 − T) ],

where we have used that |{θ ∈ Θ′ : θ_j = a}| = (d−1 choose s−1) = s|Θ′|/d. In the last line of display (50), E_u is understood as the expectation with respect to the distribution of X = u + σξ, where ξ ∼ N(0, 1), and inf_{T∈[0,1]} denotes the infimum over all [0, 1]-valued statistics T(X). Set

L* = inf_{T∈[0,1]} [ (1 − s/d) E_0(T) + (s/d) E_a(1 − T) ].

By the Bayesian version of the Neyman-Pearson lemma, the infimum here is attained for T = T* given by

T*(X) = I( (s/d) φ_σ(X − a) / ((1 − s/d) φ_σ(X)) > 1 )

where φ_σ(·) is the density of an N(0, σ^2) distribution. Thus,

L* = (1 − s/d) P( φ_σ(σξ − a)/φ_σ(σξ) > d/s − 1 ) + (s/d) P( φ_σ(σξ)/φ_σ(σξ + a) ≤ d/s − 1 ).

Combining this with (49) and (50), we get

inf_{η̃} sup_{θ∈Θ_d^+(s,a)} (1/s) E_θ|η̃ − η|
≥ (d/s − 1) P( exp( aξ/σ − a^2/(2σ^2) ) > d/s − 1 ) + P( exp( aξ/σ + a^2/(2σ^2) ) ≤ d/s − 1 )
= (d/s − 1) P( ξ > a/(2σ) + (σ/a) log(d/s − 1) ) + P( ξ ≤ −a/(2σ) + (σ/a) log(d/s − 1) )
= Ψ+(d, s, a).
Proof of Proposition 2.1. Using (7) it suffices to show that the right hand side of (14) is
bounded from above and from below by sΨ+ (d, s, a). The upper bound is obvious in view of
Theorem 2.1. To prove the lower bound, we follow the same lines as in the proof of Theorem 2.2. The only difference is that, instead of (49), we now use the inequality

inf_{η̃} (1/(s|Θ′|)) Σ_{θ∈Θ′} E_θ|η̃ − η| ≥ inf_{η̄∈T_[0,1]} (1/(s|Θ′|)) Σ_{θ∈Θ′} Σ_{j=1}^d E_{j,θ_j}|η̄_j − η_j|,
and we do not need the first inequality in (50).
Proof of Theorem 2.4. For any θ ∈ Θd (s, a), we have
E_θ|η̄ − η| = Σ_{j: θ_j=0} P_{j,0}(η̄_j = 1) + Σ_{j: θ_j≥a} P_{j,θ_j}(η̄_j = 0) + Σ_{j: θ_j≤−a} P_{j,θ_j}(η̄_j = 0)     (51)
= (d − s) P( e^{−a^2/(2σ^2)} cosh(aξ/σ) > d/s − 1 ) + Σ_{j: θ_j≥a} P_{j,θ_j}(η̄_j = 0) + Σ_{j: θ_j≤−a} P_{j,θ_j}(η̄_j = 0),

where P_{j,θ_j} denotes the distribution of X_j, and ξ is a standard Gaussian random variable. We now bound from above the probabilities P_{j,θ_j}(η̄_j = 0). Introduce the notation

g(x) = cosh( (x + σξ)a/σ^2 ), ∀ x ∈ R,

and

u = exp( a^2/(2σ^2) + log(d/s − 1) ).

We have

P_{j,θ_j}(η̄_j = 0) = P( g(θ_j) < u ) = P( −b − θ_j < σξ < b − θ_j ),

where b = (σ^2/a) arccosh(u) > 0. It is easy to check that the function x ↦ P( −b − x < σξ < b − x ) is monotonically decreasing on [0, ∞). Therefore, the maximum of P( −b − θ_j < σξ < b − θ_j ) over θ_j ≥ a is attained at θ_j = a. Thus, for any θ_j ≥ a we have

P_{j,θ_j}(η̄_j = 0) ≤ P( g(a) < u ) = P( e^{−a^2/(2σ^2)} cosh( (a + σξ)a/σ^2 ) < d/s − 1 ).     (52)

Analogously, for any θ_j ≤ −a,

P_{j,θ_j}(η̄_j = 0) ≤ P( e^{−a^2/(2σ^2)} cosh( (−a + σξ)a/σ^2 ) < d/s − 1 ) = P( e^{−a^2/(2σ^2)} cosh( (a + σξ)a/σ^2 ) < d/s − 1 ),     (53)

where the last equality follows from the fact that ξ has the same distribution as −ξ and cosh is an even function. Combining (51)–(53) proves the theorem.
Proof of Theorem 2.5. We follow the lines of the proof of Theorem 2.2 with suitable modifications. The same argument shows that instead of considering all selectors, it suffices to prove
the lower bound for the class of separable estimators η̄ with components η̄j ∈ [0, 1]. Thus,
inf_{η̃} sup_{θ∈Θ_d(s,a)} E_θ|η̃ − η| ≥ inf_{η̄∈T_[0,1]} sup_{θ∈Θ_d(s,a)} Σ_{j=1}^d E_{j,θ_j}|η̄_j − η_j|     (54)

where T_[0,1] is the class of all separable estimators η̄ with components η̄_j ∈ [0, 1] and E_{j,θ_j} denotes the expectation with respect to P_{j,θ_j}.

Let Θ+ and Θ− be the sets of all θ in Θ_d(s, a) such that d − s components θ_j of θ are equal to 0 and the remaining s components are equal to a (for θ ∈ Θ+) or to −a (for θ ∈ Θ−). For any η̄ ∈ T_[0,1] we have

sup_{θ∈Θ_d(s,a)} Σ_{j=1}^d E_{j,θ_j}|η̄_j − η_j| ≥ (1/2) { sup_{θ∈Θ+} Σ_{j=1}^d E_{j,θ_j}|η̄_j − η_j| + sup_{θ∈Θ−} Σ_{j=1}^d E_{j,θ_j}|η̄_j − η_j| }.

As shown in the proof of Theorem 2.2, for any η̄ ∈ T_[0,1],

sup_{θ∈Θ+} Σ_{j=1}^d E_{j,θ_j}|η̄_j − η_j| ≥ Σ_{j=1}^d [ (1 − s/d) E_{j,0}(η̄_j) + (s/d) E_{j,a}(1 − η̄_j) ].

Analogously,

sup_{θ∈Θ−} Σ_{j=1}^d E_{j,θ_j}|η̄_j − η_j| ≥ Σ_{j=1}^d [ (1 − s/d) E_{j,0}(η̄_j) + (s/d) E_{j,−a}(1 − η̄_j) ].

From the last three displays we obtain

sup_{θ∈Θ_d(s,a)} Σ_{j=1}^d E_{j,θ_j}|η̄_j − η_j| ≥ Σ_{j=1}^d [ (1 − s/d) E_{j,0}(η̄_j) + (s/d) Ē_j(1 − η̄_j) ],

where Ē_j is the expectation with respect to the measure P̄_j = (P_{j,a} + P_{j,−a})/2. It follows that

sup_{θ∈Θ_d(s,a)} Σ_{j=1}^d E_{j,θ_j}|η̄_j − η_j| ≥ inf_{T∈[0,1]} [ (d − s) E_0(T) + s Ē(1 − T) ].     (55)

Here, E_0 denotes the expectation with respect to the distribution of X with density φ_σ(·), Ē is the expectation with respect to the distribution of X with mixture density φ̄_σ(·) = (φ_σ(· + a) + φ_σ(· − a))/2, and inf_{T∈[0,1]} denotes the infimum over all [0, 1]-valued statistics T(X). Recall that we denote by φ_σ(·) the density of the N(0, σ^2) distribution. Set

L̃ = inf_{T∈[0,1]} [ (1 − s/d) E_0(T) + (s/d) Ē(1 − T) ].

By the Bayesian version of the Neyman-Pearson lemma, the infimum here is attained for T = T̃ given by

T̃(X) = I( (s/d) φ̄_σ(X) / ((1 − s/d) φ_σ(X)) > 1 ).

Thus,

L̃ = (1 − s/d) P( φ̄_σ(σξ)/φ_σ(σξ) > d/s − 1 ) + (s/(2d)) P_a( φ̄_σ(X)/φ_σ(X) ≤ d/s − 1 ) + (s/(2d)) P_{−a}( φ̄_σ(X)/φ_σ(X) ≤ d/s − 1 )
= (1 − s/d) P( e^{−a^2/(2σ^2)} cosh(aξ/σ) > d/s − 1 ) + (s/(2d)) P_a( φ̄_σ(X)/φ_σ(X) ≤ d/s − 1 ) + (s/(2d)) P_{−a}( φ̄_σ(X)/φ_σ(X) ≤ d/s − 1 )     (56)

where P_u denotes the probability distribution of X with density φ_σ(· − u). Note that, for all x ∈ R,

φ̄_σ(x)/φ_σ(x) = e^{−a^2/(2σ^2)} cosh( ax/σ^2 ).

Using this formula with x = σξ + a and x = σξ − a, and the facts that cosh(·) is an even function and ξ coincides with −ξ in distribution, we obtain

P_a( φ̄_σ(X)/φ_σ(X) ≤ d/s − 1 ) = P_{−a}( φ̄_σ(X)/φ_σ(X) ≤ d/s − 1 ) = P( e^{−a^2/(2σ^2)} cosh( aξ/σ + a^2/σ^2 ) ≤ d/s − 1 ).
Thus, L̃ = (s/d)Ψ(d, s, a). Combining this equality with (54) and (55) proves the theorem.
Proof of Theorem 2.6. The upper bounds (15), (16) and (17) follow immediately from (2) and
Theorems 2.1, 2.4 and 2.3, respectively. We now prove the lower bound (18). To this end, first
note that for any θ ∈ Θ_d^+(s, a) and any η̃ ∈ T we have

P_θ(S_{η̃} ≠ S(θ)) = P_θ( ∪_{j=1}^d {η̃_j ≠ η_j} ) = 1 − Π_{j=1}^d p_j(θ),

where p_j(θ) ≜ P_θ(η̃_j = η_j). Hence, for any η̃ ∈ T,

sup_{θ∈Θ_d^+(s,a)} P_θ(S_{η̃} ≠ S(θ)) ≥ max_{θ∈Θ′} P_θ(S_{η̃} ≠ S(θ)) = 1 − p*,     (57)

where Θ′ is the subset of Θ_d^+(s, a) defined in the proof of Theorem 2.2, and p* = min_{θ∈Θ′} Π_{j=1}^d p_j(θ).

Next, for any selector η̃ we have P_θ(S_{η̃} ≠ S(θ)) ≥ P_θ(|η̃ − η| = 1). Therefore,

sup_{θ∈Θ_d^+(s,a)} P_θ(S_{η̃} ≠ S(θ)) ≥ (1/|Θ′|) Σ_{θ∈Θ′} P_θ(|η̃ − η| = 1).     (58)

Here, P_θ(|η̃ − η| = 1) = P_θ( ∪_{j=1}^d B_j ) with the random events B_j = { |η̃_j − η_j| = 1, and η̃_i = η_i, ∀ i ≠ j }. Since the events B_j are disjoint, for any η̃ ∈ T we get

(1/|Θ′|) Σ_{θ∈Θ′} P_θ(|η̃ − η| = 1) = (1/|Θ′|) Σ_{θ∈Θ′} Σ_{j=1}^d P_θ(B_j)
= (1/|Θ′|) Σ_{j=1}^d [ Σ_{θ∈Θ′: θ_j=0} P_{j,0}(η̃_j = 1) Π_{i≠j} p_i(θ) + Σ_{θ∈Θ′: θ_j=a} P_{j,a}(η̃_j = 0) Π_{i≠j} p_i(θ) ]
≥ (p*/|Θ′|) Σ_{j=1}^d [ Σ_{θ∈Θ′: θ_j=0} P_{j,0}(η̃_j = 1) + Σ_{θ∈Θ′: θ_j=a} P_{j,a}(η̃_j = 0) ]
= (p*/|Θ′|) Σ_{j=1}^d [ Σ_{θ∈Θ′: θ_j=0} E_{j,0}(η̃_j) + Σ_{θ∈Θ′: θ_j=a} E_{j,a}(1 − η̃_j) ]     (59)

where P_{j,u} denotes the distribution of X_j when θ_j = u. We now bound the right-hand side of (59) by following the argument from the last three lines of (50) to the end of the proof of Theorem 2.2. Applying this argument yields that, for any η̃ ∈ T,

(1/|Θ′|) Σ_{θ∈Θ′} P_θ(|η̃ − η| = 1) ≥ p* d L̃ ≥ p* s Ψ+(d, s, a).     (60)

Combining (57), (58), and (60), we find that, for any η̃ ∈ T,

sup_{θ∈Θ_d^+(s,a)} P_θ(S_{η̃} ≠ S(θ)) ≥ min_{0≤p*≤1} max{ 1 − p*, p* s Ψ+(d, s, a) } = s Ψ+(d, s, a) / ( 1 + s Ψ+(d, s, a) ).

We now prove the lower bound (19). Let the sets Θ+ and Θ− and the constants p_j(θ) be the same as in the proof of Theorem 2.5. Then

sup_{θ∈Θ_d(s,a)} P_θ(S_{η̃} ≠ S(θ)) ≥ max_{θ∈Θ+∪Θ−} P_θ(S_{η̃} ≠ S(θ)) = 1 − p̄,

where p̄ = min_{θ∈Θ+∪Θ−} Π_{j=1}^d p_j(θ).

For any selector η̃, we use that P_θ(S_{η̃} ≠ S(θ)) ≥ P_θ(|η̃ − η| = 1) and, therefore,

sup_{θ∈Θ_d(s,a)} P_θ(S_{η̃} ≠ S(θ)) ≥ (1/(2|Θ+|)) Σ_{θ∈Θ+} P_θ(|η̃ − η| = 1) + (1/(2|Θ−|)) Σ_{θ∈Θ−} P_θ(|η̃ − η| = 1).

We continue along the same lines as in the proof of (59) to get, for any separable selector η̃,

sup_{θ∈Θ_d(s,a)} P_θ(S_{η̃} ≠ S(θ)) ≥ (p̄/(2|Θ+|)) Σ_{j=1}^d [ Σ_{θ∈Θ+: θ_j=0} E_{j,0}(η̃_j) + Σ_{θ∈Θ+: θ_j=a} E_{j,a}(1 − η̃_j) ]
+ (p̄/(2|Θ−|)) Σ_{j=1}^d [ Σ_{θ∈Θ−: θ_j=0} E_{j,0}(η̃_j) + Σ_{θ∈Θ−: θ_j=−a} E_{j,−a}(1 − η̃_j) ]
≥ (p̄/2) Σ_{j=1}^d [ (1 − s/d) E_{j,0}(η̃_j) + (s/d) E_{j,a}(1 − η̃_j) ] + (p̄/2) Σ_{j=1}^d [ (1 − s/d) E_{j,0}(η̃_j) + (s/d) E_{j,−a}(1 − η̃_j) ]
= p̄ Σ_{j=1}^d [ (1 − s/d) E_{j,0}(η̃_j) + (s/d) Ē_j(1 − η̃_j) ],

where again Ē_j denotes the expected value with respect to P̄_j = (P_{j,a} + P_{j,−a})/2. Analogously to the proof of Theorem 2.5, the expression in the last display can be further bounded from below by p̄ d L̃ = p̄ s Ψ(d, s, a). Thus,

sup_{θ∈Θ_d(s,a)} P_θ(S_{η̃} ≠ S(θ)) ≥ min_{0≤p̄≤1} max{ 1 − p̄, p̄ s Ψ(d, s, a) } = s Ψ(d, s, a) / ( 1 + s Ψ(d, s, a) ).
Proof of Theorem 4.2. (i) In the proof of Theorem 2.3, we have obtained that
sup_{θ∈Θ_d(s,a)} (1/s) E_θ|η̂ − η| ≤ 2 (d/s − 1) Φ(−t/σ) + 2 Φ( −(a − t)_+/σ ),     (61)

where t = a/2 + (σ^2/a) log(d/s − 1) is the threshold (5). Since a^2 ≥ 2σ^2 log(d/s − 1) we get that a ≥ t and that t > a/2, which is equivalent to t > a − t. Furthermore, (d/s − 1) e^{−t^2/(2σ^2)} = e^{−(a−t)^2/(2σ^2)}. These remarks and (32) imply that

(d/s − 1) Φ(−t/σ) ≤ √(2/π) exp( −(a − t)^2/(2σ^2) ) / ( (a − t)/σ + √((a − t)^2/σ^2 + 8/π) )
≤ exp( −(a − t)^2/(2σ^2) ) / ( (a − t)/σ + √((a − t)^2/σ^2 + 4) )
≤ √(π/2) Φ( −(a − t)/σ ).

Combining this with (61) we get

sup_{θ∈Θ_d(s,a)} (1/s) E_θ|η̂ − η| ≤ (2 + √(2π)) Φ( −(a − t)/σ ).

Now, to prove (34) it remains to note that under assumption (33),

(a − t)/σ = a/(2σ) − (σ/a) log(d/s − 1) = ( a^2 − 2σ^2 log((d − s)/s) ) / (2aσ) ≥ ∆.

Indeed, assumption (33) states that a ≥ a_0 ≜ σ( 2 log((d − s)/s) + W )^{1/2}, and the function a ↦ ( a^2 − 2σ^2 log((d − s)/s) )/a is monotonically increasing in a > 0. On the other hand,

( a_0^2 − 2σ^2 log((d − s)/s) ) / (2 a_0 σ) = ∆.     (62)

(ii) We now prove (37). By Theorem 2.2,

inf_{η̃} sup_{θ∈Θ_d(s,a)} (1/s) E_θ|η̃ − η| ≥ Ψ+(d, s, a) ≥ Φ( −a/(2σ) + (σ/a) log(d/s − 1) ).

Here,

−a/(2σ) + (σ/a) log(d/s − 1) = ( 2σ^2 log((d − s)/s) − a^2 ) / (2σa).

Observe that the function a ↦ ( 2σ^2 log((d − s)/s) − a^2 )/a is monotonically decreasing in a > 0 and that assumption (36) states that a ≤ a_0. In view of (62), the value of its minimum for a ≤ a_0 is equal to −∆. The bound (37) now follows by the monotonicity of Φ(·).
Proof of Theorem 4.3. Assume without loss of generality that d is large enough to have (d − s_d)/s_d > 1. We apply Theorem 4.2 with W = A √(2 log((d − s_d)/s_d)). Then,

∆^2 = A^2 √(2 log((d − s_d)/s_d)) / ( 4 ( √(2 log((d − s_d)/s_d)) + A ) ).

By assumption, there exists ν > 0 such that (2 + ν) s_d ≤ d for all d large enough. Equivalently, d/s_d − 1 ≥ 1 + ν and therefore, using the monotonicity argument, we find

∆^2 ≥ A^2 √(2 log(1 + ν)) / ( 4 ( √(2 log(1 + ν)) + A ) ) → ∞ as A → ∞.

This and (34) imply part (i) of the theorem. Part (ii) follows from (37) by noticing that ∆^2 ≤ sup_{x>0} A^2 x / (4(x + A)) = A^2/4 for any fixed A > 0.
Proof of Theorem 4.4. Throughout the proof, we assume without loss of generality that d is large enough to have s_d ≥ 2, and (d − s_d)/s_d > 1. Set W_*(s) ≜ 4( log s + √(log s · log(d − s)) ), and notice that

W_*(s_d) / ( 2 √( 2 log((d − s_d)/s_d) + W_*(s_d) ) ) = √(2 log s_d),     (63)
2 log((d − s_d)/s_d) + W_*(s_d) = 2 ( √(log(d − s_d)) + √(log s_d) )^2.     (64)

If (40) holds, we have W_d ≥ W_*(s_d) for all d large enough. By the monotonicity of the quantity ∆ defined in (35) with respect to W, this implies

∆_d ≜ W_d / ( 2 √( 2 log((d − s_d)/s_d) + W_d ) ) ≥ W_*(s_d) / ( 2 √( 2 log((d − s_d)/s_d) + W_*(s_d) ) ) = √(2 log s_d).     (65)

Now, by Theorem 4.2 and using (32) we may write

sup_{θ∈Θ_d(s_d,a_d)} E_θ|η̂ − η| ≤ (2 + √(2π)) s_d Φ(−∆_d)
≤ 3 s_d min( 1, 1/∆_d ) exp( −∆_d^2/2 )
= 3 min( 1, 1/∆_d ) exp( −(∆_d^2 − 2 log s_d)/2 ).     (66)

This and (65) imply that, for all d large enough,

sup_{θ∈Θ_d(s_d,a_d)} E_θ|η̂ − η| ≤ 3 min( 1, 1/√(2 log s_d) ).

Since s_d → ∞, part (i) of the theorem follows.

We now prove part (ii) of the theorem. It suffices to consider W_d > 0 for all d large enough since for non-positive W_d almost full recovery is impossible and the result follows from part (ii) of Theorem 4.3. If (42) holds, there exists A < 1 such that W_d ≤ A W_*(s_d) for all d large enough. By the monotonicity of the quantity ∆ defined in (35) with respect to W and in view of equation (63), this implies

∆_d^2 − 2 log s_d ≤ A^2 W_*^2(s_d) / ( 4 ( 2 log((d − s_d)/s_d) + A W_*(s_d) ) ) − W_*^2(s_d) / ( 4 ( 2 log((d − s_d)/s_d) + W_*(s_d) ) )
= (A − 1) W_*^2(s_d) ( A W_*(s_d) + 2(A + 1) log((d − s_d)/s_d) ) / ( 4 ( 2 log((d − s_d)/s_d) + A W_*(s_d) ) ( 2 log((d − s_d)/s_d) + W_*(s_d) ) )
≤ (A − 1) A W_*^2(s_d) / ( 4 ( 2 log((d − s_d)/s_d) + W_*(s_d) ) )
= 2 (A − 1) A ( log s_d + √(log s_d · log(d − s_d)) )^2 / ( √(log(d − s_d)) + √(log s_d) )^2
= 2 (A − 1) A log s_d,     (67)

where we have used the fact that A < 1 and equations (63), (64). Next, by Theorem 4.2 and using (32), we have

inf_{η̃} sup_{θ∈Θ_d(s_d,a_d)} E_θ|η̃ − η| ≥ s_d Φ(−∆_d) ≥ (s_d/4) min( 1/2, 1/∆_d ) exp( −∆_d^2/2 )
= (1/4) min( 1/2, 1/∆_d ) exp( −(∆_d^2 − 2 log s_d)/2 ).

Combining this inequality with (67), we find that, for all d large enough,

inf_{η̃} sup_{θ∈Θ_d(s_d,a_d)} E_θ|η̃ − η| ≥ (1/4) min( 1/2, 1/∆_d ) exp( (1 − A) A log s_d ).

Since A < 1 and ∆_d ≤ A √(2 log s_d) by (67), the last expression tends to ∞ as s_d → ∞. This proves part (ii) of the theorem.
Proof of Theorem 5.1. By (48), for any θ ∈ Θ_d(s_d, a_d), and any t > 0 we have

E_θ|η̂ − η| ≤ (d − s_d) P( |ξ| ≥ t/σ ) + s_d P( |ξ| > (a_d − t)_+/σ ),

where ξ is a standard normal random variable. It follows that, for any a_d ≥ a^*_d, any θ ∈ Θ_d(s_d, a_d), and any t > 0,

E_θ|η̂ − η| ≤ d P( |ξ| ≥ t/σ ) + s_d P( |ξ| > (a^*_d − t)_+/σ ).

Without loss of generality assume that d ≥ 6 and 2 ≤ s_d ≤ d/2. Then, using the inequality √x − √y ≤ (x − y)/(2√y), ∀ x > y > 0, we find that, for t = σ √(2 log d),

(a^*_d − t)_+/σ ≥ √2 ( √(log(d − s_d)) − √(log d) ) + √(2 log s_d)
≥ √(2 log s_d) − log( d/(d − s_d) ) / √(log(d − s_d))
≥ √(2 log s_d) − (log 2)/√(log(d/2)) > 0.

From this we also easily deduce that, for 2 ≤ s_d ≤ d/2, we have ( (a^*_d − t)_+/σ )^2 / 2 ≥ log(s_d) − √2 log 2. Combining these remarks with (32) and (43), we find

sup_{θ∈Θ_d(s_d,a_d)} E_θ|η̂ − η| ≤ 1/√(2 log d) + s_d exp( −log(s_d) + √2 log 2 ) / √(2 log(s_d)),

which immediately implies the theorem by taking the limit as d → ∞.
Proof of Theorem 5.2. Throughout the proof, we will write for brevity s_d = s, a_d = a, A_d = A, and set σ = 1. Since Θ_d(s, a) ⊆ Θ_d(s, a_0(s, A)) for all a ≥ a_0(s, A), it suffices to prove that

lim_{d→∞} sup_{θ∈Θ_d(s, a_0(s,A))} (1/s) E_θ|η̂^ad − η| = 0.     (68)

Here s ≤ s^*_d and recall that throughout this section we assume that s^*_d ≤ d/4; since we deal with asymptotics as d/s^*_d → ∞, the latter assumption is without loss of generality in the current proof.

If s < g_M, let m_0 ∈ {2, . . . , M} be the index such that g_{m_0} is the minimal element of the grid which is greater than the true underlying s. Thus, g_{m_0}/2 = g_{m_0−1} ≤ s < g_{m_0}. If s ∈ [g_M, s^*_d], we set m_0 = M. In both cases,

s ≥ g_{m_0}/2.     (69)

We decompose the risk as follows:

(1/s) E_θ|η̂^ad − η| = I_1 + I_2,

where

I_1 = (1/s) E_θ( |η̂(g_{m̂}) − η| I(m̂ ≤ m_0) ),
I_2 = (1/s) E_θ( |η̂(g_{m̂}) − η| I(m̂ ≥ m_0 + 1) ).

We now evaluate I_1. Using the fact that η̂_j(g_m) is monotonically increasing in m and the definition of m̂, we obtain that, on the event {m̂ ≤ m_0},

|η̂(g_{m̂}) − η̂(g_{m_0})| ≤ Σ_{m=m̂+1}^{m_0} |η̂(g_m) − η̂(g_{m−1})|
= Σ_{j=1}^d Σ_{m=m̂+1}^{m_0} ( η̂_j(g_m) − η̂_j(g_{m−1}) )
= Σ_{j=1}^d Σ_{m=m̂+1}^{m_0} I( w(g_m) ≤ |X_j| < w(g_{m−1}) )
≤ τ Σ_{m=m̂+1}^{m_0} g_m ≤ τ s Σ_{m=2}^{m_0} 2^{m−m_0+1} ≤ 4 τ s,

where we have used the equality g_m = 2^m and (69). Thus,

I_1 ≤ (1/s) E_θ( |η̂(g_{m̂}) − η̂(g_{m_0})| I(m̂ ≤ m_0) ) + (1/s) E_θ|η̂(g_{m_0}) − η| ≤ 4τ + (1/s) E_θ|η̂(g_{m_0}) − η|.     (70)

By (48), for any θ ∈ Θ_d(s, a_0(s, A)) we have

(1/s) E_θ|η̂(g_{m_0}) − η| ≤ (d/s − 1) P( |ξ| ≥ w(g_{m_0}) ) + P( |ξ| > (a_0(s, A) − w(g_{m_0}))_+ )     (71)

where ξ is a standard Gaussian random variable. Using the bound on the Gaussian tail probability and the fact that s ≥ g_{m_0}/2, we get

(d/s − 1) P( |ξ| ≥ w(g_{m_0}) ) ≤ π^{−1/2} (d/s − 1) / ( (d/g_{m_0} − 1) √(log(d/g_{m_0} − 1)) )
≤ ( 2 π^{−1/2} / √(log(d/s − 1)) ) (d − s)/(d − 2s) ≤ 3 π^{−1/2} / √(log(d/s^*_d − 1)).     (72)

To bound the second probability on the right-hand side of (71), we use the following lemma.

Lemma 6.1. Under the assumptions of Theorem 5.2, for any m ≥ m_0 we have

P( |ξ| > (a_0(s, A) − w(g_m))_+ ) ≤ ( log(d/s^*_d − 1) )^{−1/2}.     (73)

Combining (71), (72) and (73) with m = m_0, we find

(1/s) E_θ|η̂(g_{m_0}) − η| ≤ ( 3 π^{−1/2} + 1 ) / √(log(d/s^*_d − 1)),     (74)

which together with (70) leads to the bound

I_1 ≤ 4τ + ( 3 π^{−1/2} + 1 ) / √(log(d/s^*_d − 1)).     (75)

We now turn to the evaluation of I_2. It is enough to consider the case m_0 ≤ M − 1 since I_2 = 0 when m_0 = M. We have

I_2 = (1/s) Σ_{m=m_0+1}^{M} E_θ( |η̂(g_m) − η| I(m̂ = m) )     (76)
≤ (1/s) Σ_{m=m_0+1}^{M} ( E_θ|η̂(g_m) − η| )^{1/2} ( P_θ(m̂ = m) )^{1/2}.

By definition, the event {m̂ = m} occurs if and only if there exists some ℓ ≥ m such that Σ_{j=1}^d I( w_ℓ ≤ |X_j| < w_{ℓ−1} ) > τ g_ℓ ≜ v_ℓ, where we set for brevity w_ℓ = w(g_ℓ). Thus,

P_θ(m̂ = m) ≤ Σ_{ℓ=m}^{M} P_θ( Σ_{j=1}^d I( w_ℓ ≤ |X_j| < w_{ℓ−1} ) > v_ℓ ).     (77)

By Bernstein's inequality, for any t > 0 we have

P_θ( Σ_{j=1}^d I( w_ℓ ≤ |X_j| < w_{ℓ−1} ) − E_θ Σ_{j=1}^d I( w_ℓ ≤ |X_j| < w_{ℓ−1} ) > t )
≤ exp( − (t^2/2) / ( Σ_{j=1}^d E_θ( I( w_ℓ ≤ |X_j| < w_{ℓ−1} ) ) + 2t/3 ) ),     (78)

where we have used that, for random variables with values in {0, 1}, the variance is smaller than the expectation.

Now, similar to (48), for any θ ∈ Θ_d(s, a_0(s, A)),

E_θ Σ_{j=1}^d I( w_ℓ ≤ |X_j| < w_{ℓ−1} ) ≤ (d − s) P( w_ℓ ≤ |ξ| < w_{ℓ−1} ) + Σ_{j: θ_j≠0} P( |θ_j + ξ| < w_{ℓ−1} )
≤ (d − s) P( |ξ| ≥ w_ℓ ) + s P( |ξ| > (a_0(s, A) − w_{ℓ−1})_+ ),

where ξ is a standard Gaussian random variable. Since ℓ ≥ m_0 + 1, from Lemma 6.1 we get

P( |ξ| > (a_0(s, A) − w_{ℓ−1})_+ ) ≤ ( log(d/s^*_d − 1) )^{−1/2}.     (79)

Next, using the bound on the Gaussian tail probability and the inequalities g_ℓ ≤ s^*_d ≤ d/4, we find

(d − s) P( |ξ| ≥ w_ℓ ) ≤ (d − s) π^{−1/2} / ( (d/g_ℓ − 1) √(log(d/g_ℓ − 1)) ) ≤ (4/3) π^{−1/2} g_ℓ / √(log(d/s^*_d − 1)).     (80)

We now deduce from (79) and (80), and the inequality s ≤ g_ℓ for ℓ ≥ m_0 + 1, that

E_θ Σ_{j=1}^d I( w_ℓ ≤ |X_j| < w_{ℓ−1} ) ≤ ( (4/3) π^{−1/2} + 1 ) g_ℓ / √(log(d/s^*_d − 1)) ≤ 2 τ g_ℓ.     (81)

Taking in (78) t = 3 τ g_ℓ = 3 v_ℓ and using (81), we find

P_θ( Σ_{j=1}^d I( w_ℓ ≤ |X_j| < w_{ℓ−1} ) > v_ℓ ) ≤ exp( −C_1 v_ℓ ) = exp( −C_1 2^ℓ τ ),

for all ℓ ≥ m_0 + 1 and some absolute constant C_1 > 0. This implies

P_θ(m̂ = m) ≤ Σ_{ℓ=m}^{M} exp( −C_1 2^ℓ τ ) ≤ C_2 2^{−m} τ^{−1} exp( −C_1 2^m τ )     (82)

for some absolute constant C_2 > 0.

On the other hand, notice that the bounds (71) and (72) are valid not only for g_{m_0} but also for any g_m with m ≥ m_0 + 1. Using this observation and Lemma 6.1 we get that, for any θ ∈ Θ_d(s, a_0(s, A)) and any m ≥ m_0 + 1,

E_θ|η̂(g_m) − η| ≤ s [ π^{−1/2} (d/s − 1) / ( (d/g_m − 1) √(log(d/g_m − 1)) ) + ( log(d/s^*_d − 1) )^{−1/2} ]
≤ ( (4/3) π^{−1/2} + 1 ) g_m / √(log(d/s^*_d − 1)) ≜ τ′ g_m = τ′ 2^m,     (83)

where the last inequality follows from the same argument as in (80).

Now, we plug (82) and (83) in (76) to obtain

I_2 ≤ ( C_2 τ′/τ )^{1/2} (1/s) Σ_{m=m_0+1}^{M} exp( −C_1 2^{m−1} τ )     (84)
≤ C_3 (τ′)^{1/2} τ^{−3/2} exp( −C_1 2^{m_0} τ ) ≤ C_3 (τ′)^{1/2} τ^{−3/2}

for some absolute constant C_3 > 0. Notice that (τ′)^{1/2} = O( ( log(d/s^*_d − 1) )^{−1/4} ) as d/s^*_d → ∞ while τ^{−3/2} = O( ( log(d/s^*_d − 1) )^{3/14} ). Thus, I_2 = o(1) as d → ∞. Since from (75) we also get that I_1 = o(1) as d → ∞, the proof is complete.
Proof of Lemma 6.1. Let first s < g_M. Then, by definition of m_0, we have s < g_{m_0}. Therefore, s < g_m for m ≥ m_0, and we have w(g_m) < w(s). It follows that

a_0(s, A) − w(g_m) ≥ a_0(s, A) − w(s) ≥ ( √A/(2√2) ) min( √A/√2, log^{1/4}(d/s − 1) ),

where we have used the elementary inequalities

√(x + y) − √x ≥ y/(2√(x + y)) ≥ (2√2)^{−1} min( y/√x, √y )

with x = 2 log(d/s − 1) and y = A √(log(d/s − 1)). By assumption, A ≥ 16 ( log log(d/s^*_d − 1) )^{1/2}, so that we get

a_0(s, A) − w(g_m) ≥ a_0(s, A) − w(s) ≥ 4 ( log log(d/s^*_d − 1) )^{1/2}.     (85)

This and the standard bound on the Gaussian tail probability imply

P( |ξ| > (a_0(s, A) − w(g_m))_+ ) ≤ exp( −(a_0(s, A) − w(g_m))^2/2 )     (86)
≤ ( log(d/s^*_d − 1) )^{−1/2}.     (87)

Let now s ∈ [g_M, s^*_d]. Then m_0 = M and we need to prove the result only for m = M. By definition of M we have s^*_d ≤ 2 g_M. This and (85) imply

a_0(s, A) − w(g_M) ≥ a_0(s, A) − w(s) − ( w(s^*_d/2) − w(s^*_d) ) ≥ 4 ( log log(d/s^*_d − 1) )^{1/2} − ( w(s^*_d/2) − w(s^*_d) ).

Now, using the elementary inequality √(log(x + y)) − √(log x) ≤ y/(2x √(log x)) with x = d/s^*_d − 1 and y = d/s^*_d, and the fact that s^*_d ≤ d/4 we find

w(s^*_d/2) − w(s^*_d) ≤ d / ( (d − s^*_d) √(2 log(d/s^*_d − 1)) ) ≤ 2√2 / ( 3 √(log(d/s^*_d − 1)) ) ≤ ( log log(d/s^*_d − 1) )^{1/2}.

The last two displays yield a_0(s, A) − w(g_M) ≥ 3 ( log log(d/s^*_d − 1) )^{1/2}, and we conclude as in (86).
Acknowledgements. We would like to thank Felix Abramovich for helpful discussion of
the results. The work of N.A. Stepanova was supported by an NSERC grant. The work
of A.B. Tsybakov was supported by GENES and by the French National Research Agency
(ANR) under the grants IPANEMA (ANR-13-BSH1-0004-02), and Labex ECODEC (ANR 11-LABEX-0047). It was also supported by the ”Chaire Economie et Gestion des Nouvelles
Données”, under the auspices of Institut Louis Bachelier, Havas-Media and Paris-Dauphine.
References
[1] F. Abramovich and Y. Benjamini (1995). Thresholding of wavelet coefficients as multiple
hypotheses testing procedure. In Wavelets and Statistics. Lecture Notes in Statistics, 103,
Antoniadis ed., 5-14. Springer, New York.
[2] F. Abramovich, Y. Benjamini, D. L. Donoho, and I. M. Johnstone (2006). Adapting to
unknown sparsity by controlling the false discovery rate. Ann. Statist., 34, 584 – 653.
[3] M. Abramowitz, I. A. Stegun (1964). Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables, volume 55 of National Bureau of Standards Applied
Mathematics Series. Washington, D.C.
[4] E. Arias-Castro and S. Chen (2016). Distribution-free multiple testing.
http://arxiv.org/abs/1604.07520
[5] K. Bertin and G. Lecué (2008). Selection of variables and dimension reduction in high-dimensional nonparametric regression. Electronic J. Statist., 2, 1224–1241.
[6] M. Bogdan, E. van den Berg, C. Sabatti, W. Su, and E. J. Candès (2015). SLOPE – adaptive variable selection via convex optimization. Ann. Appl. Statist., 9, 1103–1140.
[7] C. Butucea, Yu.I. Ingster and I. Suslina (2015). Sharp variable selection of a sparse submatrix in a high-dimensional noisy matrix. ESAIM: Probability and Statistics, 19, 115-134.
[8] C. Butucea and N. Stepanova (2015). Adaptive variable selection in nonparametric sparse
additive models. http://arxiv.org/abs/1508.06660
[9] O. Collier, L. Comminges, A. B. Tsybakov and N. Verzelen (2016). Optimal adaptive estimation of linear functionals under sparsity. http://arxiv.org/abs/1611.09744
[10] L. Comminges and A. S. Dalalyan (2012). Tight conditions for consistency of variable
selection in the context of high dimensionality. Ann. Statist. 40 (5), 2667–2696.
[11] C. Gao, Y. Lu, D. Zhou (2016). Exact exponent in optimal rates for crowdsourcing.
http://arxiv.org/abs/1605.07696
[12] C. R. Genovese, J. Jin, L. Wasserman, and Z. Yao. (2012). A comparison of the Lasso
and Marginal Regression. J. Mach. Learn. Res., 13, 2107–2143.
[13] P. Hall and J. Jin (2010). Innovated higher criticism for detecting sparse signals in correlated noise. Ann. Statist., 38, 1686-1732.
[14] Yu. I. Ingster and N. A. Stepanova (2014). Adaptive variable selection in nonparametric
sparse regression. Journal of Mathematical Sciences, 199, 184–201.
[15] P. Ji, and J. Jin (2012). UPS delivers optimal phase diagram in high-dimensional variable
selection. Ann. Statist., 40 (1), 73–103.
[16] J. Jin, C.-H. Zhang, and Q. Zhang (2014). Optimality of graphlet screening in high-dimensional variable selection. J. of Machine Learning Research, 15, 2723–2772.
[17] J. Lafferty, and L. Wasserman (2008). Rodeo: sparse, greedy nonparametric regression.
Ann. Statist., 36, 28–63.
[18] E.L. Lehmann, and J.P. Romano (2005). Testing Statistical Hypotheses. Springer, New
York.
[19] K. Lounici (2008). Sup-norm convergence rate and sign concentration property of Lasso
and Dantzig estimators. Electronic J. Statist., 2, 90–102.
[20] N. Meinshausen and P. Bühlmann (2006). High-dimensional graphs and variable selection
with the Lasso. Ann. Statist., 34 (3), 1436–1462.
[21] N. Meinshausen and P. Bühlmann (2010). Stability selection. J. Roy. Stat. Soc. Ser. B,
72 (4), 417–473.
[22] P. Neuvial and E. Roquain (2012). On false discovery rate thresholding for classification
under sparsity. Ann. Statist., 40 (5), 2572–2600.
[23] L. Wasserman and K. Roeder (2009). High-dimensional variable selection. Ann. Statist.,
37 (5A), 2178–2201.
[24] M. Wainwright (2009). Sharp thresholds for high-dimensional and noisy sparsity recovery
using l1 -constrained quadratic programming (Lasso). IEEE Trans. Inf. Theory, 55 (5),
2183–2202.
[25] C.-H. Zhang (2010). Nearly unbiased variable selection under minimax concave penalty.
Ann. Statist., 38 (2), 894–942.
[26] A. Y. Zhang, H. H. Zhou (2015). Minimax rates of community detection in stochastic
block models. http://arxiv.org/abs/1507.05313
[27] P. Zhao and B. Yu (2006). On model selection consistency of Lasso. J. Mach. Learn. Res.,
7, 2541–2563.
arXiv:1802.07284v1 [] 20 Feb 2018
Logic programming applications:
What are the abstractions and implementations?∗
Yanhong A. Liu
Computer Science Department, Stony Brook University
[email protected]
December 24, 2017
Abstract
This article presents an overview of applications of logic programming, classifying
them based on the abstractions and implementations of logic languages that support
the applications. The three key abstractions are join, recursion, and constraint. Their
essential implementations are for-loops, fixed points, and backtracking, respectively.
The corresponding kinds of applications are database queries, inductive analysis, and
combinatorial search, respectively. We also discuss language extensions and programming paradigms, summarize example application problems by application areas, and
touch on example systems that support variants of the abstractions with different implementations.
1
Introduction
Common reasoning with logic is the root of logic programming, which allows logic rules
and facts to be expressed formally and used precisely for inference, querying, and analysis
in general. Logic formalisms, or languages, allow complex application problems to be expressed declaratively with high-level abstractions and allow desired solutions to be found
automatically with potentially efficient low-level implementations.
The biggest challenge in logic programming has been the need for efficient implementations. Much progress has been made, with efficient implementations in some cases beating
manually written low-level code. However, inadequate performance in many cases has led to
∗ This work was supported in part by NSF under grants CCF-0964196, CCF-1248184, CCF-1414078, and IIS-1447549; and ONR under grants N00014-15-1-2208.
the introduction of non-declarative features in logic languages and resulted in the writing of
obscure logic programs.
Despite the challenges, the most exciting aspect of logic programming is its vast areas of
applications. They range from database queries to program analysis, from text processing
to decision making, from security to knowledge engineering, and more. These vast, complex,
and interrelated areas make it challenging but necessary to provide a deeper understanding
of the various kinds of applications in order to help advance the state of the art of logic
programming and realize its benefits.
This article presents an overview of applications of logic programming based on a study of
the abstractions and implementations of logic languages. The rationale is that abstractions
and implementations are the enabling technologies of the applications. The abstractions
are essential for determining what kinds of application problems can be expressed and how
they can be expressed, for ease of understanding, reuse, and maintenance. The underlying
implementations are essential for high-level declarative languages to be sufficiently efficient
for substantial applications.
We discuss the following essential abstractions, where data abstractions are for expressing
the data, and control abstractions are for expressing computations over the data:
1. data abstractions: objects and relationships;
2. control abstractions: (1) join, (2) recursion, and (3) constraint, which capture bounded,
cyclic, and general computations, respectively.
In logic languages, the data abstractions as objects and relationships are essential for all
three control abstractions.
The essential techniques for implementing the three control abstractions listed are (1)
for-loops, (2) fixed points, and (3) backtracking, respectively. The corresponding kinds of
applications are
(1) database-style queries, e.g., for ontology management, business intelligence, and access
control;
(2) inductive analysis, e.g., for text processing, program analysis, network traversal, and
trust management;
(3) combinatorial search, e.g., for decision making, resource allocation, games and puzzles,
and administrative policy analysis.
We categorize application problems using these three control abstractions because they capture conceptually different kinds of problems, with inherently different implementation techniques, and at the same time correspond to very different classes of applications.
Note that the same application domain may use different abstractions and implementations for different problems. For example, enterprise software may use all three of traditional
database queries, inductive analysis, and combinatorial search, for business intelligence and
decision making; and security policy analysis and enforcement may use database-style queries
for access control, inductive analysis for trust management, and combinatorial search for administrative policy analysis.
We also discuss additional extensions, especially regular-expression paths for higher-level
queries and updates for modeling actions; additional applications; and abstractions used in
main programming paradigms. We also touch on several well-known systems while discussing
the applications.
There is a large body of prior work, including surveys of logic programming in general
and applications in particular, as discussed in Section 7. This article distinguishes itself
from past work by analyzing classes of applications based on the language abstractions and
implementations used.
The rest of the article is organized as follows. Section 2 presents essential abstractions in
logic languages. Sections 3, 4, and 5 describe abstractions, implementations, and applications
centered around join, recursion, and constraint. Section 6 discusses additional language
extensions, applications, and programming paradigms. Section 7 discusses related literature
and future directions.
2
Logic language abstractions
Logic languages provide very high-level data and control abstractions, using mostly very
simple language constructs. We describe these abstractions and their meanings intuitively.
2.1
Data abstractions
All data in logic languages are abstracted, essentially, as objects and relationships.
Objects. Objects are primitive values, such as numbers and strings, or structured values
whose components are objects.
Examples of primitive values are integer number 3 and string ’Amy’. We enclose a
string value in single quotes; if a string starts with a lower-case letter, such as ’amy’,
the quotes can be omitted, as has been conventional in logic languages.
Examples
of
structured
values
are
succ(3),
father(amy),
and
cert(’Amy’,birth(’2000-02-28’,’Rome’)), denoting the successor integer of 3, the father
of amy, and the certificate that ’Amy’ was born on ’2000-02-28’ in ’Rome’, respectively.
The names of structures, such as succ, father, cert, and birth above, are called function
symbols. They correspond to object constructors in object-oriented languages.
Relationships. Relationships are predicates, or properties, that hold among objects. In
particular, p (o1 ,...,ok ), i.e., predicate p over objects o1 ,...,ok being true, is equivalent to (o1 ,...,ok ) in p , i.e., tuple (o1 ,...,ok ) belonging to relation p —a table that
holds the set of tuples of objects over which p is true.
Examples
of
relationships
are
male(bob),
is_parent(bob,amy),
and
issue(mario,’Amy’,birth(’2000-02-28’,’Rome’)), denoting that bob is male, bob is a parent of amy, and mario issued a certificate that ’Amy’ was born on ’2000-02-28’ in ’Rome’,
respectively.
Structured values can be easily captured using relationships, but not vice versa. For
example, f being the structured value father(c) can be captured using relationship
is_father(f,c), but relationship is_parent(p,c) cannot simply be captured as p being
the structured value parent(c) when c has two parents.
Such high-level data abstraction allows real-world objects or their lower-level representations,
from bits and characters to lists to sets, to be captured easily without low-level implementation details. For example,
• bits and characters are special cases of integers and strings, respectively.
• lists are a special case of linearly nested structured values, and
• sets are a special case of relations consisting of tuples of one component.
Objects and relationships can be implemented using well-known data structures, including
linked list, array, hash table, B-tree, and trie, usually taking O(1) or O(log n) time to access
an object, where n is the size of the data.
2.2
Control abstractions
Control in logic languages is abstracted at a high level, as logical inference or logic queries
over asserted relationships among objects:
• asserted relationships can be connected by logical connectives: conjunction (read
“and”), disjunction (read “or”), negation (read “not”), implication (read “then”), backward implication (read “if”), and equivalence (read “if and only if”);
• variables can be used in place of objects and be quantified over with universal quantifier
(read “all”) and existential quantifier (read “some”); and
• one can either infer all relationships that hold or query about certain relationships,
among all objects or among certain objects.
Rules and facts are the most commonly supported forms in existing logic languages:
Rules. A rule is of the following form, where assertion0 is called the conclusion, and other
assertions are called the hypotheses. Each assertion is a predicate over certain objects,
where variables may be used in place of objects. Intuitively, left arrow (<-) indicates
backward implication, comma (,) denotes conjunction, and all variables in a rule are
implicitly universally quantified, i.e., the rule holds for all values of the variables.
assertion0 <- assertion1, ..., assertionk.
For example, the second rule below says: X is a grandfather of Y if X is the father of Z
and Z is a parent of Y, and this holds for all values of variables X, Y, and Z; the other
rules can be read similarly. Following logic language conventions, names starting with
an upper-case letter are variables.
is_parent(X,Y) <- is_father(X,Y).
is_grandfather(X,Y) <- is_father(X,Z), is_parent(Z,Y).
is_ancestor(X,Y) <- is_parent(X,Z), is_ancestor(Z,Y).
is_positive(succ(N)) <- is_positive(N).
The second rule is a join query—its two hypotheses have a shared variable, and it
concludes a new predicate.
The third and fourth rules are recursive—the predicate in the conclusion depends on
itself in a hypothesis, or in general possibly indirectly through another predicate.
Note that disjunction of a set of hypotheses can be expressed using a set of rules with
the same conclusion.
Facts. A fact is a rule that has no hypotheses and is denoted simply as assertion0. For
example, is_father(bob,amy). says that bob is the father of amy, and is_positive(1).
says that 1 is positive.
The meaning of a set of rules and facts is the least set of facts that contains all the given facts
and all the facts that can be inferred, directly or indirectly, using the rules. This set can be
computed by starting with the given facts and repeatedly applying the rules to conclude new
facts—i.e., matching hypotheses of rules against facts, instantiating variables in rules with
values in matched facts, and adding instantiated conclusions of rules as new facts. However,
• repeated application of rules might not terminate if function symbols are used in the
rules, because facts about infinitely many new objects may be concluded, e.g., the
fourth example rule above may infer is_positive(succ(1)), is_positive(succ(succ(1))),
and so on.
• when only certain relationships about certain objects are queried, application of rules
may stop as soon as the query can be answered, e.g., if only is_positive(succ(1)) is
queried, application of rules can stop after one use of the given rule and the given fact.
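To make the repeated rule application above concrete, the following is a minimal bottom-up
evaluation sketch in Python for the first two example rules, with hypothetical facts; it
illustrates only the least-set semantics, not an efficient implementation.

    is_father = {('bob', 'amy'), ('dan', 'bob')}   # given facts
    is_parent = set()
    is_grandfather = set()

    changed = True
    while changed:
        new = set()
        # is_parent(X,Y) <- is_father(X,Y).
        new |= {('is_parent', x, y) for (x, y) in is_father
                if (x, y) not in is_parent}
        # is_grandfather(X,Y) <- is_father(X,Z), is_parent(Z,Y).
        new |= {('is_grandfather', x, y) for (x, z) in is_father
                for (z2, y) in is_parent
                if z == z2 and (x, y) not in is_grandfather}
        changed = bool(new)
        for (pred, x, y) in new:
            (is_parent if pred == 'is_parent' else is_grandfather).add((x, y))

    print(is_grandfather)   # {('dan', 'amy')}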
Rules that do not contain function symbols are called Datalog rules. For example, the first
three example rules given earlier in this section are Datalog rules.
General logic forms have also been increasingly supported, typically by extending the
rule form above:
Negation in the hypotheses. A hypothesis in a rule may be prefixed with not, denoting
negation of the asserted relationship.
For example, the following rule says: for all values of X and Y, X is the mother of Y if X
is a parent of Y and X is not male.
is_mother(X,Y) <- is_parent(X,Y), not male(X).
Difficulties arise when negation is used with recursion. For example, what can be
inferred from the following rule? Is good(zak) true or false?
good(zak) <- not good(zak).
More general forms. More general forms include disjunction and negation in the conclu-
sion and, most generally, quantifiers all and some in any scope, not only the outermost
scope. For example, the first rule below says: X is male or female if X is a person. The
second rule says: X is not a winning position if, for all Y, there is no move from X to Y
or else Y is a winning position.
male(X) or female(X) <- person(X).
not win(X) <- all Y: not move(X,Y) or win(Y).
The meaning of recursive rules with negation is not universally agreed upon. The two
dominant semantics are well-founded semantics (WFS) [VRS91, VG93] and stable model
semantics (SMS) [GL88]. Both WFS and SMS use the closed-world assumption, i.e., they
assume that what cannot be inferred to be true from the given facts and rules, is false.
• WFS gives a single 3-valued model, with the additional truth value undefined besides
true and false.
• SMS gives zero or more 2-valued models, using only true and false.
Other formalisms and semantics include partial stable models, also called stationary models [Prz94]; first-order logic with inductive and fixed-point definitions, called FO(ID) and
FO(FD) [DT08, HDCD10]; and the newly proposed founded semantics and constraint semantics [LS18]. The first two are both aimed at unifying WFS and SMS. The last unifies and
cleanly relates WFS, SMS, and other major semantics by allowing the assumptions about
the predicates and rules to be specified explicitly.
For practical applications, logic languages often also support predefined relationships
among objects, including equality, inequality, and general comparisons. Cardinality and
other aggregates over relationships are often also supported.
2.3 Combinations of control abstractions
There are many possible combinations of the language constructs. We focus on the following
three combinations of constructs as essential control abstractions. We identify them by
join, recursion, and constraint. They capture bounded, cyclic, and general computations,
respectively.
(1) Join—with join queries, no recursive rules, and restricted negation and other constructs; the restriction is that, for each rule, each variable in the conclusion must also
appear in a hypothesis that is a predicate over arguments. Implementing this requires
that common objects for the shared variables be found for the two hypotheses of a join
query to be true at the same time; the number of objects considered is bounded, by
the predicates in the hypotheses, following a bounded number of dependencies.
(2) Recursion—with join queries, recursive rules, and restricted negation and other constructs; the restriction is as for join above plus that a predicate in the conclusion of a
rule does not depend on the negation of the predicate itself in a hypothesis. Implementing this requires repeatedly applying the recursive rules following cyclic dependencies,
potentially an unbounded number of times if new objects are in some conclusions.
(3) Constraint—with join queries, recursive rules, and unrestricted negation and other
constructs; unrestricted negation and other constructs can be viewed as constraints to
be satisfied. Implementing this could require, in general, trying different combinations
of variable values, as in general constraint solving.
Table 1 summarizes these three essential control abstractions and the corresponding kinds
of computations and applications.
Essential        Has join   Has rec.   Has neg.       Computations   Application kinds
                 queries    rules      and others
(1) Join         yes        no         restricted     bounded        database-style queries
(2) Recursion    yes        yes        restricted     cyclic         inductive analysis
(3) Constraint   yes        yes        unrestricted   general        combinatorial search

Table 1: Essential control abstractions of logic languages.
3 Join and database-style queries
Join queries are the most basic and most commonly used queries in relating different objects.
They underlie essentially all nontrivial queries in database applications and many other
applications.
3.1 Join queries
A join query is a conjunction of two hypotheses that have shared variables, concluding possible values of variables that satisfy both hypotheses. A conjunction of two hypotheses that
have no shared variables, i.e., a Cartesian product, or a single hypothesis can be considered
a trivial join query. A join query corresponds to a rule whose predicate in the conclusion is
different from the predicates in the hypotheses, so the rule is not recursive. A non-recursive rule
with more than two hypotheses corresponds to multiple join queries, as a nesting or chain
of join queries starting with joining any two hypotheses first.
For example, the first rule below, as seen before, is a join query. So is the second rule;
it defines sibling over X and Y if X and Y have the same parent. The third rule defines a chain
of red, green, and blue links from X to Y through U and V; it can be viewed as two join
queries—join any two hypotheses first, and then join the result with the third hypothesis.
is_grandfather(X,Y) <- is_father(X,Z), is_parent(Z,Y).
sibling(X,Y) <- is_parent(Z,X), is_parent(Z,Y).
chain(X,Y) <- link(X,U,red), link(U,V,green), link(V,Y,blue).
In general, the asserted predicates can be about relationships among any kinds of objects—
whether people, things, events, or anything else, e.g., students, employees, patients, doctors,
products, courses, hospitals, flights, interviews, and hangouts; and the join queries can be
among any kinds of relationships—whether family, friend, owning, participating, thinking,
or any other relation in the real world or conceptual world.
Join queries expressed using rules correspond to set queries. For example, in a language
that supports set comprehensions with tuple patterns [RL07, LBSL16] the is_grandfather
query corresponds to
is_grandfather = {(X,Y): (X,Z) in is_father, (Z,Y) in is_parent}
Without recursion, join queries can be easily supported together with the following extensions, with the restriction that, for each rule, each variable in the conclusion must also appear
in a hypothesis that is a predicate over arguments, so the domain of the variable is bounded
by the predicate; queries using these extensions can be arbitrarily nested:
• unrestricted negation, other connectives, and predefined relationships in additional
conditions,
• aggregates, such as count and max, about the relationships, and
• general universal and existential quantifiers in any scope.
These subsume all constructs in the select statement for SQL queries. Essentially, join
queries, with no recursion, relate objects in different relationships within a bounded number
of steps.
3.2 Implementation of join queries
A join query can be implemented straightforwardly using nested for-loops and if-statements,
where shared variables in different hypotheses correspond to equality tests between the corresponding variables. For example, the is_grandfather query earlier in this section can be
implemented as
is_grandfather = {}
for (X,Z1) in is_father:        -- time factor: number of is_father pairs
  for (Z2,Y) in is_parent:      -- time factor: number of is_parent pairs
    if Z1 == Z2:
      is_grandfather.add((X,Y))
In a language that supports set comprehensions, such as Python, the above implementation
can be expressed as
is_grandfather = {(X,Y) for (X,Z1) in is_father for (Z2,Y) in is_parent if Z1 == Z2}
For efficient implementations, several key implementation and optimization techniques
are needed, described below; additional optimizations are also needed, e.g., for handling
streaming data or distributed data.
Indexing. This creates an index for fast lookup based on values of the indexed arguments
of a relation; the index is on the shared arguments of the two hypotheses. For example, for any fact is_father(X,Z), to find the matching is_parent(Z,Y), an index called,
say, children{Z}—mapping the value of Z, the first argument of is_parent, to the set
of corresponding values of the second argument of is_parent—significantly speeds up the
lookup, improving the time factor for the inner loop to the number of children of Z:
is_grandfather = {(X,Y) for (X,Z) in is_father for Y in children{Z}}
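As an illustration, assuming is_father and is_parent are sets of pairs as in the Python code
above, the children index can be built and used as follows; the inner loop now touches only
the children of Z instead of all is_parent pairs.

    from collections import defaultdict

    children = defaultdict(set)
    for (z, y) in is_parent:
        children[z].add(y)           # index is_parent on its first argument

    is_grandfather = {(x, y) for (x, z) in is_father for y in children[z]}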
Join ordering. This optimizes the order of joins when there are multiple joins, e.g., in a
rule with more than two hypotheses. For example, for the rule for chain, starting by
joining the first and third hypotheses is never more efficient than starting by joining
either of these hypotheses with the second hypothesis, because the former yields all
pairs of red and blue links, even if there are no green links in the middle.
Tabling. This stores the result of common sub-joins so they are not repeatedly com-
puted. Common sub-joins may arise when there are nested or chained join queries.
For example, for the rule for chain earlier in this section, consider joining the first two
hypotheses first: if there are many red and green link pairs from a value of X to a value
of V, then storing the result of this sub-join avoids recomputing it when joining with
blue links to find each target Y.
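For example, assuming link is stored as a set of (source, target, color) triples, the sub-join
of red and green links can be computed once, stored, and then joined with all blue links; this
sketch shows only the idea of storing the sub-join, without the indexing a real implementation
would also use.

    red_green = {(x, v) for (x, u, c1) in link if c1 == 'red'
                        for (u2, v, c2) in link if c2 == 'green' and u == u2}

    chain = {(x, y) for (x, v) in red_green
                    for (v2, y, c) in link if c == 'blue' and v == v2}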
Demand-driven computation. This computes only those parts of relationships that af-
fect a particular query. For example, a query may only check whether is_father(dan,bob)
holds, or find all values of X for is_father(dan, X), or find all is_father pairs, as opposed
to finding all relationships that can be inferred.
Basic ideas for implementing the extensions negation, aggregates, etc. are as follows, where
nested queries using these extensions are computed following their order of dependencies:
• negation, etc. in additional conditions: test them after the variables in them become
bound by the joins.
• aggregates: apply the aggregate operation while collecting the query result of its argument.
• quantifiers: transform them into for-loops, or into aggregates, e.g., an existential quantification is equivalent to a count being positive.
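For instance, continuing the Python sketches above with hypothetical data, a count aggregate
and an existential quantification over is_parent reduce to the following; the quantifier is
just a positive count, as noted in the last item.

    num_children_of_bob = sum(1 for (p, c) in is_parent if p == 'bob')    # count aggregate
    has_child = num_children_of_bob > 0                                   # some Y: is_parent(bob,Y)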
Efficient implementation techniques for join queries and extensions have been studied in
a large literature, e.g., [Ioa96]. Some methods also provide precise complexity guarantees,
e.g., [Wil02, LBSL16].
3.3 Applications of join queries
Join queries are fundamental in querying complex relationships among objects. They are
the core of database applications [KBL06], from enterprise management to ontology management, from accounting systems to airline reservation systems, and from electronic health
records management to social media management. Database and logic programming are
so closely related that one of the most important computer science bibliographies is called
DBLP, and it was named after Database and Logic Programming [Ley02]. Join queries also
underlie applications that do not fit in traditional database applications, such as complex
access control policy frameworks [ANS04].
We describe three example applications below, in the domains of ontology management,
enterprise management, and security policy frameworks. They all heavily rely on the use of
join queries and optimizations, especially indexing. We give specific examples of facts, rules,
and indexing for the first application.
Ontology management—Coherent definition framework (CDF). CDF is a system for
ontology management that has been used in numerous commercial projects [GAS10],
for organizing information about, e.g., aircraft parts, medical supplies, commercial
processes, and materials. It was originally developed by XSB, Inc. Significant portions
have been released in the XSB packages [SW+ 14].
The data in CDF are classes and objects. For example, XSB, Inc. has a part taxonomy,
combining UNSPSC (United Nations Standard Products and Services Code) and Federal INC (Item Name Code) taxonomies, with a total of over 87,000 classes of parts.
The main relationships are variants of isa, hasAttr, and allAttr. Joins are used extensively to answer queries about closely related classes, objects, and attributes. Indexing
and tabling are heavily used for efficiency. Appropriate join order and demand-driven
computation are also important.
An example fact is as follows, indicating that specification ’A-A-1035’ in ontology specs
has attribute ’MATERIAL’ whose value is ’ALUMINUM ALLOY UNS A91035’ in material_taxonomy.
Terms cid(Identifier, Namespace) represent primitive classes in CDF.
hasAttr_MATERIAL(cid(’A-A-1035’,specs),
cid(’ALUMINUM ALLOY UNS A91035’,material_taxonomy)).
An example rule is as follows, meaning that a part PartNode has attribute
’PART-PROCESS-MATERIAL’ whose value is process-material pair (Process,Material) in
’ODE Ontology’ if PartNode has attribute ’PROCESS’ whose value is Process, and Process
has attribute ’PROCESS-MATERIAL’ whose value is Material.
hasAttr_PART-PROCESS-MATERIAL(PartNode,
      cid(’process-material’(Process,Material),’ODE Ontology’)) <-
  hasAttr_PROCESS(PartNode, Process),
  hasAttr_PROCESS-MATERIAL(Process, Material).
An example of indexing is for hasAttr_ATTR, for any ATTR, shown below, in XSB notation,
meaning: use as index all symbols of the first argument if it is bound, or else do so for
the second argument.
[*(1), *(2)]
XSB, Inc. has five major ontologies represented in CDF, for parts, materials, etc., with
a total of over one million facts and five meta rules. The rules are represented using a
Description Logic form—an ontology representation language. The example rule above
is an instance of such a rule when interpreted. The indexing used supports different
appropriate indices for different join queries.
CDF is used in XSB, Inc.’s ontology-directed classifier (ODC) and extractor (ODE) [SW12].
ODC uses a modified Bayes classifier to classify item descriptions. For example, it is
used quarterly by the U.S. Department of Defense to classify over 80 million part
descriptions. ODE extracts attribute-value pairs from classified descriptions to build
structured knowledge about items. ODC uses aggregates extensively, and ODE uses
string pattern rules.
Enterprise management—Business intelligence (BI). BI is a central component of
enterprise software. It tracks the performance of an enterprise over time by storing
and analyzing historical information recorded through online transaction processing
(OLTP), and is then used to help plan future actions of the enterprise. LogicBlox
simplifies the hairball of enterprise software technologies by using a Datalog-based
language [GAK12, AtCG+ 15].
All data are captured as logic relations. This includes not only data as in conventional
databases, e.g., sale items, price, and so on for a retail application, but also data not
in conventional databases, e.g., sale forms, display texts, and submit buttons in a user
interface. Joins are used for easily querying interrelated data, as well as for generating
user interfaces. Many extensions such as aggregates are also used. For efficiency,
exploiting the rich literature of automatic optimizations, especially join processing
strategies and incremental maintenance, is of paramount importance.
Using the same Datalog-based language, LogicBlox supports not only BI but also
OLTP and prescriptive and predictive analytics. “Today, the LogicBlox platform has
matured to the point that it is being used daily in dozens of mission-critical applications in some of the largest enterprises in the world, whose aggregate revenues exceed
$300B” [AtCG+ 15].
Security policy frameworks—Core role-based access control (RBAC). RBAC is a
framework for controlling user access to resources based on roles. It became an ANSI
standard [ANS04] building on much research during the preceding decade and earlier,
e.g., [LHM84, FK92, GB98, FSG+ 01].
Core RBAC defines users, roles, objects, operations, permissions, sessions and a number
of relations among these sets; the rest of RBAC adds a hierarchical relation over roles,
in hierarchical RBAC, and restricts the number of roles of a user and of a session, in
constrained RBAC. Join queries are used for all main system functions, especially the
CheckAccess function, review functions, and advanced review functions on the sets and
relations. They are easily expressed using logic rules [BLV04, BF06].
Efficient implementations rely on all main optimizations discussed, especially auxiliary
maps for indexing and tabling [LWG+ 06]. Although the queries are like relational
database queries, existing database implementations would be too slow for functions
like CheckAccess. Unexpectedly, uniform use of relations and join queries also led to
a simplified specification, with unnecessary mappings removed, undesired omissions
fixed, and constrained RBAC drastically simplified [LS07].
4 Recursion and inductive analysis
Recursive rules are most basic and essential in relating objects that are an unknown number of relationships apart. They are especially important for problems that may require
performing the inference or queries for a non-predetermined number of steps, depending on the
data.
4.1 Recursive rules and queries
Given a set of rules, a predicate p depends on a predicate q if p is in the conclusion of a rule,
and either q is in a hypothesis of the rule or some predicate r is in a hypothesis of the rule
and r depends on q . A given set of rules is recursive if a predicate p in the conclusion of a
rule depends on p itself.
For example, the second rule below, as seen in Section 2.2, is recursive; the first rule is
not recursive; the set of these two rules is recursive, where the first rule is the base case, and
the second rule is the recursive case.
is_ancestor(X,Y) <- is_parent(X,Y).
is_ancestor(X,Y) <- is_parent(X,Z), is_ancestor(Z,Y).
In general, recursively asserted relationships can be between objects of any kind, e.g., relatives and friends that are an unknown number of connections apart in social networks,
direct and indirect prerequisites of courses in universities, routing paths in computer networks, nesting of parts in products, supply chains in supply and demand networks, transitive
role hierarchy relation in RBAC, and repeated delegations in trust management systems.
Recursive queries with restricted negation correspond to least fixed-point computations.
For example, in a language that supports least fixed points, the is_ancestor query corresponds
to the minimum is_ancestor set below, where, for any sets S and T, S subset T holds iff every
element of S is an element of T:
min is_ancestor: is_parent subset is_ancestor,
{(X,Y): (X,Z) in is_parent, (Z,Y) in is_ancestor} subset is_ancestor
With cyclic predicate dependencies, recursion allows the following restricted extensions to
be supported while still providing a unique semantics; there is also the restriction that, for
each rule, each variable in the conclusion must also appear in a hypothesis that is a predicate
over arguments, as in extensions to join queries:
• stratified negation, where negation and recursion are separable, i.e., there is no predicate that depends on the negation of itself, and
• other connectives and predefined relationships in additional conditions, aggregates,
and general quantifiers, as in extensions for join queries, when they do not affect the
stratification.
Essentially, recursive rules capture an unbounded number of joins, and allow inference and
queries by repeatedly applying the rules.
4.2 Implementation of recursive rules and queries
Inference and queries using recursive rules can be implemented using while-loops; for-loops
with a predetermined number of iterations do not suffice, because the number of iterations
depends on the rules and facts. Each iteration applies the rules in one step, so to speak,
until no more relevant facts can be concluded. For example, the is_ancestor query earlier in
this section can be implemented as
is_ancestor = is_parent
while exists (X,Y): (X,Z) in is_parent, (Z,Y) in is_ancestor, (X,Y) not in is_ancestor:
is_ancestor.add((X,Y))
Each iteration computes the existential quantification in the condition of the while-loop, and
picks any witness (X,Y) to add to the result set. It can be extremely inefficient to recompute
the condition in each iteration after a new pair is added.
For efficient implementations, all techniques for joins are needed but are also more critical
and more complex. In particular, to ensure termination,
• tabling is critical if relationships form cycles, and
• demand-driven computation is critical if new objects are created in the cycles.
For the is_ancestor query, each iteration computes the following set, which is a join, plus
the last test to ensure that only a new fact is added:
{(X,Y): (X,Z) in is_parent, (Z,Y) in is_ancestor, (X,Y) not in is_ancestor}
Two general principles underlying the optimizations for efficient implementations are:
1. incremental computation for expensive relational join operations, with respect to facts
that are added in each iteration.
2. data structure design for the relations, for efficient retrievals and tests of relevant facts.
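A minimal sketch of the first principle for the is_ancestor query, in Python with hypothetical
facts: each iteration joins is_parent only with the facts added in the previous iteration (the
delta), instead of recomputing the join against all of is_ancestor.

    is_parent = {('amy', 'bob'), ('bob', 'cal'), ('cal', 'dee')}

    is_ancestor = set(is_parent)
    delta = set(is_parent)
    while delta:
        # join is_parent only with the newly added facts; keep only genuinely new pairs
        delta = {(x, y) for (x, z) in is_parent for (z2, y) in delta
                 if z == z2 and (x, y) not in is_ancestor}
        is_ancestor |= delta

    print(len(is_ancestor))   # 6: the three given pairs plus (amy,cal), (bob,dee), (amy,dee)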
For the restricted extensions, iterative computation follows the order of dependencies determined by stratification; additional aggregates, etc. that do not affect the stratification can
be handled as described in Section 3.2 for computing the join in each iteration.
Efficient implementation techniques for recursive queries and extensions have been studied extensively, e.g., [AHV95]. Some methods also provide precise complexity guarantees,
e.g., [McA99, GM01, LS09, TL11].
4.3 Applications of recursive rules and queries
Recursive rules and queries can capture any complex reachability problem in recursive structures, graphs, and hyper-graphs. Examples are social network analysis based on all kinds
of social graphs; program analysis over many kinds of flow and dependence graphs about
program control and data values; model checking over labeled transition systems and state
machines; routing in electronic data networks, telephone networks, or transportation networks; and security policy analysis and enforcement over trust or delegation relationships.
We describe three example applications below, in the domains of text and natural language processing, program analysis, and distributed security policy frameworks. They all
critically depend on the use of recursive rules and efficient implementation techniques, especially tabling and indexing.
Text processing—Super-tokenizer. Super-tokenizer is an infrastructure tool for text
processing that has been used by XSB, Inc.’s ontology-directed classifier (ODC) and
extractor (ODE) for complex commercial applications [SW12]. It was also developed
originally at XSB, Inc.
Super-tokenizer supports the declaration of complex rewriting rules for token lists. For
example, over 65,000 of these rules implement abbreviations and token corrections in
ODC and complex pattern-matching rules in ODE for classification and extraction
based on combined UNSPSC and Federal INC taxonomies at XSB, Inc. Recursion is
used extensively in the super-tokenizer, for text parsing and processing. The implementation uses tabled grammars and trie-based indexing in fundamental ways.
Super-tokenizer is just one particular application that relies on recursive rules for text
processing and, more generally, language processing. Indeed, the original application
of Prolog, the first and main logic programming language, was natural language processing (NLP) [PS02], and a more recent application in NLP helped the IBM Watson
question answering system win the Jeopardy Man vs. Machine Challenge by defeating
two former grand champions in 2011 [LF11, LPM+ 12].
Program analysis—Pointer analysis. Pointer analysis statically determines the set
of objects that a pointer variable or expression in a program can refer to. It is
a fundamental program analysis with wide applications and has been studied extensively, e.g., [Hin01, SCD+ 13]. The studies especially include significantly simplified
specifications using Datalog in more recent years, e.g., [SB15], and powerful
systems such as bddbddb [WACL05] and Doop [BS09b], the latter built using LogicBlox [GAK12, AtCG+ 15].
Different kinds of program constructs and analysis results relevant to pointers are relations. Datalog rules capture the analysis directly as recursively defined relations. For
example, the well-known Andersen’s pointer analysis for C programs defines a points-to
relation based on four kinds of assignment statements [And94], leading directly to four
Datalog rules [SR05]. Efficient implementation critically depends on tabling, indexing,
and demand-driven computation [SR05, TL11]. Such techniques were in fact followed
by hand to arrive at the first ultra fast analysis [HT01b, HT01a].
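As an illustration only, and not quoted from [SR05], a common formulation of the four kinds
of assignments and their rules, in the rule notation of Section 2.2 with hypothetical predicate
names, is: address_of(P,X) for p = &x, copy(P,Q) for p = q, load(P,Q) for p = *q, and
store(P,Q) for *p = q.

    points_to(P,X) <- address_of(P,X).
    points_to(P,X) <- copy(P,Q), points_to(Q,X).
    points_to(P,X) <- load(P,Q), points_to(Q,R), points_to(R,X).
    points_to(R,X) <- store(P,Q), points_to(P,R), points_to(Q,X).

The last two rules propagate points-to facts through every object that the dereferenced pointer
may refer to, which is what makes the relation recursive.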
Indeed, efficient implementations can be generated from Datalog rules giving much better, more precise complexity guarantees [LS09, TL11] than the worst-case complexities,
e.g., the well-known cubic time for Andersen’s analysis. Such efficient implementation
with complexity guarantees can be obtained for program analysis in general [McA99].
Commercial tools for general program analysis based on Datalog have also been built,
e.g., by Semmle based on CodeQuest [HVM06].
Security policy frameworks—Trust management (TM). TM is a unified approach
to specifying and enforcing security policies in distributed systems [BFL96, GS00,
RK05]. It has become increasingly important as systems become increasingly interconnected, and logic-based languages have been used increasingly for expressing TM
policies [Bon10], e.g., SD3 [Jim01], RT [LMW02], Binder [DeT02], Cassandra [BS04],
and many extensions, e.g., [BRS12, SBK13].
Certification, delegation, authorization, etc. among users, roles, permissions, etc. are
relations. Policy rules correspond directly to logic rules. The relations can be transitively defined, yielding recursive rules. For example, one of the earliest TM frameworks,
SPKI/SDSI [EFL+ 99], for which various sophisticated methods have been studied, corresponds directly to a few recursive rules [HTL07], and efficient implementations with
necessary indexing and tabling were generated automatically.
TM studies have used many variants of Datalog with restricted constraints [LM03],
not unrestricted negation. A unified framework with efficient implementations is still
lacking. For example, based on the requirements of the U.K. National Health Service,
a formal electronic health records (EHR) policy was written, as 375 rules in Cassandra [Bec05b], heavily recursive. As the largest case study in the TM literature, its
implementation was inefficient and incomplete—techniques like indexing were deemed
needed but missing [Bec05a].
5 Constraint and combinatorial search
Constraints are the most general form of logic specifications, which easily captures the most
challenging problem-solving activities such as planning and resource allocation.
5.1 Constraint satisfaction
A constraint is, in general, a relationship among objects but especially refers to cases when
it can be satisfied with different choices of objects and the right choice is not obvious.
For example, the rule below says that X is a winning position if there is a move from X
to Y and Y is not a winning position. It states a relationship among objects, but its meaning
is not obvious, because the concluding predicate is recursively defined using a negation of
the predicate itself.
win(X) <- move(X,Y), not win(Y).
In general, constraints can capture any real-world or conceptual-world problems, e.g., rules
for moves in any game—whether recreational, educational, or otherwise; actions with conditions and effects for any planning activities; participants and resource constraints in
scheduling—whether for university courses or manufacturer goods production or hospital
surgeries; real-world constraints in engineering design; as well as knowledge and rules for
puzzles and brain teasers.
Given constraints may have implications that are not completely explicit. For example,
the win rule implies not just the first constraint below, but also the second, by negating
the conclusion and hypotheses in the given rule, following the closed-world assumption; the
second constraint makes the constraint about not win explicit:
win(X) if some Y: move(X,Y) and not win(Y)
not win(X) if all Y: not move(X,Y) or win(Y)
Indeed, with general constraints, objects can be related in all ways using all constructs
together with join and recursion: unrestricted negation, other connectives, predefined relationships, aggregates, and general quantifiers in any scope.
However, due to negation in dependency cycles, the meaning of the rules and constraints
is not universally agreed on anymore.
• Well-founded semantics (WFS) gives a single, 3-valued model, where relationships that
are true or false are intended to be supported from given facts, i.e., well-founded, and
the remaining ones are undefined.
• Stable model semantics (SMS) gives zero or more 2-valued models, where each model
stays the same, i.e., is stable, when it is used to instantiate all the rules; in other words,
applying the rules to each model yields the same model.
For example, for the win example,
• if there is only one move, move(a,b), not forming a cycle, then
WFS and SMS both give that win(b) is false and win(a) is true;
• if there is only one move, move(a,a), forming a self cycle, then
WFS gives that win(a) is undefined, and
SMS gives that there is no model;
• if there are only two moves, move(a,b) and move(b,a), forming a two-move cycle, then
WFS gives that win(a) and win(b) are both undefined, and
SMS gives two models: one with win(a) true and win(b) false, and one with the opposite
results.
Despite the differences, WFS and SMS can be computed using some shared techniques.
5.2 Implementation of constraint satisfaction
Constraint solving could in general use straightforward generate-and-test—generate each
possible combination of objects for solutions and test whether they satisfy the constraints—
but backtracking is generally used, as it is much more efficient.
Backtracking. Backtracking incrementally builds variable assignments for the solutions,
and abandons each partial assignment as soon as it determines that the partial assignment cannot be completed to a satisfying solution, going back to try a different value
for the last variable assigned; this avoids trying all possible ways of completing those
partial assignments or naively enumerating all complete assignments.
For example, the win(X) query can basically try a move at each next choice of moves and
backtrack to try a different move as soon as the current move fails. Expressed using recursive
functions, this corresponds basically to the following:
def win(X): return (some Y: move(X,Y) and not_win(Y))
def not_win(X): return (all Y: not move(X,Y) or win(Y))
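A minimal runnable version of this sketch in Python, with hypothetical moves and valid only
when the moves form no cycle, as discussed next:

    moves = {('a', 'b'), ('b', 'c')}      # a -> b -> c, and c has no moves

    def win(x):
        # x wins if some move from x leads to a position that does not win
        return any(not win(y) for (x2, y) in moves if x2 == x)

    print(win('a'), win('b'), win('c'))   # False True False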
This backtracking answers the query correctly when the moves do not form a cycle. However,
it might not terminate when the moves form a cycle, and the implementation depends on
the semantics used. Both WFS and SMS can be computed by using and extending the basic
backtracking:
• WFS computation could track cycles, where executing a call requires recursively making the same call, and infer undefined for those queries that have no execution paths
to infer the query result to be true or false.
• SMS computation could generate possible partial or complete variable assignments,
called grounding, and check them, possibly with the help of an external solver like
Boolean satisfiability (SAT) solvers or satisfiability modulo theories (SMT) solvers.
For efficient implementations, techniques for join and recursion are critical as before,
especially tabling to avoid repeated states in the search space. Additionally, good heuristics
for pruning the search space can make drastic performance difference in computing SMS,
e.g., as implemented in answer set programming (ASP) solvers.
Backjumping. One particular optimization of backtracking in SMS computation is back-
jumping. Backtracking always goes back one level in the search tree when all values
for a variable have been tested. Backjumping may go back more levels, by realizing
that a prefix of the partial assignment can lead to all values for the current variable to
fail. This helps prune the search space.
For extensions that include additional constraints, such as integer constraints, as well as
aggregates and quantifiers, an efficient solver such as one that supports mixed integer programming (MIP) can be used.
Efficient implementation techniques for constraint solving have been studied extensively,
e.g., for ASP solvers [LPF+ 06, GKKS12].
5.3 Applications of constraint satisfaction
The generality and power of constraints allow them to be used for all applications described
previously, but constraints are particularly important for applications beyond those and
that require combinatorial search. Common kinds of search problems include planning and
scheduling, resource allocation, games and puzzles, and well-known NP-complete problems
such as graph coloring, k-clique, set cover, Hamiltonian cycle, and SAT.
We describe three example applications, in the domains of decision making, resource
allocation, and games and puzzles. They all require substantial use of general constraints and
efficient constraint solvers exploiting backtracking, backjumping, and other optimizations.
Enterprise decision making—Prescriptive analysis. Prescriptive analysis suggests de-
cision options that lead to optimized future actions. It is an advanced component of
enterprise software. For example, for planning purposes, LogicBlox supports prescriptive analysis using the same Datalog-based language as for BI and OLTP [GAK12,
AtCG+ 15].
The data are objects and relations, same as used for BI, but may include, in particular,
costs and other objective measures. Constraints capture restrictions among the objects
and relations. When all data values are provided, constraints can simply be checked.
When some data values are not provided, different choices for those values can be
explored, and values that lead to certain maximum or minimum objective measures
may be prescribed for deciding future actions. Efficient implementations can utilize
the best constraint solvers based on the kinds of constraints used.
LogicBlox’s integrated solution to decision making based on BI and OLTP has led
to significant success. For example, for a Fortune 50 retailer with over $70 billion in
revenue and with products available through over 2,000 stores and digital channels, the
solution processes 3 terabytes of data on daily, weekly, and monthly cycles, deciding
exactly what products to sell in what stores in what time frames; this reduces a multi-
year cycle of a challenging task for a large team of merchants and planners to an
automatic process and significantly increases profit margins [Log15a].
Resource allocation—Workforce management (WFM) in Port of Gioia Tauro. WFM
handles activities needed to maintain a productive workforce. The WFM system for
automobile logistics in the Port of Gioia Tauro, the largest transshipment terminal in
the Mediterranean, allocates available personnel of the seaport such that cargo ships
mooring in the port are properly handled [RGA+ 12, LR15]. It was developed using
the DLV system [LPF+ 06].
The data include employees of different skills, cargo ships of different sizes and loads,
teams and roles to be allocated, and many other objects to be constrained, e.g., workload of employees, heaviness of roles, and contract rules. Constraints include matching
of available and required skills, roles, hours, etc., fair distribution of workload, turnover
of heavy or dangerous roles, and so on. The constraints are expressed using rules with
disjunction in the conclusion, general negation, and aggregates. The DLV system uses
backtracking and a suite of efficient implementation techniques.
This WFM system was developed by Exeura s.r.l. and has been adopted by the company ICO BLG operating automobile logistics in the Port of Gioia Tauro [LR15],
handling every day several ships of different sizes that moor in the port [RGA+ 12].
Games and puzzles—N-queens. We use a small example in a large class of problems.
The n-queens puzzle is the problem of placing n queens on a chessboard of n-by-n squares
so that no two queens threaten each other, i.e., no two queens share the same row,
column, or diagonal. The problem is old, well-studied, and can be computationally
quite expensive [BS09a].
The allowed placements of queens can be specified as logic rules with constraints.
Naively enumerating all possible combinations of positions and checking the constraints
is prohibitively expensive. More efficient solutions use backtracking, and furthermore
backjumping, to avoid impossible placement of each next queen as soon as possible.
Stronger forms of constraints may also be specified to help prune the search space
further [GKKS12]. For example, backtracking can solve for one or two scores of queens
in an hour, but backjumping and additional constraints help an ASP system like Clingo
solve for 5000 queens in 3758.320 seconds of CPU time [Sch14].
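A minimal backtracking sketch for n-queens in Python, for illustration only: it finds one
placement and abandons conflicting partial placements early, but has none of the backjumping
or added constraints that let systems like Clingo scale to thousands of queens.

    def queens(n):
        cols = []                          # cols[r] = column of the queen in row r
        def place(r):
            if r == n:
                return list(cols)
            for c in range(n):
                # prune as soon as column c conflicts with a queen in an earlier row
                if all(c != cols[i] and abs(c - cols[i]) != r - i for i in range(r)):
                    cols.append(c)
                    solution = place(r + 1)
                    if solution:
                        return solution
                    cols.pop()             # backtrack: try the next column for row r
            return None
        return place(0)

    print(queens(8))   # one solution, e.g., [0, 4, 7, 5, 2, 6, 1, 3]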
Many other games and puzzles can be specified and solved in a similar fashion. Examples are all kinds of crossword puzzles, Sudoku, Knight’s tour, nonograms, magic
squares, dominos, coin puzzles, graph coloring, palindromes, among many others,
e.g., [DNST05, Edm15, Het15, Mal15, Kje15].
6 Further extensions, applications, and discussion
We discuss additional language extensions and applications, summarize applications based
on the key abstractions used, touching on example logic programming systems, and finally
put the abstractions into the perspective of programming paradigms.
6.1 Extensions
Many additional extensions to logic languages have been studied. Most of them can be
viewed as abstractions that capture common patterns in classes of applications, to allow
applications to be expressed more easily. Important extensions include:
• regular-expression paths, a higher-level abstraction for commonly-used linear recursion;
• updates, for real-world applications that must handle changes;
• time, for expressing changes over time, as an alternative to supporting updates directly;
• probability, to capture uncertainty in many challenging applications; and
• higher-order logic, to support applications that require meta-level reasoning.
We discuss two of the most important extensions below:
Regular-expression paths. A regular-expression path relates two objects using regular
expressions and extensions. It allows repeated joins of a binary relation to be expressed
more easily and clearly than using recursion; such joins capture reachability and are
commonly used. For example, is_ancestor(X,Y), defined in Section 4 using two rules
including a recursive rule, can now be defined simply as below; it indicates that there
are one or more is_parent relationships in a path from X to Y:
is_ancestor(X,Y) <- is_parent+(X,Y).
This is also higher-level than using recursion, because the recursive rule has to pick one
of three possible forms below: with is_parent on the left, as seen before; with is_parent
on the right; and with both conjuncts using is_ancestor.
is_ancestor(X,Y) <- is_parent(X,Z), is_ancestor(Z,Y).
is_ancestor(X,Y) <- is_ancestor(X,Z), is_parent(Z,Y).
is_ancestor(X,Y) <- is_ancestor(X,Z), is_ancestor(Z,Y).
Depending on the data, the performance of these forms can be asymptotically different
in most implementations.
Regular-expression paths have many important applications including all those in Section 4, especially graph queries, along with parametric extensions for more general relations, not just binary relations [dMLW03, LRY+ 04, LS06, TGL10].
Updates. An update, or action, can be expressed as a predicate that captures the update,
e.g., by relating the values before and after the update and the change in value. The
effect of the update could be taken immediately after the predicate is evaluated, similar
to updates in common imperative languages, but this leads to lower-level control flows
that are harder to reason about. Instead, it is better for the update to take effect as part
of a transaction of multiple updates that together satisfy high-level logic constraints.
For example, with this approach, the following rule means that adopted_by_from holds
if the updates add_parent and del_parent and the check adoption_check happen as a transaction.
adopted_by_from(C,X,Y) <- add_parent(X,C), del_parent(Y,C), adoption_check(C,X,Y).
It ensures at a high level that certain bad things won't happen, e.g., no child would end
with one fewer parent or one more parent than expected. Transaction logic is an extension of logic rules for reasoning about and executing transactional state changes [BK94].
LPS, a Logic-based approach to Production Systems, captures state changes by associating timestamps with facts and events, and this is shown to correspond to updating
facts directly [KS15].
Logic languages with updates have important applications in enterprise software [GAK12,
AtCG+ 15]. Transaction logic can also help in planning [BKB14].
Additional implementation support can help enhance applications and enable additional
applications. A particularly helpful feature is to record justification or provenance information
during program execution [RRR00, DAA13], providing explanations for how a result was
obtained. The recorded information can be queried to improve understanding and help
debugging.
6.2 Additional applications
Many additional applications have been developed using logic programming, especially including challenging applications that need recursion and those that furthermore need constraint.
Table 2 lists example application areas with example application problems organized
based on the main abstractions used. Note that application problems can often be reduced
to each other, and many other problems can be reduced to the problems in the table. For
example, model checking a property of a system [CGP99, CGL94] can be reduced to planning,
where the goal state is a state violating the property specified, so a plan found by a planner
corresponds to an error trace found by a model checker [EMP14, CEFP14]. Administrative
policy analysis also has correspondences to planning, by finding a sequence of actions to
achieve the effect of a security breach [SYGR11].
Table 2 is only a small sample of the application areas, with example application problems
or kinds of application problems in those areas. Many more applications have been developed,
Area           Using join               Using recursion              Using constraint
Data           business intelligence,*  route queries,               data cleaning,
management     many database            many database                data repair
               join queries             recursive queries
Knowledge      ontology                 ontology analysis            reasoning with
management     management*                                           knowledge
Decision       market analysis          supply-chain management      prescriptive analysis,*
support                                                              planning, scheduling,
                                                                     resource allocation*
Linguistics                             text processing,*            context-sensitive
                                        context-free parsing,        analysis,
                                        semantic analysis            deep semantics analysis
Program        type checking,           pointer analysis,*           type analysis,
analysis       many local analyses      type inference,              many constraint-based
                                        many dependency analyses     analyses
Security       role-based               trust management,*           administrative policy
               access control*          hierarchical role-based      analysis,
                                        access control               cryptanalysis
Games and                               Hanoi tower,                 n-queens,*
puzzles                                 many recursion problems      Sudoku,
                                                                     many constraint puzzles
Teaching       course management        question analysis,           problem diagnosis,
                                        course analysis              test generation

Table 2: Example application areas with example application problems organized based on
the main abstractions used. Applications discussed in some detail in this article are marked
with an asterisk.
in many more areas, using systems that support variants of the abstractions with different
implementations. Some examples are:
• XSB has also been used to develop applications for immunization survey [BKGD+ 12],
standardizing data, spend analysis, etc. [XSB15], and it is discussed in many publications (a
Google Scholar search with +XSB +”logic programming” returns over 2300 results, July 2, 2017).
• LogicBlox has also been used to create solutions for predicting consumer demand,
optimizing supply chain, etc. [Log15b] and more [GAK12].
• ASP systems have been used in bioinformatics, hardware design, music composition,
robot control, tourism, and many other application areas [Gro05, Sch11], including part
of a decision support system for the Space Shuttle flight controller [NBG+ 01, BG05].
• Logic systems have been developed for additional applications, e.g., PRISM [SK97]
and ProbLog [DRKT07] for probabilistic models; XMC [RRS+ 00] and ProB [LB08]
for verification; and NDlog [LCG+ 09], Meld [ARLG+ 09], Overlog [ACC+ 10], and
Bloom [Ber13] for network and distributed algorithms.
Languages and systems with more powerful features such as constraints for general applications are often also used in less challenging application areas such as those that need only
join queries. For example, DLV has also been used in ontology management [RGS+ 09].
6.3 Additional discussion on abstractions
We give an overview of the main abstractions in the larger picture of programming paradigms,
to help put the kinds of applications supported into broader perspective.
The three main abstractions—join, recursion, and constraint—correspond generally to
more declarative programming paradigms. Each is best known in its corresponding main
programming community:
• Join in database programming. Database systems have join at the core but support
restricted recursion and constraints in practice.
• Recursion in functional programming. Functional languages have recursion at the core
but do not support high-level join or constraints.
• Constraint in logic programming. Logic engines support both join and recursion at the
core, and have increasingly supported constraints at the core as well.
The additional extensions help further raise the level of abstraction and broaden the programming paradigms supported:
• Regular-expression paths raise the level of abstraction over lower-level linear recursion.
• Updates, or actions, are the core of imperative programming; they help capture real-world operations even when not used in low-level algorithmic steps.
• Time, probability, higher-order logic, and many other features correspond to additional
arguments, attributes, or abstractions about objects and relationships.
One main paradigm not yet discussed is object-oriented programming. Orthogonal to data
and control abstractions, objects in common object-oriented languages provide a kind of
module abstraction, encapsulating both data structures and control structures in objects
and classes. Similar abstractions have indeed been added to logic languages as well. For
example, F-logic extends traditional logic programming with objects [KLW95] and is supported in Flora-2 [KYWZ14]; it was also the basis of a highly scalable commercial system,
Ontobroker [Sem12], and a recent industry suite, Ergo [GBF+ 15]. For another example, ASP
has been extended with object constructs in OntoDLV [RGS+ 09].
Finally, building practical applications requires powerful libraries and interfaces for many
standard functionalities. Many logic programming systems provide various such libraries.
For example, SWI-Prolog has libraries for constraint logic programming, multithreading,
interface to databases, GUI, a web server, etc., as well as development tools and extensive
documentation.
7 Related literature and future work
There are many overview books and articles about logic programming in general and applications of logic programming in particular. This article differs from prior works by studying the
key abstractions and their implementations as the driving force underlying vastly different
application problems and application areas.
Kowalski [Kow14] provides an extensive overview of the development of logic programming. It describes the historical root of logic programming, starting from resolution theorem proving; the procedural interpretation and semantics of rules with no negated hypotheses,
called Horn clause programs; negation as failure, including completion semantics, stratification, well-founded semantics, stable model semantics, and ASP; as well as logic programming
involving abduction, constraints, and argumentation. It focuses on three important issues:
logic programming as theorem proving vs. model generation, with declarative vs. procedural
semantics, and using top-down vs. bottom-up computation. Our description of abstractions
and implementations aims to separate declarative semantics from procedural implementations.
Other overviews and surveys about logic programming in general include some that
cover a collection of topics together and some that survey different topics separately. Example collections discuss the first 25 years of logic programming from 1974 [AWT99] and
the first 25 years of the Italian Association of Logic Programming from 1985 [DP10]. Example topics surveyed separately include logic programming semantics [Fit02], complexity
and expressive power [DEGV01], constraints [JM94], ASP and DLV [GLR13], deductive
databases [CGT90, AHV95, RU95, MSZ14], and many more. Our description of abstractions and implementations is only a highly distilled overview of the core topics.
Overviews and surveys about logic programming applications in particular are spread
across many forums. Example survey articles include an early article on Prolog applications [Rot93], DLV applications [GILR09, GLMR11, LR15], applications in Italy [DPT10],
emerging applications [HGL11], and a dedicated workshop AppLP—Applications of Logic
Programming [WL17]. For example, the early article [Rot93] describes six striking practical
applications of Prolog that replaced and drastically improved over systems written previously
using Fortran, C++, and Lisp. Example collections of applications on the Web include one
at TU Wien [Gro05], one by Schaub [Sch11], and some of the problems in various competitions, e.g., as described by Gebser et al. [GMR17]. We try to view the applications by the
abstractions and implementations used, so as to not be distracted by specific details of very
different applications.
There are also many articles on specific applications or specific classes of applications.
Examples of the former include team building [RGA+ 12], program pointer analysis [SB15],
and others discussed in this article. Examples of the latter include applications in software
engineering [CS95], DLV applications in knowledge management [GILR09], and IDP applications in data mining and machine learning [BBB+ 14]. We used a number of such specific
applications as examples and described some of them in slightly more detail to illustrate the
common technical core in addition to the applications per se.
Directions for future work. There are several main areas for future study: (1) more high-level abstractions that are completely declarative, (2) more efficient implementations with
complexity guarantees, and (3) more unified and standardized languages and frameworks
with rich libraries. These will help many more applications to be created in increasingly
complex problem domains.
Acknowledgment
I would like to thank David S. Warren for his encouragement over the years at Stony Brook,
and his patient and stimulating explanations about logic programming in general and XSB
implementation in particular. I am grateful to Molham Aref, Francesco Ricca, and David
Warren for helpful suggestions and additional information about applications using LogicBlox, DLV, and XSB, respectively. I thank Molham Aref and others at LogicBlox, Jon
Brandvein, Christopher Kane, Michael Kifer, Bob Kowalski, Bo Lin, Francesco Ricca, Scott
Stoller, Tuncay Tekle, David Warren, Neng-Fa Zhou, and anonymous reviewers for helpful
comments on drafts of this article.
References
[ACC+ 10]
P. Alvaro, T. Condie, N. Conway, J.M. Hellerstein, and R. Sears. I do declare: Consensus in a
logic language. ACM SIGOPS Operating Systems Review, 43(4):25–30, 2010.
[AHV95]
Serge Abiteboul, Richard Hull, and Victor Vianu. Foundations of Databases: The Logical Level.
Addison-Wesley, 1995.
[And94]
Lars Ole Andersen. Program Analysis and Specialization for the C Programming Language.
PhD thesis, DIKU, University of Copenhagen, 1994.
[ANS04]
ANSI INCITS. Role-Based Access Control. ANSI INCITS 359-2004, American National Standards Institute, International Committee for Information Technology Standards, Feb. 2004.
[ARLG+ 09] Michael P. Ashley-Rollman, Peter Lee, Seth Copen Goldstein, Padmanabhan Pillai, and Jason D. Campbell. A language for large ensembles of independently executing nodes. In Proceedings of the 25th International Conference on Logic Programming, pages 265–280. Springer,
2009.
[AtCG+ 15] Molham Aref, Balder ten Cate, Todd J. Green, Benny Kimelfeld, Dan Olteanu, Emir Pasalic,
Todd L. Veldhuizen, and Geoffrey Washburn. Design and implementation of the LogicBlox
system. In Proceedings of the 2015 ACM SIGMOD International Conference on Management
of Data, pages 1371–1382, 2015.
[AWT99]
Krzysztof R. Apt, David S. Warren, and Mirek Truszczynski, editors. The Logic Programming
Paradigm: A 25-Year Perspective. Springer, 1999.
[BBB+ 14]
Maurice Bruynooghe, Hendrik Blockeel, Bart Bogaerts, Broes De Cat, Stef De Pooter, Joachim
Jansen, Anthony Labarre, Jan Ramon, Marc Denecker, and Sicco Verwer. Predicate logic as
a modeling language: Modeling and solving some machine learning and data mining problems
with IDP3. Theory and Practice of Logic Programming, pages 1–35, 2014.
[Bec05a]
Moritz Y. Becker. Cassandra: Flexible trust management and its application to electronic
health records. PhD dissertation, Technical Report UCAM-CL-TR-648, Computer Laboratory,
University of Cambridge, 2005.
[Bec05b]
Moritz Y. Becker. A formal security policy for an NHS electronic health record service. Technical
Report UCAM-CL-TR-628, Computer Laboratory, University of Cambridge, 2005.
[Ber13]
Bloom Programming Language. http://www.bloom-lang.net, 2013. Latest release April 23,
2013. Accessed January 14, 2017.
[BF06]
Steve Barker and Maribel Fernández. Term rewriting for access control. In Data and applications security XX, pages 179–193. Springer, 2006.
[BFL96]
Matt Blaze, Joan Feigenbaum, and Jack Lacy. Decentralized trust management. In Proceedings
of the 1996 IEEE Symposium on Security and Privacy, pages 164–173, 1996.
[BG05]
Marcello Balduccini and Michael Gelfond. Model-based reasoning for complex flight systems. In
Proceedings of the 5th AIAA Conference on Aviation, Technology, Integration, and Operations,
2005.
[BK94] Anthony J. Bonner and Michael Kifer. An overview of transaction logic. Theoretical Computer Science, 133(2):205–265, 1994.
[BKB14]
Reza Basseda, Michael Kifer, and Anthony J Bonner. Planning with transaction logic. In
Proceedings of the 8th International Conference on Web Reasoning and Rule Systems, pages
29–44. Springer, 2014.
[BKGD+ 12] Anthony Burton, Robert Kowalski, Marta Gacic-Dobo, Rouslan Karimov, and David Brown. A
formal representation of the WHO and UNICEF estimates of national immunization coverage:
A computational logic approach. PLOS ONE, Oct. 2012.
[BLV04]
Steve Barker, Michael Leuschel, and Mauricio Varea. Efficient and flexible access control via
logic program specialisation. In Proceedings of the 2004 ACM SIGPLAN Symposium on Partial
Evaluation and Semantics-Based Program Manipulation, pages 190–199, 2004.
[Bon10]
Piero A. Bonatti. Datalog for security, privacy and trust. In Proceedings of the 1st International
Conference on Datalog Reloaded, pages 21–36. Springer, 2010.
[BRS12]
Moritz Y Becker, Alessandra Russo, and Nik Sultana. Foundations of logic-based trust management. In Proceedings of the 2012 IEEE Symposium on Security and Privacy, pages 161–175.
IEEE CS Press, 2012.
[BS04]
Moritz Y. Becker and Peter Sewell. Cassandra: Flexible trust management, applied to electronic
health records. In Proceedings of the 17th IEEE Computer Security Foundations Workshop,
pages 139–154. IEEE CS Press, 2004.
[BS09a]
Jordan Bell and Brett Stevens. A survey of known results and research areas for n-queens.
Discrete Mathematics, 309(1):1–31, 2009.
[BS09b]
Martin Bravenboer and Yannis Smaragdakis. Strictly declarative specification of sophisticated
points-to analyses. In Proceedings of the 24th ACM SIGPLAN Conference on Object Oriented
Programming Systems Languages and Applications, pages 243–262, 2009.
[CEFP14]
Alessandro Cimatti, Stefan Edelkamp, Maria Fox, and Erion Plaku. Dagstuhl Seminar 14482:
Automated Planning and Model Checking. http://www.dagstuhl.de/no_cache/en/program/
calendar/semhp/?semnr=14482, Nov. 23–28, 2014. Accessed June 6, 2015.
[CGL94]
Edmund M. Clarke, Orna Grumberg, and David E. Long. Model checking and abstraction.
ACM Transactions on Programming Languages and Systems, 16(5):1512–1542, 1994.
[CGP99]
Edmund M. Clarke, Jr., Orna Grumberg, and Doron A. Peled. Model Checking. MIT Press,
1999.
[CGT90] Stefano Ceri, Georg Gottlob, and Letizia Tanca. Logic Programming and Databases. Springer, 1990.
[CS95] P. Ciancarini and Leon Sterling. Report on the Workshop: Applications of Logic Programming in Software Engineering. The Knowledge Engineering Review, 10(01):97–100, 1995.
[DAA13]
Carlos Viegas Damásio, Anastasia Analyti, and Grigoris Antoniou. Justifications for logic
programming. In Proceedings of the 12th International Conference on Logic Programming and
Nonmonotonic Reasoning, pages 530–542. Springer, 2013.
[DEGV01]
Evgeny Dantsin, Thomas Eiter, Georg Gottlob, and Andrei Voronkov. Complexity and expressive power of logic programming. ACM Computing Surveys, 33(3):374–425, 2001.
[DeT02]
John DeTreville. Binder, a logic-based security language. In Proceedings of the 2002 IEEE
Symposium on Security and Privacy, pages 105–113. IEEE CS Press, 2002.
[dMLW03]
Oege de Moor, David Lacey, and Eric Van Wyk. Universal regular path queries. Higher-Order
and Symbolic Computation, 16(1–2):15–35, 2003.
[DNST05]
Bart Demoen, Phuong-Lan Nguyen, Tom Schrijvers, and Remko Troncon. The first 10 Prolog
programming contests. http://dtai.cs.kuleuven.be/ppcbook/, 2005. Accessed May 20,
2015.
[DP10] Agostino Dovier and Enrico Pontelli, editors. A 25-Year Perspective on Logic Programming: Achievements of the Italian Association for Logic Programming, GULP. Springer, 2010.
[DPT10]
Alessandro Dal Palù and Paolo Torroni. 25 years of applications of logic programming in Italy.
In A 25-Year Perspective on Logic Programming, pages 300–328. Springer, 2010.
[DRKT07]
Luc De Raedt, Angelika Kimmig, and Hannu Toivonen. ProbLog: A probabilistic Prolog and
its application in link discovery. In Proceedings of the 20th International Joint Conference on
Artificial Intelligence, pages 2468–2473. Morgan Kaufmann, 2007.
[DT08]
Marc Denecker and Eugenia Ternovska. A logic of nonmonotone inductive definitions. ACM
Transactions on Computational Logic, 9(2):14, 2008.
[Edm15]
Doug Edmunds. Learning constraint logic programming—finite domains with logic puzzles.
http://brownbuffalo.sourceforge.net/, 2015. Accessed May 20, 2015.
[EFL+ 99]
C. Ellison, B. Frantz, B. Lampson, R. L. Rivest, B. Thomas, and T. Ylonen. RFC 2693: SPKI
Certificate Theory. http://www.ietf.org/rfc/rfc2693.txt, Sept. 1999. Accessed June 4,
2015.
[EMP14] Stefan Edelkamp, Daniele Magazzeni, and Erion Plaku. Workshop on Model Checking and Automated Planning (MOCHAP'14). http://icaps14.icaps-conference.org/workshops_tutorials/mochap.html, Portsmouth, NH, June 23, 2014. Accessed June 6, 2015.
[Fit02]
Melvin Fitting. Fixpoint semantics for logic programming: A survey. Theoretical Computer
Science, 278(1):25–51, 2002.
[FK92]
D. Ferraiolo and R. Kuhn. Role-based access control. In Proceedings of the 15th NIST-NSA
National Computer Security Conference, pages 554–563, Baltimore, Maryland, 1992. http:
//arxiv.org/abs/0903.2171.
[FSG+ 01]
David F. Ferraiolo, Ravi Sandhu, Serban Gavrila, D. Richard Kuhn, and Ramaswamy Chandramouli. Proposed NIST standard for role-based access control. ACM Transactions on Information and Systems Security, 4(3):224–274, 2001.
[GAK12]
Todd J. Green, Molham Aref, and Grigoris Karvounarakis. LogicBlox, platform and language:
A tutorial. In Proceedings of the 2nd International Conference on Datalog in Academia and
Industry, Datalog 2.0, pages 1–8. Springer, 2012.
[GAS10]
Ana Sofia Gomes, José Júlio Alferes, and Terrance Swift. Implementing query answering for hybrid MKNF knowledge bases. In Proceedings of the 12th International Conference on Practical
Aspects of Declarative Languages, pages 25–39. Springer, 2010.
[GB98]
A. Gavrila and J. Barkley. Formal specification for RBAC user/role and role relationship
management. In Proceedings of the 3rd ACM Workshop on Role Based Access Control, pages
81–90, 1998.
[GBF+ 15]
Benjamin Grosof, Janine Bloomfield, Paul Fodor, Michael Kifer, Isaac Grosof, Miguel Calejo,
and Theresa Swift. Automated decision support for financial regulatory/policy compliance,
using textual rulelog. In Proceedings of the RuleML 2015 Challenge, the Special Track on Rulebased Recommender Systems for the Web of Data, the Special Industry Track and the RuleML
2015 Doctoral Consortium, 2015. http://ceur-ws.org/Vol-1417/.
[GILR09]
Giovanni Grasso, Salvatore Iiritano, Nicola Leone, and Francesco Ricca. Some DLV applications
for knowledge management. In E. Erdem, F. Lin, and T. Schaub, editors, Proceedings of the 10th
International Conference on Logic Programming and Nonmonotonic Reasoning, pages 591–597.
Springer, 2009.
[GKKS12]
M. Gebser, R. Kaminski, B. Kaufmann, and T. Schaub. Answer Set Solving in Practice.
Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool, 2012.
[GL88]
Michael Gelfond and Vladimir Lifschitz. The stable model semantics for logic programming. In
Proceedings of the 5th International Conference and Symposium on Logic Programming, pages
1070–1080. MIT Press, 1988.
[GLMR11]
Giovanni Grasso, Nicola Leone, Marco Manna, and Francesco Ricca. ASP at work: Spin-off
and applications of the DLV system. In Logic Programming, Knowledge Representation, and
Nonmonotonic Reasoning—Essays Dedicated to Michael Gelfond on the Occasion of His 65th
Birthday, pages 432–451. Springer, 2011.
[GLR13]
Giovanni Grasso, Nicola Leone, and Francesco Ricca. Answer set programming: Language,
applications and development tools. In Proceedings of the 7th International Conference on Web
Reasoning and Rule Systems, pages 19–34. Springer, 2013.
[GM01]
Harald Ganzinger and David A. McAllester. A new meta-complexity theorem for bottomup logic programs. In Proceedings of the 1st International Joint Conference on Automated
Reasoning, pages 514–528. Springer, 2001.
[GMR17]
Martin Gebser, Marco Maratea, and Francesco Ricca. The sixth answer set programming
competition. J. Artif. Intell. Res., 60:41–95, 2017.
[Gro05]
TU Wien Knowledge-Based Systems Group. WP5 report: Model applications and proofs-ofconcept. http://www.kr.tuwien.ac.at/research/projects/WASP/report.pdf, Aug. 2005.
Accessed May 20, 2015.
[GS00]
Tyrone Grandison and Morris Sloman. A survey of trust in Internet applications. IEEE
Communications Surveys and Tutorials, 3(4):2–16, 2000.
[HDCD10]
P. Hou, B. De Cat, and M. Denecker. FO(FD): Extending classical logic with rule-based fixpoint
definitions. Theory and Practice of Logic Programming, 10(4-6):581–596, 2010.
[Het15]
Werner Hett. Prolog Site—Prolog Problems. http://sites.google.com/site/prologsite/
prolog-problems/, 2015. Accessed May 28, 2015.
[HGL11]
Shan Shan Huang, Todd Jeffrey Green, and Boon Thau Loo. Datalog and emerging applications: An interactive tutorial. In Proceedings of the 2011 ACM SIGMOD International
Conference on Management of data, pages 1213–1216, 2011.
[Hin01]
Michael Hind. Pointer analysis: Haven’t we solved this problem yet? In Proceedings of the 2001
ACM SIGPLAN-SIGSOFT workshop on Program Analysis for Software Tools and Engineering,
pages 54–61, 2001.
[HT01a]
Nevin Heintze and Olivier Tardieu. Demand-driven pointer analysis. In Proceedings of the ACM
SIGPLAN 2001 Conference on Programming Language Design and Implementation, pages 24–
34, 2001.
[HT01b]
Nevin Heintze and Olivier Tardieu. Ultra-fast aliasing analysis using CLA: A million lines of
C code in a second. In Proceedings of the ACM SIGPLAN 2001 Conference on Programming
Language Design and Implementation, pages 254–263, 2001.
[HTL07]
Katia Hristova, K. Tuncay Tekle, and Yanhong A. Liu. Efficient trust management policy
analysis from rules. In Proceedings of the 9th ACM SIGPLAN International Conference on
Principles and Practice of Declarative Programming, pages 211–220, 2007.
[HVM06]
Elnar Hajiyev, Mathieu Verbaere, and Oege De Moor. CodeQuest: Scalable source code queries
with Datalog. In Proceedings of the 20th European Conference on Object-Oriented Programming, pages 2–27. Springer, 2006.
[Ioa96]
Yannis E Ioannidis. Query optimization. ACM Computing Surveys, 28(1):121–123, Mar. 1996.
[Jim01]
Trevor Jim. SD3: A trust management system with certified evaluation. In Proceedings of the
2001 IEEE Symposium on Security and Privacy, pages 106–115. IEEE CS Press, 2001.
[JM94]
Joxan Jaffar and Michael J. Maher. Constraint logic programming: A survey. Journal of Logic
Programming, 19:503–581, 1994.
[KBL06]
Michael Kifer, Arthur Bernstein, and Philip M. Lewis. Database Systems: An Application
Oriented Approach, Complete Version. Addison-Wesley, 2nd edition, 2006.
[Kje15]
Hakan Kjellerstrand. My Picat page. http://www.hakank.org/picat/, 2015. Accessed May
29, 2015.
[KLW95]
Michael Kifer, Georg Lausen, and James Wu. Logical foundations of object-oriented and framebased languages. Journal of the ACM, 42(4):741–843, 1995.
[Kow14]
Robert Kowalski. Logic programming. In Dov M. Gabbay, Jörg H. Siekmann, and John Woods,
editors, Computational Logic, volume 9 of Handbook of the History of Logic, pages 523–569.
Elsevier, 2014.
[KS15]
Robert Kowalski and Fariba Sadri. Reactive computing as model generation. New Generation
Computing, 33(1):33–67, 2015.
[KYWZ14]
Michael Kifer, Guizhen Yang, Hui Wan, and Chang Zhao. Flora-2: User’s Manual Version 1.0.
Stony Brook University, July 2014. http://flora.sourceforge.net/. Accessed June 6, 2015.
[LB08]
Michael Leuschel and Michael Butler. ProB: An automated analysis toolset for the B method.
International Journal on Software Tools for Technology Transfer, 10(2):185–203, 2008.
[LBSL16]
Yanhong A. Liu, Jon Brandvein, Scott D. Stoller, and Bo Lin. Demand-driven incremental
object queries. In Proceedings of the 18th International Symposium on Principles and Practice
of Declarative Programming, pages 228–241. ACM Press, 2016.
[LCG+ 09]
Boon Thau Loo, Tyson Condie, Minos Garofalakis, David E. Gay, Joseph M. Hellerstein, Petros
Maniatis, Raghu Ramakrishnan, Timothy Roscoe, and Ion Stoica. Declarative networking.
Communications of the ACM, 52:87–95, 2009.
[Ley02]
Michael Ley. The DBLP computer science bibliography: Evolution, research issues, perspectives. In Proceedings of the 9th International Symposium on String Processing and Information
Retrieval, pages 1–10. Springer, 2002.
[LF11]
Adam Lally and Paul Fodor. Natural language processing with Prolog in the IBM Watson
system. Association for Logic Programming (ALP) Issue, Featured Articles, Mar. 31 2011.
Accessed April 23, 2015.
[LHM84]
Carl E. Landwehr, Constance L. Heitmeyer, and John McLean. A security model for military
message systems. ACM Transactions on Computer Systems, 2(3):198–222, 1984.
[LM03]
Ninghui Li and John C. Mitchell. Datalog with constraints: A foundation for trust management languages. In Proceedings of the 5th International Symposium on Practical Aspects of
Declarative Languages, pages 58–73. Springer, 2003.
[LMW02]
Ninghui Li, John C. Mitchell, and William H. Winsborough. Design of a role-based trustmanagement framework. In IEEE Symposium on Security and Privacy, pages 114–130, 2002.
[Log15a] LogicBlox. Assortment planning and management. http://www.logicblox.com/solution-four.html, 2015. Accessed May 18, 2015.
[Log15b] LogicBlox. Solutions. http://www.logicblox.com/solutions.html, 2015. Accessed May 18, 2015.
[LPF+06] Nicola Leone, Gerald Pfeifer, Wolfgang Faber, Thomas Eiter, Georg Gottlob, Simona Perri, and Francesco Scarcello. The DLV system for knowledge representation and reasoning. ACM Transactions on Computational Logic, 7(3):499–562, July 2006.
[LPM+ 12]
Adam Lally, John M. Prager, Michael C. McCord, Branimir K. Boguraev, Siddharth Patwardhan, James Fan, Paul Fodor, and Jennifer Chu-Carroll. Question analysis: How Watson reads
a clue. IBM Journal of Research and Development, 56(3/4):2:1–2:13, 2012.
[LR15]
Nicola Leone and Francesco Ricca. Answer Set Programming: A tour from the basics to
advanced development tools and industrial applications. In Proceedings of the 11th International
Summer School on Reasoning Web, pages 308–326. Springer, 2015.
[LRY+ 04]
Yanhong A. Liu, Tom Rothamel, Fuxiang Yu, Scott Stoller, and Nanjun Hu. Parametric regular
path queries. In Proceedings of the ACM SIGPLAN 2004 Conference on Programming Language
Design and Implementation, pages 219–230, 2004.
[LS06]
Yanhong A. Liu and Scott D. Stoller. Querying complex graphs. In Proceedings of the 8th International Symposium on Practical Aspects of Declarative Languages, pages 199–214. Springer,
2006.
[LS07] Yanhong A. Liu and Scott D. Stoller. Role-based access control: A corrected and simplified specification. In Department of Defense Sponsored Information Security Research: New Methods for Protecting Against Cyber Threats, pages 425–439. Wiley, 2007.
[LS09] Yanhong A. Liu and Scott D. Stoller. From Datalog rules to efficient programs with time and space guarantees. ACM Transactions on Programming Languages and Systems, 31(6):1–38, 2009.
[LS18] Yanhong A. Liu and Scott D. Stoller. Founded semantics and constraint semantics of logic rules. In Symposium on Logical Foundations of Computer Science, Lecture Notes in Computer Science. Springer, Jan. 2018.
[LWG+ 06]
Yanhong A. Liu, Chen Wang, Michael Gorbovitski, Tom Rothamel, Yongxi Cheng, Yingchao
Zhao, and Jing Zhang. Core role-based access control: Efficient implementations by transformations. In Proceedings of the ACM SIGPLAN 2006 Workshop on Partial Evaluation and
Program Manipulation, pages 112–120, 2006.
[Mal15]
Mihaela Malita. Logic puzzles in Prolog. http://www.anselm.edu/internet/compsci/
faculty_staff/mmalita/HOMEPAGE/logic/index.html, 2015. Accessed May 28, 2015.
[McA99]
David A. McAllester. On the complexity analysis of static analyses. In Proceedings of the 6th
International Static Analysis Symposium, pages 312–329. Springer, 1999.
[MSZ14]
Jack Minker, Dietmar Seipel, and Carlo Zaniolo. Logic and databases: History of deductive
databases. In D. Gabbay, J. Siekmann, and J. Woods, editors, Handbook of Computational
Logic, chapter 17, pages 571–628. North-Holland, 2014.
[NBG+ 01]
Monica Nogueira, Marcello Balduccini, Michael Gelfond, Richard Watson, and Matthew Barry.
An A-Prolog decision support system for the Space Shuttle. In Practical Aspects of Declarative
Languages, pages 169–183. Springer, 2001.
[Prz94]
Teodor C. Przymusinski. Well-founded and stationary models of logic programs. Annals of
Mathematics and Artificial Intelligence, 12(3):141–187, 1994.
[PS02]
Fernando C.N. Pereira and Stuart M Shieber. Prolog and Natural-Language Analysis. Microtome Publishing, 2002. Revision of October 5, 2005.
[RGA+ 12]
Francesco Ricca, Giovanni Grasso, Mario Alviano, Marco Manna, Vincenzino Lio, Salvatore
Iiritano, and Nicola Leone. Team-building with answer set programming in the Gioia-Tauro
Seaport. Theory and Practice of Logic Programming, 12(3):361–381, 2012.
[RGS+ 09]
Francesco Ricca, Lorenzo Gallucci, Roman Schindlauer, Tina Dell’Armi, Giovanni Grasso, and
Nicola Leone. OntoDLV: An ASP-based system for enterprise ontologies. Journal of logic and
computation, 19(4):643–670, 2009.
[RK05]
Sini Ruohomaa and Lea Kutvonen. Trust management survey. In Proceedings of the Third
international conference on Trust Management, pages 77–92. Springer, 2005.
[RL07]
Tom Rothamel and Yanhong A. Liu. Efficient implementation of tuple pattern based retrieval.
In Proceedings of the ACM SIGPLAN 2007 Workshop on Partial Evaluation and Program
Manipulation, pages 81–90, 2007.
[Rot93]
Al Roth. The practical application of Prolog. AI Expert, 8:24–24, 1993. In Dr. Dobb’s, http://
www.drdobbs.com/parallel/the-practical-application-of-prolog/184405220, Dec.10,
2002. Accessed June 6, 2015.
[RRR00]
Abhik Roychoudhury, CR Ramakrishnan, and IV Ramakrishnan. Justifying proofs using memo
tables. In Proceedings of the 2nd ACM SIGPLAN International Conference on Principles and
Practice of Declarative Programming, pages 178–189, 2000.
[RRS+ 00]
C.R. Ramakrishnan, I.V. Ramakrishnan, Scott A. Smolka, Yifei Dong, Xiaoqun Du, Abhik Roychoudhury, and V.N. Venkatakrishnan. XMC: A logic-programming-based verification toolset.
In Proceedings of the 12th International Conference on Computer Aided Verification, pages
576–580. Springer, 2000.
[RU95]
Raghu Ramakrishnan and Jeffrey D Ullman. A survey of deductive database systems. Journal
of Logic Programming, 23(2):125–149, 1995.
[SB15]
Yannis Smaragdakis and George Balatsouras. Pointer analysis. Foundations and Trends in
Programming Languages, 2(1):1–69, 2015.
[SBK13]
Nik Sultana, Moritz Y. Becker, and Markulf Kohlweiss. Selective disclosure in Datalog-based
trust management. In Proceedings of the 9th International Workshop on Security and Trust
Management, pages 160–175. Springer, 2013.
[SCD+ 13]
Manu Sridharan, Satish Chandra, Julian Dolby, Stephen J Fink, and Eran Yahav. Alias analysis
for object-oriented programs. In Aliasing in Object-Oriented Programming: Types, Analysis and
Verification, pages 196–232. Springer, 2013.
[Sch11]
Torsten Schaub. Collection on Answer Set Programming (ASP) and more. http://www.cs.
uni-potsdam.de/~torsten/asp/, Mar. 2011. Accessed May 18, 2015.
[Sch14]
Torsten Schaub. Answer set solving in practice. http://www.cs.uni-potsdam.de/~torsten/
Potassco/Slides/asp.pdf, Dec. 23, 2014. Accessed May 20, 2015.
[Sem12]
Semafora. Semantic infrastructure: OntoBroker. http://www.semafora-systems.com/en/
products/ontobroker/, 2012. Accessed May 18, 2015.
[SK97]
Taisuke Sato and Yoshitaka Kameya. PRISM: A language for symbolic-statistical modeling.
In Proceedings of the 15th International Joint Conference on Artificial Intelligence, Volume 2, pages 1330–1335. Morgan Kaufmann, 1997.
[SR05]
Diptikalyan Saha and C. R. Ramakrishnan. Incremental and demand-driven points-to analysis
using logic programming. In Proceedings of the 7th ACM SIGPLAN International Conference
on Principles and Practice of Declarative Programming, pages 117–128, 2005.
[SW12]
Terrance Swift and David S Warren. XSB: Extending Prolog with tabled logic programming.
Theory and Practice of Logic Programming, 12(1-2):157–187, 2012.
[SW+ 14]
Terrance Swift, David S. Warren, et al. The XSB System Version 3.5.x, June 2014. http:
//xsb.sourceforge.net. Accessed June 6, 2015.
[SYGR11]
Scott D Stoller, Ping Yang, Mikhail I Gofman, and CR Ramakrishnan. Symbolic reachability
analysis for parameterized administrative role-based access control. Computers & Security,
30(2):148–164, 2011.
[TGL10]
K. Tuncay Tekle, Michael Gorbovitski, and Yanhong A. Liu. Graph queries through Datalog optimizations. In Proceedings of the 12th International ACM SIGPLAN Symposium on
Principles and Practice of Declarative Programming, pages 25–34, 2010.
[TL11]
K. Tuncay Tekle and Yanhong A. Liu. More efficient Datalog queries: Subsumptive tabling
beats magic sets. In Proceedings of the 2011 ACM SIGMOD International Conference on
Management of Data, pages 661–672, 2011.
[VG93]
Allen Van Gelder. The alternating fixpoint of logic programs with negation. Journal of Computer and System Sciences, 47(1):185–221, 1993.
[VRS91]
Allen Van Gelder, Kenneth Ross, and John S. Schlipf. The well-founded semantics for general
logic programs. Journal of the ACM, 38(3):620–650, 1991.
[WACL05]
John Whaley, Dzintars Avots, Michael Carbin, and Monica S Lam. Using Datalog with binary
decision diagrams for program analysis. In Programming Languages and Systems, pages 97–118.
Springer, 2005.
[Wil02]
Dan E. Willard. An algorithm for handling many relational calculus queries efficiently. Journal
of Computer and System Sciences, 65:295–331, 2002.
[WL17]
David S. Warren and Yanhong A. Liu. AppLP: A dialogue on applications of logic programming.
Computing Research Repository, arXiv:1704.02375 [], Apr. 2017.
[XSB15]
XSB. Case Studies. http://www.xsb.com/case-studies, 2015. Accessed May 18, 2015.
Spectral Efficiency of Mixed-ADC Massive MIMO
arXiv:1802.10259v1 [] 28 Feb 2018
Hessam Pirzadeh, Student Member, IEEE, and A. Lee Swindlehurst, Fellow, IEEE
Abstract—We study the spectral efficiency (SE) of a mixed-ADC massive MIMO system in which K single-antenna users
communicate with a base station (BS) equipped with M antennas
connected to N high-resolution ADCs and M − N one-bit ADCs.
This architecture has been proposed as an approach for realizing
massive MIMO systems with reasonable power consumption.
First, we investigate the effectiveness of mixed-ADC architectures
in overcoming the channel estimation error caused by coarse
quantization. For the channel estimation phase, we study to
what extent one can combat the SE loss by exploiting just
N ≪ M pairs of high-resolution ADCs. We extend the round-robin training scheme for mixed-ADC systems to include both
high-resolution and one-bit quantized observations. Then, we
analyze the impact of the resulting channel estimation error in the
data detection phase. We consider random high-resolution ADC
assignment and also analyze a simple antenna selection scheme
to increase the SE. Analytical expressions are derived for the
SE for maximum ratio combining (MRC) and numerical results
are presented for zero-forcing (ZF) detection. Performance comparisons are made against systems with uniform ADC resolution
and against mixed-ADC systems without round-robin training
to illustrate under what conditions each approach provides the
greatest benefit.
Index Terms—Massive MIMO, analog-to-digital converter,
mixed-ADC, spectral efficiency.
I. INTRODUCTION
THE seminal work of Marzetta introduced massive MIMO
as a promising architecture for future wireless systems
[2]. In the limit of an infinite number of base station (BS)
antennas, it was shown that massive MIMO can substantially
increase the network capacity. Another key potential of massive MIMO systems which has also made it interesting from
a practical standpoint is its ability to achieve this goal with
inexpensive, low-power components [3], [4]. However, preliminary studies on massive MIMO systems have for the most part
only analyzed its performance under the assumption of perfect
hardware [5], [6]. The impact of hardware imperfections and
nonlinearities on massive MIMO systems has recently been
investigated in [7]-[12]. Although it is well-known that the
dynamic power in massive MIMO systems can be scaled down proportionally to √M, where M denotes the number of BS
antennas, the static power consumption at the BS will increase
proportionally to M [8]. Hence, considering hardware-aware
design together with power consumption at the BS seems
necessary in realizing practical massive MIMO systems.
This work was supported by the National Science Foundation under Grants
ECCS-1547155 and CCF-1703635, and by a Hans Fischer Senior Fellowship
from the Technische Universität München Institute for Advanced Study.
H. Pirzadeh and A. L. Swindlehurst are with the Center for Pervasive
Communications and Computing, University of California, Irvine, CA 92697
USA (e-mail: [email protected]; [email protected]).
Portions of this paper have appeared in [1].
Among the various components responsible for power
dissipation at the BS, the contribution of analog-to-digital
converters (ADCs) is known to be dominant [13]. Consequently, the idea of replacing the high-power high-resolution
ADCs with power efficient low-resolution ADCs could be a
viable approach to address power consumption concerns at the
massive MIMO BSs. The impact of utilizing low-resolution
ADCs on the spectral efficiency (SE) and energy consumption
of massive MIMO systems has been considered in [14]-[22]. In
particular, studies on massive MIMO systems with purely one-bit ADCs show that the high spatial multiplexing gain owing
to the use of a large number of antennas is still achievable even
with one-bit ADCs [14], [15]. However, many more antennas
with one-bit ADCs (at least 2-2.5 times) are required to attain
the same performance as in the high-resolution ADCs case.
One of the main causes of SE degradation in purely one-bit
massive MIMO systems is the error due to the coarse quantization that occurs during the channel estimation phase. While
at low SNR the loss due to one-bit quantization is only about 2
dB, at higher SNRs performance degrades considerably more
and leads to an error floor [14]. The SE degradation can be reduced by improving the quality of the channel estimation prior
to signal detection. One approach for doing so is to exploit so-called mixed-ADC architectures during the channel estimation
phase, in which a combination of low- and high-resolution
ADCs are used side-by-side. This architecture is depicted in
Fig. 1. Mixed-ADC implementations were introduced in [23],
[24] and their performance was studied from an information
theoretic perspective via generalized mutual information.
The basic premise behind the mixed-ADC architecture is to
achieve the benefits of conventional massive MIMO systems
by just exploiting N ≪ M pairs of high-resolution ADCs.
An SE analysis of mixed-ADC massive MIMO systems with
maximum ratio combining (MRC) detection for Rayleigh and
Rician fading channels was carried out in [25] and [26],
respectively. The SE and energy efficiency of mixed-ADC
systems compared with systems composed of one-bit ADCs
was studied in [27] for MRC detection, and conditions were
derived under which each architecture provided the highest
SE for a given power consumption. The advantage of using a
mixed-ADC architecture in designing Bayes-optimal detectors
for MIMO systems with low-resolution ADCs is reported in
[28]. Although the nonlinearity of the quantization process
increases the complexity of the optimal detectors, it is shown
that adding a small number of high-resolution ADCs to
the system allows for less complex detectors with only a
slight performance degradation. Moreover, the benefit of using
mixed-ADC architectures in massive MIMO relay systems and
cloud-RAN deployments is elaborated in [29], [30].
Most existing work in the mixed-ADC massive MIMO
literature has assumed either perfect channel state information
(CSI) or imperfect CSI with “round-robin” training. In the
round-robin training approach [23], [24], [26], the training data
is repeated several times and the high-resolution ADCs are
switched among the RF chains so that every antenna can have
a “clean” snapshot of the pilots for channel estimation. This
obviously requires a larger portion of the coherence interval to
be devoted to training rather than data transmission. More precisely, for M antennas and N pairs of high-resolution ADCs,
M/N pilot signals are required in the single-user scenario
to estimate all M channel coefficients with high-resolution
ADCs. This issue is pointed out in [23] for the single user
scenario and its impact is taken into account. This training
overhead will be exacerbated in the multiuser scenario where
orthogonal pilot sequences should be assigned to the users.
In this case, the training period becomes (M/N )η, where η
represents the length of the pilot sequences (at least as large
as the number of user terminals), which could be prohibitively
large and may leave little room for data transmission. Hence,
it is crucial to account for this fact in any SE analysis of
mixed-ADC massive MIMO systems.
In this paper, we examine the channel estimation performance and the resulting uplink SE of mixed-ADC architectures with and without round-robin training, and compare them
with implementations that employ uniform ADC quantization
across all antennas. The main goals are to determine when,
if at all, the benefits of using the round-robin approach with
ADC/antenna switching outweigh the cost of increasing the
training overhead, and furthermore to examine the question of
whether or not one should employ a mixed-ADC architecture
in the first place. The contributions of the paper can be
summarized as follows.
• We first present an extension of the round-robin training
approach that incorporates both high-resolution and one-bit measurements for the channel estimation. The round-robin training proposed in [23], [24], [26] based the
channel estimate on only high-resolution observations,
assuming that no data was collected from antennas during
intervals when they were not connected to the high-resolution ADCs. In contrast, our extension assumes that
these antennas collect one-bit observations and combine
this data with the high-resolution samples to improve the
channel estimation performance.
• We use the Bussgang decomposition [31] to develop a
linear minimum mean-squared error (LMMSE) channel
estimator based on the combined round-robin measurements and we derive a closed-form expression for the
resulting mean-squared error (MSE). We further illustrate the importance of using the Bussgang approach
rather than the simpler additive quantization noise model
in obtaining the most accurate characterization of the
channel estimation performance for round-robin training.
The analysis illustrates that the addition of the one-bit
observations considerably improves performance at low
SNR.
• We perform a spectral efficiency analysis of the mixed-ADC implementation for the MRC and ZF receivers, and obtain expressions for a lower bound on the SE that takes into account the channel estimation error and the loss of
efficiency due to the round-robin training. We compare
the resulting SE with that achieved by mixed-ADC implementations that do not switch ADCs among the RF
chains, and hence do not use round-robin training. We
also compare against the SE for architectures that do not
mix the ADC resolution across the array, but instead use
uniform resolution with a fixed number of comparators
for different array sizes. We show that, depending on
the SNR, coherence interval, number of high-resolution
ADCs, and the choice of the linear receiver, there are
situations where each of the considered approaches shows
superior performance. In particular, even the round-robin
method with its considerable training overhead can provide the best SE under certain circumstances.
• We analyze the possible SE improvement that can be
achieved by using an antenna selection algorithm that
connects the high-resolution ADCs to the subset of antennas with the highest channel gain. We analytically derive
the SE performance of the antenna selection algorithm
for MRC and numerically study its performance for ZF
detection, comparing against the simpler approach of
assigning the high-resolution ADCs to an arbitrary fixed
subset of the RF chains.
In addition to the above contributions, we also discuss
some of the issues related to implementing an ADC switch
or multiplexer in hardware that allows different ADCs to be
assigned to different antennas. We restrict our analysis and
numerical examples to a single-carrier flat-fading scenario, although our methodology can be used in a straightforward way
to extend the results to frequency-selective fading or multiple-carrier signals (e.g., see our prior work in Section III.B of
[14] for the SE analysis of an all-one-bit ADC system for
OFDM and frequency selectivity). The reasons for focusing
on the single-carrier flat-fading case are as follows: (1) the
mixed-ADC assumption already makes the resulting analytical
expressions quite complicated even for the simple flat-fading
case, and it would be more difficult to gain insight into the
problem if the expressions were further complicated; (2) the
original round-robin training idea was proposed in [23] for the
single-carrier flat-fading case, and thus we analyze it under
the same assumptions; (3) the main conclusions of the paper
are based on relative algorithm comparisons for the same set
of assumptions, and we expect our general conclusions to
remain unchanged if frequency-selective rather than flat fading were
considered; and (4) the flat fading case is still of interest in
some applications, for example in a micro-cell setting with
typical path-length differences of 50-100 m, the coherence
bandwidth is between 3-6 MHz, which is not insignificant.
Further assumptions regarding the system model are outlined in the next section. Section III discusses channel estimation using round-robin training, and derives the LMMSE
channel estimator that incorporates both the high-resolution
and one-bit observations. A discussion of hardware and other
practical considerations associated with using a mixed-ADC
system with ADC/antenna switching is presented in Section
IV. Section V then presents the analysis of the spectral
symbols and changes independently between different intervals. Note that T is a fixed system parameter chosen as the
minimum coherence duration of all users. At the beginning of
each coherence interval, the users send their η-tuple mutually
orthogonal pilot sequences (K ≤ η ≤ T ) to the BS for channel
estimation. Denoting the length of the training phase as ηeff ,
the remaining T − ηeff symbols are dedicated to uplink data
transmission.
III. TRAINING PHASE
Fig. 1. Mixed-ADC architecture: the M RF chains feed M − N one-bit ADCs and N high-resolution ADCs through a multiplexer, followed by baseband combining.
efficiency for MRC and ZF receivers based on the imperfect
channel state estimates, including an analytical performance
characterization of antenna selection and architectures with
uniform ADC resolution across the array. A number of numerical studies are then presented in Section VI to illustrate
the relative performance of the algorithms considered.
Notation: We use boldface letters to denote vectors, and
capitals to denote matrices. The symbols (.)∗ , (.)T , and (.)H
represent conjugate, transpose, and conjugate transpose, respectively. A circularly-symmetric complex Gaussian (CSCG)
random vector with zero mean and covariance matrix R
is denoted v ∼ CN(0, R). The symbol ‖·‖ represents the
Euclidean norm. The K × K identity matrix is denoted by
I K and the expectation operator by E{.}. We use 1N to
denote the N ×1 vector of all ones, and diag{C} the diagonal
matrix formed from the diagonal elements of the square
matrix C. For a complex value, c = cR + jcI , we define
arcsin(c) ≜ arcsin(c_R) + j arcsin(c_I).
In this section, we investigate the linear minimum mean
squared error (LMMSE) channel estimator for different ADC
architectures at the BS. In all scenarios, the pilot sequences
are drawn from an η × K matrix Φ, where the kth column of
Φ, φk , is the kth user’s pilot sequence and ΦH Φ = I K .
Therefore, the M × η received signal at the BS before
quantization becomes
X = \sum_{k=1}^{K} \sqrt{\eta p_k}\, g_k \phi_k^T + N,   (2)
where N is an M × η matrix with i.i.d. CN (0, σn2 ) elements.
Since the rows of X are mutually independent due to the
assumption of spatially uncorrelated Gaussian channels and
noise, we can analyze them separately. As a result, we will
focus on the mth row of X which is
x_m^T = \sum_{k=1}^{K} \sqrt{\eta p_k}\, g_{mk} \phi_k^T + n_m^T,   (3)
where gmk is the mth element of the kth user channel vector,
g k , and nTm is the mth row of N . Since the analysis is not
dependent on m, hereafter we drop this subscript and denote
the received signal at the mth antenna by x.
II. SYSTEM MODEL
Consider the uplink of a single-cell multi-user MIMO
system consisting of K single-antenna users that send their
signals simultaneously to a BS equipped with M antennas.
Assuming a single-carrier frequency-flat channel, the M × 1
signal received at the BS from the K users is given by
r = \sum_{k=1}^{K} \sqrt{p_k}\, g_k s_k + n,   (1)
where p_k represents the average transmission power from the kth user, g_k = \sqrt{\beta_k}\, h_k is the channel vector between the
kth user and the BS where βk models geometric attenuation
and shadow fading, and hk ∼ CN (0, I M ) represents the
fast fading and is assumed to be independent of other users’
channel vectors. The symbol
transmitted by the kth user is
denoted by sk where E |sk |2 = 1 and is drawn from
a CSCG codebook independent of the other users. Finally,
n ∼ CN 0, σn2 I M denotes additive CSCG receiver noise at
the BS.
We consider a block-fading model with coherence bandwidth Wc and coherence time Tc . In this model, each channel
remains constant in a coherence interval of length T = Tc Wc
A. Estimation Using One-Bit Quantized Observations
In this subsection, to have a benchmark for comparison
purposes, we consider the case in which all antennas at the
BS are connected to one-bit ADCs. The received signal xT
after quantization by one-bit ADCs can be written as
y_t^T = Q(x^T),   (4)
where the element-wise one-bit quantization operation Q(·) replaces each input entry with the quantized value (1/√2)(±1 ± j),
depending on the sign of the real and imaginary parts. According to the Bussgang decomposition [31], the following linear
representation of the quantization can be employed [14]:
Q(x^T) = \sqrt{\frac{2}{\pi}}\, x^T D_x^{-\frac{1}{2}} + q_t^T,   (5)
where D_x = diag{C_x} and C_x denotes the autocorrelation matrix of x, which can be calculated as
C_x = \sum_{k=1}^{K} \eta p_k \beta_k \phi_k^* \phi_k^T + \sigma_n^2 I_\eta.   (6)
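To make the pilot-phase quantities concrete, the following NumPy sketch (our own illustration, not code from the paper; all parameter values are assumptions) generates one row of the pilot observations in (2)-(3), applies the one-bit quantizer of (4), and evaluates C_x, D_x and the arcsine-law covariance C_{q_t} of (6)-(7).

import numpy as np

rng = np.random.default_rng(0)
K, eta = 4, 4                          # users and pilot length (here eta = K)
p = np.full(K, 1.0)                    # per-user powers p_k (assumed)
beta = np.full(K, 1.0)                 # large-scale gains beta_k (assumed)
sigma2_n = 0.5                         # noise variance sigma_n^2

# Mutually orthogonal pilots: Phi is eta x K with Phi^H Phi = I_K.
Phi = np.fft.fft(np.eye(eta))[:, :K] / np.sqrt(eta)

# Autocorrelation of one unquantized row x^T, eq. (6), and D_x = diag{C_x}.
C_x = sigma2_n * np.eye(eta, dtype=complex)
for k in range(K):
    C_x += eta * p[k] * beta[k] * np.outer(Phi[:, k].conj(), Phi[:, k])
D_x = np.diag(np.diag(C_x).real)

# Arcsine law, eq. (7): covariance of the quantization noise q_t.
Dinv = np.diag(1.0 / np.sqrt(np.diag(D_x)))
R = Dinv @ C_x @ Dinv
arcsin_R = np.arcsin(R.real) + 1j * np.arcsin(R.imag)      # element-wise, per the Notation
C_qt = (2 / np.pi) * (arcsin_R - R)

# One realization of the pilot row (3) and its one-bit quantization (4).
g = np.sqrt(beta / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
n = np.sqrt(sigma2_n / 2) * (rng.standard_normal(eta) + 1j * rng.standard_normal(eta))
x = sum(np.sqrt(eta * p[k]) * g[k] * Phi[:, k] for k in range(K)) + n
y = (np.sign(x.real) + 1j * np.sign(x.imag)) / np.sqrt(2)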
where Q is an M × η matrix whose mth row is q_t^T. The LMMSE estimate of the channel G = [g_1, ..., g_K] based on just the one-bit quantized observations (8) is given in the following theorem, in which
\bar{\phi}_k \triangleq \sqrt{\frac{\pi}{2}}\, D_x^{\frac{1}{2}} \phi_k   (10)
\sigma_{w_k}^2 = \frac{1}{\eta p_k}\left(\sigma_n^2 + \bar{\phi}_k^T C_{q_t} \bar{\phi}_k^*\right).   (11)
Define the channel estimation error ε ≜ ĝ_k − g_k. Then we have
\sigma_{\hat{g}_k}^2 = \frac{\beta_k^2}{\beta_k + \sigma_{w_k}^2} \quad \text{and} \quad \sigma_{\varepsilon_k}^2 = \frac{\sigma_{w_k}^2 \beta_k}{\beta_k + \sigma_{w_k}^2},   (12)
where σ²_{ĝ_k} and σ²_{ε_k} are the variances of the independent zero-mean elements of ĝ_k and ε, respectively.
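As a minimal sketch of the estimator in (9)-(12) (our own illustration, not code from the paper; the function name and arguments are assumptions), the per-user one-bit LMMSE estimate can be computed from the quantized observation matrix Y as follows.

import numpy as np

def one_bit_lmmse(Y, k, Phi, D_x, C_qt, p, beta, eta, sigma2_n):
    # Y: M x eta one-bit observations; Phi: eta x K pilots; D_x, C_qt: from (6)-(7).
    phi_bar = np.sqrt(np.pi / 2) * np.sqrt(np.diag(D_x)) * Phi[:, k]                # eq. (10)
    sigma2_w = (sigma2_n + phi_bar.T @ C_qt @ phi_bar.conj()).real / (eta * p[k])   # eq. (11)
    scale = beta[k] / (beta[k] + sigma2_w) / np.sqrt(eta * p[k])
    g_hat = scale * (Y @ phi_bar.conj())                                            # eq. (9)
    err_var = sigma2_w * beta[k] / (beta[k] + sigma2_w)                             # eq. (12)
    return g_hat, err_var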
From Theorem 1, it is apparent that in the channel estimation analysis of massive MIMO systems with one-bit ADCs,
the estimation error is directly affected not only by the inner
product of the pilot sequences, but also by their outer product
as well [14]. To get insight into the impact of the one-bit
quantization on the channel estimation, in the next corollary
we adopt the statistics-aware power control policy proposed
in [37]. Apart from its practical advantages, this policy is
especially suitable for one-bit ADCs since it avoids
near-far blockage and hence strong interference. Moreover,
this power control approach also leads to simple expressions
and provides analytical convenience for our derivation in
Section VI. Although not the focus of this paper, we note
that in general a massive MIMO system employing a mixedADC architecture will be more resilient than an all one-bit
implementation to the near-far effect and jamming. This is an
interesting topic for further study.
Fig. 2. Transmission protocol for estimation using full-resolution observations.
Corollary 1. For the case in which power control is performed, i.e., p_k = p/β_k for some fixed value p and for k ∈ K = {1, · · · , K}, the number of users is equal to the length of the pilot sequences, i.e., η = K, and the pilot matrix satisfies ΦΦ^H = I_K, we have
C_x = \left(Kp + \sigma_n^2\right) I_K = D_x   (13)
C_{q_t} = \left(1 - \frac{2}{\pi}\right) I_K,   (14)
which yields
Theorem 1. The LMMSE estimate of the k-th user channel, g_k, given the one-bit quantized observations Y is [14]
\hat{g}_k = \frac{\beta_k}{\beta_k + \sigma_{w_k}^2} \sqrt{\frac{1}{\eta p_k}}\, Y \bar{\phi}_k^*,   (9)
with \bar{\phi}_k and \sigma_{w_k}^2 as defined in (10) and (11).
In addition, q t represents quantization noise which is uncorrelated with x and its autocorrelation matrix can be derived
based on the arcsine law as [32]
C_{q_t} = \frac{2}{\pi}\arcsin\{D_x^{-\frac{1}{2}} C_x D_x^{-\frac{1}{2}}\} - \frac{2}{\pi} D_x^{-\frac{1}{2}} C_x D_x^{-\frac{1}{2}}.   (7)
Much of the existing work on massive MIMO systems with
low-resolution ADCs employs the simple additive quantization
noise model (AQNM) for their analysis [20]-[22], [25]-[30],
[39] which is valid only for low SNRs and does not capture the
correlation among the elements of q t , which turns out to be
of crucial importance in our analysis. Hence, we consider the
Bussgang decomposition instead and will show its effect on
the system performance analysis. Stacking the rows of (5) into
a matrix, the one-bit quantized observation at the BS becomes
Y = \sqrt{\frac{2}{\pi}}\, X D_x^{-\frac{1}{2}} + Q,   (8)
\sigma_{\hat{g}_k}^2 = \frac{2}{\pi}\,\frac{\beta_k}{1 + \frac{\sigma_n^2}{Kp}}   (15)
\sigma_{\varepsilon_k}^2 = \frac{\left(1 - \frac{2}{\pi}\right)\frac{Kp}{\sigma_n^2} + 1}{\frac{Kp}{\sigma_n^2} + 1}\,\beta_k.   (16)
Corollary 1 states conditions under which C_{q_t} is diagonal.
In addition, it is evident that the channel estimation suffers
from an error floor at high SNRs.
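A quick numeric check (ours; the SNR points are arbitrary) of this floor using (16): the normalized error σ²_{ε_k}/β_k tends to 1 − 2/π ≈ 0.363 as Kp/σ_n² grows.

import numpy as np

for snr_db in (-10.0, 0.0, 10.0, 30.0):
    kp = 10 ** (snr_db / 10)                        # Kp / sigma_n^2
    err = ((1 - 2 / np.pi) * kp + 1) / (kp + 1)     # eq. (16) normalized by beta_k
    print(f"Kp/sigma_n^2 = {snr_db:5.1f} dB -> sigma_eps^2/beta_k = {err:.3f}")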
B. Channel Estimation with Few Full Resolution ADCs
Channel estimation with coarse observations suffers from
large errors especially in the high SNR regime. On the
other hand, while estimating all channels using high-resolution
ADCs is desirable, the resulting power consumption burden
makes this approach practically infeasible. This motivates
the use of a mixed-ADC architecture for channel estimation
to eliminate the large estimation error caused by one-bit
quantization while keeping the power consumption penalty
at an acceptable level. In the approach described in [23],
[24], [26] , N ≪ M pairs of high-resolution ADCs are
deployed and switched between different antennas during
different transmission intervals in an approach referred to as
“round-robin” training. In this approach, the M BS antennas
are grouped into M/N sets1 . In the first training sub-interval,
users send their mutually orthogonal pilots to the BS while the
N high-resolution ADC pairs are connected to the first set of
N antennas. After receiving the pilot symbols from all users in
the η-symbol-length training sub-interval, the high-resolution
ADCs are switched to the next set of antennas and so on.
In this manner, after (M/N )η pilot transmissions (M/N subintervals), we can estimate each channel based on observations
with only high-resolution ADCs. This round-robin channel
¹We assume M/N is an integer throughout the paper.
estimation protocol is illustrated in Fig. 2 for a mixed-ADC
system with M/N = 5.
Stacking all N ×η full-resolution observations into an M ×η
matrix, X, the LMMSE estimate of the k-th user channel, g k ,
is [5]
\hat{g}_k = \frac{1}{1 + \frac{\sigma_n^2}{\eta p_k \beta_k}}\,\frac{1}{\sqrt{\eta p_k}}\, X \phi_k^*,   (17)
and the resulting variances of the channel estimate and the error are given respectively by
\sigma_{\hat{g}_k}^2 = \frac{\beta_k}{1 + \frac{\sigma_n^2}{\eta p_k \beta_k}} \quad \text{and} \quad \sigma_{\varepsilon_k}^2 = \frac{\beta_k}{1 + \frac{\eta p_k \beta_k}{\sigma_n^2}}.   (18)
Eq. (18) states that by employing only N pairs of high-resolution ADCs and by expending a larger portion of the
coherence interval for channel estimation, the channel can
be estimated with the same precision as that achieved by
conventional high-resolution ADC massive MIMO systems.
However, this comes at the high cost of repeating the training
data M/N times, which can significantly reduce the time
available for data transmission. Indeed, we will see later that in
some cases, a mixed-ADC implementation with round-robin
training achieves a lower SE than a system with all one-bit
ADCs because of the long training interval (even with the
improvements we propose below for the round-robin method).
However, we will also see that there are other situations for
which the mixed-ADC round-robin method provides a large
gain in SE. The primary goal of this paper is to elucidate
under what conditions these and other competing approaches
provide the best performance.
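For completeness, a matching sketch (ours; the argument names are hypothetical) of the round-robin full-resolution LMMSE estimate (17) and its error variance (18):

import numpy as np

def full_res_lmmse(X, k, Phi, p, beta, eta, sigma2_n):
    # X: M x eta unquantized observations gathered over the round-robin sub-intervals.
    shrink = 1.0 / (1.0 + sigma2_n / (eta * p[k] * beta[k]))
    g_hat = shrink * (X @ Phi[:, k].conj()) / np.sqrt(eta * p[k])     # eq. (17)
    err_var = beta[k] / (1.0 + eta * p[k] * beta[k] / sigma2_n)       # eq. (18)
    return g_hat, err_var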
Before analyzing the tradeoff between the gain (lower
channel estimation error) and cost (longer training period) of
the round-robin approach, in the next subsection we propose
channel estimation based on the use of both full-resolution and
one-bit data received by the BS in order to further improve the
performance of the mixed-ADC architecture with round-robin
channel estimation. To our knowledge, this approach has not
been considered in prior work on mixed-ADC massive MIMO.
Fig. 3. Transmission protocol for estimation using full-resolution/one-bit
observations.
Theorem 2. Stacking all N × η full-resolution observations into an M × η matrix, X, and all (M/N) − 1 N × η one-bit quantized observations into M × η matrices, Y_t, t ∈ T = {1, ..., M/N − 1}, the LMMSE estimate of the k-th user channel, g_k, is
\hat{g}_k = \sqrt{\frac{1}{\eta p_k}}\left( w_{\infty_k} X \phi_k^* + w_{1_k} \sum_{t=1}^{\frac{M}{N}-1} Y_t \bar{\phi}_k^* \right),   (19)
where
w_{\infty_k} = \frac{\frac{\eta p_k}{\sigma_n^2}}{\frac{1}{\beta_k} + \frac{\eta p_k}{\sigma_n^2} + \varsigma_k(p_k)}   (20)
w_{1_k} = \frac{\frac{\varsigma_k(p_k)}{\frac{M}{N}-1}}{\frac{1}{\beta_k} + \frac{\eta p_k}{\sigma_n^2} + \varsigma_k(p_k)}   (21)
\varsigma_k(p_k) = \frac{\frac{M}{N}-1}{\sigma_{w_k}^2 + \left(\frac{M}{N}-2\right)\varrho_k}   (22)
\sigma_{w_k}^2 = \frac{1}{\eta p_k}\left(\sigma_n^2 + \bar{\phi}_k^T C_{q_t} \bar{\phi}_k^*\right)   (23)
\varrho_k = \frac{1}{\eta p_k}\,\bar{\phi}_k^T \bar{C}_{q_t} \bar{\phi}_k^*   (24)
\bar{C}_{q_t} = \frac{2}{\pi}\arcsin\{\bar{D}_x^{-\frac{1}{2}} \bar{C}_x \bar{D}_x^{-\frac{1}{2}}\} - \frac{2}{\pi}\bar{D}_x^{-\frac{1}{2}} \bar{C}_x \bar{D}_x^{-\frac{1}{2}}   (25)
\bar{C}_x = \sum_{k=1}^{K} \eta p_k \beta_k \phi_k^* \phi_k^T   (26)
(25)
(26)
k=1
C. Estimation Using Joint Full-Resolution/One-Bit Observations
While channel estimation performance based on coarsely
quantized observations suffers from large errors in the high
SNR regime, it provides reasonable performance for low
SNRs. Hence, in this subsection we consider joint channel
estimation based on observations from both high-resolution
and one-bit ADCs to further improve the channel estimation
accuracy. Unlike the previous subsection in which the one-bit ADCs were not employed, here we incorporate their
coarse observations into the channel estimation procedure. The
protocol for this method is illustrated in Fig. 3 for a mixed-ADC system with M/N = 5. It can be seen that, in addition to
one set of full-resolution observations for each antenna, there
are (M/N ) − 1 sets of one-bit observations which are also
taken into account for channel estimation. The next theorem
characterizes the performance of this approach.
D̄x = diag{C̄x }.
(27)
This approach yields the following variances for the channel
estimate and the estimation error, respectively:
\sigma_{\hat{g}_k}^2 = \frac{\left(\frac{\eta p_k}{\sigma_n^2} + \varsigma_k(p_k)\right)\beta_k}{\frac{1}{\beta_k} + \frac{\eta p_k}{\sigma_n^2} + \varsigma_k(p_k)}   (28)
\sigma_{\varepsilon_k}^2 = \frac{1}{\frac{1}{\beta_k} + \frac{\eta p_k}{\sigma_n^2} + \varsigma_k(p_k)}.   (29)
Proof. See Appendix A.
(29)
Theorem 2 demonstrates the optimal approach for combining the observations from high-resolution and one-bit ADCs.
In addition, this highlights the importance of considering the
correlation among the one-bit observations in the analysis
of mixed-ADC channel estimation, something that could not
be addressed by the widely-used AQNM approach. More
precisely, it can be seen that the impact of joint highresolution/one-bit channel estimation is manifested in the variance of the channel estimation error by the term ςk (pk ). To see
this, assume that the correlation among one-bit observations in
different training sub-intervals is ignored (as would be the case
with the AQNM approach). As shown in the appendix, this is
equivalent to setting ̺k = 0 in (24). Under this assumption,
ς_k(p_k) becomes
\varsigma_{k_0}(p_k) = \frac{\frac{M}{N}-1}{\sigma_{w_k}^2} > \varsigma_k(p_k),   (30)
and thus, σ²_{ε_k} > σ²_{ε_{k_0}}, where σ²_{ε_{k_0}} denotes the estimation
error for ̺k = 0. Consequently, the AQNM model yields an
overly optimistic assessment of the channel estimation error
compared with the more accurate Bussgang analysis. We will
see below that the impact of the AQNM approximation is
significant for mixed-ADC channel estimation.
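The combination in (19)-(24) can be sketched as follows (our own code; the argument names, and the assumption that C_{q_t} and C̄_{q_t} have been precomputed from (7) and (25)-(27), are ours):

import numpy as np

def joint_lmmse(X, Ys, k, Phi, D_x, C_qt, C_qt_bar, p, beta, eta, sigma2_n):
    # X: full-resolution observations; Ys: list of M/N - 1 one-bit observation matrices Y_t.
    phi_bar = np.sqrt(np.pi / 2) * np.sqrt(np.diag(D_x)) * Phi[:, k]                # eq. (10)
    sigma2_w = (sigma2_n + phi_bar.T @ C_qt @ phi_bar.conj()).real / (eta * p[k])   # eq. (23)
    rho = (phi_bar.T @ C_qt_bar @ phi_bar.conj()).real / (eta * p[k])               # eq. (24)
    L = len(Ys)                                                                      # M/N - 1
    varsigma = L / (sigma2_w + (L - 1) * rho)                                        # eq. (22)
    denom = 1.0 / beta[k] + eta * p[k] / sigma2_n + varsigma
    w_inf = (eta * p[k] / sigma2_n) / denom                                          # eq. (20)
    w_one = (varsigma / L) / denom                                                   # eq. (21)
    combo = w_inf * (X @ Phi[:, k].conj()) + w_one * sum(Y @ phi_bar.conj() for Y in Ys)
    return np.sqrt(1.0 / (eta * p[k])) * combo                                       # eq. (19)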
The next corollary provides insight into the impact of
the system parameters on the joint high-resolution/one-bit
LMMSE estimation.
Corollary 2. For the case in which power control is performed, i.e., p_k = p/β_k for k ∈ K, the number of users is equal to the length of pilot sequences, i.e., η = K, and the pilot matrix satisfies ΦΦ^H = I_K, we have
\bar{C}_x = Kp\, I_K = \bar{D}_x,   (31)
and
\bar{C}_{q_t} = \left(1 - \frac{2}{\pi}\right) I_K,   (32)
which yields
\sigma_{\hat{g}_k}^2 = \frac{\frac{Kp}{\sigma_n^2} + \varsigma(p)}{1 + \frac{Kp}{\sigma_n^2} + \varsigma(p)}\,\beta_k \quad \text{and} \quad \sigma_{\varepsilon_k}^2 = \frac{\beta_k}{1 + \frac{Kp}{\sigma_n^2} + \varsigma(p)},   (33)
where
\varsigma(p) = \frac{\frac{M}{N}-1}{\frac{\pi}{2}\frac{\sigma_n^2}{Kp} + \left(\frac{M}{N}-1\right)\left(\frac{\pi}{2}-1\right)}.   (34)
In addition,
w_{\infty} = \frac{\frac{Kp}{\sigma_n^2}}{1 + \frac{Kp}{\sigma_n^2} + \varsigma(p)} \quad \text{and} \quad w_{1} = \frac{\frac{\varsigma(p)}{\frac{M}{N}-1}}{1 + \frac{Kp}{\sigma_n^2} + \varsigma(p)},   (35)
where w_∞ and w_1 denote the weights of the high-resolution and one-bit observations in the LMMSE estimation, respectively.
Fig. 4. Channel estimation error σ²_{ε_k}/β_k versus p/σ_n².
Corollary 2 states that in contrast to Theorem 1, where the correlation among one-bit observations within each training sub-interval can be eliminated by carefully selecting the system parameters as in Corollary 1, we cannot overcome the correlation among one-bit observations from different training sub-intervals. This phenomenon makes the addition of the one-bit observations less useful, especially in the high SNR regime. For instance, in the asymptotic case, as the SNR = p/σ_n² goes to infinity, we have
\varsigma \longrightarrow \frac{1}{\frac{\pi}{2}-1},   (36)
w_{\infty} \longrightarrow 1, \quad w_{1} \longrightarrow 0.   (37)
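A short numeric check (ours; the ratio M/N = 8 and the SNR grid are assumptions) of the limiting behavior in (36)-(37):

import numpy as np

M_over_N = 8
for snr_db in (-10.0, 0.0, 10.0, 30.0):
    rho = 10 ** (snr_db / 10)                    # Kp / sigma_n^2
    varsigma = (M_over_N - 1) / ((np.pi / 2) / rho + (M_over_N - 1) * (np.pi / 2 - 1))
    denom = 1 + rho + varsigma
    w_inf, w_one = rho / denom, (varsigma / (M_over_N - 1)) / denom
    print(f"{snr_db:5.1f} dB: varsigma={varsigma:.3f}, w_inf={w_inf:.3f}, w_1={w_one:.4f}")
# varsigma approaches 1/(pi/2 - 1) ~= 1.75, while w_inf -> 1 and w_1 -> 0 at high SNR.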
(36)
(37)
It is apparent from (36) that in the asymptotic regime ς tends to
a finite value and also is independent of M/N . Moreover, (37)
implies that the optimal approach for high SNRs is to estimate
the channel based solely on the high-resolution observations.
The error for the three channel estimation approaches in
Eqs. (12), (18), and (29) is depicted in Fig. 4 for a case with
M = 100 antennas, K = 10 users, and various numbers of
high-resolution ADCs, N and training lengths η. The label
“Joint” refers to round-robin channel estimation that includes
the one-bit observations as described in the previous section,
”Full resolution” indicates the performance achieved using a
full array of high-resolution ADCs, and “One-bit” refers to
the performance of an all-one-bit architecture. We also plot
the performance predicted for the Joint approach based on the
AQNM analysis, which ignores the correlation among the onebit observations. We see that the AQNM-based analysis yields
an overly optimistic prediction for the channel estimation
error. In particular, unlike AQNM, the more accurate Bussgang
analysis shows that channel estimation with an all-one-bit
BS actually outperforms the Joint method for low SNRs, a
critical observation in analyzing whether or not a mixed-ADC
implementation makes sense. However, we see that the mixedADC architecture eventually overcomes the error floor of the
all one-bit system for high SNRs and in such cases can reduce
the estimation error dramatically. Fig. 4 focuses on channel
estimation performance, but does not reflect the full impact
of the round-robin training on the overall system spectral
efficiency, since reducing N increases the amount of training
required by the round-robin method. This will be taken into
account when we analyze the SE in Section V.
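The trends behind Fig. 4 follow directly from the closed-form error expressions; the sketch below (ours; the SNR grid and N values are assumptions) evaluates the normalized errors σ²_{ε_k}/β_k of the one-bit (16), full-resolution (18) and joint (29) estimators under power control with M = 100 and K = 10.

import numpy as np

M, K = 100, 10
for N in (10, 20):                              # number of high-resolution ADC pairs
    for snr_db in (-15.0, -5.0, 5.0):
        rho = 10 ** (snr_db / 10)               # p / sigma_n^2
        Kp = K * rho                            # K p / sigma_n^2
        err_onebit = ((1 - 2 / np.pi) * Kp + 1) / (Kp + 1)                        # eq. (16)
        err_fullres = 1.0 / (1.0 + Kp)                                            # eq. (18)
        varsigma = (M / N - 1) / ((np.pi / 2) / Kp + (M / N - 1) * (np.pi / 2 - 1))
        err_joint = 1.0 / (1.0 + Kp + varsigma)                                   # eq. (29)
        print(f"N={N:3d}, p/sigma_n^2={snr_db:5.1f} dB: one-bit={err_onebit:.3f}, "
              f"full-res={err_fullres:.3f}, joint={err_joint:.3f}")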
IV. PRACTICAL CONSIDERATIONS
The improvement in channel estimation performance provided by the round-robin training clearly comes at the expense
of a significantly increased training overhead. For example,
consider a simple worst-case example with a 400 Hz Doppler
spread in a narrowband channel of 400 kHz bandwidth; in
this case, the coherence time is roughly 1000 symbols. For
higher bandwidths or smaller cells with lower mobility, the
coherence time can easily approach 10,000 symbols or more.
A mixed-ADC array of 128 antennas with 16 high-resolution
ADCs would require repeating the pilots 8 times, which for 20
users would amount to 160 symbols, or 16% of the coherence
time when T = 1000 symbols. This is a relatively high price
to pay, and as we will see later, in many instances the resulting
loss in SE cannot be offset by the improved channel estimate.
However, we will also see that on the other hand, there are
other situations where the opposite is true, where the round-robin method leads to significant gains in SE even taking the
training overhead into account.
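The overhead figures quoted above follow from simple arithmetic; a small sketch (ours, using the same assumed numbers) reproduces them:

doppler_hz = 400.0
bandwidth_hz = 400e3
T = bandwidth_hz / doppler_hz                  # ~1000-symbol coherence interval
M, N, K = 128, 16, 20
pilots = (M // N) * K                          # (M/N) * eta with eta = K users
print(f"T ~ {T:.0f} symbols, round-robin training = {pilots} symbols "
      f"({100 * pilots / T:.0f}% of the coherence interval)")

switch_events = M // N - 1                     # one switch per pilot repetition
print(f"{switch_events} switching events x 15 ns = {switch_events * 15} ns, "
      f"vs. a coherence time of {1e3 / doppler_hz:.1f} ms")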
Besides the extra training overhead, the round-robin method
has the disadvantage of requiring extra RF switching or
multiplexing hardware prior to the ADCs, as shown in Fig. 1.
It is unlikely that a single large M × M multiplexer would be
used for this purpose, since complete flexibility in assigning
a given high-resolution ADC to any possible antenna is not
needed. A more likely architecture would employ a bank of
smaller multiplexers that allows one high-resolution ADC to
be switched among a smaller subarray of antennas, ensuring
that each RF chain has access to high-resolution training data
during one of the round-robin intervals. Such an approach
is similar to the simplified “subarray switching” schemes
proposed for antenna selection in massive MIMO [33]-[35]. In
an interesting earlier example, a large 108 × 108 multiplexer
chipset for a local area network application was developed
in [36], composed of several 36 × 36 differential crosspoint
ASIC switches that consume less than 100 mW each, with a
bandwidth of 140 MHz and a 0 dB insertion loss.
In the 20 years since [36], RF switch technology has advanced considerably. For the example discussed above involving a 128-element array with 16 high-resolution ADCs and 112
one-bit ADCs, the multiplexing could be achieved using 16
8×8 analog switches arranged in parallel. Consider the Analog
Devices ADV3228 8 × 8 crosspoint switch as an example
of an off-the-shelf component for such an architecture2. The
ADV3228 has a 750 MHz bandwidth, a switching time of 15
ns, and a power consumption of 500 mW, which is similar
to that of an 8-bit ADC (for example, see Texas Instruments’
ADC08B200 8-bit 200 MS/s ADC3 ). Since the switches can
be implemented at a lower intermediate frequency prior to the
I-Q demodulation, only one per subarray is required, and thus
the total power consumption of the switches would be less
than half that of the ADCs.
Note that for the vast majority of the coherence time, the
switch is idle. To accommodate the round-robin training, the
2 See
http://www.analog.com/en/products/switches-multiplexers/bufferedanalog-crosspoint-switches/adv3228.html#product-overview for product
details.
3 http://www.ti.com/product/ADC08B200/technicaldocuments.
switches only need to be operated M/N − 1 times, once for every
repetition of the training data. This reduces the actual power
consumption to below the specification, and further reduces the
impact of the additional training. Short guard intervals would
need to be inserted between the training intervals to account
for the switching transients, but these will typically not impact
the SE. For the example discussed above with 128 antennas
and 8 switches, 7 switching events are required for a total
switching time of 105 ns, which is insignificant compared to
the coherence time of 2.5 ms at a 400 Hz Doppler.
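A similarly minimal sketch of the switching burden for this example; the per-event switching time is taken from the ADV3228 figures quoted above, and all names are illustrative.

```python
# Sketch: switching burden of the round-robin scheme for the example in the text
# (128 antennas, 16 high-resolution ADCs, i.e. M/N = 8 round-robin intervals).

M, N = 128, 16
switch_time_ns = 15.0            # per-event switching time (ADV3228-class switch)
coherence_time_ms = 2.5          # coherence time at a 400 Hz Doppler

events = M // N - 1              # one switch between consecutive training intervals
total_switch_ns = events * switch_time_ns
print(events, total_switch_ns)                      # 7 events, 105 ns in total
print(total_switch_ns * 1e-6 / coherence_time_ms)   # negligible fraction of the coherence time
```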
The insertion loss of the analog switches would also have
to be taken into account in an actual implementation, since
this will directly reduce the overall SNR of the received
signals. Harmonic interference due to nonlinearities in the switch is likely not an issue; for example, the specifications for a Texas Instruments switch (LMH6583) similar to the ADV3228 indicate that the power of the second and third harmonic distortions is −76 dBc. Furthermore, it has been
shown that the use of signal combining with a massive antenna
array provides significant robustness to such nonlinearities and
other hardware imperfections [7]-[12].
V. SPECTRAL EFFICIENCY
Although channel estimation with a mixed-ADC architecture using round-robin training can substantially improve
the channel estimation accuracy, it requires a longer training
interval and, therefore, leaves less room for data transmission
in each coherence interval. More precisely, (M/N )η symbol
transmissions are required for round-robin channel estimation
which could be large when the number of high-resolution
ADCs, N, is small⁴. Despite losing a portion of the coherence interval for channel estimation due to the mixed-ADC architecture, the improvement in the signal-to-quantization-interference-and-noise ratio (SQINR) can be significant owing
to more accurate channel estimation, and thus a higher rate
would be expected during this shorter data transmission period.
In this section, we study this system performance trade-off in
terms of spectral efficiency for the three mentioned channel
estimation approaches.
In the data transmission phase, all users simultaneously send
their data symbols to the BS. To begin, assume the antennas
are ordered so that the last N antennas are connected to high-resolution ADCs in this phase. A more thoughtful assignment
of the high-resolution ADCs will be considered below. From
equation (1), and based on the Bussgang decomposition, the
received signal at the BS after one-bit quantization is
$$\mathbf{y}_d = \begin{bmatrix} \sqrt{\tfrac{2}{\pi}}\,\bar{\mathbf{D}}^{-\tfrac{1}{2}} & \mathbf{0} \\ \mathbf{0} & \mathbf{I}_N \end{bmatrix}\mathbf{r} + \underbrace{\begin{bmatrix} \bar{\mathbf{q}}_d \\ \mathbf{0} \end{bmatrix}}_{\mathbf{q}_d} \quad (38)$$

$$\bar{\mathbf{D}} = \mathrm{diag}\{\mathbf{C}_r\} \quad (39)$$
4 Note that in designing a mixed-ADC system with round-robin channel
training, one should consider the ratio M/N in scaling the system instead
of just increasing the number of antennas M . In particular, increasing the
number of BS antennas requires increasing the number of high-resolution ADCs, N, as well.
$$\mathbf{C}_r = \sum_{k=1}^{K} p_k\,\bar{\mathbf{g}}_k\bar{\mathbf{g}}_k^H + \sigma_n^2\mathbf{I}_{M-N}, \quad (40)$$

where $\bar{\mathbf{g}}_k$ denotes the $M-N$ elements of $\mathbf{g}_k$ corresponding to the $M-N$ one-bit ADCs and $\bar{\mathbf{q}}_d$ is the $(M-N)\times 1$ quantization noise in the data transmission phase. It is apparent that the covariance matrix in (40) is not diagonal, which makes analytical tractability difficult. However, by adopting statistics-aware power control [37], i.e., $p_k\beta_k = p$, and assuming that the number of users is relatively large (typical for massive MIMO systems), channel hardening occurs [14], and (40) can be approximated as

$$\mathbf{C}_r \cong (Kp + \sigma_n^2)\,\mathbf{I}_{M-N} = \bar{\mathbf{D}}. \quad (41)$$
As a result, according to the arcsine law (see (7)), the covariance matrix of the quantization noise in the data transmission phase becomes $\mathbf{C}_{\bar{q}_d} \cong (1 - 2/\pi)\mathbf{I}_{M-N}$ and (38) simplifies to

$$\mathbf{y}_d \cong \mathbf{A}\left(\sum_{k=1}^{K}\sqrt{p}\,\mathbf{h}_k s_k + \mathbf{n}\right) + \mathbf{q}_d \quad (42)$$

$$\mathbf{A} = \begin{bmatrix} \alpha\mathbf{I}_{M-N} & \mathbf{0} \\ \mathbf{0} & \mathbf{I}_N \end{bmatrix},$$

where $\alpha \triangleq \sqrt{\frac{2}{\pi(Kp+\sigma_n^2)}}$.
For data detection, the BS selects a linear receiver $\mathbf{W}\in\mathbb{C}^{M\times K}$ as a function of the channel estimate. Note that the quantization model considered in (4) and (5) does not preserve the power of the input of the quantizer, since the power of the output is forced to be 1. Thus we premultiply the received signal as follows to offset this effect:

$$\hat{\mathbf{y}}_d = \mathbf{A}^{-1}\mathbf{y}_d. \quad (43)$$

By employing the linear detector $\mathbf{W}$, the resulting signal at the BS is

$$\hat{\mathbf{s}} = \mathbf{W}^H\hat{\mathbf{y}}_d. \quad (44)$$
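As a rough illustration of the receiver processing in (42)–(44) as reconstructed above, the following NumPy sketch builds the scaling matrix A, undoes it per (43), and applies a linear detector; the dimensions, the placeholder received vector, and the MRC choice W = Ĥ are all illustrative assumptions.

```python
# Sketch of the receiver-side processing in (42)-(44): undo the Bussgang scaling
# of the one-bit portion of the array and then apply a linear detector W.
import numpy as np

M, N, K, p, sigma_n2 = 100, 20, 10, 1.0, 1.0
alpha = np.sqrt(2.0 / (np.pi * (K * p + sigma_n2)))          # scalar alpha in (42)
A = np.diag(np.r_[alpha * np.ones(M - N), np.ones(N)])       # block-diagonal A

rng = np.random.default_rng(0)
H_hat = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
y_d = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # placeholder received vector

y_hat = np.linalg.solve(A, y_d)    # (43): undo the power normalisation, A^{-1} y_d
W = H_hat                          # MRC detector as one possible choice of W
s_hat = W.conj().T @ y_hat         # (44): K detected symbol estimates
```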
Thus, the $k$th element of $\hat{\mathbf{s}}$ is

$$\hat{s}_k = \sqrt{p}\,\mathbf{w}_k^H\mathbf{h}_k s_k + \sqrt{p}\sum_{i=1,\,i\neq k}^{K}\mathbf{w}_k^H\mathbf{h}_i s_i + \mathbf{w}_k^H\mathbf{n} + \mathbf{w}_k^H\mathbf{A}^{-1}\mathbf{q}_d, \quad (45)$$

where $\mathbf{w}_k$ is the $k$th column of $\mathbf{W}$. We assume the BS treats $\mathbf{w}_k^H\mathbf{h}_k$ as the gain of the desired signal and the other terms of (45) as Gaussian noise when decoding the signal⁵. Consequently, we can use the classical bounding technique of [37] to derive an approximation for the ergodic achievable SE at the $k$th user as

$$S_k = \mathcal{R}\left(\mathrm{SQINR}_k\right), \quad (46)$$

where the effective $\mathrm{SQINR}_k$ is defined by (47) at the top of the next page, and $\mathcal{R}(\theta) \triangleq (1-\eta_{\mathrm{eff}}/T)\log_2(1+\theta)$, where $\eta_{\mathrm{eff}}$ represents the training duration, which is $\eta$ and $(M/N)\eta$ for the pure one-bit and mixed-ADC architectures, respectively.
5 Note that in general, the quantization noise is not Gaussian. However, to
derive a lower bound for the SE, we assume it is Gaussian with covariance
Cq d .
A. MRC Detection
1) Random Mixed-ADC Detection: In this subsection, we
consider the case in which the high-resolution ADCs are
connected to an arbitrary set of N antennas. Denoting the
estimate of the channel by Ĥ = [ĥ1 , ..., ĥK ], setting W = Ĥ,
and following the same reasoning as in [14], the SE of the
mixed-ADC architecture with MRC detection can be derived
as
$$S_k^{\mathrm{MRC}} = \mathcal{R}\left(\frac{pM\sigma_{\hat{h}}^2}{pK + \sigma_n^2 + \frac{\left(1-\frac{2}{\pi}\right)}{\alpha^2}\left(1-\frac{N}{M}\right)}\right), \quad (48)$$

where the channel estimate variance $\sigma_{\hat{h}}^2 = \sigma_{\hat{g}_k}^2/\beta_k$ depends on the estimation approach, as given in (12), (18), and (28).
From (48), it can be observed that the gain of exploiting the
mixed-ADC architecture is manifested in the SE expressions
by two factors, channel estimation improvement by a factor of
σĥ2 , and quantization noise reduction by a factor of 1 − N/M .
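The following sketch evaluates (48) numerically for illustrative parameter values; the channel-estimate variance is treated as an input since it depends on the estimation approach in (12), (18), and (28), and the training length is assumed to be η pilots repeated M/N times.

```python
# Sketch: evaluate the MRC spectral-efficiency expression in (48).
import numpy as np

def se_mrc(p, M, N, K, sigma_n2, sigma_h2, eta, T):
    alpha2 = 2.0 / (np.pi * (K * p + sigma_n2))
    quant = (1.0 - 2.0 / np.pi) / alpha2 * (1.0 - N / M)   # residual quantization-noise term
    sqinr = p * M * sigma_h2 / (p * K + sigma_n2 + quant)
    eta_eff = (M / N) * eta                                # round-robin training length
    return (1.0 - eta_eff / T) * np.log2(1.0 + sqinr)

# Illustrative values; sigma_h2 = 0.8 is an assumed channel-estimate quality.
print(se_mrc(p=1.0, M=100, N=20, K=10, sigma_n2=1.0, sigma_h2=0.8, eta=10, T=400))
```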
2) Mixed-ADC Detection with Antenna Selection: Having
an accurate channel estimate can help us to employ the
N high-resolution ADCs in an intelligent manner to further
improve the performance of the mixed-ADC architecture. A
careful look at the SQINR expression in (47) reveals that
the effect of one-bit quantization on the SE is manifested by
the last term of the denominator. Hence, one can maximize
the SE by minimizing this term through smart use of the N
high-resolution ADCs. We refer to this approach as Mixed-ADC with Antenna Selection. We consider an antenna selection scheme suggested by the SQINR expression in (47). In this approach, the $N$ high-resolution ADCs are connected to the antennas corresponding to the rows of $\hat{\mathbf{H}}$ with the largest energy, i.e., $\sum_{k=1}^{K}|\hat{h}_{mk}|^2$. Besides the numerical evaluation in Section VI,
in Theorem 3 we derive a bound for the SE achieved by MRC
detection with antenna selection.
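A minimal NumPy sketch of this selection rule; the function name and dimensions are illustrative.

```python
# Sketch of the antenna-selection rule described above: connect the N
# high-resolution ADCs to the N rows of the channel estimate H_hat with the
# largest energy sum_k |h_mk|^2.
import numpy as np

def select_antennas(H_hat, N):
    """Return indices of the N rows of H_hat with the largest energy."""
    row_energy = np.sum(np.abs(H_hat) ** 2, axis=1)      # E_m for each antenna m
    return np.argsort(row_energy)[-N:]                   # indices of the N largest

rng = np.random.default_rng(1)
H_hat = (rng.standard_normal((100, 10)) + 1j * rng.standard_normal((100, 10))) / np.sqrt(2)
high_res_idx = select_antennas(H_hat, N=20)
```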
Theorem 3. The spectral efficiency of the mixed-ADC system
with antenna selection and an MRC receiver is lower bounded
by

$$\bar{S}_k^{\mathrm{MRC}} = \mathcal{R}\left(\frac{pM\sigma_{\hat{h}}^2}{pK + \sigma_n^2 + \frac{\left(1-\frac{2}{\pi}\right)}{MK\alpha^2}\sum_{m=1}^{M-N}\chi_m}\right), \quad (49)$$
where χm is defined at the top of the next page, and FA
denotes the Lauricella function of type A [45].
Proof. See Appendix B.
The lower bound (49) explicitly reflects the benefit of
antenna selection in the data transmission phase. By comparing (49) with (48), it is evident that antenna selection has improved the SE by replacing $1-N/M$ with $\frac{1}{MK}\sum_{m=1}^{M-N}\chi_m$. In
Section VI we illustrate how antenna selection improves SE
for different SNRs. Note that Theorem 3 assumes the ability
to make an arbitrary assignment of the high-resolution ADCs
to different RF chains, which may not be possible if the ADC
multiplexing is implemented by a bank of subarray switches.
In the numerical results presented later, we show that this does
not lead to a significant degradation in performance.
$$\mathrm{SQINR}_k = \frac{p\left|\mathbb{E}\left\{\mathbf{w}_k^H\mathbf{h}_k\right\}\right|^2}{p\sum_{i=1}^{K}\mathbb{E}\left\{\left|\mathbf{w}_k^H\mathbf{h}_i\right|^2\right\} - p\left|\mathbb{E}\left\{\mathbf{w}_k^H\mathbf{h}_k\right\}\right|^2 + \sigma_n^2\,\mathbb{E}\left\{\|\mathbf{w}_k\|^2\right\} + \alpha^{-2}\,\mathbb{E}\left\{\mathbf{w}_k^H\mathbf{C}_{q_d}\mathbf{w}_k\right\}} \quad (47)$$

$$\chi_m = \frac{M!}{(m-1)!\,(M-m)!}\sum_{\ell=0}^{M-m}\binom{M-m}{\ell}(-1)^{\ell}\,(\Gamma(K))^{-m-\ell}K^{1-m-\ell}\,\Gamma\!\left(1+K(m+\ell)\right)\times F_A^{(m+\ell-1)}\!\left(1+K(m+\ell);K,\cdots,K;K+1,\cdots,K+1;-1,\cdots,-1\right) \quad (50)$$
B. ZF Detection
In this section, we study the SE of the mixed-ADC architecture with ZF detection. To design a mixed-ADC adapted ZF
detector, we re-write the last two terms of the denominator of
(47) as follows:
H
2
−2
H
(51)
wk σn IM + α Cqd w k = W Cneff W ,
kk
where Cneff = σn2 IM +α−2 Cqd . Accordingly, the ZF detector
for the mixed-ADC architecture can be written as
−1
H −1
.
(52)
Ĥ
C
Ĥ
W = C−1
Ĥ
neff
neff
Plugging (52) into (47) yields (53) at the top of the next page.
Similar to the MRC case, the SQINR in (53) suggests the
same antenna selection approach for ZF detection. In general,
calculating the expected values in (53) is not tractable, either for arbitrary-antenna mixed-ADC detection or for mixed-ADC with antenna selection. Hence, we numerically evaluate the
performance of mixed-ADC with ZF detection in the next
section.
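The detector in (52) can be formed directly once the effective noise covariance is available. The sketch below assumes the arcsine-law diagonal approximation for the quantization-noise covariance of the one-bit antennas, and all numerical values are illustrative.

```python
# Sketch of the mixed-ADC ZF detector in (52): a noise-whitened ZF filter built
# from the effective noise covariance C_neff = sigma_n^2 I + alpha^{-2} C_qd.
import numpy as np

def zf_mixed_adc(H_hat, N, sigma_n2, alpha):
    M = H_hat.shape[0]
    c_qd = np.r_[(1.0 - 2.0 / np.pi) * np.ones(M - N), np.zeros(N)]  # one-bit quantization noise
    C_neff = np.diag(sigma_n2 + c_qd / alpha**2)                     # effective noise covariance
    Cinv_H = np.linalg.solve(C_neff, H_hat)                          # C_neff^{-1} H_hat
    return Cinv_H @ np.linalg.inv(H_hat.conj().T @ Cinv_H)           # W as in (52)

rng = np.random.default_rng(2)
H_hat = (rng.standard_normal((100, 10)) + 1j * rng.standard_normal((100, 10))) / np.sqrt(2)
W = zf_mixed_adc(H_hat, N=20, sigma_n2=1.0, alpha=0.25)
```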
C. Massive MIMO with Uniform ADC Resolution
Contrary to the mixed-ADC architecture where the ADC
comparators are concentrated in a few antennas, uniformly
spreading the comparators over the array is an alternative
approach [19], [20], [21], [41], [44]. In this subsection, we provide the SE expressions for such systems. These expressions
will be used in the next section for performance comparisons
with the mixed-ADC architecture.
The SE for the case of all one-bit ADCs was derived in
[14] using the Bussgang decomposition. For ADC resolutions
of 2 bits or higher, the AQNM model is sufficiently accurate.
Using AQNM and following the same reasoning as in [21],
[41], [44], the SE of a massive MIMO system with uniform
resolution ADCs can be derived as
$$\tilde{S}_k^{\mathrm{MRC}} = \mathcal{R}\left(\frac{pM\tilde{\sigma}_{\hat{h}}^2}{pK + \sigma_n^2 + \frac{(1-\alpha_0)}{\alpha_0^2}\left(p\tilde{\sigma}_{\hat{h}}^2 K + \sigma_n^2\right)}\right) \quad (54)$$

$$\tilde{S}_k^{\mathrm{ZF}} = \mathcal{R}\left(\frac{p(M-K)\tilde{\sigma}_{\hat{h}}^2}{pK\left(1-\tilde{\sigma}_{\hat{h}}^2\right) + \sigma_n^2 + \frac{(M-K)\tilde{\sigma}_{\hat{h}}^2}{\alpha_0^2}\,\mathbb{E}\left\{\mathbf{w}_k^H\mathbf{C}_0\mathbf{w}_k\right\}}\right), \quad (55)$$

for MRC and ZF detection, respectively. In (54) and (55),

$$\tilde{\sigma}_{\hat{h}}^2 = \frac{\alpha_0^2\eta p}{\alpha_0^2\eta p + \alpha_0^2\sigma_n^2 + \alpha_0(1-\alpha_0)\left(pK+\sigma_n^2\right)}, \quad (56)$$

$\alpha_0$ is a scalar depending on the ADC resolution that can be found in Table I of [21], $\mathbf{w}_k$ is the $k$th column of $\mathbf{W} = \hat{\mathbf{H}}\left(\hat{\mathbf{H}}^H\hat{\mathbf{H}}\right)^{-1}$, and $\mathbf{C}_0$ denotes the covariance matrix of the quantization noise based on the AQNM model [21]. The detailed calculation of $\mathbb{E}\left\{\mathbf{w}_k^H\mathbf{C}_0\mathbf{w}_k\right\}$ in (55) is provided in [44], which we do not include here for the sake of brevity.
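For reference, a small sketch evaluating (56) as reconstructed above; the value of α0 for 2-bit ADCs is an approximate figure from the AQNM literature, and the remaining parameters are illustrative.

```python
# Sketch: channel-estimate quality (56) for a uniform-resolution array under the
# AQNM model; alpha0 is the resolution-dependent scalar tabulated in [21]
# (alpha0 -> 1 recovers unquantized training).

def sigma_h2_uniform(alpha0, eta, p, K, sigma_n2):
    num = alpha0**2 * eta * p
    den = alpha0**2 * eta * p + alpha0**2 * sigma_n2 \
          + alpha0 * (1.0 - alpha0) * (p * K + sigma_n2)
    return num / den

# Example: 2-bit ADCs (alpha0 roughly 0.88 per the AQNM table), eta = 10 pilots.
print(sigma_h2_uniform(alpha0=0.88, eta=10, p=1.0, K=10, sigma_n2=1.0))
```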
VI. NUMERICAL RESULTS
By substituting from (12), (18), and (28) into (48), (49),
and (53), we can evaluate the performance of mixed-ADC
architectures for different system settings. We consider a
system with M = 100 antennas at the BS, and K = 10 users.
Also, we assume the power control approach of [37] is used, so that $p_k\beta_k = p$ for all $k$. We also assume that an optimal resource allocation has been performed [41], [42] such that the training length $\eta_{\mathrm{eff}}$, the transmission power during the training phase, $p_t$, and during the data transmission phase, $p_d$, are optimized under a power constraint $\eta_{\mathrm{eff}}p_t + (T-\eta_{\mathrm{eff}})p_d = P_{\mathrm{ave}}T$. In the following figures, the SNR is defined as $\mathrm{SNR} \triangleq P_{\mathrm{ave}}/\sigma_n^2$.
Fig. 5 illustrates the optimal weights for combining high-resolution and one-bit observations for the joint high-resolution/one-bit LMMSE channel estimation. Interestingly, it can be seen that when M/N is large, the one-bit observations are emphasized in the low SNR regime relative to the high-resolution observations. In addition, in contrast to the weights for the high-resolution observations, which rise monotonically with increasing SNR, the weight for the one-bit observations grows at first and then decreases to zero.
To study the performance improvement due to joint channel estimation and antenna selection in mixed-ADC massive MIMO, the sum SE for the MRC and ZF detectors for a system with coherence interval T = 400 symbols and N = 20 high-resolution ADCs is depicted in Fig. 6 and Fig. 7, respectively.
$$\mathrm{SQINR}_k^{\mathrm{ZF}} = \frac{p}{pK\!\left(1-\sigma_{\hat{h}}^2\right)\mathbb{E}\!\left\{\!\left[\left(\hat{\mathbf{H}}^H\mathbf{C}_{n_{\mathrm{eff}}}^{-1}\hat{\mathbf{H}}\right)^{-1}\!\hat{\mathbf{H}}^H\mathbf{C}_{n_{\mathrm{eff}}}^{-2}\hat{\mathbf{H}}\left(\hat{\mathbf{H}}^H\mathbf{C}_{n_{\mathrm{eff}}}^{-1}\hat{\mathbf{H}}\right)^{-1}\right]_{kk}\right\} + \mathbb{E}\!\left\{\!\left[\left(\hat{\mathbf{H}}^H\mathbf{C}_{n_{\mathrm{eff}}}^{-1}\hat{\mathbf{H}}\right)^{-1}\right]_{kk}\right\}} \quad (53)$$
Fig. 5. Weights used in the LMMSE channel estimator for high-resolution and one-bit observations.

Fig. 6. Sum SE for MRC detection versus SNR for M = 100, N = 20, and T = 400.
In these and subsequent figures, “Joint with AS” indicates
that the channel estimation was performed with both one-bit and high-resolution ADCs and that antenna selection (AS)
was used for data detection, “Joint without AS” represents
the same case without antenna selection, “Joint Subarray AS”
means that the antenna selection only occurred within each
M/N -element subarray (one high-resolution ADC assigned
to the strongest channel within each subarray), and "Not Joint without AS" represents the case in which the channel is estimated based on only high-resolution observations and no antenna selection is employed.

Fig. 7. Sum SE for ZF detection versus SNR for M = 100, N = 20, and T = 400.

For both MRC and ZF, it can
be seen that antenna selection slightly improves the SE for
high SNRs, where the channel estimation is most accurate.
At low SNR, we see that joint channel estimation provides
a gain from the use of one-bit ADCs, which provide useful
information at these SNRs. We also see that the constrained
AS required when the switching is only performed within
subarrays provides nearly identical performance to the case
where arbitrary AS is allowed.
Note that the main reason for the small gain for antenna
selection is due to the fact that, with multiple users, selecting
a given antenna does not benefit all users simultaneously, and
the strong users responsible for a given antenna being selected
will in general be different for different antennas. Thus, the
improvement due to increased signal-to-noise ratio for some
users is somewhat offset by the fact that other users may
experience a lower SNR on those same antennas. We would
see a much larger benefit for antenna selection if only a single
user were present.
Figs. 8 and 9 provide a comparison among a mixed-ADC
massive MIMO system with joint channel estimation and
antenna selection, an all-one-bit architecture (“One-bit”), and a
mixed-ADC system without round-robin training, for which the high-resolution ADCs are connected to a fixed set of antennas without ADC switching or antenna selection ("Non-round-robin") [27].

Fig. 8. Sum SE for MRC detection versus SNR for M = 100, N = 20, 10, and T = 400, 1000.

Since mixed-ADC channel estimation improves the channel estimation accuracy by expending a larger portion
of the coherence interval for training, its benefit is directly
related to the length of the coherence interval. For MRC
detection, when T = 400, the mixed-ADC architecture performs better than the all-one-bit architecture for N = 20,
but when N = 10 the all-one-bit architecture is better due
to the larger training overhead incurred when N is smaller.
However, for T = 1000, mixed-ADC outperforms the all-one-bit architecture at high SNRs for both N = 10, 20,
while the all-one-bit case is still better for N = 10 at low
SNRs. Round-robin training provides better SE performance
at high SNR when N = 20 compared to the case without
antenna switching, especially for the larger coherence interval.
However, for other cases, the round-robin training overhead
significantly reduces the SE, especially for N = 10 and the
shorter coherence interval.
For ZF detection, we see that the mixed-ADC architectures
can provide very large gains in SE compared to the one-bit
case at high SNRs, regardless of T . For low SNRs, there is
little to no improvement. These cases still do not show a
significant benefit for round-robin training compared with a
fixed ADC assignment; only when N = 20 and T = 1000 do
we see a slight improvement.
For N = 20, Figs. 10 and 11 show how the coherence interval T impacts the effectiveness of the mixed-ADC architecture for MRC and ZF detectors, respectively.
For mixed-ADC MRC detection, it is apparent that the best
choice among the three architectures (all one-bit, mixed-ADC
with and without round-robin training) depends on the SNR
operating point and the length of the coherence interval. The
advantage of round-robin training becomes apparent for long
coherence intervals, where the increased training length has a
smaller impact. The gain for round-robin training is greatest
at higher SNRs. For shorter coherence intervals, mixed ADC
with fixed antenna/ADC assignments provides the best SE,
with the largest gains again coming at higher SNRs. For this value of N, the all-one-bit system generally has the lowest SE, although the difference is not large for MRC.

Fig. 9. Sum SE for ZF detection versus SNR for M = 100, N = 20, 10, and T = 400, 1000.

Fig. 10. Sum SE for MRC detection versus T for M = 100, N = 20, and SNR = −10, 0, 10 dB.
The next example investigates the impact of distributing the
resolution (i.e., the comparators of the ADCs) across the array
with different numbers of antennas. If we assume that the
“high-resolution” ADCs consist of 5 bits [43], a mixed-ADC
architecture with N = 20 high-resolution and M − N = 80
one-bit ADCs will have 180 total comparators. Figs. 12 and 13
illustrate the SE achieved by distributing the 180 comparators
across arrays of different length for MRC and ZF detection,
respectively. In these figures, "Joint with AS" and "Non-round-robin" refer to mixed-ADC architectures with N = 20 5-bit ADCs and M − N = 80 one-bit ADCs, "One-bit" corresponds to M = 180 antennas with one-bit ADCs, and "Multi-bit" indicates a system with either M = 90 2-bit ADCs or M = 60 3-bit ADCs.

Fig. 11. Sum SE for ZF detection versus T for M = 100, N = 20, and SNR = −10, 0, 10 dB.

Fig. 12. Sum SE for MRC detection versus SNR for 180 comparators and T = 400, 1000.

Fig. 13. Sum SE for ZF detection versus SNR for 180 comparators and T = 400, 1000.

Fig. 14. Sum SE for MRC detection versus N for SNR = −10, 0, 10 dB and T = 1000.

As we see in the figures, it can be inferred that
for MRC detection, which is interference limited, it is better to
have a larger number of antennas with lower-resolution ADCs
instead of equipping the BS with fewer antennas and high
resolution ADCs. This is consistent with the results of [30],
[39], and is due to the fact that a larger number of antennas
helps the system to more effectively cancel the interference.
On the other hand, for ZF detection, which is noise limited, the use of high-resolution ADCs avoids the additional quantization noise imposed by the low-resolution ADCs and is more beneficial than having a larger number of antennas with low-resolution ADCs at high SNR.
Finally, Figs. 14 and 15 show the impact of the number of
high-resolution ADCs in a mixed-ADC system with M = 100
antennas, K = 10 users, and various numbers N of high-resolution ADCs, where N = 100 denotes the all-high-resolution system. It is apparent that with a large enough
coherence interval and a sufficient number of high-resolution
ADCs, the mixed-ADC implementation with joint round-robin
channel estimation and antenna selection outperforms the
all-one-bit architecture and mixed-ADC without round-robin
training. The gains are greatest when ZF detection is used and
the SNR is high, but such gains must be weighed against the
increased power consumption and hardware complexity.
Fig. 15. Sum SE for ZF detection versus N for SNR = −10, 0, 10 dB and T = 1000.
VII. CONCLUSION
We studied the spectral efficiency of mixed-ADC massive
MIMO systems with either MRC or ZF detection. We showed
that properly accounting for the impact of the quantized
receivers using the Bussgang decomposition is important for
obtaining an accurate analysis of the SE. We introduced a joint
channel estimation approach to leverage both high-resolution
ADCs and one-bit ADCs and our analytical and numerical
results confirmed the benefit of joint channel estimation for
low SNRs.
Mixed-ADC detection with MRC and ZF detectors and
antenna selection were also studied. Analytical expressions
were derived for MRC detection and a numerical performance
analysis was performed for ZF detection. It was shown that
antenna selection provides a slight advantage for high SNRs
while this advantage tends to disappear for low SNRs.
We showed that the SNR, the number of high-resolution
ADCs and the length of the coherence interval play a pivotal
role in determining the performance of mixed-ADC systems.
We showed that, in general, mixed-ADC architectures will
have the greatest benefit compared to implementations with all
low-resolution ADCs when ZF detection is used and the SNR
is relatively high. In such cases, the gain of the mixed-ADC
approach can be substantial. Gains are also possible for MRC,
but they are not as significant, and require larger numbers of
high-resolution ADCs to see a benefit compared with the ZF
case. The more complicated mixed-ADC approach based on
ADC switching and round-robin training can achieve the best
performance in some cases, particularly when the coherence
interval is long and more high-resolution ADCs are available
to reduce the number of training interval repetitions. Otherwise, a mixed-ADC implementation without ADC switching
and extra training is preferred.
APPENDIX

A. Proof of Theorem 2

From (2), the observations from the high-resolution ADCs can be written as

$$\mathbf{v}(0) = \sqrt{\frac{1}{\eta p_k}}\,\mathbf{X}\boldsymbol{\phi}_k^* = \mathbf{g}_k + \tilde{\mathbf{n}}(0), \quad (57)$$

where $\tilde{\mathbf{n}}(0)\sim\mathcal{CN}\!\left(0,\frac{\sigma_n^2}{\eta p_k}\mathbf{I}_M\right)$. In addition, from (8), the observations from the one-bit ADCs become

$$\mathbf{v}(t) = \sqrt{\frac{1}{\eta p_k}}\,\mathbf{Y}_t\bar{\boldsymbol{\phi}}_k^* = \mathbf{g}_k + \tilde{\mathbf{n}}(t) + \tilde{\mathbf{q}}(t), \quad t\in\mathcal{T}, \quad (58)$$

where $\tilde{\mathbf{n}}(t)\sim\mathcal{CN}\!\left(0,\frac{\sigma_n^2}{\eta p_k}\mathbf{I}_M\right)$ is independent of $\tilde{\mathbf{n}}(t')$ for $t\neq t'$, and $\tilde{\mathbf{q}}(t) = \sqrt{\frac{1}{\eta p_k}}\,\mathbf{Q}(t)\bar{\boldsymbol{\phi}}_k^*$. Since the elements of $\mathbf{v}(t)$ are independent, we can estimate the $m$th channel coefficient $g_{mk}$ separately. Therefore, stacking all the observations in a vector, we can write
$$\underbrace{\begin{bmatrix} v_m(0) \\ v_m(1) \\ \vdots \\ v_m\!\left(\tfrac{M}{N}-1\right) \end{bmatrix}}_{\mathbf{v}} = \underbrace{\begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}}_{\mathbf{1}_{M/N}} g_{mk} + \underbrace{\begin{bmatrix} \tilde{n}_m(0) \\ \tilde{n}_m(1) + \tilde{q}_m(1) \\ \vdots \\ \tilde{n}_m\!\left(\tfrac{M}{N}-1\right) + \tilde{q}_m\!\left(\tfrac{M}{N}-1\right) \end{bmatrix}}_{\mathbf{u}} \quad (59)$$
As a result, the LMMSE estimate of the $m$th channel coefficient for the $k$th user is [40]

$$\hat{g}_{mk} = \left(\frac{1}{\beta_k} + \mathbf{1}_{M/N}^T\mathbf{C}_u^{-1}\mathbf{1}_{M/N}\right)^{-1}\mathbf{1}_{M/N}^T\mathbf{C}_u^{-1}\mathbf{v}. \quad (60)$$
βk
In Eq. (60), Cu denotes the covariance matrix of u which is
a block diagonal matrix of the form
σ2
n
0
...
0
ηpk
" 2
#
0
2
σn
σw
. . . ̺k
0
k
= ηpk
,
(61)
Cu = .
..
..
..
..
0 S
.
.
.
0
̺k
...
2
σw
k
where

$$\varrho_k = \mathbb{E}\left\{\left(\tilde{n}_m(t) + \tilde{q}_m(t)\right)\left(\tilde{n}_m(t') + \tilde{q}_m(t')\right)^*\right\}, \quad t\neq t', \quad (62)$$

can be easily calculated with the aid of the Bussgang decomposition and the arcsine law, as in (24). Substituting (61) into (60), we have
$$\hat{g}_{mk} = \left(\frac{1}{\beta_k} + \frac{\eta p_k}{\sigma_n^2} + \mathbf{1}_{\frac{M}{N}-1}^T\mathbf{S}^{-1}\mathbf{1}_{\frac{M}{N}-1}\right)^{-1}\left[\frac{\eta p_k}{\sigma_n^2},\;\; \mathbf{1}_{\frac{M}{N}-1}^T\mathbf{S}^{-1}\right]\mathbf{v}. \quad (63)$$
To calculate the inverse of the matrix $\mathbf{S}$, we re-write it as

$$\mathbf{S} = \left(\sigma_{w_k}^2 - \varrho_k\right)\mathbf{I}_{\frac{M}{N}-1} + \varrho_k\,\mathbf{1}_{\frac{M}{N}-1}\mathbf{1}_{\frac{M}{N}-1}^T, \quad (64)$$
and use Woodbury's matrix identity:

$$\mathbf{S}^{-1} = \frac{1}{\sigma_{w_k}^2 - \varrho_k}\left[\mathbf{I}_{\frac{M}{N}-1} - \left(\frac{\sigma_{w_k}^2 - \varrho_k}{\varrho_k} + \frac{M}{N} - 1\right)^{-1}\mathbf{1}_{\frac{M}{N}-1}\mathbf{1}_{\frac{M}{N}-1}^T\right], \quad (65)$$

which yields

$$\mathbf{1}_{\frac{M}{N}-1}^T\mathbf{S}^{-1} = \frac{1}{\sigma_{w_k}^2 + \left(\frac{M}{N}-2\right)\varrho_k}\,\mathbf{1}_{\frac{M}{N}-1}^T, \quad (66)$$

$$\mathbf{1}_{\frac{M}{N}-1}^T\mathbf{S}^{-1}\mathbf{1}_{\frac{M}{N}-1} = \frac{\frac{M}{N}-1}{\sigma_{w_k}^2 + \left(\frac{M}{N}-2\right)\varrho_k}. \quad (67)$$

Substituting (66) and (67) into (63) completes the proof.
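As a quick numerical sanity check of the closed-form expressions (65)–(67) as reconstructed above, the following sketch uses illustrative values of the variance and correlation parameters.

```python
# Sketch: verify (66)-(67) for the inverse of S = (sigma_w^2 - rho) I + rho 1 1^T.
import numpy as np

n = 7                      # corresponds to M/N - 1 one-bit observations
sigma_w2, rho = 1.3, 0.4   # illustrative values with sigma_w2 > rho
ones = np.ones(n)

S = (sigma_w2 - rho) * np.eye(n) + rho * np.outer(ones, ones)
S_inv = np.linalg.inv(S)

# (66): 1^T S^{-1} = 1^T / (sigma_w2 + (n - 1) rho)
print(np.allclose(ones @ S_inv, ones / (sigma_w2 + (n - 1) * rho)))
# (67): 1^T S^{-1} 1 = n / (sigma_w2 + (n - 1) rho)
print(np.isclose(ones @ S_inv @ ones, n / (sigma_w2 + (n - 1) * rho)))
```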
B. Proof of Theorem 3
Denote the energy of the $m$th row, $m\in\mathcal{M}=\{1,\ldots,M\}$, of $\hat{\mathbf{H}}$ by $E_m$, i.e.,

$$E_m \triangleq \sum_{k=1}^{K}\left|\hat{h}_{mk}\right|^2. \quad (68)$$

To perform antenna selection, we must connect the $N$ high-resolution ADCs to the antennas corresponding to the largest $E_m$. Suppose that the indices of the $N$ antennas to which the high-resolution ADCs are connected are contained in the set $\mathcal{N}$. Hence, we have
$$\sum_{k=1}^{K}\mathbb{E}\left\{\hat{\mathbf{h}}_k^H\mathbf{C}_{q_d}\hat{\mathbf{h}}_k\right\} = K\,\mathbb{E}\left\{\hat{\mathbf{h}}_k^H\mathbf{C}_{q_d}\hat{\mathbf{h}}_k\right\} = \left(1-\frac{2}{\pi}\right)\sum_{m\in\mathcal{M}\setminus\mathcal{N}}\mathbb{E}\{E_m\}. \quad (69)$$
Eq. (69) provides a criterion for connecting the $N$ high-resolution ADCs in the data transmission phase. In fact, it states that, for the MRC receiver, the expected value in (69) will be minimized if the high-resolution ADCs are connected to the antennas corresponding to the largest $E_m$. Denote $E_{(m)}$ as the $m$th smallest value of $E_m$, i.e.,

$$E_{(1)} \leq E_{(2)} \leq \cdots \leq E_{(M)}.$$

Hence, $E_{(m)}$ is the $m$th order statistic, and assuming that the $E_m$ are statistically independent and identically distributed, we have [46]
$$\mathbb{E}\{E_{(m)}\} = M\binom{M-1}{m-1}\int_{-\infty}^{\infty}x\,[F(x)]^{m-1}\,[1-F(x)]^{M-m}\,dF(x), \quad (70)$$
where x is the realization of E(m) and F (x) is the cumulative distribution function of Em . For the case that we have
considered, where the channel coefficients are i.i.d. Rayleigh
distributed, the Em are independent Gamma random variables
with
$$F(x) = \gamma\!\left(\frac{x}{\sigma_{\hat{h}}^2}, K\right), \quad (71)$$

where $\gamma(\cdot,\cdot)$ denotes the incomplete Gamma function. From [47], the integral (70) can be calculated in closed form for Gamma random variables as

$$\mathbb{E}\{E_{(m)}\} = \sigma_{\hat{h}}^2\,\chi_m. \quad (72)$$

This is in contrast to the unordered case, where $\mathbb{E}\{E_m\} = K\sigma_{\hat{h}}^2$. As a result,

$$\min\;\mathbb{E}\left\{\hat{\mathbf{h}}_k^H\mathbf{C}_{q_d}\hat{\mathbf{h}}_k\right\} = \left(1-\frac{2}{\pi}\right)\frac{\sigma_{\hat{h}}^2}{K}\sum_{m=1}^{M-N}\chi_m. \quad (73)$$

The remaining terms in (47) can be calculated similarly to the case where the high-resolution ADCs are connected to arbitrary antennas. Plugging these terms and (73) into (47) and performing some algebraic manipulation results in (49).

REFERENCES
[1] H. Pirzadeh, and A. Swindlehurst, “Analysis of MRC for Mixed-ADC
Massive MIMO,” in Proc. IEEE Int. Workshop Comput. Adv. Multi-Sensor
Adaptive Process., 2017.
[2] T. L. Marzetta, “Noncooperative cellular wireless with unlimited numbers
of base station antennas,” IEEE Trans. Wireless Commun., vol. 9, no. 11,
pp. 3590-3600, Nov. 2010.
[3] L. Lu, G. Y. Li, A. Swindlehurst, A. Ashikhmin, and R. Zhang, “An
overview of massive MIMO: Benefits and challenges,” IEEE J. Sel.
Topics in Signal Process., vol. 8, no. 5, pp. 742-758, Oct. 2014.
[4] E. G. Larsson, O. Edfors, F. Tufvesson, and T. L. Marzetta, “Massive
MIMO for next generation wireless systems,” IEEE Commun. Mag., vol.
52, no. 2, pp. 186-195, Feb. 2014.
[5] H. Q. Ngo, E. G. Larsson, and T. L. Marzetta, “Energy and spectral efficiency of very large multiuser MIMO systems,” IEEE Trans. Commun.,
vol. 61, no. 4, pp. 1436-1449, Apr. 2013.
[6] H. Yang and T. L. Marzetta, “Performance of conjugate and zero-forcing
beamforming in large-scale antenna systems,” IEEE J. Sel. Areas in
Commun., vol. 31, no. 2, pp. 172-179, Feb. 2013.
[7] E. Björnson, J. Hoydis, M. Kountouris, and M. Debbah, “Massive
MIMO systems with non-ideal hardware: Energy efficiency, estimation,
and capacity limits,” IEEE Trans. Inf. Theory, vol. 60, no. 11, pp. 7112-7139, Nov. 2014.
[8] E. Björnson, M. Matthaiou, and M. Debbah, “Massive MIMO with nonideal arbitrary arrays: Hardware scaling laws and circuit-aware design,”
IEEE Trans. Wireless Commun., vol. 14, no. 8, pp. 4353-4368, Aug. 2015.
[9] C. Mollén, E. Larsson and T. Eriksson, “Waveforms for the massive
MIMO downlink: Amplifier efficiency, distortion, and performance,”
IEEE Trans. Commun., vol. 64, no. 12, pp. 5050-5063, Dec. 2016.
[10] C. Mollén, U. Gustavsson, T. Eriksson, and E. Larsson, “Spatial
characteristics of distortion radiated from antenna arrays with transceiver
nonlinearities,” Arxiv preprint, arXiv:1711.02439.
[11] C. Mollén, E. Larsson, U. Gustavsson, T. Eriksson, and R. Heath
Jr., “Out-of-Band radiation from large antenna arrays,” Arxiv preprint,
arXiv:1611.01359.
[12] C. Mollén, U. Gustavsson, T. Eriksson, and E. Larsson, “Impact of
spatial filtering on distortion from low-noise amplifiers in massive MIMO
base stations,” Arxiv preprint, arXiv:1712.09612, submitted to IEEE
Trans. Commun..
[13] Q. Bai and J. A. Nossek, “Energy efficiency maximization for 5G multiantenna receivers,” Trans. Emerging Telecommun. Technol., vol. 26, no.
1, pp. 3-14, 2015.
[14] Y. Li, C. Tao, L. Liu, A. Mezghani, G. Seco-Granados, and A. Swindlehurst, “Channel estimation and performance analysis of one-bit massive
MIMO systems,” IEEE Trans. Signal Process., vol. 65, no. 15, pp. 4075-4089, May 2017.
[15] C. Mollén, J. Choi, E. G. Larsson, and R. W. Heath, “Uplink
performance of wideband massive MIMO with one-bit ADCs,” IEEE
Trans. Wireless Commun., vol. 16, no. 1, pp. 87-100, Jan. 2017.
[16] S. Jacobsson, G. Durisi, M. Coldrey, U. Gustavsson, and C. Studer,
“Throughput analysis of massive MIMO uplink with low-resolution
ADCs,” IEEE Trans. Wireless Commun., vol. 16, no. 6, pp. 4038-4051,
June 2017.
[17] C. Studer and G. Durisi, “Quantized massive MU-MIMO-OFDM
uplink,” IEEE Trans. Commun., vol. 64, no. 6, pp. 2387–2399, June
2016.
[18] J. Mo and R. W. Heath, “Capacity analysis of one-bit quantized MIMO
systems with transmitter channel state information,” IEEE Trans. Signal
Process., vol. 63, no. 20, pp. 5498–5512, Oct. 2015.
[19] M. Sarajlić, L. Liu, and O. Edfors, “When are low resolution ADCs
energy efficient in massive MIMO?,” IEEE Access, vol. 5, pp. 14837-14853, July 2017.
[20] D. Verenzuela, E. Björnson, and M. Matthaiou, “Hardware design and
optimal ADC resolution for uplink massive MIMO systems,” in IEEE
Sensor Array and Multichannel Signal Processing Workshop (SAM), Rio
de Janeiro, Brazil, July 2016.
[21] L. Fan, S. Jin, C. Wen, and V. Zhang, “Uplink achievable rate for
massive MIMO systems with low-resolution ADC,” IEEE Commun. Lett.,
vol. 19, no. 12, pp. 2186-2189, Dec. 2015.
[22] J. Zhang, L. Dai, S. Sun, and Z. Wang, “On the spectral efficiency
of massive MIMO systems with low-resolution ADCs,” IEEE Commun.
Lett., vol. 20, no. 5, pp. 842-845, May. 2016.
[23] N. Liang, W. Zhang, “Mixed-ADC massive MIMO,” IEEE J. Sel. Areas
in Commun., vol. 34, no. 4, pp. 983-997, April 2016.
[24] N. Liang, W. Zhang, “Mixed-ADC massive MIMO uplink in frequency-selective channels,” IEEE Trans. Commun., vol. 64, no. 11, pp. 4652-4666, Nov. 2016.
[25] W. Tan, S. Jin, C. Wen and Y. Jing, “Spectral efficiency of mixedADC receivers for massive MIMO systems,” IEEE Access, vol. 4, pp.
7841-7846, Aug. 2016.
[26] J. Zhang, L. Dai, Z. He, S. Jin, and X. Li, “Performance analysis of
mixed-ADC massive MIMO systems over Rician fading channels,” IEEE
J. Sel. Areas in Commun., vol. 35, no. 6, pp. 1327-1338, June 2017.
[27] H. Pirzadeh, and A. Swindlehurst, “Spectral efficiency under energy
constraint for mixed-ADC MRC massive MIMO,” IEEE Sig. Process.
Lett., vol. 24, no. 12, pp. 1847-1851, Oct. 2017.
[28] T. C. Zhang, C. K. Wen, S. Jin, and T. Jiang, “Mixed-ADC massive
MIMO detectors: Performance analysis and design optimization,” IEEE
Trans. Wireless Commun., vol. 15, no. 11, pp. 7738–7752, Nov. 2016.
[29] J. Liu, J. Xu, W. Xu, S. Jin, and X. Dong, “Multiuser massive MIMO
relaying with Mixed-ADC receiver,” IEEE Sig. Process. Lett., vol. 24,
no. 1, pp. 76-80, Dec. 2016.
[30] J. Park, S. Park, A. Yazdan and R. W. Heath “Optimization of MixedADC multi-antenna systems for Cloud-RAN deployments,” IEEE Trans.
Commun., vol. 65, no. 9, pp. 3962-3975, Sep. 2017.
[31] J. J. Bussgang, “Crosscorrelation functions of amplitude-distorted
Gaussian signals,” Res. Lab. Electron., Massachusetts Inst. Technol.,
Cambridge, MA, USA, Tech. Rep. 216, 1952.
[32] G. Jacovitti and A. Neri, “Estimation of the autocorrelation function
of complex Gaussian stationary processes by amplitude clipped signals,”
IEEE Trans. Inf. Theory, vol. 40, no. 1, pp. 239-245, Jan. 1994.
[33] A. Garcia-Rodriguez, C. Masouros, and P. Rulikowski, “Reduced
Switching Connectivity for Large Scale Antenna Selection,” IEEE Trans.
Commun., vol. 65, no. 5, pp. 2250-2263, May 2017.
[34] Y. Gao, H. Vinck, and T. Kaiser, “Massive MIMO antenna selection:
Switching architectures, capacity bounds, and optimal antenna selection
algorithms,” IEEE Trans. Sig. Process., vol. 66, no. 5, pp. 1346-1360,
March, 2018.
[35] X. Gao, O. Edfors, F. Tufvesson, and E. Larsson, “Multi-Switch for
antenna selection in massive MIMO,” in Proc. IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015.
[36] A. Le Fevre, R. Flett, “A 100 Mb/s Multi-LAN crosspoint chip set
for cable management,” IEEE J. Solid-State Circuits, vol. 32, no. 7, pp.
1115-1121, July 1997.
[37] E. Björnson, E. G. Larsson, and M. Debbah, “Massive MIMO for
maximal spectral efficiency: How many users and pilots should be
allocated?,” IEEE Trans. Wireless Commun., vol. 15, no. 2, pp. 1293-1308, Feb. 2016.
[38] “http://www.analog.com/media/en/news-marketing-collateral/productselection-guide/HighSpeedSwitches.pdf”
[39] H. Pirzadeh, and A. Swindlehurst, “On the optimality of mixed-ADC
massive MIMO with MRC detection,” in Proc. Int. ITG Workshop Smart
Antennas (WSA), 2017.
[40] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation
Theory. Englewood Cliffs, NJ: Prentice-Hall, 1993.
[41] L. Fan, S. Jin, C. K. Wen, and M. Matthaiou, “Optimal pilot length
for uplink massive MIMO systems with low-resolution ADC,” in IEEE
Sensor Array and Multichannel Signal Processing Workshop (SAM), Rio
de Janeiro, Brazil, July 2016.
[42] H. Q. Ngo, M. Matthaiou, and E. G. Larsson, “Massive MIMO with
optimal power and training duration allocation,” IEEE Wireless Commun.
Lett., vol. 3, no. 6, pp. 605-608, Dec. 2014.
[43] K. Roth, H. Pirzadeh, A. L. Swindlehurst, and J. A. Nossek, “A
comparison of hybrid beamforming and digital beamforming with lowresolution ADCs for multiple users and imperfect CSI,” arXiv.org Sep.
2017. Available: http://arxiv.org/abs/1709.09047.
[44] D. Qiao, W. Tan, Y. Zhao, C.-K. Wen and S. Jin, “Spectral efficiency
for massive MIMO zero-forcing receiver with low-resolution ADC,” in
IEEE Wireless Communication and Signal Processing (WCSP), Yangzhou,
China, Oct. 2016.
[45] Q. Shi, and Y. Karasawa, “Some applications of Lauricella hypergeometric function FA in performance analysis of wireless communications,”
IEEE Commun. Lett., vol. 16, no. 5, pp. 581-584, May 2012.
[46] H. A. David, Order Statistics, 2nd ed. New York: Wiley, 1981.
[47] S. Nadarajah and M. Pal, “Explicit expressions for moments of gamma
order statistics,” Bulletin of the Brazilian Mathematical Society, New
Series, vol. 39, no. 1, pp. 45-60, Mar. 2008.
| 7 |
Co-evolutionary multi-task learning for dynamic time series
prediction
Rohitash Chandra, Yew-Soon Ong and Chi-Keong Goh
arXiv:1703.01887v1 [] 27 Feb 2017
Rolls Royce @NTU Corp Lab
Nanyang Technological University, 42 Nanyang View, Singapore
Abstract
Multi-task learning employs shared representation of knowledge for learning multiple instances from the same or related problems. Time series prediction consists of several instances
that are defined by the way they are broken down into fixed windows known as embedding dimension. Finding the optimal values for embedding dimension is a computationally intensive
task. Therefore, we introduce a new category of problem called dynamic time series prediction that requires a trained model to give prediction when presented with different values of the
embedding dimension. This can be seen a new class of time series prediction where dynamic
prediction is needed. In this paper, we propose a co-evolutionary multi-task learning method that
provides a synergy between multi-task learning and coevolution. This enables neural networks
to retain modularity during training for building blocks of knowledge for different instances of
the problem. The effectiveness of the proposed method is demonstrated using one-step-ahead
chaotic time series problems. The results show that the proposed method can effectively be used
for different instances of the related time series problems while providing improved generalisation performance.
Keywords:
Coevolution; multi-task learning; modular neural networks; chaotic time series; and dynamic
programming.
1. Introduction
Multi-task learning employs shared representation knowledge for learning multiple instances
from the same problem with the goal to develop models with improved generalisation performance [1, 2, 3, 4]. On the other hand, multi-task evolutionary algorithms have been proposed for
optimisation problems with the intention of exploring and exploiting the common knowledge between the tasks and enabling transfer of knowledge between them for optimisation [5, 6]. It has
been shown that knowledge from related tasks can help in speeding up the optimisation process
and obtain better quality solutions when compared to single-task approaches. Inspired by this
new phenomenon, in this paper, we present a study on multi-task learning for time series prediction. Recently, evolutionary multi-tasking has been used for efficiently training feedforward
neural networks for n-bit parity problem [7], where different tasks were implemented as different
topologies given by the number of hidden neurons that obtained improved training performance.
Time series prediction using neural networks has been popular for applications that range
from business [8] to weather, and climate prediction [9, 10]. In the past, different neural network
architectures that include feedforward and recurrent neural networks have been used, with a
wide range of algorithms that can be characterised as gradient based approaches [11, 12], neuroevolution [13, 14, 15], hybrid algorithms and ensemble learning [16, 17, 18]. Time series prediction typically involves a preprocessing stage where the original time series is reconstructed into
a state-space vector. This involves breaking the time series using overlapping windows known
as the embedding dimension, taken at regular intervals or time lags [19]. The optimal values for the embedding dimension and time lag are used to train the chosen model. These values vary with the type of problem and require costly computational evaluation for model selection; hence, some
effort has been made to address this issue. Multi-objective and competitive coevolution methods
have been used to take advantage of different features from the embedding dimension during
training [20, 21]. Moreover, neural networks have been used for determining the optimal embedding dimension of selected time series problems [22].
In time series for natural disasters such as cyclones [23, 10], it is important to develop models
that can make predictions dynamically, i.e., the model has the ability to make a prediction as soon as minimal data is available for the time series. The minimal value of the embedding dimension can have a huge impact in the case of cyclones, where data is only available every 6 hours [24].
A way to address such categories of problems is to devise robust training algorithms and models
that are capable of performing given different types of input or tasks. We define dynamic time
series prediction as the need for a single model that can be used to make prediction for different
values of the embedding dimension after training. It has been highlighted in recent work [24] that
recurrent neural networks trained with a predefined embedding dimension can only generalise for the same embedding dimension, which makes dynamic time series prediction a challenging
problem. In this paper, we define tasks as different instances of the embedding dimension.
Furthermore, we note that different values in the embedding dimension can be used to generate several distinct datasets that have overlapping features which can be used to train modules
for shared knowledge representation as needed for multi-task learning. Hence, it is important
to ensure modularity is retained during learning. Modular neural networks are motivated from
repeating structures in nature [25]. Modular networks were introduced for for visual recognition tasks that were trained by genetic algorithms and produced generalisation capability [25].
More recently, a modular neural network was presented where the performance and connection
costs were optimised through neuro-evolution which achieved better performance when compared to fully connect neural network [26]. Modular neural networks have also been designed
with the motivation to learn new tasks without forgetting old ones [27]. It was shown that modular networks learn new tasks faster from knowledge of previous tasks. Modular neural network
architectures have been beneficial for hardware implementations [28]. Modular neural networks
enable smaller networks to be used as building blocks for a larger network.
In dynamic programming, a large problem is broken down into sub-problems, from which
at least one sub-problem is used as a building block for the optimisation problem. Although
dynamic programming has been primarily used for optimisation problems, it has been briefly
explored for data driven learning [29] [30]. The concepts in using sub-problems as building
block in dynamic programming can be used in developing algorithms for multi-task learning.
Cooperative coevolution (CC) is a divide and conquer approach that has been initially used for
optimisation problems [31] and later been effective for neuro-evolution [32] and applied to time
series problems [14, 15]. CC provides more diverse solutions through the subcomponents when
compared to conventional single population-based evolutionary algorithms [32].
Time series prediction problems can be generally characterised into three major types of
problems that include one-step prediction [16, 12, 14], multi-step-ahead prediction [33, 34, 35],
and multi-variate time series prediction [36, 37, 38]. These problems at times may overlap with
each other, for instance, a multi-step-ahead prediction can have a multi-variate component. Similarly, a one-step prediction can also have a multi-variate component, or a one-step ahead prediction can be used for multi-step prediction and vice-versa. In this paper, we identify a special
class of problems that require dynamic prediction with the hope that the trained model can be
useful for different instances of the problem. It would be able to feature different values of the
embedding dimension or incorporate additional features in the case of multivariate problems.
Although neuro-evolution has been successfully applied for training neural networks, multi-task learning for enhancing neuro-evolution has not been fully explored. There has not been any
work that explores the embedding dimension of a time series as tasks for multi-task learning.
This can be beneficial for dynamic time series prediction that requires a model to make robust
prediction.
In this paper, we propose a co-evolutionary multi-tasking method that provides a synergy
between multi-task learning and coevolution and enables neural networks to be trained with
shared knowledge representation while retaining modularity. This enables the learning process
to employ modules of knowledge from different but related tasks as building blocks of a single
model. The proposed method is used for one-step-ahead chaotic time series problems using
feedforward neural networks for seven benchmark problems.
The rest of the paper is organised as follows. Section 2 gives a background on multi-task
learning, cooperative neuro-evolution and time series prediction. Section 3 gives details of the coevolutionary multi-task learning method for dynamic time series prediction. Section 4 presents
the results with discussion. Section 5 presents the conclusions and directions for future research.
2. Background and Related Work
2.1. Multi-task learning and applications
Multi-task learning employs a shared representation of knowledge for learning several different instance of the same or related problems [1]. A number of approaches have been presented
that consider multi-task learning for different types of problems, including supervised and unsupervised learning [39, 40, 41, 42]. Negative transfer has been a major challenge for multi-task
learning. The major approach to address it has been through task grouping where knowledge
transfer is performed only within each group [43, 44]. Bakker et al., for instance, presented a
Bayesian approach in which some of the model parameters were shared and others loosely connected through a joint prior distribution learnt from the data [44]. Zhang and Yeung presented a
convex formulation for multi-task metric learning by modeling the task relationships in the form
of a task covariance matrix [43]. Moreover, Zhong et al. presented a flexible multi-task learning
framework to identify latent grouping structures in order to restrict negative knowledge transfer
[45].
Multi-task learning has recently contributed to a number of successful real-world applications that gained better performance by exploiting shared knowledge for multi-task formulation.
Some of these applications include 1) a multi-task approach for "retweet" prediction behaviour of
individual users [46], 2) recognition of facial action units [38], 3) automated Human Epithelial
Type 2 (HEp-2) cell classification [47], 4) kin-relationship verification using visual features [48]
and 5) object tracking [49].
2.2. Cooperative Neuro-evolution
Neuro-evolution employs evolutionary algorithms for training neural networks [50]. Neuro-evolution can be classified into direct [50, 51] and indirect [52] encoding strategies. In direct
encoding, every connection and neuron is specified directly and explicitly in the genotype [50,
51]. In indirect encoding, the genotype specifies rules or some other structure for generating the
network [52]. Performance of direct and indirect encodings vary for specific problems. Indirect
encodings seem very intuitive and have biological motivations, however, in several cases they
have shown not to outperform direct encoding strategies [53, 54].
Cooperative coevolution for training neural networks is known as cooperative neuroevolution [32, 55]. Although cooperative coevolution faced challenges in problem decomposition,
it showed promising features that included modularity and diversity [32]. Further challenges
have been in area of credit assignment for subcomponents [32, 55], problem decomposition, and
adaptation due to issues of separability [56]. In cooperative neuro-evolution, problem decomposition has a major effect in the training and generalisation performance. Although several
decomposition strategies have been implemented that vary for different network architectures,
the two established decomposition methods are those on the synapse level [53] and neuron level
[57, 56, 58].In synapse level, the network is decomposed to its lowest level where each weight
connection (synapse) forms a subcomponent [53, 13].In neuron level, the neurons in the network
act as the reference point for the decomposition [59, 58]. They have shown good performance in
pattern classification problems [60, 57, 58]. Synapse level decomposition has shown good performance in control and time series prediction problems [53, 13, 14], however, they gave poor
performance for pattern classification problems [56].
Chandra et al. applied neuron and synapse level decomposition for chaotic time series problems using recurrent neural networks [14]. Hence, it was established that synapse level encoding was more effective for time series and control problems [53, 14]. A competitive and collaborative method was proposed with very promising performance for chaotic time series problems [15].
Alg. 1 Cooperative Coevolution
Step 1: Decompose the problem (Neuron or Synapse level decomposition)
Step 2: Initialise and cooperatively evaluate each sub-population
for each cycle until termination do
for each Sub-population do
for n Generations do
i) Select and create new offspring
ii) Cooperatively evaluate the new offspring
iii) Update sub-population
end for
end for
end for
In Algorithm 1, the network is decomposed according to the selected decomposition method.
Neuron level decomposition is shown in Figure 1. Once the decomposition is done, the subcomponents that are implemented as sub-populations are initialized and evolved in a round-robin
fashion, typically for a fixed depth of search given by generations. The evaluation of the fitness of each individual for a particular sub-population is done cooperatively by concatenating
the current individual with the fittest individuals from the rest of the sub-populations [32]. The
concatenated individual is then encoded into the neural network where its fitness is evaluated
and returned. The fitness of the entire network is assigned to the particular individual of the
sub-population, although it is a representative fitness. This is further illustrated in Figure 1.
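A minimal Python sketch of this cooperative evaluation step, assuming a placeholder decode_and_evaluate routine that builds the network from a full weight vector and returns its error:

```python
# Minimal sketch of the cooperative fitness evaluation in Algorithm 1: an
# individual from one sub-population is concatenated with the current best
# individuals of the other sub-populations before being decoded into the network.
import numpy as np

def cooperative_fitness(individual, sub_pop_index, best_individuals, decode_and_evaluate):
    parts = list(best_individuals)             # current best solution of every sub-population
    parts[sub_pop_index] = individual          # swap in the individual being evaluated
    full_solution = np.concatenate(parts)      # full weight vector of the network
    return decode_and_evaluate(full_solution)  # representative fitness (e.g. RMSE)
```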
Figure 1: Feedforward network with Neuron level decomposition. Note that 4 input neurons represent time series
reconstruction with embedding dimension of 4.
2.3. Problems in time series prediction
We present details of the three major types of problems in time series that include one-step
prediction, multi-step-ahead prediction, and multi-variate time series prediction. Another type
of problem for time series prediction includes applications that have missing data. Wu et al.
approached the missing data problem in time series with non-linear filters and neural networks
[61]. In their method, a sequence of independent Bernoulli random variables were used to model
random interruptions which was later used to construct the state-space vector in preprocessing
stage.
A number of methods have been used for one-step ahead time series prediction with promising results from neural networks with various architectures [12, 14] and algorithms that include gradient-based learning [62, 11], evolutionary techniques [13, 14, 15] and hybrid methods
[16, 17, 12]. These methods can also be used for multi-step ahead and multivariate time series
prediction.
Multi-step-ahead (MSA) prediction refers to the forecasting or prediction of a sequence of
future values from observed trend in a time series [63]. It is challenging to develop models that
produce low prediction error as the prediction horizon increases [33, 34, 35]. MSA prediction
has been approached mostly with the recursive and direct strategies. In the recursive strategy, the
prediction from a one-step-ahead prediction model is used as input for future prediction horizon
[64, 65]. Although relatively new, a third strategy is a combination of these approaches [64, 66].
Multi-variate time series prediction typically involves the prediction of single or multiple
values from multi-variate input that are typically interconnected through some event [36, 37, 38].
Examples of single value prediction are the prediction of flour prices of time series obtained from
different cities [36] and traffic time series [67]. Note that the goal is to enhance the prediction
performance using the additional features in the input, although the problem can be solved
in a univariate approach [67]. In the case of prediction of multiple values, the model needs to
predict future values of the different features, for example, prediction of latitude and longitude
that defines the movement of cyclones [68]. A recent study has shown that multivariate prediction performs better than univariate prediction for MSA: as the prediction horizon becomes larger, multi-variate information becomes more important [69].
3. Co-evolutionary Multi-task Learning
3.1. Preliminaries: time series reconstruction
In state-space reconstruction, the original time series is divided using overlapping windows
at regular intervals that can be used for one-step-ahead and MSA prediction. Takens' theorem states that the vector series reproduces many important characteristics of the original time series [19]. Hence, given an observed time series x(t), an embedded phase space Y(t) = [x(t), x(t − T), ..., x(t − (D − 1)T)] can be generated, where T is the time delay, D is the embedding dimension (window), t = 0, 1, 2, ..., N − DT − 1, and N is the length of the original time series. The optimal values for D and T must be chosen in order to efficiently apply Takens' theorem [70]. Takens proved that if the original attractor is of dimension d, then D = 2d + 1 will
be sufficient to reconstruct the attractor [19]. In the case of using feedforward neural networks,
D is the number of input neurons.
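A minimal Python sketch of this reconstruction, with illustrative data; each choice of D yields a different dataset and hence, in the terminology of this paper, a different task.

```python
# Sketch of state-space reconstruction: a scalar series x is windowed with
# embedding dimension D and time lag T to give input/target pairs for
# one-step-ahead prediction.
import numpy as np

def embed(x, D, T=1):
    """Return (inputs, targets) for one-step-ahead prediction."""
    X, y = [], []
    for t in range(len(x) - D * T):
        X.append(x[t:t + D * T:T])   # D lagged values x(t), x(t+T), ..., x(t+(D-1)T)
        y.append(x[t + D * T])       # value to predict, one step beyond the window
    return np.array(X), np.array(y)

x = np.sin(0.3 * np.arange(200))     # toy series
X4, y4 = embed(x, D=4)               # task with embedding dimension 4
X6, y6 = embed(x, D=6)               # task with embedding dimension 6
```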
3.2. Problem: Dynamic time series prediction
Natural disasters such as torrential rainfall, cyclones, tornadoes, wave surges and droughts
[71, 72, 9, 10] require dynamic and robust prediction models that can make a decision as soon
as the event take place. Therefore, if the model is trained over specific months for rainy seasons,
the system should be able to make a robust prediction from the beginning of the rainy season.
We define the event length as the duration of an event, which can be the number of hours of a cyclone or the number of days of a drought or torrential rain.
As noted earlier, in a typical time series prediction problem, the original time series is reconstructed using Takens' theorem [19, 70]. In the case of cyclones, it is important to measure
the performance of the model when dynamic prediction is needed regarding track, wind or other
characteristics of the cyclone [24]. Dynamic prediction can provide early warnings to the community at risk. For instance, data about tropical cyclones in the South Pacific are recorded at six-
hour intervals [73]. If the embedding dimension D = 6, the first prediction by the model at hand
would come after 36 hours which could have devastating effects.
The problem arises when the gap between each data point in the time series is a day or a number of hours. The problem with existing models such as neural networks used for cyclones is the minimal embedding dimension D needed to make a prediction. It has been reported that recurrent neural networks trained with a given embedding dimension (e.g., D = 5) cannot make robust predictions for other embedding dimensions (e.g., D = 7 or D = 3) [24]. Therefore, we
introduce the problem of dynamic time series prediction (DTSP) that involves the minimum embedding dimension needed for a model to effectively reach a prediction for a given time-series.
This enables different embedding dimension values to be used in a model for prediction, i.e., the
model can provide a prediction irrespective of the embedding dimension.
3.3. Method
In the proposed method, a coevolution algorithm based on a dynamic programming strategy
is proposed for multi-task learning. It features problem decomposition in a similar way to cooperative coevolution; however, the major difference lies in the way the solutions of the subcomponents are combined to build the final solution. Hence, the proposed co-evolutionary multi-task
learning algorithm is inspired from the strategies used in dynamic programming where a subset
of the solution is used as the main building block for the optimisation problem. In this case,
the problem is learning the weights of a neural network and the base problem is the neural network with the smallest architecture and lowest number of input features. The weights in the base
network are then mapped into larger network architectures that consist of more hidden neurons
and input features. This can be viewed as modules of knowledge that are combined for larger
tasks that use knowledge from smaller tasks as building blocks. The larger network architectures
can also be seen as additional tasks; hence, we name the approach co-evolutionary multi-task
learning (CMTL).
CMTL is used for training feedforward neural networks (FNNs) for dynamic time series
prediction. It considers different tasks as neural network topologies defined by different number
of input and hidden neurons. The different number of input neurons refer to additional features
that are defined by the task. Let us assume that different sets of features from the same problem make up the different datasets with some overlapping component, i.e., some of the features in these datasets overlap. Hence, multi-task learning can be used to represent the problem, where there is some form of shared knowledge representation for the learning process, which refers to the overlapping features from the different tasks. This means that the overlapping features can be grouped together as task = 1, while the remaining features are assigned to tasks ≥ 2 in these types of
problems.
In the CMTL algorithm, each sub-population is given as S_1, S_2, ..., S_N, where N is the number of sub-populations. The sub-populations consist of matrices of variables that refer to the weights of the FNN that correspond to the different tasks, S = X(i, j), where i is the number of variables and j is the number of individuals in the respective sub-population. S_(task) corresponds to a specific task, where task = 1, 2, ..., N, which corresponds to Network_(task), with data for each task given by a different embedding dimension of the time series, D_(task).
Suppose that W_task denotes the set of input-to-hidden and hidden-to-output layer weights of a task. In our example, the number of output neurons is limited to 1 for all tasks. The weights of the network topology for each task are therefore appended with the rest of the task knowledge, ranging from the smallest task up to the second largest, denoted x, as shown in Equation 1:

x = [W_{task-1}, W_{task-2}, ..., W_{task=1}],    Network_{task} = [W_{task}, x]    (1)
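A small sketch of this concatenation, assuming each task's contribution is stored as a flat weight vector (the sizes used in the example are illustrative):

```python
import numpy as np

def task_solution(W, task):
    """Equation 1: Network_task = [W_task, x] with x = [W_(task-1), ..., W_1].
    W[t] holds the flat weight vector contributed by task t."""
    x = [W[t] for t in range(task - 1, 0, -1)]
    return np.concatenate([W[task]] + x)

W = {1: np.full(6, 0.1), 2: np.full(6, 0.2), 3: np.full(8, 0.3)}
sol3 = task_solution(W, task=3)   # 20 values ordered as [W_3, W_2, W_1]
```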
Algorithm 2 gives the details of CMTL, which begins by initialising all components: the sub-populations S_task used for co-evolutionary multi-tasking, and the different neural network topologies Network_task defined by the tasks, which feature the weights W_task and the respective task data Data_task. The sub-populations S_task are initialised with real values in the range [-a, a], and every individual i in a sub-population is assigned an arbitrary fitness value F_i.
Once the initialisation phase has been completed, the algorithm moves into the evolution phase, where each task is evolved for a fixed number of generations defined by the depth of search, depth. If task == 1, the task solution TaskSol_task is simply the best solution Sol_best from the sub-population S_task. Otherwise, the current task solution Sol_task is appended with the best solutions from the previous tasks, TaskSol_task = [Sol_1, Sol_2, ..., Sol_task]. Next, the task solution is passed to Algorithm 3 together with the network topology Network_task so that it can be decoded into the respective weights of the network. This procedure is carried out for every individual i in the sub-population S_task and is repeated for every task over successive phases until the termination condition is satisfied. The termination condition can be either a maximum number of function evaluations or a minimum fitness value obtained from the training or validation dataset.
Alg. 2 Co-evolutionary Multi-task Learning
Data: Requires data for the different tasks Data_task
Result: Weights as model parameters for FNN Network_task
initialisation
for each task do
    1. Define the different tasks using data Data_task corresponding to the neural network Network_task given by different numbers of neurons (input i, hidden j and output k)
    2. Define the weight space W_task for the different tasks
    3. Initialise the individuals of the sub-populations S_task within the unified search space
    4. Assign arbitrary fitness values F_i to the individuals in each sub-population S_task
    5. Assign the depth of search, e.g. depth = 5, which defines the number of generations for each sub-population S_task
end
while each phase until termination do
    for each task do
        for each generation until depth do
            - Get the best solution Sol from S_task
            if task == 1 then
                - Assign TaskSol_task = Sol_1
            else
                - Append the current task solution Sol_task with the best solutions from previous tasks, TaskSol_task = [Sol_1, Sol_2, ..., Sol_task]
            end
            for each individual j in S_task do
                - Call Algorithm 3: encode TaskSol_task into Network_task
                - Load data Data_task for the task and evaluate Network_task for fitness F given by the RMSE
            end
            for each individual j in S_task do
                - Select and create new offspring via evolutionary operators such as selection, crossover and mutation
            end
            - Update S_task
            - Update the number of function evaluations (FE)
        end
    end
end
- Test the obtained solution
for each task do
    1. Load the best solution S_best from S_task
    2. Map it into the weight space W_task for the task
    3. Load the test data TestData_task and test Network_task
    4. Report the RMSE
end
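The following Python sketch mirrors the control flow of Algorithm 2 under simplifying assumptions: the CMA-ES sub-population update is replaced by a simple mutation of the better half, and decoding into the FNN (Algorithm 3) together with the RMSE evaluation is replaced by a stub fitness. All names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(task_sol):
    """Stub fitness: distance to a fixed target vector. In the paper this
    would decode task_sol into the FNN (Algorithm 3) and return the RMSE
    on the task's training data."""
    target = np.linspace(-0.3, 0.3, task_sol.size)
    return float(np.sqrt(np.mean((task_sol - target) ** 2)))

def cmtl(dims, pop_size=10, depth=5, phases=20, sigma=0.05):
    """Control-flow sketch of Algorithm 2; dims[k] is the number of new
    weights contributed by task k+1 (topologies grow with the task)."""
    pops = [rng.uniform(-0.5, 0.5, (pop_size, d)) for d in dims]
    fits = [np.full(pop_size, np.inf) for _ in dims]
    best = [p[0].copy() for p in pops]
    for _ in range(phases):
        for k in range(len(dims)):                       # task = k + 1
            prior = np.concatenate(best[:k][::-1]) if k else np.empty(0)
            for _ in range(depth):                       # depth of search
                for j in range(pop_size):
                    task_sol = np.concatenate([pops[k][j], prior])
                    fits[k][j] = evaluate(task_sol)
                order = np.argsort(fits[k])
                best[k] = pops[k][order[0]].copy()
                # mutate the worse half from the better half (stand-in for CMA-ES)
                half = pop_size // 2
                pops[k][order[half:]] = (pops[k][order[:half]]
                                         + sigma * rng.standard_normal((half, dims[k])))
    return best

best_solutions = cmtl(dims=[8, 5, 6])
```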
Figure 2 shows an exploded view of the neural network topologies associated with the respective tasks; however, they are all part of the same network, as later shown in Figure 3. The way the task solution is decomposed and mapped into the network is given in Figure 3 and discussed in detail in the next section.
Figure 2: Problem decomposition as tasks in co-evolutionary multi-task learning. Note that the colours of the synapses in the network are linked to their encoding as different tasks. Task 1 employs a network topology with 2 hidden neurons, while the remaining tasks add extra input and hidden neurons. The exploded view shows that the different neural network topologies are assigned to different tasks, although they are all part of the same network, as shown in Figure 3.
Note that the major way CMTL differs from the cooperative neuro-evolution (CNE) given in Algorithm 1 is in how the problem is decomposed and how the fitness of each individual is calculated. In CMTL, the fitness of an individual from a sub-population S_task depends on the previous tasks whenever the task index is greater than 1. This differs from CNE, where the fitness of an individual is calculated after it is concatenated with the best individuals from all the respective sub-populations. This difference makes CMTL useful for problems that change, or whose features increase, with the task, whereas CNE can only be used for single-tasking. In CMTL, the number of input features related to the task data does not matter as long as it increases with the task; CMTL can easily be modified to handle tasks that have the same number of input features, and the same or different numbers of instances in the respective task datasets.
Finally, when the termination criterion has been met, the algorithm moves into the testing phase, where the best solutions from all the different tasks are saved and encoded into their respective network topologies. Once this is done, the respective task test data is loaded and the network is used to make predictions, yielding an error measure which here is given by the RMSE, although any other measure could also be used.
We have thus highlighted the association of every individual in the respective sub-populations with a different task in the multi-task learning environment. There is a transfer of knowledge, in terms of weights, from smaller to bigger networks as defined by the task and its data, which is covered in detail in the next section.
3.4. Transfer of Knowledge from Tasks
One challenging aspect of Algorithm 2 is the transfer of knowledge, represented by the weights of the respective neural networks, that is learnt across the different tasks in CMTL. We assume that the topology, in terms of the number of input, hidden and output neurons, increases with the tasks. Algorithm 3 is able to handle any increase in the respective numbers of neurons across the different tasks.
The purpose of Algorithm 3 is to map neural network weights from the different sub-populations defined by the tasks; it therefore performs the transfer for any number of tasks. The algorithm is given the following input parameters:
1. The current task (task = 1, 2, ..., N), where N is the number of tasks and each task corresponds to a sub-population and a dataset with input and target instances;
2. The current task solution (TaskSol_task = [Sol_1, Sol_2, ..., Sol_task]), in which the solution is appended with the solutions of the previous tasks whenever task > 1;
3. The topology of the respective neural networks for the different tasks, in terms of the numbers of input, hidden and output neurons.
We describe the algorithm with reference to Figure 3, which shows a case where the network for task = 3 receives the transfer, with task = 1 and task = 2 used as building blocks of knowledge encoded in the weights. We therefore use the following example network topology:
1. Input is the vector of the number of input neurons for the respective tasks, e.g. Input = [2, 3, 4];
2. Hidden is the vector of the number of hidden neurons for the respective tasks, e.g. Hidden = [2, 3, 4];
3. Output is the vector of the number of output neurons for the respective tasks, e.g. Output = [1, 1, 1]. Note that since our application is limited to one-step-ahead time series prediction, we only consider 1 output neuron for all the tasks.
The algorithm begins by assigning BaseTask = 1, since the base case is applied irrespective of the number of tasks. In Step 1, the transfer of the input-hidden layer weights from TaskSol is done in a straightforward manner, as shown by weights (1-4) in Figure 3. Step 2 executes the transfer of the hidden-output layer weights from TaskSol, as shown by weights (5-6) in Figure 3.
Note that Steps 1 and 2 are applied in all cases, regardless of the number of tasks. Once this is done, the algorithm terminates if task = 1 or proceeds if task >= 2. In Step 3, the situation is more complex because task >= 2. Steps 1 and 2 are executed first, and by Step 3 the task solution contains the appended solution sets of the previous tasks, e.g. TaskSol(2) = [Sol_task=1, Sol_task=2]. If we consider the position t within TaskSol(t), by the time the algorithm reaches Step 3, t points to the beginning of the solution contributed by the sub-population of task = 2. Here, the transfer of the input-hidden layer weights (7-9) is executed from TaskSol for task = 2, with position t incrementing during the transfer. In this case we begin with the hidden neurons following those of the previous task, j = Hidden(task-1) + 1, and move up to the number of hidden neurons of the current task, j = Hidden(task), in order to transfer the weights connecting all the input neurons; this corresponds to weights (7-9) in Figure 3. Before the transfer for task = 3 takes place, the transfers for task = 1 and task = 2 have already been carried out; hence weights (13-16) are transferred as shown in the same figure.
In Step 4, we consider the transfer of the input-hidden layer weights for task = 2, covering the weights from the first input neuron beyond the previous task, i = Input(task-1) + 1, up to the inputs of the current task, connected to all hidden neurons. This is given by weights (10-11) in Figure 3; for task = 3, this corresponds to weights (17-19) in the same figure.
Finally, in Step 5, the algorithm transfers the hidden-output layer weights for the hidden neurons added between the previous and the current task that are linked to the output neuron. For task = 2 this transfers weight (12), and for task = 3 weight (20) in Figure 3, respectively.
Note that the algorithm can handle any increase in the number of input and hidden neurons as the number of tasks grows. It can also decide whether to transfer when the numbers of input or hidden neurons are the same across the different tasks.
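A minimal Python sketch of this mapping is given below. It assumes the task solution is ordered from task 1 upwards, as in Algorithm 2, and that the intermediate tasks are transferred in turn as described above; the weights are written into dense matrices W1 (input-hidden) and W2 (hidden-output) sized for the current task, and all names are illustrative.

```python
import numpy as np

def decode_task_solution(task_sol, task, Input, Hidden, Output):
    """Map the appended task solution into weight matrices W1 (input-hidden)
    and W2 (hidden-output) of the network for `task` (tasks are 1-based;
    neuron indices are 0-based)."""
    W1 = np.zeros((Input[task - 1], Hidden[task - 1]))
    W2 = np.zeros((Hidden[task - 1], Output[task - 1]))
    t = 0
    for cur in range(1, task + 1):
        prev_hid = Hidden[cur - 2] if cur > 1 else 0
        prev_in = Input[cur - 2] if cur > 1 else 0
        # Steps 1 and 3: new hidden neurons, connected to all inputs of `cur`
        for j in range(prev_hid, Hidden[cur - 1]):
            for i in range(Input[cur - 1]):
                W1[i, j] = task_sol[t]; t += 1
        # Step 4: new input neurons, connected to the earlier hidden neurons
        for j in range(prev_hid):
            for i in range(prev_in, Input[cur - 1]):
                W1[i, j] = task_sol[t]; t += 1
        # Steps 2 and 5: hidden-output weights for the new hidden neurons
        for k in range(Output[cur - 1]):
            for j in range(prev_hid, Hidden[cur - 1]):
                W2[j, k] = task_sol[t]; t += 1
    return W1, W2

# Example topology from the text: Input=[2,3,4], Hidden=[2,3,4], Output=[1,1,1];
# task = 3 consumes exactly the 20 weights enumerated in Figure 3.
sol = np.arange(1, 21, dtype=float)
W1, W2 = decode_task_solution(sol, 3, [2, 3, 4], [2, 3, 4], [1, 1, 1])
```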
4. Simulation and Analysis
This section presents an experimental study that compares the performance of CMTL with single-task learning methods, namely cooperative neuro-evolution and an evolutionary algorithm, for dynamic time series prediction. Note that the single-task learning methods only serve as a baseline: they cannot address dynamic time series directly, since a single trained model cannot handle different values of the embedding dimension. Seven benchmark chaotic time series problems are employed, and CMTL is compared with the EA and cooperative neuro-evolution (CNE).
4.1. Benchmark Chaotic Time Series Problems
Among the benchmark chaotic time series problems, Mackey-Glass, Lorenz, Henon and Rossler are the four simulated time series. The experiments use chaotic time series of length 1000 generated by the respective chaotic attractors. The first 500 samples are used for training and the remaining 500 for testing.
In all cases, the phase space of the original time series is reconstructed into 3 datasets for the respective tasks, with embedding dimensions D = 3, 5, 7 and time lag T = 2. All the simulated and real-world time series were scaled to the range [0, 1]. Further details of each time series problem are given as follows.
The Mackey-Glass time series has been used in the literature as a benchmark problem due to its chaotic nature [74]. The Lorenz time series was introduced by Edward Lorenz, who contributed extensively to the establishment of chaos theory [75]. The Henon time series is generated by the Henon map, a discrete-time dynamical system that exhibits chaotic behavior [76], and the Rossler time series is generated from the attractor of the Rossler system, a system of three non-linear ordinary differential equations [77].
The real-world problems are the Sunspot, ACI finance and Lazer time series. The Sunspot time series is a good indicator of solar activity over solar cycles, which impacts Earth's climate, weather patterns, satellite and space missions [78]. The Sunspot time series from November 1834 to June 2001 is selected, consisting of 2000 points.
Figure 3: Transfer of knowledge across tasks encoded as sub-populations in the co-evolutionary multi-task learning algorithm. The diagram shows the transfer of knowledge from Task 1 to Task 2 and finally to Task 3. Note that Task 2 utilises the knowledge of Task 1, and the same concept applies to Task 3, which utilises the knowledge of the previous tasks. Once the knowledge of the previous tasks has been transferred into Task 3, the network loads the Task 3 data (4 features in this example) for further evolution of the sub-population linked with Task 3.
Alg. 3 Transfer of knowledge from previous tasks
Input parameters: task, TaskSol, Input, Hidden and Output
BaseTask = 1
Step 1:
for each j = 1 to Hidden(BaseTask) do
    for each i = 1 to Input(BaseTask) do
        W(i, j) = TaskSol(t)
        t = t + 1
    end
end
Step 2:
for each k = 1 to Output(BaseTask) do
    for each j = 1 to Hidden(BaseTask) do
        W(j, k) = TaskSol(t)
        t = t + 1
    end
end
if task >= 2 then
    Step 3:
    for each j = Hidden(task-1) + 1 to Hidden(task) do
        for each i = 1 to Input(task) do
            W(i, j) = TaskSol(t)
            t = t + 1
        end
    end
    Step 4:
    for each j = 1 to Hidden(task) - 1 do
        for each i = Input(task-1) + 1 to Input(task) do
            W(i, j) = TaskSol(t)
            t = t + 1
        end
    end
    Step 5:
    for each k = 1 to Output(task) do
        for each j = Hidden(task-1) + 1 to Hidden(task) do
            W(j, k) = TaskSol(t)
            t = t + 1
        end
    end
end
The ACI financial time series is obtained from ACI Worldwide Inc., one of the companies listed on the NASDAQ stock exchange. The data set contains closing stock prices from December 2006 to February 2010, equivalent to approximately 800 data points. The closing stock prices were normalized between 0 and 1. The data set features the recession that hit the U.S. market in 2008 [79].
The Lazer time series was measured in a physics laboratory experiment and was used in the Santa Fe Competition [80]. All the real-world time series use the first 50 percent of the samples for training and the remainder for testing.
4.2. Experimental Design
In the case of cooperative neuro-evolution, neuron-level problem decomposition is applied for training feedforward networks [56] on the given problems. We employ covariance matrix adaptation evolution strategies (CMAES) [81] as the evolutionary algorithm in the sub-populations of CMTL and CNE and in the population of the EA. The training and generalisation performances are reported for each case given by the different tasks in the respective time series problems. Note that only CMTL can approach the dynamic time series problem with the power of multi-task learning; we provide results for the single-tasking approaches (CNE and EA) in order to establish a baseline performance.
The respective neural networks use sigmoid units in both the hidden and output layers for all the problems. The RMSE, given in Equation 2, is used as the main performance measure. Each neural network architecture was tested with different numbers of hidden neurons.
We employ a fixed depth = 5 generations in the sub-populations of CMTL as the depth of search, as it gave optimal performance in trial runs; CNE employs the same value. Note that all the sub-populations evolve for the same depth of search. The population size of CMAES in all the respective methods is given by P = 4 + floor(3 * log(W_s)), where W_s is the total number of weights encoded into the sub-population (for CNE and CMTL) or the population (for the EA).
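For reference, the population-size rule can be restated as follows (assuming the natural logarithm, as in the standard CMA-ES default; the weight counts are illustrative):

```python
from math import floor, log

def cmaes_pop_size(num_weights):
    """Population size rule P = 4 + floor(3 * ln(W_s))."""
    return 4 + floor(3 * log(num_weights))

print(cmaes_pop_size(50))    # small weight vector  -> 15
print(cmaes_pop_size(500))   # larger weight vector -> 22
```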
The termination condition is fixed at 30 000 function evaluations for each task; hence, CMTL employs 120 000 function evaluations in total, while the single-tasking approaches use 30 000 for each of the respective tasks across all problems. Since the training time is fixed, no validation set was used to stop training.
The root mean squared error (RMSE) is used to measure the prediction performance, as given in Equation 2:

RMSE = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 }    (2)

where y_i and \hat{y}_i are the observed and predicted values, respectively, and N is the length of the observed data. This performance measure is used in order to compare the results with the literature.
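A direct translation of Equation 2 (illustrative only):

```python
import numpy as np

def rmse(y_obs, y_pred):
    """Root mean squared error between observed and predicted series."""
    y_obs, y_pred = np.asarray(y_obs), np.asarray(y_pred)
    return np.sqrt(np.mean((y_obs - y_pred) ** 2))

print(rmse([0.1, 0.4, 0.35], [0.12, 0.37, 0.4]))  # ~0.036
```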
4.3. Results for Benchmark Problems
The results for the 7 benchmark chaotic time series problems are given in Figures 4 to 10, which show the training (Train) and generalisation (Test) performance of the respective single-task learning methods (EA and CNE) and of CMTL. We limit our discussion to the generalisation performance, although the training performance is also shown.
Figure 4 shows that the CMTL generalisation performance is better than that of EA and CNE for timespans D = 3, 5, 7. CMTL and EA beat CNE in all cases of the respective timespans. The same general trend holds for the Lorenz and Henon time series, as shown in Figure 5 and Figure 6, respectively, with one exception: for D = 5 on the Henon time series, CNE performs better than EA, but worse than CMTL. Figure 7 shows the results for the Rossler time series, which follow a similar trend to the previous problems. Hence, we conclude that the CMTL generalisation performance is the best when compared to the single-tasking methods (CNE and EA) for the 4 simulated time series problems, which have little or no external noise present.
Moving on to the real-world chaotic time series, Figure 8 for the Sunspot problem shows that CMTL provides the best generalisation performance compared with EA and CNE for all the timespan cases. The same holds for the first two timespan cases of the ACI-Finance problem, as shown in Figure 9, except for D = 7, where EA and CMTL give the same performance. For the Lazer time series in Figure 10, known as one of the most chaotic time series problems, CMTL beats CNE and EA, except for one case, D = 7. Therefore, at this stage, we can conclude that CMTL gives the best performance for most of the cases in the real-world time series problems.
Figure 4: Performance given by EA, CNE and CMTL for the Mackey-Glass time series (training and test RMSE versus timespan D).
Figure 5: Performance given by EA, CNE and CMTL for the Lorenz time series (training and test RMSE versus timespan D).
Table 1 shows the mean RMSE and confidence interval across the 3 embedding dimensions. Here, we find that CMTL performs better than EA and CNE for almost all the problems.
Figure 6: Performance given by EA, CNE and CMTL for the Henon time series (training and test RMSE versus timespan D).
Figure 7: Performance given by EA, CNE and CMTL for the Rossler time series (training and test RMSE versus timespan D).
The Lazer problem is the only exception, where the EA is slightly better than CMTL.
Table 1: Performance (mean RMSE ± confidence interval) across the 3 embedding dimensions

Problem       | EA              | CNE             | CMTL
Mackey-Glass  | 0.0564 ± 0.0081 | 0.0859 ± 0.0147 | 0.0472 ± 0.0054
Lorenz        | 0.0444 ± 0.0067 | 0.0650 ± 0.0127 | 0.0353 ± 0.0049
Henon         | 0.1612 ± 0.0120 | 0.1721 ± 0.0128 | 0.1267 ± 0.0127
Rossler       | 0.0617 ± 0.0091 | 0.0903 ± 0.0138 | 0.0489 ± 0.0054
Sunspot       | 0.0529 ± 0.0062 | 0.0773 ± 0.0137 | 0.0399 ± 0.0052
Lazer         | 0.0917 ± 0.0056 | 0.1093 ± 0.0099 | 0.0936 ± 0.0077
ACI-finance   | 0.0565 ± 0.0091 | 0.0866 ± 0.0159 | 0.0471 ± 0.0087
Figure 8: Performance given by EA, CNE and CMTL for the Sunspot time series (training and test RMSE versus timespan D).
Figure 9: Performance given by EA, CNE and CMTL for the ACI-Finance time series (training and test RMSE versus timespan D).
4.4. Discussion
The goal of the experiments was to evaluate whether the proposed CMTL method can deliver results comparable to single-task learning approaches for the introduced dynamic time series problems. The comparison therefore checks that the approach does not lose quality in terms of generalisation performance relative to single-tasking approaches. The results have shown that CMTL not only addresses the problem of the minimal timespan in dynamic time series, but also improves performance relative to treating each of the multi-task learning cases in isolation as single tasks.
It is important to understand why CMTL has shown better results for almost all the cases when compared with single-tasking approaches using the same neural network topology and data for the respective tasks. Note that CMTL is an incremental evolutionary learning approach, as the algorithm employs the consecutive evolution of each task for a small depth of search in terms of the number of generations. It can therefore be viewed as incremental knowledge-based learning, where the bigger tasks take advantage of the knowledge gained from learning the smaller tasks. This is done through collaborative fitness evaluation, where for the bigger tasks the best solution from the smaller task is combined.
Figure 10: Performance given by EA, CNE and CMTL for the Lazer time series (training and test RMSE versus timespan D).
The smaller tasks, however, do not combine with the bigger task solutions, as they would in conventional cooperative coevolution. Through such concatenation of knowledge, there is diversity in the incremental development of knowledge from the base task, which appears to be beneficial for future tasks. Why the base task itself produces better results than a comparable single-task learning design remains an open question, which could be explored through further analysis of the learning process. Note that the bigger tasks contain all the overlapping features of the base tasks; hence the bigger tasks can be seen as having additional features that guide the bigger network(s), with more hidden neurons, during training.
This type of incremental learning not only improves learning but also enables the modularity that is exploited for dynamic time series prediction. Modular knowledge is essential for dynamic problems, where groups of knowledge can be combined as the nature or complexity of the problem increases. Modularity is also important for the design of neural networks in hardware [28], since disruptions in certain synapses can affect the whole network, a problem that can be eliminated by preserving knowledge as modules [26].
Being evolutionary in nature, CMTL can be seen as a flexible method that can be used for multiple data sets with different features, some of which overlap and contribute distinctly to the problem. The common features can be captured as a task, and through multi-task learning the overlapping features can be used as building blocks to learn the nature of the problem with the model at hand. Although feedforward neural networks have been used in CMTL, other neural network architectures and learning models can be used depending on the nature of the tasks.
In computer vision applications such as face recognition, the different tasks could correspond to different numbers of features, i.e. the algorithm could perform face recognition based on either D = 10, D = 15 or D = 20 features drawn from the same problem.
The major limitation of the method is the training time, since CMTL is evolutionary in nature. With the help of gradient-based local search, hybrid instances of CMTL can be developed, either as a two-stage evolutionary global-local search or as memetic algorithms where local refinement occurs during evolution [82].
5. Conclusions and Future Work
We presented a novel algorithm that provides a synergy between coevolution and multi-tasking for training neural networks on dynamic time series problems. The results show that the proposed algorithm not only addresses the problem of the minimal timespan in dynamic time series problems, but also provides better performance in most cases when compared to single-tasking approaches. Each point in a given timespan represents a number of hours, and the proposed algorithm can be used to train a model that works with multiple timespan values, which makes the prediction dynamic and robust.
In future work, the proposed approach can be applied to other time series problems that can be broken into multiple tasks, such as multiple-step-ahead time series prediction. The proposed method can also be extended to transfer learning problems that include both heterogeneous and homogeneous domain adaptation. In the case of tropical cyclones, which are multivariate time series problems, the different tasks could be defined by features that include cyclone tracks, sea surface temperature and humidity.
References
[1] R. Caruana, “Multitask learning,” Machine Learning, vol. 28, no. 1, pp. 41–75, Jul. 1997.
[2] T. Evgeniou, C. A. Micchelli, and M. Pontil, “Learning multiple tasks with kernel methods,” Journal of Machine
Learning Research, vol. 6, no. Apr, pp. 615–637, 2005.
[3] H. Zheng, X. Geng, D. Tao, and Z. Jin, “A multi-task model for simultaneous face identification and facial expression recognition,” Neurocomputing, vol. 171, pp. 515 – 523, 2016.
[4] T. Zeng and S. Ji, “Deep convolutional neural networks for multi-instance multi-task learning,” in Data Mining
(ICDM), 2015 IEEE International Conference on, Nov 2015, pp. 579–588.
[5] A. Gupta, Y. Ong, and L. Feng, “Multifactorial evolution: Toward evolutionary multitasking,” IEEE Trans. Evolutionary Computation, vol. 20, no. 3, pp. 343–357, 2016.
[6] Y. Ong and A. Gupta, “Evolutionary multitasking: A computer science view of cognitive multitasking,” Cognitive
Computation, vol. 8, no. 2, pp. 125–142, 2016.
[7] R. Chandra, A. Gupta, Y. Ong, and C. Goh, “Evolutionary multi-tasking for training feedforward neural networks,” in Proceedings of the International Conference on Neural Information Processing. Springer, 2016, p. In Press.
[8] M. Tkáč and R. Verner, “Artificial neural networks in business: Two decades of research,” Applied Soft Computing,
vol. 38, pp. 788 – 804, 2016.
[9] B. W. Stiles, R. E. Danielson, W. L. Poulsen, M. J. Brennan, S. Hristova-Veleva, T. P. Shen, and A. G. Fore,
“Optimized tropical cyclone winds from quikscat: A neural network approach,” IEEE Transactions on Geoscience
and Remote Sensing, vol. 52, no. 11, pp. 7418–7434, Nov 2014.
[10] L. Zjavka, “Numerical weather prediction revisions using the locally trained differential polynomial network,”
Expert Systems with Applications, vol. 44, pp. 265 – 274, 2016.
[11] D. Mirikitani and N. Nikolaev, “Recursive bayesian recurrent neural networks for time-series modeling,” Neural
Networks, IEEE Transactions on, vol. 21, no. 2, pp. 262 –274, Feb. 2010.
[12] M. Ardalani-Farsa and S. Zolfaghari, “Chaotic time series prediction with residual analysis method using hybrid
Elman-NARX neural networks,” Neurocomputing, vol. 73, no. 13-15, pp. 2540 – 2553, 2010.
[13] C.-J. Lin, C.-H. Chen, and C.-T. Lin, “A hybrid of cooperative particle swarm optimization and cultural algorithm
for neural fuzzy networks and its prediction applications,” Systems, Man, and Cybernetics, Part C: Applications
and Reviews, IEEE Transactions on, vol. 39, no. 1, pp. 55–68, Jan. 2009.
[14] R. Chandra and M. Zhang, “Cooperative coevolution of Elman recurrent neural networks for chaotic time series
prediction,” Neurocomputing, vol. 186, pp. 116 – 123, 2012.
[15] R. Chandra, “Competition and collaboration in cooperative coevolution of Elman recurrent neural networks for
time-series prediction,” Neural Networks and Learning Systems, IEEE Transactions on, vol. 26, pp. 3123–3136,
2015.
[16] K. K. Teo, L. Wang, and Z. Lin, “Wavelet packet multi-layer perceptron for chaotic time series prediction: Effects
of weight initialization,” in Proceedings of the International Conference on Computational Science-Part II, ser.
ICCS ’01, 2001, pp. 310–317.
[17] A. Gholipour, B. N. Araabi, and C. Lucas, “Predicting chaotic time series using neural and neurofuzzy models: A
comparative study,” Neural Process. Lett., vol. 24, pp. 217–239, 2006.
[18] D. Ruta and B. Gabrys, “Neural network ensembles for time series prediction,” in 2007 International Joint Conference on Neural Networks, 2007, pp. 1204–1209.
[19] F. Takens, “Detecting strange attractors in turbulence,” in Dynamical Systems and Turbulence, Warwick 1980, ser.
Lecture Notes in Mathematics, 1981, pp. 366–381.
[20] R. Nand and R. Chandra, “Coevolutionary feature selection and reconstruction in neuro-evolution for time series
prediction,” in Artificial Life and Computational Intelligence - Second Australasian Conference, ACALCI 2016,
Canberra, ACT, Australia, February 2-5, 2016, Proceedings, 2016, pp. 285–297.
[21] S. Chand and R. Chandra, “Multi-objective cooperative coevolution of neural networks for time series prediction,”
in International Joint Conference on Neural Networks (IJCNN), Beijing, China, July 2014, pp. 190–197.
[22] A. Maus and J. Sprott, “Neural network method for determining embedding dimension of a time series,” Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 8, pp. 3294 – 3302, 2011.
[23] M. M. Ali, P. S. V. Jagadeesh, I. I. Lin, and J. Y. Hsu, “A neural network approach to estimate tropical cyclone heat
potential in the indian ocean,” IEEE Geoscience and Remote Sensing Letters, vol. 9, no. 6, pp. 1114–1117, Nov
2012.
[24] R. Deo and R. Chandra, “Identification of minimal timespan problem for recurrent neural networks with application
to cyclone wind-intensity prediction,” in International Joint Conference on Neural Networks (IJCNN), Vancouver,
Canada, July 2016, p. In Press.
[25] B. L. Happel and J. M. Murre, “Design and evolution of modular neural network architectures,” Neural
Networks, vol. 7, no. 67, pp. 985 – 1004, 1994, models of Neurodynamics and Behavior. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S0893608005801558
[26] J. Clune, J.-B. Mouret, and H. Lipson, “The evolutionary origins of modularity,” Proceedings of the Royal Society
of London B: Biological Sciences, vol. 280, no. 1755, 2013.
[27] K. O. Ellefsen, J.-B. Mouret, and J. Clune, “Neural modularity helps organisms evolve to learn new skills without
forgetting old skills,” PLoS Comput Biol, vol. 11, no. 4, pp. 1–24, 04 2015.
[28] J. Misra and I. Saha, “Artificial neural networks in hardware: A survey of two decades of progress,” Neurocomputing, vol. 74, no. 13, pp. 239 – 255, 2010, artificial Brains.
[29] J. N. Tsitsiklis and B. V. Roy, “Neuro-dynamic programming overview and a case study in optimal stopping,” in
Decision and Control, 1997., Proceedings of the 36th IEEE Conference on, vol. 2, Dec 1997, pp. 1181–1186 vol.2.
[30] X. Fang, D. Zheng, H. He, and Z. Ni, “Data-driven heuristic dynamic programming with virtual reality,” Neurocomputing, vol. 166, pp. 244 – 255, 2015.
[31] M. Potter and K. De Jong, “A cooperative coevolutionary approach to function optimization,” in Parallel Problem
Solving from Nature PPSN III, ser. Lecture Notes in Computer Science, Y. Davidor, H.-P. Schwefel, and R. Männer,
Eds. Springer Berlin Heidelberg, 1994, vol. 866, pp. 249–257.
[32] M. A. Potter and K. A. De Jong, “Cooperative coevolution: An architecture for evolving coadapted subcomponents,” Evol. Comput., vol. 8, pp. 1–29, 2000.
[33] S. B. Taieb and A. F. Atiya, “A bias and variance analysis for multistep-ahead time series forecasting,” 2015.
[34] L.-C. Chang, P.-A. Chen, and F.-J. Chang, “Reinforced two-step-ahead weight adjustment technique for online
training of recurrent neural networks,” Neural Networks and Learning Systems, IEEE Transactions on, vol. 23,
no. 8, pp. 1269–1278, 2012.
[35] R. Boné and M. Crucianu, “Multi-step-ahead prediction with neural networks: a review,” 9emes rencontres internationales: Approches Connexionnistes en Sciences, vol. 2, pp. 97–106, 2002.
[36] K. Chakraborty, K. Mehrotra, C. K. Mohan, and S. Ranka, “Forecasting the behavior of multivariate time series
using neural networks,” Neural Networks, vol. 5, no. 6, pp. 961 – 970, 1992.
[37] L. Wang, Z. Wang, and S. Liu, “An effective multivariate time series classification approach using echo state
network and adaptive differential evolution algorithm,” Expert Systems with Applications, vol. 43, pp. 237 – 249,
2016.
[38] S. Zhang, “Adaptive spectral estimation for nonstationary multivariate time series,” Computational Statistics and
Data Analysis, vol. 103, pp. 330 – 349, 2016.
[39] R. K. Ando and T. Zhang, “A framework for learning predictive structures from multiple tasks
and unlabeled data,” J. Mach. Learn. Res., vol. 6, pp. 1817–1853, Dec. 2005. [Online]. Available:
http://dl.acm.org/citation.cfm?id=1046920.1194905
[40] L. Jacob, J. philippe Vert, and F. R. Bach, “Clustered multi-task learning: A convex formulation,”
in Advances in Neural Information Processing Systems 21, D. Koller, D. Schuurmans, Y. Bengio, and
L. Bottou, Eds. Curran Associates, Inc., 2009, pp. 745–752. [Online]. Available: http://papers.nips.cc/paper/
3499-clustered-multi-task-learning-a-convex-formulation.pdf
[41] J. Chen, L. Tang, J. Liu, and J. Ye, “A convex formulation for learning shared structures from multiple tasks,” in
Proceedings of the 26th Annual International Conference on Machine Learning, ser. ICML ’09. New York, NY,
USA: ACM, 2009, pp. 137–144. [Online]. Available: http://doi.acm.org/10.1145/1553374.1553392
[42] J. Zhou, J. Chen, and J. Ye, “Clustered multi-task learning via alternating structure optimization,”
in Advances in Neural Information Processing Systems 24, J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett,
F. Pereira, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2011, pp. 702–710. [Online]. Available:
http://papers.nips.cc/paper/4292-clustered-multi-task-learning-via-alternating-structure-optimization.pdf
[43] Y. Zhang and D.-Y. Yeung, “Transfer metric learning by learning task relationships,” in Proceedings of the 16th
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’10. New York,
NY, USA: ACM, 2010, pp. 1199–1208. [Online]. Available: http://doi.acm.org/10.1145/1835804.1835954
[44] B. Bakker and T. Heskes, “Task clustering and gating for bayesian multitask learning,” J. Mach. Learn. Res.,
vol. 4, pp. 83–99, Dec. 2003. [Online]. Available: http://dx.doi.org/10.1162/153244304322765658
[45] S. Zhong, J. Pu, Y.-G. Jiang, R. Feng, and X. Xue, “Flexible multi-task learning with latent task grouping,”
Neurocomputing, vol. 189, pp. 179 – 188, 2016. [Online]. Available: http://www.sciencedirect.com/science/
article/pii/S0925231216000035
[46] X. Tang, Q. Miao, Y. Quan, J. Tang, and K. Deng, “Predicting individual retweet behavior by user similarity:
A multi-task learning approach,” Knowledge-Based Systems, vol. 89, pp. 681 – 688, 2015. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S0950705115003470
[47] A. Liu, Y. Lu, W. Nie, Y. Su, and Z. Yang, “Hep-2 cells classification via clustered multi-task learning,”
Neurocomputing, vol. 195, pp. 195 – 201, 2016, learning for Medical Imaging. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S0925231216001235
[48] X. Qin, X. Tan, and S. Chen, “Mixed bi-subject kinship verification via multi-view multi-task learning,” Neurocomputing, pp. –, 2016. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0925231216306658
[49] S. Zhang, Y. Sui, S. Zhao, X. Yu, and L. Zhang, “Multi-local-task learning with global regularization
for object tracking,” Pattern Recognition, vol. 48, no. 12, pp. 3881 – 3894, 2015. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S0031320315002265
[50] P. Angeline, G. Saunders, and J. Pollack, “An evolutionary algorithm that constructs recurrent neural networks,”
Neural Networks, IEEE Transactions on, vol. 5, no. 1, pp. 54 –65, jan 1994.
[51] D. E. Moriarty and R. Miikkulainen, “Forming neural networks through efficient and adaptive coevolution,”
Evolutionary Computation, vol. 5, no. 4, pp. 373–399, 1997. [Online]. Available: http://www.mitpressjournals.
org/doi/abs/10.1162/evco.1997.5.4.373
[52] K. O. Stanley and R. Miikkulainen, “Evolving neural networks through augmenting topologies,” Evolutionary
Computation, vol. 10, no. 2, pp. 99–127, 2002.
[53] F. Gomez, J. Schmidhuber, and R. Miikkulainen, “Accelerated neural evolution through cooperatively coevolved
synapses,” J. Mach. Learn. Res., vol. 9, pp. 937–965, 2008.
[54] V. Heidrich-Meisner and C. Igel, “Neuroevolution strategies for episodic reinforcement learning,” Journal of
Algorithms, vol. 64, no. 4, pp. 152 – 168, 2009, special Issue: Reinforcement Learning. [Online]. Available:
http://www.sciencedirect.com/science/article/B6WH3-4W7RY8J-3/2/22f7075bc25dab10a8ff3714e2fee303
[55] N. García-Pedrajas, C. Hervás-Martínez, and J. Muñoz-Pérez, “Multi-objective cooperative coevolution of artificial
neural networks (multi-objective cooperative networks),” Neural Networks, vol. 15, pp. 1259–1278, 2002.
[56] R. Chandra, M. Frean, and M. Zhang, “On the issue of separability for problem decomposition in cooperative
neuro-evolution,” Neurocomputing, vol. 87, pp. 33–40, 2012.
[57] ——, “An encoding scheme for cooperative coevolutionary neural networks,” in 23rd Australian Joint Conference
on Artificial Intelligence, ser. Lecture Notes in Artificial Intelligence. Adelaide, Australia: Springer-Verlag, 2010,
pp. 253–262.
[58] R. Chandra, M. Frean, M. Zhang, and C. W. Omlin, “Encoding subcomponents in cooperative co-evolutionary
recurrent neural networks,” Neurocomputing, vol. 74, no. 17, pp. 3223 – 3234, 2011.
[59] F. Gomez and R. Mikkulainen, “Incremental evolution of complex general behavior,” Adapt. Behav., vol. 5, no. 3-4,
pp. 317–342, 1997.
[60] F. J. Gomez, “Robust non-linear control through neuroevolution,” PhD Thesis, Department of Computer Science,
The University of Texas at Austin, Technical Report AI-TR-03-303, 2003.
[61] X. Wu, Y. Wang, J. Mao, Z. Du, and C. Li, “Multi-step prediction of time series with random missing
data,” Applied Mathematical Modelling, vol. 38, no. 14, pp. 3512 – 3522, 2014. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S0307904X13007658
[62] T. Koskela, M. Lehtokangas, J. Saarinen, and K. Kaski, “Time series prediction with multilayer perceptron, FIR
and Elman neural networks,” in In Proceedings of the World Congress on Neural Networks, San Diego, CA, USA,
1996, pp. 491–496.
[63] H. Sandya, P. Hemanth Kumar, and S. B. Patil, “Feature extraction, classification and forecasting of time series
signal using fuzzy and garch techniques,” in Research & Technology in the Coming Decades (CRT 2013), National
Conference on Challenges in. IET, 2013, pp. 1–7.
[64] L. Zhang, W.-D. Zhou, P.-C. Chang, J.-W. Yang, and F.-Z. Li, “Iterated time series prediction with multiple support
vector regression models,” Neurocomputing, vol. 99, pp. 411–422, 2013.
[65] S. Ben Taieb and R. Hyndman, “Recursive and direct multi-step forecasting: the best of both worlds,” Monash
University, Department of Econometrics and Business Statistics, Tech. Rep., 2012.
[66] A. Grigorievskiy, Y. Miche, A.-M. Ventelä, E. Séverin, and A. Lendasse, “Long-term time series prediction using
op-elm,” Neural Networks, vol. 51, pp. 50 – 56, 2014.
[67] Y. Yin and P. Shang, “Forecasting traffic time series with multivariate predicting method,” Applied Mathematics
and Computation, vol. 291, pp. 266 – 278, 2016. [Online]. Available: http://www.sciencedirect.com/science/
article/pii/S0096300316304477
[68] R. Chandra, K. Dayal, and N. Rollings, “Application of cooperative neuro-evolution of Elman recurrent networks
for a two-dimensional cyclone track prediction for the South Pacific region,” in International Joint Conference on
Neural Networks (IJCNN), Killarney, Ireland, July 2015, pp. 721–728.
[69] M. Chayama and Y. Hirata, “When univariate model-free time series prediction is better than multivariate,”
Physics Letters A, vol. 380, no. 3132, pp. 2359 – 2365, 2016. [Online]. Available: http://www.sciencedirect.com/
science/article/pii/S0375960116302195
[70] C. Frazier and K. Kockelman, “Chaos theory and transportation systems: Instructive example,” Transportation
Research Record: Journal of the Transportation Research Board, vol. 20, pp. 9–17, 2004.
[71] R. A. Calvo, H. D. Navone, and H. A. Ceccatto, Neural Network Analysis of Time Series: Applications to Climatic
Data. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000, pp. 7–16.
[72] Y. Wang, W. Zhang, and W. Fu, “Back propagation (BP) neural network for tropical cyclone track forecast,” in
Geoinformatics, 2011 19th International Conference on, June 2011, pp. 1–4.
[73] (2015) JTWC tropical cyclone best track data site.
[74] M. Mackey and L. Glass, “Oscillation and chaos in physiological control systems,” Science, vol. 197, no. 4300, pp.
287–289, 1977.
[75] E. Lorenz, “Deterministic non-periodic flows,” Journal of Atmospheric Science, vol. 20, pp. 267 – 285, 1963.
[76] M. Hénon, “A two-dimensional mapping with a strange attractor,” Communications in Mathematical Physics,
vol. 50, no. 1, pp. 69–77, 1976.
[77] O. Rössler, “An equation for continuous chaos,” Physics Letters A, vol. 57, no. 5, pp. 397–398, 1976.
[78] S. Sello, “Solar cycle forecasting: A nonlinear dynamics approach,” Astronomy and Astrophysics, vol. 377, pp. 312–
320, 2001.
[79] “NASDAQ Exchange Daily: 1970-2010 Open, Close, High, Low and Volume,” accessed: 02-02-2015. [Online].
Available: http://www.nasdaq.com/symbol/aciw/stock-chart
[80] “Santa Fe Competition Data.” [Online]. Available: http://www-psych.stanford.edu/~andreas/Time-Series/SantaFe.html, accessed: 30-06-2016.
[81] N. Hansen, S. D. Müller, and P. Koumoutsakos, “Reducing the time complexity of the derandomized evolution
strategy with covariance matrix adaptation (CMA-ES),” Evolutionary Computation, vol. 11, no. 1, pp. 1–18, 2003.
[Online]. Available: http://dx.doi.org/10.1162/106365603321828970
[82] X. Chen, Y. S. Ong, M. H. Lim, and K. C. Tan, “A multi-facet survey on memetic computation,” IEEE Transactions
on Evolutionary Computation, vol. 15, no. 5, pp. 591–607, 2011.
Using Power-Hardware-in-the-Loop Experiments
together with Co-simulation for the Holistic
Validation of Cyber-Physical Energy Systems
Van Hoa NGUYEN1, Yvon BESANGER1, Quoc Tuan TRAN2, Cédric BOUDINNET1, Tung Lam NGUYEN1,2,
Ron BRANDL3, Thomas I. STRASSER4
1 University Grenoble Alpes, G2Elab, F-38000 Grenoble, France; CNRS, G2Elab, F-38000 Grenoble, France
2 CEA-INES, 50 Avenue du Lac Léman, 73370 Le Bourget-du-lac, France
3 Fraunhofer Institute of Wind Energy and Energy System Technology, Kassel, Germany
4 AIT Austrian Institute of Technology, A-1210 Vienna, Austria
Email: [email protected]
Abstract—Composed of a large variety of technologies and applications with unprecedented complexity, the smart grid, as a cyber-physical energy system, needs careful investigation of the interactions between the various domains involved, especially the coupling between the power and information systems. In this paper, two modern ways of modeling and simulating complex cyber-physical energy systems are considered: co-simulation and Power-Hardware-in-the-Loop experiments. An analysis of these two approaches shows that a complementary and joint setup, combining realistic behaviors of hardware equipment under a variety of complex environments co-simulated by several simulators from different domains, creates a complete and high-performance environment for a holistic approach to smart grid validation and roll-out. In the scope of coupling these two techniques, major technical challenges are identified and advanced solutions are outlined.
Index Terms—Co-simulation, Cyber-Physical Energy System, Holistic Validation Approach, Power-Hardware-in-the-Loop, Smart Grid Systems Testing.
I. INTRODUCTION
A variety of changes and developments have been carried out in the past, presenting a portfolio of initiatives and perspectives on different smart grid solutions at the European as well as the international level [1]–[3]. In general, increased consumption of electricity is expected, with significant peak loading due to the electrification of transport [1]. The decarbonized scenario requires a high penetration of distributed and renewable energy resources, above levels of 15% to 20%, making it increasingly difficult to ensure the reliable and stable management of electricity systems [3]. The integration of Information and Communication Technology (ICT) into the electrical energy infrastructure, along with smart metering, is shifting from the demonstration phase to large-scale deployment. This will have a strong impact on system architectures and also raises cyber-security concerns. The electric power grid, integrated with communication systems and distributed energy resources (solar, heat, etc.), has thus become a cyber-physical energy system, i.e., a smart grid.
A general framework for smart grid validation and roll-out, which takes into account the mutual interactions and interdependencies of these domains, is required. One of the main barriers has been the lack of design and validation tools capable of analyzing power and communication systems in a holistic manner.
Extending power system simulation tools to the ICT domain, or vice versa, demands considerable effort and collaboration among experts of both areas, because the life cycles and technical specifications of electrical and communication equipment (in terms of reliability requirements, round-trip time, determinism, temporal consistency and hierarchy) are significantly different. By creating a so-called co-simulation environment for the integrated analysis of both domains, via ad-hoc connections or in a master/slave fashion, one can much better understand the impact of different communication solutions on the operation of power systems. Although simulation architectures may vary, a co-simulation framework generally allows the joint and simultaneous investigation of models developed with different tools, in which intermediate results are exchanged during the execution of the simulations; the sub-systems, however, are usually solved independently by their corresponding domain-specific simulators [4]. Co-simulation provides a complete view of both the network behavior and the physical energy system states, while the power system and the communication network are each simulated with the most suitable solver and the calculation load is shared.
Power-Hardware-in-the-Loop (PHIL) technology is increasingly used by industry and the research community for testing hardware components, devices or systems in realistic conditions and at scale, where part of the overall test setup is simulated in a Digital Real-Time Simulator (DRTS) [5]. The PHIL approach allows safe and repeatable testing of a device, including under faulty and extreme conditions, without damaging lab equipment, while also providing flexibility in configuring the test setup for transient and steady-state operation [5], [6]. In the European ERIGrid project, a survey on mandatory future improvements of PHIL technology was addressed to experts in 12 top European power and energy systems research institutions.¹ 63.6% of the experts stated that the power and software interfaces in PHIL technology should be improved to enable remote/distributed testing and integration with co-simulation.
¹ 81.8% have PHIL testing capacity, of which 54.5% are in a well-developed/experienced state.
In this paper, we explore the possibilities of integrating PHIL into co-simulation in order to enable a holistic evaluation of smart grid solutions (addressing mainly the power and ICT domains). By offering experiments very close to real situations, this approach provides an important tool during the design, implementation and roll-out of smart grid technologies, solutions and corresponding products. Beyond the added value for testing methods, this approach also enables international and multi-laboratory cooperation, which in turn has a positive impact on interoperability and on confidence in the applicability of the research under different grid conditions.
The main parts of the paper are structured as follows: the fundamental points of PHIL and co-simulation relevant for this investigation are outlined in Section II, where a general architecture is also proposed. Major technical obstacles towards a seamless integration of PHIL and co-simulation are discussed in Section III together with possible solutions. The paper is concluded with a discussion and an outlook on future research in Section IV.
II. INTEGRATION OF PHIL AND CO-SIMULATION
In this section, we outline the principal elements of the co-simulation and PHIL approaches, followed by a generic architecture for integrating them.
A. Co-simulation of Power and ICT Systems
Most of the work related to co-simulation in the field of smart grid solutions concerns the necessity of interconnecting power system and communication network simulations. As the traditional passive electric power grid (with unidirectional power flows) evolves towards an active power system (with bi-directional power flows), the existing energy infrastructure suffers from several drawbacks (fragmented architecture, lack of adequate bandwidth for two-way communication, inability to handle the increasing amount of data from smart devices, etc.) [7]. It is therefore crucial to take the communication network into account in the development of smart grids, in terms of efficient topology, latency and security.
Usually, communication networks used in the context of lab experiments have very low latency due to short geographical distances. This does not reflect real scenarios, where the long geographic distances between networked devices may cause unexpected delays and signal losses, resulting in unexpected and faulty control behavior. Therefore, the communication network is usually analyzed separately using dedicated software tools in order to study the effect of realistic latencies, packet losses or failures in the ICT/automation system [8]. Communication simulators also facilitate cyber-security related investigations, such as denial-of-service protection, confidentiality and integrity testing.
Co-simulation of the power and the communication system for an integrated analysis of both domains is, however, not an easy task, since the synchronization of both simulation packages during runtime is required. Moreover, existing simulation tools usually provide only limited coupling possibilities with external tools; an adequate and suitable Application Programming Interface (API) is often missing. On top of that, the fundamentally different concepts behind power and communication systems are also a challenge; detecting, linking, and handling related events in both domains can be a complex task (cf. Fig. 1)²:
- Power system simulation is usually continuous, with the possibility of detecting events associated with values crossing a certain threshold.
- Communication network simulation is based on discrete events whose occurrences are usually unevenly distributed in time. The corresponding domain-specific simulators provide an event scheduler that records the current system time and processes the events in an event list.
² The software mentioned in the figures is representative and serves only for illustration purposes, to aid reader comprehension. It is by no means a suggestion or recommendation from the authors for simulator selection.
Fig. 1. Time synchronization between power and communication simulation.
Once an event occurs, the associated information is passed
to the other domain where the other simulator will create the
reaction. A co-simulation framework then has to execute some
algorithms to ensure the synchronous and deterministic execution of both domains simultaneously. Scientists have come up
with various methods and techniques to deal with the synchronization issue. We can classify them into four main synchronization techniques:
- "Offline" co-simulation or "model exchange": The model of the power system is exported to C code, which is then compiled and imported into the network simulator for co-simulation. This is usually used as an alternative when direct co-simulation is hard to achieve.
- Master-slave: One simulator (usually the communication simulator, due to its discrete timeline) is given higher priority and coordinates the co-simulation steps.
- Point-based or time-stepped method: The individual simulators run their simulations independently but pause at fixed synchronization points where information is exchanged between the simulators. In this approach, a middleware is normally needed.
- Global event-driven: A global event list is created by merging the power system iteration steps with the communication network events according to their timestamps. Only one simulator is allowed to proceed at a time while the other halts; this structure limits the speed of the co-simulation.
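To make the time-stepped variant concrete, the sketch below couples a toy continuous power model with a discrete-event message queue at fixed synchronization points. The two "simulators" are stand-ins rather than real tools, and all names and values are illustrative.

```python
import heapq

def power_step(state, dt, load):
    """Toy continuous-time power model: first-order response to the load."""
    state["p"] += dt * (load - state["p"])
    return state

def run_cosimulation(t_end=1.0, sync_dt=0.1):
    power = {"p": 0.0}
    load_cmd = 0.0
    # discrete communication events: (arrival_time, new load set-point)
    events = [(0.25, 0.8), (0.55, 0.4)]
    heapq.heapify(events)
    t = 0.0
    while t < t_end:
        # deliver every message that arrives before the next sync point
        while events and events[0][0] <= t + sync_dt:
            _, load_cmd = heapq.heappop(events)
        power_step(power, sync_dt, load_cmd)   # advance the power "simulator"
        t += sync_dt
        print(f"t={t:.2f}s  load_cmd={load_cmd:.2f}  p={power['p']:.3f}")

run_cosimulation()
```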
A fairly general review of existing work on the co-simulation of power and communication systems can be found in [7]–[9]. Generally, two different structures of co-simulation can be distinguished:
- Ad-hoc co-simulation: Most of the work in the literature falls into this category (usually coupling one power system simulator directly to one communication network simulator).
- Co-simulation with a master algorithm: A master algorithm (e.g., HLA [10]) or a co-simulation framework (e.g., mosaik³, Ptolemy⁴) orchestrates the process. The master algorithm is responsible for synchronizing the different timelines of the involved simulators and for directing the information exchange among the simulators' inputs and outputs.
³ http://mosaik.offis.de/
⁴ http://ptolemy.eecs.berkeley.edu/
In order to improve the interoperability and reusability of the models developed in co-simulation frameworks, two major standards have been issued: the Functional Mockup Interface⁵ (FMI) and the High Level Architecture (HLA) framework [10]. While FMI is oriented towards model exchange and the coupling of simulators for co-simulation, HLA provides a kind of master algorithm to orchestrate the co-simulated processes (each referred to as a "federate"). While both standards serve co-simulation, FMI and HLA are not exactly at the same level of abstraction. HLA allows highly parallelized simulations of large-scale systems, but introduces additional time-synchronization issues [8].
⁵ https://www.fmi-standard.org/
B. Power-Hardware-in-the-Loop Experiments
The high share of Distributed Energy Resources (DER) in a decarbonized scenario also makes it technically difficult to preserve the security and reliability of network operation and to ensure fulfilment of the established voltage quality standards [11]. The Hardware-in-the-Loop (HIL) approach used in the power and energy systems domain is an efficient testing method for DER devices, helping manufacturers adapt their products to increasingly demanding requirements, and helping network operators and regulatory authorities establish new testing and certification procedures [6], [11]. In this approach, a real hardware setup for a domain (or part of a domain) is coupled with a simulation tool to allow testing of hardware or software components under realistic conditions. The execution of the simulator then requires strictly small simulation time steps, in accordance with the real-time constraint of the physical target. Since the HIL approach usually involves coupling different domains, it faces challenges quite similar to those of co-simulation. On the other hand, HIL offers the advantage of replacing error-prone or incomplete models with real-world counterparts, and the possibility of scalable testing under faulty and extreme conditions.
HIL in smart grids is generally classified into Controller Hardware-in-the-Loop (CHIL) and Power Hardware-in-the-Loop (PHIL) experiments [5], [6], [11]. CHIL involves testing a device (usually a controller) by exchanging signals between a DRTS and the device under test via its information ports; the interface most of the time consists only of Analogue-to-Digital and Digital-to-Analogue converters. In contrast, PHIL involves testing a device which absorbs or generates power (e.g., inverter-based DER). A power interface is therefore necessary (see Fig. 2). In this paper, the focus is mainly on the PHIL approach.
3 http://mosaik.offis.de/
4 http://ptolemy.eecs.berkeley.edu/
5 https://www.fmi-standard.org/
[Figure] Fig. 2. General architecture of a PHIL experiment: the digital real-time simulator (DRTS) is coupled to the hardware-under-test (HUT) through a power interface (PI) consisting of power amplification (D/A) and measurement feedback (A/D).
A general PHIL setup consists of three main elements: (i) the DRTS, (ii) the Hardware-under-Test (HuT), and (iii) the Power Interface (PI):
The DRTS computes the simulation model and offers I/O capabilities. As mentioned above, the simulation time step of the DRTS must be small enough to reproduce the behavior of the simulated system under dynamic conditions (Fig. 3). The simulator allows various test scenarios to be designed and performed with great flexibility.
The HuT can be any of a wide variety of DER devices and networks (e.g., inverter-based DER, electric vehicles, smart transformers); a whole microgrid can also be tested in a realistic environment.
The PI generally consists of a power amplifier and sensors that transmit measurements in feedback. It allows the virtual simulated system to interact with the HuT.
[Figure] Fig. 3. Time step restriction of a real-time simulation: each computation step n1, ..., n4 of the simulation clock must complete within the corresponding interval t0, ..., t4 of the real-time clock.
While offering great flexibility for testing, PHIL requires careful consideration of stability and accuracy [11]. Introducing a power interface into the test setup creates an additional closed loop, which can inject errors, time delay, and distortion that may cause severe instability or inaccurate results [12]. Generally, the power amplifier also affects the magnitude and phase of the signal being amplified. The inserted time delay, however, is the main obstacle that currently limits PHIL over a large geographical area, i.e., remote PHIL.
The two principal characteristics of a PI in PHIL experiments are the power amplification unit and the interface algorithm. There is a variety of power amplifier options with diverse performance characteristics. A review of power amplification units and their topologies can be found in [6]. A comparison of different types of amplifier, as well as recommendations for selection, can be found in [13]. In general, the following three types of power amplifier are common in PHIL experiments:
Switched-mode Power Amplification: Commonly used for small-scale PHIL simulation in the megawatt range. It is less expensive but exhibits a higher time delay and lower accuracy than the others.
Generator-type Power Amplification: Used extensively for interfacing balanced three-phase grid simulations in the low and medium power range.
Linear Power Amplification: The most suitable device for PHIL applications in the small to medium power range. The linear amplifier has very high dynamic performance, a short time delay, and fewer stability issues.
The configuration and impact of the power amplifier (I/O boundaries, galvanic isolation, short-circuit behavior, slew rate, etc.) must be addressed and evaluated against the specific requirements of each PHIL setup, as they strongly influence system stability, bandwidth, and the expected accuracy.
The interface algorithm between the DRTS and the hardware part in a PHIL experiment may be either voltage type
(for voltage amplifier) or current type (for current amplifier).
Three commonly employed interface algorithms are:
Ideal Transformer Method (ITM)
Partial circuit duplication method (PCD)
Damping impedance method (DIM)
A complete review of various interface algorithms and
recommendations for selection in PHIL experiments can be
found in [11] and [12].
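As an illustration of the simplest of these, a voltage-type Ideal Transformer Method exchange can be sketched as follows; the simulator, amplifier, and sensor objects are hypothetical placeholders, and a real setup must additionally handle the loop delay and stability issues discussed above.

def itm_voltage_interface_step(sim, amplifier, sensor, dt):
    """One exchange cycle of a voltage-type Ideal Transformer Method (ITM) interface."""
    v_ref = sim.coupling_voltage()       # voltage at the coupling point, computed by the DRTS model
    amplifier.apply_voltage(v_ref)       # amplified and imposed on the hardware-under-test
    i_meas = sensor.read_current()       # current drawn by the HuT, measured and fed back
    sim.inject_current_source(i_meas)    # represented in the simulation as a controlled current source
    sim.advance(dt)                      # advance the real-time simulation by one step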
While offering a wide range of possibilities for validation and testing of smart grid solutions, PHIL simulations are still restricted by some limitations, mostly due to the technical challenges related to the introduction of a power interface (e.g., simulation of nonlinearities, studies of high harmonics, stability and power level of the amplifier, accuracy of measurements in the transient phase, bandwidth limitation). The main difficulties in integrating PHIL into a holistic validation framework are, inter alia, signal latency, compensation of the loop delay, and time synchronization. The issue of time synchronization also limits the capability of PHIL to simulate complex systems. Due to the aforementioned obstacles, especially the stability issue, there is currently no interface or standard that enables interoperable PHIL applications. The harmonization and standardization of PHIL testing is therefore also a topic of common interest to the power and energy domain.
C. Integration of PHIL and Co-simulation
We investigate in this section the possibility of integrating
PHIL technology in a co-simulation framework in a holistic
approach for cyber-physical energy systems. Combining the
strong points of both approaches, we can study multi-domain
experiments with realistic behaviors from hardware equipment
under a variety of complex environments, co-simulated by
several simulators from different domains. It will enable complete consideration of the electrical grid interconnected with other domains and is an important contribution to a holistic approach for smart grid system validation and roll-out. A general architecture for this integration is proposed in Fig. 4.
[Figure] Fig. 4. General architecture for the integration of PHIL and co-simulation: a master algorithm couples the co-simulation and the DRTS through a software interface, while the DRTS is coupled to the HUT through the power interface (power amplification and measurement, with D/A and A/D conversion).
This architecture also makes it possible to pool the resources of multiple research infrastructures for collaborative experiments, and provides a way to include the valuable knowledge and expertise of researchers from different domains to study the cyber-physical energy system (i.e., the smart grid) in a holistic manner. This desired scenario, however, requires strong interoperability among the partners' platforms at various levels.
Most current works involving the integration of the HIL approach into a co-simulation framework use only a direct coupling with the DRTS [14] or a kind of CHIL setup [15]. Only recently have researchers investigated the possibility of extending PHIL beyond laboratory geographical boundaries, and mostly for latency-tolerant applications, i.e., monitoring [16]. These developments, along with deeper studies of the impact of latency in distributed DRTS [17], create a technical basis for integrating PHIL into a co-simulation framework.
III. TECHNICAL CHALLENGES AND PROPOSED SOLUTIONS
Running the holistic experiment correctly and seamlessly raises the following major technical challenges.
A. Data Flow and Concurrency
Within the process of integrating PHIL into co-simulation, it is crucial to ensure a synchronous data flow among the individual components, as well as the concurrency of the different simulators. In the general architecture of Fig. 4, three points should be considered:
1) Power Interface
Basically, the challenge here is to synchronize and compensate the loop delay in order to stabilize the system and increase the accuracy of the test. The first step should be the selection of an appropriate interface algorithm and corresponding power amplification, for which the recommendations from [11] and [12] should be considered.
Secondly, a time-delay compensation method can be applied, such as introducing phase shifting, applying a low-pass filter to the feedback signal [18], using extrapolation-based prediction to compensate for time delays [19], phase advance calibration [20], or multi-rate real-time simulation [21].
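As a concrete illustration of the extrapolation-based idea, the following minimal sketch predicts the delayed feedback signal one loop delay ahead using the two most recent samples; the two-point linear scheme and the sample format are assumptions made only for illustration.

def extrapolate_feedback(history, delay):
    """Predict the feedback signal 'delay' seconds ahead by linear extrapolation.
    history is a list of (time, value) pairs, most recent last."""
    (t1, y1), (t2, y2) = history[-2], history[-1]
    slope = (y2 - y1) / (t2 - t1)
    return y2 + slope * delay   # compensated value handed to the simulation side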
2) Co-simulation Interface
The issues at the co-simulation interface were already addressed in Section II.A. Besides synchronizing the time steps of the different simulators using the aforementioned techniques, the master algorithm or the co-simulation framework has to deal with harmonizing the continuous and discrete-event timelines at the power/communication interface.
3) Software Interface
On top of that, when integrating with real-time simulation and PHIL, it is necessary to ensure that the harmonized time steps are small enough to be coupled in real time. Therefore, the interfacing of real-time and offline simulations needs to be taken into consideration.
The principle of real-time simulation is presented in Fig. 3. Offline simulation, on the other hand, may have a simulation clock that runs at a different speed from the real-time clock. Two kinds of non-real-time simulation can be distinguished: (i) slow and (ii) fast, as depicted in Fig. 5.
[Figure] Fig. 5. Non-real-time simulation types: fast simulation (simulation clock running ahead of the real-time clock) and slow simulation (computation steps longer than the real-time steps).
The non-real-time simulation step, in case of coupling, has to be adapted to the real-time simulation step, either by delaying the step in the case of a fast simulation, or by increasing the computation speed in the case of a slow simulation. A minimal sketch of the fast case is given below.
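The following sketch paces a faster-than-real-time offline simulator against the wall clock by delaying each step; the simulator interface is a hypothetical placeholder, and the slow case would instead require simplifying the model or speeding up the computation.

import time

def run_paced(sim, dt_sim, t_end):
    """Pace a fast offline simulator to the real-time clock."""
    start = time.monotonic()
    t = 0.0
    while t < t_end:
        sim.step(dt_sim)                          # advance the offline model by one step
        t += dt_sim
        lead = t - (time.monotonic() - start)     # how far the simulation is ahead of wall-clock time
        if lead > 0:
            time.sleep(lead)                      # fast case: wait for real time to catch up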
B. Interoperability, Data Model, and System Topology
Besides the above technical challenges, when the experiments involve multiple domains or multiple laboratories, a certain degree of interoperability is required among the different actors as well as among the different elements of the experiments. A common information model, or at least a conversion interface, is necessary.
In a power system simulation, the exact and proper representation of the system's topology is critical, increasingly so with scale and complexity. The information model should be capable of representing, encapsulating, and exchanging static and dynamic data, as well as of communicating any modification in the topology and the current state of the network in real time and in a standardized way. It is suggested in [22] that IEC 61970/61968 (CIM/XML/RDF) and OPC UA could be combined to provide seamless and meaningful communication among applications and strong support for multi-platform experiments, capable of transmitting static and dynamic data of the system's topology in real time. This combination, however, does not cover the ICT domain, so an interface with the communication simulators must be provided as well.
C. Remote coupling PHIL/Co-simulation
In the context of coupling PHIL and co-simulation for multi-laboratory experiments, there are scenarios where the DRTS and the offline simulation are geographically separated. In that case, the latency may accumulate and surpass the limits of the time synchronization algorithm (see Fig. 6). Moreover, random packet loss due to network congestion outside of a local communication network (e.g., a LAN) may corrupt the information and cause the DRTS, as well as any connected hardware, to malfunction.
[Figure] Fig. 6. Delayed application of a command due to unexpected latency: a controller command U1 sent over a WAN connection to partners arrives after a delay ∆T = ∆T1 + ∆T2 and is applied later than intended.
Therefore, when coupling PHIL with co-simulation in geographically distributed experiments, the time synchronization algorithm must be adapted, and the PI compensation has to take the communication latency into account.
IV. DISCUSSION AND OUTLOOK
Two modern ways of modeling and simulating complex cyber-physical energy systems were presented. While co-simulation includes and combines knowledge from various domains in order to consider the system in a holistic manner, PHIL provides users with the advantage of replacing error-prone or incomplete models with real-world counterparts and the possibility of scalable testing under faulty and extreme conditions. An analysis of these two tools shows that it makes sense to combine the strong points of both approaches to study multi-domain experiments. The advantage is a complementary and joint setup that benefits from the realistic behavior of hardware equipment under a variety of complex environments, co-simulated by several simulators from different domains. The goal is to create a complete and high-performance environment to achieve a holistic approach for smart grid validation and roll-out.
Major technical challenges have been identified and some solutions were suggested. This contribution paves the way for further proposals in future developments of coupling PHIL and co-simulation.
ACKNOWLEDGEMENT
This work is supported by the European Community's Horizon 2020 Programme (H2020/2014-2020) under project “ERIGrid” (Grant Agreement No. 654113, www.erigrid.eu). The work of G2Elab and CEA-INES is also partially supported by the Carnot Institute “Energies du Futur” under the PPInterop II project (www.energiesdufutur.eu).

REFERENCES
[1] International Energy Agency, “Smart Grids in Distribution Network - Roadmap development and Implementation,” 2015.
[2] European Network of Transmission System Operators for Electricity, “Research and development roadmap 2013-2022,” 2013.
[3] European Commission, “Energy Roadmap 2050,” Smart Grid Task Force, Brussels, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, 2011.
[4] M. Faschang, F. Kupzog, E. Widl, S. Rohjans, and S. Lehnhoff, “Requirements for Real-Time Hardware Integration into Cyber-Physical Energy System Simulation,” presented at the Workshop on Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), Seattle, WA, USA, 2015.
[5] M. D. O. Faruque, T. Strasser, G. Lauss, V. Jalili-Marandi, P. Forsyth, and C. Dufour, “Real-Time Simulation Technologies for Power Systems Design, Testing, and Analysis,” IEEE Power Energy Technol. Syst. J., vol. 2, no. 2, pp. 63–73, Jun. 2015.
[6] G. Lauss, M. D. O. Faruque, K. Schoder, C. Dufour, A. Viehweider, and J. Langston, “Characteristics and Design of Power Hardware-in-the-Loop Simulations for Electrical Power Systems,” IEEE Trans. Ind. Electron., vol. 63, no. 1, pp. 406–417, Jan. 2016.
[7] K. Mets, J. Ojea, and C. Develder, “Combining Power and Communication Network Simulation for Cost-Effective Smart Grid Analysis,” IEEE Commun. Surv. Tutor., vol. 16, no. 3, pp. 1771–1796, Mar. 2014.
[8] S. C. Mueller et al., “Interfacing Power System and ICT Simulators: Challenges, State-of-the-Art, and Case Studies,” IEEE Trans. Smart Grid, vol. PP, no. 99, pp. 1–1, 2016.
[9] L. Weilin and Z. Xiaobin, “Simulation of the smart grid communications: Challenges, techniques, and future trends,” Comput. Electr. Eng., vol. 40, no. 1, pp. 270–288, Jan. 2014.
[10] IEEE Computer Society, “IEEE SA 1516-2010 - IEEE Standard for Modeling and Simulation (M&S) High Level Architecture (HLA) - Framework and Rules,” 2010.
[11] E. De Jong et al., “European White Book on Real-Time Power Hardware in the Loop Testing,” DERlab Report R-005.0, 2012.
[12] W. Ren, M. Steurer, and T. L. Baldwin, “Improve the Stability and the Accuracy of Power Hardware-in-the-Loop Simulation by Selecting Appropriate Interface Algorithms,” in 2007 IEEE/IAS Industrial & Commercial Power Systems Technical Conference, 2007, pp. 1–7.
[13] F. Lehfuss, G. Lauss, P. Kotsampopoulos, N. Hatziargyriou, P. Crolla, and A. Roscoe, “Comparison of multiple power amplification types for power Hardware-in-the-Loop applications,” in 2012 Complexity in Engineering (COMPENG) Proceedings, 2012, pp. 1–6.
[14] D. Bian, M. Kuzlu, M. Pipattanasomporn, S. Rahman, and Y. Wu, “Real-time co-simulation platform using OPAL-RT and OPNET for analyzing smart grid performance,” in 2015 IEEE Power & Energy Society General Meeting, 2015, pp. 1–5.
[15] S. Rotger-Griful, S. Chatzivasileiadis, R. H. Jacobsen, E. M. Stewart, J. M. Domingo, and M. Wetter, “Hardware-in-the-Loop co-simulation of distribution Grid for demand response,” in 2016 Power Systems Computation Conference (PSCC), 2016, pp. 1–7.
[16] B. Palmintier, B. Lundstrom, S. Chakraborty, T. Williams, K. Schneider, and D. Chassin, “A Power Hardware-in-the-Loop Platform With Remote Distribution Circuit Cosimulation,” IEEE Trans. Ind. Electron., vol. 62, no. 4, pp. 2236–2245, Apr. 2015.
[17] R. Liu, M. Mohanpurkar, M. Panwar, R. Hovsapian, A. Srivastava, and S. Suryanarayanan, “Geographically distributed real-time digital simulations using linear prediction,” Int. J. Electr. Power Energy Syst., vol. 84, pp. 308–317, Jan. 2017.
[18] P. Kotsampopoulos, V. Kleftakis, G. Messinis, and N. Hatziargyriou, “Design, development and operation of a PHIL environment for Distributed Energy Resources,” in IECON 2012 - 38th Annual Conference of the IEEE Industrial Electronics Society, 2012, pp. 4765–4770.
[19] W. Ren et al., “Interfacing Issues in Real-Time Digital Simulators,” IEEE Trans. Power Deliv., vol. 26, no. 2, pp. 1221–1230, Apr. 2011.
[20] A. J. Roscoe, A. Mackay, G. M. Burt, and J. R. McDonald, “Architecture of a Network-in-the-Loop Environment for Characterizing AC Power-System Behavior,” IEEE Trans. Ind. Electron., vol. 57, no. 4, pp. 1245–1253, Apr. 2010.
[21] A. Viehweider, G. Lauss, and L. Felix, “Stabilization of Power Hardware-in-the-Loop simulations of electric energy systems,” Simul. Model. Pract. Theory, vol. 19, no. 7, pp. 1699–1708, Aug. 2011.
[22] V. H. Nguyen, Q. T. Tran, and Y. Besanger, “SCADA as a service approach for interoperability of micro-grid platforms,” Sustain. Energy Grids Netw., vol. 8, pp. 26–36, Dec. 2016.
On imitation dynamics in potential population games
arXiv:1709.04748v1 [] 13 Sep 2017
Lorenzo Zino, Giacomo Como, and Fabio Fagnani
Abstract— Imitation dynamics for population games are studied and their asymptotic properties analyzed. In the considered
class of imitation dynamics —that encompass the replicator
equation as well as other models previously considered in
evolutionary biology— players have no global information
about the game structure, and all they know is their own current
utility and the one of fellow players contacted through pairwise
interactions. For potential population games, global asymptotic
stability of the set of Nash equilibria of the sub-game restricted
to the support of the initial population configuration is proved.
These results strengthen (from local to global asymptotic
stability) existing ones and generalize them to a broader class
of dynamics. The developed techniques highlight a certain
structure of the problem and suggest possible generalizations
from the fully mixed population case to imitation dynamics
whereby agents interact on complex communication networks.
I. I NTRODUCTION
Imitation dynamics provide a powerful game-theoretic
paradigm used to model the evolution of behaviors and
strategies in social, economic, and biological systems [1],
[2], [3]. The assumption behind these models is that individuals interact in a fully mixed population and have no global information about the structure of the game they are playing. Players just measure their own current utility and, by contacting other individuals, they become aware of the action currently played by them and of the associated utility. Then, in order to increase their utility, players may revise their action and adopt the one of the contacted fellow players.
We focus on the asymptotic behavior of such imitation dynamics. Available results in this area can be found in [4], [5], [6], [7]. In particular, [7] contains a study of local stability and instability for the different kinds of rest points of such dynamics. These results, however, deal only with local stability; therefore, one cannot conclude global asymptotic stability. Indeed, only for specific dynamics, such as the replicator equation, and for some specific classes of games has a global analysis been carried out [8], [9], [10], [11], [12].
This work contributes to expanding the state of the art on the analysis of the asymptotic behavior of imitation dynamics. For the important class of potential population games, we obtain a global convergence result, Theorem 6, that is stronger and more general than the results presented in the literature. Another novelty of this work consists in the definition of imitation dynamics, which is more general than the classical one [7].
The authors are with the “Lagrange” Department of Mathematical Sciences, Politecnico di Torino, 10129 Torino, Italy {lorenzo.zino, giacomo.como, fabio.fagnani}@polito.it. L. Zino is also with the “Peano” Department of Mathematics, Università di Torino, 10123 Torino, Italy [email protected]. G. Como is also with the Department of Automatic Control, Lund University, 22100 Lund, Sweden [email protected].
The paper is organized as follows. Section II is devoted
to the introduction of population games and to the definition
of the class of imitation dynamics. Both these concepts are
presented along with some explanatory examples. Thereafter,
the main results on the asymptotic behavior of the imitation
dynamics are presented and proved in Section III. Examples
of the use of these results will then be presented in Section
IV. Finally, Section V discusses some future research lines.
Before moving to the next section, let us define the
following notation: δ (i) denotes a vector of all zeros but a 1 in
the ith position. We denote the sets of reals and nonnegative
reals by R and R+ = {x ∈ R : x ≥ 0}, respectively.
II. P OPULATION G AMES AND I MITATION DYNAMICS
Throughout the paper we study imitation dynamics in
continuous population games. In such setting, a continuum
of players of total unitary mass choose actions from a finite
set A and the reward ri (x) of all those players playing
action i ∈ A is a function of the empirical distribution x
of the actions played across the population. Formally, let X = {x ∈ R^A_+ : Σ_{i∈A} xi = 1} be the unitary simplex over
the action set A and refer to vectors x ∈ X as configurations
of the population. If the population is in configuration x ∈ X ,
then a fraction xi of the players is playing action i, for i ∈ A.
Let r : X → RA be reward vector function whose entries
ri (x) represent the reward received by any player playing
action i ∈ A when the population is in configuration x ∈ X .
Throughout, we assume the reward vector function r(x) to
be Lipschitz-continuous over the configuration space X . Let
r*(x) := max_{i∈A} ri(x) ,      r(x) := Σ_{i∈A} xi ri(x)
stand for the maximum and, respectively, the average rewards
in a configuration x ∈ X . Then, the set of Nash equilibria
of the considered continuous population game is denoted by
N = {x ∈ X : xi > 0 ⇒ ri (x) = r∗ (x)} .
(1)
As is known, every continuous population game admits a
Nash equilibrium [7, Theorem 2.1.1], so N is never empty.
Example 1 (Linear reward population games): A class of
continuous population games is the one where the rewards
are linear functions of the configuration, i.e., when
r(x) = Rx ,
(2)
for some reward matrix R ∈ RA×A . Linear reward population games have a standard interpretation in terms of
symmetric 2-player games [1] played by each player against
the average population [13]. Population games with binary
action space A = {1, 2} and linear reward function (2) with
R = [ a  b ; c  d ]      (3)
can be grouped in the following three classes:
(i) for a > c and d > b, one has binary coordination games
[14], [15] (such as the stag hunt game [16]), where the
set of Nash equilibria N = {δ(1), δ(2), x̄} comprises the two pure configurations and the interior point x̄ with
x̄1 = 1 − x̄2 = (d − b)/(a − c + d − b) ;
(ii) for a < c and d < b, one has anti-coordination games
(including hawk-dove game [17], [18]), where the only
Nash equilibrium is the interior point x̄ as above;
(iii) for other cases of the parameters (e.g., in the Prisoner’s
dilemma [15]), there is one of the two actions i that is
(possibly weakly) dominating the other one j, and the
pure configuration δ (i) is the only Nash equilibrium.
Larger action spaces do not admit such a simple classification (see the sketch below for the binary case).
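A small helper, shown here only as an illustration of cases (i)-(iii) above, classifies a 2x2 linear-reward game from the entries of R in (3):

def classify_binary_game(a, b, c, d):
    """Classify the binary linear-reward game with reward matrix R = [[a, b], [c, d]]."""
    if a > c and d > b:
        x1_bar = (d - b) / (a - c + d - b)       # interior equilibrium of the coordination game
        return "coordination", x1_bar
    if a < c and d < b:
        x1_bar = (d - b) / (a - c + d - b)       # unique (interior) Nash equilibrium
        return "anti-coordination", x1_bar
    return "dominated action", None               # one action (weakly) dominates the other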
In this paper, we are concerned with imitation dynamics
arising when players in the population modify their actions
in response to pairwise interactions [3]. We assume that the
population is fully mixed so that any pairs of players in
the population meet with the same frequency [19]. Upon a
possible renormalization, the overall frequency of pairwise
interactions between agents playing actions i and j can
then be assumed equal to the product xi xj of the fraction
of players currently playing actions i and j, respectively.
When two players meet, they communicate to each other the
action they are playing and the rewards they are respectively
getting. Then, depending on the difference between the two
rewards and possibly other factors, each interacting player
either keeps playing the same action he/she is playing, or
updates his/her action to the one of the other player.
Definition 1 (Imitation dynamics): A
(deterministic,
continuous-time) imitation dynamics for a continuous
population game with action set A and reward function
vector r(x) is the system of ordinary differential equations
ẋi = xi Σ_{j∈A} xj (fji(x) − fij(x)) ,      i ∈ A ,      (4)
where, for i, j ∈ A, the function fij (x) is Lipschitz-continuous on the configuration space X and such that
sgn (fij (x) − fji (x)) = sgn (rj (x) − ri (x)) , x ∈ X . (5)
Equivalently, the imitation dynamics (4) may be rewritten as
ẋ = diag(x)(F^T(x) − F(x)) x ,      (6)
where F (x) = (fij (x))i,j is a matrix-valued function on X .
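For readers who prefer to experiment numerically, a minimal forward-Euler sketch of (4)/(6) is given below; the step size and the renormalization onto the simplex are illustrative choices, not part of the model.

import numpy as np

def imitation_step(x, r, F, dt):
    """One forward-Euler step of the imitation dynamics (4):
    dx_i/dt = x_i * sum_j x_j (f_ji(x) - f_ij(x))."""
    f = F(x, r(x))                      # matrix with entries f_ij(x)
    x_new = x + dt * x * ((f.T - f) @ x)
    x_new = np.clip(x_new, 0.0, None)   # guard against tiny negative round-off
    return x_new / x_new.sum()          # stay on the unit simplex

For instance, the replicator equation of Example 2 below corresponds to choosing F(x, r) = 0.5 * (np.outer(np.ones(len(r)), r) - np.outer(r, np.ones(len(r)))).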
Observe that, in order to satisfy (5), the functions fij s
should clearly depend on the difference between the rewards
ri (x) − rj (x) in such a way that fij (x) = fji (x), for every
configuration x such that ri (x) = rj (x). In principle, these
functions can possibly depend on the whole configuration x
in a non-trivial way. However, while our results hold true
in such greater generality, we are mostly concerned with the
case where the functions fij (x) only depend on the rewards’
differences ri (x) − rj (x), possibly in a different way for
each different pair of actions i, j ∈ A. In fact, in this case,
the considered imitation dynamics model makes minimal
assumptions on the amount of information available to the
players, i.e., they only know their own current action, the
one of the other player met, and difference of their respective
current rewards. In particular, players need not to know any
other information about the game they are engaged in, such
as, e.g., the current configuration of the population, the form
of the reward functions, or even the whole action space.
Remark 1: This class of imitation dynamics generalizes the ones considered in many papers [7], which satisfy
ri (x) ≥ rj (x) ⇐⇒ fki (x) − fik (x) ≥ fkj (x) − fjk (x) ,
(7)
for every i, j, k ∈ A. In fact, it is straightforward to check
that (7) is in general more restrictive than (5), that is obtained
from (7) in the case k = j. Notably, (7) induces an ordering
of the actions such that, when comparing two of them, the
one with the larger reward should always result the more
appealing to any third party, quite a restrictive condition that
is not required in our more general formulation. Example 3,
which follows, is a concrete example of a realistic situation
in which our relaxed condition (5) holds and (7) does not.
We now present two examples of imitation dynamics.
Example 2 (Replicator Dynamics): In the case when
fij(x) = (1/2)(rj(x) − ri(x)) ,      i, j ∈ A ,
or, equivalently, F(x) = (1/2)(1 r^T(x) − r(x) 1^T), the imitation dynamics (4) reduces to the replicator equation
ẋi = xi (ri(x) − r(x)) ,      i ∈ A .      (8)
Hence, imitation dynamics encompass and generalize the
replicator equation, for which an extensive analysis has been
developed, see, e.g., [1], [13], [20], [21].
Example 3 (Stochastic Imitation Dynamics): Let
fij(x) = 1/2 + (1/π) arctan(Kij (rj(x) − ri(x))) ,      (9)
for i, j ∈ A, where Ki,j > 0. Such [0, 1]-valued functions
fij (x) have an immediate interpretation as probabilities that
players playing action i switch to action j when observing
others playing such action j. Therefore these dynamics
might be used when modeling mean-field limits of stochastic
imitation dynamics [22]. If the positive constants Kij are
not all the same, the associated imitation dynamics may not
satisfy (7), but still fit in our framework.
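A direct way to plug (9) into the numerical sketch given after Definition 1 is the following builder; the gain matrix K is an illustrative assumption (e.g., entries drawn independently and uniformly from [0, 1], as in Section IV).

import numpy as np

def arctan_switch_matrix(K):
    """Return F(x, r) with entries f_ij = 1/2 + (1/pi) * arctan(K_ij * (r_j - r_i)), as in (9)."""
    def F(x, r):
        diff = r[None, :] - r[:, None]     # entry (i, j) equals r_j - r_i
        return 0.5 + np.arctan(K * diff) / np.pi
    return F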
We now move on to discussing some general properties of
imitation dynamics in continuous population games. To this
aim, we first introduce some more notions related to Nash
equilibria. For a nonempty subset of actions S ⊆ A, let
XS = {x ∈ X : xi = 0, ∀ i ∈ A \ S}
be the subset of configurations supported on S and let
NS = {x ∈ XS : xi > 0 ⇒ ri (x) ≥ rj (x), ∀ j ∈ S} (10)
be the set of Nash equilibria of the population game restricted
to S. Clearly, XA = X and NA = N . Finally, we define the
set of critical configurations as
Z = ∪_{∅≠S⊆A} NS .      (11)
Observe that Z includes the set of Nash equilibria N and
can equivalently be characterized as
Z = {x ∈ X : xi > 0 ⇒ ri (x) = r(x)} .
(12)
Remark 2: The set Z always includes the vertices δ (i) , i ∈
A, of the simplex X . In fact, in the case when |A| = 2, the
set of critical configurations consists just of the two vertices
of X and the possible interior Nash equilibria of the game.
For |A| ≥ 3, the set of critical configurations Z includes,
besides vertices of X and Nash equilibria of the game, all
Nash equilibria of the sub-games obtained by restricting the
action set to a non-trivial action subset S ⊆ A.
Some basic properties of the imitation dynamics (4) are
gathered in the following Lemma. These results are already
proven in [7] under the more restrictive condition on the
dynamics. The proof in our more general setting is included in the Appendix.
Lemma 2: For any imitation dynamics (4) satisfying (5):
(i) if x(0) ∈ XS for some nonempty subset of actions S ⊆
A, then x(t) ∈ XS for all t ≥ 0;
(ii) if xi (0) > 0 for some i ∈ A, then xi (t) > 0 for t ≥ 0;
(iii) every restricted Nash equilibrium x ∈ Z is a rest point.
III. A SYMPTOTIC B EHAVIOR OF I MITATION DYNAMICS
FOR P OTENTIAL P OPULATION G AMES
The main results of this work deal with the global asymptotic behavior of the imitation dynamics (4) for potential
population games. Therefore, before presenting these results,
we briefly introduce the notion of potential game [23] in the
context of continuous population games.
Definition 3: A population game with action set A and
Lipschitz-continuous reward function vector r : X → RA
is a potential population game if there exists a potential
function Φ : X → R that is continuous on X , continuously
differentiable in its interior, with gradient ∇Φ(x) extendable
by continuity to the boundary of X , and such that
rj(x) − ri(x) = ∂Φ(x)/∂xj − ∂Φ(x)/∂xi ,      (13)
for i, j ∈ A, and almost every x ∈ X .
The asymptotic analysis of imitative dynamics for potential population games begins by proving that the potential function Φ(x) is never decreasing along trajectories of the imitation dynamics (4) and that it is strictly increasing whenever x does not belong to the set Z of critical configurations. This result, already known for more specific classes of dynamics [7], is thus generalized in the following result, whose proof is reported in the Appendix.
Lemma 4: Let r : X → R^A be the reward function vector of a potential population game with potential Φ : X → R. Then, every imitation dynamics (4) satisfying (5) is such that
Φ̇(x) = ∇Φ(x) · ẋ ≥ 0 ,      for all x ∈ X ,      (14)
with equality if and only if x ∈ Z, as defined in (11).
An intuitive consequence of Lemma 4 and point (iii) of
Lemma 2 is that every imitation dynamics in a potential
continuous population game has ω-limit set coinciding with
the set of critical configurations Z. As we shall see, our
main result, beyond formally proving this intuitive statement,
consists in a significant refinement of it.
Observe that, from (1) and (12), the set B := Z \ N of
critical configurations that are not Nash equilibria satisfies
B = {x ∈ X : xi > 0 ⇒ ri (x) = r(x) < r∗ (x)} .
(15)
In other terms, critical configurations x that are not Nash
equilibria have the property that all actions played by a
non-zero fraction of players in the population (i.e., those
i ∈ A such that xi > 0) give the same average reward
(ri (x) = r(x)), that is strictly less than the maximum
reward (r(x) < r∗ (x)). This implies that r∗ (x) is necessarily
achieved by some action that is not adopted by anyone, i.e.,
r(x) < r∗ (x) = rj (x) for some j ∈ A such that xj = 0.
Notice that, in particular, B is a subset of the boundary of
X , since critical configurations that are not Nash equilibria
necessarily have at least one zero entry. The following
result states that, in potential population games, every such
configuration x ∈ B has an interior neighborhood in X where
the potential is strictly larger than in x. This result is the main
novelty of this work, being the key Lemma to prove global
asymptotic stability results for the imitation dynamics.
Lemma 5: Let r : X → RA be the reward function vector
of a potential population game with potential Φ : X → R.
Let B = Z \ N be the set of critical configurations that are
not Nash equilibria. Then, for every x̄ ∈ B, there exists some ε > 0 such that Φ(x) > Φ(x̄) for all x ∈ X such that
||x − x̄|| < ε      and      Σ_{i∈A : ri(x̄)=r*(x̄)} xi > 0 .      (16)
Proof: For x̄ ∈ B, let I := {i ∈ A : ri(x̄) = r*(x̄)} and J = A \ I = {i ∈ A : ri(x̄) < r*(x̄)}. From (13),
m := min_{i∈I} ∂Φ(x̄)/∂xi − max_{j∈J} ∂Φ(x̄)/∂xj = r*(x̄) − max_{j∈J} rj(x̄) > 0 .
By continuity of ∇Φ(x), there exists ε > 0 such that
min_{i∈I} ∂Φ(x)/∂xi − max_{j∈J} ∂Φ(x)/∂xj ≥ m/2 ,      (17)
for every x ∈ X such that ||x − x̄|| < ε. Then, fix any x ∈ X satisfying (16), let z = x − x̄, and observe that
a := Σ_{i∈I} zi = − Σ_{j∈J} zj > 0 .      (18)
It then follows from (17) and (18) that, for every point y(t) = x̄ + tz , t ∈ [0, 1] , along the segment joining x̄ and x, one has that
∇Φ(y(t)) · z = Σ_{i∈I} zi ∂Φ(y(t))/∂xi − Σ_{j∈J} zj ∂Φ(y(t))/∂xj
≥ a min_{i∈I} ∂Φ(y(t))/∂xi − a max_{j∈J} ∂Φ(y(t))/∂xj
≥ am/2 ,
so that
Φ(x) = Φ(x̄) + ∫_0^1 (∇Φ(y(t)) · z) dt ≥ Φ(x̄) + am/2 > Φ(x̄) .
In order to understand the novelty of this result, consider that in [7], where the stability of points in Z is analyzed for a subclass of imitation dynamics, it is proven that all the points in B are unstable, whereas a subset of the points in N, coinciding with the local maximizers of Φ, is stable. However, these two results deal with local stability
and their mere combination is not sufficient to prove global
asymptotic stability. On the contrary, our characterization of
the instability of the rest points in B through the analysis
of the value of the potential function in their neighborhood,
paves the way for our main result, which characterizes the
global asymptotic behavior of solutions of a broad class of
imitation dynamics in potential population games.
Theorem 6: Consider a potential population game with
action set A and configuration space X . Let (x(t))t≥0 be
a solution of some imitation dynamics (4) satisfying (5) and
S = {i ∈ A : xi (0) > 0}
be the support of the initial configuration. Then,
lim_{t→+∞} dist(x(t), NS) = 0 .
In particular, if xi (0) > 0 for every i ∈ A, then x(t)
converges to the set N of Nash equilibria.
Proof: By Lemma 2 part (i) there is no loss of generality in assuming that S = A, i.e., xi (0) > 0 for every i ∈ A.
Let r(x) be the reward vector function of the considered
population game and let Φ(x) be a potential. Observe that
Φ(x) is continuous over the compact configuration space X ,
so that
∆ = max_{x∈X} Φ(x) − min_{x∈X} Φ(x) < +∞ .
Then, for every t ≥ 0 we have that
∫_0^t Φ̇(x(s)) ds = Φ(x(t)) − Φ(x(0)) ≤ ∆ < +∞ .
Since Φ̇(x) ≥ 0 for every x ∈ X by Lemma 4, the above
implies that
lim_{t→+∞} Φ̇(x(t)) = 0 .
Then, continuity of Φ̇(x) and the second part of Lemma 4
imply that x(t) converges to the set Z, as t grows.
We are now left with proving that every solution x(t) of
an imitation dynamics with xi (0) > 0 for every i ∈ A
approaches the subset N ⊆ Z of Nash equilibria. By
contradiction, let us assume that ∃ ε > 0 such that ∀ t∗ > 0
there exists some t ≥ t∗ such that dist(x(t), N ) ≥ ε.
Since x(t) approaches Z as t grows large, this implies
that for every ε > 0 and every large enough t∗ there
exists t ≥ t∗ such that dist(x(t), B) < ε. It follows that
there exists a sequence of times t1 ≤ t2 ≤ ... such that dist(x(tn), B) → 0 as n → +∞. Since the configuration space X is compact, we may extract a converging subsequence x(tnk) with limit x̄ ∈ B. Now, observe that Lemma 2 part (ii) implies that xi(t) > 0 for every action i ∈ A. Then, Lemma 5 implies that there exists k0 ≥ 1 such that
Φ(x(tnk)) > Φ(x̄) ,      ∀ k ≥ k0 .
Hence, the fact that Φ(x(t)) is never decreasing, as stated in Lemma 4, would lead to
Φ(x̄) = lim_{k→+∞} Φ(x(tnk)) ≥ Φ(x(tnk0)) > Φ(x̄) ,
a contradiction. Hence, lim_{t→+∞} dist(x(t), N) = 0.
IV. E XAMPLES
In this section we present some applications of the results
from Section III. For the imitation dynamics (9) from Example 3 (with all Kij sampled independently and uniformly from [0, 1]), we compare the analytical results obtained
from Theorem 6 with some numerical simulations of the
dynamics, in order to corroborate our theoretical results.
A. Linear reward population games
We present some examples of binary games as in Example
1 and of pure coordination games.
Example 4 (Binary linear reward games): First of all, it
is straightforward to prove that all binary games are potential
games. In fact, from a 2 × 2 reward matrix R, as defined in
(3), we can immediately obtain a potential function, that is
Φ(x) = (1/2)[(a − c) x1² + (d − b) x2²] .      (19)
Notice that it is not true that a generic linear reward game is potential; for example, a ternary game such as Rock-Scissors-Paper is known not to be a potential game [7].
In the following, three short examples of binary linear
potential population games will be presented. Let us consider
the following three reward matrices:
R(1) = [ 10  0 ; 8  7 ] ,    R(2) = [ 0  7 ; 2  6 ] ,    R(3) = [ 2  0 ; 3  1 ] .      (20)
Matrix R(1) leads to a coordination game. Trajectories converge to one of the three Nash equilibria: the global minimum
of the potential function, attained in an interior point x̄, and
the two vertices of the simplex. Moreover, from Lemma 4,
we deduce that all trajectories with x1 (0) < x̄1 converge to
(0, 1), all trajectories with x1 (0) > x̄1 converge to (1, 0),
whereas x̄ is an unstable equilibrium.
[Figure] Fig. 1: Potentials Φ(x) (plotted against x1) of the games from Example 4: (a) coordination, (b) anti-coordination, (c) dominated action. Crosses are Nash equilibria, circles are Nash equilibria for restricted games.
[Figure] Fig. 2: Trajectories x1(t) of the imitation dynamics (9) for the games from Example 4: (a) coordination, (b) anti-coordination, (c) dominated action. Solid lines are asymptotically stable equilibria, dotted lines are unstable.
Matrix R(2) leads to an anti-coordination game, where
the Nash equilibrium x̄ is unique and it is an interior point.
Therefore, if the support of the initial condition is A, then
Theorem 6 guarantees convergence to it.
Matrix R(3) leads to a game with a dominated action. In
this case, the potential is a monotone increasing function
in x2. Therefore its maximum is attained at δ(2), which is the only Nash equilibrium. Theorem 6 guarantees that all trajectories with x2(0) > 0 converge to it. Fig. 1 shows the plot of
the potential functions of the three games and Fig. 2 shows
examples of trajectories of the imitation dynamics (9).
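The following short numerical sketch (forward Euler with an illustrative scalar gain K = 1 in place of random Kij, and an illustrative step size) reproduces the qualitative behavior described above for the coordination game R(1): initial conditions on opposite sides of x̄1 = 7/9 approach different pure configurations.

import numpy as np

def simulate_binary_game(R, x0, K=1.0, dt=0.01, steps=20000):
    """Simulate the imitation dynamics (9) for a 2x2 linear reward game r(x) = Rx."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        r = R @ x
        diff = r[None, :] - r[:, None]            # entry (i, j) equals r_j - r_i
        f = 0.5 + np.arctan(K * diff) / np.pi     # switching propensities from (9)
        x = x + dt * x * ((f.T - f) @ x)
        x = np.clip(x, 0.0, None)
        x /= x.sum()                              # stay on the simplex
    return x

R1 = np.array([[10.0, 0.0], [8.0, 7.0]])
print(simulate_binary_game(R1, [0.6, 0.4]))   # expected to approach (0, 1)
print(simulate_binary_game(R1, [0.9, 0.1]))   # expected to approach (1, 0)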
Example 5 (Pure coordination games): Another class of
potential games are the linear reward pure coordination
games [14], in which the reward matrix R is a diagonal
positive (entry-wise) matrix. It is straightforward to check
that a potential function is given by
Φ(x) = (1/2) Σ_{i=1}^{m} Rii xi² .      (21)
Being Φ(x) convex, its minimum is attained in an interior
point x̄, and all the vertices of the simplex are local maxima
of the potential function. All the other critical points are
minima of the potential subject to belong to the boundaries.
All of these points are Nash equilibria. Therefore Theorem
6 guarantees asymptotic convergence to them. In the ternary
case |A| = 3, a complete analysis can be carried out. Without
any loss of generality, we can set R11 = 1 and name R22 = b and R33 = c. Then, analyzing Φ(x) = (1/2)(x1² + b x2² + c x3²),
we explicitly compute the seven Nash equilibria: δ (1) , δ (2) ,
and δ (3) , the global minimum of the potential
x̄ = ( bc/(b + c + bc) , c/(b + c + bc) , b/(b + c + bc) ) ,
and the three minima on the boundary of X:
x̄(1) = (0, c, b)/(b + c) ,   x̄(2) = (c, 0, 1)/(c + 1) ,   x̄(3) = (b, 1, 0)/(b + 1) .
Through Lemma 4 we conclude that x̄ is an unstable node, the three points on the boundaries x̄(1), x̄(2), and x̄(3) are saddle points, whose stable manifolds actually divide the basins of attraction of the three asymptotically stable nodes δ(1), δ(2), δ(3). Fig. 3 shows two examples of potential and velocity plots of the imitation dynamics (9) for these games.
[Figure] Fig. 3: Potential of the pure coordination games from Example 5 ((a) b = 2, c = 3; (b) b = 0.2, c = 5) and velocity plot of the imitation dynamics (9) from Example 3 for them. The unstable nodes and the saddle points are denoted by white circles.

B. Congestion games
Another important class of potential games are congestion games [24], [23]. Let A = {1, . . . , l} be a set of resources and A ∈ {0, 1}^{l×m} be the adjacency matrix of a bipartite graph connecting agents with resources. Let us introduce l continuous functions, collected in a vector ψ(·) = (ψ1(·), . . . , ψl(·)), where the generic ψk(y) is the reward for agents that use resource k, when the resource is used by a fraction y of agents. The reward vector functions for these games are simply r(x) = A^T ψ(Ax), and a straightforward computation shows that congestion games are always potential games, with
Φ(x) = Σ_{k=1}^{l} Ψk((Ax)k) ,      (22)
where Ψk is an anti-derivative of ψk .
Often, the functions ψk represent a cost for the use of the resources, so they are monotone decreasing functions. In this case, the potential function Φ(x) is concave, possessing a global maximum x̄, which is the only Nash equilibrium of the game. Depending on A, x̄ can be an interior point, or it can belong to the boundary of the simplex. As for the other critical points, the δ(i) are minima of the potential, whereas local maxima, which are Nash equilibria for restricted games, are present on the boundary. Theorem 6 therefore guarantees that trajectories with x(0) > 0 (entry-wise) converge to x̄, which is an asymptotically stable node. The Nash equilibria of the restricted games are saddle points (i.e., stable on the respective boundaries), and the vertices that are not in one of the previous sets are unstable nodes.
Fig. 4 shows the velocity plots of the imitation dynamics for the following examples of congestion games.
[Figure] Fig. 4: Potentials of the congestion games from Example 6 (with m = 3, c1 = 1, c2 = 2, and c3 = 3) and from Example 7 (cases (24) and (25)), and velocity plots of the imitation dynamics (9) from Example 3. The unstable nodes are denoted by white circles, saddle points by gray circles, and black circles denote the only asymptotically stable equilibrium.
Example 6 (Exponential costs game): Let A = I and let the cost be ψi(xi) = exp(−ci xi), for ci > 0. Then, the maximum of the potential Φ(x) = − Σ_{i=1}^{m} (1/ci) exp(−ci xi) is achieved in an interior point x̄, which is the unique Nash equilibrium of the game.
Example 7 (Dominated strategy): We construct now two
examples of congestion games in which the Nash equilibrium
of the dynamics is on the boundary. Let l = 2, ψi (y) = −y,
and let us consider the following two adjacency matrices:
A1 = [ 1 0 1 ; 0 1 1 ] ,    A2 = [ 1 1 1 ; 0 1 1 ] .      (23)
The potential functions are, respectively:
Φ1(x) = −(1/2)[(x1 + x3)² + (x2 + x3)²] ,      (24)
Φ2(x) = −(1/2)(x2 + x3)² .      (25)
When A1 is considered, the Nash equilibrium of the dynamics is x̄ = (1/2, 1/2, 0), whereas A2 has its Nash equilibrium in the vertex δ(1).
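A quick numerical check of the A1 case (illustrative step size and unit gains Kij = 1; the helper repeats the forward-Euler scheme sketched earlier so as to be self-contained):

import numpy as np

def congestion_rewards(A, x):
    """Congestion-game rewards r(x) = A^T psi(Ax) with psi_k(y) = -y."""
    return A.T @ (-(A @ x))

def simulate_congestion(A, x0, dt=0.01, steps=50000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        r = congestion_rewards(A, x)
        diff = r[None, :] - r[:, None]          # entry (i, j) equals r_j - r_i
        f = 0.5 + np.arctan(diff) / np.pi       # imitation dynamics (9) with K_ij = 1
        x = x + dt * x * ((f.T - f) @ x)
        x = np.clip(x, 0.0, None)
        x /= x.sum()
    return x

A1 = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(simulate_congestion(A1, [0.2, 0.3, 0.5]))   # expected to approach (0.5, 0.5, 0)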
V. C ONCLUSION AND F URTHER W ORK
In this work we analyzed the asymptotic behavior of
imitation dynamics in potential population games, proving
convergence of the dynamics to the set of Nash equilibria of
the sub-game restricted to the set of actions used in the initial
configuration of the population. These results strengthen the state of the art, both ensuring global stability of the Nash equilibria and generalizing the result to a class of dynamics that encompasses the replicator dynamics and the class of imitation dynamics considered in many previous works.
The main research lines arising from this work point in two directions. On the one hand, taking advantage of the techniques developed in this work, our analysis should be extended to the case in which the population is not fully mixed and agents interact on a non-complete communication network, similarly to what has been done for other learning mechanisms, such as the replicator and logit choice [25], [12], or to cases in which the learning process interacts with the dynamics of a physical system [26]. On the other hand, stochasticity in the revision of the agents' opinions should be included in the imitation dynamics. This leads to modeling imitation dynamics with Markovian stochastic processes, paving the way for the study of several interesting open problems concerning the relationships between the asymptotic behavior of the new stochastic process and that of the deterministic process analyzed in this work.
R EFERENCES
[1] J. W. Weibull, Evolutionary game theory. MIT Press, 1995.
[2] J. Björnerstedt and J. W. Weibull, “Nash equilibrium and evolution by
imitation,” in The Rational Foundations of Economic Behavior, 1996,
pp. 155–171.
[3] J. Hofbauer and K. Sigmund, “Evolutionary game dynamics,” Bulletin
(New Series) of the American Mathematical Society, vol. 40, no. 4,
pp. 479–519, 2003.
[4] J. H. Nachbar, ““Evolutionary” selection dynamics in games: Convergence and limit properties,” International Journal of Game Theory,
vol. 19, no. 1, pp. 59–89, mar 1990.
[5] J. Hofbauer, “From Nash and Brown to Maynard Smith: Equilibria,
Dynamics and ESS,” Selection, vol. 1, no. 1, pp. 81–88, 2000.
[6] W. H. Sandholm, “Potential Games with Continuous Player Sets,”
Journal of Economic Theory, vol. 97, no. 1, pp. 81–108, mar 2001.
[7] ——, Population Games and Evolutionary Dynamics. Cambridge
University Press, 2010, pp. 153–164, 221–275.
[8] I. M. Bomze, “Regularity versus Degeneracy in Dynamics, Games,
and Optimization: A Unified Approach to Different Aspects,” SIAM
Review, vol. 44, no. 3, pp. 394–414, jan 2002.
[9] J. S. Shamma and G. Arslan, “Dynamic fictitious play, dynamic
gradient play, and distributed convergence to Nash equilibria,” IEEE
Transactions on Automatic Control, vol. 50, no. 3, pp. 312–327, 2005.
[10] M. J. Fox and J. S. Shamma, “Population games, stable games, and
passivity,” in Proceedings of the IEEE Conference on Decision and
Control. IEEE, dec 2012, pp. 7445–7450.
[11] R. Cressman and Y. Tao, “The replicator equation and other game
dynamics,” Proceedings of the National Academy of Sciences of the
United States of America, vol. 111, pp. 10 810–7, 2014.
[12] J. Barreiro-Gomez, G. Obando, and N. Quijano, “Distributed Population Dynamics: Optimization and Control Applications,” IEEE
Transactions on Systems, Man, and Cybernetics: Systems, vol. 47,
no. 2, pp. 304 – 314, 2016.
[13] J. Hofbauer and K. Sigmund, Evolutionary games and population
dynamics. Cambridge University Press, 1998.
[14] R. W. Cooper, Coordination Games. Complementarity and Macroeconomics. Cambridge University Press, 1999.
[15] D. Easley and J. Kleinberg, Networks, crowds, and markets: reasoning
about a highly connected world. Cambridge University Press, 2010.
[16] B. Skyrms, “The Stag Hunt and the Evolution of Social Structure,”
Cambridge University Press, vol. 1, pp. 1–147, 2004.
[17] A. Rapoport and A. M. Chammah, “The Game of Chicken,” American
Behavioral Scientist, vol. 10, no. 3, pp. 10–28, nov 1966.
[18] R. Sugden, The Economics of Rights, Co-operation and Welfare.
London: Palgrave Macmillan UK, 1986, vol. 97.
[19] T. G. Kurtz, Approximation of Population Processes. Philadelphia,
PA: SIAM, 1981, vol. 36.
[20] P. D. Taylor and L. B. Jonker, “Evolutionarily stable strategies and
game dynamics,” Mathematical Biosciences, vol. 40, no. 1-2, pp. 145–
156, jul 1978.
[21] P. Schuster and K. Sigmund, “Replicator dynamics,” Journal of
Theoretical Biology, vol. 100, no. 3, pp. 533–538, feb 1983.
[22] M. Benaim and J. W. Weibull, “Deterministic Approximation of
Stochastic Evolution in Games,” Econometrica, vol. 71, no. 3, pp.
873–903, may 2003.
[23] D. Monderer and L. S. Shapley, “Potential Games,” Games and
Economic Behavior, vol. 14, no. 1, pp. 124–143, may 1996.
[24] R. W. Rosenthal, “A class of games possessing pure-strategy Nash
equilibria,” International Journal of Game Theory, vol. 2, no. 1, pp.
65–67, dec 1973.
[25] J. R. Marden and J. S. Shamma, “Revisiting log-linear learning:
Asynchrony, completeness and payoff-based implementation,” Games
and Economic Behavior, vol. 75, no. 2, pp. 788–808, jul 2012.
[26] G. Como, K. Savla, D. Acemoglu, M. A. Dahleh, and E. Frazzoli,
“Stability analysis of transportation networks with multiscale driver
decisions,” SIAM Journal on Control and Optimization, vol. 51, no. 1,
pp. 230–252, 2013.
A PPENDIX
Lemma 2: For any imitation dynamics (4) satisfying (5):
(i) if x(0) ∈ XS for some nonempty subset of actions S ⊆
A, then x(t) ∈ XS for all t ≥ 0;
(ii) if xi (0) > 0 for some action i ∈ A, then xi (t) > 0 for
every t ≥ 0;
(iii) every restricted Nash equilibrium x ∈ Z is a rest point.
Proof:
(i) It follows from the fact that any solution of (4) with
xi (0) = 0 for some i ∈ A is such that xi (t) = 0,
∀ t ≥ 0.
(ii) Since ẋi(t) ≥ −Ci xi(t), where Ci = |A| max{|ri(x)| : x ∈ X}, Gronwall's inequality implies that xi(t) ≥ xi(0) e^{−Ci t} > 0.
(iii) For every x ∈ Z and i, j ∈ A, one has that xi xj (ri (x)−
rj (x)) = 0. Then, (5) implies that xi xj (fij (x) −
fji (x))) = 0.
Lemma 4: Let r : X → RA be the reward function vector
of a potential population game with potential function Φ :
X → R. Then, every imitation dynamics (4) satisfying (5)
is such that
Φ̇(x) = ∇Φ(x) · ẋ ≥ 0 ,      for all x ∈ X ,      (26)
with equality if and only if x ∈ Z, as defined in (11).
Proof: For every x ∈ X , we have
Φ̇(x) = ∇Φ(x) · ẋ
= ∇Φ(x) · diag(x)(F^T(x) − F(x)) x
= Σ_{i,j∈A} (∂Φ(x)/∂xi) xi xj (fji(x) − fij(x))
= (1/2) Σ_{i,j∈A} xi xj (∂Φ(x)/∂xi − ∂Φ(x)/∂xj) (fji(x) − fij(x))
= (1/2) Σ_{i,j∈A} xi xj (ri(x) − rj(x)) (fji(x) − fij(x)) ,      (27)
where the last identity follows from (13). It now follows
from property (5) of the imitation dynamics that, ∀ i, j ∈ A,
(ri (x) − rj (x)) (fji (x) − fij (x)) ≥ 0 .
Since all entries of a configuration x ∈ X are non-negative,
xi xj (ri (x) − rj (x)) (fji (x) − fij (x)) ≥ 0.
Combining the above with (27), we get that Φ̇(x) ≥ 0 (thus
proving (26)). Finally, Φ̇(x) = 0 if and only if all the terms
xi xj (ri (x) − rj (x)) (fji (x) − fij (x)) = 0 ,
(28)
∀ i, j ∈ A. Using again (5), we have that
(ri (x) − rj (x)) (fji (x) − fij (x)) = 0 ⇐⇒ ri (x) = rj (x).
Then (28) is equivalent to
xi xj (ri (x) − rj (x)) = 0.
(29)
To conclude the proof, we are simply left with showing that a
configuration x ∈ X satisfies (29) if and only if it is critical,
i.e., it belongs to Z. Indeed, if x ∈ NS for some nonempty
subset of actions S ⊆ A, then necessarily ri (x) = rj (x)
for every i, j ∈ A such that xi xj > 0. On the other hand,
for any x ∈ X satisfying (29), it is immediate to verify that
x ∈ NS , where S = {i ∈ A : xi > 0} is its support.
DISCRETENESS OF F -JUMPING NUMBERS AT ISOLATED
NON-Q-GORENSTEIN POINTS
arXiv:1605.03825v2 [math.AG] 11 Apr 2017
PATRICK GRAF AND KARL SCHWEDE
Abstract. We show that the F -jumping numbers of a pair (X, a) in positive characteristic have no limit points whenever the symbolic Rees algebra of −KX is finitely generated
outside an isolated collection of points. We also give a characteristic zero version of this result, as well as a generalization of the Hartshorne–Speiser–Lyubeznik–Gabber stabilization
theorem describing the non-F -pure locus of a variety.
1. Introduction
By now it is well understood that there is an interesting connection between multiplier
ideals in characteristic zero, defined via resolution of singularities, and test ideals in positive
characteristic, defined via the behavior of the Frobenius map. Recall that for any complex
pair (X, a), the multiplier ideal J(X, a^t) gets smaller as t increases, but it does not change if we increase t just slightly: J(X, a^t) = J(X, a^{t+ε}) for 0 < ε ≪ 1. Hence it makes sense to define the jumping numbers of (X, a) as those real numbers ti such that J(X, a^{ti}) ⊊ J(X, a^{ti−ε}) for ε > 0. By analogy, the F-jumping numbers are the real numbers ti where the test ideal jumps or changes: τ(X, a^{ti}) ⊊ τ(X, a^{ti−ε}) for ε > 0.
The discreteness and rationality of (F -)jumping numbers has been studied by many authors, e.g. [ELSV04, BdFFU15, Har06, BMS08, BMS09, KLZ09, BSTZ10, KZ14, ST14,
KSSZ14]. In characteristic zero, discreteness and rationality of jumping numbers is elementary if X is Q-Gorenstein, but rationality fails in general [Urb12, Theorem 3.6]. Discreteness
remains an open problem, with several special cases known, e.g. if the non-Q-Gorenstein
locus of X is zero-dimensional [Gra16, Theorem 1.4] (and [Urb12, Theorem 5.2] for an earlier, weaker version). For test ideals, discreteness and rationality are known whenever the
algebra of local sections
R(X, −(KX + ∆)) := ⊕_{m≥0} OX(⌊−m(KX + ∆)⌋)
(also known as the symbolic Rees algebra) is finitely generated [BSTZ10, Sch11b, CEMS14].
In this paper, we prove the following result.
Theorem A (Discreteness of F -jumping numbers, Theorem 4.2). Suppose that X is a
normal variety over an F-finite field k of positive characteristic and that ∆ ≥ 0 is a Q-divisor such that R(X, −(KX + ∆)) is finitely generated except at an isolated collection of
points. Suppose a ⊆ OX is a nonzero coherent ideal sheaf. Then the F -jumping numbers of
(X, ∆, a) have no limit points.
Date: April 11, 2017.
2010 Mathematics Subject Classification. 13A35, 14F18.
Key words and phrases. Test ideals, F -jumping numbers, Q-Gorenstein, multiplier ideals.
The first-named author was supported in part by the DFG grant “Zur Positivität in der komplexen
Geometrie”. The second named author was supported in part by the NSF grant DMS #1064485, NSF FRG
Grant DMS #1501115, NSF CAREER Grant DMS #1501102.
The corresponding statement in characteristic zero is also new.
Theorem B (Discreteness of jumping numbers, Theorem 4.3). Suppose that X is a normal variety over a field k of characteristic zero and that ∆ ≥ 0 is a Q-divisor such that
R(X, −(KX + ∆)) is finitely generated except at an isolated collection of points. Suppose
a ⊆ OX is a nonzero coherent ideal sheaf. Then the jumping numbers of (X, ∆, a) have no
limit points.
We would like to point out that the question whether F -jumping numbers are always
rational is still open. However the characteristic zero counterexample mentioned above
suggests that maybe one should expect a negative answer.
The method used to prove these results builds upon [Urb12] and [Gra16]. In particular,
we prove global generation of (Frobenius pushforwards of) sheaves used to compute test
ideals after twisting by a sufficiently ample divisor H. If X is projective, discreteness of
F -jumping numbers follows quickly, since the twisted test ideals are globally generated by
vector subspaces within the finite-dimensional vector space H 0 (X, OX (H)). The general
case is easily reduced to the projective case by a compactification argument.
Using these same methods, we also obtain a generalization of the Hartshorne–Speiser–
Lyubeznik–Gabber stabilization theorem. Let us motivate the result briefly. Notice that
if R is a ring, we have canonical maps HomR(F∗e R, R) → R obtained by evaluation at 1.
These images yield a descending chain of ideals Je . If KR is Cartier, it follows from [HS77,
Lyu97, Gab04] that these images stabilize, giving a canonical scheme structure to the non-F-pure locus of X = Spec R. Blickle and Böckle also proved a related stabilization result
for arbitrary rings (and even more) [BB11, Bli09] but their result does not seem to imply
that Je = Je+1 for e ≫ 0 (Blickle obtained another result which implies stabilization of a
different set of smaller ideals). However, as a corollary of our work, we obtain the following
generalization, also see [CEMS14, Proposition 3.7].
Theorem C (HSLG-type stabilization, Theorem 4.4). Suppose that X is a normal variety
over an F -finite field k of characteristic p > 0. Set
Je := Image[ F∗e OX ((1 − pe )KX ) ≅ Hom OX (F∗e OX , OX ) −−eval@1−−→ OX ] ⊂ OX .
If R(X, −KX ) is finitely generated except at an isolated collection of points, then Je = Je+1
for all e ≫ 0.
Remark 1.1. There should be a more general version of Theorem 4.4 with R(X, −(KX + ∆))
in place of R(X, −KX ), but for the proof one would probably need to generalize Theorem 3.1
further.
We end the introduction by pointing out some geometric and cohomological conditions
on the singularities of X which ensure that our assumptions on the anticanonical algebra
of X are satisfied.
Proposition 1.2 (Klt or rational singularities and finite generation). Let X be a normal
variety over a field k. Assume any of the following:
(1.2.1) char(k) = 0 and there is a Q-divisor D ≥ 0 such that the pair (X, D) is klt except
at an isolated collection of points.
(1.2.2) dim X ≤ 3 and X has pseudorational singularities except at an isolated collection
of points.
Then for any Q-Weil divisor B on X, the algebra of local sections R(X, B) is finitely generated except at an isolated collection of points. Hence the assumptions of Theorems 4.2, 4.3
and 4.4 are satisfied in this case.
For the definition of pseudorational singularities, see [LT81, Section 2, p. 102]. An equivalent condition, emphasizing the point of view of extendability of differential forms, is given
in [LT81, Section 4, Corollary on p. 107]. From the latter condition, it is easy to see that
for a klt threefold pair (X, D), the space X has pseudorational singularities except at an
isolated collection of points since X is Cohen–Macaulay in codimension two anyway.
Acknowledgements. The authors first began working on this project at the 2015 Summer
Research Institute on Algebraic Geometry held at the University of Utah. Furthermore they
would like to thank the referee for helpful suggestions.
2. Preliminaries
Convention 2.1. Throughout this paper, all schemes are Noetherian and separated and of
finite type over a field which in characteristic p > 0 is always assumed to be F -finite. In
characteristic p > 0, F : X → X denotes the absolute Frobenius map, acting on an affine scheme U = Spec R by r ↦ r p .
The material in this section is mostly well-known to experts, and collected for convenience
of the reader.
2.1. Grothendieck duality. We will use the following special case of Grothendieck duality [KM98, Proposition 5.67]. Let f : X → Y be a finite map and F , G coherent sheaves on X and on Y , respectively. Then there is a natural f∗ OX -linear isomorphism
(2.1.1) Hom OY (f∗ F , G ) = f∗ Hom OX (F , f ! G ),
where f ! G := Hom OY (f∗ OX , G ). Furthermore we will use the fact that if X is essentially of finite type over an F -finite field, then F ! ωX ≅ ωX , where ωX is the canonical sheaf of X.
Suppose now that X is a normal integral scheme of finite type over an F -finite field of characteristic p > 0. Then we have a canonical map (called the trace map)
F∗e ωX → ωX
which under (2.1.1) corresponds to id ∈ F∗e Hom OX (ωX , ωX ). Let KX be a canonical divisor on X. Twisting by OX (−KX ) and reflexifying yields
F∗e OX ((1 − pe )KX ) → OX ,
and then for any effective Weil divisor D ≥ 0 by restriction we obtain a map
(2.1.2) tr : F∗e OX ((1 − pe )KX − D) → OX .
Using (2.1.1) again, the left-hand side sheaf is identified with Hom OX (F∗e OX (D), OX ). It is straightforward to check that under this identification, (2.1.2) becomes the “evaluation at 1” map
Hom OX (F∗e OX (D), OX ) −−eval@1−−→ OX .
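As a standard illustration of this identification (added here only for the reader's convenience): if X = A1 = Spec k[x], then Hom OX (F∗e OX , OX ) is free of rank one as an F∗e OX -module, generated by the map Φe with Φe (F∗e x^(pe −1) ) = 1 and Φe (F∗e x^i ) = 0 for 0 ≤ i < pe − 1; under the identification above (with KX = 0 and D = 0), the generator Φe is exactly the trace map.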
2.2. Test ideals. We recall the following definition of test ideals from the literature.
Definition 2.2 ([HT04, Lemma 2.1], [Sch11c, Proof of 3.18]). If X is a normal F -finite
scheme, ∆ ≥ 0 is a Q-divisor on X, a is an ideal sheaf and t ≥ 0 is a real number then for
any sufficiently large effective Weil divisor C, we define the sheaf
τ (X, ∆, at ) := Σ_{e≥0} Σ_{φ∈C∆e} φ( F∗e ( a⌈t(pe −1)⌉ · OX (−C) ) ),
where C∆e := Hom OX ( F∗e OX (⌈(pe − 1)∆⌉), OX ).
Remark 2.3. The choice of C is philosophically the same as the choice of a test element
usually included in the (local) literature mentioned above. Note that for any affine chart
U = Spec R ⊆ X, if c ∈ R is an appropriate test element and C is a Weil divisor on X
such that OX (−C)|U ⊆ c · OU , then one can always find another test element d ∈ R with d · OU ⊆ OX (−C)|U . It follows that our definition of the test ideal is indeed independent
of the choice of C.
Lemma 2.4. Notation as above. Then for any e0 ≥ 0, we have
τ (X, ∆, at ) = Σ_{e≥e0} Σ_{φ∈C∆e} φ( F∗e ( a⌈t(pe −1)⌉ · OX (−C) ) ).
Proof. The inclusion “⊇” is clear. For “⊆”, choose C1 a sufficiently large Cartier divisor such that OX (−C1 ) is contained in the right-hand side, and put C ′ = C + p^(e0 −1) C1 . Then we see that
Σ_{e≥0} Σ_{φ∈C∆e} φ( F∗e ( a⌈t(pe −1)⌉ · OX (−C ′) ) ) ⊆ Σ_{e≥e0} Σ_{φ∈C∆e} φ( F∗e ( a⌈t(pe −1)⌉ · OX (−C) ) ),
since we can split up the sum on the left-hand side as Σ_{e=0}^{e0 −1} ( · · · ) + Σ_{e≥e0} ( · · · ). But the left-hand side is just τ (X, ∆, at ), hence the lemma is proved.
If C is in fact Cartier, an easy direct computation yields
Σ_{φ∈C∆e} φ( F∗e ( a⌈t(pe −1)⌉ · OX (−C) ) ) = Image[ F∗e a⌈t(pe −1)⌉ · Hom OX ( F∗e OX (⌈(pe − 1)∆⌉ + C), OX ) −−eval@1−−→ OX ],
hence
τ (X, ∆, at )
 = Σ_{e≥e0} Image[ F∗e a⌈t(pe −1)⌉ · Hom OX ( F∗e OX (⌈(pe − 1)∆⌉ + C), OX ) −−eval@1−−→ OX ]
 = Σ_{e≥e0} Image[ F∗e ( a⌈t(pe −1)⌉ · OX ((1 − pe )KX − ⌈(pe − 1)∆⌉ − C) ) −−tr−−→ OX ]
 = Σ_{e≥e0} Image[ F∗e ( a⌈t(pe −1)⌉ · OX (⌊(1 − pe )(KX + ∆)⌋ − C) ) −−tr−−→ OX ].
However, for our purposes this definition of the test ideal is not quite optimal. Fortunately,
this is easy after adjusting C.
Lemma 2.5. With notation as above, assume that C is Cartier. Then for any e0 ≥ 0 we have
τ (X, ∆, at ) = Σ_{e≥e0} Image[ F∗e ( a⌈tpe ⌉ · OX (⌊(1 − pe )(KX + ∆)⌋ − C) ) −−tr−−→ OX ].
Proof. Without loss of generality, and in view of Remark 2.3, we may assume that X = Spec R is affine and that OX (−C) = c · OX for some c ∈ R. Choose b ∈ a⌈t⌉ nonzero, so that b · a⌈t(pe −1)⌉ ⊆ a⌈t⌉ · a⌈t(pe −1)⌉ ⊆ a⌈tpe ⌉ . Replacing C = div(c) by div(bc) we obtain our desired formula.
2.3. Multiplier ideals. In this section we work over a field of characteristic zero. The
theory of non-Q-Gorenstein multiplier ideals was developed by [DH09]. The starting point
is a notion of pullback for Weil divisors.
Definition 2.6 ([DH09, Def. 2.6]). Let f : Y → X be a proper birational morphism between normal varieties, and let D be an integral Weil divisor on X. Then the natural pullback f ♮ D of D along f is defined by
OY (−f ♮ D) = ( OX (−D) · OY )∗∗ ,
where we consider OX (−D) ⊂ KX as a fractional ideal sheaf on X.
We will consider triples (X, ∆, at ) consisting of a normal variety X, an effective Q-divisor
∆ ≥ 0, a nonzero coherent ideal sheaf a ⊂ OX and a real number t ≥ 0. In the case ∆ = 0,
the following definition was made in [DH09, Definition 4.8].
Definition 2.7 ([CEMS14, Def. 2.19]). Let (X, ∆, at ) be a triple, and let m ∈ N be a positive integer such that m∆ is integral. Let f : Y → X be a log resolution of the pair (X, OX (−m(KX + ∆)) + a) in the sense of [DH09, Definition 4.1]. Let Z be the Cartier divisor on Y such that a · OY = OY (−Z). Then we define
Jm (X, ∆, at ) := f∗ OY ( KY − (1/m) f ♮ (m(KX + ∆)) − tZ ).
One shows that this is a coherent ideal sheaf on X, independent of the choice of the
resolution f . Furthermore, Jm (X, ∆, at ) ⊂ Jkm(X, ∆, at ) for any integer k > 0. Thus by
the Noetherian property of X, the following definition makes sense.
Definition 2.8. The multiplier ideal J (X, ∆, at ) of a triple as above is defined to be the
unique maximal element of the family
{ Jm (X, ∆, at ) | m ≥ 1 and m∆ is integral },
i.e. it is equal to Jm (X, ∆, at ) for m sufficiently divisible.
We will need the following notion of compatible boundaries, which is a straightforward
generalization of the ∆ = 0 case in [DH09, Definition 5.1].
Definition 2.9. Let (X, ∆, at ) be a triple, and fix an integer m ≥ 2 such that m∆ is integral. Given a log resolution f : Y → X of (X, OX (−m(KX + ∆)) + a), a Q-Weil divisor ∆′ on X is called m-compatible for (X, ∆, at ) with respect to f if the following hold:
(i) KX + ∆ + ∆′ is Q-Cartier,
(ii) m∆′ is integral and ⌊∆′ ⌋ = 0,
(iii) no component of ∆′ is contained in supp(∆) ∪ supp(OX /a),
(iv) f is a log resolution of ((X, ∆ + ∆′ ), OX (−m(KX + ∆)) + a),
(v) KY + f∗−1 ∆′ − f ∗ (KX + ∆ + ∆′ ) = KY − (1/m) f ♮ (m(KX + ∆)).
Proposition 2.10. Let (X, ∆, at ) be a triple, and fix an integer m ≥ 2 such that m∆ is
integral and J (X, ∆, at ) = Jm (X, ∆, at ). Then for any m-compatible boundary ∆′ we have
J (X, ∆, at ) = J ((X, ∆ + ∆′ ); at ),
where the right-hand side is a multiplier ideal in the usual Q-Gorenstein sense [Laz04b,
Definition 9.3.60].
Proof. The proof is analogous to [DH09, Proposition 5.2], and thus it is omitted.
The existence of compatible boundaries is ensured by the following theorem, cf. [DH09,
Theorem 5.4] and [Gra16, Theorem 4.4]. The Weil index of a triple (X, ∆, at ) is defined to
be the smallest positive integer m0 such that m0 (KX + ∆) is integral.
Theorem 2.11. Let (X, ∆, at ) be a triple of Weil index m0 , and let k ≥ 2 be an integer.
Choose an effective Weil divisor D on X such that m0 (KX + ∆) − D is Cartier, and let
L ∈ Pic X be a line bundle such that L (−kD) := L ⊗ OX (−kD) is globally generated. Pick a finite-dimensional subspace V ⊂ H 0 (X, L (−kD)) that generates L (−kD), and let M be the divisor of a general element of V . Then
∆′ := (1/(km0 )) M
is a (km0 )-compatible boundary for (X, ∆, at ).
Proof. Let f : Y → X be a log resolution of ((X, ∆), OX (−km0 (KX + ∆)) + OX (−kD) + a), and set E := f ♮ (kD). Then
f ♮ (km0 (KX + ∆)) = f ♮ ((km0 (KX + ∆) − kD) + kD) = km0 · f ∗ (KX + ∆ − (1/m0 )D) + E,
and so we have
KY − (1/(km0 )) f ♮ (km0 (KX + ∆)) = KY − f ∗ (KX + ∆ − (1/m0 )D) − (1/(km0 ))E.
Let M be the divisor of a general element of V . Then since L (−kD) is generated by V , we see that M is reduced with no component contained in supp(∆) ∪ supp(OX /a). Put G = M + kD, a Cartier divisor. Since also f ∗ L ⊗ OY (−E) is generated by the (pullbacks of the) sections in V , we have that f ∗ G = f∗−1 M + E.
Now set ∆′ = (1/(km0 )) M . Then by the above, conditions (ii)–(iv) of Definition 2.9 are satisfied. Also it is clear that
KX + ∆ + ∆′ = KX + ∆ − (1/m0 )D + (1/(km0 ))G
is Q-Cartier, so (i) is fulfilled. To check (v), note that
KY + f∗−1 ∆′ − f ∗ (KX + ∆ + ∆′ )
 = KY + f∗−1 ∆′ − f ∗ (KX + ∆ + ∆′ − (1/(km0 ))G) − (1/(km0 ))f ∗ G
 = KY − f ∗ (KX + ∆ − (1/m0 )D) − (1/(km0 ))E
 = KY − (1/(km0 )) f ♮ (km0 (KX + ∆)).
This proves the theorem.
3. Global generation at isolated non-finitely generated points
The following theorem is a positive characteristic version of [Gra16, Theorem 7.1]. But
even in characteristic zero (cf. Remark 3.2), the present result is stronger than that theorem.
Most notably, we remove the divisibility condition in [Gra16, Theorem 7.1] and replace the “KX + ∆ is Q-Cartier” condition with the weaker requirement that R(X, −(KX + ∆)) be
finitely generated.
Theorem 3.1. Let X be a normal projective d-dimensional variety over an F -finite field k of characteristic p > 0. Further let D be a Weil Q-divisor and C a Cartier divisor on X. Suppose that W ⊆ X is a closed set such that R(X, D) := ⊕_{m≥0} OX (⌊mD⌋) is finitely generated on X \ W , let W0 be the set of isolated points of W and put
U = (X \ W ) ∪ W0 .
Then there exists an ample Cartier divisor H on X such that for all e ≥ 0, m ≥ 0,
ℓ ≥ max{m, pe } and any nef Cartier divisor N , the sheaf
F∗e OX (⌊mD + ℓH⌋ − C + N )
is globally generated on U (as an OX -module). Furthermore, for any ample Cartier divisor
H ′ on X fixed in advance, H can be taken to be a sufficiently high multiple of H ′ .
Remark 3.2. Theorem 3.1 continues to hold over an arbitrary field k of characteristic zero
if one interprets F = idX and pe = 1. The proof does not require any changes.
Proof of Theorem 3.1. The strategy is similar to [Gra16, Theorem 7.1]. We will find a globally generated sheaf F∗e Fm,ℓ ֒→ F∗e OX (⌊mD + ℓH⌋ − C + N ) so that the cokernel is supported on W . The proof is divided into three steps.
Step 1: Blowing up. Let N0 be a positive integer such that N0 D is an integral Weil divisor.
It follows that the Veronese subring R(X \ W, N0 D) is also Noetherian and R(X \ W, D)
is a finite R(X \ W, N0 D)-module [GHNV90, Lemma 2.4]. By [GHNV90, Theorem 3.2(3)],
making N0 more divisible if necessary we may assume that R(X \ W, N0 D) is generated in
degree 1 as a graded ring.
Let f : Y → X be the normalized blowup of the fractional ideal sheaf OX (N0 D). Then we have f −1 (X \ W ) = Proj R(X \ W, N0 D). In particular, f is a small morphism over X \ W by [KM98, Lemma 6.2]. Furthermore, if we write
OY (B) = f ∗ OX (N0 D) / torsion = OX (N0 D) · OY ,
then B is Cartier and f -ample by [Gra16, Theorem 6.2]. Thus by [Har77, II, Proposition 7.10] or [KM98, Prop. 1.45] there exists a very ample Cartier divisor A on X so that B + f ∗ A is globally ample on Y .
Step 2: Vanishing. Now for any integer m ≥ 0, write uniquely
m = qm N0 + rm with 0 ≤ rm ≤ N0 − 1,
i.e. qm = ⌊m/N0 ⌋. Fix a nef Cartier divisor N on X and form the sheaf
Gm := OY (qm B + f ∗ N ) ⊗ Hm , where Hm := OX (⌊rm D − C⌋) · OY .
Claim 3.3. There exists an m0 ≥ 1 such that for all ℓ ≥ m ≥ m0 , i ≥ 1, and any nef Cartier divisor P on Y we have
(3.3.1) H i (Y, Gm ⊗ OY (ℓf ∗ A + P )) = 0 and
(3.3.2) f∗ Gm ≃ R f∗ Gm .
Proof of Claim 3.3. Let us make two easy observations: Firstly, the sheaf Hm can take on only finitely many values. Secondly, qm → ∞ as m → ∞. Hence (3.3.1) follows from Fujita vanishing [Fuj83, Theorem 1] applied to the ample divisor B + f ∗ A on Y upon writing
H i (Y, Gm ⊗ OY (ℓf ∗ A + P )) = H i (Y, Hm ⊗ OY (qm B + ℓf ∗ A + f ∗ N + P )) = H i (Y, Hm ⊗ OY (qm (B + f ∗ A) + (ℓ − qm ) f ∗ A + f ∗ N + P )),
where ℓ − qm ≥ 0. Similarly, making m0 even larger if necessary, (3.3.2) follows from relative Serre vanishing [Laz04a, Theorem 1.7.6] for the f -ample divisor B.
Claim 3.4. Put H = b · A where b > d = dim X. Then for every e ≥ 0, m ≥ m0 , and ℓ ≥ max{m, pe }, the sheaf F∗e (f∗ Gm ⊗ OX (ℓH)) is 0-regular with respect to A, and hence globally generated as an OX -module. Furthermore its first cohomology group vanishes, H 1 (X, F∗e (f∗ Gm ⊗ OX (ℓH))) = 0.
Proof of Claim 3.4. We need to show that for every 1 ≤ j ≤ d, we have
H j (X, F∗e (f∗ Gm ⊗ OX (ℓH)) ⊗ OX (−jA)) = 0.
The left-hand side equals
 H j (X, F∗e (f∗ Gm ⊗ OX ((ℓb − pe j)A)))   (projection formula)
 = H j (X, f∗ Gm ⊗ OX ((ℓb − pe j)A))   (F e is finite(a) )
 = H j (X, R f∗ Gm ⊗ OX ((ℓb − pe j)A))   (by (3.3.2))
 = H j (Y, Gm ⊗ OY ((ℓb − pe j)f ∗ A))   (composition of derived functors)
 = 0   (by (3.3.1), since ℓb − pe j ≥ m).
The assertion ℓb − pe j ≥ m in the last line is justified since ℓb ≥ ℓ + ℓd ≥ m + pe d ≥ m + pe j. Any coherent sheaf 0-regular with respect to A is globally generated by Castelnuovo–Mumford regularity [Laz04a, Theorem 1.8.5]. The desired vanishing H 1 (X, F∗e (f∗ Gm ⊗ OX (ℓH))) = 0
Step 3: Reflexification and global generation.
Claim 3.5. For any e, m, ℓ ≥ 0, the reflexive hull of F∗e (f∗ Gm ⊗ OX (ℓH)) is equal to F∗e OX (⌊mD + ℓH⌋ − C + N ). Furthermore the cotorsion of F∗e (f∗ Gm ⊗ OX (ℓH)) is supported on W , i.e. in the natural short exact sequence
0 −→ F∗e (f∗ Gm ⊗ OX (ℓH)) −→ F∗e OX (⌊mD + ℓH⌋ − C + N ) −→ Qe,m,ℓ −→ 0
the support of Qe,m,ℓ is contained in W .
Proof of Claim 3.5. Let V ⊂ X be the maximal open subset over which f is an isomorphism. Note that codimX (X \ V ) ≥ 2. We see that
(f∗ Gm )|V = OV (qm N0 D + N ) ⊗ OV (⌊rm D − C⌋) = OV (⌊mD⌋ − C + N ),
the first equality holding by definition and the second one because N0 D is Cartier on V . Pushing this forward by the inclusion V ֒→ X, we get
(3.5.1) (f∗ Gm )∗∗ = OX (⌊mD⌋ − C + N ).
Observe that Frobenius pushforward commutes with taking the reflexive hull. Hence twisting (3.5.1) by OX (ℓH) and applying F∗e ( · ) proves the first part of the claim. For the second
part, use the fact that f is small over X \ W and that the pushforward of a reflexive sheaf
under a small birational map is again reflexive.
(a) One may also argue by noting that F∗e ( · ) leaves the abelian sheaf structure, and hence sheaf cohomology, unchanged.
Returning to the proof of Theorem 3.1, choose a point x ∈ U = (X \ W ) ∪ W0 , and
assume first that m ≥ m0 . We see that Q = Qe,m,ℓ is globally generated at x since either
◦ x ∈ W0 and then by Claim 3.5, x is an isolated point of supp Q or Q is even zero at x, or
◦ x ∈ X \ W and then Q definitely is zero at x.
Hence F∗e OX (⌊mD + ℓH⌋ − C + N ) is globally generated at x by Claim 3.4 and [Gra16,
Lemma 7.3].
To finish the proof, we still need to take care of the sheaves
(3.5.2) F∗e OX (⌊mD + ℓH⌋ − C + N )   for e ≥ 0, 0 ≤ m < m0 , and ℓ ≥ max{m, pe }.
To this end, notice that arguing as in the proof of Claim 3.4, for 1 ≤ j ≤ d we have
H j (X, F∗e OX (⌊mD + ℓH⌋ − C + N ) ⊗ OX (−jA)) = H j (X, OX (⌊mD⌋ − C) ⊗ OX ((ℓb − pe j)A + N )),
where the sheaf OX (⌊mD⌋ − C) takes on only finitely many values and ℓb − pe j ≥ pe (b − d).
Hence by Fujita vanishing, taking b sufficiently large in Claim 3.4, we may assume that the
sheaves (3.5.2) are 0-regular with respect to A. In particular they are globally generated
on U .
To justify the last claim of Theorem 3.1, simply note that for any ample Cartier divisor
H ′ on X given in advance, we may pick A to be a sufficiently high multiple of H ′ and then
also H will be a multiple of H ′ .
4. Proof of main results
In this section we prove the results announced in the introduction. But first we state a
weak result on global generation of test ideals. Compare with [Mus11, Sch11a, Kee08].
Proposition 4.1. Suppose X is a normal projective variety over an F -finite field k of characteristic p > 0, ∆ ≥ 0 is a Q-divisor, a is a nonzero coherent ideal sheaf and t0 > 0 is a real number. Suppose that R(X, −(KX + ∆)) is finitely generated away from a closed set W ⊆ X and that W0 ⊆ W is the set of isolated points of W . Set U = (X \ W ) ∪ W0 . Then there exists an ample divisor H such that
τ (X, ∆, at ) ⊗ OX (H)
is globally generated on U for all t ∈ [0, t0 ].
Proof. Choose an effective Cartier divisor C ≥ 0 on X so that OX (−C) ⊆ τ (X, ∆, at0 ) ⊆ τ (X, ∆, at ). By Lemma 2.5, for any t ∈ [0, t0 ] we have
(4.1.1) τ (X, ∆, at ) = Σ_{e≥0} Image[ F∗e ( a⌈tpe ⌉ OX (⌊(1 − pe )(KX + ∆)⌋ − C) ) −−tr−−→ OX ].
Now fix an ample Cartier divisor A on X so that
a⌈t⌉ ⊗ OX (A)
is globally generated for all t ∈ [0, t0 ]. We then observe that for all m > 0 and t ∈ [0, t0 ],
(4.1.2) a⌈mt⌉ ⊗ OX (mA)
is also globally generated. The reason is that since ⌈mt⌉ ≤ m⌈t⌉, the ideal a⌈mt⌉ can be written as the product of m ideals of the form a⌈s⌉ for various values of s ∈ [0, t0 ]. For ease of notation, write W^t_m for the k-vector space of global sections of the sheaf (4.1.2). It follows that for every m > 0, the map
W^t_m ⊗k OX (−mA) → a⌈mt⌉
is surjective. Combining with (4.1.1), we get that
τ (X, ∆, at ) = Σ_{e≥0} Im[ F∗e ( a⌈tpe ⌉ OX (⌊(1 − pe )(KX + ∆)⌋ − C) ) → OX ]
 = Σ_{e≥0} Im[ F∗e ( W^t_{pe} ⊗k OX (−pe A) ⊗ OX (⌊(1 − pe )(KX + ∆)⌋ − C) ) → OX ]
 = Σ_{e≥0} Im[ F∗e ( W^t_{pe} ⊗k OX (⌊(pe − 1)(−KX − ∆ − A)⌋ − C − A) ) → OX ].
Now choose an ample divisor H that satisfies the conclusion of Theorem 3.1 relative to the divisor D = −(KX + ∆ + A), where C + A takes the role of C. Then for m = pe − 1, ℓ = pe and N = 0 we get that
(4.1.3) F∗e OX (⌊(pe − 1)(−KX − ∆ − A)⌋ − C − A) ⊗ OX (H)
is globally generated (as an OX -module) over U for all e ≥ 0. Hence also
τ (X, ∆, at ) ⊗ OX (H)
is globally generated over U , being a quotient of a direct sum of sheaves of the form (4.1.3). This completes the proof.
Theorem 4.2. Suppose that X is a normal variety over an F -finite field k of positive characteristic and that ∆ ≥ 0 is a Q-divisor such that R(X, −(KX + ∆)) is finitely generated except at an isolated collection of points. Suppose a ⊆ OX is a nonzero coherent ideal sheaf. Then the F -jumping numbers of (X, ∆, a) have no limit points.
Proof. By [BSTZ10, Proposition 3.28], we may assume that X is affine. Let X̄ denote the closure of X in some projective space. By normalizing, we may also assume that X̄ is normal. There exists a Q-divisor ∆̄ ≥ 0 on X̄ and a coherent ideal sheaf ā ⊂ OX̄ which restrict to ∆ and a, respectively.
Pick an arbitrary real number t0 > 0. By Proposition 4.1, we know that there exists an ample Cartier divisor H on X̄ such that
τ (X̄, ∆̄, āt ) ⊗ OX̄ (H) is globally generated on X ⊂ X̄ for all t ∈ [0, t0 ].
Note that τ (X̄, ∆̄, āt )|X = τ (X, ∆, at ) since X ⊂ X̄ is open. Now it follows from [Gra16,
Lemma 8.2] that for any strictly increasing sequence of numbers 0 ≤ s0 < s1 < · · · < t0 ,
the corresponding sequence of test ideals
τ (X, ∆, as0 ) ⊃ τ (X, ∆, as1 ) ⊃ · · ·
stabilizes. Hence the set of F -jumping numbers of (X, ∆, a) does not have a limit point in
the interval [0, t0 ]. As t0 > 0 was chosen arbitrarily, this proves the theorem.
Theorem 4.3. Suppose that X is a normal variety over a field k of characteristic zero and that ∆ ≥ 0 is a Q-divisor such that R(X, −(KX + ∆)) is finitely generated except at an isolated collection of points. Suppose a ⊆ OX is a nonzero coherent ideal sheaf. Then the jumping numbers of (X, ∆, a) have no limit points.
Proof. The proof follows quite closely along the lines of [Gra16, Theorem 8.1]. For the
reader’s convenience, we give a sketch of the argument here.
Arguing by contradiction, assume that there is a strictly increasing and bounded above
sequence 0 ≤ s0 < s1 < · · · of jumping numbers of (X, ∆, a). As above, we may assume that there is a triple (X̄, ∆̄, ā) containing (X, ∆, a) as an open subset and such that X̄ is normal and projective. Let m0 be the Weil index of (X̄, ∆̄, ā). By Theorem 3.1 in combination
with Remark 3.2, using Theorem 2.11 we can construct for each k ≥ 2 a Q-Weil divisor ∆̄k on X̄ such that ∆k := ∆̄k |X is a (km0 )-compatible boundary for (X, ∆, a) and furthermore the Q-linear equivalence class of ∆̄k does not depend on k. The last property is crucial, as it enables us to find an ample Cartier divisor H on X̄ such that
J (X̄, ∆̄ + ∆̄k , āsℓ ) ⊗ OX̄ (H) is globally generated for all k ≥ 2, ℓ ≥ 0,
using [Gra16, Proposition 8.3]. Since ∆k is (km0 )-compatible, it follows that J (X̄, ∆̄, āsℓ ) ⊗ OX̄ (H) is globally generated on X for all ℓ ≥ 0. By [Gra16, Lemma 8.2], this implies that the sequence of ideals
J (X̄, ∆̄, ās0 ) ⊃ J (X̄, ∆̄, ās1 ) ⊃ · · ·
stabilizes when restricted to X. Since J (X̄, ∆̄, āsℓ )|X = J (X, ∆, asℓ ), this contradicts the assumption that each si is a jumping number of (X, ∆, a).
Theorem 4.4. Suppose that X is a normal variety over an F -finite field k of characteristic p > 0. Set
Je := JeX := Image[ F∗e OX ((1 − pe )KX ) ≅ Hom OX (F∗e OX , OX ) −−eval@1−−→ OX ] ⊂ OX .
If R(X, −KX ) is finitely generated except at an isolated collection of points, then Je = Je+1
for all e ≫ 0.
Proof. First notice that Je ⊇ Je+1 since F∗e OX ֒→ F∗e+1 OX and Hom OX (−, OX ) is contravariant. As in the proof of Theorem 4.2, we may assume that X is an open subset of some normal projective variety X̄. By Theorem 3.1, we know there exists an ample divisor H on X̄ such that
F∗e OX̄ ((1 − pe )KX̄ + pe H) = F∗e OX̄ ((1 − pe )KX̄ ) ⊗ OX̄ (H)
is globally generated on X, as an OX̄ -module, for all e ≥ 0. Hence its image JeX̄ ⊗ OX̄ (H) is also globally generated on X. But JeX̄ |X = JeX since X ⊂ X̄ is open. Hence we see that Je = Je+1 for e ≫ 0 by [Gra16, Lemma 8.2]. This completes the proof.
Finally we prove the final statement from the introduction.
Proof of Proposition 1.2. For (1.2.1), we need to prove that for every characteristic zero
klt pair (X, D) and for every Q-divisor B on X, the algebra R(X, B) is finitely generated.
This is well-known to experts (see e.g. [Kol08, Theorem 92]), but for completeness’ sake we
provide a proof.
The question is local, so we may assume that B is effective and that KX + D ∼Q 0. Let
π : Y → X be a small Q-factorial modification, which exists by [BCHM10, Corollary 1.4.3]. For some rational 0 < ε ≪ 1, the pair (Y, π∗−1 (D + εB)) is klt. The map π being small, we have
π∗ R(Y, KY + π∗−1 (D + εB)) = R(X, KX + D + εB).
By [BCHM10, Theorem 1.2(3)], the left-hand side is finitely generated. Hence so is the
right-hand side. Since KX + D ∼Q 0 and ε ∈ Q, we see that R(X, KX + D + εB) and
R(X, B) have isomorphic Veronese subalgebras. We conclude by [GHNV90, Lemma 2.4
and Theorem 3.2].
Concerning (1.2.2), after shrinking X we may assume that X = Spec R is affine and has
pseudorational singularities. If the singular locus of X is zero-dimensional, we are clearly
done. So let p ∈ Spec R be the generic point of a one-dimensional component of Sing(X).
Localizing at p, we obtain a two-dimensional pseudorational germ U := Spec Rp → X with closed point m := pRp . Let π : Ũ → U be a desingularization of U , with exceptional divisor
E ⊂ Ũ . The Grothendieck spectral sequence associated to the composition of functors Γm ◦ π∗ = ΓE yields an exact sequence
HE1 (Ũ , OŨ ) → Hm0 (U, R1 π∗ OŨ ) → Hm2 (U, OU ) → HE2 (Ũ , OŨ ),
where the first term is zero due to [Lip78, Theorem 2.4], the second term equals H 1 (Ũ , OŨ ), and the last map is injective by pseudorationality. It follows that H 1 (Ũ , OŨ ) = 0, so U has rational singularities in the sense of Lipman [Lip69, Definition 1.1].
Now consider a Q-Weil divisor B on X. By [Lip69, Proposition 17.1], the restriction of B
to U is Q-Cartier and then B itself is Q-Cartier in a neighborhood of p ∈ X. Applying this
argument to every one-dimensional component of Sing(X), we see that except at an isolated
collection of points, B is Q-Cartier and in particular R(X, B) is finitely generated.
References
[BCHM10] C. Birkar, P. Cascini, C. D. Hacon, and J. McKernan: Existence of minimal models for
varieties of log general type, J. Amer. Math. Soc. 23 (2010), no. 2, 405–468. 2601039 (2011f:14023)
↑ 11
[Bli09]
M. Blickle: Test ideals via algebras of p−e -linear maps, arXiv:0912.2255, to appear in J. Algebraic Geom. ↑ 2
[BB11]
M. Blickle and G. Böckle: Cartier modules: finiteness results, J. Reine Angew. Math. 661
(2011), 85–123. 2863904 ↑ 2
[BMS08]
M. Blickle, M. Mustaţǎ, and K. E. Smith: Discreteness and rationality of F -thresholds,
Michigan Math. J. 57 (2008), 43–61, Special volume in honor of Melvin Hochster. 2492440
(2010c:13003) ↑ 1
[BMS09]
M. Blickle, M. Mustaţă, and K. E. Smith: F -thresholds of hypersurfaces, Trans. Amer.
Math. Soc. 361 (2009), no. 12, 6549–6565. 2538604 (2011a:13006) ↑ 1
[BSTZ10] M. Blickle, K. Schwede, S. Takagi, and W. Zhang: Discreteness and rationality of F jumping numbers on singular varieties, Math. Ann. 347 (2010), no. 4, 917–949. 2658149 ↑ 1,
10
[BdFFU15] S. Boucksom, T. de Fernex, C. Favre, and S. Urbinati: Valuation spaces and multiplier
ideals on singular varieties, Recent advances in algebraic geometry, London Math. Soc. Lecture
Note Ser., vol. 417, Cambridge Univ. Press, Cambridge, 2015, pp. 29–51. 3380442 ↑ 1
[CEMS14] A. Chiecchio, F. Enescu, L. E. Miller, and K. Schwede: Test ideals in rings with finitely
generated anti-canonical algebras, ArXiv e-prints (2014). ↑ 1, 2, 5
[DH09]
T. De Fernex and C. Hacon: Singularities on normal varieties, Compos. Math. 145 (2009),
no. 2, 393–414. ↑ 5, 6
[ELSV04] L. Ein, R. Lazarsfeld, K. E. Smith, and D. Varolin: Jumping coefficients of multiplier
ideals, Duke Math. J. 123 (2004), no. 3, 469–506. MR2068967 (2005k:14004) ↑ 1
[Fuj83]
T. Fujita: Vanishing theorems for semipositive line bundles, Algebraic geometry (Tokyo/Kyoto,
1982), Lecture Notes in Math., vol. 1016, Springer, Berlin, 1983, pp. 519–528. 726440 (85g:14023)
↑7
[Gab04]
O. Gabber: Notes on some t-structures, Geometric aspects of Dwork theory. Vol. I, II, Walter
de Gruyter GmbH & Co. KG, Berlin, 2004, pp. 711–734. ↑ 2
[GHNV90] S. Goto, M. Herrmann, K. Nishida, and O. Villamayor: On the structure of Noetherian
symbolic Rees algebras, Manuscripta Math. 67 (1990), no. 2, 197–225. 1042238 (91a:13006) ↑ 7,
11
[Gra16]
P. Graf: The jumping coefficients of non-Q-Gorenstein multiplier ideals, J. Algebra 450 (2016),
323–348. 3449696 ↑ 1, 2, 6, 7, 9, 10, 11
[Har06]
N. Hara: F-pure thresholds and F-jumping exponents in dimension two, Math. Res. Lett. 13
(2006), no. 5-6, 747–760, With an appendix by Paul Monsky. MR2280772 ↑ 1
[HT04]
N. Hara and S. Takagi: On a generalization of test ideals, Nagoya Math. J. 175 (2004),
59–74. MR2085311 (2005g:13009) ↑ 3
[Har77]
R. Hartshorne: Algebraic geometry, Springer-Verlag, New York, 1977, Graduate Texts in
Mathematics, No. 52. MR0463157 (57 #3116) ↑ 7
[HS77] R. Hartshorne and R. Speiser: Local cohomological dimension in characteristic p, Ann. of Math. (2) 105 (1977), no. 1, 45–79. MR0441962 (56 #353) ↑ 2
[KLZ09] M. Katzman, G. Lyubeznik, and W. Zhang: On the discreteness and rationality of F -jumping coefficients, J. Algebra 322 (2009), no. 9, 3238–3247. 2567418 (2011c:13005) ↑ 1
[KSSZ14] M. Katzman, K. Schwede, A. K. Singh, and W. Zhang: Rings of Frobenius operators, Math. Proc. Cambridge Philos. Soc. 157 (2014), no. 1, 151–167. 3211813 ↑ 1
[KZ14] M. Katzman and W. Zhang: Castelnuovo–Mumford regularity and the discreteness of F -jumping coefficients in graded rings, Trans. Amer. Math. Soc. 366 (2014), no. 7, 3519–3533. 3192605 ↑ 1
[Kee08] D. S. Keeler: Fujita’s conjecture and Frobenius amplitude, Amer. J. Math. 130 (2008), no. 5, 1327–1336. 2450210 (2009i:14006) ↑ 9
[Kol08] J. Kollár: Exercises in the birational geometry of algebraic varieties, arXiv:0809.2579. ↑ 11
[KM98] J. Kollár and S. Mori: Birational geometry of algebraic varieties, Cambridge Tracts in Mathematics, vol. 134, Cambridge University Press, Cambridge, 1998, With the collaboration of C. H. Clemens and A. Corti, Translated from the 1998 Japanese original. MR1658959 (2000b:14018) ↑ 3, 7
[Laz04a] R. Lazarsfeld: Positivity in algebraic geometry. I, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics, vol. 48, Springer-Verlag, Berlin, 2004, Classical setting: line bundles and linear series. MR2095471 (2005k:14001a) ↑ 7, 8
[Laz04b] R. Lazarsfeld: Positivity in algebraic geometry. II, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics, vol. 49, Springer-Verlag, Berlin, 2004, Positivity for vector bundles, and multiplier ideals. MR2095472 (2005k:14001b) ↑ 5
[Lip69] J. Lipman: Rational singularities, with applications to algebraic surfaces and unique factorization, Inst. Hautes Études Sci. Publ. Math. (1969), no. 36, 195–279. MR0276239 (43 #1986) ↑ 12
[Lip78] J. Lipman: Desingularization of two-dimensional schemes, Ann. Math. (2) 107 (1978), no. 1, 151–207. 0491722 (58 #10924) ↑ 12
[LT81] J. Lipman and B. Teissier: Pseudorational local rings and a theorem of Briançon–Skoda about integral closures of ideals, Michigan Math. J. 28 (1981), no. 1, 97–116. MR600418 (82f:14004) ↑ 2, 3
[Lyu97] G. Lyubeznik: F -modules: applications to local cohomology and D-modules in characteristic p > 0, J. Reine Angew. Math. 491 (1997), 65–130. MR1476089 (99c:13005) ↑ 2
[Mus11] M. Mustaţǎ: The non-nef locus in positive characteristic, arXiv:1109.3825, to appear in the volume dedicated to Joe Harris on the occasion of his 60th birthday. ↑ 9
[Sch11a] K. Schwede: A canonical linear system associated to adjoint divisors in characteristic p > 0, arXiv:1107.3833. ↑ 9
[Sch11b] K. Schwede: A note on discreteness of F -jumping numbers, Proc. Amer. Math. Soc. 139 (2011), no. 11, 3895–3901. ↑ 1
[Sch11c] K. Schwede: Test ideals in non-Q-Gorenstein rings, Trans. Amer. Math. Soc. 363 (2011), no. 11, 5925–5941. ↑ 3
[ST14] K. Schwede and K. Tucker: Test ideals of non-principal ideals: computations, jumping numbers, alterations and division theorems, J. Math. Pures Appl. (9) 102 (2014), no. 5, 891–929. 3271293 ↑ 1
[Urb12] S. Urbinati: Discrepancies of non-Q-Gorenstein varieties, Michigan Math. J. 61 (2012), no. 2, 265–277. ↑ 1, 2
PG: Lehrstuhl für Mathematik I, Universität Bayreuth, 95440 Bayreuth, Germany
URL: www.pgraf.uni-bayreuth.de/en/
E-mail address: [email protected]
KS: Department of Mathematics, The University of Utah, 155 S 1400 E Room 233, Salt Lake
City, UT 84112, USA
URL: www.math.utah.edu/~schwede/
E-mail address: [email protected]
| 0 |
arXiv:1709.06624v1 [math.AG] 19 Sep 2017
On the multiplicity of isolated roots of sparse polynomial
systems∗
Marı́a Isabel Herrero♯,⋄ , Gabriela Jeronimo♯,†,⋄ , Juan Sabia†,⋄
♯ Departamento de Matemática, Facultad de Ciencias Exactas y Naturales,
Universidad de Buenos Aires, Ciudad Universitaria, (1428) Buenos Aires, Argentina
† Departamento de Ciencias Exactas, Ciclo Básico Común,
Universidad de Buenos Aires, Ciudad Universitaria, (1428) Buenos Aires, Argentina
⋄ IMAS, UBA-CONICET, Buenos Aires, Argentina
September 21, 2017
Abstract
We give formulas for the multiplicity of any affine isolated zero of a generic polynomial system of n equations in n unknowns with prescribed sets of monomials. First,
we consider sets of supports such that the origin is an isolated root of the corresponding generic system and prove formulas for its multiplicity. Then, we apply these
formulas to solve the problem in the general case, by showing that the multiplicity
of an arbitrary affine isolated zero of a generic system with given supports equals the
multiplicity of the origin as a common zero of a generic system with an associated
family of supports.
The formulas obtained are in the spirit of the classical Bernstein’s theorem, in
the sense that they depend on the combinatorial structure of the system, namely,
geometric numerical invariants associated to the supports, such as mixed volumes of
convex sets and, alternatively, mixed integrals of convex functions.
1 Introduction
The connections between the set of solutions of a polynomial system and the geometry
of the supports of the polynomials involved have been vastly studied in the literature,
starting with the foundational work of Bernstein ([1]), Kushnirenko ([13]) and Khovanskii
([11]). They proved that the number of isolated solutions in (C∗ )n of a system with n
polynomial equations in n unknowns is bounded by the mixed volume of their support sets.
Afterwards, combinatorial invariants of the same type also allowed to obtain bounds for
the number of isolated solutions of the system in the affine space Cn (see, for example, [20],
[14], [9] and [5]). In [19], another refinement of Bernstein’s bound was given by introducing
mixed integrals of concave functions to estimate the number of isolated solutions in C ×
∗
Partially supported by the following Argentinian grants: PIP 11220130100527CO CONICET (2014-2016) and UBACYT 2017, 20020160100039BA.
(C∗ )n−1 . In addition, the equidimensional decomposition of the affine variety defined by
a generic system with given supports and the degree of this variety are also related to the
geometry and combinatorics of the supports (see [8]).
Even though for generic sparse polynomial systems their common zeroes in (C∗ )n are
simple, their isolated roots with zero coordinates may have greater multiplicities. The aim
of this paper is to prove formulas for the multiplicity of the isolated affine zeroes of generic
sparse polynomial systems in terms of the geometry of their supports. This dependence
is already present in the seminal work of Kushnirenko ([12]), where the Milnor number of
the singularity at the origin of a hypersurface is studied.
Several authors have used geometric tools, including convex sets, volumes and covolumes, to solve related problems. Geometric invariants of this type are considered in [23] to
determine multiplicities of monomial ideals in local rings. In [6, Chapter 5], the multiplicity of a singular point on a toric variety is given as a normalized volume. A particular case
of this result is recovered in [3], where the multiplicity of the origin as an isolated zero of a
generic unmixed polynomial system is computed under the assumption that each polynomial contains a pure power of each variable. A generalization of this result to the mixed
case under the same assumption can be found in [10], where the multiplicity of the origin
is expressed in terms of mixed covolumes. Recently, in [16] a formula for the intersection
multiplicity at the origin of the hypersurfaces defined in Cn by n generic polynomials with
fixed Newton diagrams is proved.
In this paper, we obtain formulas for the multiplicities of all the affine isolated zeros
of a generic polynomial system of n polynomials in n variables with given supports in
terms of mixed volumes and, alternatively, in terms of mixed integrals of convex functions
associated to the supports of the polynomials involved.
First, we consider the case of the origin. In this case, our formulas for the multiplicity
can be seen as a generalization of those in [10], in the sense that the only hypotheses on the
supports we make are the necessary ones, proved in [8, Proposition 6], so that the origin
is an isolated zero of a generic system with the given supports. The previous approach
from [16] to compute the multiplicity of the origin under no further assumptions on the
supports leads to a formula which, unlike ours, is not symmetric in the input polynomials.
In order to analyze the case of arbitrary affine isolated zeros, the result in [8, Proposition 6] enables us to determine all sets I ⊂ {1, . . . , n} such that a generic system with
the given supports has isolated zeros whose vanishing coordinates are indexed by I. For
such an isolated zero, we prove that its multiplicity equals the multiplicity of the origin as
an isolated zero of an associated generic sparse system of #I polynomials in #I variables
whose supports can be explicitly defined from the input supports and the set I. Thus,
a formula for the multiplicity of an arbitrary affine zero of the system follows from our
previous result concerning the multiplicity of the origin.
The paper is organized as follows: Section 2 recalls the definitions and basic properties
of mixed volumes and mixed integrals, and describes the algorithmic approach to compute
multiplicities of isolated zeros of polynomial systems by means of basic linear algebra given
in [4], which we use as a tool. In Section 3, formulas for the multiplicity of the origin are
obtained, first for systems where each polynomial contains a pure power of each variable
and then, in the general case. Finally, Section 4 is devoted to computing the multiplicity
of an arbitrary affine isolated zero of a generic system.
2 Preliminaries
2.1 Mixed volume and stable mixed volume
Let A1 , . . . , An be finite subsets of (Z≥0 )n . A sparse polynomial system supported on
A = (A1 , . . . , An ) is given by polynomials
fj = Σ_{a∈Aj} cj,a x^a
in the variables x = (x1 , . . . , xn ), with cj,a ∈ C \ {0} for each a ∈ Aj and 1 ≤ j ≤ n.
We denote by M Vn (A) = M Vn (A1 , . . . , An ) the mixed volume of the convex hulls of A1 , . . . , An in Rn , which is defined as
M Vn (A) = Σ_{J⊂{1,...,n}} (−1)^{n−#J} Voln ( Σ_{j∈J} conv(Aj ) )
(see, for example, [2, Chapter 7]). The mixed volume of A is an upper bound for the
number of isolated roots in (C∗ )n of a sparse system supported on A (see [1]).
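For instance (a toy example added for illustration): if n = 2, A1 = {(0, 0), (1, 0), (0, 1)} and A2 = {(0, 0), (2, 0), (0, 2)}, then Vol2 (conv(A1 )) = 1/2, Vol2 (conv(A2 )) = 2 and Vol2 (conv(A1 ) + conv(A2 )) = 9/2, so M V2 (A1 , A2 ) = 9/2 − 1/2 − 2 = 2, matching the number of solutions in (C∗ )2 of a generic system of one affine linear and one quadratic equation.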
The stable mixed volume of A = (A1 , . . . , An ), denoted by SMn (A) = SMn (A1 , . . . , An ),
is introduced in [9] to estimate the number of isolated roots in Cn of a sparse polynomial
system supported on A and is defined as follows. Let A0 = (A01 , . . . , A0n ) be the family
with A0j := Aj ∪ {0} for every 1 ≤ j ≤ n, and let ω 0 = (ω10 , . . . , ωn0 ) be the lifting function
for A0 defined by ωj0 (q) = 0 if q ∈ Aj and ωj0 (0) = 1 if 0 ∉ Aj . Consider the polytope Q0 in Rn+1 obtained by taking the Minkowski (pointwise) sum of the convex hulls of the
graphs of ω10 , . . . , ωn0 . The projection of the lower facets of Q0 (that is, the n-dimensional
faces with inner normal vector with a positive last coordinate) induces a subdivision of
A0 . A cell C = (C1 , . . . , Cn ), with Cj ⊂ A0j for every 1 ≤ j ≤ n, of this subdivision is
said to be stable if it corresponds to a facet of Q0 having an inner normal vector with all
non-negative coordinates. The stable mixed volume SMn (A1 , . . . , An ) is the sum of the
mixed volumes of all the stable cells in the subdivision of A0 .
Note that A = (A1 , . . . , An ) is a stable cell in the defined subdivision of A0 , namely,
the cell with associated inner normal vector (0, . . . , 0, 1); therefore, we have that
M Vn (A1 , . . . , An ) ≤ SMn (A1 , . . . , An ) ≤ M Vn (A1 ∪ {0}, . . . , An ∪ {0}).
2.2 Mixed integrals for concave and convex functions
Let P1 , . . . , Pn be polytopes in Rn−1 , and, for 1 ≤ j ≤ n, let σj : Pj → R be a concave
function and ρj : Pj → R a convex function. Following [18], we can define concave
(respectively convex) functions as:
σi ⊞ σj : Pi + Pj → R,
σi ⊞ σj (x) = max{σi (y) + σj (z) : y ∈ Pi , z ∈ Pj , y + z = x}
ρi ⊞′ ρj : Pi + Pj → R,
ρi ⊞′ ρj (x) = min{ρi (y) + ρj (z) : y ∈ Pi , z ∈ Pj , y + z = x}.
Note that ρi ⊞′ ρj = −(−ρi ) ⊞ (−ρj ).
In the same way, for every non-empty subset J ⊂ {1, . . . , n}, we can define
⊞j∈J σj : Σ_{j∈J} Pj → R and ⊞′j∈J ρj : Σ_{j∈J} Pj → R.
The mixed integrals of σ1 , . . . , σn (respectively, ρ1 , . . . , ρn ) are defined as:
M In (σ1 , . . . , σn ) = Σ_{k=1}^{n} (−1)^{n−k} Σ_{J⊂{1,...,n}, #J=k} ∫_{Σ_{j∈J} Pj} (⊞j∈J σj )(x) dx,
M In′ (ρ1 , . . . , ρn ) = Σ_{k=1}^{n} (−1)^{n−k} Σ_{J⊂{1,...,n}, #J=k} ∫_{Σ_{j∈J} Pj} (⊞′j∈J ρj )(x) dx.
For a polytope P ⊂ Rn−1 , a convex function ρ : P → R and a concave function
σ : P → R such that ρ(x) ≤ σ(x) for every x ∈ P , we denote
Pρ,σ = conv({(x, ρ(x)) : x ∈ P } ∪ {(x, σ(x)) : x ∈ P }).
Given a polytope Q ⊂ Rn , if π : Rn → Rn−1 is the projection to the first n − 1
coordinates, we may define a concave function σQ : π(Q) → R and a convex function
ρQ : π(Q) → R as:
σQ (x) = max{xn ∈ R : (x, xn ) ∈ Q}
and
ρQ (x) = min{xn ∈ R : (x, xn ) ∈ Q}.
Remark 1 The functions σQ and ρQ defined above parameterize the lower and upper
envelopes of Q respectively. Moreover, π(Q)ρQ ,σQ = Q.
Let Q1 , . . . , Qn be polytopes in Rn . For 1 ≤ j ≤ n, let σj = σQj and ρj = ρQj . Let J ⊂ {1, . . . , n}, J ≠ ∅. Then ⊞j∈J σj : Σ_{j∈J} π(Qj ) → R and ⊞′j∈J ρj : Σ_{j∈J} π(Qj ) → R parameterize the upper and lower envelopes of Σ_{j∈J} Qj respectively.
2.3 Multiplicity matrices
In order to compute multiplicities of isolated zeros of polynomial systems, we will follow
the algorithmic approach from [4] based on duality theory, which we briefly recall in this
section.
Let f = (f1 , . . . , fn ) be a system of polynomials in C[x1 , . . . , xn ]. Denote by I the ideal
of C[x] = C[x1 , . . . , xn ] generated by f1 , . . . , fn .
For an isolated zero ζ ∈ Cn of the system f , we denote multζ (f ) its multiplicity,
defined as the dimension (as a C-vector space) of the local ring C[x]mζ /IC[x]mζ , where
mζ = (x1 − ζ1 , . . . , xn − ζn ) is the maximal ideal associated with ζ (see, for instance, [2,
Chapter 4, Definition (2.1)]).
Let Dζ (I) be the dual space of the ideal I at ζ; namely, the vector space
n
o
X
Dζ (I) = c =
cα ∂α [ζ] | c(f ) = 0 for all f ∈ I ,
α∈(Z≥0 )n
where, for every α = (α1 , . . . , αn ) ∈ (Z≥0 )n , cα ∈ C,
∂α =
∂ |α|
1
,
α1
α1 ! . . . αn ! ∂x1 . . . , xαnn
4
(1)
and
∂α [ζ] : C[x] → C,
∂α [ζ](f ) = (∂α f )(ζ).
The dimension of Dζ (I) equals the multiplicity of ζ as a zero of I (see [15], [22]).
For every k ≥ 0, consider the subspace
n
o
X
cα ∂α [ζ] | c(f ) = 0 for all f ∈ I
Dζk (I) = c =
α∈(Z≥0 )n , |α|≤k
of all functionals in Dζ (I) with differential order bounded by k. Since ζ is an isolated
common zero of I, there exists k0 ∈ Z≥0 such that Dζ (I) = Dζk0 (I) = Dζk (I) for all k ≥ k0
and dim(Dζk (I)) < dim(Dζk+1 (I)) for every 0 ≤ k < k0 (see [4, Lemma 1]).
Following [4, Section 4], the dimension of the vector spaces Dζk (I) can be computed
by means of the multiplicity matrices, defined as follows. For k = 0, set S0 (f , ζ) =
[f1 (ξ) · · · fn (ξ)]t = 0 ∈ Cn×1 . Take ≺ a graded monomial ordering. For k ≥ 1, consider
the sets Ik = {α ∈ (Z≥0 )n | |α| ≤ k} ordered by ≺, and Ik−1 × {1, . . . , n} with the
ordering
k+n
(β, j) ≺ (β ′ , j ′ ) if β ≺ β ′ or β = β ′ and j < j ′ . Let Sk (f , ζ) be the k−1+n
k−1 n ×
k
matrix whose columns are indexed by Ik (corresponding to the differential functionals ∂α
for α ∈ Ik ) and whose rows are indexed by (β, j) ∈ Ik−1 × {1, . . . , n} (corresponding to
the polynomials (x − ζ)β fj ) such that the entry at the intersection of the row indexed by
(β, j) and the column indexed by α is
(Sk (f , ζ))(β,j),α = ∂α ((x − ζ)β fj )(ζ).
(Here, (x − ζ)β = (x1 − ζ1 )β1 · · · (xn − ζn )βn .) Then, the dimension of Dζk (I) equals the
dimension of the nullspace of Sk (f , ζ) (see [4, Theorems 1 and 2]). As a consequence:
Proposition 2 With the previous assumptions and notation, if
k0 = min{k ∈ Z≥0 | dim(ker(Sk (f , ζ))) = dim(ker(Sk+1 (f , ζ)))},
the multiplicity of ζ as an isolated zero of f is multζ (f ) = dim(ker(Sk (f , ζ))) for any
k ≥ k0 .
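The following small script is a sketch (using sympy; it is our illustration, not code from [4]) of how Proposition 2 can be used in practice. It builds the matrices Sk (f , ζ) for the toy system f = (x^2, y^2) at ζ = 0 and stops when the nullity stabilizes; the computed multiplicity is 4. The helper names are ours.

from itertools import product
from math import factorial
import sympy as sp

x, y = sp.symbols('x y')
variables = (x, y)
f = [x**2, y**2]          # toy system; its only common zero is the origin
zeta = (0, 0)

def exponents(n, k):
    # all alpha in (Z_{>=0})^n with |alpha| <= k (any order works for ranks)
    return [a for a in product(range(k + 1), repeat=n) if sum(a) <= k]

def normalized_derivative(g, alpha, point):
    # (1/alpha!) * d^{|alpha|} g / dx^alpha, evaluated at the point
    coeff = 1
    for v, a in zip(variables, alpha):
        g = sp.diff(g, v, a)
        coeff *= factorial(a)
    return g.subs(dict(zip(variables, point))) / coeff

def nullity_S(k):
    n = len(variables)
    cols = exponents(n, k)
    rows = []
    for beta in exponents(n, k - 1):
        for j in range(n):
            g = f[j]
            for v, c, b in zip(variables, zeta, beta):
                g = g * (v - c) ** b          # (x - zeta)^beta * f_j
            rows.append(g)
    M = sp.Matrix([[normalized_derivative(g, alpha, zeta) for alpha in cols]
                   for g in rows])
    return len(cols) - M.rank()

prev, k = 1, 1                                # dim ker S_0 = 1
while True:
    cur = nullity_S(k)
    if cur == prev:
        break
    prev, k = cur, k + 1
print(prev)                                   # prints 4 = mult_0(x^2, y^2)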
3 Multiplicity of the origin
Consider a family A = (A1 , . . . , An ) of finite sets in (Z≥0 )n such that 0 6∈ Aj for all
1 ≤ j ≤ n. Under this assumption, 0 ∈ Cn is a common zero of any sparse system of
polynomials f1 , . . . , fn ∈ C[x1 , . . . , xn ] supported on A.
We are interested in the case when 0 is an isolated common zero of the system. By
[8, Proposition 6], for a generic family of polynomials f = f1 , . . . , fn ∈ C[x1 , . . . , xn ]
supported on A, we have that 0 is an isolated point of V (f ) if and only if #I + #JI ≥ n
for all I ⊂ {1, . . . , n}, where JI is the set of subindexes of all polynomials that do not
vanish when we evaluate xi = 0 for all i ∈ I.
Every c = (c1 , . . . , cn ) ∈ C#A1 × · · · × C#An defines a system fc of polynomials with
coefficients c supported on a family of subsets of A1 , . . . , An . If 0 is an isolated zero of fc ,
we define multA (c) := mult0 (fc ) ∈ Z>0 .
Lemma 3 Under the previous assumptions and notation, let µA be the minimum of the
function multA . Then, {c ∈ C#A1 × · · · × C#An | multA (c) = µA } contains a non-empty
Zariski open set of C#A1 × · · · × C#An .
Proof: It is straightforward, for example, from the computation of multiplicities by using
multiplicity matrices (see Section 2.3).
In this sense, we may speak of µA as the multiplicity of 0 as an isolated root of a
generic sparse system supported on A.
Therefore, in this section, we will focus on the computation of the multiplicity of the
origin as a common zero of a generic polynomial system supported on A = (A1 , . . . , An ),
under the following assumptions:
(H1) 0 ∉ Aj ⊂ (Z≥0 )n for every 1 ≤ j ≤ n;
(H2) for all I ⊂ {1, . . . , n}, if JI := {j ∈ {1, . . . , n} | ∃a ∈ Aj : ai = 0 ∀i ∈ I}, then
#I + #JI ≥ n.
Moreover, in [8, Proposition 5], these conditions are proved to be equivalent to the fact
that, for a generic system f supported on A and vanishing at 0 ∈ Cn , the variety V (f )
consists only of isolated points in Cn .
Under these assumptions, by [9, Theorem 2], the number of common zeros of f in Cn
counted with multiplicities is the stable mixed volume SMn (A). In particular, since the
number of common zeros of the system in (C∗ )n is the mixed volume M Vn (A) (see [1]),
we have that
mult0 (f ) ≤ SMn (A) − M Vn (A) ≤ M Vn (A0 ) − M Vn (A),
(2)
where A0 = (A1 ∪ {0}, . . . , An ∪ {0}).
3.1 A particular case
The first case we are going to consider is when the following stronger assumption on A
holds:
(H3) For every 1 ≤ i, j ≤ n, there exists µij ∈ N such that µij ei ∈ Aj , where ei is the ith
vector of the canonical basis of Qn .
Note that assumption (H3) implies that assumption (H2) holds.
Under condition (H3), in [10, Theorem 7.6] the multiplicity of the origin as an isolated
common zero of a generic polynomial system supported on A is computed in terms of
covolumes of coconvex bodies associated to A. Here, we will first re-obtain this result by
proving a formula using mixed volumes of convex polytopes and then, we will reformulate
this formula in terms of mixed integrals of convex functions.
We start by comparing stable mixed volumes with mixed volumes in our particular
setting.
6
Lemma 4 With the previous notation, if assumptions (H1) and (H3) hold, we have that
SMn (A1 , . . . , An ) = M Vn (A01 , . . . , A0n ).
Proof: It suffices to prove that every cell in the subdivision of A0 = (A01 , . . . , A0n ) induced
by the lifting function introduced in Section 2.1 is stable.
Consider a cell C = (C1 , . . . , Cn ) of the stated subdivision different from (A1 , . . . , An )
(for which the result is trivial), and let η = (η1 , . . . , ηn , 1) be its associated inner normal
vector. We have to show that ηi ≥ 0 for every 1 ≤ i ≤ n.
For every 1 ≤ j ≤ n, there exists aCj ∈ R such that aCj = η. (q, ωj0 (q)) for all q ∈ Cj
and aCj ≤ η. (q, ωj0 (q)) for all q ∈ A0j . As the cell C is not (A1 , . . . , An ), there exists j0
such that 0 ∈ Cj0 and 0 ∈
/ Aj0 ; then, aCj0 = η. (0, 1) = 1. Since, by assumption (H3), for
all 1 ≤ i ≤ n, there exists µij0 ∈ N such that µij0 ei ∈ A0j0 , then, 1 = aCj0 ≤ η . µij0 (ei , 0) =
ηi µij0 . The result follows from the fact that µij0 > 0 for all 1 ≤ i ≤ n.
Now, we can state our first formula for the multiplicity of the origin.
Proposition 5 Let f = (f1 , . . . , fn ) be a generic polynomial system in C[x1 , . . . , xn ] supported on a family A = (A1 , . . . , An ) of finite sets of (Z≥0 )n satisfying assumptions (H1)
and (H3). Then, the origin is an isolated common zero of f and
mult0 (f ) = M Vn (A0 ) − M Vn (A).
Proof: Assumption (H1) implies that the origin is a common zero of the polynomials f .
In addition, by assumption (H3), the only common zero of f not in (C∗ )n is the origin.
Then, all the common zeros of f in Cn are isolated and so, the number of these common
zeros is SMn (A) (see [9]). Finally, since the number of common zeros of f in (C∗ )n
is M Vn (A) (see [1]) and all these zeros have multiplicity 1 (see [17]), we deduce that
M Vn (A) + mult0 (f ) = SMn (A). Thus, the result follows from Lemma 4.
Example 1 Consider the generic polynomial system f = (f1 , f2 , f3 ) with
f1 = c11 x1 + c12 x2 + c13 x22 + c14 x21 x2 x3 + c15 x73
f2 = c21 x21 + c22 x31 + c23 x21 x2 + c24 x33 + c25 x72
f3 = c31 x1 + c32 x1 x2 + c33 x23 + c34 x2 x33 + c35 x72
with support family A = (A1 , A2 , A3 ), where
A1 = {(1, 0, 0), (0, 1, 0), (0, 2, 0), (2, 1, 1), (0, 0, 7)}
A2 = {(2, 0, 0), (3, 0, 0), (2, 1, 0), (0, 0, 3), (0, 7, 0)}
A3 = {(1, 0, 0), (1, 1, 0), (0, 0, 2), (0, 1, 3), (0, 7, 0)}
satisfying assumptions (H1) and (H3). Then, Proposition 5 states that 0 is an isolated
common root of f with multiplicity
mult0 (f ) = M V3 (A0 ) − M V3 (A) = 147 − 144 = 3.
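As a numerical sanity check of these mixed volume values (our own sketch, based on the inclusion-exclusion formula of Section 2.1 and scipy's convex hulls; it is not part of the original computation), one may run:

from itertools import combinations
from scipy.spatial import ConvexHull

def vol(points):
    # 3-dimensional volume of the convex hull; 0 if the hull is degenerate
    try:
        return ConvexHull(points).volume
    except Exception:
        return 0.0

def minkowski_sum(sets):
    # pointwise sum of finite point sets; conv(A) + conv(B) = conv(A + B)
    pts = [(0, 0, 0)]
    for S in sets:
        pts = [tuple(p + q for p, q in zip(u, a)) for u in pts for a in S]
    return pts

def mixed_volume(supports):
    # MV_n(A) = sum over nonempty J of (-1)^{n-#J} Vol_n(sum_{j in J} conv(A_j))
    n = len(supports)
    total = 0.0
    for k in range(1, n + 1):
        for J in combinations(range(n), k):
            total += (-1) ** (n - k) * vol(minkowski_sum([supports[j] for j in J]))
    return round(total)

A1 = [(1, 0, 0), (0, 1, 0), (0, 2, 0), (2, 1, 1), (0, 0, 7)]
A2 = [(2, 0, 0), (3, 0, 0), (2, 1, 0), (0, 0, 3), (0, 7, 0)]
A3 = [(1, 0, 0), (1, 1, 0), (0, 0, 2), (0, 1, 3), (0, 7, 0)]
A = [A1, A2, A3]
A0 = [S + [(0, 0, 0)] for S in A]
print(mixed_volume(A0), mixed_volume(A))     # should print 147 144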
In order to restate the formula in the previous proposition by means of a mixed integral
of suitable convex functions, we first introduce further notation and prove some auxiliary
results.
For 1 ≤ j ≤ n, let Qj = conv(Aj ) and ∆j = conv{0, λ1j e1 , . . . , λnj en }, where
λij = min{µ ∈ N | µei ∈ Qj } for 1 ≤ i ≤ n.   (3)
Let π : Rn → Rn−1 be the projection to the first n − 1 coordinates. As in Section 2.2,
let σj : π(Qj ) → R denote the concave function that parameterizes the upper envelope of
Qj and ρj : π(Qj ) → R the convex function that parameterizes its lower envelope. Since
π(∆j ) ⊂ π(Qj ), we may consider
σ̄j = σj |π(∆j ) and ρ̄j = ρj |π(∆j ) ,   (4)
the restrictions of these functions to π(∆j ).
For a non-empty set J ⊂ {1, . . . , n}, we denote
∆J := Σ_{j∈J} ∆j ,   QJ := Σ_{j∈J} Qj .
Lemma 6 Let J ⊂ {1, . . . , n} be a non-empty set. Then, every facet of ∆J that is not
contained in a hyperplane {xi = 0}, for 1 ≤ i ≤ n, has an inner normal vector with all
negative coordinates. We will call these facets the non-trivial facets of ∆J .
Proof: If J = {j} for some 1 ≤ j ≤ n, the result is straightforward because the only facet
satisfying the required conditions is F = conv{λ1j e1 , . . . , λnj en }, and λij ∈ N for every
1 ≤ i ≤ n.
Let F beP
a non-trivial facet of ∆J and η = (η1 , . . . , ηn ) an inner normal vector of F .
Then, F =
j∈J Fj , where Fj is a face of ∆j with inner normal vector η. For every
1 ≤ i ≤ n, since F is not contained in the hyperplane {xi = 0}, there exists ji ∈ J such
that λiji ei ∈ Fji ; then,
0 = η. 0 ≥ η. λiji ei = ηi λiji
(5)
and, so ηi ≤ 0.
If 0 ∈ Fj for some j ∈ J, then η. q ≥ η. 0 = 0 for every q ∈ ∆j ; in particular,
ηk λkj = η. λkj ek ≥ η. 0 = 0 for every 1 ≤ k ≤ n. This implies that η = 0, a contradiction.
Then, 0 ∈
/ Fj for every j ∈ J, the inequalities in (5) are strict and, therefore, ηi < 0 for
every 1 ≤ i ≤ n.
Lemma 7 Let J ⊂ {1, . . . , n} be a non-empty set. Then, for every point x in a non-trivial
facet of π(∆J ) we have that (⊞′j∈J ρj )(x) = 0.
Pn−1
Proof: If J =P
{j}, x ∈ conv{λ1j π(e1 ), . . . , λn−1,j π(en−1 )}, thatP
is, x = i=1
ti λij π(ei )
n−1
n−1
ti ρj (λij π(ei )) = 0.
for ti ≥ 0 with i=1
ti = 1. Then, since ρj is convex, 0 ≤ ρj (x) ≤ i=1
P
If #J > 1, let x be in a nontrivial facet F of P
π(∆J ). We have that F = j∈J Fj , with
Fj a face of π(∆j ) such that 0 ∈
/ Fj ; then, x = j∈J pj with pj ∈ F
j . Hence, ρj (pj ) = 0
P
′
′
and, by the definition of ⊞j∈J ρj , it follows that 0 ≤ (⊞j∈J ρj )(x) ≤ j∈J ρj (pj ) = 0.
8
Lemma 8 For every non-empty subset J of {1, . . . , n}, the convex function ⊞′j∈J ρj defined
over π(∆J ) parameterizes the lower envelope of QJ over the points of π(∆J ).
Proof: For every J ⊂ {1, . . . , n}, we denote
PJ := Σ_{j∈J} π(Qj ),   DJ := Σ_{j∈J} π(∆j )
and ⊞′j∈J ρj to the restriction of ⊞′j∈J ρj : PJ → R to DJ ⊂ PJ . With this notation, we
have to prove that
(6)
⊞′j∈J ρj = ⊞′j∈J ρj .
Before proceeding, we will state three basic results that will be applied throughout the
proof. We use the notation
ρJ := ⊞′j∈J ρj .
Claim I. If p1 lies in a non-trivial facet of DJ and p2 ∈ PJ , then for every x lying on the
line segment p1 p2 , we have ρJ (x) ≤ ρJ (p2 ): as x = (1−t)p1 +tp2 for 0 ≤ t ≤ 1, ρJ is convex
and ρJ ≡ 0 on the non-trivial facets of DJ , ρJ (x) ≤ (1 − t)ρJ (p1 ) + t ρJ (p2 ) = t ρJ (p2 ).
Claim II. If p1 ∈ DJ and p2 ∈
/ DJ , then for every x 6= p2 lying on the line segment p1 p2 ,
since DJ is a convex set, d(p1 , DJ ) < d(p2 , DJ ), where d(·, DJ ) is the distance to DJ .
Claim III. If p1 ∈ DJ and p2 ∈ (R≥0 )n−1 \DJ there exists t ∈ (0, 1] such that tp1 +(1−t)p2
lies in a non-trivial facet of DJ .
The proof will be done recursively. For a fixed non-empty set J ⊂ {1, . . . , n}, let J1 , J2
be disjoints sets such that J = J1 ∪ J2 and assume that identity (6) holds for each of them.
We will prove that if ρJk := ⊞′j∈Jk ρj , for k = 1, 2, then ρJ1 ⊞′ ρJ2 = ρJ1 ⊞′ ρJ2 .
Let x ∈ DJ . Then, there exist y0 ∈ DJ1 and z0 ∈ DJ2 such that x = y0 + z0 . Let
y ′ ∈ PJ1 and z ′ ∈ PJ2 be such that x = y ′ + z ′ and ρJ1 ⊞′ ρJ2 (x) = ρJ1 (y ′ ) + ρJ2 (z ′ ). If
y ′ ∈ DJ1 and z ′ ∈ DJ2 the result follows.
We first show that there exist y ′ and z ′ as before satisfying that y ′ ∈ DJ1 or z ′ ∈ DJ2 .
For every 0 ≤ t ≤ 1, if yt = (1 − t)y0 + ty ′ and zt = (1 − t)z0 + tz ′ , then x = yt + zt . If
y′ ∈
/ DJ1 and z ′ ∈
/ DJ2 , there exist 0 < t1 , t2 ≤ 1 such that yt1 and zt2 lie in non-trivial
facets of DJ1 and DJ2 respectively. Consider t0 = min{t1 , t2 }; then x = yt0 + zt0 and, by
Claim I, ρJ1 ⊞ ρJ2 (x) = ρJ1 (yt0 ) + ρJ2 (zt0 ).
Now, without loss of generality, assume that z ′ ∈ DJ2 . Consider the compact set
Cx = {y ∈ PJ1 | x − y ∈ DJ2 and ρJ1 ⊞′ ρJ2 (x) = ρJ1 (y) + ρJ2 (x − y)}.
We will prove that Cx ∩DJ1 6= ∅. If not, let y ∈ Cx be such that d(Cx , DJ1 ) = d(y, DJ1 ) > 0.
First, assume that z := x−y does not lie in a non-trivial facet of DJ2 . This implies that
z+w ∈ DJ2 for every w with sufficiently small non-negative coordinates. Let 0 < ǫ < 1 such
that (1−ǫ)y ∈
/ DJ1 and that z+ǫy ∈ DJ2 . Claims III and I imply that ρJ1 ((1−ǫ)y) ≤ ρJ1 (y)
and that ρJ2 (z + ǫy) ≤ ρJ2 (z) and, therefore, ρJ1 ⊞′ ρJ2 (x) = ρJ1 ((1 − ǫ)y) + ρJ2 (z + ǫy).
As, by Claim II, d((1 − ǫ)y, DJ1 ) < d(y, DJ1 ) we have a contradiction.
Assume now that z := x − y lies in non-trivial facets of DJ2 .
Recall that x = y0 + z0 with y0 ∈ DJ1 , z0 ∈ DJ2 . If z and z0 lie in the same non-trivial
facet of DJ2 , then the line segment zz0 is contained in this facet. On the other hand,
9
there exists 0 ≤ t ≤ 1 such that (1 − t)y0 + ty lies in a non-trivial facet of DJ1 . Therefore,
x = ((1−t)y0 +ty)+((1−t)z0 +tz), ρJ1 ⊞′ ρJ2 (x) = ρJ1 ((1−t)y0 +ty)+ρJ2 ((1−t)z0 +tz) = 0
and so, (1 − t)y0 + ty ∈ Cx ∩ DJ1 , which is a contradiction.
If z0 does not lie in any of the non-trivial facets of DJ2 containing z, let η 1 , . . . , η k
be inner normal vectors to these facets and consider the hyperplanes parallel to them
and containing y, which are defined by the equations η ℓ .(Y − y) = 0 for 1 ≤ ℓ ≤ k. As
η ℓ .y + η ℓ .z = η ℓ .y0 + η ℓ .z0 and η ℓ .z < η ℓ .z0 , then η ℓ .y0 < η ℓ .y. In addition, since all the
coordinates of η ℓ are negative (see Lemma 6) and y ∈ (R≥0 )n−1 , then η ℓ .y < 0. Therefore,
the hyperplane η ℓ .(Y −y) = 0 intersects the line segment 0y0 in a point λℓ y0 with 0 ≤ λℓ ≤
1. If λ = max{λℓ / 1 ≤ ℓ ≤ k}, consider yt = (1 − t)y + tλy0 and zt = x − yt for 0 ≤ t ≤ 1.
For t sufficiently small, we will show that zt ∈ DJ2 , that ρJ1 ⊞′ ρJ2 (x) = ρJ1 (yt ) + ρJ2 (zt )
and that d(yt , DJ1 ) < d(y, DJ1 ), which leads to a contradiction.
For 1 ≤ ℓ ≤ k, as λ ≥ λℓ , η ℓ .(y − λy0 ) ≥ 0; then η ℓ .(y − yt ) ≥ 0 and so η ℓ .zt =
η ℓ .z + η ℓ .(y − yt ) ≥ η ℓ .z. If z lies in a trivial facet of DJ2 , that is, zi = 0 for some 1 ≤ i ≤ n,
then yi = xi ; as (y0 )i ≤ xi , we have that (zt )i = t(yi − λ(y0 )i ) ≥ 0. Taking t sufficiently
small, zt satisfies all the remaining inequalities defining DJ2 and so, zt ∈ DJ2 . Moreover,
since y ∉ DJ1 , for t sufficiently small, yt ∉ DJ1 . Then, by Claim I, ρJ1 (yt ) ≤ ρJ1 (y). On
the other hand, zt lies in the same non-trivial facet of DJ2 as z, namely, the facet defined
by η ℓ0 . (Z − z) = 0 for ℓ0 such that λ = λℓ0 and, therefore, ρJ2 (zt ) = 0. We conclude that
ρJ1 (yt ) + ρJ2 (zt ) = ρJ1 ⊞′ ρJ2 (x). Finally, the inequality d(yt , DJ1 ) < d(y, DJ1 ) holds by
Claim II.
For every 1 ≤ j ≤ n, let Q0j = conv(Aj ∪{0}) and σj0 , ρ0j the functions that parameterize
its upper and lower envelopes respectively. Assumption (H3) ensures that π(Q0j ) = π(Qj ).
Lemma 9 For every 1 ≤ j ≤ n, ρ0j (x) = 0 if x ∈ π(∆j ) and ρ0j (x) = ρj (x) if x ∉ π(∆j ); moreover, σj0 = σj .
Proof: Since Qj ⊂ Q0j , then ρ0j (x) ≤ ρj (x) and σj (x) ≤ σj0 (x) for every x ∈ π(Qj ).
If x ∈ π(∆j ), there exists xn ≥ 0 such that (x, xn ) ∈ ∆j . Then, (x, xn ) = Σ_{i=1}^{n} ti λij ei , where Σ_{i=1}^{n} ti ≤ 1 and ti ≥ 0 for every 1 ≤ i ≤ n. Taking y = Σ_{i=1}^{n−1} ti λij ei , we have that π(y) = x, (y)n = 0 and y ∈ Q0j . Hence, ρ0j (x) = 0.
Consider now x ∈ π(Qj )\π(∆j ). Take (x, ρ0j (x)) ∈ Q0j = conv(Qj ∪ {0}). Then,
(x, ρ0j (x)) = tq, with q ∈ Qj and 0 < t ≤ 1. Since (x, ρ0j (x)) ∉ ∆j , there exists 0 < t′ < 1
such that t′ (x, ρ0j (x)) lies in the nontrivial facet of ∆j and so, q ′ := t′ (x, ρ0j (x)) ∈ Qj .
Then, the line segment qq ′ is contained in Qj ; in particular, (x, ρ0j (x)) ∈ Qj . It follows
that ρj (x) ≤ ρ0j (x).
If x ∈ π(Qj ) = π(Q0j ) ⊂ Rn−1 , consider (x, σj0 (x)) ∈ Q0j . Then, (x, σj0 (x)) = tq
with q ∈ Qj and 0 ≤ t ≤ 1. If y = tq + (1 − t)λnj en ∈ Qj , then π(y) = x and so,
σj (x) ≥ (y)n = σj0 (x) + (1 − t)λnj ≥ σj0 (x).
Now, we can restate the formula for the multiplicity of the origin in Proposition 5 as
a mixed integral of convex functions:
Theorem 10 Let A = (A1 , . . . , An ) be a family of finite sets in (Z≥0 )n satisfying assumptions (H1) and (H3). Let f = (f1 , . . . , fn ) be a generic system of sparse polynomials in
C[x1 , . . . , xn ] supported on A. For every 1 ≤ j ≤ n, let ρ̄j be the convex function defined
in (4). Then, the origin is an isolated common zero of f and
    mult0 (f ) = MIn′ (ρ̄1 , . . . , ρ̄n ).
Proof: By Proposition 5, it suffices to show that MVn (A0 ) − MVn (A) = MIn′ (ρ̄1 , . . . , ρ̄n ).
For every 1 ≤ j ≤ n, consider a constant νj ∈ R such that νj ≥ max(ρj ) ≥ max(ρ̄j ).
For J ⊂ {1, . . . , n}, let DJ = Σ_{j∈J} π(∆j ) and νJ = Σ_{j∈J} νj . Since νJ ≥ max(⊞′j∈J ρ̄j ),
we have that
    ∫_{DJ} ⊞′j∈J ρ̄j dx1 . . . dxn−1 = νJ Voln−1 (DJ ) − Voln ((DJ )⊞′j∈J ρ̄j ,νJ ).
Then, by Lemma 8,
    ∫_{DJ} ⊞′j∈J ρ̄j dx1 . . . dxn−1 = Voln ((DJ )0,νJ ) − Voln ((DJ )(⊞′j∈J ρj )|DJ ,νJ ).
Now, if PJ = Σ_{j∈J} π(Q0j ) = Σ_{j∈J} π(Qj ), Lemma 9 implies that
    Voln ((DJ )0,νJ ) − Voln ((DJ )(⊞′j∈J ρj )|DJ ,νJ ) = Voln ((PJ )⊞′j∈J ρ0j ,νJ ) − Voln ((PJ )⊞′j∈J ρj ,νJ ).
Finally, by Remark 1,
    Voln ((PJ )⊞′j∈J ρ0j ,νJ ) − Voln ((PJ )⊞′j∈J ρj ,νJ ) = Voln ((PJ )⊞′j∈J ρ0j ,⊞j∈J σ0j ) − Voln ((PJ )⊞′j∈J ρj ,⊞j∈J σj )
and
    Voln ((PJ )⊞′j∈J ρ0j ,⊞j∈J σ0j ) = Voln (Σ_{j∈J} Q0j ),   Voln ((PJ )⊞′j∈J ρj ,⊞j∈J σj ) = Voln (Σ_{j∈J} Qj ),
and so,
    ∫_{DJ} ⊞′j∈J ρ̄j dx1 . . . dxn−1 = Voln (Σ_{j∈J} Q0j ) − Voln (Σ_{j∈J} Qj ).
The theorem follows from the definitions of the mixed integral and the mixed volume.
Example 2 Consider the generic sparse polynomial system
f1 = c1,20 x1² + c1,11 x1 x2 + c1,04 x2⁴ + c1,13 x1 x2³ + c1,33 x1³x2³
f2 = c2,40 x1⁴ + c2,21 x1²x2 + c2,04 x2⁴ + c2,25 x1²x2⁵ + c2,13 x1 x2³
with supports
A1 = {(2, 0), (1, 1), (0, 4), (1, 3), (3, 3)},
A2 = {(4, 0), (2, 1), (0, 4), (2, 5), (1, 3)}.
Here, we have ∆1 = conv{(0, 0), (2, 0), (0, 4)} and ∆2 = conv{(0, 0), (4, 0), (0, 4)}.
[Figure: the polytopes conv(A1 ) and conv(A2 ) together with the projected simplices π(∆1 ) and π(∆2 ).]
To compute the multiplicity of the origin following Theorem 10, consider the convex
functions ρ1 : π(∆1 ) → R and ρ2 : π(∆2 ) → R:
[Figure: graphs of ρ1 , ρ2 and ρ1 ⊞ ρ2 .]
Therefore,
    mult0 (f ) = MI2′ (ρ1 , ρ2 ) = ∫_0^6 (ρ1 ⊞ ρ2 )(x) dx − ∫_0^2 ρ1 (x) dx − ∫_0^4 ρ2 (x) dx = 7.
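The computation in Example 2 can also be checked numerically. The following Python code is a small sketch of ours (not part of the original text; the helper names are ours). It interprets ρ1 ⊞ ρ2 as the inf-convolution min over x1 + x2 = x of ρ1 (x1 ) + ρ2 (x2 ), which parameterizes the lower envelope of Q1 + Q2 , and evaluates the three integrals above.

# Numerical sketch (ours, not from the paper). It checks MI'_2(rho_1, rho_2) = 7
# for Example 2, with rho_1 [+] rho_2 taken as the inf-convolution
# min_{x1+x2=x} rho_1(x1) + rho_2(x2).
import numpy as np

A1 = [(2, 0), (1, 1), (0, 4), (1, 3), (3, 3)]
A2 = [(4, 0), (2, 1), (0, 4), (2, 5), (1, 3)]

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def lower_envelope(points):
    """Vertices of the lower convex hull (Andrew's monotone chain)."""
    pts, hull = sorted(set(points)), []
    for p in pts:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return np.array(hull, dtype=float)

def evaluate(hull, x):
    return np.interp(x, hull[:, 0], hull[:, 1])

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

h1, h2 = lower_envelope(A1), lower_envelope(A2)    # rho_1 on [0,2], rho_2 on [0,4]

def box(x, steps=2001):                            # (rho_1 [+] rho_2)(x) on [0,6]
    x1 = np.linspace(max(0.0, x - 4.0), min(2.0, x), steps)
    return float(np.min(evaluate(h1, x1) + evaluate(h2, x - x1)))

xs = np.linspace(0.0, 6.0, 6001)
x1s, x2s = np.linspace(0.0, 2.0, 2001), np.linspace(0.0, 4.0, 4001)
MI = trapezoid(np.array([box(x) for x in xs]), xs) \
     - trapezoid(evaluate(h1, x1s), x1s) - trapezoid(evaluate(h2, x2s), x2s)
print(round(MI, 1))   # 7.0 (= 16 - 3 - 6), matching mult_0(f) = 7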
Remark 11 The computation of the multiplicity of the origin by means of mixed integrals
following Theorem 10 may involve smaller polytopes than its computation using mixed volumes according to Proposition 5, since it depends only on the points of the lower envelopes
of the polytopes Qj = conv(Aj ) that lie above the simplices π(∆j ) for j = 1, . . . , n.
3.2 General case
Consider now a family A = (A1 , . . . , An ) of finite sets in (Z≥0 )n satisfying conditions (H1)
and (H2). Let f = (f1 , . . . , fn ) be a system of generic sparse polynomials in C[x1 , . . . , xn ]
supported on A.
For M ∈ Z>0 , let ∆M := {M ei }_{i=1}^{n} and, for all 1 ≤ j ≤ n, let Aj∆M := Aj ∪ ∆M and Aj∆M,0 := Aj∆M ∪ {0}. Set A∆M := (A1∆M , . . . , An∆M ) and A∆M,0 := (A1∆M,0 , . . . , An∆M,0 ).
Proposition 12 With the previous assumptions and notation, we have that 0 is an isolated common zero of f and, for every M ≫ 0, its multiplicity is
mult0 (f ) = M Vn (A∆M ,0 ) − M Vn (A∆M ).
Moreover, the identity holds for every M > mult0 (f ). In particular, it suffices to take
M = M Vn (A0 ) − M Vn (A) + 1.
Proof: Conditions (H1) and (H2) imply that 0 is an isolated common zero of the generic
system f supported on A.
Take M > mult0 (f ) and consider polynomials
    gj = fj + Σ_{i=1}^{n} cj,M ei xi^M
with support sets Aj∆M = Aj ∪ ∆M and generic coefficients for all 1 ≤ j ≤ n.
Since A∆M = (A1∆M , . . . , An∆M ) fulfills the conditions (H1) and (H3) stated in Section 3.1, by Proposition 5 the multiplicity of the origin as a common isolated root of g := (g1 , . . . , gn ) is mult0 (g) = MVn (A∆M,0 ) − MVn (A∆M ).
Let us prove that mult0 (f ) = mult0 (g). To do so, we consider the matrices Sk (f , 0) and
Sk (g, 0), for k ≥ 0, introduced in Section 2.3. Note that, since M > mult0 (f ), in order to
compute mult0 (f ), it suffices to compare the dimensions of the nullspaces of the matrices
Sk (f , 0) for 0 ≤ k ≤ M − 1. Now, for every k ≤ M − 1, α, β ∈ (Z≥0 )n , with |α| ≤ k and
|β| ≤ k − 1, and every 1 ≤ j ≤ n, we have that
    (Sk (f , 0))(β,j),α = (1/α!) ∂^|α|/∂x^α (x^β fj )(0) = (1/α!) ∂^|α|/∂x^α (x^β gj )(0) = (Sk (g, 0))(β,j),α .
Since the dimensions of the nullspaces of Sk (f , 0) = Sk (g, 0) stabilize for k < M , then,
mult0 (f ) = mult0 (g).
The fact that we can take M = M Vn (A0 ) − M Vn (A) + 1 follows from inequality (2).
From the previous result and Theorem 10 we can express the multiplicity of the origin
as an isolated zero of a generic sparse system via mixed integrals:
Corollary 13 Let A = (A1 , . . . , An ) be a family of finite sets in (Z≥0 )n satisfying assumptions (H1) and (H2). Let f = (f1 , . . . , fn ) be a generic family of polynomials in
C[x1 , . . . , xn ] supported on A. Let M := M Vn (A0 ) − M Vn (A) + 1 and, for 1 ≤ j ≤ n,
let ρj∆M be the convex function that parameterizes the lower envelope of the polytope conv(Aj∆M ) and ρ̄j∆M its restriction defined as in (4). Then,
    mult0 (f ) = MIn′ (ρ̄1∆M , . . . , ρ̄n∆M ).
The following property enables us to deal with smaller support sets when computing
multiplicities.
Proposition 14 Let f = (f1 , . . . , fn ) be a generic system of polynomials in C[x1 , . . . , xn ]
supported on a family A = (A1 , . . . , An ) of finite subsets of (Z≥0 )n . Assume that 0 is an
isolated common zero of f . Let f1 = Σ_{a∈A1} c1,a x^a . If α, α + β ∈ A1 with β ∈ (Z≥0 )n \ {0},
then
mult0 (f ) = mult0 (f1 − c1,α+β xα+β , . . . , fn ).
Proof: Let h1 , . . . , hn be polynomials of the form hj = fj + Σ_{i=1}^{n} cj,M ei xi^M with cj,M ei ∈ C generic coefficients and M ∈ N sufficiently big such that
    mult0 (f1 , . . . , fn ) = mult0 (h1 , . . . , hn ),
    mult0 (f1 − c1,α+β x^{α+β} , . . . , fn ) = mult0 (h1 − c1,α+β x^{α+β} , . . . , hn ),
α + β ≠ M ei for all 1 ≤ i ≤ n and A1 ⊂ conv({0, M e1 , . . . , M en }). The existence of M is
ensured by Proposition 12 and its proof.
To prove that mult0 (h1 , . . . , hn ) = mult0 (h1 − c1,α+β xα+β , . . . , hn ), by Proposition 5, it
suffices to show that conv(A1 ∪ {M ei }ni=1 \{α + β}) = conv(A1 ∪ {M ei }ni=1 ). This follows
from the fact that α + β ∈ conv({α, M e1 , . . . , M en }), since
    α + β = (1 − |β|/(M − |α|)) α + Σ_{i=1}^{n} ( βi /M + |β|αi /((M − |α|)M ) ) M ei
is a convex linear combination of α, M e1 , . . . , M en .
As a consequence of Proposition 14 we are able to obtain a refined formula for the
multiplicity of the origin for generic polynomials supported on a family A satisfying conditions (H1) and (H2), with no need of adding extra points to the supports whenever they
intersect the coordinate axes.
Proposition 15 Let A = (A1 , . . . , An ) be a family of finite subsets of (Z≥0 )n satisfying
assumptions (H1) and (H2), and let f = (f1 , . . . , fn ) be a generic sparse polynomial system
supported on A. Let M ∈ Z, M ≥ M Vn (A0 )−M Vn (A)+1. Then, 0 is an isolated common
zero of f with multiplicity
    mult0 (f ) = MVn (A1^{M,0} , . . . , An^{M,0} ) − MVn (A1^M , . . . , An^M ),
where, for every 1 ≤ j ≤ n, Aj^M := Aj ∪ {M ei : 1 ≤ i ≤ n, Aj ∩ {µei | µ ∈ Z≥0 } = ∅} and Aj^{M,0} := Aj^M ∪ {0}.
Example 3 Consider the generic polynomial system f = (f1 , f2 , f3 ) with
f1 = c11 x1 + c12 x2 + c13 x2² + c14 x1²x2 x3
f2 = c21 x1² + c22 x1³ + c23 x1²x2 + c24 x3³
f3 = c31 x1 + c32 x1 x2 + c33 x3² + c34 x2 x3³
with support family A = (A1 , A2 , A3 ), where
A1 = {(1, 0, 0), (0, 1, 0), (0, 2, 0), (2, 1, 1)}
A2 = {(2, 0, 0), (3, 0, 0), (2, 1, 0), (0, 0, 3)}
A3 = {(1, 0, 0), (1, 1, 0), (0, 0, 2), (0, 1, 3)}
which satisfies assumptions (H1) and (H2). Then, 0 is an isolated common root of f . In
order to compute its multiplicity according to Proposition 15, let
M := M V3 (A0 ) − M V3 (A) + 1 = 28 − 22 + 1 = 7,
and consider the modified support sets
A1^7 = {(1, 0, 0), (0, 1, 0), (0, 2, 0), (2, 1, 1), (0, 0, 7)},
A2^7 = {(2, 0, 0), (3, 0, 0), (2, 1, 0), (0, 0, 3), (0, 7, 0)},
A3^7 = {(1, 0, 0), (1, 1, 0), (0, 0, 2), (0, 1, 3), (0, 7, 0)},
which coincide with the supports of the polynomials in Example 1. Therefore,
    mult0 (f ) = MV3 (A1^{7,0} , A2^{7,0} , A3^{7,0} ) − MV3 (A1^7 , A2^7 , A3^7 ) = 3.
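These mixed volumes can be cross-checked numerically by inclusion–exclusion over Minkowski sums, using the normalization MVn (P, . . . , P ) = n! Voln (P ). The following Python sketch is ours and not part of the original text; with the supports above it should reproduce MV3 (A0 ) = 28, MV3 (A) = 22 and the multiplicity 3 stated in the example.

# Sketch (ours, not from the paper): mixed volumes via inclusion-exclusion,
# with the normalization MV_n(P,...,P) = n! Vol_n(P) used for BKK-type counts.
from itertools import combinations
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum(point_sets):
    pts = np.zeros((1, 3))
    for S in point_sets:
        pts = (pts[:, None, :] + np.asarray(S, float)[None, :, :]).reshape(-1, 3)
    return pts

def mixed_volume(point_sets):
    n, mv = len(point_sets), 0.0
    for k in range(1, n + 1):
        for combo in combinations(point_sets, k):
            mv += (-1) ** (n - k) * ConvexHull(minkowski_sum(combo)).volume
    return round(mv)

A1 = [(1,0,0), (0,1,0), (0,2,0), (2,1,1)]
A2 = [(2,0,0), (3,0,0), (2,1,0), (0,0,3)]
A3 = [(1,0,0), (1,1,0), (0,0,2), (0,1,3)]
A  = [A1, A2, A3]
A0 = [Aj + [(0,0,0)] for Aj in A]                       # add the origin to each support
M  = mixed_volume(A0) - mixed_volume(A) + 1             # expected: 28 - 22 + 1 = 7

A7  = [A1 + [(0,0,7)], A2 + [(0,7,0)], A3 + [(0,7,0)]]  # the sets A_j^7 above
A70 = [Aj + [(0,0,0)] for Aj in A7]
print(M, mixed_volume(A70) - mixed_volume(A7))          # expected: 7 3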
4 Multiplicity of other roots with zero coordinates
Let A = (A1 , . . . , An ) be a family of finite sets in (Z≥0 )n and f = (f1 , . . . , fn ) ⊂
C[x1 , . . . , xn ] a generic family of polynomials with support set A.
For I ⊂ {1, . . . , n}, recall that
JI = {j ∈ {1, . . . , n} | ∃a ∈ Aj : ai = 0 ∀i ∈ I}
is the set of indices of the polynomials in f that do not vanish identically under the
specialization xi = 0 for every i ∈ I. Also, for every j ∈ JI , we denote
AIj = {a ∈ Aj | ai = 0 ∀i ∈ I}.
Following [8, Section 3.2.1], the system f has isolated common zeros lying in OI :=
{x ∈ Cn | xi = 0 if and only if i ∈ I} if and only if
(A1) #I + #JI = n,
(A2) for every Ĩ ⊂ I, #Ĩ + #JĨ ≥ n,
(A3) for every J ⊂ JI , dim(Σ_{j∈J} AIj ) ≥ #J.
From now on, we will consider a non-empty set I ⊂ {1, . . . , n} satisfying the conditions
above and we will study the multiplicity of the isolated common zeros of f lying in OI .
4.1 Multiplicity of affine isolated roots
The aim of this section is to compute multiplicities of the isolated zeros of f in OI in terms
of mixed volumes and mixed integrals associated to the system supports. The key result
that allows us to do this shows that these multiplicities coincide with the multiplicity of
the origin as an isolated root of an associated generic sparse system:
Theorem 16 Let A = (A1 , . . . , An ) be a family of finite sets in (Z≥0 )n and f = (f1 , . . . , fn )
a generic sparse system of polynomials in C[x1 , . . . , xn ] supported on A. Assume that
∅ ≠ I ⊂ {1, . . . , n} satisfies conditions (A1), (A2) and (A3). Let ζ ∈ Cn be an isolated
zero of f with ζ ∈ OI . Then
    multζ (f ) = mult0 (g),
for a system g := (gj )j∉JI of generic polynomials with supports BjI := πI (Aj ) for every
j ∉ JI , where πI : Zn → Z^{#I} is the projection onto the coordinates indexed by I.
For this statement to make sense, we need the following:
Lemma 17 Under the previous assumptions and notation, let BI = (BjI )j∉JI . Then,
0 ∈ C^{#I} is an isolated zero of a generic polynomial system supported on BI .
Proof: It suffices to show that B I satisfies conditions (H1) and (H2) stated at the beginning of Section 3 (see [8, Proposition 6]).
By the definition of JI , it follows that 0 ∉ πI (Aj ) = BjI for every j ∉ JI .
In order to simplify notation, we will index the coordinates of Z^{#I} by the corresponding elements of I.
To prove that condition (H2) holds, we must show that #Ĩ + #JĨ(BI ) ≥ #I for every
Ĩ ⊂ I, where JĨ(BI ) = {j ∉ JI | ∃b ∈ BjI : bi = 0 ∀i ∈ Ĩ}. Now, for every Ĩ ⊂ I, we have that
    JĨ(A) = JI ∪ {j ∉ JI | ∃a ∈ Aj : ai = 0 ∀i ∈ Ĩ} = JI ∪ JĨ(BI ).
Under assumption (A2) on I, the inequality #Ĩ + #JĨ(A) ≥ n holds; then,
    #Ĩ + #JĨ(BI ) = #Ĩ + #JĨ(A) − #JI ≥ n − #JI = #I,
where the last identity follows from assumption (A1).
In order to prove Theorem 16, we first introduce some notation and prove some auxiliary results.
For a polynomial g ∈ C[x1 , . . . , xn ], gI will denote the polynomial in C[(xi )i6∈I ] obtained
from g by specializing xi = 0 for every i ∈ I, and fI the associated family of polynomials
fI = ((fj )I )j∈JI .
Then, fI is the set of polynomials obtained by specializing the variables indexed by I to 0
in the polynomials in f and discarding the ones that vanish identically, and AI = (AIj )j∈JI
is the family of supports of fI .
We will use an auxiliary polynomial system defined as follows:
    f (I) = (f1,I , . . . , fn,I ), where fj,I = (fj )I if j ∈ JI , and fj,I = fj if j ∉ JI .
Note that the family of supports of these polynomials is
    A(I) = (A1,I , . . . , An,I ), where Aj,I = AIj if j ∈ JI , and Aj,I = Aj if j ∉ JI .
Lemma 18 Under the previous assumptions and notation, if ζ ∈ Cn is an isolated zero
of f lying in OI , then ζ is also an isolated zero of f (I) and multζ (f ) ≤ multζ (f (I)).
Proof: The fact that ζ is an isolated zero of f (I) follows from the facts that f (I) is a generic
system supported on A(I) vanishing at ζ, and that, for every Ie ⊂ I, JIe(A(I)) = JIe(A).
The inequality between the multiplicities is a consequence of Lemma 3.
We now focus on a special case of polynomial systems with the same structure as f (I),
namely, systems of n polynomials in n variables which contain r polynomials depending
only on r variables.
Proposition 19 Let h = (h1 , . . . , hn ) be a system of polynomials in C[x1 , . . . , xn ] such
that h1 , . . . , hr ∈ C[x1 , . . . , xr ]. Let ξ ∈ Cr be an isolated nondegenerate common zero
of h1 , . . . hr such that 0 ∈ Cn−r is an isolated zero of hξ := (hr+1 (ξ, xr+1 , . . . , xn ), . . . ,
hn (ξ, xr+1 , . . . , xn )). Then, ζ = (ξ, 0) ∈ Cn is an isolated zero of h satisfying:
multζ (h) = mult0 (hξ ).
Proof: Under our assumptions, it follows that ζ = (ξ, 0) is an isolated zero of the system
h: if there is an irreducible curve C passing through ζ, since ξ is an isolated common zero
of h1 , . . . , hr ∈ C[x1 , . . . , xr ], we have that C ⊂ {x1 = ξ1 , . . . , xr = ξr } and so, (ξ, 0) ∈
C ⊂ {x1 = ξ1 , . . . , xr = ξr , hr+1 (x) = 0 . . . , hn (x) = 0} = {ξ} × V (hξ ), contradicting the
fact that 0 is an isolated zero of hξ .
In order to prove the stated equality of multiplicities, we will compare the multiplicity
matrices Sk (h, ζ) and Sk (hξ , 0) for k ∈ N (see Section 2.3 for the definition of multiplicity
matrices). To this end, we will analyze the structure of Sk (h, ζ).
Recall that for the system h, for k ≥ 1, the columns of Sk (h, ζ) are indexed by α
for |α| ≤ k and its rows are indexed by (β, j) for |β| ≤ k − 1 and 1 ≤ j ≤ n; the entry
corresponding to row (β, j) and column α is
(Sk (h, ζ))(β,j),α = ∂α ((x − ζ)β hj )(ζ),
where ∂α is defined in (1).
Note that, for γ = (γ1 , . . . , γn ) ∈ (Z≥0 )n , we have
    (1/α!) ∂^|α|/∂x^α ((x − ζ)^β x^γ )(ζ) = ∏_{i=1}^{r} (γi choose αi −βi ) ξi^{γi +βi −αi}   if βi ≤ αi ≤ βi + γi ∀ 1 ≤ i ≤ r and αi = βi + γi ∀ r + 1 ≤ i ≤ n,   (7)
and 0 otherwise.
Then, an entry of Sk (h, ζ) corresponding to a row indexed by (β, j) and a column
indexed by α is 0 whenever |β| ≥ |α|.
We will first consider the columns of Sk (h, ζ) indexed by α = (0, . . . , 0, αr+1 , . . . , αn ) ≠
0. For 1 ≤ j ≤ r, since the polynomial hj does not depend on the variables xr+1 , . . . , xn ,
we have that (Sk (h, ζ))(β,j),α = 0 for every β. For r + 1 ≤ j ≤ n and β with βi ≠ 0 for
some 1 ≤ i ≤ r, we also have (Sk (h, ζ))(β,j),α = 0 since βi > αi = 0 (see equation (7)).
Finally, for r + 1 ≤ j ≤ n and β = (0, . . . , 0, βr+1 , . . . , βn ),
    (Sk (h, ζ))(β,j),α = (1/α!) ∂^|α|/(∂x_{r+1}^{α_{r+1}} · · · ∂x_n^{α_n}) ( x_{r+1}^{β_{r+1}} · · · x_n^{β_n} hj (ξ, xr+1 , . . . , xn ) )(0)   (8)
    = (Sk (hξ , 0))((βr+1 ,...,βn ),j),(αr+1 ,...,αn ) .
We analyze now the remaining columns of the matrix.
Consider the submatrix of Sk (h, ζ) given by the columns indexed by α such that
(α1 , . . . , αr ) ≠ 0 and |α| = k. From identity (7), we can observe that in every row
indexed by (β, j) for |β| = k − 1 and 1 ≤ j ≤ n, the only columns with (possibly) non-zero
coordinates are indexed by α = β +ei where {ei }ni=1 is the canonical basis of Rn ; moreover,
    (Sk (h, ζ))(β,j),β+ei = ∂hj /∂xi (ζ).
Note that, for 1 ≤ j ≤ r and r + 1 ≤ i ≤ n, we have ∂hj /∂xi ≡ 0. Then, for every β with
|β| = k − 1, in the rows indexed by (β, j) for 1 ≤ j ≤ r we have a copy of the Jacobian
matrix J := (∂hj /∂xi (ξ))_{1≤j,i≤r} in the columns indexed by β + e1 , . . . , β + er , and all other
entries of the matrix Sk (h, ζ) in these rows are zero. We remark that J is an invertible
matrix since ξ is a nonsingular common zero of h1 , . . . hr . Note that, for every α with
|α| = k and αi ≥ 1 for some 1 ≤ i ≤ r, there is at least one β = α − ei with |β| = k − 1;
so, all the columns indexed by α with |α| = k and (α1 , . . . , αr ) ≠ 0 are involved in at least
one of the copies of J .
Therefore, by performing row operations in Sk (h, ζ) we can obtain a matrix such that
each column indexed by a vector α with |α| = k and (α1 , . . . , αr ) ≠ 0 contains all zero
entries except for a unique coordinate equal to 1 in a row indexed by (β, j) for some β
with |β| = k − 1 and 1 ≤ j ≤ r, and all these 1’s lie in different rows. Moreover, these row
operations do not modify the remaining columns of Sk (h, ζ).
Then, the dimension of the kernel of Sk (h, ζ) is the same as the dimension of the kernel
of the matrix obtained by removing the columns indexed by α with (α1 , . . . , αr ) ≠ 0 and
|α| = k. We repeat this procedure for s = k, k − 1, . . . , 1 (in this order) and we conclude
that the dimension of the kernel of Sk (h, ζ) is the same as the dimension of the kernel
of the submatrix obtained by removing all columns indexed by α with (α1 , . . . , αr ) ≠ 0.
This submatrix consists of the first column of Sk (h, ζ), which is identically zero, and all
columns indexed by α = (0, . . . , 0, αr+1 , . . . , αn ) ≠ 0. Due to our previous considerations
on the matrix formed by these columns, we have that the only rows that are not zero are
those indexed by (β, j) with r + 1 ≤ j ≤ n and β = (0, . . . , 0, βr+1 , . . . , βn ) and these are
exactly the rows of Sk (hξ , 0) (see identity (8)). Therefore,
dim(ker(Sk (h, ζ))) = dim(ker(Sk (hξ , 0))) for every k ≥ 1.
The result follows.
Now we can prove Theorem 16.
Proof: Without loss of generality, we may assume that I = {r + 1, . . . , n} for some
r ∈ {1, . . . , n} and JI = {1, . . . , r}.
We will first prove that multζ (f ) ≥ mult0 (g).
We make the change of variables
    x1 := Σ_{i=1}^{r} c1i yi + ζ1 ,   . . . ,   xr := Σ_{i=1}^{r} cri yi + ζr ,   xr+1 := yr+1 ,   . . . ,   xn := yn ,
where (cki )_{1≤k,i≤r} ⊂ Q are generic constants and obtain the polynomial system f̃ = (f̃1 , . . . , f̃n ) in C[y1 , . . . , yn ] from the system f . Note that mult0 (f̃ ) = multζ (f ).
For every 1 ≤ j ≤ n, let Ãj be the support of f̃j .
For 1 ≤ j ≤ r, since fj (x1 , . . . , xr , 0, . . . , 0) ≠ 0 and has a non-constant term (since it
vanishes at (ζ1 , . . . , ζr ) ∈ (C∗ )r ), due to the genericity of the coefficients and the change
of variables, we have that the monomials y1 , . . . , yr appear with non-zero coefficients in
f̃j (y). On the other hand, again, for the genericity of coefficients and change of variables,
for r + 1 ≤ j ≤ n,
    πI (Ãj ) = πI (Aj );   (9)
moreover, taking into account that f̃j (0, . . . , 0, yr+1 , . . . , yn ) = fj (ζ1 , . . . , ζr , xr+1 , . . . , xn ),
we conclude that
    {β ∈ (Z≥0 )^{n−r} / (0, β) ∈ Ãj } = πI (Aj ).   (10)
Let h = (h1 , . . . , hn ) be a generic polynomial system with supports Ã = (Ã1 , . . . , Ãn ).
Note that condition (H1) holds for Ã. Let us see that Ã also satisfies condition (H2),
which implies that 0 is an isolated zero of h. For Ĩ ⊂ {1, . . . , n}, if #Ĩ + #JĨ(Ã) < n,
when setting yi = 0 in f̃ for every i ∈ Ĩ, we obtain a system in n − #Ĩ unknowns with
#JĨ(Ã) < n − #Ĩ equations. This system vanishes at 0 and defines a positive dimensional
variety, contradicting the fact that 0 ∈ Cn is an isolated common zero of f̃ . By Lemma 3,
the inequality mult0 (f̃ ) ≥ mult0 (h) holds.
Applying Proposition 14 to the polynomials in the system h, since the monomials
y1 , . . . , yr appear with non-zero coefficients in h1 , . . . , hr and, for r + 1 ≤ j ≤ n, the
supports supp(hj ) = supp(f̃j ) satisfy conditions (9) and (10), it follows that mult0 (h) =
mult0 (g̃, g), where g̃ = (g̃1 , . . . , g̃r ) with g̃j = Σ_{i=1}^{r} ϑji yi + pj (yr+1 , . . . , yn ) for 1 ≤ j ≤ r,
and g = (gr+1 , . . . , gn ) with gj ∈ C[yr+1 , . . . , yn ] a generic polynomial with support πI (Aj )
for r + 1 ≤ j ≤ n.
Then, if A is the inverse of the matrix (ϑji ) and
    A·(g̃1 , . . . , g̃r )^t = (y1 + q1 (yr+1 , . . . , yn ), . . . , yr + qr (yr+1 , . . . , yn ))^t ,
the following is an isomorphism:
    Q[y1 , . . . , yn ]/(g̃, g) → Q[yr+1 , . . . , yn ]/(g)
    yi ↦ −qi for all 1 ≤ i ≤ r
    yi ↦ yi for all r + 1 ≤ i ≤ n
and hence mult0 (g̃, g) = mult0 (g).
Therefore,
    multζ (f ) = mult0 (f̃ ) ≥ mult0 (h) = mult0 (g̃, g) = mult0 (g).
To prove the other inequality, note that, by Lemma 18, we have that
multζ (f ) ≤ multζ (f (I)).
Then, applying Proposition 19 to the system f (I) and ξ = (ζ1 , . . . , ζr ), we deduce that
multζ (f (I)) = mult0 (f (I)ξ ).
By the genericity of the coefficients of f and the triangular structure of f (I), the system
f (I)ξ turns out to be a generic system supported on BjI for j = r + 1, . . . , n.
We conclude that multζ (f ) ≤ mult0 (g).
Taking into account that the results in Section 3 enable us to express the multiplicity
of the origin as an isolated zero of a generic sparse system in terms of mixed volumes and
mixed integrals, we can now state a similar result regarding the multiplicity of any affine
isolated zero of a generic sparse system of n equations in n unknowns.
Theorem 20 Let A = (A1 , . . . , An ) be a family of finite sets in (Z≥0 )n and f = (f1 , . . . , fn )
be a generic sparse system of polynomials in C[x1 , . . . , xn ] supported on A.
Let I ⊂ {1, . . . , n} satisfying conditions (A1), (A2) and (A3). For j ∉ JI , let BjI =
πI (Aj ), where πI : Zn → Z^{#I} is the projection to the coordinates indexed by I. Let
    MI := MV#I ((BjI ∪ {0})j∉JI ) − MV#I ((BjI )j∉JI ) + 1.
Then, for every isolated zero ζ ∈ Cn of f such that ζi = 0 if and only if i ∈ I, we have
    multζ (f ) = MV#I ((BjI ∪ {0, MI ei }_{i=1}^{#I} )j∉JI ) − MV#I ((BjI ∪ {MI ei }_{i=1}^{#I} )j∉JI ).
Moreover, if (ρj )j∉JI are the convex functions that parameterize the lower envelopes of the
polytopes conv(BjI ∪ {MI ei }_{i=1}^{#I} ) and (ρ̄j )j∉JI are their restrictions as defined in (4), then
    multζ (f ) = MI′#I ((ρ̄j )j∉JI ).
Note that the previous formula for multiplicities can be refined applying Proposition
15 instead of Proposition 12.
4.2 Examples
The following examples illustrate the result in the previous section.
Example 4 Consider the generic polynomial system
c11 x1² + c12 x1²x2² + c13 x1 x3 + c14 x1 x2²x3 + c15 x3⁴ + c16 x2²x3⁴ = 0
c21 x1⁴ + c22 x1⁴x2² + c23 x1²x3 + c24 x1²x2²x3 + c25 x3⁴ + c26 x2²x3⁴ = 0
c31 x1 + c32 x1 x2² + c33 + c34 x2² + c35 x3 + c36 x2²x3 = 0
taken from [7, Example 3]. There is a unique nonempty set I = {1, 3} satisfying conditions
(A1), (A2) and (A3), which leads to two isolated solutions with x1 = 0, x2 ≠ 0 and
x3 = 0. Since JI = {3}, Theorem 16 tells us that the multiplicity of each of these solutions
equals the multiplicity of (0, 0) as an isolated root of a generic sparse system supported on
B1I = {(2, 0), (1, 1), (0, 4)} and B2I = {(4, 0), (2, 1), (0, 4)}, namely a system of the type
a1 x1² + b1 x1 x3 + c1 x3⁴ = 0
a2 x1⁴ + b2 x1²x3 + c2 x3⁴ = 0
This multiplicity can be computed, by Proposition 5, as M V2 (B1I ∪{(0, 0)}, B2I ∪{(0, 0)})−
M V2 (B1I , B2I ) = 7 or, alternatively, by Theorem 10, as M I2′ (ρ1 , ρ2 ) = 7, where ρ1 and ρ2
are the functions whose graphs are given in Example 2.
Example 5 Consider the generic polynomial system
a11 x1 + a12 x1 x2 = 0
a21 x2² + a22 x1²x2⁴ + a23 x1³ = 0
a31 x3 + a32 x1 x3 + a33 x3²x4² + a34 x3³x4 = 0
a41 x4³ + a42 x2³x4³ + a43 x3²x4³ + a44 x4⁵ + a45 x3²x4⁵ = 0
Using [8, Proposition 5] we can check that all zeros of the system are isolated. Moreover,
all the subsets I ⊂ {1, 2, 3, 4} satisfying conditions (A1), (A2) and (A3) are
I1 = ∅, I2 = {3}, I3 = {1, 2}, I4 = {3, 4}, I5 = {1, 2, 3} and I6 = {1, 2, 3, 4}.
By Bernstein’s theorem, the system has 24 different simple zeros with all non-zero coordinates (associated to I1 ) and, by Theorem 20, we can see that there are
• 6 simple zeros associated to I2 ,
• 8 zeros with multiplicity 2 associated to I3 ,
• 3 zeros with multiplicity 3 associated to I4 ,
• 2 zeros with multiplicity 2 associated to I5 ,
and that the origin is an isolated zero of multiplicity 6.
That is, the system has a total of 65 (isolated) zeros counting multiplicities. Note that,
in this case, SM4 (A) = 65 is smaller than M V4 (A ∪ {0}) = 85.
References
[1] D. N. Bernstein, The number of roots of a system of equations. Funct. Anal. Appl. 9
(1975), 183–185.
[2] D. Cox, J. Little, D. O’Shea, Using Algebraic Geometry. Grad. Texts in Math., vol.
185. Springer, New York, 1998.
[3] M.A. Cueto, A. Dickenstein, Some results on inhomogeneous discriminants. Proceedings of the XVIth Latin American Algebra Colloquium, Bibl. Rev. Mat. Iberoamericana, Madrid, 2007, 41–62.
[4] B.H. Dayton, Z. Zeng, Computing the multiplicity structure in solving polynomial
systems. Proc. 2005 Internat. Symp. Symbolic and Algebraic Computation, ACM,
New York, 2005, pp. 116–123.
[5] I.Z. Emiris, J. Verschelde, How to count efficiently all affine roots of a polynomial
system. In: 13th European Workshop on Computational Geometry CG97. Würzburg,
1997, Discrete Appl. Math. 93 (1999), no. 1, 21–32.
[6] I.M. Gelfand, M.M. Kapranov, A.V. Zelevinsky, Discriminants, resultants, and multidimensional determinants. Mathematics: Theory & Applications. Birkhäuser Boston,
Inc., Boston, MA, 1994.
[7] M. I. Herrero, G. Jeronimo, J. Sabia, Computing isolated roots of sparse polynomial
systems in affine space. Theoret. Comput. Sci. 411 (2010), no. 44-46, 3894–3904.
[8] M.I. Herrero, G. Jeronimo, J. Sabia, Affine solution sets of sparse polynomial systems.
J. Symbolic Comput. 51 (2013), 34–54.
[9] B. Huber, B. Sturmfels, Bernstein’s theorem in affine space. Discrete Comput. Geom.
17 (1997), no. 2, 137–141.
[10] K. Kaveh, A.G. Khovanskii, Convex bodies and multiplicities of ideals. Proc. Steklov
Inst. Math. 286 (2014), no. 1, 268–284.
[11] A.G. Khovanskii, Newton polyhedra and toroidal varieties. Funct. Anal. Appl. 11
(1978), 289–296.
[12] A.G. Kouchnirenko, Polyèdres de Newton et nombres de Milnor. Invent. Math. 32
(1976), no. 1, 1–31.
[13] A.G. Kushnirenko, Newton polytopes and the Bézout theorem. Funct. Anal. Appl.
10 (1976), 233–235.
[14] T. Y. Li, X. Wang, The BKK root count in Cn . Math. Comp. 65 (1996), no. 216,
1477–1484.
[15] F.S. Macaulay, The algebraic theory of modular systems. Cambridge Univ. Press.,
Cambridge, 1916.
[16] P. Mondal, Intersection multiplicity, Milnor number and Bernstein’s theorem.
Preprint. arXiv:1607.04860
[17] M. Oka, Non-degenerate complete intersection singularity. Actualités Mathématiques.
Hermann, Paris, 1997.
[18] P. Philippon, M. Sombra, Hauteur normalisée des variétés toriques projectives. J.
Inst. Math. Jussieu 7 (2008) no. 2, pp 327–373.
[19] P. Philippon, M. Sombra, A refinement of the Bernstein-Kushnirenko estimate. Adv.
Math. 218 (2008), no. 5, 1370–1418.
[20] J.M. Rojas, A convex geometrical approach to counting the roots of a polynomial
system. Theoret. Comput. Sci. 133 (1994), no. 1, 105–140.
[21] J. M. Rojas, X. Wang, Counting affine roots of polynomial systems via pointed Newton polytopes. J. Complexity 12 (1996), no. 2, 116–133.
[22] H.J. Stetter, Numerical polynomial algebra. SIAM, Philadelphia, 2004.
[23] B. Teissier, Monômes, volumes et multiplicités. Introduction à la théorie des singularités, II, 127–141, Travaux en Cours, 37, Hermann, Paris, 1988.
Efficient textual representation of structure
Brenton Chapin
arXiv:1706.00862v1 [] 2 Jun 2017
ABSTRACT
This paper attempts a more formal approach to the legibility of text
based programming languages, presenting, with proof, minimum
possible ways of representing structure in text interleaved with
information. This presumes that a minimalist approach is best for
purposes of human readability, data storage and transmission, and
machine evaluation.
Several proposals are given for improving the expression of interleaved hierarchical structure. For instance, a single colon can
replace a pair of brackets, and bracket types do not need to be repeated in both opening and closing symbols or words. Historic and
customary uses of punctuation symbols guided the chosen form
and nature of the improvements.
KEYWORDS
programming language design, structured programming, human
readability, syntax, notation, history, data compression, minification
ACM Reference format:
Brenton Chapin. 2016. Efficient textual representation of structure. In
Proceedings of ACM Conference, Washington, DC, USA, July 2017 (Conference’17), 11 pages.
DOI: 10.1145/nnnnnnn.nnnnnnn
1 INTRODUCTION
Information is almost always more useful when organized, and
structure is key to that. Therefore efficient and clear representation of structure is of paramount importance. Structured programming languages are only one use of structure to organize one kind
of information, source code, but they make such unprecedentedly
elaborate use of structure that they have exposed deficiencies in
our methods of expression.
The languages for programming and math developed in an evolutionary manner, with much borrowing from earlier work, and
the reasons for various decisions became buried in custom and
history. Studies on choices of characters and legibility have been
patchy, with some questions overlooked because they were thought
relatively unimportant. Pioneers of programming languages hurriedly made expedient adaptations of existing notations for similar
problems. Most heavily borrowed was mathematical notation.
With many important questions needing settling with the creation of the first programming languages, issues of symbology were
mostly passed over as unimportant and arbitrary. As Bob Bemer
noted, “much documentation is lost, and it was characteristic of
the times that nobody seemed to think character sets a very important feature of computers [12].” There are many studies on the
readability and other properties of various fonts and color combinations, but when discussed in relation to source code, the term
“readability” refers more to comprehensibility [21].
Punctuation is the class of symbol most closely associated with
showy, interleaved structure. Positioning is the other major method
used to indicate structure, and among the other intended purposes,
“control characters” attempted to provide ways to position text.
Currently, Python is the most popular programming language that
relies on text positioning rather than punctuation to indicate structure. Visual programming goes further yet, replacing textual indicators of structure and flow with graphical ones.
The programming language wars are still hot today, with new
languages emerging and gaining followers. One cause of the passionate debates is the tendency of language designers to resort to
an evangelical approach to justify their choices of design elements
for which they have little compelling technical reason. Sometimes
the designers make overlarge and unsubstantiated claims [19]. For
many programming languages, one of the defining features is the
choice and usage of symbols. These choices are not modifiable by
the programmers, so that if such changes are desired, a whole new
programming language may need to be created, another factor in
the very proliferation of programming languages that the ALGOL
designers were hoping to avoid.
The ideas in this paper aim at the foundation, the symbolic representation of the structure. Structure is chosen as the crucial concept that must be addressed to improve representation. Minimalism is the chosen guide.
Too much minimalism is certainly possible, for instance by expecting people to work with information that has been minimized
by data compression techniques which transform the data into a
very compact but human unreadable form. The minification techniques of removing unnecessary whitespace and shortening variable names is another example. The target of these minimization
efforts is the representation of the structure of the source code, not
the source code itself. Further, this is not about rewriting and rearranging to place information in more efficient structures, this is
about making the representation more efficient regardless of the
structure chosen. Punctuation has always been minimal, using
smaller and less obtrusive symbols than those used to represent
the letters of a language, and the syntax of structural elements follows that pattern.
Some more items to note are splits between textual representations used for source code, versus those used for markup, as in
HTML, and data organization, as in XML and YAML. Within programming languages, there is the dot (or arrow) notation of Object
Oriented Programming and the completely different notations for
Structured Programming, such as the curly braces. Yet those splits
seem artificial, as hierarchical structure is used in all. Many programming languages are needlessly poor at expressing data. Several of the improvements in C++11 and C++14 touch on this issue, allowing more flexible constructions of constants and initializations of arrays and objects. One intent of JSON is to bridge this
divide.
Also, these are all interleaved formats, meaning the symbols
that denote the structure are interleaved with the symbols of the
data. Goals in data storage are minimal size, and fast access. An
obvious and common method to achieve both is to exclude all complex structure from the data, using an external representation. The
disadvantage is that they require some connection, often expressed
in fixed sizes and padding, which can end up using more space
than an interleaved method. Over the years, the attention paid to
brevity has varied with the cost and availability of storage.
By 1963, the American Standard Code for Information Interchange (ASCII) was set to a fixed size of 7 bits, though at least
two variable codes, Morse code (1844) and Huffman coding (1952)
existed at the time.
One of the goals of XML was human readability. Minimalism
was thought orthogonal or possibly even antithetical to the goal
of human readability, and the resulting language ironically suffers
from excessive verbosity that obscures essentials, rendering the result less human readable. XML and COBOL show that a negative
attitude towards minimalism (”10. Terseness in XML markup is
of minimal importance. [14]”), that regarding minimalism as unrelated or even an impediment to comprehension, is not correct.
Minimalism is also central to Information Theory, in which it was
demonstrated that the crude redundancy of repeating information
over and over, is very wasteful and poor at preserving the fidelity
of data against errors in the transmission. If repetition is a poor
means of ensuring the fidelity of data, perhaps it is also a poor
means of representing structure in ways easy for humans to read.
Another demonstration of the limited usefulness of repetition is
the FAT file system, which despite allocating room for a copy of
the directories and file names, is actually one of the most fragile
and easily corrupted file systems currently in use.
Of particular note is the C programming language. So many
programming languages adopted the C syntax that they have been
tagged with a moniker of their own, the “curly-brace” languages.
Perhaps one of the reasons curly brace syntax eclipsed Pascal and
ALGOL is the use of a single character each, rather than the words
BEGIN and END, to delimit blocks. The designers of C did not restrict themselves to curly braces only, they also used square brackets, parentheses, and even angle brackets, for array indexing, lists
of function parameters, and macros respectively. Why that choice
of symbol assignment? Why not use parentheses for all, and rely
on context or some other means to distinguish between a parameter list and a block of code? If there is any doubt that it is possible
to use only parentheses, the LISP programming language is proof.
Or, why not copy FORTRAN in the use of parentheses for array
indices? One kind of answer is that in C these different kinds of
brackets serve as sigils, to distinguish between identifiers for functions, arrays, and variables. But that only begs the question of why
have sigils? And it still does not answer why any particular symbol
was chosen for a particular use.
2 HISTORY
For answers, one must dig into the history of computation and
mathematics. In the case of C, the chain of preceding languages is
roughly B, BCPL (Basic CPL), CPL (Combined Programming Language), and finally ALGOL (Algorithmic Language). The paper on
ALGOL 58 [6] says of the choice to use square brackets to delimit
array indices, only that “subscripted variables” (the term used in
ALGOL for what today we call an array variable, or simply an array), “designate quantities which are components of multidimensional arrays” and that “The complete list of subscripts is enclosed
in the subscript brackets [].” But why did they pick square brackets? FORTRAN, the oldest programming language to achieve wide
acceptance, uses parentheses, not square brackets.
For that matter, why use any bracket at all? No one says. It
seems likely that they would rather have used actual subscripted
text, just like in mathematical notation, but early computers could
not do it. Square brackets was a notational device to indicate subscription without actually presenting the text so. Apart from computer limitations, a big problem with subscripting is that the notation doesn’t nest well, at 3 or more levels becoming too small for
the human eye to read. One can surmise from the use of the term
“subscript” that this was another borrowing, from linear algebra in
which a matrix is denoted with square brackets. And indeed the
original name of ALGOL 58, is International Algebraic Language.
The only deviation in the use of square brackets for array indexes
from ALGOL to C was BCPL, which among the many simplifications of CPL it introduced, attempted to repurpose square brackets
for code blocks, using only pointer arithmetic to access array elements [8].
ASCII codified the glyphs used for nearly all programming languages. A notable exception is APL, which makes use of mathematical symbols, mainly from Set Theory and Vector calculus, that
were not put in ASCII [7]. Unlike EBCDIC, ASCII at least organized the alphabet into a contiguous block. But the exact set of
punctuation symbols is unclear, ranging from all symbols that are
not letters, numbers, and control characters, to only those used to
clarify the structure and meaning of sentences and text. There are
no formal, ordered, centuries old lists of punctuation symbols.
The ASCII ordering and choice of punctuation is derived from
the QWERTY keyboard layout, which dates to the late 19th century. The notion that QWERTY was deliberately arranged to slow
typists down is a popular but wrong myth [23]. Morse Code and
many other factors were considered, and over the years small changes
have been made to accommodate new uses. For instance, “shift-2”
is the double quote mark on many older keyboards, but today is
‘@’ on most keyboards.
We could go further back, and ask why mathematical notation
uses parentheses for functions, and square brackets for matrices.
Why is y = f(x) the customary, canonical expression for a function,
and why in particular the use of parentheses to bracket the independent variable x? In A History of Mathematical Notations [3],
Cajori credits Euler (1707-1783) with the first use of parentheses
to bracket the variable of a function, in a 1734 paper. That paper is
E44 [1] in the numbering scheme created to refer to Euler’s works.
However, examining E44 and several others of Euler’s papers, one
finds no such use of parentheses, and the exact phrase and formula
Cajori quoted is not present. Euler uses parentheses to group parts
of equations, but not to separate function names and variables. Euler’s notation is y = fx, and it is up to the reader to understand
that x and y are variables, and f is a function.
Note also the choice of the letter f because it is the first letter of
the word “function”, a custom followed in many places, such as the
decision in FORTRAN to use the first letter of a variable name to
indicate integer (name begins with “I” for integer, through “Z”) or
floating point (name begins with “A” through “H”). This desire to
match functionality to the first letter of an appropriate term was
taken to extremes, so that more than one early game employed
a lucky placement of keys on the QWERTY keyboard,’W’, ’E’, ’S’,
plus ’3’, to refer to west, east, south, and north respectively.
By 1837, in a major work on Number Theory which is regarded
as also an important paper on the modern definition of a function, Dirichlet (1805-1859) used parentheses around the independent variable [2]. But why did mathematicians pick those symbols,
that format? They too engaged in expedience, adopting the idea of
parentheses from still earlier scholars. Mathematical notation has
a long evolutionary history, and while fascinating, the main point
here is that many choices of symbols and syntax were made long
before any possible use in programming languages was conceived.
While 1837 is also the year that Babbage proposed the Analytical
Engine, arguably the first computer, functioning computation machinery would not be built until many years later. Therefore symbols and syntax certainly could not have been chosen based on
experiences in computer programming.
That was about as far as the early pioneers went in exploring
questions of how best to symbolize code and data. None of the
terms and areas of study, not semiotics, symbology, linguistics,
grammar, lexicology, punctuation, readability, typography, legibility, notation, expressiveness, or rubrication, quite address these
questions. Studies of notation and syntax get the closest, but even
there syntax is confined to issues of context.
Most programming languages use a hierarchical structure to organize code. Possibly the earliest and simplest formally specified
language for expressing hierarchy is Dyck Language. Object Oriented Programming and Functional Programming did not abandon
this fundamental organization, they only added to it. Declarative
programming, as represented in Prolog and SQL, at first glance
seems not to need much structure. A point of confusion is order vs
structure vs hierarchy. Declarative programming needs structure,
but not order and not necessarily hierarchy. Hierarchic structure,
of programs and data, can be more efficiently represented with several changes.
The advent of markup languages revived interest in hierarchical data storage, which was introduced in the 1960s, before the
relational database model. No longer were interleaving structural
symbols just for programs, they were harnessed to organize data.
Traditionally, data has been organized into fixed size elements so
that no symbols need be reserved for explicit denotation of structure, and, even more importantly, so that random access is quick,
taking O(1) time to retrieve any one element. This is also true of the
pre-computer era, which used tables extensively, carefully lining
up columns to aid the human eye. Where one-size-fits-all is inadequate, the expedient method used is to have a small fixed size field
to hold a value for the size of a variable length field. Packet networking is an example of this organization of data. The roughly
analogous method in writing is the technique of employing any
of a variety of superscripted symbols such as an asterisk, *, or a
dagger, y, to indicate there is a footnote.
XML and HTML are the most well known of these markup languages, and like programming languages, their history is also evolutionary. Both trace back to Standard Generalized Markup Language (SGML) which was standardized in 1986, predating the World
Wide Web. Like so many other decisions in languages, the creators
of the Web seized upon SGML out of expediency. SGML in turn
descends from GML, an IBM effort to manage technical documentation and data, based upon ideas first articulated circa 1966 [16].
But as many have complained over the years, these markup
languages have undesirable features, and among the biggest is extreme verbosity. The rules they force upon users, to come closer to
the goal of “human readability”, often have the opposite effect. On
the scales of minimalism, XML and relatives are extremely poor because their representations are highly redundant. Not only must
brackets be balanced in “proper” HTML and XML, but the matching tags must repeat the tag name. Why did the designers do it?
Ironically, those rules have done much to add clutter and thereby
reduce the human readability that was their intended goal. YAML
(YAML Ain’t Markup Language) was motivated in part by recognition that XML is burdened with design constraints that have little
purpose in data serialization [17]. Lightweight markup languages
such as Markdown are an acknowledgment that the human readability of HTML could be better.
Most popular programming languages are poor at expressing
data. Here are some examples to illustrate this. A list of the first 10
chemical elements can be encoded in a JavaScript array like this:
const CE = ["?", "H", "He", "Li", "Be", "B",
            "C", "N", "O", "F", "Ne"];
A simple trick yields a much cleaner representation:
const CE = "? H He Li Be B C N O F Ne"
    .split(" ");
But this is the very sort of trick that makes programming needlessly difficult for professional programmers unfamiliar with the
arcana of a particular language.
One problem is that the default, unquoted meaning of an alphanumeric sequence is to treat it as the name of a variable. The
double quote mark changes the mode, but that mode has no support for structural elements, so only a simple string can be encoded.
The programmer is forced to change modes over and over, entering
string mode to give a short string, leaving string mode to impart a
tiny amount of structure, then entering string mode again to give
the next string. Or the programmer can use a clever trick such as
the split function, or create a function to parse a string into a
complicated object, or even employ a library such as YAML.
Another example, of a family tree, in Python:
class tn:  # tn means "tree node"
    def __init__(self, name, child=None):
        if child == None: self.c = []
        else: self.c = child
        self.n = name
familytree =
  [tn("grandmother",
    [tn("older uncle",
      [tn("oldest 1st cousin"),
       tn("2nd oldest 1st cousin")]),
     tn("father",
       [tn("older sister",
         [tn("niece"),
          tn("nephew")]),
        tn("you",
          [tn("son",
            [tn("granddaughter")]),
           tn("daughter",
             [tn("grandson")])]),
        tn("younger brother")]),
     ...
This terrible encoding is littered with alternating brackets of 2
kinds, as well as double quote marks and commas. This shows
that Python can be even worse than LISP, for those who thought
Python’s use of indentation lead to clean code in all cases, and that
LISP had too many parentheses. To get clean looking code, the expert programmer resorts to using functions to read a simple string
(which may be a data file) into a complicated object. Employing a
data serialization library such as YAML, is a common method of
handling this issue. Should it be the preferred method? Shouldn’t
programming languages be able to do better with their native syntax? After all, native handling of regular expressions is what made
Perl popular. Improvements in the representation of structure are
applicable both to coding and to data representation.
3 ELIMINATING RUNS OF BRACKETS
The first change addresses a problem most languages have, but
which is perhaps most obvious in LISP, and for which it has been
criticized in the “backronym” of Lots of Idiotic Spurious Parentheses. Often, brackets cluster, as several structures all start or end
simultaneously. They can add to the visual clutter without adding
to the ease of comprehension.
There are many solutions to this problem, among them operator
precedence, and postfix notation, also known as Reverse Polish notation, first conceived in 1924[4]. A limitation of these Polish notations is that to make brackets unnecessary, the number of operands
must be fixed, an inflexibility that is insufficiently general for the
structures used in programming.
A popular short cut is use of context and knowledge about the
permitted or sensible content of subtrees. For instance, in HTML
the paragraph indicator, <p>, cannot be nested. This is often used
to omit the matching closing bracket, </p>, when the next structure is another paragraph, or something else that cannot be inside
a paragraph, such as a header. Such omissions are not officially
sanctioned in HTML, but are so popular that web browsers had to
support them anyway. Obvious problems with this approach are
that knowledge of every exception to the rules for indicating the
nesting may be very large, and may change.
The approach taken in Perl 6 is to allow all kinds of shortcuts
that do not greatly complicate the parser. Compared to Perl 5,
some brackets are no longer required. In particular, the parentheses of the if and for statements are optional [18]. Effectively, this
change is a recognition that if and for are enough by themselves
to indicate structure, that they are in fact now part of the set of
symbols used to denote structure.
One could employ 2 sets of brackets, perhaps () and [], in a
scheme in which a closing bracket closes its matching opening
bracket, and all the open brackets of the other kind in between.
For example, [a [b]] becomes [a (b], [d [e [f]]] becomes
(d [e [f). This idea can work in the other direction. [[g] h]
becomes [g) h]. It even works in both directions at once, with
((j)(k)) becoming [j)(k]. However, the best this idea can do
for ((m)) is [(m].
An issue is that 2 more symbols are needed. We can employ
only one more symbol, eliminating only one of the excess opening or closing brackets, and still clean up most of clutter. Call a 3
symbol system that eliminates excess closing brackets a “closing 3”,
and a 3 symbol system that eliminates excess opening brackets an
“opening 3”. Using colon, :, for this 3rd symbol in a closing 3 system, because that approximately matches the traditional use of the
colon in written natural languages, changes (a (b)) into (a : b).
((m)) becomes (:m), (((n))) becomes (::n), and ((j)(k)) becomes ((j):k). Additionally, the brackets are still balanced, with
equal numbers of opening and closing brackets in all the systems.
For a slightly larger example, consider this Ackermann function,
from the classic textbook Structure and Interpretation of Computer
Programs, exercise 1.8 [10]:
(define (A x y)
  (cond ((= y 0) 0)
        ((= x 0) (* 2 y))
        ((= y 1) 2)
        (else (A (- x 1)
                 (A x (- y 1))))))
Employing a closing 3 system as suggested above, gives this:
(define (A x y)
  :cond ((= y 0) 0)
        ((= x 0) :* 2 y)
        ((= y 1) 2)
        :else :A (- x 1)
                 :A x :- y 1)
6 colons have replaced 6 opening brackets. The 6 matching closing brackets have been removed. Indeed, there is never a need for
multiple adjacent closing brackets, as proven next.
Theorem 3.1. Given a sequence S of arbitrary symbols over an
alphabet A in which 2 symbols, an “opening” and a “closing” symbol,
are reserved to denote hierarchy in a format that interleaves data
and structure, and S is properly balanced, the hierarchy can always
be represented in a system with 3 reserved symbols in which there are
no runs (sequences of length 2 or greater) of the closing symbol.
Proof. WLOG, let ‘(’ and ‘)’, the parentheses, represent the
opening and closing symbols in both systems, and let ‘:’, the colon,
represent the 3rd symbol in the 3 symbol system. To allow elimination of all runs of 2 or more closing symbols, assign ‘:’ the same
meaning as ‘(’, the opening of a subtree, except that the matching
closing symbol for ‘:’ is an already necessary ‘)’ that matches an
existing ‘(’ which precedes the ‘:’.
Then, instances of the sequence “( s1 ( s2 ))” in which s1
and s2 are arbitrary sequences which may include balanced occurrences of ‘(’ and ‘)’ and ‘:’, may be replaced with “( s1 : s2
)”.
The replacement symbols are sufficient to represent all the relationships. The symbols still indicate that s1 is the parent of s2 ,
preserve all relationships s1 and s2 have with all other sequences
before and after because none of them need change and no additional context is needed, and preserve all relationships contained
within and between s1 and s2 also because none of them change,
nor add any contextual dependencies.
This replacement can be applied repeatedly, to reduce any number of adjacent closing brackets to 1 closing bracket. Each replacement preserves the property of balance for all remaining parentheses, as exactly one pair of matched parentheses is replaced with a
single colon.
The corollary that no runs of the opening bracket are needed in
an opening 3 system, is obvious.
A pushdown automaton can easily transform from an opening
3 to a 2, or from a 2 to a closing 3, if the data is processed in reverse order. Of course, a pushdown automaton can easily reverse
a string. In practice, the C++ style of comment delimited by 2
slashes takes more work to detect from back to front. Nor can the
start of a C style comment be simply determined working from
back to front, because “/*” can be within a comment.
A natural question is why not use a 4 symbol system, as originally outlined above with the 2 sets of bracket symbols, and eliminate all runs of opening and closing brackets? Simply put, the
additional savings are not significant, as can be seen from the fact that it is no
help at all on the examples of ((m)) and (((n))).
As to why, it is not possible to employ any finite set of symbols to represent infinitely many numbers with just 1 symbol each,
no matter what form the representation takes. If the representation takes the form of n opening brackets followed by n closing
brackets, all of one kind of bracket can be collapsed, because n is
preserved in the other. If both are collapsed, then n must be represented some other way. That is why the idea of using 2 sets of
brackets does not work to reduce all runs of opening and of closing
brackets to 1.
Thus we see that the idea of replacing each run of closing brackets with a single closing bracket is really the removal of a redundancy, the redundancy of specifying the depth twice, first with
opening brackets, then with an equal number of closing brackets.
That redundancy is no longer available to remove once one kind
of bracket has been reduced.
The 3 symbol system need not be exclusive; it can mix with 2 symbol usage as in (a (b : c)). In practice, in coding it will likely be
preferable to use the 3rd symbol only for subtrees that are known
in advance to be the terminal child. For other uses, such as minification of JavaScript or JSON, one may want to use the 3rd symbol
everywhere possible.
Removing the redundancies of the 2 symbol system can be of
some value in data compression. Since the amount of information
encoded is the same, an ideal data compression algorithm should
produce the same size compressed output whether a 2 or a 3 symbol system is used. In practice, the output sizes vary, sometimes
better for the 3 symbol system, and sometimes worse. To better test
whether the more efficient representation helps with data compression, we can try a much larger example. Biologists have organized millions of species into a Tree of Life [28], using Newick format [11],
an interleaved hierarchical format. Tests upon grafted solution.tre
from version 9.1, the file with the highest percentage of interleaved
structural symbols relative to data, containing 100,734 parentheses in 721,324 characters total, show an “opening 3” system does
reduce size even after compression.
compression    original 2 symbol    opening 3
none           721,324              690,077
gzip           250,142              241,169
bzip2          218,717              213,341
xz             211,812              203,724
A final note about whether to prefer an opening 3 or a closing
3 system. The closing 3 is the better fit with our customs. For instance, in curly brace languages, the name of an array is given before the index of the desired element. It is arr[39] not [39]arr. It
is the same with function names and parameters– the name comes
first.
4 UNIVERSAL BRACKET
A sequence such as “[x(y]z)” in which 2 different sets of brackets are interwoven, is almost always an error, not valid in any
mainstream language. An analogous sequence in HTML could be
“<b>x<i>y</b>z</i>”, which is not valid, even though its meaning can in this case make sense. The HTML specification calls this
“misnesting”. This invalidity is used in an ad hoc fashion to reduce some of HTML’s redundancy. A common case is the closing of an outer structure that implies an inner structure must also
close, as in this example: “<tr>x<td>y</tr>”. Some omissions require knowledge that some structure is not allowed. For instance,
“<p><p></p></p>” is not valid because the ‘p’ element (p for paragraph) can’t be the direct child of another ‘p’ element. Therefore
“<p>x<p>” always implies a closing tag preceding the 2nd opening
tag: “<p>x</p><p>”. This usage is acknowledged in HTML5, but
still recommended against: “..the closing tag is considered optional.
Never rely on this. It might produce unexpected results and/or errors if you forget the end tag.” [25]
A combination opening and closing tag in one, called a “self-closing” tag, is meant for an empty element, and has been in XML
from the start [14]. As of version 5, HTML has adopted a variation
of this idea. The XML self-closing tag requires a slash character immediately before the closing angle bracket. In HTML5, 15 element
types were singled out as making sense only as empty (void), and
HTML does not require the penultimate slash character in
those tags.
Another solution to some of HTML’s verbosity is to omit the
name from the end tags, using only “</>”, which works fine since
misnesting is not allowed or often sensible anyway. SGML has
this feature in its SHORTTAG constructs, calling it the empty end
tag. But HTML does not allow it. This idea of a universal closing
bracket or, alternatively, a universal opening bracket, can be employed in any language containing 2 or more sets of bracket symbols and in which interweaving is invalid. It eliminates misnesting,
as interweaving is no longer possible. And it reduces the alphabet
size.
If we choose the square bracket for the universal closing symbol, then a sequence such as “(x[y]z)” could become “(x[y]z]”,
and the closing parenthesis symbol would be unused, and could
be repurposed. (Note that this change does not reduce the number
of closing brackets, there are still 2 in the example. It reduces the
required size of the alphabet.)
There can still be a need for other closing characters, such as
an “unindent” invisible control character. The universal closing
bracket could still be used for that, but would want it to be invisible
in that case.
Converting back and forth between a representation that uses
closing brackets that match the opening brackets, and a representation that uses only a universal closing bracket is easily done with
a pushdown automaton. The type of the node is preserved in the
choice of opening bracket, and having the type repeated with a
matching closing bracket is merely redundant.
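A minimal sketch of that round trip, with ']' assumed as the universal closer as in the example above; a stack plays the role of the pushdown automaton's memory, and the function names are illustrative only:

PAIRS = {"(": ")", "[": "]", "{": "}"}

def to_universal(s, closer="]"):
    # forget which kind of closer it was; the opener already records the type
    return "".join(closer if ch in PAIRS.values() else ch for ch in s)

def from_universal(s, closer="]"):
    out, stack = [], []
    for ch in s:
        if ch in PAIRS:                 # an opener: remember its typed closer
            stack.append(PAIRS[ch])
            out.append(ch)
        elif ch == closer:              # the universal closer: restore the type
            out.append(stack.pop())
        else:
            out.append(ch)
    return "".join(out)

assert to_universal("(x[y]z)") == "(x[y]z]"
assert from_universal("(x[y]z]") == "(x[y]z)"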
Having established that a universal bracket symbol is workable,
several more questions naturally arise. Does it make code easier
to understand, more human readable? Many have expressed the
sentiment that requiring a closing tag to repeat the type given in
the opening tag helps prevent human mistakes, and is therefore
good. The issue is confused by the practice of entirely omitting
tags in specific situations. With a means of representing structure
that is not so tiresomely redundant, these ugly short cuts can be
made unnecessary.
5 TYPES FOR NODES
Often the representational capability of a hierarchical structure is
enhanced by adding some means of representing different kinds of
children. An example is the “red–black tree” in which tree nodes
have been assigned an additional property, a color. This can be
and is often done independently of the structure, by means of an
additional data item. Another very popular method is sigils in the
form of different kinds of brackets. ASCII has 3 sets of symbols
meant solely for brackets: the parenthesis, square bracket, and
curly braces. One more set, the angle brackets, doubles as the mathematical symbols “greater than” and “less than”, and for that reason
was used gingerly. Further sets can be contrived, for instance ‘\’
and ‘/’, and for that matter of course any two arbitrary characters
could be chosen to serve as brackets. The obvious complaint is that
4 sets is far too few. Even if a dozen plus from Unicode are added,
it still isn’t enough.
In any case, programming language designers used all the ASCII
symbols meant for brackets early on. The curly brace languages
employ curly braces to denote blocks of code, square brackets to
denote array indices, and parentheses for parameter lists. The dual
purpose symbols for angle brackets did not go unused as brackets,
being employed in the C preprocessor, and later in the markup
languages SGML, XML, and HTML.
These SGML markup languages expanded the number of bracket
types infinitely, by allowing multiple character brackets. Although
that solves the problems caused by finite quantities of different
bracket symbols, the means and requirements chosen add greatly
to the verbosity, a common criticism often expressed in abuse of
the rules rather than in words. It is possible that the desire for a
visual match between the opening and closing bracket led to the
SGML requirement that both the opening and closing brackets contain copies of the string employed to give them a type, despite the
obvious redundancy.
An efficient way is to designate one of the bracket sets as ”typed”,
the start of a multicharacter bracket. That allows the other brackets
to remain bare to be used same as traditionally used in programming languages, and still allows infinite bracket types. Which symbol is best employed, and where should it be positioned? Between
bracket and name, or after the name? Or, should it be a combined
symbol, a bracket that indicates a name is to follow, since there
is more than one kind of bracket available? Possibly the most efficient use of existing symbols is to keep parentheses as is, bare,
and designate the square bracket or curly brace as the indicator
for a child structure with a type, with the name to follow, similar
to HTML.
Another method is to reserve a symbol to indicate a type name
only, no structure. ‘$’ is often used similarly.
Whichever method is chosen to indicate the start of a type name,
how is the name to be ended? The name could be just like variable
names in programming languages, with only letters and numbers
(and the underscore character) allowed in the name so that any
other symbol, such as the space character, automatically ends the
name. The method of using a special symbol, as done with the
closing angle bracket of HTML, is also workable.
But the designers of HTML did not let tag names be only names.
They crammed additional structured information into “attributes”.
An example is “<ul class="vs1" id="item1"> content </ul>”.
This information could have been in the same system, for instance
something like “<ul> <attr> class = "vs1" id = "item1"
</attr> content </ul>”, or even “<ul> <attr> <nm>class </nm>
<val> vs1 </val> <nm> id </nm> <val> item1 </val> </attr>
content </ul>”. The only purposes this alternate subsystem really
serves are visual distinction and less verbosity, though it’s claimed
to maintain the distinction between data and metadata. HTML has
evolved towards lighter use of attributes, moving much formatting
information from the tags to CSS, where it is also less redundant.
6 REPRESENTING SIBLINGS AND COUSINS
The list is well known and has a long history. Each item in a list
can be considered a sibling of each other item. Traditionally, each
item is on its own line, or is separated by a comma. LISP means
“LISt Processor”, and is built around the idea of making lists the
fundamental building block with which to organize both data and
code. Comma Separated Values (CSV) notation [20] is a simple
data format based on one list with items separated by, of course,
commas. One of the most notorious departures from the use of
commas is multidimensional arrays in C, in which the syntax to
access an element at index x,y is not arr[x,y], it is arr[x][y].
The idea of separating items in a list with a single symbol (or
word) seems simple, but turns out to have several surprisingly
tricky issues.
Consider how to represent a list in a language that does not have
any symbol analogous to the comma, Dyck Language interleaved
with data. How is the sibling relationship expressed? (First, note
the convention is to place the parent before the child, as in p(c),
although the opposite, (c)p is just as expressive.) One way is to
wrap brackets around each individual data item. Then the number
of brackets needed to represent a relationship must be increased
by 1 for all depths, so that (a)(b) means a and b are siblings, and
((c))((d)) means c and d are 1st cousins. A 2x2 array would be
((p)(q))((r)(s)). Although it works, it is far more verbose. Additionally it spoils the abbreviation of allowing siblings to be separated by a child, as in a(e)b, which must instead be (a(e))(b).
So, a better way is to always separate siblings with a child, using
a null child if the older sibling has no children, as in a()b. Then a
2x2 array can be represented with (p()q)(r()s).
Expanding to cousins is still a problem. With the addition of the
comma as a sibling separator, (p()q)(r()s) becomes (p,q)(r,s).
The sequence still has a “)(”, which the comma does not help reduce. An obvious extension is to introduce another symbol, say
semicolon, to separate 1st cousins. Then the sequence can become
(p,q;r,s).
What to do for 2nd cousins? Just add brackets to the semicolon, as in );(? Or employ yet another symbol to replace ))((?
How many symbols should be so employed? The ASCII committee
settled on 4, ASCII characters 28 through 31, though 8 were proposed [12]. They were called Information Separators [9]. We can
do better than that.
There are several issues with having 2 or more Information Separators that merit careful consideration.
First, consider the sequence p(q,r;s). q and r are siblings to
each other, and descendants of p, and s is 1st cousin to q and r.
There are several different more precise meanings this could have.
The semicolon can be given higher precedence than the brackets,
that is, all three of q, r, and s are children of p. In that case, this
particular sequence is invalid, because r and s cannot be children
of p and 1st cousins to each other. All children of the same parent
must be siblings.
Another interpretation is to allow a single opening bracket to
separate a variable number of generations instead of always one
generation. Then, since grandchildren of p can be 1st cousins to
one another, all 3 of q, r, and s must be grandchildren of p. But
this idea has the big disadvantage of adding context dependency
to the grammar. Whether q is a child or a grandchild of p cannot
be known until all the characters between the opening and closing
brackets are scanned. If a semicolon is found on the same level as
q, then q is a grandchild of p. If there are even deeper separators,
q is a great grandchild or even more distant descendant of p. If
none are found, then q is a child of p.
Best is to consider the semicolon as a combined open and close
bracket, )(, having the same precedence as any other bracket. In
that case, s is not a descendant of p; s is a nephew of p. That meaning does not add context. This does have more invalid strings, for
instance the simple sequence r;s is invalid because the brackets
are not balanced.
Second, consider how to combine separators with colons. The
colon is a bracket, and should have the same precedence. Then a
sequence such as (p:q;r) means that p is parent to q, and not
parent or sibling to r. p is uncle to r, q is 1st cousin to r, and
r’s parent is null. Perhaps the easiest way to see this is to reverse
the colon transform to get (p(q;r)), then reverse the separator
transform to get (p(q)(r)). If p and r are supposed to be siblings,
and q a child of p, the correct way to represent that is not to use
semicolon or colon, it is p(q)r.
The 2 transforms, colon and separator, are mostly complementary, but in some cases can compete to reduce the same redundancies. The following table shows the results of transforming each of
the 14 Dyck words of length 8 (replacing ][ with a comma rather
than a semicolon, for greater visual clarity.)
     Dyck word   colon      separator   both
 1   [[[[]]]]    [:::]      [[[[]]]]    [:::]
 2   [][[[]]]    [][::]     [,[[]]]     [,::]
 3   [[][[]]]    [[]::]     [[,[]]]     [:,:]
 4   [[]][[]]    [:][:]     [[],[]]     [[],:]
 5   [[[][]]]    [:[]:]     [[[,]]]     [::,]
 6   [[[]][]]    [[:]:]     [[[],]]     [:[],]
 7   [[[]]][]    [::][]     [[[]],]     [[:],]
 8   [][[][]]    [][[]:]    [,[,]]      [,:,]
 9   [][[]][]    [][:][]    [,[],]      [,[],]
10   [[][][]]    [[][]:]    [[,,]]      [:,,]
11   [[][]][]    [[]:][]    [[,],]      [[,],]
12   [[]][][]    [:][][]    [[],,]      [[],,]
13   [][][[]]    [][][:]    [,,[]]      [,,:]
14   [][][][]    [][][][]   [,,,]       [,,,]
The last column shows the result of applying the semicolon transform, followed by the colon transform. Applied second, the transform to colon can be blind to the presence of any separators, and
be correct and achieve maximum reduction. A separator acts as a
bridge, so that a colon can start a list, a natural looking use, rather
than opening the last item of a list.
If the separator transform is second, then to achieve maximum
reduction, as well as a correct transformation, it has to be done
with awareness of colons. A colon may be opening the last item
in a list, and it can be moved to the head. [[]:] can become [:,]
by replacing ]:, which is a closing bracket followed by an opening bracket, with a
separator, and then, replacing the opening bracket of the previous
item in the list with a colon. This can be repeated until the colon
has migrated to the front of the list. If the separator transform is
done blindly on a sequence with colons, it can be incorrect. [[]][]
is [:][] but then replacing the ][ with a separator gives [:,],
which is not correct. Correct is [[],]. Undoing [:,] shows that
sequence is actually [[][]], a list of 2 items.
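A small sketch of the two reductions in the safe order just described, separator first (a comma standing for ][, as in the table) and then the colon transform applied blind to commas, checked against rows 4 and 8 of the table:

def separators(s):
    # a comma stands for "][", a closing bracket immediately followed by an opening one
    return s.replace("][", ",")

def colons(s):
    match, stack = {}, []
    for i, ch in enumerate(s):
        if ch == "[":
            stack.append(i)
        elif ch == "]":
            match[stack.pop()] = i
    out = list(s)
    for o, c in match.items():
        if c + 1 < len(s) and s[c + 1] == "]":   # last child: collapse its closer
            out[o], out[c] = ":", ""
    return "".join(out)

assert separators("[[]][[]]") == "[[],[]]"            # row 4, separator column
assert colons(separators("[[]][[]]")) == "[[],:]"     # row 4, both
assert colons(separators("[][[][]]")) == "[,:,]"      # row 8, both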
Applying both transforms to the Ackermann function given earlier replaces a total of 11 bracket pairs with either a single separator
(the comma was used in this example) or a single colon:
(define :A x y,
  cond :(= y 0)    0,
       (= x 0,     * 2 y),
       (= y 1)     2,
       else :A     :- x 1,
                   A x :- y 1)
Third, what of types? Should the separated items be the same
types? The traditional meaning of a comma is as a separator only,
of untyped data.
Or, should text adjacent to a comma be interpreted as a type
name? A way to resolve that question is to provide another means
to add a type if desired, and let separators remain separators only.
For example, as mentioned in the section on types, the ‘$’ character could be used to indicate an alphanumeric sequence is a type
name. Deeper separators would need a list of types, or restrictions
on elements for which they can specify a new type, and while notations for that can of course be invented, there is little point when
opening brackets can accomplish that with reasonable efficiency
and without any additional rules.
Fourth, there are different potential meanings for runs of a separator symbol. 2 adjacent semicolons could mean that there is an
empty element in the middle, like for(;;i++) in C. Or, it could
mean that the separation between the data elements on either side
is deeper, that is, they are 2nd cousins instead of 1st cousins. What
should 2 semicolons mean, )()( or ))((? The former is the more
widely used meaning. The latter is accomplished by the limited
method of having more Information Separator symbols, which cannot neatly handle great depths. It seems useful to have clear and
concise ways to express either meaning. One way to do this is to
have 2 Information Separators, one for siblings and one for cousins.
The wrinkle is that repetition of these symbols would have the 2
different meanings. n of the sibling separator can mean there are
n − 1 siblings who have been omitted from the list, while n of the
cousin separator can mean the cousins are nth cousins, being 1st
cousins only when n = 1. This approach, combined with an efficient way to express quantities, discussed next, can express both
meanings.
However, another way is not to use the system for expressing
quantities, and then assign different meanings to those quantities,
but to use typing. A semicolon could be followed by an integer
to indicate the depth of the divide, e.g., “;3” means the adjacent
elements are 3rd cousins. Then a run of n semicolons can mean
that there are n − 1 1st cousins in the middle, same as a run of n
commas means n − 1 middle siblings. This makes it slightly harder
to support typed separators, but of course it can still be done. That
point is moot if sticking with the traditional meaning of separators
being typeless.
A minor matter is that separators have an inherent off-by-one
issue. A comma separated list usually contains one fewer commas
than data items. Often, specifications allow a meaningless trailing
comma to be present, for the convenience of programmers.
A big reason to support efficient representation of a cousin relationship and even reserve symbols especially for it rather than rely
on brackets is that it is a natural way to map multidimensional arrays to a hierarchical structure. Another reason is that people are
familiar with and like separators.
7 EFFICIENT REPRESENTATION OF ARBITRARY QUANTITIES
Infinitely many numbers cannot be represented with single symbols from a finite set of symbols.
Though we can’t collapse arbitrary quantities to single symbols,
we can however do better than using n symbols to represent n,
by employing the same principle used in the Arabic numbering
system that replaced unary numbering systems such as the Roman
one and hash marks. All this is well known, as is that a binary
numbering system has the minimum number of symbols needed
to represent quantities of n with log n symbols.
Can we do even better than log n, represent any arbitrary quantity of size n with even fewer symbols? No. For this question, the
Pigeonhole principle applies. As in data compression, to be able
to represent some quantities of amount n with fewer than log n
symbols (from a finite set of symbols), other quantities must be
represented with more than log n symbols. When the amounts are
averaged over all quantities n, the size is log n, or greater.
Numbering systems can be employed to represent structure. Rather
than come up with more and more symbols to represent greater
and greater quantities, as the ASCII committee did with their 4
separator symbols, we can employ 2 symbols in a binary code.
Obviously any one symbol which may be repeated can be made
one member of a set of 2 symbols to be used in a binary encoding. But if there are many symbols which may be repeated, finding
enough symbols becomes a problem.
Since quantities are so useful, and unused symbols so precious, a
better idea is to reserve 2 symbols for a binary code for quantities
only, for any other symbol that may be repeated. For example,
instead of using 2 kinds of open bracket symbol in a binary code
as in something like [(([ to represent 9 open brackets, have 1001(
mean 9 open brackets, 1101* mean 13 asterisks, and so on.
Still better is to use an escape character and a decimal representation. The backslash can be used for this, as the only backslash
escape sequence that uses a number is \0, to mean the NULL character, ASCII 0. Then 9 open brackets can be represented with \9(.
One desirable additional symbol to allow is the minus sign, for negative quantities. If only integers are allowed, then there is no need
to overload the meaning of an escaped period for a decimal point
character.
This sort of representation is the well known idea of run-length
encoding (RLE) [5]. RLE is simple and easy, even relatively easy
for a person to understand without computer aid.
Of course there is the minor problem that the numeric symbols
themselves cannot be the object of a RLE escape sequence. There
are several easy ways to resolve that issue. Easiest is to simply not
support repetition of the digit characters, forcing the use of the
traditional method if repetition of a digit is wanted. Perhaps next
easiest is to employ a terminal symbol. To keep the representation
one character shorter, the terminal symbol can be optional, used
only if needed to remove ambiguity.
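A minimal decoder for this run notation, assuming the backslash-then-decimal-count syntax described above; negative counts and the optional terminal symbol are left out of the sketch:

import re

def expand_runs(s):
    # "\<count><char>" -> <char> repeated <count> times; digits themselves are
    # not supported as the repeated character, the easiest of the resolutions above
    return re.sub(r"\\(\d+)(\D)", lambda m: m.group(2) * int(m.group(1)), s)

assert expand_runs(r"\9(") == "(" * 9
assert expand_runs(r"a\3bc") == "abbbc"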
But RLE is very limited in what it can express. That keeps it
dirt simple and easy to read, but perhaps more expressiveness is
desirable, for such uses as repeating patterns, not only single characters. One simple example of such a sequence is the CR/LF pair.
With a trivial amount of additional syntax, it is possible to efficiently encode repeating patterns. A further use is as a repetition
of an escape. Suppose one has a string in which many of the characters, perhaps over half, must be escaped. One traditional
method is to inflate by up to double the quantity of characters by
preceding each special character with an escape character. That
can get difficult for a programmer to read, as seen in Perl’s regular expressions. A quantity that can be applied to indicate how
many characters are to be escaped can supersede the traditional escape character method. This notion is fairly obvious and has been
proposed on a number of occasions, for instance by Rivest in his
draft for S-expressions [15], for what he called “Verbatim representation”, and with Hollerith constants in FORTRAN 66 [13]. Perl’s
regular expressions have a similar mechanism.
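For the block form, one possible concrete spelling is a length prefix followed by that many literal characters. The \<n>: syntax below is an assumption for illustration, loosely modeled on the verbatim form in Rivest's draft [15]:

def read_verbatim(s, i):
    # s[i] is the escape character; returns (literal_text, index_after_block)
    j = i + 1
    while s[j].isdigit():
        j += 1
    n = int(s[i + 1:j])
    assert s[j] == ":"
    return s[j + 1:j + 1 + n], j + 1 + n

text, after = read_verbatim(r"\11:ten chars!!x", 0)
assert text == "ten chars!!" and after == 15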
While it is trivial to extend run length encoding to handle repeating patterns, there are still many other highly redundant strings
that this extended RLE cannot encode, yet are simple to describe.
The question is how far to go, how much complexity is really useful and can still be easily encoded? And, would it still be human
readable?
Perhaps an efficient way to represent “))(())((” and larger
combinations is also desirable? To encode such things, more is
required. A simple idea is to support the encoding of lists of quantities, a vector, rather than a single quantity. The escape character
can be employed as a separator. Then what is needed is agreement
on the meanings to assign to the multiple quantities. For example,
to encode 5 repetitions of a string of length 4, “abcd”, should it be
“\4\5abcd” or “\5\4abcd” or something else?
But if a vector of quantities is such a good idea, why not a tree
of quantities? It takes only 2 symbols to represent the structure of
a tree. However, the additional complexity is almost certainly too
much to remain human readable, and there’s the question of what
uses could we make of a tree of quantities?
One use for a vector of quantities is for the sizes of the dimensions of a multidimensional array. Such a usage creeps into the domain of external representation of structure. The interleaving can
be reduced to a single character used as a separator, or removed entirely. For instance, a 2x3 array with variable sized elements could
be notated as \2\3\? 1a,1b,1c,2a,2b,2c, using the same separator symbol every time, with the division between 1c and 2a known
to be deeper than the rest only because that info was given in the
vector of quantities. Or that 2x3 array with fixed sized elements
could be notated as \2\3\2 1a1b1c2a2b2c.
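A sketch decoding the fixed-width form of that 2x3 example; the convention assumed here, dimensions first and element width last, is one of the possible agreements mentioned above, not a fixed rule:

import re

def decode_2d(s):
    header, data = s.split(" ", 1)
    rows, cols, width = (int(x) for x in re.findall(r"\\(\d+)", header))
    cells = [data[i:i + width] for i in range(0, len(data), width)]
    return [cells[r * cols:(r + 1) * cols] for r in range(rows)]

assert decode_2d(r"\2\3\2 1a1b1c2a2b2c") == [["1a", "1b", "1c"], ["2a", "2b", "2c"]]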
If means to represent something analogous to a Hollerith constant are provided, some probably will use it for very complicated
objects. Just serialize the data, and use the total length of the resulting string as the size. Supporting runs of the same symbol, and
blocks of data analogous to Hollerith constants, provides enough
for further elaboration if desired, while keeping the notation simpler.
We get away with unary representations, because we stick to
small and simple structures. If we seldom count higher than 3 or 5,
and almost never higher than 10, tally marks work fine. A check of
the Firefox source code reveals that only a few programs reached
a nesting depth of 15, with most never exceeding 10, so RLE for
opening brackets and colons, and quantities to indicate the depths
of separators are not going to remove much clutter. But perhaps
flatter structuring has been chosen to avoid the clutter that would
result from deeper nestings. And block escapes are still a viable
use of quantities.
8 REPRESENTING STRUCTURE WITH POSITIONING
The only ASCII control characters still really useful are the 2 for
indicating a new line of text. Next most used is tab, which has
ambiguous meaning and is easily and often replaced with spaces,
except in a few special cases such as Makefiles. The rest of the
ASCII control characters are very seldom seen, and when present,
modern applications may simply ignore their meanings [27]. Of
the 132,231 text files in the Firefox 50 source code, just 121 have
ASCII control characters other than the 3 most common: LF, tab,
and CR. A mere 5 files use ANSI escape sequences, which start with
the Escape character (ctrl-[, ASCII 27), and that only to set text
colors.
ASCII’s minimal means of positioning text is sufficient but not
efficient or neat. One of the worst inefficiencies is the very repetitive use of spaces to indent lines of text. Some ANSI escape sequences address this issue, but not well. The VT100 series text
terminals became popular in large part because they adopted and
extended the ANSI escape sequences. Yet they have not been much
used outside of terminal control. They did not grow beyond that
niche to become common within text files. Colored text is the ANSI
escape sequence most used, yet it is rare. One of the most common uses of colored text, highlighting of source code, does not use
ANSI at all, even though editing may still be done in a terminal
that supports ANSI. Rather, text editors parse the source code being edited to compute which colors to assign, updating in real time
as the user makes changes. HTML and CSS can specify colors directly, and are not limited to a tiny 16 color palette. That and word
processor options have become the way to set text and other colors in documents. ASCII and ANSI must use a fixed width font to
position text accurately, and consequently, source code is almost
always viewed in such fonts.
What sort of positioning information would be most useful?
Means of clear, easy, and minimal description of position that best
supports useful structures should be leading contenders. Indentation is the most popular way to express hierarchy through position
alone. It is so common that even though curly brace languages do
not use indentation, coders are exhorted to use “proper indentation” anyway, so that their source code is more readable. Perhaps
the most prominent and distinctive feature of the Python programming language is the use of pure positioning to indicate code structure. Another major use is the alignment of columns, usually for
tables. The ASCII tab character does not do either of these well.
Superscripting and subscripting can be considered a kind of positioning. It has a major limitation in that it does not scale. Each
successively deeper nesting requires progressively smaller text, which
soon becomes too small to read.
A proposal is to reassign 4 ASCII control characters for indentation. 3 of them can be increase indent (push), revert indent (pop),
and boost indent, analogous to the 2 brackets and colon in a closing 3 system. These characters can be invisible and have a width of
zero, not directly affecting the position of text. They only change
the level of indentation. The 4th character can mean forward to
the next indentation, replacing the leading spaces on all indented
lines of text. It could also mean advance to the next line, but that
would be less flexible, wouldn’t support situations in which text
such as a line number is wanted before the indentation.
These characters do not specify the size of an indentation, only
the number of levels. This would allow individual users to set the
indentation size themselves without affecting others. It could also
make variable width fonts usable, as the problem of what unit to
use to specify indentation sizes is entirely avoided. It does add one
item to the state a text editor or viewer must maintain: a stack of
indentation levels.
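A sketch of how a viewer might interpret such characters. The actual control codes are not fixed by the proposal, so visible placeholders are assumed here ('>' push, '<' pop, '|' forward to the current indentation), and the colon-like boost character is omitted:

def render(src, tab="    "):
    depth, out = 0, []
    for ch in src:
        if ch == ">":          # increase indent (push)
            depth += 1
        elif ch == "<":        # revert indent (pop)
            depth -= 1
        elif ch == "|":        # forward to the current indentation
            out.append(tab * depth)
        else:
            out.append(ch)
    return "".join(out)

print(render("if x:>\n|do_a()\n|do_b()\n<|done()\n"))
# if x:
#     do_a()
#     do_b()
# done()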
Indentation characters could ease editing of source code. There
would be no more need to shift blocks of code several spaces right
or left by changing the number of leading spaces on each line,
whether by manually adding or deleting each space, or by using
some smart editor function such as that assigned to the tab key in
EMACS.
For the columns of tables, we need better tab functionality. It would
be desirable not to rely on the use of a monospace font to recreate
the intended horizontal alignments. A limitation to dump is any
sort of tiny maximum distance, such as the traditional 8 spaces. Further, it should be
independent of any indentation setting. The C1 set of control characters contains several intended for tabular positioning, but they
do not do enough. One problem is that the state they set up is
global. Another is that they still implicitly depend upon a fixed
font, using the cursor position for fixing the location of tab stops.
It is basically a copy of the ideas and means used in the most advanced typewriters, with all their limitations.
The means HTML provides for laying out tables is fairly comprehensive and flexible, and proven over many years of use. If
better handling of tables is desired in plain text, copying HTML’s
handling and a subset of capabilities concerning tables into control character functionality seems a good approach. Lightweight
markup languages such as Markdown [26] and Bulletin Board Code
(bbcode) [22], arose to satisfy the desire to be able to create lists
and tables in text based forums more easily than with HTML. This
shows that many users like text editing to have such capabilities.
9 CONCLUSION
This paper proposed several changes in standard textual notations
to eliminate redundancy that may be hampering the human readability of structured documents such as source code. Proving that
human readability is improved was not attempted. Instead, the paper surmised that some kinds of redundancy merely add clutter,
showed where and how redundancy lurks, and proposed ways to
eliminate it. Good answers to Lots of Idiotic Spurious Parentheses
have been desired for a long time, and perhaps until now have not
been satisfactory.
Notation that scales and adds expressiveness, and allows much
more brevity without sacrificing clarity, is especially preferred. Punctuation with long histories in natural languages, especially English,
was tapped as a guide, in part because those uses are familiar to
people literate in those languages.
The first proposed change was to add a 3rd kind of bracket symbol roughly equivalent to the meaning of the colon in English, so
that a parent–child relationship represented as “(p(c))” is instead
represented as “(p:c)”. Proof was given that this 3 symbol system
can collapse all runs of 2 or more closing brackets to a single closing bracket.
The idea of a universal closing bracket was presented. “{a (b
[c] d) e}” can be represented as “{a (b [c] d] e]”, reducing the number of different symbols required, as ‘)’ and ‘}’ are no
longer needed.
More use of separators was proposed to replace sequences of
closing brackets followed by opening brackets. “((a()b)(c()d))”
can be represented with “((a,b;c,d))”.
Ways of adding types to the structure were discussed.
Positioning was recognized as an important way of denoting
structure. It is observed that means of expressing position have
been neglected. Markup languages limit themselves to data, and
are not much used for writing of other structured information such
as source code. Moreover, by using visible text to express position,
and requiring translation with special tools such as a web browser,
they fail at the goal of using position alone to express structure.
The means provided in ASCII work only with monospace fonts,
and require much wasteful redundancy. Repurposing some of the
unused control characters to better support indentation and tabular structure was proposed.
Together, these changes reduced the number and quantity of
symbols needed. They improved the amount of data compression
obtained by general purpose data compression programs. They reduced the size of the source code. Whether the goal of greater
human readability was also achieved was not studied, but it was
surmised that removing redundancies in the notation does help
with readability.
REFERENCES
[1] Leonhard Euler. “De infinitis curvis eiusdem generis seu methodus inveniendi aequationes pro infinitis curvis eiusdem generis,” Commentarii academiae scientiarum Petropolitanae 7, 1740, pp. 174-189. Original available at http://eulerarchive.maa.org/docs/originals/E044.pdf, English translation at http://www.17centurymaths.com/contents/euler/e044tr.pdf
[2] Dirichlet, P. G. L. “Beweis des Satzes, dass jede unbegrenzte arithmetische Progression, deren erstes Glied und Differenz ganze Zahlen ohne gemeinschaftlichen Factor sind, unendlich viele Primzahlen enthält,” Abhand. Ak. Wiss. Berlin, 1837. English translation available at http://arxiv.org/pdf/0808.1408v2.pdf
[3] Florian Cajori. “A History of Mathematical Notations.” 1928-1929. Paragraph 643, Vol II, pg. 268.
[4] Jan Łukasiewicz. “Uwagi o aksjomacie Nicoda i ‘dedukcji uogólniającej’,” Księga pamiątkowa Polskiego Towarzystwa Filozoficznego, Lwów 1931.
[5] Oliver, B. M. “Efficient Coding.” Bell System Technical Journal, 31: 724-750. doi:10.1002/j.1538-7305.1952.tb01403.x
[6] A. J. Perlis and K. Samelson. “Preliminary Report – International Algebraic Language,” Communications of the ACM, July 1958.
[7] Kenneth E. Iverson. A Programming Language. John Wiley & Sons, Inc., New York, NY, USA. 1962.
[8] Richards, Martin. “BCPL: A tool for compiler writing and system programming.”
[9] Winett, J. “EBCDIC Codes and Their Mapping to ASCII,” RFC 183, DOI 10.17487/RFC0183, July 1971. http://www.rfc-editor.org/info/rfc183
[10] Abelson, Harold and Sussman, Gerald Jay, with Sussman, Julie. “Structure and Interpretation of Computer Programs.” 1985.
[11] Felsenstein, Joe et al. The Newick tree format. http://evolution.genetics.washington.edu/phylip/newicktree.html
[12] Bemer, Robert. “The Great Curly Brace Trace Chase.” http://www.bobbemer.com/BRACES.HTM
[13] FORTRAN 77 4.0 Reference Manual. SunSoft. Part No.: 802-2998-10 Revision A, November 1995. p. 38. https://archive.org/details/Fortran77Manual
[14] Tim Bray, C. M. Sperberg-McQueen, et al. Extensible Markup Language (XML), W3C Working Draft 14-Nov-96. https://www.w3.org/TR/WD-xml-961114
[15] Rivest, R. “S-Expressions,” Network Working Group, Internet Draft, May 4, 1997. Section 4.1. http://people.csail.mit.edu/rivest/Sexp.txt
[16] Goldfarb, Charles F. “SGML: The Reason Why and the First Published Hint.” Journal of the American Society for Information Science, volume 48, number 7 (July 1997). Annotated reprint of Goldfarb, Charles F., Mosher, Edward J., and Peterson, Theodore I. “An Online System for Integrated Text Processing,” Journal of the American Society for Information Science, volume 7 (Oct 1970). http://www.sgmlsource.com/history/jasis.htm
[17] Oren Ben-Kiki, Clark Evans, Brian Ingerson. YAML Ain’t Markup Language (YAML) 1.0. Final Draft 2004-JAN-29. http://yaml.org/spec/1.0/
[18] Wall, Larry. “Synopsis 4: Blocks and Statements.” http://design.perl6.org/S04.html
[19] Markstrum, Shane. “Staking Claims: A History of Programming Language Design Claims and Evidence.”
[20] Shafranovich, Y. “Common Format and MIME Type for Comma-Separated Values (CSV) Files,” RFC 4180, DOI 10.17487/RFC4180, October 2005. http://www.rfc-editor.org/info/rfc4180
[21] Buse, Raymond P.L. and Weimer, Westley R. “A Metric for Software Readability.” Proceedings of the 2008 International Symposium on Software Testing and Analysis, pp. 121-130.
[22] anonymous authors. Bulletin Board Code. http://www.bbcode.org
[23] Yasuoka, Koichi and Yasuoka, Motoko. “On the Prehistory of QWERTY.” ZINBUN (2011), 42: 161-174. https://doi.org/10.14989/139379
[24] ECMA-404 The JSON Data Interchange Format. https://www.ecma-international.org/publications/files/ECMA-ST/ECMA-404.pdf
[25] HTML Elements. http://www.w3schools.com/html/html elements.asp
[26] MacFarlane, John. CommonMark Spec. http://spec.commonmark.org/
[27] Leonard, S. “Guidance on Markdown: Design Philosophies, Stability Strategies, and Select Registrations,” RFC 7764, DOI 10.17487/RFC7764, March 2016. http://www.rfc-editor.org/info/rfc7764
[28] Open Tree of Life, v9.1. http://files.opentreeoflife.org/synthesis/opentree9.1/opentree9.1 tree.tgz
System-Level Modeling and Optimization of the
Energy Efficiency in Cellular Networks – A
Stochastic Geometry Framework
arXiv:1801.07513v1 [] 23 Jan 2018
Marco Di Renzo, Senior Member, IEEE, Alessio Zappone, Senior Member, IEEE,
Thanh Tu Lam, Student Member, IEEE, and Mérouane Debbah, Fellow, IEEE
Abstract—In this paper, we analyze and optimize the energy efficiency of downlink cellular networks. With the aid
of tools from stochastic geometry, we introduce a new closed-form analytical expression of the potential spectral efficiency
(bit/sec/m2 ). In the interference-limited regime for data transmission, unlike currently available mathematical frameworks, the
proposed analytical formulation depends on the transmit power
and deployment density of the base stations. This is obtained
by generalizing the definition of coverage probability and by
accounting for the sensitivity of the receiver not only during
the decoding of information data, but during the cell association
phase as well. Based on the new formulation of the potential
spectral efficiency, the energy efficiency (bit/Joule) is given in
a tractable closed-form formula. An optimization problem is
formulated and is comprehensively studied. It is mathematically
proved, in particular, that the energy efficiency is a unimodal and
strictly pseudo-concave function in the transmit power, given the
density of the base stations, and in the density of the base stations,
given the transmit power. Under these assumptions, therefore, a
unique transmit power and density of the base stations exist,
which maximize the energy efficiency. Numerical results are
illustrated in order to confirm the obtained findings and to prove
the usefulness of the proposed framework for optimizing the
network planning and deployment of cellular networks from the
energy efficiency standpoint.
Index Terms—Cellular Networks, Energy Efficiency, Poisson
Point Processes, Stochastic Geometry, Optimization.
I. INTRODUCTION
The Energy Efficiency (EE) is regarded as a key performance metric towards the optimization of operational cellular networks, and the network planning and deployment
Manuscript received August 2, 2017; revised November 19, 2017; accepted
January 9, 2018. Date of publication January XY, 2018; date of current
version January XY, 2018. This work was supported in part by the European
Commission through the H2020-MSCA ETN-5Gwireless project under Grant
Agreement 641985, the H2020-MSCA IF-BESMART project under Grant
Agreement 749336, and the H2020-ERC PoC-CacheMire project under Grant
Agreement 727682. The associate editor coordinating the review of this paper
and approving it for publication was S. Mukherjee. (Corresponding author:
Marco Di Renzo)
M. Di Renzo and T. Tu-Lam are with the Laboratoire des Signaux et
Systèmes, CNRS, CentraleSupélec, Univ Paris Sud, Université Paris-Saclay,
3 rue Joliot Curie, Plateau du Moulon, 91192 Gif-sur-Yvette, France. (e-mail:
[email protected], [email protected]).
A. Zappone and M. Debbah are with the LANEAS group of the Laboratoire des Signaux et Systèmes, CentraleSupélec, CNRS, Univ Paris Sud,
Université Paris-Saclay, 3 rue Joliot Curie, Plateau du Moulon, 91192
Gif-sur-Yvette, France. (e-mail: [email protected], [email protected]). M. Debbah is also with the Mathematical and Algorithmic Sciences Laboratory, France Research Center, Huawei
Technologies, 20 Quai du Point du Jour, 92100 Boulogne-Billancourt, France.
of emerging communication systems [1]. The EE is defined
as a benefit-cost ratio where the benefit is given by the
amount of information data per unit time and area that can be
reliably transmitted in the network, i.e., the network spectral
efficiency, and the cost is represented by the amount of power
per unit area that is consumed to operate the network, i.e.,
the network power consumption. Analyzing and designing a
communication network from the EE standpoint necessitate
appropriate mathematical tools, which are usually different
from those used for optimizing the network spectral efficiency
and the network power consumption individually [2]. The
optimization problem, in addition, needs to be formulated in
a sufficiently simple but realistic manner, so that all relevant
system parameters appear explicitly and the utility function is
physically meaningful.
Optimizing the EE of a cellular network can be tackled in
different ways, which include [1]: the design of medium access
and scheduling protocols for optimally using the available
resources, e.g., the transmit power; the use of renewable
energy sources; the development of innovative hardware for
data transmission and reception; and the optimal planning and
deployment of network infrastructure. In the present paper, we
focus our attention on optimizing the average number of Base
Stations (BSs) to be deployed (or to be kept operational) per
unit area and their transmit power. Henceforth, this is referred
to as “system-level EE” optimization, i.e., the EE across the
entire (or a large portion of the) cellular network is the utility
function of interest.
System-level analysis and optimization are useful when the
network operators are interested in optimizing the average
performance across the entire cellular network. Hence, they
are relevant for optimally operating current networks, and for
deploying and planning future networks. In the first case, given
an average number of BSs per unit area already deployed, they
may provide information on the average number of BSs that
can be switched off based on the average load of the network,
and on their optimal transmit power to avoid coverage holes.
In the second case, they may guide the initial deployment
of cellular infrastructure that employs new types of BSs
(e.g., powered by renewable energy sources), new transmission
technologies (e.g., large-scale antennas), or that operate in new
frequency bands (e.g., the millimeter-wave spectrum).
In the last few years, the system-level modeling and analysis
of cellular networks have been facilitated by capitalizing on the
mathematical tool of stochastic geometry and, more precisely,
on the theory of spatial point processes [3]-[5]. It has been
empirically validated that, from the system-level standpoint,
the locations of the BSs can be abstracted as points of a
homogeneous Poisson Point Process (PPP) whose intensity
coincides with the average number of BSs per unit area [6]. A
comprehensive survey of recent results in this field of research
is available in [7].
A relevant performance metric for the design of cellular
networks is the Potential Spectral Efficiency (PSE), which
is the network information rate per unit area (measured in
bit/sec/m2 ) that corresponds to the minimum signal quality
for reliable transmission. Under the PPP modeling assumption,
the PSE can be obtained in two steps: i) first by computing
the PSE of a randomly chosen Mobile Terminal (MT) and
by assuming a given spatial realization for the locations of
the BSs and ii) then by averaging the obtained conditional
PSE with respect to all possible realizations for the locations
of the BSs and MTs. In the interference-limited regime, this
approach allows one to obtain a closed-form expression of
the PSE under the (henceforth called) standard modeling
assumptions, i.e., single-antenna transmission, singular path-loss model, Rayleigh fading, fully-loaded BSs, cell association
based on the highest average received power [3]. Motivated by
these results, the PPP modeling approach for the locations of
the BSs has been widely used to analyze the trade-off between
the network spectral efficiency and the network power consumption, e.g., [8], as well as to minimize the network power
consumption given some constraints on the network spectral
efficiency or to maximize the network spectral efficiency given
some constraints on the network power consumption [9]. The
PPP modeling approach has been applied to optimize the
EE of cellular networks as well. Notable examples for this
field of research are [10]-[26]. A general study of the energy
and spectral efficiencies of multi-tier cellular networks can
be found in [27]. In the authors’ opinion, however, currently
available approaches for modeling and optimizing the system-level EE of cellular networks are insufficient and/or unsuitable
for mathematical analysis. This is further elaborated in the next
section.
A. Fundamental Limitations of Current Approaches for
System-Level EE Optimization
We begin with an example that shows the limitations of
the available analytical frameworks. In the interference-limited
regime, under the standard modeling assumptions, the PSE is:
PSE = λBS BW log2 (1 + γD) Pcov (γD) (a)= λBS BW log2 (1 + γD) / 2F1 (1, −2/β, 1 − 2/β, −γD)    (1)
where λBS is the density of BSs, BW is the transmission bandwidth, γD is the threshold for reliable decoding, β > 2 is the
path-loss exponent, 2 F1 (·, ·, ·, ·) is the Gauss hypergeometric
function, Pcov (·) is the coverage probability defined in [3, Eq.
(1)], and (a) follows from [3, Eq. (8)].
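For illustration, (1) and the resulting EE = PSE/Pgrid can be evaluated numerically; the parameter values below are arbitrary, and the point is only that Ptx cancels out of the PSE, so an EE built on (1) can only decrease as Ptx grows:

import numpy as np
from scipy.special import hyp2f1

beta, gammaD = 3.5, 1.0        # path-loss exponent, decoding threshold (illustrative)
lamBS, BW = 1e-5, 20e6         # BS density [1/m^2] and bandwidth [Hz] (illustrative)
Pcirc = 10.0                   # static power per BS [W] (illustrative)

Pcov = 1.0 / hyp2f1(1.0, -2.0 / beta, 1.0 - 2.0 / beta, -gammaD)
PSE = lamBS * BW * np.log2(1.0 + gammaD) * Pcov          # bit/sec/m^2, no Ptx in it

for Ptx in (0.1, 1.0, 10.0):
    EE = PSE / (lamBS * (Ptx + Pcirc))                   # bit/Joule, decreasing in Ptx
    print(f"Ptx = {Ptx:5.1f} W  ->  EE = {EE:.3e} bit/Joule")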
The main strength of (1) is its simple closed-form formulation. This is, however, its main limitation as well, especially
as far as formulating meaningful system-level EE optimization
problems is concerned. Under the standard modeling assumptions, in fact, the network power consumption (Watt/m2 ) is1
Pgrid = λBS (Ptx + Pcirc ), where Ptx is the transmit power
of the BSs and Pcirc is the static power consumption of the
BSs, which accounts for the power consumed in all hardware
blocks, e.g., analog-to-digital and digital-to-analog converters,
analog filters, cooling components, and digital signal processing [1]. The system-level EE (bit/Joule) is defined as
the ratio between (1) and the network power consumption,
i.e., EE = PSE/Pgrid . Since the PSE in (1) is independent
of the transmit power of the BSs, Ptx , and the network
power consumption, Pgrid , linearly increases with Ptx , we
conclude that any EE optimization problems formulated based
on (1) would result in the trivial optimal solution consisting
of turning all the BSs off (the optimal transmit power is
zero). In the context of multi-tier cellular networks, a similar
conclusion has been obtained in some early papers on system-level EE optimization, e.g., [8], where it is shown that the
EE is maximized if all macro BSs operate in sleeping mode.
A system-level EE optimization problem formulated based
on (1) would result, in addition, in a physically meaningless
utility function, which provides a non-zero benefit-cost ratio,
i.e., a strictly positive EE while transmitting zero power
(EE (Ptx = 0) = PSE/(λBS Pcirc ) > 0). In addition, the EE
computed from (1) is independent of the density of BSs. We
briefly mention here, but will detail it in Section III, that
the load model, i.e., the fully-loaded assumption, determines
the conclusion that the EE does not depend on λBS . This
assumption, however, does not affect the conclusion that the
optimal Ptx is zero. This statement is made more formal in
the sequel (see Proposition 1 and Corollary 1). It is worth
noting that the conclusion that the PSE is independent of
Ptx is valid regardless of the specific path-loss model being
used2 . It depends, on the other hand, on the assumptions of
interference-limited operating regime and of having BSs that
emit the same Ptx .
Based on these observations, we conclude that a new
analytical formulation of the PSE that explicitly depends on
the transmit power and density of the BSs, and that is tractable
enough for system-level EE optimization is needed. From an
optimization point of view, in particular, it is desirable that
the PSE is formulated in a closed-form expression and that
the resulting EE function is unimodal and strictly pseudo-concave in the transmit power (given the density) and in the
density (given the transmit power) of the BSs. This would
imply, e.g., that the first-order derivative of the EE with respect
to the transmit power of the BSs (assuming the density given)
would have a unique zero, which would be the unique optimal
transmit power that maximizes the EE [2]. Similar conclusions
would apply to the optimal density of the BSs for a given
transmit power. Further details are provided in Section IV.
In this regard, a straightforward approach to overcome the
limitations of (1) would be to abandon the interference-limited
assumption and to take the receiver noise into account. In this
1 In the present paper, this holds true for Load Model 1 that is introduced
in Section II-D.
2 The reader may verify this statement by direct inspection of (4), where
Ptx cancels out for any path-loss models.
case, the PSE would be formulated in terms of a single-integral
that, in general, cannot be expressed in closed-form [3], [17,
Eq. (9)]. This integral formulation, in particular, results in
a system-level EE optimization problem that is not easy to
tackle. This approach, in addition, has the inconvenience of
formulating the optimization problem for an operating regime
where cellular networks are unlikely to operate in practice.
B. State-of-the-Art on System-Level EE Optimization
We briefly summarize the most relevant research contributions on energy-aware design and optimization of cellular
networks. Due to space limitations, we discuss only the
contributions that are closely related to ours. A state-of-the-art
survey on EE optimization is available in [2].
In [8], the authors study the impact of switching some macro
BSs off in order to minimize the power consumption under
some constraints on the coverage probability. Since the authors
rely on the mathematical framework in (1), they conclude
that all macro BSs need to be switched off to maximize
the EE. In [9], the author exploits geometric programming
to minimize the power consumption of cellular networks
given some constraints on the network coverage and capacity.
The EE is not studied. A similar optimization problem is
studied in [11] and [17] for two-tier cellular networks but
the EE is not studied either. As far as multi-tier cellular
networks are concerned, an important remark is necessary. In
the interference-limited regime, optimal transmit powers and
densities for the different tiers of BSs may exist if the tiers
have different thresholds for reliably decoding the data. The
PSE, otherwise, is the same as that of single-tier networks, i.e.,
it is independent of the transmit power and density of the BSs.
In [14], the authors study the EE of small cell networks with
multi-antenna BSs. For some parameter setups, it is shown
that an optimal density of the BSs exists. The EE, however,
still decreases monotonically with the transmit power of the
BSs, which implies that the EE optimization problem is not
well formulated from the transmit power standpoint. More
general scenarios are considered in [10], [12], [13], [15], [16],
[18]-[25], but similar limitations hold. In some cases, e.g.,
[20], the existence and uniqueness of an optimal transmit
power and density of the BSs are not mathematically proved
or, e.g., in [24], the problem formulation has a prohibitive
numerical complexity as it necessitates the computation of
multiple integrals and infinite series. It is apparent, therefore,
that a tractable approach for system-level EE optimization is
missing in the open technical literature. In the present paper,
we introduce a new definition of PSE that overcomes these
limitations.
C. Research Contribution and Novelty
In the depicted context, the specific novel contributions
made by this paper are as follows:
• We introduce a new closed-form analytical formulation of
the PSE for interference-limited cellular networks (during
data transmission), which depends on the transmit power
and density of the BSs. The new expression of the PSE
is obtained by taking into account the power sensitivity
of the receiver not only for data transmission but for cell
association as well.
• Based on the new expression of the PSE, a new system-level EE optimization problem is formulated and comprehensively studied. It is mathematically proved that the
EE is a unimodal and strictly pseudo-concave function in
the transmit power given the BSs’ density and in the BSs’
density given the transmit power. The dependency of the
optimal power as a function of the density and of the
optimal density as a function of the power is discussed.
• A first-order optimal pair of transmit power and density
of the BSs is obtained by using a simple alternating
optimization algorithm whose details are discussed in the
sequel. Numerical evidence of the global optimality of
this approach is provided as well.
• Two load models for the BSs are analyzed and compared
against each other. It is shown that they provide the same
PSE but have different network power consumptions.
Hence, the optimal transmit power and density of the
BSs that maximize their EEs are, in general, different.
Their optimal EEs and PSEs are studied and compared
against each other.
The paper is organized as follows. In Section II, the system
model is presented. In Section III, the new definition of PSE
is introduced. In Section IV, the EE optimization problem is
formulated and studied. In Section V, numerical results are
shown. Finally, Section VI concludes the paper.
Notation: The main symbols and functions used in the
present paper are reported in Table I.
II. SYSTEM MODEL
In this section, the network model is introduced. With the
exception of the load model, we focus our attention on a
system where the standard modeling assumptions hold. One of
the main aims of the present paper is, in fact, to highlight the
differences between currently available analytical frameworks
and the new definition of PSE that is introduced. The proposed
approach can be readily generalized to more advanced system
models, such as that recently adopted in [5].
A. Cellular Network Modeling
A downlink cellular network is considered. The BSs are
modeled as points of a homogeneous PPP, denoted by ΨBS , of
density λBS . The MTs are modeled as another homogeneous
PPP, denoted by ΨMT , of density λMT . ΨBS and ΨMT are
independent of each other. The BSs and MTs are equipped
with a single omnidirectional antenna. Each BS transmits with
a constant power denoted by Ptx . The analytical frameworks
are developed for the typical MT, denoted by MT0 , that is
located at the origin (Slivnyak theorem [28, Th. 1.4.5]). The
BS serving MT0 is denoted by BS0 . The cell association
criterion is introduced in Section II-C. The subscripts 0, i and
n identify the intended link, a generic interfering link, and a
generic BS-to-MT link. The set of interfering BSs is denoted
by ΨBS^(I) . As for data transmission, the network operates in
the interference-limited regime, i.e., the noise is negligible
compared with the inter-cell interference.
TABLE I
SUMMARY OF MAIN SYMBOLS AND FUNCTIONS USED THROUGHOUT THE PAPER.
Symbol/Function : Definition
E{·}, Pr {·} : Expectation operator, probability measure
λBS , λMT : Density of base stations, mobile terminals
ΨBS , ΨMT , ΨBS^(I) : PPP of base stations, mobile terminals, interfering base stations
BS0 , BSi , BSn : Serving, interfering, generic base station
Ptx , Pcirc , Pidle : Transmit, circuits, idle power consumption of base stations
rn , gn : Distance, fading power gain of a generic link
l (·), Ln , L0 : Path-loss, shorthand of path-loss, path-loss of intended link
κ, β > 0 : Path-loss constant, slope (exponent)
BW , N0 : Transmission bandwidth, noise power spectral density
σN² = BW N0 , Iagg (·) : Noise variance, aggregate other-cell interference
γD , γA : Reliability threshold for decoding, cell association
L (x) = 1 − (1 + x/α)^(−α) , α = 3.5 : Probability that a base station is in transmission mode
fX (·), FX (·) : Probability density/mass, cumulative distribution/mass function of X
1 (·), 2F1 (·, ·, ·, ·), Γ(·) : Indicator function, Gauss hypergeometric function, gamma function
max {x, y}, min {x, y} : Maximum, minimum between x and y
Υ = 2F1 (−2/β, 1, 1 − 2/β, −γD ) − 1 ≥ 0 : Shorthand
Q (x, y, z) = 1 − exp(−πx (y/η)^(2/β) (1 + ΥL (z))) : Shorthand with η = κ σN² γA
SIR, SNR : Signal-to-interference-ratio, average signal-to-noise-ratio
Pcov , PSE, Pgrid : Coverage, potential spectral efficiency, network power consumption
żx (x, y), z̈x (x, y) : First-order, second-order derivative with respect to x
B. Channel Modeling
For each BS-to-MT link, path-loss and fast-fading are considered. Shadowing is not explicitly taken into account because its net effect lies in modifying the density of the BSs [5]. All BS-to-MT links are assumed to be mutually independent and identically distributed (i.i.d.).
a) Path-Loss: Consider a generic BS-to-MT link of length rn . The path-loss is l (rn ) = κ rn^β , where κ and β are the path-loss constant and the path-loss slope (exponent). For simplicity, only the unbounded path-loss model is studied in the present paper. The analysis of more general path-loss models is an interesting but challenging generalization that is left to future research [29].
b) Fast-Fading: Consider a generic BS-to-MT link. The power gain due to small-scale fading is assumed to follow an exponential distribution with mean Ω. Without loss of generality, Ω = 1 is assumed. The power gain of a generic BS-to-MT link is denoted by gn .
C. Cell Association Criterion
A cell association criterion based on the highest average received power is assumed. Let BSn ∈ ΨBS denote a generic BS of the network. The serving BS, BS0 , is obtained as follows:
BS0 = arg max_{BSn ∈ ΨBS} {1/l (rn )} = arg max_{BSn ∈ ΨBS} {1/Ln }   (2)
where the shorthand Ln = l (rn ) is used. As for the intended link, L0 = min_{BSn ∈ ΨBS} {Ln } holds.
D. Load Modeling
Based on (2), several or no MTs can be associated to a generic BS. In the latter case, the BS transmits zero power, i.e., Ptx = 0, and, thus, it does not generate inter-cell interference. In the former case, on the other hand, two load models are studied and compared against each other. The main objective is to analyze the impact of the load model on the power consumption and EE of cellular networks. Further details are provided in the sequel. Let NMT denote the number of MTs associated to a generic BS and BW denote the transmission bandwidth available to each BS. If NMT = 1, for both load models, the single MT associated to the BS is scheduled for transmission and the entire bandwidth, BW , and transmit power, Ptx , are assigned to it.
a) Load Model 1: Exclusive Allocation of Bandwidth and Power to a Randomly Selected MT: If NMT > 1, the BS randomly selects, at each transmission instance, a single MT among the NMT associated to it. Also, the BS allocates the entire transmission bandwidth, BW , and the total transmit power, Ptx , to it. The random scheduling of the MTs at each transmission instance ensures that, in the long term, all the MTs associated to a BS are scheduled for transmission.
b) Load Model 2: Equal Allocation of Bandwidth and Power Among All the MTs: If NMT > 1, the BS selects, at each transmission instance, all the NMT MTs associated to it. The BS equally splits the available transmission bandwidth, BW , and evenly spreads the available transmit power, Ptx , among the NMT MTs. Thus, the bandwidth and power are viewed as continuous resources by the BS’s scheduler: each MT is assigned a bandwidth equal to BW /NMT and the power spectral density at the detector’s (i.e., the typical MT, MT0 ) input is equal to Ptx /BW .
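To make the cell association rule in (2) concrete, the following minimal sketch (ours, not part of the original paper) samples the BS PPP in a finite window and associates the typical MT at the origin with the BS that minimizes the path-loss Ln = κ rn^β, i.e., that maximizes the average received power. The window size, the densities, and the helper names (sample_ppp, associate_typical_mt) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_ppp(density, side):
    """Sample a homogeneous PPP with the given density in a side x side square."""
    n = rng.poisson(density * side ** 2)
    return rng.uniform(-side / 2, side / 2, size=(n, 2))

def associate_typical_mt(bs_xy, kappa, beta):
    """Serving BS of the typical MT at the origin according to (2):
    BS0 = argmax 1/L_n, i.e., the BS with the smallest path-loss L_n = kappa * r_n^beta."""
    path_loss = kappa * np.linalg.norm(bs_xy, axis=1) ** beta
    idx = int(np.argmin(path_loss))
    return idx, path_loss[idx]

# Illustrative values (assumptions, not the setup of Table IV).
lambda_bs = 5e-6                       # BSs per m^2
side = 4000.0                          # observation window [m]
kappa = (4 * np.pi * 2.1e9 / 3e8) ** 2
beta = 3.5

bs_xy = sample_ppp(lambda_bs, side)
while bs_xy.shape[0] == 0:             # re-draw in the unlikely event of an empty window
    bs_xy = sample_ppp(lambda_bs, side)
idx, L0 = associate_typical_mt(bs_xy, kappa, beta)
print(f"{bs_xy.shape[0]} BSs drawn; serving BS index {idx}; L0 = {L0:.3e}")
```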
In the sequel, we show that the main difference between the
two load models lies in the power consumption of the BSs. In
simple terms, the more MTs are scheduled for transmission the
higher the static power consumption of the BSs is. The analysis
of general load models, e.g., based on a discrete number of
resource blocks [5], is left to future research due to space
limits.
E. Power Consumption Modeling
In the considered system model, the BSs can operate in
two different modes: i) they are in idle mode if no MTs are
associated to them and ii) they are in transmission mode if
at least one MT is associated to them. The widespread linear
power consumption model for the BSs is adopted [1], [30],
which accounts for the power consumption due to the transmit
power, Ptx , the static (circuit) power, Pcirc , and the idle power,
Pidle . If the BS is in idle mode, its power consumption is
equal to Pidle . If the BS is in transmission mode, its power
consumption is a function of Ptx , Pcirc , and depends on
the load model. Further details are provided in the sequel.
In the present paper, based on physical considerations, the
inequalities 0 ≤ Pidle ≤ Pcirc are assumed.
III. A NEW ANALYTICAL FORMULATION OF THE PSE
In this section, we introduce and motivate a new definition
of coverage probability, Pcov , and PSE, which overcomes the
limitations of currently available analytical frameworks and is
suitable for system-level optimization (see Section I-A). All
symbols are defined in Table I.
Definition 1: Let γD and γA be the reliability thresholds
for the successful decoding of information data and for the
successful detection of the serving BS, BS0 , respectively. The
coverage probability, Pcov , of the typical MT, MT0 , is defined
as follows:
Pcov (γD , γA ) = Pr{SIR ≥ γD , SNR ≥ γA } if MT0 is selected, and Pcov (γD , γA ) = 0 if MT0 is not selected.   (3)
where the Signal-to-Interference-Ratio (SIR) and the average
Signal-to-Noise-Ratio (SNR) can be formulated, for the network model under analysis, as follows:
SIR = (Ptx g0 /L0 ) / ( Σ_{BSi ∈ ΨBS^(I)} Ptx gi /Li · 1 (Li > L0 ) ) ,   SNR = (Ptx /L0 ) / σN² .   (4)
Remark 1: The definition of Pcov in (3) reduces to the
conventional one if γA = 0 [3].
Remark 2: The average SNR, SNR, in (4) is averaged with
respect to the fast fading. The SIR depends, on the other hand,
on fast fading. This choice is discussed in the sequel.
Remark 3: The new definition of coverage probability, Pcov , in (3) is in agreement with the cell selection criterion specified by the 3rd Generation Partnership Project (3GPP) [31, Sec. 5.2.3.2].
Fig. 1. Illustration of the interplay between Ptx and λBS . For simplicity, only a cluster of seven BSs is represented by keeping the size of the region of interest (square box) the same. The inter-site distance of the BSs (represented as red dots), i.e., the size of the hexagonal cells, is determined by λBS . The shape of the cells depends on the cell association in (2). The circular shaded disk (in light yellow) represents the actual coverage region of the BSs that is determined by Ptx : i) a MT inside the disk receives a sufficiently good signal to detect the BS and to get associated with it, ii) a MT outside the disk cannot detect the BS and is not in coverage. The sub-figures (a)-(c) are obtained by assuming the same λBS but a different Ptx . The sub-figures (d) and (e) are obtained by considering a λBS greater than that of sub-figures (a)-(c) but keeping the same Ptx as sub-figures (a) and (b), respectively. The sub-figure (f) is obtained by considering a λBS smaller than that of sub-figure (c) but keeping the same Ptx as it. We observe that, for a given λBS , the transmit power Ptx is appropriately chosen in sub-figures (a), (e) and (f). Ptx is, on the other hand, under-provisioned in sub-figure (b) and over-provisioned in sub-figures (c) and (d). In the first case, the MTs are not capable of detecting the BS throughout the entire cell, i.e., a high outage probability is expected. In the second case, the BSs emit more power than what is actually needed, which results in a high power consumption.
a) Motivation for the New Definition of Pcov : The motivation for the new definition of coverage probability originates
from the inherent limitations of the conventional definition
(obtained by setting γA = 0 in (3)), which prevents one
from taking into account the strong interplay between the
transmit power and the density of the BSs for optimal cellular
networks planning. In fact, the authors of [3] have shown
that, in the interference-limited regime, Pcov is independent
of the transmit power of the BSs. If, in addition, a fully-loaded model is assumed, i.e., λMT /λBS ≫ 1, then Pcov is
independent of the density of BSs as well. This is known as
the invariance property of Pcov as a function of Ptx and λBS
[5]. The tight interplay between Ptx and λBS is, on the other
hand, illustrated in Fig. 1, where, for ease of representation,
an hexagonal cellular layout is considered. Similar conclusions
apply to the PPP-based cellular layout studied in the present
paper. In Fig. 1, it is shown that, for a given λBS , Ptx needs
to be appropriately chosen in order to guarantee that, for
any possible location of MT0 in the cell, two conditions are
fulfilled: i) the MT receives a sufficiently good signal quality,
i.e., the average SNR is above a given threshold, γA , that
ensures a successful cell association, i.e., to detect the presence
(pilot signal) of the serving BS and ii) the BSs do not over-provision Ptx , which results in an unnecessary increase of the power consumption. It is expected, therefore, that an optimal
value of Ptx given λBS and an optimal value of λBS given
Ptx that optimize EE exist [32].
b) Advantages of the New Definition of Pcov : The new
definition of Pcov allows one to overcome the limitations of the
conventional definition and brings about two main advantages.
The first advantage originates from direct inspection of (4). In
the conventional definition of Pcov , only the SIR is considered
and the transmit power of the BSs, Ptx , cancels out between
numerator and denominator. This is the reason why Pcov is
independent of Ptx . In the proposed new definition, on the
other hand, Ptx explicitly appears in the second constraint
and does not cancel out. The density of the BSs, λBS , appears
implicitly in the distribution of the path-loss of the intended
link, L0 . The mathematical details are provided in the sequel.
The second inequality, as a result, allows one to explicitly
account for the interplay between Ptx and λBS (shown in Fig.
1). If λBS increases (decreases), in particular, L0 decreases
(increases) in statistical terms. This implies that Ptx can be
decreased (increased) while still ensuring that the average SNR
is above γA . The second advantage is that the new definition of
Pcov is still mathematically tractable and the PSE is formulated
in a closed-form expression. This is detailed in Proposition 1.
Remark 4: The new definition of Pcov in (3) is based on the
actual value of L0 because a necessary condition for the typical
MT to be in coverage is that it can detect the pilot signal of at
least one BS during the cell association. If the BS that provides
the highest average received power cannot be detected, then
any other BSs cannot be detected either. The second constraint
on the definition of Pcov , in addition, is based on the average
SNR, i.e., the SNR averaged with respect to the fast fading,
because the cell association is performed based on long-term
statistics, i.e., based on the path-loss in the present paper, in
order to prevent too frequent handovers.
Remark 5: Compared with the conventional definition
of coverage based on the Signal-to-Interference+Noise-Ratio
(SINR) [3], the new definition in (3) is conceptually different.
Equation (3) accounts for the signal quality during both the
cell association and data transmission phases. The definition
of coverage based on the SINR, on the other hand, accounts
for the signal quality only during the data transmission phase.
In spite of this fundamental difference, Pcov in (3) may be
interpreted as an approximation for the coverage probability
based on the SINR, and, more precisely, as an alternative
method to incorporate the thermal noise into the problem
formulation. Compared with the coverage based on the SINR,
however, the new definition in (3) accounts for the impact of
thermal noise when it is the dominant factor, i.e., during the
cell association phase when the inter-cell interference can be
ignored as orthogonal pilot signals are used.
Remark 6: Figure 1 highlights that the new definition of
coverage in (3) is not only compliant with [31] but it has a
more profound motivation and wider applicability. In PPP-based cellular networks, in contrast to regular grid-based
network layouts, the size and shape of the cells are random.
This implies that it is not possible to identify a relation, based
on pure geometric arguments, between the cell size and the
transmit power of the BSs that makes the constraint on SNR
in (3) ineffective in practice. In equivalent terms, in this case,
the threshold γA may turn out to be sufficiently small to
render the constraint on SNR ineffective. This is, e.g., the
approach employed in [32, Eq. (1)], where the relation between
the transmit power and density of BSs is imposed a priori
based on the path-loss. In practice, however, cellular networks
are irregularly deployed, which makes the optimal relation
between the transmit power and density of BSs difficult to
identify because of the coexistence of cells of small and large
sizes. The constraint on SNR in (3) allows one to take into
account the interplay between the transmit power and density
of BSs in irregular (realistic) cellular network deployments.
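Before turning to the analytical formulation, Definition 1 can also be checked numerically. The following self-contained Monte Carlo sketch (ours, with illustrative parameters) estimates Pcov in (3): it draws the BS PPP, applies the association rule (2), and declares the typical MT covered when the SIR in (4) exceeds γD and the average SNR exceeds γA. For simplicity it lets every BS interfere, i.e., it does not apply the thinning to transmitting BSs used later in the paper, so it is only an approximation of the analytical setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_mc(lam_bs, p_tx, gamma_d, gamma_a, kappa, beta,
                bw=20e6, n0=10 ** (-174 / 10) * 1e-3, side=8000.0, trials=2000):
    """Monte Carlo estimate of Pcov in (3) for the typical MT at the origin."""
    sigma2 = bw * n0                          # noise power sigma_N^2 = BW * N0 [W]
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam_bs * side ** 2)
        if n == 0:
            continue                          # no detectable BS: the MT is not in coverage
        xy = rng.uniform(-side / 2, side / 2, size=(n, 2))
        L = kappa * np.linalg.norm(xy, axis=1) ** beta
        g = rng.exponential(1.0, size=n)      # Rayleigh fading power gains
        i0 = np.argmin(L)                     # serving BS from (2)
        snr_avg = (p_tx / L[i0]) / sigma2     # average SNR of (4) (fading averaged out)
        mask = L > L[i0]                      # interfering BSs, 1(L_i > L_0)
        interference = np.sum(p_tx * g[mask] / L[mask])
        sir = (p_tx * g[i0] / L[i0]) / max(interference, 1e-30)
        covered += (sir >= gamma_d) and (snr_avg >= gamma_a)
    return covered / trials

kappa = (4 * np.pi * 2.1e9 / 3e8) ** 2
pcov = coverage_mc(lam_bs=1 / (np.pi * 250 ** 2), p_tx=10 ** (43 / 10) * 1e-3,
                   gamma_d=10 ** 0.5, gamma_a=10 ** 0.5, kappa=kappa, beta=3.5)
print(f"Estimated Pcov = {pcov:.3f}")
```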
A. Analytical Formulation of the PSE
In this section, we provide the mathematical definitions
of the PSE for the two load models introduced in Section
II-D. They are summarized in the following two lemmas,
which constitute the departing point to obtain the closed-form
analytical frameworks derived in Section III-B.
Remark 7: The PSE is defined from the perspective of
the typical MT, MT0 rather than from the perspective of the
typical cell (or BS). This implies that the proposed approach
allows one to characterize the PSE of the so-called Crofton
cell, which is the cell that contains MT0 . This approach is
commonly used in the literature and is motivated by the lack
of results on the explicit distribution of the main geometrical
characteristics of the typical cell of a Voronoi tessellation.
Further details on the Crofton and typical cells are available
in [34] and [35].
Let N̄MT be the number of MTs that lie in the cell of
the typical MT, MT0 , with the exception of MT0 . N̄MT is
a discrete random variable whose probability mass function
in the considered system model can be formulated, in an
approximated closed-form expression, as [33, Eq. (3)]:
fN̄MT (u) = Pr{N̄MT = u} ≈ 3.5^4.5 Γ(u + 4.5) (λMT /λBS )^u / [Γ(4.5) Γ(u + 1) (3.5 + λMT /λBS )^(u+4.5)] .   (5)
Remark 8: The probability mass function in (5) is an
approximation because it is based on the widely used empirical
expression of the probability density function of the area of
the Voronoi cells in [36, Eq. (1)]. A precise formula for the
latter probability density function is available in [37]. It is,
however, not used in the present paper due to its mathematical
intractability, as recently remarked in [26]. Throughout the
rest of the paper, for simplicity, we employ the sign of
equality (“=”) in all the analytical formulas that rely solely
on the approximation in (5). This is to make explicit that
our analytical frameworks are not based on any other hidden
approximations.
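The pmf in (5) is straightforward to evaluate numerically. The short sketch below (ours) computes it in the log-domain for numerical stability and verifies, by truncating the series, the identity Σ_{u≥0} Pr{N̄MT = u}/(u + 1) = (λMT /λBS )^(−1) L (λMT /λBS ) that is invoked in Appendix A; the chosen density ratio is an illustrative assumption.

```python
import numpy as np
from math import lgamma, exp, log

ALPHA = 3.5

def pmf_nbar_mt(u, ratio):
    """Approximated pmf of N_bar_MT in Eq. (5), evaluated in the log-domain;
    ratio = lambda_MT / lambda_BS."""
    log_p = (4.5 * log(3.5) + lgamma(u + 4.5) + u * log(ratio)
             - lgamma(4.5) - lgamma(u + 1) - (u + 4.5) * log(3.5 + ratio))
    return exp(log_p)

def L(x):
    """L(x) = 1 - (1 + x/alpha)^(-alpha), see Table I."""
    return 1.0 - (1.0 + x / ALPHA) ** (-ALPHA)

ratio = 121e-6 / (1 / (np.pi * 250 ** 2))   # lambda_MT / lambda_BS (illustrative value)
u = np.arange(0, 2000)
p = np.array([pmf_nbar_mt(int(k), ratio) for k in u])
series = float(np.sum(p / (u + 1)))
print(f"sum_u Pr/(u+1) = {series:.6f}, (lambda_BS/lambda_MT)*L = {L(ratio) / ratio:.6f}")
```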
Based on (5), a formal mathematical formulation for the
PSE is given as follows.
Lemma 1: Let Load Model 1 be assumed. The PSE
(bit/sec/m2 ) can be formulated as follows:
PSE (γD , γA ) = E_N̄MT {PSE (γD , γA | N̄MT )}
 (a) = λMT BW log2 (1 + γD ) Pr{SIR ≥ γD , SNR ≥ γA } Pr{N̄MT = 0} + Σ_{u=1}^{+∞} [1/(u + 1)] λMT BW log2 (1 + γD ) Pr{SIR ≥ γD , SNR ≥ γA } Pr{N̄MT = u}
 = λMT BW log2 (1 + γD ) Pr{SIR ≥ γD , SNR ≥ γA } Σ_{u=0}^{+∞} Pr{N̄MT = u}/(u + 1) .   (6)
Proof : It follows from the definition of PSE [5], where (a)
originates from the fact that MT0 is scheduled for transmission
with unit probability if it is the only MT in the cell, while it is
scheduled for transmission with probability 1/(u + 1) if there
are other u MTs in the cell.
Lemma 2: Let Load Model 2 be assumed. The PSE
(bit/sec/m2 ) can be formulated as follows:
PSE (γD , γA ) = E_N̄MT {PSE (γD , γA | N̄MT )}
 (b) = Σ_{u=0}^{+∞} λMT [BW /(u + 1)] log2 (1 + γD ) Pr{SIR ≥ γD , SNR ≥ γA } Pr{N̄MT = u}
 = λMT BW log2 (1 + γD ) Pr{SIR ≥ γD , SNR ≥ γA } Σ_{u=0}^{+∞} Pr{N̄MT = u}/(u + 1) .   (7)
Proof : It follows from the definition of PSE [5], where (b)
originates from the fact that MT0 is scheduled for transmission
with unit probability but the bandwidth is equally allocated
among the MTs in the cell, i.e., each of the u + 1 MTs is
given a bandwidth equal to BW /(u + 1).
Remark 9: By comparing (6) and (7), we note that the same
PSE is obtained for both load models. This originates from
the fact that Pcov in (3) is independent of the number of
MTs in the cell. This property follows by direct inspection
of (4) and has been used in the proof of Lemma 1 and Lemma
2. As far as the first load model is concerned, this property
originates from the fact that a single MT is scheduled at
every transmission instance. It is, however, less intuitive for
the second load model. In this latter case, as mentioned in
Section II-D, Ptx and BW are viewed as continuous resources
by the BS’s scheduler. The transmit power per unit bandwidth
of both intended and interfering links is equal to Ptx /BW .
Regardless of the number of MTs available in the interfering
cells, MT0 “integrates” this transmit power per unit bandwidth
over the bandwidth allocated to it, which depends on the total
number of MTs in its own cell. Let the number of these MTs
be u+1. Thus, the receiver bandwidth of MT0 is BW /(u + 1).
This implies that the received power (neglecting path-loss and
fast-fading) of both intended and interfering links is Prx =
(Ptx /BW ) (BW /(u + 1)) = Ptx /(u + 1). As a result, the
number of MTs, u+1, cancels out in the SIR of (4). Likewise,
the received average SNR (neglecting the path-loss) is equal to
Prx /(N0 BW /(u + 1)) = (Ptx /(u + 1))/(N0 BW /(u + 1)) = Ptx /σN² , which is independent of the number of MTs, u + 1, and agrees with the definition of average SNR in (4). In the
next section, we show that the load models are not equivalent
in terms of network power consumption.
B. Closed-Form Expressions of PSE and Pgrid
In this section, we introduce new closed-form analytical
frameworks for computing the PSE. We provide, in addition,
closed-form expressions of the network power consumption
for the two load models under analysis. These results are
summarized in the following three propositions.
Let NMT be the number of MTs that lie in an arbitrary cell. The probability that the BS is in idle mode, PBS^(idle) , and in transmission mode, PBS^(tx) , can be formulated as follows [33, Prop. 1]:
PBS^(idle) = Pr{NMT = 0} = 1 − L (λMT /λBS ) ,   PBS^(tx) = Pr{NMT ≥ 1} = 1 − PBS^(idle) = L (λMT /λBS )   (8)
where L (·) is defined in Table II. Using (8), PSE and Pgrid
are given in the following propositions.
Proposition 1: Consider either Load Model 1 or Load Model
2. Assume notation and functions given in Tables I and II.
The PSE (bit/sec/m2 ) can be formulated, in closed-form, as
follows:
PSE (γD , γA ) = BW log2 (1 + γD ) · λBS L (λMT /λBS ) / [1 + ΥL (λMT /λBS )] · Q (λBS , Ptx , λMT /λBS ) .   (9)
Proof : See Appendix A.
Corollary 1: If γA = 0, i.e., the conventional definition of
Pcov is used, the PSE in (9) simplifies as follows:
PSE (γD , γA = 0) = BW log2 (1 + γD ) · λBS L (λMT /λBS ) / [1 + ΥL (λMT /λBS )] .   (10)
If, in addition, λMT /λBS ≫ 1, the PSE in (9) reduces to (1).
Proof : It follows because Q (·, ·, ·) = 1 if γA = 0 and L (λMT /λBS ≫ 1) → 1.
Remark 10: Corollary 1 substantiates the comments made
above in this section about the need of a new definition of PSE,
as well as the advantages of the proposed analytical formulation. In particular, (10) confirms that the PSE is independent of Ptx if γA = 0 and that the PSE is independent of Ptx and λBS if fully-loaded conditions hold, i.e., λMT /λBS ≫ 1.
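As an illustration of Proposition 1 and Corollary 1, the sketch below (ours) evaluates the closed-form PSE in (9) using the functions L(·), Υ and Q(·, ·, ·) of Table I, and approximates the conventional case (10) by letting γA tend to zero; the parameter values are illustrative and the helper names are our own.

```python
import numpy as np
from scipy.special import hyp2f1

ALPHA, BETA = 3.5, 3.5
BW, N0 = 20e6, 10 ** (-174 / 10) * 1e-3          # Hz, W/Hz
KAPPA = (4 * np.pi * 2.1e9 / 3e8) ** 2
SIGMA2 = BW * N0                                 # noise power [W]

def L(x):
    return 1.0 - (1.0 + x / ALPHA) ** (-ALPHA)

def pse(p_tx, lam_bs, lam_mt, gamma_d, gamma_a):
    """Closed-form PSE of Proposition 1, Eq. (9), in bit/sec/m^2."""
    ups = hyp2f1(-2 / BETA, 1, 1 - 2 / BETA, -gamma_d) - 1   # Upsilon of Table I
    l = L(lam_mt / lam_bs)
    eta = KAPPA * SIGMA2 * gamma_a
    q = 1.0 - np.exp(-np.pi * lam_bs * (p_tx / eta) ** (2 / BETA) * (1 + ups * l))
    return BW * np.log2(1 + gamma_d) * lam_bs * l / (1 + ups * l) * q

lam_bs, lam_mt = 1 / (np.pi * 250 ** 2), 121e-6   # illustrative densities
p_tx, g = 10 ** (43 / 10) * 1e-3, 10 ** 0.5       # 43 dBm, 5 dB
print("PSE with gamma_A = 5 dB :", pse(p_tx, lam_bs, lam_mt, g, g))
print("PSE with gamma_A -> 0   :", pse(p_tx, lam_bs, lam_mt, g, 1e-12))  # approximates (10)
```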
Proposition 2: Let Load Model 1 be assumed. Pgrid (Watt/m2 ) can be formulated as follows:
Pgrid^(1) = λBS (Ptx + Pcirc ) L (λMT /λBS ) + λBS Pidle (1 − L (λMT /λBS )) .   (11)
Proof : The network power consumption is obtained by
multiplying the average number of BSs per unit area, i.e., λBS ,
and the average power consumption of a generic BS, which
is Ptx + Pcirc if the BS operates in transmission mode, i.e.,
with probability L (λMT /λBS ), and Pidle if the BS operates
in idle mode, i.e., with probability 1 − L (λMT /λBS ).
Proposition 3: Let Load Model 2 be assumed. Pgrid (Watt/m2 ) can be formulated as follows:
Pgrid^(2) = λBS Ptx L (λMT /λBS ) + λMT Pcirc + λBS Pidle (1 − L (λMT /λBS )) .   (12)
Proof : It is similar to the proof of Proposition 2. The difference is that the power dissipation of a generic BS that operates in transmission mode is, in this case, equal to Ptx + Pcirc Σ_{u=1}^{+∞} u Pr{NMT = u} = Ptx + Pcirc (λMT /λBS ), where NMT is the number of MTs in the cell and the last equality follows from [33, Lemma 1].
Remark 11: The power consumption models obtained in
(11) and (12), which account for the transmit, circuits, and idle
power consumption of the BSs, have been used, under some
simplifying assumptions, in previous research works focused
on the analysis of the EE of cellular networks. Among the
many research works, an early paper that has adopted this
approach under the assumption of fully-loaded BSs and of
having a single active MT per cell is [8].
Remark 12: Since L (λMT /λBS ) ≤ λMT /λBS for every λMT /λBS ≥ 0, we conclude that Pgrid^(2) ≥ Pgrid^(1) by assuming
the same Ptx and λBS for both load models. This originates
from the fact that, in the present paper, we assume that the
circuits power consumption increases with the number of MTs
that are served by the BSs. It is unclear, however, the best
load model to be used from the EE standpoint, especially if
Ptx and λBS are optimized to maximize their respective EEs.
In other words, the optimal Ptx and λBS that maximize the
EE of each load model may be different, which may lead to
different optimal EEs. The trade-off between the optimal PSE
and the optimal EE is analyzed numerically in Section V for
both load models.
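The network power consumptions (11) and (12) are simple to compute once L(·) is available. The following sketch (ours, with illustrative parameter values converted from dBm to Watt) evaluates both load models and, in line with Remark 12, returns Pgrid^(2) ≥ Pgrid^(1) for the same Ptx and λBS.

```python
import numpy as np

ALPHA = 3.5

def L(x):
    return 1.0 - (1.0 + x / ALPHA) ** (-ALPHA)

def p_grid_lm1(p_tx, lam_bs, lam_mt, p_circ, p_idle):
    """Network power consumption of Load Model 1, Eq. (11), in Watt/m^2."""
    l = L(lam_mt / lam_bs)
    return lam_bs * (p_tx + p_circ) * l + lam_bs * p_idle * (1 - l)

def p_grid_lm2(p_tx, lam_bs, lam_mt, p_circ, p_idle):
    """Network power consumption of Load Model 2, Eq. (12), in Watt/m^2."""
    l = L(lam_mt / lam_bs)
    return lam_bs * p_tx * l + lam_mt * p_circ + lam_bs * p_idle * (1 - l)

lam_bs, lam_mt = 1 / (np.pi * 250 ** 2), 121e-6
p_tx = 10 ** (43 / 10) * 1e-3        # 43 dBm -> W
p_circ = 10 ** (51.14 / 10) * 1e-3   # 51.14 dBm -> W
p_idle = 10 ** (48.75 / 10) * 1e-3   # 48.75 dBm -> W
p1 = p_grid_lm1(p_tx, lam_bs, lam_mt, p_circ, p_idle)
p2 = p_grid_lm2(p_tx, lam_bs, lam_mt, p_circ, p_idle)
print(f"Pgrid LM-1 = {p1:.3e} W/m^2, Pgrid LM-2 = {p2:.3e} W/m^2 (LM-2 >= LM-1)")
```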
IV. SYSTEM-LEVEL EE OPTIMIZATION: FORMULATION AND SOLUTION
In this section, we formulate a system-level EE optimization problem and comprehensively analyze its properties. For convenience of analysis, we introduce the following auxiliary function (LM = Load Model):
M (λMT /λBS ) = 0 if LM-1 is assumed, and M (λMT /λBS ) = λMT /λBS − L (λMT /λBS ) if LM-2 is assumed.   (13)
A unified formulation of the EE (bit/Joule) for the cellular network under analysis is provided in (14), where the parameters of interest from the optimization standpoint, i.e., Ptx and λBS , are explicitly highlighted:
EE (Ptx , λBS ) = PSE / Pgrid = BW log2 (1 + γD ) L (λMT /λBS ) Q (λBS , Ptx , λMT /λBS ) / { [1 + ΥL (λMT /λBS )] [L (λMT /λBS ) (Ptx + Pcirc − Pidle ) + Pidle + M (λMT /λBS ) Pcirc ] } .   (14)
In the rest of the present paper, all the other parameters are assumed to be given.
A. Preliminaries
For ease of presentation, we report some lemmas that
summarize structural properties of the main functions that
constitute (14). Some lemmas are stated without proof because
they are obtained by simply studying the sign of the first-order
and second-order derivatives of the function with respect to
the variable of interest and by keeping all the other variables
fixed. Functions of interest for this section are given in Table
II. Also, we define ∆P = Pcirc − Pidle ≥ 0.
Lemma 3: The function L (λMT /λBS ) fulfills the following properties with respect to λBS (assuming λMT fixed): i) L (λMT /λBS ) ≥ 0 for λBS ≥ 0; ii) L (λMT /λBS ) = 1 if λBS → 0; iii) L (λMT /λBS ) = 0 if λBS → ∞; iv) L̇λBS (λMT /λBS ) ≤ 0 for λBS ≥ 0; v) L̈λBS (λMT /λBS ) ≤ 0 for λMT /λBS ≥ 2α/(α − 1) = 2.8; and vi) L̈λBS (λMT /λBS ) ≥ 0 for λMT /λBS ≤ 2α/(α − 1) = 2.8.
Lemma 4: As far as Load Model 2 is concerned, the function M (λMT /λBS ) fulfills the following properties with respect to λBS (assuming λMT fixed): i) M (λMT /λBS ) ≥ 0 for λBS ≥ 0; ii) M (λMT /λBS ) → ∞ if λBS → 0; iii) M (λMT /λBS ) = 0 if λBS → ∞; iv) ṀλBS (λMT /λBS ) ≤ 0 for λBS ≥ 0; and v) M̈λBS (λMT /λBS ) ≥ 0 for λBS ≥ 0.
Lemma 5: The function Q (λBS , Ptx , λMT /λBS ) fulfills the following properties with respect to Ptx : i) Q (λBS , Ptx , λMT /λBS ) ≥ 0 for Ptx ≥ 0; ii) Q (λBS , Ptx , λMT /λBS ) = 0 if Ptx → 0; iii) Q (λBS , Ptx , λMT /λBS ) = 1 if Ptx → ∞; iv) Q̇Ptx (λBS , Ptx , λMT /λBS ) ≥ 0 for Ptx ≥ 0; and v) Q̈Ptx (λBS , Ptx , λMT /λBS ) ≤ 0 for Ptx ≥ 0.
Proof : The result in v) follows from Q̈Ptx (·, ·, ·) in Table II, because iv) and β > 2 hold.
Lemma 6: The function Q (λBS , Ptx , λMT /λBS ) fulfills the following properties with respect to λBS (assuming λMT fixed): i) Q (λBS , Ptx , λMT /λBS ) ≥ 0 for λBS ≥ 0; ii) Q (λBS , Ptx , λMT /λBS ) = 0 if λBS → 0; iii) Q (λBS , Ptx , λMT /λBS ) = 1 if λBS → ∞; iv) Q̇λBS (λBS , Ptx , λMT /λBS ) ≥ 0 for λBS ≥ 0; and v) Q̈λBS (λBS , Ptx , λMT /λBS ) ≤ 0 for λBS ≥ 0.
TABLE II
SUMMARY OF MAIN AUXILIARY FUNCTIONS USED THROUGHOUT THE PAPER.
L (λMT /λBS ) = 1 − (1 + (1/α) λMT /λBS )^(−α)
M (λMT /λBS ) = λMT /λBS − L (λMT /λBS )
Q (λBS , Ptx , λMT /λBS ) = 1 − exp(−πλBS (Ptx /η)^(2/β) (1 + ΥL (λMT /λBS )))
Q̇Ptx (λBS , Ptx , λMT /λBS ) = πλBS (1/η)^(2/β) (2/β) (1 + ΥL (λMT /λBS )) Ptx^(2/β−1) exp(−πλBS (Ptx /η)^(2/β) (1 + ΥL (λMT /λBS )))
Q̈Ptx (λBS , Ptx , λMT /λBS ) = πλBS (1/η)^(2/β) (2/β) (1 + ΥL (λMT /λBS )) Ptx^(2/β−1) [−Q̇Ptx (λBS , Ptx , λMT /λBS )] + πλBS (1/η)^(2/β) (2/β) (2/β − 1) (1 + ΥL (λMT /λBS )) Ptx^(2/β−2) exp(−πλBS (Ptx /η)^(2/β) (1 + ΥL (λMT /λBS )))
L̇λBS (λMT /λBS ) = −(λMT /λBS²) (1 + (1/α) λMT /λBS )^(−(α+1))
ṀλBS (λMT /λBS ) = −(λMT /λBS²) [1 − (1 + (1/α) λMT /λBS )^(−(α+1))]
Q̇λBS (λBS , Ptx , λMT /λBS ) = π (Ptx /η)^(2/β) [1 + ΥL (λMT /λBS ) + ΥλBS L̇λBS (λMT /λBS )] exp(−πλBS (Ptx /η)^(2/β) (1 + ΥL (λMT /λBS )))
L̈λBS (λMT /λBS ) = (λMT /λBS³) (1 + (1/α) λMT /λBS )^(−(α+1)) [2 − (1 + α) (1/α) (λMT /λBS ) (1 + (1/α) λMT /λBS )^(−1)]
M̈λBS (λMT /λBS ) = 2 (λMT /λBS³) [1 − (1 + (1/α) λMT /λBS )^(−(α+1))] + (1 + α) (1/α) (λMT²/λBS⁴) (1 + (1/α) λMT /λBS )^(−(α+2))
SP (Ptx ) = L (λMT /λBS ) [Q (λBS , Ptx , λMT /λBS ) / Q̇Ptx (λBS , Ptx , λMT /λBS ) − (Ptx + ∆P)] − Pcirc M (λMT /λBS )
SD (λBS ) = Pcirc [L (λMT /λBS ) ṀλBS (λMT /λBS ) / L̇λBS (λMT /λBS ) − M (λMT /λBS )] + ΥL² (λMT /λBS ) (Ptx + ∆P) + ΥPcirc L² (λMT /λBS ) ṀλBS (λMT /λBS ) / L̇λBS (λMT /λBS ) − [L (λMT /λBS ) Q̇λBS (λBS , Ptx , λMT /λBS ) / (L̇λBS (λMT /λBS ) Q (λBS , Ptx , λMT /λBS ))] [1 + ΥL (λMT /λBS )] [L (λMT /λBS ) (Ptx + ∆P) + Pidle + Pcirc M (λMT /λBS )]
Proof : The result in iv) follows from Q̇λBS (·, ·, ·) in Table II because, for λBS ≥ 0, L (λMT /λBS ) + λBS L̇λBS (λMT /λBS ) ≥ 0. This latter inequality holds true because 1 + x (1 + 1/α) ≤ (1 + x/α)^(α+1) for x ≥ 0. The result in v) follows without explicitly computing Q̈λBS (·, ·, ·) because Q̇λBS (·, ·, ·) in Table II is the composition of two increasing and concave functions in λBS , i.e., the function in the square brackets in the first row and the exponential function in the second row.
Lemma 7: The EE in (14) fulfills the following properties
with respect to Ptx and λBS : i) EE (Ptx , λBS ) = 0 if Ptx → 0
or λBS → 0; and ii) EE (Ptx , λBS ) = 0 if Ptx → ∞ or
λBS → ∞.
Proof : This immediately follows from Lemmas 3-6.
B. Optimal Transmit Power Given the Density of the BSs
In this section, we analyze whether there exists an optimal
and unique transmit power, Ptx^(opt) , that maximizes the EE formulated in (14), while all the other parameters, including λBS , are fixed and given. In mathematical terms, the optimization problem can be formulated as follows:
max_{Ptx} EE (Ptx , λBS ) subject to Ptx ∈ [Ptx^(min) , Ptx^(max)]   (15)
where Ptx^(min) ≥ 0 and Ptx^(max) ≥ 0 are the minimum and maximum power budget of the BSs, respectively. One may assume, without loss of generality, Ptx^(min) → 0 and Ptx^(max) → ∞.
The following theorem completely characterizes the solution
of (15).
Theorem 1: Let SP (·) be the function defined in Table II. The EE in (14) is a unimodal and strictly pseudo-concave function in Ptx . The optimization problem in (15) has a unique solution given by Ptx^(opt) = max{Ptx^(min) , min{P∗tx , Ptx^(max)}}, where P∗tx is the only stationary point of the EE in (14) that is obtained as the unique solution of the following equation:
ĖEPtx (P∗tx , λBS ) = 0 ⇔ Pidle − SP (P∗tx ) = 0 ⇔ SP (P∗tx ) = Pidle .   (16)
Proof : See Appendix B.
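Theorem 1 guarantees that the EE in (14) is unimodal in Ptx for a fixed λBS, so its maximizer can also be located by any derivative-free line search instead of solving (16) explicitly. The sketch below (ours) applies a golden-section search to the EE of Load Model 2; the parameters, bounds, and function names are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np
from scipy.special import hyp2f1

ALPHA, BETA = 3.5, 3.5
BW, N0 = 20e6, 10 ** (-174 / 10) * 1e-3
KAPPA = (4 * np.pi * 2.1e9 / 3e8) ** 2
SIGMA2 = BW * N0
GAMMA_D = GAMMA_A = 10 ** 0.5
P_CIRC, P_IDLE = 10 ** (51.14 / 10) * 1e-3, 10 ** (48.75 / 10) * 1e-3
UPS = hyp2f1(-2 / BETA, 1, 1 - 2 / BETA, -GAMMA_D) - 1
LAM_MT = 121e-6

def ee(p_tx, lam_bs):
    """Energy efficiency of Eq. (14) in bit/Joule (Load Model 2: M = ratio - L)."""
    ratio = LAM_MT / lam_bs
    l = 1.0 - (1.0 + ratio / ALPHA) ** (-ALPHA)
    m = ratio - l
    eta = KAPPA * SIGMA2 * GAMMA_A
    q = 1.0 - np.exp(-np.pi * lam_bs * (p_tx / eta) ** (2 / BETA) * (1 + UPS * l))
    num = BW * np.log2(1 + GAMMA_D) * l * q
    den = (1 + UPS * l) * (l * (p_tx + P_CIRC - P_IDLE) + P_IDLE + m * P_CIRC)
    return num / den

def golden_max(f, lo, hi, iters=100):
    """Golden-section search for the maximizer of a unimodal function on [lo, hi]."""
    phi = (np.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) > f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

lam_bs = 1 / (np.pi * 250 ** 2)
p_opt = golden_max(lambda p: ee(p, lam_bs), 1e-5, 1e3)
print(f"P_tx^(opt) for the given lambda_BS: {10 * np.log10(p_opt / 1e-3):.2f} dBm")
```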
C. Optimal Density Given the Transmit Power of the BSs
In this section, we analyze whether there exists an optimal and unique density of BSs, λBS^(opt) , that maximizes the EE formulated in (14), while all the other parameters, including Ptx , are fixed and given. In mathematical terms, the optimization problem can be formulated as follows:
max_{λBS} EE (Ptx , λBS ) subject to λBS ∈ [λBS^(min) , λBS^(max)]   (17)
where λBS^(min) ≥ 0 and λBS^(max) ≥ 0 are the minimum and maximum allowed density of the BSs, respectively. One may assume, without loss of generality, λBS^(min) → 0 and λBS^(max) → ∞.
The following theorem completely characterizes the solution
of (17).
Theorem 2: Let SD (·) be the function defined in Table II. The EE in (14) is a unimodal and strictly pseudo-concave function in λBS . The optimization problem in (17) has a unique solution given by λBS^(opt) = max{λBS^(min) , min{λ∗BS , λBS^(max)}}, where λ∗BS is the only stationary point of the EE in (14) that is obtained as the unique solution of the following equation:
ĖEλBS (Ptx , λ∗BS ) = 0 ⇔ SD (λ∗BS ) − Pidle = 0 ⇔ SD (λ∗BS ) = Pidle .   (18)
Proof : See Appendix C.
D. On the Dependency of Optimal Transmit Power and Density of the BSs
The optimal transmit power and BSs’ density that maximize
the EE are obtained from the unique solutions of (16) and
(18), respectively. These equations, however, cannot be further
simplified and, therefore, explicit analytical expressions for Ptx^(opt) and λBS^(opt) cannot, in general, be obtained. This is
an inevitable situation when dealing with EE optimization
problems, and, indeed, a closed-form expression of the optimal
transmit power for simpler EE optimization problems does not
exist either [1]. In some special cases, the transmit power can
be implicitly expressed in terms of the Lambert-W function,
which, however, is the solution of a transcendental equation
[2]. Notable examples of these case studies include even basic
point-to-point communication systems without interference
[40]. Based on these considerations, it seems hopeless to
attempt finding explicit analytical expressions from (16) and
(18), respectively. However, thanks to the properties of the
EE function, i.e., unimodality and strict pseudo-concavity,
proved in Theorem 1 and Theorem 2, Ptx^(opt) and λBS^(opt) can be
efficiently computed with the aid of numerical methods that
are routinely employed to obtain the roots of non-linear scalar
equations, e.g., the Newton’s method [42]. For example, the
unique solutions of (16) and (18) may be obtained by using the
functions FSolve in Matlab and NSolve in Mathematica.
Theorem 1 and Theorem 2 are, however, of paramount importance, since they state that an optimum maximizer exists and
is unique.
Even though explicit analytical formulas for Ptx^(opt) and λBS^(opt) cannot be obtained, it is important to understand how
these optimal values change if any other system parameter
changes. For instance, two worthwhile questions to answer are:
“How does Ptx^(opt) change as a function of λBS ?” and “How does λBS^(opt) change as a function of Ptx ?”. These questions
are relevant to optimize the deployment of cellular networks
from the EE standpoint, since they unveil the inherent interplay
between transmit power and density of BSs discussed in
Section III and illustrated in Fig. 1. A general answer to these
two questions is provided in the following two propositions.
Proposition 4: Let P̄∗tx be the unique solution of (16) if λBS = λ̄BS . Let the optimal Ptx according to Theorem 1 be P̄tx^(opt) = max{Ptx^(min) , min{P̄∗tx , Ptx^(max)}}. Let λBS ≶ λ̄BS be another BSs’ density. Let ĖEPtx (·, ·) be the first-order derivative in (16). The following holds:
Ptx^(opt) ⋚ P̄tx^(opt) ⇔ ĖEPtx (P̄tx^(opt) , λBS ) ⋚ 0 .   (19)
Proof : Theorem 1 states that the EE function has a single stationary point that is its unique global maximizer. In mathematical terms, this implies ĖEPtx (Ptx , λBS ) > 0 if Ptx < P∗tx and ĖEPtx (Ptx , λBS ) < 0 if Ptx > P∗tx for every λBS ≥ 0. Therefore, the optimal transmit power needs to be increased (decreased) if the first-order derivative of the EE is positive (negative). Based on this, (19) follows because min {·, ·} and max {·, ·} are increasing functions.
Proposition 5: Let λ̄∗BS be the unique solution of (18) if Ptx = P̄tx . Let the optimal λBS according to Theorem 2 be λ̄BS^(opt) = max{λBS^(min) , min{λ̄∗BS , λBS^(max)}}. Let Ptx ≶ P̄tx be another transmit power. Let ĖEλBS (·, ·) be the first-order derivative in (18). The following holds:
λBS^(opt) ⋚ λ̄BS^(opt) ⇔ ĖEλBS (Ptx , λ̄BS^(opt) ) ⋚ 0 .   (20)
Proof : It follows from Theorem 2, similar to the proof of Proposition 4.
Remark 13: It is worth mentioning that the approach utilized
to prove Proposition 4 and Proposition 5 is applicable to study
the dependency of Ptx^(opt) and λBS^(opt) , respectively, with respect to any other system parameters. The findings in Proposition 4 and Proposition 5 are especially relevant for cellular network planning. Let us consider, e.g., (19). By simply studying the sign of the first-order derivative ĖEPtx (·, ·), one can identify, with respect to an optimally deployed cellular network, the set of BSs’ densities that would require to increase or decrease the transmit power while still operating at the optimum. In Section V, numerical examples are shown to highlight that Ptx^(opt) may either decrease or increase as λBS increases or decreases.
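Proposition 4 translates into a simple numerical recipe: given the transmit power that was optimal for a reference deployment, the sign of the first-order derivative of the EE at that power, evaluated for the new BS density, tells whether the power should be increased or decreased. The sketch below (ours) illustrates this sign test with a finite-difference derivative of the EE in (14) and illustrative parameters.

```python
import numpy as np
from scipy.special import hyp2f1

ALPHA, BETA = 3.5, 3.5
BW, N0 = 20e6, 10 ** (-174 / 10) * 1e-3
KAPPA = (4 * np.pi * 2.1e9 / 3e8) ** 2
GAMMA_D = GAMMA_A = 10 ** 0.5
P_CIRC, P_IDLE = 10 ** (51.14 / 10) * 1e-3, 10 ** (48.75 / 10) * 1e-3
UPS = hyp2f1(-2 / BETA, 1, 1 - 2 / BETA, -GAMMA_D) - 1
LAM_MT = 121e-6

def ee(p_tx, lam_bs):
    """EE of Eq. (14), Load Model 2."""
    ratio = LAM_MT / lam_bs
    l = 1.0 - (1.0 + ratio / ALPHA) ** (-ALPHA)
    m = ratio - l
    eta = KAPPA * BW * N0 * GAMMA_A
    q = 1.0 - np.exp(-np.pi * lam_bs * (p_tx / eta) ** (2 / BETA) * (1 + UPS * l))
    return (BW * np.log2(1 + GAMMA_D) * l * q
            / ((1 + UPS * l) * (l * (p_tx + P_CIRC - P_IDLE) + P_IDLE + m * P_CIRC)))

def d_ee_d_ptx(p_tx, lam_bs, h=1e-6):
    """Finite-difference first-order derivative of the EE with respect to Ptx."""
    return (ee(p_tx * (1 + h), lam_bs) - ee(p_tx * (1 - h), lam_bs)) / (2 * p_tx * h)

p_old = 10 ** (43 / 10) * 1e-3                  # previously optimized Ptx (assumption)
for r_cell in (150.0, 250.0, 400.0):            # candidate new inter-site distances
    lam_new = 1 / (np.pi * r_cell ** 2)
    sign = np.sign(d_ee_d_ptx(p_old, lam_new))
    action = "increase" if sign > 0 else "decrease"
    print(f"Rcell = {r_cell:5.0f} m -> derivative sign {sign:+.0f}: {action} Ptx")
```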
E. Joint Optimization of Transmit Power and Density of the
BSs
In Sections IV-B and IV-C, either λBS or Ptx are assumed
to be given, respectively. In practical applications, however, it is important to identify the optimal pair (Ptx^(opt) , λBS^(opt) )
TABLE III
ALTERNATING OPTIMIZATION OF THE EE.
Let Ptx ∈ [Ptx^(min) , Ptx^(max)] ; λBS ∈ [λBS^(min) , λBS^(max)] ;
Set λBS^(opt) = λBS ∈ [λBS^(min) , λBS^(max)] (initial guess); V = 0; ǫ > 0;
Repeat
 V0 = V ;
 P∗tx ← ĖEPtx (Ptx , λBS^(opt) ) = 0 (from (16)); Ptx^(opt) = max{Ptx^(min) , min{P∗tx , Ptx^(max)}} ;
 λ∗BS ← ĖEλBS (Ptx^(opt) , λBS ) = 0 (from (18)); λBS^(opt) = max{λBS^(min) , min{λ∗BS , λBS^(max)}} ;
 V = EE (Ptx^(opt) , λBS^(opt) ) (from (14));
Until |V − V0 | /V ≤ ǫ;
Return Ptx^(opt) ; λBS^(opt) .
that jointly maximizes the EE in (14). This joint optimization
problem can be formulated as follows:
max_{Ptx ,λBS} EE (Ptx , λBS ) subject to Ptx ∈ [Ptx^(min) , Ptx^(max)] , λBS ∈ [λBS^(min) , λBS^(max)]   (21)
where a notation similar to that used in (15) and (17) is
adopted.
In Theorem 1 and Theorem 2, we have solved the optimization problem formulated in (21) with respect to Ptx for a given
λBS and with respect to λBS for a given Ptx , respectively. By
leveraging these results, a convenient approach for tackling
(21) with respect to Ptx and λBS is to utilize the alternating
optimization method, which iteratively optimizes Ptx for a
given λBS and λBS for a given Ptx until convergence of the
EE in (14) within a desired level of accuracy [41, Proposition
2.7.1]. The algorithm that solves (21) based on the alternating
optimization method is reported in Table III. Its convergence
and optimality properties are summarized as follows.
Proposition 6: Let Ptx^(opt) (m), λBS^(opt) (m), and EE(m) be the Ptx , λBS and EE obtained from the algorithm in Table III at the mth iteration, respectively. The sequence EE(m) is monotonically increasing and converges. In addition, every limit point of the sequence (Ptx^(opt) (m), λBS^(opt) (m)) fulfills the Karush-Kuhn-Tucker (KKT) first-order optimality conditions of the problem in (21).
Proof : At the end of each iteration of the algorithm in Table
III, the value of EE does not decrease. The sequence EE(m),
hence, converges, because the EE in (14) is a continuous
function over the compact feasible set of the problem in
(21) and, thus, it admits a finite maximum by virtue of the
Weierstrass extreme value theorem [41]. From [41, Proposition
2.7.1], the alternating optimization method fulfills the KKT
optimality conditions, provided that i) the objective and constraint functions are differentiable, ii) each constraint function
depends on a single variable, and iii) each subproblem has a
unique solution. The first and second requirements follow by
direct inspection of (21). The third requirement is ensured by
Theorem 1 and Theorem 2.
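The alternating procedure of Table III can be prototyped by replacing the two root-finding steps (16) and (18) with unimodal line searches, which is legitimate because Theorem 1 and Theorem 2 guarantee a unique maximizer along each coordinate. The sketch below (ours) follows the same structure, i.e., optimize Ptx given λBS, then λBS given Ptx, until the relative EE improvement drops below ǫ; bounds, initial guess, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.special import hyp2f1

ALPHA, BETA = 3.5, 3.5
BW, N0 = 20e6, 10 ** (-174 / 10) * 1e-3
KAPPA = (4 * np.pi * 2.1e9 / 3e8) ** 2
GAMMA_D = GAMMA_A = 10 ** 0.5
P_CIRC, P_IDLE = 10 ** (51.14 / 10) * 1e-3, 10 ** (48.75 / 10) * 1e-3
UPS = hyp2f1(-2 / BETA, 1, 1 - 2 / BETA, -GAMMA_D) - 1
LAM_MT = 121e-6

def ee(p_tx, lam_bs):
    """EE of Eq. (14) (Load Model 2)."""
    ratio = LAM_MT / lam_bs
    l = 1.0 - (1.0 + ratio / ALPHA) ** (-ALPHA)
    m = ratio - l
    eta = KAPPA * BW * N0 * GAMMA_A
    q = 1.0 - np.exp(-np.pi * lam_bs * (p_tx / eta) ** (2 / BETA) * (1 + UPS * l))
    return (BW * np.log2(1 + GAMMA_D) * l * q
            / ((1 + UPS * l) * (l * (p_tx + P_CIRC - P_IDLE) + P_IDLE + m * P_CIRC)))

def golden_max(f, lo, hi, iters=100):
    """Golden-section maximization of a unimodal function on [lo, hi]."""
    phi = (np.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) > f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

def alternating_ee(p_bounds, lam_bounds, lam0, eps=1e-8, max_iter=50):
    """Alternating optimization in the spirit of Table III."""
    lam, v = lam0, 0.0
    for _ in range(max_iter):
        v0 = v
        p = golden_max(lambda x: ee(x, lam), *p_bounds)      # Ptx given lamBS
        lam = golden_max(lambda x: ee(p, x), *lam_bounds)    # lamBS given Ptx
        v = ee(p, lam)
        if v0 > 0 and abs(v - v0) / v <= eps:
            break
    return p, lam, v

p_opt, lam_opt, v = alternating_ee((1e-5, 1e3),
                                   (1 / (np.pi * 2000 ** 2), 1 / (np.pi * 10 ** 2)),
                                   lam0=1 / (np.pi * 500 ** 2))
r_cell = np.sqrt(1 / (np.pi * lam_opt))
print(f"Ptx ~ {10 * np.log10(p_opt / 1e-3):.2f} dBm, Rcell ~ {r_cell:.1f} m, EE ~ {v:.1f} bit/Joule")
```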
TABLE IV
SETUP OF PARAMETERS (UNLESS OTHERWISE STATED). IT IS WORTH NOTING THAT THE SETUP γD = γA CONSTITUTES JUST A CASE STUDY AND THAT THE MAIN FINDINGS OF THE PRESENT PAPER HOLD TRUE FOR EVERY γA > 0.
Parameter : Value
β : 3.5
κ = (4πfc /(3 · 10^8))² : fc = 2.1 GHz
N0 : −174 dBm/Hz
BW : 20 MHz
Pcirc : 51.14 dBm [8]
Pidle : 48.75 dBm [8]
Ptx : 43 dBm [8]
λBS = 1/(πR²cell ) BSs/m² : Rcell = 250 m
λMT = 1/(πR²MT ) = 121 MTs/km² : RMT = 51.29 m
γD = γA : 5 dB
Remark 14: The optimization problems in Theorem 1 and
Theorem 2 can be efficiently solved by using the Newton’s
method, which allows one to find the root of real-valued objective functions via multiple iterations of increasing accuracy
and at a super-linear (i.e., quadratic if the initial guess is
sufficiently close to the actual root) convergence rate [42].
The properties of convergence of the alternating maximization
algorithm in Table III to a stationary point of the objective
function in (21) are discussed in [41, Proposition 2.7.1]. Under
mild assumptions that hold for the specific problem at hand,
the algorithm in Table III is locally q-linearly convergent to
a local maximizer of the objective function provided that the
initial guess is sufficiently close to the actual root [43, Section
2]. Further details can be found in [43].
In Section V, numerical evidence of the global optimality
of the algorithm in Table III is given as well. In addition,
numerical results on the average (with respect to the initial
guess) number of iterations as a function of the tolerance of
convergence, ǫ > 0, are illustrated.
V. NUMERICAL RESULTS
Fig. 2. Optimal transmit power (a) and energy efficiency (b) versus Rcell . Solid lines: Optimum from Theorem 1. Markers: Optimum from a brute-force search of (15). Special case with β = 6.5 and λMT = 21 MTs/km².
Fig. 3. Energy efficiency versus the transmit power for Load Model 1 (a) and Load Model 2 (b). Solid lines: Framework from (14). Markers: Monte Carlo simulations.
Fig. 4. Energy efficiency versus Rcell for Load Model 1 (a) and Load Model 2 (b). Solid lines: Mathematical framework from (14). Markers: Monte Carlo simulations.
Fig. 5. Optimal transmit power (a) and energy efficiency (b) versus Rcell . Solid lines: Optimum from Theorem 1. Markers: Optimum from a brute-force search of (15). LM-1: Load Model 1 and LM-2: Load Model 2.
In this section, we show numerical results to validate the proposed analytical framework for computing the PSE and EE, as well as to substantiate the findings originating from
the analysis of the system-level EE optimization problems as
a function of the transmit power and density of the BSs. Unless
otherwise stated, the simulation setup is summarized in Table
IV. For ease of understanding, the BSs’ density is represented
via the inter-site distance (Rcell ) defined in Table IV. A similar
comment applies to the density of the MTs that is expressed
in terms of their average distance (RMT ). As far as the choice
of the setup of parameters is concerned, it is worth mentioning
that the power consumption model is in agreement with [8]
and [30]. The density of the MTs coincides with the average
density of inhabitants in France.
a) Validation Against Monte Carlo Simulations: In Figs.
3 and 4, we validate the correctness of (14) against Monte
Carlo simulations. Monte Carlo results are obtained by simulating several realizations, according to the PPP model, of
the cellular network and by empirically computing the PSE
according to its definition in (6) and (7), as well as the power
consumption based on the operating principle described in
the proofs of Proposition 2 and Proposition 3. It is worth
mentioning that, to estimate the PSE, only the definitions in
the first line of (6) and (7) are used. The results depicted
in Figs. 3 and 4 confirm the good accuracy of the proposed
mathematical approach. They highlight, in addition, the unimodal and pseudo-concave shape of the EE as a function
of the transmit power, given the BSs’ density, and of the
BSs’ density, given the transmit power. If the same transmit
power and BSs’ density are assumed for both load models,
we observe, as expected, that the first load model provides a
better EE than the second load model.
b) Validation of Theorem 1 and Theorem 2: In Figs.
5 and 6, we compare the optimal transmit power and BSs’
density obtained from Theorem 1 and Theorem 2, i.e., by
computing the unique zero of (16) and (18), respectively,
against a brute-force search of the optimum of (15) and (17), respectively. We observe the correctness of Theorem 1 and Theorem 2 for the load models analyzed in the present paper.
Fig. 6. Optimal density of BSs (Rcell ) (a) and energy efficiency (b) versus the transmit power. Solid lines: Optimum from Theorem 2. Markers: Optimum from a brute-force search of (17). LM-1: Load Model 1, LM-2: Load Model 2.
Fig. 7. Optimal transmit power (a), density of BSs (Rcell ) (b), and energy efficiency (c) versus the density of MTs (RMT ). Solid lines: Optimum from the algorithm in Table III. Markers: Optimum from a brute-force search of (21). LM-1: Load Model 1 and LM-2: Load Model 2.
Fig. 8. Optimal transmit power (a), density of BSs (Rcell ) (b), and energy efficiency (c) versus the reliability thresholds (γD = γA ). Solid lines: Optimum from the algorithm in Table III. Markers: Optimum from a brute-force search of (21). LM-1: Load Model 1 and LM-2: Load Model 2.
Figures 5 and 6, in addition, confirm two important remarks
that we have made throughout this paper. The first is that a
jointly optimal pair of transmit power and BSs’ density exists. This is
highlighted by the fact that the EE evaluated at the optimal
transmit power, given the BSs’ density, and at the optimal
BSs’ density, given the transmit power, is still a unimodal
and pseudo-concave function. This motivates one to use the
alternating optimization algorithm proposed in Section IV-E.
The second is related to the difficulty of obtaining an explicit
closed-form expression of the optimal transmit power as a
function of the BSs’ density and of the BSs’ density as a
function of the transmit power.
Fig. 9. Analysis of the EE vs. PSE trade-off. Solid lines: Optimum from the algorithm in Table III. Markers: Optimum from a brute-force search of (21). LM-1: Load Model 1 and LM-2: Load Model 2.
Figure 6(a), for example, clearly shows that the behavior of the optimal transmit power
is not monotonic as a function of the BSs’ density. This
is in contrast with heuristic optimization criteria based on
the coverage probability metric [32]. Figure 5(a), on the
other hand, provides more intuitive trends according to which
the optimal transmit power increases as the density of the
BSs decreases. This is, however, just a special case that is
parameter-dependent. A counter-example is, in fact, illustrated
in Fig. 2, where, for a different set of parameters, it is shown
that the optimal transmit power may increase, decrease and
then increase again as a function of the average inter-site
distance of the BSs (Rcell). In this case, the density of the MTs
coincides with the average density of inhabitants in Sweden
and a large path-loss exponent is assumed to highlight the
peculiar performance trend. These numerical examples clearly
substantiate the importance of Theorem 1 and Theorem 2, and
highlight the complexity of the optimization problem that is analyzed and successfully solved in the present paper.
Fig. 10. Number of iterations of the algorithm in Table III as a function of ǫ > 0. The number of iterations is averaged (15000 trials) over the initial guess λBS^(opt) = λBS ∈ [λBS^(min) , λBS^(max)]. (a) Load Model 1 and (b) Load Model 2. Setup: Rcell^(min) = 10 m, Rcell^(max) = 2000 m, Ptx^(min) = −20 dBm, Ptx^(max) = 60 dBm.
c) Validation of the Alternating Optimization Algorithm
in Table III: In Figs. 7 and 8, we provide numerical evidence
of the convergence of the alternating optimization algorithm
introduced in Section IV-E towards the global optimum of
the optimization problem formulated in (21). The study is
performed by computing the joint optimal transmit power and
BSs’ density as a function of the density of the MTs (Fig. 7)
and of the reliability thresholds (Fig. 8). We observe a very
good agreement between the algorithm in Table III and a brute-force search of the optimum of (21). Similar studies have been
conducted as a function of other system parameters, but they
are not reported in the present paper due to space limitations.
d) Comparison Between Load Model 1 and 2: With the
exception of Figs. 3 and 4, all the figures reported in this
section illustrate the achievable EE of the two load models
analyzed in the present manuscript when they operate at
their respective optima. Based on the obtained results, we
conclude that, for the considered system setup, the first load
model outperforms the second one in terms of EE. Figures
7 and 8 show, for example, that this may be obtained by
transmitting a higher power but, at the same time, by reducing
the deployment density of the BSs. It is worth mentioning
that, even though both load models provide the same PSE and
serve, in the long time-horizon, all the MTs of the network,
they have one main difference: the MTs under the first load
model experience a higher latency (i.e., the MTs experience
a longer delay before being served, since they are randomly
chosen among all the available MTs in the cell), since a single
MT is served at any time instance. We evince, as a result, that
the higher EE provided by the first load model is obtained
at the price of increasing the MTs’ latency. The analysis and
optimization of energy-efficient cellular networks with latency
constraints is, therefore, an important generalization of the
study conducted in the present paper.
e) Analysis of the EE vs. PSE Trade-Off: In Fig. 9,
we illustrate the trade-off between EE and PSE, which is
obtained by setting the transmit power and density of the
BSs at the optimal values that are obtained by solving the
optimization problem in (21) with the aid of the algorithm in
Table III. Figure 9 provides a different view of the comparison
between Load Model 1 and 2 introduced in Section II-D.
The Load Model 1 is a suitable choice to obtain a high EE at low-medium PSEs, while the Load Model 2 is a more convenient option to obtain a good EE at medium-high PSEs. Based on these results,
the optimization of the EE vs. PSE trade-off constitutes an
interesting generalization of the study carried out in the present
paper.
f) Convergence Analysis of the Maximization Algorithm
in Table III: Motivated by Remark 14, Fig. 10 shows the
average number of iterations of the alternating optimization
algorithm in Table III as a function of the convergence
accuracy ǫ. We observe that the algorithm necessitates more
iterations for Load Model 1. In general, however, we observe
that the number of iterations that are required to converge
within the defined convergence accuracy is relatively small.
VI. CONCLUSION
In the present paper, we have introduced a new closed-form analytical expression of the potential spectral efficiency
of cellular networks. Unlike currently available analytical
frameworks, we have shown that the proposed approach allows
us to account for the tight interplay between transmit power
and density of the base stations in cellular networks. Therefore,
the proposed approach is conveniently formulated for the
optimization of the network planning of cellular networks,
by taking into account important system parameters. We have
applied the new approach to the analysis and optimization of
the energy efficiency of cellular networks. We have mathematically proved that the proposed closed-form expression of the
energy efficiency is a unimodal and strictly pseudo-concave
function in the transmit power, given the density, and in the
density, given the transmit power of the base stations. Under
these assumptions, as a result, a unique transmit power and
density of the base stations exist, which can be obtained by
finding the unique zero of a simple non-linear function that
is provided in a closed-form expression. All mathematical
derivations and findings have been substantiated with the
aid of numerical simulations. We argue that the applications
of the proposed approach to the system-level modeling and
optimization of cellular networks are countless and go beyond
the formulation of energy efficiency problems.
Extensions and generalizations of the analytical and optimization frameworks proposed in the present paper include,
but are not limited to, the system-level analysis and optimization of i) the energy efficiency versus spectral efficiency
trade-off, ii) uplink cellular networks, iii) three-dimensional
network topologies with elevated base stations and spatial
blockages, iv) cache-enabled cellular networks, v) cellular
networks with network slicing, vi) cellular networks with
renewable energy sources and energy harvesting, and vii)
multi-tier (heterogeneous) cellular networks.
APPENDIX A
PROOF OF PROPOSITION 1
Under the assumption that MT0 is selected, from (3) and (4), we have:
Pcov (γD , γA ) = Pr{ [g0 /L0 · 1 (L0 ≤ Ptx /(γA σN²))] / [Σ_{BSi ∈ ΨBS^(I)} gi /Li · 1 (Li > L0 )] ≥ γD }
 = ∫_0^{Ptx /(γA σN²)} Pr{ [g0 /x] / [Σ_{BSi ∈ ΨBS^(I)} gi /Li · 1 (Li > x)] ≥ γD } fL0 (x) dx = ∫_0^{Ptx /(γA σN²)} G (γD ; x) fL0 (x) dx   (22)
where fL0 (x) = 2πλBS κ^(−2/β) β^(−1) x^(2/β−1) exp(−πλBS (x/κ)^(2/β)) is the probability density function of L0 that is obtained by applying the displacement theorem of PPPs [5, Eq. (21)]. It is worth mentioning that (22) is exact if the Crofton cell is considered, while it is an approximation if the typical cell is considered (see Remark 7 for further details).
The probability term, G (·; ·), in the integrand function of (22) can be computed as follows:
G (γD ; x) (a)= exp( −∫_x^{+∞} (1 + y/(xγD ))^(−1) 2πλBS^(tx) κ^(−2/β) β^(−1) y^(2/β−1) dy ) (b)= exp(−πλBS^(tx) (x/κ)^(2/β) Υ)   (23)
where (a) follows from the probability generating functional theorem of PPPs [3] by taking into account that, based on (8), the interfering BSs constitute a PPP of intensity equal to λBS^(tx) = λBS PBS^(tx) = λBS L (λMT /λBS ), and (b) follows by solving the integral. The intensity of the interfering PPP, λBS^(tx) , is obtained by taking into account that only the BSs that are in transmission mode contribute to the inter-cell interference. The analytical expression of λBS^(tx) is, in particular, obtained with the aid of the independent thinning theorem of PPPs, similar to [5] and [38]. The impact of the spatial correlation that exists among the BSs that operate in transmission mode [39] is, on the other hand, postponed to future research.
By inserting (23) in (22) and by applying some changes of variable, we obtain:

Pcov(γD, γA) = πλBS κ^{−2/β} ∫_0^{(Ptx/(γA σN²))^{2/β}} exp( −πλBS κ^{−2/β} (1 + Υ L(λMT/λBS)) z ) dz.   (24)

The proof follows from (6) and (7), with the aid of some simplifications, and by using the identity Σ_{u=0}^{+∞} (u + 1)^{−1} Pr{N̄MT = u} = (λMT/λBS)^{−1} L(λMT/λBS) [33, Proposition 2].
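As a sanity check on (24), the single integral can be evaluated numerically and compared against its elementary closed form. In the sketch below, the constant Υ and the load value L(λMT/λBS) are placeholders (their exact expressions are defined earlier in the paper), and all parameter values are illustrative.

```python
# Numerical check of the single-integral form of the coverage probability in (24).
# "upsilon" and "load" are placeholders for Upsilon and L(lambda_MT/lambda_BS),
# whose exact expressions are given earlier in the paper; values are illustrative.
import numpy as np
from scipy.integrate import quad

beta = 3.5          # path-loss exponent
kappa = 1e3         # path-loss constant (illustrative)
lam_bs = 1e-5       # BS density [1/m^2] (illustrative)
p_tx = 1.0          # transmit power [W]
gamma_a = 1.0       # power-sensing threshold (illustrative)
sigma_n2 = 1e-10    # noise power [W]
upsilon = 0.5       # placeholder for Upsilon
load = 0.8          # placeholder for L(lambda_MT/lambda_BS)

c = np.pi * lam_bs * kappa ** (-2.0 / beta)            # pi * lambda_BS * kappa^(-2/beta)
z_max = (p_tx / (gamma_a * sigma_n2)) ** (2.0 / beta)  # upper integration limit in (24)

# Direct quadrature of (24).
integrand = lambda z: np.exp(-c * (1.0 + upsilon * load) * z)
p_cov_quad = c * quad(integrand, 0.0, z_max)[0]

# The same integral evaluated in closed form, for comparison.
p_cov_closed = (1.0 - np.exp(-c * (1.0 + upsilon * load) * z_max)) / (1.0 + upsilon * load)

print(p_cov_quad, p_cov_closed)  # the two values should agree
```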
APPENDIX B
PROOF OF THEOREM 1
In this section, we are interested in the functions that depend on Ptx. For ease of writing, we adopt the simplified notation: Ptx → P, L(·) → L, M(·) → M, Q(·, Ptx, ·) → Q(P), its first-order derivative QPtx(·, Ptx, ·) → Q̇(P), Pcirc = Pc, Pidle = Pi, EE(Ptx, ·) → EE(P), and EEPtx(Ptx, ·) → ĖE(P). A similar notation is adopted for higher-order derivatives with respect to P.
The stationary points of (14) are the zeros of the first-order derivative of EE(·) with respect to P. From (14), we obtain ĖE(P) = 0 ⇔ Pi − SP(P) = 0, which can be re-written as follows:

Q(P)/Q̇(P) − P = ∆P + Pi/L + Pc M/L   (25)

where the left-hand side is denoted by Wleft(P) and the right-hand side by Wright.
With the aid of some algebraic manipulations and by exploiting Lemmas 3-6, the following holds: i) Wright ≥ 0 is a non-negative function that is independent of P, ii) Wleft(P) ≥ 0 is a non-negative and increasing function of P, i.e., Ẇleft(P) ≥ 0, since Q̇(P) ≥ 0 and Q̈(P) ≤ 0 from Lemma 5, and iii) Wleft(P → 0) = 0 and Wleft(P → ∞) = ∞. This implies that Wleft(·) and Wright intersect each other in just one point. Therefore, a unique stationary point, P*, exists. Also, ĖE(P) > 0 for P < P* and ĖE(P) < 0 for P > P*. Finally, by taking into account the constraints on the transmit power, it follows that the unique optimal maximizer of the EE is P(opt) = max{P(min), min{P*, P(max)}}, since P ∈ [P(min), P(max)]. This concludes the proof.
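The single-crossing argument above can also be illustrated numerically. In the sketch below, Q(P) is replaced by a hypothetical increasing and concave surrogate (it is not the expression used in the paper), and Wleft(P) = Q(P)/Q̇(P) − P is verified to start at zero, increase, and cross a constant Wright exactly once.

```python
# Numerical illustration of the single-crossing argument behind (25). Q(.) is a
# hypothetical increasing, concave surrogate (not the paper's expression), and
# W_right is a placeholder constant standing for Delta_P + P_i/L + P_c*M/L.
import numpy as np

def q(p):          # surrogate: increasing and concave for p >= 0, q(0) = 0
    return np.log1p(p)

def q_dot(p):      # its first derivative
    return 1.0 / (1.0 + p)

p_grid = np.linspace(1e-6, 50.0, 20000)
w_left = q(p_grid) / q_dot(p_grid) - p_grid
w_right = 3.0      # placeholder constant

assert np.all(np.diff(w_left) > 0), "W_left should be increasing"
crossings = np.sum(np.diff(np.sign(w_left - w_right)) != 0)
print("number of crossings:", crossings)                       # expected: 1
p_star = p_grid[np.argmin(np.abs(w_left - w_right))]
print("approximate stationary point P* =", p_star)
```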
APPENDIX C
PROOF OF THEOREM 2
In this section, we are interested in the functions that depend on λBS. For ease of writing, we adopt the simplified notation: λBS → λ, L(·/λBS) → L(λ), M(·/λBS) → M(λ), Q(λBS, ·, ·/λBS) → Q(λ), its first-order derivative QλBS(λBS, ·, ·/λBS) → Q̇(λ), Pcirc = Pc, Pidle = Pi, EE(·, λBS) → EE(λ), EEλBS(·, λBS) → ĖE(λ), and Ptx → P. Similar notation applies to higher-order derivatives.
The proof is split into two parts: i) λMT/λ ≥ 2.8 and ii) λMT/λ ≤ 2.8. This is necessary because, from Lemma 3, L(·) is concave in λ if λMT/λ ≥ 2.8 and convex in λ if λMT/λ ≤ 2.8.
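The role of the threshold 2.8 can be checked numerically. The sketch below assumes the load function has the form L(x) = 1 − (1 + x/α)^{−α} with α = 3.5, which is consistent with (30)-(31) and with [33] but is restated here as an assumption; a finite-difference second derivative of λ ↦ L(λMT/λ) then changes sign at λMT/λ ≈ 2.8.

```python
# Numerical check of the concavity/convexity switch of lambda -> L(lambda_MT/lambda)
# at lambda_MT/lambda = 2.8 (Lemma 3). The load function is an ASSUMED form,
# L(x) = 1 - (1 + x/alpha)^(-alpha) with alpha = 3.5, consistent with (30)-(31)
# and with [33]; it is not quoted verbatim from the paper.
import numpy as np

alpha = 3.5
lam_mt = 1.0                       # MT density (illustrative; only the ratio matters)
load = lambda lam: 1.0 - (1.0 + (lam_mt / lam) / alpha) ** (-alpha)

def second_derivative(f, x, h=1e-5):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

# Scan the ratio xi = lambda_MT / lambda around 2.8 and report the sign of L''(lambda).
for xi in (2.0, 2.5, 2.79, 2.81, 3.0, 4.0):
    lam = lam_mt / xi
    d2 = second_derivative(load, lam)
    print(f"xi = {xi:4.2f}: L''(lambda) {'>' if d2 > 0 else '<'} 0 "
          f"({'convex' if d2 > 0 else 'concave'})")
```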
a) Case Study λMT/λ ≥ 2.8: The stationary points of (14) are the zeros of the first-order derivative of EE(·) with respect to λ. From (14), we obtain ĖE(λ) = 0 ⇔ SD(λ) − Pi = 0. This stationary equation can be re-written as follows, where the left-hand side Pi plays the role of Wleft and the right-hand side, Wright(λ), collects the terms Wℓ(λ), ℓ = 1, ..., 5, defined below:

Pi = W1(λ) W2(λ) + Pc W3(λ) + W4(λ) + W5(λ)   (26)

with
W1(λ) = −[L(λ)/L̇(λ)] [Q̇(λ)/Q(λ)],
W2(λ) = [1 + ΥL(λ)] [L(λ)(P + ∆P) + Pi + Pc M(λ)],
W3(λ) = Ṁ(λ)L(λ)/L̇(λ) − M(λ),
W4(λ) = ΥL²(λ)(P + ∆P),
W5(λ) = ΥPc L²(λ) Ṁ(λ)/L̇(λ).
With the aid of some algebraic manipulations and by exploiting Lemmas 3-6, the following holds: i) Wleft ≥ 0 is a non-negative function that is independent of λ, ii) Wright(λ) ≥ 0 is a non-negative function of λ, since Wℓ(λ) ≥ 0 for ℓ = 1, ..., 5 if λMT/λ ≥ 2.8. In particular, W3(λ) ≥ 0 if λMT/λ ≥ 1.4 and Wℓ(λ) ≥ 0 for λ ≥ 0 if ℓ = 1, 2, 4, 5, and iii) Wright(λ → 0) = ∞ and Wright(λ → ∞) = 0. This implies that Wleft and Wright(·) intersect each other in just a single point if Wright is a decreasing function in λ, i.e., Ẇright(λ) ≤ 0 for λMT/λ ≥ 2.8. A sufficient condition for this to hold is that the Wℓ(·) for ℓ = 1, ..., 5 are decreasing functions in λ, i.e., Ẇℓ(λ) ≤ 0 for λMT/λ ≥ 2.8. This holds true and can be proved as follows. Ẇ2(λ) ≤ 0 for λ ≥ 0 and Ẇ4(λ) ≤ 0 for λ ≥ 0 because L(·) and M(·) are decreasing functions in λ (see Lemma 3 and Lemma 4). Ẇ3(λ) ≤ 0 for λ ≥ 0 and Ẇ5(λ) ≤ 0 for λ ≥ 0 immediately follow by inserting into them the first-order derivatives of L(·) and M(·) with respect to λ and with the aid of simple algebraic manipulations. Less evident is the behavior of W1(·) as a function of λ. Using some algebra, its first-order derivative satisfies the following:
Ẇ1(λ) Q²(λ) L̇²(λ) = A1(λ) + A2(λ) + A3(λ) + A4(λ)   (27)

where
A1(λ) = −L(λ)L̇(λ)Q(λ)Q̈(λ),
A2(λ) = −L̇²(λ)Q(λ)Q̇(λ),
A3(λ) = L(λ)L̇(λ)Q̇²(λ),
A4(λ) = L(λ)L̈(λ)Q(λ)Q̇(λ).
A sufficient condition for W1(·) to be a decreasing function in λ is that Aℓ(λ) ≤ 0 for ℓ = 1, ..., 4. From Lemmas 3-6, this can be readily proved. In particular, Aℓ(λ) ≤ 0 for λ ≥ 0 if ℓ = 1, 2, 3 and A4(λ) ≤ 0 for λMT/λ ≥ 2.8. Therefore, a unique stationary point, λ*, exists. Also, ĖE(λ) > 0 for λ < λ* and ĖE(λ) < 0 for λ > λ*. Finally, by taking into account the constraints on the density of BSs, it follows that the unique optimal maximizer of the EE is λ(opt) = max{λ(min), min{λ*, λ(max)}}, since λ ∈ [λ(min), λ(max)].
b) Case Study λMT/λ ≤ 2.8: For this case study, we leverage a notable result in fractional optimization [2]: the ratio of i) a non-negative, differentiable, and concave function to ii) a positive, differentiable, and convex function is a pseudo-concave function. It is, in addition, a unimodal function with a finite maximizer if the ratio vanishes when the variable of interest (i.e., the BSs' density) tends to zero and to infinity. For the case study under analysis, the EE in (14) can be re-written, by neglecting unnecessary constants that are independent of λ and do not affect the properties of the function, as follows:

EE(λ) = Q(λ) / { [1 + ΥL(λ)] [(P + ∆P) + Pi/L(λ) + Pc M(λ)/L(λ)] }.   (28)
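The unimodal behavior asserted for (28) can be visualized with surrogate functions. In the sketch below, Q(·), L(·), and M(·) are assumed placeholders with the qualitative properties required by Lemmas 3-7 (increasing and concave, load-like, and decreasing, respectively); they are not the paper's definitions, and the parameter values are illustrative.

```python
# Illustration of the unimodal shape of EE(lambda) in (28). Q(.), L(.), and M(.)
# are assumed surrogates (increasing/concave, load-like, and decreasing), NOT the
# definitions used in the paper; all parameter values are illustrative.
import numpy as np

alpha, lam_mt = 3.5, 1e-4          # alpha and MT density (illustrative)
upsilon, p_tot = 0.5, 1.5          # placeholder Upsilon and P + Delta_P
p_idle, p_circ = 0.2, 0.5          # idle and circuit powers (illustrative)

xi = lambda lam: lam_mt / lam
load = lambda lam: 1.0 - (1.0 + xi(lam) / alpha) ** (-alpha)       # L(lambda), surrogate
idle = lambda lam: (1.0 + xi(lam) / alpha) ** (-alpha)             # M(lambda), surrogate
q = lambda lam: np.log1p(lam / 1e-6)                               # Q(lambda), surrogate

def ee(lam):
    den = (1.0 + upsilon * load(lam)) * (p_tot + p_idle / load(lam)
                                         + p_circ * idle(lam) / load(lam))
    return q(lam) / den

lam_grid = np.logspace(-7, -1, 4000)
vals = ee(lam_grid)
sign_changes = np.sum(np.diff(np.sign(np.diff(vals))) != 0)
print("interior extrema:", sign_changes)              # a single maximum is expected
print("argmax density:", lam_grid[np.argmax(vals)])
```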
From Lemma 6, the numerator of (28) is a non-negative,
differentiable, increasing and concave function for λ ≥ 0.
From Lemma 7, the EE in (28) tends to zero if λ → 0
and λ → ∞. Therefore, a sufficient condition to prove the
unimodality and pseudo-concavity of the EE is to show that
the denominator of (28) is a positive, differentiable and convex
function in λ for λMT /λ ≤ 2.8. From Lemma 3 and Lemma 4,
the first two properties are immediately verified. To complete
the proof, the convexity of the denominator of (28) needs to
be analyzed.
Let Den(·) denote the denominator of (28), and let us introduce the function K(λ) = 2L̇²(λ)/L(λ) − L̈(λ). The second-order derivative of Den(·), as a function of λ, is as follows:
D̈en(λ) = D1(λ) + D2(λ) + D3(λ) D4(λ) + D5(λ) + D6(λ) + D7(λ)   (29)

with
D1(λ) = Υ(P + ∆P)L̈(λ),
D2(λ) = ΥPc M̈(λ),
D3(λ) = −2Pc λMT² λ^{−3} L̇(λ),
D4(λ) = L(λ) + λL̇(λ),
D5(λ) = Pc M(λ) L^{−2}(λ) K(λ),
D6(λ) = Pc L^{−2}(λ) K(λ),
D7(λ) = Pi L^{−2}(λ) K(λ).
A sufficient condition for proving that Den(·) is a convex function in λ is to show that Dℓ(λ) ≥ 0 for ℓ = 1, 2, ..., 7 and K(λ) ≥ 0 if λMT/λ ≤ 2.8. This can be proved as follows. D1(λ) ≥ 0 for λMT/λ ≤ 2.8 follows from Lemma 3. Dℓ(λ) ≥ 0 for ℓ = 2, 5 if λ ≥ 0 follows from Lemma 4. Dℓ(λ) ≥ 0 for ℓ = 3, 6, 7 if λ ≥ 0 follows from Lemma 3. D4(·) and K(·) require deeper analysis. Define ξ = λMT/λ. D4(·) and K(·) are non-negative functions of ξ if:

D4(ξ) ≥ 0 ⇔ 1 − (1 + ξ/α)^{−α} − ξ(1 + ξ/α)^{−(α+1)} ≥ 0   (30)

K(ξ) ≥ 0 ⇔ (1 + ξ/α)^{−α} [2 + (1 + 1/α)ξ] [2 − (1 − 1/α)ξ]^{−1} ≥ 1.   (31)

By direct inspection of (30) and (31), it is not difficult to prove the following: i) D4(ξ → 0) = 0 and Ḋ4(ξ) ≥ 0 for ξ ≥ 0, and ii) the left-hand side of the inequality in (31) equals 1 as ξ → 0 and is increasing in ξ for ξ ≤ 2.8. These two conditions imply D4(λ) ≥ 0 for λ ≥ 0 and K(λ) ≥ 0 for λMT/λ ≤ 2.8. This concludes the proof.
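Both sufficient conditions can also be verified numerically on a grid of ξ values. The sketch below uses α = 3.5 (an assumption consistent with the 2.8 threshold) and evaluates the expressions in (30) and (31) as reconstructed above.

```python
# Numerical verification of the sufficient conditions (30) and (31) for alpha = 3.5
# (the value of alpha is an assumption here, consistent with the 2.8 threshold).
import numpy as np

alpha = 3.5

def d4(xi):
    # Left-hand side of (30): should be >= 0 for all xi >= 0.
    return 1.0 - (1.0 + xi / alpha) ** (-alpha) - xi * (1.0 + xi / alpha) ** (-(alpha + 1.0))

def k_cond(xi):
    # Left-hand side of (31): should be >= 1 for 0 <= xi <= 2.8.
    return ((1.0 + xi / alpha) ** (-alpha)
            * (2.0 + (1.0 + 1.0 / alpha) * xi)
            / (2.0 - (1.0 - 1.0 / alpha) * xi))

xi_all = np.linspace(0.0, 10.0, 1001)
xi_small = np.linspace(0.0, 2.79, 1001)   # stay strictly below 2.8 to avoid the pole

print("min of (30) over xi in [0, 10]:", d4(xi_all).min())          # expected >= 0
print("min of (31) over xi in [0, 2.79]:", k_cond(xi_small).min())  # expected >= 1 (to numerical precision)
```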
REFERENCES
[1] S. Buzzi, Chih-Lin I, T. E. Klein, H. V. Poor, C. Yang, and A. Zappone,
“A survey of energy-efficient techniques for 5G networks and challenges
ahead”, IEEE J. Sel. Areas Commun., vol. 34, no. 4, pp. 697-709, Apr.
2016.
[2] A. Zappone and E. Jorswieck, “Energy efficiency in wireless networks
via fractional programming theory”, Found. Trends Commun. Inf. Theory, vol. 11, no. 3-4, pp. 185-396, Jun. 2015
[3] J. G. Andrews, F. Baccelli, and R. K. Ganti, “A tractable approach to
coverage and rate in cellular networks”, IEEE Trans. Commun., vol. 59,
no. 11, pp. 3122-3134, Nov. 2011.
[4] H. S. Dhillon, R. K. Ganti, F. Baccelli, and J. G. Andrews, “Modeling
and analysis of K-tier downlink heterogeneous cellular networks”, IEEE
J. Sel. Areas Commun., vol. 30, no. 3, pp. 550-560, Apr. 2012.
[5] M. Di Renzo, W. Lu, and P. Guan, “The intensity matching approach:
A tractable stochastic geometry approximation to system-level analysis
of cellular networks”, IEEE Trans. Wireless Commun., vol. 15, no. 9,
pp. 5963-5983, Sep. 2016.
[6] W. Lu and M. Di Renzo, “Stochastic geometry modeling of cellular networks: Analysis, simulation and experimental validation”, ACM MSWiM,
pp. 179-188, Nov. 2015.
[7] H. ElSawy, A. K. Sultan-Salem, M.-S. Alouini, and M. Z. Win, “Modeling and analysis of cellular networks using stochastic geometry: A tutorial”, IEEE Commun. Surveys Tuts., vol. 19, no. 1, pp. 167-203, Jan. 2017.
[8] Y. S. Soh, T. Q. S. Quek, M. Kountouris, and H. Shin, “Energy efficient
heterogeneous cellular networks”, IEEE J. Sel. Areas Commun., vol. 31,
no. 5, pp. 840-850, May 2013.
[9] T. Kwon, “Spatial topology adjustment for minimizing multicell network
power consumption”, arXiv:1403.4346, Mar. 2014. [Online]. Available:
https://arxiv.org/pdf/1403.4346.pdf.
[10] S.-R. Cho and W. Choi, “Energy-efficient repulsive cell activation for
heterogeneous cellular networks”, IEEE J. Sel. Areas Commun., vol. 31,
no. 5, pp. 870-882, May 2013.
[11] D. Cao, S. Zhou, and Z. Niu, “Optimal combination of base station densities for energy-efficient two-tier heterogeneous cellular networks”, IEEE Trans. Wireless Commun., vol. 12, no. 9, pp. 4350-4362, Sep. 2013.
[12] S. Luo, R. Zhang, and T. J. Lim, “Optimal power and range adaptation
for green broadcasting”, IEEE Trans. Wireless Commun., vol. 12, no. 9,
pp. 4592-4603, Sep. 2013.
[13] M. Wildemeersch, T. Q. S. Quek, C. H. Slump, and A. Rabbachin,
“Cognitive small cell networks: Energy efficiency and trade-offs”, IEEE
Trans. Commun., vol. 61, no. 9, pp. 4016-4028, Sep. 2013.
[14] C. Li, J. Zhang, and K. B. Letaief, “Throughput and energy efficiency
analysis of small cell networks with multi-antenna base stations”, IEEE
Trans. Wireless Commun., vol. 13, no. 5, pp. 2505-2517, May 2014.
[15] Y. Huang, X. Zhang, J. Zhang, J. Tang, Z. Su, and W. Wang, “Energy-efficient design in heterogeneous cellular networks based on large-scale user behavior constraints”, IEEE Trans. Wireless Commun., vol. 13, no. 9, pp. 4746-4757, Sep. 2014.
[16] C.-Y. Chang, W. Liao, H.-Y. Hsieh, and D.-S. Shiu, “On optimal cell
activation for coverage preservation in green cellular networks”, IEEE
Trans. Mob. Comp., vol. 13, no. 11, pp. 2580-2591, Nov. 2014.
[17] J. Peng, P. Hong, and K. Xue, “Energy-aware cellular deployment
strategy under coverage performance constraints”, IEEE Trans. Wireless
Commun., vol. 14, no. 1, pp. 69-80, Jan. 2015.
[18] X. Ge, B. Yang, J. Ye, G. Mao, C.-X. Wang, and T. Han, “Spatial spectrum and energy efficiency of random cellular networks”, IEEE Trans. Commun., vol. 63, no. 3, pp. 1019-1030, Mar. 2015.
[19] C. Liu, B. Natarajan, and H. Xia, “Small cell base station sleep strategies for energy efficiency”, IEEE Trans. Veh. Technol., vol. 65, no. 3, pp. 1652-1661, Mar. 2016.
[20] T. Zhang, J. Zhao, L. An, and D. Liu, “Energy efficiency of base station
deployment in ultra dense hetnets: A stochastic geometry analysis”,
IEEE Wireless Commun. Lett., vol. 5, no. 2, pp. 184-187, Apr. 2016.
[21] J. B. Rao and A. O. Fapojuwo, “An analytical framework for evaluating
spectrum/energy efficiency of heterogeneous cellular networks”, IEEE
Trans. Veh. Techol., vol. 65, no. 5, pp. 3568-3584, May 2016.
[22] Z. Chen, L. Qiu, and X. Liang, “Area spectral efficiency analysis and energy consumption minimization in multiantenna Poisson distributed networks”, IEEE Trans. Wireless Commun., vol. 15, no. 7, pp. 4862-4874, Jul. 2016.
[23] L. Li, M. Peng, C. Yang, and Y. Wu, “Optimization of base-station
density for high energy-efficient cellular networks with sleeping strategies”, IEEE Trans. Wireless Commun., vol. 65, no. 9, pp. 7501-7514,
Sep. 2016.
[24] A. Shojaeifard, K.-K. Wong, K. A. Hamdi, E. Alsusa, D. K. C. So, and
J. Tang, “Stochastic geometric analysis of energy-efficient dense cellular
networks”, IEEE Access, vol. 5, no. 3, pp. 455-469, Mar. 2017.
[25] P. Chang and G. Miao, “Energy and spectral efficiency of cellular
networks with discontinuous transmission”, IEEE Trans. Wireless Commun., vol. 16, no. 5, pp. 2991-3002, May 2017.
[26] A. Alam, P. Mary, J.-Y. Baudais, and X. Lagrange, “Asymptotic analysis
of area spectral efficiency and energy efficiency in PPP networks with
SLNR precoder”, IEEE Trans. Commun., vol. 65, no. 7, pp. 3172–3185,
July 2017.
[27] S. Mukherjee, Analytical Modeling of Heterogeneous Cellular Networks:
Geometry, Coverage, and Capacity, Cambridge University Press, 1st ed.,
Feb. 2014.
[28] F. Baccelli and B. Blaszczyszyn, Stochastic Geometry and Wireless
Networks, Part I: Theory, Now Publishers, Sep. 2009.
[29] J. Liu, M. Sheng, L. Liu, and J. Li, “Effect of densification on cellular network performance with bounded pathloss model”, IEEE Commun. Lett., vol. 21, no. 2, pp. 346-439, Feb. 2017.
[30] G. Auer et al., “How much energy is needed to run a wireless network?”,
IEEE Wireless Commun. Mag., vol. 18, no. 5, pp. 40-49, Oct. 2011.
[31] 3GPP TS 36.304, “3rd generation partnership project; technical specification group radio access network; evolved universal terrestrial radio access (E-UTRA); user equipment (UE) procedures in idle mode”. [Online]. Available: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=2432.
[32] S. Bhaumik, G. Narlikar, S. Chattopadhyay, and S. Kanugovi, “Breathe
to stay cool: Adjusting cell sizes to reduce energy consumption”, ACM
SIGCOMM, Green Networking Workshop, pp. 41-46, Aug. 2010.
[33] S. M. Yu and S.-L. Kim, “Downlink capacity and base station density
in cellular networks”, IEEE Workshop on Spatial Stochastic Models for Wireless Networks, pp. 1-7, May 2013. [Online]. Available:
http://arxiv.org/pdf/1109.2992.pdf.
[34] B. Yu, S. Mukherjee, H. Ishii, and L. Yang, “Dynamic TDD support in the LTE-B enhanced local area architecture”, IEEE Global Commun. Conf. – Workshops, pp. 585-591, Dec. 2012.
[35] A. Rajanna and M. Haenggi, “Enhanced cellular coverage and throughput using rateless codes”, IEEE Trans. Commun., vol. 65, no. 5, pp.
1899-1912, May 2017.
[36] J.-S. Ferenc and Z. Neda, “On the size distribution of Poisson Voronoi cells”, Physica A: Statistical Mechanics and its Applications, vol. 385, no. 2, pp. 518-526, 2007.
[37] P. Calka, “Precise formulae for the distributions of the principal geometric characteristics of the typical cells of a two-dimensional Poisson Voronoi tessellation and a Poisson line process”, Adv. Appl. Prob., vol. 35, pp. 551-562, Sep. 2003.
[38] H. S. Dhillon, R. K. Ganti, J. G. Andrews, “Load-aware modeling
and analysis of heterogeneous cellular networks”, IEEE Trans. Wireless
Commun., vol. 12, no. 4, pp. 1666-1677, Apr. 2013.
[39] A. Shojaeifard, K. A. Hamdi, E. Alsusa, D. K. C. So, and J. Tang, “A unified model for the design and analysis of spatially-correlated load-aware HetNets”, IEEE Trans. Commun., vol. 62, no. 1, pp. 1-16, Nov. 2014.
[40] C. Isheden, Z. Chong, E. Jorswieck, and G. Fettweis, “Framework for
link-level energy efficiency optimization with informed transmitter”,
IEEE Trans. Wireless Commun., vol. 11, no. 8, pp. 2946-2957, Aug.
2012.
[41] D. P. Bertsekas, Nonlinear Programming, Athena Scientific, Sep. 1999.
[42] J. P. Crouzeix and J. A. Ferland, “Algorithms for generalized fractional
programming”, Springer Mathematical Programming, vol. 52, no. 1-3,
pp. 191-207, May 1991.
[43] J. C. Bezdek and R. J. Hathaway, “Convergence of alternating optimization”, J. Neural, Parallel & Scientific Computations, vol. 11, no. 4, pp.
351-368, Dec. 2003.