Low Eigenvalues of Laplacian Matrices of Large Random Graphs

Tiefeng Jiang (School of Statistics, University of Minnesota, 224 Church Street, MN 55455; tjiang@stat.umn.edu. Supported in part by NSF #DMS-0449365.)

Abstract

For each $n \ge 2$, let $A_n = (\xi_{ij})$ be an $n \times n$ symmetric matrix with diagonal entries equal to zero and the entries in the upper triangular part being independent with mean $\mu_n$ and standard deviation $\sigma_n$. The Laplacian matrix is defined by $\Delta_n = \mathrm{diag}\bigl(\sum_{j=1}^n \xi_{ij}\bigr)_{1\le i\le n} - A_n$. In this paper, we obtain the laws of large numbers for $\lambda_{n-k}(\Delta_n)$, the $(k+1)$-th smallest eigenvalue of $\Delta_n$, through the study of the order statistics of weakly dependent random variables. Under certain moment conditions on the $\xi_{ij}$'s, we prove that, as $n \to \infty$,
\[
\text{(i)}\qquad \frac{\lambda_{n-k}(\Delta_n) - n\mu_n}{\sigma_n\sqrt{n\log n}} \to -\sqrt{2} \quad a.s.
\]
for any $k \ge 1$. Further, if $\{\Delta_n;\ n \ge 2\}$ are independent with $\mu_n = 0$ and $\sigma_n = 1$, then
\[
\text{(ii)}\qquad \text{the sequence } \Bigl\{\frac{\lambda_{n-k}(\Delta_n)}{\sqrt{n\log n}};\ n \ge 2\Bigr\} \text{ is dense in } \Bigl[-\sqrt{2 + 2(k+1)^{-1}},\ -\sqrt{2}\Bigr] \quad a.s.
\]
for any $k \ge 0$. In particular, (i) holds for the Erdös-Rényi random graphs. Similar results are also obtained for the largest eigenvalues of $\Delta_n$.

Key Words: random graph, random matrix, Laplacian matrix, extreme eigenvalues, order statistics.
AMS (2000) subject classifications: 05C80, 05C50, 15A52, 60B10, 62G30.

1 Introduction

Let $G$ be a non-oriented graph with $n$ different vertices $\{v_1, \cdots, v_n\}$. We assume that $G$ has no loops or multiple edges. Its Laplacian matrix $\Delta = (l_{ij})_{n\times n}$ is defined by
\[
l_{ij} =
\begin{cases}
\deg(v_i), & \text{if } i = j;\\
-1, & \text{if } i \ne j \text{ and } v_i \text{ is adjacent to } v_j;\\
0, & \text{otherwise},
\end{cases}
\]
where $\deg(v_i)$ is the degree of $v_i$, that is, the number of vertices $v_j$, $j \ne i$, that share an edge with $v_i$. Evidently, $\Delta = \mathrm{diag}(\deg(v_i))_{1\le i\le n} - A$, where $A = (\epsilon_{ij})_{n\times n}$ is the adjacency matrix of the graph, with $\epsilon_{ij} = 1$ if vertices $v_i$ and $v_j$ are connected and $\epsilon_{ij} = 0$ otherwise. In other words, the Laplacian matrix of the graph $G$ is the difference of the degree matrix and the adjacency matrix. The matrix $\Delta$ is sometimes also called the admittance matrix or the Kirchhoff matrix in the literature.

For a random graph $G$, the corresponding $\Delta$ is a random matrix. For instance, suppose $G$ is an Erdös-Rényi random graph $G(n, p_n)$, that is, a (non-oriented) graph with $n$ vertices in which, for each pair of vertices $v_i$ and $v_j$ with $i \ne j$, an edge between them is formed randomly with chance $p_n$ and independently of other edges; see [13, 14]. Then
\[
\Delta_n =
\begin{pmatrix}
\sum_{j\ne 1}\xi^{(n)}_{1j} & -\xi^{(n)}_{12} & \cdots & -\xi^{(n)}_{1n}\\
-\xi^{(n)}_{21} & \sum_{j\ne 2}\xi^{(n)}_{2j} & \cdots & -\xi^{(n)}_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
-\xi^{(n)}_{n1} & -\xi^{(n)}_{n2} & \cdots & \sum_{j\ne n}\xi^{(n)}_{nj}
\end{pmatrix},
\tag{1.1}
\]
where $\{\xi^{(n)}_{ij};\ 1 \le i < j \le n\}$ are independent random variables with $P(\xi^{(n)}_{ij} = 1) = 1 - P(\xi^{(n)}_{ij} = 0) = p_n$ for all $1 \le i < j \le n$ and $n \ge 2$.

REMARK 1.1 By construction zero is an eigenvalue of $\Delta_n$ (the corresponding eigenvector is $\mathbf{1} = (1, \cdots, 1)^T \in \mathbb{R}^n$), and if the $\xi_{ij}$'s are all non-negative then $-\Delta_n$ is the generator of a Markov process on $\{1, 2, \cdots, n\}$; hence all the eigenvalues of $\Delta_n$ are non-negative, i.e., $\Delta_n$ is non-negative definite.

The Kirchhoff theorem from [20] establishes the relationship between the number of spanning trees of $G$ and the eigenvalues of $\Delta_n$; the second smallest eigenvalue relates to the algebraic connectivity of the graph, see, e.g., [16].

Bryc, Dembo and Jiang [6] and Ding and Jiang [11] show that the empirical distribution of suitably normalized eigenvalues of $\Delta_n$ converges to a deterministic probability distribution, which is the free convolution of the semi-circle law and the standard normal distribution. Since the second smallest eigenvalue of $\Delta_n$ stands for the algebraic connectivity of the graph $G$ as mentioned earlier, our purpose here is to study the properties of the second smallest eigenvalue as well as the other low eigenvalues of $\Delta_n$.

For weighted random graphs, $\{\xi^{(n)}_{ij};\ 1 \le i < j \le n\}$ in (1.1) are independent random variables, each of which is the product of a Bernoulli random variable $\mathrm{Ber}(p_n)$ and a nice random variable, for instance, a Gaussian random variable or a random variable with all finite moments (see, e.g., [18, 19]). For the sign model studied in [3, 19, 25, 26], the $\xi^{(n)}_{ij}$ are independent random variables taking three values: $0, 1, -1$. From this perspective, to make our results more applicable, we investigate the spectral properties of $\Delta_n$ under the following more general conditions on the entries $\{\xi^{(n)}_{ij};\ 1 \le i < j \le n\}$.

Assumption $A_p$: Let $\{\xi^{(n)}_{ij};\ 1 \le i \ne j \le n,\ n \ge 2\}$ be random variables defined on the same probability space such that $\{\xi^{(n)}_{ij};\ 1 \le i < j \le n\}$ are independent for each $n \ge 2$ (not necessarily identically distributed) with $\xi^{(n)}_{ij} = \xi^{(n)}_{ji}$, $E(\xi^{(n)}_{ij}) = \mu_n$, $\mathrm{Var}(\xi^{(n)}_{ij}) = \sigma_n^2 > 0$ for all $1 \le i < j \le n$ and $n \ge 2$, and $\sup_{1\le i<j\le n,\, n\ge 2} E\bigl|(\xi^{(n)}_{ij} - \mu_n)/\sigma_n\bigr|^p < \infty$.

We will state our main results next. Before that, some notation is needed. Given an $n \times n$ symmetric matrix $M$, let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ be the eigenvalues of $M$; for clarity these are sometimes written as $\lambda_1(M) \ge \lambda_2(M) \ge \cdots \ge \lambda_n(M)$. For an $n \times n$ matrix $M$, we use $\|M\| = \sup_{x\in\mathbb{R}^n:\ \|x\|_2 = 1}\|Mx\|_2$ to denote its spectral norm, where $\|x\|_2 = \sqrt{x_1^2 + \cdots + x_n^2}$ for $x = (x_1, \cdots, x_n)' \in \mathbb{R}^n$.
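These definitions are easy to experiment with. The following minimal sketch (ours, not part of the paper; $n = 200$ and $p = 0.3$ are arbitrary choices) builds $\Delta_n$ for one sample of $G(n, p)$ and confirms the observations in Remark 1.1:

```python
import numpy as np

rng = np.random.default_rng(0)

def er_laplacian(n, p):
    """Laplacian Delta = D - A of one sample of the Erdos-Renyi graph G(n, p)."""
    upper = np.triu(rng.random((n, n)) < p, k=1)   # independent edges for i < j
    A = (upper | upper.T).astype(float)            # symmetric 0/1 adjacency, zero diagonal
    D = np.diag(A.sum(axis=1))                     # degree matrix
    return D - A

n, p = 200, 0.3
Delta = er_laplacian(n, p)
eig = np.linalg.eigvalsh(Delta)                    # eigenvalues in ascending order

print(np.allclose(Delta @ np.ones(n), 0.0))        # True: 0 is an eigenvalue, eigenvector 1
print(eig[0] >= -1e-9)                             # True: Delta is non-negative definite
```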


Throughout this paper, $\log x = \log_e x$ for $x > 0$. Keep in mind that in our application to the Erdös-Rényi random graph $G(n, p_n)$, which will be given after Theorem 2, the mean $\mu_n$ of $\xi^{(n)}_{ij}$ is not equal to zero in general. The following step reduces the general case to that with $\mu_n = 0$. Fix $n \ge 2$. Set
\[
\eta^{(n)}_{ij} = \frac{\xi^{(n)}_{ij} - \mu_n}{\sigma_n},\quad \eta^{(n)}_{ii} = 0 \ \text{ for } 1 \le i \ne j \le n,\quad B_n = \bigl(\eta^{(n)}_{ij}\bigr)_{1\le i,j\le n},
\]
\[
X_{n,i} = \sum_{j=1}^n \eta^{(n)}_{ij},\quad\text{and } X_{n,(1)} \ge \cdots \ge X_{n,(n)} \text{ is the order statistics of } \{X_{n,i};\ 1 \le i \le n\}.
\tag{1.2}
\]
Easily, $\Delta_n = \sigma_n\cdot\mathrm{diag}(X_{n,i})_{1\le i\le n} - \sigma_n B_n + \mu_n H_n$, where all of the off-diagonal entries of $H_n$ are equal to $-1$ and all of the diagonal entries are identical to $n - 1$. Based on this, the following proposition establishes a connection between the eigenvalues of $\Delta_n$ and the order statistics $\{X_{n,(i)}\}$.

PROPOSITION 1.1 Suppose Assumption $A_0$ holds. Then the following are true for all $n \ge 3$.
(i) For all $2 \le i \le n - 1$, we have that
\[
X_{n,(i+1)} - \|B_n\| \le \frac{\lambda_i(\Delta_n) - n\mu_n}{\sigma_n} \le X_{n,(i-1)} + \|B_n\|.
\]
(ii) Further, if $\mu_n \ge 0$, then, for all $n \ge 2$,
\[
X_{n,(2)} - \|B_n\| \le \frac{\lambda_1(\Delta_n) - n\mu_n}{\sigma_n} \le X_{n,(1)} + \|B_n\|.
\]
(iii) Under Assumption $A_6$, $\lim_{n\to\infty} \frac{1}{\sqrt{n\log n}}\|B_n\| = 0$ a.s.

The moment assumption in (iii) seems optimal in view of its proof via a general theorem on the spectrum of a random matrix (Lemma 2.1); it would be interesting to see a proof confirming this. On the other hand, we see from this proposition that the limiting behavior of $(\lambda_i(\Delta_n) - n\mu_n)/(\sigma_n\sqrt{n\log n})$ can be obtained from $X_{n,(i)}/\sqrt{n\log n}$ once the latter is understood. In fact we have the following results about $X_{n,(i)}$.

PROPOSITION 1.2 Let $\{\eta^{(n)}_{ij}\}$ and $X_{n,(i)}$ be as in (1.2).
(i) If Assumption $A_6$ holds, then, for any $k \ge 1$, $X_{n,(k)}/\sqrt{n\log n} \to \sqrt{2}$ in probability as $n \to \infty$.
(ii) Suppose Assumption $A_p$ holds for all $p > 0$. Let $\{k_n;\ n \ge 1\}$ be a sequence of integers such that $k_n \to +\infty$ and $\log k_n = o(\log n)$ as $n \to \infty$. Then $X_{n,(k_n)}/\sqrt{n\log n} \to \sqrt{2}$ a.s. as $n \to \infty$.
(iii) Given integer $k \ge 1$, let Assumption $A_{4k+4}$ hold. If $\{\Delta_2, \Delta_3, \cdots\}$ are independent, then
\[
\liminf_{n\to\infty} \frac{X_{n,(k)}}{\sqrt{n\log n}} = \sqrt{2} \quad a.s., \qquad \limsup_{n\to\infty} \frac{X_{n,(k)}}{\sqrt{n\log n}} = \sqrt{2 + 2k^{-1}} \quad a.s., \quad\text{and}
\]
\[
\text{the sequence } \Bigl\{\frac{X_{n,(k)}}{\sqrt{n\log n}};\ n \ge 2\Bigr\} \text{ is dense in } \bigl[\sqrt{2},\ \sqrt{2 + 2k^{-1}}\,\bigr] \quad a.s.
\]

The $(4k+4)$-th moment assumption in (iii) above comes from a condition guaranteeing a large deviation result (Lemma 3.6); it is not known whether it is the best moment condition. The order statistics of independent random variables are understood quite well, see, e.g., [10]. However, as the row sums of the symmetric matrix $B_n$, the variables $X_{n,1}, \cdots, X_{n,n}$ are dependent, and their order statistics have to be treated differently; this is done in Section 3.
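The constant $\sqrt{2 + 2k^{-1}}$ in (iii) can be read off from the large deviation rate just mentioned: Lemma 3.6 below gives $P\bigl(X_{n,(k)} \ge a\sqrt{n\log n}\bigr) = n^{-k(a^2/2 - 1) + o(1)}$ for $a > \sqrt{2}$, and this probability is summable in $n$ precisely when $k(a^2/2 - 1) > 1$. The borderline case is
\[
k\Bigl(\frac{a^2}{2} - 1\Bigr) = 1 \iff a^2 = 2 + \frac{2}{k} \iff a = \sqrt{2 + 2k^{-1}},
\]
so, for independent $\{\Delta_n\}$, the Borel-Cantelli lemma and its converse separate the events $\{X_{n,(k)} \ge a\sqrt{n\log n}\}$ occurring finitely often versus infinitely often at exactly this value.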


(iv) The conclusion in (i) also holds if "$\lambda_k(\Delta_n)$" is replaced by "$-\lambda_{n-k+1}(\Delta_n)$"; the conclusion in (ii) also holds if "$n\mu_n - \lambda_{n-k}(\Delta_n)$" is replaced by "$\lambda_{k+1}(\Delta_n) - n\mu_n$."

The $(4k+12)$-th moment condition in the above theorem comes from replacing $A_{4k+4}$ in Proposition 1.2 with $A_{4(k+2)+4}$ when applying (i) of Proposition 1.1.

It is interesting to observe that the "limsup" for $\lambda_k(\Delta_n)$ depends on $k$; however, the "liminf" remains the same for every $k$. This is very different from the corresponding results for the classical random matrices such as the Gaussian Orthogonal Ensembles or the Gaussian Unitary Ensembles; see the comments between Theorem 1 and Corollary 1.1 in [11].

Notice $\xi^{(n)}_{ij} \ge 0$ for all $1 \le i < j \le n$ for the Erdös-Rényi random graph $G(n, p_n)$. Also, if $\inf_{n\ge 2}\min\{p_n, 1 - p_n\} > 0$, Assumption $A_p$ obviously holds with $\sup_{1\le i<j\le n,\, n\ge 2} E|\eta^{(n)}_{ij}|^p < \infty$ for every $p > 0$.

2 Proof of Proposition 1.1

We first need a bound on the largest eigenvalue of a random symmetric matrix.

LEMMA 2.1 For each $n \ge 1$, let $\{\omega^{(n)}_{ij};\ 1 \le i \le j \le n\}$ be independent random variables with $E\omega^{(n)}_{ij} = 0$ and $\mathrm{Var}(\omega^{(n)}_{ij}) \le \sigma^2$ for all $1 \le i \le j \le n$. Suppose there exist $b > 0$ and $\delta_n \downarrow 0$ such that $\sup_{1\le i,j\le n} E|\omega^{(n)}_{ij}|^l \le b(\delta_n\sqrt{n})^{l-3}$ for all $n \ge 1$ and $l \ge 3$. Set $W_n = (\omega^{(n)}_{ij})_{n\times n}$. Then $\limsup_{n\to\infty} \frac{\lambda_1(W_n)}{n^{1/2}} \le 2\sigma$ a.s.
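The conclusion of Lemma 2.1 is easy to observe in simulation. A minimal sketch (ours; symmetric $\pm 1$ entries are one convenient choice satisfying the moment growth condition, with $\sigma = 1$):

```python
import numpy as np

rng = np.random.default_rng(1)

def top_eig_over_sqrt_n(n):
    """lambda_1(W_n)/sqrt(n) for a symmetric matrix with +/-1 entries above the diagonal."""
    W = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), k=1)
    W = W + W.T                                   # symmetric, zero diagonal
    return np.linalg.eigvalsh(W)[-1] / np.sqrt(n)

for n in (200, 800, 3200):
    print(n, top_eig_over_sqrt_n(n))              # settles near 2 = 2*sigma
```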


The inequalities in the following lemma are standard in matrix theory; see, e.g., p. 62 in [1] or p. 184 in [15].

LEMMA 2.2 Let $M_1$ and $M_2$ be two $n \times n$ Hermitian matrices. Then
(i) $\lambda_i(M_1 + M_2) \le \lambda_j(M_1) + \lambda_{i-j+1}(M_2)$ for $1 \le j \le i \le n$;
(ii) $\lambda_i(M_1 + M_2) \ge \lambda_j(M_1) + \lambda_{i-j+n}(M_2)$ for $1 \le i \le j \le n$;
(iii) (Weyl's perturbation theorem) $\max_{1\le j\le n} |\lambda_j(M_1) - \lambda_j(M_2)| \le \|M_1 - M_2\|$.

Recall $\Delta_n$ defined in (1.1). Suppose Assumption $A_0$ holds. For $n \ge 2$, recalling (1.2), set
\[
\tilde{\Delta}_n = \mathrm{diag}(X_{n,i})_{1\le i\le n} - B_n.
\tag{2.1}
\]
In other words, $\tilde{\Delta}_n$ is obtained by replacing $\xi^{(n)}_{ij}$ in (1.1) with $\eta^{(n)}_{ij}$ for all $1 \le i < j \le n$ and $n \ge 2$.

Proof of Proposition 1.1. (i) Reviewing (1.1) and (1.2), we have
\[
\Delta_n = \sigma_n\tilde{\Delta}_n + \mu_n H_n \quad\text{and}\quad \tilde{\Delta}_n = \mathrm{diag}(X_{n,i})_{1\le i\le n} - B_n,
\tag{2.2}
\]
where all of the off-diagonal entries of $H_n$ are equal to $-1$ and all of the diagonal entries are identical to $n - 1$. It is easy to check that the eigenvalues of $\mu_n H_n$ are $n\mu_n$ with multiplicity $n - 1$ and $0$ with multiplicity one. Observe that $\lambda_i(\mathrm{diag}(X_{n,i})_{1\le i\le n}) = X_{n,(i)}$ for $1 \le i \le n$. Apply (i) in Lemma 2.2 to the first identity in (2.2) to get
\[
\lambda_i(\Delta_n) \le \lambda_{i-1}(\sigma_n\tilde{\Delta}_n) + \lambda_2(\mu_n H_n) = \sigma_n\lambda_{i-1}(\tilde{\Delta}_n) + n\mu_n \le \sigma_n X_{n,(i-1)} + \sigma_n\|B_n\| + n\mu_n
\]
for any $2 \le i \le n - 1$, where, in the last step, (iii) of Lemma 2.2 is applied to the second identity in (2.2). On the other hand, applying (ii) in Lemma 2.2 to the first identity in (2.2), we have that
\[
\lambda_i(\Delta_n) \ge \lambda_{i+1}(\sigma_n\tilde{\Delta}_n) + \lambda_{n-1}(\mu_n H_n) = \sigma_n\lambda_{i+1}(\tilde{\Delta}_n) + n\mu_n \ge \sigma_n X_{n,(i+1)} - \sigma_n\|B_n\| + n\mu_n
\tag{2.3}
\]
for any $1 \le i \le n - 1$. The two inequalities conclude (i).

(ii) Notice $\lambda_j(\mu_n H_n) = n\mu_n$ for $1 \le j \le n - 1$ as $\mu_n \ge 0$. From (2.2), we know $\Delta_n = \sigma_n\cdot\mathrm{diag}(X_{n,i})_{1\le i\le n} - \sigma_n B_n + \mu_n H_n$. Since $\lambda_1(M_1 + M_2) \le \lambda_1(M_1) + \lambda_1(M_2)$ for any symmetric matrices $M_1$ and $M_2$, it follows that
\[
\lambda_1(\Delta_n) \le \lambda_1(\sigma_n\cdot\mathrm{diag}(X_{n,i})_{1\le i\le n}) + \lambda_1(-\sigma_n B_n) + \lambda_1(\mu_n H_n) \le \sigma_n X_{n,(1)} + \sigma_n\|B_n\| + n\mu_n.
\]
Taking $i = 1$ in (2.3), we have that $\lambda_1(\Delta_n) \ge \sigma_n X_{n,(2)} - \sigma_n\|B_n\| + n\mu_n$. So (ii) follows.

(iii) The proof of this part is relatively long; we put it into Lemma 2.3 below, which is slightly stronger than what we need. $\square$
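Both parts of the sandwich in Proposition 1.1, and the eigenvalue scaling they lead to, can be sanity-checked numerically. A sketch (ours; $n = 3000$, $p = 0.3$ and the index $i = 10$ are arbitrary choices, and the convergence in the last line is slow):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 3000, 0.3
mu, sigma = p, np.sqrt(p * (1 - p))     # mean and sd of a Bernoulli(p) entry

xi = np.triu((rng.random((n, n)) < p).astype(float), k=1)
xi = xi + xi.T                          # symmetric 0/1 matrix, zero diagonal
Delta = np.diag(xi.sum(axis=1)) - xi    # Laplacian as in (1.1)

eta = (xi - mu) / sigma                 # normalized entries as in (1.2)
np.fill_diagonal(eta, 0.0)
X = eta.sum(axis=1)
Xs = np.sort(X)[::-1]                   # X_{n,(1)} >= ... >= X_{n,(n)}
Bnorm = np.linalg.norm(eta, 2)          # spectral norm ||B_n||

lam = np.linalg.eigvalsh(Delta)[::-1]   # lambda_1 >= ... >= lambda_n
i = 10                                  # any index with 2 <= i <= n - 1
mid = (lam[i - 1] - n * mu) / sigma
print(Xs[i] - Bnorm <= mid <= Xs[i - 2] + Bnorm)   # True: the sandwich in (i)

scale = sigma * np.sqrt(n * np.log(n))
print((lam[-2] - n * mu) / scale)       # second smallest eigenvalue: drifts toward -sqrt(2)
```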


LEMMA 2.3 For $n \ge 2$, let $U_n = (u^{(n)}_{ij})$ be an $n \times n$ symmetric random matrix with $u^{(n)}_{ii} = 0$ for all $1 \le i \le n$. Suppose $\{u^{(n)}_{ij};\ 1 \le i < j \le n,\ n \ge 2\}$ are defined on the same probability space, and for each $n \ge 2$, $\{u^{(n)}_{ij};\ 1 \le i < j \le n\}$ are independent random variables with mean zero and variance one, and $\sup_{1\le i<j\le n,\, n\ge 2} E|u^{(n)}_{ij}|^{6+\delta} < \infty$ for some $\delta > 0$. Then $\limsup_{n\to\infty} \|U_n\|/\sqrt{n} \le 2$ a.s.

Proof. We claim that it suffices to show
\[
\limsup_{n\to\infty} \frac{\lambda_1(U_n)}{\sqrt{n}} \le 2 \quad a.s.
\tag{2.4}
\]
In fact, once this is true, by applying it to $-U_n$, we see that
\[
\limsup_{n\to\infty} \frac{-\lambda_n(U_n)}{\sqrt{n}} = \limsup_{n\to\infty} \frac{\lambda_1(-U_n)}{\sqrt{n}} \le 2 \quad a.s.
\]
Noticing $\|U_n\| = \max\{\lambda_1(U_n), -\lambda_n(U_n)\}$, it follows that $\limsup_{n\to\infty} \|U_n\|/\sqrt{n} \le 2$ a.s.

Now we prove (2.4). Define
\[
\delta_n = \frac{1}{\log(n+1)},\quad \tilde{u}^{(n)}_{ij} = u^{(n)}_{ij}\, I\bigl(|u^{(n)}_{ij}| \le \delta_n\sqrt{n}\bigr) \quad\text{and}\quad \tilde{U}_n = (\tilde{u}^{(n)}_{ij})_{n\times n}
\]
for $1 \le i \le j \le n$ and $n \ge 2$. By the Markov inequality,
\[
P(U_n \ne \tilde{U}_n) \le P\bigl(|u^{(n)}_{ij}| > \delta_n\sqrt{n} \text{ for some } 1 \le i < j \le n\bigr) \le n^2\max_{1\le i<j\le n} P\bigl(|u^{(n)}_{ij}| > \delta_n\sqrt{n}\bigr) \le \frac{K(\log(n+1))^{6+\delta}}{n^{1+(\delta/2)}},
\tag{2.5}
\]
where $K := \sup_{1\le i<j\le n,\, n\ge 2} E|u^{(n)}_{ij}|^{6+\delta} < \infty$. Since $Eu^{(n)}_{ij} = 0$,
\[
|E\tilde{u}^{(n)}_{ij}| = \bigl|Eu^{(n)}_{ij}\, I\bigl(|u^{(n)}_{ij}| > \delta_n\sqrt{n}\bigr)\bigr| \le \frac{K}{(\delta_n\sqrt{n})^{5+\delta}}
\tag{2.6}
\]
for any $1 \le i \le j \le n$ and $n \ge 2$. Note that $\lambda_1(A) \le n\cdot\max_{1\le i,j\le n}|a_{ij}|$ for any $n \times n$ symmetric matrix $A = (a_{ij})$. We have from (2.6) that
\[
\lambda_1(\tilde{U}_n) - \lambda_1(\tilde{U}_n - E(\tilde{U}_n)) \le \lambda_1(E\tilde{U}_n) \le n\max_{1\le i,j\le n}|E\tilde{u}^{(n)}_{ij}| \le \frac{nK}{(\delta_n\sqrt{n})^{5+\delta}} = o(\sqrt{n})
\]
for any $n \ge 2$. This and (2.5) indicate that
\[
\limsup_{n\to\infty} \frac{\lambda_1(U_n)}{\sqrt{n}} = \limsup_{n\to\infty} \frac{\lambda_1(\tilde{U}_n)}{\sqrt{n}} = \limsup_{n\to\infty} \frac{\lambda_1(\tilde{U}_n - E(\tilde{U}_n))}{\sqrt{n}} \quad a.s.
\]
We now apply Lemma 2.1 to $\tilde{U}_n - E(\tilde{U}_n)$: its entries are independent with mean zero, and $\mathrm{Var}(\tilde{u}^{(n)}_{ij}) \le E(u^{(n)}_{ij})^2 = 1$ for all $1 \le i, j \le n$ and $n \ge 2$. Now, $\max_{i,j,n} E|u^{(n)}_{ij}|^3 \le \max_{i,j,n}\bigl(E|u^{(n)}_{ij}|^{6+\delta}\bigr)^{3/(6+\delta)} = K^{3/(6+\delta)}$ by the Hölder inequality. Write $E|\tilde{u}^{(n)}_{ij}|^l = E\bigl(|\tilde{u}^{(n)}_{ij}|^{l-3}\cdot|\tilde{u}^{(n)}_{ij}|^3\bigr)$. Then
\[
\max_{1\le i,j\le n} E|\tilde{u}^{(n)}_{ij}|^l \le K^{3/(6+\delta)}\cdot\Bigl(\frac{2\sqrt{n}}{\log(n+1)}\Bigr)^{l-3}
\tag{2.7}
\]
for all $n \ge 2$ and $l \ge 3$, where $K$ is as above. By Lemma 2.1, we get (2.4). $\square$

3 Order Statistics of Weakly Dependent Random Variables

In this section we will prove Proposition 1.2. First, we collect some facts as preparation. The following is a Bonferroni-type inequality; see, e.g., pages 4-5 of [22].

LEMMA 3.1 Let $k$ and $n$ be two integers with $n \ge k + 1$. Then, for any events $A_1, \cdots, A_n$,
\[
\Bigl|P\Bigl(\bigcup_{i=1}^n A_i\Bigr) - \sum_{j=1}^{k}(-1)^{j-1}\sum_{1\le i_1<\cdots<i_j\le n} P(A_{i_1}\cap\cdots\cap A_{i_j})\Bigr| \le \sum_{1\le i_1<\cdots<i_{k+1}\le n} P(A_{i_1}\cap\cdots\cap A_{i_{k+1}}).
\]
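Lemma 3.1 can be checked by exact computation on a toy example. A small sketch (ours; five random events over a finite uniform sample space, so all probabilities are exact averages):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

M, n = 1000, 5                           # uniform sample space of size M, n events
events = [rng.random(M) < 0.3 for _ in range(n)]

def prob_inter(idx):
    """P(A_{i_1} cap ... cap A_{i_j}) for the indices in idx."""
    return np.logical_and.reduce([events[i] for i in idx]).mean()

p_union = np.any(events, axis=0).mean()

for k in range(1, n):                    # Lemma 3.1 requires n >= k + 1
    partial = sum((-1) ** (j - 1) * sum(prob_inter(c) for c in combinations(range(n), j))
                  for j in range(1, k + 1))
    bound = sum(prob_inter(c) for c in combinations(range(n), k + 1))
    print(k, abs(p_union - partial) <= bound + 1e-12)    # True for every k
```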


The next lemma is the normal approximation of Sakhanenko [27, 28], stated in the form in which it will be used.

LEMMA 3.2 For each $n \ge 1$, let $\{U_{n,i};\ 1 \le i \le n\}$ be independent random variables with $EU_{n,i} = 0$, $\mathrm{Var}(U_{n,i}) = 1$ and $E|U_{n,i}|^p < \infty$ for some $p > 2$. Then there exist i.i.d. $N(0,1)$-distributed random variables $\{\epsilon_{n,i};\ 1 \le i \le n\}$ such that
\[
P\Bigl(\Bigl|\sum_{i=1}^n U_{n,i} - \sum_{i=1}^n \epsilon_{n,i}\Bigr| \ge x\Bigr) \le \frac{C}{1 + x^p}\sum_{i=1}^n E|U_{n,i}|^p
\]
for any $x > 0$, where $C$ is a constant not depending on $n$.

LEMMA 3.3 Let $\{U_i;\ 1 \le i \le n\}$ be i.i.d. $N(0,1)$-distributed random variables with order statistics $U_{(1)} \ge U_{(2)} \ge \cdots \ge U_{(n)}$, and let $\Phi$ be the standard normal distribution function. Then $P(U_{(k)} \le t) \le n^k\Phi(t)^{n-k}$ for any $t \in \mathbb{R}$ and $1 \le k \le n$. Moreover, for any $b \in (0, \sqrt{2})$, $P(U_{(k)} \le b\sqrt{\log n}) \le n^k\exp\{-n^{1-b^2/2}/\log n\}$ uniformly for $1 \le k \le n/2$ as $n$ is sufficiently large.

Proof. The density function of $U_{(k)}$ is
\[
\frac{n!}{(k-1)!(n-k)!}\,(1 - \Phi(x))^{k-1}\,\Phi(x)^{n-k}\,\phi(x)
\]
for any $x \in \mathbb{R}$ and $1 \le k \le n$, where $\phi = \Phi'$ is the standard normal density. Note that $n!/((k-1)!(n-k)!) \le n^k$ and $(1 - \Phi(x))^{k-1} \le 1$ for any $1 \le k \le n$ and $x \in \mathbb{R}$. We have
\[
P(U_{(k)} \le t) \le n^k\int_{-\infty}^t \Phi(x)^{n-k}\phi(x)\,dx = n^k\int_0^{\Phi(t)} y^{n-k}\,dy \le n^k\Phi(t)^{n-k}
\]
for any $t \in \mathbb{R}$. The first part of the lemma is proved. Now, write $\Phi(t)^{n-k} = (1 - (1 - \Phi(t)))^{n-k}$. By the fact that $1 - x \le e^{-x}$ for $x \in \mathbb{R}$, we get from the above that
\[
P(U_{(k)} \le t) \le n^k\cdot e^{-(n-k)(1-\Phi(t))}.
\tag{3.2}
\]
By using the fact that $1 - \Phi(t) \sim \frac{1}{\sqrt{2\pi}\,t}e^{-t^2/2}$ as $t \to +\infty$, the second part of the lemma follows. $\square$

LEMMA 3.4 For each $n \ge 1$, let $U_{n,1}, \cdots, U_{n,n}$ be independent random variables with mean zero, variance one and $S_n = U_{n,1} + \cdots + U_{n,n}$. Suppose $K := \sup_{1\le i\le n,\, n\ge 1} E|U_{n,i}|^p < \infty$ for some $p > 2$. Given $t > 0$, set $\alpha = \min\{t^2/2,\ (p/2) - 1\}$. Then, for any $\beta < \alpha$, there exists a constant $C_{\beta,t}$ depending only on $\beta$ and $t$ such that
\[
\sup_{n\ge 2}\ n^{\beta} P\Bigl(\frac{|S_n|}{\sqrt{n\log n}} \ge t\Bigr) \le C_{\beta,t}K.
\]

Proof. By Lemma 3.2, for each $n \ge 1$, there exist i.i.d. $N(0,1)$-distributed random variables $\{\epsilon_{n,i};\ 1 \le i \le n\}$ such that for any $\delta > 0$,
\[
P\Bigl(\Bigl|S_n - \sum_{i=1}^n \epsilon_{n,i}\Bigr| \ge \delta\sqrt{n\log n}\Bigr) \le \frac{C}{1 + (\delta\sqrt{n\log n})^p}\sum_{i=1}^n E|U_{n,i}|^p \le \frac{CK}{\delta^p(\log n)^{p/2}}\cdot\frac{1}{n^{p/2-1}}
\]
as $n \ge 2$, where $C$ is a universal constant. Since $|S_n| \le |\sum_{i=1}^n \epsilon_{n,i}| + |S_n - \sum_{i=1}^n \epsilon_{n,i}|$ and $\sum_{i=1}^n \epsilon_{n,i} \sim \sqrt{n}\cdot N(0,1)$, we get
\[
P(|S_n| \ge t\sqrt{n\log n}) \le P\Bigl(\Bigl|S_n - \sum_{i=1}^n \epsilon_{n,i}\Bigr| \ge \delta\sqrt{n\log n}\Bigr) + P\Bigl(\Bigl|\sum_{i=1}^n \epsilon_{n,i}\Bigr| \ge (t - \delta)\sqrt{n\log n}\Bigr)
\]
\[
\le \frac{CK}{\delta^p(\log n)^{p/2}}\cdot\frac{1}{n^{p/2-1}} + P\bigl(|N(0,1)| \ge (t - \delta)\sqrt{\log n}\bigr)
\tag{3.3}
\]
as $n \ge 2$. It is well known that
\[
\frac{x}{\sqrt{2\pi}(1 + x^2)}e^{-x^2/2} \le P(N(0,1) \ge x) \le \frac{1}{\sqrt{2\pi}\,x}e^{-x^2/2}
\tag{3.4}
\]
for any $x > 0$. Therefore, for any $0 < \delta < t$,
\[
P\bigl(|N(0,1)| \ge (t - \delta)\sqrt{\log n}\bigr) \le \frac{1}{(t - \delta)\sqrt{\log n}}\cdot\frac{1}{n^{(t-\delta)^2/2}}
\]
as $n \ge 2$. This together with (3.3) yields that
\[
P(|S_n| \ge t\sqrt{n\log n}) \le \frac{CK}{\delta^p(\log n)^{p/2}}\cdot\frac{1}{n^{p/2-1}} + \frac{1}{(t - \delta)\sqrt{\log n}}\cdot\frac{1}{n^{(t-\delta)^2/2}}
\]
for any $\delta \in (0, t)$ and $n \ge 2$. Since $(t - \delta)^2/2 \to t^2/2$ as $\delta \to 0^+$, for $\beta < \alpha \le t^2/2$, choose $\delta_0 > 0$ such that $(t - \delta_0)^2/2 > \beta$. Then
\[
n^{\beta}P(|S_n| \ge t\sqrt{n\log n}) \le \frac{CK}{\delta_0^p(\log n)^{p/2}} + \frac{1}{(t - \delta_0)\sqrt{\log n}}
\]
for all $n \ge 2$. The conclusion then follows. $\square$

Now we introduce a method that will be used in later proofs. Let $\{\eta^{(n)}_{ij}\}$ and $X_{n,(i)}$ be as in (1.2). Set $m_n = [n/\log n]$ for $n \ge 3$,
\[
V_n = \max_{1\le i\le n}\Bigl|\sum_{j=1}^{m_n}\eta^{(n)}_{ij}\Bigr| \quad\text{and}\quad Y_{n,i} = \sum_{j=m_n+1}^n \eta^{(n)}_{ij},\quad 1 \le i \le n.
\tag{3.5}
\]
Obviously, $\max_{1\le i\le n}|X_{n,i} - Y_{n,i}| \le V_n$ for $n \ge 2$. Applying (iii) of Lemma 2.2 to the two diagonal matrices $\mathrm{diag}(X_{n,i})_{1\le i\le n}$ and $\mathrm{diag}(Y_{n,i})_{1\le i\le n}$, we obtain that
\[
\max_{1\le i\le n}|X_{n,(i)} - Y_{n,(i)}| \le V_n
\tag{3.6}
\]
for $n \ge 2$. Let $Z_{n,(1)} \ge Z_{n,(2)} \ge \cdots \ge Z_{n,(m_n)}$ be the order statistics of $Y_{n,i}$, $1 \le i \le m_n$, which is a small portion of $Y_{n,i}$, $1 \le i \le n$. Thus, $Y_{n,(i)} \ge Z_{n,(i)}$ for $i = 1, 2, \cdots, m_n$. This together with (3.6) yields a useful inequality:
\[
Z_{n,(k)} - V_n \le X_{n,(k)} \le \max_{1\le j\le n}\{X_{n,j}\}
\tag{3.7}
\]
for any $1 \le k \le m_n$. The point of connecting $X_{n,(k)}$ to $Z_{n,(k)}$ is that $\{Z_{n,(i)}\}_{i=1}^{m_n}$ are the order statistics of the independent random variables $\{Y_{n,i};\ 1 \le i \le m_n\}$, while $V_n$ is negligible. We next give some tail probabilities of $V_n$ and $Z_{n,(k)}$.

LEMMA 3.5 Suppose Assumption $A_6$ holds. Let $\{\eta^{(n)}_{ij}\}$ and $X_{n,(i)}$ be as in (1.2), and let $\{k_n;\ n \ge 1\}$ be positive integers such that $\log k_n = o(\log n)$ as $n \to \infty$. Then, for any $\epsilon \in (0, 1/2)$, there exists $\gamma = \gamma(\epsilon) > 1$ not depending on $n$ such that
\[
P\Bigl(\frac{V_n}{\sqrt{n\log n}} \ge \epsilon\Bigr) \le \frac{1}{n^{\gamma}} \quad\text{and}\quad P\Bigl(\frac{Z_{n,(k_n)}}{\sqrt{n\log n}} \le \sqrt{2} - 2\epsilon\Bigr) = O\Bigl(\frac{1}{n(\log n)^{3.5}}\Bigr)
\tag{3.8}
\]
as $n$ is sufficiently large.

Proof. Note that $\alpha = \min\{p/2 - 1,\ 10^2/2\} > 2$ since $p > 6$. By Lemma 3.4, for any $\beta \in (2, \alpha)$, noting $\eta^{(n)}_{ii} = 0$ for each $1 \le i \le n$ and that $\epsilon\sqrt{n\log n} \ge 10\sqrt{m_n\log m_n}$ as $n$ is large,
\[
P\Bigl(\frac{V_n}{\sqrt{n\log n}} \ge \epsilon\Bigr) \le n\cdot\max_{1\le i\le n} P\Bigl(\Bigl|\sum_{j=1}^{m_n}\eta^{(n)}_{ij}\Bigr| \ge 10\sqrt{m_n\log m_n}\Bigr) \le \frac{Cn}{(m_n - 1)^{\beta}} \le \frac{1}{n^{\gamma}}
\tag{3.9}
\]
as $n$ is large enough, because $\beta - 1 > \beta/2 := \gamma > 1$.

Now we prove the second assertion in (3.8). By Lemma 3.2, for each $1 \le i \le m_n$ and $n \ge 2$, there exist i.i.d. $N(0,1)$-distributed random variables $\{\zeta^{(n)}_{ij};\ 1 \le i \le m_n,\ m_n < j \le n\}$ such that, for any $\epsilon > 0$,
\[
P\Bigl(\Bigl|\sum_{j=m_n+1}^n \eta^{(n)}_{ij} - \sum_{j=m_n+1}^n \zeta^{(n)}_{ij}\Bigr| \ge \epsilon\sqrt{n\log n}\Bigr) \le \frac{C}{1 + (\epsilon\sqrt{n\log n})^6}\sum_{j=m_n+1}^n E|\eta^{(n)}_{ij}|^6 \le \frac{1}{n^2(\log n)^{2.5}}
\tag{3.10}
\]
uniformly for any $1 \le i \le m_n$ as $n$ is sufficiently large. Define $W_{n,i} = \sum_{j=m_n+1}^n \zeta^{(n)}_{ij}$ for $1 \le i \le m_n$. Then $\{W_{n,i};\ 1 \le i \le m_n\}$ are i.i.d. $N(0, n - m_n)$-distributed random variables. Recalling that $Z_{n,(1)} \ge Z_{n,(2)} \ge \cdots \ge Z_{n,(m_n)}$ are the order statistics of $Y_{n,i}$, $1 \le i \le m_n$, similarly to (3.6) we know that $\max_{1\le i\le m_n}|Z_{n,(i)} - W_{n,(i)}| \le \max_{1\le i\le m_n}|\sum_{j=m_n+1}^n \eta^{(n)}_{ij} - \sum_{j=m_n+1}^n \zeta^{(n)}_{ij}|$. Thus, by (3.10), for any $\epsilon > 0$,
\[
P\Bigl(\max_{1\le i\le m_n}|Z_{n,(i)} - W_{n,(i)}| \ge \epsilon\sqrt{n\log n}\Bigr) \le \sum_{i=1}^{m_n} P\Bigl(\Bigl|\sum_{j=m_n+1}^n \eta^{(n)}_{ij} - \sum_{j=m_n+1}^n \zeta^{(n)}_{ij}\Bigr| \ge \epsilon\sqrt{n\log n}\Bigr) \le \frac{m_n}{n^2(\log n)^{2.5}} \le \frac{1}{n(\log n)^{3.5}}
\tag{3.11}
\]
as $n$ is sufficiently large. Fix $\beta > 0$ and $\epsilon \in (0, 2\beta)$. Using the fact that $\sqrt{n}/\sqrt{n - m_n} \to 1$ as $n \to \infty$, we see that
\[
P\bigl(Z_{n,(k)} \le (\beta - 2\epsilon)\sqrt{n\log n}\bigr) \le P\bigl(W_{n,(k)} \le (\beta - \epsilon)\sqrt{n\log n}\bigr) + P\bigl(|Z_{n,(k)} - W_{n,(k)}| \ge \epsilon\sqrt{n\log n}\bigr)
\]
\[
\le P\bigl(U_{(k)} \le (\beta - (\epsilon/2))\sqrt{\log n}\bigr) + \frac{1}{n(\log n)^{3.5}}
\tag{3.12}
\]
uniformly for all $1 \le k \le m_n$ as $n$ is sufficiently large, where $U_{(k)}$ is as in Lemma 3.3. By the same lemma, we have $P(U_{(k_n)} \le (\beta - (\epsilon/2))\sqrt{\log n}) \le n^{k_n}\exp\{-n^{1-(\beta - 2^{-1}\epsilon)^2/2}/\log n\}$ as $n$ is sufficiently large. Therefore,
\[
P\bigl(Z_{n,(k_n)} \le (\beta - 2\epsilon)\sqrt{n\log n}\bigr) \le n^{k_n}\exp\bigl\{-n^{1-(\beta - 2^{-1}\epsilon)^2/2}/\log n\bigr\} + \frac{1}{n(\log n)^{3.5}}
\tag{3.13}
\]
as $n$ is sufficiently large. Now, take $\beta = \sqrt{2}$; then $2\rho := 1 - (\beta - 2^{-1}\epsilon)^2/2 > 0$ for $\epsilon \in (0, \sqrt{2})$. By the condition on $k_n$, the first term on the right hand side of (3.13) is bounded by $e^{-n^{\rho}}$ as $n$ is sufficiently large. Therefore,
\[
P\Bigl(\frac{Z_{n,(k_n)}}{\sqrt{n\log n}} \le \sqrt{2} - 2\epsilon\Bigr) = O\Bigl(\frac{1}{n(\log n)^{3.5}}\Bigr)
\tag{3.14}
\]
as $n \to \infty$ for all $0 < \epsilon < \sqrt{2}/2$. $\square$
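The splitting (3.5)-(3.7) is easy to watch in simulation. A sketch (ours; standard normal entries, so every Assumption $A_p$ holds, and $n = 4000$, $k = 3$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000
m = int(n / np.log(n))                     # m_n = [n / log n]

eta = np.triu(rng.standard_normal((n, n)), k=1)
eta = eta + eta.T                          # symmetric, zero diagonal

X = eta.sum(axis=1)                        # X_{n,i} as in (1.2)
V = np.abs(eta[:, :m].sum(axis=1)).max()   # V_n in (3.5)
Y = eta[:, m:].sum(axis=1)                 # Y_{n,i} in (3.5)
Z = np.sort(Y[:m])[::-1]                   # order statistics of Y_{n,1},...,Y_{n,m_n}

scale = np.sqrt(n * np.log(n))
k = 3
Xk = np.sort(X)[::-1][k - 1]               # X_{n,(k)}

print(V / scale)                           # -> 0, though slowly (like 1/sqrt(log n))
print(Z[k - 1] - V <= Xk)                  # True: the left inequality in (3.7)
print(Xk / scale)                          # close to sqrt(2) ~ 1.414, cf. Proposition 1.2(i)
```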


LEMMA 3.6 Let $\{\eta^{(n)}_{ij}\}$ and $\{X_{n,(i)}\}$ be as in (1.2). Fix integer $k \ge 1$ and $a > \sqrt{2}$. Suppose Assumption $A_{2ka^2}$ holds. Then
\[
\lim_{n\to\infty} \frac{1}{\log n}\cdot\log P\Bigl(\frac{X_{n,(k)}}{\sqrt{n\log n}} \ge a\Bigr) = -k\Bigl(\frac{a^2}{2} - 1\Bigr).
\]

Proof. Let $a_n = a\sqrt{n\log n}$, $n \ge 2$. Observe that
\[
P(X_{n,(k)} \ge a_n) = P\Bigl(\bigcup_{1\le l_1<\cdots<l_k\le n}\{X_{n,l_1} \ge a_n, \cdots, X_{n,l_k} \ge a_n\}\Bigr).
\]


for any $0 < \epsilon < \beta$. This leads to the bound, uniformly for all $k \le j \le 2k$,
\[
P(X_{n,1} \ge a_n, \cdots, X_{n,j} \ge a_n) \ge \min_{1\le i\le j} P(Y_{n,i} \ge c_n)^j - \frac{1}{n^{p/2}}
\tag{3.21}
\]
as $n \ge K_1$, where $c_n = (a + \epsilon)\sqrt{n\log n}$. From (3.20) and (3.21), to estimate $P(X_{n,1} \ge a_n, \cdots, X_{n,j} \ge a_n)$, it is enough to study $P(Y_{n,i} \ge t\sqrt{n\log n})$ for any $t > 0$.

Step 2. Now we estimate $P(Y_{n,i} \ge t\sqrt{n\log n})$. By Lemma 3.2, for each $i = 1, 2, \cdots, 2k$ and $n \ge 2$, there exist i.i.d. $N(0,1)$-distributed random variables $\{\epsilon^{(n)}_{ij};\ 2k + 1 \le j \le n\}$ and a universal constant $C > 0$ such that
\[
P\Bigl(\Bigl|Y_{n,i} - \sum_{j=2k+1}^n \epsilon^{(n)}_{ij}\Bigr| \ge \epsilon\sqrt{n\log n}\Bigr) \le \frac{C}{1 + (\epsilon\sqrt{n\log n})^p}\sum_{j=2k+1}^n E|\eta^{(n)}_{ij}|^p \le \frac{C'}{n^{(p/2)-1}(\log n)^{p/2}}
\tag{3.22}
\]
uniformly for $i = 1, 2, \cdots, 2k$, where $C' = CC_k\epsilon^{-p}$ and $C_k$ is as in (3.18). It is easy to see that
\[
P(\eta \ge x + 2y) - P(|\xi - \eta| \ge y) \le P(\xi \ge x + y) \le P(\eta \ge x) + P(|\xi - \eta| \ge y)
\tag{3.23}
\]
for any random variables $\xi$ and $\eta$, and constants $x > 0$ and $y > 0$. Thus, for any $0 < \epsilon < t$, by (3.22),
\[
P\Bigl(\sum_{j=2k+1}^n \epsilon^{(n)}_{ij} \ge (t + \epsilon)\sqrt{n\log n}\Bigr) - \frac{C'}{n^{(p/2)-1}(\log n)^{p/2}} \le P\bigl(Y_{n,i} \ge t\sqrt{n\log n}\bigr) \le P\Bigl(\sum_{j=2k+1}^n \epsilon^{(n)}_{ij} \ge (t - \epsilon)\sqrt{n\log n}\Bigr) + \frac{C'}{n^{(p/2)-1}(\log n)^{p/2}}
\]
for all $1 \le i \le 2k$. Clearly, $\sum_{j=2k+1}^n \epsilon^{(n)}_{ij} \sim \sqrt{n - 2k}\cdot N(0,1)$ for each $1 \le i \le 2k$. By using (3.4), we see that
\[
P\Bigl(\sum_{j=2k+1}^n \epsilon^{(n)}_{ij} \ge (t - \epsilon)\sqrt{n\log n}\Bigr) \le P\bigl(N(0,1) \ge (t - \epsilon)\sqrt{\log n}\bigr) \le \frac{1}{n^{(t-\epsilon)^2/2}}
\]
and
\[
P\Bigl(\sum_{j=2k+1}^n \epsilon^{(n)}_{ij} \ge (t + \epsilon)\sqrt{n\log n}\Bigr) \ge P\bigl(N(0,1) \ge (t + 1.5\epsilon)\sqrt{\log n}\bigr) \ge \frac{1}{n^{(t+2\epsilon)^2/2}}
\]
for all $1 \le i \le 2k$ as $n \ge K_2$ for some constant $K_2 := K_2(k, \epsilon, t) \ge K_1$. The last three inequalities give
\[
\frac{1}{n^{(t+2\epsilon)^2/2}} - \frac{C'}{n^{(p/2)-1}(\log n)^{p/2}} \le P\bigl(Y_{n,i} \ge t\sqrt{n\log n}\bigr) \le \frac{1}{n^{(t-\epsilon)^2/2}} + \frac{C'}{n^{(p/2)-1}(\log n)^{p/2}}
\]
uniformly for all $1 \le i \le 2k$ and $t > \epsilon > 0$ as $n \ge K_2$. Taking $t = a - \epsilon$ and $t = a + \epsilon$ respectively, and noticing $(p/2) - 1 > a^2/2$ from the condition $p > 2ka^2$, we have that, uniformly for all $1 \le i \le 2k$,
\[
\frac{1}{n^{(a+\delta)^2/2}} \le P(Y_{n,i} \ge c_n) \le P(Y_{n,i} \ge b_n) \le \frac{1}{n^{(a-\delta)^2/2}}
\tag{3.24}
\]
as $n \ge K_3 := K_3(k, \epsilon, a)$ and as $\delta = \delta(\epsilon) > 0$ is small enough.

Step 3. Now we make a summary. Based on the assumption $p > 2ka^2$, we know that $j(a - \delta)^2/2 < p/2$ for all $k \le j \le 2k$ as $\delta > 0$ is small enough. Taking $\delta > 0$ small enough, since $a > \sqrt{2}$, one can see that, for $n$ sufficiently large, $\alpha_{n,j} = o(\alpha_{n,k})$ for all $j \ge k + 1$ as $n \to \infty$. Noting that $1/n^{ka_1} < 1/n^{k(a^2/2 - 1)} < 1/n^{ka_2}$ as $a_1 > a^2/2 - 1 > a_2$, then by combining (3.16) and the above assertion, we obtain
\[
\frac{1}{n^{ka_1}} \le k!\cdot P\Bigl(\frac{X_{n,(k)}}{\sqrt{n\log n}} \ge a\Bigr) \le \frac{1}{n^{ka_2}}
\]
as $n$ is sufficiently large. The desired conclusion follows by taking logarithms in the three terms above and then letting $a_1 \downarrow a^2/2 - 1$ and $a_2 \uparrow a^2/2 - 1$. $\square$

Proof of Proposition 1.2. (i) First, observe that $X_{n,j}$ is a sum of $n - 1$ random variables with mean zero and variance one. Second, $\alpha := \min\{(\sqrt{2} + \epsilon)^2/2,\ p/2 - 1\} > 1 + \sqrt{2}\epsilon =: \beta$ for any $\epsilon \in (0, 1)$ since $p > 6$. Then by Lemma 3.4, we obtain
\[
P\Bigl(\frac{X_{n,(k)}}{\sqrt{n\log n}} \ge \sqrt{2} + \epsilon\Bigr) \le P\Bigl(\frac{\max_{1\le j\le n}\{X_{n,j}\}}{\sqrt{n\log n}} \ge \sqrt{2} + \epsilon\Bigr) \le n\cdot\max_{1\le j\le n} P\Bigl(\frac{|X_{n,j}|}{\sqrt{n\log n}} \ge \sqrt{2} + \epsilon\Bigr) \le \frac{Cn}{(n-1)^{1+\sqrt{2}\epsilon}} \le \frac{1}{n^{\epsilon}}
\]
as $n$ is large enough. On the other hand, taking $k_n \equiv k$ in Lemma 3.5, we have from (3.7) and (3.8) that
\[
P\Bigl(\frac{X_{n,(k)}}{\sqrt{n\log n}} \le \sqrt{2} - 3\epsilon\Bigr) \le P\Bigl(\frac{V_n}{\sqrt{n\log n}} \ge \epsilon\Bigr) + P\Bigl(\frac{Z_{n,(k)}}{\sqrt{n\log n}} \le \sqrt{2} - 2\epsilon\Bigr) = O\Bigl(\frac{1}{n(\log n)^2}\Bigr)
\]
as $n \to \infty$ for sufficiently small $\epsilon$. The above two inequalities imply (i).

(ii) Fix integer $k \ge 1$. For any $\epsilon > 0$, let $t = \sqrt{2 + 2k^{-1}} + \epsilon$; then $k(t^2/2 - 1) > 1 + \sqrt{2}\epsilon$. By Lemma 3.6, $P(X_{n,(k_n)} \ge t\sqrt{n\log n}) \le P(X_{n,(k)} \ge t\sqrt{n\log n}) \le 1/n^{1+\epsilon}$ as $n$ is sufficiently large. It follows that $\sum_{n\ge 2} P(X_{n,(k_n)} \ge t\sqrt{n\log n}) < \infty$ for any $\epsilon > 0$. This implies that $\limsup_{n\to\infty} X_{n,(k_n)}/\sqrt{n\log n} \le \sqrt{2 + 2k^{-1}}$ a.s. for any $k \ge 1$. Letting $k \to +\infty$, we get
\[
\limsup_{n\to\infty} \frac{X_{n,(k_n)}}{\sqrt{n\log n}} \le \sqrt{2} \quad a.s.
\tag{3.26}
\]
To complete the proof, it suffices to show
\[
\liminf_{n\to\infty} \frac{X_{n,(k_n)}}{\sqrt{n\log n}} \ge \sqrt{2} \quad a.s.
\tag{3.27}
\]
Recalling (3.7) and Lemma 3.5, for any $\epsilon > 0$ there exists $\gamma > 1$ such that $P\bigl(V_n/\sqrt{n\log n} \ge \epsilon\bigr) \le n^{-\gamma}$ as $n$ is sufficiently large; hence, by the Borel-Cantelli lemma, $\lim_{n\to\infty} V_n/\sqrt{n\log n} = 0$ a.s. Looking at (3.7) again, to finish the proof it is enough to prove
\[
\liminf_{n\to\infty} \frac{Z_{n,(k_n)}}{\sqrt{n\log n}} \ge \sqrt{2} \quad a.s.
\tag{3.28}
\]
Clearly, the condition $\log k_n = o(\log n)$ implies that $k_n = o(m_n)$ as $n \to \infty$. By Lemma 3.5,
\[
P\bigl(Z_{n,(k_n)} \le (\sqrt{2} - 2\epsilon)\sqrt{n\log n}\bigr) = O\Bigl(\frac{1}{n(\log n)^{3.5}}\Bigr)
\tag{3.29}
\]
as $n$ is large enough and $\epsilon > 0$ is small enough; it follows that $\sum_{n\ge 2} P(Z_{n,(k_n)} \le (\sqrt{2} - 2\epsilon)\sqrt{n\log n}) < \infty$ for any $\epsilon > 0$ small enough. Therefore (3.28) follows from the Borel-Cantelli lemma.

(iii) The given assumptions say that $\{\eta^{(n)}_{ij};\ 1 \le i < j \le n,\ n \ge 2\}$ are independent random variables,
\[
\eta^{(n)}_{ii} = 0,\quad E\eta^{(n)}_{ij} = 0,\quad E(\eta^{(n)}_{ij})^2 = 1 \quad\text{and}\quad \sup_{1\le i<j\le n,\, n\ge 2} E|\eta^{(n)}_{ij}|^p < \infty
\tag{3.30}
\]
with $p > 4k + 4$. By the condition $p > 4k + 4 \ge 6$ and Lemma 3.5, for any $\epsilon > 0$ there exists $\gamma > 1$ such that $P(V_n \ge \epsilon\sqrt{n\log n}) \le n^{-\gamma}$ as $n$ is large enough. Consequently, $\sum_{n\ge 2} P(V_n \ge \epsilon\sqrt{n\log n}) < \infty$ for any $\epsilon > 0$. By the Borel-Cantelli lemma, $V_n/\sqrt{n\log n} \to 0$ almost surely as $n \to \infty$. Thus, recalling (3.7), to prove the proposition it suffices to show that
\[
\limsup_{n\to\infty} \frac{X_{n,(k)}}{\sqrt{n\log n}} \le \sqrt{2 + 2k^{-1}} \quad a.s. \qquad\text{and}\qquad \liminf_{n\to\infty} \frac{Z_{n,(k)}}{\sqrt{n\log n}} \ge \sqrt{2} \quad a.s.,
\tag{3.31}
\]
and that
\[
\sum_{n=2}^{\infty} P\Bigl(\frac{X_{n,(k)}}{\sqrt{n\log n}} \in [a, b)\Bigr) = \infty
\tag{3.32}
\]
for any $[a, b) \subset (\sqrt{2}, \sqrt{2 + 2k^{-1}})$. In fact, since $\{X_{n,(k)};\ n \ge 2\}$ are independent, by the second Borel-Cantelli lemma, (3.32) implies that, with probability one, $X_{n,(k)}/\sqrt{n\log n} \in [a, b)$ for infinitely many $n$'s.

Since $p > 4k + 4$, $\sqrt{p/(2k)} > \sqrt{2 + 2k^{-1}}$. For any $0 < \epsilon < \sqrt{p/(2k)} - \sqrt{2 + 2k^{-1}}$, let $a = \sqrt{2 + 2k^{-1}} + \epsilon$. Then $p > 2ka^2$ and $k(a^2/2 - 1) > 1 + \sqrt{2}\epsilon$. By Lemma 3.6, $P(X_{n,(k)} \ge a\sqrt{n\log n}) \le n^{-1-\epsilon}$ as $n$ is sufficiently large. It follows that $\sum_{n\ge 2} P(X_{n,(k)} \ge a\sqrt{n\log n}) < \infty$ for any such $\epsilon > 0$. Thus, the first inequality in (3.31) follows from the Borel-Cantelli lemma.

From (3.14) we see that $\sum_{n\ge 2} P(Z_{n,(k)} \le (\sqrt{2} - 2\epsilon)\sqrt{n\log n}) < \infty$ for any $\epsilon > 0$ small enough. By the Borel-Cantelli lemma again, the second inequality in (3.31) is obtained. Now we prove (3.32).

Given $[a, b) \subset (\sqrt{2}, \sqrt{2 + 2k^{-1}})$. Trivially, $k(2^{-1}a^2 - 1) \in (0, 1)$. Choose $\epsilon > 0$ such that $\epsilon$


References

[8] Chung, F. and Lu, L. (2006). Complex Graphs and Networks (CBMS Regional Conference Series in Mathematics, No. 107). American Mathematical Society.
[9] Colin de Verdière, Y. (1998). Spectres de Graphes. Société Mathématique de France.
[10] David, H. and Nagaraja, H. (2003). Order Statistics, 3rd Ed. Wiley-Interscience.
[11] Ding, X. and Jiang, T. (2010). Spectral distributions of adjacency and Laplacian matrices of weighted random graphs. Ann. Appl. Probab. 20(6), 2086-2117.
[12] Durrett, R. (2006). Random Graph Dynamics. Cambridge University Press.
[13] Erdös, P. and Rényi, A. (1960). On the evolution of random graphs. Magyar Tud. Akad. Mat. Kutató Int. Közl. 5, 17-61.
[14] Erdös, P. and Rényi, A. (1959). On random graphs I. Publ. Math. Debrecen 6, 290-297.
[15] Horn, R. and Johnson, C. (1985). Matrix Analysis. Cambridge University Press.
[16] Fiedler, M. (1973). Algebraic connectivity of graphs. Czechoslovak Mathematical Journal 23(2), 298-305.
[17] Janson, S., Łuczak, T. and Ruciński, A. (2000). Random Graphs. New York: Wiley.
[18] Khorunzhy, O., Shcherbina, M. and Vengerovsky, V. (2004). Eigenvalue distribution of large weighted random graphs. J. Math. Phys. 45(4), 1648-1672.
[19] Khorunzhy, A., Khoruzhenko, B., Pastur, L. and Shcherbina, M. (1992). The large-n limit in statistical mechanics and the spectral theory of disordered systems. In Phase Transitions and Critical Phenomena, Vol. 15, p. 73. Academic, New York.
[20] Kirchhoff, G. (1847). Über die Auflösung der Gleichungen, auf welche man bei der Untersuchung der linearen Verteilung galvanischer Ströme geführt wird. Ann. Phys. Chem. 72, 497-508.
[21] Kolchin, V. (1998). Random Graphs. New York: Cambridge University Press.
[22] Lin, Z. and Bai, Z. (2011). Probability Inequalities, 1st Ed. Springer.
[23] Mohar, B. (1991). The Laplacian spectrum of graphs. In Graph Theory, Combinatorics, and Applications, Vol. 2 (Y. Alavi, G. Chartrand, O. R. Oellermann and A. J. Schwenk, Eds.), pp. 871-898. Wiley.
[24] Palmer, E. (1985). Graphical Evolution: An Introduction to the Theory of Random Graphs. New York: Wiley.
[25] Rodgers, G. and De Dominicis, C. (1990). Density of states of sparse random matrices. J. Phys. A: Math. Gen. 23, 1567-1573.
[26] Rodgers, G. and Bray, A. (1988). Density of states of a sparse random matrix. Phys. Rev. B 37, 3557-3562.
[27] Sakhanenko, A. (1991). On the accuracy of normal approximation in the invariance principle. Siberian Adv. Math. 1(4), 58-91. [Translation of Trudy Inst. Mat. (Novosibirsk) 13 (1989), Asimptot. Analiz Raspred. Sluch. Protsess., 40-66.]
[28] Sakhanenko, A. (1985). Estimates in an invariance principle. In Limit Theorems of Probability Theory, Trudy Inst. Mat., Vol. 5, pp. 27-44, 175. Novosibirsk: "Nauka" Sibirsk. Otdel.
