Theorem 4 Let $B$ denote an $n \times m$ Gaussian or Bernoulli random basis. If $m \le cn$, $0 \le c < 1$, then as $n \to \infty$, $B$ is $\theta$-orthogonal almost surely with
$$\theta = \sin^{-1}\left(\frac{1-c}{1+c}\right). \qquad (21)$$
Further, given an $\epsilon > 0$, there exists an $N_\epsilon$ such that for every $n > N_\epsilon$ and $r > 0$, $B$ is $\theta$-orthogonal,
$$\theta = \sin^{-1}\left(\frac{1-c}{1+c} - \frac{3\sqrt{3}}{4}(r+\epsilon)\right), \qquad (22)$$
with probability greater than $1 - 2e^{-nr^2/\rho}$, where $\rho = 2$ for Gaussian $B$ and $\rho = 16$ for Bernoulli $B$.

The value of $\theta$ in (21) is not the best possible, in the sense that for a given value of $c$, a random $n \times m$ Gaussian matrix with $m \le cn$ would be $\theta'$-orthogonal (with high probability) for some $\theta' > \theta$ (see Figure 2). The reason is that the $\theta$ predicted by Lemma 2 is satisfied by all matrices, whereas Theorem 4 is restricted to random matrices.

Theorem 4 allows us to bound the length of the shortest non-zero vector in a random lattice.

Corollary 2 Let the $n \times m$ matrix $B = (b_1, b_2, \ldots, b_m)$, with $m \le cn$ and $0 \le c < 1$, denote a Gaussian or Bernoulli random basis for a lattice $L$. Then the shortest vector's length $\lambda(L)$ satisfies
$$\lambda(L) \ge \frac{1-c}{1+c}$$
almost surely as $n \to \infty$.

Each column of a Bernoulli $B$ is unit-length by construction. For Gaussian $B$, it is not difficult to show that all columns have length 1 almost surely as $n \to \infty$. Hence Corollary 2 is an immediate consequence of Theorem 4 and (2). Corollary 2 implies that in random lattices that are not full-dimensional, it is easy to obtain approximate solutions to the SVP (within a constant factor).
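Theorem 4's bound can be checked numerically. The sketch below (an illustration, not part of the paper's formal argument; the helper name `min_sin_angle` and the use of NumPy are our own choices) draws a Gaussian basis with columns scaled to near-unit length, computes the sine of the angle between each column and the span of the remaining columns via a least-squares projection, and compares the minimum against $\frac{1-c}{1+c}$ from (21). For moderate $n$ and small $c$ the bound typically holds with a comfortable margin.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 400, 0.1
m = int(c * n)

# Entries N(0, 1/n) so each column has length close to 1,
# matching the normalization used in Theorem 4.
B = rng.standard_normal((n, m)) / np.sqrt(n)

def min_sin_angle(B):
    """Smallest sin(angle) between any column and the span of the others."""
    sines = []
    for j in range(B.shape[1]):
        others = np.delete(B, j, axis=1)
        # Residual of b_j after projecting onto span(others);
        # its norm relative to ||b_j|| is sin of the angle.
        coef, *_ = np.linalg.lstsq(others, B[:, j], rcond=None)
        resid = B[:, j] - others @ coef
        sines.append(np.linalg.norm(resid) / np.linalg.norm(B[:, j]))
    return min(sines)

bound = (1 - c) / (1 + c)   # sin(theta) from (21)
print(min_sin_angle(B), bound)
```

With $c = 0.1$ the bound is $0.9/1.1 \approx 0.82$, while a random column in $\mathbb{R}^{400}$ is nearly orthogonal to the $39$-dimensional span of the others, so the observed minimum sine is typically well above the bound.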
This is because for random lattices in $\mathbb{R}^n$ with dimension $n(1-\epsilon)$, $\lambda(L)$ is greater than approximately $\epsilon$ times the length of the shortest basis vector. Compare this with Daudé and Vallée's [9] result that in random full-dimensional lattices in $\mathbb{R}^n$, $\lambda(L)$ is at least $O(1/\sqrt{n})$ times the length of the shortest basis vector with high probability.

By substituting $\theta = \frac{\pi}{3}$ into Theorem 4 and then invoking Corollary 1, we can deduce sufficient conditions for a random basis to be $\frac{\pi}{3}$-orthogonal.

Corollary 3 Let the $n \times m$ matrix $B$ denote a Gaussian or Bernoulli random basis for lattice $L$. If $\frac{m}{n} \le c < 7 - \sqrt{48}$ ($\approx 0.071$), then $B$ is $\frac{\pi}{3}$-orthogonal almost surely as $n \to \infty$. Further, given an $\epsilon > 0$, there exists an $N_\epsilon$ such that for every $n > N_\epsilon$ and
$$\frac{4(1-c)}{3\sqrt{3}\,(1+c)} - \epsilon - \frac{2}{3} > r > 0,$$
$B$ is $\frac{\pi}{3}$-orthogonal with probability greater than $1 - 2e^{-nr^2/\rho}$, where $\rho = 2$ for Gaussian $B$ and $\rho = 16$ for Bernoulli $B$.

Figure 2 illustrates that, in practice, an $n \times m$ Gaussian or Bernoulli random matrix is nearly orthogonal for much larger values of $\frac{m}{n}$ than our results claim. Our plots suggest that the probability of a random basis being nearly orthogonal transitions sharply from 1 to 0 for $\frac{m}{n}$ values in the interval $[0.2, 0.25]$. Sorkin [31] has shown us that if the columns of $B$ represent points chosen uniformly from the unit sphere in $\mathbb{R}^n$ (one can obtain such points by dividing the columns of a Gaussian matrix by their norms), then the best possible $\frac{m}{n}$ value for random $n \times m$ matrices to be $\frac{\pi}{3}$-orthogonal is $\frac{m}{n} = 0.25$. Further, if $m/n > 0.25$, $B$ is almost surely not $\frac{\pi}{3}$-orthogonal as $n \to \infty$. For large $n$, the columns of a Gaussian matrix almost surely have length 1, and thus behave like points chosen uniformly from the unit sphere in $\mathbb{R}^n$.
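The sharp transition near $m/n = 0.25$ can be observed empirically. The sketch below (our own illustration; the helper name `min_sin_angle` and the chosen grid of $c$ values are not from the paper) tests $\frac{\pi}{3}$-orthogonality of random Gaussian matrices for several aspect ratios: well below the threshold the test passes, well above it fails, and near $0.25$ the outcome varies from sample to sample.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
target = np.sqrt(3) / 2  # sin(pi/3)

def min_sin_angle(B):
    """Smallest sin(angle) between any column and the span of the others."""
    sines = []
    for j in range(B.shape[1]):
        others = np.delete(B, j, axis=1)
        coef, *_ = np.linalg.lstsq(others, B[:, j], rcond=None)
        resid = B[:, j] - others @ coef
        sines.append(np.linalg.norm(resid) / np.linalg.norm(B[:, j]))
    return min(sines)

passes = {}
for c in (0.05, 0.15, 0.25, 0.35):
    m = int(c * n)
    B = rng.standard_normal((n, m)) / np.sqrt(n)
    # Near c = 0.25 (the empirical threshold) the result fluctuates;
    # far from it, the outcome is essentially deterministic.
    passes[c] = bool(min_sin_angle(B) >= target)
    print(c, passes[c])
```

This matches the heuristic in the text: a random unit column projects onto the span of the other $\approx cn$ columns with squared norm about $c$, so the minimum sine behaves like $\sqrt{1-c}$, which crosses $\sin\frac{\pi}{3} = \frac{\sqrt{3}}{2}$ exactly at $c = 0.25$.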
Therefore, as $n \to \infty$, random $n \times \frac{n}{4}$ Gaussian matrices are almost surely $\frac{\pi}{3}$-orthogonal.
5.4 Proof of Results on Random Lattices

This section provides the proofs for Lemma 2 and Theorem 4.

5.4.1 Proof of Lemma 2

Our goal is to construct a lower bound for the angle between any column of $B$ and the subspace spanned by all the other columns in terms of the singular values of $B$. Clearly, if $\psi_{\min} = 0$, then the columns of $B$ are linearly dependent. Hence, (20) holds, as $B$'s columns are $\theta$-orthogonal with $\theta = 0$. For the rest of the proof, we will assume that $\psi_{\min} \neq 0$.

Consider the singular value decomposition (SVD) of $B$,
$$B = X \Psi Y, \qquad (23)$$
where $X$ and $Y$ are $n \times m$ and $m \times m$ real-valued matrices, respectively, with orthonormal columns, and $\Psi$ is an $m \times m$ real-valued diagonal matrix. Let $b_i$ and $x_i$ denote the $i$-th columns of $B$ and $X$ respectively, let $y_{ij}$ denote the element in the $i$-th row and $j$-th column of $Y$, and let $\psi_i$ denote the $i$-th diagonal element of $\Psi$. Then, (23) can be rewritten as
$$\begin{bmatrix} b_1 & b_2 & \cdots & b_m \end{bmatrix} = \begin{bmatrix} x_1 & x_2 & \cdots & x_m \end{bmatrix} \begin{bmatrix} \psi_1 & 0 & \cdots & 0 \\ 0 & \psi_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & \cdots & \psi_m \end{bmatrix} \begin{bmatrix} y_{11} & y_{12} & \cdots & y_{1m} \\ y_{21} & y_{22} & \cdots & y_{2m} \\ \vdots & & \ddots & \vdots \\ y_{m1} & \cdots & \cdots & y_{mm} \end{bmatrix}.$$

We now analyze the angle between $b_1$ (w.l.o.g.) and the subspace spanned by $\{b_2, b_3, \ldots, b_m\}$. Note that
$$b_1 = \sum_{i=1}^{m} \psi_i y_{i1} x_i.$$
Let $\tilde{b}_1$ denote an arbitrary non-zero vector in the subspace spanned by $\{b_2, b_3, \ldots, b_m\}$. Then,
$$\tilde{b}_1 = \sum_{k=2}^{m} \alpha_k b_k = \sum_{k=2}^{m} \alpha_k \sum_{i=1}^{m} \psi_i y_{ik} x_i = \sum_{i=1}^{m} x_i \psi_i \sum_{k=2}^{m} \alpha_k y_{ik}$$
for some $\alpha_k \in \mathbb{R}$ with $\sum_k |\alpha_k| > 0$. Let $\tilde{y}_{i1} = \sum_{k=2}^{m} \alpha_k y_{ik}$. Then,
$$\tilde{b}_1 = \sum_{i=1}^{m} \psi_i \tilde{y}_{i1} x_i.$$
Let $\tilde{\theta} \ge \theta$ denote the angle between $b_1$ and $\tilde{b}_1$. Then,
$$\left|\cos\tilde{\theta}\right| = \frac{\left|\left\langle b_1, \tilde{b}_1 \right\rangle\right|}{\|b_1\| \, \|\tilde{b}_1\|} = \frac{\left|\left\langle \sum_{i=1}^{m} \psi_i y_{i1} x_i, \; \sum_{i=1}^{m} \psi_i \tilde{y}_{i1} x_i \right\rangle\right|}{\left\|\sum_{i=1}^{m} \psi_i y_{i1} x_i\right\| \, \left\|\sum_{i=1}^{m} \psi_i \tilde{y}_{i1} x_i\right\|} \qquad (24)$$
$$= \frac{\left|\sum_{i=1}^{m} \psi_i^2 \, y_{i1} \tilde{y}_{i1}\right|}{\sqrt{\sum_{i=1}^{m} \psi_i^2 y_{i1}^2} \, \sqrt{\sum_{i=1}^{m} \psi_i^2 \tilde{y}_{i1}^2}}, \qquad (25)$$
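The change of basis in (24)–(25) can be verified numerically: since the columns of $X$ are orthonormal, the cosine computed from the singular values and the entries of $Y$ must coincide with the cosine computed directly from $b_1$ and $\tilde{b}_1$. A minimal sketch (our own check, not part of the proof; note that NumPy's `svd` returns $V^\top$, which plays the role of $Y$ in the proof's notation $B = X\Psi Y$):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 8
B = rng.standard_normal((n, m))

# Thin SVD: B = X @ diag(psi) @ Y, with Y (= V^T) playing the proof's role.
X, psi, Y = np.linalg.svd(B, full_matrices=False)

y1 = Y[:, 0]                 # (y_11, ..., y_m1): coefficients of b_1

# An arbitrary non-zero vector in span{b_2, ..., b_m}.
alpha = rng.standard_normal(m - 1)
b1_tilde = B[:, 1:] @ alpha
ytil1 = Y[:, 1:] @ alpha     # ytil_{i1} = sum_{k>=2} alpha_k y_{ik}

# |cos(theta~)| via (25): only singular values and entries of Y are needed.
num = abs(np.sum(psi**2 * y1 * ytil1))
den = np.sqrt(np.sum(psi**2 * y1**2)) * np.sqrt(np.sum(psi**2 * ytil1**2))
cos_svd = num / den

# Direct computation of the same cosine from the vectors themselves.
b1 = B[:, 0]
cos_direct = abs(b1 @ b1_tilde) / (np.linalg.norm(b1) * np.linalg.norm(b1_tilde))

print(np.isclose(cos_svd, cos_direct))  # → True
```

The agreement of the two expressions is exactly the step from (24) to (25): the cross terms vanish because $\langle x_i, x_j \rangle = \delta_{ij}$.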