Semidefinite Programming Relaxation vs Polyhedral Homotopy Method for Problems Involving Polynomials — Numerical results

Masakazu Kojima, Tokyo Institute of Technology
Workshop on Advances in Optimization, Tokyo Institute of Technology, April 19-21, 2007


Contents
1. PHoMpara — parallel implementation of the polyhedral homotopy method ([1] Gunji-Kim-Fujisawa-Kojima '06)
2. SparsePOP — Matlab implementation of the SDP relaxation for sparse POPs ([2] Waki-Kim-Kojima-Muramatsu '05)
3. Numerical comparison between the SDP relaxation and the polyhedral homotopy method ([1]+[2]+[3] Mevissen-Kojima-Nie-Takayama)
4. Concluding remarks

SDP = Semidefinite Program or Programming
POP = Polynomial Optimization Problem


1. PHoMpara — parallel implementation of the polyhedral homotopy method


The polyhedral homotopy method

Implementations on a single CPU:
- PHCpack [Verschelde]
- HOM4PS [Li-Li-Gao]
- PHoM [Gunji-Kim-Kojima-Takeda-Fujisawa-Mizutani]

The method is well suited to parallel computation: all isolated solutions can be computed independently in parallel. Parallel implementations:
- PHoMpara [Gunji, Kim, Fujisawa and Kojima]
- Leykin, Verschelde and Zhuang
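To make the parallelism concrete, the following Python sketch tracks each start solution independently and in parallel. It is only an illustration under simplifying assumptions: a plain linear homotopy with the "gamma trick" on a toy univariate polynomial, not a polyhedral homotopy, and none of it is PHoMpara code; all names and step counts are ours.

import cmath
from concurrent.futures import ProcessPoolExecutor

GAMMA = cmath.exp(0.7j)                   # random complex constant (gamma trick)

def F(x):  return x**3 - 2*x + 1          # target polynomial
def dF(x): return 3*x**2 - 2
def G(x):  return x**3 - 1                # start system with known roots
def dG(x): return 3*x**2

def H(x, t):  return (1 - t) * GAMMA * G(x) + t * F(x)
def dH(x, t): return (1 - t) * GAMMA * dG(x) + t * dF(x)

def track(x0, steps=200, newton_iters=5):
    # Continue one start solution from t = 0 to t = 1:
    # Euler predictor along dx/dt = -(dH/dt)/(dH/dx), then Newton corrector.
    x, dt = x0, 1.0 / steps
    for k in range(steps):
        t = k * dt
        x = x - dt * (F(x) - GAMMA * G(x)) / dH(x, t)   # dH/dt = F - gamma*G
        for _ in range(newton_iters):
            x = x - H(x, t + dt) / dH(x, t + dt)
    return x

if __name__ == "__main__":
    starts = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # roots of G
    with ProcessPoolExecutor() as pool:   # one independent path per worker
        roots = list(pool.map(track, starts))
    for r in roots:
        print(f"x = {r:.4f}, |F(x)| = {abs(F(r)):.2e}")

Because the paths never exchange information, the speedup is essentially linear in the number of CPUs, which is what the tables below show.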


Numerical results. Hardware: PC cluster (AMD Athlon 2.0GHz).

Problem (#sol)       #CPUs   cpu time (s)   speedup ratio
noon-10 (59,029)       1       62,672          1.0
                      40        1,797         34.9
eco-14 (4,096)         1       22,653          1.0
                      40          626         36.2

Larger instances, run on 40 CPUs only:

Problem (#sol)       #CPUs   cpu time (s)
noon-12 (531,417)     40       49,458
eco-16 (16,384)       40       12,051
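A quick sanity check of the speedup column (speedup ratio = 1-CPU time / 40-CPU time), in Python:

for name, t1, t40 in [("noon-10", 62672, 1797), ("eco-14", 22653, 626)]:
    print(f"{name}: speedup on 40 CPUs = {t1 / t40:.1f}")   # 34.9 and 36.2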


2. SparsePOP — Matlab implementation of SDP relaxation for sparse POPs

SparsePOP (Waki-Kim-Kojima-Muramatsu '06) = Lasserre's SDP relaxation ('01) + "structured sparsity" (c-sparsity).


POP:  min f_0(x)  s.t.  f_j(x) ≥ 0 or = 0  (j = 1,...,m).

Example:
  f_0(x) = ∑_{k=1}^n (-x_k^2)
  f_j(x) = 1 - x_j^2 - 2x_{j+1}^2 - x_n^2  (j = 1,...,n-1).

Hf_0(x): the n × n Hessian matrix of f_0(x).
Jf_*(x): the m × n Jacobian matrix of f_*(x) = (f_1(x),...,f_m(x))^T.
R: the csp matrix, the n × n density pattern matrix of
   I + Hf_0(x) + Jf_*(x)^T Jf_*(x)  (no cancellation in '+').
[Jf_*(x)^T Jf_*(x)]_{ij} ≠ 0 iff x_i and x_j appear in a common constraint.

Example with n = 6: the csp matrix is

R =  [ ⋆ ⋆ 0 0 0 ⋆ ]
     [ ⋆ ⋆ ⋆ 0 0 ⋆ ]
     [ 0 ⋆ ⋆ ⋆ 0 ⋆ ]
     [ 0 0 ⋆ ⋆ ⋆ ⋆ ]
     [ 0 0 0 ⋆ ⋆ ⋆ ]
     [ ⋆ ⋆ ⋆ ⋆ ⋆ ⋆ ]


A POP is c-sparse (correlatively sparse) ⇔ the n × n csp matrix R = (R_{ij}) allows a symbolic sparse Cholesky factorization (under a row and column ordering such as a symmetric minimum degree ordering). The example POP above is c-sparse.
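A small Python sketch (ours, not SparsePOP code) that builds the csp matrix R of the example POP and measures the fill-in of a symbolic Cholesky factorization under a greedy minimum-degree ordering; zero fill-in means R factors with no new nonzeros, i.e. the POP is c-sparse:

import numpy as np

def csp_matrix(n):
    # Pattern of I + Hf_0(x) + Jf_*(x)^T Jf_*(x) for the example above:
    # constraint f_j couples the variables x_j, x_{j+1} and x_n.
    R = np.eye(n, dtype=bool)
    for j in range(n - 1):
        idx = [j, j + 1, n - 1]            # variables appearing in f_{j+1}
        for a in idx:
            for b in idx:
                R[a, b] = True
    return R

def symbolic_fill_in(R):
    # Eliminate vertices of the adjacency graph of R in minimum-degree
    # order; each edge added between neighbors of a pivot is fill-in.
    adj = {i: set(np.flatnonzero(R[i])) - {i} for i in range(len(R))}
    fill = 0
    while adj:
        v = min(adj, key=lambda i: len(adj[i]))
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        for u in nbrs:                     # connect the remaining neighbors
            for w in nbrs:
                if u < w and w not in adj[u]:
                    adj[u].add(w); adj[w].add(u); fill += 1
    return fill

R = csp_matrix(6)
print(R.astype(int))                       # reproduces the 6 x 6 pattern above
print("fill-in:", symbolic_fill_in(R))     # 0: sparse Cholesky exists, c-sparse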


Sparse SDP relaxation = Lasserre (2001) + c-sparsity.

POP: min f_0(x) s.t. f_j(x) ≥ 0 or = 0 (j = 1,...,m), c-sparse.
  ⇓
A sequence of c-sparse SDP relaxation problems depending on the relaxation order r = 1, 2, ...:

(a) Under a moderate assumption, optimal solutions of the SDPs converge to an optimal solution of the POP as r → ∞.
(b) r = ⌈(max. degree of the polynomials in the POP)/2⌉ + 0 to 3 is usually large enough to attain an optimal solution of the POP in practice.
(c) Such an r is unknown in theory, except in some special cases.
(d) The size of the SDP increases rapidly as r → ∞.
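Point (d) can be made concrete with a short calculation: the dense Lasserre relaxation of order r contains a moment matrix indexed by all monomials of degree ≤ r in n variables, hence of dimension C(n+r, r), whereas the sparse relaxation uses one moment matrix per maximal clique of (a chordal extension of) the csp matrix, with n replaced by the clique size. In the Python sketch below, the clique size 3 is an illustrative assumption, not a computed value.

from math import comb

n = 14                                   # e.g. the alkyl problem below
for r in (1, 2, 3, 4):
    dense = comb(n + r, r)               # dense moment matrix dimension
    sparse = comb(3 + r, r)              # per-clique dimension, clique size 3
    print(f"r={r}: dense {dense}x{dense}, per-clique {sparse}x{sparse}")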


3. Numerical comparison between the SDP relaxation and the polyhedral homotopy method


A POP: alkyl from globalib.

min   -6.3 x_5 x_8 + 5.04 x_2 + 0.35 x_3 + x_4 + 3.36 x_6
s.t.  -0.820 x_2 + x_5 - 0.820 x_6 = 0,
      0.98 x_4 - x_7 (0.01 x_5 x_{10} + x_4) = 0,
      -x_2 x_9 + 10 x_3 + x_6 = 0,
      x_5 x_{12} - x_2 (1.12 + 0.132 x_9 - 0.0067 x_9^2) = 0,
      x_8 x_{13} - 0.01 x_9 (1.098 - 0.038 x_9) - 0.325 x_7 = 0.574,
      x_{10} x_{14} + 22.2 x_{11} = 35.82,
      x_1 x_{11} - 3 x_8 = -1.33,
      lbd_i ≤ x_i ≤ ubd_i  (i = 1, 2, ..., 14).

14 variables, 7 polynomial equality constraints of degree 3.

                Sparse                       Dense (Lasserre)
r     ε_obj     ε_feas    cpu       ε_obj    ε_feas    cpu
2     1.0e-02   7.1e-01   1.8       7.2e-3   4.3e-2    14.4
3     5.6e-10   2.0e-08   23.0      out of memory

ε_obj  = approx. opt. val. - lower bound for the opt. val.
ε_feas = the maximum error in the equality constraints.


Systems of polynomial equations

Is the (sparse) SDP relaxation useful for solving systems of polynomial equations? The answer depends on:
- how sparse the system of polynomial equations is,
- the maximum degree of the polynomials.

Two types of systems of polynomial equations:
(a) Benchmark test problems from Verschelde's homepage: Katsura, cyclic — not c-sparse.
(b) Systems of polynomials arising from discretization of an ODE and a DAE (differential algebraic equations) — c-sparse.


Katsura-n system of polynomial equations (not c-sparse); n = 8 case:

0 = -x_1 + 2x_9^2 + 2x_8^2 + 2x_7^2 + ... + 2x_2^2 + x_1^2,
0 = -x_2 + 2x_9x_8 + 2x_8x_7 + 2x_7x_6 + ... + 2x_3x_2 + 2x_2x_1,
  ...
0 = -x_8 + 2x_9x_2 + 2x_8x_1 + 2x_7x_2 + 2x_6x_3 + 2x_5x_4,
1 = 2x_9 + 2x_8 + 2x_7 + 2x_6 + 2x_5 + 2x_4 + 2x_3 + 2x_2 + x_1.

A formulation in terms of a POP:
  max ∑_{i=1}^n x_i  or  min ∑_{i=1}^n x_i^2
  s.t. the Katsura-n system, -5 ≤ x_i ≤ 5 (i = 1,...,n).
Different objective functions ⇒ different solutions.

Numerical results with SparsePOP (WKKM 2004); ↑ = maximize, ↓ = minimize:

n    obj. funct.     relax. order r   cpu
8    ∑ x_i ↑         1                0.08
8    ∑ x_i^2 ↓       2                7.1
11   ∑ x_i ↑         1                0.14
11   ∑ x_i^2 ↓       2                101.3


Numerical results with HOM4PS (Li-Li-Gao 2002):

n    cpu sec.   #solutions
8    1.9        256
11   209.1      2048
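For reference, a short Python sketch (our own indexing convention, inferred from the n = 8 equations above, with x[m] standing for x_{m+1}) that generates the Katsura-n residuals and checks them at the evident solution (1, 0, ..., 0):

import numpy as np

def katsura(x, n):
    # Residuals in variables x[0..n], with u_m = x[|m|] and u_m = 0 for |m| > n.
    u = lambda m: x[abs(m)] if abs(m) <= n else 0.0
    res = [-u(m) + sum(u(l) * u(m - l) for l in range(-n, n + 1))
           for m in range(n)]
    res.append(-1.0 + u(0) + 2 * sum(u(l) for l in range(1, n + 1)))
    return np.array(res)

n = 8
x = np.zeros(n + 1); x[0] = 1.0          # x_1 = 1, all other x_i = 0
print(np.abs(katsura(x, n)).max())       # 0.0: (1, 0, ..., 0) solves the system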


cyclic-n system of polynomial equations (not c-sparse); n = 5 case:

0 = x_1 + x_2 + x_3 + x_4 + x_5,
0 = x_1x_2 + x_2x_3 + x_3x_4 + x_4x_5 + x_5x_1,
0 = x_1x_2x_3 + x_2x_3x_4 + x_3x_4x_5 + x_4x_5x_1 + x_5x_1x_2,
0 = x_1x_2x_3x_4 + x_2x_3x_4x_5 + x_3x_4x_5x_1 + x_4x_5x_1x_2 + x_5x_1x_2x_3,
0 = -1 + x_1x_2x_3x_4x_5.

Numerical results with SparsePOP (an objective function and lower/upper bounds on x are added); ↑ = maximize:

n    obj. funct.   relax. order r   cpu
5    ∑ x_i ↑       3                1.83
6    ∑ x_i ↑       4                753.2

Numerical results with HOM4PS (Li-Li-Gao):

n    cpu sec.   #solutions
5    0.1        70
6    0.2        156
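A matching Python sketch for cyclic-n (again ours, not from the talk): it generates the residuals and uses a numerical Jacobian to confirm that every equation couples all n variables, so the csp matrix is completely dense and the system is not c-sparse, in contrast to the discretized ODE below.

import numpy as np

def cyclic(x):
    n = len(x)
    res = [sum(np.prod([x[(i + j) % n] for j in range(k)]) for i in range(n))
           for k in range(1, n)]          # sums of products of k consecutive x's
    res.append(np.prod(x) - 1.0)          # -1 + x_1 x_2 ... x_n
    return np.array(res)

x = 0.5 + np.random.rand(5)               # random positive point
f0, eps = cyclic(x), 1e-7
J = np.array([np.abs(cyclic(x + eps * e) - f0) > 1e-12
              for e in np.eye(5)]).T      # numeric Jacobian sparsity pattern
print(J.astype(int))                      # a full 5 x 5 block of ones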


Discretization of Mimura's ODE with 2 unknowns u, v : [0,5] → R:

u_xx = -(20/9)(35 + 16u - u^2)u + 20uv,
v_xx = (1/4)((1 + (2/5)v)v - uv),
u_x(0) = u_x(5) = v_x(0) = v_x(5) = 0.

Discretize: x_i = iΔx (i = 0, 1, 2, ...), u_x(x_i) ≈ (u(x_{i+1}) - u(x_{i-1}))/(2Δx).

Discretized system of polynomials with Δx = 1 (c-sparse):

f_1(u,v) = 76.8u_1 + u_2 + 35.6u_1^2 - 20.0u_1v_1 - 2.22u_1^3,
f_2(u,v) = -1.25v_1 + v_2 + 0.25u_1v_1 - 0.1v_1^2,
f_3(u,v) = u_1 + 75.8u_2 + u_3 + 35.6u_2^2 - 20.0u_2v_2 - 2.22u_2^3,
f_4(u,v) = v_1 - 2.25v_2 + v_3 + 0.25u_2v_2 - 0.1v_2^2,
f_5(u,v) = u_2 + 75.8u_3 + u_4 + 35.6u_3^2 - 20.0u_3v_3 - 2.22u_3^3,
f_6(u,v) = v_2 - 2.25v_3 + v_4 + 0.25u_3v_3 - 0.1v_3^2,
f_7(u,v) = u_3 + 76.8u_4 + 35.6u_4^2 - 20.0u_4v_4 - 2.22u_4^3,
f_8(u,v) = v_3 - 1.25v_4 + 0.25u_4v_4 - 0.1v_4^2.

Here u_i = u(x_i), v_i = v(x_i) (i = 0, 1, ..., 5), with u_0 = u_1, u_5 = u_4, v_0 = v_1 and v_5 = v_4 from the boundary conditions.
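The banded coupling is easy to see numerically. The Python sketch below (ours, not the authors' code) rebuilds the residuals for a general Δx directly from the ODE and prints the Jacobian sparsity pattern at a random point: every residual touches only neighboring unknowns, which is exactly the structure the csp matrix captures. For Δx = 1 these residuals reproduce f_1, ..., f_8 above, up to rounding of the coefficients.

import numpy as np

length, dx = 5.0, 1.0
n = int(length / dx) - 1                 # interior nodes 1..n (n = 4 here)

def residuals(z):
    u, v = z[:n], z[n:]
    # reflected end values encode the Neumann conditions (u_0 = u_1, etc.)
    ue = np.concatenate(([u[0]], u, [u[-1]]))
    ve = np.concatenate(([v[0]], v, [v[-1]]))
    ru = (ue[:-2] - 2*u + ue[2:]) / dx**2 + (20/9)*(35 + 16*u - u**2)*u - 20*u*v
    rv = (ve[:-2] - 2*v + ve[2:]) / dx**2 - 0.25*((1 + 0.4*v)*v - u*v)
    return np.concatenate([ru, rv])

z = np.random.rand(2 * n)
f0, eps = residuals(z), 1e-7
J = np.array([np.abs(residuals(z + eps * e) - f0) > 1e-12
              for e in np.eye(2 * n)]).T  # numeric Jacobian sparsity pattern
print(J.astype(int))                      # banded blocks: the c-sparse structure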


Numerical results with SparsePOP:

Δx    n    obj. funct.         relax. order r   cpu
1.0   8    ∑ r_i u(x_i) ↑      3                11.3
0.5   18   ∑ r_i u(x_i) ↑      3                57.8

Here r_i ∈ (0,1) are random numbers.


[Two plots: the computed solutions for Δx = 1.0 and Δx = 0.5 on [0,5].]


Numerical results with HOM4PS:

Δx    n    cpu sec.   #solutions (= mixed volume)   #real solutions
1.0   8    2.2        1296                          222
0.5   18   167.7      10,077,696                    not traced (M. cells = 1089)


Discretization of a DAE with 3 unknowns y_1, y_2, y_3 : [0,2] → R:

y_1' = y_3,   0 = y_2(1 - y_2),   0 = y_1y_2 + y_3(1 - y_2) - t,   y_1(0) = y_1^0.

Two solutions: y(t) = (t, 1, 1) and y(t) = (y_1^0 + t^2/2, 0, t).

Numerical results with SparsePOP (the discretized system is c-sparse); ↑ = maximize:

y_1^0   Δt     n     obj. funct.      relax. order r   cpu
0       0.02   297   ∑ y_2(t_i) ↑     2                30.9
1       0.02   297   ∑ y_1(t_i) ↑     2                33.9
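Both closed-form branches can be verified directly; a quick sympy check (ours, not from the talk):

import sympy as sp

t, y0 = sp.symbols("t y0")
# branch 1 has y_1(0) = 0; branch 2 has y_1(0) = y0
branches = [(t, 1, 1), (y0 + t**2 / 2, 0, t)]
for y1, y2, y3 in branches:
    checks = (sp.diff(y1, t) - y3,            # y_1' - y_3
              y2 * (1 - y2),                  # first algebraic constraint
              y1 * y2 + y3 * (1 - y2) - t)    # second algebraic constraint
    print([sp.simplify(c) for c in checks])   # [0, 0, 0] for each branch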


[Plot: the computed solution y(t) = (t, 1, 1) on [0,2].]


[Plot: the computed solution y(t) = (y_1^0 + t^2/2, 0, t) on [0,2].]


4. Concluding remarks


Some essential differences between Homotopy Continuation (HC) and (sparse) SDP Relaxation (SDPR) — 1:

(a) HC works on C^n, while SDPR works on R^n.
(b) HC aims to compute all isolated solutions; with SDPR, computing all isolated solutions is possible but expensive.
(c) SDPR can process inequalities.


Some essential differences between Homotopy Continuation (HC) and (sparse) SDP Relaxation (SDPR) — 2:

(d) SDPR is sensitive to the degrees of the polynomials in a POP, because the relaxed SDP grows rapidly as the degrees increase. ⇒ In practice, SDPR is applicable to POPs with low-degree polynomials, say degree ≤ 4.
(e) HC fits parallel computation better than SDPR.
(f) The effectiveness of sparse SDPR depends on c-sparsity, which arises for example in discretizations of ODEs, DAEs, optimal control problems and PDEs.

Thank you!
