J. Huang et al. / Medical Image Analysis 15 (2011) 670–679 677

Fig. 8. Performance comparisons on the full body MR image with different sampling ratios. The sample ratios are: (1) 36%; (2) 25% and (3) 20%. The performance: (a) Iterations vs. SNR (dB) and (b) iterations vs. CPU time (s).

effects on all MR images in less CPU time. The CSA is always inferior to the FCSA, which shows the effectiveness of the acceleration steps in the FCSA for MR image reconstruction. The classical CG (Lustig et al., 2007) is far worse than the others because of its higher per-iteration cost; the RecPF is slightly better than the TVCMRI, which is consistent with the observations in Ma et al. (2008) and Yang et al. (2010).

In our experiments, these methods have also been applied to the test images with the sample ratio set to 100%. We observed that all methods obtain almost the same reconstruction results, with an SNR of 64.8, after sufficient iterations. This was to be expected, since all methods essentially solve the same formulation, Model (1).

3.3. CPU time and SNRs

Fig. 6 gives the performance comparisons between the different methods in terms of CPU time versus SNR. Tables 1 and 2 tabulate the SNR and CPU time obtained by the different methods, each averaged over 100 runs per experiment. The FCSA always obtains the best reconstruction results on all MR images, achieving the highest SNR in less CPU time. The CSA is always inferior to the FCSA, which shows the effectiveness of the acceleration steps in the FCSA for MR image reconstruction. While the classical CG (Lustig et al., 2007) is far worse than the others because of its higher per-iteration cost, the RecPF is slightly better than the TVCMRI, which is consistent with the observations in Ma et al. (2008) and Yang et al. (2010).

3.4. Sample ratios

To test the efficiency of the proposed method, we further perform experiments on a full body MR image of size 924 × 208. Each algorithm runs for 50 iterations. Since we have shown that the CG method is far less efficient than the other methods, we do not include it in this experiment. The sample ratio is set to approximately 25%. To reduce randomness, we run each experiment 100 times for each parameter setting of each method. Examples of the original and recovered images produced by the different algorithms are shown in Fig. 7, from which we can observe that the results obtained by the FCSA are not only visibly better, but also superior in terms of both SNR and CPU time.

To evaluate the reconstruction performance under different sampling ratios, we use sampling ratios of 36%, 25% and 20% to obtain the measurements b, respectively. The different methods are then used to perform the reconstruction. To reduce randomness, we run each experiment 100 times for each parameter setting of each method. The SNR and CPU time are recorded at each iteration for each method. Fig. 8 gives the performance comparisons between the different methods in terms of CPU time and SNR when the sampling ratios are 36%, 25% and 20%, respectively. The reconstruction results produced by the FCSA are far better than those produced by the CG, TVCMRI and RecPF. The reconstruction performance of the FCSA is always the best in terms of both reconstruction accuracy and computational complexity, which further demonstrates the effectiveness and efficiency of the FCSA for compressed MR image reconstruction.

3.5. Discussion

The experimental results reported above validate the effectiveness and efficiency of the proposed composite splitting algorithms for compressed MR image reconstruction.
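As a rough illustration of the scheme validated above, the FCSA-style iteration (a gradient step on the least-squares term with step size 1/L, proximal steps for the TV and L1 terms, and Nesterov-type acceleration) can be sketched as follows. This is a minimal sketch, not the authors' implementation: the operator names `A`/`At`, the use of a Chambolle-style inner loop to approximate the TV proximal map, the averaging of the two proximal maps with 2α/2β weights (one common composite-splitting convention), and the application of the L1 penalty directly to image pixels rather than to wavelet coefficients are all assumptions made here for self-containedness.

```python
import numpy as np

def gradient(u):
    # Forward differences; zero row/column at the boundary.
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return np.stack([gx, gy])

def divergence(p):
    # Negative adjoint of `gradient`.
    px, py = p
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def prox_tv(g, lam, n_iter=20, tau=0.125):
    # Approximate proximal map of lam * TV(.) via Chambolle's dual iteration
    # (a stand-in here for whatever exact TV solver is used).
    p = np.zeros((2,) + g.shape)
    for _ in range(n_iter):
        d = divergence(p) - g / lam
        gd = gradient(d)
        p = (p + tau * gd) / (1.0 + tau * np.sqrt(gd[0] ** 2 + gd[1] ** 2))
    return g - lam * divergence(p)

def soft_threshold(z, t):
    # Proximal map of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fcsa(A, At, b, alpha, beta, L=1.0, n_iter=50):
    # Sketch of an accelerated composite splitting iteration:
    # gradient step on 0.5*||A x - b||^2, average of the proximal maps of
    # 2*alpha*TV and 2*beta*||.||_1, then a Nesterov momentum update.
    x = At(b)
    r = x.copy()
    t_k = 1.0
    for _ in range(n_iter):
        xg = r - (1.0 / L) * At(A(r) - b)        # gradient step, step size 1/L
        x1 = prox_tv(xg, 2.0 * alpha / L)        # TV proximal step
        x2 = soft_threshold(xg, 2.0 * beta / L)  # L1 proximal step
        x_new = 0.5 * (x1 + x2)                  # composite splitting: average
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_k ** 2))
        r = x_new + ((t_k - 1.0) / t_new) * (x_new - x)  # acceleration step
        x, t_k = x_new, t_new
    return x
```

For a partial-Fourier sampling operator built from an orthonormal FFT, the data term's Lipschitz constant is at most 1, so `L = 1.0` is a safe step-size choice in this sketch.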
Our main contributions are: We propose an efficient algorithm (FCSA) to reconstruct compressed MR images. It minimizes a linear combination of three terms: a least-squares data-fitting term, a total variation (TV) regularizer and an L1-norm regularizer. The computational complexity of the FCSA is O(p log(p)) in each iteration, where p is the number of pixels in the reconstructed image. It also has fast convergence properties. It has been shown to significantly outperform the classical CG method (Lustig et al., 2007) and two state-of-the-art methods (TVCMRI (Ma et al., 2008) and RecPF (Yang et al., 2010)) in terms of both accuracy and complexity.

The step size in the FCSA is set according to the inverse of the Lipschitz constant L_f. Using larger values is known to be a way of obtaining faster versions of the algorithm (Wright et al., 2009). Future work will study the combination of this technique with the CSD or FCSA, which is expected to further accelerate the optimization for this kind of problem.

In this paper, the proposed methods are developed to efficiently solve model (1), which has also been addressed by SparseMRI, TVCMRI and RecPF. Therefore, with enough iterations, SparseMRI, TVCMRI and RecPF will obtain the same solution as that obtained by our methods. Since all of them solve the same formulation, they will lead to the same gain in information content. In our future work, we will develop new, more effective models for compressed MR image reconstruction, which can lead to greater information gains.

4. Conclusion

We have proposed an efficient algorithm for compressed MR image reconstruction. Our work makes the following contributions. First, the proposed FCSA can efficiently solve a composite regularization problem including both a TV term and an L1-norm term, and it can easily be extended to other medical image applications. Second, the computational complexity of the FCSA is only O(p log(p)) in each iteration, where p is the number of pixels in the reconstructed image. It also has strong convergence properties. These properties make real compressed MR image reconstruction much more feasible than before. Finally, we conducted numerous experiments to compare different reconstruction methods; our method is shown to impressively outperform the classical methods and two of the fastest methods proposed so far in terms of both accuracy and complexity.

References

Beck, A., Teboulle, M., 2009a. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Transactions on Image Processing 18, 2419–2434.
Beck, A., Teboulle, M., 2009b. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2, 183–202.
Candes, E.J., Romberg, J., Tao, T., 2006. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory 52, 489–509.
Chartrand, R., 2007. Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Processing Letters 14, 707–710.
Chartrand, R., 2009. Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data. In: Proceedings of the Sixth IEEE International Conference on Symposium on Biomedical Imaging: From Nano to Macro. IEEE Press, pp. 262–265.
Combettes, P.L., 2009. Iterative construction of the resolvent of a sum of maximal monotone operators. Journal of Convex Analysis 16, 727–748.
Combettes, P.L., Pesquet, J.C., 2008. A proximal decomposition method for solving convex variational inverse problems. Inverse Problems 24, 1–27.
Combettes, P.L., Wajs, V.R., 2008. Signal recovery by proximal forward-backward splitting. SIAM Journal on Multiscale Modeling and Simulation 19, 1107–1130.
Donoho, D., 2006. Compressed sensing. IEEE Transactions on Information Theory 52, 1289–1306.
Eckstein, J., Svaiter, B.F., 2009. General projective splitting methods for sums of maximal monotone operators. SIAM Journal on Control and Optimization 48, 787–811.
Gabay, D., 1983. Chapter IX applications of the method of multipliers to variational inequalities. Studies in Mathematics and its Applications 15, 299–331.
Gabay, D., Mercier, B., 1976. A dual algorithm for the solution of nonlinear variational problems via finite-element approximations. Computers and Mathematics with Applications 2, 17–40.
Glowinski, R., Le Tallec, P., 1989. Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics, vol. 9. Society for Industrial Mathematics.
Goldfarb, D., Ma, S., 2009. Fast Multiple Splitting Algorithms for Convex Optimization. Technical Report, Department of IEOR, Columbia University, New York.
He, B.S., Liao, L.Z., Han, D., Yang, H., 2002. A new inexact alternating direction method for monotone variational inequalities. Mathematical Programming 92, 103–118.
He, L., Chang, T.C., Osher, S., Fang, T., Speier, P., 2006. MR image reconstruction by using the iterative refinement method and nonlinear inverse scale space methods. Technical Report UCLA CAM 06-35.
Huang, J., Zhang, S., Metaxas, D., 2010. Efficient MR image reconstruction for compressed MR imaging. In: Proceedings of the 13th International Conference on Medical Image Computing and Computer-Assisted Intervention: Part I. LNCS, vol. 6361. Springer-Verlag, pp. 135–142.
Ji, S., Ye, J., 2009. An accelerated gradient method for trace norm minimization. In: Proceedings of the 26th Annual International Conference on Machine Learning. ACM, pp. 457–464.
Lustig, M., Donoho, D., Pauly, J., 2007. Sparse MRI: the application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine 58, 1182–1195.
Ma, S., Yin, W., Zhang, Y., Chakraborty, A., 2008. An efficient algorithm for compressed MR imaging using total variation and wavelets. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2008, pp. 1–8.
Malick, J., Povh, J., Rendl, F., Wiegele, A., 2009. Regularization methods for semidefinite programming. SIAM Journal on Optimization 20, 336–356.
Nesterov, Y.E., 1983. A method for solving the convex programming problem with convergence rate O(1/k^2). Doklady Akademii Nauk SSSR 269, 543–547.
Nesterov, Y.E., 2007. Gradient methods for minimizing composite objective function. Technical Report.
Spingarn, J.E., 1983. Partial inverse of a monotone operator. Applied Mathematics and Optimization 10, 247–265.
Trzasko, J., Manduca, A., Borisch, E., 2009. Highly undersampled magnetic resonance image reconstruction via homotopic l0-minimization. IEEE Transactions on Medical Imaging 28, 106–121.
Tseng, P., 1991. Applications of a splitting algorithm to decomposition in convex programming and variational inequalities. SIAM Journal on Control and Optimization 29, 119–138.
Tseng, P., 2000. A modified forward-backward splitting method for maximal monotone mappings. SIAM Journal on Control and Optimization 38, 431–446.