NASA Scientific and Technical Aerospace Reports

near-optimal strategy with DBN serving as a fitness evaluator. The probability of achieving the desired effects (namely, the probability of success) at a specified terminal time is a random variable due to uncertainties in the environment. Consequently, the authors focus on the signal-to-noise ratio (SNR), a measure of the mean and variance of the probability of success, to gauge the goodness of a strategy. The resulting strategy will not only have a relatively high probability of inducing the desired effects, but also be robust to environmental uncertainties.

DTIC

Bayes Theorem; Mathematical Models; Optimization; Organizations
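The SNR fitness measure described above can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the success-probability model, the Gaussian environment noise, and all parameter values are assumptions made here purely to show how mean and variance of a random success probability combine into one fitness score.

```python
import random
import statistics

def success_probability(strategy_strength, env_noise):
    """Toy model: probability of achieving the desired effect under
    one sampled environment condition (hypothetical, for illustration)."""
    p = strategy_strength - env_noise
    return min(1.0, max(0.0, p))

def snr_fitness(strategy_strength, n_samples=10_000, seed=0):
    """Signal-to-noise ratio of the (random) probability of success:
    its mean divided by its standard deviation across sampled
    environments. High SNR = effective AND robust."""
    rng = random.Random(seed)
    samples = [success_probability(strategy_strength, rng.gauss(0.0, 0.1))
               for _ in range(n_samples)]
    mean = statistics.fmean(samples)
    std = statistics.pstdev(samples)
    return mean / std if std > 0 else float("inf")
```

Under this toy model, a stronger strategy scores a higher SNR because it raises the mean success probability without inflating its variance.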

20060001859 Connecticut Univ., Storrs, CT USA

Goal Management in Organizations: A Markov Decision Process (MDP) Approach

Meirina, Candra; Levchuk, Yuri N.; Levchuk, Georgiy M.; Pattipati, Krishna R.; Kleinman, David L.; Jan. 1, 2005; 25 pp.; In English; Original contains color illustrations

Contract(s)/Grant(s): N00014-00-1-0101

Report No.(s): AD-A440393; No Copyright; Avail.: Defense Technical Information Center (DTIC)

Goal management is the process of recognizing or inferring goals of individual team members; abandoning goals that are no longer relevant; identifying and resolving conflicts among goals; and prioritizing goals consistently for optimal team collaboration and effective operations. A Markov decision process (MDP) approach is employed to maximize the probability of achieving the primary goals (a subset of all goals). The authors seek to address the computational adequacy of an MDP as a planning model by introducing novel problem domain-specific heuristic evaluation functions (HEF) to aid the search process. They employ the optimal AO* search and two suboptimal greedy search algorithms to solve the MDP problem. A comparison of these algorithms to the dynamic programming algorithm shows that computational complexity can be reduced substantially. In addition, they recognize that embedded in the MDP solution there are a number of different action sequences by which a team’s goals can be realized. That is, in achieving the aforementioned optimality criterion, they identify alternate sequences for accomplishing the primary goals.

DTIC

Decision Making; Heuristic Methods; Markov Processes; Military Operations; Optimization; Planning
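The core optimization in the abstract above — choosing actions to maximize the probability of reaching primary-goal states — can be sketched with value iteration on a toy MDP. The states, actions, and transition probabilities below are invented for illustration; the paper's actual AO* and heuristic-guided searches operate on far larger goal structures.

```python
# Toy MDP: maximize the probability of eventually reaching a
# primary-goal state. All names and probabilities are hypothetical.
TRANSITIONS = {
    # state: {action: [(next_state, probability), ...]}
    "start": {"direct": [("goal", 0.6), ("fail", 0.4)],
              "detour": [("mid", 0.9), ("fail", 0.1)]},
    "mid":   {"push":   [("goal", 0.8), ("fail", 0.2)]},
}
GOALS = {"goal"}

def value_iteration(n_iters=100):
    """For each state, compute the maximum probability of eventually
    reaching a goal state, and the greedy policy that achieves it.
    Goal states have value 1; the absorbing failure state has value 0."""
    v = {s: 0.0 for s in TRANSITIONS}
    v.update({g: 1.0 for g in GOALS})
    v["fail"] = 0.0
    for _ in range(n_iters):
        for s, actions in TRANSITIONS.items():
            v[s] = max(sum(p * v[ns] for ns, p in outcomes)
                       for outcomes in actions.values())
    policy = {s: max(actions,
                     key=lambda a: sum(p * v[ns] for ns, p in actions[a]))
              for s, actions in TRANSITIONS.items()}
    return v, policy
```

Here the riskier "direct" action (0.6) loses to the two-step "detour" route (0.9 × 0.8 = 0.72), showing how the MDP solution can surface alternate action sequences for the same goal.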

20060001895 North Dakota Univ., Grand Forks, ND USA

Characteristics, Nonlinearity of Statistical Control and Relations with Dynamic Game Theory

Won, Chang-Hee; Nov. 7, 2005; 6 pp.; In English

Contract(s)/Grant(s): W911NF-05-1-0212

Report No.(s): AD-A440472; ARO-47574.3-CI-II; No Copyright; Avail.: CASI: A02, Hardcopy

The PI was awarded a short-term innovative research grant through the University of North Dakota. The grant period was 15 April 2005 to 14 January 2006; however, the PI transferred to Temple University in August 2005, and is therefore submitting this final progress report with the results obtained from 15 April 2005 to 31 August 2005. The main objective of this proposal was to develop statistical control theory. Statistical control is a generalization of Kalman’s linear-quadratic-Gaussian control, in which one optimizes the probability density of the performance index by controlling its cumulants. A statistical controller will have a better performance and stability margin than the linear-quadratic-Gaussian controller. In this project, the authors investigated the characteristics of linear statistical controllers, developed nonlinear statistical control theory, and utilized the statistical control paradigm in dynamic game theory.

DTIC

Dynamic Programming; Game Theory; Nonlinear Systems; Nonlinearity; Optimization
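The statistical-control idea above — treating the quadratic performance index as a random variable and shaping its density through its cumulants — can be illustrated by Monte Carlo estimation of the first two cost cumulants for a scalar system. This is a sketch under assumed dynamics and gains, not the report's theory: LQG minimizes only the first cumulant (the mean cost), while statistical control also accounts for higher cumulants such as the variance.

```python
import random
import statistics

def quadratic_cost(k, n_steps=50, q=1.0, r=0.1, seed=None):
    """Simulate a toy scalar linear system x+ = a*x + b*u + w under
    feedback u = -k*x, and return the quadratic performance index
    J = sum(q*x^2 + r*u^2). Dynamics and noise level are assumptions."""
    rng = random.Random(seed)
    a, b, x, j = 1.0, 1.0, 1.0, 0.0
    for _ in range(n_steps):
        u = -k * x
        j += q * x * x + r * u * u
        x = a * x + b * u + rng.gauss(0.0, 0.1)
    return j

def cost_cumulants(k, n_runs=2000):
    """First two cumulants of the cost J: the mean (the LQG criterion)
    and the variance (the next cumulant a statistical controller shapes)."""
    costs = [quadratic_cost(k, seed=i) for i in range(n_runs)]
    return statistics.fmean(costs), statistics.pvariance(costs)
```

Comparing gains with `cost_cumulants` shows how two controllers can be ranked not just by mean cost but also by how tightly the cost distribution concentrates.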

20060001896 North Carolina State Univ., Raleigh, NC USA

Active Contours for Multispectral Images With Non-Homogeneous Sub-Regions

Snyder, Wesley E.; Sep. 16, 2005; 114 pp.; In English

Contract(s)/Grant(s): W911NF-04-1-0432

Report No.(s): AD-A440479; ARO-47072.1-C1; No Copyright; Avail.: Defense Technical Information Center (DTIC)

In this work, we develop a framework for image segmentation which partitions an image based on the statistics of image intensity, where the statistical information is represented as a mixture of probability density functions defined in a multi-dimensional image intensity space. Depending on the method to estimate the mixture density functions, three active contour models are proposed: unsupervised multi-dimensional histogram method, half-supervised multivariate Gaussian mixture density method, and supervised multivariate Gaussian mixture density method. The implementation of active contours
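The density-based partitioning behind the supervised variant described above can be sketched in one dimension: fit a Gaussian to labeled intensity samples per sub-region, then assign each pixel to the region whose density gives the highest likelihood. This is a simplified stand-in for the abstract's multivariate mixture models, with all sample values invented for illustration.

```python
import math
import statistics

def fit_gaussian(samples):
    """Fit a 1-D Gaussian (mean, variance) to labeled intensity
    samples: one density per sub-region, as in the supervised case."""
    return statistics.fmean(samples), statistics.pvariance(samples)

def log_likelihood(x, mean, var):
    """Log density of intensity x under a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def segment(pixels, region_models):
    """Assign each pixel intensity to the region whose Gaussian gives
    the highest likelihood -- the statistical partition an active
    contour of this kind would settle on."""
    return [max(region_models,
                key=lambda r: log_likelihood(x, *region_models[r]))
            for x in pixels]
```

In the multispectral setting of the report, the scalar intensity becomes a vector and each region's density a multivariate Gaussian mixture, but the maximum-likelihood assignment step is the same.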

