
Higginbotham, D. J., Lesher, G. W., and Moulton, B. J. (1998). Techniques for augmenting scanning communication. Augmentative and Alternative Communication, 14:81–101.

ISAAC (1983). Int. Soc. for Augmentative and Alternative Communication, Internet site. http://www.isaaconline.org. Last accessed October 1st, 2003.

Juang, B. and Rabiner, L. (1990). The segmental k-means algorithm for estimating parameters of hidden Markov models. IEEE Trans. on Acoustics, Speech and Signal Processing, ASSP-38:1639–1641.

Juang, B., Rabiner, L., and Wilpon, G. (1986). A segmental k-means training procedure for connected word recognition. AT&T Technical Journal, 65:21–31.

Koester, H. and Levine, S. (1994). Learning and performance of able-bodied individuals using scanning systems with and without word prediction. Assistive Technology, page 42.

Lee, H.-Y., Yeh, C.-K., Wu, C.-M., and Tsuang, M.-F. (2001). Wireless communication for speech-impaired subjects via portable augmentative and alternative system. In Proc. of the Int. Conf. of the IEEE on Eng. in Med. and Bio. Soc., volume 4, pages 3777–3779, Washington, DC, USA.

Prechelt, L. (1996). Early stopping - but when? In Neural Networks: Tricks of the Trade, pages 55–69.

Quillian, M. (1968). Semantic memory. In Minsky, M., editor, Semantic Information Processing. MIT Press, Cambridge.

Rabiner, L. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proc. of the IEEE, 77:257–286.

Shane, B. (1981). Augmentative Communication: An Introduction. Blackstone, Toronto, Canada.

Simpson, R. C. and Koester, H. H. (1999). Adaptive one-switch row-column scanning. IEEE Trans. on Rehabilitation Engineering, 7:464–473.

Swiffin, A. L., Pickering, J. A., and Newell, A. F. (1987). Adaptive and predictive techniques in a communication prosthesis. Augmentative and Alternative Communication, 3:181–191.

APPENDIX A

In this appendix, we briefly describe the DAR-HMM according to the formalism adopted by Rabiner in (Rabiner, 1989) to specify classical hidden Markov models:

• $S \triangleq \{s_i\}$, the set of subcategories, with $N = |S|$;

• $V \triangleq \{v_j\}$, the set of predictable symbols, with $M = |V|$;

• $V^{(i)} = \{v^{(i)}_k\}$, the set of symbols predictable in subcategory $i$, with $M^{(i)} = |V^{(i)}|$ and $V = \bigcup_i V^{(i)}$;

• $O(t) \in V$, the observed symbol at time $t$;

• $Q(t) \in S$, the state at time $t$;

• $\pi_i(t) = P(Q(t) = s_i)$, the probability that $s_i$ is the actual subcategory at time $t$;

• $a_{ii'} = P(Q(t+1) = s_i \mid Q(t) = s_{i'})$, the transition probability from $s_{i'}$ to $s_i$;

• $b^i_k = P(O(0) = v^{(i)}_k \mid Q(0) = s_i)$, the probability of observing $v^{(i)}_k$ from subcategory $s_i$ at $t = 0$;

• $b^{ii'}_{kk'} = P(O(t) = v^{(i)}_k \mid Q(t) = s_i,\, O(t-1) = v^{(i')}_{k'})$, the probability of observing $v^{(i)}_k$ from subcategory $s_i$ having just observed $v^{(i')}_{k'}$.

The DAR-HMM for symbolic prediction can thus be described by a parameter vector $\lambda = \langle \Pi^0, A, B \rangle$, where $\Pi^0[N]$ is the vector of initial subcategory probabilities $\pi_i(0)$, $A[N][N]$ is the matrix of subcategory transition probabilities $a_{ii'}$, and $B[N][M][M+1]$ is the emission matrix with symbol probabilities $b^{ii'}_{kk'}$ and $b^i_k$. Given $\lambda$, the vector of parameters describing a specific language behavior model, we can predict the first observed symbol as the most probable one at time $t = 0$:

\[
\begin{aligned}
\hat{O}(0) &= \arg\max_{v^{(i)}_k} P\left(O(0) = v^{(i)}_k \mid \lambda\right) \\
           &= \arg\max_{v^{(i)}_k} \left( P(O(0) \mid Q(0), \lambda)\, P(Q(0)) \right) \\
           &= \arg\max_{v^{(i)}_k} \left( b^i_k \cdot \pi_i(0) \right).
\end{aligned}
\]
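As a concrete illustration, here is a minimal NumPy sketch of this first-step prediction. It assumes a storage convention not stated explicitly above: the extra slice of the emission matrix, B[:, :, M], holds the initial probabilities $b^i_k$, and symbol slots unused by a subcategory are zero. The function and argument names are ours, not the paper's.

```python
import numpy as np

def predict_first(pi0, B):
    """Predict the most probable first symbol: argmax over (i, k) of b_k^i * pi_i(0).

    pi0 : (N,)        initial subcategory probabilities Pi^0
    B   : (N, M, M+1) emission matrix; B[i, k, -1] assumed to hold b_k^i
    Returns the (subcategory index i, symbol index k) of the prediction.
    """
    scores = B[:, :, -1] * pi0[:, None]   # b_k^i * pi_i(0) for every pair (i, k)
    i, k = np.unravel_index(np.argmax(scores), scores.shape)
    return i, k
```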

Then, mimicking the DAR-HMM generative model, to predict the $t$-th symbol of a sentence we want to maximize the symbol probability in the present (hidden) state given the last observed symbol:

\[
P\left( O(t) = v^{(i)}_k,\; Q(t) = s_i \mid O(t-1) = v^{(i')}_{k'},\; \lambda \right).
\]

Recalling that we can compute the probability of the current (hidden) state as

\[
P(Q(t)) = \sum_{i'=1}^{N} P(Q(t) \mid Q(t-1))\, P(Q(t-1)) = \sum_{i'=1}^{N} \pi_{i'}(t-1)\, a_{ii'} = \pi_i(t),
\]

we obtain a recursive form for symbol prediction at time $t$:

\[
\hat{O}(t) = \arg\max_{v^{(i)}_k} \left( b^{ii'}_{kk'} \cdot \sum_{i'=1}^{N} \pi_{i'}(t-1)\, a_{ii'} \right).
\]
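To make the recursion concrete, the following sketch continues the assumptions of the previous snippet. Note one additional reading it relies on: since $B$ has shape $[N][M][M+1]$, the conditioning on the previous pair $(i', k')$ can only be indexed by $k'$ alone, which is consistent if the subcategory vocabularies $V^{(i)}$ are disjoint so that $k'$ determines $i'$. Names remain illustrative.

```python
import numpy as np

def predict_next(pi_prev, A, B, k_prev):
    """One step of the recursive symbol prediction.

    pi_prev : (N,)        subcategory probabilities pi_{i'}(t-1)
    A       : (N, N)      A[i, i_prev] = a_{ii'} = P(Q(t)=s_i | Q(t-1)=s_{i'})
    B       : (N, M, M+1) B[i, k, k_prev] read as b_{kk'}^{ii'} (k' fixes i'
                          when the V^{(i)} are disjoint)
    k_prev  : int         index of the symbol observed at time t-1
    Returns (i, k) of the prediction plus pi_t for the next step.
    """
    pi_t = A @ pi_prev                         # pi_i(t) = sum_{i'} a_{ii'} pi_{i'}(t-1)
    scores = B[:, :, k_prev] * pi_t[:, None]   # b_{kk'}^{ii'} * pi_i(t)
    i, k = np.unravel_index(np.argmax(scores), scores.shape)
    return i, k, pi_t
```

Under these assumptions, calling predict_first once and then iterating predict_next, feeding in at each step the symbol the user actually selected (not necessarily the predicted one), reproduces the intended prediction loop.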
