arXiv:1702.00887v2
APPENDICES

A MODEL DETAILS

A.1 SYNTACTIC ATTENTION

The syntactic attention layer (for tree transduction and natural language inference) is similar to the first-order graph-based dependency parser of Kiperwasser & Goldberg (2016). Given an input sentence $[x_1, \ldots, x_n]$ and the corresponding word vectors $[\mathbf{x}_1, \ldots, \mathbf{x}_n]$, we use a bidirectional LSTM to get the hidden states for each time step $i \in [1, \ldots, n]$,

$$\mathbf{h}_i^{\mathrm{fwd}} = \mathrm{LSTM}(\mathbf{x}_i, \mathbf{h}_{i-1}^{\mathrm{fwd}}) \qquad \mathbf{h}_i^{\mathrm{bwd}} = \mathrm{LSTM}(\mathbf{x}_i, \mathbf{h}_{i+1}^{\mathrm{bwd}}) \qquad \mathbf{h}_i = [\mathbf{h}_i^{\mathrm{fwd}}; \mathbf{h}_i^{\mathrm{bwd}}]$$

where the forward and backward LSTMs have their own parameters. The score for $x_i \rightarrow x_j$ (i.e. $x_i$ is the parent of $x_j$) is given by an MLP,

$$\theta_{ij} = \tanh(\mathbf{s}^\top \tanh(\mathbf{W}_1 \mathbf{h}_i + \mathbf{W}_2 \mathbf{h}_j + \mathbf{b}))$$

These scores are used as input to the inside-outside algorithm (see Appendix B) to obtain the probability of each word's parent $p(z_{ij} = 1 \mid x)$, which is used to obtain the soft-parent $\mathbf{c}_j$ for each word $x_j$. In the non-structured case we simply have $p(z_{ij} = 1 \mid x) = \mathrm{softmax}(\theta_{ij})$.

A.2 TREE TRANSDUCTION

Let $[x_1, \ldots, x_n]$, $[y_1, \ldots, y_m]$ be the sequences of source/target symbols, with the associated embeddings $[\mathbf{x}_1, \ldots, \mathbf{x}_n]$, $[\mathbf{y}_1, \ldots, \mathbf{y}_m]$ where $\mathbf{x}_i, \mathbf{y}_j \in \mathbb{R}^l$. In the simplest baseline model we take the source representation to be the matrix of the symbol embeddings. The decoder is a one-layer LSTM which produces the hidden states $\mathbf{h}'_j = \mathrm{LSTM}(\mathbf{y}_j, \mathbf{h}'_{j-1})$, with $\mathbf{h}'_j \in \mathbb{R}^l$. The hidden states are combined with the input representation via a bilinear map $\mathbf{W} \in \mathbb{R}^{l \times l}$ to produce the attention distribution used to obtain the context vector $\mathbf{m}_j$, which is combined with the decoder hidden state as follows,

$$\alpha_i = \frac{\exp(\mathbf{x}_i \mathbf{W} \mathbf{h}'_j)}{\sum_{k=1}^{n} \exp(\mathbf{x}_k \mathbf{W} \mathbf{h}'_j)} \qquad \mathbf{m}_j = \sum_{i=1}^{n} \alpha_i \mathbf{x}_i \qquad \hat{\mathbf{h}}_j = \tanh(\mathbf{U}[\mathbf{m}_j; \mathbf{h}'_j])$$

Here we have $\mathbf{W} \in \mathbb{R}^{l \times l}$ and $\mathbf{U} \in \mathbb{R}^{2l \times l}$. Finally, $\hat{\mathbf{h}}_j$ is used to obtain a distribution over the next symbol $y_{j+1}$,

$$p(y_{j+1} \mid x_1, \ldots, x_n, y_1, \ldots, y_j) = \mathrm{softmax}(\mathbf{V}\hat{\mathbf{h}}_j + \mathbf{b})$$

For the structured/simple models, the $i$-th source representations are respectively

$$\hat{\mathbf{x}}_i = \Big[\mathbf{x}_i \,;\; \sum_{k=1}^{n} p(z_{ki} = 1 \mid x)\,\mathbf{x}_k\Big] \qquad \hat{\mathbf{x}}_i = \Big[\mathbf{x}_i \,;\; \sum_{k=1}^{n} \mathrm{softmax}(\theta_{ki})\,\mathbf{x}_k\Big]$$

where $\theta_{ij}$ comes from the bidirectional LSTM described in A.1. Then $\alpha_i$ and $\mathbf{m}_j$ change accordingly,

$$\alpha_i = \frac{\exp(\hat{\mathbf{x}}_i \mathbf{W} \mathbf{h}'_j)}{\sum_{k=1}^{n} \exp(\hat{\mathbf{x}}_k \mathbf{W} \mathbf{h}'_j)} \qquad \mathbf{m}_j = \sum_{i=1}^{n} \alpha_i \hat{\mathbf{x}}_i$$

Note that in this case we have $\mathbf{W} \in \mathbb{R}^{2l \times l}$ and $\mathbf{U} \in \mathbb{R}^{3l \times l}$. We use $l = 50$ in all our experiments. The forward/backward LSTMs for the parsing LSTM are also 50-dimensional. Symbol embeddings are shared between the encoder and the parsing LSTMs.

Additional training details include: batch size of 20; training for 13 epochs with a learning rate of 1.0, which starts decaying by half after epoch 9 (or the epoch at which performance does not improve on validation, whichever comes first); parameter initialization over a uniform distribution $U[-0.1, 0.1]$; gradient normalization at 1 (i.e. renormalize the gradients to have norm 1 if the $\ell_2$ norm exceeds 1). Decoding is done with beam search (beam size = 5).
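For concreteness, the following is a minimal PyTorch-style sketch of the A.1 scoring layer in its non-structured (softmax) variant; the structured variant would instead pass the scores $\theta$ to the inside-outside routine of Appendix B. The module layout and names (`SyntacticAttention`, `W1`, `W2`, `s`) are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class SyntacticAttention(nn.Module):
    """Sketch of the arc-scoring layer of A.1 (simple softmax variant); names are assumptions."""
    def __init__(self, emb_dim=50, hid_dim=50):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.W1 = nn.Linear(2 * hid_dim, hid_dim, bias=False)
        self.W2 = nn.Linear(2 * hid_dim, hid_dim, bias=True)   # bias plays the role of b
        self.s = nn.Linear(hid_dim, 1, bias=False)

    def forward(self, x):                      # x: (batch, n, emb_dim) word vectors
        h, _ = self.bilstm(x)                  # (batch, n, 2*hid_dim) biLSTM states h_i
        # theta[b, i, j] = tanh(s^T tanh(W1 h_i + W2 h_j + b)): score for the arc i -> j
        left = self.W1(h).unsqueeze(2)         # (batch, n, 1, hid_dim)
        right = self.W2(h).unsqueeze(1)        # (batch, 1, n, hid_dim)
        theta = torch.tanh(self.s(torch.tanh(left + right))).squeeze(-1)  # (batch, n, n)
        # Non-structured case: p(z_ij = 1 | x) = softmax over candidate parents i.
        p_parent = torch.softmax(theta, dim=1)
        # Soft parent of word j: sum_i p(z_ij = 1 | x) x_i  (concatenated with x_j in A.2)
        soft_parent = torch.einsum('bij,bid->bjd', p_parent, x)
        return theta, p_parent, soft_parent
```

Similarly, one decoder attention step of A.2 (structured variant, with $\hat{\mathbf{x}}_i \in \mathbb{R}^{2l}$) can be sketched as a single function; the shapes follow the equations above, but the function itself is only an illustrative assumption.

```python
import torch

def tree_decoder_attention(x_hat, h_dec, W, U, V, b_out):
    """One decoder attention step from A.2 (a sketch with assumed argument names).

    x_hat : (n, 2l)  source representations [x_i ; soft-parent of x_i]
    h_dec : (l,)     decoder hidden state h'_j
    W     : (2l, l)  bilinear attention map      U     : (3l, l)  output projection
    V     : (l, |Y|) output vocabulary matrix    b_out : (|Y|,)   output bias
    """
    scores = x_hat @ W @ h_dec                        # x̂_i W h'_j for each source position i
    alpha = torch.softmax(scores, dim=0)              # attention distribution over source symbols
    m = alpha @ x_hat                                 # context vector m_j = sum_i alpha_i x̂_i
    h_hat = torch.tanh(torch.cat([m, h_dec]) @ U)     # ĥ_j = tanh(U [m_j ; h'_j])
    return torch.softmax(h_hat @ V + b_out, dim=0)    # p(y_{j+1} | x_{1:n}, y_{1:j})
```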

A.3 NEURAL MACHINE TRANSLATION

The baseline NMT system is from Luong et al. (2015). Let $[x_1, \ldots, x_n]$, $[y_1, \ldots, y_m]$ be the source/target sentence, with the associated word embeddings $[\mathbf{x}_1, \ldots, \mathbf{x}_n]$, $[\mathbf{y}_1, \ldots, \mathbf{y}_m]$. The encoder is an LSTM over the source sentence, which produces the hidden states $[\mathbf{h}_1, \ldots, \mathbf{h}_n]$ where $\mathbf{h}_i = \mathrm{LSTM}(\mathbf{x}_i, \mathbf{h}_{i-1})$ and $\mathbf{h}_i \in \mathbb{R}^l$. The decoder is another LSTM which produces the hidden states $\mathbf{h}'_j \in \mathbb{R}^l$. In the simple attention case with categorical attention, the hidden states are combined with the input representation via a bilinear map $\mathbf{W} \in \mathbb{R}^{l \times l}$, and the resulting distribution is used to obtain the context vector at the $j$-th time step,

$$\theta_i = \mathbf{h}_i \mathbf{W} \mathbf{h}'_j \qquad \mathbf{c}_j = \sum_{i=1}^{n} \mathrm{softmax}(\theta_i)\, \mathbf{h}_i$$

The Bernoulli attention network has the same $\theta_i$ but instead uses a sigmoid to obtain the weights of the linear combination, i.e.,

$$\mathbf{c}_j = \sum_{i=1}^{n} \mathrm{sigmoid}(\theta_i)\, \mathbf{h}_i$$

And finally, the structured attention model uses a bilinear map to parameterize one of the unary potentials,

$$\theta_i(k) = \begin{cases} \mathbf{h}_i \mathbf{W} \mathbf{h}'_j, & k = 1 \\ 0, & k = 0 \end{cases} \qquad \theta_{i,i+1}(z_i, z_{i+1}) = \theta_i(z_i) + \theta_{i+1}(z_{i+1}) + \mathbf{b}_{z_i, z_{i+1}}$$

where $\mathbf{b}$ are the pairwise potentials. These potentials are used as inputs to the forward-backward algorithm to obtain the marginals $p(z_i = 1 \mid x, q)$, which are further normalized to obtain the context vector

$$\mathbf{c}_j = \sum_{i=1}^{n} p(z_i = 1 \mid x, q)\, \frac{\mathbf{h}_i}{\gamma} \qquad \gamma = \frac{1}{\lambda} \sum_{i=1}^{n} p(z_i = 1 \mid x, q)$$

We use $\lambda = 2$ and also add an $\ell_2$ penalty of 0.005 on the pairwise potentials $\mathbf{b}$. The context vector is then combined with the decoder hidden state,

$$\hat{\mathbf{h}}_j = \tanh(\mathbf{U}[\mathbf{c}_j; \mathbf{h}'_j])$$

and $\hat{\mathbf{h}}_j$ is used to obtain the distribution over the next target word $y_{j+1}$,

$$p(y_{j+1} \mid x_1, \ldots, x_n, y_1, \ldots, y_j) = \mathrm{softmax}(\mathbf{V}\hat{\mathbf{h}}_j + \mathbf{b})$$

The encoder/decoder LSTMs have 2 layers and 500 hidden units (i.e. $l = 500$). Additional training details include: batch size of 128; training for 30 epochs with a learning rate of 1.0, which starts decaying by half after the first epoch at which performance does not improve on validation; dropout with probability 0.3; parameter initialization over a uniform distribution $U[-0.1, 0.1]$; gradient normalization at 1. We generate target translations with beam search (beam size = 5), and evaluate with multi-bleu.perl from Moses (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl).

A.4 QUESTION ANSWERING

Our baseline model (MemN2N) is implemented following the same architecture as described in Sukhbaatar et al. (2015). In particular, let $x = [x_1, \ldots, x_n]$ represent the sequence of $n$ facts with the associated embeddings $[\mathbf{x}_1, \ldots, \mathbf{x}_n]$ and let $\mathbf{q}$ be the embedding of the query $q$. The embeddings
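For reference, the forward-backward step behind the structured attention of A.3 can be sketched in log space as follows, using the clique potentials $\theta_{i,i+1}(z_i, z_{i+1}) = \theta_i(z_i) + \theta_{i+1}(z_{i+1}) + \mathbf{b}_{z_i, z_{i+1}}$ defined above. The function and variable names are assumptions made for illustration, not the paper's implementation.

```python
import torch

def structured_attention_marginals(theta, b, lam=2.0):
    """Log-space forward-backward for the binary linear-chain model of A.3 (illustrative sketch).

    theta : (n,) tensor of unary scores theta_i(1) = h_i W h'_j, with theta_i(0) = 0
    b     : (2, 2) tensor of pairwise potentials b_{z_i, z_{i+1}}
    Returns p(z_i = 1 | x, q) for each i and gamma = (1 / lam) * sum_i p(z_i = 1 | x, q).
    """
    n = theta.shape[0]
    unary = torch.stack([torch.zeros(n), theta], dim=1)              # (n, 2) scores for z_i in {0, 1}
    # Clique potentials theta_{i,i+1}(z_i, z_{i+1}) = theta_i(z_i) + theta_{i+1}(z_{i+1}) + b
    edge = unary[:-1, :, None] + unary[1:, None, :] + b              # (n-1, 2, 2)

    alpha = [torch.zeros(2)]                                         # forward messages
    for i in range(1, n):
        alpha.append(torch.logsumexp(alpha[-1][:, None] + edge[i - 1], dim=0))
    beta = [torch.zeros(2)]                                          # backward messages
    for i in range(n - 2, -1, -1):
        beta.insert(0, torch.logsumexp(edge[i] + beta[0][None, :], dim=1))

    alpha, beta = torch.stack(alpha), torch.stack(beta)              # (n, 2) each
    log_Z = torch.logsumexp(alpha[-1], dim=0)                        # log partition function
    p = torch.exp(alpha[:, 1] + beta[:, 1] - log_Z)                  # marginals p(z_i = 1 | x, q)
    gamma = p.sum() / lam
    return p, gamma
```

The context vector at decoder step $j$ would then be obtained as `(p / gamma) @ H` for a matrix `H` of encoder hidden states, matching $\mathbf{c}_j = \sum_i p(z_i = 1 \mid x, q)\, \mathbf{h}_i / \gamma$ above.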
