
SciPy Reference Guide, Release 0.8.dev

conditionalmodel.dual([params, ignorepenalty])
    The entropy dual function is defined for conditional models.
conditionalmodel.expectations()
    The vector of expectations of the features with respect to the fitted distribution.
conditionalmodel.fit([algorithm])
    Fits the conditional maximum entropy model subject to the constraints.
conditionalmodel.lognormconst()
    Compute the elementwise log of the normalization constant (partition function) Z(w).
conditionalmodel.logpmf()
    Returns a (sparse) row vector of logarithms of the conditional probability mass function (pmf) values p(x | c) for all pairs (c, x), where c are contexts and x are points in the sample space.

dual(params=None, ignorepenalty=False)

The entropy dual function is defined for conditional models as

    L(theta) = sum_w q(w) log Z(w; theta) - sum_{w,x} q(w,x) [theta . f(w,x)]

or equivalently as

    L(theta) = sum_w q(w) log Z(w; theta) - (theta . k)

where k_i = sum_{w,x} q(w,x) f_i(w,x), and where q(w) is the empirical probability mass function derived from observations of the context w in a training set. Normally q(w,x) will be 1, unless the same class label is assigned to the same context more than once.

Note that both sums are only over the training set {w,x}, not the entire sample space, since q(w,x) = 0 for all w,x not in the training set.

The entropy dual function is proportional to the negative log likelihood.

Compare to the entropy dual of an unconditional model:

    L(theta) = log(Z) - theta^T . K
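
As an illustration of this formula, here is a minimal NumPy sketch (not the scipy.maxentropy implementation); the feature array f, parameters theta, and training pairs are all invented for the example:

    import numpy as np

    # Hypothetical toy problem: 2 contexts w, 3 sample points x, 2 features.
    # f[i, w, x] holds the value of feature i at the pair (w, x).
    f = np.array([[[1., 0., 1.],
                   [0., 1., 0.]],
                  [[0., 1., 0.],
                   [1., 0., 1.]]])
    theta = np.array([0.5, -0.2])

    # Empirical pmfs q(w, x) and q(w) from a made-up training set of (w, x) pairs.
    pairs = [(0, 0), (0, 1), (1, 1)]
    q_wx = np.zeros(f.shape[1:])
    for w, x in pairs:
        q_wx[w, x] += 1.0 / len(pairs)
    q_w = q_wx.sum(axis=1)

    # theta . f(w, x) for every pair, and log Z(w; theta) per context
    scores = np.einsum('i,iwx->wx', theta, f)
    logZ = np.log(np.exp(scores).sum(axis=1))

    # L(theta) = sum_w q(w) log Z(w; theta) - sum_{w,x} q(w,x) [theta . f(w,x)]
    dual = (q_w * logZ).sum() - (q_wx * scores).sum()
    print(dual)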

expectations()

The vector of expectations of the features with respect to the distribution p_tilde(w) p(x | w), where p_tilde(w) is the empirical probability mass function value stored as self.p_tilde_context[w].
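
A hedged sketch of the same computation with plain NumPy (the arrays are the invented toy example from the dual() sketch above, not the class's internal representation):

    import numpy as np

    # Toy arrays as in the dual() sketch: f[i, w, x], theta, and the
    # empirical context pmf p_tilde(w).
    f = np.array([[[1., 0., 1.], [0., 1., 0.]],
                  [[0., 1., 0.], [1., 0., 1.]]])
    theta = np.array([0.5, -0.2])
    p_tilde_w = np.array([2.0 / 3.0, 1.0 / 3.0])

    # Conditional model p(x | w) = exp(theta . f(w, x)) / Z(w)
    scores = np.einsum('i,iwx->wx', theta, f)
    p = np.exp(scores)
    p /= p.sum(axis=1, keepdims=True)

    # E[f_i] = sum_{w,x} p_tilde(w) p(x | w) f_i(w, x), one entry per feature
    expectations = np.einsum('w,wx,iwx->i', p_tilde_w, p, f)
    print(expectations)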

fit(algorithm='CG')

Fits the conditional maximum entropy model subject to the constraints

    sum_{w,x} p_tilde(w) p(x | w) f_i(w, x) = k_i    for i = 1, ..., m,

where k_i is the empirical expectation

    k_i = sum_{w,x} p_tilde(w,x) f_i(w,x).
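
The actual fit() delegates to scipy's optimizers (CG by default). As a rough from-scratch sketch of the same optimality condition, plain gradient descent on the dual drives the model expectations toward the empirical k_i; all arrays below are the invented toy example from above:

    import numpy as np

    # Invented toy features f[i, w, x] and training pairs (w, x)
    f = np.array([[[1., 0., 1.], [0., 1., 0.]],
                  [[0., 1., 0.], [1., 0., 1.]]])
    pairs = [(0, 0), (0, 1), (1, 1)]
    q_wx = np.zeros(f.shape[1:])
    for w, x in pairs:
        q_wx[w, x] += 1.0 / len(pairs)
    p_tilde_w = q_wx.sum(axis=1)

    # Empirical expectations k_i = sum_{w,x} p_tilde(w,x) f_i(w,x)
    k = np.einsum('wx,iwx->i', q_wx, f)

    theta = np.zeros(f.shape[0])
    for _ in range(2000):
        scores = np.einsum('i,iwx->wx', theta, f)
        p = np.exp(scores)
        p /= p.sum(axis=1, keepdims=True)          # p(x | w)
        # Gradient of the dual: model expectations minus empirical k
        grad = np.einsum('w,wx,iwx->i', p_tilde_w, p, f) - k
        theta -= 0.5 * grad

    # At the optimum the constraints hold (up to numerical tolerance)
    print(np.einsum('w,wx,iwx->i', p_tilde_w, p, f), k)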

lognormconst()

Compute the elementwise log of the normalization constant (partition function)

    Z(w) = sum_{y in Y(w)} exp(theta . f(w, y)).

The sample space must be discrete and finite. This is a vector with one element for each context w.
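
A minimal sketch of this computation, with the numerically stable log-sum-exp written out by hand (f and theta are the invented toy arrays from above):

    import numpy as np

    f = np.array([[[1., 0., 1.], [0., 1., 0.]],
                  [[0., 1., 0.], [1., 0., 1.]]])   # f[i, w, x]
    theta = np.array([0.5, -0.2])

    # log Z(w) = log sum_x exp(theta . f(w, x)), computed stably by
    # subtracting the per-context maximum before exponentiating
    scores = np.einsum('i,iwx->wx', theta, f)
    m = scores.max(axis=1, keepdims=True)
    logZ = (m + np.log(np.exp(scores - m).sum(axis=1, keepdims=True))).ravel()
    print(logZ)                                    # one element per context w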

logpmf()

Returns a (sparse) row vector of logarithms of the conditional probability mass function (pmf) values p(x | c) for all pairs (c, x), where c are contexts and x are points in the sample space. The order of these is log p(x | c) = logpmf()[c * numsamplepoints + x].
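
To make the flattened ordering concrete, a small sketch (same invented toy arrays as above; the real method returns the whole vector in one shot):

    import numpy as np

    f = np.array([[[1., 0., 1.], [0., 1., 0.]],
                  [[0., 1., 0.], [1., 0., 1.]]])   # f[i, w, x]
    theta = np.array([0.5, -0.2])
    numcontexts, numsamplepoints = f.shape[1:]

    scores = np.einsum('i,iwx->wx', theta, f)
    logZ = np.log(np.exp(scores).sum(axis=1, keepdims=True))
    logpmf = (scores - logZ).ravel()   # flattened row vector of log p(x | c)

    # log p(x | c) lives at index c * numsamplepoints + x
    c, x = 1, 2
    print(logpmf[c * numsamplepoints + x] == (scores - logZ)[c, x])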

