scipy tutorial - Baustatik-Info-Server

SciPy Reference Guide, Release 0.8.dev

This method minimizes the Lagrangian dual L of the entropy, which is defined for conditional models as

    L(theta) = sum_w q(w) log Z(w; theta) - sum_{w,x} q(w,x) [theta . f(w,x)]

Note that both sums are only over the training set {w, x}, not the entire sample space, since q(w, x) = 0 for all w, x not in the training set.

The partial derivatives of L are:

    dL / dtheta_i = E f_i(X, Y) - K_i

where the expectation is as defined above.
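To make these definitions concrete, here is a minimal NumPy sketch on a hypothetical toy problem (not scipy.maxentropy code): differentiating the definition of L gives dL / dtheta_i = E f_i - K_i, where K_i = sum_{w,x} q(w,x) f_i(w,x) and the expectation is under q(w) p(x | w).

```python
import numpy as np

# Hypothetical toy conditional maxent problem (not scipy.maxentropy itself).
# Contexts w in {0, 1}, sample space x in {0, 1}, one indicator feature
# f_0(w, x) = 1 if x == w else 0.
W, X = 2, 2
f = np.zeros((1, W, X))
for w in range(W):
    f[0, w, w] = 1.0

# Empirical joint distribution q(w, x) from a small training set.
counts = np.array([[3.0, 1.0],    # (w=0, x=0) seen 3 times, (w=0, x=1) once
                   [1.0, 3.0]])
q_wx = counts / counts.sum()
q_w = q_wx.sum(axis=1)                  # empirical context marginal q(w)
K = (f * q_wx).sum(axis=(1, 2))         # K_i = sum_{w,x} q(w,x) f_i(w,x)

def dual_and_grad(theta):
    """Entropy dual L(theta) and its gradient for the toy model."""
    scores = np.einsum('i,iwx->wx', theta, f)     # theta . f(w, x)
    Z = np.exp(scores).sum(axis=1)                # Z(w; theta)
    L = (q_w * np.log(Z)).sum() - (q_wx * scores).sum()
    p_x_given_w = np.exp(scores) / Z[:, None]     # model p(x | w)
    # E f_i = sum_w q(w) sum_x p(x|w) f_i(w, x)
    Ef = np.einsum('w,wx,iwx->i', q_w, p_x_given_w, f)
    return L, Ef - K

theta = np.zeros(1)
L, g = dual_and_grad(theta)
# At theta = 0 the model is uniform, so E f_0 = 0.5 while K_0 = 0.75:
# L = log 2 and the gradient component is -0.25, so minimizing L
# pushes theta_0 upward, toward matching the empirical expectation.
print(L, g)
```
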

Methods

beginlogging(filename[, freq])
    Enable logging params for each fn evaluation to files named 'filename.freq.pickle', 'filename.(2*freq).pickle', ...

clearcache()
    Clears the interim results of computations depending on the parameters.

crossentropy(fx[, log_prior_x, base])
    Returns the cross entropy H(q, p) of the empirical distribution q with respect to the model p.

dual([params, ignorepenalty])
    Computes the entropy dual function, defined for conditional models as above.

endlogging()
    Stop logging param values whenever setparams() is called.

entropydual([params, ignorepenalty, ignoretest])
    Computes the Lagrangian dual L(theta) of the entropy of the model.

expectations()
    The vector of expectations of the features with respect to the model distribution.

fit([algorithm])
    Fits the conditional maximum entropy model subject to the constraints.

grad([params, ignorepenalty])
    Computes or estimates the gradient of the entropy dual.

log(params)
    This method is called every iteration during the optimization process.

lognormconst()
    Compute the elementwise log of the normalization constant.

logparams()
    Saves the model parameters if logging has been enabled.

logpmf()
    Returns a (sparse) row vector of logarithms of the conditional probability mass function (pmf) values p(x | c) for all pairs (c, x), where c are contexts and x are points in the sample space.

normconst()
    Returns the normalization constant, or partition function, for the current model.

pmf()
    Returns an array indexed by integers representing the values of the probability mass function (pmf) at each point in the sample space under the current model (with the current parameter vector self.params).

pmf_function([f])
    Returns the pmf p_theta(x) as a function taking values on the model's sample space.

probdist()
    Returns an array indexed by integers representing the values of the probability mass function (pmf) at each point in the sample space under the current model (with the current parameter vector self.params).

reset([numfeatures])
    Resets the parameters self.params to zero, clearing the cache variables dependent on them.

setcallback([callback, callback_dual, ...])
    Sets callback functions to be called every iteration, every function evaluation, or every gradient evaluation.

setfeaturesandsamplespace(f, samplespace)
    Creates a new matrix self.F of features f of all points in the sample space.

setparams(params)
    Set the parameter vector to params, replacing the existing parameters.

setsmooth(sigma)
    Specifies that the entropy dual and gradient should be computed with a quadratic penalty term on magnitude of the parameters.
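As a rough sketch of what a fit() step amounts to, the sketch below runs plain gradient descent on the entropy dual of the same kind of hypothetical toy problem until the expectation constraints E f_i = K_i hold. This is an illustration only, not the actual scipy.maxentropy solver.

```python
import numpy as np

# Hypothetical stand-in for fitting a conditional maxent model by
# gradient descent on the entropy dual (not scipy.maxentropy code).
f = np.zeros((1, 2, 2))
f[0, 0, 0] = f[0, 1, 1] = 1.0                    # f_0(w, x) = [x == w]
q_wx = np.array([[3.0, 1.0], [1.0, 3.0]]) / 8.0  # empirical q(w, x)
q_w = q_wx.sum(axis=1)
K = (f * q_wx).sum(axis=(1, 2))                  # target feature expectations

theta = np.zeros(1)
for _ in range(500):
    scores = np.einsum('i,iwx->wx', theta, f)
    p = np.exp(scores)
    p /= p.sum(axis=1, keepdims=True)            # model p(x | w)
    Ef = np.einsum('w,wx,iwx->i', q_w, p, f)     # model feature expectations
    theta -= 1.0 * (Ef - K)                      # step down the dual's gradient

# theta converges to log 3 ~ 1.0986, where E f_0 = K_0 = 0.75,
# i.e. the fitted model reproduces the empirical feature expectation.
print(theta, Ef)
```

At convergence the gradient E f_i - K_i vanishes, which is exactly the moment-matching condition that characterizes the maximum entropy solution.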

3.8. Maximum entropy models (scipy.maxentropy)
