covariant–inhibitory random differential Hebbian innovation functions with tensorial Gaussian noise $\sigma$ (in both variances); $f$'s and $\dot f$'s denote sigmoid activation functions ($f = \tanh(\cdot)$) and corresponding signal velocities ($\dot f = 1 - f^2$), respectively, in both variances; $A^i = A^i(t)$ and $B_i = B_i(t)$ are contravariant–excitatory and covariant–inhibitory neural inputs to the corresponding cortical cells, respectively.

Nonlinear activation $(x,y)$–dynamics (4.111–4.112) describes a two–phase biological neural oscillator field, in which the excitatory neural field excites the inhibitory neural field, which in turn reciprocally inhibits the excitatory one. The $(x,y)$–dynamics represents a nonlinear extension of a linear, Lyapunov–stable, conservative, gradient system, defined in local neural coordinates $x^i, y_i \in V_y$ on $T^*M$ as
$$\dot x^i = -\frac{\partial \Phi}{\partial y_i} = \omega^{ij} y_j - x^i, \qquad \dot y_i = -\frac{\partial \Phi}{\partial x^i} = \omega_{ij} x^j - y_i. \tag{4.117}$$
The gradient system (4.117) is derived from the scalar, neuro–synaptic action potential $\Phi : T^*M \to \mathbb{R}$, given by a negative, smooth bilinear form in $x^i, y_i \in V_y$ on $T^*M$ as
$$-2\Phi = \omega_{ij}\, x^i x^j + \omega^{ij}\, y_i y_j - 2\, x^i y_i, \qquad (i, j = 1, \ldots, N), \tag{4.118}$$
which itself represents a $\Psi$–image of the Riemannian metrics $g : TM \to \mathbb{R}$ on the configuration manifold $M^N$.

The nonlinear oscillatory activation $(x,y)$–dynamics (4.111–4.112) is obtained from the linear conservative dynamics (4.117) by adding configuration–dependent inputs $A^i$ and $B_i$, as well as sigmoid activation functions $f_j$ and $f^j$, respectively. It represents an interconnected pair of excitatory and inhibitory neural fields.

Both variant forms of the learning $(\omega)$–dynamics (4.113–4.114) are given by a generalized unsupervised (self–organizing) Hebbian learning scheme (see [Kosko (1992)]), in which $\dot\omega_{ij}$ (resp. $\dot\omega^{ij}$) denotes the new update value, $-\omega_{ij}$ (resp. $-\omega^{ij}$) corresponds to the old value, and $I_{ij}(x^i, y^j)$ (resp. $I^{ij}(x_i, y_j)$) is the innovation function of the symmetric second–order synaptic tensor–field $\omega$. The nonlinear innovation functions $I_{ij}$ and $I^{ij}$ are defined by the random differential Hebbian learning process (4.115–4.116). As $\omega$ is a symmetric and zero–trace coupling synaptic tensor, the conservative linear activation dynamics (4.117) is equivalent to the rule that 'the state of each neuron (in both neural fields) changes in time if, and only if, the scalar action potential $\Phi$ (4.118) is lowered'. Therefore, the scalar action potential $\Phi$ represents the monotonically decreasing Lyapunov function (such that
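To make the linear conservative dynamics concrete, here is a minimal numerical sketch (not from the book), assuming flat coordinates so that both variant forms of the synaptic tensor collapse into a single symmetric, zero–trace matrix `W`; the integrator, step size, and all variable names are illustrative choices. It integrates (4.117) by forward Euler and prints the action potential $\Phi$ of (4.118) along the trajectory, so the claimed Lyapunov behaviour can be inspected numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8

# Symmetric, zero-trace synaptic coupling tensor (flat-coordinate
# stand-in for both omega_{ij} and omega^{ij}).
W = rng.standard_normal((N, N))
W = 0.5 * (W + W.T)
W -= np.eye(N) * (np.trace(W) / N)
# Rescale so the coupled linear system  x' = W y - x,  y' = W x - y
# is stable: its eigenvalues are -1 +/- lambda(W).
W *= 0.9 / np.max(np.abs(np.linalg.eigvalsh(W)))

def phi(x, y):
    """Action potential (4.118):  -2*Phi = x.W.x + y.W.y - 2*x.y."""
    return -0.5 * (x @ W @ x + y @ W @ y - 2.0 * x @ y)

# Forward-Euler integration of the gradient system (4.117).
x = rng.standard_normal(N)
y = rng.standard_normal(N)
dt = 0.01
for step in range(1001):
    if step % 250 == 0:
        print(f"step {step:4d}   Phi = {phi(x, y):+.6f}")
    x, y = x + dt * (W @ y - x), y + dt * (W @ x - y)
```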

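The learning $(\omega)$–dynamics can be sketched in the same spirit. Since the innovation functions (4.115–4.116) are not reproduced on this page, the sketch below substitutes a hypothetical differential Hebbian innovation, the signal product $f(x)f(y)$ plus the signal–velocity product $\dot f(x)\dot f(y)$ with additive Gaussian noise of strength $\sigma$, in the style of [Kosko (1992)]; the activities `x`, `y` are random placeholders rather than the oscillator field (4.111–4.112) itself:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8

def f(u):
    """Sigmoid activation from the text: f = tanh(.)."""
    return np.tanh(u)

def f_dot(u):
    """Signal velocity from the text: f' = 1 - f^2."""
    return 1.0 - np.tanh(u) ** 2

def innovation(x, y, sigma=0.05):
    """Hypothetical random differential Hebbian innovation I_ij:
    signal product + signal-velocity product + Gaussian noise.
    (Stand-in for (4.115-4.116), not shown on this page.)"""
    I = np.outer(f(x), f(y)) + np.outer(f_dot(x), f_dot(y))
    I += sigma * rng.standard_normal((len(x), len(y)))
    return 0.5 * (I + I.T)  # keep the synaptic tensor symmetric

# Unsupervised Hebbian update:  omega' = -omega + I_ij(x, y).
omega = np.zeros((N, N))
dt = 0.01
for step in range(500):
    x = rng.standard_normal(N)  # placeholder excitatory activity
    y = rng.standard_normal(N)  # placeholder inhibitory activity
    omega += dt * (-omega + innovation(x, y))

print("||omega|| =", np.linalg.norm(omega))
```

The `-omega` term plays the role of the 'old value' in the update rule quoted above, so each synapse decays exponentially unless the innovation function keeps reinforcing it.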