Thursday, July 31, 2008

Degree of synchrony and neural assembly

Can the degree of synchrony within an assembly be used as an analog value for the variable it represents, instead of the assembly being merely on or off?
The problem is that when a neuron is shared by multiple neural assemblies, different degrees of synchrony might be confused across different variables. The total number of effective variables (ensembles) would therefore decrease. By how much? Can we quantify this with entropy?

Posted via BlackBerry while eating breakfast.

Wednesday, July 30, 2008

Independent but late finding!

In one of my papers
Il Park, António R. C. Paiva, José Príncipe, Thomas B. DeMarse. An Efficient Algorithm for Continuous-time Cross Correlogram of Spike Trains, Journal of Neuroscience Methods, Volume 168, Issue 2, 15 March 2008
I invented a fast computation of kernel density estimation using the Laplacian distribution kernel. After defeating me on the pool table, my bro and colleague Sohan Seth was reading my other blog and pointed out that he had seen a paper with the same trick! Indeed, 2 years before my publication, there was a conference paper:
Aiyou Chen, Fast Kernel Density Independent Component Analysis. ICA 2006: 24-31
And the conference was organized by my advisor! I swear that I didn't take that idea!!!!
It was an independent finding. It was not a complicated idea anyway, but still...
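The trick, for the record: with a Laplacian kernel the exponential factorizes across the absolute value, so after sorting the samples two cumulative sums replace the O(N·M) pairwise evaluation. A minimal NumPy sketch (the function name and interface are mine, not from either paper):

```python
import numpy as np

def laplacian_kde(eval_points, samples, sigma):
    # Laplacian kernel: k(u) = exp(-|u|/sigma) / (2*sigma).
    # exp(-|x - xi|/sigma) splits into exp(-x/sigma)*exp(xi/sigma) for xi <= x
    # and exp(x/sigma)*exp(-xi/sigma) for xi > x, so prefix/suffix sums over
    # the sorted samples give all evaluations in O((N+M) log N).
    x = np.sort(np.asarray(samples, dtype=float))
    pts = np.asarray(eval_points, dtype=float)
    left = np.concatenate(([0.0], np.cumsum(np.exp(x / sigma))))
    right = np.concatenate((np.cumsum(np.exp(-x / sigma)[::-1])[::-1], [0.0]))
    idx = np.searchsorted(x, pts, side="right")  # samples <= each eval point
    s = np.exp(-pts / sigma) * left[idx] + np.exp(pts / sigma) * right[idx]
    return s / (2.0 * sigma * len(x))
```

Note the exp(±x/sigma) factors can overflow for |x| far from the data, so in practice the sums should be kept in a shifted or log domain; the sketch ignores that.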

Tuesday, July 29, 2008

Superposition Catastrophe

Imagine a group of neurons, a neural assembly or ensemble, representing a feature. Suppose we have 4 such assemblies, denoted A, B, C, and D. Now suppose a complicated situation requires all 4 of them to be activated, such as recognizing a blue (represented by A) monster (B) and a red (C) lobster (D). If each feature is represented merely by activating the corresponding ensemble, the representation cannot be distinguished from that of a "red monster and blue lobster". This is the so-called superposition catastrophe, and it is a big part of the binding problem. (von der Malsburg)

If each combination had to be represented by a single assembly, we would need an astronomical number of assemblies, one for each possible combination of features. Since we can certainly distinguish a red monster eating a blue lobster from a blue monster eating a red lobster, something must be wrong with our assumptions:
  • existence of a neural ensemble for each feature
  • temporal inseparability among combinations
Oscillation and/or synchronization have been proposed to solve the binding problem by attacking the second assumption.
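The catastrophe is easy to state in code: if a scene is represented only by the set of currently active assemblies, the two scenes collapse to the same representation (the labels follow the example above):

```python
# Assemblies: A = blue, B = monster, C = red, D = lobster.
# A scene represented only by *which* assemblies are active is a set of labels.
blue_monster = {"A", "B"}
red_lobster = {"C", "D"}
red_monster = {"C", "B"}
blue_lobster = {"A", "D"}

scene1 = blue_monster | red_lobster   # blue monster and red lobster
scene2 = red_monster | blue_lobster   # red monster and blue lobster

print(scene1 == scene2)  # True: both activate {A, B, C, D}; the binding is lost
```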

Choosing a simulator environment

8am
Trying to find a good simulator for my synchrony code and STDP idea. I requested NEST 2.0 via email twice but have not received any reply yet. I have a feeling that I will have to implement my own simulator this time, because I have to manipulate the input in a way that people usually don't.
Plain MATLAB is one easy approach, but speed is a concern, although recent versions of MATLAB show great performance enhancements via the JIT compiler. (I wrote a C-MEX function to optimize for speed, and it was no better than plain for loops in MATLAB!!) The good thing about MATLAB is that I am already very used to it (so I can implement things very fast); the bad thing is that it is not a good programming language.
The other options are C, C++, Python with NumPy, or some other language.
After reading "alternatives to MATLAB" and "Python vs. MATLAB" discussions, I decided to try NumPy; it is faster than JIT-equipped MATLAB in a few examples.

Monday, July 28, 2008

after 5 hours of sleep

7am
Moshe Abeles (2008) Synfire chain, Scholarpedia
A synfire chain is a feed-forward layered network that propagates synchronous firing patterns. It supports stable transmission of activity. The article concentrates on the state memory that a chain can represent, but not much on computation. Different states are encoded by different chains.
In terms of computation, the article mainly presents the synfire chain as a solution to the binding problem. It also mentions the role of STDP in time-locking two activated synfire chains.
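A toy version of the stable propagation is easy to sketch: a synchronous packet survives a layer only if enough coincident inputs arrive at each neuron. All numbers below are illustrative, not from the article:

```python
import numpy as np

rng = np.random.default_rng(1)
n_layers, width, threshold = 10, 100, 50
p_connect = 0.7  # illustrative connection density between consecutive layers

# Random feed-forward connectivity: W[l][post, pre] links layer l to layer l+1.
W = rng.random((n_layers - 1, width, width)) < p_connect

active = np.ones(width, dtype=bool)  # a full synchronous packet enters layer 0
sizes = [int(active.sum())]
for l in range(n_layers - 1):
    inputs = W[l][:, active].sum(axis=1)   # synchronous inputs per neuron
    active = inputs >= threshold           # fire only on enough coincidences
    sizes.append(int(active.sum()))
# With dense enough connectivity the packet propagates stably down the chain;
# below a critical size it dies out within a few layers.
```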

8:30am
What input would best demonstrate a learning and computation based on synchrony coding?

Saturday, July 26, 2008

Story of excitatory synapses to STDP

I am a tiny synapse, the excitatory kind. I have a simple life, like a traffic light - I propagate the action potential from the presynaptic neuron to the postsynaptic neuron, only one way.
Unlike other pathetic sentient beings in this universe, my primary goal is simple: transfer as much information as possible. Through a long time of evolution, I've been thinking about how to do this in the most efficient way. I have limited knowledge at any given time. I can sense the membrane potential of the (postsynaptic) neuron, and remember for a while that there was an action potential either presynaptically or postsynaptically. I can change the amount of excitation that I bring to the (postsynaptic) neuron, which in turn increases its probability of firing. I also know that there are many other synapses like me, but I cannot communicate with them.

So I thought that increasing the amount by which I excite the neuron per presynaptic action potential would be the only way to increase the information transfer. Any information transfer has to be supported by a physical link, and the link grows stronger as I increase my action.
This strategy worked sometimes, but often it didn't. Due to the dynamics of the neuron, other action potentials that I didn't cause sometimes discouraged the neuron from firing in response to my contribution. Still, I definitely increased the probability of the neuron firing, so I was happy.

One day it just struck me. I realized that I was asking the wrong question. The question I should have asked is how to transfer maximal information from all synapses through the neuron, not just from me. That means I'll have to communicate with other synapses and find out how to work together. There is only one piece of information that all the synapses share: the timing of the postsynaptic action potentials. The membrane potential is quite local within the dendritic structure and highly noisy. I have to rely on the output train of action potentials to figure out how I can cooperate with other synapses.

Imagine a network of a thousand roads and millions of cars trying to get from point A to point B. If there were no coordination of traffic lights, and each road tried to maximize the number of cars that go through it, all the intersections would be clogged by interference among incoming traffic. Of course, information is nothing like cars; it's an analogy! If it were, a car could clone itself into two cars, transform into different cars, or just disappear without a clue. Anyway, the point is that if somebody else is sending some information, I should not send mine at that moment. But, unlike with cars, if somebody is sending information through this neuron and I have very similar information, I should send mine too, to reinforce the transfer. That way we collaborate to reduce the variability of the output spike train.

Yes, that's what I should do. I should find a group of other synapses that share some input features and collaborate. If I cannot find one, I should keep a low profile and not disturb the others, but still stay open to future opportunities for collaboration. How will I find a group by only observing the output spike train? I should check whether I seem to be strongly causing the output. If I am, that means there is a group of synapses that are also causing the output spike train, because I am not strong enough to cause it by myself. So I should also participate strongly in the information transfer process.

How would I know that I am strongly causing the output? The feature I am looking for should be in the spike trains and cannot be too spread out in time, because of my short memory. In the simplest case, I would just look at the relation between one presynaptic action potential and one postsynaptic action potential within a small time window. If the presynaptic one occurs first, then I am somewhat causing the output; in the opposite case, I might just be disturbing. Therefore, if I use spike-timing dependent plasticity (STDP), this instantaneous causality would accumulate over time, and I would end up collaborating with similar synapses, or not.
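The pairwise rule described above can be written down directly. The amplitudes and time constant here are generic textbook values, not fitted to anything in this post:

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for one pre/post spike pair.

    dt_ms = t_post - t_pre. Pre-before-post (dt >= 0) means "I helped cause
    the output" and potentiates; post-before-pre depresses. The exponential
    window implements the synapse's short memory of recent spikes.
    """
    if dt_ms >= 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)
```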



  • One big advantage of point processes over regular random processes is that causality is easy to detect.
  • There could be other measures of causality that extend to more than a single input/output pair.
  • Papers in the literature do not address spatial patterns through which the STDP synapses could collaborate.

Thursday, July 24, 2008

Bad milk breakfast day

7am
Distributed synchrony in a cell assembly of spiking neurons
Nir Levy, David Horn, Isaac Meilijson, Eytan Ruppin
Neural Networks, Vol. 14, No. 6-7. (9 July 2001), pp. 815-824.

Experiment:
Step 1. Feed a strong common input to a population of spiking neurons (both excitatory and inhibitory)
Step 2. Let STDP do their job for excitatory-excitatory synapses
Step 3. Stop the input and observe the sustained activity

Observation:
  • Oscillation of synchronous firing is observed
  • The neurons formed subgroups that fired in sequence
  • The frequency of oscillation was simply dependent on the synaptic delay and number of subgroups
  • Over time the group formation can slightly change
They call this oscillation of synchronously firing subgroups the "distributed synchrony" state.
This is evidence that STDP can be used to form synchronously firing neural ensembles.
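The third observation has a simple back-of-the-envelope form (my reading of the result, not an equation from the paper): with n subgroups firing in sequence, one synaptic delay apart, one full cycle takes n delays:

```python
def distributed_synchrony_freq_hz(n_subgroups, synaptic_delay_ms):
    # One full cycle = each subgroup fires once, one synaptic delay apart.
    period_ms = n_subgroups * synaptic_delay_ms
    return 1000.0 / period_ms

print(distributed_synchrony_freq_hz(5, 2.0))  # 100.0 (Hz)
```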

9am
Time independent Schrödinger equation and information potential
Aim: a potential field that would yield the Parzen-estimated pdf as one of the solutions of a finite-energy time-independent Schrödinger equation. The solution (wave function) has the physical meaning of a probability amplitude function (complex-valued) of a particle, thereby linking the probability and the potential.
Dr. P: The potential might act as a regularizer
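Spelling out the aim (my reconstruction of the setup, with ψ the wave function and p̂ the Parzen estimate):

```latex
% Time-independent Schrödinger equation:
-\frac{\hbar^2}{2m}\,\psi''(x) + V(x)\,\psi(x) = E\,\psi(x)
% Demand that the (real, nonnegative) wave function carry the Parzen pdf:
\psi(x) = \sqrt{\hat{p}(x)}, \qquad |\psi(x)|^2 = \hat{p}(x)
% Inverting for the potential that makes this hold:
V(x) = E + \frac{\hbar^2}{2m}\,\frac{\psi''(x)}{\psi(x)}
```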

10am
Discussion with Dr. D
Memming: STDP can do some computation with a synchrony code. Suppose you have two ensembles, A and B. I can train a third ensemble C with STDP to fire only when both A and B fire.
Dr. D: Why not try the idea of association? Eliminate C and just make A and B fire even when only A fires.
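Memming's proposal in miniature: after training, C should behave as a coincidence detector on A and B. The weights and threshold below are illustrative stand-ins for what STDP would have to learn:

```python
def c_fires(a_fires, b_fires, w_a=0.6, w_b=0.6, threshold=1.0):
    # Each weight alone is subthreshold; only synchronous A and B drive C.
    return w_a * a_fires + w_b * b_fires >= threshold

print(c_fires(1, 1), c_fires(1, 0), c_fires(0, 1))  # True False False
```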