1. Kernel size problem
How can I obtain the kernel size from the data? I fix a kernel size in 1D and scale it to all dimensions, so for every dimension I consider the scale of analysis to be equal -- this goes against the principle of PDF estimation. If I wanted the best kernel size for each dimension, I would have to re-estimate it every time. Instead, I am using the kernel size as a scale-of-analysis parameter, the way spike train distance methods do. But again, this contradicts the fact that the kernel size should scale with both the number of samples and the dimension.
I have too little data, spread across too many different spaces, to obtain a nearest-neighbor or variance-based kernel size.
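For reference, a standard variance-based choice is Scott's rule of thumb, which makes the dependence on both sample count and dimension explicit -- and it is exactly the kind of estimate that breaks down with this little data per space. A minimal sketch in Python (the function name is mine, assuming samples arranged as an (n, d) array):

    import numpy as np

    def scotts_bandwidth(x):
        """Scott's rule-of-thumb bandwidth for a Gaussian kernel.

        x: (n, d) array of samples. Returns one bandwidth per dimension;
        the kernel shrinks with the sample count n and widens with the
        dimension d -- the two dependencies mentioned above.
        """
        n, d = x.shape
        return x.std(axis=0, ddof=1) * n ** (-1.0 / (d + 4))

    # e.g. 50 samples in 3 dimensions
    print(scotts_bandwidth(np.random.randn(50, 3)))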
2. Cutting and Stitching long spike trains
For a small number of spike trains, it is advisable to keep the spike count distribution concentrated on small dimensions. But then, if the discriminability does not lie within the chosen interval, this will not work.
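One way to keep the spike counts small is to cut a long recording into fixed-duration windows, short enough that most windows contain only a few spikes. A hedged sketch of the cutting step (the helper and its interface are hypothetical; how the per-segment results are stitched back together is the open question):

    def cut_spike_train(spikes, window):
        """Cut a sorted list of spike times into fixed-duration windows.

        Spike times are re-referenced to their window's start, so each
        segment becomes a low-dimensional point (few spikes per window).
        """
        if not spikes:
            return []
        n_seg = int(spikes[-1] // window) + 1
        segments = [[] for _ in range(n_seg)]
        for t in spikes:
            k = int(t // window)
            segments[k].append(t - k * window)
        return segments

    # e.g. a 3 s train cut into 1 s windows; times come out relative
    # to each window's start
    print(cut_spike_train([0.1, 0.4, 1.2, 2.7, 2.9], 1.0))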
If we try to perform multiple hypothesis tests and combine the results, we run into the multiple comparisons problem. An adjustment to the significance level, such as the Bonferroni correction, is required.
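A minimal sketch of the Bonferroni adjustment (the function name is mine): each of the m p-values is tested against alpha / m, which keeps the family-wise error rate at or below alpha.

    def bonferroni(p_values, alpha=0.05):
        """Reject H0 for each test whose p-value clears alpha / m."""
        m = len(p_values)
        return [p <= alpha / m for p in p_values]

    # three tests at a family-wise level of 0.05, i.e. a per-test
    # threshold of 0.05 / 3
    print(bonferroni([0.001, 0.02, 0.30]))  # [True, False, False]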