Slide 1
ENERGY-EFFICIENT RECURSIVE ESTIMATION BY VARIABLE-THRESHOLD NEURONS
Toby Berger, Cornell & UVa; Chip Levy, UVa Medical; Galen Reeves, Cornell & UCB
CoSyNe Workshop on Info-Neuro, Park City, UT, 2/25-26/2007
Slide 2
In time slot k, let X_k denote the mean contribution to energy-efficient neuron N's PSP in response to action potentials afferent to N's synapses during the slot. The value of X_k is assumed to be in 1-1 correspondence with a random variable (call it Θ) that is of interest to the neurons in N's efferent cohort and that remains effectively constant over any two successive slots. This implies that X also remains effectively constant over the two slots we consider in this paper. Moreover, because of the 1-1 correspondence, for any r.v. V we have I(X;V) = I(Θ;V), where I(·;·) denotes Shannon mutual information.
Slide 3
The process of PSP formation is random because of Poisson noise with variance proportional to the mean input signal strength, among other noises. In this preliminary investigation we simplistically assume that the overall noise in slot k, denoted N_k, is additive, so that the net PSP in slot k due to both signal and noise takes the form Y_k = X + N_k. Moreover, in our "toy problem" study, we assume that the noise also is (i) independent from slot to slot, (ii) independent of the signal strength, and (iii) uniformly distributed over [-1,1]. Although assumptions (ii) and (iii) are unrealistic, they have shamelessly been made purely for analytical convenience.
Slide 4
What's more, we assume that X is uniformly distributed over the interval [-A,A], so that A may be thought of as the square root of the signal-to-noise ratio. Information-theory remark: it is easy to show that a time-discrete, amplitude-continuous, memoryless channel that adds white uniform [-1,1] noise to its inputs to produce its outputs, and whose input amplitude is constrained not to exceed A in magnitude, has Shannon capacity C = log(A+1). This capacity is achieved by an input distribution that places its probability uniformly on the A+1 discrete points {A, A-2, A-4, ..., -A}. However, distributing the input uniformly over the continuous interval [-A,A] achieves almost the full capacity when A >> 1.
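This remark can be checked numerically. The sketch below (Python/NumPy; the example value A = 5 is our own choice, not from the slides) computes the capacity log2(A+1) of the discrete equiprobable input and the mutual information I = h(Y) − h(N) achieved by the continuous Uniform[-A,A] input:

```python
import numpy as np

A = 5  # assumed example amplitude bound; noise is Uniform[-1, 1]

# Capacity-achieving input: A+1 equiprobable points {A, A-2, ..., -A}.
# Their noise intervals [x-1, x+1] are disjoint, so Y reveals X exactly and
# I(X;Y) = H(X) = log2(A+1) bits.
C = np.log2(A + 1)

# Continuous Uniform[-A, A] input: I = h(Y) - h(N), with h(N) = log2(2) = 1 bit.
# Y = X + N has the trapezoidal density given by convolving the two uniforms.
def f_Y(y):
    # density = (overlap of [y-1, y+1] with [-A, A]) / (2 * 2A)
    overlap = np.minimum(y + 1, A) - np.maximum(y - 1, -A)
    return np.clip(overlap, 0.0, 2.0) / (4.0 * A)

y = np.linspace(-(A + 1), A + 1, 400001)
f = f_Y(y)
safe_f = np.where(f > 0, f, 1.0)               # avoid log2(0)
h_Y = -np.sum(f * np.log2(safe_f)) * (y[1] - y[0])
I_uniform = h_Y - 1.0

print(C, I_uniform)  # log2(6) ≈ 2.585 bits vs ≈ 2.466 bits
```

For A = 5 the continuous input already comes within about 0.12 bit of capacity, illustrating the A >> 1 claim.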
Slide 5
Case 1: only one slot. N encodes by generating an action potential in the slot if and only if Y exceeds a judiciously preset threshold T. N does not set T at the median of Y, even though that would maximize the information that the spike-vs.-nonspike decision provides about X. Rather, N uses a higher T than that in order to reduce the expected energy expenditure E, so that the ratio I/E (bits per joule) is the quantity that gets maximized (cf. Levy and Baxter).
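A minimal numerical sketch of this tradeoff follows. The values A = 5 and R = 0.5, and the toy energy model E = R + P(spike) per slot (R a resting-energy cost, consistent with the R that appears later in Case 2B_2, but with these numbers assumed by us rather than taken from the slides):

```python
import numpy as np

A, R = 5.0, 0.5                # assumed: signal range and resting cost per slot
x = np.linspace(-A, A, 4001)   # grid over X ~ Uniform[-A, A]

def p_fire(xv, T):
    """P(Q = 1 | X = x) = P(x + N > T) for N ~ Uniform[-1, 1]."""
    return np.clip((1.0 - (T - xv)) / 2.0, 0.0, 1.0)

def Hb(p):
    """Binary entropy in bits, safe at p = 0 or 1."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def bits_per_energy(T):
    p = p_fire(x, T)
    I = Hb(p.mean()) - Hb(p).mean()   # I(X;Q) = H(Q) - H(Q|X)
    E = R + p.mean()                  # toy energy: resting + expected spike cost
    return I / E

Ts = np.linspace(-A, A, 2001)
ratios = np.array([bits_per_energy(T) for T in Ts])
T_opt = Ts[ratios.argmax()]
print(T_opt)  # lies above 0, the median of Y
```

Setting T at the median (T = 0 here, by symmetry) maximizes I alone, but the I/E-optimal threshold sits strictly higher because raising T cuts the firing probability, and hence E, faster than it cuts I.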
Slide 6
Case 2: two slots. This case divides into three subcases, wherein Q_i is a binary r.v. that equals 1 if Y_i > T_i and equals 0 otherwise, i = 1, 2. Case 2A: maximize I(X;Q_1,Q_2) with T_2 chosen a priori, i.e., before Q_1 is observed. Fig. 2: with A = 5 and T_2 chosen a priori.
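Case 2A can be sketched under the same toy assumptions as before (A = 5, resting cost R = 0.5 per slot, energy = resting cost plus expected spike count; all our assumptions, not the slides'). Conditioned on X the two slot outputs are independent, which makes I(X;Q_1,Q_2) easy to compute:

```python
import numpy as np

A, R = 5.0, 0.5              # assumed signal range and per-slot resting cost
x = np.linspace(-A, A, 2001)

def p_fire(xv, T):
    # P(Q = 1 | X = x) = P(x + N > T), N ~ Uniform[-1, 1]
    return np.clip((1.0 - (T - xv)) / 2.0, 0.0, 1.0)

def H(P):
    # Shannon entropy in bits along the last axis
    P = np.clip(P, 1e-15, 1.0)
    return -(P * np.log2(P)).sum(axis=-1)

def ratio_2A(T1, T2):
    p1, p2 = p_fire(x, T1), p_fire(x, T2)
    # joint pmf of (Q1, Q2) given X = x; slots are conditionally independent
    joint = np.stack([(1-p1)*(1-p2), (1-p1)*p2, p1*(1-p2), p1*p2], axis=-1)
    I = H(joint.mean(axis=0)) - H(joint).mean()   # I(X; Q1, Q2)
    E = 2*R + p1.mean() + p2.mean()
    return I / E

Ts = np.linspace(0.0, A, 41)
best = max(((ratio_2A(T1, T2), T1, T2) for T1 in Ts for T2 in Ts),
           key=lambda t: t[0])
print(best)  # (best I/E, T1, T2) with both thresholds fixed a priori
```

Here both thresholds are committed before either slot is observed; the following cases let T_2 react to Q_1.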
Slide 7
Fig. 3: I/E as a function of T_1 (Case 1)
Slide 8
Case 2B_1: T_2 = T_2(Q_1), i.e., the second threshold is set a posteriori. We pick the initial threshold T_1, resulting in the binary output Q_1. Then we pick T_2 as a function of Q_1, say T_2(Q_1), so as to maximize I(X;Q_1,Q_2). Case 2B_2: same as Case 2B_1 except we maximize bits per joule, where the maximum is over T_1, T_2(Q_1), R and A. THIS IS AN IMPORTANT CASE because it addresses the task of maximizing bits per unit of energy via a slot-by-slot recursive strategy. Some graphical results follow, after which we introduce Case 2C.
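The a posteriori strategy can be sketched numerically under the same assumed toy model (A = 5, resting cost R = 0.5, energy = resting cost plus expected spikes; our choices, not the slides'). Since the a priori rule is the special case T_2(0) = T_2(1), the adaptive optimum can never do worse:

```python
import numpy as np

A, R = 5.0, 0.5              # assumed signal range and per-slot resting cost
x = np.linspace(-A, A, 1501)

def p_fire(xv, T):
    # P(x + N > T), N ~ Uniform[-1, 1]
    return np.clip((1.0 - (T - xv)) / 2.0, 0.0, 1.0)

def H(P):
    P = np.clip(P, 1e-15, 1.0)
    return -(P * np.log2(P)).sum(axis=-1)

def ratio(T1, T20, T21):
    """Bits per joule when the slot-2 threshold is T20 if Q1=0, T21 if Q1=1."""
    p1 = p_fire(x, T1)
    q0, q1 = p_fire(x, T20), p_fire(x, T21)
    joint = np.stack([(1-p1)*(1-q0), (1-p1)*q0, p1*(1-q1), p1*q1], axis=-1)
    I = H(joint.mean(axis=0)) - H(joint).mean()          # I(X; Q1, Q2)
    E = 2*R + p1.mean() + ((1-p1)*q0 + p1*q1).mean()     # resting + spikes
    return I / E

Ts = np.linspace(0.0, A, 21)
adaptive = max(((ratio(T1, a, b), T1, a, b)
                for T1 in Ts for a in Ts for b in Ts), key=lambda t: t[0])
a_priori = max(((ratio(T1, T2, T2), T1, T2)
                for T1 in Ts for T2 in Ts), key=lambda t: t[0])
print(adaptive[0], a_priori[0])  # adaptive >= a priori
```

The grid search over (T_1, T_2(0), T_2(1)) is crude but suffices to show the gain from letting the second threshold depend on the first slot's outcome.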
Slide 9
Fig. 4: Maximum I/E as a function of T_1 (Case 2B)
Slide 10
Fig. 5: Optimal T_2(0), T_2(1) as a function of T_1 (Case 2B)
Slide 11
Case 2C: refractoriness. This is similar to Case 2B except that we impose a refractoriness constraint prohibiting N from firing in the second slot if it has just fired in the first, by requiring that T_2(Q_1 = 1) be infinite. The refractoriness requirement changes the T_1 that maximizes the overall I/E, as shown in Figure 6. Figure 7 shows the optimum T_2(0) as a function of T_1, with T_2(1) = ∞.
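In the same assumed toy model (A = 5, R = 0.5, energy = resting cost plus expected spikes), the constraint T_2(Q_1 = 1) = ∞ simply forces the slot-2 firing probability to zero after a first-slot spike:

```python
import numpy as np

A, R = 5.0, 0.5              # assumed signal range and per-slot resting cost
x = np.linspace(-A, A, 1501)

def p_fire(xv, T):
    # P(x + N > T), N ~ Uniform[-1, 1]. Passing T = np.inf yields probability 0,
    # which is exactly the refractoriness constraint T2(Q1 = 1) = infinity.
    return np.clip((1.0 - (T - xv)) / 2.0, 0.0, 1.0)

def H(P):
    P = np.clip(P, 1e-15, 1.0)
    return -(P * np.log2(P)).sum(axis=-1)

def ratio(T1, T20, T21):
    p1 = p_fire(x, T1)
    q0, q1 = p_fire(x, T20), p_fire(x, T21)
    joint = np.stack([(1-p1)*(1-q0), (1-p1)*q0, p1*(1-q1), p1*q1], axis=-1)
    I = H(joint.mean(axis=0)) - H(joint).mean()          # I(X; Q1, Q2)
    E = 2*R + p1.mean() + ((1-p1)*q0 + p1*q1).mean()
    return I / E

Ts = np.linspace(0.0, A, 41)
# Case 2C: a spike in slot 1 forbids a spike in slot 2
refractory = max(((ratio(T1, T20, np.inf), T1, T20)
                  for T1 in Ts for T20 in Ts), key=lambda t: t[0])
print(refractory)  # optimal (I/E, T1, T2(0)) under the constraint
```

Refractoriness trades information (the outcome Q_1 = 1, Q_2 = 1 becomes impossible) against the energy saved by never paying for a second consecutive spike, which is why the optimizing T_1 shifts relative to Case 2B.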
Slide 12
Fig. 6: Maximum I/E as a function of T_1 (Case 2C)
Slide 13
Fig. 7: Optimal T_2(0) as a function of T_1 (Case 2C)