
Commit 81e5a6e

Clarify in docs that Poisson noise is generated using spike_generator not poisson_generator
1 parent: bb49ac5

6 files changed: +9 -9 lines changed

pynest/examples/eprop_plasticity/eprop_supervised_classification_evidence-accumulation_bsshslm_2020.py

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@
 
 Learning in the neural network model is achieved by optimizing the connection weights with e-prop plasticity.
 This plasticity rule requires a specific network architecture depicted in Figure 1. The neural network model
-consists of a recurrent network that receives input from Poisson generators and projects onto two readout
+consists of a recurrent network that receives input from spike generators and projects onto two readout
 neurons - one for the left and one for the right turn at the end. The input neuron population consists of four
 groups: one group providing background noise of a specific rate for some base activity throughout the
 experiment, one group providing the input spikes of the left cues and one group providing them for the right
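
The wording change reflects how the noise and cue input is actually produced: Poisson spike times are drawn ahead of time and then played back through spike_generator devices, rather than sampled online by a poisson_generator. A minimal PyNEST sketch of that pattern (rate, duration, seed, and variable names are illustrative assumptions, not values from the example script):

import numpy as np
import nest

rate_hz = 10.0        # assumed background rate
duration_ms = 1000.0  # assumed trial length
rng = np.random.default_rng(seed=1)

# Draw Poisson spike times offline from exponential inter-spike intervals ...
isis = rng.exponential(1000.0 / rate_hz, size=int(10 * rate_hz))
spike_times = np.cumsum(isis)
spike_times = np.round(spike_times[spike_times < duration_ms], 1)  # align to an assumed 0.1 ms grid
spike_times = spike_times[spike_times > 0.0]

# ... and replay them through a spike_generator instead of sampling them
# online with a poisson_generator.
background_input = nest.Create("spike_generator", params={"spike_times": spike_times.tolist()})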

pynest/examples/eprop_plasticity/eprop_supervised_classification_neuromorphic_mnist.py

Lines changed: 4 additions & 4 deletions
@@ -44,7 +44,7 @@
 
 Learning in the neural network model is achieved by optimizing the connection weights with e-prop plasticity.
 This plasticity rule requires a specific network architecture depicted in Figure 1. The neural network model
-consists of a recurrent network that receives input from Poisson generators and projects onto multiple readout
+consists of a recurrent network that receives input from spike generators and projects onto multiple readout
 neurons - one for each class. Each input generator is assigned to a pixel of the input image; when an event is
 detected in a pixel at time :math:`t`, the corresponding input generator (connected to an input neuron) emits a spike
 at that time. Each readout neuron compares the network signal :math:`y_k` with the teacher signal :math:`y_k^*`,
@@ -177,10 +177,10 @@
 # We proceed by creating a certain number of input, recurrent, and readout neurons and setting their parameters.
 # Additionally, we already create an input spike generator and an output target rate generator, which we will
 # configure later. Each input sample is mapped out to a 34x34 pixel grid and a polarity dimension. We allocate
-# Poisson generators to each input image pixel to simulate spike events. However, due to the observation
+# spike generators to each input image pixel to simulate spike events. However, due to the observation
 # that some pixels either never record events or do so infrequently, we maintain a blocklist of these inactive
-# pixels. By omitting Poisson generators for pixels on this blocklist, we effectively reduce the total number of
-# input neurons and Poisson generators required, optimizing the network's resource usage.
+# pixels. By omitting spike generators for pixels on this blocklist, we effectively reduce the total number of
+# input neurons and spike generators required, optimizing the network's resource usage.
 
 pixels_blocklist = np.loadtxt("./NMNIST_pixels_blocklist.txt")
 
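A hedged sketch of the blocklist logic described in the comment above: spike generators are created only for pixels that are not on the blocklist. The 34x34 grid with two polarities comes from the docstring; the blocklist format, the variable names, and the use of parrot_neuron as a stand-in for the input population are assumptions.

import numpy as np
import nest

n_pixels = 34 * 34 * 2  # 34x34 pixel grid times two polarities, as described above

# Assumed format: one inactive pixel index per line.
blocked = set(np.atleast_1d(np.loadtxt("./NMNIST_pixels_blocklist.txt")).astype(int).tolist())
active_pixels = [p for p in range(n_pixels) if p not in blocked]

# One spike_generator (and one input neuron) per active pixel only,
# which keeps the number of devices and input neurons small.
input_generators = nest.Create("spike_generator", len(active_pixels))
input_neurons = nest.Create("parrot_neuron", len(active_pixels))  # stand-in for the example's input neurons
nest.Connect(input_generators, input_neurons, "one_to_one")
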
pynest/examples/eprop_plasticity/eprop_supervised_regression_handwriting_bsshslm_2020.py

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@
 
 Learning in the neural network model is achieved by optimizing the connection weights with e-prop plasticity.
 This plasticity rule requires a specific network architecture depicted in Figure 1. The neural network model
-consists of a recurrent network that receives frozen noise input from Poisson generators and projects onto two
+consists of a recurrent network that receives frozen noise input from spike generators and projects onto two
 readout neurons. Each individual readout signal denoted as :math:`y_k` is compared with a corresponding target
 signal represented as :math:`y_k^*`. The network's training error is assessed by employing a mean-squared error
 loss.
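
"Frozen noise" here means the Poisson input spike trains are drawn once, with a fixed seed, and then replayed unchanged in every training iteration via spike_generator devices. A short sketch under those assumptions (all numbers and names are illustrative, not taken from the example):

import numpy as np
import nest

rng = np.random.default_rng(seed=2020)  # fixed seed makes the noise "frozen"
n_in, rate_hz, duration_ms = 100, 10.0, 1000.0

input_generators = nest.Create("spike_generator", n_in)
for sg in input_generators:
    n_spikes = rng.poisson(rate_hz * duration_ms / 1000.0)
    times = np.round(np.sort(rng.uniform(0.1, duration_ms, n_spikes)), 1)  # assumed 0.1 ms grid
    # The same spike times are reused for every training iteration.
    nest.SetStatus(sg, {"spike_times": times.tolist()})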

pynest/examples/eprop_plasticity/eprop_supervised_regression_lemniscate_bsshslm_2020.py

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@
 
 Learning in the neural network model is achieved by optimizing the connection weights with e-prop plasticity.
 This plasticity rule requires a specific network architecture depicted in Figure 1. The neural network model
-consists of a recurrent network that receives frozen noise input from Poisson generators and projects onto two
+consists of a recurrent network that receives frozen noise input from spike generators and projects onto two
 readout neurons. Each individual readout signal denoted as :math:`y_k` is compared with a corresponding target
 signal represented as :math:`y_k^*`. The network's training error is assessed by employing a mean-squared error
 loss.
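
The training error mentioned here is a mean-squared error between each readout signal and its target; a plain NumPy sketch of that comparison (the arrays are placeholder data, and the exact normalization used in the example may differ):

import numpy as np

# Placeholder readout and target signals, one value per time step.
y_k = np.array([0.1, 0.4, 0.7, 0.5])
y_k_target = np.array([0.0, 0.5, 1.0, 0.5])

# Standard mean-squared error loss between readout and target.
mse_loss = np.mean((y_k - y_k_target) ** 2)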

pynest/examples/eprop_plasticity/eprop_supervised_regression_sine-waves.py

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@
 
 Learning in the neural network model is achieved by optimizing the connection weights with e-prop plasticity.
 This plasticity rule requires a specific network architecture depicted in Figure 1. The neural network model
-consists of a recurrent network that receives frozen noise input from Poisson generators and projects onto one
+consists of a recurrent network that receives frozen noise input from spike generators and projects onto one
 readout neuron. The readout neuron compares the network signal :math:`y` with the teacher target signal
 :math:`y*`, which it receives from a rate generator. In scenarios with multiple readout neurons, each individual
 readout signal denoted as :math:`y_k` is compared with a corresponding target signal represented as

pynest/examples/eprop_plasticity/eprop_supervised_regression_sine-waves_bsshslm_2020.py

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@
 
 Learning in the neural network model is achieved by optimizing the connection weights with e-prop plasticity.
 This plasticity rule requires a specific network architecture depicted in Figure 1. The neural network model
-consists of a recurrent network that receives frozen noise input from Poisson generators and projects onto one
+consists of a recurrent network that receives frozen noise input from spike generators and projects onto one
 readout neuron. The readout neuron compares the network signal :math:`y` with the teacher target signal
 :math:`y*`, which it receives from a rate generator. In scenarios with multiple readout neurons, each individual
 readout signal denoted as :math:`y_k` is compared with a corresponding target signal represented as
