diff --git a/docs/notebooks/nonlinear_gaussian_ssm/ekf_mlp.ipynb b/docs/notebooks/nonlinear_gaussian_ssm/ekf_mlp.ipynb
index f1c7a3ef..914cd40f 100644
--- a/docs/notebooks/nonlinear_gaussian_ssm/ekf_mlp.ipynb
+++ b/docs/notebooks/nonlinear_gaussian_ssm/ekf_mlp.ipynb
@@ -154,7 +154,7 @@
    "source": [
     "## Neural network\n",
     "\n",
-    "We aim to approximate the true data generating function, $f(x)$, with a parametric approximation, $h(\\theta, x)$, where $\\theta$ are the parameters and $x$ are the inputs. We use a simple feedforward neural network — a.k.a. multi-layer perceptron (MLP) — with sigmoidal noinlinearities. Here, $\\theta$ corresponds to the flattened vector of all the weights from all the layers of the model. "
+    "We aim to approximate the true data generating function, $f(x)$, with a parametric approximation, $h(\\theta, x)$, where $\\theta$ are the parameters and $x$ are the inputs. We use a simple feedforward neural network — a.k.a. multi-layer perceptron (MLP) — with sigmoidal nonlinearities. Here, $\\theta$ corresponds to the flattened vector of all the weights from all the layers of the model. "
   ]
  },
  {