- Do I understand the idea of weights in neural networks?
- Do I understand the purpose of the bias in a neural network?
- Do I understand how the net input to an activation function is computed?
- Do I understand the purpose of an activation function?
- Do I know how to perform a forward pass, given a neural network and an instance of data?
- Do I understand why features are normalised before training a neural network?
- Do I understand the idea of a single artificial neuron?
- Do I understand that categorical features need to be one-hot encoded before training a neural network?
- Do I understand why MLPs are better suited than single neurons to multi-class (more than two classes) classification problems and non-linearly separable data?
- Do I understand the feedforward neural network architecture: the input layer, hidden layer(s) and output layer, the weights and biases, and the calculations performed in a forward pass?
- Do I understand the difference between feedforward and recurrent neural networks?
- Do I understand the Elman, Jordan and Multi-recurrent neural networks?
- Do I understand the steps taken in the iterative process of training a neural network?
- Do I understand the difference between online and offline learning?
- Do I understand weight initialisation, how that impacts training and how gradient descent is applied?
- Do I understand the difference between the forward and backward pass?
- Can I perform a forward pass?
- Do I understand the main ideas associated with gradient descent?
- Do I understand the stopping conditions when training a neural network?
- Do I understand why a learning rate is used and the effects of different rates?
- Do I understand the idea of momentum?
- Do I understand early stopping and overfitting?
- Do I understand the ideas around architecture selection, pruning, growing and regularisation?
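
The forward-pass items above (net input, bias, activation function, layers) can be sketched in a few lines. This is a minimal illustration, not a full implementation; the 2-2-1 architecture and all weight values are made up for the example.

```python
import math

def sigmoid(z):
    # activation function: squashes the net input into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(inputs, weights, bias):
    # net input = weighted sum of inputs plus the bias term
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(net)

def forward_pass(inputs, layers):
    # layers: list of layers; each layer is a list of (weights, bias) per neuron;
    # each layer's activations become the next layer's inputs
    activations = inputs
    for layer in layers:
        activations = [neuron_output(activations, w, b) for w, b in layer]
    return activations

# a 2-2-1 network with illustrative (assumed) weights and biases
hidden = [([0.5, -0.4], 0.1), ([0.3, 0.8], -0.2)]
output = [([1.0, -1.0], 0.05)]
print(forward_pass([1.0, 0.0], [hidden, output]))
```

Being able to trace these calculations by hand for a given network and data instance is exactly what the "Can I perform a forward pass?" item asks for.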
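
The two preprocessing items (feature normalisation and one-hot encoding of categorical features) can also be sketched directly; min-max scaling is used here as one common normalisation choice, and the category list is an invented example.

```python
def min_max_normalise(values):
    # rescale a feature to [0, 1] so no feature dominates the weighted sums
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def one_hot(value, categories):
    # a categorical feature becomes one binary input per category
    return [1 if value == c else 0 for c in categories]

print(min_max_normalise([10, 20, 30]))           # [0.0, 0.5, 1.0]
print(one_hot("red", ["red", "green", "blue"]))  # [1, 0, 0]
```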
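
The feedforward/recurrent distinction can be seen in a tiny Elman-style sketch: a single recurrent hidden unit whose previous activation (the context) is fed back in alongside each new input. The weights here are assumptions chosen only to show the effect; real Elman, Jordan and multi-recurrent networks have full weight matrices and differ in what is fed back.

```python
import math

def elman_forward(sequence, w_in, w_rec, bias):
    # one recurrent hidden unit: its previous activation is fed back
    # via the recurrent weight, so the output depends on history
    h = 0.0
    outputs = []
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h + bias)
        outputs.append(h)
    return outputs

# the same input repeated gives different outputs, because the hidden
# state carries information from earlier steps (illustrative weights)
print(elman_forward([1.0, 1.0, 1.0], w_in=0.6, w_rec=0.5, bias=0.0))
```

A purely feedforward network, by contrast, would map the same input to the same output every time.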
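
The gradient descent, learning rate and momentum items can be tied together with a one-dimensional sketch: minimising f(w) = (w - 3)^2, whose gradient is 2(w - 3). The learning rate and momentum values below are illustrative, not recommendations.

```python
def gradient_descent(grad, w0, lr, momentum, steps):
    # velocity accumulates a decaying sum of past gradients;
    # momentum smooths the updates and can speed convergence
    w, v = w0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(w)
        w = w + v
    return w

# minimise f(w) = (w - 3)^2, so the minimum is at w = 3
grad = lambda w: 2.0 * (w - 3.0)
print(gradient_descent(grad, w0=0.0, lr=0.1, momentum=0.9, steps=200))
```

Experimenting with the `lr` value shows the effects the checklist asks about: too small and convergence is slow; too large and the updates overshoot or diverge.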
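
Early stopping as a guard against overfitting can be sketched with an assumed validation-loss curve: training stops once validation loss has failed to improve for a set number of epochs (the patience), and the best epoch seen is kept.

```python
def early_stop(val_losses, patience):
    # stop when validation loss fails to improve for `patience` epochs,
    # remembering the best epoch seen so far
    best_loss, best_epoch, wait = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return best_epoch, best_loss

# illustrative curve: validation loss improves, then rises as the
# network starts to overfit the training data
losses = [0.9, 0.7, 0.5, 0.45, 0.46, 0.48, 0.50, 0.55]
print(early_stop(losses, patience=2))  # (3, 0.45)
```

This is one common stopping condition; others from the checklist include a maximum number of epochs or the training error falling below a threshold.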