Image representation
 Grayscale image = matrix, each entry = a pixel.
 entry values $\in [0,255]$
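As a concrete illustration, a tiny grayscale image can be stored as a matrix with NumPy (the pixel values below are made up):

```python
import numpy as np

# A toy 4x4 grayscale "image": each entry is one pixel in [0, 255].
image = np.array([
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [ 16,  80, 144, 208],
    [  8,  72, 136, 200],
], dtype=np.uint8)

print(image.shape)               # (4, 4): height x width
print(image.min(), image.max())  # values stay within [0, 255]
```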
Image classification
Using FFNNs
Problems:
 Too many weights
 The first layer will require at least as many neurons as there are pixels in the image, leading to an impractical number of parameters
 Variability in position
 Small translations in the input image will cause the network to produce incorrect predictions
 due to a lack of spatial invariance
 may struggle to detect features in inputs that deviate from the specific positions covered in the training dataset.
 Solution Suggestion: Collecting numerous recordings with various positions of the feature may help
 but it might be impractical to cover all possible permutations.
 Proposal for a More Flexible Network:
 recognize the target irrespective of its specific location within the recording (input).
 develop models that generalize well to different permutations of the target pattern within the data.
Scanning for a pattern in a timeseries
Algorithm:
 sliding window approach to making predictions using a neural network on a time series.
1 given a neural network $M$ and a timeseries $X$ with length $T$
2 choose a scanning width $w$
3 for $t=1$ to $T-w+1$
4   select a segment $X_{\text{segment}} = X(t : t+w-1)$ of timeseries $X$
5   generate a prediction $y(t) = M(X_{\text{segment}})$
6 predict the maximum $y(t)$ value

Neural Network and Time Series: You have a neural network $M$ that is trained to make predictions based on input data. In this case, the input data is a time series $X$ with $T$ data points.

Scanning Width: You choose a scanning width $w$. This is essentially the size of the window that will slide over your time series.

Loop Over Time Series: You iterate over the time series from $t = 1$ to $T - w + 1$. This loop is responsible for moving the window over the time series.

Select Segment: At each step of the loop, you select a segment $X_{\text{segment}}$ of the time series. This segment has a length of $w$; it starts at the current position $t$ and goes up to $t + w - 1$.

Generate Prediction: You feed this segment $X_{\text{segment}}$ into your neural network $M$ to generate a prediction $y(t)$. Essentially, you're asking the network how strongly the target pattern is present in the current segment of the time series.

Predict the Maximum: Finally, you record the maximum predicted value $y(t)$ for each iteration of the loop. This means you're interested in the highest prediction value across all the windows as you slide through the time series.

In short, this algorithm applies a trained neural network to successive segments of a time series by sliding a window over it and keeping the maximum prediction across all window positions. This can be useful for identifying peaks or significant changes in the time series data.
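The scan above can be sketched in a few lines of Python. The model `M` here is a stand-in (a simple dot-product detector for an assumed target pattern), not a trained network:

```python
import numpy as np

def scan_for_pattern(M, X, w):
    """Slide a window of width w over timeseries X, score each
    segment with model M, and return the position and value of the
    maximum score (steps 1-6 of the algorithm above, 0-based here)."""
    T = len(X)
    scores = []
    for t in range(T - w + 1):          # t = 1 .. T-w+1 in the notes
        segment = X[t:t + w]            # X(t : t+w-1)
        scores.append(M(segment))       # y(t) = M(segment)
    best = int(np.argmax(scores))       # predict the maximum y(t)
    return best, scores[best]

# Toy "model": correlation with a fixed target pattern (an assumption;
# in practice M would be a trained neural network).
pattern = np.array([0.0, 1.0, 0.0])
M = lambda seg: float(np.dot(seg, pattern))

X = np.array([0.1, 0.2, 0.9, 0.1, 0.3, 0.2])
pos, score = scan_for_pattern(M, X, w=3)
print(pos, score)   # the window starting at index 1 contains the peak
```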
Weight sharing

the same set of weights is used at every window position of the same layer.

weights shared across multiple segments of the time series

can learn and recognize patterns in a more generalized manner
 as it is not learning separate sets of weights for each segment.
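A minimal sketch of what weight sharing means here: a single weight vector scores every window position, which is exactly a 1-D cross-correlation of the series with the shared weights (the weight values below are made up):

```python
import numpy as np

# Weight sharing: one weight vector is reused at every window position,
# instead of learning separate weights per segment.
w_shared = np.array([0.0, 1.0, 0.0])

def scan_shared(X, weights):
    """Score every window of X with the same (shared) weights."""
    w = len(weights)
    return np.array([np.dot(X[t:t + w], weights)
                     for t in range(len(X) - w + 1)])

X = np.array([0.1, 0.2, 0.9, 0.1, 0.3, 0.2])
out = scan_shared(X, w_shared)

# np.correlate performs the same shared-weight scan in one call,
# which is why this construction is a 1-D convolutional layer in spirit.
print(np.allclose(out, np.correlate(X, w_shared, mode="valid")))  # True
```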

alternative approach: using a fully-connected layer instead of simply selecting the max value
 takes into account information from the entire segment, not just the maximum value
 provides a more comprehensive analysis of the segment
 might capture more complex patterns.
 using fully connected neural networks to classify images is not efficient
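To contrast the two aggregation options, the sketch below compares max-pooling over the per-window scores with a small fully-connected layer; `W` and `b` are hypothetical, untrained parameters used only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-window scores y(t) from the scanning step (random stand-ins here).
y = rng.random(8)

# Option 1: max-pooling -- keep only the strongest window response.
max_out = y.max()

# Option 2 (the alternative above): a fully-connected layer that weighs
# every window's score, not just the maximum, so information from the
# entire segment contributes to the final output.
W = rng.standard_normal((1, len(y)))
b = np.zeros(1)
fc_out = W @ y + b   # a learned combination of all window responses

print(max_out, fc_out[0])
```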

these techniques enhance the ability to recognize patterns during the scanning process of a time series.