In the Coursera class "Machine Learning" offered by Stanford University, the professor presents the algorithms as vectorized. However, several of them still contain for loops. Can they be vectorized further, for efficiency?

In this post, I would like to explore this question, and to share my results. All vectors are assumed to be column vectors.

Before we dive in, I want to recommend the Machine Learning Coursera class. It has lots of resources, and the professor gives very solid explanations. Even people with prior AI experience will pick up new tricks, and it will help solidify your understanding.

### Linear Regression

#### Refresher

Before we start, here is a quick refresher on what each variable means:

\(x_i\) - The feature vector of example i

\(m\) - The number of examples

\(y_i\) - The actual value, for example i, of the statistic we're predicting

\(\theta\) - The model parameters

\(h_{\theta} (x_i)\) - Our model's prediction for example i

\(h_{\theta} (x)\) - Our model's predictions for all examples

\(J(\theta)\) - The cost function, i.e. how bad our model is

The linear model used to make predictions is:

\[ h_{\theta}(x_i) = \theta^T x_i \]

The standard cost function is:

\[ J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_{\theta}(x_i) - y_i \right)^2 \]
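To make the model and cost concrete, here is a minimal NumPy sketch (the course itself uses Octave; the toy data and function names below are my own):

```python
import numpy as np

# Toy data: m = 4 examples, with a leading bias column of ones.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
theta = np.array([1.0, 2.0])  # happens to fit this data exactly

def predict(X, theta):
    # h_theta(x) for every example at once: X @ theta
    return X @ theta

def cost(X, y, theta):
    # J(theta) = (1 / 2m) * sum((h - y)^2)
    m = len(y)
    residual = predict(X, theta) - y
    return residual @ residual / (2 * m)

print(cost(X, y, theta))  # 0.0 -- theta reproduces y exactly
```

Note that `predict` already computes every example's prediction in one matrix-vector product; the question in the rest of the post is whether the *gradient* can be handled the same way.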

#### Gradient Descent

"Machine Learning" offers the following semi-vectorized formula for computing the gradient of the cost, evaluated in a loop over each parameter \(j\):

\[ \frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left( h_{\theta}(x_i) - y_i \right) x_{i,j} \]

The main issue with this is that the formula must be evaluated once per parameter, inside a loop. The algorithm can be vectorized further. Let \(X\) be the design matrix whose rows are the \(x_i^T\); then:

\[ \nabla_{\theta} J(\theta) = \frac{1}{m} X^T \left( X\theta - y \right) \]
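The per-parameter loop and the fully vectorized form compute the same thing, which a quick NumPy comparison can confirm (random toy data, my own naming):

```python
import numpy as np

def gradient_loop(X, y, theta):
    # One pass per parameter j, as in the course's semi-vectorized version.
    m, n = X.shape
    grad = np.zeros(n)
    for j in range(n):
        grad[j] = ((X @ theta - y) * X[:, j]).sum() / m
    return grad

def gradient_vectorized(X, y, theta):
    # Fully vectorized: (1/m) * X^T (X theta - y)
    return X.T @ (X @ theta - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
y = rng.normal(size=5)
theta = rng.normal(size=3)
print(np.allclose(gradient_loop(X, y, theta), gradient_vectorized(X, y, theta)))  # True
```

Beyond dropping the loop, the vectorized form hands the whole computation to the BLAS routines behind `@`, which is where the practical speedup comes from.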

#### Proof

Let's take the gradient of \(J\). We'll start by rewriting it a little, using \(h_{\theta}(x) = X\theta\):

\[ J(\theta) = \frac{1}{2m} \left( X\theta - y \right)^T \left( X\theta - y \right) \]

Then breaking it up via the chain rule:

\[ \nabla_{\theta} J(\theta) = \frac{1}{m} \left( X\theta - y \right)^T \frac{\partial h_{\theta}(x)}{\partial \theta} \]

Next, let's find the gradient of \(h\) with respect to \(\theta\):

\[ \frac{\partial h_{\theta}(x)}{\partial \theta} = \frac{\partial (X\theta)}{\partial \theta} = X \]

Inserting back, we get:

\[ \nabla_{\theta} J(\theta) = \frac{1}{m} \left( X\theta - y \right)^T X \]

However, this produces a row vector, and we want a column vector. Transposing gives the final result:

\[ \nabla_{\theta} J(\theta) = \frac{1}{m} X^T \left( X\theta - y \right) \]
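The derivation can also be sanity-checked numerically: the closed-form gradient \(\frac{1}{m} X^T (X\theta - y)\) should agree with central finite differences on \(J\). A short sketch (random toy data, my own naming):

```python
import numpy as np

def cost(X, y, theta):
    m = len(y)
    r = X @ theta - y
    return r @ r / (2 * m)

def analytic_gradient(X, y, theta):
    # The result of the derivation: (1/m) X^T (X theta - y)
    return X.T @ (X @ theta - y) / len(y)

def numeric_gradient(X, y, theta, eps=1e-6):
    # Central finite differences on J, one component at a time.
    grad = np.zeros_like(theta)
    for j in range(len(theta)):
        e = np.zeros_like(theta)
        e[j] = eps
        grad[j] = (cost(X, y, theta + e) - cost(X, y, theta - e)) / (2 * eps)
    return grad

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 3))
y = rng.normal(size=6)
theta = rng.normal(size=3)
print(np.allclose(analytic_gradient(X, y, theta), numeric_gradient(X, y, theta), atol=1e-6))  # True
```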

### Logistic Regression Gradient

Similarly, "Machine Learning" offers the following semi-vectorized gradient for logistic regression, where the hypothesis is now the sigmoid \(h_{\theta}(x_i) = \frac{1}{1 + e^{-\theta^T x_i}}\):

\[ \frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left( h_{\theta}(x_i) - y_i \right) x_{i,j} \]

Thanks to the sigmoid's convenient derivative, the partials take the same form as in linear regression. Hence, we can reuse the fully vectorized gradient equation, with \(g\) denoting the element-wise sigmoid:

\[ \nabla_{\theta} J(\theta) = \frac{1}{m} X^T \left( g(X\theta) - y \right) \]
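In NumPy this is a one-line change from the linear-regression gradient: wrap \(X\theta\) in the sigmoid. A quick sketch comparing it against a per-example loop (toy data and names are my own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_gradient(X, y, theta):
    # Fully vectorized: (1/m) X^T (g(X theta) - y) -- identical in shape to
    # the linear-regression gradient, with the sigmoid applied to X theta.
    return X.T @ (sigmoid(X @ theta) - y) / len(y)

def logistic_gradient_loop(X, y, theta):
    # Per-example accumulation, mirroring the course's summation.
    m, n = X.shape
    grad = np.zeros(n)
    for i in range(m):
        grad += (sigmoid(X[i] @ theta) - y[i]) * X[i]
    return grad / m

rng = np.random.default_rng(3)
X = rng.normal(size=(6, 3))
y = rng.integers(0, 2, size=6).astype(float)  # binary labels
theta = rng.normal(size=3)
print(np.allclose(logistic_gradient(X, y, theta), logistic_gradient_loop(X, y, theta)))  # True
```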

### Neural Network Gradient

As a snack, here's a refresher on what each variable means:

\( \delta^{(n)} \) - The "error term" of NN layer n

\( \Delta^{(n)} \) - The cumulative gradient of NN layer n

The "Machine Learning" algorithm for computing a NN's gradient (for a network with one hidden layer) loops over every example. For \(i = 1\) to \(m\):

1. Set \(a^{(1)} = x_i\) and forward propagate: \(z^{(l+1)} = \Theta^{(l)} a^{(l)}\), \(a^{(l+1)} = g(z^{(l+1)})\)
2. Compute the output error: \(\delta^{(3)} = a^{(3)} - y_i\)
3. Back propagate: \(\delta^{(2)} = (\Theta^{(2)})^T \delta^{(3)} \odot g'(z^{(2)})\), where \(\odot\) is the element-wise product
4. Accumulate: \(\Delta^{(l)} := \Delta^{(l)} + \delta^{(l+1)} (a^{(l)})^T\)

This looks very daunting. Can we vectorize this behemoth of a for loop? Let's work our way down.

For forward propagation, let's make each row of matrix \(a_1\) contain the vector \(a^{(1)}\) for one example. Then every layer can be computed for all examples at once:

\[ z_2 = a_1 \Theta^{(1)T}, \quad a_2 = g(z_2), \quad z_3 = a_2 \Theta^{(2)T}, \quad a_3 = g(z_3) \]

For back propagation, let's make each row of matrix \(\delta_3\) contain the vector \(\delta^{(3)}\) for one example:

\[ \delta_3 = a_3 - Y, \quad \delta_2 = \delta_3 \Theta^{(2)} \odot g'(z_2) \]

where \(Y\) stacks the targets \(y_i^T\) as rows and \(\odot\) is the element-wise product.
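Here is a minimal NumPy sketch of both passes over all examples at once. For brevity I assume sigmoid activations and omit the bias units; the layer sizes and random data are my own, not the course's:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
m, n_in, n_hid, n_out = 8, 4, 5, 3
X = rng.normal(size=(m, n_in))          # rows of a_1 are the examples
Y = rng.normal(size=(m, n_out))         # targets, one row per example
Theta1 = rng.normal(size=(n_hid, n_in))
Theta2 = rng.normal(size=(n_out, n_hid))

# Forward propagation for all examples at once (rows = examples).
a1 = X
z2 = a1 @ Theta1.T
a2 = sigmoid(z2)
z3 = a2 @ Theta2.T
a3 = sigmoid(z3)

# Back propagation for all examples at once.
d3 = a3 - Y                          # each row is delta^(3) for one example
d2 = (d3 @ Theta2) * a2 * (1 - a2)   # sigmoid gradient g'(z2) = a2 * (1 - a2)
```

Each per-example matrix-vector product \(\Theta^{(l)} a^{(l)}\) becomes one matrix-matrix product against the stacked rows, which is why the transposes flip sides relative to the single-example formulas.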

Finally, we are left with vectorizing \(\Delta^{(l)}\). At first glance there's an issue: \(\Delta^{(l)}\) is a matrix, while each example contributes only the vectors \(a^{(l)}\) and \(\delta^{(l+1)}\). However, each update \(\delta^{(l+1)} (a^{(l)})^T\) is an outer product, and the sum of these outer products over all examples collapses into a single matrix product of the stacked matrices: \(\Delta^{(l)} = \delta_{l+1}^T a_l\). That still begs the question of whether we gain much optimization benefit from this last step.

On that note, unless performance is a constraint for your AI, it's best to keep readable code. In that situation, I recommend using "Machine Learning"'s semi-vectorized approach.