2.7 The code

Having understood backpropagation in the abstract, we can now understand the code used in the last chapter to implement it. Recall from that chapter that the code lived in the update_mini_batch and backprop methods of the Network class. The code for these methods is a direct translation of the algorithm described above. In particular, the update_mini_batch method updates the Network's weights and biases by computing the gradient for the current mini_batch of training examples:

class Network(object):
...
    def update_mini_batch(self, mini_batch, eta):
        """Update the network's weights and biases by applying
        gradient descent using backpropagation to a single mini batch.
        The "mini_batch" is a list of tuples "(x, y)", and "eta"
        is the learning rate."""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        for x, y in mini_batch:
            delta_nabla_b, delta_nabla_w = self.backprop(x, y)
            nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
            nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
        self.weights = [w-(eta/len(mini_batch))*nw
                        for w, nw in zip(self.weights, nabla_w)]
        self.biases = [b-(eta/len(mini_batch))*nb
                       for b, nb in zip(self.biases, nabla_b)]

Most of the work is done by the line delta_nabla_b, delta_nabla_w = self.backprop(x, y), which uses the backprop method to compute the partial derivatives. The backprop method follows the algorithm from the last section closely. There is one small difference: we use a slightly different scheme for indexing the layers of the network. The change takes advantage of a feature of Python, namely negative list indices, which let us count backward from the end of a list, so that, for example, l[-3] is the third-to-last entry of a list l. The code for backprop is below, together with a few helper functions used to compute the σ function, its derivative σ′, and the derivative of the cost function. With these in hand you should be able to understand the code in a self-contained way. If something trips you up, you may find it helpful to consult the original description (and complete listing) of the code.

class Network(object):
...
    def backprop(self, x, y):
        """Return a tuple "(nabla_b, nabla_w)" representing the
        gradient for the cost function C_x.  "nabla_b" and
        "nabla_w" are layer-by-layer lists of numpy arrays, similar
        to "self.biases" and "self.weights"."""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        # feedforward
        activation = x
        activations = [x] # list to store all the activations, layer by layer
        zs = [] # list to store all the z vectors, layer by layer
        for b, w in zip(self.biases, self.weights):
            z = np.dot(w, activation)+b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        # backward pass
        delta = self.cost_derivative(activations[-1], y) * \
            sigmoid_prime(zs[-1])
        nabla_b[-1] = delta
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        # Note that the variable l in the loop below is used a little
        # differently to the notation in Chapter 2 of the book.  Here,
        # l = 1 means the last layer of neurons, l = 2 is the
        # second-last layer, and so on.  It's a renumbering of the
        # scheme in the book, used here to take advantage of the fact
        # that Python can use negative indices in lists.
        for l in xrange(2, self.num_layers):
            z = zs[-l]
            sp = sigmoid_prime(z)
            delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
        return (nabla_b, nabla_w)

...

    def cost_derivative(self, output_activations, y):
        """Return the vector of partial derivatives \partial C_x /
        \partial a for the output activations."""
        return (output_activations-y)

def sigmoid(z):
    """The sigmoid function."""
    return 1.0/(1.0+np.exp(-z))

def sigmoid_prime(z):
    """Derivative of the sigmoid function."""
    return sigmoid(z)*(1-sigmoid(z))
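
To make the data flow concrete, here is a small, hypothetical usage sketch. It assumes the complete Network class from the last chapter's network.py is importable and that numpy is available; the particular layer sizes are just for illustration:

import numpy as np
import network

net = network.Network([2, 3, 1])      # 2 inputs, 3 hidden neurons, 1 output
x = np.random.randn(2, 1)             # a single input, stored as a column vector
y = np.array([[1.0]])                 # the desired output for that input
nabla_b, nabla_w = net.backprop(x, y)
# The returned lists mirror self.biases and self.weights, layer by layer:
# nabla_b has shapes (3, 1) and (1, 1); nabla_w has shapes (3, 2) and (1, 3).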

Problem

  • Fully matrix-based approach to backpropagation over a mini-batch. Our implementation of stochastic gradient descent loops over the training examples in a mini-batch. It's possible to modify the backpropagation algorithm so that it computes the gradients for all the examples in a mini-batch simultaneously. The idea is that instead of beginning with a single input vector x, we can begin with a matrix X = [x_1 x_2 … x_m] whose columns are the vectors in the mini-batch. We forward-propagate by multiplying by the weight matrices, adding a suitable matrix for the biases, and applying the sigmoid function everywhere; we then backpropagate along similar lines. Explicitly write out pseudocode for this approach to the backpropagation algorithm, and modify network.py so that it uses this fully matrix-based approach. The advantage of this approach is that it takes full advantage of modern libraries for linear algebra, and as a result it can be quite a bit faster than looping over the mini-batch (on my laptop, for example, I got roughly a factor-of-two speedup over the last chapter's implementation on the MNIST classification problem). In practice, all serious backpropagation libraries use this fully matrix-based approach or some variant of it. (One possible starting point is sketched after this problem.)
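
As one possible starting point (a sketch, not a full solution to the problem), a hypothetical backprop_matrix method could mirror backprop but carry all the examples of the mini-batch through at once. Here X and Y hold one training example per column, the bias vectors broadcast across columns, and the returned gradients are already summed over the mini-batch, so the caller would still divide by the batch size as update_mini_batch does:

class Network(object):
...
    def backprop_matrix(self, X, Y):
        """Hypothetical sketch: gradients for a whole mini-batch at once.
        "X" and "Y" each hold one training example per column."""
        # feedforward: every operation now acts on all columns simultaneously
        activation = X
        activations = [X]
        zs = []
        for b, w in zip(self.biases, self.weights):
            z = np.dot(w, activation)+b      # b broadcasts across the columns
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        # backward pass
        delta = self.cost_derivative(activations[-1], Y) * sigmoid_prime(zs[-1])
        nabla_b = [None]*len(self.biases)
        nabla_w = [None]*len(self.weights)
        nabla_b[-1] = delta.sum(axis=1, keepdims=True)
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        for l in xrange(2, self.num_layers):
            delta = np.dot(self.weights[-l+1].transpose(), delta) * sigmoid_prime(zs[-l])
            nabla_b[-l] = delta.sum(axis=1, keepdims=True)
            nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
        return (nabla_b, nabla_w)

To use this, update_mini_batch would first stack the mini-batch's x and y vectors column-wise into X and Y (for example with np.column_stack) and then make a single call instead of looping over the examples.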

results matching ""

    No results matching ""