DeepLearning.ai Homework: (2-2) -- Optimization Algorithms


title: 'DeepLearning.ai Homework: (2-2) -- Optimization Algorithms'
id: 2018091711
tags:
  - homework
categories:
  - AI
  - Deep Learning
date: 2018-09-17 11:06:06

  1. Don't copy the homework!
  2. I have only written up my thought process here, for my own study.
  3. Don't copy the homework!

First published on my personal blog; welcome to visit.

This week's homework implements the optimization algorithms covered in the lectures:

  • mini-batch
  • momentum
  • Adam

First, standard gradient descent:
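In formula form, the per-layer update applied below is simply (with $\alpha$ the learning rate)

$$
W^{[l]} = W^{[l]} - \alpha\, dW^{[l]}, \qquad b^{[l]} = b^{[l]} - \alpha\, db^{[l]}
$$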

```python
def update_parameters_with_gd(parameters, grads, learning_rate):
    """
    Update parameters using one step of gradient descent

    Arguments:
    parameters -- python dictionary containing your parameters to be updated:
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl
    grads -- python dictionary containing your gradients to update each parameters:
                    grads['dW' + str(l)] = dWl
                    grads['db' + str(l)] = dbl
    learning_rate -- the learning rate, scalar.

    Returns:
    parameters -- python dictionary containing your updated parameters
    """

    L = len(parameters) // 2  # number of layers in the neural networks

    # Update rule for each parameter
    for l in range(L):
        ### START CODE HERE ### (approx. 2 lines)
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads['dW' + str(l+1)]
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads['db' + str(l+1)]
        ### END CODE HERE ###

    return parameters
```

mini-batch

The steps are:

  • shuffle: randomly shuffle the data. np.random.permutation(m) remaps the order of the m examples: it returns a sequence of length m whose entries are the original indices in their new, permuted order (see the small sketch after this list).
  • partition: split the shuffled data into chunks of mini_batch_size. Note that the last mini-batch may be smaller than mini_batch_size when m is not evenly divisible, so it has to be handled separately.
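A minimal toy sketch of the shuffle step (the arrays and sizes here are made up purely for illustration; they are not the assignment's data):

```python
import numpy as np

# Toy illustration of the shuffle step: 5 examples stored column-wise.
np.random.seed(0)
m = 5
X = np.arange(10).reshape(2, m)      # shape (2, 5): 2 features, 5 examples
Y = np.array([[0, 1, 0, 1, 1]])      # shape (1, 5): one label per example

permutation = list(np.random.permutation(m))  # a reordering of [0, 1, 2, 3, 4]
shuffled_X = X[:, permutation]                # columns reordered
shuffled_Y = Y[:, permutation]                # same reordering applied to the labels

print(permutation)
print(shuffled_X)
print(shuffled_Y)
```

Because X and Y are indexed with the same permutation, each column of shuffled_X still lines up with its label in shuffled_Y.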
```python
import math
import numpy as np

# GRADED FUNCTION: random_mini_batches

def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
    mini_batch_size -- size of the mini-batches, integer

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """

    np.random.seed(seed)            # To make your "random" minibatches the same as ours
    m = X.shape[1]                  # number of training examples
    mini_batches = []

    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((1, m))

    # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
    num_complete_minibatches = math.floor(m / mini_batch_size)  # number of mini batches of size mini_batch_size in your partitioning
    for k in range(0, num_complete_minibatches):
        ### START CODE HERE ### (approx. 2 lines)
        mini_batch_X = shuffled_X[:, k * mini_batch_size:(k+1) * mini_batch_size]
        mini_batch_Y = shuffled_Y[:, k * mini_batch_size:(k+1) * mini_batch_size]
        ### END CODE HERE ###
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    # Handling the end case (last mini-batch < mini_batch_size)
    if m % mini_batch_size != 0:
        ### START CODE HERE ### (approx. 2 lines)
        mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size:]
        mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size:]
        ### END CODE HERE ###
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    return mini_batches
```

Momentum

First, initialize the velocity to zero:

```python
# GRADED FUNCTION: initialize_velocity

def initialize_velocity(parameters):
    """
    Initializes the velocity as a python dictionary with:
                - keys: "dW1", "db1", ..., "dWL", "dbL"
                - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
    Arguments:
    parameters -- python dictionary containing your parameters.
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl

    Returns:
    v -- python dictionary containing the current velocity.
                    v['dW' + str(l)] = velocity of dWl
                    v['db' + str(l)] = velocity of dbl
    """

    L = len(parameters) // 2  # number of layers in the neural networks
    v = {}

    # Initialize velocity
    for l in range(L):
        ### START CODE HERE ### (approx. 2 lines)
        v["dW" + str(l+1)] = np.zeros((parameters['W' + str(l+1)].shape[0], parameters['W' + str(l+1)].shape[1]))
        v["db" + str(l+1)] = np.zeros((parameters['b' + str(l+1)].shape[0], parameters['b' + str(l+1)].shape[1]))
        ### END CODE HERE ###

    return v
```

Then iterate according to the formula. Since an exponentially weighted average does not need to keep the previous n values, it can be updated step by step using only the current gradient, which saves memory.
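In formula form, the per-layer update that the code below implements is

$$
v_{dW^{[l]}} = \beta\, v_{dW^{[l]}} + (1-\beta)\, dW^{[l]}, \qquad
W^{[l]} = W^{[l]} - \alpha\, v_{dW^{[l]}}
$$

and likewise for $b^{[l]}$ with $v_{db^{[l]}}$.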

```python
# GRADED FUNCTION: update_parameters_with_momentum

def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
    """
    Update parameters using Momentum

    Arguments:
    parameters -- python dictionary containing your parameters:
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl
    grads -- python dictionary containing your gradients for each parameters:
                    grads['dW' + str(l)] = dWl
                    grads['db' + str(l)] = dbl
    v -- python dictionary containing the current velocity:
                    v['dW' + str(l)] = ...
                    v['db' + str(l)] = ...
    beta -- the momentum hyperparameter, scalar
    learning_rate -- the learning rate, scalar

    Returns:
    parameters -- python dictionary containing your updated parameters
    v -- python dictionary containing your updated velocities
    """

    L = len(parameters) // 2  # number of layers in the neural networks

    # Momentum update for each parameter
    for l in range(L):
        ### START CODE HERE ### (approx. 4 lines)
        # compute velocities
        v["dW" + str(l+1)] = beta * v["dW" + str(l+1)] + (1 - beta) * grads["dW" + str(l+1)]
        v["db" + str(l+1)] = beta * v["db" + str(l+1)] + (1 - beta) * grads["db" + str(l+1)]
        # update parameters (note: b must use the "db" velocity, not "dW")
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v["dW" + str(l+1)]
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v["db" + str(l+1)]
        ### END CODE HERE ###

    return parameters, v
```

Adam

Not much to say here: initialize first, then just follow the formulas.
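For reference, with $t$ the Adam step counter, the update that the code below implements for each $W^{[l]}$ (and likewise for $b^{[l]}$) is

$$
\begin{aligned}
v_{dW} &= \beta_1\, v_{dW} + (1-\beta_1)\, dW, &\qquad v^{\text{corrected}}_{dW} &= \frac{v_{dW}}{1-\beta_1^{\,t}}, \\
s_{dW} &= \beta_2\, s_{dW} + (1-\beta_2)\, (dW)^2, &\qquad s^{\text{corrected}}_{dW} &= \frac{s_{dW}}{1-\beta_2^{\,t}}, \\
W &= W - \alpha\, \frac{v^{\text{corrected}}_{dW}}{\sqrt{s^{\text{corrected}}_{dW}} + \varepsilon}.
\end{aligned}
$$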

```python
def initialize_adam(parameters):
    """
    Initializes v and s as two python dictionaries with:
                - keys: "dW1", "db1", ..., "dWL", "dbL"
                - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.

    Arguments:
    parameters -- python dictionary containing your parameters.
                    parameters["W" + str(l)] = Wl
                    parameters["b" + str(l)] = bl

    Returns:
    v -- python dictionary that will contain the exponentially weighted average of the gradient.
                    v["dW" + str(l)] = ...
                    v["db" + str(l)] = ...
    s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
                    s["dW" + str(l)] = ...
                    s["db" + str(l)] = ...
    """

    L = len(parameters) // 2  # number of layers in the neural networks
    v = {}
    s = {}

    # Initialize v, s. Input: "parameters". Outputs: "v, s".
    for l in range(L):
        ### START CODE HERE ### (approx. 4 lines)
        v["dW" + str(l+1)] = np.zeros((parameters['W' + str(l+1)].shape[0], parameters['W' + str(l+1)].shape[1]))
        v["db" + str(l+1)] = np.zeros((parameters['b' + str(l+1)].shape[0], parameters['b' + str(l+1)].shape[1]))
        s["dW" + str(l+1)] = np.zeros((parameters['W' + str(l+1)].shape[0], parameters['W' + str(l+1)].shape[1]))
        s["db" + str(l+1)] = np.zeros((parameters['b' + str(l+1)].shape[0], parameters['b' + str(l+1)].shape[1]))
        ### END CODE HERE ###

    return v, s
```
```python
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
                                beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
    """
    Update parameters using Adam

    Arguments:
    parameters -- python dictionary containing your parameters:
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl
    grads -- python dictionary containing your gradients for each parameters:
                    grads['dW' + str(l)] = dWl
                    grads['db' + str(l)] = dbl
    v -- Adam variable, moving average of the first gradient, python dictionary
    s -- Adam variable, moving average of the squared gradient, python dictionary
    learning_rate -- the learning rate, scalar.
    beta1 -- Exponential decay hyperparameter for the first moment estimates
    beta2 -- Exponential decay hyperparameter for the second moment estimates
    epsilon -- hyperparameter preventing division by zero in Adam updates

    Returns:
    parameters -- python dictionary containing your updated parameters
    v -- Adam variable, moving average of the first gradient, python dictionary
    s -- Adam variable, moving average of the squared gradient, python dictionary
    """

    L = len(parameters) // 2         # number of layers in the neural networks
    v_corrected = {}                 # Initializing first moment estimate, python dictionary
    s_corrected = {}                 # Initializing second moment estimate, python dictionary

    # Perform Adam update on all parameters
    for l in range(L):
        # Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
        ### START CODE HERE ### (approx. 2 lines)
        v["dW" + str(l+1)] = beta1 * v["dW" + str(l+1)] + (1 - beta1) * grads['dW' + str(l+1)]
        v["db" + str(l+1)] = beta1 * v["db" + str(l+1)] + (1 - beta1) * grads['db' + str(l+1)]
        ### END CODE HERE ###

        # Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
        ### START CODE HERE ### (approx. 2 lines)
        v_corrected["dW" + str(l+1)] = v["dW" + str(l+1)] / (1 - beta1 ** t)
        v_corrected["db" + str(l+1)] = v["db" + str(l+1)] / (1 - beta1 ** t)
        ### END CODE HERE ###

        # Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
        ### START CODE HERE ### (approx. 2 lines)
        s["dW" + str(l+1)] = beta2 * s["dW" + str(l+1)] + (1 - beta2) * (grads['dW' + str(l+1)] ** 2)
        s["db" + str(l+1)] = beta2 * s["db" + str(l+1)] + (1 - beta2) * (grads['db' + str(l+1)] ** 2)
        ### END CODE HERE ###

        # Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
        ### START CODE HERE ### (approx. 2 lines)
        s_corrected["dW" + str(l+1)] = s["dW" + str(l+1)] / (1 - beta2 ** t)
        s_corrected["db" + str(l+1)] = s["db" + str(l+1)] / (1 - beta2 ** t)
        ### END CODE HERE ###

        # Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
        # (note: the b update must start from parameters["b" + str(l+1)], not from W)
        ### START CODE HERE ### (approx. 2 lines)
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v_corrected["dW" + str(l+1)] / (s_corrected["dW" + str(l+1)] ** 0.5 + epsilon)
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v_corrected["db" + str(l+1)] / (s_corrected["db" + str(l+1)] ** 0.5 + epsilon)
        ### END CODE HERE ###

    return parameters, v, s
```

Finally, plug everything into the model function and pick the optimization algorithm you want with the optimizer keyword (a small usage sketch follows the code).

```python
def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
          beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
    """
    3-layer neural network model which can be run in different optimizer modes.

    Arguments:
    X -- input data, of shape (2, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
    layers_dims -- python list, containing the size of each layer
    learning_rate -- the learning rate, scalar.
    mini_batch_size -- the size of a mini batch
    beta -- Momentum hyperparameter
    beta1 -- Exponential decay hyperparameter for the past gradients estimates
    beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
    epsilon -- hyperparameter preventing division by zero in Adam updates
    num_epochs -- number of epochs
    print_cost -- True to print the cost every 1000 epochs

    Returns:
    parameters -- python dictionary containing your updated parameters
    """

    L = len(layers_dims)             # number of layers in the neural networks
    costs = []                       # to keep track of the cost
    t = 0                            # initializing the counter required for Adam update
    seed = 10                        # For grading purposes, so that your "random" minibatches are the same as ours

    # Initialize parameters
    parameters = initialize_parameters(layers_dims)

    # Initialize the optimizer
    if optimizer == "gd":
        pass  # no initialization required for gradient descent
    elif optimizer == "momentum":
        v = initialize_velocity(parameters)
    elif optimizer == "adam":
        v, s = initialize_adam(parameters)

    # Optimization loop
    for i in range(num_epochs):

        # Define the random minibatches. We increment the seed to reshuffle differently the dataset after each epoch
        seed = seed + 1
        minibatches = random_mini_batches(X, Y, mini_batch_size, seed)

        for minibatch in minibatches:

            # Select a minibatch
            (minibatch_X, minibatch_Y) = minibatch

            # Forward propagation
            a3, caches = forward_propagation(minibatch_X, parameters)

            # Compute cost
            cost = compute_cost(a3, minibatch_Y)

            # Backward propagation
            grads = backward_propagation(minibatch_X, minibatch_Y, caches)

            # Update parameters
            if optimizer == "gd":
                parameters = update_parameters_with_gd(parameters, grads, learning_rate)
            elif optimizer == "momentum":
                parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
            elif optimizer == "adam":
                t = t + 1  # Adam counter
                parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
                                                               t, learning_rate, beta1, beta2, epsilon)

        # Print the cost every 1000 epoch
        if print_cost and i % 1000 == 0:
            print("Cost after epoch %i: %f" % (i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('epochs (per 100)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()

    return parameters
```
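As a hypothetical usage sketch (the names train_X, train_Y and the layer sizes below are assumptions in the spirit of the assignment's dataset, not something defined in this post):

```python
# Hypothetical usage sketch -- assumes train_X (shape 2 x m), train_Y (shape 1 x m)
# and the assignment's helper functions are already loaded.
layers_dims = [train_X.shape[0], 5, 2, 1]   # assumed 3-layer architecture

# The optimizer keyword selects which update rule model() uses internally.
parameters_gd       = model(train_X, train_Y, layers_dims, optimizer="gd")
parameters_momentum = model(train_X, train_Y, layers_dims, optimizer="momentum")
parameters_adam     = model(train_X, train_Y, layers_dims, optimizer="adam")
```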

Results

gradient descent

gradient descent with momentum

Adam mode

The difference between the three optimizers is quite noticeable.

