February 2019

# A Closer Look at Neural Networks

## Forward Propagation

Figure 1: Forward propagation through the neural network

```python
x1 = .5
w1 = .2
x2 = 4
w2 = .5
b = .03

z = x1 * w1 + x2 * w2 + b

print(z)
```

The weighted sum z comes out to 2.13. Recall that the next step is to run this value through an activation function. Next, enter the following code into a new cell to create the activation function:

```python
import numpy as np

def sigmoid_activation(weighted_sum):
    return 1.0 / (1.0 + np.exp(-1 * weighted_sum))

a = sigmoid_activation(z)
print(a)
```

The sigmoid_activation function returns a value between 0 and 1, no matter how large or small the input. Enter the following code to test it:

```python
print(sigmoid_activation(1000000))
print(sigmoid_activation(.000001))
```
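Because sigmoid_activation is written with np.exp, it also works element-wise on NumPy arrays, which is handy later when a whole layer's outputs are computed at once. A quick sketch (the sample values here are arbitrary):

```python
import numpy as np

def sigmoid_activation(weighted_sum):
    return 1.0 / (1.0 + np.exp(-1 * weighted_sum))

# Applied to an array, the function squashes every element into (0, 1),
# saturating toward 0 for large negative inputs and toward 1 for large
# positive ones.
weighted_sums = np.array([-20.0, -1.0, 0.0, 1.0, 20.0])
print(sigmoid_activation(weighted_sums))
```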

## Building a Neural Network

Figure 2: The neural network to build

```python
def initialize_neural_network(num_inputs, num_hidden_layers,
                              num_nodes_hidden, num_nodes_output):

    num_nodes_previous = num_inputs # number of nodes in the previous layer

    network = {}

    # Loop through each layer and randomly initialize
    # the weights and biases associated with each layer.
    for layer in range(num_hidden_layers + 1):

        if layer == num_hidden_layers:
            layer_name = 'output'
            num_nodes = num_nodes_output
        else:
            layer_name = 'layer_{}'.format(layer + 1)
            num_nodes = num_nodes_hidden[layer]

        # Initialize weights and bias for each node.
        network[layer_name] = {}
        for node in range(num_nodes):
            node_name = 'node_{}'.format(node + 1)
            network[layer_name][node_name] = {
                'weights': np.around(np.random.uniform(size=num_nodes_previous),
                                     decimals=2),
                'bias': np.around(np.random.uniform(size=1), decimals=2),
            }

        num_nodes_previous = num_nodes

    return network
```

```python
network1 = initialize_neural_network(10, 5, [32, 32, 32, 32, 32], 2)
```

```python
mnist_network = initialize_neural_network(784, 2, [32, 32], 10)
print(network1)
```
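Because initialize_neural_network returns a plain dictionary, the result can be inspected directly. As a sketch, the helper below (count_parameters is my own name, not from the article) tallies every node's weights and bias; for network1, the layer sizes 10 → 32 → 32 → 32 → 32 → 32 → 2 give 32×11 + 4×(32×33) + 2×33 = 4,642 parameters in total:

```python
import numpy as np

def initialize_neural_network(num_inputs, num_hidden_layers,
                              num_nodes_hidden, num_nodes_output):
    # Same construction as in the article: random weights and biases
    # for each node, keyed by layer name and node name.
    num_nodes_previous = num_inputs
    network = {}
    for layer in range(num_hidden_layers + 1):
        if layer == num_hidden_layers:
            layer_name = 'output'
            num_nodes = num_nodes_output
        else:
            layer_name = 'layer_{}'.format(layer + 1)
            num_nodes = num_nodes_hidden[layer]
        network[layer_name] = {}
        for node in range(num_nodes):
            network[layer_name]['node_{}'.format(node + 1)] = {
                'weights': np.around(np.random.uniform(size=num_nodes_previous),
                                     decimals=2),
                'bias': np.around(np.random.uniform(size=1), decimals=2),
            }
        num_nodes_previous = num_nodes
    return network

def count_parameters(network):
    # Sum the sizes of every node's weight vector and bias.
    total = 0
    for layer in network.values():
        for node in layer.values():
            total += node['weights'].size + node['bias'].size
    return total

network1 = initialize_neural_network(10, 5, [32, 32, 32, 32, 32], 2)
print(count_parameters(network1))  # 4642
```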

## Exploring the Neural Network

```python
from random import seed
np.random.seed(2019)
input_values = np.around(np.random.uniform(size=10), decimals=2)

print('Input values = {}'.format(input_values))
```

```python
node_weights = network1['layer_1']['node_1']['weights']
node_bias = network1['layer_1']['node_1']['bias']

print(node_weights)
print(node_bias)
```

```python
def calculate_weighted_sum(inputs, weights, bias):
    return np.sum(inputs * weights) + bias
```

```python
weighted_sum_for_node = calculate_weighted_sum(input_values, node_weights,
                                               node_bias)
print('Weighted sum for layer1, node1 = {}'.format(
    np.around(weighted_sum_for_node[0], decimals=2)))
```

```python
node_output_value = sigmoid_activation(weighted_sum_for_node)
print('Output value for layer1, node1 = {}'.format(
    np.around(node_output_value[0], decimals=2)))
```
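With sigmoid_activation and calculate_weighted_sum in hand, a loop can push an input vector through every layer: the outputs of one layer become the inputs to the next, ending at the output layer. The sketch below shows how the pieces fit together (the forward_propagate name and the tiny demo network are my own, not from the article); in the notebook, the equivalent call would be forward_propagate(network1, input_values):

```python
import numpy as np

def sigmoid_activation(weighted_sum):
    return 1.0 / (1.0 + np.exp(-1 * weighted_sum))

def calculate_weighted_sum(inputs, weights, bias):
    return np.sum(inputs * weights) + bias

def forward_propagate(network, inputs):
    layer_inputs = np.asarray(inputs)
    # Hidden layers come first and 'output' last; Python dicts preserve
    # the insertion order used when the network was built.
    for layer in network.values():
        layer_outputs = []
        for node in layer.values():
            weighted_sum = calculate_weighted_sum(
                layer_inputs, node['weights'], node['bias'])
            # weighted_sum is a size-1 array because bias is; take element 0.
            layer_outputs.append(sigmoid_activation(weighted_sum)[0])
        layer_inputs = np.array(layer_outputs)
    return layer_inputs

# Tiny demo network: 2 inputs -> 1 hidden node -> 1 output node.
demo_network = {
    'layer_1': {'node_1': {'weights': np.array([0.5, 0.5]),
                           'bias': np.array([0.0])}},
    'output':  {'node_1': {'weights': np.array([1.0]),
                           'bias': np.array([0.0])}},
}
print(forward_propagate(demo_network, np.array([1.0, 1.0])))
```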

## Wrapping Up

Frank La Vigne works at Microsoft as an AI Technology Solutions Professional, where he helps companies achieve more by getting the most out of their data with analytics and AI. He also co-hosts the DataDriven podcast. He blogs regularly, and you can watch him on his YouTube channel, "Frank's World TV" (FranksWorld.TV).

Discuss this article in the MSDN Magazine forum