Below I have attached four pictures of the errors as images.
Basically, I am training my neural network (with a 2,3,1 architecture): two input neurons in the input layer, 3 neurons in the hidden layer, and 1 output neuron in the output layer.
I trained the network with backpropagation, and I get a small error (specified in the pictures).
Can someone help me with it?
ValueError: shapes (200,200) and (1,3) not aligned: 200 (dim 1) != 1 (dim 0)
import numpy as np
import random
# Generate the training data set according to the function y = x1^2 + x2^2
input1_train = np.random.uniform(low=-1, high=1, size=(200,))
input2_train = np.random.uniform(low=-1, high=1, size=(200,))
input1_sq_train = input1_train ** 2
input2_sq_train = input2_train ** 2
# merge the two training inputs into one matrix
input_merge = np.column_stack((input1_train, input2_train))
# normalize the input data
input_merge = input_merge / np.amax(input_merge, axis=0)
# output of the training data
y_output_train = input1_sq_train + input2_sq_train
# normalize the output data
y_output_train = y_output_train / 100
# Generate the test data set according to the function y = x1^2 + x2^2
input1_test = np.random.uniform(low=-1, high=1, size=(100,))
input2_test = np.random.uniform(low=-1, high=1, size=(100,))
input1_sq_test = input1_test ** 2
input2_sq_test = input2_test ** 2
y_output_test = input1_sq_test + input2_sq_test
# merge the two test inputs into one matrix
input_merge1 = np.column_stack((input1_test, input2_test))
# normalize the input test data
input_merge1 = input_merge1 / np.amax(input_merge1, axis=0)
# normalize the output test data
y_output_test = y_output_test / 100
# Generate the validation data set according to the function y = x1^2 + x2^2
input1_validation = np.random.uniform(low=-1, high=1, size=(50,))
input2_validation = np.random.uniform(low=-1, high=1, size=(50,))
input1_sq_validation = input1_validation ** 2
input2_sq_validation = input2_validation ** 2
# merge the two validation inputs into one matrix
input_merge2 = np.column_stack((input1_validation, input2_validation))
# normalize the input validation data
input_merge2 = input_merge2 / np.amax(input_merge2, axis=0)
y_output_validation = input1_sq_validation + input2_sq_validation
# normalize the output validation data
y_output_validation = y_output_validation / 100
class Neural_Network(object):
    def __init__(self):
        # network dimensions
        self.inputSize = 2
        self.outputSize = 1
        self.hiddenSize = 3
        # weights
        self.W1 = np.random.randn(self.inputSize, self.hiddenSize)   # (2x3) weight matrix from input to hidden layer
        self.W2 = np.random.randn(self.hiddenSize, self.outputSize)  # (3x1) weight matrix from hidden to output layer

    def forward(self, input_merge):
        # forward propagation through the network
        self.z = np.dot(input_merge, self.W1)   # dot product of X (input) and the first set of (2x3) weights
        self.z2 = self.sigmoid(self.z)          # activation function
        self.z3 = np.dot(self.z2, self.W2)      # dot product of the hidden layer (z2) and the second set of (3x1) weights
        o = self.sigmoid(self.z3)               # final activation function
        return o
    def costFunction(self, input_merge, y_output_train):
        # compute the cost for a given X, y, using the weights already stored in the class
        self.o = self.forward(input_merge)
        J = 0.5 * np.sum((y_output_train - self.o) ** 2)
        return J

    def costFunctionPrime(self, input_merge, y_output_train):
        # compute the derivatives with respect to W1 and W2 for a given X and y
        self.o = self.forward(input_merge)
        delta3 = np.multiply(-(y_output_train - self.o),
                             self.sigmoidPrime(self.o))
        dJdW2 = np.dot(self.z2.T, delta3)
        delta2 = np.dot(delta3, self.W2.T) * self.sigmoidPrime(self.z2)
        dJdW1 = np.dot(input_merge.T, delta2)
        return dJdW1, dJdW2
    def sigmoid(self, s):
        # activation function
        return 1 / (1 + np.exp(-s))

    def sigmoidPrime(self, s):
        # derivative of the sigmoid, in terms of the sigmoid's output s
        return s * (1 - s)

    def backward(self, input_merge, y_output_train, o):
        # backward propagate through the network
        self.o_error = y_output_train - o                            # error in output
        self.o_delta = self.o_error * self.sigmoidPrime(o)           # apply the sigmoid derivative to the output error
        self.z2_error = self.o_delta.dot(self.W2.T)                  # z2 error: how much the hidden layer weights contributed to the output error
        self.z2_delta = self.z2_error * self.sigmoidPrime(self.z2)   # apply the sigmoid derivative to the z2 error
        self.W1 += input_merge.T.dot(self.z2_delta)                  # adjust the first set (input --> hidden) of weights
        self.W2 += self.z2.T.dot(self.o_delta)                       # adjust the second set (hidden --> output) of weights
    def train(self, input_merge, y_output_train):
        o = self.forward(input_merge)
        self.backward(input_merge, y_output_train, o)

NN = Neural_Network()
for i in range(1000):  # trains the NN 1,000 times
    # print("Actual Output for training data: \n" + str(y_output_train))
    # print("Predicted Output for training data: \n" + str(NN.forward(input_merge)))
    print("Loss for training: \n"
          + str(np.mean(np.square(y_output_train
                                  - NN.forward(input_merge)))))   # mean sum squared loss
    NN.train(input_merge, y_output_train)
# NN.test(input_merge1, y_output_test)
# NN.validation(input_merge2, y_output_validation)
So, first of all, it is a fair practice to post StackOverflow questions in an MCVE-based formulation. Here that means also copying the complete Error-Traceback, including the line numbers where the Traceback was thrown. OK, you'll get it right next time.
Your problem is not a small error: your code is principally wrong, in that it tries to process a pair of arrays whose shapes do not match the operation applied to them. np.dot() is the right suspect here, since the "shapes not aligned" ValueError is raised by np.dot() (an elementwise np.multiply() would instead complain that its operands could not be broadcast together), and since .costFunctionPrime() is never explicitly called, the failing call must be the one inside .backward(), namely self.o_delta.dot(self.W2.T).
Whatever tries to process that pair of matrix/vector arrays, one being [200,200] and the other [1,3], simply cannot succeed.
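To see where the mismatched pair comes from, here is a minimal sketch, using only the shapes from your own code: subtracting the network's 2-D (200,1) output from a 1-D (200,) target silently broadcasts to (200,200), which then cannot be dot-multiplied with self.W2.T of shape (1,3).

import numpy as np

y = np.zeros((200,))     # y_output_train: 1-D target vector, shape (200,)
o = np.zeros((200, 1))   # NN.forward() output: 2-D column, shape (200,1)

o_error = y - o          # broadcasting: (200,) - (200,1) -> (200,200) !
print(o_error.shape)     # (200, 200)

W2_T = np.zeros((1, 3))  # self.W2.T, shape (1,3)
try:
    np.dot(o_error, W2_T)
except ValueError as e:
    print(e)             # shapes (200,200) and (1,3) not aligned: 200 (dim 1) != 1 (dim 0)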
So the error is in your code / syntax. Check it, perhaps with a pre-printed shape-check:
def aFormatSHAPE( anArray ):
    return "[{0: >4d},{1: >4d}]".format( anArray.shape[0],
                                         anArray.shape[1]
                                         )

def aHelperPrintSHAPE( anArray1, anArray2 ):
    try:
        print( "CHK:{0:}-(op)-{1:}".format( aFormatSHAPE( anArray1 ),
                                            aFormatSHAPE( anArray2 )
                                            )
               )
    except:
        pass
    return
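For instance, a call placed (hypothetically) right before the suspect np.dot() inside .backward() would print the offending shapes just before the exception is thrown:

aHelperPrintSHAPE( self.o_delta, self.W2.T )    # prints CHK:[ 200, 200]-(op)-[   1,   3]
self.z2_error = self.o_delta.dot( self.W2.T )   # the line that then raises the ValueError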
Once you have repaired your code so that it satisfies all the common matrix-vector algebra rules (on how additions, subtractions, multiplications and dot-products are processed on arrays/vectors), your small error is solved.
You should never see anything like:
CHK:[ 200, 200]-(op)-[   1,   3]
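In this particular case, the repair most likely amounts to feeding .train() a column-vector target, so that the subtraction in .backward() stays (200,1) instead of silently broadcasting to (200,200). A minimal sketch, assuming the rest of the class stays as posted (note it still uses no learning rate, just like your original update rule):

y_col = y_output_train.reshape(-1, 1)   # (200,) -> (200,1), matching NN.forward()'s output
NN = Neural_Network()
for i in range(1000):
    NN.train(input_merge, y_col)        # o_error is (200,1); (200,1).dot((1,3)) -> (200,3), OK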
It seems to me that your matrix dimensions do not fit: you cannot multiply a (200,200) matrix by a (1,3) one. Simply put, the number of columns of the first matrix must match the number of rows of the second matrix. Hope this helps.
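For illustration, the rule in numpy terms (the shapes below are chosen only as an example):

import numpy as np

A = np.ones((200, 3))   # 200 rows, 3 columns
B = np.ones((3, 1))     # 3 rows, 1 column
C = np.dot(A, B)        # inner dimensions match (3 == 3) -> result shape (200, 1)
print(C.shape)          # (200, 1)

A (200,200) dotted with a (1,3) violates exactly this rule, because 200 != 1.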