A Neural Network Learning Method Using Samples with Different Confidence Levels
Abstract: To address the model-fitting problem with samples of different confidence levels, this paper proposes a two-stage ("twice") learning method based on neural networks. It is pointed out that the real model is a variation of the experimental model, and that the neural network approximating the mathematical expectation of the real model is the best network for fusing the information of the prior samples and the real samples. In the first learning stage, the network is trained on the prior samples only, and the error capacity intervals of the soft points, which are determined by the hard-point information, are computed. In the second stage, both the prior samples and the real samples are used as training samples, and the errors of the input/target pairs during training are modified using the soft-point error capacity intervals and the hard-point error-sensitivity coefficients; the second learning yields a combined network that fits the real samples accurately while making maximal use of the prior-sample information. Compared with Knowledge-Based Neural Networks (KBNN), the proposed method is simpler, easier to control, and has a clearer logical interpretation.

Keywords:
- Neural network
- Model fitting
- Knowledge-Based Neural Network (KBNN)
- Prior knowledge
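The abstract outlines the two-stage procedure but gives no implementation details or formulas. The following is a minimal NumPy sketch of the idea under stated assumptions: the experimental and real models, the rule that derives each soft point's error capacity interval from its distance to the nearest hard point, and the names `capacity`, `hard_sensitivity`, and `modified_error` are all illustrative choices for this sketch, not taken from the paper.

```python
# A minimal sketch of the two-stage ("twice") learning idea described in the
# abstract, using a small NumPy MLP.  The experimental model (sin), the real
# model (a shifted variation), the capacity-interval rule, and the sensitivity
# coefficient are assumptions made for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_hidden=20):
    """One-hidden-layer tanh network: y = w2 @ tanh(w1 @ x + b1) + b2."""
    return {
        "w1": rng.normal(0, 1, (n_hidden, 1)),
        "b1": np.zeros((n_hidden, 1)),
        "w2": rng.normal(0, 1, (1, n_hidden)),
        "b2": np.zeros((1, 1)),
    }

def forward(net, x):
    h = np.tanh(net["w1"] @ x + net["b1"])        # hidden activations
    return net["w2"] @ h + net["b2"], h

def train(net, x, t, err_fn, lr=0.01, epochs=3000):
    """Gradient descent driven by the (possibly modified) error err_fn(y - t)."""
    for _ in range(epochs):
        y, h = forward(net, x)
        e = err_fn(y - t)                          # modified error signal
        # backpropagation for the one-hidden-layer network
        g_w2 = e @ h.T
        g_b2 = e.sum(axis=1, keepdims=True)
        g_z = (net["w2"].T @ e) * (1 - h**2)
        g_w1 = g_z @ x.T
        g_b1 = g_z.sum(axis=1, keepdims=True)
        for k, g in zip(("w1", "b1", "w2", "b2"), (g_w1, g_b1, g_w2, g_b2)):
            net[k] -= lr * g / x.shape[1]
    return net

# Prior (soft) samples from an assumed "experimental model".
x_soft = np.linspace(-3, 3, 40).reshape(1, -1)
t_soft = np.sin(x_soft)
# A few real (hard) samples from an assumed "real model", a variation of it.
x_hard = np.array([[-2.0, 0.0, 2.0]])
t_hard = np.sin(x_hard) + 0.3

# ---- First learning: prior samples only --------------------------------
net = train(init_net(), x_soft, t_soft, err_fn=lambda e: e)

# Soft-point error capacity intervals; here assumed to grow with the distance
# from each soft point to its nearest hard point.
dist = np.min(np.abs(x_soft.T - x_hard), axis=1).reshape(1, -1)
capacity = 0.3 * dist / dist.max()                 # allowed residual per soft point

# ---- Second learning: prior + real samples -----------------------------
hard_sensitivity = 5.0                             # weight on hard-point errors
x_all = np.hstack([x_soft, x_hard])
t_all = np.hstack([t_soft, t_hard])
n_soft = x_soft.shape[1]

def modified_error(e):
    e = e.copy()
    # Soft points: residuals inside the capacity interval are forgiven,
    # larger ones are shrunk toward the interval boundary.
    es = e[:, :n_soft]
    e[:, :n_soft] = np.sign(es) * np.maximum(np.abs(es) - capacity, 0.0)
    # Hard points: amplified by the error-sensitivity coefficient.
    e[:, n_soft:] *= hard_sensitivity
    return e

net = train(net, x_all, t_all, err_fn=modified_error)

y_hard, _ = forward(net, x_hard)
print("fit at hard points:", np.round(y_hard, 3), "targets:", t_hard)
```

After the second stage, the network is pulled toward the hard (real) samples wherever they exist, while soft (prior) samples far from any hard point still constrain its shape, which is the trade-off the abstract describes.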