In practical applications, the transfer function is chosen according to the input-output relationship: if the inputs contain no negative values, the log-sigmoid function is adopted; if negative values are included, the tan-sigmoid function is adopted. In this paper, the hidden-layer neurons use a sigmoid transfer function and the output-layer neurons use a pure linear transfer function.
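As a brief illustration, the three transfer functions mentioned above can be written in a few lines. The following sketch is in Python/NumPy and borrows MATLAB's conventional names (logsig, tansig, purelin) purely for readability; it is not the code used in this work.

import numpy as np

def logsig(x):
    # log-sigmoid: maps any real input into (0, 1); suited to inputs without negative values
    return 1.0 / (1.0 + np.exp(-x))

def tansig(x):
    # tan-sigmoid (hyperbolic tangent): maps inputs into (-1, 1), so it also covers negative values
    return np.tanh(x)

def purelin(x):
    # pure linear transfer function, used here for the output-layer neuron
    return x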
4.2 Selection of hidden layer nodes
Many researchers have studied the optimal number of hidden-layer nodes. The Kolmogorov theorem shows that, provided a single hidden layer contains enough nodes, a neural network can approximate a nonlinear function with arbitrary accuracy. However, for a finite mapping from inputs to outputs, an unlimited number of hidden nodes is unnecessary, and how to choose the number of hidden-layer nodes remains an unresolved problem; in practice it is determined by experience and experimental design. In general, a small number of hidden nodes is chosen, provided the input-output relationship is still reflected accurately, so as to keep the network structure simple. However, the fewer the nodes, the worse the generalization ability of the network. Conversely, the more nodes the hidden layer has, the higher the complexity of the training process, and this situation leads to overfitting. Many factors must therefore be weighed together in the design. In a concrete design, a single hidden layer is chosen first; if increasing the number of nodes in this hidden layer cannot yield a better network, further hidden layers and hidden-layer nodes are added.
In this paper, the four influencing factors (voltage, concentration, temperature and flow rate) are the input-layer nodes, and the separation percent is the single output-layer node. The number of nodes in one hidden layer should therefore lie between 4 and 12, and the candidate network structures are 4:4:1, 4:10:1 and 4:12:1. The networks were trained with MATLAB, and the performance plots are shown in Fig. 9.
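For illustration only, the comparison of the three candidate structures could be reproduced along the following lines. This is a minimal Python sketch using scikit-learn, not the MATLAB Neural Network Toolbox code actually used in this work, and the arrays X (columns: voltage, concentration, temperature, flow rate) and y (separation percent) are placeholders for the experimental data.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# placeholders standing in for the measured data (four influencing factors -> SP)
X = np.random.rand(100, 4)
y = np.random.rand(100)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

for hidden in (4, 10, 12):
    # sigmoid hidden layer and linear output layer, as described above
    net = MLPRegressor(hidden_layer_sizes=(hidden,), activation='logistic',
                       max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    mse = mean_squared_error(y_val, net.predict(X_val))
    print(f"4:{hidden}:1  validation MSE = {mse:.5f}")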
In Fig. 9(A), training runs for 74 epochs in total; the best validation performance is above 10⁻³, the MSE (mean squared error) of the training data is below 10⁻³, and the MSEs of the test and validation data are about 0.005. In Fig. 9(B), training runs for 33 epochs; the best validation performance is above 10⁻³, the MSE of the training data is below 10⁻³, and the MSEs of the test and validation data are only slightly above 10⁻³. In Fig. 9(C), training runs for 101 epochs; the best validation performance is above 10⁻³, and the MSEs of the training, test and validation data are all above 10⁻³. These three plots show that 4:10:1 is the best network structure, since it has the lowest MSE values and the shortest training time.
5. Conclusions
BP neural networks and improved BP algorithms were used as two methods for predicting the separation percent in the electrodialysis experiments on sodium chloride solution, and the improved BP algorithms proved superior to the BP neural network. The improved BP algorithms compensate for the shortcomings of the BP neural network, namely an unsuitable learning rate and poorly adjusted weights during training, by increasing the learning rate and the weights appropriately. The flexible BP algorithm is one such improved BP algorithm, and its predictions are clearly better than those of the BP neural network.
The prediction capabilities of the BP neural network and the improved BP algorithms were discussed and studied under different training parameters (transfer functions of the neurons, number of hidden-layer neurons and learning rate), and the optimal training parameters were obtained. In this paper, the hidden-layer neurons use a sigmoid transfer function and the output-layer neurons use a pure linear transfer function. The 4:10:1 network is the best network structure, so the optimal number of hidden-layer nodes is 10. An increase ratio of 1.05 for the learning rate gives the best results when training the data. However, the optimal values of the training parameters are limited by the positioning and polarization of the experimental equipment.
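A minimal sketch of the adaptive learning-rate rule referred to above is given below, in Python. The increase ratio of 1.05 comes from the text; the decrease ratio of 0.7 and the error-growth threshold of 1.04 are illustrative assumptions, not values reported here.

def update_learning_rate(lr, new_error, old_error,
                         lr_inc=1.05, lr_dec=0.7, max_err_inc=1.04):
    # Adaptive learning-rate rule (sketch): grow the rate while training improves,
    # shrink it and reject the step when the error grows too much.
    if new_error < old_error:
        return lr * lr_inc, True      # keep the weight update and enlarge the step
    elif new_error > old_error * max_err_inc:
        return lr * lr_dec, False     # discard the weight update and reduce the step
    else:
        return lr, True               # keep the update, leave the rate unchanged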
Concentration, flow rate, temperature and voltage all have nonlinear relationships with the separation percent: temperature and voltage are positively correlated with it, while concentration and flow rate are negatively correlated with it. For such nonlinear relationships, the improved BP algorithms give better predictions. The experimental results show that the improved BP algorithms have generalization, efficiency and adaptive ability on complex data sets, which makes them an attractive choice for modelling complex systems such as water-treatment processes and membrane technology.
Studies on prediction of separation percent in electrodialysis process via BP neural networks and improved BP algorithms
Abstract
In the electrodialysis process, separation percent (SP) has nonlinear relationships with a number of influencing factors (feed concentration (C), flow rate of the dilute compartment (Q), reaction temperature (T) and applied voltage (V)), and these relationships are hard to express with a simple formula, although the four influencing factors have remarkable effects on SP. In this paper, the four factors were studied in electrodialysis experiments. Back propagation (BP) neural networks and improved BP algorithms were applied to the prediction of SP, and their prediction capabilities reflect generalization and adaptive abilities on complex data whose variables have nonlinear relationships with each other. With different structures of neural networks, transfer functions of neurons and learning rates, the optimum training parameters were obtained. Comparing BP neural networks with improved BP algorithms, the improved BP algorithms were better than the standard BP algorithm because they change the increase ratios of the learning rates and weights appropriately. Under high temperatures and voltages, the improved BP algorithms were also predicted to perform better, because they have better generalization ability for high values.
Keywords: BP neural networks; Improved BP algorithms; Electrodialysis; Separation percent; Flexible BP algorithm; Adaptive learning rate method
1. Introduction
Electrodialysis (ED) is an electro-membrane process for separation of ions across charged membranes from one solution to another with the aid of an electrical potential difference used as a driving force. This process has been widely used for production of drinking and processed water from brackish water and sea water, treatment of industrial effluents, recovery of useful materials from effluents and salt production. The basic principles and applications of ED were reviewed in the literature[1–6]. Numerous versatile industrial applications of ED using ion-exchange membranes were developed and commercialized because of their high chemical stability, flexibility and high ionic conductivity due to their strong ionic characteristics[7–10]. Two different types of ion-exchange membranes are used in conventional electrodialysis: cation-exchange (CEM) and anion-exchange (AEM) membranes, which are permeable to cationic and anionic species, respectively[11].
However, in operating an electrodialyzer, the current density should be maintained less than the limiting current density because water dissociation gives rise to scale formation and membrane breakages[12]. So the determination of the limiting current density and potential for the system is also performed. The limiting current density is the maximum current density (current per unit membrane area) that can be used without causing negative effects such as higher electrical resistance and lower current efficiency. At the limiting current density, the concentration of a cation or an anion at the surfaces of the cation-exchange or anion-exchange membrane, as appropriate, in the cells with the depleted solution will be zero[12–14]. At and beyond the limiting current density, H⁺ and OH⁻ generated upon dissociation of water transport a part of the electric current[15].
Artificial neural networks (ANNs) utilize interconnected mathematical nodes or neurons to form a network that can model complex functional relationships[16]. Their development started in the 1940s to help cognitive scientists understand the complexity of the nervous system. They have evolved steadily and been adopted in many areas of science. Basically, ANNs are numerical structures inspired by the learning process in the human brain. They are constructed and used as alternative mathematical tools to solve a diversity of problems in the fields of system identification, forecasting, pattern recognition, classification, process control and many others[17]. Artificial neural networks have been used in a wide range of membrane process applications (reverse osmosis, nanofiltration, ultrafiltration, microfiltration, membrane filtration, gas separation, membrane bioreactor and fuel cell)[18]. However, there are only a few records in the literature that apply artificial neural networks to the prediction of SP in the electrodialysis process.
One ANN which has received the most attention is the back propagation network (BPN)[19]. BPNs have a hierarchical feed-forward network frame. In the classical structure of BPNs, the outputs of each layer are sent directly to each neuron of the next layer. There may be many layers, but at least three layers are considered: an input layer receives and distributes inputs, a middle or hidden layer captures the nonlinear relationships of inputs and outputs, and an output layer produces calculated data. BPNs may also contain a bias neuron that produces constant outputs but receives no inputs.
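As a minimal illustration of this classical three-layer structure, a single forward pass can be sketched as follows in Python/NumPy; the weight shapes assume a 4:10:1 network as an example, and the bias terms play the role of the bias neuron.

import numpy as np

def forward(x, W1, b1, W2, b2):
    # hidden layer: sigmoid transfer function; b1 acts as the constant-output bias
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
    # output layer: pure linear transfer function
    return W2 @ h + b2

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((10, 4)), rng.standard_normal(10)   # input -> hidden
W2, b2 = rng.standard_normal((1, 10)), rng.standard_normal(1)    # hidden -> output
sp_estimate = forward(np.array([0.5, 0.2, 0.7, 0.1]), W1, b1, W2, b2)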