Journal of Molecular Liquids 175 (2012) 85–90
Modeling viscosity of nanofluids using diffusional neural networks

Fakhri Yousefi a,⁎, Hajir Karimi b, Mohammad Mehdi Papari c

a Department of Chemistry, Yasouj University, Yasouj, 75914-353, Iran
b Department of Chemical Engineering, Yasouj University, Yasouj, 75914-353, Iran
c Department of Chemistry, Shiraz University of Technology, 71555-313, Iran

⁎ Corresponding author. Fax: +98 741 222 1711. E-mail address: fyousefi@mail.yu.ac.ir (F. Yousefi).
Article history: Received 10 August 2011; Received in revised form 15 August 2012; Accepted 17 August 2012; Available online 28 August 2012

Keywords: Viscosity; Nanofluid; Neural networks; Suspension
Abstract

In our previous work (Int. J. Thermal Sci. 50 (2011) 44–52), we developed a diffusional neural network scheme to model the thermal conductivity of several nanofluids. In this paper, we extend the neural network method to predict the relative viscosity of nanofluids, namely CuO nanoparticles suspended in propylene glycol + water, CuO suspended in ethylene glycol + water, SiO2 suspended in water, SiO2 suspended in ethanol, Al2O3 suspended in water, and TiO2 suspended in water. The results obtained have been compared with other theoretical models as well as with experimental values. The relative viscosities of the suspensions predicted with the diffusional neural network (DNN) are in accordance with the literature values.

© 2012 Elsevier B.V. All rights reserved.
1. Introduction

Nanofluids are mixtures of solid nanoparticles with an average particle size smaller than 100 nm dispersed in base fluids such as water, ethylene glycol, propylene glycol or engine oil. Research on nanofluids has received great attention during the last decade owing to the prospect of enhanced transport properties. Among the transport properties, viscosity is a fundamental characteristic of a fluid that influences flow and heat transfer phenomena, and determining the viscosity of nanofluids is essential for optimizing flow transport devices in energy supply. Many experimental and theoretical studies have been devoted to the thermal conductivity of nanofluids [1–9]. Experimental data for the effective viscosity of nanofluids, however, are limited to certain nanofluids [10–22], and the ranges of the investigated variables, such as particle volume concentration, particle size and temperature, are also limited. Most correlations reported for determining nanofluid viscosity are based on the simple Einstein model [23,24]. A few theoretical models have been developed for determining the viscosity of nanoparticle suspensions [25,26]. Still, the experimental data show that the effective viscosities of nanofluids are higher than the existing theoretical predictions [14]. In an attempt to remedy this situation, researchers have proposed models for specific applications, e.g., Al2O3 in water [14,27], Al2O3 in ethylene glycol [27], and CuO in water with temperature change [28]. However, the problem with these models is that they do not reduce to the Einstein model [23] at very low particle volume concentrations and, hence, lack a sound physical basis. Moreover, many of the deterministic or conceptual viscosity
models need a sufficient amount of data for calibration and validation, which makes them computationally demanding. As a result, researchers' attention has turned to a separate category of models called system-theoretic models. Such models, for example artificial neural networks (ANN), also known as black-box models, attempt to develop relationships between the input and output variables involved in a physical process without considering the underlying physics. The ANN technique has been applied successfully in various fields of modeling and prediction in engineering, mathematics, medicine, economics, metrology and many others, and has become increasingly popular during the last decade. The advantages of ANNs compared with conceptual models are their high speed, simplicity and large capacity, which reduce the engineering effort. Some recent applications concern thermophysical properties [29–34]. The back-propagation neural network (BPNN) is widely used because it can effectively handle non-linear problems. However, BP neural networks have some deficiencies, such as getting trapped in local extrema and slow convergence, which is disadvantageous given the limited experimental data on nanofluid viscosity. In the present study, the effective viscosity of nanofluids is determined as a function of the temperature, nanoparticle volume fraction, nanoparticle size and the physical properties of the base fluid. We use the neural network method to calculate the relative viscosity of suspensions of CuO nanoparticles in propylene glycol + water, CuO in ethylene glycol + water, SiO2 in water, SiO2 in ethanol, Al2O3 in water, and TiO2 in water. In this respect, first, we utilize a hybrid model based upon the mega-trend-diffusion technique and neural networks to estimate the expected domain range of the data. Second, we generate a number of virtual data points to reduce the error of the estimated function with respect to a small actual dataset. Third, we build a robust prediction model for the relative viscosity of the aforementioned nanofluids. Finally, we compare the obtained results with actual data as well as with other models to show the reliability of the ANN method.
Fig. 1. Topology of the conventional P-S-1 MLP (input layer of P variables, one hidden layer of S neurons, one output).
2. Conventional artificial neural networks

ANNs supply a non-linear mapping of a set of input variables onto the corresponding output variables, without the need to specify the actual mathematical form of the relation between the input and output variables. The multilayer perceptron (MLP) is a type of feedforward neural network commonly used for function approximation [35]. ANNs are discussed in detail in the literature; therefore only a few well-known features are given here to describe the general nature of the network. MLP neural networks consist of multiple layers of simple activation units called neurons, arranged in such a way that each neuron in one layer is connected to each neuron in the next layer by weighted connections (see Fig. 1). MLP networks comprise one input layer, at least one hidden layer and an output layer. The number of neurons in the input layer is defined by the problem to be solved; the input layer receives the data. The output layer delivers the response corresponding to the property values. The hidden layer processes and organizes the information received from the input layer and delivers it to the output layer; its neurons, to some extent, play the role of intermediate variables. Each neuron has a series of weighted inputs, wij, which may be outputs from other neurons or inputs from external sources. Each neuron calculates the sum of its weighted inputs and transforms it by the following transfer function:
n_j = \frac{1}{1 + \exp(-x)}    (1)

where n_j is the output of the j-th neuron and x is given by

x = \sum_{i=1}^{n} w_{ij} p_i + b_j    (2)

where w_{ij} represents the weight applied to the connection from the i-th neuron in the previous layer to the j-th neuron, p_i is the output of the i-th neuron and b_j is a bias term. MLP networks operate in the supervised learning mode. In the training algorithm, commonly the back-propagation error (BPE) algorithm, the training data are presented to the network, which iteratively adjusts the connection weights w and biases b (starting from initial random values) until the predicted values match the actual values satisfactorily. In the BPE training algorithm, this adjustment is carried out by comparing the actual values t_{ij} with the values a_{ij} predicted by the network through the total sum of squared errors (SSE) over the n data of the training dataset,

SSE = \sum_{i=1}^{n} (t_{ij} - a_{ij})^2    (3)
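For concreteness, the following is a minimal sketch (not the authors' code) of Eqs. (1)–(3): one hidden layer of logistic neurons feeding a single linear output, plus the SSE training error. The function names and array shapes are illustrative assumptions.

```python
# Minimal sketch of Eqs. (1)-(3): logistic hidden neurons, one linear output, SSE.
import numpy as np

def logistic(x):
    # Eq. (1): n_j = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(p, W, b, w_out, b_out):
    # Eq. (2): each hidden neuron forms x_j = sum_i w_ij * p_i + b_j
    x = W @ p + b                          # W has shape (S, P), p has shape (P,)
    hidden = logistic(x)                   # hidden-layer outputs n_j
    return float(w_out @ hidden + b_out)   # single linear output neuron

def sse(targets, predictions):
    # Eq. (3): total sum of squared errors over the training set
    t = np.asarray(targets, dtype=float)
    a = np.asarray(predictions, dtype=float)
    return float(np.sum((t - a) ** 2))

# toy usage with random weights: 5 inputs, 7 hidden neurons
rng = np.random.default_rng(0)
P, S = 5, 7
y_hat = mlp_forward(rng.random(P), rng.random((S, P)), rng.random(S),
                    rng.random(S), 0.1)
```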
Although artificial neural networks (ANNs) are widely utilized to extract knowledge from acquired data, neural-network modeling mainly assumes that the data available for training are sufficient. When the data are insufficient, it is impossible to identify a nonlinear system; in other words, there is a non-negligible error between the real function and the function estimated by a trained neural network [36]. Moreover, with basic neural networks and a small number of training points, it is difficult to guarantee that a good predictive model will be obtained over the complete actual domain; in most cases, good predictions are achieved only in the vicinity of the points contained in the training dataset [37]. A sufficient training data set is defined as one that provides enough information for a learning method to reach stable training accuracy.

2.1. Detailed steps in the ANN modeling procedure

In this study, the actual data set was taken from Refs. [38–42]; the number of actual data points is 182 in total. According to the available data, there are five input variables, namely temperature, particle volume fraction, particle size, base fluid viscosity and the density ratio of base fluid to particle, and the relative viscosity of the nanofluid is the output of the network. As explained previously, the available experimental data (182 points) do not provide sufficient information for training a robust prediction neural network. Therefore, following the work of Li et al. [43], the mega-trend-diffusion and domain-range-estimation techniques were used to create virtual data (adding a number of virtual data points) for constructing the prediction model. Because the purpose of this research is to apply the procedure proposed by Li et al. [43] to construct a prediction model, the details of the derivation of the methodology are omitted here. In order to fill the data gaps and estimate the data trend, the mega-trend-diffusion method assumes that the selected data are located within a certain range and have possibility indexes determined by a common membership function (MF). The numbers of data smaller or greater than the average of the data determine the skewness of the MF, which is used to estimate the trend of the population.
Table 1
The lower and upper limits of virtual data for each variable.

Limit   T [K]     f (fraction)   Particle size [nm]   Density ratio (fluid/particle)   Vis. of base [Pa·s]   Relative vis. of nanofluid
L       238.25    0.0246         78.72                0.2980                           2.172                 1.378
U       323.35    0.0431         105.11               0.3530                           3.999                 1.647
Table 2
The random virtual numbers (Nv) selected between L and U for each variable, with the corresponding MF values.

Variable                          Nv (MF)
T [K]                             238.58 (0.01); 263.56 (0.59); 284.12 (0.92)
f (fraction)                      0.0067 (0.1905); 0.0405 (0.8416); 0.0501 (0.5688)
Particle size [nm]                29.00 (0.19); 23.33 (0.13); 83.40 (0.80)
Density ratio (fluid/particle)    0.24 (0.06); 0.41 (0.34); 0.25 (0.21)
Vis. of base [Pa·s]               6.220 (0.194); 0.001 (0.000); 1.117 (0.351)
Relative vis. of nanofluid        1.462 (0.863); 1.841 (0.572); 0.656 (0.190)
When the data collected are insufficient to build a reliable prediction model, the procedure systematically generates virtual data between the estimated lower and upper extremes (L and U) of the range to fill the data gaps. The computation of L and U mainly employs the statistical diffusion techniques provided in the procedure, which further assigns each virtual datum an MF value as its importance level. Li et al. [43] showed that if the number of actual (experimental) data chosen for the generation of virtual data increases, the average error rate of the trained net decreases. In this study, therefore, all available actual data were first used for generating virtual data, the ANN was then trained on the generated data, and finally the actual data were used for testing the ANN. The procedure, implemented here with the 182 actual data, consists of four steps:

Step 1 Selection of all actual data as the training data of the ANN.

Step 2 Use of Eqs. (4) and (5) to calculate the lower bound (L) and upper bound (U), respectively, for each variable (i.e. input–output variables) of the ANN. The results are shown in Table 1.

L = U_{set} - Sk_L \sqrt{-2 (\hat{s}_x^2 / N_L) \ln \phi(L)}    (4)

U = U_{set} + Sk_U \sqrt{-2 (\hat{s}_x^2 / N_U) \ln \phi(U)}    (5)

In these equations,

h_{set} = \hat{s}_x / n    (6)

U_{set} = (\min + \max)/2    (7)

Sk_L = N_L / (N_L + N_U), \quad Sk_U = N_U / (N_L + N_U)    (8)

where \hat{s}_x^2 = \sum_{i=1}^{n} (x_i - \bar{x})^2 / (n - 1) is the variance of the data set, n is the dataset size, and U_{set} is the core of the dataset. The membership function of a virtual number N_v is

MF = \begin{cases} (N_v - L)/(U_{set} - L), & N_v < U_{set} \\ (U - N_v)/(U - U_{set}), & N_v > U_{set} \end{cases}    (9)

The skewness (Sk) characterizes the degree of asymmetry of the distribution around the dataset core: Sk_U indicates a distribution fraction with an asymmetric tail extending above the core value, and Sk_L indicates a fraction with a tail extending below the core value. N_L and N_U represent the numbers of data smaller and greater than U_{set}, respectively. Since a common function is used to diffuse the data set, the values of the membership function at the two range limits, the left end L and the right end U, approach zero. Consequently, in Eqs. (4) and (5) a very small number, 10^{-20}, was assigned to \phi(L) and \phi(U), respectively. The lower (L) and upper (U) limits calculated with Eqs. (4) and (5) for each variable are listed in Table 1.

Step 3 Generation of virtual data between L and U for each variable. After calculating the domain (L, U) of each variable, n_s data are generated within this domain; these are defined as virtual data. They cannot, however, be used to train the BPNN directly, because they do not carry complete information about the data distribution. Therefore, the value of the membership function is calculated for each datum using Eq. (9). The viscosity ratio is the output of the ANN, so there is no need to calculate its MF value. A sample of the virtual data generated for each variable and the corresponding MF values is given in Table 2.
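The bound and membership calculations above are straightforward to prototype. The following is a rough sketch, not the authors' implementation, of Eqs. (4), (5) and (9) for a single variable in the spirit of Li et al. [43]; the function names, the uniform sampling of virtual points and the example temperatures are our own assumptions.

```python
# Sketch of mega-trend-diffusion bounds (Eqs. (4)-(5)) and MF values (Eq. (9)).
import numpy as np

def mtd_bounds(x, phi=1e-20):
    x = np.asarray(x, dtype=float)
    u_set = (x.min() + x.max()) / 2.0          # Eq. (7): core of the data set
    n_l = np.sum(x < u_set)                    # data smaller than the core
    n_u = np.sum(x > u_set)                    # data greater than the core
    sk_l = n_l / (n_l + n_u)                   # Eq. (8)
    sk_u = n_u / (n_l + n_u)
    var = x.var(ddof=1)                        # sample variance of the data
    L = u_set - sk_l * np.sqrt(-2.0 * var / max(n_l, 1) * np.log(phi))  # Eq. (4)
    U = u_set + sk_u * np.sqrt(-2.0 * var / max(n_u, 1) * np.log(phi))  # Eq. (5)
    return L, U, u_set

def membership(nv, L, U, u_set):
    # Eq. (9): triangular membership value of a virtual sample nv
    return (nv - L) / (u_set - L) if nv < u_set else (U - nv) / (U - u_set)

# generate virtual data for one variable (e.g. four temperatures in K) with MF values
rng = np.random.default_rng(0)
L, U, u_set = mtd_bounds([293.0, 298.0, 303.0, 308.0])
virtual = rng.uniform(L, U, size=10)
mf = [membership(v, L, U, u_set) for v in virtual]
```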
Repeating Step 3 for each variable gives rise to 462 virtual data for training the MLP network. The conventional topology of the MLP can be shown as a P-S-1 network, as in Fig. 1. This topology represents the five input variables, corresponding to the temperature (T), volume fraction of nanoparticles (f), particle size (d), base fluid viscosity (μf) and density ratio of base fluid to particle; S is the number of neurons in the hidden layer and 1 is the output variable, corresponding to the relative viscosity (μe/μf).
Fig. 2. The topology of the 2P-S-1 MLP for the small data set training (P input variables plus their MF values, S hidden neurons, one output).

Fig. 3. Comparison of the predicted relative viscosity of CuO-(60:40) PG + H2O at T = 293.35 K with the experimental values [39] and other models [23–25,44,45].
Fig. 4. Comparison of the predicted relative viscosity with the experimental data [42] and other models [23–25,44,45] for SiO2-H2O at volume fraction 0.019.
In sum, the hybrid model employed in this work combines the mega-trend-diffusion technique with a neural network. It comprises an input layer of ten neurons, corresponding to the temperature (T), volume fraction of nanoparticles (f), particle size (d), base fluid viscosity (μf) and density ratio of base fluid to particle together with their corresponding MF values, a variable number of neurons in the hidden layer, and one neuron in the output layer corresponding to the relative viscosity (μe/μf). This topology is illustrated in Fig. 2. The output layer neuron has a linear transfer function, while the hidden layer neurons have a hyperbolic tangent transfer function. The neural network was trained with the scaled conjugate gradient backpropagation (trainscg) algorithm available in the neural network toolbox of MATLAB. The 462 virtual data obtained in Step 4 plus 10% of the actual data (18 points) were assigned to the training of the MLP, while the rest of the actual data were used to test the model. In the first step of the training procedure, all data points were scaled to the range [−1, 1]. The number of neurons in the hidden layer plays an important role in the network optimization. Therefore, in order to optimize the network, achieve generalization and avoid over-fitting, we started with one neuron in the hidden layer and gradually increased the number of neurons until no significant improvement in the performance of the net was observed.
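For readers who want to reproduce the general set-up, the sketch below trains a small MLP on inputs scaled to [−1, 1] and grows the hidden layer, in the spirit of the procedure described above. It is not the authors' MATLAB code: scikit-learn offers no scaled-conjugate-gradient trainer, so 'lbfgs' stands in for trainscg, and the arrays X (ten columns: the five physical variables plus their MF values) and y are placeholders.

```python
# Hedged sketch of the training and hidden-layer-size search described in the text.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

X = np.random.rand(480, 10)        # placeholder for 462 virtual + 18 actual rows
y = np.random.rand(480)            # placeholder relative viscosities

scaler = MinMaxScaler(feature_range=(-1, 1))   # scale inputs to [-1, 1]
Xs = scaler.fit_transform(X)

best = None
for s in range(1, 15):             # grow the hidden layer; keep the best MSE
    net = MLPRegressor(hidden_layer_sizes=(s,), activation='tanh',
                       solver='lbfgs', max_iter=300, random_state=0)
    net.fit(Xs, y)
    mse = mean_squared_error(y, net.predict(Xs))
    if best is None or mse < best[1]:
        best = (s, mse)
print('selected hidden neurons:', best[0], 'training MSE:', best[1])
```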
Fig. 6. Comparison of the predicted relative viscosity with the experimental data [41] and other models [23–25,44,45] for TiO2-water at T = 298.15 K.
For this study, the mean square error (MSE) was chosen as the measure of network performance. The selected network model has seven neurons in the hidden layer, a maximum of 300 epochs, a momentum constant of 0.8 and a learning rate of 0.01; the MSE for this configuration was 0.0014. After determining the optimal topology of the network, the actual data and the corresponding MF values were used as testing data for the trained MLP to calculate the absolute average error. The average absolute deviation (AAD) is defined as

AAD = \frac{100}{N} \sum_{i=1}^{N} \frac{\left| X_i^{exp} - X_i^{calc} \right|}{X_i^{exp}}    (10)

where N is the number of data.
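As a small illustration of Eq. (10), a helper of this kind (the name is ours) can be used to score each model against the experimental data, as done later in Table 3.

```python
# Sketch of the AAD of Eq. (10): mean absolute relative deviation in percent.
import numpy as np

def aad_percent(x_exp, x_calc):
    x_exp = np.asarray(x_exp, dtype=float)
    x_calc = np.asarray(x_calc, dtype=float)
    return 100.0 * np.mean(np.abs(x_exp - x_calc) / x_exp)

# e.g. aad_percent([1.20, 1.35, 1.50], [1.18, 1.39, 1.47])  ->  about 2.2
```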
3. Results and discussion

As already mentioned, the diffusional neural network (DNN) scheme has been employed in this study to compute the relative viscosity of nanofluids. The selected nanofluids were CuO in propylene glycol + water, SiO2 in water, Al2O3 in water, TiO2 in water, CuO in ethylene glycol + water and SiO2 in ethanol. Several models [23–25,44,45] capable of calculating the viscosity of nanofluids were selected for comparison with our results. It should be remembered that most of the older theoretical models [23,24,44,45] depend only on the volume fraction of the particles and the viscosity of the base fluid. A few theoretical models have been developed for determining the viscosity of nanoparticle suspensions [25].
Fig. 5. Comparison of the predicted relative viscosity with the experimental data [42] and other models [23–25,44,45] for Al2O3-water at volume fraction 0.015.

Fig. 7. Comparison of the predicted relative viscosity with the experimental data [40] and other models [23–25,44,45] for CuO-(60:40) ethylene glycol + H2O at volume fraction 0.05.
Fig. 8. Comparison of the predicted relative viscosity with the experimental data [40] and other models [23–25,44,45] for CuO-(60:40) ethylene glycol + H2O at T = 303.05 K.

Fig. 10. Comparison of the predicted relative viscosity with the experimental data [38] and other models [23–25,44,45] for SiO2-ethanol at d = 35 nm and T = 298.15 K.
In the model of Masoumi et al. [25], the volume fraction of the particles, the viscosity of the base fluid and the particle diameter are considered; however, the model parameters were adjusted for a limited range of nanoparticle sizes, so this model is not applicable over a wide range of particle sizes. We carried out the diffusional neural network calculations explained in the previous section for all fluids and compared the results with those calculated by the above-mentioned models. To verify the validity of our calculations and of the models, the predicted viscosity ratios have also been compared with the experimental measurements [38–42]. Fig. 3 compares the predicted relative viscosity of CuO-(60:40) PG:H2O at T = 293.35 K as a function of the nanoparticle volume fraction with the experimental data [39] and other models [23–25,44,45]. As may be observed in this figure, the relative viscosity calculated with the DNN method agrees well with the experimental values [39]. Fig. 4 shows the comparison of the predicted relative viscosity of SiO2/H2O at volume fraction 0.019 as a function of temperature with the experimental data [42] and other models [23–25,44,45]; the DNN clearly outperforms the other models. The predicted relative viscosity of Al2O3-water at volume fraction 0.015 as a function of temperature is compared with the experimental data [42] and other models [23–25,44,45] in Fig. 5. Fig. 6 compares the predicted relative viscosity of TiO2-water at T = 298.15 K as a function of volume fraction with the experimental values [41] and other models [23–25,44,45].
Furthermore, Figs. 7 and 8 illustrate, respectively, the relative viscosity of CuO-(60:40) EG:H2O at volume fraction 0.05 as a function of temperature and of CuO-(60:40) EG:H2O at T = 303.05 K as a function of volume fraction, compared with the experimental data [40] and other models [23–25,44,45]. Figs. 9 and 10 show, respectively, the predicted relative viscosity of SiO2-ethanol with nanoparticle sizes of 94 nm and 35 nm at T = 298.15 K as a function of volume fraction, compared with the experimental data [38] and other models [23–25,44,45]. All figures illustrate that the DNN method outperforms the other models. To check the validity of the results further, the average absolute deviation (AAD) of the relative viscosity from the literature values over the entire range of volume fraction for all of the nanofluids is tabulated in Table 3. As Table 3 attests, the relative viscosity computed with the mega-trend-diffusion neural network method stands above the other models. From Table 3 it is immediately verifiable that the DNN-MLP model possesses a high ability to predict the relative viscosity, with an overall AAD of 3.44%. There is good agreement between the predicted and the experimental values of the viscosity ratio (r = 0.994).
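For orientation, the classical correlations used for comparison in Figs. 3–10 are functions of the particle volume fraction only. A hedged sketch of their commonly quoted forms is given below (Einstein [23], Batchelor [24], Brinkman [44], Lundgren [45]); the Masoumi et al. model [25] additionally involves the particle diameter and other parameters and is omitted here.

```python
# Commonly quoted relative-viscosity correlations, mu_eff / mu_base, vs. volume fraction phi.
def einstein(phi):   return 1.0 + 2.5 * phi                   # Einstein [23]
def batchelor(phi):  return 1.0 + 2.5 * phi + 6.2 * phi ** 2  # Batchelor [24]
def brinkman(phi):   return (1.0 - phi) ** -2.5               # Brinkman [44]
def lundgren(phi):   return 1.0 / (1.0 - 2.5 * phi)           # Lundgren [45]

for phi in (0.01, 0.03, 0.05):
    print(phi, einstein(phi), batchelor(phi), brinkman(phi), lundgren(phi))
```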
Acknowledgments

The authors thank Yasouj University and Shiraz University of Technology for providing the computer facility.
Fig. 9. Comparison of the predicted relative viscosity with the experimental data [38] and other models [23–25,44,45] for SiO2-ethanol at d = 94 nm and T = 298.15 K.

Table 3
The AAD% of the calculated relative viscosity using the Einstein, Batchelor, Brinkman, Lundgren and Masoumi models and the ANN method from the experimental data [38–42].

System                               Einstein   Batchelor   Brinkman   Lundgren   Masoumi   ANN
CuO-(60:40) ethylene glycol+H2O      14.32      13.71       13.87      13.61      44.9      1.63
SiO2-H2O                             27.58      27.52       27.54      27.52      67.6      5.55
Al2O3-H2O                            31.86      31.81       31.83      31.81      7.59      6.64
TiO2-H2O                             6.84       6.76        6.78       6.76       17.41     4.08
CuO-(60:40) propylene glycol+H2O     18.64      18.00       18.15      17.90      32.24     1.22
SiO2-Ethanol                         18.01      18.09       18.22      18.01      8.20      3.25
References

[1] S.U.S. Choi, ASME Publications 66 (1995) 99–105.
[2] S. Lee, S.U.S. Choi, S. Li, J.A. Eastman, Journal of Heat Transfer 121 (1999) 280–289.
[3] X. Zhang, M. Fujii, International Journal of Heat and Mass Transfer 48 (2004) 2926–2932.
[4] Y. Ren, H. Xie, A. Cai, Applied Physics 38 (2005) 3958–3961.
[5] R. Prasher, E.P. Phelan, ASME Journal of Heat Transfer 128 (2006) 588–596.
[6] S. Lee, S.U.S. Choi, S. Li, J.A. Eastman, ASME Journal of Heat Transfer 121 (1999) 280–290.
[7] H.A. Mintsa, G. Roy, C.T. Nguyen, D. Doucet, International Journal of Thermal Sciences 48 (2009) 363–371.
[8] M. Mehrabi, M. Sharifpur, J.P. Meyer, International Communications in Heat and Mass Transfer 39 (2012) 971–977.
[9] S.K. Das, N. Putra, P. Thiesen, W. Roetzel, ASME Journal of Heat Transfer 125 (2003) 567–573.
[10] N.S. Cheng, A.W.K. Law, Powder Technology 129 (2003) 156–160.
[11] C.T. Nguyen, F. Desgranges, G. Roy, N. Galanis, T. Mare, S. Boucher, H.A. Mintsa, International Journal of Heat and Fluid Flow 28 (2007) 1492–1506.
[12] S.M.S. Murshed, K.C. Leong, C. Yang, International Journal of Thermal Sciences 47 (2008) 560–568.
[13] R. Prasher, D. Song, J. Wang, P.E. Phelan, Applied Physics Letters 89 (2006) 33108–133111.
[14] C.T. Nguyen, F. Desgranges, N. Galanis, G. Roy, T. Mare, S. Boucher, H. Angue Mintsa, International Journal of Thermal Sciences 47 (2008) 103–111.
[15] Y. Yang, Z.G. Zhang, E.A. Grulke, W.B. Anderson, G. Wu, Journal of Heat and Mass Transfer 48 (2005) 1107–1116.
[16] W.J. Tseng, K.C. Lin, Journal of Materials Science and Engineering A 355 (2003) 186–192.
[17] H. Chen, W. Yang, Y. He, Y. Ding, L. Zhang, C. Tan, A.A. Lapkin, D.V. Bavaykin, Journal of Powder Technology 183 (2007) 63–72.
[18] Y. He, Y. Jin, H. Chen, Y. Ding, D. Cang, H. Lu, Journal of Heat and Mass Transfer 50 (2007) 2272–2281.
[19] C.T. Nguyen, F. Desgranges, G. Roy, N. Galanis, T. Maré, S. Boucher, H.A. Mintsa, Journal of Heat and Fluid Flow 28 (2007) 1492–1506.
[20] P.K. Namburu, D.P. Kulkarni, D. Misra, D.K. Das, Experimental Thermal and Fluid Science 32 (2007) 397–402.
[21] M. Chandrasekar, S. Suresh, A. Chandra Bose, Experimental Thermal and Fluid Science 34 (2010) 210–216.
[22] W. Duangthongsuk, S. Wongwises, Journal of Experimental Thermal and Fluid Science 33 (2009) 706–714.
[23] A. Einstein, Annals of Physics 19 (1906) 289–306.
[24] G.K. Batchelor, Journal of Fluid Mechanics 83 (1977) 97–117.
[25] N. Masoumi, N. Sohrabi, A. Behzadmehr, Journal of Physics D: Applied Physics 42 (2009) 055501–055507.
[26] M.S. Hosseini, A. Mohebbi, S. Ghader, Chinese Journal of Chemical Engineering 18 (2010) 102–110.
[27] S.E.B. Maiga, C.T. Nguyen, N. Galanis, G. Roy, Superlattices and Microstructures 35 (2004) 543–557.
[28] D.P. Kulkarni, D.K. Das, G. Chukwu, Journal of Nanoscience and Nanotechnology 6 (2006) 1150–1154.
[29] H. Karimi, F. Yousefi, Chinese Journal of Chemical Engineering 15 (2007) 765–771.
[30] H. Karimi, N. Saghatoleslami, M.R. Rahimi, Chemical and Biochemical Engineering Quarterly 24 (2009) 167–176.
[31] H. Kurt, M. Kayfeci, Applied Energy 86 (2009) 2244–2248.
[32] S.S. Sablani, A. Kacimov, J. Perret, A.S. Mujumdar, A. Campo, International Journal of Heat and Mass Transfer 48 (2005) 665–790.
[33] H. Kurt, K. Atik, M. Ozkaymak, A.K. Binark, Journal of the Energy Institute 80 (2006) 46–51.
[34] M.M. Papari, F. Yousefi, J. Moghadasi, H. Karimi, A. Campo, International Journal of Thermal Sciences 50 (2011) 44–52.
[35] P.P. van der Smagt, Neural Networks 7 (1994).
[36] C.C.F. Huang, C. Moraga, International Journal of Approximate Reasoning 35 (2004) 137–161.
[37] R. Lanouette, J. Thibault, J.L. Valade, Computers & Chemical Engineering 23 (1999) 1167–1176.
[38] J. Chevalier, O. Tillement, F. Ayela, Applied Physics Letters 91 (2007) 233103.
[39] D.P. Kulkarni, D.K. Das, S.L. Patil, Journal of Nanoscience and Nanotechnology 7 (2007) 2318–2322.
[40] P.K. Namburu, D.P. Kulkarni, D. Misra, D.K. Das, Experimental Thermal and Fluid Science 32 (2007) 397–402.
[41] W. Duangthongsuk, S. Wongwises, Experimental Thermal and Fluid Science 33 (2009) 706–714.
[42] A. Tavman, M. Turgut, H.P. Chirtoc, S. Schuchmann, International Scientific Journal 34 (2008) 99–104.
[43] D.C. Li, C.S. Wu, T.I. Tsai, Y.S. Lin, Computers and Operations Research 34 (2007) 966–982.
[44] H.C. Brinkman, Journal of Chemical Physics 20 (1952) 571–581.
[45] T. Lundgren, Journal of Fluid Mechanics 51 (1972) 273–299.