Employing the Taguchi Method to Optimize BPNN Architectures in a Car Body Design System

Previous research has tried to optimize the architectures of Back Propagation Neural Networks (BPNN) in order to enhance their performance; however, knowledge of the appropriate method for this task still needs expanding. This paper studies the effect and benefit of using the Taguchi method to optimize the BPNN architecture of a car body design system. The paper starts with a literature review to define the factors and levels of the BPNN parameters: the number of hidden layers, the number of neurons, the learning algorithm, etc. The BPNN architecture is then optimized by the Taguchi method with the Mean Square Error (MSE) as the indicator. The signal-to-noise (S/N) ratio, analysis of variance (ANOVA) and analysis of means (ANOM) are employed to interpret the Taguchi results. The optimal BPNN training successfully tackles the uncertainty of the hidden layers' parameter structure: it needs fewer iterations to reach the convergent condition and achieves a ten times better MSE than the NN machine expert. The paper also shows how to use the information on car body shapes, car speed, vibration, noise and fuel consumption from the car body database in BPNN training and validation.


Introduction
The back propagation neural network (BPNN) is widely used in industry, the military, finance, etc. It is a tool for dealing with systems which are very complex and whose mathematical models are difficult to obtain. A literature review is used to survey current research directions and developments in BPNN applications. The common method of creating a BPNN model is trial and error, which is time consuming during the training procedure [1][2][3].
John F.C. Khaw and colleagues [4] investigated the optimal design of neural networks using the Taguchi method. They worked on the NN parameters of the number of hidden layers and the number of nodes in each hidden layer. Their paper leaves room to study more NN parameters, e.g. the transfer function, epoch, learning algorithm, parameter interactions, etc. Jorge Bardina and T. Rajkumar [5] studied the training data requirements for a neural network to predict aerodynamic coefficients. Their paper shows manual NN training comparisons based on different transfer functions and training datasets, and notes that the dataset is an important part of obtaining better MSE performance. Chien-Yu Huang and colleagues [6] investigated the optimal design of back propagation networks using a genetic algorithm (GA) calibrated with the Taguchi method. They used the Taguchi method successfully to define the GA parameters of population, mutation, crossover rate, etc.; the optimum GA was then employed to optimize the NN training. In summary, a complete and comprehensive BPNN investigation is needed to improve the NN training process, and parameter selection for the multi-layer perceptron (MLP) remains an open area for researchers.
The genetic algorithm (GA) is commonly used to search for the global optimum through fitness functions by applying the principles of evolutionary biology, and has long been used in different applications [7,8]. This paper shows how to employ a GA to adjust three learning algorithms, Conjugate Gradient (CG), Delta-Bar-Delta (DBD) and Quick Propagation, for weight adjustment during BPNN training.

Intelligent Car Body System Design
The intelligent car body databases are employed to test the optimum BPNN architecture and to investigate the GA's influence on BPNN training performance. Figure 1 shows the intelligent car body design system developed in Derby, UK. The system is divided into three sections: data collection, BPNN training and design. In the data collection section, the following data have to be collected and saved in the database: the relation between car body geometry and fuel consumption, noise and vibration at a variety of speeds, etc. (table 1). To do this, CAD, CFD, CAA and FEA software are employed together to obtain the output information. The CAD (Computer Aided Design) software is used to create car body models in 3D. CFD (Computational Fluid Dynamics) is used to test the car models to obtain the fuel consumption factors (drag coefficient and lift force). CAA (computational aeroacoustics) is used to test the car models to obtain the noise information (dB). Finally, FEA (Finite Element Analysis) is used to test the car models to obtain the vibration values in the Z, X, Y, YZ and XY directions. At the end of this section, a database is ready for use by the next stage, the BPNN training system.
In the BPNN training section, the optimum BPNN architecture is trained with the data from the database. The Taguchi tool is used to create the optimum BPNN architecture with 12 control variables (3 levels each) and 3 noise variables (estate, hatchback and saloon car types). At the end of this second section, the optimum BPNN model is ready for application.
The third section is the new design application section, which assumes that all the initial training and data collection tasks are complete. Here the BPNN is ready for the user to apply: the user only needs to input the car body design parameters to the intelligent design system, and a parameter-based car body is designed with the full influence of vibration, noise and fuel consumption taken into account. As section two employs the important BPNN model, the following sections give a more detailed discussion of NNs.

Principle of Taguchi Method
At the end of the 1940s, Dr. Genichi Taguchi offered new statistical approaches which have proven to be important tools in design and process quality [9]. The Taguchi method is widely used in process and product design and is classified as "off-line" quality control. It is a technique for designing and performing experiments to optimise a process or product design where the system involves control factors, uncontrolled factors (noise factors) and interaction factors. The final design is robust to a variety of signal inputs and signal noise. The orthogonal array (OA) is a special table construction in the Taguchi method for laying out the experiment. The OA is an experimental approach which reduces cost, improves quality and enhances the robustness of a design. It is a matrix of numbers arranged in columns and rows that provides a set of well balanced, consistent and very easy experiments with a minimum number of factor combinations. The Taguchi method employs the signal-to-noise (S/N) ratio and analysis of variance (ANOVA) to evaluate the best experiment design in terms of the amount of variability in the response and the measured effect of the noise factors. They are also used to quantitatively determine the interactions between the factors of an experiment. There are three Taguchi characteristics: a. minimum value (smaller-the-better), b. maximum value (larger-the-better) and c. nominal value (nominal-the-best). Each characteristic is evaluated by a different S/N formula. The S/N ratio analysis is borrowed from an electrical analogy, converted to a logarithmic decibel scale. The S/N ratio measures the positive quality contribution from controllable or design factors versus the negative quality contribution from uncontrollable or noise factors [9]. Figure 2 and equations 1, 2, 3 and 4 explain the quality loss function and the S/N ratio.
The quadratic loss function L(y) is defined in equation 1 [10]:

L(y) = k (y − m)^2 (1)

where k is the constant of the quality loss function and m is the target value. Let y1, y2, y3, ..., yn be the responses (yi is the response variable at test i); the average quality loss Q is then given by equation 2:

Q = (1/n) Σ L(yi) = (k/n) Σ (yi − m)^2 (2)

For a smaller-the-better response such as the MSE, the corresponding S/N ratio is given by equation 3:

S/N = −10 log10((1/n) Σ yi^2) (3)
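The loss-function and S/N computations above can be sketched in a few lines of Python (a minimal illustration; the function names are our own):

```python
import math

def quadratic_loss(y, m, k=1.0):
    """Taguchi quadratic loss L(y) = k * (y - m)^2  (equation 1)."""
    return k * (y - m) ** 2

def average_quality_loss(ys, m, k=1.0):
    """Average quality loss Q = (k/n) * sum((y_i - m)^2)  (equation 2)."""
    return sum(quadratic_loss(y, m, k) for y in ys) / len(ys)

def sn_smaller_is_better(ys):
    """S/N ratio for a smaller-the-better response such as the MSE:
    S/N = -10 * log10((1/n) * sum(y_i^2))  (equation 3)."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))
```

A response that is uniformly small (e.g. three MSE readings of 0.1) gives a high S/N ratio, which is why the best factor level is the one with the largest S/N value in the tables below.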

Optimum BPNN Architecture for the Car Body Design System
The optimum architecture of the BPNN can be obtained by applying the Taguchi method. Figure 3 shows the steps for running the Taguchi method to optimize the BPNN architecture in the developed intelligent car body design system. The details are described in the following paragraphs.

Define the Taguchi Criteria: Controllable Experiment Factors, Response and Noise Factors

In the BPNN, the controllable experiment factors of the Taguchi method include the number of neurons in each hidden layer, the transfer function, the number of hidden layers, the epoch limit, etc. The response factor is the MSE of each BPNN training run. The noise factor is defined by three different car body databases, for the estate, saloon and hatchback car types.

Identify Levels of the Experiment Factors

This step sets up the detailed levels and values of the factors involved in building the optimum BPNN architecture, as summarised in table 2.

a. Number of hidden layers (A)
An MLP NN is formed by 3 different kinds of layers: an input layer, hidden layer(s) and an output layer. In general, one or two hidden layers are used by most research applications. As a result, the values "1" or "2" can be chosen to select a one- or two-hidden-layer structure.

b. Number of neurons (B and C)
The commonly used bounds for choosing the number of neurons in a hidden layer are defined by Kolmogorov and Lippmann [11] as follows:

Lower bound of neurons in the first hidden layer: 2N + 1 (4)
Upper bound of neurons in the first hidden layer: OP x (N + 1)

where N is the number of input neurons and OP the number of output neurons.

c. Learning rules (G)

Learning rules are used to adjust the weight values during NN training. Three learning rules are used: Conjugate Gradient, Delta-Bar-Delta and Quick Propagation. All three rules are enhanced by applying the GA operation to them. The complete explanation and analysis of the learning rules is given in section IV.
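The Kolmogorov and Lippmann bounds for part b can be computed directly (a small sketch; the function and parameter names are our own):

```python
def neuron_bounds(n_inputs, n_outputs):
    """Bounds on the number of neurons in the first hidden layer:
    lower bound (Kolmogorov): 2N + 1,
    upper bound (Lippmann):   OP * (N + 1)."""
    lower = 2 * n_inputs + 1
    upper = n_outputs * (n_inputs + 1)
    return lower, upper
```

For example, a network with 5 inputs and 2 outputs would be bounded between 11 and 12 neurons in the first hidden layer, which is how candidate levels for the B and C factors can be chosen.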

d. Transfer function (D, E and F)
The transfer function is used to squash the weighted sum of each neuron into a bounded range (e.g. [−1, 1] for Tanh). Normally, each hidden layer employs the same transfer function for all of its neurons. The five transfer function choices are: Sigmoid, Tanh, linear Sigmoid, linear Tanh and linear.
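The five transfer functions listed can be sketched as below; the clipped piecewise-linear forms are our reading of the "linear Sigmoid" and "linear Tanh" names, which the paper does not define formally:

```python
import math

def sigmoid(x):
    """Logistic sigmoid, output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Hyperbolic tangent, output in (-1, 1)."""
    return math.tanh(x)

def linear_sigmoid(x):
    """Piecewise-linear approximation of the sigmoid, clipped to [0, 1]."""
    return min(1.0, max(0.0, 0.25 * x + 0.5))

def linear_tanh(x):
    """Piecewise-linear approximation of tanh, clipped to [-1, 1]."""
    return min(1.0, max(-1.0, x))

def linear(x):
    """Identity (no squashing); typical for output layers in regression."""
    return x
```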

e. Epoch set up (H)
The epoch is the termination condition: the program terminates after the given number of training cycles. It can be selected as 1,000, 5,000 or 10,000.

f. Interaction factors
An advantage of the Taguchi method is that the author can investigate the existence of interactions between experiment factors. It determines whether an interaction is present, and a proper interpretation of the results is necessary. According to table 2, the three interaction factors to be investigated are: A with (B or C), A with (D or E), and (B or C) with (D or E).
g. Error factor (I)

Another advantage of the Taguchi method is that the author can provide a place for the NN factors that are not involved in the training process. The error factor can contain the initial momentum, the initial learning rate, the other factor interactions, etc.

Select the Orthogonal Array (OA)
The Orthogonal Arrays (OA) have been categorised by Dr. Genichi Taguchi [9] into 2^n-series and 3^n-series arrays. They are named L4(2^3), L8(2^7), L16(2^15), L32(2^31), L9(3^4), L18(2^1 x 3^7), L27(3^13), L36(2^11 x 3^12), etc. The OA is employed to screen all the experiment factors, levels and responses. The Taguchi design notation describes the OA layout and is written as Ln(P^k), where: n = number of experiments, P = number of levels, k = number of columns (factors). In this section one of the arrays should be selected for further application, based on the degrees of freedom (dof) calculated from the numbers of levels and factors. The dof is a measure of the amount of information that can be uniquely determined from a given set of data [12]. For a factor A with 3 levels, the A1 data can be compared with A2 and A3. The n value in the Taguchi design notation must be higher than or equal to the dof calculated from the experiment factors in table 2 above. The dof value is calculated by equation 8, which gives a total dof of 27 as shown in equation 8a:

dof = 1 + Σ over factors (levels − 1) + Σ over factor interactions (levels of factor i − 1) x (levels of factor j − 1) (8)

dof = 1 + 7 x (3 − 1) + 3 x (3 − 1) x (3 − 1) = 27 (8a)

Table 3. Standard Orthogonal Array L27(3^13) [9]

As a result, the best OA selection from the standard options is L27(3^13), as shown in table 3. Its design notation provides 13 experiment factor columns, 3 level options for each factor and 27 parameter combinations.
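The dof calculation of equation 8 can be sketched as follows; the grouping into 7 three-level factor columns (A, B/C, D/E, F, G, H, I) and 3 two-factor interactions reflects our reading of table 2:

```python
def taguchi_dof(factor_levels, interactions):
    """dof = 1 (overall mean)
           + sum over factors of (levels - 1)
           + sum over interactions of (levels_i - 1) * (levels_j - 1)."""
    dof = 1
    dof += sum(levels - 1 for levels in factor_levels)
    dof += sum((li - 1) * (lj - 1) for li, lj in interactions)
    return dof

# 7 factor columns at 3 levels, 3 interactions between 3-level factors
dof = taguchi_dof([3] * 7, [(3, 3), (3, 3), (3, 3)])
```

With this grouping the total is 27, which is why the L27(3^13) array (n = 27 ≥ dof) is the smallest standard OA that fits the experiment.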

Fill in the Orthogonal Array (OA) Table
In this step, the factors of the experiment are assigned to the standard OA (table 3) by using the standard L27(3^13) linear graphs, as shown in figures 4a, b and c. According to the main factor and interaction configuration, figure 4a is the best linear graph selection for 13 factors and 3 interactions. The circles code the placement of the experiment factors into the OA's columns, and the lines between circles express the factor interactions. Point one is assigned to the number of hidden layers (the A factor); this should be a strong choice because changing the number of hidden layers has a complex influence on the mathematical description. Point two is used for the B or C factor, point 5 for the D or E factor, and point 3 is placed at the interaction between A and B/C, leaving point 4 nil (unused); points 9, 10, 12 and 13 are assigned to the F, G, H and I factors.
Next, the placement of the experiment factors is converted from the linear graph in figure 4d into the modified L27(3^13) OA in table 4. Following the previous step, column 1 is for the A factor, column 2 for the B and C factors, column 3 for the interaction of A with B and C, etc. All OA codes in table 3 work at 3 levels, except for the A factor (number of hidden layers), which works at 2 levels (A1 and A2). The space for the third level is allocated to A2 as a dummy level, with the hypothesis (presumption) that two hidden layers will provide better performance than one hidden layer. As a result, OA code 1 in the A factor is replaced by A1, and OA codes 2 and 3 are replaced by A2. Then OA code 1 in the B/C factor is replaced by B1 for one hidden layer (A1) and by B1C1 for two hidden layers (A2), and so on. The other factors, D/E, F, G, H, etc., are treated in the same way.

Train the BPNN to Obtain MSE
At this stage, 27 BPNN structures have been created from table 4. According to Taguchi statistics, they are the best combinations for conducting the experiments. The database (table 1) is used to train each BPNN individually and to record each BPNN's MSE value when the same termination conditions are met. Each BPNN structure is tested three times, once for each car body type (saloon, estate and hatchback), and the average MSE values are put into the BPNN results section of table 4. In the same way, the S/N values for all factors are calculated individually and put into table 5. Table 5 contains the S/N values for levels 1 to 3, the effect/difference value, the rank order and the possible optimum selection for each factor. The effect (difference) component is calculated by subtracting the smallest S/N value from the highest S/N value of the same factor. For example, the effect of the B factor has the value 3.13344, which comes from 30.64633 minus 27.51289. The effect values of each factor are employed to rank the influence of the factors, including the interaction factors, on the MSE performance. According to table 5, the rank of the factors' influence is F, E, I, B/C x D/E, etc. Based on the highest S/N ratios, the optimum level condition for the saloon's BPNN is A1-B1-D2-F1-G2-H2. Since the best level of the A factor is A1 (one hidden layer), the C and E factors are logically neglected. This means that the optimum BPNN architecture for the saloon car has one hidden layer, 17 neurons in the hidden layer, the Tanh transfer function in the hidden layer, the linear Sigmoid transfer function in the output layer, the Quick Propagation learning rule with GA application, and 5,000 epochs.
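The effect (difference) and rank computation just described can be sketched as follows (a minimal illustration; the function names and the example level values are ours):

```python
def factor_effect(level_sn):
    """Effect = highest level S/N minus lowest level S/N for one factor."""
    return max(level_sn) - min(level_sn)

def rank_factors(sn_table):
    """Rank factors by their effect, largest (most influential) first.
    sn_table maps a factor name to its list of per-level S/N values."""
    effects = {name: factor_effect(levels) for name, levels in sn_table.items()}
    return sorted(effects, key=effects.get, reverse=True)
```

Applied to the B factor's level S/N values, the effect is max − min (e.g. 30.64633 − 27.51289 = 3.13344), and sorting all effects descending reproduces the rank column of table 5.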

Taguchi Test Results Analysis
The S/N ratio tables for the estate and hatchback cars have been investigated as well. Both of them have the same S/N ratio configuration. As a consequence, the optimum BPNN architecture uses the model A1-B1-D2-F1-G2-H2, the same as the optimum BPNN architecture for the saloon car.
The interaction effect is evaluated together with the main effects of the factors assigned to the matrix columns designed for interactions. According to table 5, the interaction between B/C and D/E ranked 4th in MSE influence, the interaction between A and B/C ranked 9th, and the interaction between A and D/E ranked 10th. To determine whether two experiment factors interact, a proper interpretation of the results is necessary. The general approach is to separate the influence of the interacting factors from the influence of the others. The interaction factors are analysed from table 4; note that the interaction factor (A x B/C)1 is not the same as the factor combination (A1 x (B/C)1). In this analysis the interaction columns are not used; instead, the columns of table 5 which represent the individual factors are used. The following example is a calculation of the interaction A1 x B1 based on mean analysis:

A1 x B1 = (0.02469 + 0.02340 + 0.02783)/3 = 0.02531

Figure 5 shows the three interaction factors for the saloon car type. The complete explanation of the factor interactions is as follows: the lines A1 and A2 intersect, hence A and B/C interact; the lines A1 and A2 intersect, hence A and D/E interact; the lines B/C1, B/C2 and B/C3 intersect, thus B/C and D/E interact.
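The A1 x B1 cell mean above, and the "lines intersect" criterion used to declare an interaction, can be sketched as:

```python
def cell_mean(values):
    """ANOM cell mean, e.g. A1 x B1 = mean of the MSEs observed with A=1, B=1."""
    return sum(values) / len(values)

def lines_intersect(line_a, line_b):
    """Two factor-level lines interact if they are not parallel, i.e. their
    level-to-level differences are not all equal (so the lines cross
    somewhere when plotted, as in figure 5)."""
    diffs_a = [b - a for a, b in zip(line_a, line_a[1:])]
    diffs_b = [b - a for a, b in zip(line_b, line_b[1:])]
    return diffs_a != diffs_b
```

With the three MSE readings from the example, `cell_mean` reproduces the 0.02531 value; parallel lines (equal slopes) indicate no interaction.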
By the same analysis, all the factor interactions for the estate and hatchback cars plot as intersecting lines. This means that all the factors interact significantly: changing one interaction factor will influence the other factor interactions. The detailed percentage impact of each interaction on the BPNN performance is analysed using the ANOVA approach. The analysis of variance (ANOVA) is the statistical treatment most commonly employed to analyse the results of an experiment and to show the relative contribution of each experiment factor. The analysis of variance is defined in terms of the degrees of freedom (dof), sums of squares, mean squares, F test, error, etc. Since the partial experiment is a simplification of the full experiment, the analysis of the partial experiment should include a confidence analysis, which can be tackled by ANOVA. This research project employed two-way ANOVA, which works with more than one factor and three levels. The F test is used to determine whether an experiment factor is significant relative to the other factors in the ANOVA table. Braspenning P.J. and colleagues [11] classify the F test range into three categories:

F test < 1: the factor's effect is insignificant (experimental error outweighs the control factor effect).
F test ≈ 2: the factor has only a moderate effect compared with experimental error.
F test > 4: the factor has a strong (clearly significant) effect.

The calculation of the F test is given in the appendix. According to the F test values in table 6, the most significant factor for building an optimum BPNN architecture is the transfer function between the hidden layer and the output layer (the F factor), with the maximum F test = 9.32 (51.50%). The error (I) factor, which represents the other factors of the experiment, has the second largest F test value, at 1.59, making it the second most significant factor.
The ANOVA tables for the estate and hatchback cars have been investigated as well. In general, they have the same ANOVA configuration. The most influential factor is the F factor, with 47.69% in the estate car and 39.83% in the hatchback car. Both error factors are smaller than in the saloon car, at 3.13% for the estate and 0.40% for the hatchback car. After completely investigating the Taguchi test results, the next step is to analyse the noise factor in the project. The noise factor has been defined as the car body types in the OA model. The investigation of the noise factor covers the analysis of the mean differences and the significance of the noise factor's effect on MSE performance, for which a t-test is employed. The t-test is a statistical test used to determine whether there is a significant difference between the means of two groups of data [12]. The following paragraphs explain: a. how to calculate the t-test, b. how to apply the t-test to the noise analysis, and c. the t-test results.

a. How to calculate the t-test

The t-test involves the parameters of the mean of the data, the sample size and the hypothesised mean, as written in equations 9 and 10 below [14]. In this problem, the SPSS software is employed to calculate the t values.

t = (x̄ − μ) / (s / √n) (9) or t = (x̄ − μ) / SE (10)

where: n = sample size (n should be less than 30), x̄ = mean of the data, μ = hypothesised mean, s = standard deviation, SE = standard error of the mean (SE = s / √n).

b. Applying the t-test to the noise analysis

As set out in OA table 4, the groups of MSE data are defined by the three noise factors: the estate, saloon and hatchback car types. The two-tailed t-test is employed to measure whether or not the three groups of MSE data differ from each other based on mean analysis. There are three pairs of data groups to be investigated: saloon-estate, saloon-hatchback and estate-hatchback. The null hypothesis h0 is accepted and h1 rejected if the t value is smaller than the standard t-table value, or if the two-tailed significance value is bigger than the 5% error α. Accepting h0 means that the three groups (the saloon, estate and hatchback noise factors) are not significantly different in mean; on the contrary, h1 means that the groups are significantly different in mean. The null hypothesis for the Taguchi test result is formulated as h0: μ1 = μ2 and h1: μ1 ≠ μ2 for each pair of groups.

c. t-test results

Using equations 9 and 10 and the SPSS software, the t-test results for the paired difference data groups are shown in table 7. Based on the hypotheses h0 and h1, the comparison between the t-test results and the standard t-table, together with the comparison of the Sig. values with α, can be used to judge the mean differences between the groups. Because all the t values are smaller than the t-table values, and the Sig. values are bigger than α = 0.05, h0 is accepted. This means that the noise factor (the car databases) did not produce different MSE data configurations.
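The t statistic of equations 9 and 10 can be computed without SPSS, using only the standard library (a sketch; the function names are ours):

```python
import math
import statistics

def t_statistic(data, mu=0.0):
    """t = (mean - mu) / (s / sqrt(n))  (equation 9),
    equivalently (mean - mu) / SE with SE = s / sqrt(n)  (equation 10)."""
    n = len(data)
    se = statistics.stdev(data) / math.sqrt(n)  # standard error of the mean
    return (statistics.mean(data) - mu) / se

def paired_t(group1, group2):
    """Paired t-test on the per-row differences of two MSE groups,
    testing h0: the mean difference is zero."""
    diffs = [a - b for a, b in zip(group1, group2)]
    return t_statistic(diffs, mu=0.0)
```

The resulting t value is then compared against the two-tailed t-table value at α = 0.05 with n − 1 degrees of freedom, exactly as in table 7.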

Conclusion of the Taguchi Result
In summary, the optimum BPNN parameters for all the car types (saloon, estate and hatchback) are presented individually in table 8 below. All the car types have the same optimum BPNN architecture. Moreover, all the interaction factors interact strongly with each other. The error factor (factor I) for the saloon car is classified as a moderate effect; in contrast, factor I in the estate and hatchback cars is categorised as a small impact. A comparison between the optimum BPNN result and the NN machine expert commonly used in the NeuroSolutions software was carried out to verify the Taguchi results in the intelligent car body design system. All car types were tested 10 times with different NN input data, using the optimum BPNN and the NN machine expert. The NN machine expert worked at an average MSE of 0.03377, while the optimum BPNN architecture worked at an average MSE of 0.00587. This means the new optimum BPNN architecture based on Taguchi optimisation improved the NN performance by around 82.62% over the current NN model. It can be inferred that the optimum BPNN architecture applied in the intelligent car body design system is much better than the current NN machine expert. Figure 6 shows this comparison for the saloon, estate and hatchback car types.
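As a check, the quoted improvement figure follows directly from the two average MSE values:

```python
mse_expert = 0.03377    # NN machine expert, average MSE over 10 tests
mse_optimum = 0.00587   # Taguchi-optimised BPNN, average MSE over 10 tests

# relative improvement of the optimised architecture over the expert, in %
improvement = (mse_expert - mse_optimum) / mse_expert * 100.0
```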

Effect of GA in Training Performance
This section discusses the effect of employing the GA application in the intelligent car body design training. According to table 2, the GA application is employed in the G factor of the experiment as a method to adjust the NN's weights for all levels of the learning rules. As a result, the NN training process works with the optimum selected weight values. The discussion is divided into 4 parts: 5.1 introduces the learning algorithms, 5.2 tests the BPNN without GA in the intelligent car body training, 5.3 tests the BPNN with the GA application in the intelligent car body training, and 5.4 compares the optimum BPNN training with and without the GA application.

Introduction to the Learning Algorithms in BPNN Weight Adjustment
A learning algorithm is defined as a procedure for modifying or adjusting the weights on the connections of each of the neurons or units in the BPNN training process. BPNN training involves 3 stages: the feedforward of the input training pattern, the calculation and backpropagation of the associated error, and the adjustment of the weights. In addition, the learning rate and momentum are very important parameters in the BPNN training process.
The learning rate parameter in the first-order gradient approach is based on the step length. If the step size is too small, it will take too many steps to reach the minimum (bottom) condition. Conversely, if the step size is too large, the search will approach the minimum point fast but will jump around it, leaving a large approach error. Figure 7 shows the impact of different step sizes on reaching the optimum solution in the weight update. With a constant step size Δl, the search starts from x0 and goes through x1 and x2, and the step Δl leads the net to jump over the point xmin to x3. If the step length is kept at Δl, the search will keep jumping between x3 and x2.

Momentum learning is a backpropagation parameter that can occasionally change the weight adjustment direction against the gradient direction, to prevent the approach process from being trapped in a local optimum point, as happens in the steepest descent algorithm. The energy momentum term is defined as the momentum parameter μ times a weight correction based on the previous gradient. The new weight gains the ability to jump onwards in the search while avoiding local minima (see figure 8), and as an effect of using the momentum term, the convergence process is faster than with steepest descent learning. The momentum equation in the backpropagation algorithm gives the new weight for training step t+1 based on the weights at training steps t and t−1:

w(t+1) = w(t) − η ∂E/∂w + μ (w(t) − w(t−1))

where η is the learning rate and the momentum parameter μ has a value in the range 0 to 1. Figure 8 illustrates the effect of momentum on achieving the optimum condition in NN training. Figure 9 chronologically explains the interconnection between the BPNN and the GA application for adjusting the weight parameters in the DBD learning rule.
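The momentum update just described can be sketched for a single weight as follows (the learning rate 0.5 and momentum 0.0166 echo the training settings used later in the paper; the gradient value in the example is hypothetical):

```python
def momentum_update(w, w_prev, gradient, lr=0.5, mu=0.0166):
    """w(t+1) = w(t) - lr * gradient + mu * (w(t) - w(t-1)).
    The momentum term mu * (w(t) - w(t-1)) carries the search past
    shallow local minima; mu lies in the range [0, 1]."""
    return w - lr * gradient + mu * (w - w_prev)
```

With mu = 0 this reduces to plain steepest descent; a positive mu reuses part of the previous step, which is what speeds up convergence.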
The activities in figure 9 comprise: collecting and preparing the data; defining the robust (optimum) NN architecture; initialising the population (connection weights and thresholds); assigning input and output values to the ANN; computing the hidden layer values; computing the output values; and computing the fitness using the MSE formula. If the error is acceptable, the process moves to the next step; if not, it proceeds to the next iteration of the GA. Finally, the neural network is trained with the selected connection weights.
The following parts 5.2 and 5.3 show the benefit of the Quick Propagation learning rule in the optimum BPNN training for the saloon, estate and hatchback car body databases.

BPNN without GA in Intelligent Car Body Training
Once the optimum BPNN structure has been designed, the next step is to train the weight values in the BPNN. There are two training methods for this purpose: with or without GA support (figs. 10 and 11). The MSE values and the convergence condition are used as indicators in the comparison test. The experiment used Quick Propagation with an initial learning rate of 0.50 and a momentum of 0.0166. Figure 10 shows the iteration process of the optimum BPNN training without the GA application for the saloon car body database, run 5 times to ensure the stability of the results. According to figure 10, the MSE training converges by epoch 5,000. The BPNN training gave an average MSE performance of 0.04408686 on the saloon car database.

Figure 10. BPNN training for the saloon car without GA application

BPNN with GA in Intelligent Car Body Training
The selection of the connection weights in the neural network is a key issue in BPNN performance: a complex network connection degrades the BPNN's ability to find the global minimum. The randomisation method is commonly used to initialise the network weights before training. The genetic algorithm (GA) is employed to minimise the fitness criterion (MSE) by adjusting the BPNN weights. The main advantage of using a GA is its ability to automatically discover new values of the neural network parameters from their initial values. The GA parameters employed in this BPNN training are:

Termination: fitness convergence was selected, so the BPNN training stops the evolution when the fitness is deemed to have converged.
Selection: the roulette rule is employed to select the best chromosomes in proportion to their rank.
Initial values: the initial learning rate and momentum are 0.500000 and 0.0166.
Population: 50 chromosomes, with a maximum of 100 generations.
Initial network weight factor: 0.1074. Mutation probability: 0.01.
Crossover: heuristic crossover is used. Crossover combines two chromosomes (parents) to generate a new chromosome (offspring); green digits indicate the best parent and red digits the worst parent. Below is an example of two parents used in the BPNN training:

Parent 1: 11001010
Parent 2: 00100111

The heuristic uses the fitness values of the two parent chromosomes to determine the direction of the search. The offspring are generated according to the following equations [2]:

Offspring1 = BestParent + r x (BestParent − WorstParent)
Offspring2 = BestParent

The symbol r is a random number between 0 and 1; for example, new chromosomes can be generated from parents 1 and 2 at r = 0.4. Figure 11 shows the MSE training process of the optimum BPNN architecture over five simulation runs. The saloon training gave an average MSE of 0.004468267.
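The heuristic crossover rule (Offspring1 = Best + r x (Best − Worst); Offspring2 = Best) can be sketched on real-valued weight chromosomes; note that, the paper's binary parent example aside, heuristic crossover is normally defined on real-valued genes, which is the assumption made here:

```python
import random

def heuristic_crossover(best, worst, r=None):
    """Heuristic crossover on two weight vectors:
    offspring1 = best + r * (best - worst)  -- extrapolates past the
    better parent, away from the worse one; offspring2 = best."""
    if r is None:
        r = random.random()  # r is a random number in [0, 1]
    offspring1 = [b + r * (b - w) for b, w in zip(best, worst)]
    offspring2 = list(best)
    return offspring1, offspring2
```

For example, with best gene 1.0, worst gene 0.5 and r = 0.4, the first offspring's gene is 1.0 + 0.4 x 0.5 = 1.2, i.e. the search is pushed beyond the better parent in the improving direction.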

Comparison of the Optimum BPNN Training with and without GA Application

Figure 12 shows the comparison between the optimum BPNN training performance without the GA application, as explained in part B, and the optimum BPNN training performance with the GA application, as explained in part C. The figure shows that by using the GA, the optimum BPNN training reaches the convergent condition in fewer iterations. It also achieves a ten times better MSE than the optimum BPNN training without the GA application. In short, it can be confidently said that the GA application significantly increases the BPNN performance.

Conclusions
In this paper, the Taguchi method has been used to find the optimum BPNN architecture for the intelligent car body design system. Many advantages are obtained from using the Taguchi method. Firstly, the authors were able to evaluate the impact of the BPNN's parameters, including the parameter interactions, on the MSE performance. The transfer function in the output layer dominated the BPNN training performance, with 51.50% for the saloon car, 47.69% for the estate car and 39.83% for the hatchback car; no previous research had investigated and reported this. Secondly, the authors concluded that the car database is immune to the noise factor of the car types, as investigated by the statistical approaches. Thirdly, there are strong interactions between the number of hidden layers and the number of neurons, between the number of hidden layers and the transfer function, and between the number of neurons and the transfer function in BPNN training. Fourthly, the GA application in BPNN training speeds up the convergence and improves the MSE performance ten times. Lastly, there is a great opportunity to develop software combining the NN parameters with the design of experiments (DoE) Taguchi tool, adaptable to any database; it would work not only in the intelligent car body design system but also in any other kind of problem.