Open Access
Int. J. Metrol. Qual. Eng., Volume 16, 2025, Article Number 8, 10 pages
DOI: https://doi.org/10.1051/ijmqe/2025008
Published online 10 December 2025

© Y. Chen and Y. Liu, Published by EDP Sciences, 2025

Licence: Creative Commons Attribution. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

Owing to the continuous development of science and technology, the requirements for the production and inspection of parts are increasing. The most common method for inspecting free-form surface machining errors is contact measurement with a coordinate measuring machine (CMM). This method yields accurate measurement point information, but when the measured workpiece has many measurement points, the inspection efficiency of the CMM drops greatly. Improving inspection efficiency therefore calls for a machining error prediction model in which the machining error information of the measured points is used to predict the machining errors of the unmeasured points.

Neural network models are widely used in error prediction, and many experts and scholars have investigated neural network prediction, proposing numerous schemes for optimizing the radial basis function (RBF) neural network model. Shahriar et al. [1] used the Harris hawk optimization (HHO) algorithm to optimize the network parameters of an RBF neural network and compared it with models optimized by other algorithms through simulation experiments. The experimental results confirm that the HHO algorithm performs better in training neural networks. Although the HHO algorithm exhibits a high degree of adaptability, it still suffers from low optimization efficiency and an imbalance between the exploration and development stages. To enhance the prediction performance of the RBF neural network model, Wang et al. [2] used a multi-strategy sparrow search algorithm (SSA) to optimize the RBF neural network, improving its basis function and the weights from the hidden layer to the input layer. Although this method enhances the prediction performance of the RBF neural network to a certain extent, prediction stability still needs improvement because the algorithm tends to fall into local optima. Gao et al. [3] proposed a prediction model based on the HHO algorithm combined with a convolutional neural network and support vector regression and compared it with other experimental models. The results demonstrate that the model exhibits high prediction accuracy and stability; however, its implementation is complicated and its applicability is limited. Qosja et al. [4] used an RBF neural network model for parameter learning and power demand forecasting within Kalman filtering and compared this method with other prediction models.
Their findings indicate that the method effectively improved the prediction accuracy of the model and reduced its complexity. Neven et al. [5] used two different types of neural networks, a multilayer perceptron and an RBF neural network, to predict the compressibility factor of natural gas. Using continuous measurements, the two models were compared and analysed, and the experimental results demonstrate that the RBF prediction model has significant advantages. By combining a mathematical model with an RBF neural network, Mita et al. [6] showed that their proposed method generalizes well to nonlinear relationships and enhanced the prediction accuracy of the model by introducing a data distance index. Sun et al. [7] proposed a prediction model based on chaos theory and an RBF neural network; the model extracts dynamic information and structure from chaotic characteristics, updates the weights of the RBF neural network, and improves the training speed and prediction accuracy of the model. Bao et al. [8] used an adaptive SSA to optimize the parameters of an RBF neural network, optimized the nonlinear system by updating the deviation of the output layer of the RBF neural network model, and calculated the predictive control function according to that deviation. Although the optimized prediction model outperforms the ordinary model, it suffers from high model complexity and long running times. Jin et al. [9] constructed a particle swarm optimization RBF prediction model and used an adaptive weight scheme to improve the particle swarm optimization algorithm; the optimized model exhibits improved prediction accuracy. Yang et al. [10] optimized the parameters of a grey neural network with the fruit fly optimization algorithm (FOA) and evaluated the proposed prediction model on experimental data, showing that the optimized neural network model had better prediction performance. Although this optimization method achieves high prediction accuracy, its data processing time is long, making it unsuitable for large sample data. Although these studies have advanced neural network prediction, there is still much room for improvement in model and algorithm optimization.

These studies optimized the RBF neural network with different methods, among which the HHO algorithm proved particularly effective. Nevertheless, problems remain: the algorithm tends to fall into local optima, its search ability is weak, parameter adjustment is complex, and optimization efficiency is low.

In addition, to further research machining error prediction, many scholars have used different methods and experiments to demonstrate their effectiveness. Dang et al. [11] established a practical behaviour of cutter and part (PBCP) model by dividing the tool and part into differential units, then optimized the PBCP model using a noniterative method. The optimized model was used to predict machining errors, and the reliability of the method was verified via experiments. However, this method involved only a simple model optimization, was not compared with other methods, and therefore has certain limitations. Iglesias et al. [12] proposed a flatness error prediction model for predicting the machining errors of a manipulator during surface milling operations. According to the prediction results of the model, there is a high degree of similarity between the predicted machining error values and the true values. However, this method only shows that the predicted values resemble the real values, so the efficiency and practical effect of the optimization still need improvement. Based on the characteristics of the surrogate model, Zhou et al. [13] constructed a surrogate prediction model and analysed the influence of experimental parameters on the machining error of compressor blades; the machining error was then predicted. The results demonstrate that the prediction model can effectively improve error prediction accuracy. Li et al. [14] proposed a machining error prediction model based on elastic force. A dynamic response correction coefficient is introduced in the model, which is then used to predict machining errors via the Gaussian process regression method. The prediction accuracy of the model was verified via experiments, which demonstrated that the model can effectively predict the machining errors of thin-walled parts. Han et al.
[15] collected thermal error data from different signal sources and structures and extracted features according to the characteristics of these data. Subsequently, based on a nonlinear regression model, a thermal error prediction model of the machine tool spindle was constructed. The experimental results were compared with those of a single-source model to verify the effectiveness of the method. Yu et al. [16] proposed a transfer-learning-based machining error prediction strategy to address the difficulties associated with machining error inspection of thin-walled parts. The method reuses error data by establishing a relationship between historical data and real-time data, and its effectiveness was verified. Li et al. [17] established a mapping relationship between tolerance parameters and the geometric errors of key parts by adding a Fourier series. Subsequently, a machining error prediction model was constructed using a homogeneous transformation matrix and multibody system theory. The reliability of the prediction model was verified via a grinding experiment on a precision vertical grinder. Denkena et al. [18] proposed a data-driven active machine learning prediction method that can effectively predict the machining error of a workpiece; a comparison of two datasets reveals that the active learning strategy can improve model accuracy. Sun et al. [19] proposed a Bayesian learning method to predict the machining error of a blade. The generalization capability of the model was improved by incorporating engineering knowledge, and the complexity of the model was reduced by using a sparse Bayesian learning method. The effectiveness of the method was verified via blade milling experiments. Liu et al. [20] proposed a new machining error prediction method that integrates multiple models and workpiece systems.
The method was verified via experiments on flexible thin-walled parts, which indicated its reliability and demonstrated that it can effectively shorten the time required for error inspection. These studies have improved model prediction accuracy to a certain extent through the construction of new error prediction models. However, several shortcomings remain, such as limited validation scope and a lack of model adjustment and optimization.

In summary, although domestic and foreign scholars have proposed many optimization schemes for error prediction models, several problems remain in machining error prediction, such as low prediction accuracy, incomplete prediction models, unreasonable parameter selection and low optimization efficiency. The prediction performance of the HHO-RBF model therefore needs further optimization. To address the low efficiency of free-form surface machining error inspection, an improved Harris hawk optimization-radial basis function (IHHO-RBF) prediction model is proposed in this work. To address the weak search ability and low optimization efficiency of the Harris hawk optimization-radial basis function (HHO-RBF) model, a dynamic programming learning mechanism is adopted to avoid repeated calculation and improve the global search capability of the HHO algorithm. To address the imbalance of the HHO-RBF model in the development stage and its tendency to fall into local optima, the Nelder-Mead algorithm is used to improve the optimization ability of the HHO algorithm. The improved Harris hawk optimization (IHHO) algorithm is then used to optimize the network parameters of the RBF neural network. The IHHO-RBF prediction model is compared with the sparrow search algorithm-radial basis function (SSA-RBF) and fruit fly optimization algorithm-radial basis function (FOA-RBF) prediction models.

2 RBF neural network model

In an RBF neural network, a radial basis function is used as the activation function. This function has the advantages of fast speed, easy training and strong generalization ability. The RBF neural network consists of an input layer, a hidden layer and an output layer. The number of nodes in the hidden layer is usually determined according to the complexity of the problem. The transfer function of the hidden layer neurons is a nonnegative nonlinear function that is radially symmetric and decays with distance from the centre point. In an RBF neural network, the input layer receives external inputs, the hidden layer performs nonlinear transformations on the input data, and the output layer produces the final prediction result. The activation function of the hidden layer is usually a radial basis function, such as a Gaussian or polynomial function. In this work, the Gaussian function was used as the radial basis function, expressed as follows:

G(x - c_i) = e^{-\frac{(x - c_i)^2}{2\sigma^2}}    (1)

where x is a random variable, c_i is the centre point position, and σ is the width parameter. Assuming that there are n training data points, for the input x = (x_1, x_2, \ldots, x_n)^T, each training data point can be used as a sample centre; that is, c_1, c_2, \ldots, c_m are the sample centres, and the output of the m-th hidden neuron is as follows:

h_{im} = G_m(\|x_i - c_m\|)    (2)

where cm=xm is the sample centre of the hidden layer. The output of the RBF neural network is as follows:

y_i = b_m + \sum_{m=1}^{n} h_{im} w_m = b_m + \sum_{m=1}^{n} w_m G_m(\|x_i - c_m\|)    (3)

where w_m is the weight of the output layer, b_m is the threshold, and ||xi - cm|| is the Euclidean distance. The shorter the distance between the input and the sample centre cm, the stronger the response of the radial basis function to the input. The topology of the RBF neural network is shown in Figure 1.
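As a minimal sketch of equations (1)-(3), the forward pass of a Gaussian RBF network can be written as follows; the function name and the single shared width σ are illustrative assumptions:

```python
import numpy as np

def rbf_forward(x, centers, sigma, weights, bias):
    """Forward pass of a Gaussian RBF network for one input sample.

    x       : input vector, shape (d,)
    centers : hidden-layer centres c_m, shape (m, d)
    sigma   : shared width parameter of the Gaussian basis function
    weights : output-layer weights w_m, shape (m,)
    bias    : output-layer threshold b
    """
    # Euclidean distance from the input to each centre (Eq. 2)
    dist = np.linalg.norm(centers - x, axis=1)
    # Gaussian activation of each hidden neuron (Eq. 1)
    h = np.exp(-dist**2 / (2 * sigma**2))
    # Weighted sum plus threshold gives the network output (Eq. 3)
    return bias + h @ weights
```

An input that coincides with a centre produces the maximal activation of that neuron, reflecting the distance-decay property described above.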

Fig. 1

Topology of the RBF neural network.

3 HHO algorithm principle and improvement strategy

3.1 Principle of the HHO algorithm

The Harris hawk optimization algorithm is a population-based metaheuristic well suited to model optimization problems. Its solution process includes three stages: exploration, transition and development.

In the exploration stage, the Harris hawk randomly inhabits certain locations and uses two strategies with equal probability to search for prey globally. When the probability, p, is less than 0.5, the hawk will move according to the locations of family members and prey. When the probability is greater than or equal to 0.5, the hawk will randomly stop on a tree within the population range.

In the transition stage, different development and utilization behaviours are dynamically selected according to the escape energy of the prey. During the iterative process, the dynamic escape energy tends to decrease. When the escape energy is greater than or equal to 1, the Harris hawk will search for prey in different regions (global exploration); when the escape energy is less than 1, the Harris hawk will search the area around the prey (local exploration).
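The transition logic above hinges on the escape energy E. Its formula is not restated here, so the sketch below assumes the standard HHO definition, E = 2·E0·(1 − t/T) with E0 drawn uniformly from (−1, 1):

```python
import random

def escape_energy(t, T):
    """Escape energy of the prey in the standard HHO formulation:
    E = 2 * E0 * (1 - t / T), with E0 drawn uniformly from (-1, 1).
    |E| >= 1 triggers global exploration; |E| < 1 triggers local search
    around the prey, as described in the transition stage above."""
    E0 = random.uniform(-1, 1)
    return 2 * E0 * (1 - t / T)
```

The magnitude bound 2·(1 − t/T) shrinks linearly with the iteration t, so late iterations increasingly favour local search around the prey.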

In the development stage, four strategies are used to imitate the hunting behaviour of the Harris hawk. These four strategies involve soft encirclement and hard encirclement when the prey has not successfully escaped, as well as soft encirclement and hard encirclement when the prey has successfully escaped.

x(t+1) = \begin{cases} y, & f(y) < f(x(t)) \\ z, & f(z) < f(x(t)) \end{cases}    (4)

z = y + s \times lf(d), \quad 0.5 \le |E| < 1, \; r > 0.5    (5)

The soft encirclement and hard encirclement adopted when the prey has failed to escape are described as follows:

x(t+1) = (x_{rabbit}(t) - x(t)) - E\,|J\,x_{rabbit}(t) - x(t)|, \quad 0.5 \le |E| < 1, \; r \ge 0.5    (6)

x(t+1) = x_{rabbit}(t) - E\,|\Delta x(t)|, \quad |E| < 0.5, \; r \ge 0.5    (7)

The soft encirclement and hard encirclement adopted when the prey has successfully escaped are described as follows:

y = x_{rabbit}(t) - E\,|J\,x_{rabbit}(t) - x(t)|    (8)

y = x_{rabbit}(t) - E\,|J\,x_{rabbit}(t) - x_m(t)|    (9)

where y represents the individual before reverse learning, f(y) represents its fitness value, z represents the individual after reverse learning, f(z) represents its fitness value, lf represents the Levy flight factor, and J represents the random jump strength of the prey.

3.2 HHO algorithm improvement strategy

Although the HHO algorithm exhibits a high degree of adaptability and is suitable for a variety of constrained and unconstrained optimization problems, it still suffers from low optimization efficiency and an imbalance between the exploration and development stages. Therefore, in this work, the exploration and development stages are optimized through a dynamic programming learning mechanism and the Nelder-Mead algorithm.

(1) Dynamic programming learning mechanism for the exploration stage. Although the HHO algorithm performs well in the transition stage, it is still inefficient on complex optimization problems during the exploration stage. Therefore, a dynamic programming learning mechanism is introduced to enhance the global search capability of the HHO algorithm. Using the dynamic programming learning mechanism and memoization, a problem can be divided into several subproblems of the same type. Consider the Fibonacci sequence, F(n) = F(n−1) + F(n−2) for n > 1, with F(0) = 0 and F(1) = 1. The calculation process of the F(4) subproblem when n = 4 is shown in Figure 2.

Shadows of the same colour in Figure 2 represent the same subproblem. If a top-down recursive solution is used, the same subproblem is calculated repeatedly. The dynamic programming learning mechanism instead stores the obtained subresults; when a problem at the same node is solved again, the stored subresult can be used directly, which prevents repeated calculation during the search process and thereby improves the search efficiency of the HHO algorithm. Therefore, in the exploration stage of the HHO algorithm, the habitat location of the Harris hawk can be expressed as follows:
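The memoization idea behind Figure 2 can be illustrated with the Fibonacci example itself; here `functools.lru_cache` stands in for the mechanism's result store:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci with memoization: each subproblem F(k) is computed once
    and its result stored, mirroring the dynamic programming learning
    mechanism that stores subresults to avoid repeated calculation."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the cache, computing F(4) top-down evaluates F(2) twice and F(1) three times; with it, every node in Figure 2 is solved exactly once.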

x(n+1) = \begin{cases} x_{rand}(n) - r_1\,|x_{rand}(n) - 2 r_2 x(n)|, & q \ge 0.5 \\ (x_{rabbit}(n) - x_m(n)) - r_3 (lb + r_4 (ub - lb)), & q < 0.5 \end{cases}    (10)

where xrand(n) is the position of a random individual, xrabbit(n) is the location of the prey, xm(n) is the average position of the group, ub is the upper bound of the variable, lb is the lower bound of the variable, and r1, r2, r3, r4 and q are random numbers in (0, 1).

(2) Nelder-Mead algorithm for the development stage. The HHO algorithm changes the escape energy factor linearly, which leads to an imbalance between exploration and development. Therefore, the Nelder-Mead algorithm is used to improve the optimization ability of the HHO algorithm. The Nelder-Mead algorithm, proposed by Nelder and Mead, is an unconstrained optimization method for multivariate problems; it constructs a polyhedron with d+1 vertices in d-dimensional space, as shown in Figure 3.

In each iteration, it is necessary to ensure that the quality of the solution is not lower than that of the previous iteration. If the initial solution is not the optimal solution, the algorithm will find a better solution according to the preset rules and gradually approach the optimal solution through multiple iterations. This process is conducted by calculating the fitness value of all individuals to find the centre position, as follows:

x_c = (x_g + x_p) / 2    (11)

x_r = x_c + r \times (x_c - x_w)    (12)

where xg is the best position, xp is the second-best position, and f(xg) and f(xp) are the corresponding fitness values; xr is the reflection point, and f(xr) is its fitness value; r is the reflection coefficient; and xw is the worst position.

When f(xr)<f(xg), the reflection direction is correct, and the expansion operation can be performed. The point of expansion is as follows:

x_y = x_c + \delta (x_r - x_c)    (13)

where xy is the point of expansion, and δ is the coefficient of expansion. If f(xy)<f(xg), then xw=xy; otherwise, xw=xr.

When f(xr)>f(xw), the reflection direction is wrong, and the compression operation can be performed. The compression point is as follows:

x_i = x_c + \beta (x_r - x_c)    (14)

where xi is the compression point, xc is the centre position, and β is the compression coefficient.

When f(xr)>f(xw)>f(xp), the contraction operation is performed. The contraction point is as follows:

x_o = x_c - \alpha (x_w - x_c)    (15)

where xo is the contraction point, xc is the centre position, and α is the contraction coefficient.

In the search process of the Nelder-Mead algorithm, the fitness of each vertex of the polyhedron is evaluated, the worst vertices are identified, and they are improved via reflection, expansion and compression operations to form a new polyhedron. Each vertex of the polyhedron is a solution of the HHO algorithm, and the polyhedron corresponds to the optimal hunting range of the Harris hawk. This approach of vertex fitness comparison and step-by-step optimization allows the Nelder-Mead algorithm to search effectively, approximate the optimal solution of the problem, and efficiently strengthen the exploration and development performance of the HHO algorithm.
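One vertex update following equations (11)-(15) can be sketched as below; the branch order is simplified and the coefficient defaults (r = 1, δ = 2, β = α = 0.5) are common textbook values assumed for illustration:

```python
import numpy as np

def nm_step(x_g, x_p, x_w, f, r=1.0, delta=2.0, beta=0.5, alpha=0.5):
    """One Nelder-Mead update of the worst vertex x_w.
    x_g is the best vertex, x_p the second-best; r, delta, beta, alpha
    are the reflection, expansion, compression and contraction
    coefficients. Returns the replacement vertex."""
    x_c = (x_g + x_p) / 2                  # centre position, Eq. (11)
    x_r = x_c + r * (x_c - x_w)            # reflection point, Eq. (12)
    if f(x_r) < f(x_g):                    # reflection direction correct
        x_y = x_c + delta * (x_r - x_c)    # expansion point, Eq. (13)
        return x_y if f(x_y) < f(x_g) else x_r
    if f(x_r) > f(x_w):                    # reflection direction wrong
        return x_c + beta * (x_r - x_c)    # compression point, Eq. (14)
    return x_c - alpha * (x_w - x_c)       # contraction point, Eq. (15)
```

Each call replaces the worst vertex with a candidate whose fitness is checked before acceptance, which is how the quality of the solution is kept from degrading between iterations.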

Fig. 2

Calculation process of the F(4) subproblem.

Fig. 3

Principle of the Nelder-Mead algorithm.

4 IHHO algorithm-optimized RBF neural network prediction model

By combining the IHHO algorithm with the RBF neural network, the search and optimization ability of the RBF neural network can be effectively improved. Additionally, falling into local optima can be avoided by optimizing the network parameters of the RBF neural network. The specific algorithmic steps of the IHHO-RBF neural network are as follows:

  (1) The IHHO algorithm population is initialized, with each individual representing a set of RBF neural network parameters (centre vector, width parameter and output layer weight).

  (2) The topology of the RBF neural network model is determined according to the input and output of the model.

  (3) The IHHO algorithm is used to initialize and update the RBF neural network parameters.

  (4) The fitness of each individual is calculated, and the pros and cons of the solution are determined based on the calculation results.

  (5) The search and development stages are optimized via a dynamic programming learning mechanism and the Nelder-Mead algorithm.

  (6) The IHHO algorithm is used to iteratively update the parameters of individuals in the population and conduct a search with the aim of minimizing the objective function value.

  (7) The optimized network parameters of the IHHO algorithm are output to the RBF neural network for calculation.

  (8) The model determines whether the termination condition is satisfied. If the condition is satisfied, the predicted value of the machining error is output; otherwise, the model returns to step (3).
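The fitness evaluation in the steps above can be sketched as follows. The flat parameter encoding, the helper names `decode` and `fitness`, and the RMSE objective are illustrative assumptions rather than the exact scheme used here:

```python
import numpy as np

def decode(individual, m, d):
    """Split one IHHO individual (a flat parameter vector) into RBF
    parameters: m centres of dimension d, one shared width and m output
    weights. This flat encoding is an illustrative assumption."""
    centers = individual[:m * d].reshape(m, d)
    sigma = abs(individual[m * d])        # width parameter, assumed nonzero
    weights = individual[m * d + 1:]
    return centers, sigma, weights

def fitness(individual, X, y, m):
    """Fitness of an individual: RMSE of the RBF network it encodes on
    the training samples (lower is better, as the optimizer minimizes)."""
    centers, sigma, weights = decode(individual, m, X.shape[1])
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    H = np.exp(-dist ** 2 / (2 * sigma ** 2))   # hidden activations, Eq. (1)
    pred = H @ weights                          # network output, Eq. (3)
    return float(np.sqrt(np.mean((pred - y) ** 2)))
```

With such a mapping, any population-based optimizer can treat the RBF parameters as a search vector and drive the training error toward zero.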

The algorithmic process of the IHHO-RBF prediction model is shown in Figure 4.

Fig. 4

Flowchart of the IHHO-RBF prediction model.

5 Experimental verification

5.1 Analysis of machining error composition

The machining error of free-form surface parts refers to the shortest distance between the theoretical surface and the actual inspection points, and it serves as an evaluation index for surface machining quality. The machining error diagram is shown in Figure 5, where R is the machining error corresponding to the measurement point P2(X2, Y2, Z2), equation (16) is the corresponding expression, and P1(X1, Y1, Z1) is the theoretical point corresponding to the shortest distance from the measurement point P2 to the theoretical surface. The normal vector N of the theoretical point P1 on the free-form surface is the vector formed by connecting measurement point P2 and theoretical point P1.

R = \sqrt{(X_1 - X_2)^2 + (Y_1 - Y_2)^2 + (Z_1 - Z_2)^2}    (16)
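Equation (16) is a plain Euclidean distance between the theoretical and measured points; the coordinates in the example below are hypothetical:

```python
import math

def machining_error(p1, p2):
    """Machining error R per equation (16): the Euclidean distance
    between the theoretical point p1 and the measured point p2."""
    return math.dist(p1, p2)

# Hypothetical coordinates (mm): a measured point offset 0.003 mm along
# z from its theoretical point gives R = 0.003 mm.
R = machining_error((10.0, 5.0, 2.000), (10.0, 5.0, 2.003))
```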

Fig. 5

Machining error of the free-form surface.

5.2 Experimental design

To verify the prediction performance of the IHHO-RBF prediction model, machining error data obtained via CMM inspection were used for the experiments. The experiments used a Hexagon CMM (PC-DMIS software, MPEE = 0.9 + L/400 µm) with a 5 mm diameter ruby stylus ball; the probing speed was 15 mm/s, the retract distance 4 mm, and the retract speed 2 mm/s. The experimental object was a free-form surface model machined on a CNC machine tool. A total of 898 measuring points were selected on the free-form surface, of which 600 were randomly selected as training samples. The data from the remaining 298 measuring points were used as prediction samples to verify the prediction performance of the model. The CAD model and the theoretical measuring points of the free-form surface are shown in Figure 6. The CMM inspection experiment is shown in Figure 7.

To verify the effectiveness of the proposed algorithm, four prediction models, the SSA-RBF, FOA-RBF, HHO-RBF and IHHO-RBF frameworks, were used to predict the machining error on the MATLAB R2023b platform. The theoretical measurement points (the x, y and z coordinates on the free-form surface) were taken as the three input parameters of the model, and the data were normalized and analysed using the grey correlation degree. The resolution coefficient was 0.5, the initial population was 50, the number of hidden layer neurons was 20, and the number of output layer nodes was 1. The mean absolute error (MAE), mean absolute percentage error (MAPE) and root mean square error (RMSE) were used as evaluation indices; the closer these indices are to 0, the better the prediction performance of the model. The evaluation indices are calculated as follows:

\sigma_{MAE} = \frac{1}{N} \sum_{i=1}^{N} |y_i' - y_i|    (17)

\sigma_{MAPE} = \frac{100\%}{N} \sum_{i=1}^{N} \left| \frac{y_i' - y_i}{y_i} \right|    (18)

\sigma_{RMSE} = \sqrt{\frac{\sum_{i=1}^{N} (y_i' - y_i)^2}{N}}    (19)

where σMAE, σMAPE and σRMSE represent the mean absolute error, mean absolute percentage error and root mean square error, respectively. N is the sample size, yi' is the predicted value of the machining error, and yi is the actual value of the machining error.
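Equations (17)-(19) translate directly into code; the function name is illustrative:

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Evaluation indices of equations (17)-(19): MAE, MAPE (in %) and
    RMSE of predicted values y_pred against actual values y_true."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))                    # Eq. (17)
    mape = 100.0 * np.mean(np.abs(err / y_true))  # Eq. (18)
    rmse = np.sqrt(np.mean(err ** 2))             # Eq. (19)
    return mae, mape, rmse
```

Note that MAPE is undefined whenever an actual value y_i is zero, which is not an issue here since measured machining errors are nonzero distances.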

Fig. 6

CAD model and measuring points on the curved surface.

Fig. 7

CMM inspection experiment.

5.3 Experimental results and analysis

The prediction results of the IHHO-RBF, SSA-RBF, FOA-RBF and HHO-RBF models are shown in Figure 8, and the prediction errors are shown in Figure 9. The prediction results for the different models are shown in Table 1, and the comparison data between the IHHO-RBF prediction model and the other models are shown in Table 2.

As seen in Figure 9, the IHHO-RBF model exhibits higher prediction accuracy than the other models, and its predicted values are closer to the real values. The prediction performance of the FOA-RBF and HHO-RBF models was not sufficiently stable, while that of the SSA-RBF model was relatively stable but with higher overall error. The IHHO-RBF model also showed smaller prediction error fluctuations when the data distribution was offset, indicating stronger robustness.

It can be seen in Tables 1 and 2 that the mean absolute error of the IHHO-RBF model is 0.0026 mm, the root mean square error is 0.0028 mm, and the mean absolute percentage error is 17.85%. The mean absolute error of the SSA-RBF model is 0.0041 mm, the root mean square error is 0.0053 mm, and the mean absolute percentage error is 34.51%. The mean absolute error of the HHO-RBF model is 0.0047 mm, the root mean square error is 0.0058 mm, and the mean absolute percentage error is 35.32%. The mean absolute error of the FOA-RBF model is 0.0061 mm, the root mean square error is 0.0067 mm, and the mean absolute percentage error is 42.63%. Compared with the FOA-RBF prediction model, the SSA-RBF prediction model and the HHO-RBF prediction model, the IHHO-RBF prediction model decreased the mean absolute error by 57.38%, 36.59% and 44.68%, respectively, and the root mean square error by 56.72%, 45.28% and 51.72%, respectively.

Fig. 8

Prediction results for different models.

Fig. 9

Comparison of the prediction errors of different models.

Table 1

Prediction results for different models.

Table 2

Comparison data between the IHHO-RBF prediction model and the other models.

6 Conclusions

To address the problems associated with machining error prediction for free-form surfaces, a prediction model based on the IHHO-RBF neural network is established. To address the shortcomings of the exploration and development stages of the HHO algorithm, a dynamic programming learning mechanism is proposed to increase its search efficiency, and the Nelder-Mead algorithm is used to improve its optimization capability. The improved HHO algorithm is used to optimize the network parameters of the RBF neural network and enhance the prediction performance of the model. The machining errors of a free-form surface were predicted, and the results were compared with those of other prediction models. The IHHO-RBF neural network model exhibited high prediction accuracy and strong stability, leading to significant improvements in the prediction results.

Funding

This work was financially supported by the National Natural Science Foundation of China (51565006), the Natural Science Foundation of Guangxi Province (2025GXNSFHA069171), the Science Research Innovation Team Project of Guangxi Provincial Education Department, and the Science Research Innovation Team Project of Guangxi University of Science and Technology.

Conflicts of interest

There are no conflicts of interest.

Data availability statement

The research data associated with this article are included in the article.

Author contribution statement

Yueping Chen: received the Ph.D. degree in mechanical engineering from Guangdong University of Technology, Guangzhou, China, in 2012. He is currently a Professor with Guangxi University of Science and Technology, Liuzhou, Guangxi, China. His research interests include precision engineering and machining error compensation.

Yapeng Liu: was born in 1997 in Henan, China. He is currently studying for a master's degree in engineering at Guangxi University of Science and Technology, Liuzhou, Guangxi, China. His research interests are precision inspection technology, free-form surface inspection and machining error prediction.

References

  1. F. Shahriar, S.M.H. Hosseini, DMPPT control of photovoltaic systems under partial shading conditions based on optimized neural networks, Soft Comput. 28, 4987–5014 (2023)
  2. L. Wang, Q. Fang, L. Gao, Y. Sun, H. Cao, Research on load excitation identification method of multi-connected air conditioning compressor based on RBF network with multi-strategy fusion SSA, Int. J. Mach. Learn. Cybernet. 15, 5185–5198 (2024)
  3. Z. Gao, W. Yi, Prediction of projectile interception point and interception time based on Harris hawk optimization-convolutional neural network-support vector regression algorithm, Mathematics 13, 338 (2025)
  4. A. Qosja, D. Georges, L. Nikolla, A. Cela, A novel approach to electricity demand forecasting: an optimized Kalman filter-based RBF model, Int. J. Dyn. Control 13, 165 (2025)
  5. N. Kanchev, N. Stoyanov, G. Milushev, Prediction of the natural gas compressibility factor by using MLP and RBF artificial neural networks, Meas. Sci. Rev. 25, 1–9 (2025)
  6. M. Nurhayati, K. Jeong, S. Kim, J. Park, H.K. Cho, K.H. Shon, S. Lee, From comparison to integration: enhancing forward osmosis performance prediction with mathematical and RBF neural network models, Desalination 597, 118322 (2025)
  7. F. Sun, C. Gong, Z. Lyu, Grain storage temperature prediction based on chaos and enhanced RBF neural network, Sci. Rep. 14, 24015 (2024)
  8. H. Bao, H. Zhu, D. Liu, Improved SSA-RBF neural network-based dynamic 3-D trajectory tracking model predictive control of autonomous underwater vehicles with external disturbances, Optim. Control Appl. Methods 45, 138–162 (2023)
  9. H. Jin, M. Wang, H. Xiang, X. Liu, C. Wang, D. Fu, A PSO-RBF prediction method on flow corrosion of heat exchanger using the industrial operations data, Process Safety Environ. Protect. 183, 11–23 (2024)
  10. J. Yang, H. Zeng, J. Huang, Grey neural network prediction model based on fruit fly optimisation algorithm and its application, Int. J. Informat. Commun. Technol. 12, 98–112 (2018)
  11. J. Dang, B. Wu, Q. Cui, Z. Zhang, Y. Zhang, A machining error prediction approach without iteration for thin-walled part in flank milling, J. Manuf. Process. 124, 399–418 (2024)
  12. I. Iglesias, S. Lite, G. Gaya, Silva, A flatness error prediction model in face milling operations using 6-DOF robotic arms, J. Manuf. Mater. Process. 9, 66 (2025)
  13. J. Zhou, S. Qian, T. Han, R. Zhang, J. Ren, Error prediction for machining thin-walled blade with Kriging model, Results Eng. 26, 2–10 (2025)
  14. W. Li, J. Ren, K. Shi, Y. Lu, J. Zhou, H. Zheng, A method for predicting machining error of thin-walled part considering the dynamic response of elastic deformation, Int. J. Adv. Manuf. Technol. 137, 1–13 (2025)
  15. Y. Han, X. Deng, J. Zheng, X. Lin, X. Wang, Y. Chen, Thermal error prediction for vertical machining centers using decision-level fusion of multi-source heterogeneous information, Machines 12, 3–12 (2024)
  16. Y. Yu, M. Shi, H. Ding, X. Zhang, Prediction of thin-walled workpiece machining error: a transfer learning approach, J. Intell. Manuf. 36, 1–25 (2024)
  17. Z. Li, J. Fan, P. Pan, K. Sun, R. Yu, A study on machining error prediction model of precision vertical grinding machine based on the tolerance of key components, Int. J. Adv. Manuf. Technol. 131, 4515–4528 (2024)
  18. B. Denkena, M. Wichmann, M. Rokicki, L. Sturenburg, Active learning for the prediction of shape errors in milling, Procedia CIRP 126, 324–329 (2024)
  19. H. Sun, S. Zhao, F. Peng, R. Yan, L. Zhou, T. Zhang, C. Zhang, In-situ prediction of machining errors of thin-walled parts: an engineering knowledge based sparse Bayesian learning approach, J. Intell. Manuf. 35, 387–411 (2024)
  20. S. Liu, A. Shukri, B. Adib, R. Svetan, Machining error prediction scheme aided smart fixture development in machining of a Ti6Al4V slender part, Proc. Inst. Mech. Eng. 237, 1509–1517 (2023)

Cite this article as: Yueping Chen, Yapeng Liu, Application of RBF neural network based on improved Harris eagle algorithm optimization for free-form surface machining error prediction, Int. J. Metrol. Qual. Eng. 16, 8 (2025), https://doi.org/10.1051/ijmqe/2025008
