Perpendicularity assessment and uncertainty estimation using coordinate measuring machine

The validation of the conformity of parts according to the ISO/IEC Guide 98-4 standard cannot be achieved without an accurate estimation of the measurement uncertainty, which can become difficult when a complex measurement strategy is used to control a geometrical specification of a mechanical part with a Coordinate Measuring Machine (CMM). The purpose of this paper is to analyze the measurement strategy following the Geometrical Product Specification (GPS) standards and to estimate the uncertainty associated with the parameters of each step, in order to obtain the uncertainty of the measurement of a given specification (perpendicularity error in our study) using the Guide to the Expression of Uncertainty in Measurement (GUM). This uncertainty is thereafter validated by a Monte Carlo simulation, and an interlaboratory comparison is conducted to compare the obtained results according to the ISO 13528 standard. Our contribution is based on a more accurate estimation of the uncertainties of the measurement strategy's parameters. This approach can also be used by accredited calibration laboratories (ISO 17025), or more generally in the control of perpendicularity specifications of mechanical parts using a coordinate measuring machine. A case study was conducted, controlling a perpendicularity specification with a tolerance limit of 15 μm, after calibration of the CMM to obtain the variance-covariance matrices. The mechanical part's perpendicularity error (12.55 μm) was below the limit, but the part was judged "not conform" when the estimated uncertainty (4.06 μm) was considered; the interlaboratory comparison was satisfactory despite the difference in acceptance criterion.


Introduction
Coordinate measuring machines (CMMs) are very popular in the industrial field; they allow controlling dimensional and geometrical specifications of complex mechanical parts with an accuracy and precision better than 1 μm. Hardware and software work simultaneously to collect and process data and to generate measurement reports, hence the importance of estimating the uncertainty associated with the measurement. Equipped with a probing system, and following a specific measurement strategy, the CMM collects the coordinates of the toleranced features, then fits surfaces according to a given criterion (the least-squares method in our case), and finally verifies a dimensional or geometrical specification. This succession of steps is subject to a propagation of uncertainties which, if not estimated correctly, can lead to aberrant decisions.
Evaluating the CMM's measurement-associated uncertainty is a challenging task, especially when examining geometrical error specifications, mainly due to the large number of factors that influence the measurement (Fig. 1).
Several studies have estimated the influence of these parameters on the CMM's measurement uncertainty: geometric errors of up to 5.63 arcsec along the Y axis for a Zeiss Opton CMM with a maximum permissible error of 1.3 μm + L/350 [1]; measuring probe errors estimated at around ±0.9 μm for a Renishaw TP2 probe head [2,3]; thermal influence errors [4,5], which should be reduced by regulating the temperature homogeneously at 20 ± 1 °C, with a variation of less than 0.5 °C per hour and less than 0.5 °C/m of height; measurement strategy and fitting criterion [6], whose influence is shown to be minimal when probing every 1/10 of the dimension of the surface feature; and position, size and shape for point cloud data [7,8]. Rosenda et al. [9] proposed a simplified model, considering these parameters, to estimate the circularity and cylindricity measurement uncertainty using a coordinate measuring machine. Other studies have been oriented toward estimating the uncertainty of geometrical specifications. Wojciech et al. [10,11] developed models for size, distance, angle, and geometrical deviation measurement uncertainty, including perpendicularity, fully consistent with the GPS standard [12]. Our contribution is positioned in this context: we seek to estimate the uncertainty of an orientation error following a measurement strategy that respects the normative guidelines.
The GUM and the Monte Carlo method are generally used to estimate measurement uncertainty. Balasubramanian et al. [13] estimated the uncertainty in angle measurement using the GUM, considering the geometrical errors, temperature, vibrations, and measuring strategy. Moona et al. [14] developed a model using the Monte Carlo method to estimate the uncertainty of length measurement errors using an articulated-arm coordinate measuring machine. Comparing the GUM [15] approach with a Monte Carlo simulation [16] as a validation method has proven to give consistent results; it is within this framework that Jalid [17,18] compared these two methods for estimating flatness uncertainty, with satisfactory results (gap less than 10⁻⁴ mm), and then studied the influence of the sample size.
In this paper, we aim to review the process of validating the conformity of mechanical parts inspected using a CMM, by introducing and considering the measurement uncertainty as stated in the ISO/IEC Guide 98-4 [19]. Our model combines experimental and analytical methods to estimate the measurement-associated uncertainty. The advantage of this approach is that the perpendicularity uncertainty can be estimated directly from the set of measured points and the calibration of the CMM. It is also important to mention that the uncertainty varies with the number and position of the measured points and the chosen fitting criterion. To estimate the measurement-associated uncertainty, the process was deconstructed by identifying the different steps of the measurement strategy following the ISO 1101 [12] standard (GPS), and by estimating the variance-covariance matrices at each step considering the parameters that influence the results, so as to obtain the final uncertainty of the measurement. This uncertainty is thereafter validated by a Monte Carlo simulation, before finally proceeding to an inter-laboratory comparison of the obtained results. Our contribution is based on a more accurate estimation of the uncertainties of the measurement strategy's parameters. This approach can be used by ISO 17025 [20] laboratories in the control of perpendicularity specifications of mechanical parts using a CMM.

Materials and methods
To validate the conformity of a mechanical part using a coordinate measuring machine according to the ISO/IEC Guide 98-4 standard, an estimation of the measurement-associated uncertainty is necessary, which can be particularly problematic considering the measurement strategy, mainly due to the number of unknown parameters that can influence the measurement. To do so, we studied a perpendicularity case following this approach:
- Setting a perpendicularity error equation according to the ISO 1101 standard (GPS).
- Estimating the uncertainty associated with the perpendicularity error.
- Validating the proposed method.
- Declaring conformity according to the ISO/IEC Guide 98-4 standard.
A verification of our results through an inter-laboratory comparison according to the ISO 13528 standard is presented in the Results and discussion section.

Perpendicularity error modeling
Based on the Geometrical Product Specification (ISO 1101 standard) [12], perpendicularity is an orientation tolerance; it can be defined as the minimum distance between two theoretical parallel elements, both perpendicular to the datum, within which all measured points lie, whether the toleranced feature is a plane or an axis (Fig. 2). We studied a plane-to-plane perpendicularity case, with the geometrical specification summarized as follows:
- Toleranced feature: probed points that belong to the toleranced surface.
- Datum: theoretical fitted plane P0.
- Tolerance zone: volume between two theoretical parallel planes P1 and P2, both perpendicular to the datum.
- Condition: all probed points must lie inside the tolerance zone.

Measurement strategy
The measurement strategy using a CMM should be carefully planned and executed to achieve the desired level of accuracy and to comply with the GPS standards. In order to measure the perpendicularity error, we applied the following strategy (Fig. 3): probing the datum surface, then fitting a theoretical plane using the least-squares method (the number of measured points and the fitting criterion were chosen to best represent the surface [6]), then extracting the normal vector n⃗_d of the datum's associated plane; probing the toleranced element using the same method and extracting the coordinates of the measured points as well as the normal vector n⃗_t of the tolerance surface's associated plane. It is important to mention that n⃗_d and n⃗_t are not necessarily perfectly perpendicular, hence the need to calculate the vector n⃗_p.
Calculating the intersection vector n⃗_e of the datum and tolerance planes, in order to compute the vector n⃗_p = n⃗_d × n⃗_e; then finding the two most distant measured points p_max and p_min along n⃗_p, which allows us to set the planes P1(p_max, n⃗_p) and P2(p_min, n⃗_p), and to deduce the perpendicularity error (Fig. 4).
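The strategy above can be sketched in code. The authors implemented their pipeline in Matlab; the following Python sketch uses an SVD-based total least-squares plane fit in place of `nlinfit`, with hypothetical point clouds, and is an illustration of the strategy rather than the authors' implementation:

```python
import numpy as np

def fit_plane_lsq(points):
    """Least-squares plane fit: returns (centroid, unit normal).
    The normal is the right singular vector associated with the
    smallest singular value of the centered point cloud."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]

def perpendicularity_error(datum_pts, tol_pts):
    """Width of the tolerance zone along n_p = n_d x n_e, where n_e
    is the direction of the intersection line of the fitted planes."""
    _, n_d = fit_plane_lsq(datum_pts)   # datum plane normal
    _, n_t = fit_plane_lsq(tol_pts)     # tolerance plane normal
    n_e = np.cross(n_d, n_t)            # intersection direction
    n_e /= np.linalg.norm(n_e)
    n_p = np.cross(n_d, n_e)            # normal of planes P1 and P2
    n_p /= np.linalg.norm(n_p)
    proj = tol_pts @ n_p                # signed distances along n_p
    return proj.max() - proj.min()      # distance between P1 and P2

# Hypothetical example: datum in the z = 0 plane, toleranced surface
# tilted by 0.001 rad away from perfect perpendicularity (units: mm).
datum = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 0], [5, 5, 0]], float)
tol = np.array([[0.0, 0, 0], [0, 10, 0], [0.01, 0, 10], [0.01, 10, 10]], float)
dp = perpendicularity_error(datum, tol)   # 0.01 mm for this tilt
```

The extreme projections along n⃗_p play the role of p_max and p_min in the text; their difference is the perpendicularity error.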
This succession of steps is subject to a propagation of uncertainties and, if not estimated correctly, can lead to a false conformity declaration. According to equation (1), and following this measurement strategy, the main sources of the perpendicularity measurement uncertainty are the probed points, the datum's associated normal vector, and the intersection vector. In the following sections, we quantify the uncertainties associated with these parameters for each step of the measurement strategy. Forbes [21,22] conducted other studies on the estimation of the variance-covariance matrix of features from a finite set of points dispersed evenly over the sampled surface, allowing uncertainties to be estimated using the GUM method without knowing the measurement strategy and reducing the effect of form error, considering only the number of data points and the geometry of the sampled area.

Perpendicularity error
Let p_i be the coordinates of the i-th probed point, with p_max(x_max, y_max, z_max) ∈ P1 and p_min(x_min, y_min, z_min) ∈ P2 the two most distant measured points along n⃗_p. The perpendicularity error can then be expressed as:

dp = (p⃗_max − p⃗_min) · n⃗_p    (1)

where n⃗_p(n_px, n_py, n_pz) is the normal vector of the theoretical parallel planes P1 and P2, n⃗_d(n_dx, n_dy, n_dz) the datum plane's normal vector, and n⃗_e(n_ex, n_ey, n_ez) the vector of the intersection between the datum and tolerance surfaces (all unit vectors), with n⃗_p = n⃗_d × n⃗_e = (n_dy n_ez − n_dz n_ey, n_dz n_ex − n_dx n_ez, n_dx n_ey − n_dy n_ex). Hence the final expression of the perpendicularity error:

dp = (x_max − x_min)(n_dy n_ez − n_dz n_ey) + (y_max − y_min)(n_dz n_ex − n_dx n_ez) + (z_max − z_min)(n_dx n_ey − n_dy n_ex)    (2)

Estimation of the perpendicularity error associated uncertainty
In order to estimate the uncertainty associated with the perpendicularity error using the GUM uncertainty propagation model, we applied the following procedure:
- Applying the GUM method to the perpendicularity equation (1).
- Estimating the parameters and their associated variance-covariance matrices.
- Validating the GUM results through a Monte Carlo simulation.

GUM uncertainty propagation model
The GUM (Guide to the Expression of Uncertainty in Measurement [15]) variance propagation method is widely used in many fields, especially in metrology. It provides an analytic approach for quantifying and expressing measurement uncertainty, based on a first-order Taylor expansion of the model function (a linear approximation). To estimate the uncertainty of the perpendicularity error, the GUM method is applied to the perpendicularity model dp = f(n_dx, n_dy, n_dz, n_ex, n_ey, n_ez, x_pmin, y_pmin, z_pmin, x_pmax, y_pmax, z_pmax) of equation (2):

u_c²(dp) = J M Jᵀ    (3)

This matrix form allows us to implement the calculations in Matlab, where J is the Jacobian matrix of f with respect to the twelve input parameters:

J = [∂dp/∂n_dx, ∂dp/∂n_dy, …, ∂dp/∂z_pmax]    (4)

and M is the variance-covariance matrix of the inputs, block-diagonal when the parameter groups are assumed independent:

M = diag([ñ_d], [ñ_e], [p_min], [p_max])    (5)

where [ñ_d], [ñ_e], [p_min] and [p_max] are the variance-covariance matrices of the datum normal vector, the intersection vector, and the two extreme measured points, respectively.
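The propagation u_c²(y) = J M Jᵀ is generic: given any scalar model and a covariance matrix of its inputs, the Jacobian can be formed analytically or numerically. A minimal Python sketch with a finite-difference Jacobian (an assumption of ours; the paper derives J analytically):

```python
import numpy as np

def gum_uncertainty(f, x0, M, h=1e-7):
    """First-order GUM propagation: u_c^2(y) = J M J^T, with the
    Jacobian J of the scalar model f estimated by central finite
    differences at the best-estimate input vector x0."""
    x0 = np.asarray(x0, float)
    J = np.empty(len(x0))
    for k in range(len(x0)):
        e = np.zeros_like(x0)
        e[k] = h
        J[k] = (f(x0 + e) - f(x0 - e)) / (2 * h)
    return float(np.sqrt(J @ M @ J))

# Sanity check on a model where the GUM result is exact (y = x1 + x2,
# independent inputs with standard uncertainties 3 and 4 -> u_c = 5):
u = gum_uncertainty(lambda x: x[0] + x[1], [1.0, 2.0], np.diag([9.0, 16.0]))
```

For the perpendicularity model, `x0` would hold the twelve parameters of equation (2) and `M` the block-diagonal matrix of equation (5).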

Estimation of the parameters associated variance-covariance matrix
Coordinate measuring machines are precise and accurate. However, various factors influence the measurement uncertainty (Fig. 1), and it is very difficult to quantify the influence of each of these parameters independently of the others. Several approaches exist; Bahassou et al. [23,24] proposed an estimation of the variances according to the ISO 10360 standard [25]. We assume that the errors along the axes are independent and linear. We therefore measured 5 gauge blocks, with 3 repetitions each, along 3 of the 7 directions (Fig. 5), then calculated the error equation along each direction (for example E_x = A_x x + B_x along the X direction), before applying the law of propagation of uncertainties to estimate the associated uncertainty u_x.
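The linear error model and its propagated uncertainty can be fitted from the gauge-block readings. A Python sketch with purely illustrative data (the nominal lengths and error values below are hypothetical, not the paper's calibration results):

```python
import numpy as np

# Hypothetical calibration data: 5 gauge blocks (nominal lengths, mm),
# 3 repeated CMM readings each along the X axis; errors in micrometres.
nominal = np.repeat([10.0, 30.0, 60.0, 100.0, 150.0], 3)
error = np.array([0.3, 0.4, 0.2,  0.8, 0.9, 0.7,  1.5, 1.6, 1.4,
                  2.4, 2.5, 2.3,  3.6, 3.7, 3.5])

# Linear error model E_x = A_x * x + B_x fitted by least squares;
# the fit covariance gives the uncertainties of A_x and B_x.
(A_x, B_x), cov = np.polyfit(nominal, error, 1, cov=True)
u_A, u_B = np.sqrt(np.diag(cov))

def u_x(x):
    """Law of propagation applied to E_x = A_x * x + B_x, including
    the covariance between slope and intercept."""
    return np.sqrt((x * u_A) ** 2 + u_B ** 2 + 2 * x * cov[0, 1])
```

The same fit is repeated along the other measured directions to populate the diagonal point matrix [P_i] of equation (6).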
Surface fitting is a critical step. To estimate the variance-covariance matrix associated with the datum normal vector, we must first select a mathematical model to associate the set of probed points with an ideal plane representing the measured surface, without overfitting or underfitting. This may be done using a variety of techniques, such as polynomial fitting, radial basis functions, and splines, each with advantages and disadvantages. Several surface-fitting criteria [6] are commonly used and comply with the standards, among which:
- The least-squares method, which minimizes the sum of squared residuals: min Σ_i e_i², where e_i = A⃗p_i · n⃗, (A, n⃗) are the substitute plane parameters and p_i the measured points.
- The Chebyshev criterion, which minimizes the maximum absolute distance from the data points to the fitted surface: min max_i |e_i| (Fig. 6).
The least-squares method tends to be more sensitive to outliers in the data because it squares the errors: large errors have a more significant impact, which provides a good overall fit but does not guarantee the smallest maximum error across the entire data range. The Chebyshev criterion is less sensitive to outliers because it focuses on the maximum absolute error, providing a more accurate fit in terms of the worst-case scenario but potentially sacrificing the overall fit. The choice between these methods depends on the specific requirements of the problem, the characteristics of the data, and the desired trade-off between overall fit and worst-case accuracy. For the rest of this study, we use the least-squares method, minimizing equation (7). To solve this equation, we used the "nlinfit" function in Matlab, which requires starting parameters (A_0, n⃗_0). To achieve a stable result and avoid local solutions, we chose as A_0 the center of mass of the measured points, and an initial normal vector n⃗_0 = (a⃗b × a⃗c)/|a⃗b × a⃗c| based on the three most mutually distant probed points a, b and c.

Once the associated plane (A, n⃗) is estimated, we proceed to the estimation of the variance-covariance matrix [ñ_d] associated with the normal vector. The purpose of this matrix is to highlight the influence of the measurement strategy parameters: the chosen fitting criterion (LSM) as well as the number and distribution of the probed points, based on the principle that the greater the number of probed points and the larger their coverage, the closer we converge to the normal that best represents the real surface, assuming that it follows a normal distribution. We do not consider here the influence of the uncertainty associated with the probed points, as it is already taken into account in the matrix [P_i] above (Eq. (6)). The feature is measured for N repetitions; we then estimate the normal vector for each sample using the least-squares algorithm described above, before calculating their variance-covariance.
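The repetition-based covariance estimate can be sketched as follows. This is a Python illustration with simulated probings (plane size, point count, and the 0.5 μm noise level are hypothetical values of ours), using an SVD total least-squares fit rather than the authors' `nlinfit` call:

```python
import numpy as np

rng = np.random.default_rng(1)

def plane_normal(points):
    """Least-squares (total) plane fit via SVD; returns the unit normal
    with a fixed sign so that repeated normals can be averaged."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]
    return n if n[2] >= 0 else -n

# Simulate N repeated probings of the same nominal plane z = 0 over a
# 50 mm x 50 mm area, with 0.5 um probing/form noise (illustrative).
N, n_pts = 30, 12
normals = np.empty((N, 3))
for k in range(N):
    xy = rng.uniform(0, 50, size=(n_pts, 2))      # probing pattern, mm
    z = rng.normal(0, 0.0005, size=n_pts)         # 0.5 um noise, mm
    normals[k] = plane_normal(np.column_stack([xy, z]))

# Variance-covariance matrix of the fitted normal over the repetitions:
cov_nd = np.cov(normals, rowvar=False)
```

The same procedure applied to the toleranced surface yields [ñ_t]; the spread of `normals` reflects both the probing noise and the chosen point distribution, as the text describes.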
Following this procedure, we developed a Matlab algorithm that starts from a set of data points, performs the fitting process using the least-squares method, and returns the fitted datum plane parameters (A, n⃗_d) together with the variance-covariance matrix associated with the measurement strategy. Regarding the intersection vector n⃗_e, representing the direction of the intersection between the datum and tolerance planes, it is expressed as:

n⃗_e = (n⃗_d × n⃗_t) / (|n⃗_d| |n⃗_t| sin(n⃗_d, n⃗_t))

We assume that sin(n⃗_d, n⃗_t) ≃ 1, since the two planes are close to perpendicular, and we apply the law of propagation of uncertainties to each component of the vector, u_c²(n_ex), and similarly for u_c²(n_ey) and u_c²(n_ez), to reach the final form of the variance-covariance matrix [ñ_e]. Once the variance-covariance matrix of each parameter is estimated, respectively [p_min], [p_max], [ñ_d] and [ñ_e], we obtain the final form of the matrix [M] (Eq. (5)).
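With sin(n⃗_d, n⃗_t) ≃ 1 the intersection vector reduces to the cross product, and its covariance can be propagated with the cross product's exact Jacobians. A Python sketch (our formulation; the paper propagates term by term):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def cross_cov(n_d, cov_d, n_t, cov_t):
    """Covariance of n_e = n_d x n_t, treating the two normals as
    independent and dropping the normalisation factor, i.e. taking
    sin(n_d, n_t) ~ 1 for near-perpendicular planes as in the text."""
    J_d = -skew(n_t)   # d(n_d x n_t) / d(n_d)
    J_t = skew(n_d)    # d(n_d x n_t) / d(n_t)
    return J_d @ cov_d @ J_d.T + J_t @ cov_t @ J_t.T

# Example with perfectly perpendicular unit normals and isotropic input
# covariances (variances 1e-6 and 4e-6, illustrative values):
n_d = np.array([0.0, 0.0, 1.0])
n_t = np.array([1.0, 0.0, 0.0])
cov_ne = cross_cov(n_d, 1e-6 * np.eye(3), n_t, 4e-6 * np.eye(3))
```

For isotropic inputs this reduces to σ_d²(I − n⃗_t n⃗_tᵀ) + σ_t²(I − n⃗_d n⃗_dᵀ), which the example's diagonal reproduces.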

Monte Carlo simulation
The estimation of measurement uncertainty using a Monte Carlo simulation [16] is a useful alternative, especially when other methods present difficulties such as an inadequate linearization of the model resulting in unrealistic confidence intervals. It is a statistical propagation of distributions that uses random sampling through a mathematical model to determine the range of possible outcomes, allowing us to estimate the model's uncertainty. A Monte Carlo simulation can also be used to compare and validate the GUM results, following the procedure below.
First, the limits of the confidence interval dp_low^GUM and dp_high^GUM resulting from the application of the GUM method are calculated, where dp represents the nominal value of the perpendicularity error and U(dp) its associated expanded uncertainty:

dp_low^GUM = dp − U(dp),  dp_high^GUM = dp + U(dp)

A Monte Carlo simulation is then run, and the mean value and standard deviation s of the generated perpendicularity error distribution are extracted in order to calculate dp_low^MCS and dp_high^MCS, the limits of the 95.45 % confidence interval (dp ± 2s). The GUM and Monte Carlo confidence interval limits are then compared:

d_low = |dp_low^GUM − dp_low^MCS|,  d_high = |dp_high^GUM − dp_high^MCS|

Finally, the numerical tolerance z = 0.5 × 10⁻ʳ is set, where r expresses the necessary number of accurate decimal digits. If the condition z ≥ max(d_low, d_high) is verified, the comparison is favorable, meaning that the GUM framework has been validated in this instance.
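The validation check above takes a few lines. A Python sketch, exercised on a toy linear model (the sample values are illustrative, chosen to echo the paper's case study, not its actual simulation output):

```python
import numpy as np

rng = np.random.default_rng(0)

def validate_gum(dp, U_dp, samples, r=3):
    """GUM-S1 style check: compare the GUM 95.45% interval (dp +/- U)
    with the Monte Carlo interval (mean +/- 2s) at the numerical
    tolerance z = 0.5 * 10**-r."""
    lo_gum, hi_gum = dp - U_dp, dp + U_dp
    m, s = samples.mean(), samples.std(ddof=1)
    lo_mcs, hi_mcs = m - 2 * s, m + 2 * s
    d_low, d_high = abs(lo_gum - lo_mcs), abs(hi_gum - hi_mcs)
    z = 0.5 * 10.0 ** (-r)
    return max(d_low, d_high) <= z

# Toy check on a model where GUM is exact: dp = 12.55e-3 mm with
# standard uncertainty 2.03e-3 mm (so U = 2u with k = 2).
samples = rng.normal(12.55e-3, 2.03e-3, 10**5)
ok = validate_gum(12.55e-3, 2 * 2.03e-3, samples, r=3)
```

For a genuinely linear model the two intervals agree to sampling noise, so the check passes; a failure would indicate that the first-order GUM approximation is inadequate.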

Declaration of the conformity
The conformity assessment is a critical step: it decides whether the mechanical part conforms to the given specification. If the measurement uncertainty is not considered, this can lead to aberrant decisions, especially when the measurement result is close to the specification limit. CMMs can automatically generate a conformity report based on the specification tolerance interval: taking perpendicularity as an example, the CMM simply checks whether the measured error lies within the tolerance interval.
If we consider the uncertainty, two forms of incorrect decisions can appear inside the uncertainty zone: false acceptance, i.e. validating a non-conforming part, known as the consumer's risk; and false rejection, i.e. rejecting a conforming part, known as the producer's risk (Type II (β) and Type I (α) errors, respectively). The decision-making process was significantly impacted by the development of a probabilistic approach introducing measurement uncertainty as a conformity parameter (Fig. 8).
To establish a conformity validation procedure associated with the measured dimensional or geometrical specification, the risk zone must be calculated, assuming that the uncertainty follows a normal distribution. According to the ISO/IEC Guide 98-4 [19], if the tolerated risk limit is not specified by the customer, the risk p_a = P{dp > T_s} = 1 − Φ(z_i) should not exceed 2.3 %, where z_i is the coefficient of the standard normal distribution: z_i = (T_s − dp)/(U/2).
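The risk computation needs only the standard normal CDF. A Python sketch, evaluated with the case-study values reported later in the paper:

```python
import math

def consumer_risk(dp, U, T_s):
    """Risk that the true error exceeds the tolerance limit T_s,
    assuming the measurand follows N(dp, (U/2)^2):
    p_a = 1 - Phi(z_i) with z_i = (T_s - dp) / (U / 2)."""
    z_i = (T_s - dp) / (U / 2)
    return 0.5 * math.erfc(z_i / math.sqrt(2))   # 1 - Phi(z_i)

# Case-study values: dp = 12.55 um, U = 4.06 um, T_s = 15 um.
p_a = consumer_risk(12.55, 4.06, 15.0)   # about 0.113, above the 2.3 % limit
```

With these inputs z_i ≈ 1.21 and p_a ≈ 11.3 %, matching the "not conform" verdict discussed in the results.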

Results and discussion
This experimental study brings the previously developed theoretical model into practice. The tests were carried out in the PCMT metrology laboratory, where the temperature is regulated at 20 ± 2 °C. The coordinate measuring machine used is a Mitutoyo Euro-C 544 coupled to a TP2 probing head on which is mounted a tungsten carbide stylus of effective working length EWL = 14 mm with a D = 2 mm ruby ball, driven by the Geopak software. The maximum permissible error is E_L,MPE = ±(4 μm + L/200), with L in mm. The geometrical specification studied is a perpendicularity error with a tolerance limit of 15 μm. We started by estimating the variance-covariance matrix associated with this CMM's measured points, by applying the GUM method to the error equations along the ISO 10360 directions [23,24]. Most researchers use uncertainties based on the MPE; the main purpose of the proposed variance matrix is to make good use of the ISO 10360 calibration results of the CMM, generating a correction matrix and a plausible variance matrix consistent with the MPE statement (Fig. 9).
To control the mechanical part, we followed the steps described in Section 2.2: we probed the reference plane, then the specified plane. It is important to note that, in order to minimize the probing error, the probe must be oriented along the normal vector while measuring all the data points (Tab. 1).
After extracting the cloud of points, we proceeded to the construction of the required vectors, as shown in Table 2. It is important to mention that the problem with the ISO 1101 definition of perpendicularity is that the uncertainty associated with the measurement is directly related to the measurement strategy and the form error, which influence the perpendicularity error considerably.

GUM application
The uncertainty associated with the perpendicularity error is obtained by propagating the parameters' uncertainties across the measurement strategy process through a linear approximation. We started by estimating the variance-covariance matrix associated with the datum normal vector, which evaluates the influence of the number of probed points and their distribution, obtained by randomizing the measured points to generate the different possible combinations (Tab. 3). This allows us to set the variance-covariance matrix associated with the datum normal vector. However, it is important to mention that the probing error influences this matrix, as it is based on a repeatability model. An interesting alternative approach for estimating the variance-covariance matrix of the normal vector would be, for each set of N probed points {p_1, p_2, …, p_N}, to proceed to a Monte Carlo randomization of the measured points, generating the different possible combinations representing the same feature plane, with varied distributions and numbers of points n with 3 < n < N. We would then estimate the normal vector for each sample using a specific fitting criterion, before calculating their variance-covariance.
Similarly, we estimated [ñ_t], associated with the tolerance plane's normal vector, in order to assess the variance-covariance matrix of the intermediate vector representing the direction of the intersection of the two measured planes. The Jacobian matrix (Eq. (4)) was calculated with the simplifications of Section 2.3. Consequently, we can estimate the uncertainty associated with the perpendicularity error using the GUM method developed in Section 2.2.
The uncertainty may seem relatively large compared to the error (U(dp)/dp ≃ 32 %), but this is mainly due to the low perpendicularity error compared to the capability of the CMM used.

Monte Carlo simulation
We used a comparison between the GUM results and a Monte Carlo simulation to validate the perpendicularity uncertainty estimation. The Monte Carlo method can cope with non-smooth input-output models and can be used to evaluate the uncertainty associated with the perpendicularity error. Assuming that the parameters follow normal distributions, the simulation was carried out in two stages: the first randomizes the cloud of probed points, with known mean values and standard deviations s = U/k extracted from their respective variance matrices with k = 2 as coverage factor, to determine the maximum and minimum points for each sample; the second randomizes the [ñ_d] and [ñ_e] vectors, in order to obtain the perpendicularity error output estimated from the parameters (n_dx, n_dy, n_dz, n_ex, n_ey, n_ez, x_pmin, y_pmin, z_pmin, x_pmax, y_pmax, z_pmax).
Figure 10 shows the distribution obtained when generating 10⁵ samples over 10³ classes of width 0.02 μm, from which we extract the results in Table 4. The numerical tolerance is z = 0.5 × 10⁻³ mm, and (d_low, d_high) represent the differences between the limits of the 95.45 % confidence interval (y_mean ± 2s_MCM) of the generated distribution and the GUM method results. The validation criterion max(d_low, d_high) ≤ z is verified, meaning that the comparison is favorable and that the GUM framework for estimating the perpendicularity uncertainty has been validated in this instance.

Conformity assessment
In order to control the perpendicularity specification (15 μm tolerance limit), we calculated the consumer risk p_a = P{dp > T_s} = 1 − Φ(z_i), where z_i = (T_s − dp)/(U/2) (Tab. 5). The risk p_a = 11.3 % is significantly higher than the 2.3 % limit specified by the ISO/IEC Guide 98-4. We can then conclude that the part is "not conform" to the perpendicularity specification. However, it is important to note that the conformity assessment could give different results when measuring the same part and estimating the uncertainty with the same model using a more precise CMM, hence the necessity of an inter-laboratory comparison.

Inter-laboratory comparison
Inter-laboratory comparison (ILC) is a procedure usually used to evaluate the accuracy and consistency of results obtained by different laboratories performing the same measurement or test on the same sample; it can also be used in our case to validate our perpendicularity assessment model. Although there are several evaluation techniques, the calculation of the normalized error is the most often used [26,27]:

E_n = |dp − dp_L| / √(U² + U_L²)

where dp_L and U_L are respectively the perpendicularity error and its associated uncertainty measured by the participant laboratory. The comparison is satisfactory if |E_n| ≤ 1.
The ILC was carried out with the Measurement Control Center (MCC) laboratory, where the temperature is regulated at 20 ± 2 °C. The coordinate measuring machine used is a Zeiss Duramax coupled to a VAST XXT TL3 probing head on which is mounted a tungsten carbide stylus of effective working length EWL = 14 mm with a D = 2 mm ruby ball, driven by the Calypso software. The same industrial part was controlled under the same conditions and following the same measurement strategy, resulting in a perpendicularity error dp_MCC = 11.9 μm, below the tolerance limit. The CMM-associated measurement uncertainty, estimated from the manufacturer calibration, is U_MCC = 3.3 μm. The normalized error is significantly below 1:

E_n = |0.0125 − 0.0119| / √(0.0040² + 0.0033²) = 0.115 < 1
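The normalized-error arithmetic above can be checked with a short sketch, using the values reported by the two laboratories (in mm):

```python
import math

def normalized_error(dp_a, U_a, dp_b, U_b):
    """E_n ratio used in ISO 13528 interlaboratory comparisons:
    |E_n| <= 1 means the two results agree within their expanded
    uncertainties."""
    return abs(dp_a - dp_b) / math.sqrt(U_a ** 2 + U_b ** 2)

# PCMT: dp = 0.0125 mm, U = 0.0040 mm; MCC: dp = 0.0119 mm, U = 0.0033 mm
E_n = normalized_error(0.0125, 0.0040, 0.0119, 0.0033)   # about 0.115
```

The small E_n value indicates that the 0.6 μm difference between the laboratories is well inside the combined uncertainty of about 5.2 μm.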
The ILC showed very satisfactory results (|E_n| ≪ 1); we can then conclude that our CMM is accurate and that our uncertainty estimation is suitable for perpendicularity measurement. Both laboratories judged the part "not conform" to the given specification; the MCC laboratory's judgment was based on the acceptance criterion dp_L + U_L < T_s, which in this case did not alter the decision.

Conclusion
This article presents a different approach for the perpendicularity conformity validation of mechanical parts using a coordinate measuring machine, by estimating the measurement uncertainty and including it in the assessment as stated in the ISO/IEC Guide 98-4 standard. The main purpose is to provide the perpendicularity error, its associated uncertainty, and the conformity risk, directly from the set of data points.
In order to evaluate the perpendicularity error, a measuring strategy was set according to the ISO 1101 specifications, then the mathematical error model was developed (Eq. (2)). To estimate its associated uncertainty, the process was deconstructed and the GUM propagation of uncertainties was applied, then put together in matrix form (Eq. (3)). The uncertainty variance-covariance matrices were then estimated in Section 2.3, highlighting the influence of the measurement strategy parameters: the chosen fitting criterion as well as the distribution and number of measured points. A Monte Carlo simulation was then used to compare and validate the uncertainty estimation and showed consistent results (gap less than 10⁻⁴ mm), which validates our developed model. The uncertainty may seem relatively large compared to the error (U(dp)/dp ≃ 32 %), but this is mainly due to the low perpendicularity error compared to the capability of the CMMs used.
The interlaboratory comparison was satisfactory: the normalized error confirms the concordance between the perpendicularity errors and their associated uncertainties measured by the two laboratories. Moreover, despite the difference in acceptance criterion, the conformity assessment was the same.

Table 3 .
Sample of datum plane normal vectors for 10 repetitions (in mm).

Table 1 .
Tolerance plane measured points (in mm).

Table 2 .
Construction of the vectors (in mm).