Open Access

Int. J. Metrol. Qual. Eng., Volume 14, 2023
Article Number 16, 9 pages
DOI: https://doi.org/10.1051/ijmqe/2023013
Published online 22 December 2023

© M. Zhang et al., Published by EDP Sciences, 2023

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

With the rapid development of modern industrial technology, more and more industries require Non-Destructive Testing (NDT) for quality inspection. To meet the needs of different NDT applications, replacing manual inspection of workpieces with automated equipment offers a more advanced inspection method, reducing operator fatigue and improving repeatability [1]. Robots can be programmed to perform specific operations, replacing manual labour for heavy, complex and dangerous inspection tasks. Deploying NDT by robot offers high accuracy of movement, high efficiency and lower operating costs. As a result, robotic inspection has been a growing area of research in the manufacturing and remanufacturing industries in recent years [2].

Robotic ultrasonic inspection has been used to speed up the NDT of critical components, although such systems typically rely on precise digital twins to support path planning. Autonomous systems based on visual feedback have been developed to remove the requirement for a digital twin and automate the path planning process, leading to improved inspection speeds. These autonomous systems are based on robots that manipulate ultrasonic sensors along predefined or pre-computed toolpaths [3]. As a result, such robotic inspection systems typically lack a strategy for flexible and agile automation, particularly where visual systems do not provide the necessary feedback. There is an increasing need to automate dynamic robotic toolpath generation to enable efficient NDT of components where visual information is not available, such as bevel welds ground flush, where no visible weld cap is present.

The key to generating automated scanning paths for NDT robots lies in the automated movement of the ultrasound probe under different scenarios. A variety of control modes have been used in current research on industrial robotic NDT systems to automate the movement of ultrasonic probes, mainly based on path planning and marker-based guidance control methods [4,5]. Among them, path planning based on 3-dimensional scene reconstruction and marker-based object pose estimation are typical control methods for robotic ultrasound imaging. Based on this paradigm, previous robotic non-destructive systems use 3-dimensional cameras or other high-precision acquisition devices to reconstruct the scene surface and plan the robot motion path from the geometric properties of the target. One such system reported an 80% success rate in capturing ultrasonic pulse-echo data for the desired surface points on the sample, and an accuracy of 0.43 mm for the ultrasonic testing toolpath produced using a paraboloid spiral image acquisition approach [6]. Other studies have incorporated force sensors to regulate the contact force by adjusting the position of the ultrasound probe, with a reported 3D ultrasound reconstruction accuracy of 0.72 ± 0.24 mm [7]. However, these methods have certain limitations. Firstly, using conventional camera and laser feedback to locate the surface of the workpiece does not allow internal defects in the workpiece to be identified. Secondly, these control methods contain multiple, highly interrelated motions, and each motion must be carefully characterised to match the specific imaging task.

The main purpose of this paper is to consider the use of real-time ultrasonic inspection data feedback as a basis for dynamic NDT robot path planning. Based on the inspection data there are two main objectives: firstly, to analyse the ultrasonic NDT data to determine positional information of geometric features; and secondly, to use this information to derive a motion for the robotic system to take. The proposed novel dynamic path planning method overcomes the limitations of conventional vision systems, such as the need for complex and potentially inaccurate point cloud correction. In addition, the requirement for a priori knowledge of the surface (e.g., imposed primitives) is eliminated, while making full use of the available data to minimise the risk of in-process conflicts. The proposed approach enables real-time robot path correction that was previously unavailable in vision-enabled path planning regimes. The full path planning execution process is illustrated in Figure 1.

Fig. 1

Diagram of proposed automated robotic non-destructive testing.

2 Theoretical background

2.1 Ultrasonic weld inspection

The weld inspection deployed in this work used Full Matrix Capture (FMC) for ultrasonic data acquisition and the Total Focusing Method (TFM) as the image reconstruction algorithm.

FMC is an advanced data acquisition process that uses ultrasonic array transducers to capture the complete time domain signal from every possible transmitting and receiving element combination. Traditional weld inspection approaches have used single-element ultrasonic probes and phased array methods. FMC has been chosen for this work as it provides improved image quality and resolution over other methods and is commonly referred to as the ‘gold standard’ [8]. Additionally, FMC data is often processed with an image reconstruction algorithm allowing for computer vision methods to be exploited. The principle of FMC is shown in Figure 2.

Modern array ultrasound controllers generally have parallel, independent receiver channels, so the full matrix acquisition process can be described as follows. Assume the ultrasound array contains N elements, from which a matrix {Aij(t)} is populated, where Aij(t) is the amplitude response of the echo signal excited by the ith element and received by the jth element (1 ≤ i ≤ N, 1 ≤ j ≤ N). If elements 1 to N are excited in sequence and all elements receive in parallel, N × N sets of ultrasonic A-scan data are collected, known as the full matrix. The collected data are arranged such that row i contains the echo amplitudes transmitted by the ith element, while column j contains the echo amplitudes received by the jth element. The full matrix data structure is shown in equation (1).
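As a sketch of this acquisition loop, a minimal simulation is shown below. This is not the actual array controller interface; the `fire` callback standing in for the hardware is a hypothetical placeholder.

```python
import numpy as np

def acquire_fmc(fire, n_elements=64, n_samples=1000):
    """Populate the full matrix {Aij(t)}: element i transmits in
    sequence while all N elements receive in parallel."""
    fmc = np.zeros((n_elements, n_elements, n_samples))
    for i in range(n_elements):        # sequential excitation, 1..N
        # `fire(i)` is a hypothetical hardware call returning the N
        # parallel receive channels as an (N, n_samples) array
        fmc[i] = fire(i)
    return fmc

# Stand-in for the array controller: random echoes
rng = np.random.default_rng(0)
fmc = acquire_fmc(lambda i: rng.normal(size=(64, 1000)))
print(fmc.shape)  # (64, 64, 1000): the N x N set of A-scans
```

The first axis indexes the transmitter and the second the receiver, matching the row/column arrangement of equation (1).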

$$\mathrm{FMC}=\begin{bmatrix}
A_{1,1} & A_{1,2} & \cdots & A_{1,j} & \cdots & A_{1,N}\\
A_{2,1} & A_{2,2} & \cdots & A_{2,j} & \cdots & A_{2,N}\\
\vdots & \vdots & \ddots & \vdots & & \vdots\\
A_{i,1} & A_{i,2} & \cdots & A_{i,j} & \cdots & A_{i,N}\\
\vdots & \vdots & & \vdots & \ddots & \vdots\\
A_{N,1} & A_{N,2} & \cdots & A_{N,j} & \cdots & A_{N,N}
\end{bmatrix}\tag{1}$$

TFM uses the FMC data to produce an image within the component, allowing increased sensitivity to small defects through an image reconstruction algorithm [9]. The algorithm is based on the Synthetic Aperture Focusing Technique [10], extended specifically for FMC data. The inspection region with respect to the transducer is defined as the Region of Interest (ROI). Each pixel within this image represents a finite inspection resolution, with the x-axis being the direction of the transducer's active axis (i.e., the direction in which the elements are stacked), while the image y-axis refers to the depth within the component (commonly referred to as the z-axis during inspection). For each pixel and each transmit-receive combination, the arrival time along the respective A-scan is computed after consideration of the component geometry and velocity, referred to as the Time of Flight (ToF). This process is repeated for all pixels and all transmit-receive combinations, with the pixel intensity values being summed. The method is also referred to as the Delay and Sum (DAS) approach [11].

The TFM algorithm is expressed in equation (2) and supported by Figure 3, where i(x,z)_k is the transmit time of flight from element k and j(x,z)_n is the receive time of flight to element n of the array for a given pixel at position (x,z), and h is the Hilbert-transformed signal, allowing both the real and imaginary parts to be computed. For a TFM reconstruction this calculation is performed for every pixel within the image.

$$I(x,z)=\left|\sum_{k=1}^{K}\sum_{n=1}^{N} h_{k,n}\big(i(x,z)_k + j(x,z)_n\big)\right|\tag{2}$$
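Equation (2) maps directly onto a delay-and-sum loop. A minimal sketch follows, assuming a contact array over a homogeneous medium with straight-ray times of flight; the element positions, grid and sampling values are illustrative, not this system's configuration.

```python
import numpy as np
from scipy.signal import hilbert

def tfm(fmc, elem_x, grid_x, grid_z, c, fs):
    """Delay-and-sum reconstruction of equation (2): for each pixel,
    sample each analytic (Hilbert-transformed) A-scan at the transmit
    plus receive time of flight, sum, and take the magnitude."""
    n = fmc.shape[0]
    h = hilbert(fmc, axis=-1)                       # analytic signals
    X, Z = np.meshgrid(grid_x, grid_z, indexing="ij")
    # tof[k]: one-way straight-ray time from element k to each pixel
    tof = np.sqrt((X[None] - elem_x[:, None, None])**2 + Z[None]**2) / c
    img = np.zeros(X.shape, dtype=complex)
    for k in range(n):                              # transmitters
        for m in range(n):                          # receivers
            idx = np.round((tof[k] + tof[m]) * fs).astype(int)
            idx = np.clip(idx, 0, fmc.shape[-1] - 1)
            img += h[k, m][idx]
    return np.abs(img)

# Tiny synthetic run: 4 elements, impulse at sample 100 in every A-scan
fmc = np.zeros((4, 4, 500))
fmc[:, :, 100] = 1.0
img = tfm(fmc, np.linspace(-1e-3, 1e-3, 4),
          np.linspace(-2e-3, 2e-3, 8), np.linspace(1e-3, 5e-3, 8),
          c=6300.0, fs=25e6)
print(img.shape)  # (8, 8) pixel intensity map
```

A production implementation would vectorise or offload these loops (e.g. to a GPU, as in [10]) and account for wedge refraction and half/full-skip paths.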

Compared to traditional ultrasonic methods, the TFM algorithm is flexible and highly efficient. In addition, TFM improves the coverage of corner detection and increases sensitivity to small defects. The algorithm makes use of the valid data at each point for computational imaging and can obtain ultrasound images with high resolution and contrast.

The main visualisation modes (A-scan, B-scan, C-scan, D-scan) offered by ultrasonic NDT are shown in Figure 4 [12].

The A-scan shows a one-dimensional raw ultrasonic signal, i.e., the amount of ultrasound energy received versus time. Traditionally the B-scan provides a two-dimensional image, with the vertical axis representing time converted to distance and the horizontal axis representing successive A-scans in the sequence. The TFM reconstructed image is also referred to as a B-scan (a synthetically generated one). The C-scan is a two-dimensional top view of the test part. In the C-scan no front or back surface reflections are used; the reflection from the defect is the only projection. The 3-dimensional position of a defect in the weld can be determined from the C-scan and B-scan together. The D-scan shows a profile in which the acoustic beam plane is perpendicular to the scanned surface. In the D-scan view, the height of the defect and the length of the defect along the scan axis can be observed, allowing quick differentiation of defects in the weld.

Fig. 2

Full matrix data collection schematic.

Fig. 3

Total focusing method schematic.

Fig. 4

Visualisation of the basic pattern diagram of ultrasound test results.

2.2 Computer vision

Typically, when analysing an ultrasonically generated image, the operator makes fundamental decisions as to the classification of a defect based on its position, size, orientation, and amplitude. In many ways this decision is based on intuition and experience, which can be difficult to quantify. One of the objectives of this research is to lock onto a key geometric feature and track changes in its position over time. To support this, Computer Vision (CV) algorithms are explored within this paper, specifically the meanshift algorithm [13,14].

The intuition behind the meanshift algorithm is as follows: consider a collection of pixels within a window (typically a user-defined box). The goal is to move the window toward the mean pixel location. This is repeated for the next image, and the centre locations of the previous and newly computed windows are compared as a measure of how much, and in what direction, the user-selected feature has moved. This concept is shown in Figure 5 using images of a motorway by way of example.

The meanshift algorithm is applied here to target tracking. Tracking proceeds through a cycle of three operations: first, the target model is generated and updated; second, the target position is predicted; and finally, the matching criterion of the target model is calculated. The tracking target is initialised manually in the first frame as follows.

  • Consider a collection of pixels within a window (typically a user-defined box) covering the region where the object to be tracked is located; this defines the dimensions of the region and the kernel function window width h.

  • The probabilities of each feature value in the corresponding feature space are calculated for the selected region, giving the target description and model.

  • The description of the candidate template is calculated for each feature value in the feature space of the candidate target regions in subsequent frames.

  • The Bhattacharyya similarity coefficient is used as the similarity function to calculate the similarity between the candidate region in the current frame and the target model in the previous frame, finding the meanshift vector that maximises the similarity function for the purpose of tracking.

The starting point of this vector is the position of the target in the previous frame, and the ending point is the position of the target in the current frame. According to the convergence property of the meanshift algorithm, the vector eventually converges on the true position of the target through continuous cyclic computation. The pseudo-code of the meanshift algorithm used in this study is shown in Table 1 and a flowchart is given in Figure 6.
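The window-update loop at the core of this cycle can be sketched as follows. This is a minimal centroid-seeking version operating on a per-pixel weight map (e.g. a histogram back-projection); the window format and the convergence threshold `eps` are illustrative assumptions, not parameters from this work.

```python
import numpy as np

def meanshift_window(weights, window, n_iter=20, eps=0.5):
    """One meanshift tracking step: repeatedly recentre the window on
    the weighted centroid of `weights` (a per-pixel likelihood map)
    until the shift falls below eps or the iteration limit is hit."""
    x, y, w, h = window                  # (col, row, width, height)
    for _ in range(n_iter):
        roi = weights[y:y + h, x:x + w]
        total = roi.sum()
        if total == 0:
            break                        # no evidence inside the window
        ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
        dx = (xs * roi).sum() / total - (w - 1) / 2
        dy = (ys * roi).sum() / total - (h - 1) / 2
        if abs(dx) < eps and abs(dy) < eps:
            break                        # converged on the target
        x = max(0, min(int(round(x + dx)), weights.shape[1] - w))
        y = max(0, min(int(round(y + dy)), weights.shape[0] - h))
    return (x, y, w, h)

weights = np.zeros((80, 80))
weights[38:43, 48:53] = 1.0              # bright blob centred at row 40, col 50
win = meanshift_window(weights, (40, 30, 11, 11))
print(win)  # (45, 35, 11, 11): window centre now sits on the blob
```

The shift between successive window centres is exactly the meanshift vector described above; in practice OpenCV's `cv2.meanShift` provides an equivalent routine.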

Through the use of ultrasonically generated images, and computer vision, the next goal is to use this information to determine a direction the robotic dynamic path should take.

Fig. 5

Tracking example by using meanshift algorithm.

Table 1

Pseudo code of the mean shift algorithm.

Fig. 6

Meanshift algorithm schematic.

2.3 Robotic control

For the robot to adjust its path dynamically according to the meanshift coordinates during the inspection process, it is first necessary to convert the local coordinates of the end-effector to the world coordinate system by means of robot coordinate conversion. The B-scan image coordinate system is set by the system to be consistent with the world coordinate system. The transformation determines the coordinates of the weld feature point P in the robotic arm coordinate system O_B X_B Y_B Z_B as (x_P^B, y_P^B, z_P^B). In general, the coordinates of the weld feature in the ultrasonic coordinate system O_U X_U Y_U Z_U, (x_P^U, y_P^U, z_P^U), can be easily extracted from the B-scan data and transformed to the coordinates of the locating point in the probe rotating mechanism coordinate system O_R X_R Y_R Z_R, (x_P^R, y_P^R, z_P^R), with the transformation matrix given in equation (3):

$$\begin{bmatrix} x_P^R \\ y_P^R \\ z_P^R \\ 1 \end{bmatrix} = T_U^R \begin{bmatrix} x_P^U \\ y_P^U \\ z_P^U \\ 1 \end{bmatrix} = \begin{bmatrix} R_U^R & P_{O_U}^R \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_P^U \\ y_P^U \\ z_P^U \\ 1 \end{bmatrix}\tag{3}$$

where R_U^R is a 3 × 3 rotational transformation matrix and P_{O_U}^R is a 3 × 1 translational transformation matrix.
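Applying equation (3) amounts to a single homogeneous-matrix multiply. A minimal sketch is given below; the rotation and translation values are illustrative, not calibrated values from this system.

```python
import numpy as np

def to_rotation_frame(R, p_origin, p_u):
    """Apply equation (3): map a point from the ultrasonic frame O_U
    into the probe rotation frame O_R via the homogeneous matrix
    [[R, p], [0, 1]] built from a 3x3 rotation and 3x1 translation."""
    T = np.eye(4)
    T[:3, :3] = R          # R_U^R
    T[:3, 3] = p_origin    # P_{O_U}^R
    return (T @ np.append(p_u, 1.0))[:3]

# Example: 90-degree rotation about z plus a translation (made-up values)
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
p = to_rotation_frame(R, np.array([10.0, 0.0, 5.0]), np.array([1.0, 0.0, 0.0]))
print(p)  # [10.  1.  5.]
```

Chaining a second homogeneous transform of the same form takes the point on to the robot base frame O_B.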

The ultrasound probe scans the weld platform for geometric features, initially travelling in a straight line at a speed of 15 mm/s using the SpeedL command supported by the robotic platform (Universal Robots UR3e). Based on feedback from the internal algorithm tracking the weld root geometric feature in the B-scan, the motion is adjusted in the +x and −x directions while scanning along +y to complete the sweep of the weld seam on the welding platform. The coordinate systems of the robot and weld platform are shown in Figure 7.
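This correction can be sketched as a simple proportional rule on the tracked feature's lateral offset. The gain and deadband below are illustrative assumptions, not values reported in this work; the `speedl`-style velocity tuple follows the UR scripting convention.

```python
def correct_path(feature_x_mm, deadband_mm=0.5, vy_mm_s=15.0, gain=0.5):
    """Map the tracked weld-root offset in the B-scan image (mm from
    the image centre) to a lateral correction while the robot scans
    along +y at 15 mm/s; the returned tuple would feed a speedl-style
    velocity command. Gain and deadband values are illustrative."""
    vx = 0.0
    if abs(feature_x_mm) > deadband_mm:
        vx = -gain * feature_x_mm      # steer to recentre the feature
    return (vx, vy_mm_s, 0.0)          # (vx, vy, vz)

print(correct_path(0.2))   # (0.0, 15.0, 0.0): inside deadband, go straight
print(correct_path(4.0))   # (-2.0, 15.0, 0.0): correct in -x while scanning
```

Running this rule every frame yields the serrated (linear weld) and smooth sinusoidal trajectories observed in the experiments.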

Fig. 7

Coordinate system conversion diagram.

3 Experimental configuration

An experimental data acquisition and data processing system has been designed and assembled to verify the feasibility of enabling robots to perform autonomous weld scanning based on ultrasonic inspection data. The system architecture is illustrated in Figure 8. The system components are presented below.

  • A Universal Robots UR3e platform was deployed. The UR3e supports a payload of 3 kg (6.6 lbs) during automatic tasks, a reach radius of up to 500 mm (19.7 inches), a weight of 11.2 kg (24.7 lbs), a small footprint of just Ø128 mm, and 6-axis motion with 360-degree wrist-joint rotation and infinite end-joint rotation.

  • An Olympus 5L64-A2 transducer with a 0.6 mm pitch, containing a 64-element linear array with a 5 MHz central frequency. Data was sampled at a rate of 25 MHz with 16-bit amplitude resolution for 64 active elements. The probe was attached to a Rexolite wedge with a 36° incidence angle.

  • A data acquisition system controlled by an LTPA array controller manufactured by PeakNDT. The system contained separate transmit and receive lines per channel and facilitated the use of parallel and sequential transmission techniques. The data acquisition rate of this system via Ethernet was approximately 90 MB/s according to the manufacturer's specification.

  • A desktop PC running Windows 10 with two quad-core 3 GHz CPUs, 8 GB of RAM and an NVIDIA GeForce GTX 560 Ti graphics card with 384 CUDA cores and 1024 MB of GDDR5 memory, connected via Ethernet and communicating over the Transmission Control Protocol (TCP).

  • An experimental aluminium test plate specimen with dimensions of 445 × 545 × 50 mm. Two different "T" joint fillet welds are welded on the plate, following a linear and a sinusoidal shape respectively.

Image reconstruction from the acquired ultrasonic data was carried out using TFM at a resolution of 125 µm per pixel. TFM was selected over conventional UT and phased array UT for its greater lateral resolution, providing optimal image resolution for tracking geometric features. Acquisition and data processing were performed in real time, focused on a 30 mm × 30 mm area (horizontally −15 mm to +15 mm and vertically 20 mm to 50 mm) depicted in Figure 9. This allowed the volume of the weld, including the weld root and weld cap, to be imaged at a pixel density of 240 × 240 per image mode. The half-skip and full-skip image modes were volumetrically merged and used as the input image for the meanshift algorithm, providing a single image for computation.
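As a quick check, the 240 × 240 pixel density follows directly from the ROI extents and the 125 µm pixel pitch:

```python
# ROI: x from -15 mm to +15 mm, z from 20 mm to 50 mm, 125 um per pixel
res_mm = 0.125
nx = int(round((15.0 - (-15.0)) / res_mm))
nz = int(round((50.0 - 20.0) / res_mm))
print(nx, nz)  # 240 240 -> the 240 x 240 grid used per image mode
```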

The probe on the robot end-effector was manually placed on one side of the weld plate and adjusted to the optimum position based on the imaging results of the B-scan. The dynamic path planning was then initiated and monitored, allowing for collection of multiple B-Scans and the generation of a C-Scan view to assist in later interpretation.

Fig. 8

System diagram of the experimental equipment.

Fig. 9

Depiction of FMC data acquisition region of interest.

4 Experimental results

The reliability of the meanshift tracking algorithm used in this paper was experimentally verified on a calibration block with a thickness of 25 mm. The experiment evaluated the accuracy of the algorithm by comparing the actual distance moved by the ultrasonic probe with the distance moved by the tracking box on the ultrasonic image. The benchmark results demonstrate that the algorithm achieved a high level of accuracy, with an error within ±1 mm, as shown in Table 2. These findings indicate that the tracking algorithm can effectively and accurately identify and track defects in the sample during the ultrasonic inspection process. The accuracy is comparable to that achieved with an additional vision camera [6] or force measurement [7], but our method uses no sensors other than the ultrasound probe.

Initial experimental results demonstrated that the robot was able to adjust its path according to the position of the geometric feature within the B-scan image acquired by ultrasonic inspection, keeping the tracking window within the ultrasonic imaging area and enabling fine tuning of the robotic path in the −X and +X directions while travelling in the −Y direction.

The method has been shown to be feasible for both linear and sinusoidal shaped welds. The robot scan trajectory can be seen in the shape of the ultrasonic gel imprint left on the weld platform. The linear weld sweep trajectory forms a serrated shape within the ultrasonic gel, as shown in Figure 10. The sinusoidal weld scan trajectory results in a smooth sine shape within the gel, marked by red curves, again supporting the feasibility of the method. Close examination of the C-scan results gives a very high level of confidence for this fillet weld type.

The visualisation platform developed for the system is shown in Figure 11, where the different window areas show the different imaging results. The top left corner shows the B-scan imaging, which is used to adjust the robot starting position and to observe the robot's inspection process. In the upper right corner is the A-scan of the defect. The bottom window of Figure 11 shows a C-scan image, which visualises the horizontal position of the weld.

The C-scan imaging results are shown in more detail in Figure 12. The shape and size of the geometric feature can be clearly seen, indicating that the robot achieves a good geometric detection rate when automatically inspecting the weld with TFM imaging, something not possible for a human operator. In addition, when the scale line in the figure is adjusted, the defect image is essentially level with the horizontal scale line. This indicates that the image tracking algorithm can accurately manoeuvre the robot with respect to this response, and that the robot's autonomous sweep path has a high degree of accuracy.

The ability of the robot to automatically scan the weld seam and extract defect features provides a significant advantage to NDT.

Table 2

Comparison of actual and tracked probe movement distances on B-scan image.

Fig. 10

Automatic robot scanning of weld tracks.

Fig. 11

Robotic autonomous scanning visualisation platform interface.

Fig. 12

C-scan image.

5 Conclusions

This paper presents a novel method for real-time adjustment of the scan path of an NDT robot, which uses real-time data collected by ultrasonic array probes: after conversion between the B-scan 2-D image coordinates and the robot end-effector coordinates, the sweep path of the robot is adjusted in real time. The proposed method allows the operator to obtain highly accurate ultrasonic sweep data without the need for a digital twin, and without lengthy pre-scan and path planning steps when the weld distribution of the part is not known.

A potential limitation of the method is its dependence on the first B-scan image obtained, i.e., the ability to detect the location of the target feature and ensure it lies within the region of interest. This requires the operator to place the UT probe precisely at the start, so that the most comprehensive defect image possible is obtained and the resulting path is accurately placed within the robot work cell.

The method established in this paper for the autonomous scanning of weld seams by NDT robots is effective and accurate, and can provide a theoretical basis for subsequent NDT robots to achieve intelligent path planning for sweeping weld seams. The future plan is to upgrade the scanning method to include probe skew based on the ultrasonic response. This will improve the accuracy and reliability of the robot's automated weld seam inspection results.

Acknowledgement

This project was part of an initiative known as AEMRI (Advanced Engineering Material Research Institute), which is funded by the Welsh European Funding Office (WEFO) using European Regional Development Funds (ERDF).

References

  1. A. Poole, M. Sutcliffe, G. Pierce, A. Gachagan, A novel complete-surface-finding algorithm for online surface scanning with limited view sensors, Sensors (Basel) 21, 7692 (2021) [CrossRef] [Google Scholar]
  2. A. Caggiano, R. Teti, Digital factory technologies for robotic automation and enhanced manufacturing cell design, Cogent Eng. 5, 1426676 (2018) [CrossRef] [Google Scholar]
  3. V. Ivan, A. Garriga-Casanovas, P. Khalili, W. Merkt, F.B. Cegla, S. Vijayakumar, Autonomous non-destructive remote robotic inspection of offshore assets, in: Proceedings of the Offshore Technology Conference 2020; Offshore Technology Conference, May 4 2020 [Google Scholar]
  4. C. Mineo, S.G. Pierce, P.I. Nicholson, I. Cooper, Robotic path planning for non-destructive testing − a custom MATLAB toolbox approach, Robot. Comput.-Integr. Manuf. 37, 1–12 (2016) [Google Scholar]
  5. C. Mineo, M. Vasilev, B. Cowan, C.N. MacLeod, S.G. Pierce, C. Wong, E. Yang, R. Fuentes, E.J. Cross, Enabling robotic adaptive behaviour capabilities for new industry 4.0 automated quality inspection paradigms, Insight: J. British Inst. Non-Destr. Test. 62, 338–344 (2020) [Google Scholar]
  6. A. Khan, C. Mineo, G. Dobie, C. MacLeod, G. Pierce, Vision guided robotic inspection for parts in manufacturing and remanufacturing industry, J. Remanufacturing 11, 49–70 (2020) [Google Scholar]
  7. S. Chen, Z. Li, Y. Lin, F. Wang, Q. Cao, Automatic ultrasound scanning robotic system with optical waveguide-based force measurement, Int. J. CARS 16, 1015–1025 (2021) [CrossRef] [PubMed] [Google Scholar]
  8. C. Holmes, B. Drinkwater, P. Wilcox, Post-processing of the full matrix of ultrasonic transmit-receive array data for non-destructive evaluation, NDT E Int. 38, 701–711 (2005) [CrossRef] [Google Scholar]
  9. M. Weston, P. Mudge, C. Davis, A. Peyton, Time efficient auto-focussing algorithms for ultrasonic inspection of dual-layered media using full matrix capture, NDT E Int. 47, 43–50 (2012) [CrossRef] [Google Scholar]
  10. M. Sutcliffe, M. Weston, B. Dutton, P. Charlton, K.E. Donne, Real-time full matrix capture for ultrasonic non-destructive testing with acceleration of post-processing through graphic hardware, NDT E Int. 51, 16–23 (2012) [CrossRef] [Google Scholar]
  11. M. Khelil, J.-H. Thomas, L. Simon, R. El Guerjouma, M. Boudraa, Characterization of structural noise patterns and echo separation in the time-frequency plane for austenitic stainless steels, J. Nondestruct. Eval. 36, 31 (2017) [CrossRef] [Google Scholar]
  12. D. Comaniciu, P. Meer, Mean shift analysis and applications, in: Proceedings of the Seventh IEEE International Conference on Computer Vision, September 1999, Vol. 2, pp. 1197–1203 [CrossRef] [Google Scholar]
  13. D. Demirović, An implementation of the mean shift algorithm, Image Process. Line 9, 251–268 (2019) [CrossRef] [Google Scholar]
  14. Z. Xiang, T. Tao, L. Song, Z. Dong, Y. Mao, S. Chu, H. Wang, Object tracking algorithm for unmanned surface vehicle based on improved mean-shift method, Int. J. Adv. Robot. Syst. 17, 1729881420925294 (2020) [CrossRef] [Google Scholar]

Cite this article as: Mengyuan Zhang, Mark Sutcliffe, David Carswell, Qingping Yang, Robotic path planning using NDT ultrasonic data for autonomous inspection, Int. J. Metrol. Qual. Eng. 14, 16 (2023)

