
In-House Developed Surface-Guided Repositioning and Monitoring System to Complement In-Room Patient Positioning System for Spine Radiosurgery

Kwang Hyeon Kim1, Haenghwa Lee1, Moon-Jun Sohn1, Chi-Woong Mun2

1Department of Neurosurgery, Neuroscience & Radiosurgery Hybrid Research Center, Inje University Ilsan Paik Hospital, Inje University College of Medicine, Goyang, 2Department of Biomedical Engineering, U-Health Research Center, Inje University, Gimhae, Korea
Correspondence to: Moon-Jun Sohn
(mjsohn@paik.ac.kr)
Tel: 82-31-910-7730
Fax: 82-31-915-0885
Received March 25, 2021; Revised June 8, 2021; Accepted June 8, 2021.
Abstract
Purpose: This study aimed to develop a surface-guided radiosurgery system customized for a neurosurgery clinic that could be used as an auxiliary system for improving the accuracy, monitoring the movements of patients while performing hypofractionated radiosurgery, and minimizing the geometric misses.
Methods: RGB-D cameras were installed in the treatment room and a monitoring system was constructed to perform a three-dimensional (3D) scan of the body surface of the patient and to express it as a point cloud. This could be used to confirm the exact position of the body of the patient and monitor their movements during radiosurgery. The image from the system was matched with the computed tomography (CT) image, and the positional accuracy was compared and analyzed in relation to the existing system to evaluate the accuracy of the setup.
Results: The user interface was configured to register the patient and display the setup image to position the setup location by matching the 3D points on the body of the patient with the CT image. The error rate for the position difference was within a 1-mm distance (min, −0.21 mm; max, 0.63 mm). Compared with the existing system, the differences were found to be as follows: x=0.08 mm, y=0.13 mm, and z=0.26 mm.
Conclusions: We developed a surface-guided repositioning and monitoring system that can be customized and applied in a radiation surgery environment with an existing linear accelerator. It was confirmed that this system could be easily applied for accurate patient repositioning and inter-treatment motion monitoring.
Keywords: Spine radiosurgery, Patient repositioning, Patient monitoring, Point cloud, Surface-guided radiosurgery
Introduction

Since stereotactic radiosurgery (SRS) delivers high-prescription doses to the treatment site, accurate target positioning and patient setup are essential. Linear accelerator (LINAC)-based SRS requires a frame-based stereotactic approach to achieve the required accuracy. However, auxiliary systems for radiation exposure control have been developed for a frameless stereotactic approach [1]. Patient positioning systems based on infrared (IR) reflective markers demonstrate comparable accuracy with setups using X-ray imaging and are currently used in the field of spine radiosurgery [2-4]. A frameless LINAC-based radiosurgery system captures and tracks the movement of a patient during treatment and checks whether the position of the patient is within the acceptable range. At this point, the X-ray dose used to check the setup accuracy may appear to be negligible when compared with the dose scattered throughout the body during radiosurgery or treatment; however, surface-guided radiation therapy (SGRT) is being developed and utilized as a system using three-dimensional (3D) surface imaging techniques without the disadvantage of the ionizing radiation as reported in American Association of Physicists in Medicine Task Group 75 (AAPM TG 75) [5-8].

While it is challenging to directly use surface imaging systems for monitoring spine lesions in the body, the system can be widely used for monitoring patient positional changes. The SGRT system records the changes in the motion of the 3D surface of the patient [9]. If a difference in location occurs, the location of the lesion is considered to have changed. The SGRT system using non-ionizing radiation integrates a projector with two or three cameras to obtain a real-time 3D surface of the patient [10]. During radiation therapy and radiosurgery, a stereo vision system or depth camera is used to optically scan the surface, thereby identifying the location of the patient and monitoring it with high spatial resolution [11].

Since SGRT provides real-time information about the body surface and surgery/treatment site of the patient in the treatment room, it yields more accurate positioning for radiosurgery than laser positioning within this context. The system also has the advantage of reduced radiation doses for fractionated/hypofractionated treatment with a reduced amount of X-ray imaging per day [5,6].

The SGRT system monitors the position of the patient in real time during radiosurgery, which contributes to the standardization of the radiosurgery workflow with high precision and reproducibility while prioritizing patient safety [10,11].

The clinical use of SGRT involves optical surface scanning for patient positioning, motion monitoring within the treatment area, and respiratory gating techniques and has proven to be highly beneficial. The AlignRT system (Vision RT, London, UK) that is constructed with this technology uses SGRT in gated radiation therapy for tumor locations close to the skin surface, such as accelerated partial breast irradiation, whole brain radiation therapy, and SRS using deep inspiration breath hold (DIBH) and voluntary DIBH [12,13].

The SGRT system involves an inherent risk of positional deviations in the patient repositioning sub-process of treatment, a risk that is ranked high in the risk priority number scoring using failure modes and effects analysis [14]. In the case of our neurosurgery clinic, the coordinates reconfirmed by repositioning in the treatment position inevitably involve an intrinsic error of submillimeters while attaching an IR reflective marker using the ExacTrac® (BrainLab, Munich, Germany) in the computed tomography (CT) simulation stage with the treatment positioning for the patient.

In other words, establishing and confirming the patient’s treatment location is an important step in surface-guided radiosurgery (SGRS) to ensure accurate execution of the treatment plan. This means that a treatment setup check and treatment monitoring are pivotal steps in the treatment process. Therefore, a more accurate and intuitive system is required to compensate for the geometric misses on existing systems that only use a few IR markers to set up the patient’s position. These errors can be reduced by using point cloud surface imaging as an additional method to model the body of the patient since it provides a representation of the target scene in the Cartesian coordinate system of x, y, and z. Various radiation treatment applications that have incorporated this method have been studied and used for treatment [12,13,15].

Another method to reduce such errors involves the use of a surgical guide system. Computer-based surgical assistance systems (NAV3i; Stryker, Kalamazoo, MI, USA, and StealthStation; Medtronic, Minneapolis, MN, USA) are being widely used in the field of neurosurgery. These systems provide a guide to the exact location of the identified lesion by checking the guide from the incision site to the surgical site based on CT and magnetic resonance images during surgery [16-20].

Therefore, this study aims to develop an SGRS system that is customized for a neurosurgery clinic and can be used as an auxiliary system for improving accuracy, tracking the movements of patients while performing fractionated treatment, and minimizing geometric misses.

Materials and Methods

### 1. Surface-guided reposition and monitoring system

A point cloud is a set of data points with x, y, and z coordinates in 3D space. A point cloud generally measures the outer surface of an object using a 3D scanner, which undergoes processing to ultimately yield a 3D image (Fig. 1).

Fig. 1. Three-dimensional surface modeling system architecture and surface-guided radiosurgery (SGRS) in the treatment room: (a) the architecture of the image acquisition using the depth camera, (b) the surface imaging profile in sagittal plane, and (c) the installed SGRS in the treatment room. IR, infrared; LINAC, linear accelerator.

To create a 3D image of the patient, a cloud of 200,000 points was obtained for the region of interest (ROI) recognized by the depth camera, while a 35,000-point cloud was used to shape the object to reduce the amount of computation. An RGB-D camera was used to recognize the depth, the body surface, and the position of the patient to create 3D images [21]. To obtain a point on the surface of an object from a depth camera, the depth distance (z) and the distance between the illuminator and the actual camera sensor (x) must be obtained using the triangulation principle. Here, these images were updated by tracking the changes in the location, surface, and depth information caused by the movement of the patient. Meanwhile, Eqs. (1) and (2) were used to recognize the ROI and the depth of the patient. Here, z is the distance between the patient body and camera; b is the distance from the IR illuminator to the center of the camera lens; f is the distance from the lens to the camera sensor; α and β are the angles of the IR illuminator and lens for the body surface spot, respectively; x is the distance from the body surface spot to the body point orthogonal to the illuminator; P is the distance between the center and outer edge of the sensor; and y is the body depth from the surface [21].

$z=\dfrac{b}{\tan(\alpha)+\tan(\beta)}$ (1)
$x=z\cdot\tan(\alpha)$ (2)
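The triangulation step described above can be sketched as follows; this is a minimal illustration, not the vendor's calibration code, and the baseline and angles used in the example are hypothetical values:

```python
# Sketch of the triangulation in Eqs. (1) and (2): recover the depth z of a
# body-surface spot and its lateral offset x from the baseline b between the
# IR illuminator and the camera, and the two ray angles alpha and beta.
import math

def surface_point(b, alpha, beta):
    """Return (z, x) for a surface spot, from the triangulation geometry."""
    z = b / (math.tan(alpha) + math.tan(beta))  # Eq. (1): depth from the camera
    x = z * math.tan(alpha)                     # Eq. (2): lateral offset
    return z, x

# Example: a 75 mm baseline with both rays at 45 degrees.
z, x = surface_point(75.0, math.radians(45), math.radians(45))
```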

Three depth cameras (ASTRA S; ORBBEC, Troy, MI, USA) were installed for imaging the entire body surface on the ceiling above the gantry head, with the specifications of the SGRS-integrated computer as follows: I7-8770, 16G RAM, SSD 512G, GTX 1080Ti, and Windows 10 (Fig. 1c). Each point cloud was obtained from the cameras installed on the left, center, and right, and a 3D point cloud model was created by overlapping the obtained point clouds (Fig. 2).

Fig. 2. Acquired three-dimensional images using RGB-D cameras: (a) left, (b) center, (c) right direction, and (d) integrated images through the cameras.

### 2. Point cloud spatial transformation

The matrix of the newly transformed coordinates x’, y’, and z’ for the spatial coordinates x, y, and z, which combine the points obtained from the camera positions on the left, center, and right of the point cloud into one scene, was obtained by performing spatial transformation corresponding to the movement described in Eq. (3). The rotation (R) and the translation matrix (T) for the homogeneous coordinates were as represented in Eqs. (3) and (4) [22,23]. In terms of geometric transformation, the value of the moving direction was Tx, Ty, Tz, respectively, with the point (0, 0, 0) moving to (Tx, Ty, Tz). In the case of the X axis, the X coordinate did not change because it was multiplied by 1, while the remaining Y- and Z-axis coordinates were rotated in the same way as the Cartesian coordinate system. The same principle was applied to the Y and Z axes described in Eq. (4); that is, each point was defined in 3D space with x, y, z Cartesian coordinates. Therefore, a 4×4 matrix was required to perform the rotation transformation for image composition, and an element (1) was added to the end of each of the x, y, z 3D vectors.

$\begin{bmatrix}x'\\y'\\z'\\1\end{bmatrix}=\begin{bmatrix}R_{xx}&R_{xy}&R_{xz}&T_x\\R_{yx}&R_{yy}&R_{yz}&T_y\\R_{zx}&R_{zy}&R_{zz}&T_z\\0&0&0&1\end{bmatrix}\begin{bmatrix}x\\y\\z\\1\end{bmatrix}$ (3)
$R_x(\theta)=\begin{bmatrix}1&0&0&0\\0&\cos(\theta)&-\sin(\theta)&0\\0&\sin(\theta)&\cos(\theta)&0\\0&0&0&1\end{bmatrix},\quad R_y(\theta)=\begin{bmatrix}\cos(\theta)&0&\sin(\theta)&0\\0&1&0&0\\-\sin(\theta)&0&\cos(\theta)&0\\0&0&0&1\end{bmatrix},\quad R_z(\theta)=\begin{bmatrix}\cos(\theta)&\sin(\theta)&0&0\\-\sin(\theta)&\cos(\theta)&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}$ (4)
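The homogeneous transformation of Eqs. (3) and (4) can be sketched in NumPy as below; this is an illustrative implementation rather than the PCL routines used in the system, and the rotation angle and translation in the example are hypothetical values:

```python
# Sketch of Eqs. (3) and (4): compose a 4x4 homogeneous matrix (rotation about
# the Z axis plus a translation) and apply it to an (N, 3) point cloud.
import numpy as np

def rz(theta):
    """4x4 rotation about the Z axis, with the sign convention of Eq. (4)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,   s,  0.0, 0.0],
                     [-s,  c,  0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def apply_transform(points, M):
    """Append the trailing 1 to each point, multiply, and drop it again."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ M.T)[:, :3]

M = rz(np.pi / 2)
M[:3, 3] = [10.0, 0.0, 0.0]  # translation (Tx, Ty, Tz)
out = apply_transform(np.array([[1.0, 0.0, 0.0]]), M)
```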

A point cloud library (PCL) was used to acquire the point clouds and to build the user interfaces [24]. Additionally, a point-cloud-to-CT-image registration algorithm was developed and installed in the integrated graphical user interface. This enabled the patient setup position to be adjusted in real time using the SGRS-integrated computer in the treatment room and the movement of the patient to be monitored in real time during the treatment. The point clouds obtained after scanning with the cameras were down-sampled using voxel grid filtering. Voxel grid filtering checks each voxel of a given leaf size for points, replaces the points inside a voxel with their centroid, and removes the remaining surrounding points. Then, the noise was removed, and the CT images on DICOM coordinates were converted into point clouds on patient-table coordinates. These were then aligned on the coordinates using the iterative closest point (ICP) algorithm (Fig. 3) [25-28].
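The voxel grid down-sampling step described above can be sketched as follows; this is a toy version in the spirit of PCL's VoxelGrid filter, and the leaf size used in the example is a hypothetical value, not the one used in the system:

```python
# Toy voxel grid filter: partition space into (leaf x leaf x leaf) voxels and
# keep one centroid per occupied voxel, discarding the surrounding points.
import numpy as np

def voxel_grid_filter(points, leaf=5.0):
    """Down-sample an (N, 3) cloud to one centroid per occupied voxel."""
    keys = np.floor(points / leaf).astype(int)  # voxel index of each point
    buckets = {}
    for key, p in zip(map(tuple, keys), points):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(v, axis=0) for v in buckets.values()])

cloud = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [20.0, 0.0, 0.0]])
filtered = voxel_grid_filter(cloud, leaf=5.0)  # first two points merge
```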

Fig. 3. Image registration process and user interface integration. 3D, three-dimensional; CT, computed tomography.

### 3. Measurement accuracy

The Hausdorff distance is the distance between two sets in a metric space (Eq. 5). In Eq. (5), a and b represent points from sets A and B, respectively, while for simplicity, d(a, b) is any metric between these points; here, d(a, b) is the Euclidean distance between a and b. The Hausdorff distance can be used in computer vision applications to locate a given reference image within any target image [29]. In this study, a reference image was obtained in the CT simulation stage for radiosurgery. This image was used as the basis for comparison, and the difference between the reference image and the first or fractionated treatment was observed. The area of the actual binary target image was processed as a point cloud. The treatment position was located by calculating the Hausdorff distance between the reference image and the area of the target image at the time of treatment with the distance matching algorithm and by minimizing this distance:

$h(A,B)=\max_{a\in A}\min_{b\in B}d(a,b),\qquad H(A,B)=\max\{h(A,B),\,h(B,A)\}$ (5)
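The Hausdorff computation described above can be sketched with SciPy's `directed_hausdorff`; the two point sets below are toy data, not the reference and treatment surfaces from this study:

```python
# Symmetric Hausdorff distance between two point sets A and B, taken as the
# maximum of the two directed Hausdorff distances.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.0]])

# directed_hausdorff returns (distance, index_A, index_B); keep the distance.
h = max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])
```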

Subsequently, ten setups were performed to verify the positional accuracy of the developed system. The positional accuracy was calculated in relation to the existing system (ExacTrac).

Results

A user interface was established for the SGRS system (Fig. 4). The point cloud of the patient in the CT simulation stage for radiosurgery was registered by providing the patient information registration window, which included patient name, department number, date of birth, gender, tumor location, and treatment information (fraction), and a window (panel) for real-time monitoring of the point cloud in one screen (Fig. 4a). A dummy phantom was used for testing (Fig. 4b) and was regarded as a rigid body with no movement via breathing. The acquired point cloud image is shown in Fig. 4.

Fig. 4. Surface-guided radiosurgery user interface in the treatment room. (a) The user interface for the patient setup matching. (b) The phantom experiment using our surface-guided reposition and monitoring system. CT, computed tomography.

The point cloud dataset obtained in the CT simulation process was used for the first and the fractionated treatments after establishing the radiosurgery plan. In addition, it was matched with the CT image for patient setup and for monitoring in the treatment room (Fig. 5). The accuracy of the matching (the average of ten trial images) using the ICP matching algorithm of the PCL library was as follows: x=1.44±0.5 mm, y=1.48±0.31 mm, z=0.09±0.11 mm, pitch=0.01°±0.02°, roll=0.01°±0.01°, and yaw=0.05°±0.02° (Fig. 5b, c). An error occurred in the process of matching the setup point cloud acquired in the treatment room with the CT image converted into a point cloud. It was possible to locate the target lesion by obtaining the CT point cloud (Fig. 5a).

Fig. 5. Image registration results of surface-guided image to computed tomography (CT) for a phantom and clinical case. (a) The 3D image registration result for a phantom CT and point cloud. (b) The image registration process in the same coordinate plane. (c) The final image in which the point cloud and CT images are registered.

The developed system was used to perform ten repeated patient setups. The maximum error of the Hausdorff distance was found to be within 1 mm, with a minimum error of –0.21 mm and a maximum error of 0.63 mm (Fig. 6).

Fig. 6. Multi-fractional setup trials using Hausdorff distance in difference plots.

Fig. 7a presents the results of the location accuracy verification for the constructed SGRS system. The x, y, and z values of the developed system all exhibited errors within 0.25 mm. When evaluating the difference from the existing system, a maximum error of 0.26 mm was observed in the z direction, and errors within 0.15 mm were observed in the x and y directions (Fig. 7b).

Fig. 7. Multi-fractional setup trials involving existing system (ExacTrac, BrainLab, Munich, Germany) and the developed surface-guided radiosurgery (SGRS) in difference plots. (a) The location accuracy of the developed system for the multi-fractional setup trials. (b) The difference for the x, y, and z axis.
Discussion

An SGRS system can be used to continuously monitor a patient’s surface motion during radiation therapy through optical surface imaging technology. We attempted to reduce the image processing burden by reducing the amount of point cloud raw data by around one-fifth and controlled the latency to within several hundred milliseconds. Wang et al. applied a machine learning method to predict external respiratory motion signals and the internal liver motion. As a result, the matching latency with the surface image was adjusted to within 500 ms by predicting the liver motion [30]. While our system is based on a couch angle of 0°, a recent study by Covington and Popple [31] presented a simple and inexpensive procedure for evaluating the performance of a surface imaging system used in stereotactic radiosurgery treatment at a non-zero couch angle. Meanwhile, Chan et al. [32] used the ICP registration method to match CT and 3D ultrasound images for a surface-guided system for spinal surgery as a navigation imaging system. As a result, the accuracy was reported to be 0.3±0.2 mm and 0.9°±0.8°.

When obtaining a 3D point cloud, geometric and calibration errors occur, which result in noise in the point cloud image [33-36]. Furthermore, the patient’s ROI can be imaged according to the acquisition distance, with the angle varied depending on the specifications of the RGB-D camera [37]. Therefore, it is crucial to obtain a suitable installation location since the installation distance and angle vary depending on the range of treatment for the patient or on whether the gantry in the treatment room is rotated. This study excluded the treatment of brain patients, with the distance and angle set for spine treatment and the ROI for monitoring. In other words, there will be certain differences between the setup of the imaging system when treating brain and spine cases for accurate observation of the lesion site.

The ICP algorithm was used to evaluate the distance between two point clouds. The matching was performed by repeatedly finding the closest distance between each point on the 3D body surface of the patient and the point cloud of the matching CT image. If there was a section wherein the points did not coincide when selecting an ROI, the errors in x, y, and z exceeded 2 mm. However, it was possible to improve the precision by applying weights and removing outliers, although this aspect requires further optimization research [38,39].
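The core of each ICP iteration, the least-squares rigid transform between matched point sets (the SVD method of Arun et al. [25]), can be sketched as below; the example uses toy clouds with known correspondences, whereas the real system iterates this with nearest-neighbor matching and outlier rejection:

```python
# One ICP alignment step: given corresponding point sets src and dst, find the
# rotation R and translation t minimizing the sum of squared distances.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t such that dst ~= src @ R.T + t (Arun's SVD method)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)      # centroids
    H = (src - cs).T @ (dst - cd)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

src = np.array([[0.0, 0, 0], [1.0, 0, 0], [0.0, 1, 0], [0.0, 0, 1]])
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
dst = src @ Rz.T + np.array([2.0, -1.0, 0.5])        # known ground truth
R, t = best_rigid_transform(src, dst)                # recovers Rz and t exactly
```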

The surface point cloud was used to visualize the patient’s current position and to execute a more accurate setup with the CT point cloud set according to image registration. Meanwhile, the movement of the patient can be detected and monitored in the user interface screen. Comparable results of the position of the body surface after matching the surface point cloud and the surface of the CT via the ICP algorithm demonstrated that the difference was negligible. However, the surfaces of small areas, such as the nipple of the patient, were not well sampled due to the limited spatial resolution [40], which was related to the resolution limit of maximum distance and the ROI provided by the depth camera. However, since an increase in the sample size aimed at improving the spatial resolution will increase the calculation time, a loss of spatial resolution was preferred over a delay of tens of milliseconds in real-time monitoring. The temporal resolution was measured in real time at several milliseconds to tens of microseconds in relation to the IR-reflective-marker-based system. Here, the slow read-out speed can cause motion blurring effects. Further research is required to improve the spatial resolution while reducing the computation time in a trade-off relationship between the two factors [41].

Unless the latest SGRT system is included in the initial LINAC setup, retrofitting it onto existing radiation therapy machine models entails difficulties in installation and cost. While actual testing and analysis using patient setups are still required, the advantage of the system proposed in this study is that it can determine the patient location and monitor errors, as the existing ExacTrac system does, using only an RGB-D camera, an image display user interface, and an ICP-based image registration algorithm. Moreover, our system can be easily installed alongside existing systems.

Conclusions

In this study, we developed a surface-guided repositioning and monitoring system that can be customized for an environment with an existing LINAC. The system assists in improving the setup accuracy in radiation surgery and can be easily applied for more accurate patient repositioning and inter-treatment motion monitoring.

Acknowledgements

This work was supported by the 2017 Inje University research grant.

Conflicts of Interest

The authors have nothing to disclose.

Availability of Data and Materials

The data that support the findings of this study are available on request from the corresponding author.

Author Contributions

Conceptualization: Kwang Hyeon Kim and Moon-Jun Sohn. Data curation and formal analysis: Kwang Hyeon Kim and Haenghwa Lee. Funding acquisition: Moon-Jun Sohn. Investigation: Kwang Hyeon Kim, Haenghwa Lee, and Moon-Jun Sohn. Methodology: Kwang Hyeon Kim, Moon-Jun Sohn, and Chi-Woong Mun. Supervision: Moon-Jun Sohn. Validation: Kwang Hyeon Kim, Haenghwa Lee, and Chi-Woong Mun. Writing–original draft: Kwang Hyeon Kim. Writing–review & editing: Moon-Jun Sohn and Chi-Woong Mun.

References
1. Li G, Ballangrud A, Kuo LC, Kang H, Kirov A, Lovelock M, et al. Motion monitoring for cranial frameless stereotactic radiosurgery using video-based three-dimensional optical surface imaging. Med Phys. 2011;38:3981-3994.
2. Tagaste B, Riboldi M, Spadea MF, Bellante S, Baroni G, Cambria R, et al. Comparison between infrared optical and stereoscopic X-ray technologies for patient setup in image guided stereotactic radiotherapy. Int J Radiat Oncol Biol Phys. 2012;82:1706-1714.
3. Wang LT, Solberg TD, Medin PM, Boone R. Infrared patient positioning for stereotactic radiosurgery of extracranial tumors. Comput Biol Med. 2001;31:101-111.
4. Schipani S, Wen W, Jin JY, Kim JK, Ryu S. Spine radiosurgery: a dosimetric analysis in 124 patients who received 18 Gy. Int J Radiat Oncol Biol Phys. 2012;84:e571-e576.
5. Wu VW, Ho YY, Tang YS, Lam PW, Yeung HK, Lee SW. Comparison of the verification performance and radiation dose between ExacTrac x-ray system and On-Board Imager-a phantom study. Med Dosim. 2019;44:15-19.
6. Murphy MJ, Balter J, Balter S, BenComo JA Jr, Das IJ, Jiang SB, et al. The management of imaging dose during image-guided radiotherapy: report of the AAPM Task Group 75. Med Phys. 2007;34:4041-4063.
7. Cheng CS, Jong WL, Ung NM, Wong JHD. Evaluation of imaging dose from different image guided systems during head and neck radiotherapy: a phantom study. Radiat Prot Dosimetry. 2017;175:357-362.
8. Steiner E, Stock M, Kostresevic B, Ableitinger A, Jelen U, Prokesch H, et al. Imaging dose assessment for IGRT in particle beam therapy. Radiother Oncol. 2013;109:409-413.
9. Hoisak JDP, Pawlicki T. The role of optical surface imaging systems in radiation therapy. Semin Radiat Oncol. 2018;28:185-193.
10. Freislederer P, Kügele M, Öllers M, Swinnen A, Sauer TO, Bert C, et al. Recent advances in surface guided radiation therapy. Radiat Oncol. 2020;15:187.
11. Li J, Shi W, Andrews D, Werner-Wasik M, Lu B, Yu Y, et al. Comparison of online 6 degree-of-freedom image registration of Varian TrueBeam cone-beam CT and BrainLab ExacTrac X-ray for intracranial radiosurgery. Technol Cancer Res Treat. 2017;16:339-343.
12. Laaksomaa M, Sarudis S, Rossi M, Lehtonen T, Pehkonen J, Remes J, et al. AlignRT® and Catalyst™ in whole-breast radiotherapy with DIBH: is IGRT still needed? J Appl Clin Med Phys. 2019;20:97-104.
13. Agazaryan N, Tenn S, Dieterich S, Gevaert T, Goetsch SJ, Kaprealian T. Frameless image guidance in stereotactic radiosurgery. Cham: Springer; 2020:37-48.
14. Manger RP, Paxton AB, Pawlicki T, Kim GY. Failure mode and effects analysis and fault tree analysis of surface image guided cranial radiosurgery. Med Phys. 2015;42:2449-2461.
15. Gilles M, Fayad H, Miglierini P, Clement JF, Scheib S, Cozzi L, et al. Patient positioning in radiotherapy based on surface imaging using time of flight cameras. Med Phys. 2016;43:4833.
16. Padilla L, Pearson EA, Pelizzari CA. Collision prediction software for radiotherapy treatments. Med Phys. 2015;42:6448-6456.
17. Hoole AC, Twyman N, Langmack KA, Hebbard M, Lowrie D. Laser scanning of patient outlines for three-dimensional radiotherapy treatment planning. Physiol Meas. 2001;22:605-610.
18. Roessler K, Ungersboeck K, Dietrich W, Aichholzer M, Hittmeir K, Matula C, et al. Frameless stereotactic guided neurosurgery: clinical experience with an infrared based pointer device navigation system. Acta Neurochir (Wien). 1997;139:551-559.
19. Kosugi Y, Watanabe E, Goto J, Watanabe T, Yoshimoto S, Takakura K, et al. An articulated neurosurgical navigation system using MRI and CT images. IEEE Trans Biomed Eng. 1988;35:147-152.
20. Fan Y, Jiang D, Wang M, Song Z. A new markerless patient-to-image registration method using a portable 3D scanner. Med Phys. 2014;41:101910.
21. Giancola S, Valenti M, Sala R. A survey on 3D cameras: metrological comparison of time-of-flight, structured-light and active stereoscopy technologies. Cham: Springer; 2018.
22. He Y, Liang B, Yang J, Li S, He J. An iterative closest points algorithm for registration of 3D laser scanner point clouds with geometric features. Sensors (Basel). 2017;17:1862.
23. Habib A, Detchev I, Bang K. A comparative analysis of two approaches for multiple-surface registration of irregular point clouds. Paper presented at: The 2010 Canadian Geomatics Conference and Symposium of Commission I. Calgary, Canada. 2010: 39.
24. Rusu RB, Cousins S. 3D is here: Point Cloud Library (PCL). Paper presented at: 2011 IEEE International Conference on Robotics and Automation; 2011 May 9-13; Shanghai, China.
25. Arun KS, Huang TS, Blostein SD. Least-squares fitting of two 3-D point sets. IEEE Trans Pattern Anal Mach Intell. 1987;PAMI-9:698-700.
26. Ge Y, Maurer CR Jr, Fitzpatrick JM. Surface-based 3D image registration using the iterative closest-point algorithm with a closest-point transform. Paper presented at: Medical Imaging 1996: Image Processing; SPIE. 1996:358-367.
27. Wu ML, Chien JC, Wu CT, Lee JD. An augmented reality system using improved-iterative closest point algorithm for on-patient medical image visualization. Sensors (Basel). 2018;18:2505.
28. Tehrani JN, O’Brien RT, Poulsen PR, Keall P. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm. Phys Med Biol. 2013;58:8517-8533.
29. Huttenlocher DP, Klanderman GA, Rucklidge WJ. Comparing images using the Hausdorff distance. IEEE Trans Pattern Anal Mach Intell. 1993;15:850-863.
30. Wang G, Li Z, Li G, Dai G, Xiao Q, Bai L, et al. Real-time liver tracking algorithm based on LSTM and SVR networks for use in surface-guided radiation therapy. Radiat Oncol. 2021;16:13.
31. Covington EL, Popple RA. A low-cost method to assess the performance of surface guidance imaging systems at non-zero couch angles. Cureus. 2021;13:e14278.
32. Chan A, Coutts B, Parent E, Lou E. Development and evaluation of CT-to-3D ultrasound image registration algorithm in vertebral phantoms for spine surgery. Ann Biomed Eng. 2021;49:310-321.
33. Wang S, Sun HY, Guo HC, Du L, Liu TJ. Multi-view laser point cloud global registration for a single object. Sensors (Basel). 2018;18:3729.
34. Li J, Zhou Q, Li X, Chen R, Ni K. An improved low-noise processing methodology combined with PCL for industry inspection based on laser line scanner. Sensors (Basel). 2019;19:3398.
35. Liu W, Cheung Y, Sabouri P, Arai TJ, Sawant A, Ruan D. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system. Med Phys. 2015;42:6564-6571.
36. Fan Y, Yao X, Hu T, Xu X. An automatic spatial registration method for image-guided neurosurgery system. J Craniofac Surg. 2019;30:e344-e350.
37. Muralikrishnan B, Rachakonda P, Lee V, Shilling M, Sawyer D, Cheok G, et al. Relative range error evaluation of terrestrial laser scanners using a plate, a sphere, and a novel dual-sphere-plate target. Meas Sci Technol. 2017;111:60-68.
38. Maier-Hein L, Franz AM, dos Santos TR, Schmidt M, Fangerau M, Meinzer HP, et al. Convergent iterative closest-point algorithm to accommodate anisotropic and inhomogeneous localization error. IEEE Trans Pattern Anal Mach Intell. 2011;34:1520-1532.
39. Liu W. LiDAR-IMU time delay calibration based on iterative closest point and iterated sigma point Kalman filter. Sensors (Basel). 2017;17:539.
40. Coroiu ADCA, Coroiu A. Interchangeability of Kinect and Orbbec sensors for gesture recognition. Paper presented at: 2018 IEEE 14th International Conference on Intelligent Computer Communication and Processing (ICCP). Cluj-Napoca, Romania. 2018: 309-315.
41. Wiersma RD, Tomarken SL, Grelewicz Z, Belcher AH, Kang H. Spatial and temporal performance of 3D optical surface imaging for real-time head position tracking. Med Phys. 2013;40:111712.

June 2021, 32 (2)