The main goal of radiotherapy is to deliver the prescribed dose accurately to malignant tumors while minimizing the dose to adjacent normal organs. To achieve this goal, it is necessary to optimize the direction, intensity, and distribution of the radiation delivered to the patient. This optimization process is called radiation treatment planning and is implemented through a treatment planning system (TPS). By using a TPS, we plan and simulate treatment before the radiation beam is delivered to the tumor. During treatment planning, it must be ensured that the prescribed dose is properly targeted to the tumor and that tolerance doses in normal organs are not exceeded, so that treatment is safe and of high quality. Therefore, the TPS must accurately present the calculated radiation dose to the tumor and the surrounding normal organs, and accordingly, an essential element of a TPS is accurate calculation of the radiation dose.
Radiation dose is the total amount of ionizing radiation energy absorbed by a material or tissue. Dose calculation is the computation of the energy absorbed by the medium, and dose calculation algorithms are the fundamental tools that facilitate this process. Dose calculation algorithms should provide results quickly, so that the treatment planning process can be completed within a clinically acceptable time, and these results must be accurate enough to establish a reliable correlation between the delivered dose and the clinical outcome. The tension between high speed and high accuracy is an important challenge in developing modern dose calculation algorithms.
Before the 1920s, a universal concept of dose had not been established, and medical physicists estimated the dose to the patient from the exposure time of the X-ray machine. By the 1920s, X-ray units had sufficient energy to treat tumors at depth, introducing the concepts of depth dose, iso-dose, and opposing-beam techniques. The concept of treatment planning was introduced between the 1920s and 1950s, and dose calculations were performed manually. In the 1950s and 1960s, dose distributions were calculated by computer for the first time, and dose calculations began to consider multiple planes and the internal anatomy. With the prevalence of computed tomography (CT) in the 1970s, three-dimensional (3D) dose calculations became possible, and treatment planning software developed rapidly. Since then, intensity-modulated radiation therapy has been introduced to optimize radiation therapy, various techniques have been employed for accurate dose calculation in non-homogeneous regions of patients, and advances in computer science have made it possible to provide complex and accurate dose calculation results in a short time. Dose calculation algorithms used in radiation therapy have evolved rapidly with advances in particle/nuclear physics and computer science. These advances have been driven by an improved understanding of the physical processes involved in the interaction between the beam particles and the medium and by the increased speed of computer-based simulation and dose calculation. From the history of dose calculations and the mechanisms used in the radiotherapy field, we can classify dose calculation algorithms into three major categories. The first category is called factor-based algorithms, which use semiempirical approaches to resolve tissue heterogeneity and surface curvature based on effective spatial dose measures such as depth, field boundaries, and path lengths in water.
The second category is called model-based algorithms, which predict patient dose distributions from the primary particle fluence and a dose kernel. The third category is Monte Carlo simulations, which calculate dose distributions based on computer simulations of the physical processes of particle transport in matter.
In a factor-based algorithm, the absorbed dose is first measured in a water phantom for various rectangular beams normally incident on the phantom surface. All measured data are parameterized as an absorbed dose distribution that is a function of the distance from the source, the field size, the depth, and the lateral position. Correction factors are then applied a posteriori to the measurements to account for beam modifiers (blocks, compensators, etc.), for the fact that the patient's surface is not flat, and for the fact that the patient's tissue is not water. Most factor-based algorithms provide approximations by interpolating and extrapolating directly measured data for various treatment conditions and by considering the difference in attenuation that results from the beam passing through heterogeneous tissue. Factor-based dose calculations have traditionally been used in situations where the patient treatment conditions do not differ significantly from the measurement conditions used for commissioning or where heterogeneity correction is considered insignificant. Representative early examples of these algorithms are Clarkson's method and IRREG, which are still used for manual dose calculations and in some commercial software for secondary dose checks [1-3]. Inhomogeneity corrections can be broadly divided into two categories: one-dimensional density sampling methods and 3D density sampling methods. Typical one-dimensional density sampling methods are the linear attenuation method, the effective attenuation coefficient method, the ratio of tissue–air ratios (RTAR) method, and the power law method. The linear attenuation method is the simplest form of inhomogeneity correction; it adjusts the dose at a point based on the overlying thickness of the inhomogeneity by using a "percent per cm" correction. Using the linear attenuation method, the inhomogeneity correction factor (ICF) is

ICF = 1 + (k/100) × t,

where k is the correction in percent per cm (positive for media less dense than water) and t is the thickness of the overlying inhomogeneity.
The effective attenuation coefficient method is similar to the linear attenuation method but uses an effective attenuation coefficient instead of a "percent per cm" correction as follows:

ICF = exp[−μ′(d′ − d)],
where μ′ is the effective attenuation coefficient of water, d is the physical depth, and d′ is the equivalent path length. Both the linear attenuation and effective attenuation coefficient methods are crude compared with modern, more sophisticated methods, but they are quick. The RTAR, or effective source-to-surface distance, method is commonly used in older commercial treatment planning systems and for secondary manual checks and is given by the following equation:

ICF = TAR(d′, W) / TAR(d, W),
where d′ is the equivalent path length, d is the physical depth, W is the field size at the level of the point of interest, and the tissue–air ratio (TAR) is defined as the ratio of the dose at a given point in the phantom to the dose in free space at the same point [5-8]. Under the assumption that the conditions for electron equilibrium have been met, this method corrects the dose accurately. Tissue maximum ratio values can be used instead of TAR values, as they are formally identical. This method is advantageous primarily because of its simplicity and its ability to account for changes in scattering with field size and effective depth. The demerit of the RTAR method is that it does not properly account for the lateral component of the scattered photon contribution. Therefore, this method may over-correct or under-correct when the density is less than or greater than that of water, respectively. The power law method was introduced by Batho in 1964 and by Young and Gaylord in 1970 as an empirical correction factor method employing the TAR and the electron density of the inhomogeneity. The correction factor is given by the following equation:

ICF = TAR(d1, W)^(ρ1 − ρ2) × TAR(d2, W)^(ρ2 − 1),
where ρ1 is the relative electron density of the medium at the point of calculation, ρ2 is the relative electron density of the overlying material, d1 is the depth within this medium, and d2 is the distance to the upper surface of the overlying material. After the 1980s, Webb et al. extended this method to accommodate the tissue densities obtained from CT as follows:

ICF = (μen/ρ)N × Π[m=1→N] TAR(Xm, W)^((ρm − ρm+1)/ρ0), taking ρN+1 ≡ ρ0,
where N is the number of layers with different densities above the point of calculation, m is the layer number, Xm is the distance from the point of interest to the surface of the mth layer, ρm is the electron density of the mth layer, ρ0 is the electron density of water, and (μen/ρ)N is the mass absorption coefficient of the material in layer N. This method is sensitive to the proximity of the inhomogeneity and provides a first-order approximation to changes in both the primary and the scattered photon fluence. The method does not account for the buildup region; therefore, for energies higher than that of 60Co, the formula must be systematically modified by substituting the buildup distance zm for Xm at all depths in the buildup region. The method provides an acceptable approximation for a single inhomogeneous layer with a large field size and an electron density less than that of tissue.
All of the aforementioned methods in the first category were developed in an era when photon beams with energies lower than that of 60Co were used for radiation therapy; the assumption of electron equilibrium was therefore acceptable, and TAR data could be used and adjusted directly. However, these methods are difficult to extend to situations in which there is no electron equilibrium [12-17].
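The one-dimensional corrections described above can be illustrated with a short sketch. This is a minimal, self-contained example rather than clinical code: the TAR table is a toy analytic stand-in for measured commissioning data, and all names (`toy_tar`, `radiological_depth`, etc.) are illustrative assumptions.

```python
import math

def radiological_depth(rel_electron_densities, step_cm):
    """Equivalent path length d' = sum of relative electron density x step."""
    return sum(rho * step_cm for rho in rel_electron_densities)

def toy_tar(depth_cm, field_cm):
    """Toy stand-in for a measured tissue-air ratio table (illustrative only)."""
    return math.exp(-0.05 * depth_cm) * (1.0 + 0.01 * field_cm)

def icf_effective_attenuation(mu_eff, d, d_prime):
    """Effective attenuation coefficient method: ICF = exp[-mu'(d' - d)]."""
    return math.exp(-mu_eff * (d_prime - d))

def icf_rtar(d, d_prime, field_cm):
    """Ratio of tissue-air ratios: ICF = TAR(d', W) / TAR(d, W)."""
    return toy_tar(d_prime, field_cm) / toy_tar(d, field_cm)

def icf_batho(d1, d2, rho1, rho2, field_cm):
    """Two-layer Batho power law: TAR(d1)^(rho1 - rho2) * TAR(d2)^(rho2 - 1)."""
    return (toy_tar(d1, field_cm) ** (rho1 - rho2)
            * toy_tar(d2, field_cm) ** (rho2 - 1.0))

# Point at 10 cm physical depth below 5 cm of water and 5 cm of lung (rho ~ 0.3)
densities = [1.0] * 5 + [0.3] * 5              # one value per 1-cm step
d = 10.0
d_prime = radiological_depth(densities, 1.0)   # 6.5 cm water-equivalent depth
print(icf_rtar(d, d_prime, field_cm=10.0))     # > 1: lung increases the dose
```

A quick sanity check for any such implementation: in homogeneous water (d′ = d, ρ1 = ρ2 = 1), all of these correction factors reduce to unity.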
The second category of inhomogeneity corrections consists of the equivalent tissue–air ratio (ETAR), differential tissue–air ratio (dTAR), and 3D beam subtraction (3D-BSM) methods. The ETAR method was introduced in the late 1970s as a practical dose calculation method that uses the full CT data set for computerized treatment planning systems [18,19]. The ETAR correction factor for an inhomogeneous medium is given by the following equation:

ICF = TAR(d′, r̃) / TAR(d, r),
where d′ and r̃ are the "scaled" or "effective" values of the depth at the point of interest and of the field radius, respectively, for the energy of the radiation being used. d′ is scaled by averaging the CT values along the primary photon beam paths. The scaled beam radius r̃ is obtained by scaling the physical beam radius r with the weighted mean relative electron density ρ̃ of the irradiated scattering volume, r̃ = ρ̃ × r.
In the 3D-BSM, the correction at the point of interest is assembled from the doses of rectangular sub-fields surrounding the inhomogeneity, where ε′ij is the sign of the product UiWj; Ui and Wj are the algebraic distances from the point of interest, P, to the inhomogeneity boundaries; Dw0(Ui, Wj) is the dose at the center of the field when there is no heterogeneous region; and Cij is the conventional correction factor for the rectangular field defined by Ui and Wj on the axis. This method does not correct for the lateral scattering effect of the inhomogeneity. Therefore, the uncertainty can increase if the inhomogeneity is large and its electron density differs from unity. The 3D-BSM is quick and comparable in performance to the aforementioned methods, with no additional data requirements.
Factor-based algorithms have the advantage that the dose calculation is quick and that the subsequent energy transfer by photons and electrons in the patient's body does not need to be modeled explicitly. However, these algorithms are less accurate for a heterogeneous body at energies greater than 6 MV, where the scattering contribution is less significant and electron transport initiated by photons can lead to large local dose changes.
Model-based algorithms are the most widely used dose calculation concept in currently available commercial treatment planning systems [23-30]. These algorithms model the radiation output of the treatment machine, such as the "primary energy fluence" of the photons, before the energy absorption process is considered. The models are calibrated to measured data obtained with simple treatment fields in water. The modeled energy fluence of the primary photons is then used to calculate the energy release and transport in the patient's body. All model-based algorithms have two essential components. One is the total energy released per unit mass (TERMA), which is the energy released to the medium by interactions of the primary photons emerging from the linear accelerator (LINAC). The TERMA T(r) at the interaction point r of the primary photons is given by

T(r) = (μ/ρ) × Ψ(r),
where μ is the linear photon attenuation coefficient, ρ is the medium density, and Ψ is the energy fluence of primary photons. The other component is the kernel, which represents the energy deposited around the primary photon interaction site by scattered photons and electrons [27,30-32]. The dose at each point can be calculated from the convolution of the TERMA with the kernel. By combining the TERMA and the dose kernel, the dose D(r) at point r is obtained as

D(r) = ∫ T(r′) A(r′, r) d³r′,

where A(r′, r) is the energy deposition kernel,
which is called the "superposition" method. However, this method is computationally demanding. To reduce the computational power required for the superposition method, the convolution approach was proposed, in which the kernel is changed to a function of only the distance between the interaction point and the dose deposition point:

D(r) = ∫ T(r′) A(r − r′) d³r′.
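The TERMA–kernel operation just described can be sketched in one dimension. Both the attenuation coefficient and the kernel shape below are illustrative assumptions, not commissioned beam data; the point is only to show a depth-dependent TERMA blurred by a spatially invariant kernel.

```python
import numpy as np

dx = 0.1                              # voxel size (cm)
depth = np.arange(0.0, 30.0, dx)      # calculation grid along the beam axis
mu = 0.05                             # illustrative attenuation coefficient (1/cm)
terma = np.exp(-mu * depth)           # T(r) ~ (mu/rho) * Psi for a unit fluence

# Spatially invariant toy kernel: symmetric exponential, normalized to unit sum
x = np.arange(-5.0, 5.0 + dx, dx)
kernel = np.exp(-np.abs(x))
kernel /= kernel.sum()

# Convolution: D(r) = sum over r' of T(r') * A(r - r')
dose = np.convolve(terma, kernel, mode="same")

# Blurring the TERMA produces a buildup-like rise near the surface
print(int(np.argmax(dose)))           # dose maximum lies below the surface
```

Because the normalized kernel conserves the deposited energy, the integral of the dose stays close to the integral of the TERMA (up to edge effects), which is a useful check on any convolution implementation.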
This well-known method for accurate dose calculation in an inhomogeneous medium is called the convolution/superposition method. Several algorithms exist that are distinguished by how they treat the convolution kernel, such as the pencil beam convolution (PBC) algorithm, the analytical anisotropic algorithm (AAA) (Varian Medical Systems, Inc., Palo Alto, CA, USA), and the collapsed cone convolution (CCC) algorithm (Pinnacle, CMS XiO, etc.) [24,30,31,33-35]. For homogeneous media such as water, the accuracy does not depend much on the calculation algorithm. For heterogeneous media, the difference in the accuracy of the dose calculation is determined by how well the kernels of these algorithms can simulate the actual scattering. The 3D pencil beam-type dose calculation, the so-called "differential pencil beam", was introduced by Mohan et al. in 1986. A pencil beam is a dose kernel representing the 3D dose distribution of an infinitely narrow mono-energetic photon beam in water. The PBC algorithm assumes that all points of interaction lie on the central axis of the pencil beam and considers the lateral scattering to be homogeneous; thus, inhomogeneity correction is applied only in the longitudinal direction. In a heterogeneous medium, the TERMA is calculated by scaling the path length by the ratio of relative electron densities between tissue and water. The pencil beam kernel values in water are used according to the radiological path length calculated along the central axis of the pencil beam. Unlike the PBC algorithm, the AAA and CCC algorithms consider the inhomogeneity effect in both the longitudinal and lateral directions. The AAA was implemented in Eclipse (Varian Medical Systems, Inc.). The AAA employs spatially variant Monte Carlo-derived convolution scatter kernels and models primary photons, scattered extra-focal photons, and contamination electrons separately.
For inhomogeneities, the AAA uses radiological scaling of the dose deposition in the beamlet direction and electron-density-based scaling of the photon scatter kernels in the lateral direction. The doses are obtained by superposing the doses from the photon and electron convolutions. The advantages of the AAA are its relatively short calculation time and an accuracy comparable to that of the CCC algorithm and better than that of the PBC algorithm [36-38]. The CCC method was proposed by Ahnesjö in 1989 and uses poly-energetic energy deposition kernels for a beam spectrum, computed from a database of mono-energetic kernels. In the CCC algorithm, the kernel is represented analytically in polar coordinates with a finite number of polar angles about the primary beam. The point of interaction can be considered to lie at the apex of a set of radially directed lines spread out in three dimensions, and each line is considered to be the axis of a cone. The kernel along each line is the energy deposited within the entire cone, collapsed onto the line. The advantage of the CCC algorithm over standard superposition algorithms is its reduced computational cost. The computation time for the CCC method in an inhomogeneous medium is proportional to MN³ as opposed to N⁶, where M is the number of cones and N is the number of voxels along one side of the calculation volume. The CCC algorithm has been applied in Pinnacle (Philips Inc., Amsterdam, Netherlands), Oncentra MasterPlan (Nucletron, Inc., Columbia, MD, USA), CMS XiO (Elekta AB, Stockholm, Sweden), RayStation (RaySearch Laboratories AB, Stockholm, Sweden), etc.
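The quoted scaling can be made concrete with a back-of-the-envelope operation count. The number of cone directions below is only an illustrative order of magnitude, not a vendor-specific value.

```python
def superposition_ops(n):
    """Full point-kernel superposition: every voxel interacts with every voxel,
    i.e. N^3 dose points x N^3 interaction points."""
    return n ** 6

def collapsed_cone_ops(n, n_cones):
    """Collapsed cone: M ray directions traced through the N^3 voxel grid."""
    return n_cones * n ** 3

M = 106  # illustrative number of collapsed-cone directions (assumption)
for N in (64, 128, 256):
    speedup = superposition_ops(N) / collapsed_cone_ops(N, M)
    print(f"N={N}: ~{speedup:,.0f}x fewer operations")
```

The speedup factor is N³/M, so the advantage of collapsing the kernel onto cone axes grows rapidly with the size of the calculation volume.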
Model-based algorithms are more accurate than factor-based algorithms, especially in inhomogeneous media, but still rely on approximations and only partially handle the physical processes involved in the microscopic absorption of the energy transferred by the radiation field.
Principle-based algorithms, commonly known as Monte Carlo algorithms, are the most sophisticated approach and include almost all known physical features of microscopic radiation–tissue interactions. Monte Carlo algorithms are stochastic methods that have been widely used in experimental particle physics; in the clinical field, they were initially used as a benchmark to verify the accuracy of other dose calculation algorithms [39-42]. A Monte Carlo dose calculation usually consists of two major steps, initiated by the generation of a random number seed: first, simulation of the radiation beam as it travels through the accelerator head and other components before reaching the patient; second, simulation of the energy absorption and transport inside the patient's body, including immobilization tools and boluses. Monte Carlo modeling is often performed in situations where physical measurements are difficult or impossible. Monte Carlo algorithms can "synthetically measure" significant but almost immeasurable quantities, such as the contributions to the dose from different orders of photon scattering. Many researchers have performed Monte Carlo simulations to model accelerator heads, to generate the energy spectra and angular distributions of the primary photon beams produced, and to study other characteristics of photon beams [43-45]. In Monte Carlo simulations, accurate charged particle transport is important in a heterogeneous environment because charged particles set in motion on one side of an interface can cross it and deposit their energy on the other side. Inadequate treatment of charged particle transport results in inaccurate prediction of the resulting dose. Monte Carlo algorithms simulate all the real physical processes involving the beam particles during transport; the dose calculation results are therefore expected to be accurate, but there are weaknesses in some areas.
For a Monte Carlo algorithm to give accurate results, it must first have correct and detailed geometry information for the accelerator head, followed by fine and precise tuning in which measured values are compared with the Monte Carlo calculation results. In addition, the accuracy of Monte Carlo calculation results is mainly determined by the number of events generated: the statistical uncertainty is proportional to the inverse square root of the number of generated events. Therefore, Monte Carlo dose calculations are rather slow and time consuming. The rapid development of computer central processing units and the introduction of graphics processing units have greatly improved the speed of Monte Carlo dose calculations, making it possible to apply them in the clinical field [47,48]. Alternatively, the Acuros XB algorithm in Eclipse describes all the physical processes involved with a group of Boltzmann transport equations that are solved numerically; this is much faster than Monte Carlo algorithms while providing comparable accuracy [49-52]. In 2018, Chopra et al. reported the accuracy of five dose calculation algorithms within three treatment planning systems: Brainlab's iPlan 4.2 (BL: PBC and MC [Monte Carlo]), Philips' Pinnacle (PL: CCC), and Varian's Eclipse (VR: AAA and Acuros XB). They found that BL:MC and measured doses agreed well (Ddiff < 3%) for all field sizes and depths. In contrast, BL:PBC significantly over-predicted the measured dose for a lung phantom. They showed that PL:CCC had the best agreement with measured data among the TPS algorithms. In addition, VR:AAA over-predicted the measured results and was unable to replicate dose variations near heterogeneities, whereas Acuros XB showed good agreement with the measurements for heterogeneous media.
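The inverse-square-root behavior of the statistical uncertainty is easy to demonstrate with a toy simulation. The exponential energy deposits below are an arbitrary stand-in for real interaction physics; only the scaling of the standard error with the number of histories is the point.

```python
import random

def toy_mc_dose(n_histories, seed=1):
    """Score a toy 'dose': the mean energy deposit per history and its
    statistical uncertainty (standard error ~ sigma / sqrt(n))."""
    rng = random.Random(seed)
    deposits = [rng.expovariate(1.0) for _ in range(n_histories)]
    mean = sum(deposits) / n_histories
    var = sum((e - mean) ** 2 for e in deposits) / (n_histories - 1)
    return mean, (var / n_histories) ** 0.5

_, sigma_small = toy_mc_dose(1_000)
_, sigma_large = toy_mc_dose(100_000)
print(sigma_small / sigma_large)   # ~10: 100x more histories, ~10x less noise
```

This is why halving the statistical noise of a Monte Carlo dose calculation requires four times as many histories, and why variance reduction and GPU acceleration matter so much in practice.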
Recently, many studies have focused on increasing the computation speed of Monte Carlo dose calculation algorithms. In 2020, David et al. reported the clinical validation of a Monte Carlo dose engine. According to them, timing studies demonstrated that volumetric modulated arc therapy planning was completed in less than 4 minutes.
Dose calculation algorithms play an important role in radiation therapy. This article has described the dose calculation algorithms used in treatment planning systems for radiation therapy from the past to the present. Dose calculation algorithms are broadly classified into three main groups based on the mechanisms used: (1) factor-based, (2) model-based, and (3) principle-based algorithms. The PBC algorithm is still used in many places because it generates dose distributions with good precision and provides the best tradeoff between accuracy and calculation time in homogeneous regions such as the brain and abdomen. However, the general applicability of the PBC algorithm in inhomogeneous environments is questionable. In cases of severe tissue inhomogeneity, convolution/superposition methods such as the AAA and CCC algorithms produce fairly accurate dose distributions, even if minor differences are observed in comparison with Monte Carlo calculations. Because of their relatively fast dose calculation speed and high accuracy for both homogeneous and heterogeneous cases, convolution/superposition methods now form a major part of radiation therapy treatment planning systems. Meanwhile, some Monte Carlo-based programs offer computation times similar to those of superposition algorithms. The clinical applicability of principle-based algorithms has been steadily increasing owing to advances in computer hardware and software, and they will be developed further in the future. Much effort has been devoted to evaluating dose calculation algorithms, and the accuracy hierarchy is reported as follows: principle-based methods > model-based methods (CCC > AAA > PBC) > factor-based methods. Over approximately 70 years of development of dose calculation algorithms and computing technology, the accuracy of dose calculation has come close to our clinical needs.
Next-generation dose calculation algorithms are expected to incorporate biologically equivalent or biologically effective doses, and clinicians expect to be able to use them to improve the quality of treatment in the near future.
This work was supported by the General Researcher Program (NRF- 2018R1D1A1B0705021713) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning; the Startup Growth Technology Development Program (No.S2796688) through the Business for Cooperative R&D between Industry, Academy, and Research Institute funded by the Small and Medium Business Administration; the Nuclear Safety Research Program (No. 2003013-0120-SB120) through the Korea Foundation of Nuclear Safety (KOFONS), using the financial resource granted by the Nuclear Safety and Security Commission (NSSC), Republic of Korea.
The authors have nothing to disclose.
All relevant data are within the paper and its Supporting Information files.
Conceptualization: Dong Wook Kim. Supervision: Jinsung Kim. Validation: Kwangwoo Park and Hojin Kim. Writing–original draft: Dong Wook Kim, Kwangwoo Park, and Hojin Kim. Writing–review & editing: Dong Wook Kim and Jinsung Kim.