CN106408616B - Method for correcting projection-image background inconsistency in CT imaging - Google Patents
Method for correcting projection-image background inconsistency in CT imaging
- Publication number: CN106408616B (application CN201611046838.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- projection
- value
- bright-field image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/005—Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/032—Transmission computed tomography [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/58—Testing, adjusting or calibrating thereof
- A61B6/582—Calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
Abstract
The present invention relates to a method for correcting projection-image background inconsistency in CT imaging, and specifically to a cross-correlation registration method for CT projection-image backgrounds based on image gray-level information, comprising the following steps: 1) before acquiring the sample's CT projection images, acquire one bright-field image and determine its spot center; 2) compute the bright-field image compensation function; 3) illumination-compensate the bright-field image; 4) illumination-compensate the projection images; 5) compute the actual offset between each projection-image background and the bright-field background; 6) perform image translation registration. Based on image gray-level information, the invention registers CT projection images using the principle of correlation analysis; it requires no prior knowledge of the detector's deflection parameters, is simple to operate, and is computationally efficient.
Description
Technical Field
The invention relates to a method for correcting inconsistent projection-image backgrounds in CT imaging, and in particular to a cross-correlation registration method for CT projection-image backgrounds based on image gray-level information.
Background
CT imaging can rapidly and nondestructively detect the three-dimensional structure of an object and has become an important means of three-dimensional structural characterization of materials. In CT reconstruction, to avoid the influence of dust, defects or electronic noise on the detector on the CT projection images, a bright-field image and a dark-field image are usually collected during the CT experiment, and CT image accuracy is improved by subtracting this background from the projection data. High-precision CT image reconstruction requires that the X-ray source, the sample-stage axis and the detector remain relatively stationary throughout the experiment, so that the projection images acquired by the detector at different sample rotation angles share the same background image. In the actual imaging process, however, relative displacement among the source, the sample-stage axis and the detector, caused by source drift or mechanical vibration, is difficult to avoid completely; as a result, the backgrounds of CT projection images acquired at different angles are inconsistent, which affects background subtraction of the CT projection images and the subsequent CT image reconstruction.
At present, rotation-center measurement and correction is commonly used to correct artifacts that arise in CT reconstruction from movement of the sample stage during experimental scanning. Common methods for measuring the rotation center include the projection-boundary method, the geometric-relation method, the pin-phantom method, the central-ray method, the symmetry-relation method and the iterative-convergence method. For reconstruction artifacts caused by detector deflection during CT scanning, common corrections either compute the geometric parameters required by the correction algorithm from the sinogram of the raw projection data, or reconstruct the CT image directly from the raw projections with a reconstruction formula that contains the detector deflection parameters. These existing methods, however, require the detector deflection parameters to be known in advance, and suffer from complex operation and a large computational load.
Disclosure of Invention
Aimed at the complex operation and large computational load of existing methods for correcting CT reconstruction artifacts caused by projection-image background inconsistency during CT scanning and reconstruction, the invention provides a method for correcting projection-image background inconsistency in CT imaging.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a method for correcting projection image background inconsistency in CT imaging comprises the following steps:
1) before the sample's CT projection images are collected, one bright-field image is collected, and the spot center $O(i_0, j_0)$ of the bright-field image is determined;
2) Calculating a bright field image compensation function f (r);
3) compensating the illumination of the bright field image;
4) compensating the illumination of the projected image;
5) calculating the actual offset of the projection image background and the bright-field image background;
6) image translation registration.
Determining the spot center $O(i_0, j_0)$ of the bright-field image in step 1) comprises the following specific operations: read out the gray value of every point of the bright-field image and express it as a gray-value matrix $\{p_{i,j}\}$ ($i = 0,1,2,\dots,i_{max}$; $j = 0,1,2,\dots,j_{max}$), where $i$ and $j$ denote the column and row coordinates of a point of $\{p_{i,j}\}$ and correspond one-to-one to the pixel coordinates of the bright-field image; $i_{max}$ is the maximum pixel coordinate in the column direction and $j_{max}$ the maximum pixel coordinate in the row direction; $p_{i,j}$ is the gray value at coordinate $(i, j)$.

Select an image region of the bright-field image in which the spot boundary is sharp, comprising the gray data of $q$ rows $j_1, j_2, \dots, j_q$, and difference each row's gray data with respect to the column coordinate $i$:

$$f'_j(i) = \frac{p_{i+\Delta i,\,j} - p_{i-\Delta i,\,j}}{2\,\Delta i} \qquad (1)$$

where $\Delta i$ is a natural number, taken as small as possible while keeping $f'_j(i)$ at the spot boundary clearly distinguishable from the other positions of the image. When the difference function attains its minimum $f'_j(i)_{min}$ and maximum $f'_j(i)_{max}$, denote the corresponding arguments $i^j_{min}$ and $i^j_{max}$, respectively. The spot-center coordinate on row $j$ of the bright-field image is calculated according to formula (2):

$$i^j_0 = \left[\frac{i^j_{min} + i^j_{max}}{2}\right] \qquad (2)$$

where $[\,\cdot\,]$ denotes taking the integer part. The arithmetic mean of the values $i^j_0$ calculated for the different values of $j$ is taken as the spot-center coordinate $i_0$ of the bright-field image in the row direction, namely:

$$i_0 = \left[\frac{1}{q}\sum_{j \in \{j_1,\dots,j_q\}} i^j_0\right] \qquad (3)$$

The spot-center coordinate $j_0$ in the column direction is obtained by the same method, and the spot center of the bright-field image is determined as $O(i_0, j_0)$.
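The spot-center procedure above can be sketched in Python. This is a minimal illustration on a synthetic bright-field image with a circular spot; the function names, the test image, and the choice of $\Delta i$ are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def row_spot_center(row, delta):
    """Central difference of one row's gray data (in the spirit of Eq. 1),
    then the midpoint of the argmin/argmax boundary positions (Eq. 2)."""
    i = np.arange(delta, len(row) - delta)
    diff = (row[i + delta] - row[i - delta]) / (2.0 * delta)
    i_min = i[np.argmin(diff)]   # one spot boundary
    i_max = i[np.argmax(diff)]   # the other spot boundary
    return (i_min + i_max) // 2

def spot_center_row_direction(img, rows, delta=2):
    """Average the per-row centers over the selected rows (Eq. 3)."""
    centers = [row_spot_center(img[j, :], delta) for j in rows]
    return int(np.mean(centers))

# Synthetic bright-field image: bright disk of radius 100,
# centered at column 300, row 256, on a dark background
jj, ii = np.mgrid[0:512, 0:512]
img = np.where((ii - 300) ** 2 + (jj - 256) ** 2 < 100 ** 2, 1000.0, 100.0)

i0 = spot_center_row_direction(img, rows=range(250, 263))
print(i0)  # close to 300, the spot's column center
```

Repeating the same computation along columns gives the second center coordinate.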
The bright-field image compensation function in step 2) is calculated as follows: with the spot center $O(i_0, j_0)$ as the center, draw circles of radius $(r + 0.5)$, where $r = 1, 2, \dots, [\min(i_0, j_0)]$, the unit being one pixel, and $\min(i_0, j_0)$ denotes the smaller of the two values $i_0$ and $j_0$. For each value of $r$, compute the arithmetic mean $\bar{p}_r$ of the gray values of all pixels on the circle of radius $(r + 0.5)$. A polynomial fit to the values $\bar{p}_r$ obtained for the different values of $r$ yields a function of $r$, denoted $f(r)$; this is the bright-field image compensation function.
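The ring-averaging and fitting of step 2 might look like the following outline. It is a sketch under the assumption that `numpy.polyfit` is an acceptable stand-in for the patent's unspecified polynomial fit; the ring binning (all pixels at distance $[r, r+1)$) is also an assumption.

```python
import numpy as np

def radial_profile(img, i0, j0, r_max):
    """Arithmetic mean of gray values on each ring of radius ~(r + 0.5)
    around the spot center (i0, j0); i = column, j = row coordinate."""
    jj, ii = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    dist = np.sqrt((ii - i0) ** 2 + (jj - j0) ** 2)
    radii = np.arange(1, r_max + 1)
    means = np.empty(len(radii))
    for idx, r in enumerate(radii):
        ring = (dist >= r) & (dist < r + 1)   # pixels near radius r + 0.5
        means[idx] = img[ring].mean()
    return radii, means

def fit_compensation_function(radii, means, degree=2):
    """Polynomial fit of the ring means; returns f(r) as a callable."""
    coeffs = np.polyfit(radii, means, degree)
    return np.poly1d(coeffs)

# Synthetic illumination falling off radially around center (256, 256)
jj, ii = np.mgrid[0:512, 0:512]
img = 1000.0 - 0.002 * ((ii - 256) ** 2 + (jj - 256) ** 2)

radii, means = radial_profile(img, 256, 256, r_max=250)
f = fit_compensation_function(radii, means)
```

On this quadratic test image, `f(r)` recovers the falloff almost exactly; real bright-field data would determine the appropriate polynomial degree.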
The bright-field image illumination compensation in step 3) comprises the following specific operations: every pixel of the bright-field image is illumination-compensated according to the compensation function $f(r)$. Denote by $R$ the distance from the pixel at coordinate $(i, j)$ in the gray-value matrix $\{p_{i,j}\}$ to the spot center $O(i_0, j_0)$:

$$R = \sqrt{(i - i_0)^2 + (j - j_0)^2} \qquad (4)$$

Substituting $r = R$ into the compensation function gives $f(R)$, from which the compensated gray value $a_{i,j}$ of the pixel at $(i, j)$ is calculated according to formula (5). Performing the calculation of formula (5) for every pixel of the bright-field image yields the compensated gray-value matrix $\{a_{i,j}\}$, from which the compensated bright-field image can be output by computer program.
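The source does not preserve the exact form of the per-pixel compensation formula (5), so the sketch below uses a common flat-field-style choice, stated purely as an assumption: rescale each pixel by the ratio of the fitted profile at the center to its value at the pixel's radius.

```python
import numpy as np

def illumination_compensate(img, i0, j0, f):
    """Rescale each pixel by f(0)/f(R), where R is its distance to the
    spot center (Eq. 4). NOTE: the exact form of the patent's Eq. (5) is
    not recoverable from the source; this multiplicative flat-field
    rescaling is one common choice, used here as an assumption."""
    jj, ii = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    R = np.sqrt((ii - i0) ** 2 + (jj - j0) ** 2)
    return img * (f(0) / f(R))

# Example: flatten a synthetic radial falloff around center (256, 256)
jj, ii = np.mgrid[0:512, 0:512]
img = 1000.0 - 0.002 * ((ii - 256) ** 2 + (jj - 256) ** 2)
r = np.arange(1, 251)
f = np.poly1d(np.polyfit(r, 1000.0 - 0.002 * (r + 0.5) ** 2, 2))
flat = illumination_compensate(img, 256, 256, f)
```

After compensation `flat` is close to a uniform 1000 everywhere, mirroring the uniform gray distribution the patent reports for its compensated bright-field image.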
The projection-image illumination compensation in step 4) comprises the following specific operations:

Read out the gray value of every point of each projection image and express it as a gray-value matrix $\{p^k_{m,n}\}$ ($m = 0,1,2,\dots,m_{max}$; $n = 0,1,2,\dots,n_{max}$; $k = 1,2,\dots,k_{max}$), where $m$ and $n$ denote the column and row coordinates of the matrix $\{p^k_{m,n}\}$ and correspond one-to-one to the pixel coordinates of the projection image; $m_{max}$ is the maximum pixel coordinate in the column direction, $n_{max}$ the maximum pixel coordinate in the row direction, $k$ the index of the projection image, and $k_{max}$ the total number of projection images; $p^k_{m,n}$ is the gray value at coordinate $(m, n)$ of the $k$-th projection image. Determine the spot center $O_k(m^k_0, n^k_0)$ of each projection image, where $m^k_0$ and $n^k_0$ denote the row and column coordinates of the spot center of the $k$-th projection image. The gray values of each projection image are compensated with the bright-field compensation function $f(r)$; $\{b^k_{m,n}\}$ denotes the gray-value matrix of the compensated $k$-th projection image, and $b^k_{m,n}$ the gray value of the illumination-compensated $k$-th projection image at column coordinate $m$ and row coordinate $n$. From $\{b^k_{m,n}\}$, the illumination-compensated projection image can be output by computer program.
The spot center $O_k(m^k_0, n^k_0)$ of each projection image is determined by the following specific operations:

Select an image region of the $k$-th projection image in which the spot boundary is sharp, comprising the gray data of $z$ rows $n_1, n_2, \dots, n_z$, and difference each row's gray data with respect to the column coordinate $m$:

$$f'^{\,k}_n(m) = \frac{p^k_{m+\Delta m,\,n} - p^k_{m-\Delta m,\,n}}{2\,\Delta m} \qquad (6)$$

where $\Delta m$ is a natural number, taken as small as possible while keeping $f'^{\,k}_n(m)$ at the spot boundary clearly distinguishable from the other positions of the image. When the difference function attains its minimum $f'^{\,k}_n(m)_{min}$ and maximum $f'^{\,k}_n(m)_{max}$, denote the corresponding arguments $m^{k,n}_{min}$ and $m^{k,n}_{max}$, respectively. The spot-center coordinate on row $n$ of the $k$-th projection image is calculated according to formula (7):

$$m^{k,n}_0 = \left[\frac{m^{k,n}_{min} + m^{k,n}_{max}}{2}\right] \qquad (7)$$

where $[\,\cdot\,]$ denotes taking the integer part. The arithmetic mean of the values $m^{k,n}_0$ calculated for the different values of $n$ is taken as the spot-center coordinate $m^k_0$ of the $k$-th projection image in the row direction, namely:

$$m^k_0 = \left[\frac{1}{z}\sum_{n \in \{n_1,\dots,n_z\}} m^{k,n}_0\right] \qquad (8)$$

The spot-center coordinate $n^k_0$ in the column direction is obtained by the same method, and the spot center of the $k$-th projection image is determined as $O_k(m^k_0, n^k_0)$.
The gray values of each projection image are compensated by the following specific operations: every pixel of the $k$-th projection image is illumination-compensated according to the bright-field compensation function $f(r)$. Denote by $D$ the distance from the pixel at $(m, n)$ in the gray-value matrix $\{p^k_{m,n}\}$ to the spot center $O_k(m^k_0, n^k_0)$:

$$D = \sqrt{(m - m^k_0)^2 + (n - n^k_0)^2} \qquad (9)$$

Substituting $r = D$ into the compensation function gives $f(D)$, from which the gray value $b^k_{m,n}$ of the pixel at $(m, n)$ after illumination compensation of the $k$-th projection image is calculated according to formula (10). Performing the calculation of formula (10) for every pixel of the $k$-th image yields the compensated gray-value matrix $\{b^k_{m,n}\}$, from which the compensated $k$-th projection image can be output by computer program.
The actual offset between the projection-image background and the bright-field background in step 5) is calculated as follows: take the illumination-compensated bright-field image as the template image and the illumination-compensated $k$-th projection image as the image to be registered. Select a region 1 and a region 2 of identical size at the same position in the template image and the image to be registered, and take the gray values of the pixels they contain as two discrete sequences $\xi(e, f)$ and $\chi_k(e, f)$, where $e$ and $f$ denote the column and row coordinates of the selected image region, $e = 0,1,2,\dots,e_{max}$, $f = 0,1,2,\dots,f_{max}$, and $e_{max}$ and $f_{max}$ denote the maximum coordinates of the selected region in the column and row directions, determined by the size of the selected region. The cross-correlation function $c_k(u, v)$ is calculated according to formula (11):

$$c_k(u,v) = \frac{\sum_{e,f}\,\bigl[\xi(e,f) - \bar{\xi}\bigr]\bigl[\chi_k(e+u,\, f+v) - \bar{\chi}_k\bigr]}{\sqrt{\sum_{e,f}\bigl[\xi(e,f) - \bar{\xi}\bigr]^2}\;\sqrt{\sum_{e,f}\bigl[\chi_k(e+u,\, f+v) - \bar{\chi}_k\bigr]^2}} \qquad (11)$$

In formula (11), $u$ and $v$ represent the possible background offsets of the bright-field image and the projection image in the column and row directions, respectively, with $u = 0,1,2,\dots,u_{max}$ and $v = 0,1,2,\dots,v_{max}$, where $u_{max}$ and $v_{max}$ denote the maximum background offsets of the projection image and the bright-field image in the column and row directions; their values can be determined from the largest offset that can occur among the X-ray source, the sample stage and the detector in an X-ray CT experiment. $\bar{\xi}$ denotes the arithmetic mean of the sequence $\xi(e, f)$, and $\bar{\chi}_k$ the arithmetic mean of the sequence $\chi_k(e, f)$. The background offset at which $c_k(u, v)$ attains its maximum, denoted $(u_k^*, v_k^*)$, is the actual background offset between the bright-field image and the $k$-th projection image. Registering the backgrounds of all projection images by the same method yields the actual offset between the background of each projection image and the bright-field background.
The image translation registration in step 6) comprises the following specific operations: according to the actual offset between each illumination-compensated projection-image background and the bright-field background, all projection images are translation-transformed, thereby registering the CT projection-image backgrounds to the bright-field background. From the background-registered gray-value matrix of the $k$-th projection image, the background-registered projection image can be output by computer program.

The translation transform of a projection image is given by formula (12):

$$\tilde{b}^k_{m,n} = b^k_{m+u_k^*,\; n+v_k^*} \qquad (12)$$

where $k$ denotes the $k$-th illumination-compensated projection image, $(u_k^*, v_k^*)$ its actual background offset, and $\tilde{b}^k_{m,n}$ the gray value of the translated image at column coordinate $m$ and row coordinate $n$.
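The translation itself is a plain index shift. A sketch follows; zero-filling the uncovered border is an implementation choice not specified in the source.

```python
import numpy as np

def translate(img, u, v):
    """Shift image content so that output pixel (row n, column m) takes
    the input value at (row n + v, column m + u); assumes
    0 <= u < width and 0 <= v < height. Uncovered border pixels are
    zero-filled (an implementation choice)."""
    out = np.zeros_like(img)
    h, w = img.shape
    out[:h - v, :w - u] = img[v:, u:]
    return out

img = np.arange(16.0).reshape(4, 4)
shifted = translate(img, u=1, v=2)
print(shifted[0, 0])  # value originally at row 2, column 1, i.e. 9.0
```

Applying such a shift with each projection's measured offset aligns every projection background with the bright-field background before background subtraction and reconstruction.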
In step 1), the total number of rows q selected may vary with the image; the more rows are selected, the more accurately the spot-center position of the bright-field image in the row direction is calculated. The same holds when determining the column-direction spot center.
In step 1), one bright-field image is collected before the sample's CT projection images. An image region of the bright-field image in which the spot boundary is sharp is selected, comprising the gray data of $q$ rows $j_1, j_2, \dots, j_q$, and each row's gray data is differenced with respect to the column coordinate $i$ to give $f'_j(i)$. The bright-field image acquisition method and the determination of image regions with sharp spot boundaries in the bright-field image are conventional methods in the art.
In step 2), a polynomial fit to the values $\bar{p}_r$ obtained for the different values of $r$ yields the function of $r$ denoted $f(r)$. Polynomial fitting is a routine method in the art.
In step 4), an image region of the $k$-th projection image in which the spot boundary is sharp is selected, comprising the gray data of $z$ rows $n_1, n_2, \dots, n_z$, and each row's gray data is differenced with respect to the column coordinate $m$. The total number of rows $z$ selected may vary with the image; the more rows are selected, the more accurately the spot-center position of the projection image in the row direction is calculated. The same holds when determining the column-direction spot center. Determining image regions with sharp spot boundaries in the projection image is a routine method in the art.
In step 5), $u_{max}$ and $v_{max}$ denote the maximum background offsets of the projection image and the bright-field image in the column and row directions, respectively; their values can be determined from the largest offset that can occur among the X-ray source, the sample stage and the detector in an X-ray CT experiment. Determining the values of $u_{max}$ and $v_{max}$ is conventional in the art.
Reading out image gray values into a gray-value matrix and outputting a gray-value matrix as an image are realized by computer programming and are conventional means in the art, as is using computer programming to calculate the gray values at all or some coordinates of a gray-value matrix according to the various formulas.
Compared with the background technology, the invention adopts the technical scheme and has the following advantages:
1. the detector deflection parameters do not need to be known in advance;
2. the operation is simple, and the calculated amount is small.
Drawings
FIG. 1 is a bright field image;
FIG. 2 is the gray scale values at the horizontal line positions in FIG. 1;
FIG. 3 is a bright field image after illumination compensation;
FIG. 4 is a distribution of gray values at the position of the horizontal line in FIG. 3;
FIG. 5 is a first perspective view;
FIG. 6 is a gray scale value distribution at the position of the horizontal line in FIG. 5;
FIG. 7 is a first projection of illumination compensated;
FIG. 8 is a distribution of gray values at the position of the horizontal line in FIG. 7;
FIG. 9 is a first projection of the background after calibration;
FIG. 10 is a CT slice reconstructed using a CT projection view without background registration;
fig. 11 shows a CT slice reconstructed from the CT projection images after background registration.
Detailed Description
The method for correcting the CT projection-image background can correct the CT projection-image backgrounds of different samples acquired on different CT devices. The projection data used in this example were acquired on a laboratory CT machine manufactured by Trione Precision Instruments Ltd. To verify the effectiveness of the cross-correlation registration method, detector jitter was artificially introduced during data acquisition. The sample was a cylindrical synthetic sandstone of 2.5 mm diameter bonded to a cylindrical pure-aluminum sample of 2.85 mm diameter. The bright-field image, the dark-field image and the sample CT projection images are all 2048 × 2048 pixels, and 901 projection images were collected in the experiment. The example serves to further illustrate the invention and does not limit its scope of protection.
A method for correcting projection image background inconsistency in CT imaging comprises the following steps:
1) Before the sample's CT projection images are collected, one bright-field image is collected, and its spot center $O(i_0, j_0)$ is determined: read out the gray value of every point of the bright-field image and express it as a gray-value matrix $\{p_{i,j}\}$ ($i = 0,1,2,\dots,i_{max}$; $j = 0,1,2,\dots,j_{max}$), where $i$ and $j$ denote the column and row coordinates of a point of $\{p_{i,j}\}$ and correspond one-to-one to the pixel coordinates of the bright-field image; $i_{max}$ is the maximum pixel coordinate in the column direction and $j_{max}$ the maximum pixel coordinate in the row direction; $p_{i,j}$ is the gray value at coordinate $(i, j)$.

Select an image region of the bright-field image in which the spot boundary is sharp, comprising the gray data of $q$ rows $j_1, j_2, \dots, j_q$, and difference each row's gray data with respect to the column coordinate $i$:

$$f'_j(i) = \frac{p_{i+\Delta i,\,j} - p_{i-\Delta i,\,j}}{2\,\Delta i} \qquad (1)$$

where $\Delta i$ is a natural number, taken as small as possible while keeping $f'_j(i)$ at the spot boundary clearly distinguishable from the other positions of the image. When the difference function attains its minimum $f'_j(i)_{min}$ and maximum $f'_j(i)_{max}$, denote the corresponding arguments $i^j_{min}$ and $i^j_{max}$, respectively. The spot-center coordinate on row $j$ of the bright-field image is calculated according to formula (2):

$$i^j_0 = \left[\frac{i^j_{min} + i^j_{max}}{2}\right] \qquad (2)$$

where $[\,\cdot\,]$ denotes taking the integer part. The arithmetic mean of the values $i^j_0$ calculated for the different values of $j$ is taken as the spot-center coordinate $i_0$ of the bright-field image in the row direction, namely:

$$i_0 = \left[\frac{1}{q}\sum_{j \in \{j_1,\dots,j_q\}} i^j_0\right] \qquad (3)$$

The spot-center coordinate $j_0$ in the column direction is obtained by the same method, and the spot center of the bright-field image is determined as $O(i_0, j_0)$.
In this step, the total number of rows q selected may vary with the image; the more rows are selected, the more accurately the spot-center position of the bright-field image in the row direction is calculated. The same holds when determining the column-direction spot center.
Fig. 1 shows a bright-field image collected in the CT experiment; the image is 2048 × 2048 pixels, and the gray values of all its points are read out as the gray-value matrix $\{p_{i,j}\}$ ($i = 0,1,2,\dots,2047$; $j = 0,1,2,\dots,2047$). Fig. 2 shows the gray values at the position of the horizontal line in Fig. 1; it can be seen that along the horizontal line the gray values of the bright-field image decrease markedly from left to right, i.e. the gray distribution is not uniform. The gray data of the 51 rows from row 100 to row 150 (between points a and b in the figure), where the spot boundary is sharp, are selected, and each row's gray data is differenced with respect to the column coordinate $i$ to give $f'_j(i)$.

Taking $j = 100$ (i.e., row 100): with $\Delta i = 1$ the difference fluctuates unevenly and the extrema of $f'_{100}(i)$ are hard to locate. With $\Delta i = 2$, apart from the abrupt changes at the spot boundary, the fluctuation of $f'_{100}(i)$ is uniform in the other regions, so $\Delta i = 2$ is preferred. Calculated according to formula (1), the difference function attains its minimum $f'_{100}(i)_{min} = -200.5$ at the argument corresponding to the left boundary of the spot on row 100, and its maximum $f'_{100}(i)_{max} = 236.75$ at the argument corresponding to the right boundary of the spot on row 100. The spot-center coordinate on row 100 of the bright-field image is then calculated according to formula (2). The center coordinates of all rows from 100 to 150 are calculated by the same method, and the spot-center coordinate of the bright-field image in the row direction is found from formula (3) to be 1018. In the same way, the spot-center coordinate in the column direction is calculated to be 1017, and the spot center of the bright-field image is thus determined as O(1018, 1007).
2) Calculating the bright-field image compensation function f(r): with the spot center $O(i_0, j_0)$ as the center, draw circles of radius $(r + 0.5)$, where $r = 1, 2, \dots, [\min(i_0, j_0)]$, the unit being one pixel, and $\min(i_0, j_0)$ denotes the smaller of the two values $i_0$ and $j_0$. For each value of $r$, compute the arithmetic mean $\bar{p}_r$ of the gray values of all pixels on the circle of radius $(r + 0.5)$. A polynomial fit to the values $\bar{p}_r$ obtained for the different values of $r$ yields a function of $r$, denoted $f(r)$; this is the bright-field image compensation function.
With the pixel at coordinates (1018, 1007) in the bright-field image of Fig. 1 as the center, circles of radius $r + 0.5 = 1.5, 2.5, \dots, 1007.5$ are drawn on the image, and the mean gray value $\bar{p}_r$ of the pixels on the circle corresponding to each radius is calculated. Polynomial fitting then gives the function of $r$: $f(r) = 4.90051r^3 - 0.03556r^2 + 0.0000118r$.
3) Bright-field image illumination compensation. Every pixel of the bright-field image is illumination-compensated according to the compensation function $f(r)$: denote by $R$ the distance from the pixel at coordinate $(i, j)$ in the gray-value matrix $\{p_{i,j}\}$ to the spot center $O(i_0, j_0)$:

$$R = \sqrt{(i - i_0)^2 + (j - j_0)^2} \qquad (4)$$

Substituting $r = R$ into the compensation function gives $f(R)$, from which the compensated gray value $a_{i,j}$ of the pixel at $(i, j)$ is calculated according to formula (5). Performing the calculation of formula (5) for every pixel of the bright-field image yields the compensated gray-value matrix $\{a_{i,j}\}$, from which the compensated bright-field image can be output by computer program.
The distances R from all points of Fig. 1 other than the center O to the center O are calculated according to equation (4), and each R is substituted into f(r) = 4.90051r^3 - 0.03556r^2 + 0.0000118r; substituting f(R) into equation (5) then gives the illumination-compensated gray value a_{i,j} at point (i, j). Performing this operation on all points of Fig. 1 and outputting the result yields the compensated bright-field image shown in Fig. 3. Fig. 4 shows the distribution of gray values along the horizontal line marked in Fig. 3. Comparing Figs. 1 and 2 with Figs. 3 and 4 shows that the gray distribution of the bright-field image becomes significantly more uniform after illumination compensation.
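Applying the fitted radial profile over the whole image might look like the sketch below. The patent's equation (5) is not visible in this text, so the multiplicative flat-field form used here is an assumption, as are all names:

```python
import numpy as np

def compensate_image(img, center, f, r_ref=1.0):
    # Distance of every pixel from the spot center, as in equation (4).
    i0, j0 = center
    jj, ii = np.indices(img.shape)
    R = np.sqrt((ii - i0) ** 2 + (jj - j0) ** 2)
    R = np.maximum(R, r_ref)   # keep f away from r = 0
    # Multiplicative flat-field normalization; an assumed stand-in for
    # the patent's equation (5).
    return img * (f(r_ref) / f(R))

# With a synthetic radial falloff matching f(r) = 1000 - 0.5 r,
# compensation flattens the image to a constant gray level.
f = np.poly1d([-0.5, 1000.0])
jj, ii = np.indices((101, 101))
img = 1000.0 - 0.5 * np.sqrt((ii - 50) ** 2 + (jj - 50) ** 2)
flat = compensate_image(img, (50, 50), f)
```

After compensation, every off-center pixel of the synthetic image sits at the same gray level f(1) = 999.5, which is the qualitative effect Figs. 3 and 4 illustrate.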
4) Illumination compensation of the projection images: the gray value of each point of each projection image is read out and expressed as a gray-value matrix {p^k_{m,n}} (m = 0, 1, 2, ..., m_max; n = 0, 1, 2, ..., n_max; k = 1, 2, ..., k_max), where m and n are the column and row coordinates of the matrix, in one-to-one correspondence with the pixel coordinates of the projection image; m_max is the maximum pixel coordinate in the column direction and n_max the maximum pixel coordinate in the row direction; k is the serial number of the projection image and k_max the total number of projection images; p^k_{m,n} is the gray value at coordinates (m, n) of the k-th projection image. The spot center coordinates O_k(m^k_0, n^k_0) of each projection image are determined, m^k_0 and n^k_0 being the column and row coordinates of the spot center of the k-th projection image. The gray values of each projection image are then compensated with the bright-field compensation function f(r); {a^k_{m,n}} denotes the gray-value matrix of the compensated k-th projection image, a^k_{m,n} being the illumination-compensated gray value of the k-th projection image at column coordinate m and row coordinate n. From {a^k_{m,n}}, the illumination-compensated projection image can be output by computer programming.
The spot center coordinates O_k(m^k_0, n^k_0) of each projection image are determined as follows: from the k-th projection image, select an image region with a clear spot boundary, comprising the gray data of z rows n_1, n_2, ..., n_z, and difference the gray data of each row with respect to the column coordinate m to obtain f'^k_n(m) according to equation (6).
Where Δ m is a natural number whose value is to be at the spot boundaryThe minimum value can be taken on the premise of being obviously different from other positions of the image; when the difference function reaches the minimum valueAnd maximum valueWhen, the corresponding independent variables are respectively recorded asAndcalculating according to the formula (7) to obtain the center coordinate of the light spot on the nth line of the projection image
where INT[·] denotes taking the integer part. The arithmetic mean of the values m^k_n obtained for the different rows n is taken as the spot center coordinate m^k_0 of the k-th projection image in the row direction, namely:

m^k_0 = INT[(m^k_{n_1} + m^k_{n_2} + ... + m^k_{n_z}) / z]   (8)
The spot center coordinate n^k_0 of the k-th projection image in the column direction is obtained by the same method, and the spot center O_k of the k-th projection image is thereby determined as O_k(m^k_0, n^k_0).
The specific steps for compensating the illumination of each projection image are as follows: all pixels of the k-th projection image are compensated according to the bright-field compensation function f(r): in the gray-value matrix {p^k_{m,n}} of the k-th projection image, the distance between the pixel at (m, n) and the spot center O_k(m^k_0, n^k_0) is denoted D:

D = sqrt((m - m^k_0)^2 + (n - n^k_0)^2)   (9)
Substituting r = D into the bright-field compensation function f(r), the illumination-compensated gray value a^k_{m,n} of the pixel at coordinates (m, n) of the k-th projection image is calculated according to equation (10).
Performing the calculation of equation (10) for all pixels of the k-th projection image yields the compensated gray-value matrix {a^k_{m,n}} of the k-th projection image; from {a^k_{m,n}}, the illumination-compensated k-th projection image can be output by computer programming.
Taking the first projection image of the CT experiment as an example (Fig. 5), Fig. 6 shows the distribution of gray values along the horizontal line marked in Fig. 5; it can be seen from Fig. 6 that the gray values along this line decrease markedly from left to right, i.e., the distribution is not uniform. The gray values of all points of Fig. 5 are read out as a gray-value matrix {p^1_{m,n}} (m = 0, 1, 2, ..., 2047; n = 0, 1, 2, ..., 2047). From the image region of the 1st projection image with a clear spot boundary, the gray data of the 51 rows 100, 101, ..., 150 (between points c and d in the figure) are selected, and the gray data of each row are differenced with respect to the column coordinate m.
Taking the data of row 100 as an example: when Δm = 1, the difference values of row 100 fluctuate unevenly and the extrema are hard to locate. When Δm = 2, the fluctuation is uniform everywhere except for the abrupt changes at the spot boundary, so Δm = 2 is the preferred choice. The independent variable at which the difference function reaches its minimum is the left boundary of the spot; the independent variable at which it reaches its maximum is the right boundary. The spot center coordinate on row 100 of the projection image is calculated according to equation (7). The center coordinates of all rows from 100 to 150 of the first projection image are calculated in the same way, and equation (8) gives 992 as the spot center coordinate of the first projection image in the row direction. The spot center coordinate in the column direction is calculated in the same way to be 1006, so the spot center of the first projection image is determined as (992, 1006).
The illumination compensation of the projection images is illustrated with the first projection image: the distances D from all points of Fig. 5 other than the center O to the center are calculated according to equation (9) and substituted into f(r) = 4.90051r^3 - 0.03556r^2 + 0.0000118r; after f(D) is computed, substitution into equation (10) gives the illumination-compensated gray value a^1_{m,n} at point (m, n). Performing this operation on all points of Fig. 5 and outputting the result yields the compensated first projection image shown in Fig. 7. Fig. 8 shows the distribution of gray values along the horizontal line marked in Fig. 7. Comparing Figs. 5 and 6 with Figs. 7 and 8 shows that the gray distribution of the first projection image becomes more uniform after illumination compensation.
5) Calculating the actual offset between the projection image background and the bright-field image background: the illumination-compensated bright-field image is taken as the template image and the illumination-compensated k-th projection image as the image to be registered. A region 1 and a region 2 of identical size are selected at the same position in the template image and the image to be registered, and the gray values of the pixels contained in the two regions are taken as two groups of discrete sequences ξ(e, f) and χ_k(e, f), where e and f are the column and row coordinates within the selected image region, e = 0, 1, 2, ..., e_max, f = 0, 1, 2, ..., f_max, and e_max and f_max are the maximum coordinate values of the selected region in the column and row directions, determined by the size of the region. The value of the cross-correlation function c_k(u, v) is calculated according to equation (11):

c_k(u, v) = Σ_e Σ_f [ξ(e, f) - ξ̄][χ_k(e + u, f + v) - χ̄_k] / sqrt(Σ_e Σ_f [ξ(e, f) - ξ̄]^2 · Σ_e Σ_f [χ_k(e + u, f + v) - χ̄_k]^2)   (11)
In equation (11), u and v represent the possible background offsets of the bright-field image and the projection image in the column and row directions, respectively, u = 0, 1, 2, ..., u_max, v = 0, 1, 2, ..., v_max, where u_max and v_max are the maximum background offsets of the projection image relative to the bright-field image in the column and row directions; their values can be determined from the maximum offset that can occur among the X-ray source, the sample stage and the detector in the X-ray CT experiment. ξ̄ is the arithmetic mean of the sequence ξ(e, f) and χ̄_k the arithmetic mean of the sequence χ_k(e, f). The background offset (u^k_0, v^k_0) at which c_k(u, v) reaches its maximum is the actual background offset between the bright-field image and the k-th projection image. The backgrounds of all projection images are registered by the same method, giving the actual offset between the background of each projection image and the background of the bright-field image.
Taking the illumination-compensated bright-field image (Fig. 3) as the template and the illumination-compensated projection image (Fig. 7) as the image to be registered, regions of identical size at the same position (region 1 in Fig. 3 and region 2 in Fig. 7) are selected in the two images, and the gray values within them are read out to form the two discrete sequences ξ(e, f) and χ_k(e, f), where e and f are the column and row coordinates of the selected image region, e = 0, 1, 2, ..., 34, f = 0, 1, 2, ..., 691, and k = 1. The value of the cross-correlation function c_1(u, v) is calculated according to equation (11).
In equation (11), u and v represent the possible background offsets of the bright-field image and the projection image in the column and row directions, u = 0, 1, 2, ..., u_max, v = 0, 1, 2, ..., v_max. From the maximum offset that can occur among the X-ray source, the sample stage and the detector in the X-ray CT experiment of this embodiment, the maximum background offsets of the projection image relative to the bright-field image in the column and row directions are determined as u_max = 100 and v_max = 100. ξ̄ is the arithmetic mean of the sequence ξ(e, f) and χ̄_1 the arithmetic mean of the sequence χ_1(e, f).
The calculation shows that c_1(u, v) reaches its maximum value of 0.988 at the offset (u^1_0, v^1_0), which is the actual background offset between the bright-field image and the 1st projection image.
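The exhaustive search that equation (11) formalizes can be sketched as follows (numpy assumed; function names and the synthetic data are ours):

```python
import numpy as np

def background_offset(template, image, u_max, v_max):
    """Search the shift (u, v) maximizing the normalized cross-correlation
    between the template region and a same-sized patch of the image."""
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    h, w = template.shape
    best_c, best_uv = -np.inf, (0, 0)
    for u in range(u_max + 1):
        for v in range(v_max + 1):
            patch = image[u:u + h, v:v + w]
            p = patch - patch.mean()
            c = (t * p).sum() / (t_norm * np.sqrt((p ** 2).sum()))
            if c > best_c:
                best_c, best_uv = c, (u, v)
    return best_uv, best_c

# Synthetic check: the template is a patch cut out at a known position,
# so the search should recover the offset (13, 15) with correlation ~1.
rng = np.random.default_rng(0)
image = rng.random((80, 80))
template = image[13:43, 15:45]
(u0, v0), score = background_offset(template, image, 20, 20)
```

Real implementations usually delegate this search to an FFT-based or library routine (e.g. OpenCV's normalized template matching), but the brute-force loop mirrors the equation directly.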
6) Image translation registration: according to the actual offset between the background of each illumination-compensated projection image and the background of the bright-field image, all projection images can be translated, realizing registration of the CT projection image background with the bright-field image background. From the background-registered gray-value matrix of the k-th projection image, the background-registered projection image can be output by computer programming. The translation transformation of the projection images is given by equation (12),
where k denotes the k-th illumination-compensated projection image and ã^k_{m,n} denotes the gray value of the translated image at column coordinate m and row coordinate n.
Taking the illumination-compensated first projection image as an example, it is translated according to equation (12) using the actual background offset (u^1_0, v^1_0) determined between the bright-field image and the 1st projection image; substituting this offset, the translation transformation (12) takes a concrete numerical form:
where ã^1_{m,n} denotes the gray value of the translated image at column coordinate m and row coordinate n. After the projection image is translated, the resulting gray-value matrix {ã^1_{m,n}} is the background-calibrated gray-value matrix of the 1st projection image, and the image is output by computer programming, as shown in Fig. 9. Figs. 10 and 11 are CT slices reconstructed at the same position of the sample from the CT projection images without and with background registration, respectively. Comparing Fig. 10 with Fig. 11 shows that the quality of the reconstructed CT slices is significantly improved after background registration of the projection images.
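A translation of this kind can be sketched with plain slicing (numpy assumed; the sign convention of the shift and the constant fill for exposed pixels are our assumptions, since the concrete form of equation (12) is not reproduced in this text):

```python
import numpy as np

def shift_image(img, du, dv, fill=0.0):
    """Translate an image by du rows and dv columns, filling the pixels
    exposed by the shift with a constant value."""
    h, w = img.shape
    out = np.full((h, w), fill, dtype=float)
    rs, re = max(du, 0), min(h, h + du)     # destination row span
    cs, ce = max(dv, 0), min(w, w + dv)     # destination column span
    out[rs:re, cs:ce] = img[rs - du:re - du, cs - dv:ce - dv]
    return out

img = np.arange(1.0, 10.0).reshape(3, 3)    # gray values 1..9
shifted = shift_image(img, 1, 0)            # move one row down
```

For sub-pixel offsets an interpolating shift (e.g. scipy.ndimage.shift) would replace the integer slicing, but the patent's registration works in whole-pixel offsets.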
Claims (4)
1. A method for correcting projection image background inconsistency in CT imaging, characterized in that the method comprises the following steps:
1) before the CT projection images of the sample are acquired, a bright-field image is acquired, and the spot center O(i0, j0) of the bright-field image is determined;
the specific steps of determining the spot center O(i0, j0) of the bright-field image are: reading out the gray value of each point of the bright-field image and expressing it as a gray-value matrix {p_{i,j}} (i = 0, 1, 2, ..., i_max; j = 0, 1, 2, ..., j_max), where i and j are the column and row coordinates of the matrix, in one-to-one correspondence with the pixel coordinates of the bright-field image; i_max is the maximum pixel coordinate in the column direction and j_max the maximum pixel coordinate in the row direction; p_{i,j} is the gray value at coordinates (i, j);
selecting, from an image region of the bright-field image with a clear spot boundary, the gray data of q rows j_1, j_2, ..., j_q, and differencing the gray data of each row with respect to the column coordinate i to obtain f'_j(i) according to equation (1), where Δi is a natural number chosen as small as possible while keeping the values of f'_j(i) at the spot boundary clearly distinct from those elsewhere in the image; the independent variables at which the difference function reaches its minimum f'_j(i)_min and maximum f'_j(i)_max are denoted i_j^min and i_j^max, respectively, and the spot center coordinate i_j on row j of the bright-field image is calculated according to equation (2):

i_j = INT[(i_j^min + i_j^max) / 2]   (2)
where INT[·] denotes taking the integer part; the arithmetic mean of the values i_j obtained for the different rows j is taken as the spot center coordinate i0 of the bright-field image in the row direction, namely:

i0 = INT[(i_{j_1} + i_{j_2} + ... + i_{j_q}) / q]   (3)
obtaining the central coordinate j of the light spot in the column direction of the bright-field image by the same method0And determining the coordinates of the spot center O point of the bright-field image as O (i)0,j0) (ii) a 2) Calculating a bright field image compensation function f (r);
the method for calculating the bright-field image compensation function is: with the spot center O(i0, j0) as the center, draw circles of radius (r + 0.5), where r = 1, 2, ..., min(i0, j0) in units of one pixel and min(i0, j0) denotes the smaller of i0 and j0; for each value of r, compute the arithmetic mean p̄_r of the gray values of all pixels on the circumference of radius (r + 0.5); polynomial fitting of the means p̄_r obtained for the different values of r yields p̄_r as a function of r, denoted f(r), which is the bright-field image compensation function;
3) compensating the illumination of the bright field image;
the specific steps of the bright-field image illumination compensation are: performing illumination compensation on all pixels of the bright-field image according to the compensation function f(r): in the bright-field gray-value matrix {p_{i,j}}, the distance between the pixel at coordinates (i, j) and the spot center O(i0, j0) is denoted R:

R = sqrt((i - i0)^2 + (j - j0)^2)   (4)
substituting r = R into the bright-field compensation function f(r), the compensated gray value a_{i,j} of the pixel at coordinates (i, j) is calculated according to equation (5); performing the calculation of equation (5) for all pixels of the bright-field image yields the compensated bright-field gray-value matrix {a_{i,j}}, from which the compensated bright-field image is output by computer programming;
4) compensating the illumination of the projected image;
the specific steps of the illumination compensation of the projection images are: reading out the gray value of each point of each projection image and expressing it as a gray-value matrix {p^k_{m,n}} (m = 0, 1, 2, ..., m_max; n = 0, 1, 2, ..., n_max; k = 1, 2, ..., k_max), where m and n are the column and row coordinates of the matrix, in one-to-one correspondence with the pixel coordinates of the projection image; m_max is the maximum pixel coordinate in the column direction and n_max the maximum pixel coordinate in the row direction; k is the serial number of the projection image and k_max the total number of projection images; p^k_{m,n} is the gray value at coordinates (m, n) of the k-th projection image; determining the spot center coordinates O_k(m^k_0, n^k_0) of each projection image, m^k_0 and n^k_0 being the column and row coordinates of the spot center of the k-th projection image; compensating the gray values of each projection image with the bright-field compensation function f(r), {a^k_{m,n}} denoting the gray-value matrix of the compensated k-th projection image and a^k_{m,n} the illumination-compensated gray value of the k-th projection image at column coordinate m and row coordinate n; according to {a^k_{m,n}}, outputting the illumination-compensated projection image by computer programming;
5) calculating the actual offset of the projection image background and the bright-field image background;
the method for calculating the actual offset between the projection image background and the bright-field image background is: taking the illumination-compensated bright-field image as the template image and the illumination-compensated k-th projection image as the image to be registered, selecting a region 1 and a region 2 of identical size at the same position in the template image and the image to be registered, and taking the gray values of the pixels contained in the two regions as two groups of discrete sequences ξ(e, f) and χ_k(e, f), where e and f are the column and row coordinates of the selected image region, e = 0, 1, 2, ..., e_max, f = 0, 1, 2, ..., f_max, and e_max and f_max are the maximum coordinate values of the selected region in the column and row directions, determined by the size of the region; calculating the value of the cross-correlation function c_k(u, v) according to equation (11):

c_k(u, v) = Σ_e Σ_f [ξ(e, f) - ξ̄][χ_k(e + u, f + v) - χ̄_k] / sqrt(Σ_e Σ_f [ξ(e, f) - ξ̄]^2 · Σ_e Σ_f [χ_k(e + u, f + v) - χ̄_k]^2)   (11)
in equation (11), u and v represent the possible background offsets of the bright-field image and the projection image in the column and row directions, respectively, u = 0, 1, 2, ..., u_max, v = 0, 1, 2, ..., v_max, where u_max and v_max are the maximum background offsets of the projection image relative to the bright-field image in the column and row directions, determined from the maximum offset that can occur among the X-ray source, the sample stage and the detector in the X-ray CT experiment; ξ̄ is the arithmetic mean of the sequence ξ(e, f) and χ̄_k the arithmetic mean of the sequence χ_k(e, f); the background offset (u^k_0, v^k_0) at which c_k(u, v) reaches its maximum is the actual background offset between the bright-field image and the k-th projection image; the backgrounds of all projection images are registered by the same method, thereby obtaining the actual offset between the background of each projection image and the background of the bright-field image;
6) image translation registration;
the specific steps of the image translation registration are: translating all projection images according to the actual offset between the background of each illumination-compensated projection image and the background of the bright-field image, realizing registration of the CT projection image background with the bright-field image background; from the background-registered gray-value matrix of the k-th projection image, outputting the background-registered projection image by computer programming.
2. The method for correcting projection image background inconsistency in CT imaging according to claim 1, characterized in that the specific steps of determining the spot center coordinates O_k(m^k_0, n^k_0) of each projection image are:
selecting, from an image region of the k-th projection image with a clear spot boundary, the gray data of z rows n_1, n_2, ..., n_z, and differencing the gray data of each row with respect to the column coordinate m to obtain f'^k_n(m) according to equation (6);
Where Δ m is a natural number whose value is to be at the spot boundaryThe minimum value can be taken on the premise of being obviously different from other positions of the image; when the difference function reaches the minimum valueAnd maximum valueWhen, the corresponding independent variables are respectively recorded asAndcalculating the center coordinate of the light spot on the nth line of the kth projection image according to the formula (7)
where INT[·] denotes taking the integer part; the arithmetic mean of the values m^k_n obtained for the different rows n is taken as the spot center coordinate m^k_0 of the k-th projection image in the row direction, namely:

m^k_0 = INT[(m^k_{n_1} + m^k_{n_2} + ... + m^k_{n_z}) / z]   (8)
the spot center coordinate n^k_0 of the k-th projection image in the column direction is obtained by the same method, and the spot center O_k of the k-th projection image is thereby determined as O_k(m^k_0, n^k_0).
3. The method for correcting projection image background inconsistency in CT imaging according to claim 2, characterized in that the specific steps of compensating the gray values of each projection image are:
performing illumination compensation on all pixels of the k-th projection image according to the bright-field compensation function f(r): in the gray-value matrix {p^k_{m,n}} of the k-th projection image, the distance between the pixel at (m, n) and the spot center O_k(m^k_0, n^k_0) is denoted D:

D = sqrt((m - m^k_0)^2 + (n - n^k_0)^2)   (9)
substituting r = D into the bright-field compensation function f(r), the illumination-compensated gray value a^k_{m,n} of the pixel at coordinates (m, n) of the k-th projection image is calculated according to equation (10);
performing the calculation of equation (10) for all pixels of the k-th projection image yields the compensated gray-value matrix {a^k_{m,n}} of the k-th projection image, from which the compensated k-th projection image is output by computer programming.
4. The method for correcting projection image background inconsistency in CT imaging according to claim 3, characterized in that the translation transformation of the projection images is performed according to equation (12), where k denotes the k-th illumination-compensated projection image and ã^k_{m,n} denotes the gray value of the translated image at column coordinate m and row coordinate n.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611046838.XA CN106408616B (en) | 2016-11-23 | 2016-11-23 | The inconsistent bearing calibration of perspective view background in a kind of CT imaging |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106408616A CN106408616A (en) | 2017-02-15 |
CN106408616B | 2019-02-26
Family
ID=58082181
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611046838.XA Expired - Fee Related CN106408616B (en) | 2016-11-23 | 2016-11-23 | The inconsistent bearing calibration of perspective view background in a kind of CT imaging |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106408616B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108956658B (en) * | 2017-05-27 | 2021-10-22 | 上海西门子医疗器械有限公司 | X-ray system and method for calibrating deflection current of X-ray tube |
CN108364325B (en) * | 2018-01-30 | 2021-03-30 | 山西大学 | Regular sample X-ray CT projection image position translation deviation detection and correction method |
CN109493293A (en) * | 2018-10-30 | 2019-03-19 | 京东方科技集团股份有限公司 | A kind of image processing method and device, display equipment |
CN109636874B (en) * | 2018-12-17 | 2023-05-26 | 浙江科澜信息技术有限公司 | Perspective projection method, system and related device for three-dimensional model |
CN109870730B (en) * | 2018-12-28 | 2020-11-20 | 中国科学院重庆绿色智能技术研究院 | Method and system for regular inspection of X-ray machine image resolution test body |
CN110310313B (en) * | 2019-07-09 | 2021-10-01 | 中国电子科技集团公司第十三研究所 | Image registration method, image registration device and terminal |
CN114757853B (en) * | 2022-06-13 | 2022-09-09 | 武汉精立电子技术有限公司 | Method and system for acquiring flat field correction function and flat field correction method and system |
CN117953022B (en) * | 2024-03-27 | 2024-07-05 | 杭州邦杰星医疗科技有限公司 | Medical image registration processing system and method based on CUDA convolution algorithm |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102542569A (en) * | 2011-12-21 | 2012-07-04 | 武汉市兑尔科技有限公司 | Rapid image registration and calibration method and system for implementing same |
CN103116879A (en) * | 2013-03-15 | 2013-05-22 | 重庆大学 | Neighborhood windowing based non-local mean value CT (Computed Tomography) imaging de-noising method |
CN103390272A (en) * | 2013-07-16 | 2013-11-13 | 西安应用光学研究所 | Method for achieving registration and fusion of multi-spectral pseudo color images |
DE102012216272A1 (en) * | 2012-09-13 | 2014-03-13 | Siemens Aktiengesellschaft | Method for adjusting focus of X-ray source of computer tomography system that is utilized for imaging patient, involves generating adjustment measurement data, and performing calibration of X-ray detector based on measurement data |
CN105869108A (en) * | 2016-03-23 | 2016-08-17 | 北京环境特性研究所 | Method for registering images in mobile platform moving target detection |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB201019694D0 (en) * | 2010-11-19 | 2011-01-05 | Setred As | Camera and visualisation of images and video for autostereoscopic display |
Non-Patent Citations (3)
Title |
---|
A Flat-Field Correction Method for Photon-Counting-Detector-Based Micro-CT; So E. Park et al.; Medical Imaging: Physics of Medical Imaging; 2014-03-31; vol. 9033, no. 90335N; pp. 1-7 |
Noise removal from Micro-CT projection images; Dong Ge et al.; Medical & Health Equipment (《医疗卫生装备》); 2009-02-28; vol. 30, no. 2; p. 8, section 3 |
Analysis and correction of image artifacts caused by turntable center offset and detector deflection; Chen Ming et al.; Proceedings of the 11th China Annual Conference on Stereology and Image Analysis (《第十一届中国体视学与图像分析学术年会论文集》); 2006-10-01; abstract |
Also Published As
Publication number | Publication date |
---|---|
CN106408616A (en) | 2017-02-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106408616B (en) | The inconsistent bearing calibration of perspective view background in a kind of CT imaging | |
Palatinus et al. | Specifics of the data processing of precession electron diffraction tomography data and their implementation in the program PETS2. 0 | |
CN105849537B (en) | Calibration device and computed tomography method | |
CN109685877B (en) | Micro-nano CT focus drift correction method based on adaptive projection image characteristic region matching | |
KR101473538B1 (en) | Extracting patient motion vectors from marker positions in x-ray images | |
KR101850320B1 (en) | Computed tomography and method of error correction | |
EP3488233A1 (en) | Inspection method for a manufactured article and system for performing same | |
CN109712212A (en) | A kind of industry CT artifact correction method | |
US8508591B2 (en) | System and method for estimating the height of an object using tomosynthesis-like techniques | |
CN107796834B (en) | Orthogonal electronic linear scanning CL imaging system and method | |
CN101584587B (en) | Automatic calibration method for CT projection center | |
EP2386996A1 (en) | Method and device for a precise contour determinaton of an object during an imaging analysis method | |
CN111415349A (en) | Method for detecting polyester filament yarn based on image processing technology | |
US20220130081A1 (en) | Computer-implemented method for determining at least one geometric parameter required for evaluating measurement data | |
Fleßner et al. | Evaluating and visualizing the quality of surface points determined from computed tomography volume data | |
CN107016655A (en) | Cone-beam CL geometry population parameter iteration correction methods | |
CN104000618B (en) | The true number of photons gate control method of one ring carries out the bearing calibration of respiratory movement gate | |
Ametova et al. | A tool for reducing cone-beam artifacts in computed tomography data | |
Schlagenhauf et al. | A stitching algorithm for automated surface inspection of rotationally symmetric components | |
CN112504240A (en) | Laser demarcation device calibration system and calibration method | |
Blumensath et al. | Calibration of robotic manipulator systems for cone-beam tomography imaging | |
KR20200015375A (en) | Inspection apparatus using x-ray penetration and inspection method using x-ray penetration | |
CN110292390A (en) | CT layers of sensitivity curve test body mould Set-up errors bearing calibration | |
JP2019164132A (en) | GEOMETRIC ALIGNMENT, SAMPLE MOTION CORRECTION, AND INTENSITY NORMALIZATION OF COMPUTED TOMOGRAPHY PROJECTIONS USING π-LINE OPTIMIZATION | |
Zemek et al. | Voxel size calibration for high-resolution CT |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20190226 Termination date: 20211123 |