DETECT SURFACE POINTS OF AN ULTRASOUND IMAGE

Background
Procedures, such as some medical procedures, utilizing some form of image guidance are becoming more and more prevalent. For instance, in the area of orthopedic surgery, preoperative tomographic images, such as computerized tomography (CT) images, magnetic resonance images (MRI), and ultrasonic images, are increasingly used as a navigational aid. Other interventional medical procedures also benefit from image guidance. Particularly in orthopedic surgery, it is beneficial to be able to identify, from an ultrasound image, the precise picture elements, or pixels, that represent the surface points of a bone. Obtaining an accurate map of the bone structure is highly beneficial to a surgeon who is relying on these types of tomographic images as a navigational tool.
While the accurate and fast detection of bone surface points from ultrasound and other tomographic images can aid medical procedures that use image guidance, ultrasound images in particular consistently exhibit a high image noise level and poor quality surface delineation. For instance, an ultrasonic image of a human leg or arm usually does not provide a crisp delineation between the bone surface and the surrounding soft tissue. While image segmentation methods exist to help emphasize the bone-tissue borders, they do not provide the level of robustness, accuracy, and speed required for current surgical applications.
Many of the known methods of gathering medical image data are either direct applications or extensions of approaches from the "computer vision" field. This technology concentrates on the detection of known objects from digital images.
Known image segmentation algorithms can be grouped into three broad classes, each of which utilizes algorithms to segment the sampled image data. These three groups can be roughly classified as manual, semi-automatic, and automatic. Descriptions and reviews of algorithms from each of these classes can be found in L.P. Clarke, et al., "Review of MRI segmentation: Methods and applications," Magnetic Resonance Imaging, vol. 13, pp. 343-368, 1995; S. Sarkar and K.L. Boyer, "Perceptual Organization in Computer Vision: A Review and a Proposal for a Classificatory Structure," IEEE Trans. Syst. Man. Cybern., vol. 23, pp. 382-399, 1993; and P. Suetens, et al., "Image Segmentation: Methods and Applications in Diagnostic Radiology and Nuclear Medicine," European Journal of Radiology, vol. 17, pp. 14-21, 1993, the details of which are incorporated by reference into the present application.
Another known method of image segmentation is called global thresholding. In this method, pixel intensities from the image are mapped into a feature space called a histogram. Image thresholds are chosen at the valleys between pixel clusters, each representing a region of similar valued pixels in the image. Global thresholding is described in the Suetens article mentioned above and incorporated by reference into the present application, and also in R.M. Haralick and L.G. Shapiro, "Image Segmentation Techniques," Computer Graphics Image Processes, vol. 29, pp. 100-132, 1985, the details of which are hereby incorporated by reference into the present application.
Further known image segmentation methods include spatial domain, edge detection, and region growing methods. Spatial domain methods use spatial proximity in the image to group pixels. Edge detection methods use local gradient information to define edge elements, which are then combined into contours to form
region boundaries. Region growing methods, on the other hand, construct regions by grouping spatially proximate pixels so that some homogeneity criterion is satisfied over the region.
Other conventional image segmentation methods employ statistical classification or mathematical morphological operations. A combination of statistical classification and anatomical information has also been used. Each of the above-mentioned methods is further described in D.L. Collins, T.M. Peters, W. Dai, and
A.C. Evans, "Model Based Segmentation of Individual Brain Structures from MRI
Data," SPIE Visualization in Biomedical Computing, 1808:10-23, 1992; G. Gerig, J. Martin, R. Kikinis, O. Kübler, M. Shenton, and F.A. Jolesz, "Unsupervised Tissue Type Segmentation of 3D Dual-Echo MR Head Data," Image and Vision Computing, 10(6):349-360, 1992; P. Gibbs, D.L. Buckley, S.J. Blackband, and A. Horsman, "Tumor Volume Determination From MR Images by Morphological Segmentation," Physics in Medicine and Biology, 41:2437-2446, 1996; S. Vinitski, C. Gonzalez, F. Mohamed, T. Iwanaga, R.L. Knobler, K. Khalili, and J. Mack, "Improved Intracranial Lesion Characterization by Tissue Segmentation Based on a 3D Feature Map," Magn. Res. Med., 37:457-469, 1997; and S.K. Warfield, J. Dengler, J. Zaers, C.R.G. Guttman, W.M. Wells, G.J. Ettinger, J. Fuller, and R. Kikinis, "Automatic Identification of Grey Matter Structures From MRI to Improve the Segmentation of White Matter Lesions," Journal of Image Guided Surgery, 1(6):326-338, 1995. The details of each of these references are hereby incorporated by reference into the present application.
These known methods, while exhibiting reasonable performance and robustness in limited applications, are extremely sensitive to various changes in image
characteristics. Further, their speed and accuracy when applied to ultrasound images in particular are lacking.
Summary of the Invention
In accordance with a general aspect of the present invention, a system and method for detecting surface points on an object in a body, such as a bone surface, are directed to obtaining an ultrasound image of the object surface, and then applying a series of image processing filters to refine the ultrasound image. In one embodiment, the image processing filters include a median filter, a normalization filter, an edge detection filter, and an erosion filter. In preferred embodiments, each of the image processing filters is implemented through a computer program algorithm.
Brief Description of the Drawings
Fig. 1 is a flow chart illustrating one implementation of a method of detecting object surface points in accordance with the present invention;
Fig. 2 is a raw ultrasound image of a lamb spine vertebra bone;
Fig. 3 is an image showing the detected edge of the bone from Fig. 2 after a median filter step of the bone surface detection method in accordance with the present invention is implemented;
Fig. 4 is an image showing the remaining points representing the bone surface after an outlier removal step of the bone surface detection method in accordance with the present invention;
Fig. 5 is an image showing the remaining points representing the bone surface after a best point selection step of a bone surface detection method in accordance with the present invention;
Fig. 6 is an ultrasound image of a human spine where the bone surface is shown by a manual drawing;
Fig. 7 is an ultrasound image of a human spine showing the resulting set of bone surface points extracted by a method in accordance with the present invention; and
Fig. 8 is a simplified diagrammatic illustration of a system for detecting surface points of a bone in a body.
Detailed Description of Preferred Embodiments
A system and method for detecting an object surface in a body as taught herein are based on several observations, including that for some applications, it is sufficient to detect only a small fraction of the object's surface. Any elements that are suspected of being a false surface point can be eliminated, thus leading to increased detection quality and improved robustness.
In one application, the system and method can be used to detect bone surfaces in a human or animal body. For example, one purpose of bone detection is to support the necessary step of multi-modal data registration, such that there is no need to extract a contiguous surface or contour. In this instance, a small set of points or a disconnected set of points on the surface of the object will suffice. Furthermore, since a system and method in accordance with the present invention do not require a very large number of surface points, any point that is suspected, even remotely, of not being a part of the actual surface can be discarded. Points residing in regions of surface change (location or inclination) will benefit registration more than points
residing in a monotonous region. Therefore, some of the latter points can be discarded as well.
Additionally, bone edge intensity values are among the highest intensity values in an ultrasound image, assuming that there are no barriers between bone and skin (e.g., a metal implant of some sort), and the edge of the bone is typically more than a few pixels thick. In an ultrasound image, bone looks like a thick band of noisy intensity values. For example, Fig. 2 shows an ultrasound image of a lamb spine vertebra and the distorted (i.e. "fuzzy") view of the bone material 200 that the ultrasound image displays. The area lying behind this band is always of lower intensity, indicating that the ultrasound waves are mostly absorbed or reflected once they pass the edge of the bone structure.
In accordance with a general aspect of the present invention, a system and method for detecting surface points on an object in a body are directed to obtaining an ultrasound image of the object surface, and then applying a series of image processing filters to refine the ultrasound image. In one embodiment, the image processing filters include a median filter, a normalization filter, an edge detection filter, and an erosion filter. In preferred embodiments, each of the image processing filters is implemented through a computer program algorithm. Implementation of the respective image processing filters in a system and method according to the invention will now be described in greater detail in accordance with an exemplary application of the system and method - detecting a bone surface in a body by obtaining and refining an ultrasound image.
First, the respective filters for refining the ultrasound image are described. An exemplary method of employing the filters to refine an ultrasound image of a bone surface in a body is then described. Lastly, a processor-based system for
implementing a set of filters for refining an ultrasound image of a bone surface is described. Notably, embodiments of the invention may employ one or more of the described filters, and preferred embodiments of the invention may employ fewer than all of the described filters. At least one embodiment employs all of the described filters.
Median Filter
The first applied filter is preferably a median filter. The median filter is preferably an algorithm that is applied to the raw ultrasound image, removing distortion from image speckle and enhancing the bone edge in the ultrasound image. The median filter obtains its input by reading the pixels from the ultrasound image. The pixels within a rectangular image area centered on a specific pixel are sorted based on intensity. The median filter then chooses the middle value of the sorted list, i.e. the value that has the same number of points above and below it. The width and height of the rectangular pixel area are input parameters to the median filter. If the absolute difference between the value of the pixel at the center of the rectangular region and the median value is greater than a specified input parameter, the pixel's value is replaced with the median value.

Normalization Filter

The second applied filter is preferably a normalization filter. The normalization filter is preferably an algorithm that is applied to the modified ultrasound image and is used to reduce the variability in the bone intensity values between images. The normalization filter works basically as follows: the maximum intensity value (Imax) in each image and the maximum intensity value in all of the ultrasound data (Dmax) are identified. For each image in the ultrasound data set, a
scale factor is calculated by dividing Dmax by Imax; every pixel in the image is then multiplied by this scale factor, bringing each image's maximum intensity to the data-set maximum. A further modified ultrasound image results from the application of the normalization filter.

Edge Detection Filter

The next applied filter is preferably an edge detection filter, which detects the bone edge and is applied to the further modified ultrasound image. Three parameters need to be assigned prior to applying the edge detection filter. Those three parameters are:
• The threshold value,
• The edge bone thickness (i.e. 2 to 4 pixels), and
• The search range.
Preferably, the threshold value is below the bone edge intensity value, the edge bone thickness is 2 to 4 pixels, and the search range begins after finding the first pixel above the threshold.
The edge detection filter projects rays from the image side opposite the ultrasound probe location, in the direction of the ultrasound probe. Once a ray reaches a first pixel value that is greater than the threshold value, a counter is set to zero. The counter counts the number of pixels that have intensity values above the threshold value while the search is still within the search range. If the counter determines that the edge thickness is less than the preset value, the counter is reset and the search continues along the same projected ray. Preferably, the search range is greater than or equal to the desired bone edge thickness because the edge of the bone is not continuously connected, due to the inherently noisy ultrasound data.
This process is implemented on all the images contained in the further modified ultrasound image data set.
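The ray-marching search above can be sketched as follows, assuming the probe is at the top of the image so rays march from the bottom row upward, and assuming a grayscale image stored as a NumPy array; the function name and default parameter values are illustrative, not taken from the description:

```python
import numpy as np

def detect_edges(image, threshold=120, min_thickness=2, search_range=6):
    """For each column, march a ray from the bottom row (the side
    opposite the probe) toward the top (the probe side).  When a
    pixel exceeds the threshold, count how many pixels within the
    search range also exceed it; if at least min_thickness do, the
    starting pixel is recorded as a bone-edge point.  Otherwise the
    counter is reset and the search continues along the same ray."""
    h, w = image.shape
    edge_points = []          # (row, col) of detected edge pixels
    for col in range(w):
        row = h - 1
        while row >= 0:
            if image[row, col] > threshold:
                count = 0
                for k in range(search_range):
                    if row - k >= 0 and image[row - k, col] > threshold:
                        count += 1
                if count >= min_thickness:
                    edge_points.append((row, col))
                    break     # one edge point per ray
                # edge too thin here: reset and keep searching
            row -= 1
    return edge_points
```

Because above-threshold pixels are merely counted within the search range rather than required to be consecutive, small gaps in the noisy bone band do not prevent detection, which is why the search range should be at least as large as the desired edge thickness.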
Erosion Filter
The next applied filter is preferably an erosion filter. An erosion filter is a morphological filter that changes the shape of objects in an image by eroding
(reducing) the boundaries of bright objects, and enlarging the boundaries of dark ones.
The erosion filter thins down image elements by setting any white (bone) pixel to black if it is axially or diagonally adjacent to at least one black pixel in the source image. The erosion filter is used to remove any misclassified edges. Preferably, the structuring kernel element has a size of 3x3x1 pixels, because the erosion filter also thins out the detected thickness of the bone edge. However, for the purpose of using ultrasound in image-guided surgery, a thin edge should be sufficient.
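The erosion step can be sketched as follows, assuming a binary image in which nonzero pixels are white (bone); the function name is illustrative, and pixels beyond the image border are treated as black, which is one common convention:

```python
import numpy as np

def erode(binary):
    """Set any white (nonzero) pixel to black if any of its eight
    axial or diagonal neighbours is black, i.e. a 3x3 structuring
    element; isolated, misclassified edge pixels are removed and
    thick edges are thinned."""
    h, w = binary.shape
    out = binary.copy()
    for y in range(h):
        for x in range(w):
            if binary[y, x]:
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        # out-of-bounds neighbours count as black
                        if not (0 <= ny < h and 0 <= nx < w) or binary[ny, nx] == 0:
                            out[y, x] = 0
    return out
```

An isolated white pixel always has black neighbours and is therefore removed, which is how this filter discards misclassified edge points.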
Outlier Removal
At this stage, the expected shape of the object is examined for the first time and an outlier removal step is implemented. Since it is assumed that the bone does not exhibit abrupt changes in its shape, points that create discontinuities in the surface or its derivative can be reliably discarded.
The outlier removal algorithm scans the image from left to right and connects a line between the points it finds. It then measures the angles between these lines. Points that generate sharp angles along the line are deleted.
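The scan above can be sketched as follows, assuming edge points are given as (x, y) pairs; the turning-angle limit is an illustrative assumption, since the description does not specify what counts as a "sharp" angle:

```python
import math

def remove_outliers(points, max_turn_deg=80.0):
    """Scan edge points left to right, form the polyline connecting
    them, and delete any interior point at which the line turns more
    sharply than max_turn_deg (an assumed limit, since a bone surface
    is not expected to change shape abruptly)."""
    pts = sorted(points)                 # order by x (column)
    kept = list(pts)
    i = 1
    while 0 < i < len(kept) - 1:
        (x0, y0), (x1, y1), (x2, y2) = kept[i - 1], kept[i], kept[i + 1]
        a1 = math.degrees(math.atan2(y1 - y0, x1 - x0))
        a2 = math.degrees(math.atan2(y2 - y1, x2 - x1))
        turn = abs(a2 - a1)
        turn = min(turn, 360.0 - turn)   # smallest turning angle
        if turn > max_turn_deg:
            del kept[i]                  # sharp corner: outlier
            i = max(i - 1, 1)            # re-check the neighbourhood
        else:
            i += 1
    return kept
```

After a deletion the scan steps back one point, so that the angles on either side of the removed outlier are re-measured against the new neighbours.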
Best-Point Selection
Finally, the remaining set of points is surveyed and a quality measure is computed for its members. For example, for the purpose of registration, surface points in areas of inclination change, such as corners, are most important. These points receive a higher score according to a registration-quality measurement
function. Once points have been graded and ranked, points with a lower grade are eliminated. The grade value serving as the decision criterion for accepting or rejecting a point is determined by the application. For a registration algorithm, this grade may be rather high, since only a few tens of points are required for accurate registration. In addition, the registration algorithm's speed is linearly proportional to the number of surface points, which is another incentive for reducing the number of surface points.
Fig. 1 illustrates the process flow of a method 100 for detecting the surface of a bone from an ultrasound image in accordance with the present invention. Beginning with a previously generated ultrasound image 110, a median filter 115 is applied to the image. As described above, the median filter enhances the resolution of the bone edge in the ultrasound image and removes any speckle or other rough distortion in the image. The result of applying the median filter 115 is a modified ultrasound image 120. Next, a normalization filter 125 is applied to the modified ultrasound image 120, which reduces the variability in the bone intensity values between images. The result of applying the normalization filter 125 is a further modified ultrasound image 130.
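The median filter 115 and normalization filter 125 steps above can be sketched as follows, assuming 8-bit grayscale images stored as NumPy arrays; all function names and default parameter values are illustrative, not taken from the description:

```python
import numpy as np

def conditional_median_filter(image, width=3, height=3, max_diff=20):
    """Replace a pixel with the median of its width x height
    neighbourhood only when it differs from that median by more than
    max_diff, removing speckle while leaving ordinary pixels intact."""
    h, w = image.shape
    out = image.copy()
    ry, rx = height // 2, width // 2
    for y in range(ry, h - ry):
        for x in range(rx, w - rx):
            window = image[y - ry:y + ry + 1, x - rx:x + rx + 1]
            med = np.median(window)
            if abs(float(image[y, x]) - med) > max_diff:
                out[y, x] = med
    return out

def normalize_images(images):
    """Scale every image in the data set by Dmax / Imax, bringing each
    image's maximum intensity to the data-set maximum and so reducing
    intensity variability between images."""
    dmax = max(float(img.max()) for img in images)
    result = []
    for img in images:
        imax = float(img.max())
        factor = dmax / imax if imax > 0 else 1.0
        result.append(np.clip(img * factor, 0, 255).astype(np.uint8))
    return result
```

Note that the median is always computed on the original image, not on already-filtered pixels, so the result does not depend on scan order.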
With the further modified ultrasound image 130 as an input, an edge detection filter 135 is applied. The edge detection filter 135 detects the bone edge from the further modified ultrasound image 130. Prior to application of the edge detection filter 135, three parameters must be input by an operator: a threshold value 140, an edge bone thickness 145, and a search range value 150. As a result of the application of the edge detection filter 135, a data point list 155 is generated that represents bone edge points on the ultrasound image. If the object is not a bone, then the operator will enter a threshold value 140, an edge bone thickness 145, and a search range value 150 that are appropriate for the object viewed in the ultrasound image.
An erosion filter 160 is then applied to the data points 155. The erosion filter 160 removes any misclassified edges, i.e. data points, and generates a modified data point list 165. The modified point list 165 is examined in an outlier removal step 170 and any abrupt discontinuities in the surface can be removed. The outlier removal step 170 assumes that there are no abrupt changes in the bone surface features. After all outlier points are removed, a further modified data point list 175 is generated. Finally, a best-point filter 180 is applied to the further modified data point list 175. The best point filter 180 surveys the further modified data point list 175 and evaluates a quality measure of its members. The result is a final point list 185 that represents points on the bone edge.
Referring to Figs. 2-5, an ultrasound image is shown at various stages of an edge detection method in accordance with the present invention. Fig. 2 shows a raw ultrasound image taken of a lamb spine vertebra. As seen in Fig. 2, while the actual bone material 200 is relatively easy to perceive, the actual edge of the bone 210 is fuzzy and the delineation between bone and the soft surrounding tissue cannot be precisely pinpointed.
Fig. 3 shows the detected edge 220 of the bone 200 after application of several of the filters in accordance with the present invention. Fig. 3 represents an image formed from a set of points after applying the median filter 115, the normalization filter 125, the edge detection filter 135, and the erosion filter 160.
Fig. 4 shows the detected edge 230 of the bone 200 after the image from Fig. 3 has been examined and the outlier removal step 170 has taken place. Finally, Fig. 5 shows the points that represent the detected edge 240 of the bone 200 after the best point filter 180 has been applied.
Figs. 6 and 7 each show an ultrasound image of a human spine. Fig. 6 shows the bone location 250 as manually drawn, and Fig. 7 shows the bone location 260 as a result of applying a method in accordance with the present invention. In Fig. 6, the dotted lines represent the regions of the bone that are not visible in the ultrasound image.
Fig. 8 shows a simplified diagrammatic illustration of a processor-based system for detecting surface points of an object in a body, such as bone surface points, by obtaining and refining an ultrasound image of the object surface.
The system consists of a processing system 500, an ultrasound imaging probe 505, an input means 515, and an output device 520. The processing system 500 consists of the processor 510, which may execute a plurality of software-based filter algorithms for refining an ultrasound image obtained from the ultrasound imaging probe
505. The processor 510 is preferably a computer including an associated memory for storing, among other things, the software filter programs and image data. For better understanding of the invention, the various software-based filters are shown separate from the processor. Each software-based filter actually resides on the processor 510 and/or associated memory. The functions of the various filters have previously been described. The input means 515 can be typical computer input means, such as a keyboard, mouse, or other device. The output device 520 is preferably a video monitor.
In operation, the ultrasound probe 505 is used to obtain a raw ultrasound image of a surface of an object (e.g., a bone) 508 within a body 507. The processor
510 causes the raw ultrasound image to be displayed on the output device 520, so that an operator can view the image. The operator, through the input means 515, can then refine the raw ultrasound image by inputting commands to the processing system to
apply one or more selected filters, whether through an automated program or through manual selection by the operator, both options of which are contemplated by the invention.
After each filter is applied, the further refined image is preferably displayed on the output device 520. The first filter applied to the ultrasound image is preferably a median filter
530. The next filter applied to the image is preferably a normalization filter 540. The next filter applied to the image is preferably an edge detection filter 550. Prior to applying the edge detection filter 550, the operator must first input certain values, such as a threshold value, object edge thickness, and the search range for the edge detection, through the input means 515. The next filter applied to the image is preferably an erosion filter 560. After the ultrasound image is filtered through the erosion filter 560, any points that appear to be discontinuous on the surface of the object 508 can be removed through an outlier removal process filter 570. A further filter that may be applied, automatically or by the operator, is a best point filter 580, as described above.