
US20090324097A1 - System and method for using a template in a predetermined color space that characterizes an image source - Google Patents


Info

Publication number
US20090324097A1
US20090324097A1 (application US11/374,613)
Authority
US
United States
Prior art keywords
image
source
images
objects
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/374,613
Inventor
Thomas E. Ramsay
Eugene B. Ramsay
Gerard Felteau
Victor Hamilton
Martin Richard
Anatoliy Fesenko
Oleksandr Andrushchenko
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Applied Visual Sciences Inc
Original Assignee
Guardian Technologies International Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guardian Technologies International Inc
Priority to US11/374,613
Assigned to GUARDIAN TECHNOLOGIES INTERNATIONAL, INC. reassignment GUARDIAN TECHNOLOGIES INTERNATIONAL, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ANDRUSCHENKO, OLEKSANDR, FELTEAU, GERALD, FESENKO, ANATOLIY, HAMILTON, VICTOR, RAMSAY, EUGENE B., RAMSAY, THOMAS E., RICHARD, MARTIN
Assigned to GUARDIAN TECHNOLOGIES INTERNATIONAL, INC. reassignment GUARDIAN TECHNOLOGIES INTERNATIONAL, INC. CORRECTED COVER SHEET TO CORRECT INVENTOR'S NAME, PREVIOUSLY RECORDED AT REEL/FRAME 018113/0682 (ASSIGNMENT OF ASSIGNOR'S INTEREST) Assignors: ANDRUSHCHENKO, OLEKSANDR, FELTEAU, GERARD, FESENKO, ANATOLIY, HAMILTON, VICTOR, RAMSAY, EUGENE B., RAMSAY, THOMAS E., RICHARD, MARTIN
Publication of US20090324097A1
Assigned to APPLIED VISUAL SCIENCES, INC. reassignment APPLIED VISUAL SCIENCES, INC. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: GUARDIAN TECHNOLOGIES INTERNATIONAL, INC.


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/40 — Extraction of image or video features
    • G06V 10/56 — Extraction of image or video features relating to colour

Definitions

  • This invention relates to image analysis and, more specifically, to a system and method for identifying objects of interest in image data. This includes, but is not limited to, a methodology for accomplishing image segmentation, clarification, visualization, feature extraction, classification, and identification.
  • Computer-aided image recognition systems rely solely on the pixel content contained in a two-dimensional image.
  • the image analysis relies entirely on pixel luminance or color, and/or spatial relationship of pixels to one another.
  • image recognition systems utilize analysis methodologies that often assume that distinctive characteristics of objects exist and can be differentiated.
  • An object of the invention is to solve at least the above problems and/or disadvantages and to provide at least the advantages described hereinafter.
  • an object of the present invention is to provide a system capable of detecting objects of interest in image data with a high degree of confidence and accuracy.
  • Another object of the present invention is to provide a system and method that does not directly rely on predetermined knowledge of an object's shape, volume, texture or density to be able to locate and identify a specific object or object type in an image.
  • Another object of the present invention is to provide a system and method of identifying objects of interest in image data that is effective at analyzing images in both two- and three-dimensional representational space using either pixels or voxels.
  • Another object of the present invention is to provide a system and method of distinguishing a class of known objects from objects of similar color and texture whether or not they have been previously explicitly observed by the system.
  • Another object of the present invention is to provide a system and method of identifying objects of interest in image data that works with very difficult to distinguish/classify image object types, such as: (i) apparent random data; (ii) unstructured data; and (iii) different object types in original images.
  • Another object of the present invention is to provide a system and method of identifying objects of interest in image data that can cause either convergence or divergence (clusterization) of explicit or implicit image object characteristics that can be useful in creating discriminating features/characteristics.
  • Another object of the present invention is to provide a system and method of identifying objects of interest in image data that can preserve object self-similarity during transformations.
  • Another object of the present invention is to provide a system and method of identifying objects of interest in image data that is stable and repeatable in its behavior.
  • a method of using at least one template in at least one predetermined color space that characterizes an image source comprising receiving an image from the image source, mapping the image to the at least one predetermined color space to yield a mapped image, and comparing the mapped image to the at least one template.
  • a method of using at least one template in at least one predetermined color space that characterizes an image source comprising receiving a plurality of images from the image source, mapping the plurality of images to the at least one predetermined color space to yield mapped images, and comparing the mapped images to the at least one template.
  • a method of using at least one template in at least one predetermined color space that characterizes an image source comprising receiving an image from a different image source, mapping the image to the at least one predetermined color space to yield a mapped image, and comparing the mapped image to the at least one template.
  • a system for using at least one template in at least one predetermined color space that characterizes an image source comprising an image receiving unit that receives an image from the image source, an image mapping unit that maps the image to the at least one predetermined color space to yield at least one mapped image, and a comparing unit that compares the mapped image to the at least one template.
  • FIG. 1 is a bifurcation diagram
  • FIG. 2 is a diagram illustrating how three complementary paradigms are used to obtain intelligent image informatics, in accordance with one embodiment of the present invention
  • FIG. 3 is a block diagram of a system for identifying an object of interest in image data, in accordance with one embodiment of the present invention
  • FIGS. 4A-5C are transfer functions applied to the pixel color of the image, in accordance with the present invention.
  • FIG. 6A is an input x-ray image of a suitcase, in accordance with the present invention.
  • FIG. 6B is the x-ray image of FIG. 6A after application of the image transformation divergence process of the present invention
  • FIG. 7 is a block diagram of an image transformation divergence system and method, in accordance with one embodiment of the present invention.
  • FIGS. 8A-8M are x-ray images of a suitcase at different stages in the image transformation recognition process of the present invention.
  • FIG. 8N is an example of a divergence transformation applied to an x-ray image during the image transformation divergence process of the present invention.
  • FIG. 9 is an original input medical image of normal and cancerous cells
  • FIG. 10 is the image of FIG. 9 after application of the image transformation recognition process of the present invention.
  • FIG. 11 is an original input ophthalmology image of a retina
  • FIG. 12 is the image of FIG. 11 after application of the image transformation recognition process of the present invention.
  • FIG. 13 is a flowchart of a method of creating a Support Vector Machine model, in accordance with one embodiment of the present invention.
  • FIG. 14 is a flowchart of a method of performing a Support Vector Machine operation, in accordance with one embodiment of the present invention.
  • FIGS. 15A-15C are medical x-ray images
  • FIGS. 16A and 16B are x-ray images from a Smith Detection (Smith) x-ray scanner and a Rapiscan x-ray scanner, respectively;
  • FIG. 17 is a schematic diagram of an x-ray scanner
  • FIG. 18 is a schematic diagram of an x-ray source used in the x-ray scanner of FIG. 17 ;
  • FIGS. 19A and 19B are X-ray images from a Smith scanner and a Rapiscan scanner, respectively, which illustrate geometric distortions with colors;
  • FIG. 20 is a schematic diagram of an x-ray scanner
  • FIG. 24A is a plot showing an RGB_DNA 3×2D view for a Smith HiScan 6040i scanner
  • FIG. 24B is a plot showing an RGB_DNA 3×2D view for a Rapiscan 515 scanner
  • FIG. 25A is a plot showing an RGB_DNA 3D view for a Smith HiScan 6040i scanner
  • FIG. 25B is a plot showing an RGB_DNA 3D view for a Rapiscan 515 scanner
  • FIG. 26 shows plots of the modeling of 2D (P,C) space on the left and 3D RGB_DNA on the right for a Smith scanner;
  • FIG. 27 shows plots of the sequence of (P,C) 2D elastic transformation to RGB_DNA (and back);
  • FIG. 28 is a plot of a 2D (P,C) representation of a Smith RGB_DNA set of unique colors
  • FIG. 30 is a schematic diagram of an x-ray scanner with an object to be scanned that consists of multiple layers of materials
  • FIG. 31 is a plot showing 2D (P,C) space with vector addition
  • FIG. 32 is a plot showing a color algebra example for a Smith calibration bag consisting of overlapped materials
  • FIG. 33 shows examples of images with their 3D RGB_DNA views
  • FIG. 34 shows plots of incorrect RGB_DNA resulting from accidental conversion from 24-bit bmp to 16-bit bmp and back to 24-bit bmp;
  • FIG. 36 is a plot showing the z-lines shown in FIG. 35 from the point in RGB space lying on the prolongation of the major diagonal of RGB cube
  • FIG. 37 shows plots of examples of extracted z-lines and their colors in a 3×2D RGB_DNA view
  • FIG. 38 shows plots of extracted z-lines number 1, 7 and 25 and their colors in a 3D RGB_DNA view;
  • FIG. 39 is a plot showing a fragment of typical 25-bin z-metrics for the first nine z-lines.
  • FIG. 40 shows organic-only, normal and metal-only images and their respective 3D RGB_DNA;
  • FIG. 41 shows an original image and its RGB_DNA with no filters applied
  • FIG. 42 shows the image of FIG. 41 with a z-filter applied to keep light organics
  • FIG. 43 shows the image of FIG. 41 with a z-filter applied to keep heavy organics
  • FIG. 44 shows the image of FIG. 41 with a z-filter applied to keep heavy organics and metal
  • FIG. 45 shows the image of FIG. 41 with a z-filter applied to keep light organics and metal.
  • Point operation is a mapping of a plurality of data from one space to another space which, for example, can be a point-to-point mapping from one coordinate system to a different coordinate system.
  • Such data can be represented, for example, by coordinates such as (x, y) and mapped to different coordinates (θ, Φ); the data can be, for example, the values of pixels in an image.
  • Z effective (Z eff ) is the effective atomic number for a mixture/compound of elements. It is the atomic number of a hypothetical uniform material of a single element with an attenuation coefficient equal to the coefficient of the mixture/compound. Z effective can be a fractional number and depends not only on the content of the mixture/compound, but also on the energy spectrum of the x-rays.
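As background only (the patent does not give its computation at this point), a commonly used approximation for the effective atomic number of a mixture weights each constituent element by its electron fraction:

$$Z_{\text{eff}} = \left( \sum_i f_i \, Z_i^{\,2.94} \right)^{1/2.94}$$

where $f_i$ is the fraction of the total number of electrons contributed by element $i$ and $Z_i$ is its atomic number; the exponent 2.94 is an empirical value appropriate where the photoelectric effect dominates. The energy dependence noted above means that a dual energy scanner reconstructs a $Z_{\text{eff}}$ tied to its particular x-ray spectrum.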
  • Hyperspectral data is data that is obtained from a plurality of sensors at a plurality of wavelengths or energies.
  • a single pixel or hyperspectral datum can have hundreds or more values, one for each energy or wavelength.
  • Hyperspectral data can include one pixel, a plurality of pixels, or a segment of an image of pixels, etc., with said content.
  • hyperspectral data can be treated in a manner analogous to the manner in which data resulting from a divergence transformation is treated throughout this application for systems and methods for threat or object recognition, identification, image normalization and all other processes and systems discussed herein.
  • a divergence transformation can be applied to hyperspectral data in order to extract information from the hyperspectral data that would not otherwise have been apparent.
  • Divergence transformations can be applied to a plurality of pixels at a single wavelength of hyperspectral data or multiple wavelengths of one or more pixels of hyperspectral data in order to observe information that would otherwise not have been apparent.
  • Nodal point is a point in an image transformation or series of image transformations where similar pixel values exhibit a significantly distinguishable change in value. A pixel is a unitary value within a 2D or multi-dimensional space (such as a voxel).
  • An object can be a person, place or thing.
  • An object of interest is a class or type of object such as explosives, guns, tumors, metals, knives, camouflage, etc.
  • An object of interest can also be a region with a particular type of rocks, vegetation, etc.
  • a threat is a type of object of interest that typically, but not necessarily, could be dangerous.
  • Image receiver can include a process, a processor, software, firmware and/or hardware that receives image data.
  • Image mapping unit can be a processor, a process, software, firmware and/or hardware that maps image data to predetermined coordinate systems or spaces.
  • a comparing unit can be hardware, firmware, software, a process and/or processor that can compare data to determine whether there is a difference in the data.
  • Color space is a space in which data can be arranged or mapped.
  • One example is a space associated with red, green and blue (RGB). However, it can be associated with any number and types of colors or color representations in any number of dimensions.
  • HSI color space is a color space where data is arranged or mapped by Hue, Saturation and Intensity.
  • Predetermined color space is a space that is designed to represent data in a manner that is useful and that could, for example, cause information that may not have otherwise been apparent to present itself or become obtainable or more apparent.
  • RGB DNA refers to a representation in a predetermined color space of most or all possible values of colors which can be produced from a given image source.
  • the values of colors again are not limited to visual colors but are representations of values, energies, etc., that can be produced by the image system.
  • a signature is a representation of an object of interest or a feature of interest in a predetermined space and a predetermined color space. This applies to both hyperspectral data and/or image data.
  • a template is part or all of an RGB DNA that corresponds to an image source, or that corresponds to a feature or object of interest, for part or all of a mapping to a predetermined color space.
  • Algorithms From time to time, transforms and/or divergence transformations are referred to herein as algorithms.
  • Algorithms and systems discussed throughout this application can be implemented using software, hardware, and firmware.
  • Modality is any of the various types of equipment or probes used to acquire images. Radiography, CT, ultrasound and magnetic resonance imaging are examples of modalities in this context.
  • the analysis capabilities of the present invention can apply to a multiplicity of input devices created from different electromagnetic and sound emanating sources such as ultraviolet, visual light, infra-red, gamma particles, alpha particles, etc.
  • the present invention identifies objects of interest in image data utilizing image conditioning and data analysis in a process herein termed “Image Transformation” (ITR) or, equivalently, “Image Transformation Divergence” (ITD).
  • the ITD process can cause different yet almost identical objects in a single image to diverge in their measurable properties.
  • An aspect of the present invention is the discovery that objects in images, when subjected to special transformations, will exhibit radically different responses based on the pixel values of the imaged objects. Using the system and methods of the present invention, certain objects that appear almost indistinguishable from other objects to the eye or to computer recognition systems, or are otherwise identical, exhibit significant differences that can be measured.
  • Another aspect of the present invention is the discovery that objects in images can be driven to a point of non-linearity by certain transformation functions.
  • the transformation functions can be applied singly or in a sequence, so that the behavior of the system progresses from one state through a series of changes to a point of rapid departure from stability called the “point of divergence.”
  • FIG. 1 is an example of a bifurcation diagram illustrating iterative uses of divergence transforms, where each node represents an iteration or application of another divergence transform.
  • a single image is represented as a simple point on the left of the diagram.
  • There are several branches in the diagram (at lines A, B and C) as the line progresses from the original image representation on the left, indicating node points where bifurcation occurs (“points of bifurcation”).
  • three divergence transforms were used in series at points A, B and C.
  • each divergence transform results in a bifurcation of the image objects or data.
  • At some point, the object integrity may deteriorate or no further improvement in the detection process may be realized.
  • other methodologies, e.g., Machine Learning Algorithms (MLAs), may be applied to further distinguish the objects of interest from other object-of-interest candidates.
  • Another aspect of the present invention is that one can apply the “principle of divergence” to the apparent stability of fixed points or pixels in an image and, by altering one or more parameter values, give rise to a set of new, distinct and clearly divergent image objects. Because each original object captured in an image responds uniquely at its point of divergence, the methods of the present invention can be used in an image recognition system to distinguish and measure objects. It is particularly useful in separating and identifying objects that have almost identical color, density and volume.
  • special transformations are applied to images in an iterative “filter chain” sequence.
  • the nature of the sequence of transforms causes objects in the image to exhibit radically different responses based on their pixel value(s) such as color (that are related to the physical properties inherent in the original objects in the image).
  • certain objects that appear almost indistinguishable from other objects to the eye or to computer recognition systems exhibit significant differences that can be easily measured.
  • the ITD process works with an apparently stable set of fixed points or pixels in an image and, by altering one or more parameter values, gives rise to a set of new, distinct, and clearly divergent image objects. Commonly used and understood transforms work within the domain where images maintain equilibrium.
  • the ITD method starts by first segmenting the image into objects of interest, then applying different filter sequences to the same original pixels in the identified objects of interest using the process.
  • the process is not limited to a linear sequence of filter processing.
  • an explosive inside of a metal container can be located by first locating all containers, remapping the original pixel data with known coordinates in the image and then examining the remapped original pixels in the identified object(s) in the image for threats with additional filter sequences.
  • transforms can be tuned to optimize the distinction of the objects of interest in the images.
  • the process works for both image segmentation and feature generation through an iterative process of applying image transforms. As discussed above, it is defined mathematically as reaching a Repellor Point.
  • An aspect of present invention is the use of three complementary paradigms to extract information out of images that would otherwise not be readily available. This process is herein referred to as “Intelligent Image Informatics”. As illustrated in FIG. 2 , the three complementary paradigms include: (1) Image Processing; (2) Pattern Classification (Contextual Imagery with Machine Learning); and (3) ⁇ -Physics.
  • Imaging can take place in the spatial domain, spectral domain, RGB_DNA space and/or feature space.
  • the Feature Extraction Process can use the image's describers/qualifiers/characteristics from the above mentioned domains. These features can be analyzed by many pattern classification techniques, also called Machine Learning Algorithms (MLAs), such as Support Vector Machines (SVMs) and decision trees/graphs.
  • ⁇ -Physics refers to the physics that governs the image source, such as dual energy scanning systems, the z-effective exhibited by different materials and the RGB_DNA that characterizes the image source. All of these methodologies and concepts will be explained in more detail below.
  • the ITD methodologies of the present invention reveal signatures in radiographic image objects that have been previously invisible to the human eye.
  • the application of specific non-linear functions to grey-scale or color radiographic images is the basis of ITD. Due to the Compton and photoelectric effects, objects in the image exhibit unique, invariant responses to the ITD algorithms based on their physical interactions with the electromagnetic beam. By applying a combination of complementary functions in an iterative fashion, objects of very similar grey-scale or color content in the original image significantly diverge at a point of non-linearity. This divergence causes almost statistically equivalent objects in the original image to display significant density, color and pattern differences. Different algorithms are used for distinguishing objects that exhibit different ranges of effective atomic numbers (Z eff ). The algorithms are tuned to be optimal within certain fractional ranges of resultant electromagnetic Compton/photoelectric combinations.
  • the hypercube now contains spectral bands for each object that are the result of the object's response to each ITD iteration. This is quite similar to the creation of hyperspectral data that is collected by sensors from the reflectance of objects.
  • the hypercube data contains both spatial and spectral components that can be used for effective pattern classification rule generation.
  • Empirical testing has shown that objects retain their characteristic “response-based signatures” for a wide range of fractional Compton/photoelectric results, even when there is significant pixel mixing due to overlapping of other objects. This should not be completely unexpected since differences in a given object's thickness can generate the same Z eff with the variability being expressed as a change in density.
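A minimal sketch of the hypercube idea described above, assuming each ITD iteration is available as a callable point operation (the names and structure are illustrative, not the patent's implementation):

```python
import numpy as np

def build_hypercube(image: np.ndarray, transforms) -> np.ndarray:
    """Stack the response of each ITD transform as a 'spectral band'.

    `image` is the original 2D (grayscale) or single-channel image and
    `transforms` is the ordered chain of point operations; the result is
    analogous to a hyperspectral cube, one band per ITD iteration.
    """
    bands = [t(image) for t in transforms]
    return np.stack(bands, axis=-1)  # shape: (rows, cols, n_transforms)
```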
  • FIG. 3 is a block diagram of a system 100 for identifying an object of interest in image data, in accordance with one embodiment of the present invention.
  • the system 100 comprises an input channel 110 for inputting image data 120 from an image source (not shown) and an image analysis system 130 .
  • the image analysis system 130 generates transformed image data utilizing ITD, in which the object of interest is distinguishable from other objects in the image data.
  • the object of interest can be any type of object.
  • the object of interest can be a medical object of interest, in which case the image data can be computer tomography (CT) image data, x-ray image data, or any other type of medical image data.
  • the object of interest can be a threat object, such as weapons, explosives, biological agents, etc., that may be hidden in luggage.
  • the image data is typically x-ray image data from luggage screening machines.
  • At least one divergence transformation, preferably a point operation, is preferably utilized in the image analysis system 130 .
  • a point operation converts a single input image into a single output image. Each output pixel's value depends only on the value(s) of its corresponding pixel in the input image. Input pixel coordinates correlate to output pixel coordinates such that (Xi, Yi) → (Xo, Yo).
  • a point operation does not change the spatial relationships within an image. This is quite different from local operations where the value of neighboring pixels determines the value of the output pixel.
  • Point operations can correlate both gray levels and individual color channels in images.
  • One example of a point operation is shown in the transfer function of FIG. 4A .
  • 8 bit (256 shades of gray) input levels are shown on the horizontal axis and output levels are shown on the vertical axis. If one were to apply the point operation of FIG. 4A to an input image, there would be a 1 to 1 correlation between the input and the output (transformed) image. Thus, input and output images would be the same.
  • Point operations are predictable in how they modify the histogram of an image. Point operations are typically used to optimize images by adjusting the contrast or brightness of an image. This process is known as contrast enhancing. They are typically used as a copying technique, except that the pixel values are modified according to the specified transfer function. Point operations are also typically used for photometric calibration, contrast enhancement, monitor display calibration, thresholding and clipping to limit the number of levels of gray in an image.
  • the point operation is specified by the transformation function f and can be defined as B(x, y) = f(A(x, y)), where A is an input image and B is an output image.
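A minimal sketch of applying such a point operation to an 8-bit image using a lookup table built from the transfer function (illustrative only; the patent does not prescribe an implementation):

```python
import numpy as np

def apply_point_operation(image: np.ndarray, transfer) -> np.ndarray:
    """Apply a point operation B(x, y) = f(A(x, y)) to an 8-bit image.

    `transfer` maps an input level 0..255 to an output level; spatial
    relationships are untouched, only pixel values change.
    """
    # Precompute the transfer function as a 256-entry lookup table.
    lut = np.array([np.clip(transfer(v), 0, 255) for v in range(256)],
                   dtype=np.uint8)
    return lut[image]  # vectorized per-pixel remapping

# The identity transfer of FIG. 4A leaves the image unchanged.
a = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
assert np.array_equal(a, apply_point_operation(a, lambda v: v))
```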
  • the at least one divergence transformation used in the image analysis system 130 can be either linear or non-linear point operations, or both.
  • Non-linear point operations are used for changing the brightness/contrast of a particular part of an image relative to the rest of the image. This can allow the midpoints of an image to be brightened or darkened while maintaining blacks and whites in the picture.
  • FIG. 4B is a linear transfer function
  • FIGS. 4C-4E illustrate transformations of some non-linear point operations.
  • An aspect of the present invention is the discovery that the transfer function can be used to bring an image to a point where two initially close colors become radically different after the application of the transfer function. This typically requires a radical change in the output slope of the resultant transfer function of FIG. 5A .
  • the present invention preferably utilizes radical luminance (grayscale), color channel or a combination of luminance and color channel transfer functions to achieve image object differentiation for purposes of image analysis and pattern recognition of objects.
  • the placement of the nodal points in the transfer function(s) is one key parameter. An example of nodal point placement is shown in the transfer function example illustrated in FIG. 5B .
  • the nodal points in the transfer function used in the present invention are preferably placed so as to frequently create radical differences in color or luminance between image objects that otherwise are almost identical.
  • FIG. 6A shows an input image
  • FIG. 6B shows the changes made to the input image (the transformed image obtained) as a result of applying the transfer function of FIG. 5C .
  • the input image is an x-ray image of a suitcase taken by a luggage scanner.
  • the objects of interest are shoes 300 and a bar of explosives 310 on the left side of the suitcase.
  • the orange background in the image makes a radical departure from the orange objects of interest ( 300 and 310 ) and other objects that are almost identical to the objects of interest.
  • the use of different nodal points in the transfer function will cause the objects of interest to exhibit a different color from other objects.
  • Data points connecting the nodes can be calculated using several established methods.
  • a common method of mathematically calculating the data points between nodes is through the use of cubic splines.
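As an illustration of the cubic-spline approach, a transfer function can be built from a handful of nodal points and expanded into a full lookup table; the nodal values below are hypothetical, not taken from the patent:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical nodal points (input level, output level). In the patent the
# nodal placements are chosen to force divergence between nearly identical
# colors; these values are only for illustration.
nodes_in = [0, 64, 128, 192, 255]
nodes_out = [0, 20, 200, 60, 255]

spline = CubicSpline(nodes_in, nodes_out)
lut = np.clip(spline(np.arange(256)), 0, 255).astype(np.uint8)
# `lut` can then be applied per channel exactly like the point operation above.
```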
  • Additional imaging processes are preferably applied in the process of object recognition to accomplish specific tasks. Convolutions such as median and dilate algorithms cause neighboring pixels to behave in similar ways under the transfer function, and may be applied to assure the objects' integrity during the transformation process.
  • FIG. 7 is a block diagram of one preferred embodiment of the image analysis system 130 of FIG. 3 , along with a flowchart of a method for identifying an object of interest in image data using the image analysis system 130 .
  • the image analysis system 130 includes an image conditioner 2000 and a data analyzer 3000 .
  • FIGS. 8A-8M are x-ray images of a suitcase at different stages in the image analysis process. These images are just one example of the types of images that can be analyzed with the present invention. Other types of images, e.g., medical images from X-ray machines or CT scanners, or quantized photographic images can also be analyzed with the system and methods of the present invention.
  • the method starts at step 400 , where the image may optionally be normalized.
  • the normalization process preferably comprises the following processes: (1) referencing; (2) benchmarking; (3) conformity process; and (4) correction process.
  • the referencing process is used to get a reference image containing an object of interest for a given type of X-ray machine.
  • This process consists of passing a container containing one or more objects of interest into a reference X-ray machine to get a reference image.
  • the referencing process is preferably performed once for each X-ray machine model/type/manufacturer.
  • the benchmarking process is used to get a transfer function used to adjust the colors of the reference image taken by a given X-ray machine that is not the reference X-ray machine.
  • This process consists of passing a reference container into any given X-ray machine to get the image of this reference container, which is herein referred to as the “current image.” Then, the current image obtained for this X-ray machine is compared with the reference image. The difference between the current image and the reference image is used to create a transfer function.
  • the benchmarking process determines the transfer function that maps all the colors of the current image color scheme (“current color scheme”) to the corresponding colors that are present in the reference color scheme of the reference image.
  • the transfer function applied to the current image transforms it into the reference image.
  • the adjustment of the colors of X-ray machines of a different type/model/manufacturer requires a distinct and specific calibration process. All X-ray machines are preferably also put through a normalization process. X-ray machines of a same type/model/manufacturer are preferably normalized using the same calibration process. All X-ray machines of different types are preferably calibrated and all the machines, no matter their type, are preferably normalized.
  • the conformity process is preferably used to correct the image color representation of any objects that pass through a given X-ray machine.
  • the conformity process corrects the machine's image color representation (color scheme) in such a way that the color scheme of a reference image will fit the reference color scheme of the reference container.
  • the conformity process preferably consists of applying the transfer function to each bag that passes into an X-ray machine to “normalize” the color output of the machine. This process is specific to every X-ray machine because of the machine's specific transfer function. Each time a container passes through the X-ray machine, the conformity process is preferably applied.
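A rough sketch of how the benchmarking and conformity steps could be realized as a per-color lookup, assuming the reference image and the current image of the reference container are spatially registered (an assumption made here for illustration, not the patent's procedure):

```python
import numpy as np

def build_color_transfer(current: np.ndarray, reference: np.ndarray) -> dict:
    """Benchmarking: map each color seen in the current scan of the reference
    container to the color at the same location in the reference image."""
    mapping = {}
    for cur_px, ref_px in zip(current.reshape(-1, 3), reference.reshape(-1, 3)):
        mapping.setdefault(tuple(int(c) for c in cur_px),
                           tuple(int(c) for c in ref_px))
    return mapping

def apply_color_transfer(image: np.ndarray, mapping: dict) -> np.ndarray:
    """Conformity: remap every pixel of a new scan to the reference color
    scheme; colors never seen during benchmarking are left unchanged."""
    out = image.copy()
    flat = out.reshape(-1, 3)
    for i, px in enumerate(flat):
        key = tuple(int(c) for c in px)
        flat[i] = mapping.get(key, key)
    return out
```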
  • the correction process is preferably used to correct the images from the X-ray machine. It preferably minimizes image distortions and artifacts. X-ray machine manufacturers use detector topologies and algorithms that could have negative effects on the image geometry and colors. Geometric distortions, artifacts and color changes made by the manufacturer have negative impacts on images that are supposed to rigorously represent the physical aspects and nature of the objects that are passed through the machine.
  • the correction process is preferably the same for all X-ray machines of a given model/type/manufacturer.
  • image processing is performed on the image.
  • image processing techniques including, but not limited to, ITD, spatial and spectral transformations, convolutions, histogram equalization and gamma adjustments, color replacement, band-pass filtering, image sharpening and blurring, region growing, hyperspectral image processing, color space conversion, etc.
  • ITD is used for the image processing step 410 , and as such the image is segmented by applying a color-determining transform that specifically affects those objects that match certain color/density/effective atomic number characteristics. Objects of interest are isolated and identified by their responses to the sequence of filters. Image segmentation is preferably performed using a series of sub-steps.
  • FIGS. 8B-8H show the image after each segmentation sub-step.
  • the resulting areas of green in FIG. 8G are analyzed to see if they meet a minimum size requirement. This removes the small green pixels.
  • the remaining objects of interest are then re-mapped to a new white background, resulting in the image of FIG. 8H .
  • Most of the background, organic substances, and metal objects are eliminated in this step, leaving the water bottle 500 , fruit 510 , peanut butter 520 and object of interest 530 .
  • At step 420 , features are extracted by the data analyzer 3000 , which subjects the original pixels of the areas of interest identified in step 410 to at least one feature extraction process. It is at this step that at least one divergence transformation is applied to the original pixels of the areas of interest identified in step 410 .
  • the first process in this example uses the following formulation (in the order listed):
  • FIG. 8I results after process step (4) above
  • the image shown in FIG. 8J results after process step (5) above
  • the image shown in FIG. 8K results after process step (7) above.
  • most of the fruit 510 and the water bottle 500 pixels on the lower left-hand side of the image in FIG. 8K have either disappeared or gone to a white color. This is in contrast to the preservation of large portions of the peanut butter jar 520 pixels and object of interest 530 pixels, which are now remapped to a new image in preparation for the second feature extraction process (FEP).
  • FEP second feature extraction process
  • data conditioning is performed by the data analyzer 3000 , in which the data is mathematically transformed to enhance its efficiency for the MLA to be applied at step 440 .
  • metadata is created (new metrics derived from the metrics created in the feature extraction step 420 , such as the generation of hypercubes).
  • This metadata can consist of any feature that is derived from the initial features generated from the spatial domain. Metadata are frequently features of the spectral domain, Fourier space, RGB_DNA, and z-effective, among others.
  • Machine Learning Algorithms are capable of automatic pattern classification. Pattern classification techniques automatically determine extremely complex and reliable relationships between the image characteristics, also called features. These characteristics are used by the rules-base, which exploits the relationships to automatically detect objects in the images.
  • the feature extraction process of step 420 is applied in order to represent the images with numbers.
  • the MLAs applied at step 440 are responsible for generating the detection system that determines if an object of interest is present.
  • MLAs need structured data types, such as numbers and qualitative/categorical data as inputs.
  • the Feature Extraction Process is applied to transform the image or segments of an image into numbers. Each number is a metric that represents a characteristic of the image.
  • Each image is associated with a collection of the metrics that represents it.
  • the collection of the metrics related to an image is herein referred to as a vector.
  • MLAs analyze the vector of the metrics for all the images and find the metrics' relationships that make up a “rules-base.”
  • the metrics created by the feature extraction process 420 to reflect the image content include, but are not limited to, mean, median, standard deviation, rotation cosine measures, kurtosis, skewness of colors, spectral histogram, co-occurrence measures, Gabor wavelet measures, unique color histograms, percent response, and arithmetic entropy measures.
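A small sketch of part of such a feature-extraction step, computing first-order statistics for one segmented object (the metric set here is only a subset of the list above, and the function name is illustrative):

```python
import numpy as np
from scipy import stats

def first_order_metrics(object_pixels: np.ndarray) -> dict:
    """First-order statistics of an object's 8-bit pixel values (one color
    channel or luminance), a subset of the metric vector built in step 420."""
    v = object_pixels.astype(np.float64).ravel()
    hist = np.bincount(v.astype(int), minlength=256)
    return {
        "mean": float(np.mean(v)),
        "median": float(np.median(v)),
        "std": float(np.std(v)),
        "skewness": float(stats.skew(v)),
        "kurtosis": float(stats.kurtosis(v)),
        "entropy": float(stats.entropy(hist)),  # counts are normalized internally
    }
```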
  • the objects are classified by the data analyzer 3000 based upon the rules-base, which classifies images into objects of interest and objects not of interest according to the values of their metrics, which were extracted at step 420 .
  • the object of interest 530 is measured in this process for its orange content.
  • the peanut butter jar 520 shows green as its primary value, and is therefore rejected.
  • the detected objects of interest 530 are thus distinguished from all other objects (non-detected objects 470 ). Steps 410 - 450 may be repeated as many times as desired on the non-detected objects 470 in an iterative fashion in order to improve the detection performance.
  • Determination of distinguishing features between objects of interest and other possible objects is done by the rule-base as a result of the analysis of the vectors of the metrics by the MLAs applied at step 440 .
  • There are hundreds of different MLAs that can be used including, but not limited to, decision trees, neural networks, support vector machines (SVMs) and regression.
  • the rules-base is therefore preferably entered into code and preferably accessed from an object oriented scripting language, such as Threat Assessment Language (TAL).
  • a second pass is now made with all remaining objects in the image.
  • the rules defined above can now eliminate objects identified in process 1.
  • a second process that follows the logic rules will now create objects of new colors for the remaining objects of interest.
  • the vectors of metrics of the transformed objects of interest are examined. Multiple qualitative approaches may be used in the evaluation of the objects, such as prototype performance and figure of merit.
  • Metrics in the spatial domain, such as image amplitude (luminance, tristimulus value, spectral value) utilizing different degrees of freedom; the quantitative shape descriptors of a first-order histogram, such as standard deviation, mean, median, skewness, kurtosis, energy and entropy; % color for red, green, and blue; ratios between colors (e.g., total number of yellow pixels in the object / total number of red pixels in the object); object symmetry; arithmetic encoder; and wavelet transforms, as well as other custom measurements, are some, but not all, of the possible measurements that can be used.
  • Additional metrics can be created by applying spectrally-based processes, such as Fourier, to the previously modified objects of interest, or by analyzing eigenvalues produced from a Principal Components Analysis to reduce the dimension space of the vectors and remove outliers and non-representative data (metrics/images).
  • a color replacement technique is used to further emphasize tendencies of color changes. For example, objects that contain a value on the red channel > 100 can be remapped to a level of 255 red, so all bright red colors are made pure red. This is used to help identify metal objects that have varying densities.
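The color-replacement example above amounts to a simple thresholded channel rewrite; a sketch, assuming an RGB image with red in channel 0:

```python
import numpy as np

def collapse_bright_red(image: np.ndarray, threshold: int = 100) -> np.ndarray:
    """Force the red channel of every 'bright red' pixel to 255, so metals of
    varying density all map to pure red."""
    out = image.copy()
    out[out[..., 0] > threshold, 0] = 255  # channel 0 assumed to be red
    return out
```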
  • the system and methods of the present invention are based on a methodology that is not restricted to a specific image type or imaging modality. It is capable of identifying and distinguishing a broad range of object types across a broad range of imaging applications. It works equally well in applications such as CT scans, MRI, PET scans, mammography, cancer cell detection, geographic information systems, and remote sensing. It can identify and distinguish metal objects as well.
  • the present invention is capable of, for example, distinguishing cancer cell growth in blood samples and is being tested with both mammograms and x-rays of lungs.
  • FIG. 9 shows an original input image with normal and cancerous cells.
  • FIG. 10 shows the image after the ITD process of the present invention has been applied, with only cancer cells showing up in green.
  • Another example of a medical application for the present invention is shown in FIGS. 11 and 12 .
  • FIG. 11 shows an original ophthalmology image of the retina
  • FIG. 12 shows the image after the ITD process of the present invention have been applied, with the area of interest defined in red.
  • the analytical processing provided by the present invention can be extended to integrate data from a patient's familial history, blood tests, x-rays, CT, PET (Positron Emission Tomography), and MRI scans into a single integrated analysis for radiologists, oncologists and the patient's personal physician. It can also assist drug companies in reducing costs by minimizing testing time for new drug certification.
  • Contextual imagery not only focuses on the segmented image, but on the entire image as well. Context often carries relevant and discriminative information that could determine if an object of interest is present or not in the scene.
  • the MLAs analyze the vectors of metrics taken from the images.
  • the choice of metrics is important. Therefore, the feature extraction process preferably includes “data conditioning” to statistically improve the dataset analyzed by the MLA.
  • Image conditioning is preferably carried out as part of the data conditioning.
  • Image conditioning is one of the first steps performed by the image processing function. It initially consists of the removal of obvious or almost obvious objects that are not one of the objects of interest from the image.
  • By applying image processing functions to the image, some important observations can also be made. For example, some unobvious portions of the object of interest may be distinguished from other elements that are not part of the object of interest upon the application of certain types of image processing.
  • Image normalization is preferably the first process applied to the image. This consists of the removal of certain image characteristics, such as the artificial image enhancement (artifacts) that is sometimes applied by the system that created the image. Image normalization could also include removing image distortions created by the acquisition system, as well as removal of intentional and unintentional artifacts created by the software that constructed the image.
  • the separating surface is drawn by the SVM technique in an optimal way, maximizing the margin between the classes. In general, this provides a high probability that, with proper implementation, no other separating surface will provide better generalization performance within this framework.
  • the SVM technique is robust to small perturbations and noise in data.
  • the SVM technique relies on the following stages:
  • FIG. 13 is a flowchart of a method of creating an SVM model, in accordance with one embodiment of the present invention.
  • the method starts at step 600 , where a nonlinear transformation type and its parameters are chosen.
  • the transformation is performed by the use of specific “kernels”, which are mathematical functions. Sigmoid, Gaussian or Polynomial kernels are preferably used.
  • At step 610 , a quadratic programming optimization problem for the soft margin is solved efficiently. This requires a proper choice of the optimization procedure parameters as well.
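A minimal sketch of these two steps using scikit-learn (the library, data shapes and parameter values are illustrative assumptions, not the patent's implementation):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical training data: one feature vector (the metrics from step 420)
# per image, labeled 1 for "object of interest present" and 0 otherwise.
X = np.random.rand(200, 12)
y = np.random.randint(0, 2, 200)

# Step 600: choose a nonlinear kernel and its parameters (Gaussian/RBF here;
# sigmoid and polynomial are the other kernels named above).
# Step 610: SVC solves the soft-margin quadratic programming problem; C
# controls how soft the margin is.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=0.1, C=1.0))
model.fit(X, y)
```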
  • FIG. 14 is a flowchart of a method of performing an SVM operation, in accordance with one embodiment of the present invention.
  • a feature generation technique is applied at step 700 to yield a vector of the generated features that is used for the analysis.
  • a specified kernel transformation is applied to each possible pair of the analyzed vector and a support vector.
  • the received values are weighted according to the respective weight coefficients and added together with the free term.
  • the result of the kernel transformation is used to classify the image.
  • the image is classified as falling in a first class (e.g., a threat) if the final result is larger than or equal to zero, and is otherwise classified as belonging to a second class (e.g., non-threat).
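The three steps above can be summarized compactly; with $\mathbf{s}_i$ the support vectors, $w_i$ their weight coefficients, $b$ the free term and $K$ the chosen kernel (notation introduced here for illustration), the decision rule is:

$$f(\mathbf{x}) = \sum_{i} w_i \, K(\mathbf{s}_i, \mathbf{x}) + b, \qquad \text{class} = \begin{cases} \text{first class (e.g., threat)} & f(\mathbf{x}) \ge 0 \\ \text{second class (e.g., non-threat)} & f(\mathbf{x}) < 0 \end{cases}$$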
  • RGB_DNA is one of the image processing techniques that can be used in the normalization step 400 and the image processing step 410 ( FIG. 7 ).
  • RGB_DNA refers to a representation, in a predetermined color space, of most or all possible values of colors which can be produced from a given image source.
  • the values of colors are not limited to visual colors, but refer to representations of values, energies, etc. that can be produced by the imaging system. The use of RGB_DNA for image analysis will be described in detail in this section.
  • the invention of energy-selective or dual energy reconstruction made distinguishing Compton and photoelectric fractions of attenuation with acceptable accuracy possible.
  • the effective atomic number of materials Z eff could be computationally reconstructed, in addition to the direct measurement of attenuation alone, giving a clue about the chemical structure of the samples.
  • the images in medical diagnostics are usually visualized on gray-scaled screens, as images of a doctor's choice: conventional/standard ( FIG. 15C ), soft tissues only ( FIG. 15A ), or bones only ( FIG. 15B ).
  • FIGS. 16A and 16B are x-ray images from a Smith Detection (Smith) x-ray scanner and a Rapiscan x-ray scanner, respectively. These are the two most commonly used baggage x-ray scanners. The principal components of any x-ray scanner are:
  • FIG. 17 is a schematic diagram of a typical x-ray scanner.
  • the scanner includes an L-shaped detector array 810 , a moving belt 820 for moving the item being scanned 830 through the scanner 800 , an X-ray source 840 , a collimator 850 for collimating an X-ray beam 860 from the X-ray source 840 , and a photodiode assembly 865 .
  • the X-ray source 840 is typically implemented with an X-ray tube that has a rotating anode 900 , which is used for generating an uninterrupted flow of X-ray photons 910 .
  • the spectrum 920 of the x-ray radiation is polychromatic, with a couple of peaks of characteristic lines. For the baggage scanners of interest, the spectrum covers a range from approximately 160 keV to approximately 25 keV.
  • the X-ray photons 910 of the beam 860 penetrate the materials in the item being scanned 830 , thereby experiencing attenuation of different natures (scattering, absorption etc.). Then, the x-ray beam 860 goes into the L-shaped detector array 810 to be measured.
  • the array 810 is typically a set of pre-assembled groups of detectors (16, 32 or 64 detectors) positioned perpendicular to the x-ray beams 860 .
  • Each individual detector is responsible for one row of pixels on the x-ray image.
  • two detectors per pixel row are used, i.e., the high-energy detector is placed on top of the low-energy one. They are typically separated by a copper filter (typically ~0.5 mm thick) installed for energy discrimination. This filter is a crucial element of this technique. This paves a path for calculating the Z eff (effective atomic number) and d (integral density of the material) of the scanned object 830 .
  • the moving belt 820 in the scanner 800 works as a slicing mechanism.
  • One slice is one column of pixels.
  • the speed of the belt should be synchronized with timing of the system to avoid distortion in lengthwise dimensions of the images.
  • An L-shaped detector array 810 causes clearly visible geometric distortions in shapes. These distortions are the results of the projection-detection scheme of a particular scanner design, which can be understood by simple geometrical constructions.
  • FIGS. 19A and 19B are X-ray images from a Smith scanner and a Rapiscan scanner, respectively, which illustrate geometric distortions with colors. The distortions are particularly apparent in the shapes of the frames 1000 and wheels 1010 .
  • the RGB 3D color schemes of different vendors can be mapped into a single universal 2D (Z eff , d) space of physical parameters of Z eff and d.
  • the possibility of such mapping can be shown by looking at a mathematical description of the dual energy technique, and by looking at the depth of proprietary color schemes of two well known scanner vendors—Smith Detection and Rapiscan.
  • Equations (1) and (2) are as follows:
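For orientation, the standard dual-energy attenuation model (given here as background and not necessarily the patent's exact equations (1) and (2)) decomposes the low- and high-energy measurements into photoelectric and Compton parts:

$$-\ln\frac{I_L}{I_{0,L}} = a_p\, f_p(E_L) + a_c\, f_{KN}(E_L), \qquad -\ln\frac{I_H}{I_{0,H}} = a_p\, f_p(E_H) + a_c\, f_{KN}(E_H)$$

where $I_L, I_H$ are the low- and high-energy detector readings, $f_p$ and $f_{KN}$ are the photoelectric and Klein-Nishina (Compton) energy dependences, and the coefficients $a_p, a_c$ carry the $Z_{\text{eff}}$ and density information. Solving the pair for $a_p$ and $a_c$ yields the (P,C) values referred to below.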
  • any color image we see on the computer screen of a dual energy scanner is a 2D array of pixels with colors represented by (R,G,B) triplets.
  • the number of unique colors needed to maintain an acceptable visual quality of a dual energy color image can be quite large and approaches at least the number of colors of a medium class digital camera (~1,500,000). Nevertheless, it was discovered that the number of unique colors in an average baggage color image is approximately 7,000 colors for a Smith HiScan 6040i scanner and less than 100,000 for a Rapiscan 515 scanner.
  • FIGS. 24A and 24B are RGB_DNA 3×2D views for a Smith HiScan 6040i scanner and a Rapiscan 515 scanner, respectively.
  • FIGS. 25A and 25B are 3D rotating views for a Smith HiScan 6040i scanner and a Rapiscan 515 scanner, respectively.
  • the phrase “RGB_DNA” was assigned to the discovered color schemes, where the term “DNA” was used because all images, at least from the scanners of a particular model, will inherit this unique set of RGB colors.
  • mapping RGB_DNA to (P,C) builds a bridge between (Z eff , d) and RGB_DNA. This provides a uniform way to work with images of different vendors regardless of their color schemes.
  • FIG. 26 includes plots of the 2D (P,C) space (left plot) and the 3D RGB_DNA (right plot) for a Smith scanner. It is clear that the point of origin (0,0) of (P,C) reflects the RGB point of (255,255,255) on a 3D view of RGB_DNA. These points are responsible for the case of zero attenuation.
  • the next logical step consists of finding the relations between Black Pole (0,0,0) of the RGB_DNA and the Black Zone boundary of the (P,C) space. This point and the boundary are responsible for the scenario of the maximum possible measured attenuation. Beyond this point, the penetration is so weak that detectors “can not see it at all”.
  • in 3D RGB_DNA there is a single point-wise Black Pole, while in 2D (P,C) there is a stretched boundary.
  • the Black Zone boundary in (P,C) can be compressed/tightened to a single Black Pole or, what is more practical and convenient, the Black Pole of 3D RGB_DNA can be expanded and transformed to a curve; together with the unbent (piecewise-linear in our case) color curves of RGB_DNA, this 3D surface can be transformed to a 2D area similar to the 2D (P,C) space.
  • This mapping for the Smith scanner is shown in FIG. 28 .
  • the Rapiscan scanner color scheme can be mapped to the (P,C) space in the same manner as continuous elastic deformation.
  • FIG. 29 shows a plot of the colors resulting from the scanning of a wedge on a Smith scanner, and verifies that the colors form one color curve on the Smith RGB_DNA.
  • the RGB_DNA color of the overlapped materials can be calculated, as shown in FIG. 32 .
  • Color algebra is valid for any number K of overlapped layers:
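Consistent with the vector addition in (P,C) space shown in FIG. 31, the color algebra can be written in illustrative notation as

$$P_{\text{total}} = \sum_{k=1}^{K} P_k, \qquad C_{\text{total}} = \sum_{k=1}^{K} C_k$$

so the RGB_DNA color of K overlapped layers is obtained by summing their individual (P,C) contributions and mapping the total back to RGB_DNA.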
  • Equations (11) and (12) above express the effective atomic number Z and density d as functions of P and C for a single uniform layer of a material.
  • the formulas for Z and d can be derived from
  • FIG. 33 shows examples of images with their 3D RGB_DNA views. Only the image on the far left is in correct original RGB_DNA colors. The other two images are visually indistinguishable from the first one. Nevertheless, they are in fact bmp images converted back from gif and jpeg conversions of an original bmp image.
  • FIG. 34 shows accidental conversion from 24-bit bmp to 16-bit and back.
  • FIGS. 34 and 32 can be compared to see the difference between the incorrect RGB_DNA and the correct one.
  • Since the set of RGB_DNA colors in color images of dual energy x-ray scanners is fixed for each model and is much smaller than the 16,777,216 RGB triplets of 24-bit bmp, it is possible to make automated inspection of incoming images without the actual visual review of their RGB_DNA (3×2D or 3D). This process can detect the presence of images not created with that x-ray scanner, or a scanner that is out of calibration.
  • RGB_DNA itself as a limited subset of the entire 24-bit RGB set makes it possible.
  • the component designed and implemented for this purpose performs a fast search through already collected RGB_DNA sets for each pixel of an incoming image, and assures that the system will not be confused.
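A simplified sketch of such a check, assuming the scanner model's RGB_DNA has already been collected as a set of (R,G,B) triplets (names and structure are illustrative, not the patent's component):

```python
import numpy as np

def fraction_outside_rgb_dna(image: np.ndarray, rgb_dna: set) -> float:
    """Fraction of pixels whose color is NOT in the scanner's known RGB_DNA
    set; a non-zero fraction flags an image that did not come from this
    scanner model or a scanner drifting out of calibration."""
    pixels = image.reshape(-1, 3)
    outside = sum(1 for px in pixels
                  if tuple(int(c) for c in px) not in rgb_dna)
    return outside / len(pixels)

# rgb_dna would hold the pre-collected unique colors for the model, e.g.
# roughly 7,000 triplets for a Smith HiScan 6040i.
```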
  • the color scheme of the Smith scanner comprises 29 color curves that are stretched from the white RGB pole (255,255,255) to the black pole (0,0,0). There is one more line of gray colors used for edge enhancement, but these colors do not represent any materials.
  • the color component can determine that the RGB color of a pixel belongs to the RGB_DNA whole set of colors, but it cannot determine which one of the 29 curves this color is part of.
  • the first application is the physics-based feature vectors computation in pattern classification algorithms, which will be discussed in more detail below.
  • the second application uses z-lines for removing or keeping selected materials from an image. This is a much more flexible image filtering tool than so called “organic and metal stripping” provided by x-ray scanner manufacturers, as will be discussed in more detail below.
  • the angular coordinate ⁇ is an invariant for all points of the same z-line.
  • HSI color space is more suitable, or more natural, for z-lines than RGB, and extraction of a z-line's colors is a straightforward operation that is universal, working not only for the Smith color scheme, but for the Rapiscan color scheme as well.
  • FIG. 37 shows z-line numbers 3 and 15 together with their respective colors.
  • FIG. 38 shows a 3D view for extracted z-line numbers 1, 7 and 25 with respective colors.
  • the hue coordinate H of HSI is the carrier of Zeff, while the intensity I is responsible for the density of a material.
  • Saturation S is thus far unemployed. It is a free parameter (and remains so for the Smith and Rapiscan scanners) that can be responsible for carrying the proprietary “look and feel” of the color scheme. Colors of the same objects can appear different on Smith and Rapiscan scanners, having the same or close H and I but different S.
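  • To make the z-line extraction described above concrete, the following hedged Python sketch groups RGB_DNA colors into hue bins as stand-ins for z-lines. It uses HSV hue (from the standard colorsys module) as an approximation of HSI hue, and the 29-bin count mirrors the Smith color scheme; the binning itself is an illustrative assumption, not the actual extraction procedure.
    import colorsys

    def group_colors_by_hue(colors, n_zlines=29):
        # colors: iterable of (R, G, B) triplets in the 0-255 range
        zlines = {i: [] for i in range(n_zlines)}
        for r, g, b in colors:
            h, _s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            # hue is (approximately) invariant along a z-line, so it selects the bin
            zlines[min(int(h * n_zlines), n_zlines - 1)].append((r, g, b))
        return zlines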
  • Results of feature extraction for color images depend on the colors of an image, the color scheme (RGB, HSI or other) and the feature-computation algorithm itself. Mapping z-lines and their ordered colors to (P,C) space opens up an opportunity to exclude color from the feature extraction process. Instead of using three variables of a particular color space, such as R, G, and B in RGB, to feed the feature extraction algorithm, two variables of (P,C) can be used.
  • FIG. 39 is a plot of a fragment of typical 25-bin z-metrics for the first 9 z-lines.
  • the image analysis system 130 can be implemented with a general purpose computer. However, it can also be implemented with a special purpose computer, a programmed microprocessor or microcontroller with peripheral integrated circuit elements, an ASIC or other integrated circuits, hardwired electronic or logic circuits such as discrete element circuits, or programmable logic devices such as an FPGA, PLD, PLA or PAL. In general, any device on which resides a finite state machine capable of executing the code that implements the process steps of FIG. 7 can be used to implement the image analysis system 130.
  • Input channel 110 may be, include or interface to any one or more of, for instance, the Internet, an intranet, a PAN (Personal Area Network), a LAN (Local Area Network), a WAN (Wide Area Network) or a MAN (Metropolitan Area Network), a storage area network (SAN), a frame relay connection, an Advanced Intelligent Network (AIN) connection, a synchronous optical network (SONET) connection, a digital T1, T3, E1 or E3 line, a Digital Data Service (DDS) connection, a DSL (Digital Subscriber Line) connection, an Ethernet connection, an ISDN (Integrated Services Digital Network) line, a dial-up port such as a V.90 or V.34bis analog modem connection, a cable modem, an ATM (Asynchronous Transfer Mode) connection, or an FDDI (Fiber Distributed Data Interface) or CDDI (Copper Distributed Data Interface) connection.
  • Input channel 110 may furthermore be, include or interface to any one or more of a WAP (Wireless Application Protocol) link, a GPRS (General Packet Radio Service) link, a GSM (Global System for Mobile Communication) link, CDMA (Code Division Multiple Access) or TDMA (Time Division Multiple Access) link such as a cellular phone channel, a GPS (Global Positioning System) link, CDPD (Cellular Digital Packet Data), a RIM (Research in Motion, Limited) duplex paging type device, a Bluetooth radio link, or an IEEE 802.11-based radio frequency link.
  • Input channel 110 may yet further be, include or interface to any one or more of an RS-232 serial connection, an IEEE-1394 (Firewire) connection, a Fiber Channel connection, an IrDA (infrared) port, a SCSI (Small Computer Systems Interface) connection, a USB (Universal Serial Bus) connection or other wired or wireless, digital or analog interface or connection.


Abstract

A system and method for identifying objects of interest in image data is provided. The present invention utilizes principles of Iterative Transformational Divergence in which objects in images, when subjected to special transformations, will exhibit radically different responses based on the physical, chemical, or numerical properties of the object or its representation (such as images), combined with machine learning capabilities. Using the system and methods of the present invention, certain objects that appear indistinguishable from other objects to the eye or computer recognition systems, or are otherwise almost identical, generate radically different and statistically significant differences in the image describers (metrics) that can be easily measured.

Description

    REFERENCE TO RELATED APPLICATIONS
  • This application claims priority to U.S. Provisional Patent Application No. 60/661,477, filed Mar. 15, 2005, U.S. patent application Ser. No. 11/136,406, filed May 25, 2005, and U.S. patent application Ser. No. 11/136,526, filed May 25, 2005, all of which are incorporated herein by reference in their entirety.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • This invention relates to image analysis and, more specifically, to a system and method for identifying objects of interest in image data. This includes, but is not limited to a methodology for accomplishing image segmentation, clarification, visualization, feature extraction, classification, and identification.
  • 2. Background of the Related Art
  • Computer-aided image recognition systems rely solely on the pixel content contained in a two-dimensional image. The image analysis relies entirely on pixel luminance or color, and/or spatial relationship of pixels to one another. In addition, image recognition systems utilize analysis methodologies that often assume that distinctive characteristics of objects exist and can be differentiated.
  • However, most real-world image analysis problems involve limitations in accurately segmenting/classifying the objects. The following are some of the specific issues limiting existing image analysis methodologies:
    • (1) input data (image objects) need to be transformed into a structured data type;
    • (2) the analysis often does not adjust for the proper combination of scale, rotation, perspective, size, etc.;
    • (3) classes of objects need to be distinguishable using the image or its representation;
    • (4) grayscale image analysis still represents a serious problem in some applications; and
    • (5) color processing can be very computationally intensive.
    SUMMARY OF THE INVENTION
  • An object of the invention is to solve at least the above problems and/or disadvantages and to provide at least the advantages described hereinafter.
  • Therefore, an object of the present invention is to provide a system capable of detecting objects of interest in image data with a high degree of confidence and accuracy.
  • Another object of the present invention is to provide a system and method that does not directly rely on predetermined knowledge of an object's shape, volume, texture or density to be able to locate and identify a specific object or object type in an image.
  • Another object of the present invention is to provide a system and method of identifying objects of interest in image data that is effective at analyzing images in both two- and three-dimensional representational space using either pixels or voxels.
  • Another object of the present invention is to provide a system and method of distinguishing a class of known objects from objects of similar color and texture whether or not they have been previously explicitly observed by the system.
  • Another object of the present invention is to provide a system and method of identifying objects of interest in image data that works with very difficult to distinguish/classify image object types, such as: (i) apparent random data; (ii) unstructured data; and (iii) different object types in original images.
  • Another object of the present invention is to provide a system and method of identifying objects of interest in image data that can cause either convergence or divergence (clusterization) of explicit or implicit image object characteristics that can be useful in creating discriminating features/characteristics.
  • Another object of the present invention is to provide a system and method of identifying objects of interest in image data that can preserve object self-similarity during transformations.
  • Another object of the present invention is to provide a system and method of identifying objects of interest in image data that is stable and repeatable in its behavior.
  • To achieve the at least above objects, in whole or in part, there is provided a method of using at least one template in at least one predetermined color space that characterizes an image source, comprising receiving an image from the image source, mapping the image to the at least one predetermined color space to yield a mapped image, and comparing the mapped image to the at least one template.
  • To achieve the at least above objects, in whole or in part, there is also provided a method of using at least one template in at least one predetermined color space that characterizes an image source, comprising receiving a plurality of images from the image source, mapping the plurality of images to the at least one predetermined color space to yield mapped images, and comparing the mapped images to the at least one template.
  • To achieve the at least above objects, in whole or in part, there is also provided a method of using at least one template in at least one predetermined color space that characterizes an image source, comprising receiving an image from a different image source, mapping the image to the at least one predetermined color space to yield a mapped image, and comparing the mapped image to the at least one template.
  • To achieve the at least above objects, in whole or in part, there is also provided a system for using at least one template in at least one predetermined color space that characterizes an image source, comprising an image receiving unit that receives an image from the image source, an image mapping unit that maps the image to the at least one predetermined color space to yield at least one mapped image, and a comparing unit that compares the mapped image to the at least one template.
  • Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objects and advantages of the invention may be realized and attained as particularly pointed out in the appended claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Patent Office upon request and payment of the necessary fee.
  • The invention will be described in detail with reference to the following drawings, in which like reference numerals refer to like elements, wherein:
  • FIG. 1 is a bifurcation diagram;
  • FIG. 2 is a diagram illustrating how three complementary paradigms are used to obtain intelligent image informatics, in accordance with one embodiment of the present invention;
  • FIG. 3 is a block diagram of a system for identifying an object of interest in image data, in accordance with one embodiment of the present invention;
  • FIGS. 4A-5C are transfer functions applied to the pixel color of the image, in accordance with the present invention;
  • FIG. 6A is an input x-ray image of a suitcase, in accordance with the present invention;
  • FIG. 6B is the x-ray image of FIG. 6A after application of the image transformation divergence process of the present invention;
  • FIG. 7 is a block diagram of an image transformation divergence system and method, in accordance with one embodiment of the present invention;
  • FIGS. 8A-8M are x-ray images of a suitcase at different stages in the image transformation recognition process of the present invention;
  • FIG. 8N is an example of a divergence transformation applied to an x-ray image during the image transformation divergence process of the present invention;
  • FIG. 9 is an original input medical image of normal and cancerous cells;
  • FIG. 10 is the image of FIG. 9 after application of the image transformation recognition process of the present invention;
  • FIG. 11 is an original input ophthalmology image of a retina;
  • FIG. 12 is the image of FIG. 11 after application of the image transformation recognition process of the present invention;
  • FIG. 13 is a flowchart of a method of creating a Support Vector Machine model, in accordance with one embodiment of the present invention;
  • FIG. 14 is a flowchart of a method of performing a Support Vector Machine operation, in accordance with one embodiment of the present invention;
  • FIGS. 15A-15C are medical x-ray images;
  • FIGS. 16A and 16B are x-ray images from a Smith Detection (Smith) x-ray scanner and a Rapiscan x-ray scanner, respectively;
  • FIG. 17 is a schematic diagram of an x-ray scanner;
  • FIG. 18 is a schematic diagram of an x-ray source used in the x-ray scanner of FIG. 17;
  • FIGS. 19A and 19B are X-ray images from a Smith scanner and a Rapiscan scanner, respectively, which illustrate geometric distortions with colors;
  • FIG. 20 is a schematic diagram of an x-ray scanner;
  • FIG. 21 is a plot of (P,C) space with Zeff (P,C)=const;
  • FIG. 22 is a plot showing a 3D view of (P,C) space with Zeff (P,C)=const;
  • FIGS. 23A and 23B are plots showing 2D and 3D view of (P,C) space with d(P,C)=const;
  • FIG. 24A is a plot showing an RGB_DNA 3×2D view for a Smith HiScan 6040i scanner;
  • FIG. 24B is a plot showing an RGB_DNA 3×2D view for a Rapiscan 515 scanner;
  • FIG. 25A is a plot showing an RGB_DNA 3D view for a Smith HiScan 6040i scanner;
  • FIG. 25B is a plot showing an RGB_DNA 3D view for a Rapiscan 515 scanner;
  • FIG. 26 are plots showing the modeling of 2D (P,C) space on the left and 3D RGB_DNA on the right for a Smith scanner;
  • FIG. 27 are plots showing the sequence of (P,C) 2D elastic transformation to RGB_DNA (and back);
  • FIG. 28 is a plot of a 2D (P,C) representation of a Smith RGB_DNA set of unique colors;
  • FIG. 29 is a plot showing the color curve(s) of Zeff=const on the Smith RGB_DNA;
  • FIG. 30 is a schematic diagram of an x-ray scanner with an object to be scanned that consists of multiple layers of materials;
  • FIG. 31 is a plot showing 2D (P,C) space with vector addition;
  • FIG. 32 is a plot showing a color algebra example for a Smith calibration bag consisting of overlapped materials;
  • FIG. 33 are examples of images with their 3D RGB_DNA views;
  • FIG. 34 are plots showing incorrect RGB_DNA as a result of accidental conversion from 24-bit bmp to 16-bit bmp and back to 24 bit bmp;
  • FIG. 35 are plots showing the fine structure of z-lines on their way from the central region of the 3D RGB cube towards the black pole with RGB=(0,0,0);
  • FIG. 36 is a plot showing the z-lines shown in FIG. 35 from a point in RGB space lying on the prolongation of the major diagonal of the RGB cube;
  • FIG. 37 are plots showing examples of extracted z-lines and their colors in a 3×2D RGB_DNA view;
  • FIG. 38 are plots showing extracted z-lines numbers 1, 7 and 25 and their colors in a 3D RGB_DNA view;
  • FIG. 39 is a plot showing a fragment of typical 25-bin z-metrics for the first nine z-lines;
  • FIG. 40 are organic-only, normal and metal-only images and their respective 3D RGB_DNA;
  • FIG. 41 shows an original image and its RGB_DNA with no filters applied;
  • FIG. 42 shows the image of FIG. 41 with a z-filter applied to keep light organics;
  • FIG. 43 shows the image of FIG. 41 with a z-filter applied to keep heavy organics;
  • FIG. 44 shows the image of FIG. 41 with a z-filter applied to keep heavy organics and metal; and
  • FIG. 45 shows the image of FIG. 41 with a z-filter applied to keep light organics and metal.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • Definition of Terms
  • The following definitions hold throughout the contents of this application. If additional or alternative definitions of the same or similar words are provided herein, those definitions should be included herein as well.
  • “Statistically identical” or “statistically indistinguishable”: Two sets of data are referred to as “statistically identical” or “statistically indistinguishable” if under one or more types of statistics or observation there is almost no discernable difference between them.
  • Point operation: A point operation is a mapping of a plurality of data from one space to another space which, for example, can be a point-to-point mapping from one coordinate system to a different coordinate system. Such data can be represented, for example, by coordinates such as (x, y) and mapped to different coordinates (α, β), such as values of pixels in an image.
  • Z effective (Zeff): The effective atomic number for a mixture/compound of elements. It is the atomic number of a hypothetical uniform material of a single element with an attenuation coefficient equal to that of the mixture/compound. Z effective can be a fractional number and depends not only on the content of the mixture/compound, but also on the energy spectrum of the x-rays.
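  • For reference only, one commonly used textbook approximation (not stated in this application, and ignoring the energy-spectrum dependence noted above) expresses the effective atomic number of a mixture as a power-law mean, $Z_{\text{eff}} \approx \left(\sum_i f_i Z_i^{\,2.94}\right)^{1/2.94}$, where $f_i$ is the fraction of the total electrons contributed by element $i$.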
  • Divergence: The movement or spreading of points in vector space from within a local neighborhood (vicinity) to radically different values or locations.
  • “Divergence transform” or “bifurcation transform”: The phrases “divergence transform” and “bifurcation transform” are used interchangeably and each results from a nonlinear or discontinuous remapping of points in vector space which when operating on data such as an entire image, image segment, or subset of an image, causes information relating to the content of the data that otherwise would not have been readily or easily apparent to become available or more easily apparent or accessible.
  • “Divergence transformation” or “bifurcation transformation”: The phrases “divergence transformation” and “bifurcation transformation” are used interchangeably and refer to a transform which, when operating on data such as a segment or subset of an image, causes information relating to the content of the data that otherwise would not have been readily or easily apparent to become available or more easily apparent or accessible. The intent of the transform is to cause a bifurcation of the image data.
  • For example, when applying a divergence transformation to an image or a segment of the image, information regarding the contents of the image which would not have been easily recognized prior to application of the divergence transformation becomes more apparent or known. For example, two objects in the same image that are almost indistinguishable become distinguishable after the divergence transformation is applied.
  • Hyperspectral data: Hyperspectral data is data that is obtained from a plurality of sensors at a plurality of wavelengths or energies. A single pixel or hyperspectral datum can have hundreds or more values, one for each energy or wavelength. Hyperspectral data can include one pixel, a plurality of pixels, or a segment of an image of pixels, etc., with said content. As contained herein, it should be noted that hyperspectral data can be treated in a manner analogous to the manner in which data resulting from a divergence transformation is treated throughout this application for systems and methods for threat or object recognition, identification, image normalization and all other processes and systems discussed herein.
  • For example, a divergence transformation can be applied to hyperspectral data in order to extract information from the hyperspectral data that would not otherwise have been apparent. Divergence transformations can be applied to a plurality of pixels at a single wavelength of hyperspectral data or multiple wavelengths of one or more pixels of hyperspectral data in order to observe information that would otherwise not have been apparent.
  • Nodal point: A nodal point is a point in an image transformation or series of image transformations where similar pixel values exhibit a significantly distinguishable change in value. A pixel is a unitary value within a 2D space; in a multi-dimensional space the corresponding unitary value is a voxel.
  • Object: An object can be a person, place or thing.
  • Object of interest: An object of interest is a class or type of object such as explosives, guns, tumors, metals, knives, camouflage, etc. An object of interest can also be a region with a particular type of rocks, vegetation, etc.
  • Threat: A threat is a type of object of interest which typically but not necessarily could be dangerous.
  • Image receiver: An image receiver can include a process, a processor, software, firmware and/or hardware that receives image data.
  • Image mapping unit: An image mapping unit can be a processor, a process, software, firmware and/or hardware that maps image data to predetermined coordinate systems or spaces.
  • Comparing unit: A comparing unit can be hardware, firmware, software, a process and/or processor that can compare data to determine whether there is a difference in the data.
  • Color space: A color space is a space in which data can be arranged or mapped. One example is a space associated with red, green and blue (RGB). However, it can be associated with any number and types of colors or color representations in any number of dimensions.
  • HSI color space: A color space where data is arranged or mapped by Hue, Saturation and Intensity.
  • Predetermined color space: A predetermined color space is a space that is designed to represent data in a manner that is useful and that could, for example, cause information that may not have otherwise been apparent to present itself or become obtainable or more apparent.
  • RGB DNA: RGB DNA refers to a representation in a predetermined color space of most or all possible values of colors which can be produced from a given image source. Here, the values of colors again are not limited to visual colors but are representations of values, energies, etc., that can be produced by the image system.
  • Signature: A signature is a representation of an object of interest or a feature of interest in a predetermined space and a predetermined color space. This applies to both hyperspectral data and/or image data.
  • Template: A template is part or all of an RGB DNA that corresponds to an image source, or that corresponds to a feature or object of interest, for part or all of a mapping to a predetermined color space.
  • Algorithms: From time to time, transforms and/or divergence transformations are referred to herein as algorithms.
  • Algorithms and systems discussed throughout this application can be implemented using software, hardware, and firmware.
  • Modality: Any of the various types of equipment or probes used to acquire images. Radiography, CT, ultrasound and magnetic resonance imaging are examples of modalities in this context.
  • The analysis capabilities of the present invention can apply to a multiplicity of input devices created from different electromagnetic and sound emanating sources such as ultraviolet, visual light, infra-red, gamma particles, alpha particles, etc.
  • Image Transformation Divergence System and Method—General Overview
  • The present invention identifies objects of interest in image data utilizing image conditioning and data analysis in a process herein termed “Image Transformation Recognition” (ITR) or, equivalently, “Image Transformation Divergence” (ITD). The terms ITR and ITD refer to the same process, and may be used interchangeably herein.
  • The ITD process can cause different yet almost identical objects in a single image to diverge in their measurable properties. An aspect of the present invention is the discovery that objects in images, when subjected to special transformations, will exhibit radically different responses based on the pixel values of the imaged objects. Using the system and methods of the present invention, certain objects that appear almost indistinguishable from other objects to the eye or computer recognition systems, or are otherwise identical, generate radically different and significant differences that can be measured.
  • Another aspect of the present invention is the discovery that objects in images can be driven to a point of non-linearity by certain transformation functions. The transformation functions can be applied singly or in a sequence, so that the behavior of the system progresses from one state through a series of changes to a point of rapid departure from stability called the “point of divergence.”
  • FIG. 1 is an example of a bifurcation diagram illustrating iterative uses of divergence transforms, where each node represents an iteration or application of another divergence transform. A single image is represented as a simple point on the left of the diagram. There are several branches in the diagram (at lines A, B and C) as the line progresses from the original image representation on the left, indicating node points where bifurcation occurs (“points of bifurcation”). In this example, three divergence transforms were used in series at points A, B and C. In this example, each divergence transform results in a bifurcation of the image objects or data. At point A, some objects that are very dissimilar from the objects of interest diverge away from the most likely object of interest candidates (e.g., threat vs. non-threat, malignant vs. benign tumor, vegetation vs. camouflage, etc.). This is defined mathematically as reaching a “Repellor Point.”
  • At point B, additional objects are rejected and diverge away from the remaining object of interest candidates. At point C, the search is further refined and additional objects are rejected and diverge away from the remaining object of interest candidates. This spatial filtering process is analogous to applying narrower and narrower band pass filters in the frequency domain.
  • At a certain number of iterations (beyond point C in this example), the object integrity may deteriorate or no further improvement in the detection process is realized. At this point, other methodologies, e.g., Machine Learning Algorithms (MLAs) may be applied to further distinguish the objects of interest from other object of interest candidates.
  • Another aspect of the present invention is that one can apply the “principle of divergence” to the apparent stability of fixed points or pixels in an image and, by altering one or more parameter values, give rise to a set of new, distinct and clearly divergent image objects. Because each original object captured in an image responds uniquely at its point of divergence, the methods of the present invention can be used in an image recognition system to distinguish and measure objects. It is particularly useful in separating and identifying objects that have almost identical color, density and volume.
  • The system and methods of the present invention provides at least the following advantages over prior image extraction methodologies:
    • (1) It is a system capable of detecting objects with a high degree of confidence;
    • (2) It does not rely only on a prior knowledge of an objects shape, volume, texture or density to be able to locate and identify a specific object or object type in the image;
    • (3) It is effective at analyzing images in multi-dimensional representational space using either pixels or voxels;
    • (4) It is most powerful where a class of known objects is to be distinguished from objects of similar color and texture, whether or not they have been previously observed or trained by the ITD system;
    • (5) It works with very difficult to distinguish/classify image object types, such as cases where different object types in the original images (for example, threats and non-threats, or different types of threats) have almost indistinguishable features when analyzed;
    • (6) It can more effectively apply statistical analysis tools to distinguish data;
    • (7) It can cause either convergence or divergence of image object features;
    • (8) It can preserve object geometrical integrity during transformations; and
    • (9) It is stable and repeatable in its behavior.
  • In one exemplary embodiment of the present invention, special transformations are applied to images in an iterative “filter chain” sequence. The nature of the sequence of transforms causes objects in the image to exhibit radically different responses based on their pixel value(s) such as color (that are related to the physical properties inherent in the original objects in the image). Using the sequencing process, certain objects that appear almost indistinguishable to the eye or computer recognition systems from other objects, generate radically different and significant differences that can be easily measured.
  • As transform parameters are increased, the behavior of the objects progresses from one of simple stability, through a sequence of changes, to a state of a unique and radical change. The state of unique and radical change comes about due to a characteristic “signature” associated with the object of interest's interaction with the source used to create the image. These signatures are exploited by adapting the divergence transforms of the present invention.
  • The ITD process works with an apparently stable set of fixed points or pixels in an image and, by altering one or more parameter values, gives rise to a set of new, distinct, and clearly divergent image objects. Commonly used and understood transforms work within the domain where images maintain equilibrium.
  • As will be discussed in more detail below, the ITD method starts by first segmenting the image into objects of interest, then applying different filter sequences to the same original pixels in the identified objects of interest using the process. In this way, the process is not limited to a linear sequence of filter processing.
  • Because of the unique nature of the segmentation process using this iterative approach, objects within objects can be examined. As an example, an explosive inside of a metal container can be located by first locating all containers, remapping the original pixel data with known coordinates in the image and then examining the remapped original pixels in the identified object(s) in the image for threats with additional filter sequences.
  • With the ITD process, transforms can be tuned to optimize the distinction of the objects of interest in the images. In addition, the process works for both image segmentation and feature generation through an iterative process of applying image transforms. As discussed above, this is defined mathematically as reaching a Repellor Point.
  • An aspect of present invention is the use of three complementary paradigms to extract information out of images that would otherwise not be readily available. This process is herein referred to as “Intelligent Image Informatics”. As illustrated in FIG. 2, the three complementary paradigms include: (1) Image Processing; (2) Pattern Classification (Contextual Imagery with Machine Learning); and (3) χ-Physics.
  • Imaging can take place in the spatial domain, spectral domain, RGB_DNA space and/or feature space. The Feature Extraction Process can use the image's describers/qualifiers/characteristics from the above-mentioned domains. These features can be analyzed by many pattern classification techniques, also called Machine Learning Algorithms, such as Support Vector Machines (SVM) and decision trees/graphs. χ-Physics refers to the physics that governs the image source, such as dual energy scanning systems, the z-effective exhibited by different materials and the RGB_DNA that characterizes the image source. All of these methodologies and concepts will be explained in more detail below.
  • The ITD methodologies of the present invention reveal signatures in radiographic image objects that have previously been invisible to the human eye. The application of specific non-linear functions to grey-scale or color radiographic images is the basis of ITD. Due to the Compton and photoelectric effects, objects in the image exhibit unique, invariant responses to the ITD algorithms based on their physical interactions with the electromagnetic beam. By applying a combination of complementary functions in an iterative fashion, objects of very similar grey-scale or color content in the original image significantly diverge at a point of non-linearity. This divergence causes almost statistically equivalent objects in the original image to display significant density, color and pattern differences. Different algorithms are used for distinguishing objects that exhibit different ranges of effective atomic numbers (Zeff). The algorithms are tuned to be optimal within certain fractional ranges of resultant electromagnetic Compton/photoelectric combinations.
  • Both spatial and spectral analyses are utilized. The probability of achieving accurate results can be improved by utilizing multiple passes. With each run of the ITD process, a new hyperplane of image pixel data is created for each object. The combination of the original image plus the newly-created hyperplanes is mapped to form a multi-spectral hypercube. The hypercube has pixel dimensions Pn, where n is the total number of outputs from all iterations.
  • The hypercube now contains spectral bands for each object that are the result of the object's response to each ITD iteration. This is quite similar to the creation of hyperspectral data that is collected by sensors from the reflectance of objects. The hypercube data contains both spatial and spectral components that can be used for effective pattern classification rule generation.
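  • A minimal sketch of assembling such a hypercube is given below; the transform list, function names and single-plane assumption are illustrative only and are not the implementation of this application.
    import numpy as np

    def build_hypercube(original, itd_transforms):
        # original: (H, W) image plane; itd_transforms: list of callables,
        # one per ITD iteration, each returning a new (H, W) hyperplane
        planes = [original]
        current = original
        for transform in itd_transforms:
            current = transform(current)   # each iteration adds a spectral band
            planes.append(current)
        return np.stack(planes, axis=-1)   # (H, W, n): spatial + spectral axes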
  • Empirical testing has shown that objects retain their characteristic “response-based signatures” for a wide range of fractional Compton/photoelectric results, even when there is significant pixel mixing due to overlapping of other objects. This should not be completely unexpected since differences in a given object's thickness can generate the same Zeff with the variability being expressed as a change in density.
  • Exemplary Embodiments
  • A. General System and Method for Identifying an Object of Interest
  • FIG. 3 is a block diagram of a system 100 for identifying an object of interest in image data, in accordance with one embodiment of the present invention. The system 100 comprises an input channel 110 for inputting image data 120 from an image source (not shown) and an image analysis system 130. In one preferred embodiment of the present invention, the image analysis system 130 generates transformed image data utilizing ITD, in which the object of interest is distinguishable from other objects in the image data.
  • The object of interest can be any type of object. For example, the object of interest can be a medical object of interest, in which case the image data can be computer tomography (CT) image data, x-ray image data, or any other type of medical image data. As another example, the object of interest can be a threat object, such as weapons, explosives, biological agents, etc., that may be hidden in luggage. In that case, the image data is typically x-ray image data from luggage screening machines.
  • At least one divergence transformation, preferably a point operation, is preferably utilized in the image analysis system 130. A point operation converts a single input image into a single output image. Each output pixel's value depends only on the value(s) of its corresponding pixel in the input image. Input pixel coordinates correlate to output pixel coordinates such that Xi, Yi →Xo, Yo. A point operation does not change the spatial relationships within an image. This is quite different from local operations where the value of neighboring pixels determines the value of the output pixel.
  • Point operations can correlate both gray levels and individual color channels in images. One example of a point operation is shown in the transfer function of FIG. 4A. In FIG. 4A, 8 bit (256 shades of gray) input levels are shown on the horizontal axis and output levels are shown on the vertical axis. If one were to apply the point operation of FIG. 4A to an input image, there would be a 1 to 1 correlation between the input and the output (transformed) image. Thus, input and output images would be the same.
  • Point operations are predictable in how they modify the histogram of an image. Point operations are typically used to optimize images by adjusting the contrast or brightness of an image. This process is known as contrast enhancing. They are typically used as a copying technique, except that the pixel values are modified according to the specified transfer function. Point operations are also typically used for photometric calibration, contrast enhancement, monitor display calibration, thresholding and clipping to limit the number of levels of gray in an image. The point operation is specified by the transformation function ƒ and can be defined as:

  • B(x, y)=ƒ[A(x, y)],
  • where A is an input image and B is an output image.
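  • A minimal sketch of this point operation for an 8-bit image, implemented as a 256-entry lookup table, is shown below; the identity curve is only an example, corresponding to the 1-to-1 case of FIG. 4A.
    import numpy as np

    def apply_point_operation(image, transfer):
        # image: (H, W) or (H, W, C) uint8 array; transfer: 256 output levels
        # Each output pixel depends only on its corresponding input pixel,
        # so spatial relationships are unchanged.
        return transfer[image]

    identity = np.arange(256, dtype=np.uint8)   # output equals input (FIG. 4A)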
  • The at least one divergence transformation used in the image analysis system 130 can be either linear or non-linear point operations, or both. Non-linear point operations are used for changing the brightness/contrast of a particular part of an image relative to the rest of the image. This can allow the midpoints of an image to be brightened or darkened while maintaining the blacks and whites in the picture.
  • FIG. 4B is a linear transfer function, and FIGS. 4C-4E illustrate transformations of some non-linear point operations. An aspect of the present invention is the discovery that the transfer function can be used to bring an image to a point where two initially close colors become radically different after the application of the transfer function. This typically requires a radical change in the output slope of the resultant transfer function of FIG. 5A.
  • The present invention preferably utilizes radical luminance (grayscale), color channel or combined luminance and color channel transfer functions to achieve image object differentiation for purposes of image analysis and pattern recognition of objects. The placement of the nodal points in the transfer function(s) is one key parameter. An example of nodal point placement is shown in the transfer function illustrated in FIG. 5B. The nodal points in the transfer functions used in the present invention are preferably placed so as to frequently create radical differences in color or luminance between image objects that are otherwise almost identical.
  • This is illustrated in the sample transfer function of FIG. 5C. Using this transformation, two objects that are very close in color/luminance in an original image would be on opposite sides of a grayscale representation in the output (transformed) image. FIG. 6A shows an input image, and FIG. 6B shows the changes made to the input image (the transformed image obtained) as a result of applying the transfer function of FIG. 5C. The input image is an x-ray image of a suitcase taken by a luggage scanner. In this example, the objects of interest are shoes 300 and a bar of explosives 310 on the left side of the suitcase.
  • Note that the orange background has become a very different color from the shoes 300 and the bar 310 on the left side of the suitcase. The transfer function of FIG. 5C uniquely delineates the objects of interest, while eliminating the background clutter in the image.
  • As can be seen by the input and transformed images shown in FIGS. 6A and 6B, respectively, the orange background in the image makes a radical departure from the orange objects of interest (300 and 310) and other objects that are almost identical to the objects of interest. The use of different nodal points in the transfer function will cause the objects of interest to exhibit a different color from other objects.
  • Data points connecting the nodes can be calculated using several established methods. A common method of mathematically calculating the data points between nodes is through the use of cubic splines.
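  • The following hedged sketch shows one way to build such a transfer curve from nodal points with a cubic spline; the nodal points are invented for illustration and are not those of FIG. 5B or FIG. 5C.
    import numpy as np
    from scipy.interpolate import CubicSpline

    def curve_from_nodes(nodes):
        # nodes: list of (input_level, output_level) pairs in the 0-255 range
        x, y = zip(*sorted(nodes))
        spline = CubicSpline(x, y)             # interpolates between the nodes
        levels = np.arange(256)
        return np.clip(spline(levels), 0, 255).astype(np.uint8)

    # Example nodal points that push nearby mid-tones toward opposite extremes.
    lut = curve_from_nodes([(0, 0), (100, 20), (128, 250), (160, 10), (255, 255)])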
  • Additional imaging processes are preferably applied in the process of object recognition to accomplish specific tasks. Convolutions such as median and dilate algorithms cause neighboring pixels to behave in similar ways under the transfer function, and may be applied to assure the objects' integrity during the transformation process.
  • FIG. 7 is a block diagram of one preferred embodiment of the image analysis system 130 of FIG. 3, along with a flowchart of a method for identifying an object of interest in image data using the image analysis system 130. The image analysis system 130 includes an image conditioner 2000 and a data analyzer 3000.
  • Some of the method steps will be explained with reference to the images shown in FIGS. 8A-8M, which are x-ray images of a suitcase at different stages in the image analysis process. These images are just one example of the types of images that can be analyzed with the present invention. Other types of images, e.g., medical images from X-ray machines or CT scanners, or quantized photographic images can also be analyzed with the system and methods of the present invention.
  • The method starts at step 400, where the image may optionally be normalized. The normalization process preferably comprises the following processes: (1) referencing; (2) benchmarking; (3) conformity process; and (4) correction process.
  • The referencing process is used to get a reference image containing an object of interest for a given type of X-ray machine. This process consists of passing a container containing one or more objects of interest into a reference X-ray machine to get a reference image. The referencing process is preferably performed once for each X-ray machine model/type/manufacturer.
  • The benchmarking process is used to get a transfer function used to adjust the colors of an image taken by a given X-ray machine that is not the reference X-ray machine. This process consists of passing a reference container into any given X-ray machine to get the image of this reference container, which is herein referred to as the “current image.” Then, the current image obtained for this X-ray machine is compared with the reference image. The difference between the current image and the reference image is used to create a transfer function.
  • As a transformation of the image's colors of a container, the benchmarking process determines the transfer function that maps all the colors of the current image color scheme (“current color scheme”) to the corresponding colors that are present in the reference color scheme of the reference image. The transfer function applied to the current image transforms it into the reference image.
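  • As an illustration of the benchmarking step, the sketch below derives a color transfer mapping from the colors produced by a given machine to the closest colors of the reference machine's scheme; the nearest-neighbor matching is an assumption, since this application does not specify how the mapping is computed.
    import numpy as np

    def build_color_transfer(current_colors, reference_colors):
        # current_colors, reference_colors: arrays of (R, G, B) triplets
        ref = np.asarray(reference_colors, dtype=float)
        mapping = {}
        for color in np.asarray(current_colors, dtype=float):
            d = np.linalg.norm(ref - color, axis=1)   # distance to each reference color
            mapping[tuple(color.astype(int))] = tuple(ref[int(np.argmin(d))].astype(int))
        return mapping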
  • The adjustment of the colors of X-ray machines of a different type/model/manufacturer requires a distinct and specific calibration process. All X-ray machines are preferably also put through a normalization process. X-ray machines of a same type/model/manufacturer are preferably normalized using the same calibration process. All X-ray machines of different types are preferably calibrated and all the machines, no matter their type, are preferably normalized.
  • The conformity process is preferably used to correct the image color representation of any objects that pass through a given X-ray machine. For a given X-ray machine, the conformity process corrects the machine's image color representation (color scheme) in such a way that the color scheme of a reference image will fit the reference color scheme of the reference container.
  • The conformity process preferably consists of applying the transfer function to each bag that passes into an X-ray machine to “normalize” the color output of the machine. This process is specific to every X-ray machine because of the machine's specific transfer function. Each time a container passes through the X-ray machine, the conformity process is preferably applied.
  • The correction process is preferably used to correct the images from the X-ray machine. It preferably minimizes image distortions and artifacts. X-ray machine manufacturers use detector topologies and algorithms that could have negative effects on the image geometry and colors. Geometric distortions, artifacts and color changes made by the manufacturer have negative impacts on images that are supposed to rigorously represent the physical aspects and nature of the objects that are passed through the machine.
  • Unlike the conformity process that preferably compensates in a specific way the randomness of the X-ray detector sensitivities of every X-ray machine, the correction process is preferably the same for all X-ray machines of a given model/type/manufacturer.
  • Next, at step 410, image processing is performed on the image. Many different types of image processing techniques can be used including, but not limited to, ITD, spatial and spectral transformations, convolutions, histogram equalization and gamma adjustments, color replacement, band-pass filtering, image sharpening and blurring, region growing, hyperspectral image processing, color space conversion, etc.
  • In one preferred embodiment, ITD is used for the image processing step 410, and as such the image is segmented by applying a color-determining transform that specifically affects those objects that match certain color/density/effective atomic number characteristics. Objects of interest are isolated and identified by their responses to the sequence of filters. Image segmentation is preferably performed using a series of sub-steps.
  • FIGS. 8B-8H show the image after each segmentation sub-step. The resulting areas of green in FIG. 8G are analyzed to see if they meet a minimum size requirement. This removes the small green pixels. The remaining objects of interest are then re-mapped to a new white background, resulting in the image of FIG. 8H. Most of the background, organic substances, and metal objects are eliminated in this step, leaving the water bottle 500, fruit 510, peanut butter 520 and object of interest 530.
  • At step 420, features are extracted by the data analyzer 3000, which subjects the original pixels of the areas of interest identified in step 410 to at least one feature extraction process. It is at this step that at least one divergence transformation is applied to the original pixels of the areas of interest identified in step 410.
  • In the image examples shown in FIGS. 8I-8M, two feature extraction processes are applied. The first process in this example uses the following formulation (in the order listed):
    • (1) Replace colors
    • (2) Maximum filter 3×3
    • (3) Median filter 3×3
    • (4) Levels and gamma: luminance = 66 black level, 255 white level; green levels = 189 black, 255 white, and gamma = 9.9
    • (5) Apply divergence transformation
    • (6) Maximum filter 3×3
    • (7) Replace black with white
    • (8) Median filter 3×3
  • The image shown in FIG. 8I results after process step (4) above, the image shown in FIG. 8J results after process step (5) above, and the image shown in FIG. 8K results after process step (7) above. Note that most of the fruit 510 and the water bottle 500 pixels on the lower left-hand side of the image in FIG. 8K have either disappeared or gone to a white color. This is in contrast to the preservation of large portions of the peanut butter jar 520 pixels and object of interest 530 pixels, which are now remapped to a new image in preparation for the second feature extraction process (FEP).
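  • A hedged sketch of this kind of filter chain is shown below using common image-processing operations; the color-replacement step is omitted, the luminance levels are not applied, and the divergence curve is a stand-in, so the sketch illustrates only the general structure of the listed steps, not the actual parameters of the process.
    import numpy as np
    from scipy.ndimage import maximum_filter, median_filter

    def levels_and_gamma(channel, black, white, gamma=1.0):
        # Stretch an 8-bit channel between black/white levels with a gamma curve.
        x = np.clip((channel.astype(float) - black) / max(white - black, 1), 0, 1)
        return (255 * x ** (1.0 / gamma)).astype(np.uint8)

    def feature_extraction_pass(image, divergence_lut):
        # image: (H, W, 3) uint8; divergence_lut: 256-entry lookup table
        out = image.copy()                                           # step 1 (replace colors) omitted
        out = maximum_filter(out, size=(3, 3, 1))                    # step 2: maximum filter 3x3
        out = median_filter(out, size=(3, 3, 1))                     # step 3: median filter 3x3
        out[..., 1] = levels_and_gamma(out[..., 1], 189, 255, 9.9)   # step 4: green levels/gamma
        out = divergence_lut[out]                                    # step 5: divergence transformation
        out = maximum_filter(out, size=(3, 3, 1))                    # step 6: maximum filter 3x3
        out[np.all(out == 0, axis=-1)] = 255                         # step 7: replace black with white
        return median_filter(out, size=(3, 3, 1))                    # step 8: median filter 3x3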
  • At step 430, data conditioning is performed by the data analyzer 3000, in which the data is mathematically transformed to enhance its efficiency for the MLA to be applied at step 440. In addition, meta data is created (new metrics derived from the metrics created in the feature extraction step 420), such as the generation of hypercubes. This metadata can consist of any feature that is derived from the initial features generated from the spatial domain. Meta data are frequently features of the spectral domain, Fourier space, RGB_DNA, and z-effective, among others.
  • Machine Learning Algorithms (MLAs) are capable of automatic pattern classification. Pattern classification techniques automatically determine extremely complex and reliable relationships between the image characteristics, also called features. These characteristics are used by the rules-base, which exploits the relationships to automatically detect objects in the images.
  • At step 440, machine learning algorithms (MLAs) are applied by the data analyzer 3000. The feature extraction process of step 420 is applied in order to represent the images with numbers. The MLAs applied at step 440 are responsible for generating the detection system that determines if an object of interest is present. In order to work properly, MLAs need structured data types, such as numbers and qualitative/categorical data as inputs. Since images are unstructured data types, the Feature Extraction Process is applied to transform the image or segments of an image into numbers. Each number is a metric that represents a characteristic of the image. Each image is associated with a collection of the metrics that represents it. The collection of the metrics related to an image is herein referred to as a vector. MLAs analyze the vector of the metrics for all the images and find the metrics' relationships that make up a “rules-base.”
  • The metrics created by the feature extraction process 420 to reflect the image content include, but are not limited to, mean, median, standard deviation, rotation cosine measures, kurtosis, skewness of colors, spectral histogram, co-occurrence measures, Gabor wavelet measures, unique color histograms, percent response, and arithmetic entropy measures.
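  • A minimal sketch of turning one segmented area of interest into such a vector of metrics is given below; it covers only a few of the measures listed above and is not the full feature extraction process.
    import numpy as np
    from scipy.stats import skew, kurtosis, entropy

    def metric_vector(pixels):
        # pixels: 1-D array of values from one object/area of interest
        hist, _ = np.histogram(pixels, bins=256, range=(0, 255), density=True)
        return np.array([
            pixels.mean(),
            np.median(pixels),
            pixels.std(),
            skew(pixels),
            kurtosis(pixels),
            entropy(hist + 1e-12),   # first-order histogram entropy
        ])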
  • At step 450, the objects are classified by the data analyzer 3000 based upon the rules-base, which classifies images into objects of interest and objects not of interest according to the values of their metrics, which were extracted at step 420. As shown in FIG. 8M, the object of interest 530 is measured in this process for its orange content. The peanut butter jar 520 shows green as its primary value, and is therefore rejected.
  • The detected objects of interest 530 are thus distinguished from all other objects (non-detected objects 470). Steps 410-450 may be repeated as many times as desired on the non-detected objects 470 in an iterative fashion in order to improve the detection performance.
  • Determination of distinguishing features between objects of interest and other possible objects is done by the rule-base as a result of the analysis of the vectors of the metrics by the MLAs applied at step 440. There are hundreds of different MLAs that can be used including, but not limited to, decision trees, neural networks, support vector machines (SVMs) and Regression.
  • The rules-base is therefore preferably entered into code and preferably accessed from an object oriented scripting language, such as Threat Assessment Language (TAL). A sample of TAL is shown below.
  • call show_msg(“C4 Process 3a”)
    call set_gray_threshold(255)
    call set_area_threshold(400)
    call color_replace_and(image_wrk,dont_care,dont_care,greater_than,0,0,45,255,255,255)
    call color_replace_and(image_wrk,less_than,dont_care,less_than,128,0,15,255,255,255)
    call apply_curve(image_wrk,purple_path)
    call color_replace_and(image_wrk,equals,equals,equals,65,65,65,255,255,255)
    call color_replace_and(image_wrk,equals,equals,equals,0,255,0,255,255,255)
    call color_replace_and(image_wrk,greater_than,equals,equals,150,0,255,0,255,0)
    call color_replace_and(image_wrk,equals,equals,equals,0,0,255,255,255,255)
    call color_replace_and(image_wrk,dont_care,less_than,less_than,0,255,255,255,255,255)
    call color_replace_and(image_wrk,dont_care,equals,dont_care,0,0,0,255,255,255)
    #if (show_EOP = 1)
    # call display_and_wait(image_wrk)
    #endif
    call pix_map = get_first_aoi(image_wrk,ALLCHAN,1,0)
    if (pix_map = 0)
    jump @done_with_file
    endif
    call destroy_pixmap(AOI_wrk)
    call AOI_wrk = copy_pixmap
    call color_replace(image_tmp,greater_than,greater_than,greater_than,-1,-1,-1,255,255,255)
    aoinum = 1
    @C4loop3
    call show_AOI_bounding_box( )
    #if (show_AOI = 1)
    # call display_and_wait(AOI_wrk)
    # endif
    call AOI_masked = get_pixmap_from_bbox(scan_org,0)
    call image_tmp2 = composite_aoi(image_tmp,AOI_masked,255,255,255)
    call destroy_pixmap(image_tmp)
    call image_tmp = copy_pixmap(image_tmp2)
    call destroy_pixmap(image_tmp2)
    call destroy_pixmap(AOI_masked)
    call pix_map = get_next_aoi( )
    if (pix_map = 0)
    call destroy_aoi_list( )
    jump @C4Process3b
    endif
    call destroy_pixmap(AOI_wrk)
    call AOI_wrk = copy_pixmap
    aoinum = aoinum + 1
    jump @C4loop3
  • A second pass is now made with all remaining objects in the image. The rules defined above can now eliminate objects identified in process 1. A second process that follows the logic rules will now create objects of new colors for the remaining objects of interest. The vectors of metrics of the transformed objects of interest are examined. Multiple qualitative approaches may be used in the evaluation of the objects, such as prototype performance and figure of merit. Possible measurements include, but are not limited to: metrics in the spatial domain, such as image amplitude (luminance, tristimulus value, spectral value) utilizing different degrees of freedom; the quantitative shape descriptors of a first-order histogram, such as standard deviation, mean, median, skewness, kurtosis, energy and entropy; percent color for red, green, and blue; ratios between colors (e.g., the total number of yellow pixels in the object divided by the total number of red pixels in the object); object symmetry; arithmetic encoders; wavelet transforms; and other custom measurements. Additional metrics can be created by applying spectrally-based processes, such as Fourier transforms, to the previously modified objects of interest, or by analyzing the eigenvalues produced from a Principal Components Analysis to reduce the dimension space of the vectors and remove outliers and non-representative data (metrics/images).
  • A color replacement technique is used to further emphasize tendencies of color changes. For example, pixels with a value on the red channel greater than 100 can be remapped to a red level of 255, so that all bright red colors are made pure red. This is used to help identify metal objects that have varying densities.
  • This can now help indicate the presence of a certain metal object regardless of its orientation in the image. It can also be correlated to geometric measurements using tools that determine boundaries and shapes. An example would be the correlation of the pixels having this red value with boundaries and centroid location. Other processes may additionally be used as well.
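  • This color replacement rule can be sketched in a few lines; the threshold of 100 is the example value given above.
    import numpy as np

    def emphasize_red(image_rgb, threshold=100):
        # Remap any pixel whose red channel exceeds the threshold to pure red (255),
        # collapsing bright reds of varying density to a single value.
        out = image_rgb.copy()
        out[..., 0] = np.where(out[..., 0] > threshold, 255, out[..., 0])
        return out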
  • The system and methods of the present invention are based on a methodology that is not restricted to a specific image type or imaging modality. It is capable of identifying and distinguishing a broad range of object types across a broad range of imaging applications. It works equally as well in applications such as CT scans, MRI, PET scans, mammography, cancer cell detection, geographic information systems, and remote sensing. It can identify and distinguish metal objects as well.
  • In medicine, the present invention is capable of, for example, distinguishing cancer cell growth in blood samples and is being tested with both mammograms and x-rays of lungs. For example, FIG. 9 shows an original input image with normal and cancerous cells. FIG. 10 shows the image after the ITD process of the present invention has been applied, with only cancer cells showing up in green.
  • Another example of a medical application for the present invention is shown in FIGS. 11 and 12. FIG. 11 shows an original ophthalmology image of the retina. FIG. 12 shows the image after the ITD process of the present invention has been applied, with the area of interest defined in red.
  • The analytical processing provided by the present invention can be extended to integrate data from a patient's familial history, blood tests, x-rays, CT, PET (Positron Emission Tomography), and MRI scans into a single integrated analysis for radiologists, oncologists and the patient's personal physician. It can also assist drug companies in reducing costs by minimizing testing time for new drug certification.
  • B. Machine Learning Algorithms (MLA) used for Image Classification
  • As discussed above, MLAs are responsible for generating the detection system that determines if an object of interest is present. Using Machine Learning Algorithms for image classification is herein referred to as "contextual imagery." Contextual imagery focuses not only on the segmented image, but on the entire image as well. Context often carries relevant and discriminative information that can determine whether or not an object of interest is present in the scene.
  • MLAs analyze the vectors of metrics taken from the images. The choice of metrics is important. Therefore, the feature extraction process preferably includes “data conditioning” to statistically improve the dataset analyzed by the MLA.
  • Image conditioning is preferably carried out as part of the data conditioning. Image conditioning is one of the first steps performed by the image processing function. It initially consists of the removal from the image of obvious or almost obvious objects that are not objects of interest. By applying image processing functions to the image, some important observations can also be made. For example, some unobvious portions of the object of interest may be distinguished from other elements that are not part of the object of interest upon the application of certain types of image processing. These aspects of image conditioning leverage the MLA's detection capability.
  • Image normalization is preferably the first process applied to the image. This consists of the removal of certain image characteristics, such as the artificial image enhancement (artifacts) that is sometimes applied by the system that created the image. Image normalization could also include removing image distortions created by the acquisition system, as well as removal of intentional and unintentional artifacts created by the software that constructed the image.
  • There are thousands of Machine Learning Algorithms, including, but not limited to, kernel systems such as Support Vector Machines (SVMs), which are preferably used as one of the classification instruments. The SVM approach exhibits the following advantages:
  • 1. It can be used with data that has a complicated structure for which a simple separating hyperplane is not sufficient for classification purposes. A nonlinear separating surface between the classes can be drawn with the SVM technique.
  • 2. The separating surface is drawn by the SVM technique in an optimal way, maximizing the margin between the classes. In general, this provides a high probability that, with proper implementation, no other separating surface will provide better generalization performance within this framework.
  • 3. Even when the amount of available data is small, the generalization performance is impressive.
  • 4. The SVM technique is robust to small perturbations and noise in data.
  • 5. A positive synergetic effect is often possible. This means that adding image data collected from new objects of interest (e.g., new types of explosives) frequently results in a more efficient recognition of images of objects of interest already included in the model.
  • In the case of a data set in which different classes are not linearly separated in the feature space, it is necessary to design a nonlinear separating rule between them. However, such a rule can be developed in an infinite number of ways. For example, using a method of potential functions, it is possible to reach 100% class separation for the training data set. At the same time, the respective model would typically have very poor generalization performance on unseen data. This effect is commonly called "overfitting." Thus, the goal is to avoid overfitting while using the nonlinear approach.
  • To address this issue, the SVM technique relies on the following stages:
    • (1) mapping the initial feature vectors to a new feature space using a nonlinear transformation; and
    • (2) applying a linear separating rule (a hyperplane) to vectors in the new feature space.
  • The use of these two stages allows one to draw the nonlinear separating surface in the original feature space. The linear character of the separating rule means, in general, better robustness and the possibility of maximizing the margin between the classes explicitly.
  • An improved immunity to both noise and presence of possible outliers is provided by introducing a “soft” margin. When a soft margin is used, a predetermined portion of training vectors are allowed to be misclassified. Negative consequences of the over-fitting effect can be significantly diminished or even completely averted by sacrificing this small portion of typically non-representative vectors. As a result, a much better overall generalizing performance and robustness can be achieved in practical applications.
  • FIG. 13 is a flowchart of a method of creating an SVM model, in accordance with one embodiment of the present invention. The method starts at step 600, where a nonlinear transformation type and its parameters are chosen. The transformation is performed by the use of specific “kernels”, which are mathematical functions. Sigmoid, Gaussian or Polynomial kernels are preferably used.
  • Then, at step 610, a quadratic programming optimization problem for the soft margin is solved efficiently. This requires a proper choice of the optimization procedure parameters as well.
  • During the quadratic programming optimization procedure, some of the most representative vectors are selected from the pool of all vectors available for training. These vectors are herein referred to as “Support Vectors.” The respective weights of the Support Vectors and a free term (a constant) are also calculated. This completes the SVM model.
  • FIG. 14 is a flowchart of a method of performing an SVM operation, in accordance with one embodiment of the present invention. When a previously unseen image is classified (any sub-image can also be used instead of the image), a feature generation technique is applied at step 700 to yield a vector of the generated features that is used for the analysis.
  • At step 710, a specified kernel transformation is applied to each of all possible couples of the analyzed vector and a Support Vector. The received values are weighted according to the respective weight coefficients and added all together with the free term.
  • At step 720, the result of the kernel transformation is used to classify the image. In a preferred embodiment, the image is classified as falling in a first class (e.g., a threat) if the final result is larger than or equal to zero, and is otherwise classified as belonging to a second class (e.g., non-threat).
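  • The following is a minimal sketch of the classification rule of steps 700-720, assuming the Support Vectors, their weights, and the free term have already been produced by the training stage of FIG. 13. A Gaussian (RBF) kernel is used for illustration, and the class labels "threat"/"non-threat" follow the example above; the function names are hypothetical.
    import numpy as np

    def rbf_kernel(x, sv, gamma=0.1):
        # Gaussian kernel applied to one (analyzed vector, Support Vector) couple
        return np.exp(-gamma * np.sum((x - sv) ** 2))

    def classify(feature_vector, support_vectors, weights, free_term, gamma=0.1):
        # weighted kernel sum plus the free term; a result >= 0 maps to the first class
        score = free_term + sum(
            w * rbf_kernel(feature_vector, sv, gamma)
            for sv, w in zip(support_vectors, weights)
        )
        return "threat" if score >= 0 else "non-threat"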
  • Although this framework was described in connection with two possible classes, it can be applied to multi-class classification problems with appropriate modification of the framework.
  • C. RGB-DNA Image Analysis
  • As discussed above, RGB-DNA is one of the image processing techniques that can be used in normalization step 400 and the image processing step 410 (FIG. 7). The phrase "RGB-DNA", as used herein, refers to a representation, in a predetermined color space, of most or all possible values of colors which can be produced from a given image source. The phrase "values of colors" is not limited to visual colors, but refers to representations of values, energies, etc. that can be produced by the imaging system. The use of RGB-DNA for image analysis will be described in detail in this section.
  • Physics of X-Ray Scanners and Color Images
  • Since the discovery of x-ray radiation in 1895 by W. C. Röntgen, the output of x-ray diagnostics equipment has been associated with gray-scaled images. Initially, photographic films were used to visualize x-ray attenuation as a negative image. This technique is still used today.
  • Later, fluorescent screens were employed to visualize the positive image. Digital imaging with a detector-to-pixel design brought the possibility of color highlighting based on a one-to-one mapping of a predefined color palette onto the original grayscale. In this case, the color image reflected x-ray attenuation in a more vivid way than the grayscale image did.
  • The invention of energy-selective or dual energy reconstruction, initially in medical x-ray diagnostics, made distinguishing Compton and photoelectric fractions of attenuation with acceptable accuracy possible. As a result, the effective atomic number of materials Zeff could be computationally reconstructed, in addition to the direct measurement of attenuation alone, giving a clue about the chemical structure of the samples.
  • As shown in FIGS. 15A-15C, the images in medical diagnostics are usually visualized on gray-scaled screens, as images of a doctor's choice—conventional/standard (FIG. 15C), soft tissues only (FIG. 15A), or bones only (FIG. 15B).
  • When vendors of baggage x-ray scanners adopted the dual energy technique, they replaced the black and white monitors with color ones. This was done to simplify the work of the screeners. Instead of analyzing a sequence of gray-scaled images switching from conventional to low energy, high energy, Compton fraction, photoelectric fraction and back, a single color image was delivered. The colors of the image were assigned to differentiate the chemical structure of materials according to their effective atomic numbers Zeff and integral density d along the x-ray beam.
  • According to the recommendations of the United States Transportation Security Administration (TSA), shades of blue represent metal materials, shades of orange are assigned to organic compounds, and green-looking colors represent so-called mixed or inorganic materials. FIGS. 16A and 16B are x-ray images from a Smith Detection (Smith) x-ray scanner and a Rapiscan x-ray scanner, respectively. These are the two most commonly used baggage x-ray scanners. The principal components of any x-ray scanner are:
    • x-ray source with collimator;
    • array of detectors;
    • moving belt;
    • digital image formation and processing software; and
    • computing and visualization hardware.
  • FIG. 17 is a schematic diagram of a typical x-ray scanner. The scanner includes an L-shaped detector array 810, a moving belt 820 for moving the item being scanned 830 through the scanner 800, an X-ray source 840, a collimator 850 for collimating an X-ray beam 860 from the X-ray source 840, and a photodiode assembly 865.
  • As shown in FIG. 18, the X-ray source 840 is typically implemented with an X-ray tube that has a rotating anode 900, which is used for generating an uninterrupted flow of X-ray photons 910. The spectrum 920 of the x-ray radiation is polychromatic, with a couple of peaks of characteristic lines. For the baggage scanners of interest, the spectrum covers a range from approximately 160 keV to approximately 25 keV.
  • The X-ray photons 910 of the beam 860 penetrate the materials in the item being scanned 830, thereby experiencing attenuation of different natures (scattering, absorption etc.). Then, the x-ray beam 860 goes into the L-shaped detector array 810 to be measured. The array 810 is typically a set of pre-assembled groups of detectors (16, 32 or 64 detectors) positioned perpendicular to the x-ray beams 860.
  • Each individual detector is responsible for one row of pixels on the x-ray image. For energy-selective reconstruction, two detectors per pixel row are used, i.e., the high-energy detector is placed on top of the low-energy one. They are typically separated by a copper filter (typically <0.5 mm thick) installed for energy discrimination. This filter is a crucial element of the technique. It paves the way for calculating the Zeff (effective atomic number) and d (integral density of the material) of the scanned object 830.
  • The moving belt 820 in the scanner 800 works as a slicing mechanism. One slice is one column of pixels. The speed of the belt should be synchronized with timing of the system to avoid distortion in lengthwise dimensions of the images.
  • An L-shaped detector array 810 causes clearly visible geometric distortions in shapes. These distortions are the results of the projection-detection scheme of a particular scanner design, which can be understood by simple geometrical constructions. FIGS. 19A and 19B are X-ray images from a Smith scanner and a Rapiscan scanner, respectively, which illustrate geometric distortions with colors. The distortions are particularly apparent in the shapes of the frames 1000 and wheels 1010.
  • Another concern is the color representation of the dual energy image itself. X-ray scanners of different vendors, with identical dimensions and components, can deliver identical geometric appearance of shapes, but quite different colors for chemically identical scanned objects. These colors depend on the vendor's proprietary color scheme.
  • Nevertheless, as it will be described in more detail below, the RGB 3D color schemes of different vendors can be mapped into a single universal 2D (Zeff, d) space of physical parameters of Zeff and d. The possibility of such mapping can be shown by looking at a mathematical description of the dual energy technique, and by looking at the depth of proprietary color schemes of two well known scanner vendors—Smith Detection and Rapiscan.
  • Mathematics of Dual Energy Technique Without Colors
  • The two integral equations (1) and (2) below describe the flux of X-ray photons FL (θ) and FH (θ) measured by low energy and high energy detectors, respectively, for the geometry shown in FIG. 20.
  • $$\int_{E_{\min}}^{E_{\max}} \frac{r_0^2}{r_L^2(\theta)}\, S(\theta, r_0, E)\, \exp\!\left[-\frac{P}{E^3} - f_{KN}(E)\, C\right] dE = F_L(\theta) \qquad (1)$$
  • $$\int_{E_{\min}}^{E_{\max}} \frac{r_0^2}{r_H^2(\theta)}\, S(\theta, r_0, E)\, \exp\!\left[-\frac{P}{E^3} - f_{KN}(E)\, C\right] Q(\theta, E)\, dE = F_H(\theta) \qquad (2)$$
  • where function
  • $$P(\theta) = \int_{r_0}^{r_L(\theta)} k_p\, \frac{\rho(r,\theta)}{A(r,\theta)}\, Z^n(r,\theta)\, dr \qquad (3)$$
  • is a photoelectric term or fraction of attenuation, and function
  • $$C(\theta) = \int_{r_0}^{r_L(\theta)} k_c\, \frac{\rho(r,\theta)}{A(r,\theta)}\, Z(r,\theta)\, dr \qquad (4)$$
  • is the Compton term of attenuation, considered for every fixed value of the polar angle θ. P(θ) and C(θ) are the desired solutions we are looking for. The function of the energy-selective (copper) filter is defined as
  • $$Q(\theta, E) = \exp\!\left[-\int_{r_L(\theta)}^{r_H(\theta)} \mu_f(\theta, r, E)\, dr\right] \qquad (5)$$
  • The other symbols and definitions used in Equations (1) and (2) are as follows:
      • S(θ,r0,E)—the input flux of x-ray photons of energy E at the surface with radius r0;
      • r0—the distance from the x-ray generator spot to the surface where S is known;
      • rL—the distance to the low energy detector;
      • rH—the distance to the high energy detectors (rH-rL is the thickness of the filter);
      • θ—polar angle
      • Z=Zeff—the effective atomic number
      • ƒKN(E)—Klein-Nishina function of Compton attenuation energy dependence;
      • ρ—physical mass density;
      • A—atomic weight;
      • ρ/A—density of atoms;
      • kp and kc are the constants dependent on the system of units of measurements; and
      • n—empiric parameter (n=4 for our case).
  • For any given angle θ, the unknown variables are P, which represents the photoelectric attenuation, and C, which represents the Compton attenuation. This nonlinear system can be solved when the Jacobian
  • $$J = \det\begin{pmatrix} \partial F_L/\partial P & \partial F_L/\partial C \\ \partial F_H/\partial P & \partial F_H/\partial C \end{pmatrix} \neq 0 \qquad (6)$$
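  • The following sketch illustrates, under a strong simplifying assumption, how P and C can be recovered from the two measured fluxes: each detector is treated as if it saw a single effective energy (the values E_lo and E_hi are assumed placeholders), so that taking logarithms of simplified forms of equations (1) and (2) yields a 2×2 linear system. This is not the full polychromatic model above; it only illustrates the inversion step.
    import numpy as np

    def f_kn(E_keV):
        # energy dependence of the Klein-Nishina total cross-section
        a = E_keV / 511.0
        return ((1 + a) / a**2) * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a) \
            + np.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a) ** 2

    def solve_pc(F_lo, F0_lo, F_hi, F0_hi, E_lo=60.0, E_hi=120.0):
        # F_lo/F_hi: measured fluxes; F0_lo/F0_hi: unattenuated fluxes
        A = np.array([[1.0 / E_lo**3, f_kn(E_lo)],
                      [1.0 / E_hi**3, f_kn(E_hi)]])
        b = np.array([-np.log(F_lo / F0_lo), -np.log(F_hi / F0_hi)])
        return np.linalg.solve(A, b)   # solvable when the determinant (the Jacobian) is nonzero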
  • If P and C are found for a particular case of a uniform layer of thickness L with ρ=const, Z=const, A=const, it means that:
  • $$P(x,y) = k_p\, \frac{\rho}{A}\, Z^n \cdot L = k_p\, d\, Z^n \qquad (7)$$
  • $$C(x,y) = k_c\, \frac{\rho}{A}\, Z \cdot L = k_c\, d\, Z \qquad (8)$$
  • where
  • $$d = \frac{\rho}{A}\, L \qquad (9)$$
  • is the integral density, and ratio
  • $$\frac{P}{C} = \frac{k_p}{k_c}\, Z^{n-1} \qquad (10)$$
  • does not depend on d. Therefore,
  • $$Z = \left(\frac{P}{C}\cdot\frac{k_c}{k_p}\right)^{1/(n-1)} \qquad (11)$$
  • $$d = \frac{P}{k_p\, Z^n} = \frac{C}{k_c\, Z} = \frac{C}{k_c\left(\frac{P}{C}\cdot\frac{k_c}{k_p}\right)^{1/(n-1)}} \qquad (12)$$
  • It can be seen that Z=const if the ratio P/C=const. In the (P,C) space, Z=const forms a straight line that goes through the point of origin, and the tangent of the angle between the line and the P axis equals C/P. The plot shown in FIG. 21 shows the lines in conditional units.
  • The surface Z=Z(P,C) is a two-dimensional manifold in the three-dimensional (P,C,Z) space, as shown in the plot of FIG. 22. The surface d=d(P,C) is a two-dimensional manifold in (P,C,d) as well, as shown in the plots of FIGS. 23A and 23B, which are 2D and 3D views, respectively, of (P,C) space with Zeff (P,C)=const.
  • The result derived above shows the straightforward interpretation of the lines and points in the (P,C) space. Each point with coordinates P and C in the space can be computed from equations (1) and (2) if the right-hand sides are measured correctly and the Jacobian is nonzero. Each of the points reflects the effective atomic number Z and integral density d of the object responsible for the measured right-hand sides of the system.
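  • A short sketch of equations (11) and (12) follows, recovering Z and d for a single uniform layer from P and C. The constants k_p and k_c are left as placeholder values of 1 (conditional units, as in FIG. 21), and n=4 as stated above.
    def z_and_d(P, C, k_p=1.0, k_c=1.0, n=4):
        Z = ((P / C) * (k_c / k_p)) ** (1.0 / (n - 1))   # equation (11)
        d = C / (k_c * Z)                                # equation (12)
        return Z, d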
  • Colors in Dual Energy Scanners Without Mathematics
  • Any color image we see on the computer screen of a dual energy scanner is a 2D array of pixels with colors represented by (R,G,B) triplets. Each of these three values (R,G,B) belongs to the interval [0,255], and the (R,G,B) set of all possible colors composes the 3D RGB space, or cube, of 256³=16,777,216 discrete points. One can thus assume that the number of unique colors needed to maintain an acceptable visual quality of a dual energy color image can be quite large, approaching at least the number of colors of a medium-class digital camera (~1,500,000). Nevertheless, it was discovered that the number of unique colors in an average baggage color image is approximately 7,000 for a Smith HiScan 6040i scanner and less than 100,000 for a Rapiscan 515 scanner.
  • An aspect of the present invention is the development of tools to visualize the set of unique colors, both as 3×2D projections onto the RG, GB and BR planes of the RGB cube, as shown in FIGS. 24A and 24B, and as a 3D rotating view based on an OpenGL open source prototype, as shown in FIGS. 25A and 25B. FIGS. 24A and 24B are RGB_DNA 3×2D views for a Smith HiScan 6040i scanner and a Rapiscan 515 scanner, respectively. FIGS. 25A and 25B are 3D rotating views for a Smith HiScan 6040i scanner and a Rapiscan 515 scanner, respectively. The phrase "RGB_DNA" was assigned to the discovered color schemes; the term "DNA" was used because all images, at least from the scanners of a particular model, will inherit this unique set of RGB colors.
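  • A minimal sketch of such a tool follows: it collects the set of unique (R,G,B) triplets of an image and plots the three 2D projections onto the RG, GB and BR planes. Pillow and matplotlib are used for illustration, and the file name is hypothetical.
    import numpy as np
    from PIL import Image
    import matplotlib.pyplot as plt

    def rgb_dna(path):
        pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
        return np.unique(pixels, axis=0)          # the set of unique colors

    def show_projections(colors):
        fig, axes = plt.subplots(1, 3, figsize=(12, 4))
        for ax, (i, j, title) in zip(axes, [(0, 1, "RG"), (1, 2, "GB"), (2, 0, "BR")]):
            ax.scatter(colors[:, i], colors[:, j], s=1)
            ax.set_title(title)
        plt.show()

    colors = rgb_dna("baggage_scan.bmp")          # hypothetical file name
    print("unique colors:", len(colors))
    show_projections(colors)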
  • As discussed above, it is possible to map RGB_DNA to (P,C) and as such build a bridge between (Zeff, d) and RGB_DNA. This provides a uniform way to work with images of different vendors regardless of their color schemes.
  • FIG. 26 includes plots of the 2D (P,C) space (left plot) and the 3D RGB_DNA (right plot) for a Smith scanner. It is clear that the point of origin (0,0) of (P,C) corresponds to the RGB point (255,255,255) in the 3D view of RGB_DNA. These points are responsible for the case of zero attenuation.
  • The next logical step consists of finding the relations between the Black Pole (0,0,0) of the RGB_DNA and the Black Zone boundary of the (P,C) space. This point and the boundary are responsible for the scenario of the maximum possible measured attenuation. Beyond this point, the penetration is so weak that the detectors "cannot see it at all." In 3D RGB_DNA we have a single point-wise Black Pole, and in 2D (P,C) we have a stretched boundary. As shown in FIG. 27, the Black Zone boundary in (P,C) can be compressed/tightened to a single Black Pole or, what is more practical and convenient, the Black Pole of 3D RGB_DNA can be expanded and transformed into a curve; together with the unbent (piecewise-linear in our case) color curves of RGB_DNA, this 3D surface can be transformed into a 2D area similar to the 2D (P,C) space.
  • The next step is to note that the color curves on the Smith 3D RGB_DNA surface and the straight lines of Zeff=const on the (P,C) plane are actually the same entities. They are two-dimensional manifolds which are topologically equivalent and can be mutually mapped by a one-to-one relationship. This mapping for the Smith scanner is shown in FIG. 28. The Rapiscan scanner color scheme can be mapped to the (P,C) space in the same manner, as a continuous elastic deformation.
  • The simplest way to confirm that this hypothesis is correct is to measure the colors resulting from scanning the same material at different thicknesses, e.g., a wedge. If the hypothesis is correct, one should see the colors forming one color curve on the Smith RGB_DNA. FIG. 29 shows a plot of the colors resulting from the scanning of a wedge on a Smith scanner, and verifies that the colors form one color curve on the Smith RGB_DNA.
  • Overlapping and Color Algebra
  • The case discussed above was for a uniform layer of thickness L with ρ=const, Z=const, A=const. Increasing the thickness L results in increasing the values of P and C. The case of a non-uniform layer composed of two or more uniform layers made of two or more different materials, as shown in FIG. 30, will now be addressed. In the case of two layers, the non-uniform layer (L) can be expressed as L=L1+L2.
  • For fixed θ, we have
  • $$P = \int_{r_0}^{r_L(\theta)} k_p\, \frac{\rho(r)}{A(r)}\, Z^n(r)\, dr = \sum_{k=1}^{2} \int_{L_k} k_p\, \frac{\rho_k}{A_k}\, Z_k^n\, dr = \sum_{k=1}^{2} k_p\, \frac{\rho_k}{A_k}\, Z_k^n\, L_k = \sum_{k=1}^{2} P_k \qquad (13)$$
  • $$C = \int_{r_0}^{r_L(\theta)} k_c\, \frac{\rho(r)}{A(r)}\, Z(r)\, dr = \sum_{k=1}^{2} \int_{L_k} k_c\, \frac{\rho_k}{A_k}\, Z_k\, dr = \sum_{k=1}^{2} k_c\, \frac{\rho_k}{A_k}\, Z_k\, L_k = \sum_{k=1}^{2} C_k = C_1 + C_2 \qquad (14)$$
  • The fact that P=P1+P2 and C=C1+C2 can be interpreted as vector addition in (P,C) space, as shown in FIG. 31.
  • In the scenario in which a one-to-one mapping from (P,C) to RGB_DNA and back exists, and in which their colors are known, the RGB_DNA color of the overlapped materials can be calculated, as shown in FIG. 32.
  • Using vector subtraction, the color of one of two overlapped materials can be found if the color of the second one and the color resulting from their overlap are known. This technique of adding and subtracting colors is referred to herein as "color algebra." Color algebra is valid for any number K of overlapped layers:
  • $$P = \int_{r_0}^{r_L(\theta)} k_p\, \frac{\rho(r)}{A(r)}\, Z^n(r)\, dr = \sum_{k=1}^{K} \int_{L_k} k_p\, \frac{\rho_k}{A_k}\, Z_k^n\, dr = \sum_{k=1}^{K} k_p\, \frac{\rho_k}{A_k}\, Z_k^n\, L_k = \sum_{k=1}^{K} P_k \qquad (15)$$
  • $$C = \int_{r_0}^{r_L(\theta)} k_c\, \frac{\rho(r)}{A(r)}\, Z(r)\, dr = \sum_{k=1}^{K} \int_{L_k} k_c\, \frac{\rho_k}{A_k}\, Z_k\, dr = \sum_{k=1}^{K} k_c\, \frac{\rho_k}{A_k}\, Z_k\, L_k = \sum_{k=1}^{K} C_k \qquad (16)$$
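  • The following sketch expresses this color algebra, assuming that a one-to-one mapping between RGB_DNA colors and (P,C) points is available; rgb_to_pc and pc_to_rgb are hypothetical lookup functions built from that mapping.
    def overlap_color(color_1, color_2, rgb_to_pc, pc_to_rgb):
        # predicted color of two overlapped materials: vector addition in (P,C)
        p1, c1 = rgb_to_pc(color_1)
        p2, c2 = rgb_to_pc(color_2)
        return pc_to_rgb(p1 + p2, c1 + c2)

    def hidden_color(color_total, color_known, rgb_to_pc, pc_to_rgb):
        # recover one layer's color by vector subtraction in (P,C)
        pt, ct = rgb_to_pc(color_total)
        pk, ck = rgb_to_pc(color_known)
        return pc_to_rgb(pt - pk, ct - ck)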
  • Equations (11) and (12) above express the effective atomic number Z and density d as functions of P and C for a single uniform layer of a material. In the case of K layers of different materials with effective atomic numbers Zk and densities dk, k=1, . . . , K, the formulas for Z and d can be derived from
  • $$P_k = k_p\, \frac{\rho_k}{A_k}\, Z_k^n\, L_k \qquad (17)$$
  • $$C_k = k_c\, \frac{\rho_k}{A_k}\, Z_k\, L_k \qquad (18)$$
  • and
  • $$P = \sum_{k=1}^{K} P_k \qquad (19)$$
  • $$C = \sum_{k=1}^{K} C_k \qquad (20)$$
  • using the substitution
  • $$d_k = \frac{\rho_k}{A_k}\, L_k$$
  • as the density of layer k:
  • $$P = k_p\, d\, Z^n = \sum_{k=1}^{K} k_p\, d_k\, Z_k^n \qquad (21)$$
  • $$C = k_c\, d\, Z = \sum_{k=1}^{K} k_c\, d_k\, Z_k \qquad (22)$$
  • the expressions for the resulting Z and d can be found:
  • $$Z = \left(\frac{\sum_{k=1}^{K} d_k Z_k^n}{\sum_{k=1}^{K} d_k Z_k}\right)^{1/(n-1)} \qquad (23)$$
  • $$d = \left[\frac{\left(\sum_{k=1}^{K} d_k Z_k\right)^n}{\sum_{k=1}^{K} d_k Z_k^n}\right]^{1/(n-1)} \qquad (24)$$
  • Equations (23) and (24) are the mathematical expressions for decomposition of the initial Z and d into several components with different Zk and dk, k=1, . . . , K. These formulas build the foundation for a potential solution of the inverse problem of multi-layer material detection in x-ray scanning machines.
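  • A short sketch of equations (23) and (24) follows, computing the composite Z and d produced by K overlapped layers with known Zk and dk (n=4 as above); the function name is illustrative.
    def composite_z_d(z_layers, d_layers, n=4):
        s1 = sum(d * z ** n for z, d in zip(z_layers, d_layers))   # sum of d_k * Z_k^n
        s2 = sum(d * z for z, d in zip(z_layers, d_layers))        # sum of d_k * Z_k
        Z = (s1 / s2) ** (1.0 / (n - 1))                           # equation (23)
        d = (s2 ** n / s1) ** (1.0 / (n - 1))                      # equation (24)
        return Z, d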
  • Limitation of Visual Perception of Digital Images
  • Explosive detection using the methodology described above has been tested on images stored in 24-bit bitmap (bmp) format. These images were supposed to be exact copies from the computer screens of x-ray scanners. However, in several cases, the images were not what they were supposed to be, despite the fact that they passed visual inspection by humans. Identical to human perception but different in their pixels' RGB content, these images confused the system and were not caught until they were forced to go through the RGB_DNA viewers.
  • FIG. 33 shows examples of images with their 3D RGB_DNA views. Only the image on the far left is in the correct original RGB_DNA colors. The other two images are visually indistinguishable from the first one. Nevertheless, they are in fact bmp images converted back from gif and jpeg conversions of an original bmp image.
  • Another example, shown in FIG. 34, shows an accidental conversion from 24-bit bmp to 16-bit and back. FIGS. 34 and 32 can be compared to see the difference between the incorrect RGB_DNA and the correct one.
  • Because the number of RGB_DNA colors in the color images of dual energy x-ray scanners is fixed for each model and is much smaller than the 16,777,216 RGB triplets of 24-bit bmp, it is possible to perform automated inspection of incoming images without the actual visual review of their RGB_DNA (3×2D or 3D). This process can detect that an image was not created with that x-ray scanner, or that the scanner is out of calibration. The existence of RGB_DNA itself as a limited subset of the entire 24-bit RGB set makes this possible. The component designed and implemented for this purpose performs a fast search through already collected RGB_DNA sets for each pixel of an incoming image, and assures that the system will not be confused.
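  • A minimal sketch of such a check follows; known_dna is assumed to be a previously collected set of (R,G,B) tuples for the scanner model in question, and the check simply verifies that no pixel uses a color outside that set.
    import numpy as np
    from PIL import Image

    def image_matches_dna(path, known_dna):
        pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
        unique_colors = {tuple(int(v) for v in c) for c in np.unique(pixels, axis=0)}
        foreign = unique_colors - known_dna
        # a non-empty 'foreign' set indicates a converted image or a mis-calibrated scanner
        return len(foreign) == 0, foreign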
  • The Colors of Zeff=const—z-Lines Extraction
  • The color scheme of the Smith scanner comprises 29 color curves that stretch from the white RGB pole (255,255,255) to the black pole (0,0,0). There is one more line of gray colors used for edge enhancement, but these colors do not represent any materials. As discussed above, the color component can determine that the RGB color of a pixel belongs to the whole RGB_DNA set of colors, but it cannot determine of which of the 29 curves that color is a part.
  • As discussed above, each color curve represents a line in (P,C) space with Zeff=const. These curves are referred to herein as "z-lines." If the colors of each line are known, it is possible to exploit this fact for at least two very useful applications. The first application is physics-based feature vector computation in pattern classification algorithms, which will be discussed in more detail below.
  • The second application uses z-lines for removing or keeping selected materials from an image. This is a much more flexible image filtering tool than so called “organic and metal stripping” provided by x-ray scanner manufacturers, as will be discussed in more detail below.
  • Z-lines are clearly visible in the RGB_DNA 3D view (see FIGS. 25A and 26) and they can be extracted without any difficulties. Nevertheless, there are several confusing facts. For example, in the areas close to the polar (white and black) zones, the number of RGB_DNA colors can be less than 29 (the actual number of lines). Another surprise is the fact that z-lines themselves are actually not lines. They are integer approximations of ideal continuous 3D lines by a finite set of points with three integer coordinates. Therefore, there are cases where three-dimensional aliasing resembles 3D stairs in the RGB cube. In such cases, the procedure of extracting the RGB coordinates of points for z-lines is not straightforward. FIG. 35 shows the fine structure of z-lines on their way from the central region of the 3D RGB cube towards the black pole with RGB=(0,0,0). As can be seen, the aliasing gets worse closer to the poles. The picture appears so complicated that it is difficult to consider the 3D RGB cube the native home for z-lines.
  • One can look at z-lines from another point: the point in RGB space lying on the prolongation of the major diagonal of the RGB cube, as shown in FIG. 36. One can see that each z-line seems to lie in a plane, and the plane goes through the major diagonal. In a cylindrical system of coordinates (z, r, θ) with z coinciding with the major diagonal, each z-line has its own very narrow sector of the angular coordinate θ for all of its points. One can say with good accuracy that the z-line for a particular Zeff=const contains the points with θ=const. Therefore, the cylindrical system of coordinates is more natural or, at least, more convenient.
  • In this system, the angular coordinate θ is an invariant for all points of the same z-line. This means that the HSI color space is more suitable, or more natural, for z-lines than RGB, and extraction of a z-line's colors is a straightforward operation that is universal, applying not only to the Smith color scheme but to the Rapiscan color scheme as well.
  • FIG. 37 shows z-line numbers 3 and 15 together with their respective colors. FIG. 38 shows a 3D view of extracted z-line numbers 1, 7 and 25 with their respective colors.
  • Having the z-lines extracted and sorted by the intensity value I of HSI, one is able to navigate in RGB_DNA color space as in (P,C) space. For any two colors, one can say which one corresponds to a greater effective atomic number Zeff or, in the case of the same Zeff, which one corresponds to a denser material. Moreover, one can extend the meaning of the hue coordinate H of HSI to be the carrier of Zeff, and the intensity I to be responsible for the density of a material.
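  • A sketch of hue-based z-line assignment follows: the hue H identifies the z-line (Zeff=const), and the 29 reference hues (z_line_hues) are assumed to have been extracted beforehand from the scanner's RGB_DNA. HSV hue from the standard library is used here as a stand-in for the hue coordinate of HSI.
    import colorsys

    def assign_z_line(r, g, b, z_line_hues):
        # return the index of the z-line whose reference hue is closest
        h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        def hue_dist(h1, h2):
            d = abs(h1 - h2)
            return min(d, 1.0 - d)       # hue is a circular coordinate
        return min(range(len(z_line_hues)), key=lambda i: hue_dist(h, z_line_hues[i]))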
  • Saturation S is thus far unemployed. It can remain a free parameter (and it is for the Smith and Rapiscan scanners) responsible for carrying the proprietary "look and feel" of the color scheme. Colors of the same objects can appear differently on Smith and Rapiscan scanners that use the same or close H and I, but different S.
  • Physics Based Feature Vectors—z-Metrics
  • The results of feature extraction for color images depend on the colors of an image, the color scheme (RGB, HSI or other) and the algorithm of the feature computation itself. Mapping z-lines and their ordered colors to (P,C) space opens up an opportunity to exclude color from the feature extraction process. Instead of using three variables of a particular color space, such as R, G, and B in RGB, to feed the feature extraction algorithm, the two variables of (P,C) can be used.
  • Two dimensions reduce complexity. Unlike points in color spaces, the points in (P,C) space have a clear physical meaning: P stands for the photoelectric fraction of attenuation and C stands for the Compton fraction. To exploit these advantages, the methodology of z-metrics was implemented. Z-metrics is actually a set of 29 histograms, one per z-line. It can be computed with bins or without bins, weighted or not. Experiments have shown that this metric alone is as effective as an assembly of several metrics based on traditional features of color images. FIG. 39 is a plot of a fragment of a typical 25-bin z-metrics for the first 9 z-lines.
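  • A sketch of the z-metrics computation follows: one normalized intensity histogram per z-line, concatenated into a single feature vector. It reuses the hypothetical assign_z_line lookup sketched above and approximates the intensity I of HSI by the mean of the RGB channels.
    import numpy as np

    def z_metrics(rgb_pixels, z_line_hues, bins=25):
        # rgb_pixels: N x 3 array of the object's (R,G,B) values
        labels = np.array([assign_z_line(int(r), int(g), int(b), z_line_hues)
                           for r, g, b in rgb_pixels])
        intensity = rgb_pixels.mean(axis=1)
        features = []
        for line in range(len(z_line_hues)):
            hist, _ = np.histogram(intensity[labels == line], bins=bins, range=(0, 255))
            features.append(hist / max(hist.sum(), 1))   # normalized per-line histogram
        return np.concatenate(features)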
  • Beyond Organic and Metal Stripping—z-Filters
  • Manufacturers of dual energy x-ray machines advertise the "material stripping" feature of their scanners. FIG. 40 shows an example of this feature, together with 3D views of the respective RGB_DNA. It is obvious that stripping is the result of simple replacement of the colors from the "orange" or "blue" z-lines by gray colors, and it can be considered a manufacturer's proof of the idea that z-lines are actually the lines of Zeff=const.
  • Organic or metal stripping, as is easy to see, is only a very limited and simple special case among an unlimited number of other possibilities. The colors mapped to any region in (P,C) space can be replaced, i.e., z-filtered. Alternatively, they can be processed in some other way. The key point is the mapping itself. If one has the mapping, one can apply z-filters of any kind, like those shown in FIGS. 41-45.
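  • A sketch of one such z-filter follows: every pixel whose color falls on a selected set of z-lines is replaced (here, grayed out), which generalizes the vendors' organic/metal stripping. It reuses the hypothetical assign_z_line lookup from the sketches above; the function names and gray replacement value are illustrative.
    import numpy as np

    def z_filter(rgb_image, z_line_hues, lines_to_strip, replacement=(128, 128, 128)):
        out = rgb_image.copy()
        height, width, _ = out.shape
        for y in range(height):
            for x in range(width):
                r, g, b = out[y, x]
                if assign_z_line(int(r), int(g), int(b), z_line_hues) in lines_to_strip:
                    out[y, x] = replacement
        return out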
  • The image analysis system 130 can be implemented with a general purpose computer. However, it can also be implemented with a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit elements, ASICs or other integrated circuits, hardwired electronic or logic circuits such as discrete element circuits, or programmable logic devices such as an FPGA, PLD, PLA or PAL, or the like. In general, any device on which resides a finite state machine capable of executing code for implementing the process steps of FIG. 7 can be used to implement the image analysis system 130.
  • Input channel 110 may be, include or interface to any one or more of, for instance, the Internet, an intranet, a PAN (Personal Area Network), a LAN (Local Area Network), a WAN (Wide Area Network) or a MAN (Metropolitan Area Network), a storage area network (SAN), a frame relay connection, an Advanced Intelligent Network (AIN) connection, a synchronous optical network (SONET) connection, a digital T1, T3, E1 or E3 line, Digital Data Service (DDS) connection, DSL (Digital Subscriber Line) connection, an Ethernet connection, an ISDN (Integrated Services Digital Network) line, a dial-up port such as a V.90, V.34bis analog modem connection, a cable modem, and ATM (Asynchronous Transfer Mode) connection, or an FDDI (Fiber Distributed Data Interface) or CDDI (Copper Distributed Data Interface) connection. Input channel 110 may furthermore be, include or interface to any one or more of a WAP (Wireless Application Protocol) link, a GPRS (General Packet Radio Service) link, a GSM (Global System for Mobile Communication) link, CDMA (Code Division Multiple Access) or TDMA (Time Division Multiple Access) link such as a cellular phone channel, a GPS (Global Positioning System) link, CDPD (Cellular Digital Packet Data), a RIM (Research in Motion, Limited) duplex paging type device, a Bluetooth radio link, or an IEEE 802.11-based radio frequency link. Input channel 110 may yet further be, include or interface to any one or more of an RS-232 serial connection, an IEEE-1394 (Firewire) connection, a Fiber Channel connection, an IrDA (infrared) port, a SCSI (Small Computer Systems Interface) connection, a USB (Universal Serial Bus) connection or other wired or wireless, digital or analog interface or connection.
  • The foregoing embodiments and advantages are merely exemplary, and are not to be construed as limiting the present invention. The present teaching can be readily applied to other types of apparatuses. The description of the present invention is intended to be illustrative, and not to limit the scope of the claims. Many alternatives, modifications, and variations will be apparent to those skilled in the art. Various changes may be made without departing from the spirit and scope of the present invention, as defined in the following claims.

Claims (32)

1. A method of using at least one template in at least one predetermined color space that characterizes an image source, comprising:
receiving at least one image from the image source;
mapping the at least one image to the at least one predetermined color space to yield at least one mapped image; and
comparing the at least one mapped image to the at least one template.
2. The method of claim 1, further comprising determining whether the image source is at variance based on the comparing step.
3. The method of claim 2, wherein the determining step comprises determining whether the image source is calibrated.
4. The method of claim 1, further comprising determining a level of variance of the image source based on the comparing step.
5. The method of claim 1, further comprising determining whether the image source has malfunctioned based on the comparing step.
6. The method of claim 1, wherein the at least one image comprises a hyperspectral image.
7. The method of claim 1, wherein the at least one image comprises a satellite image.
8. The method of claim 1, wherein the at least one image comprises an infrared image.
9. The method of claim 1, wherein the at least one image comprises a laser radar image.
10. The method of claim 1, wherein the at least one image comprises an x-ray image and the source of the image comprises an x-ray imaging machine.
11. The method of claim 1, wherein the at least one image comprises an infrared image and the source of the image comprises a forward looking infrared (FLIR) system.
12. The method of claim 1, wherein the at least one image comprises a magnetic resonance image and the source of the image comprises a magnetic resonance imaging (MRI) machine.
13. The method of claim 1, wherein the at least one image comprises a positron emission tomography (PET) image and the source of the image comprises a PET machine.
14. The method of claim 1, wherein the at least one image comprises a laser radar image and the source of the at least one image comprises a laser radar imaging system.
15. The method of claim 1, wherein the source of the at least one image comprises a camera.
16. The method of claim 15, wherein the camera comprises a digital camera.
17. The method of claim 1, wherein the at least one image comprises an ultrasound image and the source of the at least one image comprises an ultrasound imaging system.
18. The method of claim 1, wherein the at least one image comprises an ultrasound image.
19. The method of claim 1, wherein the source of the at least one image comprises a radar system.
20. The method of claim 1, wherein the source of the at least one image comprises a phased-array radar system.
21. The method of claim 1, wherein the at least one image comprises a medical image.
22. The method of claim 1, further comprising normalizing the at least one image based on the comparing step.
23. The method of claim 1, wherein the at least one image comprises a grey-scale image.
24. A method of using at least one template in at least one predetermined color space that characterizes an image source, comprising:
receiving a plurality of images from the image source;
mapping the plurality of images to the at least one predetermined color space to yield mapped images; and
comparing the mapped images to the at least one template.
25. The method of claim 24, further comprising normalizing the plurality of images based on the comparing step.
26. The method of claim 24, wherein the plurality of images comprise a plurality of grey-scale images.
27. A method of using at least one template in at least one predetermined color space that characterizes an image source, comprising:
receiving an image from a different image source;
mapping the image to the at least one predetermined color space to yield a mapped image; and
comparing the mapped image to the at least one template.
28. The method of claim 27, further comprising determining whether the different image source and the image source are at variance based on the comparing step.
29. The method of claim 28, wherein the determining step comprises determining whether the image source and the different image source are calibrated with respect to each other.
30. The method of claim 27, further comprising determining a level of variance between the image source and the different image source based on the comparing step.
31. The method of claim 27, further comprising determining whether one of the image source and the different image source has malfunctioned based on the comparing step.
32. A system for using at least one template in at least one predetermined color space that characterizes an image source, comprising:
an image receiving unit that receives at least one image from the image source;
an image mapping unit that maps the at least one image to the at least one predetermined color space to yield at least one mapped image; and
a comparing unit that compares the at least one mapped image to the at least one template.
US11/374,613 2005-03-15 2006-03-14 System and method for using a template in a predetermined color space that characterizes an image source Abandoned US20090324097A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/374,613 US20090324097A1 (en) 2005-03-15 2006-03-14 System and method for using a template in a predetermined color space that characterizes an image source

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US66147705P 2005-03-15 2005-03-15
US11/374,613 US20090324097A1 (en) 2005-03-15 2006-03-14 System and method for using a template in a predetermined color space that characterizes an image source

Publications (1)

Publication Number Publication Date
US20090324097A1 true US20090324097A1 (en) 2009-12-31

Family

ID=39584088

Family Applications (4)

Application Number Title Priority Date Filing Date
US11/374,578 Abandoned US20090324067A1 (en) 2005-03-15 2006-03-14 System and method for identifying signatures for features of interest using predetermined color spaces
US11/374,613 Abandoned US20090324097A1 (en) 2005-03-15 2006-03-14 System and method for using a template in a predetermined color space that characterizes an image source
US11/374,189 Abandoned US20080159605A1 (en) 2005-03-15 2006-03-14 Method for characterizing an image source utilizing predetermined color spaces
US11/374,612 Expired - Fee Related US8045805B2 (en) 2005-03-15 2006-03-14 Method for determining whether a feature of interest or an anomaly is present in an image

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/374,578 Abandoned US20090324067A1 (en) 2005-03-15 2006-03-14 System and method for identifying signatures for features of interest using predetermined color spaces

Family Applications After (2)

Application Number Title Priority Date Filing Date
US11/374,189 Abandoned US20080159605A1 (en) 2005-03-15 2006-03-14 Method for characterizing an image source utilizing predetermined color spaces
US11/374,612 Expired - Fee Related US8045805B2 (en) 2005-03-15 2006-03-14 Method for determining whether a feature of interest or an anomaly is present in an image

Country Status (1)

Country Link
US (4) US20090324067A1 (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080232696A1 (en) * 2007-03-23 2008-09-25 Seiko Epson Corporation Scene Classification Apparatus and Scene Classification Method
JP5072693B2 (en) * 2007-04-11 2012-11-14 キヤノン株式会社 PATTERN IDENTIFICATION DEVICE AND ITS CONTROL METHOD, ABNORMAL PATTERN DETECTION DEVICE AND ITS CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
JP2008282085A (en) * 2007-05-08 2008-11-20 Seiko Epson Corp Scene discrimination device and scene discrimination method
WO2009012352A1 (en) * 2007-07-18 2009-01-22 Bruker Biosciences Corporation Handheld spectrometer including wireless capabilities
JP5159242B2 (en) 2007-10-18 2013-03-06 キヤノン株式会社 Diagnosis support device, diagnosis support device control method, and program thereof
US8170342B2 (en) * 2007-11-07 2012-05-01 Microsoft Corporation Image recognition of content
WO2010063010A2 (en) * 2008-11-26 2010-06-03 Guardian Technologies International Inc. System and method for texture visualization and image analysis to differentiate between malignant and benign lesions
US8274565B2 (en) * 2008-12-31 2012-09-25 Iscon Video Imaging, Inc. Systems and methods for concealed object detection
US9277878B2 (en) 2009-02-26 2016-03-08 Tko Enterprises, Inc. Image processing sensor systems
US9740921B2 (en) 2009-02-26 2017-08-22 Tko Enterprises, Inc. Image processing sensor systems
US9293017B2 (en) 2009-02-26 2016-03-22 Tko Enterprises, Inc. Image processing sensor systems
US8116527B2 (en) * 2009-10-07 2012-02-14 The United States Of America As Represented By The Secretary Of The Army Using video-based imagery for automated detection, tracking, and counting of moving objects, in particular those objects having image characteristics similar to background
US9569439B2 (en) 2011-10-31 2017-02-14 Elwha Llc Context-sensitive query enrichment
US8761476B2 (en) 2011-11-09 2014-06-24 The Johns Hopkins University Hyperspectral imaging for detection of skin related conditions
US10559380B2 (en) 2011-12-30 2020-02-11 Elwha Llc Evidence-based healthcare information management protocols
US10552581B2 (en) 2011-12-30 2020-02-04 Elwha Llc Evidence-based healthcare information management protocols
US10679309B2 (en) 2011-12-30 2020-06-09 Elwha Llc Evidence-based healthcare information management protocols
US10340034B2 (en) 2011-12-30 2019-07-02 Elwha Llc Evidence-based healthcare information management protocols
US10528913B2 (en) 2011-12-30 2020-01-07 Elwha Llc Evidence-based healthcare information management protocols
US20130173295A1 (en) 2011-12-30 2013-07-04 Elwha LLC, a limited liability company of the State of Delaware Evidence-based healthcare information management protocols
US10475142B2 (en) 2011-12-30 2019-11-12 Elwha Llc Evidence-based healthcare information management protocols
US20140281980A1 (en) 2013-03-15 2014-09-18 Chad A. Hage Methods and Apparatus to Identify a Type of Media Presented by a Media Player
WO2017139367A1 (en) 2016-02-08 2017-08-17 Imago Systems, Inc. System and method for the visualization and characterization of objects in images
US10331979B2 (en) * 2016-03-24 2019-06-25 Telesecurity Sciences, Inc. Extraction and classification of 3-D objects
US11832969B2 (en) 2016-12-22 2023-12-05 The Johns Hopkins University Machine learning approach to beamforming
US12056774B2 (en) * 2017-11-21 2024-08-06 International Business Machines Corporation Predicting a time of non-real time posts using contextual metadata
US11158286B2 (en) * 2018-10-05 2021-10-26 Disney Enterprises, Inc. Machine learning color science conversion
US11378965B2 (en) * 2018-11-15 2022-07-05 Toyota Research Institute, Inc. Systems and methods for controlling a vehicle based on determined complexity of contextual environment
US10992902B2 (en) * 2019-03-21 2021-04-27 Disney Enterprises, Inc. Aspect ratio conversion with machine learning
KR102048948B1 (en) * 2019-04-30 2020-01-08 (주)제이엘케이인스펙션 Image analysis apparatus and method
US11455724B1 (en) 2021-05-12 2022-09-27 PAIGE.AI, Inc. Systems and methods to process electronic images to adjust attributes of the electronic images

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5185809A (en) * 1987-08-14 1993-02-09 The General Hospital Corporation Morphometric analysis of anatomical tomographic data
US4991092A (en) * 1988-08-12 1991-02-05 The Regents Of The University Of California Image processor for enhancing contrast between subregions of a region of interest
US6868171B2 (en) * 1997-10-24 2005-03-15 Ultratouch Corporation Dynamic color imaging method and system
WO2001037717A2 (en) * 1999-11-26 2001-05-31 Applied Spectral Imaging Ltd. System and method for functional brain mapping
US7155041B2 (en) 2000-02-16 2006-12-26 Fuji Photo Film Co., Ltd. Anomalous shadow detection system
JP4149126B2 (en) 2000-12-05 2008-09-10 ジーイー・メディカル・システムズ・グローバル・テクノロジー・カンパニー・エルエルシー Image processing method, image processing apparatus, and image photographing apparatus
EP1293925A1 (en) 2001-09-18 2003-03-19 Agfa-Gevaert Radiographic scoring method
US7492937B2 (en) * 2004-05-26 2009-02-17 Ramsay Thomas E System and method for identifying objects of interest in image data
US7907762B2 (en) * 2004-05-26 2011-03-15 Guardian Technologies International, Inc. Method of creating a divergence transform for identifying a feature of interest in hyperspectral data
US20060269140A1 (en) * 2005-03-15 2006-11-30 Ramsay Thomas E System and method for identifying feature of interest in hyperspectral data
US7283654B2 (en) * 2004-08-26 2007-10-16 Lumeniq, Inc. Dynamic contrast visualization (DCV)

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4991428A (en) * 1989-09-25 1991-02-12 Heyde H Paul Ion chromatography method for low concentrations
US5157506A (en) * 1990-08-29 1992-10-20 Savitar, Inc. Standardized color calibration of electronic imagery
US5280428A (en) * 1992-07-14 1994-01-18 General Electric Company Method and apparatus for projecting diagnostic images from volumed diagnostic data accessed in data tubes
US5854851A (en) * 1993-08-13 1998-12-29 Sophis View Technologies Ltd. System and method for diagnosis of living tissue diseases using digital image processing
US5754676A (en) * 1994-04-08 1998-05-19 Olympus Optical Co., Ltd. Image classification apparatus
US5970164A (en) * 1994-08-11 1999-10-19 Sophisview Technologies, Ltd. System and method for diagnosis of living tissue diseases
US6011866A (en) * 1995-05-22 2000-01-04 Canon Kabushiki Kaisha Template formation method and apparatus
US5767980A (en) * 1995-06-20 1998-06-16 Goss Graphic Systems, Inc. Video based color sensing device for a printing press control system
US5850472A (en) * 1995-09-22 1998-12-15 Color And Appearance Technology, Inc. Colorimetric imaging system for measuring color and appearance
US5764386A (en) * 1996-01-25 1998-06-09 Medar, Inc. Method and system for automatically monitoring the colors of an object at a vision station
US5987345A (en) * 1996-11-29 1999-11-16 Arch Development Corporation Method and system for displaying medical images
US6226034B1 (en) * 1997-05-06 2001-05-01 Roper Scientificomasd, Inc. Spatial non-uniformity correction of a color sensor
US6088473A (en) * 1998-02-23 2000-07-11 Arch Development Corporation Method and computer readable medium for automated analysis of chest radiograph images using histograms of edge gradients for false positive reduction in lung nodule detection
US6721446B1 (en) * 1999-04-26 2004-04-13 Adobe Systems Incorporated Identifying intrinsic pixel colors in a region of uncertain pixels
US6768814B1 (en) * 1999-10-05 2004-07-27 Akzo Nobel N.V. Methods applying color measurement by means of an electronic imaging device
US7023956B2 (en) * 2002-11-11 2006-04-04 Lockheed Martin Corporaiton Detection methods and system using sequenced technologies
US7313221B2 (en) * 2002-12-10 2007-12-25 Commonwealth Scientific And Industrial Research Organization Radiographic equipment
US7486414B2 (en) * 2003-12-02 2009-02-03 Fuji Xerox Co., Ltd. Image forming device, pattern formation method and storage medium storing its program
US7505622B2 (en) * 2004-05-25 2009-03-17 Seiko Epson Corporation Color information acquisition apparatus, color information acquisition method, and color information acquisition program product
US20050281459A1 (en) * 2004-06-18 2005-12-22 Xerox Corporation Method for scanner characterization for color measurement of printed media having four or more colorants
US7295703B2 (en) * 2004-06-18 2007-11-13 Xerox Corporation Method for scanner characterization for color measurement of printed media having four or more colorants

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080163070A1 (en) * 2007-01-03 2008-07-03 General Electric Company Method and system for automating a user interface
US8566727B2 (en) * 2007-01-03 2013-10-22 General Electric Company Method and system for automating a user interface

Also Published As

Publication number Publication date
US20080159626A1 (en) 2008-07-03
US8045805B2 (en) 2011-10-25
US20090324067A1 (en) 2009-12-31
US20080159605A1 (en) 2008-07-03

Similar Documents

Publication Publication Date Title
US8045805B2 (en) Method for determining whether a feature of interest or an anomaly is present in an image
US7817833B2 (en) System and method for identifying feature of interest in hyperspectral data
US7907762B2 (en) Method of creating a divergence transform for identifying a feature of interest in hyperspectral data
US20060269135A1 (en) System and method for identifying objects of interest in image data
US7492937B2 (en) System and method for identifying objects of interest in image data
US20100266179A1 (en) System and method for texture visualization and image analysis to differentiate between malignant and benign lesions
US20110052032A1 (en) System and method for identifying signatures for features of interest using predetermined color spaces
WO2008157843A1 (en) System and method for the detection, characterization, visualization and classification of objects in image data
US10839510B2 (en) Methods and systems for human tissue analysis using shearlet transforms
US20210056694A1 (en) Visual augmentation of regions within images
Mouton et al. A review of automated image understanding within 3D baggage computed tomography security screening
CN105559813B (en) Medical diagnostic imaging apparatus and medical image-processing apparatus
Park et al. AE—Automation and emerging technologies: Co-occurrence matrix texture features of multi-spectral images on poultry carcasses
US20120033852A1 (en) System and method to find the precise location of objects of interest in digital images
US20060269161A1 (en) Method of creating a divergence transform for a class of objects
Park et al. Discriminant analysis of dual-wavelength spectral images for classifying poultry carcasses
Gupta et al. Predicting detection performance on security X-ray images as a function of image quality
WO2010063010A2 (en) System and method for texture visualization and image analysis to differentiate between malignant and benign lesions
US10908098B1 (en) Automatic method of material identification for computed tomography
Wang et al. Unsupervised cell identification on multidimensional X-ray fluorescence datasets
Kehl et al. Multi-spectral imaging via computed tomography (music)-comparing unsupervised spectral segmentations for material differentiation
Panda et al. Screening chronic myeloid leukemia neutrophils using a novel 3-Dimensional Spectral Gradient Mapping algorithm on hyperspectral images
Mouton et al. On the relevance of denoising and artefact reduction in 3d segmentation and classification within complex computed tomography imagery
CN111340127A (en) Energy spectrum CT iterative material decomposition method and device based on material clustering
Boccignone et al. Using Renyi's information and wavelets for target detection: an application to mammograms

Legal Events

Date Code Title Description
AS Assignment

Owner name: GUARDIAN TECHNOLOGIES INTERNATIONAL, INC., VIRGINI

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:RAMSAY, THOMAS E.;RAMSAY, EUGENE B.;FELTEAU, GERALD;AND OTHERS;REEL/FRAME:018113/0682

Effective date: 20060601

AS Assignment

Owner name: GUARDIAN TECHNOLOGIES INTERNATIONAL, INC., VIRGINI

Free format text: CORRECTED COVER SHEET TO CORRECT INVENTOR'S NAME, PREVIOUSLY RECORDED AT REEL/FRAME 018113/0682 (ASSIGNMENT OF ASSIGNOR'S INTEREST);ASSIGNORS:RAMSAY, THOMAS E.;RAMSAY, EUGENE B.;FELTEAU, GERARD;AND OTHERS;REEL/FRAME:018760/0265

Effective date: 20060601

AS Assignment

Owner name: APPLIED VISUAL SCIENCES, INC., VIRGINIA

Free format text: MERGER;ASSIGNOR:GUARDIAN TECHNOLOGIES INTERNATIONAL, INC.;REEL/FRAME:025238/0566

Effective date: 20100610

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION