
CN107578424B - Dynamic background difference detection method, system and device based on space-time classification - Google Patents

Dynamic background difference detection method, system and device based on space-time classification

Info

Publication number: CN107578424B (application CN201710659723.6A; also published as CN107578424A)
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: pixel, background, foreground, window
Legal status: Active (granted; the legal status is an assumption and is not a legal conclusion)
Inventors: 李熙莹, 李国鸣
Assignee (original and current): Sun Yat Sen University
Priority date (filing date): 2017-08-04

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a dynamic background difference detection method, system and device based on space-time classification. The method comprises the following steps: establishing a corresponding background model for each pixel in the image through grouped sampling over the time sequence, and classifying the pixel samples in the background model against the pixel to be detected to obtain a rough foreground mask image; then taking each foreground pixel point in the rough foreground mask image as a center, classifying the pixel points within a set neighborhood of the central pixel point, and correcting the central pixel point to a background pixel point or keeping it as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood. By adopting grouped sampling, the invention strengthens the ability to describe dynamic backgrounds; by using only the pixels of the same class as the central pixel to decide whether a foreground pixel is a real foreground pixel, it improves the detection accuracy. The invention can be widely applied in the field of moving target detection.

Description

Dynamic background difference detection method, system and device based on space-time classification
Technical Field
The invention relates to the field of moving target detection, and in particular to a dynamic background difference detection method, system and device based on space-time classification.
Background
Moving target detection is the basis of target recognition, tracking and subsequent behavior understanding, and is a research hotspot in the field of computer vision. Background subtraction is the most commonly used method for detecting moving objects; its basic principle is to detect moving objects by differencing the current frame against a background image. The background subtraction method is fast, accurate and easy to implement, and its key step is the acquisition of the background image. In practical applications, a static background is not easily obtained directly, owing to factors such as sudden illumination changes, fluctuation of objects in the actual background, camera shake, and the influence of moving objects entering and leaving the scene. Background subtraction under a dynamic background has therefore become the main detection approach for moving target detection.
The dynamic background is one of the factors that degrade the effect of the background subtraction method. Dynamic backgrounds in video scenes, such as swaying branches and fountains, are not regions of interest for detection, but because they exhibit motion they are often falsely detected as moving objects. Dynamic backgrounds tend to have two characteristics: first, the pixel values vary over several numerical values; second, the motion is usually confined to a small range and is strongly correlated with surrounding pixels. Research on eliminating false detections caused by dynamic backgrounds can be divided into two categories: the first directly describes the change of background pixel values over the time sequence, i.e., represents background pixels by building a mathematical model of pixel values over time; the second performs background modeling by combining neighborhood spatial information of the pixels, i.e., describes background pixels using the property that neighboring pixels have similar pixel-value distributions, or using texture features of the background region.
Common methods in the first category are the Gaussian mixture model method, the codebook method, and their improved variants. The Gaussian mixture model method treats each image pixel value as a superposition of several Gaussian models and is relatively robust to changes in background pixel values. The codebook method represents the changing values of a background pixel by several codewords, and can therefore be applied to modeling dynamic backgrounds. Improved variants of these two methods include non-parametric background modeling methods such as kernel density estimation, which use local modeling, are highly sensitive, and are robust for modeling frequently changing dynamic backgrounds. However, methods of the first category generally build the background model by sampling consecutive video frames directly; the sampling range is small, too many samples inevitably concentrate near fixed sampling time points, the samples are not very representative, and the ability of the background model to describe the dynamic background is reduced.
Common methods in the second category are the ViBe (Visual Background Extractor) method, methods based on principal component analysis, and foreground segmentation methods based on local texture features. The ViBe method and its improvements exploit the property that a pixel and the pixels in its neighborhood have temporarily similar value distributions, and build a sample set for each background pixel from neighborhood pixel values. Methods based on principal component analysis distinguish the dynamic background by analyzing the dissimilarity of the dynamic and static backgrounds in a feature space; they involve a large number of matrix operations, so their computational efficiency is low. Methods based on local texture features segment foreground objects from the background according to the texture smoothness of different components in the video scene; their limitation is that features with good discrimination must be designed manually. In summary, when a background subtraction method combined with spatial neighborhood information (i.e., a second-category method) is used to detect a dynamic background and foreground objects, the use of neighborhood spatial information (i.e., region characteristics) gives good robustness to a frequently moving dynamic background; however, all neighborhood pixels are used to describe the background pixel, so if some neighborhood pixels are in fact foreground pixels, the detection effect is affected and the detection accuracy is reduced.
Disclosure of Invention
To solve the above technical problems, the first object of the present invention is to provide a dynamic background difference detection method based on space-time classification that has a strong ability to describe the dynamic background and high detection accuracy.
The second object of the present invention is to provide a dynamic background difference detection system based on space-time classification that has a strong ability to describe the dynamic background and high detection accuracy.
The third object of the present invention is to provide a dynamic background difference detection device based on space-time classification that has a strong ability to describe the dynamic background and high detection accuracy.
The first technical scheme adopted by the invention is as follows:
a dynamic background difference detection method based on space-time classification comprises the following steps:
establishing a corresponding background model for each pixel in the image through grouped sampling over the time sequence, and classifying the pixel samples in the background model against the pixel to be detected to obtain a rough foreground mask image;
and taking each foreground pixel point in the rough foreground mask image as a center, classifying the pixel points within the set neighborhood of the central pixel point, and correcting the central pixel point to a background pixel point or keeping it as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood, thereby obtaining an accurate foreground mask image.
Further, the step of establishing a corresponding background model for each pixel in the image through grouped sampling over the time sequence, and classifying the pixel samples in the background model against the pixel to be detected to obtain a rough foreground mask image, specifically includes:
selecting a first frame image of a video as an initial reference background image;
for each pixel in the video image, selecting the first N frames of the video and initializing the background model by a grouped sampling method;
updating the reference background image by using the background model;
updating the background model once every k frames;
and classifying the pixels in the background model according to the pixels to be detected to obtain a rough foreground mask image.
Further, the step of, for each pixel in the video image, selecting the first N frames of the video and initializing the background model by a grouped sampling method specifically includes:
dividing the pixel values at the same position of the first N frames of the video evenly into m sampling groups in time order, each sampling group having k pixel values, where N = m·k;
adopting a nearest-neighbor pixel sampling method within each of the m sampling groups, selecting the pixel with the minimum distance to the reference background pixel as the pixel sample of that sampling group, the pixel sample being selected according to the formula:

c_s = argmin_{c_i} ||c_i − c_bg||

where c_s is the pixel sample of the sampling group, c_i is a pixel within the sampling group, and c_bg is the reference background pixel;
forming the m pixel samples of the m sampling groups into the background model, the expression of the background model C being:

C = {c_s^1, c_s^2, …, c_s^m}

where c_s^1, …, c_s^m are the pixel samples of the 1st to m-th sampling groups, respectively.
Further, the step of updating the reference background image by using the background model specifically includes:
updating the reference background image by the nearest-neighbor pixel sampling method according to the pixel samples of the background model, the update formula of the reference background image being:

c_bg^new = argmin_{c_s^j ∈ C} ||c_s^j − c_bg^old||

where c_bg^old and c_bg^new are the reference background pixels before and after the update, respectively, and c_s^j is the j-th pixel sample of the background model C, j = 1, 2, …, m.
Further, the step of classifying the pixels in the background model according to the pixels to be detected to obtain a rough foreground mask image specifically includes:
finding all pixel samples in the background model C that are of the same class as the pixel to be detected, and denoting the number of such samples by T, where a sample of the same class as the pixel to be detected satisfies:

||c_s^j − c_t|| ≤ ε

where c_t is the pixel to be detected, ε is a given first threshold, and c_s^j is the j-th pixel sample of the background model C, j = 1, 2, …, m;
determining whether the number T is greater than a given second threshold f_t; if so, c_t is judged to be a background pixel point, otherwise c_t is judged to be a foreground pixel point, finally obtaining the rough foreground mask image.
Further, the step of taking each foreground pixel point in the rough foreground mask image as a center, classifying the pixel points within the set neighborhood of the central pixel point, and correcting the central pixel point to a background pixel point or keeping it as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood, thereby obtaining the accurate foreground mask image, specifically includes:
for each foreground pixel point in the rough foreground mask image, setting, centered on that foreground pixel point, a square window W with radius r and window size (2r+1)^2;
classifying the pixels in the window W according to the pixel value of the central pixel point in the original video frame, and finding and recording the number of background pixel points among the pixels in the window W of the same class as the central pixel point;
and correcting the central pixel point of the window W to a background pixel point or keeping it as a foreground pixel point according to the recorded number of pixel points.
Further, the step of classifying the pixels in the window W according to the pixel value of the central pixel point in the original video frame, and finding and recording the number of background pixel points among the pixels in the window W of the same class as the central pixel point, specifically includes:
finding the pixel value c_f of the central pixel point of the window W in the original video frame;
finding, among the pixels in the window W, the pixels of the same class as c_f, where a pixel of the same class as c_f satisfies: ω_w·||c_w − c_f|| ≤ γ, in which c_w is a pixel point in the window W, γ is a given third threshold, and ω_w is the weight coefficient of c_w, whose expression is:

ω_w = ||p_w − p_f|| · I{||p_w − p_f|| > h}

where p_w is the pixel coordinate of c_w, p_f is the pixel coordinate of c_f, ||c_w − c_f|| is the distance between the pixel values of c_w and c_f, ||p_w − p_f|| is the distance between the pixel coordinates p_w and p_f, I{·} is an indicator function whose value is 1 when its condition is true and 0 otherwise, and h is a distance threshold;
finding and recording the number D_0 of background pixel points among the pixels in the window W of the same class as c_f.
Further, the step of correcting the central pixel point of the window W to a background pixel point or keeping it as a foreground pixel point according to the recorded number of pixel points specifically includes:
judging whether the recorded number D_0 satisfies D_0 ≥ α·D; if so, correcting the central pixel point of the window W to a background pixel point, otherwise keeping the central pixel point of the window W as a foreground pixel point, where D is the number of all pixel points in the window W and α is a given proportionality coefficient.
The second technical scheme adopted by the invention is as follows:
a dynamic background difference detection system based on space-time classification comprises the following modules:
the time classification module, used for establishing a corresponding background model for each pixel in the image through grouped sampling over the time sequence, and classifying the pixel samples in the background model against the pixel to be detected to obtain a rough foreground mask image;
and the space classification module, used for taking each foreground pixel point in the rough foreground mask image as a center, classifying the pixel points within the set neighborhood of the central pixel point, and correcting the central pixel point to a background pixel point or keeping it as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood, thereby obtaining an accurate foreground mask image.
The third technical scheme adopted by the invention is as follows:
a dynamic background difference detection device based on space-time classification comprises:
a memory for storing a program;
a processor for executing the program to:
establishing a corresponding background model for each pixel in the image through grouped sampling over the time sequence, and classifying the pixel samples in the background model against the pixel to be detected to obtain a rough foreground mask image;
and taking each foreground pixel point in the rough foreground mask image as a center, classifying the pixel points within the set neighborhood of the central pixel point, and correcting the central pixel point to a background pixel point or keeping it as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood, thereby obtaining an accurate foreground mask image.
The method of the invention has the following beneficial effects: a corresponding background model is established for each pixel in the image through grouped sampling over the time sequence. Because grouped sampling is used when building the pixel's background model, the sampling range is larger than when consecutive video frames are sampled directly, the situation that too many samples concentrate near a fixed sampling time point is avoided, the samples are more representative, and the ability of the background model to describe the dynamic background is enhanced. The central pixel point is corrected to a background pixel point or kept as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood; since only these same-class pixels are used to decide whether a foreground pixel point is a real foreground pixel point, rather than blindly using all neighborhood pixels, the detection accuracy is improved.
The system of the invention has the following beneficial effects: it comprises a time classification module and a space classification module. The time classification module uses grouped sampling when building the pixel's background model, so compared with directly sampling consecutive video frames, the sampling range is larger, too many samples no longer concentrate near a fixed sampling time point, the samples are more representative, and the ability of the background model to describe the dynamic background is enhanced. The space classification module corrects the central pixel point to a background pixel point or keeps it as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood; only these same-class pixels are used to decide whether a foreground pixel point is a real foreground pixel point, rather than blindly using all neighborhood pixels, so the detection accuracy is improved.
The device of the invention has the following beneficial effects: the processor executes the program stored in the memory to establish a corresponding background model for each pixel in the image through grouped sampling over the time sequence, with the same enlarged sampling range and more representative samples as above, enhancing the ability of the background model to describe the dynamic background; the processor likewise corrects the central pixel point to a background pixel point or keeps it as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood, using only these same-class pixels rather than blindly using all neighborhood pixels, so the detection accuracy is improved.
Drawings
FIG. 1 is a flow chart of a dynamic background difference detection method based on spatiotemporal classification according to the present invention;
FIG. 2 is a flowchart of background model initialization and update during the time classification phase of the present invention;
FIG. 3 is a flow chart of pixel classification detection during the temporal classification phase of the present invention;
FIG. 4 is a flow chart of the detection in the spatial classification stage according to the present invention.
Detailed Description
Referring to fig. 1, a dynamic background difference detection method based on spatiotemporal classification includes the following steps:
establishing a corresponding background model for each pixel in the image through grouped sampling over the time sequence, and classifying the pixel samples in the background model against the pixel to be detected to obtain a rough foreground mask image;
and taking each foreground pixel point in the rough foreground mask image as a center, classifying the pixel points within the set neighborhood of the central pixel point, and correcting the central pixel point to a background pixel point or keeping it as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood, thereby obtaining an accurate foreground mask image.
Wherein, the image can be a video image (composed of one or more frames of video). The accurate foreground mask image reflects the results of moving object detection.
With reference to fig. 2 and fig. 3, as a further preferred embodiment, the step of establishing a corresponding background model for each pixel in the image through grouped sampling over the time sequence, and classifying the pixel samples in the background model against the pixel to be detected to obtain a rough foreground mask image, specifically includes:
selecting a first frame image of a video as an initial reference background image;
for each pixel in the video image, selecting the first N frames of the video and initializing the background model by a grouped sampling method;
updating the reference background image by using the background model;
updating the background model once every k frames;
and classifying the pixels in the background model according to the pixels to be detected to obtain a rough foreground mask image.
Wherein N and k are both positive integers. k is equal to the number of pixel values in each sample group in the grouped sampling method.
Further as a preferred embodiment, the step of, for each pixel in the video image, selecting the first N frames of the video and initializing the background model by a grouped sampling method specifically includes:
dividing the pixel values at the same position of the first N frames of the video evenly into m sampling groups in time order, each sampling group having k pixel values, where N = m·k;
adopting a nearest-neighbor pixel sampling method within each of the m sampling groups, selecting the pixel with the minimum distance to the reference background pixel as the pixel sample of that sampling group, the pixel sample being selected according to the formula:

c_s = argmin_{c_i} ||c_i − c_bg||

where c_s is the pixel sample of the sampling group, c_i is a pixel within the sampling group, and c_bg is the reference background pixel;
forming the m pixel samples of the m sampling groups into the background model, the expression of the background model C being:

C = {c_s^1, c_s^2, …, c_s^m}

where c_s^1, …, c_s^m are the pixel samples of the 1st to m-th sampling groups, respectively.
Further as a preferred embodiment, the step of updating the reference background image by using the background model specifically includes:
updating the reference background image by the nearest-neighbor pixel sampling method according to the pixel samples of the background model, the update formula of the reference background image being:

c_bg^new = argmin_{c_s^j ∈ C} ||c_s^j − c_bg^old||

where c_bg^old and c_bg^new are the reference background pixels before and after the update, respectively, and c_s^j is the j-th pixel sample of the background model C, j = 1, 2, …, m.
After the reference background image is updated by the background model, for the subsequent video images every k frames form a new sampling group; a new pixel sample is obtained from each new group by the same nearest-neighbor selection formula, added to the background model C, and the first (oldest) pixel sample in C is deleted, so that the total number of samples is kept at m. The updated background model may be used to update the reference background image again, as shown in FIG. 2.
Referring to fig. 3, as a further preferred embodiment, the step of classifying pixels in the background model according to the pixels to be detected to obtain a rough foreground mask image specifically includes:
finding all pixel samples in the background model C that are of the same class as the pixel to be detected, and denoting the number of such samples by T, where a sample of the same class as the pixel to be detected satisfies:

||c_s^j − c_t|| ≤ ε

where c_t is the pixel to be detected, ε is a given first threshold, and c_s^j is the j-th pixel sample of the background model C, j = 1, 2, …, m;
determining whether the number T is greater than a given second threshold f_t; if so, c_t is judged to be a background pixel point, otherwise c_t is judged to be a foreground pixel point, finally obtaining the rough foreground mask image.
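A minimal sketch of this per-pixel temporal classification, under the same scalar-pixel assumption as above (classify_pixel, eps and f_t are illustrative names for the test function, the first threshold ε and the second threshold):

```python
def classify_pixel(model, c_t, eps, f_t):
    # Count the model samples whose distance to the pixel to be detected
    # is within the first threshold eps, then compare the count T with
    # the second threshold f_t.
    T = sum(abs(c_s - c_t) <= eps for c_s in model)
    return 'background' if T > f_t else 'foreground'
```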
Referring to fig. 4, as a further preferred embodiment, the step of taking each foreground pixel point in the rough foreground mask image as a center, classifying the pixel points within the set neighborhood of the central pixel point, and correcting the central pixel point to a background pixel point or keeping it as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood, thereby obtaining the accurate foreground mask image, specifically includes:
for each foreground pixel point in the rough foreground mask image, setting, centered on that foreground pixel point, a square window W with radius r and window size (2r+1)^2;
classifying the pixels in the window W according to the pixel value of the central pixel point in the original video frame, and finding and recording the number of background pixel points among the pixels in the window W of the same class as the central pixel point;
and correcting the central pixel point of the window W to a background pixel point or keeping it as a foreground pixel point according to the recorded number of pixel points.
Further, as a preferred embodiment, the step of classifying the pixels in the window W according to the pixel value of the central pixel point in the original video frame, and finding and recording the number of background pixel points among the pixels in the window W of the same class as the central pixel point, specifically includes:
finding the pixel value c_f of the central pixel point of the window W in the original video frame;
finding, among the pixels in the window W, the pixels of the same class as c_f, where a pixel of the same class as c_f satisfies: ω_w·||c_w − c_f|| ≤ γ, in which c_w is a pixel point in the window W, γ is a given third threshold, and ω_w is the weight coefficient of c_w, whose expression is:

ω_w = ||p_w − p_f|| · I{||p_w − p_f|| > h}

where p_w is the pixel coordinate of c_w, p_f is the pixel coordinate of c_f, ||c_w − c_f|| is the distance between the pixel values of c_w and c_f, ||p_w − p_f|| is the distance between the pixel coordinates p_w and p_f, I{·} is an indicator function whose value is 1 when its condition is true and 0 otherwise, and h is a distance threshold;
finding and recording the number D_0 of background pixel points among the pixels in the window W of the same class as c_f.
Further as a preferred embodiment, the step of correcting the central pixel point of the window W to a background pixel point or keeping it as a foreground pixel point according to the recorded number of pixel points specifically includes:
judging whether the recorded number D_0 satisfies D_0 ≥ α·D; if so, correcting the central pixel point of the window W to a background pixel point, otherwise keeping the central pixel point of the window W as a foreground pixel point, where D is the number of all pixel points in the window W and α is a given proportionality coefficient.
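The following single-channel Python sketch illustrates this spatial correction. It assumes the rough mask marks foreground as 1, clips the window at image borders, uses the Euclidean coordinate distance, and takes the weight coefficient in the form ω_w = ||p_w − p_f||·I{||p_w − p_f|| > h} given above; these choices are illustrative assumptions, not prescriptions from the patent:

```python
import numpy as np

def refine_foreground(frame, rough_mask, r, gamma, h, alpha):
    # Spatial classification on a grayscale frame; rough_mask: 1 = foreground.
    H, W = frame.shape
    refined = rough_mask.copy()
    for y, x in zip(*np.nonzero(rough_mask)):
        c_f = float(frame[y, x])
        D = D0 = 0
        for v in range(max(0, y - r), min(H, y + r + 1)):
            for u in range(max(0, x - r), min(W, x + r + 1)):
                D += 1                                    # all pixels in W
                dist = ((v - y) ** 2 + (u - x) ** 2) ** 0.5
                w = dist if dist > h else 0.0             # weight coefficient
                # same-class test: w * |c_w - c_f| <= gamma
                if w * abs(float(frame[v, u]) - c_f) <= gamma:
                    if rough_mask[v, u] == 0:             # background point
                        D0 += 1
        if D0 >= alpha * D:           # enough same-class background pixels:
            refined[y, x] = 0         # correct the center to background
    return refined
```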
Corresponding to the method of fig. 1, the invention also provides a dynamic background difference detection system based on space-time classification, which comprises the following modules:
the time classification module, used for establishing a corresponding background model for each pixel in the image through grouped sampling over the time sequence, and classifying the pixel samples in the background model against the pixel to be detected to obtain a rough foreground mask image;
and the space classification module, used for taking each foreground pixel point in the rough foreground mask image as a center, classifying the pixel points within the set neighborhood of the central pixel point, and correcting the central pixel point to a background pixel point or keeping it as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood, thereby obtaining an accurate foreground mask image.
Corresponding to the method of fig. 1, the invention also provides a dynamic background difference detection device based on space-time classification, which includes:
a memory for storing a program;
a processor for executing the program to:
establishing a corresponding background model for each pixel in the image through grouped sampling over the time sequence, and classifying the pixel samples in the background model against the pixel to be detected to obtain a rough foreground mask image;
and taking each foreground pixel point in the rough foreground mask image as a center, classifying the pixel points within the set neighborhood of the central pixel point, and correcting the central pixel point to a background pixel point or keeping it as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood, thereby obtaining an accurate foreground mask image.
The invention will be further explained and explained with reference to the drawings and the embodiments in the description.
Example one
The invention proposes a new dynamic background difference detection method based on space-time classification. When building the background model, the method adopts grouped sampling, whereas the prior art initializes the background model directly from consecutive video frames; the method can therefore obtain more representative pixel samples and better represent the dynamic background. In the spatial classification step, the method distinguishes the classes of the neighborhood pixels and uses only the same-class pixels to further determine whether the central pixel is a real foreground pixel, whereas the prior art uses all neighborhood pixels to describe the background pixel, so that if some neighborhood pixels are foreground pixels, they are wrongly used to describe the background and the detection effect suffers. The method of the invention therefore effectively improves the accuracy of moving target detection under a dynamic background.
As shown in fig. 1, the dynamic background difference detection method of the invention mainly comprises two steps: temporal classification and spatial classification. Temporal classification means establishing a corresponding background model for each pixel in the image through grouped sampling over the time sequence, and then classifying the pixel samples in the background model against the pixel to be detected: if the number of samples of the same class as the pixel to be detected is greater than a given threshold, the pixel to be detected is judged to be a background pixel, otherwise it is judged to be a foreground pixel, thereby obtaining a rough mask image of the foreground target. Spatial classification further suppresses falsely detected foreground pixel points on the basis of the rough mask, specifically: taking each foreground point in the rough mask as a center, the pixels within its set neighborhood are classified, and if more than a set number of the pixels of the same class as the central pixel within the set neighborhood belong to background pixels, the foreground point is corrected to a background point, thereby obtaining a more accurate foreground mask.
As shown in fig. 2 and 3, the time classification step specifically includes:
(1) background model initialization and updating.
As shown in fig. 2, the background model initialization and update process may be further subdivided into:
1) background model initialization phase
In the initialization stage of the background model, a background model is established for each pixel in the image by the grouped sampling method. First, the first frame image of the video is selected as the initial reference background image, denoted c_bg. Next, for each pixel in the video image, the background model is initialized with the first N frames, specifically: the pixel values at the same position of the first N frames are divided evenly into m sampling groups in time order, each group having k pixel values, where N = m·k; within each group the nearest-neighbor pixel sampling method is adopted, i.e., the pixel with the minimum distance to the reference background pixel is selected as the pixel sample, the selection formula being:

c_s = argmin_{c_i} ||c_i − c_bg||    (1)

where c_i is a pixel within a sampling group and c_s is the sampled pixel sample. For the m sampling groups, m pixel samples are obtained, forming the background model C:

C = {c_s^1, c_s^2, …, c_s^m}    (2)
2) update phase
In the updating stage, the reference background image is updated by the nearest-neighbor pixel sampling method according to the pixel samples in the background model C, that is:

c_bg^new = argmin_{c_s^j ∈ C} ||c_s^j − c_bg^old||    (3)
for subsequent video images, a new sampling group can be formed by each k frame of images, new pixel samples are obtained by using the formula (1), the new pixel samples are added into the background model C, and the first pixel sample in the background model C is deleted, so that the total number of the samples is kept to be m. The updated background model may be used to update the reference background image again, as shown in FIG. 2.
(2) Pixel classification detection.
As shown in fig. 3, the pixel classification detection can be further subdivided into:
1) Search the background model C for the pixel samples satisfying formula (4); these are the samples of the same class as the pixel to be detected, and their number is denoted T:

||c_s^j − c_t|| ≤ ε    (4)

where c_t represents the pixel to be detected, ε is the given first threshold, and c_s^j is the j-th pixel sample of C, j = 1, 2, …, m.
2) Determine whether the number T is greater than a given second threshold f_t; if so, c_t is judged to be a background pixel point, otherwise c_t is judged to be a foreground pixel point, i.e., as in formula (5):

c_t is judged to be a background pixel point if T > f_t, and a foreground pixel point otherwise    (5)
and (5) finishing the time classification step to obtain a rough foreground mask image containing a small amount of noise.
As shown in fig. 4, the spatial classification step specifically includes:
(1) Setting a square window.
In the spatial classification stage, in order to further determine that a foreground point in the rough foreground mask is a real foreground point rather than a dynamic background pixel, a square window of radius r, denoted W, is set around each foreground point in the rough mask image; the size of the window is (2r+1)^2.
(2) Classifying the pixels in the window according to the pixel value of the central foreground point in the original video frame (i.e., the image before grouped sampling).
Let c_f be the pixel value of a foreground point of the rough foreground mask in the original video frame, and c_w a pixel point in the window. Among the window pixels, find the pixels of the same class as c_f, i.e., the pixels satisfying formula (6):

ω_w·||c_w − c_f|| ≤ γ    (6)

where γ is a given third threshold and ω_w is the weight coefficient of c_w, defined as:

ω_w = ||p_w − p_f|| · I{||p_w − p_f|| > h}    (7)

where p_w is the pixel coordinate of c_w, p_f is the pixel coordinate of c_f, ||c_w − c_f|| is the distance between the pixel values of c_w and c_f, ||p_w − p_f|| is the distance between the pixel coordinates of c_w and c_f, I{·} is an indicator function that is 1 when its condition is true and 0 otherwise, and h is a distance threshold. In formula (7), the larger the coordinate distance between the pixels, the larger ω_w, so formula (6) is satisfied only when the pixel-value distance ||c_w − c_f|| is correspondingly small. Meanwhile, the indicator function in formula (7) makes ω_w ≡ 0 whenever ||p_w − p_f|| ≤ h, so that formula (6) always holds in that case; this means that c_w and c_f are necessarily of the same class when the distance between their pixel coordinates is smaller than h.
(3) Find the pixels of the same class as the window-center pixel c_f according to formula (6); according to the result of the rough foreground mask, record the number of them that belong to background pixel points as D_0, and the number of all pixels in the window as D.
(4) Use formula (8) to further determine whether the pixel point at the center of the window is a real foreground pixel point:

the central pixel point of the window is corrected to a background pixel point if D_0 ≥ α·D, and kept as a foreground pixel point otherwise    (8)

where α is a given proportionality coefficient. Formula (8) states that if D_0 ≥ α·D, the central pixel point of the window is not a real foreground point and is corrected to a background point.
The spatial classification step thus yields, on the basis of the rough foreground mask, an accurate foreground mask that contains no noise and segments foreground from background accurately.
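Putting the two stages together, a sketch of the whole detector over a grayscale clip follows. It assumes the helper functions rough_mask_from_models() and refine_foreground() from the sketches above are in scope, and that frames is an (F, H, W) float array with F > m·k; it is an illustration of the flow in fig. 1 under those assumptions, not a reference implementation (in particular, the per-frame reference-background update frequency is an assumption):

```python
import numpy as np

def detect_moving_targets(frames, m, k, eps, f_t, r, gamma, h, alpha):
    c_bg = frames[0].copy()              # initial reference background
    N = m * k
    # Grouped-sampling initialization, formula (1) at every position.
    groups = frames[:N].reshape(m, k, *frames.shape[1:])
    idx = np.abs(groups - c_bg).argmin(axis=1)
    models = np.take_along_axis(groups, idx[:, None], axis=1)[:, 0]
    masks = []
    for t in range(N, frames.shape[0]):
        # Formula (3): reference background <- nearest model sample.
        j = np.abs(models - c_bg).argmin(axis=0)
        c_bg = np.take_along_axis(models, j[None], axis=0)[0]
        rough = rough_mask_from_models(frames[t], models, eps, f_t)
        masks.append(refine_foreground(frames[t], rough, r, gamma, h, alpha))
        if (t - N) % k == k - 1:         # every k frames: new sampling group
            grp = frames[t - k + 1:t + 1]
            i = np.abs(grp - c_bg).argmin(axis=0)
            new_s = np.take_along_axis(grp, i[None], axis=0)[0]
            models = np.concatenate([models[1:], new_s[None]], axis=0)
    return masks
```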
Compared with the prior art, the invention has the following advantages:
1) The background model of each pixel is established by grouped sampling, which enlarges the sampling range and prevents too many sampling points from falling on a foreground target; at the same time, grouped sampling prevents too many samples from concentrating near fixed sampling time points and strengthens the representativeness of the samples, thereby enhancing the effectiveness of the background model in describing the dynamic background.
2) When a sampling group selects its pixel sample, the nearest-neighbor pixel sampling method is adopted, i.e., the pixel point nearest to the current reference background pixel is taken directly as the sample, and the reference background image is likewise updated with the nearest pixel; no complex mathematical modeling process is needed, so the method is simple, efficient and easy to implement on a computer.
3) In the spatial classification step, the real foreground points in the rough mask are further confirmed using the window-neighborhood pixels; the neighborhood pixels are first classified, and only the pixels of the same class as the window-center pixel are used to decide whether it is a real foreground point, rather than blindly using all neighborhood pixels, so the detection accuracy is improved.
4) In the spatial classification step, both the pixel-value difference and the pixel-coordinate distance are considered in the classification; formulas (6) and (7) unify the two, and the indicator function introduced in formula (7) makes the pixels within the small neighborhood range count as the same class as the central pixel, while pixels outside that range are classified according to the result of formula (6). This classification better matches the actual situation, so the classification result is more accurate.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A dynamic background difference detection method based on space-time classification is characterized in that: the method comprises the following steps:
establishing a corresponding background model for each pixel in the image through grouped sampling over the time sequence, and classifying the pixel samples in the background model against the pixel to be detected to obtain a rough foreground mask image;
taking each foreground pixel point in the rough foreground mask image as a center, classifying the pixel points within the set neighborhood of the central pixel point, and correcting the central pixel point to a background pixel point or keeping it as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood, thereby obtaining an accurate foreground mask image;
the step of classifying the pixels in the background model according to the pixels to be detected to obtain the rough foreground mask image specifically includes:
finding all pixel samples in the background model C that are of the same class as the pixel to be detected, and denoting the number of such samples by T, where a sample of the same class as the pixel to be detected satisfies:

||c_s^j − c_t|| ≤ ε

where c_t is the pixel to be detected, ε is a given first threshold, and c_s^j is the j-th pixel sample of the background model C, j = 1, 2, …, m;
determining whether the number T is greater than a given second threshold f_t; if so, c_t is judged to be a background pixel point, otherwise c_t is judged to be a foreground pixel point, finally obtaining the rough foreground mask image.
2. The dynamic background difference detection method based on spatiotemporal classification as claimed in claim 1, characterized in that: the step of establishing a corresponding background model for each pixel in the image through grouping sampling on the time sequence, and classifying the pixels in the background model according to the pixels to be detected to obtain a rough foreground mask image specifically comprises the following steps:
selecting a first frame image of a video as an initial reference background image;
for each pixel in the video image, selecting the first N frames of the video and initializing the background model by a grouped sampling method;
updating the reference background image by using the background model;
updating the background model once every k frames;
and classifying the pixels in the background model according to the pixels to be detected to obtain a rough foreground mask image.
3. The dynamic background difference detection method based on spatiotemporal classification as claimed in claim 2, characterized in that: the step of, for each pixel in the video image, selecting the first N frames of the video and initializing the background model by a grouped sampling method specifically comprises the following steps:
dividing the pixel values at the same position of the first N frames of the video evenly into m sampling groups in time order, each sampling group having k pixel values, where N = m·k;
adopting a nearest-neighbor pixel sampling method within each of the m sampling groups, selecting the pixel with the minimum distance to the reference background pixel as the pixel sample of that sampling group, the pixel sample being selected according to the formula:

c_s = argmin_{c_i} ||c_i − c_bg||

where c_s is the pixel sample of the sampling group, c_i is a pixel within the sampling group, and c_bg is the reference background pixel;
forming the m pixel samples of the m sampling groups into the background model, the expression of the background model C being:

C = {c_s^1, c_s^2, …, c_s^m}

where c_s^1, …, c_s^m are the pixel samples of the 1st to m-th sampling groups, respectively.
4. The dynamic background difference detection method based on spatiotemporal classification as claimed in claim 3, characterized in that: the step of updating the reference background image by using the background model specifically comprises:
updating the reference background image by the nearest-neighbor pixel sampling method according to the pixel samples of the background model, the update formula of the reference background image being:

c_bg^new = argmin_{c_s^j ∈ C} ||c_s^j − c_bg^old||

where c_bg^old and c_bg^new are the reference background pixels before and after the update, respectively, and c_s^j is the j-th pixel sample of the background model C, j = 1, 2, …, m.
5. The dynamic background difference detection method based on spatio-temporal classification as claimed in any one of claims 1-4, characterized in that: the step of taking each foreground pixel point in the rough foreground mask image as a center, classifying the pixel points within the set neighborhood of the central pixel point, and correcting the central pixel point to a background pixel point or keeping it as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood, thereby obtaining an accurate foreground mask image, specifically comprises the following steps:
for each foreground pixel point in the rough foreground mask image, setting, centered on that foreground pixel point, a square window W with radius r and window size (2r+1)^2;
classifying the pixels in the window W according to the pixel value of the central pixel point in the original video frame, and finding and recording the number of background pixel points among the pixels in the window W of the same class as the central pixel point;
and correcting the central pixel point of the window W to a background pixel point or keeping it as a foreground pixel point according to the recorded number of pixel points.
6. The dynamic background difference detection method based on spatiotemporal classification as claimed in claim 5, characterized in that: the step of classifying the pixels in the window W according to the pixel value of the central pixel point in the original video frame, and finding and recording the number of background pixel points among the pixels in the window W of the same class as the central pixel point, specifically comprises the following steps:
finding the pixel value c_f of the central pixel point of the window W in the original video frame;
finding, among the pixels in the window W, the pixels of the same class as c_f, where a pixel of the same class as c_f satisfies: ω_w·||c_w − c_f|| ≤ γ, in which c_w is a pixel point in the window W, γ is a given third threshold, and ω_w is the weight coefficient of c_w, whose expression is:

ω_w = ||p_w − p_f|| · I{||p_w − p_f|| > h}

where p_w is the pixel coordinate of c_w, p_f is the pixel coordinate of c_f, ||c_w − c_f|| is the distance between the pixel values of c_w and c_f, ||p_w − p_f|| is the distance between the pixel coordinates p_w and p_f, I{·} is an indicator function whose value is 1 when its condition is true and 0 otherwise, and h is a distance threshold;
finding and recording the number D_0 of background pixel points among the pixels in the window W of the same class as c_f.
7. The dynamic background difference detection method based on spatiotemporal classification as claimed in claim 6, characterized in that: the step of correcting the central pixel point of the window W to a background pixel point or keeping it as a foreground pixel point according to the recorded number of pixel points specifically comprises the following steps:
judging whether the recorded number D_0 satisfies D_0 ≥ α·D; if so, correcting the central pixel point of the window W to a background pixel point, otherwise keeping the central pixel point of the window W as a foreground pixel point, where D is the number of all pixel points in the window W and α is a given proportionality coefficient.
8. A dynamic background difference detection system based on space-time classification is characterized in that: the system comprises the following modules:
the time classification module, used for establishing a corresponding background model for each pixel in the image through grouped sampling over the time sequence, and classifying the pixel samples in the background model against the pixel to be detected to obtain a rough foreground mask image;
the space classification module, used for taking each foreground pixel point in the rough foreground mask image as a center, classifying the pixel points within the set neighborhood of the central pixel point, and correcting the central pixel point to a background pixel point or keeping it as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood, thereby obtaining an accurate foreground mask image;
the time classification module is specifically configured to:
finding all pixel samples in the background model C that are of the same class as the pixel to be detected, and denoting the number of such samples by T, where a sample of the same class as the pixel to be detected satisfies:

||c_s^j − c_t|| ≤ ε

where c_t is the pixel to be detected, ε is a given first threshold, and c_s^j is the j-th pixel sample of the background model C, j = 1, 2, …, m;
determining whether the number T is greater than a given second threshold f_t; if so, c_t is judged to be a background pixel point, otherwise c_t is judged to be a foreground pixel point, finally obtaining the rough foreground mask image.
9. A dynamic background difference detection device based on space-time classification is characterized in that: the method comprises the following steps:
a memory for storing a program;
a processor for executing the program to:
establishing a corresponding background model for each pixel in the image through grouped sampling over the time sequence, and classifying the pixel samples in the background model against the pixel to be detected to obtain a rough foreground mask image;
taking each foreground pixel point in the rough foreground mask image as a center, classifying the pixel points within the set neighborhood of the central pixel point, and correcting the central pixel point to a background pixel point or keeping it as a foreground pixel point according to the number of background pixel points among the pixels of the same class as the central pixel point within the set neighborhood, thereby obtaining an accurate foreground mask image;
the classifying the pixels in the background model according to the pixels to be detected to obtain a rough foreground mask image specifically includes:
finding all pixel samples in the background model C that are of the same class as the pixel to be detected, and denoting the number of such samples by T, where a sample of the same class as the pixel to be detected satisfies:

||c_s^j − c_t|| ≤ ε

where c_t is the pixel to be detected, ε is a given first threshold, and c_s^j is the j-th pixel sample of the background model C, j = 1, 2, …, m;
determining whether the number T is greater than a given second threshold f_t; if so, c_t is judged to be a background pixel point, otherwise c_t is judged to be a foreground pixel point, finally obtaining the rough foreground mask image.
CN201710659723.6A 2017-08-04 2017-08-04 Dynamic background difference detection method, system and device based on space-time classification Active CN107578424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710659723.6A CN107578424B (en) 2017-08-04 2017-08-04 Dynamic background difference detection method, system and device based on space-time classification

Publications (2)

Publication Number Publication Date
CN107578424A CN107578424A (en) 2018-01-12
CN107578424B true CN107578424B (en) 2020-09-29

Family

ID=61035644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710659723.6A Active CN107578424B (en) 2017-08-04 2017-08-04 Dynamic background difference detection method, system and device based on space-time classification

Country Status (1)

Country Link
CN (1) CN107578424B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110738682B (en) * 2019-10-23 2022-02-01 南京航空航天大学 Foreground segmentation method and system
CN111027602B (en) * 2019-11-25 2023-04-07 清华大学深圳国际研究生院 Method and system for detecting target with multi-level structure
CN111476729B (en) * 2020-03-31 2023-06-09 北京三快在线科技有限公司 Target identification method and device
CN113727176B (en) * 2021-08-30 2023-05-16 杭州国芯科技股份有限公司 Video motion subtitle detection method
CN117710235B (en) * 2024-02-06 2024-05-14 浙江华感科技有限公司 Image target enhancement method, device, computer equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998033323A1 (en) * 1997-01-29 1998-07-30 Levent Onural Rule-based moving object segmentation
US6870945B2 (en) * 2001-06-04 2005-03-22 University Of Washington Video object tracking by estimating and subtracting background
US7916944B2 (en) * 2007-01-31 2011-03-29 Fuji Xerox Co., Ltd. System and method for feature level foreground segmentation
CN104392468A (en) * 2014-11-21 2015-03-04 南京理工大学 Improved visual background extraction based movement target detection method
CN105160689A (en) * 2015-07-22 2015-12-16 南通大学 Motion target detecting method in rainy and snowy weather
CN106157332A (en) * 2016-07-07 2016-11-23 合肥工业大学 A kind of motion inspection optimization method based on ViBe algorithm
CN106910203A (en) * 2016-11-28 2017-06-30 江苏东大金智信息系统有限公司 The method for quick of moving target in a kind of video surveillance

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
视频序列中运动目标检测技术研究 (Research on moving object detection technology in video sequences); 李莉; China Excellent Master's Theses Full-text Database, Information Science and Technology series; 2009-11-15; full text *
高饱和交叉口背景提取与更新算法 (Background extraction and updating algorithm for highly saturated intersections); 李熙莹 et al.; Acta Scientiarum Naturalium Universitatis Sunyatseni (Journal of Sun Yat-sen University, Natural Science Edition); 2012-06-30; full text *

Also Published As

Publication number Publication date
CN107578424A (en) 2018-01-12

Similar Documents

Publication Publication Date Title
CN107578424B (en) Dynamic background difference detection method, system and device based on space-time classification
CN109035304B (en) Target tracking method, medium, computing device and apparatus
CN109829398B (en) Target detection method in video based on three-dimensional convolution network
US10311595B2 (en) Image processing device and its control method, imaging apparatus, and storage medium
CN109636771B (en) Flight target detection method and system based on image processing
CN106023257B (en) A kind of method for tracking target based on rotor wing unmanned aerial vehicle platform
CN105404884B (en) Image analysis method
US9904868B2 (en) Visual attention detector and visual attention detection method
CN110910421B (en) Weak and small moving object detection method based on block characterization and variable neighborhood clustering
JP7272024B2 (en) Object tracking device, monitoring system and object tracking method
CN109063549B (en) High-resolution aerial video moving target detection method based on deep neural network
WO2004095358A1 (en) Human figure contour outlining in images
CN111383244B (en) Target detection tracking method
CN111340749B (en) Image quality detection method, device, equipment and storage medium
CN110647836B (en) Robust single-target tracking method based on deep learning
CN111274964B (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
JP6116765B1 (en) Object detection apparatus and object detection method
CN112258403A (en) Method for extracting suspected smoke area from dynamic smoke
Wang et al. Combining semantic scene priors and haze removal for single image depth estimation
CN115861352A (en) Monocular vision, IMU and laser radar data fusion and edge extraction method
CN108985216B (en) Pedestrian head detection method based on multivariate logistic regression feature fusion
CN115294358A (en) Feature point extraction method and device, computer equipment and readable storage medium
Zhou et al. On contrast combinations for visual saliency detection
CN110910332B (en) Visual SLAM system dynamic fuzzy processing method
Zhao et al. An improved VIBE algorithm for fast suppression of ghosts and static objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
OL01 Intention to license declared