
CN104809719A - Virtual view synthesis method based on homographic matrix partition - Google Patents


Info

Publication number
CN104809719A
CN104809719A
Authority
CN
China
Prior art keywords
matrix
visual angle
camera
homography
viewpoint
Prior art date
Legal status
Granted
Application number
CN201510152377.3A
Other languages
Chinese (zh)
Other versions
CN104809719B (en)
Inventor
冯颖
张欣
杜娟
陈新开
苏比哈什·如凯迦
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201510152377.3A priority Critical patent/CN104809719B/en
Publication of CN104809719A publication Critical patent/CN104809719A/en
Application granted granted Critical
Publication of CN104809719B publication Critical patent/CN104809719B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a virtual view synthesis method based on homographic matrix partition, comprising the following steps: 1) calibrate the left and right adjacent-view cameras to obtain their intrinsic matrices and the fundamental matrix between them, derive the essential matrix from the fundamental matrix, perform singular value decomposition on the essential matrix, and compute the motion parameters between the two cameras, namely the rotation matrix and the translation vector; 2) perform interpolation partition on the rotation matrix and the translation vector to obtain the sub-homography matrices from the left and right adjacent views to the middle virtual view; 3) apply forward mapping to map the two view images to the middle virtual view through the sub-homography matrices, take the mapped image of one of them as the reference coordinate system, and fuse the two mapped images by interpolation to synthesize the middle virtual view image. The method has the advantages of high synthesis speed, a simple and effective process, and high practical engineering value.

Description

Virtual view synthesis method based on homography matrix partition
Technical field
The present invention relates to a method for virtual view synthesis, in particular to a virtual view synthesis method based on homography matrix partition, and belongs to the technical field of image processing.
Background art
Virtual view synthesis refers to using a computer to synthesize viewpoint images that do not originally exist. Virtual view interpolation takes multiple images captured simultaneously by a camera array arranged in advance according to a given rule, and applies a viewpoint interpolation technique to compute the intermediate image sequence between each pair of adjacent views.
In recent years, with the deep intersection of disciplines such as computer vision, image processing, and computer graphics, virtual view synthesis has been widely applied in projects such as three-dimensional reconstruction and panoramic stitching. Viewpoint interpolation is widely used in virtual reality, special-effects photography, three-dimensional television, and animation. Successful examples include the "bullet time" effect applied at the Olympic Games and Apple's QuickTime VR virtual reality system.
Virtual view interpolation has developed for more than twenty years; classical methods include the offset-vector-based interpolation method of Apple researcher Chen and the image-interpolation-based method of Professor Seitz. In 2004, Microsoft researcher Zitnick proposed a new image-based-rendering video view synthesis system: the scheme acquires synchronized video streams offline, obtains the structural parameters of the camera array through a color-segmentation-based stereo algorithm to establish high-quality image correspondences, automatically extracts mattes at depth discontinuities during video synthesis to reduce errors, and finally achieves interactive video synthesis. The advantage of the scheme is the quality of the synthesized video, but its speed makes real-time operation hard to guarantee. In 2008, Farin et al. proposed a virtual view interpolation technique for arbitrarily arranged arrays constrained by baseline length. Guillemaut et al. proposed an optimized viewpoint interpolation method based on image segmentation, performing segmentation-based reconstruction of complex moving scenes captured by moving cameras of different resolutions.
At present, existing algorithms that synthesize virtual images by view interpolation are relatively complex, have poor real-time performance, and cannot guarantee stability; some algorithms require special hardware support and are difficult to implement in engineering, so their practical engineering value is not high.
Summary of the invention
The object of the present invention is to overcome the above defects of the prior art by providing a virtual view synthesis method based on homography matrix partition; the method has the advantages of high synthesis speed, a simple and effective process, and high practical engineering value.
The object of the present invention can be achieved by the following technical scheme:
A virtual view synthesis method based on homography matrix partition, comprising the following steps:
1) Calibrate the left and right adjacent-view cameras with a standard-size checkerboard to obtain their intrinsic matrices K_l and K_r and the fundamental matrix F between them; derive the essential matrix E from F, perform singular value decomposition (SVD) on E, and compute the motion parameters between the two cameras: the rotation matrix R and the translation vector T;
2) Perform interpolation partition on the rotation matrix R and the translation vector T to obtain the sub-homography matrices H_li and H_ri that map the left and right adjacent views to a given position (the intermediate virtual viewpoint) between them;
3) Use the sub-homography matrices H_li and H_ri to map the left and right view images respectively; take the mapped image of one view as the reference coordinate system of the composite image and fuse the two mapped images by bilinear interpolation, thereby synthesizing the scene image at each position along the motion.
As a preferred scheme, calibrating the left and right adjacent-view cameras with a standard-size checkerboard in step 1) to obtain their intrinsic matrices K_l and K_r and the fundamental matrix F between them proceeds as follows:
1.1) Aim the cameras at the checkerboard and take at least three groups of checkerboard photos;
1.2) Using the OpenCV calibration functions or the MATLAB calibration toolbox on the captured groups of checkerboard photos, calibrate the intrinsic matrices K_l and K_r and the fundamental matrix F of the left and right adjacent-view cameras.
As a preferred scheme, deriving the essential matrix E from the fundamental matrix F in step 1), performing singular value decomposition on E, and computing the motion parameters between the two cameras (the rotation matrix R and the translation vector T) proceeds as follows:
1.3) Derive the essential matrix E from the fundamental matrix F:
E = K_r^T F K_l    (1)
where F is the fundamental matrix between the left and right adjacent-view cameras, K_l is the intrinsic matrix of the left camera, and K_r is the intrinsic matrix of the right camera;
1.4) Let the translation vector be T = [t_1, t_2, t_3]^T; its skew-symmetric matrix is:
        [  0    -t_3   t_2 ]
[T]_x = [  t_3    0   -t_1 ]    (2)
        [ -t_2   t_1    0  ]
1.5) The essential matrix E is expressed in terms of the rotation matrix R and the translation vector T as:
E = [T]_x R    (3)
1.6) Perform singular value decomposition on the essential matrix E to obtain the rotation matrix R and the translation vector T.
As a preferred scheme, performing interpolation partition on the rotation matrix R and the translation vector T in step 2) to obtain the sub-homography matrices H_li and H_ri that map the left and right adjacent views to a given position between them proceeds as follows:
2.1) Given a calibrated system with projection matrices P = K[I|0] and P' = K'[R|t], the corresponding homography matrix H is:
H = K'(R - t n^T/d) K^{-1}    (4)
where K and K' are the intrinsic matrices of the first and second view cameras respectively, n is the unit normal vector of the reference plane, d is the depth of the external reference plane, and the external plane vector is v = n/d;
2.2) From the H matrix and 3 pairs of non-collinear corresponding points (X_i, X'_i) on the external plane, derive the external plane vector v.
From the homography matrix H:
X'_i = H X_i    (5)
From formula (4):
X'_i = [K'(R - t n^T/d) K^{-1}] X_i    (6)
The vectors X'_i and [K'(R - t n^T/d) K^{-1}] X_i are collinear, hence:
X'_i × [K'(R - t v^T) K^{-1}] X_i = 0    (7)
Writing formula (7) simultaneously for the 3 pairs of non-collinear corresponding points on the external plane and solving yields the external plane vector v;
2.3) Construct the homography matrices that map the two views to a given position along the camera motion, called sub-homography matrices; they are constructed by interpolating the rotation matrix R and the translation vector T. Define view 1 as the start of the camera motion and view 2 as its end: the camera takes the first image at view 1, moves to view 2, and takes the second image; the total motion from start to end is denoted (R(θ), t);
2.4) To synthesize the virtual image at some viewpoint i along the motion, compute, from the camera's motion, the motion (R(θ_i), t_i) from view 1 to viewpoint i and the motion (R(θ-θ_i), t-t_i) from view 2 to viewpoint i; from these motion parameters and formula (4), the sub-homography matrices are obtained as:
H_li = K'(R(θ_i) - t_i v_1^T) K^{-1}    (8)
H_ri = K(R(θ-θ_i) - (t - t_i) v_2^T) K'^{-1}    (9)
where H_li is the sub-homography matrix from view 1 to viewpoint i, H_ri is the sub-homography matrix from view 2 to viewpoint i, K and K' are the intrinsic matrices of the first and second view cameras respectively, v_1 is the reference plane vector of view 1, and v_2 is the reference plane vector of view 2.
As a preferred scheme, synthesizing the scene image at each position along the motion in step 3) proceeds as follows:
3.1) Using the sub-homography matrices H_li and H_ri, compute sub-image 1, the mapping of view 1 to viewpoint i, and sub-image 2, the mapping of view 2 to viewpoint i;
3.2) Perform additive fusion of sub-image 1 and sub-image 2 to obtain the virtual viewpoint image at viewpoint i.
Compared with the prior art, the present invention has the following beneficial effects:
1. The method improves the real-time performance of the system by calibrating the intrinsic matrices of the adjacent-view cameras and the fundamental matrix between them offline; reconstruction is fast and robustness is high.
2. The method is comparatively simple and effective, requires no special hardware support, and has very high practical engineering value.
Brief description of the drawings
Fig. 1 is the flowchart of the virtual view synthesis method based on homography matrix partition of Embodiment 1 of the present invention;
Fig. 2 is the schematic diagram of the homography transform between two adjacent images in Embodiment 1 of the present invention;
Fig. 3 is the schematic diagram of solving for the external plane vector in Embodiment 1 of the present invention;
Fig. 4 is the schematic diagram of view transformation and motion partition in Embodiment 1 of the present invention;
Fig. 5 is the schematic diagram of the bilinear interpolation principle in Embodiment 1 of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below in conjunction with the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment 1:
As shown in Fig. 1, the virtual view synthesis method based on homography matrix partition of this embodiment comprises the following steps:
1) Calibrate the left and right adjacent-view cameras with a standard-size checkerboard to obtain their intrinsic matrices K_l and K_r and the fundamental matrix F between them; derive the essential matrix E from F, perform singular value decomposition (SVD) on E, and compute the motion parameters between the two cameras, the rotation matrix R and the translation vector T, as follows:
1.1) Aim the cameras at the checkerboard and take at least three groups of checkerboard photos;
1.2) Using the OpenCV calibration functions or the MATLAB calibration toolbox on the captured groups of checkerboard photos, calibrate the intrinsic matrices K_l and K_r and the fundamental matrix F of the left and right adjacent-view cameras.
1.3) Derive the essential matrix E from the fundamental matrix F:
E = K_r^T F K_l    (1)
where F is the fundamental matrix between the left and right adjacent-view cameras, K_l is the intrinsic matrix of the left camera, and K_r is the intrinsic matrix of the right camera;
1.4) Let the translation vector be T = [t_1, t_2, t_3]^T; its skew-symmetric matrix is:
        [  0    -t_3   t_2 ]
[T]_x = [  t_3    0   -t_1 ]    (2)
        [ -t_2   t_1    0  ]
1.5) The essential matrix E is expressed in terms of the rotation matrix R and the translation vector T (the essential matrix equals the skew-symmetric matrix multiplied by the rotation matrix):
E = [T]_x R    (3)
1.6) Perform singular value decomposition on the essential matrix E to obtain the rotation matrix R and the translation vector T.
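As an illustrative sketch (not the patent's code), steps 1.3) to 1.6) can be expressed in NumPy as follows; the function names are assumptions, and the cheirality test that selects among the four (R, T) candidates produced by the SVD is noted but omitted:

```python
import numpy as np

def essential_from_fundamental(F, K_l, K_r):
    """Formula (1): E = K_r^T F K_l."""
    return K_r.T @ F @ K_l

def skew(t):
    """Formula (2): skew-symmetric matrix [T]_x of T = [t_1, t_2, t_3]."""
    t1, t2, t3 = t
    return np.array([[0.0, -t3,  t2],
                     [ t3, 0.0, -t1],
                     [-t2,  t1, 0.0]])

def decompose_essential(E):
    """Step 1.6): recover one (R, T) candidate from E = [T]_x R by SVD.
    Four candidates exist in general; the cheirality check that keeps the
    one placing scene points in front of both cameras is omitted here."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:   # enforce proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R = U @ W @ Vt             # rotation candidate
    T = U[:, 2]                # translation direction, up to scale
    return R, T
```

The translation is recoverable only up to scale from E, which is why the patent's interpolation in step 2) splits the same (R, T) rather than requiring metric units.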
2) Perform interpolation partition on the rotation matrix R and the translation vector T to obtain the sub-homography matrices H_li and H_ri that map the left and right adjacent views to a given position between them (i.e. the intermediate virtual viewpoint), as follows:
2.1) The homography transform between two adjacent images is shown in Fig. 2. Given a calibrated system with projection matrices P = K[I|0] and P' = K'[R|t], the corresponding homography matrix H satisfies:
m' = H m
H = K'(R - t n^T/d) K^{-1}    (4)
where K and K' are the intrinsic matrices of the first and second view cameras respectively, n is the unit normal vector of the reference plane, d is the depth of the external reference plane, the external plane vector is v = n/d, m is a point in the left reference plane in the figure, and m' is the corresponding point in the right reference plane;
2.2) The solution for the external plane vector is illustrated in Fig. 3. From the H matrix and 3 non-collinear points X_1, X_2, X_3 on the external plane, derive the external plane vector v.
From the homography matrix H:
X_12 = H X_11    (5)
From formula (4):
X_12 = [K'(R - t n^T/d) K^{-1}] X_11    (6)
The vectors X_12 and [K'(R - t n^T/d) K^{-1}] X_11 are collinear, hence:
X_12 × [K'(R - t v^T) K^{-1}] X_11 = 0    (7)
Similarly, X_2 and X_3 each yield one such equation, and solving the three simultaneous equations gives the external plane vector v; thus, from 3 pairs of non-collinear corresponding points on the external plane, the simultaneous equations determine v;
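The linear solve behind formulas (5) to (7) can be sketched as follows (an illustration under assumed synthetic data, not the patent's code): writing y_i = K^{-1} X_i for an image point X_i, one has H X_i = K'R y_i - (K't)(v·y_i), so each correspondence yields one scalar s_i = v·y_i, and three non-collinear points give a solvable linear system in v. The function name is an assumption:

```python
import numpy as np

def plane_vector_from_points(xs, xps, K, K_p, R, t):
    """Recover the external plane vector v = n/d from >= 3 pairs of
    non-collinear corresponding points (X_i, X'_i) via formula (7):
    X'_i x [K'(R - t v^T) K^{-1}] X_i = 0."""
    K_inv = np.linalg.inv(K)
    Y, s = [], []
    for x, xp in zip(xs, xps):
        y = K_inv @ x
        a = K_p @ R @ y          # known part of H X_i
        b = K_p @ t              # coefficient of the unknown scalar v . y
        ca = np.cross(xp, a)     # X'_i x a  =  s_i (X'_i x b)
        cb = np.cross(xp, b)
        s.append(ca @ cb / (cb @ cb))  # per-point least-squares scalar
        Y.append(y)
    # Stack the scalar equations v . y_i = s_i and solve for v
    v, *_ = np.linalg.lstsq(np.array(Y), np.array(s), rcond=None)
    return v
```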
2.3) View transformation and motion partition are shown in Fig. 4. Construct the homography matrices that map the two views to a given position along the camera motion, called sub-homography matrices; they are constructed by interpolating the rotation matrix R and the translation vector T. Define view 1 as the start of the camera motion and view 2 as its end: the camera takes the first image at view 1, moves to view 2, and takes the second image; the total motion from start to end is denoted (R(θ), t);
2.4) To synthesize the virtual image at some viewpoint i along the motion, compute, from the camera's motion, the motion (R(θ_i), t_i) from view 1 to viewpoint i and the motion (R(θ-θ_i), t-t_i) from view 2 to viewpoint i; from these motion parameters and formula (4), the sub-homography matrices are obtained as:
H_li = K'(R(θ_i) - t_i v_1^T) K^{-1}    (8)
H_ri = K(R(θ-θ_i) - (t - t_i) v_2^T) K'^{-1}    (9)
where H_li is the sub-homography matrix from view 1 to viewpoint i, H_ri is the sub-homography matrix from view 2 to viewpoint i, K and K' are the intrinsic matrices of the first and second view cameras respectively, v_1 is the reference plane vector of view 1, and v_2 is the reference plane vector of view 2.
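Assuming, purely for illustration, a rotation about a single axis (so R(θ) is parameterized by one angle) and a linear split θ_i = α·θ, t_i = α·t, formulas (8) and (9) can be sketched as follows; `rotation_z` and `sub_homographies` are assumed names:

```python
import numpy as np

def rotation_z(theta):
    """R(theta): rotation about the z axis (one-parameter illustration)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s,  0.0],
                     [s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

def sub_homographies(K, K_p, theta, t, v1, v2, alpha):
    """Formulas (8)-(9): sub-homographies from view 1 and view 2 to the
    virtual viewpoint at fraction alpha of the motion (theta_i = alpha*theta,
    t_i = alpha*t). v1 and v2 are the reference plane vectors n/d."""
    theta_i = alpha * theta
    t_i = alpha * t
    H_li = K_p @ (rotation_z(theta_i) - np.outer(t_i, v1)) @ np.linalg.inv(K)
    H_ri = K @ (rotation_z(theta - theta_i) - np.outer(t - t_i, v2)) @ np.linalg.inv(K_p)
    return H_li, H_ri
```

At α = 0 the left sub-homography degenerates to K'K^{-1} (identity when K = K'), and at α = 1 the right one does, which matches the intuition that the virtual viewpoint coincides with view 1 at the start of the motion and with view 2 at its end.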
3) Use the sub-homography matrices H_li and H_ri to map the left and right view images respectively (applying the forward mapping technique); take the mapped image of one view as the reference coordinate system of the composite image and fuse the two mapped images by bilinear interpolation, synthesizing the scene image at each position along the motion (i.e. the intermediate virtual viewpoint image), as follows:
3.1) Using the sub-homography matrices H_li and H_ri from the adjacent views to the virtual viewpoint, map the left and right view images to the intermediate virtual viewpoint;
3.2) The principle of bilinear interpolation is shown in Fig. 5. To obtain the value at P = (X, Y) given the known values at Q_11, Q_12, Q_21 and Q_22, first interpolate in the X direction to obtain R_1 and R_2, then interpolate in the Y direction to obtain the value at P;
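Step 3.2) can be sketched in plain Python as follows (a minimal illustration; `bilinear` is an assumed name, and the nested-list image stands in for the neighborhood Q_11, Q_12, Q_21, Q_22 of Fig. 5):

```python
def bilinear(img, x, y):
    """Sample `img` (a list of rows of gray values) at the float position
    (x, y): interpolate along x at the two bracketing rows to obtain R_1
    and R_2, then interpolate between R_1 and R_2 along y."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    q11, q21 = img[y0][x0], img[y0][x0 + 1]          # row y0
    q12, q22 = img[y0 + 1][x0], img[y0 + 1][x0 + 1]  # row y0 + 1
    r1 = q11 * (1 - fx) + q21 * fx   # interpolate along x at y0
    r2 = q12 * (1 - fx) + q22 * fx   # interpolate along x at y0 + 1
    return r1 * (1 - fy) + r2 * fy   # interpolate along y
```

Border handling (clamping x0 + 1 and y0 + 1 to the image edge) is omitted for brevity.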
3.3) Because the positions obtained by multiplying by the matrices from view 1 or view 2 are floating-point numbers, inverse interpolation is adopted: for each pixel position of the target image, first find the corresponding floating-point position in the source image, then interpolate a new value from the integer-positioned surrounding source pixels; this yields the virtual images interpolated from view 1 and view 2 to the intermediate view;
3.4) Take the mapped image of one of the views as the reference coordinate system of the composite image, and fuse the virtual images interpolated from view 1 and view 2: take half of each pixel value in the public (overlapping) region and fuse them additively to obtain the intermediate virtual view image.
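The additive fusion of step 3.4) can be sketched as follows; the boolean masks marking where each warped sub-image has valid pixels are an assumption of this illustration (the patent does not specify how coverage is tracked):

```python
import numpy as np

def fuse(sub1, sub2, mask1, mask2):
    """Additive fusion (step 3.4): where both warped sub-images cover a
    pixel (the public region), take half of each; where only one covers
    it, take that one."""
    out = np.zeros_like(sub1, dtype=float)
    both = mask1 & mask2
    out[both] = 0.5 * sub1[both] + 0.5 * sub2[both]
    out[mask1 & ~mask2] = sub1[mask1 & ~mask2]
    out[mask2 & ~mask1] = sub2[mask2 & ~mask1]
    return out
```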
In summary, the method of the present invention improves the real-time performance of the system by calibrating the intrinsic matrices of the adjacent-view cameras and the fundamental matrix between them offline, and has the advantages of high synthesis speed, a simple and effective process, and high practical engineering value.
The above is only a preferred embodiment of the present patent, but the protection scope of the present patent is not limited thereto; for example, the checkerboard may be replaced with any other independently identifiable object. Any equivalent replacement or change made by anyone familiar with the art, within the scope disclosed by the present patent and according to the technical scheme and inventive concept of the present patent, falls within the protection scope of the present patent.

Claims (5)

1. A virtual view synthesis method based on homography matrix partition, characterized in that the method comprises the following steps:
1) Calibrate the left and right adjacent-view cameras with a standard-size checkerboard to obtain their intrinsic matrices K_l and K_r and the fundamental matrix F between them; derive the essential matrix E from F, perform singular value decomposition (SVD) on E, and compute the motion parameters between the two cameras: the rotation matrix R and the translation vector T;
2) Perform interpolation partition on the rotation matrix R and the translation vector T to obtain the sub-homography matrices H_li and H_ri that map the left and right adjacent views to a given position between them;
3) Use the sub-homography matrices H_li and H_ri to map the left and right view images respectively; take the mapped image of one view as the reference coordinate system of the composite image and fuse the two mapped images by bilinear interpolation, synthesizing the scene image at each position along the motion.
2. The virtual view synthesis method based on homography matrix partition according to claim 1, characterized in that calibrating the left and right adjacent-view cameras with a standard-size checkerboard in step 1) to obtain their intrinsic matrices K_l and K_r and the fundamental matrix F between them proceeds as follows:
1.1) Aim the cameras at the checkerboard and take at least three groups of checkerboard photos;
1.2) Using the OpenCV calibration functions or the MATLAB calibration toolbox on the captured groups of checkerboard photos, calibrate the intrinsic matrices K_l and K_r and the fundamental matrix F of the left and right adjacent-view cameras.
3. The virtual view synthesis method based on homography matrix partition according to claim 1, characterized in that deriving the essential matrix E from the fundamental matrix F in step 1), performing singular value decomposition on E, and computing the motion parameters between the two cameras (the rotation matrix R and the translation vector T) proceeds as follows:
1.3) Derive the essential matrix E from the fundamental matrix F:
E = K_r^T F K_l    (1)
where F is the fundamental matrix between the left and right adjacent-view cameras, K_l is the intrinsic matrix of the left camera, and K_r is the intrinsic matrix of the right camera;
1.4) Let the translation vector be T = [t_1, t_2, t_3]^T; its skew-symmetric matrix is:
        [  0    -t_3   t_2 ]
[T]_x = [  t_3    0   -t_1 ]    (2)
        [ -t_2   t_1    0  ]
1.5) The essential matrix E is expressed in terms of the rotation matrix R and the translation vector T as:
E = [T]_x R    (3)
1.6) Perform singular value decomposition on the essential matrix E to obtain the rotation matrix R and the translation vector T.
4. The virtual view synthesis method based on homography matrix partition according to claim 1, characterized in that performing interpolation partition on the rotation matrix R and the translation vector T in step 2) to obtain the sub-homography matrices H_li and H_ri that map the left and right adjacent views to a given position between them proceeds as follows:
2.1) Given a calibrated system with projection matrices P = K[I|0] and P' = K'[R|t], the corresponding homography matrix H is:
H = K'(R - t n^T/d) K^{-1}    (4)
where K and K' are the intrinsic matrices of the first and second view cameras respectively, n is the unit normal vector of the reference plane, d is the depth of the external reference plane, and the external plane vector is v = n/d;
2.2) From the H matrix and 3 pairs of non-collinear corresponding points (X_i, X'_i) on the external plane, derive the external plane vector v.
From the homography matrix H:
X'_i = H X_i    (5)
From formula (4):
X'_i = [K'(R - t n^T/d) K^{-1}] X_i    (6)
The vectors X'_i and [K'(R - t n^T/d) K^{-1}] X_i are collinear, hence:
X'_i × [K'(R - t v^T) K^{-1}] X_i = 0    (7)
Writing formula (7) simultaneously for the 3 pairs of non-collinear corresponding points on the external plane and solving yields the external plane vector v;
2.3) Construct the homography matrices that map the two views to a given position along the camera motion, called sub-homography matrices; they are constructed by interpolating the rotation matrix R and the translation vector T. Define view 1 as the start of the camera motion and view 2 as its end: the camera takes the first image at view 1, moves to view 2, and takes the second image; the total motion from start to end is denoted (R(θ), t);
2.4) To synthesize the virtual image at some viewpoint i along the motion, compute, from the camera's motion, the motion (R(θ_i), t_i) from view 1 to viewpoint i and the motion (R(θ-θ_i), t-t_i) from view 2 to viewpoint i; from these motion parameters and formula (4), the sub-homography matrices are obtained as:
H_li = K'(R(θ_i) - t_i v_1^T) K^{-1}    (8)
H_ri = K(R(θ-θ_i) - (t - t_i) v_2^T) K'^{-1}    (9)
where H_li is the sub-homography matrix from view 1 to viewpoint i, H_ri is the sub-homography matrix from view 2 to viewpoint i, K and K' are the intrinsic matrices of the first and second view cameras respectively, v_1 is the reference plane vector of view 1, and v_2 is the reference plane vector of view 2.
5. The virtual view synthesis method based on homography matrix partition according to claim 1, characterized in that synthesizing the scene image at each position along the motion in step 3) proceeds as follows:
3.1) Using the sub-homography matrices H_li and H_ri, compute sub-image 1, the mapping of view 1 to viewpoint i, and sub-image 2, the mapping of view 2 to viewpoint i;
3.2) Perform additive fusion of sub-image 1 and sub-image 2 to obtain the virtual viewpoint image at viewpoint i.
CN201510152377.3A 2015-04-01 2015-04-01 The method of virtual view synthesis based on homography matrix segmentation Active CN104809719B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510152377.3A CN104809719B (en) 2015-04-01 2015-04-01 The method of virtual view synthesis based on homography matrix segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510152377.3A CN104809719B (en) 2015-04-01 2015-04-01 The method of virtual view synthesis based on homography matrix segmentation

Publications (2)

Publication Number Publication Date
CN104809719A true CN104809719A (en) 2015-07-29
CN104809719B CN104809719B (en) 2018-01-05

Family

ID=53694523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510152377.3A Active CN104809719B (en) 2015-04-01 2015-04-01 The method of virtual view synthesis based on homography matrix segmentation

Country Status (1)

Country Link
CN (1) CN104809719B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105262958A (en) * 2015-10-15 2016-01-20 电子科技大学 Panoramic feature splicing system with virtual viewpoint and method thereof
CN105955311A (en) * 2016-05-11 2016-09-21 阔地教育科技有限公司 Tracking control method, tracking control device and tracking control system
CN106060509A (en) * 2016-05-19 2016-10-26 西安电子科技大学 Free viewpoint image synthetic method introducing color correction
CN106204731A (en) * 2016-07-18 2016-12-07 华南理工大学 A kind of multi-view angle three-dimensional method for reconstructing based on Binocular Stereo Vision System
CN106600667A (en) * 2016-12-12 2017-04-26 南京大学 Method for driving face animation with video based on convolution neural network
CN106648109A (en) * 2016-12-30 2017-05-10 南京大学 Real scene real-time virtual wandering system based on three-perspective transformation
CN107808403A (en) * 2017-11-21 2018-03-16 韶关学院 A kind of camera calibration method based on sparse dictionary
CN109443245A (en) * 2018-11-09 2019-03-08 扬州市职业大学 A kind of multi-line structured light vision measuring method based on homography matrix
CN110360991A (en) * 2019-06-18 2019-10-22 武汉中观自动化科技有限公司 A kind of photogrammetric survey method, device and storage medium
CN110874818A (en) * 2018-08-31 2020-03-10 阿里巴巴集团控股有限公司 Image processing and virtual space construction method, device, system and storage medium
CN113538316A (en) * 2021-08-24 2021-10-22 北京奇艺世纪科技有限公司 Image processing method, image processing device, terminal device and readable storage medium
CN114401391A (en) * 2021-12-09 2022-04-26 北京邮电大学 Virtual viewpoint generation method and device
CN115578296A (en) * 2022-12-06 2023-01-06 南京诺源医疗器械有限公司 Stereo video processing method
CN116193158A (en) * 2023-01-13 2023-05-30 北京达佳互联信息技术有限公司 Bullet time video generation method and device, electronic equipment and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
KR100943719B1 (en) * 2008-11-04 2010-02-23 광주과학기술원 System and apparatus of geometrical compensation for multi-view video
CN103269438A (en) * 2013-05-27 2013-08-28 中山大学 Method for drawing depth image on the basis of 3D video and free-viewpoint television
CN103345736A (en) * 2013-05-28 2013-10-09 天津大学 Virtual viewpoint rendering method

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105262958A (en) * 2015-10-15 2016-01-20 University of Electronic Science and Technology of China Panoramic feature splicing system with virtual viewpoint and method thereof
CN105262958B (en) * 2015-10-15 2018-08-21 University of Electronic Science and Technology of China Panoramic feature splicing system with virtual viewpoint and method thereof
CN105955311A (en) * 2016-05-11 2016-09-21 Codyy Education Technology Co., Ltd. Tracking control method, tracking control device and tracking control system
CN106060509B (en) * 2016-05-19 2018-03-13 Xidian University Free viewpoint image synthesis method introducing color correction
CN106060509A (en) * 2016-05-19 2016-10-26 Xidian University Free viewpoint image synthesis method introducing color correction
CN106204731A (en) * 2016-07-18 2016-12-07 South China University of Technology Multi-view three-dimensional reconstruction method based on a binocular stereo vision system
CN106600667A (en) * 2016-12-12 2017-04-26 Nanjing University Video-driven face animation method based on convolutional neural network
CN106600667B (en) * 2016-12-12 2020-04-21 Nanjing University Video-driven face animation method based on convolutional neural network
CN106648109A (en) * 2016-12-30 2017-05-10 Nanjing University Real-scene real-time virtual roaming system based on three-perspective transformation
CN107808403A (en) * 2017-11-21 2018-03-16 Shaoguan University Camera calibration method based on sparse dictionary
CN110874818B (en) * 2018-08-31 2023-06-23 Alibaba Group Holding Ltd. Image processing and virtual space construction method, device, system and storage medium
CN110874818A (en) * 2018-08-31 2020-03-10 Alibaba Group Holding Ltd. Image processing and virtual space construction method, device, system and storage medium
CN109443245A (en) * 2018-11-09 2019-03-08 Yangzhou Polytechnic College Multi-line structured-light vision measurement method based on homography matrix
CN110360991A (en) * 2019-06-18 2019-10-22 Wuhan Zhongguan Automation Technology Co., Ltd. Photogrammetry method, device and storage medium
CN113538316A (en) * 2021-08-24 2021-10-22 Beijing QIYI Century Science & Technology Co., Ltd. Image processing method, image processing device, terminal device and readable storage medium
CN113538316B (en) * 2021-08-24 2023-08-22 Beijing QIYI Century Science & Technology Co., Ltd. Image processing method, image processing device, terminal device and readable storage medium
CN114401391A (en) * 2021-12-09 2022-04-26 Beijing University of Posts and Telecommunications Virtual viewpoint generation method and device
CN114401391B (en) * 2021-12-09 2023-01-06 Beijing University of Posts and Telecommunications Virtual viewpoint generation method and device
CN115578296A (en) * 2022-12-06 2023-01-06 Nanjing Nuoyuan Medical Devices Co., Ltd. Stereo video processing method
CN115578296B (en) * 2022-12-06 2023-03-10 Nanjing Nuoyuan Medical Devices Co., Ltd. Stereo video processing method
CN116193158A (en) * 2023-01-13 2023-05-30 Beijing Dajia Internet Information Technology Co., Ltd. Bullet-time video generation method and device, electronic device and storage medium

Also Published As

Publication number Publication date
CN104809719B (en) 2018-01-05

Similar Documents

Publication Publication Date Title
CN104809719A (en) Virtual view synthesis method based on homographic matrix partition
US11350073B2 (en) Disparity image stitching and visualization method based on multiple pairs of binocular cameras
CN101276465B (en) Method for automatic stitching of wide-angle images
JP6273163B2 (en) Stereoscopic panorama
CN104301677B (en) Panoramic video surveillance method and device for large scenes
CN104661010B (en) Method and device for establishing three-dimensional model
CN101976455B (en) Color image three-dimensional reconstruction method based on three-dimensional matching
CN102968809B (en) Method for realizing virtual information marking and drawing marker lines in the augmented reality field
CN103971375B (en) Spatial calibration method for a panoramic staring camera based on image stitching
CN102984453A (en) Method and system for real-time generation of hemispherical panoramic video images with a single camera
CN104506828B (en) Real-time stitching method for fixed-point, fixed-orientation video without effective overlap and with variable structure
US11812009B2 (en) Generating virtual reality content via light fields
CN112215880B (en) Image depth estimation method and device, electronic equipment and storage medium
CN106997579A (en) Image stitching method and apparatus
CN104618648A (en) Panoramic video splicing system and splicing method
CN107451952A (en) Panoramic video stitching and fusion method, device and system
CN106170086B (en) Method, device and system for rendering three-dimensional images
CN101916455A (en) Method and device for reconstructing three-dimensional model of high dynamic range texture
CN101754042A (en) Image reconstruction method and image reconstruction system
TWI820246B (en) Apparatus, method and computer program product for estimating disparity from a wide-angle image
CN105979241B (en) Fast inverse transform method for cylindrical three-dimensional panoramic video
CN117456124B (en) Dense SLAM method based on back-to-back binocular fisheye camera
CN101383051B (en) View synthesizing method based on image re-projection
CN109272445B (en) Panoramic video stitching method based on spherical model
CN114004773A (en) Monocular multi-view video synthesis method based on deep learning and reverse mapping

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant