
CN110581987A - Three-dimensional display with gesture sensing function - Google Patents

Three-dimensional display with gesture sensing function

Info

Publication number
CN110581987A
CN110581987A (application number CN201810578401.3A)
Authority
CN
China
Prior art keywords
value
screen
gesture
depth
centroid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810578401.3A
Other languages
Chinese (zh)
Inventor
孙嘉余
郭峻廷
黄昭世
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Acer Inc
Original Assignee
Acer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Acer Inc filed Critical Acer Inc
Priority to CN201810578401.3A priority Critical patent/CN110581987A/en
Publication of CN110581987A publication Critical patent/CN110581987A/en
Withdrawn legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses a three-dimensional display comprising a screen, a depth-of-field detection unit, and a processing circuit. The depth-of-field detection unit comprises a plurality of groups of infrared sensors. The processing circuit receives optical signals from the groups of infrared sensors to provide data on their scanning areas, determines whether a gesture is detected according to that data, calculates the position of one or more centroids of the gesture, recognizes a change in distance between the gesture and the screen from the movement information of the one or more centroids, and instructs the screen to adjust, in a manner corresponding to the distance change, the visual distance between a displayed three-dimensional object and the screen, the size of the three-dimensional object, and the depth of field of the three-dimensional object. The invention thus provides a three-dimensional display with gesture sensing and depth-of-field adjustment functions that is low in cost, low in power consumption, and small in size.

Description

Three-dimensional display with gesture sensing function
Technical Field
The present invention relates to a three-dimensional display with gesture sensing and depth-of-field adjustment functions, and more particularly to such a display that is low in cost, low in power consumption, and small in size.
Background
Current remote sensing technologies use non-contact measurement and imaging; common approaches include microwave, acoustic wave, infrared, laser, and stereo vision, most of which rely on triangulation. Although the concept of interaction between three-dimensional (3D) displays and gesture sensing has been proposed for a long time, it has not been easy to implement. One of the main reasons is that camera-based gesture sensing components are bulky, consume a lot of power, and are often expensive, making them unsuitable for installation on a typical notebook computer, desktop computer, or portable electronic device. Moreover, the effects of the three-dimensional display (such as the depth of field) cannot change along with the gesture interaction, so the display effect appears unnatural during gesture interaction.
Therefore, a three-dimensional display with gesture sensing and depth-of-field adjustment functions that is low in cost, low in power consumption, and small in size is needed.
Disclosure of Invention
In view of the above problems in the prior art, an object of the present invention is to provide a three-dimensional display with gesture sensing and depth-of-field adjustment functions that is low in cost, low in power consumption, and small in size.
In order to achieve the above object, the present invention discloses a three-dimensional display with gesture sensing function, which includes a screen, a depth-of-field detecting unit, and a processing circuit. The screen is used for displaying a three-dimensional object. The depth of field detection unit comprises a plurality of groups of infrared sensors. The processing circuit is used for receiving optical signals of the multiple groups of infrared sensors and further providing data obtained by scanning areas of the multiple groups of infrared sensors; judging whether a gesture is detected according to data obtained by scanning areas of the multiple groups of infrared sensors; calculating locations of one or more centroids of the gesture; recognizing a distance change between the gesture and the screen according to the movement information of the one or more centroids; and instructing the screen to adjust at least one of a visual distance between the three-dimensional object and the screen, a size of the three-dimensional object, and a depth of field of the three-dimensional object in a manner corresponding to the distance change according to the distance change.
Drawings
FIG. 1 is a block diagram of a three-dimensional display with gesture sensing function according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a three-dimensional display according to an embodiment of the invention.
Fig. 3 is a schematic diagram illustrating the operation of the depth-of-field detection unit according to an embodiment of the present invention.
Fig. 4A to 4C are schematic diagrams illustrating the operation of the depth-of-field detection unit according to the embodiment of the present invention.
Fig. 5A to 5C are schematic diagrams illustrating the operation of the depth-of-field detection unit according to the embodiment of the present invention.
Fig. 6A and 6B are schematic diagrams illustrating adjustment of a displayed image according to a pull-in gesture or a push-away gesture in an embodiment of the present invention.
Wherein the reference numerals are as follows:
10 depth of field detection unit
20 screen
30 processing circuit
40 cover body
50 bottom shell
80 palm
100 three-dimensional display
A~D scanning areas
S1, S2 arrows
P1~P4, Q1~Q4 centroid coordinates
SR1~SRM infrared sensors
d1~d4 visual distances
Depth1~Depth4 depths of field
L1~L4 left-eye images
R1~R4 right-eye images
Detailed Description
Fig. 1 is a functional block diagram of a three-dimensional display 100 with gesture sensing function according to an embodiment of the present invention. The three-dimensional display 100 includes a depth-of-field detection unit 10, a screen 20, and a processing circuit 30. The depth-of-field detection unit 10 includes a plurality of sets of infrared sensors (IR sensors) SR1~SRM, where M is an integer greater than 1. The processing circuit 30 can adjust the depth of field of the object displayed on the screen 20 according to the data obtained from the scanning areas of the depth-of-field detection unit 10.
In the present embodiment, the screen 20 of the three-dimensional display 100 includes a liquid crystal panel whose display image is divided into left-eye image pixels and right-eye image pixels; a parallax barrier or lenticular lenses in front of the liquid crystal panel project the left-eye image pixels and the right-eye image pixels to the left and right eyes respectively, producing a binocular parallax effect so that the viewer sees a stereoscopic image. The difference between the left-eye image and the right-eye image is called the depth, and the three-dimensional display 100 of this embodiment can adjust the depth by changing the rotation angle of the liquid crystal molecules through the upper and lower transparent electrodes disposed on the two sides of the liquid crystal panel. The larger the difference between the two eyes' images, the more pronounced the stereoscopic effect perceived by the viewer; the smaller the difference, the less pronounced the stereoscopic effect. In other embodiments, the screen 20 may be implemented using other suitable three-dimensional display technologies.
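For reference only (this relation is not given in the patent but follows from standard stereoscopic geometry), when a point is displayed with a crossed disparity p between its left-eye and right-eye image positions on the screen, a viewer at distance D from the screen with interocular separation e perceives the point at a distance of roughly z = p * D / (e + p) in front of the screen, which is why a larger image difference produces a more pronounced pop-out effect.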
In the embodiment, the processing circuit 30 may be implemented by a circuit component such as a processor or an application-specific integrated circuit (ASIC). However, the implementation of the processing circuit 30 does not limit the scope of the present invention.
In the present embodiment, the three-dimensional display 100 may be a device having a display function, such as a notebook computer, a desktop computer, or a television. The depth-of-field detection unit 10 is disposed below the effective display range of the screen 20, such that the infrared sensors SR1~SRM can detect a change in distance between a gesture and the screen 20. In one embodiment, the effective display range of the screen 20 can be determined by the viewing angle of the screen 20, i.e., the range within which the user can clearly see all of the displayed content on the screen 20 from different directions (according to predetermined image quality, contrast detail, brightness, and color variation). However, the size of the effective display range of the screen 20 does not limit the scope of the present invention.
Fig. 2 is a schematic diagram of a three-dimensional display 100 according to an embodiment of the invention. In this embodiment, the three-dimensional display 100 is a notebook computer, wherein the screen 20 is disposed on a cover 40, the depth-of-field detection unit 10 is disposed on a bottom housing 50, and the processing circuit 30 (not shown) is disposed in the bottom housing 50. The cover 40 is pivotally connected to the bottom housing 50 such that a user can adjust the angle between the cover 40 and the bottom housing 50. However, the type of the three-dimensional display 100 does not limit the scope of the present invention.
For illustrative purposes, Fig. 2 shows an embodiment where M is 4; however, the value of M does not limit the scope of the present invention. In the three-dimensional display 100 shown in Fig. 2, the cover 40 carrying the screen 20 is located on a first side of the bottom housing 50, and the infrared sensors SR1~SR4 of the depth-of-field detection unit 10 are located on a second side of the bottom housing 50, wherein the first side and the second side are two adjacent sides of the bottom housing 50.
The three-dimensional display 100 of the present embodiment employs a time-of-flight ranging technique to provide the depth-of-field adjustment function. An infrared sensor in the depth-of-field detection unit 10 emits an infrared beam; the beam strikes the surface of an object and is reflected, and the sensor then receives the returned signal and records the elapsed time. Since the speed of light is known, the round-trip time of the infrared signal can be converted into the distance traveled, from which the position of the object can be determined.
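As an illustration only (the patent does not specify an implementation, and the function name below is a hypothetical sketch), time-of-flight ranging reduces to converting the recorded round-trip time into a one-way distance:

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def round_trip_time_to_distance(round_trip_seconds: float) -> float:
    """Convert an infrared pulse's round-trip time into the one-way
    distance to the reflecting surface (time-of-flight ranging)."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a pulse returning after about 4 nanoseconds corresponds to ~0.6 m.
print(round_trip_time_to_distance(4e-9))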
Fig. 3 is a schematic diagram illustrating the operation of the depth-of-field detection unit 10 according to an embodiment of the present invention. In the depth-of-field detection unit 10, the infrared sensor SR1 scans area A, the infrared sensor SR2 scans area B, the infrared sensor SR3 scans area C, and the infrared sensor SR4 scans area D. Each of the scanning areas A to D may be a cone in front of the screen 20, so that gestures within the effective display range of the screen 20 can be monitored. However, the shape of the scanning areas A to D does not limit the scope of the present invention.
Fig. 4A to 4C are schematic diagrams illustrating the operation of the depth-of-field detection unit 10 according to an embodiment of the present invention. Fig. 4A and 4B sequentially show the user's palm 80 performing a pull-in gesture. At a first time point, in the initial state shown in Fig. 4A, assuming the user's palm 80 appears in scanning areas A to D, the depth-of-field detection unit 10 can detect 4 centroid coordinates P1 to P4 corresponding to the palm 80. At a second time point, in the ending state shown in Fig. 4B, assuming the palm 80 appears in scanning areas B to D, the depth-of-field detection unit 10 can detect 3 centroid coordinates Q1 to Q3 corresponding to the palm 80. Since the second time point is later than the first time point, the processing circuit 30 can determine from the position changes between the centroid coordinates P1 to P4 and Q1 to Q3 that each centroid, or most of the centroids, moves away from the screen 20, as indicated by arrow S1 (toward the user) in Fig. 4B. Based on this moving direction, the processing circuit 30 determines that the user's palm 80 is performing a pull-in gesture and accordingly instructs the screen 20 to display the object so that it visually appears to be pulled toward the user: as shown in Fig. 4C, the three-dimensional object moves in the direction of arrow S1, i.e., closer to the user, and the user observes that the visual distance between the three-dimensional object and the screen 20 becomes longer and the size of the three-dimensional object becomes larger.
Fig. 5A to 5C are schematic diagrams illustrating the operation of the depth-of-field detection unit 10 according to an embodiment of the present invention. Fig. 5A and 5B sequentially show the user's palm 80 performing a push-away gesture. At a third time point, in the initial state shown in Fig. 5A, assuming the user's palm 80 appears in scanning areas B to D, the depth-of-field detection unit 10 can detect 3 centroid coordinates P1 to P3 corresponding to the palm 80. At a fourth time point, in the ending state shown in Fig. 5B, assuming the palm 80 appears in scanning areas A to D, the depth-of-field detection unit 10 can detect 4 centroid coordinates Q1 to Q4 corresponding to the palm 80. Since the fourth time point is later than the third time point, the processing circuit 30 can determine from the position changes between the centroid coordinates P1 to P3 and Q1 to Q4 that each centroid, or most of the centroids, moves closer to the screen 20, as indicated by arrow S2 (away from the user) in Fig. 5B. Based on this moving direction, the processing circuit 30 determines that the user's palm 80 is performing a push-away gesture and accordingly instructs the screen 20 to display the object so that it visually appears to be pushed away by the user: as shown in Fig. 5C, the three-dimensional object moves in the direction of arrow S2, i.e., away from the user, and the user observes that the visual distance between the three-dimensional object and the screen 20 becomes shorter and the size of the three-dimensional object becomes smaller.
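The patent does not give an explicit algorithm for this decision; the following sketch is one possible way to vote on the gesture direction from matched centroids at two time points (the data layout, threshold value, and function name are assumptions, not taken from the patent):

from typing import List, Tuple

Centroid = Tuple[float, float, float]  # (x, y, z), z = distance from the screen

def classify_gesture(before: List[Centroid], after: List[Centroid],
                     min_shift: float = 0.01) -> str:
    """Compare centroid distances from the screen at two time points and
    report whether most centroids moved away from ('pull-in') or toward
    ('push-away') the screen. Unmatched centroids are ignored."""
    votes_away = votes_toward = 0
    for (_, _, z0), (_, _, z1) in zip(before, after):
        if z1 - z0 > min_shift:
            votes_away += 1      # centroid moved away from the screen
        elif z0 - z1 > min_shift:
            votes_toward += 1    # centroid moved closer to the screen
    if votes_away > votes_toward:
        return "pull-in"         # object should move toward the user
    if votes_toward > votes_away:
        return "push-away"       # object should move away from the user
    return "none"

# Example: three centroids all moving 5 cm away from the screen.
print(classify_gesture([(0, 0, 0.20)] * 3, [(0, 0, 0.25)] * 3))  # pull-in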
In the above embodiments, the centroid coordinates are generated by the depth-of-field detection unit 10 according to the detected signals. In another embodiment, the depth-of-field detection unit 10 may instead transmit the detected signals to the processing circuit 30, which then generates the centroid coordinates.
Fig. 6A and 6B are schematic diagrams illustrating how the displayed image is adjusted according to a pull-in gesture or a push-away gesture in an embodiment of the present invention. L1 to L4 represent the left-eye images when the visual distance between the screen 20 and the displayed object is d1 to d4, respectively, and R1 to R4 represent the corresponding right-eye images. Depth1 represents the depth of field of the left-eye image L1 and the right-eye image R1, Depth2 that of L2 and R2, Depth3 that of L3 and R3, and Depth4 that of L4 and R4. The direction of arrow S1 corresponds to how the object displayed on the screen 20 changes upon a pull-in gesture, and the direction of arrow S2 corresponds to how it changes upon a push-away gesture.
Since users may have different visual preferences, in the above embodiment the depth-of-field values Depth1 to Depth4 of the left-eye and right-eye images at visual distances d1 to d4 may be set according to parameters such as the size of the screen 20, the structure of the displayed object, the background illumination level, and the age of the viewer. For example, as shown in Fig. 6A and 6B, the farther the object is from the screen 20 and the closer it is to the user, the larger the corresponding left-eye and right-eye images (R4 > R3 > R2 > R1 and L4 > L3 > L2 > L1). In the embodiment shown in Fig. 6A, the depth of field of the corresponding left-eye and right-eye images may be set larger as the object moves farther from the screen 20 and closer to the user (Depth4 > Depth3 > Depth2 > Depth1). In the embodiment shown in Fig. 6B, the depth of field may instead be set smaller as the object moves farther from the screen 20 and closer to the user (Depth4 < Depth3 < Depth2 < Depth1). However, the progressive depth adjustment shown in Fig. 6A and 6B is only an example and does not limit the scope of the present invention; for instance, some users may prefer to set the depths of field Depth1 to Depth4 corresponding to the different visual distances d1 to d4 to the same value.
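A minimal sketch of such a per-user mapping (the profile names and numeric values below are purely illustrative and not taken from the patent) could key the image scale and depth-of-field setting on the visual-distance step:

# Hypothetical per-distance presentation settings: (image scale, depth of field).
# In the Fig. 6A profile the depth grows as the object moves away from the screen,
# in the Fig. 6B profile it shrinks; a user may also choose a constant depth.
PROFILE_6A = {"d1": (1.0, 1), "d2": (1.2, 2), "d3": (1.4, 3), "d4": (1.6, 4)}
PROFILE_6B = {"d1": (1.0, 4), "d2": (1.2, 3), "d3": (1.4, 2), "d4": (1.6, 1)}

def settings_for(distance_step: str, profile: dict) -> tuple:
    """Return (scale, depth_of_field) for the given visual-distance step."""
    return profile[distance_step]

print(settings_for("d3", PROFILE_6A))  # (1.4, 3)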
In the above embodiment, the depth-of-field detection unit 10 and the screen 20 are disposed on two adjacent sides of the bottom housing 50, and the arrangement of the infrared sensors SR1~SR4 of the depth-of-field detection unit 10 and their corresponding scanning areas is set to detect gesture motion in the direction perpendicular to the screen 20, so that the visual distance between the three-dimensional object displayed on the screen 20 and the screen 20, the size of the three-dimensional object, and the depth of field of the three-dimensional object can be adjusted accordingly. In another embodiment, the depth-of-field detection unit 10 may be disposed at another suitable position, not limited to a side adjacent to the screen 20, with the arrangement of the infrared sensors SR1~SR4 and their corresponding scanning areas likewise set to detect gesture motion in the direction perpendicular to the screen 20, so that the same adjustments can be made.
In summary, the three-dimensional display of the present invention uses infrared sensors, which are low in cost, low in power consumption, and small in size, to detect the change in distance between a gesture and the screen, and then instructs the screen to adjust the size, the depth of field, and the visual distance of the displayed three-dimensional object along with the gesture interaction, thereby providing a natural three-dimensional display effect.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A three-dimensional display capable of gesture sensing, comprising:
a screen for displaying a three-dimensional object;
a depth-of-field detection unit comprising a plurality of groups of infrared sensors; and
a processing circuit configured to:
receive optical signals of the plurality of groups of infrared sensors to provide data obtained from the scanning areas of the plurality of groups of infrared sensors;
determine whether a gesture is detected according to the data obtained from the scanning areas of the plurality of groups of infrared sensors;
calculate locations of one or more centroids of the gesture;
recognize a distance change between the gesture and the screen according to movement information of the one or more centroids; and
instruct the screen to adjust at least one of a visual distance between the three-dimensional object and the screen, a size of the three-dimensional object, and a depth of field of the three-dimensional object in a manner corresponding to the distance change.
2. The three-dimensional display according to claim 1, wherein the scanning areas of the sets of infrared sensors do not intersect with each other.
3. The three-dimensional display according to claim 1, wherein the scanning areas of the plurality of sets of infrared sensors are a plurality of cone-shaped areas in front of the screen.
4. The three-dimensional display of claim 1, wherein:
the processing circuit determines the moving directions of a first centroid and a second centroid of the gesture according to the positions of the first centroid and the second centroid; and
when the distance between the first centroid and the second centroid is unchanged, the first centroid moves away from the screen, and the second centroid moves away from the screen, the processing circuit determines that the gesture is a pull-in gesture.
5. The three-dimensional display of claim 4, wherein:
when the processing circuit determines that the gesture is the pull-in gesture, the processing circuit instructs the screen to adjust the visual distance between the three-dimensional object and the screen from a first value to a second value, to adjust the size of the three-dimensional object from a third value to a fourth value, and to adjust the depth of field of the three-dimensional object from a fifth value to a sixth value;
the first value is less than the second value;
the third value is less than the fourth value; and
the fifth value is different from the sixth value.
6. The three-dimensional display of claim 1, wherein:
the processing circuit determines the moving directions of a first centroid and a second centroid of the gesture according to the positions of the first centroid and the second centroid; and
when the distance between the first centroid and the second centroid is unchanged, the first centroid moves closer to the screen, and the second centroid moves closer to the screen, the processing circuit determines that the gesture is a push-away gesture.
7. The three-dimensional display of claim 6, wherein:
when the processing circuit determines that the gesture is the push-away gesture, the processing circuit instructs the screen to adjust the visual distance between the three-dimensional object and the screen from a first value to a second value, to adjust the size of the three-dimensional object from a third value to a fourth value, and to adjust the depth of field of the three-dimensional object from a fifth value to a sixth value;
the first value is greater than the second value;
the third value is greater than the fourth value; and
the fifth value is different from the sixth value.
8. The three-dimensional display of claim 1, wherein:
the screen is disposed on a cover body;
the depth-of-field detection unit is disposed on a bottom housing;
the processing circuit is disposed in the bottom housing;
the cover body is pivotally connected to a first side of the bottom housing;
the depth-of-field detection unit is disposed on a second side of the bottom housing; and
the first side and the second side are two adjacent sides of the bottom housing.
9. The three-dimensional display according to claim 1, wherein the depth-of-field detection unit is disposed below an effective display range of the screen.
CN201810578401.3A 2018-06-07 2018-06-07 Three-dimensional display with gesture sensing function Withdrawn CN110581987A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810578401.3A CN110581987A (en) 2018-06-07 2018-06-07 Three-dimensional display with gesture sensing function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810578401.3A CN110581987A (en) 2018-06-07 2018-06-07 Three-dimensional display with gesture sensing function

Publications (1)

Publication Number Publication Date
CN110581987A true CN110581987A (en) 2019-12-17

Family

ID=68809990

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810578401.3A Withdrawn CN110581987A (en) 2018-06-07 2018-06-07 Three-dimensional display with gesture sensing function

Country Status (1)

Country Link
CN (1) CN110581987A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103365410A (en) * 2012-04-03 2013-10-23 纬创资通股份有限公司 Gesture sensing device and electronic system with gesture input function
CN105425937A (en) * 2014-09-03 2016-03-23 液态三维系统有限公司 Gesture control system capable of interacting with 3D (three-dimensional) image
CN105759967A (en) * 2016-02-19 2016-07-13 电子科技大学 Global hand gesture detecting method based on depth data
CN106133651A (en) * 2014-03-25 2016-11-16 Lg伊诺特有限公司 Gesture identifying device
US20170300209A1 (en) * 2013-01-15 2017-10-19 Leap Motion, Inc. Dynamic user interactions for display control and identifying dominant gestures
US20180059925A1 (en) * 2009-03-13 2018-03-01 Apple Inc. Enhanced 3D interfacing for remote devices


Similar Documents

Publication Publication Date Title
JP5006587B2 (en) Image presenting apparatus and image presenting method
CN104765445B (en) Eye vergence detection on a display
US9880395B2 (en) Display device, terminal device, and display method
JP6123365B2 (en) Image display system and head-mounted display device
US9986228B2 (en) Trackable glasses system that provides multiple views of a shared display
CN102547324B (en) Image display device and camera head
EP2249558A1 (en) Digital image capturing device with stereo image display and touch functions
US9086742B2 (en) Three-dimensional display device, three-dimensional image capturing device, and pointing determination method
EP2372512A1 (en) Vehicle user interface unit for a vehicle electronic device
CN114730094A (en) Artificial reality system with zoom display of artificial reality content
US10652525B2 (en) Quad view display system
EP3118722A1 (en) Mediated reality
US20150009119A1 (en) Built-in design of camera system for imaging and gesture processing applications
JP7099326B2 (en) Information processing equipment, information processing methods, and programs
CN103176605A (en) Control device of gesture recognition and control method of gesture recognition
TWI669653B (en) 3d display with gesture recognition function
WO2011136213A1 (en) Display device
CN111857461B (en) Image display method and device, electronic equipment and readable storage medium
US10506290B2 (en) Image information projection device and projection device control method
US20190141314A1 (en) Stereoscopic image display system and method for displaying stereoscopic images
US11144194B2 (en) Interactive stereoscopic display and interactive sensing method for the same
CN110581987A (en) Three-dimensional display with gesture sensing function
JP2012103980A5 (en)
US11934585B2 (en) Method for performing interactive operation upon a stereoscopic image and stereoscopic image display system
KR101950816B1 (en) Display Apparatus For Displaying Three Dimensional Picture And Driving Method For The Same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20191217