WO2023052535A1 - Devices and systems for use in imaging during surgery
- Publication number: WO2023052535A1 (application PCT/EP2022/077170)
- Authority: WIPO (PCT)
- Prior art keywords: head mounted, mounted display, content, display, video
- Prior art date
Classifications
- A—HUMAN NECESSITIES; A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE; A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/365—Correlation of different images or relation of image positions in respect to the body; augmented reality, i.e. correlating a live optical image with another image
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/371—Surgical systems with images on a monitor during operation with simultaneous use of two cameras
- A61B2090/372—Details of monitor hardware
- A61B90/50—Supports for surgical instruments, e.g. articulated arms
- A61B2090/502—Headgear, e.g. helmet, spectacles
- A61B17/00—Surgical instruments, devices or methods, e.g. tourniquets
- A61B2017/00017—Electrical control of surgical instruments
- A61B2017/00203—Electrical control of surgical instruments with speech control or speech recognition
Definitions
- the term “and/or” includes any and all combinations of one or more of the associated listed items and may be abbreviated as “/”.
- a trailing “(s)” indicates one or more; for example display(s) indicates one or more displays.
- a “head mounted display” can include at least one display which can be mounted on a human head for viewing content such as real time video based on video captured at a surgical site.
- real time video may be provided to multiple users at the same time, e.g. to multiple users, each having a head mounted display for displaying the real time video.
- image can mean a moving image, e.g. a video.
- image can mean a static single image.
- a stored image can be a static image accessible in a memory.
- a stereoscopic image is a stereoscopic video/moving image.
- augmentation can be a format of display which is a superposition of an image (video) on a semitransparent display, the semitransparent display passing ambient light to a user.
- a fluorescence microscope can be a microscope with optics for capturing fluorescence images.
- a fluorescence microscope may also include optics for capturing other images such as reflectance white light images.
- a fluorescence microscope herein may be capable of simultaneously capturing fluorescence and white light images.
- auxiliary device can be used interchangeably with “auxiliary data source.”
- Fig. 1 illustrates a surgical system and auxiliary data source.
- the surgical system 100 can include at least one sensor 170 such as a camera that can be for generating a real time video of a surgical site.
- the system can include at least one head mounted display 160, e.g. for displaying content such as a real-time video of the surgical site.
- the system 100 can include an image processing device 101, which may include a processor(s) 110 (computer and/or graphics processor) and/or memory 120.
- Memory 120 may alternatively/additionally include remote memory such as memory accessible in a network coupled to the image processing device 101.
- the processor(s) may alternatively/additionally include remote processor(s), e.g. in an accessible network.
- the image processing device 101 can communicatively couple to the at least one head mounted display 160.
- the image processing device 101 can communicatively couple to the sensor 170.
- the image processing device 101 can be configured, e.g. programmed, for generating a real time video of a surgical site, such as by processing the data from the sensor(s) 170.
- the image processing device 101 can determine content for display by the at least one head mounted display 160, and transmit the content to the at least one head mounted display 160.
- the content can be selectable from: a real time video (e.g. a real-time video determined from the sensor), real time data from an auxiliary data source 180, and stored data (e.g. data stored in the memory 120).
- the sensor 170 can be part of a surgical imaging device 150 which may be part of the surgical system 100.
- the device 150 can be a surgical microscope (such as a stereoscopic surgical microscope and/or fluorescence microscope).
- the system 100 can include an arm 155 which may be movable, and can connect the surgical imaging device 150 to the image processing device 101.
- a head mounted display 160 can be optionally connectable to the surgical system 100, such as at the image processing device 101, surgical microscope 150, and/or arm 155. Alternatively/additionally, the head mounted display(s) 160 can be wirelessly connected for receiving content, e.g. real time video(s).
- the configurations described herein can improve ergonomics, such as for allowing flexibility in the positioning of the medical professionals using the imaging device and/or viewing the generated content.
- Remote access to the content, possibly from ranges beyond the typical wireless transmission range, can provide for collaboration and/or teaching events with remote users.
- Head mounted displays 160 for users can be particularly helpful in allowing direct visualization of the content, e.g. video from the surgical site.
- a head mounted display 160 can provide improved ergonomics, for example by removing the constraint of being positioned to access oculars and/or a shared panel display.
- Fig. 2 illustrates an imaging processing device and head mounted displays.
- the image processing device 101 can process images/data received from the sensor(s) 170, auxiliary data device 180, and/or surgical imaging device 150.
- the image processing device 101 can generate at least one real-time video 210a, 210b as content 220.
- the image processing device 101 can transmit selectable content 220, including real-time video(s) 210a, 210b to the head mounted display(s) 160. It can be desirable to have selectable content 220 for the head mounted display(s), such as when multiple users are using headsets 160.
- the content can be selected from real time video (e.g. real time video from the sensor(s) 170), real time data from the auxiliary data source, and/or stored data.
- Real-time video(s) 210a, 210b can include video of white light reflectance microscopy, fluorescence microscopy, a superposition of white light and fluorescence microscopy, and/or optical coherence tomography.
- the content 220 can include ultrasound (e.g. real time ultrasound which can be in the form of video).
- the selectable content 220 can include selectable real-time video(s) 210a, 210b that can be generated/transmitted by image processing device 101.
- the image processing device 101 can generate/transmit any content 220 that is selected by the user, e.g. at a user interface of the image processing device 101, surgical system 100, and/or head mounted display 160.
- the content 220 can be transmitted to each of a plurality of head mounted displays 160.
- the entirety of the content 220 (the selected content) is transmitted to each head mounted display 160.
- the format 230 of the content 220 is also transmitted to each head mounted display 160.
- the format 230 is determined by user input from a first user, e.g. at a first head mounted display, the surgical system 100, the image processing device 101, and/or the surgical imaging device 150.
- the content 220 can be transmitted with the format 230.
- a picture-in-picture format 230 is transmitted, the content 220 including a stereoscopic video in a main portion of the displayed content, and a stored image (e.g. a pre-op image) displayed in a smaller portion of the displayed content.
- a superimposed format is transmitted, the content 220 including a superposition of a stereoscopic white light video and a fluorescence video.
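As an illustration only of how selected content 220 and a format 230 might be bundled for transmission to a head mounted display, the following Python sketch uses invented names (ContentSource, DisplayFormat, ContentSelection, FramePacket, build_packet); the disclosure does not specify any data structures or wire format.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ContentSource(Enum):
    """Selectable content categories described for the head mounted displays."""
    REAL_TIME_VIDEO = auto()   # e.g. video 210a/210b based on the surgical imaging device
    AUXILIARY_DATA = auto()    # e.g. vital signs from an auxiliary data source 180
    STORED_DATA = auto()       # e.g. a pre-op image held in memory 120

class DisplayFormat(Enum):
    """A subset of the formats named in the disclosure."""
    SUPERPOSITION = auto()
    PICTURE_IN_PICTURE = auto()
    FULL_SCREEN = auto()
    STEREOSCOPIC = auto()
    MONOSCOPIC = auto()
    AUGMENTATION = auto()

@dataclass
class ContentSelection:
    """A user's selection of content 220 and format 230 for one head mounted display."""
    hmd_id: str
    sources: list[ContentSource]
    format: DisplayFormat

@dataclass
class FramePacket:
    """What the image processing device might transmit: the selected content plus its format."""
    hmd_id: str
    format: DisplayFormat
    payload: dict

def build_packet(selection: ContentSelection, frames: dict) -> FramePacket:
    # Keep only the streams the user selected; the format travels with the content,
    # as in the picture-in-picture and superposition examples above.
    payload = {src.name.lower(): frames.get(src) for src in selection.sources}
    return FramePacket(hmd_id=selection.hmd_id, format=selection.format, payload=payload)

# Usage example with placeholder frame objects:
selection = ContentSelection("hmd-1",
                             [ContentSource.REAL_TIME_VIDEO, ContentSource.STORED_DATA],
                             DisplayFormat.PICTURE_IN_PICTURE)
frames = {ContentSource.REAL_TIME_VIDEO: "stereo-frame", ContentSource.STORED_DATA: "pre-op-image"}
print(build_packet(selection, frames))
```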
- the content 220 can be pieced out such that each head mounted display 160 may receive any portion of the content 220.
- the content 220 includes a real time stereoscopic video and a fluorescence video.
- a first head mounted display can receive and display, in a first format 230a (e.g. a superposition format), the real time stereoscopic video and the real time fluorescence video (simultaneously); and a second head mounted display 160 can display only part of the content 220, e.g. the real time stereoscopic video.
- a second user at the second head mounted display can select the format 230b and/or content 220 for the second head mounted display.
- the second user can select auxiliary data (e.g. vital signs) to be displayed in a picture-in-picture format with the real time stereoscopic video.
- Each user, such as a first user and a second user, can possibly determine the respective formats 230a, 230b, e.g. by user input at the respective head mounted displays 160. It is possible that there is a mode selection, e.g. at the image processing device 101, surgical system 100, and/or first head mounted display, that authorizes user input from each head mounted display 160 to be used to determine the respective content(s) and/or format(s) displayed at each respective head mounted display 160.
- the content 220 can include real time data from an auxiliary data source 180.
- vital sign data 210c can be selected as content 220 for display to one or more of the head mounted displays 160.
- Vital sign data can include data such as heart rate, breathing rate, and/or blood pressure, for example.
- Data such as vital sign data can come from one or more auxiliary data sources 180, which may be communicatively coupled to the image processing device 101 and/or surgical system 100.
- the real time vital sign data and/or one or more of the auxiliary data source(s) 180 can be selected from a plurality of auxiliary data sources, e.g. as content to be displayed by at least one head mounted display 160.
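The following is a minimal sketch of how real-time vital sign data from an auxiliary data source 180 could be polled and offered as selectable content; the VitalSigns/AuxiliaryDataSource names, the simulated values and the polling period are assumptions, not part of the disclosure.

```python
import random
import time
from dataclasses import dataclass

@dataclass
class VitalSigns:
    """Real-time auxiliary data as described: heart rate, breathing rate, blood pressure."""
    heart_rate_bpm: float
    breathing_rate_bpm: float
    blood_pressure_mmhg: tuple  # (systolic, diastolic)

class AuxiliaryDataSource:
    """Stand-in for an auxiliary device 180; a real device would expose its own interface."""
    def read(self) -> VitalSigns:
        # Simulated values only; a real implementation would query the patient monitor.
        return VitalSigns(
            heart_rate_bpm=random.uniform(55, 95),
            breathing_rate_bpm=random.uniform(10, 20),
            blood_pressure_mmhg=(random.uniform(100, 140), random.uniform(60, 90)),
        )

def poll_vital_signs(source: AuxiliaryDataSource, period_s: float = 1.0, samples: int = 3):
    """Poll the auxiliary source and yield vital signs that can be offered as content 220."""
    for _ in range(samples):
        yield source.read()
        time.sleep(period_s)

if __name__ == "__main__":
    for vs in poll_vital_signs(AuxiliaryDataSource()):
        print(f"HR {vs.heart_rate_bpm:.0f} bpm, RR {vs.breathing_rate_bpm:.0f}/min, "
              f"BP {vs.blood_pressure_mmhg[0]:.0f}/{vs.blood_pressure_mmhg[1]:.0f} mmHg")
```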
- Stored data can be, for example, patient data such as patient identifying data, weight, age, height, and/or images (e.g. pre-op images) that are stored on local and/or remotely located memory 120.
- Alternative/additional stored data can be a brain map.
- a first selectable real time video 101a may be generated by a first sensor.
- a first and second sensor may be used to generate the real time video 101a.
- the real time video is a stereoscopic video.
- the real time video is a fluorescence video.
- a second real time video 101b can be selected for display.
- a first real time video 101a is a stereoscopic video
- a second real time video 101b is a fluorescence video.
- the real time video(s) is a superposition of a white light video and a fluorescence video.
- the image processing device 101 and/or surgical imaging device 150 can be capable of generating more or fewer real time videos.
- One or more sensors 170 may provide data for the image processing device 101 to generate real time video(s) 101a, 101b as selected by user(s).
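As a rough illustration of the superposition of a white light video and a fluorescence video mentioned above, the sketch below blends a false-colored fluorescence frame over a white light frame with NumPy; the blending weights and green tint are arbitrary assumptions rather than the disclosed processing.

```python
import numpy as np

def superpose(white_light: np.ndarray, fluorescence: np.ndarray,
              alpha: float = 0.6, tint=(0.0, 1.0, 0.0)) -> np.ndarray:
    """Blend a false-colored fluorescence frame over a white light frame.

    white_light:  HxWx3 float array in [0, 1] (reflectance image)
    fluorescence: HxW float array in [0, 1] (fluorescence intensity)
    """
    false_color = fluorescence[..., None] * np.asarray(tint)   # tint the fluorescence signal
    weight = alpha * fluorescence[..., None]                   # stronger signal -> more overlay
    return np.clip((1.0 - weight) * white_light + weight * false_color, 0.0, 1.0)

# Example with synthetic frames (placeholders for frames derived from sensor(s) 170):
wl = np.random.rand(480, 640, 3)
fl = np.zeros((480, 640))
fl[200:280, 300:380] = 1.0                                     # a bright fluorescent region
combined = superpose(wl, fl)
print(combined.shape, combined.min(), combined.max())
```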
- the image processing device 101 can have multiple modes of operation which may be selected by user(s).
- a first mode can be one in which a user interface receives, from a first user, input of the user selection of the content and/or the format.
- Each head mounted display 160 can receive the content 220 that is selected by the first user.
- the surgeon can be the first user who may have the ability to select and/or determine the content and/or format for all the head mounted displays 160.
- having the ability of a single user to select/determine the displayed content can allow for efficient communication to the users.
- the user interface can be at the image processing device 101, the surgical system 100, and/or at one of the head mounted displays 160, e.g. a first head mounted display which may be attached to the image processing device 101 and/or surgical system 100.
- the format can be selectable from a plurality of possible formats including at least one of: a superposition, a picture in picture, a rotation, full-screen, portion-screen, zoom, stereoscopic, monoscopic, augmentation, left eye only, or right eye only.
- the image processing device 101 and/or surgical system 100 can provide user interfaces at each head mounted display 160.
- Each user interface can receive a user selection for content and/or format for each respective head mounted display.
- the user selection can be received by the image processing device for determination of the respective content and/or the respective format for each head mounted display.
- the second mode can be helpful to provide flexibility in the type of content and/or format that is viewed by each user. For example, during surgery, tasks and/or responsibilities can be delegated to different medical professionals who may each be better able to perform his/her respective tasks or meet responsibilities by having access to different content and/or format.
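A minimal sketch of how the first mode (content selected by a first user for all displays) and the second mode (per-display selection) could be resolved is given below; the SelectionMode names, the fallback behaviour and the dictionary-based selections are assumptions for illustration.

```python
from enum import Enum, auto

class SelectionMode(Enum):
    FIRST_USER = auto()   # first mode: content/format chosen by a first user (e.g. the surgeon)
    PER_USER = auto()     # second mode: each head mounted display provides its own user interface

def resolve_selection(mode: SelectionMode, first_user_choice: dict,
                      per_hmd_choices: dict, hmd_id: str) -> dict:
    """Return the content/format selection to apply to one head mounted display."""
    if mode is SelectionMode.FIRST_USER:
        return first_user_choice
    # In the second mode, fall back to the first user's choice if a display has not chosen yet.
    return per_hmd_choices.get(hmd_id, first_user_choice)

# Usage example:
first_choice = {"content": ["white_light_video"], "format": "stereoscopic"}
per_hmd = {"hmd-2": {"content": ["white_light_video", "vital_signs"],
                     "format": "picture_in_picture"}}
print(resolve_selection(SelectionMode.PER_USER, first_choice, per_hmd, "hmd-2"))
print(resolve_selection(SelectionMode.FIRST_USER, first_choice, per_hmd, "hmd-2"))
```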
- the surgical system 100 can be used for communication of the surgical procedure, for example.
- the detector 170 can capture a captured video of the surgical site in an operating room, and the system 100 can include an output 190 which outputs a real-time video to at least one remote user outside of the operating room.
- the real-time video can be based on the captured video, e.g. the real time video is processed from video captured by at least one sensor 170.
- Fig. 3A illustrates a head mounted display.
- the head mounted display 300 described herein may be one of a plurality which may be coupled to the image processing device 101, surgical system 100, and/or surgical imaging device 150.
- the head mounted display can include at least one display, such as two displays 30, 40.
- the head mounted display 300 can be configured for augmented reality (AR), mixed reality (MR), and/or virtual reality (VR).
- the head mounted display 300 can have settings for selecting AR, MR, or VR.
- in AR, the head mounted display 300 passes ambient light so that the user sees the environment and variable superimposed images generated at the display of the head mounted display.
- When a plurality of head mounted displays 300 are used, it is possible for one or more to be located outside of the operating room in which the video of the surgical procedure is captured. This can be advantageous, for example, in teaching/training environments, and/or when a surgeon would like to consult with a colleague remotely, e.g. to discuss the content 220. This may improve patient outcome, e.g. by allowing for remote collaboration in real time during surgery. In environments in which there are multiple users (e.g. in teaching environments), having head mounted displays 300 for the users can improve the learning experience, e.g. by facilitating visualization, to each user, of the content 220, rather than having a single screen for which multiple viewers may limit each other's view. Head mounted displays 300 can also reduce the amount of space in the operating room used by panel displays.
- the head mounted display(s) 300 can be communicatively coupled to any of the surgical imaging devices 150, surgical systems 100, and/or image processing devices 101 described herein.
- the head mounted display 300 of Fig. 3A includes a display 310 and a mounting structure 320 for mounting on the head of a user.
- the mounting structure can be a pair of legs, as shown. Alternatively/additionally, a head band can be used.
- the head mounted display 300 can receive a real time video stream of a surgical site which is based on a video captured by a surgical imaging device 150.
- the head mounted display 300 can display selectable content 220, e.g. selected from any one or more of real time video of a surgical site, real time data from an auxiliary data source, and stored data.
- the head mounted display 300 can receive user input, e.g. for determining the content 220.
- the user input can be made by a graphical user interface which may be displayed such as superimposed on the content 220.
- the user input can be determined by voice activation, e.g. through a microphone(s) 340 which may be integrated in the head mounted display 300.
- User input can be for determining the content 220, e.g. the content 220 transmitted and/or displayed at the head mounted display 300.
- each head mounted display 300 may determine which content 220 and/or format to display.
- a head mounted display 300 can be configured for audio communication by the microphone 340 and speaker 350, for example.
- the head mounted display 300 can communicatively couple to an audio communication device, such as an external device like an audio headset.
- the audio communication, e.g. via the microphone 340, can also be used for selection (e.g. menu driven selection) of content 220 and/or format 230 of the displayed content 220.
- a menu e.g. for user selection of content and/or format, can be provided visually (e.g. at the head mounted display 300) and/or audibly (e.g. at the speaker 350 of the head mounted display 300 and/or via a coupled audio communication device).
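The sketch below illustrates one way a menu-driven, voice-based selection (e.g. from audio captured by the microphone 340) could map recognized phrases to content/format selections; the phrase set is hypothetical and no particular speech recognizer is implied.

```python
# Hypothetical mapping from recognized phrases to selections; the recognizer itself
# (which would consume audio from microphone 340) is out of scope for this sketch.
VOICE_COMMANDS = {
    "show fluorescence": {"content": "fluorescence_video"},
    "show white light": {"content": "white_light_video"},
    "picture in picture": {"format": "picture_in_picture"},
    "superposition": {"format": "superposition"},
    "show vitals": {"content": "vital_signs"},
}

def handle_utterance(text: str, current_selection: dict) -> dict:
    """Update the current selection from a recognized utterance, if it matches a menu entry."""
    update = VOICE_COMMANDS.get(text.strip().lower())
    if update is None:
        return current_selection          # unrecognized phrases leave the selection unchanged
    return {**current_selection, **update}

selection = {"content": "white_light_video", "format": "full_screen"}
selection = handle_utterance("Picture in picture", selection)
selection = handle_utterance("show vitals", selection)
print(selection)   # {'content': 'vital_signs', 'format': 'picture_in_picture'}
```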
- the content 220 can be selected, e.g. at the respective head mounted display(s) 300, from a plurality of video streams receivable from a surgical system 100 and/or image processing device 101.
- User input can be received, e.g. at each respective head mounted display 300, for determining the content 220 for the respective head mounted display 300 which receives the user input. It is possible to transmit the user input to the surgical system 100 and/or image processing device 101.
- the surgical system 100 and/or image processing device 101 can determine the content 220 (e.g. any one or more of the video streams 210a, 210b can be transmitted to the head mounted display(s) 300 as at least part of the content 220).
- a single user can determine the content 220 and/or format for all of the coupled head mounted display(s) 300.
- each user can determine content 220 and/or format at each head mounted display 300.
- the format of displayed content can be selected from, for example, monoscopic display, stereoscopic display, picture-in-picture, or superpositional display.
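As an illustration of the picture-in-picture format named above, the following sketch composes a scaled-down inset over a main frame; the inset size, placement and nearest-neighbour resize are arbitrary choices for the example.

```python
import numpy as np

def picture_in_picture(main: np.ndarray, inset: np.ndarray,
                       scale: float = 0.25, margin: int = 10) -> np.ndarray:
    """Place a scaled-down inset (e.g. a vital signs panel or pre-op image) over the main frame."""
    out = main.copy()
    h, w = main.shape[:2]
    ih, iw = max(1, int(h * scale)), max(1, int(w * scale))
    # Nearest-neighbour resize, to avoid depending on an image library for this sketch.
    rows = np.arange(ih) * inset.shape[0] // ih
    cols = np.arange(iw) * inset.shape[1] // iw
    small = inset[rows][:, cols]
    out[h - ih - margin:h - margin, w - iw - margin:w - margin] = small
    return out

main_frame = np.zeros((480, 640, 3))
inset_frame = np.ones((240, 320, 3))
print(picture_in_picture(main_frame, inset_frame).shape)   # (480, 640, 3)
```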
- the head mounted display(s) 300 can be communicatively coupled to one or more auxiliary devices 180. Data received from the auxiliary device can be displayed by the head mounted display(s) 300.
- the head mounted display(s) can be directly communicatively coupled to one or more auxiliary devices 180 or, for example, through the image processing device 101 and/or surgical imaging system 100.
- the real time auxiliary data can include vital signs (e.g. heart rate, breathing rate, blood pressure).
- the auxiliary device(s) 180 can be directly coupled to the head mounted display(s) 300.
- by user input, e.g. audible input received by the microphone 340 and/or buttons on the head mounted display 300, different modes of the imaging device 150 can be activated or deactivated, such as fluorescence imaging modes, modes which utilize image guided surgery (IGS), and/or modes which include activation of a communication channel with auxiliary equipment, e.g. IGS equipment.
- the head mounted display(s) 300 can display content 220 which is selectable from at least the real time video of the surgical site, the real time data from the auxiliary data source, and stored data.
- there can be more than one real time video which is selectable as content 220.
- a second real time video based on video captured by the surgical imaging device 150 can be selected as content 220 and displayed.
- the selectable content 220 can include real time video which is a processed video, e.g. a superposition image of a white light image and a fluorescence image.
- the content 220 can include, for example, real time video which can be a stereoscopic image.
- Fig. 3B illustrates a head mounted display.
- the head mounted display 300b described herein may be one of a plurality which may be coupled to the image processing device 101, surgical system 100, and/or surgical imaging device 150.
- Features described with respect to the illustrated head mounted display 300b can be used in any head mounted display described herein.
- the head mounted display 300b can display content 220, e.g. content transmitted by the surgical imaging device 150, image processing device 101, and/or surgical imaging system 100.
- the head mounted display 300b can include a display 10, e.g. for displaying the content 220.
- the head mounted display 300b can include a mounting structure 20 for mounting on a head of a user.
- the mounting structure 20 can fasten the head mounted display 300b to the head.
- the head mounted display 300b can include at least one adjuster 30, 50 for variable diopter(s).
- Each adjuster(s) 30 can include one or more lenses 32, 34, 36.
- at least one lens 32, 34, 36 of the adjuster 30 is movable so as to alter the effective diopter of the head mounted display 300b, e.g. for allowing a user to adjust the focus.
- the user can use the adjuster 30 to focus the image of the display 10 in the user’s eye.
- the adjuster may allow at least partially for some correction of the user’s myopia or hyperopia.
- the diopter adjustment by the adjuster 30 can allow the user to better focus the image plane of the ambient image (from the surgical site, for example) on the user’s eye.
- the adjuster 30 can allow the image plane of the user’s field of view to be moved so as to allow better focus.
- the head mounted display 300b can include at least one adjuster 30 (such as one or two) for the adjustment/correction of diopter (such as for each eye).
- At least three lenses of the adjuster can be between the display 10 and the eye of the user.
- Three or more lenses can allow for a suitable range of correction (e.g. diopter adjustment) and/or magnification.
- One of the lenses can be an aspheric lens. By using at least one aspheric lens, the size and weight of the optical arrangement may be significantly reduced in comparison to a system using only spherical lenses.
- the first optical arrangement 30 comprises a first lens 32, a second lens 34 and a third lens 36.
- the first lens 32 may be the lens of the three lenses closest to the first display 10.
- the second lens 34 may be arranged between the first lens 32 and the third lens 36.
- the first optical arrangement 30 may comprise exactly three lenses or may comprise more than three lenses.
- the three lenses may be glass lenses or may be made of other suitable material.
- the aspheric lens of the three lenses may be the first, second or third lens.
- the aspheric lens may be the second lens 34 while the first lens 32 and the third lens 36 may be spherical lenses.
- all three lenses may be aspheric lenses. In this way, size and weight of the first optical arrangement 30 may be kept low.
- each lens of the three lenses may comprise a first surface and a second surface.
- the surfaces of the three lenses may represent or form a sequence of surfaces.
- the first surface of the first aspheric lens may be a first spherical surface
- the second surface of the first aspheric lens may be a first aspherical surface
- the first surface of the second aspheric lens may be a second spherical surface
- the second surface of the second aspheric lens may be a second aspherical surface
- the first surface of the third aspheric lens may be a third spherical surface
- the second surface of the third aspheric lens may be a third aspherical surface.
- the sequence of surfaces may comprise a first spherical surface followed by a first aspherical surface followed by a second spherical surface followed by a second aspherical surface followed by a third spherical surface followed by a third aspherical surface.
- each lens of the three lenses comprises a different glass material.
- Three different glass materials may be used for the three lenses.
- the first lens may comprise or consist of a first glass material
- the second lens may comprise or consist of a second glass material
- the third lens may comprise or consist of a third glass material.
- the first glass material, the second glass material and the third glass material are three different glass materials.
- the first lens 32 may be a positive lens and/or an aspheric lens.
- a focal length of the first lens 32 may be at most 25mm (or at most 20mm or at most 30mm) and/or at least 15mm (or at least 10mm or at least 20mm).
- the second lens 34 may be a negative lens and/or an aspheric lens.
- a focal length of the second lens 34 may be at most -15mm (or at most -20mm or at most -13mm) and/or at least -5mm (or at least -10mm or at least -3mm).
- the third lens 36 may be a positive lens and/or an aspheric lens.
- a focal length of the third lens 36 may be at most 20mm (or at most 25mm or at most 17mm) and/or at least 10mm (or at least 13mm or at least 7mm).
- a desired viewing angle, overall size, weight and/or exit pupil size may be obtained.
- one or more of the three aspheric lenses may be free form lenses. In this way, the size and weight may be further reduced.
- a total weight of the three lenses e.g. first lens 32, second lens 34 and third lens 36
- a diameter of each lens of the three lenses e.g. first lens 32, second lens 34 and third lens 36
- a total focal length of the adjuster 30 may be at most 30mm (or at most 35mm or at most 25mm) and/or at least 15mm (or at least 10mm or at least 20mm).
- a total optical distance between the first display 10 and an eye of a user caused by the adjuster 30 may be at most 60mm (or at most 70mm, at most 55mm or at most 50mm).
- size and/or weight of the first optical arrangement 30 and the head mounted display 300b may be kept low.
- a display diagonal of the display 10 may be at least 125 mm (or at least 150 mm) and/or at most 250 mm (or at most 200 mm). In this way, a sufficiently large image can be displayed while the weight may be kept low.
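To check that focal lengths in the ranges given above can yield a total focal length in the stated range, the sketch below combines three thin lenses with paraxial ray-transfer matrices; the chosen focal lengths and the 7 mm spacings are assumptions (the disclosure gives no spacings), and a real design would also account for lens thickness, asphericity and glass choice.

```python
import numpy as np

def thin_lens(f_mm: float) -> np.ndarray:
    """Paraxial ray-transfer matrix of a thin lens with focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f_mm, 1.0]])

def free_space(d_mm: float) -> np.ndarray:
    """Propagation over a distance d."""
    return np.array([[1.0, d_mm], [0.0, 1.0]])

# Focal lengths chosen inside the ranges given above; the 7 mm spacings are assumed.
f1, f2, f3 = 20.0, -10.0, 15.0
d12 = d23 = 7.0

# Light passes the first lens 32 first, so its matrix is the rightmost factor.
system = thin_lens(f3) @ free_space(d23) @ thin_lens(f2) @ free_space(d12) @ thin_lens(f1)
efl = -1.0 / system[1, 0]
print(f"effective focal length ~ {efl:.1f} mm")   # ~28.3 mm with these assumed values,
                                                  # within the 15-30 mm range stated above
```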
- the head mounted display 300b may include a second display and a second adjuster.
- a single display spans across the fields of view of each eye, and each eye has a corresponding adjuster 30 for the diopter adjustment.
- the head mounted display 300b can include at least one adjuster 30, 50 for the adjustment/correction of diopter.
- the head mounted display 300b can include one or more additional optional features corresponding to one or more aspects of any examples described herein.
- Fig. 3C illustrates schematically a head mounted display 300c.
- the head mounted display 300c described herein may be one of a plurality which may be coupled to the image processing device 101, surgical system 100, and/or surgical imaging device 150.
- the head mounted display 300c can include an adjuster 310 (e.g. an interpupillary adjuster) for adjusting the interpupillary distance (IPD).
- the adjuster can adjust the relative positions of lenses and/or displays 10, 40 (e.g. in a direction substantially parallel to the direction of a line connecting the pupils of a user).
- the head mounted display 300c can include one or more adjusters 320, 330 for the diopter adjustment.
- the adjuster(s) 320, 330 for the diopter adjustment can include respective lenses in the optical paths between the user’s respective eyes and respective displays 10, 40.
- the diopter adjuster(s) 320, 330 can allow the user to improve the focus of the displays 10, 40 of the head mounted display 300c.
- the adjusters 320, 330 can include respective optics.
- the head mounted display 300c can include a first adjuster for variable interpupillary distance and/or a second adjuster for variable diopter, and possibly a third adjuster for variable diopter of a second eye. It is possible to adjust the focus of the surgical imaging device 150 such that the third adjuster is not strictly necessary for one user. When multiple users each have a respective head mounted display 300c, it can be advantageous for each user to have the capability of adjusting diopter for each eye.
- the display(s) 10, 40 may be an LCD display (liquid crystal display), a TFT display (thin-film transistor display) or an OLED display (organic light-emitting diode display).
- the head mounted display(s) 300 can have multiple modes.
- a first mode can be that each head mounted display 300 receives the content 220 selected by a first user, e.g. the surgeon. This can be convenient particularly when the surgeon is communicating to other users, e.g. remote collaborators and/or students.
- a second mode can be one in which, at each head mounted display, a respective user interface is provided.
- Each respective user interface can be configured for receiving a user selection for selection of at least one of a mode, a respective content, or a respective format for the respective head mounted display.
- a second user for example, can select the mode in which the content is determined by the first user (e.g. the surgeon).
- a user can select a mode in which the content is determined by the respective user.
- the content 220 can be selected by the respective user and/or the format.
- Examples of various formats which can be selected are a superposition, a picture in picture, full-screen, portion-screen, zoom, stereoscopic, monoscopic, augmentation, left eye only, or right eye only.
- the selection can be transmitted to the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100 for transmission of the respective content 220 to the respective head mounted display 300 or for transmission of the respective format to the respective head mounted display.
- the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100 transmits the selected respective content 220 to the respective head mounted display 300, and the head mounted display 300 controls the format.
- a user may wish to view the real time video of the white light image of the surgical site as the main portion of the screen, and to view, as a smaller picture in picture display, another source (such as video of an OCT image, a stored image, or the like).
- the content 220 can be transmitted to the head mounted display 300.
- the format, e.g. the picture-in-picture selection, can be transmitted by the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100 to the respective head mounted display 300.
- the format can be determined at the head mounted display 300.
- the content 220 and the format may be transmitted, e.g. after a selection of picture-in-picture format, from the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100 to the respective head mounted display 300.
- the respective head mounted display 300 may display the content 220 and format as received from the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100.
- the content and format are transmitted by the image processing device 101 including the superposition format of a stereoscopic white light image and a false-color fluorescence image. These cases illustrate that the content and format can be transmitted to the head mounted display(s) 300.
- the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100 transmits the selected content 220, and the headset 300 determines the format (e.g. after selection of picture in picture or superposition formats).
- the image processing device 101 transmits a real time stereoscopic video of a white light image of the surgical site, and a real time video of a fluorescence image.
- the user at the head mounted display 300, can change the format without the selection of format being transmitted to the image processing device 101.
- the user can change from a picture-in-picture format to a superposition format.
- the format selection can be done by transmitting the selected format to the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100; the content 220 and format are transmitted by the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100 to the head mounted display(s) 300.
- the content 220 can possibly be continuously transmitted from the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100 to the head mounted display(s) 300.
- the user may select different formats of the content 220, e.g. without transmitting the selection of the format to the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100.
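The sketch below illustrates a headset-side format change (e.g. from picture-in-picture to superposition) applied locally to streams the head mounted display already receives, without sending the format selection back to the image processing device; the compose_locally function, stream names and compositing details are invented for illustration.

```python
import numpy as np

def compose_locally(format_name: str, streams: dict) -> np.ndarray:
    """Apply the currently selected format to streams the headset already receives."""
    main = streams["white_light"]
    if format_name == "superposition":
        # Simple additive green overlay as a stand-in for the blending sketched earlier.
        return np.clip(main + streams["fluorescence"][..., None] * (0.0, 1.0, 0.0), 0.0, 1.0)
    if format_name == "picture_in_picture":
        out = main.copy()
        inset = streams["inset"][:120, :160]          # crude fixed-size inset, placement assumed
        out[-130:-10, -170:-10] = inset
        return out
    return main                                       # default: main stream full screen

streams = {
    "white_light": np.random.rand(480, 640, 3),
    "fluorescence": np.zeros((480, 640)),
    "inset": np.ones((480, 640, 3)) * 0.5,
}
# A local format change needs no round trip to the image processing device:
for fmt in ("picture_in_picture", "superposition"):
    print(fmt, compose_locally(fmt, streams).shape)
```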
- Each of the head mounted displays 300, 300b, 300c described herein includes features which can be used in the other head mounted displays 300, 300b, 300c.
- each can have features for user input, displays, and/or adjusters for IPD and/or diopter(s).
- Each can be communicatively coupled to auxiliary devices 180 and/or image processing devices 101. The coupling may be wired and/or wireless.
- Fig. 4 illustrates a method of communicating a surgical procedure.
- the method 400 can include capturing 410, using a detector 170, a captured video of a surgical site in an operating room, and outputting 420 a real time video to at least one remote user outside of the operating room (e.g. outputting 420 to a head mounted display 300 which is outside of the operating room).
- the real time video is based on the captured video.
- the method 400 can be done by an apparatus, such as the surgical system 100 including the surgical imaging device 150 and image processing device 101.
- the surgical imaging device can be a stereomicroscope.
- the apparatus can include a head mounted display for displaying the real time video to a user(s) in the operating room, e.g. a surgeon.
- the output can include the real time video displayed to the user(s) in the operating room.
- Any of the surgical imaging devices 150, image processing devices 101, and/or surgical systems 100 described herein can be configured, such as by a computer program stored in memory 120, to perform the method 400.
- An apparatus for performing the method 400 can include at least one detector 170 for capturing a captured video of a surgical site in an operating room, and an output configured to output 420 the real time video.
- a second detector, for example, can be for generating the real time video or a second real time video. Any one or more of the real time videos generated can be a stereoscopic video. Alternatively/additionally, a stereoscopic video can be generated at a user, e.g. at a remotely located head mounted display 300, from two or more real time videos.
- the apparatus may communicatively couple to one or more auxiliary data sources 180.
- the method 400 can include receiving real-time vital sign data from the auxiliary data source(s) 180.
- the apparatus can determine the content 220 for output, and transmit the content to the remote user(s).
- the content can be selectable from at least one of: the real time video, real time data from an auxiliary data source, or stored data.
- the content includes the real time video and at least one of a selectable content of at least one of: real time data from an auxiliary data source, or stored data.
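A minimal sketch of the overall flow of method 400 (capture 410, process into a real time video, output 420 to remote users) is given below; the Detector, ImageProcessor and Output classes are placeholders and no particular capture or network API is implied.

```python
from typing import Iterable

class Detector:
    """Placeholder for detector/sensor 170; yields captured frames."""
    def frames(self, n: int = 3) -> Iterable[str]:
        for i in range(n):
            yield f"captured-frame-{i}"

class ImageProcessor:
    """Placeholder for image processing device 101; turns captured video into real time video."""
    def process(self, frame: str) -> str:
        return f"processed({frame})"

class Output:
    """Placeholder for output 190; sends the real time video to remote head mounted displays."""
    def send(self, frame: str, recipients: list[str]) -> None:
        for r in recipients:
            print(f"-> {r}: {frame}")

def communicate_surgical_procedure(detector: Detector, processor: ImageProcessor,
                                   output: Output, remote_hmds: list[str]) -> None:
    """Capture (410) a video of the surgical site and output (420) a real time video."""
    for captured in detector.frames():
        output.send(processor.process(captured), remote_hmds)

communicate_surgical_procedure(Detector(), ImageProcessor(), Output(), ["remote-hmd-1"])
```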
- the methods described herein can be implemented in hardware or in software.
- the implementation can be performed using a non-transitory storage medium such as a digital storage medium, for example a floppy disc, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.
- the digital storage medium may be computer readable.
- Some embodiments include a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
- Embodiments described herein can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer.
- the program code may, for example, be stored on a machine readable carrier.
- Further embodiments include the computer program for performing one of the methods described herein, stored on a machine readable carrier.
- a computer program having a program code for performing the methods described herein, when the computer program runs on a computer.
- a computer program configured to operate any one or more of the head mounted display(s) described herein, the image processing device 101 described herein, the surgical system 100 described herein, and/or the surgical imaging device 150 described herein.
- a storage medium (or a data carrier, or a computer-readable medium) comprising, stored thereon, the computer program for performing the methods described herein when it is performed by a processor.
- an apparatus as described herein comprising a processor and the storage medium for executing the methods described herein.
- a data stream or a sequence of signals representing the computer program for performing one of the methods described herein.
- the data stream or the sequence of signals may, for example, be configured to be transferred via a data communication connection, for example, via the internet.
- a processing means, for example a computer or a programmable logic device, configured to, or adapted to, perform the methods described herein.
- a programmable logic device, for example a field programmable gate array, may cooperate with a microprocessor in order to perform one of the methods described herein. The methods described herein are preferably performed by any hardware apparatus.
Abstract
A plurality of head mounted displays is disclosed, the head mounted displays receiving a real-time video stream of a surgical site which is based on a video captured by a surgical imaging device. The head mounted displays can receive real-time auxiliary data from an auxiliary device. The real-time auxiliary data can include vital signs. Content for display can be selectable from at least: real time video of the surgical site, real time data from the auxiliary device, or stored data.
Description
Devices and Systems for Use in Imaging During Surgery
Technical field
Examples relate to image processing, displays, and communication devices which are used in a surgical setting.
Background
Surgical procedures carried out in operating rooms often utilize imaging apparatuses such as microscopes to aid medical professionals in viewing the surgical site. Often, more than one device (which may provide data such as vital signs) and/or more than one type of real-time video can be useful in the surgical environment. Various medical practitioners, who can include surgeons, nurses, and other personnel (within the operating room and outside of it), can be aided by having access to multiple sources of information, data, and/or video during the surgery. Such access can improve the quality of care and/or aid in the instruction of new medical professionals. There are challenges in providing access to data and images to multiple medical personnel in a surgical setting.
Summary
Herein is disclosed a plurality of head mounted displays, configured for receiving a real-time video stream of a surgical site which is based on a video captured by a surgical imaging device. Providing head mounted displays to a plurality of users can provide convenient access to visualizing the surgical site for each user, and may reduce interference from other users who might in other circumstances restrict each other’s views to a panel display, for example.
Herein is disclosed a plurality of head mounted displays, configured for receiving real-time auxiliary data from an auxiliary device. Medical professionals may be provided improved access to different types of real-time data and/or images, e.g. with reduced interference from other users.
Herein is disclosed a plurality of head mounted displays in which the real-time auxiliary data includes vital signs. Medical professionals may be provided improved access to different types of real-time data, particularly vital sign data, e.g. with reduced interference from other users.
Herein is disclosed a plurality of head mounted displays in which a content for display is selectable from at least: the real time video of the surgical site, the real time data from the auxiliary data source, and stored data. Medical professionals may be provided improved access to different types of real-time data and/or images, e.g. with reduced interference from other users. Furthermore, in synergy, medical professionals can have improved access to selectable data/videos, which may be of variable relevance to different medical professionals associated with a surgical procedure.
Herein is disclosed a plurality of head mounted displays, configured for directly coupling to the auxiliary device for receiving the real-time auxiliary data. Medical professionals may be provided improved access to different types of real-time data and/or images, e.g. with reduced interference from other users.
Herein is disclosed a plurality of head mounted displays in which the content is further selectable from a second real-time video stream which is based on video captured by the surgical imaging device. Medical professionals can have improved access to selectable videos, which may be of variable relevance to different medical professionals associated with a surgical procedure.
Herein is disclosed a plurality of head mounted displays in which the real time video is a superposition image of a white light image and a fluorescence image. Medical professionals can have improved access to selectable videos, particularly videos based on visible and fluorescence image capture, which may be of variable relevance to different medical professionals associated with a surgical procedure.
Herein is disclosed a plurality of head mounted displays in which the real time video is a stereoscopic image. Medical professionals can have improved access to stereoscopic videos, with reduced interference from other users.
Herein is disclosed a plurality of head mounted displays in which each head mounted display includes at least one of: a first adjuster for variable interpupillary distance or a second adjuster for variable diopter. Adjustable interpupillary distance and/or diopter can reduce eye strain and/or improve access to videos, with reduced interference from other users.
Herein is disclosed a plurality of head mounted displays including at least one head mounted display which is remotely located, outside of an operating room in which the video is captured. Medical professionals can have improved access to stereoscopic videos, with reduced interference from other users.
Herein is disclosed a plurality of head mounted displays, configured to provide a first mode in which each head mounted display receives the content selected by a first user. Medical professionals can have improved access to relevant content, with reduced interference from other users.
Herein is disclosed a plurality of head mounted displays, configured to provide: a second mode in which: at each head mounted display, a respective user interface is provided, and each respective user interface is configured for receiving a user selection for selection of at least one of a mode, a respective content, or a respective format for the respective head mounted display. Medical professionals can have improved access to relevant content, with reduced interference from other users, and/or flexible content tailored to different medical professionals in different roles related to surgery.
Herein is disclosed a plurality of head mounted displays in which the format is selectable from a plurality of possible formats including at least one of: a superposition, a picture in picture, full-screen, portion-screen, zoom, stereoscopic, monoscopic, augmentation, left eye only, or right eye only; wherein augmentation is a superposition of an image on a semitransparent display, the semitransparent display passing ambient light to a user. Medical professionals can have improved access to relevant content, with reduced interference from other users, and/or improved access to flexible content/format tailored to different medical professionals in different roles related to surgery.
Herein is disclosed a plurality of head mounted displays, configured for: transmission of the user selection to an image processing device for transmission of the respective content to the respective head mounted display or transmission of the respective format to the respective head mounted display. Medical professionals can have improved access to relevant content, with reduced interference from other users, and/or flexible content/format tailored to different medical professionals in different roles related to surgery.
Herein is disclosed a plurality of head mounted displays in which each head mounted display is configured for audio communication by at least one of: communicatively coupling to audio communication devices, or by a microphone and a speaker included in each head mounted display. Medical professionals can have improved access to relevant content, with reduced interference from other users, possibly while having an ability to communicate with each other, and/or to select flexible content/format tailored to different medical professionals in different roles related to surgery.
Herein is disclosed a surgical system which includes a plurality of head mounted displays as described herein and a surgical imaging device configured to capture the video. Medical professionals can have improved access to relevant content, with reduced interference from other users, and/or flexible content tailored to different medical professionals in different roles related to surgery.
Short Description of the Figures
Some examples of apparatuses and/or methods will be described in the following by way of example only, and with reference to the accompanying figures, in which
Fig. 1 shows a surgical system and auxiliary data source;
Fig. 2 shows an image processing device and head mounted displays;
Fig. 3A shows a head mounted display;
Fig. 3B shows a head mounted display;
Fig. 3C shows a head mounted display; and
Fig. 4 shows a method of communicating a surgical procedure.
Detailed Description
Various examples will now be described more fully with reference to the accompanying drawings in which some examples are illustrated. In the figures, which are not necessarily to scale, the thicknesses of lines, layers and/or regions may be exaggerated for clarity.
As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items and may be abbreviated as “/”. Herein, a trailing “(s)” indicates one or more; for example display(s) indicates one or more displays.
Herein a “head mounted display” can include at least one display which can be mounted on a human head for viewing content such as real time video based on video captured at a surgical site.
Herein, real time video may be provided to multiple users at the same time, e.g. with each user having a head mounted display for displaying the real time video.
Herein “image” and “video” may be used interchangeably. The term “image” can mean a moving image, e.g. a video. Alternatively, when apparent from the context that a video/moving image cannot be meant, or when expressly stated, “image” can mean a static single image. In an example, a stored image can be a static image accessible in a memory. In another example, a stereoscopic image is a stereoscopic video/moving image.
Herein, augmentation, or augmented format, can be a format of display which is a superposition of an image (video) on a semitransparent display, the semitransparent display passing ambient light to a user.
Herein, a fluorescence microscope can be a microscope with optics for capturing fluorescence images. A fluorescence microscope may also include optics for capturing other images, such as reflectance white light images. For example, a fluorescence microscope, herein, may be capable of simultaneously capturing fluorescence and white light images.
Herein “auxiliary device” can be used interchangeably with “auxiliary data source.”
Fig. 1 illustrates a surgical system and auxiliary data source. The surgical system 100 can include at least one sensor 170 such as a camera that can be for generating a real time video of a surgical site. The system can include at least one head mounted display 160, e.g. for displaying content such as a real-time video of the surgical site. The system 100 can include an image processing device 101, which may include a processor(s) 110 (computer and/or graphics processor) and/or memory 120. Memory 120 may alternatively/additionally include remote memory such as memory accessible in a network coupled to the image processing device 101. The processor(s) may alternatively/additionally include remote processor(s), e.g. in an accessible network.
The image processing device 101 can communicatively couple to the at least one head mounted display 160. The image processing device 101 can communicatively couple to the sensor 170. The image processing device 101 can be configured, e.g. programmed, for generating a real time video of a surgical site, such as by processing the data from the sensor(s) 170. The image processing device 101 can determine content for display by the at least one head mounted display 160, and transmit the content to the at least one head mounted display 160. The content can be selectable from: a real time video (e.g. a real-time video determined from the sensor), real time data from an auxiliary data source 180, and stored data (e.g. data stored in the memory 120).
The sensor 170 can be part of a surgical imaging device 150 which may be part of the surgical system 100. The device 150 can be a surgical microscope (such as a stereoscopic surgical microscope and/or fluorescence microscope). The system 100 can include an arm 155 which may be movable, and can connect the surgical imaging device 150 to the image processing device 101. A head mounted display 160 can be optionally connectable to the surgical system 100, such as at the image processing device 101, surgical microscope 150, and/or arm 155. Alternatively/additionally, the head mounted display(s) 160 can be wirelessly connected for receiving content, e.g. real time video(s).
The configurations described herein can improve ergonomics, such as for allowing flexibility in the positioning of the medical professionals using the imaging device and/or viewing the generated content. Remote access to the content, including possibly from ranges that may be beyond the typical wireless transmission range capability, can provide for collaboration and/or teaching events with remote users. Head mounted displays 160 for users can be particularly helpful in allowing direct visualization of the content, e.g. video from the surgical site. For the surgeon, also, a head mounted display 160 can provide improved ergonomics, for example by removing the constraint of being positioned to access oculars and/or a shared panel display.
Fig. 2 illustrates an image processing device and head mounted displays. The image processing device 101 can process images/data received from the sensor(s) 170, auxiliary data source 180, and/or surgical imaging device 150. The image processing device 101 can generate at least one real-time video 210a, 210b as content 220. The image processing device 101 can transmit selectable content 220, including real-time video(s) 210a, 210b, to the head mounted display(s) 160. It can be desirable to have selectable content 220 for the head mounted display(s), such as when multiple users are using headsets 160. For example, the content can be selected from real time video (e.g. real time video from the sensor(s) 170), real time data from the auxiliary data source, and/or stored data.
Real-time video(s) 210a, 210b can include video of white light reflectance microscopy, fluorescence microscopy, a superposition of white light and fluorescence microscopy, and/or optical coherence tomography. Alternatively/additionally, the content 220 can include ultrasound (e.g. real time ultrasound which can be in the form of video). The selectable content 220 can include selectable real-time video(s) 210a, 210b that can be generated/transmitted by image processing device 101. For example, the image processing device 101 can generate/transmit any content 220 that is selected by the user, e.g. at a user interface of the image processing device 101, surgical system 100, and/or head mounted display 160.
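As a purely illustrative sketch of how a superposition video could be composed from a white light frame and a false-color fluorescence frame, the following Python snippet alpha-blends the two frames. The function name, the `alpha` weighting, and the frame format are assumptions for illustration and are not prescribed by this disclosure.

```python
import numpy as np

def superpose_fluorescence(white_light: np.ndarray,
                           fluorescence: np.ndarray,
                           alpha: float = 0.4) -> np.ndarray:
    """Alpha-blend a false-color fluorescence frame onto a white light frame.

    Both frames are assumed to be H x W x 3 uint8 arrays of equal size; the
    weighting `alpha` is illustrative only.
    """
    blended = (1.0 - alpha) * white_light.astype(np.float32) \
              + alpha * fluorescence.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Sketch of per-frame use inside a real-time loop:
# composed_frame = superpose_fluorescence(wl_frame, fl_frame)
```

In practice, such blending would be applied frame by frame to the real-time streams before (or after) transmission, depending on where the format is applied.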
The content 220 can be transmitted to each of a plurality of head mounted displays 160. In one case, the entirety of the content 220 (the selected content) is transmitted to each head mounted display 160. In such a case, it is possible that the format 230 of the content 220 is also transmitted to each head mounted display 160. For example, the format 230 is determined by user input from a first user, e.g. at a first head mounted display, the surgical system 100, the image processing device 101, and/or the surgical imaging device 150. The content 220 can be transmitted with the format 230. For example, a picture-in-picture format 230 is transmitted, the content 220 including a stereoscopic video in a main portion of the displayed content, and a stored image (e.g. a pre-op image) displayed in a smaller portion of the displayed content. In another example, a superposition format is transmitted, the content 220 including a superposition of a stereoscopic white light video and a fluorescence video.
In another case, the content 220 can be pieced out such that each head mounted display 160 may receive any portion of the content 220. For example, the content 220 includes a real time stereoscopic video and a fluorescence video. A first head mounted display can receive and display, in a first format 230a (e.g. a superposition format), the real time stereoscopic video and the real time fluorescence video (simultaneously); and a second head mounted display 160 can display only part of the content 220, e.g. the real time stereoscopic video. Alternatively/additionally, a second user at the second head mounted display can select the format 230b and/or content 220 for the second head mounted display. For example, the second user can select auxiliary data (e.g. vital signs) to be displayed in a picture-in-picture format with the real time stereoscopic video.
Each user, such as a first user and second user, can possibly determine the respective formats 230a, 230b, e.g. by user input at the respective head mounted displays 160. It is possible that there is a mode selection, e.g. at the image processing device 101, surgical system 100, and/or first head mounted display, that authorizes user input from each head mounted display 160 to be used to determine the respective content(s) and/or format(s) displayed at each respective head mounted display 160.
The content 220 can include real time data from an auxiliary data source 180. For example, vital sign data 210c can be selected as content 220 for display to one or more of the head
mounted displays 160. Vital sign data can include data such as heart rate, breathing rate, and/or blood pressure, for example. Data such as vital sign data can come from one or more auxiliary data sources 180, which may be communicatively coupled to the image processing device 101 and/or surgical system 100. The real time vital sign data and/or one or more of the auxiliary data source(s) 180 can be selected from a plurality of auxiliary data sources, e.g. as content to be displayed by at least one head mounted display 160.
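The representation of real-time vital sign data is not prescribed here; as one hedged example, a single reading from an auxiliary data source 180 might be wrapped as a selectable content item as sketched below (all field and function names are hypothetical, and `read_fn` stands in for whatever call the coupled device actually provides).

```python
from dataclasses import dataclass
from typing import Callable, Tuple
import time

@dataclass
class VitalSignSample:
    # Field names are illustrative; a real auxiliary device 180 may expose a
    # different interface (e.g. a serial, network, or HL7-style feed).
    heart_rate_bpm: float
    breathing_rate_bpm: float
    blood_pressure_mmhg: Tuple[float, float]   # (systolic, diastolic)
    timestamp_s: float

def poll_auxiliary_source(
        read_fn: Callable[[], Tuple[float, float, Tuple[float, float]]]
) -> VitalSignSample:
    """Wrap one reading from the auxiliary data source as a selectable content item.

    `read_fn` is a placeholder, not a real API of any particular device.
    """
    hr, br, bp = read_fn()
    return VitalSignSample(hr, br, bp, timestamp_s=time.time())
```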
Stored data can be, for example, patient data such as patient identifying data, weight, age, height, and/or images (e.g. pre-op images) that are stored on local and/or remotely located memory 120. Alternatively/additionally, the stored data can include a brain map.
A first selectable real time video 210a may be generated by a first sensor. Alternatively/additionally, a first and second sensor may be used to generate the real time video 210a. For example, the real time video is a stereoscopic video. In another example, the real time video is a fluorescence video. Alternatively/additionally, a second real time video 210b can be selected for display. For example, a first real time video 210a is a stereoscopic video, and a second real time video 210b is a fluorescence video. In another example, the real time video(s) is a superposition of a white light video and a fluorescence video.
The image processing device 101 and/or surgical system 100 can be capable of generating more or fewer real time videos. One or more sensors 170 may provide data for the image processing device 101 to generate real time video(s) 210a, 210b as selected by user(s).
The image processing device 101 can have multiple modes of operation which may be selected by user(s). For example, a first mode can be one in which a user interface receives, from a first user, input of the user selection of the content and/or the format. Each head mounted display 160 can receive the content 220 that is selected by the first user. In an example, the surgeon can be the first user who may have the ability to select and/or determine the content and/or format for all the head mounted displays 160. In a teaching environment, for example, having the ability of a single user to select/determine the displayed content can allow for efficient communication to the users. The user interface can be at the image processing device 101, the
surgical system 100, and/or at one of the head mounted displays 160, e.g. a first head mounted display which may be attached to the image processing device 101 and/or surgical system 100.
The format can be selectable from a plurality of possible formats including at least one of: a superposition, a picture in picture, a rotation, full-screen, portion-screen, zoom, stereoscopic, monoscopic, augmentation, left eye only, or right eye only.
In a second mode, the image processing device 101 and/or surgical system 100 can provide user interfaces at each head mounted display 160. Each user interface can receive a user selection for content and/or format for each respective head mounted display. In the second mode, the user selection can be received by the image processing device for determination of the respective content and/or the respective format for each head mounted display. The second mode can be helpful to provide flexibility in the type of content and/or format that is viewed by each user. For example, during surgery, tasks and/or responsibilities can be delegated to different medical professionals who may each be better able to perform his/her respective tasks or meet responsibilities by having access to different content and/or format.
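One possible way to organize the first and second modes in software is sketched below, assuming a simple routing object on the image processing device 101. The class names, attributes, and fallback behavior are assumptions for illustration, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Selection:
    content_ids: List[str]        # e.g. ["white_light_stereo", "vitals"]
    format_name: str              # e.g. "picture_in_picture", "superposition"

@dataclass
class DisplayRouter:
    mode: str = "first"                                   # "first" or "second"
    global_selection: Optional[Selection] = None          # chosen by the first user
    per_display: Dict[str, Selection] = field(default_factory=dict)

    def selection_for(self, hmd_id: str) -> Optional[Selection]:
        # First mode: every head mounted display receives the content/format
        # selected by the first user (e.g. the surgeon).
        if self.mode == "first":
            return self.global_selection
        # Second mode: each head mounted display may carry its own selection;
        # here it falls back to the first user's selection if none was made.
        return self.per_display.get(hmd_id, self.global_selection)
```

In this sketch, switching modes only changes which stored selection is consulted when content is routed to a given head mounted display.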
The surgical system 100 can be used for communication of the surgical procedure, for example. The detector 170 can capture a captured video of the surgical site in an operating room, and the system 100 can include an output 190 which outputs a real-time video to at least one remote user outside of the operating room. The real-time video can be based on the captured video, e.g. the real time video is processed from video captured by at least one sensor 170.
Fig. 3A illustrates a head mounted display. Features described with respect to the illustrated head mounted display 300 can be used in any head mounted display described herein. The head mounted display 300 described herein may be one of a plurality which may be coupled to the image processing device 101, surgical system 100, and/or surgical imaging device 150. The head mounted display can include at least one display, such as two displays 30, 40.
The head mounted display 300 can be configured for augmented reality (AR), mixed reality (MR), and/or virtual reality (VR). For example, the head mounted display 300 can have settings for selecting AR, MR, or VR. In AR, the head mounted display 300 passes ambient light so
that the user sees the environment and variable superimposed images generated at the display of the head mounted display.
When a plurality of head mounted displays 300 are used, it is possible for one or more to be located outside of the operating room in which the video of the surgical procedure is captured. This can be advantageous, for example, in teaching/training environments, and/or when a surgeon would like to consult with a colleague remotely, e.g. to discuss the content 220. This may improve patient outcome, e.g. by allowing for remote collaboration in real time during surgery. In environments in which there are multiple users (e.g. in teaching environments), having head mounted displays 300 for the users can improve the learning experience, e.g. by facilitating visualization, to each user, of the content 220, rather than having a single screen at which multiple viewers may limit each other's view. Head mounted displays 300 can also reduce the amount of space in the operating room used by panel displays.
The head mounted display(s) 300 can be communicatively coupled to any of the surgical imaging devices 150, surgical systems 100, and/or image processing devices 101 described herein. The head mounted display 300 of Fig. 3A includes a display 310 and a mounting structure 320 for mounting on the head of a user. The mounting structure can be a pair of legs, as shown. Alternatively/additionally, a head band can be used.
The head mounted display 300 can receive a real time video stream of a surgical site which is based on a video captured by a surgical imaging device 150.
The head mounted display 300 can display selectable content 220, e.g. selected from any one or more of real time video of a surgical site, real time data from an auxiliary data source, and stored data. The head mounted display 300 can receive user input, e.g. for determining the content 220. The user input can be made by a graphical user interface which may be displayed such as superimposed on the content 220. Alternatively/additionally, the user input can be determined by voice activation, e.g. through a microphone(s) 340 which may be integrated in the head mounted display 300. User input can be for determining the content 220, e.g. the content 220 transmitted and/or displayed at the head mounted display 300. For example, each head mounted display 300 may determine which content 220 and/or format to display.
A head mounted display 300 can be configured for audio communication by the microphone 340 and speaker 350, for example. Alternatively/additionally, the head mounted display 300 can communicatively couple to an audio communication device, such as an external device like an audio headset. The audio communication, e.g. the microphone 340 can also be used for selection (e.g. menu driven selection) of content 220 and/or format 230 of the displayed content 220. A menu, e.g. for user selection of content and/or format, can be provided visually (e.g. at the head mounted display 300) and/or audibly (e.g. at the speaker 350 of the head mounted display 300 and/or via a coupled audio communication device).
The content 220 can be selected, e.g. at the respective head mounted display(s) 300, from a plurality of video streams receivable from a surgical system 100 and/or image processing device 101. User input can be received, e.g. at each respective head mounted display 300, for determining the content 220 for the respective head mounted display 300 which receives the user input. It is possible to transmit the user input to the surgical system 100 and/or image processing device 101. The surgical system 100 and/or image processing device 101 can determine the content 220 (e.g. any one or more of the video streams 210a, 210b can be transmitted to the head mounted display(s) 300 as at least part of the content 220). A single user can determine the content 220 and/or format for all of the coupled head mounted display(s) 300. Alternatively, each user can determine content 220 and/or format at each head mounted display 300.
The format, e.g. of displayed content, can be selected from, for example, monoscopic display, stereoscopic display, picture-in-picture, or superpositional display.
The head mounted display(s) 300 can be communicatively coupled to one or more auxiliary devices 180. Data received from the auxiliary device can be displayed by the head mounted display(s) 300. The head mounted display(s) can be directly communicatively coupled to one or more auxiliary devices 180 or, for example, through the image processing device 101 and/or surgical imaging system 100.
The real time auxiliary data can include vital signs (e.g. heart rate, breathing rate, blood pressure). The auxiliary device(s) 180 can be directly coupled to the head mounted display(s) 300.
User input, e.g. audible input received by the microphone 340 and/or buttons on the head mounted display 300, can be used for operating the surgical imaging device 150. For example, different modes of the imaging device 150 can be activated or deactivated, such as fluorescence imaging modes, modes which utilize image guided surgery (IGS), and/or modes which include activation of a communication channel with auxiliary equipment, e.g. IGS equipment.
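For illustration only, a recognized voice command or button press at the head mounted display could be mapped to imaging device operations roughly as in the following sketch; the command strings and the `imaging_device` interface (`set_mode`, `open_channel`) are hypothetical and not defined by this disclosure.

```python
def dispatch_command(command: str, imaging_device) -> bool:
    """Map a user command from a head mounted display to an operation on the
    surgical imaging device. The command strings and the methods called on
    `imaging_device` are assumptions for illustration only."""
    handlers = {
        "fluorescence on":  lambda: imaging_device.set_mode("fluorescence", True),
        "fluorescence off": lambda: imaging_device.set_mode("fluorescence", False),
        "igs on":           lambda: imaging_device.set_mode("igs", True),
        "open igs channel": lambda: imaging_device.open_channel("igs"),
    }
    action = handlers.get(command.strip().lower())
    if action is None:
        return False          # unrecognized command; ignore
    action()
    return True
```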
As described herein, the head mounted display(s) 300 can display content 220 which is selectable from at least the real time video of the surgical site, the real time data from the auxiliary data source, and stored data. Alternatively/additionally, there can be more than one real time video which is selectable as content 220. For example, a second real time video, based on video captured by the surgical imaging device 150, can be selected as content 220 and displayed. The selectable content 220 can include real time video which is a processed video, e.g. a superposition image of a white light image and a fluorescence image. The content 220 can include, for example, real time video which is a stereoscopic image.
Fig. 3B illustrates a head mounted display. The head mounted display 300b described herein may be one of a plurality which may be coupled to the image processing device 101, surgical system 100, and/or surgical imaging device 150. Features described with respect to the illustrated head mounted display 300b can be used in any head mounted display described herein.
The head mounted display 300b can display content 220, e.g. content transmitted by the surgical imaging device 150, image processing device 101, and/or surgical imaging system 100.
The head mounted display 300b can include a display 10, e.g. for displaying the content 220. The head mounted display 300b can include a mounting structure 20 for mounting on a head of a user. The mounting structure 20 can fasten the head mounted display 300b to the head. The head mounted display 300b can include at least one adjuster 30, 50 for variable diopter(s). Each
adjuster(s) 30 can include one or more lenses 32, 34, 36. For example, at least one lens 32, 34, 36 of the adjuster 30 is movable so as to alter the effective diopter of the head mounted display 300b, e.g. for allowing a user to adjust the focus. The user can use the adjuster 30 to focus the image of the display 10 in the user’s eye. Alternatively/additionally, the adjuster may allow at least partially for some correction of the user’s myopia or hyperopia. For example, the diopter adjustment by the adjuster 30 can allow the user to better focus the image plane of the ambient image (from the surgical site, for example) on the user’s eye. The adjuster 30 can allow the image plane of the user’s field of view to be moved so as to allow better focus.
The head mounted display 300b can include at least one adjuster 30 (such as one or two) for the adjustment/correction of diopter (such as for each eye).
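For context, the standard thin-lens relations that underlie such a diopter adjustment can be written as follows; these are general textbook formulas, not specific values of the adjuster 30.

```latex
% Refractive power (in diopters) of a lens or lens group with focal length f in metres:
D = \frac{1}{f}
% Effective focal length of two thin lenses f_1, f_2 separated by a distance d:
\frac{1}{f_{\mathrm{eff}}} = \frac{1}{f_1} + \frac{1}{f_2} - \frac{d}{f_1 f_2}
```

Because the combined power depends on the separation d, translating one lens of the adjuster 30 along the optical axis changes the effective diopter seen by the user, which is consistent with the focus adjustment described above.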
In an embodiment, at least three lenses of the adjuster can be between the display 10 and the eye of the user. Three or more lenses can allow for a suitable range of correction (e.g. diopter adjustment) and/or magnification. One of the lenses can be an aspheric lens. By using at least one aspheric lens, the size and weight of the optical arrangement may be significantly reduced in comparison to a system using only spherical lenses.
For example, the first optical arrangement 30 comprises a first lens 32, a second lens 34 and a third lens 36. The first lens 32 may be the lens of the three lenses closest to the first display 10. The second lens 34 may be arranged between the first lens 32 and the third lens 36. The first optical arrangement 30 may comprise exactly three lenses or may comprise more than three lenses. The three lenses may be glass lenses or may be made of other suitable material.
The aspheric lens of the three lenses may be the first, second or third lens. For example, the aspheric lens may be the second lens 34 while the first lens 32 and the third lens may be spherical lenses. Alternatively, all three lenses may be aspheric lenses. In this way, size and weight of the first optical arrangement 30 may be kept low.
For example, each lens of the three lenses may comprise a first surface and a second surface. The surfaces of the three lenses may represent or form a sequence of surfaces. For example, the first surface of the first aspheric lens may be a first spherical surface, the second surface of the
first aspheric lens may be a first aspherical surface, the first surface of the second aspheric lens may be a second spherical surface, the second surface of the second aspheric lens may be a second aspherical surface, the first surface of the third aspheric lens may be a third spherical surface and the second surface of the third aspheric lens may be a third aspherical surface. The sequence of surfaces may comprise a first spherical surface followed by a first aspherical surface followed by a second spherical surface followed by a second aspherical surface followed by a third spherical surface followed by a third aspherical surface.
For example, each lens of the three lenses comprises a different glass material. Three different glass materials may be used for the three lenses. For example, the first lens may comprise or consist of a first glass material, the second lens may comprise or consist of a second glass material and the third lens may comprise or consist of a third glass material. For example, the first glass material, the second glass material and the third glass material are three different glass materials.
For example, the first lens 32 may be a positive lens and/or an aspheric lens. A focal length of the first lens 32 may be at most 25mm (or at most 20mm or at most 30mm) and/or at least 15mm (or at least 10mm or at least 20mm).
For example, the second lens 34 may be a negative lens and/or an aspheric lens. A focal length of the second lens 34 may be at most -15mm (or at most -20mm or at most -13mm) and/or at least -5mm (or at least -10mm or at least -3mm).
For example, the third lens 36 may be a positive lens and/or an aspheric lens. A focal length of the third lens 36 may be at most 20mm (or at most 25mm or at most 17mm) and/or at least 10mm (or at least 13mm or at least 7mm).
By using one or more of the above parameters, a desired viewing angle, overall size, weight and/or exit pupil size may be obtained.
For example, one or more of the three aspheric lenses may be free form lenses. In this way, the size and weight may be further reduced.
A total weight of the three lenses (e.g. first lens 32, second lens 34 and third lens 36) may be at most 30g (or at most 25 g, at most 20 g or at most 35 g). A diameter of each lens of the three lenses (e.g. first lens 32, second lens 34 and third lens 36) may be at most 25mm (or at most 20mm or at most 30mm). A total focal length of the adjuster 30 may be at most 30mm (or at most 35mm or at most 25mm) and/or at least 15mm (or at least 10mm or at least 20mm). A total optimal distance between the first display 10 and an eye of a user caused by the adjuster 30 may be at most 60mm (or at most 70mm, at most 55mm or at most 50mm). By implementing the adjuster 30 with one or more of the mentioned parameters, size and/or weight of the first optical arrangement 30 and the head mounted display 300b may be kept low.
A display diagonal of the display 10 may be at least 125 mm (or at least 150 mm) and/or at most 250 mm (or at most 200 mm). In this way, a sufficiently large image can be displayed while the weight may be kept low.
The head mounted display 300b may include a second display and a second adjuster. For example, there may be a display and/or adjuster for each eye. In another example, a single display spans across the fields of view of each eye, and each eye has a corresponding adjuster 30 for the diopter adjustment. The head mounted display 300b can include at least one adjuster 30, 50 for the adjustment/correction of diopter.
More details and aspects of the head mounted display 300b are mentioned in connection with one or more of the examples described above or below (e.g. Figs. 1-4). The head mounted display 300b can include one or more additional optional features corresponding to one or more aspects of any of the examples described herein.
Fig. 3C illustrates schematically a head mounted display 300c. Features described with respect to the illustrated head mounted display 300c can be used in any head mounted display described herein. The head mounted display 300c described herein may be one of a plurality which may be coupled to the image processing device 101, surgical system 100, and/or surgical imaging device 150.
The head mounted display 300c can include an adjuster 310 (e.g. an interpupillary adjuster) for adjusting the interpupillary distance (IPD). The adjuster can adjust the relative positions of lenses and/or displays 10, 40 (e.g. in a direction substantially parallel to the direction of a line connecting the pupils of a user).
The head mounted display 300c can include one or more adjusters 320, 330 for the diopter adjustment. The adjuster(s) 320, 330 for the diopter adjustment can include respective lenses in the optical paths between the user’s respective eyes and respective displays 10, 40. The diopter adjuster(s) 320, 330 can allow the user to improve the focus of the displays 10, 40 of the head mounted display 300c. The adjusters 320, 330 can include respective optics.
For example, the head mounted display 300c can include a first adjuster for variable interpupillary distance and/or a second adjuster for variable diopter, and possibly a third adjuster for variable diopter of a second eye. It is possible to adjust the focus of the surgical imaging device 150 such that the third adjuster is not strictly necessary for one user. When multiple users each have a respective head mounted display 300c, it can be advantageous for each user to have the capability of adjusting diopter for each eye.
The display(s) 10, 40 may be an LCD display (liquid crystal display), a TFT display (thin-film transistor display) or an OLED display (organic light-emitting diode display).
As mentioned herein, the head mounted display(s) 300 can have multiple modes. For example, a first mode can be that each head mounted display 300 receives the content 220 selected by a first user, e.g. the surgeon. This can be convenient particularly when the surgeon is communicating with other users, e.g. remote collaborators and/or students.
A second mode can be one in which, at each head mounted display, a respective user interface is provided. Each respective user interface can be configured for receiving a user selection for selection of at least one of a mode, a respective content, or a respective format for the respective head mounted display. A second user, for example, can select the mode in which the content is determined by the first user (e.g. the surgeon). Alternatively, a user can select a mode in
which the content is determined by the respective user. The respective user can select the content 220 and/or the format.
Examples of various formats which can be selected are a superposition, a picture in picture, full-screen, portion-screen, zoom, stereoscopic, monoscopic, augmentation, left eye only, or right eye only.
For example, when a user at a head mounted display 300 makes a selection, the selection can be transmitted to the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100 for transmission of the respective content 220 to the respective head mounted display 300 or for transmission of the respective format to the respective head mounted display. In another example, the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100 transmits the selected respective content 220 to the respective head mounted display 300, and the head mounted display 300 controls the format.
For example, a user may wish to view the real time video of the white light image of the surgical site as the main portion of the screen, and to view, as a smaller picture in picture display, another source (such as video of an OCT image, a stored image, or the like). The content 220 can be transmitted to the head mounted display 300. In one case, the format, e.g. the picture-in-picture selection, can be transmitted by the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100 to the respective head mounted display 300. In another case, the format can be determined at the head mounted display 300.
Other format selections are also possible. Below, the two cases are further explained with respect to another example.
In one case, the content 220 and the format may be transmitted, e.g. after a selection of picture-in-picture format, from the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100 to the respective head mounted display 300. The respective head mounted display 300 may display the content 220 and format as received from the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100. In another example, the content and format are transmitted by the image processing device 101
including the superposition format of a stereoscopic white light image and a false-color fluorescence image. These cases illustrate that the content and format can be transmitted to the head mounted display(s) 300.
In another case, the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100 transmits the selected content 220, and the headset 300 determines the format (e.g. after selection of picture in picture or superposition formats).
For example, the image processing device 101 transmits a real time stereoscopic video of a white light image of the surgical site, and a real time video of a fluorescence image. The user, at the head mounted display 300, can change the format without the selection of format being transmitted to the image processing device 101. For example, the user can change from a picture-in-picture format to a superposition format.
When a user selects a format (first case), the format selection can be done by transmitting the selected format to the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100; the content 220 and format are transmitted by the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100 to the head mounted display(s) 300.
In another example (second case), the content 220 can possibly be continuously transmitted from the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100 to the head mounted display(s) 300. The user may select different formats of the content 220, e.g. without transmitting the selection of the format to the image processing device 101, surgical imaging device 150, and/or surgical imaging system 100.
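A minimal sketch contrasting the two cases, assuming a simple frame-based interface; all object and method names here (e.g. `formatted_stream`, `raw_streams`, `blend`, `inset`) are hypothetical placeholders rather than an actual API.

```python
# Case 1: the head mounted display transmits its selection upstream; the image
# processing device composites the content in the selected format and streams
# already-formatted frames back to that display.
def run_case_one(hmd, image_processing_device, selection):
    image_processing_device.receive_selection(hmd.id, selection)
    for frame in image_processing_device.formatted_stream(hmd.id):
        hmd.show(frame)

# Case 2: the image processing device continuously streams the selected content
# (e.g. a white light video and a fluorescence video); the head mounted display
# applies the format locally and can switch, for example, from picture-in-picture
# to superposition without sending any message upstream.
def run_case_two(hmd, image_processing_device, local_format):
    for wl_frame, fl_frame in image_processing_device.raw_streams():
        if local_format == "superposition":
            hmd.show(hmd.blend(wl_frame, fl_frame))
        elif local_format == "picture_in_picture":
            hmd.show(hmd.inset(main=wl_frame, small=fl_frame))
```

The trade-off sketched here is bandwidth and latency versus flexibility: the first case sends only the formatted result, while the second case sends all selected streams so that each head mounted display can reformat them locally.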
Each of the head mounted displays 300, 300b, 300c described herein includes features which can be used in the other head mounted displays 300, 300b, 300c. For example, each can have features for user input, displays, and/or adjusters for IPD and/or diopter(s). Each can be communicatively coupled to auxiliary devices 180 and/or image processing devices 101. The coupling may be wired and/or wireless.
Fig. 4 illustrates a method of communicating a surgical procedure. The method 400 can include capturing 410, using a detector 170, a captured video of a surgical site in an operating room, and outputting 420 a real time video to at least one remote user outside of the operating room (e.g. outputting 420 to a head mounted display 300 which is outside of the operating room). The real time video is based on the captured video.
The method 400 can be done by an apparatus, such as the surgical system 100 including the surgical imaging device 150 and image processing device 101. The surgical imaging device can be a stereomicroscope. The apparatus can include a head mounted display for displaying the real time video to a user(s) in the operating room, e.g. a surgeon. The output can include the real time video displayed to the user(s) in the operating room. Any of the surgical imaging devices 150, image processing devices 101, and/or surgical systems 100 described herein can be configured, such as by a computer program stored in memory 120, to perform the method 400.
An apparatus for performing the method 400 can include at least one detector 170 for capturing a captured video of a surgical site in an operating room, and an output configured to output 420 the real time video. A second detector, for example, can be for generating the real time video or a second real time video. Any one or more of the real time videos generated can be a stereoscopic video. Alternatively/additionally, a stereoscopic video can be generated at a user, e.g. at a remotely located head mounted display 300, from two or more real time videos.
The apparatus may communicatively couple to one or more auxiliary data sources 180. The method 400 can include receiving a real-time vital sign data from the auxiliary data source(s) 180.
The apparatus can determine the content 220 for output, and transmit the content to the remote user(s). The content can be selectable from at least one of: the real time video, real time data from an auxiliary data source, or stored data. Alternatively/additionally, the content includes the real time video and a selectable content of at least one of: real time data from an auxiliary data source, or stored data.
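As one possible arrangement of method 400, a frame-based capture/process/output loop is sketched below; all names are placeholders for illustration rather than an actual API of the surgical system.

```python
def communicate_procedure(detector, image_processor, content_selector, outputs):
    """Sketch of method 400: capture video of the surgical site (410) and output
    a real time video, based on the captured video, to users including at least
    one remote user outside the operating room (420). All names are placeholders."""
    while True:
        captured_frame = detector.capture_frame()             # capturing 410
        real_time_frame = image_processor.process(captured_frame)
        content = content_selector(real_time_frame)           # real time video, auxiliary data, stored data
        for output in outputs:                                # outputting 420, incl. remote head mounted displays
            output.send(content)
```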
Some or all of the method steps described herein may be executed by (or using) a hardware apparatus, like for example, a processor, a microprocessor, a programmable computer or an electronic circuit.
The methods described herein can be implemented in hardware or in software. The implementation can be performed using a non-transitory storage medium such as a digital storage medium, for example a floppy disc, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. The digital storage medium may be computer readable.
Some embodiments include a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
Embodiments described herein can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may, for example, be stored on a machine readable carrier.
Other embodiments include the computer program for performing one of the methods described herein, stored on a machine readable carrier.
Herein is disclosed a computer program having a program code for performing the methods described herein, when the computer program runs on a computer. Herein is disclosed a computer program configured to operate any one or more of the head mounted display(s) described herein, the image processing device 101 described herein, the surgical system 100 described herein, and/or the surgical imaging device 150 described herein.
Herein is disclosed a storage medium (or a data carrier, or a computer-readable medium) comprising, stored thereon, the computer program for performing the methods described herein
when it is performed by a processor. Herein is disclosed an apparatus as described herein comprising a processor and the storage medium for executing the methods described herein.
Herein is disclosed a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may, for example, be configured to be transferred via a data communication connection, for example, via the internet.
Herein is disclosed a processing means, for example, a computer or a programmable logic device, configured to, or adapted to, perform the methods described herein.
Herein is disclosed a computer having installed thereon the computer program for performing the methods described herein.
In some embodiments, a programmable logic device (for example, a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. The methods described herein are preferably performed by any hardware apparatus.
Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method. Analogously, aspects described in the context of a method step also represent a description of a corresponding apparatus.
Claims
1. A plurality of head mounted displays (160, 300), configured for: receiving a real-time video stream of a surgical site which is based on a video captured by a surgical imaging device (150).

2. The plurality of head mounted displays (160, 300) of claim 1, configured for: receiving real-time auxiliary data from an auxiliary device (180).

3. The plurality of head mounted displays (160, 300) of any preceding claim, wherein the real-time auxiliary data includes vital signs (210c).

4. The plurality of head mounted displays (160, 300) of any preceding claim, wherein a content (220) for display is selectable from at least: the real time video of the surgical site, the real time data from the auxiliary device (180), or stored data.

5. The plurality of head mounted displays (160, 300) of any preceding claim, configured for: directly coupling to the auxiliary device (180) for receiving the real-time auxiliary data.

6. The plurality of head mounted displays (160, 300) of any one of claims 4-5, wherein the content (220) is further selectable from a second real-time video stream which is based on video captured by the surgical imaging device (150).

7. The plurality of head mounted displays (160, 300) of any preceding claim, wherein the real time video is a superposition image of a white light image and a fluorescence image.

8. The plurality of head mounted displays (160, 300) of any preceding claim, wherein the real time video is a stereoscopic image.

9. The plurality of head mounted displays (160, 300) of any preceding claim, wherein each head mounted display includes at least one of: a first adjuster (310) for variable interpupillary distance or a second adjuster (30) for variable diopter.
10. The plurality of head mounted displays (160, 300) of any preceding claim, comprising at least one head mounted display which is remotely located, outside of an operating room in which the video is captured.
11. The plurality of head mounted displays (160, 300) of any preceding claim, configured to provide: a first mode in which: each head mounted display receives the content (220) selected by a first user.
12. The plurality of head mounted displays (160, 300) of any one of claims 1-10, configured to provide: a second mode in which: at each head mounted display, a respective user interface is provided, and each respective user interface is configured for receiving a user selection for selection of at least one of a mode, a respective content (220), or a respective format (230) for the respective head mounted display.
13. The plurality of head mounted displays (160, 300) of claim 12, wherein the format (230) is selectable from a plurality of possible formats including at least one of: a superposition, a picture in picture, full-screen, portion-screen, zoom, stereoscopic, monoscopic, augmentation, left eye only, or right eye only; wherein augmentation is a superposition of an image on a semitransparent display, the semitransparent display passing ambient light to a user.
14. The plurality of head mounted displays (160, 300) of claim 12 or 13, configured for: transmission of the user selection to an image processing device (101) for
transmission of the respective content (220) to the respective head mounted display or transmission of the respective format (230) to the respective head mounted display.
15. The plurality of head mounted displays (160, 300) of any preceding claim, wherein each head mounted display is configured for audio communication by at least one of: communicatively coupling to audio communication devices, or by a microphone (340) and a speaker (350) included in each head mounted display.
16. A surgical system (100), comprising: the plurality of head mounted displays (160, 300) of any one of claims 1-15, and a surgical imaging device (150) configured to capture the video.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102021125419 | 2021-09-30 | ||
DE102021125419.1 | 2021-09-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023052535A1 true WO2023052535A1 (en) | 2023-04-06 |
Family
ID=84053365
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2022/077170 WO2023052535A1 (en) | 2021-09-30 | 2022-09-29 | Devices and systems for use in imaging during surgery |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023052535A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200246081A1 (en) * | 2018-02-19 | 2020-08-06 | Globus Medical, Inc. | Augmented reality navigation systems for use with robotic surgical systems and methods of their use |
WO2021168449A1 (en) * | 2020-02-21 | 2021-08-26 | Raytrx, Llc | All-digital multi-option 3d surgery visualization system and control |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10197803B2 (en) | Augmented reality glasses for medical applications and corresponding augmented reality system | |
US20230122367A1 (en) | Surgical visualization systems and displays | |
US20230255446A1 (en) | Surgical visualization systems and displays | |
US20220054223A1 (en) | Surgical visualization systems and displays | |
US10716460B2 (en) | Stereoscopic video imaging and tracking system | |
US11154378B2 (en) | Surgical visualization systems and displays | |
US4395731A (en) | Television microscope surgical method and apparatus therefor | |
US12062430B2 (en) | Surgery visualization theatre | |
EP2903551B1 (en) | Digital system for surgical video capturing and display | |
WO2021226134A1 (en) | Surgery visualization theatre | |
US20240266033A1 (en) | Surgery visualization theatre | |
Mueller-Richter et al. | Possibilities and limitations of current stereo-endoscopy | |
JP2004320722A (en) | Stereoscopic observation system | |
WO2023052535A1 (en) | Devices and systems for use in imaging during surgery | |
WO2023052566A1 (en) | Devices and systems for use in imaging during surgery | |
WO2023052474A1 (en) | Devices and systems for use in imaging during surgery | |
JP2023542384A (en) | Microsurgical aid device | |
Southern et al. | Video microsurgery: early experience with an alternative operating magnification system | |
WO2024202956A1 (en) | Medical data processing device and medical system | |
US20240090742A1 (en) | Portable surgical methods, systems, and apparatus | |
US20230179755A1 (en) | Stereoscopic imaging apparatus with multiple fixed magnification levels | |
EP4146115A1 (en) | Surgery visualization theatre |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22799884 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22799884 Country of ref document: EP Kind code of ref document: A1 |