WO2019168973A1 - Robotic stereotaxic platform with computer vision
- Publication number: WO2019168973A1 (PCT application PCT/US2019/019818)
- Authority: WIPO (PCT)
- Prior art keywords: animal subject, skull, brain, stereotaxic, platform
Classifications
- A61B5/1128—Measuring movement of the entire body or parts thereof using image analysis
- A61B6/032—Transmission computed tomography [CT]
- A61B6/0492—Positioning of patients using markers or indicia for aiding patient positioning
- A61B6/501—Radiation diagnosis apparatus specially adapted for diagnosis of the head, e.g. neuroimaging or craniography
- A61B6/508—Radiation diagnosis apparatus specially adapted for non-human patients
- A61B6/5247—Processing of medical diagnostic data combining image data from an ionising-radiation diagnostic technique and a non-ionising radiation diagnostic technique, e.g. X-ray and ultrasound
- A61B2503/40—Evaluating a particular growth phase or type of persons or animals: animals
- A61B2505/05—Evaluating, monitoring or diagnosing in the context of surgical care
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
- A61B5/055—Detecting by means of magnetic resonance, e.g. magnetic resonance imaging [MRI]
Definitions
- Various embodiments of the present technology generally relate to stereotaxic devices. More specifically, some embodiments relate to a robotic stereotaxic platform with computer vision to automatically and accurately position an animal subject into the proper position for a surgical procedure.
- The brain is not a uniform organ but consists of multiple distinct regions that perform distinct information processing tasks. Testing the function of or experimentally manipulating these regions requires precise surgical placement of electrodes, injection pipettes, or optical fibers within the desired regions. In typical animal models (e.g., rodents), the required spatial accuracy is in the hundreds or even tens of micrometers. Stereotaxic (or stereotactic) surgery for small animals is an indispensable tool for systems neuroscience studies. Neuroscience behavioral animal studies often require injecting tracers, viral constructs, pharmaceutical agents, or fluorescent dyes into specific brain regions within the animal's skull.
- Fig. 1 illustrates an example of a robotic stereotaxic platform with computer vision utilizing two cameras in which some embodiments of the present technology may be utilized.
- Fig. 2 illustrates an example of a robotic stereotaxic platform with computer vision utilizing one camera to position an animal subject and command a surgical instrument to perform a surgical procedure in accordance with one or more embodiments of the present technology.
- Fig. 3A illustrates an example alignment diagram for a positioning platform in which some embodiments of the present technology may be utilized.
- Figs. 3B-3D illustrate examples of positioning platforms that may be used in one or more embodiments of the present technology.
- Fig. 4 illustrates a set of components within a robotic stereotaxic platform with computer vision in which some embodiments of the present technology may be utilized.
- Fig. 5 is a flowchart illustrating a set of operations for performing a surgical procedure on an animal subject in accordance with one or more embodiments of the present technology.
- Fig. 6 is a flowchart illustrating an alternative set of operations for performing a surgical procedure on an animal subject in accordance with one or more embodiments of the present technology.
- Fig. 7 is a sequence diagram illustrating an example of the data flow between the various components of an automated robotic stereotaxic platform in accordance with various embodiments of the present technology.
- Fig. 8 is a sequence diagram illustrating an alternative example of the data flow between the various components of a robotic stereotaxic platform, wherein the components include a graphical user interface, a 3D computer vision, system, a reconstruction module, and a positioning platform, in accordance with various embodiments of the present technology.
- Fig. 9 is a flowchart illustrating a series of steps performed by a 3D computer vision system in accordance with various embodiments of the present technology.
- Fig. 10A illustrates projecting multiple visual patterns onto an animal subject and capturing the images using the left camera in accordance with various embodiments of the present technology.
- Fig. 10B illustrates projecting multiple visual patterns onto an animal subject and capturing the images using the right camera in accordance with one or more embodiments of the present technology.
- Figs. 11A-11B illustrate example views of a skull in accordance with various embodiments of the present technology.
- Figs. 11C-11E illustrate examples of various views of a reconstructed 3D skull profile in accordance with one or more embodiments of the present technology.
- Fig. 12A illustrates a 2D optical image of a gerbil skull taken by a camera from the right in accordance with various embodiments of the present technology.
- Fig. 12B illustrates a top view of a reconstructed 3D profile of a gerbil skull in accordance with one or more embodiments of the present technology.
- Fig. 12C illustrates a top-right view of a reconstructed 3D profile of a gerbil skull in accordance with some embodiments of the present technology.
- Fig. 12D illustrates a top-left view of a reconstructed 3D profile of a gerbil skull in accordance with some embodiments of the present technology.
- Fig. 13A illustrates an example of the left camera view of two fixed test objects in accordance with various embodiments of the present technology.
- Fig. 13B illustrates an example of the right camera view of two fixed test objects in accordance with some embodiments of the present technology.
- Figs. 14A-14C illustrate an example of measuring 3D reconstruction error for the two fixed test objects as shown in Fig. 13A and Fig. 13B that may be used for one or more embodiments of the present technology.
- Fig. 15A illustrates an example of measuring 3D reconstruction error in the left view for a profile of a gerbil skull that may be used for one or more embodiments of the present technology.
- Fig. 15B illustrates an example of measuring 3D reconstruction error in the right view for a profile of a gerbil skull in accordance with various embodiments of the present technology.
- Fig. 16 is a block diagram illustrating an example machine representing the computer systemization that may be used in some embodiments of the present technology.
- Various embodiments of the present technology generally relate to stereotaxic devices. More specifically, some embodiments relate to a six-degree-of-freedom robotic stereotaxic platform with a three-dimensional (3D) computer vision system to automatically and accurately position an animal subject into the proper position to perform a surgical procedure that involves inserting a probe or other device into the brain or other part of the body of an animal subject.
- Traditional stereotaxic systems use manual mechanical alignment that is slow and makes it difficult to achieve the required accuracy to successfully perform a desired surgical procedure.
- various embodiments provide systems and techniques for a robotic stereotaxic platform that allow faster and more accurate automated placement of an animal subject, resulting in an improved surgical procedure success rate.
- an animal subject may first be anesthetized with any type of anesthesia, for example a ketamine-xylazine mixture, and given a maintenance dose following complete anesthesia to maintain the anesthetized state.
- the fur over the skull may be shaved off and the underlying skin may be sanitized with ethanol. Skin and muscle overlying the skull can be cut away allowing a craniotomy, a craniectomy, or other procedure to be made to the skull using a surgical instrument, such as a dental drill.
- the animal subject may be secured to a positioning platform comprising a movable linear plate using a head immobilization device, such as ear bars that push against the animal subject’s ears, a bite bar attached to the teeth of the animal subject, a head post that may be attached to the exposed animal subject’s skull, possibly using dental cement, and/or another immobilization device.
- a heating pad may be used to maintain the animal subject’s proper internal temperature during the surgical procedure.
- a 3D computer vision system can then be used to construct a 3D contour map of the skull to locate the position of the animal subject secured to the movable linear plate of a positioning platform, provide visual feedback during positioning of the animal subject, and confirm that the animal subject is moved into the correct final position.
- the 3D computer vision system can be realized by one or more techniques, including structural illumination, light-field, time-of-flight, structured-laser-light-based 3-D scanning, a projected light stripe system, or simultaneous localization and mapping, as well as other techniques.
- the 3D computer vision system consists of a video projector and one or more computer cameras.
- Data acquisition performed by the 3D computer vision system can include both a scanning procedure and a 3D reconstruction routine.
- During the scanning procedure, a sequence of spatially structured patterns, consisting of vertical or horizontal monochromatic stripes of increasing spatial frequency, can be projected onto the animal skull by the video projector.
- the stripes are laterally distorted proportional to the vertical displacement of the skull surface.
- the computer cameras can be used to capture the overlaid structured patterns. Upon collecting the images captured by the cameras, the images can be combined to create a unique binary code for each point on the surface of the entire animal skull.
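- As an illustration of this coding scheme, the following sketch generates binary-reflected Gray-code stripe patterns of increasing spatial frequency and combines thresholded captures into one integer code per pixel. The pattern width, bit depth, and threshold are illustrative assumptions, not values taken from the patent.

```python
# Hypothetical sketch of binary structured-light coding (not necessarily
# the patent's exact implementation).
import numpy as np

def gray_code_patterns(width: int, n_bits: int) -> np.ndarray:
    """n_bits vertical-stripe patterns of increasing spatial frequency."""
    cols = np.arange(width, dtype=np.uint32)
    gray = cols ^ (cols >> 1)                      # binary-reflected Gray code
    shifts = np.arange(n_bits - 1, -1, -1)[:, None]
    bits = (gray[None, :] >> shifts) & 1           # shape (n_bits, width)
    return (bits * 255).astype(np.uint8)

def decode_codes(captured: np.ndarray, threshold: int = 127) -> np.ndarray:
    """captured: (n_bits, H, W) camera images of the projected patterns.
    Returns an (H, W) array with one integer code per pixel. Codes from
    vertical stripes are unique per projector column; combining them with
    codes from horizontal stripes yields a unique code per point."""
    bits = (captured > threshold).astype(np.uint32)
    codes = np.zeros(captured.shape[1:], dtype=np.uint32)
    for b in bits:                                 # most significant bit first
        codes = (codes << 1) | b
    return codes

patterns = gray_code_patterns(width=1024, n_bits=10)
```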
- structural illumination allows 3D reconstruction profiles to be created on monotone low-contrast surfaces.
- a projector projects multiple visual patterns onto an animal subject, e.g., a skull, and one or more cameras can capture 2D images of each of the visual patterns.
- a light and a rod may be used to create the visual pattern.
- the light bends around the animal subject to allow the 2D images to be used to reconstruct a 3D skull profile of the animal subject based on geometric triangulation.
- each part of the image may be encoded using sixteen binary bits, and triangulation between the right and left views may be used to determine the depth of each point in the image.
- Encoding strategies include but are not limited to Fourier methods, binary boundary based methods, and line shifting.
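- The depth computation itself can be written compactly once the views are calibrated. The sketch below is a minimal, hypothetical example using OpenCV's triangulation routine; the 3x4 projection matrices would come from a prior camera calibration, and the numeric values here (a 50 mm baseline, sample pixel coordinates) are placeholders.

```python
# Minimal triangulation sketch: given matched pixel coordinates from the
# left and right views (e.g., pixels sharing the same structured-light
# code), recover the 3D point by geometric triangulation.
import numpy as np
import cv2

def triangulate(P_l, P_r, pts_l, pts_r):
    """pts_l, pts_r: 2xN float32 arrays of matched pixel coordinates."""
    X_h = cv2.triangulatePoints(P_l, P_r, pts_l, pts_r)   # 4xN homogeneous
    return (X_h[:3] / X_h[3]).T                           # Nx3 Euclidean points

P_l = np.hstack([np.eye(3), np.zeros((3, 1))])            # left camera at origin
P_r = np.hstack([np.eye(3), [[-50.0], [0.0], [0.0]]])     # assumed 50 mm baseline
pts_l = np.float32([[512.0], [384.0]])                    # one matched point
pts_r = np.float32([[498.0], [384.0]])
print(triangulate(P_l, P_r, pts_l, pts_r))
```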
- the 3D reconstruction profiles may be used to accurately determine the position of the animal subject's skull relative to the movable linear plate of the positioning platform. This information may then be used to align the skull either a) according to standard landmarks such as Bregma and Lambda, or b) by aligning the skull profile with a computerized tomography ("CT") scan of the animal subject. CT scans of the animal subject may be used to show the location of bones under the animal subject's skin and to identify markers on the bones.
- Bregma and Lambda are commonly used landmarks on the skull relied on for aligning targets in the brain. Bregma, the most prominent landmark, is the intersection of the sagittal and coronal sutures of the skull.
- Lambda is located posterior to Bregma, at the intersection of the sagittal and the lambdoid sutures of the skull. Spatial and rotational displacements can be calculated based on Bregma and Lambda and used to guide stereotaxic platforms into alignment. In some embodiments of the present technology, the Bregma and Lambda landmarks can be precisely located based on the 3D reconstruction and used for precise alignment of the skull in 3D space.
- the 3D computer vision system may use one or more conventional cameras to capture the 2D images, a 3D camera or scanner may be used, or another image capture device may be used.
- a camera may be attached to a robotic arm that can move the camera into multiple positions to capture multiple images of the test patterns.
- a camera mounted on a movable stage or a robotic arm can be used to estimate the 3D location of the skull profile based on relative movements between the camera and the skull, as in simultaneous localization and mapping.
- the positioning platform can be utilized to reposition the animal subject into the proper position for the surgical procedure.
- the positioning platform should be able to move with six degrees of freedom.
- six degrees of freedom may be accomplished by utilizing six servo or stepper motors or translational shafts.
- Each servo motor is secured to a base plate and is connected to the movable linear plate of the positioning platform using a rod and can move independently of the other servo motors to position an animal subject into the proper position.
- Actuating the servo motors allows the animal subject to follow a calculated trajectory in both position and orientation from the initial position to the final proper position. Actuation of the servo motors may be performed by the vision feedback motion controller of the robotic stereotaxic platform, programmatically under the control of a user through a user interface, or by another means.
- the positioning platform can be built using conventional translational shafts.
- the positioning platform can be built by separating the translational motions from the rotational motions using separate translational stages and rotational platforms.
- the current position of the animal subject on the movable linear plate of the positioning platform may be described using six parameters that correspond to the three linear movements (lateral, longitudinal, and vertical), and the three rotations (roll, pitch, and yaw).
- the initial position may be described by (x_i, y_i, z_i, r_i, p_i, w_i).
- the final position may be determined based on user selection of a desired brain nucleus and the position of the animal subject on the movable linear plate of the positioning platform.
- the desired position may be described by (x_t, y_t, z_t, r_t, p_t, w_t) or by its relation to brain atlas coordinates.
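- As a concrete illustration (not from the patent itself), the six pose parameters can be packed into a 4x4 homogeneous transform, which makes it straightforward to compose motions or interpolate a trajectory from the initial to the target position; the Z-Y-X (yaw-pitch-roll) rotation order used here is an assumption.

```python
# Hypothetical helper: convert the six-parameter pose description
# (x, y, z, roll, pitch, yaw) into a homogeneous transform. The rotation
# convention (yaw about z, then pitch about y, then roll about x) is an
# assumed choice; any consistent convention would serve.
import numpy as np

def pose_to_matrix(x, y, z, roll, pitch, yaw):
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

T_init   = pose_to_matrix(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
T_target = pose_to_matrix(1.5, -0.8, 2.0, 0.01, 0.02, 0.0)  # mm, radians
```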
- the positioning platform may be controlled to guide the animal subject from the initial position to the final position required by the surgical procedure.
- the control of the positioning platform may be automatic or a user may use an interface to control the position of the movable linear plate of the positioning platform.
- the positioning platform and 3D computer vision system may perform simultaneous localization and mapping ("SLAM") while moving the animal subject, the surgical instrument, the camera(s), and/or other components of the robotic stereotaxic platform.
- the final position of the animal subject may be based on alignment of a brain nucleus of the animal subject.
- a graphical user interface may allow a user to select a desired brain nucleus from a magnetic resonance imaging (“MRI”) image, a stereotaxic brain atlas, or other image source.
- the stereotaxic brain image may be fused to the CT scan of the animal subject to accurately determine the location of the brain nucleus within the skull.
- a skull marker like Lambda, Bregma, Intra Aural Line, another marker or combination of markers, may be used to determine the position of the brain within the animal subject’s skull.
- the CT scan may also be aligned to the 3D reconstruction to provide an external reference to the brain nucleus. Once a desired brain nucleus is selected, the user may initiate the robotic stereotaxic platform to place the animal subject into the proper final position for the surgical procedure.
- the animal subject is stationary, and a surgical robotic arm may guide, with high accuracy, a surgical instrument to advance an electrode, fiber, or other tool into the animal subject’s brain to the user selected brain nucleus or desired area.
- a positioning platform moves the animal subject, and a surgical robotic arm guides the surgical instrument.
- inventions introduced here can be embodied as special-purpose hardware (e.g., circuitry), as programmable circuitry appropriately programmed with software and/or firmware, or as a combination of special-purpose and programmable circuitry.
- embodiments may include a machine-readable medium having stored thereon instructions that may be used to program a computer (or other electronic devices) to perform a process.
- the machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, ROMs, random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or other type of media / machine-readable medium suitable for storing electronic instructions.
- Fig. 1 illustrates an example of a six-degree-of-freedom robotic stereotaxic platform with computer vision utilizing two cameras 100 in which some embodiments of the present technology may be utilized.
- Components of the platform may include a projector 110 used to project visual patterns 115 onto an animal subject.
- the animal subject may be placed on the movable linear plate of the positioning platform 120.
- Cameras 130 and 135 may capture images of the visual pattern projected onto the animal subject.
- the robotic stereotaxic platform 140 can create a 3D reconstruction of the animal subject using the images acquired with cameras 130 and 135.
- Positioning platform 120 is guided by robotic stereotaxic platform to place the animal subject in the correct position to perform a surgical procedure.
- Fig. 2 illustrates an example of a robotic stereotaxic platform with computer vision utilizing one camera to position an animal subject or surgical instrument to perform a surgical procedure 200 in accordance with one or more embodiments of the present technology.
- Components of the platform may include a projector 110, one or more projected visual patterns 115, a positioning platform 120, a camera 130, an animal subject 220, a user selected brain nucleus 230, a surgical robotic arm 240, and a surgical instrument 250.
- the animal subject 220 can be secured to the movable linear plate of the positioning platform 120 using an immobilization device (not shown).
- a heating pad (not shown) may also be used to maintain the animal subject’s 220 proper body temperature during the procedure.
- the projector 110 can be used to project visual patterns 115 onto the animal subject 220 placed on the movable linear plate of the positioning platform 120.
- Camera 130 may be rotated around the positioning platform 120 using a robotic arm to capture multiple images of the visual patterns 115 projected onto the animal subject 220. In some embodiments, multiple cameras are used and may be moved by one or more robotic arms.
- the 2D images captured by camera 130 can then be used to create a 3D reconstruction of the animal subject 220.
- a user may select a brain nucleus 230 and the positional platform 120 may move the animal subject 220 into a proper position for the surgical procedure.
- the surgical robotic arm 240 can direct the surgical instrument 250 to the user selected brain nucleus 230 while the animal subject 220 remains stationary, or both may move.
- multiple surgical robotic arms 240 and multiple surgical instruments 250 may be used to perform parallel or sequential surgical procedures.
- Fig. 3A illustrates an example diagram for a robotic positioning platform in which some embodiments of the present technology may be utilized.
- the robotic positioning platform has a base plate 305 and a top plate 310.
- the x-axis 315, y-axis 320, and the z-axis 325 correspond to the three linear movements (lateral, longitudinal, and vertical), and θ 330, φ 335, and ψ 340 correspond to the three rotations around the x-axis 315, y-axis 320, and z-axis 325.
- O 345 and O’ 350 are the centers of the base and top plates.
- the base plate 305 and top plate 310 are connected by six extendable arms controlled by six individual motors.
- L 355 is the length of an extendable arm, where the extendable arm is constructed from two sub-arms with fixed lengths L1 360 and L2 365.
- the length of each arm can be independently adjusted through the rotation of the corresponding motor by which the two sub-arms, L1 360 and L2 365, are coordinated.
- An animal subject placed on the movable linear plate of the positioning platform can be moved in six degrees of freedom, including the three linear movements and the three rotations (pitch, roll, and yaw).
- the base plate 305 is considered to be the reference frame for the top plate 310.
- the center of the base plate is the origin for the primary frame with coordinates of X, Y and Z (x-axis 315, y-axis 320, and z-axis 325).
- Top plate 310 has its own secondary frame with coordinates of X’, Y’ and Z’.
- the movement of top plate 310 can be considered as a mathematical mapping from the origin, O 345, of the base plate 305 frame of reference relative to the origin, O’ 350, of the top plate 310 frame by the lengths of the six extendable arms.
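- A minimal sketch of that mapping, under assumed anchor-point geometry, is the standard Gough-Stewart inverse kinematics: for a desired top-plate pose, each arm length is the distance from its base-plate anchor to the transformed top-plate anchor. The anchor layout and plate radii below are illustrative, not taken from the patent.

```python
# Hypothetical Gough-Stewart inverse-kinematics sketch: compute the six
# arm lengths that realize a desired top-plate pose. Anchors spaced 60
# degrees apart on 80 mm / 50 mm circles are assumptions for illustration.
import numpy as np

def arm_lengths(T, base_anchors, top_anchors):
    """T: 4x4 pose of the top plate in the base-plate frame.
    base_anchors, top_anchors: 6x3 arrays of attachment points."""
    top_h = np.hstack([top_anchors, np.ones((6, 1))])   # homogeneous coords
    moved = (T @ top_h.T).T[:, :3]                      # anchors after the move
    return np.linalg.norm(moved - base_anchors, axis=1)

angles = np.deg2rad(np.arange(0, 360, 60))
base = np.c_[80 * np.cos(angles), 80 * np.sin(angles), np.zeros(6)]  # mm
top  = np.c_[50 * np.cos(angles), 50 * np.sin(angles), np.zeros(6)]

T = np.eye(4)
T[2, 3] = 120.0                      # lift the top plate 120 mm, no rotation
print(arm_lengths(T, base, top))     # required length of each of the six arms
```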
- Fig. 3B illustrates an example of positioning platform 120 in accordance with one or more embodiments of the present technology.
- the positioning platform 120 can move in six degrees of freedom by controlling six servo motors or linear actuators connected to the movable linear plate, allowing sub-millimeter accuracy.
- Each servo motor connects to the movable linear plate using a rod or virtual rod, and each servo motor or linear actuator can move independently of the other servo motors to position an animal subject into the proper position for a surgical procedure.
- Fig. 3C illustrates an alternative embodiment of the present technology wherein an x-y stage 375 is added to separate translational positioning from the positioning platform. The x-y stage 375 is driven by translational motors 380 and 385.
- positioning platform 120 is only used for rotational positioning of the animal subject on top plate 310. By separating the translational and rotational motion in this manner, finer, more accurate control can be achieved. Additionally, the present embodiment enables a larger range for positioning. If a user of the robotic stereotaxic platform prefers to position surgical tool 370 towards an animal’s skull while keeping the animal steady, the present embodiment may accommodate that preference.
- Fig. 3D illustrates an alternative embodiment which simplifies the rotational platform design using a pivot design.
- a center ball pivot 395 and multiple springs hold the top platform 310 to the bottom platform 305, keeping it in place.
- Linear translational micrometers 390 control the pitch and roll positioning of top platform 310 and may be motorized.
- the bottom platform 305 is used for yaw positioning.
- Translational motors 380 and 385 are used for translational positioning of x-y stage 375.
- Fig. 4 illustrates a set of components within a robotic stereotaxic platform with 3D computer vision in which some embodiments of the present technology may be utilized.
- robotic stereotaxic platform 140 may include memory 410 (e.g., volatile memory and/or nonvolatile memory); one or more processors 415; power supply 420 (e.g., a battery); operating system 425; graphical user interface 430; internal imaging systems 440; a 3D computer vision system 450 comprising a projector 452, one or more cameras 454, one or more light sources 456, and a robotic arm 458; a positioning platform 460 comprising a vision feedback motion controller 462, and a head immobilization device 464; a reconstruction module 470; a surgical robotic arm 480; a surgical instrument 490; and/or additional components (e.g., audio interfaces, keypads or keyboards, and other input and/or output interfaces).
- Memory 410 can be any device, mechanism, or populated data structure used for storing information.
- memory 410 can encompass any type of, but is not limited to, volatile memory, nonvolatile memory, and dynamic memory.
- memory 410 can be random access memory, memory storage devices, optical memory devices, magnetic media, floppy disks, magnetic tapes, hard drives, SDRAM, RDRAM, DDR RAM, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), compact disks, DVDs, and/or the like.
- memory 410 may include one or more disk drives, flash drives, one or more databases, one or more tables, one or more files, local cache memories, processor cache memories, relational databases, flat databases, and/or the like.
- Memory 410 may be used to store instructions for running one or more applications or modules on processor(s) 415.
- memory 410 could be used in one or more embodiments to house all or some of the instructions needed to execute the functionality of, or control, operating system 425, graphical user interface 430, internal imaging systems 440, 3D computer vision system 450, positioning platform 460, reconstruction module 470, surgical robotic arm 480, and/or additional components.
- Operating system 425 can provide a software package that is capable of managing the hardware resources of robotic stereotaxic platform 140. Operating system 425 can also provide common services for software applications running on processor(s) 415.
- Processor(s) 415 are the main processors of robotic stereotaxic platform 140 which may include application processors, baseband processors, various coprocessors, and other dedicated processors.
- processor(s) 415 can provide the processing power to support software applications, memory management, graphics processing, and multimedia.
- Processors 415 may be communicably coupled with memory 410 and configured to run the operating system 425, the graphical user interface 430, and the applications stored on memory 410 or data storage component (not shown).
- Processor(s) 415 along with the other components may be powered by power supply 420.
- the volatile and nonvolatile memories found in various embodiments may include storage media for storing information such as processor-readable instructions, data structures, program modules, or other data. Some examples of information that may be stored include basic input/output systems (BIOS), operating systems, and applications.
- 3D computer vision system 450 can be realized by one or more techniques, including structural illumination, light-field, time-of-flight, as well as other techniques. These techniques may require additional components that are not shown but are included.
- structural illumination allows 3D profiles to be created on monotone low-contrast surfaces.
- a projector 452 projects multiple visual patterns onto an animal subject, e.g., a skull, and one or more cameras 454 capture 2D images of the visual patterns. These 2D images can then be used by the reconstruction module 470 to create a 3D profile of the animal subject.
- the positioning platform 460 can then align the animal subject into the proper position for the surgical procedure.
- the projector is not necessary and other techniques of producing a visual pattern may be used.
- the animal subject may remain stationary, and the surgical robotic arm 480 may move the surgical instrument 490 to advance an electrode, fiber, or other device into the animal subject’s brain to the user selected brain nucleus or desired area in the brain with high accuracy.
- Internal imaging systems 440 include MRI images, CT scans, and/or other internal imaging techniques.
- a user may select a brain nucleus using an MRI image, a stereotaxic brain atlas, or other image source.
- the brain image then can be fused with CT scans of the animal subject to accurately determine the location of the brain nucleus within the animal subject’s skull.
- CT scans of the animal subject may also be used to show the location of bones under the animal subject’s skin to identify markers on the bones, like Lambda, Bregma or another marker, that can be used to identify the position of the brain.
- the 3D reconstruction may be used to accurately determine the animal subject’s position relative to the movable linear plate of the positioning platform by aligning the skull profile with the CT scans of the animal subject.
- positioning platform 460 can include a vision feedback motion controller 462, head immobilization device 464, and/or other positioning controls.
- the vision feedback motion controller 462 may use dynamic information from the 3D computer vision system 450 or static information from the reconstruction module 470 to guide the animal subject into the proper position.
- the final position of the animal subject may be confirmed by creating a new 3D reconstruction profile of the animal subject, and aligning the 3D reconstruction with CT scan(s) of the animal subject, as well as performing other position verification methods.
- Fig. 5 is a flowchart illustrating a set of operations 500 for performing a surgical procedure on an animal subject in accordance with one or more embodiments of the present technology.
- the positioning platform is moved to a default or reset position in operation 505.
- the animal subject can be induced with anesthesia and placed on the movable linear plate of the positioning platform in operation 510.
- a visual pattern can be projected onto the animal subject, and one or more cameras are used to capture images in operation 515.
- the images captured in operation 515 are used to create a 3D reconstruction of the animal subject’s skull profile, and photogrammetric bundle adjustment can be performed to optimize the 3D reconstruction.
- Photogrammetric bundle adjustment uses a mathematical model of the imaging properties of the projector and camera(s), in which the parameters of the projector and camera(s), as well as their orientations in space, can be determined by a series of calibration measurements to optimize or correct the 3D reconstruction of the subject's skull profile.
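- A toy version of this optimization, assuming a simplified pinhole model with fixed focal length and refining only the right camera's translation and the 3D points against synthetic observations, can be written with SciPy's least-squares solver; a full photogrammetric bundle adjustment would also refine the internal parameters and orientations mentioned above.

```python
# Toy bundle-adjustment sketch (an assumed, simplified model, not the
# patent's calibration): minimize reprojection error over the 3D points
# and the right camera's translation, holding the left camera fixed to
# anchor the coordinate frame.
import numpy as np
from scipy.optimize import least_squares

F = 1000.0  # assumed fixed focal length, in pixels

def project(points, t):
    """Pinhole projection of Nx3 points seen by a camera offset by t."""
    p = points + t
    return F * p[:, :2] / p[:, 2:3]

def residuals(params, obs_l, obs_r, n_pts, t_l):
    t_r = params[:3]
    pts = params[3:].reshape(n_pts, 3)
    return np.concatenate([(project(pts, t_l) - obs_l).ravel(),
                           (project(pts, t_r) - obs_r).ravel()])

rng = np.random.default_rng(0)
n = 20
true_pts = rng.uniform([-5, -5, 95], [5, 5, 105], (n, 3))
t_l, t_r = np.array([25.0, 0, 0]), np.array([-25.0, 0, 0])
obs_l, obs_r = project(true_pts, t_l), project(true_pts, t_r)

x0 = np.concatenate([t_r + 0.5, (true_pts + 0.2).ravel()])  # perturbed guess
fit = least_squares(residuals, x0, args=(obs_l, obs_r, n, t_l))
print(fit.cost)  # near zero once the adjustment converges
```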
- the robotic stereotaxic platform determines if the animal skull profile is aligned in the "skull-flat" position.
- Skull-flat position is accomplished when Bregma and Lambda (which are points along the dorsal (top) midline bone suture that runs from front to back) are brought into the same horizontal plane. This is typically accomplished by tilting the anterior-posterior (front-back) axis of the animal's skull until these two points are aligned. Skull-flat also includes the leveling of the skull in the medio-lateral axis, meaning that corresponding points located symmetrically to the left and the right of the midline by the same distance are also brought into the same horizontal plane.
- the medio-lateral axis of the skull is automatically calculated, allowing the software to estimate the translational and rotational motions required to achieve the skull-flat position, operation 530.
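- For illustration, a sketch of this estimate under assumed coordinate conventions (x lateral, y anterior-posterior, z vertical, coordinates in mm) is shown below; the landmark positions are made up.

```python
# Hypothetical skull-flat computation: the pitch that levels Bregma and
# Lambda, and the roll that levels a symmetric left/right landmark pair.
# Axis conventions and signs are assumptions for illustration.
import numpy as np

def skull_flat_angles(bregma, lam, left_pt, right_pt):
    # Pitch: tilt about the medio-lateral (x) axis until Bregma and
    # Lambda share the same height (Lambda-to-Bregma elevation angle).
    pitch = np.arctan2(bregma[2] - lam[2], bregma[1] - lam[1])
    # Roll: tilt about the anterior-posterior (y) axis until the
    # symmetric lateral points share the same height.
    roll = np.arctan2(right_pt[2] - left_pt[2], right_pt[0] - left_pt[0])
    return pitch, roll

bregma = np.array([0.0, 0.0, 10.2])
lam    = np.array([0.0, -8.0, 9.9])       # Lambda lies posterior to Bregma
left_pt, right_pt = np.array([-3.0, -4.0, 9.4]), np.array([3.0, -4.0, 9.5])

pitch, roll = skull_flat_angles(bregma, lam, left_pt, right_pt)
print(np.rad2deg(pitch), np.rad2deg(roll))  # corrective rotations, ~2.1 and ~1.0 deg
```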
- the robotic platform may then move to the new position, operation 535.
- operations 515 and 520 begin again and the new reconstructed 3D skull profile is used to verify that the animal skull is in skull-flat position.
- Operations 515, 520, 525, 530 and 535 are performed repeatedly until the animal skull profile is in the skull-flat position.
- the animal skull profile is aligned in the plane of de Groot.
- the 3D reconstruction can be aligned according to the position of the animal subject in operation 540.
- the proper position of the subject relative to the surgical instrument can then be determined in operation 545.
- the positioning platform can be controlled to place the animal subject into the proper position to allow insertion of the surgical instrument in operation 550.
- a user-specific coordinate position is used to align the animal subject.
- the coordinate position can be based on pre-determined coordinates and standard landmarks (Bregma and Lambda) to position the animal subject once the animal skull profile is in the skull-flat position.
- Fig. 6 is a flowchart illustrating an alternative series of steps 600 to those presented in Fig. 5.
- the animal subject is immobilized using a head immobilization device.
- the 3D skull profile is aligned by aligning it with a CT scan of the animal subject.
- a user can select a brain nucleus that will be used to determine the proper position of the animal subject.
- the proper position of the animal subject is determined based on the selected brain nucleus.
- Fig. 7 is a sequence diagram 700 illustrating an example of the data flow between the various components of an automated stereotaxic platform in accordance with various embodiments of the present technology.
- a visual pattern is projected onto the animal subject by the 3D computer vision system 710.
- 2D images from one or more cameras are then collected by the 3D computer vision system 710.
- 3D reconstruction is then initiated by the 3D computer vision system 710.
- the reconstruction module 720 performs the 3D reconstruction by determining the same points between the images from both sides using unique spatial codes.
- the 3D spatial coordinate of each point is then estimated using spatial triangulation.
- the 3D skull profile is generated based on the 3D spatial coordinates.
- the reconstructed 3D skull profile is then returned to the 3D computer vision system 710.
- the 3D computer vision system 710 is then used to align the reconstructed skull profile with the animal subject’s skull on the positioning platform and determine the proper subject position.
- Positioning commands and information are then transmitted between the 3D computer vision system 710 and the positioning platform 730 until the skull and positioning platform are in the proper position for a surgical procedure.
- Fig. 8 is a sequence diagram 800 illustrating an example of the data flow between the various components of a robotic stereotaxic platform in accordance with various embodiments of the present technology.
- a user may select a desired brain nucleus from MRI images, a stereotaxic brain atlas, or other source, and then the user can initiate a procedure using a graphical user interface ("GUI") 810.
- An initiate image capture message is sent from the GUI 810 to the 3D computer vision system 820 to cause the 3D computer vision system 820 to create a 3D image that is transmitted back to the GUI 810.
- the GUI 810 then can send the 3D images to the reconstruction module 830 to create a 3D reconstruction of the skull profile that can then be returned to the GUI 810.
- Fig. 9 is a flowchart illustrating a series of steps 900 that make up structured pattern scanning and the 3D reconstruction process performed by the 3D computer vision system and the 3D reconstruction module in accordance with various embodiments of the present technology.
- one or more structured coding patterns are projected onto the skull of an animal subject in 910.
- a unique spatial code for each point on the skull is constructed, wherein the points are acquired in 2D images taken by one or more cameras.
- the image threshold is adjusted to determine the masking area for 3D reconstruction.
- the same points for the images acquired from both sides of the skull are determined using the unique spatial codes in step 940.
- the 3D spatial coordinate is estimated for each point using spatial triangulation.
- a 3D point cloud including pixel intensity information is constructed.
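- Steps 940 through 960 can be sketched as follows, assuming decoded code images and masks from the structured-light step and calibrated projection matrices (see the earlier sketches); the brute-force matching here is illustrative rather than optimized.

```python
# Hypothetical sketch of steps 940-960: match pixels carrying the same
# unique code in both views, triangulate them, and attach the left-view
# pixel intensity to each 3D point.
import numpy as np
import cv2

def match_codes(codes_l, codes_r, mask_l, mask_r):
    """Return 2xN arrays of matched (x, y) pixel coordinates."""
    lut = {}
    for (y, x), c in np.ndenumerate(codes_l):
        if mask_l[y, x]:
            lut[c] = (x, y)                 # last pixel wins (simplification)
    pts_l, pts_r = [], []
    for (y, x), c in np.ndenumerate(codes_r):
        if mask_r[y, x] and c in lut:
            pts_l.append(lut[c])
            pts_r.append((x, y))
    return np.float32(pts_l).T, np.float32(pts_r).T

def point_cloud(P_l, P_r, pts_l, pts_r, image_l):
    """Triangulate matches and build an N x 4 array: x, y, z, intensity."""
    X_h = cv2.triangulatePoints(P_l, P_r, pts_l, pts_r)
    xyz = (X_h[:3] / X_h[3]).T
    cols, rows = pts_l[0].astype(int), pts_l[1].astype(int)
    return np.column_stack([xyz, image_l[rows, cols]])
```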
- Fig. 10A illustrates projecting multiple visual patterns onto an animal subject and capturing the images using the left camera in accordance with various embodiments of the present technology.
- Image 1010 shows a visual pattern of horizontal bars, and image 1020 shows a visual pattern of vertical bars. After each image is captured, the visual pattern may become smaller and the stripes may be spaced closer together. The visual pattern may also change orientation, as can be seen in image 1010 and image 1020.
- the visual pattern in this example consists of bars but may be any pattern, such as a grid.
- the light bends around the animal subject providing visual information about the shape of the animal subject. This allows the 2D images to be used to reconstruct a 3D skull profile or 3D body profile of the animal subject.
- varying sized horizontal bars are projected onto the animal subject and then varying sized vertical bars are projected onto the animal subject with an exposure time of 30 ms.
- the left view and right view are captured by the same camera that is moved by a robotic arm or manually from one side of the positioning platform to the other side.
- Fig. 10B illustrates projecting multiple visual patterns onto an animal subject and capturing the images using the right camera in accordance with one or more embodiments of the present technology. After each image is captured, the visual pattern may become smaller and the shapes may be spaced closer together. The visual pattern may also change orientation. In some embodiments, the left view and right view are captured by the same camera that is moved by a robotic arm or manually from one side of the positioning platform to the other side.
- Figs. 11A-11B illustrate example views of a skull in accordance with various embodiments of the present technology.
- Fig. 11C illustrates an example of the center view of a reconstructed 3D skull profile in accordance with various embodiments of the present technology.
- the projected visual pattern may be shifted by a fraction of its period with respect to the previous pattern to cover the entire period.
- the reflected phase-shifted images are captured by the camera(s), the relative phase map of the animal subject is calculated, and the reconstructed 3D skull profile is generated from the phase map.
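- With four quarter-period shifts, the relative phase map follows from the standard four-step formula; the sketch below assumes captures I1 through I4 of the same pattern shifted by 0, 1/4, 1/2, and 3/4 of its period.

```python
# Four-step phase-shifting sketch (an assumed variant of the phase-shift
# scheme described above): recover the wrapped phase at every pixel from
# four captures of the pattern shifted by quarter periods.
import numpy as np

def phase_map(I1, I2, I3, I4):
    """Wrapped relative phase in [-pi, pi] per pixel."""
    return np.arctan2(I4.astype(float) - I2, I1.astype(float) - I3)

# The wrapped map is usually unwrapped (e.g., np.unwrap along each row)
# before converting phase to surface height for the 3D skull profile.
```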
- Fig. 11D illustrates an example of the left view of a reconstructed 3D skull profile in accordance with one or more embodiments of the present technology.
- Fig. 11E illustrates an example of the right view of a reconstructed 3D skull profile in accordance with some embodiments of the present technology.
- Fig. 12A illustrates a 2D optical image of a Mongolian gerbil skull taken by one of the cameras from the right in accordance with various embodiments of the present technology.
- Fig. 12B illustrates a top view of a 3D reconstruction of the Mongolian gerbil skull in accordance with various embodiments of the present technology.
- Fig. 12C illustrates a top-right view of a 3D reconstruction of the Mongolian gerbil skull in accordance with various embodiments of the present technology.
- Fig. 12D illustrates a top-left view of a 3D reconstruction of the Mongolian gerbil skull in accordance with various embodiments of the present technology.
- the views of the 3D reconstruction of the Mongolian gerbil skull in Fig. 12B, Fig. 12C, and Fig. 12D were superimposed with grey scale intensity to allow intensity representation.
- the two prominent landmarks of Bregma and Lambda can be easily identified based on the grey scale contrast.
- Fig. 13A illustrates an example of the left camera view of two fixed test objects in accordance with various embodiments of the present technology.
- the fixed test objects are calibration standards with known shapes and sizes, and they rest on a calibration plate with dots that are equally spaced to provide measurements of the 3D reconstruction error in the horizontal and vertical directions.
- the lower fixed test object is a small pyramid, and location marker 1310 is one of three markers on the upper fixed test object that define a triangle on one side of the pyramid.
- the lower fixed test object pyramid has a peak 1330.
- the upper fixed test object with a peak 1320 is a larger pyramid.
- the fixed test objects and line segments may be used to compensate for geometric distortions and optical aberrations created by the optics and the perspective of the projectors and camera(s).
- Fig. 13B illustrates an example of the right camera view of two fixed test objects in accordance with some embodiments of the present technology.
- the fixed test objects are the same fixed test objects as seen in Fig. 13A, but they are viewed by a camera on the right side of the positioning platform. In some embodiments, the camera may be moved from the left side to the right side of the positioning platform to capture images from multiple positions and angles.
- Figs. 14A, 14B, and 14C illustrate an example of measuring 3D reconstruction error for the two fixed test objects as shown in Fig. 13A and Fig. 13B that may be used for one or more embodiments of the present technology.
- the fixed test objects are calibration standards with known shapes and sizes, and they rest on a calibration plate with dots that are equally spaced that is placed on the movable linear plate of the positioning platform to provide measurements of the 3D reconstruction error in the horizontal and vertical directions.
- the line segments may be formed by multiple equally spaced dots and each has a known length.
- the upper fixed test object is a small pyramid with a peak 1410 and one side of the pyramid is measured, defined by line segments M8, M9, and M10.
- the lower fixed test object with a peak 1420 is a larger pyramid and one side of this pyramid is measured.
- Line segment 1430 is an example of a vertical line segment consisting of five equally spaced dots on the calibration plate, with a 3D reconstruction measurement length of 4.07709 and an actual length of 4, resulting in an accuracy of 1.93%.
- Line segment 1440 is an example of a horizontal line segment consisting of six equally spaced dots on the calibration plate, with a 3D reconstruction measurement length of 4.96712 and an actual length of 5, resulting in an accuracy of 0.66%.
- Table 1 shows the 3D reconstruction errors for each of the horizontal and vertical line segments. Accuracy is calculated by subtracting the actual value from the measurement value, dividing by the actual value, and taking the absolute value of the result.
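- That calculation reproduces the quoted figures directly; a two-line check using the measurements of line segments 1430 and 1440 above:

```python
# Relative-error check using the measurements quoted above.
for measured, actual in [(4.07709, 4.0), (4.96712, 5.0)]:
    print(f"{abs((measured - actual) / actual):.2%}")   # -> 1.93%, 0.66%
```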
- Table 2 shows the 3D reconstruction errors for the two pyramids of the two fixed test objects.
- Fig. 15A illustrates an example of measuring 3D reconstruction error in the left view for a profile of a gerbil skull that may be used for one or more embodiments of the present technology.
- the gerbil skull 1510 has a fixed test object 1520 to the left of the skull and a fixed test object 1530 to the right of the skull.
- Line segments M0, M1, and M2 define one side of the pyramid of fixed test object 1520.
- Line segments M3 and M4 define one side of the pyramid of fixed test object 1530.
- Table 4 shows the 3D reconstruction errors for the two pyramids of the two fixed test objects that are located on each side of the gerbil skull.
- Table 5 shows the 3D reconstruction errors for the horizontal line segments formed by equally spaced dots on the calibration plate that sits on the positioning platform.
- Table 6 shows the 3D reconstruction errors for the vertical line segments formed by equally spaced dots on the calibration plate.
- the fixed test objects and the line segments may be used to compensate for geometric distortions and optical aberrations created by the optics and the perspective of the projectors and camera(s).
- Fig. 15B illustrates an example of measuring 3D reconstruction error in the right view for a profile of a gerbil skull in accordance with various embodiments of the present technology.
- Table 7 shows the 3D reconstruction errors for the horizontal line segments formed by equally spaced dots on the calibration plate.
- Table 8 shows the 3D reconstruction errors for the vertical line segments formed by equally spaced dots on the calibration plate.
- the line segments may be used to compensate for geometric distortions and optical aberrations created by the optics and the perspective of the projectors and camera(s).
- Fig. 16 is a block diagram illustrating an example machine representing the computer systemization that may be used in some embodiments of the present technology.
- a variety of these steps and operations may be performed by hardware components or may be embodied in computer-executable instructions, which may be used to cause a general-purpose or special-purpose processor (e.g., in a computer, server, or other computing device) programmed with the instructions to perform the steps or operations.
- the steps or operations may be performed by a combination of hardware, software, and/or firmware.
- the system controller 1600 may be in communication with entities including one or more users 1625, client/terminal devices 1620, user input devices 1605, peripheral devices 1610, optional co-processor device(s) (e.g., cryptographic processor devices) 1615, and networks 1630. Users may engage with the controller 1600 via terminal devices 1620 over networks 1630.
- Computers may employ a central processing unit (CPU) or processor to process information.
- Processors may include programmable general-purpose or special-purpose microprocessors, programmable controllers, application-specific integrated circuits (ASICs), programmable logic devices (PLDs), embedded components, combinations of such devices, and the like.
- Processors execute program components in response to user and/or system-generated requests.
- One or more of these components may be implemented in software, hardware or both hardware and software.
- Processors pass instructions (e.g., operational and data instructions) to enable various operations.
- Computers may also process information via parallel processing.
- a graphical processing unit GPU
- GPU graphical processing unit
- the controller 1600 may include clock 1665, CPU 1670, memory such as read only memory (ROM) 1685 and random-access memory (RAM) 1680 and co processor 1675 among others. These controller components may be connected to a system bus 1660, and through the system bus 1660 to an interface bus 1635. Further, user input devices 1605, peripheral devices 1610, co-processor devices 1615, and the like, may be connected through the interface bus 1635 to the system bus 1660.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Veterinary Medicine (AREA)
- Physics & Mathematics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Public Health (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Biophysics (AREA)
- Animal Behavior & Ethology (AREA)
- Optics & Photonics (AREA)
- High Energy & Nuclear Physics (AREA)
- Dentistry (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Pulmonology (AREA)
- Theoretical Computer Science (AREA)
- Neurology (AREA)
- Neurosurgery (AREA)
- Physiology (AREA)
- Image Processing (AREA)
Abstract
Various embodiments of the present technology generally relate to stereotaxic devices. More specifically, some embodiments relate to a robotic stereotaxic platform with computer vision to automatically and accurately position an animal subject into the proper position for a surgical procedure. A surgical procedure is required to precisely place electrodes, injection pipettes, optical fibers, or other devices inside an animal subject's brain nucleus or other region of the brain, to test the functioning of or experimentally manipulate the brain, or inside another part of the body. Improving the success rate of these surgical procedures reduces investigator time, reduces expenditure of supplies, and promotes scientific discovery, resulting in better treatments for patients.
Description
ROBOTIC STEREOTAXIC PLATFORM WITH COMPUTER VISION
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority to U.S. Provisional Application No. 62/635,849 filed February 27, 2018, which is incorporated herein by reference in its entirety for all purposes.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH
[0002] This invention was made with government support under grant number DC011582 awarded by the National Institutes of Health. The government has certain rights in the invention.
TECHNICAL FIELD
[0003] Various embodiments of the present technology generally relate to stereotaxic devices. More specifically, some embodiments relate to a robotic stereotaxic platform with computer vision to automatically and accurately position an animal subject into the proper position for a surgical procedure.
BACKGROUND
[0004] The brain is not a uniform organ but consists of multiple distinct regions that perform distinct information processing tasks. Testing the function of, or experimentally manipulating, these regions requires precise surgical placement of electrodes, injection pipettes, or optical fibers within the desired regions. In typical animal models (e.g., rodents), the required spatial accuracy is in the hundreds or even tens of micrometers. Stereotaxic (or stereotactic) surgery for small animals is an indispensable tool for systems neuroscience studies. Neuroscience behavioral animal studies often require injection of tracers, viral constructs, pharmaceutical agents, or fluorescent dyes into specific brain regions within the animal's skull.
[0005] Traditional stereotaxic systems are based on manual mechanical alignment and manual measurement of skull profiles and landmarks. These traditional systems are slow to use and have difficulty achieving the desired accuracy. As a result, the animal subject must be under anesthesia for longer periods of time, and the procedures often have low success rates. As such, there is a need for an improved stereotaxic system with faster alignment that will reduce the total time required to perform the surgical procedure and increase the likelihood that the animal subject will recover from the surgical procedure. Improving the success rate of these surgical procedures reduces investigator time, reduces expenditure of supplies, and promotes scientific discovery, resulting in better treatments for patients.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] Embodiments of the present technology will be described and explained through the use of the accompanying drawings.
[0007] Fig. 1 illustrates an example of a robotic stereotaxic platform with computer vision utilizing two cameras in which some embodiments of the present technology may be utilized.
[0008] Fig. 2 illustrates an example of a robotic stereotaxic platform with computer vision utilizing one camera to position an animal subject and command a surgical instrument to perform a surgical procedure in accordance with one or more embodiments of the present technology.
[0009] Fig. 3A illustrates an example alignment diagram for a positioning platform in which some embodiments of the present technology may be utilized.
[0010] Figs. 3B-3D illustrate examples of positioning platforms that may be used in one or more embodiments of the present technology.
[0011] Fig. 4 illustrates a set of components within a robotic stereotaxic platform with computer vision in which some embodiments of the present technology may be utilized.
[0012] Fig. 5 is a flowchart illustrating a set of operations for performing a surgical procedure on an animal subject in accordance with one or more embodiments of the present technology.
[0013] Fig. 6 is a flowchart illustrating an alternative set of operations for performing a surgical procedure on an animal subject in accordance with one or more embodiments of the present technology.
[0014] Fig. 7 is a sequence diagram illustrating an example of the data flow between the various components of an automated robotic stereotaxic platform in accordance with various embodiments of the present technology.
[0015] Fig. 8 is a sequence diagram illustrating an alternative example of the data flow between the various components of a robotic stereotaxic platform, wherein the components include a graphical user interface, a 3D computer vision system, a reconstruction module, and a positioning platform, in accordance with various embodiments of the present technology.
[0016] Fig. 9 is a flowchart illustrating a series of steps performed by a 3D computer vision system in accordance with various embodiments of the present technology.
[0017] Fig. 10A illustrates projecting multiple visual patterns onto an animal subject and capturing the images using the left camera in accordance with various embodiments of the present technology.
[0018] Fig. 10B illustrates projecting multiple visual patterns onto an animal subject and capturing the images using the right camera in accordance with one or more embodiments of the present technology.
[0019] Figs. 11A-11B illustrate example views of a skull in accordance with various embodiments of the present technology.
[0020] Figs. 11C-11E illustrate examples of various views of a reconstructed 3D skull profile in accordance with one or more embodiments of the present technology.
[0021] Fig. 12A illustrates a 2D optical image of a gerbil skull taken by a camera from the right in accordance with various embodiments of the present technology.
[0022] Fig. 12B illustrates a top view of a reconstructed 3D profile of a gerbil skull in accordance with one or more embodiments of the present technology.
[0023] Fig. 12C illustrates a top-right view of a reconstructed 3D profile of a gerbil skull in accordance with some embodiments of the present technology.
[0024] Fig. 12D illustrates a top-left view of a reconstructed 3D profile of a gerbil skull in accordance with some embodiments of the present technology.
[0025] Fig. 13A illustrates an example of the left camera view of two fixed test objects in accordance with various embodiments of the present technology.
[0026] Fig. 13B illustrates an example of the right camera view of two fixed test objects in accordance with some embodiments of the present technology.
[0027] Figs. 14A-14C illustrate an example of measuring 3D reconstruction error for the two fixed test objects as shown in Fig. 13A and Fig. 13B that may be used for one or more embodiments of the present technology.
[0028] Fig. 15A illustrates an example of measuring 3D reconstruction error in the left view for a profile of a gerbil skull that may be used for one or more embodiments of the present technology.
[0029] Fig. 15B illustrates an example of measuring 3D reconstruction error in the right view for a profile of a gerbil skull in accordance with various embodiments of the present technology.
[0030] Fig. 16 is a block diagram illustrating an example machine representing the computer systemization that may be used in some embodiments of the present technology.
[0031] The drawings have not necessarily been drawn to scale. Similarly, some components and/or operations may be separated into different blocks or combined into a single block for the purposes of discussion of some of the embodiments of the present technology. Moreover, while the technology is amenable to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and are described in detail below. The intention, however, is not to limit the technology to the particular embodiments described. On the contrary, the technology is intended to cover all modifications, equivalents, and alternatives falling within the scope of the technology as defined by the appended claims.
DETAILED DESCRIPTION
[0032] Various embodiments of the present technology generally relate to stereotaxic devices. More specifically, some embodiments relate to a six-degree-of-freedom robotic stereotaxic platform with a three-dimensional (3D) computer vision system to automatically and accurately position an animal subject into the proper position to perform a surgical procedure that involves inserting a probe or other device into the brain or other part of the body of an animal subject. Traditional stereotaxic systems use manual mechanical alignment that is slow and makes it difficult to achieve the accuracy required to successfully perform a desired surgical procedure. In contrast, various embodiments provide systems and techniques for a robotic stereotaxic platform that allow faster and more accurate automated placement of an animal subject, resulting in an improved surgical procedure success rate.
[0033] In some embodiments, an animal subject may first be anesthetized with any type of anesthesia, for example, a mixture of ketamine-xylazine, and given a maintenance dose after complete anesthesia to maintain the anesthetized state. Once the animal is properly anesthetized, the fur over the skull may be shaved off and the underlying skin may be sanitized with ethanol. Skin and muscle overlying the skull can be cut away, allowing a craniotomy, a craniectomy, or other procedure to be made to the skull using a surgical instrument, such as a dental drill.
[0034] Next, the animal subject may be secured to a positioning platform comprising a movable linear plate using a head immobilization device, such as ear bars that push against the animal subject's ears, a bite bar attached to the teeth of the animal subject, a head post that may be attached to the animal subject's exposed skull, possibly using dental cement, and/or another immobilization device. A heating pad may be used to maintain the animal subject's proper internal temperature during the surgical procedure.
[0035] A 3D computer vision system can then be used to construct a 3D contour map of the skull to locate the position of the animal subject secured to the movable linear plate of a positioning platform, provide visual feedback during positioning of the animal subject, and confirm that the animal subject is moved into the correct final position. The 3D computer vision system can be realized by one or more techniques,
including structural illumination, light-field, time-of-flight, structured-laser-light-based 3-D scanning, a projected light stripe system, or simultaneous localization and mapping, as well as other techniques.
[0036] In some embodiments, the 3D computer vision system consists of a video projector and one or more computer cameras. Data acquisition performed by the 3D computer vision system can include both a scanning procedure and a 3D reconstruction routine. For the scanning procedure, a sequence of spatially structured patterns consisting of vertical or horizontal monochromatic stripes can be projected onto the animal skull by the video projector with increasing spatial frequencies. As the stripes are projected onto the animal skull, they are laterally distorted in proportion to the vertical displacement of the skull surface. The computer cameras can be used to capture the overlaid structured patterns. Upon collecting the images captured by the cameras, the images can be combined to create a unique binary code for each point on the surface of the entire animal skull.
[0037] For example, structural illumination allows 3D reconstruction profiles to be created on monotone low-contrast surfaces. In structural illumination, a projector projects multiple visual patterns onto an animal subject, e.g., a skull, and one or more cameras can capture 2D images of each of the visual patterns. In some embodiments, a light and a rod may be used to create the visual pattern. The light bends around the animal subject to allow the 2D images to be used to reconstruct a 3D skull profile of the animal subject based on geometric triangulation. For example, if sixteen images are captured, then each part of the image may be encoded using sixteen binary bits, and triangulation between the right and left views may be used to determine the depth of each point in the image. Encoding strategies include but are not limited to Fourier methods, binary boundary-based methods, and line shifting.
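By way of illustration only, the following Python sketch shows one common way to generate and decode such a binary stripe sequence using Gray codes (chosen so that neighboring stripe columns differ by a single bit); the pattern count, image shapes, and intensity threshold are assumptions made for the sketch, not values prescribed by this disclosure.

```python
# Illustrative sketch of binary structured-light coding (not the patented
# implementation). Assumes numpy; in practice each pattern and its inverse
# are often projected to make thresholding robust.
import numpy as np

def gray_code_patterns(width, n_bits):
    """One stripe profile per bit: row i is bit i of the Gray-coded column."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                      # binary-reflected Gray code
    bits = (gray[None, :] >> np.arange(n_bits)[:, None]) & 1
    return (bits * 255).astype(np.uint8)           # tile each row vertically to project

def decode_column_codes(images, threshold=128):
    """Recover the projector-column code at every camera pixel.

    images: (n_bits, H, W) stack captured under the corresponding patterns.
    """
    bits = (np.asarray(images) > threshold).astype(np.uint32)
    weights = (1 << np.arange(bits.shape[0])).astype(np.uint32)
    gray = np.tensordot(weights, bits, axes=1)     # pack bits into a Gray code
    code, shift = gray.copy(), gray >> 1
    while shift.any():                             # Gray -> binary column index
        code ^= shift
        shift >>= 1
    return code
```

Pixels carrying the same decoded code in the left and right views are the corresponding points subsequently used for triangulation.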
[0038] The 3D reconstruction profiles may be used to accurately determine the position of the animal subject's skull relative to the movable linear plate of the positioning platform. This information may then be used to align the skull either (a) according to standard landmarks such as Bregma and Lambda, or (b) by aligning the skull profile with the computerized tomography ("CT") scan of the animal subject. CT scans of the animal subject may be used to show the location of bones under the animal subject's skin and to identify markers on the bones.
[0039] Bregma and Lambda are commonly used landmarks on the skull relied on for aligning targets in the brain. Bregma, the most prominent landmark, is the intersection of the sagittal and coronal sutures of the skull. Lambda is located posterior to Bregma, at the intersection of the sagittal and the lambdoid sutures of the skull. Spatial and rotational displacements can be calculated based on Bregma and Lambda and used to guide stereotaxic platforms into alignment. In some embodiments of the present technology, the Bregma and Lambda landmarks can be precisely located based on the 3D reconstruction and used for precise alignment of the skull in 3D space.
[0040] The 3D computer vision system may use one or more conventional cameras to capture the 2D images, a 3D camera or scanner may be used, or another image capture device may be used. In some embodiments, a camera may be attached to a robotic arm that can move the camera into multiple positions to capture multiple images of the test patterns. In another embodiment, a camera mounted on a movable stage or a robotic arm can be used to estimate the 3D location of the skull profile based on relative movements between the camera and the skull, as in simultaneous localization and mapping.
[0041] Once the animal subject is accurately located using the 3D computer vision system, the positioning platform can be utilized to reposition the animal subject into the proper position for the surgical procedure. To accurately place the animal into the proper final position, the positioning platform should be able to move with six degrees of freedom. In some embodiments, six degrees of freedom may be accomplished by utilizing six servo or stepper motors or translational shafts. Each servo motor is secured to a base plate, is connected to the movable linear plate of the positioning platform using a rod, and can move independently of the other servo motors to position an animal subject into the proper position. The distance from the center of each servo motor to the end of its rod or virtual rod varies as the motor rotates, allowing the movable linear plate of the positioning platform to move in all six directions of motion.
[0042] Actuating the servo motors allows the animal subject to follow a calculated trajectory in both position and orientation from the initial position to the final proper position. Actuation of the servo motors may be performed by the vision feedback motion controller of the robotic stereotaxic platform, programmatically under the control of a user through a user interface, or by another means. In an alternative embodiment, the positioning platform can be built using conventional translational shafts. In yet another embodiment, the positioning platform can be built by separating the translational motions from the rotational motions using separate translational stages and rotational platforms.
[0043] The current position of the animal subject on the movable linear plate of the positioning platform may be described using six parameters that correspond to the three linear movements (lateral, longitudinal, and vertical) and the three rotations (roll, pitch, and yaw). For example, the initial position may be described by (xi, yi, zi, ri, pi, wi). The final position may be determined based on user selection of a desired brain nucleus and the position of the animal subject on the movable linear plate of the positioning platform. From a practical point of view, the user will select the coordinates for the desired target brain area, typically based on a brain atlas coordinate system which defines target areas based on Bregma, Lambda, the dorsal (top) midline bone suture, and locations lateral (to either side) of that suture. The desired position may be described by (xt, yt, zt, rt, pt, wt) or by relation to these brain atlas coordinates.
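As a minimal sketch of this six-parameter description, assuming illustrative names and simple straight-line interpolation (reasonable for small corrective motions; large reorientations would warrant proper rotation interpolation):

```python
# Hypothetical six-parameter pose container and a linear trajectory between
# an initial and a target position; names and units are assumptions.
from dataclasses import dataclass, astuple

@dataclass
class Pose:
    x: float       # lateral (mm)
    y: float       # longitudinal (mm)
    z: float       # vertical (mm)
    roll: float    # rotation about x (degrees)
    pitch: float   # rotation about y (degrees)
    yaw: float     # rotation about z (degrees)

def interpolate(initial, target, steps):
    """Yield intermediate poses along a straight path in all six parameters."""
    a, b = astuple(initial), astuple(target)
    for k in range(1, steps + 1):
        t = k / steps
        yield Pose(*(ai + t * (bi - ai) for ai, bi in zip(a, b)))
```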
[0044] The positioning platform may be controlled to guide the animal subject from the initial position to the final position required by the surgical procedure. The control of the positioning platform may be automatic, or a user may use an interface to control the position of the movable linear plate of the positioning platform. In some embodiments, the positioning platform and 3D computer vision system may perform simultaneous localization and mapping ("SLAM") while moving the animal subject, the surgical instrument, the camera(s), and/or other components of the robotic stereotaxic platform.
[0045] In at least one embodiment, the final position of the animal subject may be based on alignment of a brain nucleus of the animal subject. A graphical user interface ("GUI") may allow a user to select a desired brain nucleus from a magnetic resonance imaging ("MRI") image, a stereotaxic brain atlas, or other image source. The stereotaxic brain image may be fused to the CT scan of the animal subject to accurately determine the location of the brain nucleus within the skull. In some embodiments, a skull marker, like Lambda, Bregma, the interaural line, another marker
or combination of markers, may be used to determine the position of the brain within the animal subject’s skull. The CT scan may also be aligned to the 3D reconstruction to provide an external reference to the brain nucleus. Once a desired brain nucleus is selected, the user may initiate the robotic stereotaxic platform to place the animal subject into the proper final position for the surgical procedure.
[0046] In some embodiments, the animal subject is stationary, and a surgical robotic arm may guide, with high accuracy, a surgical instrument to advance an electrode, fiber, or other tool into the animal subject’s brain to the user selected brain nucleus or desired area. In another embodiment, a positioning platform moves the animal subject, and a surgical robotic arm guides the surgical instrument.
[0047] In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present technology. It will be apparent, however, to one skilled in the art that embodiments of the present technology may be practiced without some of these specific details.
[0048] The techniques introduced here can be embodied as special-purpose hardware (e.g., circuitry), as programmable circuitry appropriately programmed with software and/or firmware, or as a combination of special-purpose and programmable circuitry. Hence, embodiments may include a machine-readable medium having stored thereon instructions that may be used to program a computer (or other electronic devices) to perform a process. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, ROMs, random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or other type of media / machine-readable medium suitable for storing electronic instructions.
[0049] The phrases "in some embodiments," "according to some embodiments," "in the embodiments shown," "in other embodiments," and the like generally mean the particular feature, structure, or characteristic following the phrase is included in at least one implementation of the present technology, and may be included in more than one
implementation. In addition, such phrases do not necessarily refer to the same embodiments or different embodiments.
[0050] Fig. 1 illustrates an example of a six-degree-of-freedom robotic stereotaxic platform with computer vision utilizing two cameras 100 in which some embodiments of the present technology may be utilized. Components of the platform may include a projector 110 used to project visual patterns 115 onto an animal subject. The animal subject may be placed on the movable linear plate of the positioning platform 120. Cameras 130 and 135 may capture images of the visual pattern projected onto the animal subject. The robotic stereotaxic platform 140 can create a 3D reconstruction of the animal subject using the images acquired with cameras 130 and 135. Positioning platform 120 is guided by the robotic stereotaxic platform 140 to place the animal subject in the correct position to perform a surgical procedure.
[0051] Fig. 2 illustrates an example of a robotic stereotaxic platform with computer vision utilizing one camera to position an animal subject or surgical instrument to perform a surgical procedure 200 in accordance with one or more embodiments of the present technology. Components of the platform may include a projector 110, one or more projected visual patterns 115, a positioning platform 120, a camera 130, an animal subject 220, a user-selected brain nucleus 230, a surgical robotic arm 240, and a surgical instrument 250. The animal subject 220 can be secured to the movable linear plate of the positioning platform 120 using an immobilization device (not shown). In some embodiments, a heating pad (not shown) may also be used to maintain the proper body temperature of the animal subject 220 during the procedure.
[0052] The projector 110 can be used to project visual patterns 115 onto the animal subject 220 placed on the movable linear plate of the positioning platform 120. Camera 130 may be rotated around the positioning platform 120 using a robotic arm to capture multiple images of the visual patterns 115 projected onto the animal subject 220. In some embodiments, multiple cameras are used and may be moved by one or more robotic arms. The 2D images captured by camera 130 can then be used to create a 3D reconstruction of the animal subject 220. A user may select a brain nucleus 230 and the positioning platform 120 may move the animal subject 220 into a proper position for the surgical procedure. In some embodiments, the surgical robotic
arm 240 can direct the surgical instrument 250 to the user selected brain nucleus 230 while the animal subject 220 remains stationary, or both may move. In some embodiments, multiple surgical robotic arms 240 and multiple surgical instruments 250 may be used to perform parallel or sequential surgical procedures.
[0053] Fig. 3A illustrates an example diagram for a robotic positioning platform in which some embodiments of the present technology may be utilized. The robotic positioning platform has a base plate 305 and a top plate 310. The x-axis 315, y-axis 320, and z-axis 325 correspond to the three linear movements (lateral, longitudinal, and vertical), and θ 330, φ 335, and ψ 340 correspond to the three rotations around the x-axis 315, y-axis 320, and z-axis 325. O 345 and O' 350 are the centers of the base and top plates. The base plate 305 and top plate 310 are connected by six extendable arms controlled by six individual motors. L 355 is the length of an extendable arm, where the extendable arm is constructed from two sub-arms with fixed lengths L1 360 and L2 365. The length of each arm can be independently adjusted through the rotation of the corresponding motor, by which the two sub-arms, L1 360 and L2 365, are coordinated. An animal subject placed on the movable linear plate of the positioning platform can be moved in six degrees of freedom, including the three linear movements and the three rotations (pitch, roll, and yaw).
[0054] The base plate 305 is considered to be the reference frame for the top plate 310. The center of the base plate is the origin of the primary frame, with coordinates X, Y, and Z (x-axis 315, y-axis 320, and z-axis 325). Top plate 310 has its own secondary frame with coordinates X', Y', and Z'. The movement of top plate 310 can be described as a mathematical mapping from the origin O 345 of the base plate 305 reference frame to the origin O' 350 of the top plate 310 frame, determined by the lengths of the six extendable arms.
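This mapping can be sketched as a standard Stewart-Gough inverse-kinematics computation; the code below assumes NumPy and placeholder attachment-point geometry, not the patented geometry. Each resulting arm length L would then be converted into a rotation of the corresponding motor through the fixed sub-arm lengths L1 and L2 (for example, by the law of cosines).

```python
# Minimal inverse-kinematics sketch for a six-arm platform: given the desired
# top-plate pose, compute the six arm lengths. Geometry here is illustrative.
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Rotation about x, then y, then z (angles in radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

def arm_lengths(base_pts, top_pts, translation, roll, pitch, yaw):
    """Lengths of the six extendable arms for a desired top-plate pose.

    base_pts: (6, 3) attachment points in the base frame (origin O).
    top_pts:  (6, 3) attachment points in the top-plate frame (origin O').
    """
    r = rotation_matrix(roll, pitch, yaw)
    top_world = (r @ np.asarray(top_pts, float).T).T + np.asarray(translation, float)
    return np.linalg.norm(top_world - np.asarray(base_pts, float), axis=1)
```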
[0055] Fig. 3B illustrates an example of positioning platform 120 in accordance with one or more embodiments of the present technology. The positioning platform 120 can move in six degrees of freedom by controlling six servo motors or linear actuators connected to the movable linear plate that allows sub-millimeter accuracy. Each servo motor connects to the movable linear plate using a rod or virtual rod, and each servo motor or linear actuator can move independently of the other servo motors to position an animal subject into the proper position for a surgical procedure.
[0056] Fig. 3C illustrates an alternative embodiment of the present technology wherein x-y stage 375 is added to separate translational positioning from the positioning platform 120. x-y stage 375 is driven by translational motors 380 and 385. In the present embodiment, positioning platform 120 is only used for rotational positioning of the animal subject on top plate 310. By separating the translational and rotational motion in this manner, finer, more accurate control can be achieved. Additionally, the present embodiment enables a larger positioning range. If a user of the robotic stereotaxic platform prefers to position surgical tool 370 towards an animal's skull while keeping the animal steady, the present embodiment may accommodate that preference.
[0057] Fig. 3D illustrates an alternative embodiment which simplifies the rotational platform design by using a pivot. In the present embodiment, a center ball pivot 395 and multiple springs hold the top platform 310 against the bottom platform 305 so that it is held firmly in place. Linear translational micrometers 390 control the pitch and roll positioning of top platform 310 and may be motorized. The bottom platform 305 is used for yaw positioning. Translational motors 380 and 385 are used for translational positioning of x-y stage 375.
[0058] Fig. 4 illustrates a set of components within a robotic stereotaxic platform with 3D computer vision in which some embodiments of the present technology may be utilized. As shown in Fig. 4, robotic stereotaxic platform 140 may include memory 410 (e.g., volatile memory and/or nonvolatile memory); one or more processors 415; power supply 420 (e.g., a battery); operating system 425; graphical user interface 430; internal imaging systems 440; a 3D computer vision system 450 comprising a projector 452, one or more cameras 454, one or more light sources 456, and a robotic arm 458; a positioning platform 460 comprising a vision feedback motion controller 462, and a head immobilization device 464; a reconstruction module 470; a surgical robotic arm 480; a surgical instrument 490; and/or additional components (e.g., audio interfaces, keypads or keyboards, and other input and/or output interfaces).
[0059] Memory 410 can be any device, mechanism, or populated data structure used for storing information. In accordance with some embodiments of the present technology, memory 410 can encompass any type of, but is not limited to, volatile memory, nonvolatile memory, and dynamic memory. For example, memory 410 can
be random access memory, memory storage devices, optical memory devices, magnetic media, floppy disks, magnetic tapes, hard drives, SDRAM, RDRAM, DDR RAM, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), compact disks, DVDs, and/or the like. In accordance with some embodiments, memory 410 may include one or more disk drives, flash drives, one or more databases, one or more tables, one or more files, local cache memories, processor cache memories, relational databases, flat databases, and/or the like. In addition, those of ordinary skill in the art will appreciate many additional devices and techniques for storing information that can be used as memory 410.
[0060] Memory 410 may be used to store instructions for running one or more applications or modules on processor(s) 415. For example, memory 410 could be used in one or more embodiments to house all or some of the instructions needed to execute or control operating system 425, graphical user interface 430, internal imaging systems 440, 3D computer vision system 450, positioning platform 460, reconstruction module 470, surgical robotic arm 480, and/or additional components. Operating system 425 can provide a software package that is capable of managing the hardware resources of robotic stereotaxic platform 140. Operating system 425 can also provide common services for software applications running on processor(s) 415.
[0061] Processor(s) 415 are the main processors of robotic stereotaxic platform 140 which may include application processors, baseband processors, various coprocessors, and other dedicated processors. For example, processor(s) 415 can provide the processing power to support software applications, memory management, graphics processing, and multimedia. Processors 415 may be communicably coupled with memory 410 and configured to run the operating system 425, the graphical user interface 430, and the applications stored on memory 410 or data storage component (not shown). Processor(s) 415 along with the other components may be powered by power supply 420. The volatile and nonvolatile memories found in various embodiments may include storage media for storing information such as processor- readable instructions, data structures, program modules, or other data. Some
examples of information that may be stored include basic input/output systems (BIOS), operating systems, and applications.
[0062] 3D computer vision system 450 can be realized by one or more techniques, including structural illumination, light-field, and time-of-flight, as well as other techniques. These techniques may require additional components that are not shown in Fig. 4 but may nevertheless be included. For example, structural illumination allows 3D profiles to be created on monotone low-contrast surfaces. In structural illumination, a projector 452 projects multiple visual patterns onto an animal subject, e.g., a skull, and one or more cameras 454 capture 2D images of the visual patterns. These 2D images can then be used by the reconstruction module 470 to create a 3D profile of the animal subject. The positioning platform 460 can then align the animal subject into the proper position for the surgical procedure. In some embodiments, the projector is not necessary and other techniques of producing a visual pattern may be used. In some embodiments, the animal subject may remain stationary, and the surgical robotic arm 480 may move the surgical instrument 490 to advance an electrode, fiber, or other device into the animal subject's brain to the user-selected brain nucleus or desired area in the brain with high accuracy.
[0063] Internal imaging systems 440 include MRI images, CT scans, and/or other internal imaging techniques. A user may select a brain nucleus using an MRI image, a stereotaxic brain atlas, or other image source. The brain image then can be fused with CT scans of the animal subject to accurately determine the location of the brain nucleus within the animal subject’s skull. CT scans of the animal subject may also be used to show the location of bones under the animal subject’s skin to identify markers on the bones, like Lambda, Bregma or another marker, that can be used to identify the position of the brain. Finally, the 3D reconstruction may be used to accurately determine the animal subject’s position relative to the movable linear plate of the positioning platform by aligning the skull profile with the CT scans of the animal subject.
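As a sketch of one standard way to perform such an alignment, assuming a small set of corresponding landmark points (for example, Bregma and Lambda identified both on the 3D reconstruction and on the CT surface), the Kabsch/Procrustes method yields the rigid rotation and translation relating the two coordinate frames; the function name is illustrative.

```python
# Minimal Kabsch rigid-registration sketch (assumes numpy): finds R, t such
# that R @ src[i] + t best matches dst[i] in the least-squares sense.
import numpy as np

def kabsch_align(src, dst):
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    h = (src - cs).T @ (dst - cd)              # 3x3 cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs
```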
[0064] As illustrated in Fig. 4, some embodiments of positioning platform 460 can include a vision feedback motion controller 462, head immobilization device 464, and/or other positioning controls. The vision feedback motion controller 462 may use dynamic information from the 3D computer vision system 450 or static information from
the reconstruction module 470 to guide the animal subject into the proper position. The final position of the animal subject may be confirmed by creating a new 3D reconstruction profile of the animal subject, and aligning the 3D reconstruction with CT scan(s) of the animal subject, as well as performing other position verification methods.
[0065] Fig. 5 is a flowchart illustrating a set of operations 500 for performing a surgical procedure on an animal subject in accordance with one or more embodiments of the present technology. First, the positioning platform is moved to a default or reset position in operation 505. Then, the animal subject can be induced with anesthesia and placed on the movable linear plate of the positioning platform in operation 510.
[0066] A visual pattern can be projected onto the animal subject, and one or more cameras are used to capture images in operation 515. In operation 520, the images captured in operation 515 are used to create a 3D reconstruction of the animal subject's skull profile, and photogrammetric bundle adjustment can be performed to optimize the 3D reconstruction. Photogrammetric bundle adjustment is a mathematical model of the imaging properties of the projector and camera(s), where the parameters of the projector and camera(s), as well as their orientations in space, can be determined by a series of calibration measurements, to optimize or correct the 3D reconstruction of the subject's skull profile.
[0067] In decision operation 525, the robotic stereotaxic platform determines if the animal skull profile is aligned in the "skull-flat" position. The skull-flat position is accomplished when Bregma and Lambda (which are points along a dorsal (top) midline bone suture that runs from front to back) are brought into the same horizontal plane. This is typically accomplished by tilting the anterio-posterior (front-back) axis of the animal's skull until these two points are aligned. Skull-flat also includes the leveling of the skull in the medio-lateral axis, meaning that corresponding points located symmetrically to the left and the right of the same midline by the same distance are also brought into the same horizontal plane. Once Bregma and Lambda are identified on the 3D reconstruction of the skull, the medio-lateral axis of the skull is automatically calculated, allowing the software to estimate the translational and rotational motions required to achieve the skull-flat position, operation 530. The robotic platform may then move to the new position, operation 535. After the movement is completed, operations 515 and 520 begin again and the new reconstructed 3D skull profile is used to verify that the animal skull is in the skull-flat position. Operations 515, 520, 525, 530, and 535 are performed repeatedly until the animal skull profile is in the skull-flat position. In some embodiments, the animal skull profile is aligned in the plane of de Groot.
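Purely as an illustration of the geometry involved, the sketch below derives pitch and roll corrections from reconstructed landmark coordinates; the landmark arguments, coordinate convention (z vertical), and signs are assumptions that would depend on the platform's actual frames.

```python
# Illustrative skull-flat correction: level the Bregma-Lambda line and the
# medio-lateral axis. Inputs are (x, y, z) landmark coordinates, z vertical.
import numpy as np

def skull_flat_corrections(bregma, lambda_pt, left_pt, right_pt):
    """Return (pitch, roll) in radians needed to reach the skull-flat position."""
    ap = np.subtract(lambda_pt, bregma)            # anterio-posterior axis
    pitch = np.arctan2(ap[2], np.hypot(ap[0], ap[1]))
    ml = np.subtract(right_pt, left_pt)            # medio-lateral axis
    roll = np.arctan2(ml[2], np.hypot(ml[0], ml[1]))
    return pitch, roll                             # tilt by the negatives to level
```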
[0068] In some embodiments, once the animal skull profile is in the skull-flat position, the 3D reconstruction can be aligned according to the position of the animal subject in operation 540.
[0069] The proper position of the subject relative to the surgical instrument can then be determined in operation 545. Finally, the positioning platform can be controlled to place the animal subject into the proper position to allow insertion of the surgical instrument in operation 550.
[0070] In some embodiments, a user-specific coordinate position is used to align the animal subject. The coordinate position can be based on pre-determined coordinates and standard landmarks (Bregma and Lambda) to position the animal subject once the animal skull profile is in the skull-flat position.
[0071] Fig. 6 is a flowchart illustrating an alternative series of steps 600 to those presented in Fig. 5. In step 605, the animal subject is immobilized using a head immobilization device. In step 610, upon determining that the skull is in the skull-flat position, the 3D skull profile is aligned with a CT scan of the animal subject. In step 615, a user can select a brain nucleus that will be used to determine the proper position of the animal subject. In step 620, the proper position of the animal subject is determined based on the selected brain nucleus.
[0072] Fig. 7 is a sequence diagram 700 illustrating an example of the data flow between the various components of an automated stereotaxic platform in accordance with various embodiments of the present technology. Once an animal subject is secured to the positioning platform 730, a visual pattern is projected onto the animal subject by the 3D computer vision system 710. 2D images from one or more cameras are then collected by the 3D computer vision system 710. 3D reconstruction is then initiated by the 3D computer vision system 710. The reconstruction module 720 performs the 3D reconstruction by determining the same points between the images
from both sides using unique spatial codes. The 3D spatial coordinate of each point is then estimated using spatial triangulation. The 3D skull profile is generated based on the 3D spatial coordinates.
[0073] The reconstructed 3D skull profile is then returned to the 3D computer vision system 710. The 3D computer vision system 710 is then used to align the reconstructed skull profile with the animal subject’s skull on the positioning platform and determine the proper subject position. Positioning commands and information are then transmitted between the 3D computer vision system 710 and the positioning platform 730 until the skull and positioning platform are in the proper position for a surgical procedure.
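Schematically, this measure-and-move exchange is a feedback loop of the following form; `measure_pose` and `move_by` are hypothetical stand-ins for the 3D computer vision system 710 and the positioning platform 730, and the tolerance and iteration limit are illustrative.

```python
# Schematic closed-loop positioning: re-image after each motion until the
# measured pose is within tolerance of the target. Helper names are assumed.
import numpy as np

def drive_to_target(measure_pose, move_by, target, tol_mm=0.05, max_iter=20):
    for _ in range(max_iter):
        current = np.asarray(measure_pose(), float)   # six-parameter pose
        delta = np.asarray(target, float) - current
        if np.linalg.norm(delta[:3]) < tol_mm:        # translational error only
            return True
        move_by(delta)                                # command the platform
    return False
```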
[0074] Fig. 8 is a sequence diagram 800 illustrating an example of the data flow between the various components of a robotic stereotaxic platform in accordance with various embodiments of the present technology. A user may select a desired brain nucleus from MRI images, a stereotaxic brain atlas, or other source, and then the user can initiate a procedure using a graphical user interface ("GUI") 810. An initiate image capture message is sent from the GUI 810 to the 3D computer vision system 820 to cause the 3D computer vision system 820 to create a 3D image that is transmitted back to the GUI 810. The GUI 810 then can send the 3D images to the reconstruction module 830 to create a 3D reconstruction of the skull profile that can then be returned to the GUI 810. Next, the 3D reconstruction can be aligned with the CT scan and fused to the MRI data to determine the proper position of the animal subject to allow access to the desired brain nucleus. Finally, control commands can be sent to the positioning platform 840 to move the animal subject into the proper position for the surgical procedure and to perform neural navigation.
[0075] Fig. 9 is a flowchart illustrating a series of steps 900 that make up structured pattern scanning and the 3D reconstruction process performed by the 3D computer vision system and the 3D reconstruction module in accordance with various embodiments of the present technology. First, one or more structured coding patterns are projected onto the skull of an animal subject in 910. In 920, a unique spatial code for each point on the skull is constructed, wherein the points are acquired in 2D images taken by one or more cameras. In 930, the image threshold is adjusted to determine the masking area for 3D reconstruction. Next, the same points for the images acquired
from both sides of the skull are determined using the unique spatial codes in step 940. In 950, the 3D spatial coordinate is estimated for each point using spatial triangulation. Last, in step 960, a 3D point cloud including pixel intensity information is constructed.
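The spatial triangulation of step 950 can be sketched with the standard linear (direct linear transformation) method, assuming the 3x4 projection matrices of the two views are known from calibration; variable names are illustrative.

```python
# Minimal two-view DLT triangulation of one matched point pair (assumes numpy).
import numpy as np

def triangulate(p_left, p_right, x_left, x_right):
    """p_*: 3x4 projection matrices; x_*: (u, v) pixels sharing a spatial code."""
    a = np.vstack([
        x_left[0] * p_left[2] - p_left[0],
        x_left[1] * p_left[2] - p_left[1],
        x_right[0] * p_right[2] - p_right[0],
        x_right[1] * p_right[2] - p_right[1],
    ])
    _, _, vt = np.linalg.svd(a)                 # null-space of the 4x4 system
    xh = vt[-1]
    return xh[:3] / xh[3]                       # homogeneous -> 3D coordinate
```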
[0076] Fig. 10A illustrates projecting multiple visual patterns onto an animal subject and capturing the images using the left camera in accordance with various embodiments of the present technology. Image 1010 shows a visual pattern of horizontal bars, and image 1020 shows a visual pattern of vertical bars. After each image is captured, the visual pattern may become smaller and the shapes may be spaced closer together. The visual pattern may also change orientation, as can be seen in image 1010 and image 1020. The visual pattern in this example consists of bars but may be any pattern, such as a grid. The light bends around the animal subject, providing visual information about the shape of the animal subject. This allows the 2D images to be used to reconstruct a 3D skull profile or 3D body profile of the animal subject. In this example, varying-sized horizontal bars are projected onto the animal subject and then varying-sized vertical bars are projected onto the animal subject with an exposure time of 30 ms. In some embodiments, the left view and right view are captured by the same camera, which is moved by a robotic arm or manually from one side of the positioning platform to the other side.
[0077] Fig. 10B illustrates projecting multiple visual patterns onto an animal subject and capturing the images using the right camera in accordance with one or more embodiments of the present technology. After each image is captured, the visual pattern may become smaller and the shapes may be spaced closer together. The visual pattern may also change orientation. In some embodiments, the left view and right view are captured by the same camera, which is moved by a robotic arm or manually from one side of the positioning platform to the other side.
[0078] Figs. 11A-11B illustrate example views of a skull in accordance with various embodiments of the present technology. Fig. 11C illustrates an example of the center view of a reconstructed 3D skull profile in accordance with various embodiments of the present technology. As an example of using structural illumination, the projected visual pattern may be shifted by a fraction of its period with respect to the previous pattern to cover the entire period. The reflected phase-shifted images are captured by the camera(s), the relative phase map of the animal subject is calculated, and the reconstructed 3D skull profile is generated from the phase map. Fig. 11D illustrates an example of the left view of a reconstructed 3D skull profile in accordance with one or more embodiments of the present technology. Fig. 11E illustrates an example of the right view of a reconstructed 3D skull profile in accordance with some embodiments of the present technology.
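A minimal sketch of this phase-shift decoding, assuming N equally spaced shifts of 2π/N and an image stack of shape (N, H, W), is given below; the resulting phase is wrapped to (−π, π] and must be unwrapped before conversion to height.

```python
# Illustrative N-step phase-shift decoding (assumes numpy, N >= 3).
import numpy as np

def wrapped_phase(images):
    """Relative phase map from N images of a pattern shifted by 2*pi/N steps."""
    images = np.asarray(images, float)
    k = 2 * np.pi * np.arange(images.shape[0]) / images.shape[0]
    num = np.tensordot(np.sin(k), images, axes=1)
    den = np.tensordot(np.cos(k), images, axes=1)
    return np.arctan2(-num, den)                # wrapped phase in (-pi, pi]
```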
[0079] Fig. 12A illustrates a 2D optical image of a Mongolian gerbil skull taken by one of the cameras from the right in accordance with various embodiments of the present technology. Fig. 12B illustrates a top view of a 3D reconstruction of the Mongolian gerbil skull in accordance with various embodiments of the present technology. Fig. 12C illustrates a top-right view of a 3D reconstruction of the Mongolian gerbil skull in accordance with various embodiments of the present technology. Fig. 12D illustrates a top-left view of a 3D reconstruction of the Mongolian gerbil skull in accordance with various embodiments of the present technology. The views of the 3D reconstruction of the Mongolian gerbil skull in Fig. 12B, Fig. 12C, and Fig. 12D were superimposed with grey scale intensity to allow intensity representation. The two prominent landmarks of Bregma and Lambda can be easily identified based on the grey scale contrast.
[0080] Fig. 13A illustrates an example of the left camera view of two fixed test objects in accordance with various embodiments of the present technology. The fixed test objects are calibration standards with known shapes and sizes, and they rest on a calibration plate with dots that are equally spaced to provide measurements of the 3D reconstruction error in the horizontal and vertical directions. The lower fixed test object is a small pyramid and location marker 1310 is one of three markers on the upper fixed test object that defines a triangle on one side of the pyramid. The lower fixed test object pyramid has a peak 1330. The upper fixed test object with a peak 1320 is a larger pyramid. The fixed test objects and line segments may be used to compensate for geometric distortions and optical aberrations created by the optics and the perspective of the projectors and camera(s). A mathematical model of the imaging properties of the projector and camera(s) can be created, where the parameters of the projector and camera(s) as well as their orientations in space can be determined by a series of calibration measurements. The fixed test objects may have other shapes, including a flat object, a sphere, a ball bar, and/or another shape.
[0081] Fig. 13B illustrates an example of the right camera view of two fixed test objects in accordance with some embodiments of the present technology. The fixed test objects are the same fixed test objects as seen in Fig. 13A, but they are viewed by a camera on the right side of the positioning platform. In some embodiments, the camera may be moved from the left side to the right side of the positioning platform to capture images from multiple positions and angles.
[0082] Figs. 14A, 14B, and 14C illustrate an example of measuring 3D reconstruction error for the two fixed test objects as shown in Fig. 13A and Fig. 13B that may be used for one or more embodiments of the present technology. The fixed test objects are calibration standards with known shapes and sizes, and they rest on a calibration plate with equally spaced dots that is placed on the movable linear plate of the positioning platform to provide measurements of the 3D reconstruction error in the horizontal and vertical directions. The line segments may be formed by multiple equally spaced dots, and each has a known length. The upper fixed test object is a small pyramid with a peak 1410, and one side of the pyramid is measured, defined by line segments M8, M9, and M10. The lower fixed test object with a peak 1420 is a larger pyramid, and one side of this pyramid is measured. Line segment 1430 is an example of a vertical line segment consisting of five equally spaced dots on the calibration plate with a 3D reconstruction measurement length of 4.07709 and an actual length of 4, resulting in an accuracy of 1.93%. Line segment
1440 is an example of a horizontal line segment consisting of six equally spaced dots on the calibration plate with a 3D reconstruction measurement length of 4.96712 and an actual length of 5 resulting in an accuracy of 0.66%.
[0083] Table 1 shows the 3D reconstruction errors for each of the horizontal and vertical line segments. Accuracy is calculated by subtracting the actual value from the measurement value, dividing by the actual value, and taking the absolute value of the result. Table 2 shows the 3D reconstruction errors for the two pyramids of the two fixed test objects.
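This definition can be checked directly against the worked examples given with Fig. 14 (a measured 4.07709 against an actual 4, and a measured 4.96712 against an actual 5):

```python
# Accuracy as defined above: |measured - actual| / actual.
def reconstruction_error(measured, actual):
    return abs(measured - actual) / actual

assert round(100 * reconstruction_error(4.07709, 4.0), 2) == 1.93   # 1.93 %
assert round(100 * reconstruction_error(4.96712, 5.0), 2) == 0.66   # 0.66 %
```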
Table 3. Measurements for vertical lines
[0084] Fig. 15A illustrates an example of measuring 3D reconstruction error in the left view for a profile of a gerbil skull that may be used for one or more embodiments of the present technology. The gerbil skull 1510 has a fixed test object 1520 to the left of the skull and a fixed test object 1530 to the right of the skull. Line segments M0, M1, and M2 define one side of the pyramid of fixed test object 1520. Line segments M3 and M4 define one side of the pyramid of fixed test object 1530.
[0085] Table 4 shows the 3D reconstruction errors for the two pyramids of the two fixed test objects that are located on each side of the gerbil skull. Table 5 shows the 3D reconstruction errors for the horizontal line segments formed by equally spaced dots on the calibration plate that sits on the positioning platform. Table 6 shows the 3D reconstruction errors for the vertical line segments formed by equally spaced dots on the calibration plate. The fixed test objects and the line segments may be used to compensate for geometric distortions and optical aberrations created by the optics and the perspective of the projectors and camera(s).
Table 4. 3D Reconstruction Error for Two Fixed Test Objects
Table 5. 3D Reconstruction Error for Horizontal Line Segments on the Calibration Plate
[0086] Fig. 15B illustrates an example of measuring 3D reconstruction error in the right view for a profile of a gerbil skull in accordance with various embodiments of the present technology. Table 7 shows the 3D reconstruction errors for the horizontal line segments formed by equally spaced dots on the calibration plate. Table 8 shows the 3D reconstruction errors for the vertical line segments formed by equally spaced dots on the calibration plate. The line segments may be used to compensate for geometric distortions and optical aberrations created by the optics and the perspective of the projectors and camera(s).
Table 7. 3D Reconstruction Error for Horizontal Line Segments on the Calibration Plate
Table 8. 3D Reconstruction Error for Vertical Line Segments on the Calibration Plate
[0087] Fig. 16 is a block diagram illustrating an example machine representing the computer systemization that may be used in some embodiments of the present technology. A variety of these steps and operations may be performed by hardware components or may be embodied in computer-executable instructions, which may be used to cause a general-purpose or special-purpose processor (e.g., in a computer, server, or other computing device) programmed with the instructions to perform the steps or operations. For example, the steps or operations may be performed by a combination of hardware, software, and/or firmware.
[0088] The system controller 1600 may be in communication with entities including one or more users 1625, client/terminal devices 1620, user input devices 1605, peripheral devices 1610, optional co-processor devices (e.g., cryptographic processor devices) 1615, and networks 1630. Users may engage with the controller 1600 via terminal devices 1620 over networks 1630.
[0089] Computers may employ a central processing unit (CPU) or processor to process information. Processors may include programmable general-purpose or special-purpose microprocessors, programmable controllers, application-specific integrated circuits (ASICs), programmable logic devices (PLDs), embedded components, combinations of such devices, and the like. Processors execute program components in response to user- and/or system-generated requests. One or more of these components may be implemented in software, hardware, or both hardware and software. Processors pass instructions (e.g., operational and data instructions) to enable various operations.
[0090] Computers may also process information via parallel processing. In some embodiments, a graphical processing unit (GPU) may be used in parallel with a CPU to accelerate the computational speed of the 3D reconstruction.
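As one hedged illustration (the CuPy library and a CUDA-capable GPU are assumptions of this sketch, not requirements of the disclosure), a bulk reconstruction step such as applying a rigid transform to a large point cloud can be offloaded to the GPU:

```python
import numpy as np
import cupy as cp  # assumes a CUDA-capable GPU with CuPy installed

def transform_points_gpu(points: np.ndarray, R: np.ndarray,
                         t: np.ndarray) -> np.ndarray:
    """Apply x' = R @ x + t to an (N, 3) point cloud on the GPU."""
    pts = cp.asarray(points)                      # host -> device copy
    out = pts @ cp.asarray(R).T + cp.asarray(t)   # runs in parallel on the GPU
    return cp.asnumpy(out)                        # device -> host copy

points = np.random.rand(1_000_000, 3).astype(np.float32)
R = np.eye(3, dtype=np.float32)   # identity rotation as a placeholder
t = np.zeros(3, dtype=np.float32)
print(transform_points_gpu(points, R, t).shape)   # (1000000, 3)
```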
[0091] The controller 1600 may include a clock 1665, CPU 1670, memory such as read-only memory (ROM) 1685 and random-access memory (RAM) 1680, and a co-processor 1675, among others. These controller components may be connected to a system bus 1660, and through the system bus 1660 to an interface bus 1635. Further, user input devices 1605, peripheral devices 1610, co-processor devices 1615, and the like may be connected through the interface bus 1635 to the system bus 1660. The interface bus 1635 may be connected to a number of interface adapters such as a processor interface 1640, input/output interfaces (I/O) 1645, network interfaces 1650, storage interfaces 1655, and the like.
[0092] Processor interface 1640 may facilitate communication between co-processor devices 1615 and co-processor 1675. In at least one implementation, processor interface 1640 may expedite encryption and decryption of requests or data. Input/output interfaces (I/O) 1645 facilitate communication between user input devices 1605, peripheral devices 1610, co-processor devices 1615, and/or the like and components of the controller 1600 using protocols such as those for handling audio, data, video interface, wireless transceivers, or the like (e.g., Bluetooth, IEEE 1394a-b, serial, universal serial bus (USB), Digital Visual Interface (DVI), 802.11a/b/g/n/x, cellular, etc.). Network interfaces 1650 may be in communication with the network 1630. Through the network 1630, the controller 1600 may be accessible to remote terminal devices 1620. Network interfaces 1650 may use various wired and wireless connection protocols such as direct connect, Ethernet, and wireless connections such as IEEE 802.11a-x.
[0093] Examples of network 1630 include the Internet, a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a wireless network (e.g., using Wireless Application Protocol (WAP)), a secured custom connection, and the like. The network interfaces 1650 can include a firewall which can, in some aspects, govern and/or manage permission to access/proxy data in a computer network, and track varying levels of trust between different machines and/or applications. The firewall can be any number of modules having any combination of hardware and/or software components able to enforce a predetermined set of access rights between a particular set of machines and applications, machines and machines, and/or applications and applications, for example, to regulate the flow of traffic and resource sharing between these varying entities. The firewall may additionally manage and/or have access to an access control list which details permissions including, for example, the access and operation rights of an object by an individual, a machine, and/or an application, and the circumstances under which the permission rights stand. Other network security functions performed or included in the functions of the firewall can be, for example, intrusion prevention, intrusion detection, next-generation firewall, personal firewall, etc., without deviating from the novel art of this disclosure.
[0094] Storage interfaces 1655 may be in communication with a number of storage devices such as storage devices 1690, removable disc devices, and the like. The storage interfaces 1655 may use various connection protocols such as Serial Advanced Technology Attachment (SATA), IEEE 1394, Ethernet, Universal Serial Bus (USB), and the like.
[0095] User input devices 1605 and peripheral devices 1610 may be connected to I/O interface 1645 and potentially other interfaces, buses, and/or components. User input devices 1605 may include card readers, fingerprint readers, joysticks, keyboards, microphones, mice, remote controls, retina readers, touch screens, sensors, and/or the like. Peripheral devices 1610 may include antennas, audio devices (e.g., microphones, speakers, etc.), cameras, external processors, communication devices, radio frequency identifiers (RFIDs), scanners, printers, storage devices, transceivers, and/or the like. Co-processor devices 1615 may be connected to the controller 1600 through interface bus 1635, and may include microcontrollers, processors, interfaces, or other devices.
[0096] Computer-executable instructions and data may be stored in memory (e.g., registers, cache memory, random access memory, flash, etc.) which is accessible by processors. These stored instruction codes (e.g., programs) may engage the processor components, motherboard, and/or other system components to perform desired operations. The controller 1600 may employ various forms of memory including on-chip CPU memory (e.g., registers), RAM 1680, ROM 1685, and storage devices 1690. Storage devices 1690 may employ any number of tangible, non-transitory storage devices or systems such as fixed or removable magnetic disk drives, optical drives, solid state memory devices, and other processor-readable storage media. Computer-executable instructions stored in the memory may include the robotic stereotaxic platform 140 having one or more program modules such as routines, programs, objects, components, data structures, and so on that perform particular tasks or implement particular abstract data types. For example, the memory may contain an operating system (OS) component 1695, modules and other components, database tables, and the like. These modules/components may be stored and accessed from the storage devices, including from external storage devices accessible through an interface bus.
[0097] The database components can store programs executed by the processor to process the stored data and imaging data. The database components may be implemented in the form of a database that is relational, scalable, and secure. Examples of such databases include DB2, MySQL, Oracle, Sybase, and the like. Alternatively, the database may be implemented using various standard data structures, such as an array, hash, list, stack, structured text file (e.g., XML), table, and/or the like. Such data structures may be stored in memory and/or in structured files.
[0098] The controller 1600 may be implemented in distributed computing environments, where tasks or modules are performed by remote processing devices, which are linked through a communications network, such as a Local Area Network ("LAN"), Wide Area Network ("WAN"), the Internet, and the like. In a distributed computing environment, program modules or subroutines may be located in both local and remote memory storage devices. Distributed computing may be employed to load balance and/or aggregate resources for processing. Alternatively, aspects of the controller 1600 may be distributed electronically over the Internet or over other networks (including wireless networks). Those skilled in the relevant art(s) will recognize that portions of the system may reside on a server computer, while corresponding portions reside on a client computer. Data structures and transmission of data particular to aspects of the controller 1600 are also encompassed within the scope of the disclosure.
Conclusion

[0099] Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise," "comprising," and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of "including, but not limited to." As used herein, the terms "connected," "coupled," or any variant thereof means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof. Additionally, the words
"herein," "above," "below," and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the above Detailed Description using the singular or plural number may also include the plural or singular number respectively. The word "or," in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.
[00100] The above Detailed Description of examples of the technology is not intended to be exhaustive or to limit the technology to the precise form disclosed above. While specific examples for the technology are described above for illustrative purposes, various equivalent modifications are possible within the scope of the technology, as those skilled in the relevant art will recognize. For example, while processes or blocks are presented in a given order, alternative implementations may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternatives or subcombinations. Each of these processes or blocks may be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed or implemented in parallel, or may be performed at different times. Further, any specific numbers noted herein are only examples; alternative implementations may employ differing values or ranges.
[00101] The teachings of the technology provided herein can be applied to other systems, not necessarily the system described above. The elements and acts of the various examples described above can be combined to provide further implementations of the technology. Some alternative implementations of the technology may include not only additional elements to those implementations noted above, but also may include fewer elements.
[00102] These and other changes can be made to the technology in light of the above Detailed Description. While the above description describes certain examples of the technology, and describes the best mode contemplated, no matter how detailed the above appears in text, the technology can be practiced in many ways. Details of the system may vary considerably in its specific implementation, while still being
encompassed by the technology disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the technology should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the technology with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the technology to the specific examples disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the technology encompasses not only the disclosed examples, but also all equivalent ways of practicing or implementing the technology under the claims.
[00103] To reduce the number of claims, certain aspects of the technology are presented below in certain claim forms, but the applicant contemplates the various aspects of the technology in any number of claim forms. For example, while only one aspect of the technology is recited as a computer-readable medium claim, other aspects may likewise be embodied as a computer-readable medium claim, or in other forms, such as being embodied in a means-plus-function claim. Any claims intended to be treated under 35 U.S.C. § 112(f) will begin with the words "means for", but use of the term "for" in any other context is not intended to invoke treatment under 35 U.S.C. § 112(f). Accordingly, the applicant reserves the right to pursue additional claims after filing this application to pursue such additional claim forms, in either this application or in a continuing application.
Claims
1. A stereotaxic system comprising:
a moveable platform to which an animal subject can be secured,
wherein the moveable platform has multiple motors or multiple linear actuators to adjust a position of the moveable platform;
a three-dimensional (3D) computer vision system to determine the position or alignment of the animal subject and provide visual feedback while adjusting the position of the moveable platform; and
a reconstruction module configured to receive information related to the position or alignment of the animal subject from the 3D computer vision system and generate a 3D reconstruction of at least a portion of the animal subject.
2. The stereotaxic system of claim 1, wherein the 3D reconstruction of the animal subject is used to determine a proper position of the moveable platform for a surgical procedure and guide the moveable platform into the proper position.
3. The stereotaxic system of claim 2, wherein the 3D computer vision system determines the position or alignment of the animal subject by capturing a set of images of the animal subject.
4. The stereotaxic system of claim 3, wherein the information related to the position or alignment of the animal subject includes the set of images and the 3D reconstruction is generated based at least in part on the set of images.
5. The stereotaxic system of claim 4, wherein the set of images are used to determine a location of a brain or portion of a brain within a skull of the animal subject and the location of the brain or portion of the brain is used, at least in part, to determine
the proper position of the moveable platform and guide the moveable platform into the proper position.
6. The stereotaxic system of claim 1, wherein the 3D computer vision system determines the position or alignment of the animal subject based on standard landmarks of a skull of the animal subject.
7. The stereotaxic system of claim 1, wherein the 3D computer vision system uses a technique to project a set of visual patterns onto the animal subject to determine the position or alignment of the animal subject.
8. The stereotaxic system of claim 7, wherein the technique to project a set of visual patterns onto the animal subject comprises structured illumination, light-field, time-of-flight, structured-laser-light-based 3D scanning, a projected light stripe system, or simultaneous localization and mapping to determine the position of the animal subject.
9. The stereotaxic system of claim 1, further comprising a surgical instrument and a surgical robotic arm configured to position, based at least in part on the 3D reconstruction of the animal subject, the surgical instrument into a desired location of a brain of the animal subject.
10. The stereotaxic system of claim 1, wherein the moveable platform includes:
a fixed base plate having affixed thereto the multiple motors or the linear actuators; and
a movable linear plate positioned above the fixed base plate, wherein connecting rods connect each of the multiple motors or each of the linear actuators to the movable linear plate.
11. The stereotaxic system of claim 10, wherein the multiple motors include at least six servo motors, or linear actuators, to control the position of the movable linear plate with six degrees-of-freedom.
12. A method comprising:
determining a position of an animal subject with a computer vision system, wherein the animal subject is secured to a moveable plate;
reconstructing a three-dimensional (3D) skull profile of the animal subject;
determining a proper position of the animal subject for a surgical procedure; and
controlling the moveable plate to place the animal subject into a proper position for the surgical procedure.
13. The method of claim 12, further comprising:
capturing a set of images of the animal subject secured to the moveable plate;
reconstructing the 3D skull profile based on the set of images of the animal subject; and
aligning the 3D skull profile relative to the position of the animal subject.
14. The method of claim 13, further comprising:
presenting the 3D skull profile on a graphical user interface;
receiving a selection of a brain nucleus via the graphical user interface; and
determining the proper position of the animal subject by aligning the brain nucleus with the 3D skull profile.
15. The method of claim 12, further comprising performing a photogrammetric bundle adjustment of the 3D skull profile to optimize or correct the 3D skull profile.
16. The method of claim 12, further comprising:
determining if the 3D skull profile of the animal subject is in a skull-flat position; and
repositioning the animal subject until the 3D skull profile is in the skull-flat position.
17. The method of claim 12, further comprising projecting a set of visual patterns onto a skull of the animal subject secured to the moveable plate.
18. The method of claim 12, further comprising controlling a robot arm to position, based at least in part on the 3D skull profile of the animal subject, a surgical instrument into a desired location of a brain of the animal subject.
19. A stereotaxic system comprising:
a positioning platform with six degrees-of-freedom to which an animal subject can be secured, wherein the positioning platform has multiple motors or multiple linear actuators to adjust a position of the positioning platform;
a three-dimensional (3D) computer vision system configured to:
project one or more visual patterns onto the animal subject and capture images of the animal subject to determine the position or alignment of the animal subject;
provide visual feedback while adjusting the position of the positioning platform;
determine a proper position of the animal subject for a surgical procedure;
calculate movements required to align the animal subject for the surgical procedure; and
control the positioning platform using the calculated movements to align the animal subject for the surgical procedure; and
a 3D reconstruction module configured to receive information from the 3D computer vision system and generate a 3D reconstruction of at least a portion of the animal subject.
20. The stereotaxic system of claim 19, wherein the 3D computer vision system is further configured to:
capture one or more images of the animal subject that can be used to locate a brain nucleus within a skull of the animal subject;
align the 3D reconstruction of the animal subject with the one or more images that can be used to locate a brain nucleus within the skull of the animal subject;
allow a user to select a brain nucleus of interest for the surgical procedure; and
determine the proper position of the animal subject based on the selected brain nucleus.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
US201862635849P | 2018-02-27 | 2018-02-27 |
US62/635,849 | 2018-02-27 | |
Publications (1)
Publication Number | Publication Date
---|---
WO2019168973A1 (en) | 2019-09-06
Family
ID=67806410
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/US2019/019818 (WO2019168973A1) | Robotic stereotaxic platform with computer vision | 2018-02-27 | 2019-02-27
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2019168973A1 (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020016600A1 (en) * | 1995-01-31 | 2002-02-07 | Cosman Eric R. | Repositioner for head, neck, and body |
US20130245424A1 (en) * | 2002-07-26 | 2013-09-19 | R. Christopher deCharms | Methods for measurement and analysis of brain activity |
US20090112082A1 (en) * | 2007-06-07 | 2009-04-30 | Surgi-Vision, Inc. | Imaging device for mri-guided medical interventional systems |
US20090082783A1 (en) * | 2007-09-24 | 2009-03-26 | Surgi-Vision, Inc. | Control unit for mri-guided medical interventional systems |
US20150018842A1 (en) * | 2009-10-31 | 2015-01-15 | Voxel Rad, Ltd. | Systems and methods for frameless image-guided biopsy and therapeutic intervention |
US20140112579A1 (en) * | 2012-10-23 | 2014-04-24 | Raytheon Company | System and method for automatic registration of 3d data with electro-optical imagery via photogrammetric bundle adjustment |
WO2017172641A1 (en) * | 2016-03-28 | 2017-10-05 | George Papaioannou | Robotics driven radiological scanning systems and methods |
Non-Patent Citations (1)
Title |
---|
"Technical White Paper: Overview", ALLEN BRAIN OBSERVATORY, October 2016 (2016-10-01), pages 1 - 24, XP055634400, Retrieved from the Internet <URL:http://help.brain-map.org/download/attachments/10616846/VisualCoding_Overview.pdf?version=2&modificationDate=1477332395144&api=v2> [retrieved on 20190425] * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2018204655B2 (en) | Image guidance for radiation therapy | |
JP7350422B2 (en) | Adaptive radiotherapy using composite imaging slices | |
CN107875524B (en) | Radiotherapy system, phantom and isocenter calibration method | |
KR102426979B1 (en) | Registration method and electronic equipment for visual navigation of dental implant surgery | |
EP2663363B1 (en) | Determination of a body part position during radiation therapy | |
US11648066B2 (en) | Method and system of determining one or more points on operation pathway | |
CN102525525B (en) | Positioning image arranges method and apparatus and the CT equipment of position line | |
CN115426967A (en) | Spinal surgery planning device and method based on two-dimensional medical image | |
US20180345040A1 (en) | A target surface | |
JP2008022896A (en) | Positioning system | |
CN115702983A (en) | Positioning control method, device, system and medium for radiotherapy equipment | |
CN110772320A (en) | Registration method, registration device and computer readable storage medium | |
CN110772319A (en) | Registration method, registration device and computer readable storage medium | |
CN115887003A (en) | Registration method and device of surgical navigation system and surgical navigation system | |
WO2019168973A1 (en) | Robotic stereotaxic platform with computer vision | |
CN109938835B (en) | Method and robot system for registration when adjusting instrument orientation | |
CN115530978A (en) | Navigation positioning method and system | |
CN116529756A (en) | Monitoring method, device and computer storage medium | |
CN113597288B (en) | Method and system for determining operation path based on image matching | |
JP7570713B2 (en) | Apparatus and method for two-dimensional medical image-based spine surgery planning | |
Ly et al. | A stereotaxic platform for small animals based on 3D computer vision and robotics | |
Goerlach et al. | Evaluation of pattern based point clouds for patient registration—A phantom study | |
Hong et al. | Phantom experiment of an ear surgery robot for automatic mastoidectomy |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19760151; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 19760151; Country of ref document: EP; Kind code of ref document: A1