COMPUTER ASSISTED SURGICAL SYSTEM WITH POSITION REGISTRATION MECHANISM AND METHOD OF OPERATION THEREOF

A computer assisted surgical system and method of operation thereof includes: capturing historic scan data from a three dimensional object; sampling a current surface image from the three dimensional object in a different position; automatically transforming the historical scan data to align with the current surface image for forming a transform data; and displaying, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention.

Description
CROSS-REFERENCE TO RELATED APPLICATION(S)

The present application contains subject matter related to U.S. patent application Ser. No. 14/202,677 filed Mar. 10, 2014, and the subject matter thereof is incorporated herein by reference thereto.

TECHNICAL FIELD

The present invention relates generally to a computer assisted surgical system, and more particularly to a system for establishing the reference position with pre-surgery medical data.

BACKGROUND ART

Image-based surgical navigation systems display the positions of surgical tools with respect to preoperative (prior to surgery) or intraoperative (during surgery) image data sets. Two and three dimensional image data sets are used, as well as time-variant image data, such as multiple data sets taken at different times. The data sets primarily used include two-dimensional fluoroscopic images and three-dimensional data sets such as magnetic resonance imaging (MRI) scans, computed tomography (CT) scans, positron emission tomography (PET) scans, and angiographic data. Intraoperative images are typically fluoroscopic, as a C-arm fluoroscope is relatively easily positioned with respect to the patient and does not require that the patient be moved. Other imaging modalities require extensive patient movement and thus are typically used only for preoperative and post-operative imaging, although they may still be used intra-operatively.

The most popular surgical navigation systems make use of a tracking or localizing system to track tools, instruments, and patients during surgery. These systems identify a predefined coordinate space via uniquely recognizable markers that are manually attached or affixed to, or possibly inherently a part of, an object such as an instrument or a mask. Markers can take several forms, including those that can be manually located using optical (or visual), electromagnetic, radio, or acoustic methods. Furthermore, at least in the case of optical or visual systems, the location of a marker may be based on intrinsic features or landmarks that, in effect, function as recognizable marker sites, while the actual marker is positioned manually by a person. Markers will have a known geometrical arrangement with respect to, typically, an end point and/or axis of the instrument. Thus, objects can be recognized at least in part from the geometry of the markers (assuming that the geometry is unique), and the orientation of the axis and location of the endpoint within a frame of reference deduced from the positions of the markers. Any error in the position of the markers represents a reduction in the safety margin of the operation, where fractions of a millimeter can be critical.

Thus, a need still remains for a computer assisted surgical system that can provide position registration without the position error induced by the manual positioning of markers. In view of the increased popularity in the use of computer assisted surgery, it is increasingly critical that answers be found to these problems. In view of the ever-increasing commercial competitive pressures, along with growing consumer expectations and the diminishing opportunities for meaningful product differentiation in the marketplace, it is critical that answers be found for these problems. Additionally, the need to reduce costs, improve efficiencies and performance, and meet competitive pressures adds an even greater urgency to the critical necessity for finding answers to these problems.

Solutions to these problems have been long sought but prior developments have not taught or suggested any solutions and, thus, solutions to these problems have long eluded those skilled in the art.

DISCLOSURE OF THE INVENTION

The present invention provides a method of operation of a computer assisted surgical system including: capturing historic scan data from a three dimensional object; sampling a current surface image from the three dimensional object in a different position; automatically transforming the historical scan data to align with the current surface image for forming a transform data; and displaying, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention.

The present invention provides a computer assisted surgical system, including: a pre-operation medical scan configured to record historic scan data from a three dimensional object; a position image capture module configured to sample a current surface image from the three dimensional object in a different position; a 3D registration module configured to automatically transform the historical scan data to align with the current surface image for forming a transform data; and a display controller configured to display, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention.

Certain embodiments of the invention have other steps or elements in addition to or in place of those mentioned above. The steps or elements will become apparent to those skilled in the art from a reading of the following detailed description when taken with reference to the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a functional block diagram of a computer assisted surgical system with position registration in an embodiment of the present invention.

FIG. 2 is a functional block diagram of a surgical plan generation mechanism in an embodiment of the present invention.

FIG. 3 is a functional block diagram of a region of interest capture mechanism in an embodiment of the present invention.

FIG. 4 is a functional block diagram of an alignment and presentation mechanism in an embodiment of the present invention.

FIG. 5 is a flow chart of a method of operation of a computer assisted surgical system in a further embodiment of the present invention.

BEST MODE FOR CARRYING OUT THE INVENTION

The following embodiments are described in sufficient detail to enable those skilled in the art to make and use the invention. It is to be understood that other embodiments would be evident based on the present disclosure, and that system, process, or mechanical changes may be made without departing from the scope of the present invention.

In the following description, numerous specific details are given to provide a thorough understanding of the invention. However, it will be apparent that the invention may be practiced without these specific details. In order to avoid obscuring the present invention, some well-known circuits, system configurations, and process steps are not disclosed in detail.

The drawings showing embodiments of the system are semi-diagrammatic and not to scale and, particularly, some of the dimensions are for the clarity of presentation and are shown exaggerated in the drawing FIGS. Similarly, although the views in the drawings for ease of description generally show similar orientations, this depiction in the FIGS. is arbitrary for the most part. Generally, the invention can be operated in any orientation.

Where multiple embodiments are disclosed and described having some features in common, for clarity and ease of illustration, description, and comprehension thereof, similar and like features one to another will ordinarily be described with similar reference numerals. For expository purposes, the term “horizontal” as used herein is defined as a plane parallel to the active surface of the integrated circuit, having the non-volatile memory system, regardless of its orientation. The term “vertical” refers to a direction perpendicular to the horizontal as just defined. Terms, such as “above”, “below”, “bottom”, “top”, “side” (as in “sidewall”), “higher”, “lower”, “upper”, “over”, and “under”, are defined with respect to the horizontal plane, as shown in the figures. The term “directly on” means that there is direct contact between elements with no intervening elements.

The term “module” referred to herein can include software, hardware, or a combination thereof in an embodiment of the present invention in accordance with the context in which the term is used. For example, the software can be machine code, firmware, embedded code, and application software. Also for example, the hardware can be circuitry, a processor, a computer, an integrated circuit, integrated circuit cores, a pressure sensor, an inertial sensor, a microelectromechanical system (MEMS), passive devices, or a combination thereof.

Image processing can relate to projective imaging and tomographic imaging using imagers. Projective imaging employs a planar view of an object using, for example, a camera or an X-ray. Tomographic imaging employs slicing through an object using penetrating waves, including sonar, computed tomography (CT) scans, and magnetic resonance imaging (MRI), as examples.

Referring now to FIG. 1, therein is shown a functional block diagram of a computer assisted surgical system 100 with position registration in an embodiment of the present invention. The functional block diagram of the computer assisted surgical system 100 depicts a pre-operation medical scan 102, such as magnetic resonance imaging (MRI) scans, computed tomography (CT) scans, positron emission tomography (PET) scans, or angiographic data, of a three dimensional object 104, such as a surgical patient.

The pre-operation medical scan 102 can provide historical image data 106, to a computer 107, that represents the internal composition of the three dimensional object 104. The historical image data 106 can be used by a physician or medical specialist to formulate a surgical plan 108 that will be executed during a surgical operation performed on the three dimensional object 104. During the formulation of the surgical plan 108, physical models 110, such as organ models, vessel maps, nerve maps, muscle and tendon maps, tissue structures, or combinations thereof, can be used to formulate a surgical strategy with optimum egress paths and safe regions that can accept intrusion by surgical tools during the surgical operation. The combination of the surgical plan 108 and the physical models 110 can generate a surgical plan and highlights 109 capable of highlighting areas of the operation that can pose a danger if entered, or safe areas of the planned surgery that can allow access for the surgeon (not shown).

The historical image data 106 can be conveyed to a surface of interest extract module 112 in order to isolate a historic point cloud 114 that can represent the outer layer of the skin covering the area of the intended access of the surgical plan 108. The historical image data 106 can be captured up to several days prior to a scheduled surgical operation represented by the surgical plan 108.
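As a non-limiting illustration, the following Python sketch (using numpy) shows one possible way a surface of interest could be isolated from volumetric scan data by marking tissue voxels that border air. The intensity threshold, voxel spacing, and function names are hypothetical assumptions, not values taken from the disclosure.

    # Sketch: derive a historic surface point cloud from volumetric scan data.
    # The threshold "skin_hu" and "spacing_mm" are hypothetical parameters.
    import numpy as np

    def extract_surface_point_cloud(volume, skin_hu=-300.0, spacing_mm=(1.0, 1.0, 1.0)):
        tissue = volume > skin_hu                      # binary tissue mask
        # A voxel is on the surface if it is tissue but has at least one
        # non-tissue neighbor along any axis.
        surface = np.zeros_like(tissue)
        for axis in range(3):
            both = np.roll(tissue, 1, axis=axis) & np.roll(tissue, -1, axis=axis)
            surface |= tissue & ~both
        idx = np.argwhere(surface).astype(np.float64)  # (N, 3) voxel indices
        return idx * np.asarray(spacing_mm)            # convert to millimeters

    # Example: a synthetic 64^3 volume with a spherical "patient".
    zyx = np.indices((64, 64, 64)) - 32
    volume = np.where((zyx ** 2).sum(axis=0) < 20 ** 2, 40.0, -1000.0)
    historic_point_cloud = extract_surface_point_cloud(volume)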

At the scheduled time of the surgical operation, the three dimensional object 104 can be in a substantially different position than the position used to capture the historical image data 106. A position image capture module 116, such as a stereo camera, structured light camera, or laser scanner, can provide a detailed surface image of the three dimensional object 104 in a surgery position for the surgical operation. The position image capture module 116 can provide a current surface image 118, of the three dimensional object 104, to a pre-surgery 3D capture module 120 for analysis. The pre-surgery 3D capture module 120 can process the current surface image 118 to remove obstructions, such as hair, surgical masking, sterile dressings, or the like, from the current surface image 118. A surface of the three dimensional object 104 can be captured as a current image data 122.

The current image data 122 can be coupled to a region of interest extract module 124 for further reduction. The region of interest extract module 124 can generate a current point cloud 126.
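As a non-limiting illustration, the Python sketch below (using numpy) shows one way a region of interest could be reduced to a point cloud by cropping to a bounding box and voxel downsampling. The bounding box, voxel size, and function names are illustrative assumptions.

    # Sketch: crop a captured surface to a region of interest and thin it out.
    import numpy as np

    def extract_region_of_interest(points, roi_min, roi_max, voxel_mm=2.0):
        roi_min = np.asarray(roi_min, dtype=np.float64)
        roi_max = np.asarray(roi_max, dtype=np.float64)
        inside = np.all((points >= roi_min) & (points <= roi_max), axis=1)
        cropped = points[inside]
        # Keep one representative point per occupied voxel.
        keys = np.floor((cropped - roi_min) / voxel_mm).astype(np.int64)
        _, keep = np.unique(keys, axis=0, return_index=True)
        return cropped[keep]

    # Example usage with random points standing in for a captured surface (mm).
    surface = np.random.rand(10000, 3) * 300.0
    current_point_cloud = extract_region_of_interest(surface, (50, 50, 0), (250, 250, 120))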

An intended point cloud 128 can be coupled from the surface of interest extract module 112 to a 3D registration module 130. An actual point cloud 132, such as an array of related points that represent the three dimensional topology of the surface of the three dimensional object 104, can be coupled from the region of interest extract module 124 to the 3D registration module 130. A 3D registration algorithm module 134 can perform a feature by feature alignment of the intended point cloud 128 and the actual point cloud 132. The 3D registration module 130 can manipulate the results of the 3D registration algorithm module 134 based on a transform parameter module 136. The transform parameter module 136 can provide visual cues or highlights when generating a composite image data 138.
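As a non-limiting illustration of a feature by feature alignment, the Python sketch below (using numpy) estimates the rigid transform between already paired features of two point clouds with the well-known Kabsch/SVD solution. The disclosure does not specify this estimator; it is shown only as one possible approach under that assumption.

    # Sketch: least-squares rigid alignment of paired feature points.
    import numpy as np

    def rigid_transform_from_pairs(src, dst):
        """Return R (3x3) and t (3,) minimizing ||R @ src + t - dst|| over pairs."""
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        u, _, vt = np.linalg.svd(src_c.T @ dst_c)
        d = np.sign(np.linalg.det(vt.T @ u.T))         # guard against reflections
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        t = dst.mean(axis=0) - r @ src.mean(axis=0)
        return r, t

    # Example: recover a known rotation and translation from paired features.
    rng = np.random.default_rng(0)
    intended = rng.random((200, 3)) * 100.0
    angle = np.deg2rad(30.0)
    r_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    actual = intended @ r_true.T + np.array([10.0, -5.0, 2.0])
    r_est, t_est = rigid_transform_from_pairs(intended, actual)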

A transform module 140 can be coupled to the composite image data 138, the surgical plan and highlights 109, and a historic scan data 142, such as the data from the pre-operation medical scan 102, to automatically align the historic scan data 142 based on the composite image data 138. The transform module 140 can maintain the positional correlation between the composite image data 138 and the surgical plan 108 based on the historic scan data 142. The transform module 140 can overlay the surgical plan and highlights 109 capable of highlighting areas of the operation that can pose a danger if entered, or safe areas of the planned surgery that can allow access for the surgeon. The surgical plan 108 can be formulated by the surgeon analyzing the pre-operation medical scan 102 in preparation for the surgery.

The transform module 140 can provide continuous updates to a transform data 144 without manual intervention. Since the historic scan data 142 has many layers that are all in positional correlation with the surface layer identified by the surface of interest extract module 112, all of the internal layers can be in positional correlation to the current surface image 118 of the three dimensional object 104.
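As a non-limiting illustration, the Python sketch below (using numpy) applies a single recovered rigid transform to every layer of historic scan data so that internal structures stay in positional correlation with the surface. The layer names are hypothetical placeholders.

    # Sketch: apply one homogeneous transform to each layer of the scan data.
    import numpy as np

    def to_homogeneous(r, t):
        m = np.eye(4)
        m[:3, :3] = r
        m[:3, 3] = t
        return m

    def transform_layers(layers, m):
        """layers: dict of name -> (N, 3) point array; returns transformed copies."""
        out = {}
        for name, pts in layers.items():
            homo = np.hstack([pts, np.ones((len(pts), 1))])
            out[name] = (homo @ m.T)[:, :3]
        return out

    # Example with made-up layer names.
    m = to_homogeneous(np.eye(3), np.array([10.0, -5.0, 2.0]))
    layers = {"skin": np.random.rand(500, 3), "vessels": np.random.rand(300, 3)}
    aligned_layers = transform_layers(layers, m)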

It has been discovered that the computer assisted surgical system 100 can provide highly accurate positional correlation between the historic scan data 142, the surgical plan 108, and the current surface image 118 with no manual intervention or markers applied to the three dimensional object 104. The transform data 144 can provide highly accurate positional information with computer generated highlights indicating safe zones and danger zones for every step of the surgical plan 108.

The transform data 144 can be coupled to an augmented reality display 146 managed by a display controller 148. The current surface image 118 can be coupled to the augmented reality display 146 for establishing a patient coordinate space in which the transform data 144 can be displayed by the display controller 148.

A tool tracking module 150 can present tool tracking data 152 to the augmented reality display 146. The tool tracking module 150 can be in position correlation with the current surface image 118. The transform data 144 is also in position correlation with the current surface image 118, which allows the augmented reality display 146 to present the actual position of the surgical tools used to execute the surgical plan 108 in real time. It has been discovered that the computer assisted surgical system 100 can provide positional correlation between the current surface image 118 and the historic scan data 142 having a mean square error of less than 2 mm, which represents a significant improvement over prior art marker systems that can induce more than twice the positional error in placing a single marker on the three dimensional object 104.
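As a non-limiting illustration, the Python sketch below (using numpy and scipy) shows one way registration quality could be evaluated by measuring nearest-neighbor residuals between a transformed historic surface and the current surface. The under 2 mm figure comes from the text above; the evaluation procedure itself is an assumption.

    # Sketch: mean square and root-mean-square surface-to-surface residuals.
    import numpy as np
    from scipy.spatial import cKDTree

    def registration_error_mm(transformed_surface, current_surface):
        tree = cKDTree(current_surface)
        dist, _ = tree.query(transformed_surface, k=1)   # mm residual per point
        return float(np.mean(dist ** 2)), float(np.sqrt(np.mean(dist ** 2)))

    mse, rms = registration_error_mm(np.random.rand(1000, 3), np.random.rand(1000, 3))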

Referring now to FIG. 2, therein is shown a functional block diagram of a surgical plan generation mechanism 201 in an embodiment of the present invention. The functional block diagram of the surgical plan generation mechanism 201 depicts the pre-operation medical scan 102 having captured the image data of the three dimensional object 104. The pre-operation medical scan 102 can convey the historic scan data 142 to the surgical plan 108. A surgeon (not shown) can access the historic scan data 142 and, with the use of the physical models 110, develop a strategy to complete the surgery on the three dimensional object 104, such as a surgical patient.

The surgical plan 108 can provide extensive details of the requirements of the operation, including safe areas, an entry path, the location, shape, and size of the object of the operation, and danger zones, which if entered could harm the three dimensional object 104, such as the surgical patient. The key to the success of the plan is the absolute positional registration between the position of the three dimensional object 104 during the operation and the historic scan data 142. The surgical plan 108 can provide visual cues to the surgeon performing the operation. The surgical plan 108 can convey the surgical plan and highlights 109 of FIG. 1 to the surface of interest extract module 112.

The surface of interest extract module 112 can use the historical image data 106 to extract the surface of interest to form the historic point cloud 114, which can be assembled as the intended point cloud 128 to define the outer surface of the three dimensional object 104.

It has been discovered that the surgical plan 108 can provide the surgical plan and highlights 109, including specific coordinates that can be highlighted during the display of the transform data 144 of FIG. 1 on the augmented reality display 146 of FIG. 1. The surgical plan and highlights 109 can identify safe zones and danger zones in the intended point cloud 128 that can assist the surgeon (not shown) who is performing the operation.

Referring now to FIG. 3, therein is shown a functional block diagram of a region of interest capture mechanism 301 in an embodiment of the present invention. The functional block diagram of the region of interest capture mechanism 301 depicts the position image capture module 116, such as a stereo image camera, ultra-sonic surface analysis device, structured light or laser surface analysis device, or the like, coupled to the pre-surgery 3D capture module 120.

The position image capture module 116 can capture the surface of the three dimensional object 104 in a surgical position, which can be significantly different from the position of the three dimensional object 104 captured by the pre-operation medical scan 102 of FIG. 1. The pre-surgery 3D capture module 120 can process the current surface image 118 provided by the position image capture module 116. A complete surface topology of the three dimensional object 104 can be provided, through the current image data 122, to the region of interest extract module 124. It is understood that the current image data 122 includes a visible surface topology of the three dimensional object 104. The region of interest extract module 124 can identify the detail of the surface and can algorithmically remove undesired regions, such as hair, from the surface of the region of interest.
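As a non-limiting illustration, the Python sketch below (using numpy and scipy) removes stray points, such as those produced by hair or dressings, with a statistical outlier filter. The neighbor count and the standard-deviation multiplier are hypothetical tuning values.

    # Sketch: drop points whose mean neighbor distance is unusually large.
    import numpy as np
    from scipy.spatial import cKDTree

    def remove_outliers(points, k=16, std_ratio=2.0):
        tree = cKDTree(points)
        dist, _ = tree.query(points, k=k + 1)            # column 0 is the point itself
        mean_dist = dist[:, 1:].mean(axis=1)
        cutoff = mean_dist.mean() + std_ratio * mean_dist.std()
        return points[mean_dist < cutoff]

    cleaned = remove_outliers(np.random.rand(5000, 3) * 100.0)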

The current point cloud 126 can represent a detailed surface of the three dimensional object 104 in the operative surgical position. The region of interest extract module 124 can produce the actual point cloud 132, such as an array of related points that represent the three dimensional topology of the visible surface of the three dimensional object 104, from the current point cloud 126. It is understood that the actual point cloud 132 can contain a subset of the points contained in the intended point cloud 128 of FIG. 1 because both originate from the three dimensional object 104, albeit in different positions.

It has been discovered that the region of interest extract module 124 can generate the actual point cloud 132 as a visible surface topology of the three dimensional object 104 that is a subset of the intended point cloud 128 of the surface of interest extract module 112 of FIG. 1. It is understood that the position image capture module 116 only monitors the outer surface of the three dimensional object 104 to perform automatic registration and alignment between the intended point cloud 128 and the actual point cloud 132 without additional human intervention. This alignment process can remove the human induced position error that can accompany the use of markers or masks adhered to the surface of the three dimensional object 104.

Referring now to FIG. 4, therein is shown a functional block diagram of an alignment and presentation mechanism 401 in an embodiment of the present invention. The functional block diagram of the alignment and presentation mechanism 401 depicts the 3D registration module 130 coupled to the intended point cloud 128 and the actual point cloud 132. The 3D registration module 130 can employ a feature selection module for determining subsets of point clouds, the subsets selected based on key points of a three-dimensional object; a feature matching module, coupled to the feature selection module, for generating matched results based on a matching transformation of the subsets; and a point registration module, coupled to the feature matching module, for refining the matched results based on a refinement transformation to optimally align different data sets of the point clouds for displaying the aligned data sets on a device, wherein the refinement transformation includes a refinement error less than a matching error of the matching transformation.

An example embodiment of the 3D registration module 130 can include a three dimensional registration alignment module 134, which can implement a feature identification structure that can operate on both the intended point cloud 128 and the actual point cloud 132 to identify similar features. The three dimensional registration alignment module 134 can also implement a feature matching structure for rough alignment, providing positional alignment to within less than 5 millimeters. The three dimensional registration alignment module 134 can also implement a registration refinement structure that can improve the positional alignment to less than 2 millimeters without the need for any human intervention to identify portions of the three dimensional object 104.
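As a non-limiting illustration of a registration refinement structure, the Python sketch below (using numpy and scipy) shows a minimal iterative-closest-point (ICP) style loop that could tighten a rough alignment. The disclosure does not name ICP; it is presented as an illustrative stand-in under that assumption.

    # Sketch: refine an initial rigid transform (r, t) by iterating
    # nearest-neighbor matching and re-estimating the transform.
    import numpy as np
    from scipy.spatial import cKDTree

    def icp_refine(src, dst, r, t, iters=30):
        tree = cKDTree(dst)
        for _ in range(iters):
            moved = src @ r.T + t
            _, idx = tree.query(moved, k=1)              # closest dst point per src point
            matched = dst[idx]
            src_c = moved - moved.mean(axis=0)
            dst_c = matched - matched.mean(axis=0)
            u, _, vt = np.linalg.svd(src_c.T @ dst_c)
            d = np.sign(np.linalg.det(vt.T @ u.T))
            r_step = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
            t_step = matched.mean(axis=0) - r_step @ moved.mean(axis=0)
            r, t = r_step @ r, r_step @ t + t_step
        return r, t

    # Example: refine an identity initialization between two offset copies of a surface.
    rng = np.random.default_rng(1)
    src = rng.random((500, 3)) * 100.0
    dst = src + np.array([3.0, -2.0, 1.0])
    r_fit, t_fit = icp_refine(src, dst, np.eye(3), np.zeros(3))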

The 3D registration module 130 can have the transform parameter module 136 that can determine the three dimensional transformation, such as translation, rotation and scaling, required to align the intended point cloud 128 with the actual point cloud 132. The composite image data 138 can include the transformation information that is required to position the historic scan data 142 in the proper alignment to coincide with the actual point cloud 132 and reflect the actual position of the three dimensional object 104, such as the surgical patient.
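As a non-limiting illustration, the Python sketch below (using numpy) recovers translation, rotation, and a uniform scale from a 4x4 transform of the kind the transform parameter module 136 could determine. A shear-free similarity transform is assumed.

    # Sketch: decompose a 4x4 similarity transform into its parameters.
    import numpy as np

    def decompose_transform(m):
        translation = m[:3, 3].copy()
        a = m[:3, :3]
        scale = np.cbrt(np.linalg.det(a))    # uniform scale from the determinant
        rotation = a / scale
        return translation, rotation, scale

    # Example: a transform with scale 1.5 and a pure translation.
    m = np.eye(4)
    m[:3, :3] *= 1.5
    m[:3, 3] = [10.0, -5.0, 2.0]
    t_vec, r_mat, s = decompose_transform(m)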

It is understood that the historic scan data 142 can be collected, by the pre-operation medical scan 102, from the three dimensional object 104 at a time prior to the capture of the current surface image 118 by the position image capture module 116. It is further understood that the difference in position between the historic scan data 142 and the current surface image 118 can be significant. The computer assisted surgical system 100 of FIG. 1 can resolve the difference in position without manual intervention by any medical staff and without external markers applied to the three dimensional object 104 either during the pre-operation medical scan 102 or during the capture of the current surface image 118.

The composite image data 138 is coupled to the transform module 140. The transform module 140 can apply the positional transformation, such as translation, rotation and scaling, from the composite image data 138 and the surgical plan and highlights 109 to the historic scan data 142 provided by the pre-operation medical scan 102. The transform module 140 can complete the merge of the highlighted information from the surgical plan 108 with the properly oriented version of the historic scan data 142 in order to provide the transform data 144 that is coupled to the augmented reality display 146.

The display controller 148 can receive the current surface image 118, the transform data 144, and the tool tracking data 152 to form a composite display in the augmented reality display 146. The positional conformity of the transform data 144 to the current surface image 118 allows the display controller 148 to overlay the data with minimal resources. The tool tracking data 152 can be calibrated, through the position image capture module 116 and the tool tracking module 150, by the surgical staff prior to the initiation of the surgical plan 108. A surgeon (not shown) can supervise the execution of the surgical plan 108, or the surgeon can articulate the tools with computer assistance in order to execute the surgical plan with visual aids provided through the transform data 144.
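As a non-limiting illustration, the Python sketch below (using numpy) projects transformed 3D points into the 2D frame of the current surface image so that an overlay could be composed. The pinhole camera intrinsics are hypothetical; the disclosure does not specify a camera model.

    # Sketch: pinhole projection of 3D points (camera frame, mm) to pixel coordinates.
    import numpy as np

    def project_points(points_cam, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
        z = points_cam[:, 2]
        u = fx * points_cam[:, 0] / z + cx
        v = fy * points_cam[:, 1] / z + cy
        return np.stack([u, v], axis=1)

    overlay_px = project_points(np.array([[10.0, -5.0, 500.0], [0.0, 0.0, 450.0]]))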

It is understood that the execution of the surgical plan 108 can be completely performed by a computer in a remote location from the surgeon with minimal risk to the three dimensional object 104, such as the surgical patient. It has been discovered that an embodiment of the computer assisted surgical system 100 can be used to provide intricate surgical procedures to remote locations of the world with only a rudimentary surgical team in the area, while the surgeon can manage the operation from a location on the opposite side of the planet.

Referring now to FIG. 5, therein is shown a flow chart of a method 500 of operation of a computer assisted surgical system 100 in a further embodiment of the present invention. The method 500 includes: capturing historic scan data from a three dimensional object in a block 502; sampling a current surface image from the three dimensional object in a different position in a block 504; automatically transforming the historical scan data to align with the current surface image for forming a transform data in a block 506; and displaying, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention in a block 508.

The resulting method, process, apparatus, device, product, and/or system is straightforward, cost-effective, uncomplicated, highly versatile, accurate, sensitive, and effective, and can be implemented by adapting known components for ready, efficient, and economical manufacturing, application, and utilization.

Another important aspect of the present invention is that it valuably supports and services the historical trend of reducing costs, simplifying systems, and increasing performance.

These and other valuable aspects of the present invention consequently further the state of the technology to at least the next level.

While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the scope of the included claims. All matters heretofore set forth herein or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense.

Claims

1. A method of operation of a computer assisted surgical system comprising:

capturing historic scan data from a three dimensional object;
sampling a current surface image from the three dimensional object in a different position;
automatically transforming the historical scan data to align with the current surface image for forming a transform data; and
displaying, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention.

2. The method as claimed in claim 1 further comprising extracting an actual point cloud from the current surface image.

3. The method as claimed in claim 1 further comprising generating a composite image data from the historical scan data and the current surface image.

4. The method as claimed in claim 1 further comprising establishing a surgical plan for highlighting the transform data in the augmented reality display.

5. The method as claimed in claim 1 wherein forming the transform data includes aligning an intended point cloud with an actual point cloud.

6. A method of operation of a computer assisted surgical system comprising:

capturing historic scan data from a three dimensional object includes scanning with a pre-operation medical scan;
sampling a current surface image from the three dimensional object in a different position includes viewing the three dimensional object in a surgery position;
automatically transforming the historical scan data to align with the current surface image for forming a transform data; and
displaying, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention including highlighting the transform data by a surgical plan.

7. The method as claimed in claim 6 further comprising extracting an actual point cloud from the current surface image.

8. The method as claimed in claim 6 further comprising generating a composite image data from the historical scan data and the current surface image.

9. The method as claimed in claim 6 further comprising establishing the surgical plan for highlighting the transform data in the augmented reality display.

10. The method as claimed in claim 6 wherein forming the transform data includes aligning an intended point cloud with an actual point cloud.

11. A computer assisted surgical system comprising:

a pre-operation medical scan configured to record historic scan data from a three dimensional object;
a position image capture module configured to sample a current surface image from the three dimensional object in a different position;
a 3D registration module configured to automatically transform the historical scan data to align with the current surface image for forming a transform data; and
a display controller configured to display, on an augmented reality display, the current surface image overlaid by the transform data with no manual intervention.

12. The system as claimed in claim 11 further comprising a region of interest extract module configured to extract an actual point cloud from the current surface image.

13. The system as claimed in claim 11 wherein the 3D registration module is further configured to generate a composite image data from the historical scan data and the current surface image.

14. The system as claimed in claim 11 wherein the pre-operation medical scan is further configured to establish a surgical plan for highlighting the transform data in the augmented reality display.

15. The system as claimed in claim 11 further comprising a transform module configured to form the transform data includes an intended point cloud automatically aligned with an actual point cloud.

16. The system as claimed in claim 11 further comprising:

a pre-surgery 3D capture module configured to sample the current surface image from the three dimensional object in a different position includes the three dimensional object viewed in a surgery position;
a 3D registration algorithm module configured to automatically transform the historical scan data to align with the current surface image for forming the transform data; and
the display controller configured to display, on the augmented reality display, the current surface image overlaid by the transform data with no manual intervention includes the transform data highlighted by a surgical plan.

17. The system as claimed in claim 16 further comprising a region of interest extract module configured to extract an actual point cloud from the current surface image.

18. The system as claimed in claim 16 wherein the 3D registration module is further configured to generate a composite image data from the historical scan data and the current surface image includes a three dimensional registration alignment module for aligning an intended point cloud and an actual point cloud.

19. The system as claimed in claim 16 wherein the pre-operation medical scan is further configured to establish the surgical plan for highlighting the transform data in the augmented reality display.

20. The system as claimed in claim 16 further comprising a transform module configured to form the transform data includes an intended point cloud automatically aligned with an actual point cloud.

Patent History
Publication number: 20160019716
Type: Application
Filed: Jul 15, 2014
Publication Date: Jan 21, 2016
Inventors: Albert Huang (Cupertino, CA), Ming-Chang Liu (San Jose, CA), Dennis Harres (San Jose, CA)
Application Number: 14/331,541
Classifications
International Classification: G06T 19/00 (20060101); G06F 19/00 (20060101); G06T 3/00 (20060101);