SURGICAL NAVIGATION SYSTEM WITH ONE OR MORE BODY BORNE COMPONENTS AND METHOD THEREFOR
A system for performing a navigated surgery comprises a first target attached to a patient at a surgical site and a second target at the surgical site. An optical sensor is coupled to the user and detects the first target and second target simultaneously in a working volume of the sensor. An intra-operative computing unit (ICU) receives sensor data concerning the first target and second target, calculates a relative pose and provides display information. The sensor can be handheld, body-mounted or head-mounted, and communicate wirelessly with the ICU. The sensor may also be mountable on a fixed structure (e.g. proximate thereto) with the working volume in alignment with the surgical site. The ICU may receive user input via the sensor, where the user input is at least one of sensor motions, voice commands, and gestures presented to the optical sensor by the user. The display information may be presented via a heads-up or other display unit.
This application claims priority to U.S. provisional application No. 62/072,041 titled “Systems, Methods and Devices for Anatomical Registration and Surgical Localization” and filed on Oct. 29, 2014, the entire contents of which are incorporated herein by reference.
This application claims priority to U.S. provisional application No. 62/072,030 titled “Devices including a surgical navigation camera and systems and methods for surgical navigation” and filed on Oct. 29, 2014, the entire contents of which are incorporated herein by reference.
This application claims priority to U.S. provisional application No. 62/084,891 titled “Devices, systems and methods for natural feature tracking of surgical tools and other objects” and filed on Nov. 26, 2014, the entire contents of which are incorporated herein by reference.
This application claims priority to U.S. provisional application No. 62/072,032 titled “Devices, systems and methods for reamer guidance and cup sealing” and filed on Oct. 29, 2014, the entire contents of which are incorporated herein by reference.
FIELD

The present application relates to computer-assisted surgery and surgical navigation systems where one or more targets and the one or more objects to which the targets are attached are tracked by an optical sensor, such as one borne by the body of the user (e.g. hand, head, etc.). The present application further relates to gestural control for surgical navigation systems.
BACKGROUND

The field of computer-assisted surgery (or “computer navigation”) creates systems and devices to provide a surgeon with positional measurements of objects in space to allow the surgeon to operate more precisely and accurately. Existing surgical navigation systems utilize binocular cameras as optical sensors to detect targets attached to objects within a working volume. The binocular cameras are part of large and expensive medical equipment systems. The cameras are affixed to medical carts with various computer systems, monitors, etc. The binocular-based navigation systems are located outside a surgical sterile field, and can localize (i.e. measure the pose of) targets within the sterile field. There are several limitations to existing binocular-based navigation systems, including line-of-sight disruptions between the cameras and the objects, limited ability to control computer navigation software, cost, and complexity.
BRIEF SUMMARY

In one aspect, a system is disclosed for performing a navigated surgery. The system comprises a first target attached to a patient at a surgical site and a second target at the surgical site. An optical sensor is coupled to the user and detects the first target and second target simultaneously in a working volume of the sensor. An intra-operative computing unit (ICU) receives sensor data concerning the first target and second target, calculates a relative pose and provides display information to a display unit. The sensor can be handheld, body-mounted or head-mounted, and communicate wirelessly with the ICU. The sensor may also be mountable on a fixed structure (e.g. proximate thereto) with the working volume in alignment with the surgical site. The ICU may receive user input via the sensor, where the user input is at least one of sensor motions, voice commands, and gestures presented to the optical sensor by the user. The display information may be presented via a heads-up or other display unit.
There is provided a system for performing a navigated surgery at a surgical site of a patient where the system comprises: a first target configured to be attached to the patient at the surgical site; a second target at the surgical site; a sensor configured to be coupled to a user, the sensor comprising an optical sensor configured to detect the first target and second target simultaneously; and an intra-operative computing unit (ICU) configured to: receive, from the sensor, sensor data concerning the first target and second target; calculate the relative pose between the first target and second target; and based on the relative pose, provide display information to a display unit.
The second target may be a static reference target. The second target may be attached to one of: a surgical instrument; a bone cutting guide; and a bone.
The sensor may be configured to be at least one of: handheld; body-mounted; and head-mounted.
A sensor working volume for the sensor may be in alignment with a field of view of the user.
The sensor may communicate wirelessly with the ICU.
The sensor may be communicatively connected by wire to a sensor control unit and the sensor control unit is configured to wirelessly communicate with the ICU.
The sensor may be further configured to be mountable on a fixed structure.
The ICU may be further configured to present, via the display unit, where the targets are with respect to the sensor field of view. The ICU may be further configured to present, via the display unit, an optical sensor video feed from the optical sensor. The ICU may be further configured to receive user input via the sensor by at least one of receiving motions of the sensor, where the sensor has additional sensing capabilities to sense motions; receiving voice commands, where the sensor further comprises a microphone; and receiving gestures presented to the optical sensor by the user, the gestures being associated with specific commands.
The system may further comprise a display unit wherein the display unit is further configured to be positionable within a field of view of the user while the optical sensor is detecting the first target and second target. The display unit may be a surgeon-worn heads up display.
There is provided a computer-implemented method for performing a navigated surgery at a surgical site of a patient. The method comprises receiving, by at least one processor of an intra-operative computing unit (ICU), sensor data from a sensor where the sensor data comprises information for calculating the relative pose of a first target and a second target, wherein the sensor is coupled to a user and comprises an optical sensor configured to detect the first target and second target simultaneously, and wherein the first target is attached to the patient at the surgical site and the second target is located at the surgical site; calculating, by the at least one processor, the relative pose between the first target and second target; and based on the relative pose, providing, by at least one processor, display information to a display unit.
In this method a sensor working volume of the sensor may be in alignment with a field of view of the user and the first target and second target are in the sensor working volume.
The method may further comprise receiving, by the at least one processor, further sensor data from the sensor, wherein the sensor is attached to a fixed structure such that the sensor working volume is aligned with the surgical site when the sensor is attached.
The method may further comprise receiving, by the at least one processor, user input from the sensor for invoking the at least one processor to perform an activity of the navigated surgery. The user input may comprise a gesture sensed by the sensor.
There is provided a method comprising: aiming an optical sensor, held by the hand, at a surgical site having two targets at the site and within a working volume of the sensor, one of the two targets attached to patient anatomy, wherein the optical sensor is in communication with a processing unit configured to determine a relative position of the two targets and provide display information, via a display unit, pertaining to the relative position; and receiving the display information via the display unit. This method may further comprise providing to the processing unit user input via the sensor for invoking the processing unit to perform an activity of the navigated surgery.
Additional objects and advantages of the disclosed embodiments will be set forth in part in the description that follows, and in part will be obvious from the description, or may be learned by practice of the disclosed embodiments. The objects and advantages of the disclosed embodiments will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosed embodiments as claimed.
The accompanying drawings constitute a part of this specification. The drawings illustrate several embodiments of the present disclosure and, together with the description, serve to explain the principles of the disclosed embodiments as set forth in the accompanying claims.
Embodiments disclosed herein will be more fully understood from the detailed description and the corresponding drawings, which form a part of this application, and in which:
It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity.
DEFINITIONS

Field of View (FOV): The angular span (horizontally and vertically) that an optical sensor (e.g. camera) is able to view.
Degrees of Freedom (DOF): Independent parameters used to describe the pose (position and orientation) of a rigid body. There are up to 6 DOF for a rigid body: 3 DOF for position (i.e. x, y, z position), and 3 DOF for orientation (e.g. roll, pitch, yaw).
Pose: The position and orientation of an object in up to 6 DOF in space.
Working Volume: A 3D volume relative to an optical sensor within which valid poses may be generated. The Working Volume is a subset of the optical sensor's field of view and is configurable depending on the type of system that uses the optical sensor.
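By way of a non-limiting illustration only, the definitions above may be sketched in code. The following Python sketch (the class, helper names and box bounds are assumptions for illustration and not part of this disclosure) represents a 6 DOF pose as a 4x4 homogeneous transform and tests whether a pose lies within a box-shaped working volume:

```python
# Minimal sketch: a 6 DOF pose as a 4x4 homogeneous transform, plus a
# configurable box-shaped working-volume containment test. Bounds are
# illustrative assumptions (metres, in sensor coordinates).
import numpy as np

class Pose:
    """Position (x, y, z) and orientation (3x3 rotation) in up to 6 DOF."""
    def __init__(self, rotation: np.ndarray, position: np.ndarray):
        self.T = np.eye(4)
        self.T[:3, :3] = rotation
        self.T[:3, 3] = position

    def inverse(self) -> "Pose":
        R, t = self.T[:3, :3], self.T[:3, 3]
        return Pose(R.T, -R.T @ t)   # inverse of [R t; 0 1] is [R^T, -R^T t]

    def compose(self, other: "Pose") -> "Pose":
        T = self.T @ other.T
        return Pose(T[:3, :3], T[:3, 3])

def in_working_volume(pose: Pose,
                      bounds=((-0.2, 0.2), (-0.2, 0.2), (0.3, 1.0))) -> bool:
    """True if the pose's position lies inside the box approximating the
    sensor's working volume (a subset of the sensor's field of view)."""
    return all(lo <= p <= hi for p, (lo, hi) in zip(pose.T[:3, 3], bounds))
```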
DETAILED DESCRIPTION

In navigated surgery, targets are affixed to a patient's anatomy, as well as to surgical instruments, such that a relative position and orientation between the sensor and the target may be measured. Total Knee Arthroplasty (TKA) will be used as an example in this disclosure to describe a navigated surgical procedure.
The architecture of a surgical navigation system is shown in the accompanying drawings.
The sensor may also include other sensing components. For example, positional sensing components, such as accelerometers, gyroscopes, magnetometers, IR detectors, etc. may be used to supplement, augment or enhance the positional measurements obtained from the optical sensor. Additionally, the sensor may be capable of receiving user input, for example, through buttons, visible gestures, motions (e.g. determined via an accelerometer), and microphones to receive voice commands, etc. The sensor may include indicators that signal the state of the sensor, for example, a green LED to signify the sensor is on, and a red LED to signify an error.
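As one illustrative example of how such positional sensing components might supplement the optical measurements (the disclosure does not prescribe a particular fusion method), a simple complementary filter can blend a gyroscope-integrated angle with the optical sensor's drift-free orientation estimate:

```python
# Illustrative sketch only, not the disclosed algorithm: a complementary
# filter for one orientation axis. `alpha` weights the smooth but drifting
# gyroscope integral against the noisier but drift-free optical estimate.
def complementary_filter(angle_prev: float, gyro_rate: float,
                         optical_angle: float, dt: float,
                         alpha: float = 0.98) -> float:
    gyro_angle = angle_prev + gyro_rate * dt   # integrate angular rate
    return alpha * gyro_angle + (1.0 - alpha) * optical_angle
```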
The sensor may be in communication with a sensor control unit (SCU). The SCU is an intermediate device to facilitate communication between the sensor and an intra-operative computing unit (ICU).
The ICU may comprise a laptop, workstation, or other computing device having at least one processing unit and at least one storage device, such as memory storing software (instructions and/or data) as further described herein to configure the execution of the intra-operative computing unit. The ICU receives positional data from the sensor via the SCU, processes the data and computes the pose of targets that are within the working volume of the sensor. The ICU also performs any further processing associated with the other sensing and/or user input components. The ICU may further process the measurements to express them in clinically-relevant terms (e.g. according to anatomical registration). The ICU may further implement a user interface to guide a user through a surgical workflow. The ICU may further provide the user interface, including measurements, to a display unit for displaying to a surgeon or other user.
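A minimal sketch of the relative-pose computation performed by the ICU, under the assumption that the sensor reports each target's pose in the sensor's own coordinate frame, and reusing the illustrative Pose class sketched under DEFINITIONS:

```python
# Sketch: relative pose of the second target in the first target's frame,
# T_first_second = T_sensor_first^-1 @ T_sensor_second. Because the two
# targets are detected simultaneously, sensor motion cancels out.
def relative_pose(sensor_to_first: Pose, sensor_to_second: Pose) -> Pose:
    return sensor_to_first.inverse().compose(sensor_to_second)
```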
Handheld Camera Embodiment

In one embodiment, a sensor is configured for handheld use.
Furthermore, both handheld and non-handheld modes may be supported for use of the sensor within the same surgery.
This system configuration has several advantages. This system does not have a wired connection to objects being tracked. This system further allows the sensor to be used for pose measurements of the targets at specific steps in the workflow, and set aside when not in use. When a target is outside of or obstructed from the working volume of the sensor while the sensor is attached to a static/fixed position, the sensor can be moved, in handheld mode, to include the target in its working volume. The sensor may comprise user input components (e.g. buttons). This user input may be a part of the surgical workflow, and may be communicated to the ICU. Furthermore, indicators on the sensor could provide feedback to the surgeon (e.g. status LEDs indicating that a target is being detected by the sensor). Also, in handheld operation, it may be preferable for a surgical assistant to hold the sensor to align the sensor's working volume with the targets in the surgical site, while the surgeon is performing another task.
Body Mounted Embodiment

As illustrated in the accompanying drawings, the sensor may alternatively be mounted on the body of the surgeon.
In the previously described embodiments, the sensor in the surgical navigation system is mounted on the surgeon's body. For reasons of sterility and ergonomics, it may not be feasible to use buttons on the sensor in order to interact with the intra-operative computing unit. Hand gestures may be sensed by the optical system and used as user input. For example, if the sensor can detect a user's hand in its field of view, the system comprising the sensor, SCU, ICU, and a display unit, may be able to identify predefined gestures that correspond to certain commands within the surgical workflow. For example, waving a hand from left to right may correspond to advancing the surgical workflow on the display unit of the ICU; snapping fingers may correspond to saving a measurement that may also be displayed on the display unit of the ICU; waving a hand back and forth may correspond to cancelling an action within the workflow; etc.
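A minimal sketch of such a gesture-to-command mapping follows; the gesture labels, command names and workflow object are hypothetical, and the gesture recognizer itself is assumed to exist and return one of these string labels:

```python
# Hypothetical mapping (all names invented for illustration) from
# recognized gestures to commands within the surgical workflow.
GESTURE_COMMANDS = {
    "wave_left_to_right": "advance_workflow",
    "snap_fingers": "save_measurement",
    "wave_back_and_forth": "cancel_action",
}

def handle_gesture(label: str, workflow) -> None:
    """Invoke the workflow action bound to a recognized gesture, if any."""
    command = GESTURE_COMMANDS.get(label)
    if command is not None:
        getattr(workflow, command)()   # e.g. workflow.save_measurement()
```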
It will be appreciated that a sensor mounted on a surgeon's body need not necessarily be sterile. For example, the surgeon's forehead is not sterile. This is advantageous since a sensor may not be made from materials that can be sterilized, and by removing the sensor from the sterile field, there is no requirement to ensure that the sensor is sterile (e.g. through draping, autoclaving, etc.).
Display Positioning
In another embodiment, the display unit 1102 is a heads-up display configured to be worn by the surgeon 502, and be integrated with the sensor 110. The heads-up display is any transparent display that does not require the surgeon to look away from the usual field of view. For example, the heads-up display may be projected onto a visor that surgeons typically wear during surgery. A head-mounted sensor allows the surgeon to ensure that the targets are within the working volume of the sensor without additionally trying to look at a static display unit within the operating room, and with a heads-up display, the display unit is visible to the surgeon regardless of where they are looking.
In a body-mounted or handheld camera configuration, the sensor is not required to be stationary when capturing relative pose between multiple targets. However, some measurements may be required to be absolute. For example, when calculating the center of rotation (COR) of a hip joint, a fixed sensor may localize a target affixed to the femur as it is rotated about the hip COR. The target is constrained to move along a sphere, the sphere's center being the hip COR. Since the sensor is in a fixed location, and the hip COR is in a fixed location, the pose data from rotating the femur may be used to calculate the pose of the hip COR relative to the target on the femur. In TKA, the pose of the hip COR is useful for determining the mechanical axis of a patient's leg.
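One standard way to perform this calculation, offered purely as an illustrative assumption (the disclosure does not prescribe a specific fitting method), is a linear least-squares sphere fit to the femur-target positions collected while the leg is rotated:

```python
# Sketch: algebraic least-squares sphere fit. Solves
# ||p||^2 = 2 p.c + (r^2 - ||c||^2) for the center c, given femur-target
# positions p sampled in the fixed sensor frame during hip rotation.
import numpy as np

def fit_sphere_center(points: np.ndarray) -> np.ndarray:
    """points: (N, 3) femur-target positions; returns the hip COR."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]   # sphere center; the radius is recoverable from sol[3]
```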
In an embodiment where the sensor may not be fixed, a second stationary target is used as a static reference to compensate for the possible motion of the sensor.
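Under the same assumptions as the earlier sketches (the Pose class, relative_pose and fit_sphere_center helpers are illustrative, not part of this disclosure), the compensation can be expressed by re-stating each femur-target sample in the static reference target's frame before fitting, so that sensor motion between samples cancels:

```python
# Sketch: each femur-target sample is re-expressed in the static reference
# target's coordinate frame; the sphere fit then runs on these positions.
import numpy as np

def sample_in_reference_frame(sensor_to_reference: "Pose",
                              sensor_to_femur: "Pose") -> np.ndarray:
    """Femur-target position in the static reference frame; valid even if
    the sensor moved, provided both targets were seen simultaneously."""
    return relative_pose(sensor_to_reference, sensor_to_femur).T[:3, 3]
```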
Where the surgeon has direct control over the working volume of the optical sensor (i.e. in the body-mounted or handheld configurations), it may be advantageous to display the video feed as seen by the optical sensor to the surgeon via the display unit. This visual feedback may allow the surgeon to see what the optical sensor “sees”, and may be useful in a) ensuring that the target(s) are within the working volume of the sensor, and b) diagnosing any occlusions, disturbances, disruptions, etc. that prevent the poses of the targets from being captured by the ICU. For example, a persistent optical sensor image feed may be displayed.
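A minimal sketch of such visual feedback using OpenCV drawing primitives (the detection format, colours and layout are assumptions for illustration, not the disclosed user interface):

```python
# Sketch: overlay each target's detection status on the optical sensor's
# video feed so the surgeon can confirm targets are within the working
# volume and diagnose occlusions.
import cv2

def annotate_feed(frame, detections):
    """detections: iterable of (label, (u, v), in_volume) tuples, where
    (u, v) is the target's pixel location in the frame."""
    for label, (u, v), in_volume in detections:
        color = (0, 255, 0) if in_volume else (0, 0, 255)  # BGR green/red
        cv2.circle(frame, (u, v), 12, color, 2)
        cv2.putText(frame, label, (u + 16, v), cv2.FONT_HERSHEY_SIMPLEX,
                    0.5, color, 1)
    return frame
```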
Various embodiments have been described herein with reference to the accompanying drawings. It will however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the disclosed embodiments as set forth in the claims that follow.
Claims
1. A system for performing a navigated surgery at a surgical site of a patient, the system comprising:
- a first target configured to be attached to the patient at the surgical site;
- a second target at the surgical site;
- a sensor configured to be coupled to a user, the sensor comprising an optical sensor configured to detect the first target and second target simultaneously; and
- an intra-operative computing unit (ICU) configured to: receive, from the sensor, sensor data concerning the first target and second target; calculate the relative pose between the first target and second target; and based on the relative pose, provide display information to a display unit.
2. The system of claim 1 wherein the second target is a static reference target.
3. The system of claim 1 wherein the second target is attached to one of:
- a surgical instrument;
- a bone cutting guide; and
- a bone.
4. The system of claim 1, wherein the sensor is configured to be at least one of:
- handheld;
- body-mounted; and
- head-mounted.
5. The system of claim 1, wherein a sensor working volume for the sensor is in alignment with a field of view of the user.
6. The system of claim 1, wherein the sensor communicates wirelessly with the ICU.
7. The system of claim 1, wherein the sensor is communicatively connected by wire to a sensor control unit and the sensor control unit is configured to wirelessly communicate with the ICU.
8. The system of claim 1, wherein the sensor is further configured to be mountable on a fixed structure.
9. The system of claim 1, wherein the ICU is further configured to present, via the display unit, where the targets are with respect to the sensor field of view.
10. The system of claim 9, wherein the ICU is further configured to present, via the display unit, an optical sensor video feed from the optical sensor.
11. The system of claim 9, wherein the ICU is further configured to receive user input via the sensor by at least one of:
- receiving motions of the sensor, where the sensor has additional sensing capabilities to sense motions;
- receiving voice commands, where the sensor further comprises a microphone; and
- receiving gestures presented to the optical sensor by the user, the gestures being associated with specific commands.
12. The system of claim 1 further comprising a display unit wherein the display unit is further configured to be positionable within a field of view of the user while the optical sensor is detecting the first target and second target.
13. The system of claim 12 wherein the display unit is a surgeon-worn heads up display.
14. A computer-implemented method for performing a navigated surgery at a surgical site of a patient, the method comprising:
- receiving, by at least one processor of an intra-operative computing unit (ICU), sensor data from a sensor where the sensor data comprises information for calculating the relative pose of a first target and a second target, wherein the sensor is coupled to a user and comprises an optical sensor configured to detect the first target and second target simultaneously, and wherein the first target is attached to the patient at the surgical site and the second target is located at the surgical site;
- calculating, by the at least one processor, the relative pose between the first target and second target; and
- based on the relative pose, providing, by at least one processor, display information to a display unit.
15. The method of claim 14, wherein a sensor working volume of the sensor is in alignment with a field of view of the user and the first target and second target are in the sensor working volume.
16. The method of claim 14, further comprising receiving, by the at least one processor, further sensor data from the sensor, wherein the sensor is attached to a fixed structure such that the sensor working volume is aligned with the surgical site when the sensor is attached.
17. The method of claim 14, further comprising receiving, by the at least one processor, user input from the sensor for invoking the at least one processor to perform an activity of the navigated surgery.
18. The method of claim 17 wherein the user input comprises a gesture sensed by the sensor.
19. A method comprising:
- aiming an optical sensor, held by the hand, at a surgical site having two targets at the site and within a working volume of the sensor, one of the two targets attached to patient anatomy, wherein the optical sensor is in communication with a processing unit configured to determine a relative position of the two targets and provide display information, via a display unit, pertaining to the relative position; and
- receiving the display information via the display unit.
20. The method of claim 19, further comprising providing to the processing unit user input via the sensor for invoking the processing unit to perform an activity of the navigated surgery.
Type: Application
Filed: Oct 29, 2015
Publication Date: May 12, 2016
Inventors: ANDRE NOVOMIR HLADIO (ON), ARMEN GARO BAKIRTZIAN (ON), RICHARD TYLER FANSON (ON)
Application Number: 14/926,963