HAPTIC GUIDANCE SYSTEM AND METHOD
A surgical planning method is provided. A height of a cartilage surface above a bone is detected. A representation of the bone and a representation of the height of the cartilage surface is created. Bone preparation for implanting an implant on the bone is planned based at least in part on the detected height of the cartilage surface.
This application is a divisional of U.S. patent application Ser. No. 11/357,197, filed Feb. 21, 2006, which is a continuation-in-part of U.S. patent application Ser. No. 10/384,072, filed Mar. 6, 2003, published Feb. 5, 2004; U.S. patent application Ser. No. 10/384,077, filed Mar. 6, 2003, published Feb. 19, 2004; and U.S. patent application Ser. No. 10/384,194, filed Mar. 6, 2003, published Feb. 19, 2004, each of which claims priority from U.S. Provisional Patent Application No. 60/362,368, filed Mar. 6, 2002. U.S. patent application Ser. No. 11/357,197 is also a continuation-in-part of U.S. patent application Ser. No. 10/621,119, filed Jul. 16, 2003, published Jun. 3, 2004, which is a continuation-in-part of U.S. patent application Ser. No. 10/384,078, filed Mar. 6, 2003, published Feb. 19, 2004, which claims priority from U.S. Provisional Patent Application Ser. No. 60/362,368, filed Mar. 6, 2002. U.S. patent application Ser. No. 11/357,197 further claims priority from U.S. Provisional Patent Application Ser. No. 60/655,642, filed Feb. 22, 2005, and U.S. Provisional Patent Application Ser. No. 60/759,186, filed Jan. 17, 2006. Each of the above-referenced published applications is incorporated by reference herein in its entirety.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to a surgical system and, more particularly, to a surgical system and method for orthopedic joint replacement.
2. Description of Related Art
Minimally invasive surgery (MIS) is the performance of surgery through incisions that are considerably smaller than incisions used in traditional surgical approaches. For example, in an orthopedic application such as total knee replacement surgery, an MIS incision length may be in a range of about 4 to 6 inches whereas an incision length in traditional total knee surgery is typically in a range of about 6 to 12 inches. As a result of the smaller incision length, MIS procedures are generally less invasive than traditional surgical approaches, which minimizes trauma to soft tissue, reduces post-operative pain, promotes earlier mobilization, shortens hospital stays, and speeds rehabilitation.
One drawback of MIS is that the small incision size reduces a surgeon's ability to view and access the anatomy. For example, in minimally invasive orthopedic joint replacement, limited visibility and limited access to the joint increase the complexity of assessing proper implant position and of reshaping bone. As a result, accurate placement of implants may be more difficult. Conventional techniques for counteracting these problems include, for example, surgical navigation, positioning the leg for optimal joint exposure, and employing specially designed, downsized instrumentation and complex surgical techniques. Such techniques, however, typically require a large amount of specialized instrumentation, a lengthy training process, and a high degree of skill. Moreover, operative results for a single surgeon and among various surgeons are not sufficiently predictable, repeatable, and/or accurate. As a result, implant performance and longevity varies among patients.
In orthopedic applications, one drawback of both MIS and traditional surgical approaches is that healthy as well as diseased bone is removed when the bone is prepared to receive the implant. For example, a total knee replacement can require removal of up to ½ inch of bone on each of three compartments of the knee. One conventional solution for preserving healthy bone is to perform a partial (or unicompartmental) knee replacement when only one compartment of the knee is damaged. A unicompartmental approach involves removal of damaged or arthritic portions on only one compartment of the knee. For example, the REPICCI® unicondylar knee system typically requires removal of only about ¼ inch of bone on one compartment of the knee. The REPICCI® system employs freehand sculpting of bone with a spherical burr through a minimally invasive incision typically about 3 inches in length. The spherical burr enables cuts having rounded shapes that cannot be reproduced with a surgical saw. The freehand burring technique, however, is difficult to master and requires more artistic sculpting capability from the surgeon than techniques utilizing traditional cutting jigs or saw guides. As a result, freehand cutting requires a high degree of skill to achieve operative results that are sufficiently predictable, repeatable, and/or accurate. Moreover, the REPICCI® technique and traditional surgical approaches cannot produce cuts having complex or highly curved geometries. Thus, such approaches typically require the removal of at least some healthy bone along with the diseased/damaged bone.
Another drawback of both MIS and traditional orthopedic surgical approaches is that such approaches do not enhance the surgeon's inherent surgical skill in a cooperative manner. For example, some conventional techniques for joint replacement include autonomous robotic systems to aid the surgeon. Such systems, however, typically serve primarily to enhance bone machining by performing autonomous cutting with a high speed burr or by moving a drill guide into place and holding the position of the drill guide while the surgeon inserts cutting tools through the guide. Although such systems enable precise bone resections for improved implant fit and placement, they act autonomously (rather than cooperatively with the surgeon) and thus require the surgeon to cede a degree of control to the robot. Additional drawbacks of autonomous systems include the large size of the robot, poor ergonomics, the need to rigidly clamp the bone during registration and cutting, increased incision length for adequate robot access, and limited acceptance by surgeons and regulatory agencies due to the autonomous nature of the system.
Other conventional robotic systems include robots that cooperatively interact with the surgeon. One drawback of conventional interactive robotic systems is that such systems lack the ability to adapt surgical planning and navigation in real-time to a dynamic intraoperative environment. For example, U.S. patent application Ser. No. 10/470,314 (Pub. No. US 2004/0128026), which is hereby incorporated by reference herein in its entirety, discloses an interactive robotic system programmed with a three-dimensional virtual region of constraint that is registered to a patient. The robotic system includes a three degree of freedom (3-DOF) arm having a handle that incorporates force sensors. The surgeon utilizes the handle to manipulate the arm to move the cutting tool. Moving the arm via the handle is required so that the force sensors can measure the force being applied to the handle by the surgeon. The measured force is then used in controlling motors to assist or resist movement of the cutting tool. For example, during a knee replacement operation, the femur and tibia of the patient are fixed in position relative to the robotic system. As the surgeon applies force to the handle to move the cutting tool, the interactive robotic system may apply an increasing degree of resistance to resist movement of the cutting tool as the cutting tool approaches a boundary of the virtual region of constraint. In this manner, the robotic system guides the surgeon in preparing the bone by maintaining the cutting tool within the virtual region of constraint. As with the above-described autonomous systems, however, the interactive robotic system functions primarily to enhance bone machining. The interactive robotic system also requires the relevant anatomy to be rigidly restrained and the robotic system to be fixed in a gross position and thus lacks real-time adaptability to the intraoperative scene. Moreover, the 3-DOF configuration of the arm and the requirement that the surgeon manipulate the arm using the force handle results in limited flexibility and dexterity, making the robotic system unsuitable for certain MIS applications.
In view of the foregoing, a need exists for a surgical system that can replace direct visualization in minimally invasive surgery, spare healthy bone in orthopedic joint replacement applications, enable intraoperative adaptability and surgical planning, and produce operative results that are sufficiently predictable, repeatable, and/or accurate regardless of surgical skill level. A surgical system need not meet all (or any) of these needs to be an advance, though a system meeting these needs would be more desirable.
SUMMARY OF THE INVENTION
An aspect of the present invention relates to a surgical apparatus. The surgical apparatus includes a computer system and a surgical device configured to be manipulated by a user to perform a procedure on a patient. The computer system is programmed to implement control parameters for controlling the surgical device to provide at least one of haptic guidance to the user and a limit on user manipulation of the surgical device, based on a relationship between an anatomy of the patient and at least one of a position, an orientation, a velocity, and an acceleration of a portion of the surgical device, and to adjust the control parameters in response to movement of the anatomy during the procedure.
Another aspect of the present invention relates to a surgical apparatus. The surgical apparatus includes a haptic device configured to be manipulated by a user to perform a procedure on a patient. The haptic device includes at least one feedback mechanism configured to supply feedback to the user manipulating the haptic device. The surgical apparatus also includes a computer system programmed to implement control parameters for controlling the at least one feedback mechanism to provide haptic guidance to the user, while the user manipulates the haptic device, based on a relationship between an anatomy of the patient and at least one of a position, an orientation, a velocity, and an acceleration of a portion of the haptic device.
Yet another aspect of the present invention relates to a surgical method. The surgical method includes creating a representation of an anatomy of a patient; associating the anatomy and a surgical device with the representation of the anatomy; manipulating the surgical device to perform a procedure on a patient by moving a portion of the surgical device in a region of the anatomy; controlling the surgical device to provide at least one of haptic guidance and a limit on manipulation of the surgical device, based on a relationship between the representation of the anatomy and at least one of a position, an orientation, a velocity, and an acceleration of a portion of the surgical device; and adjusting the representation of the anatomy in response to movement of the anatomy during the procedure.
Yet another aspect of the present invention relates to a surgical method. The surgical method includes creating a representation of an anatomy of a patient; associating the anatomy and a haptic device with the representation of the anatomy; and manipulating the haptic device to perform a procedure on a patient by moving a portion of the haptic device in a region of the anatomy, where the haptic device includes at least one feedback mechanism configured to supply feedback during manipulation. The surgical method further includes controlling the at least one feedback mechanism to provide haptic guidance, during manipulation of the haptic device, based on a relationship between the representation of the anatomy of the patient and at least one of a position, an orientation, a velocity, and an acceleration of a portion of the haptic device.
Yet another aspect of the present invention relates to a method for joint replacement. The method includes creating a representation of a first bone; creating a representation of a second bone; planning bone preparation for implanting a first implant on the first bone; preparing the first bone to receive the first implant by manipulating a surgical tool to sculpt the first bone; planning bone preparation for implanting a second implant on the second bone after preparing the first bone; and preparing the second bone to receive the second implant by manipulating the surgical tool to sculpt the second bone.
Yet another aspect of the present invention relates to a surgical planning method. The surgical planning method includes detecting a height of a cartilage surface above a bone; creating a representation of the bone and a representation of the height of the cartilage surface; and planning bone preparation for implanting an implant on the bone based at least in part on the detected height of the cartilage surface.
Yet another aspect of the present invention relates to a surgical planning method. The surgical planning method includes creating a representation of a bone of a joint; moving the joint to a first position; identifying a first point corresponding to a first location in the joint, when the joint is in the first position; moving the joint to a second position; identifying a second point corresponding to a second location in the joint, when the joint is in the second position; and planning bone preparation for implanting an implant on the bone based at least in part on the first and second points.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain principles of the invention.
Presently preferred embodiments of the invention are illustrated in the drawings. Although this specification refers primarily to orthopedic procedures involving the knee joint, it should be understood that the subject matter described herein is applicable to other joints in the body, such as, for example, a shoulder, elbow, wrist, spine, hip, or ankle and to any other orthopedic and/or musculoskeletal implant, including implants of conventional materials and more exotic implants, such as orthobiologics, drug delivery implants, and cell delivery implants.
The computing system 20 includes hardware and software for operation and control of the surgical system 10. As shown in
The computer 21 may be any known computing system but is preferably a programmable, processor-based system. For example, the computer 21 may include a microprocessor, a hard drive, random access memory (RAM), read only memory (ROM), input/output (I/O) circuitry, and any other well-known computer component. The computer 21 is preferably adapted for use with various types of storage devices (persistent and removable), such as, for example, a portable drive, magnetic storage (e.g., a floppy disk), solid state storage (e.g., a flash memory card), optical storage (e.g., a compact disc or CD), and/or network/Internet storage. The computer 21 may comprise one or more computers, including, for example, a personal computer (e.g., an IBM-PC compatible computer) or a workstation (e.g., a SUN or Silicon Graphics workstation) operating under a Windows, MS-DOS, UNIX, or other suitable operating system and preferably includes a graphical user interface (GUI). In one embodiment, the computer 21 includes a Navigation Module available from MAKO SURGICAL CORP.™ and identified by product number 0040TAS00001.
The display device 23 is a visual interface between the computing system 20 and the user. The display device 23 is connected to the computer 21 and may be any device suitable for displaying text, images, graphics, and/or other visual output. For example, the display device 23 may include a standard display screen (e.g., LCD, CRT, plasma, etc.), a touch screen, a wearable display (e.g., eyewear such as glasses or goggles), a projection display, a head-mounted display, a holographic display, and/or any other visual output device. The display device 23 may be disposed on or near the computer 21 (e.g., on the cart 29 as shown in
In addition to the display device 23, the computing system 20 may include an acoustic device (not shown) for providing audible feedback to the user. The acoustic device is connected to the computer 21 and may be any known device for producing sound. For example, the acoustic device may comprise speakers and a sound card, a motherboard with integrated audio support, and/or an external sound controller. In operation, the acoustic device may be adapted to convey information to the user. For example, the computer 21 may be programmed to signal the acoustic device to produce a sound, such as a voice synthesized verbal indication "DONE," to indicate that a step of a surgical procedure is complete. Similarly, the acoustic device may be used to alert the user to a sensitive condition, for example, by producing a beep to indicate that a surgical cutting tool is nearing a critical portion of soft tissue.
The input device 25 of the computing system 20 enables the user to communicate with the surgical system 10. The input device 25 is connected to the computer 21 and may include any device enabling a user to provide input to a computer. For example, the input device 25 can be a known input device, such as a keyboard, a mouse, a trackball, a touch screen, a touch pad, voice recognition hardware, dials, switches, buttons, a trackable probe, a foot pedal, a remote control device, a scanner, a camera, a microphone, and/or a joystick.
The computing system 20 (in whole or in part) may be disposed on the cart 29 to economize space, minimize a physical footprint of the computing system 20, and/or permit portability. The cart 29 may be, for example, a known cart, platform, or equipment stand and is preferably configured for ease of mobility of the computing system 20. For example, as shown in
The computing system 20 is adapted to enable the surgical system 10 to perform various functions related to surgical planning, navigation, image guidance, and/or haptic guidance. For example, the computer 21 may include algorithms, programming, and software utilities related to general operation, data storage and retrieval, computer aided surgery (CAS), applications, haptic control, and/or any other suitable functionality. In one embodiment, the computing system 20 includes software used in a Navigation Module currently available from MAKO SURGICAL CORP.™ and identified by product number 0040TAS00001.
Utilities related to general operation are configured to provide basic computing functions that enable and support overall operation of the surgical system 10. General operation utilities may include, for example, well known features such as functions for fast graphics processing, functions for supporting input/output (I/O) devices, functions for connecting to a hospital network, functions for managing database libraries (e.g., implant and instrument databases), functions for system security (e.g., login features, access restrictions, etc.), and/or any other functionality useful for supporting overall operation of the surgical system 10.
Utilities related to data storage and retrieval are configured to enable storage of and access to various forms of data, such as image data (e.g., two- or three-dimensional image data sets obtained using any suitable imaging modality, such as, for example, x-ray, computed tomography (CT), magnetic resonance (MR), positron emission tomography (PET), single photon emission computed tomography (SPECT), ultrasound, etc.), application data, implant data, instrument data, anatomical model data, patient data, user preference data, and the like. The data storage and retrieval utilities may include any functionality appropriate for storing and handling relevant data.
Utilities related to computer aided surgery are configured to enable surgical planning, navigation, and basic image guided surgery capabilities. For example, as is well known, the CAS utilities may include functions for generating and displaying images from image data sets, functions for determining a position of a tip and an orientation of an axis of a surgical instrument, and functions for registering a patient and an image data set to a coordinate frame of the tracking system 40. These functions enable, for example, the computing system 20 to display on the display device 23 a virtual representation of a tracked surgical instrument overlaid on one or more images of a patient's anatomy and to update the virtual representation of the tracked instrument in real time during a surgical procedure. Images generated from the image data set may be two-dimensional or, in the case of a three-dimensional image data set, a three-dimensional reconstruction based, for example, on segmentation of the image data set. When more than one image is shown on the display device 23, the computing system 20 preferably coordinates the representation of the tracked instrument among the different images. In addition to or in lieu of images generated from image data sets, the computing system 20 may use anatomical models (e.g., based on CAD models, line art, sketches, cartoons, artist renderings, generic or morphed data sets, etc.).
Utilities related to applications of the surgical system 10 include application specific programs configured to assist the user with surgical planning and navigation. Programs associated with the application utilities may be configured for use in various medical procedures and/or may be customized for a specific procedure. For example, the application utilities may include programs related to one or more orthopedic procedures, such as, for example, total knee replacement, partial knee replacement, hip replacement, shoulder replacement, elbow replacement, wrist replacement, ankle replacement, spinal surgery, and/or installation of orthopedic and/or musculoskeletal implants, including implants of conventional materials and more exotic implants, such as orthobiologics, drug delivery implants, and cell delivery implants. The application utilities may be directed to various aspects of surgical planning and navigation, including pre-operative, intra-operative, and post-operative activities. For example, the application utilities may include programs or processes directed to planning and set up, such as, for example, system initialization processes, planning processes, visualization processes, diagnostic imaging processes, registration processes, and calibration processes. The application utilities may also include programs or processes directed to object tracking and system control, such as, for example, coordinate transform processes, interpolation processes, tool and power control processes, anatomy positioning processes, mode control processes, safety processes, occlusion detection algorithms, and forward kinematics algorithms. The application utilities may include programs or processes related to the haptic device 30, such as, for example, haptic force computation processes, haptic force mapping processes, processes for generating haptic objects, and haptic rendering algorithms. The application utilities may also include programs and processes for communicating with the user during a surgical procedure, such as, for example, software for displaying pages or images corresponding to specific steps of a surgical procedure, software for prompting a user to perform a certain task, and software for providing feedback (e.g., visual, audible, tactile, and/or force feedback) to the user.
Utilities related to haptic control are configured to perform various functions related to control, performance, stability, and/or safety of the haptic device 30. For example, the haptic control utilities may include a real time operating system (RTOS), motion control software, hardware and software for generating high frequency updates for control of the haptic device 30, software for ensuring failsafe operation of the haptic device 30 (e.g., control of brakes, monitoring of redundant sensors, etc.), and/or any other utility suitable for improving or promoting performance, stability, and/or safety of the haptic device 30. The haptic control utilities may be executed on the computer 21 of the computing system 20 provided the computer 21 has a computing architecture sufficient to support the operating requirements of the haptic control utilities. For example, processes associated with haptic control typically have higher operational frequency requirements than other processes running on the computer 21. In one embodiment, the haptic control processes operate at a frequency of approximately 2 kHz. In another embodiment, the haptic control processes operate at a frequency in a range of between about 0.1 kHz and about 10 kHz. In yet another embodiment, the haptic control processes operate at a frequency in a range of between about 500 Hz and about 2,400 Hz. In contrast, the computer 21 may operate at a substantially lower frequency, such as, for example, a frequency in a range of about 15 Hz to about 20 Hz. In another embodiment, the frequency of the computer 21 may be in a range of between about 2 Hz and about 60 Hz. In other embodiments, the frequency of the computer 21 may be substantially equivalent to the operating frequency required by the haptic control processes (e.g., approximately 2 kHz). If the computer 21 does not have an architecture sufficient to support operation of the haptic control processes, the computing system 20 may include a computer 31 for execution of the haptic control utilities. In a preferred embodiment, the computer 31 is integrated or embedded with the haptic device 30.
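By way of a non-limiting illustration, the following Python sketch shows the dual-rate architecture described above: a high-frequency haptic control loop running alongside a much slower display loop. The loop rates reflect the embodiments above, but all names, the placeholder force computation, and the use of ordinary threads (rather than an RTOS with deterministic scheduling) are illustrative assumptions.

    import threading
    import time

    HAPTIC_RATE_HZ = 2000   # ~2 kHz haptic control loop, per the embodiment above
    UI_RATE_HZ = 20         # ~20 Hz navigation/display loop

    shared_pose = {"tool_tip": (0.0, 0.0, 0.0)}   # written by the tracking interface
    lock = threading.Lock()
    running = True

    def haptic_loop():
        # High-frequency loop; a real system would run this under an RTOS with
        # deterministic scheduling rather than relying on time.sleep().
        while running:
            with lock:
                tip = shared_pose["tool_tip"]
            force = tuple(-1000.0 * c for c in tip)  # placeholder force computation
            time.sleep(1.0 / HAPTIC_RATE_HZ)

    def ui_loop():
        # Low-frequency loop for updating the display.
        while running:
            with lock:
                tip = shared_pose["tool_tip"]
            # ... redraw the virtual tool at `tip` here ...
            time.sleep(1.0 / UI_RATE_HZ)

    threading.Thread(target=haptic_loop, daemon=True).start()
    threading.Thread(target=ui_loop, daemon=True).start()
    time.sleep(0.2)   # let the loops run briefly for this example
    running = False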
The computer 31 (shown in
In addition to the haptic control utilities, the computer 31 may include programs that enable the haptic device 30 to utilize data from the tracking system 40. For example, the tracking system 40 may generate tracked object pose (e.g., position and orientation) data periodically. In one embodiment, the object pose data is generated approximately 30 times a second (i.e., at 30 Hz). In other embodiments, object pose data is generated more frequently, such as, for example, at approximately 500 Hz or greater. The object pose data is transferred from the tracking system 40 to the computer 31 (e.g., via an interface 100b) and may be conditioned in any conventional manner, such as, for example, using a noise filter as is well known. Additionally, in embodiments where the tracking system 40 operates at a lower frequency than the haptic control processes, the object pose data may be conditioned using an interpolation filter as is well known. The interpolation filter smoothes the object pose data by populating gaps between discrete data samples to enable the object pose data to be used in the higher frequency haptic control processes. The computer 31 may also include a coordinate transform process for mapping (or transforming) coordinates in one space to those in another to achieve spatial alignment or correspondence. For example, the surgical system 10 may use the coordinate transform process to map positions of tracked objects (e.g., surgical tools, patient anatomy, etc.) into a coordinate system used by a process running on the computer 31 and/or the computer 21. As is well known, the coordinate transform process may include any suitable transformation technique, such as, for example, rigid-body transformation, non-rigid transformation, affine transformation, and the like.
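As a non-limiting sketch of the interpolation filter described above (the coordinate transform process is treated separately below), the following assumes simple linear interpolation of position samples; a production system would also interpolate orientation (e.g., by quaternion interpolation) and filter sensor noise.

    def interpolate_position(t, t0, p0, t1, p1):
        """Linearly interpolate a 3-D position between two tracker samples.

        t0, p0 -- time (s) and position of the earlier ~30 Hz tracker sample
        t1, p1 -- time (s) and position of the later sample
        t      -- query time from the ~2 kHz haptic loop, with t0 <= t <= t1
        """
        alpha = (t - t0) / (t1 - t0)
        return tuple(a + alpha * (b - a) for a, b in zip(p0, p1))

    # Tracker samples arrive every 1/30 s; the haptic loop queries between them.
    p = interpolate_position(t=0.020, t0=0.0, p0=(0.0, 0.0, 0.0),
                             t1=1.0 / 30.0, p1=(1.0, 2.0, 3.0))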
One advantage of including multiple computers (e.g., the computer 21 and the computer 31) in the computing system 20 is that each computer can be independently configured. Thus, the computer 21 can be customized for surgical planning and navigation, and the computer 31 can be customized for controlling performance, stability, and/or safety of the haptic device 30. For example, the computer 31 may include a real time operating system (RTOS) to maintain dependable updates to the haptic control system and a stable operating platform for the haptic device 30. In contrast, the computer 21 may include a non-RTOS because the computing system 20 may not require the same degree of stability as the haptic device 30. Thus, the computer 21 may instead be customized to meet specific requirements of surgical navigation, such as, for example, graphics processing. Another advantage of multiple computers having separate computing architectures is that software developers with limited knowledge of haptic systems can create CAS utilities for the computer 21 that can be used in conjunction with a variety of haptic devices. Similarly, software developers with limited knowledge of CAS can create haptic utilities focused on enhancing the performance, stability, and/or safety of a particular haptic device. As an alternative to separate computers, the computing functions of the haptic device 30 and the computing system 20 may be incorporated, for example, into a single computer (e.g., the computer 21 or the computer 31), into a computing system of an imaging device (e.g., a CT device, an MRI device, a fluoroscopic device, etc.), and/or into a hospital computing system (e.g., a network system, an equipment cart in a room where the surgical procedure will be performed, etc.).
As shown in
The haptic device 30 is a surgical device configured to be manipulated by a user to move a surgical tool 50 to perform a procedure on a patient. During the procedure, the computing system 20 implements control parameters for controlling the haptic device 30 based, for example, on a relationship between an anatomy of the patient and a position, an orientation, a velocity, and/or an acceleration of a portion of the haptic device 30 (e.g., the surgical tool 50). In one embodiment, the haptic device 30 is controlled to provide a limit on user manipulation of the device (e.g., by limiting the user's ability to physically manipulate the haptic device 30). In another embodiment, the haptic device 30 is controlled to provide haptic guidance (i.e., tactile and/or force feedback) to the user. "Haptic" refers to a sense of touch, and the field of haptics involves research relating to human interactive devices that provide tactile and/or force feedback to an operator. Tactile feedback generally includes tactile sensations such as, for example, vibration, whereas force feedback refers to feedback in the form of force (e.g., resistance to movement) and/or torque (also known as "wrench"). Wrench includes, for example, feedback in the form of force, torque, or a combination of force and torque.
Guidance from the haptic device 30 coupled with computer aided surgery (CAS) enables a surgeon to actively and accurately control surgical actions (e.g., bone cutting) and delivery of localized therapies (e.g., in the brain). For example, the computing system 20 may be programmed to determine the control parameters based on data representative of a patient's anatomy (e.g., preoperative CT image data, ultrasound data); a virtual (or haptic) object associated with (or registered to) the anatomy; a parameter relative to the anatomy (e.g., a depth defined with respect to a portion of the anatomy); and/or the anatomy. The computing system 20 can control the haptic device 30 to generate a force, a torque, and/or vibration based on the position of the tool 50 relative to the virtual object, the parameter, and/or the anatomy. For example, the tool 50 may be constrained against penetrating a virtual boundary associated with a representation of the anatomy and/or constrained against exceeding a parameter defined with respect to the representation of the anatomy. Thus, in operation, as a surgeon manipulates the haptic device 30 to move the tool 50, virtual pathways may be used to guide the tool 50 to specific targets, virtual boundaries may be used to define cutting shapes or to prevent the tool 50 from contacting critical tissue, and predefined parameters may be used to limit travel of the tool 50 (e.g., to a predefined depth). The computing system 20 may also be programmed to adjust the control parameters in response to movement of the physical anatomy during the procedure (e.g., by monitoring detected movement of the physical anatomy and then adjusting the virtual object in response to the detected movement). In this manner, the surgical system 10 can supplement or replace direct visualization of the surgical site, enhance the surgeon's natural tactile sense and physical dexterity, and facilitate the targeting, repairing, and replacing of various structures in the body through portals ranging from conventionally sized incisions (e.g., 12 inches or greater in length) down to portals having a diameter as small as approximately 1 mm.
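By way of illustration, a virtual pathway of the kind described above can be rendered as a spring-like force that draws the tool tip toward the closest point on a target segment. The following Python sketch uses an assumed stiffness gain and geometry; it is not the specific force mapping of the described system.

    import numpy as np

    def pathway_force(tip, seg_a, seg_b, k=500.0):
        """Spring-like force (N) drawing the tool tip toward segment seg_a-seg_b."""
        tip, a, b = (np.asarray(v, dtype=float) for v in (tip, seg_a, seg_b))
        ab = b - a
        # Parameter of the closest point on the segment, clamped to [0, 1].
        s = np.clip(np.dot(tip - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        closest = a + s * ab
        return k * (closest - tip)   # Hooke's-law pull toward the pathway

    # A tool tip slightly off a 100 mm straight pathway is pulled back toward it.
    f = pathway_force(tip=(0.01, 0.02, 0.05), seg_a=(0, 0, 0), seg_b=(0, 0, 0.1))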
In orthopedic applications, for example, the haptic device 30 can be applied to the problems of inaccuracy, unpredictability, and non-repeatability in bone preparation by assisting the surgeon with proper sculpting of bone to thereby enable precise, repeatable bone resections while maintaining intimate involvement of the surgeon in the bone preparation process. Moreover, because the haptic device 30 haptically guides the surgeon in the bone cutting operation, the skill level of the surgeon is less critical. As a result, surgeons with varying degrees of skill and experience are able to perform accurate, repeatable bone resections. In one embodiment, for example, a surgical tool is coupled to the haptic device 30. The surgeon can operate the tool to sculpt bone by grasping and moving the tool and/or by grasping and manipulating the haptic device 30 to move the tool. As the surgeon performs the cutting operation, the surgical system 10 tracks the location of the tool (with the tracking system 40) and, in most cases, allows the surgeon to freely move the tool in the workspace. When the tool is in proximity to a virtual boundary in registration with the patient, however, the surgical system 10 controls the haptic device 30 to provide haptic guidance that tends to constrain the surgeon from penetrating the virtual boundary with the tool. For example, the virtual boundary may be defined by a haptic object, and the haptic guidance may comprise an output wrench (i.e., force and/or torque) that is mapped to the haptic object and experienced by the surgeon as resistance to further tool movement in the direction of the virtual boundary. Thus, the surgeon may feel as if the tool has encountered a physical object, such as a wall. In this manner, the virtual boundary functions as a virtual cutting guide. Thus, the haptic device 30 communicates information to the surgeon regarding the location of the tool relative to the virtual boundary and provides physical guidance in the actual cutting process. The haptic device 30 may also be configured to limit the user's ability to manipulate the surgical tool as described, for example, in U.S. patent application Ser. No. 10/470,314 (Pub. No. US 2004/0128026), which is hereby incorporated by reference herein in its entirety.
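The following sketch illustrates one simple penalty-based way a virtual boundary could produce the wall-like resistance described above: the tool moves freely within a spherical haptic object, and an opposing spring force builds as the tip penetrates the boundary. The gain and geometry are illustrative assumptions, not parameters of the described system.

    import numpy as np

    def boundary_force(tip, center, radius, k=2000.0):
        """Restoring force (N) resisting penetration of a spherical virtual boundary."""
        tip = np.asarray(tip, dtype=float)
        center = np.asarray(center, dtype=float)
        offset = tip - center
        dist = np.linalg.norm(offset)
        depth = dist - radius            # > 0 only when the tip is past the boundary
        if dist == 0.0 or depth <= 0.0:
            return np.zeros(3)           # free motion inside the haptic object
        normal = offset / dist
        return -k * depth * normal       # wrench component pushing back inside

    # A tip 2 mm past a 25 mm spherical boundary feels a force back toward it.
    f = boundary_force(tip=(0.0, 0.0, 0.027), center=(0.0, 0.0, 0.0), radius=0.025)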
The haptic device 30 may include a mechanical or electromechanical device adapted to transmit tactile feedback (e.g., vibration) and/or force feedback (e.g., wrench) to the user. The haptic device 30 may be robotic, non-robotic, or a combination of robotic and non-robotic systems. For example, the haptic device 30 may include a haptic device as described in U.S. patent application Ser. No. 10/384,072, filed Mar. 6, 2003, published Feb. 5, 2004; U.S. patent application Ser. No. 10/384,077, filed Mar. 6, 2003, published Feb. 19, 2004; U.S. patent application Ser. No. 10/384,078, filed Mar. 6, 2003, published Feb. 19, 2004; U.S. patent application Ser. No. 10/384,194, filed Mar. 6, 2003, published Feb. 19, 2004; U.S. patent application Ser. No. 10/621,119, filed Jul. 16, 2003, published Jun. 3, 2004; and/or U.S. Provisional Patent Application Ser. No. 60/655,642, filed Feb. 22, 2005. Each of the above-referenced published applications is hereby incorporated by reference herein in its entirety.
In one embodiment, the haptic device 30 comprises a robot. In this embodiment, as shown in
The base 32 provides a foundation for the haptic device 30. As shown in
The arm 33 is disposed on the base 32 and is adapted to enable the haptic device 30 to be manipulated by the user. The arm 33 may be any suitable mechanical or electromechanical structure but is preferably an articulated arm having four or more degrees of freedom (or axes of movement), such as, for example, a robotic arm known as the “Whole-Arm Manipulator” or WAM™ currently manufactured by Barrett Technology, Inc. In one embodiment, the arm 33 includes a first segment 33a, a second segment 33b, and a third segment 33c as shown in
Dexterity of the arm 33 may be enhanced, for example, by adding additional degrees of freedom. For example, the arm 33 may include a wrist 36. As shown in
The arm 33 incorporates a feedback mechanism to enable the haptic device 30 to communicate information to the user while the user manipulates the haptic device 30. In operation, the computing system 20 controls the feedback mechanism to generate and convey tactile and/or force feedback to the user to communicate, for example, information about a location of a portion of the haptic device (e.g., the tool 50) relative to a virtual object, a parameter relative to the anatomy, and/or the anatomy. The feedback mechanism is preferably configured to produce force, torque, and/or vibration. The feedback mechanism may incorporate a drive system (not shown) comprising one or more actuators (e.g., motors) and a mechanical transmission. The actuators are preferably adapted to supply force feedback opposing the user's manipulation of the haptic device 30. The actuators may include, for example, a samarium-cobalt brushless motor driven by sinusoidally-commutated current amplifier/controllers, a neodymium-iron brushless motor driven by space-vector-commutated current amplifier/controllers, and/or any other motor and commutation scheme suitable for use in a robotic system. The transmission may be, for example, a tension-element drive system (e.g., a cable, steel tape, or polymeric tendon transmission), a direct drive system, and/or any other low static friction and low backlash transmission system suitable for use in a robotic system. In an exemplary embodiment, the drive system includes a high-speed cable transmission and zero backlash, low friction, cabled differentials. In one embodiment, the cable transmission may be a cable transmission used in the WAM™ robotic arm manufactured by Barrett Technology, Inc. and/or a cable transmission as described in U.S. Pat. No. 4,903,536, which is hereby incorporated by reference herein in its entirety. One advantage of a cable transmission is that the cable transmission permits most of the bulk of the arm 33 to be disposed a sufficient distance from the surgical site so that the user is not hindered or impeded by the structure or components of the arm 33 during a surgical procedure. The drive system is preferably configured for low friction, low inertia, high stiffness, large bandwidth, near-zero backlash, force fidelity, and/or backdrivability and may also be adapted to help maintain the arm 33 in a state where the user perceives the arm 33 as weightless. For example, in one embodiment, the arm 33 may have a configuration that is substantially balanced. Any imbalance in the arm (e.g., due to gravitational effects) can be counteracted, for example, by controlling the drive system to generate forces and/or torques to correct the imbalanced condition. The motors of the drive system may also be configured to produce oscillations or vibrations so that the haptic device 30 can provide tactile feedback to the user. In addition to the drive system, the feedback mechanism may also include a vibratory device, such as an oscillator, separate from the motors for producing vibration.
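As a non-limiting example of the gravity-counteraction idea mentioned above, the following computes the joint torques that cancel gravity for a simplified two-link planar arm with centers of mass at mid-link. The masses and lengths are assumptions; the actual arm 33 has more degrees of freedom and a different mass distribution.

    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def gravity_torques(q1, q2, m1=2.0, m2=1.5, l1=0.4, l2=0.3):
        """Joint torques (N*m) that cancel gravity for a two-link planar arm.

        q1, q2 -- joint angles (rad) measured from the horizontal
        m1, m2 -- link masses (kg); l1, l2 -- link lengths (m), COM at mid-link
        """
        # Torque about joint 2 from the weight of link 2 at its center of mass.
        tau2 = m2 * G * (l2 / 2.0) * math.cos(q1 + q2)
        # Torque about joint 1 from the weights of both links.
        tau1 = (m1 * G * (l1 / 2.0) * math.cos(q1)
                + m2 * G * (l1 * math.cos(q1) + (l2 / 2.0) * math.cos(q1 + q2)))
        return tau1, tau2

    tau1, tau2 = gravity_torques(math.radians(30), math.radians(45))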
The arm 33 may include position sensors (not shown) for determining a position and orientation (i.e., pose) of the arm 33. The position sensors may include any known sensor for determining or tracking a position of an object, such as, for example, encoders, resolvers, potentiometers, linear variable differential transformers (LVDTs), tilt sensors, heading (compass) sensors, gravity direction sensors (e.g., accelerometers), optical sensors (e.g., infrared, fiber optic, or laser sensors), magnetic sensors (e.g., magnetoresistive or magnetorestrictive sensors), and/or acoustic sensors (e.g., ultrasound sensors). The position sensors may be disposed at any suitable location on or within the haptic device 30. For example, the position sensors may include encoders mounted on the joints 33d and 33e and/or resolvers mounted on a shaft of each motor. The pose of the arm 33 may also be tracked using any tracking system suitable for use in a surgical environment, such as, for example, an optical, magnetic, radio, or acoustic tracking system, including the tracking system 40 described below.
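By way of illustration, joint position sensor readings can be converted to a tool-tip position through forward kinematics. The sketch below does so for a simplified two-link planar arm; a pose computation for the arm 33 would use its full kinematic chain and also recover orientation.

    import math

    def tip_position(q1, q2, l1=0.4, l2=0.3):
        """Planar tool-tip (x, y) in meters from joint angles q1, q2 (rad)."""
        x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
        y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
        return x, y

    # Encoder counts are first scaled to radians, e.g. q = counts * 2 * pi / cpr.
    x, y = tip_position(math.radians(30), math.radians(45))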
In addition to the position sensors, the arm 33 may include redundant sensors (not shown). The redundant sensors are similar to the position sensors and may be used to detect discrepancies and/or instability during operation of the haptic device 30. For example, differences in output of the redundant sensors and output of the position sensors may indicate a problem with the drive system and/or the position sensors. Redundant sensors can also improve accuracy in determining the pose of the arm 33 by providing data that enables a control system of the haptic device 30 to reduce or eliminate the effect of deflection in components of the drive system and/or the arm 33. The redundant sensors are particularly advantageous when the arm 33 includes a cable transmission.
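A minimal sketch of the redundancy check described above: each joint's primary sensor reading is compared against its redundant sensor, and joints whose readings diverge beyond a tolerance are flagged. The tolerance value is an illustrative assumption.

    def check_redundancy(primary, redundant, tol_rad=0.01):
        """Return indices of joints whose sensor pair disagrees beyond tolerance."""
        return [i for i, (p, r) in enumerate(zip(primary, redundant))
                if abs(p - r) > tol_rad]

    faults = check_redundancy([0.50, 1.20, 0.10], [0.50, 1.26, 0.10])
    if faults:
        # e.g., engage the brakes, disable the drive system, and alert the user
        print("Sensor discrepancy at joint(s):", faults)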
The end effector 35 comprises a working end of the haptic device 30 and is configured to enable the user to perform various activities related to a surgical procedure. For example, in one embodiment, the end effector 35 functions as an adapter or coupling between the arm 33 and the tool 50. By interchanging one tool 50 for another, the user can utilize the haptic device 30 for different activities, such as registration, bone preparation, measurement/verification, and/or implant installation. In one embodiment, as shown in
The end effector 35 may also be configured to enable the user to input information into the surgical system 10. For example, in one embodiment, the end effector 35 is adapted to function as an input device, such as a joystick. In this embodiment, the end effector 35 includes one or more degrees of freedom to enable joystick functionality. As shown in
The user interface 37 of the haptic device 30 enables physical interaction between the user and the haptic device 30. For example, the interface 37 may be configured so that the user can physically contact the interface 37 and manipulate the tool 50 while simultaneously receiving haptic guidance from the haptic device 30. The interface 37 may be a separate component affixed to the haptic device 30 (such as a handle or hand grip) or may simply be part of the existing structure of the haptic device 30. For example, the interface 37 may be associated with the arm 33, the end effector 35, and/or the tool 50. Because the interface 37 is affixed to or is an integral part of the haptic device 30, any tactile or force feedback output by the haptic device 30 is transmitted directly to the user when the user is in contact with the interface 37. In one embodiment, as shown in
The user interface 37 is preferably sized so that the user can easily grip the interface 37. For example, a diameter of the interface 37 may correspond to a diameter that is easily grasped by a hand and/or finger(s) of a user. The diameter of the interface 37 may be, for example, in a range of approximately 5 mm to approximately 75 mm. In one embodiment, the user interface 37 is integral with the end effector 35. In this embodiment, the end effector 35 includes one or more portions having a diameter suitable for gripping by the user. For example, a diameter of the proximal portion of the end effector 35 may be about 43 mm; a diameter of the distal portion of the end effector 35 may be about 36 mm; a diameter of the tool holder 51 may be about 19 mm; and a diameter of the tool 50 may be about 6 mm. In one embodiment, the distal portion of the end effector 35 includes a grip for the user's index finger. The interface 37 may optionally include a taper to accommodate users with different hand sizes. The interface 37 may also be shaped or contoured to mate with the contours of a user's hand and/or finger(s) and may include other ergonomic features, for example, to increase user comfort and prevent slippage (e.g., when the user's glove is wet/bloody).
One advantage of the haptic device 30 is that the user interface 37 enables the haptic device 30 to hold the tool 50 cooperatively with the user. In contrast, haptic devices used in surgical teleoperation systems have a "slave" device that exclusively holds the tool and a "master" device through which the surgeon controls the tool. The master device is typically remote from the surgical site either to permit the surgeon to perform the surgery over a distance or to provide a more ergonomic working position/environment for the surgeon. Thus, with a haptic teleoperation system, the surgeon has the disadvantage of having to rely entirely on the teleoperation system to view the surgical site and perform the surgery. In contrast, with the surgical system 10, as the user moves the tool 50 with guidance from the haptic device 30, the user remains in close physical and visual proximity to the surgical site.
Another advantage of the haptic device 30 is that it is not intended to move autonomously. In contrast, autonomous surgical robotic systems used for orthopedic joint replacement perform bone cutting autonomously with a high speed burr. Although the surgeon monitors progress of the robot and may interrupt if necessary, the surgeon is not in full control of the procedure. With the haptic device 30, however, the surgeon (as opposed to the robot) manipulates the tool 50. Thus, the surgeon maintains control of the cutting operation and receives only guidance or assistance from the haptic device 30. As a result, the surgeon is not required to cede control to the haptic device 30, which increases the surgeon's comfort level during the procedure.
As described above in connection with the computing system 20, the haptic device 30 may include the computer 31. The computer 31 may be housed in any convenient location on the surgical system 10, such as, for example, on or in a stand or equipment cabinet (e.g., the platform 39 as shown in
The haptic device 30 is preferably sized so that the haptic device 30 can fit in an operating room without impeding other equipment or movement of the user about the operating room. For example, in one embodiment, a height of the haptic device 30 (with the arm 33 in a stored or retracted position) is approximately 1.4 m, and a footprint of the haptic device 30 is in a range of between about 0.25 m2 and about 0.6 m2. In another embodiment, the footprint is in a range of between about 0.09 m2 and 0.13 m2. Similarly, the haptic device 30 preferably has a weight that enables the haptic device 30 to be moved from one location to another with relative ease. For example, in one embodiment, the weight of the haptic device 30 is in a range of approximately 100 lbs to approximately 500 lbs. In another embodiment, the weight of the haptic device 30 is in a range of approximately 50 lbs to approximately 200 lbs. The haptic device 30 preferably has a low weight and small size both for ease of mobility and to permit the haptic device 30 to be optimally positioned for the surgical procedure. For example, the haptic device 30 (or any portion thereof) may be configured to rest on a floor of an operating room, to be mounted on the operating table (or other piece of equipment in the operating room), or to be affixed to a bone of the patient.
As shown in
As shown in
The tracking (or localizing) system 40 of the surgical system 10 is configured to determine a pose (i.e., position and orientation) of one or more objects during a surgical procedure to detect movement of the object(s). For example, the tracking system 40 may include a detection device that obtains a pose of an object with respect to a coordinate frame of reference of the detection device. As the object moves in the coordinate frame of reference, the detection device tracks the pose of the object to detect (or enable the surgical system 10 to determine) movement of the object. As a result, the computing system 20 can adjust the control parameters (e.g., by adjusting a virtual object) in response to movement of the tracked object. Tracked objects may include, for example, tools/instruments, patient anatomy, implants/prosthetic devices, and components of the surgical system 10. Using pose data from the tracking system 40, the surgical system 10 is also able to register (or map or associate) coordinates in one space to those in another to achieve spatial alignment or correspondence (e.g., using a coordinate transformation process as is well known). Objects in physical space may be registered to any suitable coordinate system, such as a coordinate system being used by a process running on the computer 21 and/or the computer 31. For example, utilizing pose data from the tracking system 40, the surgical system 10 is able to associate the physical anatomy and the tool 50 (and/or the haptic device 30) with a representation of the anatomy (such as an image displayed on the display device 23). Based on tracked object and registration data, the surgical system 10 may determine, for example, (a) a spatial relationship between the image of the anatomy and the relevant anatomy and (b) a spatial relationship between the relevant anatomy and the tool 50 so that the computing system 20 can superimpose (and continually update) a virtual representation of the tool 50 on the image, where the relationship between the virtual representation and the image is substantially identical to the relationship between the tool 50 and the actual anatomy. Additionally, by tracking not only the tool 50 but also the relevant anatomy, the surgical system 10 can compensate for movement of the relevant anatomy during the surgical procedure (e.g., by adjusting a virtual object in response to the detected movement).
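By way of a non-limiting illustration, the registration bookkeeping described above can be expressed as a chain of homogeneous transforms that maps the tool tip into image coordinates so a virtual tool can be superimposed on the image. The transform names and values below are hypothetical placeholders.

    import numpy as np

    def make_transform(R, t):
        """Build a 4x4 homogeneous transform from rotation R and translation t."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Hypothetical poses: camera->tool and camera->anatomy from the tracker, and
    # an anatomy->image transform from a previously performed registration.
    T_cam_tool = make_transform(np.eye(3), [0.10, 0.00, 0.50])
    T_cam_anat = make_transform(np.eye(3), [0.00, 0.05, 0.45])
    T_anat_image = make_transform(np.eye(3), [0.00, 0.00, 0.00])

    # tool-in-image = (anatomy->image) * inverse(camera->anatomy) * (camera->tool)
    T_image_tool = T_anat_image @ np.linalg.inv(T_cam_anat) @ T_cam_tool
    tip_in_image = T_image_tool @ np.array([0.0, 0.0, 0.0, 1.0])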
Registration may include any known registration technique, such as, for example, image-to-image registration (e.g., monomodal registration where images of the same type or modality, such as fluoroscopic images or MR images, are registered and/or multimodal registration where images of different types or modalities, such as MRI and CT, are registered); image-to-physical space registration (e.g., image-to-patient registration where a digital data set of a patient's anatomy obtained by conventional imaging techniques is registered with the patient's actual anatomy); and/or combined image-to-image and image-to-physical-space registration (e.g., registration of preoperative CT and MRI images to an intraoperative scene).
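As one well-known example of image-to-physical-space registration (not necessarily the method used by the surgical system 10), the rigid transform that best aligns probed physical landmarks with corresponding image points can be found in closed form by the SVD-based (Kabsch) method:

    import numpy as np

    def register_points(physical, image):
        """Least-squares rigid transform mapping physical points onto image points."""
        P = np.asarray(physical, dtype=float)
        Q = np.asarray(image, dtype=float)
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cq - R @ cp
        return R, t                                # image point ~ R @ physical point + t

    # Example: four corresponding landmark pairs related by a pure translation.
    phys = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0), (0, 0, 0.1)]
    img = [(1, 2, 3), (1.1, 2, 3), (1, 2.1, 3), (1, 2, 3.1)]
    R, t = register_points(phys, img)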
The tracking system 40 may be any tracking system that enables the surgical system 10 to continually determine (or track) a pose of the relevant anatomy of the patient and a pose of the tool 50 (and/or the haptic device 30). For example, the tracking system 40 may comprise a non-mechanical tracking system, a mechanical tracking system, or any combination of non-mechanical and mechanical tracking systems suitable for use in a surgical environment. The non-mechanical tracking system may include an optical (or visual), magnetic, radio, or acoustic tracking system. Such systems typically include a detection device adapted to locate, in a predefined coordinate space, specially recognizable trackable elements (or trackers) that are detectable by the detection device and that are either configured to be attached to the object to be tracked or are an inherent part of the object to be tracked. For example, a trackable element may include an array of markers having a unique geometric arrangement and a known geometric relationship to the tracked object when the trackable element is attached to the tracked object. The known geometric relationship may be, for example, a predefined geometric relationship between the trackable element and an endpoint and axis of the tracked object. Thus, the detection device can recognize a particular tracked object, at least in part, from the geometry of the markers (if unique), an orientation of the axis, and a location of the endpoint within a frame of reference deduced from positions of the markers. The markers may include any known marker, such as, for example, extrinsic markers (or fiducials) and/or intrinsic features of the tracked object. Extrinsic markers are artificial objects that are attached to the patient (e.g., markers affixed to skin, markers implanted in bone, stereotactic frames, etc.) and are designed to be visible to and accurately detectable by the detection device. Intrinsic features are salient and accurately locatable portions of the tracked object that are sufficiently defined and identifiable to function as recognizable markers (e.g., landmarks, outlines of anatomical structure, shapes, colors, or any other sufficiently recognizable visual indicator). The markers may be located using any suitable detection method, such as, for example, optical, electromagnetic, radio, or acoustic methods as are well known. For example, an optical tracking system having a stationary stereo camera pair sensitive to infrared radiation may be used to track markers that emit infrared radiation either actively (such as a light emitting diode or LED) or passively (such as a spherical marker with a surface that reflects infrared radiation). Similarly, a magnetic tracking system may include a stationary field generator that emits a spatially varying magnetic field sensed by small coils integrated into the tracked object.
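The following sketch illustrates one simple way a tracker could be recognized from the unique geometric arrangement of its markers: the sorted set of inter-marker distances serves as a signature that is matched against known arrays. The geometry and tolerance are illustrative assumptions, not the recognition method of any particular tracking system.

    import itertools
    import math

    def distance_signature(markers):
        """Sorted pairwise distances between a tracker's marker positions."""
        return sorted(math.dist(a, b) for a, b in itertools.combinations(markers, 2))

    # Hypothetical known array geometries (meters), one per tracker type.
    KNOWN_ARRAYS = {
        "anatomy_tracker": [(0, 0, 0), (0.05, 0, 0), (0, 0.08, 0), (0.05, 0.08, 0.02)],
    }

    def identify(detected_markers, tol=0.002):
        """Match detected markers to a known array by distance signature."""
        sig = distance_signature(detected_markers)
        for name, geometry in KNOWN_ARRAYS.items():
            ref = distance_signature(geometry)
            if len(ref) == len(sig) and all(abs(a - b) < tol for a, b in zip(ref, sig)):
                return name
        return None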
In one embodiment, as shown in
Because the non-mechanical tracking system relies on an ability of the detection device 41 to optically “see” the markers, the detection device 41 and the tracker should be positioned so that a clear line of sight between the detection device 41 and the markers is maintained during the surgical procedure. As a safeguard, the surgical system 10 is preferably configured to alert a user if the detection device 41 is unable to detect the tracker during the procedure (e.g., when the line of sight between the detection device 41 and one or more of the markers is blocked and/or when reflectivity of the markers is occluded). For example, the surgical system 10 may include an audible (and/or visual) alarm programmed to sound (and/or flash) when a person steps between the markers and the detection device 41, when an object is interposed between the markers and the detection device 41, when a lens of the camera is occluded (e.g., by dust), and/or when reflectivity of the markers is occluded (e.g., by blood, tissue, dust, bone debris, etc.). The surgical system 10 may also include programming to trigger other safety features, such as, for example, an occlusion detection algorithm (discussed below in connection with step S11 of
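A minimal sketch of such a line-of-sight safeguard, assuming a hypothetical grace period: if a tracker's markers go undetected for longer than the timeout, the monitor reports an occlusion so the system can raise the alarm (and, per the safety features above, halt the tool).

    import time

    class OcclusionMonitor:
        def __init__(self, timeout_s=0.1):
            self.timeout_s = timeout_s            # assumed grace period (s)
            self.last_seen = time.monotonic()

        def update(self, markers_visible):
            """Call once per tracker frame; returns True if the tracker is occluded."""
            now = time.monotonic()
            if markers_visible:
                self.last_seen = now
            return (now - self.last_seen) > self.timeout_s

    monitor = OcclusionMonitor()
    if monitor.update(markers_visible=False):
        print("ALERT: tracker occluded -- check line of sight")  # sound/flash alarm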
The non-mechanical tracking system may include a trackable element (or tracker) for each object the user desires to track. For example, in one embodiment, the non-mechanical tracking system includes an anatomy tracker 43 (to track patient anatomy), a haptic device tracker 45 (to track a global or gross position of the haptic device 30), an end effector tracker 47 (to track a distal end of the haptic device 30), and an instrument tracker 49 (to track an instrument/tool held manually by the user).
As shown in
As shown in
As shown in
The end effector tracker 47 is adapted to enable the surgical system 10 to determine a pose of a distal end (e.g., a working end) of the haptic device 30. The end effector tracker 47 is preferably configured to be disposed on a distal end of the arm 33 or on the tool 50. For example, as shown in
In one embodiment, the end effector tracker 47 is used only during calibration of the haptic device 30 (as discussed below in connection with step S9 of
In an alternative embodiment, the end effector tracker 47 may be eliminated. In this embodiment, the haptic device tracker 45 is fixed in a permanent position on the haptic device 30. Because the haptic device tracker 45 is fixed in a permanent position on the haptic device 30, the relationship between the haptic device tracker 45 and the coordinate frame of the haptic device 30 is known. Accordingly, the surgical system 10 does not need the end effector tracker 47 for calibration to establish a relationship between the haptic device tracker 45 and the coordinate frame of the haptic device 30. In this embodiment, the haptic device tracker 45 may be rigidly mounted on the haptic device 30 in any position that permits the tracking system 40 to see the array S3 of the haptic device tracker 45, that is close enough to the surgical site so as not to degrade accuracy, and that will not hinder the user or interfere with other personnel or objects in the surgical environment.
In another alternative embodiment, the haptic device 30 is firmly locked in position. For example, the haptic device 30 may be bolted to a floor of the operating room or otherwise fixed in place. As a result, the global or gross position of the haptic device 30 does not change substantially so the surgical system 10 does not need to track the global or gross position of the haptic device 30. Thus, the haptic device tracker 45 may be eliminated. In this embodiment, the end effector tracker 47 may be used to determine an initial position of the haptic device 30 after the haptic device 30 is locked in place.
In another alternative embodiment, the tracking system 40 is attached to the haptic device 30 in a permanently fixed position. For example, the tracking system 40 (including the detection device 41) may be mounted directly on the haptic device 30 or connected to the haptic device 30 via a rigid mounting arm or bracket so that the tracking system is fixed in position with respect to the haptic device 30. In this embodiment, the haptic device tracker 45 and the end effector tracker 47 may be eliminated because a position of the tracking system 40 relative to the haptic device 30 is fixed and may be established during a calibration procedure performed, for example, during manufacture or set up of the haptic device 30.
In another alternative embodiment, the tracking system 40 is attached to the haptic device 30 in an adjustable manner. For example, the tracking system 40 (including the detection device 41) may be connected to the haptic device 30 with an arm, such as the adjustable arm 34 (described above in connection with the haptic device tracker 45) so that the tracking system 40 is moveable from a first position to a second position relative to the haptic device 30. After the arm and the tracking system 40 are locked in place, a calibration can be performed to determine a position of the tracking system 40 relative to the haptic device 30. The calibration may be performed, for example, using the end effector tracker 47.
The instrument tracker 49 is adapted to be coupled to an instrument 150 that is held manually in the hand of the user (as opposed, for example, to the tool 50 that is attached to the end effector 35). The instrument 150 may be, for example, a probe, such as a registration probe (e.g., a straight or hooked probe). As shown in
The instrument tracker 49 may also be configured to verify calibration of the instrument 150. For example, another tracker (e.g., the tracker 43, 45, or 47) may include a divot into which the user can insert the tip of the instrument 150. In one embodiment, as shown in
The tracking system 40 may additionally or alternatively include a mechanical tracking system. In contrast to the non-mechanical tracking system (which includes a detection device 41 that is remote from the trackers 43, 45, 47, and 49), a mechanical tracking system may be configured to include a detection device (e.g., an articulating arm having joint encoders) that is mechanically linked (i.e., physically connected) to the tracked object. The tracking system 40 may include any known mechanical tracking system, such as, for example, a mechanical tracking system as described in U.S. Pat. No. 6,033,415 and/or U.S. Pat. No. 6,322,567, each of which is hereby incorporated by reference herein in its entirety. In one embodiment, the tracking system 40 includes a mechanical tracking system having a jointed mechanical arm 241 (e.g., an articulated arm having six or more degrees of freedom) adapted to track a bone of the patient. As shown in
One advantage of the mechanical tracking system over a non-mechanical tracking system is that the detection device (i.e., the arm 241) is physically connected to the tracked object and therefore does not require a line of sight to “see” markers on the tracked object. Thus, the user and other personnel may freely move about the operating room during a surgical procedure without worrying about blocking an invisible line of sight between a set of markers and an optical camera. Another advantage of the mechanical tracking system is that the arm 241 may be physically connected to the haptic device 30 (e.g., to the base 32). Such a configuration eliminates the need to track a global or gross position of the haptic device 30 relative to the patient (e.g., using the haptic device tracker 45 as described above). There is no need to track the global or gross position of the haptic device 30 because the arm 241 moves with the haptic device 30. As a result, the haptic device 30 may be repositioned during a procedure without having to be recalibrated to a bone motion tracking system. Additionally, mechanical tracking systems may be more accurate than non-mechanical tracking systems and may enable faster update rates to the computer 21 and/or the computer 31. Faster update rates are possible because a mechanical tracking system is hardwired to the computer 21 and/or the computer 31. Thus, the update rate is limited only by the speed of the computer 21 and/or the computer 31.
In an alternative embodiment, the arm 241 of the mechanical tracking system may be attached to an operating table, a leg holder 62 (e.g., as shown in
When the tracking system 40 includes the mechanical tracking system, the arm 241 may be used to register the patient's anatomy. For example, the user may use the arm 241 to register the tibia T while the second arm (i.e., the arm that is identical to the arm 241 but that is affixed to the tibia T) tracks motion of the tibia T. Registration may be accomplished, for example, by pointing a tip of the distal end of the arm 241 to anatomical landmarks on the tibia T and/or by touching points on (or “painting”) a surface of the tibia T with the tip of the distal end of the arm 241. As the user touches landmarks on the tibia T and/or paints a surface of the tibia T, the surgical system 10 acquires data from the position sensors in the arm 241 and determines a pose of the tip of the arm 241. Simultaneously, the second arm provides data regarding motion of the tibia T so that the surgical system 10 can account for bone motion during registration. Based on the bone motion data and knowledge of the position of the tip of the arm 241, the surgical system 10 is able to register the tibia T to the diagnostic images and/or the anatomical model of the patient's anatomy in the computing system 20. In a similar manner, the second arm may be used to register the femur F while the arm 241 (which is affixed to the femur F) tracks motion of the femur F. The patient's anatomy may also be registered, for example, using a non-mechanical tracking system in combination with a tracked probe (e.g., the instrument 150 with the instrument tracker 49) and/or using the haptic device 30 (e.g., as described below in connection with step S8 of
As shown in
The surgical system 10 is adapted to be connected to a power source. The power source may be any known power source, such as, for example, an electrical outlet, a battery, a fuel cell, and/or a generator and may be connected to the surgical system 10 using conventional hardware (e.g., cords, cables, surge protectors, switches, battery backup/UPS, isolation transformer, etc.). The surgical system 10 preferably includes a user-activated device for manually controlling a supply of power to the tool 50. For example, the surgical system 10 may include a foot pedal (or other switching device) that can be positioned on the floor of the operating room in proximity to the user. Depressing the foot pedal causes the power source to supply power to the tool 50 (or to a compressed air supply in the case of a pneumatic tool 50). Conversely, releasing the foot pedal disrupts the flow of power to the tool 50. The surgical system 10 may also be adapted to automatically disrupt the flow of power to the tool 50 to promote safety. For example, the surgical system 10 may include programs or processes (e.g., running on the computer 21 and/or the computer 31) configured to shut off the tool 50 if a dangerous condition is detected, such as, for example, when the anatomy tracker 43 and/or the haptic device tracker 45 become occluded during a critical operation such as bone cutting.
In operation, the computing system 20, the haptic device 30, and the tracking system 40 cooperate to enable the surgical system 10 to provide haptic guidance to the user during a surgical procedure. The surgical system 10 provides haptic guidance by simulating the human tactile system using a force feedback haptic interface (i.e., the haptic device 30) to enable the user to interact with a virtual environment. The haptic device 30 generates computer controlled forces to convey to the user a sense of natural feel of the virtual environment and virtual (or haptic) objects within the virtual environment. The computer controlled forces are displayed (i.e., reflected or conveyed) to the user to make him sense the tactile feel of the virtual objects. For example, as the user manipulates the tool 50, the surgical system 10 determines the position and orientation of the tool 50. Collisions between a virtual representation of the tool 50 and virtual objects in the virtual environment are detected. If a collision occurs, the surgical system 10 calculates haptic reaction forces based on a penetration depth of the virtual tool into the virtual object. The calculated reaction forces are mapped over the virtual object surface and appropriate force vectors are fed back to the user through the haptic device 30. As used herein, the term “virtual object” (or “haptic object”) can be used to refer to different objects. For example, the virtual object may be a representation of a physical object, such as an implant or surgical tool. Alternatively, the virtual object may represent material to be removed from the anatomy, material to be retained on the anatomy, and/or anatomy (or other objects) with which contact with the tool 50 is to be avoided. The virtual object may also represent a pathway, a guide wire, a boundary, a border, or other limit or demarcation.
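By way of illustration only, the following sketch shows one way such a penetration-depth force computation might be written. It assumes a spherical haptic object and an illustrative stiffness value; the constants and names are assumptions for this sketch, not the system's actual control code:

```python
import numpy as np

# Illustrative only: a spherical haptic object and a penetration-depth
# force law. The constants and names are assumptions, not system code.
CENTER = np.array([0.0, 0.0, 0.0])   # center of the haptic object (m)
RADIUS = 0.025                        # radius of the haptic object (m)
STIFFNESS = 20000.0                   # 20 N/mm expressed in N/m

def reaction_force(tool_tip):
    """Return the haptic reaction force for a tracked tool tip position."""
    offset = tool_tip - CENTER
    dist = np.linalg.norm(offset)
    penetration = RADIUS - dist       # > 0 when the tip is inside the object
    if penetration <= 0.0:
        return np.zeros(3)            # no collision: user explores freely
    normal = offset / dist            # outward surface normal at the tip
    return STIFFNESS * penetration * normal

# One cycle of the loop: the tip is 5 mm inside the object, so a 100 N
# force is fed back along the outward normal.
print(reaction_force(np.array([0.0, 0.0, 0.020])))
```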
To enable the user to interact with the virtual environment, the surgical system 10 employs a haptic rendering process. One embodiment of such a process is represented graphically in
The haptic rendering process may include any suitable haptic rendering process, such as, for example, a haptic rendering process as described in U.S. Pat. No. 6,111,577; C. B. Zilles & J. K. Salisbury, A constraint-based god-object method for haptic display, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 3, pp. 146-51, 1995; T. V. Thompson II, D. E. Johnson & E. Cohen, Direct haptic rendering of sculptured models, Proceedings of the Symposium on Interactive 3D Graphics, pp. 167-76, 1997; K. Salisbury & C. Tarr, Haptic rendering of surfaces defined by implicit functions, Proceedings of the ASME Dynamic Systems and Control Division, DSC-Vol. 61, pp. 61-67, 1997; and/or J. E. Colgate, M. C. Stanley & J. M. Brown, Issues in the haptic display of tool use, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 3, pp. 140-45, 1995, each of which is hereby incorporated by reference herein in its entirety.
The virtual environment created by the haptic rendering process includes virtual (or haptic) objects that interact with a virtual representation of the tool 50. Interaction between the virtual objects and the virtual representation of the tool 50 may be point-based or ray-based. In a preferred embodiment, the surgical system 10 employs point-based haptic interaction where only a virtual point, or haptic interaction point (HIP), interacts with virtual objects in the virtual environment. The HIP corresponds to a physical point on the haptic device 30, such as, for example, a tip of the tool 50. The HIP is coupled to the physical point on the physical haptic device 30 by a virtual spring/damper model. The virtual object with which the HIP interacts may be, for example, a haptic object 705 (shown in
The virtual (or haptic) objects can be modeled, for example, using 3D geometric primitive objects, 3D polygonal objects, mathematical equations, computer models, surface models, and/or voxel arrays. Haptic objects may be static, quasi-static, dynamic, continuous, discontinuous, time varying, and/or existing only at certain times. In one embodiment, the haptic object is modeled using one or more functions of tool position, orientation, velocity, and/or acceleration. Thus, in the case of a surgical bone cutting operation, the haptic rendering process may produce a mapping of output wrench versus tool position. The mapping may be configured so that the output wrench fed back to the user is sufficient to resist further penetration of the virtual tool (or HIP) into the haptic object. In this manner, a virtual cutting boundary is established. The virtual boundary is associated with (e.g., registered to) the physical anatomy of the patient, an image of the anatomy, and/or other coordinate frame of interest. A haptic object rendered by the haptic rendering process may function as a pathway (e.g., a guide wire), may be repulsive (e.g., configured to repel the tool 50 from entering an interior of a haptic object), may function as a container (e.g., to maintain the tool 50 within the interior of the haptic object), and/or may have portions that repel and portions that contain. As shown in
A haptic object may be customized to include any desired shape, such as, for example, anatomically contoured implant shapes, protective boundaries for sensitive structures (e.g., intra-articular anatomy), image-derived tumor boundaries, and virtual fixtures for in vivo assembly of implant components. In one embodiment, the haptic object may be uniquely contoured to match a disease state of the patient. For example, the haptic object may define a virtual cutting boundary that encompasses only diseased bone. Thus, the haptic object can be used to guide the user in removing the diseased bone while sparing healthy surrounding bone. In this manner, the surgical system 10 enables the user to sculpt bone in a customized manner, including complex geometries and curves that are not possible with conventional cutting jigs and saw guides. As a result, the surgical system 10 facilitates bone sparing surgical procedures and implant designs that are smaller in size and adapted for a patient's unique disease state.
A haptic object may have an associated spatial or geometric representation that can be graphically represented on the display device 23. The graphical representation may be selected so as to convey useful information to the user. For example, as shown in
Haptic objects having simple volumes are preferably modeled with a combination of 3D implicit surface objects such as planes, spheres, cones, cylinders, etc. For example, the haptic object 705 shown in
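By way of illustration, signed-distance functions for such implicit primitives can be combined with min/max operations to build compound volumes (a standard constructive solid geometry technique); the primitives and dimensions below are illustrative assumptions:

```python
import numpy as np

# Illustrative signed-distance functions: negative inside, positive outside.
def sd_plane(p, normal, offset):
    """Signed distance to the plane {x : n . x = offset}, n a unit normal."""
    return np.dot(p, normal) - offset

def sd_cylinder(p, axis_point, axis_dir, radius):
    """Signed distance to an infinite cylinder about a unit axis."""
    v = p - axis_point
    radial = v - np.dot(v, axis_dir) * axis_dir
    return np.linalg.norm(radial) - radius

def sd_capped_cylinder(p):
    """Intersection of primitives = max of signed distances."""
    d_cyl = sd_cylinder(p, np.zeros(3), np.array([0.0, 0.0, 1.0]), 0.01)
    d_cap = sd_plane(p, np.array([0.0, 0.0, 1.0]), 0.02)
    return max(d_cyl, d_cap)

print(sd_capped_cylinder(np.array([0.0, 0.005, 0.0])))  # negative: inside
```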
In step S704 of
In step S706 of
In step S708 of
As shown in
In contrast, in step S101, if collisionDetectedFlag(t−1) has a value of 1, the algorithm follows the right branch of the flowchart. In step S102, the algorithm maps HIP(t) into voxel coordinates. In step S104, the algorithm searches neighboring polygons at the HIP(t) from a voxel lookup table. In step S106, the algorithm retrieves polygonal information from a polygon lookup table. In step S108, each neighboring polygon is tested to determine whether it is intersected by the line segment from HIP(t−1) to HIP(t). In step S110, the algorithm uses this information to determine whether the HIP(t) has exited the polygons. If so, the HIP is no longer penetrating the haptic object, and the algorithm proceeds to steps S115, S117, and S119 as described above. If step S110 determines that the HIP has not exited the polygons, the algorithm proceeds to step S112 where the algorithm projects the HIP(t) on each neighboring polygon along the corresponding surface normal vectors of the polygons. If the projected HIP(t) is within a polygon, the algorithm sets the polygon as an On-Polygon and stores the intersecting point. Otherwise, the algorithm finds a point on a boundary of the polygon that is closest to the projected HIP(t) (all within the plane of the polygon) and stores the point. This process is repeated for each neighboring polygon. The algorithm then has decision points based on whether an Active Polygon from the previous time cycle, AP(t−1), was set to be an On-Polygon in step S112 and whether only a single polygon was set to be an On-Polygon in the current cycle. Each case is handled as described below.
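The projection and On-Polygon test of step S112 can be sketched as follows for triangular polygons, using barycentric coordinates to decide whether the projected HIP falls within the polygon. The helper below is an illustrative reading of the step, not the patented implementation:

```python
import numpy as np

def project_onto_triangle(hip, a, b, c):
    """Project HIP onto a triangle's plane along the surface normal and
    report whether the projection lies inside the triangle (On-Polygon)."""
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    projected = hip - np.dot(hip - a, n) * n     # projection onto the plane

    # Barycentric coordinates of the projected point.
    v0, v1, v2 = b - a, c - a, projected - a
    d00, d01, d11 = np.dot(v0, v0), np.dot(v0, v1), np.dot(v1, v1)
    d20, d21 = np.dot(v2, v0), np.dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    on_polygon = (u >= 0.0) and (v >= 0.0) and (w >= 0.0)
    return on_polygon, projected

tri = [np.array(t, dtype=float) for t in ([0, 0, 0], [1, 0, 0], [0, 1, 0])]
print(project_onto_triangle(np.array([0.2, 0.2, 0.5]), *tri))  # (True, ...)
```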
In step S114, the algorithm determines whether the previous active polygon (with which the virtual proxy point was in contact) is still an On-Polygon. If so, in step S124 (ActivePolygonPriority), this polygonal surface has priority to be the active polygon, even if other polygons are identified as On-Polygons. AP(t) is therefore maintained, and VP(t) is set at the closest point on the active polygonal surface. For example,
If step S114 determines that the previous active polygon is not an On-Polygon, the algorithm proceeds to step S116 to determine whether a single On-Polygon is detected. If a single On-Polygon is detected in step S116, the algorithm proceeds to step S118 and augments the On-Polygons for a concave corner before checking again for a single On-Polygon in step S120; otherwise, the algorithm proceeds directly to the check in step S120. If a single On-Polygon is detected in step S120, the algorithm proceeds to step S126 (described below). If a single On-Polygon is not detected in step S120, the algorithm proceeds to step S122 and determines whether multiple On-Polygons are detected. If so, the algorithm proceeds to step S128 (described below). Otherwise, the algorithm proceeds to step S130 (described below).
In step S126 (OnPolygonPriority), AP(t) is updated with a new On-Polygon and VP(t) is set at the closest point on the active polygonal surface. For example, as shown in
In step S128 (ContinuousSurfacePriority), AP(t) is selected based on force vector deviation criteria and VP(t) is set at the closest point on the active polygonal surface. The algorithm detects the multiple new On-Polygons as illustrated in the accompanying figure and selects as the active polygon the candidate whose spring force deviates least in direction from the haptic force displayed at the previous time step, e.g.,

$$AP(t) = \arg\min_i \; \angle\!\left(f_{s_{i,t}},\, f_{t-1}\right)$$

where $f_{s_{i,t}}$ represents a spring force vector defined by a current location of the HIP and a possible location of the virtual proxy point on the $i$th polygon, and $f_{t-1}$ represents the haptic force displayed at the previous time step. In one embodiment, the surface 570 will be the new active polygon and a location 580 will be the new proxy point position.
In step S130 (MinimumForcePriority), AP(t) is selected based on minimum force criteria and VP(t) is set at the closest point on the active polygonal surface. As shown in the accompanying figure, when no single On-Polygon or continuous surface applies, the algorithm selects the polygon whose possible virtual proxy point lies closest to the current haptic interface point, i.e., the candidate producing the minimum spring force, e.g.,

$$AP(t) = \arg\min_i \left\| x_{i,vp} - x_{hip} \right\|$$

where $x_{i,vp}$ represents a position of the possible virtual proxy point on the $i$th polygon and $x_{hip}$ represents a position of the current haptic interface point. In this situation, the algorithm sets either the surface 584 or the surface 586 as the On-Polygon depending on their processing sequence, and the location 596 will be the proxy point location.
In step S132 (ContactPolygonPriority), AP(t) is updated with an intersected polygon and VP(t) is set at the closest point on the active polygonal surface. The algorithm augments the On-Polygon objects when a haptic interface point lies in a concave corner where the algorithm detects one On-Polygon object and multiple concave surfaces. In this situation, the algorithm sets the concave polygonal surface to On-Polygon so that continuous haptic rendering can occur at the concave corner.
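Condensing the selection criteria of steps S128 and S130 above, a sketch of the two priority rules might look as follows; the angular form of the force-deviation criterion is an assumption consistent with the description, and the candidate proxy points are assumed to have been computed per polygon as in step S112:

```python
import numpy as np

def select_by_force_deviation(candidates, hip, K, f_prev):
    """ContinuousSurfacePriority (step S128): pick the candidate proxy
    point whose spring force deviates least from the previous force."""
    def angle_to_prev(vp):
        f = K @ (vp - hip)
        c = np.dot(f, f_prev) / (np.linalg.norm(f) * np.linalg.norm(f_prev))
        return np.arccos(np.clip(c, -1.0, 1.0))
    return min(candidates, key=angle_to_prev)

def select_by_minimum_force(candidates, hip):
    """MinimumForcePriority (step S130): pick the candidate proxy point
    closest to the HIP, i.e., the one producing the smallest force."""
    return min(candidates, key=lambda vp: np.linalg.norm(vp - hip))
```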
In step S134, stiffness and damping matrices defined in tool coordinates as constant parameters are transformed into an inertial coordinate frame. When the physical haptic device 30 has different transmission devices, such as a cable-driven transmission and a direct-driven transmission, isotropic spatial stiffness and damping gains can cause instability because the physical system has different dynamic properties in different directions. For this reason, the spatial stiffness and damping matrices can be defined with respect to the tool coordinates and need to be transformed into the inertial coordinate frame. The algorithm computes an adjoint transformation matrix based on current rotational and translational matrices and transforms the spatial stiffness and damping matrices. Let ${}^{T}K_S$ and ${}^{I}K_S$ denote the stiffness matrices measured in the tool frame and the inertial frame, respectively. Let $\mathrm{Ad}_g$ denote the adjoint transformation matrix, given as

$$\mathrm{Ad}_g = \begin{bmatrix} R & \hat{p}R \\ 0 & R \end{bmatrix}$$

where $R$ is the rotational matrix and $p$ is the translational vector. Given a vector $p = (p_x, p_y, p_z)^T$, $\hat{p}$ denotes a skew-symmetric matrix used for representing a cross product as a matrix-vector product:

$$\hat{p} = \begin{bmatrix} 0 & -p_z & p_y \\ p_z & 0 & -p_x \\ -p_y & p_x & 0 \end{bmatrix}$$
The algorithm computes the stiffness matrix in the inertial frame:
$${}^{I}K_S = \mathrm{Ad}_g^{T}\,{}^{T}K_S\,\mathrm{Ad}_g$$
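A direct sketch of this transformation, assuming the adjoint form given above:

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix so that skew(p) @ v == np.cross(p, v)."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def adjoint(R, p):
    """6x6 adjoint transformation built from rotation R and translation p."""
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[:3, 3:] = skew(p) @ R
    Ad[3:, 3:] = R
    return Ad

def stiffness_in_inertial_frame(K_tool, R, p):
    """Transform a 6x6 tool-frame stiffness matrix into the inertial frame."""
    Ad = adjoint(R, p)
    return Ad.T @ K_tool @ Ad
```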
In step S136, the algorithm computes a spring haptic force vector based on the location of the haptic interface point and the virtual proxy point location according to Hooke's law:
$$F_{spring}(t) = {}^{I}K_S\,(x_{vp} - x_{hip})$$

where $x_{vp}$ represents a position of the current virtual proxy point and $x_{hip}$ represents a position of the current haptic interface point.
In step S138, the algorithm computes a damping haptic force vector based on the relative motion between the haptic interface point and the virtual proxy point:
$$F_{damping}(t) = {}^{I}K_D\,(\dot{x}_{vp} - \dot{x}_{hip})$$

where $\dot{x}_{vp}$ represents motion of the virtual proxy point, $\dot{x}_{hip}$ represents motion of the haptic interface point, and ${}^{I}K_D$ represents the spatial damping matrix in the inertial frame.
In step S140, the sum of the damping force and spring force is sent to the physical haptic device 30 as a desired force output (step S718 of
$$\tau = J^{T} F_{desired}$$

where $J^{T}$ is the Jacobian transpose. The computing system 20 then controls the actuators of the haptic device 30 to output the joint torque $\tau$.
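Steps S136 through S140 can be combined into a short sketch; the gain matrices and Jacobian are placeholders supplied by the caller:

```python
import numpy as np

def desired_joint_torques(x_vp, x_hip, xdot_vp, xdot_hip, K_I, D_I, J):
    """Combine the spring force (step S136), damping force (step S138),
    and Jacobian-transpose torque mapping (step S140)."""
    f_spring = K_I @ (x_vp - x_hip)          # Hooke's law
    f_damping = D_I @ (xdot_vp - xdot_hip)   # relative-motion damping
    f_desired = f_spring + f_damping         # desired force output
    return J.T @ f_desired                   # joint torques tau
```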
In step S142, collisionDetectedFlag(t) is set to 1. In step S144, the time advances to t = t + 1. In cases where there may be a transmission with compliance, backlash, hysteresis, or nonlinearities between the haptic device drive (e.g., motors) and position outputs (e.g., joints), it is beneficial to include position sensors on both the drive end and the load end of the transmission. The load end sensors are used to compute all joint and endpoint positions because they will most accurately reflect the actual values. The drive end sensors are used to compute velocities in any damping computations, such as for $F_{damping}$ above, which helps avoid exciting the transmission dynamics.
According to one embodiment, the desired force feedback (or output wrench) of the haptic device 30 is determined based on a proximity of a portion of the haptic device 30 (e.g., the tool 50) to a virtual (or haptic) boundary associated with the representation of the anatomy. Thus, if the tool 50 is disposed a sufficient distance from the haptic boundary, a controller commands no haptic forces, and the user is free to move the tool 50 as if exploring empty space. However, as the tool 50 approaches or contacts the haptic boundary, the controller commands torques to the motors so as to exert the appropriate wrench on the user's hand via the interface 37. Preferably, a magnitude of the force feedback increases as the tool 50 approaches the virtual boundary and does not present a discontinuous step that may induce oscillation or unwanted vibration. For example, as the tool 50 approaches the haptic boundary, the haptic device 30 may exert a force in a direction opposite a direction of movement of the user interface 37 such that the user perceives a repulsive or counteracting force that slows and/or stops movement of the tool 50. In one embodiment, a rate of increase of the force as the tool 50 continues moving toward the haptic boundary may be, for example, in a range of 5 N/mm to 50 N/mm. In another embodiment, the rate of increase of the force may be approximately 20 N/mm. In this manner, the user is constrained to not penetrate the haptic boundary too deeply. When the tool 50 contacts the haptic boundary, the force may be such that the user feels as if the tool 50 has collided with a physical object, such as a wall. The magnitude of the force may prevent the user from penetrating the haptic boundary (e.g., a magnitude of approximately 100 N or greater) but is preferably set so that the user may breach the haptic boundary if desired (e.g., a magnitude in a range of approximately 20 N to approximately 60 N). Thus, the computing system 20 may be programmed to permit the user to overcome the force feedback and move the haptic device 30 to a desired location. In this manner, the haptic device 30 constrains the user against inadvertently violating the haptic boundary, but the user has the option to overpower the haptic device 30 and thus retains full control over the surgical procedure.
In one embodiment, the surgical system 10 includes a haptic tuning feature for customizing a force feedback function of the haptic object for a particular user. Such a feature is advantageous because each user has a unique surgical technique. Thus, different users may use differing amounts of force when maneuvering the tool 50. For example, users who maneuver the tool 50 with a light touch may sense haptic feedback earlier than users with a heavier touch. Rather than requiring the user with the heavier touch to alter his surgical technique to sufficiently sense the haptic feedback, the haptic tuning feature enables the force feedback function to be adjusted to accommodate each particular user. By adjusting (or tuning) the force feedback function, the user can manipulate the tool 50 with his preferred degree of force and still sufficiently perceive the haptic feedback exerted by the haptic device 30. As a result, the user's ability to maintain the tool within the haptic boundary is improved. For example, as shown in
To enable each user to tune the force feedback function, the computing system 20 preferably includes programming to enable a graphical selection interface that can be displayed on the display device 23. For example, as shown in
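By way of illustration, a tuning factor might simply scale the slope of the force function; the parameter names and default values below are assumptions for this sketch:

```python
def tuned_force(penetration_mm, tuning=1.0, base_rate_n_per_mm=20.0):
    """Illustrative force function with a per-user tuning factor.

    tuning > 1 stiffens the response for users with a heavier touch;
    tuning < 1 softens it for users with a lighter touch.
    """
    if penetration_mm <= 0.0:
        return 0.0
    return tuning * base_rate_n_per_mm * penetration_mm

# A heavier-touch user might select tuning = 1.5, feeling about 30 N at
# 1 mm of penetration instead of the default 20 N.
print(tuned_force(1.0, tuning=1.5))  # -> 30.0
```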
The haptic device 30 is preferably configured to operate in various operating modes. For example, the haptic device 30 may be programmed to operate in an input mode, a hold mode, a safety mode, a free mode, an approach mode, a haptic (or burring) mode, and/or any other suitable mode. The operating mode may be selected manually by the user (e.g., using a selection button represented graphically on the display device 23 or a mode switch located on the haptic device 30 and/or the computing system 20) and/or automatically by a controller or software process. In the input mode, the haptic device 30 is enabled for use as an input device to input information to the surgical system 10. When the haptic device 30 is in the input mode, the user may operate the haptic device 30 as a joystick or other input device, for example, as described above in connection with the end effector 35 and/or in U.S. patent application Ser. No. 10/384,078 (Pub. No. US 2004/0034282), which is hereby incorporated by reference herein in its entirety. Other methods of inputting information to the surgical system 10 include, for example, moving the wrist 36, moving a joint of the arm 33, and/or moving the arm 33 (or a portion thereof). For example, moving the arm 33 toward an object (e.g., a tracked object) may comprise a first input. Similarly, moving the arm 33 toward the object and twisting the wrist 36 may comprise a second input. Thus, the surgical system 10 may identify or distinguish user input based on, for example, a pose of the haptic device 30 with respect to a tracked object, movement of a portion of the haptic device 30 (e.g., the wrist 36), or a combination of pose and movement. In the hold mode, the arm 33 of the haptic device 30 may be locked in a particular pose. For example, the arm 33 may be locked using brakes, control servoing techniques, and/or any other appropriate hardware and/or software for stabilizing the arm 33. The user may desire to place the haptic device 30 in the hold mode, for example, during an activity such as bone cutting to rest, confer with a colleague, allow cleaning and irrigation of the surgical site, and the like. In the safety mode, the tool 50 coupled to the haptic device 30 may be disabled, for example, by shutting off power to the tool 50. In one embodiment, the safety mode and the hold mode may be executed simultaneously so that the tool 50 is disabled when the arm 33 of the haptic device 30 is locked in position.
In the free mode, the end effector 35 of the haptic device 30 is freely moveable within the workspace of the haptic device 30. Power to the tool 50 is preferably deactivated, and the haptic device 30 may be adapted to feel weightless to the user. A weightless feeling may be achieved, for example, by computing gravitational loads acting on the segments 33a, 33b, and 33c of the arm 33 and controlling motors of the haptic device 30 to counteract the gravitational loads. As a result, the user does not have to support the weight of the arm. The haptic device 30 may be in the free mode, for example, until the user is ready to direct the tool 50 to a surgical site on the patient's anatomy.
In the approach mode, the haptic device 30 is configured to guide the tool 50 to a target object, such as, for example, a surgical site, feature of interest on the patient's anatomy, and/or haptic object registered to the patient, while avoiding critical structures and anatomy. For example, in one embodiment, the approach mode enables interactive haptic positioning of the tool 50 as described in U.S. patent application Ser. No. 10/384,194 (Pub. No. US 2004/0034283), which is hereby incorporated by reference herein in its entirety. In another embodiment, the haptic rendering application may include a haptic object defining an approach volume (or boundary) that constrains the tool 50 to move toward the target object while avoiding sensitive features such as blood vessels, tendons, nerves, soft tissues, bone, existing implants, and the like. For example, as shown in
In the haptic (or burring) mode, the haptic device 30 is configured to provide haptic guidance to the user during a surgical activity such as bone preparation. In one embodiment, as shown in
The haptic device 30 may utilize any suitable haptic control scheme, such as, for example, admittance control, impedance control, or hybrid control. In an admittance control mode, the haptic device 30 accepts force input and yields position (or motion) output. For example, the haptic device 30 measures or senses a wrench at a particular location on the haptic device 30 (e.g., the user interface 37) and acts to modify a position of the haptic device 30. In an impedance control mode, the haptic device 30 accepts position (or motion) input and yields wrench output. For example, the haptic device 30 measures, senses, and/or calculates a position (i.e., position, orientation, velocity, and/or acceleration) of the tool 50 and applies an appropriate corresponding wrench. In a hybrid control mode, the haptic device 30 utilizes both admittance and impedance control. For example, a workspace of the haptic device 30 may be divided into a first subspace in which admittance control is used and a second subspace in which impedance control is used. In a preferred embodiment, the haptic device 30 operates in the impedance control mode.
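The direction of the two mappings can be illustrated in a few lines; these are schematic single steps under assumed gain matrices, not complete controllers:

```python
import numpy as np

def impedance_step(x_measured, x_desired, K):
    """Impedance control: position (or motion) in, wrench out."""
    return K @ (x_desired - x_measured)       # force to display

def admittance_step(f_measured, x_current, C, dt):
    """Admittance control: wrench in, position (or motion) out."""
    return x_current + (C @ f_measured) * dt  # new position setpoint
```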
During a surgical procedure, the computing system 20 guides the user through the procedure. For example, the computing system 20 may be programmed to generate a display configured to guide the user manipulating the haptic device 30 through the procedure. The display may comprise screens shown on the display device 23 that include, for example, predefined pages and/or images corresponding to specific steps of the procedure. The display may also prompt the user to perform one or more tasks. For example, the display may instruct a user to select anatomical landmarks on a representation of the anatomy (discussed below in connection with steps S3 and S4 of
Displays or screens associated with the surgical procedure may be configured to communicate visual information to the user regarding the procedure. For example, as shown in
In one embodiment, the portion of bone to be removed may be indicated, for example, using a color that is different from a color of surrounding bone. For example, the portion of bone to be removed may be colored green while the surrounding bone is colored white. As the user removes bone with the tool 50, the computing system 20 updates the image in the navigation pane 600 so that when the tool 50 reaches a desired cutting depth, the color changes from green to white. Similarly, if the tool 50 cuts beyond the desired cutting depth, the color changes from white to red. Thus, the surgical system 10 creates a representation of a portion of material to be removed in a first color and, when a desired amount of material has been removed, creates a representation of the material removed by the haptic device 30 in a second color. If the material removed by the haptic device exceeds the desired amount of material, the surgical system 10 creates a representation of the material removed in a third color. In a preferred embodiment, a haptic object includes an array of volume elements (i.e., voxels) having a first portion corresponding to a portion of bone to be removed, a second portion corresponding to surrounding bone, and a third portion corresponding to a cutting depth that is outside a predefined cutting volume. The voxels in the first portion may be a first color (e.g., green), the voxels in the second portion may be a second color (e.g., white), and the voxels in the third portion may be a third color (e.g., red). As the tool 50 overlaps a voxel, the voxel is cleared, thereby exposing an adjacent underlying voxel. Thus, if the user cuts too deeply with the tool 50, green and/or white voxels may be cleared to expose underlying red voxels. In another embodiment, the surgical system 10 may provide a visual indication of a distance between the tip of the tool 50 and a surface of a haptic object in registration with the patient as described, for example, in U.S. patent application Ser. No. 10/621,119 (Pub. No. US 2004/0106916), which is hereby incorporated by reference herein in its entirety. The navigation pane 600 may also include, for example, a representation of a current position of the tool 50, a desired trajectory of the tool 50, a representation of an implant, and/or the like.
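A sketch of the voxel bookkeeping just described, using a single illustrative column of voxels along the cutting depth (the data layout is an assumption for this sketch):

```python
GREEN, WHITE, RED = "green", "white", "red"

# Illustrative column of voxels along the cutting depth: two voxels of
# bone to be removed, then surrounding bone, then the over-cut region.
column = [GREEN, GREEN, WHITE, RED, RED]
cleared = 0

def cut_one_voxel():
    """Clear the top remaining voxel and return the newly exposed color."""
    global cleared
    cleared = min(cleared + 1, len(column) - 1)
    return column[cleared]

print(cut_one_voxel())  # green: more bone to remove
print(cut_one_voxel())  # white: desired cutting depth reached
print(cut_one_voxel())  # red: cut deeper than planned
```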
In addition to communicating with the user visually, the computing system 20 may be programmed to emit audible signals (e.g., via the acoustic device). For example, in one embodiment, the computing system 20 may emit sounds (e.g., beeps) indicating that a cutting depth of the tool 50 is too shallow, approximately correct, or too deep. In another embodiment, the surgical system 10 may provide an audible indication of a distance between the tip of the tool 50 and a surface of a haptic object in registration with the patient as described, for example, in U.S. patent application Ser. No. 10/621,119 (Pub. No. US 2004/0106916), which is hereby incorporated by reference herein in its entirety. The computing system 20 may also be programmed to control the haptic device 30 to provide tactile feedback to the user, such as, for example, a vibration indicating that the tool 50 has reached or exceeded the desired cutting depth. The software of the computing system 20 may also include programs or processes that automatically prompt a user to perform certain tasks, such as, for example, segmenting an image of a diagnostic image data set, selecting points on the patient's anatomy to define a mechanical axis, touching (or “painting”) points on a surface of the bone with a registration probe, entering data (e.g., implant size, burr size, etc.), and the like.
In the embodiment of
In step S1, patient information may be input to the surgical system 10. For example, the surgical system 10 may display a screen on the display device 23 requesting information about the patient. Patient information may include any relevant patient data, such as, for example, name, birth date, identification number, sex, height, and weight. Patient information may also include information related to the procedure to be performed, such as, for example, specifying the appropriate leg (e.g., left or right), specifying the portion of the joint to be replaced (medial, lateral, total), and selecting preoperative diagnostic image data files (e.g., CT data files) of the patient's anatomy. Patient information may be input to the surgical system 10 in any known manner. For example, the user may directly enter the patient information or the patient information may be downloaded into the surgical system 10 from a hospital network or electronic storage medium. Preferably, patient information is recorded when the patient's anatomy is imaged, is saved in an image data file (e.g., a CT data file), and is loaded into the surgical system 10 along with the image data file in step S2 below. The computing system 20 may also request information related to the user (e.g., name, identification number, PIN number, etc.), the surgical facility, and/or any other information useful for identification, security, or record keeping purposes. As with the patient data, user information may also be included in the image data file. As a safeguard, the computing system 20 may include a verification feature that prompts the surgeon (or other licensed medical professional) to verify patient information that has been input to the surgical system 10.
In step S2, a representation of the anatomy is created by loading image data files containing preoperative diagnostic images (e.g., an upper leg image, a knee image, and a lower leg image) into the surgical system 10. The diagnostic images constitute a representation of the anatomy. Additional representations of the anatomy may be generated by segmenting the images. For example, the surgical system 10 may display a screen 81a (shown in
In steps S3 and S4, the user designates landmarks on the representation of the first bone and the representation of the second bone. For example, in step S3, the user may designate femoral landmarks on an image of the femur F. The femoral landmarks are used by the surgical system 10 to associate (or register) the patient's physical anatomy with the representation of the anatomy (e.g., diagnostic images, models generated from segmentation, anatomical models, etc.). As shown in
Similarly, in step S4, the user may designate tibial landmarks on an image of the tibia T. The tibial landmarks are used by the surgical system 10 to associate (or register) the patient's physical anatomy with the representation of the anatomy (e.g., diagnostic images, models generated from segmentation, anatomical models, etc.). As shown in
In step S5, a homing process initializes the position sensors (e.g., encoders) of the haptic device 30 to determine an initial pose of the arm 33. Homing may be accomplished, for example, by manipulating the arm 33 so that each joint encoder is rotated until an index marker on the encoder is read. The index marker is an absolute reference on the encoder that correlates to a known absolute position of a joint. Thus, once the index marker is read, the control system of the haptic device 30 knows that the joint is in an absolute position. As the arm 33 continues to move, subsequent positions of the joint can be calculated based on the absolute position and subsequent displacement of the encoder. The surgical system 10 may guide the user through the homing process by providing instructions regarding the positions in which the user should place the arm 33. The instructions may include, for example, images displayed on the display device 23 showing the positions into which the arm 33 should be moved.
In step S6, an instrument (e.g., a registration probe such as the instrument 150) is checked to verify that the instrument is calibrated. For example, step S6 may be used to verify that a registration probe has a proper physical configuration. As discussed above in connection with the instrument tracker 49, calibration of a probe that includes the instrument tracker 49 may be accomplished by inserting a tip of the probe into the divot 47a of the end effector tracker 47, holding the tip in place, and detecting the instrument tracker 49 and the end effector tracker 47 with the detection device 41. The detection device 41 acquires pose data, and the surgical system 10 compares an actual geometric relationship between the trackers 49 and 47 to an expected geometric relationship between the trackers 49 and 47. Deviation between the actual and expected geometric relationships indicates that one or more physical parameters of the probe are out of calibration. As shown in
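By way of illustration, the verification comparison might be computed as follows, assuming each tracker pose is reported as a rotation and translation in camera coordinates and that the probe's tip offset is known from its design geometry:

```python
import numpy as np

def verify_probe_calibration(R_instr, t_instr, divot_position,
                             tip_offset, tolerance_mm=1.0):
    """Compare the probe tip location implied by the instrument tracker
    with the known divot location on the end effector tracker.

    R_instr, t_instr: instrument tracker pose in camera coordinates (m).
    divot_position: divot location in camera coordinates (m).
    tip_offset: tip position in the instrument tracker's own frame,
        known from the probe's design geometry.
    """
    actual_tip = R_instr @ tip_offset + t_instr
    deviation_mm = np.linalg.norm(actual_tip - divot_position) * 1000.0
    return deviation_mm <= tolerance_mm, deviation_mm
```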
Prior to step S7, the patient arrives in the operating room. As shown in
To elevate the leg L of the patient and enable the leg L to be bent at different angles, the leg L may be supported or braced in a leg holder (or support device) that can be moved into various positions. In one embodiment, the leg holder is a manually adjustable leg holder 62. As shown in
In another embodiment, the leg holder 62 may be automated, for example, by the addition of position sensors (e.g., encoders) and a motor controlled by the computer 21 and/or the computer 31. The motor may enable the leg holder 62 to be fully automated or may simply perform a power-assist function to aid the user in positioning the leg holder 62. One advantage of fully automating the leg holder 62 is that an automated leg holder can be controlled by the surgical system 10 to autonomously move the leg L to a correct position, which spares the user the difficulty of physically maneuvering the leg L and guessing the correct position for the leg L. For example, a process for controlling an automatic leg holder (or support device) may include placing a first bone (e.g., the tibia T) and/or a second bone (e.g., the femur F) in the leg holder 62 and actuating the leg holder 62 to move the first bone and/or the second bone from a first position to a second position. The process may also include the steps of determining an actual pose of the first bone and/or the second bone (e.g., from the anatomy trackers 43a and 43b), determining a desired pose of the first bone and/or the second bone, and actuating the leg holder 62 to move the first bone and/or the second bone from the actual pose to the desired pose. As the leg holder 62 moves, the surgical system 10 can monitor the position of the first bone and/or the second bone. When the first bone and/or the second bone is in the desired pose, the process stops. In addition to tracking the position of the first and second bones, the position of the leg holder 62 may be monitored (e.g., using position sensors on the leg holder 62).
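Such a closed-loop positioning process might be sketched as follows; the pose-reading and actuation callables stand in for the tracker and motor interfaces and are assumptions of this sketch:

```python
import numpy as np

def move_leg_to_pose(read_bone_pose, command_velocity, desired_pose,
                     gain=0.5, tolerance=0.001, max_steps=1000):
    """Drive the leg holder until the tracked bone reaches the desired pose.

    read_bone_pose(): bone position reported by the anatomy tracker.
    command_velocity(v): commands a leg holder velocity (hardware stub).
    """
    for _ in range(max_steps):
        error = desired_pose - read_bone_pose()
        if np.linalg.norm(error) < tolerance:
            command_velocity(np.zeros_like(error))
            return True                        # desired pose reached
        command_velocity(gain * error)         # proportional correction
    command_velocity(np.zeros_like(desired_pose))
    return False                               # did not converge
```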
In another embodiment, as shown in
In step S7, the surgical system 10 prompts the user to attach the anatomy trackers 43a and 43b to the patient. As shown in
In one embodiment, once the anatomy trackers 43a and 43b are attached, a range of motion (ROM) of the knee joint is captured (e.g., by moving the knee joint through the ROM while tracking the anatomy trackers 43a and 43b with the tracking system 40). The captured ROM data may be used to assess relative placement of the femoral and tibial implants. For example, the ROM data augmented by registration of the physical patient to the preoperative image data allows the user to plan relative implant positions consistent with a current condition of the patient's soft tissue (e.g., based on disease state, age, weight, current ROM, etc.). In one embodiment, implant depth can be planned so that the installed implants fill the pre-existing joint gap (i.e., the gap existing preoperatively between the tibia T and the femur F) in the knee of the patient. In addition, other important parameters such as, for example, adequate contact, anterior and posterior coverage, and proper relative rotation of the implant pair can be evaluated throughout the ROM of the knee joint. In this way, comprehensive placement planning for both implants can be performed before cutting any bone. The ROM data may also be used (e.g., during the implant planning steps S10 and S13) to display relative positions of the femoral and tibial implants at extension, flexion, and various angles between extension and flexion on the display device 23.
After the anatomy trackers 43a and 43b are fixed to the patient, the process proceeds to step S8 in which the patient's physical anatomy is registered to the representation of the anatomy. For example, the femur F and the tibia T of the patient may be registered in standard fashion using a paired-point/surface match approach based on the femoral and tibial landmarks specified in steps S3 and S4, respectively. The surgical system 10 generates screens to guide the user through the registration process. For example, a screen 86a (
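By way of illustration, the paired-point portion of such a registration can be computed in closed form using the well-known SVD (Kabsch/Horn) method; the solver shown here is a standard technique and is not specified by the procedure itself:

```python
import numpy as np

def paired_point_registration(model_points, patient_points):
    """Least-squares rigid transform (R, t) such that
    R @ model_point + t approximates the corresponding patient point."""
    P = np.asarray(model_points, dtype=float)    # landmarks in image space
    Q = np.asarray(patient_points, dtype=float)  # same landmarks on the bone
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```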
In step S9, the haptic device 30 is calibrated to establish a geometric relationship between a coordinate frame of reference of the haptic device 30 and the haptic device tracker 45. If the haptic device tracker 45 is fixed in a permanent position on the haptic device 30, calibration is not necessary because the geometric relationship between the tracker 45 and the haptic device 30 is fixed and known (e.g., from an initial calibration during manufacture or setup). In contrast, if the tracker 45 can move relative to the haptic device 30 (e.g., if the arm 34 on which the tracker 45 is mounted is adjustable) calibration is necessary to determine the geometric relationship between the tracker 45 and the haptic device 30. The surgical system 10 initiates the calibration process by generating a screen 87 (shown in
In step S10, the user plans bone preparation for implanting a first implant on a first bone. In a preferred embodiment, the first bone is the tibia T, the first implant is the tibial component 74, and bone preparation is planned by selecting a location on a proximal end of the tibia T where the tibial component 74 will be installed. To facilitate implant planning, the surgical system 10 generates a screen 88b (shown in
The location of the tibial component 74 may be selected, for example, based on surgical judgment, to generally center the tibial component 74 on the tibial plateau, to position the tibial component 74 on hard bone to avoid subsidence over time, to position the tibial component 74 a desired distance from one or more landmarks, and/or based on a cartilage surface identified by a tracked tool. In one embodiment, the user selects a location for the tibial component 74 by moving the implant model 808b (shown in
In a preferred embodiment, soft tissue in the joint gap of the knee is taken into account when selecting a placement for the tibial component 74. For example, the first implant (i.e., the tibial component 74) may be planned so that a top surface of the tibial component 74 is aligned with a top surface of cartilage in the joint gap. Such an approach advantageously preserves the natural configuration of the joint space, which may improve implant performance and longevity. In this embodiment, a height of a cartilage surface above the first bone (i.e., the tibia T) is detected, a representation of the first bone and a representation of the height of the cartilage surface are created, and bone preparation for implanting the first implant on the first bone is based at least in part on the detected height of the cartilage surface. For example, the top surface of the cartilage may be detected (or mapped) by placing a tip of a tracked probe at a point on the top surface of the cartilage and selecting the point with a button (designated “Map Point”) in the frame 807. The representation of the height of the cartilage surface may include a numerical representation (e.g., a distance from the first bone to the cartilage surface) and/or a visual representation (e.g., mapped points may be displayed as points 809 in the frame 800). Several cartilage points may be mapped (e.g., an anterior point, a posterior point, a medial point, etc.). The user aligns at least a portion of the representation of the first implant (i.e., the implant model 808b) with the representation of the height of the cartilage surface (i.e., the points 809), for example, by adjusting the depth of the implant model 808b so that the upper edges of the implant model 808b align with the mapped cartilage points 809. In this embodiment, therefore, the surgical system 10 associates the representation of the first implant with the representation of the first bone based at least in part on a detected location of cartilage in a region of the first bone. In this manner, the depth of the tibial component may be selected based on a thickness of the cartilage on the tibial plateau. Thus, the surgical system 10 enables the user to determine a placement of the tibial component 74 that aligns the top surface of the tibial component 74 with the top surface of the cartilage prior to any bone cutting.
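By way of illustration, the depth adjustment might be computed from the mapped points as follows; averaging the mapped heights and measuring along a single axis are assumptions of this sketch:

```python
import numpy as np

def tibial_cut_depth(cartilage_heights_mm, implant_thickness_mm):
    """Depth to cut below the bone surface so that the implant's top
    surface sits at the mapped cartilage surface.

    cartilage_heights_mm: heights of mapped cartilage points above the
        bone surface (e.g., measured along the tibial mechanical axis).
    """
    cartilage_height = float(np.mean(cartilage_heights_mm))
    # pocket floor + implant thickness == cartilage surface height
    return implant_thickness_mm - cartilage_height

# e.g., cartilage about 2 mm above bone and an 8 mm thick implant give
# a pocket 6 mm deep into the bone.
print(tibial_cut_depth([2.0, 2.2, 1.8], 8.0))  # -> 6.0
```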
If desired, in step S10, the user may also preoperatively plan an initial placement of the second implant (i.e., the femoral component 72) on the second bone (i.e., the femur F). Preferably, however, step S10 includes only preoperative planning of the first implant (i.e., the tibial component 74). Femoral planning is delayed until after sculpting (step S11) and trialing (step S12) of the tibia T so that the size, internal/external rotation, and medial/lateral position of the femoral component can be determined based on the position of the tibial trial in relation to the femur F.
Steps S11 to S15 encompass the bone preparation process. In step S11, the first bone (e.g., the tibia T) is prepared to receive the first implant (e.g., the tibial component 74) by manipulating the tool 50 to sculpt the first bone. In step S12, a trial implant is fitted to the prepared feature on the first bone. In step S13, an initial placement of the second implant (e.g., the femoral component) is planned (or a previously planned placement of the second implant may be revisited and adjusted). In step S14, the second bone (e.g., the femur F) is prepared to receive the second implant after preparation of the first bone. In step S15, a trial implant is fitted to the prepared features on the second bone.
Bone preparation (or sculpting) may be accomplished, for example, using a spherical burr to sculpt or contour the bone so that a shape of the bone substantially conforms to a shape of a mating surface of the implant. The user has the option to prepare either the femur F or the tibia T first. In a preferred embodiment, the tibia T is prepared first (step S11), and the tibial trial implant is fitted to the prepared surface of the tibia T (step S12). Placement of the femoral component 72 is then planned (step S13) followed by preparation of the femur F (step S14). Such an approach is advantageous because the user can plan placement of the femoral component 72 based on a physical relationship between the tibial trial implant and the femur F at various flexions of the leg. Additionally, prior to sculpting the tibia T and the femur F, a portion (e.g., a 3 mm thick section) of the medial posterior condyle of the femur F is preferably removed with a sagittal saw. Removing this portion of the posterior condyle reduces the likelihood of bone impingement of the posterior condyle on the tibial component 74 and provides additional workspace in the knee joint.
Throughout the surgical procedure, the surgical system 10 monitors movement of the anatomy and makes appropriate adjustments to the programs running on the computer 21 and/or the computer 31. In one embodiment, the surgical system 10 adjusts the representation of the anatomy in response to the detected movement. For example, the surgical system 10 adjusts the representation of the first bone (i.e., the tibia T) in response to movement of the first bone and adjusts the representation of the second bone (i.e., the femur F) in response to movement of the second bone. The surgical system 10 can also adjust a virtual object associated with the anatomy in response to the detected movement of the anatomy. For example, the virtual object may include a virtual boundary that comprises a representation of an implant (e.g., the virtual boundary may correspond to a shape of a surface of the implant). When bone preparation is planned, the surgical system 10 associates the representation of the implant with the representation of the bone on which the implant is to be implanted. During the surgical procedure, the surgical system 10 adjusts the virtual boundary in response to movement of the bone.
In step S11, the first bone is prepared to receive the first implant by manipulating the tool 50 to sculpt the first bone. In one embodiment, the tibia T is prepared by forming the medial tibial pocket feature on the proximal end of the tibia T. Upon installation of the tibial component 74, the medial tibial pocket feature will mate with the surface 74a of the tibial component 74 (shown in
The occlusion detection algorithm is a safety feature that turns off power to the tool 50 if either the haptic device tracker 45 or one of the anatomy trackers 43a or 43b is at any time occluded while the haptic device 30 is in the haptic (or burring) mode. If an occluded state is detected, the occlusion detection algorithm may also cause a warning message to be displayed on the display device 23, an audible alarm to sound, and/or power to the tool 50 to be shut off. Thus, the occlusion detection algorithm prevents the tool 50 from damaging the anatomy when the tracking system 40 is not able to track a relative position of the tool 50 and the anatomy. For example, in one embodiment, if the occlusion detection algorithm detects an occluded state, the surgical system 10 determines whether the tool 50 is touching a haptic boundary of a haptic object. If the tool 50 is not in contact with a haptic boundary, the occlusion detection algorithm places the haptic device 30 in the free mode so that the tool 50 will move with the patient and, if necessary, can be withdrawn from the patient. When the occluded state ends (e.g., when an occluded tracker again becomes visible), the surgical system 10 places the haptic device 30 in the approach mode so that the user may resume the procedure. In contrast, if the surgical system 10 determines that the tool 50 is touching the haptic boundary during the occluded state, the occlusion detection algorithm waits for a predetermined period of time (e.g., 1 second) to see if the occluded tracker becomes visible. If the haptic device tracker 45 and the anatomy trackers 43a and 43b all become visible within the predetermined period of time, the haptic (or burring) mode is resumed. Otherwise, the haptic device 30 is placed in the free mode so that the tool 50 will move with the patient and, if necessary, can be withdrawn from the patient. As before, when the occluded state ends (e.g., when an occluded tracker again becomes visible), the surgical system 10 places the haptic device 30 in the approach mode so that the user may resume the procedure.
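A condensed sketch of this decision logic follows; the mode-setting and tool-power functions are placeholders standing in for the system's actual interfaces:

```python
import time

def handle_occlusion(trackers_visible, tool_on_boundary, set_mode,
                     set_tool_power, wait_s=1.0):
    """Occlusion response: cut tool power, then either free the arm or
    wait briefly if the tool is at a haptic boundary.

    trackers_visible(): True if tracker 45 and trackers 43a/43b are seen.
    """
    if trackers_visible():
        return
    set_tool_power(False)                      # always disable cutting
    if tool_on_boundary():
        deadline = time.monotonic() + wait_s   # grace period at boundary
        while time.monotonic() < deadline:
            if trackers_visible():
                set_mode("haptic")             # resume burring
                return
            time.sleep(0.01)
    set_mode("free")                           # tool moves with the patient
    # When visibility returns, the system re-enters the approach mode.
```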
Once the haptic device 30 enters the haptic mode, the user may proceed with bone sculpting. To sculpt the bone, the user manipulates the haptic device 30 by moving a portion of the haptic device 30 (e.g., the tool 50) in a region of the anatomy (e.g., the bone). As best seen in
In addition to haptically guiding the user in the bone sculpting process, the surgical system 10 may also provide visual feedback to the user. For example, when the tool 50 reaches a desired cutting depth in a particular location of the portion 618, the color of the particular location may change from green to white to indicate that no more bone should be removed from that location. Similarly, if the tool 50 cuts beyond the desired cutting depth, the color of the particular location may change from white to red to alert the user that the cut is too deep. To further reduce the possibility of damage to healthy tissue, the surgical system 10 may also be programmed to disable power to the tool 50 should the user cut too deeply. When sculpting of the medial tibial pocket feature is complete, the user may signal (e.g., using a foot pedal or other input device 25) that he is ready to proceed to forming the next feature or that he wishes to withdraw the tool 50. The tool 50 may be withdrawn at any time during the sculpting process even if the feature is not complete. For example, the user may wish to withdraw the tool 50 to replace the tool tip, irrigate the surgical site, perform a trial reduction, revisit implant planning, address a problem that has arisen, or the like. If the user signals that he wants to withdraw the tool 50, the occlusion detection algorithm is halted and the haptic device 30 is placed in the free mode to enable withdrawal of the tool 50.
Step S12 is a trial reduction process in which the first implant (i.e., the tibial component 74) or a trial implant (e.g., a tibial trial) is fitted to the first bone (i.e., the prepared medial tibial pocket feature on the tibia T). The user assesses the fit of the tibial component or the tibial trial and may make any desired adjustments, such as, for example, repeating implant planning and/or bone sculpting to achieve an improved fit.
In step S13, the user plans bone preparation for implanting a second implant on a second bone after preparing the first bone. In a preferred embodiment, the second bone is the femur F, the second implant is the femoral component 72, and bone preparation is planned by selecting a location on a distal end of the femur F where the femoral component 72 will be installed. If the femoral component 72 has been previously planned (e.g., in step S1), the prior placement may be revisited and adjusted if desired. As in step S10, the surgical system 10 facilitates implant planning by generating a screen 88a (shown in
The location of the femoral component 72 may be determined, for example, relative to the position of pre-existing implants and surrounding structures. These points may be mapped using a tracked tool in the same manner as the cartilage points in step S10 above. The mapped points may include points on anatomic structures in the joint (e.g., bone, nerves, soft tissue, etc.) and/or points on pre-existing implants in the joint (e.g., edges, corners, surfaces, verification features, divots, grooves, centerline markings, etc.). The pre-existing implants may include, for example, the first implant (i.e., the tibial component 74), a trial implant (e.g., the tibial trial), and/or an existing implant from a prior surgery. The points may be selected with the leg L at various angles from full extension to full flexion. For example, points may be mapped with the leg L in full extension, at 90°, and in full flexion. In one embodiment, the knee joint is moved to a first position (e.g., one of flexion and extension), the user identifies a first point corresponding to a first location in the joint when the joint is in the first position, the knee joint is moved to a second position (e.g., the other of flexion and extension), and the user identifies a second point corresponding to a second location in the joint when the joint is in the second position. The surgical system 10 displays the first and second points in the frame 800 on the screen 88a as points 810. The points 810 aid the user in visualizing placement of the second implant (i.e., the femoral component 72). Thus, the user is able to plan bone preparation for implanting the second implant on the second bone based at least in part on the first and second points.
In one embodiment, the size and position of the femoral component 72 are determined by mapping a first point at a centerline on an anterior edge of the tibial trial implant with the leg in extension and a second point at the centerline on the anterior edge of the tibial trial implant with the leg in flexion. The extension point is used to size the femoral component 72. For example, the size of the femoral component 72 may be selected so that the tibial component 74 will not ride off an anterior edge of the femoral component 72 as the knee moves into extension. The flexion and extension points together are used to determine the internal/external rotation of the femoral component 72 to ensure that the femoral component 72 properly rides on the tibial component 74 (e.g., based on the patient's natural range of motion and joint kinematics). For example, a centerline of a representation of the second implant (e.g., a representation of the keel 72c of the femoral component 72) may be aligned with the flexion and extension points. Optionally, a point on the posterior “cut” edge may be used to determine the posterior placement of the femoral component 72. In this embodiment, the user selects a location for the femoral component 72 by moving the implant model 808a (shown in
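The use of the flexion and extension points to set internal/external rotation reduces to finding the angle of the line through the two mapped points in the transverse plane. The sketch below is illustrative only; the coordinate convention (medial/lateral, anterior/posterior, superior/inferior) and function name are assumptions introduced for the example.

```python
import math

def internal_external_rotation_deg(extension_pt, flexion_pt):
    """Rotation (about the superior/inferior axis) of the line through
    the two mapped centerline points; a keel centerline aligned to
    this angle rides properly on the tibial component.  Points are
    (medial/lateral, anterior/posterior, superior/inferior) tuples in
    a common anatomical frame -- an assumed convention."""
    d_ml = flexion_pt[0] - extension_pt[0]  # medial/lateral offset
    d_ap = flexion_pt[1] - extension_pt[1]  # anterior/posterior offset
    return math.degrees(math.atan2(d_ml, d_ap))

# Example: the flexion point lies 2 mm medial and 40 mm posterior of
# the extension point, giving roughly 2.9 degrees of rotation.
print(f"{internal_external_rotation_deg((0, 0, 0), (2.0, 40.0, 0)):.1f} deg")
```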
In step S14, the second bone is prepared to receive the second implant by manipulating the tool 50 to sculpt the second bone. In one embodiment, the femur F is prepared by forming the medial femoral surface, post, and keel features on the distal end of the femur F. Upon installation of the femoral component 72, the medial femoral surface, post, and keel features will mate with a surface 72a, a post 72b, and a keel 72c, respectively, of the femoral component 72 (shown in
Once the haptic device 30 enters the haptic mode, the user may proceed with bone sculpting. As shown in
During sculpting, the user may desire to change the tool 50. For example, in one embodiment, the user uses a 6 mm burr to form most of the medial femoral surface feature and a 2 mm burr to sculpt the "corners" (e.g., regions where a vertical wall of the feature transitions to a horizontal bottom of the feature). To replace the burr, the user signals that he wants to withdraw the tool 50. In response, the occlusion detection algorithm is halted and the haptic device 30 is placed in the free mode to enable withdrawal of the tool 50. Once the burr has been replaced, the haptic device 30 may be placed in the approach mode to enable the user to direct the tool 50 to the surgical site to finish forming the medial femoral surface feature. In a preferred embodiment, prior to recommencing sculpting, the user touches the tool 50 (or a tracked probe) to a mark that was placed on the bone (e.g., the femur F or the tibia T) during the initial registration in step S8. The mark functions as a check point that enables the surgical system 10 to verify proper system configuration. For example, the check point can be used to verify that the tracking system 40 is properly configured (e.g., trackers still properly aligned relative to the anatomy, not blocked or occluded, etc.), that the tool 50 is correctly installed (e.g., properly seated, shaft not bent, etc.), and/or that any other object is properly mounted, installed, set up, etc. If the check reveals a problem with the system configuration (e.g., one of the trackers was bumped by the user during the tool change and is now misaligned), registration (step S8) must be repeated. This check point verification may be performed anytime the user desires to validate the system configuration such as when the tool 50 is withdrawn from and then reinserted into the patient. When sculpting of the medial femoral surface feature is complete, the user may signal (e.g., using a foot pedal or other input device 25) that he is ready to proceed to forming the medial femoral post feature. In one embodiment, prior to forming the medial post feature, the user replaces the 2 mm burr used to form the corners of the medial femoral surface feature with a 4 mm burr.
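The check point test amounts to comparing the probed position of the bone mark against its registered position. A minimal sketch follows, assuming a 1 mm tolerance (an invented value; the disclosure does not specify one).

```python
import math

def checkpoint_ok(probed_pt, registered_pt,
                  tolerance_mm: float = 1.0) -> bool:
    """True if the tool tip (or tracked probe) touches the bone mark
    within tolerance of its registered position; otherwise the system
    configuration is suspect and registration must be repeated."""
    return math.dist(probed_pt, registered_pt) <= tolerance_mm

# A small deviation passes; a bumped, misaligned tracker does not.
assert checkpoint_ok((10.0, 5.0, 3.0), (10.2, 5.2, 3.2))
assert not checkpoint_ok((10.0, 5.0, 3.0), (12.0, 6.0, 4.0))
```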
The process for sculpting the medial femoral post feature is substantially similar to the process for sculpting the medial femoral surface feature. As with the femoral surface feature, the surgical system 10 displays the screen 91 (shown in
The haptic device 30 enters the approach mode in which a haptic object (e.g., the haptic object 300 in
Once the haptic device 30 enters the haptic mode, the user may proceed with bone sculpting. As the user removes bone with the tool 50, the surgical system 10 updates the image of the femur F on the screen 91 to show a depth to which bone has been removed. During the bone removal process, the haptic device 30 imparts force feedback to the user, for example, based on a haptic object having a shape and volume corresponding to the portion 618 of bone to be removed. For the medial femoral post feature, a boundary of the haptic object may substantially correspond, for example, to a surface of the post 72b (shown in
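One common way to realize such force feedback is a proportional (spring-like) restoring force along the local boundary normal. The sketch below is a simplified model under that assumption; the stiffness value and the locally planar boundary patch are inventions of the example, not the disclosed implementation.

```python
import numpy as np

def boundary_force(tip, boundary_pt, outward_normal,
                   k_n_per_mm: float = 2.0):
    """Restoring force on the tool tip when it penetrates a locally
    planar patch of the haptic boundary.  The normal points out of
    the keep-out volume; k is an assumed stiffness."""
    n = np.asarray(outward_normal, float)
    n /= np.linalg.norm(n)
    depth = float(np.dot(np.asarray(boundary_pt, float)
                         - np.asarray(tip, float), n))
    if depth <= 0.0:
        return np.zeros(3)          # tip outside the keep-out volume
    return k_n_per_mm * depth * n   # push the tip back out

# Example: tip 0.5 mm inside a boundary whose outward normal is +z.
print(boundary_force((0, 0, -0.5), (0, 0, 0), (0, 0, 1)))  # ~[0. 0. 1.]
```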
The process for sculpting the medial femoral keel feature is substantially similar to the process for sculpting the medial femoral surface and post features. As with the femoral surface and post features, the surgical system 10 displays the screen 91 (shown in
The haptic device 30 enters the approach mode in which a haptic object (e.g., the haptic object 300 in
Step S15 is a trial reduction process in which the second implant (i.e., the femoral component 72) or a trial implant (e.g., a femoral trial) is fitted to the prepared medial femoral surface, post, and keel features on the femur F. The user assesses the fit of the femoral component 72 or the femoral trial and may make any desired adjustments, such as, for example, repeating implant planning and/or bone sculpting to achieve an improved fit. In step S15, adjustments may also be made to the tibia T. To facilitate trial reduction, the surgical system 10 may generate a screen (not shown) that graphically represents the tracked movement of the femur F and the tibia T and displays measurements, such as, for example, flexion, varus/valgus, and internal/external rotation angles. Additionally, the femoral and/or tibial trial implants may include intrinsic features (e.g., divots, markings, etc.) that can be touched with a tracked probe after the trial implant is fitted to the bone to enable the surgical system 10 to verify placement of the trial implant. The intrinsic features may also be used to key a position of one implant to another implant (e.g., in the case of a modular implant). When the user is satisfied with the fit of the trial implants, the user may proceed with installation of the femoral component 72 and the tibial component 74 and completion of the surgical procedure.
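The displayed measurements can be derived from the tracked poses of the two bones. The sketch below estimates only the flexion angle from the long axes of the femur and tibia, with axis conventions assumed; varus/valgus and internal/external rotation would require a full joint-coordinate decomposition not shown here.

```python
import numpy as np

def flexion_angle_deg(femur_axis, tibia_axis) -> float:
    """Knee flexion approximated as the angle between the tracked
    long axes of the femur and tibia (unit-vector convention
    assumed)."""
    f = np.asarray(femur_axis, float); f /= np.linalg.norm(f)
    t = np.asarray(tibia_axis, float); t /= np.linalg.norm(t)
    return float(np.degrees(np.arccos(np.clip(np.dot(f, t), -1.0, 1.0))))

# Example: tibia swung 30 degrees out of line with the femur.
print(f"{flexion_angle_deg((0, 0, 1), (0.5, 0.0, 0.866)):.0f} deg")  # 30 deg
```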
Thus, embodiments of the present invention provide a haptic guidance system and method that may replace direct visualization in minimally invasive surgery, spare healthy bone in orthopedic joint replacement applications, enable intraoperative adaptability and planning, and produce operative results that are sufficiently predictable, repeatable, and/or accurate regardless of surgical skill level.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only.
Claims
1. A surgical planning method, comprising:
- detecting a height of a cartilage surface above a bone;
- creating a representation of the bone and a representation of the height of the cartilage surface; and
- planning bone preparation for implanting an implant on the bone based at least in part on the detected height of the cartilage surface.
2. The method of claim 1, wherein planning the bone preparation includes aligning at least a portion of a representation of the implant with the representation of the height of the cartilage surface.
3. The method of claim 1, wherein planning the bone preparation includes adjusting at least one of a depth, a rotation, a medial/lateral position, an anterior/posterior position, an internal/external angle, a varus/valgus angle, a flexion angle, and a size of a representation of the implant.
4. The method of claim 1, further comprising displaying at least one of a depth, an internal/external angle, a varus/valgus angle, a flexion angle, and a size of a representation of the implant on a display device.
5. The method of claim 1, wherein the representation of the height of the cartilage surface includes at least one of a visual representation and a numerical representation.
6. The method of claim 1, further comprising superimposing a representation of the implant on the representation of the bone.
7. The method of claim 1, further comprising associating a representation of an implant with the representation of the bone.
8. The method of claim 1, further comprising:
- sculpting the bone with a surgical tool; and
- constraining the surgical tool so that a tip of the surgical tool is constrained against penetrating a virtual boundary.
9. The method of claim 8, wherein at least a portion of the virtual boundary comprises a representation of the implant.
10. The method of claim 8, wherein a shape of at least a portion of the virtual boundary corresponds to a shape of a surface of the implant.
11. The method of claim 1, further comprising:
- sculpting the bone to receive the implant; and
- fitting the implant to the bone.
12. The method of claim 11, further comprising:
- creating a representation of a second bone of the joint;
- moving the joint to a first position;
- identifying a first point corresponding to a first location in the joint, when the joint is in the first position;
- moving the joint to a second position;
- identifying a second point corresponding to a second location in the joint, when the joint is in the second position; and
- planning bone preparation for implanting a second implant on the second bone based at least in part on the first and second points.
13. The method of claim 12, wherein moving the joint to the first position includes moving the joint into one of flexion and extension.
14. The method of claim 12, wherein moving the joint to the second position includes moving the joint into one of flexion and extension.
15. The method of claim 12, wherein at least one of the first location and the second location includes a location on the implant fitted to the bone.
16. The method of claim 12, wherein at least one of the first location and the second location includes a location on a pre-existing implant disposed in the joint.
17. The method of claim 12, wherein planning bone preparation for implanting the second implant includes aligning a centerline of a representation of the second implant with the first and second points.
18. The method of claim 12, wherein planning bone preparation for implanting the second implant includes adjusting at least one of a depth, a rotation, a medial/lateral position, an anterior/posterior position, an internal/external angle, a varus/valgus angle, a flexion angle, and a size of a representation of the second implant.
19. The method of claim 12, further comprising superimposing a representation of the second implant on the representation of the second bone.
20. The method of claim 12, further comprising displaying at least one of a depth, an internal/external angle, a varus/valgus angle, a flexion angle, and a size of a representation of the second implant on a display device.
21. The method of claim 12, further comprising associating a representation of the second implant with the representation of the second bone.
22. The method of claim 12, further comprising:
- sculpting the second bone with a surgical tool; and
- constraining the surgical tool so that a tip of the surgical tool is constrained against penetrating a virtual boundary.
23. The method of claim 22, wherein at least a portion of the virtual boundary comprises a representation of the second implant.
24. The method of claim 22, wherein a shape of at least a portion of the virtual boundary corresponds to a shape of a surface of the second implant.
25. The method of claim 22, further comprising adjusting the virtual boundary in response to movement of the second bone.
Type: Application
Filed: Jun 23, 2008
Publication Date: Jan 8, 2009
Inventors: Arthur QUAID (North Miami, FL), Hyosig Kang (Weston, FL), Dennis Moses (Hollywood, FL), Rony Abovitz (Hollywood, FL), Maurice R. Ferre (Key Biscayne, FL), Binyamin Hajaj (Plantation, FL), Martin Roche (Fort Lauderdale, FL), Scott Illsley (Waterloo), Louis Arata (Mentor, FL), Dana Mears (Pittsburgh, PA), Timothy Blackwell (Miramar, FL), Alon Mozes (Miami Beach, FL), Sherif Aly (Boca Raton, FL), Amardeep Singh Dugal (Hollywood, FL), Randall Hand (Clinton, MS), Sandi Glauser (Weston, FL), Juan Salcedo (Miami, FL), Peter Ebbitt (Boca Raton, FL), William Tapia (Weston, FL)
Application Number: 12/144,517
International Classification: A61B 19/00 (20060101);