Patents by Inventor Thenkurussi Kesavadas

Thenkurussi Kesavadas has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11410564
    Abstract: The present disclosure provides a development system that permits a developer to generate mixed reality (MR) streaming content for display on a VR headset worn by a viewer. The system allows non-technical personnel to develop and generate the content stream; such developers are not required to possess computer skills or engineering knowledge. The generated streaming content includes embedded pre-recorded video files originally recorded in a 360-degree format, which significantly reduces computer processing time and memory requirements and speeds up the development time required to produce final executable streaming content.
    Type: Grant
    Filed: October 19, 2018
    Date of Patent: August 9, 2022
    Assignee: The Board of Trustees of the University of Illinois
    Inventors: Thenkurussi Kesavadas, Naveen Kumar Sankaran, Steven M. LaValle
  • Publication number: 20190307359
    Abstract: A tactile sensing system includes at least a stretchable strain sensing layer, an inflatable reservoir, and a means for detecting strain in the stretchable strain sensing layer. The tactile sensing system may be configured as a tumor detection system by configuring the inflatable reservoir to apply pressure to at least part of a tissue in conjunction with an anatomical contact structure, and the stretchable strain sensing layer to be in contact with a region of the surface of the tissue.
    Type: Application
    Filed: May 24, 2019
    Publication date: October 10, 2019
    Applicants: University of Maryland, College Park, University of Illinois at Urbana-Champaign
    Inventors: Elisabeth Smela, Miao Yu, Hugh A. Bruck, Thenkurussi Kesavadas
  • Publication number: 20190175054
    Abstract: A pressure sensing system includes at least two pressure sensing layers. The first pressure sensing layer includes a first sensing system configured in a layer and a first layer of foam, having a Young's modulus, mounted between the first sensing system and a second sensing system configured in a layer. At least a second pressure sensing layer includes the second sensing system and a second layer of foam, having a Young's modulus greater than that of the first layer of foam, mounted between the second sensing system and a rigid substrate whose Young's modulus is greater than those of the first sensing system layer, the first foam layer, the second sensing system layer, and the second foam layer. The pressure sensing system thereby defines a multi-layer pressure sensing system.
    Type: Application
    Filed: August 24, 2018
    Publication date: June 13, 2019
    Applicants: University of Maryland, The Board of Trustees of the University of Illinois
    Inventors: Elisabeth Smela, Miao Yu, Hugh A. Bruck, Ying Chen, Thenkurussi Kesavadas
  • Publication number: 20190139426
    Abstract: The present disclosure provides a development system that permits a developer to generate mixed reality (MR) streaming content for display on a VR headset worn by a viewer. The system allows non-technical personnel to develop and generate the content stream; such developers are not required to possess computer skills or engineering knowledge. The generated streaming content includes embedded pre-recorded video files originally recorded in a 360-degree format, which significantly reduces computer processing time and memory requirements and speeds up the development time required to produce final executable streaming content.
    Type: Application
    Filed: October 19, 2018
    Publication date: May 9, 2019
    Inventors: Thenkurussi Kesavadas, Naveen Kumar Sankaran, Steven M. LaValle
  • Patent number: 9601025
    Abstract: The present invention may be embodied as a method of minimally-invasive surgery ("MIS") training using a video of an MIS, comprising the steps of providing a processor, a display, and a first input device. The method comprises processing the video, using the processor, to determine a location of a first surgical tool in each frame, and determining whether the configuration of the first input device substantially corresponds to the location of the first surgical tool in each frame of the video while the video is displayed. The present invention may be embodied as a system for MIS training. The system comprises a processor, a communication device, and a first input device in communication with the processor. The processor is programmed to perform any or all of the disclosed methods.
    Type: Grant
    Filed: May 26, 2011
    Date of Patent: March 21, 2017
    Assignees: Health Research, Inc., The Research Foundation for The State University of New York
    Inventors: Thenkurussi Kesavadas, Khurshid Guru
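The core of the abstract above is a frame-by-frame correspondence check: the tool location extracted from each video frame is compared against the trainee's input-device position. A minimal sketch of that loop follows; the function names, pixel coordinates, and tolerance value are invented for illustration and are not from the patent:

```python
import math

def within_tolerance(tool_xy, input_xy, tol=15.0):
    """True if the input-device position substantially corresponds to
    the tool location tracked in one video frame (pixel tolerance)."""
    return math.dist(tool_xy, input_xy) <= tol

def score_session(tool_track, input_track, tol=15.0):
    """Fraction of frames in which the trainee's input followed the
    surgical tool visible in the video."""
    matches = sum(
        within_tolerance(t, i, tol) for t, i in zip(tool_track, input_track)
    )
    return matches / len(tool_track)

# Hypothetical per-frame (x, y) tracks: expert tool vs. trainee input.
tool =   [(100, 100), (110, 105), (120, 110), (130, 115)]
inputs = [(102, 101), (111, 104), (160, 150), (131, 116)]
print(score_session(tool, inputs))  # 3 of 4 frames match -> 0.75
```

A real system would obtain `tool_track` from a vision pipeline running over the recorded MIS video rather than from a hard-coded list.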
  • Patent number: 9595207
    Abstract: A system and method for training a person in minimally-invasive surgery ("MIS") utilizing a video of the MIS. The system comprises a processor, a display, and a first interaction device. The processor is programmed to receive the video and to obtain tracking data. The tracking data may correspond to the motion of a tool controller. The tracking data may correspond to motion of a first surgical tool in the video. The processor is programmed to calculate motion of the first interaction device corresponding to the tracking data, to display the video, and to cause the first interaction device to move according to the calculated motion. The method comprises receiving the video, obtaining the tracking data, calculating a motion of a first interaction device corresponding to the tracking data, displaying the video, and causing the first interaction device to move according to the calculated motion.
    Type: Grant
    Filed: May 26, 2011
    Date of Patent: March 14, 2017
    Assignees: Health Research, Inc., The Research Foundation for The State University of New York
    Inventors: Thenkurussi Kesavadas, Khurshid Guru
  • Publication number: 20130295540
    Abstract: A system and method for training a person in minimally-invasive surgery ("MIS") utilizing a video of the MIS. The system comprises a processor, a display, and a first interaction device. The processor is programmed to receive the video and to obtain tracking data. The tracking data may correspond to the motion of a tool controller. The tracking data may correspond to motion of a first surgical tool in the video. The processor is programmed to calculate motion of the first interaction device corresponding to the tracking data, to display the video, and to cause the first interaction device to move according to the calculated motion. The method comprises receiving the video, obtaining the tracking data, calculating a motion of a first interaction device corresponding to the tracking data, displaying the video, and causing the first interaction device to move according to the calculated motion.
    Type: Application
    Filed: May 26, 2011
    Publication date: November 7, 2013
    Applicants: The Research Foundation for The State University of New York, Health Research, Inc.
    Inventors: Thenkurussi Kesavadas, Khurshid Guru
  • Publication number: 20130288214
    Abstract: The present invention may be embodied as a method of minimally-invasive surgery ("MIS") training using a video of an MIS, comprising the steps of providing a processor, a display, and a first input device. The method comprises processing the video, using the processor, to determine a location of a first surgical tool in each frame, and determining whether the configuration of the first input device substantially corresponds to the location of the first surgical tool in each frame of the video while the video is displayed. The present invention may be embodied as a system for MIS training. The system comprises a processor, a communication device, and a first input device in communication with the processor. The processor is programmed to perform any or all of the disclosed methods.
    Type: Application
    Filed: May 26, 2011
    Publication date: October 31, 2013
    Applicants: The Research Foundation for The State University of New York, Health Research, Inc.
    Inventors: Thenkurussi Kesavadas, Khurshid Guru
  • Publication number: 20120245595
    Abstract: A system for manipulating elongate surgical instruments comprises a console, which comprises an input controller. The input controller may have a haptic feedback mechanism. The system further comprises a slave component, which comprises a first linear actuator, a second linear actuator, and a first rotational actuator. Each actuator is in electrical communication with the input controller. The slave component further comprises a force sensor in electronic communication with the input controller. The force sensor is configured to measure a force acting upon the first elongate member in at least one degree of freedom. The force sensor sends a force signal to the haptic feedback mechanism of the input controller.
    Type: Application
    Filed: August 26, 2010
    Publication date: September 27, 2012
    Applicant: The Research Foundation of State University of New York
    Inventors: Thenkurussi Kesavadas, Govindarajan Srimathveeravalli
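The abstract above describes a classic master-slave loop with force reflection: controller motion is forwarded to the slave actuators, and the slave's sensed force is scaled back to the haptic mechanism. A minimal sketch of one update step, with invented names and an illustrative scaling factor (the patent does not specify one):

```python
def teleop_step(input_delta, measured_force_n, force_scale=0.5):
    """One master-slave update: forward the input controller's motion
    to the slave's two linear and one rotational actuators, and return
    the force to render on the haptic feedback mechanism."""
    dx, dy, dtheta = input_delta  # controller displacement this tick
    actuator_cmd = {"linear1": dx, "linear2": dy, "rotary1": dtheta}
    haptic_force = force_scale * measured_force_n  # reflected force
    return actuator_cmd, haptic_force

# Hypothetical tick: small advance plus a twist, 2 N sensed at the tool.
cmd, feedback = teleop_step((1.0, -0.5, 0.1), measured_force_n=2.0)
print(cmd, feedback)  # feedback force is 1.0 N with the 0.5 scale
```

In practice this step would run inside a fixed-rate control loop, with the force signal filtered before it reaches the haptic device.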
  • Publication number: 20100285438
    Abstract: The present invention may be embodied as a method of minimally-invasive surgery (“MIS”) training wherein a simulator having a display, a computer, and a first input device, is provided. A video of an MIS is displayed on the display, and a first surgical tool is visible in at least a portion of the video. A match zone corresponding to a position on the first surgical tool is determined. A computer-generated virtual surgical tool (“CG tool”) is superimposed on the displayed video. The CG tool is selectively controlled by the first input device. A target position of the CG tool is determined. If the target position is not determined to be within the match zone, further steps may be taken. For example, the video may be paused, a message may be displayed to the operator, or the computer may signal the input device to move to a position such that the target position is within the match zone.
    Type: Application
    Filed: March 12, 2010
    Publication date: November 11, 2010
    Inventors: Thenkurussi Kesavadas, Khurshid Guru, Govindarajan Srimathveeravalli
  • Publication number: 20090305210
    Abstract: A system according to the invention may include a frame, a computer, a display, and two input devices. The frame may be adjustable, may be made from a lightweight material, and may fold for easier portability. The display and the computer may be in communication with each other and each may be attached to the frame. The display may be a binocular display, or may be a touchscreen display. Additional displays may be used. Two input devices may be used to simulate the master console of a surgical robot. The input devices may be articulated armature devices suitable for providing 3D input. The input devices may be attached to the frame in an “upside-down” configuration wherein a base of each input device is affixed to the frame such that a first joint of an arm is below the base. The input devices may be in communication with the computer and may provide positional signals to the computer. The positional signals may correspond to a position of an arm of each input device.
    Type: Application
    Filed: March 11, 2009
    Publication date: December 10, 2009
    Inventors: Khurshid Guru, Thenkurussi Kesavadas, Govindarajan Srimathveeravalli
  • Publication number: 20090278798
    Abstract: A finger-mounted implement including a kinesthetic sensor, at least one tactile sensor, and means for securing the kinesthetic sensor and the at least one tactile sensor to a fingertip. The tactile sensor may be a thin-film force transducer, a piezoelectric accelerometer, or a combination thereof. An artificial fingernail may be connected to the accelerometer. The kinesthetic sensor may include a magnetic transducer and may sense an X-Y-Z position and an angular orientation of a fingertip to which the kinesthetic sensor is secured. The securing means may include at least one means selected from the group consisting of adhesive tape, an elastically deformable cover, and detachable adhesive. The implement can be further connected to a computer processing system for, amongst other things, the virtual representation of sensed objects. The implement can also be used as part of a method of haptic sensing of objects.
    Type: Application
    Filed: July 26, 2007
    Publication date: November 12, 2009
    Applicant: The Research Foundation of the State University of New York
    Inventors: Young-Seok Kim, Thenkurussi Kesavadas
  • Publication number: 20060119578
    Abstract: An apparatus (15) for interfacing between an operator (26) and computer generated virtual object comprising a force sensor (19) that provides a force signal as a function of the amount of force applied to a representative physical body (22), a position sensor (18) that provides a position signal representative of the location of the position sensor when the force is applied, an article (16) for coupling the force sensor and the position sensor to an extremity of an operator, and a processor system (20) communicating with the force sensor and the position sensor and adapted to deform a virtual object (24) as a function of the force signal and the position signal.
    Type: Application
    Filed: November 10, 2005
    Publication date: June 8, 2006
    Inventors: Thenkurussi Kesavadas, Ameya Kamerkar, Ajay Anand
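The apparatus above deforms a virtual object as a function of a force signal and a position signal. A toy version of that mapping is sketched below: vertices near the sensed contact point are displaced in proportion to the applied force, with a linear falloff. The deformation law, stiffness, and radius are my own illustrative choices, not from the patent:

```python
import math

def deform_vertices(vertices, contact_point, force_n,
                    stiffness=200.0, radius=0.05):
    """Displace mesh vertices near the contact point in proportion to
    the applied force, falling off linearly with distance (toy model)."""
    deformed = []
    for v in vertices:
        d = math.dist(v, contact_point)
        if d < radius:
            falloff = 1.0 - d / radius            # 1 at contact, 0 at rim
            depth = (force_n / stiffness) * falloff
            deformed.append((v[0], v[1], v[2] - depth))  # push along -Z
        else:
            deformed.append(v)                    # outside contact radius
    return deformed

# Hypothetical surface patch poked at the origin with 2 N.
patch = [(0.0, 0.0, 0.0), (0.03, 0.0, 0.0), (0.2, 0.0, 0.0)]
print(deform_vertices(patch, (0.0, 0.0, 0.0), force_n=2.0))
```

In the patented apparatus the contact point and force would come from the finger-mounted position and force sensors rather than constants.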
  • Patent number: 6752770
    Abstract: An apparatus for analyzing a region below one or more layers of tissue includes a force sensor, a position sensor, and a processing system. The force sensor provides a force signal representative of an amount of force applied to a portion of the one or more layers of tissue which is adjacent to the region. The position sensor provides a position signal representative of the location of the position sensor when the force is applied. The processing system coupled to the force sensor and the position sensor determines at least one property of the region based on the force signal and the position signal.
    Type: Grant
    Filed: November 14, 2001
    Date of Patent: June 22, 2004
    Assignee: The Research Foundation of State University of New York
    Inventors: James Mayrose, Thenkurussi Kesavadas, Kevin Chugh
  • Patent number: 6654656
    Abstract: The disclosure relates to rapid informational prototypes of three-dimensional objects and systems having three or more dimensions, wherein the informational prototype includes information beyond outer physical shape, such as stress contours, thermal gradients, internal structures and elements, and elements varying with time. One preferred manner for indicating information is through the use of differently colored regions in the prototype. According to a rapid prototyping method of the present invention, a series of slices through the object or system are defined by an ordinal number, overall contour information for describing shape of the slice in an X-Y plane, slice thickness information for describing thickness of the slice in a Z direction, and slice image information for providing useful information other than the overall contour information, preferably through color images.
    Type: Grant
    Filed: March 6, 2001
    Date of Patent: November 25, 2003
    Assignee: The Research Foundation of State University of New York
    Inventors: Thenkurussi Kesavadas, Kirk C. Stalis
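The abstract above enumerates four pieces of data per slice: an ordinal number, overall contour information, slice thickness, and slice image information. A minimal data-structure sketch of that record follows; the field and class names are my own, not the patent's terminology:

```python
from dataclasses import dataclass, field

@dataclass
class InformationalSlice:
    """One slice of an informational prototype, carrying the four
    fields the abstract enumerates."""
    ordinal: int                  # position of the slice in the stack
    contour: list                 # (x, y) outline points in the X-Y plane
    thickness: float              # slice thickness in the Z direction
    image: list = field(default_factory=list)  # color data beyond the
                                               # contour (e.g. stress)

def model_height(slices):
    """Total Z height of the prototype: the sum of slice thicknesses."""
    return sum(s.thickness for s in slices)

# Two hypothetical unit-square slices, 0.2 units thick each.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
slices = [InformationalSlice(0, square, 0.2),
          InformationalSlice(1, square, 0.2)]
print(model_height(slices))  # 0.4
```

A slicing pipeline would populate `image` per slice from the source analysis (stress contours, thermal gradients, and so on) so the printer can color each layer.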
  • Publication number: 20030167099
    Abstract: The disclosure relates to rapid informational prototypes of three-dimensional objects and systems having three or more dimensions, wherein the informational prototype includes information beyond outer physical shape, such as stress contours, thermal gradients, internal structures and elements, and elements varying with time. One preferred manner for indicating information is through the use of differently colored regions in the prototype. According to a rapid prototyping method of the present invention, a series of slices through the object or system are defined by an ordinal number, overall contour information for describing shape of the slice in an X-Y plane, slice thickness information for describing thickness of the slice in a Z direction, and slice image information for providing useful information other than the overall contour information, preferably through color images.
    Type: Application
    Filed: March 6, 2001
    Publication date: September 4, 2003
    Inventors: Thenkurussi Kesavadas, Kirk C. Stalis
  • Publication number: 20020133093
    Abstract: An apparatus for analyzing a region below one or more layers of tissue includes a force sensor, a position sensor, and a processing system. The force sensor provides a force signal representative of an amount of force applied to a portion of the one or more layers of tissue which is adjacent to the region. The position sensor provides a position signal representative of the location of the position sensor when the force is applied. The processing system coupled to the force sensor and the position sensor determines at least one property of the region based on the force signal and the position signal.
    Type: Application
    Filed: November 14, 2001
    Publication date: September 19, 2002
    Inventors: James Mayrose, Thenkurussi Kesavadas, Kevin Chugh