Patents by Inventor James J. Kuffner

James J. Kuffner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160207200
    Abstract: Methods and systems for determining a status of a component of a device are provided. An example method includes triggering an action of a component of a device, and responsively receiving information associated with the action of the component from a sensor. The method further includes a computing system having a processor and a memory comparing the information with calibration data and determining a status of the component based on the comparison. In some examples, the calibration data may include information derived from data received from a pool of one or more devices utilizing same or similar components as the component. The determined status may include information associated with a performance of the component with respect to performances of same or similar components of the pool of devices. In one example, the device may self-calibrate the component based on the status.
    Type: Application
    Filed: March 30, 2016
    Publication date: July 21, 2016
    Inventors: James J. Kuffner, Jr., Ryan Hickman
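The comparison the abstract describes, checking a component's sensor reading against calibration data pooled from similar devices, could be sketched roughly as follows. All names (`check_component_status`, `pool_readings`) are illustrative, not from the patent:

```python
# Hypothetical sketch of the pooled-calibration status check described above.
# A reading is "ok" if it falls within a few standard deviations of the
# pooled readings collected from devices with the same or similar component.
from statistics import mean, stdev

def check_component_status(reading, pool_readings, max_sigma=2.0):
    """Compare one sensor reading against calibration data derived from
    a pool of devices; return a coarse status string."""
    mu = mean(pool_readings)
    sigma = stdev(pool_readings)
    if sigma == 0:
        return "ok" if reading == mu else "degraded"
    deviation = abs(reading - mu) / sigma
    return "ok" if deviation <= max_sigma else "degraded"
```

A "degraded" status could then trigger the self-calibration step the abstract mentions.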
  • Patent number: 9375839
    Abstract: Methods and computer-program products for evaluating grasp patterns for use by a robot are disclosed. In one embodiment, a method of evaluating grasp patterns includes selecting an individual grasp pattern from a grasp pattern set, establishing a thumb-up vector, and simulating the motion of the manipulator and the end effector according to the selected individual grasp pattern, wherein each individual grasp pattern of the grasp pattern set corresponds to motion for manipulating a target object. The method further includes evaluating a direction of the thumb-up vector during at least a portion of the simulated motion of the manipulator and the end effector, and excluding the selected individual grasp pattern from use by the robot if the direction of the thumb-up vector during the simulated motion is outside of one or more predetermined thresholds. Robots utilizing the methods and computer-program products for evaluating grasp patterns are also disclosed.
    Type: Grant
    Filed: March 6, 2015
    Date of Patent: June 28, 2016
    Assignees: Carnegie Mellon University, Toyota Jidosha Kabushiki Kaisha
    Inventors: Yasuhiro Ota, Junggon Kim, James J. Kuffner
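The thumb-up-vector filtering step in the abstract could look something like the sketch below, assuming the threshold is a maximum angle between the simulated thumb-up vector and the world up axis (function and field names are invented for illustration):

```python
# Hypothetical sketch of excluding grasp patterns whose thumb-up vector
# strays too far from world-up at any point in the simulated motion.
import math

def angle_from_up(vec):
    """Angle (radians) between a thumb-up vector and the world up axis (0, 0, 1)."""
    x, y, z = vec
    norm = math.sqrt(x * x + y * y + z * z)
    return math.acos(z / norm)

def filter_grasps(grasp_set, max_angle=math.radians(45)):
    """Keep only grasp patterns whose thumb-up vector stays within the
    angular threshold for the whole simulated trajectory."""
    return [g for g in grasp_set
            if all(angle_from_up(v) <= max_angle
                   for v in g["thumb_up_trajectory"])]
```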
  • Patent number: 9327404
    Abstract: Methods and systems for determining a status of a component of a device are provided. An example method includes triggering an action of a component of a device, and responsively receiving information associated with the action of the component from a sensor. The method further includes a computing system having a processor and a memory comparing the information with calibration data and determining a status of the component based on the comparison. In some examples, the calibration data may include information derived from data received from a pool of one or more devices utilizing same or similar components as the component. The determined status may include information associated with a performance of the component with respect to performances of same or similar components of the pool of devices. In one example, the device may self-calibrate the component based on the status.
    Type: Grant
    Filed: December 1, 2014
    Date of Patent: May 3, 2016
    Assignee: Google Inc.
    Inventors: James J. Kuffner, Jr., Ryan Hickman
  • Patent number: 9283678
    Abstract: Methods and systems for determining and presenting virtual safety cages are provided. An example method may involve receiving an instruction for a robotic device to perform a physical action in a physical environment occupied by the robotic device. The method may also involve, responsive to receiving the instruction, and based on one or more parameters of one or more physical components of the robotic device, determining one or more estimated trajectories along which the one or more physical components of the robotic device are estimated to move as the robotic device performs the physical action. The method may further involve, based on the one or more estimated trajectories, determining a virtual representation of a space that the robotic device is estimated to occupy in the physical environment while performing the physical action. The method may then involve providing, into the physical environment, an indication of a location of the space.
    Type: Grant
    Filed: July 16, 2014
    Date of Patent: March 15, 2016
    Assignee: Google Inc.
    Inventors: James J. Kuffner, Jr., Peter Elving Anderson-Sprecher
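A minimal sketch of the "virtual safety cage" idea, assuming the space the robot is estimated to occupy is approximated by a bounding box over its estimated trajectories (the representation and names are assumptions, not from the patent):

```python
# Hypothetical safety-cage computation: an axis-aligned bounding box over
# all estimated trajectory points, inflated by a safety margin (meters).
def safety_cage(trajectories, margin=0.05):
    """Return (lo, hi) corners of a box enclosing every point of every
    estimated trajectory, expanded by `margin` on each side."""
    xs, ys, zs = zip(*(p for traj in trajectories for p in traj))
    lo = (min(xs) - margin, min(ys) - margin, min(zs) - margin)
    hi = (max(xs) + margin, max(ys) + margin, max(zs) + margin)
    return lo, hi
```

The resulting volume could then be projected into the physical environment as the indication of the space the abstract describes.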
  • Publication number: 20160042555
    Abstract: A method and system for video encoding assets for swivel/360-degree spinners is disclosed. Still images of a 3D object from different perspectives about the 3D object may be stacked and then video encoded to generate video frames of the object from the different perspectives. The video-encoded assets may be stored on a server or other network-connected device, and later retrieved by a connected client device for display processing by a swivel/360-degree spinner on the client device. The swivel/360-degree spinner may utilize native video processing capabilities of the client device and/or of a browser running on the client device to display video motion of the object moving through different angular orientations in response to movement of an interactive cursor.
    Type: Application
    Filed: October 12, 2015
    Publication date: February 11, 2016
    Inventors: Chaitanya Gharpure, James J. Kuffner, Jr.
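On the client side, a swivel/360-degree spinner of the kind described must map horizontal cursor movement onto a frame of the encoded video, since each frame is a render of the object at a known angular orientation. A rough sketch (parameter names are illustrative):

```python
# Hypothetical cursor-to-frame mapping for a 360-degree spinner whose video
# frames are evenly spaced viewing angles around the object.
def frame_for_angle(cursor_dx, frames_per_rev, current_frame, px_per_rev=360):
    """Translate horizontal cursor movement (pixels) into a new frame
    index, wrapping around after a full revolution."""
    delta = round(cursor_dx / px_per_rev * frames_per_rev)
    return (current_frame + delta) % frames_per_rev
```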
  • Publication number: 20160016315
    Abstract: Methods and systems for determining and presenting virtual safety cages are provided. An example method may involve receiving an instruction for a robotic device to perform a physical action in a physical environment occupied by the robotic device. The method may also involve, responsive to receiving the instruction, and based on one or more parameters of one or more physical components of the robotic device, determining one or more estimated trajectories along which the one or more physical components of the robotic device are estimated to move as the robotic device performs the physical action. The method may further involve, based on the one or more estimated trajectories, determining a virtual representation of a space that the robotic device is estimated to occupy in the physical environment while performing the physical action. The method may then involve providing, into the physical environment, an indication of a location of the space.
    Type: Application
    Filed: July 16, 2014
    Publication date: January 21, 2016
    Inventors: James J. Kuffner, Jr., Peter Elving Anderson-Sprecher
  • Patent number: 9205886
    Abstract: The present application discloses systems and methods for inventorying objects. In one embodiment, a robot detects an object and sends identification data and location data associated with the detected object to a cloud computing system. The identification data may include an image of the object and/or information from a tag, code, or beacon associated with the object. In response to receiving the identification data and the location data, the cloud computing system identifies the object. The cloud computing system may also determine or create a first map associated with the identified object and a second map associated with the identified object. The first map may be associated with the current location of the object and the second map may correspond to a past location of the object. The cloud computing server may compare the first and second maps, and then send instructions to the robot based on the comparison.
    Type: Grant
    Filed: May 3, 2012
    Date of Patent: December 8, 2015
    Assignee: Google Inc.
    Inventors: Ryan Hickman, James J. Kuffner, Jr., Anthony G. Francis, Jr., Chaitanya Gharpure
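The map comparison in this abstract, a current-location map versus a past-location map for identified objects, could be sketched as a simple diff (the dictionary representation is an assumption for illustration):

```python
# Hypothetical comparison of a first map (current object locations) with a
# second map (past locations), reporting objects that have moved.
def moved_objects(current_map, past_map):
    """Return {object_id: (past_position, current_position)} for every
    object present in both maps whose position changed."""
    moved = {}
    for obj_id, pos in current_map.items():
        if obj_id in past_map and past_map[obj_id] != pos:
            moved[obj_id] = (past_map[obj_id], pos)
    return moved
```

The cloud system could derive robot instructions (e.g. "return the cup to its past location") from such a diff.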
  • Patent number: 9189884
    Abstract: A method and system for video encoding assets for swivel/360-degree spinners is disclosed. Still images of a 3D object from different perspectives about the 3D object may be stacked and then video encoded to generate video frames of the object from the different perspectives. The video-encoded assets may be stored on a server or other network-connected device, and later retrieved by a connected client device for display processing by a swivel/360-degree spinner on the client device. The swivel/360-degree spinner may utilize native video processing capabilities of the client device and/or of a browser running on the client device to display video motion of the object moving through different angular orientations in response to movement of an interactive cursor.
    Type: Grant
    Filed: November 13, 2012
    Date of Patent: November 17, 2015
    Assignee: Google Inc.
    Inventors: Chaitanya Gharpure, James J. Kuffner, Jr.
  • Patent number: 9183672
    Abstract: Methods and systems for providing a three-dimensional (3D) image viewer in a webpage are provided. According to an example method, a webpage may be provided, and the webpage may include embedded language that identifies a 3D image viewer to be provided within the webpage. Based on the embedded language, a computer having a processor and a memory may request information associated with rendering a 3D object data model in the 3D image viewer. The method may also include providing the 3D image viewer within the webpage, and receiving information associated with rendering the 3D object data model. Additionally, the 3D object data model may be rendered in the 3D image viewer based on the received information. Additional example systems and methods are described herein.
    Type: Grant
    Filed: May 15, 2013
    Date of Patent: November 10, 2015
    Assignee: Google Inc.
    Inventors: Ryan Hickman, James J. Kuffner, Jr., Anthony Gerald Francis, Jr., Arshan Poursohi, James R. Bruce, Thor Lewis, Chaitanya Gharpure
  • Publication number: 20150213417
    Abstract: Methods and systems for proactively preventing hazardous or other situations in a robot-cloud interaction are provided. An example method includes receiving information associated with task logs for a plurality of robotic devices. The task logs may include information associated with tasks performed by the plurality of robotic devices. The method may also include a computing system determining information associated with hazardous situations based on the information associated with the task logs. For example, the hazardous situations may comprise situations associated with failures of one or more components of the plurality of robotic devices. According to the method, information associated with a contextual situation of a first robotic device may be determined, and when the information associated with the contextual situation is consistent with information associated with the one or more hazardous situations, an alert indicating a potential failure of the first robotic device may be provided.
    Type: Application
    Filed: April 8, 2015
    Publication date: July 30, 2015
    Inventors: James J. Kuffner, Jr., Ryan Hickman
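The matching step, checking whether a robot's current contextual situation is consistent with known hazardous situations mined from task logs, might be sketched as follows (the set-based representation and names are assumptions):

```python
# Hypothetical hazard matching: alert on any hazardous situation whose
# recorded conditions are all present in the robot's current context.
def hazard_alerts(context, hazards):
    """Return the names of hazards consistent with the current context.

    context: set of condition strings observed on the robot.
    hazards: list of {"name": str, "conditions": set} mined from task logs.
    """
    return [h["name"] for h in hazards if h["conditions"] <= context]
```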
  • Publication number: 20150185729
    Abstract: Methods and systems for allocating tasks to robotic devices are provided. An example method includes receiving information associated with task logs for a plurality of robotic devices and in a computing system configured to access a processor and memory, determining information associated with a health level for the plurality of robotic devices based on the information associated with the task logs. A health level for a given robotic device may be proportional to a current level of ability to perform a function, which may change over a lifespan of the given robotic device. Information associated with a plurality of tasks to be performed by one or more of the robotic devices may also be determined. The computing system may optimize an allocation of the plurality of tasks such that a high precision task may be allocated to a robotic device having a greater current health level than another robotic device.
    Type: Application
    Filed: March 11, 2015
    Publication date: July 2, 2015
    Inventors: James J. Kuffner, Jr., Ryan Hickman
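One simple way to realize the allocation the abstract describes, pairing high-precision tasks with high-health robots, is a greedy sort-and-zip; this is a sketch under that assumption, not the patent's actual optimization:

```python
# Hypothetical greedy allocation: the most precision-demanding task goes to
# the robot with the greatest current health level, and so on down the list.
def allocate_tasks(tasks, robots):
    """tasks:  list of {"name": str, "precision": float}
    robots: list of {"name": str, "health": float}
    Returns {task_name: robot_name}."""
    tasks = sorted(tasks, key=lambda t: t["precision"], reverse=True)
    robots = sorted(robots, key=lambda r: r["health"], reverse=True)
    return {t["name"]: r["name"] for t, r in zip(tasks, robots)}
```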
  • Publication number: 20150174759
    Abstract: Methods and computer-program products for evaluating grasp patterns for use by a robot are disclosed. In one embodiment, a method of evaluating grasp patterns includes selecting an individual grasp pattern from a grasp pattern set, establishing a thumb-up vector, and simulating the motion of the manipulator and the end effector according to the selected individual grasp pattern, wherein each individual grasp pattern of the grasp pattern set corresponds to motion for manipulating a target object. The method further includes evaluating a direction of the thumb-up vector during at least a portion of the simulated motion of the manipulator and the end effector, and excluding the selected individual grasp pattern from use by the robot if the direction of the thumb-up vector during the simulated motion is outside of one or more predetermined thresholds. Robots utilizing the methods and computer-program products for evaluating grasp patterns are also disclosed.
    Type: Application
    Filed: March 6, 2015
    Publication date: June 25, 2015
    Applicants: Toyota Motor Engineering & Manufacturing North America, Inc., Carnegie Mellon University
    Inventors: Yasuhiro Ota, Junggon Kim, James J. Kuffner
  • Publication number: 20150148957
    Abstract: Methods and systems for determining a status of a component of a device are provided. An example method includes triggering an action of a component of a device, and responsively receiving information associated with the action of the component from a sensor. The method further includes a computing system having a processor and a memory comparing the information with calibration data and determining a status of the component based on the comparison. In some examples, the calibration data may include information derived from data received from a pool of one or more devices utilizing same or similar components as the component. The determined status may include information associated with a performance of the component with respect to performances of same or similar components of the pool of devices. In one example, the device may self-calibrate the component based on the status.
    Type: Application
    Filed: December 1, 2014
    Publication date: May 28, 2015
    Inventors: James J. Kuffner, Jr., Ryan Hickman
  • Patent number: 9024771
    Abstract: Methods and systems for proactively preventing hazardous or other situations in a robot-cloud interaction are provided. An example method includes receiving information associated with task logs for a plurality of robotic devices. The task logs may include information associated with tasks performed by the plurality of robotic devices. The method may also include a computing system determining information associated with hazardous situations based on the information associated with the task logs. For example, the hazardous situations may comprise situations associated with failures of one or more components of the plurality of robotic devices. According to the method, information associated with a contextual situation of a first robotic device may be determined, and when the information associated with the contextual situation is consistent with information associated with the one or more hazardous situations, an alert indicating a potential failure of the first robotic device may be provided.
    Type: Grant
    Filed: January 30, 2013
    Date of Patent: May 5, 2015
    Assignee: Google Inc.
    Inventors: James J. Kuffner, Jr., Ryan Hickman
  • Patent number: 9019268
    Abstract: System and methods for rendering three-dimensional (3D) object data models based on a comparison of images. A 3D object data model of an object can be characterized by parameters defining rendering features of the 3D object data model. A comparison can be made of a first rendering of the 3D object data model to one or more reference images related to the object and, based on the comparison, the parameters of the 3D object data model can be modified. Following the modification, the 3D object data model can be rendered to generate a second rendering. Based on the second rendered 3D object data model, statistical information can be obtained and based on the statistical information, the parameters of the 3D object data model can be modified again to further adjust the appearance of the second rendering of the 3D object data model.
    Type: Grant
    Filed: November 9, 2012
    Date of Patent: April 28, 2015
    Assignee: Google Inc.
    Inventors: James J. Kuffner, Jr., Anthony Gerald Francis, Jr., James R. Bruce, Arshan Poursohi
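The render-compare-modify loop in this abstract can be sketched as a simple coordinate-descent search over rendering parameters; the image-difference metric and parameter names here are illustrative assumptions:

```python
# Hypothetical tuning loop: render the 3D model, compare with a reference
# image, and nudge each rendering parameter in the direction that reduces
# the difference, repeating until the renders stop improving.
def image_diff(a, b):
    """Sum of absolute per-pixel differences between equal-size images."""
    return sum(abs(pa - pb) for pa, pb in zip(a, b))

def tune_render_params(params, render, reference, step=0.1, iters=25):
    best = dict(params)
    best_err = image_diff(render(best), reference)
    for _ in range(iters):
        for key in best:
            for delta in (-step, step):
                trial = dict(best)
                trial[key] += delta
                err = image_diff(render(trial), reference)
                if err < best_err:
                    best, best_err = trial, err
    return best
```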
  • Patent number: 9014857
    Abstract: Methods and computer program products for generating robot grasp patterns are disclosed. In one embodiment, a method for generating robot grasp patterns includes generating a plurality of approach rays associated with a target object. Each approach ray of the plurality of approach rays extends perpendicularly from a surface of the target object. The method further includes generating at least one grasp pattern for each approach ray to generate a grasp pattern set of the target object, calculating a grasp quality score for each individual grasp pattern of the grasp pattern set, and comparing the grasp quality score of each individual grasp pattern with a grasp quality threshold. The method further includes selecting individual grasp patterns of the grasp pattern set having a grasp quality score that is greater than the grasp quality threshold, and providing the selected individual grasp patterns to the robot for on-line manipulation of the target object.
    Type: Grant
    Filed: January 13, 2012
    Date of Patent: April 21, 2015
    Assignees: Toyota Motor Engineering & Manufacturing North America, Inc., Carnegie Mellon University
    Inventors: Yasuhiro Ota, Junggon Kim, Kunihiro Iwamoto, James J. Kuffner, Nancy S. Pollard
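The final selection step of this method, keeping only grasp patterns whose quality score beats the threshold before handing them to the robot, reduces to a filter (field names are assumptions for illustration):

```python
# Hypothetical quality-threshold filter over a generated grasp pattern set.
def select_grasps(grasp_set, quality_threshold):
    """Keep grasp patterns whose quality score is greater than the
    threshold; these would be provided for on-line manipulation."""
    return [g for g in grasp_set if g["quality"] > quality_threshold]
```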
  • Patent number: 9014850
    Abstract: Methods and computer-program products for evaluating grasp patterns for use by a robot are disclosed. In one embodiment, a method of evaluating grasp patterns includes selecting an individual grasp pattern from a grasp pattern set, establishing a thumb-up vector, and simulating the motion of the manipulator and the end effector according to the selected individual grasp pattern, wherein each individual grasp pattern of the grasp pattern set corresponds to motion for manipulating a target object. The method further includes evaluating a direction of the thumb-up vector during at least a portion of the simulated motion of the manipulator and the end effector, and excluding the selected individual grasp pattern from use by the robot if the direction of the thumb-up vector during the simulated motion is outside of one or more predetermined thresholds. Robots utilizing the methods and computer-program products for evaluating grasp patterns are also disclosed.
    Type: Grant
    Filed: January 13, 2012
    Date of Patent: April 21, 2015
    Assignees: Toyota Motor Engineering & Manufacturing North America, Inc., Carnegie Mellon University
    Inventors: Yasuhiro Ota, Junggon Kim, James J. Kuffner
  • Patent number: 9008839
    Abstract: Methods and systems for allocating tasks to robotic devices are provided. An example method includes receiving information associated with task logs for a plurality of robotic devices and in a computing system configured to access a processor and memory, determining information associated with a health level for the plurality of robotic devices based on the information associated with the task logs. A health level for a given robotic device may be proportional to a current level of ability to perform a function, which may change over a lifespan of the given robotic device. Information associated with a plurality of tasks to be performed by one or more of the robotic devices may also be determined. According to the method, the computing system may optimize an allocation of the plurality of tasks such that a high precision task may be allocated to a robotic device having a greater current health level than another robotic device.
    Type: Grant
    Filed: February 3, 2013
    Date of Patent: April 14, 2015
    Assignee: Google Inc.
    Inventors: James J. Kuffner, Jr., Ryan Hickman
  • Patent number: 8976179
    Abstract: Methods and systems are provided for determining and transmitting applicable lighting information, applicable viewing perspective, and a 3D model for an object in response to a search query. An example method includes receiving, at a server, a search query regarding an object. A 3D model for the object is determined. The 3D model includes three-dimensional shape information about the object. The method also includes determining, based on a plurality of stored images of the object, at least one applicable light field and at least one applicable viewing perspective. A search query result is transmitted from the server. The search query result may include the 3D model, the applicable light field(s), and the applicable viewing perspective(s). A server and a non-transitory computer readable medium are also disclosed that could perform a similar method.
    Type: Grant
    Filed: March 8, 2013
    Date of Patent: March 10, 2015
    Assignee: Google Inc.
    Inventors: James J. Kuffner, Jr., James R. Bruce, Arshan Poursohi, Ryan Hickman
  • Patent number: 8948498
    Abstract: Examples disclose a method and system to transform a colored point cloud to a 3D textured mesh. The method may be executable to identify a location on a 2D image of an object, identify a location on a 3D image of the object, and determine a color associated with the location on the 2D image. Determining a color may include receiving data associated with a simulation of a plurality of rays cast on the 3D image, identifying a color of the location on the 3D image associated with the received data, identifying a confidence level associated with the identified color of the location on the 3D image, and associating the identified color of the location on the 3D image with the location on the 2D image.
    Type: Grant
    Filed: August 23, 2013
    Date of Patent: February 3, 2015
    Assignee: Google Inc.
    Inventors: Ryan Hickman, James J. Kuffner, Jr., James R. Bruce
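The color-selection step in this last abstract, choosing a color for a 2D texture location from simulated ray hits on the 3D image, each with an associated confidence level, could be sketched like this (the data layout and threshold are assumptions):

```python
# Hypothetical texel coloring: among the ray hits for one 2D location,
# take the color with the highest confidence, rejecting low-confidence hits.
def texel_color(ray_hits, min_confidence=0.5):
    """ray_hits: list of {"color": tuple, "confidence": float}.
    Returns the most confident color, or None if no hit is confident enough."""
    best = max(ray_hits, key=lambda h: h["confidence"], default=None)
    if best is None or best["confidence"] < min_confidence:
        return None
    return best["color"]
```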