Patents by Inventor Sam Zapolsky
Sam Zapolsky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230105746
Abstract: In variants, a method for robot control can include: receiving sensor data of a scene, modeling the physical objects within the scene, determining a set of potential grasp configurations for grasping a physical object within the scene, determining a reach behavior based on the potential grasp configuration, determining a trajectory for the reach behavior, and grasping the object using the trajectory.
Type: Application
Filed: December 7, 2022
Publication date: April 6, 2023
Inventors: Evan Drumwright, Sam Zapolsky
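As an illustration only (not code from the patent), the pipeline this abstract describes — enumerate candidate grasp configurations for a modeled object, pick one, and plan a reach trajectory to it — could be sketched as follows; the names `Grasp`, `plan_grasp`, the center-offset candidates, and the scoring heuristic are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Grasp:
    position: tuple   # where the gripper would close, in world coordinates
    quality: float    # heuristic score, higher is better

def plan_grasp(object_center, candidate_offsets):
    """Toy version of the abstract's pipeline: enumerate candidate grasp
    configurations around a modeled object, score them, and return the
    best one plus a straight-line reach trajectory toward it."""
    candidates = [
        Grasp(
            position=tuple(c + o for c, o in zip(object_center, off)),
            # score candidates closer to the object's center higher
            quality=1.0 / (1.0 + sum(abs(o) for o in off)),
        )
        for off in candidate_offsets
    ]
    best = max(candidates, key=lambda g: g.quality)
    # "trajectory" here = evenly spaced waypoints from the origin to the grasp pose
    steps = 4
    trajectory = [
        tuple(p * t / steps for p in best.position) for t in range(1, steps + 1)
    ]
    return best, trajectory
```

A real system would model object geometry from sensor data and plan collision-free trajectories; this sketch only shows the enumerate-score-select shape of the method.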
-
Patent number: 11548152
Abstract: A system comprises a database; at least one hardware processor coupled with the database; and one or more software modules that, when executed by the at least one hardware processor: receive at least one of sensory data from a robot and images from a camera; identify and build models of objects in an environment, wherein the model encompasses immutable properties of identified objects including mass and geometry, and wherein the geometry is assumed not to change; estimate the state, including position, orientation, and velocity, of the identified objects; determine, based on the state and model, potential configurations, or pre-grasp poses, for grasping the identified objects and return multiple grasping configurations per identified object; determine an object to be picked based on a quality metric; translate the pre-grasp poses into behaviors that define motor forces and torques; and communicate the motor forces and torques to the robot.
Type: Grant
Filed: July 30, 2020
Date of Patent: January 10, 2023
Assignee: Dextrous Robotics, Inc.
Inventors: Evan Drumwright, Sam Zapolsky
-
Patent number: 11548154
Abstract: A method includes providing a virtual representation of an environment of a robot, the virtual representation including an object representation of an object in the environment. The method further includes receiving manipulation input from a user to teleoperate the robot for manipulation of the object. The method also includes alerting the user to an alignment dimension based upon the manipulation input, receiving confirmation input from the user to engage the alignment dimension, and constraining at least one dimension of movement of the object according to the alignment dimension.
Type: Grant
Filed: October 12, 2018
Date of Patent: January 10, 2023
Assignee: Toyota Research Institute, Inc.
Inventors: Allison Thackston, Sam Zapolsky, Katarina Bouma, Laura Stelzner, Ron Goldman
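The constraining step in this abstract resembles projecting the user's commanded motion onto an engaged alignment dimension, zeroing the constrained components. A minimal sketch (not the patented implementation), assuming a unit-length `alignment_axis` and a Cartesian displacement `delta`, both hypothetical names:

```python
def constrain_motion(delta, alignment_axis):
    """Project a commanded displacement onto an engaged alignment axis,
    discarding motion along the constrained dimensions.

    delta          -- commanded (dx, dy, dz) displacement from the user
    alignment_axis -- unit vector of the one dimension left unconstrained
    """
    # component of the command along the allowed axis
    dot = sum(d * a for d, a in zip(delta, alignment_axis))
    # rebuild the displacement with only that component
    return tuple(dot * a for a in alignment_axis)
```

For example, with the alignment dimension engaged along z, any sideways component of the teleoperation command is dropped and only vertical motion of the object remains.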
-
Publication number: 20220281120
Abstract: A robot comprising: a chopstick, configured for at least four degrees of freedom of movement, a stiff body of shape and proportions approximate to a pool cue; an electromagnetic actuator, comprising a motor, for each degree of freedom of movement coupled with the stiff body, wherein the functional mapping from each actuator's motor current to torque output along an axis of motion is stored, and used in concert with a calibrated model of the robot for effective impedance control; and a 6-axis force/torque sensor mounted inline between the actuators and each chopstick.
Type: Application
Filed: July 30, 2020
Publication date: September 8, 2022
Inventors: Evan Drumwright, Sam Zapolsky, Doug Schwandt, Jason Cortell
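To illustrate the two ideas the abstract combines — a stored current-to-torque mapping per actuator, used inside an impedance controller — here is a toy single-axis sketch. It is not from the patent: the linear torque constant `k_t` and the spring-damper control law are common simplifying assumptions, not the patented mapping or calibrated model:

```python
def motor_torque(current, k_t):
    """Stored functional mapping from motor current to joint torque.
    A linear torque constant k_t is assumed here purely for illustration;
    the patent stores a per-actuator mapping along each axis of motion."""
    return k_t * current

def impedance_command(q, q_dot, q_des, stiffness, damping):
    """One axis of impedance control: torque proportional to position
    error, minus a damping term on the joint velocity."""
    return stiffness * (q_des - q) - damping * q_dot
```

With the mapping inverted (`current = torque / k_t`), the controller's desired torque can be converted to a motor-current command, which is the role the stored mapping plays in the abstract.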
-
Publication number: 20220258355
Abstract: A system comprises a database; at least one hardware processor coupled with the database; and one or more software modules that, when executed by the at least one hardware processor, receive at least one of sensory data from a robot and images from a camera, identify and build models of objects in an environment, wherein the model encompasses immutable properties of identified objects including mass and geometry, and wherein the geometry is assumed not to change, estimate the state including position, orientation, and velocity, of the identified objects, determine based on the state and model, potential configurations, or pre-grasp poses, for grasping the identified objects and return multiple grasping configurations per identified object, determine an object to be picked based on a quality metric, translate the pre-grasp poses into behaviors that define motor forces and torques, communicate the motor forces and torques to the robot in order to allow the robot to perform a complex behavior generated from the …
Type: Application
Filed: July 30, 2020
Publication date: August 18, 2022
Inventors: Evan Drumwright, Sam Zapolsky
-
Patent number: 11192253
Abstract: A method includes providing a virtual representation of an environment of a robot. The virtual representation includes an object representation of an object in the environment. The method further includes determining an attribute of the object within the environment of the robot. The attribute includes at least one of occupancy data, force data, and deformation data pertaining to the object. The method further includes receiving a user command to control the robot to move with respect to the object, and modifying a received user command pertaining to the representation of the object based upon the attribute.
Type: Grant
Filed: October 12, 2018
Date of Patent: December 7, 2021
Assignee: Toyota Research Institute, Inc.
Inventors: Allison Thackston, Sam Zapolsky, Katarina Bouma, Laura Stelzner, Ron Goldman
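One way to picture "modifying a received user command based upon the attribute" is a modifier that scales the command using a force attribute of the object. The sketch below is hypothetical (the scaling policy and the names `modify_command` and `max_force` are illustrative assumptions, not the patented method):

```python
def modify_command(command_velocity, attribute_force, max_force):
    """Toy attribute-based command modifier: scale a commanded velocity
    down as the force attribute on the object approaches a limit,
    reaching zero at or beyond the limit."""
    scale = max(0.0, 1.0 - attribute_force / max_force)
    return command_velocity * scale
```

Occupancy or deformation attributes could feed a similar modifier, e.g. slowing motion commanded into occupied space or toward a deformable object.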
-
Patent number: 11110603
Abstract: A method includes detecting an object in a real environment of a robot. The method further includes inferring an expected property of the object based upon a representation of the object within a representation of the real environment of the robot. The method also includes sensing, via a sensor of the robot, a presently-detected property of the object in the real environment corresponding to the expected property. The method still further includes detecting a conflict between the expected property of the object and the presently-detected property of the object.
Type: Grant
Filed: October 2, 2018
Date of Patent: September 7, 2021
Assignee: Toyota Research Institute, Inc.
Inventors: Allison Thackston, Sam Zapolsky, Katarina Bouma, Laura Stelzner, Ron Goldman
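For a scalar property (say, an object's expected mass versus the mass implied by a force sensor), the conflict-detection step can be pictured as a simple tolerance check. A minimal sketch under that assumption; the function name and tolerance policy are illustrative, not from the patent:

```python
def detect_conflict(expected, sensed, tolerance):
    """Flag a conflict when a presently-sensed scalar property deviates
    from the expected property by more than the given tolerance."""
    return abs(sensed - expected) > tolerance
```

Non-scalar properties (geometry, pose) would need richer comparisons, but the expected-versus-sensed structure is the same.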
-
Patent number: 11027430
Abstract: A method includes presenting a virtual representation of an environment of a robot, receiving a first user command to control the robot within the environment, rendering a predicted version of the virtual representation during a period of latency in which current data pertaining to the environment of the robot is not available, updating the predicted version of the virtual representation based upon a second user command received during the period of latency, and upon conclusion of the period of latency, reconciling the predicted version of the virtual representation with current data pertaining to the environment of the robot.
Type: Grant
Filed: October 12, 2018
Date of Patent: June 8, 2021
Assignee: Toyota Research Institute, Inc.
Inventors: Allison Thackston, Sam Zapolsky, Katarina Bouma, Laura Stelzner, Ron Goldman
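The predict-then-reconcile structure in this abstract is the same shape as client-side prediction: dead-reckon the representation forward while fresh data is unavailable, then correct it when data arrives. A toy sketch (constant-velocity prediction and a blend-based correction are illustrative assumptions, not the patented technique):

```python
def predict_state(last_state, velocity, dt):
    """Dead-reckon the environment representation forward during the
    period of latency, assuming constant velocity."""
    return tuple(s + v * dt for s, v in zip(last_state, velocity))

def reconcile(predicted, current, blend=1.0):
    """When current data arrives, correct the prediction. blend=1.0
    snaps fully to the current data; smaller values ease toward it to
    avoid a visible jump in the rendered representation."""
    return tuple(p + blend * (c - p) for p, c in zip(predicted, current))
```

User commands received during the latency window would update the predicted state the same way, before the final reconciliation.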
-
Publication number: 20210031375
Abstract: A system comprises a database; at least one hardware processor coupled with the database; and one or more software modules that, when executed by the at least one hardware processor, receive at least one of sensory data from a robot and images from a camera, identify and build models of objects in an environment, wherein the model encompasses immutable properties of identified objects including mass and geometry, and wherein the geometry is assumed not to change, estimate the state including position, orientation, and velocity, of the identified objects, determine based on the state and model, potential configurations, or pre-grasp poses, for grasping the identified objects and return multiple grasping configurations per identified object, determine an object to be picked based on a quality metric, translate the pre-grasp poses into behaviors that define motor forces and torques, communicate the motor forces and torques to the robot in order to allow the robot to perform a complex behavior generated from the …
Type: Application
Filed: July 30, 2020
Publication date: February 4, 2021
Inventors: Evan Drumwright, Sam Zapolsky
-
Publication number: 20210031373
Abstract: A robot comprising a chopstick, configured for at least four degrees of freedom of movement, a stiff body of shape and proportions approximate to a pool cue; an electromagnetic actuator, comprising a motor, for each degree of freedom of movement coupled with the stiff body, wherein the functional mapping from each actuator's motor current to torque output along an axis of motion is stored, and used in concert with a calibrated model of the robot for effective impedance control; and a 6-axis force/torque sensor mounted inline between the actuators and each chopstick.
Type: Application
Filed: July 30, 2020
Publication date: February 4, 2021
Inventors: Evan Drumwright, Sam Zapolsky, Doug Schwandt, Jason Cortell
-
Publication number: 20200114513
Abstract: A method includes presenting a virtual representation of an environment of a robot, receiving a first user command to control the robot within the environment, rendering a predicted version of the virtual representation during a period of latency in which current data pertaining to the environment of the robot is not available, updating the predicted version of the virtual representation based upon a second user command received during the period of latency, and upon conclusion of the period of latency, reconciling the predicted version of the virtual representation with current data pertaining to the environment of the robot.
Type: Application
Filed: October 12, 2018
Publication date: April 16, 2020
Applicant: Toyota Research Institute, Inc.
Inventors: Allison Thackston, Sam Zapolsky, Katarina Bouma, Laura Stelzner, Ron Goldman
-
Publication number: 20200114515
Abstract: A method includes providing a virtual representation of an environment of a robot. The virtual representation includes an object representation of an object in the environment. The method further includes determining an attribute of the object within the environment of the robot. The attribute includes at least one of occupancy data, force data, and deformation data pertaining to the object. The method further includes receiving a user command to control the robot to move with respect to the object, and modifying a received user command pertaining to the representation of the object based upon the attribute.
Type: Application
Filed: October 12, 2018
Publication date: April 16, 2020
Applicant: Toyota Research Institute, Inc.
Inventors: Allison Thackston, Sam Zapolsky, Katarina Bouma, Laura Stelzner, Ron Goldman
-
Publication number: 20200114514
Abstract: A method includes providing a virtual representation of an environment of a robot, the virtual representation including an object representation of an object in the environment. The method further includes receiving manipulation input from a user to teleoperate the robot for manipulation of the object. The method also includes alerting the user to an alignment dimension based upon the manipulation input, receiving confirmation input from the user to engage the alignment dimension, and constraining at least one dimension of movement of the object according to the alignment dimension.
Type: Application
Filed: October 12, 2018
Publication date: April 16, 2020
Applicant: Toyota Research Institute, Inc.
Inventors: Allison Thackston, Sam Zapolsky, Katarina Bouma, Laura Stelzner, Ron Goldman
-
Publication number: 20200101610
Abstract: A method includes detecting an object in a real environment of a robot. The method further includes inferring an expected property of the object based upon a representation of the object within a representation of the real environment of the robot. The method also includes sensing, via a sensor of the robot, a presently-detected property of the object in the real environment corresponding to the expected property. The method still further includes detecting a conflict between the expected property of the object and the presently-detected property of the object.
Type: Application
Filed: October 2, 2018
Publication date: April 2, 2020
Applicant: Toyota Research Institute, Inc.
Inventors: Allison Thackston, Sam Zapolsky, Katarina Bouma, Laura Stelzner, Ron Goldman