Patents by Inventor Ethan Rublee
Ethan Rublee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240221202
Abstract: Methods and systems for real time localization of a device using image data are disclosed herein. A disclosed method for localizing a device with respect to a known object includes capturing an image of at least a portion of the known object. The method also includes determining, using a trained machine intelligence system and the image, a set of known object coordinates for a set of known object pixels of the image. The set of known object coordinates: are outputs of the trained machine intelligence system; are in a frame of reference of the object; and encode each object coordinate in the set of known object coordinates using an encoding with at least two values per object coordinate. The method also includes localizing the device with respect to the known object using the set of known object coordinates from the trained machine intelligence system.
Type: Application
Filed: April 29, 2022
Publication date: July 4, 2024
Inventor: Ethan Rublee
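The abstract does not specify the two-value encoding, only that each object coordinate uses at least two values. One hypothetical scheme consistent with that wording is a coarse/fine split, sketched below; the function names and bin count are assumptions, not from the patent. The decoded per-pixel object coordinates, paired with their pixel locations, could then feed a standard 2D-3D pose solve to localize the device.

```python
def encode_coordinate(x, scale=256):
    """Split a normalized coordinate in [0, 1) into two values:
    a coarse bin index and a fine position within that bin."""
    coarse = int(x * scale)       # which of `scale` bins the coordinate falls in
    fine = x * scale - coarse     # position inside that bin, in [0, 1)
    return coarse, fine

def decode_coordinate(coarse, fine, scale=256):
    """Recover the original coordinate from the two-value encoding."""
    return (coarse + fine) / scale

# Round-trip check on a sample object coordinate.
c, f = encode_coordinate(0.3712)
assert abs(decode_coordinate(c, f) - 0.3712) < 1e-9
```

A two-value encoding like this lets a network predict a discrete class (the bin) plus a small residual, which is often easier to learn than a single unbounded regression target.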
-
Publication number: 20240199152
Abstract: Provided is a wheel assembly including a housing having a top side, an attachment arrangement on the top side of the housing and configured to be removably attached to an external support, a wheel supported by the housing and arranged below the top side of the housing, a motor supported by the housing and configured to move the wheel, a network interface supported by the housing, and a controller in communication with the motor and the network interface, the controller configured to control the motor based on at least one command received via the network interface. A modular vehicle system is also described.
Type: Application
Filed: February 27, 2024
Publication date: June 20, 2024
Inventors: Ethan Rublee, Matthew Bitterman
-
Patent number: 11775788
Abstract: Systems and methods for registering arbitrary visual features for use as fiducial elements are disclosed. An example method includes aligning a geometric reference object and a visual feature and capturing an image of the reference object and feature. The method also includes identifying, in the image of the object and the visual feature, a set of at least four non-colinear feature points in the visual feature. The method also includes deriving, from the image, a coordinate system using the geometric object. The method also comprises providing a set of measures to each of the points in the set of at least four non-colinear feature points using the coordinate system. The measures can then be saved in a memory to represent the registered visual feature and serve as the basis for using the registered visual feature as a fiducial element.
Type: Grant
Filed: April 30, 2021
Date of Patent: October 3, 2023
Assignee: Matterport, Inc.
Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
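The "set of measures" assigned to each feature point can be pictured as the point's coordinates in a frame derived from the geometric reference object. A minimal sketch, assuming the reference object supplies an origin and two edge direction vectors in the image (the specific frame construction is an illustration, not the patent's method):

```python
def to_reference_frame(point, origin, x_axis, y_axis):
    """Express a 2D image point in the coordinate system defined by a
    geometric reference object (an origin plus two edge vectors)."""
    dx, dy = point[0] - origin[0], point[1] - origin[1]
    # Solve [x_axis y_axis] * (u, v) = (dx, dy) by inverting the 2x2 basis.
    det = x_axis[0] * y_axis[1] - x_axis[1] * y_axis[0]
    u = (dx * y_axis[1] - dy * y_axis[0]) / det
    v = (dy * x_axis[0] - dx * x_axis[1]) / det
    return u, v

# A square reference object: origin at pixel (10, 10), 100-pixel sides.
corners = [(10, 10), (110, 10), (60, 60), (35, 85)]
measures = [to_reference_frame(p, (10, 10), (100, 0), (0, 100)) for p in corners]
assert measures[0] == (0.0, 0.0)
assert measures[1] == (1.0, 0.0)
assert measures[2] == (0.5, 0.5)
```

Storing at least four non-colinear points this way is what makes the registered feature usable as a fiducial later: four such correspondences determine a homography.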
-
Publication number: 20220331961
Abstract: Systems and methods are provided for specifying safety rules for robotic devices. A computing device can determine information about any actors present within a predetermined area of an environment. The computing device can determine a safety classification for the predetermined area based on the information. The safety classification can include: a low safety classification if the information indicates zero actors are present within the predetermined area, a medium safety classification if the information indicates any actors present within the predetermined area are all of a predetermined first type, and a high safety classification if the information indicates at least one actor present within the predetermined area is of a predetermined second type. After determining the safety classification for the predetermined area, the computing device can provide a safety rule for operating within the predetermined area to a robotic device operating in the environment.
Type: Application
Filed: June 24, 2022
Publication date: October 20, 2022
Inventors: Ethan Rublee, John Zevenbergen
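The three-way classification rule in the abstract maps directly onto a small decision function. A minimal sketch, where the "first type" is taken to be other robots and the "second type" humans (an illustrative assumption; the patent leaves the types abstract):

```python
def classify_area(actors, first_type="robot"):
    """Safety classification for a predetermined area, following the
    rules in the abstract: low if no actors are present, medium if all
    actors are of a first type, high if any actor is of another type."""
    if not actors:
        return "low"
    if all(a == first_type for a in actors):
        return "medium"
    return "high"

assert classify_area([]) == "low"
assert classify_area(["robot", "robot"]) == "medium"
assert classify_area(["robot", "human"]) == "high"
```

The returned classification would then select which safety rule (e.g. a speed cap or exclusion zone) is pushed to robots operating in that area.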
-
Patent number: 11436752
Abstract: Methods and systems for real time localization of a device using image data are disclosed herein. A disclosed method for localizing a device with respect to a known object includes capturing an image of at least a portion of the known object. The method also includes determining, using a trained machine intelligence system and the image, a set of known object coordinates for a set of known object pixels of the image. The set of known object coordinates: are outputs of the trained machine intelligence system; are in a frame of reference of the object; and encode each object coordinate in the set of known object coordinates using an encoding with at least two values per object coordinate. The method also includes localizing the device with respect to the known object using the set of known object coordinates from the trained machine intelligence system.
Type: Grant
Filed: April 29, 2022
Date of Patent: September 6, 2022
Assignee: farm-ng Inc.
Inventor: Ethan Rublee
-
Patent number: 11383380
Abstract: Example embodiments may relate to methods and systems for selecting a grasp point on an object. In particular, a robotic manipulator may identify characteristics of a physical object within a physical environment. Based on the identified characteristics, the robotic manipulator may determine potential grasp points on the physical object corresponding to points at which a gripper attached to the robotic manipulator is operable to grip the physical object. Subsequently, the robotic manipulator may determine a motion path for the gripper to follow in order to move the physical object to a drop-off location for the physical object and then select a grasp point, from the potential grasp points, based on the determined motion path. After selecting the grasp point, the robotic manipulator may grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location.
Type: Grant
Filed: November 18, 2019
Date of Patent: July 12, 2022
Assignee: Intrinsic Innovation LLC
Inventors: Gary Bradski, Steve Croft, Kurt Konolige, Ethan Rublee, Troy Straszheim, John Zevenbergen, Stefan Hinterstoisser, Hauke Strasdat
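The key idea of selecting a grasp point "based on the determined motion path" can be sketched as scoring each candidate by its path cost and picking the best. The straight-line distance to the drop-off location below is a stand-in for the path cost the abstract leaves unspecified:

```python
import math

def select_grasp_point(grasp_points, drop_off):
    """Pick the candidate grasp point whose motion path to the drop-off
    location is cheapest, using straight-line distance as a stand-in
    for a real planner's path cost."""
    return min(grasp_points, key=lambda p: math.dist(p, drop_off))

# Three candidate grasp points on an object, in workspace coordinates.
candidates = [(0.0, 0.0, 0.5), (1.0, 0.0, 0.5), (0.5, 1.0, 0.5)]
assert select_grasp_point(candidates, (1.0, 0.0, 0.0)) == (1.0, 0.0, 0.5)
```

In practice the cost would come from an actual motion planner (accounting for reachability and collisions), but the selection step itself is this minimization.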
-
Patent number: 11383382
Abstract: Systems and methods are provided for specifying safety rules for robotic devices. A computing device can determine information about any actors present within a predetermined area of an environment. The computing device can determine a safety classification for the predetermined area based on the information. The safety classification can include: a low safety classification if the information indicates zero actors are present within the predetermined area, a medium safety classification if the information indicates any actors present within the predetermined area are all of a predetermined first type, and a high safety classification if the information indicates at least one actor present within the predetermined area is of a predetermined second type. After determining the safety classification for the predetermined area, the computing device can provide a safety rule for operating within the predetermined area to a robotic device operating in the environment.
Type: Grant
Filed: February 23, 2021
Date of Patent: July 12, 2022
Assignee: INTRINSIC INNOVATION LLC
Inventors: Ethan Rublee, John Zevenbergen
-
Patent number: 11345037
Abstract: Systems and methods are provided for specifying safety rules for robotic devices. A computing device can determine information about any actors present within a predetermined area of an environment. The computing device can determine a safety classification for the predetermined area based on the information. The safety classification can include: a low safety classification if the information indicates zero actors are present within the predetermined area, a medium safety classification if the information indicates any actors present within the predetermined area are all of a predetermined first type, and a high safety classification if the information indicates at least one actor present within the predetermined area is of a predetermined second type. After determining the safety classification for the predetermined area, the computing device can provide a safety rule for operating within the predetermined area to a robotic device operating in the environment.
Type: Grant
Filed: February 23, 2021
Date of Patent: May 31, 2022
Assignee: INTRINSIC INNOVATION LLC
Inventors: Ethan Rublee, John Zevenbergen
-
Publication number: 20220058414
Abstract: Systems and methods for registering arbitrary visual features for use as fiducial elements are disclosed. An example method includes aligning a geometric reference object and a visual feature and capturing an image of the reference object and feature. The method also includes identifying, in the image of the object and the visual feature, a set of at least four non-colinear feature points in the visual feature. The method also includes deriving, from the image, a coordinate system using the geometric object. The method also comprises providing a set of measures to each of the points in the set of at least four non-colinear feature points using the coordinate system. The measures can then be saved in a memory to represent the registered visual feature and serve as the basis for using the registered visual feature as a fiducial element.
Type: Application
Filed: April 30, 2021
Publication date: February 24, 2022
Applicant: Matterport, Inc.
Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
-
Patent number: 11189031
Abstract: Methods and systems regarding importance sampling for the modification of a training procedure used to train a segmentation network are disclosed herein. A disclosed method includes segmenting an image using a trainable directed graph to generate a segmentation, displaying the segmentation, receiving a first selection directed to the segmentation, and modifying a training procedure for the trainable directed graph using the first selection. In a more specific method, the training procedure alters a set of trainable values associated with the trainable directed graph based on a delta between the segmentation and a ground truth segmentation, the first selection is spatially indicative with respect to the segmentation, and the delta is calculated based on the first selection.
Type: Grant
Filed: May 14, 2019
Date of Patent: November 30, 2021
Assignee: Matterport, Inc.
Inventors: Gary Bradski, Ethan Rublee, Mona Fathollahi, Michael Tetelman, Ian Meeder, Varsha Vivek, William Nguyen
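One way to make the delta "calculated based on the first selection" concrete is to up-weight the per-pixel error inside the user-selected region, so training attends to the area the reviewer flagged. A minimal sketch under that assumption (the weighting scheme and boost factor are illustrative, not from the patent):

```python
def weighted_segmentation_loss(pred, truth, selection, boost=10.0):
    """Per-pixel delta between a predicted and ground-truth segmentation,
    with pixels inside a user selection up-weighted so the training
    procedure focuses on the spatially-indicated region."""
    total = 0.0
    for y, row in enumerate(pred):
        for x, p in enumerate(row):
            delta = abs(p - truth[y][x])              # simple per-pixel delta
            weight = boost if (x, y) in selection else 1.0
            total += weight * delta
    return total

pred  = [[0, 1], [1, 1]]
truth = [[0, 0], [1, 1]]
# The single wrong pixel (1, 0) lies inside the selection, so its
# contribution is boosted from 1.0 to 10.0.
assert weighted_segmentation_loss(pred, truth, {(1, 0)}) == 10.0
```

In a real training loop the weighted delta would drive the gradient update of the trainable values, which is exactly how the selection "modifies the training procedure."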
-
Patent number: 11080884
Abstract: A trained network for point tracking includes an input layer configured to receive an encoding of an image. The image is of a locale or object on which the network has been trained. The network also includes a set of internal weights which encode information associated with the locale or object, and a tracked point therein or thereon. The network also includes an output layer configured to provide an output based on the image as received at the input layer and the set of internal weights. The output layer includes a point tracking node that tracks the tracked point in the image. The point tracking node can track the point by generating coordinates for the tracked point in an input image of the locale or object. Methods of specifying and training the network using a three-dimensional model of the locale or object are also disclosed.
Type: Grant
Filed: May 15, 2019
Date of Patent: August 3, 2021
Assignee: Matterport, Inc.
Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
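The abstract says the point tracking node generates coordinates for the tracked point. One common way such a node can do this (an assumption for illustration; the patent also covers direct coordinate regression) is an argmax over a per-pixel score map:

```python
def track_point(heatmap):
    """Sketch of a point-tracking output node as an argmax over a
    per-pixel score map: returns the (x, y) of the highest score."""
    best = (0, 0)
    for y, row in enumerate(heatmap):
        for x, score in enumerate(row):
            if score > heatmap[best[1]][best[0]]:
                best = (x, y)
    return best

# A score map peaked where the network believes the tracked point is.
scores = [[0.1, 0.2, 0.1],
          [0.1, 0.9, 0.3],
          [0.0, 0.2, 0.1]]
assert track_point(scores) == (1, 1)
```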
-
Patent number: 11080861
Abstract: Systems and methods for frame and scene segmentation are disclosed herein. A disclosed method includes providing a frame of a scene. The scene includes a scene background. The method also includes providing a model of the scene background. The method also includes determining a frame background using the model and subtracting the frame background from the frame to obtain an approximate segmentation. The method also includes training a segmentation network using the approximate segmentation.
Type: Grant
Filed: May 14, 2019
Date of Patent: August 3, 2021
Assignee: Matterport, Inc.
Inventors: Gary Bradski, Ethan Rublee
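The subtraction step can be sketched directly: pixels where the frame differs from the background model beyond a threshold are marked foreground, and the resulting mask serves as an approximate training label. A minimal grayscale sketch (the threshold value is an assumption):

```python
def approximate_segmentation(frame, background, threshold=30):
    """Subtract a background model from a frame: pixels differing by
    more than `threshold` become foreground (1), the rest 0. The mask
    can serve as an approximate label for training a segmentation net."""
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frame_row, bg_row)]
            for frame_row, bg_row in zip(frame, background)]

frame      = [[100, 100, 200], [100, 210, 100]]
background = [[100, 100, 100], [100, 100, 100]]
assert approximate_segmentation(frame, background) == [[0, 0, 1], [0, 1, 0]]
```

The point of the method is that these cheap, imperfect masks supply training data at scale, letting the segmentation network learn to outperform the background-subtraction heuristic that produced them.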
-
Publication number: 20210197382
Abstract: Systems and methods are provided for specifying safety rules for robotic devices. A computing device can determine information about any actors present within a predetermined area of an environment. The computing device can determine a safety classification for the predetermined area based on the information. The safety classification can include: a low safety classification if the information indicates zero actors are present within the predetermined area, a medium safety classification if the information indicates any actors present within the predetermined area are all of a predetermined first type, and a high safety classification if the information indicates at least one actor present within the predetermined area is of a predetermined second type. After determining the safety classification for the predetermined area, the computing device can provide a safety rule for operating within the predetermined area to a robotic device operating in the environment.
Type: Application
Filed: February 23, 2021
Publication date: July 1, 2021
Inventors: Ethan Rublee, John Zevenbergen
-
Publication number: 20210187736
Abstract: Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. And based on the determined corresponding features, the computing device may determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
Type: Application
Filed: March 9, 2021
Publication date: June 24, 2021
Inventors: Gary Bradski, Kurt Konolige, Ethan Rublee
-
Patent number: 10997448
Abstract: Systems and methods for registering arbitrary visual features for use as fiducial elements are disclosed. An example method includes aligning a geometric reference object and a visual feature and capturing an image of the reference object and feature. The method also includes identifying, in the image of the object and the visual feature, a set of at least four non-colinear feature points in the visual feature. The method also includes deriving, from the image, a coordinate system using the geometric object. The method also comprises providing a set of measures to each of the points in the set of at least four non-colinear feature points using the coordinate system. The measures can then be saved in a memory to represent the registered visual feature and serve as the basis for using the registered visual feature as a fiducial element.
Type: Grant
Filed: May 15, 2019
Date of Patent: May 4, 2021
Assignee: Matterport, Inc.
Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen
-
Patent number: 10967506
Abstract: Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. And based on the determined corresponding features, the computing device may determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
Type: Grant
Filed: November 30, 2017
Date of Patent: April 6, 2021
Assignee: X Development LLC
Inventors: Gary Bradski, Kurt Konolige, Ethan Rublee
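The correspondence step between the two viewpoints can be sketched as block matching: slide a small window along the second view's scanline and pick the offset (disparity) with the lowest sum of absolute differences. Projected random texture is what makes these matches unambiguous even on otherwise featureless surfaces. A minimal 1D sketch under those assumptions:

```python
def best_disparity(left, right, x, window=1, max_disp=3):
    """Disparity at column `x` of a 1D left scanline, by minimizing the
    sum of absolute differences against candidate offsets in the right
    scanline. Disparity maps to depth via the stereo baseline."""
    def sad(d):
        return sum(abs(left[x + k] - right[x - d + k])
                   for k in range(-window, window + 1))
    return min(range(max_disp + 1), key=sad)

# A random-looking projected texture, seen shifted by 2 pixels in the
# second viewpoint.
left  = [5, 9, 1, 7, 3, 8, 2, 6]
right = [1, 7, 3, 8, 2, 6, 0, 0]
assert best_disparity(left, right, x=4) == 2
```

A real pipeline would run this per pixel in 2D with sub-pixel refinement, then convert each disparity to a depth measurement using the calibrated sensor geometry.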
-
Patent number: 10946524
Abstract: Systems and methods are provided for specifying safety rules for robotic devices. A computing device can determine information about any actors present within a predetermined area of an environment. The computing device can determine a safety classification for the predetermined area based on the information. The safety classification can include: a low safety classification if the information indicates zero actors are present within the predetermined area, a medium safety classification if the information indicates any actors present within the predetermined area are all of a predetermined first type, and a high safety classification if the information indicates at least one actor present within the predetermined area is of a predetermined second type. After determining the safety classification for the predetermined area, the computing device can provide a safety rule for operating within the predetermined area to a robotic device operating in the environment.
Type: Grant
Filed: August 6, 2020
Date of Patent: March 16, 2021
Assignee: X DEVELOPMENT LLC
Inventors: Ethan Rublee, John Zevenbergen
-
Publication number: 20200364873
Abstract: Methods and systems regarding importance sampling for the modification of a training procedure used to train a segmentation network are disclosed herein. A disclosed method includes segmenting an image using a trainable directed graph to generate a segmentation, displaying the segmentation, receiving a first selection directed to the segmentation, and modifying a training procedure for the trainable directed graph using the first selection. In a more specific method, the training procedure alters a set of trainable values associated with the trainable directed graph based on a delta between the segmentation and a ground truth segmentation, the first selection is spatially indicative with respect to the segmentation, and the delta is calculated based on the first selection.
Type: Application
Filed: May 14, 2019
Publication date: November 19, 2020
Applicant: Matterport, Inc.
Inventors: Gary Bradski, Ethan Rublee, Mona Fathollahi, Michael Tetelman, Ian Meeder, Varsha Vivek, William Nguyen
-
Publication number: 20200364877
Abstract: Systems and methods for frame and scene segmentation are disclosed herein. A disclosed method includes providing a frame of a scene. The scene includes a scene background. The method also includes providing a model of the scene background. The method also includes determining a frame background using the model and subtracting the frame background from the frame to obtain an approximate segmentation. The method also includes training a segmentation network using the approximate segmentation.
Type: Application
Filed: May 14, 2019
Publication date: November 19, 2020
Applicant: Matterport, Inc.
Inventors: Gary Bradski, Ethan Rublee
-
Publication number: 20200364895
Abstract: A trained network for point tracking includes an input layer configured to receive an encoding of an image. The image is of a locale or object on which the network has been trained. The network also includes a set of internal weights which encode information associated with the locale or object, and a tracked point therein or thereon. The network also includes an output layer configured to provide an output based on the image as received at the input layer and the set of internal weights. The output layer includes a point tracking node that tracks the tracked point in the image. The point tracking node can track the point by generating coordinates for the tracked point in an input image of the locale or object. Methods of specifying and training the network using a three-dimensional model of the locale or object are also disclosed.
Type: Application
Filed: May 15, 2019
Publication date: November 19, 2020
Applicant: Matterport, Inc.
Inventors: Gary Bradski, Gholamreza Amayeh, Mona Fathollahi, Ethan Rublee, Grace Vesom, William Nguyen