Patents by Inventor Stefanie Tellex
Stefanie Tellex has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240153230
Abstract: A method includes, in an automated machine equipped with one or more camera-based object detectors: receiving human-provided information, or information inferred from point cloud observations, regarding target locations; maintaining information states regarding the target locations through a probability distribution structured as an octree; initializing the information states based on point cloud observations; updating the information states based on object detection observations or point cloud observations; determining search region occupancy by constructing an octree-based occupancy grid from point cloud observations; and using ray tracing to determine visibility at three-dimensional locations within the search region.
Type: Application
Filed: November 2, 2023
Publication date: May 9, 2024
Inventors: Kaiyu Zheng, Stefanie Tellex
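The occupancy-and-visibility idea in this abstract can be sketched minimally. The snippet below is illustrative only: a flat voxel set stands in for the patent's octree-based occupancy grid, the ray is sampled with a simple step marcher rather than an exact octree traversal, and all function names are assumptions (non-negative coordinates are also assumed).

```python
import math

# A flat voxel set stands in for the patent's octree-based occupancy grid.
def occupied_voxels_from_points(points, voxel_size=1.0):
    """Build an occupancy set by snapping point-cloud points to voxel coords."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

def ray_voxels(origin, target):
    """Voxels crossed walking from origin toward target (simple step sampler)."""
    dist = math.dist(origin, target)
    steps = max(1, int(dist * 4))  # oversample so voxels are unlikely to be skipped
    voxels = []
    for i in range(1, steps):
        t = i / steps
        v = tuple(int(o + (g - o) * t) for o, g in zip(origin, target))
        if not voxels or v != voxels[-1]:
            voxels.append(v)
    return voxels

def visible(origin, target, occupied):
    """A target location is visible if no occupied voxel blocks the ray to it."""
    tgt = tuple(int(c) for c in target)
    for v in ray_voxels(origin, target):
        if v == tgt:
            break
        if v in occupied:
            return False
    return True
```

For example, a point at (2.3, 0.4, 0.1) occupies voxel (2, 0, 0) and blocks a straight ray along the x-axis, while a ray angled away from it passes.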
-
Patent number: 11897140
Abstract: A system and method of operating a mobile robot to perform tasks includes representing a task in an Object-Oriented Partially Observable Markov Decision Process model having at least one belief pertaining to a state and at least one observation space within an environment, wherein the state is represented in terms of classes and objects and each object has at least one attribute and a semantic label. The method further includes receiving a language command identifying a target object and a location corresponding to the target object, updating the belief associated with the target object based on the language command, driving the mobile robot to the observation space identified in the updated belief, searching the updated observation space for each instance of the target object, and providing notification upon completing the task. In an embodiment, the task is a multi-object search task.
Type: Grant
Filed: September 27, 2019
Date of Patent: February 13, 2024
Assignee: Brown University
Inventors: Arthur Richard Wandzel, Stefanie Tellex
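The language-conditioned belief update described here can be sketched in a few lines. This is a toy stand-in, not the patent's OO-POMDP: the belief is a flat dict from candidate location to probability, and the command model is a simple likelihood boost with an assumed `reliability` parameter.

```python
# Toy Bayes update of a location belief given a spoken/typed location,
# standing in for the OO-POMDP belief update in the abstract.
def language_update(belief, commanded_location, reliability=0.9):
    """Treat the command as evidence that the target is at the commanded
    location with probability `reliability`, spread the rest uniformly."""
    n = len(belief)
    posterior = {}
    for loc, prior in belief.items():
        # Likelihood of hearing this command given the object is at loc.
        like = reliability if loc == commanded_location else (1 - reliability) / (n - 1)
        posterior[loc] = like * prior
    z = sum(posterior.values())
    return {loc: p / z for loc, p in posterior.items()}
```

Starting from a uniform belief over three rooms, the command "kitchen" concentrates 90% of the mass there; the robot would then drive to and search the highest-probability observation space.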
-
Patent number: 11847841
Abstract: A method includes, as a robot encounters an object, creating a probabilistic object model to identify, localize, and manipulate the object, the probabilistic object model using light fields to enable efficient inference for object detection and localization while incorporating information from every pixel observed from across multiple camera locations.
Type: Grant
Filed: October 18, 2018
Date of Patent: December 19, 2023
Assignee: Brown University
Inventors: Stefanie Tellex, John Oberlin
-
Patent number: 11383386
Abstract: A method includes providing a robot, providing an image of drawn handwritten characters to the robot, enabling the robot to capture a bitmapped image of the image of drawn handwritten characters, enabling the robot to infer a plan to replicate the image with a writing utensil, and enabling the robot to reproduce the image.
Type: Grant
Filed: October 3, 2019
Date of Patent: July 12, 2022
Assignee: Brown University
Inventors: Stefanie Tellex, Atsunobu Kotani
-
Publication number: 20220058842
Abstract: A method of representing a space of handwriting stroke styles includes representing writer-, character-, and writer-character-level style variations within a recurrent neural network (RNN) model using decoupled style descriptors (DSD) that model the style variations such that character style variations depend on writer style.
Type: Application
Filed: August 23, 2021
Publication date: February 24, 2022
Inventors: Atsunobu Kotani, Stefanie Tellex, James Tompkin
-
Publication number: 20220032468
Abstract: A method includes providing a robot, providing an image of drawn handwritten characters to the robot, enabling the robot to capture a bitmapped image of the image of drawn handwritten characters, enabling the robot to infer a plan to replicate the image with a writing utensil, and enabling the robot to reproduce the image.
Type: Application
Filed: October 3, 2019
Publication date: February 3, 2022
Inventors: Stefanie Tellex, Atsunobu Kotani
-
Publication number: 20210347046
Abstract: A system and method of operating a mobile robot to perform tasks includes representing a task in an Object-Oriented Partially Observable Markov Decision Process model having at least one belief pertaining to a state and at least one observation space within an environment, wherein the state is represented in terms of classes and objects and each object has at least one attribute and a semantic label. The method further includes receiving a language command identifying a target object and a location corresponding to the target object, updating the belief associated with the target object based on the language command, driving the mobile robot to the observation space identified in the updated belief, searching the updated observation space for each instance of the target object, and providing notification upon completing the task. In an embodiment, the task is a multi-object search task.
Type: Application
Filed: September 27, 2019
Publication date: November 11, 2021
Inventors: Arthur Richard Wandzel, Stefanie Tellex
-
Patent number: 11086938
Abstract: A system includes a robot having a module that includes a function for mapping natural language commands of varying complexities to reward functions at different levels of abstraction within a hierarchical planning framework, the function including using a deep neural network language model that learns how to map the natural language commands to reward functions at an appropriate level of the hierarchical planning framework.
Type: Grant
Filed: March 2, 2020
Date of Patent: August 10, 2021
Assignee: Brown University
Inventors: Stefanie Tellex, Dilip Arumugam, Siddharth Karamcheti, Nakul Gopalan, Lawson L. S. Wong
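The shape of this command-to-reward mapping can be illustrated with a toy sketch. Everything below is an assumption for illustration: the patent uses a learned deep language model to pick the abstraction level, which is replaced here by a keyword heuristic, and the two reward functions and state fields (`room`, `pose`) are invented.

```python
# Toy grounding of commands to reward functions at two abstraction levels.
# The patent's deep neural network language model is replaced by a keyword
# heuristic purely for illustration.
LEVELS = {
    # High level: reward reaching a named room.
    "high": lambda state: 1.0 if state["room"] == "kitchen" else 0.0,
    # Low level: reward reaching an exact grid pose.
    "low": lambda state: 1.0 if state["pose"] == (3, 4) else 0.0,
}

def ground(command):
    """Pick an abstraction level and a reward function for the command."""
    level = "low" if any(w in command for w in ("north", "south", "step")) else "high"
    return level, LEVELS[level]
```

A coarse command like "go to the kitchen" lands at the high level, while fine-grained motion language like "take three steps north" lands at the low level, so the planner can work at whichever granularity the command actually specifies.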
-
Patent number: 11034019
Abstract: A method includes enabling a robot to learn a mapping between English language commands and Linear Temporal Logic (LTL) expressions, wherein neural sequence-to-sequence learning models are employed to infer an LTL sequence corresponding to a given natural language command.
Type: Grant
Filed: April 18, 2019
Date of Patent: June 15, 2021
Assignee: Brown University
Inventors: Stefanie Tellex, Dilip Arumugam, Nakul Gopalan, Lawson L. S. Wong
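To show what the target representation buys, here is a minimal evaluator for LTL formulas over finite traces; the neural sequence-to-sequence model that produces the formulas is not reproduced. The tuple encoding and proposition names are assumptions for illustration, and only the F (eventually), G (globally), and & (and) operators are covered.

```python
# Minimal finite-trace semantics for LTL formulas of the kind such a
# model would emit. Formulas are nested tuples, e.g. the command
# "go to the red room, then the blue room" maps roughly to
# ("F", ("&", "red", ("F", "blue"))). A trace is a list of sets of
# propositions that hold at each time step.
def holds(formula, trace, i=0):
    if isinstance(formula, str):          # atomic proposition
        return formula in trace[i]
    op = formula[0]
    if op == "F":                         # eventually: true at some j >= i
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "G":                         # globally: true at every j >= i
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "&":                         # conjunction at the current step
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    raise ValueError(f"unknown operator: {op}")
```

A trace that visits red before blue satisfies the formula above, while one that visits blue first does not, which is exactly the ordering constraint plain goal states cannot express.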
-
Publication number: 20200368899
Abstract: A method includes, as a robot encounters an object, creating a probabilistic object model to identify, localize, and manipulate the object, the probabilistic object model using light fields to enable efficient inference for object detection and localization while incorporating information from every pixel observed from across multiple camera locations.
Type: Application
Filed: October 18, 2018
Publication date: November 26, 2020
Inventors: Stefanie Tellex, John Oberlin
-
Patent number: 10766145
Abstract: A robot includes a gripping member configured to move and pick up an object, a camera affixed to the gripping member such that movement of the gripping member causes movement of the camera, the camera configured to measure and store data related to the intensity and direction of light rays within an environment, an image processing module configured to process the data to generate a probabilistic model defining a location of the object within the environment, and an operation module configured to move the gripping member to the location and pick up the object.
Type: Grant
Filed: April 13, 2018
Date of Patent: September 8, 2020
Assignee: Brown University
Inventors: John Oberlin, Stefanie Tellex
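The core of such a probabilistic location model is fusing evidence from many camera poses. The sketch below assumes each wrist-camera view yields a detection likelihood per candidate grid cell (the dict representation and function names are invented); the patent's light-field machinery itself is not shown.

```python
# Fuse per-view detection likelihoods into a posterior over candidate
# object locations, then pick the cell to send the gripper to.
def fuse_views(view_likelihoods):
    """Multiply per-view likelihoods cell-wise and normalize
    (independent views, uniform prior)."""
    cells = view_likelihoods[0].keys()
    posterior = {c: 1.0 for c in cells}
    for view in view_likelihoods:
        for c in cells:
            posterior[c] *= view[c]
    z = sum(posterior.values())
    return {c: p / z for c, p in posterior.items()}

def best_cell(posterior):
    """Most probable location: where the operation module would grasp."""
    return max(posterior, key=posterior.get)
```

Because every view multiplies into the posterior, weak but consistent evidence across many gripper positions can dominate a single noisy detection.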
-
Publication number: 20200201914
Abstract: A system includes a robot having a module that includes a function for mapping natural language commands of varying complexities to reward functions at different levels of abstraction within a hierarchical planning framework, the function including using a deep neural network language model that learns how to map the natural language commands to reward functions at an appropriate level of the hierarchical planning framework.
Type: Application
Filed: March 2, 2020
Publication date: June 25, 2020
Inventors: Stefanie Tellex, Dilip Arumugam, Siddharth Karamcheti, Nakul Gopalan, Lawson L. S. Wong
-
Patent number: 10606898
Abstract: A system includes a robot having a module that includes a function for mapping natural language commands of varying complexities to reward functions at different levels of abstraction within a hierarchical planning framework, the function including using a deep neural network language model that learns how to map the natural language commands to reward functions at an appropriate level of the hierarchical planning framework.
Type: Grant
Filed: April 19, 2018
Date of Patent: March 31, 2020
Assignee: Brown University
Inventors: Stefanie Tellex, Dilip Arumugam, Siddharth Karamcheti, Nakul Gopalan, Lawson L. S. Wong
-
Publication number: 20200023514
Abstract: A method includes enabling a robot to learn a mapping between English language commands and Linear Temporal Logic (LTL) expressions, wherein neural sequence-to-sequence learning models are employed to infer an LTL sequence corresponding to a given natural language command.
Type: Application
Filed: April 18, 2019
Publication date: January 23, 2020
Inventors: Stefanie Tellex, Dilip Arumugam, Nakul Gopalan, Lawson L. S. Wong
-
Publication number: 20180307779
Abstract: A system includes a robot having a module that includes a function for mapping natural language commands of varying complexities to reward functions at different levels of abstraction within a hierarchical planning framework, the function including using a deep neural network language model that learns how to map the natural language commands to reward functions at an appropriate level of the hierarchical planning framework.
Type: Application
Filed: April 19, 2018
Publication date: October 25, 2018
Applicant: Brown University
Inventors: Stefanie Tellex, Dilip Arumugam, Siddharth Karamcheti, Nakul Gopalan, Lawson L. S. Wong
-
Publication number: 20180297215
Abstract: A robot includes a gripping member configured to move and pick up an object, a camera affixed to the gripping member such that movement of the gripping member causes movement of the camera, the camera configured to measure and store data related to the intensity and direction of light rays within an environment, an image processing module configured to process the data to generate a probabilistic model defining a location of the object within the environment, and an operation module configured to move the gripping member to the location and pick up the object.
Type: Application
Filed: April 13, 2018
Publication date: October 18, 2018
Applicant: Brown University
Inventors: John Oberlin, Stefanie Tellex