Patents by Inventor Lee Crippen
Lee Crippen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11173611
Abstract: This specification relates to robots and audio processing in robots. In general, one innovative aspect of the subject matter described in this specification can be embodied in a robot that includes: a body and one or more physically moveable components; a plurality of accessory input subsystems and one or more other sensor subsystems; one or more processors; and one or more storage devices storing instructions that are operable, when executed by the one or more processors, to cause the robot to perform operations. The operations can include: receiving one or more sensor inputs from the one or more other sensor subsystems; determining a predicted direction of a detected sound emitter based on the one or more sensor inputs of the one or more other sensor subsystems; calculating a spatial filter based on the predicted direction; obtaining, by the plurality of accessory input subsystems, respective audio inputs; and processing the respective audio inputs according to the calculated spatial filter.
Type: Grant
Filed: July 2, 2020
Date of Patent: November 16, 2021
Assignee: Digital Dream Labs, LLC
Inventors: Daniel Thomas Casner, Lee Crippen, Hanns W. Tappeiner, Kevin Yoon
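The pipeline this abstract describes (predict a direction from non-audio sensors, compute a spatial filter, apply it to the microphone inputs) can be sketched as a simple delay-and-sum beamformer. This is an illustrative sketch, not the patented implementation; the function names, the 2-D array geometry, and the use of delay-and-sum (rather than some other spatial filter) are all assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate at room temperature

def steering_delays(mic_positions, direction_deg):
    """Per-microphone arrival delays (seconds) for a plane wave arriving
    from direction_deg, with mics laid out in a 2-D plane."""
    theta = np.deg2rad(direction_deg)
    unit = np.array([np.cos(theta), np.sin(theta)])
    # Project each mic position onto the arrival direction.
    return mic_positions @ unit / SPEED_OF_SOUND

def delay_and_sum(signals, delays, sample_rate):
    """Spatial filter: shift each channel to cancel its steering delay,
    then average, reinforcing sound from the predicted direction."""
    out = np.zeros(signals.shape[1])
    for channel, delay in zip(signals, delays):
        shift = int(round(delay * sample_rate))
        out += np.roll(channel, -shift)
    return out / len(signals)
```

In this sketch the "predicted direction" from the other sensor subsystems arrives as `direction_deg`; signals aligned with that direction add coherently while off-axis sound partially cancels.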
-
Patent number: 10970527
Abstract: A robot that uses sensor inputs for attention activation and corresponding methods, systems, and computer programs encoded on computer storage media. The robot can be configured to compute a plurality of attention signals from sensor inputs and provide the plurality of attention signals as input to an attention level classifier to generate an attention level. If a user is paying attention to the robot based on the generated attention level, the robot selects a behavior to execute based on the current attention level, wherein a behavior comprises one or more coordinated actions to be performed by the robot.
Type: Grant
Filed: September 1, 2017
Date of Patent: April 6, 2021
Assignee: Digital Dream Labs, LLC
Inventors: Hanns W. Tappeiner, Brad Neuman, Andrew Neil Stein, Lee Crippen
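The two stages the abstract names (attention signals in, attention level out, then behavior selection) can be sketched minimally as a linear classifier plus a threshold. Everything here is hypothetical: the signal set, the weights, the logistic squashing, and the behavior names are placeholders, not details from the patent.

```python
import math

def attention_level(signals, weights, bias=0.0):
    """Illustrative attention-level classifier: a weighted sum of per-sensor
    attention signals (e.g. face detected, gaze, speech) squashed to [0, 1].
    A trained model would supply the weights; these are placeholders."""
    score = sum(w * s for w, s in zip(weights, signals)) + bias
    return 1.0 / (1.0 + math.exp(-score))

def select_behavior(level, threshold=0.5):
    """Map the attention level to a behavior, i.e. a named bundle of
    coordinated actions. Behavior names are hypothetical."""
    if level < threshold:
        return "idle"
    return "greet_user" if level < 0.8 else "engage_conversation"
```

With signals `[face, gaze, speech] = [1.0, 1.0, 0.0]` and weights `[2.0, 1.5, 1.0]` (bias `-1.0`), the level is about 0.92, so the sketch selects the high-attention behavior.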
-
Publication number: 20200331149
Abstract: This specification relates to robots and audio processing in robots. In general, one innovative aspect of the subject matter described in this specification can be embodied in a robot that includes: a body and one or more physically moveable components; a plurality of accessory input subsystems and one or more other sensor subsystems; one or more processors; and one or more storage devices storing instructions that are operable, when executed by the one or more processors, to cause the robot to perform operations. The operations can include: receiving one or more sensor inputs from the one or more other sensor subsystems; determining a predicted direction of a detected sound emitter based on the one or more sensor inputs of the one or more other sensor subsystems; calculating a spatial filter based on the predicted direction; obtaining, by the plurality of accessory input subsystems, respective audio inputs; and processing the respective audio inputs according to the calculated spatial filter.
Type: Application
Filed: July 2, 2020
Publication date: October 22, 2020
Applicant: Digital Dream Labs, LLC
Inventors: Daniel Thomas Casner, Lee Crippen, Hanns W. Tappeiner, Kevin Yoon
-
Patent number: 10766144
Abstract: This specification relates to robots and audio processing in robots. One aspect of the subject matter includes: a body and one or more physically moveable components; a plurality of microphones; one or more processors; and one or more storage devices storing instructions that are operable, when executed by the one or more processors, to cause the robot to perform operations. The operations can include: obtaining map data of an environment of the robot; selecting a test location from the map data; navigating to the selected test location; receiving a sound wave propagating through the environment of the robot; computing one or more acoustic transfer functions for the test signal, wherein each acoustic transfer function represents how the test signal was transformed by the environment of the robot during its propagation through the environment of the robot; and storing each of the transfer functions in association with the test location.
Type: Grant
Filed: March 16, 2018
Date of Patent: September 8, 2020
Assignee: Digital Dream Labs, LLC
Inventors: Daniel Thomas Casner, Lee Crippen, Hanns W. Tappeiner, Anthony Armenta, Kevin Yoon
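The core operation here (estimate how the environment transformed a known test signal, then store the result keyed by map location) can be sketched as a frequency-domain ratio. The division-based estimator, the `AcousticMap` class, and the location keys are assumptions for illustration, not the patent's method.

```python
import numpy as np

def estimate_transfer_function(played, recorded, eps=1e-12):
    """Estimate H(f) = Y(f) / X(f): how the environment transformed a known
    test signal between emission and what the microphone received."""
    X = np.fft.rfft(played)
    Y = np.fft.rfft(recorded)
    return Y / (X + eps)  # eps guards against division by zero

class AcousticMap:
    """Transfer functions keyed by test locations drawn from the robot's map."""
    def __init__(self):
        self._tfs = {}

    def store(self, location, tf):
        self._tfs[location] = tf

    def lookup(self, location):
        return self._tfs[location]
```

For example, if the "environment" simply halves a unit-impulse test signal, the estimated transfer function has magnitude 0.5 at every frequency.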
-
Patent number: 10717197
Abstract: This specification relates to robots and audio processing in robots. In general, one innovative aspect of the subject matter described in this specification can be embodied in a robot that includes: a body and one or more physically moveable components; a plurality of microphones and one or more other sensor subsystems; one or more processors; and one or more storage devices storing instructions that are operable, when executed by the one or more processors, to cause the robot to perform operations. The operations can include: receiving one or more sensor inputs from the one or more other sensor subsystems; determining a predicted direction of a detected sound emitter based on the one or more sensor inputs of the one or more other sensor subsystems; calculating a spatial filter based on the predicted direction; obtaining, by the plurality of microphones, respective audio inputs; and processing the respective audio inputs according to the calculated spatial filter.
Type: Grant
Filed: March 16, 2018
Date of Patent: July 21, 2020
Assignee: Digital Dream Labs, LLC
Inventors: Daniel Thomas Casner, Lee Crippen, Hanns W. Tappeiner, Kevin Yoon
-
Publication number: 20190210227
Abstract: This specification relates to robots and audio processing in robots. In general, one innovative aspect of the subject matter described in this specification can be embodied in a robot that includes: a body and one or more physically moveable components; a plurality of microphones and one or more other sensor subsystems; one or more processors; and one or more storage devices storing instructions that are operable, when executed by the one or more processors, to cause the robot to perform operations. The operations can include: receiving one or more sensor inputs from the one or more other sensor subsystems; determining a predicted direction of a detected sound emitter based on the one or more sensor inputs of the one or more other sensor subsystems; calculating a spatial filter based on the predicted direction; obtaining, by the plurality of microphones, respective audio inputs; and processing the respective audio inputs according to the calculated spatial filter.
Type: Application
Filed: March 16, 2018
Publication date: July 11, 2019
Inventors: Daniel Thomas Casner, Lee Crippen, Hanns W. Tappeiner, Kevin Yoon
-
Publication number: 20190212441
Abstract: This specification relates to robots and audio processing in robots. One aspect of the subject matter includes: a body and one or more physically moveable components; a plurality of microphones; one or more processors; and one or more storage devices storing instructions that are operable, when executed by the one or more processors, to cause the robot to perform operations. The operations can include: obtaining map data of an environment of the robot; selecting a test location from the map data; navigating to the selected test location; receiving a sound wave propagating through the environment of the robot; computing one or more acoustic transfer functions for the test signal, wherein each acoustic transfer function represents how the test signal was transformed by the environment of the robot during its propagation through the environment of the robot; and storing each of the transfer functions in association with the test location.
Type: Application
Filed: March 16, 2018
Publication date: July 11, 2019
Inventors: Daniel Thomas Casner, Lee Crippen, Hanns W. Tappeiner, Anthony Armenta, Kevin Yoon
-
Publication number: 20190102377
Abstract: An apparatus, e.g., a robot, that uses sensor inputs and physical actions to disambiguate terms in natural language commands and corresponding methods, systems, and computer programs encoded on computer storage media. A robot can receive a natural language command from a user having an ambiguous term that references a location or an entity in an environment of the robot. A user location indicator is identified from one or more sensor inputs. A location within the environment of the robot is computed using the location indicator identified from the one or more sensor inputs. Resolution data is computed using the computed location, wherein the resolution data resolves the reference of the ambiguous term. One or more actions are generated using the natural language command and the resolved reference of the ambiguous term, and the robot can execute the one or more actions.
Type: Application
Filed: October 4, 2017
Publication date: April 4, 2019
Inventors: Brad Neuman, Andrew Neil Stein, Lee Crippen
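The resolution step the abstract describes (compute a location from a user indicator, then use it to resolve the ambiguous term) can be sketched as a nearest-entity lookup against the robot's map. The entity representation and the nearest-neighbor rule are illustrative assumptions; the publication's actual resolution data could be computed differently.

```python
import math

def resolve_reference(indicated_location, entities):
    """Resolve an ambiguous term ('that', 'over there') to the known entity
    nearest the location the user indicated, e.g. by pointing.
    Each entity is a dict with a 'name' and a 2-D 'pos'."""
    def distance(entity):
        ex, ey = entity["pos"]
        return math.hypot(ex - indicated_location[0],
                          ey - indicated_location[1])
    return min(entities, key=distance)
```

Given a pointing gesture resolved to a point near a cube on the robot's map, "pick that up" would bind to the cube entity rather than, say, the charger across the room.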
-
Publication number: 20190070735
Abstract: A robot that uses sensor inputs for attention activation and corresponding methods, systems, and computer programs encoded on computer storage media. The robot can be configured to compute a plurality of attention signals from sensor inputs and provide the plurality of attention signals as input to an attention level classifier to generate an attention level. If a user is paying attention to the robot based on the generated attention level, the robot selects a behavior to execute based on the current attention level, wherein a behavior comprises one or more coordinated actions to be performed by the robot.
Type: Application
Filed: September 1, 2017
Publication date: March 7, 2019
Inventors: Hanns W. Tappeiner, Brad Neuman, Andrew Neil Stein, Lee Crippen
-
Publication number: 20180250815
Abstract: Exemplary methods, apparatuses, and systems receive first and second sets of command tracks, each set including one or more command tracks and each command track directed to control a component of a robot. In response to detecting that a first command track within the first set is directed to control a first component of the robot to perform a first action and a second command track within the second set is directed to control the first component of the robot to perform a second action, the first and second command tracks are merged into a composite command track. The composite command track is executed, causing the first component of the robot to perform the first action while performing the second action.
Type: Application
Filed: June 26, 2017
Publication date: September 6, 2018
Inventors: Andrew Neil Stein, Kevin Yoon, Richard Chaussee, Lee Crippen, Mark Wesley, Michelle Sintov, Hanns Tappeiner
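The merge the abstract describes, combining two command tracks that target the same component into one composite track, can be sketched as a timeline merge of keyframes. The track structure (a component name plus timestamped keyframes) is a hypothetical representation chosen for illustration; the publication does not specify this data layout.

```python
def merge_tracks(track_a, track_b):
    """Merge two command tracks aimed at the same robot component into a
    composite track: keyframes from both, ordered by timestamp, so the
    component performs both actions on a single timeline."""
    if track_a["component"] != track_b["component"]:
        raise ValueError("composite tracks require a shared component")
    keyframes = sorted(track_a["keyframes"] + track_b["keyframes"],
                       key=lambda kf: kf["t"])
    return {"component": track_a["component"], "keyframes": keyframes}
```

For example, merging a nod track and a turn track that both drive the head yields one composite head track whose keyframes interleave the two actions in time order.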