Patents by Inventor Eric Jang

Eric Jang has filed patent applications for the following inventions. This listing includes applications that are still pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240126134
    Abstract: The disclosed system may include at least one gradient-index liquid crystal lens. The system may include a selection module that selects a viewing angle. The system may also include an adjustment module that dynamically adjusts a phase reset property of the gradient-index liquid crystal lens in response to the selected viewing angle. Various other devices, systems, and methods are also disclosed.
    Type: Application
    Filed: October 12, 2023
    Publication date: April 18, 2024
    Inventors: Afsoon Jamali, Changwon Jang, Zhimin Shi, Sho Nakahara, Eric Stratton
  • Publication number: 20240100693
    Abstract: Some implementations relate to using trained robotic action ML models in controlling a robot to perform a robotic task. Some versions of those implementations include (a) a first modality robotic action ML model that is used to generate, based on processing first modality sensor data instances, first predicted action outputs for the robotic task and (b) a second modality robotic action ML model that is used to generate, in parallel and based on processing second modality sensor data instances, second predicted action outputs for the robotic task. In some of those versions, respective weights for each pair of the first and second predicted action outputs are dynamically determined based on analysis of embeddings generated in generating the first and second predicted action outputs. A final predicted action output, for controlling the robot, is determined based on the weights.
    Type: Application
    Filed: January 26, 2023
    Publication date: March 28, 2024
    Inventors: Daniel Ho, Eric Jang, Mohi Khansari, Yu Qing Du, Alexander A. Alemi
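    The dynamic weighting described in this abstract can be sketched as follows. This is a toy illustration only; the function names, the cosine-similarity scoring against a shared context embedding, and the softmax fusion rule are assumptions for the sketch, not the claimed method of determining weights.

    ```python
    import math

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    def fuse_actions(action_a, action_b, emb_a, emb_b, context_emb):
        # Score each modality by how well its embedding agrees with a shared
        # context embedding, then softmax the scores into fusion weights.
        s_a = cosine(emb_a, context_emb)
        s_b = cosine(emb_b, context_emb)
        z = math.exp(s_a) + math.exp(s_b)
        w_a, w_b = math.exp(s_a) / z, math.exp(s_b) / z
        # The final predicted action is the weight-blended combination of
        # both modalities' predicted actions.
        return [w_a * x + w_b * y for x, y in zip(action_a, action_b)]
    ```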
  • Publication number: 20230381970
    Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human, such that the human can proactively intervene in performance of the robotic task.
    Type: Application
    Filed: August 11, 2023
    Publication date: November 30, 2023
    Inventors: Seyed Mohammad Khansari Zadeh, Eric Jang, Daniel Lam, Daniel Kappler, Matthew Bennice, Brent Austin, Yunfei Bai, Sergey Levine, Alexander Irpan, Nicolas Sievers, Chelsea Finn
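    The intervention loop this abstract describes can be sketched as below. The helper names (`run_episode`, `failure_prob`) and the fixed-threshold trigger are illustrative assumptions; the point is only that predicted failures prompt a human takeover, and the human's corrective actions are logged for later policy refinement.

    ```python
    def run_episode(policy, failure_prob, human, observations, threshold=0.5):
        """Roll out a policy, prompting a human to intervene whenever the
        model predicts the task is likely to fail. Intervention steps are
        logged so the policy can later be refined on the corrections."""
        actions, corrections = [], []
        for obs in observations:
            if failure_prob(obs) > threshold:
                a = human(obs)               # human takes over this step
                corrections.append((obs, a))  # saved for refinement
            else:
                a = policy(obs)
            actions.append(a)
        return actions, corrections
    ```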
  • Publication number: 20230370282
    Abstract: The present disclosure describes a system and method for using distributed ledgers to improve data integrity. The system may include a distributed integrity ledger, a distributed identity ledger, multiple network node managers to manage transactions in both ledgers, a data recording device, a manufacturer that makes the device, a user who uses the device, a data center to store recorded data pieces, and a verifier who needs to verify the authenticity of the recorded data. The distributed integrity ledger is used to store commitments generated by the data recording device to verify the authenticity of recorded data pieces. In addition, because a commitment is neither traceable nor linkable to personal information, the possibility of a privacy violation is minimized even if the commitments are disclosed to the public.
    Type: Application
    Filed: October 6, 2021
    Publication date: November 16, 2023
    Inventors: Kyle HUANG, Chiahsin LI, Andrew TULLY, Eric JANG
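    A commitment of the kind this abstract relies on can be sketched with a salted hash. This is a generic sketch, not the patented scheme: the random salt makes the digest hiding (the ledger entry reveals nothing about the data or its owner), while the hash makes it binding (any tampering fails verification).

    ```python
    import hashlib
    import os

    def commit(data: bytes, salt: bytes = None):
        # The ledger stores only the digest; the device keeps the salt and
        # data, and discloses them later only to an authorized verifier.
        salt = os.urandom(16) if salt is None else salt
        digest = hashlib.sha256(salt + data).hexdigest()
        return digest, salt

    def verify(data: bytes, salt: bytes, digest: str) -> bool:
        # Recompute the commitment and compare against the ledger entry.
        return hashlib.sha256(salt + data).hexdigest() == digest
    ```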
  • Patent number: 11772272
    Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human, such that the human can proactively intervene in performance of the robotic task.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: October 3, 2023
    Assignee: GOOGLE LLC
    Inventors: Seyed Mohammad Khansari Zadeh, Eric Jang, Daniel Lam, Daniel Kappler, Matthew Bennice, Brent Austin, Yunfei Bai, Sergey Levine, Alexander Irpan, Nicolas Sievers, Chelsea Finn
  • Patent number: 11717959
    Abstract: Deep machine learning methods and apparatus related to semantic robotic grasping are provided. Some implementations relate to training a grasp neural network, a semantic neural network, and a joint neural network of a semantic grasping model. In some of those implementations, the joint neural network is a deep neural network and can be trained based on both: grasp losses generated based on grasp predictions generated over the grasp neural network, and semantic losses generated based on semantic predictions generated over the semantic neural network. Some implementations are directed to utilization of the trained semantic grasping model to servo, or control, a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: August 8, 2023
    Assignee: GOOGLE LLC
    Inventors: Eric Jang, Sudheendra Vijayanarasimhan, Peter Pastor Sampedro, Julian Ibarz, Sergey Levine
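    The two-term training signal in this abstract (a grasp loss plus a semantic loss, both flowing into a shared joint network) can be sketched as a combined objective. The specific loss forms and weighting below are assumptions for illustration, not the claimed training procedure.

    ```python
    import math

    def joint_loss(grasp_pred, grasp_label, sem_pred, sem_label, w_sem=1.0):
        """Binary cross-entropy on predicted grasp success plus cross-entropy
        on the predicted semantic class; summing both terms lets gradients
        from grasp and semantic predictions train a shared joint network."""
        eps = 1e-7
        g = -(grasp_label * math.log(grasp_pred + eps)
              + (1 - grasp_label) * math.log(1 - grasp_pred + eps))
        s = -math.log(sem_pred[sem_label] + eps)
        return g + w_sem * s
    ```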
  • Publication number: 20230154160
    Abstract: Implementations disclosed herein relate to mitigating the reality gap through feature-level domain adaptation in training of a vision-based robotic action machine learning (ML) model. Implementations mitigate the reality gap through utilization of embedding consistency losses and/or action consistency losses during training of the action ML model.
    Type: Application
    Filed: November 14, 2022
    Publication date: May 18, 2023
    Inventors: Mohi Khansari, Daniel Ho, Eric Jang, Yu Qing Du
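    The two consistency losses named in this abstract can be sketched as follows. Mean-squared error on paired sim/real quantities is an assumption for the sketch; the claimed losses may take other forms.

    ```python
    def consistency_losses(sim_emb, real_emb, sim_action, real_action):
        """Embedding consistency: paired simulated and real observations of
        the same scene should map to nearby features. Action consistency:
        the policy should emit similar actions for either domain's view."""
        emb_loss = sum((a - b) ** 2 for a, b in zip(sim_emb, real_emb)) / len(sim_emb)
        act_loss = sum((a - b) ** 2 for a, b in zip(sim_action, real_action)) / len(sim_action)
        return emb_loss, act_loss
    ```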
  • Publication number: 20220410380
    Abstract: Utilizing an initial set of offline positive-only robotic demonstration data for pre-training an actor network and a critic network for robotic control, followed by further training of the networks based on online robotic episodes that utilize the network(s). Implementations enable the actor network to be effectively pre-trained, while mitigating occurrences of and/or the extent of forgetting when further trained based on episode data. Implementations additionally or alternatively enable the actor network to be trained to a given degree of effectiveness in fewer training steps. In various implementations, one or more adaptation techniques are utilized in performing the robotic episodes and/or in performing the robotic training. The adaptation techniques can each, individually, result in one or more corresponding advantages and, when used in any combination, the corresponding advantages can accumulate.
    Type: Application
    Filed: June 17, 2022
    Publication date: December 29, 2022
    Inventors: Yao Lu, Mengyuan Yan, Seyed Mohammad Khansari Zadeh, Alexander Herzog, Eric Jang, Karol Hausman, Yevgen Chebotar, Sergey Levine, Alexander Irpan
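    One simple way to mitigate the forgetting this abstract mentions is to keep sampling offline demonstrations after online training begins. The mixed-batch scheme below is a generic sketch under that assumption, not one of the patent's specific adaptation techniques.

    ```python
    def make_batch(offline, online, offline_frac=0.5, batch=4):
        """After pre-training the actor and critic on offline positive-only
        demonstrations, later training batches blend offline and online
        transitions so the networks keep seeing the demonstrations while
        learning from newly collected episodes."""
        n_off = int(batch * offline_frac)
        return offline[:n_off] + online[:batch - n_off]
    ```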
  • Publication number: 20220297303
    Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human, such that the human can proactively intervene in performance of the robotic task.
    Type: Application
    Filed: March 16, 2021
    Publication date: September 22, 2022
    Inventors: Seyed Mohammad Khansari Zadeh, Eric Jang, Daniel Lam, Daniel Kappler, Matthew Bennice, Brent Austin, Yunfei Bai, Sergey Levine, Alexander Irpan, Nicolas Sievers, Chelsea Finn
  • Publication number: 20220105624
    Abstract: Techniques are disclosed that enable training a meta-learning model, for use in causing a robot to perform a task, using imitation learning as well as reinforcement learning. Some implementations relate to training the meta-learning model using imitation learning based on one or more human guided demonstrations of the task. Additional or alternative implementations relate to training the meta-learning model using reinforcement learning based on trials of the robot attempting to perform the task. Further implementations relate to using the trained meta-learning model to learn a new task few-shot (or one-shot) based on a human guided demonstration of the new task.
    Type: Application
    Filed: January 23, 2020
    Publication date: April 7, 2022
    Inventors: Mrinal Kalakrishnan, Yunfei Bai, Paul Wohlhart, Eric Jang, Chelsea Finn, Seyed Mohammad Khansari Zadeh, Sergey Levine, Allan Zhou, Alexander Herzog, Daniel Kappler
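    The few-shot adaptation step this abstract ends with can be sketched as a short inner-loop fine-tune on a single demonstration. The linear policy, squared-error objective, and learning rate are toy assumptions; they stand in for whatever meta-learned model the patent actually describes.

    ```python
    def few_shot_adapt(theta, demo, lr=0.1, steps=20):
        """Starting from meta-learned parameters theta = (w, b), take a few
        gradient steps on one demonstration of the new task, minimizing the
        squared error between the linear policy's action and the demo action."""
        w, b = theta
        for _ in range(steps):
            for x, a in demo:
                err = (w * x + b) - a   # prediction error on the demo pair
                w -= lr * err * x
                b -= lr * err
        return w, b
    ```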
  • Publication number: 20210237266
    Abstract: Using large-scale reinforcement learning to train a policy model that can be utilized by a robot in performing a robotic task in which the robot interacts with one or more environmental objects. In various implementations, off-policy deep reinforcement learning is used to train the policy model, and the off-policy deep reinforcement learning is based on self-supervised data collection. The policy model can be a neural network model. Implementations of the reinforcement learning utilized in training the neural network model utilize a continuous-action variant of Q-learning. Through techniques disclosed herein, implementations can learn policies that generalize effectively to previously unseen objects, previously unseen environments, etc.
    Type: Application
    Filed: June 14, 2019
    Publication date: August 5, 2021
    Inventors: Dmitry Kalashnikov, Alexander Irpan, Peter Pastor Sampedro, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Sergey Levine
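    A continuous-action variant of Q-learning needs a way to approximate argmax over actions; one common choice (an assumption here, though it matches published descriptions of this line of work) is the cross-entropy method. The sketch below optimizes a stand-in Q-function; `cem_argmax_q` and its parameters are illustrative names.

    ```python
    import random

    def cem_argmax_q(q_fn, dim, iters=3, pop=64, elite=6):
        """Approximate argmax_a Q(s, a) for continuous actions with the
        cross-entropy method: sample a Gaussian population of actions,
        keep the highest-scoring elites, refit the Gaussian, repeat."""
        mu = [0.0] * dim
        sigma = [1.0] * dim
        for _ in range(iters):
            samples = [[random.gauss(m, s) for m, s in zip(mu, sigma)]
                       for _ in range(pop)]
            samples.sort(key=q_fn, reverse=True)
            elites = samples[:elite]
            mu = [sum(e[i] for e in elites) / elite for i in range(dim)]
            sigma = [max(1e-3, (sum((e[i] - mu[i]) ** 2 for e in elites) / elite) ** 0.5)
                     for i in range(dim)]
        return mu

    # Toy Q-function peaked at a = (0.5, 0.5); CEM should land nearby.
    random.seed(0)
    best = cem_argmax_q(lambda a: -sum((x - 0.5) ** 2 for x in a), dim=2)
    ```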
  • Patent number: 11045949
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a semantic grasping model to predict a measure that indicates whether motion data for an end effector of a robot will result in a successful grasp of an object; and to predict an additional measure that indicates whether the object has desired semantic feature(s). Some implementations are directed to utilization of the trained semantic grasping model to servo a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: June 29, 2021
    Assignee: GOOGLE LLC
    Inventors: Sudheendra Vijayanarasimhan, Eric Jang, Peter Pastor Sampedro, Sergey Levine
  • Publication number: 20200338722
    Abstract: Deep machine learning methods and apparatus related to semantic robotic grasping are provided. Some implementations relate to training a grasp neural network, a semantic neural network, and a joint neural network of a semantic grasping model. In some of those implementations, the joint neural network is a deep neural network and can be trained based on both: grasp losses generated based on grasp predictions generated over the grasp neural network, and semantic losses generated based on semantic predictions generated over the semantic neural network. Some implementations are directed to utilization of the trained semantic grasping model to servo, or control, a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
    Type: Application
    Filed: June 28, 2018
    Publication date: October 29, 2020
    Inventors: Eric Jang, Sudheendra Vijayanarasimhan, Peter Pastor Sampedro, Julian Ibarz, Sergey Levine
  • Publication number: 20200215686
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a semantic grasping model to predict a measure that indicates whether motion data for an end effector of a robot will result in a successful grasp of an object; and to predict an additional measure that indicates whether the object has desired semantic feature(s). Some implementations are directed to utilization of the trained semantic grasping model to servo a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
    Type: Application
    Filed: March 19, 2020
    Publication date: July 9, 2020
    Inventors: Sudheendra Vijayanarasimhan, Eric Jang, Peter Pastor Sampedro, Sergey Levine
  • Patent number: 10639792
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a semantic grasping model to predict a measure that indicates whether motion data for an end effector of a robot will result in a successful grasp of an object; and to predict an additional measure that indicates whether the object has desired semantic feature(s). Some implementations are directed to utilization of the trained semantic grasping model to servo a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: May 5, 2020
    Assignee: GOOGLE LLC
    Inventors: Sudheendra Vijayanarasimhan, Eric Jang, Peter Pastor Sampedro, Sergey Levine
  • Publication number: 20180147723
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a semantic grasping model to predict a measure that indicates whether motion data for an end effector of a robot will result in a successful grasp of an object; and to predict an additional measure that indicates whether the object has desired semantic feature(s). Some implementations are directed to utilization of the trained semantic grasping model to servo a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
    Type: Application
    Filed: January 26, 2018
    Publication date: May 31, 2018
    Inventors: Sudheendra Vijayanarasimhan, Eric Jang, Peter Pastor Sampedro, Sergey Levine
  • Patent number: 9914213
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a semantic grasping model to predict a measure that indicates whether motion data for an end effector of a robot will result in a successful grasp of an object; and to predict an additional measure that indicates whether the object has desired semantic feature(s). Some implementations are directed to utilization of the trained semantic grasping model to servo a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: March 13, 2018
    Assignee: GOOGLE LLC
    Inventors: Sudheendra Vijayanarasimhan, Eric Jang, Peter Pastor Sampedro, Sergey Levine
  • Publication number: 20170252924
    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a semantic grasping model to predict a measure that indicates whether motion data for an end effector of a robot will result in a successful grasp of an object; and to predict an additional measure that indicates whether the object has desired semantic feature(s). Some implementations are directed to utilization of the trained semantic grasping model to servo a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
    Type: Application
    Filed: March 2, 2017
    Publication date: September 7, 2017
    Inventors: Sudheendra Vijayanarasimhan, Eric Jang, Peter Pastor Sampedro, Sergey Levine
  • Patent number: 7894686
    Abstract: An apparatus comprising a first circuit and a second circuit. The first circuit may be configured to determine frequency of occurrence information for a range of gray levels from luminance data of an input signal. The second circuit may be configured to selectively adjust enhancement for at least one portion of the range of gray levels based upon the frequency of occurrence information.
    Type: Grant
    Filed: January 5, 2006
    Date of Patent: February 22, 2011
    Assignee: LSI Corporation
    Inventor: Eric Jang
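    The frequency-driven adjustment in this abstract can be sketched in software, though the patent describes hardware circuits. The bin count, gain formula, and 8-bit luminance assumption below are all illustrative choices.

    ```python
    def adaptive_enhancement(luma, bins=8, max_gain=1.5):
        """Build a frequency-of-occurrence histogram over 8-bit luminance
        values, then apply more enhancement gain to the gray-level ranges
        that occur most often in the input signal."""
        hist = [0] * bins
        for y in luma:
            hist[min(y * bins // 256, bins - 1)] += 1
        peak = max(hist) or 1
        # Gain per bin scales with how frequently that gray-level range occurs.
        gains = [1.0 + (max_gain - 1.0) * h / peak for h in hist]
        out = []
        for y in luma:
            g = gains[min(y * bins // 256, bins - 1)]
            out.append(min(255, int(y * g)))  # clamp to the 8-bit range
        return out
    ```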
  • Publication number: 20070154107
    Abstract: An apparatus comprising a first circuit and a second circuit. The first circuit may be configured to determine frequency of occurrence information for a range of gray levels from luminance data of an input signal. The second circuit may be configured to selectively adjust enhancement for at least one portion of the range of gray levels based upon the frequency of occurrence information.
    Type: Application
    Filed: January 5, 2006
    Publication date: July 5, 2007
    Inventor: Eric Jang