Patents by Inventor Ofir Nachum
Ofir Nachum has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11875262
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks. In one aspect, a system includes a neural network shrinking engine that is configured to receive a neural network being trained and generate a reduced neural network by a shrinking process. The shrinking process includes training the neural network based on a shrinking engine loss function that includes terms penalizing active neurons of the neural network and removing inactive neurons from the neural network. The system includes a neural network expansion engine that is configured to receive the neural network being trained and generate an expanded neural network by an expansion process including adding new neurons to the neural network and training the neural network based on an expanding engine loss function. The system includes a training subsystem that generates reduced neural networks and expanded neural networks.
Type: Grant
Filed: March 23, 2022
Date of Patent: January 16, 2024
Assignee: Google LLC
Inventors: Ofir Nachum, Ariel Gordon, Elad Eban, Bo Chen
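The shrinking process this abstract describes, a loss term penalizing active neurons followed by removal of inactive ones, can be sketched in NumPy. This is an illustration only, not the patented method: the per-neuron `gates`, the L1 penalty, and the fixed `threshold` are all assumptions standing in for the learned loss terms.

```python
import numpy as np

def shrinking_loss(task_loss, gates, penalty_weight=1e-3):
    # Task loss plus an L1 term penalizing active neurons: the penalty
    # drives per-neuron gates toward zero, marking those neurons inactive.
    return task_loss + penalty_weight * np.abs(gates).sum()

def remove_inactive(weights, gates, threshold=1e-3):
    # Drop neurons (rows of `weights`) whose gate fell below `threshold`.
    active = np.abs(gates) > threshold
    return weights[active], gates[active]

gates = np.array([0.5, 1e-5, 0.3, 0.0])
weights = np.arange(8.0).reshape(4, 2)   # 4 neurons, 2 inputs each
pruned_w, pruned_g = remove_inactive(weights, gates)
# Two of the four neurons survive the pruning step.
```

The expansion engine would run the opposite step, appending freshly initialized rows to `weights` before further training.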
-
Publication number: 20230376697
Abstract: Systems and methods for dialogue response prediction can leverage a plurality of machine-learned language models to generate a plurality of candidate outputs, which can be processed by a dialogue management model to determine a predicted dialogue response. The plurality of machine-learned language models can include a plurality of experts trained on different intents, emotions, and/or tasks. The particular candidate output selected may be selected by the dialogue management model based on semantics determined based on a language representation. The language representation can be a representation generated by processing the conversation history of a conversation to determine conversation semantics.
Type: Application
Filed: February 23, 2023
Publication date: November 23, 2023
Inventors: Yinlam Chow, Ofir Nachum, Azamat Tulepbergenov
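The selection step, scoring each expert's candidate against a representation of the conversation, can be sketched as follows. Cosine similarity and the toy fixed embeddings are illustrative assumptions; in the abstract the scoring is done by a learned dialogue management model, not a fixed similarity function.

```python
import numpy as np

def select_response(candidates, conv_repr, embed):
    # Score each expert's candidate output by cosine similarity between
    # its embedding and the conversation representation; return the best.
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    scores = [cos(embed(c), conv_repr) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Toy fixed embeddings (illustrative stand-ins for learned expert models).
TOY = {"hello": np.array([1.0, 0.0]), "goodbye": np.array([0.0, 1.0])}
best = select_response(["hello", "goodbye"], np.array([0.9, 0.1]), TOY.get)
```

Here `conv_repr` plays the role of the language representation derived from the conversation history.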
-
Publication number: 20230367996
Abstract: A method includes determining a first state associated with a particular task, and determining, by a task policy model, a latent space representation of the first state. The task policy model may have been trained to define, for each respective state of a plurality of possible states associated with the particular task, a corresponding latent space representation of the respective state. The method also includes determining, by a primitive policy model and based on the first state and the latent space representation of the first state, an action to take as part of the particular task. The primitive policy model may have been trained to define a space of primitive policies for the plurality of possible states associated with the particular task and a plurality of possible latent space representations. The method further includes executing the action to reach a second state associated with the particular task.Type: Application
Filed: September 23, 2021
Publication date: November 16, 2023
Inventors: Anurag Ajay, Ofir Nachum, Aviral Kumar, Sergey Levine
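The two-level structure in this abstract, a task policy mapping state to latent representation and a primitive policy mapping (state, latent) to action, can be sketched with linear maps. The random linear layers and the toy transition are assumptions standing in for the trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

class TaskPolicy:
    # Maps a state to its latent-space representation (a random linear
    # map stands in for the trained task policy model).
    def __init__(self, state_dim, latent_dim):
        self.W = rng.normal(size=(latent_dim, state_dim))
    def __call__(self, state):
        return self.W @ state

class PrimitivePolicy:
    # Maps (state, latent) to an action, sketching the primitive policy
    # model conditioned on the latent representation.
    def __init__(self, state_dim, latent_dim, action_dim):
        self.W = rng.normal(size=(action_dim, state_dim + latent_dim))
    def __call__(self, state, z):
        return self.W @ np.concatenate([state, z])

task_pi = TaskPolicy(state_dim=3, latent_dim=2)
prim_pi = PrimitivePolicy(state_dim=3, latent_dim=2, action_dim=1)

state = np.zeros(3)                # first state for the particular task
z = task_pi(state)                 # its latent space representation
action = prim_pi(state, z)         # action chosen from (state, latent)
next_state = state + 0.1 * np.tanh(action).repeat(3)  # toy transition
```

Executing `action` yields the second state, completing one step of the method the abstract walks through.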
-
Patent number: 11429844
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network used to select actions to be performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes obtaining path data defining a path through the environment traversed by the agent. A consistency error is determined for the path from a combined reward, first and last soft-max state values, and a path likelihood. A value update for the current values of the policy neural network parameters is determined from at least the consistency error. The value update is used to adjust the current values of the policy neural network parameters.
Type: Grant
Filed: June 18, 2020
Date of Patent: August 30, 2022
Assignee: Google LLC
Inventors: Ofir Nachum, Mohammad Norouzi, Dale Eric Schuurmans, Kelvin Xu
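A consistency error built from the three ingredients the abstract names, combined reward, first and last soft-max state values, and a path likelihood, can be sketched as below. The exact discounting and the entropy weight `tau` are assumptions about how these quantities combine, not a quotation of the claims.

```python
def consistency_error(v_first, v_last, rewards, log_probs,
                      gamma=0.99, tau=0.1):
    # Combine the discounted rewards along the path with the path's
    # log-likelihood under the current policy (weighted by tau), then
    # bracket with the soft-max values of the first and last states.
    d = len(rewards)
    path_term = sum(gamma**t * (rewards[t] - tau * log_probs[t])
                    for t in range(d))
    return -v_first + gamma**d * v_last + path_term

# With no discounting and no entropy term, the error reduces to the
# undiscounted reward sum minus the value gap along the path.
err = consistency_error(0.0, 0.0, [1.0, 1.0], [0.0, 0.0],
                        gamma=1.0, tau=0.0)
```

The value update the abstract mentions would then follow the gradient of this error with respect to the policy network's parameters.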
-
Publication number: 20220215263
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks. In one aspect, a system includes a neural network shrinking engine that is configured to receive a neural network being trained and generate a reduced neural network by a shrinking process. The shrinking process includes training the neural network based on a shrinking engine loss function that includes terms penalizing active neurons of the neural network and removing inactive neurons from the neural network. The system includes a neural network expansion engine that is configured to receive the neural network being trained and generate an expanded neural network by an expansion process including adding new neurons to the neural network and training the neural network based on an expanding engine loss function. The system includes a training subsystem that generates reduced neural networks and expanded neural networks.
Type: Application
Filed: March 23, 2022
Publication date: July 7, 2022
Inventors: Ofir Nachum, Ariel Gordon, Elad Eban, Bo Chen
-
Patent number: 11315019
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks. In one aspect, a system includes a neural network shrinking engine that is configured to receive a neural network being trained and generate a reduced neural network by a shrinking process. The shrinking process includes training the neural network based on a shrinking engine loss function that includes terms penalizing active neurons of the neural network and removing inactive neurons from the neural network. The system includes a neural network expansion engine that is configured to receive the neural network being trained and generate an expanded neural network by an expansion process including adding new neurons to the neural network and training the neural network based on an expanding engine loss function. The system includes a training subsystem that generates reduced neural networks and expanded neural networks.
Type: Grant
Filed: November 15, 2017
Date of Patent: April 26, 2022
Assignee: Google LLC
Inventors: Ofir Nachum, Ariel Gordon, Elad Eban, Bo Chen
-
Publication number: 20220036203
Abstract: The present disclosure is directed to systems and methods for identifying and correcting label bias in machine learning via intelligent re-weighting of training examples. In particular, aspects of the present disclosure leverage a problem formulation which assumes the existence of underlying, unknown, and unbiased labels which are overwritten by an agent who intends to provide accurate labels but may have biases towards certain groups. Despite the fact that a biased training dataset provides only observations of the biased labels, the systems and methods described herein can nevertheless correct the bias by re-weighting the data points without changing the labels.
Type: Application
Filed: October 16, 2019
Publication date: February 3, 2022
Inventors: Ofir Nachum, Hanxi Heinrich Jiang
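The key operation here, re-weighting data points while leaving labels untouched, can be sketched with a multiplicative update. The exponential form, learning rate, and per-group "violation" signal are illustrative assumptions about how the intelligent re-weighting might be driven, not the claimed procedure.

```python
import numpy as np

def reweight(group_ids, group_violation, weights, lr=0.5):
    # Multiplicative re-weighting step: examples from groups whose
    # fairness constraint is violated (positive violation) are
    # down-weighted, and up-weighted when the violation is negative.
    # The labels themselves are never changed.
    w = weights * np.exp(-lr * group_violation[group_ids])
    return w * len(w) / w.sum()    # renormalize to mean weight 1

groups = np.array([0, 0, 1, 1])            # group membership per example
violation = np.array([1.0, -1.0])          # group 0 over-favored, group 1 not
new_w = reweight(groups, violation, np.ones(4))
```

Iterating such updates until the violations vanish yields a weighting under which training on the observed (biased) labels approximates training on the unknown unbiased ones.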
-
Publication number: 20200320372
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network used to select actions to be performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes obtaining path data defining a path through the environment traversed by the agent. A consistency error is determined for the path from a combined reward, first and last soft-max state values, and a path likelihood. A value update for the current values of the policy neural network parameters is determined from at least the consistency error. The value update is used to adjust the current values of the policy neural network parameters.
Type: Application
Filed: June 18, 2020
Publication date: October 8, 2020
Inventors: Ofir Nachum, Mohammad Norouzi, Dale Eric Schuurmans, Kelvin Xu
-
Patent number: 10733502
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network used to select actions to be performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes obtaining path data defining a path through the environment traversed by the agent. A consistency error is determined for the path from a combined reward, first and last soft-max state values, and a path likelihood. A value update for the current values of the policy neural network parameters is determined from at least the consistency error. The value update is used to adjust the current values of the policy neural network parameters.
Type: Grant
Filed: July 8, 2019
Date of Patent: August 4, 2020
Assignee: Google LLC
Inventors: Ofir Nachum, Mohammad Norouzi, Dale Eric Schuurmans, Kelvin Xu
-
Publication number: 20190332922
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network used to select actions to be performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes obtaining path data defining a path through the environment traversed by the agent. A consistency error is determined for the path from a combined reward, first and last soft-max state values, and a path likelihood. A value update for the current values of the policy neural network parameters is determined from at least the consistency error. The value update is used to adjust the current values of the policy neural network parameters.
Type: Application
Filed: July 8, 2019
Publication date: October 31, 2019
Inventors: Ofir Nachum, Mohammad Norouzi, Dale Eric Schuurmans, Kelvin Xu
-
Publication number: 20190147339
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks. In one aspect, a system includes a neural network shrinking engine that is configured to receive a neural network being trained and generate a reduced neural network by a shrinking process. The shrinking process includes training the neural network based on a shrinking engine loss function that includes terms penalizing active neurons of the neural network and removing inactive neurons from the neural network. The system includes a neural network expansion engine that is configured to receive the neural network being trained and generate an expanded neural network by an expansion process including adding new neurons to the neural network and training the neural network based on an expanding engine loss function. The system includes a training subsystem that generates reduced neural networks and expanded neural networks.
Type: Application
Filed: November 15, 2017
Publication date: May 16, 2019
Inventors: Ofir Nachum, Ariel Gordon, Elad Eban, Bo Chen