Patents by Inventor Wojciech Zaremba
Wojciech Zaremba has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240020096
Abstract: Disclosed herein are methods, systems, and computer-readable media for generating computer code based on natural language input. In an embodiment, a method may comprise one or more of: receiving a docstring representing natural language text specifying a digital programming result; generating, using a trained machine learning model, and based on the docstring, a computer code sample configured to produce respective candidate results; causing the computer code sample to be executed; identifying, based on the executing, a computer code sample configured to produce a particular candidate result associated with the digital programming result; performing at least one of outputting, via a user interface, the identified computer code sample, compiling the identified computer code sample, transmitting the identified computer code sample to a recipient device, storing the identified computer code sample, and/or re-executing the identified computer code sample.
Type: Application
Filed: May 23, 2023
Publication date: January 18, 2024
Applicant: OpenAI Opco, LLC
Inventors: Mark CHEN, Jerry TWOREK, Ilya SUTSKEVER, Wojciech ZAREMBA, Heewoo JUN, Henrique PONDE DE OLIVEIRA PINTO
-
Publication number: 20240020116
Abstract: Disclosed herein are methods, systems, and computer-readable media for generating natural language based on computer code input. In an embodiment, a method may comprise one or more of: accessing a docstring generation model configured to generate docstrings from computer code; receiving one or more computer code samples; generating, using the docstring generation model and based on the received one or more computer code samples, one or more candidate docstrings representing natural language text, each of the one or more candidate docstrings being associated with at least a portion of the one or more computer code samples; identifying at least one of the one or more candidate docstrings that provides an intent of the at least a portion of the one or more computer code samples; and/or outputting, via a user interface, the at least one identified docstring with the at least a portion of the one or more computer code samples.Type: Application
Filed: May 23, 2023
Publication date: January 18, 2024
Applicant: OpenAI Opco, LLC
Inventors: Mark CHEN, Jerry TWOREK, Ilya SUTSKEVER, Wojciech ZAREMBA, Heewoo JUN, Henrique PONDE DE OLIVEIRA PINTO
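The inverse direction, selecting among candidate docstrings one that conveys the code's intent, can be sketched as below. Scoring candidates by token overlap with the code's identifiers is purely an illustrative assumption for this sketch, not the selection method claimed in the application.

```python
import re

def score(docstring: str, code: str) -> int:
    # Illustrative intent proxy: count shared lowercase tokens between the
    # candidate docstring and the code sample (NOT the patented method).
    code_tokens = set(re.findall(r"[a-z_]+", code.lower()))
    doc_tokens = set(re.findall(r"[a-z_]+", docstring.lower()))
    return len(code_tokens & doc_tokens)

def pick_docstring(candidates: list[str], code: str) -> str:
    # Identify the candidate docstring that best matches the code sample.
    return max(candidates, key=lambda d: score(d, code))

code = "def mean(values):\n    return sum(values) / len(values)"
best = pick_docstring(
    ["Sort the values in place.", "Return the mean of the values."],
    code,
)
```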
-
Patent number: 11080594
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for augmenting neural networks with an external memory using reinforcement learning. One of the methods includes providing an output derived from the system output portion of the neural network output as a system output in the sequence of system outputs; selecting a memory access process from a predetermined set of memory access processes for accessing the external memory from the reinforcement learning portion of the neural network output; writing and reading data from locations in the external memory in accordance with the selected memory access process using the differentiable portion of the neural network output; and combining the data read from the external memory with a next system input in the sequence of system inputs to generate a next neural network input in the sequence of neural network inputs.
Type: Grant
Filed: December 30, 2016
Date of Patent: August 3, 2021
Assignee: DeepMind Technologies Limited
Inventors: Ilya Sutskever, Ivo Danihelka, Alexander Benjamin Graves, Gregory Duncan Wayne, Wojciech Zaremba
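The controller loop described here can be sketched minimally as below. Everything in this sketch is illustrative: the access processes are a toy head-shifting set, and the process is chosen uniformly at random where the patent uses a reinforcement-learned portion of the network output.

```python
import random

random.seed(0)
MEMORY_SIZE = 8
memory = [0.0] * MEMORY_SIZE
ACCESS_PROCESSES = ["shift_left", "shift_right", "stay"]  # predetermined set

def step(position: int, system_input: float) -> tuple[int, float]:
    process = random.choice(ACCESS_PROCESSES)  # stand-in for the RL policy
    if process == "shift_left":
        position = (position - 1) % MEMORY_SIZE
    elif process == "shift_right":
        position = (position + 1) % MEMORY_SIZE
    memory[position] = system_input            # write at the selected location
    read_value = memory[position]              # read back per the same process
    next_network_input = read_value + system_input  # combine read with next input
    return position, next_network_input

pos = 0
for x in [1.0, 2.0, 3.0]:
    pos, net_in = step(pos, x)
```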
-
Patent number: 10936828
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural translation systems with rare word processing. One of the methods is a method for training a neural network translation system to track the sources, in source sentences, of unknown words in target sentences, the sentences being in a source language and a target language, respectively. The method includes: deriving alignment data from a parallel corpus, the alignment data identifying, in each pair of source and target language sentences in the parallel corpus, aligned source and target words; annotating the sentences in the parallel corpus according to the alignment data and a rare word model to generate a training dataset of paired source and target language sentences; and training a neural network translation model on the training dataset.
Type: Grant
Filed: November 16, 2018
Date of Patent: March 2, 2021
Assignee: Google LLC
Inventors: Quoc V. Le, Minh-Thang Luong, Ilya Sutskever, Oriol Vinyals, Wojciech Zaremba
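The annotation step can be sketched as follows: target words outside the vocabulary are replaced by unknown-word tokens that carry a pointer to the aligned source word, so the trained system can copy the source word at translation time. The relative-offset token scheme below is one illustrative choice of rare word model, not necessarily the one claimed.

```python
VOCAB = {"the", "cat", "sat"}  # toy target vocabulary

def annotate(target: list[str], alignment: dict[int, int]) -> list[str]:
    """Rewrite out-of-vocabulary target words as positional unknown tokens."""
    out = []
    for t_idx, word in enumerate(target):
        if word in VOCAB:
            out.append(word)
        else:
            s_idx = alignment.get(t_idx, t_idx)     # aligned source position
            out.append(f"<unk_pos_{s_idx - t_idx}>")  # relative source offset
    return out

# "tabby" is rare, and the alignment says it comes from source position 1.
annotated = annotate(["the", "tabby", "sat"], {0: 0, 1: 1, 2: 2})
```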
-
Patent number: 10657435
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing an input sequence using a recurrent neural network to generate an output for the input sequence. One of the methods includes receiving the input sequence; generating a doubled sequence comprising a first instance of the input sequence followed by a second instance of the input sequence; and processing the doubled sequence using the recurrent neural network to generate the output for the input sequence.
Type: Grant
Filed: October 7, 2015
Date of Patent: May 19, 2020
Assignee: Google LLC
Inventors: Ilya Sutskever, Wojciech Zaremba
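The preprocessing step here is simple to sketch: the recurrent network sees the input twice, so during the second pass its state already reflects the whole sequence. `SimpleRNN` below is a toy accumulator standing in for a trained recurrent network.

```python
def double(sequence: list[int]) -> list[int]:
    # First instance of the input followed by a second instance.
    return sequence + sequence

class SimpleRNN:
    """Toy recurrent model: the state is a running sum of inputs."""
    def __init__(self):
        self.state = 0
    def process(self, seq: list[int]) -> int:
        for x in seq:
            self.state = self.state + x  # toy recurrence
        return self.state

doubled = double([1, 2, 3])
output = SimpleRNN().process(doubled)
```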
-
Patent number: 10402752
Abstract: A system for training a model to predict a sequence (e.g., a sequence of words) given a context is disclosed. A model can be trained to make these predictions using a combination of individual predictions compared to base truth and sequences of predictions based on previous predictions, where the resulting sequence is compared to the base truth sequence. In particular, the model is initially trained using the individual predictions. The model can then be further trained over the training data in multiple iterations, where each iteration includes two processes for each training element. In the first process, an initial part of the sequence is predicted, and the model and model parameters are updated after each prediction. In the second process, the entire remaining amount of the sequence is predicted and compared to the corresponding training sequence to adjust model parameters to encourage or discourage each prediction.
Type: Grant
Filed: November 18, 2016
Date of Patent: September 3, 2019
Assignee: Facebook, Inc.
Inventors: Marc Aurelio Ranzato, Sumit Chopra, Michael Auli, Wojciech Zaremba
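The two-process schedule can be sketched with a toy model: the first process predicts an initial prefix with an update after each step, and the second process rolls out the entire remaining suffix from the model's own predictions, to be compared against the reference. The bigram dictionary below is an illustrative toy stand-in for a trained model, and shrinking the prefix across iterations mirrors the multi-iteration schedule.

```python
reference = ["a", "b", "c", "d"]
bigram = {}  # toy "model": last observed successor of each token

def train_step(seq: list[str], prefix_len: int) -> list[str]:
    # Process 1: predict the initial part step by step, updating after each step.
    for i in range(prefix_len):
        prev = seq[i - 1] if i > 0 else "<s>"
        bigram[prev] = seq[i]                 # per-prediction parameter update
    # Process 2: predict the entire remaining suffix from the model's own outputs.
    predicted = []
    prev = seq[prefix_len - 1] if prefix_len > 0 else "<s>"
    for _ in range(len(seq) - prefix_len):
        nxt = bigram.get(prev, "<unk>")       # each prediction feeds the next
        predicted.append(nxt)
        prev = nxt
    return predicted                          # compared against seq[prefix_len:]

train_step(reference, prefix_len=4)           # early iteration: longer prefix
suffix = train_step(reference, prefix_len=3)  # later iteration: shorter prefix
```

After the first iteration the toy model has seen the full sequence, so the second iteration's free-running suffix matches the reference.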
-
Patent number: 10380482
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. One of the methods includes obtaining partitioned training data for the neural network, wherein the partitioned training data comprises a plurality of training items each of which is assigned to a respective one of a plurality of partitions, wherein each partition is associated with a respective difficulty level; and training the neural network on each of the partitions in a sequence from a partition associated with an easiest difficulty level to a partition associated with a hardest difficulty level, wherein, for each of the partitions, training the neural network comprises: training the neural network on a sequence of training items that includes training items selected from the training items in the partition interspersed with training items selected from the training items in all of the partitions.
Type: Grant
Filed: October 7, 2015
Date of Patent: August 13, 2019
Assignee: Google LLC
Inventors: Ilya Sutskever, Wojciech Zaremba
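The curriculum schedule can be sketched as follows: partitions are visited from easiest to hardest, and each partition's pass intersperses its own items with items drawn from all partitions. The alternating one-to-one interspersal and uniform sampling here are illustrative choices.

```python
import random

random.seed(0)
partitions = {  # difficulty level -> training items (1 = easiest)
    1: ["e1", "e2"],
    2: ["m1", "m2"],
    3: ["h1", "h2"],
}

def curriculum(partitions: dict[int, list[str]]) -> list[str]:
    all_items = [x for items in partitions.values() for x in items]
    schedule = []
    for level in sorted(partitions):                    # easiest -> hardest
        for item in partitions[level]:
            schedule.append(item)                       # current-partition item
            schedule.append(random.choice(all_items))   # interspersed global item
    return schedule

order = curriculum(partitions)
```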
-
Publication number: 20190188268
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural translation systems with rare word processing. One of the methods is a method for training a neural network translation system to track the sources, in source sentences, of unknown words in target sentences, the sentences being in a source language and a target language, respectively. The method includes: deriving alignment data from a parallel corpus, the alignment data identifying, in each pair of source and target language sentences in the parallel corpus, aligned source and target words; annotating the sentences in the parallel corpus according to the alignment data and a rare word model to generate a training dataset of paired source and target language sentences; and training a neural network translation model on the training dataset.
Type: Application
Filed: November 16, 2018
Publication date: June 20, 2019
Inventors: Quoc V. Le, Minh-Thang Luong, Ilya Sutskever, Oriol Vinyals, Wojciech Zaremba
-
Patent number: 10133739
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural translation systems with rare word processing. One of the methods is a method for training a neural network translation system to track the sources, in source sentences, of unknown words in target sentences, the sentences being in a source language and a target language, respectively. The method includes: deriving alignment data from a parallel corpus, the alignment data identifying, in each pair of source and target language sentences in the parallel corpus, aligned source and target words; annotating the sentences in the parallel corpus according to the alignment data and a rare word model to generate a training dataset of paired source and target language sentences; and training a neural network translation model on the training dataset.
Type: Grant
Filed: October 23, 2015
Date of Patent: November 20, 2018
Assignee: Google LLC
Inventors: Quoc V. Le, Minh-Thang Luong, Ilya Sutskever, Oriol Vinyals, Wojciech Zaremba
-
Publication number: 20180144264
Abstract: A system for training a model to predict a sequence (e.g., a sequence of words) given a context is disclosed. A model can be trained to make these predictions using a combination of individual predictions compared to base truth and sequences of predictions based on previous predictions, where the resulting sequence is compared to the base truth sequence. In particular, the model is initially trained using the individual predictions. The model can then be further trained over the training data in multiple iterations, where each iteration includes two processes for each training element. In the first process, an initial part of the sequence is predicted, and the model and model parameters are updated after each prediction. In the second process, the entire remaining amount of the sequence is predicted and compared to the corresponding training sequence to adjust model parameters to encourage or discourage each prediction.
Type: Application
Filed: November 18, 2016
Publication date: May 24, 2018
Inventors: Marc Aurelio Ranzato, Sumit Chopra, Michael Auli, Wojciech Zaremba
-
Publication number: 20170323201
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for augmenting neural networks with an external memory using reinforcement learning. One of the methods includes providing an output derived from the system output portion of the neural network output as a system output in the sequence of system outputs; selecting a memory access process from a predetermined set of memory access processes for accessing the external memory from the reinforcement learning portion of the neural network output; writing and reading data from locations in the external memory in accordance with the selected memory access process using the differentiable portion of the neural network output; and combining the data read from the external memory with a next system input in the sequence of system inputs to generate a next neural network input in the sequence of neural network inputs.
Type: Application
Filed: December 30, 2016
Publication date: November 9, 2017
Inventors: Ilya Sutskever, Ivo Danihelka, Alexander Benjamin Graves, Gregory Duncan Wayne, Wojciech Zaremba
-
Publication number: 20160117316
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural translation systems with rare word processing. One of the methods is a method for training a neural network translation system to track the sources, in source sentences, of unknown words in target sentences, the sentences being in a source language and a target language, respectively. The method includes: deriving alignment data from a parallel corpus, the alignment data identifying, in each pair of source and target language sentences in the parallel corpus, aligned source and target words; annotating the sentences in the parallel corpus according to the alignment data and a rare word model to generate a training dataset of paired source and target language sentences; and training a neural network translation model on the training dataset.
Type: Application
Filed: October 23, 2015
Publication date: April 28, 2016
Inventors: Quoc V. Le, Minh-Thang Luong, Ilya Sutskever, Oriol Vinyals, Wojciech Zaremba
-
Publication number: 20160098632
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. One of the methods includes obtaining partitioned training data for the neural network, wherein the partitioned training data comprises a plurality of training items each of which is assigned to a respective one of a plurality of partitions, wherein each partition is associated with a respective difficulty level; and training the neural network on each of the partitions in a sequence from a partition associated with an easiest difficulty level to a partition associated with a hardest difficulty level, wherein, for each of the partitions, training the neural network comprises: training the neural network on a sequence of training items that includes training items selected from the training items in the partition interspersed with training items selected from the training items in all of the partitions.
Type: Application
Filed: October 7, 2015
Publication date: April 7, 2016
Inventors: Ilya Sutskever, Wojciech Zaremba
-
Patent number: 8411854
Abstract: A method of generating a private key for use in an authentication protocol comprises, at a client: receiving a user-specific identifier; converting the identifier through a one-way function to a string of a pre-determined length; and mapping said string to a permutation πpriv of a pre-determined order, said permutation being operable with a first graph G1 to generate a second graph G2 = πpriv(G1).
Type: Grant
Filed: December 10, 2008
Date of Patent: April 2, 2013
Assignee: National University of Ireland, Galway
Inventors: Slawomir Grzonkowski, Wojciech Zaremba
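The key-generation steps can be sketched as below: the identifier passes through a one-way function, the fixed-length digest is mapped to a permutation of a pre-determined order, and the permutation relabels a first graph G1 into a second graph G2 = πpriv(G1). Using SHA-256 as the one-way function and a digest-seeded Fisher-Yates shuffle as the string-to-permutation mapping are illustrative assumptions of this sketch.

```python
import hashlib
import random

def identifier_to_permutation(identifier: str, n: int) -> list[int]:
    """Map a user-specific identifier to a permutation of order n."""
    digest = hashlib.sha256(identifier.encode()).hexdigest()  # one-way function
    rng = random.Random(int(digest, 16))  # fixed-length string -> deterministic seed
    perm = list(range(n))
    rng.shuffle(perm)                     # permutation of pre-determined order n
    return perm

def apply_permutation(edges: set, perm: list[int]) -> set:
    # G2 = perm(G1): relabel every vertex of G1 through the permutation.
    return {tuple(sorted((perm[u], perm[v]))) for u, v in edges}

g1 = {(0, 1), (1, 2), (2, 3)}             # first graph G1 (a path on 4 vertices)
perm = identifier_to_permutation("alice@example.com", 4)
g2 = apply_permutation(g1, perm)          # second graph G2, isomorphic to G1
```

Because the mapping is deterministic, the same identifier always reproduces the same private permutation, while recovering it from G1 and G2 alone requires solving a graph isomorphism instance.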
-
Publication number: 20100290618
Abstract: A method of generating a private key for use in an authentication protocol comprises, at a client: receiving a user-specific identifier; converting the identifier through a one-way function to a string of a pre-determined length; and mapping said string to a permutation πpriv of a pre-determined order, said permutation being operable with a first graph G1 to generate a second graph G2 = πpriv(G1).
Type: Application
Filed: December 10, 2008
Publication date: November 18, 2010
Applicant: National University of Ireland, Galway
Inventors: Slawomir Grzonkowski, Wojciech Zaremba