Patents by Inventor David Joseph Weiss

David Joseph Weiss has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230040006
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for planning the future trajectory of an autonomous vehicle in an environment.
    Type: Application
    Filed: August 6, 2021
    Publication date: February 9, 2023
    Inventors: David Joseph Weiss, Jeffrey Ling
  • Publication number: 20230041501
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network. In one aspect, a method for training a policy neural network configured to receive a scene data input and to generate a policy output to be followed by a target agent comprises: maintaining a set of training data, the set of training data comprising (i) training scene inputs and (ii) respective target policy outputs; at each training iteration: generating additional training scene inputs; generating a respective target policy output for each additional training scene input using a trained expert policy neural network that has been trained to receive an expert scene data input comprising (i) data characterizing the current scene and (ii) data characterizing a future state of the target agent; updating the set of training data; and training the policy neural network on the updated set of training data.
    Type: Application
    Filed: August 6, 2021
    Publication date: February 9, 2023
    Inventors: David Joseph Weiss, Jeffrey Ling, Adam Edward Bloniarz, Cole Gulino
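
    The training procedure described in this abstract resembles an iterative imitation-learning loop: an expert policy network that also sees the target agent's future state labels newly generated scenes, and the growing dataset is used to retrain the student policy. The Python sketch below illustrates that loop under toy assumptions; the names policy_net and expert_net, the random scene generation, and the MSE objective are illustrative stand-ins, not details from the filing.

    ```python
    import torch
    from torch import nn, optim

    # Hypothetical stand-ins; the publication does not specify architectures.
    policy_net = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
    expert_net = nn.Sequential(nn.Linear(160, 256), nn.ReLU(), nn.Linear(256, 10))
    expert_net.eval()  # the expert policy is already trained and held fixed

    optimizer = optim.Adam(policy_net.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    # (i) training scene inputs and (ii) respective target policy outputs
    scenes = torch.randn(512, 128)
    targets = torch.randn(512, 10)

    for iteration in range(5):
        # Generate additional training scene inputs (placeholder: random scenes).
        new_scenes = torch.randn(64, 128)
        # The expert also receives data characterizing a future state of the target agent.
        future_state = torch.randn(64, 32)
        with torch.no_grad():
            new_targets = expert_net(torch.cat([new_scenes, future_state], dim=-1))

        # Update the set of training data with the expert-labeled scenes.
        scenes = torch.cat([scenes, new_scenes])
        targets = torch.cat([targets, new_targets])

        # Train the policy network on the updated set of training data.
        for _ in range(100):
            idx = torch.randint(0, scenes.size(0), (32,))
            loss = loss_fn(policy_net(scenes[idx]), targets[idx])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    ```
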
  • Publication number: 20220383076
    Abstract: A method for performing one or more tasks, wherein each of the one or more tasks includes predicting behavior of one or more agents in an environment, the method comprising: obtaining a three-dimensional (3D) input tensor representing behaviors of the one or more agents in the environment across a plurality of time steps; generating an encoded representation of the 3D input tensor by processing the 3D input tensor using an encoder neural network, wherein the 3D input tensor comprises a plurality of observed cells and a plurality of masked cells; and processing the encoded representation of the 3D input tensor using a decoder neural network to generate a 4D output tensor.
    Type: Application
    Filed: May 31, 2022
    Publication date: December 1, 2022
    Inventors: Jonathon Shlens, Vijay Vasudevan, Jiquan Ngiam, Benjamin James Caine, Zhengdong Zhang, Zhifeng Chen, Hao-Tien Chiang, David Joseph Weiss, Jeffrey Ling, Ashish Venugopal
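
    This abstract describes encoding a partially masked 3D tensor of agent behavior and decoding it into a 4D output. A minimal sketch of that shape of model follows, assuming an agents x time x features layout, a learned mask token, a Transformer encoder, and a fourth output axis holding multiple predicted modes; all of those specifics are assumptions rather than details from the publication.

    ```python
    import torch
    from torch import nn

    class MaskedBehaviorModel(nn.Module):
        """Toy encoder-decoder over a 3D (agents x time x features) tensor."""

        def __init__(self, feat_dim=16, model_dim=64, num_modes=4):
            super().__init__()
            self.embed = nn.Linear(feat_dim, model_dim)
            self.mask_token = nn.Parameter(torch.zeros(model_dim))
            layer = nn.TransformerEncoderLayer(model_dim, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.decoder = nn.Linear(model_dim, num_modes * feat_dim)
            self.num_modes = num_modes
            self.feat_dim = feat_dim

        def forward(self, x, mask):
            # x: [agents, time, feat]; mask: [agents, time], True marks masked cells.
            agents, time, _ = x.shape
            h = self.embed(x)
            # Replace masked cells with a learned mask token.
            h = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(h), h)
            h = h.reshape(1, agents * time, -1)  # flatten agent/time cells into one sequence
            h = self.encoder(h)
            out = self.decoder(h)                # decode every cell, masked or observed
            # 4D output tensor: [agents, time, modes, feat]
            return out.reshape(agents, time, self.num_modes, self.feat_dim)

    model = MaskedBehaviorModel()
    x = torch.randn(3, 10, 16)       # 3 agents, 10 time steps
    mask = torch.rand(3, 10) < 0.3   # roughly 30% of cells masked
    pred = model(x, mask)            # shape: [3, 10, 4, 16]
    ```
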
  • Patent number: 10878188
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating dependency parses for input text segments, which may be provided as inputs to a natural language processing system. One of the systems includes a first neural network comprising: one or more initial neural network layers configured to, for each token in an input text sequence: receive features for the token; and collectively process the features to generate an alternative representation of the features for use in determining a part of speech of the token in the input text sequence; and a dependency parsing neural network configured to: process the alternative representations of the features for the tokens in the input text sequence generated by the one or more initial neural network layers to generate a dependency parse of the input text sequence.
    Type: Grant
    Filed: March 17, 2017
    Date of Patent: December 29, 2020
    Assignee: Google LLC
    Inventors: Yuan Zhang, David Joseph Weiss
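
    The architecture in this abstract shares the initial layers' token representations between a part-of-speech objective and a dependency parser. The sketch below illustrates that idea in Python; the layer sizes and the dot-product arc scorer are assumptions, since the patent does not commit to a particular parsing head.

    ```python
    import torch
    from torch import nn

    class SharedTokenLayers(nn.Module):
        """Initial layers: turn per-token features into an alternative representation."""
        def __init__(self, feat_dim=50, hidden_dim=128, num_tags=17):
            super().__init__()
            self.hidden = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.ReLU())
            self.tag_head = nn.Linear(hidden_dim, num_tags)  # used for the POS objective

        def forward(self, token_feats):
            reps = self.hidden(token_feats)   # [tokens, hidden_dim]
            return reps, self.tag_head(reps)

    class DependencyParser(nn.Module):
        """Scores every candidate head for every token from the shared representations."""
        def __init__(self, hidden_dim=128):
            super().__init__()
            self.head_proj = nn.Linear(hidden_dim, hidden_dim)
            self.dep_proj = nn.Linear(hidden_dim, hidden_dim)

        def forward(self, reps):
            # arc_scores[i, j] ~ score of token j being the head of token i
            return self.dep_proj(reps) @ self.head_proj(reps).T

    shared = SharedTokenLayers()
    parser = DependencyParser()

    token_feats = torch.randn(6, 50)        # one 6-token sentence
    reps, tag_logits = shared(token_feats)  # reps feed the parser, tag_logits the POS loss
    arc_scores = parser(reps)               # [6, 6]
    predicted_heads = arc_scores.argmax(dim=-1)  # each token's most likely head
    ```
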
  • Publication number: 20190073351
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating dependency parses for input text segments, which may be provided as inputs to a natural language processing system. One of the systems includes a first neural network comprising: one or more initial neural network layers configured to, for each token in an input text sequence: receive features for the token; and collectively process the features to generate an alternative representation of the features for use in determining a part of speech of the token in the input text sequence; and a dependency parsing neural network configured to: process the alternative representations of the features for the tokens in the input text sequence generated by the one or more initial neural network layers to generate a dependency parse of the input text sequence.
    Type: Application
    Filed: March 17, 2017
    Publication date: March 7, 2019
    Inventors: Yuan Zhang, David Joseph Weiss
  • Publication number: 20170372628
    Abstract: A system and associated methods are provided for generating a representation of the reading ability and general knowledge of a user, receiving information regarding a plurality of electronic documents, generating an estimate of the reading difficulty for the user of each electronic document of the plurality of electronic documents using the generated representation of the reading ability and general knowledge of the user, and presenting results based upon the estimates of the reading difficulty. The representation of the reading ability and general knowledge of a user may then be updated based, in part, upon feedback from the user regarding the presented results.
    Type: Application
    Filed: July 14, 2017
    Publication date: December 28, 2017
    Inventors: David Joseph Weiss, Eleni Miltsakaki
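
    One plausible way to read this abstract is a per-word familiarity model of the user that scores each document's difficulty and is nudged by feedback. The toy Python sketch below follows that reading; the familiarity representation, the scoring rule, and the update rule are all illustrative assumptions.

    ```python
    from collections import defaultdict

    class ReaderModel:
        """Toy representation of a user's reading ability as per-word familiarity."""

        def __init__(self):
            self.familiarity = defaultdict(lambda: 0.5)  # 0 = unknown word, 1 = well known

        def difficulty(self, document: str) -> float:
            words = document.lower().split()
            if not words:
                return 0.0
            # Higher score = harder: average unfamiliarity of the document's words.
            return sum(1.0 - self.familiarity[w] for w in words) / len(words)

        def rank(self, documents):
            # Present results ordered from easiest to hardest for this user.
            return sorted(documents, key=self.difficulty)

        def update(self, document: str, understood: bool, rate: float = 0.2):
            # Feedback nudges familiarity for the words the user just read.
            delta = rate if understood else -rate
            for w in set(document.lower().split()):
                self.familiarity[w] = min(1.0, max(0.0, self.familiarity[w] + delta))

    reader = ReaderModel()
    docs = ["the cat sat on the mat", "stochastic gradient descent converges"]
    print(reader.rank(docs))                  # easiest document first
    reader.update(docs[1], understood=True)   # feedback refines the user representation
    ```
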
  • Publication number: 20170270407
    Abstract: A method includes training a neural network having parameters on training data, in which the neural network receives an input state and processes the input state to generate a respective score for each decision in a set of decisions. The method includes receiving training data including training text sequences and, for each training text sequence, a corresponding gold decision sequence. The method includes training the neural network on the training data to determine trained values of parameters of the neural network. Training the neural network includes for each training text sequence: maintaining a beam of candidate decision sequences for the training text sequence, updating each candidate decision sequence by adding one decision at a time, determining that a gold candidate decision sequence matching a prefix of the gold decision sequence has dropped out of the beam, and in response, performing an iteration of gradient descent to optimize an objective function.
    Type: Application
    Filed: January 17, 2017
    Publication date: September 21, 2017
    Inventors: Christopher Alberti, Aliaksei Severyn, Daniel Andor, Slav Petrov, Kuzman Ganchev Ganchev, David Joseph Weiss, Michael John Collins, Alessandro Presta
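
    The training scheme in this abstract keeps a beam of partial decision sequences and takes a gradient step as soon as the gold prefix falls out of the beam (an early update). The Python sketch below illustrates that control flow with a hypothetical score_step scorer and a CRF-style loss over the beam plus the gold prefix; both of those specifics go beyond what the abstract states.

    ```python
    import torch

    def train_on_sequence(score_step, gold_decisions, params, beam_size=4, lr=0.01):
        """Early-update beam training sketch.

        score_step(prefix) -> 1-D tensor of scores, one per possible next decision.
        gold_decisions     -> list of correct decision indices for this sequence.
        """
        beam = [((), torch.tensor(0.0))]  # (decision prefix, cumulative score)
        for t, gold in enumerate(gold_decisions):
            # Update each candidate decision sequence by adding one decision at a time.
            candidates = []
            for prefix, score in beam:
                step_scores = score_step(prefix)
                for d, s in enumerate(step_scores):
                    candidates.append((prefix + (d,), score + s))
            candidates.sort(key=lambda c: c[1].item(), reverse=True)
            beam = candidates[:beam_size]

            gold_prefix = tuple(gold_decisions[: t + 1])
            if all(prefix != gold_prefix for prefix, _ in beam):
                # Gold prefix fell out of the beam: one gradient-descent iteration
                # on a CRF-style objective over the beam plus the gold prefix.
                gold_score = sum(score_step(gold_prefix[:i])[gold_prefix[i]]
                                 for i in range(t + 1))
                all_scores = torch.stack([s for _, s in beam] + [gold_score])
                loss = torch.logsumexp(all_scores, dim=0) - gold_score
                loss.backward()
                with torch.no_grad():
                    for p in params:
                        p -= lr * p.grad
                        p.grad.zero_()
                return loss.item()
        return 0.0

    # Toy usage: a linear scorer over a fixed prefix embedding (hypothetical).
    W = torch.randn(8, 3, requires_grad=True)
    def toy_scorer(prefix):
        feat = torch.zeros(8)
        feat[len(prefix) % 8] = 1.0
        return feat @ W  # scores for 3 possible next decisions

    train_on_sequence(toy_scorer, gold_decisions=[0, 2, 1, 0], params=[W], beam_size=2)
    ```
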
  • Publication number: 20150248398
    Abstract: A system and associated methods are provided for generating a representation of the reading ability and general knowledge of a user, receiving information regarding a plurality of electronic documents, generating an estimate of the reading difficulty for the user of each electronic document of the plurality of electronic documents using the generated representation of the reading ability and general knowledge of the user, and presenting results based upon the estimates of the reading difficulty. The representation of the reading ability and general knowledge of a user may then be updated based, in part, upon feedback from the user regarding the presented results.
    Type: Application
    Filed: March 2, 2015
    Publication date: September 3, 2015
    Inventors: David Joseph Weiss, Eleni Miltsakaki