Patents by Inventor Isaac Elias

Isaac Elias has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240062743
    Abstract: A method for training a non-autoregressive TTS model includes obtaining a sequence representation of an encoded text sequence concatenated with a variational embedding. The method also includes using a duration model network to predict a phoneme duration for each phoneme represented by the encoded text sequence. Based on the predicted phoneme durations, the method also includes learning an interval representation and an auxiliary attention context representation. The method also includes upsampling, using the interval representation and the auxiliary attention context representation, the sequence representation into an upsampled output specifying a number of frames. The method also includes generating, based on the upsampled output, one or more predicted mel-frequency spectrogram sequences for the encoded text sequence.
    Type: Application
    Filed: October 31, 2023
    Publication date: February 22, 2024
    Applicant: Google LLC
    Inventors: Isaac Elias, Byungha Chun, Jonathan Shen, Ye Jia, Yu Zhang, Yonghui Wu
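    Illustrative sketch (not from the patent): the abstract describes upsampling a phoneme-level sequence representation into a frame-level output using predicted phoneme durations, an interval representation, and an auxiliary attention context. The snippet below is a minimal, hedged illustration of that upsampling idea using a soft interval weighting; the function name soft_upsample, the Gaussian kernel, and the sigma parameter are assumptions, not the claimed implementation.
      # Minimal sketch of duration-based soft upsampling; NOT the patented method.
      # Interval boundaries come from predicted durations; each output frame
      # attends softly to phoneme intervals and is a weighted sum of phoneme vectors.
      import numpy as np

      def soft_upsample(phoneme_reprs, durations, sigma=1.0):
          """phoneme_reprs: (K, D) per-phoneme sequence representation.
          durations: (K,) predicted duration of each phoneme, in frames.
          Returns (T, D) frame-level features, T = round(sum(durations))."""
          ends = np.cumsum(durations)              # interval end boundaries
          starts = ends - durations                # interval start boundaries
          centers = (starts + ends) / 2.0          # interval centers
          frames = np.arange(int(round(ends[-1]))) + 0.5   # frame midpoints
          # Soft (attention-like) assignment of frames to phoneme intervals.
          logits = -((frames[:, None] - centers[None, :]) ** 2) / (2.0 * sigma ** 2)
          weights = np.exp(logits - logits.max(axis=1, keepdims=True))
          weights /= weights.sum(axis=1, keepdims=True)
          return weights @ phoneme_reprs           # (T, D)

      # Toy usage: 3 phonemes, 4-dim representations, durations of 2, 3 and 1 frames.
      frame_feats = soft_upsample(np.random.randn(3, 4), np.array([2.0, 3.0, 1.0]))
      print(frame_feats.shape)                     # (6, 4)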
  • Patent number: 11908448
    Abstract: A method for training a non-autoregressive TTS model includes receiving training data that includes a reference audio signal and a corresponding input text sequence. The method also includes encoding the reference audio signal into a variational embedding that disentangles the style/prosody information from the reference audio signal and encoding the input text sequence into an encoded text sequence. The method also includes predicting a phoneme duration for each phoneme in the input text sequence and determining a phoneme duration loss based on the predicted phoneme durations and a reference phoneme duration. The method also includes generating one or more predicted mel-frequency spectrogram sequences for the input text sequence and determining a final spectrogram loss based on the predicted mel-frequency spectrogram sequences and a reference mel-frequency spectrogram sequence. The method also includes training the TTS model based on the final spectrogram loss and the corresponding phoneme duration loss.
    Type: Grant
    Filed: May 21, 2021
    Date of Patent: February 20, 2024
    Assignee: Google LLC
    Inventors: Isaac Elias, Jonathan Shen, Yu Zhang, Ye Jia, Ron J. Weiss, Yonghui Wu, Byungha Chun
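    Illustrative sketch (not from the patent): this abstract is about training against two targets, a phoneme duration loss and a final spectrogram loss. The snippet below shows one hedged way such a combined objective could be computed; the choice of L2 for durations, L1 for spectrograms, and the dur_weight parameter are assumptions.
      # Minimal sketch of a combined duration + spectrogram training loss;
      # NOT the patented method, just an illustration of the abstract's idea.
      import numpy as np

      def duration_loss(pred_durations, ref_durations):
          # Mean squared error between predicted and reference phoneme durations.
          return float(np.mean((pred_durations - ref_durations) ** 2))

      def spectrogram_loss(pred_mels, ref_mel):
          # L1 loss averaged over one or more predicted mel-spectrogram sequences.
          return float(np.mean([np.mean(np.abs(p - ref_mel)) for p in pred_mels]))

      def total_loss(pred_durations, ref_durations, pred_mels, ref_mel, dur_weight=1.0):
          return (spectrogram_loss(pred_mels, ref_mel)
                  + dur_weight * duration_loss(pred_durations, ref_durations))

      # Toy usage: 5 phonemes; two predicted 80-bin spectrograms of 40 frames each.
      rng = np.random.default_rng(0)
      ref_mel = rng.standard_normal((40, 80))
      pred_mels = [ref_mel + 0.1 * rng.standard_normal((40, 80)) for _ in range(2)]
      print(total_loss(rng.random(5) * 10.0, rng.random(5) * 10.0, pred_mels, ref_mel))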
  • Patent number: 11823656
    Abstract: A method for training a non-autoregressive TTS model includes obtaining a sequence representation of an encoded text sequence concatenated with a variational embedding. The method also includes using a duration model network to predict a phoneme duration for each phoneme represented by the encoded text sequence. Based on the predicted phoneme durations, the method also includes learning an interval representation and an auxiliary attention context representation. The method also includes upsampling, using the interval representation and the auxiliary attention context representation, the sequence representation into an upsampled output specifying a number of frames. The method also includes generating, based on the upsampled output, one or more predicted mel-frequency spectrogram sequences for the encoded text sequence.
    Type: Grant
    Filed: May 21, 2021
    Date of Patent: November 21, 2023
    Assignee: Google LLC
    Inventors: Isaac Elias, Byungha Chun, Jonathan Shen, Ye Jia, Yu Zhang, Yonghui Wu
  • Publication number: 20220301543
    Abstract: A method for training a non-autoregressive TTS model includes obtaining a sequence representation of an encoded text sequence concatenated with a variational embedding. The method also includes using a duration model network to predict a phoneme duration for each phoneme represented by the encoded text sequence. Based on the predicted phoneme durations, the method also includes learning an interval representation and an auxiliary attention context representation. The method also includes upsampling, using the interval representation and the auxiliary attention context representation, the sequence representation into an upsampled output specifying a number of frames. The method also includes generating, based on the upsampled output, one or more predicted mel-frequency spectrogram sequences for the encoded text sequence.
    Type: Application
    Filed: May 21, 2021
    Publication date: September 22, 2022
    Applicant: Google LLC
    Inventors: Isaac Elias, Byungha Chun, Jonathan Shen, Ye Jia, Yu Zhang, Yonghui Wu
  • Publication number: 20220122582
    Abstract: A method for training a non-autoregressive TTS model includes receiving training data that includes a reference audio signal and a corresponding input text sequence. The method also includes encoding the reference audio signal into a variational embedding that disentangles the style/prosody information from the reference audio signal and encoding the input text sequence into an encoded text sequence. The method also includes predicting a phoneme duration for each phoneme in the input text sequence and determining a phoneme duration loss based on the predicted phoneme durations and a reference phoneme duration. The method also includes generating one or more predicted mel-frequency spectrogram sequences for the input text sequence and determining a final spectrogram loss based on the predicted mel-frequency spectrogram sequences and a reference mel-frequency spectrogram sequence. The method also includes training the TTS model based on the final spectrogram loss and the corresponding phoneme duration loss.
    Type: Application
    Filed: May 21, 2021
    Publication date: April 21, 2022
    Applicant: Google LLC
    Inventors: Isaac Elias, Jonathan Shen, Yu Zhang, Ye Jia, Ron J. Weiss, Yonghui Wu, Byungha Chun
  • Publication number: 20220108680
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for synthesizing audio data from text data using duration prediction. One of the methods includes processing an input text sequence that includes a respective text element at each of multiple input time steps using a first neural network to generate a modified input sequence comprising, for each input time step, a representation of the corresponding text element in the input text sequence; processing the modified input sequence using a second neural network to generate, for each input time step, a predicted duration of the corresponding text element in the output audio sequence; upsampling the modified input sequence according to the predicted durations to generate an intermediate sequence comprising a respective intermediate element at each of a plurality of intermediate time steps; and generating an output audio sequence using the intermediate sequence.
    Type: Application
    Filed: October 1, 2021
    Publication date: April 7, 2022
    Inventors: Yu Zhang, Isaac Elias, Byungha Chun, Ye Jia, Yonghui Wu, Mike Chrzanowski, Jonathan Shen
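    Illustrative sketch (not from the patent): here the second neural network predicts a duration per text element and the modified input sequence is upsampled according to those durations. The snippet below shows the upsampling step only, with both networks stubbed out; rounding predicted durations to integer frame counts is an assumption.
      # Minimal sketch of duration-based upsampling by repetition; NOT the
      # claimed implementation, just an illustration of the abstract's step.
      import numpy as np

      def upsample_by_duration(element_reprs, pred_durations):
          """element_reprs: (N, D) representation per input text element.
          pred_durations: (N,) predicted duration (in frames) per element.
          Returns (T, D): each representation repeated for its duration."""
          counts = np.maximum(np.rint(pred_durations).astype(int), 0)
          return np.repeat(element_reprs, counts, axis=0)

      # Toy usage: 4 text elements with 3-dim representations.
      reprs = np.arange(12, dtype=float).reshape(4, 3)
      intermediate = upsample_by_duration(reprs, np.array([1.2, 0.0, 2.6, 1.0]))
      print(intermediate.shape)   # (5, 3): rows repeated 1, 0, 3 and 1 times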
  • Patent number: 9684432
    Abstract: Systems and methods are provided for adding and displaying interactive annotations for existing online hosted videos. A graphical annotation interface allows the creation of annotations and association of the annotations with a video. Annotations may be of different types and have different functionality, such as altering the appearance and/or behavior of an existing video, e.g. by supplementing it with text, allowing linking to other videos or web pages, or pausing playback of the video. Authentication of a user desiring to perform annotation of a video may be performed in various manners, such as by checking a uniform resource locator (URL) against an existing list, checking a user identifier against an access list, and the like. As a result of authentication, a user is accorded the appropriate annotation abilities, such as full annotation, no annotation, or annotation restricted to a particular temporal or spatial portion of the video.
    Type: Grant
    Filed: September 10, 2013
    Date of Patent: June 20, 2017
    Assignee: Google Inc.
    Inventors: Michael Fink, Ryan Junee, Sigalit Bar, Aviad Barzilai, Isaac Elias, Julian Frumar, Herbert Ho, Nir Kerem, Simon Ratner, Jasson Arthur Schrock, Ran Tavory, Virginia Wang, Leora Wiseman, Shanmugavelayutham Muthukrishnan, Mihai Badoiu, Ankur Bhargava, Igor Kofman
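    Illustrative sketch (not from the patent): the abstract describes deciding a user's annotation abilities by checking a URL against an existing list and a user identifier against an access list, then according full, no, or restricted annotation. The snippet below is a hedged illustration of that decision flow; the data structures, identifiers, and the 30-second restriction are hypothetical.
      # Minimal sketch of an annotation-permission check; NOT the patented system.
      from dataclasses import dataclass
      from typing import Optional, Tuple

      @dataclass
      class AnnotationAbility:
          level: str                                        # "full", "none", or "restricted"
          time_range: Optional[Tuple[float, float]] = None  # restricted temporal portion (seconds)

      AUTHORIZED_URLS = {"https://video.example.com/v/abc123"}   # hypothetical URL list
      ACCESS_LIST = {"abc123": {"owner_id": "full", "collab_id": "restricted"}}

      def annotation_ability(video_id, video_url, user_id):
          if video_url not in AUTHORIZED_URLS:
              return AnnotationAbility("none")
          grant = ACCESS_LIST.get(video_id, {}).get(user_id, "none")
          if grant == "restricted":
              # Example restriction: annotation allowed only in the first 30 seconds.
              return AnnotationAbility("restricted", time_range=(0.0, 30.0))
          return AnnotationAbility(grant)

      print(annotation_ability("abc123", "https://video.example.com/v/abc123", "collab_id"))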
  • Patent number: 8868591
    Abstract: The present invention relates to the identification of alternative suggestions which potentially improve on a given query suggestion, without being perceived by a user as being offensively different from the user's query. The alternative suggestions may for example be different query formulations that relate to the same topic as that of the given query suggestion. The technology disclosed uses similarity screening of the given query suggestion against unique queries which do not include the given query suggestion as a prefix, in conjunction with query utility scores representing prior user response to the unique queries.
    Type: Grant
    Filed: September 21, 2011
    Date of Patent: October 21, 2014
    Assignee: Google Inc.
    Inventors: Lev Finkelstein, Artiom Myaskouvskey, Shaul Markovitch, Tomer Shmiel, Eran Ofek, Isaac Elias
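    Illustrative sketch (not from the patent): the abstract combines two signals, similarity screening of candidate queries against the given suggestion and query utility scores derived from prior user response. The snippet below is a hedged illustration of how those signals could be combined; the Jaccard token similarity, the 0.2 threshold, and the utility values are assumptions.
      # Minimal sketch of similarity-screened alternative suggestions; NOT the
      # patented algorithm, just an illustration of the two signals it names.
      def token_similarity(a, b):
          ta, tb = set(a.lower().split()), set(b.lower().split())
          return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

      def alternative_suggestions(given, unique_queries, min_sim=0.2):
          """unique_queries maps query text -> utility score (prior user response).
          Keeps queries that are not prefixed by the given suggestion, pass the
          similarity screen, and have higher utility than the given suggestion."""
          base_utility = unique_queries.get(given, 0.0)
          candidates = [(q, s) for q, s in unique_queries.items()
                        if not q.startswith(given)                 # drop prefix completions
                        and token_similarity(given, q) >= min_sim  # similarity screening
                        and s > base_utility]                      # potential improvement
          return sorted(candidates, key=lambda x: x[1], reverse=True)

      queries = {"cheap flights": 0.20, "cheap flights to paris": 0.30,
                 "low cost flights": 0.35, "hotel deals": 0.50}
      print(alternative_suggestions("cheap flights", queries))  # [("low cost flights", 0.35)]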
  • Patent number: 8826357
    Abstract: Systems and methods are provided for adding and displaying interactive annotations for existing online hosted videos. A graphical annotation interface allows the creation of annotations and association of the annotations with a video. Annotations may be of different types and have different functionality, such as altering the appearance and/or behavior of an existing video, e.g. by supplementing it with text, allowing linking to other videos or web pages, or pausing playback of the video. Authentication of a user desiring to perform annotation of a video may be performed in various manners, such as by checking a uniform resource locator (URL) against an existing list, checking a user identifier against an access list, and the like. As a result of authentication, a user is accorded the appropriate annotation abilities, such as full annotation, no annotation, or annotation restricted to a particular temporal or spatial portion of the video.
    Type: Grant
    Filed: February 19, 2009
    Date of Patent: September 2, 2014
    Assignee: Google Inc.
    Inventors: Michael Fink, Ryan Junee, Sigalit Bar, Aviad Barzilai, Isaac Elias, Julian Frumar, Herbert Ho, Nir Kerem, Simon Ratner, Jasson Arthur Schrock, Ran Tavory
  • Publication number: 20140115476
    Abstract: Systems and methods are provided for adding and displaying interactive annotations for existing online hosted videos. A graphical annotation interface allows the creation of annotations and association of the annotations with a video. Annotations may be of different types and have different functionality, such as altering the appearance and/or behavior of an existing video, e.g. by supplementing it with text, allowing linking to other videos or web pages, or pausing playback of the video. Authentication of a user desiring to perform annotation of a video may be performed in various manners, such as by checking a uniform resource locator (URL) against an existing list, checking a user identifier against an access list, and the like. As a result of authentication, a user is accorded the appropriate annotation abilities, such as full annotation, no annotation, or annotation restricted to a particular temporal or spatial portion of the video.
    Type: Application
    Filed: December 28, 2013
    Publication date: April 24, 2014
    Applicant: Google Inc.
    Inventors: Michael Fink, Ryan Junee, Sigalit Bar, Aviad Barzilai, Isaac Elias, Julian Frumar, Herbert Ho, Nir Kerem, Simon Ratner, Jasson Arthur Schrock, Ran Tavory
  • Publication number: 20140019862
    Abstract: Systems and methods are provided for adding and displaying interactive annotations for existing online hosted videos. A graphical annotation interface allows the creation of annotations and association of the annotations with a video. Annotations may be of different types and have different functionality, such as altering the appearance and/or behavior of an existing video, e.g. by supplementing it with text, allowing linking to other videos or web pages, or pausing playback of the video. Authentication of a user desiring to perform annotation of a video may be performed in various manners, such as by checking a uniform resource locator (URL) against an existing list, checking a user identifier against an access list, and the like. As a result of authentication, a user is accorded the appropriate annotation abilities, such as full annotation, no annotation, or annotation restricted to a particular temporal or spatial portion of the video.
    Type: Application
    Filed: September 10, 2013
    Publication date: January 16, 2014
    Applicant: Google Inc.
    Inventors: Michael Fink, Ryan Junee, Sigalit Bar, Aviad Barzilai, Isaac Elias, Julian Frumar, Herbert Ho, Nir Kerem, Simon Ratner, Jasson Arthur Schrock, Ran Tavory, Virginia Wang, Leora Wiseman, Shanmugavelayutham Muthukrishnan, Mihai Badoiu, Ankur Bhargava, Igor Kofman
  • Patent number: 8566353
    Abstract: Systems and methods are provided for adding and displaying interactive annotations for existing online hosted videos. A graphical annotation interface allows the creation of annotations and association of the annotations with a video. Annotations may be of different types and have different functionality, such as altering the appearance and/or behavior of an existing video, e.g. by supplementing it with text, allowing linking to other videos or web pages, or pausing playback of the video. Authentication of a user desiring to perform annotation of a video may be performed in various manners, such as by checking a uniform resource locator (URL) against an existing list, checking a user identifier against an access list, and the like. As a result of authentication, a user is accorded the appropriate annotation abilities, such as full annotation, no annotation, or annotation restricted to a particular temporal or spatial portion of the video.
    Type: Grant
    Filed: February 18, 2009
    Date of Patent: October 22, 2013
    Assignee: Google Inc.
    Inventors: Michael Fink, Ryan Junee, Sigalit Bar, Aviad Barzilai, Isaac Elias, Julian Frumar, Herbert Ho, Nir Kerem, Simon Ratner, Jasson Arthur Schrock, Ran Tavory, Virginia Wang, Leora Wiseman, Shanmugavelayutham Muthukrishnan, Mihai Badoiu, Ankur Bhargava, Igor Kofman
  • Publication number: 20090297118
    Abstract: Systems and methods are provided for adding and displaying interactive annotations for existing online hosted videos. A graphical annotation interface allows the creation of annotations and association of the annotations with a video. Annotations may be of different types and have different functionality, such as altering the appearance and/or behavior of an existing video, e.g. by supplementing it with text, allowing linking to other videos or web pages, or pausing playback of the video. Authentication of a user desiring to perform annotation of a video may be performed in various manners, such as by checking a uniform resource locator (URL) against an existing list, checking a user identifier against an access list, and the like. As a result of authentication, a user is accorded the appropriate annotation abilities, such as full annotation, no annotation, or annotation restricted to a particular temporal or spatial portion of the video.
    Type: Application
    Filed: February 19, 2009
    Publication date: December 3, 2009
    Applicant: Google Inc.
    Inventors: Michael Fink, Ryan Junee, Sigalit Bar, Aviad Barzilai, Isaac Elias, Julian Frumar, Herbert Ho, Nir Kerem, Simon Ratner, Jasson Arthur Schrock, Ran Tavory
  • Publication number: 20090300475
    Abstract: Systems and methods are provided for adding and displaying interactive annotations for existing online hosted videos. A graphical annotation interface allows the creation of annotations and association of the annotations with a video. Annotations may be of different types and have different functionality, such as altering the appearance and/or behavior of an existing video, e.g. by supplementing it with text, allowing linking to other videos or web pages, or pausing playback of the video. Authentication of a user desiring to perform annotation of a video may be performed in various manners, such as by checking a uniform resource locator (URL) against an existing list, checking a user identifier against an access list, and the like. As a result of authentication, a user is accorded the appropriate annotation abilities, such as full annotation, no annotation, or annotation restricted to a particular temporal or spatial portion of the video.
    Type: Application
    Filed: February 18, 2009
    Publication date: December 3, 2009
    Applicant: Google Inc.
    Inventors: Michael Fink, Ryan Junee, Sigalit Bar, Aviad Barzilai, Isaac Elias, Julian Frumar, Herbert Ho, Nir Kerem, Simon Ratner, Jasson Arthur Schrock, Ran Tavory