Patents by Inventor Jan VAINER

Jan VAINER is named as an inventor on the patent filings listed below. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). An illustrative sketch of the paired-record training setup described in the abstracts appears after the listing.

  • Patent number: 11848005
    Abstract: There is provided a computer-implemented method of training a speech-to-speech (S2S) machine learning (ML) model for adapting at least one voice attribute of speech, comprising: creating an S2S training dataset of a plurality of S2S records, wherein an S2S record comprises: a first audio content comprising speech having at least one first voice attribute, and a ground truth label of a second audio content comprising speech having at least one second voice attribute, wherein the first audio content and the second audio content have the same lexical content and are time-synchronized, and training the S2S ML model using the S2S training dataset, wherein the S2S ML model is fed an input of a source audio content with at least one source voice attribute and generates an outcome of the source audio content with at least one target voice attribute.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: December 19, 2023
    Assignee: Meaning.Team, Inc
    Inventors: Yishay Carmiel, Lukasz Wojciak, Piotr Zelasko, Jan Vainer, Tomas Nekvinda, Ondrej Platek
  • Publication number: 20230352001
    Abstract: There is provided a computer-implemented method of training a speech-to-speech (S2S) machine learning (ML) model for adapting at least one voice attribute of speech, comprising: creating an S2S training dataset of a plurality of S2S records, wherein an S2S record comprises: a first audio content comprising speech having at least one first voice attribute, and a ground truth label of a second audio content comprising speech having at least one second voice attribute, wherein the first audio content and the second audio content have the same lexical content and are time-synchronized, and training the S2S ML model using the S2S training dataset, wherein the S2S ML model is fed an input of a source audio content with at least one source voice attribute and generates an outcome of the source audio content with at least one target voice attribute.
    Type: Application
    Filed: April 28, 2022
    Publication date: November 2, 2023
    Applicant: Meaning.Team, Inc
    Inventors: Yishay CARMIEL, Lukasz WOJCIAK, Piotr ZELASKO, Jan VAINER, Tomas NEKVINDA, Ondrej PLATEK
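
The abstracts describe training a speech-to-speech model on records that pair source audio (with one voice attribute) against time-synchronized ground-truth audio carrying the target voice attribute and the same lexical content. The sketch below is a minimal, hypothetical illustration of that data structure and a frame-wise training loop; all names (S2SRecord, VoiceConversionModel, train_epoch), the feature dimensions, and the choice of an L1 reconstruction loss are assumptions for illustration only and are not taken from the patent.

    # Hypothetical sketch of the paired-record training setup described in the
    # abstracts above. All identifiers and hyperparameters are illustrative,
    # not drawn from the patent or any specific codebase.
    from dataclasses import dataclass

    import torch
    import torch.nn as nn


    @dataclass
    class S2SRecord:
        """One training pair: same lexical content, time-synchronized audio."""
        source_audio: torch.Tensor   # speech with the source voice attribute(s)
        target_audio: torch.Tensor   # ground-truth speech with the target voice attribute(s)


    class VoiceConversionModel(nn.Module):
        """Toy stand-in for the S2S ML model: maps source features to target-attribute features."""

        def __init__(self, frame_dim: int = 80, hidden_dim: int = 256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(frame_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, frame_dim),
            )

        def forward(self, source: torch.Tensor) -> torch.Tensor:
            # source: (frames, frame_dim) spectrogram-like features
            return self.net(source)


    def train_epoch(model: nn.Module, records: list[S2SRecord], lr: float = 1e-4) -> float:
        """Fit the model so its output on source audio matches the paired ground truth."""
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        total = 0.0
        for rec in records:
            optimizer.zero_grad()
            predicted = model(rec.source_audio)
            # Time-synchronized pairs permit a simple frame-wise reconstruction loss.
            loss = nn.functional.l1_loss(predicted, rec.target_audio)
            loss.backward()
            optimizer.step()
            total += loss.item()
        return total / max(len(records), 1)


    if __name__ == "__main__":
        # Randomly generated dummy features stand in for real paired recordings.
        records = [S2SRecord(torch.randn(120, 80), torch.randn(120, 80)) for _ in range(4)]
        model = VoiceConversionModel()
        print(f"mean L1 loss: {train_epoch(model, records):.4f}")

In this toy setup the time synchronization of each pair is what allows a direct frame-by-frame loss; the claimed method covers the broader idea of training on such paired records, and a production system would presumably use a more capable sequence model and audio representation.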