Patents by Inventor Lev Haikin

Lev Haikin is named as an inventor on the patent filings listed below. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Brief, non-authoritative code sketches illustrating several of the listed approaches follow the listing.

  • Publication number: 20230144379
    Abstract: A system and method of automatically discovering unigrams in a speech data element may include receiving a language model that includes a plurality of n-grams, where each n-gram includes one or more unigrams; applying an acoustic machine-learning (ML) model on one or more speech data elements to obtain a character distribution function; applying a greedy decoder on the character distribution function, to predict an initial corpus of unigrams; filtering out one or more unigrams of the initial corpus to obtain a corpus of candidate unigrams, where the candidate unigrams are not included in the language model; analyzing the one or more speech data elements, to extract at least one n-gram that comprises a candidate unigram; and updating the language model to include the extracted at least one n-gram.
    Type: Application
    Filed: November 8, 2021
    Publication date: May 11, 2023
    Applicant: GENESYS CLOUD SERVICES, INC.
    Inventors: Lev Haikin, Arnon Mazza, Eyal Orbach, Avraham Faizakof
  • Patent number: 11645460
    Abstract: A first text corpus comprising punctuated and capitalized text is received. The words in the first text corpus are then annotated with a set of labels indicating a punctuation and a capitalization of each word. At an initial training stage, a machine learning model is trained on a first training set using the annotated words from the first text corpus and the labels. A second text corpus is received representing conversational speech. The words in the second text corpus are then annotated with the set of labels. In a re-training stage, the machine learning model is re-trained on a second training set comprising the annotated words from the second text corpus, and the labels. At an inference stage, the trained machine learning model is applied to a target set of words representing conversational speech to predict a punctuation and capitalization of each word in the target set.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: May 9, 2023
    Inventors: Avraham Faizakof, Arnon Mazza, Lev Haikin, Eyal Orbach
  • Publication number: 20220382982
    Abstract: A method and system for automatic topic detection in text may include receiving a text document of a corpus of documents and extracting one or more phrases from the document, based on one or more syntactic patterns. For each phrase, embodiments of the invention may: apply a word embedding neural network on one or more words of the phrase, to obtain one or more respective word embedding vectors; calculate a weighted phrase embedding vector, and compute a phrase saliency score, based on the weighted phrase embedding vector. Embodiments of the invention may subsequently produce one or more topic labels, representing one or more respective topics in the document, based on the computed phrase saliency scores, and may select one or more topic labels according to their relevance to the business domain of the corpus.
    Type: Application
    Filed: May 12, 2021
    Publication date: December 1, 2022
    Applicant: GENESYS CLOUD SERVICES, INC.
    Inventors: Eyal Orbach, Avraham Faizakof, Arnon Mazza, Lev Haikin
  • Publication number: 20220366197
    Abstract: A method and system for finetuning automated sentiment classification by at least one processor may include: receiving a first machine learning (ML) model M0, pretrained to perform automated sentiment classification of utterances, based on a first annotated training dataset; associating one or more instances of model M0 to one or more corresponding sites; and for one or more (e.g., each) ML model M0 instance and/or site: receiving at least one utterance via the corresponding site; obtaining at least one data element of annotated feedback, corresponding to the at least one utterance; retraining the ML model M0, to produce a second ML model Mi, based on a second annotated training dataset, wherein the second annotated training dataset may include the first annotated training dataset and the at least one annotated feedback data element; and using the second ML model Mi, to classify utterances according to one or more sentiment classes.
    Type: Application
    Filed: May 12, 2021
    Publication date: November 17, 2022
    Applicant: GENESYS CLOUD SERVICES, INC.
    Inventors: Arnon Mazza, Lev Haikin, Eyal Orbach, Avraham Faizakof
  • Publication number: 20220208176
    Abstract: A method comprising: receiving a first text corpus comprising punctuated and capitalized text; annotating words in said first text corpus with a set of labels indicating a punctuation and a capitalization of each word; at an initial training stage, training a machine learning model on a first training set comprising: (i) said annotated words in said first text corpus, and (ii) said labels; receiving a second text corpus representing conversational speech; annotating words in said second text corpus with said set of labels; at a re-training stage, re-training said machine learning model on a second training set comprising: (iii) said annotated words in said second text corpus, and (iv) said labels; and at an inference stage, applying said trained machine learning model to a target set of words representing conversational speech, to predict a punctuation and capitalization of each word in said target set.
    Type: Application
    Filed: December 28, 2020
    Publication date: June 30, 2022
    Applicant: GENESYS TELECOMMUNICATIONS LABORATORIES, INC.
    Inventors: Avraham Faizakof, Arnon Mazza, Lev Haikin, Eyal Orbach
  • Patent number: 11341986
    Abstract: A method comprising: receiving a plurality of audio segments comprising a speech signal, wherein said audio segments represent a plurality of verbal interactions; receiving labels associated with an emotional state expressed in each of said audio segments; dividing each of said audio segments into a plurality of frames, based on a specified frame duration; extracting a plurality of acoustic features from each of said frames; computing statistics over said acoustic features with respect to sequences of frames representing phoneme boundaries in said audio segments; at a training stage, training a machine learning model on a training set comprising: said statistics associated with said audio segments, and said labels; and at an inference stage, applying said trained model to one or more target audio segments comprising a speech signal, to detect an emotional state expressed in said target audio segments.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: May 24, 2022
    Inventors: Avraham Faizakof, Lev Haikin, Yochai Konig, Arnon Mazza
  • Publication number: 20210193169
    Abstract: A method comprising: receiving a plurality of audio segments comprising a speech signal, wherein said audio segments represent a plurality of verbal interactions; receiving labels associated with an emotional state expressed in each of said audio segments; dividing each of said audio segments into a plurality of frames, based on a specified frame duration; extracting a plurality of acoustic features from each of said frames; computing statistics over said acoustic features with respect to sequences of frames representing phoneme boundaries in said audio segments; at a training stage, training a machine learning model on a training set comprising: said statistics associated with said audio segments, and said labels; and at an inference stage, applying said trained model to one or more target audio segments comprising a speech signal, to detect an emotional state expressed in said target audio segments.
    Type: Application
    Filed: December 20, 2019
    Publication date: June 24, 2021
    Inventors: Avraham Faizakof, Lev Haikin, Yochai Konig, Arnon Mazza
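
Illustrative code sketches (not from the patent filings)

The sketch below, in Python, illustrates the general flow described in publication 20230144379: greedily decode a per-frame character distribution, keep frequent decoded words that are absent from the language model's vocabulary, and collect the n-grams that contain them. The function names, the CTC-style blank handling, and the frequency threshold are assumptions for illustration only, not the patented implementation.

    from collections import Counter

    import numpy as np


    def greedy_ctc_decode(char_probs: np.ndarray, alphabet: str, blank: int = 0) -> str:
        """Collapse a per-frame character distribution (shape T x (len(alphabet) + 1))
        into text by taking the argmax per frame, merging repeats, and dropping blanks."""
        best = char_probs.argmax(axis=1)
        chars, prev = [], None
        for idx in best:
            if idx != blank and idx != prev:
                chars.append(alphabet[idx - 1])
            prev = idx
        return "".join(chars)


    def candidate_unigrams(transcripts: list[str], lm_vocab: set[str], min_count: int = 2) -> set[str]:
        """Keep decoded words that recur often enough but are absent from the language model."""
        counts = Counter(w for t in transcripts for w in t.split())
        return {w for w, c in counts.items() if c >= min_count and w not in lm_vocab}


    def contexts(transcript: str, candidates: set[str], n: int = 3) -> list[tuple[str, ...]]:
        """Pull every n-gram from a decoded transcript that contains a candidate unigram,
        so the language model can later be updated with the new word in context."""
        words = transcript.split()
        return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)
                if candidates & set(words[i:i + n])]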
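
Patent 11645460 and publication 20220208176 describe annotating each word of a punctuated, capitalized corpus with punctuation and capitalization labels before training a sequence model. The minimal sketch below shows one plausible form of that annotation step; the label names and the restriction to three punctuation marks are assumptions, and the two-stage training itself is not shown.

    # Hypothetical per-word annotation: lower-case each word and pair it with
    # labels for its trailing punctuation and its original capitalization.
    PUNCT_LABELS = {".": "PERIOD", ",": "COMMA", "?": "QUESTION"}


    def annotate(text: str) -> list[tuple[str, str, str]]:
        """Return (lower-cased word, punctuation label, capitalization label) triples."""
        annotated = []
        for token in text.split():
            word = token.rstrip(".,?")
            punct = PUNCT_LABELS.get(token[len(word):len(word) + 1], "NONE")
            if word.isupper() and len(word) > 1:
                cap = "ALL_CAPS"
            elif word[:1].isupper():
                cap = "INIT_CAP"
            else:
                cap = "LOWER"
            annotated.append((word.lower(), punct, cap))
        return annotated


    print(annotate("Hello, how are you today?"))
    # [('hello', 'COMMA', 'INIT_CAP'), ('how', 'NONE', 'LOWER'), ..., ('today', 'QUESTION', 'LOWER')]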
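
For publication 20220382982, the sketch below shows one way a weighted phrase embedding and a phrase saliency score could be computed: a weighted average of word vectors, scored by cosine similarity against a document-level vector. The toy embedding table, the weighting scheme, and the use of cosine similarity are assumptions; the patented scoring may differ.

    import numpy as np


    def phrase_embedding(phrase: list[str], embed: dict[str, np.ndarray],
                         weight: dict[str, float]) -> np.ndarray:
        """Weighted average of the phrase's word vectors (weights could be, e.g.,
        inverse word frequencies)."""
        vecs = [weight.get(w, 1.0) * embed[w] for w in phrase if w in embed]
        return np.mean(vecs, axis=0)


    def saliency(phrase_vec: np.ndarray, doc_vec: np.ndarray) -> float:
        """Cosine similarity of the phrase embedding to the document embedding."""
        return float(np.dot(phrase_vec, doc_vec) /
                     (np.linalg.norm(phrase_vec) * np.linalg.norm(doc_vec) + 1e-9))


    # Example: score a candidate phrase against a document-level vector.
    embed = {"billing": np.array([0.9, 0.1]), "issue": np.array([0.7, 0.3])}
    weight = {"billing": 2.0, "issue": 1.0}
    doc_vec = np.array([0.8, 0.2])
    print(saliency(phrase_embedding(["billing", "issue"], embed, weight), doc_vec))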
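
Publication 20220366197 describes retraining a pretrained sentiment model per site on the union of the original training data and site-specific annotated feedback. The sketch below mimics that loop with scikit-learn stand-ins (TF-IDF plus logistic regression); the model family, the toy data, and the function names are assumptions, not the system described in the filing.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # M0: a base model pretrained on a shared, annotated utterance corpus (toy data here).
    base_texts = ["i love this service", "this is terrible", "thanks, that helped"]
    base_labels = ["positive", "negative", "positive"]
    m0 = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    m0.fit(base_texts, base_labels)


    def retrain_for_site(site_feedback: list[tuple[str, str]]):
        """Produce a site-specific model Mi by retraining on the union of the base
        corpus and the site's annotated feedback (the second training set)."""
        texts = base_texts + [utterance for utterance, _ in site_feedback]
        labels = base_labels + [label for _, label in site_feedback]
        mi = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        mi.fit(texts, labels)
        return mi


    mi = retrain_for_site([("agent was unhelpful", "negative")])
    print(mi.predict(["the agent was unhelpful"]))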
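
Patent 11341986 and publication 20210193169 describe dividing audio into fixed-duration frames, extracting acoustic features per frame, and computing statistics over the frames spanning each phoneme before training a classifier. The sketch below illustrates those three steps with two simple features (log energy and zero-crossing rate) and mean/standard-deviation statistics; the frame length, the features, and the statistics are assumptions.

    import numpy as np


    def frame_signal(signal: np.ndarray, sr: int, frame_ms: float = 25.0) -> np.ndarray:
        """Split a mono speech signal into non-overlapping frames of a fixed duration."""
        frame_len = int(sr * frame_ms / 1000)
        n_frames = len(signal) // frame_len
        return signal[: n_frames * frame_len].reshape(n_frames, frame_len)


    def frame_features(frames: np.ndarray) -> np.ndarray:
        """Two simple per-frame acoustic features: log energy and zero-crossing rate."""
        energy = np.log(np.sum(frames ** 2, axis=1) + 1e-9)
        zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
        return np.stack([energy, zcr], axis=1)


    def phoneme_statistics(features: np.ndarray, boundaries: list[tuple[int, int]]) -> np.ndarray:
        """Mean and standard deviation of each feature over every phoneme's frame span."""
        stats = [np.concatenate([features[a:b].mean(axis=0), features[a:b].std(axis=0)])
                 for a, b in boundaries]
        return np.vstack(stats)

Per the abstracts, vectors like these would then be aggregated per audio segment and paired with the emotion labels to form the training set for the classifier.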