Patents by Inventor Abedelkader ASI
Abedelkader ASI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12374321
Abstract: The disclosure herein describes reducing training bias in outputs generated by a generative language model. A communication segment associated with a communication is obtained by at least one processor of a generative language model. An output value associated with the communication segment is generated by the generative language model. The output value is mapped to a set of training bias values associated with the generative language model and based on the mapping of the output value to a training bias value of the set of training bias values, an alternative output value is generated. The alternative output value is used in a generated segment output for the communication segment. The accuracy of segment outputs generated by the generative language model is improved through reducing or eliminating its training biases.
Type: Grant
Filed: June 8, 2021
Date of Patent: July 29, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Abedelkader Asi, Yarin Kuper, Royi Ronen, Song Wang, Olga Goldenberg, Shimrit Rada Bemis, Erez Altus, Yi Mao, Weizhu Chen
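For a concrete picture of the bias-mapping step this abstract describes, the sketch below checks a model output against a set of known training-bias values and regenerates when it matches. The generate() callable, the example bias values, and the retry strategy are illustrative assumptions, not details taken from the patent.

```python
# A minimal sketch of the check-and-regenerate loop, assuming a user-supplied
# generate(segment, temperature=...) callable and an invented bias list.

# Output values the model tends to produce regardless of input, i.e. values
# suspected to reflect training bias rather than the communication segment.
KNOWN_TRAINING_BIAS_VALUES = {
    "thank you for calling",
    "have a nice day",
    "i'm not sure",
}

def debiased_segment_output(segment: str, generate, max_retries: int = 3) -> str:
    """Generate an output for a segment, replacing outputs that map to a
    known training-bias value with an alternative output."""
    output = generate(segment)
    retries = 0
    # Map the output against the training-bias values; if it matches, ask the
    # model for an alternative, e.g. with different sampling settings.
    while output.strip().lower() in KNOWN_TRAINING_BIAS_VALUES and retries < max_retries:
        output = generate(segment, temperature=0.9)  # perturb decoding to escape the bias
        retries += 1
    return output

if __name__ == "__main__":
    def stub_generate(segment, temperature=0.2):
        # Stand-in generator: biased output at low temperature, grounded output otherwise.
        return "thank you for calling" if temperature < 0.5 else "Customer asked about a refund."

    print(debiased_segment_output("Agent: I can refund the order today.", stub_generate))
```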
-
Patent number: 12299081
Abstract: A large language model predicts a topic summarization of a long document given a set of segments from the long document that pertain to a topic of interest. The set of segments from the long document are selected by searching for similar segments from other documents that have been labeled to indicate whether or not the segment pertains to a topic of interest. The search is based on an embedding of a segment from the long document closely matching embeddings of the labeled segments. Each segment of the long document is scored based on the labels of the closest-matching similar segments. The segments from the long document are ranked by their respective score and the highest-scored segments are included in a prompt to the large language model for the model to generate a topic summarization of the long document.
Type: Grant
Filed: March 3, 2024
Date of Patent: May 13, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Abedelkader Asi, Roy Eisenstadt, Rotem Rina Preizler
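The segment-selection flow in this abstract can be approximated in a few lines: score each segment of the long document by the labels of its nearest labeled neighbors in embedding space, then prompt the model with the top-ranked segments. The cosine scoring, the value of k, and the prompt wording below are assumptions for illustration only; the patent does not publish an implementation.

```python
# A rough sketch of scoring document segments by the labels of their nearest
# labeled neighbors, then building a topic-summarization prompt from the
# highest-scoring segments. Embeddings and labels are assumed to be provided.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def score_segments(doc_embeddings, labeled_embeddings, labels, k=5):
    """Score each document segment by the fraction of its k nearest labeled
    segments that are labeled as pertaining to the topic of interest."""
    scores = []
    for emb in doc_embeddings:
        sims = [cosine(emb, labeled) for labeled in labeled_embeddings]
        nearest = np.argsort(sims)[-k:]          # indices of the k most similar labeled segments
        scores.append(sum(labels[i] for i in nearest) / k)
    return scores

def build_prompt(doc_segments, scores, topic, top_n=3):
    """Put the highest-scoring segments into a prompt for the large language model."""
    ranked = sorted(zip(scores, doc_segments), reverse=True)[:top_n]
    chosen = "\n".join(segment for _, segment in ranked)
    return f"Summarize what the following excerpts say about '{topic}':\n{chosen}"
```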
-
Publication number: 20250133042
Abstract: A disclosed method facilitates AI-generation of a customized email per a methodology that significantly reduces the risk of the customized email including hallucinated facts or undesirable personal identity information (PII). The method includes identifying an email template and a recipient identifier that identifies a recipient of the customized email based on user inputs to an email application; mining contextual data stored in association with the recipient identifier; generating a large language model (LLM) prompt based on the email template and the contextual data; providing the LLM prompt as input to a trained large language model (LLM); receiving the customized email as an output from the LLM; and returning the customized email to the email application for display within a user interface.
Type: Application
Filed: October 23, 2023
Publication date: April 24, 2025
Inventors: Alexander TSVETKOV, Abedelkader ASI, Roy EISENSTADT, Royi RONEN
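The prompt-assembly step described here lends itself to a short sketch: gather the contextual data stored under the recipient identifier, fold it into a prompt together with the email template, and constrain the model to the listed facts. The context_store dictionary, the chat_complete() callable, and the guardrail wording are placeholders, not the published method.

```python
# A schematic sketch of building the LLM prompt from an email template and
# mined contextual data, assuming a hypothetical chat_complete() callable.

def generate_customized_email(template: str, recipient_id: str,
                              context_store: dict, chat_complete) -> str:
    """Mine contextual data stored for the recipient, fold it into an LLM
    prompt together with the email template, and return the customized email."""
    # Contextual data previously stored in association with the recipient identifier.
    context = context_store.get(recipient_id, {})
    context_lines = "\n".join(f"- {key}: {value}" for key, value in context.items())
    prompt = (
        "Fill in the email template below using ONLY the facts listed.\n"
        "Do not invent facts and do not include any personal identity information "
        "that is not in the list.\n\n"
        f"Facts:\n{context_lines}\n\nTemplate:\n{template}"
    )
    return chat_complete(prompt)   # customized email returned to the email application
```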
-
Patent number: 12165631
Abstract: A method of generating keyword-based dialogue summaries is provided. The method includes inputting a transcript of an audio conversation and a keyword into a machine learning model trained based on encodings representing the keyword and the transcript, generating computer-generated text different from and semantically descriptive of the transcript and semantically associated with the keyword, and outputting the computer-generated text in association with a selectable item selectable for inclusion of the computer-generated text in displayed text representing the transcript, the selectable item associated with the keyword.
Type: Grant
Filed: May 3, 2022
Date of Patent: December 10, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Abedelkader Asi, Royi Ronen, Roy Eisenstadt, Dean Geckt
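One simple way to picture keyword-conditioned summarization is to feed the keyword and transcript to a summarizer together and attach the result to the metadata a UI would need for the selectable item. The patent trains a model on encodings of both inputs; the sketch below only approximates that idea with a generic summarize_fn callable, which is an assumption.

```python
# A toy sketch of conditioning a summary on a keyword by concatenating the
# keyword with the transcript before it reaches the model. summarize_fn is any
# off-the-shelf summarizer and stands in for the trained model in the patent.

def keyword_summary(transcript: str, keyword: str, summarize_fn) -> dict:
    """Return the keyword-conditioned summary plus the data a UI would need to
    render a selectable item for inserting the summary into the displayed transcript."""
    conditioned_input = f"Topic: {keyword}\nConversation:\n{transcript}"
    summary = summarize_fn(conditioned_input)
    return {
        "keyword": keyword,
        "summary": summary,     # computer-generated text describing the transcript
        "selectable": True,     # shown as a selectable item next to the displayed transcript
    }
```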
-
Publication number: 20230367968
Abstract: Text coherence is classified by receiving a multiword text string into a machine learning model, determining, by the machine learning model, semantic probability data representing a probability that a word of the received multiword text string is semantically correlated to one or more other words in the multiword text string, determining, by the machine learning model, an inferential aggregate perplexity score of the multiword text string, based on the determined semantic probability data, outputting, from the machine learning model, the inferential aggregate perplexity score, and classifying a coherence of the multiword text string based on whether the outputted inferential aggregate perplexity score satisfies a coherence condition, wherein the coherence condition is based on a predefined coherence score.
Type: Application
Filed: May 11, 2022
Publication date: November 16, 2023
Inventors: Roy EISENSTADT, Abedelkader ASI, Royi RONEN, Dean GECKT
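The coherence condition reduces to a perplexity threshold, which is easy to show in code: average the per-token log-probabilities, exponentiate the negative mean, and compare against a predefined score. The token_logprobs() callable and the 50.0 threshold below are illustrative assumptions.

```python
# A minimal sketch of the perplexity-based coherence check, assuming some
# language model supplies per-token log-probabilities via token_logprobs().
import math

def aggregate_perplexity(log_probs: list) -> float:
    """Perplexity = exp of the negative mean token log-probability."""
    return math.exp(-sum(log_probs) / len(log_probs))

def is_coherent(text: str, token_logprobs, threshold: float = 50.0) -> bool:
    """Classify the text as coherent if its aggregate perplexity stays below
    the predefined coherence score."""
    perplexity = aggregate_perplexity(token_logprobs(text))
    return perplexity <= threshold
```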
-
Publication number: 20230360640
Abstract: A method of generating keyword-based dialogue summaries is provided. The method includes inputting a transcript of an audio conversation and a keyword into a machine learning model trained based on encodings representing the keyword and the transcript, generating computer-generated text different from and semantically descriptive of the transcript and semantically associated with the keyword, and outputting the computer-generated text in association with a selectable item selectable for inclusion of the computer-generated text in displayed text representing the transcript, the selectable item associated with the keyword.
Type: Application
Filed: May 3, 2022
Publication date: November 9, 2023
Inventors: Abedelkader ASI, Royi RONEN, Roy EISENSTADT, Dean GECKT
-
Patent number: 11630958
Abstract: The disclosure herein describes determining topics of communication transcripts using trained summarization models. A first communication transcript associated with a first communication is obtained and divided into a first set of communication segments. A first set of topic descriptions is generated based on the first set of communication segments by analyzing each communication segment of the first set of communication segments with a generative language model. A summarization model is trained using the first set of communication segments and associated first set of topic descriptions as training data. The trained summarization model is then applied to a second communication transcript and, based on applying the trained summarization model to the second communication transcript, a second set of topic descriptions of the second communication transcript is generated.
Type: Grant
Filed: June 2, 2021
Date of Patent: April 18, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Royi Ronen, Yarin Kuper, Tomer Rosenthal, Abedelkader Asi, Erez Altus, Rona Shaanan
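The bootstrap described here, labeling segments with a generative language model, training a summarization model on those pairs, and then applying the trained model to new transcripts, can be outlined as follows. The segmenter and the generative_lm and train_summarizer callables are stand-ins, not details from the patent.

```python
# A high-level sketch of the two-stage flow: generative-model labeling of a
# first transcript, then training and applying a summarization model.

def split_into_segments(transcript: str) -> list:
    # A crude segmenter; the patent does not prescribe how segments are formed.
    return [chunk.strip() for chunk in transcript.split("\n\n") if chunk.strip()]

def bootstrap_topic_model(first_transcript, generative_lm, train_summarizer):
    segments = split_into_segments(first_transcript)
    # First set of topic descriptions, produced by the generative language model.
    topic_descriptions = [generative_lm(f"Describe the topic of: {seg}") for seg in segments]
    # The (segment, topic description) pairs become training data for the summarization model.
    return train_summarizer(list(zip(segments, topic_descriptions)))

def describe_topics(second_transcript, summarization_model):
    # Apply the trained summarization model to a second communication transcript.
    return [summarization_model(seg) for seg in split_into_segments(second_transcript)]
```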
-
Publication number: 20220391591
Abstract: The disclosure herein describes determining topics of communication transcripts using trained summarization models. A first communication transcript associated with a first communication is obtained and divided into a first set of communication segments. A first set of topic descriptions is generated based on the first set of communication segments by analyzing each communication segment of the first set of communication segments with a generative language model. A summarization model is trained using the first set of communication segments and associated first set of topic descriptions as training data. The trained summarization model is then applied to a second communication transcript and, based on applying the trained summarization model to the second communication transcript, a second set of topic descriptions of the second communication transcript is generated.
Type: Application
Filed: June 2, 2021
Publication date: December 8, 2022
Inventors: Royi RONEN, Yarin KUPER, Tomer ROSENTHAL, Abedelkader ASI, Erez ALTUS, Rona SHAANAN
-
Publication number: 20220392434
Abstract: The disclosure herein describes reducing training bias in outputs generated by a generative language model. A communication segment associated with a communication is obtained by at least one processor of a generative language model. An output value associated with the communication segment is generated by the generative language model. The output value is mapped to a set of training bias values associated with the generative language model and based on the mapping of the output value to a training bias value of the set of training bias values, an alternative output value is generated. The alternative output value is used in a generated segment output for the communication segment. The accuracy of segment outputs generated by the generative language model is improved through reducing or eliminating its training biases.
Type: Application
Filed: June 8, 2021
Publication date: December 8, 2022
Inventors: Abedelkader ASI, Yarin KUPER, Royi RONEN, Song WANG, Olga GOLDENBERG, Shimrit Rada BEMIS, Erez ALTUS, Yi MAO, Weizhu CHEN
-
Publication number: 20210312362
Abstract: The disclosure herein describes providing action item information for a current activity based on similarity with past activities. Activity attributes indicative of an activity outcome are identified, and a random forest classifier based on the identified activity attributes is generated. The random forest classifier classifies an activity based on the activity attributes. Similarity factors associated with the current activity and past activities are calculated based on the random forest classifier. Based on the similarity factors, data value ranges of performance indicators of past activities associated with the activity outcome are determined. Based on comparing the determined data value ranges to performance indicator data values of the current activity, action item information associated with the performance indicator data values of the current activity is provided.
Type: Application
Filed: April 7, 2020
Publication date: October 7, 2021
Inventors: Royi RONEN, Abedelkader ASI, Arshdeep SINGH, Sandeep N. MENON, Inbar OREN
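One plausible reading of the similarity step is random-forest proximity: two activities are similar when they land in the same leaves across many trees. The sketch below uses scikit-learn's apply() to compute that proximity and then reads off a value range for one performance indicator among similar successful activities. The features, the choice of proximity, and the toy data are invented for illustration and are not taken from the publication.

```python
# A sketch of forest-proximity similarity between a current activity and past
# activities, plus a value-range readout for one indicator. Illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def forest_proximity(model, current, past):
    """Fraction of trees in which each past activity shares a leaf with the current activity."""
    current_leaves = model.apply(current.reshape(1, -1))   # shape (1, n_trees)
    past_leaves = model.apply(past)                        # shape (n_past, n_trees)
    return (past_leaves == current_leaves).mean(axis=1)

# Toy data: past activities described by 5 attributes, with a binary outcome.
rng = np.random.default_rng(0)
past_attributes = rng.random((200, 5))
outcomes = (past_attributes[:, 0] + past_attributes[:, 3] > 1.0).astype(int)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(past_attributes, outcomes)

# Similarity factors between the current activity and each past activity.
current_activity = rng.random(5)
similarity = forest_proximity(model, current_activity, past_attributes)

# Data value range of one performance indicator among the most similar successful activities.
top = np.argsort(similarity)[-20:]
similar_successes = past_attributes[top][outcomes[top] == 1]
if len(similar_successes):
    low, high = similar_successes[:, 3].min(), similar_successes[:, 3].max()
    print(f"Similar successful activities kept indicator 3 between {low:.2f} and {high:.2f}")
```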
-
Patent number: 11031003
Abstract: Technology is disclosed for providing dynamic identification and extraction or tagging of contextually-coherent text blocks from an electronic document. In an embodiment, an electronic document may be parsed into a plurality of content tokens that each corresponds to a portion of the electronic document, such as a sentence or a paragraph. Employing a sliding window approach, a number of token groups are independently analyzed, where each group of tokens has a different number of tokens included therein. Each token group is analyzed to determine confidence scores for various determinable contexts based on content included in the token set. The confidence scores can then be processed for each token group to determine an entropy score for the token group. In this way, one of the analyzed token groups can be selected as a representative text block that corresponds to one of the plurality of determinable contexts.
Type: Grant
Filed: May 25, 2018
Date of Patent: June 8, 2021
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Abedelkader Asi, Liron Izhaki-Allerhand, Ran Mizrachi, Royi Ronen, Ohad Jassin
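The sliding-window and entropy scoring can be shown compactly: slide windows of several sizes over the content tokens, get a confidence score per determinable context for each window, and keep the window whose score distribution has the lowest entropy. The classify() callable and the preference for the lowest-entropy window are assumptions based on the abstract, not the granted claims.

```python
# A compact sketch of sliding-window entropy scoring over content tokens
# (e.g. sentences), assuming classify() returns {context: confidence}.
import math

def entropy(probabilities):
    return -sum(p * math.log(p) for p in probabilities if p > 0)

def best_text_block(tokens, classify, window_sizes=(2, 3, 4)):
    """Score every window of every size and return the (entropy, window, context)
    triple whose context distribution is most concentrated."""
    best = None
    for size in window_sizes:
        for start in range(len(tokens) - size + 1):
            window = tokens[start:start + size]
            scores = classify(" ".join(window))     # confidence score per determinable context
            h = entropy(scores.values())
            if best is None or h < best[0]:
                top_context = max(scores, key=scores.get)
                best = (h, window, top_context)     # candidate representative text block
    return best
```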
-
Publication number: 20190362713
Abstract: Technology is disclosed for providing dynamic identification and extraction or tagging of contextually-coherent text blocks from an electronic document. In an embodiment, an electronic document may be parsed into a plurality of content tokens that each corresponds to a portion of the electronic document, such as a sentence or a paragraph. Employing a sliding window approach, a number of token groups are independently analyzed, where each group of tokens has a different number of tokens included therein. Each token group is analyzed to determine confidence scores for various determinable contexts based on content included in the token set. The confidence scores can then be processed for each token group to determine an entropy score for the token group. In this way, one of the analyzed token groups can be selected as a representative text block that corresponds to one of the plurality of determinable contexts.
Type: Application
Filed: May 25, 2018
Publication date: November 28, 2019
Inventors: Abedelkader ASI, Liron IZHAKI-ALLERHAND, Ran MIZRACHI, Royi RONEN, Ohad JASSIN