Patents by Inventor Oron NIR
Oron NIR has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12242803
Abstract: An ontology matching system performs operations to refine a natural language processing (NLP) model that encodes terms of a first hierarchical ontology and of a second hierarchical ontology as embeddings in a latent space. The operations include performing at least a first round of triplet loss training to decrease separation between select pairs of the embeddings sampled from the different ontologies that satisfy a first hierarchical relation while increasing separation between other pairs of the embeddings that do not satisfy the first hierarchical relation. The system then determines, from the refined NLP model, a stable matching scheme that matches each term in the first hierarchical ontology with a corresponding term of the second hierarchical ontology. Responsive to receiving terms of the first hierarchical ontology from an application, the system uses the stable matching scheme to map each of the terms to corresponding terms of the second hierarchical ontology.
Type: Grant
Filed: June 29, 2022
Date of Patent: March 4, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oron Nir, Inbal Sagiv, Fardau Van Neerden
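The triplet-loss refinement the abstract describes can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation: the function names, the Euclidean distance choice, and the margin value are assumptions.

```python
# Triplet loss over ontology-term embeddings: pull an anchor toward a
# "positive" (a term from the other ontology satisfying the hierarchical
# relation) and push it away from a "negative" (a term that does not).

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: the anchor-positive distance should be
    smaller than the anchor-negative distance by at least `margin`."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

# Well-separated triplet: positive is close, negative is far, loss is zero.
loss_ok = triplet_loss([0.0, 0.0], [0.1, 0.0], [3.0, 4.0])
# Violating triplet: positive and negative are equidistant, loss equals margin.
loss_bad = triplet_loss([0.0, 0.0], [1.0, 0.0], [1.0, 0.0])
```

Minimizing this loss over many sampled triplets is what moves relation-satisfying pairs together in the latent space while spreading the rest apart.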
-
Publication number: 20250054337
Abstract: Aspects of the technology described herein improve an object recognition system by specifying a type of picture that would improve the accuracy of the object recognition system if used to retrain the object recognition system. The technology described herein can take the form of an improvement model that improves an object recognition model by suggesting the types of training images that would improve the object recognition model's performance. For example, the improvement model could suggest that a picture of a person smiling be used to retrain the object recognition system. Once trained, the improvement model can be used to estimate a performance score for an image recognition model given the characteristics of a set of training images. The improvement model can then select a feature of an image, which if added to the training set, would cause a meaningful increase in the recognition system's performance.
Type: Application
Filed: October 21, 2024
Publication date: February 13, 2025
Inventors: Oron NIR, Royi RONEN, Ohad JASSIN, Milan M. GADA, Mor Geva PIPEK
-
Patent number: 12222974
Abstract: A method for automatically classifying terms of a first ontology into categories of a classification scheme defined with respect to a second ontology includes generating, for each term in the first ontology and each term in the second ontology, an embedding encoding the term and a description of the term. The method further includes adding the generated embeddings to a transformer model and computing, for each pair of the embeddings consisting of a first term from the first ontology and a second term from the second ontology, a similarity metric quantifying a similarity of the first term and the second term. The method still further provides for determining a matching scheme based on the similarity metric computed with respect to each pair of the embeddings, where the matching scheme associates terms of the first ontology with one or more relevant categories of the classification scheme defined with respect to the second ontology.
Type: Grant
Filed: June 29, 2022
Date of Patent: February 11, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oron Nir, Inbal Sagiv, Fardau Van Neerden
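The pairwise-similarity matching step can be illustrated with a toy example. This is a hedged sketch under assumed names and toy two-dimensional embeddings; the patent's similarity metric and matching scheme may differ (cosine similarity and a greedy best-match rule are used here for illustration).

```python
# Cosine similarity between term embeddings from two ontologies, then
# each first-ontology term is associated with its most similar
# second-ontology category.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def match_terms(first, second):
    """first/second: dicts mapping term -> embedding vector.
    Returns a mapping from each first-ontology term to its best match."""
    return {
        t1: max(second, key=lambda t2: cosine(e1, second[t2]))
        for t1, e1 in first.items()
    }

matches = match_terms(
    {"puppy": [1.0, 0.1], "salmon": [0.1, 1.0]},
    {"dog": [0.9, 0.0], "fish": [0.0, 0.8]},
)
# matches["puppy"] -> "dog", matches["salmon"] -> "fish"
```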
-
Publication number: 20240419944
Abstract: Sampling operations enable a computer vision tool to regulate downstream tasks. The sampling operations can indicate which frames of a video sequence should be processed by different downstream tasks. For example, a computer vision tool receives encoded data for a given frame and uses the encoded data to determine inputs for machine learning models in different channels. The computer vision tool provides the inputs to the machine learning models, respectively, and fuses results from the machine learning models. In this way, the computer vision tool determines a set of event indicators for the given frame. Based at least in part on the event indicator(s) for the given frame, the computer vision tool regulates downstream tasks for the given frame (e.g., selectively performing or skipping downstream tasks for the given frame, or otherwise adjusting how and when downstream tasks are performed for the given frame).
Type: Application
Filed: June 13, 2023
Publication date: December 19, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Oron NIR, Fardau VAN NEERDEN, Inbal SAGIV
-
Patent number: 12169984
Abstract: Aspects of the technology described herein improve an object recognition system by specifying a type of picture that would improve the accuracy of the object recognition system if used to retrain the object recognition system. The technology described herein can take the form of an improvement model that improves an object recognition model by suggesting the types of training images that would improve the object recognition model's performance. For example, the improvement model could suggest that a picture of a person smiling be used to retrain the object recognition system. Once trained, the improvement model can be used to estimate a performance score for an image recognition model given the characteristics of a set of training images. The improvement model can then select a feature of an image, which if added to the training set, would cause a meaningful increase in the recognition system's performance.
Type: Grant
Filed: January 25, 2021
Date of Patent: December 17, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oron Nir, Royi Ronen, Ohad Jassin, Milan M. Gada, Mor Geva Pipek
-
Publication number: 20240370661
Abstract: Multimedia content is summarized with the use of summary prompts that are created with audio and visual insights obtained from the multimedia content. An aggregated timeline temporally aligns the audio and visual insights. The aggregated timeline is segmented into coherent segments that each include a unique combination of audio and visual insights. These segments are grouped into chunks, based on prompt size constraints, and are used with identified summarization styles to create the summary prompts. The summary prompts are provided to summarization models to obtain summaries having content and summarization styles based on the summary prompts.
Type: Application
Filed: June 9, 2023
Publication date: November 7, 2024
Inventors: Tom HIRSHBERG, Yonit HOFFMAN, Zvi FIGOV, Maayan YEDIDIA DOTAN, Oron NIR
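The grouping-into-chunks step can be sketched as a greedy packing problem. This is a speculative illustration, not the published method: the segment representation (plain strings), the character-count budget, and the greedy strategy are all assumptions standing in for whatever prompt-size constraint the actual summarization model imposes.

```python
# Pack consecutive timeline segments into chunks so that each chunk's
# total text length stays within a prompt-size budget.

def chunk_segments(segments, max_chars):
    """Greedily group consecutive segments; start a new chunk whenever
    adding the next segment would exceed the budget."""
    chunks, current, size = [], [], 0
    for seg in segments:
        if current and size + len(seg) > max_chars:
            chunks.append(current)
            current, size = [], 0
        current.append(seg)
        size += len(seg)
    if current:
        chunks.append(current)
    return chunks

chunks = chunk_segments(["aaaa", "bbb", "cc", "ddddd"], max_chars=7)
# -> [["aaaa", "bbb"], ["cc", "ddddd"]]
```

Keeping segments consecutive preserves the temporal coherence of each chunk, which matters when the chunk is turned into a summary prompt.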
-
Publication number: 20240312477
Abstract: Examples of the present disclosure describe systems and methods for multichannel audio speech classification. In examples, an audio signal comprising multiple audio channels is received at a processing device. Each of the audio channels in the audio signal is transcoded to a predefined audio format. For each of the transcoded audio channels, an average power value is calculated for one or more data windows in the audio signal. A correlation value is calculated between the average power value for each audio channel and the combined average power value of the other audio channels in the audio signal. Each of the correlation values (or an aggregated correlation value for the audio channels) is then compared against a threshold value to determine whether the audio signal is to be classified as a speech-based communication. Based on the classification, an action associated with the audio signal may be performed.
Type: Application
Filed: December 27, 2023
Publication date: September 19, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Oron NIR, Inbal SAGIV, Maayan YEDIDIA, Fardau VAN NEERDEN, Itai NORMAN
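The power-correlation signal described above can be illustrated as follows. This is a rough sketch under stated assumptions: the window size, the Pearson correlation choice, the aggregation by averaging, and the threshold value are all placeholders, not values from the publication.

```python
# Per-window average power is computed for each channel, then each
# channel's power curve is correlated with the combined power of the
# remaining channels; a high aggregate correlation suggests the channels
# carry a shared speech envelope.

def window_powers(samples, win):
    """Average power (mean squared amplitude) per non-overlapping window."""
    return [
        sum(s * s for s in samples[i:i + win]) / win
        for i in range(0, len(samples) - win + 1, win)
    ]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def is_speech_like(channels, win=4, threshold=0.5):
    powers = [window_powers(c, win) for c in channels]
    corrs = []
    for i, p in enumerate(powers):
        # Combined power of all *other* channels, window by window.
        others = [sum(v) for v in zip(*(q for j, q in enumerate(powers) if j != i))]
        corrs.append(pearson(p, others))
    return sum(corrs) / len(corrs) >= threshold
```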
-
Publication number: 20240202240
Abstract: A video indexing system generates descriptive metadata for a video including identifiers for each of multiple detections that each correspond to a select one of multiple subjects that appear in the video. These detections are used to create relational graph data for the video, where the relational graph data includes nodes corresponding to each of the multiple subjects that appear in the video. A knowledge graph is queried with unique identifiers corresponding to the multiple subjects of the video to retrieve implicit relational data for each of the multiple subjects, and a merged relational graph is created by merging the implicit relational data retrieved from the knowledge graph with the relational graph data created for the video. A search engine uses the merged relational graph to identify video content relevant to a user query that is based on an implicit relation. Search results identifying the relevant content are presented on a user device.
Type: Application
Filed: December 16, 2022
Publication date: June 20, 2024
Inventors: Oron NIR, Ika BAR-MENACHEM, Inbal SAGIV
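The merge step can be pictured as a union over edge sets. This is a loose sketch with assumed data shapes (pair-keyed dictionaries of relation labels), not the publication's actual graph representation.

```python
# Explicit relations observed in the video are merged with implicit
# relations retrieved from a knowledge graph, keyed by subject pairs.

def merge_graphs(video_edges, kg_edges):
    """Each edge set maps (subject_a, subject_b) -> set of relation labels;
    the merged graph unions labels for every pair seen in either source."""
    merged = {}
    for edges in (video_edges, kg_edges):
        for pair, labels in edges.items():
            merged.setdefault(pair, set()).update(labels)
    return merged

merged = merge_graphs(
    {("anna", "ben"): {"appears_with"}},                       # from the video
    {("anna", "ben"): {"sibling_of"},                          # from the knowledge graph
     ("ben", "cara"): {"co_starred"}},
)
# merged[("anna", "ben")] == {"appears_with", "sibling_of"}
```

A query such as "scenes with siblings" can then match video-level nodes through the implicit `sibling_of` edge even though that relation was never visible in the footage itself.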
-
Patent number: 11954893
Abstract: The technology described herein is directed to systems, methods, and software for indexing video. In an implementation, a method comprises identifying one or more regions of interest around target content in a frame of the video. Further, the method includes identifying, in a portion of the frame outside a region of interest, potentially empty regions adjacent to the region of interest. The method continues with identifying at least one empty region of the potentially empty regions that satisfies one or more criteria and classifying at least the one empty region as a negative sample of the target content. In some implementations, the negative sample of the target content is included in a set of negative samples of the target content with which to train a machine learning model employed to identify instances of the target content.
Type: Grant
Filed: June 17, 2022
Date of Patent: April 9, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Oron Nir, Maria Zontak, Tucker Cunningham Burns, Apar Singhal, Lei Zhang, Irit Ofer, Avner Levi, Haim Sabo, Ika Bar-Menachem, Eylon Ami, Ella Ben Tov, Anika Zaman
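The empty-region sampling idea can be sketched geometrically. This is a speculative illustration under assumed conventions: boxes are `(x1, y1, x2, y2)` tuples, candidates are ROI-sized boxes slid to each side of an ROI, and the "criteria" are reduced here to staying in frame and not intersecting any ROI.

```python
# Candidate regions adjacent to a region of interest (ROI) are kept as
# negative samples only if they fit inside the frame and are empty of
# all ROIs.

def intersects(a, b):
    """True if axis-aligned boxes a and b overlap with positive area."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def negative_samples(frame_w, frame_h, rois):
    """Slide an ROI-sized box left/right/above/below each ROI and keep
    placements that stay in frame and intersect no ROI."""
    negatives = []
    for x1, y1, x2, y2 in rois:
        w, h = x2 - x1, y2 - y1
        for cand in [(x1 - w, y1, x1, y2), (x2, y1, x2 + w, y2),
                     (x1, y1 - h, x2, y1), (x1, y2, x2, y2 + h)]:
            cx1, cy1, cx2, cy2 = cand
            in_frame = 0 <= cx1 and 0 <= cy1 and cx2 <= frame_w and cy2 <= frame_h
            if in_frame and not any(intersects(cand, r) for r in rois):
                negatives.append(cand)
    return negatives
```

Sampling negatives adjacent to the target, rather than at random, yields hard background examples from the same visual context as the positives.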
-
Patent number: 11900961
Abstract: Examples of the present disclosure describe systems and methods for multichannel audio speech classification. In examples, an audio signal comprising multiple audio channels is received at a processing device. Each of the audio channels in the audio signal is transcoded to a predefined audio format. For each of the transcoded audio channels, an average power value is calculated for one or more data windows in the audio signal. A correlation value is calculated between the average power value for each audio channel and the combined average power value of the other audio channels in the audio signal. Each of the correlation values (or an aggregated correlation value for the audio channels) is then compared against a threshold value to determine whether the audio signal is to be classified as a speech-based communication. Based on the classification, an action associated with the audio signal may be performed.
Type: Grant
Filed: May 31, 2022
Date of Patent: February 13, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oron Nir, Inbal Sagiv, Maayan Yedidia, Fardau Van Neerden, Itai Norman
-
Publication number: 20240005094
Abstract: A system for ontology matching performs operations to refine a natural language processing (NLP) model that encodes terms of a first hierarchical ontology and of a second hierarchical ontology as embeddings in a vector space in which spatial proximity between the embeddings is correlated with similarity between the associated terms. The operations to refine the NLP model include performing at least a first round of triplet loss training to decrease separation between select pairs of the embeddings sampled from the different ontologies that satisfy a first hierarchical relation while increasing separation between other pairs of the embeddings that do not satisfy the first hierarchical relation. The system then determines, from the refined NLP model, a stable matching scheme that matches each term in the first hierarchical ontology with a corresponding term of the second hierarchical ontology.
Type: Application
Filed: June 29, 2022
Publication date: January 4, 2024
Inventors: Oron NIR, Inbal SAGIV, Fardau VAN NEERDEN
-
Publication number: 20240004915
Abstract: A method for automatically classifying terms of a first ontology into categories of a classification scheme defined with respect to a second ontology includes generating, for each term in the first ontology and each term in the second ontology, an embedding encoding the term and a description of the term. The method further includes adding the generated embeddings to a transformer model and computing, for each pair of the embeddings consisting of a first term from the first ontology and a second term from the second ontology, a similarity metric quantifying a similarity of the first term and the second term. The method still further provides for determining a matching scheme based on the similarity metric computed with respect to each pair of the embeddings, where the matching scheme associates terms of the first ontology with one or more relevant categories of the classification scheme defined with respect to the second ontology.
Type: Application
Filed: June 29, 2022
Publication date: January 4, 2024
Inventors: Oron NIR, Inbal SAGIV, Fardau VAN NEERDEN
-
Publication number: 20230419663
Abstract: Examples of the present disclosure describe systems and methods for video genre classification. In one example implementation, video content is received. A plurality of sliding windows of the video content is sampled. The plurality of sliding windows comprises audio data and video data. The audio data is analyzed to identify a set of audio features. The video data is analyzed to identify a set of video features. The set of audio features and the set of video features are provided to a classifier. The classifier is configured to detect a genre for the video content using the set of audio features and the set of video features. The video content is indexed based on the genre.
Type: Application
Filed: June 27, 2022
Publication date: December 28, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Oron NIR, Mattan SERRY, Yonit HOFFMAN, Michael BEN-HAYM, Zvi FIGOV, Eliyahu STRUGO, Avi NEEMAN
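The sliding-window pipeline can be sketched end to end with toy stand-ins. This is a minimal illustration, not the disclosed system: the window size, step, feature extractors, classifier, and majority-vote aggregation are all hypothetical.

```python
# Overlapping windows are sampled from the content, per-window audio and
# video features are extracted, and a per-window classifier votes on the
# overall genre.

def sliding_windows(frames, size, step):
    return [frames[i:i + size] for i in range(0, len(frames) - size + 1, step)]

def classify_genre(windows, audio_feats, video_feats, classifier):
    votes = [classifier(audio_feats(w), video_feats(w)) for w in windows]
    return max(set(votes), key=votes.count)  # majority vote across windows

# Toy stand-ins: frames reduced to brightness values, "genre" decided by
# mean window brightness; audio is ignored in this toy.
frames = [0.9, 0.8, 0.9, 0.2, 0.1, 0.9, 0.8]
wins = sliding_windows(frames, size=3, step=2)
genre = classify_genre(
    wins,
    audio_feats=lambda w: 0.0,
    video_feats=lambda w: sum(w) / len(w),
    classifier=lambda a, v: "bright" if v > 0.5 else "dark",
)
# genre -> "bright" (two of three windows vote "bright")
```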
-
Publication number: 20230386505
Abstract: Examples of the present disclosure describe systems and methods for multichannel audio speech classification. In examples, an audio signal comprising multiple audio channels is received at a processing device. Each of the audio channels in the audio signal is transcoded to a predefined audio format. For each of the transcoded audio channels, an average power value is calculated for one or more data windows in the audio signal. A correlation value is calculated between the average power value for each audio channel and the combined average power value of the other audio channels in the audio signal. Each of the correlation values (or an aggregated correlation value for the audio channels) is then compared against a threshold value to determine whether the audio signal is to be classified as a speech-based communication. Based on the classification, an action associated with the audio signal may be performed.
Type: Application
Filed: May 31, 2022
Publication date: November 30, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Oron NIR, Inbal SAGIV, Maayan YEDIDIA, Fardau VAN NEERDEN, Itai NORMAN
-
Patent number: 11823453
Abstract: The technology described herein is directed to a media indexer framework including a character recognition engine that automatically detects and groups instances (or occurrences) of characters in a multi-frame animated media file. More specifically, the character recognition engine automatically detects and groups the instances (or occurrences) of the characters in the multi-frame animated media file such that each group contains images associated with a single character. The character groups are then labeled and used to train an image classification model. Once trained, the image classification model can be applied to subsequent multi-frame animated media files to automatically classify the animated characters included therein.
Type: Grant
Filed: February 1, 2022
Date of Patent: November 21, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Oron Nir, Maria Zontak, Tucker Cunningham Burns, Apar Singhal, Lei Zhang, Irit Ofer, Avner Levi, Haim Sabo, Ika Bar-Menachem, Eylon Ami, Ella Ben Tov
-
Patent number: 11768961
Abstract: Methods for speaker role determination and scrubbing identifying information are performed by systems and devices. In speaker role determination, data from an audio or text file is divided into respective portions related to speaking parties. Characteristics classifying the portions of the data for speaking party roles are identified in the portions to generate data sets from the portions corresponding to the speaking party roles and to assign speaking party roles for the data sets. For scrubbing identifying information in data, audio data for speaking parties is processed using speech recognition to generate a text-based representation. Text associated with identifying information is determined based on a set of key words/phrases, and a portion of the text-based representation that includes a part of the text is identified. A segment of audio data that corresponds to the identified portion is replaced with different audio data, and the portion is replaced with different text.
Type: Grant
Filed: October 28, 2021
Date of Patent: September 26, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Yun-Cheng Ju, Ashwarya Poddar, Royi Ronen, Oron Nir, Ami Turgman, Andreas Stolcke, Edan Hauon
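The text side of the scrubbing step can be sketched with a key-phrase rule. This is a hedged sketch under stated assumptions: the key-phrase list, the single-word redaction scope, and the replacement token are invented examples, and a real system would also replace the matching audio segment, which this toy omits.

```python
import re

# Assumed example key phrases whose following word is treated as
# identifying information.
KEY_PHRASES = ["account number is", "my name is"]

def scrub(transcript, replacement="[REDACTED]"):
    """Replace the word following each key phrase with a placeholder."""
    for phrase in KEY_PHRASES:
        pattern = re.escape(phrase) + r"\s+\w+"
        transcript = re.sub(pattern, phrase + " " + replacement,
                            transcript, flags=re.IGNORECASE)
    return transcript

scrubbed = scrub("Hi, my name is Alice and my account number is 12345.")
# -> "Hi, my name is [REDACTED] and my account number is [REDACTED]."
```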
-
Patent number: 11501546
Abstract: In various embodiments, methods and systems for implementing a media management system, for video data processing and adaptation data generation, are provided. At a high level, a video data processing engine relies on different types of video data properties and additional auxiliary data resources to perform video optical character recognition operations for recognizing characters in video data. In operation, video data is accessed to identify recognized characters. A video OCR operation to perform on the video data for character recognition is determined from video character processing and video auxiliary data processing. Video auxiliary data processing includes processing an auxiliary reference object; the auxiliary reference object is an indirect reference object that is a derived input element used as a factor in determining the recognized characters. The video data is processed based on the video OCR operation and, based on processing the video data, at least one recognized character is communicated.
Type: Grant
Filed: July 27, 2020
Date of Patent: November 15, 2022
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Royi Ronen, Ika Bar-Menachem, Ohad Jassin, Avner Levi, Olivier Nano, Oron Nir, Mor Geva Pipek, Ori Ziv
-
Publication number: 20220318574
Abstract: The technology described herein is directed to systems, methods, and software for indexing video. In an implementation, a method comprises identifying one or more regions of interest around target content in a frame of the video. Further, the method includes identifying, in a portion of the frame outside a region of interest, potentially empty regions adjacent to the region of interest. The method continues with identifying at least one empty region of the potentially empty regions that satisfies one or more criteria and classifying at least the one empty region as a negative sample of the target content. In some implementations, the negative sample of the target content is included in a set of negative samples of the target content with which to train a machine learning model employed to identify instances of the target content.
Type: Application
Filed: June 17, 2022
Publication date: October 6, 2022
Inventors: Oron NIR, Maria ZONTAK, Tucker Cunningham BURNS, Apar SINGHAL, Lei ZHANG, Irit OFER, Avner LEVI, Haim SABO, Ika BAR-MENACHEM, Eylon AMI, Ella BEN TOV, Anika ZAMAN
-
Patent number: 11366989
Abstract: The technology described herein is directed to systems, methods, and software for indexing video. In an implementation, a method comprises identifying one or more regions of interest around target content in a frame of the video. Further, the method includes identifying, in a portion of the frame outside a region of interest, potentially empty regions adjacent to the region of interest. The method continues with identifying at least one empty region of the potentially empty regions that satisfies one or more criteria and classifying at least the one empty region as a negative sample of the target content. In some implementations, the negative sample of the target content is included in a set of negative samples of the target content with which to train a machine learning model employed to identify instances of the target content.
Type: Grant
Filed: March 26, 2020
Date of Patent: June 21, 2022
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Oron Nir, Maria Zontak, Tucker Cunningham Burns, Apar Singhal, Lei Zhang, Irit Ofer, Avner Levi, Haim Sabo, Ika Bar-Menachem, Eylon Ami, Ella Ben Tov, Anika Zaman
-
Publication number: 20220157057
Abstract: The technology described herein is directed to a media indexer framework including a character recognition engine that automatically detects and groups instances (or occurrences) of characters in a multi-frame animated media file. More specifically, the character recognition engine automatically detects and groups the instances (or occurrences) of the characters in the multi-frame animated media file such that each group contains images associated with a single character. The character groups are then labeled and used to train an image classification model. Once trained, the image classification model can be applied to subsequent multi-frame animated media files to automatically classify the animated characters included therein.
Type: Application
Filed: February 1, 2022
Publication date: May 19, 2022
Inventors: Oron NIR, Maria ZONTAK, Tucker Cunningham BURNS, Apar SINGHAL, Lei ZHANG, Irit OFER, Avner LEVI, Haim SABO, Ika BAR-MENACHEM, Eylon AMI, Ella BEN TOV