Patents by Inventor Nir Nice
Nir Nice has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230177111
Abstract: A method of training a machine learning model is provided. The method includes receiving labeled training data in the machine learning model, the received labeled training data including content data for items accessible to a user and input usage data representing recorded interaction between the user and the items, wherein the received content data for each item includes data representing intrinsic attributes of the item. The method further includes selecting a set of the input usage data that excludes input usage data for a proper subset of the items and training the machine learning model based on both the content data and the selected set of input usage data of the received labeled training data for the items.
Type: Application
Filed: December 6, 2021
Publication date: June 8, 2023
Inventors: Oren BARKAN, Roy HIRSCH, Ori KATZ, Avi CACIULARU, Yonathan WEILL, Noam KOENIGSTEIN, Nir NICE
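The abstract describes the selection step only at a high level; below is a minimal sketch, under assumed data structures, of how usage data for a proper subset of the items could be excluded before training. All names, the random holdout choice, and the holdout fraction are hypothetical, not taken from the patent.

```python
import random

def select_training_data(content, usage, holdout_frac=0.2, seed=0):
    """Drop usage records for a randomly chosen proper subset of items so the
    model must rely on content (intrinsic attributes) for those items.
    A sketch only; the patent does not specify how the subset is chosen."""
    rng = random.Random(seed)
    items = sorted(content.keys())
    held_out = set(rng.sample(items, max(1, int(len(items) * holdout_frac))))
    selected_usage = [(user, item) for (user, item) in usage if item not in held_out]
    return selected_usage, held_out

# Hypothetical toy data: per-item content attributes and (user, item) interactions.
content = {"i1": {"genre": "jazz"}, "i2": {"genre": "rock"}, "i3": {"genre": "pop"}}
usage = [("u1", "i1"), ("u1", "i2"), ("u2", "i3")]
selected_usage, held_out = select_training_data(content, usage)
# The model would then be trained on `content` plus `selected_usage` only.
```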
-
Publication number: 20230137744
Abstract: A method of generating an aggregate saliency map using a convolutional neural network is provided. Convolutional activation maps of the convolutional neural network model are received into a saliency map generator, the convolutional activation maps being generated by the neural network model while computing one or more prediction scores based on unlabeled input data. Each convolutional activation map corresponds to one of the model's multiple encoding layers. The saliency map generator generates a layer-dependent saliency map for each encoding layer of the unlabeled input data, each layer-dependent saliency map being based on a summation of element-wise products of the convolutional activation maps and their corresponding gradients. The layer-dependent saliency maps are combined into the aggregate saliency map indicating the relative contributions of individual components of the unlabeled input data to the one or more prediction scores computed by the convolutional neural network model on the unlabeled input data.
Type: Application
Filed: October 29, 2021
Publication date: May 4, 2023
Inventors: Oren BARKAN, Omri ARMSTRONG, Amir HERTZ, Avi CACIULARU, Ori KATZ, Itzik MALKIEL, Noam KOENIGSTEIN, Nir NICE
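The abstract fixes the core per-layer operation (a summation of element-wise products of activation maps and their gradients) but not how the layer-dependent maps are combined. The sketch below assumes all layers share a spatial size and uses a normalize-then-average combination; those choices are assumptions, not the patented method.

```python
import numpy as np

def layer_saliency(activations, gradients):
    """One layer's map: sum over channels of the element-wise product of the
    layer's activation maps and their gradients, clipped at zero."""
    return np.maximum((activations * gradients).sum(axis=0), 0.0)

def aggregate_saliency(per_layer_activations, per_layer_gradients):
    """Normalize each layer-dependent map to [0, 1] and average them into the
    aggregate map (assumes all layers share the same spatial size here)."""
    maps = []
    for acts, grads in zip(per_layer_activations, per_layer_gradients):
        m = layer_saliency(acts, grads)
        maps.append(m / (m.max() + 1e-8))
    return np.mean(maps, axis=0)

# Toy example: two layers, each with 4 channels of 8x8 activation maps.
rng = np.random.default_rng(0)
acts = [rng.standard_normal((4, 8, 8)) for _ in range(2)]
grads = [rng.standard_normal((4, 8, 8)) for _ in range(2)]
aggregate = aggregate_saliency(acts, grads)   # shape (8, 8)
```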
-
Publication number: 20230137718
Abstract: A relational similarity determination engine receives as input a dataset including a set of entities and co-occurrence data that defines co-occurrence relations for pairs of the entities. The relational similarity determination engine also receives as input side information defining explicit relations between the entities. The relational similarity determination engine jointly models the co-occurrence relations and the explicit relations for the entities to compute a similarity metric for each different pair of entities within the dataset. Based on the computed similarity metrics, the relational similarity determination engine identifies a most similar replacement entity from the dataset for each of the entities within the dataset. For a select entity received as an input, the relational similarity determination engine outputs the identified most similar replacement entity.
Type: Application
Filed: October 29, 2021
Publication date: May 4, 2023
Inventors: Oren BARKAN, Avi CACIULARU, Idan REJWAN, Yonathan WEILL, Noam KOENIGSTEIN, Ori KATZ, Itzik MALKIEL, Nir NICE
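As a rough illustration of combining co-occurrence relations with explicit relations, the sketch below blends a cosine similarity over co-occurrence rows with an explicit-relation indicator and then picks each entity's most similar replacement. The blending weight and the cosine choice are assumptions; the patent does not specify the joint model.

```python
import numpy as np

def joint_similarity(cooccurrence, explicit, alpha=0.5):
    """cooccurrence: (n, n) counts of how often entity pairs co-occur.
    explicit: (n, n) 0/1 matrix of explicit relations from side information.
    Stand-in for joint modeling: cosine similarity of co-occurrence rows
    blended with the explicit-relation indicator."""
    norms = np.linalg.norm(cooccurrence, axis=1, keepdims=True) + 1e-8
    cosine = (cooccurrence @ cooccurrence.T) / (norms * norms.T)
    return alpha * cosine + (1.0 - alpha) * explicit

def most_similar_replacement(similarity):
    """For every entity, the index of its most similar replacement entity."""
    s = similarity.astype(float).copy()
    np.fill_diagonal(s, -np.inf)
    return s.argmax(axis=1)

# Toy example with three entities.
cooc = np.array([[0, 5, 1], [5, 0, 2], [1, 2, 0]], dtype=float)
explicit = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
replacements = most_similar_replacement(joint_similarity(cooc, explicit))
```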
-
Publication number: 20230138579
Abstract: An anchor-based collaborative filtering system receives a training dataset including user-item interactions each identifying a user and an item that the user has positively interacted with. The system defines a vector space and distributes the items of the training dataset within the vector space based on a determined similarity of the items. The system further defines a set of taste anchors that are each associated in memory with a subgroup of the items in a same neighborhood of the vector space. To make a recommendation to an individual user, the system identifies an anchor-based representation for the individual user that includes a subset of the defined taste anchors that best represents the types of items that the user has favorably interacted with in the past. The taste anchors included in the identified anchor-based representation for the individual user are used to make recommendations to the user in the future.
Type: Application
Filed: October 29, 2021
Publication date: May 4, 2023
Inventors: Oren BARKAN, Roy HIRSCH, Ori KATZ, Avi CACIULARU, Noam KOENIGSTEIN, Nir NICE
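A minimal sketch of the anchor-based flow described above, assuming precomputed item vectors and anchor positions: items are assigned to their nearest taste anchor, a user is represented by the anchors that best cover the items they liked, and unseen items owned by those anchors are recommended. The nearest-anchor assignment and top-k coverage rule are assumptions for illustration.

```python
import numpy as np

def assign_anchors(item_vectors, anchors):
    """Map every item to its nearest taste anchor in the vector space."""
    distances = np.linalg.norm(item_vectors[:, None, :] - anchors[None, :, :], axis=-1)
    return distances.argmin(axis=1)

def user_anchor_representation(user_items, item_to_anchor, num_anchors, top_k=2):
    """Anchor-based user representation: the top_k anchors that cover most of
    the items the user has positively interacted with."""
    counts = np.bincount(item_to_anchor[user_items], minlength=num_anchors)
    return np.argsort(counts)[::-1][:top_k]

def recommend(user_anchors, item_to_anchor, seen_items):
    """Recommend unseen items that belong to the user's taste anchors."""
    candidates = np.where(np.isin(item_to_anchor, user_anchors))[0]
    return [int(i) for i in candidates if int(i) not in set(seen_items)]

# Toy example: 6 items in 2-D, 2 anchors, one user who liked items 0 and 1.
items = np.array([[0, 0], [0.1, 0], [0.2, 0.1], [5, 5], [5.1, 5], [4.9, 5.2]])
anchors = np.array([[0.1, 0.05], [5.0, 5.0]])
item_to_anchor = assign_anchors(items, anchors)
user_anchors = user_anchor_representation(np.array([0, 1]), item_to_anchor, num_anchors=2, top_k=1)
suggestions = recommend(user_anchors, item_to_anchor, seen_items=[0, 1])
```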
-
Publication number: 20230137692
Abstract: A computing system scores importance of a number of tokens in an input token sequence to one or more prediction scores computed by a neural network model on the input token sequence. The neural network model includes multiple encoding layers. Self-attention matrices of the neural network model are received into an importance evaluator. The self-attention matrices are generated by the neural network model while computing the one or more prediction scores based on the input token sequence. Each self-attention matrix corresponds to one of the multiple encoding layers. The importance evaluator generates an importance score for one or more of the tokens in the input token sequence. Each importance score is based on a summation as a function of the self-attention matrices, the summation being computed across the tokens in the input token sequence, across the self-attention matrices, and across the multiple encoding layers in the neural network model.
Type: Application
Filed: October 29, 2021
Publication date: May 4, 2023
Inventors: Oren BARKAN, Edan HAUON, Ori KATZ, Avi CACIULARU, Itzik MALKIEL, Omri ARMSTRONG, Amir HERTZ, Noam KOENIGSTEIN, Nir NICE
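The abstract specifies a summation across tokens, across self-attention matrices, and across encoding layers. One simple realization of that summation is sketched below; the final normalization step is added for readability and is an assumption, not part of the abstract.

```python
import numpy as np

def token_importance(self_attention):
    """self_attention: array of shape (layers, heads, tokens, tokens) holding
    the self-attention matrices produced while scoring one input sequence.
    One realization of the described summation: the attention each token
    receives, summed across the query tokens, across the matrices (heads),
    and across the encoding layers."""
    scores = self_attention.sum(axis=(0, 1, 2))    # shape: (tokens,)
    return scores / (scores.sum() + 1e-8)          # normalized to sum to 1

# Toy example: 2 layers, 2 heads, 4 tokens.
rng = np.random.default_rng(0)
attn = rng.random((2, 2, 4, 4))
attn = attn / attn.sum(axis=-1, keepdims=True)     # row-stochastic, like softmax output
importance = token_importance(attn)
```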
-
Patent number: 11580764
Abstract: Examples provide a self-supervised language model for document-to-document similarity scoring and ranking long documents of arbitrary length in an absence of similarity labels. In a first stage of a two-staged hierarchical scoring, a sentence similarity matrix is created for each paragraph in the candidate document. A sentence similarity score is calculated based on the sentence similarity matrix. In the second stage, a paragraph similarity matrix is constructed based on aggregated sentence similarity scores associated with the first candidate document. A total similarity score for the document is calculated based on the normalized paragraph similarity matrix for each candidate document in a collection of documents. The model is trained using a masked language model and intra- and inter-document sampling. The documents are ranked based on the similarity scores for the documents.
Type: Grant
Filed: June 22, 2021
Date of Patent: February 14, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Itzik Malkiel, Dvir Ginzburg, Noam Koenigstein, Oren Barkan, Nir Nice
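A sketch of the two-staged hierarchical scoring under assumed inputs (precomputed sentence embeddings) and assumed aggregation rules (best-match averaging and min-max normalization). The two stages follow the abstract, but the specific reductions below are illustrative choices, not the patented procedure.

```python
import numpy as np

def paragraph_pair_score(source_sentences, candidate_sentences):
    """Stage 1: build the sentence similarity matrix for one paragraph pair
    (cosine over sentence embeddings) and reduce it to a single sentence
    similarity score by best-match averaging."""
    a = source_sentences / (np.linalg.norm(source_sentences, axis=1, keepdims=True) + 1e-8)
    b = candidate_sentences / (np.linalg.norm(candidate_sentences, axis=1, keepdims=True) + 1e-8)
    return float((a @ b.T).max(axis=1).mean())

def document_score(source_paragraphs, candidate_paragraphs):
    """Stage 2: build the paragraph similarity matrix from the aggregated
    sentence scores, normalize it, and reduce it to a total similarity score."""
    P = np.array([[paragraph_pair_score(sp, cp) for cp in candidate_paragraphs]
                  for sp in source_paragraphs])
    P = (P - P.min()) / (P.max() - P.min() + 1e-8)
    return float(P.max(axis=1).mean())

# Toy example: two documents, each with two paragraphs of sentence embeddings.
rng = np.random.default_rng(0)
doc_a = [rng.standard_normal((3, 16)) for _ in range(2)]   # 2 paragraphs, 3 sentences each
doc_b = [rng.standard_normal((4, 16)) for _ in range(2)]   # 2 paragraphs, 4 sentences each
score = document_score(doc_a, doc_b)
```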
-
Publication number: 20220405504
Abstract: Examples provide a self-supervised language model for document-to-document similarity scoring and ranking long documents of arbitrary length in an absence of similarity labels. In a first stage of a two-staged hierarchical scoring, a sentence similarity matrix is created for each paragraph in the candidate document. A sentence similarity score is calculated based on the sentence similarity matrix. In the second stage, a paragraph similarity matrix is constructed based on aggregated sentence similarity scores associated with the first candidate document. A total similarity score for the document is calculated based on the normalized paragraph similarity matrix for each candidate document in a collection of documents. The model is trained using a masked language model and intra- and inter-document sampling. The documents are ranked based on the similarity scores for the documents.
Type: Application
Filed: June 22, 2021
Publication date: December 22, 2022
Inventors: Itzik MALKIEL, Dvir GINZBURG, Noam KOENIGSTEIN, Oren BARKAN, Nir NICE
-
Publication number: 20220318504
Abstract: The disclosure herein describes a system for interpreting text-based similarity between a seed item and a recommended item selected by a pre-trained language model from a plurality of candidate items based on semantic similarities between the seed item and the recommended item. The system analyzes similarity scores and contextual paragraph representations representing text-based descriptions of the seed item and recommended item to generate gradient maps and word scores representing the text-based descriptions. A model for interpreting text-based similarity utilizes the calculated gradients and word scores to match words from the seed item description with words in the recommended item description having similar semantic meaning. The word-pairs having the highest weight are identified by the system as the word-pairs having the greatest influence over the selection of the recommended item from the candidate items by the original pre-trained language model.
Type: Application
Filed: March 30, 2021
Publication date: October 6, 2022
Inventors: Itzik MALKIEL, Noam KOENIGSTEIN, Oren BARKAN, Dvir GINZBURG, Nir NICE
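The matching step can be pictured as weighting every (seed word, recommended word) pair by the similarity of their contextual vectors times the two words' gradient-based scores and keeping the heaviest pairs. The sketch below does exactly that; the multiplicative weighting and the cosine similarity are assumptions about how the scores are combined, not the patent's stated formula.

```python
import numpy as np

def top_word_pairs(seed_vectors, rec_vectors, seed_scores, rec_scores,
                   seed_words, rec_words, k=3):
    """Weight each word pair by cosine similarity of contextual vectors
    multiplied by the two words' gradient-based scores; return the k
    highest-weight pairs."""
    a = seed_vectors / (np.linalg.norm(seed_vectors, axis=1, keepdims=True) + 1e-8)
    b = rec_vectors / (np.linalg.norm(rec_vectors, axis=1, keepdims=True) + 1e-8)
    weights = (a @ b.T) * np.outer(seed_scores, rec_scores)
    flat = np.argsort(weights, axis=None)[::-1][:k]
    rows, cols = np.unravel_index(flat, weights.shape)
    return [(seed_words[r], rec_words[c], float(weights[r, c]))
            for r, c in zip(rows, cols)]

# Toy example with 2 seed words and 2 recommended-item words in a 4-D space.
rng = np.random.default_rng(0)
pairs = top_word_pairs(rng.standard_normal((2, 4)), rng.standard_normal((2, 4)),
                       seed_scores=[0.9, 0.4], rec_scores=[0.8, 0.3],
                       seed_words=["wireless", "headphones"],
                       rec_words=["bluetooth", "earbuds"], k=2)
```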
-
Publication number: 20220300348
Abstract: The disclosed distributed task coordination ensures task execution while minimizing both the risk of duplicate execution and resources consumed for coordination. Execution is guaranteed, while only best efforts are used to avoid duplication. Example solutions include requesting, by a node, a first lease from a first set of nodes; based at least on obtaining at least one first lease, requesting, by the node, a second lease from a second set of nodes; based at least on the node obtaining at least one second lease, determining a majority holder of second leases; and based at least on obtaining the majority of second leases, executing, by the node, a task associated with the at least one second lease. In some examples, the nodes comprise online processing units (NPUs). In some examples, if a first node begins executing the task and fails, another node automatically takes over to ensure completion.
Type: Application
Filed: June 6, 2022
Publication date: September 22, 2022
Inventors: Michael FELDMAN, Nimrod Ben SIMHON, Ayelet KROSKIN, Nir NICE
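A simulation-only sketch of the two-round lease protocol described above: at least one lease from the first set of nodes, then a majority of leases from the second set, and only then execution. The LeaseGrantor class and its try_grant call are hypothetical stand-ins for whatever remote call the nodes actually use.

```python
import threading

class LeaseGrantor:
    """A node that grants at most one lease per task (simulated; in practice
    this would be a remote call to another processing unit)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._granted = set()

    def try_grant(self, task_id):
        with self._lock:
            if task_id in self._granted:
                return False
            self._granted.add(task_id)
            return True

def try_execute(task_id, first_set, second_set, task):
    """Two-round acquisition: obtain at least one lease from the first set,
    then a majority of leases from the second set, and only then run the task.
    Best effort against duplicate execution, as described above."""
    if not any(grantor.try_grant(task_id) for grantor in first_set):
        return False
    second_wins = sum(grantor.try_grant(task_id) for grantor in second_set)
    if second_wins <= len(second_set) // 2:
        return False
    task()
    return True

# Toy example: one node coordinating against 3 + 3 lease grantors.
first = [LeaseGrantor() for _ in range(3)]
second = [LeaseGrantor() for _ in range(3)]
ran = try_execute("task-42", first, second, task=lambda: print("executing task-42"))
```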
-
Publication number: 20220300814
Abstract: Machine learning multiple features of an item depicted in images. Upon accessing multiple images that depict the item, a neural network is used to machine train on the images to generate embedding vectors for each of multiple features of the item. For each of multiple features of the item depicted in the images, in each iteration of the machine learning, the embedding vector is converted into a probability vector that represents probabilities that the feature has respective values. That probability vector is then compared with a value vector representing the actual value of that feature in the depicted item, and an error between the two vectors is determined. That error is used to adjust parameters of the neural network used to generate the embedding vector, allowing for the next iteration in the generation of the embedding vectors. These iterative changes continue, thereby training the neural network.
Type: Application
Filed: June 9, 2022
Publication date: September 22, 2022
Inventors: Oren BARKAN, Noam RAZIN, Noam KOENIGSTEIN, Roy HIRSCH, Nir NICE
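One training iteration per feature can be sketched as follows, with a linear softmax head standing in for the full neural network so the probability-vector-versus-one-hot comparison and the gradient update are visible. The head-only update is a deliberate simplification, not the patented training procedure.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def train_feature_head(embeddings, labels, num_values, lr=0.1, epochs=100):
    """Per-feature head: turn an item's embedding vector into a probability
    vector over the feature's possible values, compare it with the one-hot
    value vector, and use the error to adjust the parameters. Only a linear
    head is updated here, as a minimal stand-in for backpropagating through
    the full network."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((embeddings.shape[1], num_values)) * 0.01
    for _ in range(epochs):
        for x, y in zip(embeddings, labels):
            probs = softmax(x @ W)                    # probability vector
            target = np.eye(num_values)[y]            # one-hot value vector
            W -= lr * np.outer(x, probs - target)     # cross-entropy gradient step
    return W

# Toy example: 4 items with 8-D embeddings and a 3-valued feature (e.g., color).
rng = np.random.default_rng(1)
item_embeddings = rng.standard_normal((4, 8))
color_labels = np.array([0, 2, 1, 2])
W_color = train_feature_head(item_embeddings, color_labels, num_values=3)
```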
-
Publication number: 20220269723
Abstract: Aspects of the technology described herein use acoustic features of a music track to capture information for a recommendation system. The recommendation can work without analyzing label data (e.g., genre, artist) or usage data for a track. For each audio track, a descriptor is generated that can be used to compare the track to other tracks. The comparisons between track descriptors result in a similarity measure that can be used to make a recommendation. In this process, the audio descriptors are used directly to form a track-to-track similarity measure between tracks. By measuring the similarity between a track that a user is known to like and an unknown track, a decision can be made whether to recommend the unknown track to the user.
Type: Application
Filed: May 10, 2022
Publication date: August 25, 2022
Inventors: Oren BARKAN, Noam KOENIGSTEIN, Nir NICE
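A toy illustration of descriptor-based track-to-track similarity, using per-band mean and standard deviation as a stand-in for the learned audio descriptor. The descriptor definition, the cosine measure, and the recommendation threshold are all assumptions; the abstract does not specify them.

```python
import numpy as np

def track_descriptor(frames):
    """frames: (num_frames, num_bands) acoustic features such as a mel
    spectrogram. Descriptor: concatenated per-band mean and standard
    deviation, a simple stand-in for the descriptor described above."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

def track_similarity(descriptor_a, descriptor_b):
    """Cosine track-to-track similarity between two descriptors."""
    denom = np.linalg.norm(descriptor_a) * np.linalg.norm(descriptor_b) + 1e-8
    return float(descriptor_a @ descriptor_b / denom)

def should_recommend(liked_descriptor, candidate_descriptor, threshold=0.8):
    """Recommend the unknown track if it is similar enough to a liked track."""
    return track_similarity(liked_descriptor, candidate_descriptor) >= threshold

# Toy example with random 100-frame, 40-band feature matrices.
rng = np.random.default_rng(0)
liked = track_descriptor(rng.random((100, 40)))
candidate = track_descriptor(rng.random((100, 40)))
decision = should_recommend(liked, candidate)
```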
-
Patent number: 11372690
Abstract: The disclosed distributed task coordination ensures task execution while minimizing both the risk of duplicate execution and resources consumed for coordination. Execution is guaranteed, while only best efforts are used to avoid duplication. Example solutions include requesting, by a node, a first lease from a first set of nodes; based at least on obtaining at least one first lease, requesting, by the node, a second lease from a second set of nodes; based at least on the node obtaining at least one second lease, determining a majority holder of second leases; and based at least on obtaining the majority of second leases, executing, by the node, a task associated with the at least one second lease. In some examples, the nodes comprise online processing units (NPUs). In some examples, if a first node begins executing the task and fails, another node automatically takes over to ensure completion.
Type: Grant
Filed: October 3, 2019
Date of Patent: June 28, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Feldman, Nimrod Ben Simhon, Ayelet Kroskin, Nir Nice
-
Patent number: 11373095
Abstract: Machine learning multiple features of an item depicted in images. Upon accessing multiple images that depict the item, a neural network is used to machine train on the images to generate embedding vectors for each of multiple features of the item. For each of multiple features of the item depicted in the images, in each iteration of the machine learning, the embedding vector is converted into a probability vector that represents probabilities that the feature has respective values. That probability vector is then compared with a value vector representing the actual value of that feature in the depicted item, and an error between the two vectors is determined. That error is used to adjust parameters of the neural network used to generate the embedding vector, allowing for the next iteration in the generation of the embedding vectors. These iterative changes continue, thereby training the neural network.
Type: Grant
Filed: December 23, 2019
Date of Patent: June 28, 2022
Inventors: Oren Barkan, Noam Razin, Noam Koenigstein, Roy Hirsch, Nir Nice
-
Patent number: 11328010
Abstract: Aspects of the technology described herein use acoustic features of a music track to capture information for a recommendation system. The recommendation can work without analyzing label data (e.g., genre, artist) or usage data for a track. For each audio track, a descriptor is generated that can be used to compare the track to other tracks. The comparisons between track descriptors result in a similarity measure that can be used to make a recommendation. In this process, the audio descriptors are used directly to form a track-to-track similarity measure between tracks. By measuring the similarity between a track that a user is known to like and an unknown track, a decision can be made whether to recommend the unknown track to the user.
Type: Grant
Filed: May 25, 2017
Date of Patent: May 10, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oren Barkan, Noam Koenigstein, Nir Nice
-
Patent number: 11238521
Abstract: The disclosure herein describes a recommendation system utilizing a specialized domain-specific language model for generating cold-start recommendations in an absence of user-specific data based on a user-selection of a seed item. A generalized language model is trained using a domain-specific corpus of training data, including title and description pairs associated with candidate items in a domain-specific catalog. The language model is trained to distinguish between real title-description pairs and fake title-description pairs. The trained language model analyzes the title and description of the seed item with the title and description of each candidate item in the catalog to create a hybrid set of scores. The set of scores includes similarity scores and classification scores for the seed item title with each candidate item description and title. The scores are utilized by the model to identify candidate items maximizing similarity with the seed item for cold-start recommendation to a user.
Type: Grant
Filed: February 12, 2020
Date of Patent: February 1, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Itzik Malkiel, Pavel Roit, Noam Koenigstein, Oren Barkan, Nir Nice
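The hybrid scoring can be pictured as a per-candidate blend of a similarity score and a real-versus-fake classification score. Both scoring functions in the sketch below are hypothetical stand-ins (plain word overlap) for the trained language model's outputs, and the equal blend weight is an assumption.

```python
def rank_cold_start_candidates(seed, candidates, similarity_fn, classification_fn, weight=0.5):
    """seed and candidates are dicts with 'title' and 'description'.
    similarity_fn and classification_fn stand in for the trained language
    model's similarity score and real-vs-fake pair classification score (both
    hypothetical here). Candidates are ranked by the blended hybrid score."""
    scored = []
    for candidate in candidates:
        sim = similarity_fn(seed["title"], candidate["description"])
        cls = classification_fn(seed["title"], candidate["description"])
        scored.append((weight * sim + (1.0 - weight) * cls, candidate["title"]))
    return [title for _, title in sorted(scored, reverse=True)]

# Toy stand-in for both model scores, so the sketch runs end to end: word overlap.
def word_overlap(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / (len(wa | wb) or 1)

ranked = rank_cold_start_candidates(
    {"title": "space strategy game", "description": "build fleets in space"},
    [{"title": "farm sim", "description": "grow crops on a quiet farm"},
     {"title": "galaxy wars", "description": "space battles and grand strategy"}],
    similarity_fn=word_overlap, classification_fn=word_overlap)
```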
-
Patent number: 11062198
Abstract: A recommender system represents items in a catalog by first feature vectors in a first vector space based on first characteristics of the items and by second feature vectors in a second vector space based on second characteristics of the items different from the first characteristics, and maps a feature vector defined in the first vector space for an item to a vector in the second vector space to provide recommendations based on the item.
Type: Grant
Filed: October 31, 2016
Date of Patent: July 13, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oren Barkan, Noam Koenigstein, Eylon Yogev, Nir Nice
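One simple way to realize a mapping between the two vector spaces is a least-squares linear map fitted on items that have both representations, as sketched below. The linear form is an assumption for illustration; the patent does not state how the mapping is learned.

```python
import numpy as np

def learn_mapping(first_space_vectors, second_space_vectors):
    """Least-squares linear map from the first vector space (first
    characteristics) to the second vector space (second characteristics),
    fitted on items that have both representations."""
    W, *_ = np.linalg.lstsq(first_space_vectors, second_space_vectors, rcond=None)
    return W

def recommend_for_item(first_space_vector, W, catalog_second_space, k=3):
    """Map the item into the second space and return the indices of the k
    nearest catalog items there."""
    mapped = first_space_vector @ W
    distances = np.linalg.norm(catalog_second_space - mapped, axis=1)
    return np.argsort(distances)[:k]

# Toy example: 50 items with 10-D first-space and 6-D second-space vectors.
rng = np.random.default_rng(0)
first_vecs = rng.standard_normal((50, 10))
second_vecs = rng.standard_normal((50, 6))
W = learn_mapping(first_vecs, second_vecs)
neighbors = recommend_for_item(first_vecs[0], W, second_vecs, k=3)
```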
-
Publication number: 20210192338
Abstract: Machine learning multiple features of an item depicted in images. Upon accessing multiple images that depict the item, a neural network is used to machine train on the images to generate embedding vectors for each of multiple features of the item. For each of multiple features of the item depicted in the images, in each iteration of the machine learning, the embedding vector is converted into a probability vector that represents probabilities that the feature has respective values. That probability vector is then compared with a value vector representing the actual value of that feature in the depicted item, and an error between the two vectors is determined. That error is used to adjust parameters of the neural network used to generate the embedding vector, allowing for the next iteration in the generation of the embedding vectors. These iterative changes continue, thereby training the neural network.
Type: Application
Filed: December 23, 2019
Publication date: June 24, 2021
Inventors: Oren BARKAN, Noam RAZIN, Noam KOENIGSTEIN, Roy HIRSCH, Nir NICE
-
Publication number: 20210192000
Abstract: Computerized searching for an item based on a previously viewed item. A displayed item is identified as a query input item to be used in searching for a target item. That input item has an associated set of embedding vectors each representing a respective feature of the input item. Target features of the search are then identified based on the input item. For each feature in the target item that is desired to be the same as the input item, an embedding vector for the input item is accessed as the vector for that feature in the search. For each feature in the target item that is desired to be different than the input item, a special vector associated with that desired value and feature is accessed for that feature in the search. These accessed vectors are then compared against target items to find close matches.
Type: Application
Filed: December 23, 2019
Publication date: June 24, 2021
Inventors: Oren BARKAN, Noam RAZIN, Roy HIRSCH, Noam KOENIGSTEIN, Nir NICE
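A sketch of the query-construction step under assumed per-feature embeddings: features to keep reuse the input item's vectors, features to change use a special vector for the desired value, and target items are ranked by summed per-feature cosine similarity. The cosine scoring and the summation are assumptions added for illustration.

```python
import numpy as np

def build_query(input_item_vectors, features_to_keep, special_vectors):
    """input_item_vectors: {feature: embedding vector} for the displayed item.
    Features in features_to_keep reuse the input item's vector; every other
    feature uses the special vector associated with the desired new value."""
    return {feature: (vector if feature in features_to_keep else special_vectors[feature])
            for feature, vector in input_item_vectors.items()}

def rank_targets(query, target_items):
    """Rank target items by summed cosine similarity across per-feature vectors."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    def score(vectors):
        return sum(cosine(query[f], vectors[f]) for f in query)
    return sorted(target_items, key=lambda name: -score(target_items[name]))

# Toy example: keep the input shirt's "style" but ask for a different "color".
rng = np.random.default_rng(0)
input_item = {"style": rng.standard_normal(4), "color": rng.standard_normal(4)}
special = {"color": rng.standard_normal(4)}          # vector for the desired color value
query = build_query(input_item, features_to_keep={"style"}, special_vectors=special)
targets = {"shirt_a": {"style": rng.standard_normal(4), "color": rng.standard_normal(4)},
           "shirt_b": {"style": rng.standard_normal(4), "color": rng.standard_normal(4)}}
ranking = rank_targets(query, targets)
```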
-
Publication number: 20210182935
Abstract: The disclosure herein describes a recommendation system utilizing a specialized domain-specific language model for generating cold-start recommendations in an absence of user-specific data based on a user-selection of a seed item. A generalized language model is trained using a domain-specific corpus of training data, including title and description pairs associated with candidate items in a domain-specific catalog. The language model is trained to distinguish between real title-description pairs and fake title-description pairs. The trained language model analyzes the title and description of the seed item with the title and description of each candidate item in the catalog to create a hybrid set of scores. The set of scores includes similarity scores and classification scores for the seed item title with each candidate item description and title. The scores are utilized by the model to identify candidate items maximizing similarity with the seed item for cold-start recommendation to a user.
Type: Application
Filed: February 12, 2020
Publication date: June 17, 2021
Inventors: Itzik MALKIEL, Pavel ROIT, Noam KOENIGSTEIN, Oren BARKAN, Nir NICE
-
Publication number: 20210103482
Abstract: The disclosed distributed task coordination ensures task execution while minimizing both the risk of duplicate execution and resources consumed for coordination. Execution is guaranteed, while only best efforts are used to avoid duplication. Example solutions include requesting, by a node, a first lease from a first set of nodes; based at least on obtaining at least one first lease, requesting, by the node, a second lease from a second set of nodes; based at least on the node obtaining at least one second lease, determining a majority holder of second leases; and based at least on obtaining the majority of second leases, executing, by the node, a task associated with the at least one second lease. In some examples, the nodes comprise online processing units (NPUs). In some examples, if a first node begins executing the task and fails, another node automatically takes over to ensure completion.
Type: Application
Filed: October 3, 2019
Publication date: April 8, 2021
Inventors: Michael FELDMAN, Nimrod Ben SIMHON, Ayelet KROSKIN, Nir NICE