Patents by Inventor Vivek Natarajan

Vivek Natarajan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240119586
    Abstract: We disclose the generation and training of Generative Adversarial Networks (GANs) to synthesize clinical images with skin conditions. Synthetic images for a pre-specified skin condition are generated while varying the condition's size, location, and the underlying skin color. We demonstrate that the generated images are of high fidelity using objective GAN evaluation metrics. The synthetic images are not only visually similar to real images, but also embody the respective skin conditions. Additionally, synthetic skin images can be used as a data augmentation technique for training a skin condition classifier, improving the classifier's ability to detect rare but malignant conditions.
    Type: Application
    Filed: October 13, 2020
    Publication date: April 11, 2024
    Inventors: Vivek Natarajan, Yuan Liu, David Coz, Amirata Ghorbani
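
As a concrete illustration of the data-augmentation use described in publication 20240119586 above, the following minimal Python sketch blends synthetic images into a classifier's training set while oversampling rare but malignant conditions. It is not taken from the application; the function and parameter names (augment_with_synthetic, rare_classes, ratio) are hypothetical.

    import random
    from collections import Counter

    def augment_with_synthetic(real_examples, synthetic_examples, rare_classes, ratio=0.5):
        # real_examples / synthetic_examples: lists of (image_path, condition_label) pairs.
        # rare_classes: condition labels (e.g. rare but malignant) to boost with synthetic data.
        # ratio: synthetic examples to add per rare class, as a fraction of its real count.
        counts = Counter(label for _, label in real_examples)
        augmented = list(real_examples)
        for label in rare_classes:
            pool = [ex for ex in synthetic_examples if ex[1] == label]
            target = int(counts.get(label, 0) * ratio) or len(pool)  # no real examples: use all synthetics
            augmented.extend(random.sample(pool, min(target, len(pool))))
        random.shuffle(augmented)
        return augmented
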
  • Publication number: 20240080362
    Abstract: A storage area network (SAN)-attached storage system architecture is disclosed. The storage system provides strongly consistent distributed storage communication protocol semantics, such as SCSI target semantics. The system includes a mechanism for presenting a single distributed logical unit, comprising one or more logical sub-units, as a single logical unit of storage to a host system by associating each of the logical sub-units that make up the single distributed logical unit with a single host visible identifier that corresponds to the single distributed logical unit. The system further includes mechanisms to maintain consistent context information for each of the logical sub-units such that the logical sub-units are not visible to a host system as separate entities from the single distributed logical unit.
    Type: Application
    Filed: November 13, 2023
    Publication date: March 7, 2024
    Inventors: Santosh Ananth Rao, Geoffrey Stewart Brown, Srikumar Natarajan, Pranab Patnaik, Kai Tan, Peter Frank Corbett, Vivek Venkatesan
  • Publication number: 20230260652
    Abstract: Systems and methods can perform self-supervised machine learning for improved medical image analysis. As one example, self-supervised learning on ImageNet, followed by additional self-supervised learning on unlabeled medical images from the target domain of interest, followed by fine-tuning on labeled medical images from the target domain, significantly improves the accuracy of medical image classifiers such as, for example, diagnostic models. Another example aspect of the present disclosure is directed to a novel Multi-Instance Contrastive Learning (MICLe) method that uses multiple different medical images that share one or more attributes (e.g., multiple images that depict the same underlying pathology and/or the same patient) to construct more informative positive pairs for self-supervised learning.
    Type: Application
    Filed: December 10, 2021
    Publication date: August 17, 2023
    Inventors: Shekoofeh Azizi, Wen Yau Aaron Loh, Zachary William Beaver, Ting Chen, Jonathan Paul Deaton, Jan Freyberg, Alan Prasana Karthikesalingam, Simon Kornblith, Basil Mustafa, Mohammad Norouzi, Vivek Natarajan, Fiona Keleher Ryan
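
The MICLe idea in publication 20230260652 above, building positive pairs from multiple images of the same patient or pathology, can be sketched in a few lines of Python. This is only an illustration under an assumed data structure (a dict mapping a case id to its images), not code from the application.

    import random

    def micle_positive_pairs(images_by_case, num_pairs):
        # images_by_case: {case_id: [image, ...]} where all images of a case share
        # the same patient and/or the same underlying pathology.
        cases = [cid for cid, imgs in images_by_case.items() if imgs]
        pairs = []
        for _ in range(num_pairs):
            imgs = images_by_case[random.choice(cases)]
            if len(imgs) >= 2:
                pairs.append(tuple(random.sample(imgs, 2)))   # two distinct images, same case
            else:
                pairs.append((imgs[0], imgs[0]))              # single image: fall back to two augmented views
        return pairs
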
  • Publication number: 20230222605
    Abstract: In one embodiment, a method includes receiving at a head-mounted device a speech input from a user and a visual input captured by cameras of the head-mounted device, wherein the visual input comprises subjects and attributes associated with the subjects, and wherein the speech input comprises a co-reference to one or more of the subjects, resolving entities corresponding to the subjects associated with the co-reference based on the attributes and the co-reference, and presenting a communication content responsive to the speech input and the visual input at the head-mounted device, wherein the communication content comprises information associated with executing results of tasks corresponding to the resolved entities.
    Type: Application
    Filed: March 16, 2023
    Publication date: July 13, 2023
    Inventors: Vivek Natarajan, Shawn C.P. Mei, Zhengping Zuo
  • Patent number: 11676220
    Abstract: In one embodiment, a method includes receiving a user input based on a plurality of modalities at the client system, wherein at least one of the modalities of the user input is a visual modality, determining one or more subjects and one or more attributes associated with the one or more subjects, respectively, based on the visual modality of the user input, resolving one or more entities corresponding to the one or more subjects based on the determined one or more attributes, and presenting a communication content at the client system responsive to the user input, wherein the communication content comprises information associated with executing results of one or more tasks corresponding to the one or more resolved entities.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: June 13, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: Vivek Natarajan, Shawn C. P. Mei, Zhengping Zuo
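
Publication 20230222605 and patent 11676220 above (and the related application publications 20220012076 and 20190325080 further down this list) all describe resolving references in a multimodal user input against subjects and attributes detected in the visual modality. The Python sketch below shows one simple way such resolution could be scored; the DetectedSubject class and the scoring rule are assumptions for illustration, not the claimed method.

    from dataclasses import dataclass, field

    @dataclass
    class DetectedSubject:
        entity_id: str
        label: str                                    # e.g. "mug", detected from the visual input
        attributes: set = field(default_factory=set)  # e.g. {"red", "on_table"}

    def resolve_coreference(mention_tokens, subjects):
        # mention_tokens: words from the speech input, e.g. ["that", "red", "mug"].
        # Score each detected subject by label match plus attribute overlap and
        # return the entity_id of the best match (None if nothing matches at all).
        tokens = set(mention_tokens)
        best_id, best_score = None, 0
        for s in subjects:
            score = int(s.label in tokens) + len(s.attributes & tokens)
            if score > best_score:
                best_id, best_score = s.entity_id, score
        return best_id

    mugs = [DetectedSubject("e1", "mug", {"red"}), DetectedSubject("e2", "mug", {"blue"})]
    assert resolve_coreference(["that", "red", "mug"], mugs) == "e1"
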
  • Publication number: 20220189612
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network to perform a downstream computer vision task. One of the methods includes pre-training an initial neural network that shares layers with the neural network to perform an initial computer vision task and then training the neural network on the downstream computer vision task.
    Type: Application
    Filed: December 14, 2021
    Publication date: June 16, 2022
    Inventors: Xiaohua Zhai, Sylvain Gelly, Alexander Kolesnikov, Yin Ching Jessica Yung, Joan Puigcerver i Perez, Lucas Klaus Beyer, Neil Matthew Tinmouth Houlsby, Wen Yau Aaron Loh, Alan Prasana Karthikesalingam, Basil Mustafa, Jan Freyberg, Patricia Leigh MacWilliams, Vivek Natarajan
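
Publication 20220189612 above describes pre-training a network that shares layers with the downstream model before training on the downstream computer vision task. A minimal PyTorch sketch of that weight-sharing pattern follows; the layer sizes, task heads, and training-step helper are illustrative assumptions, not the disclosed architecture.

    import torch
    from torch import nn

    # One backbone is shared by the initial (pre-training) task and the downstream task.
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 256), nn.ReLU())
    pretrain_head = nn.Linear(256, 1000)   # e.g. generic image classes for the initial task
    downstream_head = nn.Linear(256, 5)    # e.g. a handful of downstream labels

    def step(head, images, labels, optimizer):
        # Run the shared backbone, apply the task-specific head, and update the weights.
        logits = head(backbone(images))
        loss = nn.functional.cross_entropy(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Phase 1: pre-train backbone + pretrain_head; Phase 2: reuse backbone, train downstream_head.
    pretrain_opt = torch.optim.SGD(list(backbone.parameters()) + list(pretrain_head.parameters()), lr=0.1)
    finetune_opt = torch.optim.SGD(list(backbone.parameters()) + list(downstream_head.parameters()), lr=0.01)
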
  • Publication number: 20220012076
    Abstract: In one embodiment, a method includes receiving a user input based on a plurality of modalities at the client system, wherein at least one of the modalities of the user input is a visual modality, determining one or more subjects and one or more attributes associated with the one or more subjects, respectively, based on the visual modality of the user input, resolving one or more entities corresponding to the one or more subjects based on the determined one or more attributes, and presenting a communication content at the client system responsive to the user input, wherein the communication content comprises information associated with executing results of one or more tasks corresponding to the one or more resolved entities.
    Type: Application
    Filed: January 25, 2021
    Publication date: January 13, 2022
    Inventors: Vivek Natarajan, Shawn C.P. Mei, Zhengping Zuo
  • Publication number: 20210326391
    Abstract: In one embodiment, a method includes receiving, from a client system of a user, a user input comprising a plurality of n-grams, parsing the user input to identify one or more overall intents, hidden intents, and slots associated with the one or more n-grams, wherein at least one of the hidden intents is non-resolvable for being associated with partial slot information corresponding to an n-gram that has not been resolved to a particular entity identifier, wherein the partial slot information is associated with two or more entity identifiers of two or more entities, respectively, sending, to the client system, instructions for prompting the user to select one of the entities to be associated with the non-resolvable hidden intent, resolving the non-resolvable hidden intent based on the entity identifier of the entity selected by the user, and generating a response to the user input based on the resolved hidden intent.
    Type: Application
    Filed: June 29, 2021
    Publication date: October 21, 2021
    Inventors: Vivek Natarajan, Baiyang Liu, Shubham Gupta, Krishna Mittal, Scott Martin
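
Publication 20210326391 above turns on prompting the user when a slot matches more than one entity. A small Python sketch of that disambiguation step follows; resolve_hidden_intent and prompt_user are hypothetical names used only for illustration.

    def resolve_hidden_intent(slot_candidates, prompt_user):
        # slot_candidates: list of (entity_id, display_name) that the partial slot
        # information matched. If only one entity matched, no prompt is needed.
        if len(slot_candidates) == 1:
            return slot_candidates[0][0]
        choice = prompt_user([name for _, name in slot_candidates])
        return slot_candidates[choice][0]

    # "Call Mike" where two contacts named Mike exist: the assistant asks which one.
    candidates = [("id_123", "Mike Smith"), ("id_456", "Mike Jones")]
    entity_id = resolve_hidden_intent(candidates, prompt_user=lambda names: names.index("Mike Jones"))
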
  • Patent number: 11093551
    Abstract: In one embodiment, a method includes, by one or more computing systems, receiving a user input comprising a plurality of n-grams from a user of a client system, generating a tree-structured representation for the user input based on a parsing by a compositional model, resolving the tree-structured representation by applying a depth-first search algorithm, wherein the tree-structured representation comprises one or more non-resolvable non-terminal nodes associated with one or more slots, and wherein each non-terminal parent node of a non-resolvable non-terminal node is partially resolved based on partial slot information associated with the non-resolvable non-terminal node, and wherein each non-resolvable non-terminal node is resolved based on the respective partially resolved non-terminal parent node of the non-resolvable non-terminal node, generating a response to the user input based on the resolved tree-structured representation, sending instructions for presenting the response to the client system of the user.
    Type: Grant
    Filed: September 12, 2018
    Date of Patent: August 17, 2021
    Assignee: Facebook, Inc.
    Inventors: Vivek Natarajan, Baiyang Liu, Shubham Gupta, Krishna Mittal, Scott Martin
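
Patent 11093551 above resolves a tree-structured representation depth-first, letting a partially resolved parent supply context for a child that cannot be resolved on its own. The Python sketch below shows that traversal shape; the Node structure and the lookup callable are assumptions, not the patented parser.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Node:
        slot: str
        value: Optional[str] = None           # None => not resolvable from its own n-gram alone
        children: list = field(default_factory=list)

    def resolve(node, lookup, parent_context=None):
        # Depth-first: resolve this node (using whatever the parent already resolved),
        # then pass the enriched context down to its children.
        context = dict(parent_context or {})
        if node.value is None:
            node.value = lookup(node.slot, context)
        context[node.slot] = node.value
        for child in node.children:
            resolve(child, lookup, context)
        return node
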
  • Publication number: 20210232589
    Abstract: In one embodiment, a method includes, by one or more computing systems, receiving, by an assistant xbot associated with the one or more computing systems, a user request for a content digest, retrieving one or more content objects corresponding to the user request, generating one or more slides for the one or more retrieved content objects, respectively, and providing, by the assistant xbot, instructions for presenting the content digest responsive to the request from the first user, wherein the content digest comprises the one or more slides, and wherein one or more of the slides of the content digest are removed from the content digest after a predetermined time period.
    Type: Application
    Filed: April 9, 2021
    Publication date: July 29, 2021
    Inventors: Brian Nelson, Vivek Natarajan, Shawn C. P. Mei, Wenhai Yang
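
Publication 20210232589 above describes assembling slides for a content digest and removing them after a predetermined time period. A minimal Python sketch of that lifecycle follows; Slide, build_digest, prune_digest, and the summarize/pick_image callables are illustrative names only.

    import time
    from dataclasses import dataclass

    @dataclass
    class Slide:
        summary: str
        image_url: str
        created_at: float

    def build_digest(content_objects, summarize, pick_image):
        # One slide per retrieved content object: a summary plus a representative image.
        now = time.time()
        return [Slide(summarize(obj), pick_image(obj), now) for obj in content_objects]

    def prune_digest(slides, ttl_seconds):
        # Drop slides older than the predetermined time period.
        cutoff = time.time() - ttl_seconds
        return [s for s in slides if s.created_at >= cutoff]
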
  • Patent number: 11010179
    Abstract: In one embodiment, a method includes receiving, from a client system associated with a first user, a user input by the first user, parsing the user input to identify one or more n-grams associated with the user input, accessing a user profile associated with the first user, wherein the user profile is stored in a first data store, accessing ontology data based on the one or more identified n-grams from one or more information graphs, wherein the one or more information graphs are stored in one or more second data stores, respectively, determining contextual information associated with the user input, generating semantic information by aggregating the user profile, ontology data, and contextual information, generating a feature representation for the identified one or more n-grams based on the semantic information, and resolving one or more entities associated with the one or more n-grams based on the feature representation.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: May 18, 2021
    Assignee: Facebook, Inc.
    Inventors: Vivek Natarajan, Baiyang Liu, Xiaohu Liu, Ahmed Aly
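
Patent 11010179 above (and the corresponding application publication 20190327331 at the end of this list) aggregates a user profile, ontology data from information graphs, and contextual information into a feature representation before resolving entities. The Python sketch below mimics that aggregation with plain dicts; the field names and the scoring rule are assumptions for illustration.

    def build_features(ngram, user_profile, ontology, context):
        # In practice the profile and ontology would come from separate data stores
        # (a user-profile store and one or more information graphs); plain dicts stand in here.
        return {
            "ngram": ngram,
            "profile_affinity": user_profile.get(ngram, 0.0),
            "ontology_types": ontology.get(ngram, []),
            "recent_topic": context.get("topic"),
        }

    def resolve_entity(features, candidates):
        # candidates: list of dicts like {"id": ..., "type": ..., "personal": 0 or 1}.
        def score(c):
            type_match = 1.0 if c["type"] in features["ontology_types"] else 0.0
            return type_match + features["profile_affinity"] * c.get("personal", 0.0)
        return max(candidates, key=score) if candidates else None
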
  • Patent number: 11003669
    Abstract: In one embodiment, a method includes, by one or more computing systems, receiving, from a client system associated with a user, a request from the user for a content digest from an online social network, retrieving one or more content objects associated with the online social network that are accessible by the user, selecting one or more of the retrieved content objects to incorporate into the content digest based on their identified categories, generating one or more slides for the one or more selected content objects, respectively, wherein each slide comprises a summary and representative image of the respective selected content object, sending, to the client system of the user, instructions for presenting the content digest responsive to the request from the user, wherein the content digest comprises the one or more slides.
    Type: Grant
    Filed: October 3, 2018
    Date of Patent: May 11, 2021
    Assignee: Facebook, Inc.
    Inventors: Brian Nelson, Vivek Natarajan, Shawn C. P. Mei, Wenhai Yang
  • Patent number: 10936346
    Abstract: In one embodiment, a method includes receiving from a client system associated with a first user a user input based on one or more modalities, at least one of which is a visual modality, identifying one or more subjects associated with the user input based on the visual modality based on one or more machine-learning models, determining one or more attributes associated with the one or more subjects respectively based on the one or more machine-learning models, resolving one or more entities corresponding to the one or more subjects based on the determined one or more attributes, executing one or more tasks associated with the one or more resolved entities, and sending instructions for presenting a communication content including information associated with the executed one or more tasks responsive to user input to the client system associated with the first user.
    Type: Grant
    Filed: August 2, 2018
    Date of Patent: March 2, 2021
    Assignee: Facebook, Inc.
    Inventors: Vivek Natarajan, Shawn C. P. Mei, Zhengping Zuo
  • Patent number: 10877784
    Abstract: A virtual assistant receives a message including message content from a client device. The virtual assistant determines an intent to organize an event and initial parameters for the event based on the message content. The virtual assistant retrieves a set of messages related to the received message from a data store and refines the initial parameters based on the related messages. A set of potential recommendations is generated based on the refined event parameters and the virtual assistant selects one or more of the potential recommendations to surface to users. The selected recommendations are sent to the client device for presentation to the user.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: December 29, 2020
    Assignee: Facebook, Inc.
    Inventors: Davide Testuggine, Wenhai Yang, Vivek Natarajan, Brett Charles Groel, Julia Framel, Laurent Nicolas Landowski, Brian Nelson
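
Patent 10877784 above describes detecting an intent to organize an event, extracting initial parameters, and refining them from related messages in the thread. The keyword-based Python sketch below is only meant to show that flow; the regular expressions and helper names are illustrative, not the patented model.

    import re

    def extract_event_parameters(message):
        # Very rough intent and parameter extraction from a single message.
        wants_event = bool(re.search(r"\b(dinner|lunch|meet|movie)\b", message, re.I))
        when = re.search(r"\b(tonight|tomorrow|\d{1,2}(:\d{2})?\s?(am|pm))\b", message, re.I)
        return {"organize_event": wants_event, "time": when.group(0) if when else None}

    def refine_with_thread(params, related_messages):
        # Fill parameters the original message left open using earlier messages in the thread.
        for msg in related_messages:
            if params["time"] is None:
                params["time"] = extract_event_parameters(msg)["time"]
        return params

    params = refine_with_thread(extract_event_parameters("Want to grab dinner?"),
                                ["How about tomorrow at 7pm?"])
    recommendations = [f"Create event: dinner, {params['time']}"] if params["organize_event"] else []
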
  • Publication number: 20190325080
    Abstract: In one embodiment, a method includes receiving from a client system associated with a first user a user input based on one or more modalities, at least one of which is a visual modality, identifying one or more subjects associated with the user input based on the visual modality based on one or more machine-learning models, determining one or more attributes associated with the one or more subjects respectively based on the one or more machine-learning models, resolving one or more entities corresponding to the one or more subjects based on the determined one or more attributes, executing one or more tasks associated with the one or more resolved entities, and sending instructions for presenting a communication content including information associated with the executed one or more tasks responsive to user input to the client system associated with the first user.
    Type: Application
    Filed: August 2, 2018
    Publication date: October 24, 2019
    Inventors: Vivek Natarajan, Shawn C.P. Mei, Zhengping Zuo
  • Publication number: 20190327330
    Abstract: In one embodiment, a method includes accessing a plurality of content objects associated with a first user from an online social network, accessing a baseline profile, wherein the baseline profile is based on ontology data from one or more information graphs, accessing conversational data associated with the first user, determining one or more subjects associated with the first user based on the plurality of content objects and conversational data associated with the first user, and generating a customized user profile for the first user based on the baseline profile, wherein the user profile comprises one or more confidence scores associated with the respective one or more subjects associated with the first user, wherein the one or more confidence scores are calculated based on the plurality of content objects associated with the first user and the conversational data associated with the first user.
    Type: Application
    Filed: April 30, 2018
    Publication date: October 24, 2019
    Inventors: Vivek Natarajan, Wenhai Yang, Honglei Liu, Anuj Kumar
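
Publication 20190327330 above attaches per-subject confidence scores to a customized user profile built on top of a baseline profile. The frequency-based Python sketch below illustrates one way such scores could be computed; the 0.05 prior and the normalization are assumptions, not the disclosed scoring.

    from collections import Counter

    def build_user_profile(baseline_subjects, content_subjects, conversation_subjects):
        # Subjects mentioned in the user's content objects and conversations get a
        # confidence score proportional to how often they appear; subjects known only
        # from the baseline (ontology-derived) profile keep a weak prior.
        mentions = Counter(content_subjects) + Counter(conversation_subjects)
        total = sum(mentions.values()) or 1
        profile = {subject: 0.05 for subject in baseline_subjects}
        for subject, count in mentions.items():
            profile[subject] = round(count / total, 3)
        return profile

    profile = build_user_profile(
        baseline_subjects={"music", "cooking", "travel"},
        content_subjects=["cooking", "cooking", "travel"],
        conversation_subjects=["cooking"],
    )
    # -> {'music': 0.05, 'cooking': 0.75, 'travel': 0.25}
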
  • Publication number: 20190327331
    Abstract: In one embodiment, a method includes receiving, from a client system associated with a first user, a user input by the first user, parsing the user input to identify one or more n-grams associated with the user input, accessing a user profile associated with the first user, wherein the user profile is stored in a first data store, accessing ontology data based on the one or more identified n-grams from one or more information graphs, wherein the one or more information graphs are stored in one or more second data stores, respectively, determining contextual information associated with the user input, generating semantic information by aggregating the user profile, ontology data, and contextual information, generating a feature representation for the identified one or more n-grams based on the semantic information, and resolving one or more entities associated with the one or more n-grams based on the feature representation.
    Type: Application
    Filed: April 30, 2018
    Publication date: October 24, 2019
    Inventors: Vivek Natarajan, Baiyang Liu, Xiaohu Liu, Ahmed Aly