Patents by Inventor Avneesh Sud

Avneesh Sud has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230040793
    Abstract: Example systems perform complex optimization tasks with improved efficiency via neural meta-optimization of experts. In particular, provided is a machine learning framework in which a meta-optimization neural network can learn to fuse a collection of experts to provide a predicted solution. Specifically, the meta-optimization neural network can learn to predict the output of a complex optimization process which optimizes over outputs from the collection of experts to produce an optimized output. In such fashion, the meta-optimization neural network can, after training, be used in place of the complex optimization process to produce a synthesized solution from the experts, leading to orders of magnitude faster and computationally more efficient prediction or problem solution.
    Type: Application
    Filed: July 21, 2022
    Publication date: February 9, 2023
    Inventors: Avneesh Sud, Andrea Tagliasacchi, Ben Usman
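The idea in this abstract — train a model to predict the output of an expensive optimization over expert outputs, then use the model in place of the optimization — can be sketched in miniature. Everything below is illustrative: the "experts" are noisy copies of a target, the "complex optimization" is a brute-force search over convex combinations, and the "meta-optimizer" is a least-squares linear map rather than a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_optimization(expert_outputs, target):
    # Brute-force search over convex combinations of two experts: a toy
    # stand-in for the costly optimization process in the abstract.
    best, best_err = None, np.inf
    for w in np.linspace(0.0, 1.0, 101):
        fused = w * expert_outputs[0] + (1.0 - w) * expert_outputs[1]
        err = float(np.sum((fused - target) ** 2))
        if err < best_err:
            best, best_err = fused, err
    return best

# Build training pairs: stacked expert outputs in, optimized fusion out.
X, Y = [], []
for _ in range(200):
    target = rng.normal(size=4)
    e1 = target + rng.normal(0.0, 0.5, size=4)
    e2 = target + rng.normal(0.0, 0.5, size=4)
    X.append(np.concatenate([e1, e2]))
    Y.append(expensive_optimization([e1, e2], target))
X, Y = np.array(X), np.array(Y)

# "Meta-optimizer": a least-squares linear map from the stacked expert
# outputs to the optimized fusion (a neural network in the patent).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def fast_fuse(e1, e2):
    # Replaces the optimization loop entirely at inference time.
    return np.concatenate([e1, e2]) @ W

fused = fast_fuse(rng.normal(size=4), rng.normal(size=4))
```

After fitting, `fast_fuse` answers in one matrix multiply where the original routine ran a 101-step search, which is the source of the speedup the abstract describes.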
  • Publication number: 20210295025
    Abstract: Images of a plurality of users are captured concurrently with the plurality of users evincing a plurality of expressions. The images are captured using one or more eye tracking sensors implemented in one or more head mounted devices (HMDs) worn by the plurality of users. A machine learnt algorithm is trained to infer labels indicative of expressions of the users in the images. A live image of a user is captured using an eye tracking sensor implemented in an HMD worn by the user. A label of an expression evinced by the user in the live image is inferred using the machine learnt algorithm that has been trained to predict labels indicative of expressions. The images of the users and the live image can be personalized by combining the images with personalization images of the users evincing a subset of the expressions.
    Type: Application
    Filed: June 4, 2021
    Publication date: September 23, 2021
    Inventors: Avneesh Sud, Steven Hickson, Vivek Kwatra, Nicholas Dufour
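The train-on-many-users, infer-on-a-live-image flow in this abstract can be reduced to a toy classifier. All specifics below are assumptions for illustration: images are 8-D feature vectors, expressions are three made-up labels, and a nearest-centroid rule stands in for the patent's machine learnt algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three toy expressions; each "image" is an 8-D feature vector standing in
# for an eye-tracking sensor crop. Prototypes are illustrative, not real.
EXPRESSIONS = ["neutral", "smile", "squint"]
prototypes = {e: 3.0 * np.eye(8)[i] for i, e in enumerate(EXPRESSIONS)}

def capture(expression, user_offset):
    # One user's image = expression prototype + user-specific appearance
    # offset + sensor noise.
    return prototypes[expression] + user_offset + rng.normal(0.0, 0.1, 8)

# "Train" on images captured from many users: a nearest-centroid
# classifier plays the role of the abstract's machine learnt algorithm.
samples = []
for _ in range(20):                          # 20 training users
    offset = rng.normal(0.0, 0.3, 8)
    for e in EXPRESSIONS:
        samples.append((capture(e, offset), e))
centroids = {e: np.mean([x for x, lbl in samples if lbl == e], axis=0)
             for e in EXPRESSIONS}

def infer_label(live_image):
    # Label inference on a live image from an HMD's eye tracking sensor.
    return min(centroids, key=lambda e: np.linalg.norm(live_image - centroids[e]))
```

A live image from a previously unseen user is then labeled by the same `infer_label` call; the personalization step in the abstract (blending with per-user reference images) is omitted here.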
  • Patent number: 11042729
    Abstract: Images of a plurality of users are captured concurrently with the plurality of users evincing a plurality of expressions. The images are captured using one or more eye tracking sensors implemented in one or more head mounted devices (HMDs) worn by the plurality of users. A machine learnt algorithm is trained to infer labels indicative of expressions of the users in the images. A live image of a user is captured using an eye tracking sensor implemented in an HMD worn by the user. A label of an expression evinced by the user in the live image is inferred using the machine learnt algorithm that has been trained to predict labels indicative of expressions. The images of the users and the live image can be personalized by combining the images with personalization images of the users evincing a subset of the expressions.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: June 22, 2021
    Assignee: Google LLC
    Inventors: Avneesh Sud, Steven Hickson, Vivek Kwatra, Nicholas Dufour
  • Patent number: 10528620
    Abstract: For each image in a collection of images to be searched, the image is represented as a collection of color-edge words, where each color-edge word includes location information, shape information, and color information. The images may be indexed based on the color-edge words. A user-generated sketch is received as a query and represented as a collection of color-edge words. The collection of color-edge words representing the sketch is compared to the image index to identify search results based on a combination of location similarity, shape similarity, and color similarity.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: January 7, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Changhu Wang, Avneesh Sud, Lei Zhang, Xinghai Sun
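A minimal sketch of the color-edge-word indexing this abstract describes, under loud assumptions: a "word" is taken to be a (grid cell, edge orientation, color) triple, the grid size of 4 pixels is arbitrary, and scoring is raw word overlap rather than the patent's weighted combination of location, shape, and color similarity.

```python
from collections import defaultdict

def color_edge_words(edges):
    # edges: iterable of (row, col, orientation, color) tuples.
    # Quantizing position into 4x4 grid cells gives the location part of
    # the word; orientation and color supply the shape and color parts.
    return {((r // 4, c // 4), o, col) for r, c, o, col in edges}

index = defaultdict(set)        # color-edge word -> set of image ids

def add_image(image_id, edges):
    for w in color_edge_words(edges):
        index[w].add(image_id)

def search(sketch_edges, top_k=3):
    # Represent the user-generated sketch as color-edge words, then score
    # indexed images by how many words they share with the sketch.
    scores = defaultdict(int)
    for w in color_edge_words(sketch_edges):
        for image_id in index[w]:
            scores[image_id] += 1
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Because both images and sketches become the same kind of word set, the query side and the index side need no separate feature pipeline.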
  • Patent number: 10269177
    Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
    Type: Grant
    Filed: June 7, 2017
    Date of Patent: April 23, 2019
    Assignee: Google LLC
    Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
  • Publication number: 20180314881
    Abstract: Images of a plurality of users are captured concurrently with the plurality of users evincing a plurality of expressions. The images are captured using one or more eye tracking sensors implemented in one or more head mounted devices (HMDs) worn by the plurality of users. A machine learnt algorithm is trained to infer labels indicative of expressions of the users in the images. A live image of a user is captured using an eye tracking sensor implemented in an HMD worn by the user. A label of an expression evinced by the user in the live image is inferred using the machine learnt algorithm that has been trained to predict labels indicative of expressions. The images of the users and the live image can be personalized by combining the images with personalization images of the users evincing a subset of the expressions.
    Type: Application
    Filed: December 5, 2017
    Publication date: November 1, 2018
    Inventors: Avneesh Sud, Steven Hickson, Vivek Kwatra, Nicholas Dufour
  • Publication number: 20180129658
    Abstract: For each image in a collection of images to be searched, the image is represented as a collection of color-edge words, where each color-edge word includes location information, shape information, and color information. The images may be indexed based on the color-edge words. A user-generated sketch is received as a query and represented as a collection of color-edge words. The collection of color-edge words representing the sketch is compared to the image index to identify search results based on a combination of location similarity, shape similarity, and color similarity.
    Type: Application
    Filed: December 15, 2017
    Publication date: May 10, 2018
    Inventors: Changhu Wang, Avneesh Sud, Lei Zhang, Xinghai Sun
  • Publication number: 20180101984
    Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
    Type: Application
    Filed: June 7, 2017
    Publication date: April 12, 2018
    Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
  • Publication number: 20180101227
    Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
    Type: Application
    Filed: June 7, 2017
    Publication date: April 12, 2018
    Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
  • Patent number: 9875253
    Abstract: For each image in a collection of images to be searched, the image is represented as a collection of color-edge words, where each color-edge word includes location information, shape information, and color information. The images may be indexed based on the color-edge words. A user-generated sketch is received as a query and represented as a collection of color-edge words. The collection of color-edge words representing the sketch is compared to the image index to identify search results based on a combination of location similarity, shape similarity, and color similarity.
    Type: Grant
    Filed: June 14, 2013
    Date of Patent: January 23, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Changhu Wang, Avneesh Sud, Lei Zhang, Xinghai Sun
  • Patent number: 9569868
    Abstract: A system described herein includes a receiver component that receives a tree-structured dataset that includes multiple branches that are hierarchically related to one another. The system also includes an executor component that causes a programmable graphical processing unit to generate a Voronoi treemap based at least in part upon the tree-structured dataset, wherein the Voronoi treemap comprises a plurality of subareas that correspond to the multiple branches, and wherein the Voronoi treemap represents hierarchical relationships between the multiple branches.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: February 14, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Avneesh Sud, Danyel Fisher
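One level of the Voronoi treemap construction in this abstract amounts to a nearest-seed assignment over the drawing area. The per-cell nearest-seed test is independent for every cell, which is why the computation maps naturally onto a programmable GPU; the sketch below runs the same assignment on the CPU with NumPy, and the seed placement and grid size are illustrative.

```python
import numpy as np

def voronoi_partition(shape, seeds):
    # Assign each grid cell to its nearest seed. Each branch of the
    # tree-structured dataset would contribute one seed; nesting the
    # procedure inside each resulting region yields the treemap hierarchy.
    ys, xs = np.indices(shape)
    pts = np.stack([ys, xs], axis=-1).astype(float)
    dists = np.stack([np.linalg.norm(pts - np.array(s), axis=-1)
                      for s in seeds])
    return np.argmin(dists, axis=0)   # label map: cell -> branch index
```

In the patented system the distance evaluation (and iterative seed/weight adjustment to match branch sizes) is what the GPU accelerates; this sketch omits the weighting.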
  • Patent number: 9552421
    Abstract: Simplified collaborative searching is provided by pattern recognition such as facial recognition, motion recognition, and the like to provide handsfree functionality. Users join a collaborative search by placing themselves within the field of view of a camera communicationally coupled to a computing device that performs facial recognition and identifies the users, thereby adding them to the collaboration. Users also join by performing simple movements with a portable computing device, such as the ubiquitous mobile phone. A collaboration component tracks the users in the collaboration and identifies them to a search engine, thereby enabling the search engine to perform a collaborative search. The collaboration component also disseminates the collaborative recommendations, either automatically or based upon explicit requests triggered by pattern recognition, including motion recognition and touch recognition.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: January 24, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Aidan C. Crook, Avneesh Sud, Xiaoyuan Cui, Ohil K. Manyam
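The collaboration component this abstract describes — tracking recognized users and identifying them to the search engine with each query — can be outlined as a small class. The class name, method names, and dictionary query format below are all invented for illustration; face recognition itself is abstracted away to a callback that delivers a user name.

```python
class Collaboration:
    # Minimal stand-in for the abstract's collaboration component: users
    # join when recognized (here, simply by name), and the current
    # membership is attached to every query sent to the search engine.
    def __init__(self):
        self.members = []

    def on_face_recognized(self, user):
        # Called by the recognition pipeline; joining is idempotent, so a
        # user re-entering the camera's field of view is not added twice.
        if user not in self.members:
            self.members.append(user)

    def collaborative_query(self, text):
        # The search engine receives the membership alongside the query,
        # enabling it to produce collaborative recommendations.
        return {"query": text, "users": list(self.members)}
```

Motion- and touch-triggered dissemination of results, described in the abstract, would hang off the same membership list.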
  • Patent number: 9507803
    Abstract: Systems, methods, and computer-readable storage media for web-scale visual search capable of using a combination of visual input modalities are provided. An edgel index is created that includes shape-descriptors, including edgel-based representations, that correspond to each of a plurality of images. Each edgel-based representation includes pixels that depict edges or boundary contours of an image and is created, at least in part, by segmenting the image into a plurality of image segments and performing a multi-phase contour detection on each segment.
    Type: Grant
    Filed: November 11, 2013
    Date of Patent: November 29, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Avneesh Sud, Rajeev Prasad, Ayman Malek Abdel Hamid Kaheel, Pragyana Mishra, Sumit Amar, Kancheng Cao
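The edgel-based representation at the core of this abstract — the set of pixels lying on edges or boundary contours — can be illustrated with a deliberately simple detector. The intensity-difference test below is an assumption standing in for the patent's segmentation plus multi-phase contour detection; only the output format (a set of edge-pixel coordinates) reflects the abstract.

```python
import numpy as np

def edgel_representation(img, thresh=0.5):
    # Mark a pixel as an edgel when its intensity jumps from the pixel
    # above or to its left by more than `thresh`. Real systems would run
    # per-segment multi-phase contour detection instead.
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1]))
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    return set(zip(*np.nonzero((gx + gy) > thresh)))
```

These coordinate sets are what the edgel index stores per image, so a query shape can be matched against them directly.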
  • Publication number: 20160132498
    Abstract: For each image in a collection of images to be searched, the image is represented as a collection of color-edge words, where each color-edge word includes location information, shape information, and color information. The images may be indexed based on the color-edge words. A user-generated sketch is received as a query and represented as a collection of color-edge words. The collection of color-edge words representing the sketch is compared to the image index to identify search results based on a combination of location similarity, shape similarity, and color similarity.
    Type: Application
    Filed: June 14, 2013
    Publication date: May 12, 2016
    Inventors: Changhu Wang, Avneesh Sud, Lei Zhang, Xinghai Sun
  • Publication number: 20140333629
    Abstract: A system described herein includes a receiver component that receives a tree-structured dataset that includes multiple branches that are hierarchically related to one another. The system also includes an executor component that causes a programmable graphical processing unit to generate a Voronoi treemap based at least in part upon the tree-structured dataset, wherein the Voronoi treemap comprises a plurality of subareas that correspond to the multiple branches, and wherein the Voronoi treemap represents hierarchical relationships between the multiple branches.
    Type: Application
    Filed: July 28, 2014
    Publication date: November 13, 2014
    Inventors: Avneesh Sud, Danyel Fisher
  • Publication number: 20140280299
    Abstract: Simplified collaborative searching is provided by pattern recognition such as facial recognition, motion recognition, and the like to provide handsfree functionality. Users join a collaborative search by placing themselves within the field of view of a camera communicationally coupled to a computing device that performs facial recognition and identifies the users, thereby adding them to the collaboration. Users also join by performing simple movements with a portable computing device, such as the ubiquitous mobile phone. A collaboration component tracks the users in the collaboration and identifies them to a search engine, thereby enabling the search engine to perform a collaborative search. The collaboration component also disseminates the collaborative recommendations, either automatically or based upon explicit requests triggered by pattern recognition, including motion recognition and touch recognition.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Applicant: Microsoft Corporation
    Inventors: Aidan C. Crook, Avneesh Sud, Xiaoyuan Cui, Ohil K. Manyam
  • Patent number: 8803883
    Abstract: A system described herein includes a receiver component that receives a tree-structured dataset that includes multiple branches that are hierarchically related to one another. The system also includes an executor component that causes a programmable graphical processing unit to generate a Voronoi treemap based at least in part upon the tree-structured dataset, wherein the Voronoi treemap comprises a plurality of subareas that correspond to the multiple branches, and wherein the Voronoi treemap represents hierarchical relationships between the multiple branches.
    Type: Grant
    Filed: May 29, 2009
    Date of Patent: August 12, 2014
    Assignee: Microsoft Corporation
    Inventors: Avneesh Sud, Danyel Fisher
  • Publication number: 20140074852
    Abstract: Systems, methods, and computer-readable storage media for web-scale visual search capable of using a combination of visual input modalities are provided. An edgel index is created that includes shape-descriptors, including edgel-based representations, that correspond to each of a plurality of images. Each edgel-based representation includes pixels that depict edges or boundary contours of an image and is created, at least in part, by segmenting the image into a plurality of image segments and performing a multi-phase contour detection on each segment.
    Type: Application
    Filed: November 11, 2013
    Publication date: March 13, 2014
    Applicant: Microsoft Corporation
    Inventors: Avneesh Sud, Rajeev Prasad, Ayman Malek Abdel Hamid Kaheel, Pragyana Mishra, Sumit Amar, Kancheng Cao
  • Patent number: 8589410
    Abstract: Systems, methods, and computer-readable storage media for web-scale visual search capable of using a combination of visual input modalities are provided. An edgel index is created that includes shape-descriptors, including edgel-based representations, that correspond to each of a plurality of images. Each edgel-based representation includes pixels that depict edges or boundary contours of an image and is created, at least in part, by segmenting the image into a plurality of image segments and performing a multi-phase contour detection on each segment.
    Type: Grant
    Filed: November 21, 2011
    Date of Patent: November 19, 2013
    Assignee: Microsoft Corporation
    Inventors: Avneesh Sud, Rajeev Prasad, Ayman Malek Abdel Hamid Kaheel, Pragyana Mishra, Sumit Amar, Kancheng Cao
  • Publication number: 20130097181
    Abstract: Systems, methods, and computer-readable storage media for web-scale visual search capable of using a combination of visual input modalities are provided. An edgel index is created that includes shape-descriptors, including edgel-based representations, that correspond to each of a plurality of images. Each edgel-based representation includes pixels that depict edges or boundary contours of an image and is created, at least in part, by segmenting the image into a plurality of image segments and performing a multi-phase contour detection on each segment.
    Type: Application
    Filed: November 21, 2011
    Publication date: April 18, 2013
    Applicant: Microsoft Corporation
    Inventors: Avneesh Sud, Rajeev Prasad, Ayman Malek Abdel Hamid Kaheel, Pragyana K. Mishra, Sumit Amar, Kancheng Cao