Patents by Inventor Avneesh Sud
Avneesh Sud has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230040793
Abstract: Example systems perform complex optimization tasks with improved efficiency via neural meta-optimization of experts. In particular, provided is a machine learning framework in which a meta-optimization neural network can learn to fuse a collection of experts to provide a predicted solution. Specifically, the meta-optimization neural network can learn to predict the output of a complex optimization process which optimizes over outputs from the collection of experts to produce an optimized output. In such fashion, the meta-optimization neural network can, after training, be used in place of the complex optimization process to produce a synthesized solution from the experts, making prediction orders of magnitude faster and more computationally efficient.
Type: Application
Filed: July 21, 2022
Publication date: February 9, 2023
Inventors: Avneesh Sud, Andrea Tagliasacchi, Ben Usman
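The fusion idea in this abstract can be illustrated with a toy sketch. The mechanics below are an assumption for illustration, not the patented implementation: a slow inner optimizer searches for convex weights that fuse two experts' scalar predictions; the patent's meta-optimization network would learn to predict the fused solution directly, skipping this search at inference time.

```python
# Hedged sketch of "meta-optimization of experts" (assumed mechanics):
# a brute-force weight search stands in for the expensive optimization
# process that the meta-optimization network learns to replace.

def fuse(weights, expert_outputs):
    """Convex combination of scalar expert predictions."""
    return sum(w * e for w, e in zip(weights, expert_outputs))

def slow_optimize(expert_outputs, target, steps=101):
    """Grid search over two-expert convex weights (the slow inner loop)."""
    best_w, best_err = None, float("inf")
    for i in range(steps):
        w0 = i / (steps - 1)
        weights = (w0, 1.0 - w0)
        err = (fuse(weights, expert_outputs) - target) ** 2
        if err < best_err:
            best_w, best_err = weights, err
    return best_w

experts = (2.0, 6.0)   # two experts' scalar predictions (illustrative)
target = 5.0           # ground-truth solution the optimizer aims for
w = slow_optimize(experts, target)
fused = fuse(w, experts)
```

A trained meta-network would map the expert outputs straight to `fused`, amortizing the grid search into a single forward pass.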
-
Publication number: 20210295025
Abstract: Images of a plurality of users are captured concurrently with the plurality of users evincing a plurality of expressions. The images are captured using one or more eye tracking sensors implemented in one or more head mounted devices (HMDs) worn by the plurality of users. A machine learnt algorithm is trained to infer labels indicative of expressions of the users in the images. A live image of a user is captured using an eye tracking sensor implemented in an HMD worn by the user. A label of an expression evinced by the user in the live image is inferred using the machine learnt algorithm that has been trained to predict labels indicative of expressions. The images of the users and the live image can be personalized by combining the images with personalization images of the users evincing a subset of the expressions.
Type: Application
Filed: June 4, 2021
Publication date: September 23, 2021
Inventors: Avneesh Sud, Steven Hickson, Vivek Kwatra, Nicholas Dufour
-
Patent number: 11042729
Abstract: Images of a plurality of users are captured concurrently with the plurality of users evincing a plurality of expressions. The images are captured using one or more eye tracking sensors implemented in one or more head mounted devices (HMDs) worn by the plurality of users. A machine learnt algorithm is trained to infer labels indicative of expressions of the users in the images. A live image of a user is captured using an eye tracking sensor implemented in an HMD worn by the user. A label of an expression evinced by the user in the live image is inferred using the machine learnt algorithm that has been trained to predict labels indicative of expressions. The images of the users and the live image can be personalized by combining the images with personalization images of the users evincing a subset of the expressions.
Type: Grant
Filed: December 5, 2017
Date of Patent: June 22, 2021
Assignee: Google LLC
Inventors: Avneesh Sud, Steven Hickson, Vivek Kwatra, Nicholas Dufour
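The train-then-infer pipeline in this abstract can be sketched with a trivial stand-in classifier. The feature vectors and expression names below are illustrative assumptions, not the patent's actual sensor features or model: per-expression centroids are fit on eye-tracking features gathered from many users, then a live capture is labeled by its nearest centroid.

```python
# Hedged sketch of the expression-labeling pipeline (features and labels
# are made up for illustration): nearest-centroid classification stands in
# for the patent's machine-learnt algorithm.

def centroid(vectors):
    """Component-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def train(samples):
    """samples: iterable of (feature_vector, expression_label)."""
    by_label = {}
    for vec, label in samples:
        by_label.setdefault(label, []).append(vec)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def infer(model, vec):
    """Return the expression whose centroid is nearest to the live features."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(model[label], vec))
    return min(model, key=dist2)

# Toy "training" captures: (eye-openness, brow-raise) pairs per expression.
samples = [
    ((0.9, 0.8), "surprise"), ((0.8, 0.9), "surprise"),
    ((0.5, 0.1), "neutral"),  ((0.4, 0.2), "neutral"),
]
model = train(samples)
live_label = infer(model, (0.85, 0.85))   # label a "live" eye-sensor capture
```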
-
Patent number: 10528620
Abstract: For each image in a collection of images to be searched, the image is represented as a collection of color-edge words, where each color-edge word includes location information, shape information, and color information. The images may be indexed based on the color-edge words. A user-generated sketch is received as a query and represented as a collection of color-edge words. The collection of color-edge words representing the sketch is compared to the image index to identify search results based on a combination of location similarity, shape similarity, and color similarity.
Type: Grant
Filed: December 15, 2017
Date of Patent: January 7, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Changhu Wang, Avneesh Sud, Lei Zhang, Xinghai Sun
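The matching step described here can be sketched in a few lines. The word layout below (grid cell for location, quantized orientation for shape, quantized color) is an assumption about what a "color-edge word" might contain, not the patented encoding, and Jaccard overlap stands in for the patent's combined similarity.

```python
# Hedged sketch of sketch-to-image search over "color-edge words"
# (representation and similarity measure are illustrative assumptions).

def words(features):
    """features: iterable of (cell, orientation, color) triples ->
    a set of hashable color-edge words combining location, shape, color."""
    return {(cell, orient, color) for cell, orient, color in features}

def similarity(a, b):
    """Set overlap between two collections of color-edge words; a shared
    word agrees in location, shape, and color simultaneously."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# A toy "image index": image name -> its color-edge words.
index = {
    "sun.png":  words([((0, 0), "arc", "yellow"), ((1, 1), "arc", "yellow")]),
    "boat.png": words([((2, 2), "line", "blue"), ((2, 3), "line", "brown")]),
}
# A user-drawn sketch, encoded the same way, used as the query.
query = words([((0, 0), "arc", "yellow"), ((1, 1), "arc", "orange")])
best = max(index, key=lambda name: similarity(query, index[name]))
```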
-
Patent number: 10269177
Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
Type: Grant
Filed: June 7, 2017
Date of Patent: April 23, 2019
Assignee: Google LLC
Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
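The gaze-indexed database lookup mentioned in this abstract can be sketched as a nearest-key selection. The gaze keys, units, and model records below are illustrative assumptions: the stored face model whose gaze key is closest to the live gaze estimate is the one used to fill in the HMD-occluded region.

```python
# Hedged sketch of selecting a 3-D face model from a database indexed by
# eye gaze direction (keys and records are made up for illustration).

def nearest_model(db, gaze):
    """db maps (yaw, pitch) gaze keys in degrees to face-model records;
    return the record whose key is nearest the live gaze estimate."""
    def dist2(key):
        return (key[0] - gaze[0]) ** 2 + (key[1] - gaze[1]) ** 2
    return db[min(db, key=dist2)]

# Toy database: one captured face model per coarse gaze direction.
face_db = {
    (0, 0):  "model_gaze_center",
    (30, 0): "model_gaze_right",
    (0, 20): "model_gaze_up",
}
chosen = nearest_model(face_db, (27.0, 3.0))   # live eye-tracker estimate
```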
-
Publication number: 20180314881
Abstract: Images of a plurality of users are captured concurrently with the plurality of users evincing a plurality of expressions. The images are captured using one or more eye tracking sensors implemented in one or more head mounted devices (HMDs) worn by the plurality of users. A machine learnt algorithm is trained to infer labels indicative of expressions of the users in the images. A live image of a user is captured using an eye tracking sensor implemented in an HMD worn by the user. A label of an expression evinced by the user in the live image is inferred using the machine learnt algorithm that has been trained to predict labels indicative of expressions. The images of the users and the live image can be personalized by combining the images with personalization images of the users evincing a subset of the expressions.
Type: Application
Filed: December 5, 2017
Publication date: November 1, 2018
Inventors: Avneesh Sud, Steven Hickson, Vivek Kwatra, Nicholas Dufour
-
Publication number: 20180129658
Abstract: For each image in a collection of images to be searched, the image is represented as a collection of color-edge words, where each color-edge word includes location information, shape information, and color information. The images may be indexed based on the color-edge words. A user-generated sketch is received as a query and represented as a collection of color-edge words. The collection of color-edge words representing the sketch is compared to the image index to identify search results based on a combination of location similarity, shape similarity, and color similarity.
Type: Application
Filed: December 15, 2017
Publication date: May 10, 2018
Inventors: Changhu Wang, Avneesh Sud, Lei Zhang, Xinghai Sun
-
Publication number: 20180101984
Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
Type: Application
Filed: June 7, 2017
Publication date: April 12, 2018
Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
-
Publication number: 20180101227
Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
Type: Application
Filed: June 7, 2017
Publication date: April 12, 2018
Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
-
Patent number: 9875253
Abstract: For each image in a collection of images to be searched, the image is represented as a collection of color-edge words, where each color-edge word includes location information, shape information, and color information. The images may be indexed based on the color-edge words. A user-generated sketch is received as a query and represented as a collection of color-edge words. The collection of color-edge words representing the sketch is compared to the image index to identify search results based on a combination of location similarity, shape similarity, and color similarity.
Type: Grant
Filed: June 14, 2013
Date of Patent: January 23, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Changhu Wang, Avneesh Sud, Lei Zhang, Xinghai Sun
-
Patent number: 9569868
Abstract: A system described herein includes a receiver component that receives a tree-structured dataset that includes multiple branches that are hierarchically related to one another. The system also includes an executor component that causes a programmable graphical processing unit to generate a Voronoi treemap based at least in part upon the tree-structured dataset, wherein the Voronoi treemap comprises a plurality of subareas that correspond to the multiple branches, and wherein the Voronoi treemap represents hierarchical relationships between the multiple branches.
Type: Grant
Filed: July 28, 2014
Date of Patent: February 14, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Avneesh Sud, Danyel Fisher
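The core geometric step behind a Voronoi treemap can be sketched as a discrete Voronoi pass. The CPU loop below is a stand-in for the GPU computation the patent describes, and the sites are made up: each grid cell is assigned to its nearest site, so each branch of the tree gets a contiguous subarea whose cells can then be counted or rebalanced.

```python
# Hedged sketch of a discrete (unweighted) Voronoi pass for a treemap
# (a CPU stand-in for the patent's GPU computation; sites are illustrative).

def voronoi_labels(width, height, sites):
    """sites: dict of branch_name -> (x, y) site position.
    Returns a dict mapping each grid cell to the branch that owns it."""
    def owner(x, y):
        return min(sites, key=lambda s: (sites[s][0] - x) ** 2
                                        + (sites[s][1] - y) ** 2)
    return {(x, y): owner(x, y) for x in range(width) for y in range(height)}

# Two branches of a toy tree, each with one site in an 8x8 canvas.
sites = {"branch_a": (1, 1), "branch_b": (6, 6)}
labels = voronoi_labels(8, 8, sites)
# Subarea sizes: how many cells each branch's Voronoi region covers.
areas = {name: sum(1 for v in labels.values() if v == name) for name in sites}
```

A full treemap implementation would additionally weight the distance by each branch's size and iterate until subareas match the weights; this sketch shows only the per-cell assignment that the GPU parallelizes.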
-
Patent number: 9552421
Abstract: Simplified collaborative searching uses pattern recognition, such as facial recognition and motion recognition, to provide hands-free functionality. Users join a collaborative search by placing themselves within the field of view of a camera communicationally coupled to a computing device that performs facial recognition and identifies the users, thereby adding them to the collaboration. Users also join by performing simple movements with a portable computing device, such as the ubiquitous mobile phone. A collaboration component tracks the users in the collaboration and identifies them to a search engine, thereby enabling the search engine to perform a collaborative search. The collaboration component also disseminates the collaborative recommendations, either automatically or based upon explicit requests triggered by pattern recognition, including motion recognition and touch recognition.
Type: Grant
Filed: March 15, 2013
Date of Patent: January 24, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Aidan C. Crook, Avneesh Sud, Xiaoyuan Cui, Ohil K. Manyam
-
Patent number: 9507803
Abstract: Systems, methods, and computer-readable storage media for web-scale visual search capable of using a combination of visual input modalities are provided. An edgel index is created that includes shape-descriptors, including edgel-based representations, that correspond to each of a plurality of images. Each edgel-based representation includes pixels that depict edges or boundary contours of an image and is created, at least in part, by segmenting the image into a plurality of image segments and performing a multi-phase contour detection on each segment.
Type: Grant
Filed: November 11, 2013
Date of Patent: November 29, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Avneesh Sud, Rajeev Prasad, Ayman Malek Abdel Hamid Kaheel, Pragyana Mishra, Sumit Amar, Kancheng Cao
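The edgel-based representation can be sketched with a minimal edge detector. A single-pass gradient threshold below stands in for the patent's multi-phase contour detection over image segments: an "edgel" here is simply a pixel whose local intensity change exceeds a threshold, and the resulting pixel set is the kind of per-image record an edgel index would store.

```python
# Hedged sketch of extracting edgels from a tiny grayscale image
# (single-pass gradient thresholding, not the patented multi-phase method).

def edgels(image, threshold=0.5):
    """image: list of equal-length rows of floats in [0, 1].
    Returns the set of (x, y) pixels lying on strong intensity edges."""
    h, w = len(image), len(image[0])
    out = set()
    for y in range(h):
        for x in range(w):
            # Forward differences; zero at the right/bottom borders.
            gx = image[y][x + 1] - image[y][x] if x + 1 < w else 0.0
            gy = image[y + 1][x] - image[y][x] if y + 1 < h else 0.0
            if abs(gx) >= threshold or abs(gy) >= threshold:
                out.add((x, y))
    return out

# A 4x4 image: dark left half, bright right half -> one vertical boundary.
img = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
boundary = edgels(img)
```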
-
Publication number: 20160132498
Abstract: For each image in a collection of images to be searched, the image is represented as a collection of color-edge words, where each color-edge word includes location information, shape information, and color information. The images may be indexed based on the color-edge words. A user-generated sketch is received as a query and represented as a collection of color-edge words. The collection of color-edge words representing the sketch is compared to the image index to identify search results based on a combination of location similarity, shape similarity, and color similarity.
Type: Application
Filed: June 14, 2013
Publication date: May 12, 2016
Inventors: Changhu Wang, Avneesh Sud, Lei Zhang, Xinghai Sun
-
Publication number: 20140333629
Abstract: A system described herein includes a receiver component that receives a tree-structured dataset that includes multiple branches that are hierarchically related to one another. The system also includes an executor component that causes a programmable graphical processing unit to generate a Voronoi treemap based at least in part upon the tree-structured dataset, wherein the Voronoi treemap comprises a plurality of subareas that correspond to the multiple branches, and wherein the Voronoi treemap represents hierarchical relationships between the multiple branches.
Type: Application
Filed: July 28, 2014
Publication date: November 13, 2014
Inventors: Avneesh Sud, Danyel Fisher
-
Publication number: 20140280299
Abstract: Simplified collaborative searching uses pattern recognition, such as facial recognition and motion recognition, to provide hands-free functionality. Users join a collaborative search by placing themselves within the field of view of a camera communicationally coupled to a computing device that performs facial recognition and identifies the users, thereby adding them to the collaboration. Users also join by performing simple movements with a portable computing device, such as the ubiquitous mobile phone. A collaboration component tracks the users in the collaboration and identifies them to a search engine, thereby enabling the search engine to perform a collaborative search. The collaboration component also disseminates the collaborative recommendations, either automatically or based upon explicit requests triggered by pattern recognition, including motion recognition and touch recognition.
Type: Application
Filed: March 15, 2013
Publication date: September 18, 2014
Applicant: Microsoft Corporation
Inventors: Aidan C. Crook, Avneesh Sud, Xiaoyuan Cui, Ohil K. Manyam
-
Patent number: 8803883
Abstract: A system described herein includes a receiver component that receives a tree-structured dataset that includes multiple branches that are hierarchically related to one another. The system also includes an executor component that causes a programmable graphical processing unit to generate a Voronoi treemap based at least in part upon the tree-structured dataset, wherein the Voronoi treemap comprises a plurality of subareas that correspond to the multiple branches, and wherein the Voronoi treemap represents hierarchical relationships between the multiple branches.
Type: Grant
Filed: May 29, 2009
Date of Patent: August 12, 2014
Assignee: Microsoft Corporation
Inventors: Avneesh Sud, Danyel Fisher
-
Publication number: 20140074852
Abstract: Systems, methods, and computer-readable storage media for web-scale visual search capable of using a combination of visual input modalities are provided. An edgel index is created that includes shape-descriptors, including edgel-based representations, that correspond to each of a plurality of images. Each edgel-based representation includes pixels that depict edges or boundary contours of an image and is created, at least in part, by segmenting the image into a plurality of image segments and performing a multi-phase contour detection on each segment.
Type: Application
Filed: November 11, 2013
Publication date: March 13, 2014
Applicant: Microsoft Corporation
Inventors: Avneesh Sud, Rajeev Prasad, Ayman Malek Abdel Hamid Kaheel, Pragyana Mishra, Sumit Amar, Kancheng Cao
-
Patent number: 8589410
Abstract: Systems, methods, and computer-readable storage media for web-scale visual search capable of using a combination of visual input modalities are provided. An edgel index is created that includes shape-descriptors, including edgel-based representations, that correspond to each of a plurality of images. Each edgel-based representation includes pixels that depict edges or boundary contours of an image and is created, at least in part, by segmenting the image into a plurality of image segments and performing a multi-phase contour detection on each segment.
Type: Grant
Filed: November 21, 2011
Date of Patent: November 19, 2013
Assignee: Microsoft Corporation
Inventors: Avneesh Sud, Rajeev Prasad, Ayman Malek Abdel Hamid Kaheel, Pragyana Mishra, Sumit Amar, Kancheng Cao
-
Publication number: 20130097181
Abstract: Systems, methods, and computer-readable storage media for web-scale visual search capable of using a combination of visual input modalities are provided. An edgel index is created that includes shape-descriptors, including edgel-based representations, that correspond to each of a plurality of images. Each edgel-based representation includes pixels that depict edges or boundary contours of an image and is created, at least in part, by segmenting the image into a plurality of image segments and performing a multi-phase contour detection on each segment.
Type: Application
Filed: November 21, 2011
Publication date: April 18, 2013
Applicant: Microsoft Corporation
Inventors: Avneesh Sud, Rajeev Prasad, Ayman Malek Abdel Hamid Kaheel, Pragyana K. Mishra, Sumit Amar, Kancheng Cao