Patents by Inventor Manan Goel
Manan Goel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 11676278
  Abstract: Techniques related to automatically segmenting video frames into per pixel dense object of interest and background regions are discussed. Such techniques include applying a segmentation convolutional neural network (CNN) to a CNN input including a current video frame, a previous video frame, an object of interest indicator frame, a motion frame, and multiple feature frames, each including features compressed from feature layers of an object classification convolutional neural network as applied to the current video frame, to generate candidate segmentations, and selecting one of the candidate segmentations as a final segmentation of the current video frame.
  Type: Grant
  Filed: September 26, 2019
  Date of Patent: June 13, 2023
  Assignee: Intel Corporation
  Inventors: Anthony Rhodes, Manan Goel
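  The sketch below (an illustration only, not the patented method) shows how a multi-channel segmentation input like the one described above might be assembled; the resolution, channel counts, and the frame-difference motion frame are assumptions, and random arrays stand in for the compressed classifier features.

  ```python
  # Illustrative sketch: assembling a multi-channel CNN input from a current frame,
  # previous frame, object-of-interest indicator, motion frame, and feature frames.
  # All shapes and the simple frame-difference motion frame are assumptions.
  import numpy as np

  H, W = 480, 854          # assumed frame resolution
  NUM_FEATURE_FRAMES = 8   # assumed number of compressed feature frames

  current_frame = np.random.rand(H, W, 3).astype(np.float32)   # RGB, normalized
  previous_frame = np.random.rand(H, W, 3).astype(np.float32)
  object_indicator = np.zeros((H, W, 1), dtype=np.float32)     # e.g. a mark around a user click
  object_indicator[200:220, 400:420] = 1.0
  motion_frame = np.abs(current_frame - previous_frame).mean(axis=2, keepdims=True)
  feature_frames = np.random.rand(H, W, NUM_FEATURE_FRAMES).astype(np.float32)  # stand-in for compressed classifier features

  # Concatenate along the channel axis to form the segmentation-network input.
  cnn_input = np.concatenate(
      [current_frame, previous_frame, object_indicator, motion_frame, feature_frames],
      axis=2,
  )
  print(cnn_input.shape)  # (480, 854, 16) with the assumptions above
  ```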
- Publication number: 20230048187
  Abstract: An athletic performance monitoring system, including a gesture recognition processor configured to execute gesture recognition processes. Interaction with the athletic performance monitoring system may be based, at least in part, on gestures performed by a user, and may offer an alternative to making selections on the athletic performance monitoring system using physical buttons, which may be cumbersome and/or inconvenient to use while performing an athletic activity. Additionally, recognized gestures may be used to select one or more operational modes for the athletic performance monitoring system, such that a reduction in power consumption may be achieved.
  Type: Application
  Filed: October 26, 2022
  Publication date: February 16, 2023
  Inventors: Manan Goel, Kate Richmond, Peter Laigar, David Switzer, Sebastian Imlay, Michael Lapinsky
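  As a rough illustration of the power-saving idea in this abstract, the snippet below maps recognized gesture labels to hypothetical operational modes; the gesture names, modes, and mapping are invented for the example and are not taken from the claims.

  ```python
  # Minimal sketch: selecting an operational mode from a recognized gesture so
  # that lower-power modes can be entered. All labels and modes are placeholders.
  from enum import Enum

  class Mode(Enum):
      FULL_DISPLAY = "full_display"   # screen and all sensors active
      SENSOR_ONLY = "sensor_only"     # display off, sensors still sampling
      LOW_POWER = "low_power"         # reduced sampling rate

  # Hypothetical mapping from a recognized gesture label to a device mode.
  GESTURE_TO_MODE = {
      "wrist_raise": Mode.FULL_DISPLAY,
      "double_tap": Mode.SENSOR_ONLY,
      "arm_at_rest": Mode.LOW_POWER,
  }

  def select_mode(recognized_gesture: str, current_mode: Mode) -> Mode:
      """Return the mode implied by a recognized gesture, keeping the current
      mode if the gesture is unknown."""
      return GESTURE_TO_MODE.get(recognized_gesture, current_mode)

  print(select_mode("arm_at_rest", Mode.FULL_DISPLAY))  # Mode.LOW_POWER
  ```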
- Publication number: 20220397962
  Abstract: Gesture-controlled virtual reality systems and methods of controlling the same are disclosed herein. An example apparatus includes an on-body sensor to output first signals associated with at least one of movement of a body part of a user or a position of the body part relative to a virtual object and an off-body sensor to output second signals associated with at least one of the movement or the position relative to the virtual object. The apparatus also includes at least one processor to generate gesture data based on at least one of the first or second signals, generate position data based on at least one of the first or second signals, determine an intended action of the user relative to the virtual object based on the position data and the gesture data, and generate an output of the virtual object in response to the intended action.
  Type: Application
  Filed: May 26, 2022
  Publication date: December 15, 2022
  Inventors: Manan Goel, Saurin Shah, Lakshman Krishnamurthy, Steven Xing, Matthew Pinner, Kevin James Doucette
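  The toy sketch below illustrates the general flow described in this abstract: gesture and position estimates derived from on-body and off-body signals are combined to decide an intended action on a virtual object. The stand-in classifier, the averaging of position estimates, and the 10 cm proximity threshold are all assumptions.

  ```python
  # Rough sketch of fusing on-body and off-body sensor signals into gesture and
  # position data, then deciding an intended action on a virtual object.
  import numpy as np

  def gesture_from_signals(on_body: np.ndarray, off_body: np.ndarray) -> str:
      # Stand-in classifier: a large on-body accelerometer magnitude counts as a "grab".
      return "grab" if np.linalg.norm(on_body) > 1.5 else "idle"

  def position_from_signals(on_body: np.ndarray, off_body: np.ndarray) -> np.ndarray:
      # Stand-in estimator: average the two position estimates.
      return (on_body + off_body) / 2.0

  def intended_action(hand_pos: np.ndarray, object_pos: np.ndarray, gesture: str) -> str:
      near_object = np.linalg.norm(hand_pos - object_pos) < 0.1  # assumed 10 cm threshold
      if gesture == "grab" and near_object:
          return "pick_up_object"
      return "no_action"

  hand = position_from_signals(np.array([0.20, 0.10, 0.30]), np.array([0.22, 0.12, 0.28]))
  gesture = gesture_from_signals(np.array([1.0, 1.2, 0.9]), np.array([0.0, 0.0, 0.0]))
  print(intended_action(hand, np.array([0.2, 0.1, 0.3]), gesture))  # pick_up_object
  ```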
- Patent number: 11513610
  Abstract: An athletic performance monitoring system, including a gesture recognition processor configured to execute gesture recognition processes. Interaction with the athletic performance monitoring system may be based, at least in part, on gestures performed by a user, and may offer an alternative to making selections on the athletic performance monitoring system using physical buttons, which may be cumbersome and/or inconvenient to use while performing an athletic activity. Additionally, recognized gestures may be used to select one or more operational modes for the athletic performance monitoring system, such that a reduction in power consumption may be achieved.
  Type: Grant
  Filed: December 29, 2021
  Date of Patent: November 29, 2022
  Assignee: NIKE, Inc.
  Inventors: Manan Goel, Kate Cummings, Peter Laigar, David Switzer, Sebastian Imlay, Michael Lapinsky
- Patent number: 11347319
  Abstract: Gesture-controlled virtual reality systems and methods of controlling the same are disclosed herein. An example apparatus includes an on-body sensor to output first signals associated with at least one of movement of a body part of a user or a position of the body part relative to a virtual object and an off-body sensor to output second signals associated with at least one of the movement or the position relative to the virtual object. The apparatus also includes at least one processor to generate gesture data based on at least one of the first or second signals, generate position data based on at least one of the first or second signals, determine an intended action of the user relative to the virtual object based on the position data and the gesture data, and generate an output of the virtual object in response to the intended action.
  Type: Grant
  Filed: October 19, 2020
  Date of Patent: May 31, 2022
  Assignee: Intel Corporation
  Inventors: Manan Goel, Saurin Shah, Lakshman Krishnamurthy, Steven Xing, Matthew Pinner, Kevin James Doucette
- Publication number: 20220130130
  Abstract: An apparatus, method, system and computer readable medium for video tracking. An exemplar crop is selected to be tracked in an initial frame of a video. Bayesian optimization is applied with each subsequent frame of the video by building a surrogate model of an objective function using Gaussian Process Regression (GPR) based on similarity scores of candidate crops collected from a search space in a current frame of the video. A next candidate crop in the search space is determined using an acquisition function. The next candidate crop is compared to the exemplar crop using a Siamese neural network. Comparisons of new candidate crops to the exemplar crop are made using the Siamese neural network until the exemplar crop has been found in the current frame. The new candidate crops are selected based on an updated surrogate model.
  Type: Application
  Filed: January 10, 2022
  Publication date: April 28, 2022
  Applicant: Intel Corporation
  Inventors: Anthony Rhodes, Manan Goel
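  The snippet below sketches Bayesian-optimization-style crop search in the spirit of this abstract, using scikit-learn's Gaussian process regressor as the surrogate and an upper-confidence-bound acquisition rule (the publication does not specify the acquisition function); a toy similarity function stands in for the Siamese network.

  ```python
  # Sketch: GPR surrogate over candidate crop centres, refined each iteration,
  # with a UCB acquisition function proposing the next crop to score.
  import numpy as np
  from sklearn.gaussian_process import GaussianProcessRegressor
  from sklearn.gaussian_process.kernels import RBF

  TARGET = np.array([120.0, 80.0])  # toy "true" object location in the current frame

  def siamese_similarity(xy: np.ndarray) -> float:
      # Stand-in for the Siamese-network similarity score of the crop centred at xy.
      return float(np.exp(-np.sum((xy - TARGET) ** 2) / (2 * 30.0 ** 2)))

  # Search space: candidate crop centres on a coarse grid over the frame.
  grid = np.array([[x, y] for x in range(0, 320, 8) for y in range(0, 240, 8)], dtype=float)

  rng = np.random.default_rng(0)
  sampled = set(rng.choice(len(grid), size=5, replace=False).tolist())
  X = grid[list(sampled)]
  y = np.array([siamese_similarity(x) for x in X])

  # Surrogate model of the similarity landscape, rebuilt as scores accumulate.
  gpr = GaussianProcessRegressor(kernel=RBF(length_scale=40.0), alpha=1e-6, normalize_y=True)
  for _ in range(15):
      gpr.fit(X, y)
      mean, std = gpr.predict(grid, return_std=True)
      ucb = mean + 2.0 * std               # acquisition: upper confidence bound
      ucb[list(sampled)] = -np.inf         # do not revisit crops already scored
      idx = int(np.argmax(ucb))
      sampled.add(idx)
      X = np.vstack([X, grid[idx]])
      y = np.append(y, siamese_similarity(grid[idx]))

  best = X[int(np.argmax(y))]
  print("best crop centre so far:", best, "score:", round(float(y.max()), 3))
  ```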
- Publication number: 20220121291
  Abstract: An athletic performance monitoring system, including a gesture recognition processor configured to execute gesture recognition processes. Interaction with the athletic performance monitoring system may be based, at least in part, on gestures performed by a user, and may offer an alternative to making selections on the athletic performance monitoring system using physical buttons, which may be cumbersome and/or inconvenient to use while performing an athletic activity. Additionally, recognized gestures may be used to select one or more operational modes for the athletic performance monitoring system, such that a reduction in power consumption may be achieved.
  Type: Application
  Filed: December 29, 2021
  Publication date: April 21, 2022
  Inventors: Manan Goel, Kate Cummings, Peter Laigar, David Switzer, Sebastian Imlay, Michael Lapinsky
- Patent number: 11243611
  Abstract: A wrist-worn athletic performance monitoring system, including a gesture recognition processor configured to execute gesture recognition processes. Interaction with the performance monitoring system may be based, at least in part, on gestures performed by the user, and may offer an alternative to making selections on the performance monitoring system using physical buttons, which may be cumbersome and/or inconvenient to use while performing an athletic activity. Additionally, recognized gestures may be used to select one or more operational modes for the athletic performance monitoring system, such that a reduction in power consumption may be achieved.
  Type: Grant
  Filed: August 7, 2014
  Date of Patent: February 8, 2022
  Assignee: NIKE, Inc.
  Inventors: Manan Goel, Kate Cummings, Peter Laigar, David Switzer, Sebastian Imlay, Michael Lapinsky
- Patent number: 11227179
  Abstract: An apparatus, method, system and computer readable medium for video tracking. An exemplar crop is selected to be tracked in an initial frame of a video. Bayesian optimization is applied with each subsequent frame of the video by building a surrogate model of an objective function using Gaussian Process Regression (GPR) based on similarity scores of candidate crops collected from a search space in a current frame of the video. A next candidate crop in the search space is determined using an acquisition function. The next candidate crop is compared to the exemplar crop using a Siamese neural network. Comparisons of new candidate crops to the exemplar crop are made using the Siamese neural network until the exemplar crop has been found in the current frame. The new candidate crops are selected based on an updated surrogate model.
  Type: Grant
  Filed: September 27, 2019
  Date of Patent: January 18, 2022
  Assignee: Intel Corporation
  Inventors: Anthony Rhodes, Manan Goel
- Publication number: 20210225002
  Abstract: Various embodiments are generally directed to techniques for image segmentation utilizing context, such as with a machine learning (ML) model that injects context into various training stages. Many embodiments utilize one or more of an encoder-decoder model topology and select criteria and parameters in hyper-parameter optimization (HPO) to conduct the best model neural architecture search (NAS). Some embodiments are particularly directed to resizing context frames to a resolution that corresponds with a particular stage of decoding. In several embodiments, the context frames are concatenated with one or more of data from a previous decoding stage and data from a corresponding encoding stage prior to being provided as input to a next decoding stage.
  Type: Application
  Filed: January 28, 2021
  Publication date: July 22, 2021
  Applicant: Intel Corporation
  Inventors: Ke Ding, Anthony Rhodes, Manan Goel
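  A minimal sketch of the context-injection step described above, assuming arbitrary channel counts and spatial sizes: the context frames are resized to the decoder stage's resolution and concatenated with the previous decoder output and the matching encoder features.

  ```python
  # Sketch: resize context frames to a decoding stage's resolution and concatenate
  # them with the previous decoder output and the corresponding encoder features.
  # Channel counts and spatial sizes are arbitrary assumptions.
  import torch
  import torch.nn.functional as F

  batch = 1
  context_frames = torch.randn(batch, 3, 256, 256)   # e.g. frames providing context
  prev_decoder_out = torch.randn(batch, 64, 64, 64)  # output of the previous decoding stage
  encoder_skip = torch.randn(batch, 64, 64, 64)      # features from the corresponding encoder stage

  # Resize the context to the current decoder stage's spatial resolution.
  context_resized = F.interpolate(context_frames, size=prev_decoder_out.shape[-2:],
                                  mode="bilinear", align_corners=False)

  # Concatenate along the channel dimension to form the next decoding stage's input.
  next_stage_input = torch.cat([prev_decoder_out, encoder_skip, context_resized], dim=1)
  print(next_stage_input.shape)  # torch.Size([1, 131, 64, 64])
  ```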
- Publication number: 20210150329
  Abstract: Methods, systems and apparatuses may provide for technology that trains a neural network by inputting video data to the neural network, determining a boundary loss function for the neural network, and selecting weights for the neural network based at least in part on the boundary loss function, wherein the neural network outputs a pixel-level segmentation of one or more objects depicted in the video data. The technology may also operate the neural network by accepting video data and an initial feature set, conducting a tensor decomposition on the initial feature set to obtain a reduced feature set, and outputting a pixel-level segmentation of object(s) depicted in the video data based at least in part on the reduced feature set.
  Type: Application
  Filed: November 14, 2019
  Publication date: May 20, 2021
  Inventors: Anthony Rhodes, Manan Goel
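  The abstract does not name the tensor decomposition used; as an illustration of the feature-reduction idea, the sketch below applies a truncated SVD along the channel mode of an assumed feature tensor.

  ```python
  # Illustrative sketch: reduce an initial feature set with a low-rank
  # decomposition before segmentation (truncated SVD shown purely as an example).
  import numpy as np

  C, H, W = 256, 32, 32                # assumed initial feature set: 256 channels
  features = np.random.rand(C, H, W).astype(np.float32)

  # Unfold the tensor along the channel mode and keep the top-k components.
  k = 32
  unfolded = features.reshape(C, H * W)              # (256, 1024)
  U, S, Vt = np.linalg.svd(unfolded, full_matrices=False)
  reduced = np.diag(S[:k]) @ Vt[:k]                  # (32, 1024): reduced feature set
  reduced_features = reduced.reshape(k, H, W)

  print(features.shape, "->", reduced_features.shape)  # (256, 32, 32) -> (32, 32, 32)
  ```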
- Publication number: 20210118146
  Abstract: Methods, systems, and apparatus for high-fidelity vision tasks using deep neural networks are disclosed. An example apparatus includes a feature extractor to extract low-level features and edge-enhanced features of an input image processed using a convolutional neural network, an eidetic memory block generator to generate an eidetic memory block using the extracted low-level features or the extracted edge-enhanced features, and an interactive segmentation network to perform image segmentation using the eidetic memory block, the eidetic memory block used to propagate domain-persistent features through the segmentation network.
  Type: Application
  Filed: December 23, 2020
  Publication date: April 22, 2021
  Inventors: Anthony Rhodes, Ke Ding, Manan Goel
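  Purely as an illustration of combining low-level and edge-enhanced features into a reusable block, the sketch below uses a random convolution as a stand-in early CNN layer and Sobel filtering for edge enhancement; neither choice is taken from the publication.

  ```python
  # Sketch: build a persistent feature block from low-level and edge-enhanced
  # features of an input image, to be reused by later segmentation stages.
  import torch
  import torch.nn.functional as F

  image = torch.randn(1, 3, 128, 128)

  # Low-level features: a single 3x3 convolution as a stand-in for an early CNN layer.
  low_level = F.conv2d(image, torch.randn(16, 3, 3, 3), padding=1)

  # Edge-enhanced features: Sobel filters applied to a grayscale version of the image.
  gray = image.mean(dim=1, keepdim=True)
  sobel_x = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
  sobel_y = sobel_x.transpose(2, 3)
  edges = torch.sqrt(F.conv2d(gray, sobel_x, padding=1) ** 2 +
                     F.conv2d(gray, sobel_y, padding=1) ** 2)

  # Combine into a single block that later stages can reuse.
  memory_block = torch.cat([low_level, edges], dim=1)
  print(memory_block.shape)  # torch.Size([1, 17, 128, 128])
  ```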
- Publication number: 20210110198
  Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for interactive image segmentation. An example apparatus includes an inception controller to execute an inception sublayer of a convolutional neural network (CNN) including two or more inception-atrous-collation (IAC) layers, the inception sublayer including two or more convolutions including respective kernels of varying sizes to generate multi-scale inception features, the inception sublayer to receive one or more context features indicative of user input; an atrous controller to execute an atrous sublayer of the CNN, the atrous sublayer including two or more atrous convolutions including respective kernels of varying sizes to generate multi-scale atrous features; and a collation controller to execute a collation sublayer of the CNN to collate the multi-scale inception features, the multi-scale atrous features, and eidetic memory features.
  Type: Application
  Filed: December 22, 2020
  Publication date: April 15, 2021
  Inventors: Anthony Rhodes, Manan Goel, Ke Ding
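  A toy layer in the spirit of the inception-atrous-collation structure described above: parallel convolutions with varying kernel sizes, parallel dilated convolutions, and a 1x1 collation convolution. Kernel sizes, dilation rates, and channel counts are illustrative choices, and the eidetic memory features mentioned in the abstract are omitted.

  ```python
  # Sketch of an inception-atrous-collation style layer (illustrative only).
  import torch
  import torch.nn as nn

  class IACLayerSketch(nn.Module):
      def __init__(self, in_ch: int, out_ch: int):
          super().__init__()
          # Inception sublayer: parallel convolutions with varying kernel sizes.
          self.inception = nn.ModuleList(
              nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2) for k in (1, 3, 5)
          )
          # Atrous sublayer: parallel dilated convolutions for larger receptive fields.
          self.atrous = nn.ModuleList(
              nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d) for d in (2, 4)
          )
          # Collation sublayer: a 1x1 convolution merges all multi-scale branches.
          # (The publication also collates eidetic memory features, omitted here.)
          self.collate = nn.Conv2d(out_ch * 5, out_ch, kernel_size=1)

      def forward(self, x):
          branches = [conv(x) for conv in self.inception] + [conv(x) for conv in self.atrous]
          return self.collate(torch.cat(branches, dim=1))

  layer = IACLayerSketch(in_ch=32, out_ch=16)
  print(layer(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 16, 64, 64])
  ```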
- Publication number: 20210034163
  Abstract: Gesture-controlled virtual reality systems and methods of controlling the same are disclosed herein. An example apparatus includes an on-body sensor to output first signals associated with at least one of movement of a body part of a user or a position of the body part relative to a virtual object and an off-body sensor to output second signals associated with at least one of the movement or the position relative to the virtual object. The apparatus also includes at least one processor to generate gesture data based on at least one of the first or second signals, generate position data based on at least one of the first or second signals, determine an intended action of the user relative to the virtual object based on the position data and the gesture data, and generate an output of the virtual object in response to the intended action.
  Type: Application
  Filed: October 19, 2020
  Publication date: February 4, 2021
  Inventors: Manan Goel, Saurin Shah, Lakshman Krishnamurthy, Steven Xing, Matthew Pinner, Kevin James Doucette
- Patent number: 10900992
  Abstract: Systems and methods configured to process motion data associated with a user. The systems and methods are configured to receive motion data from a sensor, calculate motion attributes from the data, and classify the motion data using one or more mathematical models. Attributes may be calculated without classifying the motion data into an activity type (such as walking, running, swimming, or any specific or general activity). Attributes may be compared to activity models comprising motion data from several individuals, which may not include the user. Motion data within the models and attributes of the user may be independent of any activity type. Attributes may be compared to select an energy expenditure model from one or more energy expenditure models, which may be selected as a best-match to the one or more motion attributes. An energy expenditure associated with the motion of the user may then be calculated.
  Type: Grant
  Filed: October 14, 2014
  Date of Patent: January 26, 2021
  Assignee: NIKE, Inc.
  Inventors: Santoshkumar Balakrishnan, Manan Goel, Bradley W. Wilkins, Corey Dow-Hygelund, Jeff Hazel, John Schmitt
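  The sketch below illustrates the attribute-then-model-selection flow described in this abstract: simple attributes are computed from a window of accelerometer samples, the energy expenditure model whose reference attributes are the closest match is chosen, and an estimate is produced. The attributes, reference values, and linear models are placeholders, not values from the patent.

  ```python
  # Sketch: compute motion attributes, select the best-matching energy
  # expenditure model, and apply it. All numbers are made-up placeholders.
  import numpy as np

  def motion_attributes(accel: np.ndarray) -> np.ndarray:
      """Simple attributes from a window of 3-axis accelerometer data."""
      magnitude = np.linalg.norm(accel, axis=1)
      return np.array([magnitude.mean(), magnitude.std(), np.abs(np.diff(magnitude)).mean()])

  # Hypothetical energy expenditure models: reference attributes plus linear coefficients.
  MODELS = {
      "model_a": {"reference": np.array([1.0, 0.1, 0.05]), "coeffs": np.array([2.0, 5.0, 1.0])},
      "model_b": {"reference": np.array([1.8, 0.6, 0.40]), "coeffs": np.array([3.5, 8.0, 2.0])},
  }

  def estimate_energy_expenditure(accel: np.ndarray) -> float:
      attrs = motion_attributes(accel)
      # Best match: smallest distance between the user's attributes and each model's reference.
      name = min(MODELS, key=lambda m: np.linalg.norm(attrs - MODELS[m]["reference"]))
      return float(MODELS[name]["coeffs"] @ attrs)

  window = np.random.default_rng(1).normal(0.0, 0.5, size=(128, 3)) + np.array([0.0, 0.0, 1.0])
  print(round(estimate_energy_expenditure(window), 2))
  ```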
- Patent number: 10900991
  Abstract: Systems and methods configured to process motion data associated with a user. The systems and methods are configured to receive motion data from a sensor, calculate motion attributes from the data, and classify the motion data using one or more mathematical models. Attributes may be calculated without classifying the motion data into an activity type (such as walking, running, swimming, or any specific or general activity). Attributes may be compared to activity models comprising motion data from several individuals, which may not include the user. Motion data within the models and attributes of the user may be independent of any activity type. Attributes may be compared to select an energy expenditure model from one or more energy expenditure models, which may be selected as a best-match to the one or more motion attributes. An energy expenditure associated with the motion of the user may then be calculated.
  Type: Grant
  Filed: October 14, 2014
  Date of Patent: January 26, 2021
  Assignee: NIKE, Inc.
  Inventors: Santoshkumar Balakrishnan, Manan Goel, Bradley W. Wilkins, Corey Dow-Hygelund, Jeff Hazel, John Schmitt
- Patent number: 10809808
  Abstract: Gesture-controlled virtual reality systems and methods of controlling the same are disclosed herein. An example apparatus includes at least two of an on-body sensor, an off-body sensor, and an RF local triangulation system to detect at least one of a position or a movement of a body part of a user relative to a virtual instrument. The example apparatus includes a processor to generate an audio output of the virtual instrument in response to the at least one of the position or the movement.
  Type: Grant
  Filed: December 22, 2016
  Date of Patent: October 20, 2020
  Assignee: Intel Corporation
  Inventors: Manan Goel, Saurin Shah, Lakshman Krishnamurthy, Steven Xing, Matthew Pinner, Kevin James Doucette
- Patent number: 10802038
  Abstract: Systems and methods configured to process motion data associated with a user. The systems and methods are configured to receive motion data from a sensor, calculate motion attributes from the data, and classify the motion data using one or more mathematical models. Attributes may be calculated without classifying the motion data into an activity. Attributes may be compared to mathematical models. Motion data within the models and attributes of the user may be independent of any activity type. Attributes may be compared to select an energy expenditure model from one or more energy expenditure models, or an activity classification model from one or more activity classification models. An energy expenditure, or a classification of received data as a linear travel motion, may then be calculated.
  Type: Grant
  Filed: October 14, 2014
  Date of Patent: October 13, 2020
  Assignee: NIKE, Inc.
  Inventors: Santoshkumar Balakrishnan, Manan Goel, Bradley W. Wilkins, Corey Dow-Hygelund, Jeff Hazel, John Schmitt
- Publication number: 20200311948
  Abstract: Embodiments described herein provide an apparatus comprising a processor to receive an input video, convert the input video to one or more image sequences based at least in part on an analysis of a motion of one or more objects in the input video, receive an indicator of an object of interest in a first frame to be tracked through multiple frames in the input video, and apply a convolutional neural network to track the object of interest through the multiple frames in the input video. Other embodiments may be described and claimed.
  Type: Application
  Filed: March 27, 2019
  Publication date: October 1, 2020
  Applicant: Intel Corporation
  Inventors: Anthony Rhodes, Manan Goel, Swarnendu Kar
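  As an illustration of the first step described above (converting a video into image sequences based on object motion), the sketch below cuts a new sequence wherever the mean absolute difference between consecutive frames exceeds a threshold; the metric and threshold are assumptions.

  ```python
  # Sketch: split a video into image sequences at points of large frame-to-frame motion.
  import numpy as np

  def split_into_sequences(frames: list, motion_threshold: float = 0.2) -> list:
      sequences, current = [], [frames[0]]
      for prev, cur in zip(frames, frames[1:]):
          motion = np.abs(cur.astype(np.float32) - prev.astype(np.float32)).mean() / 255.0
          if motion > motion_threshold:      # large motion: start a new image sequence
              sequences.append(current)
              current = []
          current.append(cur)
      sequences.append(current)
      return sequences

  # Toy video: 10 similar frames, an abrupt change, then 10 more frames.
  video = [np.full((64, 64, 3), 30, dtype=np.uint8) for _ in range(10)]
  video += [np.full((64, 64, 3), 220, dtype=np.uint8) for _ in range(10)]
  print([len(s) for s in split_into_sequences(video)])  # [10, 10]
  ```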
- Patent number: 10789855
  Abstract: A system configured to provide feedback to a user in order to motivate the user to reach one or more energy expenditure goals. The one or more energy expenditure goals may be associated with one or more time periods or activity sessions, and the feedback may be provided using a visual display on a sensor device worn by the user and/or using audible and haptic feedback.
  Type: Grant
  Filed: October 14, 2014
  Date of Patent: September 29, 2020
  Assignee: NIKE, Inc.
  Inventors: Manan Goel, Christopher L. Andon