Patents by Inventor Manan Goel

Manan Goel has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO). Illustrative, unofficial code sketches for a few of the techniques described below appear after the listing.

  • Patent number: 11676278
    Abstract: Techniques related to automatically segmenting video frames into per-pixel dense object-of-interest and background regions are discussed. Such techniques include applying a segmentation convolutional neural network (CNN) to a CNN input that includes a current video frame, a previous video frame, an object-of-interest indicator frame, a motion frame, and multiple feature frames, each containing features compressed from feature layers of an object classification CNN as applied to the current video frame, to generate candidate segmentations, and then selecting one of the candidate segmentations as the final segmentation of the current video frame.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: June 13, 2023
    Assignee: Intel Corporation
    Inventors: Anthony Rhodes, Manan Goel
  • Publication number: 20230048187
    Abstract: An athletic performance monitoring system, including a gesture recognition processor configured to execute gesture recognition processes. Interaction with the athletic performance monitoring system may be based, at least in part, on gestures performed by a user, and may offer an alternative to making selections on the athletic performance monitoring system using physical buttons, which may be cumbersome and/or inconvenient to use while performing an athletic activity. Additionally, recognized gestures may be used to select one or more operational modes for the athletic performance monitoring system, such that a reduction in power consumption may be achieved.
    Type: Application
    Filed: October 26, 2022
    Publication date: February 16, 2023
    Inventors: Manan Goel, Kate Richmond, Peter Laigar, David Switzer, Sebastian Imlay, Michael Lapinsky
  • Publication number: 20220397962
    Abstract: Gesture-controlled virtual reality systems and methods of controlling the same are disclosed herein. An example apparatus includes an on-body sensor to output first signals associated with at least one of movement of a body part of a user or a position of the body part relative to a virtual object and an off-body sensor to output second signals associated with at least one of the movement or the position relative to the virtual object. The apparatus also includes at least one processor to generate gesture data based on at least one of the first or second signals, generate position data based on at least one of the first or second signals, determine an intended action of the user relative to the virtual object based on the position data and the gesture data, and generate an output of the virtual object in response to the intended action.
    Type: Application
    Filed: May 26, 2022
    Publication date: December 15, 2022
    Inventors: Manan Goel, Saurin Shah, Lakshman Krishnamurthy, Steven Xing, Matthew Pinner, Kevin James Doucette
  • Patent number: 11513610
    Abstract: An athletic performance monitoring system, including a gesture recognition processor configured to execute gesture recognition processes. Interaction with the athletic performance monitoring system may be based, at least in part, on gestures performed by a user, and may offer an alternative to making selections on the athletic performance monitoring system using physical buttons, which may be cumbersome and/or inconvenient to use while performing an athletic activity. Additionally, recognized gestures may be used to select one or more operational modes for the athletic performance monitoring system, such that a reduction in power consumption may be achieved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: November 29, 2022
    Assignee: NIKE, Inc.
    Inventors: Manan Goel, Kate Cummings, Peter Laigar, David Switzer, Sebastian Imlay, Michael Lapinsky
  • Patent number: 11347319
    Abstract: Gesture-controlled virtual reality systems and methods of controlling the same are disclosed herein. An example apparatus includes an on-body sensor to output first signals associated with at least one of movement of a body part of a user or a position of the body part relative to a virtual object and an off-body sensor to output second signals associated with at least one of the movement or the position relative to the virtual object. The apparatus also includes at least one processor to generate gesture data based on at least one of the first or second signals, generate position data based on at least one of the first or second signals, determine an intended action of the user relative to the virtual object based on the position data and the gesture data, and generate an output of the virtual object in response to the intended action.
    Type: Grant
    Filed: October 19, 2020
    Date of Patent: May 31, 2022
    Assignee: Intel Corporation
    Inventors: Manan Goel, Saurin Shah, Lakshman Krishnamurthy, Steven Xing, Matthew Pinner, Kevin James Doucette
  • Publication number: 20220130130
    Abstract: An apparatus, method, system and computer readable medium for video tracking. An exemplar crop is selected to be tracked in an initial frame of a video. Bayesian optimization is applied with each subsequent frame of the video by building a surrogate model of an objective function using Gaussian Process Regression (GPR) based on similarity scores of candidate crops collected from a search space in a current frame of the video. A next candidate crop in the search space is determined using an acquisition function. The next candidate crop is compared to the exemplar crop using a Siamese neural network. Comparisons of new candidate crops to the exemplar crop are made using the Siamese neural network until the exemplar crop has been found in the current frame. The new candidate crops are selected based on an updated surrogate model.
    Type: Application
    Filed: January 10, 2022
    Publication date: April 28, 2022
    Applicant: Intel Corporation
    Inventors: Anthony Rhodes, Manan Goel
  • Publication number: 20220121291
    Abstract: An athletic performance monitoring system, including a gesture recognition processor configured to execute gesture recognition processes. Interaction with the athletic performance monitoring system may be based, at least in part, on gestures performed by a user, and may offer an alternative to making selections on the athletic performance monitoring system using physical buttons, which may be cumbersome and/or inconvenient to use while performing an athletic activity. Additionally, recognized gestures may be used to select one or more operational modes for the athletic performance monitoring system, such that a reduction in power consumption may be achieved.
    Type: Application
    Filed: December 29, 2021
    Publication date: April 21, 2022
    Inventors: Manan Goel, Kate Cummings, Peter Laigar, David Switzer, Sebastian Imlay, Michael Lapinsky
  • Patent number: 11243611
    Abstract: A wrist-worn athletic performance monitoring system, including a gesture recognition processor configured to execute gesture recognition processes. Interaction with the performance monitoring system may be based, at least in part, on gestures performed by the user, and may offer an alternative to making selections on the performance monitoring system using physical buttons, which may be cumbersome and/or inconvenient to use while performing athletic activities. Additionally, recognized gestures may be used to select one or more operational modes for the athletic performance monitoring system, such that a reduction in power consumption may be achieved.
    Type: Grant
    Filed: August 7, 2014
    Date of Patent: February 8, 2022
    Assignee: NIKE, Inc.
    Inventors: Manan Goel, Kate Cummings, Peter Laigar, David Switzer, Sebastian Imlay, Michael Lapinsky
  • Patent number: 11227179
    Abstract: An apparatus, method, system and computer readable medium for video tracking. An exemplar crop is selected to be tracked in an initial frame of a video. Bayesian optimization is applied with each subsequent frame of the video by building a surrogate model of an objective function using Gaussian Process Regression (GPR) based on similarity scores of candidate crops collected from a search space in a current frame of the video. A next candidate crop in the search space is determined using an acquisition function. The next candidate crop is compared to the exemplar crop using a Siamese neural network. Comparisons of new candidate crops to the exemplar crop are made using the Siamese neural network until the exemplar crop has been found in the current frame. The new candidate crops are selected based on an updated surrogate model.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: January 18, 2022
    Assignee: Intel Corporation
    Inventors: Anthony Rhodes, Manan Goel
  • Publication number: 20210225002
    Abstract: Various embodiments are generally directed to techniques for image segmentation utilizing context, such as with a machine learning (ML) model that injects context into various training stages. Many embodiments utilize one or more of an encoder-decoder model topology and select criteria and parameters in hyper-parameter optimization (HPO) to conduct the best model neural architecture search (NAS). Some embodiments are particularly directed to resizing context frames to a resolution that corresponds with a particular stage of decoding. In several embodiments, the context frames are concatenated with one or more of data from a previous decoding stage and data from a corresponding encoding stage prior to being provided as input to a next decoding stage.
    Type: Application
    Filed: January 28, 2021
    Publication date: July 22, 2021
    Applicant: Intel Corporation
    Inventors: Ke Ding, Anthony Rhodes, Manan Goel
  • Publication number: 20210150329
    Abstract: Methods, systems and apparatuses may provide for technology that trains a neural network by inputting video data to the neural network, determining a boundary loss function for the neural network, and selecting weights for the neural network based at least in part on the boundary loss function, wherein the neural network outputs a pixel-level segmentation of one or more objects depicted in the video data. The technology may also operate the neural network by accepting video data and an initial feature set, conducting a tensor decomposition on the initial feature set to obtain a reduced feature set, and outputting a pixel-level segmentation of object(s) depicted in the video data based at least in part on the reduced feature set.
    Type: Application
    Filed: November 14, 2019
    Publication date: May 20, 2021
    Inventors: Anthony Rhodes, Manan Goel
  • Publication number: 20210118146
    Abstract: Methods, systems, and apparatus for high-fidelity vision tasks using deep neural networks are disclosed. An example apparatus includes a feature extractor to extract low-level features and edge-enhanced features of an input image processed using a convolutional neural network, an eidetic memory block generator to generate an eidetic memory block using the extracted low-level features or the extracted edge-enhanced features, and an interactive segmentation network to perform image segmentation using the eidetic memory block, the eidetic memory block used to propagate domain-persistent features through the segmentation network.
    Type: Application
    Filed: December 23, 2020
    Publication date: April 22, 2021
    Inventors: Anthony Rhodes, Ke Ding, Manan Goel
  • Publication number: 20210110198
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for interactive image segmentation. An example apparatus includes an inception controller to execute an inception sublayer of a convolutional neural network (CNN) including two or more inception-atrous-collation (IAC) layers, the inception sublayer including two or more convolutions including respective kernels of varying sizes to generate multi-scale inception features, the inception sublayer to receive one or more context features indicative of user input; an atrous controller to execute an atrous sublayer of the CNN, the atrous sublayer including two or more atrous convolutions including respective kernels of varying sizes to generate multi-scale atrous features; and a collation controller to execute a collation sublayer of the CNN to collate the multi-scale inception features, the multi-scale atrous features, and eidetic memory features.
    Type: Application
    Filed: December 22, 2020
    Publication date: April 15, 2021
    Inventors: Anthony Rhodes, Manan Goel, Ke Ding
  • Publication number: 20210034163
    Abstract: Gesture-controlled virtual reality systems and methods of controlling the same are disclosed herein. An example apparatus includes an on-body sensor to output first signals associated with at least one of movement of a body part of a user or a position of the body part relative to a virtual object and an off-body sensor to output second signals associated with at least one of the movement or the position relative to the virtual object. The apparatus also includes at least one processor to generate gesture data based on at least one of the first or second signals, generate position data based on at least one of the first or second signals, determine an intended action of the user relative to the virtual object based on the position data and the gesture data, and generate an output of the virtual object in response to the intended action.
    Type: Application
    Filed: October 19, 2020
    Publication date: February 4, 2021
    Inventors: Manan Goel, Saurin Shah, Lakshman Krishnamurthy, Steven Xing, Matthew Pinner, Kevin James Doucette
  • Patent number: 10900992
    Abstract: Systems and methods configured to process motion data associated with a user. The systems and methods are configured to receive motion data from a sensor, calculate motion attributes from the data, and classify the motion data using one or more mathematical models. Attributes may be calculated without classifying the motion data into an activity type (such as walking, running, swimming, or any specific or general activity). Attributes may be compared to activity models comprising motion data from several individuals, which may not include the user. Motion data within the models and attributes of the user may be independent of any activity type. Attributes may be compared to select an energy expenditure model from one or more energy expenditure models, which may be selected as a best-match to the one or more motion attributes. An energy expenditure associated with the motion of the user may then be calculated.
    Type: Grant
    Filed: October 14, 2014
    Date of Patent: January 26, 2021
    Assignee: NIKE, Inc.
    Inventors: Santoshkumar Balakrishnan, Manan Goel, Bradley W. Wilkins, Corey Dow-Hygelund, Jeff Hazel, John Schmitt
  • Patent number: 10900991
    Abstract: Systems and methods configured to process motion data associated with a user. The systems and methods are configured to receive motion data from a sensor, calculate motion attributes from the data, and classify the motion data using one or more mathematical models. Attributes may be calculated without classifying the motion data into an activity type (such as walking, running, swimming, or any specific or general activity). Attributes may be compared to activity models comprising motion data from several individuals, which may not include the user. Motion data within the models and attributes of the user may be independent of any activity type. Attributes may be compared to select an energy expenditure model from one or more energy expenditure models, which may be selected as a best-match to the one or more motion attributes. An energy expenditure associated with the motion of the user may then be calculated.
    Type: Grant
    Filed: October 14, 2014
    Date of Patent: January 26, 2021
    Assignee: NIKE, Inc.
    Inventors: Santoshkumar Balakrishnan, Manan Goel, Bradley W. Wilkins, Corey Dow-Hygelund, Jeff Hazel, John Schmitt
  • Patent number: 10809808
    Abstract: Gesture-controlled virtual reality systems and methods of controlling the same are disclosed herein. An example apparatus includes at least two of an on-body sensor, an off-body sensor, and an RF local triangulation system to detect at least one of a position or a movement of a body part of a user relative to a virtual instrument. The example apparatus includes a processor to generate an audio output of the virtual instrument in response to the at least one of the position or the movement.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: October 20, 2020
    Assignee: Intel Corporation
    Inventors: Manan Goel, Saurin Shah, Lakshman Krishnamurthy, Steven Xing, Matthew Pinner, Kevin James Doucette
  • Patent number: 10802038
    Abstract: Systems and methods configured to process motion data associated with a user. The systems and methods are configured to receive motion data from a sensor, calculate motion attributes from the data, and classify the motion data using one or more mathematical models. Attributes may be calculated without classifying the motion data into an activity. Attributes may be compared to mathematical models. Motion data within the models and attributes of the user may be independent of any activity type. Attributes may be compared to select an energy expenditure model from one or more energy expenditure models, or an activity classification model from one or more activity classification models. An energy expenditure, or a classification of received data as a linear travel motion, may then be calculated.
    Type: Grant
    Filed: October 14, 2014
    Date of Patent: October 13, 2020
    Assignee: NIKE, Inc.
    Inventors: Santoshkumar Balakrishnan, Manan Goel, Bradley W. Wilkins, Corey Dow-Hygelund, Jeff Hazel, John Schmitt
  • Publication number: 20200311948
    Abstract: Embodiments described herein provide an apparatus comprising a processor to receive an input video, convert the input video to one or more image sequences based at least in part on an analysis of a motion of one or more objects in the input video, receive an indicator of an object of interest in a first frame to be tracked through multiple frames in the input video, and apply a convolutional neural network to track the object of interest through the multiple frames in the input video. Other embodiments may be described and claimed.
    Type: Application
    Filed: March 27, 2019
    Publication date: October 1, 2020
    Applicant: Intel Corporation
    Inventors: Anthony Rhodes, Manan Goel, Swarnendu Kar
  • Patent number: 10789855
    Abstract: A system configured to provide feedback to a user in order to motivate said user to reach one or more energy expenditure goals. The one or more energy expenditure goals may be associated with one or more time periods or activity sessions, and the feedback may be provided using one or more of a visual display on a sensor device worn by the user, audible feedback, and/or haptic feedback.
    Type: Grant
    Filed: October 14, 2014
    Date of Patent: September 29, 2020
    Assignee: NIKE, Inc.
    Inventors: Manan Goel, Christopher L. Andon
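
Illustrative code sketches

The short sketches below are unofficial illustrations of a few of the techniques summarized in the abstracts above. Each is a minimal sketch written against stated assumptions; none is taken from the patents or from any assignee's code, and all function names, shapes, thresholds, and labels are illustrative.

The first sketch relates to patent 11676278, whose abstract describes a segmentation CNN input combining a current frame, a previous frame, an object-of-interest indicator frame, a motion frame, and several compressed feature frames. Assuming simple channel-wise stacking and illustrative shapes, the input might be assembled like this:

```python
# Hypothetical assembly of the multi-part CNN input described in patent 11676278.
# Channel counts and the concatenation axis are assumptions, not the patented design.
import numpy as np

def build_segmentation_input(current, previous, indicator, motion, feature_frames):
    """Concatenate HxWxC planes into a single HxW x (total channels) input tensor."""
    planes = [current, previous, indicator[..., None], motion] + list(feature_frames)
    return np.concatenate(planes, axis=-1)

H, W = 480, 640
cnn_input = build_segmentation_input(
    current=np.zeros((H, W, 3), np.float32),        # current RGB frame
    previous=np.zeros((H, W, 3), np.float32),       # previous RGB frame
    indicator=np.zeros((H, W), np.float32),         # object-of-interest indicator frame
    motion=np.zeros((H, W, 2), np.float32),         # motion frame, e.g. flow-like channels
    feature_frames=[np.zeros((H, W, 1), np.float32) for _ in range(4)],  # compressed features
)
print(cnn_input.shape)  # (480, 640, 13)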
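
The next sketch relates to the athletic performance monitoring family (e.g. patent 11513610, patent 11243611, and publications 20230048187 and 20220121291), in which a recognized gesture selects an operational mode instead of a physical button press, and low-interaction modes allow reduced power draw. The gesture and mode names below are assumptions:

```python
# Hypothetical gesture-to-mode mapping; gesture labels and modes are illustrative only.
GESTURE_TO_MODE = {
    "wrist_flick": "display_on",    # wake the display to show current stats
    "double_tap": "lap_marker",     # mark a lap without pressing a button
    "cover_sensor": "low_power",    # dim the display and reduce sensor sampling
}

def select_mode(recognized_gesture: str, current_mode: str) -> str:
    """Return the next operational mode; unknown gestures leave the mode unchanged."""
    return GESTURE_TO_MODE.get(recognized_gesture, current_mode)

print(select_mode("cover_sensor", "display_on"))  # -> "low_power"
```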
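
The third sketch relates to the gesture-controlled virtual reality family (patent 11347319 and publications 20220397962 and 20210034163), whose abstracts describe combining gesture data from an on-body sensor with position data from an off-body sensor to determine an intended action on a virtual object. The gesture labels, reach threshold, and action names are assumptions:

```python
# Hypothetical fusion of gesture data and position data into an intended action.
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    position: tuple  # (x, y, z) in metres

def intended_action(gesture: str, hand_position: tuple, obj: VirtualObject,
                    reach: float = 0.15) -> str:
    """Map a recognized gesture plus hand-to-object distance to an intended action."""
    dist = sum((a - b) ** 2 for a, b in zip(hand_position, obj.position)) ** 0.5
    if dist > reach:
        return "none"          # hand is not close enough to the virtual object
    if gesture == "grab":
        return "pick_up"
    if gesture == "swipe":
        return "push"
    return "touch"

# Example: a grab gesture performed about 7 cm from a virtual drum triggers a pick-up.
drum = VirtualObject("drum", (0.0, 1.0, 0.5))
print(intended_action("grab", (0.05, 1.05, 0.48), drum))  # -> "pick_up"
```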
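
The fourth sketch relates to publication 20220130130 and patent 11227179: a surrogate model of the similarity objective is built with Gaussian Process Regression, an acquisition function proposes the next candidate crop, and a Siamese network scores each candidate against the exemplar. Here the Siamese score is replaced by a toy stand-in so the sketch runs end to end; the kernel, acquisition function, and acceptance threshold are assumptions:

```python
# Hypothetical GPR-based Bayesian-optimization tracking loop; not the patented code.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def siamese_similarity(frame, crop_xy, exemplar):
    """Stand-in for a Siamese-network similarity score in [0, 1]; this toy version
    simply peaks where the candidate crop centre matches the frame's hidden target."""
    d2 = np.sum((np.asarray(crop_xy, float) - frame["target_xy"]) ** 2)
    return float(np.exp(-d2 / (2.0 * 30.0 ** 2)))

def expected_improvement(candidates, gpr, best_score, xi=0.01):
    """Standard expected-improvement acquisition over candidate crop centres."""
    mu, sigma = gpr.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best_score - xi) / sigma
    return (mu - best_score - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def track_frame(frame, exemplar, search_space, n_init=5, n_iter=20, accept=0.9):
    """Locate the exemplar in one frame via GPR-based Bayesian optimization."""
    rng = np.random.default_rng(0)
    X = search_space[rng.choice(len(search_space), n_init, replace=False)]
    y = np.array([siamese_similarity(frame, xy, exemplar) for xy in X])
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=20.0), normalize_y=True)
    for _ in range(n_iter):
        gpr.fit(X, y)                                    # surrogate of the objective
        ei = expected_improvement(search_space, gpr, y.max())
        nxt = search_space[int(np.argmax(ei))]           # next candidate crop centre
        score = siamese_similarity(frame, nxt, exemplar)
        X, y = np.vstack([X, nxt]), np.append(y, score)
        if score >= accept:                              # exemplar considered found
            break
    return X[int(np.argmax(y))], float(y.max())

# Toy usage: a 640x480 frame whose target (unknown to the tracker) sits at (300, 200).
frame = {"target_xy": np.array([300.0, 200.0])}
grid = np.array([[x, y] for x in range(0, 640, 16) for y in range(0, 480, 16)], float)
best_xy, best_score = track_frame(frame, exemplar=None, search_space=grid)
print(best_xy, round(best_score, 3))
```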
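
The final sketch relates to patents 10900992, 10900991, and 10802038, whose abstracts describe computing activity-agnostic motion attributes, selecting a best-matching energy expenditure model, and then estimating energy expenditure. The attribute definitions, matching metric, and model structure below are assumptions:

```python
# Hypothetical attribute-based selection of an energy expenditure model.
import numpy as np

def motion_attributes(accel):
    """Derive simple, activity-agnostic attributes from raw accelerometer samples."""
    mag = np.linalg.norm(accel, axis=1)
    return np.array([mag.mean(), mag.std(), np.abs(np.diff(mag)).mean()])

def select_model(attrs, models):
    """Pick the model whose reference attributes are closest to the user's attributes."""
    return min(models, key=lambda m: np.linalg.norm(attrs - m["reference_attributes"]))

def estimate_energy_expenditure(accel, models, duration_min):
    attrs = motion_attributes(accel)
    model = select_model(attrs, models)
    # Each model maps attributes to kcal/min via its own coefficients (an assumption).
    kcal_per_min = float(model["coefficients"] @ attrs + model["intercept"])
    return kcal_per_min * duration_min

models = [
    {"name": "walking-like", "reference_attributes": np.array([1.1, 0.2, 0.05]),
     "coefficients": np.array([2.0, 1.5, 4.0]), "intercept": 1.0},
    {"name": "running-like", "reference_attributes": np.array([1.4, 0.6, 0.20]),
     "coefficients": np.array([3.0, 2.5, 6.0]), "intercept": 1.5},
]
accel = np.random.default_rng(1).normal(0.0, 0.3, size=(3000, 3)) + np.array([0.0, 0.0, 1.0])
print(round(estimate_energy_expenditure(accel, models, duration_min=30), 1))
```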