Patents by Inventor Asim Kadav

Asim Kadav has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240123288
    Abstract: A first output is received from a first hardware optical sensor. A second output is received from a second hardware sensor. Guidance is provided for a movement via a user interface, wherein the guidance is based at least in part on: the first output from the first hardware optical sensor; the second output from the second hardware sensor; and a model based at least in part on historical performance of the movement; and wherein at least one of the first output and the second output triggers a condition.
    Type: Application
    Filed: October 16, 2023
    Publication date: April 18, 2024
    Inventors: Giuseppe Barbalinardo, Joshua Ben Shapiro, Asim Kadav, Ivan Savytskyi, Rajiv Bhan, Rustam Paringer, Aly E. Orady
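The fusion of two sensor outputs against a historical-performance model could look like the following minimal sketch. The function name, the averaging rule, and the `tolerance` threshold are all illustrative assumptions, not the claimed method:

```python
def provide_guidance(optical_out, sensor_out, model_baseline, tolerance=0.1):
    """Toy fusion rule: compare fused sensor readings against a model of
    historical movement performance; a large deviation triggers a
    guidance condition via the user interface."""
    fused = 0.5 * (optical_out + sensor_out)
    deviation = fused - model_baseline
    triggered = abs(deviation) > tolerance
    if not triggered:
        return "on track"
    return "slow down" if deviation > 0 else "speed up"
```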
  • Publication number: 20240123284
    Abstract: A first video of a first individual performing an exercise movement is received, wherein the first video is associated with a first guidance label. A modified version of a video is generated at least in part by passing the first video to a pose data change model. The modified version of the video is associated with a second guidance label. A guidance classifier model is trained using the modified version of the video.
    Type: Application
    Filed: October 16, 2023
    Publication date: April 18, 2024
    Inventors: Giuseppe Barbalinardo, Joshua Ben Shapiro, Asim Kadav, Ivan Savytskyi, Rajiv Bhan, Rustam Paringer, Aly E. Orady
  • Publication number: 20230381587
    Abstract: An exercise machine accessory is disclosed. In one embodiment, a resistance identifier is configured to identify resistance for an exercise machine associated with the exercise machine accessory, wherein the resistance identifier is coupled to the exercise machine. In one embodiment, a motion identifier is configured to identify exercise motion for a user of the exercise machine. In one embodiment, a communications module is configured to communicate with the user of the exercise machine, wherein the communications module is coupled to the resistance identifier and the motion identifier.
    Type: Application
    Filed: March 16, 2023
    Publication date: November 30, 2023
    Inventors: Gabriel Peal, Asim Kadav
  • Patent number: 11741712
    Abstract: A method for using a multi-hop reasoning framework to perform multi-step compositional long-term reasoning is presented. The method includes extracting feature maps and frame-level representations from a video stream by using a convolutional neural network (CNN), performing object representation learning and detection, linking objects through time via tracking to generate object tracks and image feature tracks, feeding the object tracks and the image feature tracks to a multi-hop transformer that hops over frames in the video stream while concurrently attending to one or more of the objects in the video stream until the multi-hop transformer arrives at a correct answer, and employing video representation learning and recognition from the objects and image context to locate a target object within the video stream.
    Type: Grant
    Filed: September 1, 2021
    Date of Patent: August 29, 2023
    Inventors: Asim Kadav, Farley Lai, Hans Peter Graf, Alexandru Niculescu-Mizil, Renqiang Min, Honglu Zhou
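The "linking objects through time via tracking" step in this abstract can be sketched as greedy IoU matching of per-frame detections; the box format and the 0.5 threshold are assumptions for illustration, not the patented procedure:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def link_tracks(frames, thresh=0.5):
    """Greedily link per-frame detections into object tracks by IoU.

    frames: list of frames, each a list of boxes. Each track is a list
    of (frame_index, box) pairs; unmatched boxes start new tracks.
    """
    tracks = []
    for t, boxes in enumerate(frames):
        unmatched = list(boxes)
        for track in tracks:
            last_t, last_box = track[-1]
            if last_t != t - 1 or not unmatched:
                continue
            best = max(unmatched, key=lambda b: iou(last_box, b))
            if iou(last_box, best) >= thresh:
                track.append((t, best))
                unmatched.remove(best)
        tracks.extend([(t, b)] for b in unmatched)  # start new tracks
    return tracks
```

The resulting tracks would then be fed, alongside image feature tracks, to the multi-hop transformer described above.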
  • Publication number: 20230148017
    Abstract: A method for compositional reasoning of group activity in videos with keypoint-only modality is presented. The method includes obtaining video frames from a video stream received from a plurality of video image capturing devices, extracting keypoints of all persons detected in the video frames to define keypoint data, tokenizing the keypoint data with time and segment information, clustering groups of keypoint persons in the video frames and passing the clustered groups through multi-scale prediction, and performing a prediction to provide a group activity prediction of a scene in the video frames.
    Type: Application
    Filed: October 5, 2022
    Publication date: May 11, 2023
    Inventors: Asim Kadav, Farley Lai, Hans Peter Graf, Honglu Zhou
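The "tokenizing the keypoint data with time and segment information" step might be sketched as below; the token layout and segment computation are hypothetical, chosen only to show the idea of attaching temporal metadata to each keypoint:

```python
def tokenize_keypoints(persons, num_segments=4):
    """Turn per-frame person keypoints into flat tokens carrying
    (x, y, time index, segment id, person id).

    persons: list of frames; each frame is a list of persons; each
    person is a list of (x, y) keypoints.
    """
    tokens = []
    total = len(persons)
    for t, frame in enumerate(persons):
        segment = min(t * num_segments // max(total, 1), num_segments - 1)
        for pid, keypoints in enumerate(frame):
            for (x, y) in keypoints:
                tokens.append((x, y, t, segment, pid))
    return tokens
```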
  • Patent number: 11638856
    Abstract: An exercise machine accessory is disclosed. In one embodiment, a resistance identifier is configured to identify resistance for an exercise machine associated with the exercise machine accessory, wherein the resistance identifier is coupled to the exercise machine. In one embodiment, a motion identifier is configured to identify exercise motion for a user of the exercise machine. In one embodiment, a communications module is configured to communicate with the user of the exercise machine, wherein the communications module is coupled to the resistance identifier and the motion identifier.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: May 2, 2023
    Assignee: Tonal Systems, Inc.
    Inventors: Gabriel Peal, Asim Kadav
  • Publication number: 20230128118
    Abstract: An exercise machine includes a cable. It further includes an interface to a moveable camera device coupled with the exercise machine. It further includes a processor configured to receive a cable-based measurement associated with an exercise performed by a user. The processor is further configured to receive, from the moveable camera device, video information associated with the exercise. The processor is further configured to provide a workout determination based at least in part on both the cable-based measurement and the video information received from the moveable camera device.
    Type: Application
    Filed: October 18, 2022
    Publication date: April 27, 2023
    Inventors: Asim Kadav, Rajiv Bhan, Ryan LaFrance, Bryan James, Aly E. Orady, Brandt Belson, Gabriel Peal, Thomas Kroman Watt, Ivan Savytskyi
  • Patent number: 11620814
    Abstract: Aspects of the present disclosure describe systems, methods and structures providing contextual grounding—a higher-order interaction technique to capture corresponding context between text entities and visual objects.
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: April 4, 2023
    Inventors: Farley Lai, Asim Kadav, Ning Xie
  • Publication number: 20230086023
    Abstract: A method for model training and deployment includes training, by a processor, a model to learn video representations with a self-supervised contrastive loss by performing progressive training in phases with an incremental number of positive instances from one or more video sequences, resetting the learning rate schedule in each of the phases, and inheriting model weights from a checkpoint from a previous training phase. The method further includes updating the trained model with the self-supervised contrastive loss given multiple positive instances obtained from Cascade K-Nearest Neighbor mining of the one or more video sequences by extracting features in different modalities to compute similarities between the one or more video sequences and selecting a top-k similar instances with features in different modalities. The method also includes fine-tuning the trained model for a downstream task.
    Type: Application
    Filed: September 8, 2022
    Publication date: March 23, 2023
    Inventors: Farley Lai, Asim Kadav, Cheng-En Wu
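The "resetting the learning rate schedule in each of the phases" step can be illustrated with a cosine schedule that restarts per phase (model weights, by contrast, are inherited across phases from the previous checkpoint). The cosine shape and base rate are assumptions for illustration:

```python
import math

def progressive_schedule(num_phases, steps_per_phase, base_lr=0.1):
    """Cosine learning-rate schedule restarted at the start of every
    training phase; returns a list of (phase, lr) pairs."""
    lrs = []
    for phase in range(num_phases):
        for step in range(steps_per_phase):
            lr = 0.5 * base_lr * (1 + math.cos(math.pi * step / steps_per_phase))
            lrs.append((phase, lr))
    return lrs
```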
  • Patent number: 11600067
    Abstract: Aspects of the present disclosure describe systems, methods, and structures that provide action recognition with high-order interaction with spatio-temporal object tracking. Image and object features are organized into tracks, which advantageously facilitates many possible learnable embeddings and intra/inter-track interaction(s). Operationally, our systems, methods, and structures according to the present disclosure employ an efficient high-order interaction model to learn embeddings and intra/inter object track interaction across space and time for AR. An object detector is applied to each frame to locate visual objects. Those objects are linked through time to form object tracks. The object tracks are then organized and combined with the embeddings as the input to our model. The model is trained to generate representative embeddings and discriminative video features through high-order interaction which is formulated as an efficient matrix operation without iterative processing delay.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: March 7, 2023
    Inventors: Farley Lai, Asim Kadav, Jie Chen
  • Publication number: 20230049770
    Abstract: Methods and systems of training a neural network include training a feature extractor and a classifier using a first set of training data that includes one or more base cases. The classifier is trained with few-shot adaptation using a second set of training data, smaller than the first set of training data, while keeping parameters of the feature extractor constant.
    Type: Application
    Filed: July 12, 2022
    Publication date: February 16, 2023
    Inventors: Biplob Debnath, Srimat Chakradhar, Oliver Po, Asim Kadav, Farley Lai, Farhan Asif Chowdhury
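Few-shot adaptation of a classifier on top of a frozen feature extractor can be sketched with a nearest-class-mean classifier: only per-class prototypes are (re)fitted from the small second training set while extractor parameters stay constant. The prototype formulation is an illustrative stand-in, not the patented classifier:

```python
def fit_prototypes(features, labels):
    """Fit one mean prototype per class from a few labelled feature
    vectors produced by the frozen extractor."""
    protos = {}
    for f, y in zip(features, labels):
        protos.setdefault(y, []).append(f)
    return {y: [sum(col) / len(fs) for col in zip(*fs)]
            for y, fs in protos.items()}

def predict(protos, f):
    """Classify a feature vector by its nearest class prototype."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda y: dist(protos[y], f))
```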
  • Patent number: 11568247
    Abstract: A computer-implemented method executed by at least one processor for performing mini-batching in deep learning by improving cache utilization is presented. The method includes temporally localizing a candidate clip in a video stream based on a natural language query, encoding a state, via a state processing module, into a joint visual and linguistic representation, feeding the joint visual and linguistic representation into a policy learning module, wherein the policy learning module employs a deep learning network to selectively extract features for select frames for video-text analysis and includes a fully connected linear layer and a long short-term memory (LSTM), outputting a value function from the LSTM, generating an action policy based on the encoded state, wherein the action policy is a probabilistic distribution over a plurality of possible actions given the encoded state, and rewarding policy actions that return clips matching the natural language query.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: January 31, 2023
    Inventors: Asim Kadav, Iain Melvin, Hans Peter Graf, Meera Hahn
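The "rewarding policy actions that return clips matching the natural language query" step suggests a reward built on temporal overlap. The 0.5 overlap threshold and the penalty value below are assumptions for illustration:

```python
def temporal_iou(clip, target):
    """Temporal IoU between a candidate clip and the target clip,
    each given as a (start, end) pair in seconds or frames."""
    s, e = max(clip[0], target[0]), min(clip[1], target[1])
    inter = max(0.0, e - s)
    union = (clip[1] - clip[0]) + (target[1] - target[0]) - inter
    return inter / union if union else 0.0

def reward(clip, target, match_bonus=1.0):
    """Positive reward when the returned clip sufficiently overlaps
    the clip matching the query, a small penalty otherwise."""
    return match_bonus if temporal_iou(clip, target) >= 0.5 else -0.1
```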
  • Publication number: 20220383522
    Abstract: A surveillance system is provided. The surveillance system is configured for (i) detecting and tracking persons locally for each camera input video stream using the common area anchor boxes and assigning each detected ones of the persons a local track id, (ii) associating a same person in overlapping camera views to a global track id, and collecting associated track boxes as the same person moves in different camera views over time using a priority queue and the local track id and the global track id, (iii) performing track data collection to derive a spatial transformation through matched track box spatial features of a same person over time for scene coverage and (iv) learning a multi-camera tracker given visual features from matched track boxes of distinct people across cameras based on the derived spatial transformation.
    Type: Application
    Filed: May 11, 2022
    Publication date: December 1, 2022
    Inventors: Farley Lai, Asim Kadav, Likitha Lakshminarayanan
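The local-to-global track id association can be sketched as a small registry: each camera assigns local ids, and a match across overlapping views merges two locals onto one global id. Class and method names are hypothetical:

```python
class GlobalTrackRegistry:
    """Map per-camera local track ids to global ids; a cross-camera
    match binds another camera's local id to the same global id."""

    def __init__(self):
        self.next_gid = 0
        self.local_to_global = {}

    def observe(self, camera, local_id):
        """Return the global id for a (camera, local_id) pair,
        allocating a fresh one on first sight."""
        key = (camera, local_id)
        if key not in self.local_to_global:
            self.local_to_global[key] = self.next_gid
            self.next_gid += 1
        return self.local_to_global[key]

    def associate(self, cam_a, id_a, cam_b, id_b):
        """Record that two local tracks are the same person."""
        gid = self.observe(cam_a, id_a)
        self.local_to_global[(cam_b, id_b)] = gid
        return gid
```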
  • Patent number: 11475590
    Abstract: Aspects of the present disclosure describe systems, methods and structures for an efficient multi-person pose-tracking method that advantageously achieves state-of-the-art performance on PoseTrack datasets by only using keypoint information in a tracking step without optical flow or convolution routines. As a consequence, our method has fewer parameters and FLOPs and achieves a higher FPS. Our method benefits from our parameter-free tracking method that outperforms commonly used bounding box propagation in top-down methods. Finally, we disclose tokenization and embedding of multi-person pose keypoint information in the transformer architecture that can be re-used for other pose tasks such as pose-based action recognition.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: October 18, 2022
    Inventors: Asim Kadav, Farley Lai, Hans Peter Graf, Michael Snower
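The parameter-free, keypoint-only tracking step (no optical flow, no learned propagation) can be sketched as greedy assignment by mean keypoint distance between consecutive frames. The greedy strategy is an illustrative simplification:

```python
def pose_distance(p, q):
    """Mean Euclidean distance between corresponding (x, y) keypoints."""
    return sum(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
               for (px, py), (qx, qy) in zip(p, q)) / len(p)

def match_poses(prev, curr):
    """Greedy parameter-free assignment: each current pose takes the
    nearest unmatched previous pose. Returns {curr_idx: prev_idx}."""
    assignment, used = {}, set()
    for j, pose in enumerate(curr):
        candidates = [i for i in range(len(prev)) if i not in used]
        if not candidates:
            break
        best = min(candidates, key=lambda i: pose_distance(prev[i], pose))
        assignment[j] = best
        used.add(best)
    return assignment
```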
  • Publication number: 20220319157
    Abstract: A method for augmenting video sequences in a video reasoning system is presented. The method includes randomly subsampling a sequence of video frames captured from one or more video cameras, randomly reversing the subsampled sequence of video frames to define a plurality of sub-sequences of randomly reversed video frames, training, in a training mode, a video reasoning model with temporally augmented input, including the plurality of sub-sequences of randomly reversed video frames, to make predictions over temporally augmented target classes, updating parameters of the video reasoning model by a machine learning algorithm, and deploying, in an inference mode, the video reasoning model in the video reasoning system to make a final prediction related to a human action in the sequence of video frames.
    Type: Application
    Filed: April 4, 2022
    Publication date: October 6, 2022
    Inventors: Farley Lai, Asim Kadav
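The subsample-then-randomly-reverse augmentation can be sketched in a few lines; the sampling stride and the 0.5 reversal probability are assumptions, and the returned flag stands in for the "temporally augmented target class":

```python
import random

def augment_sequence(frames, sample_rate=2, rng=None):
    """Subsample every `sample_rate`-th frame, then reverse the
    sub-sequence with probability 0.5. Returns (sub_sequence,
    reversed_flag); the flag augments the training target."""
    rng = rng or random.Random(0)
    sub = frames[::sample_rate]
    reversed_flag = rng.random() < 0.5
    if reversed_flag:
        sub = sub[::-1]
    return sub, reversed_flag
```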
  • Patent number: 11422907
    Abstract: While connected to cloud storage, a computing device writes data and metadata to the cloud storage, indicates success of the write to an application of the computing device, and, after indicating success to the application, writes the data and metadata to local storage of the computing device. The data and metadata may be written to different areas of the local storage. The computing device may also determine that it has recovered from a crash or has connected to the cloud storage after operating disconnected and reconcile the local storage with the cloud storage. The reconciliation may be based at least on a comparison of the metadata stored in the area of the local storage with metadata received from the cloud storage. The cloud storage may store each item of data contiguously with its metadata as an expanded block.
    Type: Grant
    Filed: August 19, 2013
    Date of Patent: August 23, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: James W. Mickens, Jeremy E. Elson, Edmund B. Nightingale, Bin Fan, Asim Kadav, Osama Khan
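The write ordering described in this abstract (cloud first, then ack the application, then the local mirror) and the metadata-driven reconciliation can be sketched as below; the class, the dict-backed stores, and the log are hypothetical stand-ins for the real storage stack:

```python
class WriteBackClient:
    """Sketch of the write path: persist to cloud storage first, report
    success to the application, then mirror data and metadata to
    separate areas of local storage."""

    def __init__(self, cloud, local_data, local_meta):
        self.cloud, self.local_data, self.local_meta = cloud, local_data, local_meta
        self.log = []

    def write(self, key, data, meta):
        self.cloud[key] = (data, meta)   # expanded block: data stored with metadata
        self.log.append(("ack", key))    # success indicated to the application
        self.local_data[key] = data      # local mirror written only after the ack
        self.local_meta[key] = meta

    def reconcile(self):
        """After a crash or reconnect, copy any cloud entries whose
        local metadata is missing or stale."""
        for key, (data, meta) in self.cloud.items():
            if self.local_meta.get(key) != meta:
                self.local_data[key] = data
                self.local_meta[key] = meta
```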
  • Patent number: 11423655
    Abstract: A computer-implemented method is provided for disentangled data generation. The method includes accessing, by a variational autoencoder, a plurality of supervision signals. The method further includes accessing, by the variational autoencoder, a plurality of auxiliary tasks that utilize the supervision signals as reward signals to learn a disentangled representation. The method also includes training the variational autoencoder to disentangle a sequential data input into a time-invariant factor and a time-varying factor using a self-supervised training approach which is based on outputs of the auxiliary tasks obtained by using the supervision signals to accomplish the plurality of auxiliary tasks.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: August 23, 2022
    Inventors: Renqiang Min, Yizhe Zhu, Asim Kadav, Hans Peter Graf
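The time-invariant/time-varying split at the heart of this abstract can be illustrated with a toy factorisation: the per-dimension mean as the static factor and per-step residuals as the dynamic factor. This is only an analogy for the learned disentanglement, not the variational autoencoder itself:

```python
def disentangle(sequence):
    """Toy factorisation of a sequence of feature vectors into a
    time-invariant factor (per-dimension mean) and time-varying
    residuals, echoing the static/dynamic split."""
    d = len(sequence[0])
    static = [sum(v[k] for v in sequence) / len(sequence) for k in range(d)]
    dynamic = [[v[k] - static[k] for k in range(d)] for v in sequence]
    return static, dynamic
```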
  • Publication number: 20220237884
    Abstract: A computer-implemented method is provided for action localization. The method includes converting one or more video frames into person keypoints and object keypoints. The method further includes embedding position, timestamp, instance, and type information with the person keypoints and object keypoints to obtain keypoint embeddings. The method also includes predicting, by a hierarchical transformer encoder using the keypoint embeddings, human actions and bounding box information of when and where the human actions occur in the one or more video frames.
    Type: Application
    Filed: January 27, 2022
    Publication date: July 28, 2022
    Inventors: Asim Kadav, Farley Lai, Hans Peter Graf, Yi Huang
  • Patent number: 11356334
    Abstract: A method is provided for sparse communication in a parallel machine learning environment. The method includes determining a fixed communication cost for a sparse graph to be computed. The sparse graph is (i) determined from a communication graph that includes all the machines in a target cluster of the environment, and (ii) represents a communication network for the target cluster having (a) an overall spectral gap greater than or equal to a minimum threshold, and (b) certain information dispersal properties such that an intermediate output from a given node disperses to all other nodes of the sparse graph in lowest number of time steps given other possible node connections. The method further includes computing the sparse graph, based on the communication graph and the fixed communication cost. The method also includes initiating a propagation of the intermediate output in the parallel machine learning environment using a topology of the sparse graph.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: June 7, 2022
    Inventors: Asim Kadav, Erik Kruus
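The dispersal property in this abstract (an intermediate output reaching all other nodes in the fewest time steps) can be checked with a breadth-first sweep over the communication graph. The adjacency representation is an assumption for illustration:

```python
def dispersal_steps(adj, start=0):
    """Number of gossip rounds for an intermediate output at `start`
    to reach every node of the communication graph (BFS depth).

    adj: dict mapping node -> list of neighbour nodes.
    """
    n = len(adj)
    reached, frontier, steps = {start}, {start}, 0
    while len(reached) < n:
        frontier = {j for i in frontier for j in adj[i]} - reached
        if not frontier:
            return float("inf")   # disconnected: never disperses
        reached |= frontier
        steps += 1
    return steps
```

A denser graph disperses faster but costs more communication; the sparse graph trades these off under a fixed communication budget.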
  • Publication number: 20220171989
    Abstract: A computer-implemented method for representation disentanglement is provided. The method includes encoding an input vector into an embedding. The method further includes learning, by a hardware processor, disentangled representations of the input vector including a style embedding and a content embedding by performing sample-based mutual information minimization on the embedding under a Wasserstein distance regularization and a Kullback-Leibler (KL) divergence. The method also includes decoding the style and content embeddings to obtain a reconstructed vector.
    Type: Application
    Filed: November 18, 2021
    Publication date: June 2, 2022
    Inventors: Renqiang Min, Asim Kadav, Hans Peter Graf, Ligong Han
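The Kullback-Leibler divergence named in this abstract is the standard VAE regulariser against a unit Gaussian; a direct implementation for a diagonal-Gaussian embedding is:

```python
import math

def kl_gaussian(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over
    dimensions: 0.5 * sum(exp(logvar) + mu^2 - 1 - logvar)."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))
```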