Patents by Inventor Daniel Dejos

Daniel Dejos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12280297
    Abstract: Methods and systems are disclosed for pose comparison, interactive physical gaming, and remote fitness training on a user computing device. The methods and systems are configured to first receive a reference feature generated from a frame of a reference video, the reference feature computed from a reference posture of a reference person in the frame of the reference video. Next, receive a frame of a user video, the frame of the user video comprising a user. Next, extract a user posture from the frame of the user video, by performing a machine learning-based computer vision algorithm that detects one or more body key points of the user in an image plane of the user video. Finally, generate a user feature from the user posture; and determine an output score based on a distance between the reference feature and the user feature. (A minimal code sketch of this feature-distance scoring idea appears after this listing.)
    Type: Grant
    Filed: September 16, 2021
    Date of Patent: April 22, 2025
    Assignee: NEX Team Inc.
    Inventors: Qi Zhang, Keng Fai Lee, Daniel Dejos, Jorge Fino, Long Mak
  • Patent number: 11450010
    Abstract: Methods and systems for determining and classifying a number of repetitive motions in a video are described, and include the steps of first determining a plurality of images from a video, where the images are segmented from at least one video frame of the video. Next, performing a pose detection process on a feature of the images to generate one or more landmarks. Next, determining one or more principal component axes on points associated with a given landmark. Finally, determining at least one repetitive motion based on a pattern associated with a projection of the points onto the one or more principal components. In some embodiments, the disclosed methods can classify the repetitive motions into respective types. The present invention can be implemented for convenient use on a mobile computing device, such as a smartphone, for tracking exercises and similar repetitive motions. (A minimal code sketch of this principal-component projection approach appears after this listing.)
    Type: Grant
    Filed: October 16, 2021
    Date of Patent: September 20, 2022
    Assignee: NEX Team Inc.
    Inventors: On Loy Sung, Qi Zhang, Keng Fai Lee, Shing Fat Mak, Daniel Dejos, Man Hon Chan
  • Publication number: 20220138966
    Abstract: Methods and systems for determining and classifying a number of repetitive motions in a video are described, and include the steps of first determining a plurality of images from a video, where the images are segmented from at least one video frame of the video. Next, performing a pose detection process on a feature of the images to generate one or more landmarks. Next, determining one or more principal component axes on points associated with a given landmark. Finally, determining at least one repetitive motion based on a pattern associated with a projection of the points onto the one or more principal components. In some embodiments, the disclosed methods can classify the repetitive motions into respective types. The present invention can be implemented for convenient use on a mobile computing device, such as a smartphone, for tracking exercises and similar repetitive motions.
    Type: Application
    Filed: October 16, 2021
    Publication date: May 5, 2022
    Inventors: On Loy Sung, Qi Zhang, Keng Fai Lee, Shing Fat Mak, Daniel Dejos, Man Hon Chan
  • Publication number: 20220080260
    Abstract: Methods and systems are disclosed for pose comparison, interactive physical gaming, and remote fitness training on a user computing device. The methods and systems are configured to first receive a reference feature generated from a frame of a reference video, the reference feature computed from a reference posture of a reference person in the frame of the reference video. Next, receive a frame of a user video, the frame of the user video comprising a user. Next, extract a user posture from the frame of the user video, by performing a machine learning-based computer vision algorithm that detects one or more body key points of the user in an image plane of the user video. Finally, generate a user feature from the user posture; and determine an output score based on a distance between the reference feature and the user feature.
    Type: Application
    Filed: September 16, 2021
    Publication date: March 17, 2022
    Inventors: Qi Zhang, Keng Fai Lee, Daniel Dejos, Jorge Fino, Long Mak
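
Illustrative code sketches

The sketch below loosely illustrates the feature-distance pose scoring described in the abstracts of patent 12280297 and publication 20220080260 above. It is not the patented implementation: the choice of feature (position- and scale-normalized 2D key points), the exponential score mapping, and every function name are assumptions made purely for illustration.

# Hypothetical sketch of pose scoring by feature distance; names and formulas
# are illustrative assumptions, not taken from the patent.
import numpy as np

def pose_feature(keypoints: np.ndarray) -> np.ndarray:
    """Build a position- and scale-invariant feature from (N, 2) body key points."""
    centered = keypoints - keypoints.mean(axis=0)        # remove position
    scale = np.linalg.norm(centered) or 1.0              # remove body size
    return (centered / scale).ravel()

def pose_score(reference_feature: np.ndarray, user_feature: np.ndarray) -> float:
    """Map the distance between features to a 0-100 score; smaller distance scores higher."""
    distance = np.linalg.norm(reference_feature - user_feature)
    return 100.0 * np.exp(-distance)

# Example: compare a user's detected key points against a reference frame.
reference_kp = np.random.rand(17, 2)                      # e.g. 17 COCO-style body key points
user_kp = reference_kp + 0.01 * np.random.randn(17, 2)    # nearly identical posture
print(round(pose_score(pose_feature(reference_kp), pose_feature(user_kp)), 1))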
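
The second sketch loosely illustrates the repetition counting by principal-component projection described in the abstracts of patent 11450010 and publication 20220138966 above. It is likewise a rough sketch under stated assumptions (a single tracked landmark and a sign-crossing count of the projection), not the patented method.

# Hypothetical sketch of counting repetitive motion via principal-component
# projection; the landmark model and counting rule are illustrative assumptions.
import numpy as np

def count_repetitions(landmark_points: np.ndarray) -> int:
    """landmark_points: (T, 2) positions of one body landmark across T video frames."""
    centered = landmark_points - landmark_points.mean(axis=0)
    # Principal axis = eigenvector of the covariance matrix with the largest eigenvalue.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal_axis = eigvecs[:, np.argmax(eigvals)]
    projection = centered @ principal_axis
    # Each full repetition crosses the mean twice, so count sign changes and halve.
    signs = np.sign(projection)
    crossings = np.count_nonzero(np.diff(signs[signs != 0]) != 0)
    return crossings // 2

# Example: a landmark bobbing up and down five times (e.g. squats seen from the side).
t = np.linspace(0, 5 * 2 * np.pi, 300)
points = np.stack([0.05 * np.random.randn(300), np.sin(t)], axis=1)
print(count_repetitions(points))  # roughly 5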