Patents by Inventor Yosuke IKEGAMI

Yosuke IKEGAMI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240303859
    Abstract: The target includes a plurality of keypoints of a body including a plurality of joints and the 3D position of the target is identified by positions of the plurality of keypoints. A bounding box and reference 2D joint position determining unit determines a bounding box surrounding the target in a camera image at a target frame to be predicted subsequent to at least one frame at which imaging is performed by the plurality of cameras using the 3D positions of the keypoints of the target at the at least one frame and acquires reference 2D positions of the keypoints projected from the 3D positions of the keypoints of the target onto a predetermined plane. A 3D pose acquiring unit acquires the 3D positions of the keypoints of the target at the target frame using image information in the bounding box and the reference 2D positions.
    Type: Application
    Filed: March 7, 2022
    Publication date: September 12, 2024
    Inventors: Yoshihiko NAKAMURA, Yosuke IKEGAMI, Takuya OHASHI
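    The core geometric step described in the abstract above — projecting known 3D keypoints into a camera image and deriving a bounding box around them — can be sketched as follows. This is an illustrative reading, not the patent's implementation; the pinhole camera model, the parameter names (`K`, `R`, `t`), and the fixed margin are all assumptions.

    ```python
    import numpy as np

    def project_keypoints(keypoints_3d, K, R, t):
        """Project Nx3 world-frame keypoints into one camera's image plane.

        K: 3x3 intrinsics, R: 3x3 rotation, t: 3-vector translation
        (hypothetical names; the abstract does not specify a camera model).
        """
        cam = (R @ keypoints_3d.T).T + t       # world -> camera frame
        uv = (K @ cam.T).T                     # apply intrinsics
        return uv[:, :2] / uv[:, 2:3]          # perspective divide

    def bounding_box(points_2d, margin=10.0):
        """Axis-aligned box around the projected keypoints, padded by a margin."""
        lo = points_2d.min(axis=0) - margin
        hi = points_2d.max(axis=0) + margin
        return lo, hi

    # Toy example: a camera 2 m from the subject, looking down +z.
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    R, t = np.eye(3), np.array([0.0, 0.0, 2.0])
    kps = np.array([[0.0, 0.0, 0.0], [0.2, 0.5, 0.1], [-0.1, 0.3, 0.0]])
    lo, hi = bounding_box(project_keypoints(kps, K, R, t))
    ```

    In the patent's terms, the projected points play the role of the "reference 2D positions" and the padded box plays the role of the bounding box passed to the 3D pose acquiring unit.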
  • Patent number: 12045994
    Abstract: The invention relates to automatic discovery and evaluation of a motion by treating motion recognition as an optimization problem by considering a series of basic motions obtained by segmenting a subject motion. The method comprises segmenting time series data defining a motion of a subject into a plurality of segments, classifying each segment into a class for a basic motion by using time series data of the segment, and converting the motion of the subject to a sequence of high rank symbols in which each high rank symbol is formed from a series of the basic motions, wherein a function that calculates a score based on a set of a high rank symbol and a sequence of basic motions is provided and the motion of the subject is converted to the sequence of the high rank symbols by an optimization calculation using dynamic programming.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: July 23, 2024
    Assignee: THE UNIVERSITY OF TOKYO
    Inventors: Yoshihiko Nakamura, Wataru Takano, Yosuke Ikegami
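    The dynamic-programming conversion described in the abstract above — segmenting a motion into basic motions and parsing them into a best-scoring sequence of high-rank symbols — can be sketched with a small segmentation DP. The symbol inventory and the score function below are toy stand-ins, not the patent's.

    ```python
    def parse_motion(basic_motions, symbols, score):
        """basic_motions: list of basic-motion class labels.
        symbols: candidate high-rank symbols.
        score(symbol, run) -> float for one contiguous run of labels.
        Returns the best-scoring symbol sequence and its total score."""
        n = len(basic_motions)
        best = [float("-inf")] * (n + 1)   # best[i]: best score over prefix i
        best[0] = 0.0
        back = [None] * (n + 1)            # backpointers: (run start, symbol)
        for end in range(1, n + 1):
            for start in range(end):
                run = tuple(basic_motions[start:end])
                for sym in symbols:
                    s = best[start] + score(sym, run)
                    if s > best[end]:
                        best[end], back[end] = s, (start, sym)
        out, i = [], n                     # recover sequence from backpointers
        while i > 0:
            start, sym = back[i]
            out.append(sym)
            i = start
        return list(reversed(out)), best[n]

    # Toy score: "walk" rewards runs of "step", "reach" rewards "extend".
    def score(sym, run):
        good = {"walk": "step", "reach": "extend"}[sym]
        return sum(1.0 if b == good else -1.0 for b in run)

    seq = ["step", "step", "extend", "extend", "extend"]
    symbols_out, total = parse_motion(seq, ["walk", "reach"], score)
    ```

    The DP considers every split point, so the returned segmentation globally maximizes the summed score, which is the role the abstract assigns to the optimization calculation.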
  • Patent number: 11967101
    Abstract: The present invention provides a motion capture with a high accuracy which can replace an optical motion capture technology, without attaching optical markers and sensors to a subject. A subject with an articulated structure has a plurality of feature points in the body of the subject including a plurality of joints wherein a distance between adjacent feature points is obtained as a constant. A spatial distribution of a likelihood of a position of each feature point is obtained based on a single input image or a plurality of input images taken at the same time. One or a plurality of position candidates corresponding to each feature point are obtained based on the spatial distribution of the likelihood of the position of each feature point. Each joint angle is obtained by performing an optimization calculation based on inverse kinematics using the candidates and the articulated structure.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: April 23, 2024
    Assignee: THE UNIVERSITY OF TOKYO
    Inventors: Yoshihiko Nakamura, Wataru Takano, Yosuke Ikegami, Takuya Ohashi, Kazuki Yamamoto, Kentaro Takemoto
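    The final step of the abstract above — fitting joint angles by inverse-kinematics optimization against feature-point candidates, with fixed distances between adjacent feature points — can be illustrated on a planar two-link chain. The gradient-descent solver and all names here are hypothetical; the patent does not specify an optimizer.

    ```python
    import math

    L1, L2 = 1.0, 0.8  # fixed distances between adjacent feature points

    def forward(theta1, theta2):
        """Positions of the two distal feature points of a planar chain."""
        elbow = (L1 * math.cos(theta1), L1 * math.sin(theta1))
        wrist = (elbow[0] + L2 * math.cos(theta1 + theta2),
                 elbow[1] + L2 * math.sin(theta1 + theta2))
        return elbow, wrist

    def cost(angles, targets):
        """Sum of squared distances from predicted to candidate positions."""
        pts = forward(*angles)
        return sum((p[0] - t[0]) ** 2 + (p[1] - t[1]) ** 2
                   for p, t in zip(pts, targets))

    def solve_ik(targets, steps=2000, lr=0.1, eps=1e-6):
        """Fit joint angles by gradient descent with numeric gradients."""
        angles = [0.3, 0.3]  # initial guess
        for _ in range(steps):
            grad = []
            for i in range(2):
                bumped = list(angles)
                bumped[i] += eps
                grad.append((cost(bumped, targets) - cost(angles, targets)) / eps)
            angles = [a - lr * g for a, g in zip(angles, grad)]
        return angles

    # Candidates generated from known angles, so the optimum is recoverable.
    true_elbow, true_wrist = forward(0.7, -0.4)
    est = solve_ik([true_elbow, true_wrist])
    ```

    In the patent's setting the candidates would come from the likelihood distributions rather than being exact, and the chain would be the full articulated body, but the structure of the optimization is the same: angles are the variables, link lengths are constants, and the residual is measured at the feature points.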
  • Publication number: 20240020853
    Abstract: The invention relates to automatic discovery and evaluation of a motion by treating motion recognition as an optimization problem by considering a series of basic motions obtained by segmenting a subject motion. The method comprises segmenting time series data defining a motion of a subject into a plurality of segments, classifying each segment into a class for a basic motion by using time series data of the segment, and converting the motion of the subject to a sequence of high rank symbols in which each high rank symbol is formed from a series of the basic motions, wherein a function that calculates a score based on a set of a high rank symbol and a sequence of basic motions is provided and the motion of the subject is converted to the sequence of the high rank symbols by an optimization calculation using dynamic programming.
    Type: Application
    Filed: August 28, 2019
    Publication date: January 18, 2024
    Applicant: THE UNIVERSITY OF TOKYO
    Inventors: Yoshihiko NAKAMURA, Wataru TAKANO, Yosuke IKEGAMI
  • Publication number: 20230031291
    Abstract: A treadmill according to the present invention includes: a frame; an endless belt; an endless-belt drive unit; a plurality of cameras provided on the frame such that the orientations and/or locations thereof can be adjusted; a motion-data acquisition unit that markerlessly acquires motion data of a subject by using image information obtained with camera images; a motion analysis unit that analyzes the motion of the subject by using the motion data; and a control unit that controls the endless-belt drive unit based on the motion data.
    Type: Application
    Filed: December 24, 2020
    Publication date: February 2, 2023
    Applicants: THE UNIVERSITY OF TOKYO, OHTAKE ROOT KOGYO CO., LTD.
    Inventors: Yoshihiko NAKAMURA, Yosuke IKEGAMI, Yoshitake OTA, Yuji KANZAKI, Hirotaka KOGA
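    The control unit described in the abstract above — driving the endless belt from markerless motion data — could take many forms; one minimal sketch is a proportional law that keeps the subject centered on the belt. The gain, speed limits, and the choice of the pelvis keypoint are all illustrative assumptions, not from the patent.

    ```python
    def belt_speed_update(current_speed, subject_x, target_x=0.0,
                          gain=0.5, min_speed=0.0, max_speed=5.0):
        """subject_x: along-belt position (m) of the subject's pelvis keypoint,
        taken from the markerless motion data; positive x = toward belt front.
        Speeds up when the subject drifts forward, slows when they drift back."""
        error = subject_x - target_x          # forward drift -> positive error
        new_speed = current_speed + gain * error
        return max(min_speed, min(max_speed, new_speed))

    # Simulated per-frame pelvis drift fed back into the belt speed.
    speed = 1.2
    for drift in [0.3, 0.2, 0.0, -0.1]:
        speed = belt_speed_update(speed, drift)
    ```

    A real controller would filter the motion data and limit acceleration, but the loop shape — motion-data acquisition, analysis, then belt-drive command — mirrors the units listed in the abstract.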
  • Publication number: 20220108468
    Abstract: The present invention provides a motion capture with a high accuracy which can replace an optical motion capture technology, without attaching optical markers and sensors to a subject. A subject with an articulated structure has a plurality of feature points in the body of the subject including a plurality of joints wherein a distance between adjacent feature points is obtained as a constant. A spatial distribution of a likelihood of a position of each feature point is obtained based on a single input image or a plurality of input images taken at the same time. One or a plurality of position candidates corresponding to each feature point are obtained based on the spatial distribution of the likelihood of the position of each feature point. Each joint angle is obtained by performing an optimization calculation based on inverse kinematics using the candidates and the articulated structure.
    Type: Application
    Filed: August 29, 2019
    Publication date: April 7, 2022
    Applicant: THE UNIVERSITY OF TOKYO
    Inventors: Yoshihiko NAKAMURA, Wataru TAKANO, Yosuke IKEGAMI, Takuya OHASHI, Kazuki YAMAMOTO, Kentaro TAKEMOTO