Patents Assigned to Bodygram, Inc.
  • Patent number: 12138073
    Abstract: Systems and methods for generating a prediction of a body composition of a user using an image capturing device are disclosed. The systems and methods can be used to predict body compositions such as body fat percentage, water content percentage, muscle mass, bone mass, and so on, from a single user image. The methods include the steps of receiving one or more user images and one or more user parameters, generating one or more key points based on the one or more user images, and generating a prediction of the body composition of the user based on the one or more key points and the one or more user parameters, using a body composition deep learning network (DLN). In one embodiment, the body composition DLN comprises a face image DLN, a body feature DLN, and an output DLN.
    Type: Grant
    Filed: March 24, 2022
    Date of Patent: November 12, 2024
    Assignee: Bodygram, Inc.
    Inventors: Subas Chhatkuli, Kyohei Kamiyama, Chong Jin Koh
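    A minimal, hypothetical sketch of the three-branch network described in the abstract above, combining a face-image DLN, a body-feature DLN over key points and user parameters, and an output DLN; the layer sizes, key point count, and parameter names are assumptions for illustration, not details from the patent (Python/PyTorch):

      # Illustrative stand-in for the body composition DLN: a face-image branch,
      # a body-feature branch, and an output head. All dimensions are assumed.
      import torch
      import torch.nn as nn

      class BodyCompositionDLN(nn.Module):
          def __init__(self, num_keypoints=17, num_user_params=3, num_outputs=5):
              super().__init__()
              # Face-image branch: a small CNN over a cropped face image.
              self.face_dln = nn.Sequential(
                  nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
              )
              # Body-feature branch: an MLP over flattened 2D key points plus
              # user parameters (e.g. height, weight, age).
              self.body_dln = nn.Sequential(
                  nn.Linear(num_keypoints * 2 + num_user_params, 64), nn.ReLU(),
                  nn.Linear(64, 32), nn.ReLU(),
              )
              # Output head: fuses both branches and predicts body composition
              # values (body fat %, water %, muscle mass, bone mass, ...).
              self.output_dln = nn.Sequential(
                  nn.Linear(32 + 32, 32), nn.ReLU(), nn.Linear(32, num_outputs),
              )

          def forward(self, face_image, keypoints, user_params):
              face_feat = self.face_dln(face_image)
              body_feat = self.body_dln(torch.cat([keypoints.flatten(1), user_params], dim=1))
              return self.output_dln(torch.cat([face_feat, body_feat], dim=1))

      # One face crop, 17 key points, and 3 user parameters (random placeholders).
      model = BodyCompositionDLN()
      pred = model(torch.randn(1, 3, 64, 64), torch.randn(1, 17, 2), torch.randn(1, 3))
      print(pred.shape)  # torch.Size([1, 5])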
  • Patent number: 11869152
    Abstract: The present invention provides systems and methods for generating a 3D product mesh model and product dimensions from user images. The system is configured to receive one or more images of a user's body part, extract a body part mesh having a plurality of body part key points, generate a product mesh from an identified subset of the body part mesh, and generate one or more product dimensions in response to the selection of one or more key points from the product mesh. The system may output the product mesh, the product dimensions, or a manufacturing template of the product. In some embodiments, the system uses one or more machine learning modules to generate the body part mesh, identify the subset of the body part mesh, generate the product mesh, select the one or more key points, and/or generate the one or more product dimensions.
    Type: Grant
    Filed: May 11, 2021
    Date of Patent: January 9, 2024
    Assignee: Bodygram, Inc.
    Inventors: Chong Jin Koh, Kyohei Kamiyama, Nobuyuki Hayashi
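    An illustrative sketch, not the patented method: treating a body part mesh as a vertex array, selecting key points from it, and deriving one product dimension as the closed-loop length through those points; all vertex data and index choices below are invented for the example:

      # Toy product-dimension computation from selected mesh key points.
      import numpy as np

      def product_dimension(vertices, keypoint_indices):
          """Length of the closed polyline through the selected key points
          (e.g. a wrist circumference for a watch-band template)."""
          loop = vertices[keypoint_indices + keypoint_indices[:1]]  # close the loop
          return float(np.linalg.norm(np.diff(loop, axis=0), axis=1).sum())

      # Toy "wrist" ring of 8 vertices with radius ~3 cm.
      angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
      body_part_mesh = np.stack([3 * np.cos(angles), 3 * np.sin(angles), np.zeros(8)], axis=1)
      print(round(product_dimension(body_part_mesh, list(range(8))), 2))  # ~18.37 cm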
  • Patent number: 11798299
    Abstract: Disclosed are systems and methods for generating data sets for training deep learning networks for key point annotations and measurements extraction from photos taken using a mobile device camera. The method includes the steps of receiving a 3D scan model of a 3D object or subject captured from a 3D scanner and a 2D photograph of the same 3D object or subject at a virtual workspace. The 3D scan model is rigged with one or more key points. A superimposed image of a pose-adjusted and aligned 3D scan model superimposed over the 2D photograph is captured by a virtual camera in the virtual workspace. Training data for a key point annotation DLN is generated by repeating the steps for a plurality of objects belonging to a plurality of object categories. The key point annotation DLN learns from the training data to produce key point annotations of objects from 2D photographs captured using any mobile device camera.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: October 24, 2023
    Assignee: Bodygram, Inc.
    Inventors: Kyohei Kamiyama, Chong Jin Koh
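    A minimal sketch of one step described above, under an assumed pinhole model for the virtual camera: projecting the rigged 3D key points of the pose-adjusted scan into the image plane so the resulting 2D annotations align with the superimposed photograph; the intrinsics and key point values are illustrative only:

      # Project camera-space 3D key points to pixel coordinates with a pinhole model.
      import numpy as np

      def project_keypoints(keypoints_3d, focal, cx, cy):
          """Project Nx3 camera-space key points to Nx2 pixel coordinates."""
          x, y, z = keypoints_3d[:, 0], keypoints_3d[:, 1], keypoints_3d[:, 2]
          return np.stack([focal * x / z + cx, focal * y / z + cy], axis=1)

      # Two rigged key points (e.g. shoulder and hip) 2 m in front of the virtual camera.
      kps = np.array([[0.2, -0.4, 2.0], [0.1, 0.3, 2.0]])
      annotations_2d = project_keypoints(kps, focal=1000.0, cx=540.0, cy=960.0)
      print(annotations_2d)  # pixel positions stored alongside the superimposed image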
  • Patent number: 11507781
    Abstract: Disclosed are systems and methods for generating large data sets for training deep learning networks (DLNs) for 3D measurements extraction from 2D images taken using a mobile device camera. The method includes the steps of receiving a 3D model of a 3D object; extracting spatial features from the 3D model; generating a first type of augmentation data for the 3D model, such as but not limited to skin color, face contour, hair style, virtual clothing, and/or lighting conditions; augmenting the 3D model with the first type of augmentation data to generate an augmented 3D model; generating at least one 2D image from the augmented 3D model by performing a projection of the augmented 3D model onto at least one plane; and generating a training data set to train the deep learning network (DLN) for spatial feature extraction by aggregating the spatial features and the at least one 2D image.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: November 22, 2022
    Assignee: Bodygram, Inc.
    Inventors: Chong Jin Koh, Kyohei Kamiyama
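    A hedged illustration of the data-generation loop sketched in the abstract above: augment the 3D model, render a 2D projection, and pair it with the model's ground-truth spatial features to form one training sample; the augmentation and projection below are deliberately crude placeholders, not the patented procedure:

      # Build one (image, spatial features) training sample from a 3D model.
      import numpy as np

      def orthographic_silhouette(vertices, size=64):
          """Project vertices onto the XY plane and rasterize a coarse point silhouette."""
          xy = vertices[:, :2]
          span = xy.max(0) - xy.min(0) + 1e-9
          px = np.clip(((xy - xy.min(0)) / span * (size - 1)).astype(int), 0, size - 1)
          img = np.zeros((size, size), dtype=np.uint8)
          img[px[:, 1], px[:, 0]] = 1
          return img

      def make_training_sample(vertices, spatial_features, rng):
          # First augmentation type, stand-in only: jitter mimicking appearance changes.
          augmented = vertices + rng.normal(scale=0.002, size=vertices.shape)
          return {"image": orthographic_silhouette(augmented), "targets": spatial_features}

      rng = np.random.default_rng(0)
      body = rng.normal(size=(500, 3))  # placeholder 3D body model
      sample = make_training_sample(body, {"chest_cm": 96.0, "waist_cm": 81.0}, rng)
      print(sample["image"].shape, sample["targets"])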
  • Patent number: 11497267
    Abstract: Disclosed are systems and methods for full body measurements extraction using a mobile device camera.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: November 15, 2022
    Assignee: Bodygram, Inc.
    Inventors: Chong Jin Koh, Yu Sato
  • Patent number: 11010896
    Abstract: Disclosed are systems and methods for generating data sets for training deep learning networks for key point annotations and measurements extraction from photos taken using a mobile device camera. The method includes the steps of receiving a 3D scan model of a 3D object or subject captured from a 3D scanner and a 2D photograph of the same 3D object or subject at a virtual workspace. The 3D scan model is rigged with one or more key points. A superimposed image of a pose-adjusted and aligned 3D scan model superimposed over the 2D photograph is captured by a virtual camera in the virtual workspace. Training data for a key point annotation DLN is generated by repeating the steps for a plurality of objects belonging to a plurality of object categories. The key point annotation DLN learns from the training data to produce key point annotations of objects from 2D photographs captured using any mobile device camera.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: May 18, 2021
    Assignee: Bodygram, Inc.
    Inventors: Kyohei Kamiyama, Chong Jin Koh
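    A complementary, purely illustrative sketch: once (superimposed image, key point) pairs have been generated as described above, a key point annotation DLN can be fit to them with an ordinary regression loss; the architecture, shapes, and training setup are assumptions, not the patented network:

      # One toy regression step for a key point annotation network on generated data.
      import torch
      import torch.nn as nn

      keypoint_dln = nn.Sequential(            # toy stand-in for the annotation DLN
          nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
          nn.AdaptiveAvgPool2d(1), nn.Flatten(),
          nn.Linear(16, 17 * 2),               # 17 key points, (x, y) each
      )
      optimizer = torch.optim.Adam(keypoint_dln.parameters(), lr=1e-3)

      images = torch.randn(8, 3, 128, 128)     # batch of generated superimposed images
      keypoints = torch.rand(8, 17 * 2)        # matching 2D key point annotations
      loss = nn.functional.mse_loss(keypoint_dln(images), keypoints)
      optimizer.zero_grad()
      loss.backward()
      optimizer.step()
      print(float(loss))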
  • Patent number: 10962404
    Abstract: Disclosed are systems and methods for body weight prediction from one or more images. The method includes the steps of receiving one or more subject parameters; receiving one or more images containing a subject; identifying one or more annotation key points for one or more body features underneath the clothing of the subject from the one or more images utilizing one or more annotation deep-learning networks; calculating one or more geometric features of the subject based on the one or more annotation key points; and generating a prediction of the body weight of the subject utilizing a weight machine-learning module based on the one or more geometric features of the subject and the one or more subject parameters.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: March 30, 2021
    Assignee: Bodygram, Inc.
    Inventors: Kyohei Kamiyama, Chong Jin Koh, Yu Sato
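    A rough sketch, not the patented pipeline: turning annotated key points into a few scale-invariant geometric features and concatenating them with subject parameters to form the input a weight regression module would consume; the key point names and chosen features are assumptions:

      # Geometric features from 2D key points, normalized by pixel height.
      import numpy as np

      def geometric_features(kp):
          height_px = np.linalg.norm(kp["head_top"] - kp["ankle"])
          shoulder_w = np.linalg.norm(kp["left_shoulder"] - kp["right_shoulder"])
          hip_w = np.linalg.norm(kp["left_hip"] - kp["right_hip"])
          # Normalizing widths by pixel height makes the features scale-invariant.
          return np.array([shoulder_w / height_px, hip_w / height_px])

      keypoints = {
          "head_top": np.array([250.0, 40.0]),  "ankle": np.array([255.0, 900.0]),
          "left_shoulder": np.array([180.0, 200.0]), "right_shoulder": np.array([320.0, 200.0]),
          "left_hip": np.array([205.0, 480.0]),      "right_hip": np.array([300.0, 480.0]),
      }
      subject_params = np.array([178.0, 34.0])   # e.g. stated height (cm) and age
      features = np.concatenate([geometric_features(keypoints), subject_params])
      print(features)                            # input vector for the weight ML module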
  • Patent number: 10918150
    Abstract: Disclosed are methods and systems for generating a customized garment design. The method, when executed by a processor, comprises first retrieving a user signal about a user, identifying a user style comprising at least a preferred garment category for the user by analyzing the user signal, and retrieving a group of features for the preferred garment category, where each of the features is associated with at least one style variable. The method further comprises identifying at least one preferred style value for each of the style variables based on the user signal, generating one or more candidate garment feature sets by selecting one or more combinations of preferred style values for each style variable, and generating the customized garment design by selecting a garment feature set from the one or more candidate garment feature sets.
    Type: Grant
    Filed: March 7, 2018
    Date of Patent: February 16, 2021
    Assignee: Bodygram, Inc.
    Inventor: Chong Jin Koh
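    A toy sketch of the candidate-generation step described above, with invented style variables and values; the user-signal analysis and final selection logic are not reproduced:

      # Enumerate candidate garment feature sets from preferred style values.
      from itertools import product

      preferred_category = "shirt"               # assumed to be inferred from the user signal
      style_variables = {                        # features of the preferred garment category
          "collar": ["spread", "button-down"],
          "fit": ["slim"],
          "sleeve": ["long", "short"],
      }

      # Each combination of preferred style values is one candidate garment feature set.
      candidate_sets = [dict(zip(style_variables, combo))
                        for combo in product(*style_variables.values())]
      customized_design = candidate_sets[0]      # stand-in for the final selection step
      print(len(candidate_sets), customized_design)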
  • Patent number: 10636158
    Abstract: A mobile computing device used for measuring the height of an object, such as a human user, may be positioned on a reference surface, such as the ground plane. The reference surface is detected and a position guide is generated in an augmented reality (AR) plane on a display of the mobile computing device. The AR plane enables the object being measured to be positioned at a measurement position located at a predefined distance along the reference surface from the mobile computing device. The top and bottom of the object are detected in an image taken by the mobile computing device. The height of the object is measured based on the predefined distance and a distance between the top and bottom of the object in the image. The height of the object can also be measured with assistance from software development kits (SDKs) included in the mobile computing device.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: April 28, 2020
    Assignee: Bodygram, Inc.
    Inventors: Kyohei Kamiyama, Chong Jin Koh
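    A back-of-the-envelope sketch of the geometry implied above, under a simple pinhole-camera assumption: with the subject standing at a known distance from the device, the pixel span between the detected top and bottom of the subject converts to a real-world height; the focal length and pixel values below are illustrative:

      # Similar-triangles height estimate: height = pixel_span * distance / focal_length.
      def estimate_height_m(top_px, bottom_px, distance_m, focal_px):
          pixel_height = abs(bottom_px - top_px)
          return pixel_height * distance_m / focal_px

      # Subject 2.5 m away, spanning 1400 px in the image, focal length ~2000 px.
      print(round(estimate_height_m(180.0, 1580.0, 2.5, 2000.0), 2))  # 1.75 (metres)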
  • Patent number: 10489683
    Abstract: Disclosed are systems and methods for generating large data sets for training deep learning networks for 3D measurements extraction from images taken using a mobile device camera. The method includes the steps of receiving at least one 3D model; generating a first type of augmentation data, such as but not limited to skin color, face contour, hair style, virtual clothing, and/or lighting conditions; augmenting the 3D model with the first type of augmentation data; generating at least one image from the augmented 3D model; receiving a second type of augmentation data, such as a plurality of background images representing a variety of backgrounds; augmenting the at least one image with the second type of augmentation data to generate a plurality of augmented images; extracting spatial features from the 3D model; and providing the plurality of augmented images and the spatial features to train a deep learning network for 3D measurement determination.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: November 26, 2019
    Assignee: Bodygram, Inc.
    Inventors: Chong Jin Koh, Kyohei Kamiyama
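    An illustrative sketch of the second augmentation type described above: compositing a render of the augmented 3D model over a set of background images so the training images cover varied scenes; the arrays below stand in for real renders and photographs:

      # Alpha-blend a rendered RGBA foreground over several background photos.
      import numpy as np

      def composite(render_rgba, background_rgb):
          """Blend an RGBA render of the augmented 3D model onto a background image."""
          alpha = render_rgba[..., 3:4] / 255.0
          return (render_rgba[..., :3] * alpha + background_rgb * (1 - alpha)).astype(np.uint8)

      rng = np.random.default_rng(1)
      render = rng.integers(0, 256, size=(240, 320, 4), dtype=np.uint8)    # fake RGBA render
      backgrounds = [rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8) for _ in range(3)]
      augmented_images = [composite(render, bg) for bg in backgrounds]     # second augmentation
      print(len(augmented_images), augmented_images[0].shape)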
  • Patent number: 10470510
    Abstract: Disclosed are systems and methods for full body measurements extraction using a mobile device camera.
    Type: Grant
    Filed: August 10, 2019
    Date of Patent: November 12, 2019
    Assignee: Bodygram, Inc.
    Inventors: Chong Jin Koh, Kyohei Kamiyama
  • Patent number: 10321728
    Abstract: Disclosed are systems and methods for full body measurements extraction using a mobile device camera. The method includes the steps of receiving one or more user parameters from a user device; receiving at least one image from the user device, the at least one image containing a human and a background; performing body segmentation on the at least one image to extract one or more body features associated with the human from the background using a segmentation deep-learning network; generating annotation lines on the extracted body features using an annotation deep-learning network; generating body feature measurements of the one or more annotated body features utilizing a machine-learning module based on the annotated body features and the one or more user parameters; and generating body size measurements by aggregating the body feature measurements for each extracted body feature. In one embodiment, the deep-learning algorithms are trained on manually-annotated human body size measurements.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: June 18, 2019
    Assignee: Bodygram, Inc.
    Inventors: Chong Jin Koh, Yu Sato
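    A high-level orchestration sketch of the measurement pipeline in the abstract above; every stage here is a named placeholder, and the real segmentation DLN, annotation DLN, and measurement module are not reproduced:

      # Placeholder pipeline: segmentation -> annotation -> measurement -> aggregation.
      def segment_body(image):
          """Stand-in for the segmentation DLN: body features separated from the background."""
          return {"torso": "mask", "left_arm": "mask", "right_arm": "mask"}

      def annotate(body_features):
          """Stand-in for the annotation DLN: annotation lines per body feature."""
          return {part: ["annotation line"] for part in body_features}

      def measure(annotated, user_params):
          """Stand-in for the measurement ML module: one measurement per annotated feature."""
          return {part: 0.0 for part in annotated}

      def full_body_measurements(image, user_params):
          body_features = segment_body(image)                  # 1. body segmentation
          annotated = annotate(body_features)                  # 2. annotation lines
          feature_measurements = measure(annotated, user_params)  # 3. per-feature measurements
          return dict(feature_measurements)                    # 4. aggregated body size measurements

      print(full_body_measurements("front_photo.jpg", {"height_cm": 178}))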