Patents by Inventor Kyohei Kamiyama

Kyohei Kamiyama has filed patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240037769
    Abstract: Systems and methods for predicting body part measurements of a user from depth images are disclosed. The method first receives a plurality of depth images of a body part of the user from an image-capturing device. Next, the method generates a plurality of individual point clouds based on the plurality of depth images. Next, the method stitches the plurality of individual point clouds into a stitched point cloud, and determines a measurement location based on the stitched point cloud. Finally, the method projects the measurement location to the stitched point cloud, and generates the body part measurement based on the projected measurement location. To determine the measurement location, one embodiment uses a morphed base 3D model, whereas another embodiment uses a 3D keypoint detection algorithm on the stitched point cloud. The method may be implemented on a mobile computing device with a depth sensor.
    Type: Application
    Filed: December 22, 2021
    Publication date: February 1, 2024
    Applicant: Visualize K.K.
    Inventors: Bryan Hobson Atwood, Chong Jin Koh, Kyohei Kamiyama
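As a rough illustration of the pipeline in the publication above, the Python sketch below back-projects a depth image into a point cloud, stitches pre-aligned clouds, and takes a crude girth measurement from a horizontal slice. Everything here is an assumption for illustration: the intrinsics, the identity poses, and the slice-perimeter measurement are stand-ins, not the claimed method (which determines the measurement location from a morphed base 3D model or a 3D keypoint detection algorithm).

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-space 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                 # drop pixels with no depth reading

def stitch(clouds, poses):
    """Apply each cloud's 4x4 camera-to-world pose and concatenate."""
    return np.vstack([(np.c_[p, np.ones(len(p))] @ T.T)[:, :3]
                      for p, T in zip(clouds, poses)])

def girth_at_height(cloud, y, band=0.005):
    """Crude measurement: perimeter of a thin horizontal slice of the cloud."""
    ring = cloud[np.abs(cloud[:, 1] - y) < band][:, [0, 2]]
    center = ring.mean(axis=0)
    order = np.argsort(np.arctan2(ring[:, 1] - center[1], ring[:, 0] - center[0]))
    ring = np.vstack([ring[order], ring[order][:1]])      # close the loop
    return np.linalg.norm(np.diff(ring, axis=0), axis=1).sum()

# Toy demo: a synthetic cylindrical "limb" (radius 0.05 m) split into two
# pre-aligned halves, stitched with identity poses, then measured.
theta = np.random.uniform(0, 2 * np.pi, 20000)
ys = np.random.uniform(0.0, 0.3, 20000)
limb = np.c_[0.05 * np.cos(theta), ys, 0.05 * np.sin(theta)]
stitched = stitch([limb[:10000], limb[10000:]], [np.eye(4), np.eye(4)])
print(f"girth at mid-height: {girth_at_height(stitched, 0.15):.3f} m (true ~0.314)")
```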
  • Patent number: 11869152
    Abstract: The present invention provides systems and methods for generating a 3D product mesh model and product dimensions from user images. The system is configured to receive one or more images of a user's body part, extract a body part mesh having a plurality of body part key points, generate a product mesh from an identified subset of the body part mesh, and generate one or more product dimensions in response to the selection of one or more key points from the product mesh. The system may output the product mesh, the product dimensions, or a manufacturing template of the product. In some embodiments, the system uses one or more machine learning modules to generate the body part mesh, identify the subset of the body part mesh, generate the product mesh, select the one or more key points, and/or generate the one or more product dimensions.
    Type: Grant
    Filed: May 11, 2021
    Date of Patent: January 9, 2024
    Assignee: Bodygram, Inc.
    Inventors: Chong Jin Koh, Kyohei Kamiyama, Nobuyuki Hayashi
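The last two steps of the system above, carving a product submesh out of the body-part mesh and turning selected key points into a product dimension, reduce to simple geometry once the mesh and key points exist. A minimal numpy sketch follows; the vertex array, the `palm_mask` region test, and the key-point indices are all hypothetical stand-ins for what the patent's machine learning modules would produce.

```python
import numpy as np

# Hypothetical stand-ins for the intermediate artifacts: a body-part mesh as
# an (N, 3) vertex array and named key points given as vertex indices.
vertices = np.random.rand(500, 3)
key_points = {"knuckle_index": 42, "knuckle_pinky": 371, "wrist": 7}

def product_submesh(vertices, mask):
    """Select the subset of the body-part mesh the product will cover."""
    return vertices[mask]

def dimension(vertices, kp_a, kp_b):
    """A product dimension as the straight-line distance between two key points."""
    return np.linalg.norm(vertices[kp_a] - vertices[kp_b])

palm_mask = vertices[:, 2] < 0.5                        # illustrative region test
glove_mesh = product_submesh(vertices, palm_mask)
width = dimension(vertices, key_points["knuckle_index"], key_points["knuckle_pinky"])
print(f"glove submesh: {len(glove_mesh)} vertices, palm width ~ {width:.3f}")
```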
  • Patent number: 11798299
    Abstract: Disclosed are systems and methods for generating data sets for training deep learning networks for key point annotations and measurements extraction from photos taken using a mobile device camera. The method includes the steps of receiving a 3D scan model of a 3D object or subject captured from a 3D scanner and a 2D photograph of the same 3D object or subject at a virtual workspace. The 3D scan model is rigged with one or more key points. A superimposed image of a pose-adjusted and aligned 3D scan model superimposed over the 2D photograph is captured by a virtual camera in the virtual workspace. Training data for a key point annotation DLN is generated by repeating the steps for a plurality of objects belonging to a plurality of object categories. The key point annotation DLN learns from the training data to produce key point annotations of objects from 2D photographs captured using any mobile device camera.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: October 24, 2023
    Assignee: Bodygram, Inc.
    Inventors: Kyohei Kamiyama, Chong Jin Koh
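The heart of the data-generation method above is that, once the rigged 3D scan is pose-adjusted and aligned over the 2D photograph, the virtual camera's projection of each 3D key point yields a pixel-accurate 2D annotation for free. A minimal pinhole-projection sketch, with assumed intrinsics `K` and an identity alignment pose:

```python
import numpy as np

def project_key_points(kp_3d, K, T_world_to_cam):
    """Project rigged 3D key points through a virtual pinhole camera.

    With the scan model aligned over the 2D photo, each projected pixel
    location becomes a training label for the key point annotation DLN.
    """
    homog = np.c_[kp_3d, np.ones(len(kp_3d))]
    cam = (homog @ T_world_to_cam.T)[:, :3]       # world -> camera frame
    px = cam @ K.T                                # pinhole projection
    return px[:, :2] / px[:, 2:3]                 # perspective divide -> pixels

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
T = np.eye(4)                                                # assumed aligned pose
kp_3d = np.array([[0.0, 0.1, 2.0], [0.05, -0.3, 2.1]])       # e.g. shoulder, hip
labels_2d = project_key_points(kp_3d, K, T)
print(labels_2d)   # pixel coordinates paired with the photo as one training sample
```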
  • Publication number: 20230316046
    Abstract: Methods and systems are disclosed for evaluating or training a machine learning module when its corresponding truth data sets are unavailable or unreliable. The methods and systems are configured for evaluating or training a target machine learning module having a first (system) input and a first output, wherein the target module is connected to a second machine learning module having an intermediate input (identical to the first output of the target module) and a second (system) output, by training the second module using received corresponding intermediate and output data sets, generating an evaluation data set using a received system input data set, and evaluating or training the target module using a loss function based on a distance metric between the evaluation data set and a received system output data set corresponding to the system input data set.
    Type: Application
    Filed: September 17, 2021
    Publication date: October 5, 2023
    Inventors: Ito Takafumi, Kyohei Kamiyama
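In concrete terms, the scheme above trains the downstream (second) module on whatever (intermediate, output) pairs exist, then freezes it and backpropagates a system-level loss through it into the target module. A toy PyTorch sketch under assumed layer sizes; the two `nn.Linear` modules stand in for the patent's target and second machine learning modules:

```python
import torch
import torch.nn as nn

target = nn.Linear(8, 4)       # first (system) input  -> intermediate output
second = nn.Linear(4, 2)       # intermediate input    -> second (system) output

# Step 1: train the second module on available (intermediate, output) pairs.
inter, out = torch.randn(64, 4), torch.randn(64, 2)
opt = torch.optim.Adam(second.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(second(inter), out)
    loss.backward()
    opt.step()

# Step 2: freeze the second module and train the target module end to end,
# comparing the evaluation data set second(target(x)) against system outputs y.
for p in second.parameters():
    p.requires_grad_(False)
x, y = torch.randn(64, 8), torch.randn(64, 2)   # system input/output pairs
opt = torch.optim.Adam(target.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(second(target(x)), y)  # distance-metric loss
    loss.backward()
    opt.step()
```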
  • Publication number: 20230186567
    Abstract: The present invention provides systems and methods for generating a 3D product mesh model and product dimensions from user images. The system is configured to receive one or more images of a user's body part, extract a body part mesh having a plurality of body part key points, generate a product mesh from an identified subset of the body part mesh, and generate one or more product dimensions in response to the selection of one or more key points from the product mesh. The system may output the product mesh, the product dimensions, or a manufacturing template of the product. In some embodiments, the system uses one or more machine learning modules to generate the body part mesh, identify the subset of the body part mesh, generate the product mesh, select the one or more key points, and/or generate the one or more product dimensions.
    Type: Application
    Filed: May 11, 2021
    Publication date: June 15, 2023
    Inventors: Chong Jin Koh, Kyohei Kamiyama, Nobuyuki Hayashi
  • Publication number: 20230052613
    Abstract: Disclosed are systems and methods for obtaining a scale factor and 3D measurements of objects from a series of 2D images. An object to be measured is selected from a menu of an Augmented Reality (AR) based measurement application being executed by a mobile computing device. Measurement instructions corresponding to the selected object are retrieved and used to generate a series of image capture screens. A series of image capture screens assist the user in positioning the device relative to the object in a plurality of imaging positions to capture the series of 2D images. The images are used to determine one or more scale factors and to build a complete scaled 3D model of the object in virtual 3D space. The 3D model is used to generate one or more measurements of the object.
    Type: Application
    Filed: January 22, 2021
    Publication date: February 16, 2023
    Inventors: Kyohei Kamiyama, Chong Jin Koh
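The pivotal arithmetic in the application above is the scale factor: a 3D reconstruction from 2D images is only defined up to scale, so one known real-world length (for example, the device-to-object distance enforced by the guided capture screens) converts model units into metric measurements. A worked example with assumed numbers:

```python
# A photogrammetric model is unitless until one real-world length pins it down.
model_ref_length = 3.42        # reference distance measured in the unscaled model
known_ref_length_m = 1.20      # real-world length of that reference (assumed known)

scale = known_ref_length_m / model_ref_length   # meters per model unit

model_measurement = 0.87       # e.g. an object girth, in model units
print(f"scaled measurement: {model_measurement * scale:.3f} m")
```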
  • Patent number: 11574421
    Abstract: A structured 3D model of a real-world object is generated from a series of 2D photographs of the object, using photogrammetry, a keypoint detection deep learning network (DLN), and retopology. In addition, object parameters of the object are received. A pressure map of the object is then generated by a pressure estimation DLN based on the structured 3D model and the object parameters. The pressure estimation DLN was trained on structured 3D models, object parameters, and pressure maps of a plurality of objects belonging to a given object category. The pressure map of the real-world object can be used in downstream processes, such as custom manufacturing.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: February 7, 2023
    Assignee: Visualize K.K.
    Inventors: Chong Jin Koh, Kyohei Kamiyama
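A pressure estimation DLN of the kind described above could, for instance, map per-vertex positions of the structured (retopologized) mesh plus global object parameters to one pressure value per vertex; because retopology gives every object in a category the same vertex layout, a fixed per-vertex output is well defined. The PyTorch sketch below is an assumed toy architecture, not the patent's network:

```python
import torch
import torch.nn as nn

class PressureEstimator(nn.Module):
    """Toy stand-in for the pressure estimation DLN: per-vertex positions from
    the structured 3D model plus global object parameters in, one pressure
    value per vertex out. Architecture and sizes are assumptions."""
    def __init__(self, n_params=3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + n_params, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, vertices, params):
        # Tile the global parameters onto every vertex before the shared MLP.
        tiled = params.expand(vertices.shape[0], -1)
        return self.mlp(torch.cat([vertices, tiled], dim=-1)).squeeze(-1)

model = PressureEstimator()
vertices = torch.randn(1024, 3)               # structured 3D model vertices
params = torch.tensor([[70.0, 1.75, 42.0]])   # e.g. weight, height, size (assumed)
pressure_map = model(vertices, params)        # one pressure value per vertex
print(pressure_map.shape)
```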
  • Patent number: 11507781
    Abstract: Disclosed are systems and methods for generating large data sets for training deep learning networks (DLNs) for 3D measurements extraction from 2D images taken using a mobile device camera. The method includes the steps of receiving a 3D model of a 3D object; extracting spatial features from the 3D model; generating a first type of augmentation data for the 3D model, such as but not limited to skin color, face contour, hair style, virtual clothing, and/or lighting conditions; augmenting the 3D model with the first type of augmentation data to generate an augmented 3D model; generating at least one 2D image from the augmented 3D model by performing a projection of the augmented 3D model onto at least one plane; and generating a training data set to train the deep learning network (DLN) for spatial feature extraction by aggregating the spatial features and the at least one 2D image.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: November 22, 2022
    Assignee: Bodygram, Inc.
    Inventors: Chong Jin Koh, Kyohei Kamiyama
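The sketch below illustrates the shape of this data-generation loop in numpy: apply an augmentation to the 3D model, project it onto a plane, and aggregate the projection with the ground-truth spatial features into a training pair. The `lighting_scale` multiplier is a deliberately crude placeholder for the abstract's augmentations (skin color, face contour, hair style, virtual clothing, lighting conditions):

```python
import numpy as np

def project_to_plane(vertices):
    """Orthographic projection onto the XY plane, one simple way to realize
    'projection of the augmented 3D model onto at least one plane'."""
    return vertices[:, :2]

def make_training_pair(vertices, spatial_features, lighting_scale=1.0):
    """Pair one synthetic 2D view with the ground-truth spatial features."""
    augmented = vertices * lighting_scale      # placeholder augmentation step
    return project_to_plane(augmented), spatial_features

vertices = np.random.rand(2000, 3)             # a received 3D body model
features = {"height": 1.78, "waist": 0.81}     # extracted spatial features
dataset = [make_training_pair(vertices, features, s) for s in (0.9, 1.0, 1.1)]
print(len(dataset), dataset[0][0].shape)       # 3 augmented views, (2000, 2) each
```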
  • Publication number: 20220351378
    Abstract: Disclosed are systems and methods for generating data sets for training deep learning networks for key point annotations and measurements extraction from photos taken using a mobile device camera. The method includes the steps of receiving a 3D scan model of a 3D object or subject captured from a 3D scanner and a 2D photograph of the same 3D object or subject at a virtual workspace. The 3D scan model is rigged with one or more key points. A superimposed image of a pose-adjusted and aligned 3D scan model superimposed over the 2D photograph is captured by a virtual camera in the virtual workspace. Training data for a key point annotation DLN is generated by repeating the steps for a plurality of objects belonging to a plurality of object categories. The key point annotation DLN learns from the training data to produce key point annotations of objects from 2D photographs captured using any mobile device camera.
    Type: Application
    Filed: November 2, 2020
    Publication date: November 3, 2022
    Inventors: Kyohei Kamiyama, Chong Jin Koh
  • Publication number: 20220270297
    Abstract: Systems and methods for generating pressure maps of real-world objects using deep learning are disclosed. A structured 3D model of a real-world object is generated from a series of 2D photographs of the object, using a process which in some embodiments utilizes photogrammetry, a keypoint detection deep learning network (DLN), and retopology. In addition, object parameters of the object are received. A pressure map of the object is then generated by a pressure estimation deep learning network (DLN) based on the structured 3D model and the object parameters, where the pressure estimation DLN was trained on structured 3D models, object parameters, and pressure maps of a plurality of objects belonging to a given object category. The pressure map of the real-world object can be used in downstream processes, such as custom manufacturing.
    Type: Application
    Filed: August 27, 2020
    Publication date: August 25, 2022
    Inventors: Chong Jin Koh, Kyohei Kamiyama
  • Publication number: 20220044070
    Abstract: Disclosed are systems and methods for generating large data sets for training deep learning networks (DLNs) for 3D measurements extraction from 2D images taken using a mobile device camera. The method includes the steps of receiving a 3D model of a 3D object; extracting spatial features from the 3D model; generating a first type of augmentation data for the 3D model, such as but not limited to skin color, face contour, hair style, virtual clothing, and/or lighting conditions; augmenting the 3D model with the first type of augmentation data to generate an augmented 3D model; generating at least one 2D image from the augmented 3D model by performing a projection of the augmented 3D model onto at least one plane; and generating a training data set to train the deep learning network (DLN) for spatial feature extraction by aggregating the spatial features and the at least one 2D image.
    Type: Application
    Filed: December 17, 2019
    Publication date: February 10, 2022
    Inventors: Chong Jin Koh, Kyohei Kamiyama
  • Patent number: 11010896
    Abstract: Disclosed are systems and methods for generating data sets for training deep learning networks for key point annotations and measurements extraction from photos taken using a mobile device camera. The method includes the steps of receiving a 3D scan model of a 3D object or subject captured from a 3D scanner and a 2D photograph of the same 3D object or subject at a virtual workspace. The 3D scan model is rigged with one or more key points. A superimposed image of a pose-adjusted and aligned 3D scan model superimposed over the 2D photograph is captured by a virtual camera in the virtual workspace. Training data for a key point annotation DLN is generated by repeating the steps for a plurality of objects belonging to a plurality of object categories. The key point annotation DLN learns from the training data to produce key point annotations of objects from 2D photographs captured using any mobile device camera.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: May 18, 2021
    Assignee: Bodygram, Inc.
    Inventors: Kyohei Kamiyama, Chong Jin Koh
  • Patent number: 10962404
    Abstract: Disclosed are systems and methods for body weight prediction from one or more images. The method includes the steps of receiving one or more subject parameters; receiving one or more images containing a subject; identifying one or more annotation key points for one or more body features underneath the clothing of the subject from the one or more images utilizing one or more annotation deep-learning networks; calculating one or more geometric features of the subject based on the one or more annotation key points; and generating a prediction of the body weight of the subject utilizing a weight machine-learning module based on the one or more geometric features of the subject and the one or more subject parameters.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: March 30, 2021
    Assignee: Bodygram, Inc.
    Inventors: Kyohei Kamiyama, Chong Jin Koh, Yu Sato
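The pipeline above splits cleanly into handcrafted geometry plus learned regression: distances and ratios computed from the annotated key points become features, which are concatenated with subject parameters (e.g. height, age) and fed to a weight regressor. A sketch with synthetic data; the feature choices, the `GradientBoostingRegressor`, and all numbers are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def geometric_features(key_points):
    """Simple geometric features from annotated 2D key points: shoulder
    width, torso length, and their ratio (illustrative choices)."""
    shoulder = np.linalg.norm(key_points["l_shoulder"] - key_points["r_shoulder"])
    torso = np.linalg.norm(key_points["neck"] - key_points["hip"])
    return [shoulder, torso, shoulder / torso]

# Synthetic stand-in training data: geometric features + subject parameters.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))               # 3 geometric features + height + age
y = 60 + 10 * X[:, 0] + rng.normal(scale=2, size=200)   # fake weights (kg)

weight_model = GradientBoostingRegressor().fit(X, y)

kp = {k: rng.normal(size=2) for k in ("l_shoulder", "r_shoulder", "neck", "hip")}
sample = np.array([geometric_features(kp) + [1.75, 30]])  # + subject parameters
print(f"predicted weight: {weight_model.predict(sample)[0]:.1f} kg")
```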
  • Publication number: 20200319015
    Abstract: Disclosed are systems and methods for body weight prediction from one or more images. The method includes the steps of receiving one or more subject parameters; receiving one or more images containing a subject; identifying one or more annotation key points for one or more body features underneath the clothing of the subject from the one or more images utilizing one or more annotation deep-learning networks; calculating one or more geometric features of the subject based on the one or more annotation key points; and generating a prediction of the body weight of the subject utilizing a weight machine-learning module based on the one or more geometric features of the subject and the one or more subject parameters.
    Type: Application
    Filed: March 26, 2020
    Publication date: October 8, 2020
    Inventors: Kyohei Kamiyama, Chong Jin Koh, Yu Sato
  • Publication number: 20200193591
    Abstract: Disclosed are systems and methods for generating data sets for training deep learning networks for key point annotations and measurements extraction from photos taken using a mobile device camera. The method includes the steps of receiving a 3D scan model of a 3D object or subject captured from a 3D scanner and a 2D photograph of the same 3D object or subject at a virtual workspace. The 3D scan model is rigged with one or more key points. A superimposed image of a pose-adjusted and aligned 3D scan model superimposed over the 2D photograph is captured by a virtual camera in the virtual workspace. Training data for a key point annotation DLN is generated by repeating the steps for a plurality of objects belonging to a plurality of object categories. The key point annotation DLN learns from the training data to produce key point annotations of objects from 2D photographs captured using any mobile device camera.
    Type: Application
    Filed: November 26, 2019
    Publication date: June 18, 2020
    Inventors: Kyohei Kamiyama, Chong Jin Koh
  • Patent number: 10636158
    Abstract: A mobile computing device used for measuring the height of an object, such as a human user, may be positioned on a reference surface, such as the ground plane. The reference surface is detected and a position guide is generated in an augmented reality (AR) plane on a display of the mobile computing device. The AR plane enables the object being measured to be positioned at a measurement position located at a predefined distance along the reference surface from the mobile computing device. The top and bottom of the object are detected in an image taken by the mobile computing device. The height of the object is measured based on the predefined distance and a distance between the top and bottom of the object in the image. The height of the object can also be measured with assistance from software development kits (SDKs) included in the mobile computing device.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: April 28, 2020
    Assignee: Bodygram, Inc.
    Inventors: Kyohei Kamiyama, Chong Jin Koh
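The measurement in the patent above comes down to one line of pinhole-camera arithmetic: with the device on the ground plane and the subject at a predefined distance D, a pixel span of h_px between the detected top and bottom of the subject maps to a real height of roughly h = h_px * D / f. A worked example with assumed values; a real implementation would read the focal length from the device's camera intrinsics:

```python
focal_length_px = 1500.0      # assumed camera focal length, in pixels
distance_m = 2.5              # predefined distance set by the AR position guide
top_px, bottom_px = 120.0, 1180.0   # detected top/bottom of the subject in the image

height_m = (bottom_px - top_px) * distance_m / focal_length_px
print(f"estimated height: {height_m:.2f} m")   # -> 1.77 m
```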
  • Publication number: 20190357615
    Abstract: Disclosed are systems and methods for full body measurements extraction using a mobile device camera.
    Type: Application
    Filed: August 10, 2019
    Publication date: November 28, 2019
    Inventors: Chong Jin Koh, Kyohei Kamiyama
  • Patent number: 10489683
    Abstract: Disclosed are systems and methods for generating large data sets for training deep learning networks for 3D measurements extraction from images taken using a mobile device camera. The method includes the steps of receiving at least one 3D model; generating a first type of augmentation data, such as but not limited to skin color, face contour, hair style, virtual clothing, and/or lighting conditions; augmenting the 3D model with the first type of augmentation data; generating at least one image from the augmented 3D model; receiving a second type of augmentation data, such as a plurality of background images representing a variety of backgrounds; augmenting the at least one image with the second type of augmentation data to generate a plurality of augmented images; extracting spatial features from the 3D model; and providing the plurality of augmented images and the spatial features to train a deep learning network for 3D measurement determination.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: November 26, 2019
    Assignee: Bodygram, Inc.
    Inventors: Chong Jin Koh, Kyohei Kamiyama
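The second augmentation stage described above, compositing the rendered, first-stage-augmented subject over a variety of background images, is plain alpha blending. The numpy sketch below runs on synthetic arrays; real renders and background photos would take their place:

```python
import numpy as np

def composite(render_rgba, background_rgb):
    """Alpha-composite a rendered subject (RGBA) over a background (RGB)."""
    alpha = render_rgba[..., 3:4] / 255.0
    return (render_rgba[..., :3] * alpha + background_rgb * (1 - alpha)).astype(np.uint8)

render = np.zeros((480, 640, 4), dtype=np.uint8)        # synthetic render
render[100:400, 250:390] = (200, 160, 140, 255)         # opaque "subject" block
backgrounds = [np.full((480, 640, 3), g, dtype=np.uint8) for g in (60, 120, 180)]

# One render times N backgrounds -> N augmented training images.
augmented = [composite(render, bg) for bg in backgrounds]
print(len(augmented), augmented[0].shape)
```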
  • Patent number: D971256
    Type: Grant
    Filed: January 26, 2020
    Date of Patent: November 29, 2022
    Assignee: Visualize K.K.
    Inventors: Kyohei Kamiyama, Chong Jin Koh
  • Patent number: D971257
    Type: Grant
    Filed: January 26, 2020
    Date of Patent: November 29, 2022
    Assignee: Visualize K.K.
    Inventors: Kyohei Kamiyama, Chong Jin Koh