Patents by Inventor Antti Myllykoski

Antti Myllykoski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11488317
    Abstract: A system is provided that stores a neural network model trained on a training dataset which indicates an association between first graphic information associated with one or more first objects and a corresponding first plurality of depth images. The system receives second graphic information that corresponds to the one or more first objects. The system further applies the trained neural network model on the received second graphic information. The system predicts a first depth image from the first plurality of depth images based on the application of the trained neural network model on the received second graphic information. The system extracts first depth information from the predicted first depth image. The first depth information corresponds to the one or more first objects indicated by the second graphic information.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: November 1, 2022
    Assignee: SONY GROUP CORPORATION
    Inventors: Jong Hwa Lee, Gareth White, Antti Myllykoski, Edward Theodore Winter
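The abstract above describes an inference flow: a trained model maps incoming graphic information to one of a known set of depth images, from which per-object depth is extracted. A minimal sketch of that flow, with a toy scorer standing in for the trained network (all function names here are illustrative, not from the patent):

```python
import numpy as np

def predict_depth_image(graphic_info, depth_images):
    # Toy stand-in for the trained neural network: score each candidate
    # depth image against a projection of the input and return the best
    # match. A real system would run a learned model here.
    features = graphic_info.mean(axis=-1)           # collapse RGB channels
    scores = [np.abs(features - d).mean() for d in depth_images]
    return depth_images[int(np.argmin(scores))]

def extract_depth(depth_image, mask):
    # Pull depth values for the object(s) of interest out of the
    # predicted depth image.
    return depth_image[mask]

# First plurality of depth images (constant toy images) and second
# graphic information corresponding to the same objects.
depth_images = [np.full((4, 4), v, dtype=float) for v in (1.0, 2.0, 3.0)]
graphic_info = np.full((4, 4, 3), 2.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1, 1] = True                                   # one object pixel

predicted = predict_depth_image(graphic_info, depth_images)
object_depth = extract_depth(predicted, mask)
```

The scoring step is only a placeholder for illustration; the patent claims a trained neural network, not any particular matching rule.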
  • Patent number: 11475631
    Abstract: A system for generation of a training dataset is provided. The system controls a depth sensor to capture, from a first viewpoint, a first image and a first depth value associated with a first object. The system receives tracking information from a handheld device associated with the depth sensor, based on a movement of the handheld device and the depth sensor in a 3D space. The system generates graphic information corresponding to the first object based on the received tracking information. The graphic information includes the first object from a second viewpoint. The system calculates a second depth value associated with the first object, based on the graphic information. The system generates, for a neural network model, a training dataset which includes a first combination of the first image and the first depth value, and a second combination of second images corresponding to the graphic information and the second depth value.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: October 18, 2022
    Assignee: SONY CORPORATION
    Inventors: Jong Hwa Lee, Gareth White, Antti Myllykoski, Edward Theodore Winter
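This abstract describes augmenting one real captured pair (image plus depth) with rendered pairs derived from device-tracking poses. A minimal sketch under that reading, with a trivial transform standing in for the graphics pipeline (names and the toy depth calculation are illustrative, not from the patent):

```python
import numpy as np

def render_from_tracking(first_image, pose):
    # Stand-in for re-rendering the object from a second viewpoint using
    # the handheld device's tracking information; a real system would use
    # a 3D graphics pipeline driven by the tracked pose.
    dx, dy = pose
    return np.roll(np.roll(first_image, dx, axis=0), dy, axis=1)

def build_training_dataset(first_image, first_depth, poses):
    # First combination: the real captured image and its measured depth.
    dataset = [(first_image, first_depth)]
    for pose in poses:
        graphic = render_from_tracking(first_image, pose)
        second_depth = float(graphic.mean())        # toy depth calculation
        dataset.append((graphic, second_depth))     # second combinations
    return dataset

first_image = np.arange(16, dtype=float).reshape(4, 4)
dataset = build_training_dataset(first_image, first_depth=1.5,
                                 poses=[(1, 0), (0, 1)])
```

The point of the sketch is the dataset structure (one real pair plus several rendered pairs), not the rendering itself.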
  • Publication number: 20220165027
    Abstract: A system for generation of a training dataset is provided. The system controls a depth sensor to capture, from a first viewpoint, a first image and a first depth value associated with a first object. The system receives tracking information from a handheld device associated with the depth sensor, based on a movement of the handheld device and the depth sensor in a 3D space. The system generates graphic information corresponding to the first object based on the received tracking information. The graphic information includes the first object from a second viewpoint. The system calculates a second depth value associated with the first object, based on the graphic information. The system generates, for a neural network model, a training dataset which includes a first combination of the first image and the first depth value, and a second combination of second images corresponding to the graphic information and the second depth value.
    Type: Application
    Filed: November 23, 2020
    Publication date: May 26, 2022
    Inventors: JONG HWA LEE, GARETH WHITE, ANTTI MYLLYKOSKI, EDWARD THEODORE WINTER
  • Publication number: 20220164973
    Abstract: A system is provided that stores a neural network model trained on a training dataset which indicates an association between first graphic information associated with one or more first objects and a corresponding first plurality of depth images. The system receives second graphic information that corresponds to the one or more first objects. The system further applies the trained neural network model on the received second graphic information. The system predicts a first depth image from the first plurality of depth images based on the application of the trained neural network model on the received second graphic information. The system extracts first depth information from the predicted first depth image. The first depth information corresponds to the one or more first objects indicated by the second graphic information.
    Type: Application
    Filed: November 23, 2020
    Publication date: May 26, 2022
    Inventors: JONG HWA LEE, GARETH WHITE, ANTTI MYLLYKOSKI, EDWARD THEODORE WINTER
  • Publication number: 20140156396
    Abstract: Embodiments of systems and methods are disclosed for providing messages based on an identified ridership pattern of a user of a transit system. Embodiments can include receiving information associated with a plurality of transactions of the user of the transit system, and identifying a ridership pattern of the user of the transit system. A predicted time and duration that the user of the transit system will be at a predicted location can be determined based, at least in part, on the identified ridership pattern. A message can be formulated using this information, and the message can be sent to the user or other message subscriber. Messages can include a variety of information, including advertisements, transit status updates, and more.
    Type: Application
    Filed: May 15, 2013
    Publication date: June 5, 2014
    Applicant: Cubic Corporation
    Inventors: David L. deKozan, Boris Karsch, Antti Myllykoski, Philip B. Dixon, Timothy Cook, Pradip Mistry, Janet Koenig
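The transit abstract describes mining a ridership pattern from fare transactions, predicting where and when the rider will be, and formulating a message from that prediction. A minimal sketch of that pipeline, assuming a simple most-frequent (station, hour) pattern; the data model and function names are illustrative, not from the patent:

```python
from collections import Counter

def identify_pattern(transactions):
    # Identify a ridership pattern: the rider's most common
    # (station, hour) pairing across past fare transactions.
    counts = Counter((t["station"], t["hour"]) for t in transactions)
    (station, hour), _ = counts.most_common(1)[0]
    return {"station": station, "hour": hour}

def formulate_message(pattern, status_updates):
    # Combine the predicted location/time with a relevant transit status
    # update (an advertisement could be slotted in the same way).
    update = status_updates.get(pattern["station"], "Service is on time.")
    return f"Around {pattern['hour']}:00 at {pattern['station']}: {update}"

transactions = [
    {"station": "Central", "hour": 8},
    {"station": "Central", "hour": 8},
    {"station": "Airport", "hour": 18},
]
message = formulate_message(identify_pattern(transactions),
                            {"Central": "Expect 10-minute delays."})
```

A real pattern model would also estimate duration at the predicted location, as the abstract notes; the frequency count here is only the simplest possible stand-in.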