Patents by Inventor Yoshihide TONOMURA

Yoshihide TONOMURA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230095732
    Abstract: A synchronous control system is provided. A transmission device 1 includes a reception unit 11 that receives a control signal for a lighting device in a live event site and a transmission unit 13 that transmits, to a reception device in a live viewing site, the control signal to which an output time is appended. The output time is obtained by adding a predetermined period of time to the receipt time at which the control signal is received. A reception device 2 includes a reception unit 21 that receives the control signal to which the output time is appended, and an output control unit 13 that outputs the control signal to a lighting device 6 in the live viewing site at the output time, so that control of the lighting device 6 is synchronized with at least one of the video and the audio played in the live viewing site. (A minimal sketch of this timing scheme appears after this entry.)
    Type: Application
    Filed: January 27, 2020
    Publication date: March 30, 2023
    Inventors: Masato Ono, Takahide Hoshide, Shinji Fukatsu, Yoshihide TONOMURA
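    The scheme above can be pictured with a short sketch: the sender stamps each control signal with an output time equal to its receipt time plus a fixed delay, and the receiver holds the signal until that time before driving the lighting device. The names and the delay value below are illustrative assumptions, not taken from the application.
      import time

      PRESET_DELAY_S = 2.0  # predetermined period added to the receipt time (illustrative value)

      def stamp_output_time(control_signal: dict) -> dict:
          """Transmission side: append an output time = receipt time + fixed delay."""
          receipt_time = time.time()
          return {**control_signal, "output_time": receipt_time + PRESET_DELAY_S}

      def release_at_output_time(stamped_signal: dict, drive_lighting) -> None:
          """Reception side: hold the signal until the appended output time, then output it."""
          wait = stamped_signal["output_time"] - time.time()
          if wait > 0:
              time.sleep(wait)            # wait until the scheduled output time
          drive_lighting(stamped_signal)  # forward the command to the local lighting controller

      if __name__ == "__main__":
          signal = stamp_output_time({"channel": 1, "level": 255})
          release_at_output_time(signal, lambda s: print("lighting command out:", s))
    Delaying every signal by the same fixed amount is what lets the lighting control line up with the separately transmitted video and audio at the viewing site.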
  • Patent number: 11461903
    Abstract: Accurate and rapid identification can be performed even when feature vectors of input pixels and background pixels are close.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: October 4, 2022
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Hirokazu Kakinuma, Yoshihide Tonomura, Hiromu Miyashita, Hidenobu Nagata, Kota Hidaka
  • Patent number: 11398049
    Abstract: To identify and track a plurality of objects, an object tracking device 1 includes a frame acquisition unit 31 for acquiring one set of frame data 11 included in moving image data that captures a space where a plurality of objects are present; a distance acquisition unit 32 for acquiring measurement data 12 from measurement of the three-dimensional coordinates of points constituting the objects in the space where the objects are present; an object position detection unit 33 for detecting a two-dimensional position of each of the objects from the frame data 11 on the basis of comparison with template data of each of the objects; an object distance detection unit 34 for detecting a plurality of three-dimensional positions of the objects from the measurement data 12; and an associating unit 35 for associating each two-dimensional position from the object position detection unit 33 with the nearest three-dimensional position among the three-dimensional positions from the object distance detection unit 34. (A sketch of the association step appears after this entry.)
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: July 26, 2022
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Yoko Ishii, Kota Hidaka, Yoshihide Tonomura, Tetsuro Tokunaga, Yuichi Hiroi
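    The association step named in the abstract reduces to a nearest-neighbour match between detected 2-D positions and measured 3-D points. The sketch below assumes a pinhole camera model with made-up intrinsics so the two can be compared in the image plane; the patent does not specify this particular distance measure.
      import numpy as np

      def project(points_3d, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
          """Pinhole projection of Nx3 camera-frame points to Nx2 pixel coordinates (assumed intrinsics)."""
          x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
          return np.stack([fx * x / z + cx, fy * y / z + cy], axis=1)

      def associate(detections_2d, points_3d):
          """Pair each 2-D detection with the nearest measured 3-D position (nearest in the image plane)."""
          projected = project(points_3d)
          pairs = []
          for det in detections_2d:
              idx = int(np.argmin(np.linalg.norm(projected - det, axis=1)))
              pairs.append((tuple(det), tuple(points_3d[idx])))
          return pairs

      if __name__ == "__main__":
          dets = np.array([[300.0, 250.0], [420.0, 200.0]])      # template-matched 2-D positions
          pts = np.array([[-0.1, 0.0, 3.0], [0.5, -0.2, 2.5]])   # measured 3-D coordinates (metres)
          for d, p in associate(dets, pts):
              print("2-D", d, "->", "3-D", p)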
  • Publication number: 20220224872
    Abstract: A video generation device includes: an object extraction unit 11 that extracts an object area from a captured video in a space; a spatial position tracking unit 12 that detects an article from three-dimensional position measurement data in the space, applies identification information to the article, and calculates a three-dimensional spatial position of the article using the three-dimensional position measurement data; a position information merging unit 13 that links the object area to the identification information of the article to associate the three-dimensional spatial position with the object area; a depth expression unit 14 that generates a depth expression video, in which the depth of the video can be adjusted, of only the object area with which the three-dimensional spatial position has been associated; and a feedback unit 15 that transmits information indicating a method for reducing blurring generated in the object area of the depth expression video to any one or more o
    Type: Application
    Filed: May 29, 2019
    Publication date: July 14, 2022
    Inventors: Jiro Nagao, Yoshihide TONOMURA, Kota HIDAKA
  • Patent number: 11303305
    Abstract: A selective PEG algorithm creates a sparse matrix while maintaining row weights and column weights at arbitrary multiple levels and, in the process, inactivates an arbitrary edge so that the minimum loop formed between arbitrary nodes is enlarged, or performs constrained interleaving, thereby improving encoding efficiency when the matrix space is narrow. (A generic PEG sketch appears after this entry.)
    Type: Grant
    Filed: October 8, 2014
    Date of Patent: April 12, 2022
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Yoshihide Tonomura, Daisuke Shirai, Tatsuya Fujii, Takayuki Nakachi, Takahiro Yamaguchi, Masahiko Kitamura
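    The patent builds on progressive edge-growth (PEG) construction of sparse parity-check matrices. As a point of reference, a minimal generic PEG sketch is shown below: each new edge is attached to a check node that is not yet reachable from the variable node in the current Tanner graph, so short loops are avoided. The selective features claimed in the patent (multi-level row/column weights, edge inactivation, constrained interleaving) are not implemented here, and the degree tie-break stands in for full PEG's depth criterion.
      import random
      from collections import deque

      def peg_construct(n_var, n_chk, var_degrees, seed=0):
          """Generic PEG sketch: grow the Tanner graph one edge at a time, preferring
          check nodes that are currently unreachable from the variable node."""
          random.seed(seed)
          edges = set()              # (variable node, check node) pairs
          chk_deg = [0] * n_chk

          def reachable_checks(v):
              """Breadth-first search from variable v; return all check nodes currently reachable."""
              seen_v, seen_c = {v}, set()
              frontier = deque([("v", v)])
              while frontier:
                  kind, node = frontier.popleft()
                  for (vv, cc) in edges:
                      if kind == "v" and vv == node and cc not in seen_c:
                          seen_c.add(cc)
                          frontier.append(("c", cc))
                      elif kind == "c" and cc == node and vv not in seen_v:
                          seen_v.add(vv)
                          frontier.append(("v", vv))
              return seen_c

          for v in range(n_var):
              for _ in range(var_degrees[v]):
                  seen = reachable_checks(v)
                  # prefer checks not yet reachable from v (keeps local loops long);
                  # tie-break by lowest current check degree
                  candidates = [c for c in range(n_chk) if c not in seen]
                  if not candidates:  # every check already reachable: fall back to any unused check
                      candidates = [c for c in range(n_chk) if (v, c) not in edges] or list(range(n_chk))
                  c = min(candidates, key=lambda c: (chk_deg[c], random.random()))
                  edges.add((v, c))
                  chk_deg[c] += 1
          return edges

      if __name__ == "__main__":
          print(sorted(peg_construct(n_var=8, n_chk=4, var_degrees=[2] * 8)))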
  • Patent number: 11257224
    Abstract: Provided are an object tracking apparatus that realizes robust object detection and tracking even under movement fluctuation and observation noise, an object tracking method, and a computer program. An object tracking apparatus 1 is an apparatus that tracks an object in video, the object tracking apparatus 1 including: a deep learning discriminator 2, which is a discriminator based on deep learning; and a particle filter function unit 3 that tracks an object by applying a multi-channel feature value of the video, including feature values from the deep learning discriminator 2, to likelihood evaluation by a particle filter according to the distance between position information about the multi-channel feature value and position information about each particle. (A minimal particle filter sketch appears after this entry.)
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: February 22, 2022
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Yuichi Hiroi, Yoko Ishii, Tetsuro Tokunaga, Yoshihide Tonomura, Kota Hidaka
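    The likelihood evaluation described above can be pictured with a small bootstrap particle filter in which each particle's weight falls off with its distance from the position reported by the discriminator. The Gaussian likelihood, the random-walk motion model, and the stand-in detection values are assumptions for illustration, not the patented formulation.
      import numpy as np

      rng = np.random.default_rng(0)

      def step_particle_filter(particles, detection, motion_std=5.0, meas_std=10.0):
          """One predict / weight / resample cycle of a bootstrap particle filter.

          particles : (N, 2) candidate object positions in pixels
          detection : (2,) position derived from the discriminator's feature map
          """
          # predict: random-walk motion model
          particles = particles + rng.normal(0.0, motion_std, particles.shape)
          # weight: likelihood decays with the particle-to-detection distance
          dist = np.linalg.norm(particles - detection, axis=1)
          weights = np.exp(-0.5 * (dist / meas_std) ** 2)
          weights /= weights.sum()
          # resample: draw particles in proportion to their weights
          idx = rng.choice(len(particles), size=len(particles), p=weights)
          return particles[idx]

      if __name__ == "__main__":
          particles = rng.normal([100.0, 100.0], 20.0, size=(500, 2))
          for detection in ([102.0, 101.0], [110.0, 108.0], [118.0, 115.0]):
              particles = step_particle_filter(particles, np.asarray(detection))
              print("estimated position:", particles.mean(axis=0).round(1))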
  • Publication number: 20210209793
    Abstract: To identify and track a plurality of objects, an object tracking device 1 includes a frame acquisition unit 31 for acquiring one set of frame data 11 included in moving image data that captures a space where a plurality of objects are present; a distance acquisition unit 32 for acquiring measurement data 12 from measurement of the three-dimensional coordinates of points constituting the objects in the space where the objects are present; an object position detection unit 33 for detecting a two-dimensional position of each of the objects from the frame data 11 on the basis of comparison with template data of each of the objects; an object distance detection unit 34 for detecting a plurality of three-dimensional positions of the objects from the measurement data 12; and an associating unit 35 for associating each two-dimensional position from the object position detection unit 33 with the nearest three-dimensional position among the three-dimensional positions from the object distance detection unit 34.
    Type: Application
    Filed: May 20, 2019
    Publication date: July 8, 2021
    Inventors: Yoko ISHII, Kota HIDAKA, Yoshihide TONOMURA, Tetsuro TOKUNAGA, Yuichi HIROI
  • Publication number: 20210158534
    Abstract: Accurate and rapid identification can be performed even when feature vectors of input pixels and background pixels are close.
    Type: Application
    Filed: May 23, 2019
    Publication date: May 27, 2021
    Inventors: Hirokazu Kakinuma, Yoshihide TONOMURA, Hiromu Miyashita, Hidenobu Nagata, Kota HIDAKA
  • Publication number: 20210042935
    Abstract: Provided are an object tracking apparatus that realizes robust object detection and tracking even under movement fluctuation and observation noise, an object tracking method, and a computer program. An object tracking apparatus 1 is an apparatus that tracks an object in video, the object tracking apparatus 1 including: a deep learning discriminator 2, which is a discriminator based on deep learning; and a particle filter function unit 3 that tracks an object by applying a multi-channel feature value of the video, including feature values from the deep learning discriminator 2, to likelihood evaluation by a particle filter according to the distance between position information about the multi-channel feature value and position information about each particle.
    Type: Application
    Filed: March 4, 2019
    Publication date: February 11, 2021
    Inventors: Yuichi HIROI, Yoko ISHII, Tetsuro TOKUNAGA, Yoshihide TONOMURA, Kota HIDAKA
  • Patent number: 10511331
    Abstract: This method and device make it possible to implement maximum likelihood decoding of a sparse graph code at low computational complexity. That is, in the maximum likelihood decoding of the sparse graph code, a lost data decoding process by a trivial decoding method and a lost data decoding process by a Gaussian elimination method are performed repeatedly and alternately. (A simplified erasure decoding sketch appears after this entry.)
    Type: Grant
    Filed: August 8, 2014
    Date of Patent: December 17, 2019
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Yoshihide Tonomura, Tatsuya Fujii, Takahiro Yamaguchi, Daisuke Shirai, Takayuki Nakachi
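    The decoding idea above can be illustrated on a binary erasure channel with a toy parity-check matrix: a cheap peeling pass fills in any erased symbol that is the only unknown in some check, and Gaussian elimination over GF(2) resolves whatever the peeling pass leaves behind. The alternation below is simplified to a single hand-off; the patented method interleaves the two passes repeatedly.
      import numpy as np

      def peel(H, word):
          """Trivial pass: repeatedly solve any parity check that contains exactly one erased symbol."""
          progress = True
          while progress:
              progress = False
              for row in H:
                  erased = [j for j in np.flatnonzero(row) if word[j] is None]
                  if len(erased) == 1:
                      j = erased[0]
                      known = [word[k] for k in np.flatnonzero(row) if k != j]
                      word[j] = int(np.bitwise_xor.reduce(known)) if known else 0
                      progress = True
          return word

      def solve_gf2(A, b):
          """Gaussian elimination over GF(2); returns one solution of A x = b (assumed consistent)."""
          A, b = A.copy() % 2, b.copy() % 2
          m, n = A.shape
          pivot_cols, r = [], 0
          for c in range(n):
              pivot = next((i for i in range(r, m) if A[i, c]), None)
              if pivot is None:
                  continue
              A[[r, pivot]], b[[r, pivot]] = A[[pivot, r]], b[[pivot, r]]
              for i in range(m):
                  if i != r and A[i, c]:
                      A[i] ^= A[r]
                      b[i] ^= b[r]
              pivot_cols.append(c)
              r += 1
          x = np.zeros(n, dtype=int)
          for i, c in enumerate(pivot_cols):
              x[c] = b[i]
          return x

      def decode_erasures(H, word):
          """Peel first; hand the remaining erasures to Gaussian elimination (simplified alternation)."""
          word = peel(H, list(word))
          erased = [j for j, s in enumerate(word) if s is None]
          if erased:
              b = np.array([sum(H[i, j] * word[j] for j in range(H.shape[1]) if word[j] is not None) % 2
                            for i in range(H.shape[0])], dtype=int)
              for pos, val in zip(erased, solve_gf2(H[:, erased], b)):
                  word[pos] = int(val)
          return word

      if __name__ == "__main__":
          H = np.array([[1, 1, 1, 1, 0, 0],
                        [1, 0, 1, 0, 1, 0],
                        [0, 1, 1, 0, 0, 1],
                        [0, 0, 0, 1, 1, 1]])
          received = [None, None, None, 0, 1, 1]   # codeword 1 1 0 0 1 1 with three symbols lost
          print("recovered:", decode_erasures(H, received))
    The peeling pass runs in linear time, so resolving as many lost symbols as possible there before falling back to elimination is what keeps the overall decoding cheap.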
  • Publication number: 20160248448
    Abstract: A selective PEG algorithm creates a sparse matrix while maintaining row weights and column weights at arbitrary multiple levels and, in the process, inactivates an arbitrary edge so that the minimum loop formed between arbitrary nodes is enlarged, or performs constrained interleaving, thereby improving encoding efficiency when the matrix space is narrow.
    Type: Application
    Filed: October 8, 2014
    Publication date: August 25, 2016
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Yoshihide TONOMURA, Daisuke SHIRAI, Tatsuya FUJII, Takayuki NAKACHI, Takahiro YAMAGUCHI, Masahiko KITAMURA
  • Publication number: 20160191080
    Abstract: This method and device make it possible to implement maximum likelihood decoding of a sparse graph code at low computational complexity. That is, in the maximum likelihood decoding of the sparse graph code, a lost data decoding process by a trivial decoding method and a lost data decoding process by a Gaussian elimination method are performed repeatedly and alternately.
    Type: Application
    Filed: August 8, 2014
    Publication date: June 30, 2016
    Inventors: Yoshihide TONOMURA, Tatsuya FUJII, Takahiro YAMAGUCHI, Daisuke SHIRAI, Takayuki NAKACHI