Patents by Inventor Tetsuya KINEBUCHI

Tetsuya KINEBUCHI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240013798
    Abstract: A conversion device (10) includes: an evaluation unit (11) that estimates, from an input voice signal, which of the subjective evaluation values, obtained by quantifying how easily a person feels the content of a voice is conveyed, the signal takes; and a conversion unit (12) that converts the input voice signal so that it attains a predetermined subjective evaluation value, on the basis of the subjective evaluation value estimated by the evaluation unit (11).
    Type: Application
    Filed: November 13, 2020
    Publication date: January 11, 2024
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Kazunori YAMADA, Ko MITSUDA, Tetsuya KINEBUCHI, Yushi AONO, Hiroko YABUSHITA, Akihiko TAKASHIMA, Takashi NAKAMURA
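The evaluate-then-convert loop described in this abstract can be sketched as follows. The RMS-based score model standing in for the evaluation unit (11) and the gain-update rule standing in for the conversion unit (12) are illustrative assumptions, not the patented method.

```python
import numpy as np

def estimate_score(signal: np.ndarray) -> float:
    """Evaluation unit (sketch): map a voice signal to a subjective
    evaluation value (a toy 1-5 'ease of transmission' score driven
    by RMS level)."""
    rms = float(np.sqrt(np.mean(signal ** 2)))
    return min(max(1.0 + 8.0 * rms, 1.0), 5.0)

def convert(signal: np.ndarray, target: float,
            tol: float = 0.1, steps: int = 300) -> np.ndarray:
    """Conversion unit (sketch): adjust the signal until its estimated
    subjective evaluation value reaches the predetermined target."""
    out = signal.copy()
    for _ in range(steps):
        score = estimate_score(out)
        if abs(score - target) < tol:
            break
        out = out * (1.02 if score < target else 0.98)  # simple gain update
    return out

quiet_voice = 0.05 * np.sin(np.linspace(0.0, 100.0, 8000))
converted = convert(quiet_voice, target=4.0)
```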
  • Patent number: 11797845
    Abstract: Simultaneous learning of a plurality of different tasks and domains is enabled, at low cost and with high precision. On the basis of learning data, a learning unit 160 uses: a target encoder that takes data of a target domain as input and outputs a target feature expression; a source encoder that takes data of a source domain as input and outputs a source feature expression; a common encoder that takes data of either domain as input and outputs a common feature expression; a target decoder that takes the output of the target encoder and the common encoder as input and outputs the result of executing a task on data of the target domain; and a source decoder that takes the output of the source encoder and the common encoder as input and outputs the result of executing a task on data of the source domain. Learning proceeds so that the output of the target decoder matches its training data and the output of the source decoder matches its training data.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: October 24, 2023
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Takayuki Umeda, Kazuhiko Murasaki, Shingo Ando, Tetsuya Kinebuchi
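The encoder/decoder wiring in this abstract can be sketched as a forward pass. Single linear layers stand in for the five networks, and the dimensions and targets are placeholder assumptions; only the branch structure follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 8, 4  # input dim, feature dim (illustrative sizes)

# One weight matrix per unit; real encoders/decoders would be deep networks.
W_target, W_source, W_common = (rng.standard_normal((H, D)) for _ in range(3))
W_dec_t, W_dec_s = (rng.standard_normal((1, 2 * H)) for _ in range(2))

def encode(W, x):
    return np.tanh(W @ x)

def decode(W, feat, common):
    return W @ np.concatenate([feat, common])

x_target, x_source = rng.standard_normal(D), rng.standard_normal(D)

# Target branch: target encoder + common encoder feed the target decoder.
y_t = decode(W_dec_t, encode(W_target, x_target), encode(W_common, x_target))
# Source branch: source encoder + common encoder feed the source decoder.
y_s = decode(W_dec_s, encode(W_source, x_source), encode(W_common, x_source))

# Training would minimise both losses jointly so each decoder's output
# matches its training data (t_target / t_source are placeholders).
t_target, t_source = 1.0, 0.0
loss = float(((y_t - t_target) ** 2 + (y_s - t_source) ** 2).sum())
```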
  • Patent number: 11651515
    Abstract: A geometric transformation matrix representing the geometric transformation between an input image and a template image can be determined with high precision. A geometric transformation matrix/inlier estimation section 32 determines a corresponding point group serving as inliers and estimates the geometric transformation matrix representing the geometric transformation between the input image and the template image. A scatter degree estimation section 34 estimates the scatter degree of the corresponding points based on the corresponding point group serving as inliers. Based on the estimated scatter degree, a plane tracking convergence determination threshold calculation section 36 calculates the threshold used for convergence determination when the geometric transformation matrix is iteratively updated in a plane tracking section 38.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: May 16, 2023
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Jun Shimamura, Yukito Watanabe, Tetsuya Kinebuchi
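The pipeline in this abstract can be sketched in three small steps. A least-squares affine fit stands in for the transformation estimation of section 32 (inlier selection such as RANSAC is omitted), and the scatter measure and threshold scaling rule are illustrative assumptions.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src points to dst points
    (stand-in for the geometric transformation matrix estimation)."""
    A = np.hstack([src, np.ones((len(src), 1))])   # N x 3 homogeneous coords
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solves A @ M = dst
    return M.T                                     # 2 x 3 affine matrix

def scatter_degree(points):
    """Scatter degree of the inlier corresponding points, taken here as
    the mean distance to their centroid (an assumption)."""
    return float(np.mean(np.linalg.norm(points - points.mean(0), axis=1)))

def convergence_threshold(scatter, k=1e-3):
    """Convergence threshold for the iterative plane-tracking update,
    scaled by the scatter degree (the scaling rule is illustrative):
    widely spread inliers permit a stricter threshold."""
    return k / max(scatter, 1e-9)

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
M_true = np.array([[1.2, 0.1, 3.0], [-0.1, 0.9, 1.0]])
dst = np.hstack([src, np.ones((4, 1))]) @ M_true.T

M = estimate_affine(src, dst)
thr = convergence_threshold(scatter_degree(src))
```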
  • Patent number: 11594009
    Abstract: Even if the object to be detected is not salient in the images, and the input includes images containing regions that are not the object to be detected yet share a common appearance across the images, a region indicating the object to be detected is accurately detected. A local feature extraction unit 20 extracts a local feature at each feature point from every image in an input image set. From each image pair selected from the image set, an image-pair common pattern extraction unit 30 extracts a common pattern constituted by a set of feature point pairs whose local features, extracted by the local feature extraction unit 20, are similar between the two images, the feature point pairs also being geometrically similar to one another.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: February 28, 2023
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shuhei Tarashima, Takashi Hosono, Yukito Watanabe, Jun Shimamura, Tetsuya Kinebuchi
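The two-stage matching in this abstract can be sketched as descriptor matching followed by a geometric consistency filter. Using agreement with the median displacement as the geometric-similarity test is an illustrative assumption, not the patented criterion.

```python
import numpy as np

def match_local_features(desc_a, desc_b, max_dist=0.5):
    """Pair each feature point in image A with its nearest descriptor in
    image B, keeping only sufficiently similar pairs (the 'similar local
    features' condition)."""
    pairs = []
    for i, da in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - da, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            pairs.append((i, j))
    return pairs

def common_pattern(pairs, pts_a, pts_b, tol=2.0):
    """Keep the matched pairs whose displacement agrees with the dominant
    (median) displacement, i.e. a geometrically similar subset."""
    disp = np.array([pts_b[j] - pts_a[i] for i, j in pairs])
    med = np.median(disp, axis=0)
    return [p for p, d in zip(pairs, disp) if np.linalg.norm(d - med) < tol]

pts_a = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [50.0, 50.0]])
pts_b = pts_a + np.array([5.0, 5.0])
pts_b[3] = [0.0, 0.0]        # one geometrically inconsistent match
descriptors = np.eye(4)      # identical descriptors pair point i with i
pairs = match_local_features(descriptors, descriptors)
pattern = common_pattern(pairs, pts_a, pts_b)
```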
  • Patent number: 11461597
    Abstract: Objectness, indicating the degree to which a candidate region corresponds to a single object, is accurately estimated. An edge detection unit 30 detects edges in a depth image, and an edge density/uniformity calculation unit 40 calculates the edge density on the periphery of a candidate region, the edge density inside the candidate region, and the edge uniformity on the periphery of the candidate region. An objectness calculation unit 42 calculates the objectness of the candidate region based on these three quantities.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: October 4, 2022
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Takashi Hosono, Shuhei Tarashima, Jun Shimamura, Tetsuya Kinebuchi
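The three quantities in this abstract can be sketched on a toy depth-edge map. The uniformity measure (spread of per-sector peripheral densities) and the final combination rule are illustrative assumptions, not the patented formula.

```python
import numpy as np

def density(edges, mask):
    """Edge density: fraction of edge pixels among the masked pixels."""
    return float(edges[mask].mean()) if mask.any() else 0.0

def objectness(edges, inside, periphery, n_sectors=4):
    """High peripheral density, low inner density, and uniform peripheral
    edges all suggest the candidate region covers a single object."""
    d_peri = density(edges, periphery)
    d_in = density(edges, inside)
    cols = np.where(periphery.any(axis=0))[0]
    sector_d = [density(edges[:, c], periphery[:, c])
                for c in np.array_split(cols, n_sectors)]
    uniformity = 1.0 - float(np.std(sector_d))
    return d_peri * uniformity - d_in

edges = np.zeros((20, 20), bool)        # depth-edge map: a box outline
edges[5, 5:15] = edges[14, 5:15] = True
edges[5:15, 5] = edges[5:15, 14] = True

candidate = np.zeros_like(edges)        # candidate region around the box
candidate[5:15, 5:15] = True
inner = np.zeros_like(edges)
inner[7:13, 7:13] = True
periphery = candidate & ~inner

score = objectness(edges, inner, periphery)
```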
  • Patent number: 11416710
    Abstract: The present invention relates to representing the image features used by a convolutional neural network (CNN) to identify concepts in an input image. The CNN includes a plurality of filters in each of a plurality of layers. The method generates the CNN from a set of training images with predetermined concepts annotated in regions of those images. For a selected layer of the CNN, the method generates integrated maps. Each integrated map is based on a set of feature maps in a cluster and on the relevance between those feature maps and a region representing one of the features in the image data. The method provides a pair consisting of a feature-representation visualization image of a feature in the selected layer and the concept information associated with the integrated map.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: August 16, 2022
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Kaori Kumagai, Yukito Watanabe, Jun Shimamura, Tetsuya Kinebuchi
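Integrated-map generation for one selected layer can be sketched as clustering feature maps and averaging each cluster weighted by relevance. The relevance proxy (mean activation inside the concept region) and the rank-based clustering are illustrative assumptions, not choices the abstract prescribes.

```python
import numpy as np

rng = np.random.default_rng(1)
C, H, W = 6, 4, 4
fmaps = rng.random((C, H, W))       # feature maps of the selected layer

region = np.zeros((H, W), bool)     # annotated concept region (given)
region[:2, :2] = True

# Relevance between each feature map and the concept region.
relevance = fmaps[:, region].mean(axis=1)

# Cluster the feature maps (here: two clusters by relevance rank, a
# simple stand-in for whatever clustering the method actually uses).
order = np.argsort(relevance)
clusters = [order[: C // 2], order[C // 2:]]

# One integrated map per cluster: relevance-weighted mean of its members.
integrated = [np.average(fmaps[c], axis=0, weights=relevance[c])
              for c in clusters]
```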
  • Patent number: 11334744
    Abstract: A large-scale point cloud, with no limitation on its range or its number of points, is taken as the target, and labels are attached to the points constituting the target regardless of the type of object.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: May 17, 2022
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Yasuhiro Yao, Hitoshi Niigaki, Ken Tsutsuguchi, Tetsuya Kinebuchi
  • Patent number: 11328384
    Abstract: An object is to make it possible to precisely infer a geometric transformation matrix for transformation between an image and a reference image representing a plane region, even when correspondence to the reference image cannot be obtained. A first line segment group extraction unit 120 extracts, from the line segment group in an image, the line segments inside a rectangle included in the image that run parallel or perpendicular to a side of that rectangle, takes them to be a first line segment group, and extracts from the line segment group a plurality of line segments different from the first line segment group.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: May 10, 2022
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Jun Shimamura, Shuhei Tarashima, Yukito Watanabe, Takashi Hosono, Tetsuya Kinebuchi
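The line-segment grouping in this abstract can be sketched as a direction filter. An axis-aligned rectangle and a 5-degree angular tolerance are illustrative assumptions; the second group here is simply every segment not placed in the first.

```python
import numpy as np

def direction(seg):
    """Unit direction vector of a segment ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = seg
    v = np.array([x2 - x1, y2 - y1], float)
    return v / np.linalg.norm(v)

def split_groups(segments, rect, tol_deg=5.0):
    """Put segments inside rect that are parallel or perpendicular to its
    sides into the first group; all remaining segments form the rest."""
    x0, y0, x1, y1 = rect
    cos_tol = np.cos(np.radians(tol_deg))
    sin_tol = np.sin(np.radians(tol_deg))
    first, rest = [], []
    for seg in segments:
        inside = all(x0 <= x <= x1 and y0 <= y <= y1 for x, y in seg)
        d = abs(float(direction(seg) @ np.array([1.0, 0.0])))
        aligned = d > cos_tol or d < sin_tol  # parallel or perpendicular
        (first if inside and aligned else rest).append(seg)
    return first, rest

segments = [((1, 1), (4, 1)),      # horizontal, inside the rectangle
            ((2, 1), (2, 4)),      # vertical, inside
            ((1, 1), (4, 4)),      # diagonal, inside but not aligned
            ((10, 10), (12, 10))]  # horizontal but outside
first_group, others = split_groups(segments, rect=(0, 0, 5, 5))
```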
  • Publication number: 20210304415
    Abstract: The present invention makes it possible to estimate, with high precision, a candidate region for each of multiple target objects included in an image. A parameter determination unit 11 determines the parameters used when detecting boundary lines in an image 101, based on the ratio between the density of boundary lines in the image 101 and the density of boundary lines in the region indicated by region information 102, which identifies a region including at least one of the multiple target objects. A boundary line detection unit 12 detects the boundary lines in the image 101 using these parameters. For each of the multiple target objects in the image 101, a region estimation unit 13 estimates the candidate region of that object based on the detected boundary lines.
    Type: Application
    Filed: August 1, 2019
    Publication date: September 30, 2021
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Yukito WATANABE, Shuhei TARASHIMA, Takashi HOSONO, Jun SHIMAMURA, Tetsuya KINEBUCHI
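The density-ratio-driven parameter choice in this abstract can be sketched as follows. The linear mapping and the clamping range are illustrative assumptions, not the patented rule.

```python
def detection_threshold(image_density: float, region_density: float,
                        base: float = 0.5) -> float:
    """Choose a boundary-line detection threshold from the ratio between
    the image-wide boundary-line density and the density inside the
    region of interest: lower the threshold when boundary lines are
    relatively dense in that region, so its boundaries are found
    reliably. The result is clamped to a sensible range."""
    ratio = image_density / max(region_density, 1e-9)
    return min(max(base * ratio, 0.05), 0.95)
```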
  • Publication number: 20210217196
    Abstract: A geometric transformation matrix representing the geometric transformation between an input image and a template image can be determined with high precision. A geometric transformation matrix/inlier estimation section 32 determines a corresponding point group serving as inliers and estimates the geometric transformation matrix representing the geometric transformation between the input image and the template image. A scatter degree estimation section 34 estimates the scatter degree of the corresponding points based on the corresponding point group serving as inliers. Based on the estimated scatter degree, a plane tracking convergence determination threshold calculation section 36 calculates the threshold used for convergence determination when the geometric transformation matrix is iteratively updated in a plane tracking section 38.
    Type: Application
    Filed: May 14, 2019
    Publication date: July 15, 2021
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Jun SHIMAMURA, Yukito WATANABE, Tetsuya KINEBUCHI
  • Publication number: 20210216818
    Abstract: Simultaneous learning of a plurality of different tasks and domains is enabled, at low cost and with high precision. On the basis of learning data, a learning unit 160 uses: a target encoder that takes data of a target domain as input and outputs a target feature expression; a source encoder that takes data of a source domain as input and outputs a source feature expression; a common encoder that takes data of either domain as input and outputs a common feature expression; a target decoder that takes the output of the target encoder and the common encoder as input and outputs the result of executing a task on data of the target domain; and a source decoder that takes the output of the source encoder and the common encoder as input and outputs the result of executing a task on data of the source domain. Learning proceeds so that the output of the target decoder matches its training data and the output of the source decoder matches its training data.
    Type: Application
    Filed: May 28, 2019
    Publication date: July 15, 2021
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Takayuki UMEDA, Kazuhiko MURASAKI, Shingo ANDO, Tetsuya KINEBUCHI
  • Publication number: 20210216829
    Abstract: Objectness, indicating the degree to which a candidate region corresponds to a single object, is accurately estimated. An edge detection unit 30 detects edges in a depth image, and an edge density/uniformity calculation unit 40 calculates the edge density on the periphery of a candidate region, the edge density inside the candidate region, and the edge uniformity on the periphery of the candidate region. An objectness calculation unit 42 calculates the objectness of the candidate region based on these three quantities.
    Type: Application
    Filed: May 31, 2019
    Publication date: July 15, 2021
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Takashi HOSONO, Shuhei TARASHIMA, Jun SHIMAMURA, Tetsuya KINEBUCHI
  • Publication number: 20210209403
    Abstract: Even if the object to be detected is not salient in the images, and the input includes images containing regions that are not the object to be detected yet share a common appearance across the images, a region indicating the object to be detected is accurately detected. A local feature extraction unit 20 extracts a local feature at each feature point from every image in an input image set. From each image pair selected from the image set, an image-pair common pattern extraction unit 30 extracts a common pattern constituted by a set of feature point pairs whose local features, extracted by the local feature extraction unit 20, are similar between the two images, the feature point pairs also being geometrically similar to one another.
    Type: Application
    Filed: May 7, 2019
    Publication date: July 8, 2021
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shuhei TARASHIMA, Takashi HOSONO, Yukito WATANABE, Jun SHIMAMURA, Tetsuya KINEBUCHI
  • Publication number: 20210201440
    Abstract: An object is to make it possible to precisely infer a geometric transformation matrix for transformation between an image and a reference image representing a plane region, even when correspondence to the reference image cannot be obtained. A first line segment group extraction unit 120 extracts, from the line segment group in an image, the line segments inside a rectangle included in the image that run parallel or perpendicular to a side of that rectangle, takes them to be a first line segment group, and extracts from the line segment group a plurality of line segments different from the first line segment group.
    Type: Application
    Filed: May 28, 2019
    Publication date: July 1, 2021
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Jun SHIMAMURA, Shuhei TARASHIMA, Yukito WATANABE, Takashi HOSONO, Tetsuya KINEBUCHI
  • Publication number: 20210158016
    Abstract: A large-scale point cloud, with no limitation on its range or its number of points, is taken as the target, and labels are attached to the points constituting the target regardless of the type of object.
    Type: Application
    Filed: April 16, 2019
    Publication date: May 27, 2021
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Yasuhiro YAO, Hitoshi NIIGAKI, Ken TSUTSUGUCHI, Tetsuya KINEBUCHI
  • Publication number: 20210089827
    Abstract: Features in an input image that a CNN uses to identify the image are represented efficiently.
    Type: Application
    Filed: February 25, 2019
    Publication date: March 25, 2021
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Kaori KUMAGAI, Yukito WATANABE, Jun SHIMAMURA, Tetsuya KINEBUCHI
  • Publication number: 20200378898
    Abstract: Even under a varying light source, such as an outdoor light source, the surface state of a diagnosis target object is diagnosed with high accuracy, without measuring spectral distribution information about the light source at the time the object is measured.
    Type: Application
    Filed: February 6, 2019
    Publication date: December 3, 2020
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shunsuke TSUKATANI, Shingo ANDO, Tetsuya KINEBUCHI