Patents by Inventor Norimasa Kobori

Norimasa Kobori has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240119353
    Abstract: A training data generation method generates labeled training data used for training an object identification model that is based on machine learning. The training data generation method includes: (A) detecting a moving object in a sequence of images; (B) tracking a same moving object in the sequence of images by using a tracker, to automatically obtain a track that is information representing a time series of the same moving object in the sequence of images; and (C) generating the labeled training data by giving the track as a label to the sequence of images.
    Type: Application
    Filed: August 14, 2023
    Publication date: April 11, 2024
    Inventors: Hsuan-Kung YANG, Norimasa Kobori
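The detect–track–label pipeline in steps (A)–(C) can be sketched in a few lines. Everything here is illustrative: the detections are given as centroids, and the association is a greedy nearest-centroid rule rather than the learned tracker the application contemplates.

```python
# Sketch of steps (A)-(C): detect, track, and emit track labels.
# Detections are given per frame; association is greedy nearest-centroid.

def track_and_label(frames, max_dist=50.0):
    """frames: list of lists of (x, y) detection centroids, one list per image.
    Returns labeled training data: a list of (frame_index, centroid, track_id),
    where the automatically obtained track id serves as the label."""
    tracks = {}          # track_id -> last known centroid
    next_id = 0
    labels = []
    for t, detections in enumerate(frames):
        assigned = set()
        for c in detections:
            # Associate with the nearest live track, if close enough.
            best, best_d = None, max_dist
            for tid, last in tracks.items():
                if tid in assigned:
                    continue
                d = ((c[0] - last[0]) ** 2 + (c[1] - last[1]) ** 2) ** 0.5
                if d < best_d:
                    best, best_d = tid, d
            if best is None:          # no match: start a new track
                best = next_id
                next_id += 1
            tracks[best] = c
            assigned.add(best)
            labels.append((t, c, best))
    return labels
```

Two well-separated objects keep distinct ids across frames, so the track ids can be handed to training as labels without manual annotation.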
  • Publication number: 20240119354
    Abstract: A model training method trains an object identification model that is based on machine learning. The model training method includes acquiring labeled training data where a track is given as a label to a sequence of images. The track is information representing a time series of a same moving object in the sequence of images and is automatically obtained by a tracker that tracks the same moving object in the sequence of images. The model training method further includes training the object identification model based on the labeled training data.
    Type: Application
    Filed: August 14, 2023
    Publication date: April 11, 2024
    Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Hsuan-Kung YANG, Norimasa Kobori
  • Publication number: 20230419717
    Abstract: A re-identification method for performing re-identification of a target object in image data using a machine learning model is proposed. The re-identification method comprises acquiring first image data and second image data in both of which the target object appears, acquiring a plurality of first output data and a plurality of second output data by inputting the first image data and the second image data into the machine learning model, calculating a plurality of distances each of which is a distance in an embedding space between each of the plurality of first output data and each of the plurality of second output data, and determining that the target object of the first image data and the target object of the second image data are similar when a predetermined number or more of the plurality of distances are less than a predetermined threshold.
    Type: Application
    Filed: May 24, 2023
    Publication date: December 28, 2023
    Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Norimasa Kobori, Rajat Saini
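The distance-voting rule in this abstract is easy to sketch. The embeddings below are plain tuples standing in for model outputs; `threshold` and `min_votes` are hypothetical parameter names for the predetermined threshold and predetermined number.

```python
import math

def is_same_object(first_outputs, second_outputs, threshold=0.5, min_votes=2):
    """Count pairwise embedding-space distances below `threshold`; declare
    the two images to show the same target object when at least `min_votes`
    of the pairs agree."""
    votes = 0
    for a in first_outputs:
        for b in second_outputs:
            if math.dist(a, b) < threshold:
                votes += 1
    return votes >= min_votes
```

Voting over many output pairs, rather than thresholding a single distance, makes the decision robust to a few outlier embeddings.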
  • Publication number: 20230316798
    Abstract: In the human re-identification method according to the present disclosure, first, estimating a pose of a human to be re-identified in an image of the human is executed. Next, clipping a predetermined number of patches from the image along a body of the human based on the pose is executed. Then, generating positional information of each of the predetermined number of patches is executed. Then, inputting the predetermined number of patches with the positional information into a vision transformer encoder is executed. Then, inputting an output of the vision transformer encoder into a neural network is executed. And finally, obtaining an output of the neural network as a re-identification result of the human is executed.
    Type: Application
    Filed: March 28, 2023
    Publication date: October 5, 2023
    Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Norimasa KOBORI, Rajat SAINI
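The patch-clipping and positional-information steps can be sketched as follows, with the pose keypoints assumed to be already estimated. The fixed patch size and the use of a simple body-order index as positional information are illustrative choices, not the patented encoding.

```python
def clip_patches(image, keypoints, size=4):
    """image: 2D list of pixel values; keypoints: (row, col) joint locations
    from a pose estimator (assumed given). Returns (patch, position_index)
    pairs -- the positional information that would accompany each patch into
    a vision transformer encoder."""
    h, w = len(image), len(image[0])
    half = size // 2
    patches = []
    for idx, (r, c) in enumerate(keypoints):
        # Clamp so every patch lies fully inside the image.
        r0 = min(max(r - half, 0), h - size)
        c0 = min(max(c - half, 0), w - size)
        patch = [row[c0:c0 + size] for row in image[r0:r0 + size]]
        patches.append((patch, idx))   # idx encodes position along the body
    return patches
```

Clipping along the body instead of on a uniform grid keeps every input patch anchored to the same anatomical part across images, which is what makes the positional index meaningful for re-identification.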
  • Publication number: 20230300445
    Abstract: In the first identification processing, recognition information on a person included in an image of a first range captured by a first camera is collated with user information included in antenna information on an antenna corresponding to the first camera. When the recognition information and the user information match in position and time, the person included in the image of the first range is identified as the user, and tracking support information on the user is generated. In the second identification processing, recognition information on a person included in an image of a second range captured by a second camera is compared with the tracking support information on the first camera corresponding to the second camera, and it is determined whether the same user as the user determined to be included in the image of the first range is included in the second range.
    Type: Application
    Filed: February 9, 2023
    Publication date: September 21, 2023
    Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Norimasa KOBORI
  • Publication number: 20230186741
    Abstract: A security system monitors the behavior of a person included in a plurality of camera images captured continuously in time series by a surveillance camera. The security system performs tracking processing of detecting each person region of a person included in the plurality of camera images, and identifying a person included in the plurality of camera images in time series based on person image data included in the person region. When the identification by the tracking processing fails partway through the sequence, it is determined that a person exhibiting suspicious behavior is included in the camera image. When suspicious behavior is determined to be included, alert information may be notified from an output device.
    Type: Application
    Filed: December 5, 2022
    Publication date: June 15, 2023
    Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Norimasa KOBORI
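The "identification transitions to failure" trigger can be sketched as a single scan over per-frame identification results; the `None`-for-failure convention and the function name are assumptions for illustration.

```python
def detect_suspicious(id_results):
    """id_results: per-frame person ids from the tracking processing,
    with None marking a frame where identification failed.
    Returns True when identification succeeded at first and then
    transitioned to failure mid-sequence (the suspicious-behavior cue)."""
    seen_success = False
    for pid in id_results:
        if pid is not None:
            seen_success = True
        elif seen_success:
            return True   # a tracked person can no longer be identified
    return False
```

Note that failure from the very first frame does not trigger the alert; only a success-then-failure transition does, matching the abstract's wording.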
  • Publication number: 20230188678
    Abstract: A system for outputting a surveillance video comprises a plurality of cameras each of which is mounted on a movable object, one or more memories storing, for each of the plurality of cameras, captured video data and time series position data, and one or more processors. The one or more processors execute selecting, for each predetermined time width, an observation camera which is one camera with the time series position data in which data within the predetermined time width are included in an observation area, and acquiring the captured video data within the predetermined time width of the observation camera. Then the one or more processors execute outputting the captured video data acquired for each predetermined time width in chronological order. The selecting of the observation camera includes selecting the one camera with the longest distance travelled in the observation area within the predetermined time width.
    Type: Application
    Filed: December 6, 2022
    Publication date: June 15, 2023
    Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Norimasa KOBORI
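The camera-selection rule in the last sentence (longest distance travelled inside the observation area within the time window) can be sketched directly; the data layout and the `in_area` predicate are assumptions.

```python
import math

def select_observation_camera(cameras, in_area):
    """cameras: {name: [(x, y), ...]} time-series positions of each
    movable-object-mounted camera within one predetermined time width;
    in_area: predicate defining the observation area. Returns the name of
    the camera with the longest path length travelled inside the area."""
    best_name, best_len = None, -1.0
    for name, track in cameras.items():
        pts = [p for p in track if in_area(p)]
        length = sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
        if pts and length > best_len:
            best_name, best_len = name, length
    return best_name
```

Preferring the longest in-area travel distance favors the camera whose clip sweeps the most of the observation area during that window.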
  • Publication number: 20230177844
    Abstract: An apparatus for identifying the state of lighting includes a processor configured to identify the states of lighting of a first lighting part and a second lighting part of a vehicle by inputting time-series images representing the vehicle into a classifier. The classifier includes a feature calculation part configured to calculate a feature map whenever one of the time-series images is inputted in chronological order; a first state-of-lighting identification part including a recursive structure and configured to identify the state of lighting of the first lighting part while updating a first internal state by inputting the feature map calculated for each of the time-series images in chronological order; and a second state-of-lighting identification part including a recursive structure and configured to identify the state of lighting of the second lighting part while updating a second internal state by inputting the feature map in chronological order.
    Type: Application
    Filed: November 3, 2022
    Publication date: June 8, 2023
    Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Norimasa KOBORI, Hitoshi KAMADA, Yushi NAGATA
  • Publication number: 20230100238
    Abstract: A grasping points determination method including: S10) receiving a scene image showing an object to be grasped; S20) determining, in the scene image, the object and its features, and local descriptors and 2D-locations of these features; S30) based on database local descriptors of features of the object and 2D- and 3D-location(s) of grasping points of the object, determined in a previous position of the object, identifying a best-fit combination which transforms previously defined local descriptors into the determined local descriptors; S40) determining the registration corresponding to the combination; S50) determining in the scene image 2D-location(s) of grasping points by applying the registration to previously defined 3D-location(s) of the grasping points; S60) determining 3D information relative to the object; S70) determining 3D-location(s) of the grasping points. A grasping points database creation method for creating the database and systems for implementing the above methods are also provided.
    Type: Application
    Filed: October 11, 2019
    Publication date: March 30, 2023
    Applicant: TOYOTA MOTOR EUROPE
    Inventors: Norimasa KOBORI, Luca MINCIULLO, Gianpiero FRANCESCA, Lorenzo GARATTONI
  • Publication number: 20230080133
    Abstract: A computer-implemented method of estimating a 6D pose and shape of one or more objects from a 2D image comprises the steps of: detecting, within the 2D image, one or more 2D regions of interest, each 2D region of interest containing a corresponding object among the one or more objects; cropping out a corresponding pixel value array, coordinate tensor, and feature map for each 2D region of interest; concatenating the corresponding pixel value array, coordinate tensor, and feature map for each 2D region of interest; and inferring, for each 2D region of interest, a 4D quaternion describing a rotation of the corresponding object in the 3D rotation group, a 2D centroid, which is a projection of a 3D translation of the corresponding object onto a plane of the 2D image given a camera matrix associated to the 2D image, a distance from a viewpoint of the 2D image to the corresponding object, a size, and a class-specific latent shape vector of the corresponding object.
    Type: Application
    Filed: February 21, 2020
    Publication date: March 16, 2023
    Inventors: Sven Meier, Norimasa Kobori, Luca Minciullo, Kei Yoshikawa, Fabian Manhardt, Manuel Nickel, Nassir Navab
  • Publication number: 20220301320
    Abstract: A vehicle controller includes a processor configured to: input characteristics extracted from an object region representing a target vehicle in an image obtained by a camera of a vehicle into a light-state classifier to identify a light state that is the state of a signal light of the target vehicle; determine a first control value, based on the result of identification of the light state and the positional relationship between the vehicle and the target vehicle, in accordance with a predetermined rule; input the result of identification of the light state into a control command classifier, which has been trained to output a second control value for controlling the vehicle to avoid a collision between the vehicle and the target vehicle, thereby determining the second control value; and determine an integrated control value to be used for controlling the vehicle, based on the first and second control values.
    Type: Application
    Filed: March 16, 2022
    Publication date: September 22, 2022
    Inventors: Daisuke HASHIMOTO, Norimasa KOBORI, Hitoshi KAMADA, Yushi NAGATA
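One way to read the integration of the two control values is to take the more conservative command. The abstract leaves both the rule and the integration policy open, so the braking rule and the `min` policy below are purely illustrative, as are the numeric values.

```python
def rule_based_value(light_state, gap_m):
    """First control value from a hand-written rule: brake harder when the
    target vehicle signals braking and the gap is small. Values are
    illustrative deceleration commands in m/s^2 (an assumed convention)."""
    if light_state == "brake" and gap_m < 20.0:
        return -3.0
    if light_state == "brake":
        return -1.0
    return 0.0

def integrate(first, second):
    """Integrated control value: take the stronger deceleration of the
    rule-based first value and the classifier's second value -- one simple
    integration policy among the many the abstract admits."""
    return min(first, second)
```

In this reading, the learned command can only make the vehicle more cautious than the rule, never less.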
  • Patent number: 11335024
    Abstract: A system and a method for processing an image include inputting the image to a neural network configured to: obtain a plurality of feature maps, each feature map having a respective resolution and a respective depth, perform a classification on each feature map to deliver, for each feature map: the type of at least one object visible on the image, the position and shape in the image of at least one two-dimensional bounding box surrounding the at least one object, at least one possible viewpoint for the at least one object, at least one possible in-plane rotation for the at least one object.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: May 17, 2022
    Assignee: TOYOTA MOTOR EUROPE
    Inventors: Sven Meier, Norimasa Kobori, Wadim Kehl, Fabian Manhardt, Federico Tombari
  • Publication number: 20220050997
    Abstract: A system and a method for processing an image include inputting the image to a neural network configured to: obtain a plurality of feature maps, each feature map having a respective resolution and a respective depth, perform a classification on each feature map to deliver, for each feature map: the type of at least one object visible on the image, the position and shape in the image of at least one two-dimensional bounding box surrounding the at least one object, a plurality of rotation hypotheses for the at least one object.
    Type: Application
    Filed: September 7, 2018
    Publication date: February 17, 2022
    Applicants: TOYOTA MOTOR EUROPE, TECHNICAL UNIVERSITY OF MUNICH
    Inventors: Sven MEIER, Norimasa KOBORI, Fabian MANHARDT, Diego Martin ARROYO, Federico TOMBARI, Christian RUPPRECHT
  • Publication number: 20210374988
    Abstract: A system and a method for processing an image include inputting the image to a neural network configured to: obtain a plurality of feature maps, each feature map having a respective resolution and a respective depth, perform a classification on each feature map to deliver, for each feature map: the type of at least one object visible on the image, the position and shape in the image of at least one two-dimensional bounding box surrounding the at least one object, at least one possible viewpoint for the at least one object, at least one possible in-plane rotation for the at least one object.
    Type: Application
    Filed: October 20, 2017
    Publication date: December 2, 2021
    Applicant: TOYOTA MOTOR EUROPE
    Inventors: Sven MEIER, Norimasa KOBORI, Wadim KEHL, Fabian MANHARDT, Federico TOMBARI
  • Patent number: 10909717
    Abstract: A viewpoint recommendation apparatus includes image feature extraction means for extracting an image feature from the acquired image at a first viewpoint, pose estimation means for calculating a first likelihood map indicating a relation between the estimated pose of the object and a likelihood of this estimated pose, second storage means for storing a second likelihood map indicating a relation between the true first viewpoint and a likelihood of this first viewpoint in the estimated pose, third storage means for storing a third likelihood map indicating a relation between the pose of the object when the object is observed at the first and the second viewpoints and a likelihood of this pose, and viewpoint estimation means for estimating the second viewpoint so that a value of an evaluation function of the first, second, and third likelihood maps becomes the maximum or minimum.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: February 2, 2021
    Assignees: National University Corporation Nagoya University, Toyota Jidosha Kabushiki Kaisha
    Inventors: Hiroshi Murase, Yasutomo Kawanishi, Daisuke Deguchi, Nik Mohd Zarifie bin Hashim, Yusuke Nakano, Norimasa Kobori
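The viewpoint estimation step can be sketched with discrete candidate second viewpoints and a product-of-likelihoods evaluation function. This is a simplification: the patent allows the evaluation function to be maximized or minimized, and the dict-keyed maps below stand in for the actual first, second, and third likelihood maps.

```python
def recommend_viewpoint(candidates, first_map, second_map, third_map):
    """Pick the second viewpoint maximizing a product of the three
    likelihood maps. Each map is a dict keyed by candidate viewpoint with
    likelihoods in [0, 1]; missing entries count as zero likelihood."""
    def score(v):
        return (first_map.get(v, 0.0)
                * second_map.get(v, 0.0)
                * third_map.get(v, 0.0))
    return max(candidates, key=score)
```

The product form means a candidate must be plausible under all three maps at once; a single near-zero likelihood vetoes it.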
  • Patent number: 10783218
    Abstract: The present disclosure is applied to a variable group calculation apparatus for calculating an undetermined variable group that simultaneously minimizes a difference value and a data value. The difference value is a difference between an added composite value, which is obtained by adding and combining the undetermined variable group and a dictionary data group, and an observation data group. The data value includes the difference value and a regularization term of the undetermined variable group. The variable group calculation apparatus of the present disclosure includes a convolution unit configured to convert the regularization term to a convolution value for an L1 norm using the undetermined variable group and a mollifier function, and a calculation unit configured to perform the calculation using the regularization term, which is converted to the convolution value by the convolution unit.
    Type: Grant
    Filed: May 24, 2018
    Date of Patent: September 22, 2020
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Shintaro Yoshizawa, Norimasa Kobori
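The core idea, replacing the non-smooth L1 term |x| by its convolution with a mollifier function, can be checked numerically. The standard bump-function mollifier and the Riemann-sum quadrature below are textbook choices, not necessarily the patent's: the smoothed term agrees with |x| away from the origin and rounds off the kink at zero.

```python
import math

def mollifier(t, eps=0.5):
    """Standard bump-function mollifier scaled to support (-eps, eps)."""
    u = t / eps
    if abs(u) >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - u * u)) / eps

def mollified_abs(x, eps=0.5, n=2001):
    """Numerically convolve |.| with the mollifier: a smooth surrogate for
    the L1 regularization term, exact for |x| > eps and finite at 0."""
    ts = [-eps + 2 * eps * i / (n - 1) for i in range(n)]
    w = [mollifier(t, eps) for t in ts]
    z = sum(w)   # normalize so the discrete mollifier integrates to 1
    return sum(wi * abs(x - ti) for wi, ti in zip(w, ts)) / z
```

Because the mollifier is symmetric and |x - t| is linear in t on the support whenever |x| > eps, the convolution reproduces |x| exactly there; only the neighborhood of the origin is smoothed, which is what makes gradient-based minimization of the regularized objective well behaved.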
  • Patent number: 10534974
    Abstract: Provided is an image area extraction method for extracting an image area of an object from a color image of obtained color image data. The image area extraction method includes converting RGB values of each pixel in the color image data into HSV values, performing threshold processing to binarize at least one of the converted S and V values of each pixel so that they are converted into HS′V′ values, generating composite image data including an X value, a Y value, and a Z value for each pixel, the X value, the Y value, and the Z value being obtained by adding values according to predetermined one-to-one combinations between any one of an R value, a G value, and a B value and any one of an H value, an S′ value, and a V′ value, and extracting the image area using the composite image data.
    Type: Grant
    Filed: January 8, 2018
    Date of Patent: January 14, 2020
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Norimasa Kobori
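The per-pixel pipeline (RGB to HSV, binarize S and V, add channels in one-to-one combinations) can be sketched with the standard library. The particular (R+H, G+S′, B+V′) pairing below is one illustrative combination; the patent only requires some predetermined one-to-one pairing.

```python
import colorsys

def composite_pixels(rgb_pixels, s_thresh=0.5, v_thresh=0.5):
    """Follow the abstract per pixel: convert RGB -> HSV, binarize the S and
    V values against thresholds to get S' and V', then add RGB and HS'V'
    channels in one-to-one combinations to form composite X, Y, Z values."""
    out = []
    for r, g, b in rgb_pixels:               # r, g, b assumed in [0, 1]
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        s2 = 1.0 if s >= s_thresh else 0.0   # binarized S'
        v2 = 1.0 if v >= v_thresh else 0.0   # binarized V'
        out.append((r + h, g + s2, b + v2))  # composite (X, Y, Z)
    return out
```

Binarizing S and V suppresses lighting variation, so the composite channels separate saturated object colors from gray background far more cleanly than raw RGB.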
  • Publication number: 20200005480
    Abstract: A viewpoint recommendation apparatus includes image feature extraction means for extracting an image feature from the acquired image at a first viewpoint, pose estimation means for calculating a first likelihood map indicating a relation between the estimated pose of the object and a likelihood of this estimated pose, second storage means for storing a second likelihood map indicating a relation between the true first viewpoint and a likelihood of this first viewpoint in the estimated pose, third storage means for storing a third likelihood map indicating a relation between the pose of the object when the object is observed at the first and the second viewpoints and a likelihood of this pose, and viewpoint estimation means for estimating the second viewpoint so that a value of an evaluation function of the first, second, and third likelihood maps becomes the maximum or minimum.
    Type: Application
    Filed: June 27, 2019
    Publication date: January 2, 2020
    Inventors: Hiroshi Murase, Yasutomo Kawanishi, Daisuke Deguchi, Nik Mohd Zarifie bin Hashim, Yusuke Nakano, Norimasa Kobori
  • Patent number: 10452950
    Abstract: An object recognition apparatus includes image information acquisition means for acquiring image information of an object to be recognized, storage means for storing detection profile information associating an object candidate with a detector capable of detecting the object candidate, and model image information of the object candidate associated with the object candidate and object detection means including detectors defined in the detection profile information, the object detection means detecting the object to be recognized by using the detector from the image information acquired by the image information acquisition means. The detector of the object detection means detects the object candidate by comparing the model image information of the object candidate associated with the detector in the detection profile information with the image information of the object to be recognized acquired by the image information acquisition means, and outputs the detected object candidate as the object to be recognized.
    Type: Grant
    Filed: May 10, 2017
    Date of Patent: October 22, 2019
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Norimasa Kobori, Kunimatsu Hashimoto, Minoru Yamauchi
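The detection-profile mechanism can be sketched as a dictionary mapping each object candidate to the detector capable of detecting it, with each detector comparing its candidate's stored model image against the acquired image. All interfaces here are hypothetical.

```python
def recognize(image_info, detection_profile, model_images):
    """detection_profile: {candidate: detector_fn}; model_images:
    {candidate: stored model image info}. Each detector returns a match
    score for its candidate; the highest-scoring candidate (if any scores
    above zero) is output as the recognized object."""
    best, best_score = None, 0.0
    for candidate, detector in detection_profile.items():
        score = detector(model_images[candidate], image_info)
        if score > best_score:
            best, best_score = candidate, score
    return best
```

Keeping the candidate-to-detector mapping in a profile lets different object classes use different detection algorithms behind one recognition interface.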
  • Publication number: 20180349318
    Abstract: The present disclosure is applied to a variable group calculation apparatus for calculating an undetermined variable group that simultaneously minimizes a difference value and a data value. The difference value is a difference between an added composite value, which is obtained by adding and combining the undetermined variable group and a dictionary data group, and an observation data group. The data value includes the difference value and a regularization term of the undetermined variable group. The variable group calculation apparatus of the present disclosure includes a convolution unit configured to convert the regularization term to a convolution value for an L1 norm using the undetermined variable group and a mollifier function, and a calculation unit configured to perform the calculation using the regularization term, which is converted to the convolution value by the convolution unit.
    Type: Application
    Filed: May 24, 2018
    Publication date: December 6, 2018
    Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Shintaro Yoshizawa, Norimasa Kobori