Patents by Inventor Jun Shimamura

Jun Shimamura has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12286143
    Abstract: A roadside unit 40 controls a traffic safety apparatus provided at a railroad crossing 150 where a general road 120, on which a vehicle 20 travels, intersects a railroad track 110, on which a railroad vehicle 10 given priority over the vehicle 20 travels. The roadside unit 40 receives a first message from the railroad vehicle 10. The first message includes an information element indicating a railroad vehicle as the type of the transmission source vehicle, and an information element indicating at least one of a position or a speed of the railroad vehicle 10. A communicator in the roadside unit 40 transmits a second message to the vehicle 20. The second message includes an information element related to the waiting time for the railroad vehicle 10 to pass through the railroad crossing 150.
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: April 29, 2025
    Assignee: KYOCERA Corporation
    Inventors: Takatoshi Yoshikawa, Jun Shimamura
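    Code sketch: a minimal Python sketch of the message relay described in the abstract above. The message fields and the waiting-time heuristic (assumed train length and simple kinematics) are illustrative assumptions, not the patented message format.

      from dataclasses import dataclass

      @dataclass
      class FirstMessage:          # sent by the railroad vehicle 10
          vehicle_type: str        # information element: type of the transmission source vehicle
          position_m: float        # distance to the crossing along the track (m)
          speed_mps: float         # current speed (m/s)

      @dataclass
      class SecondMessage:         # relayed by the roadside unit 40 to the vehicle 20
          waiting_time_s: float    # expected wait until the train clears the crossing

      def relay(msg: FirstMessage, train_length_m: float = 200.0) -> SecondMessage:
          """Estimate how long road vehicles must wait for the train to pass."""
          if msg.vehicle_type != "railroad_vehicle" or msg.speed_mps <= 0:
              return SecondMessage(waiting_time_s=float("inf"))
          time_to_crossing = msg.position_m / msg.speed_mps
          time_to_clear = train_length_m / msg.speed_mps
          return SecondMessage(waiting_time_s=time_to_crossing + time_to_clear)

      print(relay(FirstMessage("railroad_vehicle", position_m=600.0, speed_mps=20.0)))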
  • Publication number: 20250052590
    Abstract: A road boundary detection device acquires, as road boundary information, a set of lines corresponding to a road boundary from point cloud data. The road boundary detection device includes: a candidate point detection unit that detects, from the point cloud data, each point of road boundary candidates corresponding to candidates of a road boundary; a candidate point clustering unit that clusters each point of the road boundary candidates; an adjacent cluster reduction unit that reduces clusters in an adjacency relationship based on the distribution of points in those clusters, using a predetermined cluster reduction method; a line fitting unit that fits one or more straight or curved lines to one or more of the clusters and outputs the fitted lines as road boundary candidates; a line connecting unit that connects some of the fitted lines using a predetermined analysis method; and an information output unit that outputs a calculated line as the road boundary information.
    Type: Application
    Filed: December 8, 2021
    Publication date: February 13, 2025
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Taiga YOSHIDA, Yasuhiro YAO, Naoki ITO, Jun SHIMAMURA
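    Code sketch: a minimal sketch of the clustering and line-fitting stages, assuming 2D (x, y) candidate points; the candidate detection, cluster reduction, and line connection steps are omitted, and DBSCAN plus least-squares fitting stand in for the unspecified methods.

      import numpy as np
      from sklearn.cluster import DBSCAN

      def fit_boundary_lines(candidates: np.ndarray, eps: float = 0.5, min_samples: int = 5):
          """Cluster road-boundary candidate points and fit a line to each cluster."""
          labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(candidates)
          lines = []
          for lbl in set(labels) - {-1}:               # -1 marks DBSCAN noise points
              pts = candidates[labels == lbl]
              slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], deg=1)
              lines.append((slope, intercept))
          return lines

      rng = np.random.default_rng(0)
      xs = np.linspace(0, 10, 100)
      left = np.c_[xs, 0.02 * xs + rng.normal(0, 0.05, xs.size)]
      right = np.c_[xs, 3.5 + 0.02 * xs + rng.normal(0, 0.05, xs.size)]
      print(fit_boundary_lines(np.vstack([left, right])))     # two (slope, intercept) pairs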
  • Publication number: 20250045875
    Abstract: An input processing unit receives an image and a three-dimensional point cloud that includes three-dimensional points having a reflection intensity on the surface of an object, for which at least the relationship between the image capturing position and the measurement position is obtained in advance, and obtains the pixel positions on the image corresponding to the respective three-dimensional points of the point cloud. A shadow region estimation unit clusters the pixels of the image on the basis of pixel values and pixel positions, obtains, for each cluster, the average reflection intensity and the average value of quantified color information, and estimates a shadow region by comparing the average reflection intensity and the average color information between the clusters. A shadow correction unit corrects the pixel values of the shadow region based on the estimated shadow region and the image.
    Type: Application
    Filed: December 7, 2021
    Publication date: February 6, 2025
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shogo SATO, Yasuhiro YAO, Shingo ANDO, Jun SHIMAMURA
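    Code sketch: a rough sketch of the per-cluster comparison, assuming every pixel already carries a projected reflection intensity; k-means, the darkness ratio, and the intensity ratio are illustrative stand-ins for the unspecified clustering and thresholds.

      import numpy as np
      from sklearn.cluster import KMeans

      def estimate_shadow_clusters(rgb, intensity, xy, n_clusters=8,
                                   dark_ratio=0.6, intensity_ratio=0.8):
          """Return cluster labels and the ids of clusters that look like shadow."""
          feats = np.hstack([rgb, xy])                 # pixel values + pixel positions
          labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
          brightness = rgb.mean(axis=1)
          shadow_ids = []
          for c in range(n_clusters):
              inside, outside = labels == c, labels != c
              # Shadow: clearly darker in color, yet similar LiDAR reflectivity.
              if (brightness[inside].mean() < dark_ratio * brightness[outside].mean()
                      and intensity[inside].mean() > intensity_ratio * intensity[outside].mean()):
                  shadow_ids.append(c)
          return labels, shadow_ids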
  • Publication number: 20250037402
    Abstract: A position and posture estimation device acquires three-dimensional point cloud data at each of a plurality of times and position data at each of a plurality of times, the three-dimensional point cloud data being measured every time a first time elapses, the position data being measured every time a second time longer than the first time elapses. The position and posture estimation device estimates a local position and a local posture in a local coordinate system. The position and posture estimation device estimates an estimated absolute position and an estimated absolute posture in an absolute coordinate system every time the position data is acquired. The position and posture estimation device generates provisional three-dimensional point cloud data in the absolute coordinate system every time the position data is acquired.
    Type: Application
    Filed: December 6, 2021
    Publication date: January 30, 2025
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Yasuhiro YAO, Kana KURATA, Shingo ANDO, Jun SHIMAMURA
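    Code sketch: a simplified 2D illustration of re-expressing locally accumulated points in the absolute frame whenever the slower absolute position data arrives; the SE(2) algebra is standard, and the poses and points are invented.

      import numpy as np

      def se2(x, y, theta):
          """Homogeneous 2D rigid transform."""
          c, s = np.cos(theta), np.sin(theta)
          return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

      def to_absolute(points_local, local_pose, absolute_pose):
          """Map local-frame points to the absolute frame given both sensor poses."""
          T = se2(*absolute_pose) @ np.linalg.inv(se2(*local_pose))
          homog = np.c_[points_local, np.ones(len(points_local))]
          return (T @ homog.T).T[:, :2]

      pts = np.array([[1.0, 0.0], [2.0, 0.5]])          # provisional local point cloud
      print(to_absolute(pts, local_pose=(0.0, 0.0, 0.0),
                        absolute_pose=(100.0, 50.0, np.pi / 2)))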
  • Publication number: 20240371148
    Abstract: It is possible to identify a position of a target object that is difficult to recognize. A position estimation device includes: an information fusion unit that generates fusion information in which position information of a subject object that is an object corresponding to a subject, visual information of the subject object, and relationship information indicating a relationship with a target object paired with the subject object are fused; and an object position estimation unit that estimates a position of the target object by using an object position estimator learned in advance on the basis of the fusion information.
    Type: Application
    Filed: May 26, 2021
    Publication date: November 7, 2024
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Kaori KUMAGAI, Takayuki UMEDA, Masaki KITAHARA, Jun SHIMAMURA
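    Code sketch: a toy illustration of fusing the subject object's position, visual, and relationship features before regressing the paired target object's position; the single linear layer is an untrained placeholder for the pre-learned estimator, and all dimensions are invented.

      import numpy as np

      def fuse(position_feat, visual_feat, relation_feat):
          """Concatenate the three feature vectors into one fusion vector."""
          return np.concatenate([position_feat, visual_feat, relation_feat])

      class ObjectPositionEstimator:
          """Stand-in for the learned estimator: one random linear layer."""
          def __init__(self, in_dim, rng=np.random.default_rng(0)):
              self.W = rng.normal(scale=0.1, size=(2, in_dim))    # predicts (x, y)

          def __call__(self, fusion_vector):
              return self.W @ fusion_vector

      fusion = fuse(np.array([1.0, 2.0]), np.ones(8), np.array([0.5, -0.5]))
      print(ObjectPositionEstimator(in_dim=fusion.size)(fusion))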
  • Publication number: 20240371093
    Abstract: A point cloud complementing device 100 receives, as inputs, a colored three-dimensional point cloud that includes a missing region and the number of points to be complemented. A CPU 11 of the point cloud complementing device 100 extracts a feature vector of the three-dimensional point cloud using a feature extractor learned in advance. Using a point cloud complementing model learned in advance in consideration of an error between color information and brightness information, the CPU 11 takes the feature vector and the number of points to be complemented as inputs and outputs a point cloud obtained by complementing the input three-dimensional point cloud up to the number of points to be complemented, correcting points of the predicted point cloud whose brightness does not change relative to adjacent points so that those points have close brightness information, on the assumption that they lie on the same plane.
    Type: Application
    Filed: September 6, 2021
    Publication date: November 7, 2024
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shunsuke TSUKATANI, Shingo ANDO, Jun SHIMAMURA
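    Code sketch: only the final brightness-correction step is sketched here; the learned feature extractor and completion model are omitted, and the neighborhood radius and brightness tolerance are invented.

      import numpy as np
      from scipy.spatial import cKDTree

      def smooth_brightness(xyz, brightness, radius=0.1, max_diff=0.05):
          """Pull brightness together for adjacent points whose brightness barely changes."""
          tree = cKDTree(xyz)
          corrected = brightness.copy()
          for i, nbrs in enumerate(tree.query_ball_point(xyz, r=radius)):
              close = [j for j in nbrs if abs(brightness[j] - brightness[i]) < max_diff]
              if len(close) > 1:                        # treated as lying on one plane
                  corrected[i] = brightness[close].mean()
          return corrected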
  • Publication number: 20240312047
    Abstract: There is provided a position detection device that recognizes the presence position of a target object in a three-dimensional space. The device acquires three-dimensional point cloud information of the space and a plurality of images obtained by imaging an area including the surroundings of an object in the space from different imaging points. A region detection unit receives the plurality of acquired images as an input, determines whether a target object appears in the plurality of images, and, where the target object appears, detects the region of the object in each of the plurality of images. A specifying unit specifies the region of the point cloud corresponding to the target object based on the point cloud information and the region of the object detected in each of the images.
    Type: Application
    Filed: July 14, 2021
    Publication date: September 19, 2024
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Taiga YOSHIDA, Naoki ITO, Jun SHIMAMURA
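    Code sketch: a minimal version of the specifying unit; 3D points are projected into each calibrated image and kept only if they fall inside the detected 2D box. The projection matrices and boxes are assumed to come from the acquisition and region detection steps.

      import numpy as np

      def points_in_box(points_xyz, projection_matrix, box_xyxy):
          """Boolean mask of 3D points that project inside a 2D detection box."""
          homog = np.c_[points_xyz, np.ones(len(points_xyz))]
          uvw = (projection_matrix @ homog.T).T
          uv = uvw[:, :2] / uvw[:, 2:3]                 # perspective divide
          x1, y1, x2, y2 = box_xyxy
          return ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
                  (uv[:, 1] >= y1) & (uv[:, 1] <= y2))

      def specify_target_points(points_xyz, cameras_and_boxes):
          """Intersect per-image masks to isolate the target object's point cloud region."""
          mask = np.ones(len(points_xyz), dtype=bool)
          for P, box in cameras_and_boxes:
              mask &= points_in_box(points_xyz, P, box)
          return points_xyz[mask]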
  • Publication number: 20240311989
    Abstract: A motion abnormality determination unit 62 classifies video data representing a motion of a person into motion clusters and determines whether the motion of the person is abnormal. A procedure classification unit 66 classifies the motion of the person into procedures based on classification results of the motion clusters and a procedure tree. A procedure abnormality determination unit 68 determines whether the procedure including the motion of the person is abnormal based on the classification result of the procedure.
    Type: Application
    Filed: June 29, 2021
    Publication date: September 19, 2024
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Motohiro TAKAGI, Kazuya YOKOHARI, Masaki KITAHARA, Jun SHIMAMURA
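    Code sketch: a toy version of the procedure check; the motion-cluster classification is assumed to have already produced a procedure sequence, and the procedure tree of allowed transitions is invented for illustration.

      PROCEDURE_TREE = {                 # allowed next procedures for each procedure
          "pick_part": {"attach_part"},
          "attach_part": {"tighten_screw"},
          "tighten_screw": {"pick_part", "inspect"},
      }

      def procedure_is_abnormal(procedure_sequence):
          """Flag any transition that the procedure tree does not allow."""
          for prev, cur in zip(procedure_sequence, procedure_sequence[1:]):
              if cur not in PROCEDURE_TREE.get(prev, set()):
                  return True
          return False

      print(procedure_is_abnormal(["pick_part", "attach_part", "tighten_screw"]))  # False
      print(procedure_is_abnormal(["pick_part", "tighten_screw"]))                 # True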
  • Patent number: 12094187
    Abstract: An identification result explanation device calculates, for each of feature maps output individually from a plurality of filters used in specified layers of a CNN, a weight representing a degree of association with a result of identification for an input image by the CNN. The identification result explanation device outputs transposed feature maps obtained by transposing, for each of clusters, the feature maps in the cluster based on a result of classification of each of the feature maps for the input image classified into any of the clusters and on the weight calculated for each of the feature maps. The identification result explanation device uses the transposed feature maps in each of the clusters to retrieve, in a storage unit, each of the clusters including the feature maps linked to the same filters as those linked to the transposed feature maps for the input image and selects.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: September 17, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Kaori Kumagai, Jun Shimamura, Atsushi Sagata
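    Code sketch: one common way to weight feature maps by their association with the identification result is a Grad-CAM-style gradient average; this substitution is illustrative and not necessarily the patented weighting, and the network here is a random stand-in for the CNN.

      import torch
      import torch.nn as nn

      conv = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
      head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))

      x = torch.randn(1, 3, 32, 32)
      feature_maps = conv(x)                  # one feature map per filter in the specified layer
      feature_maps.retain_grad()
      score = head(feature_maps)[0].max()     # score of the identified class
      score.backward()

      # Weight of each feature map = spatially averaged gradient of the class score.
      weights = feature_maps.grad.mean(dim=(2, 3)).squeeze(0)
      print(weights)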
  • Publication number: 20240296696
    Abstract: An object detection unit 60 detects, from video data representing the motion of a person, appearance features related to an object near the person and to the appearance of the person, person region information related to a region representing the person, and object region information related to a region representing the object. A motion feature extraction unit 62 extracts a motion feature related to the motion of the person based on the video data and the person region information. A relational feature extraction unit 64 extracts a relational feature indicating the relationship between the object and the person based on the object region information and the person region information. An abnormality determination unit 66 determines whether the motion of the person is abnormal based on the appearance features, the motion feature, and the relational feature.
    Type: Application
    Filed: June 29, 2021
    Publication date: September 5, 2024
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Motohiro TAKAGI, Kazuya YOKOHARI, Masaki KITAHARA, Jun SHIMAMURA
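    Code sketch: a toy stand-in for the abnormality determination unit; the appearance, motion, and relational features are concatenated and scored by an off-the-shelf anomaly detector instead of the learned determiner, with invented feature dimensions.

      import numpy as np
      from sklearn.ensemble import IsolationForest

      rng = np.random.default_rng(0)
      normal_feats = rng.normal(0, 1, size=(200, 6))   # appearance(2) + motion(2) + relation(2)
      detector = IsolationForest(random_state=0).fit(normal_feats)

      def is_abnormal(appearance, motion, relation):
          feat = np.concatenate([appearance, motion, relation]).reshape(1, -1)
          return detector.predict(feat)[0] == -1       # -1 marks an anomaly in scikit-learn

      print(is_abnormal(np.zeros(2), np.zeros(2), np.zeros(2)))               # expected False
      print(is_abnormal(np.full(2, 6.0), np.full(2, 6.0), np.full(2, 6.0)))   # expected True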
  • Publication number: 20240281501
    Abstract: A three-dimensional point group is accurately identified on the basis of GIS data in which a geographical position of an object is defined by a polygon, a line, or a point.
    Type: Application
    Filed: July 29, 2021
    Publication date: August 22, 2024
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Kana KURATA, Yasuhiro YAO, Naoki ITO, Jun SHIMAMURA
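    Code sketch: a minimal example of identifying points with a GIS polygon; each point's horizontal position is tested against a footprint polygon, and the footprint, label, and coordinates are invented.

      import numpy as np
      from shapely.geometry import Point, Polygon

      footprint = Polygon([(0, 0), (10, 0), (10, 6), (0, 6)])    # GIS polygon of an object

      def label_points(xyz, polygon, label="building"):
          """Assign the GIS label to every 3D point whose (x, y) lies inside the polygon."""
          return [label if polygon.contains(Point(p[0], p[1])) else "other" for p in xyz]

      pts = np.array([[5.0, 3.0, 12.0], [20.0, 3.0, 0.5]])
      print(label_points(pts, footprint))                        # ['building', 'other']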
  • Publication number: 20240281711
    Abstract: There is provided a class label estimation device that estimates a class label of input data and estimates a cause of an estimation error, the class label estimation device including: a distribution estimation unit that estimates a distribution followed by a training set; a distance estimation unit that estimates a distance of the input data from the training set based on the distribution; an unknown degree estimation unit that estimates an unknown degree of the input data based on the distance; an unknown degree correction unit that corrects the unknown degree based on the distribution; and an error cause estimation unit that estimates a cause of an estimation error using the corrected unknown degree.
    Type: Application
    Filed: March 23, 2021
    Publication date: August 22, 2024
    Inventors: Mihiro UCHIDA, Takayuki UMEDA, Shingo ANDO, Jun SHIMAMURA
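    Code sketch: a sketch of the distance-based unknown degree, assuming the training-set distribution is summarized as a single Gaussian; the Mahalanobis distance and the squashing function are illustrative choices, and the correction and error-cause steps are omitted.

      import numpy as np

      def fit_distribution(train_feats):
          mean = train_feats.mean(axis=0)
          cov_inv = np.linalg.inv(np.cov(train_feats, rowvar=False))
          return mean, cov_inv

      def unknown_degree(x, mean, cov_inv, scale=3.0):
          d = np.sqrt((x - mean) @ cov_inv @ (x - mean))    # Mahalanobis distance
          return 1.0 - np.exp(-d / scale)                   # 0 = close to the training set

      rng = np.random.default_rng(0)
      train = rng.normal(0, 1, size=(500, 4))
      mean, cov_inv = fit_distribution(train)
      print(unknown_degree(np.zeros(4), mean, cov_inv))      # small
      print(unknown_degree(np.full(4, 8.0), mean, cov_inv))  # close to 1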
  • Patent number: 12008765
    Abstract: The present invention makes it possible to estimate, with high precision, a candidate region indicating each of multiple target objects included in an image. A parameter determination unit 11 determines a parameter to be used when detecting boundary lines in an image 101, based on the ratio between the density of boundary lines included in the image 101 and the density of boundary lines in a region indicated by region information 102, which indicates a region including at least one of the multiple target objects included in the image 101. A boundary line detection unit 12 detects the boundary lines in the image 101 using the parameter. For each of the multiple target objects included in the image 101, a region estimation unit 13 estimates the candidate region of the target object based on the detected boundary lines.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: June 11, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Yukito Watanabe, Shuhei Tarashima, Takashi Hosono, Jun Shimamura, Tetsuya Kinebuchi
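    Code sketch: OpenCV's Canny detector stands in for the boundary line detection; how the density ratio maps onto the detector's thresholds is an assumption, and the region estimation step is omitted.

      import cv2
      import numpy as np

      def edge_density(edges):
          return float(np.count_nonzero(edges)) / edges.size

      def detect_boundaries(image_gray, region_mask, base_low=50, base_high=150):
          """Re-run edge detection with thresholds scaled by the in-region density ratio."""
          coarse = cv2.Canny(image_gray, base_low, base_high)
          ratio = edge_density(coarse[region_mask]) / max(edge_density(coarse), 1e-6)
          # Denser boundary lines inside the object region -> relax the thresholds.
          low = int(base_low / max(ratio, 1e-6))
          high = int(base_high / max(ratio, 1e-6))
          return cv2.Canny(image_gray, min(low, 255), min(high, 255))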
  • Patent number: 11928790
    Abstract: An object included in a low-resolution image can be recognized with a high degree of precision. An acquisition unit acquires, from a query image, an increased-resolution image, which is acquired by increasing the resolution of the query image, by performing pre-learned acquisition processing for increasing the resolution of an image. A feature extraction unit, using the increased-resolution image as input, extracts a feature vector of the increased-resolution image by performing pre-learned extraction processing for extracting a feature vector of an image. A recognition unit recognizes an object captured on the increased-resolution image on the basis of the feature vector of the increased-resolution image and outputs the recognized object as the object captured on the query image.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: March 12, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Yukito Watanabe, Jun Shimamura, Atsushi Sagata
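    Code sketch: a toy pipeline where bicubic upscaling replaces the pre-learned super-resolution step and a color histogram replaces the learned feature extractor; the gallery matching is a simple dot-product lookup over assumed gallery features.

      import cv2
      import numpy as np

      def increase_resolution(img, scale=4):
          h, w = img.shape[:2]
          return cv2.resize(img, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)

      def extract_feature(img):
          hist = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
          return cv2.normalize(hist, None).flatten()

      def recognize(query_lowres, gallery):            # gallery: {object name: feature vector}
          feat = extract_feature(increase_resolution(query_lowres))
          return max(gallery, key=lambda name: float(feat @ gallery[name]))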
  • Publication number: 20240005655
    Abstract: A learning apparatus includes: a data generation unit that learns generation of data based on a class label signal and a noise signal; an unknown degree estimation unit that learns estimation of a degree to which input data is unknown using a training set and the data generated by the data generation unit; a first class likelihood estimation unit that learns estimation of a first likelihood of each class label for input data using the training set; a second class likelihood estimation unit that learns estimation of a second likelihood of each class label for input data using the training set and the data generated by the data generation unit; a class likelihood correction unit that generates a third likelihood by correcting the first likelihood on the basis of the unknown degree and the second likelihood; and a class label estimation unit that estimates a class label of data related to the third likelihood on the basis of the third likelihood, thereby automatically estimating the cause of an error made by a deep model.
    Type: Application
    Filed: October 21, 2020
    Publication date: January 4, 2024
    Inventors: Mihiro UCHIDA, Jun SHIMAMURA, Shingo ANDO, Takayuki UMEDA
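    Code sketch: only the class likelihood correction unit is sketched; the blending rule that combines the first and second likelihoods using the unknown degree is my assumption, not the learned correction.

      import numpy as np

      def correct_likelihood(first, second, unknown_degree):
          """Blend the two class-likelihood vectors according to the unknown degree."""
          third = (1.0 - unknown_degree) * first + unknown_degree * second
          return third / third.sum()                   # renormalize so it sums to 1

      first = np.array([0.7, 0.2, 0.1])                # from the first class likelihood estimation unit
      second = np.array([0.3, 0.3, 0.4])               # from the second class likelihood estimation unit
      print(correct_likelihood(first, second, unknown_degree=0.8))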
  • Publication number: 20230409964
    Abstract: An identification device acquires a plurality of identification target points by sampling a target point group that is a set of three-dimensional target points. The identification device calculates relative coordinates of a neighboring point of the identification target point with respect to the identification target point. The identification device inputs coordinates of the plurality of identification target points and relative coordinates of neighboring points with respect to each of the plurality of identification target points into a class label assigning learned model to acquire class labels of the plurality of identification target points and validity of the class labels with respect to the neighboring points for each of the plurality of identification target points.
    Type: Application
    Filed: November 5, 2020
    Publication date: December 21, 2023
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Kana KURATA, Yasuhiro YAO, Naoki ITO, Shingo ANDO, Jun SHIMAMURA
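    Code sketch: the input preparation only; target points are sampled at random (the sampler is not specified in the abstract) and paired with the relative coordinates of their nearest neighbors, while the learned labeling model is omitted.

      import numpy as np
      from scipy.spatial import cKDTree

      def sample_with_neighbors(points, n_samples=4, k=8, seed=0):
          """Sample identification target points and compute neighbors' relative coordinates."""
          rng = np.random.default_rng(seed)
          idx = rng.choice(len(points), size=n_samples, replace=False)
          tree = cKDTree(points)
          _, nbr_idx = tree.query(points[idx], k=k + 1)    # the nearest neighbor is the point itself
          relative = points[nbr_idx[:, 1:]] - points[idx][:, None, :]
          return points[idx], relative                      # shapes (n, 3) and (n, k, 3)

      cloud = np.random.default_rng(1).uniform(0, 5, size=(200, 3))
      targets, rel = sample_with_neighbors(cloud)
      print(targets.shape, rel.shape)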
  • Patent number: 11809525
    Abstract: A list for accurately identifying objects that have different sizes but the same attribute can be generated automatically. From a group of images, each including an object with any of a plurality of attributes, a classification unit classifies images whose objects have an identical attribute but different real sizes into an identical cluster. An output unit outputs each cluster into which at least two of the images are classified as a list of images with different real object sizes for the identical attribute.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: November 7, 2023
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Takashi Hosono, Yukito Watanabe, Jun Shimamura, Atsushi Sagata
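    Code sketch: a toy version of the list generation; agglomerative clustering on placeholder image features stands in for the classification unit, and only clusters containing at least two images are emitted as lists.

      import numpy as np
      from sklearn.cluster import AgglomerativeClustering

      def build_lists(features, image_names, distance_threshold=1.0):
          labels = AgglomerativeClustering(
              n_clusters=None, distance_threshold=distance_threshold).fit_predict(features)
          clusters = {}
          for name, lbl in zip(image_names, labels):
              clusters.setdefault(lbl, []).append(name)
          return [imgs for imgs in clusters.values() if len(imgs) >= 2]

      rng = np.random.default_rng(0)
      feats = np.vstack([rng.normal(0, 0.05, (3, 4)),      # same attribute, three sizes
                         rng.normal(5, 0.05, (2, 4)),      # another attribute, two sizes
                         rng.normal(10, 0.05, (1, 4))])    # singleton, not listed
      print(build_lists(feats, ["a1", "a2", "a3", "b1", "b2", "c1"]))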
  • Patent number: 11790635
    Abstract: A retrieval apparatus includes a first retrieval unit, a second retrieval unit, and an integration unit that calculates an integrated similarity by integrating a first similarity calculated by the first retrieval unit and a second similarity calculated by the second retrieval unit. For the similarities among a basic image, an image of the reference images similar to the basic image, and an image of the reference images dissimilar to the basic image, at least the feature extraction of the first retrieval unit is learned such that a margin, based on the second similarity between the basic image and the similar image and the second similarity between the basic image and the dissimilar image, increases as the second similarity between the basic image and the dissimilar image increases relative to the second similarity between the basic image and the similar image.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: October 17, 2023
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Yukito Watanabe, Takayuki Umeda, Jun Shimamura, Atsushi Sagata
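    Code sketch: the margin idea written as a triplet-style loss on similarity scores; the exact margin formula is my assumption, with a sigmoid used so the margin grows as the dissimilar image's second similarity approaches or exceeds the similar image's.

      import torch

      def adaptive_margin_loss(first_sim_pos, first_sim_neg,
                               second_sim_pos, second_sim_neg,
                               base_margin=0.1, alpha=0.5):
          """Hinge loss on the first similarities with a margin driven by the second ones."""
          margin = base_margin + alpha * torch.sigmoid(second_sim_neg - second_sim_pos)
          return torch.relu(first_sim_neg - first_sim_pos + margin).mean()

      first_pos, first_neg = torch.tensor([0.9, 0.6]), torch.tensor([0.2, 0.7])
      second_pos, second_neg = torch.tensor([0.8, 0.5]), torch.tensor([0.3, 0.6])
      print(adaptive_margin_loss(first_pos, first_neg, second_pos, second_neg))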
  • Publication number: 20230245438
    Abstract: A training unit 24 trains a recognizer that recognizes labels of data, based on a plurality of training data to which training labels are given. A score calculation unit 28 uses the trained recognizer to calculate the score output by the recognizer for each of the plurality of training data. A threshold value determination unit 30 determines a threshold value of the score for determining the label, based on the shape of an ROC curve representing the correspondence between the true positive rate and the false positive rate, which is obtained from the scores calculated for the plurality of training data. A selection unit 32 selects training data that are difficult for the recognizer to recognize, based on the determined threshold value and the scores calculated for the plurality of training data. The process of each unit described above is repeated until a predetermined iteration termination condition is satisfied.
    Type: Application
    Filed: June 22, 2020
    Publication date: August 3, 2023
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Kazuhiko MURASAKI, Shingo ANDO, Jun SHIMAMURA
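    Code sketch: scikit-learn's ROC curve is used to pick a threshold (here the operating point closest to the top-left corner, one common reading of "the shape of the ROC curve"), and training samples scoring near that threshold are selected as hard examples; the selection band is an assumption.

      import numpy as np
      from sklearn.metrics import roc_curve

      def select_hard_examples(labels, scores, band=0.1):
          fpr, tpr, thresholds = roc_curve(labels, scores)
          best = np.argmin(np.hypot(fpr, 1.0 - tpr))    # operating point closest to (0, 1)
          threshold = thresholds[best]
          hard = np.abs(scores - threshold) < band      # samples the recognizer finds difficult
          return threshold, np.flatnonzero(hard)

      rng = np.random.default_rng(0)
      labels = np.r_[np.zeros(50), np.ones(50)].astype(int)
      scores = np.r_[rng.normal(0.3, 0.15, 50), rng.normal(0.7, 0.15, 50)]
      print(select_hard_examples(labels, scores))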
  • Patent number: 11651515
    Abstract: It is possible to determine a geometric transformation matrix representing geometric transformation between an input image and a template image with high precision. A geometric transformation matrix/inlier estimation section 32 determines a corresponding point group serving as inliers, and estimates the geometric transformation matrix representing the geometric transformation between the input image and the template image. A scatter degree estimation section 34 estimates scatter degree of the corresponding points based on the corresponding point group serving as inliers. A plane tracking convergence determination threshold calculation section 36 calculates a threshold used in convergence determination when iterative update of the geometric transformation matrix in a plane tracking section 38 is performed based on the estimated scatter degree.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: May 16, 2023
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Jun Shimamura, Yukito Watanabe, Tetsuya Kinebuchi
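    Code sketch: OpenCV's RANSAC homography stands in for the geometric transformation matrix and inlier estimation, and the convergence threshold for the subsequent plane tracking is scaled by the inliers' scatter; the scaling rule and the data are illustrative.

      import cv2
      import numpy as np

      def estimate_with_threshold(pts_input, pts_template, k=0.01):
          """Estimate the transform, the inliers, and a scatter-based convergence threshold."""
          H, inlier_mask = cv2.findHomography(pts_input, pts_template, cv2.RANSAC, 3.0)
          inliers = pts_input[inlier_mask.ravel() == 1]
          scatter = inliers.std(axis=0).mean()          # spread of the inlier correspondences
          return H, k * scatter                         # widely scattered inliers -> looser threshold

      rng = np.random.default_rng(0)
      src = rng.uniform(0, 640, size=(50, 2)).astype(np.float32)
      dst = src + np.float32([5.0, -3.0]) + rng.normal(0, 0.5, src.shape).astype(np.float32)
      H, threshold = estimate_with_threshold(src, dst)
      print(np.round(H, 2), threshold)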