Patents by Inventor Katsuhiko Takahashi

Katsuhiko Takahashi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220292397
    Abstract: The server device receives model information from a plurality of terminal devices and generates an integrated model by integrating the model information received from the plurality of terminal devices. The server device generates an updated model by training the model defined by the model information received from the update-target terminal device, using the integrated model. Then, the server device transmits the model information of the updated model to the terminal device. Thereafter, the terminal device executes recognition processing using the updated model.
    Type: Application
    Filed: August 21, 2019
    Publication date: September 15, 2022
    Applicant: NEC Corporation
    Inventors: Katsuhiko TAKAHASHI, Tetsuo INOSHITA, Asuka ISHII, Gaku NAKANO
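The integration and update steps described in this abstract can be sketched as follows. This is a hypothetical simplification in which each model is a plain parameter vector, integration is parameter averaging, and the update is a blend controlled by an illustrative rate `alpha`; the patent does not specify these choices.

```python
# Hypothetical sketch: server-side integration of terminal models by
# parameter averaging, followed by an update of the target model.

def integrate_models(models):
    """Average the parameter vectors received from terminal devices."""
    n = len(models)
    length = len(models[0])
    return [sum(m[i] for m in models) / n for i in range(length)]

def update_model(target_model, integrated, alpha=0.5):
    """Move the update-target model toward the integrated model.

    alpha is an illustrative blending rate standing in for the
    learning step the server performs with the integrated model.
    """
    return [(1 - alpha) * t + alpha * g
            for t, g in zip(target_model, integrated)]

terminal_models = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
integrated = integrate_models(terminal_models)          # [3.0, 4.0]
updated = update_model(terminal_models[0], integrated)  # [2.0, 3.0]
```

The server would then send `updated` back to the terminal device, which uses it for recognition.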
  • Publication number: 20220277552
    Abstract: In an object detection device, a plurality of object detection units output a score indicating the probability that a predetermined object exists for each partial region set with respect to inputted image data. On the basis of the image data, a weight computation unit uses weight computation parameters to compute weights for each of the plurality of object detection units, the weights being used when the scores outputted by the plurality of object detection units are merged. A merging unit merges the scores outputted by the plurality of object detection units for each partial region according to the weights computed by the weight computation unit. A loss computation unit computes a difference between a ground truth label of the image data and the scores merged by the merging unit as a loss. Then, a parameter correction unit corrects the weight computation parameters so as to reduce the computed loss.
    Type: Application
    Filed: July 11, 2019
    Publication date: September 1, 2022
    Applicant: NEC Corporation
    Inventors: Katsuhiko TAKAHASHI, Yuichi NAKATANI, Tetsuo INOSHITA, Asuka ISHII, Gaku NAKANO
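The score-merging step in this abstract can be sketched as a weighted sum over detectors. The softmax weight function, the per-detector features it consumes, and the squared-error loss below are illustrative assumptions standing in for the learned weight computation parameters.

```python
# Minimal sketch: per-detector merging weights combine the detectors'
# per-region scores, and a loss against ground-truth labels would drive
# correction of the weight computation parameters.
import math

def compute_weights(features):
    """Softmax over per-detector features -> merging weights."""
    exps = [math.exp(f) for f in features]
    total = sum(exps)
    return [e / total for e in exps]

def merge_scores(scores_per_detector, weights):
    """Weighted sum of each detector's per-region scores."""
    n_regions = len(scores_per_detector[0])
    return [sum(w * s[r] for w, s in zip(weights, scores_per_detector))
            for r in range(n_regions)]

def loss(merged, labels):
    """Squared-error loss against the ground-truth labels."""
    return sum((m - y) ** 2 for m, y in zip(merged, labels))

weights = compute_weights([0.0, 0.0])                      # equal weights
merged = merge_scores([[0.9, 0.2], [0.7, 0.4]], weights)   # ≈ [0.8, 0.3]
```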
  • Publication number: 20220277553
    Abstract: In an object detection device, a plurality of object detection units output a score indicating the probability that a predetermined object exists for each partial region set in the input image data. The weight computation unit computes, based on the image data and using weight computation parameters, weights for merging the scores output by the plurality of object detection units. The merging unit merges the scores output by the plurality of object detection units for each partial region, using the weights computed by the weight computation unit. The target model object detection unit is configured to output a score indicating the probability that the predetermined object exists for each partial region set in the image data. The first loss computation unit computes a first loss indicating the difference of the score of the target model object detection unit from the ground truth label of the image data and from the score merged by the merging unit.
    Type: Application
    Filed: July 11, 2019
    Publication date: September 1, 2022
    Applicant: NEC Corporation
    Inventors: Katsuhiko TAKAHASHI, Yuichi NAKATANI, Asuka ISHII, Tetsuo INOSHITA, Gaku NAKANO
  • Patent number: 11417080
    Abstract: An object detection apparatus 100 is provided with: a fish-eye image acquisition unit 10 configured to acquire a time series fish-eye image; a horizontal panorama image generation unit 20 configured to, for each frame, perform conversion to a horizontal panorama image in which a vertical direction in a real space is expressed in a perpendicular direction of the frame, and an azimuth is expressed equiangularly in a horizontal direction of the frame; an edge pair extraction unit 30 configured to extract a pair of edges in the perpendicular direction from the horizontal panorama image; a change rate extraction unit 40 configured to extract a change rate of an inter-edge distance between the pair of edges; a lower end region extraction unit 50 configured to extract a region of a lower end of an object providing the pair of edges; a distance change rate extraction unit 60 configured to calculate a distance from the object to the fish-eye camera based on the position of the region of the lower end of the object in
    Type: Grant
    Filed: October 6, 2017
    Date of Patent: August 16, 2022
    Assignee: NEC CORPORATION
    Inventor: Katsuhiko Takahashi
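The horizontal panorama conversion above maps azimuth equiangularly across the panorama's columns and the vertical direction onto its rows. A coordinate-mapping sketch is given below, assuming an equidistant fish-eye model with the optical axis perpendicular to the floor; the fish-eye model and all parameters are illustrative assumptions, not the patent's stated method.

```python
# Sketch: map a horizontal-panorama pixel back to fish-eye coordinates.
# Columns sample azimuth equiangularly; rows sample the polar angle.
# Assumes an equidistant fish-eye (image radius r = f * theta).
import math

def panorama_to_fisheye(col, row, pano_w, pano_h, cx, cy, f):
    """Return the fish-eye pixel (x, y) for panorama pixel (col, row).

    col -> azimuth in [0, 2*pi); row -> polar angle theta in (0, pi/2].
    (cx, cy) is the fish-eye image center, f a focal-length-like scale.
    """
    azimuth = 2.0 * math.pi * col / pano_w
    theta = (math.pi / 2.0) * (row + 1) / pano_h
    r = f * theta
    x = cx + r * math.cos(azimuth)
    y = cy + r * math.sin(azimuth)
    return x, y
```

Under this mapping, vertical structures in the real space become vertical edge pairs in the panorama, which is what the edge pair extraction unit exploits.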
  • Publication number: 20220254136
    Abstract: An image acquisition unit 110 acquires a plurality of images. The plurality of images include an object to be inferred. An image cut-out unit 120 cuts out an object region including the object from each of the plurality of images acquired by the image acquisition unit 110. An importance generation unit 130 generates importance information by processing the object region cut out by the image cut-out unit 120. The importance information indicates the importance of the object region when an object inference model is generated, and is generated for each object region, that is, for each image acquired by the image acquisition unit 110. A learning data generation unit 140 stores a plurality of object regions cut out by the image cut-out unit 120 and a plurality of pieces of importance information generated by the importance generation unit 130 in a learning data storage unit 150 as at least a part of the learning data.
    Type: Application
    Filed: January 28, 2022
    Publication date: August 11, 2022
    Applicant: NEC Corporation
    Inventors: Tomokazu KANEKO, Katsuhiko TAKAHASHI, Makoto TERAO, Soma SHIRAISHI, Takami SATO, Yu NABETO, Ryosuke SAKAI
  • Patent number: 11361452
    Abstract: An information processing apparatus (2000) includes a first analyzing unit (2020), a second analyzing unit (2040), and an estimating unit (2060). The first analyzing unit (2020) calculates a flow of a crowd in a capturing range of a fixed camera (10) using a first surveillance image (12). The second analyzing unit (2040) calculates a distribution of an attribute of objects in a capturing range of a moving camera (20) using a second surveillance image (22). The estimating unit (2060) estimates an attribute distribution for a range that is not included in the capturing range of the moving camera (20).
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: June 14, 2022
    Assignee: NEC CORPORATION
    Inventors: Ryoma Oami, Katsuhiko Takahashi, Yusuke Konishi, Hiroshi Yamada, Hiroo Ikeda, Junko Nakagawa, Kosuke Yoshimi, Ryo Kawai, Takuya Ogawa
  • Publication number: 20220172133
    Abstract: Based on operation trajectory data, an operation analysis device identifies all open points, indicating positions at which a crusher is opened during an operation period, and all close points, indicating positions at which the crusher is closed during that period. It calculates, as the shortest distance, the distance between each open point and the close point nearest to that open point, and identifies as a sorting-destination open point any open point whose shortest distance exceeds a first threshold value. From among the operation trajectory data, the device identifies the data from when the crusher grasping the dismantling part moves to the sorting destination until it returns to the dismantling target as movement data of the crusher during the dismantling operation, and identifies the data remaining after the movement data has been removed from the operation trajectory data as grasping operation data.
    Type: Application
    Filed: February 6, 2020
    Publication date: June 2, 2022
    Applicants: KOBELCO CONSTRUCTION MACHINERY CO., LTD., HIROSHIMA UNIVERSITY
    Inventors: Katsuhiko TAKAHASHI, Katsumi MORIKAWA, Shoya NAKAMURA
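The open-point classification step in this abstract can be sketched as a nearest-neighbor distance check per open point. Representing positions as 2-D points and the threshold value below are illustrative simplifications.

```python
# Sketch: an open point whose distance to the nearest close point
# exceeds the first threshold is treated as a sorting-destination
# open point.
import math

def nearest_distance(point, others):
    """Distance from `point` to the closest point in `others`."""
    return min(math.dist(point, o) for o in others)

def sorting_destination_opens(open_points, close_points, threshold):
    """Open points whose shortest distance exceeds the threshold."""
    return [p for p in open_points
            if nearest_distance(p, close_points) > threshold]

opens = [(0.0, 0.0), (10.0, 0.0)]
closes = [(0.5, 0.0), (1.5, 0.0)]
# (10, 0) is far from every close point -> sorting-destination open point
result = sorting_destination_opens(opens, closes, threshold=2.0)
```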
  • Patent number: 11341762
    Abstract: The purpose of the present invention is to detect an object in images accurately by means of image recognition without using a special device for removing the influence of the parallax between a plurality of images. An image transformation unit (401) transforms a plurality of images acquired by an image acquisition unit (407). A reliability level calculation unit (402) calculates a level of reliability representing how small the misalignment between images is. A score calculation unit (405) calculates a total score taking into account both an object detection score based on a feature quantity calculated by a feature extraction unit (404), and the level of reliability calculated by the reliability level calculation unit (402). An object detection unit (406) detects an object in the images on the basis of the total score.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: May 24, 2022
    Assignee: NEC CORPORATION
    Inventors: Takashi Shibata, Azusa Sawada, Katsuhiko Takahashi
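The total-score computation described in this abstract can be sketched as follows. The multiplicative combination of the two terms and the detection threshold are illustrative assumptions; the patent only states that both the detection score and the reliability level are taken into account.

```python
# Sketch: combine an object-detection score with a reliability level
# that represents how small the inter-image misalignment is, then
# keep candidates whose total score clears a threshold.

def total_score(detection_score, reliability):
    """Combine detection score and alignment reliability (both in [0, 1])."""
    return detection_score * reliability

def detect(candidates, threshold=0.5):
    """Keep candidates whose total score clears the threshold."""
    return [c for c in candidates
            if total_score(c["score"], c["reliability"]) >= threshold]

candidates = [
    {"score": 0.9, "reliability": 0.9},   # well aligned, strong -> kept
    {"score": 0.9, "reliability": 0.3},   # badly aligned -> rejected
]
detections = detect(candidates)
```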
  • Publication number: 20220128988
    Abstract: A data-series group includes data series which is a series of data obtained by observing the same object at discrete times. Time labels are time information added to respective data included in the data-series group. State labels are added to some of the data included in the data-series group. A loss-function control unit determines a loss function to be used for learning based on the time labels and the state labels. A threshold is used to adjust a branch condition of the loss-function control unit. A regressor is a model, and is used to detect an abnormality or predict a remaining life span. A dictionary stores parameters of the regressor. A regressor training unit trains the regressor based on the loss function determined by the loss-function control unit.
    Type: Application
    Filed: February 19, 2019
    Publication date: April 28, 2022
    Applicant: NEC Corporation
    Inventors: Azusa SAWADA, Takashi SHIBATA, Katsuhiko TAKAHASHI
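The loss-function control step in this abstract can be sketched as a branch on whether a state label is present, with the threshold adjusting the branch condition for unlabeled data. The two concrete losses and the branch rule below are illustrative assumptions.

```python
# Sketch: select the loss used to train the regressor from the
# available labels; the threshold adjusts the branch condition.

def squared_loss(pred, target):
    return (pred - target) ** 2

def absolute_loss(pred, target):
    return abs(pred - target)

def select_loss(state_label, time_label, threshold):
    """State-labeled samples get the squared loss; unlabeled samples
    contribute via the weaker absolute loss only when their time label
    exceeds the threshold, and are otherwise excluded from training."""
    if state_label is not None:
        return squared_loss
    if time_label > threshold:
        return absolute_loss
    return None  # sample excluded from training

loss_fn = select_loss(state_label=None, time_label=7.0, threshold=5.0)
```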
  • Patent number: 11277591
    Abstract: A surveillance system (1) includes an area information acquisition unit (101), a request information provision unit (102), and a participation consent reception unit (103). The area information acquisition unit (101) acquires information of a surveillance-desired area. The request information provision unit (102) provides participation request information for surveillance that is conducted by using an image capturing unit of a portable terminal in the surveillance-desired area. The participation consent reception unit (103) receives participation consent as a response to the participation request information, from the portable terminal.
    Type: Grant
    Filed: August 16, 2016
    Date of Patent: March 15, 2022
    Assignee: NEC CORPORATION
    Inventors: Ryoma Oami, Katsuhiko Takahashi, Yuusuke Konishi, Hiroo Ikeda, Junko Nakagawa, Kosuke Yoshimi, Yukie Ebiyama, Ryo Kawai, Takuya Ogawa
  • Publication number: 20220058395
    Abstract: An information processing apparatus (10) includes an event detection unit (110), an input reception unit (120), and a processing execution unit (130). The event detection unit (110) detects a specific event from video data. The input reception unit (120) receives, from a user, input for specifying processing to be executed. The processing execution unit (130) executes first processing specified by input received by the input reception unit (120), and executes second processing of generating learning data used for machine learning and storing the generated learning data in a learning data storage unit (40).
    Type: Application
    Filed: December 26, 2018
    Publication date: February 24, 2022
    Applicant: NEC Corporation
    Inventor: Katsuhiko TAKAHASHI
  • Publication number: 20220006979
    Abstract: A surveillance system (1) includes an area information acquisition unit (101), a position information acquisition unit (102), a candidate determination unit (103), and a notification unit (104). The area information acquisition unit (101) acquires information of a surveillance-desired area. The position information acquisition unit (102) acquires pieces of position information of a plurality of portable terminals (20), each portable terminal performing surveillance using an image capturing unit. The candidate determination unit (103) determines a candidate portable terminal (20) to be moved to the surveillance-desired area from among the plurality of portable terminals (20) based on the acquired pieces of position information of the plurality of portable terminals (20). The notification unit (104) outputs a notification to the candidate portable terminal requesting to move to the surveillance-desired area.
    Type: Application
    Filed: September 21, 2021
    Publication date: January 6, 2022
    Applicant: NEC CORPORATION
    Inventors: Ryoma OAMI, Katsuhiko TAKAHASHI, Yuusuke KONISHI, Hiroo IKEDA, Junko NAKAGAWA, Kosuke YOSHIMI, Yukie EBIYAMA, Ryo KAWAI, Takuya OGAWA
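The candidate-determination step in this abstract can be sketched as a nearest-terminal selection. Treating the surveillance-desired area as a single point and using Euclidean distance are illustrative simplifications.

```python
# Sketch: among the portable terminals, determine as the candidate the
# one whose reported position is closest to the surveillance-desired area.
import math

def determine_candidate(terminal_positions, desired_area):
    """Return the id of the terminal nearest to the desired area."""
    return min(terminal_positions,
               key=lambda tid: math.dist(terminal_positions[tid], desired_area))

positions = {"terminal-a": (0.0, 0.0),
             "terminal-b": (3.0, 4.0),
             "terminal-c": (1.0, 1.0)}
candidate = determine_candidate(positions, desired_area=(1.5, 1.5))
```

The notification unit would then request `candidate` to move to the surveillance-desired area.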
  • Patent number: 11205275
    Abstract: An information processing apparatus (2000) includes a first analyzing unit (2020), a second analyzing unit (2040), and an estimating unit (2060). The first analyzing unit (2020) calculates a flow of a crowd in a capturing range of a fixed camera (10) using a first surveillance image (12). The second analyzing unit (2040) calculates a distribution of an attribute of objects in a capturing range of a moving camera (20) using a second surveillance image (22). The estimating unit (2060) estimates an attribute distribution for a range that is not included in the capturing range of the moving camera (20).
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: December 21, 2021
    Assignee: NEC CORPORATION
    Inventors: Ryoma Oami, Katsuhiko Takahashi, Yusuke Konishi, Hiroshi Yamada, Hiroo Ikeda, Junko Nakagawa, Kosuke Yoshimi, Ryo Kawai, Takuya Ogawa
  • Patent number: 11176690
    Abstract: An information processing apparatus (2000) includes a first analyzing unit (2020), a second analyzing unit (2040), and an estimating unit (2060). The first analyzing unit (2020) calculates a flow of a crowd in a capturing range of a fixed camera (10) using a first surveillance image (12). The second analyzing unit (2040) calculates a distribution of an attribute of objects in a capturing range of a moving camera (20) using a second surveillance image (22). The estimating unit (2060) estimates an attribute distribution for a range that is not included in the capturing range of the moving camera (20).
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: November 16, 2021
    Assignee: NEC CORPORATION
    Inventors: Ryoma Oami, Katsuhiko Takahashi, Yusuke Konishi, Hiroshi Yamada, Hiroo Ikeda, Junko Nakagawa, Kosuke Yoshimi, Ryo Kawai, Takuya Ogawa
  • Patent number: 11158068
    Abstract: An information processing apparatus (2000) includes a first analyzing unit (2020), a second analyzing unit (2040), and an estimating unit (2060). The first analyzing unit (2020) calculates a flow of a crowd in a capturing range of a fixed camera (10) using a first surveillance image (12). The second analyzing unit (2040) calculates a distribution of an attribute of objects in a capturing range of a moving camera (20) using a second surveillance image (22). The estimating unit (2060) estimates an attribute distribution for a range that is not included in the capturing range of the moving camera (20).
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: October 26, 2021
    Assignee: NEC CORPORATION
    Inventors: Ryoma Oami, Katsuhiko Takahashi, Yusuke Konishi, Hiroshi Yamada, Hiroo Ikeda, Junko Nakagawa, Kosuke Yoshimi, Ryo Kawai, Takuya Ogawa
  • Patent number: 11151714
    Abstract: An anomaly detection apparatus 100 includes an image transformation unit 103 that calculates an image transformation parameter, based on an inspection image in which an inspection object appears, a reference image indicating a normal state of the inspection object and a parameter for image transformation parameter calculation, and performs image transformation on the inspection image using the image transformation parameter, an image change detection unit 104 that collates the reference image and the image-transformed inspection image using a change detection parameter, and calculates an anomaly certainty factor indicating whether there is a change in a specific region of the inspection image, a change detection parameter learning unit 106 that learns the change detection parameter, based on a difference between a training image indicating a correct answer value of the change and the anomaly certainty factor, and an image transformation parameter learning unit 108 that learns the parameter for image transformation parameter calculation.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: October 19, 2021
    Assignee: NEC CORPORATION
    Inventors: Katsuhiko Takahashi, Yuichi Nakatani
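The change-detection step in this abstract compares the aligned inspection image with the reference image region by region. The mean-absolute-difference measure below is an illustrative stand-in for the learned change detection parameter.

```python
# Sketch: after the inspection image has been aligned to the reference
# image, compute an anomaly certainty factor for a specific region
# from per-pixel differences.

def anomaly_certainty(reference_region, inspection_region):
    """Mean absolute pixel difference over a specific region."""
    diffs = [abs(r - i) for r, i in zip(reference_region, inspection_region)]
    return sum(diffs) / len(diffs)

ref = [0.2, 0.2, 0.2, 0.2]
ok = [0.2, 0.25, 0.2, 0.15]   # small differences -> low certainty
bad = [0.9, 0.8, 0.9, 0.9]    # large differences -> high certainty
certainty_ok = anomaly_certainty(ref, ok)
certainty_bad = anomaly_certainty(ref, bad)
```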
  • Patent number: 11134226
    Abstract: A surveillance system (1) includes an area information acquisition unit (101), a position information acquisition unit (102), a candidate determination unit (103), and a notification unit (104). The area information acquisition unit (101) acquires information of a surveillance-desired area. The position information acquisition unit (102) acquires pieces of position information of a plurality of portable terminals (20), each portable terminal performing surveillance using an image capturing unit. The candidate determination unit (103) determines a candidate portable terminal (20) to be moved to the surveillance-desired area from among the plurality of portable terminals (20) based on the acquired pieces of position information of the plurality of portable terminals (20). The notification unit (104) outputs a notification to the candidate portable terminal requesting to move to the surveillance-desired area.
    Type: Grant
    Filed: March 13, 2019
    Date of Patent: September 28, 2021
    Assignee: NEC CORPORATION
    Inventors: Ryoma Oami, Katsuhiko Takahashi, Yuusuke Konishi, Hiroo Ikeda, Junko Nakagawa, Kosuke Yoshimi, Yukie Ebiyama, Ryo Kawai, Takuya Ogawa
  • Publication number: 20210232835
    Abstract: The present invention is directed to a helmet wearing determination system including an imaging means that is installed in a predetermined position and images a two-wheel vehicle that travels on a road; and a helmet wearing determination means that processes an image captured by the imaging means, estimates a rider head region corresponding to the head of a person riding the two-wheel vehicle that travels on the road, compares image characteristics of the rider head region with image characteristics of the head when a helmet is worn and/or when a helmet is not worn, and determines whether or not the rider wears the helmet.
    Type: Application
    Filed: April 9, 2021
    Publication date: July 29, 2021
    Applicant: NEC Corporation
    Inventor: Katsuhiko TAKAHASHI
  • Publication number: 20210201007
    Abstract: The purpose of the present invention is to detect an object in images accurately by means of image recognition without using a special device for removing the influence of the parallax between a plurality of images. An image transformation unit (401) transforms a plurality of images acquired by an image acquisition unit (407). A reliability level calculation unit (402) calculates a level of reliability representing how small the misalignment between images is. A score calculation unit (405) calculates a total score taking into account both an object detection score based on a feature quantity calculated by a feature extraction unit (404), and the level of reliability calculated by the reliability level calculation unit (402). An object detection unit (406) detects an object in the images on the basis of the total score.
    Type: Application
    Filed: May 29, 2018
    Publication date: July 1, 2021
    Applicant: NEC Corporation
    Inventors: Takashi SHIBATA, Azusa SAWADA, Katsuhiko TAKAHASHI
  • Publication number: 20210174231
    Abstract: Disclosed is a model generation device capable of mitigating the risk of overlooking a phenomenon of interest in machine learning. The model generation device determines whether or not a label of a first data is similar to a label of a second data. The model generation device assigns the label of the second data to the first data when determining that the label of the first data is similar to the label of the second data based on a degree of similarity between observation information representing a state where the first data is observed and observation information representing a state where the second data is observed. The model generation device calculates a model representing a relevance between data information containing the first data and the second data and label information containing the assigned label and the label of the second data.
    Type: Application
    Filed: August 2, 2018
    Publication date: June 10, 2021
    Applicant: NEC Corporation
    Inventors: Azusa SAWADA, Takashi SHIBATA, Katsuhiko TAKAHASHI
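The label-assignment step in this abstract can be sketched as similarity-gated label propagation. Representing observation information as feature vectors, using cosine similarity, and the threshold value are all illustrative assumptions.

```python
# Sketch: propagate the second data's label to the first data when the
# similarity between their observation conditions clears a threshold.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two observation feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def assign_label(first_obs, second_obs, second_label, threshold=0.9):
    """Return second_label for the first data if the observation
    conditions are similar enough; otherwise leave it unlabeled."""
    if cosine_similarity(first_obs, second_obs) >= threshold:
        return second_label
    return None

label = assign_label([1.0, 0.0], [0.9, 0.1], "anomaly")  # similar -> "anomaly"
```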