Patents by Inventor Tetsuo Inoshita
Tetsuo Inoshita has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240135696
Abstract: The model training apparatus trains an image conversion model to generate, from an input image representing a scene in a first environment, an output image representing the scene in a second environment. The model training apparatus inputs a training image to the image conversion model to obtain a first feature map and an output image, inputs the output image to the image conversion model to obtain a second feature map, computes a patch-wise loss using the features corresponding to a positive example patch and a negative example patch extracted from the training image and a positive example patch extracted from the output image, and trains the image conversion model based on the patch-wise loss, with patches extracted intensively from the region representing an object of a specific type.
Type: Application
Filed: September 9, 2021
Publication date: April 25, 2024
Applicant: NEC Corporation
Inventors: Tetsuo Inoshita, Yuichi Nakatani
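The patch-wise loss this abstract describes is, in spirit, a patch-level contrastive (InfoNCE-style) objective: a patch from the output image should match the patch at the same location in the training image and differ from patches elsewhere. A minimal plain-Python sketch under that reading; the feature vectors, temperature value, and single negative are illustrative assumptions, not the claimed implementation:

```python
import math

def patchwise_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style patch loss: the anchor patch feature (from the output
    image) should be similar to the positive patch feature (same location in
    the training image) and dissimilar to negative patch features (other
    locations in the training image)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    pos_sim = math.exp(dot(anchor, positive) / temperature)
    neg_sims = sum(math.exp(dot(anchor, n) / temperature) for n in negatives)
    # Loss is small when the anchor aligns with the positive patch only.
    return -math.log(pos_sim / (pos_sim + neg_sims))
```

Restricting the sampled patches to regions of a specific object type, as the abstract notes, would amount to drawing `anchor`, `positive`, and `negatives` only from those regions.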
-
Patent number: 11954901
Abstract: A selecting unit selects first to third moving image data from a plurality of frame images composing moving image data. A first generating unit generates first training data that is labeled data relating to a specific recognition target from the first moving image data. A first learning unit learns a first model recognizing the specific recognition target by using the first training data. A second generating unit generates second training data from the second moving image data by using the first model. A second learning unit learns a second model by using the second training data. A third generating unit generates third training data from the third moving image data by using the second model.
Type: Grant
Filed: April 25, 2019
Date of Patent: April 9, 2024
Assignee: NEC CORPORATION
Inventor: Tetsuo Inoshita
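The staged pipeline above reads as iterative self-training: a model learned on hand-labeled frames pseudo-labels the next batch, and the retrained model labels the batch after that. A toy sketch of that loop; the confidence-thresholded labeler and the nearest-mean stand-in "model" over 1-D samples are assumptions for illustration, not the patented method:

```python
def pseudo_label(model, unlabeled, threshold=0.5):
    """Keep only samples the current model labels with enough confidence."""
    labeled = []
    for x in unlabeled:
        label, confidence = model(x)
        if confidence >= threshold:
            labeled.append((x, label))
    return labeled

def train(examples):
    """Stand-in trainer: a nearest-mean classifier over 1-D samples."""
    groups = {}
    for x, y in examples:
        groups.setdefault(y, []).append(x)
    centers = {y: sum(v) / len(v) for y, v in groups.items()}
    def model(x):
        label = min(centers, key=lambda y: abs(x - centers[y]))
        # Confidence decays with distance from the chosen class center.
        confidence = 1.0 / (1.0 + abs(x - centers[label]))
        return label, confidence
    return model

# Stage 1: first model learned from hand-labeled frames.
hand_labeled = [(0.0, "bg"), (10.0, "target")]
model1 = train(hand_labeled)
# Stage 2: model1 pseudo-labels the second batch; retrain on the union.
stage2 = pseudo_label(model1, [0.2, 9.9])
model2 = train(hand_labeled + stage2)
# Stage 3 would repeat the same step with model2 on the third batch.
```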
-
Patent number: 11900659
Abstract: A selecting unit selects first moving image data and second moving image data from a plurality of frame images composing moving image data. A first generating unit generates first training data that is labeled data relating to a specific recognition target from the frame images composing the first moving image data. A learning unit learns a first model recognizing the specific recognition target by using the first training data. A second generating unit generates second training data that is labeled data relating to the specific recognition target from the frame images composing the second moving image data by using the first model.
Type: Grant
Filed: April 25, 2019
Date of Patent: February 13, 2024
Assignee: NEC CORPORATION
Inventor: Tetsuo Inoshita
-
Patent number: 11887329
Abstract: Provided are a moving body guidance apparatus, a moving body guidance method, and a computer-readable recording medium for accurately guiding a moving body to a target site. The moving body guidance apparatus has a detection unit 2 that detects, from an image 40 captured by an image capturing apparatus 23 mounted on a moving body 20, a target member image 42 captured of an entirety of a target member 30 or a feature member image 43 captured of an entirety or a portion of feature members 31 and 32 forming the target member 30, and a control unit 3 that performs guidance control for moving a set position 41, set with respect to the image 40 and indicating a position of the moving body 20, closer to the target member image 42 or to a designated region 44 set based on the feature member image 43.
Type: Grant
Filed: March 13, 2018
Date of Patent: January 30, 2024
Assignee: NEC CORPORATION
Inventor: Tetsuo Inoshita
-
Publication number: 20240013566
Abstract: To accurately determine, by image analysis, the work content at the time an image is captured, the present invention provides an image processing apparatus 10 including an acquisition unit 11 that acquires an image, a detection unit 12 that detects a hand of a person and a work target object from the image, and a determination unit 13 that determines the work content at the time the image is captured, based on the relative positional relation within the image between the detected hand of the person and the detected work target object.
Type: Application
Filed: June 28, 2023
Publication date: January 11, 2024
Applicant: NEC Corporation
Inventor: Tetsuo INOSHITA
-
Patent number: 11866197
Abstract: Provided is a target member for accurately guiding a moving body to a target site. A target member 10 is used when performing control for guiding a moving body 20 to a target site, and the target member 10 is formed by using two or more different feature members 11, 12 that are set such that the shape of a target member image 44 corresponding to the target member 10, as captured by an image capturing unit 21 (an image capturing apparatus) mounted in the moving body 20, changes according to a measurement distance indicating the distance between the target member 10 and the moving body 20.
Type: Grant
Filed: March 13, 2018
Date of Patent: January 9, 2024
Assignee: NEC CORPORATION
Inventor: Tetsuo Inoshita
-
Publication number: 20230388449
Abstract: To more reliably land a flying body at a desired point, a flying body includes a determiner that determines whether the flying body is taking off and ascending from a takeoff point or descending to land, a camera mounted in the flying body, a recorder that records a lower image captured by the camera if it is determined that the flying body is taking off and ascending, and a guider that, if it is determined that the flying body is descending to land, guides the flying body to the takeoff point while descending, using a lower image recorded in the recorder during takeoff/ascent and a lower image captured during the descent.
Type: Application
Filed: August 15, 2023
Publication date: November 30, 2023
Applicant: NEC Corporation
Inventor: Tetsuo INOSHITA
-
Publication number: 20230343085
Abstract: Provided is an object detection device for efficiently and simply selecting an image for creating instructor data on the basis of the number of detected objects. The object detection device is provided with: a detection unit for detecting an object from each of a plurality of input images using a dictionary; an acceptance unit for displaying, on a display device, a graph indicating the relationship between the input images and the number of subregions in which the objects are detected, and displaying, on the display device, in order to create instructor data, one input image among the plurality of input images in accordance with a position on the graph accepted by operation of an input device; a generation unit for generating the instructor data from the input image; and a learning unit for learning a dictionary from the instructor data.
Type: Application
Filed: June 28, 2023
Publication date: October 26, 2023
Applicant: NEC Corporation
Inventor: Tetsuo INOSHITA
-
Publication number: 20230334837
Abstract: In an object detection device, a plurality of object detection units output a score indicating a probability that a predetermined object exists for each partial region set with respect to inputted image data. The weight computation unit uses weight computation parameters to compute a weight for each of the plurality of object detection units on the basis of the image data and the outputs of the plurality of object detection units, the weight being used when the scores outputted by the plurality of object detection units are merged. The merging unit merges the scores outputted by the plurality of object detection units for each partial region according to the weights computed by the weight computation unit. The first loss computation unit computes the difference between a ground truth label of the image data and the score merged by the merging unit as a first loss. Then, the first parameter correction unit corrects the weight computation parameters so as to reduce the first loss.
Type: Application
Filed: September 24, 2020
Publication date: October 19, 2023
Applicant: NEC Corporation
Inventors: Katsuhiko TAKAHASHI, Yuichi NAKATANI, Tetsuo INOSHITA, Asuka ISHII
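The merging step described here, a per-region weighted combination of detector scores, can be sketched in plain Python. The dictionary layout and the weight normalization are assumptions for illustration, not the patented method:

```python
def merge_scores(scores_per_detector, weights):
    """Merge per-region scores from several detectors as a weighted average.
    scores_per_detector: list of {region: score} dicts, one per detector.
    weights: one weight per detector (normalized here so they sum to 1)."""
    total = sum(weights)
    norm = [w / total for w in weights]
    merged = {}
    for region in scores_per_detector[0]:
        merged[region] = sum(w * s[region]
                             for w, s in zip(norm, scores_per_detector))
    return merged
```

In the abstract's training loop, the first loss would compare each `merged[region]` against the ground-truth label, and the weight computation parameters would be corrected to reduce that loss.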
-
Patent number: 11765315
Abstract: To more reliably land a flying body at a desired point, a flying body includes a determiner that determines whether the flying body is taking off and ascending from a takeoff point or descending to land, a camera mounted in the flying body, a recorder that records a lower image captured by the camera if it is determined that the flying body is taking off and ascending, and a guider that, if it is determined that the flying body is descending to land, guides the flying body to the takeoff point while descending, using a lower image recorded in the recorder during takeoff/ascent and a lower image captured during the descent.
Type: Grant
Filed: August 8, 2017
Date of Patent: September 19, 2023
Assignee: NEC CORPORATION
Inventor: Tetsuo Inoshita
-
Publication number: 20230289998
Abstract: In an object recognition device, an estimation means estimates a possession of a person based on the pose of the person in an image. An object detection means detects an object from the surroundings of the person in the image. A weighting means sets weights with respect to the estimation result of the possession and the detection result of the object, based on the image. A combination unit combines the estimation result of the possession and the detection result of the object by using the set weights, and recognizes the possession of the person.
Type: Application
Filed: August 14, 2020
Publication date: September 14, 2023
Applicant: NEC Corporation
Inventor: Tetsuo Inoshita
-
Patent number: 11734920
Abstract: Provided is an object detection device for efficiently and simply selecting an image for creating instructor data on the basis of the number of detected objects. The object detection device is provided with: a detection unit for detecting an object from each of a plurality of input images using a dictionary; an acceptance unit for displaying, on a display device, a graph indicating the relationship between the input images and the number of subregions in which the objects are detected, and displaying, on the display device, in order to create instructor data, one input image among the plurality of input images in accordance with a position on the graph accepted by operation of an input device; a generation unit for generating the instructor data from the input image; and a learning unit for learning a dictionary from the instructor data.
Type: Grant
Filed: November 30, 2020
Date of Patent: August 22, 2023
Assignee: NEC CORPORATION
Inventor: Tetsuo Inoshita
-
Patent number: 11604478
Abstract: To control a flying body in accordance with the flight altitude at a past point of time of image capturing, concerning the partial area images that constitute a wide area image, an information processing apparatus includes a wide area image generator that extracts a plurality of video frame images from a flying body video obtained when a flying body captures a ground area spreading below while moving, and combines the video frame images, thereby generating a captured image of a wide area; an image capturing altitude acquirer that acquires, for each of the plurality of video frame images, the flight altitude at the point of time of image capturing by the flying body; and an image capturing altitude output unit that outputs the difference of the flight altitude for each video frame image.
Type: Grant
Filed: March 31, 2017
Date of Patent: March 14, 2023
Assignee: NEC CORPORATION
Inventor: Tetsuo Inoshita
-
Patent number: 11586850
Abstract: The first parameter generation unit 811 generates a first parameter, which is a parameter of a first recognizer, using first learning data including a combination of data to be recognized, a correct label of the data, and domain information indicating a collection environment of the data. The second parameter generation unit 812 generates a second parameter, which is a parameter of a second recognizer, using second learning data including a combination of data to be recognized that is collected in a predetermined collection environment, a correct label of the data, and target domain information indicating the predetermined collection environment, based on the first parameter. The third parameter generation unit 813 integrates the first parameter and the second parameter to generate a third parameter to be used for pattern recognition of input data by learning using the first learning data.
Type: Grant
Filed: May 10, 2018
Date of Patent: February 21, 2023
Assignee: NEC CORPORATION
Inventors: Katsuhiko Takahashi, Hiroyoshi Miyano, Tetsuo Inoshita
-
Patent number: 11537814
Abstract: Identification means 71 identifies an object indicated by data by applying the data to a model learned by machine learning. Determination means 72 determines whether or not the data is transmission target data to be transmitted to a predetermined computer, based on a result obtained by applying the data to the model. Data transmission means 73 transmits the data determined to be the transmission target data to the predetermined computer at a predetermined timing.
Type: Grant
Filed: May 7, 2018
Date of Patent: December 27, 2022
Assignee: NEC CORPORATION
Inventor: Tetsuo Inoshita
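The determination step here can be read as a filter over model outputs: only data whose identification result satisfies some criterion is queued for transmission. A toy sketch, assuming (as one possible criterion, not stated in the abstract) that low-confidence identifications are the transmission targets a central computer should re-examine:

```python
def select_for_transmission(model, samples, threshold=0.6):
    """Apply the model to each sample and keep only those whose result
    marks them as transmission targets (here: identifications whose
    confidence falls below a threshold -- an assumed criterion)."""
    queue = []
    for sample in samples:
        label, confidence = model(sample)
        if confidence < threshold:
            queue.append((sample, label))
    return queue
```

The "predetermined timing" in the abstract would then govern when this queue is flushed to the remote computer, e.g. periodically or when bandwidth is available.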
-
Publication number: 20220398833
Abstract: The information processing device performs distillation learning of a student model using unknown data which the teacher model has not learned. The label distribution determination unit outputs an arbitrary label for the unknown data. The data generation unit outputs newly generated data using the arbitrary label and the unknown data as inputs. The distillation learning unit performs distillation learning of the student model using the teacher model, with the generated data as input.
Type: Application
Filed: November 13, 2019
Publication date: December 15, 2022
Applicant: NEC Corporation
Inventors: Gaku NAKANO, Yuichi NAKATANI, Tetsuo INOSHITA, Katsuhiko TAKAHASHI, Asuka ISHII
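Distillation learning generally minimizes the mismatch between the student's predicted distribution and the teacher's soft labels on the same input. A minimal sketch of that quantity as a soft-label cross-entropy; the epsilon guard and probability-vector inputs are illustrative assumptions, not this application's specific formulation:

```python
import math

def distillation_loss(teacher_probs, student_probs, eps=1e-12):
    """Cross-entropy of the student's class distribution against the
    teacher's soft labels -- the quantity distillation minimizes.
    Both inputs are probability vectors over the same classes."""
    return -sum(t * math.log(s + eps)
                for t, s in zip(teacher_probs, student_probs))
```

In this application's setting, the inputs would be the generated data produced from the arbitrary labels and the unknown data, rather than the teacher's original training set.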
-
Publication number: 20220301293
Abstract: A plurality of recognition units respectively recognize image data using a learned model and output degrees of reliability corresponding to the classes regarded as recognition targets by the respective recognition units. A reliability generation unit generates degrees of reliability corresponding to a plurality of target classes based on the degrees of reliability output from the plurality of recognition units. A target model recognition unit recognizes the same image data as that recognized by the recognition units, by using a target model, and adjusts parameters of the target model in order to match the degrees of reliability corresponding to the target classes generated by the reliability generation unit with the degrees of reliability corresponding to the target classes output from the target model recognition unit.
Type: Application
Filed: September 5, 2019
Publication date: September 22, 2022
Applicant: NEC CORPORATION
Inventor: Tetsuo INOSHITA
-
Publication number: 20220292397
Abstract: The server device receives model information from a plurality of terminal devices and generates an integrated model by integrating the model information received from the plurality of terminal devices. The server device generates an updated model by using the integrated model to train the model defined by the model information received from the update-target terminal device. The server device then transmits the model information of the updated model to the terminal device, and the terminal device thereafter executes recognition processing using the updated model.
Type: Application
Filed: August 21, 2019
Publication date: September 15, 2022
Applicant: NEC Corporation
Inventors: Katsuhiko TAKAHASHI, Tetsuo INOSHITA, Asuka ISHII, Gaku NAKANO
-
Publication number: 20220277473
Abstract: To improve pose estimation accuracy, a pose estimation apparatus according to the present invention extracts a person area from an image and generates person area image information based on the image of the extracted person area. The pose estimation apparatus further extracts joint points of the person from the image and generates joint point information based on the extracted joint points. The pose estimation apparatus then generates feature value information based on both the person area image information and the joint point information, and estimates the pose of a person included in the image using an estimation model in which the feature value information is an input and a pose estimation result is an output.
Type: Application
Filed: February 16, 2022
Publication date: September 1, 2022
Applicant: NEC Corporation
Inventors: Tetsuo INOSHITA, Yuichi NAKATANI
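The fusion step above, combining person-area appearance features with joint-point features into one feature value, can be sketched as simple concatenation feeding a stand-in linear scorer. All names, weights, and the linear model are illustrative assumptions, not the claimed estimation model:

```python
def fuse_features(person_area_features, joint_point_features):
    """Concatenate appearance features from the cropped person area with
    joint-point (keypoint) features into one input vector."""
    return list(person_area_features) + list(joint_point_features)

def estimate_pose(feature_vector, weights_per_pose):
    """Pick the pose whose weight vector best matches the fused features
    (a linear stand-in for the learned estimation model)."""
    def score(w):
        return sum(a * b for a, b in zip(w, feature_vector))
    return max(weights_per_pose, key=lambda pose: score(weights_per_pose[pose]))
```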
-
Publication number: 20220277553
Abstract: In an object detection device, a plurality of object detection units output a score indicating the probability that a predetermined object exists, for each partial region set in the inputted image data. The weight computation unit computes weights for merging the scores outputted by the plurality of object detection units, using weight computation parameters, based on the image data. The merging unit merges the scores outputted by the plurality of object detection units, for each partial region, with the weights computed by the weight computation unit. The target model object detection unit is configured to output a score indicating the probability that the predetermined object exists, for each partial region set in the image data. The first loss computation unit computes a first loss indicating the difference of the score of the target model object detection unit from the ground truth label of the image data and from the score merged by the merging unit.
Type: Application
Filed: July 11, 2019
Publication date: September 1, 2022
Applicant: NEC Corporation
Inventors: Katsuhiko TAKAHASHI, Yuichi NAKATANI, Asuka ISHII, Tetsuo INOSHITA, Gaku NAKANO