Patents by Inventor Naoki Hosomi

Naoki Hosomi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240386711
    Abstract: An information processing apparatus in embodiments executes at least one trained machine learning model that includes an encoder and a decoder. The encoder receives, as inputs, text information including a designation of a place, a first image that is captured by an image capturing apparatus and that includes the place, and a second image obtained by dividing the first image into regions for each identical object, and outputs tri-modal features generated to include visual features of the captured first image, visual features of the region-divided second image, and language features of the text information. The decoder uses the tri-modal features to output a region on the first image corresponding to the designation of the place in the text information.
    Type: Application
    Filed: May 17, 2023
    Publication date: November 21, 2024
    Inventors: Naoki HOSOMI, Teruhisa MISU, Shumpei HATANAKA, Wei YANG, Komei SUGIURA
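    The sketch below is a rough illustration of the tri-modal idea in the abstract above, written in PyTorch. The class names, feature dimensions, and the transformer-based fusion are assumptions made for illustration; the patent does not specify this implementation.
      # Hypothetical tri-modal fusion: one token per modality (captured image,
      # region-divided image, instruction text), fused by a small transformer.
      import torch
      import torch.nn as nn

      class TriModalEncoder(nn.Module):
          def __init__(self, visual_dim=2048, text_dim=768, fused_dim=512):
              super().__init__()
              self.raw_proj = nn.Linear(visual_dim, fused_dim)  # first (captured) image features
              self.seg_proj = nn.Linear(visual_dim, fused_dim)  # second (region-divided) image features
              self.txt_proj = nn.Linear(text_dim, fused_dim)    # language features of the text
              layer = nn.TransformerEncoderLayer(d_model=fused_dim, nhead=8, batch_first=True)
              self.fuse = nn.TransformerEncoder(layer, num_layers=2)

          def forward(self, raw_feat, seg_feat, txt_feat):
              tokens = torch.stack(
                  [self.raw_proj(raw_feat), self.seg_proj(seg_feat), self.txt_proj(txt_feat)], dim=1
              )  # (batch, 3, fused_dim)
              return self.fuse(tokens)  # tri-modal features

      class RegionDecoder(nn.Module):
          def __init__(self, fused_dim=512):
              super().__init__()
              self.head = nn.Linear(3 * fused_dim, 4)  # a box (x, y, w, h) on the first image

          def forward(self, tri_modal_feat):
              return self.head(tri_modal_feat.flatten(start_dim=1))

      encoder, decoder = TriModalEncoder(), RegionDecoder()
      region = decoder(encoder(torch.randn(1, 2048), torch.randn(1, 2048), torch.randn(1, 768)))
      print(region.shape)  # torch.Size([1, 4])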
  • Patent number: 12109706
    Abstract: A control device includes a target position setting part, a trajectory estimation part, and a target position selector. The target position setting part determines a target position of an actor of a robot based on a form of an object located in an operating environment of the robot. The trajectory estimation part estimates a predicted trajectory of the actor based on the motion of the actor up to the present, and estimates a trajectory of the actor from the current position to the target position as an approach trajectory using a predetermined function. The target position selector selects one target position based on a degree of similarity between the predicted trajectory and each of the approach trajectories.
    Type: Grant
    Filed: February 14, 2022
    Date of Patent: October 8, 2024
    Assignee: Honda Motor Co., Ltd.
    Inventors: Anirudh Reddy Kondapally, Naoki Hosomi, Nanami Tsukamoto
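    As a rough, NumPy-based sketch of the selection step described above: the predicted trajectory is extrapolated from recent motion, an approach trajectory is interpolated toward each candidate target, and the candidate whose approach trajectory best matches the prediction is chosen. The constant-velocity extrapolation, linear interpolation, and negative-mean-distance similarity are placeholders for the patent's "predetermined function" and "degree of similarity".
      import numpy as np

      def predicted_trajectory(history, horizon=10):
          """Extrapolate the actor's recent motion with a constant-velocity assumption."""
          velocity = history[-1] - history[-2]
          return np.array([history[-1] + velocity * (t + 1) for t in range(horizon)])

      def approach_trajectory(current, target, horizon=10):
          """Interpolate from the current position toward a candidate target position."""
          return np.array([current + (target - current) * (t + 1) / horizon for t in range(horizon)])

      def similarity(traj_a, traj_b):
          """Higher is more similar: negative mean point-wise distance."""
          return -np.linalg.norm(traj_a - traj_b, axis=1).mean()

      def select_target(history, candidate_targets):
          predicted = predicted_trajectory(history)
          scores = [similarity(predicted, approach_trajectory(history[-1], t)) for t in candidate_targets]
          return candidate_targets[int(np.argmax(scores))]

      history = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0]])   # recent actor positions
      candidates = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # target positions from object forms
      print(select_target(history, candidates))                  # -> [1. 0.]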
  • Patent number: 12033340
    Abstract: A system including an acquisition unit configured to acquire, from a user via a communication device associated with the user, target object data including a feature of a target object selected by the user, an analysis unit configured to analyze whether the target object data that has been acquired by the acquisition unit includes, as the feature, at least one of data of a proper noun or data of a character string related to the target object, and whether the target object data includes data of a color related to the target object, and an estimation unit configured to estimate a distance from the target object to the user, based on an analysis result of the analysis unit, wherein the estimation unit estimates the distance from the target object to the user such that the distance from the target object to the user in a case where the target object data includes at least one of the data of the proper noun or the data of the character string is shorter than the distance from the target object to the user in a case …
    Type: Grant
    Filed: March 24, 2022
    Date of Patent: July 9, 2024
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Teruhisa Misu, Naoki Hosomi, Kentaro Yamada
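    A toy sketch of the estimation rule described above, under the reading that readable text (a proper noun or character string) implies the user is closer to the object than color alone does. The distance values and dictionary keys are invented for illustration.
      # Hypothetical distance heuristic; thresholds are illustrative, not from the patent.
      def estimate_distance(target_object_data):
          has_text = bool(target_object_data.get("proper_noun") or target_object_data.get("string"))
          has_color = bool(target_object_data.get("color"))
          if has_text:
              return 10.0   # meters: readable text implies the user is relatively close
          if has_color:
              return 50.0   # meters: color is recognizable from farther away
          return 100.0      # no distinguishing feature: assume the largest distance

      print(estimate_distance({"string": "EXIT", "color": "green"}))  # 10.0
      print(estimate_distance({"color": "red"}))                      # 50.0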
  • Publication number: 20240193955
    Abstract: A mobile object control device acquires a captured image obtained by capturing an image of the surroundings of a mobile object with a camera mounted on the mobile object, and an input directive sentence input by a user of the mobile object; inputs the captured image and the input directive sentence into a learned model that has been trained, when given an image and a directive sentence, to output one or more objects in the image corresponding to the directive sentence together with corresponding degrees of certainty, thereby detecting the one or more objects and their degrees of certainty; sequentially selects the one or more objects based at least on the degrees of certainty and makes an inquiry to the user of the mobile object; and causes the mobile object to travel to the position indicated in the input directive sentence, which is specified based on a result of the inquiry.
    Type: Application
    Filed: December 12, 2022
    Publication date: June 13, 2024
    Inventors: Naoki Hosomi, Teruhisa Misu, Kazunori Komatani, Ryu Takeda, Ryusei Taniguchi
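    The loop below sketches the inquiry step in plain Python: detections from the learned model are taken in order of decreasing certainty, and the user is asked about each candidate until one is confirmed. The detection tuples and the ask_user callback are hypothetical stand-ins for the device's detector and dialogue interface.
      def confirm_target(detections, ask_user):
          """detections: list of (object_label, position, certainty) from the learned model."""
          for label, position, certainty in sorted(detections, key=lambda d: d[2], reverse=True):
              if ask_user(f"Is '{label}' the place you meant?"):
                  return position
          return None  # no candidate confirmed; the mobile object does not move

      detections = [("red mailbox", (12.0, 3.5), 0.91), ("red sign", (20.0, 1.0), 0.64)]
      target = confirm_target(detections, ask_user=lambda q: True)  # stub: user accepts the first inquiry
      print(target)  # (12.0, 3.5)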
  • Publication number: 20240149458
    Abstract: A robot remote operation control device includes, in robot remote operation control for an operator to remotely operate a robot capable of gripping an object, an information acquisition unit that acquires operator state information on a state of the operator who operates the robot, an intention estimation unit that estimates a motion intention of the operator who causes the robot to perform a motion, on the basis of the operator state information, and a gripping method determination unit that determines a gripping method for the object on the basis of the estimated motion intention of the operator.
    Type: Application
    Filed: March 16, 2022
    Publication date: May 9, 2024
    Inventors: Tomoki Watabe, Akira Mizutani, Takeshi Chiku, Yili Dong, Tomohiro Chaki, Nanami Tsukamoto, Naoki Hosomi, Anirudh Reddy Kondapally, Takahide Yoshiike, Christian Goerick, Dirk Ruiken, Bram Bolder, Mathias Franzius, Simon Manschitz
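    A minimal sketch of the final mapping step: an estimated motion intention selects a gripping method. The intention labels, grip types, and the threshold-based estimator are assumptions for illustration; in the device described above the intention would come from a model trained on operator state information.
      def estimate_intention(operator_state):
          """Stub: a trained model would map the operator's state to an intended motion."""
          return "pour" if operator_state.get("wrist_rotation", 0.0) > 0.5 else "place"

      GRIP_BY_INTENTION = {
          "pour": "side_grip",   # grip the object so it can be tilted
          "place": "top_grip",   # grip from above for a stable put-down
      }

      def determine_gripping_method(operator_state):
          return GRIP_BY_INTENTION.get(estimate_intention(operator_state), "top_grip")

      print(determine_gripping_method({"wrist_rotation": 0.8}))  # side_grip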
  • Publication number: 20240071090
    Abstract: Provided is a mobile object control device including a storage medium storing a computer-readable command and a processor connected to the storage medium, the processor executing the computer-readable command to: acquire a photographed image, which is obtained by photographing surroundings of a mobile object by a camera mounted on the mobile object, and an input instruction sentence, which is input by a user of the mobile object; detect a stop position of the mobile object corresponding to the input instruction sentence in the photographed image by inputting at least the photographed image and the input instruction sentence into a trained model including a pre-trained visual-language model, the trained model being trained so as to receive input of at least an image and an instruction sentence to output a stop position of the mobile object corresponding to the instruction sentence in the image; and cause the mobile object to travel to the stop position.
    Type: Application
    Filed: August 25, 2022
    Publication date: February 29, 2024
    Inventors: Naoki Hosomi, Teruhisa Misu, Kentaro Yamada
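    The control flow described above can be sketched as a thin controller around a trained model, as below. The model, camera, and vehicle classes are fabricated stubs, not a real SDK, and the model's prediction is hard-coded for the example.
      class StopPositionController:
          def __init__(self, model, camera, mobile_object):
              self.model = model            # trained model built on a pre-trained visual-language model
              self.camera = camera
              self.mobile_object = mobile_object

          def handle_instruction(self, instruction_sentence):
              image = self.camera.capture()                              # photographed surroundings
              stop_xy = self.model.predict(image, instruction_sentence)  # stop position in the image
              self.mobile_object.drive_to(stop_xy)
              return stop_xy

      class FakeModel:
          def predict(self, image, sentence):
              return (320, 400)  # hard-coded pixel coordinates of the predicted stop position

      class FakeCamera:
          def capture(self):
              return "image"

      class FakeMobileObject:
          def drive_to(self, xy):
              print(f"driving toward image point {xy}")

      controller = StopPositionController(FakeModel(), FakeCamera(), FakeMobileObject())
      controller.handle_instruction("stop next to the blue truck")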
  • Publication number: 20230326048
    Abstract: A system including an acquisition unit configured to acquire, from a user via a communication device associated with the user, target object data including a feature of a target object selected by the user, an analysis unit configured to analyze whether the target object data that has been acquired by the acquisition unit includes, as the feature, at least one of data of a proper noun or data of a character string related to the target object, and whether the target object data includes data of a color related to the target object, and an estimation unit configured to estimate a distance from the target object to the user, based on an analysis result of the analysis unit, wherein the estimation unit estimates the distance from the target object to the user such that the distance from the target object to the user in a case where the target object data includes at least one of the data of the proper noun or the data of the character string is shorter than the distance from the target object to the user in a case …
    Type: Application
    Filed: March 24, 2022
    Publication date: October 12, 2023
    Inventors: Teruhisa MISU, Naoki HOSOMI, Kentaro YAMADA
  • Publication number: 20230298340
    Abstract: An information processing apparatus of the present invention acquires a captured image; detects a plurality of targets included in the captured image and extracts a plurality of features for each of the detected targets; acquires an impurity for each extracted feature, the impurity indicating the degree to which a predetermined target is inseparable from among the plurality of targets when a user is asked a question, based on that feature, for presuming the predetermined target from among the plurality of targets; and generates questions, based on the extracted features and the impurity of each feature, so as to reduce the number of questions required to minimize the impurity.
    Type: Application
    Filed: January 26, 2023
    Publication date: September 21, 2023
    Inventor: Naoki HOSOMI
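    One way to read the impurity above is as the expected number of candidates that remain indistinguishable after a question is answered; the toy sketch below picks the feature minimizing that quantity. This proxy measure and the feature dictionaries are assumptions for illustration, not the patent's definition.
      from collections import Counter

      def expected_remaining(targets, feature):
          """Expected number of candidates left after asking about `feature`,
          assuming the user's answer is one of the observed feature values."""
          counts = Counter(t[feature] for t in targets)
          n = len(targets)
          return sum((c / n) * c for c in counts.values())

      def next_question(targets, features):
          return min(features, key=lambda f: expected_remaining(targets, f))

      targets = [
          {"color": "red", "shape": "cup"},
          {"color": "red", "shape": "bottle"},
          {"color": "blue", "shape": "cup"},
      ]
      print(next_question(targets, ["color", "shape"]))  # both split 2/1 here; the tie resolves to "color"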
  • Publication number: 20220319514
    Abstract: An information processing apparatus capable of controlling a mobile object on the basis of an instruction uttered by a user identifies, from among a plurality of use scenes in which the mobile object is used, which use scene applies to a target user, acquires utterance information of the target user, and selects a different machine learning model according to the identified use scene of the target user. The information processing apparatus estimates the intent of an utterance of the target user by using the selected machine learning model.
    Type: Application
    Filed: March 15, 2022
    Publication date: October 6, 2022
    Inventor: Naoki HOSOMI
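    A minimal sketch of per-scene model dispatch, assuming Python: a different model (here, trivial lambdas) is selected for each identified use scene before intent estimation. The scene labels and intent names are invented placeholders.
      class IntentEstimator:
          def __init__(self):
              self.models = {
                  "boarding": lambda text: "request_pickup",
                  "riding": lambda text: "change_destination",
                  "leaving": lambda text: "request_dropoff",
              }

          def estimate(self, scene, utterance):
              model = self.models[scene]  # a different machine learning model per use scene
              return model(utterance)

      estimator = IntentEstimator()
      print(estimator.estimate("riding", "take me to the station instead"))  # change_destination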
  • Publication number: 20220314449
    Abstract: In a robot remote operation in which a movement of an operator is recognized and transmitted to a robot to operate the robot, a robot remote operation control device includes: an information acquisition part that acquires an environment sensor value obtained by an environment sensor provided in the robot or in the surrounding environment of the robot, and an operator sensor value, which is detected information indicating the movement of the operator; and an intention estimation part that estimates a motion of the operator, which serves as a motion instruction to the robot, from the operator sensor value by using a trained model.
    Type: Application
    Filed: March 29, 2022
    Publication date: October 6, 2022
    Applicant: Honda Motor Co., Ltd.
    Inventors: Naoki HOSOMI, Anirudh Reddy KONDAPALLY, Nanami TSUKAMOTO
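    The two inputs named above (an environment sensor value and an operator sensor value) can be sketched as a concatenated feature vector fed to a trained model; the nearest-centroid stand-in below is only a placeholder for that model, and the feature layout and intent labels are invented.
      import numpy as np

      class MotionIntentionEstimator:
          def __init__(self, intent_centroids):
              self.intent_centroids = intent_centroids  # stands in for a trained model

          def estimate(self, environment_sensor_value, operator_sensor_value):
              features = np.concatenate([environment_sensor_value, operator_sensor_value])
              return min(self.intent_centroids,
                         key=lambda k: np.linalg.norm(features - self.intent_centroids[k]))

      centroids = {"reach_for_cup": np.array([0.2, 0.1, 0.8, 0.0]),
                   "wave": np.array([0.0, 0.0, 0.1, 0.9])}
      estimator = MotionIntentionEstimator(centroids)
      print(estimator.estimate(np.array([0.2, 0.1]), np.array([0.7, 0.1])))  # reach_for_cup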
  • Publication number: 20220297296
    Abstract: A control device includes a target position setting part, a trajectory estimation part, and a target position selector. The target position setting part determines a target position of an actor of a robot based on a form of an object located in an operating environment of the robot. The trajectory estimation part estimates a predicted trajectory of the actor based on the motion of the actor up to the present, and estimates a trajectory of the actor from the current position to the target position as an approach trajectory using a predetermined function. The target position selector selects one target position based on a degree of similarity between the predicted trajectory and each of the approach trajectories.
    Type: Application
    Filed: February 14, 2022
    Publication date: September 22, 2022
    Applicant: Honda Motor Co., Ltd.
    Inventors: Anirudh Reddy KONDAPALLY, Naoki HOSOMI, Nanami TSUKAMOTO
  • Publication number: 20150111005
    Abstract: A first mask layer (13) and a second mask layer (12) are transferred and imparted to a target object (20) using a fine-pattern-forming film (I) provided with a cover film (10) having a nanoscale concavo-convex structure (11) formed on one surface thereof, a second mask layer (12) provided in a recess of the concavo-convex structure (11), and a first mask layer (13) provided so as to cover the concavo-convex structure (11) and the second mask layer (12). A surface of a fine-pattern-forming film (II) on which the first mask layer (13) is provided is pressed against a surface of the target object (20), the first mask layer (13) is irradiated with energy rays, and the cover film (10) is then separated from the second mask layer (12) and the first mask layer (13). Pressing and energy ray irradiation are each performed independently. The target object is etched using the second mask layer (12) and the first mask layer (13).
    Type: Application
    Filed: April 30, 2013
    Publication date: April 23, 2015
    Applicant: ASAHI KASEI E-MATERIALS CORPORATION
    Inventors: Naoki Hosomi, Jun Koike, Fujito Yamaguchi
  • Publication number: 20100041167
    Abstract: The present invention is directed to a method for diagnosing large intestinal cancer and/or polyps and a method for observing the postoperative course or monitoring recurrence thereof, wherein each method includes detecting cystatin SN protein by use of an anti-cystatin SN antibody. The present invention can provide a kit for assaying cystatin SN, which can be used, in a simple manner, in a diagnosis performed prior to conventional barium enema examination and endoscopic examination, which impose burdens on patients; as an indicator of metastasis and recurrence; and in the evaluation of therapeutic effects. The present invention provides a method for diagnosing or monitoring large intestinal cancer and/or polyps which can be performed in a simple manner, and thus allows a new regimen to be designed rapidly.
    Type: Application
    Filed: December 14, 2005
    Publication date: February 18, 2010
    Applicants: Perseus Proteomics Inc., The University of Tokyo
    Inventors: Hiroyuki Aburatani, Takahiro Shimamura, Kiyotaka Watanabe, Takeharu Asano, Shin Ohnishi, Takao Hamakubo, Akira Sugiyama, Naoki Hosomi, Hiroko Iwanari, Keisuke Ishii