Patents by Inventor Kyohei TOMITA
Kyohei TOMITA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11200506
Abstract: A method provides information to a user as a function of derived user intent. The method includes receiving input from a user, generating an intent vector by processing the received input through an artificial intelligence model that has been trained with data representative of the user's intention, wherein the intent vector comprises a probability for each intent in a known set of possible intents, executing a trigger control model to determine whether to respond to the user as a function of the input from the user and the intent vector, utilizing the trigger control model, received input, and intent vector input to generate a response via a trained chatbot, and providing the response via an output device.
Type: Grant
Filed: December 15, 2017
Date of Patent: December 14, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Xianchao Wu, Kyohei Tomita, Keizo Fujiwara
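The abstract describes a two-stage flow: an intent model turns user input into a probability per known intent, and a trigger control step decides whether the chatbot should respond at all. A minimal sketch of that flow, with illustrative intents, scores, and a made-up confidence threshold (none of these specifics come from the patent):

```python
import math

# Hypothetical set of known intents; the patent does not enumerate them.
INTENTS = ["greeting", "question", "complaint"]

def intent_vector(scores):
    """Softmax raw model scores into a probability for each known intent."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return {intent: e / total for intent, e in zip(INTENTS, exps)}

def should_respond(vector, threshold=0.5):
    """Trigger control: respond only when some intent is confident enough."""
    return max(vector.values()) >= threshold

vec = intent_vector([2.0, 0.5, 0.1])
print(should_respond(vec))  # the "greeting" intent dominates, so respond
```

In the claimed method the trigger decision also conditions on the raw input itself; the threshold check above stands in for that model.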
-
Patent number: 10671181
Abstract: Receiving user input. A method includes displaying a set of characters arranged sequentially in a curvilinear or linear fashion such that any one of the characters in the set of characters can be identified for selection by continuous and uniform user input from a user. The method further comprises displaying a plurality of the characters in the set of characters in a fashion where each given character in the plurality of characters is displayed at a level of prominence determined by a probability that the given character is a next character in a string of characters selected by a user. The method further includes receiving user input in a continuous and uniform fashion from a user to identify a character in the set of characters. The method further includes receiving user input selecting the identified character. The method further includes adding the identified character to the string of characters.
Type: Grant
Filed: April 3, 2017
Date of Patent: June 2, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yaming Xu, Kyohei Tomita
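The key idea here is rendering each candidate character at a prominence proportional to how likely it is to be the next character in the string typed so far. A rough sketch, assuming a toy bigram table and a linear probability-to-size mapping (both invented for illustration; the patent does not specify the language model or the sizing function):

```python
# Made-up next-character probabilities after typing "t".
BIGRAM = {"t": {"h": 0.6, "o": 0.25, "a": 0.15}}

def prominence(prev_char, base=12, scale=24):
    """Map each candidate's next-character probability to a display size
    in points: more likely characters are drawn larger."""
    probs = BIGRAM.get(prev_char, {})
    return {ch: round(base + scale * p) for ch, p in probs.items()}

sizes = prominence("t")  # "h" is drawn largest, "a" smallest
```

Size is one possible prominence channel; the same mapping could drive opacity or highlighting instead.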
-
Patent number: 10573009
Abstract: Provided is an in vivo movement tracking apparatus configured to track a portion of interest that moves in vivo, in which accuracy and robustness of tracking are improved. The apparatus is configured to determine an estimated position of an organ in a biological image based on the past movement of the organ and search for contour points corresponding to a plurality of control points, respectively, representing a contour shape of the organ in a region corresponding to the estimated position, to thereby determine an estimated contour of the organ based on the contour points. The in vivo movement tracking apparatus is configured to determine a position of a portion of interest, which moves in association with the organ, based on the estimated contour with reference to previously acquired sample data regarding a positional relationship between a contour of the organ and the portion of interest.
Type: Grant
Filed: January 22, 2018
Date of Patent: February 25, 2020
Assignees: THE UNIVERSITY OF TOKYO, THE UNIVERSITY OF ELECTRO-COMMUNICATIONS, PUBLIC UNIVERSITY CORPORATION YOKOHAMA CITY UNIVERSITY
Inventors: Norihiro Koizumi, Atsushi Kayasuga, Kyohei Tomita, Izumu Hosoi, Yu Nishiyama, Hiroyuki Tsukihara, Hideyo Miyazaki, Hiroyuki Fukuda, Kazushi Numata, Kiyoshi Yoshinaka, Takashi Azuma, Naohiko Sugita, Yukio Homma, Yoichiro Matsumoto, Mamoru Mitsuishi
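The tracking loop the abstract outlines has three steps: predict the organ's position from its past motion, fit the contour control points around that estimate, then place the portion of interest using a previously learned contour-to-target relationship. A simplified 2-D sketch under a linear-motion assumption, with the contour search reduced to a rigid shift and the learned relationship to a fixed offset (all simplifications and numbers are illustrative, not from the patent):

```python
def predict_position(history):
    """Estimate the next centroid by linear extrapolation of the last two."""
    (x1, y1), (x2, y2) = history[-2], history[-1]
    return (2 * x2 - x1, 2 * y2 - y1)

def shift_contour(control_points, old_center, new_center):
    """Move the contour's control points to the predicted region
    (stands in for the per-control-point contour search)."""
    dx, dy = new_center[0] - old_center[0], new_center[1] - old_center[1]
    return [(x + dx, y + dy) for x, y in control_points]

def locate_target(contour_center, offset):
    """Portion of interest from the learned center-to-target offset."""
    return (contour_center[0] + offset[0], contour_center[1] + offset[1])

history = [(10.0, 5.0), (12.0, 6.0)]
est = predict_position(history)  # (14.0, 7.0)
```

The real apparatus searches for each control point's contour match in the image; the rigid shift above only conveys where that search is anchored.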
-
Patent number: 10535159
Abstract: An in vivo motion tracking device tracking an in vivo motion that is a tracking target included in an ultrasonic image includes an image acquiring unit that is configured to acquire an ultrasonic image, an advance learning unit that is configured to perform advance learning using the ultrasonic image as learning data, and a tracking unit that is configured to track a position of the tracking target in an ultrasonic image including the tracking target after the advance learning performed by the advance learning unit.
Type: Grant
Filed: January 10, 2018
Date of Patent: January 14, 2020
Assignees: The University of Electro-Communications, PUBLIC UNIVERSITY CORPORATION YOKOHAMA CITY UNIVERSITY
Inventors: Norihiro Koizumi, Yu Nishiyama, Ryosuke Kondo, Kyohei Tomita, Fumio Eura, Kazushi Numata
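The "advance learning, then tracking" split can be illustrated with a template-matching sketch: learn a template of the target from training frames offline, then find the best-matching offset in a new frame. Real ultrasound frames are 2-D images; 1-D signals, the averaging learner, and the sum-of-squared-differences matcher here are all stand-ins chosen to keep the sketch short, not the patent's method:

```python
def learn_template(frames):
    """Advance learning: average the training patches element-wise."""
    n = len(frames)
    return [sum(vals) / n for vals in zip(*frames)]

def track(frame, template):
    """Tracking: return the offset in `frame` that best matches the
    learned template by sum-of-squared-differences."""
    w = len(template)
    best, best_err = 0, float("inf")
    for off in range(len(frame) - w + 1):
        err = sum((frame[off + i] - template[i]) ** 2 for i in range(w))
        if err < best_err:
            best, best_err = off, err
    return best

tmpl = learn_template([[1, 5, 1], [1, 5, 1]])
pos = track([0, 0, 1, 5, 1, 0], tmpl)  # target found at offset 2
```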
-
Publication number: 20190188590
Abstract: A method provides information to a user as a function of derived user intent. The method includes receiving input from a user, generating an intent vector by processing the received input through an artificial intelligence model that has been trained with data representative of the user's intention, wherein the intent vector comprises a probability for each intent in a known set of possible intents, executing a trigger control model to determine whether to respond to the user as a function of the input from the user and the intent vector, utilizing the trigger control model, received input, and intent vector input to generate a response via a trained chatbot, and providing the response via an output device.
Type: Application
Filed: December 15, 2017
Publication date: June 20, 2019
Inventors: Xianchao Wu, Kyohei Tomita, Keizo Fujiwara
-
Publication number: 20190057517
Abstract: An in vivo motion tracking device tracking an in vivo motion that is a tracking target included in an ultrasonic image includes an image acquiring unit that is configured to acquire an ultrasonic image, an advance learning unit that is configured to perform advance learning using the ultrasonic image as learning data, and a tracking unit that is configured to track a position of the tracking target in an ultrasonic image including the tracking target after the advance learning performed by the advance learning unit.
Type: Application
Filed: January 10, 2018
Publication date: February 21, 2019
Inventors: Norihiro KOIZUMI, Yu NISHIYAMA, Ryosuke KONDO, Kyohei TOMITA, Fumio EURA, Kazushi NUMATA
-
Publication number: 20180285335
Abstract: Receiving user input. A method includes displaying a set of characters arranged sequentially in a curvilinear or linear fashion such that any one of the characters in the set of characters can be identified for selection by continuous and uniform user input from a user. The method further comprises displaying a plurality of the characters in the set of characters in a fashion where each given character in the plurality of characters is displayed at a level of prominence determined by a probability that the given character is a next character in a string of characters selected by a user. The method further includes receiving user input in a continuous and uniform fashion from a user to identify a character in the set of characters. The method further includes receiving user input selecting the identified character. The method further includes adding the identified character to the string of characters.
Type: Application
Filed: April 3, 2017
Publication date: October 4, 2018
Inventors: Yaming Xu, Kyohei Tomita
-
Publication number: 20180253855
Abstract: Provided is an in vivo movement tracking apparatus configured to track a portion of interest that moves in vivo, in which accuracy and robustness of tracking are improved. The apparatus is configured to determine an estimated position of an organ in a biological image based on the past movement of the organ and search for contour points corresponding to a plurality of control points, respectively, representing a contour shape of the organ in a region corresponding to the estimated position, to thereby determine an estimated contour of the organ based on the contour points. The in vivo movement tracking apparatus is configured to determine a position of a portion of interest, which moves in association with the organ, based on the estimated contour with reference to previously acquired sample data regarding a positional relationship between a contour of the organ and the portion of interest.
Type: Application
Filed: January 22, 2018
Publication date: September 6, 2018
Inventors: Norihiro KOIZUMI, Atsushi KAYASUGA, Kyohei TOMITA, Izumu HOSOI, Yu NISHIYAMA, Hiroyuki TSUKIHARA, Hideyo MIYAZAKI, Hiroyuki FUKUDA, Kazushi NUMATA, Kiyoshi YOSHINAKA, Takashi AZUMA, Naohiko SUGITA, Yukio HOMMA, Yoichiro MATSUMOTO, Mamoru MITSUISHI