Patents by Inventor Koichiro Niinuma

Koichiro Niinuma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240161376
    Abstract: In an example, a method may include obtaining, from a data source, first data including multiple frames each including a human face. The method may include automatically detecting, in each of the multiple frames, one or more facial landmarks and one or more action units (AUs) associated with the human face. The method may also include automatically generating one or more semantic masks based at least on the one or more facial landmarks, the one or more semantic masks individually corresponding to the human face. The method may further include obtaining a facial hyperspace using at least the first data, the one or more AUs, and the semantic masks. The method may also include generating a synthetic image of the human face using a first frame of the multiple frames and one or more AU intensities individually associated with the one or more AUs.
    Type: Application
    Filed: March 29, 2023
    Publication date: May 16, 2024
    Applicants: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
    Inventors: Heng YU, Koichiro NIINUMA, Laszlo JENI
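The abstract above walks through a multi-stage pipeline (landmarks, AUs, semantic masks, a facial hyperspace, then AU-conditioned synthesis). The sketch below only illustrates that control flow under stated assumptions; every function is a hypothetical stand-in, not the patented implementation.

```python
# Illustrative pipeline skeleton for the stages named in the abstract above.
# All functions are hypothetical stand-ins, not the inventors' implementation.
import numpy as np

def detect_landmarks(frame: np.ndarray) -> np.ndarray:
    """Stand-in facial-landmark detector: returns 68 (x, y) points."""
    return np.zeros((68, 2))

def detect_action_units(frame: np.ndarray) -> dict:
    """Stand-in AU estimator: returns AU name -> intensity."""
    return {"AU6": 0.0, "AU12": 0.0}

def build_semantic_masks(landmarks: np.ndarray, shape) -> dict:
    """Stand-in: one binary mask per semantic facial region."""
    return {"mouth": np.zeros(shape[:2], dtype=bool),
            "eyes": np.zeros(shape[:2], dtype=bool)}

def fit_facial_hyperspace(frames, aus, masks):
    """Stand-in for the 'facial hyperspace' obtained from the data, AUs and masks."""
    return {"frames": frames, "aus": aus, "masks": masks}

def synthesize(hyperspace, base_frame: np.ndarray, target_intensities: dict) -> np.ndarray:
    """Stand-in generator conditioned on target AU intensities."""
    return base_frame.copy()

frames = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(4)]   # "first data"
landmarks = [detect_landmarks(f) for f in frames]
aus = [detect_action_units(f) for f in frames]
masks = [build_semantic_masks(l, frames[0].shape) for l in landmarks]
space = fit_facial_hyperspace(frames, aus, masks)
synthetic = synthesize(space, frames[0], {"AU12": 3.0})   # e.g. raise lip-corner puller
```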
  • Publication number: 20240029434
    Abstract: A non-transitory computer-readable recording medium has stored therein a program that causes a computer to execute a process, the process including acquiring movie data including a plurality of consecutive frames, calculating first likelihood of a class of the movie data by inputting the acquired movie data to a trained model, calculating an optical flow indicating movement of an area included in the movie data, based on the movie data, generating occluded movie data by setting an occluded area in each of the frames included in the movie data, based on the optical flow, calculating second likelihood of a class of the occluded movie data by inputting the occluded movie data to the model, identifying an area that affects identification of the class among areas in the movie data, based on the first likelihood and the second likelihood, and displaying the identified area that affects identification of the class.
    Type: Application
    Filed: July 19, 2022
    Publication date: January 25, 2024
    Applicants: FUJITSU LIMITED, University of Tsukuba
    Inventors: Tomoki UCHIYAMA, Naoya SOGI, Koichiro NIINUMA, Kazuhiro FUKUI
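A minimal sketch of the occlusion-style attribution described above: occlude the moving region found by optical flow and compare class likelihoods before and after. The classifier and the flow threshold below are placeholders, not the patented method.

```python
import numpy as np
import cv2

def class_likelihood(frames: np.ndarray) -> float:
    """Stand-in for the trained video classifier (likelihood of one class)."""
    return float(frames.mean()) / 255.0

def moving_mask(prev: np.ndarray, curr: np.ndarray, thresh: float = 1.0) -> np.ndarray:
    """Dense Farneback optical flow; mark pixels whose flow magnitude exceeds thresh."""
    g0 = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return np.linalg.norm(flow, axis=2) > thresh

frames = np.random.randint(0, 255, (8, 120, 160, 3), dtype=np.uint8)   # dummy movie data
first = class_likelihood(frames)                                        # first likelihood

occluded = frames.copy()
for t in range(1, len(frames)):
    mask = moving_mask(frames[t - 1], frames[t])
    occluded[t][mask] = 0                        # set an occluded area in each frame
second = class_likelihood(occluded)              # second likelihood

# A large drop in likelihood suggests the occluded (moving) area drives identification.
print("influence of moving area:", first - second)
```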
  • Publication number: 20230316700
    Abstract: An information processing apparatus acquires an image; generates a shielding image in which a part of an area included in an area of the acquired image has been shielded; calculates, by inputting the image to a first model that has been trained, first likelihood of the target object included in the image; calculates, by inputting the shielding image to a second model that calculates an approximation value of likelihood of the target object included in the image when the image is input, second likelihood corresponding to the approximation value of the likelihood of the target object included in the shielding image; specifies, based on the first likelihood and the second likelihood, an area that affects discrimination of the class and that is included in the area of the image; and displays the specified area that affects discrimination of the class.
    Type: Application
    Filed: March 31, 2022
    Publication date: October 5, 2023
    Applicants: FUJITSU LIMITED, University of Tsukuba
    Inventors: Naoya SOGI, Tomoki UCHIYAMA, Kazuhiro FUKUI, Koichiro NIINUMA
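The two-model shielding idea above can be pictured with a simple patch sweep: a first model scores the full image, a second (approximation) model scores shielded variants, and the difference flags areas that affect discrimination. Both models and the patch size here are illustrative assumptions.

```python
import numpy as np

def first_model(image: np.ndarray) -> float:
    """Stand-in for the trained first model (likelihood of the target object)."""
    return float(image.mean()) / 255.0

def second_model(shielded: np.ndarray) -> float:
    """Stand-in for the second model approximating the likelihood on shielded input."""
    return float(shielded.mean()) / 255.0

image = np.random.randint(0, 255, (96, 96, 3), dtype=np.uint8)
first = first_model(image)

patch = 16
heatmap = np.zeros((96 // patch, 96 // patch))
for i in range(heatmap.shape[0]):
    for j in range(heatmap.shape[1]):
        shielded = image.copy()
        shielded[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0  # shield one area
        heatmap[i, j] = first - second_model(shielded)   # effect of this area on the class

print("most influential patch:", np.unravel_index(heatmap.argmax(), heatmap.shape))
```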
  • Publication number: 20230029505
    Abstract: A method may include obtaining a facial image of a subject and identifying a number of new images to be synthesized with target AU combinations and categories of intensity. The method may also include synthesizing the number of new images, using the facial image of the subject as the base image together with the number of target AU combinations and categories of intensity, such that the new images have AU combinations different from that of the facial image of the subject. The method may additionally include adding the number of new images to a dataset and training a machine learning system using the dataset to identify a facial expression of the subject.
    Type: Application
    Filed: August 2, 2021
    Publication date: February 2, 2023
    Applicant: FUJITSU LIMITED
    Inventors: Koichiro NIINUMA, Jeffrey F. COHN, Laszlo A. JENI
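A rough sketch of the augmentation loop in the abstract above. The synthesizer, the example AU combinations, and the training call are hypothetical placeholders; only the control flow mirrors the abstract.

```python
import numpy as np

def synthesize_with_aus(base_image: np.ndarray, au_combo: dict) -> np.ndarray:
    """Stand-in generator: renders the base face with the requested AU combination."""
    return base_image.copy()

def train_expression_model(dataset: list):
    """Stand-in for training a facial-expression recognizer on (image, AU combo) pairs."""
    return "trained-model"

base = np.zeros((128, 128, 3), dtype=np.uint8)     # facial image of the subject
target_combos = [                                  # target AU combinations + intensity categories
    {"AU6": "B", "AU12": "C"},                     # e.g. a Duchenne-smile-like combination
    {"AU4": "D"},                                  # brow lowerer at high intensity
]

dataset = [(base, {})]                             # the original image and its AUs
for combo in target_combos:                        # combos differ from the base image's AUs
    dataset.append((synthesize_with_aus(base, combo), combo))   # add new image to the dataset

model = train_expression_model(dataset)
```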
  • Patent number: 11557149
    Abstract: A method may include obtaining a dataset including a target Action Unit (AU) combination and labeled images of the target AU combination with at least a first category of intensity for each AU of the target AU combination and a second category of intensity for each AU of the target AU combination. The method may also include determining that the first category of intensity for a first AU has a higher number of labeled images than the second category of intensity for the first AU, and based on the determination, identifying a number of new images to be synthesized in the second category of intensity for the first AU. The method may additionally include synthesizing the number of new images with the second category of intensity for the first AU, and adding the new images to the dataset.
    Type: Grant
    Filed: August 14, 2020
    Date of Patent: January 17, 2023
    Assignees: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
    Inventors: Koichiro Niinuma, Laszlo A. Jeni, Itir Onal Ertugrul, Jeffrey F. Cohn
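A minimal sketch of the balancing step described above: count labeled images per intensity category of an AU and synthesize enough new images to even the categories out. The counts and the "synthesized" file names are illustrative only.

```python
from collections import Counter

# Hypothetical labels: (AU, intensity category) for each labeled image of the target combination.
labels = [("AU12", "A")] * 40 + [("AU12", "C")] * 10

counts = Counter(labels)
higher = counts[("AU12", "A")]            # first category of intensity for the first AU
lower = counts[("AU12", "C")]             # second category of intensity for the first AU

if higher > lower:
    num_to_synthesize = higher - lower    # number of new images for the sparse category
    new_images = [f"synthetic_AU12_C_{i}.png" for i in range(num_to_synthesize)]  # stand-in
    labels += [("AU12", "C")] * num_to_synthesize   # add the new images to the dataset

print(Counter(labels))                    # the two categories are now balanced
```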
  • Publication number: 20220051003
    Abstract: A method may include obtaining a dataset including a target Action Unit (AU) combination and labeled images of the target AU combination with at least a first category of intensity for each AU of the target AU combination and a second category of intensity for each AU of the target AU combination. The method may also include determining that the first category of intensity for a first AU has a higher number of labeled images than the second category of intensity for the first AU, and based on the determination, identifying a number of new images to be synthesized in the second category of intensity for the first AU. The method may additionally include synthesizing the number of new images with the second category of intensity for the first AU, and adding the new images to the dataset.
    Type: Application
    Filed: August 14, 2020
    Publication date: February 17, 2022
    Applicants: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
    Inventors: Koichiro NIINUMA, Laszlo A. JENI, Itir Onal ERTUGRUL, Jeffrey F. COHN
  • Patent number: 11244206
    Abstract: A method may include obtaining a base facial image, and obtaining a first set of base facial features within the base facial image, the first set of base facial features associated with a first facial AU to be detected in an analysis facial image. The method may also include obtaining a second set of base facial features within the base facial image, the second set of facial features associated with a second facial AU to be detected. The method may include obtaining the analysis facial image, and applying a first image normalization to the analysis facial image using the first set of base facial features to facilitate prediction of a probability of the first facial AU. The method may include applying a second image normalization to the analysis facial image using the second set of base facial features to facilitate prediction of a probability of the second facial AU.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: February 8, 2022
    Assignees: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
    Inventors: Koichiro Niinuma, Laszlo A. Jeni, Itir Onal Ertugrul, Jeffrey F. Cohn
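The claim above applies a different image normalization per AU. The sketch below illustrates that idea with a similarity transform per landmark subset; the landmark coordinates and the per-AU detector are invented for illustration.

```python
import numpy as np
import cv2

base = np.zeros((128, 128, 3), dtype=np.uint8)       # base facial image
analysis = np.zeros((128, 128, 3), dtype=np.uint8)   # analysis facial image

# Hypothetical landmark subsets (pixel coordinates) tied to each AU.
base_features = {
    "AU12": np.float32([[40, 90], [88, 90], [64, 100]]),   # mouth-region points
    "AU4":  np.float32([[40, 40], [64, 38], [88, 40]]),    # brow-region points
}
analysis_features = {
    "AU12": np.float32([[42, 92], [90, 91], [66, 102]]),
    "AU4":  np.float32([[41, 42], [65, 40], [89, 41]]),
}

def predict_au_probability(normalized: np.ndarray, au: str) -> float:
    """Stand-in per-AU detector."""
    return 0.5

for au in ("AU12", "AU4"):
    # Similarity transform from the analysis landmarks to the base landmarks for this AU.
    M, _ = cv2.estimateAffinePartial2D(analysis_features[au], base_features[au])
    normalized = cv2.warpAffine(analysis, M, (base.shape[1], base.shape[0]))
    print(au, predict_au_probability(normalized, au))
```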
  • Patent number: 11216064
    Abstract: A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process, the process including obtaining gaze data that indicates a position of a gaze of a user at each of a plurality of times, determining a first movement regarding the gaze of the user based on the gaze data, displaying, on a screen of a display device, gaze information indicating a position of the gaze at each of a plurality of times from a time at which a second movement occurs when the first movement includes the second movement, and displaying, on the screen, gaze information indicating a position of the gaze at each of a plurality of times during a specified time period when the first movement does not include the second movement.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: January 4, 2022
    Assignee: FUJITSU LIMITED
    Inventors: Yoshihide Fujita, Akinori Taguchi, Koichiro Niinuma
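A small sketch of the display rule in the abstract above: if the detected first movement contains the second movement, show gaze positions from the time that movement occurred; otherwise show a fixed recent window. The gaze samples and window length are synthetic assumptions.

```python
gaze = [(t, 100 + t, 200) for t in range(0, 100, 10)]   # (time, x, y) gaze samples

second_movement_time = 40       # time at which the second movement was detected
first_includes_second = True    # result of analysing the first movement
window = 30                     # specified time period used otherwise

latest = gaze[-1][0]
if first_includes_second:
    shown = [g for g in gaze if g[0] >= second_movement_time]
else:
    shown = [g for g in gaze if g[0] >= latest - window]

print(shown)   # these positions would be rendered on the display device's screen
```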
  • Publication number: 20210073600
    Abstract: A method may include obtaining a base facial image, and obtaining a first set of base facial features within the base facial image, the first set of base facial features associated with a first facial AU to be detected in an analysis facial image. The method may also include obtaining a second set of base facial features within the base facial image, the second set of facial features associated with a second facial AU to be detected. The method may include obtaining the analysis facial image, and applying a first image normalization to the analysis facial image using the first set of base facial features to facilitate prediction of a probability of the first facial AU. The method may include applying a second image normalization to the analysis facial image using the second set of base facial features to facilitate prediction of a probability of the second facial AU.
    Type: Application
    Filed: September 6, 2019
    Publication date: March 11, 2021
    Applicants: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
    Inventors: Koichiro NIINUMA, Laszlo A. JENI, Itir Onal ERTUGRUL, Jeffrey F. COHN
  • Patent number: 10627897
    Abstract: A non-transitory computer-readable storage medium storing a program that causes a processor of a head mounted display to execute a process, the process includes obtaining a result of measurement performed by an acceleration sensor included in the head mounted display; and determining whether a user with the head mounted display is chewing a food or the user is speaking based on the result of the measurement performed by the acceleration sensor.
    Type: Grant
    Filed: November 9, 2018
    Date of Patent: April 21, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Motonobu Mihara, Akinori Taguchi, Koichiro Niinuma
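A toy sketch of telling chewing from speaking with the headset's accelerometer. The frequency-band heuristic below is purely an assumption for illustration; the patent only states that the determination is based on the acceleration measurement.

```python
import numpy as np

fs = 100.0                                   # hypothetical sample rate (Hz)
t = np.arange(0, 5, 1 / fs)
accel = np.sin(2 * np.pi * 1.5 * t)          # fake signal: ~1.5 Hz rhythmic jaw motion

spectrum = np.abs(np.fft.rfft(accel - accel.mean()))
freqs = np.fft.rfftfreq(len(accel), 1 / fs)
dominant = freqs[spectrum.argmax()]

# Assumption: chewing is slow and rhythmic, while speech-related motion is less
# periodic and spreads to higher frequencies.
label = "chewing" if 0.5 <= dominant <= 3.0 else "speaking"
print(dominant, label)
```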
  • Patent number: 10573083
    Abstract: A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process including estimating a first three-dimensional position of a wearable display device and a first arrangement of the wearable display device, estimating a second three-dimensional position of the physical object and a second arrangement of the physical object; estimating a third three-dimensional position of a specified body part of a person, determining whether a positional relationship between the physical object and the specified body part satisfies a criteria, displaying a virtual target object with a display position and a display arrangement determined based on the second three-dimensional position and the second arrangement when the positional relationship satisfies the criteria, and displaying the virtual target object with a display position and a display arrangement determined based on the first three-dimensional position and the first arrangement when the positional relationship does not satisfy the criteria.
    Type: Grant
    Filed: September 12, 2018
    Date of Patent: February 25, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Hiroyuki Kobayashi, Koichiro Niinuma, Masayuki Nishino
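The switching rule in the abstract above reduces to a pose selection: when the specified body part (here, a hand) is close enough to the physical object, place the virtual object using the object's position and arrangement; otherwise place it relative to the headset's. The poses, distance criterion, and "arrangement" strings below are fabricated for illustration.

```python
import numpy as np

hmd_position = np.array([0.0, 1.6, 0.0])           # first 3D position (wearable display)
hmd_arrangement = "hmd-rotation"                    # first arrangement (stand-in)
object_position = np.array([0.4, 1.0, 0.6])         # second 3D position (physical object)
object_arrangement = "object-rotation"              # second arrangement (stand-in)
hand_position = np.array([0.35, 1.05, 0.55])        # third 3D position (specified body part)

threshold = 0.15                                    # hand-object distance criterion, in metres
if np.linalg.norm(hand_position - object_position) < threshold:
    display_position, display_arrangement = object_position, object_arrangement
else:
    display_position, display_arrangement = hmd_position, hmd_arrangement

print("render virtual target object at", display_position, "with", display_arrangement)
```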
  • Patent number: 10558413
    Abstract: A degree of interest evaluation device includes a memory and a processor coupled to the memory. The processor is configured to: in cases in which input of an operation has been received while content is displayed on an information processing terminal, measure the operation time of the operation; measure a time-wise change of the content being displayed during the operation time; decide on a message to be displayed in accordance with the time-wise change; and display the message decided on in association with the content.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: February 11, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Teruyuki Sato, Koichiro Niinuma
  • Patent number: 10531045
    Abstract: A recording medium on which a user assistance program is recorded which makes a computer perform: based on image information obtained by photographing a plurality of users who use a given service, calculating state quantities of the plurality of respective users corresponding to the image information; counting, for each time period, a number of users whose amounts of change in the calculated respective state quantities are equal to or more than a given threshold value among the plurality of users; and detecting a time period in which the counted number of users satisfies a given condition.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: January 7, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Yuushi Toyoda, Koichiro Niinuma, Ryosuke Kawamura
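A compact sketch of the counting step described above: per time period, count users whose change in state quantity (estimated from camera images) meets a threshold, then flag periods where that count satisfies a condition. The state quantities, threshold, and condition are synthetic.

```python
import numpy as np

# state[u, p]: state quantity of user u in time period p (e.g. an engagement score).
state = np.array([[0.2, 0.9, 0.3],
                  [0.1, 0.8, 0.2],
                  [0.5, 0.6, 0.5]])

change = np.abs(np.diff(state, axis=1))          # amount of change per user, per period
threshold = 0.4
counts = (change >= threshold).sum(axis=0)       # users at or above the threshold per period

condition = counts >= 2                          # the "given condition" on the count
print("detected periods:", np.flatnonzero(condition) + 1)
```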
  • Publication number: 20190320139
    Abstract: A recording medium on which a user assistance program is recorded which makes a computer perform: based on image information obtained by photographing a plurality of users who use a given service, calculating state quantities of the plurality of respective users corresponding to the image information; counting, for each time period, a number of users whose amounts of change in the calculated respective state quantities are equal to or more than a given threshold value among the plurality of users; and detecting a time period in which the counted number of users satisfies a given condition.
    Type: Application
    Filed: March 5, 2019
    Publication date: October 17, 2019
    Applicant: FUJITSU LIMITED
    Inventors: Yuushi TOYODA, Koichiro NIINUMA, Ryosuke KAWAMURA
  • Patent number: 10410051
    Abstract: A method of extracting a region in a distance image including pixels, the method includes: for each of adjacent pixel pairs in the distance image, generating a third pixel group that includes a first pixel group to which a first pixel belongs and a second pixel group to which a second pixel belongs based on a difference between pixel values of the first pixel and the second pixel included in the adjacent pixel pair; dividing the distance image into regions by determining whether to generate a third region represented by the third pixel group by merging a first region represented by the first pixel group and a second region represented by the second pixel group, based on a positional relationship of points represented by pixels included in the third pixel group; and selecting a region that satisfies a predetermined condition from among the regions.
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: September 10, 2019
    Assignee: FUJITSU LIMITED
    Inventor: Koichiro Niinuma
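The region extraction above reads like a graph-merging pass over adjacent pixel pairs. The sketch below shows one possible reading with a union-find over pixels: neighbours with similar depth are grouped, subject to a positional-relationship test on the merged group. The depth values, threshold, and the merge test are illustrative stand-ins.

```python
import numpy as np

depth = np.array([[1.0, 1.1, 5.0],
                  [1.0, 1.2, 5.1],
                  [4.0, 4.1, 5.2]])
h, w = depth.shape
parent = list(range(h * w))                        # union-find over pixels

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[rb] = ra

def allow_merge(group_a, group_b):
    """Stand-in for the positional-relationship test on the merged group's 3D points."""
    return True

pixel_diff = 0.3
for y in range(h):
    for x in range(w):
        for dy, dx in ((0, 1), (1, 0)):            # adjacent pixel pairs
            ny, nx = y + dy, x + dx
            if ny < h and nx < w and abs(depth[y, x] - depth[ny, nx]) <= pixel_diff:
                if allow_merge(find(y * w + x), find(ny * w + nx)):
                    union(y * w + x, ny * w + nx)

labels = np.array([find(i) for i in range(h * w)]).reshape(h, w)
print(labels)                                      # regions; select those meeting a condition
```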
  • Publication number: 20190236618
    Abstract: A non-transitory recording medium recording a degree-of-interest evaluating program which causes a computer to execute a process, the process includes: identifying, based on a combination of a terminal motion amount relating to a change in an orientation of a terminal and information on the orientation of the terminal, a first cluster in which the terminal motion amount and the orientation of the terminal are in a specific state from a plurality of clusters into which the terminal motion amount is categorized; determining whether the terminal motion amount belongs to an inattentive viewing state of an operator of the terminal for content based on an operating state of the terminal; and determining a parameter to evaluate a degree of interest of the operator of the terminal based on the terminal motion amount which belongs to the first cluster and the terminal motion amount which belongs to the inattentive viewing state.
    Type: Application
    Filed: December 19, 2018
    Publication date: August 1, 2019
    Applicant: FUJITSU LIMITED
    Inventors: Teruyuki Sato, Koichiro NIINUMA
  • Publication number: 20190171282
    Abstract: A non-transitory computer-readable storage medium storing a program that causes a processor of a head mounted display to execute a process, the process includes obtaining a result of measurement performed by an acceleration sensor included in the head mounted display; and determining whether a user with the head mounted display is chewing a food or the user is speaking based on the result of the measurement performed by the acceleration sensor.
    Type: Application
    Filed: November 9, 2018
    Publication date: June 6, 2019
    Applicant: FUJITSU LIMITED
    Inventors: Motonobu Mihara, Akinori Taguchi, Koichiro Niinuma
  • Patent number: 10296360
    Abstract: A display control method includes: based on information acquired from an information processing terminal that accesses content provided by an information processing device, computing a degree of interest and a degree of perplexity, with respect to the content, of a user using the information processing terminal; and, based on the computed degree of interest and the computed degree of perplexity, displaying a symbol corresponding to the information processing terminal at a corresponding position in a region that has degree of interest and degree of perplexity as axes.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: May 21, 2019
    Assignee: FUJITSU LIMITED
    Inventors: Koichiro Niinuma, Teruyuki Sato, Arata Shimizu, Masahiro Hirata, Masao Hirocho, Kazutoshi Sakaguchi
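The display described above is essentially a two-axis plot. The sketch below positions one symbol per terminal on interest and perplexity axes; the scores themselves would come from terminal access information and are made up here.

```python
import matplotlib.pyplot as plt

terminals = {
    "terminal-A": (0.8, 0.2),   # (degree of interest, degree of perplexity)
    "terminal-B": (0.3, 0.7),
    "terminal-C": (0.6, 0.5),
}

fig, ax = plt.subplots()
for name, (interest, perplexity) in terminals.items():
    ax.scatter(interest, perplexity, marker="o")    # symbol corresponding to the terminal
    ax.annotate(name, (interest, perplexity))
ax.set_xlabel("degree of interest")
ax.set_ylabel("degree of perplexity")
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
plt.savefig("interest_vs_perplexity.png")
```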
  • Patent number: 10255887
    Abstract: A computer-readable recording medium storing an intensity of interest evaluation program that causes a computer to execute a procedure is provided. The procedure includes: using a movement amount detection sensor installed to an information processing terminal to detect a value of a movement amount of the information processing terminal in a period in which content is being displayed on the information processing terminal; and evaluating an intensity of interest toward the content based on a length of a first period within the period in which the detected value of the movement amount of the information processing terminal is a predetermined value or less.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: April 9, 2019
    Assignee: FUJITSU LIMITED
    Inventors: Teruyuki Sato, Koichiro Niinuma
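A short sketch of the evaluation rule above: while content is shown, measure how long the terminal's movement amount stays at or below a threshold and score interest from that length. The per-second movement samples and the scoring ratio are illustrative assumptions.

```python
import numpy as np

movement = np.array([0.0, 0.1, 0.05, 0.9, 0.02, 0.03, 0.01, 0.8])  # per-second sensor values
threshold = 0.1                                                     # the predetermined value

still_seconds = int((movement <= threshold).sum())        # length of the "first period"
intensity_of_interest = still_seconds / len(movement)     # longer stillness -> higher score
print(still_seconds, "s still ->", intensity_of_interest)
```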
  • Publication number: 20190094962
    Abstract: A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process, the process including obtaining gaze data that indicates a position of a gaze of a user at each of a plurality of times, determining a first movement regarding the gaze of the user based on the gaze data, displaying, on a screen of a display device, gaze information indicating a position of the gaze at each of a plurality of times from a time at which a second movement occurs when the first movement includes the second movement, and displaying, on the screen, gaze information indicating a position of the gaze at each of a plurality of times during a specified time period when the first movement does not include the second movement.
    Type: Application
    Filed: September 20, 2018
    Publication date: March 28, 2019
    Applicant: FUJITSU LIMITED
    Inventors: Yoshihide Fujita, Akinori Taguchi, Koichiro NIINUMA