Patents by Inventor Koichiro Niinuma
Koichiro Niinuma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250037433
Abstract: Operations include extracting a depiction of a person and associated movement of the person from a first video clip of a first training video included in a first domain dataset. The operations further include superimposing the depiction of the person and corresponding movement into a second video clip of a second training video included in a second domain dataset to generate a third video clip. The operations also include annotating the third video clip to indicate that the movement of the person corresponds to a particular type of behavior, the annotating being based on the first video clip also being annotated to indicate that the movement of the person corresponds to the particular type of behavior. Moreover, the operations include training a machine learning model to identify the particular type of behavior using the second training video having the annotated third video clip included therewith.
Type: Application
Filed: February 5, 2024
Publication date: January 30, 2025
Applicant: Fujitsu Limited
Inventors: Pradeep NARWADE, Ryosuke KAWAMURA, Gaurav GAJBHIYE, Koichiro NIINUMA
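The core of this is cross-domain video augmentation: a masked person is pasted, frame by frame, from an annotated source clip into a target-domain clip, and the source label travels with the synthesized clip. A minimal Python sketch, assuming per-frame person masks are already available (function names and array shapes are illustrative, not the patent's implementation):

    import numpy as np

    def superimpose_person(source_clip, person_masks, target_clip, label):
        """Paste a masked person (and thus their movement) from a source clip
        into a target clip, carrying the source behavior label over.

        source_clip, target_clip: (T, H, W, 3) uint8 arrays of equal shape.
        person_masks: (T, H, W) boolean array marking the person per frame.
        """
        augmented = target_clip.copy()
        for t in range(len(augmented)):
            augmented[t][person_masks[t]] = source_clip[t][person_masks[t]]
        # The synthesized clip inherits the source clip's annotation,
        # e.g. "falling", so it can be used directly as training data.
        return augmented, label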
-
Patent number: 12175753
Abstract: A non-transitory computer-readable recording medium has stored therein a program that causes a computer to execute a process, the process including: acquiring movie data including a plurality of consecutive frames; calculating a first likelihood of a class of the movie data by inputting the acquired movie data to a trained model; calculating an optical flow indicating movement of an area included in the movie data, based on the movie data; generating occluded movie data by setting an occluded area in each of the frames included in the movie data, based on the optical flow; calculating a second likelihood of a class of the occluded movie data by inputting the occluded movie data to the model; identifying an area that affects identification of the class among areas in the movie data, based on the first likelihood and the second likelihood; and displaying the identified area that affects identification of the class.
Type: Grant
Filed: July 19, 2022
Date of Patent: December 24, 2024
Assignees: FUJITSU LIMITED, UNIVERSITY OF TSUKUBA
Inventors: Tomoki Uchiyama, Naoya Sogi, Koichiro Niinuma, Kazuhiro Fukui
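Read as an algorithm, this is flow-guided occlusion analysis: occlude a region, let the occluder follow the optical flow across frames, and score the region by how much the class likelihood drops. A rough sketch under those assumptions (the model and flow interfaces are illustrative):

    import numpy as np

    def occlusion_importance(clip, model, flow, patch=16):
        """clip: (T, H, W, 3) float array; model(clip) -> class likelihood;
        flow: (T, H, W, 2) per-frame optical flow as (dy, dx)."""
        first = model(clip)                        # first likelihood
        T, H, W, _ = clip.shape
        heat = np.zeros((H, W))
        for y in range(0, H - patch + 1, patch):
            for x in range(0, W - patch + 1, patch):
                occluded, cy, cx = clip.copy(), y, x
                for t in range(T):
                    occluded[t, cy:cy + patch, cx:cx + patch] = 0.0
                    dy, dx = flow[t, cy, cx]       # track the moving area
                    cy = int(np.clip(cy + dy, 0, H - patch))
                    cx = int(np.clip(cx + dx, 0, W - patch))
                second = model(occluded)           # second likelihood
                heat[y:y + patch, x:x + patch] = first - second
        return heat   # large values mark areas that affect the class decision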
-
Publication number: 20240290021
Abstract: In an example, a method may include deforming a first ray associated with a dynamic object at a first time using a first neural network and a latent code to obtain a deformed ray. The method may also include obtaining a hyperspace code associated with the first ray by inputting the first ray, the first time, and the latent code into a second neural network. The method may further include sampling one or more points from the deformed ray. The method may also include combining the sampled points and the hyperspace code into a network input. The method may further include inputting the network input into a third neural network to obtain RGB values for rendering images of a three-dimensional scene representative of the dynamic object at a second time.
Type: Application
Filed: February 27, 2023
Publication date: August 29, 2024
Applicants: Fujitsu Limited, CARNEGIE MELLON UNIVERSITY
Inventors: Heng YU, Joel JULIN, Zoltán Ádám MILACSKI, Koichiro NIINUMA, Laszlo JENI
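The three networks chain together roughly as: (ray, time, latent code) yields a deformed ray and a hyperspace code, then sampled ray points plus the code yield RGB. A toy PyTorch sketch with made-up dimensions (6-number rays, an 8-d latent code, a 4-d hyperspace code; none of these figures come from the filing):

    import torch
    import torch.nn as nn

    class MLP(nn.Sequential):
        def __init__(self, d_in, d_out, width=64):
            super().__init__(nn.Linear(d_in, width), nn.ReLU(),
                             nn.Linear(width, d_out))

    deform_net = MLP(6 + 8 + 1, 6)   # first network: ray -> deformed ray
    hyper_net  = MLP(6 + 8 + 1, 4)   # second network: ray -> hyperspace code
    color_net  = MLP(3 + 4, 3)       # third network: point + code -> RGB

    def render_ray(ray, t, latent, n_samples=32):
        inp = torch.cat([ray, latent, t])
        deformed = deform_net(inp)               # deform the ray at time t
        code = hyper_net(inp)                    # hyperspace code for the ray
        origin, direction = deformed[:3], deformed[3:]
        depths = torch.linspace(0.1, 1.0, n_samples).unsqueeze(1)
        points = origin + depths * direction     # sample points on the ray
        net_in = torch.cat([points, code.expand(n_samples, -1)], dim=1)
        return torch.sigmoid(color_net(net_in))  # per-sample RGB values

    rgb = render_ray(torch.randn(6), torch.tensor([0.5]), torch.randn(8))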
-
Patent number: 12073655
Abstract: A method may include obtaining a facial image of a subject and identifying a number of new images to be synthesized with target AU combinations and categories of intensity. The method may also include synthesizing the number of new images, using the facial image of the subject as the base image, with the target AU combinations and categories of intensity, such that the new images have AU combinations different from those of the facial image of the subject. The method may additionally include adding the number of new images to a dataset and training a machine learning system using the dataset to identify a facial expression of the subject.
Type: Grant
Filed: August 2, 2021
Date of Patent: August 27, 2024
Assignees: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
Inventors: Koichiro Niinuma, Jeffrey F. Cohn, Laszlo A. Jeni
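Operationally this is targeted dataset expansion: render the same subject under AU combinations the data lacks, then train on the enlarged set. A hedged sketch; `synthesize` stands in for whatever conditional generator is used and is not from the patent:

    def synthesize(base_image, au_intensities):
        # Placeholder for a generator conditioned on AU intensities
        # (e.g. a GAN); it simply returns the base image here.
        return base_image

    def augment_subject(base_image, target_combinations, dataset):
        """Add one synthesized, labeled image per target AU combination
        and intensity category, e.g. {"AU6": 2, "AU12": 3}."""
        for aus in target_combinations:
            dataset.append((synthesize(base_image, aus), aus))
        return dataset   # then train the expression classifier on it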
-
Publication number: 20240161376
Abstract: In an example, a method may include obtaining, from a data source, first data including multiple frames each including a human face. The method may include automatically detecting, in each of the multiple frames, one or more facial landmarks and one or more action units (AUs) associated with the human face. The method may also include automatically generating one or more semantic masks based at least on the one or more facial landmarks, the one or more semantic masks individually corresponding to the human face. The method may further include obtaining a facial hyperspace using at least the first data, the one or more AUs, and the semantic masks. The method may also include generating a synthetic image of the human face using a first frame of the multiple frames and one or more AU intensities individually associated with the one or more AUs.
Type: Application
Filed: March 29, 2023
Publication date: May 16, 2024
Applicants: Fujitsu Limited, CARNEGIE MELLON UNIVERSITY
Inventors: Heng YU, Koichiro NIINUMA, Laszlo JENI
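The semantic masks can be pictured as landmark-bounded regions rasterized into binary images, one per facial part. A small sketch using scikit-image's polygon fill (the landmark layout and region indices are assumptions, not the filing's):

    import numpy as np
    from skimage.draw import polygon   # scikit-image

    def semantic_mask(landmarks, region_indices, shape):
        """landmarks: (N, 2) array of (row, col) coordinates;
        region_indices: landmark indices tracing one region's outline,
        e.g. the mouth; shape: (H, W) of the output mask."""
        mask = np.zeros(shape, dtype=bool)
        outline = landmarks[region_indices]
        rr, cc = polygon(outline[:, 0], outline[:, 1], shape)
        mask[rr, cc] = True
        return mask   # one such mask per semantic face region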
-
Publication number: 20240029434
Abstract: A non-transitory computer-readable recording medium has stored therein a program that causes a computer to execute a process, the process including: acquiring movie data including a plurality of consecutive frames; calculating a first likelihood of a class of the movie data by inputting the acquired movie data to a trained model; calculating an optical flow indicating movement of an area included in the movie data, based on the movie data; generating occluded movie data by setting an occluded area in each of the frames included in the movie data, based on the optical flow; calculating a second likelihood of a class of the occluded movie data by inputting the occluded movie data to the model; identifying an area that affects identification of the class among areas in the movie data, based on the first likelihood and the second likelihood; and displaying the identified area that affects identification of the class.
Type: Application
Filed: July 19, 2022
Publication date: January 25, 2024
Applicants: FUJITSU LIMITED, University of Tsukuba
Inventors: Tomoki UCHIYAMA, Naoya SOGI, Koichiro NIINUMA, Kazuhiro FUKUI
-
Publication number: 20230316700
Abstract: An information processing apparatus acquires an image; generates a shielding image in which a part of the area of the acquired image has been shielded; calculates, by inputting the image to a first model that has been trained, a first likelihood of the target object included in the image; calculates, by inputting the shielding image to a second model that, when an image is input, calculates an approximation value of the likelihood of the target object included in that image, a second likelihood corresponding to the approximation value of the likelihood of the target object included in the shielding image; specifies, based on the first likelihood and the second likelihood, an area that affects discrimination of the class and that is included in the area of the image; and displays the specified area that affects discrimination of the class.
Type: Application
Filed: March 31, 2022
Publication date: October 5, 2023
Applicants: FUJITSU LIMITED, University of Tsukuba
Inventors: Naoya SOGI, Tomoki UCHIYAMA, Kazuhiro FUKUI, Koichiro NIINUMA
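The mechanic is a two-model occlusion probe: the first model scores the intact image, the second approximates the score for shielded copies, and the gap between the two localizes the discriminative area. A simplified sketch (interfaces are illustrative):

    import numpy as np

    def discriminative_area(image, first_model, second_model, patch=16):
        """first_model(image) -> likelihood of the target object;
        second_model(shielded) -> approximation of that likelihood."""
        first = first_model(image)                  # first likelihood
        H, W = image.shape[:2]
        heat = np.zeros((H, W))
        for y in range(0, H - patch + 1, patch):
            for x in range(0, W - patch + 1, patch):
                shielded = image.copy()
                shielded[y:y + patch, x:x + patch] = 0   # shield the area
                second = second_model(shielded)     # second likelihood
                heat[y:y + patch, x:x + patch] = first - second
        return heat   # high values: areas affecting class discrimination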
-
Publication number: 20230029505
Abstract: A method may include obtaining a facial image of a subject and identifying a number of new images to be synthesized with target AU combinations and categories of intensity. The method may also include synthesizing the number of new images, using the facial image of the subject as the base image, with the target AU combinations and categories of intensity, such that the new images have AU combinations different from those of the facial image of the subject. The method may additionally include adding the number of new images to a dataset and training a machine learning system using the dataset to identify a facial expression of the subject.
Type: Application
Filed: August 2, 2021
Publication date: February 2, 2023
Applicant: FUJITSU LIMITED
Inventors: Koichiro NIINUMA, Jeffrey F. COHN, Laszlo A. JENI
-
Patent number: 11557149
Abstract: A method may include obtaining a dataset including a target Action Unit (AU) combination and labeled images of the target AU combination with at least a first category of intensity for each AU of the target AU combination and a second category of intensity for each AU of the target AU combination. The method may also include determining that the first category of intensity for a first AU has a higher number of labeled images than the second category of intensity for the first AU, and based on the determination, identifying a number of new images to be synthesized in the second category of intensity for the first AU. The method may additionally include synthesizing the number of new images with the second category of intensity for the first AU, and adding the new images to the dataset.
Type: Grant
Filed: August 14, 2020
Date of Patent: January 17, 2023
Assignees: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
Inventors: Koichiro Niinuma, Laszlo A. Jeni, Itir Onal Ertugrul, Jeffrey F. Cohn
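In effect this balances intensity categories by synthesis: count the labeled images per category and generate enough new ones to even the counts out. A minimal sketch generalizing to any number of categories (the two-category case in the claim is a special case):

    from collections import Counter

    def plan_synthesis(dataset, au):
        """dataset: iterable of (image, {AU: intensity_category}) pairs.
        Returns how many images to synthesize per under-represented
        intensity category of the given AU."""
        counts = Counter(labels[au] for _, labels in dataset if au in labels)
        target = max(counts.values())
        return {cat: target - n for cat, n in counts.items() if n < target}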
-
Publication number: 20220051003
Abstract: A method may include obtaining a dataset including a target Action Unit (AU) combination and labeled images of the target AU combination with at least a first category of intensity for each AU of the target AU combination and a second category of intensity for each AU of the target AU combination. The method may also include determining that the first category of intensity for a first AU has a higher number of labeled images than the second category of intensity for the first AU, and based on the determination, identifying a number of new images to be synthesized in the second category of intensity for the first AU. The method may additionally include synthesizing the number of new images with the second category of intensity for the first AU, and adding the new images to the dataset.
Type: Application
Filed: August 14, 2020
Publication date: February 17, 2022
Applicants: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
Inventors: Koichiro NIINUMA, Laszlo A. JENI, Itir Onal ERTUGRUL, Jeffrey F. COHN
-
Patent number: 11244206
Abstract: A method may include obtaining a base facial image, and obtaining a first set of base facial features within the base facial image, the first set of base facial features associated with a first facial AU to be detected in an analysis facial image. The method may also include obtaining a second set of base facial features within the base facial image, the second set of facial features associated with a second facial AU to be detected. The method may include obtaining the analysis facial image, and applying a first image normalization to the analysis facial image using the first set of base facial features to facilitate prediction of a probability of the first facial AU. The method may include applying a second image normalization to the analysis facial image using the second set of base facial features to facilitate prediction of a probability of the second facial AU.
Type: Grant
Filed: September 6, 2019
Date of Patent: February 8, 2022
Assignees: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
Inventors: Koichiro Niinuma, Laszlo A. Jeni, Itir Onal Ertugrul, Jeffrey F. Cohn
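The point is that normalization is AU-specific: each AU gets its own base feature set, hence its own alignment of the analysis face before prediction. One way to sketch that alignment is a least-squares affine fit between landmark sets (an assumption; the patent does not prescribe this estimator):

    import numpy as np

    def normalization_for_au(analysis_landmarks, base_landmarks_for_au):
        """Fit the affine warp mapping the analysis face's landmarks onto
        the base features chosen for one AU; using a different base set
        per AU yields a per-AU normalization. Landmarks: (n, 2) arrays."""
        n = len(base_landmarks_for_au)
        src = np.hstack([analysis_landmarks, np.ones((n, 1))])   # (n, 3)
        warp, *_ = np.linalg.lstsq(src, base_landmarks_for_au, rcond=None)
        return warp   # (3, 2); warp the image with it, then predict the AU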
-
Patent number: 11216064
Abstract: A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process, the process including obtaining gaze data that indicates a position of a gaze of a user at each of a plurality of times, determining a first movement regarding the gaze of the user based on the gaze data, displaying, on a screen of a display device, gaze information indicating a position of the gaze at each of a plurality of times from a time at which a second movement occurs when the first movement includes the second movement, and displaying, on the screen, gaze information indicating a position of the gaze at each of a plurality of times during a specified time period when the first movement does not include the second movement.
Type: Grant
Filed: September 20, 2018
Date of Patent: January 4, 2022
Assignee: FUJITSU LIMITED
Inventors: Yoshihide Fujita, Akinori Taguchi, Koichiro Niinuma
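The display rule reduces to a branch: if the trigger movement occurred, show the gaze trail from that moment; otherwise show a fixed recent window. A compact sketch (the field layout is illustrative):

    def gaze_trail(gaze_points, trigger_time, window_seconds):
        """gaze_points: list of (timestamp, x, y); trigger_time: when the
        second movement occurred, or None if it did not occur."""
        if trigger_time is not None:
            return [p for p in gaze_points if p[0] >= trigger_time]
        latest = gaze_points[-1][0]
        return [p for p in gaze_points if p[0] >= latest - window_seconds]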
-
Publication number: 20210073600
Abstract: A method may include obtaining a base facial image, and obtaining a first set of base facial features within the base facial image, the first set of base facial features associated with a first facial AU to be detected in an analysis facial image. The method may also include obtaining a second set of base facial features within the base facial image, the second set of facial features associated with a second facial AU to be detected. The method may include obtaining the analysis facial image, and applying a first image normalization to the analysis facial image using the first set of base facial features to facilitate prediction of a probability of the first facial AU. The method may include applying a second image normalization to the analysis facial image using the second set of base facial features to facilitate prediction of a probability of the second facial AU.
Type: Application
Filed: September 6, 2019
Publication date: March 11, 2021
Applicants: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
Inventors: Koichiro NIINUMA, Laszlo A. JENI, Itir Onal ERTUGRUL, Jeffrey F. COHN
-
Patent number: 10627897
Abstract: A non-transitory computer-readable storage medium storing a program that causes a processor of a head mounted display to execute a process, the process includes: obtaining a result of measurement performed by an acceleration sensor included in the head mounted display; and determining whether a user wearing the head mounted display is chewing food or speaking, based on the result of the measurement performed by the acceleration sensor.
Type: Grant
Filed: November 9, 2018
Date of Patent: April 21, 2020
Assignee: FUJITSU LIMITED
Inventors: Motonobu Mihara, Akinori Taguchi, Koichiro Niinuma
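One plausible reading of the accelerometer test is spectral: chewing produces a fairly periodic jaw rhythm around 1-2 Hz, while speech-related motion is more broadband. The sketch below encodes that assumption (the band and threshold are guesses, not figures from the patent):

    import numpy as np

    def classify_jaw_motion(accel, fs):
        """accel: 1-D acceleration magnitudes from the HMD; fs: Hz."""
        spectrum = np.abs(np.fft.rfft(accel - accel.mean()))
        freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
        band = (freqs >= 1.0) & (freqs <= 2.0)        # assumed chewing band
        ratio = spectrum[band].sum() / (spectrum.sum() + 1e-9)
        return "chewing" if ratio > 0.5 else "speaking"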
-
Patent number: 10573083
Abstract: A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process including: estimating a first three-dimensional position of a wearable display device and a first arrangement of the wearable display device; estimating a second three-dimensional position of a physical object and a second arrangement of the physical object; estimating a third three-dimensional position of a specified body part of a person; determining whether a positional relationship between the physical object and the specified body part satisfies a criterion; displaying a virtual target object with a display position and a display arrangement determined based on the second three-dimensional position and the second arrangement when the positional relationship satisfies the criterion; and displaying the virtual target object with a display position and a display arrangement determined based on the first three-dimensional position and the first arrangement when the positional relationship does not satisfy the criterion.
Type: Grant
Filed: September 12, 2018
Date of Patent: February 25, 2020
Assignee: FUJITSU LIMITED
Inventors: Hiroyuki Kobayashi, Koichiro Niinuma, Masayuki Nishino
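The anchoring logic is a proximity switch: while the positional relationship holds (say, the tracked body part is near the physical object), render the virtual object against the object's pose; otherwise against the headset's. A sketch with an invented distance criterion:

    import numpy as np

    def choose_anchor(hmd_pose, object_pose, body_part_position, threshold=0.3):
        """Poses are (position, arrangement) pairs with 3-vector positions;
        the 0.3 m threshold is illustrative only."""
        near = np.linalg.norm(np.asarray(object_pose[0]) -
                              np.asarray(body_part_position)) < threshold
        return object_pose if near else hmd_pose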
-
Patent number: 10558413
Abstract: A degree of interest evaluation device includes a memory and a processor coupled to the memory. The processor is configured to: in cases in which input of an operation has been received while content is displayed on an information processing terminal, measure the operation time of the operation; measure a time-wise change of the content being displayed during the operation time; decide on a message to be displayed in accordance with the time-wise change; and display the decided message in association with the content.
Type: Grant
Filed: September 14, 2017
Date of Patent: February 11, 2020
Assignee: FUJITSU LIMITED
Inventors: Teruyuki Sato, Koichiro Niinuma
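The decision couples two measurements, operation time and how much the content changed meanwhile. A toy sketch; the thresholds and message strings are invented for illustration:

    def choose_message(operation_seconds, content_change_ratio):
        """content_change_ratio: 0 = static content, 1 = fully changed."""
        if content_change_ratio < 0.1 and operation_seconds > 30:
            return "Long dwell on unchanged content: high interest."
        if content_change_ratio > 0.9:
            return "Rapid paging through content: low interest."
        return None   # no message displayed for this content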
-
Patent number: 10531045
Abstract: A recording medium on which a user assistance program is recorded, the program making a computer perform: based on image information obtained by photographing a plurality of users who use a given service, calculating state quantities of the respective users corresponding to the image information; counting, for each time period, a number of users among the plurality of users whose amounts of change in the calculated respective state quantities are equal to or more than a given threshold value; and detecting a time period in which the counted number of users satisfies a given condition.
Type: Grant
Filed: March 5, 2019
Date of Patent: January 7, 2020
Assignee: FUJITSU LIMITED
Inventors: Yuushi Toyoda, Koichiro Niinuma, Ryosuke Kawamura
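Computationally this is a per-period threshold count over all users' state quantities. A direct sketch (the data layout is assumed, not specified by the patent):

    def detect_reaction_periods(state_series, threshold, condition):
        """state_series: {user: [state at period 0, 1, ...]}, equal lengths;
        condition(count) -> bool, e.g. lambda n: n >= 5."""
        periods = len(next(iter(state_series.values()))) - 1
        detected = []
        for t in range(periods):
            count = sum(abs(s[t + 1] - s[t]) >= threshold
                        for s in state_series.values())
            if condition(count):
                detected.append(t)
        return detected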
-
Publication number: 20190320139
Abstract: A recording medium on which a user assistance program is recorded, the program making a computer perform: based on image information obtained by photographing a plurality of users who use a given service, calculating state quantities of the respective users corresponding to the image information; counting, for each time period, a number of users among the plurality of users whose amounts of change in the calculated respective state quantities are equal to or more than a given threshold value; and detecting a time period in which the counted number of users satisfies a given condition.
Type: Application
Filed: March 5, 2019
Publication date: October 17, 2019
Applicant: FUJITSU LIMITED
Inventors: Yuushi TOYODA, Koichiro NIINUMA, Ryosuke KAWAMURA
-
Patent number: 10410051
Abstract: A method of extracting a region in a distance image including pixels, the method includes: for each of adjacent pixel pairs in the distance image, generating a third pixel group that includes a first pixel group to which a first pixel belongs and a second pixel group to which a second pixel belongs based on a difference between pixel values of the first pixel and the second pixel included in the adjacent pixel pair; dividing the distance image into regions by determining whether to generate a third region represented by the third pixel group by merging a first region represented by the first pixel group and a second region represented by the second pixel group, based on a positional relationship of points represented by pixels included in the third pixel group; and selecting a region that satisfies a predetermined condition from among the regions.
Type: Grant
Filed: November 11, 2016
Date of Patent: September 10, 2019
Assignee: FUJITSU LIMITED
Inventor: Koichiro Niinuma
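This is, at heart, a union-find segmentation over the pixel grid: adjacent pixels merge when their depth difference is small, and the claim adds a geometric check on the merged 3-D points. The sketch below keeps only the depth test (the positional-relationship test is omitted):

    import numpy as np

    def segment_distance_image(depth, max_diff):
        """depth: (H, W) array of distance values."""
        H, W = depth.shape
        parent = list(range(H * W))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # path halving
                i = parent[i]
            return i

        for y in range(H):
            for x in range(W):
                i = y * W + x
                for j in ((i + 1) if x + 1 < W else None,
                          (i + W) if y + 1 < H else None):
                    if j is not None and abs(depth.flat[i] - depth.flat[j]) < max_diff:
                        parent[find(i)] = find(j)   # merge the pixel groups

        labels = np.array([find(i) for i in range(H * W)]).reshape(H, W)
        return labels   # then select the region meeting the chosen condition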
-
Publication number: 20190236618
Abstract: A non-transitory recording medium recording a degree-of-interest evaluating program which causes a computer to execute a process, the process includes: identifying, based on a combination of a terminal motion amount relating to a change in an orientation of a terminal and information on the orientation of the terminal, a first cluster in which the terminal motion amount and the orientation of the terminal are in a specific state from a plurality of clusters into which the terminal motion amount is categorized; determining whether the terminal motion amount belongs to an inattentive viewing state of an operator of the terminal for content based on an operating state of the terminal; and determining a parameter to evaluate a degree of interest of the operator of the terminal based on the terminal motion amount which belongs to the first cluster and the terminal motion amount which belongs to the inattentive viewing state.
Type: Application
Filed: December 19, 2018
Publication date: August 1, 2019
Applicant: FUJITSU LIMITED
Inventors: Teruyuki Sato, Koichiro NIINUMA
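A concrete reading: cluster the motion samples, take the cluster matching the target motion/orientation state, and contrast it with samples flagged as inattentive viewing from the operating state. A sketch using k-means; the "lowest-motion cluster" rule and the final parameter are assumptions, not the filing's definitions:

    import numpy as np
    from sklearn.cluster import KMeans

    def interest_parameter(motion, orientation, operating, k=3):
        """motion, orientation: 1-D arrays per sample; operating: boolean
        array, False where the terminal was idle (treated here as the
        inattentive viewing state)."""
        X = np.column_stack([motion, orientation])
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
        first_cluster = min(range(k), key=lambda c: motion[labels == c].mean())
        attentive = motion[(labels == first_cluster) & operating]
        inattentive = motion[~operating]
        # One possible evaluation parameter: how far attentive motion sits
        # below the inattentive baseline.
        return inattentive.mean() - attentive.mean()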