Patents by Inventor Osafumi Nakayama

Osafumi Nakayama has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230377374
    Abstract: From plural time-series observation features acquired by observing the movements of a person, plural candidate segments of a target action series are decided, the target action series containing plural actions that each express plural movements. Each candidate segment is divided into action segments, each being the time segment of one action; the likelihood computed for each of the plural actions in each action segment is normalized per action segment; and a representative value of the normalized likelihoods of the action segments, selected from among all of the action segments in the candidate segment based on the order of actions in the target action series, is computed as an evaluation value. A candidate segment is determined to be the target action series when the evaluation value exceeds a common threshold.
    Type: Application
    Filed: June 26, 2023
    Publication date: November 23, 2023
    Applicant: Fujitsu Limited
    Inventors: Junya Fujimoto, Osafumi Nakayama
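    The evaluation described in the abstract above can be illustrated with a minimal Python sketch: likelihoods are normalized within each action segment, the normalized likelihood of the expected action is collected per segment following the action order, and a representative value is compared against a common threshold. The toy scores, the use of the mean as the representative value, and the threshold value are assumptions for illustration, not details taken from the patent.

      import numpy as np

      def evaluate_candidate(likelihoods, action_order):
          """likelihoods: (num_action_segments, num_actions) raw scores for one
          candidate segment; action_order: expected action index per segment."""
          # Normalize likelihoods within each action segment so they sum to 1.
          norm = likelihoods / likelihoods.sum(axis=1, keepdims=True)
          # Pick the normalized likelihood of the expected action in each segment.
          picked = norm[np.arange(len(action_order)), action_order]
          # Representative value (the mean is an assumption; other choices possible).
          return picked.mean()

      # Toy candidate: 3 action segments scored against 4 possible actions.
      scores = np.array([[0.7, 0.1, 0.1, 0.1],
                         [0.2, 0.6, 0.1, 0.1],
                         [0.1, 0.1, 0.1, 0.7]])
      order = [0, 1, 3]                      # expected order of actions
      THRESHOLD = 0.5                        # common threshold (illustrative)
      print(evaluate_candidate(scores, order) > THRESHOLD)  # True -> target series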
  • Publication number: 20230343142
    Abstract: In a hidden semi-Markov model, the observation probabilities for each type of movement of plural first hidden Markov models are learned using unsupervised learning. The learned observation probabilities are then fixed, the input first supervised data is augmented to give second supervised data, and the transition probabilities of the movements of the first hidden Markov models are learned by supervised learning that employs the second supervised data. The learned observation probabilities and transition probabilities are used to build the hidden semi-Markov model, which is a model for estimating segments of the actions. Augmentation adds the teacher information of the first supervised data to each item of data generated by at least one of oversampling in the time direction or oversampling in feature space.
    Type: Application
    Filed: June 26, 2023
    Publication date: October 26, 2023
    Applicant: Fujitsu Limited
    Inventors: Junya Fujimoto, Osafumi Nakayama
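    As a rough illustration of the augmentation step in the abstract above (oversampling in the time direction and in feature space, with the teacher information of the original data carried over), here is a minimal Python sketch. The frame-repetition and Gaussian-jitter oversampling, the repeat factor, and the noise scale are stand-ins chosen for brevity, not the patent's method.

      import numpy as np

      def augment(features, labels, time_factor=2, noise_scale=0.05, copies=3):
          """features: (T, D) observation sequence; labels: (T,) teacher info.
          Returns a list of (features, labels) pairs including the original."""
          out = [(features, labels)]
          # Oversampling in the time direction: repeat each frame `time_factor` times.
          f_time = np.repeat(features, time_factor, axis=0)
          l_time = np.repeat(labels, time_factor, axis=0)
          out.append((f_time, l_time))
          # Oversampling in feature space: jitter the features, keep the same labels.
          for _ in range(copies):
              jitter = features + noise_scale * np.random.randn(*features.shape)
              out.append((jitter, labels.copy()))
          return out

      feats = np.random.rand(100, 6)             # toy 100-frame, 6-D feature sequence
      labs = np.random.randint(0, 3, size=100)   # toy movement labels
      augmented = augment(feats, labs)
      print(len(augmented), augmented[1][0].shape)   # 5 (200, 6)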
  • Publication number: 20230343080
    Abstract: A hidden semi-Markov model includes plural second hidden Markov models, each containing plural first hidden Markov models that use types of movement of a person as states. The plural second hidden Markov models each use partial actions, which are parts of actions determined by combining plural movements, as states. In the hidden semi-Markov model, the observation probabilities for each type of movement of the plural first hidden Markov models are learned using unsupervised learning. The learned observation probabilities are fixed, the input first supervised data is augmented to give second supervised data, and the transition probabilities of the movements of the first hidden Markov models are learned by supervised learning that employs the second supervised data. The learned observation probabilities and transition probabilities are employed to build the hidden semi-Markov model, which is a model for estimating segments of the partial actions.
    Type: Application
    Filed: June 26, 2023
    Publication date: October 26, 2023
    Applicant: Fujitsu Limited
    Inventors: Junya Fujimoto, Osafumi Nakayama
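    The two-level structure described above (movement-level first hidden Markov models nested inside partial-action-level second hidden Markov models that together form the hidden semi-Markov model) can be written down as a simple data structure. The movement and partial-action names below are invented for illustration, and the probability fields are left unfilled because they would come from the unsupervised and supervised learning steps.

      # Minimal sketch of the two-level state structure, with invented movement
      # and partial-action names.  Observation probabilities would be learned
      # without labels; transition probabilities from augmented supervised data.
      first_hmms = {                       # movements as states (first HMMs)
          "reach":  {"obs_prob": None, "trans_prob": None},
          "grasp":  {"obs_prob": None, "trans_prob": None},
          "place":  {"obs_prob": None, "trans_prob": None},
      }

      second_hmms = {                      # partial actions as states (second HMMs)
          "pick_part":   ["reach", "grasp"],
          "attach_part": ["place"],
      }

      hsmm = {                             # the overall hidden semi-Markov model
          "states": list(second_hmms),     # partial actions
          "sub_models": {p: [first_hmms[m] for m in ms]
                         for p, ms in second_hmms.items()},
      }
      print(hsmm["states"])                # ['pick_part', 'attach_part']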
  • Publication number: 20230237690
    Abstract: An information processing device configured to: specify, from a moving image obtained by imaging work of a person, a first plurality of stationary positions at which the person is stationary and a movement order in which the person moves through the first plurality of stationary positions; divide the first plurality of stationary positions into a first plurality of clusters by clustering the first plurality of stationary positions; when a cluster included in the first plurality of clusters includes a pair of stationary positions with a relationship of a movement source and a movement destination in the movement order, divide a second plurality of stationary positions included in the cluster into a second plurality of clusters by clustering the second plurality of stationary positions; and generate a region of interest in the moving image based on the second plurality of clusters.
    Type: Application
    Filed: March 28, 2023
    Publication date: July 27, 2023
    Applicant: FUJITSU LIMITED
    Inventors: Masakiyo Tanaka, Osafumi Nakayama, Yuichi Murase, Chisato Shioda
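    A minimal sketch of the two-level clustering: stationary positions are clustered coarsely, a coarse cluster is split further when it contains a movement source/destination pair from the movement order, and each final cluster yields a bounding-box region of interest. The greedy distance-threshold clustering and both thresholds are assumptions for illustration, not the device's actual clustering method.

      import numpy as np

      def cluster(points, eps):
          """Greedy distance-threshold clustering (illustrative stand-in)."""
          labels = -np.ones(len(points), dtype=int)
          for i, p in enumerate(points):
              for j in range(i):
                  if labels[j] >= 0 and np.linalg.norm(p - points[j]) < eps:
                      labels[i] = labels[j]
                      break
              if labels[i] < 0:
                  labels[i] = labels.max() + 1
          return labels

      def rois(points, order, eps_coarse=50.0, eps_fine=15.0):
          """points: (N, 2) stationary positions; order: visit index per position."""
          labels = cluster(points, eps_coarse)
          boxes = []
          for c in np.unique(labels):
              idx = np.where(labels == c)[0]
              # Does this coarse cluster contain a source->destination pair?
              has_pair = any(abs(order[a] - order[b]) == 1
                             for a in idx for b in idx if a != b)
              groups = cluster(points[idx], eps_fine) if has_pair else np.zeros(len(idx))
              for g in np.unique(groups):
                  pts = points[idx][groups == g]
                  boxes.append((pts[:, 0].min(), pts[:, 1].min(),
                                pts[:, 0].max(), pts[:, 1].max()))
          return boxes

      pos = np.array([[100, 100], [102, 98], [300, 200], [305, 202], [310, 196]], float)
      print(rois(pos, order=np.arange(len(pos))))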
  • Publication number: 20230206639
    Abstract: An information processing apparatus acquires video image data that includes target objects, including a person and an object, and specifies, by using graph data that is stored in a storage unit and indicates relationships between target objects, a relationship between the target objects included in the acquired video image data. The information processing apparatus specifies, by using a feature value of the person included in the acquired video image data, a behavior of the person included in the video image data. The information processing apparatus predicts, by inputting the specified behavior of the person and the specified relationship into a probability model, a future behavior or a future state of the person.
    Type: Application
    Filed: September 16, 2022
    Publication date: June 29, 2023
    Applicant: FUJITSU LIMITED
    Inventors: Takuma Yamamoto, Yuya Obinata, Osafumi Nakayama
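    A minimal sketch of the final prediction step, assuming (for illustration only) that the probability model is a simple conditional probability table keyed by the pair of specified behavior and specified relationship; the behaviors, relationships, and probabilities below are invented.

      # Conditional table: P(future behavior | current behavior, relationship).
      # All entries are invented for illustration only.
      prob_model = {
          ("reach_for_item", "person_near_shelf"): {"pick_up_item": 0.7, "walk_away": 0.3},
          ("hold_item", "person_near_register"):   {"purchase": 0.8, "put_back": 0.2},
      }

      def predict(behavior, relationship):
          """Return the most probable future behavior, or None if the pair is unseen."""
          dist = prob_model.get((behavior, relationship))
          if not dist:
              return None
          return max(dist, key=dist.get)

      print(predict("reach_for_item", "person_near_shelf"))   # pick_up_item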
  • Patent number: 11109811
    Abstract: An apparatus is configured to execute: a first process for estimating a first component at a first time point by using a waveform and the first component calculated from the waveform before the first time point, the waveform being based on a running trace of a vehicle and the first component being less than a first frequency; a second process for estimating the first component at the first time point by using the waveform, the first component calculated from the waveform before the first time point, and a second component at the first time point, the second component being greater than the first frequency and predicted from the second component calculated from the waveform before the first time point; and a calculation process for calculating the second component at the first time point from the waveform, based on the first components estimated by the first and second processes.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: September 7, 2021
    Assignee: FUJITSU LIMITED
    Inventor: Osafumi Nakayama
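    A minimal sketch of the underlying idea of splitting a running-trace waveform into a component below a first frequency and the higher-frequency residual, using a recursive estimator that reuses the low-frequency estimate from before the current time point. The exponential smoothing and its factor are illustrative assumptions, not the patented two-process procedure.

      import numpy as np

      def split_components(waveform, alpha=0.05):
          """Causal low-pass: each low-frequency estimate reuses the estimate
          from before the current time point; the high-frequency component is
          the residual.  alpha is an illustrative smoothing factor."""
          low = np.empty_like(waveform)
          low[0] = waveform[0]
          for t in range(1, len(waveform)):
              low[t] = (1 - alpha) * low[t - 1] + alpha * waveform[t]
          high = waveform - low
          return low, high

      t = np.linspace(0, 10, 500)
      trace = 0.5 * t + 0.2 * np.sin(2 * np.pi * 3 * t)   # toy running trace
      low, high = split_components(trace)
      print(low[-1].round(2), high[-1].round(2))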
  • Patent number: 10893207
    Abstract: An object tracking apparatus is configured to execute a tracking process, a prediction process, an influence-degree obtaining process, and a difficulty-degree obtaining process, wherein the influence-degree obtaining process is configured to obtain a backside influence degree representing that detection of an object to be tracked is affected by another object that overlaps it, wherein the difficulty-degree obtaining process is configured to calculate, for each object to be tracked, a detection difficulty degree for detecting the object from each of the next frames captured by the respective cameras, based on the backside influence degree, and wherein the tracking process is configured to select the next frame that is included in a set of next frames in pieces of video and from which the object is to be detected, based on the detection difficulty degree, and to detect the object from the selected next frames.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: January 12, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Daisuke Ishii, Osafumi Nakayama
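    A minimal sketch of the frame-selection idea: per camera, measure how much the tracked object's predicted box is covered by another object (a stand-in for the backside influence degree), treat that as the detection difficulty degree, and pick the camera frame with the lowest difficulty. The overlap measure and the direct use of overlap as difficulty are illustrative assumptions.

      def occluded_fraction(box, others):
          """Fraction of `box` covered by any single other box (stand-in for the
          backside influence degree).  Boxes are (x1, y1, x2, y2)."""
          bx1, by1, bx2, by2 = box
          area = max(0.0, bx2 - bx1) * max(0.0, by2 - by1)
          best = 0.0
          for ox1, oy1, ox2, oy2 in others:
              iw = max(0.0, min(bx2, ox2) - max(bx1, ox1))
              ih = max(0.0, min(by2, oy2) - max(by1, oy1))
              best = max(best, (iw * ih) / area if area else 0.0)
          return best

      # Predicted boxes of the tracked object and of other objects, per camera
      # (values invented for illustration).
      cameras = {
          "cam0": {"target": (10, 10, 50, 90), "others": [(40, 10, 90, 90)]},
          "cam1": {"target": (200, 20, 240, 100), "others": [(400, 20, 440, 100)]},
      }
      difficulty = {c: occluded_fraction(v["target"], v["others"])
                    for c, v in cameras.items()}
      best_cam = min(difficulty, key=difficulty.get)
      print(difficulty, "-> detect in", best_cam)   # cam1 has no occlusion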
  • Patent number: 10748357
    Abstract: A waveform estimating method performed by a computer, the waveform estimating method including: estimating a first vibration component of less than a first frequency in a period from a present time to a time preceding by a half wavelength of the first frequency, using an input waveform in the period, the input waveform corresponding to a driving trajectory of a vehicle traveling on a roadway; and calculating a second vibration component of the first frequency or higher in the period by subtracting the first vibration component from the input waveform.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: August 18, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Osafumi Nakayama
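    A minimal sketch of the described decomposition, assuming (for illustration) that the low-frequency estimate over the trailing half-wavelength window is a simple running mean; the window length in samples follows from the sampling rate and the first frequency, and the higher-frequency component is the input minus that estimate.

      import numpy as np

      def high_frequency_component(waveform, first_freq_hz, sample_rate_hz):
          """Estimate the sub-first-frequency component over the trailing
          half-wavelength window (a running mean, an illustrative choice)
          and subtract it from the input to get the higher-frequency part."""
          half_wavelength = int(sample_rate_hz / (2.0 * first_freq_hz))  # samples
          low = np.empty_like(waveform)
          for t in range(len(waveform)):
              start = max(0, t - half_wavelength + 1)
              low[t] = waveform[start:t + 1].mean()   # present back to half wave
          return waveform - low

      fs, f1 = 100.0, 1.0                                  # 100 Hz sampling, 1 Hz cut
      t = np.arange(0, 5, 1 / fs)
      trajectory = 0.3 * t + 0.05 * np.sin(2 * np.pi * 5 * t)  # drift + 5 Hz wobble
      print(high_frequency_component(trajectory, f1, fs)[-5:].round(3))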
  • Patent number: 10740923
    Abstract: A non-transitory computer-readable recording medium has recorded thereon a computer program for face direction estimation that causes a computer to execute a process including: generating, for each presumed face direction, a face direction converted image by converting the direction of the face represented on an input image into a prescribed direction; generating, for each presumed face direction, a reversed face image by reversing the face represented on the face direction converted image; converting the direction of the face represented on the reversed face image to be the presumed face direction; calculating, for each presumed face direction, an evaluation value that represents the degree of difference between the face represented on the reversed face image and the face represented on the input image, based on the conversion result; and specifying, based on the evaluation value, the direction of the face represented on the input image.
    Type: Grant
    Filed: October 26, 2017
    Date of Patent: August 11, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Osafumi Nakayama
  • Patent number: 10726575
    Abstract: A non-transitory computer-readable recording medium having stored therein a line of sight detection program for causing a computer to execute a process, the process includes finding an index indicating a variation of a line of sight of an observer who observes an object based on a difference between line of sight data of a left eye of the observer and line of sight data of a right eye of the observer, determining a stay of the line of sight of the observer based on the index, and resolving a line of sight position of the observer based on a result of the determination on the stay.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: July 28, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Junichi Odagiri, Osafumi Nakayama
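    A minimal sketch of the idea: the left/right gaze difference yields a variation index, the index against a threshold decides whether the line of sight is staying, and the resolved position is the binocular average during a stay. The standard-deviation index, the threshold, and the averaging are illustrative assumptions.

      import numpy as np

      def resolve_gaze(left_xy, right_xy, stay_threshold=0.03):
          """left_xy, right_xy: (T, 2) gaze coordinates for each eye (normalized).
          Returns (is_staying, resolved_position)."""
          diff = np.linalg.norm(left_xy - right_xy, axis=1)   # per-sample L/R gap
          index = diff.std()                                  # variation index
          staying = index < stay_threshold
          position = (left_xy.mean(axis=0) + right_xy.mean(axis=0)) / 2 if staying else None
          return staying, position

      T = 30
      left = np.tile([0.40, 0.55], (T, 1)) + 0.002 * np.random.randn(T, 2)
      right = np.tile([0.41, 0.55], (T, 1)) + 0.002 * np.random.randn(T, 2)
      print(resolve_gaze(left, right))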
  • Patent number: 10712815
    Abstract: A procedure includes calculating a position of a line of sight of a user in a display screen of a display device, based on information on an eyeball portion of the user included in an input image, setting a processing target region which is a target region of processing corresponding to an input operation by a line of sight and an operation region which is adjacent to the processing target region and is for receiving the input operation by the line of sight, in the display screen, based on the position of the line of sight and information on a field of view of the user, and creating screen data in which image information within the processing target region is included in image information within the operation region and which is to be displayed on the display device.
    Type: Grant
    Filed: July 24, 2018
    Date of Patent: July 14, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Junichi Odagiri, Osafumi Nakayama
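    A minimal sketch of the region setup: a processing target region is placed around the gaze position and sized from field-of-view information, and an operation region is placed adjacent to it; the screen data would then draw the target region's image inside the operation region. All sizes, the placement to the right, and the pixels-per-degree scaling are illustrative assumptions.

      def set_regions(gaze_xy, fov_deg, screen=(1920, 1080), px_per_deg=40):
          """Return (processing_target_region, operation_region) as
          (x, y, width, height) tuples; all numbers are illustrative."""
          gx, gy = gaze_xy
          half = int(fov_deg * px_per_deg / 2)          # field of view -> half size
          target = (max(0, gx - half), max(0, gy - half), 2 * half, 2 * half)
          # Operation region: adjacent, to the right of the target region.
          op_w, op_h = half, 2 * half
          operation = (min(screen[0] - op_w, target[0] + target[2]), target[1], op_w, op_h)
          return target, operation

      target, operation = set_regions(gaze_xy=(900, 500), fov_deg=5)
      print(target, operation)
      # Screen data would then include the target region's image inside `operation`.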
  • Patent number: 10692210
    Abstract: A recording medium storing a program causing a computer to execute: detecting an eye area of an eye; detecting bright spot areas in the eye area; setting a reference point in a pupil; setting first search lines radially; determining whether each first search line passes through the bright spot areas; determining, for a second search line that passes through a bright spot area, a degree of overlapping between the bright spot area and the pupil based on brightness on a circumference of the bright spot area; setting a search range for a point on a contour of the pupil based on the degree; detecting a first point on the contour; detecting, for a third search line that does not pass through the bright spot areas, a second point on the contour on the third search line; and detecting the pupil based on the first and second points.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: June 23, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Daisuke Ishii, Osafumi Nakayama
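    A simplified sketch of the radial-search idea: from a reference point inside the pupil, walk outward along search lines until the intensity rises above a darkness threshold and take that as a contour point. Handling of search lines that cross a bright spot (restricting their search range by the overlap degree) is omitted; the synthetic image, threshold, and line count are illustrative assumptions.

      import numpy as np

      def contour_points(gray, center, num_lines=16, max_r=60, dark_max=60):
          """Walk outward along radial search lines from `center`; return the
          first point on each line where the image stops being pupil-dark."""
          cy, cx = center
          points = []
          for ang in np.linspace(0, 2 * np.pi, num_lines, endpoint=False):
              for r in range(1, max_r):
                  y = int(round(cy + r * np.sin(ang)))
                  x = int(round(cx + r * np.cos(ang)))
                  if not (0 <= y < gray.shape[0] and 0 <= x < gray.shape[1]):
                      break
                  if gray[y, x] > dark_max:          # left the dark pupil region
                      points.append((y, x))
                      break
          return points

      # Synthetic eye image: bright background with a dark 20-px-radius pupil.
      img = np.full((120, 120), 200, dtype=np.uint8)
      yy, xx = np.ogrid[:120, :120]
      img[(yy - 60) ** 2 + (xx - 60) ** 2 <= 20 ** 2] = 30
      pts = contour_points(img, center=(60, 60))
      print(len(pts), pts[0])      # 16 points found just outside the 20-px pupil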
  • Publication number: 20190191098
    Abstract: An object tracking apparatus is configured to execute a tracking process, a prediction process, an influence-degree obtaining process, and a difficulty-degree obtaining process, wherein the influence-degree obtaining process is configured to obtain a backside influence degree representing that detection of an object to be tracked is affected by another object that overlaps it, wherein the difficulty-degree obtaining process is configured to calculate, for each object to be tracked, a detection difficulty degree for detecting the object from each of the next frames captured by the respective cameras, based on the backside influence degree, and wherein the tracking process is configured to select the next frame that is included in a set of next frames in pieces of video and from which the object is to be detected, based on the detection difficulty degree, and to detect the object from the selected next frames.
    Type: Application
    Filed: December 5, 2018
    Publication date: June 20, 2019
    Applicant: FUJITSU LIMITED
    Inventors: Daisuke Ishii, Osafumi Nakayama
  • Patent number: 10325518
    Abstract: A computer detects a virtual central line of a traveling lane from a road-captured image captured from a vehicle. Next, the computer displays a transformed image generated by transforming the road-captured image such that the detected virtual central line is situated in a prescribed position. At this point, the computer moves a display position of a symbol indicating the vehicle on the generated transformed image according to a result of detecting a traveling position of the vehicle in the traveling lane.
    Type: Grant
    Filed: May 18, 2016
    Date of Patent: June 18, 2019
    Assignee: FUJITSU LIMITED
    Inventor: Osafumi Nakayama
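    A minimal sketch of the display step, reduced to a horizontal shift: move the road image so the detected central line lands at a prescribed column, and move the vehicle symbol by the detected lateral offset within the lane. The pure column shift and the toy numbers are illustrative assumptions, not the patented transformation.

      import numpy as np

      def centered_view(image, center_col, prescribed_col, lane_offset_px):
          """Shift the image so the lane's central line sits at `prescribed_col`,
          and return the new column for the vehicle symbol."""
          shift = prescribed_col - center_col
          transformed = np.roll(image, shift, axis=1)      # simple horizontal shift
          # The vehicle symbol moves with the detected lateral offset in the lane.
          new_symbol_col = prescribed_col + lane_offset_px
          return transformed, new_symbol_col

      frame = np.zeros((240, 320), dtype=np.uint8)
      frame[:, 150] = 255                                   # detected central line
      view, symbol = centered_view(frame, center_col=150, prescribed_col=160,
                                   lane_offset_px=-12)
      print(np.flatnonzero(view[0])[0], symbol)             # 160, 148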
  • Publication number: 20190150847
    Abstract: An apparatus is configured to execute: a first process for estimating a first component at a first time point by using a waveform and the first component calculated from the waveform before the first time point, the waveform being based on a running trace of a vehicle and the first component being less than a first frequency; a second process for estimating the first component at the first time point by using the waveform, the first component calculated from the waveform before the first time point, and a second component at the first time point, the second component being greater than the first frequency and predicted from the second component calculated from the waveform before the first time point; and a calculation process for calculating the second component at the first time point from the waveform, based on the first components estimated by the first and second processes.
    Type: Application
    Filed: November 19, 2018
    Publication date: May 23, 2019
    Applicant: FUJITSU LIMITED
    Inventor: Osafumi Nakayama
  • Publication number: 20190156512
    Abstract: A method for estimating orientation includes: executing a detection process that includes detecting multiple line segments from each of multiple images included in a video image captured by an imaging device; executing an estimation process that includes estimating a first inclination that is an inclination of a line segment that is among the multiple line segments and detected from a central region including a center of an image among the multiple images; and associating the first inclination with a vertical direction in a three-dimensional space to estimate an orientation of the imaging device.
    Type: Application
    Filed: November 14, 2018
    Publication date: May 23, 2019
    Applicant: FUJITSU LIMITED
    Inventors: Tetsuhiro Kato, Osafumi Nakayama
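    A minimal sketch of the estimation idea: keep only line segments whose midpoints fall in the central region of the frame, take a representative inclination (here the median angle from the image's vertical axis), and associate that inclination with the vertical direction in 3D to read off a camera roll estimate. The central-region size and the use of the median are illustrative assumptions.

      import numpy as np

      def roll_from_segments(segments, frame_w, frame_h, central_frac=0.3):
          """segments: array of (x1, y1, x2, y2).  Returns an estimated camera
          roll in degrees, assuming central-region segments are vertical in 3D."""
          cx, cy = frame_w / 2, frame_h / 2
          half_w, half_h = central_frac * frame_w / 2, central_frac * frame_h / 2
          angles = []
          for x1, y1, x2, y2 in segments:
              mx, my = (x1 + x2) / 2, (y1 + y2) / 2
              if abs(mx - cx) <= half_w and abs(my - cy) <= half_h:
                  # Inclination measured from the image's vertical axis.
                  angles.append(np.degrees(np.arctan2(x2 - x1, y2 - y1)))
          if not angles:
              return None
          first_inclination = np.median(angles)   # representative first inclination
          return first_inclination                # roll estimate under the assumption

      segs = np.array([[320, 150, 323, 330],      # near-vertical, central
                       [10, 10, 200, 12]])        # horizontal, off-center
      print(round(roll_from_segments(segs, frame_w=640, frame_h=480), 1))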
  • Publication number: 20190102910
    Abstract: A camera parameter estimating method includes: obtaining a plurality of image frames in time series, the image frames being photographed by a camera installed in a mobile body; detecting at least one straight line from a central portion of a first image frame group including one or more first image frames among the plurality of image frames; detecting, based on a feature quantity of the detected straight line, a plurality of curves corresponding to the straight line from a second image frame group including one or more second image frames at a later time than the image frame from which the straight line is detected; and estimating a parameter of the camera based on the plurality of curves.
    Type: Application
    Filed: September 28, 2018
    Publication date: April 4, 2019
    Applicant: FUJITSU LIMITED
    Inventors: Soutaro Kaneko, Osafumi Nakayama
  • Publication number: 20190102904
    Abstract: A non-transitory computer-readable recording medium having stored therein a line of sight detection program for causing a computer to execute a process, the process includes finding an index indicating a variation of a line of sight of an observer who observes an object based on a difference between line of sight data of a left eye of the observer and line of sight data of a right eye of the observer, determining a stay of the line of sight of the observer based on the index, and resolving a line of sight position of the observer based on a result of the determination on the stay.
    Type: Application
    Filed: September 28, 2018
    Publication date: April 4, 2019
    Applicant: FUJITSU LIMITED
    Inventors: Junichi Odagiri, Osafumi Nakayama
  • Publication number: 20190033966
    Abstract: A procedure includes calculating a position of a line of sight of a user in a display screen of a display device, based on information on an eyeball portion of the user included in an input image, setting a processing target region which is a target region of processing corresponding to an input operation by a line of sight and an operation region which is adjacent to the processing target region and is for receiving the input operation by the line of sight, in the display screen, based on the position of the line of sight and information on a field of view of the user, and creating screen data in which image information within the processing target region is included in image information within the operation region and which is to be displayed on the display device.
    Type: Application
    Filed: July 24, 2018
    Publication date: January 31, 2019
    Applicant: FUJITSU LIMITED
    Inventors: Junichi Odagiri, Osafumi Nakayama
  • Publication number: 20180350070
    Abstract: A recording medium storing a program causing a computer to execute: detecting an eye area of an eye; detecting bright spot areas in the eye area; setting a reference point in a pupil; setting first search lines radially; determining whether each first search line passes through the bright spot areas; determining, for a second search line that passes through a bright spot area, a degree of overlapping between the bright spot area and the pupil based on brightness on a circumference of the bright spot area; setting a search range for a point on a contour of the pupil based on the degree; detecting a first point on the contour; detecting, for a third search line that does not pass through the bright spot areas, a second point on the contour on the third search line; and detecting the pupil based on the first and second points.
    Type: Application
    Filed: May 25, 2018
    Publication date: December 6, 2018
    Applicant: FUJITSU LIMITED
    Inventors: Daisuke Ishii, Osafumi Nakayama