Patents by Inventor Kikuo Fujimura

Kikuo Fujimura has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10395126
    Abstract: Systems and techniques for sign based localization are provided herein. Sign based localization may be achieved by capturing an image of an operating environment around a vehicle, extracting one or more text candidates from the image, detecting one or more line segments within the image, defining one or more quadrilateral candidates based on one or more of the text candidates, one or more of the line segments, and one or more intersections of respective line segments, determining one or more sign candidates for the image based on one or more of the quadrilateral candidates and one or more of the text candidates, matching one or more of the sign candidates against one or more reference images, and determining a location of the vehicle based on a match between one or more of the sign candidates and one or more of the reference images.
    Type: Grant
    Filed: August 11, 2015
    Date of Patent: August 27, 2019
    Assignee: Honda Motor Co., Ltd.
    Inventors: Yan Lu, Aniket Murarka, Kikuo Fujimura, Ananth Ranganathan
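One stage of the pipeline this abstract describes is filtering quadrilateral candidates down to sign candidates using the extracted text. A minimal sketch of that filtering step, assuming quadrilaterals as corner-point lists and text candidates as axis-aligned bounding boxes (both representations are illustrative, not from the patent):

```python
# Sketch: keep only quadrilateral candidates that enclose the centre of
# at least one detected text candidate. Data layouts are assumptions.

def point_in_quad(pt, quad):
    """Ray-casting test: is point pt inside the polygon quad?"""
    x, y = pt
    inside = False
    n = len(quad)
    for i in range(n):
        x1, y1 = quad[i]
        x2, y2 = quad[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def sign_candidates(quads, text_boxes):
    """Keep quads containing the centre of at least one (x0,y0,x1,y1) text box."""
    centres = [((x0 + x1) / 2, (y0 + y1) / 2) for x0, y0, x1, y1 in text_boxes]
    return [q for q in quads if any(point_in_quad(c, q) for c in centres)]
```

The surviving candidates would then be matched against reference images, as the abstract describes.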
  • Publication number: 20190113929
    Abstract: According to one aspect, an autonomous vehicle policy generation system may include a state input generator generating a set of attributes associated with an autonomous vehicle undergoing training, a traffic simulator simulating a simulation environment including the autonomous vehicle, a roadway associated with a number of lanes, and another vehicle within the simulation environment, a Q-masker determining a mask to be applied to a subset of a set of possible actions for the autonomous vehicle for a time interval, and an action generator exploring a remaining set of actions from the set of possible actions and determining an autonomous vehicle policy for the time interval based on the remaining set of actions and the set of attributes associated with the autonomous vehicle.
    Type: Application
    Filed: August 14, 2018
    Publication date: April 18, 2019
    Inventors: Mustafa Mukadam, Alireza Nakhaei Sarvedani, Akansel Cosgun, Kikuo Fujimura
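The core idea of the Q-masker in this abstract — removing a subset of possible actions before the action generator explores the rest — can be sketched in a few lines. The action names and Q-values below are illustrative assumptions, not taken from the patent:

```python
# Sketch of Q-masking: invalid actions for the current time interval
# (e.g. "change_left" while in the leftmost lane) are masked out, and
# the policy picks the best action among the remaining set.

ACTIONS = ["keep_lane", "change_left", "change_right", "accelerate", "brake"]

def masked_action(q_values, masked):
    """Pick the highest-valued action among those not masked out."""
    remaining = [a for a in ACTIONS if a not in masked]
    if not remaining:
        raise ValueError("mask removed every action")
    return max(remaining, key=lambda a: q_values[a])
```

Masking before exploration means the learner never has to spend training time discovering that the masked actions are invalid.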
  • Publication number: 20190107839
    Abstract: According to one aspect, keyframe based autonomous vehicle operation may include collecting vehicle state information and collecting environment state information. A size of an object within the environment, a distance between the object and the autonomous vehicle, and a lane structure of the environment through which the autonomous vehicle is travelling may be determined. A matching keyframe model may be selected based on the size of the object, the distance from the object to the autonomous vehicle, the lane structure of the environment, and the vehicle state information. Suggested limits for a driving parameter associated with autonomous vehicle operation may be generated based on the selected keyframe model. The autonomous vehicle may be commanded to operate autonomously according to the suggested limits for the driving parameter.
    Type: Application
    Filed: April 6, 2018
    Publication date: April 11, 2019
    Inventors: Priyam Parashar, Kikuo Fujimura, Alireza Nakhaei Sarvedani, Akansel Cosgun
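The selection step in this abstract — matching a keyframe model from object size, distance, and lane structure, then reading off suggested limits — can be sketched as a lookup over attribute ranges. All model fields and numeric values here are illustrative assumptions:

```python
# Sketch: each keyframe model covers a lane structure plus ranges of
# object size and distance, and supplies suggested limits for a driving
# parameter (here a speed range in m/s). Values are made up.

KEYFRAME_MODELS = [
    # (lane_structure, (size_min, size_max), (dist_min, dist_max), (v_min, v_max))
    ("two_lane",  (0.0, 2.0), (0.0, 30.0),   (0.0, 10.0)),
    ("two_lane",  (0.0, 2.0), (30.0, 100.0), (5.0, 20.0)),
    ("four_lane", (0.0, 5.0), (0.0, 100.0),  (10.0, 30.0)),
]

def suggested_limits(lane_structure, object_size, distance):
    """Return the limits of the first matching keyframe model, or None."""
    for lanes, (s0, s1), (d0, d1), limits in KEYFRAME_MODELS:
        if lanes == lane_structure and s0 <= object_size < s1 and d0 <= distance < d1:
            return limits
    return None
```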
  • Publication number: 20170046580
    Abstract: Systems and techniques for sign based localization are provided herein. Sign based localization may be achieved by capturing an image of an operating environment around a vehicle, extracting one or more text candidates from the image, detecting one or more line segments within the image, defining one or more quadrilateral candidates based on one or more of the text candidates, one or more of the line segments, and one or more intersections of respective line segments, determining one or more sign candidates for the image based on one or more of the quadrilateral candidates and one or more of the text candidates, matching one or more of the sign candidates against one or more reference images, and determining a location of the vehicle based on a match between one or more of the sign candidates and one or more of the reference images.
    Type: Application
    Filed: August 11, 2015
    Publication date: February 16, 2017
    Inventors: Yan Lu, Aniket Murarka, Kikuo Fujimura, Ananth Ranganathan
  • Patent number: 9501693
    Abstract: An action recognition system recognizes driver actions by using a random forest model to classify images of the driver. A plurality of predictions is generated using the random forest model. Each prediction is generated by one of the plurality of decision trees and each prediction comprises a predicted driver action and a confidence score. The plurality of predictions is regrouped into a plurality of groups with each of the plurality of groups associated with one of the driver actions. The confidence scores are combined within each group to determine a combined score associated with each group. The driver action associated with the highest combined score is selected.
    Type: Grant
    Filed: October 9, 2013
    Date of Patent: November 22, 2016
    Assignee: Honda Motor Co., Ltd.
    Inventors: Trevor Sarratt, Kikuo Fujimura
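The combining step this abstract describes — regrouping per-tree predictions by driver action, combining the confidence scores within each group, and selecting the top group — can be sketched directly. Summation as the combiner is an assumption; the patent only says the scores are "combined":

```python
# Sketch: each decision tree emits a (driver_action, confidence) pair;
# predictions are regrouped by action, confidences summed per group,
# and the action with the highest combined score is selected.

from collections import defaultdict

def combine_predictions(predictions):
    """predictions: iterable of (driver_action, confidence) pairs."""
    scores = defaultdict(float)
    for action, confidence in predictions:
        scores[action] += confidence
    return max(scores, key=scores.get)
```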
  • Patent number: 9477315
    Abstract: Navigating through objects or items in a display device by using a first input device to detect the pointing of a user's finger at an object or item and a second input device to receive the user's indication of the selection of the object or item. An image of the hand is captured by the first input device and processed to determine a location on the display device corresponding to the location of the fingertip of the pointing finger. The object or item corresponding to the location of the fingertip is selected after the second input device receives predetermined user input.
    Type: Grant
    Filed: February 27, 2014
    Date of Patent: October 25, 2016
    Assignee: Honda Motor Co., Ltd.
    Inventors: Kikuo Fujimura, Victor Ng-Thow-Hing, Behzad Dariush
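The mapping implied by this abstract — converting a fingertip location found in the camera image into display coordinates, then hit-testing it against on-screen items — can be sketched as follows. The resolutions, item names, and screen layout are illustrative assumptions:

```python
# Sketch: scale the fingertip location from image coordinates to
# display coordinates, then select the item whose screen region
# contains that point.

def image_to_display(pt, image_size, display_size):
    """Scale a point from image resolution to display resolution."""
    (x, y), (iw, ih), (dw, dh) = pt, image_size, display_size
    return x * dw / iw, y * dh / ih

def item_at(pt, items):
    """items: list of (name, (x0, y0, x1, y1)) screen regions."""
    x, y = pt
    for name, (x0, y0, x1, y1) in items:
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None
```

In the patent's scheme the item found this way is only *highlighted* by pointing; the actual selection waits for the predetermined input on the second device.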
  • Publication number: 20160054563
    Abstract: One or more embodiments of techniques or systems for 3-dimensional (3-D) navigation are provided herein. A heads-up display (HUD) component can project graphic elements on focal planes around an environment surrounding a vehicle. The HUD component can cause these graphic elements to appear volumetric or 3-D by moving or adjusting a distance between a focal plane and the vehicle. Additionally, a target position for graphic elements can be adjusted. This enables the HUD component to project graphic elements as moving avatars. In other words, adjusting the focal plane distance and the target position enables graphic elements to be projected in three dimensions along an x, y, and z axis. Further, a moving avatar can be ‘animated’ by sequentially projecting the avatar on different focal planes, thereby providing an occupant with the perception that the avatar is moving towards or away from the vehicle.
    Type: Application
    Filed: September 30, 2013
    Publication date: February 25, 2016
    Inventors: Kikuo Fujimura, Victor Ng-Thow-Hing
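The animation idea in this abstract — sequentially projecting the avatar on different focal planes so it appears to move toward or away from the vehicle — amounts to stepping the focal-plane distance over time. A minimal sketch, with distances and frame count as illustrative assumptions:

```python
# Sketch: evenly spaced focal-plane distances for a moving HUD avatar.
# Projecting the avatar at each distance in sequence makes it appear
# to travel away from (or toward) the vehicle.

def focal_plane_schedule(start_m, end_m, frames):
    """Return `frames` focal-plane distances from start_m to end_m."""
    if frames < 2:
        return [start_m]
    step = (end_m - start_m) / (frames - 1)
    return [start_m + i * step for i in range(frames)]
```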
  • Patent number: 9165199
    Abstract: A system, method, and computer program product for estimating human body pose are described. According to one aspect, anatomical features are detected in a depth image of a human actor. The method detects a head, neck, and trunk (H-N-T) template in the depth image, and detects limbs in the depth image based on the H-N-T template. The anatomical features are detected based on the H-N-T template and the limbs. A pose of a human model is estimated based on the detected features and kinematic constraints of the human model.
    Type: Grant
    Filed: May 29, 2009
    Date of Patent: October 20, 2015
    Assignee: Honda Motor Co., Ltd.
    Inventors: Youding Zhu, Behzad Dariush, Kikuo Fujimura
  • Patent number: 9122916
    Abstract: Systems and methods for detecting and tracking the presence, location, orientation, and/or motion of a hand or hand segments visible to an input source are disclosed herein. Hand, hand segment, and fingertip location and tracking can be performed using ball fit methods. Analysis of hand, hand segment, and fingertip location and tracking data can be used as input for a variety of systems and devices.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: September 1, 2015
    Assignee: Honda Motor Co., Ltd.
    Inventor: Kikuo Fujimura
  • Patent number: 9098766
    Abstract: A system, method, and computer program product for estimating upper body human pose are described. According to one aspect, a plurality of anatomical features are detected in a depth image of the human actor. The method detects a head, neck, and torso (H-N-T) template in the depth image, and detects the features in the depth image based on the H-N-T template. A pose of a human model is estimated based on the detected features and kinematic constraints of the human model.
    Type: Grant
    Filed: December 19, 2008
    Date of Patent: August 4, 2015
    Assignee: Honda Motor Co., Ltd.
    Inventors: Behzad Dariush, Youding Zhu, Kikuo Fujimura
  • Publication number: 20150098609
    Abstract: An action recognition system recognizes driver actions by using a random forest model to classify images of the driver. A plurality of predictions is generated using the random forest model. Each prediction is generated by one of the plurality of decision trees and each prediction comprises a predicted driver action and a confidence score. The plurality of predictions is regrouped into a plurality of groups with each of the plurality of groups associated with one of the driver actions. The confidence scores are combined within each group to determine a combined score associated with each group. The driver action associated with the highest combined score is selected.
    Type: Application
    Filed: October 9, 2013
    Publication date: April 9, 2015
    Applicant: Honda Motor Co., Ltd.
    Inventors: Trevor Sarratt, Kikuo Fujimura
  • Publication number: 20140282259
    Abstract: Navigating through objects or items in a display device by using a first input device to detect the pointing of a user's finger at an object or item and a second input device to receive the user's indication of the selection of the object or item. An image of the hand is captured by the first input device and processed to determine a location on the display device corresponding to the location of the fingertip of the pointing finger. The object or item corresponding to the location of the fingertip is selected after the second input device receives predetermined user input.
    Type: Application
    Filed: February 27, 2014
    Publication date: September 18, 2014
    Applicant: Honda Motor Co., Ltd.
    Inventors: Kikuo Fujimura, Victor Ng-Thow-Hing, Behzad Dariush
  • Publication number: 20140268353
    Abstract: One or more embodiments of techniques or systems for 3-dimensional (3-D) navigation are provided herein. A heads-up display (HUD) component can project graphic elements on focal planes around an environment surrounding a vehicle. The HUD component can cause these graphic elements to appear volumetric or 3-D by moving or adjusting a distance between a focal plane and the vehicle. Additionally, a target position for graphic elements can be adjusted. This enables the HUD component to project graphic elements as moving avatars. In other words, adjusting the focal plane distance and the target position enables graphic elements to be projected in three dimensions along an x, y, and z axis. Further, a moving avatar can be ‘animated’ by sequentially projecting the avatar on different focal planes, thereby providing an occupant with the perception that the avatar is moving towards or away from the vehicle.
    Type: Application
    Filed: September 30, 2013
    Publication date: September 18, 2014
    Applicant: Honda Motor Co., Ltd.
    Inventors: Kikuo Fujimura, Victor Ng-Thow-Hing
  • Publication number: 20140270352
    Abstract: Systems and methods for detecting and tracking the presence, location, orientation, and/or motion of a hand or hand segments visible to an input source are disclosed herein. Hand, hand segment, and fingertip location and tracking can be performed using ball fit methods. Analysis of hand, hand segment, and fingertip location and tracking data can be used as input for a variety of systems and devices.
    Type: Application
    Filed: March 14, 2013
    Publication date: September 18, 2014
    Applicant: Honda Motor Co., Ltd.
    Inventor: Kikuo Fujimura
  • Patent number: 8351646
    Abstract: A method and apparatus for estimating poses of a subject by grouping data points generated by a depth image into groups representing labeled parts of the subject, and then fitting a model representing the subject to the data points using the grouping of the data points. The grouping of the data points is performed by grouping the data points to segments based on proximity of the data points, and then using constraint conditions to assign the segments to the labeled parts. The model is fitted to the data points by using the grouping of the data points to the labeled parts.
    Type: Grant
    Filed: October 9, 2007
    Date of Patent: January 8, 2013
    Assignee: Honda Motor Co., Ltd.
    Inventors: Kikuo Fujimura, Youding Zhu
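The first grouping step in this abstract — merging depth data points into segments based on proximity — is a classic clustering problem. One straightforward way to sketch it is union-find over point pairs within a distance threshold (the union-find choice and the threshold value are assumptions, not from the patent):

```python
# Sketch: group 3-D depth points into segments, merging any two points
# closer than `threshold`. Union-find with path halving; O(n^2) pair
# scan is fine for a sketch, a k-d tree would be used at scale.

def group_by_proximity(points, threshold):
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    t2 = threshold * threshold
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (xi, yi, zi), (xj, yj, zj) = points[i], points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2 <= t2:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

In the patent's pipeline, constraint conditions then assign the resulting segments to labeled body parts before the model is fitted.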
  • Patent number: 8170287
    Abstract: A system, method, and computer program product for avoiding collision of a body segment with unconnected structures in an articulated system are described. A virtual surface is constructed surrounding an actual surface of the body segment. Distances between the body segment and unconnected structures are monitored. Responding to an unconnected structure penetrating the virtual surface, a redirected joint motion that prevents the unconnected structure from penetrating deeper into the virtual surface is determined. The body segment is redirected based on the redirected joint motion to avoid colliding with the unconnected structure.
    Type: Grant
    Filed: October 24, 2008
    Date of Patent: May 1, 2012
    Assignee: Honda Motor Co., Ltd.
    Inventors: Behzad Dariush, Youding Zhu, Kikuo Fujimura
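The penetration test at the heart of this abstract — detecting when an unconnected structure crosses the virtual surface around a body segment, and producing a direction to redirect motion — can be sketched geometrically. Here the body segment is approximated by a 3-D line segment and the virtual surface by a fixed offset radius around it; both simplifications are assumptions:

```python
# Sketch: if an obstacle point penetrates the virtual surface (an offset
# of `radius` around the segment seg_a->seg_b), return the penetration
# depth and a unit push direction away from the segment; otherwise (0, None).

import math

def penetration(obstacle, seg_a, seg_b, radius):
    ax, ay, az = seg_a
    bx, by, bz = seg_b
    px, py, pz = obstacle
    abx, aby, abz = bx - ax, by - ay, bz - az
    ab2 = abx * abx + aby * aby + abz * abz
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = 0.0 if ab2 == 0 else max(0.0, min(1.0,
        ((px - ax) * abx + (py - ay) * aby + (pz - az) * abz) / ab2))
    cx, cy, cz = ax + t * abx, ay + t * aby, az + t * abz
    dx, dy, dz = px - cx, py - cy, pz - cz
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist >= radius:
        return 0.0, None
    if dist == 0.0:
        return radius, (0.0, 0.0, 1.0)  # degenerate: pick any direction
    return radius - dist, (dx / dist, dy / dist, dz / dist)
```

In the patent's scheme the push direction would feed a redirected joint motion, so deeper penetration is prevented before actual contact occurs.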
  • Patent number: 8031906
    Abstract: A system for estimating orientation of a target based on real-time video data uses depth data included in the video to determine the estimated orientation. The system includes a time-of-flight camera capable of depth sensing within a depth window. The camera outputs hybrid image data (color and depth). Segmentation is performed to determine the location of the target within the image. Tracking is used to follow the target location from frame to frame. During a training mode, a target-specific training image set is collected with a corresponding orientation associated with each frame. During an estimation mode, a classifier compares new images with the stored training set to determine an estimated orientation. A motion estimation approach uses an accumulated rotation/translation parameter calculation based on optical flow and depth constraints. The parameters are reset to a reference value each time the image corresponds to a dominant orientation.
    Type: Grant
    Filed: October 2, 2009
    Date of Patent: October 4, 2011
    Assignee: Honda Motor Co., Ltd.
    Inventors: Kikuo Fujimura, Youding Zhu
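The estimation-mode classifier this abstract describes — comparing a new image against the stored, target-specific training set and reading off an orientation — can be sketched as nearest-neighbor lookup. Representing frames as flat feature vectors and using squared L2 distance are assumptions, not the patent's method:

```python
# Sketch: return the orientation label of the closest training frame.
# training_set: list of (feature_vector, orientation_degrees) pairs.

def estimate_orientation(features, training_set):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, orientation = min(training_set, key=lambda entry: dist2(features, entry[0]))
    return orientation
```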
  • Patent number: 8005263
    Abstract: A method and system for recognizing hand signs that include overlapping or adjoining hands from a depth image. A linked structure comprising multiple segments is generated from the depth image including overlapping or adjoining hands. The hand pose of the overlapping or adjoining hands is determined using either (i) a constrained optimization process in which a cost function and constraint conditions are used to classify segments of the linked graph to two hands or (ii) a tree search process in which a tree structure including a plurality of nodes is used to obtain the most-likely hand pose represented by the depth image. After determining the hand pose, the segments of the linked structure are matched with stored shapes to determine the sign represented by the depth image.
    Type: Grant
    Filed: October 26, 2007
    Date of Patent: August 23, 2011
    Assignee: Honda Motor Co., Ltd.
    Inventors: Kikuo Fujimura, Lijie Xu
  • Publication number: 20110054870
    Abstract: A system, method, and computer program product for providing a user with a virtual environment in which the user can perform guided activities and receive feedback are described. The user is provided with guidance to perform certain movements. The user's movements are captured in an image stream. The image stream is analyzed to estimate the user's movements, which are tracked by a user-specific human model. Biomechanical quantities such as center of pressure and muscle forces are calculated based on the tracked movements. Feedback such as the biomechanical quantities and differences between the guided movements and the captured actual movements is provided to the user.
    Type: Application
    Filed: September 1, 2010
    Publication date: March 3, 2011
    Applicant: Honda Motor Co., Ltd.
    Inventors: Behzad Dariush, Kikuo Fujimura, Yoshiaki Sakagami
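One of the biomechanical quantities named in this abstract, center of pressure, is simply the force-weighted average of the contact positions. A minimal sketch, assuming contacts are given as (position, vertical force) pairs (a representation chosen for illustration):

```python
# Sketch: centre of pressure over the ground plane as the
# force-weighted average of contact point positions.

def center_of_pressure(contacts):
    """contacts: list of ((x, y), vertical_force) pairs."""
    total = sum(f for _, f in contacts)
    if total == 0:
        raise ValueError("no vertical load")
    cx = sum(x * f for (x, _), f in contacts) / total
    cy = sum(y * f for (_, y), f in contacts) / total
    return cx, cy
```

For example, with 100 N under the heel at x = 0 and 300 N under the toes at x = 1, the center of pressure sits three quarters of the way toward the toes.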
  • Publication number: 20100034427
    Abstract: A system for estimating orientation of a target based on real-time video data uses depth data included in the video to determine the estimated orientation. The system includes a time-of-flight camera capable of depth sensing within a depth window. The camera outputs hybrid image data (color and depth). Segmentation is performed to determine the location of the target within the image. Tracking is used to follow the target location from frame to frame. During a training mode, a target-specific training image set is collected with a corresponding orientation associated with each frame. During an estimation mode, a classifier compares new images with the stored training set to determine an estimated orientation. A motion estimation approach uses an accumulated rotation/translation parameter calculation based on optical flow and depth constraints. The parameters are reset to a reference value each time the image corresponds to a dominant orientation.
    Type: Application
    Filed: October 2, 2009
    Publication date: February 11, 2010
    Inventors: Kikuo Fujimura, Youding Zhu