Patents by Inventor Xiang Yu

Xiang Yu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180334166
    Abstract: A lane departure detection system detects that an autonomous driving vehicle (ADV) is departing from the lane in which the ADV is driving based on sensor data captured when the ADV contacts a deceleration curb, such as a speed bump laid across the lane. When the ADV contacts the deceleration curb, the lane departure detection system detects and calculates an angle between the moving direction of the ADV and the longitudinal direction of the deceleration curb. Based on the angle, the system calculates how far the moving direction of the ADV deviates from the lane direction of the lane. The lane direction is typically substantially perpendicular to the longitudinal direction of the deceleration curb. A control command, such as a speed control command and/or a steering control command, is generated based on the angle to correct the moving direction of the ADV.
    Type: Application
    Filed: March 30, 2017
    Publication date: November 22, 2018
    Inventors: Fan ZHU, Qi KONG, Qi LUO, Xiang YU, Sen HU, Zhenguang ZHU, Xiaoxin FU, Jiarui HE, Hongye LI, Yuchang PAN, Zhongpu XIA, Chunming ZHAO, Guang YANG, Jingao WANG
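    The geometry described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the patented implementation; the function name and the 90-degree convention are assumptions based on the stated perpendicularity of lane and curb:

    ```python
    def lane_deviation_deg(heading_vs_curb_deg: float) -> float:
        """Deviation of the ADV's moving direction from the lane direction.

        The lane direction is assumed to be perpendicular to the curb's
        longitudinal direction, so a heading that crosses the curb at
        exactly 90 degrees corresponds to zero deviation.
        """
        return 90.0 - heading_vs_curb_deg
    ```

    A deviation of zero means no corrective command is needed; a nonzero value would feed the speed and/or steering correction the abstract describes.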
  • Publication number: 20180330173
    Abstract: When generating a control command of an autonomous driving vehicle (ADV), a pitch status and/or a roll status of the road is determined. The control command is adjusted based on the pitch status and the roll status. For example, when an ADV is driving on an uphill or downhill road, a pitch status of the road is determined and a speed control command is generated based on the pitch status of the road, such that the ADV has an acceleration rate similar to that of driving on a flat road. Similarly, when the ADV is driving on a road that is tilted or rolled to the left or right, a roll status of the road is determined and a steering control command is generated in view of the roll status of the road, such that the ADV has a heading direction similar to that of driving on a flat road.
    Type: Application
    Filed: May 15, 2017
    Publication date: November 15, 2018
    Inventors: Fan ZHU, Qi KONG, Qi LUO, Xiang YU, Sen HU, Li ZHUANG, Liangliang ZHANG, Weicheng ZHU, Haoyang FAN, Yajia ZHANG, Guang YANG, Jingao WANG
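    The pitch compensation in this abstract amounts to offsetting the commanded acceleration by the gravity component along the slope. A minimal sketch, assuming a simple additive g·sin(pitch) term (the patent does not publish its exact formula):

    ```python
    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def compensated_accel(target_accel: float, pitch_rad: float) -> float:
        """Acceleration command adjusted for road pitch.

        On an uphill road (positive pitch) gravity pulls the vehicle back
        by g*sin(pitch), so the command is increased by that amount; on a
        downhill road the same term reduces the command.
        """
        return target_accel + G * math.sin(pitch_rad)
    ```

    With this adjustment the net acceleration experienced by the vehicle matches what the same command would produce on a flat road.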
  • Publication number: 20180326956
    Abstract: In one embodiment, it is determined that an ADV is about to decelerate based on perception of a driving environment surrounding the ADV. In addition, if there is another vehicle that is following the ADV, a distance between the ADV and the following vehicle, as well as the speed of the following vehicle, is determined. A deceleration rate that is required for the following vehicle to avoid a collision with the ADV is determined based on the distance between the ADV and the following vehicle and the speed of the following vehicle. If the deceleration rate is greater than a predetermined threshold, a brake light and an emergency light of the ADV are turned on to warn the following vehicle that the ADV is about to rapidly decelerate as it is treated as an emergency situation.
    Type: Application
    Filed: May 10, 2017
    Publication date: November 15, 2018
    Inventors: Fan ZHU, Qi KONG, Qi LUO, Xiang YU, Sen HU, Guang YANG, Jingao WANG
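    The deceleration-rate check in this abstract follows from constant-deceleration kinematics: to shed a closing speed v before a gap s is consumed requires v²/(2s). A sketch under that assumption; the threshold value and function names are illustrative, not from the patent:

    ```python
    def required_decel(closing_speed: float, gap: float) -> float:
        """Constant-deceleration rate (m/s^2) the following vehicle needs
        to shed the closing speed before the gap is consumed: v^2 / (2*s)."""
        return closing_speed ** 2 / (2.0 * gap)

    def emergency_lights_on(closing_speed: float, gap: float,
                            threshold: float = 4.0) -> bool:
        """Switch on the brake and emergency lights when the needed rate
        exceeds a comfort threshold (the 4.0 m/s^2 value is an assumption)."""
        return required_decel(closing_speed, gap) > threshold
    ```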
  • Patent number: 10118639
    Abstract: In one embodiment, an autonomous driving vehicle (ADV) steering control system determines how much and when to apply a steering control to maneuver around obstacles along a planned route. The steering control system calculates a first steering angle based on a target directional angle and an actual directional angle of the ADV, and a second steering angle based on a target lateral position and an actual lateral position of the ADV, to maneuver a planned route, an object, or an obstacle course. The steering control system determines a target steering angle based on the first steering angle and the second steering angle and utilizes the target steering angle to control a subsequent steering angle of the ADV.
    Type: Grant
    Filed: November 24, 2016
    Date of Patent: November 6, 2018
    Assignee: BAIDU USA LLC
    Inventors: Fan Zhu, Qi Kong, Xiang Yu, Sen Hu, Qi Luo, Zhenguang Zhu, Yuchang Pan, Wenli Yang, Guang Yang, Jingao Wang
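    The two-term combination in this abstract resembles the well-known Stanley steering law (heading-error term plus a lateral-error term). A sketch under that assumption; the gain and the way the two angles are combined are illustrative, not the patented formula:

    ```python
    import math

    def target_steering(heading_error: float, lateral_error: float,
                        speed: float, k: float = 1.0) -> float:
        """Combine a heading-error steering angle with a lateral-offset
        steering angle (Stanley-style; gain k is an assumption).

        heading_error and the returned angle are in radians;
        lateral_error is in meters, speed in m/s.
        """
        first = heading_error                          # aligns the heading
        second = math.atan2(k * lateral_error, speed)  # closes the offset
        return first + second
    ```

    When both errors are zero the command is zero; a positive lateral offset steers back toward the target line, more aggressively at low speed.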
  • Publication number: 20180304900
    Abstract: In one embodiment, planning data is received, for example, from a planning module, to drive an autonomous driving vehicle (ADV) from a starting location to a destination location. In response, a series of control commands are generated based on the planning data, where the control commands are to be applied at different points in time from the starting location to the destination location. A cost is calculated by applying a cost function to the control commands, a first road friction to be estimated in a current trip, and a second road friction estimated during a prior trip from the starting location to the destination location. The first road friction of the current trip is estimated using the cost function in view of a prior termination cost of the prior trip, such that the cost reaches a minimum.
    Type: Application
    Filed: January 13, 2017
    Publication date: October 25, 2018
    Inventors: Qi LUO, Fan ZHU, Sen HU, Qi KONG, Xiang YU, Zhenguang ZHU, Yuchang PAN, Wenli YANG, Guang YANG
  • Publication number: 20180307234
    Abstract: In one embodiment, a lane departure detection system detects at a first point in time that a wheel of an ADV rolls onto a lane curb disposed on an edge of a lane in which the ADV is moving. The system detects at a second point in time that the wheel of the ADV rolls off the lane curb of the lane. The system calculates an angle between a moving direction of the ADV and a lane direction of the lane based on the time difference between the first point in time and the second point in time in view of a current speed of the ADV. The system then generates a control command based on the angle to adjust the moving direction of the ADV in order to prevent the ADV from further drifting off the lane direction of the lane.
    Type: Application
    Filed: April 19, 2017
    Publication date: October 25, 2018
    Inventors: Fan ZHU, Qi KONG, Qi LUO, Xiang YU, Sen HU, Zhenguang ZHU, Xiaoxin FU, Jiarui HE, Hongye LI, Yuchang PAN, Zhongpu XIA, Chunming ZHAO, Guang YANG, Jingao WANG
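    The angle computation in this abstract reduces to simple geometry: while the wheel crosses a curb of known width at angle θ to the lane, it travels speed·Δt on the curb, so sin(θ) = width/(speed·Δt). A sketch under that reading; the function name and parameters are assumptions:

    ```python
    import math

    def departure_angle(curb_width: float, speed: float, dt: float) -> float:
        """Angle (radians) between the ADV's moving direction and the lane
        direction, from the time the wheel spent crossing a curb of known
        width: the path length on the curb is speed*dt, so
        sin(angle) = curb_width / (speed * dt)."""
        return math.asin(curb_width / (speed * dt))
    ```

    The resulting angle would then drive the corrective control command the abstract describes.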
  • Publication number: 20180300327
    Abstract: The described embodiments relate to methods and products for organizing a plurality of images. Specifically, the methods and products can automatically organize a plurality of images into a plurality of groups of images using allocation criteria. The allocation criteria for each image include a similarity distance between that image and at least one other image that measures how similar those images are. Each image can be allocated to at least one similar image group based on the similarity distance. The methods and products can also be used to visualize and display representative images for each of the groups of images.
    Type: Application
    Filed: June 22, 2018
    Publication date: October 18, 2018
    Inventors: En-Hui Yang, Xiang Yu, Jin Meng
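    The allocation criteria in this abstract can be illustrated with a greedy grouping pass. This is a toy sketch, not the patented method: scalar features and absolute difference stand in for real image descriptors and the similarity distance, and the first-fit rule is an assumption:

    ```python
    def group_images(features, threshold):
        """Each image joins the first existing group whose representative
        is within `threshold` similarity distance; otherwise it starts a
        new group. Returns a list of (representative, members) pairs."""
        groups = []
        for f in features:
            for rep, members in groups:
                if abs(f - rep) <= threshold:
                    members.append(f)
                    break
            else:
                groups.append((f, [f]))
        return groups
    ```

    The representative of each group is what a viewer would display when visualizing the groups, as the abstract suggests.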
  • Publication number: 20180299898
    Abstract: When an ADV is detected to transition from a manual driving mode to an autonomous driving mode, a first pedal value corresponding to a speed of the ADV at a previous command cycle, during which the ADV was operating in the manual driving mode, is determined. A second pedal value is determined based on a target speed of the ADV at a current command cycle, during which the ADV is operating in an autonomous driving mode. A pedal value represents a pedal percentage of a maximum pedal pressure or maximum pedal travel of a throttle pedal or brake pedal from a neutral position. A speed command is generated and issued to the ADV based on the first pedal value and the second pedal value, such that the ADV runs at a similar acceleration rate before and after switching from the manual driving mode to the autonomous driving mode.
    Type: Application
    Filed: March 10, 2017
    Publication date: October 18, 2018
    Inventors: Qi LUO, Qi KONG, Fan ZHU, Sen HU, Xiang YU, Zhenguang ZHU, Yuchang PAN, Jiarui HE, Haoyang FAN, Guang YANG, Jingao WANG
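    Combining the two pedal values can be sketched as a simple blend. The linear interpolation and the ramp parameter are assumptions for illustration; the patent does not publish its exact combination rule:

    ```python
    def blended_pedal(manual_pedal: float, target_pedal: float,
                      alpha: float) -> float:
        """Blend the pedal value carried over from manual driving with the
        one computed for the autonomous target speed. `alpha` ramps from
        0 to 1 over the first command cycles after the mode switch
        (the ramp schedule itself is an assumption)."""
        return (1.0 - alpha) * manual_pedal + alpha * target_pedal
    ```

    At the moment of the switch (alpha = 0) the command equals the manual-mode pedal value, so the acceleration felt by passengers does not jump.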
  • Publication number: 20180297606
    Abstract: In one embodiment, a request is received to turn the autonomous driving vehicle (ADV) from a first direction to a second direction. In response to the request, the segment masses of a number of segments of the ADV are determined. The segment masses are located at a plurality of predetermined locations within a vehicle platform of the ADV. The location of a mass center for the entire ADV is calculated based on the segment masses of the segments of the ADV, where the mass center represents the center of the entire mass of the ADV. A steering control command is then generated based on the location of the mass center of the entire ADV for steering control of the ADV.
    Type: Application
    Filed: March 10, 2017
    Publication date: October 18, 2018
    Inventors: Qi LUO, Qi KONG, Fan ZHU, Sen HU, Xiang YU, Zhenguang ZHU, Yuchang PAN, Jiarui HE, Haoyang FAN, Guang YANG, Jingao WANG
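    The mass-center calculation in this abstract is the standard mass-weighted mean of the segment positions. A minimal sketch in two dimensions (the data layout is an assumption):

    ```python
    def mass_center(segments):
        """Overall mass center from per-segment masses at known platform
        locations. `segments` is a list of (mass_kg, (x, y)) pairs;
        returns the mass-weighted mean position."""
        total = sum(m for m, _ in segments)
        x = sum(m * p[0] for m, p in segments) / total
        y = sum(m * p[1] for m, p in segments) / total
        return x, y
    ```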
  • Publication number: 20180295281
    Abstract: Embodiments of the present disclosure improve the user experience of photographing. In operation, it is determined whether the picture composition of a first object and a second object needs to be adjusted based on a predefined composition rule. If the picture composition needs to be adjusted, an adjusting pattern is determined based on the predefined composition rule. The adjusting pattern is then provided to the user as guidance for adjusting the picture composition accordingly.
    Type: Application
    Filed: April 5, 2017
    Publication date: October 11, 2018
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ruo Meng HAO, Xiang Yu SONG, Ning WANG, You Miao ZHANG
  • Publication number: 20180268055
    Abstract: A video retrieval system is provided that includes a server for retrieving video sequences from a remote database responsive to a text specifying a face recognition result as an identity of a subject of an input image. The face recognition result is determined by a processor of the server, which estimates, using a 3DMM conditioned Generative Adversarial Network, 3DMM coefficients for the subject of the input image. The subject varies from an ideal front pose. The processor produces a synthetic frontal face image of the subject of the input image based on the input image and coefficients. An area spanning the frontal face of the subject is made larger in the synthetic image than in the input image. The processor provides a decision of whether the synthetic image subject is an actual person and provides the identity of the subject in the input image based on the synthetic and input images.
    Type: Application
    Filed: February 5, 2018
    Publication date: September 20, 2018
    Inventors: Xiang Yu, Kihyuk Sohn, Manmohan Chandraker
  • Publication number: 20180268265
    Abstract: An object recognition system is provided that includes a device configured to capture a video sequence formed from unlabeled testing video frames. The system includes a processor configured to pre-train a recognition engine formed from a reference set of CNNs on a still image domain that includes labeled training still image frames. The processor adapts the recognition engine to a video domain to form an adapted recognition engine, by applying a non-reference set of CNNs to a set of domains that include the still image and video domains and a degraded image domain. The degraded image domain includes labeled synthetically degraded versions of the labeled training still image frames included in the still image domain. The video domain includes random unlabeled training video frames. The processor recognizes, using the adapted engine, a set of objects in the video sequence. A display device displays the set of recognized objects.
    Type: Application
    Filed: February 6, 2018
    Publication date: September 20, 2018
    Inventors: Kihyuk Sohn, Xiang Yu, Manmohan Chandraker
  • Publication number: 20180268201
    Abstract: A face recognition system is provided. The system includes a device configured to capture an input image of a subject. The system further includes a processor. The processor estimates, using a 3D Morphable Model (3DMM) conditioned Generative Adversarial Network, 3DMM coefficients for the subject of the input image. The subject varies from an ideal front pose. The processor produces, using an image generator, a synthetic frontal face image of the subject of the input image based on the input image and the 3DMM coefficients. An area spanning the frontal face of the subject is made larger in the synthetic image than in the input image. The processor provides, using a discriminator, a decision indicative of whether the subject of the synthetic image is an actual person. The processor provides, using a face recognition engine, an identity of the subject in the input image based on the synthetic and input images.
    Type: Application
    Filed: February 5, 2018
    Publication date: September 20, 2018
    Inventors: Xiang Yu, Kihyuk Sohn, Manmohan Chandraker
  • Publication number: 20180268266
    Abstract: A surveillance system is provided that includes a device configured to capture a video sequence, formed from a set of unlabeled testing video frames, of a target area. The surveillance system further includes a processor configured to pre-train a recognition engine formed from a reference set of CNNs on a still image domain that includes labeled training still image frames. The processor adapts the recognition engine to a video domain to form an adapted recognition engine, by applying a non-reference set of CNNs to domains including the still image and video domains and a degraded image domain. The degraded image domain includes labeled synthetically degraded versions of the frames included in the still image domain. The video domain includes random unlabeled training video frames. The processor recognizes, using the adapted engine, at least one object in the target area. A display device displays the recognized objects.
    Type: Application
    Filed: February 6, 2018
    Publication date: September 20, 2018
    Inventors: Kihyuk Sohn, Xiang Yu, Manmohan Chandraker
  • Publication number: 20180268202
    Abstract: A video surveillance system is provided. The system includes a device configured to capture an input image of a subject located in an area. The system further includes a processor. The processor estimates, using a three-dimensional Morphable Model (3DMM) conditioned Generative Adversarial Network, 3DMM coefficients for the subject of the input image. The subject varies from an ideal front pose. The processor produces, using an image generator, a synthetic frontal face image of the subject of the input image based on the input image and coefficients. An area spanning the frontal face of the subject is made larger in the synthetic image than in the input image. The processor provides, using a discriminator, a decision of whether the subject of the synthetic image is an actual person. The processor provides, using a face recognition engine, an identity of the subject in the input image based on the synthetic and input images.
    Type: Application
    Filed: February 5, 2018
    Publication date: September 20, 2018
    Inventors: Xiang Yu, Kihyuk Sohn, Manmohan Chandraker
  • Publication number: 20180268222
    Abstract: An action recognition system is provided that includes a device configured to capture a video sequence formed from a set of unlabeled testing video frames. The system further includes a processor configured to pre-train a recognition engine formed from a reference set of CNNs on a still image domain that includes labeled training still image frames. The processor adapts the recognition engine to a video domain to form an adapted engine, by applying non-reference CNNs to domains that include the still image and video domains and a degraded image domain that includes labeled synthetically degraded versions of the frames in the still image domain. The video domain includes random unlabeled training video frames. The processor recognizes, using the adapted engine, an action performed by at least one object in the sequence, and controls a device to perform a response action in response to an action type of the action.
    Type: Application
    Filed: February 6, 2018
    Publication date: September 20, 2018
    Inventors: Kihyuk Sohn, Xiang Yu, Manmohan Chandraker
  • Publication number: 20180268203
    Abstract: A face recognition system is provided that includes a device configured to capture a video sequence formed from a set of unlabeled testing video frames. The system includes a processor configured to pre-train a face recognition engine formed from reference CNNs on a still image domain that includes labeled training still image frames of faces. The processor adapts the face recognition engine to a video domain to form an adapted engine, by applying non-reference CNNs to domains including the still image and video domains and a degraded image domain. The degraded image domain includes labeled synthetically degraded versions of the frames included in the still image domain. The video domain includes random unlabeled training video frames. The processor recognizes, using the adapted engine, identities of persons corresponding to at least one face in the video sequence to obtain a set of identities. A display device displays the set of identities.
    Type: Application
    Filed: February 6, 2018
    Publication date: September 20, 2018
    Inventors: Kihyuk Sohn, Xiang Yu, Manmohan Chandraker
  • Publication number: 20180268292
    Abstract: A computer-implemented method executed by at least one processor for training fast models for real-time object detection with knowledge transfer is presented. The method includes employing a Faster Region-based Convolutional Neural Network (R-CNN) as an object detection framework for performing the real-time object detection, inputting a plurality of images into the Faster R-CNN, and training the Faster R-CNN by learning a student model from a teacher model by employing a weighted cross-entropy loss layer for classification that accounts for an imbalance between background classes and object classes, employing a boundary loss layer to enable transfer of knowledge of bounding box regression from the teacher model to the student model, and employing a confidence-weighted binary activation loss layer to train intermediate layers of the student model to achieve a similar distribution of neurons as achieved by the teacher model.
    Type: Application
    Filed: March 1, 2018
    Publication date: September 20, 2018
    Applicant: NEC Laboratories America, Inc.
    Inventors: Wongun Choi, Manmohan Chandraker, Guobin Chen, Xiang Yu
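    The weighted cross-entropy idea in this abstract can be shown in a few lines of pure Python. This is a stand-in for the loss layer, not the patented implementation: real layers operate on logits over batches, and the weight values here are illustrative:

    ```python
    import math

    def weighted_ce(student_probs, teacher_probs, weights):
        """Class-weighted cross-entropy between the teacher's and student's
        class distributions. Per-class weights counter the imbalance
        between background and object classes during distillation."""
        return -sum(w * t * math.log(s)
                    for w, t, s in zip(weights, teacher_probs, student_probs))
    ```

    Giving object classes a larger weight than the dominant background class keeps the student from collapsing toward predicting background everywhere.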
  • Publication number: 20180251135
    Abstract: According to one embodiment, when an ADV transitions from a manual driving mode to an autonomous driving mode, a first speed reference is determined based on a current position of the ADV. The current position of the ADV is dynamically measured in response to a speed control command issued in a previous command cycle and a target speed of a current command cycle. A second speed reference is determined based on a current target position for the current command cycle. A speed control command is then generated for controlling the speed of the ADV in the autonomous driving mode based on the first speed reference, the second speed reference, and the target speed of the ADV for the current command cycle, such that the ADV operates at a similar acceleration or deceleration rate before and after transitioning from the manual driving mode to the autonomous driving mode.
    Type: Application
    Filed: March 3, 2017
    Publication date: September 6, 2018
    Inventors: Qi LUO, Qi KONG, Fan ZHU, Sen HU, Xiang YU, Zhenguang ZHU, Yuchang PAN, Jiarui HE, Haoyang FAN, Guang YANG, Jingao WANG
  • Patent number: 10031928
    Abstract: The described embodiments relate to methods and products for organizing a plurality of images. Specifically, the methods and products can automatically organize a plurality of images into a plurality of groups of images using allocation criteria. The allocation criteria for each image include a similarity distance between that image and at least one other image that measures how similar those images are. Each image can be allocated to at least one similar image group based on the similarity distance. The methods and products can also be used to visualize and display representative images for each of the groups of images.
    Type: Grant
    Filed: July 2, 2015
    Date of Patent: July 24, 2018
    Assignee: BICDROID INC.
    Inventors: En-Hui Yang, Xiang Yu, Jin Meng