Patents by Inventor Yafeng Yin

Yafeng Yin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11094028
    Abstract: A method comprises: obtaining historical vehicle service data in an area for a time period, including historical locations of passenger-seeking vehicles with respect to time, historical locations of passenger orders with respect to time, and historical trip fares with respect to pick-up locations and time; discretizing the area into a plurality of zones and discretizing the time period into a plurality of time segments; aggregating the historical vehicle service data according to the zones and time segments; obtaining an expected reward for a passenger-seeking vehicle to move from zone A to each neighboring zone of zone A based on the aggregated historical vehicle service data; and obtaining a probability of a passenger-seeking vehicle moving from zone A to a neighboring zone B based on the expected rewards for the passenger-seeking vehicle to move from zone A to each neighboring zone of zone A. (A brief illustrative sketch of this zone-based aggregation and reward-to-probability computation appears after this listing.)
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: August 17, 2021
    Assignee: Beijing DiDi Infinity Technology and Development Co., Ltd.
    Inventors: Jintao Ke, Guojun Wu, Zhengtian Xu, Hai Yang, Yafeng Yin, Jieping Ye
  • Publication number: 20200175635
    Abstract: A method comprises: obtaining historical vehicle service data in an area for a time period, including historical locations of passenger-seeking vehicles with respect to time, historical locations of passenger orders with respect to time, and historical trip fares with respect to pick-up locations and time; discretizing the area into a plurality of zones and discretizing the time period into a plurality of time segments; aggregating the historical vehicle service data according to the zones and time segments; obtaining an expected reward for a passenger-seeking vehicle to move from zone A to each neighboring zone of zone A based on the aggregated historical vehicle service data; and obtaining a probability of a passenger-seeking vehicle moving from zone A to a neighboring zone B based on the expected rewards for the passenger-seeking vehicle to move from zone A to each neighboring zone of zone A.
    Type: Application
    Filed: December 5, 2019
    Publication date: June 4, 2020
    Applicant: BEIJING DIDI INFINITY TECHNOLOGY AND DEVELOPMENT CO., LTD.
    Inventors: Jintao KE, Guojun WU, Zhengtian XU, Hai YANG, Yafeng YIN, Jieping YE
  • Patent number: 9898809
    Abstract: Systems, methods and techniques are provided for interacting with mobile devices using a camera-based keyboard. The system comprises a processor system including at least one processor. The processor system is configured to at least capture, via the camera, a plurality of images of the keyboard and of at least one hand typing on the keyboard. Based on the plurality of captured images, the processor system is further configured to locate the keyboard, extract at least a portion of the keys on the keyboard, extract a hand, and detect a fingertip of the extracted hand. After that, a keystroke may be detected and localized by tracking the detected fingertip in at least one of the plurality of captured images, and a character corresponding to the localized keystroke may be determined. (A brief illustrative sketch of this kind of keystroke-detection pipeline appears after this listing.)
    Type: Grant
    Filed: November 10, 2015
    Date of Patent: February 20, 2018
    Assignees: NANJING UNIVERSITY, COLLEGE OF WILLIAM AND MARY
    Inventors: Lei Xie, Yafeng Yin, Qun Li, Sanglu Lu
  • Publication number: 20170131760
    Abstract: Systems, methods and techniques are provided for interacting with mobile devices using a camera-based keyboard. The system comprises a processor system including at least one processor. The processor system is configured to at least capture, via the camera, a plurality of images of the keyboard and of at least one hand typing on the keyboard. Based on the plurality of captured images, the processor system is further configured to locate the keyboard, extract at least a portion of the keys on the keyboard, extract a hand, and detect a fingertip of the extracted hand. After that, a keystroke may be detected and localized by tracking the detected fingertip in at least one of the plurality of captured images, and a character corresponding to the localized keystroke may be determined.
    Type: Application
    Filed: November 10, 2015
    Publication date: May 11, 2017
    Applicants: Nanjing University, College of William and Mary
    Inventors: Lei Xie, Yafeng Yin, Qun Li, Sanglu Lu
  • Patent number: 9256955
    Abstract: A system and method processes visual information including at least one object in motion. The visual information is processed by locating at least one spatial edge of the object, generating a plurality of spatio-temporal gradients for the at least one spatial edge over N frames, and then generating motion blur images from the spatio-temporal gradients. A regression analysis is performed on the motion blur images to determine direction of motion information of the object, and scene activity vectors are then generated for the N frames based on the direction of motion information. An event is detected in the visual information based on the scene activity vectors. (A brief illustrative sketch of this motion-blur and scene-activity computation appears after this listing.)
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: February 9, 2016
    Assignee: Alcatel Lucent
    Inventors: Lawrence O'Gorman, Tin K. Ho, Yafeng Yin
  • Publication number: 20150235379
    Abstract: A system and method processes visual information including at least one object in motion. The visual information is processed by locating at least one spatial edge of the object, generating a plurality of spatio-temporal gradients for the at least one spatial edge over N frames, and then generating motion blur images from the spatio-temporal gradients. A regression analysis is performed on the motion blur images to determine direction of motion information of the object, and scene activity vectors are then generated for the N frames based on the direction of motion information. An event is detected in the visual information based on the scene activity vectors.
    Type: Application
    Filed: March 13, 2013
    Publication date: August 20, 2015
    Inventors: Lawrence O'Gorman, Tin K. Ho, Yafeng Yin
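
The abstract shared by patent 11094028 and publication 20200175635 describes discretizing an area into zones and a time period into segments, aggregating historical ride data over those cells, estimating an expected reward for a vehicle to move from a zone to each of its neighbors, and deriving a move probability from those rewards. The Python sketch below only illustrates the general shape of such a computation: the `zone_of` and `segment_of` discretization helpers are hypothetical, and the softmax used to turn rewards into probabilities is an assumption, since the abstract says only that the probabilities are based on the expected rewards.

```python
# Minimal sketch, not the patented implementation. Zones and time segments are
# assumed to be integer indices produced by hypothetical helpers, and rewards
# are converted to move probabilities with a softmax (an assumption).
import numpy as np

def aggregate_by_zone_and_time(records, zone_of, segment_of):
    """Count historical passenger orders per (zone, time segment).

    `records` is an iterable of (location, timestamp) pairs; `zone_of` and
    `segment_of` are hypothetical discretization helpers mapping a location
    to a zone id and a timestamp to a time-segment id.
    """
    counts = {}
    for location, timestamp in records:
        key = (zone_of(location), segment_of(timestamp))
        counts[key] = counts.get(key, 0) + 1
    return counts

def move_probabilities(expected_rewards):
    """Turn expected rewards for moving to each neighboring zone into a
    probability distribution over those neighbors (softmax; assumed here)."""
    rewards = np.asarray(list(expected_rewards.values()), dtype=float)
    weights = np.exp(rewards - rewards.max())        # numerically stable softmax
    probs = weights / weights.sum()
    return dict(zip(expected_rewards.keys(), probs))

# Example: expected rewards (e.g., anticipated fare minus travel cost) for
# moving from zone A to three neighboring zones in one time segment.
print(move_probabilities({"B": 12.0, "C": 9.5, "D": 7.0}))
```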
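
Patent 9898809 and publication 20170131760 describe a camera-based keyboard pipeline: locate the keyboard, extract keys and the hand, detect a fingertip, then detect and localize keystrokes and map them to characters. The sketch below is a minimal illustration of that kind of pipeline, not the patented method: the YCrCb skin-color bounds, the lowest-contour-point fingertip estimate, the stillness heuristic for keystroke detection, and the `key_regions` bounding-box lookup are all assumptions introduced here.

```python
# Minimal sketch of a camera-based keyboard pipeline, not the patented method.
# Assumptions: a fixed camera views a printed keyboard layout; the hand is
# segmented by skin-color thresholding in YCrCb space; the fingertip is
# approximated as the lowest point of the largest skin-colored contour; and
# `key_regions` is a hypothetical dict mapping a key label to its bounding box
# (x, y, w, h), produced by a one-time keyboard-localization step.
import cv2
import numpy as np

SKIN_LO = np.array([0, 133, 77], dtype=np.uint8)    # assumed YCrCb skin bounds
SKIN_HI = np.array([255, 173, 127], dtype=np.uint8)

def detect_fingertip(frame_bgr):
    """Return the (x, y) pixel of the lowest point of the largest skin-colored
    contour, used here as a crude fingertip estimate, or None."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, SKIN_LO, SKIN_HI)
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    x, y = max(hand[:, 0, :], key=lambda p: p[1])    # lowest point in the image
    return int(x), int(y)

def key_at(point, key_regions):
    """Map a fingertip position to a key label via bounding-box lookup."""
    px, py = point
    for label, (x, y, w, h) in key_regions.items():
        if x <= px < x + w and y <= py < y + h:
            return label
    return None

def detect_keystrokes(frames, key_regions, still_frames=3, tol=2):
    """Yield a key label whenever the fingertip stays (almost) still for
    `still_frames` consecutive frames (a simple stand-in for keystroke
    detection and localization)."""
    history = []
    for frame in frames:
        tip = detect_fingertip(frame)
        if tip is None:
            history.clear()
            continue
        history.append(tip)
        if len(history) >= still_frames:
            recent = history[-still_frames:]
            if max(abs(a - b) for p in recent for a, b in zip(p, recent[0])) <= tol:
                label = key_at(tip, key_regions)
                if label is not None:
                    yield label
                history.clear()
```

A caller would feed `detect_keystrokes` frames from, for example, `cv2.VideoCapture`, together with a `key_regions` dictionary built once from the detected keyboard layout.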
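
Patent 9256955 and publication 20150235379 describe locating spatial edges, building spatio-temporal gradients over N frames, forming motion blur images, running a regression for direction of motion, and composing scene activity vectors for event detection. The sketch below is one plausible reading of those steps, not Alcatel Lucent's implementation: the decayed gradient accumulation, the weighted least-squares line fit standing in for the regression analysis, the two-component activity vector, and the jump-based event test are all assumptions introduced here.

```python
# Minimal sketch, not the patented algorithm. Assumptions: grayscale frames as
# 2-D numpy arrays; spatial edges via gradient magnitude; "motion blur"
# approximated as a decayed sum of temporal edge differences over N frames; a
# weighted line fit as a stand-in for the regression; an event flagged when
# the scene activity vector jumps far from its running mean.
import numpy as np

def edge_magnitude(frame):
    """Spatial edge strength via finite-difference gradients."""
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy)

def motion_blur_image(frames, decay=0.7):
    """Decayed accumulation of spatio-temporal gradients over the frame window."""
    edges = [edge_magnitude(f) for f in frames]
    blur = np.zeros_like(edges[0])
    for prev, curr in zip(edges, edges[1:]):
        blur = decay * blur + np.abs(curr - prev)    # spatio-temporal gradient
    return blur

def dominant_direction(blur, threshold=10.0):
    """Weighted least-squares fit y = a*x + b over high-blur pixels; the
    fitted slope serves as a crude direction-of-motion estimate (radians)."""
    ys, xs = np.nonzero(blur > threshold)
    if xs.size < 2:
        return 0.0
    if xs.min() == xs.max():                         # purely vertical spread
        return float(np.pi / 2)
    a, _ = np.polyfit(xs, ys, 1, w=blur[ys, xs])
    return float(np.arctan(a))

def scene_activity_vector(frames):
    """Summarize a window of frames as (total blur energy, dominant direction)."""
    blur = motion_blur_image(frames)
    return np.array([blur.sum(), dominant_direction(blur)])

def detect_event(activity_vectors, jump=3.0):
    """Flag windows whose activity deviates strongly from the running mean."""
    v = np.asarray(activity_vectors, dtype=float)
    mean, std = v.mean(axis=0), v.std(axis=0) + 1e-9
    return np.any(np.abs(v - mean) / std > jump, axis=1)

# Usage outline (frames would come from a video source):
#   windows = [frames[i:i + 10] for i in range(0, len(frames) - 10, 5)]
#   events = detect_event([scene_activity_vector(w) for w in windows])
```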