Patents by Inventor Fei Ma

Fei Ma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7986372
    Abstract: Systems and methods for smart media content thumbnail extraction are described. In one aspect, program metadata is generated from recorded video content. The program metadata includes one or more key-frames from one or more corresponding shots. An objectively representative key-frame is identified from among the key-frames as a function of shot duration and frequency of appearance of key-frame content across multiple shots. The objectively representative key-frame is an image frame representative of the recorded video content. A thumbnail is created from the objectively representative key-frame.
    Type: Grant
    Filed: August 2, 2004
    Date of Patent: July 26, 2011
    Assignee: Microsoft Corporation
    Inventors: Yu-Fei Ma, Bin Lin, Zhike Kong, Xinli Zou, Wei-Ying Ma, Hong-Jiang Zhang
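
The abstract of patent 7986372 describes ranking candidate key-frames by the duration of their parent shot and by how often similar content recurs across shots. The minimal Python sketch below illustrates that kind of scoring; the data structure, the weights, and the pre-computed appearance counts are assumptions for illustration, not the patented method.

```python
from dataclasses import dataclass

@dataclass
class KeyFrame:
    shot_id: int
    shot_duration: float      # seconds the parent shot lasts
    appearance_count: int     # how many shots contain visually similar content

def pick_representative(keyframes, w_duration=0.5, w_frequency=0.5):
    """Return the key-frame with the highest combined score.

    Duration and cross-shot frequency are normalized so they contribute
    on a comparable scale (illustrative weighting only).
    """
    max_dur = max(kf.shot_duration for kf in keyframes) or 1.0
    max_freq = max(kf.appearance_count for kf in keyframes) or 1
    def score(kf):
        return (w_duration * kf.shot_duration / max_dur
                + w_frequency * kf.appearance_count / max_freq)
    return max(keyframes, key=score)

# Example: three candidate key-frames from a recorded program
candidates = [
    KeyFrame(shot_id=0, shot_duration=4.2, appearance_count=1),
    KeyFrame(shot_id=1, shot_duration=12.8, appearance_count=3),
    KeyFrame(shot_id=2, shot_duration=6.1, appearance_count=2),
]
best = pick_representative(candidates)
print(f"thumbnail source: shot {best.shot_id}")
```
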
  • Publication number: 20100288262
    Abstract: A gas cooker control system includes an ignition controller that includes a touch button, a resistor, and first, second, and third comparators. One end of the touch button is connected to an external power source and the other end to the first inputs of the three comparators; the resistor is connected between those first inputs and ground. The second inputs of the first, second, and third comparators provide a first, second, and third reference voltage, respectively, and the comparator outputs are connected to an igniter, a gas valve, and a system power source. The voltage of the external power source is greater than the first reference voltage, the first reference voltage is greater than the second, and the second is greater than the third; a gas valve controller including a switch circuit con…
    Type: Application
    Filed: April 25, 2008
    Publication date: November 18, 2010
    Inventors: Fei Ma, Fujun Cao, Jianwei Liu, Zhaobing Shou
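
The ignition controller in this application is built around three comparators that share one input from the touch button and compare it against descending reference voltages. The small simulation below only illustrates that threshold ladder; the specific voltages and the output mapping are invented, and only the ordering (supply voltage > first > second > third reference) follows the abstract.

```python
def comparator_outputs(v_button, v_ref1=3.0, v_ref2=2.0, v_ref3=1.0):
    """Model three comparators sharing one input from the touch button.

    Each comparator goes high when the button voltage exceeds its
    reference; per the abstract the outputs drive the igniter, the
    gas valve, and the system power source, respectively.
    """
    assert v_ref1 > v_ref2 > v_ref3, "references must be strictly ordered"
    return {
        "igniter": v_button > v_ref1,
        "gas_valve": v_button > v_ref2,
        "system_power": v_button > v_ref3,
    }

for v in (0.5, 1.5, 2.5, 3.5):   # sweep the touch-button voltage
    print(v, comparator_outputs(v))
```
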
  • Publication number: 20100253296
    Abstract: An over-current condition is detected in a synchronous DC-DC converter by sampling and holding a measured load current value. The load current is sampled while a low-side transistor is ON and then held when the low-side transistor is OFF. The held value is compared to a threshold value while the low-side transistor is OFF. The comparison occurs during the portion of the cycle when the low-side transistor is OFF so that a comparator has sufficient time in which to detect the over-current condition, even in high duty cycle applications.
    Type: Application
    Filed: April 3, 2009
    Publication date: October 7, 2010
    Applicant: Texas Instruments Incorporated
    Inventors: Jin-Biao Huang, Joseph M. Khayat, Fei Ma
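
This abstract describes sampling the load current while the low-side transistor conducts and comparing the held value against a limit during the off time, so the comparator has the whole off-time to resolve. A minimal sketch of that sample-and-hold check follows; the class name, threshold, and return convention are assumptions.

```python
class OverCurrentDetector:
    """Illustrative sample-and-hold over-current check for a buck converter."""

    def __init__(self, threshold_amps):
        self.threshold = threshold_amps
        self.held_current = 0.0

    def update(self, low_side_on, measured_current):
        if low_side_on:
            # Sample the load current while the low-side switch conducts.
            self.held_current = measured_current
            return False
        # Low-side switch is off: compare the held sample against the limit.
        return self.held_current > self.threshold

det = OverCurrentDetector(threshold_amps=5.0)
print(det.update(low_side_on=True, measured_current=6.2))   # sampling -> False
print(det.update(low_side_on=False, measured_current=0.0))  # compares held 6.2 A -> True
```
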
  • Publication number: 20100188061
    Abstract: A synchronous buck converter operates in a PWM mode of operation and switches to a light-load mode of operation under a light-load condition. When operating in the light-load mode, the synchronous buck converter transitions between a burst mode and an idle mode of operation. In the burst mode of operation, the converter operates with a fixed but increased duty ratio, with respect to the PWM mode of operation, that stores additional energy in an output capacitor. In the idle mode of operation, the high-side and low-side transistors are each turned off. To maximize energy savings and to quickly transition back to the PWM mode of operation if the load increases, a limit is imposed on the number of allowed switching cycles when bursting, and a minimum ratio of the number of clock cycles when idling to the number of switching cycles when bursting is set. Additionally, a comparator is provided to detect a sudden step-increase in the load to quickly switch the converter back to the PWM mode of operation.
    Type: Application
    Filed: January 27, 2009
    Publication date: July 29, 2010
    Applicant: Texas Instruments Incorporated
    Inventors: Fei Ma, Jin-Biao Huang, Brian Thomas Lynch
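
The light-load behavior described here, a bounded run of fixed-duty burst cycles followed by a minimum proportion of idle cycles, plus a comparator that forces a return to PWM on a load step, can be sketched as a small state machine. All numeric limits and names below are assumptions for illustration.

```python
def light_load_controller(load_samples, max_burst_cycles=4, min_idle_ratio=3,
                          load_step_threshold=1.0):
    """Toy burst/idle sequencer for a buck converter's light-load mode.

    Alternates a bounded run of fixed-duty 'burst' cycles with at least
    min_idle_ratio times as many 'idle' cycles; a sensed load above
    load_step_threshold returns the converter to 'pwm'. Numbers are arbitrary.
    """
    mode, count = "burst", 0
    for load in load_samples:
        if mode == "pwm" or load > load_step_threshold:
            mode = "pwm"                          # comparator saw a load step
            yield "pwm"
            continue
        yield mode
        count += 1
        if mode == "burst" and count >= max_burst_cycles:
            mode, count = "idle", 0               # cap on consecutive burst cycles
        elif mode == "idle" and count >= min_idle_ratio * max_burst_cycles:
            mode, count = "burst", 0

print(list(light_load_controller([0.1] * 20 + [2.0, 0.1])))
```
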
  • Patent number: 7400761
    Abstract: Systems and methods for image attention analysis are described. In one aspect, image attention is modeled by preprocessing an image to generate a quantized set of image blocks. A contrast-based saliency map for modeling one-to-three levels of image attention is then generated from the quantized image blocks.
    Type: Grant
    Filed: September 30, 2003
    Date of Patent: July 15, 2008
    Assignee: Microsoft Corporation
    Inventors: Yu-Fei Ma, Hong-Jiang Zhang
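
Patent 7400761's abstract outlines block-level preprocessing followed by a contrast-based saliency map. The sketch below computes a simple block-contrast map as a stand-in; the block size, neighborhood, and contrast measure are assumptions rather than the patented analysis.

```python
import numpy as np

def contrast_saliency(gray, block=8):
    """Illustrative block contrast map: each block's saliency is the mean
    absolute difference between its average intensity and that of its
    4-connected neighbor blocks."""
    h, w = gray.shape
    bh, bw = h // block, w // block
    means = gray[:bh * block, :bw * block].reshape(bh, block, bw, block).mean(axis=(1, 3))
    sal = np.zeros_like(means)
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted = np.roll(means, (dy, dx), axis=(0, 1))
        sal += np.abs(means - shifted)
    return sal / 4.0

img = np.random.rand(64, 64)          # stand-in for a quantized grayscale image
print(contrast_saliency(img).shape)   # (8, 8) block-level saliency map
```
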
  • Publication number: 20080095330
    Abstract: Disclosed are a method, information processing system, and computer storage program product for providing communication between a user electronic device and an Interactive Voice Response ("IVR") system. At least one selection from a user corresponding to at least one menu in an IVR system is received. The selection comprises an instruction selection sequence. At least one voice message and at least one visual message associated with the voice message, each corresponding to the instruction selection sequence, are generated in response to the receiving. The voice message and visual message are transmitted to the electronic device associated with the user. The visual message is displayed on the electronic device associated with the user while the voice message is being played.
    Type: Application
    Filed: October 24, 2007
    Publication date: April 24, 2008
    Applicant: International Business Machines Corporation
    Inventors: Ling Jin, Yu Fei Ma, Pei Sun, Jun Shen, Chun Sheng Chu
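
The application describes generating a paired voice message and visual message for a user's IVR menu selection and sending both to the user's device. The toy sketch below illustrates that pairing; the payload formats and the menu mapping are invented.

```python
def build_ivr_response(selection_sequence, menu):
    """Return paired voice and visual payloads for an IVR selection.

    menu maps an instruction-selection sequence (e.g. "1-3") to a prompt
    string; both payload formats here are purely illustrative.
    """
    prompt = menu.get(selection_sequence, "Sorry, that option was not recognized.")
    voice_message = {"type": "audio/tts", "text": prompt}
    visual_message = {"type": "text/plain", "body": prompt,
                      "ref": selection_sequence}
    return voice_message, visual_message

menu = {"1": "Please enter your account number.",
        "1-3": "A representative will join the call shortly."}
voice, visual = build_ivr_response("1-3", menu)
print(voice["text"])
print(visual["body"])
```
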
  • Publication number: 20080033007
    Abstract: Disclosed are compounds that are effective for selectively killing cancer cells. Compounds have been demonstrated to be especially effective for killing glioma cells, while exhibiting low toxicity to normal cells.
    Type: Application
    Filed: April 18, 2007
    Publication date: February 7, 2008
    Inventors: Duane Miller, Eldon Geisert, Charles Yates, Renukadevi Patil, William Orr, XiangDi Wang, Fei Ma, Oleg Kirichenko
  • Patent number: 7312819
    Abstract: A robust camera motion analysis method is described. In an implementation, a method includes analyzing video having sequential frames to determine one or more camera motions that occurred when sequential frames of the video were captured. The one or more camera motions for each frame are described by a set of displacement curves, a mean absolute difference (MAD) curve, and a major motion (MAJ) curve. The set of displacement curves describe the one or more camera motions in respective horizontal (H), vertical (V), and radial (R) directions. The MAD curve relates a minimum MAD value from the set of displacement curves. The MAJ curve is generated from the minimum MAD value and provides one or more qualitative descriptions that describe the one or more camera motions as at least one of still, vertical, horizontal and radial.
    Type: Grant
    Filed: November 24, 2003
    Date of Patent: December 25, 2007
    Assignee: Microsoft Corporation
    Inventors: Yu-Fei Ma, Hong-Jiang Zhang, Dongjun Lan
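
Patent 7312819's abstract describes per-frame displacement curves in the horizontal, vertical, and radial directions, a mean absolute difference (MAD) curve, and a qualitative MAJ label of still, vertical, horizontal, or radial. The function below is a toy classifier over such MAD values; the margin and the winner-takes-all rule are assumptions, not the patented test.

```python
def classify_camera_motion(mad_zero, mad_h, mad_v, mad_r, still_margin=0.05):
    """Label one frame's dominant camera motion from MAD values.

    mad_zero is the mean absolute difference with no displacement applied;
    mad_h/mad_v/mad_r are the minimum MADs along the horizontal, vertical,
    and radial displacement curves.
    """
    candidates = {"horizontal": mad_h, "vertical": mad_v, "radial": mad_r}
    label, best_mad = min(candidates.items(), key=lambda kv: kv[1])
    # If motion compensation barely improves on no displacement, call it still.
    if mad_zero - best_mad < still_margin:
        return "still"
    return label

print(classify_camera_motion(0.30, 0.12, 0.25, 0.22))  # 'horizontal'
print(classify_camera_motion(0.10, 0.09, 0.10, 0.10))  # 'still'
```
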
  • Patent number: 7313185
    Abstract: Systems and methods for representing sequential motion patterns are described. In one aspect, video frames are converted into a sequence of energy redistribution measurements. One or more motion filters are then applied to the ER measurements to generate one or more temporal sequences of motion patterns, the number of temporal sequences being a function of the number of motion filters.
    Type: Grant
    Filed: August 1, 2003
    Date of Patent: December 25, 2007
    Assignee: Microsoft Corporation
    Inventors: Yu-Fei Ma, Gu Xu, Hong-Jiang Zhang
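
Here, video frames become a temporal sequence of energy-redistribution (ER) measurements, and a bank of motion filters produces one motion-pattern sequence per filter. The sketch below uses simple convolution kernels as stand-ins for those filters; the kernels and the ER proxy are invented for illustration.

```python
import numpy as np

def apply_motion_filters(er_sequence, filters):
    """Convolve an energy-redistribution (ER) sequence with a bank of
    temporal filters, yielding one motion-pattern sequence per filter."""
    return [np.convolve(er_sequence, k, mode="same") for k in filters]

er = np.abs(np.diff(np.random.rand(101)))        # toy per-frame ER measurements
bank = [np.array([1, -1]),                       # emphasizes abrupt energy shifts
        np.ones(5) / 5.0]                        # smooths to expose slow trends
patterns = apply_motion_filters(er, bank)
print(len(patterns), [p.shape for p in patterns])  # one sequence per filter
```
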
  • Patent number: 7274741
    Abstract: Systems and methods to generate an attention model for computational analysis of video data are described. In one aspect, feature components from a video data sequence are extracted. Attention data is generated by applying multiple attention models to the extracted feature components. The generated attention data is integrated into a comprehensive user attention model for the computational analysis of the video data sequence.
    Type: Grant
    Filed: November 1, 2002
    Date of Patent: September 25, 2007
    Assignee: Microsoft Corporation
    Inventors: Yu-Fei Ma, Lie Lu, Hong-Jiang Zhang
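
Patent 7274741 describes extracting several feature-specific attention curves and integrating them into one comprehensive user attention model. A minimal fusion sketch follows; the normalization and equal default weights are assumptions, not the patented integration scheme.

```python
import numpy as np

def fuse_attention(curves, weights=None):
    """Combine several per-frame attention curves (motion, audio, ...) into
    one comprehensive curve by normalized weighted averaging."""
    curves = [np.asarray(c, dtype=float) for c in curves]
    weights = weights or [1.0 / len(curves)] * len(curves)
    fused = np.zeros_like(curves[0])
    for w, c in zip(weights, curves):
        rng = c.max() - c.min()
        fused += w * ((c - c.min()) / rng if rng else c)
    return fused

motion = np.random.rand(300)     # toy per-frame attention values
audio = np.random.rand(300)
print(fuse_attention([motion, audio]).shape)   # (300,)
```
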
  • Patent number: 7127120
    Abstract: Systems and methods to automatically edit a video to generate a video summary are described. In one aspect, sub-shots are extracted from the video. Importance measures are calculated for at least a portion of the extracted sub-shots. Respective relative distributions are determined for sub-shots whose importance measures are relatively higher than those of the other sub-shots. Based on the determined relative distributions, sub-shots that do not exhibit a uniform distribution with respect to the other sub-shots are dropped. The remaining sub-shots are connected with respective transitions to generate the video summary.
    Type: Grant
    Filed: November 1, 2002
    Date of Patent: October 24, 2006
    Assignee: Microsoft Corporation
    Inventors: Xian-Sheng Hua, Lie Lu, Yu-Fei Ma, Mingjing Li, Hong-Jiang Zhang
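
Patent 7127120's abstract combines an importance measure per sub-shot with a check that the kept sub-shots remain roughly uniformly distributed over the video. The greedy sketch below illustrates that idea; the keep ratio, the spacing rule, and the (start_time, importance) representation are assumptions.

```python
def select_subshots(subshots, keep_ratio=0.5):
    """Pick important sub-shots while keeping their spread roughly uniform
    over the source video. Each sub-shot is (start_time, importance)."""
    target = max(1, int(len(subshots) * keep_ratio))
    by_importance = sorted(subshots, key=lambda s: s[1], reverse=True)
    kept = sorted(by_importance[:target])            # back to temporal order
    # Drop a kept sub-shot if it crowds its temporal neighbor too closely.
    spacing = (subshots[-1][0] - subshots[0][0]) / max(target, 1)
    uniform = [kept[0]]
    for shot in kept[1:]:
        if shot[0] - uniform[-1][0] >= 0.5 * spacing:
            uniform.append(shot)
    return uniform

shots = [(t, imp) for t, imp in zip(range(0, 100, 10), [3, 9, 2, 8, 7, 1, 6, 9, 2, 5])]
print(select_subshots(shots))
```
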
  • Patent number: 7116716
    Abstract: Systems and methods to generate a motion attention model of a video data sequence are described. In one aspect, a motion saliency map B is generated to precisely indicate motion attention areas for each frame in the video data sequence. The motion saliency maps are each based on intensity I, spatial coherence Cs, and temporal coherence Ct values. These values are extracted from each block or pixel in motion fields that are extracted from the video data sequence. Brightness values of detected motion attention areas in each frame are accumulated to generate, with respect to time, the motion attention model.
    Type: Grant
    Filed: November 1, 2002
    Date of Patent: October 3, 2006
    Assignee: Microsoft Corporation
    Inventors: Yu-Fei Ma, Hong-Jiang Zhang
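
This motion attention model builds a per-frame saliency map from motion-field intensity (I), spatial coherence (Cs), and temporal coherence (Ct), then accumulates the bright attended regions over time. The sketch below uses a multiplicative fusion common in the related literature; the exact formula, map size, and accumulation rule should be read as assumptions here.

```python
import numpy as np

def motion_saliency(intensity, spatial_coh, temporal_coh):
    """Combine per-block intensity (I), spatial coherence (Cs), and temporal
    coherence (Ct) maps into a motion saliency map (illustrative fusion)."""
    return intensity * temporal_coh * (1.0 - intensity * spatial_coh)

blocks = (16, 16)                       # one value per macroblock of a frame
I = np.random.rand(*blocks)             # normalized motion-vector magnitude
Cs = np.random.rand(*blocks)            # consistency with neighboring blocks
Ct = np.random.rand(*blocks)            # consistency across nearby frames
B = motion_saliency(I, Cs, Ct)
attention_value = B[B > B.mean()].sum() # accumulate bright (attended) areas
print(B.shape, round(float(attention_value), 3))
```
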
  • Publication number: 20060165178
    Abstract: Systems and methods to generate a motion attention model of a video data sequence are described. In one aspect, a motion saliency map B is generated to precisely indicate motion attention areas for each frame in the video data sequence. The motion saliency maps are each based on intensity I, spatial coherence Cs, and temporal coherence Ct values. These values are extracted from each block or pixel in motion fields that are extracted from the video data sequence. Brightness values of detected motion attention areas in each frame are accumulated to generate, with respect to time, the motion attention model.
    Type: Application
    Filed: April 3, 2006
    Publication date: July 27, 2006
    Applicant: Microsoft Corporation
    Inventors: Yu-Fei Ma, Hong-Jiang Zhang
  • Publication number: 20060026524
    Abstract: Systems and methods for smart media content thumbnail extraction are described. In one aspect, program metadata is generated from recorded video content. The program metadata includes one or more key-frames from one or more corresponding shots. An objectively representative key-frame is identified from among the key-frames as a function of shot duration and frequency of appearance of key-frame content across multiple shots. The objectively representative key-frame is an image frame representative of the recorded video content. A thumbnail is created from the objectively representative key-frame.
    Type: Application
    Filed: August 2, 2004
    Publication date: February 2, 2006
    Applicant: Microsoft Corporation
    Inventors: Yu-Fei Ma, Bin Lin, Zhike Kong, Xinli Zou, Wei-Ying Ma, Hong-Jiang Zhang
  • Publication number: 20050110875
    Abstract: A robust camera motion analysis method is described. In an implementation, a method includes analyzing video having sequential frames to determine one or more camera motions that occurred when sequential frames of the video were captured. The one or more camera motions for each frame are described by a set of displacement curves, a mean absolute difference (MAD) curve, and a major motion (MAJ) curve. The set of displacement curves describe the one or more camera motions in respective horizontal (H), vertical (V), and radial (R) directions. The MAD curve relates a minimum MAD value from the set of displacement curves. The MAJ curve is generated from the minimum MAD value and provides one or more qualitative descriptions that describe the one or more camera motions as at least one of still, vertical, horizontal and radial.
    Type: Application
    Filed: November 24, 2003
    Publication date: May 26, 2005
    Inventors: Yu-Fei Ma, Hong-Jiang Zhang, Dongjun Lan
  • Publication number: 20050069206
    Abstract: Systems and methods for image attention analysis are described. In one aspect, image attention is modeled by preprocessing an image to generate a quantized set of image blocks. A contrast-based saliency map for modeling one-to-three levels of image attention is then generated from the quantized image blocks.
    Type: Application
    Filed: September 30, 2003
    Publication date: March 31, 2005
    Inventors: Yu-Fei Ma, Hong-Jiang Zhang
  • Publication number: 20050025242
    Abstract: Systems and methods for representing sequential motion patterns are described. In one aspect, video frames are converted into a sequence of energy redistribution measurements. One or more motion filters are then applied to the ER measurements to generate one or more temporal sequences of motion patterns, the number of temporal sequences being a function of the number of motion filters.
    Type: Application
    Filed: August 1, 2003
    Publication date: February 3, 2005
    Inventors: Yu-Fei Ma, Gu Xu, Hong-Jiang Zhang
  • Publication number: 20040086046
    Abstract: Systems and methods to generate a motion attention model of a video data sequence are described. In one aspect, a motion saliency map B is generated to precisely indicate motion attention areas for each frame in the video data sequence. The motion saliency maps are each based on intensity I, spatial coherence Cs, and temporal coherence Ct values. These values are extracted from each block or pixel in motion fields that are extracted from the video data sequence. Brightness values of detected motion attention areas in each frame are accumulated to generate, with respect to time, the motion attention model.
    Type: Application
    Filed: November 1, 2002
    Publication date: May 6, 2004
    Inventors: Yu-Fei Ma, Hong-Jiang Zhang
  • Publication number: 20040088723
    Abstract: Systems and methods to generate a video summary of a video data sequence are described. In one aspect, key-frames of the video data sequence are identified independent of shot boundary detection. A static summary of shots in the video data sequence is then generated based on key-frame importance. For each shot in the static summary of shots, dynamic video skims are calculated. The video summary consists of the calculated dynamic video skims.
    Type: Application
    Filed: November 1, 2002
    Publication date: May 6, 2004
    Inventors: Yu-Fei Ma, Lie Lu, Hong-Jiang Zhang, Mingjing Li
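
This application's summary pipeline picks key-frames independent of shot boundaries, builds a static summary per shot from key-frame importance, and then cuts dynamic skims around those picks. The toy sketch below illustrates that last step; the dictionary fields, skim length, and clipping rule are all assumptions.

```python
def build_summary(shots, skim_seconds=2.0):
    """Produce a toy video summary: pick the most important key-frame per
    shot (static summary), then cut a short skim around it (dynamic skim).
    shots is a list of dicts with 'start', 'end', and 'keyframes'
    ([(timestamp, importance), ...])."""
    skims = []
    for shot in shots:
        t, _ = max(shot["keyframes"], key=lambda kf: kf[1])   # static-summary pick
        half = skim_seconds / 2.0
        start = max(shot["start"], t - half)
        end = min(shot["end"], t + half)
        skims.append((start, end))
    return skims

shots = [{"start": 0.0, "end": 8.0, "keyframes": [(1.0, 0.4), (5.0, 0.9)]},
         {"start": 8.0, "end": 15.0, "keyframes": [(9.5, 0.7)]}]
print(build_summary(shots))   # [(4.0, 6.0), (8.5, 10.5)]
```
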
  • Publication number: 20040085341
    Abstract: Systems and methods to automatically edit a video to generate a video summary are described. In one aspect, sub-shots are extracted from the video. Importance measures are calculated for at least a portion of the extracted sub-shots. Respective relative distributions are determined for sub-shots whose importance measures are relatively higher than those of the other sub-shots. Based on the determined relative distributions, sub-shots that do not exhibit a uniform distribution with respect to the other sub-shots are dropped. The remaining sub-shots are connected with respective transitions to generate the video summary.
    Type: Application
    Filed: November 1, 2002
    Publication date: May 6, 2004
    Inventors: Xian-Sheng Hua, Lie Lu, Yu-Fei Ma, Mingjing Li, Hong-Jiang Zhang