Patents by Inventor Ziyou Xiong

Ziyou Xiong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20100034420
    Abstract: A method for recognizing fire using block-wise processing of video input provided by a video detector. Video input is divided into a plurality of frames (42), and each frame is divided into a plurality of blocks (44). Video metrics are calculated with respect to each of the plurality of blocks (46), and blocks containing the presence of fire are identified based on the calculated video metrics (74). The detection of a fire is then communicated to an alarm system.
    Type: Application
    Filed: January 16, 2007
    Publication date: February 11, 2010
    Applicant: UTC Fire & Security Corporation
    Inventors: Ziyou Xiong, Pei-Yuan Peng, Alan Matthew Finn, Muhidin A. Lelic
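The block-wise processing described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the patented method: the specific video metrics and the brightness/variance thresholds (`mean_thresh`, `var_thresh`) are hypothetical stand-ins for whatever metrics the actual claims define.

```python
import numpy as np

def detect_fire_blocks(frame, block_size=16, mean_thresh=180.0, var_thresh=500.0):
    """Divide a grayscale frame into blocks, compute simple per-block
    video metrics (mean intensity and variance), and flag blocks whose
    metrics exceed fire-like thresholds.

    Returns a list of (block_row, block_col) indices flagged as fire.
    """
    h, w = frame.shape
    flagged = []
    for r in range(0, h - block_size + 1, block_size):
        for c in range(0, w - block_size + 1, block_size):
            block = frame[r:r + block_size, c:c + block_size]
            # Bright, high-variance (flickering) blocks are treated as fire-like
            if block.mean() > mean_thresh and block.var() > var_thresh:
                flagged.append((r // block_size, c // block_size))
    return flagged
```

In a full pipeline this per-frame step would run on each frame of the video input, with flagged blocks forwarded to the alarm system.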
  • Publication number: 20100002142
    Abstract: A method for improving a video-processing algorithm (36) identifies video data (48) that may have been misinterpreted by the video-processing algorithm (36). The identified video data (48) is provided to a monitoring center (38) that uses the identified video data (48) to modify and improve upon the video-processing algorithm (36). An improved video-processing algorithm (49) is able to correctly analyze the identified video data (48).
    Type: Application
    Filed: February 8, 2007
    Publication date: January 7, 2010
    Inventors: Alan Matthew Finn, Pei-Yuan Peng, Steven Barnett Rakoff, Pengju Kang, Ziyou Xiong, Lin Lin, Christian M. Netter, James C. Moran, Ankit Tiwari
  • Patent number: 7548637
    Abstract: A method for detecting an object in an image includes calculating a log likelihood of pairs of pixels at select positions in the image that are derived from training images. The calculated log likelihood of the pairs of pixels is compared with a threshold value. The object is detected when the calculated log likelihood is greater than the threshold value.
    Type: Grant
    Filed: April 7, 2005
    Date of Patent: June 16, 2009
    Assignee: The Board of Trustees of the University of Illinois
    Inventors: Ziyou Xiong, Thomas S. Huang, Makoto Yoshida
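The thresholded log-likelihood test in this abstract can be illustrated with a small sketch. Everything here is an assumption for illustration: the image is represented as a mapping from positions to quantized pixel values, and `pair_loglik` stands in for the log-likelihood table that would be estimated from training images.

```python
import math

def log_likelihood_score(image, pair_positions, pair_loglik):
    """Sum learned log-likelihoods of pixel-value pairs at selected positions.

    `image` maps position -> quantized pixel value.
    `pair_loglik` maps ((pos_a, pos_b), (val_a, val_b)) -> log likelihood,
    as would be estimated from training images; unseen pairs get a floor.
    """
    floor = math.log(1e-6)  # small probability for combinations never seen in training
    score = 0.0
    for pos_a, pos_b in pair_positions:
        pair_vals = (image[pos_a], image[pos_b])
        score += pair_loglik.get(((pos_a, pos_b), pair_vals), floor)
    return score

def detect_object(image, pair_positions, pair_loglik, threshold):
    """Detect the object when the summed log likelihood exceeds the threshold."""
    return log_likelihood_score(image, pair_positions, pair_loglik) > threshold
```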
  • Publication number: 20090086022
    Abstract: The present invention describes a system and method for surveillance cameras that maintain proper mapping of a mapped region of interest with an imaged region of interest based on feedback received regarding the current orientation of a surveillance camera. The system or method first determines the location of the imaged region of interest within the surveillance camera's imaged current field of view based on mechanical or imaged feedback, or a combination of both. The system or method then remaps the mapped region of interest within the surveillance camera's imaged current field of view such that the mapped region of interest is coextensive with the imaged region of interest.
    Type: Application
    Filed: April 29, 2005
    Publication date: April 2, 2009
    Applicant: Chubb International Holdings Limited
    Inventors: Alan M. Finn, Pengju Kang, Ziyou Xiong, Lin Lin, Pei-Yuan Peng, Meghna Misra, Christian Maria Netter
  • Publication number: 20090057068
    Abstract: An elevator control system (24) provides elevator dispatch and door control based on passenger data received from a video monitoring system. The video monitoring system includes a video processor (16) connected to receive video input from at least one video camera (12). The video processor (16) tracks objects located within the field of view of the video camera, and calculates passenger data parameters associated with each tracked object. The elevator controller (24) provides elevator dispatch (26), door control (28), and security functions (30) based in part on passenger data provided by the video processor (16). The security functions may also be based in part on data from access control systems (14).
    Type: Application
    Filed: January 12, 2006
    Publication date: March 5, 2009
    Applicant: OTIS ELEVATOR COMPANY
    Inventors: Lin Lin, Ziyou Xiong, Alan Matthew Finn, Pei-Yuan Peng, Pengju Kang, Mauro Atalla, Meghna Misra, Christian Maria Netter
  • Publication number: 20090040303
    Abstract: A system for automatically determining video quality receives video input from one or more surveillance cameras (16a, 16b . . . 16N), and based on the received input calculates a number of video quality metrics (40). The video quality metrics are fused together (42), and provided to decision logic (44), which determines, based on the fused video quality metrics, the video quality provided by the one or more surveillance cameras (16a, 16b . . . 16N). The determination is provided to a monitoring station (24).
    Type: Application
    Filed: April 29, 2005
    Publication date: February 12, 2009
    Applicant: Chubb International Holdings Limited
    Inventors: Alan M. Finn, Steven B. Rakoff, Pengju Kang, Pei-Yuan Peng, Ankit Tiwari, Ziyou Xiong, Lin Lin, Meghna Misra, Christian Maria Netter
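The fuse-then-decide structure in this abstract can be sketched in a few lines. The weighted-average fusion and the fixed decision threshold below are illustrative assumptions; the patent does not specify the fusion rule or the decision logic used here.

```python
def fuse_quality_metrics(metrics, weights=None):
    """Fuse per-camera video quality metrics (each assumed in [0, 1])
    into a single score via a weighted average; weights default to uniform."""
    if weights is None:
        weights = [1.0] * len(metrics)
    total = sum(weights)
    return sum(m * w for m, w in zip(metrics, weights)) / total

def video_quality_ok(metrics, threshold=0.5):
    """Decision logic: report acceptable quality when the fused score
    clears the threshold; the result would be sent to the monitoring station."""
    return fuse_quality_metrics(metrics) >= threshold
```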
  • Publication number: 20080272902
    Abstract: An alarm filter (22) for use in a security system (14) to reduce the occurrence of nuisance alarms receives sensor signals (S1-Sn, Sv) from a plurality of sensors (18, 20) included in the security system (14). The alarm filter (22) produces an opinion output as a function of the sensor signals and selectively modifies the sensor signals as a function of the opinion output to produce verified sensor signals (S1′-Sn′).
    Type: Application
    Filed: March 15, 2005
    Publication date: November 6, 2008
    Applicant: Chubb International Holdings Limited
    Inventors: Pengju Kang, Alan M. Finn, Robert N. Tomastik, Thomas M. Gillis, Ziyou Xiong, Lin Lin, Pei-Yuan Peng
  • Patent number: 7327885
    Abstract: A method detects short-term, unusual events in a video. First, features are extracted from the audio and video portions of the video. Segments of the video are labeled according to the features. A global sliding window is applied to the labeled segments to determine global characteristics over time, while a local sliding window is applied only to the labeled segments within the global sliding window to determine local characteristics over time. The local window is substantially shorter in time than the global window. A distance between the global and local characteristics is measured to determine occurrences of the unusual short-term events.
    Type: Grant
    Filed: June 30, 2003
    Date of Patent: February 5, 2008
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Ajay Divakaran, Ziyou Xiong, Regunathan Radhakrishnan, Kadir A. Peker, Koji Miyahara
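The global/local sliding-window comparison described above can be sketched as follows. The label distributions, the L1 distance, and the window sizes and threshold are all illustrative choices, not the specific characteristics or distance measure claimed in the patent.

```python
from collections import Counter

def label_distribution(labels):
    """Normalized histogram of segment labels."""
    counts = Counter(labels)
    n = len(labels)
    return {k: v / n for k, v in counts.items()}

def unusual_events(labels, global_win=20, local_win=4, dist_thresh=0.5):
    """Slide a long global window over labeled segments and a much shorter
    local window inside it; flag local-window start indices where the L1
    distance between local and global label distributions exceeds a threshold."""
    events = []
    for g in range(0, len(labels) - global_win + 1):
        g_dist = label_distribution(labels[g:g + global_win])
        for l in range(g, g + global_win - local_win + 1):
            l_dist = label_distribution(labels[l:l + local_win])
            keys = set(g_dist) | set(l_dist)
            d = sum(abs(g_dist.get(k, 0.0) - l_dist.get(k, 0.0)) for k in keys)
            if d > dist_thresh:
                events.append(l)
    return sorted(set(events))
```

A short burst of labels that differs from the surrounding context produces a large local-vs-global distance and is flagged as an unusual event.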
  • Publication number: 20060228026
    Abstract: A method for detecting an object in an image includes calculating a log likelihood of pairs of pixels at select positions in the image that are derived from training images. The calculated log likelihood of the pairs of pixels is compared with a threshold value. The object is detected when the calculated log likelihood is greater than the threshold value.
    Type: Application
    Filed: April 7, 2005
    Publication date: October 12, 2006
    Inventors: Ziyou Xiong, Thomas Huang, Makoto Yoshida
  • Publication number: 20060075346
    Abstract: A method presents a video according to compositional structures associated with the video. Each compositional structure has a label, and multiple segments that can be organized temporally or hierarchically. A particular compositional structure is selected with a remote controller, and the video is presented by a playback controller on a display device according to the compositional structure.
    Type: Application
    Filed: September 27, 2004
    Publication date: April 6, 2006
    Inventors: Tom Lanning, Ajay Divakaran, Kadir Peker, Regunathan Radhakrishnan, Ziyou Xiong, Clifton Forlines
  • Publication number: 20060059120
    Abstract: A method identifies highlight segments in a video including a sequence of frames. Audio objects are detected to identify frames associated with audio events in the video, and visual objects are detected to identify frames associated with visual events. Selected visual objects are matched with an associated audio object to form an audio-visual object only if the selected visual object matches the associated audio object, the audio-visual object identifying a candidate highlight segment. The candidate highlight segments are further refined, using low level features, to eliminate false highlight segments.
    Type: Application
    Filed: August 27, 2004
    Publication date: March 16, 2006
    Inventors: Ziyou Xiong, Regunathan Radhakrishnan, Ajay Divakaran
  • Publication number: 20050125223
    Abstract: A method uses probabilistic fusion to detect highlights in videos using both audio and visual information. Specifically, the method uses coupled hidden Markov models (CHMMs). Audio labels are generated using audio classification via Gaussian mixture models (GMMs), and visual labels are generated by quantizing average motion vector magnitudes. Highlights are modeled using discrete-observation CHMMs trained with labeled videos. The CHMMs have better performance than conventional hidden Markov models (HMMs) trained only on audio signals, or only on video frames.
    Type: Application
    Filed: December 5, 2003
    Publication date: June 9, 2005
    Inventors: Ajay Divakaran, Ziyou Xiong, Regunathan Radhakrishnan
  • Publication number: 20040268380
    Abstract: A method detects short-term, unusual events in a video. First, features are extracted from the audio and video portions of the video. Segments of the video are labeled according to the features. A global sliding window is applied to the labeled segments to determine global characteristics over time, while a local sliding window is applied only to the labeled segments within the global sliding window to determine local characteristics over time. The local window is substantially shorter in time than the global window. A distance between the global and local characteristics is measured to determine occurrences of the unusual short-term events.
    Type: Application
    Filed: June 30, 2003
    Publication date: December 30, 2004
    Inventors: Ajay Divakaran, Ziyou Xiong, Regunathan Radhakrishnan, Kadir A. Peker, Koji Miyahara
  • Publication number: 20040167767
    Abstract: A method extracts highlights from an audio signal of a sporting event. The audio signal can be part of a sports video. First, sets of features are extracted from the audio signal. The sets of features are classified according to the following classes: applause, cheering, ball hit, music, speech, and speech with music. Adjacent sets of identically classified features are grouped. Portions of the audio signal corresponding to groups of features classified as applause or cheering and with a duration greater than a predetermined threshold are selected as highlights.
    Type: Application
    Filed: February 25, 2003
    Publication date: August 26, 2004
    Inventors: Ziyou Xiong, Regunathan Radhakrishnan, Ajay Divakaran
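The group-and-threshold step in this last abstract, which runs after per-frame classification, can be sketched briefly. The minimum-duration value and the representation of classifier output as a flat list of labels are assumptions for illustration.

```python
from itertools import groupby

def select_highlights(labels, min_duration=3,
                      highlight_classes=('applause', 'cheering')):
    """Group adjacent identically classified feature sets and keep runs of
    applause or cheering longer than a minimum duration.

    Returns (start_index, end_index_exclusive, class) tuples marking
    candidate highlight portions of the audio signal.
    """
    highlights = []
    i = 0
    for label, run in groupby(labels):
        n = len(list(run))
        if label in highlight_classes and n > min_duration:
            highlights.append((i, i + n, label))
        i += n
    return highlights
```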