Patents by Inventor Jiangjian Xiao

Jiangjian Xiao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8995717
    Abstract: A computer implemented method for deriving an attribute entity network (AEN) from video data is disclosed, comprising the steps of: extracting at least two entities from the video data; tracking the trajectories of the at least two entities to form at least two tracks; deriving at least one association between at least two entities by detecting at least one event involving the at least two entities, said detecting of at least one event being based on detecting at least one spatio-temporal motion correlation between the at least two entities; and constructing the AEN by creating a graph wherein the at least two objects form at least two nodes and the at least one association forms a link between the at least two nodes.
    Type: Grant
    Filed: August 29, 2012
    Date of Patent: March 31, 2015
    Assignee: SRI International
    Inventors: Hui Cheng, Jiangjian Xiao, Harpreet Sawhney
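A minimal sketch of the graph construction described in the abstract above, assuming networkx for the graph and a simple velocity-correlation score as the spatio-temporal motion cue; the function names, the "co-movement" event label, and the 0.8 threshold are illustrative choices, not taken from the patent.

```python
# Sketch: build an attribute entity network (AEN) as a graph whose nodes are
# tracked entities and whose edges are associations inferred from correlated motion.
import numpy as np
import networkx as nx

def motion_correlation(track_a, track_b):
    """Normalized correlation between the frame-to-frame velocities of two tracks."""
    n = min(len(track_a), len(track_b))            # assume overlapping frames
    va = np.diff(np.asarray(track_a[:n], dtype=float), axis=0).ravel()
    vb = np.diff(np.asarray(track_b[:n], dtype=float), axis=0).ravel()
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom > 0 else 0.0

def build_aen(tracks, threshold=0.8):
    """tracks: dict mapping entity id -> list of (x, y) positions, one per frame."""
    graph = nx.Graph()
    graph.add_nodes_from(tracks)
    ids = list(tracks)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            score = motion_correlation(tracks[a], tracks[b])
            if score > threshold:                  # correlated motion -> association edge
                graph.add_edge(a, b, event="co-movement", weight=score)
    return graph
```

In the patent the associations come from detected events between entities; the velocity correlation here is just one stand-in cue, but the resulting node/edge structure is the same.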
  • Patent number: 8744122
    Abstract: The present invention relates to a system and method for detecting one or more targets belonging to a first class (e.g., moving and/or stationary people) from a moving platform in a 3D-rich environment. The framework described here is implemented using a number of monocular or stereo cameras distributed around the vehicle to provide 360-degree coverage. Furthermore, the framework described here utilizes numerous filters to reduce the number of false positive identifications of the targets.
    Type: Grant
    Filed: October 22, 2009
    Date of Patent: June 3, 2014
    Assignee: SRI International
    Inventors: Garbis Salgian, John Benjamin Southall, Sang-Hack Jung, Vlad Branzoi, Jiangjian Xiao, Feng Han, Supun Samarasekera, Rakesh Kumar, Jayan Eledath
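A toy illustration of the filtering idea from the abstract above: candidate detections pooled from several cameras pass through a cascade of veto filters before being reported. The Detection fields, the metric-height gate, and the confidence threshold are assumptions made for this sketch, not the patent's actual filters.

```python
# Sketch: a cascade of verification filters that prunes false-positive detections.
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: int
    bbox: tuple          # (x, y, w, h) in image coordinates
    height_m: float      # estimated metric height from stereo / ground-plane geometry
    score: float         # raw detector confidence

def plausible_size(d: Detection) -> bool:
    return 1.2 <= d.height_m <= 2.2      # rough human-height gate (assumed values)

def confident_enough(d: Detection) -> bool:
    return d.score >= 0.5

FILTERS = [plausible_size, confident_enough]

def filter_detections(candidates):
    """Keep only candidates that survive every filter in the cascade."""
    return [d for d in candidates if all(f(d) for f in FILTERS)]
```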
  • Patent number: 8712096
    Abstract: The present invention relates to a method and apparatus for detecting and tracking vehicles. One embodiment of a system for detecting and tracking an object (e.g., vehicle) in a field of view includes a moving object indication stage for detecting a candidate object in a series of input video frames depicting the field of view and a track association stage that uses a joint probabilistic graph matching framework to associate an existing track with the candidate object.
    Type: Grant
    Filed: March 4, 2011
    Date of Patent: April 29, 2014
    Assignee: SRI International
    Inventors: Jiangjian Xiao, Harpreet Singh Sawhney, Hui Cheng
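The track-association stage can be pictured as an assignment problem between existing tracks and newly detected candidates. The sketch below uses SciPy's Hungarian solver on a Euclidean cost matrix as a stand-in for the patent's joint probabilistic graph matching, which additionally reasons over pairwise relations between tracks; the gating distance is an arbitrary example value.

```python
# Sketch: associate existing tracks with candidate detections via bipartite assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, candidates, gate=50.0):
    """tracks, candidates: (T, 2) and (C, 2) arrays of predicted / detected centroids."""
    tracks = np.asarray(tracks, dtype=float)
    candidates = np.asarray(candidates, dtype=float)
    # Pairwise Euclidean distances form the association cost matrix.
    cost = np.linalg.norm(tracks[:, None, :] - candidates[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    # Reject assignments whose cost exceeds the gating distance.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
```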
  • Patent number: 8634638
    Abstract: The present invention relates to a method and system for creating a strong classifier based on motion patterns wherein the strong classifier may be used to determine an action being performed by a body in motion. When creating the strong classifier, action classification is performed by measuring similarities between features within motion patterns. Embodiments of the present invention may utilize candidate part-based action sets and training samples to train one or more weak classifiers that are then used to create a strong classifier.
    Type: Grant
    Filed: June 22, 2009
    Date of Patent: January 21, 2014
    Assignee: SRI International
    Inventors: Feng Han, Hui Cheng, Jiangjian Xiao, Harpreet Singh Sawhney, Sang-Hack Jung, Rakesh Kumar, Yanlin Guo
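A compact sketch of the boosting step, assuming scikit-learn's AdaBoostClassifier with its default decision-stump weak learner; motion_features is a hypothetical placeholder for the patent's part-based motion-pattern similarity features.

```python
# Sketch: train many weak classifiers on motion-pattern features and combine
# them into a strong classifier with AdaBoost.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def motion_features(sample):
    """Hypothetical placeholder: flatten one motion pattern into a feature vector."""
    return np.asarray(sample, dtype=float).ravel()

def train_strong_classifier(samples, labels):
    # All samples are assumed to produce feature vectors of equal length.
    X = np.stack([motion_features(s) for s in samples])
    # The default weak learner is a depth-1 decision stump; AdaBoost combines
    # the stumps into a weighted-majority strong classifier.
    strong = AdaBoostClassifier(n_estimators=50)
    return strong.fit(X, np.asarray(labels))
```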
  • Patent number: 8340349
    Abstract: A method for detecting a moving target is disclosed that receives a plurality of images from at least one camera; receives a measurement of scale from one of a measurement device and a second camera; calculates the pose of the at least one camera over time based on the plurality of images and the measurement of scale; selects a reference image and an inspection image from the plurality of images of the at least one camera; and detects a moving target from the reference image and the inspection image based on the orientation of corresponding portions in the reference image and the inspection image relative to a location of an epipolar direction common to the reference image and the inspection image; and displays any detected moving target on a display. The measurement of scale can be derived from a second camera or, for example, a wheel odometer.
    Type: Grant
    Filed: June 15, 2007
    Date of Patent: December 25, 2012
    Assignee: SRI International
    Inventors: Garbis Salgian, Supun Samarasekera, Jiangjian Xiao, James Russell Bergen, Rakesh Kumar, Feng Han
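The core geometric test in the abstract above can be sketched as follows: for a rigid scene, each point's image motion between the reference and inspection images lies along the epipolar direction through the epipole, so a large angular deviation flags an independently moving target. The 15-degree threshold and the assumption that the epipole is already known from the camera pose are illustrative simplifications.

```python
# Sketch: flag points whose motion deviates from the epipolar direction as moving targets.
import numpy as np

def moving_mask(pts_ref, pts_insp, epipole, angle_thresh_deg=15.0):
    """pts_ref, pts_insp: (N, 2) corresponding points; epipole: (2,) in the reference image."""
    pts_ref = np.asarray(pts_ref, dtype=float)
    flow = np.asarray(pts_insp, dtype=float) - pts_ref     # observed image motion
    epi_dir = pts_ref - np.asarray(epipole, dtype=float)   # epipolar direction per point
    cosang = np.abs(np.sum(flow * epi_dir, axis=1)) / (
        np.linalg.norm(flow, axis=1) * np.linalg.norm(epi_dir, axis=1) + 1e-9)
    angles = np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))
    return angles > angle_thresh_deg                        # True where motion is inconsistent
```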
  • Publication number: 20120321137
    Abstract: A computer implemented method for deriving an attribute entity network (AEN) from video data is disclosed, comprising the steps of: extracting at least two entities from the video data; tracking the trajectories of the at least two entities to form at least two tracks; deriving at least one association between at least two entities by detecting at least one event involving the at least two entities, said detecting of at least one event being based on detecting at least one spatio-temporal motion correlation between the at least two entities; and constructing the AEN by creating a graph wherein the at least two objects form at least two nodes and the at least one association forms a link between the at least two nodes.
    Type: Application
    Filed: August 29, 2012
    Publication date: December 20, 2012
    Applicant: SRI International
    Inventors: Hui Cheng, Jiangjian Xiao, Harpreet Sawhney
  • Patent number: 8294763
    Abstract: A computer implemented method for deriving an attribute entity network (AEN) from video data is disclosed, comprising the steps of extracting at least two entities from the video data, tracking the trajectories of the at least two entities to form at least two tracks, deriving at least one association between at least two entities by detecting at least one event involving the at least two entities, where the detecting of at least one event is based on detecting at least one spatiotemporal motion correlation between the at least two entities, and constructing the AEN by creating a graph wherein the at least two objects form at least two nodes and the at least one association forms a link between the at least two nodes.
    Type: Grant
    Filed: November 14, 2008
    Date of Patent: October 23, 2012
    Assignee: SRI International
    Inventors: Hui Cheng, Jiangjian Xiao, Harpreet Sawhney
  • Patent number: 8233704
    Abstract: A method for automatically generating a strong classifier for determining whether at least one object is detected in at least one image is disclosed, comprising the steps of: (a) receiving a data set of training images having positive images; (b) randomly selecting a subset of positive images from the training images to create a set of candidate exemplars, wherein said positive images include at least one object of the same type as the object to be detected; (c) training a weak classifier based on at least one of the candidate exemplars, said training being based on at least one comparison of a plurality of heterogeneous compositional features located in the at least one image and corresponding heterogeneous compositional features in the one of set of candidate exemplars; (d) repeating steps (c) for each of the remaining candidate exemplars; and (e) combining the individual classifiers into a strong classifier, wherein the strong classifier is configured to determine the presence or absence in an image of the
    Type: Grant
    Filed: June 10, 2008
    Date of Patent: July 31, 2012
    Assignee: SRI International
    Inventors: Feng Han, Hui Cheng, Jiangjian Xiao, Harpreet Singh Sawhney
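A toy version of the exemplar-driven training loop: each randomly sampled positive exemplar defines a weak classifier that fires when a candidate's features lie close to the exemplar's, and the weak votes are pooled into a strong decision. Plain Euclidean distance stands in for the patent's comparison of heterogeneous compositional features, and the uniform vote (rather than boosted weights) is a simplification.

```python
# Sketch: exemplar-based weak classifiers combined into a strong classifier.
import random
import numpy as np

def make_weak(exemplar_feat, radius):
    """Weak classifier: fires when a feature vector lies within `radius` of this exemplar."""
    return lambda feat: 1.0 if np.linalg.norm(feat - exemplar_feat) < radius else 0.0

def train_strong(positive_feats, n_exemplars=20, radius=1.0):
    # Randomly select a subset of positive samples as candidate exemplars.
    exemplars = random.sample(list(positive_feats), k=min(n_exemplars, len(positive_feats)))
    weaks = [make_weak(np.asarray(e, dtype=float), radius) for e in exemplars]

    def strong(feat, vote_thresh=0.3):
        # Fraction of exemplar-based weak classifiers that fire on this candidate.
        votes = sum(w(np.asarray(feat, dtype=float)) for w in weaks) / len(weaks)
        return votes >= vote_thresh

    return strong
```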
  • Publication number: 20120070034
    Abstract: The present invention relates to a method and apparatus for detecting and tracking vehicles. One embodiment of a system for detecting and tracking an object (e.g., vehicle) in a field of view includes a moving object indication stage for detecting a candidate object in a series of input video frames depicting the field of view and a track association stage that uses a joint probabilistic graph matching framework to associate an existing track with the candidate object.
    Type: Application
    Filed: March 4, 2011
    Publication date: March 22, 2012
    Inventors: Jiangjian Xiao, Harpreet Singh Sawhney, Hui Cheng
  • Publication number: 20100202657
    Abstract: The present invention relates to a system and method for detecting one or more targets belonging to a first class (e.g., moving and/or stationary people) from a moving platform in a 3D-rich environment. The framework described here is implemented using a number of monocular or stereo cameras distributed around the vehicle to provide 360-degree coverage. Furthermore, the framework described here utilizes numerous filters to reduce the number of false positive identifications of the targets.
    Type: Application
    Filed: October 22, 2009
    Publication date: August 12, 2010
    Inventors: Garbis Salgian, John Benjamin Southall, Sang-Hack Jung, Vlad Branzoi, Jiangjian Xiao, Feng Han, Supun Samarasekera, Rakesh Kumar, Jayan Eledath
  • Patent number: 7760911
    Abstract: The methods and systems of the present invention enable the estimation of optical flow by performing color segmentation and adaptive bilateral filtering to regularize the flow field to achieve a more accurate flow field estimation. After creating pyramid models for two input image frames, color segmentation is performed. Next, starting from a top level of the pyramid, additive flow vectors are iteratively estimated between the reference frames by a process including occlusion detection, wherein the symmetric property of backward and forward flow is enforced for the non-occluded regions. Next, a final estimated optical flow field is generated by expanding the current pyramid level to the next lower level and then repeating the process until the lowest level is reached. This approach not only generates efficient, spatially coherent flow fields, but also accurately locates flow discontinuities along the motion boundaries.
    Type: Grant
    Filed: September 14, 2006
    Date of Patent: July 20, 2010
    Assignee: Sarnoff Corporation
    Inventors: Jiangjian Xiao, Hui Cheng
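A coarse-to-fine sketch of the pyramid structure described above, assuming OpenCV and BGR input frames: Farneback flow stands in for the patent's additive flow estimation with occlusion reasoning, and a plain per-channel bilateral filter stands in for the segmentation-guided adaptive bilateral regularization.

```python
# Sketch: coarse-to-fine optical flow over an image pyramid with
# edge-preserving smoothing of the flow field at each level.
import cv2
import numpy as np

def pyramid(img, levels=3):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr[::-1]                                 # coarsest level first

def coarse_to_fine_flow(frame0, frame1, levels=3):
    pyr0, pyr1 = pyramid(frame0, levels), pyramid(frame1, levels)
    flow = None
    for im0, im1 in zip(pyr0, pyr1):
        g0 = cv2.cvtColor(im0, cv2.COLOR_BGR2GRAY)
        g1 = cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY)
        init = None
        if flow is not None:
            h, w = g0.shape
            init = cv2.resize(flow, (w, h)) * 2.0    # upsample flow and rescale magnitudes
        flow = cv2.calcOpticalFlowFarneback(
            g0, g1, init, 0.5, 1, 15, 3, 5, 1.2,
            cv2.OPTFLOW_USE_INITIAL_FLOW if init is not None else 0)
        # Edge-preserving regularization of each flow channel (stand-in for the
        # patent's segmentation-aware adaptive bilateral filtering).
        u = cv2.bilateralFilter(np.ascontiguousarray(flow[..., 0]), 9, 25, 25)
        v = cv2.bilateralFilter(np.ascontiguousarray(flow[..., 1]), 9, 25, 25)
        flow = np.dstack([u, v])
    return flow
```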
  • Publication number: 20090316983
    Abstract: The present invention relates to a method and system for creating a strong classifier based on motion patterns wherein the strong classifier may be used to determine an action being performed by a body in motion. When creating the strong classifier, action classification is performed by measuring similarities between features within motion patterns. Embodiments of the present invention may utilize candidate part-based action sets and training samples to train one or more weak classifiers that are then used to create a strong classifier.
    Type: Application
    Filed: June 22, 2009
    Publication date: December 24, 2009
    Inventors: Feng Han, Hui Cheng, Jiangjian Xiao, Harpreet Singh Sawhney, Sang-Hack Jung, Rakesh Kumar, Yanlin Guo
  • Publication number: 20090153661
    Abstract: A computer implemented method for deriving an attribute entity network (AEN) from video data is disclosed, comprising the steps of: extracting at least two entities from the video data; tracking the trajectories of the at least two entities to form at least two tracks; deriving at least one association between at least two entities by detecting at least one event involving the at least two entities, said detecting of at least one event being based on detecting at least one spatio-temporal motion correlation between the at least two entities; and constructing the AEN by creating a graph wherein the at least two objects form at least two nodes and the at least one association forms a link between the at least two nodes.
    Type: Application
    Filed: November 14, 2008
    Publication date: June 18, 2009
    Inventors: Hui Cheng, Jiangjian Xiao, Harpreet Sawhney
  • Publication number: 20080310737
    Abstract: A method for automatically generating a strong classifier for determining whether at least one object is detected in at least one image is disclosed, comprising the steps of: (a) receiving a data set of training images having positive images; (b) randomly selecting a subset of positive images from the training images to create a set of candidate exemplars, wherein said positive images include at least one object of the same type as the object to be detected; (c) training a weak classifier based on at least one of the candidate exemplars, said training being based on at least one comparison of a plurality of heterogeneous compositional features located in the at least one image and corresponding heterogeneous compositional features in the one of set of candidate exemplars; (d) repeating steps (c) for each of the remaining candidate exemplars; and (e) combining the individual classifiers into a strong classifier, wherein the strong classifier is configured to determine the presence or absence in an image of the
    Type: Application
    Filed: June 10, 2008
    Publication date: December 18, 2008
    Inventors: Feng Han, Hui Cheng, Jiangjian Xiao, Harpreet Singh Sawhney
  • Publication number: 20080089556
    Abstract: A method for detecting a moving target is disclosed that receives a plurality of images from at least one camera; receives a measurement of scale from one of a measurement device and a second camera; calculates the pose of the at least one camera over time based on the plurality of images and the measurement of scale; selects a reference image and an inspection image from the plurality of images of the at least one camera; and detects a moving target from the reference image and the inspection image based on the orientation of corresponding portions in the reference image and the inspection image relative to a location of an epipolar direction common to the reference image and the inspection image; and displays any detected moving target on a display. The measurement of scale can be derived from a second camera or, for example, a wheel odometer.
    Type: Application
    Filed: June 15, 2007
    Publication date: April 17, 2008
    Inventors: Garbis Salgian, Supun Samarasekera, Jiangjian Xiao, James Bergen, Rakesh Kumar, Feng Han
  • Publication number: 20070092122
    Abstract: The methods and systems of the present invention enable the estimation of optical flow by performing color segmentation and adaptive bilateral filtering to regularize the flow field to achieve a more accurate flow field estimation. After creating pyramid models for two input image frames, color segmentation is performed. Next, starting from a top level of the pyramid, additive flow vectors are iteratively estimated between the reference frames by a process including occlusion detection, wherein the symmetric property of backward and forward flow is enforced for the non-occluded regions. Next, a final estimated optical flow field is generated by expanding the current pyramid level to the next lower level and then repeating the process until the lowest level is reached. This approach not only generates efficient, spatially coherent flow fields, but also accurately locates flow discontinuities along the motion boundaries.
    Type: Application
    Filed: September 14, 2006
    Publication date: April 26, 2007
    Inventors: Jiangjian Xiao, Hui Cheng