Patents by Inventor Jiangjian Xiao
Jiangjian Xiao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8995717
Abstract: A computer implemented method for deriving an attribute entity network (AEN) from video data is disclosed, comprising the steps of: extracting at least two entities from the video data; tracking the trajectories of the at least two entities to form at least two tracks; deriving at least one association between at least two entities by detecting at least one event involving the at least two entities, said detecting of at least one event being based on detecting at least one spatio-temporal motion correlation between the at least two entities; and constructing the AEN by creating a graph wherein the at least two entities form at least two nodes and the at least one association forms a link between the at least two nodes.
Type: Grant
Filed: August 29, 2012
Date of Patent: March 31, 2015
Assignee: SRI International
Inventors: Hui Cheng, Jiangjian Xiao, Harpreet Sawhney
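As an illustration only (not the patented method), the graph construction described in this abstract can be sketched as follows: entities become nodes, and a link is added when two trajectories exhibit a spatio-temporal correlation. Here that correlation is crudely approximated as sustained spatial proximity at shared time stamps; the function name, distance test, and thresholds are all assumptions for the sketch.

```python
from itertools import combinations

def build_aen(tracks, max_dist=2.0, min_overlap=3):
    """Toy attribute entity network: tracked entities become nodes and an
    edge is added when two trajectories stay spatially close at enough
    common time stamps (a stand-in for an event detected via
    spatio-temporal motion correlation).
    tracks: {entity_id: {t: (x, y)}}"""
    nodes = set(tracks)
    edges = set()
    for a, b in combinations(tracks, 2):
        shared = set(tracks[a]) & set(tracks[b])  # common time stamps
        close = [t for t in shared
                 if (tracks[a][t][0] - tracks[b][t][0]) ** 2 +
                    (tracks[a][t][1] - tracks[b][t][1]) ** 2 <= max_dist ** 2]
        if len(close) >= min_overlap:  # entities co-moved long enough
            edges.add((a, b))
    return nodes, edges
```

Two people walking side by side would be linked, while an unrelated distant vehicle would remain an isolated node.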
-
Patent number: 8744122
Abstract: The present invention relates to a system and method for detecting one or more targets belonging to a first class (e.g., moving and/or stationary people) from a moving platform in a 3D-rich environment. The framework described here is implemented using a number of monocular or stereo cameras distributed around the vehicle to provide 360-degree coverage. Furthermore, the framework utilizes numerous filters to reduce the number of false-positive identifications of the targets.
Type: Grant
Filed: October 22, 2009
Date of Patent: June 3, 2014
Assignee: SRI International
Inventors: Garbis Salgian, John Benjamin Southall, Sang-Hack Jung, Vlad Branzoi, Jiangjian Xiao, Feng Han, Supun Samarasekera, Rakesh Kumar, Jayan Eledath
-
Patent number: 8712096
Abstract: The present invention relates to a method and apparatus for detecting and tracking vehicles. One embodiment of a system for detecting and tracking an object (e.g., vehicle) in a field of view includes a moving object indication stage for detecting a candidate object in a series of input video frames depicting the field of view and a track association stage that uses a joint probabilistic graph matching framework to associate an existing track with the candidate object.
Type: Grant
Filed: March 4, 2011
Date of Patent: April 29, 2014
Assignee: SRI International
Inventors: Jiangjian Xiao, Harpreet Singh Sawhney, Hui Cheng
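The track association stage can be illustrated with a much simpler stand-in: score every track/detection pairing and pick the jointly best assignment. This brute-force sketch is not the joint probabilistic graph matching framework of the patent; the gating distance, scoring function, and names are assumptions, and exhaustive search is only feasible for a handful of targets.

```python
from itertools import permutations

def associate(tracks, detections, gate=5.0):
    """Toy track-to-detection association: score each pairing by proximity
    and choose the joint assignment with the highest total score.
    tracks / detections: lists of predicted and observed (x, y) positions;
    assumes len(detections) >= len(tracks)."""
    def score(t, d):
        dist = ((t[0] - d[0]) ** 2 + (t[1] - d[1]) ** 2) ** 0.5
        return max(0.0, 1.0 - dist / gate)  # zero outside the gate
    best, best_total = None, -1.0
    for perm in permutations(range(len(detections)), len(tracks)):
        total = sum(score(tracks[i], detections[j]) for i, j in enumerate(perm))
        if total > best_total:
            best, best_total = perm, total
    return {i: j for i, j in enumerate(best)}  # track index -> detection index
```

Choosing the assignment jointly, rather than greedily per track, is what prevents two crossing vehicles from swapping tracks.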
-
Patent number: 8634638
Abstract: The present invention relates to a method and system for creating a strong classifier based on motion patterns wherein the strong classifier may be used to determine an action being performed by a body in motion. When creating the strong classifier, action classification is performed by measuring similarities between features within motion patterns. Embodiments of the present invention may utilize candidate part-based action sets and training samples to train one or more weak classifiers that are then used to create a strong classifier.
Type: Grant
Filed: June 22, 2009
Date of Patent: January 21, 2014
Assignee: SRI International
Inventors: Feng Han, Hui Cheng, Jiangjian Xiao, Harpreet Singh Sawhney, Sang-Hack Jung, Rakesh Kumar, Yanlin Guo
-
Patent number: 8340349
Abstract: A method for detecting a moving target is disclosed that receives a plurality of images from at least one camera; receives a measurement of scale from one of a measurement device and a second camera; calculates the pose of the at least one camera over time based on the plurality of images and the measurement of scale; selects a reference image and an inspection image from the plurality of images of the at least one camera; detects a moving target from the reference image and the inspection image based on the orientation of corresponding portions in the reference image and the inspection image relative to a location of an epipolar direction common to the reference image and the inspection image; and displays any detected moving target on a display. The measurement of scale can be derived from a second camera or, for example, a wheel odometer.
Type: Grant
Filed: June 15, 2007
Date of Patent: December 25, 2012
Assignee: SRI International
Inventors: Garbis Salgian, Supun Samarasekera, Jiangjian Xiao, James Russell Bergen, Rakesh Kumar, Feng Han
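The geometric idea behind the orientation test can be sketched independently of the patent: for a translating camera, the image motion of a static point lies along the epipolar line joining it to the epipole, so a flow vector whose orientation deviates from that direction suggests independent motion. This is a minimal per-point sketch, assuming a known epipole and flow vector; the threshold and names are illustrative.

```python
import math

def is_moving(point, flow, epipole, angle_thresh_deg=15.0):
    """Toy epipolar consistency test: flag a point as independently moving
    when its flow vector's orientation deviates from the epipolar
    direction (point -> epipole) by more than a threshold."""
    ex, ey = epipole[0] - point[0], epipole[1] - point[1]
    fx, fy = flow
    ne = math.hypot(ex, ey)
    nf = math.hypot(fx, fy)
    if ne == 0 or nf == 0:
        return False  # degenerate: at the epipole, or no motion at all
    # sign-insensitive angle between the flow and the epipolar direction
    cosang = max(-1.0, min(1.0, (ex * fx + ey * fy) / (ne * nf)))
    ang = math.degrees(math.acos(abs(cosang)))
    return ang > angle_thresh_deg
```

Note that this orientation-only test cannot catch a target moving exactly along its epipolar line; that degenerate case needs additional cues.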
-
Publication number: 20120321137
Abstract: A computer implemented method for deriving an attribute entity network (AEN) from video data is disclosed, comprising the steps of: extracting at least two entities from the video data; tracking the trajectories of the at least two entities to form at least two tracks; deriving at least one association between at least two entities by detecting at least one event involving the at least two entities, said detecting of at least one event being based on detecting at least one spatio-temporal motion correlation between the at least two entities; and constructing the AEN by creating a graph wherein the at least two entities form at least two nodes and the at least one association forms a link between the at least two nodes.
Type: Application
Filed: August 29, 2012
Publication date: December 20, 2012
Applicant: SRI International
Inventors: Hui Cheng, Jiangjian Xiao, Harpreet Sawhney
-
Patent number: 8294763
Abstract: A computer implemented method for deriving an attribute entity network (AEN) from video data is disclosed, comprising the steps of extracting at least two entities from the video data, tracking the trajectories of the at least two entities to form at least two tracks, deriving at least one association between at least two entities by detecting at least one event involving the at least two entities, where the detecting of at least one event is based on detecting at least one spatiotemporal motion correlation between the at least two entities, and constructing the AEN by creating a graph wherein the at least two entities form at least two nodes and the at least one association forms a link between the at least two nodes.
Type: Grant
Filed: November 14, 2008
Date of Patent: October 23, 2012
Assignee: SRI International
Inventors: Hui Cheng, Jiangjian Xiao, Harpreet Sawhney
-
Patent number: 8233704
Abstract: A method for automatically generating a strong classifier for determining whether at least one object is detected in at least one image is disclosed, comprising the steps of: (a) receiving a data set of training images having positive images; (b) randomly selecting a subset of positive images from the training images to create a set of candidate exemplars, wherein said positive images include at least one object of the same type as the object to be detected; (c) training a weak classifier based on at least one of the candidate exemplars, said training being based on at least one comparison of a plurality of heterogeneous compositional features located in the at least one image and corresponding heterogeneous compositional features in the one of the set of candidate exemplars; (d) repeating step (c) for each of the remaining candidate exemplars; and (e) combining the individual classifiers into a strong classifier, wherein the strong classifier is configured to determine the presence or absence in an image of the object to be detected.
Type: Grant
Filed: June 10, 2008
Date of Patent: July 31, 2012
Assignee: SRI International
Inventors: Feng Han, Hui Cheng, Jiangjian Xiao, Harpreet Singh Sawhney
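Steps (c)-(e) above follow the familiar pattern of combining per-exemplar weak classifiers into a strong one by weighted vote. The sketch below is a generic illustration of that pattern, not the patented feature comparison: the one-feature threshold test and the fixed weights are assumptions (in a boosting setup the weights would come from each weak classifier's training error).

```python
def exemplar_weak(exemplar, feature_idx, thresh):
    """Toy exemplar-based weak classifier: votes +1 (object) when one
    feature of the input is close to the same feature of the exemplar,
    -1 (no object) otherwise."""
    def h(x):
        return 1 if abs(x[feature_idx] - exemplar[feature_idx]) <= thresh else -1
    return h

def strong_classify(x, weak_classifiers, weights):
    """Combine weak classifiers into a strong classifier by weighted
    vote over their +1 / -1 outputs."""
    score = sum(w * h(x) for h, w in zip(weak_classifiers, weights))
    return 1 if score >= 0 else -1
```

For example, with two weak classifiers built from one exemplar, an input matching both features is accepted while a dissimilar input is rejected.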
-
Publication number: 20120070034
Abstract: The present invention relates to a method and apparatus for detecting and tracking vehicles. One embodiment of a system for detecting and tracking an object (e.g., vehicle) in a field of view includes a moving object indication stage for detecting a candidate object in a series of input video frames depicting the field of view and a track association stage that uses a joint probabilistic graph matching framework to associate an existing track with the candidate object.
Type: Application
Filed: March 4, 2011
Publication date: March 22, 2012
Inventors: Jiangjian Xiao, Harpreet Singh Sawhney, Hui Cheng
-
Publication number: 20100202657
Abstract: The present invention relates to a system and method for detecting one or more targets belonging to a first class (e.g., moving and/or stationary people) from a moving platform in a 3D-rich environment. The framework described here is implemented using a number of monocular or stereo cameras distributed around the vehicle to provide 360-degree coverage. Furthermore, the framework utilizes numerous filters to reduce the number of false-positive identifications of the targets.
Type: Application
Filed: October 22, 2009
Publication date: August 12, 2010
Inventors: Garbis Salgian, John Benjamin Southall, Sang-Hack Jung, Vlad Branzoi, Jiangjian Xiao, Feng Han, Supun Samarasekera, Rakesh Kumar, Jayan Eledath
-
Patent number: 7760911
Abstract: The methods and systems of the present invention enable the estimation of optical flow by performing color segmentation and adaptive bilateral filtering to regularize the flow field, achieving a more accurate flow field estimation. After creating pyramid models for two input image frames, color segmentation is performed. Next, starting from the top level of the pyramid, additive flow vectors are iteratively estimated between the reference frames by a process including occlusion detection, wherein the symmetric property of backward and forward flow is enforced for the non-occluded regions. A final estimated optical flow field is then generated by expanding the current pyramid level to the next lower level and repeating the process until the lowest level is reached. This approach not only generates efficient spatially coherent flow fields, but also accurately locates flow discontinuities along the motion boundaries.
Type: Grant
Filed: September 14, 2006
Date of Patent: July 20, 2010
Assignee: Sarnoff Corporation
Inventors: Jiangjian Xiao, Hui Cheng
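The coarse-to-fine loop described in this abstract can be sketched as a skeleton: start with zero flow at the coarsest pyramid level, add an estimated residual at each level, then expand the field (doubling both resolution and vector magnitudes) before refining at the next finer level. The residual estimator is left abstract here; the patented method's segmentation, bilateral filtering, and occlusion handling would live inside it, and all names below are illustrative.

```python
def expand(flow):
    """Upsample a flow field to the next finer pyramid level: double the
    grid in each dimension and double the vector magnitudes."""
    out = []
    for row in flow:
        up = [(2 * u, 2 * v) for (u, v) in row for _ in (0, 1)]
        out.append(up)
        out.append(list(up))
    return out

def coarse_to_fine(pyramid, estimate_residual):
    """Skeleton of coarse-to-fine flow estimation: zero flow at the top
    (coarsest) level, additive residual per level, expansion between levels.
    pyramid: list of (frame_a, frame_b) pairs, coarsest first;
    estimate_residual(a, b, flow): incremental flow at the current level."""
    h, w = len(pyramid[0][0]), len(pyramid[0][0][0])
    flow = [[(0.0, 0.0)] * w for _ in range(h)]
    for i, (a, b) in enumerate(pyramid):
        res = estimate_residual(a, b, flow)  # occlusion-aware solver goes here
        flow = [[(u + du, v + dv) for (u, v), (du, dv) in zip(fr, rr)]
                for fr, rr in zip(flow, res)]
        if i + 1 < len(pyramid):
            flow = expand(flow)
    return flow
```

Solving coarse first lets large displacements be captured cheaply, leaving each finer level to recover only a small residual.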
-
Publication number: 20090316983
Abstract: The present invention relates to a method and system for creating a strong classifier based on motion patterns wherein the strong classifier may be used to determine an action being performed by a body in motion. When creating the strong classifier, action classification is performed by measuring similarities between features within motion patterns. Embodiments of the present invention may utilize candidate part-based action sets and training samples to train one or more weak classifiers that are then used to create a strong classifier.
Type: Application
Filed: June 22, 2009
Publication date: December 24, 2009
Inventors: Feng Han, Hui Cheng, Jiangjian Xiao, Harpreet Singh Sawhney, Sang-Hack Jung, Rakesh Kumar, Yanlin Guo
-
Publication number: 20090153661
Abstract: A computer implemented method for deriving an attribute entity network (AEN) from video data is disclosed, comprising the steps of: extracting at least two entities from the video data; tracking the trajectories of the at least two entities to form at least two tracks; deriving at least one association between at least two entities by detecting at least one event involving the at least two entities, said detecting of at least one event being based on detecting at least one spatio-temporal motion correlation between the at least two entities; and constructing the AEN by creating a graph wherein the at least two entities form at least two nodes and the at least one association forms a link between the at least two nodes.
Type: Application
Filed: November 14, 2008
Publication date: June 18, 2009
Inventors: Hui Cheng, Jiangjian Xiao, Harpreet Sawhney
-
Publication number: 20080310737
Abstract: A method for automatically generating a strong classifier for determining whether at least one object is detected in at least one image is disclosed, comprising the steps of: (a) receiving a data set of training images having positive images; (b) randomly selecting a subset of positive images from the training images to create a set of candidate exemplars, wherein said positive images include at least one object of the same type as the object to be detected; (c) training a weak classifier based on at least one of the candidate exemplars, said training being based on at least one comparison of a plurality of heterogeneous compositional features located in the at least one image and corresponding heterogeneous compositional features in the one of the set of candidate exemplars; (d) repeating step (c) for each of the remaining candidate exemplars; and (e) combining the individual classifiers into a strong classifier, wherein the strong classifier is configured to determine the presence or absence in an image of the object to be detected.
Type: Application
Filed: June 10, 2008
Publication date: December 18, 2008
Inventors: Feng Han, Hui Cheng, Jiangjian Xiao, Harpreet Singh Sawhney
-
Publication number: 20080089556
Abstract: A method for detecting a moving target is disclosed that receives a plurality of images from at least one camera; receives a measurement of scale from one of a measurement device and a second camera; calculates the pose of the at least one camera over time based on the plurality of images and the measurement of scale; selects a reference image and an inspection image from the plurality of images of the at least one camera; detects a moving target from the reference image and the inspection image based on the orientation of corresponding portions in the reference image and the inspection image relative to a location of an epipolar direction common to the reference image and the inspection image; and displays any detected moving target on a display. The measurement of scale can be derived from a second camera or, for example, a wheel odometer.
Type: Application
Filed: June 15, 2007
Publication date: April 17, 2008
Inventors: Garbis Salgian, Supun Samarasekera, Jiangjian Xiao, James Bergen, Rakesh Kumar, Feng Han
-
Publication number: 20070092122
Abstract: The methods and systems of the present invention enable the estimation of optical flow by performing color segmentation and adaptive bilateral filtering to regularize the flow field, achieving a more accurate flow field estimation. After creating pyramid models for two input image frames, color segmentation is performed. Next, starting from the top level of the pyramid, additive flow vectors are iteratively estimated between the reference frames by a process including occlusion detection, wherein the symmetric property of backward and forward flow is enforced for the non-occluded regions. A final estimated optical flow field is then generated by expanding the current pyramid level to the next lower level and repeating the process until the lowest level is reached. This approach not only generates efficient spatially coherent flow fields, but also accurately locates flow discontinuities along the motion boundaries.
Type: Application
Filed: September 14, 2006
Publication date: April 26, 2007
Inventors: Jiangjian Xiao, Hui Cheng