Patents by Inventor Juwei Lu

Juwei Lu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8958645
    Abstract: Systems and methods for summarizing a video assign frames in a video to at least one of two or more groups based on a topic, generate a respective first similitude measurement for the frames in a group relative to the other frames in the group based on a feature, rank the frames in a group relative to one or more other frames in the group based on the respective first similitude measurement of the respective frames, and select a frame from each group as a most-representative frame based on the respective rank of the frames in a group relative to the other frames in the group.
    Type: Grant
    Filed: April 19, 2012
    Date of Patent: February 17, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: Juwei Lu, Bradley Denney
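The summarization steps the abstract describes (group frames by topic, score each frame against the other frames in its group, rank, then keep the most-representative frame per group) can be sketched as below. The function name and the choice of Euclidean distance as the "similitude measurement" are illustrative assumptions, not the patent's actual implementation:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def most_representative_frames(features, groups):
    """For each group, score every frame by its total distance to the
    other frames in the group (a simple similitude measurement) and
    return the index of the most central frame per group."""
    best = {}
    for g in set(groups):
        idx = [i for i in range(len(groups)) if groups[i] == g]
        totals = {i: sum(dist(features[i], features[j]) for j in idx if j != i)
                  for i in idx}
        # the most-representative frame minimizes total distance to its group
        best[g] = min(idx, key=lambda i: totals[i])
    return best
```

In a real system the per-frame features would be visual descriptors and the groups would come from a topic model; here both are passed in directly to keep the sketch self-contained.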
  • Publication number: 20140372439
    Abstract: Systems, devices, and methods for creating a visual vocabulary extract a plurality of descriptors from one or more labeled images; cluster the descriptors into augmented-space clusters in an augmented space, wherein the augmented space includes visual similarities and label similarities; generate a descriptor-space cluster in a descriptor space based on the augmented-space clusters, wherein one or more augmented-space clusters are associated with the descriptor-space cluster; and generate augmented-space classifiers for the augmented-space clusters that are associated with the descriptor-space cluster based on the augmented-space clusters.
    Type: Application
    Filed: June 13, 2013
    Publication date: December 18, 2014
    Inventors: Juwei Lu, Bradley Scott Denney, Dariusz Dusberger
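One way to read the "augmented space" idea is that each visual descriptor is extended with label information before clustering, so distances in the joined space reflect both visual and label similarity. The sketch below uses a scaled one-hot label encoding and a minimal k-means; both choices are assumptions for illustration, not the patented method:

```python
from math import dist
from statistics import mean

def augment(descriptor, label, all_labels, w=1.0):
    """Append a scaled one-hot label encoding, so distance in the
    augmented space blends visual and label dissimilarity."""
    return list(descriptor) + [w if l == label else 0.0 for l in all_labels]

def kmeans(points, centers, iters=10):
    """Minimal Lloyd's k-means over the augmented vectors (illustrative)."""
    assign = [0] * len(points)
    for _ in range(iters):
        assign = [min(range(len(centers)), key=lambda c: dist(p, centers[c]))
                  for p in points]
        for c in range(len(centers)):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = [mean(col) for col in zip(*members)]
    return assign, centers
```

The weight `w` controls how strongly label agreement pulls descriptors into the same augmented-space cluster.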
  • Publication number: 20140267301
    Abstract: Systems and methods for generating visual words define initial inter-visual word relationships between a plurality of visual words; define visual word-image relationships between the plurality of visual words and a plurality of images; define inter-image relationships between the plurality of images; generate revised inter-visual word relationships in a vector space based on the initial inter-visual word relationships, the inter-image relationships, and the visual word-image relationships; and generate higher-level visual words in the vector space based on the revised inter-visual word relationships.
    Type: Application
    Filed: March 14, 2013
    Publication date: September 18, 2014
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Yang Yang, Bradley Scott Denney, Juwei Lu, Dariusz Dusberger, Hung Khei Huang
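A minimal sketch of the "revised relationships" idea: propagate each visual word's word-image links through the inter-image similarity matrix to get a vector per word, then compare words by cosine similarity. Representing the revision as a plain matrix product `V = W · S` is an assumption made for illustration:

```python
def word_vectors(word_image, image_sim):
    """Revise word relationships by propagating word-image links
    through image-image similarity: V = W . S (matrix product)."""
    n_img = len(image_sim)
    return [[sum(row[k] * image_sim[k][j] for k in range(n_img))
             for j in range(n_img)]
            for row in word_image]

def cosine(u, v):
    """Cosine similarity between two word vectors in the revised space."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0
```

Words whose revised vectors are close (high cosine) would then be merged into higher-level visual words, e.g. by clustering in this vector space.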
  • Publication number: 20140272822
    Abstract: Systems and methods for learning a high-level visual vocabulary generate inter-visual-word relationships between a plurality of visual words based on visual word-label relationships, map the visual words to a vector space based on the inter-visual word relationships, and generate high-level visual words in the vector space.
    Type: Application
    Filed: March 14, 2013
    Publication date: September 18, 2014
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Yang Yang, Bradley Scott Denney, Juwei Lu, Dariusz Dusberger, Hung Khei Huang
  • Publication number: 20140140625
    Abstract: Systems, devices, and methods for generating attribute scores obtain a plurality of object images; generate a respective first attribute score of a first attribute for each object image in the plurality of object images based on the object images; calculate a respective pairwise object-similarity measure for pairs of object images in the plurality of object images; and refine the first attribute score of an object image in the plurality of object images based at least in part on the attribute scores of other object images in the plurality of object images and on the object-similarity measures of the pairs of object images in the plurality of object images.
    Type: Application
    Filed: November 11, 2013
    Publication date: May 22, 2014
    Inventors: Liyan Zhang, Juwei Lu, Bradley Scott Denney
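The refinement step can be viewed as smoothing scores over a similarity graph: each image's attribute score is blended with the similarity-weighted average of the other images' scores. The blending factor and update rule below are illustrative assumptions:

```python
def refine_scores(scores, sim, alpha=0.5, iters=20):
    """Refine per-image attribute scores by repeatedly blending each
    initial score with the similarity-weighted average of the other
    images' current scores (alpha controls the blend)."""
    s = list(scores)
    n = len(s)
    for _ in range(iters):
        new = []
        for i in range(n):
            wsum = sum(sim[i][j] for j in range(n) if j != i)
            avg = (sum(sim[i][j] * s[j] for j in range(n) if j != i) / wsum) if wsum else s[i]
            new.append((1 - alpha) * scores[i] + alpha * avg)
        s = new
    return s
```

The effect is that an image strongly similar to high-scoring images has its own score pulled upward, and vice versa.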
  • Publication number: 20140093174
    Abstract: Systems and methods for organizing images extract low-level features from an image of a collection of images of a specified event, wherein the low-level features include visual characteristics calculated from the image pixel data, and wherein the specified event includes two or more sub-events; extract a high-level feature from the image, wherein the high-level feature includes characteristics calculated at least in part from one or more of the low-level features; identify a sub-event in the image based on the high-level feature and a predetermined model of the specified event, wherein the predetermined model describes a relationship between two or more sub-events; and annotate the image based on the identified sub-event.

    Type: Application
    Filed: September 28, 2012
    Publication date: April 3, 2014
    Inventors: Liyan Zhang, Bradley Scott Denney, Juwei Lu
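A toy version of the sub-event identification step: here the "predetermined model" is reduced to a prototype feature vector per sub-event, and each image is annotated with the nearest prototype. This nearest-prototype stand-in, and all names below, are assumptions; the patent's model also encodes relationships (such as ordering) between sub-events:

```python
from math import dist

def identify_subevent(feature, event_model):
    """event_model maps sub-event name -> prototype high-level feature
    vector; pick the sub-event whose prototype is nearest."""
    return min(event_model, key=lambda s: dist(feature, event_model[s]))

def annotate(image_features, event_model):
    """Annotate each image (name -> feature) with its identified sub-event."""
    return {name: identify_subevent(f, event_model)
            for name, f in image_features.items()}
```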
  • Publication number: 20140056511
    Abstract: Systems and methods for generating a visual vocabulary build a plurality of visual words via unsupervised learning on a set of features of a given type; decompose one or more visual words to a collection of lower-dimensional buckets; generate labeled image representations based on the collection of lower-dimensional buckets and labeled images, wherein labels associated with an image are associated with a respective representation of the image; and iteratively select a sub-collection of buckets from the collection of lower-dimensional buckets based on the labeled image representations, wherein bucket selection during any iteration after an initial iteration is based at least in part on feedback from previously selected buckets.
    Type: Application
    Filed: August 22, 2012
    Publication date: February 27, 2014
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Juwei Lu, Bradley Scott Denney
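The iterative bucket selection can be sketched as greedy forward selection over feature columns: each round adds the bucket that most improves a separation score given the buckets already chosen, so later picks depend on earlier ones. The separation score (distance between class means) is an illustrative stand-in for the patent's feedback criterion:

```python
def separation(X, y, cols):
    """Distance between the two class means over the chosen columns."""
    pos = [[x[c] for c in cols] for x, lbl in zip(X, y) if lbl == 1]
    neg = [[x[c] for c in cols] for x, lbl in zip(X, y) if lbl == 0]
    mp = [sum(col) / len(col) for col in zip(*pos)]
    mn = [sum(col) / len(col) for col in zip(*neg)]
    return sum((a - b) ** 2 for a, b in zip(mp, mn)) ** 0.5

def select_buckets(X, y, k):
    """Greedily pick k feature 'buckets' (columns); each iteration's
    choice is conditioned on the previously selected buckets."""
    selected = []
    for _ in range(k):
        best, best_score = None, -1.0
        for j in range(len(X[0])):
            if j in selected:
                continue
            score = separation(X, y, selected + [j])
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```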
  • Patent number: 8649594
    Abstract: A method for assessing events detected by a surveillance system includes assessing the likelihood that the events correspond to events being monitored, using feedback given in response to a condition set by a user. Classifiers are created for the events from the feedback. The classifiers are applied to allow the surveillance system to improve its accuracy when processing new video data.
    Type: Grant
    Filed: June 3, 2010
    Date of Patent: February 11, 2014
    Assignee: Agilence, Inc.
    Inventors: Wei Hua, Juwei Lu, Jinman Kang, Jon Cook, Haisong Gu
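A minimal illustration of learning a classifier from user feedback: treat each reviewed detection as a (score, confirmed) pair, fit the threshold that best separates confirmed from rejected events, and apply it to new detections. The threshold-stump form and all names are assumptions for this sketch:

```python
def make_classifier(feedback):
    """feedback: list of (score, is_true_event) pairs from user review.
    Pick the score threshold with the best accuracy on the feedback,
    then return a classifier for filtering new detections."""
    best_t, best_acc = 0.0, -1.0
    for t, _ in feedback:
        acc = sum((s >= t) == ok for s, ok in feedback) / len(feedback)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return lambda score: score >= best_t
```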
  • Publication number: 20140015855
    Abstract: Systems and methods for clustering descriptors generate, from descriptors in a space of visual descriptors, augmented visual descriptors in an augmented space that includes semantic information, wherein the augmented space of the augmented descriptors includes both visual descriptor-to-descriptor dissimilarities and semantic label-to-label dissimilarities; and cluster the augmented visual descriptors in the augmented space based at least in part on a dissimilarity measure between augmented visual descriptors in the augmented descriptor space.
    Type: Application
    Filed: July 16, 2012
    Publication date: January 16, 2014
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Bradley Scott Denney, Juwei Lu, Dariusz T. Dusberger, Sholeh Forouzan
  • Publication number: 20130279881
    Abstract: Systems and methods for summarizing a video assign frames in a video to at least one of two or more groups based on a topic, generate a respective first similitude measurement for the frames in a group relative to the other frames in the group based on a feature, rank the frames in a group relative to one or more other frames in the group based on the respective first similitude measurement of the respective frames, and select a frame from each group as a most-representative frame based on the respective rank of the frames in a group relative to the other frames in the group.
    Type: Application
    Filed: April 19, 2012
    Publication date: October 24, 2013
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Juwei Lu, Bradley Denney
  • Patent number: 8195598
    Abstract: The present invention is directed to a computer-automated method of selectively identifying a user-specified behavior of a crowd. The method comprises receiving video data, but can also include audio data and sensor data. The video data contains images of a crowd. The video data is processed to extract hierarchical human and crowd features. The detected crowd features are processed to detect a selectable crowd behavior. The crowd behavior selected for detection is specified by a configurable behavior rule. Human detection is provided by a hybrid human detector algorithm, which can include AdaBoost or a convolutional neural network. Crowd features are detected using texture analysis techniques. The configurable crowd behavior for detection can be defined by a crowd behavioral language.
    Type: Grant
    Filed: November 17, 2008
    Date of Patent: June 5, 2012
    Assignee: Agilence, Inc.
    Inventors: Wei Hua, Xiangrong Chen, Ryan Crabb, Juwei Lu, Jonathan Cook
  • Patent number: 7983480
    Abstract: A method and system for scanning a digital image for detecting the representation of an object, such as a face, and for reducing memory requirements of the computer system performing the image scan. One example method includes identifying an original image and downsampling the original image in an x-dimension and in a y-dimension to obtain a downsampled image that requires less storage space than the original digital image. A first scan is performed of the downsampled image to detect the representation of an object within the downsampled image. Then, the original digital image is divided into at least two image blocks, where each image block contains a portion of the original digital image. A second scan is then performed of each of the image blocks to detect the representation of the object within the image blocks.
    Type: Grant
    Filed: May 17, 2007
    Date of Patent: July 19, 2011
    Assignee: Seiko Epson Corporation
    Inventors: Juwei Lu, Hui Zhou, Mohanaraj Thiyagarajah
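The two-pass scan above can be sketched as: detect on a downsampled copy first, then split the full-resolution image into row bands and detect on each band, so the full image is never scanned in one piece. The naive 2x stride-based downsample and row-band split are illustrative assumptions:

```python
def scan_image(image, detect, n_blocks=2):
    """Pass 1: run the detector on a naive 2x-downsampled copy.
    Pass 2: split the full-resolution image into n_blocks row bands
    and run the detector on each band separately."""
    small = [row[::2] for row in image[::2]]   # keep every other row/column
    results = list(detect(small))
    rows_per_block = max(1, len(image) // n_blocks)
    for start in range(0, len(image), rows_per_block):
        results.extend(detect(image[start:start + rows_per_block]))
    return results
```

Here `detect` is any callable that takes a 2-D pixel grid and returns a list of detections; real blocks would overlap slightly so objects on a block boundary are not missed.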
  • Patent number: 7945075
    Abstract: Converting a digital image from color to gray-scale. In one example embodiment, a method for converting a digital image from color to gray-scale is disclosed. First, an unconverted pixel having red, green, and blue color channels is selected from the color digital image. Next, the red color channel of the pixel is multiplied by α. Then, the green color channel of the pixel is multiplied by β. Next, the blue color channel of the pixel is multiplied by γ. Then, the results of the three multiplication operations are added together to arrive at a gray-scale value for the pixel. Finally, these acts are repeated for each remaining unconverted pixel of the color digital image to arrive at a gray-scale digital image. In this example method, α + β + γ = 1 and β > α.
    Type: Grant
    Filed: November 30, 2007
    Date of Patent: May 17, 2011
    Assignee: Seiko Epson Corporation
    Inventors: Juwei Lu, Mohanaraj Thiyagarajah, Hui Zhou
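The weighted-sum conversion is straightforward to sketch. The default weights below are the standard Rec.601 luma coefficients, which satisfy the abstract's constraints (they sum to 1 and the green weight exceeds the red); the patent itself does not necessarily use these exact values:

```python
def to_grayscale(pixels, alpha=0.299, beta=0.587, gamma=0.114):
    """Per-pixel weighted sum of the R, G, B channels. The Rec.601
    luma weights used as defaults satisfy alpha + beta + gamma = 1."""
    return [round(alpha * r + beta * g + gamma * b) for (r, g, b) in pixels]
```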
  • Patent number: 7844085
    Abstract: Systems and methods for training an AdaBoost based classifier for detecting symmetric objects, such as human faces, in a digital image. In one example embodiment, such a method includes first selecting a sub-window of a digital image. Next, the AdaBoost based classifier extracts multiple sets of two symmetric scalar features from the sub-window, one from the right half and one from the left half of the sub-window. Then, the AdaBoost based classifier minimizes the joint error of the two symmetric features for each set of two symmetric scalar features. Next, the AdaBoost based classifier selects one of the features from the set of two symmetric scalar features for each set of two symmetric scalar features. Finally, the AdaBoost based classifier linearly combines multiple weak classifiers, each of which corresponds to one of the selected features, into a stronger classifier.
    Type: Grant
    Filed: June 7, 2007
    Date of Patent: November 30, 2010
    Assignee: Seiko Epson Corporation
    Inventors: Juwei Lu, Hui Zhou
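The per-pair selection step can be sketched with decision stumps: evaluate the weighted error of a threshold stump on each feature of a mirrored (left-half, right-half) pair, and keep the feature with the smaller error. Sharing one threshold across the pair is a simplifying assumption; in AdaBoost the weights come from the boosting round:

```python
def stump_error(values, labels, weights, thresh):
    """Weighted error of the stump h(x) = 1 if value >= thresh else 0."""
    return sum(w for v, y, w in zip(values, labels, weights)
               if (1 if v >= thresh else 0) != y)

def pick_symmetric_feature(left, right, labels, weights, thresh):
    """From a mirrored (left-half, right-half) scalar feature pair,
    keep the feature whose stump has the smaller weighted error."""
    el = stump_error(left, labels, weights, thresh)
    er = stump_error(right, labels, weights, thresh)
    return ('left', el) if el <= er else ('right', er)
```

Each kept feature's stump then becomes one weak classifier in the final linear combination.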
  • Patent number: 7840037
    Abstract: A method and system for efficiently detecting faces within a digital image. One example method includes identifying a digital image comprised of a plurality of sub-windows and performing a first scan of the digital image using a coarse detection level to eliminate the sub-windows that have a low likelihood of representing a face. The subset of the sub-windows that were not eliminated during the first scan are then scanned a second time using a fine detection level having a higher accuracy level than the coarse detection level used during the first scan to identify sub-windows having a high likelihood of representing a face.
    Type: Grant
    Filed: March 9, 2007
    Date of Patent: November 23, 2010
    Assignee: Seiko Epson Corporation
    Inventors: Juwei Lu, Hui Zhou, Mohanaraj Thiyagarajah
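The coarse-to-fine idea reduces to running a cheap classifier over all sub-windows and an expensive one only over the survivors. A minimal sketch, with `coarse` and `fine` standing in for the two detection levels:

```python
def two_pass_detect(windows, coarse, fine):
    """Pass 1: the cheap coarse classifier prunes sub-windows with a
    low likelihood of containing a face. Pass 2: the more accurate
    (and more expensive) fine classifier runs only on the survivors."""
    survivors = [w for w in windows if coarse(w)]
    return [w for w in survivors if fine(w)]
```

The speedup comes from the coarse pass rejecting the vast majority of sub-windows, so the fine classifier's cost is paid only on a small subset.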
  • Publication number: 20090222388
    Abstract: The present invention is directed to a computer-automated method of selectively identifying a user-specified behavior of a crowd. The method comprises receiving video data, but can also include audio data and sensor data. The video data contains images of a crowd. The video data is processed to extract hierarchical human and crowd features. The detected crowd features are processed to detect a selectable crowd behavior. The crowd behavior selected for detection is specified by a configurable behavior rule. Human detection is provided by a hybrid human detector algorithm, which can include AdaBoost or a convolutional neural network. Crowd features are detected using texture analysis techniques. The configurable crowd behavior for detection can be defined by a crowd behavioral language.
    Type: Application
    Filed: November 17, 2008
    Publication date: September 3, 2009
    Inventors: Wei Hua, Xiangrong Chen, Ryan Crabb, Jonathan Cook, Juwei Lu
  • Publication number: 20080304714
    Abstract: Systems and methods for training an AdaBoost based classifier for detecting symmetric objects, such as human faces, in a digital image. In one example embodiment, such a method includes first selecting a sub-window of a digital image. Next, the AdaBoost based classifier extracts multiple sets of two symmetric scalar features from the sub-window, one from the right half and one from the left half of the sub-window. Then, the AdaBoost based classifier minimizes the joint error of the two symmetric features for each set of two symmetric scalar features. Next, the AdaBoost based classifier selects one of the features from the set of two symmetric scalar features for each set of two symmetric scalar features. Finally, the AdaBoost based classifier linearly combines multiple weak classifiers, each of which corresponds to one of the selected features, into a stronger classifier.
    Type: Application
    Filed: June 7, 2007
    Publication date: December 11, 2008
    Inventors: Juwei Lu, Hui Zhou
  • Publication number: 20080285849
    Abstract: A method and system for scanning a digital image for detecting the representation of an object, such as a face, and for reducing memory requirements of the computer system performing the image scan. One example method includes identifying an original image and downsampling the original image in an x-dimension and in a y-dimension to obtain a downsampled image that requires less storage space than the original digital image. A first scan is performed of the downsampled image to detect the representation of an object within the downsampled image. Then, the original digital image is divided into at least two image blocks, where each image block contains a portion of the original digital image. A second scan is then performed of each of the image blocks to detect the representation of the object within the image blocks.
    Type: Application
    Filed: May 17, 2007
    Publication date: November 20, 2008
    Inventors: Juwei Lu, Hui Zhou, Mohanaraj Thiyagarajah
  • Publication number: 20080219558
    Abstract: A method and system for efficiently detecting faces within a digital image. One example method includes identifying a digital image comprised of a plurality of sub-windows and performing a first scan of the digital image using a coarse detection level to eliminate the sub-windows that have a low likelihood of representing a face. The subset of the sub-windows that were not eliminated during the first scan are then scanned a second time using a fine detection level having a higher accuracy level than the coarse detection level used during the first scan to identify sub-windows having a high likelihood of representing a face.
    Type: Application
    Filed: March 9, 2007
    Publication date: September 11, 2008
    Inventors: Juwei Lu, Hui Zhou, Mohanaraj Thiyagarajah
  • Publication number: 20080144892
    Abstract: Converting a digital image from color to gray-scale. In one example embodiment, a method for converting a digital image from color to gray-scale is disclosed. First, an unconverted pixel having red, green, and blue color channels is selected from the color digital image. Next, the red color channel of the pixel is multiplied by α. Then, the green color channel of the pixel is multiplied by β. Next, the blue color channel of the pixel is multiplied by γ. Then, the results of the three multiplication operations are added together to arrive at a gray-scale value for the pixel. Finally, these acts are repeated for each remaining unconverted pixel of the color digital image to arrive at a gray-scale digital image. In this example method, α + β + γ = 1 and β > α.
    Type: Application
    Filed: November 30, 2007
    Publication date: June 19, 2008
    Inventors: Juwei Lu, Mohanaraj Thiyagarajah, Hui Zhou