Patents by Inventor Quanfu Fan

Quanfu Fan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200160060
    Abstract: A system and a method for tracking a plurality of objects, including obtaining input data, estimating a number of skipping frames of the input data based on information from the input data, predicting results based on the estimating of the number of skipping frames, and correcting the predicted results.
    Type: Application
    Filed: November 15, 2018
    Publication date: May 21, 2020
    Applicant: International Business Machines Corporation
    Inventors: Chung-Ching Lin, Rogerio Feris, Honghui Shi, Quanfu Fan, Lisa Brown, Mandis Beigi
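
A minimal sketch of the frame-skipping idea in the abstract above: estimate how many frames can be skipped from the object's recent motion, predict its position across the skipped frames, then correct the prediction against a fresh detection. The constant-velocity model, the skip heuristic, and all names below are illustrative assumptions, not the patented method.

```python
import numpy as np

def estimate_skip(prev_box, curr_box, max_skip=5):
    """Heuristic: skip more frames when the object moves slowly (assumption)."""
    displacement = np.linalg.norm(np.asarray(curr_box[:2]) - np.asarray(prev_box[:2]))
    return max(1, int(max_skip - min(displacement, max_skip - 1)))

def predict(prev_box, curr_box, n_skipped):
    """Constant-velocity prediction of the box across the skipped frames."""
    prev, curr = np.asarray(prev_box, float), np.asarray(curr_box, float)
    return curr + (curr - prev) * n_skipped

def correct(predicted_box, detected_box, alpha=0.5):
    """Blend the prediction with a fresh detection to correct drift."""
    return alpha * np.asarray(detected_box, float) + (1 - alpha) * predicted_box

# toy usage with boxes given as (x, y, w, h)
prev_box, curr_box = (10, 10, 20, 20), (14, 12, 20, 20)
n = estimate_skip(prev_box, curr_box)
print(n, correct(predict(prev_box, curr_box, n), (22, 16, 20, 20)))
```
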
  • Publication number: 20200160144
    Abstract: Techniques are described for reducing the number of parameters of a deep neural network model. According to one or more embodiments, a device can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a structure extraction component that determines a number of input nodes associated with a fully connected layer of a deep neural network model. The computer executable components can further comprise a transformation component that replaces the fully connected layer with a number of sparsely connected sublayers, wherein the sparsely connected sublayers have fewer connections than the fully connected layer, and wherein the number of sparsely connected sublayers is determined based on a defined decrease to the number of input nodes.
    Type: Application
    Filed: November 21, 2018
    Publication date: May 21, 2020
    Inventors: Dan Gutfreund, Quanfu Fan, Abhijit S. Mudigonda
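
The entry above replaces a fully connected layer with several sparsely connected sublayers. A minimal PyTorch sketch of one common realization is shown below, where the dense layer becomes a block-diagonal set of smaller linear layers; the group count and layer sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SparseSublayers(nn.Module):
    """Replace one Linear(in_dim, out_dim) with `groups` smaller linears, each
    seeing only a slice of the input nodes, cutting connections by `groups`x."""
    def __init__(self, in_dim, out_dim, groups=4):
        super().__init__()
        assert in_dim % groups == 0 and out_dim % groups == 0
        self.groups = groups
        self.blocks = nn.ModuleList(
            nn.Linear(in_dim // groups, out_dim // groups) for _ in range(groups)
        )

    def forward(self, x):
        chunks = x.chunk(self.groups, dim=-1)              # partition the input nodes
        return torch.cat([b(c) for b, c in zip(self.blocks, chunks)], dim=-1)

dense = nn.Linear(1024, 256)                   # 262,144 weight parameters
sparse = SparseSublayers(1024, 256, groups=4)  #  65,536 weight parameters
x = torch.randn(8, 1024)
print(dense(x).shape, sparse(x).shape)         # both torch.Size([8, 256])
```
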
  • Publication number: 20200034645
    Abstract: An image data is convolved with one or more kernels and corresponding one or more feature maps generated. Region of interest maps are extracted from the one or more feature maps, and pooled based on one or more features selected as selective features. Pooling generates a feature vector with dimensionality less than a dimensionality associated with the one or more feature maps. The feature vector is flattened and input as a layer in a neural network. The neural network outputs a classification associated with an object in the image data.
    Type: Application
    Filed: July 27, 2018
    Publication date: January 30, 2020
    Inventors: Quanfu Fan, Richard Chen
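
A rough sketch of the pipeline in the abstract above: convolve the image, pool a region of interest from the feature maps down to a small fixed size, flatten the result, and feed it to a classifier. The use of torchvision's generic roi_pool operator and all layer sizes are assumptions, not details from the patent.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_pool

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)   # image -> feature maps
classifier = nn.Linear(16 * 4 * 4, 10)              # flattened ROI vector -> classes

image = torch.randn(1, 3, 64, 64)
feature_map = conv(image)

# one region of interest: (batch_index, x1, y1, x2, y2) in image coordinates
rois = torch.tensor([[0.0, 8.0, 8.0, 40.0, 40.0]])
pooled = roi_pool(feature_map, rois, output_size=(4, 4), spatial_scale=1.0)

flattened = pooled.flatten(start_dim=1)   # much lower-dimensional than the feature map
logits = classifier(flattened)
print(pooled.shape, logits.shape)          # (1, 16, 4, 4) and (1, 10)
```
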
  • Publication number: 20200005122
    Abstract: Embodiments of the present invention are directed to a computer-implemented method for multiscale representation of input data. A non-limiting example of the computer-implemented method includes a processor receiving an original input. The processor downsamples the original input into a downscaled input. The processor runs a first convolutional neural network (“CNN”) on the downscaled input. The processor runs a second CNN on the original input, where the second CNN has fewer layers than the first CNN. The processor merges the output of the first CNN with the output of the second CNN and provides a result following the merging of the outputs.
    Type: Application
    Filed: June 27, 2018
    Publication date: January 2, 2020
    Inventors: Quanfu Fan, Richard Chen
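
A minimal sketch of the two-branch multiscale scheme from the abstract above: a deeper CNN runs on a downsampled copy of the input while a shallower CNN runs on the original, and the two outputs are merged. The depths, channel counts, and merge-by-addition choice are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoScaleNet(nn.Module):
    def __init__(self):
        super().__init__()
        # deeper branch, applied to the downscaled input
        self.deep = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # shallower branch (fewer layers), applied to the original input
        self.shallow = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())

    def forward(self, x):
        small = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
        deep_out = F.interpolate(self.deep(small), size=x.shape[-2:],
                                 mode="bilinear", align_corners=False)
        return deep_out + self.shallow(x)   # merge the two branches

print(TwoScaleNet()(torch.randn(1, 3, 64, 64)).shape)   # torch.Size([1, 32, 64, 64])
```
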
  • Publication number: 20190188526
    Abstract: Techniques facilitating generation of a fused kernel that can approximate a full kernel of a convolutional neural network are provided. In one example, a computer-implemented method comprises determining a first pattern of samples of a first sample matrix and a second pattern of samples of a second sample matrix. The first sample matrix can be representative of a sparse kernel, and the second sample matrix can be representative of a complementary kernel. The first pattern and second pattern can be complementary to one another. The computer-implemented method also comprises generating a fused kernel based on a combination of features of the sparse kernel and features of the complementary kernel that are combined according to a fusing approach and training the fused kernel.
    Type: Application
    Filed: December 14, 2017
    Publication date: June 20, 2019
    Inventors: Richard Chen, Quanfu Fan, Marco Pistoia, Toyotaro Suzumura
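
The sketch below illustrates one reading of the abstract above: two kernels whose nonzero entries follow complementary patterns are combined into a single fused kernel. The checkerboard masks and the elementwise-sum fusing approach are assumptions, not details stated in the abstract.

```python
import numpy as np

k = 3
# complementary binary masks: where one samples, the other does not
mask_a = np.indices((k, k)).sum(axis=0) % 2 == 0   # checkerboard pattern
mask_b = ~mask_a                                    # complementary pattern

rng = np.random.default_rng(0)
full = rng.standard_normal((k, k))

sparse_kernel = full * mask_a          # first pattern of samples
complementary_kernel = full * mask_b   # second, complementary pattern

# one possible fusing approach: elementwise sum of the two sparse kernels
fused_kernel = sparse_kernel + complementary_kernel
print(np.allclose(fused_kernel, full))  # True: the fused kernel covers the full kernel
```
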
  • Publication number: 20190171926
    Abstract: Technologies for providing convolutional neural networks are described. An analysis component determines an initial convolutional layer in a network architecture of a convolutional neural network and one or more subsequent convolutional layers in the network architecture. A replacement component replaces original convolutional kernels in the initial convolutional layer with initial sparse convolutional kernels, and replaces subsequent convolutional kernels in one or more subsequent convolutional layers with complementary sparse convolutional kernels. The complementary sparse kernels have a complementary pattern with respect to sparse kernels of a previous convolutional layer. Analyzing the network architecture and a trained model of a convolutional neural network can determine the original convolutional kernels and replace those kernels with sparse kernels based on similarity and/or weight in an initial layer, with sparse complementary kernels used in subsequent layers.
    Type: Application
    Filed: December 1, 2017
    Publication date: June 6, 2019
    Inventors: Richard Chen, Quanfu Fan
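
A small sketch of the layer-wise arrangement described above: the initial convolutional layer uses one sparse kernel pattern and each subsequent layer uses the complementary pattern, so consecutive layers jointly cover the full kernel support. The masked PyTorch convolutions, checkerboard masks, and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def checkerboard(k, invert=False):
    m = (torch.arange(k).view(-1, 1) + torch.arange(k)) % 2 == 0
    return (~m if invert else m).float()

class MaskedConv(nn.Conv2d):
    """Convolution whose weights are zeroed outside a fixed sparse pattern."""
    def __init__(self, cin, cout, k, invert):
        super().__init__(cin, cout, k, padding=k // 2)
        self.register_buffer("mask", checkerboard(k, invert))

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias, padding=self.padding)

net = nn.Sequential(
    MaskedConv(3, 16, 3, invert=False),   # initial layer: sparse pattern
    nn.ReLU(),
    MaskedConv(16, 16, 3, invert=True),   # subsequent layer: complementary pattern
)
print(net(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 16, 32, 32])
```
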
  • Patent number: 10242267
    Abstract: Embodiments of the present invention provide a system, method, and program product to determine whether a product has been successfully purchased by identifying in a video record when a movement of a product adjacent to a scanner occurs, and whether the scanner did not record a purchase transaction at that time; measuring a difference in time between the time of the movement of the product and a time of another movement of a product, and determining by a trained support vector machine a likelihood that the product was successfully purchased. Alternatively, the difference in time can be measured between the time of the movement of the product and a time of a transaction record, or between the time of the movement of the product and a boundary time. The support vector machine can use a radial basis function kernel and can generate a decision value and a confidence score.
    Type: Grant
    Filed: May 24, 2016
    Date of Patent: March 26, 2019
    Assignee: International Business Machines Corporation
    Inventors: Quanfu Fan, Sachiko Miyazawa, Sharathchandra U. Pankanti, Hoang Trinh
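
A minimal sketch, assuming scikit-learn, of the classification step described above: time gaps between a product movement and neighboring events are fed to an RBF-kernel support vector machine that yields both a decision value and a confidence score. The feature layout and training data below are fabricated purely for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# each row: [gap to the nearest other product movement (s), gap to the nearest
# transaction record (s)]; label 1 = successful purchase, 0 = suspected missed scan
X = np.array([[0.5, 0.2], [0.8, 0.3], [1.0, 0.4],
              [3.5, 5.0], [4.0, 6.5], [2.8, 4.2]])
y = np.array([1, 1, 1, 0, 0, 0])

clf = SVC(kernel="rbf", probability=True).fit(X, y)   # RBF-kernel SVM

movement = np.array([[0.7, 0.25]])
decision_value = clf.decision_function(movement)[0]   # signed distance to the margin
confidence = clf.predict_proba(movement)[0, 1]        # confidence of "purchased"
print(decision_value, confidence)
```
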
  • Patent number: 10029622
    Abstract: A method to automatically calibrate a static camera in a vehicle is provided. The method may include receiving a captured image file. The method may also include detecting a plurality of vehicles in the captured image file. The method may further include determining a 2D projected shape, size, location, direction of travel, and a plurality of features for each vehicle at various locations in the captured image file. The method may additionally include inferring a plurality of geometry scenes associated with the captured image file, whereby the plurality of geometry scenes is inferred based on the determined 2D projected shape, size, location, direction of travel, and the plurality of features of each vehicle within the detected plurality of vehicles as projected onto the captured image file. The method may include calibrating the static camera based on the inferred plurality of geometry scenes associated with the captured image file.
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: July 24, 2018
    Assignee: International Business Machines Corporation
    Inventors: Lisa M. Brown, Quanfu Fan, Yun Zhai
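
A compact sketch of the geometric idea behind the entry above: detected vehicles of roughly known real-world length give image-size observations at different image rows, from which a simple scale-versus-row model of the scene can be fit and used as a calibration. The linear model and the 4.5 m nominal vehicle length are stand-in assumptions; the patent's scene-geometry inference is more general.

```python
import numpy as np

# toy observations: (image row of a detected vehicle, its projected length in pixels)
observations = np.array([[120, 30.0], [200, 55.0], [310, 90.0], [400, 120.0]])
VEHICLE_LENGTH_M = 4.5   # assumed typical vehicle length

rows = observations[:, 0]
pixels_per_meter = observations[:, 1] / VEHICLE_LENGTH_M

# fit scale(row) = a*row + b by least squares: a crude calibration of the static camera
a, b = np.polyfit(rows, pixels_per_meter, deg=1)

def meters_from_pixels(pixel_length, row):
    """Convert an image-space length at a given row back to meters."""
    return pixel_length / (a * row + b)

print(round(meters_from_pixels(70.0, 250), 2))
```
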
  • Patent number: 9971942
    Abstract: A computer-implemented method for detecting an object in a crowded scene utilizing an image capturing device. The method includes receiving an image of a predetermined area. From the image, the existence of selected portions as representing an entity of a selected class is determined. Each selected portion is assigned an initial confidence value that the selected portion is an entity representative of a selected class. Each selected portion is evaluated against the others to determine a context confidence value. The context confidence value and the initial confidence value are utilized to determine which of the one or more selected portions are entities of a selected class.
    Type: Grant
    Filed: November 15, 2017
    Date of Patent: May 15, 2018
    Assignee: International Business Machines Corporation
    Inventors: Quanfu Fan, Jingjing Liu, Sharathchandra U. Pankanti
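
One way to read the confidence-combination step described above is sketched below: each candidate detection starts with an initial confidence, receives a context confidence from nearby candidates, and is kept only if the combined score clears a threshold. The neighborhood radius, weighting, and threshold are assumptions for illustration.

```python
import numpy as np

def combine_confidences(centers, initial_conf, radius=50.0, weight=0.3, threshold=0.5):
    """centers: (N, 2) candidate centers; initial_conf: (N,) initial scores."""
    centers = np.asarray(centers, float)
    initial_conf = np.asarray(initial_conf, float)
    kept = []
    for i in range(len(centers)):
        dists = np.linalg.norm(centers - centers[i], axis=1)
        neighbors = (dists > 0) & (dists < radius)
        # context confidence: mean initial score of the nearby candidates
        context = initial_conf[neighbors].mean() if neighbors.any() else 0.0
        final = (1 - weight) * initial_conf[i] + weight * context
        if final >= threshold:
            kept.append(i)
    return kept

centers = [(10, 10), (15, 12), (200, 200)]
print(combine_confidences(centers, [0.6, 0.4, 0.45]))   # only candidate 0 survives
```
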
  • Publication number: 20180060673
    Abstract: A computer-implemented method for detecting an object in a crowded scene utilizing an image capturing device. The method includes receiving an image of a predetermined area. From the image, the existence of selected portions as representing an entity of a selected class is determined. Each selected portion is assigned an initial confidence value that the selected portion is an entity representative of a selected class. Each selected portion is evaluated against the others to determine a context confidence value. The context confidence value and the initial confidence value are utilized to determine which of the one or more selected portions are entities of a selected class.
    Type: Application
    Filed: November 15, 2017
    Publication date: March 1, 2018
    Inventors: Quanfu Fan, Jingjing Liu, Sharathchandra U. Pankanti
  • Patent number: 9892326
    Abstract: A computer-implemented method for detecting an object in a crowded scene utilizing an image capturing device. The method includes receiving an image of a predetermined area. From the image, the existence of selected portions as representing an entity of a selected class is determined. Each selected portion is assigned an initial confidence value that the selected portion is an entity representative of a selected class. Each selected portion is evaluated against the others to determine a context confidence value. The context confidence value and the initial confidence value are utilized to determine which of the one or more selected portions are entities of a selected class.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: February 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Quanfu Fan, Jingjing Liu, Sharathchandra U. Pankanti
  • Publication number: 20170286779
    Abstract: A computer-implemented method for detecting an object in a crowded scene utilizing an image capturing device. The method includes receiving an image of a predetermined area. From the image, the existence of selected portions as representing an entity of a selected class is determined. Each selected portion is assigned an initial confidence value that the selected portion is an entity representative of a selected class. Each selected portion is evaluated against the others to determine a context confidence value. The context confidence value and the initial confidence value are utilized to determine which of the one or more selected portions are entities of a selected class.
    Type: Application
    Filed: March 31, 2016
    Publication date: October 5, 2017
    Inventors: Quanfu Fan, Jingjing Liu, Sharathchandra U. Pankanti
  • Patent number: 9754178
    Abstract: Software for static object detection that performs the following operations: (i) detecting an object that is present in at least one image of a set of images, wherein the set of images correspond to a time period; (ii) identifying a set of corner points for the detected object; (iii) tracking the object's presence in the set of images over the time period, wherein the object's presence is determined by matching the set of images to a template generated based on the identified corner points; and (iv) identifying the object as a static object when an amount of time corresponding to the object's presence in the set of images is greater than a predefined threshold.
    Type: Grant
    Filed: August 19, 2015
    Date of Patent: September 5, 2017
    Assignee: International Business Machines Corporation
    Inventors: Quanfu Fan, Zuoxuan Lu, Sharathchandra U. Pankanti
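
A skeletal sketch of the loop described above, assuming OpenCV: corner points are extracted from a detected object's patch, the patch is matched as a template across subsequent frames, and the object is reported as static once its presence exceeds a time threshold. Object detection itself is stubbed out; the thresholds and helper names are illustrative.

```python
import cv2
import numpy as np

STATIC_SECONDS = 10.0   # illustrative persistence threshold
FPS = 15.0

def is_static_object(frames, bbox):
    """frames: list of grayscale images; bbox: (x, y, w, h) of a detected object."""
    x, y, w, h = bbox
    template = frames[0][y:y + h, x:x + w]
    # corner points of the detected object (good features to track)
    corners = cv2.goodFeaturesToTrack(template, maxCorners=50,
                                      qualityLevel=0.01, minDistance=3)
    if corners is None:   # nothing trackable in the patch
        return False
    present = sum(
        cv2.matchTemplate(f, template, cv2.TM_CCOEFF_NORMED).max() > 0.8
        for f in frames
    )
    return present / FPS > STATIC_SECONDS

# synthetic check: a bright square that stays put for ~13 seconds of video
frames = [cv2.rectangle(np.full((120, 160), 60, np.uint8), (40, 40), (80, 80), 255, -1)
          for _ in range(200)]
print(is_static_object(frames, (30, 30, 60, 60)))   # True
```
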
  • Patent number: 9619715
    Abstract: Alerts to object behaviors are prioritized for adjudication as a function of relative values of abandonment, foregroundness and staticness attributes. The attributes are determined from feature data extracted from video frame image data. The abandonment attribute indicates a level of likelihood of abandonment of an object. The foregroundness attribute quantifies a level of separation of foreground image data of the object from a background model of the image scene. The staticness attribute quantifies a level of stability of dimensions of a bounding box of the object over time. Alerts are also prioritized according to an importance or relevance value that is learned and generated from the relative abandonment, foregroundness and staticness attribute strengths.
    Type: Grant
    Filed: March 3, 2016
    Date of Patent: April 11, 2017
    Assignee: International Business Machines Corporation
    Inventors: Quanfu Fan, Sharathchandra U. Pankanti
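
A small sketch of the prioritization described above: each alert carries abandonment, foregroundness, and staticness attributes, and alerts are ranked for adjudication by a weighting of the three. A fixed weighted sum stands in for the learned relevance value here; the weights and example alerts are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    abandonment: float     # likelihood the object was abandoned
    foregroundness: float  # separation of the object from the background model
    staticness: float      # stability of the object's bounding box over time

# stand-in for a learned relevance model: a weighted sum of the three attributes
WEIGHTS = {"abandonment": 0.5, "foregroundness": 0.2, "staticness": 0.3}

def relevance(a: Alert) -> float:
    return (WEIGHTS["abandonment"] * a.abandonment
            + WEIGHTS["foregroundness"] * a.foregroundness
            + WEIGHTS["staticness"] * a.staticness)

alerts = [Alert("bag at gate 3", 0.9, 0.7, 0.8),
          Alert("cart in aisle", 0.2, 0.9, 0.4),
          Alert("box by exit", 0.7, 0.5, 0.9)]

for a in sorted(alerts, key=relevance, reverse=True):   # adjudicate highest first
    print(f"{a.name}: {relevance(a):.2f}")
```
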
  • Patent number: 9569672
    Abstract: Transaction units of video data and transaction data captured from different checkout lanes are prioritized as a function of lane priority values of respective ones of the different checkout lanes from which the transaction units are acquired. Each of the checkout lanes has a different lane priority value. The individual transaction units are processed in the prioritized processing order to automatically detect irregular activities indicated by the transaction unit video and the transaction data of the processed individual transaction units.
    Type: Grant
    Filed: October 16, 2015
    Date of Patent: February 14, 2017
    Assignee: International Business Machines Corporation
    Inventors: Russell P. Bobbitt, Quanfu Fan, Sachiko Miyazawa, Sharathchandra U. Pankanti, Yun Zhai
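
A minimal sketch of the prioritized processing order described above, using a priority queue keyed on each checkout lane's priority value. The lane values, the transaction-unit fields, and the irregularity check are illustrative assumptions.

```python
import heapq

# lower number = higher priority; each lane has a distinct priority value
LANE_PRIORITY = {"lane_1": 3, "lane_2": 1, "lane_3": 2}

def detect_irregular(unit):
    """Stand-in for analyzing the unit's video and transaction data together."""
    return unit["scanned_items"] < unit["visual_pickups"]

units = [
    {"lane": "lane_1", "scanned_items": 5, "visual_pickups": 5},
    {"lane": "lane_2", "scanned_items": 3, "visual_pickups": 4},
    {"lane": "lane_3", "scanned_items": 7, "visual_pickups": 7},
]

# process transaction units in order of the priority of the lane they came from
queue = [(LANE_PRIORITY[u["lane"]], i, u) for i, u in enumerate(units)]
heapq.heapify(queue)
while queue:
    _, _, unit = heapq.heappop(queue)
    print(unit["lane"], "irregular" if detect_irregular(unit) else "ok")
```
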
  • Publication number: 20170024889
    Abstract: A method to automatically calibrate a static camera in a vehicle is provided. The method may include receiving a captured image file. The method may also include detecting a plurality of vehicles in the captured image file. The method may further include determining a 2D projected shape, size, location, direction of travel, and a plurality of features for each vehicle at various locations in the captured image file. The method may additionally include inferring a plurality of geometry scenes associated with the captured image file, whereby the plurality of geometry scenes is inferred based on the determined 2D projected shape, size, location, direction of travel, and the plurality of features of each vehicle within the detected plurality of vehicles as projected onto the captured image file. The method may include calibrating the static camera based on the inferred plurality of geometry scenes associated with the captured image file.
    Type: Application
    Filed: July 23, 2015
    Publication date: January 26, 2017
    Inventors: Lisa M. Brown, Quanfu Fan, Yun Zhai
  • Patent number: 9471832
    Abstract: Automated analysis of video data for determination of human behavior includes segmenting a video stream into a plurality of discrete individual frame image primitives which are combined into a visual event that may encompass an activity of concern as a function of a hypothesis. The visual event is optimized by setting a binary variable to true or false as a function of one or more constraints. The visual event is processed in view of associated non-video transaction data and the binary variable by associating the visual event with a logged transaction if associable, issuing an alert if the binary variable is true and the visual event is not associable with the logged transaction, and dropping the visual event if the binary variable is false and the visual event is not associable.
    Type: Grant
    Filed: May 13, 2014
    Date of Patent: October 18, 2016
    Assignee: International Business Machines Corporation
    Inventors: Lei Ding, Quanfu Fan, Sharathchandra U. Pankanti
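
A compact sketch of the decision logic in the abstract above: a visual event is associated with a logged transaction by timestamp when possible; otherwise an alert is issued only if the event's binary variable is true, and the event is dropped if it is false. The matching window and data shapes are assumptions.

```python
MATCH_WINDOW_S = 2.0   # assumed tolerance for associating an event with a transaction

def process_event(event, transaction_times):
    """event: {'time': float, 'valid': bool}; transaction_times: list of timestamps."""
    associable = any(abs(event["time"] - t) <= MATCH_WINDOW_S for t in transaction_times)
    if associable:
        return "associated with logged transaction"
    if event["valid"]:      # binary variable true but no matching transaction
        return "alert"
    return "dropped"        # binary variable false and not associable

log = [10.0, 25.5, 40.2]
print(process_event({"time": 24.0, "valid": True}, log))   # associated
print(process_event({"time": 60.0, "valid": True}, log))   # alert
print(process_event({"time": 60.0, "valid": False}, log))  # dropped
```
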
  • Publication number: 20160267329
    Abstract: Embodiments of the present invention provide a system, method, and program product to determine whether a product has been successfully purchased by identifying in a video record when a movement of a product adjacent to a scanner occurs, and whether the scanner did not record a purchase transaction at that time; measuring a difference in time between the time of the movement of the product and a time of another movement of a product, and determining by a trained support vector machine a likelihood that the product was successfully purchased. Alternatively, the difference in time can be measured between the time of the movement of the product and a time of a transaction record, or between the time of the movement of the product and a boundary time. The support vector machine can use a radial basis function kernel and can generate a decision value and a confidence score.
    Type: Application
    Filed: May 24, 2016
    Publication date: September 15, 2016
    Inventors: Quanfu Fan, Sachiko Miyazawa, Sharathchandra U. Pankanti, Hoang Trinh
  • Patent number: 9396621
    Abstract: Embodiments of the present invention provide a system, method, and program product to determine whether a product has been successfully purchased by identifying in a video record when a movement of a product adjacent to a scanner occurs, and whether the scanner did not record a purchase transaction at that time; measuring a difference in time between the time of the movement of the product and a time of another movement of a product, and determining by a trained support vector machine a likelihood that the product was successfully purchased. Alternatively, the difference in time can be measured between the time of the movement of the product and a time of a transaction record, or between the time of the movement of the product and a boundary time. The support vector machine can use a radial basis function kernel and can generate a decision value and a confidence score.
    Type: Grant
    Filed: March 23, 2012
    Date of Patent: July 19, 2016
    Assignee: International Business Machines Corporation
    Inventors: Quanfu Fan, Sachiko Miyazawa, Sharathchandra U. Pankanti, Hoang Trinh
  • Publication number: 20160188982
    Abstract: Alerts to object behaviors are prioritized for adjudication as a function of relative values of abandonment, foregroundness and staticness attributes. The attributes are determined from feature data extracted from video frame image data. The abandonment attribute indicates a level of likelihood of abandonment of an object. The foregroundness attribute quantifies a level of separation of foreground image data of the object from a background model of the image scene. The staticness attribute quantifies a level of stability of dimensions of a bounding box of the object over time. Alerts are also prioritized according to an importance or relevance value that is learned and generated from the relative abandonment, foregroundness and staticness attribute strengths.
    Type: Application
    Filed: March 3, 2016
    Publication date: June 30, 2016
    Inventors: Quanfu Fan, Sharathchandra U. Pankanti