Patents by Inventor Ankur Datta

Ankur Datta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11928665
    Abstract: Embodiments provide methods and systems for facilitating a payment transaction utilizing a pre-defined radio frequency allocated by an agency to a merchant. A user may open an application provided and maintained by a payment network on a user device and input the pre-defined radio frequency associated with the merchant. The payment network server is configured to establish a secure radio frequency connection between the user device and a merchant server of the merchant. Once the connection is established, the user may proceed to make a payment transaction to the merchant by choosing a payment method and authenticating his/her identity. The payment transaction is authorized by an issuer of the payment account and is forwarded to the payment network server. The payment network server is configured to fetch acquirer information of the merchant account and process the payment. Upon completion, a confirmation message is sent to the user device over the secure RF connection.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: March 12, 2024
    Assignee: MASTERCARD INTERNATIONAL INCORPORATED
    Inventors: Saugandh Datta, Awinash Pandey, Ankur Mehta, Chandan Garg
  • Patent number: 10614316
    Abstract: Aspects determine anomalous events, wherein processors determine a trajectory of tracked movement of an object through an image field of a camera partitioned into a matrix grid of different local units. The aspects generate anomaly confidence decision values for image features extracted from video data of the tracked movement of the object as a function of fitting extracted image features to normal patterns of local motion pattern models defined by dominant distributions of extracted image features. The aspects further extract trajectory features from the video data relative to the trajectory of the tracked movement of the object, and generate global anomaly confidence decision values for the object trajectory as a function of fitting the extracted trajectory features to a normal learned motion trajectory model. The aspects determine anomalous events as a function of the generated global anomaly confidence decision value and the anomaly confidence decision values.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: April 7, 2020
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Balamanohar Paluri, Sharathchandra U. Pankanti, Yun Zhai
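The anomaly-detection scheme above fuses per-cell (local) confidence values with a whole-trajectory (global) value. A minimal sketch of that fusion, assuming Gaussian local models and a mean-path trajectory model (all function names and thresholds are illustrative, not taken from the patent):

```python
import math

def local_anomaly_scores(cell_features, cell_models):
    """Score each grid cell's feature value against its learned local model.

    Each model is a (mean, std) summary of the dominant feature distribution
    for that cell; a large z-score means a poor fit, i.e. a likely anomaly.
    """
    scores = {}
    for cell, value in cell_features.items():
        mean, std = cell_models[cell]
        scores[cell] = abs(value - mean) / std if std > 0 else 0.0
    return scores

def global_anomaly_score(trajectory, trajectory_model):
    """Score a whole trajectory against a learned normal-trajectory model
    (here simply a mean path) by average point-wise distance."""
    dists = [math.dist(p, q) for p, q in zip(trajectory, trajectory_model)]
    return sum(dists) / len(dists)

def is_anomalous(local_scores, global_score,
                 local_thresh=3.0, global_thresh=5.0):
    """Declare an event anomalous when either the global trajectory score or
    any local cell score exceeds its threshold, mirroring the fusion of
    local and global confidence decision values."""
    return global_score > global_thresh or any(
        s > local_thresh for s in local_scores.values())
```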
  • Patent number: 10607089
    Abstract: An approach for re-identifying an object in a test image is presented. Similarity measures between the test image and training images captured by a first camera are determined. The similarity measures are based on Bhattacharyya distances between feature representations of an estimated background region of the test image and feature representations of background regions of the training images. A transformed test image based on the Bhattacharyya distances has a brightness that is different from the test image's brightness and matches a brightness of training images captured by a second camera. An appearance of the transformed test image resembles an appearance of a capture of the test image by the second camera. Another image included in test images captured by the second camera is identified as being closest in appearance to the transformed test image and another object in the identified other image is a re-identification of the object.
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: March 31, 2020
    Assignee: International Business Machines Corporation
    Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
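Several of the re-identification patents in this listing key on the Bhattacharyya distance between feature representations of image background regions. A small illustrative implementation (histogram inputs are assumed pre-normalized; names are hypothetical):

```python
import math

def bhattacharyya_distance(hist_p, hist_q):
    """Bhattacharyya distance between two normalized histograms, e.g.
    feature representations of image background regions. Identical
    distributions give 0; more dissimilar ones give larger values."""
    bc = sum(math.sqrt(p * q) for p, q in zip(hist_p, hist_q))
    # Clamp for numerical safety before taking the log.
    bc = min(max(bc, 1e-12), 1.0)
    return -math.log(bc)

def most_similar_training_image(test_hist, training_hists):
    """Return the index of the training-image histogram closest to the test
    image's background histogram, as used when deciding which brightness
    transformation to apply to the test image."""
    return min(range(len(training_hists)),
               key=lambda i: bhattacharyya_distance(test_hist,
                                                    training_hists[i]))
```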
  • Publication number: 20190095719
    Abstract: An approach for re-identifying an object in a test image is presented. Similarity measures between the test image and training images captured by a first camera are determined. The similarity measures are based on Bhattacharyya distances between feature representations of an estimated background region of the test image and feature representations of background regions of the training images. A transformed test image based on the Bhattacharyya distances has a brightness that is different from the test image's brightness and matches a brightness of training images captured by a second camera. An appearance of the transformed test image resembles an appearance of a capture of the test image by the second camera. Another image included in test images captured by the second camera is identified as being closest in appearance to the transformed test image and another object in the identified other image is a re-identification of the object.
    Type: Application
    Filed: November 26, 2018
    Publication date: March 28, 2019
    Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
  • Patent number: 10169664
    Abstract: An approach for re-identifying an object in a test image is presented. Similarity measures between the test image and training images captured by a first camera are determined. The similarity measures are based on Bhattacharyya distances between feature representations of an estimated background region of the test image and feature representations of background regions of the training images. A transformed test image based on the Bhattacharyya distances has a brightness that is different from the test image's brightness, and matches a brightness of training images captured by a second camera. An appearance of the transformed test image resembles an appearance of a capture of the test image by the second camera. Another image included in test images captured by the second camera is identified as being closest in appearance to the transformed test image and another object in the identified other image is a re-identification of the object.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
  • Patent number: 10037604
    Abstract: Foreground objects of interest are distinguished from a background model by dividing a region of interest of a video data image into a grid array of individual cells. Each of the cells is labeled as foreground if accumulated edge energy within the cell meets an edge energy threshold, or if color intensities for different colors within each cell differ by a color intensity differential threshold, or as a function of combinations of said determinations.
    Type: Grant
    Filed: June 8, 2016
    Date of Patent: July 31, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Xiaoyu Wang
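The cell-labeling rule described above (edge energy OR per-channel color spread) can be sketched in a few lines. This is a hypothetical illustration; the thresholds, input format, and function name are assumptions, not from the patent:

```python
def label_cells_foreground(cells, edge_thresh=10.0, color_diff_thresh=30.0):
    """Label each grid cell as foreground if its accumulated edge energy
    meets the edge-energy threshold, OR its per-channel mean color
    intensities differ by more than the color-differential threshold.

    `cells` maps cell ids to (edge_energy, (r_mean, g_mean, b_mean)).
    """
    labels = {}
    for cell_id, (edge_energy, channels) in cells.items():
        color_spread = max(channels) - min(channels)
        labels[cell_id] = (edge_energy >= edge_thresh
                           or color_spread > color_diff_thresh)
    return labels
```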
  • Patent number: 10037607
    Abstract: Image-matching tracks the movements of objects from initial camera scenes to ending camera scenes in non-overlapping cameras. Paths through the scenes are defined for each pairing of initial and ending cameras by their respective scene entry and exit points. For each camera pairing, the combination of one path through the initial camera scene and one path through the ending camera scene with the highest total number of tracked movements is chosen, and the scene exit point of the selected path through the initial camera and the scene entry point of the selected path into the ending camera define a path connection from the initial camera scene to the ending camera scene.
    Type: Grant
    Filed: January 22, 2016
    Date of Patent: July 31, 2018
    Assignee: International Business Machines Corporation
    Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra Pankanti
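The path-connection step above amounts to counting object hand-offs between exit paths in one camera and entry paths in the other, then keeping the best-supported combination. A minimal sketch under that reading (the data shape is an assumption):

```python
from collections import Counter

def best_path_connection(track_pairs):
    """Given observed object hand-offs between a non-overlapping camera
    pair, each recorded as (exit_path_in_initial_camera,
    entry_path_in_ending_camera), return the combination supported by the
    most tracked movements. That exit/entry pair defines the path
    connection between the two camera scenes."""
    counts = Counter(track_pairs)
    return counts.most_common(1)[0][0]
```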
  • Publication number: 20180107881
    Abstract: Aspects determine anomalous events, wherein processors determine a trajectory of tracked movement of an object through an image field of a camera partitioned into a matrix grid of different local units. The aspects generate anomaly confidence decision values for image features extracted from video data of the tracked movement of the object as a function of fitting extracted image features to normal patterns of local motion pattern models defined by dominant distributions of extracted image features. The aspects further extract trajectory features from the video data relative to the trajectory of the tracked movement of the object, and generate global anomaly confidence decision values for the object trajectory as a function of fitting the extracted trajectory features to a normal learned motion trajectory model. The aspects determine anomalous events as a function of the generated global anomaly confidence decision value and the anomaly confidence decision values.
    Type: Application
    Filed: December 7, 2017
    Publication date: April 19, 2018
    Inventors: Ankur Datta, Balamanohar Paluri, Sharathchandra U. Pankanti, Yun Zhai
  • Patent number: 9928423
    Abstract: Local models learned from anomaly detection are used to rank detected anomalies. The local model patterns are defined from image feature values extracted from an image field of video image data with respect to different predefined spatial and temporal local units, wherein anomaly results are determined by fitting extracted image features to the local model patterns. Image features values extracted from the image field local units associated with anomaly results are normalized, and image feature values extracted from the image field local units are clustered. Weights for anomaly results are learned as a function of the relations of the normalized extracted image feature values to the clustered image feature values. The normalized values are multiplied by the learned weights to generate ranking values to rank the anomalies.
    Type: Grant
    Filed: September 4, 2015
    Date of Patent: March 27, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Balamanohar Paluri, Sharathchandra U. Pankanti, Yun Zhai
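The ranking step described above boils down to multiplying each anomaly's normalized feature values by learned per-feature weights and sorting by the result. A compact illustrative sketch (names and data shapes are assumptions):

```python
def rank_anomalies(anomaly_features, learned_weights):
    """Rank anomaly results: multiply each anomaly's normalized feature
    values by the learned per-feature weights and sort by the resulting
    ranking value, highest (most anomalous) first."""
    def ranking_value(features):
        return sum(w * f for w, f in zip(learned_weights, features))
    return sorted(anomaly_features,
                  key=lambda a: ranking_value(anomaly_features[a]),
                  reverse=True)
```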
  • Patent number: 9885568
    Abstract: A camera is positioned at a fixed vertical height above a reference plane, with the axis of the camera lens at an acute angle with respect to the perpendicular of the reference plane. One or more processors receive camera images of a multiplicity of people of unknown height, and the vertical axes of the images are transformed into pixel counts. The known heights of people from a known statistical distribution of heights are received by one or more processors and transformed to a normalized measurement of pixel counts, based in part on the focal length of the camera lens, the angle of the camera, and an objective function summing differences between pixel counts of the known heights of people and the unknown heights of people. The fixed vertical height of the camera is determined by adjusting the estimated camera height to minimize the objective function.
    Type: Grant
    Filed: March 22, 2016
    Date of Patent: February 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
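The height-estimation idea above can be illustrated with a deliberately simplified pinhole model and a grid search over candidate camera heights that minimizes the pixel-count objective. Everything here (the projection formula, a single known mean height instead of a full distribution, the search strategy) is a simplifying assumption for illustration only:

```python
def projected_pixel_height(person_height, camera_height, distance,
                           focal_length_px):
    """Very rough pinhole approximation of a person's height in pixels for
    a camera mounted at `camera_height` viewing a person `distance` away:
    apparent size scales with focal length over line-of-sight distance."""
    midpoint = camera_height - person_height / 2.0
    line_of_sight = (distance ** 2 + midpoint ** 2) ** 0.5
    return focal_length_px * person_height / line_of_sight

def estimate_camera_height(observed_px, known_height, distance,
                           focal_length_px, candidates):
    """Grid-search the camera height minimizing the objective: the sum of
    absolute differences between observed pixel heights of people and the
    pixel heights predicted from the known height (here a single mean
    height stands in for the full statistical distribution)."""
    def objective(h):
        return sum(abs(px - projected_pixel_height(known_height, h,
                                                   distance,
                                                   focal_length_px))
                   for px in observed_px)
    return min(candidates, key=objective)
```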
  • Patent number: 9805505
    Abstract: Objects within two-dimensional video data are modeled by three-dimensional models as a function of object type and motion through manually calibrating a two-dimensional image to the three spatial dimensions of a three-dimensional modeling cube. Calibrated three-dimensional locations of an object in motion in the two-dimensional image field of view of a video data input are determined and used to determine a heading direction of the object as a function of the camera calibration and determined movement between the determined three-dimensional locations. The two-dimensional object image is replaced in the video data input with an object-type three-dimensional polygonal model having a projected bounding box that best matches a bounding box of an image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box, and rendered with extracted image features.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: October 31, 2017
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Yun Zhai
  • Patent number: 9710924
    Abstract: Field of view overlap among multiple cameras is automatically determined as a function of the temporal overlap of object tracks determined within their fields-of-view. Object tracks with the highest similarity value are assigned into pairs, and portions of the assigned object track pairs having a temporally overlapping period of time are determined. Scene entry points are determined from object locations on the tracks at a beginning of the temporally overlapping period of time, and scene exit points from object locations at an ending of the temporally overlapping period of time. Boundary lines for the overlapping fields-of-view portions within the corresponding camera fields-of-view are defined as a function of the determined entry and exit points in their respective fields-of-view.
    Type: Grant
    Filed: September 14, 2015
    Date of Patent: July 18, 2017
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
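The core of the overlap computation above is finding the shared time window of a paired track and reading off the object's locations at its start and end in each camera. A minimal sketch, assuming tracks are timestamp-to-location dictionaries (the data shape is hypothetical):

```python
def overlap_window(track_a, track_b):
    """Find the temporally overlapping portion of two paired object tracks,
    each a dict mapping timestamp -> (x, y) in its own camera's image
    plane. Returns the object locations at the start (scene entry) and end
    (scene exit) of the shared time window in each camera, which the
    method uses to define the overlapping field-of-view boundary."""
    shared = sorted(set(track_a) & set(track_b))
    if not shared:
        return None
    t0, t1 = shared[0], shared[-1]
    return {"camera_a": {"entry": track_a[t0], "exit": track_a[t1]},
            "camera_b": {"entry": track_b[t0], "exit": track_b[t1]}}
```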
  • Publication number: 20170200051
    Abstract: An approach for re-identifying an object in a test image is presented. Similarity measures between the test image and training images captured by a first camera are determined. The similarity measures are based on Bhattacharyya distances between feature representations of an estimated background region of the test image and feature representations of background regions of the training images. A transformed test image based on the Bhattacharyya distances has a brightness that is different from the test image's brightness, and matches a brightness of training images captured by a second camera. An appearance of the transformed test image resembles an appearance of a capture of the test image by the second camera. Another image included in test images captured by the second camera is identified as being closest in appearance to the transformed test image and another object in the identified other image is a re-identification of the object.
    Type: Application
    Filed: March 28, 2017
    Publication date: July 13, 2017
    Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
  • Patent number: 9659224
    Abstract: Disclosed are techniques for merging optical character recognized (OCR'd) text from frames of image data. In some implementations, a device sends frames of image data to a server, where each frame includes at least a portion of a captured textual item. The server performs optical character recognition (OCR) on the image data of each frame. When OCR'd text from respective frames is returned to the device from the server, the device can perform matching operations on the text, for instance, using bounding boxes and/or edit distance processing. The device can merge any identified matches of OCR'd text from different frames. The device can then display the merged text with any corrections.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: May 23, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Matthew Joseph Cole, Sonjeev Jahagirdar, Matthew Daniel Hart, David Paul Ramos, Ankur Datta, Utkarsh Prateek, Emilie Noelle McConville, Prashant Hegde, Avnish Sikka
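The OCR-merging patent above mentions edit-distance processing for matching text across frames. A small illustrative sketch of that idea: a standard Levenshtein distance, then a merge that treats near-identical lines from successive frames as re-reads of the same text (the merge policy and threshold are assumptions, not from the patent):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def merge_ocr_frames(frames, max_distance=2):
    """Merge OCR'd text lines from successive frames: a line within a small
    edit distance of an already-kept line is treated as a re-read of the
    same captured text, and only the kept copy is retained."""
    merged = []
    for line in frames:
        if not any(edit_distance(line, kept) <= max_distance
                   for kept in merged):
            merged.append(line)
    return merged
```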
  • Patent number: 9633263
    Abstract: An approach for re-identifying an object in a first test image is presented. Brightness transfer functions (BTFs) between respective pairs of training images are determined. Respective similarity measures are determined between the first test image and each of the training images captured by the first camera (first training images). A weighted brightness transfer function (WBTF) is determined by combining the BTFs weighted by weights of the first training images. The weights are based on the similarity measures. The first test image is transformed by the WBTF to better match one of the training images captured by the second camera. Another test image, captured by the second camera, is identified because it is closer in appearance to the transformed test image than other test images captured by the second camera. An object in the identified test image is a re-identification of the object in the first test image.
    Type: Grant
    Filed: October 9, 2012
    Date of Patent: April 25, 2017
    Assignee: International Business Machines Corporation
    Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
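The weighted brightness transfer function (WBTF) above combines per-training-pair BTFs using similarity-derived weights. A toy sketch, representing each BTF as a 256-entry brightness lookup table (this representation and the names are assumptions):

```python
def weighted_btf(btfs, weights):
    """Combine per-training-pair brightness transfer functions (each a
    256-entry lookup table mapping camera-1 brightness to camera-2
    brightness) into one weighted table, with weights derived from how
    similar each first-camera training image is to the test image."""
    total = sum(weights)
    norm = [w / total for w in weights]
    return [sum(w * btf[level] for w, btf in zip(norm, btfs))
            for level in range(len(btfs[0]))]

def apply_btf(pixels, table):
    """Transform test-image pixel brightness through the combined table so
    the image better matches the second camera's appearance."""
    return [table[p] for p in pixels]
```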
  • Patent number: 9633045
    Abstract: Images are retrieved and ranked according to relevance to attributes of a multi-attribute query through training image attribute detectors for different attributes annotated in a training dataset. Pair-wise correlations are learned between pairs of the annotated attributes from the training dataset of images. Image datasets are searched via the trained attribute detectors for images comprising attributes in a multi-attribute query. The retrieved images are ranked as a function of containing attributes that are not within the query subset plurality of attributes but are paired to one of the query subset plurality of attributes by the pair-wise correlations, wherein the ranking is an order of likelihood that the different ones of the attributes will appear in an image with the paired one of the query subset plurality of attributes.
    Type: Grant
    Filed: January 13, 2016
    Date of Patent: April 25, 2017
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Behjat Siddiquie
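The ranking rule above rewards images whose extra attributes are strongly correlated with the queried ones. A minimal sketch of that scoring (the additive scoring scheme and data shapes are illustrative assumptions):

```python
def rank_images(image_attrs, query_attrs, pairwise_corr):
    """Rank retrieved images: each detected attribute that is in the query
    scores fully, and an attribute outside the query contributes its best
    learned pair-wise correlation with any queried attribute (how likely
    it is to co-occur with a queried attribute in an image)."""
    def score(attrs):
        s = 0.0
        for a in attrs:
            if a in query_attrs:
                s += 1.0
            else:
                s += max((pairwise_corr.get((q, a), 0.0)
                          for q in query_attrs), default=0.0)
        return s
    return sorted(image_attrs,
                  key=lambda img: score(image_attrs[img]), reverse=True)
```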
  • Publication number: 20160335798
    Abstract: Objects within two-dimensional video data are modeled by three-dimensional models as a function of object type and motion through manually calibrating a two-dimensional image to the three spatial dimensions of a three-dimensional modeling cube. Calibrated three-dimensional locations of an object in motion in the two-dimensional image field of view of a video data input are determined and used to determine a heading direction of the object as a function of the camera calibration and determined movement between the determined three-dimensional locations. The two-dimensional object image is replaced in the video data input with an object-type three-dimensional polygonal model having a projected bounding box that best matches a bounding box of an image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box, and rendered with extracted image features.
    Type: Application
    Filed: July 29, 2016
    Publication date: November 17, 2016
    Inventors: Ankur Datta, Rogerio S. Feris, Yun Zhai
  • Publication number: 20160284097
    Abstract: Foreground objects of interest are distinguished from a background model by dividing a region of interest of a video data image into a grid array of individual cells. Each of the cells is labeled as foreground if accumulated edge energy within the cell meets an edge energy threshold, or if color intensities for different colors within each cell differ by a color intensity differential threshold, or as a function of combinations of said determinations.
    Type: Application
    Filed: June 8, 2016
    Publication date: September 29, 2016
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Xiaoyu Wang
  • Patent number: 9430874
    Abstract: Objects within two-dimensional video data are modeled by three-dimensional models as a function of object type and motion through manually calibrating a two-dimensional image to the three spatial dimensions of a three-dimensional modeling cube. Calibrated three-dimensional locations of an object in motion in the two-dimensional image field of view of a video data input are determined and used to determine a heading direction of the object as a function of the camera calibration and determined movement between the determined three-dimensional locations. The two-dimensional object image is replaced in the video data input with an object-type three-dimensional polygonal model having a projected bounding box that best matches a bounding box of an image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box, and rendered with extracted image features.
    Type: Grant
    Filed: September 9, 2015
    Date of Patent: August 30, 2016
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Yun Zhai
  • Patent number: 9396548
    Abstract: Foreground objects of interest are distinguished from a background model by dividing a region of interest of a video data image into a grid array of individual cells. Each of the cells is labeled as foreground if accumulated edge energy within the cell meets an edge energy threshold, or if color intensities for different colors within each cell differ by a color intensity differential threshold, or as a function of combinations of said determinations.
    Type: Grant
    Filed: September 22, 2015
    Date of Patent: July 19, 2016
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Xiaoyu Wang