Patents by Inventor Ankur Datta
Ankur Datta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11928665
Abstract: Embodiments provide methods and systems for facilitating a payment transaction utilizing a pre-defined radio frequency allocated by an agency to a merchant. A user may open an application provided and maintained by a payment network on a user device and input the pre-defined radio frequency associated with the merchant. The payment network server is configured to establish a secure radio frequency connection between the user device and a merchant server of the merchant. Once the connection is established, the user may proceed to make a payment transaction to the merchant by choosing a payment method and authenticating his/her identity. The payment transaction is authorized by an issuer of the payment account and is forwarded to the payment network server. The payment network server is configured to fetch acquirer information of the merchant account and process the payment. Upon completion, a confirmation message is sent to the user device over the secure RF connection.
Type: Grant
Filed: July 20, 2021
Date of Patent: March 12, 2024
Assignee: Mastercard International Incorporated
Inventors: Saugandh Datta, Awinash Pandey, Ankur Mehta, Chandan Garg
-
Patent number: 10614316
Abstract: Aspects determine anomalous events, wherein processors determine a trajectory of tracked movement of an object through an image field of a camera partitioned into a matrix grid of different local units. The aspects generate anomaly confidence decision values for image features extracted from video data of the tracked movement of the object as a function of fitting the extracted image features to normal patterns of local motion pattern models defined by dominant distributions of extracted image features. The aspects further extract trajectory features from the video data relative to the trajectory of the tracked movement of the object, and generate a global anomaly confidence decision value for the object trajectory as a function of fitting the extracted trajectory features to a learned normal motion trajectory model. The aspects determine anomalous events as a function of the generated global anomaly confidence decision value and the local anomaly confidence decision values.
Type: Grant
Filed: December 7, 2017
Date of Patent: April 7, 2020
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Balamanohar Paluri, Sharathchandra U. Pankanti, Yun Zhai
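The fusion of per-cell local confidences with a trajectory-level global confidence, as the abstract describes, can be sketched as follows. This is a minimal illustration, not the patented method: the z-score scoring, the equal fusion weights, and the decision threshold are all hypothetical choices.

```python
def cell_anomaly_confidence(feature, model_mean, model_std):
    # Confidence that a feature deviates from the cell's learned normal
    # pattern, expressed as a z-score (a hypothetical scoring choice).
    return abs(feature - model_mean) / model_std

def combined_decision(local_confidences, global_confidence,
                      local_w=0.5, global_w=0.5):
    # Fuse the worst local deviation with the trajectory-level global
    # confidence; the weights here are illustrative, not from the patent.
    return local_w * max(local_confidences) + global_w * global_confidence

# A 2x2 grid of local motion models: (mean motion magnitude, std) per cell.
models = {(0, 0): (1.0, 0.2), (0, 1): (1.1, 0.2),
          (1, 0): (0.9, 0.3), (1, 1): (1.0, 0.25)}
observed = {(0, 0): 1.05, (0, 1): 1.0, (1, 0): 2.4, (1, 1): 1.0}

local_scores = [cell_anomaly_confidence(observed[c], *models[c])
                for c in models]
score = combined_decision(local_scores, global_confidence=1.2)
is_anomalous = score > 2.0  # hypothetical decision threshold
```

Cell (1, 0) deviates strongly from its model (z-score 5.0), so the fused score flags the event as anomalous even though the other cells look normal.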
-
Patent number: 10607089
Abstract: An approach for re-identifying an object in a test image is presented. Similarity measures between the test image and training images captured by a first camera are determined. The similarity measures are based on Bhattacharyya distances between feature representations of an estimated background region of the test image and feature representations of background regions of the training images. A transformed test image based on the Bhattacharyya distances has a brightness that is different from the test image's brightness and matches a brightness of training images captured by a second camera. An appearance of the transformed test image resembles an appearance of a capture of the test image by the second camera. Another image included in test images captured by the second camera is identified as being closest in appearance to the transformed test image and another object in the identified other image is a re-identification of the object.
Type: Grant
Filed: November 26, 2018
Date of Patent: March 31, 2020
Assignee: International Business Machines Corporation
Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
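The Bhattacharyya distance that underlies these similarity measures can be computed from normalized histograms as below. This is a generic sketch of the distance itself; the actual feature representations and background estimation in the patent are not reproduced here.

```python
import math

def normalize(hist):
    # Convert raw bin counts into a probability distribution.
    total = sum(hist)
    return [v / total for v in hist]

def bhattacharyya_distance(h, g):
    # Bhattacharyya coefficient: overlap between two normalized histograms.
    bc = sum(math.sqrt(p * q) for p, q in zip(h, g))
    bc = min(bc, 1.0)  # guard against floating-point overshoot
    return -math.log(bc) if bc > 0 else float("inf")

# Toy background histograms: the test image vs. two training images.
test_bg = normalize([4, 8, 6, 2])
train_bgs = [normalize([4, 8, 6, 2]), normalize([1, 1, 8, 10])]

distances = [bhattacharyya_distance(test_bg, t) for t in train_bgs]
closest = distances.index(min(distances))
```

Identical histograms yield distance 0, and the more the distributions diverge, the larger the distance grows, which is what makes it usable as a similarity measure between background regions.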
-
Publication number: 20190095719
Abstract: An approach for re-identifying an object in a test image is presented. Similarity measures between the test image and training images captured by a first camera are determined. The similarity measures are based on Bhattacharyya distances between feature representations of an estimated background region of the test image and feature representations of background regions of the training images. A transformed test image based on the Bhattacharyya distances has a brightness that is different from the test image's brightness and matches a brightness of training images captured by a second camera. An appearance of the transformed test image resembles an appearance of a capture of the test image by the second camera. Another image included in test images captured by the second camera is identified as being closest in appearance to the transformed test image and another object in the identified other image is a re-identification of the object.
Type: Application
Filed: November 26, 2018
Publication date: March 28, 2019
Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
-
Patent number: 10169664
Abstract: An approach for re-identifying an object in a test image is presented. Similarity measures between the test image and training images captured by a first camera are determined. The similarity measures are based on Bhattacharyya distances between feature representations of an estimated background region of the test image and feature representations of background regions of the training images. A transformed test image based on the Bhattacharyya distances has a brightness that is different from the test image's brightness, and matches a brightness of training images captured by a second camera. An appearance of the transformed test image resembles an appearance of a capture of the test image by the second camera. Another image included in test images captured by the second camera is identified as being closest in appearance to the transformed test image and another object in the identified other image is a re-identification of the object.
Type: Grant
Filed: March 28, 2017
Date of Patent: January 1, 2019
Assignee: International Business Machines Corporation
Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
-
Patent number: 10037604
Abstract: Foreground objects of interest are distinguished from a background model by dividing a region of interest of a video data image into a grid array of individual cells. Each cell is labeled as foreground if accumulated edge energy within the cell meets an edge energy threshold, or if color intensities for different colors within the cell differ by a color intensity differential threshold, or as a function of combinations of these determinations.
Type: Grant
Filed: June 8, 2016
Date of Patent: July 31, 2018
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Xiaoyu Wang
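The per-cell labeling rule described here can be sketched as a simple predicate over a cell's accumulated edge energy and per-channel color intensities. The threshold values and channel representation below are illustrative assumptions, not figures from the patent.

```python
def label_cell(edge_energy, color_intensities,
               edge_threshold=50.0, color_diff_threshold=30.0):
    # Foreground if accumulated edge energy meets the edge-energy threshold,
    # OR if the per-channel color intensities differ by more than the
    # color-intensity differential threshold (thresholds are hypothetical).
    strong_edges = edge_energy >= edge_threshold
    color_spread = max(color_intensities) - min(color_intensities)
    distinct_color = color_spread >= color_diff_threshold
    return "foreground" if (strong_edges or distinct_color) else "background"

labels = [
    label_cell(120.0, (90, 95, 92)),   # strong edges -> foreground
    label_cell(10.0, (200, 40, 60)),   # distinct color -> foreground
    label_cell(10.0, (90, 95, 92)),    # neither criterion -> background
]
```

Combining two cheap per-cell tests this way lets either cue (texture or color) pull a cell into the foreground, which is the disjunction the abstract describes.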
-
Patent number: 10037607
Abstract: Image matching tracks the movements of objects from initial camera scenes to ending camera scenes in non-overlapping cameras. Paths are defined through the scenes for pairings of initial and ending cameras by different respective scene entry and exit points. For each camera pairing, the combination of one path through the initial camera scene and one path through the ending camera scene having the highest total number of tracked movements relative to all other combinations is chosen, and the scene exit point of the selected path through the initial camera and the scene entry point of the selected path into the ending camera define a path connection of the initial camera scene to the ending camera scene.
Type: Grant
Filed: January 22, 2016
Date of Patent: July 31, 2018
Assignee: International Business Machines Corporation
Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra Pankanti
-
Publication number: 20180107881
Abstract: Aspects determine anomalous events, wherein processors determine a trajectory of tracked movement of an object through an image field of a camera partitioned into a matrix grid of different local units. The aspects generate anomaly confidence decision values for image features extracted from video data of the tracked movement of the object as a function of fitting the extracted image features to normal patterns of local motion pattern models defined by dominant distributions of extracted image features. The aspects further extract trajectory features from the video data relative to the trajectory of the tracked movement of the object, and generate a global anomaly confidence decision value for the object trajectory as a function of fitting the extracted trajectory features to a learned normal motion trajectory model. The aspects determine anomalous events as a function of the generated global anomaly confidence decision value and the local anomaly confidence decision values.
Type: Application
Filed: December 7, 2017
Publication date: April 19, 2018
Inventors: Ankur Datta, Balamanohar Paluri, Sharathchandra U. Pankanti, Yun Zhai
-
Patent number: 9928423
Abstract: Local models learned from anomaly detection are used to rank detected anomalies. The local model patterns are defined from image feature values extracted from an image field of video image data with respect to different predefined spatial and temporal local units, wherein anomaly results are determined by fitting extracted image features to the local model patterns. Image feature values extracted from the image field local units associated with anomaly results are normalized, and image feature values extracted from the image field local units are clustered. Weights for anomaly results are learned as a function of the relations of the normalized extracted image feature values to the clustered image feature values. The normalized values are multiplied by the learned weights to generate ranking values to rank the anomalies.
Type: Grant
Filed: September 4, 2015
Date of Patent: March 27, 2018
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Balamanohar Paluri, Sharathchandra U. Pankanti, Yun Zhai
-
Patent number: 9885568
Abstract: A camera is positioned at a fixed vertical height above a reference plane, with the axis of the camera lens at an acute angle with respect to the perpendicular of the reference plane. One or more processors receive camera images of a multiplicity of people of unknown height, and the vertical axes of the images are transformed into pixel counts. The known heights of people from a known statistical distribution of heights are received by one or more processors and transformed to a normalized measurement of pixel counts, based in part on the focal length of the camera lens, the angle of the camera, and an objective function summing differences between pixel counts of the known heights of people and the unknown heights of people. The fixed vertical height of the camera is determined by adjusting the estimated camera height to minimize the objective function.
Type: Grant
Filed: March 22, 2016
Date of Patent: February 6, 2018
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
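The "adjust the estimate to minimize an objective" step can be illustrated with a grid search. The projection model below is a deliberately simplified stand-in (pixel count proportional to person height over camera height, ignoring the camera angle the patent accounts for), so treat every formula and value here as a hypothetical assumption.

```python
def predicted_pixels(person_height, camera_height, focal_length):
    # Hypothetical simplified projection: a person's pixel height scales
    # with their real height and inversely with the camera mounting height.
    # The patented model also involves the camera's tilt angle, omitted here.
    return focal_length * person_height / camera_height

def estimate_camera_height(observed_pixels, known_heights, focal_length,
                           candidates):
    # Grid-search candidate camera heights, minimizing an objective that
    # sums squared differences between predicted and observed pixel counts.
    def objective(h_cam):
        return sum((predicted_pixels(h, h_cam, focal_length) - p) ** 2
                   for h, p in zip(known_heights, observed_pixels))
    return min(candidates, key=objective)

focal = 1000.0
true_height = 3.0                       # metres; ground truth for the demo
known_heights = [1.6, 1.7, 1.8]         # from a known height distribution
observed = [predicted_pixels(h, true_height, focal) for h in known_heights]

best = estimate_camera_height(observed, known_heights, focal,
                              candidates=[2.0, 2.5, 3.0, 3.5, 4.0])
```

Because the synthetic observations were generated at a camera height of 3.0 m, the objective reaches zero there and the search recovers it exactly.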
-
Patent number: 9805505
Abstract: Objects within two-dimensional video data are modeled by three-dimensional models as a function of object type and motion through manually calibrating a two-dimensional image to the three spatial dimensions of a three-dimensional modeling cube. Calibrated three-dimensional locations of an object in motion in the two-dimensional image field of view of a video data input are determined and used to determine a heading direction of the object as a function of the camera calibration and determined movement between the determined three-dimensional locations. The two-dimensional object image is replaced in the video data input with an object-type three-dimensional polygonal model having a projected bounding box that best matches a bounding box of an image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box, and rendered with extracted image features.
Type: Grant
Filed: July 29, 2016
Date of Patent: October 31, 2017
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Rogerio S. Feris, Yun Zhai
-
Patent number: 9710924
Abstract: Field-of-view overlap among multiple cameras is automatically determined as a function of the temporal overlap of object tracks determined within their fields of view. Object tracks with the highest similarity value are assigned into pairs, and portions of the assigned object track pairs having a temporally overlapping period of time are determined. Scene entry points are determined from object locations on the tracks at the beginning of the temporally overlapping period of time, and scene exit points from object locations at the end of the temporally overlapping period of time. Boundary lines for the overlapping field-of-view portions within the corresponding camera fields of view are defined as a function of the determined entry and exit points in their respective fields of view.
Type: Grant
Filed: September 14, 2015
Date of Patent: July 18, 2017
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Yun Zhai
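The core steps of finding the temporally overlapping portion of a track pair and reading off entry/exit points can be sketched as below. The track representation (a dict mapping frame number to an (x, y) location) and the toy coordinates are assumptions for illustration.

```python
def temporal_overlap(track_a, track_b):
    # Return the (start, end) frame range where two tracks overlap in time,
    # or None if they never coexist. Tracks map frame -> (x, y).
    frames = sorted(set(track_a) & set(track_b))
    return (frames[0], frames[-1]) if frames else None

def entry_exit_points(track, overlap):
    # Scene entry point at the start of the overlap, exit point at its end.
    start, end = overlap
    return track[start], track[end]

# The same object tracked in two cameras with partly overlapping views.
cam1_track = {3: (10, 5), 4: (12, 5), 5: (14, 6), 6: (16, 6)}
cam2_track = {5: (2, 6), 6: (4, 6), 7: (6, 7)}

overlap = temporal_overlap(cam1_track, cam2_track)
entry1, exit1 = entry_exit_points(cam1_track, overlap)
entry2, exit2 = entry_exit_points(cam2_track, overlap)
```

The per-camera entry and exit points gathered this way over many track pairs are what the patent then uses to fit boundary lines for the overlapping field-of-view region.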
-
Publication number: 20170200051
Abstract: An approach for re-identifying an object in a test image is presented. Similarity measures between the test image and training images captured by a first camera are determined. The similarity measures are based on Bhattacharyya distances between feature representations of an estimated background region of the test image and feature representations of background regions of the training images. A transformed test image based on the Bhattacharyya distances has a brightness that is different from the test image's brightness, and matches a brightness of training images captured by a second camera. An appearance of the transformed test image resembles an appearance of a capture of the test image by the second camera. Another image included in test images captured by the second camera is identified as being closest in appearance to the transformed test image and another object in the identified other image is a re-identification of the object.
Type: Application
Filed: March 28, 2017
Publication date: July 13, 2017
Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
-
Patent number: 9659224
Abstract: Disclosed are techniques for merging optical character recognized (OCR'd) text from frames of image data. In some implementations, a device sends frames of image data to a server, where each frame includes at least a portion of a captured textual item. The server performs optical character recognition (OCR) on the image data of each frame. When OCR'd text from respective frames is returned to the device from the server, the device can perform matching operations on the text, for instance, using bounding boxes and/or edit distance processing. The device can merge any identified matches of OCR'd text from different frames. The device can then display the merged text with any corrections.
Type: Grant
Filed: March 31, 2014
Date of Patent: May 23, 2017
Assignee: Amazon Technologies, Inc.
Inventors: Matthew Joseph Cole, Sonjeev Jahagirdar, Matthew Daniel Hart, David Paul Ramos, Ankur Datta, Utkarsh Prateek, Emilie Noelle McConville, Prashant Hegde, Avnish Sikka
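The edit-distance matching mentioned here can be sketched with a classic Levenshtein implementation. The distance threshold and the nearest-line pairing strategy are hypothetical simplifications; the patent also constrains matches with bounding boxes, which this sketch omits.

```python
def edit_distance(a, b):
    # Classic Levenshtein distance via dynamic programming.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def match_ocr_lines(frame1_lines, frame2_lines, max_distance=2):
    # Pair each line from one frame with the closest line in the other,
    # treating small edit distances as the same captured text
    # (the threshold is an illustrative assumption).
    matches = []
    for line1 in frame1_lines:
        best = min(frame2_lines, key=lambda l2: edit_distance(line1, l2))
        if edit_distance(line1, best) <= max_distance:
            matches.append((line1, best))
    return matches

# Two OCR passes over the same scene, each with different recognition errors.
frame1 = ["EXIT 25", "Speed Lirnit 55"]
frame2 = ["EXlT 25", "Speed Limit 55", "Gas Food Lodging"]
matches = match_ocr_lines(frame1, frame2)
```

Each matched pair could then be merged, for example by preferring the reading with the higher OCR confidence, to display corrected text.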
-
Patent number: 9633263
Abstract: An approach for re-identifying an object in a first test image is presented. Brightness transfer functions (BTFs) between respective pairs of training images are determined. Respective similarity measures are determined between the first test image and each of the training images captured by the first camera (first training images). A weighted brightness transfer function (WBTF) is determined by combining the BTFs weighted by weights of the first training images. The weights are based on the similarity measures. The first test image is transformed by the WBTF to better match one of the training images captured by the second camera. Another test image, captured by the second camera, is identified because it is closer in appearance to the transformed test image than other test images captured by the second camera. An object in the identified test image is a re-identification of the object in the first test image.
Type: Grant
Filed: October 9, 2012
Date of Patent: April 25, 2017
Assignee: International Business Machines Corporation
Inventors: Lisa M. Brown, Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti
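The similarity-weighted combination of BTFs can be sketched as below. The toy BTFs are simple gain curves and the similarity values are invented for the demo; real BTFs would be learned lookup tables or fitted curves between camera pairs.

```python
def weighted_btf(btfs, similarities):
    # Combine per-training-pair brightness transfer functions into a single
    # WBTF, with weights proportional to similarity to the test image.
    total = sum(similarities)
    weights = [s / total for s in similarities]
    def wbtf(brightness):
        return sum(w * f(brightness) for w, f in zip(weights, btfs))
    return wbtf

# Toy BTFs between camera 1 and camera 2, modeled as gain curves.
btfs = [lambda b: 1.2 * b, lambda b: 0.8 * b]
similarities = [3.0, 1.0]   # the test image resembles the first pair most

wbtf = weighted_btf(btfs, similarities)
transformed = wbtf(100.0)
```

The more similar training pair dominates the combination, so the transformed brightness (110 here) sits closer to that pair's transfer curve than to a plain average of the two.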
-
Patent number: 9633045
Abstract: Images are retrieved and ranked according to relevance to the attributes of a multi-attribute query by training image attribute detectors for the different attributes annotated in a training dataset. Pair-wise correlations are learned between pairs of the annotated attributes from the training dataset of images. Image datasets are searched via the trained attribute detectors for images comprising the attributes in a multi-attribute query. The retrieved images are ranked as a function of comprising attributes that are not within the query subset of attributes but are paired to one of the query subset attributes by the pair-wise correlations, wherein the ranking is an order of likelihood that the different attributes will appear in an image with the paired query attribute.
Type: Grant
Filed: January 13, 2016
Date of Patent: April 25, 2017
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Behjat Siddiquie
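The idea of crediting non-query attributes that are correlated with query attributes can be sketched with a simple additive score. The scoring function, the attribute names, and the correlation values are all hypothetical; the patent does not prescribe this particular formula.

```python
def rank_images(images, query_attrs, correlations):
    # Score each image by its query attributes, plus partial credit for
    # non-query attributes that are pair-wise correlated with a query
    # attribute (an illustrative additive scoring choice).
    def score(attrs):
        s = 0.0
        for a in attrs:
            if a in query_attrs:
                s += 1.0
            else:
                s += max((correlations.get((q, a), 0.0)
                          for q in query_attrs), default=0.0)
        return s
    return sorted(images, key=lambda im: score(im["attrs"]), reverse=True)

# Learned pair-wise correlations between annotated attributes (toy values).
correlations = {("sunglasses", "hat"): 0.6, ("sunglasses", "beard"): 0.1}
images = [
    {"id": "a", "attrs": {"sunglasses"}},
    {"id": "b", "attrs": {"sunglasses", "hat"}},
    {"id": "c", "attrs": {"beard"}},
]
ranked = [im["id"] for im in rank_images(images, {"sunglasses"}, correlations)]
```

Image "b" outranks "a" even though both satisfy the query, because its "hat" attribute is strongly correlated with the queried "sunglasses" attribute.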
-
Publication number: 20160335798
Abstract: Objects within two-dimensional video data are modeled by three-dimensional models as a function of object type and motion through manually calibrating a two-dimensional image to the three spatial dimensions of a three-dimensional modeling cube. Calibrated three-dimensional locations of an object in motion in the two-dimensional image field of view of a video data input are determined and used to determine a heading direction of the object as a function of the camera calibration and determined movement between the determined three-dimensional locations. The two-dimensional object image is replaced in the video data input with an object-type three-dimensional polygonal model having a projected bounding box that best matches a bounding box of an image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box, and rendered with extracted image features.
Type: Application
Filed: July 29, 2016
Publication date: November 17, 2016
Inventors: Ankur Datta, Rogerio S. Feris, Yun Zhai
-
Publication number: 20160284097
Abstract: Foreground objects of interest are distinguished from a background model by dividing a region of interest of a video data image into a grid array of individual cells. Each cell is labeled as foreground if accumulated edge energy within the cell meets an edge energy threshold, or if color intensities for different colors within the cell differ by a color intensity differential threshold, or as a function of combinations of these determinations.
Type: Application
Filed: June 8, 2016
Publication date: September 29, 2016
Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Xiaoyu Wang
-
Patent number: 9430874
Abstract: Objects within two-dimensional video data are modeled by three-dimensional models as a function of object type and motion through manually calibrating a two-dimensional image to the three spatial dimensions of a three-dimensional modeling cube. Calibrated three-dimensional locations of an object in motion in the two-dimensional image field of view of a video data input are determined and used to determine a heading direction of the object as a function of the camera calibration and determined movement between the determined three-dimensional locations. The two-dimensional object image is replaced in the video data input with an object-type three-dimensional polygonal model having a projected bounding box that best matches a bounding box of an image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box, and rendered with extracted image features.
Type: Grant
Filed: September 9, 2015
Date of Patent: August 30, 2016
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Rogerio S. Feris, Yun Zhai
-
Patent number: 9396548
Abstract: Foreground objects of interest are distinguished from a background model by dividing a region of interest of a video data image into a grid array of individual cells. Each cell is labeled as foreground if accumulated edge energy within the cell meets an edge energy threshold, or if color intensities for different colors within the cell differ by a color intensity differential threshold, or as a function of combinations of these determinations.
Type: Grant
Filed: September 22, 2015
Date of Patent: July 19, 2016
Assignee: International Business Machines Corporation
Inventors: Ankur Datta, Rogerio S. Feris, Sharathchandra U. Pankanti, Xiaoyu Wang