Patents by Inventor Sreeneel K. Maddika
Sreeneel K. Maddika has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230401795
Abstract: An example process includes: while displaying a portion of an extended reality (XR) environment representing a current field of view of a user: detecting a user gaze at a first object displayed in the XR environment, where the first object is persistent in the current field of view of the XR environment; in response to detecting the user gaze at the first object, expanding the first object into a list of objects including a second object representing a digital assistant; detecting a user gaze at the second object; in accordance with detecting the user gaze at the second object, displaying a first animation of the second object indicating that a digital assistant session is initiated; receiving a first audio input from the user; and displaying a second animation of the second object indicating that the digital assistant is actively listening to the user.
Type: Application
Filed: May 26, 2023
Publication date: December 14, 2023
Inventors: Lynn I. STREJA, Saurabh ADYA, Keith P. AVERY, Karan M. DARYANANI, Stephen O. LEMAY, Myra C. LUKENS, Sreeneel K. MADDIKA, Chaitanya MANNEMALA, Aswath MANOHARAN, Pedro MARI, Jay MOON, Abhishek RAWAT, Garrett L. WEINBERG
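The gaze-driven flow in this abstract can be read as a small state machine. Below is a minimal Python sketch of that flow; the class, state names, and target identifiers are illustrative stand-ins, not taken from the patent.

```python
from enum import Enum, auto

class AssistantState(Enum):
    IDLE = auto()             # persistent first object shown in the field of view
    LIST_EXPANDED = auto()    # first object expanded into a list of objects
    SESSION_STARTED = auto()  # first animation: assistant session initiated
    LISTENING = auto()        # second animation: assistant actively listening

class GazeAssistant:
    """Toy state machine mirroring the gaze-then-audio sequence in the abstract."""

    def __init__(self):
        self.state = AssistantState.IDLE

    def on_gaze(self, target):
        if self.state is AssistantState.IDLE and target == "first_object":
            self.state = AssistantState.LIST_EXPANDED    # expand into a list
        elif (self.state is AssistantState.LIST_EXPANDED
              and target == "assistant_object"):
            self.state = AssistantState.SESSION_STARTED  # play first animation
        return self.state

    def on_audio_input(self):
        if self.state is AssistantState.SESSION_STARTED:
            self.state = AssistantState.LISTENING        # play second animation
        return self.state
```

Each transition corresponds to one clause of the claimed process: gaze at the persistent object, gaze at the assistant object, then audio input.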
-
Patent number: 10558856
Abstract: The present disclosure relates to optical character recognition using captured video. According to one embodiment, using a first image in a stream of images depicting a document, the device extracts text data in a portion of the document depicted in the first image and determines a first confidence level regarding an accuracy of the extracted text data. If the first confidence level satisfies a threshold value, the device saves the extracted text data as recognized content of the source document. Otherwise, the device extracts the text data from the portion of the document as depicted in one or more second images in the stream and determines a second confidence level for the text data extracted from each second image until identifying one of the second images where the second confidence level associated with the text data extracted from the identified second image satisfies the threshold value.
Type: Grant
Filed: September 4, 2018
Date of Patent: February 11, 2020
Assignee: INTUIT INC.
Inventors: Vijay S. Yellapragada, Peijun Chiang, Sreeneel K. Maddika
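The loop this abstract describes, trying successive video frames until the OCR confidence clears a threshold, can be sketched as follows. The function names, the confidence scale, and the fallback behavior are assumptions for illustration; `extract_text` stands in for any OCR callable returning a `(text, confidence)` pair.

```python
def recognize_from_stream(frames, extract_text, threshold=0.9):
    """Scan frames of a document until extracted text meets the confidence bar.

    `frames` is an iterable of images; `extract_text(frame)` returns
    (text, confidence). Both are placeholders, not the patented pipeline.
    """
    best = None
    for frame in frames:
        text, confidence = extract_text(frame)
        if confidence >= threshold:
            return text  # save as recognized content of the document
        if best is None or confidence > best[1]:
            best = (text, confidence)  # remember the best attempt so far
    return best[0] if best else None   # fall back to the best-effort result
```

For example, with a mock OCR that reads `"T0TAL"` at confidence 0.5 on the first frame and `"TOTAL"` at 0.95 on the second, the function returns the second frame's text.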
-
Patent number: 10339373
Abstract: Techniques are disclosed for performing optical character recognition (OCR) by identifying a template based on a hash of a document. One embodiment includes a method for identifying a template associated with an image. The method includes receiving a digital image, a portion of the image depicting a first document, and extracting the portion of the image. The method further includes scaling the portion of the image and generating a first hash from the scaled image. The method further includes comparing the first hash to a set of hashes, each corresponding to a template. The method further includes selecting a first template as corresponding to the first document based on comparing the first hash to the set of hashes and extracting one or more sections of the portion of the image based on the selected first template. The method further includes performing OCR on the extracted one or more sections.
Type: Grant
Filed: August 24, 2018
Date of Patent: July 2, 2019
Assignee: INTUIT INC.
Inventors: Vijay S. Yellapragada, Peijun Chiang, Sreeneel K. Maddika
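A common way to realize "hash the scaled image, compare to stored template hashes" is a perceptual hash plus Hamming distance. The sketch below uses a simple average hash over an already-scaled grayscale pixel grid; the specific hash, the distance cutoff, and the template names are illustrative assumptions, not the patent's method.

```python
def average_hash(pixels):
    """Perceptual hash of a scaled grayscale image (list of pixel rows):
    one bit per pixel, set when the pixel is at or above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p >= mean)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def select_template(document_hash, template_hashes, max_distance=10):
    """Pick the template whose stored hash is nearest the document's hash,
    or None when no template is within `max_distance` bits."""
    best_name, best_dist = None, max_distance + 1
    for name, h in template_hashes.items():
        d = hamming(document_hash, h)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name
```

Once a template is selected, its known section coordinates would drive the region extraction that the abstract says precedes OCR.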
-
Patent number: 10289905
Abstract: Systems of the present disclosure generate accurate training data for optical character recognition (OCR). Systems disclosed herein generate images of a text passage as displayed piecemeal in a user interface (UI) element rendered in a selected font type and size, determine accurate dimensions and locations of bounding boxes for each character pictured in the images, stitch together a training image by concatenating the images, and associate the training image, the bounding box dimensions and locations, and the text passage together in a collection of training data. The collection of training data also includes a computer-readable master copy of the text passage with newline characters inserted therein.
Type: Grant
Filed: August 24, 2018
Date of Patent: May 14, 2019
Assignee: Intuit Inc.
Inventors: Eugene Krivopaltsev, Sreeneel K. Maddika, Vijay S. Yellapragada
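The key idea, rendering text under known conditions so that every character's bounding box is exact by construction, can be sketched without a real rendering engine by assuming a fixed-width font. Everything here (the record layout, the fixed character cell) is an illustrative simplification of the described system.

```python
def build_training_record(text, char_width=8, char_height=12):
    """Stitch per-character 'images' and bounding boxes for one training record.

    A fixed-width stand-in for real font rendering: each character occupies a
    char_width x char_height cell, so box geometry is exact by construction.
    The master copy keeps its newline characters, as in the abstract.
    """
    boxes, x = [], 0
    for ch in text:
        if ch == "\n":
            continue  # newlines live in the master copy, not the image
        boxes.append({"char": ch, "x": x, "y": 0,
                      "w": char_width, "h": char_height})
        x += char_width  # concatenate the next character image to the right
    return {"image_width": x, "image_height": char_height,
            "boxes": boxes, "master_copy": text}
```

Pairing the stitched image with exact boxes and a master copy is what makes the data usable as OCR ground truth.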
-
Patent number: 10282604
Abstract: Systems of the present disclosure generate accurate training data for optical character recognition (OCR). Systems disclosed herein generate images of a text passage as displayed piecemeal in a user interface (UI) element rendered in a selected font type and size, determine accurate dimensions and locations of bounding boxes for each character pictured in the images, stitch together a training image by concatenating the images, and associate the training image, the bounding box dimensions and locations, and the text passage together in a collection of training data. The collection of training data also includes a computer-readable master copy of the text passage with newline characters inserted therein.
Type: Grant
Filed: August 23, 2018
Date of Patent: May 7, 2019
Assignee: Intuit, Inc.
Inventors: Eugene Krivopaltsev, Sreeneel K. Maddika, Vijay S. Yellapragada
-
Patent number: 10229315
Abstract: Aspects of the present disclosure provide methods and apparatuses for detecting duplicate copies of a form in an image of a document. An exemplary method generally includes obtaining a first digital image of a document, performing one or more transformations on the first digital image, determining one or more rectangles in the transformed first digital image, identifying at least a first duplicate copy of the form being depicted in the first digital image based, at least in part, on the determined one or more rectangles, and generating, based on the identified duplicate copy of the form, a notification that the first digital image includes at least the first duplicate copy of the form.
Type: Grant
Filed: July 27, 2016
Date of Patent: March 12, 2019
Assignee: INTUIT, INC.
Inventors: Vijay Yellapragada, Peijun Chiang, Sreeneel K. Maddika
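One simple reading of "identify a duplicate copy from the determined rectangles" is to flag pairs of detected rectangles with near-identical dimensions, as a scanned page with two copies of the same form would produce. The sketch below assumes that heuristic and a 5% size tolerance; it omits the image transformations entirely.

```python
def find_duplicate_forms(rectangles, tolerance=0.05):
    """Flag likely duplicate form copies: pairs of detected rectangles whose
    widths and heights match within `tolerance`. A simplification of the
    described pipeline (no image transforms or notifications here)."""
    def similar(a, b):
        return (abs(a["w"] - b["w"]) <= tolerance * a["w"]
                and abs(a["h"] - b["h"]) <= tolerance * a["h"])

    duplicates = []
    for i, rect in enumerate(rectangles):
        for other in rectangles[i + 1:]:
            if similar(rect, other):
                duplicates.append((rect, other))
    return duplicates
```

A non-empty result would then drive the notification step the abstract describes.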
-
Patent number: 10210384
Abstract: The present disclosure relates to optical character recognition using captured video. According to one embodiment, using a first image in a stream of images depicting a document, the device extracts text data in a portion of the document depicted in the first image and determines a first confidence level regarding an accuracy of the extracted text data. If the first confidence level satisfies a threshold value, the device saves the extracted text data as recognized content of the source document. Otherwise, the device extracts the text data from the portion of the document as depicted in one or more second images in the stream and determines a second confidence level for the text data extracted from each second image until identifying one of the second images where the second confidence level associated with the text data extracted from the identified second image satisfies the threshold value.
Type: Grant
Filed: July 25, 2016
Date of Patent: February 19, 2019
Assignee: INTUIT INC.
Inventors: Vijay Yellapragada, Peijun Chiang, Sreeneel K. Maddika
-
Publication number: 20180365487
Abstract: Systems of the present disclosure generate accurate training data for optical character recognition (OCR). Systems disclosed herein generate images of a text passage as displayed piecemeal in a user interface (UI) element rendered in a selected font type and size, determine accurate dimensions and locations of bounding boxes for each character pictured in the images, stitch together a training image by concatenating the images, and associate the training image, the bounding box dimensions and locations, and the text passage together in a collection of training data. The collection of training data also includes a computer-readable master copy of the text passage with newline characters inserted therein.
Type: Application
Filed: August 23, 2018
Publication date: December 20, 2018
Inventors: Eugene KRIVOPALTSEV, Sreeneel K. MADDIKA, Vijay S. YELLAPRAGADA
-
Publication number: 20180365488
Abstract: Systems of the present disclosure generate accurate training data for optical character recognition (OCR). Systems disclosed herein generate images of a text passage as displayed piecemeal in a user interface (UI) element rendered in a selected font type and size, determine accurate dimensions and locations of bounding boxes for each character pictured in the images, stitch together a training image by concatenating the images, and associate the training image, the bounding box dimensions and locations, and the text passage together in a collection of training data. The collection of training data also includes a computer-readable master copy of the text passage with newline characters inserted therein.
Type: Application
Filed: August 24, 2018
Publication date: December 20, 2018
Inventors: Eugene KRIVOPALTSEV, Sreeneel K. MADDIKA, Vijay S. YELLAPRAGADA
-
Patent number: 10108879
Abstract: The present disclosure includes techniques for selecting a candidate presentation style for individual documents for inclusion in an aggregate training data set for a document type that may be used to train an OCR processing engine prior to identifying text in an image of a document of the document type. In one embodiment, text input corresponding to a text sample in a document is received, and an image of the text sample in the document is received. For each of a plurality of candidate presentation styles, an OCR processing engine is trained using a training data set corresponding to the given candidate presentation style, and the OCR processing engine is used, as trained, to identify text in the received image. The OCR processing results for each candidate presentation style are compared to the received text input. A candidate presentation style for the document is selected based on the comparisons.
Type: Grant
Filed: September 21, 2016
Date of Patent: October 23, 2018
Assignee: Intuit Inc.
Inventors: Eugene Krivopaltsev, Sreeneel K. Maddika, Vijay S. Yellapragada
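The selection step, train per candidate style, OCR the known sample, keep the style whose output best matches the ground-truth text, can be condensed to a few lines. Here `run_ocr(style)` stands in for the whole train-then-recognize cycle, and character-level accuracy is an assumed comparison metric, not necessarily the patent's.

```python
def select_presentation_style(styles, ground_truth, run_ocr):
    """Pick the candidate style whose trained OCR output best matches the
    known text input. `run_ocr(style)` is a placeholder for training an OCR
    engine on that style's data set and recognizing the sample image."""
    def accuracy(result):
        # Fraction of positions where OCR output agrees with the ground truth.
        matches = sum(a == b for a, b in zip(result, ground_truth))
        return matches / max(len(ground_truth), 1)

    return max(styles, key=lambda style: accuracy(run_ocr(style)))
```

For instance, if a "serif" style yields `"INV0ICE"` and a "mono" style yields `"INVOICE"` against ground truth `"INVOICE"`, the mono style is selected.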
-
Patent number: 10089523
Abstract: Systems of the present disclosure generate accurate training data for optical character recognition (OCR). Systems disclosed herein generate images of a text passage as displayed piecemeal in a user interface (UI) element rendered in a selected font type and size, determine accurate dimensions and locations of bounding boxes for each character pictured in the images, stitch together a training image by concatenating the images, and associate the training image, the bounding box dimensions and locations, and the text passage together in a collection of training data. The collection of training data also includes a computer-readable master copy of the text passage with newline characters inserted therein.
Type: Grant
Filed: October 5, 2016
Date of Patent: October 2, 2018
Assignee: INTUIT INC.
Inventors: Eugene Krivopaltsev, Sreeneel K. Maddika, Vijay S. Yellapragada
-
Patent number: 10013643
Abstract: Techniques are disclosed for facilitating optical character recognition (OCR) by identifying one or more regions in an electronic document to perform the OCR. For example, a method for identifying information in an electronic document includes obtaining a set of training documents for each template of a plurality of templates for the electronic document, extracting spatial attributes for at least a first label region and at least a first corresponding value region from the set, and training a classifier model based on the extracted spatial attributes, wherein the classifier model is used to identify the information in the electronic document. The spatial attributes represent a position of at least the first label region and at least the first value region within the electronic document.
Type: Grant
Filed: July 26, 2016
Date of Patent: July 3, 2018
Assignee: INTUIT INC.
Inventors: Vijay Yellapragada, Peijun Chiang, Sreeneel K. Maddika
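A classifier over spatial attributes of label and value regions can be illustrated with a nearest-centroid model on (x, y, w, h) features. The abstract does not name a classifier type, so the centroid approach, the feature tuple, and the region labels below are all assumptions for the sketch.

```python
def train_centroids(training_regions):
    """Average the spatial attributes (x, y, w, h) of each labeled region
    across a set of training documents: one centroid per region label."""
    sums, counts = {}, {}
    for label, feats in training_regions:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, value in enumerate(feats):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify_region(feats, centroids):
    """Assign a region to the nearest learned centroid (squared distance)."""
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(feats, center))
    return min(centroids, key=lambda label: dist(centroids[label]))
```

At OCR time, regions classified as value regions would be the ones handed to the recognizer.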
-
Publication number: 20180096200
Abstract: Systems of the present disclosure generate accurate training data for optical character recognition (OCR). Systems disclosed herein generate images of a text passage as displayed piecemeal in a user interface (UI) element rendered in a selected font type and size, determine accurate dimensions and locations of bounding boxes for each character pictured in the images, stitch together a training image by concatenating the images, and associate the training image, the bounding box dimensions and locations, and the text passage together in a collection of training data. The collection of training data also includes a computer-readable master copy of the text passage with newline characters inserted therein.
Type: Application
Filed: October 5, 2016
Publication date: April 5, 2018
Inventors: Eugene KRIVOPALTSEV, Sreeneel K. MADDIKA, Vijay S. YELLAPRAGADA
-
Publication number: 20180082146
Abstract: The present disclosure includes techniques for selecting a candidate presentation style for individual documents for inclusion in an aggregate training data set for a document type that may be used to train an OCR processing engine prior to identifying text in an image of a document of the document type. In one embodiment, text input corresponding to a text sample in a document is received, and an image of the text sample in the document is received. For each of a plurality of candidate presentation styles, an OCR processing engine is trained using a training data set corresponding to the given candidate presentation style, and the OCR processing engine is used, as trained, to identify text in the received image. The OCR processing results for each candidate presentation style are compared to the received text input. A candidate presentation style for the document is selected based on the comparisons.
Type: Application
Filed: September 21, 2016
Publication date: March 22, 2018
Inventors: Eugene KRIVOPALTSEV, Sreeneel K. MADDIKA, Vijay S. YELLAPRAGADA
-
Publication number: 20180032811
Abstract: Aspects of the present disclosure provide methods and apparatuses for detecting duplicate copies of a form in an image of a document. An exemplary method generally includes obtaining a first digital image of a document, performing one or more transformations on the first digital image, determining one or more rectangles in the transformed first digital image, identifying at least a first duplicate copy of the form being depicted in the first digital image based, at least in part, on the determined one or more rectangles, and generating, based on the identified duplicate copy of the form, a notification that the first digital image includes at least the first duplicate copy of the form.
Type: Application
Filed: July 27, 2016
Publication date: February 1, 2018
Applicant: INTUIT INC.
Inventors: Vijay YELLAPRAGADA, Peijun CHIANG, Sreeneel K. MADDIKA
-
Publication number: 20180032842
Abstract: Techniques are disclosed for facilitating optical character recognition (OCR) by identifying one or more regions in an electronic document to perform the OCR. For example, a method for identifying information in an electronic document includes obtaining a set of training documents for each template of a plurality of templates for the electronic document, extracting spatial attributes for at least a first label region and at least a first corresponding value region from the set, and training a classifier model based on the extracted spatial attributes, wherein the classifier model is used to identify the information in the electronic document. The spatial attributes represent a position of at least the first label region and at least the first value region within the electronic document.
Type: Application
Filed: July 26, 2016
Publication date: February 1, 2018
Inventors: Vijay YELLAPRAGADA, Peijun CHIANG, Sreeneel K. MADDIKA
-
Publication number: 20180025222
Abstract: The present disclosure relates to optical character recognition using captured video. According to one embodiment, using a first image in a stream of images depicting a document, the device extracts text data in a portion of the document depicted in the first image and determines a first confidence level regarding an accuracy of the extracted text data. If the first confidence level satisfies a threshold value, the device saves the extracted text data as recognized content of the source document. Otherwise, the device extracts the text data from the portion of the document as depicted in one or more second images in the stream and determines a second confidence level for the text data extracted from each second image until identifying one of the second images where the second confidence level associated with the text data extracted from the identified second image satisfies the threshold value.
Type: Application
Filed: July 25, 2016
Publication date: January 25, 2018
Inventors: Vijay YELLAPRAGADA, Peijun CHIANG, Sreeneel K. MADDIKA
-
Patent number: 9721177
Abstract: During an information-extraction technique, visual suitability indicators may be displayed to a user of the electronic device to assist the user in acquiring an image of a document that is suitable for subsequent extraction of textual information. For example, an imaging application executed by the electronic device may display, in a window associated with the imaging application, a visual suitability indicator of a tilt orientation of the electronic device relative to a plane of the document. When the tilt orientation falls within a predefined range, the electronic device may modify the visual suitability indicators to provide visual feedback to the user. Then, the electronic device may acquire the image of the document using an imaging device, which is integrated into the electronic device. Next, the electronic device may extract the textual information from the image of the document using optical character recognition.
Type: Grant
Filed: January 25, 2016
Date of Patent: August 1, 2017
Assignee: INTUIT INC.
Inventors: Sammy Lee, Grace Pariante, Eugene Krivopaltsev, Sreeneel K. Maddika, Bobby G. Bray, Jr., Andrew B. Firstenberger
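The core check behind the tilt-orientation indicator, is the device close enough to parallel with the document's plane to capture, reduces to comparing tilt angles against a predefined range. The 5-degree limit and the hint strings below are illustrative choices, not values from the patent.

```python
def tilt_indicator(pitch_deg, roll_deg, max_tilt_deg=5.0):
    """Report whether the device is level enough over the document to capture.

    pitch_deg/roll_deg: tilt of the device relative to the document plane,
    as reported by motion sensors. The 5-degree default is an assumption.
    """
    ok = abs(pitch_deg) <= max_tilt_deg and abs(roll_deg) <= max_tilt_deg
    return {"suitable": ok,
            "hint": "hold steady and capture" if ok else "level the device"}
```

In the described flow, the indicator's state change is what signals the user (and the app) that the image can be acquired and passed to OCR.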
-
Publication number: 20160140410
Abstract: During an information-extraction technique, visual suitability indicators may be displayed to a user of the electronic device to assist the user in acquiring an image of a document that is suitable for subsequent extraction of textual information. For example, an imaging application executed by the electronic device may display, in a window associated with the imaging application, a visual suitability indicator of a tilt orientation of the electronic device relative to a plane of the document. When the tilt orientation falls within a predefined range, the electronic device may modify the visual suitability indicators to provide visual feedback to the user. Then, the electronic device may acquire the image of the document using an imaging device, which is integrated into the electronic device. Next, the electronic device may extract the textual information from the image of the document using optical character recognition.
Type: Application
Filed: January 25, 2016
Publication date: May 19, 2016
Applicant: INTUIT INC.
Inventors: SAMMY LEE, GRACE PARIANTE, EUGENE KRIVOPALTSEV, SREENEEL K. MADDIKA, BOBBY G. BRAY, JR., ANDREW B. FIRSTENBERGER
-
Patent number: 9245341
Abstract: During an information-extraction technique, visual suitability indicators may be displayed to a user of the electronic device to assist the user in acquiring an image of a document that is suitable for subsequent extraction of textual information. For example, an imaging application executed by the electronic device may display, in a window associated with the imaging application, a visual suitability indicator of a tilt orientation of the electronic device relative to a plane of the document. When the tilt orientation falls within a predefined range, the electronic device may modify the visual suitability indicators to provide visual feedback to the user. Then, the electronic device may acquire the image of the document using an imaging device, which is integrated into the electronic device. Next, the electronic device may extract the textual information from the image of the document using optical character recognition.
Type: Grant
Filed: April 14, 2014
Date of Patent: January 26, 2016
Assignee: INTUIT INC.
Inventors: Sammy Lee, Grace Pariante, Eugene Krivopaltsev, Sreeneel K. Maddika, Bobby G. Bray, Jr., Andrew B. Firstenberger