Patents by Inventor Eugene Krivopaltsev

Eugene Krivopaltsev has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960695
    Abstract: The disclosed embodiments provide a system that facilitates use of an application on an electronic device. During operation, the system obtains a first metadata definition containing a mapping of view components in a user interface of the application to a set of attribute-specific types associated with an attribute of the electronic device, and a second metadata definition containing a set of rules for binding the attribute-specific types to a set of platform-specific user-interface elements for a platform of the electronic device. Next, the system generates a view for display in the user interface by applying, based on the attribute and the platform, the first and second metadata definitions to content describing the view to select one or more platform-specific user-interface elements for rendering one or more of the view components in the content. The system then instantiates the platform-specific user-interface element(s) to render the view component(s).
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: April 16, 2024
    Assignee: INTUIT INC.
    Inventors: Eugene Krivopaltsev, Marc J. Attinasi, Shailesh K. Soliwal
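    The abstract above (patent 11960695) describes a two-step metadata lookup: view components are first mapped to attribute-specific types, which are then bound to platform-specific UI elements. The following is a minimal sketch of that idea, not the patented implementation; all names (ATTRIBUTE_METADATA, BINDING_RULES, render_view) and the example attribute and platform values are illustrative assumptions.
    ```python
    from dataclasses import dataclass

    # First metadata definition (assumed shape): view component -> attribute-specific
    # type, keyed by an attribute of the electronic device (here, screen size).
    ATTRIBUTE_METADATA = {
        "small_screen": {"amount_field": "compact_numeric_input",
                         "help_text":    "collapsible_text"},
        "large_screen": {"amount_field": "full_numeric_input",
                         "help_text":    "inline_text"},
    }

    # Second metadata definition (assumed shape): attribute-specific type ->
    # platform-specific user-interface element, keyed by platform.
    BINDING_RULES = {
        "ios":     {"compact_numeric_input": "UITextField",
                    "full_numeric_input":    "UITextField",
                    "collapsible_text":      "UILabel",
                    "inline_text":           "UILabel"},
        "android": {"compact_numeric_input": "EditText",
                    "full_numeric_input":    "EditText",
                    "collapsible_text":      "TextView",
                    "inline_text":           "TextView"},
    }

    @dataclass
    class RenderedElement:
        component: str
        element: str  # name of the platform-specific UI element

    def render_view(content, attribute, platform):
        """Apply both metadata definitions to content describing a view."""
        type_map = ATTRIBUTE_METADATA[attribute]
        bindings = BINDING_RULES[platform]
        rendered = []
        for component in content:
            specific_type = type_map[component]   # step 1: attribute-specific type
            element = bindings[specific_type]     # step 2: platform-specific element
            rendered.append(RenderedElement(component, element))  # step 3: instantiate
        return rendered

    print(render_view(["amount_field", "help_text"], "small_screen", "ios"))
    ```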
  • Patent number: 11269477
    Abstract: The disclosed embodiments provide a system that renders a view component in a user interface of an application on an electronic device. During operation, the system generates, from content describing a view for display in the user interface, a styling path that includes a position of the view component in a content hierarchy of the view. Next, the system selects, by a styling component executing on a processor in the electronic device, a style context for the view component from a collection of style contexts by matching at least a subset of the styling path to an identifier for the style context. The system then uses the style context to render the view component in the view.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: March 8, 2022
    Assignee: INTUIT INC.
    Inventors: Eugene Krivopaltsev, Marc J. Attinasi, Shailesh K. Soliwal
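    The abstract for patent 11269477 above describes building a styling path from the view's content hierarchy and matching a subset of that path against style-context identifiers. Below is a hedged sketch of one way such matching could work; the longest-matching-suffix rule, the STYLE_CONTEXTS table, and the path syntax are assumptions for illustration, not details taken from the patent.
    ```python
    # Style contexts keyed by identifiers that describe positions in a content
    # hierarchy (assumed "parent/child" path syntax).
    STYLE_CONTEXTS = {
        "form/section/amount_field": {"font": "bold",    "color": "#222222"},
        "section/amount_field":      {"font": "regular", "color": "#444444"},
        "amount_field":              {"font": "regular", "color": "#666666"},
    }

    def styling_path(hierarchy):
        """hierarchy: node names from the root view down to the component."""
        return "/".join(hierarchy)

    def select_style_context(path):
        """Match progressively shorter suffixes of the styling path (assumption:
        the most specific matching identifier wins)."""
        parts = path.split("/")
        for start in range(len(parts)):
            candidate = "/".join(parts[start:])
            if candidate in STYLE_CONTEXTS:
                return STYLE_CONTEXTS[candidate]
        return {}  # no matching context; caller falls back to defaults

    path = styling_path(["form", "section", "amount_field"])
    print(select_style_context(path))  # most specific context is selected
    ```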
  • Patent number: 11042387
    Abstract: This disclosure relates to cross-platform applications that include native and non-native components on mobile devices. An exemplary method generally includes receiving a first workflow step definition including a first set of widgets to be loaded into an application shell. A mobile shell identifies a type of each widget in the first set of widgets (e.g., native or platform-agnostic) and loads each widget into the mobile shell based on the widget type. For a platform-agnostic widget, the mobile shell creates a platform-agnostic widget proxy service, which provides a runtime environment. The platform-agnostic widget may be loaded into the platform-agnostic widget proxy service and executed in the runtime provided thereby.
    Type: Grant
    Filed: February 17, 2020
    Date of Patent: June 22, 2021
    Assignee: INTUIT, INC.
    Inventors: Ann Catherine Jose, Jay Yu, Anshu Verma, Eugene Krivopaltsev, Patteaswaran Karivaradasamy
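    Patent 11042387's abstract above outlines a mobile shell that inspects each widget's type and either loads it natively or wraps it in a proxy service that supplies a runtime. The sketch below illustrates only that branching logic; the class names and the widget and step-definition shapes are hypothetical.
    ```python
    class PlatformAgnosticWidgetProxy:
        """Hypothetical proxy that provides a runtime environment (for example,
        a hosted web view) for a non-native widget."""
        def __init__(self, widget):
            self.widget = widget

        def run(self):
            return f"running {self.widget['name']} inside the hosted runtime"

    class MobileShell:
        def __init__(self):
            self.loaded = []

        def load_workflow_step(self, step_definition):
            for widget in step_definition["widgets"]:
                if widget["type"] == "native":
                    self.loaded.append(widget)  # native widgets load directly
                else:
                    # Platform-agnostic widgets are loaded into a proxy service.
                    self.loaded.append(PlatformAgnosticWidgetProxy(widget))

    shell = MobileShell()
    shell.load_workflow_step({"widgets": [
        {"name": "camera_capture", "type": "native"},
        {"name": "tax_summary",    "type": "platform_agnostic"},
    ]})
    print([type(w).__name__ for w in shell.loaded])
    ```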
  • Patent number: 10949844
    Abstract: This disclosure relates to mobile payments and the processing of data related to electronic transactions. A near field communication connection is established between a mobile communication device of a consumer that serves as a mobile wallet and an electronic payment device of a merchant. Authorization data is shared between the mobile communication device and the electronic payment device without providing electronic payment instrument (e.g. credit card) data to the merchant. Authorization data is transmitted from the mobile communication device to a cloud computer or resource that serves as a cloud wallet and hosts respective data of respective electronic payment instruments of respective consumers, and from the electronic payment device to a payment processor computer. The payment processor computer presents the authorization data to the cloud wallet, and in response, the cloud wallet transmits the credit card data to the payment processor computer, which processes the transaction.
    Type: Grant
    Filed: May 9, 2011
    Date of Patent: March 16, 2021
    Assignee: INTUIT INC.
    Inventors: Trevor D. Dryer, Eran Arbel, Alexander S. Ran, Ajay Tripathi, Douglas Lethin, Bennett R. Blank, Eugene Krivopaltsev
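    The abstract for patent 10949844 describes a flow in which the merchant terminal only ever handles authorization data, while the cloud wallet releases the card data directly to the payment processor. The sketch below models that message flow with plain Python objects; all class and method names are hypothetical, and the cryptography, NFC transport, and error handling of a real payment system are omitted.
    ```python
    class CloudWallet:
        """Hosts consumers' payment-instrument data so the merchant never sees it."""
        def __init__(self):
            self._cards = {}           # consumer_id -> card number
            self._authorizations = {}  # token -> consumer_id

        def store_card(self, consumer_id, card_number):
            self._cards[consumer_id] = card_number

        def register_authorization(self, token, consumer_id):
            self._authorizations[token] = consumer_id

        def redeem(self, token):
            consumer_id = self._authorizations.pop(token)
            return self._cards[consumer_id]

    class PaymentProcessor:
        def __init__(self, wallet):
            self.wallet = wallet

        def process(self, token, amount):
            card = self.wallet.redeem(token)  # processor, not merchant, receives card data
            return f"charged {amount} to card ending {card[-4:]}"

    wallet = CloudWallet()
    wallet.store_card("alice", "4111111111111111")

    # The consumer's device shares an authorization token with the merchant
    # terminal over NFC and registers the same token with the cloud wallet.
    token = "auth-token-123"
    wallet.register_authorization(token, "alice")

    # The merchant terminal forwards only the token to the payment processor,
    # which presents it to the cloud wallet to obtain the card data.
    processor = PaymentProcessor(wallet)
    print(processor.process(token, "19.99"))
    ```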
  • Publication number: 20200409513
    Abstract: The disclosed embodiments provide a system that facilitates use of an application on an electronic device. During operation, the system obtains a first metadata definition containing a mapping of view components in a user interface of the application to a set of attribute-specific types associated with an attribute of the electronic device, and a second metadata definition containing a set of rules for binding the attribute-specific types to a set of platform-specific user-interface elements for a platform of the electronic device. Next, the system generates a view for display in the user interface by applying, based on the attribute and the platform, the first and second metadata definitions to content describing the view to select one or more platform-specific user-interface elements for rendering one or more of the view components in the content. The system then instantiates the platform-specific user-interface element(s) to render the view component(s).
    Type: Application
    Filed: September 14, 2020
    Publication date: December 31, 2020
    Inventors: Eugene KRIVOPALTSEV, Marc J. ATTINASI, Shailesh K. SOLIWAL
  • Publication number: 20200333923
    Abstract: The disclosed embodiments provide a system that renders a view component in a user interface of an application on an electronic device. During operation, the system generates, from content describing a view for display in the user interface, a styling path that includes a position of the view component in a content hierarchy of the view. Next, the system selects, by a styling component executing on a processor in the electronic device, a style context for the view component from a collection of style contexts by matching at least a subset of the styling path to an identifier for the style context. The system then uses the style context to render the view component in the view.
    Type: Application
    Filed: June 30, 2020
    Publication date: October 22, 2020
    Inventors: Eugene KRIVOPALTSEV, Marc J. ATTINASI, Shailesh K. SOLIWAL
  • Patent number: 10802660
    Abstract: The disclosed embodiments provide a system that facilitates use of an application on an electronic device. During operation, the system obtains a first metadata definition containing a mapping of view components in a user interface of the application to a set of attribute-specific types associated with an attribute of the electronic device, and a second metadata definition containing a set of rules for binding the attribute-specific types to a set of platform-specific user-interface elements for a platform of the electronic device. Next, the system generates a view for display in the user interface by applying, based on the attribute and the platform, the first and second metadata definitions to content describing the view to select one or more platform-specific user-interface elements for rendering one or more of the view components in the content. The system then instantiates the platform-specific user-interface element(s) to render the view component(s).
    Type: Grant
    Filed: July 29, 2015
    Date of Patent: October 13, 2020
    Assignee: INTUIT INC.
    Inventors: Eugene Krivopaltsev, Marc J. Attinasi, Shailesh K. Soliwal
  • Patent number: 10732782
    Abstract: The disclosed embodiments provide a system that renders a view component in a user interface of an application on an electronic device. During operation, the system generates, from content describing a view for display in the user interface, a styling path that includes a position of the view component in a content hierarchy of the view. Next, the system selects, by a styling component executing on a processor in the electronic device, a style context for the view component from a collection of style contexts by matching at least a subset of the styling path to an identifier for the style context. The system then uses the style context to render the view component in the view.
    Type: Grant
    Filed: July 29, 2015
    Date of Patent: August 4, 2020
    Assignee: INTUIT INC.
    Inventors: Eugene Krivopaltsev, Marc J. Attinasi, Shailesh K. Soliwal
  • Publication number: 20200183710
    Abstract: This disclosure relates to cross-platform applications that include native and non-native components on mobile devices. An exemplary method generally includes receiving a first workflow step definition including a first set of widgets to be loaded into an application shell. A mobile shell identifies a type of each widget in the first set of widgets (e.g., native or platform-agnostic) and loads each widget into the mobile shell based on the widget type. For a platform-agnostic widget, the mobile shell creates a platform-agnostic widget proxy service, which provides a runtime environment. The platform-agnostic widget may be loaded into the platform-agnostic widget proxy service and executed in the runtime provided thereby.
    Type: Application
    Filed: February 17, 2020
    Publication date: June 11, 2020
    Inventors: Ann Catherine JOSE, Jay YU, Anshu VERMA, Eugene KRIVOPALTSEV, Patteaswaran KARIVARADASAMY
  • Patent number: 10564988
    Abstract: This disclosure relates to cross-platform applications that include native and non-native components on mobile devices. An exemplary method generally includes receiving a first workflow step definition including a first set of widgets to be loaded into an application shell. A mobile shell identifies a type of each widget in the first set of widgets (e.g., native or platform-agnostic) and loads each widget into the mobile shell based on the widget type. For a platform-agnostic widget, the mobile shell creates a platform-agnostic widget proxy service, which provides a runtime environment. The platform-agnostic widget may be loaded into the platform-agnostic widget proxy service and executed in the runtime provided thereby.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: February 18, 2020
    Assignee: INTUIT INC.
    Inventors: Ann Catherine Jose, Jay Yu, Anshu Verma, Eugene Krivopaltsev, Patteaswaran Karivaradasamy
  • Patent number: 10402035
    Abstract: The disclosed embodiments provide a system that facilitates use of an application on an electronic device. During operation, the system executes an orchestrator that coordinates the operation of a set of rendering components for rendering different views of a user interface for the application. The orchestrator is used to provide the user interface on the electronic device. First, the orchestrator obtains content for rendering the user interface. Next, the orchestrator identifies, from the content, a first rendering component from the set of rendering components for use in rendering a first view of the user interface. The system then provides the content to the first rendering component, wherein the content is used by the first rendering component to render the first view of the user interface.
    Type: Grant
    Filed: July 29, 2015
    Date of Patent: September 3, 2019
    Assignee: INTUIT INC.
    Inventors: Ann Catherine Jose, Eugene Krivopaltsev, Jay JieBing Yu
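    The abstract for patent 10402035 describes an orchestrator that picks one of several rendering components based on the content and then hands that content over for rendering. Here is a minimal sketch of that dispatch pattern; the registry, the "renderer" key, and the toy rendering functions are assumptions, not the patented design.
    ```python
    # Hypothetical registry of rendering components, keyed by a name that the
    # orchestrator reads out of the content itself.
    RENDERING_COMPONENTS = {
        "native_form": lambda content: f"<native form with fields {content['fields']}>",
        "web_view":    lambda content: f"<web view showing {content['url']}>",
    }

    class Orchestrator:
        def render(self, content):
            # Identify the rendering component from the content, then provide
            # the content to that component so it can render the view.
            component = RENDERING_COMPONENTS[content["renderer"]]
            return component(content)

    orchestrator = Orchestrator()
    print(orchestrator.render({"renderer": "native_form", "fields": ["name", "ssn"]}))
    ```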
  • Patent number: 10356318
    Abstract: The present disclosure relates to capturing a document. In certain embodiments, Optical Character Recognition (OCR) is performed on each of a plurality of images to identify one or more character sequences in each image. Each image may comprise a portion of the document. In some embodiments, points of connection are identified among the plurality of images based on the one or more character sequences in each image. In certain embodiments, a unified image of the document is produced by stitching the plurality of images together based on the points of connection.
    Type: Grant
    Filed: April 27, 2017
    Date of Patent: July 16, 2019
    Assignee: INTUIT, INC.
    Inventors: Eugene Krivopaltsev, Samir Safi, Boris Fedorov, Daniel Lee
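    Patent 10356318's abstract describes OCRing each partial image of a document, finding points of connection among the images, and stitching them into a unified image. The sketch below approximates only the text side of that idea: overlapping character sequences between consecutive fragments stand in for points of connection. Real stitching would also align and merge the underlying images; the fragment data and overlap rule are illustrative assumptions.
    ```python
    def find_overlap(left, right, min_len=4):
        """Length of the longest suffix of `left` that is also a prefix of `right`
        (at least `min_len` characters), used as the point of connection."""
        max_k = min(len(left), len(right))
        for k in range(max_k, min_len - 1, -1):
            if left[-k:] == right[:k]:
                return k
        return 0

    def stitch(fragments):
        """Concatenate OCR'd fragments, dropping the overlapping portions."""
        unified = fragments[0]
        for fragment in fragments[1:]:
            k = find_overlap(unified, fragment)
            unified += fragment[k:]
        return unified

    # Simulated OCR output from three overlapping photos of the same document.
    ocr_fragments = [
        "Employer identification num",
        "ation number 12-3456789 Wages",
        "6789 Wages, tips 54,210.00",
    ]
    print(stitch(ocr_fragments))
    ```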
  • Patent number: 10289905
    Abstract: Systems of the present disclosure generate accurate training data for optical character recognition (OCR). Systems disclosed herein generate images of a text passage as displayed piecemeal in a user interface (UI) element rendered in a selected font type and size, determine accurate dimensions and locations of bounding boxes for each character pictured in the images, stitch together a training image by concatenating the images, and associate the training image, the bounding box dimensions and locations, and the text passage together in a collection of training data. The collection of training data also includes a computer-readable master copy of the text passage with newline characters inserted therein.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: May 14, 2019
    Assignee: Intuit Inc.
    Inventors: Eugene Krivopaltsev, Sreeneel K. Maddika, Vijay S. Yellapragada
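    The abstract for patent 10289905 (and its related filings below) describes rendering a text passage piecemeal, recording per-character bounding boxes, and stitching the pieces into one training image paired with a master copy of the text. The standard-library sketch below simulates that pipeline with a fixed-width glyph assumption instead of real rendering; the box sizes, piece length, and record layout are placeholders, not the patented measurement technique.
    ```python
    CHAR_W, CHAR_H = 12, 20  # assumed glyph size for a monospace font

    def render_piece(text, x_offset):
        """Return per-character bounding boxes for one piecemeal 'image'."""
        boxes = []
        for i, ch in enumerate(text):
            x = x_offset + i * CHAR_W
            boxes.append({"char": ch, "box": (x, 0, x + CHAR_W, CHAR_H)})
        return boxes

    def build_training_example(passage, piece_len=16):
        """Split the passage into pieces, 'render' each, and stitch by concatenation
        so the bounding boxes remain valid in the combined training image."""
        pieces = [passage[i:i + piece_len] for i in range(0, len(passage), piece_len)]
        boxes, x = [], 0
        for piece in pieces:
            boxes.extend(render_piece(piece, x))
            x += len(piece) * CHAR_W
        return {
            "training_image_width": x,
            "bounding_boxes": boxes,
            "master_copy": passage,  # newline handling from the abstract omitted
        }

    example = build_training_example("Form W-2 Wage and Tax Statement")
    print(len(example["bounding_boxes"]), example["training_image_width"])
    ```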
  • Patent number: 10282604
    Abstract: Systems of the present disclosure generate accurate training data for optical character recognition (OCR). Systems disclosed herein generate images of a text passage as displayed piecemeal in a user interface (UI) element rendered in a selected font type and size, determine accurate dimensions and locations of bounding boxes for each character pictured in the images, stitch together a training image by concatenating the images, and associate the training image, the bounding box dimensions and locations, and the text passage together in a collection of training data. The collection of training data also includes a computer-readable master copy of the text passage with newline characters inserted therein.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: May 7, 2019
    Assignee: Intuit, Inc.
    Inventors: Eugene Krivopaltsev, Sreeneel K. Maddika, Vijay S. Yellapragada
  • Publication number: 20180365487
    Abstract: Systems of the present disclosure generate accurate training data for optical character recognition (OCR). Systems disclosed herein generate images of a text passage as displayed piecemeal in a user interface (UI) element rendered in a selected font type and size, determine accurate dimensions and locations of bounding boxes for each character pictured in the images, stitch together a training image by concatenating the images, and associate the training image, the bounding box dimensions and locations, and the text passage together in a collection of training data. The collection of training data also includes a computer-readable master copy of the text passage with newline characters inserted therein.
    Type: Application
    Filed: August 23, 2018
    Publication date: December 20, 2018
    Inventors: Eugene KRIVOPALTSEV, Sreeneel K. MADDIKA, Vijay S. YELLAPRAGADA
  • Publication number: 20180365488
    Abstract: Systems of the present disclosure generate accurate training data for optical character recognition (OCR). Systems disclosed herein generate images of a text passage as displayed piecemeal in a user interface (UI) element rendered in a selected font type and size, determine accurate dimensions and locations of bounding boxes for each character pictured in the images, stitch together a training image by concatenating the images, and associate the training image, the bounding box dimensions and locations, and the text passage together in a collection of training data. The collection of training data also includes a computer-readable master copy of the text passage with newline characters inserted therein.
    Type: Application
    Filed: August 24, 2018
    Publication date: December 20, 2018
    Inventors: Eugene KRIVOPALTSEV, Sreeneel K. MADDIKA, Vijay S. YELLAPRAGADA
  • Patent number: 10108879
    Abstract: The present disclosure includes techniques for selecting a candidate presentation style for individual documents for inclusion in an aggregate training data set for a document type that may be used to train an OCR processing engine prior to identifying text in an image of a document of the document type. In one embodiment, text input corresponding to a text sample in a document is received, and an image of the text sample in the document is received. For each of a plurality of candidate presentation styles, an OCR processing engine is trained using a training data set corresponding to the given candidate presentation style, and the OCR processing engine is used, as trained, to identify text in the received image. The OCR processing results for each candidate presentation style are compared to the received text input. A candidate presentation style for the document is selected based on the comparisons.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: October 23, 2018
    Assignee: Intuit Inc.
    Inventors: Eugene Krivopaltsev, Sreeneel K. Maddika, Vijay S. Yellapragada
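    The abstract for patent 10108879 describes trying several candidate presentation styles, OCRing the same sample image with an engine trained per style, and keeping the style whose output best matches the known text. The sketch below shows that selection loop with a mocked OCR step; the style names, the simulated outputs, and the use of difflib as a similarity measure are assumptions for illustration.
    ```python
    import difflib

    def ocr_with_style(image, style):
        """Placeholder for an OCR engine trained on `style`'s training data set."""
        simulated_output = {
            "courier_10": "Emp1oyer 1D 12-3456789",
            "arial_12":   "Employer ID 12-3456789",
            "times_9":    "Employer lD l2-34567B9",
        }
        return simulated_output[style]

    def select_presentation_style(image, known_text, candidate_styles):
        """Compare each style's OCR result to the received text input and pick
        the style that produced the closest match."""
        def score(style):
            result = ocr_with_style(image, style)
            return difflib.SequenceMatcher(None, result, known_text).ratio()
        return max(candidate_styles, key=score)

    best = select_presentation_style("w2_sample.png",
                                     "Employer ID 12-3456789",
                                     ["courier_10", "arial_12", "times_9"])
    print(best)  # -> "arial_12"
    ```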
  • Patent number: 10089523
    Abstract: Systems of the present disclosure generate accurate training data for optical character recognition (OCR). Systems disclosed herein generate images of a text passage as displayed piecemeal in a user interface (UI) element rendered in a selected font type and size, determine accurate dimensions and locations of bounding boxes for each character pictured in the images, stitch together a training image by concatenating the images, and associate the training image, the bounding box dimensions and locations, and the text passage together in a collection of training data. The collection of training data also includes a computer-readable master copy of the text passage with newline characters inserted therein.
    Type: Grant
    Filed: October 5, 2016
    Date of Patent: October 2, 2018
    Assignee: INTUIT INC.
    Inventors: Eugene Krivopaltsev, Sreeneel K. Maddika, Vijay S. Yellapragada
  • Publication number: 20180096200
    Abstract: Systems of the present disclosure generate accurate training data for optical character recognition (OCR). Systems disclosed herein generate images of a text passage as displayed piecemeal in a user interface (UI) element rendered in a selected font type and size, determine accurate dimensions and locations of bounding boxes for each character pictured in the images, stitch together a training image by concatenating the images, and associate the training image, the bounding box dimensions and locations, and the text passage together in a collection of training data. The collection of training data also includes a computer-readable master copy of the text passage with newline characters inserted therein.
    Type: Application
    Filed: October 5, 2016
    Publication date: April 5, 2018
    Inventors: Eugene KRIVOPALTSEV, Sreeneel K. MADDIKA, Vijay S. YELLAPRAGADA
  • Publication number: 20180082146
    Abstract: The present disclosure includes techniques for selecting a candidate presentation style for individual documents for inclusion in an aggregate training data set for a document type that may be used to train an OCR processing engine prior to identifying text in an image of a document of the document type. In one embodiment, text input corresponding to a text sample in a document is received, and an image of the text sample in the document is received. For each of a plurality of candidate presentation styles, an OCR processing engine is trained using a training data set corresponding to the given candidate presentation style, and the OCR processing engine is used, as trained, to identify text in the received image. The OCR processing results for each candidate presentation style are compared to the received text input. A candidate presentation style for the document is selected based on the comparisons.
    Type: Application
    Filed: September 21, 2016
    Publication date: March 22, 2018
    Inventors: Eugene KRIVOPALTSEV, Sreeneel K. MADDIKA, Vijay S. YELLAPRAGADA