Patents by Inventor Abhinandan GANAPATI BANNE

Abhinandan GANAPATI BANNE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11237853
    Abstract: Techniques for auto-executing instructions provided in a video on a computing platform are provided. A script is developed from audio provided in the video. Text shown in frames of the video is extracted. Simulated user interaction (UI) events present in the video are identified. A timeline representation is generated to include entries for elements of the script and the extracted text, and identified UI events. Like elements are collected into common entries. Each entry in the script that lacks an associated UI event but is likely to involve a user action prompt is identified. Each entry having an associated identified UI event, and each entry identified as likely to involve a user action prompt, is converted into a corresponding user action command representation. Each user action command representation is mapped to a computing platform executable command, each being performed using processing resources of the computing platform, automatically, without user intervention.
    Type: Grant
    Filed: February 12, 2021
    Date of Patent: February 1, 2022
    Assignee: Software AG
    Inventor: Abhinandan Ganapati Banne
  • Publication number: 20210165671
    Abstract: Techniques for auto-executing instructions provided in a video on a computing platform are provided. A script is developed from audio provided in the video. Text shown in frames of the video is extracted. Simulated user interaction (UI) events present in the video are identified. A timeline representation is generated to include entries for elements of the script and the extracted text, and identified UI events. Like elements are collected into common entries. Each entry in the script that lacks an associated UI event but is likely to involve a user action prompt is identified. Each entry having an associated identified UI event, and each entry identified as likely to involve a user action prompt, is converted into a corresponding user action command representation. Each user action command representation is mapped to a computing platform executable command, each being performed using processing resources of the computing platform, automatically, without user intervention.
    Type: Application
    Filed: February 12, 2021
    Publication date: June 3, 2021
    Inventor: Abhinandan Ganapati BANNE
  • Patent number: 10956181
    Abstract: Techniques for auto-executing instructions provided in a video on a computing platform are provided. A script is developed from audio provided in the video. Text shown in frames of the video is extracted. Simulated user interaction (UI) events present in the video are identified. A timeline representation is generated to include entries for elements of the script and the extracted text, and identified UI events. Like elements are collected into common entries. Each entry in the script that lacks an associated UI event but is likely to involve a user action prompt is identified. Each entry having an associated identified UI event, and each entry identified as likely to involve a user action prompt, is converted into a corresponding user action command representation. Each user action command representation is mapped to a computing platform executable command, each being performed using processing resources of the computing platform, automatically, without user intervention.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: March 23, 2021
    Assignee: Software AG
    Inventor: Abhinandan Ganapati Banne
  • Publication number: 20200371818
    Abstract: Techniques for auto-executing instructions provided in a video on a computing platform are provided. A script is developed from audio provided in the video. Text shown in frames of the video is extracted. Simulated user interaction (UI) events present in the video are identified. A timeline representation is generated to include entries for elements of the script and the extracted text, and identified UI events. Like elements are collected into common entries. Each entry in the script that lacks an associated UI event but is likely to involve a user action prompt is identified. Each entry having an associated identified UI event, and each entry identified as likely to involve a user action prompt, is converted into a corresponding user action command representation. Each user action command representation is mapped to a computing platform executable command, each being performed using processing resources of the computing platform, automatically, without user intervention.
    Type: Application
    Filed: May 22, 2019
    Publication date: November 26, 2020
    Inventor: Abhinandan Ganapati BANNE
  • Patent number: 9646009
    Abstract: A method and an apparatus are provided for generating a visual representation of object timelines in a multimedia user interface. Time information associated with a moving object to be displayed is illustrated directly over the object's motion path by assigning and displaying color values on the object's timeline and displaying the corresponding colors on the motion path. The method includes presenting an object through a display operatively coupled with an electronic device, presenting a first visual indicator that relates time information associated with a motion of the object with the object's motion path, and presenting a timeline, associated with the time information, that carries a second visual indicator. A visual property of the first visual indicator matches a visual property of the second visual indicator in relation to the time information.
    Type: Grant
    Filed: May 13, 2015
    Date of Patent: May 9, 2017
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Biju Mathew Neyyan, Jaya Prakash Vanka, Praveen Krishnan, Abhinandan Ganapati Banne, Ranjith Tharayil
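The color-matching idea in the abstract above — the same color value appearing both on the timeline and at the corresponding point on the motion path, so the two indicators visually correspond — can be sketched as below. The blue-to-red gradient and the helper names are assumptions for illustration, not details from the patent.

```python
def time_to_color(t, duration):
    """Map a timestamp to an RGB color on a blue-to-red gradient."""
    f = max(0.0, min(1.0, t / duration))
    return (int(255 * f), 0, int(255 * (1 - f)))

def color_path(points, duration):
    """points: list of (t, x, y) samples of the moving object.
    Returns (x, y, color) triples for the motion path and (t, color)
    pairs for the timeline; matching entries share the same color."""
    path = [(x, y, time_to_color(t, duration)) for t, x, y in points]
    timeline = [(t, time_to_color(t, duration)) for t, x, y in points]
    return path, timeline
```

Because both indicators derive their color from the same `time_to_color` mapping, a viewer can read off when the object reaches a given point on its path directly from the shared color.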
  • Patent number: 9558467
    Abstract: Certain example embodiments relate to techniques for creating/updating a computerized model usable with an enterprise modeling platform. The computerized model is defined with a first modeling language. An image of a hand-drawn model existing on a physical substrate and following rules of a second modeling language is acquired. Multi-level image processing is performed on the image, the different levels corresponding to recognitions of (a) structures in the image corresponding to objects in the hand-drawn model, (b) object types for the identified structures, (c) text associated with the identified structures, and (d) connections between at least some of the identified structures. A digitized, iteratively-reviewed version of the hand-drawn model is generated and transformed into the computerized model using rules defining relationships between elements in the different modeling languages.
    Type: Grant
    Filed: March 2, 2016
    Date of Patent: January 31, 2017
    Assignee: Software AG
    Inventors: Katrina Simon, Thomas Winkler, Abhinandan Ganapati Banne, Viktor Tymoshenko
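The final transformation step of the pipeline in the abstract above — applying rules that relate element types in the hand-drawn (second) modeling language to elements of the target (first) modeling language — might look roughly like this. The shape names, rule table, and dictionary layout are illustrative assumptions; the multi-level image recognition itself is stubbed out as pre-recognized input.

```python
# Assumed rules relating hand-drawn shape types to element types in the
# target enterprise-modeling language (e.g., an EPC-like notation).
MAPPING_RULES = {"rounded_box": "Event", "rect": "Function", "arrow": "Connection"}

def transform(recognized):
    """recognized: list of dicts with 'shape', 'text', and 'links' produced
    by the structure/type/text/connection recognition levels. Emits model
    elements in the target modeling language."""
    model = []
    for item in recognized:
        kind = MAPPING_RULES.get(item["shape"])
        if kind is None:
            continue  # unknown structure: left for the iterative review step
        model.append({"type": kind, "label": item["text"], "links": item["links"]})
    return model
```

A structure recognized as a rectangle labeled "Check order" would thus become a `Function` element in the computerized model, with its connections carried over.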
  • Publication number: 20150379011
    Abstract: A method and an apparatus are provided for generating a visual representation of object timelines in a multimedia user interface. Time information associated with a moving object to be displayed is illustrated directly over the object's motion path by assigning and displaying color values on the object's timeline and displaying the corresponding colors on the motion path. The method includes presenting an object through a display operatively coupled with an electronic device, presenting a first visual indicator that relates time information associated with a motion of the object with the object's motion path, and presenting a timeline, associated with the time information, that carries a second visual indicator. A visual property of the first visual indicator matches a visual property of the second visual indicator in relation to the time information.
    Type: Application
    Filed: May 13, 2015
    Publication date: December 31, 2015
    Inventors: Biju Mathew NEYYAN, Jaya Prakash VANKA, Praveen KRISHNAN, Abhinandan GANAPATI BANNE, Ranjith THARAYIL