Patents by Inventor Jason Glenn Silvis

Jason Glenn Silvis has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240118803
Abstract: A method of and system for automatically generating an ink note object is carried out by detecting receipt of a digital ink input on a user interface (UI) screen, the UI screen being displayed by an application and being associated with at least one of a document, a page, or an event. Once the digital ink input is detected, the digital ink input is captured. Additionally, contextual data associated with the digital ink input is collected, the contextual data being related to at least one of the document, the page, the event, and a user providing the digital ink input. An ink note object is then generated and stored for the digital ink input, the ink note object including the captured digital ink input and the contextual data, and the ink note object being an entity that is separate from the document, the page, and the event.
    Type: Application
    Filed: October 7, 2022
    Publication date: April 11, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Fnu PRIMADONA, Sivaramakrishna MOPATI, Jason Glenn SILVIS
  • Publication number: 20240103646
Abstract: Systems and methods are provided for interactively highlighting a region as pixel data on a screen and automatically retrieving context data associated with content of the highlighted region for contextual notetaking. The highlighted region includes at least a part of one or more windows and one or more applications associated with the one or more windows. The disclosed technology determines a context associated with content of the highlighted region and automatically retrieves context data that are contextually relevant to the content. Notes data are generated based on an aggregate of the highlighted content, window-specific context data, application-specific context data, and user-specific context data. A notetaking application retrieves the stored notes data from a notes database and displays the notes data for recall and for use. The contextual notetaking enables the user to reduce the burden of performing manual notetaking operations and to utilize notes that are enriched with contextually relevant data.
    Type: Application
    Filed: September 22, 2022
    Publication date: March 28, 2024
    Applicant: Microsoft Technology Licensing, LLC
Inventors: Fnu PRIMADONA, Sivaramakrishna MOPATI, Jason Glenn SILVIS
  • Patent number: 11750669
    Abstract: A personalized contextual connection management system can identify and store contextual connections for shared items associated with particular users of a multi-user system, for example, in response to requests associated with certain tasks to user services in which a shared item is referenced. When a particular user requests to view the shared item, the personalized contextual connection management system obtains that user's personalized context including the contextual connections associated with that user for that shared item and provides that user's personalized context for display with the shared item.
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: September 5, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David Jon Conger, Jason Glenn Silvis, Keith John Symons, Simon Wai-Han Chan
  • Patent number: 10955938
    Abstract: Electronic devices, interfaces for electronic devices, and techniques for interacting with such interfaces and electronic devices are described. For instance, this disclosure describes an example electronic device that includes sensors, such as multiple front-facing cameras to detect orientation and/or location of the electronic device relative to an object and one or more inertial sensors. Users of the device may perform gestures on the device by moving the device in-air and/or by moving their head, face, or eyes relative to the device. In response to these gestures, the device may perform operations.
    Type: Grant
    Filed: September 29, 2014
    Date of Patent: March 23, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Aaron Michael Donsbach, Timothy Thomas Gray, Brian Peter Kralyevich, Jason Phillip Kriese, Richard Leigh Mains, Jae Pum Park, Sean Anthony Rooney, Jason Glenn Silvis
  • Patent number: 10545624
Abstract: Relevant content (e.g., containers and/or container elements) can be surfaced via user interfaces based at least partly on interactions between user(s), container(s), and/or container element(s). Techniques described herein include generating a user interface configured with functionality to present content to a user. The user interface can include interface elements, such as cards, corresponding to containers. The cards can be arranged on the user interface in an order determined based at least partly on respective relevancies of the containers to the user, and a presentation of individual cards can be based at least partly on a type of the corresponding individual containers. Individual cards can include a group of one or more interface elements corresponding to container elements that can be arranged based at least partly on respective relevancies of the container elements to the user.
    Type: Grant
    Filed: March 21, 2016
    Date of Patent: January 28, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dmitriy Meyerzon, David M. Cohen, Adam Ford, Andrew C. Haon, Ryan Nakhoul, Jason Glenn Silvis, Vidya Srinivasan, Denise Trabona
  • Patent number: 10394410
    Abstract: Electronic devices, interfaces for electronic devices, and techniques for interacting with such interfaces and electronic devices are described. For instance, this disclosure describes an example electronic device that includes sensors, such as multiple front-facing cameras to detect orientation and/or location of the electronic device relative to an object and one or more inertial sensors. Users of the device may perform gestures on the device by moving the device in-air and/or by moving their head, face, or eyes relative to the device. In response to these gestures, the device may perform operations.
    Type: Grant
    Filed: May 9, 2014
    Date of Patent: August 27, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Bryan Todd Agnetta, Aaron Michael Donsbach, Catherine Ann Hendricks, Brian Peter Kralyevich, Richard Leigh Mains, Jae Pum Park, Sean Anthony Rooney, Marc Anthony Salazar, Jason Glenn Silvis, Nino Yuniardi
  • Patent number: 10114519
Abstract: Presenting contextual content corresponding to interactions associated with microenvironments of user interfaces is described. In an example, the contextual content can be presented via a user interface that includes a plurality of regions. Each region can have localized functionalities that are distinct from, but related to, a global functionality of the user interface. Each region can include one or more elements. The techniques described herein include receiving input indicating an interaction with an element associated with a region. Additionally, the techniques described herein include presenting, based at least partly on the interaction, a notification associated with the interaction in an orientation and/or a style that is based on the element, a container corresponding to the element, and a type of the notification. In an example, the notification can be presented within the region or proximate to the region such that it appears at a current focus area of the user interface.
    Type: Grant
    Filed: May 3, 2016
    Date of Patent: October 30, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jason Glenn Silvis, Ryan Nakhoul, Vidya Srinivasan
  • Patent number: 9817566
    Abstract: Various embodiments provide a graphical interface element displayable on a touch screen of a computing device associated with a function to be performed. In response to receiving a touch input from a finger of a user to an area associated with the graphical interface element, such as a tap, the function is performed. In response to receiving a touch input to the area associated with the user selectable element followed by a swipe, drag, or movement of the user selectable element with a finger of the user, the user is able to control an amount of the function to be performed.
    Type: Grant
    Filed: December 5, 2012
    Date of Patent: November 14, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Jason Glenn Silvis, Shweta Dattatraya Grampurohit
  • Publication number: 20170322678
Abstract: Presenting contextual content corresponding to interactions associated with microenvironments of user interfaces is described. In an example, the contextual content can be presented via a user interface that includes a plurality of regions. Each region can have localized functionalities that are distinct from, but related to, a global functionality of the user interface. Each region can include one or more elements. The techniques described herein include receiving input indicating an interaction with an element associated with a region. Additionally, the techniques described herein include presenting, based at least partly on the interaction, a notification associated with the interaction in an orientation and/or a style that is based on the element, a container corresponding to the element, and a type of the notification. In an example, the notification can be presented within the region or proximate to the region such that it appears at a current focus area of the user interface.
    Type: Application
    Filed: May 3, 2016
    Publication date: November 9, 2017
    Inventors: Jason Glenn Silvis, Ryan Nakhoul, Vidya Srinivasan
  • Publication number: 20170269791
Abstract: Relevant content (e.g., containers and/or container elements) can be surfaced via user interfaces based at least partly on interactions between user(s), container(s), and/or container element(s). Techniques described herein include generating a user interface configured with functionality to present content to a user. The user interface can include interface elements, such as cards, corresponding to containers. The cards can be arranged on the user interface in an order determined based at least partly on respective relevancies of the containers to the user, and a presentation of individual cards can be based at least partly on a type of the corresponding individual containers. Individual cards can include a group of one or more interface elements corresponding to container elements that can be arranged based at least partly on respective relevancies of the container elements to the user.
    Type: Application
    Filed: March 21, 2016
    Publication date: September 21, 2017
    Inventors: Dmitriy Meyerzon, David M. Cohen, Adam Ford, Andrew C. Haon, Ryan Nakhoul, Jason Glenn Silvis, Vidya Srinivasan, Denise Trabona
  • Publication number: 20140337791
    Abstract: Electronic devices, interfaces for electronic devices, and techniques for interacting with such interfaces and electronic devices are described. For instance, this disclosure describes an example electronic device that includes sensors, such as multiple front-facing cameras to detect orientation and/or location of the electronic device relative to an object and one or more inertial sensors. Users of the device may perform gestures on the device by moving the device in-air and/or by moving their head, face, or eyes relative to the device. In response to these gestures, the device may perform operations.
    Type: Application
    Filed: May 9, 2014
    Publication date: November 13, 2014
    Applicant: Amazon Technologies, Inc.
    Inventors: Bryan Todd Agnetta, Aaron Michael Donsbach, Catherine Ann Hendricks, Brian Peter Kralyevich, Richard Leigh Mains, Jae Pum Park, Sean Anthony Rooney, Marc Anthony Salazar, Jason Glenn Silvis, Nino Yuniardi