Patents by Inventor Jason Thomas Faulkner

Jason Thomas Faulkner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10863136
    Abstract: Systems and methods for hosting a teleconference session are presented. One or more streams are received from a plurality of client computing devices at a server. The streams are combined to generate teleconference data. The teleconference data may be configured to display a first user interface arrangement in which a primary stream display area dominates a display with a secondary stream display area overlaid on the primary stream display area. The secondary stream display area may disappear after a period of time. The teleconference data may also be configured to display a second user interface arrangement in which the primary stream display area and secondary stream display area are displayed concurrently. A view control switch may be triggered to switch between the first and second user interface arrangement views.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: December 8, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ruchir Astavans, Kevin D. Morrison, Jason Thomas Faulkner
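
    The view-switch behavior described in this first abstract can be pictured with a small state model. The sketch below is illustrative only and not drawn from the patent itself; the type names, the toggle function, and the auto-hide timeout are assumptions chosen for clarity (TypeScript).

        // Illustrative sketch only; names and the timeout value are assumptions, not the patented implementation.
        type ViewArrangement = "overlay" | "concurrent";

        interface TeleconferenceView {
          arrangement: ViewArrangement;
          secondaryVisible: boolean; // in overlay mode the secondary area may disappear after a delay
        }

        // Toggle between the two user interface arrangements when the view control switch is triggered.
        function toggleArrangement(view: TeleconferenceView): TeleconferenceView {
          return view.arrangement === "overlay"
            ? { arrangement: "concurrent", secondaryVisible: true }
            : { arrangement: "overlay", secondaryVisible: true };
        }

        // In overlay mode, hide the secondary stream display area after a period of time.
        function scheduleSecondaryHide(view: TeleconferenceView, hideAfterMs = 5000): void {
          if (view.arrangement === "overlay") {
            setTimeout(() => { view.secondaryVisible = false; }, hideAfterMs);
          }
        }
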
  • Publication number: 20200382618
    Abstract: The techniques disclosed herein provide ways to manage events for communication sessions. Content data comprising a plurality of content items associated with a plurality of communication sessions may be received, and a primary stream comprising a plurality of time slots may be generated for each communication session. Selected content items are inserted in the time slots for broadcast to user devices of the communication sessions. Secondary or suggestive streams are generated that contain candidate content items that are insertable into the primary streams. A user interface renders a graphical representation of the primary streams concurrently with a graphical representation of the secondary streams. In response to a user input, the primary streams are modified by inserting selected content items into selected time slots of the primary streams.
    Type: Application
    Filed: May 31, 2019
    Publication date: December 3, 2020
    Inventors: Jason Thomas FAULKNER, Ashwin M. APPIAH, Joshua GEORGE
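
    As a rough illustration of the time-slot model in the abstract above, the following sketch shows one way the primary and secondary streams might be represented; the data shapes and the insert function are assumptions for illustration, not the published claims.

        // Illustrative data shapes; field names are assumptions.
        interface ContentItem { id: string; title: string; }

        interface TimeSlot { startMs: number; durationMs: number; item?: ContentItem; }

        interface PrimaryStream { sessionId: string; slots: TimeSlot[]; }

        interface SecondaryStream { sessionId: string; candidates: ContentItem[]; }

        // Insert a selected candidate item into a selected time slot of a primary stream.
        function insertCandidate(primary: PrimaryStream, secondary: SecondaryStream,
                                 slotIndex: number, itemId: string): void {
          const item = secondary.candidates.find(c => c.id === itemId);
          if (item && primary.slots[slotIndex]) {
            primary.slots[slotIndex].item = item;
          }
        }
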
  • Publication number: 20200371673
    Abstract: The techniques disclosed herein provide improvements over existing systems by allowing users to efficiently modify an arrangement of a user interface of a communication session by the use of an eye gaze gesture. An eye gaze gesture input can be utilized to focus on particular aspects of shared content. In addition, an eye gaze gesture can be utilized to configure an arrangement of a user interface displaying multiple streams of shared content of a communication session. A focused view of shared content and customized user interface layouts can be shared with specific individuals based on roles and/or permissions. In addition, the disclosed techniques can also select and display unique user interface controls based on an eye gaze gesture. In one illustrative example, a specific set of functionality can be made available to a user based on a type of an object that is selected using an eye gaze gesture.
    Type: Application
    Filed: May 22, 2019
    Publication date: November 26, 2020
    Inventor: Jason Thomas FAULKNER
  • Publication number: 20200371677
    Abstract: The techniques disclosed herein improve existing computing systems by providing consistent interaction models during communication sessions. A system configured according to the disclosure presented herein can improve user engagement during communication sessions and conserve computing resources by enabling users to define arrangements of display areas in a user interface (UI) for presenting content during a communication session and to utilize the same pre-defined arrangement during multiple communication sessions. The arrangement can be presented to all or some of the participants in a communication session. By providing a consistent arrangement of display areas that render content to participants in communication sessions, the participants can be more engaged and productive, thereby improving human-computer interaction and conserving computing resources.
    Type: Application
    Filed: May 20, 2019
    Publication date: November 26, 2020
    Inventors: Jason Thomas FAULKNER, Marek CAIS
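
    One way to picture the reusable arrangement described in the entry above is a saved layout template keyed to a user and applied to any session. This is a minimal sketch under assumed names and fields, not the claimed implementation.

        // Minimal sketch; the template fields and sharing rule are assumptions.
        interface DisplayArea { id: string; x: number; y: number; width: number; height: number; }

        interface LayoutTemplate {
          ownerId: string;
          name: string;
          areas: DisplayArea[];              // pre-defined arrangement of display areas
          shareWithAllParticipants: boolean; // present to all or only some participants
        }

        // Apply the same pre-defined arrangement to a new communication session.
        function applyTemplate(template: LayoutTemplate, sessionId: string) {
          return { sessionId, areas: template.areas.map(a => ({ ...a })) };
        }
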
  • Publication number: 20200374146
    Abstract: The techniques disclosed herein improve existing systems by automatically generating summaries of shared content based on a contextual analysis of a user's engagement with an event. User activity data from a number of sensors and other contextual data, such as scheduling data and communication data, can be analyzed to determine a user's level of engagement with an event. A system can automatically generate a summary of any shared content the user may have missed during a time period that the user was not engaged with the event. For example, if a user becomes distracted or is otherwise unavailable during a presentation, the system can provide a summary of salient portions of content that was shared during the time of the user's inattentive status, such as, but not limited to, key topics, tasks, shared files, an excerpt of a transcript of a presentation, or any salient sections of a shared video.
    Type: Application
    Filed: May 24, 2019
    Publication date: November 26, 2020
    Inventors: Shalendra CHHABRA, Eric R. SEXAUER, Jason Thomas FAULKNER
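
    The core idea in the preceding entry, summarizing content shared while the user was not engaged, can be sketched as a gap-finding step over engagement intervals. Everything below (interval shapes, priority threshold, selection logic) is assumed for illustration.

        // Illustrative only; interval shapes and the selection rule are assumptions.
        interface Interval { startMs: number; endMs: number; }

        interface SharedItem { timestampMs: number; description: string; priority: number; }

        // Collect higher-priority items shared during periods when the user was not engaged.
        function summarizeMissedContent(engaged: Interval[], shared: SharedItem[],
                                        minPriority = 1): string[] {
          const wasEngaged = (t: number) =>
            engaged.some(iv => t >= iv.startMs && t <= iv.endMs);
          return shared
            .filter(item => !wasEngaged(item.timestampMs) && item.priority >= minPriority)
            .map(item => item.description);
        }
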
  • Patent number: 10841112
    Abstract: Described herein is a system that generates and displays a timeline for communication content. The system determines events that occur in association with the communication content (e.g., a video conference, a chat or messaging conversation, etc.). The system adds a representation of an event to the timeline in association with a time at which the event occurs. Moreover, the system enables user interaction with the representation so that the user can view information associated with an event.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: November 17, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jason Thomas Faulkner, Jose Rodriguez, Casey Baker, Sonu Arora, Christopher Welsh, Kevin D. Morrison
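
    A minimal sketch of the timeline structure this entry describes, an event representation keyed to the time at which the event occurs, follows; the type names and event kinds are assumptions.

        // Sketch only; event kinds and fields are assumptions.
        interface TimelineEvent {
          atMs: number;                 // time at which the event occurs
          kind: "file-shared" | "participant-joined" | "note";
          label: string;
        }

        interface Timeline { sessionId: string; events: TimelineEvent[]; }

        // Add a representation of an event to the timeline in association with its time.
        function addEvent(timeline: Timeline, event: TimelineEvent): void {
          timeline.events.push(event);
          timeline.events.sort((a, b) => a.atMs - b.atMs);
        }
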
  • Publication number: 20200293975
    Abstract: The techniques disclosed herein improve existing systems by automatically identifying tasks from a number of different types of user activity and providing suggestions for the tasks to one or more selected delivery mechanisms. A system compiles the tasks and pushes each task to a personalized task list of a user. The delivery of each task may be based on any suitable user activity, which may include communication between one or more users or a user's interaction with a particular file or a system. The system can identify timelines, performance parameters, and other related contextual data associated with the task. The system can identify a delivery schedule for the task to optimize the effectiveness of the delivery of the task. The system can also provide smart notifications. When a task conflicts with a person's calendar, the system can resolve scheduling conflicts based on priorities of a calendar event.
    Type: Application
    Filed: March 15, 2019
    Publication date: September 17, 2020
    Inventors: Jason Thomas FAULKNER, Eric Randall SEXAUER, Shalendra CHHABRA
  • Publication number: 20200293618
    Abstract: The techniques provided herein improve existing systems by automatically generating summaries of a document in response to a user input that defines selected segments of the document. The document can include any type of content such as, but not limited to, channel conversations, chat threads, transcripts, word processing documents, spreadsheets, etc. As the user indicates a selection of segments, a system can dynamically update a summary of the segments to inform a user of salient information that is shared in the selected segments. A summary can include a text description of the information having a threshold priority level. A system can analyze documents that are referenced within the selected segments and provide summaries of the documents. The techniques disclosed herein also provide a number of graphical elements that communicate additional context of each part of the summary.
    Type: Application
    Filed: March 15, 2019
    Publication date: September 17, 2020
    Inventors: Shalendra CHHABRA, Eric Randall SEXAUER, Jason Thomas FAULKNER
  • Patent number: 10776933
    Abstract: This disclosure provides enhanced techniques for tracking the movement of real-world objects for improved display of virtual objects that are associated with the real-world objects. A first device can track the position of a real-world object. When the real-world object moves out of a viewing area of the first device, a second device can use metadata defining physical characteristics of the real-world object shared by the first device to identify the real-world object as the real-world object comes into a viewing area of the second device. The second device can then maintain an association between the real-world object and the virtual objects as the real-world object moves, and share such information with other computers to enable the other computers to display the virtual objects in association with the real-world object even though they are not in direct view of an associated real-world object.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: September 15, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Jason Thomas Faulkner
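
    The handoff described in the entry above, where a second device recognizes an object from physical characteristics shared by the first device, might be sketched as a metadata-matching step. The similarity heuristic and tolerance below are assumptions, not the patented method.

        // Illustrative sketch; the matching heuristic is an assumption.
        interface PhysicalMetadata { widthM: number; heightM: number; color: string; }

        interface TrackedObject {
          objectId: string;
          metadata: PhysicalMetadata;
          virtualObjectIds: string[];   // virtual objects anchored to this real-world object
        }

        // Decide whether an object observed by the second device matches the shared metadata.
        function matchesSharedObject(observed: PhysicalMetadata, shared: TrackedObject,
                                     tolerance = 0.05): boolean {
          return Math.abs(observed.widthM - shared.metadata.widthM) <= tolerance &&
                 Math.abs(observed.heightM - shared.metadata.heightM) <= tolerance &&
                 observed.color === shared.metadata.color;
        }
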
  • Patent number: 10754526
    Abstract: A tool for interacting with a rendered environment is configured to render a representation of a real-world environment and receive input data indicative of a position for a zoom window to be placed within the representation. The zoom window is rendered having a size that is determined based on one or more criteria. A magnified view of a portion of the representation is rendered that is proximate to the position of the zoom window. Input data is received that is indicative of a first gesture indicative of a new position for the zoom window. The zoom window is repositioned on the UI and the size of the zoom window is maintained during the repositioning. Within the zoom window, a magnified view of a portion of the representation is rendered that is proximate to the new position of the zoom window. The zoom window is movable to any rendered portion of the representation.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: August 25, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jason Thomas Faulkner, Yingying Geng, Casey Baker
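
    The repositioning behavior in the entry above can be pictured as updating the zoom window's position while leaving its size untouched; a minimal sketch with assumed names and clamping behavior.

        // Sketch only; names and the clamping rule are assumptions.
        interface ZoomWindow { x: number; y: number; width: number; height: number; }

        // Reposition the zoom window; its size is maintained during the move.
        function repositionZoomWindow(win: ZoomWindow, newX: number, newY: number,
                                      viewWidth: number, viewHeight: number): ZoomWindow {
          return {
            ...win,
            x: Math.min(Math.max(newX, 0), viewWidth - win.width),
            y: Math.min(Math.max(newY, 0), viewHeight - win.height),
          };
        }
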
  • Publication number: 20200228473
    Abstract: In a device including a processor and a memory, the memory includes executable instructions causing the processor to control the device to perform functions of displaying, via a GUI of a first communication application, content of a first communication session associated with the first communication application; detecting an activity related to a second communication session associated with a second communication application; displaying, as a part of the GUI of the first communication application, an indication of the detected activity and a first control element that, when activated, causes a user of the device to join the second communication session; receiving a first user input to activate the first control element; and, responsive to the received first user input, causing the user of the device to join and participate in the second communication session, via the GUI of the first communication application, concurrently with the first communication session.
    Type: Application
    Filed: March 25, 2020
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jason Thomas Faulkner, Casey James Baker
  • Publication number: 20200211408
    Abstract: A computer implemented agent may analyze live or stored communication session data for questions posed in a collaborative meeting environment, such as a communication session. The agent may also analyze the live or stored communication session data for answers provided in the collaborative meeting environment. The agent may be functional to create a data object that includes one or more questions posed and answers provided in the collaborative meeting environment. The data object may be disseminated to different types of platforms, such as computer applications and websites, that include content contextually relevant to the questions and answers included in the data object.
    Type: Application
    Filed: December 26, 2018
    Publication date: July 2, 2020
    Inventors: Jason Thomas FAULKNER, Eric R. SEXAUER, Tiphanie LAU
  • Publication number: 20200201521
    Abstract: A tool for interacting with a rendered environment is configured to render a representation of a real-world environment and receive input data indicative of a position for a zoom window to be placed within the representation. The zoom window is rendered having a size that is determined based on one or more criteria. A magnified view of a portion of the representation is rendered that is proximate to the position of the zoom window. Input data is received that is indicative of a first gesture indicative of a new position for the zoom window. The zoom window is repositioned on the UI and the size of the zoom window is maintained during the repositioning. Within the zoom window, a magnified view of a portion of the representation is rendered that is proximate to the new position of the zoom window. The zoom window is movable to any rendered portion of the representation.
    Type: Application
    Filed: December 20, 2018
    Publication date: June 25, 2020
    Inventors: Jason Thomas FAULKNER, Yingying GENG, Casey BAKER
  • Publication number: 20200201522
    Abstract: A tool for interacting with a rendered environment is configured to render a representation of a real-world environment. Input data is received that is indicative of a position for a zoom window to be placed within the representation. The zoom window is rendered at the position within the representation and has a size that is determined based on one or more criteria. Within the zoom window, a magnified view of a portion of the representation is rendered that is proximate to the position of the zoom window. Input data is received that is indicative of a first gesture applied to the zoom window and is indicative of a resizing of the zoom window. The zoom window is resized on the UI in accordance with the first gesture, and a scale of the magnified view within the zoom window is maintained as the zoom window is resized. Input data is received that is indicative of a second gesture applied to the zoom window and indicative of a change to a zoom scale for content within the zoom window.
    Type: Application
    Filed: December 20, 2018
    Publication date: June 25, 2020
    Inventors: Jason Thomas FAULKNER, Yingying GENG, Casey BAKER
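
    In contrast to the repositioning sketch earlier, the entry above resizes the window while the magnification scale inside it is maintained, and changes the scale only on a separate gesture. The sketch below treats scale as a field independent of window size, which is an assumption made purely for illustration.

        // Sketch only; treating scale as independent of window size is an assumption.
        interface ResizableZoomWindow { width: number; height: number; zoomScale: number; }

        // Resize the zoom window per the first gesture; the magnified view's scale is unchanged.
        function resizeZoomWindow(win: ResizableZoomWindow, newWidth: number,
                                  newHeight: number): ResizableZoomWindow {
          return { ...win, width: newWidth, height: newHeight };
        }

        // Apply the second gesture: change the zoom scale for content within the window.
        function setZoomScale(win: ResizableZoomWindow, scale: number): ResizableZoomWindow {
          return { ...win, zoomScale: scale };
        }
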
  • Publication number: 20200202634
    Abstract: The techniques disclosed herein improve the efficiency of a system by providing intelligent management of content that is associated with objects displayed within communication sessions. The participants can generate a content object associated with a 3D object. The content object may be in the form of a 3D virtual object, such as an arrow pointing to the table, a text box of an annotation, etc. The content object may also include functional features that collect and display information, such as a voting agent. The system can generate a data structure that associates the object with the content object. The data structure enables a system to maintain an association between the object and the content object when various operations are applied to either object. Thus, if a remote computer sends a request for the content object, the associated object is delivered with the content object.
    Type: Application
    Filed: December 20, 2018
    Publication date: June 25, 2020
    Inventors: Jason Thomas FAULKNER, Sandhya RAO
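
    The association the preceding abstract describes, keeping a displayed object and its content objects linked so they are delivered together, can be sketched as a simple map; all names below are assumed for illustration.

        // Illustrative data structure; not the claimed implementation.
        interface ContentObject { id: string; kind: "annotation" | "arrow" | "poll"; payload: string; }

        // Map from a displayed object's id to the content objects associated with it.
        const associations = new Map<string, ContentObject[]>();

        function associate(objectId: string, content: ContentObject): void {
          const list = associations.get(objectId) ?? [];
          list.push(content);
          associations.set(objectId, list);
        }

        // When a remote computer requests the content, deliver the associated object id with it.
        function fetchWithAssociation(objectId: string) {
          return { objectId, contentObjects: associations.get(objectId) ?? [] };
        }
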
  • Publication number: 20200201512
    Abstract: A tool for interacting with a rendered environment is configured to render a representation of a real-world environment. First input data is received that is indicative of a position for a zoom window to be placed within the representation. The zoom window is rendered and a magnified view of a portion of the representation is rendered that is proximate to the position of the zoom window. Second input data is received that is indicative of an interaction with the zoom window. An editing pane is rendered that includes a representation of content of the zoom window and selectable options for actions to be applied to the content. Third input data is received that is indicative of a selection of one of the selectable options, and an editing action is performed on the content.
    Type: Application
    Filed: December 20, 2018
    Publication date: June 25, 2020
    Inventors: Jason Thomas FAULKNER, Yingying GENG, Casey BAKER
  • Publication number: 20200186375
    Abstract: The techniques disclosed herein provide dynamic curation of sequence events for communication sessions. A system can utilize smart filtering techniques to generate and select sequence events that are designed to optimize user engagement. The system can collect contextual data associated with a communication session, which can be in the form of a private chat session, a multi-user editing session, a group meeting, a live broadcast, etc. The system can utilize the contextual data, and other input data defining user activity, to customize sequence events defining contextually-relevant user interface (UI) layouts, volume levels, camera angles, special effects, and other parameters controlling aspects of the communication session.
    Type: Application
    Filed: December 10, 2018
    Publication date: June 11, 2020
    Inventor: Jason Thomas FAULKNER
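
    A sequence event of the kind described in the entry above might bundle the presentation parameters it controls. The fields below are assumptions chosen to mirror the abstract, not the claimed data model.

        // Sketch only; parameter names are assumptions.
        interface SequenceEvent {
          atMs: number;
          uiLayout: string;       // contextually-relevant UI layout to apply
          volumeLevel: number;    // 0.0 - 1.0
          cameraAngle?: string;
          specialEffect?: string;
        }

        // Select the sequence event that should be active at a given time in the session.
        function activeEvent(events: SequenceEvent[], nowMs: number): SequenceEvent | undefined {
          return [...events]
            .filter(e => e.atMs <= nowMs)
            .sort((a, b) => b.atMs - a.atMs)[0];
        }
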
  • Publication number: 20200184217
    Abstract: The techniques disclosed herein improve the efficiency of a system by providing intelligent agents for managing data associated with objects that are displayed within mixed-reality and virtual-reality collaboration environments. Individual agents are configured to collect, analyze, and store data associated with individual objects in a shared view. The agents can identify real-world objects and virtual objects discussed in a meeting, collect information about each object and generate recommendations for each object based on the collected information. The recommendations can suggest modifications to the objects, provide resources for obtaining or modifying the objects, and provide actionable information allowing users to reach a consensus regarding an object. The data can be shared between different communication sessions without requiring users to manually store and present a collection of content for each object.
    Type: Application
    Filed: December 7, 2018
    Publication date: June 11, 2020
    Inventor: Jason Thomas FAULKNER
  • Publication number: 20200184653
    Abstract: This disclosure provides enhanced techniques for tracking the movement of real-world objects for improved display of virtual objects that are associated with the real-world objects. A first device can track the position of a real-world object. When the real-world object moves out of a viewing area of the first device, a second device can use metadata defining physical characteristics of the real-world object shared by the first device to identify the real-world object as the real-world object comes into a viewing area of the second device. The second device can then maintain an association between the real-world object and the virtual objects as the real-world object moves, and share such information with other computers to enable the other computers to display the virtual objects in association with the real-world object even though they are not in direct view of an associated real-world object.
    Type: Application
    Filed: December 6, 2018
    Publication date: June 11, 2020
    Inventor: Jason Thomas FAULKNER
  • Patent number: 10630740
    Abstract: Described herein is a system that generates and displays an interactive timeline for a teleconference session, where the interactive timeline includes a representation of supplemental recorded content that has been added after a live viewing of the teleconference session has ended. The system can inject the supplemental recorded content into previously recorded content or append the supplemental recorded content to the interactive timeline. Moreover, the system can cause the supplemental recorded content to subsequently be displayed in one of multiple different views. Furthermore, the system can generate and/or distribute a notification of the supplemental recorded content so that participants to the teleconference session can be made aware of additional activity contributed to the teleconference session (e.g., by someone who missed the live viewing of the teleconference session).
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: April 21, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jose A. Rodriguez, Jason Thomas Faulkner, Casey Baker, Sonu Arora, Christopher Welsh, Kevin D. Morrison
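
    The inject-or-append behavior in this last entry can be sketched against a recorded-timeline shape similar to the one used earlier; whether a supplemental segment is injected at its original time or appended after the existing recording is decided here by a flag, purely as an assumption for illustration.

        // Illustrative sketch; the segment shape and the inject/append rule are assumptions.
        interface RecordedSegment { atMs: number; durationMs: number; label: string; }

        interface RecordedTimeline { segments: RecordedSegment[]; endMs: number; }

        // Inject supplemental content at its recorded time, or append it after the existing recording.
        function addSupplementalContent(timeline: RecordedTimeline, segment: RecordedSegment,
                                        append: boolean): void {
          if (append) {
            timeline.segments.push({ ...segment, atMs: timeline.endMs });
          } else {
            timeline.segments.push(segment);
            timeline.segments.sort((a, b) => a.atMs - b.atMs);
          }
          timeline.endMs = Math.max(timeline.endMs, segment.atMs + segment.durationMs);
        }
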