Patents by Inventor Seth John Walker

Seth John Walker has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11573764
    Abstract: A framework for generating and presenting verbal command suggestions to facilitate discoverability of commands capable of being understood and support users exploring available commands. A target associated with a direct-manipulation input is received from a user via a multimodal user interface. A set of operations relevant to the target is selected and verbal command suggestions relevant to the selected set of operations and the determined target are generated. At least a portion of the generated verbal command suggestions is provided for presentation in association with the multimodal user interface in one of three interface variants: one that presents command suggestions as a list, one that presents command suggestions using contextual overlay windows, and one that presents command suggestions embedded within the interface. Each of the proposed interface variants facilitates user awareness of verbal commands that are capable of being executed and teaches users how available verbal commands can be invoked.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: February 7, 2023
    Assignee: Adobe Inc.
    Inventors: Lubomira Dontcheva, Arjun Srinivasan, Seth John Walker, Eytan Adar
  • Publication number: 20210405964
    Abstract: A framework for generating and presenting verbal command suggestions to facilitate discoverability of commands capable of being understood and support users exploring available commands. A target associated with a direct-manipulation input is received from a user via a multimodal user interface. A set of operations relevant to the target is selected and verbal command suggestions relevant to the selected set of operations and the determined target are generated. At least a portion of the generated verbal command suggestions is provided for presentation in association with the multimodal user interface in one of three interface variants: one that presents command suggestions as a list, one that presents command suggestions using contextual overlay windows, and one that presents command suggestions embedded within the interface. Each of the proposed interface variants facilitates user awareness of verbal commands that are capable of being executed and teaches users how available verbal commands can be invoked.
    Type: Application
    Filed: September 8, 2021
    Publication date: December 30, 2021
    Inventors: Lubomira Dontcheva, Arjun Srinivasan, Seth John Walker, Eytan Adar
  • Patent number: 11132174
    Abstract: A framework for generating and presenting verbal command suggestions to facilitate discoverability of commands capable of being understood and support users exploring available commands. A target associated with a direct-manipulation input is received from a user via a multimodal user interface. A set of operations relevant to the target is selected and verbal command suggestions relevant to the selected set of operations and the determined target are generated. At least a portion of the generated verbal command suggestions is provided for presentation in association with the multimodal user interface in one of three interface variants: one that presents command suggestions as a list, one that presents command suggestions using contextual overlay windows, and one that presents command suggestions embedded within the interface. Each of the proposed interface variants facilitates user awareness of verbal commands that are capable of being executed and teaches users how available verbal commands can be invoked.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: September 28, 2021
    Assignee: Adobe Inc.
    Inventors: Lubomira Dontcheva, Arjun Srinivasan, Seth John Walker, Eytan Adar
  • Publication number: 20200293274
    Abstract: A framework for generating and presenting verbal command suggestions to facilitate discoverability of commands capable of being understood and support users exploring available commands. A target associated with a direct-manipulation input is received from a user via a multimodal user interface. A set of operations relevant to the target is selected and verbal command suggestions relevant to the selected set of operations and the determined target are generated. At least a portion of the generated verbal command suggestions is provided for presentation in association with the multimodal user interface in one of three interface variants: one that presents command suggestions as a list, one that presents command suggestions using contextual overlay windows, and one that presents command suggestions embedded within the interface. Each of the proposed interface variants facilitates user awareness of verbal commands that are capable of being executed and teaches users how available verbal commands can be invoked.
    Type: Application
    Filed: March 15, 2019
    Publication date: September 17, 2020
    Inventors: Lubomira Dontcheva, Arjun Srinivasan, Seth John Walker, Eytan Adar
  • Patent number: 10701431
    Abstract: Embodiments disclosed herein facilitate virtual reality (VR) video playback using handheld controller gestures. More specifically, jog and shuttle gestures are associated with controller rotations that can be tracked once a triggering event is detected (e.g., pressing and holding a controller play button). A corresponding jog or shuttle command can be initialized when the VR controller rotates more than a defined angular threshold in an associated rotational direction (e.g., yaw, pitch, roll). For example, the jog gesture can be associated with changes in controller yaw, and the shuttle gesture can be associated with changes in controller pitch. Subsequent controller rotations can be mapped to playback adjustments for a VR video, such as a frame adjustment for a jog gesture and a playback speed adjustment for the shuttle gesture. Corresponding visualizations of available gestures and progress bars can be generated or otherwise triggered to facilitate efficient VR video playback control.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: June 30, 2020
    Assignee: Adobe Inc.
    Inventors: Stephen Joseph DiVerdi, Seth John Walker, Brian David Williams
  • Publication number: 20190149873
    Abstract: Embodiments disclosed herein facilitate virtual reality (VR) video playback using handheld controller gestures. More specifically, jog and shuttle gestures are associated with controller rotations that can be tracked once a triggering event is detected (e.g., pressing and holding a controller play button). A corresponding jog or shuttle command can be initialized when the VR controller rotates more than a defined angular threshold in an associated rotational direction (e.g., yaw, pitch, roll). For example, the jog gesture can be associated with changes in controller yaw, and the shuttle gesture can be associated with changes in controller pitch. Subsequent controller rotations can be mapped to playback adjustments for a VR video, such as a frame adjustment for a jog gesture and a playback speed adjustment for the shuttle gesture. Corresponding visualizations of available gestures and progress bars can be generated or otherwise triggered to facilitate efficient VR video playback control.
    Type: Application
    Filed: November 16, 2017
    Publication date: May 16, 2019
    Inventors: Stephen Joseph DiVerdi, Seth John Walker, Brian David Williams
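
The first four entries describe a pipeline that maps a direct-manipulation target to relevant operations and renders verbal command suggestions in one of three interface variants (list, contextual overlay, embedded). A minimal sketch of that flow, with all names, operation registries, and rendering choices being hypothetical illustrations rather than the patented implementation:

```python
# Hypothetical operation registry: which operations are relevant to each
# kind of target a user might touch or click in the multimodal interface.
OPERATIONS_BY_TARGET_TYPE = {
    "image": ["crop", "rotate", "recolor"],
    "text": ["bold", "enlarge", "recolor"],
}

def suggest_commands(target_name, target_type, variant="list"):
    """Generate verbal command suggestions for a direct-manipulation target
    and present them in one of three interface variants."""
    # Select the set of operations relevant to the target.
    operations = OPERATIONS_BY_TARGET_TYPE.get(target_type, [])
    # Generate verbal command suggestions from operations + target.
    suggestions = [f"{op} the {target_name}" for op in operations]
    if variant == "list":
        return suggestions                      # flat suggestion-list panel
    if variant == "overlay":
        return {target_name: suggestions}       # contextual overlay window
    if variant == "embedded":
        return [f"[{s}]" for s in suggestions]  # labels embedded in the UI
    raise ValueError(f"unknown variant: {variant}")
```

The point of the three variants, per the abstract, is the same set of suggestions surfaced through different presentations so users learn which verbal commands exist and how to invoke them.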
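
The last two entries describe mapping VR controller rotations, past an angular dead zone, to playback adjustments: yaw drives a jog (frame stepping) and pitch drives a shuttle (playback-speed scaling). A sketch of that mapping, where the threshold and sensitivity constants are assumed values for illustration, not figures from the patent:

```python
ANGULAR_THRESHOLD_DEG = 10.0  # assumed dead zone before a gesture engages
FRAMES_PER_DEGREE = 0.5       # assumed jog sensitivity
SPEED_PER_DEGREE = 0.1        # assumed shuttle sensitivity

def playback_adjustment(yaw_delta_deg, pitch_delta_deg):
    """Map controller rotation deltas (degrees since the triggering event,
    e.g. holding the play button) to a (frame_step, speed_multiplier) pair."""
    frame_step = 0
    speed = 1.0
    if abs(yaw_delta_deg) > ANGULAR_THRESHOLD_DEG:
        # Jog: yaw beyond the threshold steps frames forward or backward.
        sign = 1 if yaw_delta_deg > 0 else -1
        excess = yaw_delta_deg - sign * ANGULAR_THRESHOLD_DEG
        frame_step = int(excess * FRAMES_PER_DEGREE)
    if abs(pitch_delta_deg) > ANGULAR_THRESHOLD_DEG:
        # Shuttle: pitch beyond the threshold scales playback speed.
        sign = 1 if pitch_delta_deg > 0 else -1
        excess = pitch_delta_deg - sign * ANGULAR_THRESHOLD_DEG
        speed = max(0.0, 1.0 + excess * SPEED_PER_DEGREE)
    return frame_step, speed
```

The dead zone keeps small, unintentional controller motion from triggering either gesture, and tracking only begins once the triggering event is detected, matching the abstract's description.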