Patents by Inventor Seth Walker

Seth Walker has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220076023
    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies the smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments.
    Type: Application
    Filed: September 10, 2020
    Publication date: March 10, 2022
    Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popovic, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
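
The boundary-merging step this abstract describes can be pictured with a short sketch. The Python below is illustrative only: the boundary values, function name, and merge tolerance are assumptions, not the patented implementation.

```python
# Illustrative sketch only: merge independently detected speech and scene
# boundaries into "clip atoms", the finest level of a hierarchical
# segmentation. Boundary values and the tolerance are made up.

def make_clip_atoms(speech_bounds, scene_bounds, duration, tol=0.25):
    """Combine boundary sets (seconds) into disjoint segments covering [0, duration]."""
    bounds = sorted(set(speech_bounds) | set(scene_bounds) | {0.0, duration})
    # Collapse boundaries closer than `tol` seconds to avoid degenerate atoms.
    merged = [bounds[0]]
    for b in bounds[1:]:
        if b - merged[-1] >= tol:
            merged.append(b)
    merged[-1] = duration  # ensure the atoms cover the entire video
    return list(zip(merged[:-1], merged[1:]))

atoms = make_clip_atoms(
    speech_bounds=[1.2, 3.8, 7.5], scene_bounds=[3.9, 10.0], duration=12.0)
print(atoms)  # disjoint (start, end) pairs covering the whole video
```
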
  • Publication number: 20220075513
    Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a video timeline. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline. For example, a click or tap on a video segment or a drag operation dragging along the timeline snaps selection boundaries to corresponding segment boundaries defined by the level. Navigating to a different level of the hierarchy transforms the selection into coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
    Type: Application
    Filed: September 10, 2020
    Publication date: March 10, 2022
    Inventors: Seth Walker, Joy Oakyung Kim, Aseem Agarwala, Joel R. Brandt, Jovan Popovic, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
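
The snapping behavior described above (a drag on the timeline snaps selection boundaries to the segment boundaries of the active level) can be sketched with standard-library bisection. This is a hypothetical illustration; the data layout and function name are assumptions.

```python
# Hypothetical sketch of snapping a dragged selection outward to the
# enclosing segment boundaries at one level of a hierarchical segmentation.
import bisect

def snap_selection(drag_start, drag_end, boundaries):
    """Snap raw drag times (seconds) outward to enclosing segment boundaries."""
    lo, hi = sorted((drag_start, drag_end))
    # Largest boundary <= lo and smallest boundary >= hi.
    start = boundaries[max(bisect.bisect_right(boundaries, lo) - 1, 0)]
    end = boundaries[min(bisect.bisect_left(boundaries, hi), len(boundaries) - 1)]
    return start, end

level_boundaries = [0.0, 2.5, 6.0, 9.0, 12.0]      # one level of the hierarchy
print(snap_selection(3.1, 7.2, level_boundaries))  # -> (2.5, 9.0)
```

Switching to a coarser or finer level and re-snapping against that level's boundaries gives the selection transformation the abstract mentions.
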
  • Publication number: 20220076706
    Abstract: Embodiments are directed to interactive tiles that represent video segments of a segmentation of a video. In some embodiments, each interactive tile represents a different video segment from a particular video segmentation (e.g., a default video segmentation). Each interactive tile includes a thumbnail (e.g., the first frame of the video segment represented by the tile), a snippet of the transcript from the beginning of the video segment, a visualization of detected faces in the video segment, and one or more faceted timelines that visualize a category of detected features (e.g., detected visual scenes, audio classifications, or visual artifacts). In some embodiments, interacting with a particular interactive tile navigates to a corresponding portion of the video, adds a corresponding video segment to a selection, and/or scrubs through tile thumbnails.
    Type: Application
    Filed: May 26, 2021
    Publication date: March 10, 2022
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popovic, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
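
The per-tile contents the abstract enumerates map naturally onto a small record type. The following sketch is an assumption about how such a tile might be represented; the field names are invented for illustration.

```python
# A sketch of the per-tile record described above: thumbnail, leading
# transcript text, detected faces, and faceted feature timelines.
from dataclasses import dataclass, field

@dataclass
class InteractiveTile:
    start: float                 # segment start time (seconds)
    end: float                   # segment end time (seconds)
    thumbnail: str               # e.g., path to the segment's first frame
    transcript_snippet: str      # transcript from the start of the segment
    faces: list = field(default_factory=list)    # detected face IDs
    facets: dict = field(default_factory=dict)   # category -> [(t0, t1, label)]

tile = InteractiveTile(
    start=0.0, end=8.4, thumbnail="frame_0000.jpg",
    transcript_snippet="Welcome back, everyone...",
    faces=["face_01"],
    facets={"audio": [(0.0, 8.4, "speech")], "scene": [(0.0, 8.4, "indoor")]})
```
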
  • Publication number: 20220076705
    Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation. In some embodiments, the finest level of the hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms. Each level of the hierarchical segmentation clusters the clip atoms with a corresponding degree of granularity into a corresponding set of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline (e.g., clicks, drags), by performing a metadata search, or through selection of corresponding metadata segments from a metadata panel. Navigating to a different level of the hierarchy transforms the selection into corresponding coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
    Type: Application
    Filed: September 10, 2020
    Publication date: March 10, 2022
    Inventors: Seth Walker, Joy Oakyung Kim, Aseem Agarwala, Joel R. Brandt, Jovan Popovic, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
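
This abstract adds a metadata-search selection path on top of timeline interaction. A minimal sketch of that path, assuming segments carry transcript text (the data layout and function name are illustrative, not the patented method):

```python
# Illustrative sketch: select every video segment at the active level whose
# transcript text matches a query, returning the spans to highlight.

def search_select(segments, query):
    """Return (start, end) spans of segments whose transcript matches `query`."""
    q = query.lower()
    return [(s["start"], s["end"]) for s in segments if q in s["transcript"].lower()]

segments = [
    {"start": 0.0, "end": 4.0, "transcript": "Today we review the new camera."},
    {"start": 4.0, "end": 9.5, "transcript": "The battery life is excellent."},
    {"start": 9.5, "end": 14.0, "transcript": "Back to the camera's autofocus."},
]
print(search_select(segments, "camera"))  # -> [(0.0, 4.0), (9.5, 14.0)]
```
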
  • Publication number: 20220060671
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate and dynamically change filter parameters for a frame of a 360-degree video based on detecting a field of view from a computing device. As a computing device rotates or otherwise changes orientation, for instance, the disclosed systems can detect a field of view and interpolate one or more filter parameters corresponding to nearby spatial keyframes of the 360-degree video to generate view-specific filter parameters. By generating and storing filter parameters for spatial keyframes corresponding to different times and different view directions, the disclosed systems can dynamically adjust color grading or other visual effects using interpolated, view-specific filter parameters to render a filtered version of the 360-degree video.
    Type: Application
    Filed: November 4, 2021
    Publication date: February 24, 2022
    Inventors: Stephen DiVerdi, Seth Walker, Oliver Wang, Cuong Nguyen
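
The interpolation step can be illustrated with a simple angular-distance blend: filter parameters stored at spatial keyframes are weighted by how close each keyframe's view direction is to the current field of view. The weighting scheme and parameter names below are assumptions for illustration, not the patented method.

```python
# A minimal sketch of view-dependent filter interpolation: blend parameters
# from spatial keyframes, weighted by inverse angular distance to the view.
import math

def interpolate_filter(view_dir, keyframes, eps=1e-6):
    """view_dir: unit (x, y, z); keyframes: [(unit_dir, {param: value})]."""
    weights, total = [], 0.0
    for kf_dir, _ in keyframes:
        dot = sum(a * b for a, b in zip(view_dir, kf_dir))
        angle = math.acos(max(-1.0, min(1.0, dot)))  # angular distance
        w = 1.0 / (angle + eps)                      # closer keyframes weigh more
        weights.append(w)
        total += w
    params = {}
    for w, (_, kf_params) in zip(weights, keyframes):
        for name, value in kf_params.items():
            params[name] = params.get(name, 0.0) + (w / total) * value
    return params

keyframes = [((1, 0, 0), {"saturation": 1.2}), ((0, 0, 1), {"saturation": 0.8})]
print(interpolate_filter((0.707, 0.0, 0.707), keyframes))  # ~1.0, halfway between
```
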
  • Patent number: 11178374
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate and dynamically change filter parameters for a frame of a 360-degree video based on detecting a field of view from a computing device. As a computing device rotates or otherwise changes orientation, for instance, the disclosed systems can detect a field of view and interpolate one or more filter parameters corresponding to nearby spatial keyframes of the 360-degree video to generate view-specific filter parameters. By generating and storing filter parameters for spatial keyframes corresponding to different times and different view directions, the disclosed systems can dynamically adjust color grading or other visual effects using interpolated, view-specific filter parameters to render a filtered version of the 360-degree video.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: November 16, 2021
    Assignee: Adobe Inc.
    Inventors: Stephen DiVerdi, Seth Walker, Oliver Wang, Cuong Nguyen
  • Patent number: 10949057
    Abstract: Techniques are described for modifying a virtual reality environment to include or remove contextual information describing a virtual object within the virtual reality environment. The virtual object includes a user interface object associated with a development user interface of the virtual reality environment. In some cases, the contextual information includes information describing functions of controls included on the user interface object. In some cases, the virtual reality environment is modified based on a distance between the location of the user interface object and a location of a viewpoint within the virtual reality environment. Additionally or alternatively, the virtual reality environment is modified based on the elapsed time during which the user interface object has remained in the same location.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: March 16, 2021
    Assignee: Adobe Inc.
    Inventors: Stephen DiVerdi, Seth Walker, Brian Williams
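
The two triggers described above (viewpoint distance and dwell time) suggest a simple predicate. The thresholds, names, and structure in this sketch are invented for illustration; the patent does not specify them.

```python
# Hedged sketch: show contextual help for a VR UI object when the viewpoint
# is close enough, or when the object has stayed in place long enough.
import math, time

def should_show_context(obj_pos, viewpoint, still_since,
                        max_dist=1.5, dwell_secs=2.0, now=None):
    """Return True if contextual info for the object should be displayed."""
    now = time.monotonic() if now is None else now
    dist = math.dist(obj_pos, viewpoint)          # Euclidean distance (3.8+)
    near_enough = dist <= max_dist                # distance trigger
    dwelled = (now - still_since) >= dwell_secs   # elapsed-time trigger
    return near_enough or dwelled

# Object 1.0 m away that has been stationary for 0.5 s: distance trigger fires.
print(should_show_context((0, 1, -1), (0, 1, 0), still_since=0.0, now=0.5))
```
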
  • Patent number: 10872637
    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference frame to other video frames depicting a scene. For example, a computing system accesses a set of video frames with annotations identifying a target region to be modified. The computing system determines a motion of the target region's boundary across the set of video frames, and also interpolates pixel motion within the target region across the set of video frames. The computing system also inserts, responsive to user input, a reference frame into the set of video frames. The reference frame can include reference color data from a user-specified modification to the target region. The computing system can use the reference color data and the interpolated motion to update color data in the target region across the set of video frames.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: December 22, 2020
    Assignee: Adobe Inc.
    Inventors: Geoffrey Oxholm, Seth Walker, Ramiz Sheikh, Oliver Wang, John Nelson
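
The color-propagation step can be sketched in a simplified form: colors painted into the target region of the reference frame are carried to other frames along precomputed per-pixel motion. Real systems use dense optical flow with blending; the nearest-pixel lookup here is an illustrative stand-in, and all names and conventions are assumptions.

```python
# Simplified sketch: fill each frame's masked target region with reference
# colors found by following that frame's motion back into the reference frame.
import numpy as np

def propagate_colors(reference, frames, masks, flows_to_ref):
    """reference: HxWx3; frames: list of HxWx3; masks: list of HxW bool;
    flows_to_ref: list of HxWx2 displacements from each frame to the reference."""
    h, w = reference.shape[:2]
    out = []
    for frame, mask, flow in zip(frames, masks, flows_to_ref):
        filled = frame.copy()
        ys, xs = np.nonzero(mask)  # pixels inside the target region
        ref_x = np.clip((xs + flow[ys, xs, 0]).round().astype(int), 0, w - 1)
        ref_y = np.clip((ys + flow[ys, xs, 1]).round().astype(int), 0, h - 1)
        filled[ys, xs] = reference[ref_y, ref_x]  # copy reference color data
        out.append(filled)
    return out
```
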
  • Publication number: 20200382755
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate and dynamically change filter parameters for a frame of a 360-degree video based on detecting a field of view from a computing device. As a computing device rotates or otherwise changes orientation, for instance, the disclosed systems can detect a field of view and interpolate one or more filter parameters corresponding to nearby spatial keyframes of the 360-degree video to generate view-specific filter parameters. By generating and storing filter parameters for spatial keyframes corresponding to different times and different view directions, the disclosed systems can dynamically adjust color grading or other visual effects using interpolated, view-specific filter parameters to render a filtered version of the 360-degree video.
    Type: Application
    Filed: May 31, 2019
    Publication date: December 3, 2020
    Inventors: Stephen DiVerdi, Seth Walker, Oliver Wang, Cuong Nguyen
  • Publication number: 20200241730
    Abstract: Techniques are described for modifying a virtual reality environment to include or remove contextual information describing a virtual object within the virtual reality environment. The virtual object includes a user interface object associated with a development user interface of the virtual reality environment. In some cases, the contextual information includes information describing functions of controls included on the user interface object. In some cases, the virtual reality environment is modified based on a distance between the location of the user interface object and a location of a viewpoint within the virtual reality environment. Additionally or alternatively, the virtual reality environment is modified based on the elapsed time during which the user interface object has remained in the same location.
    Type: Application
    Filed: April 14, 2020
    Publication date: July 30, 2020
    Inventors: Stephen DiVerdi, Seth Walker, Brian Williams
  • Patent number: 10671238
    Abstract: Techniques are described for modifying a virtual reality environment to include or remove contextual information describing a virtual object within the virtual reality environment. The virtual object includes a user interface object associated with a development user interface of the virtual reality environment. In some cases, the contextual information includes information describing functions of controls included on the user interface object. In some cases, the virtual reality environment is modified based on a distance between the location of the user interface object and a location of a viewpoint within the virtual reality environment. Additionally or alternatively, the virtual reality environment is modified based on the elapsed time during which the user interface object has remained in the same location.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: June 2, 2020
    Assignee: Adobe Inc.
    Inventors: Stephen DiVerdi, Seth Walker, Brian Williams
  • Publication number: 20200118594
    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference frame to other video frames depicting a scene. For example, a computing system accesses a set of video frames with annotations identifying a target region to be modified. The computing system determines a motion of the target region's boundary across the set of video frames, and also interpolates pixel motion within the target region across the set of video frames. The computing system also inserts, responsive to user input, a reference frame into the set of video frames. The reference frame can include reference color data from a user-specified modification to the target region. The computing system can use the reference color data and the interpolated motion to update color data in the target region across the set of video frames.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 16, 2020
    Inventors: Geoffrey Oxholm, Seth Walker, Ramiz Sheikh, Oliver Wang, John Nelson
  • Patent number: 10496658
    Abstract: Methods and systems of visually depicting hierarchical data are provided. The hierarchical data includes data pertaining to a plurality of categories, the hierarchical data further including data pertaining to a plurality of subcategories of at least one of the plurality of categories. Multiple viewing regions may then be displayed simultaneously on a display, each viewing region depicting a different view of the hierarchical data, objects displayed in each viewing region being color-coded with a different color for each category, such that an object corresponding to a first category in a first viewing region is displayed in an identical color as an object corresponding to the first category in a second viewing region.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: December 3, 2019
    Assignee: Adobe Inc.
    Inventors: Michael James Andrew Smith, Gavin Murray Peacock, Seth Walker, Adam Cath
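
The cross-view color consistency described above boils down to one stable category-to-color mapping shared by every viewing region. A minimal sketch, assuming an arbitrary palette and invented names:

```python
# Sketch: one shared category -> color mapping, reused by every view so the
# same category renders in the same color in each viewing region.

PALETTE = ["#e41a1c", "#377eb8", "#4daf4a", "#984ea3", "#ff7f00"]

def build_color_map(categories):
    """Assign each top-level category a fixed palette color."""
    return {cat: PALETTE[i % len(PALETTE)] for i, cat in enumerate(sorted(categories))}

colors = build_color_map({"CPU", "GPU", "Network", "Disk"})
# Every view (treemap, chart, table) looks colors up in this same dict,
# so "CPU" is drawn identically in each viewing region.
print(colors["CPU"])
```
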
  • Publication number: 20190155481
    Abstract: Techniques are described for modifying a virtual reality environment to include or remove contextual information describing a virtual object within the virtual reality environment. The virtual object includes a user interface object associated with a development user interface of the virtual reality environment. In some cases, the contextual information includes information describing functions of controls included on the user interface object. In some cases, the virtual reality environment is modified based on a distance between the location of the user interface object and a location of a viewpoint within the virtual reality environment. Additionally or alternatively, the virtual reality environment is modified based on the elapsed time during which the user interface object has remained in the same location.
    Type: Application
    Filed: November 17, 2017
    Publication date: May 23, 2019
    Inventors: Stephen DiVerdi, Seth Walker, Brian Williams
  • Patent number: 9619529
    Abstract: Methods and systems of visually depicting rendering data are provided. Rendering data pertaining to the rendering, by a display engine, of display objects in a display zone for a selected frame is accessed. Then, for the selected frame, a heat map is generated based on rendering data corresponding to the selected frame, the heat map containing a plurality of heat objects, each heat object corresponding in proportional size and location to a different display object in the display zone, each heat object displayed in a color having an intensity proportional to an amount of computational resources taken by the display engine to render the corresponding display object. Finally, the heat map is displayed.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: April 11, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Gavin Murray Peacock, Seth Walker, Sedat Akkus
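
The heat-map construction described above can be sketched directly: each display object yields a heat object at the same position and size whose color intensity scales with that object's share of the frame's rendering cost. The record layout below is an assumption for illustration.

```python
# Sketch: build heat objects whose red intensity is proportional to each
# display object's rendering cost within the selected frame.

def build_heat_map(display_objects):
    """display_objects: [{'rect': (x, y, w, h), 'render_ms': float}]"""
    max_ms = max(obj["render_ms"] for obj in display_objects) or 1.0
    heat = []
    for obj in display_objects:
        intensity = obj["render_ms"] / max_ms  # 0..1, costliest object = 1
        red = int(255 * intensity)             # deeper red = more cost
        heat.append({"rect": obj["rect"], "color": (red, 0, 0)})
    return heat

frame = [{"rect": (0, 0, 100, 50), "render_ms": 12.0},
         {"rect": (0, 50, 100, 50), "render_ms": 3.0}]
print(build_heat_map(frame))
```
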
  • Publication number: 20140282176
    Abstract: Methods and systems of visually depicting rendering data are provided. Rendering data pertaining to the rendering, by a display engine, of display objects in a display zone for a selected frame is accessed. Then, for the selected frame, a heat map is generated based on rendering data corresponding to the selected frame, the heat map containing a plurality of heat objects, each heat object corresponding in proportional size and location to a different display object in the display zone, each heat object displayed in a color having an intensity proportional to an amount of computational resources taken by the display engine to render the corresponding display object. Finally, the heat map is displayed.
    Type: Application
    Filed: March 14, 2013
    Publication date: September 18, 2014
    Inventors: Gavin Murray Peacock, Seth Walker, Sedat Akkus
  • Publication number: 20140282175
    Abstract: Methods and systems of visually depicting hierarchical data are provided. The hierarchical data includes data pertaining to a plurality of categories, the hierarchical data further including data pertaining to a plurality of subcategories of at least one of the plurality of categories. Multiple viewing regions may then be displayed simultaneously on a display, each viewing region depicting a different view of the hierarchical data, objects displayed in each viewing region being color-coded with a different color for each category, such that an object corresponding to a first category in a first viewing region is displayed in an identical color as an object corresponding to the first category in a second viewing region.
    Type: Application
    Filed: March 14, 2013
    Publication date: September 18, 2014
    Applicant: Adobe Systems Incorporated
    Inventors: Michael James Andrew Smith, Gavin Murray Peacock, Seth Walker, Adam Cath
  • Patent number: 7487614
    Abstract: By attaching radio or acoustic transmitters, or combinations thereof, to the float line of nets, recovery of lost gill nets becomes much easier and more timely. This could save millions of fish and other marine life, as well as the economic value of lost nets, which can cost thousands of dollars apiece.
    Type: Grant
    Filed: January 27, 2006
    Date of Patent: February 10, 2009
    Inventors: Seth Walker, John Mizzi