Patents by Inventor Yu-Han Chang

Yu-Han Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11373405
    Abstract: Data processing systems and methods are disclosed for combining video content with one or more augmentations to produce augmented video. Objects within video content may have associated bounding boxes that may each be associated with respective RGBA values. Upon user selection of a pixel, the RGBA value of the pixel may be used to determine a bounding box associated with the RGBA value. The client may transmit an indicator of the determined bounding box to an augmentation system to request augmentation data for the object associated with the bounding box. The system then uses the indicator to determine the augmentation data and transmits the augmentation data to the client device.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: June 28, 2022
    Assignee: Second Spectrum, Inc.
    Inventors: Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su, Emil Dotchevski, Jason Kent Simon
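
The abstract above describes resolving a selected pixel's RGBA value back to an object's bounding box before requesting augmentation data. Below is a minimal Python sketch of that lookup pattern, assuming a hidden "id mask" in which each bounding box is rendered in a distinct RGBA color; all class and function names are illustrative, not taken from the patent.

```python
# Minimal sketch: encode each object's bounding box as a unique RGBA "id color",
# then resolve a clicked pixel back to its bounding box and build the indicator
# the client would send to an augmentation system. All names are illustrative.

from dataclasses import dataclass

@dataclass
class BoundingBox:
    object_id: str
    x: int
    y: int
    width: int
    height: int

def id_to_rgba(index: int) -> tuple[int, int, int, int]:
    """Pack a small integer index into an RGBA tuple (alpha kept opaque)."""
    return (index & 0xFF, (index >> 8) & 0xFF, (index >> 16) & 0xFF, 255)

def rgba_to_id(rgba: tuple[int, int, int, int]) -> int:
    """Recover the index from an RGBA tuple produced by id_to_rgba."""
    r, g, b, _ = rgba
    return r | (g << 8) | (b << 16)

def build_lookup(boxes: list[BoundingBox]) -> dict[tuple[int, int, int, int], BoundingBox]:
    """Associate each bounding box with a distinct RGBA value."""
    return {id_to_rgba(i): box for i, box in enumerate(boxes)}

def handle_pixel_selection(rgba, lookup):
    """Map the selected pixel's RGBA value to a bounding box and build the request indicator."""
    box = lookup.get(rgba)
    if box is None:
        return {}
    return {"object_id": box.object_id, "bbox": (box.x, box.y, box.width, box.height)}

if __name__ == "__main__":
    boxes = [BoundingBox("player_23", 100, 40, 60, 120),
             BoundingBox("ball", 300, 200, 20, 20)]
    lookup = build_lookup(boxes)
    clicked = id_to_rgba(1)  # pretend the user selected a pixel inside the "ball" mask
    print(handle_pixel_selection(clicked, lookup))
```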
  • Patent number: 11355158
    Abstract: Video may be edited to include collaborations by users. Collaborations may be added to the video and associated with a span of the video. The span of the collaborations may be determined according to an action that is received from a user contemporaneously with the playback of the video. In some cases, the span of the collaborations may be determined automatically by analyzing the collaboration and the video frames to which the user initially added the collaborations. Analysis of the collaborations and video frames may be used to determine span criteria for the frames of the video that should be associated with the collaborations.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: June 7, 2022
    Assignee: Genius Sports SS, LLC
    Inventors: Yu-Han Chang, Tracey Chui Ping Ho, Kevin William King, Rajiv Tharmeswaran Maheswaran
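
A minimal sketch of associating a collaboration with a span of video frames, as described in the abstract above: the span is taken either from a contemporaneous user action (press and release during playback) or from a default window when no explicit action is available. The frame rate, default duration, and all names are assumptions for illustration.

```python
from dataclasses import dataclass

FPS = 30  # assumed frame rate

@dataclass
class Collaboration:
    author: str
    payload: str            # e.g. a comment or a reference to a drawing
    start_frame: int
    end_frame: int

def span_from_user_action(press_time_s: float, release_time_s: float) -> tuple[int, int]:
    """Derive the frame span from how long the user held the annotation control."""
    return int(press_time_s * FPS), int(release_time_s * FPS)

def span_from_default(added_at_s: float, default_duration_s: float = 3.0) -> tuple[int, int]:
    """Fallback: attach the collaboration to a fixed window starting at the frame it was added on."""
    start = int(added_at_s * FPS)
    return start, start + int(default_duration_s * FPS)

def add_collaboration(author, payload, added_at_s, press=None, release=None):
    if press is not None and release is not None:
        start, end = span_from_user_action(press, release)
    else:
        start, end = span_from_default(added_at_s)
    return Collaboration(author, payload, start, end)

print(add_collaboration("yu-han", "nice screen!", added_at_s=12.4, press=12.4, release=15.0))
```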
  • Patent number: 11275949
    Abstract: A user interface for a media system supports using gestures, such as swiping gestures and taps, to navigate frame-synchronized video clips or video feeds. The detection of the gestures is interpreted as a command to navigate the frame-synchronized content. In one implementation, a tracking system and a trained machine learning system are used to generate the frame-synchronized video clips or video feeds. In one implementation, video clips of an event are organized into storylines and the user interface permits navigation between different storylines and within individual storylines.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: March 15, 2022
    Assignee: Second Spectrum, Inc.
    Inventors: Edward Shek Chan, Andrew Konyu Cheng, Yu-Han Chang, Ryan Laurence Frankel, Rajiv Tharmeswaran Maheswaran
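
A minimal sketch of the gesture-driven navigation described in the abstract above: a horizontal swipe switches storylines, a vertical swipe moves between clips within the current storyline, and the shared frame index is preserved so the clips stay frame-synchronized. The gesture names and data layout are illustrative assumptions, not the patented implementation.

```python
class SynchronizedPlayer:
    def __init__(self, storylines: list[list[str]]):
        self.storylines = storylines   # storylines[i] is a list of clip ids
        self.storyline = 0
        self.clip = 0
        self.frame = 0                 # shared frame index across synchronized clips

    def on_gesture(self, gesture: str) -> str:
        if gesture == "swipe_left":
            self.storyline = (self.storyline + 1) % len(self.storylines)
            self.clip = 0
        elif gesture == "swipe_right":
            self.storyline = (self.storyline - 1) % len(self.storylines)
            self.clip = 0
        elif gesture == "swipe_up":
            self.clip = (self.clip + 1) % len(self.storylines[self.storyline])
        elif gesture == "tap":
            pass  # e.g. toggle play/pause; the frame index is left untouched
        return self.current_clip()

    def current_clip(self) -> str:
        return self.storylines[self.storyline][self.clip]

player = SynchronizedPlayer([["dunk_cam1", "dunk_cam2"], ["threes_q1", "threes_q4"]])
print(player.on_gesture("swipe_left"), "at frame", player.frame)
```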
  • Publication number: 20210397846
    Abstract: Data processing systems and methods are disclosed for augmenting video content with one or more augmentations to produce augmented video. Elements within video content may be identified by spatiotemporal indices and may have associated values. An advertiser can pay to have an augmentation added to an element that, for example, advertises the advertiser's goods and/or includes a link that, when activated, takes a user to the advertiser's web site. Elements may have associated contexts that can be used to determine augmentations and element value, such as a position and/or current use of the element.
    Type: Application
    Filed: August 11, 2021
    Publication date: December 23, 2021
    Inventors: Yu-Han Chang, Tracey Chui Ping Ho, Rajiv Tharmeswaran Maheswaran
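
The abstract above describes indexing elements of the video spatiotemporally, attaching a context and a value to each, and letting an advertiser pay to augment an element. Below is a minimal Python sketch of that selection step under assumed field names and a toy pricing rule; it is an illustration, not the claimed method.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Element:
    element_id: str
    frame: int
    bbox: tuple[int, int, int, int]                # x, y, w, h
    context: dict = field(default_factory=dict)    # e.g. {"surface": "backboard", "in_play": True}

@dataclass
class Augmentation:
    sponsor: str
    link: str        # link activated by the user to reach the advertiser's site
    bid: float

def element_value(element: Element) -> float:
    """Toy valuation: elements currently in play are worth more to advertisers."""
    return 2.0 if element.context.get("in_play") else 1.0

def select_augmentation(element: Element, bids: list[Augmentation]) -> Optional[Augmentation]:
    """Pick the highest bid that meets the element's value threshold."""
    eligible = [a for a in bids if a.bid >= element_value(element)]
    return max(eligible, key=lambda a: a.bid, default=None)

backboard = Element("backboard_07", frame=1520, bbox=(640, 80, 180, 120),
                    context={"surface": "backboard", "in_play": True})
bids = [Augmentation("acme_shoes", "https://example.com/acme", 2.5),
        Augmentation("cheap_ads", "https://example.com/cheap", 0.5)]
print(select_augmentation(backboard, bids))
```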
  • Publication number: 20210358525
    Abstract: Video may be edited to include collaborations by users. Collaborations may be added to the video and associated with a span of the video. The span of the collaborations may be determined according to an action that is received from a user contemporaneously with the playback of the video. In some cases, the span of the collaborations may be determined automatically by analyzing the collaboration and the video frames to which the user initially added the collaborations. Analysis of the collaborations and video frames may be used to determine span criteria for the frames of the video that should be associated with the collaborations.
    Type: Application
    Filed: May 14, 2021
    Publication date: November 18, 2021
    Inventors: Yu-Han Chang, Tracey Chui Ping Ho, Kevin William King, Rajiv Tharmeswaran Maheswaran
  • Patent number: 11120271
    Abstract: Data processing systems and methods are disclosed for augmenting video content with one or more augmentations to produce augmented video. Elements within video content may be identified by spatiotemporal indices and may have associated values. An advertiser can pay to have an augmentation added to an element that, for example, advertises the advertiser's goods and/or includes a link that, when activated, takes a user to the advertiser's web site. Elements may have associated contexts that can be used to determine augmentations and element value, such as a position and/or current use of the element.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: September 14, 2021
    Assignee: Second Spectrum, Inc.
    Inventors: Yu-Han Chang, Tracey Chui Ping Ho, Rajiv Tharmeswaran Maheswaran
  • Publication number: 20210240992
    Abstract: A location in a video may be specified and content related to the location may be accessed. A method for accessing the related content may include receiving a reference to a pixel location in a frame of a video feed of a filmed occurrence and accessing a spatio-temporal index corresponding to the filmed occurrence. The spatio-temporal index may index information relating to events or objects of the filmed occurrence and corresponding pixel locations at which the events or the objects are detected in the video feed. The method may further include querying the spatio-temporal index using the pixel location to determine particular information of an indexed event or an indexed object and receiving the particular information wherein the particular information indicates at least one of spatial and temporal alignment parameters for aligning the indexed event with a corresponding event in at least one other video feed of the filmed occurrence.
    Type: Application
    Filed: April 23, 2021
    Publication date: August 5, 2021
    Inventors: Edward Shek Chan, Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su
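
A minimal sketch of the spatio-temporal index query described in the abstract above: a pixel reference in a frame is resolved to the indexed object detected at that location, along with alignment parameters toward another feed of the same filmed occurrence. The index layout and field names are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndexedObject:
    label: str
    bbox: tuple[int, int, int, int]   # x, y, w, h in this feed
    alignment: dict                   # e.g. frame offset into another feed of the same occurrence

# index[frame] -> list of objects detected in that frame
spatio_temporal_index: dict[int, list[IndexedObject]] = {
    1020: [IndexedObject("player_11", (400, 220, 50, 110),
                         {"other_feed": "broadcast_cam", "frame_offset": -3})],
}

def query(frame: int, px: int, py: int) -> Optional[IndexedObject]:
    """Return the indexed object whose bounding box contains the referenced pixel."""
    for obj in spatio_temporal_index.get(frame, []):
        x, y, w, h = obj.bbox
        if x <= px < x + w and y <= py < y + h:
            return obj
    return None

print(query(1020, 425, 300))
```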
  • Patent number: 11023736
    Abstract: Presenting event-specific video content that conforms to a user selection of an event type includes processing at least one video feed through at least one spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of at least one event within the at least one video feed to determine at least one event type, wherein the at least one event type includes an entry in a relationship library at least detailing a relationship between two visible features of the at least one video feed, extracting the video content displaying the at least one event and associating the understanding with the video content in a video content data structure. A user interface is configured to permit a user to indicate a preference for at least one event type that is used to retrieve and provide corresponding extracted video content with the data structure in a new video feed.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: June 1, 2021
    Assignee: Second Spectrum, Inc.
    Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Hollingsworth
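
A minimal sketch of the retrieval step in the abstract above: each extracted clip carries the event type produced by the pattern-recognition stage, and a user's preferred event type is used to assemble a new feed. The "relationship library" is reduced here to a simple lookup of which feature pair defines each event type; all names are illustrative assumptions.

```python
from dataclasses import dataclass

# Toy stand-in for a relationship library: each event type relates two visible features.
relationship_library = {
    "pick_and_roll": ("ball_handler", "screener"),
    "alley_oop": ("passer", "finisher"),
}

@dataclass
class Clip:
    clip_id: str
    event_type: str   # produced by the spatiotemporal pattern-recognition stage

def build_feed(clips: list[Clip], preferred_type: str) -> list[str]:
    """Return clip ids matching the user's preferred event type, in original order."""
    if preferred_type not in relationship_library:
        raise ValueError(f"unknown event type: {preferred_type}")
    return [c.clip_id for c in clips if c.event_type == preferred_type]

clips = [Clip("c1", "pick_and_roll"), Clip("c2", "alley_oop"), Clip("c3", "pick_and_roll")]
print(build_feed(clips, "pick_and_roll"))
```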
  • Patent number: 10997425
    Abstract: A media system generally includes a memory device that stores an event datastore that stores a plurality of event records, each event record corresponding to a respective event and event metadata describing at least one feature of the event. The media system (a) receives a request to generate an aggregated clip comprised of one or more media segments, where each media segment depicts a respective event; (b) for each event record from at least a subset of the plurality of event records, determines an interest level of the event corresponding to the event record; (c) determines one or more events to depict in the aggregated clip based on the respective interest levels of the one or more events; (d) generates the aggregated clip based on the respective media segments that depict the one or more events; and (e) transmits the aggregated clip to a user device.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: May 4, 2021
    Assignee: Second Spectrum, Inc.
    Inventors: Edward Shek Chan, Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su
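
A minimal sketch of the aggregated-clip flow in the abstract above: score each event record's interest level from its metadata, keep the top events, and stitch their media segments into one clip description. The scoring weights and field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class EventRecord:
    event_id: str
    segment: str    # reference to the media segment depicting the event
    metadata: dict  # e.g. {"lead_change": True, "dunk": False, "crowd_noise": 0.7}

def interest_level(record: EventRecord) -> float:
    m = record.metadata
    return 2.0 * m.get("lead_change", False) + 1.5 * m.get("dunk", False) + m.get("crowd_noise", 0.0)

def aggregated_clip(records: list[EventRecord], max_events: int = 3) -> list[str]:
    """Pick the most interesting events and return their segments in chronological order."""
    chosen = sorted(records, key=interest_level, reverse=True)[:max_events]
    chosen.sort(key=lambda r: r.event_id)   # assume event ids sort chronologically
    return [r.segment for r in chosen]

records = [
    EventRecord("e01", "seg01.mp4", {"dunk": True, "crowd_noise": 0.9}),
    EventRecord("e02", "seg02.mp4", {"crowd_noise": 0.2}),
    EventRecord("e03", "seg03.mp4", {"lead_change": True, "crowd_noise": 0.8}),
]
print(aggregated_clip(records, max_events=2))
```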
  • Publication number: 20210089779
    Abstract: A user interface for a media system supports using gestures, such as swiping gestures and taps, to navigate frame-synchronized video clips or video feeds. The detection of the gestures is interpreted as a command to navigate the frame-synchronized content. In one implementation, a tracking system and a trained machine learning system are used to generate the frame-synchronized video clips or video feeds. In one implementation, video clips of an event are organized into storylines and the user interface permits navigation between different storylines and within individual storylines.
    Type: Application
    Filed: September 23, 2020
    Publication date: March 25, 2021
    Inventors: Edward Shek Chan, Andrew Konyu Cheng, Yu-Han Chang, Ryan Laurence Frankel, Rajiv Tharmeswaran Maheswaran
  • Publication number: 20210089780
    Abstract: Data processing systems and methods are disclosed for augmenting video content with one or more augmentations to produce augmented video. Elements within video content may be identified by spatiotemporal indices and may have associated values. An advertiser can pay to have an augmentation added to an element that, for example, advertises the advertiser's goods and/or includes a link that, when activated, takes a user to the advertiser's web site. Elements may have associated contexts that can be used to determine augmentations and element value, such as a position and/or current use of the element.
    Type: Application
    Filed: December 10, 2020
    Publication date: March 25, 2021
    Inventors: Yu-Han Chang, Tracey Chui Ping Ho, Rajiv Tharmeswaran Maheswaran
  • Publication number: 20200401809
    Abstract: Data processing systems and methods are disclosed for combining video content with one or more augmentations to produce augmented video. Objects within video content may have associated bounding boxes that may each be associated with respective RGBA values. Upon user selection of a pixel, the RGBA value of the pixel may be used to determine a bounding box associated with the RGBA value. The client may transmit an indicator of the determined bounding box to an augmentation system to request augmentation data for the object associated with the bounding box. The system then uses the indicator to determine the augmentation data and transmits the augmentation data to the client device.
    Type: Application
    Filed: August 31, 2020
    Publication date: December 24, 2020
    Inventors: Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su, Emil Dotchevski, Jason Kent Simon
  • Patent number: 10832057
    Abstract: A user interface for a media system supports using gestures, such as swiping gestures and taps, to navigate frame-synchronized video clips or video feeds. The detection of the gestures is interpreted as a command to navigate the frame-synchronized content. In one implementation, a tracking system and a trained machine learning system are used to generate the frame-synchronized video clips or video feeds. In one implementation, video clips of an event are organized into storylines and the user interface permits navigation between different storylines and within individual storylines.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: November 10, 2020
    Assignee: Second Spectrum, Inc.
    Inventors: Edward Shek Chan, Andrew Konyu Cheng, Yu-Han Chang, Ryan Laurence Frankel, Rajiv Tharmeswaran Maheswaran
  • Publication number: 20200342233
    Abstract: In various embodiments, a Data Processing System for Generating Interactive User Interfaces and Interactive Game Systems Based on Spatiotemporal Analysis of Video Content may be configured to: (1) enable a user to select one or more players participating in a substantially live (e.g., live) sporting or other event; (2) determine scoring data for each of the one or more selected players during the sporting or other event; (3) track the determined scoring data; (4) generate a custom (e.g., to the user) user interface that includes the scoring data; and (5) display the custom user interface over at least a portion of a display screen (e.g., on a mobile computing device) displaying one or more video feeds of the sporting or other event. In this way, the system may be configured to convert a video feed of a sporting event into an interactive game.
    Type: Application
    Filed: July 10, 2020
    Publication date: October 29, 2020
    Inventors: Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su
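
A minimal sketch of the interactive-game flow in the abstract above: the user selects players, scoring data derived from the analyzed video feed is accumulated for those selections, and a small overlay payload is produced for display on top of the feed. The play-by-play shape, scoring rule, and names are illustrative assumptions.

```python
from collections import defaultdict

def run_fantasy_overlay(selected_players: set[str], play_by_play: list[dict]) -> dict:
    """Accumulate points for the user's selected players and build an overlay payload."""
    scores: dict[str, int] = defaultdict(int)
    for play in play_by_play:   # plays come from spatiotemporal analysis of the video feed
        if play["player"] in selected_players:
            scores[play["player"]] += play["points"]
    return {"type": "score_overlay", "scores": dict(scores), "total": sum(scores.values())}

plays = [{"player": "player_23", "points": 3},
         {"player": "player_30", "points": 2},
         {"player": "player_23", "points": 2}]
print(run_fantasy_overlay({"player_23"}, plays))
```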
  • Patent number: 10769446
    Abstract: Data processing systems and methods are disclosed for combining video content with one or more augmentations to produce augmented video. Objects within video content may have associated bounding boxes that may each be associated with respective RGBA values. Upon user selection of a pixel, the RGBA value of the pixel may be used to determine a bounding box associated with the RGBA value. The client may transmit an indicator of the determined bounding box to an augmentation system to request augmentation data for the object associated with the bounding box. The system then uses the indicator to determine the augmentation data and transmits the augmentation data to the client device.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: September 8, 2020
    Assignee: Second Spectrum, Inc.
    Inventors: Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su, Emil Dotchevski, Jason Kent Simon
  • Patent number: 10762351
    Abstract: Presenting event-specific video content that conforms to a user selection of an event type includes processing at least one video feed through at least one spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of at least one event within the at least one video feed to determine at least one event type, wherein the at least one event type includes an entry in a relationship library at least detailing a relationship between two visible features of the at least one video feed, extracting the video content displaying the at least one event and associating the understanding with the video content in a video content data structure. A user interface is configured to permit a user to indicate a preference for at least one event type that is used to retrieve and provide corresponding extracted video content with the data structure in a new video feed.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: September 1, 2020
    Assignee: Second Spectrum, Inc.
    Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Hollingsworth
  • Patent number: 10755103
    Abstract: Interacting with a broadcast video content stream is performed with a machine learning facility that processes a video feed of a video broadcast through a spatiotemporal pattern recognition algorithm that applies machine learning on at least one event in the video feed in order to develop an understanding of the at least one event. Developing the understanding includes identifying context information relating to the at least one event and identifying an entry in a relationship library detailing a relationship between two visible features of the video feed. Interacting is further enabled with a touch screen user interface configured to permit at least one broadcaster to control a portion of the content of the video feed through interaction options that are based on the identified context information. Interacting is further enhanced through an interface configured to permit remote viewers to control the portion of the content.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: August 25, 2020
    Assignee: Second Spectrum, Inc.
    Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Hollingsworth
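
A minimal sketch of the context-dependent interaction options in the abstract above: the context identified for an understood event determines which controls the broadcaster's touch screen offers, and remote viewers receive a narrower subset. The option lists and context fields are assumptions for illustration.

```python
def broadcaster_options(event_context: dict) -> list[str]:
    """Controls offered to the broadcaster, chosen from the event's identified context."""
    options = ["replay", "telestrate"]
    if event_context.get("event_type") == "pick_and_roll":
        options.append("show_screen_diagram")
    if event_context.get("shot_attempt"):
        options.append("show_shot_probability")
    return options

def viewer_options(event_context: dict) -> list[str]:
    """Remote viewers control a narrower portion of the content than the broadcaster."""
    return [o for o in broadcaster_options(event_context) if o != "telestrate"]

ctx = {"event_type": "pick_and_roll", "shot_attempt": True}
print(broadcaster_options(ctx))
print(viewer_options(ctx))
```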
  • Patent number: 10755102
    Abstract: Producing an event related video content data structure includes processing a video feed through a spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of an event within the video feed. Developing the understanding includes identifying context information relating to the event and identifying an entry in a relationship library at least detailing a relationship between two visible features of the video feed. Content of the video feed that displays the event is automatically extracted by a computer and associated with the context information. A video content data structure that includes the context information is produced.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: August 25, 2020
    Assignee: Second Spectrum, Inc.
    Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Hollingsworth
  • Patent number: 10748008
    Abstract: Providing enhanced video content includes processing at least one video feed through at least one spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of a plurality of events and to determine at least one event type for each of the plurality of events. The event type includes an entry in a relationship library detailing a relationship between two visible features. Extracting and indexing a plurality of video cuts from the video feed is performed based on the at least one event type determined by the understanding that corresponds to an event in the plurality of events detectable in the video cuts. Lastly, automatically and under computer control, an enhanced video content data structure is generated using the extracted plurality of video cuts based on the indexing of the extracted plurality of video cuts.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: August 18, 2020
    Assignee: Second Spectrum, Inc.
    Inventors: Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su, Noel Hollingsworth
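
A minimal sketch of the extraction and indexing step in the abstract above: detected events are turned into padded video cuts, the cuts are indexed by event type, and an "enhanced" content structure is assembled from the index. Cut boundaries, the index shape, and field names are assumptions for illustration.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class DetectedEvent:
    event_type: str
    start_frame: int
    end_frame: int

def extract_and_index(events: list[DetectedEvent], pad: int = 15) -> dict[str, list[tuple[int, int]]]:
    """Index padded cut boundaries by the event type detected inside each cut."""
    index: dict[str, list[tuple[int, int]]] = defaultdict(list)
    for e in events:
        index[e.event_type].append((max(0, e.start_frame - pad), e.end_frame + pad))
    return index

def enhanced_content(index: dict[str, list[tuple[int, int]]], wanted: list[str]) -> list[dict]:
    """Build the enhanced video content data structure from the indexed cuts."""
    return [{"event_type": t, "cut": cut} for t in wanted for cut in index.get(t, [])]

events = [DetectedEvent("dunk", 300, 420), DetectedEvent("steal", 900, 960)]
index = extract_and_index(events)
print(enhanced_content(index, ["dunk", "steal"]))
```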
  • Patent number: 10713494
    Abstract: In various embodiments, a Data Processing System for Generating Interactive User Interfaces and Interactive Game Systems Based on Spatiotemporal Analysis of Video Content may be configured to: (1) enable a user to select one or more players participating in a substantially live (e.g., live) sporting or other event; (2) determine scoring data for each of the one or more selected players during the sporting or other event; (3) track the determined scoring data; (4) generate a custom (e.g., to the user) user interface that includes the scoring data; and (5) display the custom user interface over at least a portion of a display screen (e.g., on a mobile computing device) displaying one or more video feeds of the sporting or other event. In this way, the system may be configured to convert a video feed of a sporting event into an interactive game.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: July 14, 2020
    Assignee: Second Spectrum, Inc.
    Inventors: Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su