Patents by Inventor Edward Shek Chan

Edward Shek Chan has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Brief illustrative code sketches of the three invention families described in these abstracts follow the listing.

  • Publication number: 20240087316
    Abstract: A location in a video may be specified and content related to that location may be accessed. A method for accessing the related content may include receiving a reference to a pixel location in a frame of a video feed of a filmed occurrence and accessing a spatio-temporal index corresponding to the filmed occurrence. The spatio-temporal index may index information relating to events or objects of the filmed occurrence and the corresponding pixel locations at which those events or objects are detected in the video feed. The method may further include querying the spatio-temporal index using the pixel location to determine particular information of an indexed event or an indexed object, and receiving the particular information, wherein the particular information indicates at least one of spatial and temporal alignment parameters for aligning the indexed event with a corresponding event in at least one other video feed of the filmed occurrence.
    Type: Application
    Filed: November 15, 2023
    Publication date: March 14, 2024
    Inventors: Edward Shek Chan, Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su
  • Patent number: 11861905
    Abstract: A location in a video may be specified and content related to that location may be accessed. A method for accessing the related content may include receiving a reference to a pixel location in a frame of a video feed of a filmed occurrence and accessing a spatio-temporal index corresponding to the filmed occurrence. The spatio-temporal index may index information relating to events or objects of the filmed occurrence and the corresponding pixel locations at which those events or objects are detected in the video feed. The method may further include querying the spatio-temporal index using the pixel location to determine particular information of an indexed event or an indexed object, and receiving the particular information, wherein the particular information indicates at least one of spatial and temporal alignment parameters for aligning the indexed event with a corresponding event in at least one other video feed of the filmed occurrence.
    Type: Grant
    Filed: April 23, 2021
    Date of Patent: January 2, 2024
    Assignee: Genius Sports SS, LLC
    Inventors: Edward Shek Chan, Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su
  • Patent number: 11275949
    Abstract: A user interface for a media system supports using gestures, such as swiping gestures and taps, to navigate frame-synchronized video clips or video feeds. The detection of a gesture is interpreted as a command to navigate the frame-synchronized content. In one implementation, a tracking system and a trained machine learning system are used to generate the frame-synchronized video clips or video feeds. In one implementation, video clips of an event are organized into storylines, and the user interface permits navigation between different storylines and within individual storylines.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: March 15, 2022
    Assignee: Second Spectrum, Inc.
    Inventors: Edward Shek Chan, Andrew Konyu Cheng, Yu-Han Chang, Ryan Laurence Frankel, Rajiv Tharmeswaran Maheswaran
  • Publication number: 20210240992
    Abstract: A location in a video may be specified and content related to that location may be accessed. A method for accessing the related content may include receiving a reference to a pixel location in a frame of a video feed of a filmed occurrence and accessing a spatio-temporal index corresponding to the filmed occurrence. The spatio-temporal index may index information relating to events or objects of the filmed occurrence and the corresponding pixel locations at which those events or objects are detected in the video feed. The method may further include querying the spatio-temporal index using the pixel location to determine particular information of an indexed event or an indexed object, and receiving the particular information, wherein the particular information indicates at least one of spatial and temporal alignment parameters for aligning the indexed event with a corresponding event in at least one other video feed of the filmed occurrence.
    Type: Application
    Filed: April 23, 2021
    Publication date: August 5, 2021
    Inventors: Edward Shek Chan, Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su
  • Patent number: 10997425
    Abstract: A media system generally includes a memory device that stores an event datastore containing a plurality of event records, each event record corresponding to a respective event and including event metadata that describes at least one feature of the event. The media system (a) receives a request to generate an aggregated clip composed of one or more media segments, where each media segment depicts a respective event; (b) for each event record from at least a subset of the plurality of event records, determines an interest level of the event corresponding to the event record; (c) determines one or more events to depict in the aggregated clip based on the respective interest levels of those events; (d) generates the aggregated clip from the respective media segments that depict the selected events; and (e) transmits the aggregated clip to a user device.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: May 4, 2021
    Assignee: Second Spectrum, Inc.
    Inventors: Edward Shek Chan, Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su
  • Publication number: 20210089779
    Abstract: A user interface for a media system supports using gestures, such as swiping gestures and taps, to navigate frame-synchronized video clips or video feeds. The detection of a gesture is interpreted as a command to navigate the frame-synchronized content. In one implementation, a tracking system and a trained machine learning system are used to generate the frame-synchronized video clips or video feeds. In one implementation, video clips of an event are organized into storylines, and the user interface permits navigation between different storylines and within individual storylines.
    Type: Application
    Filed: September 23, 2020
    Publication date: March 25, 2021
    Inventors: Edward Shek Chan, Andrew Konyu Cheng, Yu-Han Chang, Ryan Laurence Frankel, Rajiv Tharmeswaran Maheswaran
  • Patent number: 10832057
    Abstract: A user interface for a media system supports using gestures, such as swiping gestures and taps, to navigate frame-synchronized video clips or video feeds. The detection of a gesture is interpreted as a command to navigate the frame-synchronized content. In one implementation, a tracking system and a trained machine learning system are used to generate the frame-synchronized video clips or video feeds. In one implementation, video clips of an event are organized into storylines, and the user interface permits navigation between different storylines and within individual storylines.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: November 10, 2020
    Assignee: Second Spectrum, Inc.
    Inventors: Edward Shek Chan, Andrew Konyu Cheng, Yu-Han Chang, Ryan Laurence Frankel, Rajiv Tharmeswaran Maheswaran
  • Publication number: 20200012861
    Abstract: A media system generally includes a memory device that stores an event datastore containing a plurality of event records, each event record corresponding to a respective event and including event metadata that describes at least one feature of the event. The media system (a) receives a request to generate an aggregated clip composed of one or more media segments, where each media segment depicts a respective event; (b) for each event record from at least a subset of the plurality of event records, determines an interest level of the event corresponding to the event record; (c) determines one or more events to depict in the aggregated clip based on the respective interest levels of those events; (d) generates the aggregated clip from the respective media segments that depict the selected events; and (e) transmits the aggregated clip to a user device.
    Type: Application
    Filed: September 17, 2019
    Publication date: January 9, 2020
    Inventors: Edward Shek Chan, Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su
  • Publication number: 20190354765
    Abstract: A user interface for a media system supports using gestures, such as swiping gestures and taps, to navigate frame-synchronized video clips or video feeds. The detection of a gesture is interpreted as a command to navigate the frame-synchronized content. In one implementation, a tracking system and a trained machine learning system are used to generate the frame-synchronized video clips or video feeds. In one implementation, video clips of an event are organized into storylines, and the user interface permits navigation between different storylines and within individual storylines.
    Type: Application
    Filed: July 30, 2019
    Publication date: November 21, 2019
    Inventors: Edward Shek Chan, Andrew Konyu Cheng, Yu-Han Chang, Ryan Laurence Frankel, Rajiv Tharmeswaran Maheswaran
  • Patent number: 10460177
    Abstract: A media system generally includes a memory device that stores an event datastore containing a plurality of event records, each event record corresponding to a respective event and including event metadata that describes at least one feature of the event. The media system (a) receives a request to generate an aggregated clip composed of one or more media segments, where each media segment depicts a respective event; (b) for each event record from at least a subset of the plurality of event records, determines an interest level of the event corresponding to the event record; (c) determines one or more events to depict in the aggregated clip based on the respective interest levels of those events; (d) generates the aggregated clip from the respective media segments that depict the selected events; and (e) transmits the aggregated clip to a user device.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: October 29, 2019
    Assignee: Second Spectrum, Inc.
    Inventors: Edward Shek Chan, Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su
  • Publication number: 20190114485
    Abstract: A media system generally includes a memory device that stores an event datastore containing a plurality of event records, each event record corresponding to a respective event and including event metadata that describes at least one feature of the event. The media system (a) receives a request to generate an aggregated clip composed of one or more media segments, where each media segment depicts a respective event; (b) for each event record from at least a subset of the plurality of event records, determines an interest level of the event corresponding to the event record; (c) determines one or more events to depict in the aggregated clip based on the respective interest levels of those events; (d) generates the aggregated clip from the respective media segments that depict the selected events; and (e) transmits the aggregated clip to a user device.
    Type: Application
    Filed: December 21, 2018
    Publication date: April 18, 2019
    Inventors: Edward Shek Chan, Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su
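
The spatio-temporal index family (publication 20240087316, patent 11861905, and publication 20210240992) describes looking up events or objects by the pixel a viewer references in a given frame. The following minimal Python sketch indexes detections by frame number and pixel bounding box and returns, for a queried pixel, the indexed object together with alignment parameters for other feeds. The class, field names, and alignment format are hypothetical choices for illustration, not taken from the patents.

    # Sketch of a pixel-queryable spatio-temporal index; all names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class IndexedEvent:
        label: str        # e.g. "player_23" or "shot_attempt"
        frame: int        # frame number in this video feed
        bbox: tuple       # (x_min, y_min, x_max, y_max) in pixels
        alignment: dict   # spatial/temporal parameters for aligning with other feeds

    @dataclass
    class SpatioTemporalIndex:
        events: list = field(default_factory=list)

        def add(self, event: IndexedEvent) -> None:
            self.events.append(event)

        def query(self, frame: int, x: int, y: int) -> list:
            """Return events whose bounding box covers pixel (x, y) at `frame`."""
            hits = []
            for e in self.events:
                x0, y0, x1, y1 = e.bbox
                if e.frame == frame and x0 <= x <= x1 and y0 <= y <= y1:
                    hits.append(e)
            return hits

    # Example: index a detected object, then resolve a tap on pixel (640, 360).
    index = SpatioTemporalIndex()
    index.add(IndexedEvent("player_23", frame=1500, bbox=(600, 300, 700, 420),
                           alignment={"feed_B": {"frame_offset": 12}}))
    for hit in index.query(frame=1500, x=640, y=360):
        print(hit.label, hit.alignment)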
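
The storyline-navigation family (patents 11275949 and 10832057, publications 20210089779 and 20190354765) describes gesture-driven movement within and between storylines of frame-synchronized clips. The sketch below assumes one plausible mapping in which horizontal swipes step through clips in the current storyline and vertical swipes switch storylines; that mapping and the StorylineNavigator class are assumptions for illustration, not the claimed interface.

    # Sketch of gesture-driven storyline navigation; the gesture mapping is assumed.
    class StorylineNavigator:
        def __init__(self, storylines):
            # storylines: list of lists of frame-synchronized clip identifiers
            self.storylines = storylines
            self.story = 0   # index of the current storyline
            self.clip = 0    # index of the current clip within that storyline

        def handle_gesture(self, gesture: str) -> str:
            if gesture == "swipe_left":        # next clip in the same storyline
                self.clip = min(self.clip + 1, len(self.storylines[self.story]) - 1)
            elif gesture == "swipe_right":     # previous clip in the same storyline
                self.clip = max(self.clip - 1, 0)
            elif gesture == "swipe_up":        # next storyline, clamp clip position
                self.story = min(self.story + 1, len(self.storylines) - 1)
                self.clip = min(self.clip, len(self.storylines[self.story]) - 1)
            elif gesture == "swipe_down":      # previous storyline, clamp clip position
                self.story = max(self.story - 1, 0)
                self.clip = min(self.clip, len(self.storylines[self.story]) - 1)
            return self.storylines[self.story][self.clip]

    nav = StorylineNavigator([["a1", "a2", "a3"], ["b1", "b2"]])
    print(nav.handle_gesture("swipe_left"))   # a2
    print(nav.handle_gesture("swipe_up"))     # b2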
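
The aggregated-clip family (patents 10997425 and 10460177, publications 20200012861 and 20190114485) revolves around scoring event records by interest level and assembling the highest-scoring segments into a clip. A minimal sketch of one such scoring-and-selection loop follows, assuming a toy weighted score over a few metadata fields; the weights, field names, and segment format are illustrative only, not the patented method.

    # Sketch of interest-based clip aggregation; scoring weights are invented.
    from dataclasses import dataclass

    @dataclass
    class EventRecord:
        event_id: str
        metadata: dict   # e.g. {"type": "dunk", "score_margin": 2, "period": 4}
        segment: tuple   # (start_seconds, end_seconds) in the source video

    def interest_level(record: EventRecord) -> float:
        """Toy scoring: late-game, close-game, high-value plays rank higher."""
        m = record.metadata
        score = {"dunk": 3.0, "three_pointer": 2.0, "assist": 1.0}.get(m.get("type"), 0.5)
        if m.get("period", 1) >= 4:
            score += 1.0
        if abs(m.get("score_margin", 100)) <= 5:
            score += 1.0
        return score

    def build_aggregated_clip(records, max_segments=5):
        """Pick the highest-interest events and return their segments in time order."""
        ranked = sorted(records, key=interest_level, reverse=True)[:max_segments]
        return sorted(r.segment for r in ranked)

    records = [
        EventRecord("e1", {"type": "dunk", "period": 4, "score_margin": 3}, (3010.0, 3018.0)),
        EventRecord("e2", {"type": "assist", "period": 2, "score_margin": 12}, (1200.0, 1206.0)),
        EventRecord("e3", {"type": "three_pointer", "period": 4, "score_margin": 1}, (3300.0, 3309.0)),
    ]
    print(build_aggregated_clip(records, max_segments=2))  # [(3010.0, 3018.0), (3300.0, 3309.0)]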