Patents by Inventor Ernestine Fu

Ernestine Fu has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11205460
    Abstract: A computer readable storage medium has stored thereon a prerecorded video experience container. The prerecorded video experience container includes a prerecorded video file that displays visual content, an executable experience which upon execution enables presentation of additional content associated with the prerecorded video file, and an interactive region of the prerecorded video file, wherein the interactive region of the prerecorded video file is associated with the executable experience such that a user interaction with the interactive region executes the executable experience. The prerecorded video file is displayed in response to a selection of the prerecorded video experience container. The executable experience associated with the prerecorded video file is executed in response to identifying an interaction with the interactive region. The additional content associated with the prerecorded video file is displayed in response to executing the executable experience.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: December 21, 2021
    Assignee: Gfycat, Inc.
    Inventors: Richard Rabbat, Ernestine Fu
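    Illustrative sketch: the container above bundles a video asset with one or more interactive regions, each bound to an executable experience. A minimal Python rendering of that structure follows; the class and field names, the tap-handling logic, and the lambda-based experience are illustrative assumptions rather than the claimed format.

      from dataclasses import dataclass, field
      from typing import Callable, List, Tuple

      # Hypothetical shape of a prerecorded video experience container.
      # Field names and the region/experience binding are assumptions.
      @dataclass
      class InteractiveRegion:
          bounds: Tuple[int, int, int, int]   # (x, y, width, height) in pixels
          start_time: float                   # seconds the region becomes active
          end_time: float                     # seconds the region stops being active
          experience: Callable[[], None]      # presents the additional content

      @dataclass
      class VideoExperienceContainer:
          video_path: str
          regions: List[InteractiveRegion] = field(default_factory=list)

          def handle_tap(self, x: int, y: int, t: float) -> bool:
              """Run the experience bound to a region if the tap hits it."""
              for region in self.regions:
                  rx, ry, rw, rh = region.bounds
                  if (region.start_time <= t <= region.end_time
                          and rx <= x <= rx + rw and ry <= y <= ry + rh):
                      region.experience()
                      return True
              return False

      # Usage: show related content when the viewer taps a region.
      container = VideoExperienceContainer(
          video_path="clip.mp4",
          regions=[InteractiveRegion((100, 50, 200, 120), 0.0, 5.0,
                                     lambda: print("showing additional content"))],
      )
      container.handle_tap(150, 80, 2.5)
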
  • Publication number: 20210365689
    Abstract: In a method for performing adaptive content classification of a video content item, frames of a video content item are analyzed at a sampling rate for a type of content, wherein the sampling rate dictates a frequency at which frames of the video content item are analyzed. Responsive to identifying content within at least one frame indicative of the type of content, the sampling rate of the frames is increased. Responsive to not identifying content within at least one frame indicative of the type of content, the sampling rate of the frames is decreased. It is determined whether the video content item includes the type of content based on the analysis of the frames.
    Type: Application
    Filed: August 3, 2021
    Publication date: November 25, 2021
    Inventors: Richard Rabbat, Ernestine Fu
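    Illustrative sketch: the loop below mirrors the adaptive sampling idea in the abstract above, assuming a caller-supplied per-frame classifier; the rate bounds, the doubling/halving schedule, and the final decision rule are assumptions, not the claimed method.

      # Adaptive sampling sketch: analyze frames more often after a hit,
      # less often after a miss. classify_frame and all constants are
      # illustrative assumptions.
      def contains_content_type(frames, classify_frame,
                                initial_rate=1, min_rate=1, max_rate=30):
          """Return True if any sampled frame is flagged by classify_frame.

          `frames` is a sequence of decoded frames; the sampling rate adapts
          up when content of the target type is identified and down when it
          is not, so the step between analyzed frames shrinks or grows.
          """
          rate = initial_rate
          hits = 0
          i = 0
          while i < len(frames):
              if classify_frame(frames[i]):
                  hits += 1
                  rate = min(rate * 2, max_rate)    # sample more densely
              else:
                  rate = max(rate // 2, min_rate)   # back off
              i += max(1, max_rate // rate)         # higher rate -> smaller step
          return hits > 0
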
  • Patent number: 11120273
    Abstract: In a method for performing adaptive content classification of a video content item, frames of a video content item are analyzed at a sampling rate for a type of content, wherein the sampling rate dictates a frequency at which frames of the video content item are analyzed. Responsive to identifying content within at least one frame indicative of the type of content, the sampling rate of the frames is increased. Responsive to not identifying content within at least one frame indicative of the type of content, the sampling rate of the frames is decreased. It is determined whether the video content item includes the type of content based on the analysis of the frames.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: September 14, 2021
    Assignee: Gfycat, Inc.
    Inventors: Richard Rabbat, Ernestine Fu
  • Publication number: 20210264517
    Abstract: A method for facilitating an Internet meme economy, executed by one or more processors, comprises identifying an Internet meme, providing an offering of shares in the Internet meme at a first share price, receiving a cryptocurrency purchase of shares in the Internet meme from a user, tracking the reach of the Internet meme, and, based on the tracked reach of the Internet meme reaching a first threshold, providing a buyback offer for shares in the Internet meme at a second share price, the second share price being greater than the first share price. The method may further comprise identifying an iteration of the Internet meme and assigning shares in the iteration of the Internet meme to shareholders of the Internet meme.
    Type: Application
    Filed: February 24, 2021
    Publication date: August 26, 2021
    Inventors: Jeffrey Harris, Daniel McEleney, Harrison John Dodini, Ernestine Fu
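    Illustrative sketch: the offering/buyback flow above reduces to issuing shares at one price, tracking reach, and offering a buyback at a higher price once reach crosses a threshold. The toy class below shows only that flow; the prices, the reach metric, and the omission of any real cryptocurrency or payment handling are all assumptions.

      # Toy model of the share offering and threshold-triggered buyback.
      # Prices, thresholds, and data shapes are illustrative assumptions;
      # cryptocurrency settlement is omitted entirely.
      class MemeOffering:
          def __init__(self, meme_id, share_price, reach_threshold, buyback_price):
              assert buyback_price > share_price
              self.meme_id = meme_id
              self.share_price = share_price
              self.reach_threshold = reach_threshold
              self.buyback_price = buyback_price
              self.holdings = {}   # user -> shares held
              self.reach = 0       # tracked impressions of the meme

          def purchase(self, user, shares):
              """Record a purchase of shares (payment handling omitted)."""
              self.holdings[user] = self.holdings.get(user, 0) + shares

          def record_reach(self, new_impressions):
              """Track reach; return buyback offers once the threshold is met."""
              self.reach += new_impressions
              if self.reach >= self.reach_threshold:
                  return [(user, shares * self.buyback_price)
                          for user, shares in self.holdings.items()]
              return []

      offering = MemeOffering("meme-123", share_price=1.0,
                              reach_threshold=10_000, buyback_price=1.5)
      offering.purchase("alice", 10)
      print(offering.record_reach(12_000))   # [('alice', 15.0)]
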
  • Publication number: 20210264193
    Abstract: In a method for identification of an Internet meme, a plurality of sources is monitored for digital visual content comprising a visual moment and a caption. It is determined whether instances of digital visual content include a same visual moment. Provided the instances of digital visual content include the same visual moment, the instances of digital visual content including the same visual moment are identified as similar digital visual content. Each instance of the similar digital visual content is tracked. Provided a total number of instances of the similar digital visual content exceeds an Internet meme threshold, the similar digital visual content is identified as an Internet meme, wherein the same visual moment is a root visual moment and each caption corresponds to a different iteration of the Internet meme.
    Type: Application
    Filed: February 24, 2020
    Publication date: August 26, 2021
    Applicant: Gfycat, Inc.
    Inventors: Jeffrey Harris, Daniel McEleney, Harrison John Dodini, Ernestine Fu
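    Illustrative sketch: identification as described above amounts to grouping captured items by a shared visual moment and declaring an Internet meme once a group crosses a size threshold. The sketch below assumes each item already carries a visual fingerprint (e.g. a perceptual hash of the root visual moment); the fingerprinting step and the threshold value are assumptions.

      from collections import defaultdict

      MEME_THRESHOLD = 50   # illustrative number of iterations required

      def identify_memes(items):
          """`items` is an iterable of (visual_fingerprint, caption) pairs from
          monitored sources. Returns {fingerprint: [captions]} for each group
          large enough to be treated as an Internet meme, where the fingerprint
          stands in for the root visual moment and each caption is one
          iteration of the meme."""
          groups = defaultdict(list)
          for fingerprint, caption in items:
              groups[fingerprint].append(caption)
          return {fp: captions for fp, captions in groups.items()
                  if len(captions) >= MEME_THRESHOLD}
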
  • Publication number: 20210136285
    Abstract: In a method for generating a 360 degree looping video file, a source 360 degree video file is received. At least one configuration setting is received for a 360 degree looping video file, the at least one configuration comprising a projection type of the 360 degree looping video file. The 360 degree looping video file is generated based at least on the source 360 degree video file and the projection type, the 360 degree looping video file comprising a video data file and spatial mapping instructions, wherein the 360 degree looping video file, when executed at an electronic device, displays the video data file according to the spatial mapping instructions.
    Type: Application
    Filed: January 11, 2021
    Publication date: May 6, 2021
    Inventors: Richard Rabbat, Ernestine Fu
  • Publication number: 20210112309
    Abstract: In a computer-implemented method for generating an interactive digital video content item, a digital video content item is accessed. Subject recognition is performed on the digital video content item, wherein the subject recognition automatically identifies a visual subject within the digital video content item. Responsive to identifying the visual subject, an interactive region is applied to the visual subject within the digital video content item, wherein the interactive region enables presentation of content related to the visual subject in response to a user interaction with the interactive region during presentation of the digital video content item.
    Type: Application
    Filed: December 21, 2020
    Publication date: April 15, 2021
    Inventors: Richard Rabbat, Ernestine Fu
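    Illustrative sketch: the method above pairs subject recognition with a clickable overlay, attaching an interactive region at each recognized subject and serving related content on interaction. In the sketch below, detect_subjects and related_content_for are placeholder functions standing in for whatever recognition model and content service are actually used.

      from dataclasses import dataclass
      from typing import List, Tuple

      @dataclass
      class InteractiveRegion:
          label: str                            # recognized subject, e.g. "sneaker"
          bounds: Tuple[int, int, int, int]     # (x, y, width, height)
          content_url: str                      # related content shown on interaction

      def detect_subjects(video_path: str) -> List[Tuple[str, Tuple[int, int, int, int]]]:
          # Placeholder for a real subject-recognition model.
          return [("sneaker", (40, 60, 120, 90))]

      def related_content_for(label: str) -> str:
          # Placeholder lookup of content related to the recognized subject.
          return f"https://example.com/content/{label}"

      def build_interactive_item(video_path: str) -> List[InteractiveRegion]:
          """Attach an interactive region to each recognized visual subject."""
          return [InteractiveRegion(label, bounds, related_content_for(label))
                  for label, bounds in detect_subjects(video_path)]

      print(build_interactive_item("clip.mp4"))
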
  • Patent number: 10944905
    Abstract: In a method for generating a 360 degree looping video file, a source 360 degree video file is received. At least one configuration setting is received for a 360 degree looping video file, the at least one configuration comprising a projection type of the 360 degree looping video file. The 360 degree looping video file is generated based at least on the source 360 degree video file and the projection type, the 360 degree looping video file comprising a video data file and spatial mapping instructions, wherein the 360 degree looping video file, when executed at an electronic device, displays the video data file according to the spatial mapping instructions.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: March 9, 2021
    Assignee: Gfycat, Inc.
    Inventors: Richard Rabbat, Ernestine Fu
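    Illustrative sketch: generation as described above combines a source 360 degree video with a chosen projection type and packages the result as video data plus spatial mapping instructions. The container fields and projection names below are assumptions; the actual transcoding into a seamless loop is not shown.

      from dataclasses import dataclass

      SUPPORTED_PROJECTIONS = {"equirectangular", "cubemap"}   # assumed names

      @dataclass
      class LoopingVideo360:
          video_data: bytes        # looping video payload
          spatial_mapping: dict    # instructions for mapping pixels onto the sphere

      def generate_looping_360(source_video: bytes, projection_type: str) -> LoopingVideo360:
          """Package a 360 degree looping video from a source file and a
          projection type; a real implementation would also re-encode the
          source into a seamless loop."""
          if projection_type not in SUPPORTED_PROJECTIONS:
              raise ValueError(f"unsupported projection: {projection_type}")
          spatial_mapping = {"projection": projection_type, "loop": True}
          return LoopingVideo360(video_data=source_video, spatial_mapping=spatial_mapping)
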
  • Patent number: 10945042
    Abstract: In a computer-implemented method for generating an interactive digital video content item, a digital video content item is accessed. Subject recognition is performed on the digital video content item, wherein the subject recognition automatically identifies a visual subject within the digital video content item. Responsive to identifying the visual subject, an interactive region is applied to the visual subject within the digital video content item, wherein the interactive region enables presentation of content related to the visual subject in response to a user interaction with the interactive region during presentation of the digital video content item.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: March 9, 2021
    Assignee: Gfycat, Inc.
    Inventors: Richard Rabbat, Ernestine Fu
  • Publication number: 20200401813
    Abstract: In a method for performing adaptive content classification of a video content item, frames of a video content item are analyzed at a sampling rate for a type of content, wherein the sampling rate dictates a frequency at which frames of the video content item are analyzed. Responsive to identifying content within at least one frame indicative of the type of content, the sampling rate of the frames is increased. Responsive to not identifying content within at least one frame indicative of the type of content, the sampling rate of the frames is decreased. It is determined whether the video content item includes the type of content based on the analysis of the frames.
    Type: Application
    Filed: June 18, 2020
    Publication date: December 24, 2020
    Applicant: Gfycat, Inc.
    Inventors: Richard Rabbat, Ernestine Fu
  • Publication number: 20200364262
    Abstract: In a method for identifying visually similar media content items, perceptual hashes for video frames of media content items are received. The perceptual hashes are compared for at least a portion of the video frames. Based on the comparison of the perceptual hashes for at least a portion of the video frames, it is determined whether the media content items match. The media content items indicated as matching are grouped.
    Type: Application
    Filed: May 12, 2020
    Publication date: November 19, 2020
    Applicant: Gfycat, Inc.
    Inventors: Jeffrey Harris, Kenneth Au, Richard Rabbat, Ernestine Fu
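    Illustrative sketch: matching here reduces to comparing per-frame perceptual hashes and grouping items whose frames agree closely enough. The sketch below uses Hamming distance over 64-bit hashes; the distance threshold, the required match ratio, and the greedy grouping strategy are assumptions.

      HASH_DISTANCE_THRESHOLD = 8   # max differing bits for two frames to match
      MATCH_RATIO = 0.8             # fraction of compared frames that must match

      def hamming(a: int, b: int) -> int:
          return bin(a ^ b).count("1")

      def items_match(hashes_a, hashes_b) -> bool:
          """Compare perceptual hashes for the overlapping portion of two items."""
          n = min(len(hashes_a), len(hashes_b))
          if n == 0:
              return False
          matches = sum(hamming(hashes_a[i], hashes_b[i]) <= HASH_DISTANCE_THRESHOLD
                        for i in range(n))
          return matches / n >= MATCH_RATIO

      def group_matching(items):
          """`items` maps item id -> list of per-frame 64-bit hashes. Greedily
          assigns each item to the first group whose representative it matches."""
          groups = []   # list of (representative_hashes, [item_ids])
          for item_id, hashes in items.items():
              for rep, members in groups:
                  if items_match(rep, hashes):
                      members.append(item_id)
                      break
              else:
                  groups.append((hashes, [item_id]))
          return [members for _, members in groups]
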
  • Publication number: 20200342910
    Abstract: A computer readable storage medium has stored thereon a prerecorded video experience container. The prerecorded video experience container includes a prerecorded video file that displays visual content, an executable experience which upon execution enables presentation of additional content associated with the prerecorded video file, and an interactive region of the prerecorded video file, wherein the interactive region of the prerecorded video file is associated with the executable experience such that a user interaction with the interactive region executes the executable experience. The prerecorded video file is displayed in response to a selection of the prerecorded video experience container. The executable experience associated with the prerecorded video file is executed in response to identifying an interaction with the interactive region. The additional content associated with the prerecorded video file is displayed in response to executing the executable experience.
    Type: Application
    Filed: June 18, 2020
    Publication date: October 29, 2020
    Inventors: Richard Rabbat, Ernestine Fu
  • Patent number: 10699748
    Abstract: A computer readable storage medium has stored thereon a prerecorded video experience container. The prerecorded video experience container includes a prerecorded video file that displays visual content, an executable experience which upon execution enables presentation of additional content associated with the prerecorded video file, and an interactive region of the prerecorded video file, wherein the interactive region of the prerecorded video file is associated with the executable experience such that a user interaction with the interactive region executes the executable experience. The prerecorded video file is displayed in response to a selection of the prerecorded video experience container. The executable experience associated with the prerecorded video file is executed in response to identifying an interaction with the interactive region. The additional content associated with the prerecorded video file is displayed in response to executing the executable experience.
    Type: Grant
    Filed: April 25, 2018
    Date of Patent: June 30, 2020
    Assignee: Gfycat, Inc.
    Inventors: Richard Rabbat, Ernestine Fu
  • Patent number: 10665266
    Abstract: In a device and method for integrating a prerecorded video file into a video, a video of a scene is displayed on a display device of a mobile electronic device. A prerecorded video file to render on the display device is received. A modified prerecorded video file is generated by modifying a visual appearance of the prerecorded video file, where the modifying is for integrating the modified prerecorded video file into the scene. The modified prerecorded video file is superimposed over the video, such that the video is partially obscured by the modified prerecorded video file. The modified prerecorded video file is played while displaying the video, such that the modified prerecorded video file and a non-obscured portion of the video are rendered simultaneously.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: May 26, 2020
    Assignee: Gfycat, Inc.
    Inventors: Richard Rabbat, Ernestine Fu, Kasey Wang
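    Illustrative sketch: at its core the method above modifies the appearance of a prerecorded clip and composites it over the live video so the clip and the non-obscured scene render together. The sketch below uses NumPy and simple alpha blending in a fixed region; the choice of modification and the blending parameters are assumptions.

      import numpy as np

      def superimpose(scene_frame: np.ndarray, clip_frame: np.ndarray,
                      top_left=(0, 0), alpha=0.85) -> np.ndarray:
          """Overlay `clip_frame` onto `scene_frame` at `top_left`, partially
          obscuring the scene while the rest of it remains visible."""
          out = scene_frame.copy()
          y, x = top_left
          h, w = clip_frame.shape[:2]
          region = out[y:y + h, x:x + w].astype(float)
          blended = alpha * clip_frame.astype(float) + (1 - alpha) * region
          out[y:y + h, x:x + w] = blended.astype(scene_frame.dtype)
          return out

      scene = np.zeros((480, 640, 3), dtype=np.uint8)      # stand-in camera frame
      clip = np.full((120, 160, 3), 200, dtype=np.uint8)   # stand-in clip frame
      composited = superimpose(scene, clip, top_left=(10, 10))
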
  • Publication number: 20200162792
    Abstract: In a computer-implemented method for generating an interactive digital video content item, a digital video content item is accessed. Subject recognition is performed on the digital video content item, wherein the subject recognition automatically identifies a visual subject within the digital video content item. Responsive to identifying the visual subject, an interactive region is applied to the visual subject within the digital video content item, wherein the interactive region enables presentation of content related to the visual subject in response to a user interaction with the interactive region during presentation of the digital video content item.
    Type: Application
    Filed: November 19, 2018
    Publication date: May 21, 2020
    Applicant: Gfycat, Inc.
    Inventors: Richard Rabbat, Ernestine Fu
  • Publication number: 20200137447
    Abstract: In a computer-implemented method for identifying altered digital video content, a digital video content item is accessed. A visually identifiable region is identified within the digital video content item. A mask is applied to the visually identifiable region of the digital video content item, wherein the mask blocks the visually identifiable region of the digital video content item. The digital video content item with the mask is compared to other digital video content items, wherein the comparing disregards the visually identifiable region according to the mask. Provided the digital video content item is identified as similar to at least one other digital video content item, the visually identifiable region of the digital video content item is compared to a visually identifiable region of the at least one other digital video content item.
    Type: Application
    Filed: October 24, 2018
    Publication date: April 30, 2020
    Applicant: Gfycat, Inc.
    Inventors: Richard Rabbat, Ernestine Fu
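    Illustrative sketch: the two-stage comparison above first matches items with the identifiable region (e.g. a watermark or logo) masked out, then compares only the identifiable regions of items that otherwise match, which flags copies whose mark has been altered. The frame representation, similarity metric, and tolerance below are assumptions.

      import numpy as np

      def masked(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
          """Zero out the visually identifiable region (where mask is True)."""
          out = frame.copy()
          out[mask] = 0
          return out

      def similar(a: np.ndarray, b: np.ndarray, tol: float = 10.0) -> bool:
          # Crude per-pixel similarity; a real system might use perceptual hashes.
          return float(np.abs(a.astype(float) - b.astype(float)).mean()) < tol

      def is_altered_copy(frame: np.ndarray, other: np.ndarray,
                          mask: np.ndarray) -> bool:
          """True when the underlying content matches with the identifiable
          region disregarded, but the identifiable regions themselves differ."""
          if not similar(masked(frame, mask), masked(other, mask)):
              return False   # not the same underlying content
          return not similar(frame[mask], other[mask])
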
  • Patent number: 10631036
    Abstract: In a computer-implemented method for identifying altered digital video content, a digital video content item is accessed. A visually identifiable region is identified within the digital video content item. A mask is applied to the visually identifiable region of the digital video content item, wherein the mask blocks the visually identifiable region of the digital video content item. The digital video content item with the mask is compared to other digital video content items, wherein the comparing disregards the visually identifiable region according to the mask. Provided the digital video content item is identified as similar to at least one other digital video content item, the visually identifiable region of the digital video content item is compared to a visually identifiable region of the at least one other digital video content item.
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: April 21, 2020
    Assignee: Gfycat, Inc.
    Inventors: Richard Rabbat, Ernestine Fu
  • Patent number: 10522187
    Abstract: In a method for tracking interactivity with a prerecorded video file superimposed into a video, presentation instructions for displaying a prerecorded video file are displayed on a display device of a mobile electronic device, the presentation instructions including display conditions for displaying the prerecorded video file. A video of a scene is displayed on the display device of the mobile electronic device. Responsive to detecting at least one display condition of the display conditions, the prerecorded video file is displayed on the display device of the mobile electronic device, such that the video is partially obscured by the prerecorded video file. Responsive to the displaying the prerecorded video file, a display instance for the prerecorded video file is logged.
    Type: Grant
    Filed: October 18, 2018
    Date of Patent: December 31, 2019
    Assignee: Gfycat, Inc.
    Inventors: Richard Rabbat, Ernestine Fu
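    Illustrative sketch: the flow above checks presentation instructions against the current scene, displays the clip when a display condition is met, and logs the display instance. The condition predicates and the in-memory log below are assumptions standing in for real scene detection and analytics.

      import time

      display_log = []   # in-memory stand-in for an analytics sink

      def maybe_display(clip_id, conditions, scene_state):
          """If any display condition holds for the current scene state, the
          clip would be rendered over the video and a display instance is
          logged."""
          for condition in conditions:
              if condition(scene_state):
                  display_log.append({"clip": clip_id,
                                      "condition": condition.__name__,
                                      "timestamp": time.time()})
                  return True
          return False

      def flat_surface_detected(scene_state):
          return scene_state.get("flat_surface", False)

      maybe_display("promo.mp4", [flat_surface_detected], {"flat_surface": True})
      print(display_log)
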
  • Publication number: 20190379618
    Abstract: In a computer-implemented method for presenting visual media, a text string including communication between a first user using a messaging application at a first electronic device and a second user using the messaging application at a second electronic device is received. The text string is analyzed to identify a sentiment of the communication. Visual media representative of the sentiment is displayed within the messaging application and proximate the communication.
    Type: Application
    Filed: June 11, 2019
    Publication date: December 12, 2019
    Applicant: Gfycat, Inc.
    Inventors: Richard Rabbat, Ernestine Fu, Hanna Xu
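    Illustrative sketch: the method above maps the sentiment of a message to visual media displayed next to it. The keyword-based sentiment guess and the media table below are toy assumptions; a production system would use a real sentiment model and media catalog.

      POSITIVE = {"great", "love", "awesome", "congrats", "thanks"}
      NEGATIVE = {"sad", "sorry", "hate", "ugh", "terrible"}

      SENTIMENT_MEDIA = {
          "positive": "https://example.com/media/celebration.gif",
          "negative": "https://example.com/media/consolation.gif",
          "neutral": None,
      }

      def sentiment_of(text: str) -> str:
          words = set(text.lower().split())
          if words & POSITIVE:
              return "positive"
          if words & NEGATIVE:
              return "negative"
          return "neutral"

      def media_for_message(text: str):
          """Return the media to render proximate to the message, if any."""
          return SENTIMENT_MEDIA[sentiment_of(text)]

      print(media_for_message("Congrats on the new job!"))
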
  • Publication number: 20190377756
    Abstract: In a computer-implemented method for performing intent-based search of media files, a search query for searching media files of a library of media files is received from an electronic device. The search query includes a user-entered search term and additional search information related to an intent of a user. The library of media files is searched for media files according to the search query. Search results are returned to the electronic device, the search results including a listing of media files satisfying the user-entered search term and prioritized according to the additional search information related to the intent of the user.
    Type: Application
    Filed: June 11, 2019
    Publication date: December 12, 2019
    Applicant: Gfycat, Inc.
    Inventors: Richard Rabbat, Ernestine Fu, Patrick Rogers
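    Illustrative sketch: the search above combines a user-entered term with additional intent signals and uses those signals to rank, rather than filter, the matching media files. The metadata fields, intent signals, and scoring weights below are assumptions.

      LIBRARY = [
          {"id": 1, "tags": {"cat", "funny"}, "category": "animals", "popularity": 90},
          {"id": 2, "tags": {"cat", "cute"},  "category": "animals", "popularity": 40},
          {"id": 3, "tags": {"car", "fast"},  "category": "vehicles", "popularity": 70},
      ]

      def search(term, intent):
          """Return items matching `term`, ranked by how well they fit the
          intent signals (e.g. a preferred category or a popularity bias)."""
          matches = [item for item in LIBRARY if term in item["tags"]]

          def score(item):
              s = 0.0
              if item["category"] == intent.get("preferred_category"):
                  s += 10.0
              s += intent.get("popularity_weight", 0.0) * item["popularity"]
              return s

          return sorted(matches, key=score, reverse=True)

      print(search("cat", {"preferred_category": "animals", "popularity_weight": 0.1}))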