Patents by Inventor Nathan James Frey
Nathan James Frey has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12236514
Abstract: A method for efficient dynamic video rendering is described for certain implementations. The method may include identifying a file for rendering a video comprising one or more static layers and one or more dynamic layers, detecting, based on analyzing one or more fields of the file for rendering a video, the one or more static layers and the one or more dynamic layers, wherein each dynamic layer comprises a comment that indicates a variable component, rendering the one or more static layers of the file, receiving, from a user device, a request for the video that includes user information, determining, based on the user information, variable definitions designated to be inserted into a dynamic layer, rendering the one or more dynamic layers using the variable definitions, and generating a composite video for playback from the rendered one or more static layers and the rendered one or more dynamic layers.
Type: Grant
Filed: May 14, 2020
Date of Patent: February 25, 2025
Assignee: Google LLC
Inventors: Nathan James Frey, Zheng Sun, Yifan Zou, Sandor Miklos Szego
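The abstract above describes pre-rendering static layers once and rendering dynamic layers per request from user-derived variable definitions. A minimal Python sketch of that idea follows; the names (Layer, render_layer, serve_video) and the variable format are illustrative assumptions, not the patented implementation.

```python
# Minimal sketch of the static/dynamic-layer idea from the abstract above.
# All names and the variable format are illustrative assumptions,
# not the patented implementation.
from dataclasses import dataclass
from string import Template
from typing import Optional

@dataclass
class Layer:
    content: str
    comment: str = ""  # a comment marking a variable component makes the layer "dynamic"

    @property
    def is_dynamic(self) -> bool:
        return bool(self.comment)

def render_layer(layer: Layer, variables: Optional[dict] = None) -> str:
    """Stand-in for an actual rendering step; dynamic layers get variables substituted."""
    if layer.is_dynamic and variables:
        return Template(layer.content).safe_substitute(variables)
    return layer.content

def serve_video(layers: list, user_info: dict) -> str:
    # Static layers can be rendered once, ahead of any request.
    static_frames = [render_layer(l) for l in layers if not l.is_dynamic]
    # Variable definitions are derived from the requesting user's information
    # and inserted into the dynamic layers at request time.
    variables = {"city": user_info.get("city", "your city")}
    dynamic_frames = [render_layer(l, variables) for l in layers if l.is_dynamic]
    # Composite the rendered static and dynamic layers into one playable result.
    return " | ".join(static_frames + dynamic_frames)

if __name__ == "__main__":
    layers = [
        Layer(content="intro animation"),
        Layer(content="Visit our store in $city!", comment="VARIABLE: city"),
        Layer(content="outro logo"),
    ]
    print(serve_video(layers, {"city": "Austin"}))
```

The point of the split is that the expensive static rendering happens once, while only the small dynamic portion is re-rendered per request.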
-
Publication number: 20250061922
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating videos. In one aspect, a method comprises: receiving: (i) an input video comprising a sequence of video frames, and (ii) data indicating a target object type; processing the input video to generate tracking data that identifies and tracks visual locations of one or more instances of target objects of the target object type in the input video; generating a plurality of sub-videos based on the input video and the tracking data, including: for each sub-video, generating a respective sequence of sub-video frames that are each extracted from a respective video frame of the input video to include a respective instance of a given target object from among the identified target objects of the target object type; and generating an output video that comprises the plurality of sub-videos.
Type: Application
Filed: November 5, 2024
Publication date: February 20, 2025
Inventors: Nathan James Frey, Zheng Sun
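This abstract (shared by the related filings listed below) describes cropping one sub-video per tracked object instance and assembling the crops into an output video. A short Python sketch of that flow follows; the tracking-data format and helper names are assumptions for illustration only.

```python
# Illustrative sketch: given per-frame bounding boxes for each tracked instance
# of a target object type, crop one sub-video per instance and concatenate the
# sub-videos into an output video. Data formats here are assumptions.
import numpy as np

def crop(frame: np.ndarray, box: tuple) -> np.ndarray:
    """Extract the region (x0, y0, x1, y1) of a frame."""
    x0, y0, x1, y1 = box
    return frame[y0:y1, x0:x1]

def make_sub_videos(frames: list, tracks: dict) -> dict:
    """tracks maps instance_id -> list of per-frame boxes (one box per frame)."""
    sub_videos = {}
    for instance_id, boxes in tracks.items():
        sub_videos[instance_id] = [crop(f, b) for f, b in zip(frames, boxes)]
    return sub_videos

if __name__ == "__main__":
    frames = [np.zeros((90, 160, 3), dtype=np.uint8) for _ in range(4)]
    # Two tracked instances of the target object type, e.g. "shoe".
    tracks = {
        "shoe_0": [(10, 10, 60, 60)] * 4,
        "shoe_1": [(80, 20, 140, 80)] * 4,
    }
    subs = make_sub_videos(frames, tracks)
    # A simple output video: play the sub-videos one after another.
    output = [clip for sub in subs.values() for clip in sub]
    print(len(output), "output frames from", len(subs), "sub-videos")
```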
-
Patent number: 12176006
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating videos. In one aspect, a method comprises: receiving: (i) an input video comprising a sequence of video frames, and (ii) data indicating a target object type; processing the input video to generate tracking data that identifies and tracks visual locations of one or more instances of target objects of the target object type in the input video; generating a plurality of sub-videos based on the input video and the tracking data, including: for each sub-video, generating a respective sequence of sub-video frames that are each extracted from a respective video frame of the input video to include a respective instance of a given target object from among the identified target objects of the target object type; and generating an output video that comprises the plurality of sub-videos.
Type: Grant
Filed: January 23, 2024
Date of Patent: December 24, 2024
Assignee: Google LLC
Inventors: Nathan James Frey, Zheng Sun
-
Publication number: 20240161783
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating videos. In one aspect, a method comprises: receiving: (i) an input video comprising a sequence of video frames, and (ii) data indicating a target object type; processing the input video to generate tracking data that identifies and tracks visual locations of one or more instances of target objects of the target object type in the input video; generating a plurality of sub-videos based on the input video and the tracking data, including: for each sub-video, generating a respective sequence of sub-video frames that are each extracted from a respective video frame of the input video to include a respective instance of a given target object from among the identified target objects of the target object type; and generating an output video that comprises the plurality of sub-videos.
Type: Application
Filed: January 23, 2024
Publication date: May 16, 2024
Inventors: Nathan James Frey, Zheng Sun
-
Patent number: 11915724
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating videos. In one aspect, a method comprises: receiving: (i) an input video comprising a sequence of video frames, and (ii) data indicating a target object type; processing the input video to generate tracking data that identifies and tracks visual locations of one or more instances of target objects of the target object type in the input video; generating a plurality of sub-videos based on the input video and the tracking data, including: for each sub-video, generating a respective sequence of sub-video frames that are each extracted from a respective video frame of the input video to include a respective instance of a given target object from among the identified target objects of the target object type; and generating an output video that comprises the plurality of sub-videos.
Type: Grant
Filed: June 22, 2020
Date of Patent: February 27, 2024
Assignee: Google LLC
Inventors: Nathan James Frey, Zheng Sun
-
Publication number: 20230095856
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating videos. In one aspect, a method comprises: receiving: (i) an input video comprising a sequence of video frames, and (ii) data indicating a target object type; processing the input video to generate tracking data that identifies and tracks visual locations of one or more instances of target objects of the target object type in the input video; generating a plurality of sub-videos based on the input video and the tracking data, including: for each sub-video, generating a respective sequence of sub-video frames that are each extracted from a respective video frame of the input video to include a respective instance of a given target object from among the identified target objects of the target object type; and generating an output video that comprises the plurality of sub-videos.
Type: Application
Filed: June 22, 2020
Publication date: March 30, 2023
Inventors: Nathan James Frey, Zheng Sun
-
Publication number: 20230058512
Abstract: A method for efficient dynamic video rendering is described for certain implementations. The method may include identifying a file for rendering a video comprising one or more static layers and one or more dynamic layers, detecting, based on analyzing one or more fields of the file for rendering a video, the one or more static layers and the one or more dynamic layers, wherein each dynamic layer comprises a comment that indicates a variable component, rendering the one or more static layers of the file, receiving, from a user device, a request for the video that includes user information, determining, based on the user information, variable definitions designated to be inserted into a dynamic layer, rendering the one or more dynamic layers using the variable definitions, and generating a composite video for playback from the rendered one or more static layers and the rendered one or more dynamic layers.
Type: Application
Filed: May 14, 2020
Publication date: February 23, 2023
Inventors: Nathan James Frey, Zheng Sun, Yifan Zou, Sandor Miklos Szego
-
Publication number: 20220301118
Abstract: A method for replacing an object in an image. The method may include identifying a first object at a position within a first image, masking, based on the first image and the position of the first object, a target area to produce a masked image, generating, based on the masked image and an inpainting machine learning model, a second image different from the first image, the inpainting machine learning model being trained using a difference between the target area of training images and content of generated images at locations corresponding to the target area of the training images, generating, based on the masked image and the second image, a third image, and adding, to the third image, a new object different from the first object.
Type: Application
Filed: May 13, 2020
Publication date: September 22, 2022
Inventors: Nathan James Frey, Vinay Kotikalapudi Sriram
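The abstract above outlines a mask, inpaint, and re-composite flow. The Python sketch below illustrates that sequence under loose assumptions: the inpaint() stand-in simply fills the target area with the surrounding pixel mean, whereas the described method uses a trained inpainting model, and all function names are hypothetical.

```python
# Minimal sketch of the masking/inpainting/replacement flow described above.
# inpaint() is a crude placeholder for a trained inpainting model; all names
# here are illustrative assumptions, not the described implementation.
import numpy as np

def mask_region(image: np.ndarray, box: tuple) -> tuple:
    """Zero out the target area occupied by the first object; return masked image and mask."""
    x0, y0, x1, y1 = box
    masked = image.copy()
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[y0:y1, x0:x1] = True
    masked[mask] = 0
    return masked, mask

def inpaint(masked: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Placeholder for the inpainting model: fill the target area with the image mean."""
    filled = masked.copy()
    filled[mask] = masked[~mask].mean(axis=0).astype(masked.dtype)
    return filled

def replace_object(image: np.ndarray, box: tuple, new_object: np.ndarray) -> np.ndarray:
    masked, mask = mask_region(image, box)  # remove the first object
    background = inpaint(masked, mask)      # second image: scene without the object
    x0, y0, x1, y1 = box
    result = background.copy()              # third image: background plus the new object
    result[y0:y0 + new_object.shape[0], x0:x0 + new_object.shape[1]] = new_object
    return result

if __name__ == "__main__":
    image = np.full((100, 100, 3), 128, dtype=np.uint8)
    new_obj = np.full((30, 30, 3), 255, dtype=np.uint8)
    out = replace_object(image, (20, 20, 50, 50), new_obj)
    print(out.shape)
```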