Abstract: To allow better quality rendering of video on any display, a method is proposed of encoding, in addition to video data (VID), additional data (DD) comprising at least one change time instant (TMA_1) indicating a change in time of a characteristic luminance (CHRLUM) of the video data, which characteristic luminance summarizes the set of luminances of pixels in an image of the video data, the method comprising: generating on the basis of the video data (VID) descriptive data (DED) regarding the characteristic luminance variation of the video, the descriptive data comprising at least one change time instant (TMA_1), and encoding and outputting the descriptive data (DED) as additional data (DD).
Type:
Grant
Filed:
January 23, 2024
Date of Patent:
March 25, 2025
Assignee:
Koninklijke Philips N.V.
Inventors:
Chris Damkat, Gerard De Haan, Mark Jozef Willem Mertens, Remco Muijs, Martin Hammer, Philip Steven Newton
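The abstract above summarizes the per-frame pixel luminances into one characteristic luminance and records the time instants at which it changes. A minimal sketch of that idea, where the mean as the summary statistic, the frame rate, and the 0.2 threshold are illustrative assumptions rather than the patented method:

```python
# Summarize each frame's pixel luminances into one characteristic luminance,
# then record the time instants at which that characteristic jumps sharply.
# mean(), fps=25, and threshold=0.2 are assumptions for illustration only.

def characteristic_luminance(frame):
    """Summarize the set of pixel luminances of one frame (here: the mean)."""
    return sum(frame) / len(frame)

def change_time_instants(frames, fps=25.0, threshold=0.2):
    """Return the times (seconds) where the characteristic luminance jumps."""
    instants = []
    prev = characteristic_luminance(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = characteristic_luminance(frame)
        if abs(cur - prev) > threshold:
            instants.append(i / fps)  # a change time instant (TMA_1, ...)
        prev = cur
    return instants

# Three dark frames followed by two bright frames -> one change at frame 3.
frames = [[0.1, 0.1], [0.1, 0.1], [0.12, 0.1], [0.9, 0.8], [0.9, 0.9]]
print(change_time_instants(frames))  # -> [0.12]
```

The list of instants is exactly the kind of descriptive data (DED) the abstract proposes encoding alongside the video, so a display can anticipate luminance changes.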
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for trimming video. The program and method provide for providing a capture user interface for capturing a video to generate a media content item; displaying a preview user interface for previewing and editing the captured video to generate the media content item, the preview user interface including an interface element for selecting to trim the captured video or to set a playback option for the media content item; receiving, via the interface element, user input selecting to trim the captured video; and displaying a preview bar within the preview user interface, the preview bar including a set of frames of the captured video and front and back handles respectively positioned in front and in back of the set of frames, each of the front and back handles being selectable to trim video.
Abstract: Disclosed herein is a receiver that generates a video signal. A first video is assigned to a first display layer of a plurality of display layers. A first trigger signal causes a first application to assign a first user interface to a second display layer, which is positioned over the first display layer. A second trigger signal causes a second application to assign a second user interface to a third display layer, which is positioned over the second display layer. The second trigger signal causes a reduction in a size of the first video, and a first portion of the third display layer and a second portion of the second display layer to be made transparent. The reduced size of the first video is positioned beneath the first portion. A combination of the first display layer, the second display layer, and the third display layer generates the video signal.
Abstract: Devices, systems, and processes for reducing interruptions due to a presentation timestamp restart (PTSrs) are provided. A process includes receiving content data packets identifiable by a timestamp. The timestamps vary between a PTSmin and a PTSmax. When PTSmax is reached, a next data packet is restarted at a value substantially equal to PTSmin. The process includes first determining whether one of the timestamps has restarted and, if so, generating a loop over index file associating a first timestamp with a first index value (A), a second timestamp with a second index value (B), a third timestamp with a third index value (C), and a fourth timestamp with a fourth index value (D). PTSrs may be detected when the second timestamp is greater than the fourth timestamp or when the first timestamp is greater than the third timestamp. When a restart occurs, adjustments to the content playback sequence are made using the index values.
Type:
Grant
Filed:
July 12, 2023
Date of Patent:
February 4, 2025
Assignee:
DISH Network Technologies India Private Ltd.
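The restart test in the abstract above compares four recorded timestamps, and the index values are then used to keep playback ordering monotonic. A hedged sketch of both steps, assuming the 33-bit MPEG PTS range for the wrap point (the abstract itself only names PTSmin/PTSmax):

```python
# Detect a presentation timestamp restart (PTSrs) from four consecutive
# recorded timestamps, then unwrap wrapped timestamps onto a monotonic axis.
# The 33-bit wrap constant is an assumption based on MPEG PTS conventions.

PTS_MAX = 2**33  # assumed PTS range; the abstract only names PTSmin/PTSmax

def pts_restarted(ts_a, ts_b, ts_c, ts_d):
    """The abstract's test: restart when B > D or A > C."""
    return ts_b > ts_d or ts_a > ts_c

def unwrap(timestamps):
    """Map wrapped timestamps onto a monotonic axis for playback ordering."""
    offset, prev, out = 0, None, []
    for ts in timestamps:
        if prev is not None and ts < prev:  # restart: ts fell back toward PTSmin
            offset += PTS_MAX
        out.append(ts + offset)
        prev = ts
    return out

print(pts_restarted(100, 200, 300, 400))  # monotonic -> False
print(unwrap([100, 200, 50, 150]))  # -> [100, 200, 8589934642, 8589934742]
```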
Abstract: Noise compensation method comprising: (a) receiving a content stream including content audio data; (b) receiving first microphone signals from a first device; (c) detecting ambient noise from a noise source location in or near the audio environment; (d) causing a first wireless signal to be transmitted from the first device to a second device, the first wireless signal including instructions for the second device to record an audio segment; (e) receiving a second wireless signal from the second device; (f) determining a content stream audio segment time interval for a content stream audio segment; (g) receiving a third wireless signal from the second device, including a recorded audio segment captured via a second device microphone; (h) determining a second device ambient noise signal at the second device location; and (i) implementing a noise compensation method for the content audio data based, at least in part, on the second device ambient noise signal.
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for providing different camera modes. The program and method provide for displaying, by a messaging application, a capture user interface for capturing video according to a first camera mode for capturing a single video clip to generate a media content item; providing, by the messaging application, a camera mode selection element within the capture user interface, the camera mode selection element being selectable to switch from the first camera mode to a second camera mode for capturing multiple video clips for combining to generate the media content item; receiving, via the capture user interface, user input selecting the camera mode selection element; and updating, by the messaging application and in response to receiving the user input, the capture user interface for video capture according to the second camera mode.
Type:
Grant
Filed:
December 20, 2021
Date of Patent:
October 1, 2024
Assignee:
Snap Inc.
Inventors:
Kaveh Anvaripour, Christine Barron, Nathan Kenneth Boyd, Wayne Mike Cao, Ranidu Lankage
Abstract: Technologies are disclosed herein for providing content management for deploying and updating a fleet of resources. A system for providing content management may include a web server or other apparatus configured to receive a local content request from a local device of a fleet of resources, the local content request comprising a canonical uniform resource locator (URL) that uniquely identifies the local device. The web server may be further configured to analyze the local content request to determine if the URL matches one or more rewrite rules, formulate a response to the local content request based on the analyzing, and transmit the formulated response.
Abstract: The present disclosure relates to a video processing method and apparatus, a computer-readable medium and an electronic device. The method is applied to a first terminal and includes: identifying a target object in a current video frame; receiving a special effect setting instruction input by a user; determining a plurality of special effects to be superposed according to the special effect setting instruction; and superposing the plurality of special effects onto the target object to acquire a processed video frame. In this way, synchronous superposition of a plurality of special effects can be implemented in one video processing pass based on the current video frame, so that the plurality of special effects take effect at the same time, thereby improving the processing efficiency of special effects. In addition, an unnecessary intermediate video rendering process is omitted, which helps improve terminal performance and user experience.
Abstract: Video and corresponding metadata is accessed. Events of interest within the video are identified based on the corresponding metadata, and best scenes are identified based on the identified events of interest. A video summary can be generated including one or more of the identified best scenes. The video summary can be generated using a video summary template with slots corresponding to video clips selected from among sets of candidate video clips. Best scenes can also be identified by receiving an indication of an event of interest within video from a user during the capture of the video. Metadata patterns representing activities identified within video clips can be identified within other videos, which can subsequently be associated with the identified activities.
Abstract: Techniques are described for pre-exporting chunks of video content during video editing of a video editing project. For example, the chunks of the video editing project can be monitored for changes. When a change is detected to a chunk, the chunk can be pre-exported as an independent chunk that is combinable with other pre-exported chunks and without encoding or re-encoding the pre-exported chunks. In addition, the monitoring and pre-exporting can be performed while the video editing project is editable by a user of the video editing project. When the video editing project is ready to be finalized, the pre-exported chunks can be combined to generate, at least in part, a media file. The generated media file can then be output.
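The monitoring-and-pre-export flow above can be sketched as change detection over chunk hashes: only chunks whose content changed are re-exported, and finalization combines cached exports without re-encoding. The hash-based change test and the `enc(...)` placeholder are assumptions; the abstract does not specify how changes are detected or how chunks are encoded.

```python
# Hash each chunk of the editing project, re-"export" only chunks whose hash
# changed, and combine the cached exports without touching unchanged ones.
# sha256 change detection and the enc(...) placeholder are assumptions.
import hashlib

cache = {}  # chunk index -> (content hash, pre-exported result)

def pre_export(chunks):
    """Re-export only the chunks that changed since the last call."""
    exported = 0
    for i, chunk in enumerate(chunks):
        digest = hashlib.sha256(chunk.encode()).hexdigest()
        if cache.get(i, (None, None))[0] != digest:
            cache[i] = (digest, f"enc({chunk})")  # stand-in for real encoding
            exported += 1
    return exported

def finalize(chunks):
    """Combine pre-exported chunks into the final media file."""
    return "|".join(cache[i][1] for i in range(len(chunks)))

print(pre_export(["a", "b", "c"]))  # first pass encodes all -> 3
print(pre_export(["a", "B", "c"]))  # only the edited chunk  -> 1
print(finalize(["a", "B", "c"]))    # -> enc(a)|enc(B)|enc(c)
```

Because unchanged chunks keep their cached export, finalization is a cheap concatenation rather than a full re-encode, which is the efficiency claim of the abstract.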
Abstract: An apparatus 103 stores, in a storage unit, a plurality of parameters for generation of a virtual viewpoint image based on a plurality of captured images, the plurality of parameters including a parameter representing a time and a parameter representing a position of a virtual viewpoint and a direction of view from a virtual viewpoint corresponding to the time, and causes, in accordance with a switching operation performed while a virtual viewpoint image is being displayed on a display unit, the display unit to display a virtual viewpoint image corresponding to a parameter representing a time selected from the plurality of parameters stored in the storage unit based on the switching operation, and corresponding to a parameter representing a position of a virtual viewpoint and a direction of view from a virtual viewpoint corresponding to the virtual viewpoint image being displayed.
Abstract: An apparatus and method for editing an image including dynamic tone metadata in an electronic device are provided. The electronic device includes a display, and at least one processor operatively connected to the display, wherein the at least one processor may be configured to generate a third image to be inserted between a first image and a second image continuous with the first image among a plurality of images belonging to video content, generate dynamic tone metadata of the third image based on dynamic tone metadata of the first image and the second image, and update the video content by adding the third image and the dynamic tone metadata of the third image.
Type:
Grant
Filed:
November 30, 2020
Date of Patent:
June 6, 2023
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Chansik Park, Yongchul Kim, Jungik Seo, Jaehun Cho, Hyunsoo Kim, Donghyun Yeom
Abstract: An audio and video processing method includes: displaying a video creation interface of a target audio, where the video creation interface includes n audio clips of the target audio and video recording entries corresponding to the n audio clips respectively, where n≥2; receiving a trigger signal acting on a target video recording entry on the video creation interface, where the target video recording entry is the video recording entry corresponding to a target audio clip; acquiring a target video corresponding to the target audio clip based on the trigger signal, where the target video is a video clip whose duration is less than a duration threshold; and sending a video creation request carrying the target video to a server, where the video creation request is used to instruct playing the picture information of the target video when the target audio is played.
Abstract: A system and method for automatically preparing personalized video presentations using a dynamic scene replacement engine which uses data points relating to a specific viewer to optimize the content of a video presentation for that specific viewer in order to increase the overall emotional effectiveness of the video presentation. The system and method for automatically preparing personalized video presentations operates to identify stock personalizing video content clips which can replace generic scenes in a raw video presentation to add personalizing material designed to appeal to the particular viewer to the presentation. Through this action, a unique personalized video presentation may be automatically prepared on demand for every particular viewer.
Type:
Grant
Filed:
August 7, 2020
Date of Patent:
May 16, 2023
Inventors:
Timothy Kenneth Moore, Joseph Jonathan Register
Abstract: A video generating method, an apparatus, an electronic device, and a computer-readable medium are provided. The method includes: acquiring a first video set and an audio material; determining a first music point of the audio material according to the number of video materials in the first video set; generating, according to the sorting order of the video materials in the first video set, one video clip for each first music clip in the audio material by respectively using one video material, so as to obtain a first video sequence; adjusting, in response to detecting an editing operation on a video clip in the first video sequence, the video clip in the first video sequence, so as to obtain a second video sequence; and splicing together the video clips in the second video sequence, and adding the audio material as a video audio track to obtain a composite video.
Abstract: Aspects of the subject disclosure may include, for example, obtaining one or more signals, the one or more signals being based upon brain activity of a viewer while the viewer is viewing media content; predicting, based upon the one or more signals, a first predicted desired viewport of the viewer; obtaining head movement data associated with the media content; predicting, based upon the head movement data, a second predicted desired viewport of the viewer; comparing the first predicted desired viewport to the second predicted desired viewport, resulting in a comparison; and determining, based upon the comparison, to use the first predicted desired viewport to facilitate obtaining a first subsequent portion of the media content or to use the second predicted desired viewport to facilitate obtaining a second subsequent portion of the media content. Other embodiments are disclosed.
Abstract: In accordance with an embodiment, a method includes receiving, by at least one of a plurality of battery monitoring circuits, a frequency synchronization signal and measurement frequency information from a host controller, wherein the at least one of the plurality of battery monitoring circuits is connected to at least one of a plurality of battery blocks; generating, by the at least one of the plurality of battery monitoring circuits, a periodic signal based on a clock signal having a clock frequency, the measurement frequency information, and the frequency synchronization signal; obtaining, by the at least one of the plurality of battery monitoring circuits, at least one measurement value of the at least one of the plurality of battery blocks using the periodic signal; and transmitting, by the at least one of the plurality of battery monitoring circuits, the at least one measurement value to the host controller.
Type:
Grant
Filed:
January 22, 2021
Date of Patent:
January 10, 2023
Assignee:
Infineon Technologies AG
Inventors:
Stefano Marsili, Andreas Berger, Klaus Hoermaier, Guenter Hofer, Christoph Sandner
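The battery-monitoring abstract above derives a periodic measurement signal from a clock, the measurement frequency information, and a synchronization signal. A minimal sketch of one plausible realization, where the measurement frequency fixes a divider ratio and the sync signal resets the counter so all circuits sample in phase; the concrete frequencies and the simple divider model are assumptions:

```python
# Derive periodic measurement triggers from a clock via a divider counter.
# The divider model and the 8 MHz / 1 MHz figures are illustrative assumptions.

def trigger_ticks(clock_hz, meas_hz, n_clock_cycles, sync_at=0):
    """Return clock-cycle indices at which a measurement is triggered."""
    divider = clock_hz // meas_hz        # clock cycles per measurement period
    ticks, counter = [], 0
    for cycle in range(n_clock_cycles):
        if cycle == sync_at:             # frequency synchronization signal
            counter = 0                  # aligns all monitoring circuits
        if counter == 0:
            ticks.append(cycle)
        counter = (counter + 1) % divider
    return ticks

# 8 MHz clock, 1 MHz measurement rate -> a trigger every 8 cycles.
print(trigger_ticks(8_000_000, 1_000_000, 20))  # -> [0, 8, 16]
```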
Abstract: A system and method are operable within a computer network environment for compiling videos into a compilation, where each video is programmatically inserted into the compilation, and the resulting video compilation plays alongside an audio track preferably sourced using a unique identifier for the audio track. The system includes a solution stack comprising a remote service system and at least one client, which may be operable to generate at least one video to be associated with an audio track section, with such section determined by select start/end times, programmatically identified, or programmatically associated based on selected metadata. The system then operates to compile at least one user-generated video into an audiovisual set which may be presented as a social post, and further into a video compilation which may include additional filler content, to play alongside a section or the entirety of an audio track.
Type:
Grant
Filed:
June 21, 2021
Date of Patent:
January 3, 2023
Assignee:
Vertigo Media, Inc.
Inventors:
Nathan C. Haley, Gregory H. Leekley, Alexander Savenok, Rose J. Yen
Abstract: A video playing control method and apparatus, a device, and a storage medium. The method comprises: when a touch operation on a first touch element on a video playing interface is detected, obtaining feedback content generated on the basis of related information of the touch operation and a preset response strategy, and displaying the feedback content by means of a browser page, the video playing interface, or a local page (S1010); and when a touch operation on a second touch element on the browser page, the video playing interface, or the local page is detected, adjusting the video playing progress according to the touch operation on the second touch element (S1020).
Type:
Grant
Filed:
January 10, 2020
Date of Patent:
December 13, 2022
Assignee:
SHANGHAI MARINE DIESEL ENGINE RESEARCH INSTITUTE
Inventors:
Mengluo Feng, Xiaohang Huang, Jingui Wang
Abstract: A video sequence layout method, an electronic device and a storage medium are provided, relating to the fields of deep learning, virtual reality, cloud computing, video layout processing and the like. The method includes: acquiring a first video sequence, the first video sequence including a main sequence describing a first posture of a human body and a subordinate sequence describing a plurality of second postures of the human body; extracting the main sequence and the subordinate sequence from the first video sequence; and, when it is detected that a sequencing identification frame exists in the first video sequence, performing random mixed sequencing processing on the video frames in the main sequence and the subordinate sequence based on the sequencing identification frame, and taking the sequence combination obtained by the random mixed sequencing processing as a second video sequence.
Abstract: A method includes capturing video data from a camera over a period of time while capturing motion data from a detector and biometric data from another detector over that same period of time. Then, the method additionally includes selecting a subset of the video data captured during the period of time, based on a corresponding subset of the motion data and/or biometric data. The method then prescribes storing this selected subset of video data in memory.
Abstract: A method includes capturing video data from a camera over a period of time while capturing motion data from a detector over that same period of time. Then, the method additionally includes selecting a subset of the video data captured during the period of time, based on a corresponding subset of the motion data. The method then prescribes storing this selected subset of video data in memory.
Abstract: A method for viewing a collection of images or videos, includes analyzing the collection to determine properties of the images or videos and using the determined properties to produce icons corresponding to such properties; providing a time-varying display of the images or videos in the collection following an ordering of the images or videos in the collection and at least one of the corresponding icons; receiving a user selection of an icon; changing the time-varying display of the images or videos in the collection following a reordering of the images or videos in the collection in response to the user selection; storing the sequence of the user selections and associated timing in a script in a processor accessible memory; and playing back the viewing of the collection of images or videos using the script.
Type:
Grant
Filed:
February 21, 2011
Date of Patent:
November 4, 2014
Assignee:
Kodak Alaris Inc.
Inventors:
Jiebo Luo, Dhiraj Joshi, Peter O. Stubler, Madirakshi Das, Phoury Lei, Vivek Kumar Singh
Abstract: A video processing system includes a memory to store video data and a decoder to fetch the video data for processing. The decoder can be configured to perform a first fetch to obtain a luminance data and to perform a second fetch to obtain both a chrominance data and an additional portion of the luminance data.
Abstract: A method of creating a template data structure for a media effect involving a media data item to be presented during a presentation of the media effect is disclosed. The method comprises: defining a time stamp for an event of the template data structure, the time stamp comprising a relative time stamp component indicating a time span within the presentation of the media data item as a portion of a duration of the presentation, and an absolute time offset component indicating a time span independent from the duration of the presentation. A related method defines the processing of a media data item to be presented during a presentation, the related method using the above mentioned time stamps to determine a temporal position of an event to occur during the presentation. A corresponding template creator, a media data processor, and a computer readable medium are also disclosed.
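The template abstract above defines an event time stamp as a relative component (a portion of the media item's presentation duration) plus an absolute offset independent of that duration. Resolving such a stamp against a concrete presentation is a one-line computation; the function and parameter names here are assumptions for illustration:

```python
# Resolve a template time stamp (relative fraction + absolute offset) against
# a concrete presentation duration. Names are illustrative assumptions.

def event_time(relative_fraction, absolute_offset_s, presentation_duration_s):
    """Temporal position of an event during the presentation, in seconds."""
    return relative_fraction * presentation_duration_s + absolute_offset_s

# "Half-way through the item, then 2 seconds later", for a 30 s presentation:
print(event_time(0.5, 2.0, 30.0))  # -> 17.0
```

The point of the split is that the same template yields sensible event times for media items of any duration: the relative part scales, the absolute part does not.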
Abstract: There is provided a playback device including a content playback unit configured to playback content, a playlist acquisition unit configured to acquire, while the playback unit is playing back the content on the basis of a playlist, at least one external playlist from outside, the external playlist having at least two pieces of content that are common to content in the playlist and having a matching playback order of at least the two consecutive pieces of content, and a playlist display unit configured to display the playlist and the external playlist acquired by the playlist acquisition unit such that the playlists are linked at positions of the consecutive matching pieces of content.
Type:
Grant
Filed:
May 3, 2012
Date of Patent:
April 29, 2014
Assignee:
Sony Corporation
Inventors:
Yoshihito Ohki, Tomohiko Hishinuma, Ryohei Morimoto, Junya Ono
Abstract: Scene-based program accessing systems and methods are operable to present a program at a scene corresponding to a selected thumbnail-sized image. An exemplary embodiment selects a plurality of image frames from a program based upon a scene separation duration, generates a thumbnail-sized image from each of the selected image frames, and presents the plurality of thumbnail-sized images on a scene index. The scene index is configured to present the plurality of thumbnail-sized images in a time-ordered sequence corresponding to the subject matter presentation sequence of the program, and each of the selected image frames is temporally separated from the others by the scene separation duration.
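The frame selection in the scene-index abstract above reduces to sampling the program at fixed intervals of the scene separation duration. A minimal sketch, with the durations chosen purely for illustration:

```python
# Pick the presentation times of the frames to turn into thumbnail-sized
# images: one every scene_separation_s seconds, in presentation order.
# The concrete durations below are illustrative assumptions.

def scene_index_times(program_duration_s, scene_separation_s):
    """Times (seconds) of the frames selected for the scene index."""
    times, t = [], 0.0
    while t < program_duration_s:
        times.append(t)
        t += scene_separation_s
    return times

# A 10-minute program indexed every 2 minutes -> five thumbnails.
print(scene_index_times(600.0, 120.0))  # -> [0.0, 120.0, 240.0, 360.0, 480.0]
```

Selecting a thumbnail then amounts to seeking the program to the corresponding entry in this list.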
Abstract: Adjusting the duration of first content having a plurality of segments including: identifying at least one segment of the first content; deleting the at least one identified segment from the first content to form modified content; and inserting transition periods between segments remaining in the modified content, wherein the transition periods are inserted to adjust the duration of the modified content to a desired duration.
Type:
Grant
Filed:
October 6, 2008
Date of Patent:
October 23, 2012
Assignee:
Sony Computer Entertainment America Inc.
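The duration-adjustment abstract above inserts transition periods between the remaining segments so the modified content hits a desired duration. One way to size them, assuming equal-length transitions (the abstract only says transitions are inserted to reach the desired duration):

```python
# Size the (n-1) transitions between remaining segments so that segments plus
# transitions total the desired duration. Equal-length transitions are an
# assumption; the abstract does not fix how the slack is distributed.

def transition_length(remaining_segment_durations, desired_duration):
    """Length (seconds) of each transition between remaining segments."""
    gaps = len(remaining_segment_durations) - 1
    if gaps <= 0:
        return 0.0  # one segment or none: nowhere to insert a transition
    slack = desired_duration - sum(remaining_segment_durations)
    return max(slack / gaps, 0.0)  # never a negative transition

# Three remaining segments totalling 50 s, target 56 s -> two 3 s transitions.
print(transition_length([20.0, 15.0, 15.0], 56.0))  # -> 3.0
```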
Abstract: For reproduction of a transport stream (TS) from a recording medium, it is necessary to control the reproduction so as not to overflow the decoder buffer at the time of starting the reproduction or when switching from special reproduction to normal reproduction. At the times of starting TS reproduction from a recording medium and switching between normal reproduction and special reproduction, selection is made as appropriate between a reproduction technique which performs stream output with timing similar to the input timing used for recording, and another reproduction technique which monitors the amount of data stored up in a decoder buffer and controls the stream output in accordance with the monitored data amount.