Abstract: The technology disclosed relates to scoring user experience of video frames displayed on a mobile or other video display device. In particular, it relates to capture alignment and test stimulus isolation techniques that compensate for artifacts in the capture mechanism. The technology disclosed includes methods and systems for analyzing both downlink and uplink quality for mobile or other video display device cameras capturing and transmitting video frames including teleconference video display. Particular aspects of the technology disclosed are described in the claims, specification and drawings.
Abstract: The present invention is an in-train information display apparatus that displays, to a passenger, advertisement content delivered from an advertisement-content delivering apparatus, and includes a display unit configured to display the advertisement content, a display monitoring unit configured to photograph a video screen displayed on the display unit, and a display-result determining unit configured to calculate color-related information, which is information concerning a color of the video screen, based on an image photographed by the display monitoring unit, determine, based on the calculated color-related information and reference information for display result determination, which is information concerning a color of the advertisement content normally displayed on the display unit, whether the display of the advertisement content is normally performed, and transmit, as information for display achievement calculation, a result of the determination to the advertisement-content delivering apparatus.
Abstract: Systems, methods, and non-transitory computer-readable media can initiate a video capture mode that provides a camera view. A touch gesture can be detected via a touch display. A drawing can be rendered based on the touch gesture. The drawing can be rendered to appear to overlay the camera view. A first video image frame can be acquired based on the camera view. At least a portion of the first video image frame and the drawing can be combined to produce a first combined frame. The drawing can appear to overlay the first video image frame. The first combined frame can be stored in a video buffer.
Abstract: An example information processing device displays a partial area of a panoramic video on a display device. The information processing device determines a display range of the panoramic video to be displayed on the display device based on an input made on a predetermined input device. A range and/or a position on a panoramic image is set as a target. Then, the information processing device displays the panoramic video of the display range on the display device, and outputs guide information representing a relationship between the display range and the target.
Type:
Grant
Filed:
April 24, 2013
Date of Patent:
February 28, 2017
Assignees:
Nintendo Co., Ltd., Hal Laboratory, Inc.
Abstract: A method and system for providing access to television programs without requiring a user to operate an electronic programming guide or to independently determine information required to access the television program. Optionally, access to the television program may be facilitated by scheduling a recording of the television program or providing instructions to facilitate accessing an on-demand showing of the television program.
Abstract: The present invention comprises an input part for inputting image data, a receiving part for receiving production information relating to production transmitted from another apparatus, a recording part for recording the production information received by the receiving part and image data input by the input part, a detection part for detecting a recording position on a recording medium at an editing point of image data recorded by the recording part, and a transmission part for transmitting information of the recording position detected by the detection part, whereby identification information for identifying image data and voice data is recorded in a recording medium or a recording device, thereby relieving the burden on a photographer and an editor and facilitating extraction of image data and voice data.
Abstract: A computer-implemented technique can include receiving, at a server computing device having one or more processors, a first video stream from a first user computing device associated with a first user. The first video stream can include a first image portion. The technique can further include extracting the first image portion of the first video stream in order to generate a first overlay stream, and receiving a second video stream from a second user computing device associated with a second user. A composite video stream can be generated from the first overlay stream and the second video stream. The composite video stream can comprise the first overlay stream superimposed over the second video stream. The composite video stream can be output to the second user computing device.
Abstract: A subtitle processing method includes a subtitle parsing step and a subtitle reading step. The subtitle parsing step includes: dividing a subtitle file into a plurality of subtitle blocks, each of the subtitle blocks including a plurality of subtitle contents, each of the subtitle contents corresponding to a subtitle time; and generating an index table that records characteristic times of the subtitle blocks. The subtitle reading step, for reading a target subtitle content corresponding to a current time, includes: identifying a target subtitle block according to the current time and the characteristic times recorded in the index table; and reading the target subtitle content of the target subtitle block according to the current time.
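The two-step lookup in this abstract (index table to find the block, then a scan within the block) can be sketched as follows. This is a minimal illustrative Python sketch, not the patented implementation; the block structure and characteristic times shown are assumed example data.

```python
import bisect

# Illustrative data: each subtitle block is a list of (subtitle_time, text)
# entries; the index table records each block's characteristic (start) time.
blocks = [
    [(0.0, "Hello"), (2.5, "there")],
    [(10.0, "Second"), (12.0, "block")],
    [(20.0, "Third"), (25.0, "block")],
]
index_table = [block[0][0] for block in blocks]  # characteristic times

def read_subtitle(current_time):
    # Step 1: locate the target block via the index table (binary search).
    i = bisect.bisect_right(index_table, current_time) - 1
    if i < 0:
        return None  # before the first subtitle block
    # Step 2: within the block, pick the latest entry at or before current_time.
    text = None
    for t, s in blocks[i]:
        if t <= current_time:
            text = s
    return text
```

Because the index table holds only one characteristic time per block, it stays small even for long subtitle files, and only the one target block needs to be scanned per lookup.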
Abstract: The present disclosure provides a method and device for storing a video image. The method includes the following. For each frame of image collected, a confidence value is generated each time M frames of image have passed, in which M is a positive integer. A target encoding frame rate may be determined based on the most recently generated confidence value, each time N frames of image have passed. The N frames of image may be encoded and recorded based on the determined target encoding frame rate, in which N is a positive integer. By adopting the technical solutions of the present disclosure, the adaptive capability and availability of the system may be improved.
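The M/N cadence described above can be sketched in a few lines of Python. Everything here is hypothetical: the values of M and N, the confidence-to-frame-rate table, and the `confidence_of` callback are stand-ins for whatever the real system computes, not details from the disclosure.

```python
# Assumed example parameters: refresh confidence every M frames,
# re-decide the target encoding frame rate every N frames.
M, N = 5, 10
RATE_TABLE = [(0.8, 30), (0.4, 15), (0.0, 5)]  # (min confidence, fps)

def target_rate(confidence):
    # Map the latest confidence value to a target encoding frame rate.
    for threshold, fps in RATE_TABLE:
        if confidence >= threshold:
            return fps
    return RATE_TABLE[-1][1]

def process(frames, confidence_of):
    rate, latest_conf, decisions = 5, 0.0, []
    for i, frame in enumerate(frames, start=1):
        if i % M == 0:
            latest_conf = confidence_of(frame)   # refresh every M frames
        if i % N == 0:
            rate = target_rate(latest_conf)      # re-decide every N frames
            decisions.append(rate)               # encode next N frames at `rate`
    return decisions
```

Decoupling M (how often confidence is measured) from N (how often the encoder is reconfigured) lets the measurement run frequently while avoiding constant encoder restarts.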
Type:
Grant
Filed:
August 29, 2014
Date of Patent:
February 21, 2017
Assignee:
HANGZHOU HIKVISION DIGITAL TECHNOLOGY CO., LTD.
Abstract: The present application relates to the field of media processing and more particularly to audio and video processing. The present application addresses the problem that videos collected by fans at concerts and other events generally have poor sound quality and provides a solution that matches a high quality sound to the video.
Abstract: Example embodiments are directed toward a method of decoding a multi-view video signal using a video decoding apparatus including using a global motion vector of a non-anchor current picture in a current view to determine a corresponding block. A reference block in the current view is then determined using motion information of the current block that is generated from motion information of the corresponding block. A pixel value of the current block is predicted using the reference block in order to raise efficiency in signal processing of the multi-view video signal.
Abstract: A video encoding apparatus, a video decoding apparatus and method of encoding and decoding one or more images using various shapes of blocks. The video encoding apparatus is configured to determine a partition form, among candidate partition forms, for partitioning a current block into one or more partition blocks; generate one or more predicted blocks by performing a motion compensation with a scale factor applied to the one or more partition blocks; generate a residual block by subtracting the predicted blocks from the corresponding one or more partition blocks; generate at least one transform block by transforming the residual block; generate at least one quantized transform block by quantizing the at least one transform block; and encode the at least one quantized transform block, information on the determined partition form, and the scale factor into a bitstream.
Type:
Grant
Filed:
June 4, 2013
Date of Patent:
February 14, 2017
Assignee:
SK TELECOM CO., LTD.
Inventors:
Jinhan Song, Jeongyeon Lim, Yunglyul Lee, Joohee Moon, Haekwang Kim, Byeungwoo Jeon, Jongki Han, Jaehee Cho, Hyundong Kim, Daeyeon Kim, Sungwook Hong
Abstract: A system and a method for semi-automatic video editing are provided herein. In one embodiment, the method may include the following steps: processing a media stream comprising at least one of a plurality of images and a video stream, to extract metadata relating to at least one characteristic of the media stream; displaying to a user the extracted metadata; receiving from the user an instruction to generate a modified media stream, wherein said instruction is responsive to the at least one characteristic represented by the extracted metadata; and generating a modified media stream in response to said instruction. The system may implement the aforementioned method using a computer processor, a display, and a user interface.
Abstract: The present invention proposes a method to mark and exploit at least one sequence record of a video presentation played on a multimedia unit, said method comprising the steps of: during the video presentation, receiving a command from a user to mark a currently displayed video sequence, said command initiating the step of creating a sequence record comprising a time index or frame index, allowing the proper part of the video presentation to be located, and a reference of the video presentation; and, at a later stage, requesting editing of the sequence record by adding textual information which corresponds to the actual sequence, and storing the sequence record.
Abstract: Systems and approaches are provided to allow for collaborative image capturing. Multiple user devices subscribed to the collaborative image capturing system can be synchronized to perform certain image capturing related tasks, including coordinated image capture. When the user devices are widely dispersed, the image data can be aggregated to generate composited image data, such as panoramas, 3-D transformations, or stereoscopic image data. Multiple user devices can also be coordinated to simultaneously flash or activate other light emitting components, which may improve lighting conditions beyond what a single computing device is capable of alone.
Abstract: A system, method, and computer program product for producing a show. In an embodiment, the invention is directed to a production system having a first production path, a second production path, and a control system that causes the first production path to generate a show in a first aspect ratio (4:3), and that causes the second production path to generate the same show in a second aspect ratio (16:9). In another embodiment, the invention is directed to producing a show from live material and from archived material. This aspect of the invention operates by producing a first show comprising a plurality of stories, segmenting the first show, and storing the show segments in an archive. Then, the invention produces a second show using live portions as well as show segments retrieved from the archive. The invention is also directed to a media manager that interacts with a server. In some cases, the server is integrated with the production system.
Type:
Grant
Filed:
December 31, 2015
Date of Patent:
January 31, 2017
Assignee:
GVBB HOLDINGS S.A.R.L.
Inventors:
Alex Holtz, Robert J. Snyder, John R. Benson, William H. Couch, Marcel Larocque, Charles M. Hoeppner, Keith Gregory Tingle, Richard Todd, Maurice Smith
Abstract: The technology disclosed relates to adjusting the monitored field of view of a camera and/or a view of a virtual scene from a point of view of a virtual camera based on the distance between tracked objects. For example, if the user's hand is being tracked for gestures, the closer the hand gets to another object, the tighter the frame can become, i.e., the more the camera can zoom in so that the hand and the other object occupy most of the frame. The camera can also be reoriented so that the hand and the other object remain in the center of the field of view. The distance between two objects in a camera's field of view can be determined and a parameter of a motion-capture system adjusted based thereon. In particular, the pan and/or zoom levels of the camera may be adjusted in accordance with the distance.
Abstract: Method and system for capturing and playing back ancillary data associated with a video stream. At capture, a first video stream and its associated non-audio ancillary data are received. The non-audio ancillary data associated with the first video stream is encoded into a first audio stream on a basis of a predefined encoding scheme. The captured non-audio ancillary data can then be transmitted and processed with the first video stream in the form of the first audio stream. At playback, a second video stream and a second audio stream containing encoded non-audio ancillary data associated with the second video stream are received. The second audio stream is decoded on a basis of a predefined decoding scheme in order to extract therefrom the non-audio ancillary data associated with the second video stream. The second video stream and its associated non-audio ancillary data are then both output for playback.
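One simple way to carry non-audio ancillary data inside an audio stream, as the abstract describes in general terms, is to serialize the bytes into PCM samples. The length-prefix scheme below is an assumed illustrative encoding, not the predefined scheme of the patent.

```python
import struct

# Illustrative encoding scheme: length-prefix the ancillary bytes and pack
# them two-per-sample into 16-bit PCM, so they ride any path that carries
# an audio track alongside the video stream.

def encode_to_audio(ancillary: bytes) -> list:
    payload = struct.pack(">I", len(ancillary)) + ancillary
    if len(payload) % 2:
        payload += b"\x00"                       # pad to whole 16-bit samples
    return [s[0] for s in struct.iter_unpack(">h", payload)]

def decode_from_audio(samples: list) -> bytes:
    raw = b"".join(struct.pack(">h", s) for s in samples)
    (length,) = struct.unpack(">I", raw[:4])     # recover original byte count
    return raw[4:4 + length]
```

A real scheme would also need framing, synchronization, and robustness against audio processing (gain changes, resampling) that this raw packing does not survive; the sketch only shows the encode-transmit-decode round trip.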
Abstract: Provided is a video management apparatus and a method for event recording using the same, which are capable of reducing loss of video data in an event recording. The video management apparatus performs an event recording in cooperation with a network camera that transmits a first frame with basic information of video data and a plurality of second frames with changed information of the video data. The video management apparatus includes a buffer unit to store the first and the plurality of second frames and eliminate the stored frames according to an external control signal. The video management apparatus further includes an event recording unit to save, on a storage medium, the first frame and at least one of the second frames stored in the buffer unit when an occurrence of an event is detected, as well as frames transmitted from the network camera after the detection of the occurrence of the event.
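The buffering behavior described above, retaining the most recent full frame ("first frame") plus the delta frames that follow it so an event recording starts from decodable video, can be sketched as below. The class and its members are illustrative stand-ins, not structures from the patent.

```python
from collections import deque

class EventRecorder:
    """Illustrative sketch: buffer a key frame plus subsequent deltas,
    flush them to storage when an event is detected."""

    def __init__(self):
        self.buffer = deque()      # latest key frame + deltas since it
        self.recording = False
        self.saved = []            # stand-in for the storage medium

    def on_frame(self, frame, is_key):
        if self.recording:
            self.saved.append(frame)   # post-event frames go straight to storage
        elif is_key:
            self.buffer.clear()        # new key frame makes older frames stale
            self.buffer.append(frame)
        else:
            self.buffer.append(frame)

    def on_event(self):
        # Flush the buffered key frame and deltas, then keep recording live.
        self.saved.extend(self.buffer)
        self.buffer.clear()
        self.recording = True
```

Because the buffer is cleared on each new key frame, the flushed recording always begins at a full frame, which is what lets the pre-event video be decoded without loss.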
Type:
Grant
Filed:
January 26, 2015
Date of Patent:
January 10, 2017
Assignee:
IDIS Co., Ltd.
Inventors:
Hee Lock Jung, Jin Hui Park, Jai Min Jung
Abstract: Systems and methods described herein relate to processing of information, data and database identifiers involving content and/or experiences. According to one exemplary implementation, an illustrative method of computerized information processing may involve handling and/or processing data regarding a product, where the product may be an experience, a physical product, and/or a digital product.
Type:
Grant
Filed:
June 28, 2015
Date of Patent:
January 3, 2017
Assignee:
Traina Interactive Corp.
Inventors:
Trevor Dow Traina, Joseph Peter Vierra, Jennifer Chih-Ting Chen, Mitchell Paul Galbraith