Audio To Video Patents (Class 348/515)
  • Patent number: 8947597
    Abstract: According to one embodiment, a video reproducing device includes a separation controller and a processor. The separation controller is configured to receive a video signal and an audio signal synchronized with the video signal, and to separate a background sound and a voice in the audio signal. The processor is configured to select at least one of a plurality of image quality improvement processing schemes based on an analysis of the voice and the background sound, and to apply the selected image quality improvement processing scheme to the video signal.
    Type: Grant
    Filed: January 3, 2014
    Date of Patent: February 3, 2015
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Masahide Yanagihara
  • Patent number: 8947596
    Abstract: In embodiments, apparatuses, methods and storage media are described that are associated with alignment of closed captions. Video content (along with associated audio) may be analyzed to determine various times associated with speech in the video content. The video content may also be analyzed to determine various times associated with closed captions and/or subtitles in the video content. Likelihood values may be associated with the determined times. An alignment may be generated based on these determined times. Multiple techniques may be used, including linear interpolation, non-linear curve fitting, and/or speech recognition matching. Quality metrics may be determined for each of these techniques and then compared. An alignment for the closed captions may be selected from the potential alignments based on the quality metrics. The closed captions and/or subtitles may then be modified based on the selected alignment. Other embodiments may be described and claimed.
    Type: Grant
    Filed: June 27, 2013
    Date of Patent: February 3, 2015
    Assignee: Intel Corporation
    Inventor: Johannes P. Schmidt
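    The alignment selection described in this abstract can be illustrated with a small sketch: fit one candidate mapping (here only linear least squares) from caption times to detected speech times, score it with a quality metric, and keep the best-scoring candidate. The function names, the quality metric, and the sample data are assumptions for illustration, not Intel's implementation.
      # Hypothetical sketch: score candidate caption alignments, keep the best one.
      def linear_alignment(speech_times, caption_times):
          """Least-squares line mapping caption times onto detected speech times."""
          n = len(caption_times)
          mean_x = sum(caption_times) / n
          mean_y = sum(speech_times) / n
          var_x = sum((x - mean_x) ** 2 for x in caption_times)
          cov = sum((x - mean_x) * (y - mean_y)
                    for x, y in zip(caption_times, speech_times))
          slope = cov / var_x if var_x else 1.0
          offset = mean_y - slope * mean_x
          return lambda t: slope * t + offset

      def alignment_quality(align, speech_times, caption_times):
          """Quality metric: negative mean absolute error of the candidate alignment."""
          errors = [abs(align(c) - s) for c, s in zip(caption_times, speech_times)]
          return -sum(errors) / len(errors)

      def best_alignment(speech_times, caption_times, candidates):
          """Select the candidate alignment with the highest quality metric."""
          return max(candidates,
                     key=lambda a: alignment_quality(a, speech_times, caption_times))

      # Example: captions lead the detected speech by roughly 1.5 seconds.
      speech = [10.2, 22.7, 35.1, 47.6]
      captions = [8.7, 21.2, 33.6, 46.1]
      align = best_alignment(speech, captions, [linear_alignment(speech, captions)])
      print(round(align(8.7), 1))  # ~10.2: caption time shifted onto the speech timeline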
  • Publication number: 20150029396
    Abstract: A networked, multi-channel, and bi-directional programmable datalink timecode system including a generator apparatus that connects to and provides timecode, genlock, metadata, and streaming audio and video images to networked devices via wired and wireless networks. The system includes software on a computer hardware device capable of connecting to and receiving streaming timecode, metadata, and streaming audio and video images from a generator apparatus. The system also includes a datalink transceiver apparatus that connects to and receives timecode, genlock, metadata, and streaming audio from a generator apparatus.
    Type: Application
    Filed: July 23, 2013
    Publication date: January 29, 2015
    Inventor: Paul Scurrell
  • Publication number: 20150022720
    Abstract: A method and apparatus are disclosed for providing a video signature representative of a content of a video signal. A method and apparatus are further disclosed for providing an audio signature representative of a content of an audio signal. A method and apparatus for detecting lip sync are further disclosed and take advantage of the method and apparatus disclosed for providing a video signature and an audio signature.
    Type: Application
    Filed: October 8, 2014
    Publication date: January 22, 2015
    Applicant: MIRANDA TECHNOLOGIES INC.
    Inventor: Pascal Carrieres
  • Publication number: 20150002740
    Abstract: According to one embodiment, a video reproducing device includes a separation controller and a processor. The separation controller is configured to receive a video signal and an audio signal synchronized with the video signal, and to separate a background sound and a voice in the audio signal. The processor is configured to select at least one of a plurality of image quality improvement processing schemes based on an analysis of the voice and the background sound, and to apply the selected image quality improvement processing scheme to the video signal.
    Type: Application
    Filed: January 3, 2014
    Publication date: January 1, 2015
    Applicant: Kabushiki Kaisha Toshiba
    Inventor: Masahide Yanagihara
  • Patent number: 8922713
    Abstract: Content comprising audio and video may be processed by different processing pipelines, but latencies between these pipelines may differ due to differences in data compression, processing loads, and so forth. The time between a frame's entry into and exit from the pipeline is measured to determine pipeline latency. The pipeline latency may be used to shift the timing of audio frames, video frames, or both, such that they are synchronized during presentation.
    Type: Grant
    Filed: April 25, 2013
    Date of Patent: December 30, 2014
    Assignee: Amazon Technologies, Inc.
    Inventors: Sreeram Raju Chakrovorthy, Ziqiang Huang, Jaee Patwardhan
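    A minimal sketch of the latency measurement under assumptions (monotonic wall-clock timing; the class and function names are hypothetical and do not appear in the patent): each frame is timestamped on pipeline entry and exit, the smoothed latency difference between the audio and video pipelines is computed, and the presentation timestamps of the faster stream are delayed by that difference.
      import time

      class PipelineLatencyMeter:
          """Measures how long frames spend inside one processing pipeline."""
          def __init__(self):
              self._entries = {}
              self.latency = 0.0  # smoothed latency estimate, in seconds

          def on_enter(self, frame_id):
              self._entries[frame_id] = time.monotonic()

          def on_exit(self, frame_id):
              dt = time.monotonic() - self._entries.pop(frame_id)
              # Exponential smoothing keeps the estimate stable frame to frame.
              self.latency = 0.9 * self.latency + 0.1 * dt

      def shift_timestamps(frames, delay):
          """Delay presentation timestamps (pts, data) of the stream running ahead."""
          return [(pts + delay, data) for pts, data in frames]

      audio_meter, video_meter = PipelineLatencyMeter(), PipelineLatencyMeter()
      # ... frames pass through their pipelines, calling on_enter()/on_exit() ...
      skew = video_meter.latency - audio_meter.latency
      if skew > 0:
          audio_frames = shift_timestamps([], skew)    # audio waits for the slower video
      else:
          video_frames = shift_timestamps([], -skew)   # video waits for the slower audio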
  • Publication number: 20140375883
    Abstract: Audio and video signals are synchronized for pleasing presentation of content. As content is streamed to a device, an audio portion may lag or lead a video portion. Spoken words, for example, are out of synch with the lip movements. Video time stamps are synchronized to audio time stamps to ensure streaming content is pleasing.
    Type: Application
    Filed: September 6, 2014
    Publication date: December 25, 2014
    Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Dennis Meek, Robert A. Koch
  • Patent number: 8917354
    Abstract: A method for detecting motion in video fields of video data, comprises the steps of: calculating texture information for a pixel in the video fields; determining a threshold value as a function of the calculated texture information; calculating a differential value for the pixel; and detecting motion in the video fields as a function of the determined threshold value and the calculated differential value.
    Type: Grant
    Filed: September 30, 2013
    Date of Patent: December 23, 2014
    Assignee: Amlogic Co., Ltd.
    Inventors: Dongjian Wang, Xin Hu, Xuyun Chen
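    The threshold-as-a-function-of-texture idea can be sketched as follows; the texture measure (horizontal gradients), the base and gain constants, and the field layout are assumptions made for the example, not Amlogic's values.
      def texture(field, x, y):
          """Crude texture measure: sum of absolute horizontal gradients near (x, y)."""
          row = field[y]
          return sum(abs(row[i + 1] - row[i])
                     for i in range(max(x - 2, 0), min(x + 2, len(row) - 1)))

      def motion_detected(prev_field, curr_field, x, y, base=8, gain=0.5):
          """A pixel is flagged as moving when the inter-field difference exceeds a
          threshold that grows with local texture, so busy areas tolerate larger changes."""
          threshold = base + gain * texture(prev_field, x, y)
          differential = abs(curr_field[y][x] - prev_field[y][x])
          return differential > threshold

      prev = [[10, 10, 10, 10]] * 4
      curr = [[10, 10, 60, 10]] * 4
      print(motion_detected(prev, curr, 2, 1))  # True: large change against a flat background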
  • Patent number: 8913189
    Abstract: Audio data and video data are processed to determine one or more audible events and visual events, respectively. Contemporaneous presentation of the video data with audio data may be synchronized based at least in part on the audible events and the visual events. Audio processing functions, such as filtering, may be initiated for audio data based at least in part on the visual events.
    Type: Grant
    Filed: March 8, 2013
    Date of Patent: December 16, 2014
    Assignee: Amazon Technologies, Inc.
    Inventors: Richard William Mincher, Todd Christopher Mason
  • Patent number: 8896705
    Abstract: A measuring device for measuring a response speed of a display panel is provided. The measuring device includes a microcontroller and at least one photo sensor. The microcontroller provides a control command, according to which a display controller of the display panel provides a test pattern to the display panel. The photo sensor senses a test frame displayed by the display panel corresponding to the test pattern, and provides a corresponding sensing signal associated with brightness and a response signal. According to the response signal, the response speed of the display panel is calculated.
    Type: Grant
    Filed: August 21, 2012
    Date of Patent: November 25, 2014
    Assignee: MStar Semiconductor, Inc.
    Inventors: Chih-Chiang Chiu, Tien-Hua Yu, Wen-Cheng Wu
  • Patent number: 8891013
    Abstract: A repeater is to be provided between a source device and a sink device and be used in a Lip-sync correction system that transmits a video signal and an audio signal from the source device to the sink device through an HDMI (High Definition Multimedia Interface) transmission path and reproduces the video signal and the audio signal in synchronization with each other on the sink device. A communication from the source device to the sink device is defined as a downstream communication, and a communication from the sink device to the source device is defined as an upstream communication. The repeater includes a processor that receives the video signal and the audio signal through the upstream communication and the downstream communication and processes the received video signal and the received audio signal. The processor corrects deviation between the video signal and the audio signal.
    Type: Grant
    Filed: February 12, 2014
    Date of Patent: November 18, 2014
    Assignee: Panasonic Corporation
    Inventor: Naoki Ejima
  • Patent number: 8860883
    Abstract: A method and apparatus are disclosed for providing a video signature representative of a content of a video signal. A method and apparatus are further disclosed for providing an audio signature representative of a content of an audio signal. A method and apparatus for detecting lip sync are further disclosed and take advantage of the method and apparatus disclosed for providing a video signature and an audio signature.
    Type: Grant
    Filed: November 30, 2009
    Date of Patent: October 14, 2014
    Assignee: Miranda Technologies Partnership
    Inventor: Pascal Carrières
  • Patent number: 8860882
    Abstract: A system for constructing seamlessly viewable multimedia content from selectably presentable multimedia content blocks includes a block definition module for facilitating creation and modification of the content blocks. The block definition module includes a media assignment submodule for associating a synchronized audio and video segment with a content block. Also included is a block linking submodule for creating seamless connections between content blocks, whereby a transition between the connected blocks occurs substantially without interruption upon viewing the multimedia content. The block definition module further includes a layer submodule for associating an interactive layer having interactive controls with the content block.
    Type: Grant
    Filed: September 19, 2012
    Date of Patent: October 14, 2014
    Assignee: JBF Interlude 2009 Ltd—Israel
    Inventors: Jonathan Bloch, Barak Feldman, Tal Zubalsky, Kfir Y. Rotbard
  • Publication number: 20140300815
    Abstract: A method for changing channels in a television appliance is disclosed. Upon reception of a user command to tune to a desired channel (301), the television appliance is tuned to the desired channel (302) and audio and video packets are received. Video and audio packets are buffered in respective buffers, so that audio and video output can be generated by processing the buffered packets. The video output frame rate is increased from a first, slower frame rate to a predetermined final frame rate. Independently of the frame-rate increase profile, the video output frame rate is raised to the final frame rate as soon as an audio output synchronized with the video output can be generated from the buffered audio packets. A television appliance implementing the method is also disclosed.
    Type: Application
    Filed: August 6, 2012
    Publication date: October 9, 2014
    Applicant: Advanced Digital Broadcast S.A.
    Inventors: Andrzej Dabrowa, Roman Lysak
  • Patent number: 8850500
    Abstract: Presented herein is a method of presenting alternative audio content for an audio/visual content segment, such as a television program or a motion picture. In the method, the audio/visual content segment is received into a media content receiver. The audio/visual content segment includes primary visual content and primary audio content. A request to receive alternative audio content for the audio/visual content segment is transmitted. After transmitting the request, the alternative audio content is received into the media content receiver. The primary audio content is replaced with the alternative audio content to generate a revised audio/visual content segment. The revised audio/visual content is transferred for presentation to a user.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: September 30, 2014
    Assignee: EchoStar Technologies L.L.C.
    Inventors: Steven M. Casagrande, Anthony F. Kozlowski
  • Publication number: 20140267906
    Abstract: Methods and apparatus are disclosed for enhancing the viewing experience of video content by analyzing the content to determine where enhanced sensory experience events may be appropriate, by identifying devices at the viewing location and devices personal to the viewer that can be controlled to provide an enhanced sensory experience, and by activating those devices in a way that is synchronized with the presentation of the content.
    Type: Application
    Filed: March 13, 2013
    Publication date: September 18, 2014
    Applicant: ECHOSTAR TECHNOLOGIES L.L.C.
    Inventors: Jeremy Mickelsen, Adam Schafer, Bradley Wolf
  • Patent number: 8838262
    Abstract: Embodiments are described for a synchronization and switchover mechanism for an adaptive audio system in which multi-channel (e.g., surround sound) audio is provided along with object-based adaptive audio content. A synchronization signal is embedded in the multi-channel audio stream and contains a track identifier and frame count for the adaptive audio stream to play out. The track identifier and frame count of a received adaptive audio frame is compared to the track identifier and frame count contained in the synchronization signal. If either the track identifier or frame count does not match the synchronization signal, a switchover process fades out the adaptive audio track and fades in the multi-channel audio track. The system plays the multi-channel audio track until the synchronization signal track identifier and frame count and adaptive audio track identifier and frame count match, at which point the adaptive audio content will be faded back in.
    Type: Grant
    Filed: June 27, 2012
    Date of Patent: September 16, 2014
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Sripal S. Mehta, Sergio Martinez, Ethan A. Grossman, Brad Thayer, Dean Bullock, John Neary
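    A hedged sketch of the switchover rule: the track identifier and frame count carried in the embedded synchronization signal are compared with those of the received adaptive audio frame, and the output falls back to the multi-channel bed whenever they disagree. The dictionary fields, state names, and fade handling are illustrative assumptions only.
      def select_output(sync_track_id, sync_frame_count, adaptive_frame, state):
          """Return which track should be playing; fades are implied by state changes."""
          in_sync = (adaptive_frame is not None
                     and adaptive_frame["track_id"] == sync_track_id
                     and adaptive_frame["frame_count"] == sync_frame_count)
          if in_sync and state != "adaptive":
              state = "adaptive"        # fade the adaptive audio track back in
          elif not in_sync and state != "multichannel":
              state = "multichannel"    # fade out adaptive audio, fade in the multi-channel track
          return state

      state = "adaptive"
      frame = {"track_id": 7, "frame_count": 1042}
      state = select_output(7, 1043, frame, state)   # frame counts disagree -> switchover
      print(state)                                   # multichannel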
  • Patent number: 8830290
    Abstract: An audio-video synchronization method is executable in a video conference device. The method includes determining a first presence time of a predetermined visual effect in a captured video sample stream and a second presence time of a predetermined sound effect in a captured audio sample stream, calculating a time difference between the first and second presence times, and adjusting timestamps of each real-time transport protocol packet in an audio stream sent out by the video conference apparatus based on the time difference. The method further includes receiving an adjustment value from a user input, and adjusting timestamps of each real-time transport protocol packet in an audio stream received by the video conference apparatus based on the adjustment value.
    Type: Grant
    Filed: September 28, 2012
    Date of Patent: September 9, 2014
    Assignee: Hon Hai Precision Industry Co., Ltd.
    Inventor: Chi-Chung Su
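    The timestamp correction step can be sketched roughly as below; the 48 kHz RTP clock, the packet field names, and the cue times are assumptions made for the example. The measured gap between the visual cue and the audio cue is converted into RTP ticks and applied to every outgoing packet.
      RTP_AUDIO_CLOCK = 48000  # Hz; assumption for the example

      def timestamp_correction(visual_cue_time, audio_cue_time, clock=RTP_AUDIO_CLOCK):
          """Convert the measured skew (seconds) into RTP timestamp ticks."""
          return int(round((visual_cue_time - audio_cue_time) * clock))

      def adjust_packets(packets, correction):
          """Apply the correction to every outgoing RTP packet's timestamp."""
          for pkt in packets:
              pkt["timestamp"] += correction
          return packets

      packets = [{"seq": 1, "timestamp": 96000}, {"seq": 2, "timestamp": 96960}]
      corr = timestamp_correction(visual_cue_time=5.020, audio_cue_time=5.000)
      print(adjust_packets(packets, corr))  # timestamps shifted by 960 ticks (20 ms)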
  • Patent number: 8830401
    Abstract: A method and apparatus for producing video are provided. The method includes: determining a reference time used as a reference for producing a PIP video; determining a first task time at which to acquire a first image, a second task time at which to acquire a second image, and a third task time at which to acquire audio; acquiring the first image, the second image, and the audio at the respective task times; and combining the first image, the second image, and the audio according to a result of comparing the reference time with each of the task times to produce the PIP video. Accordingly, the time and cost of producing the PIP video can be significantly reduced.
    Type: Grant
    Filed: July 3, 2013
    Date of Patent: September 9, 2014
    Assignee: RSupport Co., Ltd
    Inventors: Hyung Su Seo, Jun Hyuk Kwak
  • Publication number: 20140240596
    Abstract: According to one embodiment, an electronic device includes a display, an audio output module, a transmission module, a first detection module, a second detection module, a third detection module, and a controller. The controller is configured to control at least one of the timing of the transmission of the audio signal by the transmission module and the timing of the output of the first reproduction output by the audio output module in accordance with the time difference detected by the third detection module, and to switch whether or not to control the timing in accordance with the positional relationship between the electronic device and the partner device.
    Type: Application
    Filed: May 5, 2014
    Publication date: August 28, 2014
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventor: Takashi Minemura
  • Patent number: 8817183
    Abstract: This invention relates to a device and a method of generating a first and a second fingerprint (102,104) usable for synchronisation of at least two signals (101,103) and corresponding method and device for synchronising two or more signals. A fingerprint pair is generated on the basis of a segment of a first signal e.g. an audio signal and of a segment of a second signal e.g. a video signal at each synchronisation time point. The generated fingerprint pair(s) are stored in a database (203) and communicated or distributed to a synchronisation device (303). During synchronisation, fingerprint(s) of the audio signal and fingerprint(s) of the video signal to be synchronised are generated and matched against fingerprints in the database. When a match is found, the fingerprints also determine the synchronisation time point, which is used to synchronise the two signals. In this way, a simple, reliable and efficient way of synchronising at least two signals is obtained.
    Type: Grant
    Filed: January 11, 2013
    Date of Patent: August 26, 2014
    Assignee: Gracenote, Inc.
    Inventors: Job Cornelis Oostveen, David K. Roberts, Adrianus Johannes Maria Denissen, Warner Rudolph Theophile Ten Kate
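    A compact illustration of the fingerprint-pair lookup, with heavy simplifications: the fingerprints are reduced to truncated hashes of raw segment bytes and the database is a plain dictionary, whereas real audio/video fingerprints are robust perceptual features matched approximately. All names are hypothetical.
      import hashlib

      def fingerprint(segment_bytes):
          """Stand-in fingerprint: truncated SHA-1 of the raw segment."""
          return hashlib.sha1(segment_bytes).hexdigest()[:16]

      def build_database(pairs):
          """pairs: iterable of (audio_segment, video_segment, sync_time)."""
          return {fingerprint(a): (fingerprint(v), t) for a, v, t in pairs}

      def find_sync_point(db, audio_segment, video_segment):
          """Match freshly generated fingerprints against the stored fingerprint pairs."""
          entry = db.get(fingerprint(audio_segment))
          if entry and entry[0] == fingerprint(video_segment):
              return entry[1]  # the stored synchronisation time point
          return None

      db = build_database([(b"audio@12s", b"video@12s", 12.0)])
      print(find_sync_point(db, b"audio@12s", b"video@12s"))  # 12.0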
  • Patent number: 8817185
    Abstract: According to one embodiment, an electronic device includes a reproduction controller and a transmitter. The reproduction controller is configured to reproduce a first type of information of a first content. The first content includes a plurality of types of information. The transmitter is configured to transmit, to another electronic device, an instruction to reproduce a second type of information of the first content.
    Type: Grant
    Filed: May 29, 2013
    Date of Patent: August 26, 2014
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Hiroshi Kazawa
  • Patent number: 8818818
    Abstract: [PROBLEMS] To provide a high-quality audio signal encoding technique by controlling the number of time/frequency groups in a frame. [MEANS FOR SOLVING PROBLEMS] An audio encoding device includes: a time group boundary candidate position extraction unit (101) for analyzing a sub-band signal (2001) obtained by frequency-transforming an input signal and calculating candidate positions of time group boundaries by comparing the change in energy across three successive time groups; a time group quantity generation unit (103) for outputting a maximum value of the time group quantity; a time group selection unit (102) for generating a time group quantity not greater than the maximum by using the candidate positions; and a frequency group generation unit (104) for generating frequency groups by using the generated time group information.
    Type: Grant
    Filed: July 6, 2007
    Date of Patent: August 26, 2014
    Assignee: NEC Corporation
    Inventor: Osamu Shimada
  • Patent number: 8810659
    Abstract: A delay tracker utilizes a special code on the tracked signal in order to recognize such signal and ascertain any delays associated therewith.
    Type: Grant
    Filed: January 10, 2012
    Date of Patent: August 19, 2014
    Assignee: Cascades AV LLC
    Inventor: J. Carl Cooper
  • Patent number: 8810728
    Abstract: Some embodiments of the invention provide a method for synchronizing an audio stream with a video stream. This method involves searching in the audio stream for audio data having values that match a distinct set of audio data values and synchronizing the audio stream with the video stream based on the search. In some embodiments, the distinct set of audio data values is defined by a predetermined distinct tone. In other embodiments, the distinct set of audio data values is defined by audio data contained in the video stream.
    Type: Grant
    Filed: October 14, 2013
    Date of Patent: August 19, 2014
    Assignee: Apple Inc.
    Inventor: David Robert Black
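    The search step lends itself to a very small sketch: scan the audio stream for a run of samples equal to the distinct marker values and convert the match position into a time offset. The marker values and sample rate below are assumptions, not values from the patent.
      def find_marker(audio_samples, marker):
          """Return the index of the first occurrence of `marker` in the stream, or -1."""
          n, m = len(audio_samples), len(marker)
          for i in range(n - m + 1):
              if audio_samples[i:i + m] == marker:
                  return i
          return -1

      SAMPLE_RATE = 48000                          # assumed
      marker = [0, 1000, 0, -1000]                 # distinct set of audio data values
      stream = [3, -2, 5] + marker + [7, 8]
      offset_samples = find_marker(stream, marker)
      print(offset_samples / SAMPLE_RATE)          # start of the marker, in seconds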
  • Publication number: 20140226068
    Abstract: A method for synchronizing haptic effects with at least one media component in a media transport stream includes identifying a series of video frames containing imaging information and/or a series of audio frames containing sound information in the media transport stream; identifying a series of haptic frames containing force feedback information in the media transport stream; and synchronizing the force feedback information in response to the imaging information and/or sound information.
    Type: Application
    Filed: April 14, 2014
    Publication date: August 14, 2014
    Applicant: IMMERSION CORPORATION
    Inventors: Robert A. LACROIX, Andrianaivo RABEMIARISOA, Henrique D. DA COSTA, Herve Thu TIMONE, Stephen D. RANK, Christopher J. ULLRICH
  • Patent number: 8804037
    Abstract: Systems, methods, and processor readable media are disclosed for encoding and transmitting first media content and second media content using a digital radio broadcast system, such that the second media content can be rendered in synchronization with the first media content by a digital radio broadcast receiver. The disclosed systems, methods, and processor-readable media determine when a receiver will render audio and data content that is transmitted at a given time by the digital radio broadcast transmitter, and adjust the media content accordingly to provide synchronized rendering. In exemplary embodiments, these adjustments can be provided by: 1) inserting timing instructions specifying playback time in the secondary content based on calculated delays; or 2) controlling the timing of sending the primary or secondary content to the transmitter so that it will be rendered in synchronization by the receiver.
    Type: Grant
    Filed: February 21, 2012
    Date of Patent: August 12, 2014
    Assignee: iBiquity Digital Corporation
    Inventors: Steven Andrew Johnson, Muthu Gopal Balasubramanian, Harvey Chalmers, Jeffrey Ranken Detweiler, Albert Gambardella, Russell Iannuzzelli, Stephen Douglas Mattson
  • Patent number: 8786778
    Abstract: A timing control apparatus includes: an extraction unit that outputs an input timing signal of an image signal; an input timing switch unit that selects whether to output the input timing signal output from the extraction unit or to input an external input timing signal; an input timing delay addition unit capable of adding delay information to the input timing signal output from the extraction unit; a reference timing generation unit that generates a reference timing signal from the input timing signal; a reference timing switch unit that selects whether to output the reference timing signal or to input an external reference timing signal; and an individual timing generation unit that generates, from the reference timing signal, a video processing timing signal and an output timing signal.
    Type: Grant
    Filed: November 30, 2012
    Date of Patent: July 22, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventors: Daisuke Kuroki, Shinichi Sunakawa, Kohei Murayama, Atsushi Date
  • Patent number: 8786779
    Abstract: A video audio system includes an audio processing apparatus capable of processing audio signals input from a plurality of input sources and a video processing apparatus capable of displaying a video image based on a video signal input from a selected input source. The video processing apparatus executes image processing based on the selected input source type notified by the audio processing apparatus, and transmits delay time information required for the image processing. The audio processing apparatus delays an audio signal based on the delay time information transmitted from the video processing apparatus.
    Type: Grant
    Filed: May 22, 2012
    Date of Patent: July 22, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventor: Noriaki Suzuki
  • Publication number: 20140192263
    Abstract: Systems and methods of measuring a temporal offset between audio content and video content that employ audio fingerprints from an audio signal in the audio content, and video fingerprints from video frames in the video content. The systems obtain reference audio and video fingerprints prior to transmission of video over a media channel, and obtain target audio and video fingerprints subsequent to transmission of the video over the media channel. Each fingerprint has an associated time stamp. Using the reference and target audio fingerprints and their associated time stamps, the systems determine an audio time stamp offset. Using the reference and target video fingerprints and their associated time stamps, the systems determine a video time stamp offset. Using the audio and video time stamp offsets, the systems determine a temporal offset between the video content and the audio content introduced by the media channel.
    Type: Application
    Filed: March 13, 2014
    Publication date: July 10, 2014
    Inventors: Jeffrey A. Bloom, Dekun Zou, Ran Ding
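    The offset arithmetic described here reduces to a few subtractions, sketched below under the assumption that fingerprints can serve as exact keys of a dictionary mapping fingerprint to time stamp (real fingerprints are matched approximately): per-stream offsets come from matched reference/target pairs, and their difference is the channel-induced audio/video skew.
      def stream_offset(reference, target):
          """reference/target: dict fingerprint -> time stamp; average offset over matches."""
          deltas = [target[fp] - ts for fp, ts in reference.items() if fp in target]
          return sum(deltas) / len(deltas) if deltas else 0.0

      ref_audio = {"a1": 1.00, "a2": 2.00}
      tgt_audio = {"a1": 1.25, "a2": 2.25}
      ref_video = {"v1": 1.00, "v2": 2.00}
      tgt_video = {"v1": 1.10, "v2": 2.10}

      audio_offset = stream_offset(ref_audio, tgt_audio)   # 0.25 s audio time stamp offset
      video_offset = stream_offset(ref_video, tgt_video)   # 0.10 s video time stamp offset
      print(round(video_offset - audio_offset, 3))         # -0.15: audio lags video by 150 ms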
  • Patent number: 8773585
    Abstract: A method for identifying the state of macro blocks in de-interlacing computation and an image processing apparatus are provided; the method is as follows. A video frame is divided into a plurality of regions, where each of the regions includes a plurality of macro blocks. Then, a basic threshold corresponding to each of the regions is provided according to the position of each region in the video frame, and a first macro block is identified as a first type macro block or a second type macro block according to the basic threshold corresponding to the region where the first macro block is located. Then, a corresponding de-interlacing computation step is performed on the first macro block according to the result of identifying the first macro block as the first type macro block or the second type macro block.
    Type: Grant
    Filed: January 23, 2013
    Date of Patent: July 8, 2014
    Assignee: ALi (Zhuhai) Corporation
    Inventors: Jin-Song Wen, Feng Gao, Jin-Fu Wang
  • Patent number: 8773588
    Abstract: A method for de-interlacing interlaced video includes receiving a first video field and a second video field of an interlaced video frame, generating a first video frame from the first video field and a first synthesized video field, where video data of the first synthesized video field is based exclusively on video data of the first and second video fields, generating a second video frame from the second video field and a second synthesized video field, where video data of the second synthesized video field is based exclusively on the video data of the first and second video fields, and outputting two de-interlaced video frames for every received interlaced video frame. The first (second) synthesized video field is generated by combining image data from the second (first) video field with image data from corresponding lines of an up-scaled first (second) field generated by a scaler.
    Type: Grant
    Filed: February 6, 2014
    Date of Patent: July 8, 2014
    Assignee: Axis AB
    Inventor: Stefan Lundberg
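    A rough sketch of the frame construction, with strong caveats: the line-repeating "scaler" and the 50/50 blend are placeholders chosen for brevity, not the patented interpolation. It only shows how each output frame interleaves one original field with a field synthesized from both input fields.
      def upscale_field(field):
          """Trivial vertical up-scaling stand-in: repeat each line (a real scaler interpolates)."""
          return [line for line in field for _ in (0, 1)]

      def synthesize_field(other_field, upscaled_self):
          """Blend the other field's lines with the corresponding lines of the up-scaled field."""
          return [[(a + b) // 2 for a, b in zip(row_other, row_up)]
                  for row_other, row_up in zip(other_field, upscaled_self[1::2])]

      def deinterlace(field_top, field_bottom):
          synth_bottom = synthesize_field(field_bottom, upscale_field(field_top))
          synth_top = synthesize_field(field_top, upscale_field(field_bottom))
          frame1 = [line for pair in zip(field_top, synth_bottom) for line in pair]
          frame2 = [line for pair in zip(synth_top, field_bottom) for line in pair]
          return frame1, frame2  # two progressive frames per interlaced input frame

      top = [[100, 100], [100, 100]]
      bottom = [[50, 50], [50, 50]]
      f1, f2 = deinterlace(top, bottom)
      print(len(f1), len(f2))  # 4 4: each output frame has twice the lines of a field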
  • Patent number: 8767122
    Abstract: A method of controlling reproduction for a stream containing video data and/or audio data is disclosed. A mute process is performed for a decoded output of the stream. A first decode process is performed to decode a partial region of the stream from a beginning thereof and obtain attribute information from the stream. Parameters with which the stream is reproduced are set on a basis of the attribute information. The mute process is stopped after the parameters have been set. A second decode process is performed to decode the stream from the beginning thereof.
    Type: Grant
    Filed: April 8, 2008
    Date of Patent: July 1, 2014
    Assignee: Sony Corporation
    Inventor: Koichi Osaki
  • Patent number: 8760584
    Abstract: A memory space configuration method applied in a video signal processing apparatus is provided. The method includes: arranging a first memory space and a second memory space in a memory, the first and second memory spaces being partially overlapped; determining a type of a signal source; when the signal source is a first video signal source, enabling a first processing circuit and buffering data associated with the first video signal source by using the first memory space; and, when the signal source is a second video signal source, enabling a second processing circuit and buffering data associated with the second video signal source by using the second memory space. The second processing circuit is disabled when the first processing circuit is enabled; the first processing circuit is disabled when the second processing circuit is enabled.
    Type: Grant
    Filed: May 14, 2013
    Date of Patent: June 24, 2014
    Assignee: MSTAR Semiconductor, Inc.
    Inventor: Po-Jen Yang
  • Publication number: 20140168515
    Abstract: Methods and apparatuses are provided for transmitting and receiving multimedia contents that include at least two components (C1, C2). The reception method entails the reception of a first component (C1) from a first transmission medium (DVB) and the reception of a second component (C2) from a second transmission medium (IP), as well as the steps of: detecting (A4) a first “watermark” sequence from the first component (C1), detecting (A4) a second “watermark” sequence from the second component (C2), synchronizing (A5) the first and second components (C1, C2) on the basis of the first and second “watermark” sequences, and combining (A6) the synchronized first and second components (C1, C2) to form the multimedia content (MM); of course, the reception method provides the desired results if both components have been suitably and repeatedly marked prior to transmission.
    Type: Application
    Filed: August 2, 2012
    Publication date: June 19, 2014
    Applicant: CSP - INNOVAZIONE NELLE ICT SCARL
    Inventors: Sergio Sagliocco, Leonardo Sileo, Roberto Borri
  • Publication number: 20140160351
    Abstract: A repeater is to be provided between a source device and a sink device and be used in a Lip-sync correction system that transmits a video signal and an audio signal from the source device to the sink device through an HDMI (High Definition Multimedia Interface) transmission path and reproduces the video signal and the audio signal in synchronization with each other on the sink device. A communication from the source device to the sink device is defined as a downstream communication, and a communication from the sink device to the source device is defined as an upstream communication. The repeater includes a processor that receives the video signal and the audio signal through the upstream communication and the downstream communication and processes the received video signal and the received audio signal. The processor corrects deviation between the video signal and the audio signal.
    Type: Application
    Filed: February 12, 2014
    Publication date: June 12, 2014
    Applicant: PANASONIC CORPORATION
    Inventor: Naoki EJIMA
  • Publication number: 20140160352
    Abstract: A method and a system for deriving visual rhythm from a video signal are described. A feature extraction module receives the video signal and extracts a two-dimensional feature from the video signal. A one-dimensional video feature computation module derives a one-dimensional feature from the extracted two-dimensional feature. A visual rhythm detector module detects a visual beat and a visual tempo from the one-dimensional feature.
    Type: Application
    Filed: February 12, 2014
    Publication date: June 12, 2014
    Applicant: Sony Corporation
    Inventors: Ching-Wei CHEN, Trista Chen
  • Publication number: 20140152893
    Abstract: A method is disclosed whereby a second display device is simulated in software on a personal computer or other electronic device that has a primary display but may lack the capability to connect a second display device, so that images directed to the second display device are redirected to appear on the primary display. The software may display the image in a portion of the primary display, allowing the user to monitor the simulated display and to capture and store images from it with an automated capture algorithm for future use. The presenter or user may adjust the capture parameters via a control interface, which may also be viewed on the primary display device.
    Type: Application
    Filed: December 5, 2012
    Publication date: June 5, 2014
    Inventor: Ivo Martinik
  • Patent number: 8743292
    Abstract: Video/audio production processing control synchronization apparatus and methods are provided. Processing control commands that are provided to a first installation of production processing equipment for controlling production processing of edit units of first production signals are echoed to another installation, or possibly multiple other installations, of production processing equipment. Timing information associated with the edit units is also provided to the other installation(s), to enable production processing of delayed production signals by the other installation(s) to be synchronized with the production processing by the first installation of production processing equipment. Multiple production processing equipment installations can be controlled and synchronized from a single control interface.
    Type: Grant
    Filed: January 30, 2012
    Date of Patent: June 3, 2014
    Assignee: Ross Video Limited
    Inventors: Michael James Atherton, Troy David English, Christopher David Welsh, Kevin Rockel, Leslie Vincent O'Reilly, Trevor Charles May, Steven David Bland, David Allan Ross
  • Patent number: 8743284
    Abstract: A multimedia device (100) includes a separating entity configured to separate a multimedia stream into audio frames and video frames, a sequencing entity configured to add a sequence number to at least one audio frame, a transceiver configured to transmit audio frames to a remote audio device, and a controller coupled to a video player, the controller configured to determine a delay associated with transmitting the audio frames to the remote audio device based upon the sequence number and to control the presentation of the video frames at the video player based on the delay.
    Type: Grant
    Filed: October 8, 2007
    Date of Patent: June 3, 2014
    Assignee: Motorola Mobility LLC
    Inventors: Michael E. Russell, Arnold Sheynman
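    A minimal sketch, assuming the remote audio device echoes back the sequence number of each received frame (the patent only requires that the delay be determined based on the sequence number): the send time of each numbered audio frame is recorded, the acknowledgement yields a delay estimate, and video presentation is postponed by that amount. All names are hypothetical.
      import time

      class AvDelayController:
          def __init__(self):
              self._sent = {}          # sequence number -> send time
              self.video_delay = 0.0   # seconds by which video presentation is postponed

          def on_audio_sent(self, seq):
              self._sent[seq] = time.monotonic()

          def on_audio_ack(self, seq):
              if seq in self._sent:
                  self.video_delay = time.monotonic() - self._sent.pop(seq)

          def video_presentation_time(self, pts):
              return pts + self.video_delay  # present video later so it meets the audio

      ctrl = AvDelayController()
      ctrl.on_audio_sent(seq=1)
      time.sleep(0.05)                       # stand-in for the wireless round trip
      ctrl.on_audio_ack(seq=1)
      print(round(ctrl.video_presentation_time(10.0), 2))  # roughly 10.05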
  • Publication number: 20140146230
    Abstract: An invention for measuring, maintaining and correcting synchronization between signals which suffer varying relative delays during transmission and/or storage is shown. The present invention teaches measuring the relative delay between a plurality of signals which have suffered differing delays due to transmission, storage or other processing. The preferred embodiment of the invention includes the use of a marker which is generated in response to a second signal and combined with a first signal in a manner which ensures that the marker will not be lost in the expected processing of the first signal. Subsequently a first delayed marker is generated in response to the marker associated with or recovered from the first signal, and a second delayed marker is generated from the second signal. The first delayed marker and second delayed marker are compared to determine a measure of the relative timing or delay between said first signal and said second signal at said subsequent time.
    Type: Application
    Filed: February 3, 2014
    Publication date: May 29, 2014
    Inventor: J. Carl Cooper
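    One way to picture the marker scheme, heavily simplified: here the marker is just a timestamp label attached to the first signal, while the patent's markers are designed to survive the expected processing of that signal. A marker derived from the second signal travels with the first signal, and at a later time it is compared against a marker freshly generated from the second signal. All names are hypothetical.
      def attach_marker(first_signal, second_signal_time):
          """Combine a marker (derived from the second signal) with the first signal."""
          return {"payload": first_signal, "marker_time": second_signal_time}

      def relative_delay(delayed_first, current_second_time):
          """Compare the recovered marker with a marker generated from the second signal now."""
          return current_second_time - delayed_first["marker_time"]

      marked = attach_marker(first_signal=b"...", second_signal_time=100.000)
      # ... transmission or storage delays the first signal relative to the second ...
      print(relative_delay(marked, current_second_time=100.040))  # 0.04 s of relative delay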
  • Publication number: 20140139739
    Abstract: A device is provided that accomplishes real-time sound identification and matching by solving both the problem of reducing the time length of a frame while improving temporal accuracy and the problem of robustness against mixing with other sounds. A sound processing device according to the present invention includes: a time-frequency analysis means which generates a time-frequency plane from a sound signal through time-frequency analysis; a region characteristic amount extraction means which, for a plurality of partial region pairs defined on the time-frequency plane in which at least either the shapes or the positions of the two partial regions differ from one another, extracts a region characteristic amount from each partial region; and a sound identifier generation means which generates a sound identifier that identifies the sound by using the region characteristic amount from each partial region.
    Type: Application
    Filed: July 13, 2012
    Publication date: May 22, 2014
    Inventors: Naotake Fujita, Toshiyuki Nomura
  • Publication number: 20140139738
    Abstract: Embodiments are described for a synchronization and switchover mechanism for an adaptive audio system in which multi-channel (e.g., surround sound) audio is provided along with object-based adaptive audio content. A synchronization signal is embedded in the multi-channel audio stream and contains a track identifier and frame count for the adaptive audio stream to play out. The track identifier and frame count of a received adaptive audio frame is compared to the track identifier and frame count contained in the synchronization signal. If either the track identifier or frame count does not match the synchronization signal, a switchover process fades out the adaptive audio track and fades in the multi-channel audio track. The system plays the multi-channel audio track until the synchronization signal track identifier and frame count and adaptive audio track identifier and frame count match, at which point the adaptive audio content will be faded back in.
    Type: Application
    Filed: June 27, 2012
    Publication date: May 22, 2014
    Applicant: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Sripal S. Mehta, Sergio Martinez, Ethan A. Grossman, Brad Thayer, Dean Bullock, John Neary
  • Publication number: 20140132836
    Abstract: The present invention relates to automatic summarization for recognizing the entire content of multimedia data. A method of generating summarized information according to the present invention includes: generating index information on a specific audio signal or a specific video signal among input signals; synchronizing text information, extracted from the input signal or received for the input signal, with the index information; and generating first summarized information by using the synchronized text information and index information.
    Type: Application
    Filed: March 26, 2013
    Publication date: May 15, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventor: Ho Young JUNG
  • Patent number: 8717499
    Abstract: Systems and methods of measuring a temporal offset between audio content and video content that employ audio fingerprints from an audio signal in the audio content, and video fingerprints from video frames in the video content. The systems obtain reference audio and video fingerprints prior to transmission of video over a media channel, and obtain target audio and video fingerprints subsequent to transmission of the video over the media channel. Each fingerprint has an associated time stamp. Using the reference and target audio fingerprints and their associated time stamps, the systems determine an audio time stamp offset. Using the reference and target video fingerprints and their associated time stamps, the systems determine a video time stamp offset. Using the audio and video time stamp offsets, the systems determine a temporal offset between the video content and the audio content introduced by the media channel.
    Type: Grant
    Filed: September 2, 2011
    Date of Patent: May 6, 2014
    Assignee: Dialogic Corporation
    Inventors: Jeffrey A. Bloom, Dekun Zou, Ran Ding
  • Patent number: 8718537
    Abstract: A communication system has a controller for transmitting data to be played back by a plurality of playback devices corresponding to a plurality of channels, and a plurality of adapters required for executing playback by the playback devices. The controller has a setting unit which sets data to be played back by the playback devices and control information required to control playback of the data in time slots of a sync transmission frame, and a transmission unit which transmits the sync transmission frame set by the setting unit to the adapters. Each adapter has a reception unit which receives the transmitted sync transmission frame, and a playback control unit which extracts data corresponding to the channel to be played back by the adapter from the sync transmission frame, and controls the playback timing of the data based on control information corresponding to the data.
    Type: Grant
    Filed: August 31, 2007
    Date of Patent: May 6, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventors: Tsuguhide Sakata, Mitsuru Yamamoto, Ichiro Kato, Yasunori Ohora
  • Patent number: 8687118
    Abstract: A repeater is to be provided between a source device and a sink device and be used in a Lip-sync correction system that transmits a video signal and an audio signal from the source device to the sink device through an HDMI (High Definition Multimedia Interface) transmission path and reproduces the video signal and the audio signal in synchronization with each other on the sink device. A communication from the source device to the sink device is defined as a downstream communication, and a communication from the sink device to the source device is defined as an upstream communication. The repeater includes a processor that receives the video signal and the audio signal through the upstream communication and the downstream communication and processes the received video signal and the received audio signal. The processor corrects deviation between the video signal and the audio signal.
    Type: Grant
    Filed: April 29, 2013
    Date of Patent: April 1, 2014
    Assignee: Panasonic Corporation
    Inventor: Naoki Ejima
  • Publication number: 20140078398
    Abstract: A method comprising: generating at least two frames from a video, wherein the at least two frames are configured to provide an animated image; determining at least one object based on the at least two frames, the at least one object having a periodicity of motion with respect to the at least two frames; determining at least one audio signal component for associating with the animated image based on a signal characteristic of at least one audio signal; and combining the at least one object and the at least one audio signal component wherein the animated image is substantially synchronised with the at least one signal component based on the signal characteristic.
    Type: Application
    Filed: September 12, 2013
    Publication date: March 20, 2014
    Applicant: Nokia Corporation
    Inventors: Ravi Shenoy, Pushkar Prasad Patwardhan
  • Patent number: 8677437
    Abstract: The embodiments described herein provide a method and system for determining the extent to which a plurality of media signals are out of sync with each other.
    Type: Grant
    Filed: May 7, 2008
    Date of Patent: March 18, 2014
    Assignee: Evertz Microsystems Ltd.
    Inventor: Jeff Wei
  • Patent number: 8670072
    Abstract: The present invention provides a streaming media data processing method. The method includes: based on a stream index in streaming media data, separating the streaming media data stream into audio stream data and video stream data and respectively buffering them in an audio stream data queue and a video stream data queue; respectively decoding audio data buffered in the audio stream data queue and video data buffered in the video stream data queue; based on a play callback timestamp of the decoded audio data and a system time of a streaming media playback equipment, determining an audio/video synchronization time; based on a comparison result between a video frame timestamp and a sum of the determined audio/video synchronization time and a video refresh time, processing and displaying each frame in the decoded video stream data according to a predetermined processing method in accordance with the comparison result.
    Type: Grant
    Filed: September 21, 2012
    Date of Patent: March 11, 2014
    Assignee: Guangzhou UCWeb Computer Technology Co., Ltd
    Inventor: Jie Liang
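    The per-frame decision can be sketched as below; the wait/drop/display policy, the refresh interval, and all names are assumptions made for illustration. The synchronization time is estimated from the audio play callback timestamp plus the wall-clock time elapsed since that callback, and each decoded video frame is compared against it.
      def av_sync_time(audio_callback_pts, system_time_at_callback, now):
          """Estimated audio position: callback timestamp plus elapsed wall-clock time."""
          return audio_callback_pts + (now - system_time_at_callback)

      def handle_video_frame(frame_pts, sync_time, refresh_interval):
          target = sync_time + refresh_interval
          if frame_pts > target:
              return "wait"      # frame is early: hold it for the next refresh
          if frame_pts < sync_time - refresh_interval:
              return "drop"      # frame is hopelessly late: skip it to catch up
          return "display"

      sync = av_sync_time(audio_callback_pts=12.000, system_time_at_callback=50.000, now=50.030)
      print(handle_video_frame(frame_pts=12.020, sync_time=sync, refresh_interval=0.040))  # display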