Audio To Video Patents (Class 348/515)
  • Publication number: 20130128115
    Abstract: This invention relates to a device and a method of generating a first and a second fingerprint (102,104) usable for synchronisation of at least two signals (101,103) and corresponding method and device for synchronising two or more signals. A fingerprint pair is generated on the basis of a segment of a first signal e.g. an audio signal and of a segment of a second signal e.g. a video signal at each synchronisation time point. The generated fingerprint pair(s) are stored in a database (203) and communicated or distributed to a synchronisation device (303). During synchronisation, fingerprint(s) of the audio signal and fingerprint(s) of the video signal to be synchronised are generated and matched against fingerprints in the database. When a match is found, the fingerprints also determine the synchronisation time point, which is used to synchronise the two signals. In this way, a simple, reliable and efficient way of synchronising at least two signals is obtained.
    Type: Application
    Filed: January 11, 2013
    Publication date: May 23, 2013
    Applicant: Gracenote, Inc.
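The fingerprint-pair scheme in the abstract above can be sketched in a few lines. This is a minimal illustration only: an exact hash stands in for the robust perceptual fingerprints a real system would use, and the segment values are invented.

```python
import hashlib

def fingerprint(segment: bytes) -> str:
    """Toy fingerprint: a truncated hash of a raw signal segment."""
    return hashlib.sha256(segment).hexdigest()[:16]

database = {}  # (audio fp, video fp) -> synchronisation time point (seconds)

def register_sync_point(audio_seg: bytes, video_seg: bytes, t: float) -> None:
    """Store one fingerprint pair per synchronisation time point."""
    database[(fingerprint(audio_seg), fingerprint(video_seg))] = t

def find_sync_point(audio_seg: bytes, video_seg: bytes):
    """Match incoming segments against the database; None if no match."""
    return database.get((fingerprint(audio_seg), fingerprint(video_seg)))

register_sync_point(b"audio@10s", b"video@10s", 10.0)
```

A match returns the stored time point, which both signals can then be aligned to.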
  • Publication number: 20130120654
    Abstract: Provided in some embodiments is a computer implemented method that includes receiving time-aligned script data including dialogue words of a script and timecodes corresponding to the dialogue words; identifying gaps between dialogue words for the insertion of video description content, wherein the gaps are identified based on the duration of pauses between timecodes of adjacent dialogue words; aligning segments of video description content with corresponding gaps in dialogue, wherein the video description content for the segments is derived from corresponding script elements of the script; and generating a script document including the aligned segments of video description content.
    Type: Application
    Filed: May 28, 2010
    Publication date: May 16, 2013
    Inventor: David A. Kuspa
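The gap-identification step described above reduces to comparing the timecodes of adjacent dialogue words. A minimal sketch, assuming per-word (start, end) timecodes and an illustrative one-second minimum gap:

```python
def find_gaps(word_times, min_gap=1.0):
    """word_times: (start, end) timecodes in seconds for consecutive dialogue
    words. Returns (gap_start, gap_end) pauses long enough to hold a segment
    of video description content."""
    gaps = []
    for (_, end0), (start1, _) in zip(word_times, word_times[1:]):
        if start1 - end0 >= min_gap:
            gaps.append((end0, start1))
    return gaps

words = [(0.0, 0.4), (0.5, 0.9), (3.0, 3.5), (3.6, 4.0)]
description_slots = find_gaps(words)  # one usable gap: (0.9, 3.0)
```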
  • Patent number: 8441576
    Abstract: A television receiver computes a delay time of an image displayed on the television receiver with respect to an audio signal transmitted from the television receiver to an audio amplifier, based on video delay information as EDID data, delay information transported from a disc recorder and information of a time required until audio data received from the disc recorder is transmitted to an audio amplifier. The audio amplifier controls a delay time lasting until the audio responsive to the received audio data is outputted so that the delay time matches the aforementioned delay time. Thereby, the displayed image in the television receiver and the audio output in the audio amplifier are synchronized.
    Type: Grant
    Filed: November 7, 2007
    Date of Patent: May 14, 2013
    Assignee: Sony Corporation
    Inventors: Yasuhisa Nakajima, Hidekazu Kikuchi
  • Patent number: 8441577
    Abstract: Systems and methods are operable to synchronize presentation between video streams and separately received audio streams. An exemplary embodiment receives a first media content stream at a media device, wherein the first media content stream comprises at least a video stream portion; receives a second media content stream at the media device, wherein the second media content stream comprises at least an audio stream portion; delays the audio stream portion of the second media content stream by a duration corresponding to at least one synchronization time delay; communicates the video stream portion of the first media content stream to a visual display device; and communicates the delayed audio stream portion of the second media content stream to an audio presentation device, wherein a visual scene of the video stream portion of the first media content stream is substantially synchronized with sounds of the audio stream portion of the second media content stream.
    Type: Grant
    Filed: February 8, 2011
    Date of Patent: May 14, 2013
    Assignee: EchoStar Technologies L.L.C.
    Inventor: Gregory Davis
  • Patent number: 8436939
    Abstract: Embodiments of the present invention provide systems and methods for non-invasive, “in-service” AV delay detection and correction. These systems and methods do not modify the audio signal or the video signal, nor do they rely on any metadata to be carried with the audio signal or the video signal via the distribution path. Instead, agents located at various points along the distribution path generate very small signature curves for the audio signal and the video signal and distribute them to a manager via a separate data path other than the distribution path. The manager calculates a measured AV delay caused by the distribution path based on these signature curves, and then optionally corrects the measured AV delay by adjusting an in-line delay in the distribution path.
    Type: Grant
    Filed: September 28, 2010
    Date of Patent: May 7, 2013
    Assignee: Tektronix, Inc.
    Inventor: Daniel G. Baker
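The manager's comparison of audio and video signature curves can be sketched as a lag search that maximises cross-correlation. The curve data below is invented, and a real system would use more robust signatures and scoring.

```python
def estimate_av_delay(audio_curve, video_curve, max_lag=50):
    """Estimate the offset (in samples) between two signature curves by
    maximising their normalised cross-correlation."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # Pair up samples at this candidate lag, staying inside both curves.
        pairs = [(audio_curve[i], video_curve[i + lag])
                 for i in range(len(audio_curve))
                 if 0 <= i + lag < len(video_curve)]
        if not pairs:
            continue
        score = sum(a * v for a, v in pairs) / len(pairs)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

audio = [0.0] * 30; audio[10] = 1.0   # feature at sample 10 in the audio curve
video = [0.0] * 30; video[13] = 1.0   # same feature arrives 3 samples later
```

The estimated lag (3 samples here) is the measured AV delay that an in-line delay element could then correct.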
  • Patent number: 8432954
    Abstract: A video processing system may include a video deserializer, a video serializer and a programmable video processing device. The video deserializer may have an input for receiving a serial data stream containing video data and a serial to pseudo-parallel converter, coupled to the serial data stream, for generating a plurality of serial output lanes from the serial data stream. The video serializer may have a plurality of inputs for receiving serial data streams and a pseudo-parallel to serial converter, coupled to the plurality of input serial data streams, for generating a single serial data stream from the plurality of input serial data streams. The programmable video processing device may be coupled to the video deserializer and the video serializer, and may have a plurality of interface pins for receiving the plurality of serial output lanes from the deserializer and for transmitting the plurality of serial data streams to the serializer.
    Type: Grant
    Filed: August 21, 2007
    Date of Patent: April 30, 2013
    Assignee: Semtech Canada Inc.
    Inventors: John Hudson, Ryan Eckhardt
  • Publication number: 20130088641
    Abstract: A method and system for audio transmission in a wireless communication system which transmits digital video and digital audio in High-Definition Multimedia Interface (HDMI) format. Position information of audio packets within the HDMI frame is obtained. Digital audio information including the position information is transmitted from a data source device to a data sink device via a wireless communication medium. At the data sink device, an HDMI frame is reconstructed by inserting received audio packets into horizontal and vertical blanking periods of the HDMI frame.
    Type: Application
    Filed: November 29, 2012
    Publication date: April 11, 2013
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
  • Publication number: 20130079093
    Abstract: A method of and system for handling latency issues encountered in producing real-time entertainment, such as games of skill synchronized with live or taped televised events, is described herein. Several latency situations are addressed concerning receipt of a television signal relative to the real-time games played along with the telecasts: systemic delays, arbitrarily imposed delays of a broadcast signal, and variances in the precise broadcast times of taped television programs all have to be equalized so as to provide fair entertainment.
    Type: Application
    Filed: November 19, 2012
    Publication date: March 28, 2013
    Applicant: Winview, Inc.
  • Patent number: 8405773
    Abstract: A multi-modal quality estimation unit (11) estimates a multi-modal quality value (23A) on the basis of an audio quality evaluation value (21A) and a video quality evaluation value (21). In addition, a delay quality degradation amount estimation unit (12) estimates a delay quality degradation amount (23B) on the basis of an audio delay time (22A) and a video delay time (22B). A video communication quality estimation unit (13) estimates a video communication quality value (24) on the basis of a multi-modal quality value (23A) and a delay quality degradation amount (23B).
    Type: Grant
    Filed: September 6, 2006
    Date of Patent: March 26, 2013
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Takanori Hayashi, Kazuhisa Yamagishi
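The three-stage estimation above (multi-modal quality, delay degradation, overall quality) can be illustrated with a toy model. The weighted-sum forms and coefficients below are invented for illustration and are not the patent's actual estimation functions.

```python
def multimodal_quality(audio_mos, video_mos, wa=0.3, wv=0.5, wav=0.2):
    """Toy multi-modal quality (MOS scale 1-5): weighted combination of
    audio and video quality plus an interaction term."""
    return wa * audio_mos + wv * video_mos + wav * audio_mos * video_mos / 5.0

def delay_degradation(audio_delay_ms, video_delay_ms, k=0.002):
    """Toy degradation: grows with absolute delay and with audio/video skew."""
    skew = abs(audio_delay_ms - video_delay_ms)
    return k * (max(audio_delay_ms, video_delay_ms) + skew)

def video_communication_quality(audio_mos, video_mos, ad_ms, vd_ms):
    """Overall quality: multi-modal value minus delay degradation, floored at 1."""
    return max(1.0, multimodal_quality(audio_mos, video_mos)
               - delay_degradation(ad_ms, vd_ms))
```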
  • Patent number: 8400566
    Abstract: Features are extracted from video and audio content that have a known temporal relationship with one another. The extracted features are used to generate video and audio signatures, which are assembled with an indication of the temporal relationship into a synchronization signature construct. The construct may be used to calculate synchronization errors between video and audio content received at a remote destination. Measures of confidence are generated at the remote destination to optimize processing and to provide an indication of reliability of the calculated synchronization error.
    Type: Grant
    Filed: August 17, 2009
    Date of Patent: March 19, 2013
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Kent Bennett Terry, Regunathan Radhakrishnan
  • Publication number: 20130063658
    Abstract: Embodiments of the invention relate to a system, method, computer product, computer system and device for providing a live broadcast, the method includes receiving a first audio content; determining a first video content that matches the first audio content; matching the first video content with the first audio content in real time, forming matched first audio/video content; and providing the matched first audio/video content in the live broadcast. The method further includes receiving a second audio content; determining a second video content that matches the second audio content; matching the second video content with the second audio content in real time, forming matched second audio/video content; and providing the matched second audio/video content in the live broadcast.
    Type: Application
    Filed: October 21, 2011
    Publication date: March 14, 2013
    Inventors: Robert B. Clemmer, Elliot A. Swan, Mathew D. Polzin, Sam O. Oluwalana
  • Publication number: 20130057761
    Abstract: Systems and methods of measuring a temporal offset between audio content and video content that employ audio fingerprints from an audio signal in the audio content, and video fingerprints from video frames in the video content. The systems obtain reference audio and video fingerprints prior to transmission of video over a media channel, and obtain target audio and video fingerprints subsequent to transmission of the video over the media channel. Each fingerprint has an associated time stamp. Using the reference and target audio fingerprints and their associated time stamps, the systems determine an audio time stamp offset. Using the reference and target video fingerprints and their associated time stamps, the systems determine a video time stamp offset. Using the audio and video time stamp offsets, the systems determine a temporal offset between the video content and the audio content introduced by the media channel.
    Type: Application
    Filed: September 2, 2011
    Publication date: March 7, 2013
    Inventors: Jeffrey A. Bloom, Dekun Zou, Ran Ding
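The arithmetic of the offset computation above is simple: a time-stamp offset per modality from matched reference/target fingerprints, then the difference of the two offsets. A sketch, assuming fingerprints are given as (fingerprint, timestamp) pairs and using a median to tolerate stray mismatches:

```python
def timestamp_offset(ref_fps, tgt_fps):
    """Median time-stamp difference over matching fingerprints.
    ref_fps / tgt_fps: lists of (fingerprint, timestamp) pairs."""
    ref = dict(ref_fps)
    diffs = sorted(t - ref[fp] for fp, t in tgt_fps if fp in ref)
    return diffs[len(diffs) // 2]

def av_temporal_offset(ref_audio, tgt_audio, ref_video, tgt_video):
    """Temporal offset introduced by the channel: video offset minus audio offset."""
    return (timestamp_offset(ref_video, tgt_video)
            - timestamp_offset(ref_audio, tgt_audio))

# The channel delays audio by 0.2 s and video by 0.5 s -> 0.3 s of skew.
skew = av_temporal_offset(
    [("a1", 0.0), ("a2", 1.0)], [("a1", 0.2), ("a2", 1.2)],
    [("v1", 0.0), ("v2", 1.0)], [("v1", 0.5), ("v2", 1.5)])
```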
  • Patent number: 8390669
    Abstract: The present disclosure discloses a method for identifying individuals in a multimedia stream originating from a video conferencing terminal or a Multipoint Control Unit, including executing a face detection process on the multimedia stream; defining subsets including facial images of one or more individuals, where the subsets are ranked according to a probability that their respective one or more individuals will appear in a video stream; comparing a detected face to the subsets in consecutive order starting with a most probable subset, until a match is found; and storing an identity of the detected face as searchable metadata in a content database in response to the detected face matching a facial image in one of the subsets.
    Type: Grant
    Filed: December 15, 2009
    Date of Patent: March 5, 2013
    Assignee: Cisco Technology, Inc.
    Inventors: Jason Catchpole, Craig Cockerton
  • Patent number: 8384827
    Abstract: A system and method for characterizing the relative offset in time between audio and video signals, enabling the receiver of the audio and video signals to resynchronize them. Signal characterization data is dynamically captured and encoded into frames of video and audio data that is output by a television origination facility. The signal characterization data is extracted by the receiver and signal characterization data is captured for the received frames. The extracted signal characterization data is compared with the captured signal characterization data to compute the relative offset in time between the video and one or more audio signals for a frame. The receiver may then resynchronize the video and audio signals using the computed relative offset.
    Type: Grant
    Filed: June 2, 2010
    Date of Patent: February 26, 2013
    Assignee: Disney Enterprises, Inc.
    Inventors: Michael J. Strein, Efthimis Stefanidis, James L. Jackson
  • Publication number: 20130044260
    Abstract: Systems and methods are provided for cross-platform rendering of video content on a user-computing platform that is one type of a plurality of different user-computing platform types. A script is transmitted to the user-computing platform and is interpreted by an application program compiled to operate on any one of the plurality of user-computing platform types. When interpreted by the application program operating on the user-computing platform, the script causes the platform to: render the video data by displaying frame images which make up the video data; play back the associated audio data; ascertain an audio playback time reference associated with the playback of the associated audio data; and directly synchronize the displaying of the frame images with the playback of the associated audio data based on the audio playback time reference.
    Type: Application
    Filed: June 13, 2012
    Publication date: February 21, 2013
    Inventors: Steven Erik VESTERGAARD, Che-Wai TSUI, Shaoning TU
  • Patent number: 8379151
    Abstract: A method includes synchronizing audio and video streams by aligning the audio path and the video path: a variable delay is introduced into the audio path or the video path to substantially equalize the end-to-end delay of both paths. An apparatus includes a digital-to-analog converter for synchronizing audio and video, where the audio path and the video path are aligned by introducing a variable delay to the audio path or the video path to substantially equalize the end-to-end delay of both.
    Type: Grant
    Filed: March 24, 2011
    Date of Patent: February 19, 2013
    Assignee: Floreat, Inc.
    Inventor: Kishan Shenoi
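The delay-equalization idea above amounts to padding the faster path up to the slower one. A minimal sketch, with hypothetical path delays in milliseconds:

```python
def alignment_delays(audio_path_ms, video_path_ms):
    """Variable delay to add to each path so both end-to-end delays match:
    returns (audio padding, video padding) in milliseconds."""
    total = max(audio_path_ms, video_path_ms)
    return total - audio_path_ms, total - video_path_ms

# Video processing takes 100 ms, audio only 40 ms: delay the audio by 60 ms.
padding = alignment_delays(40, 100)
```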
  • Patent number: 8379150
    Abstract: Data transmission first initializes a transmitter system and a receiver system. The transmitter system processes audio/video data, and transmits the processed audio/video data based on information received from the receiver system. The receiver system receives and processes the audio/video data sent by the transmitter system for generating corresponding audio output data and video output data. The receiver system sends the audio output data and the video output data to an audio output apparatus and a video output apparatus, respectively.
    Type: Grant
    Filed: March 28, 2008
    Date of Patent: February 19, 2013
    Assignee: Qisda Corporation
    Inventors: Cheng-Te Tseng, Chang-Hung Lee
  • Publication number: 20130038792
    Abstract: Embodiments of the present invention relate to computer systems for transmitting or receiving and playing a plurality of data streams, where at least one of the data streams includes video data packets and/or audio data packets and at least one other data stream includes further data packets that are synchronized with the video and/or audio data packets. In particular embodiments, the further data stream includes haptic data packets that include haptic command data generated in real time with the video and/or audio data streams, where the haptic command data is provided to a haptic output device configured to replicate or approximate sensations through the output of one or more mechanical forces in synchronization with the playing of video and/or audio from the video and/or audio data streams.
    Type: Application
    Filed: October 1, 2012
    Publication date: February 14, 2013
    Applicant: Internet Services, LLC
  • Patent number: 8368811
    Abstract: According to the present invention, the sound quality of an apparatus having HDMI outputs exclusive for both of an audio signal and a video signal can be further improved. According to the present invention, even when the calculated value of CTS obtained using a first video clock signal Vc1 generated by a decoder 204 is other than an integer, a value of a second video clock signal Vc2 and an N value are set such that the calculated value of CTS obtained using the second video clock signal Vc2 is an integer. By using the value of second video clock signal Vc2 and the N value set in this manner, an audio reproducing apparatus 103 can generate an audio clock signal Ac having reduced jitter.
    Type: Grant
    Filed: January 25, 2011
    Date of Patent: February 5, 2013
    Assignee: Panasonic Corporation
    Inventor: Tetsuya Itani
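The CTS arithmetic above follows HDMI audio clock regeneration, where f_audio = f_TMDS × N / (128 × CTS), i.e. CTS = f_TMDS × N / (128 × f_audio). A sketch of the idea, using exact rational arithmetic; the N/CTS pairs shown are the commonly recommended values for 48 kHz audio at these video clocks:

```python
from fractions import Fraction

def cts_for(f_tmds_hz: Fraction, n: int, f_audio_hz: int) -> Fraction:
    """CTS required so the sink regenerates f_audio from the TMDS clock."""
    return f_tmds_hz * n / (128 * f_audio_hz)

fs = 48_000
f_video_1 = Fraction(74_250_000)                # 74.25 MHz TMDS clock
f_video_2 = Fraction(74_250_000 * 1000, 1001)   # 74.25/1.001 MHz (NTSC-rate video)

cts_a = cts_for(f_video_1, 6144, fs)    # 74250: integer, low-jitter audio clock
cts_b = cts_for(f_video_2, 6144, fs)    # fractional: CTS must dither, adding jitter
cts_c = cts_for(f_video_2, 11648, fs)   # 140625: a different N restores an integer CTS
```

This mirrors the abstract's point: when CTS computed from one clock/N combination is not an integer, choosing different values can make it an integer and reduce audio clock jitter.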
  • Patent number: 8363161
    Abstract: Methods of processing an audiovisual signal that has a video portion and an audio portion are described. One example includes detecting a video synchronization event in the video portion and, in response to the detecting, embedding a marker relating to the video synchronization event into a serial data stream carrying the audio portion. The serial data stream includes a series of packets, each packet having (A) a preamble that includes a synchronization sequence, (B) an auxiliary data field, and (C) a main data field.
    Type: Grant
    Filed: May 25, 2007
    Date of Patent: January 29, 2013
    Assignee: Broadcom Corporation
    Inventor: Larry Pearlstein
  • Publication number: 20130021525
    Abstract: A server comprises a processing unit configured to interlace audio data packets with video data to form an interlaced audio/video data file having an approximately uniform audio time interval between consecutive audio data packets in the interlaced audio/video data file. The server also comprises an interrupt timer configured to provide periodic interrupt signals. The processing unit is configured to synchronize the start of transmission of each instance of the audio data packets and the video data packets with the periodic interrupt signals from the interrupt timer.
    Type: Application
    Filed: July 22, 2011
    Publication date: January 24, 2013
    Applicant: HONEYWELL INTERNATIONAL INC.
    Inventors: Manjunatha Karunakar, Stephen Mead, Prashanth Balanje Ramesh
  • Patent number: 8358375
    Abstract: To solve the problems in the prior art, the present invention provides a method for measuring a time difference between digital video signals and digital audio signals, wherein said method comprises the steps of: extracting respective time series data from respective frequency domains of said digital video signals and said digital audio signals; and statistically identifying the cross-correlation of said time series data in said frequency domains, thereby measuring the time difference between said digital video signals and said digital audio signals.
    Type: Grant
    Filed: October 6, 2006
    Date of Patent: January 22, 2013
    Assignee: National University Corporation Chiba University
    Inventors: Hiroaki Ikeda, Reiko Iwai
  • Patent number: 8358376
    Abstract: This invention relates to a device and a method of generating a first and a second fingerprint (102,104) usable for synchronization of at least two signals (101,103) and corresponding method and device for synchronizing two or more signals. A fingerprint pair is generated on the basis of a segment of a first signal e.g. an audio signal and of a segment of a second signal e.g. a video signal at each synchronization time point. The generated fingerprint pair(s) are stored in a database (203) and communicated or distributed to a synchronization device (303). During synchronization, fingerprint(s) of the audio signal and fingerprint(s) of the video signal to be synchronized are generated and matched against fingerprints in the database. When a match is found, the fingerprints also determine the synchronization time point, which is used to synchronize the two signals. In this way, a simple, reliable and efficient way of synchronizing at least two signals is obtained.
    Type: Grant
    Filed: February 9, 2011
    Date of Patent: January 22, 2013
    Assignee: Gracenote, Inc.
    Inventors: Job Cornelis Oostveen, David K. Roberts, Adrianus Denissen, Warner Ten Kate
  • Publication number: 20130002953
    Abstract: A transmission apparatus according to the present invention includes: an audio input unit configured to obtain audio data of 32-bit precision and to add additional information to the audio data, to generate output audio data including the audio data of 32-bit precision and the additional information, the additional information indicating characteristics of the audio data obtained; and a video-audio synthesizing unit configured to: add packet type information to the output audio data generated by the audio input unit, to generate at least one audio sample packet; and multiplex the at least one audio sample packet into a horizontal blanking interval of video data, the packet type information indicating that the output audio data includes audio data of 32-bit precision.
    Type: Application
    Filed: September 12, 2012
    Publication date: January 3, 2013
    Applicant: PANASONIC CORPORATION
    Inventors: Hiroyuki NOGUCHI, Shinya MURAKAMI
  • Publication number: 20120327300
    Abstract: A system may be provided for synchronizing a first media stream with a second media stream. A first receiver in the system may receive the first media stream over a network. A second receiver in the system may receive the second media stream over the network. The second receiver may determine an identity of a content delay matching stream that indicates the amount of processing delay introduced in the first media stream by a processing component of the first receiver or transmitter. The second receiver may subscribe to the identified content delay matching stream. The second receiver may receive the content delay matching stream over the network and determine the processing delay from the content delay matching stream. The receiver may cause the second media stream to be delayed in accordance with the processing delay in the first media stream such that the first and second media streams are synchronized.
    Type: Application
    Filed: June 21, 2011
    Publication date: December 27, 2012
    Applicant: HARMAN INTERNATIONAL INDUSTRIES, INC.
    Inventors: Jeffrey L. Hutchings, Aaron Gelter
  • Publication number: 20120314131
    Abstract: Some embodiments herein include at least one of systems, methods, and devices for remote audio capture using a hand-held device. In some embodiments, the device captures, stores, and transmits audio to another device via wireless technologies. In some embodiments, the device may be used either as a hand-held microphone or as a lavaliere microphone.
    Type: Application
    Filed: June 11, 2012
    Publication date: December 13, 2012
    Inventor: Andrew Charles Kamin-Lyndgaard
  • Patent number: 8330859
    Abstract: Measurement of the relative timing between images and associated information, for example video and audio. Image mutual event characteristics are recognized in the images and associated mutual event characteristics are recognized in the associated information. The image mutual events and associated mutual events are compared to determine their occurrences, one relative to the other as a measure of relative timing. Particular operation with audio and video signals is described.
    Type: Grant
    Filed: February 22, 2008
    Date of Patent: December 11, 2012
    Assignee: Pixel Instruments Corporation
    Inventors: J. Carl Cooper, Mirko Vojnovic, Maynard Grimm, Christopher Smith
  • Publication number: 20120307149
    Abstract: Methods, systems, and products are disclosed for retrieving audio signals. A video signal is received that is associated with a content identifier and an alternate audio tag. In response to the alternate audio tag, a query is made for an alternate audio source that corresponds to the content identifier. A query result is received that identifies at least one alternate audio signal that corresponds to the content identifier and that is separately available from the video signal.
    Type: Application
    Filed: August 17, 2012
    Publication date: December 6, 2012
    Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Dennis R. Meek, Robert A. Koch
  • Publication number: 20120307147
    Abstract: A network device receives multiple protocol data units (PDUs), or packets, and initiates aggregation of an intended number of the packets to form a transmission block. During the aggregation process, the device receives a priority PDU, or packet, and in response to receiving the priority PDU, terminates the aggregation process, either permanently or temporarily, even though the number of packets aggregated prior to receiving the priority packet is less than the number of packets intended to be included in the transmission block. The device releases the priority packet for transmission out of turn, ahead of the already aggregated packets. The priority packet can be released individually, with the aggregation being resumed after release of the priority packet, or the priority packet can be pre-pended to the already aggregated block of packets and the entire block released with a priority assigned according to the priority of the priority packet.
    Type: Application
    Filed: December 21, 2011
    Publication date: December 6, 2012
    Applicant: BROADCOM CORPORATION
    Inventors: Ragu (Raghunatha) Kondareddy, Yasantha N. Rajakarunanayake, James F. Dougherty, III
  • Publication number: 20120307148
    Abstract: A method and device for demultiplexing audio & video data in a multimedia file are provided. The method includes: setting and updating a maximum synchronization time point according to a preset maximum synchronization time; selecting an output data frame according to a comparison result between the decoding time stamp of the current data frame for each data frame channel in the multimedia file and the maximum synchronization time point in combination with the order of byte offset location values of the current data frames for each data frame channel; and fetching the output data frame via searching a position in the multimedia file according to the byte offset location value of the selected output data frame to obtain an original stream audio and video frame queue.
    Type: Application
    Filed: June 4, 2012
    Publication date: December 6, 2012
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Chunbo ZHU, Ye SUN
  • Patent number: 8322928
    Abstract: A pitch bearing for a wind turbine rotor comprising a rotor hub and at least one rotor blade, the pitch bearing comprising a cylindrical inner bearing ring connectable to a rotor blade of the wind turbine rotor, a cylindrical outer bearing ring connectable to the rotor hub of the wind turbine rotor, and an annular reinforcement section for reinforcing the outer bearing ring. The annular reinforcement section adjoins the cylindrical outer bearing ring at its radial outer surface.
    Type: Grant
    Filed: October 1, 2008
    Date of Patent: December 4, 2012
    Assignee: Siemens Aktiengesellschaft
    Inventors: Martin Hedegaard Larsen, Anders Vangsgaard Nielsen, Steffen Frydendal Poulsen
  • Publication number: 20120274850
    Abstract: Digital video data and digital multiple-audio data are extracted from a source, using a hardware processor in a content source device within a premises. The extracted digital video data is processed for display on a main display device in the premises; and the extracted digital multiple-audio data is processed into a primary soundtrack in a primary language, to be listened to within the premises in synchronization with the displayed extracted digital video data. The primary soundtrack corresponds to the displayed extracted digital video data, in the primary language. The extracted digital multiple-audio data is processed into at least one secondary audio asset, different than the primary soundtrack; and the at least one secondary audio asset is transmitted to a personal media device within the premises, for apprehension by a user of the personal media device in synchronization with the displayed extracted digital video data.
    Type: Application
    Filed: April 27, 2011
    Publication date: November 1, 2012
    Applicant: Time Warner Cable Inc.
    Inventors: Sherisse Hawkins, Matthew Osminer, Chris Cholas
  • Patent number: 8300851
    Abstract: A method of managing a sound source in a digital AV device and an apparatus thereof are provided. The method of managing a sound source in a digital AV device includes: extracting at least one sound source from sound being reproduced through the digital AV device; mapping an image to the extracted sound source; and managing the sound sources by using the mapped image. In addition, preferably, the extracted sound source is registered, changed, deleted, selectively reproduced, or selectively deleted by using the image. Accordingly, sound being output can be visually managed by handling the sound sources separately, and a desired sound source can be selectively reproduced or removed, enhancing utilization of the digital AV device.
    Type: Grant
    Filed: November 10, 2005
    Date of Patent: October 30, 2012
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jung-eun Shin, Eun-ha Lee
  • Patent number: 8300147
    Abstract: A system and method for characterizing the relative offset in time between audio and video signals, enabling the receiver of the audio and video signals to resynchronize them. Signal characterization data is dynamically captured and encoded into frames of video and audio data that is output by a television origination facility. The signal characterization data is extracted by the receiver and signal characterization data is captured for the received frames. The extracted signal characterization data is compared with the captured signal characterization data to compute the relative offset in time between the video and one or more audio signals for a frame. The receiver may then resynchronize the video and audio signals using the computed relative offset.
    Type: Grant
    Filed: June 2, 2010
    Date of Patent: October 30, 2012
    Assignee: Disney Enterprises, Inc.
    Inventors: Michael J. Strein, Efthimis Stefanidis, James L. Jackson
  • Patent number: 8300667
    Abstract: In one method embodiment, a multiplex of a compressed video stream and a compressed audio stream is received from the network device, the multiplex comprising a succession of intervals of a video program corresponding to a first playout rate; at the start of each interval, the compressed audio stream is replaced with a compressed, pitch-preserving audio stream corresponding to a second playout rate different than the first.
    Type: Grant
    Filed: March 2, 2010
    Date of Patent: October 30, 2012
    Assignee: Cisco Technology, Inc.
    Inventors: Ali C. Begen, Tankut Akgul, Michael A. Ramalho, David R. Oran, William C. Ver Steeg
  • Patent number: 8301018
    Abstract: An audio/video synchronous playback device includes a first synchronization section for repeating or skipping a first video data sequence in units of a video frame interval thereof to synchronize the first video data sequence with an audio data sequence, and a second synchronization section for repeating or skipping a second video data sequence in units of a video frame or video field interval thereof to synchronize the second video data sequence with the audio data sequence. A first video data sequence output and a second video data sequence output having different frame frequencies are separately synchronized with one channel of audio data sequence output with their respective precisions.
    Type: Grant
    Filed: July 2, 2008
    Date of Patent: October 30, 2012
    Assignee: Panasonic Corporation
    Inventor: Hideaki Shibata
  • Patent number: 8289445
    Abstract: Pixels extracted from each frame of an input video signal are thinned out in units of predetermined samples, and the thinned-out samples are fetched in equal order, frame by frame, and mapped into the active periods of first, second, third, and fourth sub-images conforming to the HD-SDI format. The mapped sub-images are each separated into a first-link transmission channel and a second-link transmission channel, and are thus mapped into eight channels. The mapped sub-images are then converted in parallel, and the resulting parallel digital data items are output.
    Type: Grant
    Filed: November 19, 2008
    Date of Patent: October 16, 2012
    Assignee: Sony Corporation
    Inventor: Shigeyuki Yamashita
  • Patent number: 8284310
    Abstract: An audio/video system comprises an audio signal processing path having an audio path delay and a video signal processing path having a video path delay. The audio path delay may be different from the video path delay. The audio path delay and/or the video path delay may change, for example because of replacement of a component within the audio signal processing path or the video signal processing path. Delay matching (synchronization) in the audio/video system comprises adjusting the audio path delay to be substantially equal to the video path delay. Matching the audio path delay to the video path delay generally includes adding delay to the signal processing path with the lesser delay.
    Type: Grant
    Filed: April 5, 2011
    Date of Patent: October 9, 2012
    Assignee: Sony Computer Entertainment America LLC
    Inventor: Dominic Saul Mallinson
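The delay-matching rule in this abstract (add delay to the signal processing path with the lesser delay) reduces to simple arithmetic. The sketch below is our own minimal illustration with hypothetical delay values.

```python
def matching_delays(audio_delay_ms, video_delay_ms):
    """Return (extra_audio_ms, extra_video_ms): the delay to add to each
    path so both end up at the larger of the two path delays."""
    target = max(audio_delay_ms, video_delay_ms)
    return target - audio_delay_ms, target - video_delay_ms

# Video path got slower (e.g. a new display processor was swapped in):
print(matching_delays(20, 85))  # → (65, 0): pad the audio path by 65 ms
```

Recomputing after any component replacement is what keeps the two paths substantially equal when either path delay changes.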
  • Patent number: 8279344
    Abstract: A system synchronizes a video presentation to a master time reference (e.g., a corresponding audio presentation) by modifying a video cadence. The system detects when a displayed video leads or lags a master time reference by a programmable level or more. The system minimizes the synchronization error by inserting or removing source video frames to or from a frame cadence pattern.
    Type: Grant
    Filed: December 14, 2009
    Date of Patent: October 2, 2012
    Assignee: QNX Software Systems Limited
    Inventor: Adrian Boak
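The correction step this abstract describes (insert or remove source frames once the sync error passes a programmable level) can be sketched as below; the threshold value and names are illustrative assumptions, not QNX's actual cadence logic.

```python
def cadence_adjustment(error_ms, threshold_ms=20.0):
    """error_ms: how far the displayed video leads (+) or lags (-) the
    master time reference. Returns +1 to repeat a source frame in the
    cadence pattern, -1 to drop one, or 0 when within tolerance."""
    if error_ms > threshold_ms:
        return 1    # video ahead: repeating a frame slows it down
    if error_ms < -threshold_ms:
        return -1   # video behind: dropping a frame lets it catch up
    return 0

print(cadence_adjustment(35.0), cadence_adjustment(-40.0), cadence_adjustment(5.0))
```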
  • Publication number: 20120224100
    Abstract: A multimedia apparatus and a synchronization method thereof are provided. The multimedia apparatus includes a communication module, a video output unit which outputs video, and a control unit which transmits an audio signal to an external device through the communication module and operates the video output unit to display video corresponding to the audio signal, delaying the video based on delay information received from the external device through the communication module.
    Type: Application
    Filed: May 14, 2012
    Publication date: September 6, 2012
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hwang-hyeon GHA, Byeong-woon HA
  • Publication number: 20120206650
    Abstract: A method for synchronized playback of wireless audio and video is applicable to a playback system. The method for synchronized playback includes steps of receiving and processing multimedia data by the playback system, in which the multimedia data includes video data and audio data; wirelessly transmitting the audio data to a loudspeaker and meanwhile holding the video data for a threshold time; and finishing the transmission of the audio data when the threshold time is reached, so that the video data and the corresponding audio data are synchronously played. The method for synchronized playback can control the delay caused by the wireless audio transmission, thus achieving the objective of synchronously playing the audio and video data.
    Type: Application
    Filed: September 23, 2011
    Publication date: August 16, 2012
    Applicant: AMTRAN TECHNOLOGY CO., LTD
    Inventor: Kao-Min Lin
  • Publication number: 20120200774
    Abstract: An audio/video distribution system and method for providing media to an end-user. The media has an audio component and a video component. At least the audio component is received and played at an end-user device. The system includes a controller, a display device and a synchronization tool. The controller generates a video signal and an audio signal. The display device is coupled to the controller for receiving and displaying the video signal. The synchronization tool is implemented at the end-user device and is configured to allow a user to synchronize playback of the audio signal at the end-user device to the display of the video signal at the display device.
    Type: Application
    Filed: February 3, 2012
    Publication date: August 9, 2012
    Inventor: Gregory Allen Ehlers, SR.
  • Publication number: 20120200773
    Abstract: Systems and methods are operable to synchronize presentation between video streams and separately received audio streams. An exemplary embodiment receives a first media content stream at a media device, wherein the first media content stream comprises at least a video stream portion; receives a second media content stream at the media device, wherein the second media content stream comprises at least an audio stream portion; delays the audio stream portion of the second media content stream by a duration corresponding to at least one synchronization time delay; communicates the video stream portion of the first media content stream to a visual display device; and communicates the delayed audio stream portion of the second media content stream to an audio presentation device, wherein a visual scene of the video stream portion of the first media content stream is substantially synchronized with sounds of the audio stream portion of the second media content stream.
    Type: Application
    Filed: February 8, 2011
    Publication date: August 9, 2012
    Applicant: EchoStar Technologies L.L.C.
    Inventor: Gregory Davis
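The fixed audio delay described in this abstract behaves like a FIFO sized to the synchronization time delay. The class below is a minimal sketch under that assumption; the frame granularity and names are ours, and None stands in for a silent frame.

```python
from collections import deque

class AudioDelayLine:
    """Delays audio frames by a fixed number of frame periods by
    pre-filling a FIFO with silence."""
    def __init__(self, delay_frames):
        self.buf = deque([None] * delay_frames)

    def push(self, frame):
        self.buf.append(frame)
        return self.buf.popleft()  # released delay_frames pushes later

line = AudioDelayLine(2)
print([line.push(f) for f in ["a0", "a1", "a2", "a3"]])  # → [None, None, 'a0', 'a1']
```

Feeding the separately received audio stream through such a line before the audio presentation device is one way to realize the claimed delay while the video stream plays undelayed.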
  • Patent number: 8233089
    Abstract: A method is provided for synchronising visual and audio data for a television system. The television system includes display means for displaying visual data and audio means for sounding audio data via said system. The method includes displaying a visual indicator on said display means or sounding an audio indicator via said audio means and, after a period of time, undertaking the other of the two, the period of time between the display of said visual indicator and the sounding of said audio indicator being adjustable by a user of said television system using synchronisation means.
    Type: Grant
    Filed: May 3, 2006
    Date of Patent: July 31, 2012
    Assignee: Pace PLC.
    Inventors: Kevin Wood, James Belford
  • Patent number: 8229134
    Abstract: Spherical microphone arrays provide the ability to compute the acoustical intensity corresponding to different spatial directions in a given frame of audio data. These intensities may be exhibited as an image, and if the data capture and intensity computations can be performed sufficiently quickly, such images can be generated at a high frame rate to achieve video, creating a frame-rate audio camera. A description is provided of how such a camera is built and how the processing is done sufficiently quickly using graphics processors. Joint processing of the captured frame-rate audio and video images enables applications such as visual identification of noise sources, and beamforming and noise suppression in video conferencing, among others, by accounting for the spatial differences in the locations of the audio and video cameras. Because the spherical array can be viewed as a central projection camera, such joint analysis can be performed.
    Type: Grant
    Filed: May 27, 2008
    Date of Patent: July 24, 2012
    Assignee: University of Maryland
    Inventors: Ramani Duraiswami, Adam O'Donovan, Nail A. Gumerov
  • Publication number: 20120176468
    Abstract: A videoconferencing system which encodes different streams of information. The information may include video, audio, speech recognized versions of the audio, and language translated versions of the audio. Text may be sent as part of the videoconference.
    Type: Application
    Filed: March 21, 2012
    Publication date: July 12, 2012
    Inventor: Scott C. Harris
  • Patent number: 8218651
    Abstract: A method for splicing a first data stream that conveys a first single program transport stream (SPTS) and a second data stream that conveys a second SPTS, the method includes: receiving first data stream metadata units representative of first data stream packets, second data stream metadata units representative of second data stream packets and a request to perform a splicing operation at an n-th splicing point; performing, in response to the splicing request, transport stream layer processing of the first data stream metadata units and of the second data stream metadata units such as to provide a control output stream; and transmitting an output stream in response to the control output stream.
    Type: Grant
    Filed: February 28, 2007
    Date of Patent: July 10, 2012
    Assignee: ARRIS Group, Inc.
    Inventors: Amit Eshet, Lior Assouline, Edward Stein
  • Publication number: 20120169930
    Abstract: Provided herein is a method for synchronizing audio and video clock signals in a system. The method includes comparing, within a comparison module, a system video signal with a determined mathematical relationship between the clock signals to produce an adjustment signal. A system video reference signal is updated with the adjustment signal to produce an updated intermediate signal.
    Type: Application
    Filed: December 14, 2011
    Publication date: July 5, 2012
    Applicant: ATI Technologies ULC
    Inventor: Collis Quinn Carter
  • Patent number: 8212924
    Abstract: A multimedia processor includes an audio processor configured to process an audio input signal to generate an audio output signal and an assistant signal, and a video processor coupled with the audio processor and configured to process a video input signal and the assistant signal to generate a video output signal simultaneously according to both. Provided with the assistant signal, the video processor acquires more video-processing-related information for rendering video content in a more realistic manner. Erroneous motion detection can thus be prevented, and video quality can be improved.
    Type: Grant
    Filed: May 12, 2009
    Date of Patent: July 3, 2012
    Assignee: Himax Technologies Limited
    Inventor: Tzung-ren Wang
  • Publication number: 20120162362
    Abstract: Systems and methods are disclosed for mapping a sound spatialization field to a displayed panoramic image as the viewing angle of the panoramic image changes. As the viewing angle of the image data changes, the audio data is processed to rotate the captured sound spatialization field to the same extent. Thus, the audio data remains mapped to the image data whether the image data is rotated about a single axis or about more than one axis.
    Type: Application
    Filed: December 22, 2010
    Publication date: June 28, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Alex Garden, Ben Vaught, Michael Rondinelli
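A toy version of the field rotation this abstract describes is sketched below: each sound source's azimuth is shifted opposite to the change in viewing angle so the field stays mapped to the displayed scene. Real systems rotate a full spatial sound field (e.g. an ambisonic representation) and handle rotation about more than one axis; this single-axis, per-source simplification is our own.

```python
def rotate_sources(source_azimuths_deg, view_delta_deg):
    """Rotate each source azimuth opposite to the view rotation, wrapped
    into [0, 360), so sources stay fixed relative to the displayed image."""
    return [(az - view_delta_deg) % 360 for az in source_azimuths_deg]

# Pan the panoramic view 90 degrees to the right: a source that was dead
# ahead (0 deg) should now be heard 90 degrees to the left (270 deg).
print(rotate_sources([0, 90, 270], 90))  # → [270, 0, 180]
```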