With At Least One Audio Signal Patents (Class 386/285)
  • Patent number: 10325397
    Abstract: Aspects of the present innovations relate to systems and/or methods involving multimedia modules, objects or animations. According to an illustrative implementation, one method may include accepting at least one input keyword relating to a subject for the animation and performing processing associated with templates. Further, templates may generate different types of output, and each template may include components for display time, screen location, and animation parameters. Other aspects of the innovations may involve providing search results, retrieving data from a plurality of web sites or data collections, assembling information into multimedia modules or animations, and/or providing the module or animation for playback.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: June 18, 2019
    Assignee: OATH INC.
    Inventors: Doug Imbruce, Owen Bossola, Louis Monier, Rasmus Knutsson, Christian Le Cocq
  • Patent number: 10249341
    Abstract: A method, apparatus and system for synchronizing audiovisual content with inertial outputs for content reproduced on a mobile content device include, in response to a vibration of the mobile content device, receiving a recorded audio signal and a corresponding recorded inertial signal generated by the vibration. The recorded signals are each processed to determine a timestamp for a corresponding peak in each of the recorded signals. A time distance between the timestamp of the recorded audio signal and the timestamp of the recorded inertial signal is determined, and inertial signals for content reproduced on the mobile content device are shifted by an amount of time equal to that determined time distance.
    Type: Grant
    Filed: February 2, 2016
    Date of Patent: April 2, 2019
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Julien Fleureau, Fabien Danieau, Khanh-Duy Le
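The peak-alignment step this abstract describes can be sketched in a few lines. A minimal sketch, assuming a single dominant peak per signal; the argmax-style peak detector and the function names below are illustrative, not the patented implementation.

```python
def peak_timestamp(signal, sample_rate):
    """Time (in seconds) of the largest-magnitude sample in a signal."""
    peak_index = max(range(len(signal)), key=lambda i: abs(signal[i]))
    return peak_index / sample_rate

def align_inertial(event_times, audio_signal, inertial_signal, sample_rate):
    """Shift inertial event times by the audio-vs-inertial peak offset."""
    offset = (peak_timestamp(audio_signal, sample_rate)
              - peak_timestamp(inertial_signal, sample_rate))
    return [t + offset for t in event_times]
```

A real implementation would detect peaks more robustly (e.g., after filtering) rather than taking a single maximum sample.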
  • Patent number: 10198843
    Abstract: Implementations are directed to methods, systems, apparatus, and computer programs for generation of a three-dimensional (3D) animation by receiving a user input defining a two-dimensional (2D) representation of a plurality of elements, processing, by the one or more processors, the 2D representation to classify the plurality of elements into symbolic elements and action elements, generating, by the one or more processors, based on the symbolic elements, the action elements, and a set of rules, a 3D animation corresponding to the 2D representation, and transmitting, by the one or more processors, the 3D animation to an extended reality device for display.
    Type: Grant
    Filed: July 17, 2018
    Date of Patent: February 5, 2019
    Assignee: Accenture Global Solutions Limited
    Inventors: Matthew Thomas Short, Robert Dooley, Grace T. Cheng, Sunny Webb, Mary Elizabeth Hamilton
  • Patent number: 10129586
    Abstract: Various implementations process a television content stream to detect program boundaries such as the starting point and ending point of the program. In at least some implementations, program boundaries such as intermediate points between the starting point and ending point of the program are also detected. The intermediate points correspond to where a program pauses for secondary content such as an advertisement or advertisements, and then resumes once the secondary content has run. Once program boundaries are detected, primary content is isolated by removing secondary content that occurs before the starting point and after the ending point. In at least some implementations, secondary content that occurs between detected intermediate points is also removed. The primary content is then recorded without the secondary content that comprised part of the original television content stream.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: November 13, 2018
    Assignee: Google LLC
    Inventors: Joon-Hee Jeon, Jason R. Kimball, Benjamin P. Stewart
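Once the boundaries are detected, the trimming the abstract describes reduces to keeping frames between the start and end points while skipping spans between intermediate points. A sketch under the assumption that boundaries are expressed as frame indices:

```python
def trim_program(frames, start, end, ad_breaks):
    """Keep frames in [start, end), dropping frames inside any
    (break_start, break_end) span where secondary content ran."""
    kept = []
    for i, frame in enumerate(frames):
        if i < start or i >= end:
            continue  # before the program starts or after it ends
        if any(b_start <= i < b_end for b_start, b_end in ad_breaks):
            continue  # inside a detected intermediate span
        kept.append(frame)
    return kept
```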
  • Patent number: 10109318
    Abstract: Various embodiments of the invention provide systems and methods for low bandwidth consumption online content editing, where user-created content comprising high definition/quality content is created or modified at an online content editing server according to instructions from an online content editor client, and where a proxy version of the resulting user-created content is provided to online content editor client to facilitate review or further editing of the user-created content from the online content editor client. In some embodiments, the online content editing server utilizes proxy content during creation and modification operations on the user-created content, and replaces such proxy content with corresponding higher definition/quality content, possibly when the user-created content is published for consumption, or when the user has paid for the higher quality content.
    Type: Grant
    Filed: November 3, 2016
    Date of Patent: October 23, 2018
    Assignee: WeVideo, Inc.
    Inventors: Jostein Svendsen, Bjørn Rustberggaard
  • Patent number: 10083535
    Abstract: Disclosed herein is an online software application for providing users or customers with newly created animated equivalents of their originally submitted static (i.e., un-animated) scannable codes.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: September 25, 2018
    Inventors: Peter Miller, Will Bilton
  • Patent number: 9998722
    Abstract: A system and method of guided video creation is described. In an exemplary method, the system guides a user to create a video production based on a set of pre-defined activities. In one embodiment, the system detects a selection of an item in a shotlist. In response to the item selection, the system stores structured metadata about the selection, opens the video camera and displays a dynamic video overlay relevant to the item selection. In addition, the system detects contact with an overlay button in the dynamic video overlay configured to toggle visibility of the dynamic video overlay. The system further receives a command to save a recorded clip of video content, stores additional metadata for the recorded clip of the video content, and updates the respective item in the shotlist.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: June 12, 2018
    Assignee: Tapshot, Inc.
    Inventors: Lee Eugene Swearingen, Cathy Teresa Clarke
  • Patent number: 9942673
    Abstract: The method for adjusting a hearing system (2) to the preferences of a user (3) of the hearing system comprises a) playing an audio sequence to said user (3); wherein the audio sequence comprises a first sound object representative of a first real-life sound source and a second sound object representative of a second real-life sound source; b) receiving an input (R) in response to step a); c) adjusting at least one audio processing parameter (P) of said hearing system (2) in dependence of said input (R). Preferably, the method further comprises d) providing the user (3) synchronously with step a) with a visualization of a scene to which said audio sequence belongs; and providing a user input (U) which is indicative of a sound source or of a sound object or of an instant in or a portion of the audio sequence; and automatically selecting an audio processing parameter (P) of the hearing system (2) in dependence of the user input (U) and offering the selected audio processing parameter (P) for adjusting.
    Type: Grant
    Filed: November 14, 2007
    Date of Patent: April 10, 2018
    Assignee: SONOVA AG
    Inventor: Michael Boretzki
  • Patent number: 9875245
    Abstract: User created playlists can be analyzed to create a statistical language model indicating the likelihood that a particular sequence of content attributes will be found in a playlist created by a user, as well as the likelihood of any sequence of one or more content attributes following a playlist or partial playlist created by a user. The language model can be used to generate a recommended content attribute sequence based on a partial playlist of one or more content items. A recommended content item sequence that will be pleasant to a user when added to the partial playlist can be selected based on the recommended content attribute sequence.
    Type: Grant
    Filed: April 10, 2015
    Date of Patent: January 23, 2018
    Assignee: APPLE INC.
    Inventors: Daniel Cartoon, Mark H. Levy
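The statistical language model over attribute sequences that this abstract describes can be illustrated with a bigram model, which is one simple instance of the idea; the attribute vocabulary and function names below are assumptions for illustration.

```python
from collections import Counter, defaultdict

def train_bigram_model(playlists):
    """Count how often one content attribute follows another
    across user-created playlists."""
    counts = defaultdict(Counter)
    for attributes in playlists:
        for prev, nxt in zip(attributes, attributes[1:]):
            counts[prev][nxt] += 1
    return counts

def recommend_next(model, partial_playlist):
    """Most likely attribute to follow the partial playlist, or None."""
    followers = model[partial_playlist[-1]]
    if not followers:
        return None
    return followers.most_common(1)[0][0]
```

A production system would smooth the counts and condition on longer histories, but the recommendation step is the same: score candidate continuations of the partial playlist against the learned model.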
  • Patent number: 9855497
    Abstract: An immersive play environment platform including techniques describing recognizing non-verbal vocalization gestures from a user is disclosed. A headset device receives audio input from a user. The headset device transmits the audio input to a controller device. The controller device evaluates characteristics of the audio input (e.g., spectral features over a period of time) to determine whether the audio input corresponds to a predefined non-verbal vocalization, such as a humming noise, shouting noise, etc. The controller device may perform an action in response to detecting such non-verbal vocalizations, such as engaging a play object (e.g., an action figure, an action disc) in the play environment.
    Type: Grant
    Filed: January 20, 2015
    Date of Patent: January 2, 2018
    Assignee: Disney Enterprises, Inc.
    Inventors: Michael P. Goslin, Eric C. Haseltine, Joseph L. Olson
  • Patent number: 9832590
    Abstract: Embodiments are described for a method of rendering an audio program by receiving, in a renderer of a playback system, the audio program and a target response representing desired characteristics of the playback environment, deriving a playback environment response based on characteristics of the playback environment, comparing the target response to the playback environment response to generate a set of correction settings, and applying the correction settings to the audio program so that the audio program is rendered according to the characteristics of the target response. The target response may be based on audio characteristics in a creation environment.
    Type: Grant
    Filed: September 8, 2016
    Date of Patent: November 28, 2017
    Assignee: Dolby Laboratories Licensing Corporation
    Inventor: Charles Q. Robinson
  • Patent number: 9767852
    Abstract: Systems and methods determine, identify and/or detect one or more audio mismatches between at least two digital media files by providing a group of digital media files to a computer system as input files. Each digital media file comprises digital audio and digital video signals previously recorded at a same performance by a same artist, and the digital media files are previously synchronized with respect to each other and aligned on a timeline of the same performance and provide a first multi-angle digital video of the same performance. The systems and methods compare audio features based on the audio signals of each digital media file and detect at least one audio mismatch between at least two digital media files of the group based on compared audio features, wherein the at least one audio mismatch is generated by, caused by or based on one or more previously edited digital media files present within the group.
    Type: Grant
    Filed: September 11, 2015
    Date of Patent: September 19, 2017
    Inventor: Frederick Mwangaguhunga
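The compare-features-then-flag structure of this abstract can be sketched as follows. The per-band mean-amplitude feature and Euclidean distance threshold are stand-in assumptions; the patent does not specify these particular features.

```python
def band_energies(signal, bands=4):
    """Crude audio feature: mean absolute amplitude per equal-length slice."""
    n = len(signal) // bands
    return [sum(abs(x) for x in signal[i * n:(i + 1) * n]) / n
            for i in range(bands)]

def mismatched_pairs(clips, threshold):
    """Index pairs of clips whose feature distance exceeds the threshold."""
    features = [band_energies(clip) for clip in clips]
    pairs = []
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            distance = sum((a - b) ** 2
                           for a, b in zip(features[i], features[j])) ** 0.5
            if distance > threshold:
                pairs.append((i, j))
    return pairs
```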
  • Patent number: 9665895
    Abstract: A computer system receiving audiovisual information, geographic information, and a seller term from a seller computer, said audiovisual information disclosing a seller offer at a seller location captured via said seller computer, said geographic information associated with said location, said audiovisual information associated with said geographic information, said system providing said audiovisual information to a buyer computer for an acceptance of said offer via said buyer computer, said system conditioning said acceptance upon said buyer computer being geographically positioned in compliance with said term based on said geographic information.
    Type: Grant
    Filed: August 12, 2014
    Date of Patent: May 30, 2017
    Assignee: MOV, INC.
    Inventor: Christopher Renwick Alston
  • Patent number: 9495362
    Abstract: According to embodiments of the invention, systems, methods and devices are provided for a plurality of participants speaking different languages to participate in a singing event by using pre-determined song samples of different languages. In one embodiment, a system is provided that includes a storage that identifies songs by using samples from the song. The storage contains a song including both text and melody, wherein the song contains a plurality of versions of different languages. The system also includes devices allowing superiors and subordinates speaking different languages to sing at the same time. The collaboration may then be recorded and stored remotely via a cloud-based server.
    Type: Grant
    Filed: September 2, 2014
    Date of Patent: November 15, 2016
    Inventor: Pui Shan Xanaz Lee
  • Patent number: 9232174
    Abstract: A method for receiving and sending a television program according to one embodiment includes receiving a request to record a television program and receiving the television program. Further, the method includes storing a representation of the television program on a computer readable medium. Also, the method includes receiving a request to send the representation of the television program to a handheld device. Additionally, the method includes reducing a size of the representation of the television program and sending the reduced-size representation of the television program to the handheld device. Other systems and methods are also included.
    Type: Grant
    Filed: June 25, 2009
    Date of Patent: January 5, 2016
    Inventor: Dominic M. Kotab
  • Patent number: 9146942
    Abstract: Systems and methods for editing an image file include a server and at least one client device of the server including a display. An imaging module accesses from the server an image file including image content and a header, wherein the header provides information regarding the image file. An editing module receives user edits to the image content and inserts information regarding the user edits into the header. The imaging module applies the user edits in an order that is determined based on a weight assigned to each user edit. The imaging module may also access, from the server, an image file including image content and a header thereof, wherein the header provides information regarding the image file and an edit decision list reflecting historical user edits to the image content. The editing module then identifies the edit decision list in the header for application by the imaging module.
    Type: Grant
    Filed: November 26, 2014
    Date of Patent: September 29, 2015
    Assignee: Visual Supply Company
    Inventors: Zachary Daniel Hodges, Robert A. Newport
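The weight-determined ordering of user edits can be sketched in one function; representing an edit as a (weight, callable) pair is an illustrative assumption.

```python
def apply_edits(image, edits):
    """Apply (weight, edit_fn) pairs in ascending weight order, mirroring
    an ordering of user edits determined by an assigned weight."""
    for _weight, edit_fn in sorted(edits, key=lambda e: e[0]):
        image = edit_fn(image)
    return image
```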
  • Patent number: 9092435
    Abstract: A method is provided for extracting meta data from a digital media storage device in a vehicle over a communication link between a control module of the vehicle and the digital media storage device. The method includes establishing a communication link between control module of the vehicle and the digital media storage device, identifying a media file on the digital media storage device, and retrieving meta data from a media file, the meta data including a plurality of entries, wherein at least one of the plurality of entries includes text data. The method further includes identifying the text data in an entry of the media file and storing the plurality of entries in a memory.
    Type: Grant
    Filed: April 3, 2007
    Date of Patent: July 28, 2015
    Assignee: Johnson Controls Technology Company
    Inventors: Brian L. Douthitt, Karl W. Schripsema, Michael J. Sims
  • Publication number: 20150147049
    Abstract: A method and related apparatus for providing content information for a video remix, the method comprising: identifying at least one performer of an event on the basis of image data of a source video; obtaining information about a role of the at least one performer in the event; determining at least some video frames of the source video to contain said at least one performer as a dominant performer in said event; and annotating said video frames of the source video with a description of the role of the at least one performer.
    Type: Application
    Filed: May 31, 2012
    Publication date: May 28, 2015
    Inventors: Antti Johannes Eronen, Juha Henrik Arrasvuori, Arto Juhani Lehtiniemi
  • Publication number: 20150139615
    Abstract: A mobile video editing and sharing system for social media, referred to as the CAPTURE system, which improves over prior video editing systems for smartphones through a more extensive editing suite, including functionality for adding and altering music in video files. The video editing portion of the system allows for more creative freedom than what is already offered. First, current limits on video length are far too short; the system allows users to record up to one full minute of video, as well as to choose previously recorded video footage from the user's "Camera Roll" within the iOS software. In addition, the CAPTURE system offers a variety of specialty filters beyond the color alteration filters typically provided in prior systems. The filters include reverse, speed and 3D effects, which adjust the visual content of the video without altering the music track.
    Type: Application
    Filed: November 19, 2014
    Publication date: May 21, 2015
    Inventor: Josh Hill
  • Publication number: 20150139616
    Abstract: Systems and methods for routing audio for audio-video recordings allow a user to record desired audio with captured video at the time the video is being captured. Audio from one or more sources may be routed to the video capture application and recorded with the video. In one or more examples, audio may be routed from another application, e.g., an audio playback application, running on the same device as the video capture application. In another example, audio may be received from a remote device through a wireless connection. Multiple streams of audio content may be mixed together prior to storing with video. The audio, upon reception, may then be routed to the video capture application for recordation. An audio progression bar may also be provided to indicate duration and elapsed time information associated with the audio being recorded.
    Type: Application
    Filed: January 29, 2015
    Publication date: May 21, 2015
    Inventors: Sanna LINDROOS, Sanna M. KOSKINEN, Heli JARVENTIE, Vesa HUOTARI, Paivi HEIKKILA
  • Patent number: 9036978
    Abstract: The present invention provides a content data recording/reproducing device, comprising a communication unit that engages in communication with an external device, a storage unit that stores content data and additional data related to the content data, a content data extraction unit that selectively extracts the content data from the storage unit based upon condition data received at the communication unit from the external device and the additional data stored in the storage unit and a contents list generation unit that generates a contents list based upon the condition data and additional data corresponding to the content data extracted by the content data extraction unit.
    Type: Grant
    Filed: September 4, 2012
    Date of Patent: May 19, 2015
    Assignee: Sony Corporation
    Inventor: Susumu Takatsuka
  • Publication number: 20150131972
    Abstract: A system for managing storage space on an electronic storage medium is provided in which a file format for stored data allows for progressive deletion of low-significance data, for example in a video or audio file, while allowing the remaining portions of the file to be subsequently retrieved. The file format allows for the ready deletion of low-significance data without having to open, edit and subsequently rewrite the data. Furthermore, rules-based algorithms for the deletion of low-significance data allow a user to store and progressively delete such low-significance data in accordance with time parameters, available storage space and the like, without having to delete the full file.
    Type: Application
    Filed: November 6, 2014
    Publication date: May 14, 2015
    Inventor: Richard Reisman
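The progressive deletion the abstract describes can be sketched as repeatedly dropping the lowest-significance portions until a storage budget is met. Encoding each portion as a (significance, size, payload) tuple is an illustrative assumption, not the patented file format.

```python
def progressive_delete(chunks, capacity):
    """Drop lowest-significance chunks until the file fits the capacity.
    Each chunk is (significance, size, payload); survivors keep file order,
    so the remaining portions stay retrievable."""
    total = sum(size for _, size, _ in chunks)
    doomed = set()
    for idx in sorted(range(len(chunks)), key=lambda i: chunks[i][0]):
        if total <= capacity:
            break
        total -= chunks[idx][1]
        doomed.add(idx)
    return [chunk for i, chunk in enumerate(chunks) if i not in doomed]
```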
  • Publication number: 20150131973
    Abstract: A method and a system for automatic generation of clips from a plurality of images based on inter-object relationships score are provided herein. The method may include: obtaining a plurality of images, wherein at least two of the images contain at least one object over a background; analyzing at least some of the images to detect objects; extracting geometrical meta-data of at least some of the detected objects; calculating an inter-object relationships score for at least some of the detected objects; and determining a spatio-temporal arrangement of at least some of the objects and at least some of the images based at least partially on the inter-object relationships score and the geometrical meta-data of at least some of the detected objects.
    Type: Application
    Filed: November 11, 2014
    Publication date: May 14, 2015
    Inventors: Alexander RAV-ACHA, Oren Boiman
  • Patent number: 9031384
    Abstract: An interesting section identifying device for identifying an interesting section of a video file based on an audio signal included in the video file, the interesting section being a section in which a user is estimated to express interest, includes an interesting section candidate extracting unit that extracts an interesting section candidate from the video file, the interesting section candidate being a candidate for the interesting section, a detailed structure determining unit that determines whether the interesting section candidate includes a specific detailed structure, and an interesting section identifying unit that identifies the interesting section by analyzing a specific section when the detailed structure determining unit determines that the interesting section candidate includes the detailed structure, the specific section including the detailed structure and being shorter than the interesting section candidate.
    Type: Grant
    Filed: April 24, 2012
    Date of Patent: May 12, 2015
    Assignee: Panasonic Intellectual Property Corporation of America
    Inventors: Tomohiro Konuma, Ryouichi Kawanishi, Tomoyuki Karibe, Tsutomu Uenoyama
  • Patent number: 9031375
    Abstract: An electronic device may determine to present a video frame still image sequence version of a video instead of the video. The electronic device may derive a plurality of still images from the video. The electronic device may generate the video frame still image sequence by associating the plurality of still images. The electronic device may present the video frame still image sequence. The video frame still image sequence may be displayed according to timing information to resemble play of the video. In some cases, audio may also be derived from the video. In such cases, display of the video frame still image sequence may be performed along with play of the audio.
    Type: Grant
    Filed: July 3, 2013
    Date of Patent: May 12, 2015
    Assignee: Rapt Media, Inc.
    Inventors: Justin Tucker Trautman, Jonathan R. A. Woodard
  • Publication number: 20150104156
    Abstract: A method of generating video data with a soundtrack (114), the method including: receiving (204) video data (112) relating to a product or service; obtaining (208, 212) descriptive data relating to the product or service; generating (214, 216) audio data based on the descriptive data; adding (218) the audio data as a soundtrack to at least part of the video data, and storing (220) and/or playing the video data with the added soundtrack. The invention also includes a system configured to use the method and a related computer program element.
    Type: Application
    Filed: March 21, 2013
    Publication date: April 16, 2015
    Applicant: LIFE ON SHOW LIMITED
    Inventor: Adam James Price
  • Patent number: 9009040
    Abstract: According to certain embodiments, training a transcription system includes accessing recorded voice data of a user from one or more sources. The recorded voice data comprises voice samples. A transcript of the recorded voice data is accessed. The transcript comprises text representing one or more words of each voice sample. The transcript and the recorded voice data are provided to a transcription system to generate a voice profile for the user. The voice profile comprises information used to convert a voice sample to corresponding text.
    Type: Grant
    Filed: May 5, 2010
    Date of Patent: April 14, 2015
    Assignee: Cisco Technology, Inc.
    Inventors: Todd C. Tatum, Michael A. Ramalho, Paul M. Dunn, Shantanu Sarkar, Tyrone T. Thorsen, Alan D. Gatzke
  • Patent number: 8989560
    Abstract: Methods, apparatus, systems and machine readable medium for variable video production, distribution and presentation are disclosed. An example composer to author a variable video includes a labeler to tag each of a plurality of scenes with at least one vector based on the content of the respective scenes. In addition, the example composer includes a receiver to define relevance data to be obtained from intended viewers of respective versions of the variable video. A mapper is also included to chart a sequence of two or more of the scenes, at least one of the scenes being a variable scene. The variable scene includes content selected based on the vector and the respective relevance data to form respective versions of the variable video. The example composer further includes a publisher to publish the variable video as a single file for creating the respective versions of the variable video.
    Type: Grant
    Filed: January 23, 2014
    Date of Patent: March 24, 2015
    Assignee: R.R. Donnelley & Sons Company
    Inventor: Paul Howett
  • Publication number: 20150071619
    Abstract: A software application for mobile devices enables users to easily create a fully-edited short video by combining video clips of various lengths to form a final video that resembles a Hollywood-style, professionally edited video clip. The videos are automatically edited to the music cuts using pre-programmed storyboards and transitions that align with the user's thematic selection. There are few steps involved in the process, making for a user-friendly experience. The professional-style video clip is produced on a user's phone in only 45 seconds and can then be shared with friends via email, YouTube, Facebook and other forms of social media.
    Type: Application
    Filed: September 9, 2013
    Publication date: March 12, 2015
    Inventor: Michael Brough
  • Publication number: 20150055937
    Abstract: The disclosure includes a system and method for aggregating image frames and audio data to generate virtual reality content. The system includes a processor and a memory storing instructions that, when executed, cause the system to: receive video data describing image frames from a camera array; receive audio data from a microphone array; aggregate the image frames to generate a stream of three-dimensional (3D) video data, the stream of 3D video data including a stream of left panoramic images and a stream of right panoramic images; generate a stream of 3D audio data from the audio data; and generate virtual reality content that includes the stream of 3D video data and the stream of 3D audio data.
    Type: Application
    Filed: August 21, 2014
    Publication date: February 26, 2015
    Inventors: Arthur VAN HOFF, Thomas M. ANNAU, JENS CHRISTENSEN
  • Patent number: 8965181
    Abstract: A method of automatic announcer voice removal from a televised sporting event. A sound processing circuit divides an audio input signal of a televised sporting event into multiple audio segments. The audio input signal includes crowd noise and announcer commentary. If an audio segment does not exceed a pre-defined amplitude threshold, a voice removal utility adds the audio segment to a recent crowd noise library and stores the segment in an output buffer. If the amplitude of a segment exceeds the threshold, the utility adds the segment to a recent announcer voice library. The sound processing circuit generates an attenuated version of the segment and blends the attenuated version with one or more mixed segments from the recent crowd noise library. The voice removal utility stores the attenuated and blended segment in the output buffer and outputs one or more audio segments from the buffer in a chronological order.
    Type: Grant
    Filed: May 16, 2013
    Date of Patent: February 24, 2015
    Assignee: International Business Machines Corporation
    Inventor: Nathan J. Harrington
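The segment-threshold-blend pipeline in this abstract can be sketched directly. The fixed attenuation factor and the policy of blending with the most recent crowd segment are simplifying assumptions; the patent describes libraries of recent segments rather than this minimal form.

```python
def remove_announcer(segments, threshold, attenuation=0.2):
    """Pass quiet (crowd) segments through; attenuate loud (announcer)
    segments and blend them with recently heard crowd noise."""
    crowd_library = []  # recent crowd-noise segments
    output = []
    for segment in segments:
        peak = max(abs(sample) for sample in segment)
        if peak <= threshold:
            crowd_library.append(segment)
            output.append(segment)
        else:
            blended = [sample * attenuation for sample in segment]
            if crowd_library:
                crowd = crowd_library[-1]
                blended = [v + c for v, c in zip(blended, crowd)]
            output.append(blended)
    return output
```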
  • Publication number: 20150050010
    Abstract: A method and system can generate video content from a video. The method and system can include generating audio files and image files from the video, distributing the audio files and the image files across a plurality of processors, and processing the audio files and the image files in parallel. The audio files associated with the video can be converted to text, and the image files associated with the video can be converted to video content. The text and the video content can be cross-referenced with the video.
    Type: Application
    Filed: February 7, 2014
    Publication date: February 19, 2015
    Inventors: Naeem Lakhani, Bartlett Wade Smith
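The distribute-and-process-in-parallel step can be sketched with a worker pool. The converter callables and same-index pairing used for cross-referencing are assumptions for illustration; the patent's distribution mechanism is not specified here.

```python
from concurrent.futures import ThreadPoolExecutor

def process_in_parallel(audio_files, image_files, to_text, to_content, workers=4):
    """Distribute audio-to-text and image-to-content conversion across a
    worker pool, then pair each transcript with its same-index content."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        texts = list(pool.map(to_text, audio_files))
        contents = list(pool.map(to_content, image_files))
    return list(zip(texts, contents))
```

`ProcessPoolExecutor` would map more literally onto "a plurality of processors", at the cost of requiring picklable converter functions.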
  • Patent number: 8942537
    Abstract: A content reproduction apparatus that adopts a content processing method includes a video processor, a video analyzer, and an audio processor for processing audio data and video data input thereto. The video analyzer analyzes video characteristics of video data such as resolutions, compressive distortions, and real frame rates. The video processor processes video data in accordance with video processing, which is determined based on analyzed video characteristics of video data. The audio processor processes audio data in accordance with audio processing, such as dynamic range compression and/or frequency component extension/enhancement, which is determined based on analyzed video characteristics of video data. Thus, it is possible to reproduce sound in an articulate manner depending on the video quality, which is either professional-level video shooting or nonprofessional-level video shooting.
    Type: Grant
    Filed: October 2, 2013
    Date of Patent: January 27, 2015
    Assignee: Yamaha Corporation
    Inventor: Ryotaro Aoki
  • Patent number: 8929713
    Abstract: In one embodiment, a method for segmenting video data in a mobile communication terminal includes acquiring sensor data periodically together with video data during video shooting, and segmenting the video data based on the acquired sensor data.
    Type: Grant
    Filed: March 2, 2012
    Date of Patent: January 6, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sun-Hee Youm, Jin-Guk Jeong, Mi-Hwa Park, Soo-Hong Park, Min-Ho Lee
  • Publication number: 20150003812
    Abstract: There is disclosed an apparatus and method for collaborative creation of shareable secondary digital media programs. The method comprises accessing data comprising a primary program generated using an authoring tool and enabling acceptance of a channel of a secondary program, the channel comprising a set of rich metadata time-synchronized with the primary program, from a user of the primary program other than an original creator of the primary program using an authoring tool including timing granularity controls to enable the time-synchronization accuracy to be adjusted between varying levels of fineness. The method further comprises storing the channel time-synchronized with the primary program in a database of rich metadata for access by other users of the primary program, and enabling access, upon request, to the channel time-synchronized with the primary program via a playback tool with varying levels of fineness for the time-synchronization.
    Type: Application
    Filed: June 25, 2014
    Publication date: January 1, 2015
    Inventor: HOWARD DAVID SOROKA
  • Patent number: 8922717
    Abstract: As information to be processed at an object-based video or audio-visual (AV) terminal, an object-oriented bitstream includes objects, composition information, and scene demarcation information. Such bitstream structure allows on-line editing, e.g. cut and paste, insertion/deletion, grouping, and special effects. In the interest of ease of editing, AV objects and their composition information are transmitted or accessed on separate logical channels (LCs). Objects which have a lifetime in the decoder beyond their initial presentation time are cached for reuse until a selected expiration time. The system includes a de-multiplexer, a controller which controls the operation of the AV terminal, input buffers, AV objects decoders, buffers for decoded data, a composer, a display, and an object cache.
    Type: Grant
    Filed: February 14, 2013
    Date of Patent: December 30, 2014
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Alexandros Eleftheriadis, Hari Kalva
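The object cache described above, where a decoded AV object outlives its initial presentation time and is reusable until a selected expiration time, could be sketched like this (the interface is hypothetical).

```python
class AVObjectCache:
    """Cache decoded AV objects past their presentation time until expiry."""

    def __init__(self):
        self._store = {}  # object_id -> (payload, expiration_time)

    def put(self, object_id, payload, expiration_time):
        self._store[object_id] = (payload, expiration_time)

    def get(self, object_id, now):
        """Return the cached object, or None if absent or expired."""
        entry = self._store.get(object_id)
        if entry is None:
            return None
        payload, expires = entry
        if now >= expires:
            del self._store[object_id]  # expired: evict and miss
            return None
        return payload
```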
  • Publication number: 20140376888
    Abstract: An information processing apparatus is provided that includes a playback unit to play back music data, an analysis unit to analyze a feature of a relevant image of the music data, an image correction unit to perform image correction with use of any of a plurality of correction types, a storage unit to store one or more than one image, a selection unit to select a correction type corresponding to the feature of the relevant image analyzed by the analysis unit from the plurality of correction types, a correction control unit to cause the image correction unit to perform image correction of an image stored in the storage unit with use of the correction type selected by the selection unit, and an output unit to output the image corrected by the image correction unit.
    Type: Application
    Filed: September 5, 2014
    Publication date: December 25, 2014
    Inventors: Daisuke MOCHIZUKI, Kazuto NISHIZAWA, Mitsuo OKUMURA, Takaomi KIMURA, Tomohiko GOTOH
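The selection unit's mapping from an analyzed image feature to a correction type might be sketched as a lookup table; the feature names and correction types below are hypothetical.

```python
# Hypothetical feature -> correction-type table for the selection unit.
CORRECTION_FOR_FEATURE = {
    "low_contrast": "histogram_equalization",
    "warm_tone": "sepia",
    "high_saturation": "vivid",
}

def select_correction(analyzed_feature, default="none"):
    """Select the correction type matching the analyzed feature of the
    image relevant to the music data (e.g. album art)."""
    return CORRECTION_FOR_FEATURE.get(analyzed_feature, default)
```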
  • Patent number: 8903217
    Abstract: The invention relates to a reproduction device (21) used with a device (20) acting as a source of digital services, and to a method of synchronizing two parts of a digital service in a system including a source device and at least one reproduction device according to the invention. The reproduction device (21) includes means for receiving the data forming at least part of a digital service originating from a digital service source device (20), means (210) for processing at least some of the received data, and means (211) for reproducing an output of at least part of the digital service, the time taken to process and reproduce the data introducing a delay in the output of the reproduced data. The device also includes communication means (213) for informing the source device of the delay introduced.
    Type: Grant
    Filed: June 14, 2012
    Date of Patent: December 2, 2014
    Assignee: Thomson Licensing
    Inventors: Philippe Leyendecker, Rainer Zwing, Franck Abelard, Patrick Morvan, Sébastien Desert, Didier Doyen
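The synchronization scheme above, where the source delays its own output by the amount the reproduction device reports, could be sketched as follows (sample rate and padding scheme are assumptions).

```python
def compensate_source_output(samples, reported_delay_ms, sample_rate=48000):
    """Delay the source's locally rendered stream by the processing and
    reproduction delay reported back by the reproduction device, so the
    two parts of the digital service play in sync."""
    pad = int(sample_rate * reported_delay_ms / 1000)
    return [0] * pad + list(samples)  # prepend silence of the reported delay
```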
  • Publication number: 20140341547
    Abstract: A method comprising: determining a spatial audio signal; determining an apparatus motion parameter; and stabilizing the spatial audio signal dependent on the apparatus motion parameter.
    Type: Application
    Filed: December 5, 2012
    Publication date: November 20, 2014
    Inventors: Ravi Shenoy, Pushkar Prasad Patwardhan, Miikka Vilermo, Kemal Ugur
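One simple form of such stabilization is counter-rotating source directions by the device's measured rotation, so sources stay fixed in the listener's world frame; this one-axis sketch is an illustration, not the patent's method.

```python
def stabilize_azimuth(source_azimuth_deg, device_yaw_deg):
    """Counter-rotate a spatial audio source direction by the apparatus
    yaw (the motion parameter), keeping the source world-stable."""
    return (source_azimuth_deg - device_yaw_deg) % 360
```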
  • Patent number: 8892229
    Abstract: An audio apparatus according to an embodiment includes an audio signal receiving unit, a music gap signal receiving unit, a playback unit, and a determining unit. The audio signal receiving unit receives an audio signal in which successive multiple music data are contained in a single block of data. The determining unit determines a boundary of the music data on the basis of the time at which the music gap signal indicating the boundary of the music data is received by the music gap signal receiving unit and the duration of a silent period in the audio signal played back by the playback unit.
    Type: Grant
    Filed: May 14, 2012
    Date of Patent: November 18, 2014
    Assignee: Fujitsu Ten Limited
    Inventors: Osamu Yasutake, Fumitake Nakamura, Nobutaka Miyauchi, Masanobu Maeda, Masahiko Kubo, Nahoko Kawamura, Machiko Matsui, Hideto Saitoh, Hiroyuki Kubota, Masayuki Takaoka, Masanobu Washio, Yutaka Nishioka
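Combining the two cues the determining unit uses, a sketch might pick the silent period nearest the gap-signal arrival time and take its midpoint as the track boundary (the midpoint choice is an assumption).

```python
def locate_boundary(gap_signal_time, silent_periods):
    """Pick the silent period (start, end) whose midpoint lies closest to
    the time the music-gap signal was received, and return that midpoint
    as the boundary between two tracks. Times in seconds."""
    def distance(period):
        start, end = period
        return abs((start + end) / 2 - gap_signal_time)

    start, end = min(silent_periods, key=distance)
    return (start + end) / 2
```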
  • Patent number: 8879895
    Abstract: Method and system for capturing and playing back ancillary data associated with a video stream. At capture, a first video stream and its associated non-audio ancillary data are received. The non-audio ancillary data associated with the first video stream is encoded into a first audio stream on a basis of a predefined encoding scheme. The captured non-audio ancillary data can then be transmitted and processed with the first video stream in the form of the first audio stream. At playback, a second video stream and a second audio stream containing encoded non-audio ancillary data associated with the second video stream are received. The second audio stream is decoded on a basis of a predefined decoding scheme in order to extract therefrom the non-audio ancillary data associated with the second video stream. The second video stream and its associated non-audio ancillary data are then both output for playback.
    Type: Grant
    Filed: February 2, 2010
    Date of Patent: November 4, 2014
    Assignee: Matrox Electronic Systems Ltd.
    Inventor: Simon Bussières
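The patent does not disclose its encoding scheme, but the idea of carrying non-audio ancillary data inside an audio stream can be illustrated with a trivial scheme that packs one byte per 16-bit PCM sample.

```python
def encode_ancillary(data: bytes):
    """Pack each ancillary byte (e.g. timecode, captions) into the low
    byte of a 16-bit PCM sample. Illustrative scheme only."""
    return [int(b) for b in data]  # one sample per byte

def decode_ancillary(samples):
    """Recover the ancillary bytes from the low byte of each sample."""
    return bytes(s & 0xFF for s in samples)
```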
  • Publication number: 20140324557
    Abstract: A method and system for creating and editing video and/or audio tracks is described. The method includes providing at least one artist, venue, and track available for selection and providing at least one clip associated with the at least one artist, venue, and track. The method also includes allowing a user to create a custom track from the at least one clip. The system includes a plurality of video cameras for recording a live performance at a plurality of positions. The system also includes at least one server for storing a plurality of video clips created from the plurality of video cameras and an application stored on the at least one server for allowing a user to access the plurality of video clips via the Internet.
    Type: Application
    Filed: July 14, 2014
    Publication date: October 30, 2014
    Inventor: Michael Wayne SHORE
  • Publication number: 20140321829
    Abstract: A system for allowing a user to create a custom track on a user apparatus, the user apparatus having a display is described. A memory stores a plurality of video clips and an audio track having a timeline. An application is stored in the memory. The application is configured to provide, on the display of the user apparatus, a plurality of video source windows, each of the plurality of video source windows corresponding to a respective one of the plurality of video clips. The application is further configured to allow the user to create the custom track while the audio track is playing by correlating portions of the plurality of video clips with the audio track by selecting respective ones of the plurality of video source windows at desired times in the timeline of the audio track.
    Type: Application
    Filed: July 14, 2014
    Publication date: October 30, 2014
    Inventor: Michael Wayne SHORE
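The correlation mechanism described above, where each video-source selection made while the audio plays governs playback until the next selection, amounts to building an edit decision list; the data shapes below are assumptions.

```python
def build_edit_list(selections, track_length):
    """Turn user clip selections into playback segments of a custom track.

    `selections` is a list of (timestamp, clip_id) pairs recorded as the
    user clicks video source windows while the audio track plays; each
    selected clip runs until the next selection (or the track's end).
    """
    selections = sorted(selections)
    segments = []
    for i, (start, clip_id) in enumerate(selections):
        end = selections[i + 1][0] if i + 1 < len(selections) else track_length
        segments.append((clip_id, start, end))
    return segments
```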
  • Publication number: 20140321832
    Abstract: A method and system for creating and editing video and/or audio tracks is described. The method includes providing at least one artist, venue, and track available for selection and providing at least one clip associated with the at least one artist, venue, and track. The method also includes allowing a user to create a custom track from the at least one clip. The system includes a plurality of video cameras for recording a live performance at a plurality of positions. The system also includes at least one server for storing a plurality of video clips created from the plurality of video cameras and an application stored on the at least one server for allowing a user to access the plurality of video clips via the Internet.
    Type: Application
    Filed: July 14, 2014
    Publication date: October 30, 2014
    Inventor: Michael Wayne SHORE
  • Patent number: 8873937
    Abstract: An audiovisual data transmission system includes the following components. A filesystem module creates a file that contains at least one of video data and audio data stored in a storage unit. A conversion processing unit manages a file created by the filesystem module. A filesystem-in-userspace module provides an interface between the conversion processing unit and the filesystem module. An update detecting unit detects an update of at least one of video data and audio data contained in a file. In response to an instruction to acquire a file managed by the conversion processing unit, received in a state where an update of the file has been detected by the update detecting unit, a server makes an inquiry to the conversion processing unit and repeatedly performs a read process in which the file is read via the filesystem-in-userspace module.
    Type: Grant
    Filed: May 18, 2012
    Date of Patent: October 28, 2014
    Assignee: Sony Corporation
    Inventor: Hiroshi Masuda
  • Publication number: 20140313351
    Abstract: A method, system and data structure for concatenating a series of video files into a single video file is provided. A remote device having a video camera and a microphone can be used to record a series of video files where each video file contains an answer to an interview question and comprises both video data and audio data. The series of video files can then be uploaded to a server over a network where the series of files are concatenated into a single video file containing both audio data and video data.
    Type: Application
    Filed: April 17, 2014
    Publication date: October 23, 2014
    Applicant: OneStory Inc.
    Inventors: Dale Zak, Dmitri Dolguikh
  • Patent number: 8867902
    Abstract: A system for allowing a user to create a custom track on a user apparatus, the user apparatus having a display is described. A memory stores a plurality of video clips and an audio track having a timeline. An application is stored in the memory. The application is configured to provide, on the display of the user apparatus, a plurality of video source windows, each of the plurality of video source windows corresponding to a respective one of the plurality of video clips. The application is further configured to allow the user to create the custom track while the audio track is playing by correlating portions of the plurality of video clips with the audio track by selecting respective ones of the plurality of video source windows at desired times in the timeline of the audio track.
    Type: Grant
    Filed: June 14, 2012
    Date of Patent: October 21, 2014
    Assignee: CAPShore, LLC
    Inventor: Michael Wayne Shore
  • Patent number: 8849103
    Abstract: On a recording medium, stereoscopic and monoscopic specific areas are located one after another next to a stereoscopic/monoscopic shared area. The stereoscopic/monoscopic shared area is a contiguous area to be accessed both in stereoscopic video playback and monoscopic video playback. The stereoscopic specific area is a contiguous area to be accessed immediately before a long jump occurring in stereoscopic video playback. In both the stereoscopic/monoscopic shared area and the stereoscopic specific area, extents of base-view and dependent-view stream files are arranged in an interleaved manner. The extents on the stereoscopic specific area are next in order after the extents on the stereoscopic/monoscopic shared area. The monoscopic specific area is a contiguous area to be accessed immediately before a long jump occurring in monoscopic video playback. The monoscopic specific area has a copy of the entirety of the extents of the base-view stream file recorded on the stereoscopic specific area.
    Type: Grant
    Filed: September 22, 2011
    Date of Patent: September 30, 2014
    Assignee: Panasonic Corporation
    Inventors: Taiji Sasaki, Hiroshi Yahata, Tomoki Ogawa
  • Patent number: 8843375
    Abstract: Methods, systems and apparatus for editing audio clips. A computer-implemented method includes displaying in a user interface, a first audio clip including a first plurality of time instants and a second audio clip including a second plurality of time instants; displaying a first transition point identifier associated with the first audio clip to designate a portion from a beginning of the first audio clip to the first transition point identifier that is playable; displaying a second transition point identifier associated with the second audio clip to designate a portion from the second transition point identifier to an end of the second audio clip that is playable; and generating a combined audio clip comprising the portion from the beginning of the first audio clip to the first transition point identifier and the portion from the second transition point identifier to the end of the second audio clip.
    Type: Grant
    Filed: December 19, 2008
    Date of Patent: September 23, 2014
    Assignee: Apple Inc.
    Inventor: Randy Ubillos
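The combined-clip construction in this abstract is directly expressible: keep everything in the first clip before its transition point and everything in the second clip from its transition point onward (indices here stand in for time instants).

```python
def combine_clips(clip_a, cut_a, clip_b, cut_b):
    """Join the playable portions of two audio clips: clip_a from its
    beginning to its transition point, then clip_b from its transition
    point to its end. `cut_a`/`cut_b` are sample (or frame) indices."""
    return clip_a[:cut_a] + clip_b[cut_b:]
```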
  • Publication number: 20140270710
    Abstract: Methods, apparatuses, and computer program products are provided according to example embodiments in order to create optimized audio-enabled cinemagraphs. In the context of an apparatus, the apparatus comprises at least one processor and at least one memory including computer program instructions, the at least one memory and the computer program instructions configured to, with the at least one processor, cause the apparatus at least to: receive at least two image frames and audio, wherein the duration of the audio is longer than the duration of the at least two image frames; receive a selection of a segment of the at least two image frames; define an output image by looping the selected segment of the at least two image frames; define an output audio from the received audio based at least on a start time and a stop time of the selected segment; and produce an animated image by at least combining the output image and the output audio. A corresponding method and computer program product are also provided.
    Type: Application
    Filed: March 13, 2013
    Publication date: September 18, 2014
    Applicant: NOKIA CORPORATION
    Inventors: Miikka Tapani Vilermo, Mikko Tapio Tamm
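The cinemagraph construction above, looping the selected frame segment and cutting the matching audio span, could be sketched as follows; for simplicity the sketch indexes audio in the same per-frame units as the video, which is an assumption.

```python
def make_cinemagraph(frames, audio, start, stop, loops=3):
    """Loop frames[start:stop] `loops` times and repeat the matching
    audio span to cover the looped video (illustrative sketch)."""
    video = frames[start:stop] * loops
    clip = audio[start:stop]
    return video, clip * loops
```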