Abstract: Provided are a vibration device using sound and a system comprising the same. More particularly, the present invention relates to a vibration device for generating vibration using sound such that the beat of the sound can be felt. The device is convenient to carry or transfer due to its lightweight and compact size, is capable of generating vibration matching the beat of the sound to which a user is currently listening, is furthermore capable of generating vibrations of various feelings matching the beat of the sound according to user settings, thereby greatly enhancing the effects that the user may feel, and is very inexpensive to manufacture. A system comprising the vibration device is also provided.
Abstract: A wireless enabled load control device having integrated voice control is disclosed. The load control device may be connected to a variety of local and remote devices, including lighting devices and smart appliances, as well as various cloud service platforms. The load control device may receive a voice command from a user. The load control device may transmit the voice command to a first cloud service platform for processing to determine an instruction included in the voice command. The determined instruction may be provided to a second cloud service platform connected to the first cloud service platform. The first and second cloud service platforms may be connected to a device to be controlled based on the voice command. The cloud service platforms can adjust an operation of the device based on the determined instruction.
Type:
Grant
Filed:
November 6, 2018
Date of Patent:
November 29, 2022
Assignee:
Leviton Manufacturing Co., Inc.
Inventors:
Aaron Ard, James Shurte, Thomas M. Morgan, Ronald J. Gumina
Abstract: A media summary is generated to include portions of media items. The portions of media items identified for inclusion in the media summary are determined based on the length of the media summary and the classification of content depicted within the media items. Classification of the content depicted within the media items includes the number of smiles depicted within the media items.
Abstract: Methods, apparatus, systems and articles of manufacture to segment audio and determine audio segment similarities are disclosed. A disclosed example method includes developing features characterizing audio with a neural network, computing a self-similarity matrix based on the features, and identifying segments of the audio based on the self-similarity matrix.
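A minimal sketch of the self-similarity step described above, assuming per-frame feature vectors are already available (the patent derives them with a neural network); segment boundaries are taken as peaks of a checkerboard-kernel novelty curve computed from the self-similarity matrix, which is one common way to realize such segmentation, not necessarily the claimed one.

```python
import numpy as np

def self_similarity_matrix(features: np.ndarray) -> np.ndarray:
    """Cosine self-similarity between per-frame feature vectors (frames x dims)."""
    normed = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    return normed @ normed.T

def novelty_curve(ssm: np.ndarray, kernel_size: int = 16) -> np.ndarray:
    """Slide a checkerboard kernel along the diagonal; high values mark segment changes."""
    half = kernel_size // 2
    idx = np.arange(kernel_size) < half
    kernel = np.where(np.equal.outer(idx, idx), 1.0, -1.0)  # +1 within blocks, -1 across blocks
    novelty = np.zeros(ssm.shape[0])
    for t in range(half, ssm.shape[0] - half):
        novelty[t] = np.sum(kernel * ssm[t - half:t + half, t - half:t + half])
    return novelty

def segment_boundaries(novelty: np.ndarray, threshold: float) -> list[int]:
    """Frames whose novelty is a local maximum above the threshold."""
    return [t for t in range(1, len(novelty) - 1)
            if novelty[t] > threshold
            and novelty[t] >= novelty[t - 1] and novelty[t] >= novelty[t + 1]]
```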
Abstract: A method of generating a composite audio signal, comprising: receiving an input audio signal; performing a first comparison, wherein at least a first portion of the input audio signal is compared with at least a portion of one or more stored audio signals; selecting at least a first portion of a stored audio signal based on a result of the first comparison; performing a second comparison, wherein at least a second portion of the input audio signal is compared with at least a portion of one or more stored audio signals; selecting at least a second portion of a stored audio signal based on a result of the second comparison; and generating the composite audio signal using at least the selected first and second portions of a stored audio signal.
Type:
Grant
Filed:
June 16, 2017
Date of Patent:
March 31, 2020
Assignee:
Krotos Ltd
Inventors:
Orfeas Boteas, Matthew Collings, Douglas Barr
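A hedged sketch of the two-stage compare-and-select flow described in the Krotos entry above; the similarity measure (normalized correlation of fixed-length chunks) and the in-memory library are illustrative assumptions, not the patented method.

```python
import numpy as np

def best_match(query: np.ndarray, library: list[np.ndarray]) -> np.ndarray:
    """Return the stored signal whose opening samples correlate best with the query chunk."""
    def score(candidate: np.ndarray) -> float:
        n = min(len(query), len(candidate))
        a, b = query[:n], candidate[:n]
        denom = (np.linalg.norm(a) * np.linalg.norm(b)) or 1.0
        return float(np.dot(a, b) / denom)
    return max(library, key=score)

def composite(input_signal: np.ndarray, library: list[np.ndarray], chunk: int) -> np.ndarray:
    """Compare successive portions of the input against the library and concatenate the winners."""
    pieces = []
    for start in range(0, len(input_signal), chunk):
        portion = input_signal[start:start + chunk]
        pieces.append(best_match(portion, library)[:chunk])
    return np.concatenate(pieces)
```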
Abstract: A system for annotating frames in a media stream 114 includes a pattern recognition system (PRS) 108 to generate PRS output metadata for a frame; an archive 106 for storing ground truth metadata (GTM); a device to merge the GTM and PRS output metadata and thereby generate proposed annotation data (PAD) 110; and a user interface 109 for use by a human annotator (HA) 118. The user interface 104 includes an editor 111 and an input device 107 used by the HA 118 to approve GTM for the frame. An optimization system 105 receives the approved GTM and the metadata output by the PRS 108, and adjusts input parameters for the PRS to minimize a distance metric corresponding to the difference between the GTM and the PRS output metadata.
Type:
Grant
Filed:
June 28, 2019
Date of Patent:
February 4, 2020
Assignee:
LiveClips LLC
Inventors:
Eric David Petajan, David Eugene Weite, Douglas W. Vunic
Abstract: A method and system for enhancing a speech signal is provided herein. The method may include the following steps: obtaining an original video, wherein the original video includes a sequence of original input images showing a face of at least one human speaker, and an original soundtrack synchronized with said sequence of images; and processing, using a computer processor, the original video, to yield an enhanced speech signal of said at least one human speaker, by detecting sounds that are acoustically unrelated to the speech of the at least one human speaker, based on visual data derived from the sequence of original input images.
Type:
Grant
Filed:
July 3, 2018
Date of Patent:
November 12, 2019
Assignee:
Yissum Research Development Company of The Hebrew University of Jerusalem Ltd.
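A simplified sketch of the idea in the entry above, assuming a per-video-frame lip-activity score has already been derived from the image sequence (the patent obtains the visual cue from the original input images); audio that coincides with no visible lip motion is attenuated as sound unrelated to the speaker. The 0.2 activity threshold and the residual `floor` gain are illustrative.

```python
import numpy as np

def enhance_speech(audio: np.ndarray, lip_activity: np.ndarray,
                   sample_rate: int, video_fps: float,
                   floor: float = 0.1) -> np.ndarray:
    """Attenuate audio samples whose video frame shows no lip motion.

    lip_activity: one score in [0, 1] per video frame (assumed precomputed).
    floor: residual gain applied where the speaker is judged silent.
    """
    samples_per_frame = int(round(sample_rate / video_fps))
    gain = np.repeat(np.where(lip_activity > 0.2, 1.0, floor), samples_per_frame)
    gain = gain[:len(audio)]
    out = audio.copy()
    out[:len(gain)] *= gain
    return out
```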
Abstract: A system and method for automated video editing. A reference medium is selected and analyzed. At least one video may be acquired and synced to the reference audio. Once synced, audio analysis is used to assemble an edited video. The audio analysis can incorporate information including user inputs, video analysis, and metadata. The system and method for automated video editing may be applied to collaborative creation, simulated stop-motion animation, and real-time implementations.
Abstract: Methods for suggesting an audio file for playback with a video file using an analysis of objects in images from the video file are provided. In one aspect, a method includes receiving a selection of a video file, identifying shot transition timings in the video file, and analyzing each shot transition associated with the identified shot transition timings to identify an entity within the respective shot transition. The method also includes providing an identification of the identified entities to a natural language model to identify at least one mood associated with the identified entities, selecting, from a collection of audio files, at least one audio file associated with the at least one mood and including an average audio onset distance within an audio onset distance threshold, and providing an identification of the at least one audio file as a suggestion for audio playback with the video file. Systems and machine-readable media are also provided.
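A rough sketch of the final selection step described above, assuming entity and mood detection have already produced a target mood and that each candidate audio file carries precomputed onset times; the `AudioFile` structure and the threshold handling are illustrative, not the claimed method.

```python
from dataclasses import dataclass

@dataclass
class AudioFile:
    name: str
    moods: set[str]
    onset_times: list[float]  # seconds, assumed precomputed

def average_onset_distance(onsets: list[float]) -> float:
    """Mean spacing between consecutive audio onsets."""
    gaps = [b - a for a, b in zip(onsets, onsets[1:])]
    return sum(gaps) / len(gaps) if gaps else float("inf")

def suggest_audio(candidates: list[AudioFile], mood: str,
                  target_onset_distance: float, tolerance: float) -> list[AudioFile]:
    """Keep files matching the mood whose average onset spacing is within the tolerance."""
    return [f for f in candidates
            if mood in f.moods
            and abs(average_onset_distance(f.onset_times) - target_onset_distance) <= tolerance]
```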
Abstract: By registering in advance a set range and a sound type that are not subject to abnormal sound detection, a PC issues an alert notification when it detects a sound regarded as abnormal, and does not issue an alert notification when it detects a sound that is not regarded as abnormal. That is, in a target area such as a hotel lobby, a sound output from a television provided therein, a water sound emitted by a fountain, and the like are no longer erroneously detected as abnormal sounds. Further, the PC forms the directivity of the sound data in a direction toward the actual position corresponding to a designated position on an image of the sound pickup area displayed on a display.
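A small sketch of the exclusion logic described above, assuming the sound has already been classified and localized on the pickup-area image; the registered ranges and type labels are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Exclusion:
    region: tuple[float, float, float, float]  # x0, y0, x1, y1 on the pickup-area image
    sound_type: str                            # e.g. "television", "fountain"

def should_alert(position: tuple[float, float], sound_type: str,
                 exclusions: list[Exclusion]) -> bool:
    """Suppress the alert when the sound matches a registered type inside its registered range."""
    x, y = position
    for e in exclusions:
        x0, y0, x1, y1 = e.region
        if x0 <= x <= x1 and y0 <= y <= y1 and sound_type == e.sound_type:
            return False
    return True
```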
Abstract: A method of synchronizing playback of a media asset between a first playback device and one or more other playback devices includes synchronizing an initial playback timeline of a media asset with a reference clock, and then playing the media asset with the initial playback timeline on each of the two or more playback devices. In response to a playback-alteration command, an updated playback timeline of the media asset is synchronized with the reference clock, and the media asset is then played with the updated playback timeline on each of the two or more playback devices.
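A minimal sketch of the timeline mechanics described above, assuming all devices share a reference clock (here `time.monotonic()` stands in); a timeline maps clock time to media time, and a pause, seek, or rate change is distributed as a replacement timeline rather than a nudge to each player.

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class PlaybackTimeline:
    anchor_clock: float   # reference-clock reading at which the timeline took effect
    anchor_media: float   # media position (seconds) at that instant
    rate: float = 1.0     # 1.0 = normal speed, 0.0 = paused

    def position(self, clock_now: float) -> float:
        """Media position implied by the shared clock; identical on every device."""
        return self.anchor_media + (clock_now - self.anchor_clock) * self.rate

def altered(timeline: PlaybackTimeline, clock_now: float,
            new_media: float = None, new_rate: float = None) -> PlaybackTimeline:
    """Build the updated timeline distributed after a seek, pause, or rate-change command."""
    media = new_media if new_media is not None else timeline.position(clock_now)
    rate = new_rate if new_rate is not None else timeline.rate
    return PlaybackTimeline(anchor_clock=clock_now, anchor_media=media, rate=rate)

# Usage: all devices receive the same timeline objects and compute the same position.
t0 = time.monotonic()
timeline = PlaybackTimeline(anchor_clock=t0, anchor_media=0.0)
paused = altered(timeline, clock_now=t0 + 5.0, new_rate=0.0)  # pause at ~5 s of media time
```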
Abstract: A method of generating data for controlling a rendering system (9) includes obtaining data representative of a recording of at least intervals of an event, the recording having at least two components (22,23) obtainable through different respective modalities. The data is analyzed to determine at least a dependency between a first and a second of the components (22,23). At least the dependency is used to provide settings (30) for a system (9) for rendering in perceptible form at least one output through a first modality in dependence on at least the settings and on at least one signal for rendering in perceptible form through a second modality.
Type:
Grant
Filed:
November 14, 2013
Date of Patent:
November 10, 2015
Assignee:
KONINKLIJKE PHILIPS N.V.
Inventors:
Stijn De Waele, Ralph Antonius Cornelis Braspenning, Joanne Henriëtte Desirée Monique Westerink
Abstract: Systems and methods for providing movement to seating including theater seating generate complex motion responses in the seating by automatically analyzing an audio component of media being consumed for one or more aspects of the audio information contained in the audio component at certain frequencies or frequency ranges. The aspects analyzed include aspects relating to frequencies and frequency ranges substantially higher than the low-frequency signals used to drive motion of theater seating. From the analysis of the audio aspects contained in the audio component, a plurality of independent low-frequency output signals is generated. The plurality of independent low-frequency output signals is directed to a plurality of individual actuators incorporated into different locations in a seat to provide sensations of motion at different locations of the seat, such as at a seat location, at a lower back location, and at an upper back location.
Type:
Application
Filed:
October 26, 2011
Publication date:
May 2, 2013
Inventors:
Levoy Haight, J. Ken Barton, David J. Havell, Aaron Michael Best, Trent Lawrence Rolf, Mark Myers
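An illustrative sketch of the analysis stage described in the entry above, assuming simple FFT band energies drive each actuator; the band edges and their mapping to seat, lower-back, and upper-back actuators are made-up examples, not the claimed design.

```python
import numpy as np

BANDS = {            # hypothetical analysis bands (Hz) mapped to actuator locations
    "seat": (20, 120),
    "lower_back": (120, 1000),
    "upper_back": (1000, 8000),
}

def actuator_envelopes(audio: np.ndarray, sample_rate: int, frame: int = 2048) -> dict:
    """Per-frame band energy for each actuator, later used to drive low-frequency motion."""
    freqs = np.fft.rfftfreq(frame, d=1.0 / sample_rate)
    envelopes = {name: [] for name in BANDS}
    for start in range(0, len(audio) - frame, frame):
        spectrum = np.abs(np.fft.rfft(audio[start:start + frame]))
        for name, (lo, hi) in BANDS.items():
            mask = (freqs >= lo) & (freqs < hi)
            envelopes[name].append(float(spectrum[mask].sum()))
    return {name: np.asarray(values) for name, values in envelopes.items()}
```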
Abstract: A method of creating a playback control file for a recording medium and a method and apparatus for reproducing data using the playback control file are disclosed. The method of creating a playback control file includes reading an original PlayList from the recording medium, the original PlayList being configured to reproduce original data recorded on the recording medium, downloading at least one additional PlayList from an external source, the at least one PlayList being configured to reproduce additional data downloadable from the external source, and creating a composite PlayList by combining the original PlayList with the at least one additional PlayList, the composite PlayList being able to control reproduction of the original and additional data, individually or in combination. The method may further include playing-back any one of the original PlayList, the at least one additional PlayList, and the composite PlayList.
Type:
Grant
Filed:
November 17, 2004
Date of Patent:
August 2, 2011
Assignee:
LG Electronics Inc.
Inventors:
Kang Soo Seo, Byung Jin Kim, Jea Yong Yoo
Abstract: On the encoder side, an electronic watermark generating device generates an electronic watermark which contains an encryption key. An electronic watermark inserting device inserts the electronic watermark containing the encryption key into a first portion of data. An encrypting device encrypts a second portion of the data with the encryption key. The first portion, into which the watermark containing the encryption key has been inserted, and the second portion, which has been encrypted with the encryption key, are combined by a switch and thereafter recorded on a record medium or transmitted to a network. On the decoder side, the watermark containing the encryption key is extracted from the first portion and the encryption key is extracted from the watermark. The second portion is decrypted with the extracted encryption key.
Abstract: An Automatic Soundtrack Generator permits merger of a sound track that is independent of the external sound source, while either recording or playing a video sequence. The Automatic Soundtrack Generator integrates in a video recorder or player a module that generates music or other sounds which either can be mixed with the originally recorded sound (sound mixing), or can replace the originally recorded sound (sound dubbing). This sound mixing or dubbing can be performed either at video/audio record time or at play back time.
Abstract: Music videos are automatically produced from source audio and video signals. The music video contains edited portions of the video signal synchronized with the audio signal. An embodiment detects transition points in the audio signal and the video signal. The transition points are used to align in time the video and audio signals. The video signal is edited according to its alignment with the audio signal. The resulting edited video signal is merged with the audio signal to form a music video.
Type:
Grant
Filed:
February 28, 2002
Date of Patent:
April 11, 2006
Assignee:
Fuji Xerox Co., Ltd.
Inventors:
Jonathan Foote, Matthew Cooper, Andreas Girgensohn, Shingo Uchihashi
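A schematic sketch of the alignment idea in the entry above, assuming transition points (audio onsets and video shot changes, in seconds) have already been detected; each audio transition is paired with the nearest video transition to define the cut points of the edited video.

```python
def align_transitions(audio_points: list[float], video_points: list[float]) -> list[tuple[float, float]]:
    """Pair each audio transition with the nearest video transition (greedy nearest match)."""
    pairs = []
    for a in audio_points:
        v = min(video_points, key=lambda t: abs(t - a))
        pairs.append((a, v))
    return pairs

def edit_plan(pairs: list[tuple[float, float]]) -> list[dict]:
    """Video segments, each placed so its cut lands on the matching audio transition."""
    plan = []
    for (a0, v0), (a1, v1) in zip(pairs, pairs[1:]):
        plan.append({"video_in": v0, "video_out": v1, "place_at": a0, "duration": a1 - a0})
    return plan
```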
Abstract: Apparatus for dubbing a film from a source language to a target language includes a replay device (1) which replays a film and a control unit (6) which generates an acoustic model of the scene. A dubbing soundtrack on the replay device (1) is fed via a line (2) to an acoustic processor (3). The acoustic processor (3) processes the dubbed soundtrack under the control of the control unit (6) to produce a dubbed soundtrack which is modified to take into account the acoustic environment of the scene. This soundtrack may be recorded on a recording device (5).
The application further discloses the generation of soundtracks for computer generated scenes taking into account the virtual acoustic environment.
Abstract: Music videos are automatically produced from source audio and video signals. The music video contains edited portions of the video signal synchronized with the audio signal. An embodiment detects transition points in the audio signal and the video signal. The transition points are used to align in time the video and audio signals. The video signal is edited according to its alignment with the audio signal. The resulting edited video signal is merged with the audio signal to form a music video.
Type:
Application
Filed:
February 28, 2002
Publication date:
August 28, 2003
Inventors:
Jonathan Foote, Matthew Cooper, Andreas Girgensohn, Shingo Uchihashi
Abstract: An Automatic Soundtrack Generator permits merger of a sound track that is independent of the external sound source, while either recording or playing a video sequence. The Automatic Soundtrack Generator integrates in a video recorder or player a module that generates music or other sounds which either can be mixed with the originally recorded sound (sound mixing), or can replace the originally recorded sound (sound dubbing). This sound mixing or dubbing can be performed either at video/audio record time or at play back time.
Abstract: The camera, which is worn by a user, captures and processes images into image data and records the image data. The camera has two or more operation modes and is provided with a detector for detecting the motion state of the user, the physiological state of the user, or both, and a controller for selecting one mode from among the operation modes on the basis of the detection results from the detector.
Abstract: A portable electronic image displayer and audio player, including: a digital memory for storing digital images; a digital memory for storing an audio recording; a display for displaying stored digital images; a music analyzer for analyzing the stored audio recording and for determining when to display a sequence of stored digital images according to the stored audio recording; and an audio reproducer for playing the audio recording.
Type:
Application
Filed:
August 6, 2001
Publication date:
February 6, 2003
Applicant:
Eastman Kodak Company
Inventors:
John R. Fredlund, John C. Neel, Steven M. Bryant
Abstract: When continuous reproduction from a first AV stream to a second AV stream is commanded, a third AV stream, made up of a preset portion of the first AV stream and a preset portion of the second AV stream, is generated. The third AV stream is reproduced when reproduction is switched from the first AV stream to the second AV stream. As information pertinent to the third AV stream, the address information of a source packet of the first AV stream at the time of switching from the first AV stream to the third AV stream, and the address information of a source packet of the second AV stream at the time of switching from the third AV stream to the second AV stream, are generated. This enables reproduction that maintains continuity between separately recorded AV streams.
Abstract: An Automatic Soundtrack Generator permits merger of a sound track that is independent of the external sound source, while either recording or playing a video sequence. The Automatic Soundtrack Generator integrates in a video recorder or player a module that generates music or other sounds which either can be mixed with the originally recorded sound (sound mixing), or can replace the originally recorded sound (sound dubbing). This sound mixing or dubbing can be performed either at video/audio record time or at play back time.
Abstract: Method and system for customizing a motion film selection by selecting a film clip including a video track and a sound track comprising one or more actor voice tracks and a background track, modifying the sound track to remove a selected actor voice track, recording a new voice track for synchronized playback with the selected actor, and saving a new sound track including the modified sound track and the new voice track.
Abstract: An embodiment of the invention provides a theater having a first screen and a second screen movable relative to the first screen. The first and second screens are typically designed for different movie formats. The second screen is mounted to a frame in sections, and a rotating member is operable to move sections of the second screen away from the first screen. A projection system disposed in front of one of the screens is operable to project screen images alternatively on the first screen or on the second screen. A plurality of speakers are mountable to the frame to provide a sound system.
Type:
Application
Filed:
September 22, 1999
Publication date:
January 3, 2002
Inventors:
JEFFREY P. GRAVES, WILLIAM NEIL GRANT, KEVIN P. JOHNSON, JOE M. PACITTI
Abstract: A moving picture signal is encoded by motion-compensated prediction using motion vectors for each motion-compensated block of the moving picture signal. The motion vectors are arranged into motion vector groups for each predetermined number of motion vectors. A code table is selected among a plurality of code tables for each motion vector group for encoding the motion vectors. Code table selection information is then output. The motion vectors are encoded by variable-length coding using the selected code table in accordance with the code table selection information. The code table selection information, the encoded motion vectors, and an encoded predictive error signal are then multiplexed. A moving picture bit stream encoded as above is decoded. The moving picture bit stream is demultiplexed into the motion vectors, the code table selection information, and the encoded predictive error signal.
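A toy sketch of the per-group code table selection described above, using two hypothetical prefix-free tables over one-dimensional vector components; a real codec would use standardized VLC tables and bit-level packing rather than bit strings.

```python
# Two illustrative prefix-free variable-length code tables (value -> bits); not from any standard.
TABLES = [
    {0: "1", 1: "01", -1: "001", 2: "0001", -2: "00001"},
    {0: "11", 1: "10", -1: "01", 2: "001", -2: "000"},
]

def encode_group(vectors: list[int]) -> tuple[int, str]:
    """Encode one motion-vector group with whichever table yields the fewest bits."""
    best = None
    for table_id, table in enumerate(TABLES):
        if not all(v in table for v in vectors):
            continue  # this table cannot represent the group
        bits = "".join(table[v] for v in vectors)
        if best is None or len(bits) < len(best[1]):
            best = (table_id, bits)
    if best is None:
        raise ValueError("no code table can represent this group")
    return best

def encode_stream(vectors: list[int], group_size: int = 4) -> list[tuple[int, str]]:
    """Per-group (code table selection information, encoded bits), to be multiplexed downstream."""
    return [encode_group(vectors[i:i + group_size])
            for i in range(0, len(vectors), group_size)]
```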
Abstract: A data recording apparatus for a motion picture film for digitally recording data concerning an image of the motion picture film. The data recording apparatus includes a plurality of light emission units (LEDs) 4 lighted responsive to the data concerning an image, a light volume detection unit 5 for detecting the light emission volume of each of the light emission units, a reference value outputting unit 6 for outputting a reference output specifying a reference value of the light emission volume of the light emission units, and a control unit 8 for comparing the reference output value of the reference value outputting unit 6 to each detected output value from the light volume detection unit 5 and controlling the light emission of the LEDs 4 so that each detected output value will be equal to the reference output value.
Abstract: A motion picture sound recording chromogenic photographic film element for forming non-neutral images is disclosed comprising a film support bearing at least one silver halide emulsion layer comprising at least one dye-forming coupler which forms a dye which absorbs primarily in the green or red light region of the electromagnetic spectrum upon processing with color negative developer, wherein the element does not comprise a neutral-balanced combination of cyan, magenta, and yellow dye-forming couplers.
Type:
Grant
Filed:
May 5, 1997
Date of Patent:
January 5, 1999
Assignee:
Eastman Kodak Company
Inventors:
Vicky Sinn, Richard C. Sehlin, Mitchell J. Bogdanowicz, Patricia R. Greco, Gary N. Barber
Abstract: A film-to-tape transfer apparatus suitable for transferring images from movie film, slide film, photographs and the like, which includes a case body having a bottom plate therein, a rotary shaft rotatably installed on the bottom plate and having a manipulating knob mounted on the top portion thereof, a supporting member secured to the rotary shaft, a reflecting mirror attached to the supporting member, a glass screen unit provided on the rear side of the case body, a macro-lens provided on the left side of the case body, a fluorescent lamp installed at the right portion inside the case body, a transparent plate and a pushing plate provided on the right side of the case body and slightly spaced from each other for receiving a photograph to be recorded, and a sound mixing unit for controlling the sound to be recorded on the video tape.
Abstract: A system is disclosed for electronically processing negative composite video signals, wherein the negative composite video signals correspond to a negative image as produced by the projection of negative photographic film. The negative composite video signals are converted to positive composite video signals. The entire portion of the negative composite video signal between adjacent horizontal blanking portions is electronically inverted. The horizontal blanking portions of the negative composite video, which include the color burst signal, are not inverted. The composite signal that results includes the blanking portions, having the color burst signal, in their original, unaltered state. The image portion of the resulting composite signal, i.e., the portions of the resulting composite signal between the horizontal blanking portions, is inverted, and the resulting composite video signal is thus converted from negative to positive state.
Abstract: A film-to-tape transfer apparatus suitable for transferring images from movie film, slide film, photographs and the like, which includes a case body having a bottom plate therein, a rotary shaft rotatably installed on the bottom plate and having a manipulating knob mounted on the top portion thereof, a supporting member secured to the rotary shaft, a reflecting mirror attached to the supporting member, a glass screen unit provided on the rear side of the case body, a macro-lens provided on the left side of the case body, a fluorescent lamp installed at the right portion inside the case body, a transparent plate and a pushing plate provided on the right side of the case body and slightly spaced from each other for receiving a photograph to be recorded, and a sound mixing unit for controlling the sound to be recorded on the video tape.
Abstract: A cinema sound system for unperforated screens includes, for each stereophonic channel, a floor-positioned direct radiator bass speaker unit radiating into quarter space and an upper frequency speaker unit mounted above the screen. Each upper frequency speaker unit includes a middle frequency driver mounted in a sealed rear enclosure which is attached to the throat of a middle frequency horn. A constant directivity high frequency horn with a high frequency driver attached to its rear end is mounted coaxially in the middle frequency horn. Sharp cutoff active crossover filters divide the input signal into low, middle, and high frequency band signals which are separately power amplified. The middle frequency horn is adapted to function as a direct radiator at the lower end of the middle band and as a sectoral horn above an unloading frequency of the middle frequency horn.
Abstract: An open air multiple screen cinema includes projection screens arranged outdoors in a circle or other arrangement about a centrally located automotive van having projection windows in alignment with the screens and housing angularly adjustable sound movie projectors directed through the windows to respective screens. A figure 8 magnetic induction loop is located within the viewing area of each screen and is electrically energized by an audio modulated current controlled by a respective movie sound projector and each of the viewers is provided with a magnetic field responsive receiver and coupled earphones so as to hear only the sound associated with the screen in the proximity of the viewer. The magnetic induction loops may be buried, at ground level, or supported overhead and may be of other configurations.
Abstract: A fluorescent soundtrack readout system for decoding the digital soundtrack of a motion picture film employing ultraviolet light directed onto the surface of the soundtrack film to cause the digital indicia thereon to emit visible light and for transmitting the emitted visible light to a photodiode array to detect same.
Abstract: A system for photographing with moving picture cameras, still picture cameras or television cameras, particularly for direct sound movie photography, whereby a signal receptor and/or transmitting device is assigned to each photographic object, and the running time between the photographic object, or respectively the signal receptor and/or transmitting device, and the camera is used to determine the distance of the photographic object from the camera and, where appropriate, for the automatic distance setting of the camera's taking lens.
Type:
Grant
Filed:
February 21, 1979
Date of Patent:
December 16, 1980
Assignees:
Karl Vockenhuber, Raimund Hauser
Inventors:
Otto Freudenschuss, Otto Kantner, Gerd Kittag
Abstract: Apparatus for splicing motion picture sound film provides for cutting the film transversely between the adjacent picture frames, thence longitudinally forward between the picture frames and sound stripe to the end point at which the sound relates to the first picture frame adjacent the transverse cut, and thence transversely across the sound stripe at said end point. A similar cutting is made at the opposite end of a length of the film to be removed, whereby to provide a pair of mating ends on film portions to be joined. These mating ends are arranged with their mating edges in abutment, the forwardly extended sound stripe of the first cutting fitting into the notch formed by the removal of sound stripe in the second cutting, and the picture frames adjacent the transverse cut being in edge-to-edge abutment. Pressure sensitive transparent tape then is applied across the abutting transverse and longitudinal edges to secure the splice.
Abstract: The disclosure relates to a system for providing a substantially noise free audio intelligence signal for a sound-movie camera. The system comprises first and second microphones mounted on the camera, the first microphone being arranged for detecting camera noise and audio intelligence for providing a first electrical signal having camera noise and audio intelligence components and the second microphone being arranged for detecting substantially only camera noise for providing a second electrical signal having substantially only a noise component. The second microphone is mounted nearer to the camera noise source than the first microphone so that the second signal is in advanced phase relation relative to the first signal. The system also includes a delay means and inverting means for delaying and inverting the second signal to provide a delayed inverted second signal which is in phase with the first signal.
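A bare-bones sketch of the delay-and-invert cancellation described above, assuming the advance of the noise reference over the primary microphone (in samples) is known and fixed; a practical system would estimate the delay and gain adaptively.

```python
import numpy as np

def cancel_camera_noise(primary: np.ndarray, noise_ref: np.ndarray,
                        delay_samples: int, gain: float = 1.0) -> np.ndarray:
    """Delay the noise-only reference, invert it, and add it to the primary signal.

    primary: microphone 1 (audio intelligence plus camera noise).
    noise_ref: microphone 2 (camera noise only, leading the primary by delay_samples).
    """
    delayed = np.concatenate([np.zeros(delay_samples), noise_ref])[:len(primary)]
    return primary - gain * delayed  # subtraction = adding the delayed, inverted reference
```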