Sound Editing Patents (Class 704/278)
  • Patent number: 10318637
    Abstract: An editing method facilitates the task of adding background sound to speech-containing audio data so as to augment the listening experience. The editing method is executed by a processor in a computing device and comprises obtaining characterization data that characterizes time segments in the audio data by at least one of topic and sentiment; deriving, for a respective time segment in the audio data and based on the characterization data, a desired property of a background sound to be added to the audio data in the respective time segment, and providing the desired property for the respective time segment so as to enable the audio data to be combined, within the respective time segment, with background sound having the desired property. The background sound may be selected and added automatically or by manual user intervention.
    Type: Grant
    Filed: May 13, 2017
    Date of Patent: June 11, 2019
    Inventor: Ola Thörn
  • Patent number: 10194199
    Abstract: Methods, systems, and computer program products that automatically categorize and/or assign ratings to content (video and audio content) uploaded by individuals who want to broadcast the content to others via a communications network, such as an IPTV network, are provided. When an individual uploads content to a network, a network service automatically extracts an audio stream from the uploaded content. Words in the extracted audio stream are identified. For each identified word, a preexisting library of selected words is queried to determine if a match exists between words in the library and words in the extracted audio stream. The selected words in the library are associated with a particular content category or content rating. If a match exists between an identified word and a word in the library, the uploaded content is assigned a content category and/or rating associated with the matched word.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: January 29, 2019
    Assignee: AT&T Intellectual Property I, L.P.
    Inventor: Ke Yu
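The matching step described above amounts to a dictionary lookup over the words extracted from the audio stream. A minimal Python sketch of that idea (the word-to-category library and all names here are hypothetical illustrations, not from the patent):

```python
# Hypothetical library mapping flagged words to a content category/rating.
RATING_LIBRARY = {
    "explosion": "action",
    "romance": "romance",
    "gunfire": "mature",
}

def categorize(extracted_words, library=RATING_LIBRARY):
    """Return the set of categories triggered by words from the
    extracted audio stream that match entries in the library."""
    return {library[w] for w in extracted_words if w in library}
```

An uploaded clip whose audio mentions "explosion" would be assigned the "action" category; a clip with no matching words receives no assignment.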
  • Patent number: 10109286
    Abstract: According to an embodiment, a speech synthesizer includes a source generator, a phase modulator, and a vocal tract filter unit. The source generator generates a source signal by using a fundamental frequency sequence and a pulse signal. The phase modulator modulates, with respect to the source signal generated by the source generator, a phase of the pulse signal at each pitch mark based on audio watermarking information. The vocal tract filter unit generates a speech signal by using a spectrum parameter sequence with respect to the source signal in which the phase of the pulse signal is modulated by the phase modulator.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: October 23, 2018
    Inventors: Kentaro Tachibana, Takehiko Kagoshima, Masatsune Tamura, Masahiro Morita
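Modulating the phase of an excitation pulse to carry watermark bits can be illustrated in miniature: the toy sketch below encodes one bit per pitch mark by flipping pulse polarity (a 180-degree phase shift). Real audio watermarking uses far subtler phase offsets; this shows only the shape of the idea, not the patented vocoder.

```python
def modulate_pulse(pulse, bit):
    """Encode one watermark bit by flipping the polarity of the
    source pulse (a 180-degree phase shift) when the bit is 1."""
    return [-x for x in pulse] if bit else list(pulse)

def embed_watermark(pulses, bits):
    """Apply one watermark bit to each pitch-mark pulse in turn."""
    return [modulate_pulse(p, b) for p, b in zip(pulses, bits)]
```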
  • Patent number: 9942748
    Abstract: Embodiments of the present invention provide a service provisioning system and method, a mobile edge application server and support node. The system includes: at least one mobile edge application server (MEAS) and at least one mobile edge application server support function (MEAS-SF), where the MEAS is deployed at an access network side; and the MEAS-SF is deployed at a core network side, connected to one or more MEAS. In the service provisioning system provided in the embodiment, services that are provided by an SP are deployed in the MEAS. When the MEAS can provide the user equipment with a service requested in a service request, the MEAS directly and locally generates service data corresponding to the service request. Therefore, the user equipment directly obtains required service data from an RAN side, which avoids data congestion between an RAN and a CN and saves network resources.
    Type: Grant
    Filed: August 21, 2015
    Date of Patent: April 10, 2018
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Zhiming Zhu, Weihua Liu, Mingrong Cao
  • Patent number: 9841879
    Abstract: A computing device can include a recognition mode interface utilizing graphical elements, such as virtual fireflies, to indicate recognized or identified objects. The fireflies can be animated to move across a display, and the fireflies can create bounding boxes around visual representations of objects as the objects are recognized. In some cases, the object might be of a type that has specific meaning or information to be conveyed to a user. In such cases, the fireflies might be displayed with a particular size, shape, or color to convey that information. The fireflies also can be configured to form shapes or patterns in order to convey other types of information to a user, such as where audio is being recognized, light is sufficient for image capture, and the like. Other types of information can be conveyed as well via altering characteristics of the fireflies.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: December 12, 2017
    Inventors: Timothy Thomas Gray, Forrest Elliott
  • Patent number: 9838731
    Abstract: Video clips may be automatically edited to be synchronized for accompaniment by audio tracks. A preliminary version of a video clip may be made up from stored video content. Occurrences of video events within the preliminary version may be determined. A first audio track may include audio event markers. A first revised version of the video clip may be synchronized so that moments within the video clip corresponding to occurrences of video events are aligned with moments within the first audio track corresponding to audio event markers. Presentation of an audio mixing option may be effectuated on a graphical user interface of a video application for selection by a user. The audio mixing option may define volume at which the first audio track is played as accompaniment for the video clip.
    Type: Grant
    Filed: April 7, 2016
    Date of Patent: December 5, 2017
    Assignee: GoPro, Inc.
    Inventor: Joven Matias
  • Patent number: 9766854
    Abstract: This disclosure concerns the playback of audio content, e.g. in the form of music. More particularly, the disclosure concerns the playback of streamed audio. In one example embodiment, there is a method of operating an electronic device for dynamically controlling a playlist including one or several audio items. A request to adjust an energy level (e.g. a tempo) associated with the playlist is received. In response to receiving this request, the playlist is adjusted in accordance with the requested energy level (e.g., the tempo).
    Type: Grant
    Filed: August 28, 2015
    Date of Patent: September 19, 2017
    Assignee: SPOTIFY AB
    Inventors: Souheil Medaghri Alaoui, Miles Lennon, Kieran Del Pasqua
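One plausible reading of "adjusting the playlist in accordance with the requested energy level" is filtering and reordering tracks by tempo. A hypothetical sketch (the track tuples, BPM threshold, and function name are illustrative assumptions, not Spotify's implementation):

```python
def adjust_playlist(tracks, target_bpm, tolerance=10):
    """tracks: list of (title, bpm) pairs. Keep tracks within
    `tolerance` BPM of the requested tempo, ordered by closeness."""
    kept = [t for t in tracks if abs(t[1] - target_bpm) <= tolerance]
    return sorted(kept, key=lambda t: abs(t[1] - target_bpm))
```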
  • Patent number: 9736337
    Abstract: One example relates to a print system for adjusting a color profile. The print system can comprise a system comprising a memory for storing computer executable instructions and a processing unit for accessing the memory and executing the computer executable instructions. The computer executable instructions can comprise a profile transformer to receive a color sample comprising a color measurement of a proper subset of a gamut of colors printable with ink deposited at a printer. The profile transformer can also provide an adjusted color profile for the printer based on (i) an original color profile associated with a substrate for the printer and (ii) the color sample.
    Type: Grant
    Filed: June 30, 2011
    Date of Patent: August 15, 2017
    Inventors: Peter Morovic, Jan Morovic
  • Patent number: 9674051
    Abstract: An address generation section (111) receives an acquisition request including a file name of a target content and a device ID of a device (113) as a place where the target content is stored, from an application execution section (112) that executes a viewing application. Then, the address generation section (111) specifies the current file path and IP address of the target content in content information and device information each managed by a management section (107) on the basis of the received acquisition request, and generates an acquisition address for acquiring the target content from the device (113) on the basis of the specified file path and IP address.
    Type: Grant
    Filed: October 16, 2013
    Date of Patent: June 6, 2017
    Assignee: Panasonic Intellectual Property Corporation of America
    Inventor: Shigehiro Iida
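The address generation step boils down to joining the resolved IP address and file path into a fetchable address. A sketch under the assumption that plain HTTP URLs are used (the patent does not specify an address syntax):

```python
def build_acquisition_address(ip, file_path, scheme="http"):
    """Combine the device's IP address and the target content's
    file path into an acquisition address that the viewing
    application can use to fetch the content."""
    return f"{scheme}://{ip}/{file_path.lstrip('/')}"
```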
  • Patent number: 9626956
    Abstract: A method and a device that preprocess a speech signal are disclosed, which include extracting at least one frame corresponding to a speech recognition range from frames included in a speech signal, generating a supplementary frame to supplement speech recognition with respect to the speech recognition range based on the at least one extracted frame, and outputting a preprocessed speech signal including the supplementary frame along with the frames of the speech signal.
    Type: Grant
    Filed: April 7, 2015
    Date of Patent: April 18, 2017
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Hodong Lee
  • Patent number: 9558746
    Abstract: This invention describes methods for implementing human speech recognition. The methods use sub-events, i.e. sounds between spaces (typically a fully spoken word), which are compared against a library of sub-events. Each sub-event is packaged with its own speech recognition function as an individual unit. This invention illustrates how this model can be used as a large-vocabulary speech recognition system.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: January 31, 2017
    Inventor: Darrell Poirier
  • Patent number: 9544649
    Abstract: A device and method are presently disclosed. The computer-implemented method includes, at an electronic device with a touch-sensitive display: displaying a still image on the touch-sensitive display; while displaying the still image, detecting a user's finger contact with the touch-sensitive display; and, in response to detecting the user's finger contact, video recording the still image.
    Type: Grant
    Filed: February 25, 2014
    Date of Patent: January 10, 2017
    Assignee: Aniya's Production Company
    Inventors: Damon Wayans, James Cahall
  • Patent number: 9483459
    Abstract: A system is configured to receive a first string corresponding to an interpretation of a natural-language user voice entry; provide a representation of the first string as feedback to the natural-language user voice entry; receive, based on the feedback, a second string corresponding to a natural-language corrective user entry, where the natural-language corrective user entry may correspond to a correction to the natural-language user voice entry; parse the second string into one or more tokens; determine at least one corrective instruction from the one or more tokens of the second string; generate, from at least a portion of each of the first and second strings and based on the at least one corrective instruction, candidate corrected user entries; select a corrected user entry from the candidate corrected user entries; and output the selected, corrected user entry.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: November 1, 2016
    Assignee: Google Inc.
    Inventors: Michael D Riley, Johan Schalkwyk, Cyril Georges Luc Allauzen, Ciprian Ioan Chelba, Edward Oscar Benson
  • Patent number: 9472177
    Abstract: A music application guides a user with some musical experience through the steps of creating and editing a musical enhancement file that enhances and plays in synchronicity with an audio signal of an original artist's recorded performance. This enables others, perhaps with lesser musical ability than the original artist, to play along with the original artist by following melodic, chordal, rhythmic, and verbal prompts. The music application accounts for differences in the timing of the performance from a standard tempo by guiding the user through the process of creating a tempo map for the performance and by associating the tempo map with MIDI information of the enhancement file. Enhancements may contain MIDI information, audio signal information, and/or video signal information, which may be played back in synchronicity with the recorded performance to provide an aural and visual aid to others playing along who may have less musical experience.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: October 18, 2016
    Assignee: Family Systems, Ltd.
    Inventors: Brian Reynolds, William B. Hudak
  • Patent number: 9449611
    Abstract: A computer readable medium containing computer executable instructions is described for extracting a reference representation from a mixture representation that comprises the reference representation and a residual representation wherein the reference representation, the mixture representation, and the residual representation are representations of collections of acoustical waves stored on computer readable media.
    Type: Grant
    Filed: October 1, 2012
    Date of Patent: September 20, 2016
    Assignee: AUDIONAMIX
    Inventors: Pierre Leveau, Xabier Jaureguiberry
  • Patent number: 9423944
    Abstract: A method for adjusting the sound volume of media clips using volume adjuster lines is provided. The volume adjuster lines are individually set for each clip based on the intrinsic, or absolute, volume values of the clip. In some embodiments, the volume adjuster lines are set for each clip based on the peak value or a calculated loudness equivalent of the clip. A user can move the volume adjuster line to set the absolute sound level of a clip. The volume adjuster lines can be hidden in some embodiments. In these embodiments, dragging on any portion of a clip is treated as dragging on the volume adjuster line. Some embodiments provide a deformable volume adjuster line, or curve. In these embodiments, a single audio clip can have several different volume adjuster lines for different sections of the clip where the volume adjuster line for each section is individually adjustable.
    Type: Grant
    Filed: September 6, 2011
    Date of Patent: August 23, 2016
    Assignee: APPLE INC.
    Inventor: Aaron M. Eppolito
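A deformable volume adjuster line is essentially a piecewise-linear gain curve keyed by time. A small sketch of that structure (the keyframe representation is an assumption for illustration, not Apple's data model):

```python
def gain_at(t, curve):
    """curve: time-sorted list of (time, gain) keyframes. Gain
    between keyframes is linearly interpolated; outside the curve's
    range it is clamped to the nearest endpoint."""
    if t <= curve[0][0]:
        return curve[0][1]
    if t >= curve[-1][0]:
        return curve[-1][1]
    for (t0, g0), (t1, g1) in zip(curve, curve[1:]):
        if t0 <= t <= t1:
            return g0 + (g1 - g0) * (t - t0) / (t1 - t0)
```

Dragging any point of the line corresponds to editing a keyframe; separate curves per clip section give the independently adjustable segments the abstract describes.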
  • Patent number: 9418652
    Abstract: Systems and methods for modifying a computer-based speech recognition system. A speech utterance is processed with the computer-based speech recognition system using a set of internal representations, which may comprise parameters for recognizing speech in a speech utterance, such as parameters of an acoustic model and/or a language model. The computer-based speech recognition system may perform a first task in response to the processed speech utterance. The utterance may also be provided to a human who performs a second task based on the utterance. Data indicative of the first task, performed by the computer system, is compared to data indicative of a second task, performed by the human in response to the speech utterance. Based on the comparison, the set of internal representations may be updated or modified to improve the speech recognition performance and capabilities of the speech recognition system.
    Type: Grant
    Filed: January 30, 2015
    Date of Patent: August 16, 2016
    Assignee: Next IT Corporation
    Inventor: Charles C Wooters
  • Patent number: 9378731
    Abstract: The present disclosure relates to training a speech recognition system. One example method includes receiving a collection of speech data items, wherein each speech data item corresponds to an utterance that was previously submitted for transcription by a production speech recognizer. The production speech recognizer uses initial production speech recognizer components in generating transcriptions of speech data items. A transcription for each speech data item is generated using an offline speech recognizer, and the offline speech recognizer components are configured to improve speech recognition accuracy in comparison with the initial production speech recognizer components. The updated production speech recognizer components are trained for the production speech recognizer using a selected subset of the transcriptions of the speech data items generated by the offline speech recognizer.
    Type: Grant
    Filed: April 22, 2015
    Date of Patent: June 28, 2016
    Assignee: Google Inc.
    Inventors: Olga Kapralova, John Paul Alex, Eugene Weinstein, Pedro J. Moreno Mengibar, Olivier Siohan, Ignacio Lopez Moreno
  • Patent number: 9361887
    Abstract: Systems and methods of providing text related to utterances, and gathering voice data in response to the text, are provided herein. In various implementations, an identification token that identifies a first file for a voice data collection campaign, and a second file for a session script, may be received from a natural language processing training device. The first file and the second file may be used to configure a mobile application to display a sequence of screens, each of the sequence of screens containing text of at least one utterance specified in the voice data collection campaign. Voice data may be received from the natural language processing training device in response to user interaction with the text of the at least one utterance. The voice data and the text may be stored in a transcription library.
    Type: Grant
    Filed: September 7, 2015
    Date of Patent: June 7, 2016
    Assignee: VoiceBox Technologies Corporation
    Inventors: Daniela Braga, Faraz Romani, Ahmad Khamis Elshenawy, Michael Kennewick
  • Patent number: 9355683
    Abstract: Provided are a method and apparatus for setting a marker within audio information, the method including: receiving the audio information including a silent portion and a non-silent portion; receiving a selection for a selected marker insertion point; determining, based on the received selection and received audio information, whether the selected marker insertion point occurs during the non-silent portion; and if the selected marker insertion point occurs during the non-silent portion, determining a time of the silent portion, and setting the marker to correspond to the determined time of the silent portion.
    Type: Grant
    Filed: July 30, 2010
    Date of Patent: May 31, 2016
    Inventor: Jung-dae Kim
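The abstract's logic (keep a marker placed in silence, relocate one placed during speech to a silent time) can be sketched as follows, assuming the silent portions have already been detected as time intervals:

```python
def place_marker(t, silences):
    """silences: sorted list of (start, end) silent intervals in
    seconds. If t already falls within a silent portion, keep it;
    otherwise move it to the nearest silent-interval boundary."""
    for start, end in silences:
        if start <= t <= end:
            return t
    boundaries = [b for interval in silences for b in interval]
    return min(boundaries, key=lambda b: abs(b - t))
```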
  • Patent number: 9343053
    Abstract: A method of adding sound effects to movies, comprising: opening a file comprising audio and video tracks on a computing device comprising a display and touch panel input mode; running the video track on the display; selecting an audio sound suitable to a displayed frame from an audio sounds library; and adding audio effects to said selected audio sound using hand gestures on displayed art effects.
    Type: Grant
    Filed: May 7, 2014
    Date of Patent: May 17, 2016
    Assignee: SOUND IN MOTION
    Inventor: Zion Harel
  • Patent number: 9286900
    Abstract: An audio codec in a baseband processor may be utilized for mixing audio signals received at a plurality of data sampling rates. The mixed audio signals may be up-sampled to a very large sampling rate, and then down-sampled to a specified sampling rate that is compatible with a Bluetooth-enabled device by utilizing an interpolator in the audio codec. The down-sampled signals may be communicated to Bluetooth-enabled devices, such as Bluetooth headsets, or Bluetooth-enabled devices with a USB interface. The interpolator may be a linear interpolator for which the audio codec may enable generation of triggering and/or coefficient signals based on the specified output sampling rate. An interpolation coefficient may be generated based on a base value associated with the specified output sampling rate. The audio codec may enable selecting the specified output sampling rate from a plurality of rates.
    Type: Grant
    Filed: March 21, 2011
    Date of Patent: March 15, 2016
    Assignee: Broadcom Corporation
    Inventors: Hongwei Kong, Nelson Sollenberger, Li Fung Chang, Claude Hayek, Taiyi Cheng
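Linear-interpolation resampling of the kind the abstract describes is a standard technique. A generic textbook sketch (not Broadcom's codec design) showing the fractional source position and the interpolation coefficient:

```python
def resample_linear(samples, src_rate, dst_rate):
    """Resample a signal from src_rate to dst_rate by linearly
    interpolating between adjacent source samples."""
    if not samples:
        return []
    out_len = int(len(samples) * dst_rate / src_rate)
    step = src_rate / dst_rate
    out = []
    for n in range(out_len):
        pos = n * step                    # fractional source index
        i = int(pos)
        frac = pos - i                    # interpolation coefficient
        j = min(i + 1, len(samples) - 1)  # clamp at the last sample
        out.append(samples[i] * (1.0 - frac) + samples[j] * frac)
    return out
```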
  • Patent number: 9251805
    Abstract: An object of the present invention is to process the speech of a particular speaker. The present invention provides a technique for collecting speech, analyzing the collected speech to extract the features of the speech, grouping the speech, or text corresponding to the speech, on the basis of the extracted features, presenting the result of the grouping to a user, and, when one or more of the groups is selected by the user, enhancing, reducing, or cancelling the speech of the speaker associated with the selected group.
    Type: Grant
    Filed: December 2, 2013
    Date of Patent: February 2, 2016
    Assignee: International Business Machines Corporation
    Inventors: Taku Aratsu, Masami Tada, Akihiko Takajo, Takahito Tashiro
  • Patent number: 9230561
    Abstract: A system and method of creating a customized multi-media message to a recipient is disclosed. The multi-media message is created by a sender and contains an animated entity that delivers an audible message. The sender chooses the animated entity from a plurality of animated entities. The system receives a text message from the sender and receives a sender audio message associated with the text message. The sender audio message is associated with the chosen animated entity to create the multi-media message. The multi-media message is delivered by the animated entity using as the voice the sender audio message wherein the mouth movements of the animated entity conform to the sender audio message.
    Type: Grant
    Filed: August 27, 2013
    Date of Patent: January 5, 2016
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar, Barbara Buda, Claudio Lande
  • Patent number: 9202469
    Abstract: A technique for recording dictation, meetings, lectures, and other events includes automatically segmenting an audio recording into portions by detecting speech transitions within the recording and selectively identifying certain portions of the recording as noteworthy. Noteworthy audio portions are displayed to a user for selective playback. The user can navigate to different noteworthy audio portions while ignoring other portions. Each noteworthy audio portion starts and ends with a speech transition. Thus, the improved technique typically captures noteworthy topics from beginning to end, thereby reducing or avoiding the need for users to have to search for the beginnings and ends of relevant topics manually.
    Type: Grant
    Filed: September 16, 2014
    Date of Patent: December 1, 2015
    Assignee: Citrix Systems, Inc.
    Inventors: Yogesh Moorjani, Ryan Warren Kasper, Ashish V. Thapliyal, Ajay Kumar, Abhinav Kuruvadi Ramesh Babu, Elizabeth Thapliyal, James Kalbach, Margaret Dianne Cramer
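Segmenting a recording at speech transitions reduces, in the simplest case, to finding contiguous runs of speech frames. A crude stand-in for the transition detection described above, assuming a per-frame voice-activity flag is available:

```python
def speech_segments(flags):
    """flags: one boolean per frame (speech / non-speech). Return
    contiguous speech runs as (start, end) frame-index pairs, so
    each segment begins and ends at a speech transition."""
    segments, start = [], None
    for i, is_speech in enumerate(flags):
        if is_speech and start is None:
            start = i
        elif not is_speech and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(flags)))
    return segments
```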
  • Patent number: 9203966
    Abstract: A method and device are provided for modifying a compounded voice message having at least one first voice component. The method includes a step of obtaining at least one second voice component, a step of updating at least one item of information belonging to a group of items of information associated with the compounded voice message as a function of the at least one second voice component and a step of making available the compounded voice message comprising the at least one first and second voice components, and the group of items of information associated with the compounded voice message. The compounded voice message is intended to be consulted by at least one recipient user.
    Type: Grant
    Filed: September 26, 2012
    Date of Patent: December 1, 2015
    Assignee: FRANCE TELECOM
    Inventor: Ghislain Moncomble
  • Patent number: 9201580
    Abstract: Sound alignment user interface techniques are described. In one or more implementations, a user interface is output having a first representation of sound data generated from a first sound signal and a second representation of sound data generated from a second sound signal. One or more inputs are received, via interaction with the user interface, that indicate that a first point in time in the first representation corresponds to a second point in time in the second representation. Aligned sound data is generated from the sound data from the first and second sound signals based at least in part on correspondence of the first point in time in the sound data generated from the first sound signal to the second point in time in the sound data generated from the second sound signal.
    Type: Grant
    Filed: November 13, 2012
    Date of Patent: December 1, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Brian John King, Gautham J. Mysore, Paris Smaragdis
  • Patent number: 9159363
    Abstract: Systems and methods are disclosed to adjust the loudness or another audio attribute for one or more audio clips. Intra-track audio levels can automatically be equalized, for example, to achieve a homogeneous audio level for all clips within an audio track. Information about such audio adjustments may be identified and stored as information without destructively altering the underlying clip content. For example, keyframes may define changes to a fader that will be applied at different points along a track's timeline to achieve the audio adjustments when the track is played. An audio editing application can provide a feature for automatically determining appropriate keyframes, allow manual editing of keyframes, and use keyframes to display control curves that represent graphically the time-based adjustments made to track-specific faders, play test audio output, and output combined audio, among other things.
    Type: Grant
    Filed: April 2, 2010
    Date of Patent: October 13, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Holger Classen, Sven Duwenhorst
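Equalizing intra-track audio levels can be illustrated by computing one non-destructive gain per clip that brings each clip's measured level to a common target. A minimal peak-based sketch (the application could equally use a loudness measure):

```python
def equalize_gains(clip_levels, target=None):
    """clip_levels: measured level (e.g. peak) per clip, as linear
    values. Returns a gain multiplier per clip; by default every
    clip is raised to the loudest clip's level. The underlying clip
    content is untouched - only the gains would be stored."""
    if target is None:
        target = max(clip_levels)
    return [target / level for level in clip_levels]
```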
  • Patent number: 9153234
    Abstract: A speech recognition apparatus includes: a recognition device that recognizes a speech of a user and generates a speech character string; a display device that displays the speech character string; a reception device that receives an input of a correction character string, which is used for correction of the speech character string, through an operation portion; and a correction device that corrects the speech character string using the correction character string.
    Type: Grant
    Filed: March 19, 2013
    Date of Patent: October 6, 2015
    Inventors: Toru Nada, Kiyotaka Taguchi, Makoto Manabe, Shinji Hatanaka, Norio Sanma, Makoto Obayashi, Akira Yoshizawa
  • Patent number: 9064558
    Abstract: A recording and/or reproducing apparatus includes a microphone, a semiconductor memory, an operating section and a controller. An output signal from the microphone is written in the semiconductor memory and the written signal is read out from the semiconductor memory. The operating section performs input processing for writing a digital signal outputted by an analog/digital converter, reading out the digital signal stored in the semiconductor memory, and erasing the digital signal stored in the semiconductor memory. The controller controls the writing of the microphone output signal in the semiconductor memory based on an input from the operating section and the readout of the digital signal stored in the semiconductor memory.
    Type: Grant
    Filed: May 30, 2013
    Date of Patent: June 23, 2015
    Assignee: Sony Corporation
    Inventor: Kenichi Iida
  • Patent number: 9066049
    Abstract: Provided in some embodiments is a computer implemented method that includes providing script data including script words indicative of dialogue words to be spoken, providing recorded dialogue audio data corresponding to at least a portion of the dialogue words to be spoken, wherein the recorded dialogue audio data includes timecodes associated with recorded audio dialogue words, matching at least some of the script words to corresponding recorded audio dialogue words to determine alignment points, determining that a set of unmatched script words are accurate based on the matching of at least some of the script words matched to corresponding recorded audio dialogue words, generating time-aligned script data including the script words and their corresponding timecodes and the set of unmatched script words determined to be accurate based on the matching of at least some of the script words matched to corresponding recorded audio dialogue words.
    Type: Grant
    Filed: May 28, 2010
    Date of Patent: June 23, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Jerry R. Scoggins, II, Walter W. Chang, David A. Kuspa
  • Patent number: 9058876
    Abstract: A resistive random access memory integrated circuit for use as a mass storage media and adapted for bulk erase by substantially simultaneously switching all memory cells to one of at least two possible resistive states. Bulk switching is accomplished by biasing all bottom electrodes within an erase area to a voltage lower than that of the top electrodes, wherein the erase area can comprise the entire memory array of the integrated circuit or else a partial array. Alternatively the erase area may be a single row and, upon receiving the erase command, the row address is advanced automatically and the erase step is repeated until the entire array has been erased.
    Type: Grant
    Filed: June 21, 2013
    Date of Patent: June 16, 2015
    Assignee: 4D-S, LTD
    Inventors: Lee Cleveland, Franz Michael Schuette
  • Publication number: 20150149183
    Abstract: Processes are described herein for transforming an audio mixture signal data structure into a specified component data structure and a background component data structure. In the processes described herein, pitch differences between a guide signal and a dialogue component of an audio mixture signal are accounted for by explicit modeling.
    Type: Application
    Filed: November 26, 2014
    Publication date: May 28, 2015
    Inventor: Romain Hennequin
  • Patent number: 9037469
    Abstract: An apparatus includes a plurality of applications and an integrator having a voice recognition module configured to identify at least one voice command from a user. The integrator is configured to integrate information from a remote source into at least one of the plurality of applications based on the identified voice command. A method includes analyzing speech from a first user of a first mobile device having a plurality of applications, identifying a voice command based on the analyzed speech using a voice recognition module, and incorporating information from the remote source into at least one of a plurality of applications based on the identified voice command.
    Type: Grant
    Filed: January 27, 2014
    Date of Patent: May 19, 2015
    Inventor: Robert E. Opaluch
  • Patent number: 9031850
    Abstract: A stream combining apparatus is provided, comprising an input unit that receives the input of first group access units and second group access units from two streams that are generated by overlap transform; a decoder that generates first group frames by decoding the first group access units and second group frames by decoding the second group access units; and a combining unit that uses the first group frames and second group frames as frames of reference for the access units, decodes the frames, performs selective mixing to generate mixed frames, encodes said mixed frames, generates a prescribed number of group access units, and joins the two streams, using the prescribed number of group access units as a joint, such that the access units adjacent to each other on the boundary between the two streams and the prescribed number of group access units are stitched so that the information for decoding the same common frames is distributed.
    Type: Grant
    Filed: August 20, 2009
    Date of Patent: May 12, 2015
    Assignee: GVBB Holdings S.A.R.L.
    Inventor: Yousuke Takada
  • Patent number: 9031827
    Abstract: The present invention relates to a new method and system for use of a multi-protocol conference bridge, and more specifically to a new multi-language conference bridge system and method of use in which different cues, such as an attenuated voice of the original non-interpreted speaker, are used to improve the flow of information over the system.
    Type: Grant
    Filed: November 30, 2012
    Date of Patent: May 12, 2015
    Assignee: Zip DX LLC
    Inventors: David Paul Frankel, Barry Slaughter Olsen
  • Patent number: 9031838
    Abstract: Systems, methods and apparatus are described herein for continuously measuring voice clarity and speech intelligibility by evaluating a plurality of telecommunications channels in real time. Voice clarity and speech intelligibility measurements may be formed from chained, configurable DSPs that can be added, subtracted, reordered, or configured to target specific audio features. Voice clarity and speech intelligibility may be enhanced by altering the media in one or more of the plurality of telecommunications channels. Analytics describing the measurements and enhancements may be displayed in reports, or in real time via a dashboard.
    Type: Grant
    Filed: July 14, 2014
    Date of Patent: May 12, 2015
    Assignee: Vail Systems, Inc.
    Inventors: Alex Nash, Mariano Tan, David Fruin, Todd Whiteley, Jon Wotman
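    The chained, reorderable DSP idea in this abstract can be illustrated with a minimal sketch (not the patented implementation): each stage maps a list of samples to a new list, and the chain is reconfigured simply by changing its arguments. The stage names below are invented for illustration.

    ```python
    def chain(*stages):
        """Compose configurable DSP stages; each stage maps a list of
        samples to a new list. Stages can be added, removed, or reordered
        by changing the arguments passed to chain()."""
        def run(samples):
            for stage in stages:
                samples = stage(samples)
            return samples
        return run

    # Two illustrative stages: attenuate by half, then hard-clip to [-1, 1].
    attenuate = lambda xs: [x * 0.5 for x in xs]
    clip = lambda xs: [max(-1.0, min(1.0, x)) for x in xs]

    pipeline = chain(attenuate, clip)
    ```

    Targeting a specific audio feature would then mean inserting a stage tuned for that feature into the chain.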
  • Patent number: 9015051
    Abstract: An audio signal having at least one audio channel and associated direction parameters indicating a direction of origin of a portion of the audio channel with respect to a recording position is reconstructed to derive a reconstructed audio signal. A desired direction of origin with respect to the recording position is selected. The portion of the audio channel is modified for deriving a reconstructed portion of the reconstructed audio signal, wherein the modifying includes increasing an intensity of the portion of the audio channel having direction parameters indicating a direction of origin close to the desired direction of origin with respect to another portion of the audio channel having direction parameters indicating a direction of origin further away from the desired direction of origin.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: April 21, 2015
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventor: Ville Pulkki
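    The selective-emphasis step described above can be sketched as a gain that grows as a portion's estimated direction of origin approaches the desired direction. The Gaussian window and all parameter names below are illustrative assumptions, not the patented formulation.

    ```python
    import math

    def directional_gain(portion_azimuth, desired_azimuth, width=30.0, boost=2.0):
        """Gain for an audio portion based on the angular distance (degrees)
        between its direction parameter and the desired direction of origin.
        Portions near the desired azimuth are amplified up to `boost`;
        distant portions keep roughly unit gain."""
        diff = abs((portion_azimuth - desired_azimuth + 180.0) % 360.0 - 180.0)
        return 1.0 + (boost - 1.0) * math.exp(-(diff / width) ** 2)

    def emphasize_direction(portions, desired_azimuth):
        """portions: list of (sample_value, azimuth_deg) pairs."""
        return [s * directional_gain(az, desired_azimuth) for s, az in portions]
    ```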
  • Patent number: 9002717
    Abstract: A system that incorporates teachings of the present disclosure may include, for example, a controller configured to obtain information associated with media content, to generate a first group of tones representative of the information associated with the media content, and to generate a media stream comprising the media content and the first group of tones; and a communication interface configured to transmit the media stream to a media device whereby the media device presents the media content and a sequence of tones, where the sequence of tones is generated based at least in part on the first group of tones, where the first group of tones comprises high frequency tones and low frequency tones, and where one of the high and low frequency tones represents a binary one and the other of the high and low frequency tones represents a binary zero. Other embodiments are disclosed.
    Type: Grant
    Filed: December 3, 2010
    Date of Patent: April 7, 2015
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Ke Yu, Ashwini Sule
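    The binary encoding described here resembles frequency-shift keying: one tone frequency stands for a binary one, the other for a binary zero. A minimal sketch, with tone frequencies, burst length, and sample rate chosen purely for illustration:

    ```python
    import math

    SAMPLE_RATE = 8000
    HIGH_HZ, LOW_HZ = 2400.0, 1200.0   # illustrative pair: high tone = 1, low tone = 0

    def tone(freq_hz, duration_s=0.01):
        """Generate one sine-tone burst."""
        n = int(SAMPLE_RATE * duration_s)
        return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

    def encode_bits(bits):
        """Map each bit of the media-content information to a tone burst."""
        samples = []
        for b in bits:
            samples.extend(tone(HIGH_HZ if b else LOW_HZ))
        return samples
    ```

    A receiver in the media device would detect which of the two frequencies dominates each burst to recover the bits.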
  • Patent number: 8996382
    Abstract: Systems and methods are described for inhibiting access to the lips of a speaking person, including a sound receiving device for receiving the speech of a person speaking, the person having lips that move when the person speaks; a blocker connected to the device for blocking the lips of the person while the person is speaking; and, in some aspects, such a blocker with a material-addition apparatus that provides added material for the breath of the person speaking, e.g., to prevent the spread of disease or to freshen the speaker's breath.
    Type: Grant
    Filed: October 11, 2011
    Date of Patent: March 31, 2015
    Inventor: Guy L. McClung, III
  • Patent number: 8996380
    Abstract: Systems and methods of synchronizing media are provided. A client device may be used to capture a sample of a media stream being rendered by a media rendering source. The client device sends the sample to a position identification module to determine a time offset indicating a position in the media stream corresponding to the sampling time of the sample, and optionally a timescale ratio indicating a speed at which the media stream is being rendered by the media rendering source based on a reference speed of the media stream. The client device calculates a real-time offset using a present time, a timestamp of the media sample, the time offset, and optionally the timescale ratio. The client device then renders a second media stream at a position corresponding to the real-time offset to be in synchrony to the media stream being rendered by the media rendering source.
    Type: Grant
    Filed: May 4, 2011
    Date of Patent: March 31, 2015
    Assignee: Shazam Entertainment Ltd.
    Inventors: Avery Li-Chun Wang, Rahul Powar, William Michael Mills, Christopher Jacques Penrose Barton, Philip Georges Inghelbrecht, Dheeraj Shankar Mukherjee
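    The real-time offset calculation named in the abstract can be sketched as follows; the exact formula is an assumption consistent with the abstract's inputs (present time, sample timestamp, time offset, optional timescale ratio), not quoted from the patent claims.

    ```python
    def real_time_offset(present_time, sample_timestamp, time_offset, timescale_ratio=1.0):
        """Estimate the media stream's current playback position.

        time_offset is the stream position at the moment the sample was
        captured; timescale_ratio compensates for playback faster or
        slower than the reference speed. All times in seconds."""
        return time_offset + (present_time - sample_timestamp) * timescale_ratio
    ```

    The second media stream would then be started at this position to play in synchrony with the first.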
  • Patent number: 8977552
    Abstract: A system, method and computer readable medium that enhances a speech database for speech synthesis is disclosed. The method may include labeling audio files in a primary speech database, identifying segments in the labeled audio files that have varying pronunciations based on language differences, identifying replacement segments in a secondary speech database, enhancing the primary speech database by substituting the identified secondary speech database segments for the corresponding identified segments in the primary speech database, and storing the enhanced primary speech database for use in speech synthesis.
    Type: Grant
    Filed: May 28, 2014
    Date of Patent: March 10, 2015
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Alistair D. Conkie, Ann K. Syrdal
  • Patent number: 8972269
    Abstract: A transcript interface for displaying a plurality of words of a transcript in a text editor can be provided and configured to receive a command to edit the transcript. Limited edits to data corresponding to the transcript can be made in response to commands received via the user interface module. For example, edits may be limited to selection of a single word in the text editor for editing via a given command. The edit may affect an adjacent word in some instances, such as when two adjacent words are merged. In some embodiments, data corresponding to the selected word of the transcript is changed to reflect the edit without changing data defining the relative timing of those words of the transcript that are not adjacent to the selected word.
    Type: Grant
    Filed: December 1, 2008
    Date of Patent: March 3, 2015
    Assignee: Adobe Systems Incorporated
    Inventor: Steven Hoeg
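    The merge-adjacent-words edit described above can be sketched on a simple word list, assuming each word carries its own start/end timing; only the selected word and its neighbor change, while the timing data of all other words is untouched. The data shape is an assumption for illustration.

    ```python
    def merge_with_next(words, i):
        """words: list of dicts {'text', 'start', 'end'}. Merge word i with
        word i + 1 into one token spanning both time ranges, leaving the
        timing of every non-adjacent word unchanged."""
        merged = {
            'text': words[i]['text'] + words[i + 1]['text'],
            'start': words[i]['start'],
            'end': words[i + 1]['end'],
        }
        return words[:i] + [merged] + words[i + 2:]
    ```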
  • Patent number: 8972251
    Abstract: An electronic device for generating a masking signal is described. The electronic device includes a plurality of microphones and a speaker. The electronic device also includes a processor and executable instructions stored in memory that is in electronic communication with the processor. The electronic device obtains a plurality of audio signals from the plurality of microphones. The electronic device also obtains an ambience signal based on the plurality of audio signals. The electronic device further determines an ambience feature based on the ambience signal. Additionally, the electronic device obtains a voice signal based on the plurality of audio signals. The electronic device also determines a voice feature based on the voice signal. The electronic device additionally generates a masking signal based on the voice feature and the ambience feature. The electronic device further outputs the masking signal using the speaker.
    Type: Grant
    Filed: June 7, 2011
    Date of Patent: March 3, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Pei Xiang, Joseph Jyh-huei Huang, Andre Gustavo Pucci Schevciw, Anthony Mauro, Erik Visser
  • Patent number: 8954332
    Abstract: A computer-implemented system and method for masking special data is provided. Speakers of a call recording are identified. The call recording is separated into strands corresponding to each of the speakers. A prompt list of elements that prompt the speaker of the other strand to utter special information is applied to one of the strands. At least one of the elements of the prompt list is identified in the one strand. A special information candidate is identified in the other strand and is located after a location in time where the element was found in the voice recording of the one strand. A confidence score is assigned to the element located in the one strand and to the special information candidate in the other strand. The confidence scores are combined and a threshold is applied. The special information candidate is rendered unintelligible when the combined confidence scores satisfy the threshold.
    Type: Grant
    Filed: November 4, 2013
    Date of Patent: February 10, 2015
    Assignee: Intellisist, Inc.
    Inventors: Howard M. Lee, Steven Lutz, Gilad Odinak
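    The score-combination and redaction steps at the end of the abstract can be sketched as below. The additive combination and the threshold value are assumptions for illustration; the patent does not specify how the two confidence scores are combined.

    ```python
    def should_redact(prompt_score, candidate_score, threshold=1.2):
        """Combine the confidence that a prompt element was found in one
        speaker's strand with the confidence that the special-information
        candidate follows in the other strand; redact when the combined
        score satisfies the threshold."""
        return (prompt_score + candidate_score) >= threshold

    def mask(text, redact):
        """Render the candidate unintelligible when redaction is required."""
        return '*' * len(text) if redact else text
    ```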
  • Patent number: 8949123
    Abstract: The voice conversion method of a display apparatus includes: in response to the receipt of a first video frame, detecting one or more entities from the first video frame; in response to the selection of one of the detected entities, storing the selected entity; in response to the selection of one of a plurality of previously-stored voice samples, storing the selected voice sample in connection with the selected entity; and in response to the receipt of a second video frame including the selected entity, changing a voice of the selected entity based on the selected voice sample and outputting the changed voice.
    Type: Grant
    Filed: April 11, 2012
    Date of Patent: February 3, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Aditi Garg, Kasthuri Jayachand Yadlapalli
  • Patent number: 8942987
    Abstract: A clear picture of who is speaking in a setting where there are multiple input sources (e.g., a conference room with multiple microphones) can be obtained by comparing input channels against each other. The data from each channel can not only be compared, but can also be organized into portions which logically correspond to statements by a user. These statements, along with information regarding who is speaking, can be presented in a user friendly format via an interactive timeline which can be updated in real time as new audio input data is received.
    Type: Grant
    Filed: March 21, 2014
    Date of Patent: January 27, 2015
    Assignee: Jefferson Audio Video Systems, Inc.
    Inventors: Matthew David Bader, Nathan David Cole
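    A minimal stand-in for the cross-channel comparison described above: for each time window, the channel with the highest short-term energy is taken as the one nearest the current speaker. This is a simplifying assumption, not the patented comparison method.

    ```python
    def active_channel(frames):
        """frames: one list of samples per microphone channel, all covering
        the same time window. Returns the index of the channel with the
        highest energy, i.e. the likely current speaker's microphone."""
        energies = [sum(s * s for s in ch) for ch in frames]
        return max(range(len(frames)), key=lambda i: energies[i])
    ```

    Runs of windows with the same winning channel would then be grouped into the per-speaker statements shown on the interactive timeline.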
  • Patent number: 8924216
    Abstract: A method is provided for synchronizing sound data and text data, said text data being obtained by manual transcription of said sound data during playback of the latter. The proposed method comprises the steps of repeatedly querying said sound data and said text data to obtain a current time position corresponding to a currently played sound datum and a currently transcribed text datum, respectively, correcting said current time position by applying a time correction value in accordance with a transcription delay, and generating at least one association datum indicative of a synchronization association between said corrected time position and said currently transcribed text datum. Thus, the proposed method achieves cost-effective synchronization of sound and text in connection with the manual transcription of sound data.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: December 30, 2014
    Assignee: Nuance Communications, Inc.
    Inventors: Andreas Neubacher, Miklos Papai
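    The correction step above amounts to subtracting the typist's lag from the playback position before recording the association. A minimal sketch, with the data shape assumed for illustration:

    ```python
    def associate(sound_position_s, transcription_delay_s, word):
        """Associate a transcribed word with the sound position at which it
        was actually spoken: correct the current playback position by the
        transcription delay, then emit the association datum."""
        corrected = sound_position_s - transcription_delay_s
        return {'time': corrected, 'text': word}
    ```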
  • Publication number: 20140372129
    Abstract: Methods and systems are provided for receiving desired sounds. The system includes a position sensor configured to determine an occupant position of an occupant engaging in speech within a defined space and transmit the speaking occupant position. A plurality of microphones are configured to receive sound from within the defined space and transmit audio signals corresponding to the received sound. A processor, in communication with the position sensor and the microphones, is configured to receive the speaking occupant position and the audio signals, apply a beamformer to the audio signals to direct a microphone beam toward the occupant position, and generate a beamformer output signal.
    Type: Application
    Filed: June 14, 2013
    Publication date: December 18, 2014
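    The beamforming step can be illustrated with the classic delay-and-sum approach: each microphone channel is delayed by its propagation difference to the speaking occupant's position, then the channels are averaged. Integer sample delays keep the sketch minimal; the patent does not specify which beamformer is used.

    ```python
    def delay_and_sum(channels, delays):
        """channels: one list of samples per microphone; delays: integer
        sample delay per channel steering the beam toward the occupant.
        Returns the averaged (beamformed) output signal."""
        n = min(len(ch) - d for ch, d in zip(channels, delays))
        out = []
        for i in range(n):
            out.append(sum(ch[i + d] for ch, d in zip(channels, delays)) / len(channels))
        return out
    ```

    Sound arriving from the steered direction adds coherently across channels, while sound from other directions is attenuated by the averaging.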
  • Patent number: 8898062
    Abstract: A strained-rough-voice conversion unit (10) is included in a voice conversion device that can generate a “strained rough” voice produced in a part of a speech when speaking forcefully with excitement, nervousness, anger, or emphasis and thereby richly express vocal expression such as anger, excitement, or an animated or lively way of speaking, using voice quality change. The strained-rough-voice conversion unit (10) includes: a strained phoneme position designation unit (11) designating a phoneme to be uttered as a “strained rough” voice in a speech; and an amplitude modulation unit (14) performing modulation including periodic amplitude fluctuation on a speech waveform.
    Type: Grant
    Filed: January 22, 2008
    Date of Patent: November 25, 2014
    Assignee: Panasonic Intellectual Property Corporation of America
    Inventors: Yumiko Kato, Takahiro Kamai
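    The amplitude-modulation step named in the abstract can be sketched as a periodic gain fluctuation applied to the waveform of the designated phonemes. The modulation rate, depth, and raised-cosine shape below are illustrative values, not those claimed in the patent.

    ```python
    import math

    def strained_rough(samples, rate_hz=80.0, depth=0.4, sample_rate=16000):
        """Apply periodic amplitude fluctuation to a speech waveform so the
        designated phonemes take on a 'strained rough' quality. The gain
        swings between 1.0 and (1.0 - depth) at rate_hz."""
        return [s * (1.0 - depth * 0.5 * (1 - math.cos(2 * math.pi * rate_hz * i / sample_rate)))
                for i, s in enumerate(samples)]
    ```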