Sound Editing Patents (Class 704/278)
  • Patent number: 11315546
    Abstract: Disclosed are systems and methods for improving interactions with and between computers in content searching, generating, hosting and/or providing systems supported by or configured with personal computing devices, servers and/or platforms. The systems interact to identify and retrieve data within or across platforms, which can be used to improve the quality of data used in processing interactions between or among processors in such systems. The disclosed systems and methods provide systems and methods for automatic creation of a formatted, readable transcript of multimedia content, which is derived, extracted, determined, or otherwise identified from the multimedia content. The formatted, readable transcript can be utilized to increase accuracy and efficiency in search engine optimization, as well as identification of relevant digital content available for communication to a user.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: April 26, 2022
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Aasish Pappu, Amanda Stent
  • Patent number: 11295783
    Abstract: Methods, apparatus, and systems for automatically producing a video program in accordance with a script are provided. Various media assets are recorded and/or stored in a content database, together with metadata relating to each of the media assets. Each media asset is tagged with a unique content ID, the unique content ID associating the metadata with the media asset. The media assets are then indexed. Text from a script is then analyzed using natural language processing to locate one or more relevant indexed media assets. The located one or more media assets are assembled into a video program in accordance with the script.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: April 5, 2022
    Assignee: TVU Networks Corporation
    Inventors: Paul Shen, Christopher Bell, Matthew Richard McEwen, Justin Chen
  • Patent number: 11238888
    Abstract: The disclosed computer-implemented method may include obtaining an audio sample from a content source, inputting the obtained audio sample into a trained machine learning model, obtaining the output of the trained machine learning model, wherein the output is a profile of an environment in which the input audio sample was recorded, obtaining an acoustic impulse response corresponding to the profile of the environment in which the input audio sample was recorded, obtaining a second audio sample, processing the obtained acoustic impulse response with the second audio sample, and inserting a result of processing the obtained acoustic impulse response and the second audio sample into an audio track. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: February 1, 2022
    Assignee: Netflix, Inc.
    Inventors: Yadong Wang, Shilpa Jois Rao, Murthy Parthasarathi, Kyle Tacke
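The core signal processing step here, applying the obtained acoustic impulse response to the second audio sample, amounts to a convolution. A minimal sketch using direct-form convolution on plain sample lists (function and parameter names are illustrative, not from the patent):

```python
def apply_impulse_response(dry_audio, impulse_response):
    """Apply an acoustic impulse response to a (dry) audio sample via
    direct convolution, so the processed audio sounds as if it were
    recorded in the profiled environment."""
    n, m = len(dry_audio), len(impulse_response)
    out = [0.0] * (n + m - 1)
    for i, x in enumerate(dry_audio):
        for j, h in enumerate(impulse_response):
            out[i + j] += x * h  # each input sample excites the full IR
    return out
```

Direct convolution is shown for clarity; a production system would use FFT-based (fast) convolution for impulse responses of realistic length.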
  • Patent number: 11227638
    Abstract: The present invention discloses a method and system for cutting video using video content. The method comprises: acquiring a recorded video produced by a user's recording operation; extracting features of the recorded audio in the recorded video and judging whether the recorded audio is damaged; if it is not, extracting human voice data from the recorded audio after background sound has been filtered out, intercepting the video segment corresponding to effective human voice, and displaying that video segment as the clip video; and if it is, extracting image feature data of the person's mouth shape and body movements from the recorded video after image processing, fitting the image feature data to the human voice data from which background sound has been filtered, and displaying the video segment with the highest fitting degree as the clip video.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: January 18, 2022
    Assignee: Sunday Morning Technology (Guangzhou) Co., Ltd.
    Inventors: Qianya Lin, Tian Xia, RemyYiYang Ho, Zhenli Xie, Pinlin Chen, Rongchan Liu
  • Patent number: 11210322
    Abstract: Embodiments of the present disclosure relate to a method and apparatus for reducing storage space of a parameter table. The method may include: storing the parameter table in a lookup table system configured to compute an output value of a non-linear function according to an input value of the non-linear function, the parameter table including only an index value associated with an input value on one side of a median in a domain of the non-linear function and a parameter value corresponding to the index value; determining, by using a corresponding relationship between the index value associated with the input value on one side and the parameter value corresponding to the index value, a parameter value corresponding to an index value associated with an input value on the other side; and computing the output value by using the input value on the other side and the determined corresponding parameter value.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: December 28, 2021
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Huimin Li, Jian Ouyang
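For a non-linear function with the right symmetry, the space saving is concrete: store table entries only for inputs on one side of the median of the domain and derive values on the other side from the stored ones. A sketch using the sigmoid, whose two halves are related by sigmoid(-x) = 1 - sigmoid(x); the step size and range below are illustrative choices, not from the patent:

```python
import math

# Lookup table for one side of the median only (x >= 0, up to x = 10).
STEP = 0.01
TABLE = {round(i * STEP, 2): 1.0 / (1.0 + math.exp(-i * STEP))
         for i in range(0, 1001)}

def sigmoid_lookup(x: float) -> float:
    """Approximate sigmoid(x) using the half-domain table."""
    key = round(min(abs(x), 10.0), 2)
    y = TABLE[key]
    # Mirror the stored value for inputs on the other side of the median.
    return y if x >= 0 else 1.0 - y
```

The table holds half as many entries as a full-domain table at the same resolution, which is the storage reduction the abstract describes.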
  • Patent number: 11157150
    Abstract: The present invention can include electronic devices having variable input/output interfaces that can allow a user to interact with the devices with greater efficiency and in a more ergonomic manner. An electronic device of the present invention can display icons associated with user-programmable parameters of a media file. By interacting with the icons, a user can change the user-programmable parameters during playback of the media file. Changes to the user-programmable parameters can affect playback of the remainder of the media file. An electronic device of the present invention also can automatically re-orient images shown on a display and re-configure user input components based on the orientation of the electronic device.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: October 26, 2021
    Assignee: Apple Inc.
    Inventors: Glenn Gregory Gilley, Sarah A. Brody, Randall Hayes Ubillos, Mihnea Calin Pacurariu
  • Patent number: 11152007
    Abstract: Embodiments of a method and device for matching a speech with a text, and a computer-readable storage medium are provided. The method can include: acquiring a speech identification text by identifying a received speech signal; comparing the speech identification text with multiple candidate texts in a first matching mode to determine a first matching text; and comparing phonetic symbols of the speech identification text with phonetic symbols of the multiple candidate texts in a second matching mode to determine a second matching text, in a case that no first matching text is determined.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: October 19, 2021
    Assignee: Baidu Online Network Technology Co., Ltd.
    Inventor: Yongshuai Lu
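The two matching modes can be sketched as a simple cascade: exact text comparison first, with phonetic comparison only as a fallback when no text match is found. The phonetic function below is caller-supplied and purely illustrative; a real system would use pinyin or a grapheme-to-phoneme model:

```python
def match_text(recognized, candidates, phonetic):
    """Match a speech identification text against candidate texts.

    First matching mode: direct text comparison. Second matching mode
    (used only if the first finds nothing): compare phonetic forms.
    """
    for cand in candidates:
        if recognized == cand:
            return cand
    rec_ph = phonetic(recognized)
    for cand in candidates:
        if phonetic(cand) == rec_ph:
            return cand
    return None
```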
  • Patent number: 11138970
    Abstract: The present disclosure relates to a system, method, and computer program for creating a complete transcription of an audio recording from separately transcribed redacted and unredacted words. The system receives an original audio recording and redacts a plurality of words from the original audio recording to obtain a modified audio recording. The modified audio recording is outputted to a first transcription service. Audio clips of the redacted words from the original audio recording are extracted using word-level timestamps for the redacted words. The extracted audio clips are outputted to a second transcription service. The system receives a transcription of the modified audio recording from the first transcription service and transcriptions of the extracted audio clips from the second transcription service.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: October 5, 2021
    Assignee: ASAPP, Inc.
    Inventors: Kyu Jeong Han, Madison Chandler Riley, Tao Ma
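The recombination step can be sketched as follows, assuming the transcript of the modified recording carries a placeholder token wherever a word was redacted; the placeholder name and data shapes are assumptions for illustration, not specified by the patent:

```python
REDACTION_MARK = "[REDACTED]"  # assumed placeholder; the patent names none

def merge_transcripts(modified_transcript, clip_transcripts):
    """Recombine a transcript of the redacted recording (from the first
    service) with per-clip transcripts of the redacted words (from the
    second service) into a complete transcript."""
    merged, clip_iter = [], iter(clip_transcripts)
    for token in modified_transcript.split():
        if token == REDACTION_MARK:
            # Redacted clips arrive in timestamp order, so fill in order.
            merged.append(next(clip_iter, "<missing>"))
        else:
            merged.append(token)
    return " ".join(merged)
```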
  • Patent number: 11062696
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech endpointing are described. In one aspect, a method includes the action of accessing voice query log data that includes voice queries spoken by a particular user. The actions further include based on the voice query log data that includes voice queries spoken by a particular user, determining a pause threshold from the voice query log data that includes voice queries spoken by the particular user. The actions further include receiving, from the particular user, an utterance. The actions further include determining that the particular user has stopped speaking for at least a period of time equal to the pause threshold. The actions further include based on determining that the particular user has stopped speaking for at least a period of time equal to the pause threshold, processing the utterance as a voice query.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: July 13, 2021
    Assignee: Google LLC
    Inventors: Siddhi Tadpatrikar, Michael Buchanan, Pravir Kumar Gupta
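One plausible reading of the per-user threshold derivation is a high percentile over pause durations observed in that user's voice query logs; the percentile and floor values below are illustrative assumptions, not taken from the patent:

```python
def pause_threshold(pause_log_seconds, percentile=0.95, floor=0.5):
    """Derive a per-user endpointing threshold from logged pause durations.

    A fast talker with short historical pauses gets a short threshold, so
    their queries are processed sooner; the floor avoids cutting anyone
    off too aggressively.
    """
    if not pause_log_seconds:
        return floor
    ordered = sorted(pause_log_seconds)
    idx = min(int(percentile * len(ordered)), len(ordered) - 1)
    return max(ordered[idx], floor)

def is_end_of_query(silence_so_far: float, threshold: float) -> bool:
    # Process the utterance as a voice query once the user has stopped
    # speaking for at least the derived threshold.
    return silence_so_far >= threshold
```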
  • Patent number: 11048749
    Abstract: A searchable media object comprises means for playing a media file; an audio file processor operable to receive a standard audio file and convert the standard audio file into machine coding; an Automatic Speech Recognition processor, operable to receive the standard audio file, determine each spoken word and output a string of text comprising each of the determined words, wherein each word in the string is accorded a time indicative of the time of occurrence of the word in the string; an assembler, operable to receive the machine coding, the string of text and the relevant topics and therefrom, assemble the self-contained searchable media object comprising a media player operable without connection to the internet; and, a search engine operable to receive a user word search enquiry, search the string of text for the word search enquiry, determine the accorded time of the occurrence of each word enquiry in the string of text, display the occurrences of the determined word search enquiry to a user, receive a us…
    Type: Grant
    Filed: April 5, 2017
    Date of Patent: June 29, 2021
    Inventor: Nigel Henry Cannings
  • Patent number: 11032623
    Abstract: The present invention discloses a subtitled image generation apparatus that includes a subtitle generation circuit, an image delay circuit and an overlaying circuit. The subtitle generation circuit receives audio data to generate a subtitle image pattern. The image delay circuit includes a first delay path having delay buffer circuit, a second delay path having a data amount decreasing circuit, the delay buffer circuit and a data amount restoring circuit, and a control circuit. The control circuit controls the first delay path to store and delay the image data when a data amount of the image data matches a direct-writing condition, and controls the second delay path to decrease the data amount of the image data, to store and delay the image data and to restore the data amount of the image data when the data amount fails to match the direct-writing condition. The overlaying circuit overlays the subtitle image pattern on the image data having a corresponding timing to generate an output subtitled image.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: June 8, 2021
    Assignee: REALTEK SEMICONDUCTOR CORPORATION
    Inventor: Lien-Hsiang Sung
  • Patent number: 11011157
    Abstract: Techniques are disclosed for generating ASR training data. According to an embodiment, impactful ASR training corpora is generated efficiently, and the quality or relevance of ASR training corpora being generated is increased by leveraging knowledge of the ASR system being trained. An example methodology includes: selecting one of a word or phrase, based on knowledge and/or content of said ASR training corpora; presenting a textual representation of said word or phrase; receiving a speech utterance that includes said word or phrase; receiving a transcript for said speech utterance; presenting said transcript for review (to allow for editing, if needed); and storing said transcript and said audio file in an ASR system training database. The selecting may include, for instance, selecting a word or phrase that is under-represented in said database, and/or based upon an n-gram distribution on a language, and/or based upon known areas that tend to incur transcription mistakes.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: May 18, 2021
    Assignee: ADOBE INC.
    Inventor: Franck Dernoncourt
  • Patent number: 10977345
    Abstract: Hardware on a device, including sensors that may be embedded in or accessible to the device, extends the validity session of an authentication event by identifying behavior of the user periodically during the session. By detecting behavior that may be directly or indirectly unrelated to the application, but is necessarily present if the user remains the same, the session is extended for as long as that behavior is within some defined parameters. This process may be accomplished either by using an expert system or through the application of machine learning. Such a system may take input from sensors and detect a pattern in those inputs that coincides with the presence of the ephemeral personal or behavioral patterns that it is desired to detect. Using that detected behavior, the validity of a session can be extended using statements about the variance or invariance of the detected ephemeral personal or behavioral states.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: April 13, 2021
    Assignee: TwoSense, Inc.
    Inventors: Dawud Gordon, John Tanios
  • Patent number: 10956860
    Abstract: Techniques for determining a clinician's intent to order an item may include processing a free-form narration, of an encounter with a patient, narrated by a clinician, using a natural language understanding engine implemented by one or more processors, to extract at least one clinical fact corresponding to a mention of an orderable item from the free-form narration. The processing may comprise distinguishing between whether the at least one clinical fact indicates an intent to order the orderable item or does not indicate an intent to order the orderable item. In response to determining that the at least one clinical fact indicates an intent to order the orderable item, an order may be generated for the orderable item.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: March 23, 2021
    Assignee: Nuance Communications, Inc.
    Inventors: Isam H. Habboush, Davide Zaccagnini
  • Patent number: 10943600
    Abstract: A system or method for manipulating audiovisual data using transcript information. The system or method performs the following actions. Creating a computer-generated transcript of audio data from the audiovisual data, the computer-generated transcript includes a plurality of words, at least some words of the plurality of words are associated with a respective timestamp and a confidence score. Receiving a traditional transcript of the audio data, the traditional transcript includes a plurality of words that are not associated with timestamps. Identifying one or more words from the plurality of words of the computer-generated transcript that match words from the plurality of words of the traditional transcript. Associating the timestamp of the one or more words of the computer-generated transcript with the matching word of the traditional transcript. Processing the audiovisual data using the traditional transcript and the associated timestamps.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: March 9, 2021
    Assignee: Axon Enterprise, Inc.
    Inventors: Joseph Charles Dimino, Jr., Sayce William Falk, Leo Thomas Rossignac-Milon
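The timestamp-transfer step can be sketched as an in-order word match between the two transcripts; the tuple layout is illustrative, not prescribed by the patent:

```python
def transfer_timestamps(generated, traditional_words):
    """Copy timestamps from a computer-generated transcript onto a
    traditional (timestamp-free) transcript by matching words in order.

    `generated` is a list of (word, timestamp, confidence) tuples;
    `traditional_words` is a plain list of words. Unmatched traditional
    words get no timestamp.
    """
    result = []
    gi = 0
    for word in traditional_words:
        ts = None
        # Scan forward for the next matching machine-recognized word.
        for j in range(gi, len(generated)):
            g_word, g_ts, _conf = generated[j]
            if g_word.lower() == word.lower():
                ts = g_ts
                gi = j + 1
                break
        result.append((word, ts))
    return result
```

The associated timestamps then let downstream tools (captioning, redaction, search) process the audiovisual data against the trusted traditional transcript.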
  • Patent number: 10929091
    Abstract: This disclosure concerns the playback of audio content, e.g. in the form of music. More particularly, the disclosure concerns the playback of streamed audio. In one example embodiment, there is a method of operating an electronic device for dynamically controlling a playlist including one or several audio items. A request to adjust an energy level (e.g. a tempo) associated with the playlist is received. In response to receiving this request, the playlist is adjusted in accordance with the requested energy level (e.g., the tempo).
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: February 23, 2021
    Assignee: SPOTIFY AB
    Inventors: Souheil Medaghri Alaoui, Miles Lennon, Kieran Del Pasqua
  • Patent number: 10886028
    Abstract: Techniques for presenting alternative hypotheses for medical facts may include identifying, using at least one statistical fact extraction model, a plurality of alternative hypotheses for a medical fact to be extracted from a portion of text documenting a patient encounter. At least two of the alternative hypotheses may be selected, and the selected hypotheses may be presented to a user documenting the patient encounter.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: January 5, 2021
    Assignee: Nuance Communications, Inc.
    Inventor: Girija Yegnanarayanan
  • Patent number: 10854190
    Abstract: Various embodiments of the present disclosure evaluate transcription accuracy. In some implementations, the system normalizes a first transcription of an audio file and a baseline transcription of the audio file. The baseline transcription can be used as an accurate transcription of the audio file. The system can further determine an error rate of the first transcription by aligning each portion of the first transcription with the portion of the baseline transcription, and assigning a label to each portion based on a comparison of the portion of the first transcription with the portion of the baseline transcription.
    Type: Grant
    Filed: June 5, 2017
    Date of Patent: December 1, 2020
    Assignee: UNITED SERVICES AUTOMOBILE ASSOCIATION (USAA)
    Inventors: Michael J. Szentes, Carlos Chavez, Robert E. Lewis, Nicholas S. Walker
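The alignment-based error rate described here is essentially a word error rate (WER) computation. A sketch via dynamic-programming alignment; the patent additionally labels each aligned portion, which this minimal version folds into a single edit count:

```python
def word_error_rate(baseline, hypothesis):
    """Align a candidate transcription against a baseline transcription
    and report edits per baseline word. Each cell choice corresponds to
    a label: match/substitution, insertion, or deletion."""
    b, h = len(baseline), len(hypothesis)
    d = [[0] * (h + 1) for _ in range(b + 1)]
    for i in range(b + 1):
        d[i][0] = i          # all deletions
    for j in range(h + 1):
        d[0][j] = j          # all insertions
    for i in range(1, b + 1):
        for j in range(1, h + 1):
            sub = 0 if baseline[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match / substitution
    return d[b][h] / max(b, 1)
```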
  • Patent number: 10845976
    Abstract: Approaches provide for navigating or otherwise interacting with content in response to input from a user, including voice inputs, device inputs, gesture inputs, among other such inputs such that a user can quickly and easily navigate to different levels of detail of content. This can include, for example, presenting content (e.g., images, multimedia, text, etc.) in a particular layout, and/or highlighting, emphasizing, animating, or otherwise altering in appearance, and/or arrangement of the interface elements used to present the content based on a current level of detail, where the current level of detail can be determined by data selection criteria associated with a magnification level and other such data. As a user interacts with the computing device, for example, by providing a zoom input, values of the selection criteria can be updated, which can be used to filter and/or select content for presentation.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: November 24, 2020
    Assignee: IMMERSIVE SYSTEMS INC.
    Inventors: Jason Simmons, Maksim Galkin
  • Patent number: 10798271
    Abstract: In various embodiments, a subtitle timing application detects timing errors between subtitles and shot changes. In operation, the subtitle timing application determines that a temporal edge associated with a subtitle does not satisfy a timing guideline based on a shot change. The shot change occurs within a sequence of frames of an audiovisual program. The subtitle timing application then determines a new temporal edge that satisfies the timing guideline relative to the shot change. Subsequently, the subtitle timing application causes a modification to a temporal location of the subtitle within the sequence of frames based on the new temporal edge. Advantageously, the modification to the subtitle improves a quality of a viewing experience for a viewer. Notably, by automatically detecting timing errors, the subtitle timing application facilitates proper and efficient re-scheduling of subtitles that are not optimally timed with shot changes.
    Type: Grant
    Filed: January 5, 2018
    Date of Patent: October 6, 2020
    Assignee: NETFLIX, INC.
    Inventors: Murthy Parthasarathi, Andrew Swan, Yadong Wang, Thomas E. Mack
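A toy version of the edge-repair logic follows. The specific guideline encoded here, that an edge within a few frames of a shot change is moved to a fixed offset after it, is an assumption for illustration only, not Netflix's actual timing rule:

```python
def fix_subtitle_edge(edge_frame, shot_changes, min_gap=2, snap_window=6):
    """Return a new temporal edge for a subtitle whose edge violates the
    (assumed) timing guideline relative to a nearby shot change; return
    the original edge if it is already compliant."""
    for sc in shot_changes:
        if abs(edge_frame - sc) < snap_window and edge_frame != sc + min_gap:
            return sc + min_gap  # new edge satisfying the guideline
    return edge_frame
```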
  • Patent number: 10777095
    Abstract: A method to develop pronunciation and intonation proficiency of English using an electronic interface, includes: preparing video bites each having an English language sound clip; preparing a script of the sound clip, wherein the script is partially marked in accordance with a predetermined rule of a pronunciation and intonation rhythm; displaying a circle on a screen of the electronic interface, wherein the circle has an illuminant movably provided along the circle, wherein the circle is serially partitioned into first to fourth quadrants; selectively playing on the screen the sound clip and the script adjacent to the circle; and synchronizing the sound clip to the illuminant in accordance with the predetermined rule, wherein an angular velocity of the illuminant moving along the circle accelerates and decelerates in the first quadrant and substantially remains constant in the second and third quadrants.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: September 15, 2020
    Inventor: Il Sung Bang
  • Patent number: 10712998
    Abstract: There is provided an information processing device to improve communication between a user and a person speaking to the user by specifying speaking motion information indicating a motion of a surrounding person speaking to the user for whom information from the surroundings is auditorily or visually restricted, the information processing device including: a detecting unit configured to detect a speaking motion of a surrounding person speaking to a user using a device that auditorily or visually restricts information from surroundings; and a specifying unit configured to specify speaking motion information indicating the speaking motion on a basis of monitored surrounding information in a case in which the speaking motion is detected.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: July 14, 2020
    Assignee: SONY CORPORATION
    Inventor: Ryouhei Yasuda
  • Patent number: 10685667
    Abstract: In aspects, systems, methods, apparatuses and computer-readable storage media implementing embodiments for mixing audio content based on a plurality of user generated recordings (UGRs) are disclosed. In embodiments, the mixing comprises: receiving a plurality of UGRs, each UGR of the plurality of UGRs comprising at least audio content; determining a correlation between samples of audio content associated with at least two UGRs of the plurality of UGRs; generating one or more clusters comprising samples of the audio content identified as having a relationship based on the determined correlations; synchronizing, for each of the one or more clusters, the samples of the audio content to produce synchronized audio content for each of the one or more clusters, normalizing, for each of the one or more clusters, the synchronized audio content to produce normalized audio content; and mixing, for each of the one or more clusters, the normalized audio content.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: June 16, 2020
    Assignee: FOUNDATION FOR RESEARCH AND TECHNOLOGY—HELLAS (FORTH)
    Inventors: Nikolaos Stefanakis, Athanasios Mouchtaris
  • Patent number: 10672399
    Abstract: Techniques are provided for creating a mapping that maps locations in audio data (e.g., an audio book) to corresponding locations in text data (e.g., an e-book). Techniques are provided for using a mapping between audio data and text data, whether the mapping is created automatically or manually. A mapping may be used for bookmark switching where a bookmark established in one version of a digital work (e.g., e-book) is used to identify a corresponding location with another version of the digital work (e.g., an audio book). Alternatively, the mapping may be used to play audio that corresponds to text selected by a user. Alternatively, the mapping may be used to automatically highlight text in response to audio that corresponds to the text being played. Alternatively, the mapping may be used to determine where an annotation created in one media context (e.g., audio) will be consumed in another media context.
    Type: Grant
    Filed: October 6, 2011
    Date of Patent: June 2, 2020
    Assignee: APPLE INC.
    Inventors: Alan C. Cannistraro, Gregory S. Robbin, Casey M. Dougherty, Raymond Walsh, Melissa Breglio Hajj
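Bookmark switching reduces to a lookup in the audio-to-text mapping. With sorted (audio_seconds, text_offset) anchor pairs (a structure assumed here for illustration), a binary search finds the corresponding location in the other version of the digital work:

```python
import bisect

def corresponding_text_offset(mapping, audio_time):
    """Given anchor pairs (audio_seconds, text_offset) sorted by audio
    time, return the text offset for an audio-book bookmark, i.e. the
    offset of the latest anchor at or before the bookmark."""
    times = [t for t, _ in mapping]
    i = bisect.bisect_right(times, audio_time) - 1
    return mapping[max(i, 0)][1]
```

The reverse direction (text offset to audio time) works the same way with the pair elements swapped, which covers the highlight-while-playing and annotation use cases.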
  • Patent number: 10656901
    Abstract: A media item that was presented in media players of computing devices at a first audio level may be identified, each of the media players having a corresponding user of a first set of users. A second audio level value corresponding to an amplitude setting selected by a user of the set of users during playback of the media item may be determined for each of the media players. An audio level difference (ALD) value for each of the media players may be determined based on a corresponding second audio level value. A second audio level value for an amplitude setting to be provided for the media item in response to a request of a second user to play the media item may be determined based on determined ALD values.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: May 19, 2020
    Assignee: GOOGLE LLC
    Inventor: Christian Weitenberner
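One plausible aggregation of the per-user ALD values; the abstract does not fix how they are combined, so the averaging and clamping below are illustrative assumptions:

```python
def suggest_level(presented_level, user_adjustments):
    """Suggest a playback amplitude for a media item from earlier users'
    adjustments. Each (presented, selected) pair yields an audio level
    difference (ALD); the suggestion shifts the presented level by the
    average ALD, clamped to an assumed 0-100 amplitude scale."""
    if not user_adjustments:
        return presented_level
    alds = [selected - presented for presented, selected in user_adjustments]
    avg_ald = sum(alds) / len(alds)
    return max(0.0, min(100.0, presented_level + avg_ald))
```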
  • Patent number: 10579326
    Abstract: A control device is provided which mixes and records two types of audio signals processed under standards different from each other; in particular, an audio signal of ASIO standard and an audio signal of WDM standard. An audio interface is connected to a computer, and an audio signal is input to the computer. A mixer module of the computer mixes an audio signal which is effect-processed by an ASIO application and an audio signal reproduced by a WDM application, and outputs the mixed audio signal to the audio interface and to the WDM application for sound recording. The user operates a screen displayed on an operation panel to switch between presence and absence of effect process and presence and absence of mixing.
    Type: Grant
    Filed: January 18, 2017
    Date of Patent: March 3, 2020
    Assignee: TEAC Corporation
    Inventor: Kaname Hayasaka
  • Patent number: 10575119
    Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position sound is coming from, intensity of the sound, focus of the sound, and color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
    Type: Grant
    Filed: December 12, 2018
    Date of Patent: February 25, 2020
    Assignee: Adobe Inc.
    Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder
  • Patent number: 10565435
    Abstract: A method for determining a video-related emotion and a method of generating data for learning video-related emotions include separating an input video into a video stream and an audio stream; analyzing the audio stream to detect a music section; extracting at least one video clip matching the music section; extracting emotion information from the music section; tagging the video clip with the extracted emotion information and outputting the video clip; learning video-related emotions by using the at least one video clip tagged with the emotion information to generate a video-related emotion classification model; and determining an emotion related to an input query video by using the video-related emotion classification model to provide the emotion.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: February 18, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jee Hyun Park, Jung Hyun Kim, Yong Seok Seo, Won Young Yoo, Dong Hyuck Im
  • Patent number: 10489450
    Abstract: Implementations generally relate to selecting soundtracks. In some implementations, a method includes determining one or more sound mood attributes of one or more soundtracks, where the one or more sound mood attributes are based on one or more sound characteristics. The method further includes determining one or more visual mood attributes of one or more visual media items, where the one or more visual mood attributes are based on one or more visual characteristics. The method further includes selecting one or more of the soundtracks based on the one or more sound mood attributes and the one or more visual mood attributes. The method further includes generating an association among the one or more selected soundtracks and the one or more visual media items, wherein the association enables the one or more selected soundtracks to be played while the one or more visual media items are displayed.
    Type: Grant
    Filed: February 26, 2015
    Date of Patent: November 26, 2019
    Assignee: Google LLC
    Inventor: Ryan James Lothian
  • Patent number: 10417279
    Abstract: Systems and methods are provided for curating playlists of content for provisioning and presenting to users a seamless cross fade experience from one piece of content to the next within the playlist. In embodiments, information that identifies portions of content without audio or video data may be maintained. Further, metadata may be generated that identifies cross fade points for the content in response to receiving input from a user device of a user. In an embodiment, each cross fade point may identify a time window of the content for interleaving with other content. In accordance with at least one embodiment, the metadata may be transmitted based at least in part on a selection of the metadata for the content.
    Type: Grant
    Filed: December 7, 2015
    Date of Patent: September 17, 2019
    Assignee: Amazon Technologies, Inc.
    Inventor: Jonathan Beech
  • Patent number: 10397525
    Abstract: In a pilotless flying object detection system, a masking area setter sets a masking area to be excluded from detection of a pilotless flying object which appears in a captured image of a monitoring area, based on audio collected by a microphone array. An object detector detects the pilotless flying object based on the audio collected by the microphone array and the masking area set by the masking area setter. An output controller superimposes sound source visual information, which indicates the volume of a sound at a sound source position, at the sound source position of the pilotless flying object in the captured image and displays the result on a first monitor in a case where the pilotless flying object is detected in an area other than the masking area.
    Type: Grant
    Filed: March 9, 2017
    Date of Patent: August 27, 2019
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Hiroyuki Matsumoto, Shintaro Yoshikuni, Masanari Miyamoto
  • Patent number: 10332506
    Abstract: Disclosed are systems and methods for improving interactions with and between computers in content searching, generating, hosting and/or providing systems supported by or configured with personal computing devices, servers and/or platforms. The systems interact to identify and retrieve data within or across platforms, which can be used to improve the quality of data used in processing interactions between or among processors in such systems. The disclosed systems and methods provide systems and methods for automatic creation of a formatted, readable transcript of multimedia content, which is derived, extracted, determined, or otherwise identified from the multimedia content. The formatted, readable transcript can be utilized to increase accuracy and efficiency in search engine optimization, as well as identification of relevant digital content available for communication to a user.
    Type: Grant
    Filed: September 2, 2015
    Date of Patent: June 25, 2019
    Assignee: OATH INC.
    Inventors: Aasish Pappu, Amanda Stent
  • Patent number: 10334142
    Abstract: In some examples, a system receives a color sample comprising a color measurement of a proper subset of a gamut of colors printable by a printer, and computes a forward transform value and a reverse transform value based on a color profile calculated from a profiling chart comprising a set of estimated color samples calculated based on the received color sample, the forward and reverse transform values to convert between colorimetry values and color values for the printer. The system provides an adjusted color profile for the printer based on an original color profile for the printer and the computing, wherein the original color profile for the printer is associated with a substrate.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: June 25, 2019
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Peter Morovic, Jan Morovic
  • Patent number: 10318637
    Abstract: An editing method facilitates the task of adding background sound to speech-containing audio data so as to augment the listening experience. The editing method is executed by a processor in a computing device and comprises obtaining characterization data that characterizes time segments in the audio data by at least one of topic and sentiment; deriving, for a respective time segment in the audio data and based on the characterization data, a desired property of a background sound to be added to the audio data in the respective time segment, and providing the desired property for the respective time segment so as to enable the audio data to be combined, within the respective time segment, with background sound having the desired property. The background sound may be selected and added automatically or by manual user intervention.
    Type: Grant
    Filed: May 13, 2017
    Date of Patent: June 11, 2019
    Assignee: SONY MOBILE COMMUNICATIONS INC.
    Inventor: Ola Thörn
  • Patent number: 10194199
    Abstract: Methods, systems, and computer program products that automatically categorize and/or assign ratings to content (video and audio content) uploaded by individuals who want to broadcast the content to others via a communications network, such as an IPTV network, are provided. When an individual uploads content to a network, a network service automatically extracts an audio stream from the uploaded content. Words in the extracted audio stream are identified. For each identified word, a preexisting library of selected words is queried to determine if a match exists between words in the library and words in the extracted audio stream. The selected words in the library are associated with a particular content category or content rating. If a match exists between an identified word and a word in the library, the uploaded content is assigned a content category and/or rating associated with the matched word.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: January 29, 2019
    Assignee: AT&T Intellectual Property I, L.P.
    Inventor: Ke Yu
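The word-matching step this abstract describes can be sketched as a lookup against a preexisting library that maps selected words to ratings (a minimal sketch; the rating scale and library contents are hypothetical, not taken from the patent):

```python
# Hypothetical rating scale, ordered least to most restrictive.
RATING_ORDER = ["G", "PG", "PG-13", "R"]

def rate_content(transcript_words, word_library):
    """Return the most restrictive rating matched in the transcript.

    word_library: dict mapping a selected word -> rating string.
    """
    rating = "G"  # default when no library word matches
    for word in transcript_words:
        matched = word_library.get(word.lower())
        if matched and RATING_ORDER.index(matched) > RATING_ORDER.index(rating):
            rating = matched
    return rating

words = "this clip contains mild violence".split()
library = {"violence": "PG-13", "gore": "R"}
print(rate_content(words, library))  # -> PG-13
```

The patent's pipeline first extracts the audio stream and identifies words in it; the sketch starts from the already-identified words.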
  • Patent number: 10109286
    Abstract: According to an embodiment, a speech synthesizer includes a source generator, a phase modulator, and a vocal tract filter unit. The source generator generates a source signal by using a fundamental frequency sequence and a pulse signal. The phase modulator modulates, with respect to the source signal generated by the source generator, a phase of the pulse signal at each pitch mark based on audio watermarking information. The vocal tract filter unit generates a speech signal by using a spectrum parameter sequence with respect to the source signal in which the phase of the pulse signal is modulated by the phase modulator.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: October 23, 2018
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Kentaro Tachibana, Takehiko Kagoshima, Masatsune Tamura, Masahiro Morita
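One simple way to illustrate per-pitch-mark phase modulation of a pulse source signal is to flip the pulse polarity (a pi phase shift) when the watermark bit is 1. This is an illustrative simplification under assumed names and shapes, not the patent's actual modulation scheme:

```python
import numpy as np

def modulated_source(signal_len, pitch_marks, pulse, bits):
    """Build an excitation (source) signal: one pulse per pitch mark, with
    the pulse polarity flipped (a pi phase shift) when the corresponding
    audio-watermark bit is 1."""
    source = np.zeros(signal_len)
    for mark, bit in zip(pitch_marks, bits):
        sign = -1.0 if bit else 1.0
        end = min(mark + len(pulse), signal_len)
        source[mark:end] += sign * pulse[: end - mark]
    return source

pulse = np.array([1.0, 0.5])
src = modulated_source(8, [0, 4], pulse, [0, 1])
# src == [1.0, 0.5, 0, 0, -1.0, -0.5, 0, 0]
```

In the patented synthesizer this source signal would then pass through a vocal tract filter driven by the spectrum parameter sequence.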
  • Patent number: 9942748
    Abstract: Embodiments of the present invention provide a service provisioning system and method, a mobile edge application server and support node. The system includes: at least one mobile edge application server (MEAS) and at least one mobile edge application server support function (MEAS-SF), where the MEAS is deployed at an access network side; and the MEAS-SF is deployed at a core network side, connected to one or more MEAS. In the service provisioning system provided in the embodiment, services that are provided by an SP are deployed in the MEAS. When the MEAS can provide the user equipment with a service requested in a service request, the MEAS directly and locally generates service data corresponding to the service request. Therefore, the user equipment directly obtains required service data from an RAN side, which avoids data congestion between an RAN and a CN and saves network resources.
    Type: Grant
    Filed: August 21, 2015
    Date of Patent: April 10, 2018
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Zhiming Zhu, Weihua Liu, Mingrong Cao
  • Patent number: 9841879
    Abstract: A computing device can include a recognition mode interface utilizing graphical elements, such as virtual fireflies, to indicate recognized or identified objects. The fireflies can be animated to move across a display, and the fireflies can create bounding boxes around visual representations of objects as the objects are recognized. In some cases, the object might be of a type that has specific meaning or information to be conveyed to a user. In such cases, the fireflies might be displayed with a particular size, shape, or color to convey that information. The fireflies also can be configured to form shapes or patterns in order to convey other types of information to a user, such as where audio is being recognized, light is sufficient for image capture, and the like. Other types of information can be conveyed as well via altering characteristics of the fireflies.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: December 12, 2017
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Timothy Thomas Gray, Forrest Elliott
  • Patent number: 9838731
    Abstract: Video clips may be automatically edited to be synchronized for accompaniment by audio tracks. A preliminary version of a video clip may be made up from stored video content. Occurrences of video events within the preliminary version may be determined. A first audio track may include audio event markers. A first revised version of the video clip may be synchronized so that moments within the video clip corresponding to occurrences of video events are aligned with moments within the first audio track corresponding to audio event markers. Presentation of an audio mixing option may be effectuated on a graphical user interface of a video application for selection by a user. The audio mixing option may define volume at which the first audio track is played as accompaniment for the video clip.
    Type: Grant
    Filed: April 7, 2016
    Date of Patent: December 5, 2017
    Assignee: GoPro, Inc.
    Inventor: Joven Matias
  • Patent number: 9766854
    Abstract: This disclosure concerns the playback of audio content, e.g. in the form of music. More particularly, the disclosure concerns the playback of streamed audio. In one example embodiment, there is a method of operating an electronic device for dynamically controlling a playlist including one or several audio items. A request to adjust an energy level (e.g. a tempo) associated with the playlist is received. In response to receiving this request, the playlist is adjusted in accordance with the requested energy level (e.g., the tempo).
    Type: Grant
    Filed: August 28, 2015
    Date of Patent: September 19, 2017
    Assignee: SPOTIFY AB
    Inventors: Souheil Medaghri Alaoui, Miles Lennon, Kieran Del Pasqua
  • Patent number: 9736337
    Abstract: One example relates to a print system for adjusting a color profile. The print system can comprise a system comprising a memory for storing computer executable instructions and a processing unit for accessing the memory and executing the computer executable instructions. The computer executable instructions can comprise a profile transformer to receive a color sample comprising a color measurement of a proper subset of a gamut of colors printable with ink deposited at a printer. The profile transformer can also provide an adjusted color profile for the printer based on (i) an original color profile associated with a substrate for the printer and (ii) the color sample.
    Type: Grant
    Filed: June 30, 2011
    Date of Patent: August 15, 2017
    Assignee: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
    Inventors: Peter Morovic, Jan Morovic
  • Patent number: 9674051
    Abstract: An address generation section (111) receives an acquisition request including a file name of a target content and a device ID of a device (113) as a place where the target content is stored, from an application execution section (112) that executes a viewing application. Then, the address generation section (111) specifies the current file path and IP address of the target content in content information and device information each managed by a management section (107) on the basis of the received acquisition request, and generates an acquisition address for acquiring the target content from the device (113) on the basis of the specified file path and IP address.
    Type: Grant
    Filed: October 16, 2013
    Date of Patent: June 6, 2017
    Assignee: Panasonic Intellectual Property Corporation of America
    Inventor: Shigehiro Iida
  • Patent number: 9626956
    Abstract: A method and a device that preprocess a speech signal are disclosed, which include extracting at least one frame corresponding to a speech recognition range from frames included in a speech signal, generating a supplementary frame to supplement speech recognition with respect to the speech recognition range based on the at least one extracted frame, and outputting a preprocessed speech signal including the supplementary frame along with the frames of the speech signal.
    Type: Grant
    Filed: April 7, 2015
    Date of Patent: April 18, 2017
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Hodong Lee
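The preprocessing flow in this abstract (extract frames in the recognition range, generate a supplementary frame from them, output both) can be sketched as follows. The selection predicate and the use of a mean frame are assumptions for illustration; the patent does not specify them:

```python
import numpy as np

def add_supplementary_frame(frames, in_recognition_range):
    """Extract the frames that fall in the speech recognition range and
    append one supplementary frame (here simply their mean) so the output
    signal reinforces that range."""
    picked = [f for f in frames if in_recognition_range(f)]
    if not picked:
        return list(frames)
    return list(frames) + [np.mean(picked, axis=0)]

frames = [np.array([1.0, 1.0]), np.array([3.0, 3.0]), np.array([0.1, 0.1])]
out = add_supplementary_frame(frames, lambda f: f.mean() >= 1.0)
# out has 4 frames; the last is the mean of the first two: [2.0, 2.0]
```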
  • Patent number: 9558746
    Abstract: This invention describes methods for implementing human speech recognition. The methods described here use sub-events, sounds between spaces (typically a fully spoken word), which are then compared with a library of sub-events. Each sub-event is packaged with its own speech recognition function as an individual unit. This invention illustrates how this model can be used as a Large Vocabulary Speech Recognition System.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: January 31, 2017
    Inventor: Darrell Poirier
  • Patent number: 9544649
    Abstract: A device and method are presently disclosed. The computer-implemented method includes, at an electronic device with a touch-sensitive display: displaying a still image on the touch-sensitive display; while displaying the still image, detecting a user's finger contact with the touch-sensitive display; and, in response to detecting the user's finger contact, video recording the still image.
    Type: Grant
    Filed: February 25, 2014
    Date of Patent: January 10, 2017
    Assignee: Aniya's Production Company
    Inventors: Damon Wayans, James Cahall
  • Patent number: 9483459
    Abstract: A system is configured to receive a first string corresponding to an interpretation of a natural-language user voice entry; provide a representation of the first string as feedback to the natural-language user voice entry; receive, based on the feedback, a second string corresponding to a natural-language corrective user entry, where the natural-language corrective user entry may correspond to a correction to the natural-language user voice entry; parse the second string into one or more tokens; determine at least one corrective instruction from the one or more tokens of the second string; generate, from at least a portion of each of the first and second strings and based on the at least one corrective instruction, candidate corrected user entries; select a corrected user entry from the candidate corrected user entries; and output the selected, corrected user entry.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: November 1, 2016
    Assignee: Google Inc.
    Inventors: Michael D Riley, Johan Schalkwyk, Cyril Georges Luc Allauzen, Ciprian Ioan Chelba, Edward Oscar Benson
  • Patent number: 9472177
    Abstract: A music application guides a user with some musical experience through the steps of creating and editing a musical enhancement file that enhances and plays in synchronicity with an audio signal of an original artist's recorded performance. This enables others, perhaps with lesser musical ability than the original artist, to play along with the original artist by following melodic, chordal, rhythmic, and verbal prompts. The music application accounts for differences in the timing of the performance from a standard tempo by guiding the user through the process of creating a tempo map for the performance and by associating the tempo map with MIDI information of the enhancement file. Enhancements may contain MIDI information, audio signal information, and/or video signal information which may be played back in synchronicity with the recorded performance to provide an aural and visual aid to others playing along who may have less musical experience.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: October 18, 2016
    Assignee: Family Systems, Ltd.
    Inventors: Brian Reynolds, William B. Hudak
  • Patent number: 9449611
    Abstract: A computer readable medium containing computer executable instructions is described for extracting a reference representation from a mixture representation that comprises the reference representation and a residual representation wherein the reference representation, the mixture representation, and the residual representation are representations of collections of acoustical waves stored on computer readable media.
    Type: Grant
    Filed: October 1, 2012
    Date of Patent: September 20, 2016
    Assignee: AUDIONAMIX
    Inventors: Pierre Leveau, Xabier Jaureguiberry
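A common way to extract a reference representation from a mixture that is the sum of a reference and a residual is a Wiener-style soft mask over time-frequency magnitudes. This is a generic sketch of that idea, not AUDIONAMIX's claimed method:

```python
import numpy as np

def extract_reference(mix_mag, ref_est, res_est, eps=1e-12):
    """Wiener-style soft mask: each time-frequency bin of the mixture
    magnitude is attributed to the reference in proportion to the
    reference's estimated share of the energy in that bin."""
    mask = ref_est**2 / (ref_est**2 + res_est**2 + eps)
    return mask * mix_mag

mix = np.array([2.0, 2.0])
ref = np.array([1.0, 0.0])   # rough per-bin reference estimates
res = np.array([1.0, 2.0])   # rough per-bin residual estimates
out = extract_reference(mix, ref, res)
# out ~ [1.0, 0.0]: equal energy splits bin 0; bin 1 is all residual
```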
  • Patent number: 9423944
    Abstract: A method for adjusting the sound volume of media clips using volume adjuster lines is provided. The volume adjuster lines are individually set for each clip based on the intrinsic, or absolute, volume values of the clip. In some embodiments, the volume adjuster lines are set for each clip based on the peak value or a calculated loudness equivalent of the clip. A user can move the volume adjuster line to set the absolute sound level of a clip. The volume adjuster lines can be hidden in some embodiments. In these embodiments, dragging on any portion of a clip is treated as dragging on the volume adjuster line. Some embodiments provide a deformable volume adjuster line, or curve. In these embodiments, a single audio clip can have several different volume adjuster lines for different sections of the clip where the volume adjuster line for each section is individually adjustable.
    Type: Grant
    Filed: September 6, 2011
    Date of Patent: August 23, 2016
    Assignee: APPLE INC.
    Inventor: Aaron M. Eppolito
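Setting a clip's absolute sound level from its intrinsic peak, as this abstract describes, amounts to computing a gain that moves the clip's peak to the level the user dragged the adjuster line to. A minimal sketch (function and parameter names are hypothetical):

```python
import numpy as np

def adjuster_gain(samples, target_peak_db):
    """Linear gain that moves the clip's intrinsic (absolute) peak to the
    level set by dragging the volume adjuster line, in dBFS."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return 1.0  # silent clip: leave unchanged
    target_peak = 10.0 ** (target_peak_db / 20.0)
    return target_peak / peak

clip = np.array([0.0, 0.5, -0.25])
g = adjuster_gain(clip, -6.0)  # drag the adjuster line to -6 dBFS
# after applying g, the clip's peak is 10**(-6/20), about 0.501
```

A deformable adjuster curve, as in some embodiments, would compute such a gain per section of the clip rather than once per clip.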
  • Patent number: 9418652
    Abstract: Systems and methods for modifying a computer-based speech recognition system. A speech utterance is processed with the computer-based speech recognition system using a set of internal representations, which may comprise parameters for recognizing speech in a speech utterance, such as parameters of an acoustic model and/or a language model. The computer-based speech recognition system may perform a first task in response to the processed speech utterance. The utterance may also be provided to a human who performs a second task based on the utterance. Data indicative of the first task, performed by the computer system, is compared to data indicative of a second task, performed by the human in response to the speech utterance. Based on the comparison, the set of internal representations may be updated or modified to improve the speech recognition performance and capabilities of the speech recognition system.
    Type: Grant
    Filed: January 30, 2015
    Date of Patent: August 16, 2016
    Assignee: Next IT Corporation
    Inventor: Charles C Wooters