Sound Editing Patents (Class 704/278)
  • Patent number: 11954452
    Abstract: A translation method includes: selecting a source word from a source sentence; generating mapping information including location information of the selected source word mapped to the selected source word in the source sentence; and correcting a target word, which is generated by translating the source sentence, based on location information of a feature value of the target word and the mapping information.
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: April 9, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jihyun Lee, Hwidong Na, Hoshik Lee
  • Patent number: 11941000
    Abstract: An embodiment includes processing a dataset to generate a set of feature vectors that include a first feature vector corresponding to a first concept within a user's areas of interest and a second feature vector corresponding to a second concept within the user's areas of study. The embodiment identifies clusters of the feature vectors and identifies key features that most contribute to influencing the clustering algorithm. The embodiment selects the first feature vector in response to a user query, and then selects the second feature vector based on an overlap between key features of the first and second feature vectors and a degree of dissimilarity between the first and second concepts. The embodiment outputs a query response that includes the second concept. The embodiment also determines an effectiveness value based on sensor data indicative of a user action responsive to the outputting of the response to the query.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: March 26, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Shikhar Kwatra, Robert E. Loredo, Frederik Frank Flöther, Stefan Ravizza
  • Patent number: 11785293
    Abstract: A disclosed example apparatus includes at least one memory, instructions in the apparatus, and processor circuitry to execute the instructions to store a logged media impression for a media identifier representative of media accessed via the Internet, cause transmission of a device identifier or a user identifier to a database proprietor when a user has not elected to not participate in third-party tracking corresponding to online activities, access user information from the database proprietor based on the device identifier or the user identifier, log a demographic impression based on the media impression and the user information, and generate an impression report corresponding to the media based on the demographic impression.
    Type: Grant
    Filed: March 7, 2022
    Date of Patent: October 10, 2023
    Assignee: The Nielsen Company (US), LLC
    Inventors: Steven J. Splaine, Adrian Swift
  • Patent number: 11776528
    Abstract: This application relates to a method of synthesizing speech whose speed and pitch are changed. In one aspect, a spectrogram may be generated by performing a short-time Fourier transformation on a first speech signal based on a first hop length and a first window length, and speech signals of sections having a second window length may be extracted at the interval of a second hop length from the spectrogram. A ratio between the first hop length and the second hop length may be set equal to the value of a playback rate, and a ratio between the first window length and the second window length may be set equal to the value of a pitch change rate, thereby generating a second speech signal whose speed and pitch are changed.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: October 3, 2023
    Assignee: Xinapse Co., Ltd.
    Inventors: Jinbeom Kang, Dong Won Joo, Yongwook Nam
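    Illustrative sketch (not the patented implementation): the hop/window-ratio relationship in the abstract above can be mimicked with an ordinary STFT/ISTFT pair. The use of librosa, the synthetic test tone, and the specific hop and window values are assumptions for illustration only, and reconstruction quality is ignored.
```python
# Minimal sketch: analyze with one hop/window pair, resynthesize with another,
# where hop1/hop2 equals the playback rate and win1/win2 equals the pitch-change
# rate, loosely following the abstract above.
import numpy as np
import librosa

def change_speed_and_pitch(signal, playback_rate=1.25, pitch_rate=1.1,
                           hop1=256, win1=1024):
    spec = librosa.stft(signal, n_fft=win1, hop_length=hop1, win_length=win1)
    hop2 = int(round(hop1 / playback_rate))   # hop1 / hop2 ~= playback rate
    win2 = int(round(win1 / pitch_rate))      # win1 / win2 ~= pitch-change rate
    # Resynthesizing with the second hop/window lengths changes duration and pitch.
    return librosa.istft(spec, hop_length=hop2, win_length=win2)

sr = 22050
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 220.0 * t).astype(np.float32)  # placeholder signal
faster_higher = change_speed_and_pitch(tone)
```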
  • Patent number: 11763936
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for speech recognition. One method includes obtaining an input acoustic sequence, the input acoustic sequence representing one or more utterances; processing the input acoustic sequence using a speech recognition model to generate a transcription of the input acoustic sequence, wherein the speech recognition model comprises a domain-specific language model; and providing the generated transcription of the input acoustic sequence as input to a domain-specific predictive model to generate structured text content that is derived from the transcription of the input acoustic sequence.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: September 19, 2023
    Assignee: Google LLC
    Inventors: Christopher S. Co, Navdeep Jaitly, Lily Hao Yi Peng, Katherine Irene Chou, Ananth Sankar
  • Patent number: 11714560
    Abstract: Systems and processes for managing memory compression security to mitigate security risks related to compressed memory page access are disclosed herein. A system for managing memory compression security includes a system memory and a memory manager. The system memory includes an uncompressed region configured to store a plurality of uncompressed memory pages and a compressed region configured to store a plurality of compressed memory pages. The memory manager identifies a memory page in the uncompressed region of the system memory as a candidate for compression and estimates a decompression time for a compressed version of the identified memory page. The memory manager determines whether the estimated decompression time is less than a constant decompression time. Based on a determination that the estimated decompression time is less than the constant decompression time, the memory manager compresses the memory page and writes the compressed memory page to the compressed region.
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: August 1, 2023
    Assignee: Amazon Technologies, Inc.
    Inventor: Martin Thomas Pohlack
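    Illustrative sketch (an assumed analogue, not the patented memory manager): admit a page into the compressed region only when its estimated decompression time fits under a constant budget. The zlib codec, the 2 ms budget, and the dict standing in for the compressed region are placeholders.
```python
# Sketch of the compress-only-if-decompression-fits-a-constant-budget policy from
# the abstract above; the time estimate and constant budget are illustrative guesses.
import time
import zlib

CONSTANT_DECOMPRESS_BUDGET_S = 0.002  # assumed fixed budget per page

def estimate_decompression_time(compressed_page: bytes) -> float:
    start = time.perf_counter()
    zlib.decompress(compressed_page)
    return time.perf_counter() - start

def maybe_compress_page(page: bytes, compressed_region: dict, page_id: int) -> bool:
    candidate = zlib.compress(page)
    # Only admit pages whose decompression comfortably fits the constant budget,
    # so real decompressions can later be padded to that constant time.
    if estimate_decompression_time(candidate) < CONSTANT_DECOMPRESS_BUDGET_S:
        compressed_region[page_id] = candidate
        return True
    return False

compressed_region = {}
page = bytes(4096)  # a zero-filled 4 KiB page compresses well
maybe_compress_page(page, compressed_region, page_id=0)
```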
  • Patent number: 11696068
    Abstract: A microphone may comprise a microphone element for detecting sound, and a digital signal processor configured to process a first audio signal that is based on the sound in accordance with a selected one of a plurality of digital signal processing (DSP) modes. Each of the DSP modes may be for processing the first audio signal in a different way. For example, the DSP modes may account for distance of the person speaking (e.g., near versus far) and/or desired tone (e.g., darker, neutral, or bright tone). At least some of the modes may have, for example, an automatic level control setting to provide a more consistent volume as the user changes their distance from the microphone or changes their speaking level, and that may be associated with particular default (and/or adjustable) values of the parameters attack, hold, decay, maximum gain, and/or target gain, each depending upon which DSP mode is being applied.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: July 4, 2023
    Assignee: Shure Acquisition Holdings, Inc.
    Inventors: Timothy William Balgemann, Soren Christian Pedersen, James Michael Pessin, Brent Robert Shumard, Lena Lins Sutter, Ryan Jerold Perkofski
  • Patent number: 11688390
    Abstract: Methods and systems are provided for assisting operation of a vehicle using speech recognition. One method involves identifying a user-configured speech recognition performance setting value selected from among a plurality of speech recognition performance setting values, selecting a speech recognition model configuration corresponding to the user-configured speech recognition performance setting value from among a plurality of speech recognition model configurations, where each speech recognition model configuration of the plurality of speech recognition model configurations corresponds to a respective one of the plurality of speech recognition performance setting values, and recognizing an audio input as an input state using the speech recognition model configuration corresponding to the user-configured speech recognition performance setting value.
    Type: Grant
    Filed: September 20, 2021
    Date of Patent: June 27, 2023
    Assignee: HONEYWELL INTERNATIONAL INC.
    Inventors: Naveen Venkatesh Prasad Nama, Vasantha Paulraj, Robert De Mers
  • Patent number: 11635936
    Abstract: Techniques are disclosed relating to implementing audio techniques for real-time audio generation. For example, a music generator system may generate new music content from playback music content based on different parameter representations of an audio signal. In some cases, an audio signal can be represented by both a graph of the signal (e.g., an audio signal graph) relative to time and a graph of the signal relative to beats (e.g., a signal graph). The signal graph is invariant to tempo, which allows for tempo invariant modification of audio parameters of the music content in addition to tempo variant modifications based on the audio signal graph.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: April 25, 2023
    Assignee: AiMi Inc.
    Inventors: Edward Balassanian, Patrick E. Hutchings, Toby Gifford
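    Illustrative sketch (not from the patent text): a parameter curve keyed to beats is tempo-invariant, while its wall-clock timing changes with tempo. The tempo values and beat positions are placeholders.
```python
# Minimal sketch of the time-domain vs. beat-domain distinction in the abstract
# above: positions defined in beats land on the same musical locations at any tempo,
# while their wall-clock times scale with the tempo.
def seconds_per_beat(tempo_bpm):
    return 60.0 / tempo_bpm

def beat_to_time(beat, tempo_bpm):
    return beat * seconds_per_beat(tempo_bpm)

# A volume ramp defined over beats 0..8 stays musically aligned regardless of tempo;
# only its duration in seconds changes.
for tempo in (90, 120):
    print(tempo, [round(beat_to_time(b, tempo), 2) for b in range(0, 9, 2)])
```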
  • Patent number: 11599772
    Abstract: Guided character string alteration can be performed by obtaining an original character string and a plurality of altered character strings, traversing the original character string with a first Long Short Term Memory (LSTM) network to generate, for each character of the original character string, a hidden state of a partial original character string up to that character, and applying, during the traversing, an alteration learning process to each hidden state of a partial original character string to produce an alteration function for relating partial original character strings to partial altered character strings.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: March 7, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Pablo Loyola, Kugamoorthy Gajananan, Yuji Watanabe, Fumiko Akiyama
  • Patent number: 11425444
    Abstract: A content display system includes a first acquisition processor that acquires first content registered on a management terminal; a first display processor that causes the display device to display the first content acquired by the first acquisition processor; a second acquisition processor that acquires second content based on an operation on a portable terminal by a user when the first content is displayed on the display device; and a second display processor that causes the display device to display the second content acquired by the second acquisition processor.
    Type: Grant
    Filed: October 27, 2021
    Date of Patent: August 23, 2022
    Assignee: SHARP KABUSHIKI KAISHA
    Inventors: Kaname Matsui, Masafumi Okigami
  • Patent number: 11315546
    Abstract: Disclosed are systems and methods for improving interactions with and between computers in content searching, generating, hosting and/or providing systems supported by or configured with personal computing devices, servers and/or platforms. The systems interact to identify and retrieve data within or across platforms, which can be used to improve the quality of data used in processing interactions between or among processors in such systems. The disclosed systems and methods provide for the automatic creation of a formatted, readable transcript of multimedia content, which is derived, extracted, determined, or otherwise identified from the multimedia content. The formatted, readable transcript can be utilized to increase accuracy and efficiency in search engine optimization, as well as identification of relevant digital content available for communication to a user.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: April 26, 2022
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Aasish Pappu, Amanda Stent
  • Patent number: 11295783
    Abstract: Methods, apparatus, and systems for automatically producing a video program in accordance with a script are provided. Various media assets are recorded and/or stored in a content database, together with metadata relating to each of the media assets. Each media asset is tagged with a unique content ID, the unique content ID associating the metadata with the media asset. The media assets are then indexed. Text from a script is then analyzed using natural language processing to locate one or more relevant indexed media assets. The located one or more media assets are assembled into a video program in accordance with the script.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: April 5, 2022
    Assignee: TVU Networks Corporation
    Inventors: Paul Shen, Christopher Bell, Matthew Richard McEwen, Justin Chen
  • Patent number: 11238888
    Abstract: The disclosed computer-implemented method may include obtaining an audio sample from a content source, inputting the obtained audio sample into a trained machine learning model, obtaining the output of the trained machine learning model, wherein the output is a profile of an environment in which the input audio sample was recorded, obtaining an acoustic impulse response corresponding to the profile of the environment in which the input audio sample was recorded, obtaining a second audio sample, processing the obtained acoustic impulse response with the second audio sample, and inserting a result of processing the obtained acoustic impulse response and the second audio sample into an audio track. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: February 1, 2022
    Assignee: Netflix, Inc.
    Inventors: Yadong Wang, Shilpa Jois Rao, Murthy Parthasarathi, Kyle Tacke
  • Patent number: 11227638
    Abstract: The present invention discloses a method and system for cutting video using video content. The method comprises: acquiring recorded video produced by a user's recording operation; extracting features of the recorded audio in the recorded video and judging whether the recorded audio is damaged; if not, extracting human voice data from the recorded audio after background sound has been filtered out, intercepting the video segment corresponding to effective human voice, and displaying that segment as the clip video; and if so, extracting image feature data of the person's mouth shape and movements from the recorded video after image processing, fitting the image feature data to the human voice data from which background sound has been filtered out, and displaying the video segment with the highest fitting degree as the clip video.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: January 18, 2022
    Assignee: Sunday Morning Technology (Guangzhou) Co., Ltd.
    Inventors: Qianya Lin, Tian Xia, RemyYiYang Ho, Zhenli Xie, Pinlin Chen, Rongchan Liu
  • Patent number: 11210322
    Abstract: Embodiments of the present disclosure relate to a method and apparatus for reducing storage space of a parameter table. The method may include: storing the parameter table in a lookup table system configured to compute an output value of a non-linear function according to an input value of the non-linear function, the parameter table including only an index value associated with an input value on one side of a median in a domain of the non-linear function and a parameter value corresponding to the index value; determining, by using a corresponding relationship between the index value associated with the input value on one side and the parameter value corresponding to the index value, a parameter value corresponding to an index value associated with an input value on the other side; and computing the output value by using the input value on the other side and the determined corresponding parameter value.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: December 28, 2021
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Huimin Li, Jian Ouyang
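    Illustrative sketch (not the patented lookup-table system): the sigmoid satisfies sigmoid(-x) = 1 - sigmoid(x), so a table stored only for inputs on one side of the domain's median (here x >= 0) can serve the other side by mirroring. The grid step and range are arbitrary.
```python
# Sketch of halving a nonlinear-function parameter table by storing one side of the
# domain's median and deriving the other side, as the abstract above describes.
import numpy as np

STEP = 0.01
GRID = np.arange(0.0, 8.0 + STEP, STEP)          # only x >= 0 is stored
TABLE = 1.0 / (1.0 + np.exp(-GRID))              # half-sized parameter table

def sigmoid_lookup(x: float) -> float:
    idx = min(int(round(abs(x) / STEP)), len(TABLE) - 1)
    value = TABLE[idx]
    # Mirror the stored side onto the other side of the median (x = 0).
    return value if x >= 0 else 1.0 - value

print(sigmoid_lookup(1.5), sigmoid_lookup(-1.5))  # symmetric pair around 0.5
```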
  • Patent number: 11157150
    Abstract: The present invention can include electronic devices having variable input/output interfaces that can allow a user to interact with the devices with greater efficiency and in a more ergonomic manner. An electronic device of the present invention can display icons associated with user-programmable parameters of a media file. By interacting with the icons, a user can change the user-programmable parameters during playback of the media file. Changes to the user-programmable parameters can affect playback of the remainder of the media file. An electronic device of the present invention also can automatically re-orient images shown on a display and re-configure user input components based on the orientation of the electronic device.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: October 26, 2021
    Assignee: Apple Inc.
    Inventors: Glenn Gregory Gilley, Sarah A. Brody, Randall Hayes Ubillos, Mihnea Calin Pacurariu
  • Patent number: 11152007
    Abstract: Embodiments of a method and device for matching a speech with a text, and a computer-readable storage medium are provided. The method can include: acquiring a speech identification text by identifying a received speech signal; comparing the speech identification text with multiple candidate texts in a first matching mode to determine a first matching text; and comparing phonetic symbols of the speech identification text with phonetic symbols of the multiple candidate texts in a second matching mode to determine a second matching text, in a case that no first matching text is determined.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: October 19, 2021
    Assignee: Baidu Online Network Technology Co., Ltd.
    Inventor: Yongshuai Lu
  • Patent number: 11138970
    Abstract: The present disclosure relates to a system, method, and computer program for creating a complete transcription of an audio recording from separately transcribed redacted and unredacted words. The system receives an original audio recording and redacts a plurality of words from the original audio recording to obtain a modified audio recording. The modified audio recording is outputted to a first transcription service. Audio clips of the redacted words from the original audio recording are extracted using word-level timestamps for the redacted words. The extracted audio clips are outputted to a second transcription service. The system receives a transcription of the modified audio recording from the first transcription service and transcriptions of the extracted audio clips from the second transcription service.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: October 5, 2021
    Assignee: ASAPP, Inc.
    Inventors: Kyu Jeong Han, Madison Chandler Riley, Tao Ma
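    Illustrative sketch (an assumed analogue of the workflow in the abstract above): silence the redacted words in the original recording using word-level timestamps and collect the cut-out clips separately. The sample rate, placeholder audio, and word list are invented for illustration.
```python
# Sketch of splitting a recording into a redacted version plus per-word clips
# using word-level timestamps, before sending each part to a different service.
import numpy as np

SR = 16000
audio = np.random.randn(SR * 10).astype(np.float32)        # 10 s placeholder recording
redacted_words = [("4111", 2.0, 2.4), ("1111", 2.5, 2.9)]  # (word, start_s, end_s)

def redact(audio, words, sr):
    modified = audio.copy()
    clips = []
    for word, start, end in words:
        a, b = int(start * sr), int(end * sr)
        clips.append((word, audio[a:b].copy()))  # clip goes to the second service
        modified[a:b] = 0.0                      # silence it in the modified recording
    return modified, clips

modified_audio, word_clips = redact(audio, redacted_words, SR)
# modified_audio -> first transcription service; word_clips -> second service.
```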
  • Patent number: 11062696
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech endpointing are described. In one aspect, a method includes the action of accessing voice query log data that includes voice queries spoken by a particular user. The actions further include based on the voice query log data that includes voice queries spoken by a particular user, determining a pause threshold from the voice query log data that includes voice queries spoken by the particular user. The actions further include receiving, from the particular user, an utterance. The actions further include determining that the particular user has stopped speaking for at least a period of time equal to the pause threshold. The actions further include based on determining that the particular user has stopped speaking for at least a period of time equal to the pause threshold, processing the utterance as a voice query.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: July 13, 2021
    Assignee: Google LLC
    Inventors: Siddhi Tadpatrikar, Michael Buchanan, Pravir Kumar Gupta
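    Illustrative sketch (not Google's implementation): derive a per-user pause threshold from that user's logged pause durations and endpoint the utterance once silence exceeds it. The 95th-percentile rule and the log values are assumptions.
```python
# Sketch of the personalized endpointing idea in the abstract above.
import numpy as np

def pause_threshold_from_logs(pause_durations_s):
    # e.g. allow pauses up to the 95th percentile the user has historically made
    return float(np.percentile(pause_durations_s, 95))

def utterance_ended(silence_so_far_s, threshold_s):
    return silence_so_far_s >= threshold_s

user_pauses = [0.3, 0.5, 0.4, 0.8, 0.6, 0.7]   # placeholder voice query log data (s)
threshold = pause_threshold_from_logs(user_pauses)
print(utterance_ended(0.9, threshold))          # True: process the input as a voice query
```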
  • Patent number: 11048749
    Abstract: A searchable media object comprises means for playing a media file; an audio file processor operable to receive a standard audio file and convert the standard audio file into machine coding; an Automatic Speech Recognition processor, operable to receive the standard audio file, determine each spoken word and output a string of text comprising each of the determined words, wherein each word in the string is accorded a time indicative of the time of occurrence of the word in the string; an assembler, operable to receive the machine coding, the string of text and the relevant topics and therefrom, assemble the self contained searchable media object comprising a media player operable without connection to the internet; and, a search engine operable to receive a user word search enquiry, search the string of text for the word search enquiry, determine the accorded time of the occurrence of each word enquiry in the string of text, display the occurrences of the determined word search enquiry to a user, receive a us
    Type: Grant
    Filed: April 5, 2017
    Date of Patent: June 29, 2021
    Inventor: Nigel Henry Cannings
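    Illustrative sketch (not the claimed media object): once each ASR word carries a time of occurrence, a word search reduces to filtering the timed word list and seeking the player to the returned times. The transcript data is a placeholder.
```python
# Sketch of word-level search over an ASR transcript with per-word times,
# as described in the abstract above.
timed_words = [("welcome", 0.0), ("to", 0.4), ("the", 0.5), ("quarterly", 0.7),
               ("review", 1.3), ("the", 2.0), ("quarterly", 2.2), ("numbers", 2.9)]

def search(query, timed_words):
    query = query.lower()
    return [t for w, t in timed_words if w.lower() == query]

print(search("quarterly", timed_words))  # -> [0.7, 2.2]; a player could seek to these times
```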
  • Patent number: 11032623
    Abstract: The present invention discloses a subtitled image generation apparatus that includes a subtitle generation circuit, an image delay circuit and an overlaying circuit. The subtitle generation circuit receives audio data to generate a subtitle image pattern. The image delay circuit includes a first delay path having delay buffer circuit, a second delay path having a data amount decreasing circuit, the delay buffer circuit and a data amount restoring circuit, and a control circuit. The control circuit controls the first delay path to store and delay the image data when a data amount of the image data matches a direct-writing condition, and controls the second delay path to decrease the data amount of the image data, to store and delay the image data and to restore the data amount of the image data when the data amount fails to match the direct-writing condition. The overlaying circuit overlays the subtitle image pattern on the image data having a corresponding timing to generate an output subtitled image.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: June 8, 2021
    Assignee: REALTEK SEMICONDUCTOR CORPORATION
    Inventor: Lien-Hsiang Sung
  • Patent number: 11011157
    Abstract: Techniques are disclosed for generating ASR training data. According to an embodiment, impactful ASR training corpora are generated efficiently, and the quality or relevance of the ASR training corpora being generated is increased by leveraging knowledge of the ASR system being trained. An example methodology includes: selecting a word or phrase, based on knowledge and/or content of said ASR training corpora; presenting a textual representation of said word or phrase; receiving a speech utterance that includes said word or phrase; receiving a transcript for said speech utterance; presenting said transcript for review (to allow for editing, if needed); and storing said transcript and said audio file in an ASR system training database. The selecting may include, for instance, selecting a word or phrase that is under-represented in said database, and/or based upon an n-gram distribution over a language, and/or based upon known areas that tend to incur transcription mistakes.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: May 18, 2021
    Assignee: ADOBE INC.
    Inventor: Franck Dernoncourt
  • Patent number: 10977345
    Abstract: Hardware on a device, including sensors that may be embedded in or accessible to the device, extends the validity session of an authentication event by identifying behavior of the user periodically during the session. By detecting behavior that may be directly or indirectly unrelated to the application, but is necessarily present if the user remains the same, the session is extended for as long as that behavior stays within some defined parameters. This process may be accomplished either by using an expert system or through the application of machine learning. Such a system may take input from sensors and detect a pattern in those inputs that coincides with the presence of the ephemeral personal or behavioral patterns it is desired to detect. Using that detected behavior, the validity of a session can be extended using statements about the variance or invariance of the detected ephemeral personal or behavioral states.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: April 13, 2021
    Assignee: TwoSense, Inc.
    Inventors: Dawud Gordon, John Tanios
  • Patent number: 10956860
    Abstract: Techniques for determining a clinician's intent to order an item may include processing a free-form narration, of an encounter with a patient, narrated by a clinician, using a natural language understanding engine implemented by one or more processors, to extract at least one clinical fact corresponding to a mention of an orderable item from the free-form narration. The processing may comprise distinguishing between whether the at least one clinical fact indicates an intent to order the orderable item or does not indicate an intent to order the orderable item. In response to determining that the at least one clinical fact indicates an intent to order the orderable item, an order may be generated for the orderable item.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: March 23, 2021
    Assignee: Nuance Communications, Inc.
    Inventors: Isam H. Habboush, Davide Zaccagnini
  • Patent number: 10943600
    Abstract: A system or method for manipulating audiovisual data using transcript information. The system or method performs the following actions. Creating a computer-generated transcript of audio data from the audiovisual data, the computer-generated transcript includes a plurality of words, at least some words of the plurality of words are associated with a respective timestamp and a confidence score. Receiving a traditional transcript of the audio data, the traditional transcript includes a plurality of words that are not associated with timestamps. Identifying one or more words from the plurality of words of the computer-generated transcript that match words from the plurality of words of the traditional transcript. Associating the timestamp of the one or more words of the computer-generated transcript with the matching word of the traditional transcript. Processing the audiovisual data using the traditional transcript and the associated timestamps.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: March 9, 2021
    Assignee: Axon Enterprise, Inc.
    Inventors: Joseph Charles Dimino, Jr., Sayce William Falk, Leo Thomas Rossignac-Milon
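    Illustrative sketch (the difflib alignment is an assumption, not the patented matcher): copy timestamps from the computer-generated transcript onto the matching words of the traditional, untimed transcript, as the abstract above describes.
```python
# Sketch of transferring word-level timestamps from an ASR transcript to a
# traditional transcript by matching words between the two.
from difflib import SequenceMatcher

asr = [("so", 0.0, 0.92), ("the", 0.2, 0.95), ("meting", 0.4, 0.41), ("starts", 0.8, 0.90)]
traditional = ["so", "the", "meeting", "starts", "now"]

asr_words = [w for w, _, _ in asr]
aligned_times = {}
matcher = SequenceMatcher(a=asr_words, b=traditional)
for a, b, size in matcher.get_matching_blocks():
    for k in range(size):
        word, timestamp, confidence = asr[a + k]
        aligned_times[b + k] = timestamp   # traditional-transcript index -> timestamp

print(aligned_times)  # {0: 0.0, 1: 0.2, 3: 0.8}; "meeting" and "now" remain untimed
```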
  • Patent number: 10929091
    Abstract: This disclosure concerns the playback of audio content, e.g. in the form of music. More particularly, the disclosure concerns the playback of streamed audio. In one example embodiment, there is a method of operating an electronic device for dynamically controlling a playlist including one or several audio items. A request to adjust an energy level (e.g. a tempo) associated with the playlist is received. In response to receiving this request, the playlist is adjusted in accordance with the requested energy level (e.g., the tempo).
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: February 23, 2021
    Assignee: SPOTIFY AB
    Inventors: Souheil Medaghri Alaoui, Miles Lennon, Kieran Del Pasqua
  • Patent number: 10886028
    Abstract: Techniques for presenting alternative hypotheses for medical facts may include identifying, using at least one statistical fact extraction model, a plurality of alternative hypotheses for a medical fact to be extracted from a portion of text documenting a patient encounter. At least two of the alternative hypotheses may be selected, and the selected hypotheses may be presented to a user documenting the patient encounter.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: January 5, 2021
    Assignee: Nuance Communications, Inc.
    Inventor: Girija Yegnanarayanan
  • Patent number: 10854190
    Abstract: Various embodiments of the present disclosure evaluate transcription accuracy. In some implementations, the system normalizes a first transcription of an audio file and a baseline transcription of the audio file. The baseline transcription can be used as an accurate transcription of the audio file. The system can further determine an error rate of the first transcription by aligning each portion of the first transcription with the portion of the baseline transcription, and assigning a label to each portion based on a comparison of the portion of the first transcription with the portion of the baseline transcription.
    Type: Grant
    Filed: June 5, 2017
    Date of Patent: December 1, 2020
    Assignee: UNITED SERVICES AUTOMOBILE ASSOCIATION (USAA)
    Inventors: Michael J. Szentes, Carlos Chavez, Robert E. Lewis, Nicholas S. Walker
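    Illustrative sketch (difflib opcodes stand in for the patented alignment): label each aligned portion of a first transcription against the baseline and derive an error rate from the non-correct labels. The sample sentences and the error-rate definition are assumptions.
```python
# Sketch of aligning a transcription to a baseline, labelling each portion, and
# computing an error rate, in the spirit of the abstract above.
from difflib import SequenceMatcher

baseline = "please verify the account balance today".split()
candidate = "please verify account balance to day".split()

labels = []
for op, a1, a2, b1, b2 in SequenceMatcher(a=baseline, b=candidate).get_opcodes():
    if op == "equal":
        labels += [("correct", w) for w in baseline[a1:a2]]
    elif op == "replace":
        labels += [("substitution", w) for w in baseline[a1:a2]]
    elif op == "delete":
        labels += [("deletion", w) for w in baseline[a1:a2]]
    else:  # "insert"
        labels += [("insertion", w) for w in candidate[b1:b2]]

errors = sum(label != "correct" for label, _ in labels)
print(labels)
print("error rate:", round(errors / len(baseline), 2))
```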
  • Patent number: 10845976
    Abstract: Approaches provide for navigating or otherwise interacting with content in response to input from a user, including voice inputs, device inputs, gesture inputs, among other such inputs such that a user can quickly and easily navigate to different levels of detail of content. This can include, for example, presenting content (e.g., images, multimedia, text, etc.) in a particular layout, and/or highlighting, emphasizing, animating, or otherwise altering in appearance, and/or arrangement of the interface elements used to present the content based on a current level of detail, where the current level of detail can be determined by data selection criteria associated with a magnification level and other such data. As a user interacts with the computing device, for example, by providing a zoom input, values of the selection criteria can be updated, which can be used to filter and/or select content for presentation.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: November 24, 2020
    Assignee: IMMERSIVE SYSTEMS INC.
    Inventors: Jason Simmons, Maksim Galkin
  • Patent number: 10798271
    Abstract: In various embodiments, a subtitle timing application detects timing errors between subtitles and shot changes. In operation, the subtitle timing application determines that a temporal edge associated with a subtitle does not satisfy a timing guideline based on a shot change. The shot change occurs within a sequence of frames of an audiovisual program. The subtitle timing application then determines a new temporal edge that satisfies the timing guideline relative to the shot change. Subsequently, the subtitle timing application causes a modification to a temporal location of the subtitle within the sequence of frames based on the new temporal edge. Advantageously, the modification to the subtitle improves a quality of a viewing experience for a viewer. Notably, by automatically detecting timing errors, the subtitle timing application facilitates proper and efficient re-scheduling of subtitles that are not optimally timed with shot changes.
    Type: Grant
    Filed: January 5, 2018
    Date of Patent: October 6, 2020
    Assignee: NETFLIX, INC.
    Inventors: Murthy Parthasarathi, Andrew Swan, Yadong Wang, Thomas E. Mack
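    Illustrative sketch (not the Netflix application itself): snap a subtitle edge that falls too close to a shot change either onto the cut or a full guideline interval away from it. The 0.5-second guideline and the snapping rule are assumptions.
```python
# Sketch of detecting and fixing a subtitle temporal edge that violates a timing
# guideline around a shot change, per the abstract above.
GUIDELINE_S = 0.5  # assumed minimum spacing between a subtitle edge and a cut

def snap_edge(edge, shot_changes, guideline=GUIDELINE_S):
    for cut in shot_changes:
        gap = edge - cut
        if 0 < abs(gap) < guideline:
            # Either sit exactly on the cut or keep a full guideline away from it.
            if abs(gap) < guideline / 2:
                return cut
            return cut + guideline if gap > 0 else cut - guideline
    return edge

shot_changes = [10.0, 13.3]
print(snap_edge(10.2, shot_changes), snap_edge(13.1, shot_changes))  # -> 10.0 13.3
```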
  • Patent number: 10777095
    Abstract: A method to develop pronunciation and intonation proficiency in English using an electronic interface includes: preparing video bites each having an English language sound clip; preparing a script of the sound clip, wherein the script is partially marked in accordance with a predetermined rule of a pronunciation and intonation rhythm; displaying a circle on a screen of the electronic interface, wherein the circle has an illuminant movably provided along the circle, and wherein the circle is serially partitioned into first to fourth quadrants; selectively playing on the screen the sound clip and the script adjacent to the circle; and synchronizing the sound clip to the illuminant in accordance with the predetermined rule, wherein an angular velocity of the illuminant moving along the circle accelerates and decelerates in the first quadrant and remains substantially constant in the second and third quadrants.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: September 15, 2020
    Inventor: Il Sung Bang
  • Patent number: 10712998
    Abstract: There is provided an information processing device to improve communication between a user and a person speaking to the user by specifying speaking motion information indicating a motion of a surrounding person speaking to the user for whom information from the surroundings is auditorily or visually restricted, the information processing device including: a detecting unit configured to detect a speaking motion of a surrounding person speaking to a user using a device that auditorily or visually restricts information from surroundings; and a specifying unit configured to specify speaking motion information indicating the speaking motion on a basis of monitored surrounding information in a case in which the speaking motion is detected.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: July 14, 2020
    Assignee: SONY CORPORATION
    Inventor: Ryouhei Yasuda
  • Patent number: 10685667
    Abstract: In aspects, systems, methods, apparatuses and computer-readable storage media implementing embodiments for mixing audio content based on a plurality of user generated recordings (UGRs) are disclosed. In embodiments, the mixing comprises: receiving a plurality of UGRs, each UGR of the plurality of UGRs comprising at least audio content; determining a correlation between samples of audio content associated with at least two UGRs of the plurality of UGRs; generating one or more clusters comprising samples of the audio content identified as having a relationship based on the determined correlations; synchronizing, for each of the one or more clusters, the samples of the audio content to produce synchronized audio content for each of the one or more clusters; normalizing, for each of the one or more clusters, the synchronized audio content to produce normalized audio content; and mixing, for each of the one or more clusters, the normalized audio content.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: June 16, 2020
    Assignee: FOUNDATION FOR RESEARCH AND TECHNOLOGY—HELLAS (FORTH)
    Inventors: Nikolaos Stefanakis, Athanasios Mouchtaris
  • Patent number: 10672399
    Abstract: Techniques are provided for creating a mapping that maps locations in audio data (e.g., an audio book) to corresponding locations in text data (e.g., an e-book). Techniques are provided for using a mapping between audio data and text data, whether the mapping is created automatically or manually. A mapping may be used for bookmark switching where a bookmark established in one version of a digital work (e.g., e-book) is used to identify a corresponding location with another version of the digital work (e.g., an audio book). Alternatively, the mapping may be used to play audio that corresponds to text selected by a user. Alternatively, the mapping may be used to automatically highlight text in response to audio that corresponds to the text being played. Alternatively, the mapping may be used to determine where an annotation created in one media context (e.g., audio) will be consumed in another media context.
    Type: Grant
    Filed: October 6, 2011
    Date of Patent: June 2, 2020
    Assignee: APPLE INC.
    Inventors: Alan C. Cannistraro, Gregory S. Robbin, Casey M. Dougherty, Raymond Walsh, Melissa Breglio Hajj
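    Illustrative sketch (not Apple's mapping format): with a handful of anchor pairs relating audio-book seconds to e-book character offsets, bookmark switching becomes interpolation in either direction. The anchor values are invented.
```python
# Sketch of a mapping between audio positions and text positions used for
# bookmark switching, as the abstract above describes.
import numpy as np

# (audio_time_s, text_char_offset) anchor pairs, e.g. one per paragraph
anchors_time = np.array([0.0, 30.0, 75.0, 120.0])
anchors_char = np.array([0, 850, 2100, 3300])

def audio_to_text(t_s):
    return int(np.interp(t_s, anchors_time, anchors_char))

def text_to_audio(char_offset):
    return float(np.interp(char_offset, anchors_char, anchors_time))

# A bookmark set at 50 s into the audio book opens the e-book near this offset:
print(audio_to_text(50.0))   # ~1405 characters in
print(text_to_audio(2100))   # 75.0 s
```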
  • Patent number: 10656901
    Abstract: A media item that was presented in media players of computing devices at a first audio level may be identified, each of the media players having a corresponding user of a first set of users. A second audio level value corresponding to an amplitude setting selected by a user of the set of users during playback of the media item may be determined for each of the media players. An audio level difference (ALD) value for each of the media players may be determined based on a corresponding second audio level value. A second audio level value for an amplitude setting to be provided for the media item in response to a request of a second user to play the media item may be determined based on determined ALD values.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: May 19, 2020
    Assignee: GOOGLE LLC
    Inventor: Christian Weitenberner
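    Illustrative sketch (the median aggregation is an assumption): compute each earlier listener's audio level difference against the level the item was first presented at, then offer the adjusted level when the next user requests the item.
```python
# Sketch of deriving a default amplitude setting from audio level differences
# (ALDs), per the abstract above.
import statistics

first_audio_level = 0.60                          # level the item was first presented at
user_selected_levels = [0.45, 0.50, 0.40, 0.55]   # levels earlier users turned it to

alds = [selected - first_audio_level for selected in user_selected_levels]
suggested_level = first_audio_level + statistics.median(alds)
print(round(suggested_level, 3))   # 0.475 -- offered when the next user plays the item
```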
  • Patent number: 10579326
    Abstract: A control device is provided which mixes and records two types of audio signals processed under different standards; in particular, an audio signal of the ASIO standard and an audio signal of the WDM standard. An audio interface is connected to a computer, and an audio signal is input to the computer. A mixer module of the computer mixes an audio signal that is effect-processed by an ASIO application with an audio signal reproduced by a WDM application, and outputs the mixed audio signal to the audio interface and to the WDM application for sound recording. The user operates a screen displayed on an operation panel to switch the effect processing and the mixing on and off.
    Type: Grant
    Filed: January 18, 2017
    Date of Patent: March 3, 2020
    Assignee: TEAC Corporation
    Inventor: Kaname Hayasaka
  • Patent number: 10575119
    Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position sound is coming from, intensity of the sound, focus of the sound, and color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
    Type: Grant
    Filed: December 12, 2018
    Date of Patent: February 25, 2020
    Assignee: Adobe Inc.
    Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder
  • Patent number: 10565435
    Abstract: A method for determining a video-related emotion and a method of generating data for learning video-related emotions include separating an input video into a video stream and an audio stream; analyzing the audio stream to detect a music section; extracting at least one video clip matching the music section; extracting emotion information from the music section; tagging the video clip with the extracted emotion information and outputting the video clip; learning video-related emotions by using the at least one video clip tagged with the emotion information to generate a video-related emotion classification model; and determining an emotion related to an input query video by using the video-related emotion classification model to provide the emotion.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: February 18, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jee Hyun Park, Jung Hyun Kim, Yong Seok Seo, Won Young Yoo, Dong Hyuck Im
  • Patent number: 10489450
    Abstract: Implementations generally relate to selecting soundtracks. In some implementations, a method includes determining one or more sound mood attributes of one or more soundtracks, where the one or more sound mood attributes are based on one or more sound characteristics. The method further includes determining one or more visual mood attributes of one or more visual media items, where the one or more visual mood attributes are based on one or more visual characteristics. The method further includes selecting one or more of the soundtracks based on the one or more sound mood attributes and the one or more visual mood attributes. The method further includes generating an association among the one or more selected soundtracks and the one or more visual media items, wherein the association enables the one or more selected soundtracks to be played while the one or more visual media items are displayed.
    Type: Grant
    Filed: February 26, 2015
    Date of Patent: November 26, 2019
    Assignee: Google LLC
    Inventor: Ryan James Lothian
  • Patent number: 10417279
    Abstract: Systems and methods are provided for curating playlists of content for provisioning and presenting to users a seamless cross fade experience from one piece of content to the next within the playlist. In embodiments, information that identifies portions of content without audio or video data may be maintained. Further, metadata may be generated that identifies a cross fade points for the content in response to receiving input from a user device of a user. In an embodiment, each cross fade point may identify a time window of the content for interleaving with other content. In accordance with at least one embodiment, the metadata may be transmitted based at least in part on a selection of the metadata for the content.
    Type: Grant
    Filed: December 7, 2015
    Date of Patent: September 17, 2019
    Assignee: Amazon Technologies, Inc.
    Inventor: Jonathan Beech
  • Patent number: 10397525
    Abstract: In a pilotless flying object detection system, a masking area setter sets a masking area to be excluded from detection of a pilotless flying object which appears in a captured image of a monitoring area, based on audio collected by a microphone array. An object detector detects the pilotless flying object based on the audio collected by the microphone array and the masking area set by the masking area setter. An output controller superimposes sound source visual information, which indicates the volume of a sound at a sound source position, at the sound source position of the pilotless flying object in the captured image and displays the result on a first monitor in a case where the pilotless flying object is detected in an area other than the masking area.
    Type: Grant
    Filed: March 9, 2017
    Date of Patent: August 27, 2019
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Hiroyuki Matsumoto, Shintaro Yoshikuni, Masanari Miyamoto
  • Patent number: 10334142
    Abstract: In some examples, a system receives a color sample comprising a color measurement of a proper subset of a gamut of colors printable by a printer, and computes a forward transform value and a reverse transform value based on a color profile calculated from a profiling chart comprising a set of estimated color samples calculated based on the received color sample, the forward and reverse transform values to convert between colorimetry values and color values for the printer. The system provides an adjusted color profile for the printer based on an original color profile for the printer and the computing, wherein the original color profile for the printer is associated with a substrate.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: June 25, 2019
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Peter Morovic, Jan Morovic
  • Patent number: 10332506
    Abstract: Disclosed are systems and methods for improving interactions with and between computers in content searching, generating, hosting and/or providing systems supported by or configured with personal computing devices, servers and/or platforms. The systems interact to identify and retrieve data within or across platforms, which can be used to improve the quality of data used in processing interactions between or among processors in such systems. The disclosed systems and methods provide for the automatic creation of a formatted, readable transcript of multimedia content, which is derived, extracted, determined, or otherwise identified from the multimedia content. The formatted, readable transcript can be utilized to increase accuracy and efficiency in search engine optimization, as well as identification of relevant digital content available for communication to a user.
    Type: Grant
    Filed: September 2, 2015
    Date of Patent: June 25, 2019
    Assignee: OATH INC.
    Inventors: Aasish Pappu, Amanda Stent
  • Patent number: 10318637
    Abstract: An editing method facilitates the task of adding background sound to speech-containing audio data so as to augment the listening experience. The editing method is executed by a processor in a computing device and comprises obtaining characterization data that characterizes time segments in the audio data by at least one of topic and sentiment; deriving, for a respective time segment in the audio data and based on the characterization data, a desired property of a background sound to be added to the audio data in the respective time segment, and providing the desired property for the respective time segment so as to enable the audio data to be combined, within the respective time segment, with background sound having the desired property. The background sound may be selected and added automatically or by manual user intervention.
    Type: Grant
    Filed: May 13, 2017
    Date of Patent: June 11, 2019
    Assignee: SONY MOBILE COMMUNICATIONS INC.
    Inventor: Ola Thörn
  • Patent number: 10194199
    Abstract: Methods, systems, and computer program products that automatically categorize and/or assign ratings to content (video and audio content) uploaded by individuals who want to broadcast the content to others via a communications network, such as an IPTV network, are provided. When an individual uploads content to a network, a network service automatically extracts an audio stream from the uploaded content. Words in the extracted audio stream are identified. For each identified word, a preexisting library of selected words is queried to determine if a match exists between words in the library and words in the extracted audio stream. The selected words in the library are associated with a particular content category or content rating. If a match exists between an identified word and a word in the library, the uploaded content is assigned a content category and/or rating associated with the matched word.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: January 29, 2019
    Assignee: AT&T Intellectual Property I, L.P.
    Inventor: Ke Yu
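    Illustrative sketch (not the AT&T service): match words identified in the extracted audio stream against per-category word libraries and assign the category with the most hits. The libraries and transcript are placeholders.
```python
# Sketch of assigning a category to uploaded content by matching words from its
# extracted audio stream against preexisting word libraries, per the abstract above.
WORD_LIBRARIES = {
    "sports": {"goal", "referee", "halftime"},
    "cooking": {"simmer", "saute", "tablespoon"},
}

def categorize(transcript_words):
    hits = {cat: sum(w in lib for w in transcript_words) for cat, lib in WORD_LIBRARIES.items()}
    best = max(hits, key=hits.get)
    return best if hits[best] > 0 else "uncategorized"

print(categorize(["bring", "to", "a", "simmer", "then", "saute"]))  # -> "cooking"
```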
  • Patent number: 10109286
    Abstract: According to an embodiment, a speech synthesizer includes a source generator, a phase modulator, and a vocal tract filter unit. The source generator generates a source signal by using a fundamental frequency sequence and a pulse signal. The phase modulator modulates, with respect to the source signal generated by the source generator, a phase of the pulse signal at each pitch mark based on audio watermarking information. The vocal tract filter unit generates a speech signal by using a spectrum parameter sequence with respect to the source signal in which the phase of the pulse signal is modulated by the phase modulator.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: October 23, 2018
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Kentaro Tachibana, Takehiko Kagoshima, Masatsune Tamura, Masahiro Morita
  • Patent number: 9942748
    Abstract: Embodiments of the present invention provide a service provisioning system and method, a mobile edge application server and support node. The system includes: at least one mobile edge application server (MEAS) and at least one mobile edge application server support function (MEAS-SF), where the MEAS is deployed at an access network side; and the MEAS-SF is deployed at a core network side, connected to one or more MEAS. In the service provisioning system provided in the embodiment, services that are provided by an SP are deployed in the MEAS. When the MEAS can provide the user equipment with a service requested in a service request, the MEAS directly and locally generates service data corresponding to the service request. Therefore, the user equipment directly obtains required service data from an RAN side, which avoids data congestion between an RAN and a CN and saves network resources.
    Type: Grant
    Filed: August 21, 2015
    Date of Patent: April 10, 2018
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Zhiming Zhu, Weihua Liu, Mingrong Cao
  • Patent number: 9841879
    Abstract: A computing device can include a recognition mode interface utilizing graphical elements, such as virtual fireflies, to indicate recognized or identified objects. The fireflies can be animated to move across a display, and the fireflies can create bounding boxes around visual representations of objects as the objects are recognized. In some cases, the object might be of a type that has specific meaning or information to be conveyed to a user. In such cases, the fireflies might be displayed with a particular size, shape, or color to convey that information. The fireflies also can be configured to form shapes or patterns in order to convey other types of information to a user, such as where audio is being recognized, light is sufficient for image capture, and the like. Other types of information can be conveyed as well via altering characteristics of the fireflies.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: December 12, 2017
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Timothy Thomas Gray, Forrest Elliott
  • Patent number: 9838731
    Abstract: Video clips may be automatically edited to be synchronized for accompaniment by audio tracks. A preliminary version of a video clip may be made up from stored video content. Occurrences of video events within the preliminary version may be determined. A first audio track may include audio event markers. A first revised version of the video clip may be synchronized so that moments within the video clip corresponding to occurrences of video events are aligned with moments within the first audio track corresponding to audio event markers. Presentation of an audio mixing option may be effectuated on a graphical user interface of a video application for selection by a user. The audio mixing option may define volume at which the first audio track is played as accompaniment for the video clip.
    Type: Grant
    Filed: April 7, 2016
    Date of Patent: December 5, 2017
    Assignee: GoPro, Inc.
    Inventor: Joven Matias