Patent Applications Published March 31, 2016
-
Publication number: 20160093285
Abstract: Systems and methods are disclosed for providing non-lexical cues in synthesized speech. Original text is analyzed to determine characteristics of the text and/or to derive or augment an intent (e.g., an intent code). Non-lexical cue insertion points are determined based on the characteristics of the text and/or the intent. One or more non-lexical cues are inserted at insertion points to generate augmented text. The augmented text is synthesized into speech, including converting the non-lexical cues to speech output.
Type: Application
Filed: September 26, 2014
Publication date: March 31, 2016
Inventors: Jessica M. Christian, Peter Graff, Crystal A. Nakatsu, Beth Ann Hockey
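The cue-insertion step above can be sketched as a simple text transform. The intent codes and the `<cue:…>` markup below are invented for illustration; the patent does not specify a representation.

```python
def insert_nonlexical_cues(text, intent):
    """Insert non-lexical cues into text before synthesis.

    Toy rule set: a hesitant intent gets an "um" after the first word,
    a surprised one a sharp-inhale cue up front. The cue markup and the
    intent codes are invented, not from the patent.
    """
    words = text.split()
    if intent == "hesitant":
        words.insert(1, "<cue:um>")
    elif intent == "surprised":
        words.insert(0, "<cue:inhale>")
    return " ".join(words)

augmented = insert_nonlexical_cues("I guess that could work", "hesitant")
print(augmented)  # I <cue:um> guess that could work
```

A real system would then hand the augmented text to a synthesizer that renders the cue tokens as audio.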
-
Publication number: 20160093286
Abstract: A system and computer-implemented method for synthesizing multi-person speech into an aggregate voice are disclosed. The method may include crowd-sourcing a data message configured to include a textual passage. The method may include collecting, from a plurality of speakers, a set of vocal data for the textual passage. Additionally, the method may also include mapping a source voice profile to a subset of the set of vocal data to synthesize the aggregate voice.
Type: Application
Filed: September 30, 2014
Publication date: March 31, 2016
Inventors: Jose A.G. de Freitas, Guy P. Hindle, James S. Taylor
-
Publication number: 20160093287
Abstract: A system and method are disclosed for generating customized text-to-speech voices for a particular application. The method comprises generating a custom text-to-speech voice by selecting a voice for generating a custom text-to-speech voice associated with a domain, collecting text data associated with the domain from a pre-existing text data source and, using the collected text data, generating an in-domain inventory of synthesis speech units by selecting speech units appropriate to the domain via a search of a pre-existing inventory of synthesis speech units, or by recording the minimal inventory for a selected level of synthesis quality. The text-to-speech custom voice for the domain is generated utilizing the in-domain inventory of synthesis speech units. Active learning techniques may also be employed to identify problem phrases wherein only a few minutes of recorded data is necessary to deliver a high quality TTS custom voice.
Type: Application
Filed: December 10, 2015
Publication date: March 31, 2016
Inventors: Srinivas BANGALORE, Junlan FENG, Mazin GILBERT, Juergen SCHROETER, Ann K. SYRDAL, David SCHULZ
-
Publication number: 20160093288
Abstract: A speech synthesis system can record concatenation costs of the most common acoustic unit sequential pairs to a concatenation cost database by synthesizing speech from a text, identifying a most common acoustic unit sequential pair in the speech, assigning a concatenation cost to that pair, and recording the concatenation cost to the concatenation cost database.
Type: Application
Filed: December 8, 2015
Publication date: March 31, 2016
Inventors: Mark Charles BEUTNAGEL, Mehryar MOHRI, Michael Dennis RILEY
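The steps above amount to counting adjacent unit pairs over synthesized output and caching join costs for only the most frequent ones. A minimal sketch, with toy string labels standing in for acoustic units and a stub cost function:

```python
from collections import Counter

def build_concat_cost_db(unit_sequences, cost_fn, top_n=3):
    """Cache concatenation (join) costs for the most common adjacent
    acoustic-unit pairs seen across synthesized unit sequences."""
    pair_counts = Counter()
    for units in unit_sequences:
        # Count every sequential pair (u[i], u[i+1]).
        pair_counts.update(zip(units, units[1:]))
    # Record a cost only for the top_n most frequent sequential pairs.
    return {pair: cost_fn(*pair) for pair, _ in pair_counts.most_common(top_n)}

# Toy unit labels stand in for real acoustic units; cost_fn is a placeholder.
seqs = [["a", "b", "c", "a", "b"], ["a", "b", "d"]]
db = build_concat_cost_db(seqs, cost_fn=lambda u, v: 1.0)
print(("a", "b") in db)  # True: ('a', 'b') occurs three times, so it is cached
```

At synthesis time the cached costs are looked up instead of recomputed, which is the point of the database.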
-
Publication number: 20160093289
Abstract: Techniques for performing multi-style speech synthesis. The techniques include using at least one computer hardware processor to perform: obtaining input comprising text and an identification of a first speaking style to use in rendering the text as speech; identifying a plurality of speech segments for use in rendering the text as speech, the identified plurality of speech segments comprising a first speech segment having the first speaking style and a second speech segment having a second speaking style different from the first speaking style; and rendering the text as speech having the first speaking style, at least in part, by using the identified plurality of speech segments.
Type: Application
Filed: September 29, 2014
Publication date: March 31, 2016
Inventor: Vincent Pollet
-
Publication number: 20160093290
Abstract: Embodiments included herein are directed towards a system and method for compressed domain language identification. Embodiments may include receiving a bitstream of a sequence of packets at one or more computing devices and classifying each packet into speech or non-speech based upon, at least in part, compressed domain voice activity detection (VAD). Embodiments may further include extracting a pseudo-cepstral representation from the speech-detected packets, partially decoding without extracting a PCM format, and generating a sequence of multi-frames based upon, at least in part, the pseudo-cepstral representation. Embodiments may also include providing in real time the sequence of multi-frames to a deep neural network (DNN), wherein the DNN has been trained off-line for one or more desired target languages.
Type: Application
Filed: September 29, 2014
Publication date: March 31, 2016
Inventors: Jose Lainez, Daniel Almendro Barreda
-
Publication number: 20160093291
Abstract: This relates to providing an indication of the suitability of an acoustic environment for performing speech recognition. One process can include receiving an audio input and determining a speech recognition suitability based on the audio input. The speech recognition suitability can include a numerical, textual, graphical, or other representation of the suitability of an acoustic environment for performing speech recognition. The process can further include displaying a visual representation of the speech recognition suitability to indicate the likelihood that a spoken user input will be interpreted correctly. This allows a user to determine whether to proceed with the performance of a speech recognition process, or to move to a different location having a better acoustic environment before performing the speech recognition process.
Type: Application
Filed: August 24, 2015
Publication date: March 31, 2016
Inventor: Yoon KIM
-
Publication number: 20160093292
Abstract: A method in a computing device for decoding a weighted finite state transducer (WFST) for automatic speech recognition is described. The method includes sorting a set of one or more WFST arcs based on their arc weight in ascending order. The method further includes iterating through each arc in the sorted set of arcs according to the ascending order until the score of the generated token corresponding to an arc exceeds a score threshold. The method further includes discarding any remaining arcs in the set of arcs that have yet to be considered.
Type: Application
Filed: September 26, 2014
Publication date: March 31, 2016
Inventors: Joachim HOFER, Georg STEMMER
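The pruning logic described above exploits the sorted order: once one arc's token exceeds the threshold, every heavier arc would too, so the loop can stop. A minimal sketch with arcs as `(weight, next_state)` tuples (a simplified stand-in for real WFST arcs, which also carry input/output labels):

```python
def expand_arcs(arcs, token_score, score_threshold):
    """Expand WFST arcs in ascending weight order with early termination.

    arcs: (weight, next_state) pairs; token_score: score of the token
    being propagated. Arcs are sorted ascending, so once a generated
    token exceeds the threshold, all remaining arcs are discarded
    without being scored.
    """
    new_tokens = []
    for weight, next_state in sorted(arcs):  # ascending arc weight
        score = token_score + weight
        if score > score_threshold:
            break  # this arc and all remaining (heavier) arcs are pruned
        new_tokens.append((score, next_state))
    return new_tokens

tokens = expand_arcs([(5.0, "s3"), (1.0, "s1"), (2.5, "s2")],
                     token_score=10.0, score_threshold=13.0)
print(tokens)  # [(11.0, 's1'), (12.5, 's2')]
```

In tropical-semiring decoding, lower scores are better, so the break prunes exactly the hypotheses beam pruning would drop anyway.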
-
Publication number: 20160093293
Abstract: A method and a device that preprocess a speech signal are disclosed, which include extracting at least one frame corresponding to a speech recognition range from frames included in a speech signal, generating a supplementary frame to supplement speech recognition with respect to the speech recognition range based on the at least one extracted frame, and outputting a preprocessed speech signal including the supplementary frame along with the frames of the speech signal.
Type: Application
Filed: April 7, 2015
Publication date: March 31, 2016
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Hodong LEE
-
Publication number: 20160093294
Abstract: The present disclosure relates to training a speech recognition system. One example method includes receiving a collection of speech data items, wherein each speech data item corresponds to an utterance that was previously submitted for transcription by a production speech recognizer. The production speech recognizer uses initial production speech recognizer components in generating transcriptions of speech data items. A transcription for each speech data item is generated using an offline speech recognizer, and the offline speech recognizer components are configured to improve speech recognition accuracy in comparison with the initial production speech recognizer components. The updated production speech recognizer components are trained for the production speech recognizer using a selected subset of the transcriptions of the speech data items generated by the offline speech recognizer.
Type: Application
Filed: April 22, 2015
Publication date: March 31, 2016
Inventors: Olga Kapralova, John Paul Alex, Eugene Weinstein, Pedro J. Moreno Mengibar, Olivier Siohan, Ignacio Lopez Moreno
-
Publication number: 20160093295
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for providing statistical unit selection language modeling based on acoustic fingerprinting. The methods, systems and apparatus include the actions of obtaining a unit database of acoustic units and, for each acoustic unit, linguistic data corresponding to the acoustic unit; obtaining stored data associating each acoustic unit with (i) a corresponding acoustic fingerprint and (ii) a probability of the linguistic data corresponding to the acoustic unit occurring in a text corpus; determining that the unit database of acoustic units has been updated to include one or more new acoustic units; for each new acoustic unit in the updated unit database: generating an acoustic fingerprint for the new acoustic unit; identifying an acoustic unit that (i) has an acoustic fingerprint that is indicated as similar to the fingerprint of the new acoustic unit, and (ii) has a stored associated probability.
Type: Application
Filed: September 10, 2015
Publication date: March 31, 2016
Inventors: Alexander Gutkin, Javier Gonzalvo Fructuoso, Cyril Georges Luc Allauzen
-
Publication number: 20160093296
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing speech. A system configured to practice the method monitors user utterances to generate a conversation context. Then the system receives a current user utterance independent of non-natural language input intended to trigger speech processing. The system compares the current user utterance to the conversation context to generate a context similarity score, and if the context similarity score is above a threshold, incorporates the current user utterance into the conversation context. If the context similarity score is below the threshold, the system discards the current user utterance. The system can compare the current user utterance to the conversation context based on an n-gram distribution, a perplexity score, and a perplexity threshold. Alternately, the system can use a task model to compare the current user utterance to the conversation context.
Type: Application
Filed: December 9, 2015
Publication date: March 31, 2016
Inventor: Srinivas BANGALORE
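The accept/discard loop above can be sketched with a crude similarity measure. Jaccard overlap of word n-grams below is a simple stand-in for the n-gram-distribution and perplexity comparison the abstract names; the 0.2 threshold is arbitrary:

```python
def ngrams(text, n=2):
    """Word unigrams plus n-grams of a lowercased string, as a set."""
    words = text.lower().split()
    return set(zip(*(words[i:] for i in range(n)))) | {(w,) for w in words}

def update_context(context, utterance, threshold=0.2):
    """Incorporate an utterance into the running context only if it is
    similar enough; otherwise discard it. Returns (context, accepted)."""
    if not context:
        return utterance, True
    a, b = ngrams(context), ngrams(utterance)
    score = len(a & b) / len(a | b)  # Jaccard similarity in [0, 1]
    if score >= threshold:
        return context + " " + utterance, True   # incorporate
    return context, False                        # discard as off-context

ctx, ok = update_context("book a flight to boston", "a flight tomorrow")
print(ok)  # True: enough n-gram overlap with the context
```

A perplexity-based version would instead score the utterance under a language model adapted to the context and compare against a perplexity threshold.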
-
Publication number: 20160093297
Abstract: A system, apparatus and method for efficient, low power, finite state transducer decoding. For example, one embodiment of a system for performing speech recognition comprises: a processor to perform feature extraction on a plurality of digitally sampled speech frames and to responsively generate a feature vector; an acoustic model likelihood scoring unit communicatively coupled to the processor over a communication interconnect to compare the feature vector against a library of models of various known speech sounds and responsively generate a plurality of scores representing similarities between the feature vector and the models; and a weighted finite state transducer (WFST) decoder communicatively coupled to the processor and the acoustic model likelihood scoring unit over the communication interconnect to perform speech decoding by traversing a WFST graph using the plurality of scores provided by the acoustic model likelihood scoring unit.
Type: Application
Filed: September 26, 2014
Publication date: March 31, 2016
Inventors: MICHAEL E. DEISHER, OHAD FALIK, KISUN YOU
-
Publication number: 20160093298
Abstract: Systems and processes for generating a shared pronunciation lexicon and using the shared pronunciation lexicon to interpret spoken user inputs received by a virtual assistant are provided. In one example, the process can include receiving pronunciations for words or named entities from multiple users. The pronunciations can be tagged with context tags and stored in the shared pronunciation lexicon. The shared pronunciation lexicon can then be used to interpret a spoken user input received by a user device by determining a relevant subset of the shared pronunciation lexicon based on contextual information associated with the user device and performing speech-to-text conversion on the spoken user input using the determined subset of the shared pronunciation lexicon.
Type: Application
Filed: August 25, 2015
Publication date: March 31, 2016
Inventors: Devang K. NAIK, Ali S. MOHAMED, Hong M. CHEN
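The subset-selection step above is essentially a tag filter over the shared lexicon. A minimal sketch; the lexicon layout, the context-tag vocabulary, and the ARPAbet-like pronunciation strings are all invented for illustration:

```python
def relevant_lexicon(shared_lexicon, device_context):
    """Select the subset of a shared pronunciation lexicon whose context
    tags intersect the device's contextual information.

    shared_lexicon: {word: [(pronunciation, set_of_context_tags), ...]}
    device_context: set of tags describing the device's current context.
    """
    subset = {}
    for word, entries in shared_lexicon.items():
        matches = [pron for pron, tags in entries if tags & device_context]
        if matches:
            subset[word] = matches
    return subset

lexicon = {
    "Worcester": [("W UH S T ER", {"navigation"})],
    "Sade": [("SH AA D EY", {"music"})],
}
print(relevant_lexicon(lexicon, {"music"}))  # {'Sade': ['SH AA D EY']}
```

The recognizer then biases speech-to-text conversion toward only the pronunciations in the returned subset.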
-
Publication number: 20160093299
Abstract: A file classifying system and a file classifying method are disclosed herein, where the system includes a storing device storing at least one recognizing audio signal, a receiving device, and a processor. The receiving device receives an audio file or a video file. The processor compares a related audio signal with the at least one recognizing audio signal so as to generate a processing result, where the related audio signal is correlated to the audio file or the video file, and then automatically classifies the audio file or video file into a category.
Type: Application
Filed: September 4, 2015
Publication date: March 31, 2016
Inventor: Kuo-Ying SU
-
Publication number: 20160093300
Abstract: A machine-readable medium may include a group of reusable components for building a spoken dialog system. The reusable components may include a group of previously collected audible utterances. A machine-implemented method to build a library of reusable components for use in building a natural language spoken dialog system may include storing a dataset in a database. The dataset may include a group of reusable components for building a spoken dialog system. The reusable components may further include a group of previously collected audible utterances. A second method may include storing at least one set of data. Each one of the at least one set of data may include ones of the reusable components associated with audible data collected during a different collection phase.
Type: Application
Filed: December 9, 2015
Publication date: March 31, 2016
Inventors: Lee Begeja, Giuseppe Di Fabbrizio, David Crawford Gibbon, Dilek Z. Hakkani-Tur, Zhu Liu, Bernard S. Renger, Behzad Shahraray, Gokhan Tur
-
Publication number: 20160093301
Abstract: Systems and processes are disclosed for predicting words using a categorical stem and suffix word n-gram language model. A word prediction includes determining a stem probability using a stem language model. The word prediction also includes determining a suffix probability using a suffix language model decoupled from the stem model, in view of one or more stem categories. The word prediction also includes determining a probability of the stem belonging to the stem category. A joint probability is determined based on the foregoing, and one or more word predictions having sufficient likelihood are produced. In this way, the categorical stem and suffix language model constrains predicted suffixes to those that would be grammatically valid with predicted stems, thereby producing word predictions with grammatically valid stem and suffix combinations.
Type: Application
Filed: August 28, 2015
Publication date: March 31, 2016
Inventors: Jerome R. BELLEGARDA, Sibel YAMAN
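The joint probability described above can be sketched as P(word) = P(stem) × P(category | stem) × P(suffix | category), so only suffixes licensed by the stem's category are ever proposed. The probability tables below are toy values, not from the patent:

```python
def predict_word(stem, p_stem, p_category_given_stem, p_suffix_given_category):
    """Return the most likely stem+suffix combination under the joint model
    P(word) = P(stem) * P(category | stem) * P(suffix | category)."""
    candidates = {}
    for category, p_cat in p_category_given_stem.get(stem, {}).items():
        # Only suffixes valid for this category are ever considered,
        # which is what keeps predictions grammatically valid.
        for suffix, p_suf in p_suffix_given_category.get(category, {}).items():
            candidates[stem + suffix] = p_stem[stem] * p_cat * p_suf
    return max(candidates, key=candidates.get)

p_stem = {"walk": 0.6}
p_cat = {"walk": {"verb": 0.9, "noun": 0.1}}
p_suf = {"verb": {"ed": 0.5, "ing": 0.4}, "noun": {"s": 1.0}}
print(predict_word("walk", p_stem, p_cat, p_suf))  # walked
```

Decoupling the suffix model from the stem model keeps the tables small while the category variable carries the grammatical agreement.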
-
Publication number: 20160093302
Abstract: Systems and methods are provided for converting taxiway voice commands into taxiway textual commands. In various embodiments, the systems can comprise a radio receiver that is configured to receive the taxiway voice commands from an air traffic control center, a voice recognition processor coupled to the radio receiver that is configured to receive and convert the taxiway voice commands into the taxiway textual commands, and/or a taxiway clearance display coupled to the voice recognition processor that is configured to receive and display the taxiway textual commands.
Type: Application
Filed: September 26, 2014
Publication date: March 31, 2016
Applicant: HONEYWELL INTERNATIONAL INC.
Inventors: Jan Bilek, Vaclav Pfeifer, Matej Dusik, Tomas Kralicek
-
Publication number: 20160093303
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for communicating information about transcription progress from a unified messaging (UM) server to a UM client. In one embodiment, the transcription progress describes speech to text transcription of speech messages such as voicemail. The UM server authenticates and establishes a session with a UM client, then receives a get message list request from a UM client as of a first time, responds to the get message list request with a view of a state of messages and available transcriptions for transcribable messages in a list of messages associated with the get message list call at the first time, and, at a second time subsequent to the first time, transmits to the UM client a notification that provides an indication of progress for at least one transcription not yet complete in the list of messages. The messages can include video.
Type: Application
Filed: December 9, 2015
Publication date: March 31, 2016
Inventors: Mehrad YASREBI, James JACKSON, John E. LEMAY
-
Publication number: 20160093304
Abstract: Systems and processes for generating a speaker profile for use in performing speaker identification for a virtual assistant are provided. One example process can include receiving an audio input including user speech and determining whether a speaker of the user speech is a predetermined user based on a speaker profile for the predetermined user. In response to determining that the speaker of the user speech is the predetermined user, the user speech can be added to the speaker profile and operation of the virtual assistant can be triggered. In response to determining that the speaker of the user speech is not the predetermined user, the user speech can be added to an alternate speaker profile and operation of the virtual assistant may not be triggered. In some examples, contextual information can be used to verify results produced by the speaker identification process.
Type: Application
Filed: August 25, 2015
Publication date: March 31, 2016
Inventors: Yoon KIM, Sachin S. KAJAREKAR
-
Publication number: 20160093305
Abstract: Systems and methods for bio-phonetic multi-phrase speaker identity verification are disclosed. Generally, a speaker identity verification engine generates a dynamic phrase including at least one dynamically-generated word. The speaker identity verification engine prompts a user to speak the dynamic phrase and receives a dynamic phrase utterance. The speaker identity verification engine extracts at least one voice characteristic from the dynamic phrase utterance and compares the at least one voice characteristic with a voice profile to generate a score. The speaker identity verification engine then determines whether to accept a speaker identity claim based on the score.
Type: Application
Filed: December 9, 2015
Publication date: March 31, 2016
Inventor: Hisao M. CHANG
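The flow above has two parts: generating an unpredictable challenge phrase (so a replayed recording of a fixed passphrase cannot pass) and scoring the spoken response against a stored voice profile. A toy sketch; the word list, feature vectors, and inverse-distance score are all invented stand-ins for real biometric features and scoring:

```python
import random

def make_dynamic_phrase(wordlist, n=3, seed=None):
    """Generate a dynamic challenge phrase of randomly chosen words."""
    rng = random.Random(seed)
    return " ".join(rng.choice(wordlist) for _ in range(n))

def verify(voice_characteristics, profile, threshold=0.8):
    """Score extracted voice characteristics against a stored profile.
    Both are toy feature vectors; the score is a normalized inverse
    Euclidean distance standing in for a real biometric score."""
    dist = sum((a - b) ** 2 for a, b in zip(voice_characteristics, profile)) ** 0.5
    score = 1.0 / (1.0 + dist)
    return score >= threshold

phrase = make_dynamic_phrase(["red", "seven", "maple", "orbit"], seed=42)
print(verify([0.9, 1.1], profile=[1.0, 1.0]))  # True: close match, claim accepted
```

A production engine would also run speech recognition on the utterance to confirm the dynamic phrase itself was actually spoken.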
-
Publication number: 20160093306
Abstract: A method includes: receiving a first speech frame; identifying a first codec mode based at least in part on a Codec Mode Command (CMC) comprising the first speech frame; identifying a second codec mode based at least in part on a downlink (DL) Codec Mode Indication (DCMI) comprising the first speech frame; determining, based at least in part on a current uplink (UL) codec mode, to apply one of the first codec mode, the second codec mode, and a third codec mode having a higher bit rate than the first codec mode; and applying one of the first codec mode, the second codec mode, and the third codec mode.
Type: Application
Filed: November 6, 2014
Publication date: March 31, 2016
Inventors: Divaydeep Sikri, Neha Goel, Jafar Mohseni, Mungal Singh Dhanda
-
Publication number: 20160093307
Abstract: Provided are systems and methods for reducing end-to-end latency. An example method includes configuring an interface, between a codec and a baseband or application processor, to operate in a burst mode. Using the burst mode, a transfer of real-time data is performed between the codec and the baseband or application processor at a high rate. The high rate is defined as a rate faster than a real-time rate. The exemplary method includes padding data in a time period remaining after the transfer, at the high rate, of a sample of the real-time data samples. The padding of the data may be configured such that the data can be ignored by the receiving component. The interface can include a Serial Low-power Inter-chip Media Bus (SLIMbus). Power consumption may be reduced for the SLIMbus by utilizing the gear-shifting or clock-stopping SLIMbus features.
Type: Application
Filed: September 25, 2015
Publication date: March 31, 2016
Inventors: Niel D. Warren, Sean Mahnken
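The padding arithmetic above can be illustrated with byte counts: a sample sent at a rate N times faster than real time occupies only 1/N of its slot, and the remainder is filled with pad bytes the receiver discards. The framing below is a guessed illustration, not the actual SLIMbus frame format:

```python
def burst_frame(sample_bytes, interface_rate, realtime_rate, pad_byte=0x00):
    """Pad a burst-mode transfer so the receiver can ignore the filler.

    A sample that takes 1/realtime_rate seconds to play is sent in
    1/interface_rate seconds; the rest of the real-time slot is filled
    with pad bytes. Rates are in bytes per second.
    """
    # Total bytes the slot could carry at the fast interface rate.
    slot_bytes = int(len(sample_bytes) * interface_rate / realtime_rate)
    return sample_bytes + bytes([pad_byte]) * (slot_bytes - len(sample_bytes))

frame = burst_frame(b"\x01\x02\x03\x04", interface_rate=4, realtime_rate=1)
print(len(frame))  # 16: 4 data bytes followed by 12 pad bytes
```

Bursting and then idling is what lets the bus exploit gear shifting or clock stopping for the rest of the slot.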
-
Publication number: 20160093308
Abstract: A device configured to decode a bitstream, comprising a memory and one or more processors, may be configured to perform the techniques herein. The memory may be configured to store a reconstructed plurality of weights used to approximate the multi-directional V-vector in the higher order ambisonics domain from a past time segment; and the one or more processors may be configured to extract, from the bitstream, a weight index, retrieve, from the memory, the reconstructed plurality of weights from the past time segment, vector dequantize the weight index to determine a plurality of residual weight errors, and reconstruct a plurality of weights for a current time segment based on the plurality of residual weight errors and the reconstructed plurality of weights used to approximate the multi-directional V-vector in the higher order ambisonics domain from the past time segment.
Type: Application
Filed: September 18, 2015
Publication date: March 31, 2016
Inventor: Moo Young Kim
-
Publication number: 20160093309
Abstract: A digital watermark embedding device includes a generating unit that makes use of a key random number which is input, and outputs a filter for determining a first band and a second band which represent at least a single pair of frequency bands in which a digital watermark bit is to be embedded; and an embedding unit that, when the digital watermark bit is to be embedded in a unit frame of a voice signal which is input, varies a sum of amplitude spectrum intensities of at least one of the first band and the second band in such a way that a first sum of amplitude spectrum intensities of the first band is greater than a second sum of amplitude spectrum intensities of the second band.
Type: Application
Filed: December 9, 2015
Publication date: March 31, 2016
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventor: Masanobu NAKAMURA
-
Publication number: 20160093310
Abstract: The present invention relates to a new method and apparatus for improvement of High Frequency Reconstruction (HFR) techniques using frequency translation or folding or a combination thereof. The proposed invention is applicable to audio source coding systems, and offers significantly reduced computational complexity. This is accomplished by means of frequency translation or folding in the subband domain, preferably integrated with spectral envelope adjustment in the same domain. The concept of dissonance guard-band filtering is further presented. The proposed invention offers a low-complexity, intermediate quality HFR method useful in speech and natural audio coding applications.
Type: Application
Filed: December 10, 2015
Publication date: March 31, 2016
Applicant: Dolby International AB
Inventors: Lars G. Liljeryd, Per Ekstrand, Fredrik Henn, Kristofer Kjoerling
-
Publication number: 20160093311
Abstract: A device comprising a memory and one or more processors may be configured to extract, from the bitstream, a type of quantization mode. The one or more processors may also be configured to switch, based on the type of quantization mode, between non-predictive vector dequantization to reconstruct a first set of one or more weights used to approximate the multi-directional V-vector in the higher order ambisonics domain, and predictive vector dequantization to reconstruct a second set of one or more weights used to approximate the multi-directional V-vector in the higher order ambisonics domain. The memory may be configured to store the reconstructed first set of one or more weights used to approximate the multi-directional V-vector in the higher order ambisonics domain, and the reconstructed second set of one or more weights used to approximate the multi-directional V-vector in the higher order ambisonics domain.
Type: Application
Filed: September 18, 2015
Publication date: March 31, 2016
Inventors: Moo Young Kim, Nils Günther Peters
-
Publication number: 20160093312
Abstract: In one embodiment, an audio decoder for decoding an audio bitstream is disclosed. The decoder includes a first decoding module adapted to operate in a first coding mode and a second decoding module adapted to operate in a second coding mode, the second coding mode being different from the first coding mode. The decoder further includes a pitch filter in either the first coding mode or the second coding mode, the pitch filter adapted to filter a preliminary audio signal generated by the first decoding module or the second decoding module to obtain a filtered signal. The pitch filter is selectively enabled or disabled based on a value of a first parameter encoded in the audio bitstream, the first parameter being distinct from a second parameter encoded in the audio bitstream, the second parameter specifying a current coding mode of the audio decoder.
Type: Application
Filed: November 9, 2015
Publication date: March 31, 2016
Applicant: DOLBY INTERNATIONAL AB
Inventors: Barbara RESCH, Kristofer KJÖRLING, Lars VILLEMOES
-
Publication number: 20160093313
Abstract: A "running range normalization" method includes computing running estimates of the range of values of features useful for voice activity detection (VAD) and normalizing the features by mapping them to a desired range. Running range normalization includes computation of running estimates of the minimum and maximum values of VAD features and normalizing the feature values by mapping the original range to a desired range. Smoothing coefficients are optionally selected to directionally bias a rate of change of at least one of the running estimates of the minimum and maximum values. The normalized VAD feature parameters are used to train a machine learning algorithm to detect voice activity and to use the trained machine learning algorithm to isolate or enhance the speech component of the audio data.
Type: Application
Filed: September 25, 2015
Publication date: March 31, 2016
Applicant: CYPHER, LLC
Inventor: Earl Vickers
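The running min/max tracking with directionally biased smoothing can be sketched in a few lines. The asymmetric coefficients below (fast tracking when a sample extends the range, slow decay otherwise) are illustrative values, not from the patent:

```python
def running_range_normalize(features, alpha_up=0.95, alpha_down=0.999):
    """Normalize VAD features by running min/max estimates, mapping each
    feature into [0, 1]. Asymmetric smoothing coefficients directionally
    bias how fast the running min and max adapt."""
    lo = hi = features[0]
    out = []
    for x in features:
        # A sample outside the range moves the estimate immediately;
        # otherwise the estimate decays slowly toward the signal.
        lo = min(x, alpha_down * lo + (1 - alpha_down) * x)
        hi = max(x, alpha_up * hi + (1 - alpha_up) * x)
        span = hi - lo or 1.0  # guard against a zero-width range
        out.append((x - lo) / span)
    return out

norm = running_range_normalize([0.0, 5.0, 2.0, 10.0, 1.0])
print(all(0.0 <= v <= 1.0 for v in norm))  # True: every value lands in [0, 1]
```

Because `lo <= x <= hi` holds by construction after each update, the mapped values are guaranteed to stay in the target range, which is what makes them usable as stable inputs to a downstream VAD classifier.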
-
Publication number: 20160093314
Abstract: An audio communication system includes a generation unit that superimposes an addition sound, having a volume level determined on the basis of a voice acquired by a voice acquisition unit, on an input voice acquired by the voice acquisition unit of a transmission terminal and generates a synthesis sound, and a transmission unit that transmits a signal of the synthesis sound generated by the generation unit to a reception terminal.
Type: Application
Filed: April 30, 2013
Publication date: March 31, 2016
Applicant: RAKUTEN, INC.
Inventor: Hisanori YAMAHARA
-
Publication number: 20160093315
Abstract: According to one embodiment, an electronic device includes circuitry configured to display, during recording, a first mark indicative of a sound waveform collected from a microphone and a second mark indicative of a section of voice collected from the microphone, after processing to detect the section of voice.
Type: Application
Filed: April 16, 2015
Publication date: March 31, 2016
Inventor: Yusaku Kikugawa
-
Publication number: 20160093316
Abstract: Unwanted audio, such as explicit language, may be removed during audio playback. An audio player may identify and remove unwanted audio while playing an audio stream. Unwanted audio may be replaced with alternate audio, such as non-explicit lyrics, a "beep", or silence. Metadata may be used to describe the location of unwanted audio within an audio stream to enable the removal or replacement of the unwanted audio with alternate audio. An audio player may switch between clean and explicit versions of a recording based on the locations described in the metadata. The metadata, as well as both the clean and explicit versions of the audio data, may be part of a single audio file, or the metadata may be separate from the audio data. Additionally, real-time recognition analysis may be used to identify unwanted audio during audio playback.
Type: Application
Filed: September 30, 2014
Publication date: March 31, 2016
Applicant: APPLE INC.
Inventors: Baptiste P. Paquier, Anthony J. Guetta, Aram M. Lindahl, Eric A. Allamanche
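The metadata-driven replacement above can be sketched over a list of samples, with `(start, end)` index pairs standing in for the time ranges real metadata would carry:

```python
def apply_clean_edits(samples, edits, replacement=0):
    """Replace unwanted spans of an audio stream using edit metadata.

    edits: list of (start, end) sample indices marking unwanted audio;
    replacement=0 silences the span. A "beep" or the clean-version
    samples for that range could be spliced in instead.
    """
    out = list(samples)
    for start, end in edits:
        out[start:end] = [replacement] * (end - start)
    return out

# Toy stream: samples 2-3 are flagged as unwanted by the metadata.
clean = apply_clean_edits([1, 2, 3, 4, 5, 6], edits=[(2, 4)])
print(clean)  # [1, 2, 0, 0, 5, 6]
```

Keeping the edit list separate from the audio is what lets a single file serve both clean and explicit playback modes.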
-
Publication number: 20160093317
Abstract: A computer-implemented method for preventing overwriting of data, e.g., on a magnetic medium, includes receiving a write command to write to a magnetic tape. The current location of the magnetic tape is determined. A determination is also made as to whether data corresponding to the write command is at least one of: a size and type specified for a block at the current location. Execution of the write command is disallowed in response to determining that the data corresponding to the write command is not of the specified size and/or type.
Type: Application
Filed: August 28, 2015
Publication date: March 31, 2016
Inventor: Randolph E. Stiarwalt
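The guard condition above is a simple check of the incoming data against what the block at the current tape position expects. The dict representation of a block below is a guessed illustration; the abstract does not fix one:

```python
def allow_write(current_block, data_size, data_type):
    """Disallow a write unless the data matches the size and type the
    block at the tape's current location specifies.

    current_block: dict with the expected 'size' (bytes) and 'type'.
    """
    return (data_size == current_block["size"]
            and data_type == current_block["type"])

block = {"size": 4096, "type": "label"}
print(allow_write(block, 4096, "label"))  # True: write proceeds
print(allow_write(block, 512, "label"))   # False: wrong size, write disallowed
```

Per the abstract, matching either size or type alone may suffice in some embodiments; the sketch requires both for simplicity.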
-
Publication number: 20160093318
Abstract: A magnetic head has a magnetic head slider that includes a recording element that generates a recording signal magnetic field, a microwave magnetic field generating element that generates a microwave magnetic field, a terminal electrode, and a first transmission line that interconnects the terminal electrode and the microwave magnetic field generating element. A second transmission line is connected to the terminal electrode, the second transmission line being used to transmit a microwave signal from the outside of the magnetic head slider to the magnetic head slider. A capacitor connected to the first transmission line is provided between the terminal electrode and the microwave magnetic field generating element. Accordingly, in the magnetic head, a microwave signal is efficiently propagated.
Type: Application
Filed: September 8, 2015
Publication date: March 31, 2016
Inventors: Tomohiko SHIBUYA, Atsushi AJIOKA, Sadaharu YONEDA, Atsushi TSUMITA
-
Publication number: 20160093319
Abstract: A magnetoresistive (MR) sensor including a synthetic antiferromagnetic (SAF) structure that is magnetically coupled to a side shield element. The SAF structure includes at least one magnetic amorphous layer that is an alloy of a ferromagnetic material and a refractory material. The amorphous magnetic layer may be in contact with a non-magnetic layer and antiferromagnetically coupled to a layer in contact with an opposite surface of the non-magnetic layer.
Type: Application
Filed: December 8, 2015
Publication date: March 31, 2016
Inventors: Eric W. Singleton, Liwen Tan, Jae-Young Yi
-
Publication number: 20160093320
Abstract: Embodiments of the present invention provide methods, systems, and computer program products for detecting damage to tunneling magnetoresistance (TMR) sensors. In one embodiment, resistances of a TMR sensor are measured upon application of one or both of negative polarity bias current and positive polarity bias current at a plurality of current magnitudes. Resistances of the TMR sensor can then be analyzed with respect to current, voltage, voltage squared, and/or power, including analyses of changes to slopes calculated with these values and hysteresis-induced fluctuations, all of which can be used to detect damage to the TMR sensor. The present invention also describes methods to utilize the measured values of neighbor TMR sensors to distinguish normal versus damaged parts for head elements containing multiple TMR read elements.
Type: Application
Filed: September 30, 2014
Publication date: March 31, 2016
Inventors: Milad Aria, Icko E. T. Iben, Guillermo F. Paniagua
-
Publication number: 20160093321Abstract: The magnetic recording medium comprises a magnetic layer comprising ferromagnetic powder and binder on a nonmagnetic support, and further comprises a compound denoted by Formula (1): wherein, in Formula (1), X denotes —O—, —S—, or —NR1—; each of R and R1 independently denotes a hydrogen atom or a monovalent substituent; L denotes a divalent connecting group; Z denotes a partial structure of valence n comprising at least one group selected from the group consisting of carboxyl groups and carboxylate groups; m denotes an integer of greater than or equal to 2, and n denotes an integer of greater than or equal to 1.Type: ApplicationFiled: September 29, 2015Publication date: March 31, 2016Applicant: FUJIFILM CORPORATIONInventors: Toshihide AOSHIMA, Wataru KIKUCHI, Kazutoshi KATAYAMA, Tatsuo MIKAMI
-
Publication number: 20160093322Abstract: The magnetic tape comprises, on a nonmagnetic support, a nonmagnetic layer comprising nonmagnetic powder and binder, and on the nonmagnetic layer, a magnetic layer comprising ferromagnetic powder, nonmagnetic powder, and binder, wherein a total thickness of the magnetic tape is less than or equal to 4.80 μm, and a coefficient of friction as measured on a base portion of a surface of the magnetic layer is less than or equal to 0.35.Type: ApplicationFiled: September 28, 2015Publication date: March 31, 2016Applicant: FUJIFILM CORPORATIONInventors: Norihito KASADA, Masahito OYANAGI, Toshio TADA, Yasuhiro KAWATANI
-
Publication number: 20160093323Abstract: The magnetic tape comprises a nonmagnetic layer comprising nonmagnetic powder and binder on a nonmagnetic support, and comprises a magnetic layer comprising ferromagnetic powder and binder on the nonmagnetic layer, wherein a fatty acid ester, a fatty acid amide, and a fatty acid are contained in either one or both of the magnetic layer and the nonmagnetic layer, with the magnetic layer and nonmagnetic layer each comprising at least one selected from the group consisting of a fatty acid ester, a fatty acid amide, and a fatty acid, a quantity of fatty acid ester per unit area of the magnetic layer in extraction components extracted from a surface of the magnetic layer with n-hexane falls within a range of 1.00 mg/m2 to 10.Type: ApplicationFiled: September 30, 2015Publication date: March 31, 2016Applicant: FUJIFILM CORPORATIONInventor: Kazufumi OMURA
-
Publication number: 20160093324Abstract: Disclosed are techniques and systems for manufacturing an optical disc having a stochastic (i.e., non-deterministic) anti-piracy feature in the form of a multi-spiral structure, and for verifying the feature on the optical disc to authenticate the disc for playback. The multi-spiral structure may be comprised of multiple partially interleaved, and partially overlapping, spiral data tracks formed in a designated area of the optical disc. A process of forming the multi-spiral structure may include forming, in the designated area, a first spiral data track with first track pitch and a second spiral data track with second track pitch that is different than the first track pitch. The multi-spiral structure may be analyzed to determine verification parameters for verifying the multi-spiral structure, and those verification parameters may be encrypted so that they may be subsequently decrypted and used to verify the multi-spiral structure on a disc reading device.Type: ApplicationFiled: September 30, 2014Publication date: March 31, 2016Inventors: Felix Domke, Kenneth M McGrail
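The abstract above describes verifying a multi-spiral structure against previously derived verification parameters. A toy Python sketch of such a check follows; the pitch values, tolerance, and the assumption that the parameters are already decrypted are all illustrative, not details from the patent:

```python
def verify_multispiral(measured_pitches, expected_pitches, tolerance=0.05):
    """Check measured track pitches of the multi-spiral area against
    verification parameters (assumed already decrypted here).

    The two expected pitches differ by design, per the abstract;
    all numeric values in this sketch are illustrative.
    """
    return all(
        any(abs(pitch - e) <= tolerance for e in expected_pitches)
        for pitch in measured_pitches
    )
```

A disc whose measured pitches all match one of the expected spiral pitches verifies; an out-of-spec pitch fails authentication.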
-
Publication number: 20160093325Abstract: According to one embodiment, a method for processing data includes directing first data through a first FIR gain module in response to a determination that the first data is being read from a magnetic tape medium in an asynchronous mode to control FIR gain of the first data. The method also includes directing second data through a second FIR gain module in response to a determination that the second data is being read from the magnetic tape medium in a synchronous mode to control FIR gain of the second data. Other systems and methods for processing data using dynamic gain control with adaptive equalizers are presented according to more embodiments.Type: ApplicationFiled: December 2, 2015Publication date: March 31, 2016Inventors: Katherine T. Blinick, Robert A. Hutchins, Sedat Oelcer
-
Publication number: 20160093326Abstract: According to one embodiment, a magnetic tape drive includes a controller configured to direct first data through a first finite impulse response (FIR) gain module in response to a determination that the first data is being read from a magnetic tape medium in an asynchronous mode to control FIR gain of the first data. The controller is also configured to direct second data through a second FIR gain module in response to a determination that the second data is being read from the magnetic tape medium in a synchronous mode to control FIR gain of the second data. A FIR gain value of the second FIR gain module is automatically controlled. Other systems for dynamic gain control with adaptive equalizers are described according to more embodiments.Type: ApplicationFiled: December 2, 2015Publication date: March 31, 2016Inventors: Katherine T. Blinick, Robert A. Hutchins, Sedat Oelcer
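The two abstracts above describe routing readback data through one of two FIR gain modules depending on whether the tape is read asynchronously or synchronously, with the second module's gain controlled automatically. A minimal Python sketch of that routing follows; the fixed 0.5 gain, the peak-based automatic gain, and the function names are illustrative assumptions, not the patented design:

```python
def fir_gain(samples, taps, gain):
    """Apply a FIR filter to `samples`, then scale the output by `gain`."""
    out = []
    for n in range(len(samples)):
        acc = sum(taps[k] * samples[n - k] for k in range(len(taps)) if n - k >= 0)
        out.append(gain * acc)
    return out

def route_readback(samples, taps, mode, target=1.0):
    """Direct data through one of two FIR gain paths depending on read mode.

    The asynchronous path uses a fixed gain here, while the synchronous
    path derives its gain automatically from signal amplitude; both
    choices are illustrative stand-ins for the modules in the abstracts.
    """
    if mode == "async":
        return fir_gain(samples, taps, gain=0.5)           # first FIR gain module
    peak = max(abs(s) for s in samples) or 1.0
    return fir_gain(samples, taps, gain=target / peak)     # second, auto-controlled
```

With a pass-through tap, the synchronous path normalizes the peak to the target while the asynchronous path applies its fixed gain.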
-
Publication number: 20160093327Abstract: A method for recording a plurality of audio files, which can be played individually and, at least in pairs, synchronously and which can be modified individually with respect to playing parameters, said method being implemented by means of electronic processing hardware and software means, including: —at least two independent devices originating sound signals, comprising storage means or a microphone input or an in-line input; —means for playing audio files, and —software means for playing one or more audio files individually or synchronously, wherein (step 101) at least two independent audio files are acquired in real time and simultaneously, from at least two sound signal sources, and (step 102) they are synchronized with one another by means of an encoder that encodes said same files, making them of the same time duration, obtaining at least two audio files of the same length and independent from one another, and said audio files are included in a respective container file, which is provided with related identifiType: ApplicationFiled: May 8, 2014Publication date: March 31, 2016Inventors: Pietro DI FRANCO, Francesco MORANA
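The abstract above describes an encoder that makes independently captured audio files the same time duration so they can be played synchronously. A toy Python sketch of that equalization step follows; padding with silence is an illustrative stand-in for the encoder, not the patented method:

```python
def equalize_lengths(tracks, silence=0.0):
    """Make independently captured audio tracks the same duration by
    padding the shorter ones with silence.

    tracks : list of sample lists, one per sound source
    Returns new lists, all of the same length, independent of one another.
    """
    longest = max(len(t) for t in tracks)
    return [t + [silence] * (longest - len(t)) for t in tracks]
```

The equalized tracks can then be stored side by side in a container file and played back individually or in sync.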
-
Publication number: 20160093328Abstract: A product according to one embodiment includes a magnetic recording tape having at least one first servo track, and a supplemental servo track positioned in a spare area located within a data band of the magnetic recording tape. An apparatus according to one embodiment includes a magnetic head and at least one module having an array of transducers. The apparatus is configured to read and/or write to magnetic recording media having at least one first servo track, and a supplemental servo track positioned in a spare area located within a data band of the magnetic recording tape.Type: ApplicationFiled: September 16, 2015Publication date: March 31, 2016Inventor: Robert G. Biskeborn
-
Publication number: 20160093329Abstract: A system and method that time delays a playback from a first feed at a first time to a second feed at a second time. The method includes recording the first feed that is received at the first time to be used at least partially as a playback of the second feed at the second time. The second time has a predetermined delay relative to the first time. The method includes determining whether the first feed has a discrepancy in the actual playback from a desired playback. The discrepancy occurs at a known time and lasts a known time amount. The method includes transmitting the playback to the second feed after the predetermined delay. A fix is aired instead of the playback for the known time amount corresponding to the discrepancy.Type: ApplicationFiled: September 30, 2014Publication date: March 31, 2016Inventors: Gregg William Riedel, Jeff Hess, Scott Danahy
-
Publication number: 20160093330Abstract: A system that includes at least two time delayed playback (TDP) devices for recording and playback. Each of the two TDP devices may perform a method that includes recording a first feed to be used at least partially as a playback of a second feed, determining whether a failure results in a missed feed portion from the recording of the first feed, the missed feed portion being at a known time and lasting a known time amount. When there is a failure, the method includes providing a backup recording corresponding to the missed feed portion from the other TDP device that is recording the first feed in parallel with the first TDP device and transmitting the playback of the second feed including the backup recording at the known time and lasting the known time amount.Type: ApplicationFiled: September 30, 2014Publication date: March 31, 2016Inventors: Gregg William Riedel, Jeff Hess, Scott Danahy
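The two abstracts above describe delayed-playback devices that patch a known gap in the primary recording with a backup copy from a peer device recording the same feed in parallel. A minimal Python sketch of the splicing step follows; representing the feed as per-second frames with `None` marking missed seconds is an illustrative assumption:

```python
def build_playback(primary, backup, missed):
    """Splice a peer recorder's copy over the portion the primary missed.

    primary / backup : per-second recordings of the same first feed
                       (None marks a second the recorder failed to capture)
    missed           : (start, length) of the known gap, in seconds
    Returns the frames to transmit as the delayed second feed.
    """
    start, length = missed
    out = list(primary)
    for t in range(start, start + length):
        if out[t] is None:          # primary lost this second
            out[t] = backup[t]      # the peer recorded the feed in parallel
    return out
```

Because the playback is transmitted only after the predetermined delay, the splice can be completed before the gap would ever air.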
-
Publication number: 20160093331Abstract: A method to edit digital videos includes dividing a first digital video into digital video segments, presenting a graphical user interface to select the digital video segments, receiving one or more selections of one or more of the digital video segments through the graphical user interface, and saving a second digital video with the selected digital video segments.Type: ApplicationFiled: October 30, 2014Publication date: March 31, 2016Inventors: Hui Deng, Peng Yang, Chuanqun Mei, Kaixuan Mao
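The abstract above describes dividing a video into segments, letting the user select some, and saving a second video from the selection. A toy Python sketch of those steps follows; modeling the video as a list of frames and using fixed-length segments are illustrative simplifications:

```python
def split_segments(frames, seg_len):
    """Divide a video (a frame list here) into fixed-length segments."""
    return [frames[i:i + seg_len] for i in range(0, len(frames), seg_len)]

def save_selection(segments, selected):
    """Concatenate the user-selected segments into a second video."""
    out = []
    for idx in selected:
        out.extend(segments[idx])
    return out
```

A GUI would present the segments from `split_segments` and pass the chosen indices to `save_selection`.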
-
Publication number: 20160093332Abstract: An image processing system (IPS) is provided for creating a video-linked photobook. The method includes: receiving a video file including video content; processing the video file to identify a series of still image frames extracted from the video content; formatting the series of still image frames into a pictorial compilation; storing in a memory the pictorial compilation, and an association between the pictorial compilation and the video file; and transmitting from the image processing system computer-readable instructions for printing the pictorial compilation. Accordingly, images excerpted from a video file can be used to create a printed pictorial compilation. Imaging of the pictorial compilation with a smartphone/tablet PC can responsively result in display of the associated video file on the smartphone/tablet PC.Type: ApplicationFiled: September 25, 2015Publication date: March 31, 2016Applicant: ZOOMIN USA INC.Inventor: Sunny B. Rao
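The abstract above describes extracting a series of still image frames from video content to format into a pictorial compilation. A minimal Python sketch of one way to pick the stills follows; evenly spaced sampling is an illustrative choice, not the patented extraction method:

```python
def extract_stills(num_frames, num_stills):
    """Pick evenly spaced frame indices to excerpt from a video.

    A real system would decode the frames; this sketch returns only the
    indices of the stills that would go into the printed compilation.
    """
    if num_stills <= 1:
        return [0]
    step = (num_frames - 1) / (num_stills - 1)
    return [round(i * step) for i in range(num_stills)]
```

The selected frames would then be laid out into the photobook pages and associated with the source video file.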
-
Publication number: 20160093333Abstract: A recording medium recorded with a multi-track media file, a method for editing a multi-track media file and an apparatus for editing a multi-track media file. The apparatus for editing a media file stores a multi-track media file including an audio track and a video track corresponding to the audio track, receives an output adjustment command for adjusting an output of an audio or video track, generates a volume adjustment value according to the output adjustment command, and records the generated volume adjustment value in the multi-track media file, thereby realizing the present invention. According to the present invention, users may produce his/her own unique multimedia file by editing according to his/her taste, for example, by inserting his/her voice, in place of an existing audio, into a multimedia file such as a music video file, or inserting a video taken on his/her own, in place of an existing video, thereinto.Type: ApplicationFiled: April 11, 2014Publication date: March 31, 2016Inventor: Cheol SEOK
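The abstract above describes recording a volume-adjustment value into a multi-track media file and applying it on playback. A toy Python sketch of that flow follows; the dict-based container and the field names are illustrative stand-ins for the actual file format:

```python
def edit_multitrack(media, track_name, volume):
    """Record a volume-adjustment value for one track of a multi-track file.

    `media` is a dict standing in for the container: track name -> samples,
    plus an 'adjustments' map. Field names are illustrative.
    """
    media.setdefault("adjustments", {})[track_name] = volume
    return media

def render_track(media, track_name):
    """Apply the recorded adjustment when the track is played back."""
    gain = media.get("adjustments", {}).get(track_name, 1.0)
    return [s * gain for s in media[track_name]]
```

The original samples are left untouched; only the adjustment value is written into the file, so the edit is non-destructive.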
-
Publication number: 20160093334Abstract: Embodiments presented herein describe techniques for generating a story graph using a collection of digital media, such as images and video. The story graph presents a structure for activities, events, and locales commonly occurring in sets of photographs taken by different individuals across a given location (e.g., a theme park, tourist attraction, convention, etc.). To build a story graph, streams from sets of digital media are generated. Each stream corresponds to media (e.g., images or video) taken in sequence at the location by an individual (or related group of individuals) over a period of time. For each stream, features from each media are extracted relative to the stream. Clusters of media are generated and are connected by directed edges. The connections indicate a path observed to have occurred in the streams from one cluster to another cluster.Type: ApplicationFiled: September 30, 2014Publication date: March 31, 2016Inventors: Gunhee KIM, Leonid SIGAL
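The abstract above describes clustering media from per-person streams and connecting the clusters with directed edges that record observed paths from one cluster to another. A toy Python sketch of the graph-building step follows; the precomputed cluster mapping stands in for the patented feature extraction and clustering:

```python
from collections import defaultdict

def build_story_graph(streams, cluster_of):
    """Build a story graph from per-person media streams.

    streams    : list of streams; each is a time-ordered list of media ids
    cluster_of : maps a media id to its cluster (e.g. a locale in the venue)
    Returns directed edges weighted by how many streams moved
    cluster -> cluster, i.e. the paths observed in the streams.
    """
    edges = defaultdict(int)
    for stream in streams:
        clusters = [cluster_of[m] for m in stream]
        for a, b in zip(clusters, clusters[1:]):
            if a != b:                  # an observed move between clusters
                edges[(a, b)] += 1
    return dict(edges)
```

Edge weights accumulate across visitors, so frequently traveled paths through the location dominate the resulting graph.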