Patent Applications Published on February 21, 2019
  • Publication number: 20190057667
    Abstract: A display panel driver includes a timing controller and a data driver. The timing controller generates a data signal based on input image data. The data driver receives the data signal, converts the data signal into a data voltage and outputs the data voltage to a display panel. The data signal includes positive data and negative data. The data driver includes a data skew compensating circuit which samples the positive data using the negative data and compensates for skew of the data signal.
    Type: Application
    Filed: May 16, 2018
    Publication date: February 21, 2019
    Inventors: KIHYUN PYUN, YUNMI KIM, SUNG-JUN KIM, JUHYUN KIM, MINYOUNG PARK, HEEBUM PARK
  • Publication number: 20190057668
    Abstract: The disclosure provides a display drive circuit, a display device, and a method for driving the same. The display drive circuit includes a control circuit arranged between a power supply management circuit and a level conversion circuit. Upon determining that an ambient temperature is below a set temperature and/or that an output of a gate drive circuit of a display panel is abnormal, the control circuit boosts the standard gate turn-on voltage signal provided by the power supply management circuit and outputs a higher gate turn-on voltage signal to the level conversion circuit, so that the level conversion circuit generates and outputs a corresponding gate drive signal at the higher voltage.
    Type: Application
    Filed: June 12, 2018
    Publication date: February 21, 2019
    Inventors: Lijun XIONG, Zhi ZHANG, Heecheol KIM, Xiuzhu TANG, Shuai CHEN, Jingpeng ZHAO, Xing DONG, Xiaolong LIU, Gang CHEN, Yanli ZHAO
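    Illustrative sketch: The control decision described above for publication 20190057668 amounts to boosting the gate turn-on voltage under a cold-temperature or abnormal-output condition. The Python sketch below illustrates only that conditional logic; the threshold and voltage values are assumed for illustration and are not taken from the patent.
      # Minimal sketch of the control-circuit decision in publication 20190057668.
      # Threshold and voltage values are illustrative assumptions only.
      STANDARD_GATE_ON_VOLTAGE = 28.0   # volts, assumed standard gate turn-on level
      BOOSTED_GATE_ON_VOLTAGE = 32.0    # volts, assumed boosted level
      LOW_TEMPERATURE_LIMIT = -10.0     # degrees Celsius, assumed "set temperature"

      def select_gate_on_voltage(ambient_temp_c: float, gate_output_abnormal: bool) -> float:
          """Return the gate turn-on voltage handed to the level conversion circuit.

          The voltage is boosted when the ambient temperature is below the set
          temperature and/or the gate drive circuit output is abnormal, as the
          abstract describes; otherwise the standard voltage passes through.
          """
          if ambient_temp_c < LOW_TEMPERATURE_LIMIT or gate_output_abnormal:
              return BOOSTED_GATE_ON_VOLTAGE
          return STANDARD_GATE_ON_VOLTAGE

      if __name__ == "__main__":
          print(select_gate_on_voltage(-20.0, False))  # boosted: cold panel
          print(select_gate_on_voltage(25.0, True))    # boosted: abnormal gate output
          print(select_gate_on_voltage(25.0, False))   # standard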
  • Publication number: 20190057669
    Abstract: A touch panel, a display panel, and a display unit achieving prevention of erroneous detection caused by external noise. The touch panel includes: a plurality of detection scan electrodes extending in a first direction and a plurality of detection electrodes facing the plurality of detection scan electrodes and extending in a second direction which intersects the first direction. A ratio of fringe capacitance to total capacitance between one or more selected detection scan electrodes and a first detection electrode is different from a ratio of fringe capacitance to total capacitance between the one or more selected detection scan electrodes and a second detection electrode. The one or more selected detection scan electrodes are selected, in a desired unit, from the plurality of detection scan electrodes, to be supplied with a selection pulse, and each of the first and the second detection electrodes is selected from the plurality of detection electrodes.
    Type: Application
    Filed: October 22, 2018
    Publication date: February 21, 2019
    Applicant: Japan Display Inc.
    Inventors: Takayuki Nakanishi, Koji Noguchi, Koji Ishizaki, Yasuyuki Teranishi, Takeya Takeuchi
  • Publication number: 20190057670
    Abstract: Methods, systems and apparatus are described to dynamically generate map textures. A client device may obtain map data, which may include one or more shapes described by vector graphics data. Along with the one or more shapes, embodiments may include texture indicators linked to the one or more shapes. Embodiments may render the map data. For one or more shapes, a texture definition may be obtained. Based on the texture definition, a client device may dynamically generate a texture for the shape. The texture may then be applied to the shape to render a current fill portion of the shape. In some embodiments, the rendered map view is displayed.
    Type: Application
    Filed: October 22, 2018
    Publication date: February 21, 2019
    Applicant: Apple Inc.
    Inventors: Marcel Van Os, Patrick S. Piemonte, Billy P. Chen, Christopher Blumenberg
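    Illustrative sketch: The flow described above for publication 20190057670 (shape, linked texture indicator, texture definition, dynamically generated fill) can be sketched as follows. The texture-definition format, names and values below are hypothetical; the patent does not specify a representation.
      # Sketch of the texture flow in publication 20190057670: a shape carries a
      # texture indicator, the indicator resolves to a texture definition, and a
      # fill texture is generated at render time. Formats here are hypothetical.
      import numpy as np

      TEXTURE_DEFINITIONS = {
          "park_stripes": {"kind": "stripes", "period": 8, "fg": 200, "bg": 120},
          "water_solid": {"kind": "solid", "value": 60},
      }

      def generate_texture(definition: dict, height: int, width: int) -> np.ndarray:
          """Dynamically generate a grayscale texture tile from a texture definition."""
          if definition["kind"] == "solid":
              return np.full((height, width), definition["value"], dtype=np.uint8)
          if definition["kind"] == "stripes":
              stripe = (np.arange(width) // definition["period"]) % 2 == 0
              row = np.where(stripe, definition["fg"], definition["bg"]).astype(np.uint8)
              return np.tile(row, (height, 1))
          raise ValueError("unknown texture kind: " + definition["kind"])

      def fill_shape(mask: np.ndarray, texture_indicator: str) -> np.ndarray:
          """Apply the generated texture to the shape's fill region (mask == True)."""
          texture = generate_texture(TEXTURE_DEFINITIONS[texture_indicator], *mask.shape)
          canvas = np.zeros(mask.shape, dtype=np.uint8)
          canvas[mask] = texture[mask]
          return canvas

      if __name__ == "__main__":
          mask = np.tri(32, 32, dtype=bool)   # toy triangular "shape" standing in for vector data
          print(fill_shape(mask, "park_stripes").shape)  # (32, 32)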
  • Publication number: 20190057671
    Abstract: A system for modifying a user interface in a multi-display device environment described herein can include a processor and a memory storing instructions that cause the processor to detect a number of display screens coupled to the system. The instructions can also cause the processor to split an image to generate sub-images based on the number of display screens and a bezel size corresponding to each of the display screens, the sub-images to exclude portions of the image corresponding to the bezel size of each of the display screens. Additionally, the instructions can cause the processor to resize each of the sub-images based on a display size of each of the display screens and display the image by transmitting the sub-images to the display screens.
    Type: Application
    Filed: August 18, 2017
    Publication date: February 21, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Matthias BAER, Bryan K. MAMARIL, Kyle T. KRAL, Kae-Ling J. GURR, Ryan WHITAKER
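    Illustrative sketch: The split-and-resize behaviour described above for publication 20190057671 can be sketched with array slicing. A horizontal side-by-side screen arrangement and nearest-neighbour resizing are simplifying assumptions made here; the patent does not prescribe either.
      # Sketch of publication 20190057671: cut an image into one sub-image per
      # display, drop the pixels that would sit behind each inner bezel, then
      # resize each sub-image to its display. Layout assumptions: screens are
      # arranged side by side and all bezels have the same width in pixels.
      import numpy as np

      def split_with_bezels(image: np.ndarray, num_screens: int, bezel_px: int) -> list:
          """Split the image column-wise and trim bezel_px columns at each inner edge."""
          sub_images = np.array_split(image, num_screens, axis=1)
          trimmed = []
          for i, sub in enumerate(sub_images):
              left = bezel_px if i > 0 else 0                 # inner-left bezel
              right = bezel_px if i < num_screens - 1 else 0  # inner-right bezel
              trimmed.append(sub[:, left:sub.shape[1] - right])
          return trimmed

      def resize_nearest(sub: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
          """Nearest-neighbour resize of a sub-image to a display's resolution."""
          rows = np.arange(out_h) * sub.shape[0] // out_h
          cols = np.arange(out_w) * sub.shape[1] // out_w
          return sub[rows][:, cols]

      if __name__ == "__main__":
          frame = np.arange(1080 * 3840).reshape(1080, 3840)
          parts = split_with_bezels(frame, num_screens=2, bezel_px=20)
          panels = [resize_nearest(p, 1080, 1920) for p in parts]
          print([p.shape for p in panels])  # [(1080, 1920), (1080, 1920)]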
  • Publication number: 20190057672
    Abstract: The present disclosure provides a display apparatus and an operating method thereof. The display apparatus includes a control unit outputting a first signal and a display module coupled to the control unit. The display module continuously displays a first image in a first frame time based on the first signal, the first image has a first pattern, and a first ratio of an area of the first pattern to an area of the first image ranges from 5% to 30%. The first pattern at a first time point in the first frame time has a color located at a first coordinate position in a CIE 1931 chromaticity diagram, the first pattern at a second time point in the first frame time has another color located at a second coordinate position in the CIE 1931 chromaticity diagram, and the first coordinate position is different from the second coordinate position.
    Type: Application
    Filed: July 26, 2018
    Publication date: February 21, 2019
    Inventors: Chung-Wen Yen, Kuo-Cheng Tung
  • Publication number: 20190057673
    Abstract: A method and an apparatus for modifying initial image data for a frame based on a relative level of stimulation of cones in a viewer's eye are disclosed, wherein the modified image data results in reduced contrast between neighboring cones in the viewer's eye.
    Type: Application
    Filed: January 18, 2017
    Publication date: February 21, 2019
    Inventors: Michael Benjamin Selkowe Fertik, Thomas W. Chalberg, Jr., David William Olsen
  • Publication number: 20190057674
    Abstract: The system, device and method may improve location or address identification by using an on-site electronic location identification display device operating in communication over a network with a software application, improving response or delivery times of a driver with reduced reliance on maps, phone calls or geo-location devices. The approach includes hardware and software to communicate, display and clearly identify the location of a home, apartment, business or any other establishment. Various components of the embodiments include wireless communication (e.g. Bluetooth, Wi-Fi etc.) enabled hardware, RF technology, web application software and network (e.g. internet) connectivity. A programmable electronic display device (e.g. LED, LCD or light board) syncs with another device (e.g. phone, tablet, or computer via a router) and, through internet connectivity, provides an illuminated signal indicating the street address or location to a delivery driver or emergency responder, for example.
    Type: Application
    Filed: August 17, 2018
    Publication date: February 21, 2019
    Applicant: JERS Tech, LLC
    Inventors: Stephen Hoppe, Richard Altemus, Jesse Campo, Eric Arndt
  • Publication number: 20190057675
    Abstract: A closed-end woodwind instrument is provided, designed for use with an inventive teaching method, notation system, and repertory. The combined system is designed for teaching the basics of music and to provide a simple, convenient and enjoyable instrument. The instrument is formed of a body, which defines an internal cavity. A mouthpiece has a windway that connects to the internal cavity. The windway allows a user to blow air through the mouthpiece into the cavity and through the instrument. A labium is positioned along the windway, such that it splits air flowing through the windway. A plurality of tone holes are in communication with the internal cavity to allow air to exit the cavity. Upon blocking or opening these tone holes, different tones can be produced by the instrument.
    Type: Application
    Filed: August 16, 2017
    Publication date: February 21, 2019
    Inventors: Wayne Hankin, Roy Sansom
  • Publication number: 20190057676
    Abstract: In the Play By Number Music Notation System a grand staff is placed on top with multiple keyboard images aligned underneath. Each keyboard image represents one “CHUNK” of the song (a melodic phrase) and is identified with a letter (A, B, C, . . . ) on the right side and numbers on the left (01, 02, 03 . . . ) to show the sequence in which each chunk is played. Numbers are placed on the white keys indicating the order in which the keys are pressed (i.e. 1-12 . . . ), and the duration is shown using the familiar plain, bold, and bold-italic type from printing to indicate short, long and longer. When a number is placed on the line between two keys, this indicates you move up the line and play the black key; there is no need to indicate sharp (#) or flat (b). One simply follows the numbers, pressing each key in sequence until the chunk is done.
    Type: Application
    Filed: August 15, 2017
    Publication date: February 21, 2019
    Inventor: Carl Edward Johnson
  • Publication number: 20190057677
    Abstract: Methods and systems are described that are utilized for remotely controlling a musical instrument. A first digital record comprising musical instrument digital commands from a first electronic instrument for a first item of music is accessed. The first digital record is transmitted over a network using a network interface to a remote, second electronic instrument for playback to a first user. Optionally, video data is streamed to a display device of a user while the first digital record is played back by the second electronic instrument. A key change command is transmitted over the network using the network interface to the second electronic instrument to cause the second electronic instrument to playback the first digital record for the first item of music in accordance with the key change command. The key change command may be transmitted during the streaming of the video data.
    Type: Application
    Filed: June 14, 2018
    Publication date: February 21, 2019
    Inventor: Michael John Elson
  • Publication number: 20190057678
    Abstract: This invention discloses a switching system for any odd or even number of two or more matched vibration sensors, such that all possible circuits of such sensors that can be produced by the system are humbucking, rejecting external interference signals. The sensors must be matched, especially with respect to response to external hum and internal impedance, and be capable of being made or arranged so that the response of an individual sensor to vibration can be inverted, compared to another matched sensor placed in the same physical position, while the interference signal is not. For 2, 3, 4, 5, 6, 7 and 8 sensors, there exist 1, 6, 25, 90, 301, 966 and 3025 unique humbucking circuits, respectively, with signal outputs that can be either single-ended or differential.
    Type: Application
    Filed: September 22, 2018
    Publication date: February 21, 2019
    Inventor: Donald L Baker
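    Illustrative sketch: The circuit counts quoted above for publication 20190057678 (1, 6, 25, 90, 301, 966 and 3025 for 2 through 8 sensors) happen to match the closed form (3^n - 2^(n+1) + 1)/2. The patent abstract does not state this formula, so the check below is only an observation about the quoted sequence, not the inventor's derivation.
      # Check that the quoted humbucking-circuit counts match a simple closed form.
      def closed_form_count(n_sensors: int) -> int:
          return (3 ** n_sensors - 2 ** (n_sensors + 1) + 1) // 2

      if __name__ == "__main__":
          quoted = {2: 1, 3: 6, 4: 25, 5: 90, 6: 301, 7: 966, 8: 3025}
          for n, expected in quoted.items():
              assert closed_form_count(n) == expected, (n, closed_form_count(n), expected)
          print("all quoted counts match (3**n - 2**(n+1) + 1) // 2")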
  • Publication number: 20190057679
    Abstract: This invention discloses and claims means and methods for producing a continuous range of humbucking vibration signals from matched sensors, from bright to warm tones, using variable gains, with either manual control or automatic control by a digital micro-computing device and system. It shows how electronic circuits can control the linear combination of tones from humbucking pairs of sensors, based upon simulating humbucking basis vectors.
    Type: Application
    Filed: October 10, 2018
    Publication date: February 21, 2019
    Inventor: Donald L. Baker
  • Publication number: 20190057680
    Abstract: An underwater sound source includes an acoustical driver, a controller of the acoustical driver, and a resonant tube acoustically coupled to the acoustical driver. The resonant tube has a pair of slotted portions, in which each slotted portion is disposed along the length of the resonant tube at a location corresponding to a node of a harmonic of the resonant tube. The sound source is configured to emit an output signal within a bandwidth defined by a dual resonance characteristic of the resonant tube. The sound source may also include a pair of coaxial tubular sleeves disposed around the resonant tube, each sleeve configured to slidably cover one of the slotted portions and tune the resonance frequency of the tube over a wide range. At the high frequency end, when the slots are uncovered, the frequency response of the resonant tube takes a dual-resonant form.
    Type: Application
    Filed: April 13, 2018
    Publication date: February 21, 2019
    Inventor: Andrey K. Morozov
  • Publication number: 20190057681
    Abstract: Embodiments relate generally to systems and methods for communicating data from a personal protection equipment device to a headset. A method may comprise generating at least one message, by a personal protection equipment device, comprising information related to the personal protection equipment device; wirelessly communicating the generated message from the personal protection equipment device; receiving, by a hearing protection headset, one or more of the generated messages from the at least one personal protection equipment device; converting, by the hearing protection headset, the received message to speech format; and communicating, via one or more speakers within the hearing protection headset, the converted message to the user.
    Type: Application
    Filed: August 17, 2018
    Publication date: February 21, 2019
    Inventors: Nagaraju Rachakonda, LG Srinivasa Rao Kanakala, Mehabube Rabbanee Shaik
  • Publication number: 20190057682
    Abstract: The present disclosure relates to methods, non-transitory computer readable media, and devices for text-to-speech conversion of electronic documents. An electronic document comprising one or more pages comprising a plurality of characters and a plurality of first segments is received. The plurality of characters is segmented into a plurality of second segments based on first metadata associated with the plurality of characters. A first relationship between each of the plurality of second segments is identified based on the first metadata associated with the plurality of characters, second metadata associated with the plurality of first segments, and spatial information associated with the plurality of segments. A reading sequence of the electronic document is determined based on the first relationship. An audio is then generated based on the reading sequence of the electronic document.
    Type: Application
    Filed: October 3, 2017
    Publication date: February 21, 2019
    Inventor: Dhruv Premi
  • Publication number: 20190057683
    Abstract: Methods, systems, and apparatus for performing speech recognition. In some implementations, acoustic data representing an utterance is obtained. The acoustic data corresponds to time steps in a series of time steps. One or more computers process scores indicative of the acoustic data using a recurrent neural network to generate a sequence of outputs. The sequence of outputs indicates a likely output label from among a predetermined set of output labels. The predetermined set of output labels includes output labels that respectively correspond to different linguistic units and to a placeholder label that does not represent a classification of acoustic data. The recurrent neural network is configured to use an output label indicated for a previous time step to determine an output label for the current time step. The generated sequence of outputs is processed to generate a transcription of the utterance, and the transcription of the utterance is provided.
    Type: Application
    Filed: December 19, 2017
    Publication date: February 21, 2019
    Inventors: Hasim Sak, Sean Matthew Shannon
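    Illustrative sketch: The decoding behaviour described above for publication 20190057683 (a label set that includes a placeholder label, with the previous step's output label fed into the next step) can be sketched as a toy greedy decoding loop. The scoring function below is a random stand-in, not the patent's recurrent neural network.
      # Toy greedy decoding loop in the spirit of publication 20190057683: each
      # step scores a label set that includes a placeholder ("blank") label,
      # conditioned on the previously emitted label; placeholder outputs are
      # dropped from the transcription. The model is a random stub.
      import numpy as np

      LABELS = ["<placeholder>", "h", "e", "l", "o"]

      def stub_scores(acoustic_frame: np.ndarray, prev_label_id: int) -> np.ndarray:
          """Stand-in for the recurrent network: one step of scores over LABELS."""
          seed = int(acoustic_frame.sum() * 1000) % (2 ** 32) + prev_label_id
          return np.random.default_rng(seed).random(len(LABELS))

      def greedy_decode(acoustic_frames: np.ndarray) -> str:
          prev_label_id = 0  # start from the placeholder label
          output_labels = []
          for frame in acoustic_frames:
              label_id = int(np.argmax(stub_scores(frame, prev_label_id)))
              if label_id != 0:                  # placeholder labels are not emitted
                  output_labels.append(LABELS[label_id])
              prev_label_id = label_id           # feed back the previous output label
          return "".join(output_labels)

      if __name__ == "__main__":
          frames = np.random.default_rng(0).random((20, 40))  # 20 steps of fake acoustic data
          print(greedy_decode(frames))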
  • Publication number: 20190057684
    Abstract: An electronic device which can communicate with a plurality of artificial intelligence servers includes a voice receiving unit receiving a voice, a wireless communication unit communicating with a plurality of artificial intelligence servers set to be activated by mutually different starting words, and a controller generating a plurality of starting words set to be different respectively for the plurality of artificial intelligence servers in response to an input voice including a preset starting word, converting the voice t include the plurality of generated starting words and transmitting the converted voice to each of the plurality of artificial intelligence servers, and outputting a plurality of pieces of result information when the plurality of pieces of result information generated in response to the converted voice are received from the plurality of artificial intelligence servers.
    Type: Application
    Filed: January 22, 2018
    Publication date: February 21, 2019
    Applicant: LG ELECTRONICS INC.
    Inventors: Sibong ROH, Taeho JUNG
  • Publication number: 20190057685
    Abstract: The present disclosure discloses a method and device for speech recognition and decoding, pertaining to the field of speech processing. The method comprises: receiving speech information and extracting an acoustic feature; computing information of the acoustic feature according to a connection sequential classification model; and, when a frame in the acoustic feature information is a non-blank model frame, performing linguistic information searching using a weighted finite state transducer adapting acoustic modeling information and storing historical data, or otherwise discarding the frame. By establishing the connection sequential classification model, the acoustic modeling is more accurate. By using the weighted finite state transducer, model representation is more efficient, and nearly 50% of computation and memory resource consumption is reduced. By using a phoneme synchronization method during decoding, the amount and number of computations for model searching are effectively reduced.
    Type: Application
    Filed: May 6, 2016
    Publication date: February 21, 2019
    Inventors: Kai Yu, Weida Zhou, Zhehuai Chen, Wei Deng, Tao Xu
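    Illustrative sketch: The frame-skipping rule described above for publication 20190057685 (search only on non-blank frames) can be sketched as below. The per-frame posteriors are made up and the weighted finite state transducer search is reduced to a stub.
      # Sketch of publication 20190057685: frames whose best label is the blank
      # symbol are discarded, and only the remaining frames reach the search.
      import numpy as np

      BLANK = 0  # index of the blank symbol in the model's output layer

      def frames_for_search(posteriors: np.ndarray) -> np.ndarray:
          """Keep only frames whose best-scoring label is not the blank symbol."""
          return posteriors[posteriors.argmax(axis=1) != BLANK]

      def wfst_search_stub(frame_posteriors: np.ndarray) -> list:
          """Stand-in for the weighted finite state transducer search."""
          return [int(p.argmax()) for p in frame_posteriors]

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          posteriors = rng.random((50, 10))
          posteriors[:, BLANK] += 0.6   # most frames blank-dominated, as is typical of CTC-style models
          kept = frames_for_search(posteriors)
          print(len(kept), "of 50 frames reach the search:", wfst_search_stub(kept))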
  • Publication number: 20190057686
    Abstract: Systems and methods of network-based learning models for natural language processing are provided. Information regarding user interaction with network content may be stored in memory. Further, a digital recording of a vocal utterance made by a user may be captured. The vocal utterance may be interpreted based on the stored user interaction information. An intent of the user may be identified based on the interpretation, and a prediction may be made based on the identified intent. The prediction may further correspond to a selected workflow.
    Type: Application
    Filed: August 21, 2017
    Publication date: February 21, 2019
    Inventor: Stephen Yong
  • Publication number: 20190057687
    Abstract: The embodiments of the present disclosure provide a device for recognizing speech and a method for speech recognition. The device for recognizing speech may comprise a processor configured to execute instructions stored in a memory to: perform speech recognition on the collected audio data to obtain a semantic content of the audio data; match the obtained semantic content with semantic data stored in a database; determine whether the audio data contains ambient noise audio information and audio information of a user, in response to determining that the obtained semantic content does not match the semantic data; and change conditions for collecting the audio data and control collection of the audio data with the changed conditions, in response to determining that the audio data contains the ambient noise audio information and the audio information of the user.
    Type: Application
    Filed: June 12, 2018
    Publication date: February 21, 2019
    Inventors: Xun Yang, Xiangdong Yang, Xingxing Zhao
  • Publication number: 20190057688
    Abstract: A system, method, and processor-readable storage medium for providing sound effects are disclosed. An example method comprises receiving an audio signal associated with a speech of a user, performing speech recognition on the audio signal to identify one or more recognized words, identifying at least one trigger word among the one or more recognized words, and providing to the user at least one sound effect associated with the at least one trigger word. The speech recognition can be implemented by a trained machine-learning system. The sound effects can be provided to the user and optionally to other users, for example, via a network gaming environment. The sound effects can also be adjusted or selected based on a conversational context, user identification, and/or audio characteristics of the received audio signal. The system for providing sound effects can include a smart speaker, mobile device, game console, television device, and the like.
    Type: Application
    Filed: August 15, 2017
    Publication date: February 21, 2019
    Inventor: Glenn Black
  • Publication number: 20190057689
    Abstract: A system and means for recognising phrases of approval, questions and answers in speech conversations exchanged between communication devices. The phrase recognition means is conditional upon detection of a non-speech event from the devices that denotes a subject of interest, and employs detection of non-speech events from the devices to identify and select speech recognition rules relevant to the subject item, questions about the subject and answers to the questions. The speech recognition means logs detected speech and non-speech events to a repository for later analysis.
    Type: Application
    Filed: October 27, 2016
    Publication date: February 21, 2019
    Inventor: Jonathan Peter Vincent Drazin
  • Publication number: 20190057690
    Abstract: A system and method to receive a spoken utterance and convert the spoken utterance into recognized speech results through multiple automatic speech recognition modules. Multiple conversation modules interpret the recognized speech results. The system and method assign an affinity status to one or more of the multiple automatic speech recognition modules. An affinity status restricts the conversion of a subsequent spoken utterance to a selected automatic speech recognition module or modules.
    Type: Application
    Filed: November 8, 2017
    Publication date: February 21, 2019
    Inventor: Darrin Kenneth John Fry
  • Publication number: 20190057691
    Abstract: A system and method receives a spoken utterance and converts the spoken utterance into recognized speech results through automatic speech recognition modules. The system and method renders a composite recognition speech result comprising the recognized speech results joined in a return function. The system and method interprets the recognized speech results joined in a return function from each of the automatic speech recognition modules through multiple conversation modules.
    Type: Application
    Filed: November 8, 2017
    Publication date: February 21, 2019
    Inventor: Darrin Kenneth John Fry
  • Publication number: 20190057692
    Abstract: A system and method to receive a spoken utterance and convert the spoken utterance into recognized speech results through an automatic speech recognition service. The recognized speech results are interpreted through a natural language processing module. A normalizer processes the recognized speech results, transforming the recognized speech interpretations into a predefined form for a given automatic speech recognition domain, and further determines which automatic speech recognition domains or recognized speech results are processed by a dedicated dialogue management proxy module or a conversation module.
    Type: Application
    Filed: January 4, 2018
    Publication date: February 21, 2019
    Inventor: Darrin Kenneth John Fry
  • Publication number: 20190057693
    Abstract: An automatic speech recognition (ASR) system includes at least one processor and a memory storing instructions.
    Type: Application
    Filed: March 29, 2018
    Publication date: February 21, 2019
    Applicant: 2236008 Ontario Inc.
    Inventor: Darrin Kenneth John FRY
  • Publication number: 20190057694
    Abstract: The present disclosure relates to methods for processing a decoded audio signal and for selectively applying speech/dialog enhancement to the decoded audio signal. The present disclosure also relates to a method of operating a headset for computer-mediated reality. A method of processing a decoded audio signal comprises obtaining a measure of a cognitive load of a listener that listens to a rendering of the audio signal, determining whether speech/dialog enhancement shall be applied based on the obtained measure of the cognitive load, and performing speech/dialog enhancement based on the determination. A method of operating a headset for computer-mediated reality comprises obtaining eye-tracking data of a wearer of the headset, determining a measure of a cognitive load of the wearer of the headset based on the eye-tracking data, and outputting an indication of the cognitive load of the wearer of the headset.
    Type: Application
    Filed: August 16, 2018
    Publication date: February 21, 2019
    Applicant: Dolby International AB
    Inventor: Arijit Biswas
  • Publication number: 20190057695
    Abstract: Embodiments of the present disclosure provide a method for controlling a smart device, a computer device and a non-transitory computer readable storage medium. The method includes performing speech recognition on a speech signal acquired by the smart device; determining whether a control instruction corresponding to the speech signal matches with a present operation scene of the smart device; and adjusting an operation state of the smart device according to the control instruction when the control instruction matches with the present operation scene.
    Type: Application
    Filed: July 26, 2018
    Publication date: February 21, 2019
    Inventors: Bo XIE, Yang SUN, Yan XIE, Sheng QIAN
  • Publication number: 20190057696
    Abstract: [Object] To provide an information processing apparatus, information processing method, and program capable of interpreting the meaning of a result of voice recognition adaptively to the situation in which the voice is collected. [Solution] An information processing apparatus including: a semantic interpretation unit configured to interpret a meaning of a recognition result of a collected voice of a user on the basis of the recognition result and context information obtained when collecting the voice.
    Type: Application
    Filed: November 25, 2016
    Publication date: February 21, 2019
    Applicant: SONY CORPORATION
    Inventor: HIROAKI OGAWA
  • Publication number: 20190057697
    Abstract: Systems and processes for operating a virtual assistant programmed to refer to shared domain concepts using concept nodes are provided. In an example process, user speech input is received. A textual representation of the user speech input is generated. The textual representation is parsed to determine a primary domain representing a user intent for the textual representation. A first substring from the textual representation that corresponds to a first attribute of the primary domain is identified. The identified first substring is parsed to determine a secondary domain representing a user intent for the first substring. A task flow comprising one or more tasks is performed based on the primary domain and the secondary domain.
    Type: Application
    Filed: August 27, 2018
    Publication date: February 21, 2019
    Inventors: Richard D. GIULI, Nicholas K. TREADGOLD
  • Publication number: 20190057698
    Abstract: An in-call virtual assistant system monitors a real-time call, e.g., a call that is in progress, between multiple speakers, identifies a trigger and executes a specified task in response to the trigger. The virtual assistant system can be invoked by an explicit trigger or an implicit trigger. For example, an explicit trigger can be a voice command from one of the speakers in the call, such as “Ok Chorus, summarize the call” for summarizing the call. An implicit trigger can be an event that occurred in the call, or outside of the call but that is relevant to a speaker. For example, an event such as a speaker dropping off the call suddenly can be an implicit trigger that invokes the virtual assistant system to perform an associated task, such as notifying the remaining speakers on the call that one of the speakers dropped.
    Type: Application
    Filed: October 19, 2018
    Publication date: February 21, 2019
    Inventors: Roy Raanani, Russell Levy, Micha Yochanan Breakstone
  • Publication number: 20190057699
    Abstract: A serving data collecting system includes: a customer information acquiring unit that acquires customer information which is information about a customer that can be served by a customer serving apparatus; a presentation control unit that causes the customer information to be presented to an operator; a serving information acquiring unit that acquires serving information indicative of a serving content which is decided by the operator and according to which the customer serving apparatus should serve the customer; a serving decision information acquiring unit that acquires decision information indicative of information that the operator used for deciding the serving content; and a recording control unit that causes the serving information and the decision information to be recorded in association with each other.
    Type: Application
    Filed: October 21, 2018
    Publication date: February 21, 2019
    Inventors: Masayoshi SON, Takashi TSUTSUI, Kosuke TOMONAGA, Kiyoshi OURA
  • Publication number: 20190057700
    Abstract: A communication device that can fit within an oral cavity of a person includes a number of contacts and a control circuit adapted to be positioned within the oral cavity. The number of contacts are configured to obtain one or more signals indicative of activity in at least an upper airway of the person. The control circuit is configured to interpret a communication provided by the person based at least in part on the one or more obtained signals.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 21, 2019
    Inventors: Harold Byron Kent, Robert Douglas Kent, Karena Yadira Puldon, Steven Thomas Kent
  • Publication number: 20190057701
    Abstract: An electronic device and method are disclosed herein. The electronic device implements the method, including: receiving a first speech, and extracting a first text from the received first speech, in response to detecting that extraction of the first text includes errors such that a request associated with the first speech is unprocessable, storing the extracted first text, receiving a second speech and extracting a second text from the received second speech, in response to detecting that the request is processable using the extracted second text, detecting whether a similarity between the first and second texts is greater than a similarity threshold, and whether the second speech is received within a predetermined time duration of receiving the first speech, and when the similarity is greater than the threshold, and the first and second speech signals are received within the time duration, storing the first text in association with the second text.
    Type: Application
    Filed: August 14, 2018
    Publication date: February 21, 2019
    Inventors: Yongwook Kim, Jamin Goo, Gangheok Kim, Dongkyu Lee
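    Illustrative sketch: The pairing rule described above for publication 20190057701 (associate a failed utterance's text with a later, processable one when the texts are similar enough and the second arrives within a time window) is sketched below. The similarity measure (difflib ratio) and the numeric thresholds are assumptions; the patent does not specify them.
      # Sketch of publication 20190057701: hold the text of an unprocessable
      # request; if a similar, processable request arrives within the window,
      # store the two texts in association.
      import difflib
      import time

      SIMILARITY_THRESHOLD = 0.75   # assumed
      TIME_WINDOW_SECONDS = 10.0    # assumed

      _pending = None     # (text, timestamp) of the last unprocessable request
      associations = []   # stored (failed_text, later_text) pairs

      def on_unprocessable(text: str, now: float = None) -> None:
          global _pending
          _pending = (text, time.time() if now is None else now)

      def on_processable(text: str, now: float = None) -> None:
          global _pending
          now = time.time() if now is None else now
          if _pending is None:
              return
          prev_text, prev_time = _pending
          similar = difflib.SequenceMatcher(None, prev_text, text).ratio() > SIMILARITY_THRESHOLD
          in_window = (now - prev_time) <= TIME_WINDOW_SECONDS
          if similar and in_window:
              associations.append((prev_text, text))  # store first text with second text
          _pending = None

      if __name__ == "__main__":
          on_unprocessable("call my wif", now=0.0)
          on_processable("call my wife", now=3.0)
          print(associations)  # [('call my wif', 'call my wife')]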
  • Publication number: 20190057702
    Abstract: A voice control method and display apparatus are provided. The voice control method includes converting a voice of a user into text in response to the voice being input during a voice input mode; performing a control operation corresponding to the text; determining whether speech of the user has finished based on a result of the performing the control operation; awaiting input of a subsequent voice of the user during a predetermined standby time in response to determining that the speech of the user has not finished; and releasing the voice input mode in response to determining that the speech of the user has finished.
    Type: Application
    Filed: October 22, 2018
    Publication date: February 21, 2019
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sung-wook CHOI, Hee-seob RYU, Hee-ran LEE, Sung-pil HWANG
  • Publication number: 20190057703
    Abstract: A voice assistance system may include an interface configured to receive a signal indicative of a voice command made to a first device. The system may also include at least one processor configured to: extract an action to be performed according to the voice command, locate a second device implicated by the voice command to perform the action, access data related to the second device from a storage device based on the voice command, and generate a control signal based on the data for actuating a control on at least one of the first device and the second device according to the voice command.
    Type: Application
    Filed: February 29, 2016
    Publication date: February 21, 2019
    Inventor: Mark Lewis Zeinstra
  • Publication number: 20190057704
    Abstract: A linear prediction-based noise signal processing method includes obtaining a linear prediction coefficient of the noise signal, filtering a signal derived from the noise signal based on the linear prediction coefficient in order to obtain a linear prediction residual signal, obtaining excitation energy of the linear prediction residual signal and a spectral envelope of the linear prediction residual signal, and encoding the spectral envelope, the excitation energy and the linear prediction coefficient.
    Type: Application
    Filed: October 23, 2018
    Publication date: February 21, 2019
    Inventor: Zhe Wang
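    Illustrative sketch: The first steps described above for publication 20190057704 (obtain linear prediction coefficients, inverse-filter to get the residual, and measure its excitation energy) are sketched below. The Levinson-Durbin estimation and the prediction order are generic choices, not details taken from the patent, and the envelope and encoding stages are omitted.
      # Sketch of the linear-prediction steps in publication 20190057704.
      import numpy as np

      def lpc_coefficients(x: np.ndarray, order: int) -> np.ndarray:
          """Levinson-Durbin recursion on the autocorrelation of x.

          Returns a[0..order] with a[0] == 1 such that the residual is
          e[n] = sum_k a[k] * x[n - k].
          """
          r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
          a = np.zeros(order + 1)
          a[0] = 1.0
          err = r[0]
          for i in range(1, order + 1):
              k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
              a[1:i + 1] += k * a[i - 1::-1][:i]   # reflection-coefficient update
              err *= 1.0 - k * k
          return a

      def residual(x: np.ndarray, a: np.ndarray) -> np.ndarray:
          """Inverse-filter x with the prediction coefficients to get the residual."""
          return np.convolve(x, a)[:len(x)]

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          white = rng.standard_normal(4000)
          noise = np.zeros_like(white)
          noise[:2] = white[:2]
          for n in range(2, len(white)):            # synthetic resonant "noise signal"
              noise[n] = white[n] + 1.2 * noise[n - 1] - 0.7 * noise[n - 2]
          a = lpc_coefficients(noise, order=8)
          e = residual(noise, a)
          excitation_energy = float(np.sum(e ** 2))
          print("prediction gain (dB):", 10 * np.log10(np.sum(noise ** 2) / excitation_energy))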
  • Publication number: 20190057705
    Abstract: Methods, systems and articles of manufacture for a wearable electronic device having an audio source identifier are disclosed. Example audio source identifiers disclosed herein include first and second audio sensors disposed at first and second locations, respectively, on a wearable electronic device. Such audio source identifiers also include a phase shift determiner to determine a phase shift between a first sample of first audio captured at the first audio sensor and a second sample of the first audio captured at the second audio sensor. The first audio includes first speech generated by a first speaker wearing the wearable electronic device. Example audio source identifiers further include a speaker identifier to determine, based on the phase shift determined by the phase shift determiner, whether second audio includes speech generated by a second speaker wearing the wearable electronic device.
    Type: Application
    Filed: August 18, 2017
    Publication date: February 21, 2019
    Inventor: Swarnendu Kar
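    Illustrative sketch: The two-sensor test described above for publication 20190057705 can be sketched with a time-delay estimate: audio that reaches both sensors nearly simultaneously is attributed to the wearer. Using cross-correlation lag as a stand-in for the phase shift, and the delay threshold, are assumptions made for illustration.
      # Sketch of publication 20190057705: estimate the inter-sensor delay and
      # treat a small delay as evidence the speech comes from the wearer.
      import numpy as np

      def estimated_delay_samples(sig_a: np.ndarray, sig_b: np.ndarray) -> int:
          """Delay of sig_a relative to sig_b (in samples) at the cross-correlation peak."""
          corr = np.correlate(sig_a, sig_b, mode="full")
          return int(np.argmax(corr)) - (len(sig_b) - 1)

      def speech_from_wearer(sig_a: np.ndarray, sig_b: np.ndarray, max_wearer_delay: int = 2) -> bool:
          """Classify audio as wearer speech when the inter-sensor delay is small."""
          return abs(estimated_delay_samples(sig_a, sig_b)) <= max_wearer_delay

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          speech = rng.standard_normal(2000)
          print(speech_from_wearer(speech, speech))      # True: arrives at both sensors together
          delayed = np.concatenate([np.zeros(8), speech])[:2000]
          print(speech_from_wearer(delayed, speech))     # False: one sensor hears it ~8 samples late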
  • Publication number: 20190057706
    Abstract: Embodiments of the present disclosure provide signal encoding and decoding methods and devices. The method includes: determining, according to a quantity of available bits and a first saturation threshold, a quantity k of subbands to be encoded, where i is a positive number, and k is a positive integer; selecting, according to quantized envelopes of all subbands, k subbands from all the subbands, or selecting k subbands from all subbands according to a psychoacoustic model; and performing a first-time encoding operation on spectral coefficients of the k subbands. In some embodiments of the present disclosure, the quantity k of subbands to be encoded is determined according to the quantity of available bits and the first saturation threshold, and encoding is performed on the k subbands that are selected from all the subbands, instead of on an entire frequency band.
    Type: Application
    Filed: October 22, 2018
    Publication date: February 21, 2019
    Inventors: Zexin LIU, Lei MIAO, Chen HU
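    Illustrative sketch: The selection step described above for publication 20190057706 is sketched below as choosing the k subbands with the largest quantized envelopes and encoding only their spectral coefficients. How k follows from the available bits and the saturation threshold is not spelled out in the abstract, so k is simply a parameter here and the encoder is a stub.
      # Sketch of publication 20190057706: pick k subbands by quantized envelope
      # and run the first-time encoding only on those subbands.
      import numpy as np

      def select_subbands(quantized_envelopes: np.ndarray, k: int) -> np.ndarray:
          """Indices of the k subbands with the largest quantized envelope values."""
          return np.sort(np.argsort(quantized_envelopes)[::-1][:k])

      def encode_selected(spectral_coeffs: list, selected: np.ndarray) -> dict:
          """Stand-in first-time encoding: only the selected subbands are encoded."""
          return {int(i): np.round(spectral_coeffs[i], 1).tolist() for i in selected}

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          envelopes = rng.random(10) * 40                  # quantized envelope per subband
          coeffs = [rng.standard_normal(8) for _ in range(10)]
          chosen = select_subbands(envelopes, k=4)
          print("encoded subbands:", sorted(encode_selected(coeffs, chosen)))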
  • Publication number: 20190057707
    Abstract: An audio signal, having first and second regions of frequency spectrum, is coded. Spectral peaks in the first region are encoded by a first coding method. For a segment of the audio signal, a relation between energy of bands in the first and second regions is determined. A relation between the energy of the band in the second region and energy of neighboring bands in the second region is determined. A determination is made whether available bits are sufficient for encoding at least one non-peak segment of the first region and the band in the second region. Responsive to the first and second relations fulfilling a respective predetermined criterion and a sufficient number of bits, the band in the second region is encoded using a second coding method different from the first coding method; otherwise, the band in the second region is subjected to bandwidth extension (BWE) or noise fill.
    Type: Application
    Filed: October 23, 2018
    Publication date: February 21, 2019
    Inventors: Erik NORVELL, Volodya GRANCHAROV
  • Publication number: 20190057708
    Abstract: The invention relates to a codec and a signal classifier and methods therein for signal classification and selection of a coding mode based on audio signal characteristics. A method embodiment to be performed by a decoder comprises, for a frame m: determining a stability value D(m) based on a difference, in a transform domain, between a range of a spectral envelope of frame m and a corresponding range of a spectral envelope of an adjacent frame m-1. Each such range comprises a set of quantized spectral envelope values related to the energy in spectral bands of a segment of the audio signal. The method further comprises selecting a decoding mode, out of a plurality of decoding modes, based on the stability value D(m); and applying the selected decoding mode.
    Type: Application
    Filed: October 22, 2018
    Publication date: February 21, 2019
    Inventors: Erik Norvell, Stefan Bruhn
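    Illustrative sketch: The stability value described above for publication 20190057708 is sketched below as a distance between corresponding ranges of two consecutive quantized spectral envelopes, followed by a mode decision. The abstract only says D(m) is based on a difference; the mean-absolute-difference distance, the threshold and the two mode names are assumptions.
      # Sketch of publication 20190057708: D(m) from consecutive spectral
      # envelopes, then a decoding-mode decision based on D(m).
      import numpy as np

      def stability_value(env_m: np.ndarray, env_m_minus_1: np.ndarray, band_range: slice) -> float:
          """D(m) from corresponding ranges of two consecutive quantized envelopes."""
          return float(np.mean(np.abs(env_m[band_range] - env_m_minus_1[band_range])))

      def select_decoding_mode(d_m: float, threshold: float = 1.5) -> str:
          """Pick one of two illustrative decoding modes from the stability value."""
          return "stable_mode" if d_m < threshold else "transient_mode"

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          env_prev = rng.random(20) * 10
          band = slice(2, 18)
          for scale in (0.2, 4.0):                          # steady frame vs. onset-like frame
              env_now = env_prev + rng.normal(scale=scale, size=20)
              d = stability_value(env_now, env_prev, band)
              print(round(d, 2), select_decoding_mode(d))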
  • Publication number: 20190057709
    Abstract: Encoding and decoding systems are described for the provision of high quality digital representations of audio signals with particular attention to the correct perceptual rendering of fast transients at modest sample rates. This is achieved by optimising downsampling and upsampling filters to minimise the length of the impulse response while adequately attenuating alias products that have been found perceptually harmful.
    Type: Application
    Filed: October 2, 2018
    Publication date: February 21, 2019
    Inventors: Peter Graham Craven, John Robert Stuart
  • Publication number: 20190057710
    Abstract: The coding efficiency of an audio codec using a controllable (switchable or even adjustable) harmonic filter tool is improved by controlling this tool using a temporal structure measure in addition to a measure of harmonicity. In particular, the temporal structure of the audio signal is evaluated in a manner which depends on the pitch. This enables a situation-adapted control of the harmonic filter tool: in situations where a control based solely on the measure of harmonicity would decide against or reduce the usage of this tool, although using the harmonic filter tool would increase the coding efficiency, the harmonic filter tool is applied, while in other situations where the harmonic filter tool may be inefficient or even destructive, the control reduces its application appropriately.
    Type: Application
    Filed: August 30, 2018
    Publication date: February 21, 2019
    Inventors: Goran Markovic, Christian Helmrich, Emmanuel Ravelli, Manuel Jander, Stefan Doehla
  • Publication number: 20190057711
    Abstract: The present invention makes it possible to reduce noise from a driving unit with a two-channel microphone configuration. An audio processing apparatus has a first microphone whose main acquisition target is sound from outside of the apparatus, a second microphone whose main acquisition target is driving noise from the driving unit, and a noise removing unit that generates two-channel audio data in which driving noise made by the driving unit of the apparatus has been reduced based on the difference between time-series audio data acquired by the first microphone and time-series audio data acquired by the second microphone. This noise removing unit has two adaptive filter units that respectively perform filter processing on time-series audio data from the first microphone and time-series audio data from the second microphone, and generates stereo two-channel audio data.
    Type: Application
    Filed: August 8, 2018
    Publication date: February 21, 2019
    Inventor: Yusuke Toriumi
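    Illustrative sketch: The difference-based noise removal described above for publication 20190057711 is sketched below with a single normalized LMS adaptive filter: the second microphone's driving-noise signal is adaptively filtered and subtracted from the first microphone's signal. The patent describes two adaptive filter units producing stereo two-channel output; one channel and all parameter values here are simplifications.
      # Sketch of publication 20190057711: adaptively predict the driving-noise
      # component in mic 1 from the mic 2 reference and subtract it.
      import numpy as np

      def nlms_noise_reduction(primary: np.ndarray, noise_ref: np.ndarray,
                               taps: int = 16, mu: float = 0.1) -> np.ndarray:
          """Subtract an adaptively filtered copy of the noise reference from the primary mic."""
          w = np.zeros(taps)
          out = np.zeros_like(primary)
          for n in range(taps - 1, len(primary)):
              x = noise_ref[n - taps + 1:n + 1][::-1]    # most recent reference samples
              e = primary[n] - w @ x                     # cleaned sample = error signal
              out[n] = e
              w += (mu / (x @ x + 1e-8)) * e * x         # NLMS weight update
          return out

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          n = 8000
          speech = np.sin(2 * np.pi * 0.01 * np.arange(n))        # stand-in for outside sound
          drive_noise = rng.standard_normal(n)                    # driving-unit noise (mic 2)
          leaked = np.convolve(drive_noise, [0.6, 0.3, 0.1])[:n]  # noise path into mic 1
          cleaned = nlms_noise_reduction(speech + leaked, drive_noise)
          print("noise power before/after:",
                round(float(np.mean(leaked ** 2)), 3),
                round(float(np.mean((cleaned - speech)[500:] ** 2)), 3))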
  • Publication number: 20190057712
    Abstract: The disclosure provides a noise reduction method and an electronic device. In an embodiment of the disclosure, when determining that a plurality of first applications occupy a plurality of first audio channels connected with a microphone and a second application occupies a second audio channel connected with a speaker, the electronic device resamples the audio data of the second audio channel according to the sampling rates corresponding to the plurality of first audio channels and then performs the noise reduction processing on the audio data of each of the plurality of first audio channels respectively according to the audio data obtained by resampling.
    Type: Application
    Filed: October 10, 2018
    Publication date: February 21, 2019
    Inventors: Weibo Zheng, Bingyu Geng
  • Publication number: 20190057713
    Abstract: A method for hybrid speech enhancement which employs parametric-coded enhancement (or blend of parametric-coded and waveform-coded enhancement) under some signal conditions and waveform-coded enhancement (or a different blend of parametric-coded and waveform-coded enhancement) under other signal conditions. Other aspects are methods for generating a bitstream indicative of an audio program including speech and other content, such that hybrid speech enhancement can be performed on the program, a decoder including a buffer which stores at least one segment of an encoded audio bitstream generated by any embodiment of the inventive method, and a system or device (e.g., an encoder or decoder) configured (e.g., programmed) to perform any embodiment of the inventive method. At least some of speech enhancement operations are performed by a recipient audio decoder with Mid/Side speech enhancement metadata generated by an upstream audio encoder.
    Type: Application
    Filed: October 22, 2018
    Publication date: February 21, 2019
    Applicants: DOLBY LABORATORIES LICENSING CORPORATION, DOLBY INTERNATIONAL AB
    Inventors: Jeroen KOPPENS, Hannes MUESCH
  • Publication number: 20190057714
    Abstract: Systems and methods are disclosed for creating a machine generated avatar. A machine generated avatar is an avatar generated by processing video and audio information extracted from a recording of a human speaking a reading corpus, enabling the created avatar to say an unlimited number of utterances, i.e., utterances that were not recorded. The video and audio processing consists of the use of machine learning algorithms that may create predictive models based upon pixel, semantic, phonetic, intonation, and wavelet features.
    Type: Application
    Filed: October 28, 2016
    Publication date: February 21, 2019
    Inventor: Wayne SCHOLAR
  • Publication number: 20190057715
    Abstract: A system for monitoring an environment is disclosed. In various embodiments, the system includes an artificial neural network; a plurality of microphones positioned about the environment, the plurality of microphones configured to feed one or more audio signals to an input layer of the artificial neural network; and a first camera positioned within the environment, the first camera configured to determine location data for input to the artificial neural network.
    Type: Application
    Filed: August 14, 2018
    Publication date: February 21, 2019
    Inventors: Saran Saund, Nurettin Burcak Beser, Paul Aerick Lambert
  • Publication number: 20190057716
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing audio. A system configured to practice the method monitors, via a processor of a computing device, an image feed of a user interacting with the computing device and identifies an audio start event in the image feed based on face detection of the user looking at the computing device or a specific region of the computing device. The image feed can be a video stream. The audio start event can be based on a head size, orientation or distance from the computing device, eye position or direction, device orientation, mouth movement, and/or other user features. Then the system initiates processing of a received audio signal based on the audio start event. The system can also identify an audio end event in the image feed and end processing of the received audio signal based on the end event.
    Type: Application
    Filed: October 22, 2018
    Publication date: February 21, 2019
    Inventors: Brant Jameson VASILIEFF, Patrick John EHLEN, Jay Henry Lieske, JR.