Pattern Display Patents (Class 704/276)
  • Patent number: 10847139
    Abstract: A crowdsourcing-based community platform includes a natural language configuration system that predicts a user's desired function call from a natural language input (speech or text). The system provides a collaboration platform for quickly configuring and optimizing natural language systems by leveraging the work and data of other developers, thus minimizing the time and data required to improve the quality and accuracy of any single system and providing a network effect that quickly reaches a critical mass of data. An application developer can provide training data for training a model specific to the developer's application. The developer can also obtain training data by forking one or more other applications so that the training data provided for the forked applications is used to train the model for the developer's application.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: November 24, 2020
    Assignee: Facebook, Inc.
    Inventor: Alexandre Lebrun
  • Patent number: 10691296
    Abstract: An electronic device includes a display, a memory, and a processor, and the processor displays, on the display, a folder icon that includes execution icons of a plurality of applications and, in response to a first user input selecting the folder icon, displays a user interface for collectively controlling notifications for the plurality of applications.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: June 23, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yong Gu Lee, Kyu Ok Choi, Ji Won Kim, Young Hak Oh, Sun Young Yi, Won Jun Lee
  • Patent number: 10636422
    Abstract: There is provided a system in which empowerment is performed by outputting conversation information to the user, the system including: a computer including a processor, a memory, and an interface; and a measuring device that measures signals of a plurality of types, wherein the processor calculates values of conversation parameters of a plurality of attributes for evaluating a state of a user who performs the empowerment on the basis of a plurality of signals measured by the measuring device, the processor selects a selection parameter which is a conversation parameter of a change target on the basis of the values of the conversation parameters of the plurality of attributes, the processor decides conversation information for changing a value of the selection parameter, and the processor outputs the decided conversation information to the user.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: April 28, 2020
    Assignee: HITACHI, LTD.
    Inventors: Takashi Numata, Toshinori Miyoshi, Hiroki Sato
  • Patent number: 10606954
    Abstract: Embodiments for text segmentation for topic modelling by a processor. Real-time conversation data may be analyzed and time intervals (e.g., inter-arrival times) between messages of the conversation data may be recorded. Each of the messages may be defined (and/or segmented) as burst segments or reflection segments according to the analyzing and recording. One or more topic modelling operations may be enhanced for text segmentation using the burst segments or reflection segments.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: March 31, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Andrew T. Penrose, Jonathan Dunne
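The burst/reflection split described in this abstract can be sketched as a simple threshold on message inter-arrival times; the 30-second threshold and the label names below are illustrative assumptions, not values from the patent:

```python
def segment_messages(timestamps, burst_threshold_s=30.0):
    """Label each message as part of a 'burst' segment (short
    inter-arrival time since the previous message) or a 'reflection'
    segment (a long pause precedes it). The first message has no
    predecessor and is treated as a burst start."""
    labels = []
    prev = None
    for ts in timestamps:
        if prev is None or (ts - prev) <= burst_threshold_s:
            labels.append("burst")
        else:
            labels.append("reflection")
        prev = ts
    return labels

# Messages arriving at t = 0, 5, 12, 300, 305 seconds:
print(segment_messages([0, 5, 12, 300, 305]))
# -> ['burst', 'burst', 'burst', 'reflection', 'burst']
```

A real topic-modelling pipeline would then pool the text of each segment before fitting topics, so that rapid back-and-forth exchanges and slower, reflective messages are modelled separately.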
  • Patent number: 10558475
    Abstract: A method for dynamically localizing content of a graphical user interface widget executed on a widget runtime model of a computing platform on a user device includes configuring the graphical user interface widget to provide first location-responsive content in a presentation runtime model by defaulting to a static geographic location, wherein the graphical user interface widget provides the first location-responsive content based on the static geographic location, receiving a configuration setting to configure the graphical user interface widget for a localized mode, retrieving a geographic location for the user device, and providing the retrieved geographic location to the widget runtime model for the graphical user interface widget to select second location-responsive content, wherein the graphical user interface widget switches to provide the second location-responsive content based on the retrieved geographic location.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: February 11, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Mark Leslie Caunter, Bruce Kelly Jackson, Steven Richard Geach
  • Patent number: 10529116
    Abstract: A method, computer system, and computer program product for determining and displaying tones with messaging information are provided. The embodiment may include receiving a plurality of user-entered messaging information from a messaging application. The embodiment may also include determining a tone associated with the plurality of received user-entered messaging information. The embodiment may further include determining a color and an animation for the determined tone based on a preconfigured mapping of a plurality of colors and a plurality of animations with a plurality of tones. The embodiment may also include displaying the animation with the color on a display screen of a user device until the user submits the plurality of user-entered messaging information for transmission to one or more other users.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: January 7, 2020
    Assignee: International Business Machines Corporation
    Inventors: Kelley M. Gordon, Michael Celedonia, Katelyn Applegate
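The preconfigured tone-to-color/animation mapping this abstract describes amounts to a lookup table; the specific tones, hex colors, and animation names below are illustrative assumptions, since the abstract does not enumerate them:

```python
# Hypothetical tone-to-style table; the patent's actual tones,
# colors, and animations are not specified in the abstract.
TONE_STYLES = {
    "joy":     {"color": "#FFD700", "animation": "bounce"},
    "anger":   {"color": "#D32F2F", "animation": "shake"},
    "sadness": {"color": "#1976D2", "animation": "fade"},
}

def style_for_tone(tone):
    """Look up the display color and animation for a detected tone,
    falling back to a neutral style for unmapped tones."""
    return TONE_STYLES.get(tone, {"color": "#9E9E9E", "animation": "none"})

print(style_for_tone("anger"))  # {'color': '#D32F2F', 'animation': 'shake'}
```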
  • Patent number: 10515076
    Abstract: One or more servers receive a natural language query from a client device associated with a user. The one or more servers classify the natural language query as a query that seeks information previously accessed by the user. The one or more servers then obtain a response to the natural language query from one or more collections of documents, wherein each document in the one or more collections of documents was previously accessed by the user. The one or more servers generate search results based on the response. Then, the one or more servers communicate the search results to the client device.
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: December 24, 2019
    Assignee: Google LLC
    Inventors: Nathan Wiegand, Bryan C. Horling, Jason L. Smart
  • Patent number: 10490101
    Abstract: A wearable device is provided that includes a microphone, a display, and a controller. The controller controls to identify a direction of emitted sound based on sound picked up by the microphone, and to display information corresponding to the sound at a position on the display corresponding to the identified direction of the emitted sound.
    Type: Grant
    Filed: May 8, 2017
    Date of Patent: November 26, 2019
    Assignee: FUJITSU LIMITED
    Inventor: Mamiko Teshima
  • Patent number: 10438698
    Abstract: An improved basal insulin management system and an improved user interface for use therewith are provided. User interfaces are provided that dynamically display basal rate information and corresponding time segment information for a basal insulin program in a graphical format. The graphical presentation of the basal insulin program as it is being built by a user and the graphical presentation of a completed basal insulin program provides insulin management information to the user in a more intuitive and useful format. User interfaces further enable a user to make temporary adjustments to a predefined basal insulin program with the adjustments presented graphically to improve the user's understanding of the changes. As a result of being provided with the user interfaces described herein, users are less likely to make mistakes and are more likely to adjust basal rates more frequently, thereby contributing to better blood glucose control and improved health outcomes.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: October 8, 2019
    Assignee: INSULET CORPORATION
    Inventors: Sandhya Pillalamarri, Jorge Borges, Susan Mercer
  • Patent number: 10409552
    Abstract: Systems and methods for displaying an audio indicator including a main portion having a width proportional to a volume of a particular phoneme of an utterance are described herein. In some embodiments, audio data representing an utterance may be received at a speech-processing system from a user device. The speech-processing system may determine a maximum volume amplitude for the utterance and, using the maximum volume amplitude, may determine a normalized amplitude value between 0 and 1 associated with the volume at which phonemes of an utterance are spoken. The speech-processing system may then map the normalized amplitude value(s) to widths for a main portion of an audio indicator, where larger normalized amplitude values may correspond to smaller main-portion widths.
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: September 10, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: David Adrian Jara, Timothy Thomas Gray, Kwan Ting Lee, Jae Pum Park, Michael Hone, Grant Hinkson, Richard Leigh Mains, Shilpan Bhagat
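The normalize-then-map step in this abstract can be sketched as below. Note the inversion the abstract calls out: larger normalized amplitudes map to smaller widths. The width bounds are illustrative assumptions:

```python
def amplitude_to_width(volume, max_volume, min_width=2.0, max_width=40.0):
    """Normalize a phoneme's volume against the utterance's maximum
    amplitude, then map it to an indicator width. Per the abstract,
    larger normalized amplitudes correspond to *smaller* main-portion
    widths, so the linear mapping is inverted."""
    normalized = 0.0 if max_volume == 0 else volume / max_volume  # in [0, 1]
    return max_width - normalized * (max_width - min_width)

print(amplitude_to_width(0.0, 1.0))  # quietest phoneme -> widest: 40.0
print(amplitude_to_width(1.0, 1.0))  # loudest phoneme -> narrowest: 2.0
```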
  • Patent number: 10311119
    Abstract: Implementations generally relate to hashtags. In some implementations, a method includes providing one or more location-based contextual hashtags to a user by receiving, from a first user device associated with a first user, information indicative of a physical location of the first user device. The method further includes identifying, with one or more processors, a place of interest based on the information indicative of the physical location of the first user device. The method further includes determining a category associated with the place of interest. The method further includes retrieving one or more hashtags from one or more databases based on the place of interest or the category associated with the place of interest. The method further includes providing the one or more hashtags and information about the place of interest to the first user device.
    Type: Grant
    Filed: August 21, 2015
    Date of Patent: June 4, 2019
    Assignee: Google LLC
    Inventors: Sreenivas Gollapudi, Alexander Fabrikant, Shanmugasundaram Ravikumar
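The location-to-place-to-category-to-hashtags chain in this abstract can be sketched with in-memory stand-ins; the place, category, and hashtag data below are hypothetical, and a real system would query a places API and a hashtag database rather than exact-matching coordinates:

```python
# Hypothetical stand-ins for the patent's place and hashtag databases.
PLACES = {(37.8199, -122.4783): ("Golden Gate Bridge", "landmark")}
HASHTAGS = {
    "Golden Gate Bridge": ["#goldengatebridge"],
    "landmark": ["#landmark", "#sightseeing"],
}

def hashtags_for_location(lat, lon):
    """Resolve a device location to a place of interest and its
    category, then gather hashtags keyed by either the place or
    the category."""
    place, category = PLACES.get((lat, lon), (None, None))
    if place is None:
        return []
    return HASHTAGS.get(place, []) + HASHTAGS.get(category, [])

print(hashtags_for_location(37.8199, -122.4783))
# -> ['#goldengatebridge', '#landmark', '#sightseeing']
```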
  • Patent number: 10176817
    Abstract: The invention provides an audio encoder including a combination of a linear predictive coding filter having a plurality of linear predictive coding coefficients and a time-frequency converter, wherein the combination is configured to filter and to convert a frame of the audio signal into a frequency domain in order to output a spectrum based on the frame and on the linear predictive coding coefficients; a low frequency emphasizer configured to calculate a processed spectrum based on the spectrum, wherein spectral lines of the processed spectrum representing a lower frequency than a reference spectral line are emphasized; and a control device configured to control the calculation of the processed spectrum by the low frequency emphasizer depending on the linear predictive coding coefficients of the linear predictive coding filter.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: January 8, 2019
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Stefan Doehla, Bernhard Grill, Christian Helmrich, Nikolaus Rettelbach
  • Patent number: 10170101
    Abstract: A computer-implemented method includes determining, by a first device, a current emotional state of a user of the first device. The current emotional state is based, at least in part, on real-time information corresponding to the user and relates to a textual message from the user. The computer-implemented method further includes determining, by the first device, a set of phonetic data associated with a plurality of vocal samples corresponding to the user. The computer-implemented method further includes dynamically converting, by the first device, the textual message into an audio message. The audio message is converted from the textual message into the audio message based, at least in part, on the current emotional state and a portion of the set of phonetic data that corresponds to the current emotional state. A corresponding computer system and computer program product are also disclosed.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Kevin G. Carr, Thomas D. Fitzsimmons, Johnathon J. Hoste, Angel A. Merchan
  • Patent number: 10170100
    Abstract: A computer-implemented method includes determining, by a first device, a current emotional state of a user of the first device. The current emotional state is based, at least in part, on real-time information corresponding to the user and relates to a textual message from the user. The computer-implemented method further includes determining, by the first device, a set of phonetic data associated with a plurality of vocal samples corresponding to the user. The computer-implemented method further includes dynamically converting, by the first device, the textual message into an audio message. The audio message is converted from the textual message into the audio message based, at least in part, on the current emotional state and a portion of the set of phonetic data that corresponds to the current emotional state. A corresponding computer system and computer program product are also disclosed.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Kevin G. Carr, Thomas D. Fitzsimmons, Johnathon J. Hoste, Angel A. Merchan
  • Patent number: 10164921
    Abstract: A system and method for voice based social networking is disclosed. The system receives a voice message (and frequently an image) and ultimately delivers it to one or multiple users, placing it within an ongoing context of conversations. The voice and image may be recorded by various devices and the data transmitted in a variety of formats. An alternative implementation places some system functionality in a mobile device such as a smartphone or wearable device, with the remaining functionality resident in system servers attached to the internet. The system can apply rules to select and limit the voice data flowing to each user; rules prioritize the messages using context information such as user interest and user state. An image is fused to the voice message to form a comment. Additional image or voice annotation (or both) identifying the sender may be attached to the comment. Fused image(s) and voice annotation allow the user to quickly deduce the context of the comment.
    Type: Grant
    Filed: May 12, 2015
    Date of Patent: December 25, 2018
    Inventor: Stephen Davies
  • Patent number: 10127912
    Abstract: An apparatus comprising: an input configured to receive from at least two microphones at least two audio signals; at least two processor instances configured to generate separate output audio signal tracks from the at least two audio signals from the at least two microphones; a file processor configured to link the at least two output audio signal tracks within a file structure.
    Type: Grant
    Filed: December 10, 2012
    Date of Patent: November 13, 2018
    Assignee: Nokia Technologies Oy
    Inventors: Marko Tapani Yliaho, Ari Juhani Koski
  • Patent number: 10121461
    Abstract: Providing feedback on a musical performance performed with a musical instrument. An instrument profile associated with the musical instrument used to perform the musical performance is identified. The instrument profile comprises information relating to one or more tuning characteristics of the instrument. The pitch of notes of the musical performance are analyzed based on the instrument profile to determine a measure of tuning of the musical performance. A feedback signal is generated based on the determined measure of tuning.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: November 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Adrian D. Dick, Doina L. Klinger, David J. Nice, Rebecca Quaggin-Mitchell
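One plausible "measure of tuning" for this family of patents (this entry and the two related IBM entries below) is the average deviation, in cents, of performed notes from the nearest equal-tempered pitch under the instrument profile's reference. The A4-reference profile format and the averaging choice are assumptions; the abstract only says the profile carries tuning characteristics:

```python
import math

def cents_off(freq_hz, reference_a4=440.0):
    """Signed distance in cents from the nearest equal-tempered note,
    relative to the instrument profile's A4 reference frequency."""
    semitones = 12 * math.log2(freq_hz / reference_a4)
    return 100 * (semitones - round(semitones))

def tuning_measure(note_freqs, reference_a4=440.0):
    """Average absolute deviation (in cents) across a performance;
    smaller values mean the performance was better in tune."""
    return sum(abs(cents_off(f, reference_a4)) for f in note_freqs) / len(note_freqs)

print(round(cents_off(440.0), 2))  # exactly in tune: 0.0
print(round(cents_off(446.0), 2))  # sharp by roughly 23 cents
```

A feedback signal could then be generated whenever the measure exceeds some threshold, e.g. 10 cents.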
  • Patent number: 10115380
    Abstract: Providing feedback on a musical performance performed with a musical instrument. An instrument profile associated with the musical instrument used to perform the musical performance is identified. The instrument profile comprises information relating to one or more tuning characteristics of the instrument. The pitch of notes of the musical performance are analyzed based on the instrument profile to determine a measure of tuning of the musical performance. A feedback signal is generated based on the determined measure of tuning.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: October 30, 2018
    Assignee: International Business Machines Corporation
    Inventors: Adrian D. Dick, Doina L. Klinger, David J. Nice, Rebecca Quaggin-Mitchell
  • Patent number: 10096308
    Abstract: Providing feedback on a musical performance performed with a musical instrument. An instrument profile associated with the musical instrument used to perform the musical performance is identified. The instrument profile comprises information relating to one or more tuning characteristics of the instrument. The pitch of notes of the musical performance are analyzed based on the instrument profile to determine a measure of tuning of the musical performance. A feedback signal is generated based on the determined measure of tuning.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: October 9, 2018
    Assignee: International Business Machines Corporation
    Inventors: Adrian D. Dick, Doina L. Klinger, David J. Nice, Rebecca Quaggin-Mitchell
  • Patent number: 10079890
    Abstract: A system and method for dynamically establishing an ad hoc network amongst a plurality of communication devices in a beyond-audible frequency range is disclosed. The system comprises a first communication device that transmits a quantity of data to a second communication device. The first communication device comprises an input capturing module that receives the quantity of data from a broadcaster in a format and converts it into a quantity of modulated data, and an identity generating module that generates a temporary identity for a broadcasting user. The second communication device then receives the data broadcast from the first communication device and determines a probabilistic confidence level for the quantity of modulated data. A transceiver implemented in the first and second communication devices transmits and receives the quantity of data in conjunction with the temporary identity within a predefined proximity of each device.
    Type: Grant
    Filed: December 4, 2012
    Date of Patent: September 18, 2018
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Aniruddha Sinha, Arpan Pal, Dhiman Chattopadhyay
  • Patent number: 10037756
    Abstract: Techniques for analyzing long-term audio recordings are provided. In one embodiment, a computing device can record audio captured from an environment of a user on a long-term basis (e.g., on the order of weeks, months, or years). The computing device can store the recorded audio on a local or remote storage device. The computing device can then analyze the recorded audio based on one or more predefined rules and can enable one or more actions based on that analysis.
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: July 31, 2018
    Assignee: Sensory, Incorporated
    Inventors: Bryan Pellom, Todd F. Mozer
  • Patent number: 10019995
    Abstract: A method for teaching a language, comprising: accessing, using a processor of a computer, an audio recording corresponding to a series of pitch patterns; accessing a cantillation representation of said series of pitch patterns, said cantillation representation comprising a plurality of cantillations; processing said audio recording to match the pitch patterns to the cantillations in said cantillation representation; calculating, using said processor, a start time and an end time for each of the series of cantillations as compared to said audio recording; outputting, using said processor, an aligned output representation comprising an identification of each of the cantillations, an identification of the start time for each of the cantillations, and an identification of the end time for each of the cantillations; receiving a request to play a requested pitch pattern; looking up said requested pitch pattern in said aligned output representation to retrieve one or more requested start times and one or more requested…
    Type: Grant
    Filed: September 1, 2011
    Date of Patent: July 10, 2018
    Inventors: Norman Abramovitz, Jonathan Stiebel
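The final lookup step in this abstract, finding the start/end times of a requested pitch pattern in the aligned output, can be sketched as below; the aligned-output record shape and the cantillation names are assumptions for illustration:

```python
def lookup_pattern(aligned_output, pattern):
    """Return (start, end) pairs for every occurrence of a requested
    cantillation in the aligned output, so each occurrence can be
    played back from the audio recording."""
    return [(c["start"], c["end"]) for c in aligned_output if c["name"] == pattern]

# Hypothetical aligned output for a short recording:
aligned = [
    {"name": "etnachta",  "start": 0.0, "end": 1.2},
    {"name": "sof-pasuk", "start": 1.2, "end": 2.0},
    {"name": "etnachta",  "start": 2.0, "end": 3.1},
]
print(lookup_pattern(aligned, "etnachta"))  # [(0.0, 1.2), (2.0, 3.1)]
```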
  • Patent number: 9772816
    Abstract: Example systems and methods may facilitate processing of voice commands using a hybrid system with automated processing and human guide assistance. An example method includes receiving a speech segment, determining a textual representation of the speech segment, causing one or more guide computing devices to display one or more portions of the textual representation, receiving input data from the one or more guide computing devices that identifies a plurality of chunks of the textual representation, determining an association between the identified chunks of the textual representation and corresponding semantic labels, and determining a digital representation of a task based on the identified chunks of the textual representation and the corresponding semantic labels.
    Type: Grant
    Filed: December 22, 2014
    Date of Patent: September 26, 2017
    Assignee: Google Inc.
    Inventors: Jeffrey Bigham, Walter Lasecki, Thiago Teixeira, Adrien Treuille
  • Patent number: 9767790
    Abstract: A voice retrieval apparatus executes processes of: converting a retrieval string into a phoneme string; obtaining, from a time length memory, a continuous time length for each phoneme contained in the converted phoneme string; deriving a plurality of time lengths corresponding to a plurality of utterance rates as candidate utterance time lengths of voices corresponding to the retrieval string based on the obtained continuous time length; specifying, for each of the plurality of time lengths, a plurality of likelihood obtainment segments having the derived time length within a time length of a retrieval sound signal; obtaining a likelihood showing a plausibility that the specified likelihood obtainment segment specified is a segment where the voices are uttered; and identifying, based on the obtained likelihood, for each of the specified likelihood obtainment segments, an estimation segment where utterance of the voices is estimated in the retrieval sound signal.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: September 19, 2017
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Hiroki Tomita
  • Patent number: 9754024
    Abstract: A voice retrieval apparatus executes processes of: obtaining, from a time length memory, a continuous time length for each phoneme contained in a phoneme string of a retrieval string; obtaining user-specified information on an utterance rate; changing the continuous time length for each obtained phoneme in accordance with the obtained information; deriving, based on the changed continuous time length, an utterance time length of voices corresponding to the retrieval string; specifying a plurality of likelihood obtainment segments of the derived utterance time length in a time length of a retrieval sound signal; obtaining a likelihood showing a plausibility that the specified likelihood obtainment segment is a segment where the voices are uttered; and identifying, based on the obtained likelihood, an estimation segment where, within the retrieval sound signal, utterance of the voices is estimated, the estimation segment being identified for each specified likelihood obtainment segment.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: September 5, 2017
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Hiroki Tomita
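The time-length derivation shared by this entry and the related Casio entry above (summing per-phoneme continuous time lengths, then scaling for several utterance rates) can be sketched as follows; the rate factors are illustrative assumptions:

```python
def utterance_time_lengths(phoneme_durations_ms, rate_factors=(0.75, 1.0, 1.25)):
    """Derive candidate utterance time lengths for a retrieval string's
    phoneme string by summing the per-phoneme continuous time lengths
    from the time length memory, then scaling the sum by several
    utterance-rate factors (slower, nominal, faster)."""
    base = sum(phoneme_durations_ms)
    return [base * r for r in rate_factors]

# Phonemes of a short query word, durations in milliseconds:
print(utterance_time_lengths([80, 120, 100]))  # [225.0, 300.0, 375.0]
```

Each candidate length would then define the width of the sliding likelihood-obtainment segments scanned across the retrieval sound signal.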
  • Patent number: 9672825
    Abstract: The present invention relates to implementing new ways of automatically and robustly evaluating agent performance, customer satisfaction, and campaign and competitor analysis in a call center, and it comprises an analysis consumer server, a call pre-processing module, a speech-to-text module, an emotion recognition module, a gender identification module, and a fraud detection module.
    Type: Grant
    Filed: January 3, 2013
    Date of Patent: June 6, 2017
    Assignee: SESTEK SES ILETISIM BILGISAYAR TEKNOLOJILERI SANAYI VE TICARET ANONIM SIRKETI
    Inventors: Mustafa Levent Arslan, Ali Haznedaroğlu
  • Patent number: 9519092
    Abstract: An apparatus includes an illumination module, an end reflector, and a beam splitter. The illumination module launches display light along a forward propagating path within an eyepiece. The end reflector is disposed at an opposite end of the eyepiece from the illumination module and reflects back the display light traveling along a reverse propagating path. The beam splitter is disposed in the forward propagating path between the end reflector and the illumination module. The beam splitter directs a first portion of the display light traveling along the forward propagating path out a first side of the eyepiece. The beam splitter directs a second portion of the display light traveling along the reverse propagation path out a second side of the eyepiece.
    Type: Grant
    Filed: March 21, 2012
    Date of Patent: December 13, 2016
    Assignee: Google Inc.
    Inventors: Xiaoyu Miao, Ehsan Saeedi
  • Patent number: 9501568
    Abstract: In an example context of identifying live audio, an audio processor machine accesses audio data that represents a query sound and creates a spectrogram from the audio data. Each segment of the spectrogram represents a different time slice in the query sound. For each time slice, the audio processor machine determines one or more dominant frequencies and an aggregate energy value that represents a combination of all the energy for that dominant frequency and its harmonics. The machine creates a harmonogram by representing these aggregate energy values at these dominant frequencies in each time slice. The harmonogram thus may represent the strongest harmonic components within the query sound. The machine can identify the query sound by comparing its harmonogram to other harmonograms of other sounds and may respond to a user's submission of the query sound by providing an identifier of the query sound to the user.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: November 22, 2016
    Assignee: Gracenote, Inc.
    Inventor: Zafar Rafii
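The core of the harmonogram construction in this abstract, aggregating energy at a dominant frequency and its harmonics for each time slice, can be sketched over discrete spectrum bins; the bin-indexed harmonic model and the five-harmonic limit are simplifying assumptions:

```python
def harmonic_energy(spectrum, f0_bin, num_harmonics=5):
    """Aggregate energy at a candidate dominant-frequency bin and its
    integer harmonics. `spectrum` holds one time slice's magnitude
    values, indexed by frequency bin."""
    total = 0.0
    for k in range(1, num_harmonics + 1):
        idx = f0_bin * k
        if idx < len(spectrum):
            total += spectrum[idx]
    return total

def harmonogram_column(spectrum, num_harmonics=5):
    """For one time slice, pick the bin whose harmonic series carries
    the most energy and return (bin, aggregate energy) -- one column
    of the harmonogram."""
    best = max(range(1, len(spectrum)),
               key=lambda b: harmonic_energy(spectrum, b, num_harmonics))
    return best, harmonic_energy(spectrum, best, num_harmonics)

# Toy slice: energy at bins 2, 4, 6 (a fundamental at bin 2 plus harmonics)
spec = [0, 0, 3.0, 0, 2.0, 0, 1.0, 0]
print(harmonogram_column(spec))  # (2, 6.0)
```

Matching a query sound then reduces to comparing its sequence of harmonogram columns against those of reference sounds.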
  • Patent number: 9495955
    Abstract: Features are disclosed for generating acoustic models from an existing corpus of data. Methods for generating the acoustic models can include receiving at least one characteristic of a desired acoustic model, selecting training utterances corresponding to the characteristic from a corpus comprising audio data and corresponding transcription data, and generating an acoustic model based on the selected training utterances.
    Type: Grant
    Filed: January 2, 2013
    Date of Patent: November 15, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Frederick Victor Weber, Jeffrey Penrod Adams
  • Patent number: 9408572
    Abstract: Support structures for positioning sensors on a physiologic tunnel for measuring physical, chemical and biological parameters of the body and to produce an action according to the measured value of the parameters. The support structure includes a sensor fitted on the support structures using a special geometry for acquiring continuous and undisturbed data on the physiology of the body. Signals are transmitted to a remote station by wireless transmission such as by electromagnetic waves, radio waves, infrared, sound and the like or by being reported locally by audio or visual transmission. The physical and chemical parameters include brain function, metabolic function, hydrodynamic function, hydration status, levels of chemical compounds in the blood, and the like. The support structure includes patches, clips, eyeglasses, head mounted gear and the like, containing passive or active sensors positioned at the end of the tunnel with sensing systems positioned on and accessing a physiologic tunnel.
    Type: Grant
    Filed: April 15, 2015
    Date of Patent: August 9, 2016
    Assignee: GEELUX HOLDINGS, LTD.
    Inventor: Marcio Marc Abreu
  • Patent number: 9412363
    Abstract: A model-based approach for on-screen item selection and disambiguation is provided. An utterance may be received by a computing device in response to a display of a list of items for selection on a display screen. A disambiguation model may then be applied to the utterance. The disambiguation model may be utilized to determine whether the utterance is directed to at least one of the displayed items, extract referential features from the utterance, and identify an item from the list corresponding to the utterance based on the extracted referential features. The computing device may then perform an action which includes selecting the identified item associated with the utterance.
    Type: Grant
    Filed: March 3, 2014
    Date of Patent: August 9, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ruhi Sarikaya, Fethiye Asli Celikyilmaz, Zhaleh Feizollahi, Larry Paul Heck, Dilek Z. Hakkani-Tur
  • Patent number: 9384736
    Abstract: Techniques disclosed herein include systems and methods for managing user interface responses to user input including spoken queries and commands. This includes providing incremental user interface (UI) response based on multiple recognition results about user input that are received with different delays. Such techniques include providing an initial response to a user at an early time, before remote recognition results are available. Systems herein can respond incrementally by initiating an initial UI response based on first recognition results, and then modify the initial UI response after receiving secondary recognition results. Since an initial response begins immediately, instead of waiting for results from all recognizers, it reduces the perceived delay by the user before complete results get rendered to the user.
    Type: Grant
    Filed: August 21, 2012
    Date of Patent: July 5, 2016
    Assignee: Nuance Communications, Inc.
    Inventors: Martin Labsky, Tomas Macek, Ladislav Kunc, Jan Kleindienst
  • Patent number: 9330720
    Abstract: Methods, systems and computer readable media for altering an audio output are provided. In some embodiments, the system may change the original frequency content of an audio data file to a second frequency content so that a recorded audio track will sound as if a different person had recorded it when it is played back. In other embodiments, the system may receive an audio data file and a voice signature, and it may apply the voice signature to the audio data file to alter the audio output of the audio data file. In that instance, the audio data file may be a textual representation of a recorded audio data file.
    Type: Grant
    Filed: April 2, 2008
    Date of Patent: May 3, 2016
    Assignee: Apple Inc.
    Inventor: Michael M. Lee
  • Patent number: 9301719
    Abstract: Support structures for positioning sensors on a physiologic tunnel for measuring physical, chemical and biological parameters of the body and to produce an action according to the measured value of the parameters. The support structure includes a sensor fitted on the support structures using a special geometry for acquiring continuous and undisturbed data on the physiology of the body. Signals are transmitted to a remote station by wireless transmission such as by electromagnetic waves, radio waves, infrared, sound and the like or by being reported locally by audio or visual transmission. The physical and chemical parameters include brain function, metabolic function, hydrodynamic function, hydration status, levels of chemical compounds in the blood, and the like. The support structure includes patches, clips, eyeglasses, head mounted gear and the like, containing passive or active sensors positioned at the end of the tunnel with sensing systems positioned on and accessing a physiologic tunnel.
    Type: Grant
    Filed: February 13, 2015
    Date of Patent: April 5, 2016
    Assignee: GEELUX HOLDING, LTD.
    Inventor: Marcio Marc Abreu
  • Patent number: 9286708
    Abstract: An information device includes an image receiving unit receiving an information terminal image having a specific region composed of pixels having a same feature value from an information terminal, the feature value being luminance or chromaticity; a specific region detecting unit detecting the specific region within the information terminal image, based on feature values of pixels composing the information terminal image received by the image receiving unit; an information device image creating unit creating an information device image related to a function provided to the information device; a composite image creating unit creating a composite image where the information device image created by the information device image creating unit is embedded in the specific region detected by the specific region detecting unit within the information terminal image; and a display control unit displaying the composite image created by the composite image creating unit on a display apparatus.
    Type: Grant
    Filed: February 10, 2015
    Date of Patent: March 15, 2016
    Assignee: JVC KENWOOD Corporation
    Inventor: Hiroaki Takanashi
  • Patent number: 9159338
    Abstract: Systems and methods of rendering a textual animation are provided. The methods include receiving an audio sample of an audio signal that is being rendered by a media rendering source. The methods also include receiving one or more descriptors for the audio signal based on at least one of a semantic vector, an audio vector, and an emotion vector. Based on the one or more descriptors, a client device may render the textual transcriptions of vocal elements of the audio signal in an animated manner. The client device may further render the textual transcriptions of the vocal elements of the audio signal to be substantially in synchrony to the audio signal being rendered by the media rendering source. In addition, the client device may further receive an identification of a song corresponding to the audio sample, and may render lyrics of the song in an animated manner.
    Type: Grant
    Filed: December 3, 2010
    Date of Patent: October 13, 2015
    Assignee: Shazam Entertainment Ltd.
    Inventors: Rahul Powar, Avery Li-Chun Wang
  • Patent number: 9076347
    Abstract: A system and methods are provided for analyzing pronunciation, detecting errors, and providing automatic feedback to help non-native speakers improve pronunciation of a foreign language; the system employs publicly available, high-accuracy third-party automatic speech recognizers available via the Internet to analyze and identify mispronunciations.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: July 7, 2015
    Assignee: Better Accent, LLC
    Inventors: Julia Komissarchik, Edward Komissarchik
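Detecting mispronunciations from recognizer output can be sketched as an alignment between the expected and recognized phoneme sequences (an illustrative approximation; the patent itself delegates recognition to third-party recognizers):

```python
import difflib

def find_mispronunciations(expected, recognized):
    """Align expected vs. recognized phoneme sequences; report mismatches.

    Returns (edit_type, expected_phones, recognized_phones) tuples for
    every region where the learner's speech diverged from the target.
    """
    matcher = difflib.SequenceMatcher(None, expected, recognized)
    errors = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            errors.append((tag, expected[i1:i2], recognized[j1:j2]))
    return errors
```

For example, a learner saying "sis" for "this" yields a single substitution at the first phoneme.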
  • Patent number: 9069391
    Abstract: A method for inputting a Korean character using a touch screen of a mobile device determines a vowel as a neutral vowel according to multi-touches centered around a consonant input key displayed on the touch screen. The method can minimize the number of character input keys arranged on the touch screen utilized in the mobile device, and can combine the Korean characters through the minimal touch action for inputting the Korean character.
    Type: Grant
    Filed: November 4, 2010
    Date of Patent: June 30, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Sung-Jae Hwang
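Once the initial consonant, vowel, and optional final consonant have been selected by touch, combining them into a single character follows the standard Unicode Hangul composition arithmetic, which any such input method ultimately relies on:

```python
def compose_hangul(initial, medial, final=0):
    """Map jamo indices to a precomposed Hangul syllable.

    initial: 0-18 (leading consonant), medial: 0-20 (vowel),
    final: 0-27 (trailing consonant, 0 = none).
    Precomposed syllables start at U+AC00, with 21 vowels and
    28 final-consonant slots per initial consonant.
    """
    return chr(0xAC00 + (initial * 21 + medial) * 28 + final)
```

For instance, initial ㅎ (index 18), vowel ㅏ (index 0), and final ㄴ (index 4) compose to '한'.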
  • Patent number: 9037470
    Abstract: Apparatus and methods are provided for using automatic speech recognition to analyze a voice interaction and verify compliance of an agent reading a script to a client during the voice interaction. In one aspect of the invention, a communications system includes a user interface, a communications network, and a call center having an automatic speech recognition component. In other aspects of the invention, a script compliance method includes the steps of conducting a voice interaction between an agent and a client and evaluating the voice interaction with an automatic speech recognition component adapted to analyze the voice interaction and determine whether the agent has adequately followed the script. In still further aspects of the invention, the duration of a given interaction can be analyzed, either apart from or in combination with the script compliance analysis above, to seek to identify instances of agent non-compliance, of fraud, or of quality-analysis issues.
    Type: Grant
    Filed: June 25, 2014
    Date of Patent: May 19, 2015
    Assignee: West Business Solutions, LLC
    Inventors: Mark J. Pettay, Fonda J. Narke
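A minimal version of the compliance check compares the recognized agent transcript against the required script with a similarity threshold (the tokenization and the 0.8 threshold are illustrative assumptions, not values from the patent):

```python
import difflib

def check_script_compliance(script, transcript, threshold=0.8):
    """Return (similarity, compliant) for an agent transcript vs. the script."""
    matcher = difflib.SequenceMatcher(
        None, script.lower().split(), transcript.lower().split()
    )
    ratio = matcher.ratio()
    return ratio, ratio >= threshold
```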
  • Patent number: 9031828
    Abstract: Various embodiments described herein facilitate multi-lingual communications. The systems and methods of some embodiments may enable multi-lingual communications through different modes of communications including, for example, Internet-based chat, e-mail, text-based mobile phone communications, postings to online forums, postings to online social media services, and the like. Certain embodiments may implement communications systems and methods that translate text between two or more languages (e.g., spoken), while handling/accommodating for one or more of the following in the text: specialized/domain-related jargon, abbreviations, acronyms, proper nouns, common nouns, diminutives, colloquial words or phrases, and profane words or phrases.
    Type: Grant
    Filed: March 18, 2014
    Date of Patent: May 12, 2015
    Assignee: Machine Zone, Inc.
    Inventors: Gabriel Leydon, Francois Orsini, Nikhil Bojja, Shailen Karur
  • Patent number: 9026449
    Abstract: The invention relates to a communication system having a display unit (2) and a virtual being (3) that can be visually represented on the display unit (2) and that is designed for communication by means of natural speech with a natural person, wherein at least one interaction symbol (6, 7) can be represented on the display unit (2), by means of which the natural speech dialog between the virtual being (3) and the natural person is supported such that an achieved dialog state can be indicated, and/or additional information depending on the achieved dialog state and/or redundant information can be invoked. The invention further relates to a method for representing information of a communication between a virtual being and a natural person.
    Type: Grant
    Filed: May 15, 2009
    Date of Patent: May 5, 2015
    Assignee: Audi AG
    Inventors: Stefan Sellschopp, Valentin Nicolescu, Helmut Krcmar
  • Patent number: 8994522
    Abstract: The described method and system provide for HMI steering for a telematics-equipped vehicle based on likelihood to exceed eye glance guidelines. By determining whether a task is likely to cause the user to exceed eye glance guidelines, alternative HMI processes may be presented to a user to reduce ASGT and EORT and increase compliance with eye glance guidelines. By allowing a user to navigate through long lists of items through vocal input, T9 text input, or heuristic processing rather than through conventional presentation of the full list, a user is much more likely to comply with the eye glance guidelines. This invention is particularly useful in contexts where users may be searching for one item out of a plurality of potential items, for example, within the context of hands-free calling contacts, playing back audio files, or finding points of interest during GPS navigation.
    Type: Grant
    Filed: May 26, 2011
    Date of Patent: March 31, 2015
    Assignees: General Motors LLC, GM Global Technology Operations LLC
    Inventors: Steven C. Tengler, Bijaya Aryal, Scott P. Geisler, Michael A. Wuergler
  • Patent number: 8990093
    Abstract: Methods and arrangements for visually representing audio content in a voice application. A display is connected to a voice application, and an image is displayed on the display, the image comprising a main portion and at least one subsidiary portion, the main portion representing a contextual entity of the audio content and the at least one subsidiary portion representing at least one participatory entity of the audio content. The at least one subsidiary portion is displayed without text, and the image is changed responsive to changes in audio content in the voice application.
    Type: Grant
    Filed: August 29, 2012
    Date of Patent: March 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Amit Anil Nanavati, Nitendra Rajput
  • Patent number: 8983841
    Abstract: A network communication node includes an audio outputter that outputs an audible representation of data to be provided to a requester. The network communication node also includes a processor that determines a categorization of the data to be provided to the requester and that varies a pause between segments of the audible representation of the data in accordance with the categorization of the data to be provided to the requester.
    Type: Grant
    Filed: July 15, 2008
    Date of Patent: March 17, 2015
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Gregory Pulz, Steven Lewis, Charles Rajnai
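Varying the pause between segments according to the data's categorization could be sketched as a lookup that interleaves SSML-style breaks between audible segments (the category names and durations below are hypothetical, not taken from the patent):

```python
# Hypothetical pause lengths (ms) per data category.
PAUSE_MS = {"phone_number": 500, "account_balance": 300, "default": 150}

def paced_output(segments, category):
    """Join audible segments with a category-dependent pause marker."""
    pause = PAUSE_MS.get(category, PAUSE_MS["default"])
    marker = f'<break time="{pause}ms"/>'
    return marker.join(segments)
```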
  • Patent number: 8983849
    Abstract: Systems and methods for intelligent language models that can be used across multiple devices are provided. Some embodiments provide for a client-server system for integrating change events from each device running a local language processing system into a master language model. The change events can be integrated, not only into the master model, but also into each of the other local language models. As a result, some embodiments enable restoration to new devices as well as synchronization of usage across multiple devices. In addition, real-time messaging can be used on selected messages to ensure that high priority change events are updated quickly across all active devices. Using a subscription model driven by a server infrastructure, utilization logic on the client side can also drive selective language model updates.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: March 17, 2015
    Assignee: Nuance Communications, Inc.
    Inventors: Andrew Phillips, David Kay, Erland Unruh, Eric Jun Fu
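The integration of one device's change events into the master model, followed by propagation to every other device's local model, can be sketched with unigram counts (a deliberate simplification of a full language model; the data structures are assumptions for illustration):

```python
from collections import Counter

def integrate_change_events(master, local_models, source_id, events):
    """Fold one device's change events into the master model, then
    propagate them to every *other* device's local model.

    `events` is a list of (word, count_delta) pairs from `source_id`.
    """
    for word, delta in events:
        master[word] += delta
    for device_id, model in local_models.items():
        if device_id != source_id:
            for word, delta in events:
                model[word] += delta
    return master
```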
  • Patent number: 8972259
    Abstract: A method and system for teaching non-lexical speech effects includes delexicalizing a first speech segment to provide a first prosodic speech signal and data indicative of the first prosodic speech signal is stored in a computer memory. The first speech segment is audibly played to a language student and the student is prompted to recite the speech segment. The speech uttered by the student in response to the prompt, is recorded.
    Type: Grant
    Filed: September 9, 2010
    Date of Patent: March 3, 2015
    Assignee: Rosetta Stone, Ltd.
    Inventors: Joseph Tepperman, Theban Stanley, Kadri Hacioglu
  • Patent number: 8972240
    Abstract: An "Interactive Word Lattice" provides a user interface for interacting with and selecting user-modifiable paths through a lattice-based representation of alternative suggested text segments in response to a user's text segment input, such as phrases, sentences, paragraphs, entire documents, etc. More specifically, the user input is provided to a trained paraphrase generation model that returns a plurality of alternative text segments having the same or similar meaning as the original user input. An interactive graphical lattice-based representation of the alternative text segments is then presented to the user. One or more words of each alternative text segment represents a "node" of the lattice, while each connection between nodes represents a lattice "edge." Both nodes and edges are user modifiable. Each possible path through the lattice corresponds to a different alternative text segment. Users select a path through the lattice to choose an alternative text to the original input.
    Type: Grant
    Filed: August 18, 2011
    Date of Patent: March 3, 2015
    Assignee: Microsoft Corporation
    Inventors: Christopher John Brockett, William Brennan Dolan
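Each path through such a lattice corresponds to one alternative text segment, so enumerating the alternatives is a walk over a directed acyclic graph (the node labels and the `<s>`/`</s>` sentinels below are assumptions for illustration):

```python
def lattice_paths(lattice, node="<s>"):
    """Yield every word sequence along a path from <s> to </s>.

    `lattice` maps a node to a list of (word, next_node) edges.
    """
    if node == "</s>":
        yield []
        return
    for word, nxt in lattice.get(node, []):
        for rest in lattice_paths(lattice, nxt):
            yield [word] + rest
```

For example, a lattice whose middle node offers "quick" or "rapid" yields exactly two alternative segments.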
  • Patent number: 8959024
    Abstract: Methods and arrangements for visually representing audio content in a voice application. A display is connected to a voice application, and an image is displayed on the display, the image comprising a main portion and at least one subsidiary portion, the main portion representing a contextual entity of the audio content and the at least one subsidiary portion representing at least one participatory entity of the audio content. The at least one subsidiary portion is displayed without text, and the image is changed responsive to changes in audio content in the voice application.
    Type: Grant
    Filed: August 24, 2011
    Date of Patent: February 17, 2015
    Assignee: International Business Machines Corporation
    Inventors: Amit Anil Nanavati, Nitendra Rajput
  • Patent number: 8949134
    Abstract: A diagnostic tool for speech recognition applications is provided, which enables an administrator to collect multiple recorded speech sessions. The administrator can then search for various failure points common to one or more of the recorded sessions in order to get a list of all sessions that have the same failure points. The invention allows the administrator to play back the session, or replay any portion of it, to see the flow of the application and the recorded utterances. The invention provides the administrator with information about how to maximize the efficiency of the application, which enables the administrator to edit the application to avoid future failure points.
    Type: Grant
    Filed: September 13, 2004
    Date of Patent: February 3, 2015
    Assignee: Avaya Inc.
    Inventors: Jacob Levine, John Muller, Christopher Passaretti, Wu Chingfa
  • Patent number: RE48126
    Abstract: A technique for synchronizing a visual browser and a voice browser. A visual browser is used to navigate through visual content, such as WML pages. During the navigation, the visual browser creates a historical record of events that have occurred during the navigation. The voice browser uses this historical record to navigate the content in the same manner as occurred on the visual browser, thereby synchronizing to a state equivalent to that of the visual browser. The creation of the historical record may be performed by using a script to trap events, where the script contains code that records the trapped events. The synchronization technique may be used with a multi-modal application that permits the mode of input/output (I/O) to be changed between visual and voice browsers. When the mode is changed from visual to voice, the record of events captured by the visual browser is provided to the voice browser, thereby allowing the I/O mode to change seamlessly from visual to voice.
    Type: Grant
    Filed: September 1, 2011
    Date of Patent: July 28, 2020
    Assignee: GULA CONSULTING LIMITED LIABILITY COMPANY
    Inventors: Inderpal Singh Mumick, Sandeep Sibal
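The record-and-replay synchronization described above can be sketched as trapping navigation events in the visual browser and replaying the log in the voice browser (the class names, event shape, and page identifiers are hypothetical):

```python
class VisualBrowser:
    """Traps navigation events into a historical record as the user browses."""

    def __init__(self):
        self.page = "home"
        self.history = []

    def navigate(self, page):
        # A script trapping the event would append it to the record.
        self.history.append(("goto", page))
        self.page = page


class VoiceBrowser:
    """Replays a recorded event log to reach the same navigation state."""

    def __init__(self):
        self.page = "home"

    def synchronize(self, history):
        for event, page in history:
            if event == "goto":
                self.page = page
```

When the I/O mode switches from visual to voice, handing the visual browser's `history` to `synchronize` leaves both browsers on the same page.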