Application Patents (Class 704/270)
  • Patent number: 9558739
    Abstract: Methods and systems are provided for adapting a speech system. In one example a method includes: logging speech data from the speech system; processing the speech data for a pattern of a user competence associated with at least one of task requests and interaction behavior; and selectively updating at least one of a system prompt and an interaction sequence based on the user competence.
    Type: Grant
    Filed: October 22, 2013
    Date of Patent: January 31, 2017
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Robert D. Sims, III, Timothy J. Grost, Ron M. Hecht, Ute Winter
  • Patent number: 9552812
    Abstract: Various embodiments of the invention provide methods, systems, and computer-program products for predicting an outcome for an event of interest associated with a contact center communication. That is to say, various embodiments of the invention involve predicting an outcome for an event of interest associated with a party involved in a contact center communication based on characteristics and content of the communication conducted with the party by utilizing one or more classifier models.
    Type: Grant
    Filed: September 1, 2016
    Date of Patent: January 24, 2017
    Assignee: Noble Systems Corporation
    Inventors: Jason P. Ouimette, Christopher S. Haggerty
  • Patent number: 9547666
    Abstract: Location graph-based derivation of user attributes is disclosed. In various embodiments, location data associated with a user, such as a current and/or past location at which the user has been, is received. A user attribute data associated with the location data is determined and used to update a user profile associated with the user.
    Type: Grant
    Filed: October 31, 2014
    Date of Patent: January 17, 2017
    Assignee: NinthDecimal, Inc.
    Inventors: Kevin Ching, Grigory Sokol, Ahmad Fairiz Azizi, Luke Gain, Yury Zhyshko, Mark Dixon, Robert Abusaidi, Kevin McKenzie, John Raymond Klein, Leonid Blyukher, Jeff Pittelkau, David Staas
  • Patent number: 9547468
    Abstract: A system running on a mobile device such as a smartphone is configured to expose a user interface (UI) to enable a user to specify web pages that can be pinned to a start screen of the device. Once pinned, the user may launch a web page by voice command from any location on the UI or from within any experience that is currently being supported on the device. Thus, the user can be on a call with a friend talking about a new video game and then use a voice command to launch a web browser application on the mobile device that navigates to a pinned web page having information about the game's release date. Web pages can be readily pinned and unpinned from the start screen through the UI. When a web page is unpinned from the start screen, the system disables voice web navigation for it.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: January 17, 2017
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Cheng-Yi Yen, Derek Liddell, Kenneth Reneris, Charles Morris, Dieter Rindle, Tanvi Surti, Michael Stephens, Eka Tjung
  • Patent number: 9549068
    Abstract: A method for adaptive voice interaction includes monitoring voice communications between a service recipient and a service representative, measuring a set of voice communication features based upon the voice communications between the service recipient and the service representative, analyzing the set of voice communication features to generate emotion metric values, and generating a response based on the analysis of the set of voice communication features.
    Type: Grant
    Filed: January 28, 2015
    Date of Patent: January 17, 2017
    Assignee: Simple Emotion, Inc.
    Inventors: Akash Krishnan, Matthew Fernandez
  • Patent number: 9549249
Abstract: The disclosed active noise suppression headphone system is directed to a headphone system that is capable of substantially suppressing high or low frequency interfering noise that penetrates through a headphone earpiece from multiple directions. An external microphone mounted within a housing of a headphone earpiece senses ambient noise outside of the earpiece. The sensed ambient noise may be processed through at least one parallel filter bank arranged in at least one headphone earpiece. Each parallel filter bank may include adaptively linked filters. The output of these filters may be amplified based on weighting factors that are dependent upon the sensed ambient noise and that are generated by a filtered-x least mean square circuit. The amplified filtered outputs may be summed to generate an antinoise signal that is input to a loudspeaker within the headphone earpiece that substantially suppresses the ambient noise before it can be perceived by an end user of the headphones.
    Type: Grant
    Filed: June 20, 2013
    Date of Patent: January 17, 2017
    Assignee: AKG Acoustics GmbH
    Inventors: Alois Sontacchi, Robert Höldrich, Markus Flock
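    Illustrative sketch: the filtered-x least mean squares (FxLMS) update at the core of the abstract above, reduced to a single-channel form. The step size, filter length, and secondary-path estimate s_hat are assumptions for illustration, not the patented multi-filter-bank design.

```python
import numpy as np

def fxlms_antinoise(ref_noise, error_mic, s_hat, taps=64, mu=1e-3):
    """Minimal single-channel filtered-x LMS loop (illustration only).

    ref_noise : samples from the external reference microphone
    error_mic : residual noise at the ear (measured live in a real system;
                passed in as an array here for offline illustration)
    s_hat     : FIR estimate of the loudspeaker-to-ear (secondary) path
    """
    w = np.zeros(taps)                       # adaptive filter weights
    x_buf = np.zeros(taps)                   # reference-sample history
    fx_buf = np.zeros(taps)                  # filtered-x history
    sp_buf = np.zeros(len(s_hat))            # history for secondary-path filtering
    antinoise = np.zeros(len(ref_noise))

    for n in range(len(ref_noise)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = ref_noise[n]
        antinoise[n] = w @ x_buf             # anti-noise sent to the loudspeaker

        sp_buf = np.roll(sp_buf, 1); sp_buf[0] = ref_noise[n]
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = s_hat @ sp_buf   # "filtered x"

        w += mu * error_mic[n] * fx_buf      # LMS step toward cancelling the residual
    return w, antinoise
```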
  • Patent number: 9547716
    Abstract: A speech search method performed by a display device, the method including outputting media data including audio data, receiving a speech search command for additional data about the outputted media data from a user, the speech search command including at least one query word, determining whether the at least one query word matches a query term that is full and searchable, when the at least one query word matches the query term that is full and searchable, performing a search for the additional data using the query term, and when the at least one query word does not match the query term that is full and searchable, determining the query term from a predetermined amount of the audio data prior to receiving the speech search command and performing the search for the additional data using the query term.
    Type: Grant
    Filed: July 29, 2013
    Date of Patent: January 17, 2017
    Assignee: LG Electronics Inc.
    Inventor: Yongsin Kim
  • Patent number: 9542934
    Abstract: A computer-implemented method performed in connection with a computerized system incorporating a processing unit and a memory, the computer-implemented method involving: using the processing unit to generate a multi-modal language model for co-occurrence of spoken words and displayed text in the plurality of videos; selecting at least a portion of a first video; extracting a plurality of spoken words from the selected portion of the first video; extracting a first displayed text from the selected portion of the first video; and using the processing unit and the generated multi-modal language model to rank the extracted plurality of spoken words based on probability of occurrence conditioned on the extracted first displayed text.
    Type: Grant
    Filed: February 27, 2014
    Date of Patent: January 10, 2017
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Matthew L. Cooper, Dhiraj Joshi, Huizhong Chen
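    For illustration, ranking extracted spoken words by probability conditioned on displayed text can be sketched with plain co-occurrence counts and add-one smoothing; the count-based model and the (spoken_words, displayed_words) segment format are assumptions, not the patent's multi-modal language model.

```python
from collections import Counter
from itertools import product

def train_cooccurrence(segments):
    """segments: iterable of (spoken_words, displayed_words) pairs per video segment."""
    joint, text_counts = Counter(), Counter()
    for spoken, displayed in segments:
        for w, t in product(spoken, displayed):
            joint[(w, t)] += 1               # count word/text co-occurrences
        text_counts.update(displayed)
    return joint, text_counts

def rank_spoken_words(spoken, displayed, joint, text_counts, vocab_size):
    """Rank extracted spoken words by smoothed P(word | displayed text)."""
    def score(word):
        return sum((joint[(word, t)] + 1) / (text_counts[t] + vocab_size)
                   for t in displayed)
    return sorted(set(spoken), key=score, reverse=True)
```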
  • Patent number: 9536527
    Abstract: A speech-based system is configured to use its audio-based user interface to present various types of device status information such as wireless signal strengths, communication parameters, battery levels, and so forth. In described embodiments, the system is configured to understand spoken user requests for device and system status. For example, the user may speak a request to obtain the current wireless signal strength of the speech-based system. The speech-based system may respond by determining the signal strength and by playing speech or other sound informing the user of the signal strength. Furthermore, the system may monitor operational parameters to detect conditions that may degrade the user experience, and may report such conditions using generated speech or other sounds.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: January 3, 2017
    Assignee: Amazon Technologies, Inc.
    Inventor: Ty Loren Carlson
  • Patent number: 9529493
    Abstract: A jacket image receiver acquires data for music to be played back and, in addition, a related image related to the music. A three-dimensional image generating unit displays the related image related to music played back in the past along with the related image related to the music currently played back, arranging the images in a three-dimensional space. The three-dimensional image generating unit flickers an image representing a water surface in order to create a visual effect that makes the related images appear floating on the water surface.
    Type: Grant
    Filed: October 27, 2009
    Date of Patent: December 27, 2016
    Assignees: SONY CORPORATION, SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Ryuji Nakayama
  • Patent number: 9525686
    Abstract: A method for determining if a user of a computer system is a human. A processor receives an indication that a computer security program is needed and acquires at least one image depicting a first string of characters including at least a first and second set of one or more characters. A processor assigns a substitute character to be used as input for each of the second set of one or more characters. A processor presents the at least one image and an indication of the substitute character and when to use the substitute character to the user. A processor receives a second string of characters from the user. A processor determines whether the second string of characters substantially matches the first string of characters based on the substitute character assigned to each of the second set of one or more characters and determines whether the user is a human.
    Type: Grant
    Filed: May 16, 2016
    Date of Patent: December 20, 2016
    Assignee: International Business Machines Corporation
    Inventors: Michael S. Brown, Carlos F. Franca da Fonseca, Neil I. Readshaw
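    A minimal sketch of the substitute-character match described in the abstract above; the substitution mapping and example strings are invented for illustration.

```python
def matches_with_substitutes(first_string, user_input, substitutes):
    """Return True if user_input reproduces first_string, given that every
    character in `substitutes` must be typed as its assigned stand-in.

    substitutes: dict mapping characters of the second set to their substitute,
                 e.g. {'@': '#', '7': '#'} (illustrative values).
    """
    expected = ''.join(substitutes.get(c, c) for c in first_string)
    return user_input == expected

# Example: the image shows "a@b7"; '@' and '7' must be entered as '#'
assert matches_with_substitutes("a@b7", "a#b#", {'@': '#', '7': '#'})
```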
  • Patent number: 9524713
    Abstract: According to some embodiments, a user device may receive business enterprise information from a remote enterprise server. The user device may then automatically convert at least some of the business enterprise information into speech output provided to a user of the user device. Speech input from the user may be received via and converted by the user device. The user device may then interact with the remote enterprise server in accordance with the converted speech input and the business enterprise information.
    Type: Grant
    Filed: May 30, 2012
    Date of Patent: December 20, 2016
    Assignee: SAP SE
    Inventors: Guy Blank, Guy Soffer
  • Patent number: 9514744
Abstract: Techniques for conversion of non-back-off language models for use in speech decoders. For example, an apparatus is configured to convert a non-back-off language model to a back-off language model. The converted back-off language model is pruned. The converted back-off language model is usable for decoding speech.
    Type: Grant
    Filed: August 12, 2013
    Date of Patent: December 6, 2016
    Assignee: International Business Machines Corporation
    Inventors: Ebru Arisoy, Bhuvana Ramabhadran, Abhinav Sethy, Stanley Chen
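    The conversion idea can be illustrated for bigrams: query explicit conditionals from a non-back-off model, keep only the top entries per history (a crude stand-in for pruning), and compute back-off weights so the remaining probability mass flows to the unigram distribution. This is the generic back-off construction, assumed here for illustration, not IBM's specific method.

```python
def to_backoff_bigram(cond_probs, unigram, keep_per_history=20):
    """cond_probs : {history: {word: P(word|history)}} from a non-back-off model
    unigram    : {word: P(word)} used as the lower-order distribution
    Returns (kept bigrams, back-off weight per history)."""
    kept, alpha = {}, {}
    for h, dist in cond_probs.items():
        top = dict(sorted(dist.items(), key=lambda kv: kv[1],
                          reverse=True)[:keep_per_history])    # crude pruning
        kept[h] = top
        mass_kept = sum(top.values())
        uni_mass = sum(unigram.get(w, 0.0) for w in top)
        alpha[h] = (1.0 - mass_kept) / max(1.0 - uni_mass, 1e-12)  # back-off weight
    return kept, alpha

def backoff_prob(w, h, kept, alpha, unigram):
    """Standard back-off lookup: explicit bigram if kept, else weighted unigram."""
    if w in kept.get(h, {}):
        return kept[h][w]
    return alpha.get(h, 1.0) * unigram.get(w, 1e-12)
```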
  • Patent number: 9502033
    Abstract: A speech recognition client sends a speech stream and control stream in parallel to a server-side speech recognizer over a network. The network may be an unreliable, low-latency network. The server-side speech recognizer recognizes the speech stream continuously. The speech recognition client receives recognition results from the server-side recognizer in response to requests from the client. The client may remotely reconfigure the state of the server-side recognizer during recognition.
    Type: Grant
    Filed: February 20, 2015
    Date of Patent: November 22, 2016
    Assignee: MModal IP LLC
    Inventors: Eric Carraux, Detlef Koll
  • Patent number: 9503579
    Abstract: A method of evaluating scripts in an interpersonal communication includes monitoring a customer service interaction. At least one portion of a script is identified. At least one script requirement is determined. A determination is made whether the at least one portion of the script meets the at least one script requirement. An alert is generated indicative of a non-compliant script.
    Type: Grant
    Filed: January 17, 2014
    Date of Patent: November 22, 2016
    Assignee: VERINT SYSTEMS LTD.
    Inventors: Joseph Watson, Christopher J. Jeffs, Oren Stern, Galia Zacay, Omer Ziv
  • Patent number: 9503462
    Abstract: A method for authenticating communicating parties is disclosed. In the method biometric information associated with a first party is generated based on a recording of the first party presenting a predefined input parameter. Said biometric information may then be transmitted to a second party. Authenticity of a security parameter associated with the first party can then be verified based on said biometric information.
    Type: Grant
    Filed: February 8, 2007
    Date of Patent: November 22, 2016
    Assignee: Nokia Technologies Oy
    Inventors: Nadarajah Asokan, Govind Krishnamurthi, Tat Chan
  • Patent number: 9495460
    Abstract: Merging search results is required, for example, where an information retrieval system issues a query to multiple sources and obtains multiple results lists. In an embodiment a search engine at an Enterprise domain sends a query to the Enterprise search engine and also to a public Internet search engine. In embodiments, results lists obtained from different sources are merged using a merging model which is learnt using a machine learning process and updates when click-through data is observed for example. In examples, user information available in the Enterprise domain is used to influence the merging process to improve the relevance of results. In some examples, the user information is used for query modification. In an embodiment a user is able to impersonate a user of a specified group in order to promote particular results.
    Type: Grant
    Filed: May 27, 2009
    Date of Patent: November 15, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael J. Taylor, Filip Radlinski, Milad Shokouhi
  • Patent number: 9489934
    Abstract: A method for selecting music based on face recognition, a music selecting system and an electronic apparatus are provided. The method includes the following steps: accessing a database to retrieve a plurality of song emotion coordinates corresponding to a plurality of songs; mapping the song emotion coordinates to an emotion coordinate graph; capturing a human face image; identifying an emotion state corresponding to the human face image, and transforming the emotion state to a current emotion coordinate; mapping the current emotion coordinate to the emotion coordinate graph; updating a song playlist according to a relative position between the current emotion coordinate and a target emotion coordinate, wherein the song playlist includes a plurality of songs to be played that direct the current emotion coordinate to the target emotion coordinate.
    Type: Grant
    Filed: May 22, 2014
    Date of Patent: November 8, 2016
    Assignee: National Chiao Tung University
    Inventors: Kai-Tai Song, Chao-Yu Lin
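    As a sketch of the playlist update, one can order candidate songs so each pick lies near the next waypoint on the line from the current emotion coordinate to the target; the valence/arousal coordinates and the greedy waypoint heuristic are assumptions, not the patented selection rule.

```python
import math

def build_playlist(song_coords, current, target, length=5):
    """Greedy illustrative ordering toward a target emotion coordinate.

    song_coords : {song_id: (valence, arousal)}
    current, target : (valence, arousal) tuples
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    playlist, remaining = [], dict(song_coords)
    for step in range(1, length + 1):
        if not remaining:
            break
        t = step / length
        waypoint = (current[0] + t * (target[0] - current[0]),
                    current[1] + t * (target[1] - current[1]))
        song = min(remaining, key=lambda s: dist(remaining[s], waypoint))
        playlist.append(song)                # song nearest the next waypoint
        remaining.pop(song)
    return playlist
```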
  • Patent number: 9489375
    Abstract: Inputs provided into user interface elements of an application are observed. Records are made of the inputs and the state(s) the application was in while the inputs were provided. For each state, a corresponding language model is trained based on the input(s) provided to the application while the application was in that state. When the application is next observed to be in a previously-observed state, a language model associated with the application's current state is applied to recognize speech input provided by a user and thereby to generate speech recognition output that is provided to the application. An application's state at a particular time may include the user interface element(s) that are displayed and/or in focus at that time, and is determined by an operating system hooking component embedded in the automatic speech recognition system.
    Type: Grant
    Filed: June 19, 2012
    Date of Patent: November 8, 2016
    Assignee: MModal IP LLC
    Inventors: Detlef Koll, Michael Finke
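    A toy version of keeping one language model per observed application state and applying it when that state recurs; the state key, unigram "model", and hypothesis-scoring interface are simplifications assumed for illustration.

```python
from collections import Counter, defaultdict

class PerStateModels:
    """Toy registry of per-application-state unigram language models."""

    def __init__(self):
        self.models = defaultdict(Counter)   # state key -> word counts

    def observe(self, state, typed_text):
        """Record input provided while the application was in `state`."""
        self.models[state].update(typed_text.split())

    def recognize(self, state, hypotheses):
        """Pick the recognition hypothesis best supported by the model for
        `state`; fall back to the first hypothesis for unseen states."""
        counts = self.models.get(state)
        if not counts:
            return hypotheses[0]
        return max(hypotheses, key=lambda h: sum(counts[w] for w in h.split()))

registry = PerStateModels()
registry.observe(("PatientChart", "medication_field"), "aspirin 81 mg daily")
print(registry.recognize(("PatientChart", "medication_field"),
                         ["a spring 81 milligrams", "aspirin 81 mg daily"]))
```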
  • Patent number: 9489944
Abstract: According to an embodiment, a memory controller stores, in a memory, character strings in voice text obtained through voice recognition on voice data, a node index, a recognition score, and a voice index. A detector detects a reproduction section of the voice data. An obtainer obtains a reading of a phrase in a text written down from the reproduced voice data, and obtains an insertion position of character strings. A searcher searches for a character string including the reading. A determiner determines whether to perform display based on the recognition score corresponding to the retrieved character string. A history updater stores, in a memory, candidate history data indicating the retrieved character string, the recognition score, and the character insertion position. A threshold updater decides on a display threshold value using the recognition score of the candidate history data and/or the recognition score of the character string selected by a selector.
    Type: Grant
    Filed: December 8, 2014
    Date of Patent: November 8, 2016
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Taira Ashikawa, Kouji Ueno
  • Patent number: 9489577
    Abstract: Methods and apparatus, including computer program products, for visual similarity. A method includes receiving a stream of video content, generating interpretations of the received video content using speech/natural language processing (NLP), associating the interpretations of the received video content with images extracted from video content based on timeline, and using the interpretations to obtain interpretations of other images or other video content.
    Type: Grant
    Filed: July 23, 2010
    Date of Patent: November 8, 2016
    Assignee: CXENSE ASA
    Inventor: Thomas Wilde
  • Patent number: 9483768
Abstract: A computer-implemented method and an apparatus for modeling customer interaction experiences receive interaction data corresponding to one or more interactions between a customer and a customer support representative. At least one language associated with the interaction data is detected. Textual content in a plurality of languages is generated corresponding to the interaction data based at least in part on translating the interaction data using two or more languages different than the at least one language. At least one emotion score is determined for text corresponding to each language from among the plurality of languages. An aggregate emotion score is determined using the at least one emotion score for the text corresponding to the each language. An interaction experience of the customer is modeled based at least in part on the aggregate emotion score.
    Type: Grant
    Filed: August 10, 2015
    Date of Patent: November 1, 2016
    Assignee: 24/7 CUSTOMER, INC.
    Inventor: Bhupinder Singh
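    The aggregation step can be sketched as follows; translate() and score_emotion() are placeholder callables standing in for whatever translation and per-language emotion scorers a real system uses, and the plain mean is an assumed aggregation rule.

```python
from statistics import mean

def aggregate_emotion_score(interaction_text, detected_lang, other_langs,
                            translate, score_emotion):
    """Score the original text plus its translations, then aggregate.

    translate(text, src, dst) -> str and score_emotion(text, lang) -> float
    are placeholders, not named components of the patent.
    """
    per_language = {detected_lang: score_emotion(interaction_text, detected_lang)}
    for lang in other_langs:
        translated = translate(interaction_text, detected_lang, lang)
        per_language[lang] = score_emotion(translated, lang)
    return mean(per_language.values()), per_language
```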
  • Patent number: 9484023
    Abstract: Techniques for conversion of non-back-off language models for use in speech decoders. For example, a method comprises the following step. A non-back-off language model is converted to a back-off language model. The converted back-off language model is pruned. The converted back-off language model is usable for decoding speech.
    Type: Grant
    Filed: February 22, 2013
    Date of Patent: November 1, 2016
    Assignee: International Business Machines Corporation
    Inventors: Ebru Arisoy, Bhuvana Ramabhadran, Abhinav Sethy, Stanley Chen
  • Patent number: 9483960
    Abstract: A method, non-transitory computer readable medium, and apparatus for providing a dimension and a proximity of an object are disclosed. For example, the method receives a three dimensional depth map expressed as a two dimensional array of gray values, rasterizes the two dimensional array of gray values into vertical scan lines and horizontal scan lines for a left speaker and a right speaker and converts the vertical scan lines and the horizontal scan lines into a double beep, wherein a first beep of the double beep represents a vertical dimension of the object, the second beep of the double beep represents a horizontal dimension of the object, an intensity of each beep of the double beep represents the proximity of the object and a frequency spectrum of the double beep represents a shape of the object.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: November 1, 2016
    Assignee: Xerox Corporation
    Inventor: Francis Kapo Tse
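    A toy mapping from a two-dimensional gray-value depth map to double-beep parameters (duration encoding extent, gain encoding proximity); the specific scaling constants are assumptions for illustration.

```python
import numpy as np

def depth_map_to_double_beep(depth_map, base_freq=440.0):
    """Sonify a 2-D gray-value depth map as two beeps (illustrative mapping)."""
    gray = np.asarray(depth_map, dtype=float)
    occupied = gray > 0                      # non-zero gray value = object pixel
    rows = occupied.any(axis=1)              # vertical scan lines hitting the object
    cols = occupied.any(axis=0)              # horizontal scan lines hitting the object
    v_extent = rows.mean()                   # fraction of image height covered
    h_extent = cols.mean()                   # fraction of image width covered
    proximity = gray[occupied].max() / 255.0 if occupied.any() else 0.0

    beep_vertical = {"freq_hz": base_freq, "duration_s": 0.1 + 0.4 * v_extent,
                     "gain": proximity}      # first beep: vertical dimension
    beep_horizontal = {"freq_hz": base_freq, "duration_s": 0.1 + 0.4 * h_extent,
                       "gain": proximity}    # second beep: horizontal dimension
    return beep_vertical, beep_horizontal
```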
  • Patent number: 9477643
    Abstract: A method of using conversation state information in a conversational interaction system is disclosed. A method of inferring a change of a conversation session during continuous user interaction with an interactive content providing system includes receiving input from the user including linguistic elements intended by the user to identify an item, associating a linguistic element of the input with a first conversation session, and providing a response based on the input. The method also includes receiving additional input from the user and inferring whether or not the additional input from the user is related to the linguistic element associated with the conversation session. If related, the method provides a response based on the additional input and the linguistic element associated with the first conversation session. Otherwise, the method provides a response based on the second input without regard for the linguistic element associated with the first conversation session.
    Type: Grant
    Filed: November 4, 2013
    Date of Patent: October 25, 2016
    Assignee: Veveo, Inc.
    Inventors: Rakesh Barve, Murali Aravamudan, Sashikumar Venkataraman, Girish Welling
  • Patent number: 9471626
    Abstract: A semantic search engine is enhanced to employ user preferences to customize answer output by, for a first user, extracting user preferences and sentiment levels associated with a first question; receiving candidate answer results of a semantic search of the first question; weighting the candidate answer results according to the sentiment levels for each of the user preferences; and producing the selected candidate answers to the first user. Optionally, user preferences and sentiment levels may be accumulated over different questions for the same user, or over different users for similar questions. And, supplemental information may be retrieved relative to a user preference in order to further tune the weighting per the preferences and sentiment levels.
    Type: Grant
    Filed: December 16, 2015
    Date of Patent: October 18, 2016
    Assignee: International Business Machines Corporation
    Inventors: Corville O. Allen, Scott Robert Carrier, Scott N. Gerard, Sterling Richardson Smith, David Blake Styles, Eric Woods
  • Patent number: 9471324
    Abstract: A processor may include a vector functional unit that supports concurrent operations on multiple data elements of a maximum element size. The functional unit may also support concurrent execution of multiple distinct vector program instructions, where the multiple vector instructions each operate on multiple data elements of less than the maximum element size.
    Type: Grant
    Filed: May 13, 2016
    Date of Patent: October 18, 2016
    Assignee: Apple Inc.
    Inventor: Jeffry E. Gonion
  • Patent number: 9472188
    Abstract: Various embodiments of the invention provide methods, systems, and computer-program products for predicting an outcome for an event of interest associated with a contact center communication. That is to say, various embodiments of the invention involve predicting an outcome for an event of interest associated with a party involved in a contact center communication based on characteristics and content of the communication conducted with the party by utilizing one or more classifier models.
    Type: Grant
    Filed: November 15, 2013
    Date of Patent: October 18, 2016
    Assignee: NOBLE SYSTEMS CORPORATION
    Inventors: Jason P. Ouimette, Christopher S. Haggerty
  • Patent number: 9472202
    Abstract: The present invention relates to a method of evaluating intelligibility of a degraded speech signal received from an audio transmission system conveying a reference speech signal. The method comprises sampling said signals into reference and degraded signal frames, and forming frame pairs by associating reference and degraded signal frames with each other. For each frame pair a difference function representing disturbance is provided, which is then compensated for specific disturbance types for providing a disturbance density function. Based on the density function of a plurality of frame pairs, an overall quality parameter is determined. The method provides for compensating the overall quality parameter for the effect that the assessment of intelligibility of CVC words is dominated by the intelligibility of consonants.
    Type: Grant
    Filed: November 15, 2013
    Date of Patent: October 18, 2016
    Assignee: Nederlandse Organisatie voor toegepast-natuurwetenschappelijk onderzoek TNO
    Inventor: John Gerard Beerends
  • Patent number: 9473618
Abstract: Various aspects disclosed herein are directed to different types of personal assistant techniques for facilitating call event notation, tagging, calendaring, etc., particularly those implemented on mobile communication devices. Users of mobile devices are provided with a relatively easy way to record and organize personal notes relating to one or more selected telephone conversations conducted by the user. Users can also manage notes, tasks, and schedule items related to the user's contacts and social network(s). In at least one embodiment, a Mobile Application running on a user's mobile device may be configured or designed to automatically detect an end of phone call event at the mobile device, and to automatically display a "pop-up" dialog GUI prompting the user to record a personalized note or other content (if desired), to be associated with the phone call which just ended.
    Type: Grant
    Filed: August 25, 2015
    Date of Patent: October 18, 2016
    Assignee: ZENO HOLDINGS LLC
    Inventor: Timothy D. T. Woloshyn
  • Patent number: 9471212
    Abstract: The present disclosure proposes a reminder generating method and a mobile electronic device using the same method. In one of the exemplary embodiments, the mobile electronic device would include a display and a processor coupled to the display and is configured for displaying a first user interface of an application on a display of the mobile electronic device, converting a data source into a data stream by using the first application, receiving a keyword extracted from the data stream, analyzing the keyword to generate within the first user interface a second user interface which includes at least a first information based on the keyword, and storing the first information after the first information has been confirmed and established by the user.
    Type: Grant
    Filed: March 10, 2014
    Date of Patent: October 18, 2016
    Assignee: HTC Corporation
    Inventors: Kai-Feng Chiu, Cheng-Hang Lin
  • Patent number: 9460712
Abstract: A method of operating a voice-enabled business directory search system includes receiving category-business pairs, each category-business pair including a business category and a specific business, and establishing a data structure having nodes based on the category-business pairs. Each node of the data structure is associated with one or more business categories and a speech recognition language model for recognizing specific businesses associated with the one or more business categories.
    Type: Grant
    Filed: August 7, 2014
    Date of Patent: October 4, 2016
    Assignee: GOOGLE INC.
    Inventors: Brian Strope, William J. Byrne, Francoise Beaufays
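    The category-business data structure can be sketched as a node-per-category index; the per-node set of businesses below stands in for the per-node recognition language model, and the sample pairs are invented.

```python
from collections import defaultdict

def build_category_index(category_business_pairs):
    """Build a node-per-category index from (category, business) pairs.
    Each node holds the businesses its recognition grammar would cover."""
    index = defaultdict(set)
    for category, business in category_business_pairs:
        index[category].add(business)
    return index

pairs = [("pizza", "Gino's Pizza"), ("pizza", "Slice House"),
         ("pharmacy", "Corner Drugs")]
index = build_category_index(pairs)
print(sorted(index["pizza"]))   # businesses the 'pizza' node's model would recognize
```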
  • Patent number: 9460703
Abstract: Systems and methods for providing synthesized speech in a manner that takes into account the environment where the speech is presented. A method embodiment includes, based on a listening environment and at least one other associated parameter, selecting an approach from a plurality of approaches for presenting synthesized speech in the listening environment, presenting synthesized speech according to the selected approach, and, based on natural language input received from a user indicating an inability to understand the presented synthesized speech, selecting a second approach from the plurality of approaches and presenting subsequent synthesized speech using the second approach.
    Type: Grant
    Filed: November 26, 2013
    Date of Patent: October 4, 2016
    Assignee: Interactions LLC
    Inventors: Kenneth H. Rosen, Carroll W. Creswell, Jeffrey J. Farah, Pradeep K. Bansal, Ann K. Syrdal
  • Patent number: 9459754
    Abstract: A computer implemented interactive oral presentation display system provides server computers allowing one or more client devices and one or more administrator devices access to an oral presentation display application which provides client user interfaces having a first image display area and a second image display area concurrently displayed on a display surface allowing the client user to control presentation of streaming media in the first image display area and selection of one or more images for serial display in the second image display area, each of which can be coupled in timed synchronized relation with the streaming media.
    Type: Grant
    Filed: October 27, 2011
    Date of Patent: October 4, 2016
    Assignee: eduPresent, LLC
    Inventors: Jeffrey S. Lewis, Michael Jackowski, Robert J. Fiesthumel, Marna Deines
  • Patent number: 9461987
    Abstract: According to one embodiment, an apparatus is provided that comprises a memory, an interface, and a processor communicatively coupled to the memory and to the interface. The memory can store a conversion rule. The interface can receive an audio signal and receive a file. The file indicates a start time, an end time, a key, and a password. The processor can clip the audio signal from the start time to the end time to produce a portion of the audio signal. The processor can convert, based at least in part upon the conversion rule, the portion of the audio signal using the key to form a converted portion of the audio signal. The processor can determine that the converted portion of the audio signal matches the password. The interface can communicate a response indicating that the converted portion of the audio signal matches the password.
    Type: Grant
    Filed: August 14, 2014
    Date of Patent: October 4, 2016
    Assignee: Bank of America Corporation
    Inventor: Pankaj Panging
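    A sketch of the clip-convert-compare flow from the abstract above; HMAC-SHA256 stands in for the stored conversion rule and byte offsets stand in for the start/end times, both assumptions.

```python
import hashlib
import hmac

def audio_passes_check(audio_bytes, start, end, key, expected_password):
    """Clip the signal, convert the clip with the key, compare to the password.

    audio_bytes : raw bytes of the received audio signal
    start, end  : clip boundaries in bytes (the abstract specifies times)
    key         : bytes key from the received file
    The keyed conversion here is HMAC-SHA256, an assumed stand-in for the
    patent's conversion rule.
    """
    clipped = audio_bytes[start:end]
    converted = hmac.new(key, clipped, hashlib.sha256).hexdigest()
    return hmac.compare_digest(converted, expected_password)   # constant-time match
```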
  • Patent number: 9459176
    Abstract: Embodiments of the present general inventive concept provide a voice controlled vibration data analyzer system, including a vibration sensor to detect vibration data from a machine-under-test, a data acquisition unit to receive the vibration data from the vibration sensor, and a control unit having a user interface to receive manual and audio input from a user, and to communicate information relating to the machine-under-test, the control unit executing commands in response to the manual or audio input to control the data acquisition unit and/or user interface to output an audio or visual message relating to a navigation path of multiple machines to be tested, to collect and process the vibration data, and to receive manual or audio physical observations from the user to characterize collected vibration data.
    Type: Grant
    Filed: October 26, 2012
    Date of Patent: October 4, 2016
    Assignee: Azima Holdings, Inc.
    Inventors: Kenneth Ralph Piety, K. C. Dahl
  • Patent number: 9449218
    Abstract: Certain example embodiments relate to large venue surveillance and reaction systems and/or methods that take into account both subjective emotional attributes of persons having relations to the large venues, and objective measures such as, for example, actual or expected wait times, current staffing levels, numbers of customers to be serviced, etc. Pre-programmed scenarios are run in real-time as events stream in over one or more electronic interfaces, with each scenario being implemented as a logic sequence that takes into account at least an aspect of a representation of an inferred emotional state. The scenarios are run to (a) determine whether an incident might be occurring and/or might have occurred, and/or (b) dynamically determine a responsive action to be taken. A complex event processing engine may be used in this regard. The analysis may be used in certain example embodiments to help improve customer satisfaction at the large venue.
    Type: Grant
    Filed: October 16, 2014
    Date of Patent: September 20, 2016
    Assignee: Software AG USA, Inc.
    Inventors: Leighton Smith, Gareth Smith
  • Patent number: 9449303
    Abstract: A notebook component within a note-taking application is utilized as a centralized mechanism for recording notations and providing documentation related to a particular meeting. The meeting participants are provided with centralized access to the notebook component and thus are able to update the notebook record of the meeting collaboratively and in real time. In addition to user-driven updates, updates may also be generated on an automatic or semi-automatic basis. Updates may be made before, during or after the actual meeting. Updates may originate from an application data source outside of the note-taking application itself.
    Type: Grant
    Filed: January 19, 2012
    Date of Patent: September 20, 2016
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Thomas Underhill, Cynthia Wessling, Apeksha Godiyal, Syed Mustafa Bilal, Hong Lin, Nathaniel Stott, Charles Duze, Po-Yan Tsang
  • Patent number: 9448993
    Abstract: A system and method of recording utterances for building Named Entity Recognition (“NER”) models, which are used to build dialog systems in which a computer listens and responds to human voice dialog. Utterances to be uttered may be provided to users through their mobile devices, which may record the user uttering (e.g., verbalizing, speaking, etc.) the utterances and upload the recording to a computer for processing. The use of the user's mobile device, which is programmed with an utterance collection application (e.g., configured as a mobile app), facilitates the use of crowd-sourcing human intelligence tasking for widespread collection of utterances from a population of users. As such, obtaining large datasets for building NER models may be facilitated by the system and method disclosed herein.
    Type: Grant
    Filed: September 7, 2015
    Date of Patent: September 20, 2016
    Assignee: VoiceBox Technologies Corporation
    Inventors: Daniela Braga, Spencer John Rothwell, Faraz Romani, Ahmad Khamis Elshenawy, Stephen Steele Carter, Michael Kennewick
  • Patent number: 9451578
Abstract: Apparatus, systems, and/or methods to temporally and spatially bound personal information. A pseudo random number corresponding to time may be generated based on a random number time seed, and a pseudo random number corresponding to location may be generated based on a random number location seed. In addition, the pseudo random number corresponding to time may be mixed with the pseudo random number corresponding to location to generate a combined pseudo random number corresponding to a specific user at a specific location at a specific time. The combined pseudo random number may be used to store and/or read personal information in an anonymous manner.
    Type: Grant
    Filed: June 3, 2014
    Date of Patent: September 20, 2016
    Assignee: Intel Corporation
    Inventors: William C. Deleeuw, Ned M. Smith
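    The time/location mixing can be sketched as below; HMAC-SHA256, the hour bucket, and the geohash bucket are assumed stand-ins for the patented pseudo random number generation and mixing.

```python
import hashlib
import hmac

def combined_pseudo_id(time_seed: bytes, location_seed: bytes,
                       epoch_hours: int, geohash: str) -> str:
    """Mix a time-based and a location-based pseudo random value into one
    anonymous identifier for a specific user/place/time bucket (illustrative)."""
    time_prn = hmac.new(time_seed, str(epoch_hours).encode(),
                        hashlib.sha256).digest()        # pseudo random number for time
    loc_prn = hmac.new(location_seed, geohash.encode(),
                       hashlib.sha256).digest()         # pseudo random number for location
    mixed = bytes(a ^ b for a, b in zip(time_prn, loc_prn))   # combine the two
    return hashlib.sha256(mixed).hexdigest()
```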
  • Patent number: 9442691
    Abstract: Within a network-based system, an entity may be identified by an identifier of the entity. An audio generation machine may be configured to generate an audio piece that represents the entity, and the audio generation machine may generate the audio piece based on the identifier of the entity. Hence, the audio piece generated by the audio generation machine may be representative of the entity, and playback of the audio piece may identify the entity, reference the entity, highlight the entity, suggest the entity, or otherwise call the entity to mind (e.g., for one or more listeners of the audio piece). Thus, the generated audio piece may function as an audio-based avatar of the entity (e.g., a representative of the entity within a virtual world). Furthermore, the audio piece may be shared (e.g., in a social networking context or a social shopping context).
    Type: Grant
    Filed: January 12, 2015
    Date of Patent: September 13, 2016
    Assignee: eBay Inc.
    Inventor: Huaping Gu
  • Patent number: 9445187
    Abstract: A method of processing an audio signal, the method including receiving a downmix signal and a first information, the downmix signal including at least one object, the first information including object information indicating an attribute of the at least one object; receiving a second information, the second information including external preset information and applied object number information, the external preset information being an external input and including an external preset rendering parameter and external preset metadata, the applied object number information indicating a number of objects to which the external preset information is applied; generating downmix processing information controlling panning or gain of the downmix signal by using the object information and the external preset information based on the applied object number information; and modifying the downmix signal by using the downmix processing information.
    Type: Grant
    Filed: December 16, 2013
    Date of Patent: September 13, 2016
    Assignee: LG ELECTRONICS INC.
    Inventors: Hyen O Oh, Yang Won Jung
  • Patent number: 9437215
    Abstract: The methods and systems described herein predict user behavior based on analysis of a user video communication. The methods include receiving a user video communication, extracting video facial analysis data from the video communication, extracting voice analysis data from the video communication, associating the video facial analysis data with the voice analysis data to determine an emotional state of a user, collecting biographical profile information specific to the user, applying a linguistic-based psychological behavioral model to the spoken words to determine personality type of the user, and inputting the collected biographical profile information, emotional state, and personality type into a predictive model to determine a likelihood of an outcome of the video communication.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: September 6, 2016
    Assignee: Mattersight Corporation
    Inventors: Kelly Conway, Christopher Danson
  • Patent number: 9432742
    Abstract: An intelligent television and methods for interactive channel navigation and channel switching are disclosed. Specifically, an input may be received at the intelligent television that prompts the display of a list or other collection of channel tiles to a display area of an intelligent television. The list or other collection may include at least one channel tile that can visually represent broadcast content available on a channel via an image, without tuning to a channel to retrieve the image. Upon receiving a navigational input, a focus may move to an alternate channel tile. Upon receiving a selection input, the television may tune to the channel that is associated with a channel tile currently associated with a focus. The list or other collection of one or more channel tiles may contain one or more subsets of channel tiles, which may be associated with a collection of favorite or recommended channels.
    Type: Grant
    Filed: August 16, 2013
    Date of Patent: August 30, 2016
    Assignee: Flextronics AP, LLC
    Inventors: Sanjiv Sirpal, Mohammed Selim
  • Patent number: 9431014
    Abstract: Systems and methods for intelligent placement of appliance response to a voice command are provided. An exemplary system includes a plurality of appliances. An exemplary method includes connecting each of the plurality of appliances over a local area network and generating a location map providing a location of each of the plurality of appliances. The method includes receiving the human voice signal at a plurality of microphones respectively included in the plurality of appliances and determining an originating location of the human voice signal based at least in part on the location map. The method includes selecting one of the plurality of appliances to respond to the human voice signal based at least in part on the location map and the originating location.
    Type: Grant
    Filed: July 25, 2013
    Date of Patent: August 30, 2016
    Assignee: Haier US Appliance Solutions, Inc.
    Inventor: Keith Wesley Wait
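    Selecting the responding appliance can be sketched as a nearest-neighbor lookup against the location map; the 2-D room coordinates, sample appliances, and origin estimate are assumptions.

```python
import math

def pick_responding_appliance(location_map, voice_origin):
    """Choose the appliance closest to the estimated origin of the voice.

    location_map : {appliance_id: (x, y)} in shared room coordinates
    voice_origin : (x, y) estimate, e.g. derived from microphone levels
    """
    return min(location_map,
               key=lambda a: math.dist(location_map[a], voice_origin))

appliances = {"fridge": (0.0, 0.0), "oven": (3.0, 1.0), "dishwasher": (5.0, 4.0)}
print(pick_responding_appliance(appliances, (2.5, 0.5)))   # -> "oven"
```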
  • Patent number: 9424248
    Abstract: Monitoring an internet chat in which a text transcript is generated by at least two chat participants, by: (i) performing a simple check on the text transcript for existence of a potential frustration precondition; and (ii) on condition that a frustration precondition is found, performing text analytics type analysis on the text transcript to determine whether potential frustration is evidenced by the text transcript. If it is determined that potential frustration is evidenced by the chat transcript then responsive action is taken to prevent and/or stem the frustration.
    Type: Grant
    Filed: April 30, 2013
    Date of Patent: August 23, 2016
    Assignee: International Business Machines Corporation
    Inventors: Annemarie R. Fitterer, Ramakrishna J. Gorthi, Chandrajit G. Joshi, Romil J. Shah
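    The two-stage check (a cheap precondition scan, then heavier analytics only if it fires) might look like this; the cue list, threshold, and deep_sentiment() placeholder are assumptions.

```python
import re

FRUSTRATION_CUES = re.compile(
    r"\b(still|again|third time|not working|ridiculous)\b|!{2,}", re.IGNORECASE)

def monitor_chat(transcript, deep_sentiment):
    """Simple precondition check first; run the expensive text analytics call
    `deep_sentiment(text) -> float in [-1, 1]` only when a cue is present."""
    if not FRUSTRATION_CUES.search(transcript):     # cheap scan of the transcript
        return "no_action"
    if deep_sentiment(transcript) < -0.5:           # heavier analysis confirms frustration
        return "escalate_to_supervisor"
    return "no_action"
```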
  • Patent number: 9424833
    Abstract: Techniques for providing speech output for speech-enabled applications. A synthesis system receives from a speech-enabled application a text input including a text transcription of a desired speech output. The synthesis system selects one or more audio recordings corresponding to one or more portions of the text input. In one aspect, the synthesis system selects from audio recordings provided by a developer of the speech-enabled application. In another aspect, the synthesis system selects an audio recording of a speaker speaking a plurality of words. The synthesis system forms a speech output including the one or more selected audio recordings and provides the speech output for the speech-enabled application.
    Type: Grant
    Filed: December 16, 2014
    Date of Patent: August 23, 2016
    Assignee: Nuance Communications, Inc.
    Inventors: Darren C. Meyer, Corinne Bos-Plachez, Martine Marguerite Staessen
  • Patent number: 9418117
Abstract: A method, system, and non-transitory computer readable medium for displaying relevant messages of a conversation graph. A reverse chronological stream of messages broadcasted to a recipient account of a messaging platform is received, the messages being authored by a set of authoring accounts having a predefined graph relationship with the recipient account. Among the stream of messages, a message determined to be a part of a relevant conversation is identified. Additional content associated with the conversation is then inserted into the stream. A client displaying the stream displays the conversation related content with one or more display elements depicting relationships among messages of the conversation.
    Type: Grant
    Filed: September 6, 2013
    Date of Patent: August 16, 2016
    Assignee: Twitter, Inc.
    Inventors: Marcel Molina, Ross Cohen, Kyle Maxwell, Stuart Hood, Cara Meverden, Coleen Baik, Arya Asemanfar, Erin Moore
  • Patent number: 9412362
Abstract: Systems and methods of script identification in audio data. The audio data is segmented into a plurality of utterances. A script model representative of a script text is obtained. The plurality of utterances are decoded with the script model. A determination is made if the script text occurred in the audio data.
    Type: Grant
    Filed: June 30, 2014
    Date of Patent: August 9, 2016
    Assignee: Verint Systems Ltd.
    Inventors: Jeffery Michael Iannone, Ron Wein, Omer Ziv
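    A rough stand-in for deciding whether the script text occurred: compare decoded utterance transcripts against the script with a string-similarity ratio. The 0.8 threshold and the use of difflib instead of a decoder-integrated script model are assumptions.

```python
from difflib import SequenceMatcher

def script_occurred(utterance_transcripts, script_text, threshold=0.8):
    """Report whether any decoded utterance is close enough to the script text.

    Returns (occurred, best_similarity); the threshold is arbitrary.
    """
    script = script_text.lower()
    best = max((SequenceMatcher(None, u.lower(), script).ratio()
                for u in utterance_transcripts), default=0.0)
    return best >= threshold, best
```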
  • Patent number: 9413891
    Abstract: Methods and systems are provided for receiving a communication, analyzing the communication in real-time or near real-time using a computer-based communications analytics facility for at least one of a language characteristic and an acoustic characteristic, wherein for analyzing the language characteristic of voice communications, the communication is converted to text using computer-based speech recognition, determining at least one of the category, the score, the sentiment, or the alert associated with the communication using the at least one language and/or acoustic characteristic, and providing a dynamic graphical representation of the at least one category, score, sentiment, or alert through a graphical user interface.
    Type: Grant
    Filed: January 8, 2015
    Date of Patent: August 9, 2016
    Assignee: CallMiner, Inc.
    Inventors: Michael C. Dwyer, Erik A. Strand, Scott R. Wolf, Frank Salinas, Jeffrey A. Gallino, Scott A. Kendrick, Shaoyu Xue