Pattern Display Patents (Class 704/276)
  • Patent number: 11325045
    Abstract: A method and apparatus for acquiring a merged map, a storage medium, a processor, and a terminal are provided. The method includes that: a configuration file and a thumbnail are acquired in an off-line state; and maps corresponding to model components contained in each game scene are loaded during game running, and the maps corresponding to the model components contained in each game scene and the thumbnail are merged according to the configuration file to obtain a merged map corresponding to at least one game scene. The present disclosure addresses technical problems in the related art whereby map merging schemes for game scenes have low processing efficiency and occupy too much storage space.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: May 10, 2022
    Assignee: NETEASE (HANGZHOU) NETWORK CO., LTD.
    Inventor: Kunyu Cai
  • Patent number: 11253193
    Abstract: A body worn or implantable hearing prosthesis, including a device configured to capture an audio environment of a recipient and evoke a hearing percept based at least in part on the captured audio environment, wherein the hearing prosthesis is configured to identify, based on the captured audio environment, one or more biomarkers present in the audio environment indicative of the recipient's ability to hear.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: February 22, 2022
    Assignee: Cochlear Limited
    Inventors: Kieran Reed, John Michael Heasman, Kerrie Plant, Alex Von Brasch, Stephen Fung
  • Patent number: 11205277
    Abstract: A system includes sensors and a tracking subsystem. The subsystem receives a first image feed from a first sensor and a second image feed from a second sensor. The field-of view of the second sensor at least partially overlaps with that of the first sensor. The subsystem detects, in a frame from the first feed, a first contour associated with an object. The subsystem determines, based on pixel coordinates of the first contour, a first pixel position of the object. The subsystem detects, in a frame from the second feed, a second contour associated with the same object. The subsystem determines, based on pixel coordinates of the second contour, a second pixel position of the object. Based on the first pixel position and the second pixel position, a global position for the object is determined in a space.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: December 21, 2021
    Assignee: 7-ELEVEN, INC.
    Inventors: Shahmeer Ali Mirza, Sailesh Bharathwaaj Krishnamurthy
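The abstract above fuses per-camera pixel positions of the same object into one global position. A minimal sketch, assuming each camera has a known 3x3 floor-plane homography; the matrices and the simple averaging step are illustrative assumptions, not details from the patent:

```python
def pixel_to_floor(pixel, homography):
    """Map a (u, v) pixel coordinate to floor (x, y) via a 3x3 homography."""
    u, v = pixel
    h = homography
    w = h[2][0] * u + h[2][1] * v + h[2][2]
    x = (h[0][0] * u + h[0][1] * v + h[0][2]) / w
    y = (h[1][0] * u + h[1][1] * v + h[1][2]) / w
    return (x, y)

def global_position(pix1, pix2, hom1, hom2):
    """Fuse two per-camera pixel positions into one global position by
    projecting each onto the shared floor plane and averaging."""
    x1, y1 = pixel_to_floor(pix1, hom1)
    x2, y2 = pixel_to_floor(pix2, hom2)
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
```

In practice the homographies would come from calibrating each sensor against known floor points.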
  • Patent number: 11205429
    Abstract: An information processing apparatus includes a receiving part that receives processing information based on voice; and a controller that performs control so that the processing information indicated by the voice received by the receiving part is displayed on a display. The receiving part further receives modification of the processing information displayed on the display, and the controller further performs control so that the modification received by the receiving part is reflected in processing received by the receiving part.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: December 21, 2021
    Assignee: FUJIFILM Business Innovation Corp.
    Inventor: Ryoto Yabusaki
  • Patent number: 11195515
    Abstract: The present application provides a method and device for voice acquisition to reduce the effect of individual differences by quantitatively inputting voice indicators, the method comprising: displaying a first prompt word and starting to receive a first input voice of a user; after the first input voice of the user is received, recognizing the received first input voice to be a first user word; comparing the first user word with the first prompt word; if the first user word matches the first prompt word, then displaying a second prompt word and starting to receive a second input voice of the user; after the second input voice of the user is received, recognizing the received second input voice to be a second user word; comparing the second user word with the second prompt word; and integrating the first input voice and the second input voice to be a digital voice file, and storing the digital voice file.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: December 7, 2021
    Inventor: Zhonghua Ci
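The prompt-record-recognize-compare-retry-concatenate flow in this abstract can be sketched with the audio side stubbed out; `record` and `recognize` are hypothetical stand-ins for real audio capture and speech recognition:

```python
def acquire_voice(prompts, record, recognize):
    """For each prompt word, record a voice sample and accept it only if the
    recognized word matches the prompt; retry on mismatch, then return all
    accepted samples integrated into one digital voice file."""
    samples = []
    for prompt in prompts:
        while True:
            audio = record(prompt)          # display prompt, capture input voice
            if recognize(audio) == prompt:  # compare user word with prompt word
                samples.append(audio)
                break
    return b"".join(samples)                # integrate into one byte stream
```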
  • Patent number: 11195525
    Abstract: An operation terminal includes: an imaging part configured to image a space; a human detecting part configured to detect a user based on information on the space imaged; a voice inputting part configured to receive inputting of the spoken voice of the user; a coordinates detecting part configured to detect a first coordinate of a predetermined first part of an upper limb of the user and a second coordinate of a predetermined second part of an upper half body excluding the upper limb of the user based on information acquired by a predetermined unit when the user is detected by the human detecting part; and a condition determining part configured to evaluate a positional relationship between the first coordinate and the second coordinate, and configured to bring the voice inputting part into a voice-input-receivable state when the positional relationship satisfies a predetermined first condition at least one time.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: December 7, 2021
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Kohei Tahara, Yusaku Ota, Hiroko Sugimoto
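One plausible reading of the "predetermined first condition" above is a hand raised above the shoulder, with the receivable state latched once the condition holds at least once. The specific condition and the image-coordinate convention below are assumptions, not details from the patent:

```python
class VoiceGate:
    """Latches into a voice-receivable state once a positional condition
    between a hand coordinate and an upper-body coordinate is met at least
    once. Coordinates are (x, y) in image space, where a smaller y is
    higher; the raise-above-shoulder condition is illustrative."""
    def __init__(self):
        self.receivable = False

    def update(self, hand, shoulder):
        if hand[1] < shoulder[1]:   # hand raised above the shoulder
            self.receivable = True  # satisfied at least one time: latch on
        return self.receivable
```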
  • Patent number: 11158320
    Abstract: Methods and systems for processing user input to a computing system are disclosed. The computing system has access to an audio input and a visual input such as a camera. Face detection is performed on an image from the visual input, and if a face is detected this triggers the recording of audio and making the audio available to a speech processing function. Further verification steps can be combined with the face detection step for a multi-factor verification of user intent to interact with the system.
    Type: Grant
    Filed: April 17, 2020
    Date of Patent: October 26, 2021
    Assignee: Soapbox Labs Ltd.
    Inventor: Patricia Scanlon
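The face-detection gate on audio recording can be sketched as follows, with `detect_face` standing in for a real detector (the patent does not prescribe one), and frames paired one-to-one with audio chunks for simplicity:

```python
def gated_capture(frames, audio_chunks, detect_face):
    """Record an audio chunk only while a face is detected in the paired
    video frame; recorded chunks would then be handed to speech processing."""
    recorded = []
    for frame, chunk in zip(frames, audio_chunks):
        if detect_face(frame):
            recorded.append(chunk)
    return recorded
```

A multi-factor variant would AND the face check with further verifications (e.g., gaze or lip movement) before recording.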
  • Patent number: 11157728
    Abstract: A method for person detection using overhead images includes receiving a depth image captured from an overhead viewpoint at a first location; detecting in the depth image for a target region indicative of a scene object within a height range; determining whether the detected target region has an area within a head size range; if within the head size range, determining whether the detected target region has a roundness value less than a maximum roundness value; if less than the maximum roundness value, classifying the detected target region as a head of a person and masking the classified target region in the depth image, where the masked region is excluded from detecting; and repeating the detecting to the masking to detect for and classify another target region in the depth image within the height range and outside of the masked region.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: October 26, 2021
    Assignee: Ricoh Co., Ltd.
    Inventors: Manuel Martinello, Edward L. Schwartz
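The detect-classify-mask loop of this abstract can be sketched over a toy depth grid. Connected-component growth stands in for contour detection, and the roundness test is noted but omitted for brevity; the thresholds are illustrative, not the patent's values:

```python
def find_heads(depth, min_h, max_h, min_area, max_area):
    """Repeatedly detect a connected in-range region, test its area (the real
    method also checks roundness), classify it as a head, mask it so it is
    excluded from later passes, and repeat until nothing is left."""
    rows, cols = len(depth), len(depth[0])
    masked = [[False] * cols for _ in range(rows)]
    heads = []
    while True:
        region = _grow_region(depth, masked, min_h, max_h)
        if region is None:
            break
        for r, c in region:           # mask: exclude from further detecting
            masked[r][c] = True
        if min_area <= len(region) <= max_area:
            heads.append(region)      # classified as a head
    return heads

def _grow_region(depth, masked, min_h, max_h):
    """Flood-fill from the first unmasked pixel whose value lies in range."""
    rows, cols = len(depth), len(depth[0])
    for r in range(rows):
        for c in range(cols):
            if not masked[r][c] and min_h <= depth[r][c] <= max_h:
                region, stack, seen = [], [(r, c)], {(r, c)}
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and (ny, nx) not in seen
                                and not masked[ny][nx]
                                and min_h <= depth[ny][nx] <= max_h):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                return region
    return None
```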
  • Patent number: 11100066
    Abstract: Described herein are technologies that are configured to assist a user in recollecting information about people, places, and things. Computer-readable data is captured, and contextual data that temporally corresponds to the computer-readable data is also captured. In a database, the computer-readable data is indexed by the contextual data. Thus, when a query is received that references the contextual data, the computer-readable data is retrieved.
    Type: Grant
    Filed: May 6, 2019
    Date of Patent: August 24, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Bo-June Hsu, Kuansan Wang, Jeremy Espenshade, Chiyuan Huang, Yu-ting Kuo
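Indexing captured data by its contextual tags so a later contextual query retrieves it can be sketched as a toy inverted index; keying on discrete tags is an assumption about one possible indexing scheme, not the patent's database design:

```python
class ContextIndex:
    """Stores captured computer-readable data keyed by the contextual tags
    captured alongside it; a query referencing a tag retrieves the data."""
    def __init__(self):
        self._by_context = {}

    def capture(self, data, context_tags):
        for tag in context_tags:
            self._by_context.setdefault(tag, []).append(data)

    def query(self, context_tag):
        return self._by_context.get(context_tag, [])
```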
  • Patent number: 11045092
    Abstract: Support structures for positioning sensors on a physiologic tunnel for measuring physical, chemical and biological parameters of the body and to produce an action according to the measured value of the parameters. The support structure includes a sensor fitted on the support structure using a special geometry for acquiring continuous and undisturbed data on the physiology of the body. Signals are transmitted to a remote station by wireless transmission such as by electromagnetic waves, radio waves, infrared, sound, and the like or by being reported locally by audio or visual transmission. The physical and chemical parameters include brain function, metabolic function, hydrodynamic function, hydration status, levels of chemical compounds in the blood, and the like. The support structure includes patches, clips, eyeglasses, head mounted gear and the like, containing passive or active sensors positioned at the end of the tunnel with sensing systems positioned on and accessing a physiologic tunnel.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: June 29, 2021
    Inventor: Marcio Marc Abreu
  • Patent number: 10963054
    Abstract: In an information processing system including a controller device including at least one vibration body, and an information processing apparatus outputting a control signal for the vibration body to the controller device, ambient sounds are collected during vibration of the vibration body, and it is determined, using a signal of the collected sounds, whether or not an allophone is generated in the controller device. When it is determined that the allophone is generated in the controller device, the information processing apparatus executes correction processing for the control signal for the vibration body, and outputs the control signal corrected by the correction processing.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: March 30, 2021
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Yusuke Nakagawa
  • Patent number: 10885809
    Abstract: A computing device is adapted to construct a user-memory data structure for a user based on interactions with the user. The user-memory data structure may comprise a plurality of memory representations for concepts and items important for gaining proficiency in a subject matter. The memory representations are dynamic, and characterize how well each of the concepts and items are retained as a function of time by the user. The computing device uses the user-memory data structure to guide operation of the computing device.
    Type: Grant
    Filed: May 20, 2016
    Date of Patent: January 5, 2021
    Assignee: Gammakite, Inc.
    Inventor: Emmanuel Roche
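A memory representation that "characterizes how well each of the concepts and items are retained as a function of time" suggests a forgetting-curve model. The exponential form and the 0.5 review threshold below are illustrative assumptions, not the patent's formula:

```python
import math

def retention(elapsed_days, strength):
    """Exponential forgetting-curve model of how well an item is retained
    after a given time; higher strength means slower decay."""
    return math.exp(-elapsed_days / strength)

def items_to_review(memory, now, threshold=0.5):
    """Select items whose modeled retention has decayed below a threshold.
    memory maps item -> (last_seen_day, strength)."""
    return [item for item, (last_seen, strength) in memory.items()
            if retention(now - last_seen, strength) < threshold]
```

The device could then prioritize the returned items when guiding the user's study session.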
  • Patent number: 10847139
    Abstract: A crowdsourcing based community platform includes a natural language configuration system that predicts a user's desired function call based on a natural language input (speech or text). The system provides a collaboration platform to quickly configure and optimize natural language systems, leveraging the work and data of other developers, thus minimizing the time and data required to improve the quality and accuracy of a single system and providing a network effect to quickly reach a critical mass of data. An application developer can provide training data for training a model specific to the developer's application. The developer can also obtain training data by forking one or more other applications so that the training data provided for the forked applications is used to train the model for the developer's application.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: November 24, 2020
    Assignee: Facebook, Inc.
    Inventor: Alexandre Lebrun
  • Patent number: 10691296
    Abstract: An electronic device includes a display, a memory, and a processor, and the processor displays, on the display, a folder icon that includes execution icons of a plurality of applications and, in response to a first user input selecting the folder icon, displays a user interface for collectively controlling notifications for the plurality of applications.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: June 23, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yong Gu Lee, Kyu Ok Choi, Ji Won Kim, Young Hak Oh, Sun Young Yi, Won Jun Lee
  • Patent number: 10636422
    Abstract: There is provided a system in which empowerment is performed by outputting conversation information to the user, the system including: a computer including a processor, a memory, and an interface; and a measuring device that measures signals of a plurality of types, wherein the processor calculates values of conversation parameters of a plurality of attributes for evaluating a state of a user who performs the empowerment on the basis of a plurality of signals measured by the measuring device, the processor selects a selection parameter which is a conversation parameter of a change target on the basis of the values of the conversation parameters of the plurality of attributes, the processor decides conversation information for changing a value of the selection parameter, and the processor outputs the decided conversation information to the user.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: April 28, 2020
    Assignee: HITACHI, LTD.
    Inventors: Takashi Numata, Toshinori Miyoshi, Hiroki Sato
  • Patent number: 10606954
    Abstract: Embodiments for text segmentation for topic modelling by a processor. Real-time conversation data may be analyzed and time intervals (e.g., inter-arrival times) between messages of the conversation data may be recorded. Each of the messages may be defined (and/or segmented) as burst segments or reflection segments according to the analyzing and recording. One or more topic modelling operations may be enhanced for text segmentation using the burst segments or reflection segments.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: March 31, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Andrew T. Penrose, Jonathan Dunne
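Splitting messages into burst and reflection segments by inter-arrival time can be sketched with a single gap threshold; the 5-second value is an assumption, since the patent does not fix one:

```python
def segment_messages(timestamps, burst_gap=5.0):
    """Label each message as part of a 'burst' (short inter-arrival time)
    or a 'reflection' (long pause before it), given message timestamps in
    seconds. The first message starts a burst by convention."""
    labels = ["burst"]
    for prev, cur in zip(timestamps, timestamps[1:]):
        labels.append("burst" if cur - prev <= burst_gap else "reflection")
    return labels
```

A topic modeller could then weight or window text differently in burst vs. reflection segments.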
  • Patent number: 10558475
    Abstract: A method for dynamically localizing content of a graphical user interface widget executed on a widget runtime model of a computing platform on a user device includes configuring the graphical user interface widget to provide first location-responsive content in a presentation runtime model by defaulting to a static geographic location, wherein the graphical user interface widget provides the first location-responsive content based on the static geographic location, receiving a configuration setting to configure the graphical user interface widget for a localized mode, retrieving a geographic location for the user device, and providing the retrieved geographic location to the widget runtime model for the graphical user interface widget to select second location-responsive content, wherein the graphical user interface widget switches to provide the second location-responsive content based on the retrieved geographic location.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: February 11, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Mark Leslie Caunter, Bruce Kelly Jackson, Steven Richard Geach
  • Patent number: 10529116
    Abstract: A method, computer system, and computer program product for determining and displaying tones with messaging information are provided. The embodiment may include receiving a plurality of user-entered messaging information from a messaging application. The embodiment may also include determining a tone associated with the plurality of received user-entered messaging information. The embodiment may further include determining a color and an animation for the determined tone based on a preconfigured mapping of a plurality of colors and a plurality of animations with a plurality of tones. The embodiment may also include displaying the animation with the color on a display screen of a user device until the user submits the plurality of user-entered messaging information for transmission to one or more other users.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: January 7, 2020
    Assignee: International Business Machines Corporation
    Inventors: Kelley M. Gordon, Michael Celedonia, Katelyn Applegate
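The preconfigured mapping of tones to colors and animations could look like the following; the specific tones, colors, and animations are illustrative, not taken from the patent:

```python
# Illustrative preconfigured mapping of detected tones to display styles.
TONE_STYLES = {
    "joy":     ("yellow", "bounce"),
    "anger":   ("red",    "shake"),
    "sadness": ("blue",   "fade"),
}

def style_for_tone(tone, default=("gray", "none")):
    """Look up the color and animation preconfigured for a detected tone,
    falling back to a neutral style for unmapped tones."""
    return TONE_STYLES.get(tone, default)
```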
  • Patent number: 10515076
    Abstract: One or more servers receive a natural language query from a client device associated with a user. The one or more servers classify the natural language query as a query that seeks information previously accessed by the user. The one or more servers then obtain a response to the natural language query from one or more collections of documents, wherein each document in the one or more collections of documents was previously accessed by the user. The one or more servers generate search results based on the response. Then, the one or more servers communicate the search results to the client device.
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: December 24, 2019
    Assignee: Google LLC
    Inventors: Nathan Wiegand, Bryan C. Horling, Jason L. Smart
  • Patent number: 10490101
    Abstract: A wearable device is provided that includes a microphone, a display, and a controller. The controller controls to identify a direction of emitted sound based on sound picked up by the microphone, and to display information corresponding to the sound at a position on the display corresponding to the identified direction of the emitted sound.
    Type: Grant
    Filed: May 8, 2017
    Date of Patent: November 26, 2019
    Assignee: FUJITSU LIMITED
    Inventor: Mamiko Teshima
  • Patent number: 10438698
    Abstract: An improved basal insulin management system and an improved user interface for use therewith are provided. User interfaces are provided that dynamically display basal rate information and corresponding time segment information for a basal insulin program in a graphical format. The graphical presentation of the basal insulin program as it is being built by a user and the graphical presentation of a completed basal insulin program provides insulin management information to the user in a more intuitive and useful format. User interfaces further enable a user to make temporary adjustments to a predefined basal insulin program with the adjustments presented graphically to improve the user's understanding of the changes. As a result of being provided with the user interfaces described herein, users are less likely to make mistakes and are more likely to adjust basal rates more frequently, thereby contributing to better blood glucose control and improved health outcomes.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: October 8, 2019
    Assignee: INSULET CORPORATION
    Inventors: Sandhya Pillalamarri, Jorge Borges, Susan Mercer
  • Patent number: 10409552
    Abstract: Systems and methods for displaying an audio indicator including a main portion having a width proportional to a volume of a particular phoneme of an utterance are described herein. In some embodiments, audio data representing an utterance may be received at a speech-processing system from a user device. The speech-processing system may determine a maximum volume amplitude for the utterance, and using the maximum volume amplitude, may determine a normalized amplitude value between 0 and 1 associated with the volume at which the phonemes of an utterance are spoken. The speech-processing system may then map the normalized amplitude value(s) to widths for a main portion of an audio indicator, where larger normalized amplitude values may correspond to smaller main portion widths.
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: September 10, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: David Adrian Jara, Timothy Thomas Gray, Kwan Ting Lee, Jae Pum Park, Michael Hone, Grant Hinkson, Richard Leigh Mains, Shilpan Bhagat
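The inverse mapping from normalized phoneme volume to main-portion width described above can be sketched as a linear interpolation; the pixel bounds are assumptions:

```python
def main_portion_width(volume, max_volume, min_width=8.0, max_width=64.0):
    """Normalize a phoneme's volume against the utterance's maximum volume
    amplitude and map it inversely to a width: larger normalized values
    yield smaller main-portion widths, as in the abstract."""
    normalized = max(0.0, min(1.0, volume / max_volume))
    return max_width - normalized * (max_width - min_width)
```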
  • Patent number: 10311119
    Abstract: Implementations generally relate to hashtags. In some implementations, a method includes providing one or more location-based contextual hashtags to a user by receiving, from a first user device associated with a first user, information indicative of a physical location of the first user device. The method further includes identifying, with one or more processors, a place of interest based on the information indicative of the physical location of the first user device. The method further includes determining a category associated with the place of interest. The method further includes retrieving one or more hashtags from one or more databases based on the place of interest or the category associated with the place of interest. The method further includes providing the one or more hashtags and information about the place of interest to the first user device.
    Type: Grant
    Filed: August 21, 2015
    Date of Patent: June 4, 2019
    Assignee: Google LLC
    Inventors: Sreenivas Gollapudi, Alexander Fabrikant, Shanmugasundaram Ravikumar
  • Patent number: 10176817
    Abstract: The invention provides an audio encoder including a combination of a linear predictive coding filter having a plurality of linear predictive coding coefficients and a time-frequency converter, wherein the combination is configured to filter and to convert a frame of the audio signal into a frequency domain in order to output a spectrum based on the frame and on the linear predictive coding coefficients; a low frequency emphasizer configured to calculate a processed spectrum based on the spectrum, wherein spectral lines of the processed spectrum representing a lower frequency than a reference spectral line are emphasized; and a control device configured to control the calculation of the processed spectrum by the low frequency emphasizer depending on the linear predictive coding coefficients of the linear predictive coding filter.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: January 8, 2019
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Stefan Doehla, Bernhard Grill, Christian Helmrich, Nikolaus Rettelbach
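The low-frequency emphasis step, stripped of its LPC-dependent control (which the patent ties to the linear predictive coding coefficients), reduces to boosting spectral lines below a reference line; the fixed gain here is a simplifying assumption:

```python
def emphasize_low_frequencies(spectrum, reference_bin, gain=2.0):
    """Compute a processed spectrum in which lines below a reference
    spectral line are emphasized. In the patented encoder the amount of
    emphasis is controlled by the LPC coefficients; a constant gain is
    used here for illustration."""
    return [v * gain if i < reference_bin else v
            for i, v in enumerate(spectrum)]
```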
  • Patent number: 10170100
    Abstract: A computer-implemented method includes determining, by a first device, a current emotional state of a user of the first device. The current emotional state is based, at least in part, on real-time information corresponding to the user and relates to a textual message from the user. The computer-implemented method further includes determining, by the first device, a set of phonetic data associated with a plurality of vocal samples corresponding to the user. The computer-implemented method further includes dynamically converting, by the first device, the textual message into an audio message. The audio message is converted from the textual message into the audio message based, at least in part, on the current emotional state and a portion of the set of phonetic data that corresponds to the current emotional state. A corresponding computer system and computer program product are also disclosed.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Kevin G. Carr, Thomas D. Fitzsimmons, Johnathon J. Hoste, Angel A. Merchan
  • Patent number: 10170101
    Abstract: A computer-implemented method includes determining, by a first device, a current emotional state of a user of the first device. The current emotional state is based, at least in part, on real-time information corresponding to the user and relates to a textual message from the user. The computer-implemented method further includes determining, by the first device, a set of phonetic data associated with a plurality of vocal samples corresponding to the user. The computer-implemented method further includes dynamically converting, by the first device, the textual message into an audio message. The audio message is converted from the textual message into the audio message based, at least in part, on the current emotional state and a portion of the set of phonetic data that corresponds to the current emotional state. A corresponding computer system and computer program product are also disclosed.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Kevin G. Carr, Thomas D. Fitzsimmons, Johnathon J. Hoste, Angel A. Merchan
  • Patent number: 10164921
    Abstract: A system and method for voice based social networking is disclosed. The system receives a voice message (and frequently an image) and ultimately delivers it to one or multiple users, placing it within an ongoing context of conversations. The voice and image may be recorded by various devices and the data transmitted in a variety of formats. An alternative implementation places some system functionality in a mobile device such as a smartphone or wearable device, with the remaining functionality resident in system servers attached to the internet. The system can apply rules to select and limit the voice data flowing to each user; rules prioritize the messages using context information such as user interest and user state. An image is fused to the voice message to form a comment. Additional image or voice annotation (or both) identifying the sender may be attached to the comment. Fused image(s) and voice annotation allow the user to quickly deduce the context of the comment.
    Type: Grant
    Filed: May 12, 2015
    Date of Patent: December 25, 2018
    Inventor: Stephen Davies
  • Patent number: 10127912
    Abstract: An apparatus comprising: an input configured to receive from at least two microphones at least two audio signals; at least two processor instances configured to generate separate output audio signal tracks from the at least two audio signals from the at least two microphones; a file processor configured to link the at least two output audio signal tracks within a file structure.
    Type: Grant
    Filed: December 10, 2012
    Date of Patent: November 13, 2018
    Assignee: Nokia Technologies Oy
    Inventors: Marko Tapani Yliaho, Ari Juhani Koski
  • Patent number: 10121461
    Abstract: Providing feedback on a musical performance performed with a musical instrument. An instrument profile associated with the musical instrument used to perform the musical performance is identified. The instrument profile comprises information relating to one or more tuning characteristics of the instrument. The pitches of the notes of the musical performance are analyzed based on the instrument profile to determine a measure of tuning of the musical performance. A feedback signal is generated based on the determined measure of tuning.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: November 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Adrian D. Dick, Doina L. Klinger, David J. Nice, Rebecca Quaggin-Mitchell
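One common way to implement the pitch analysis against an instrument profile is to measure each detected note's deviation in cents from the profile's reference frequency; the profile shape and the 10-cent tolerance are illustrative assumptions, not taken from the patent:

```python
import math

def cents_off(freq_hz, reference_hz):
    """Deviation of a played note from a reference pitch, in cents."""
    return 1200.0 * math.log2(freq_hz / reference_hz)

def tuning_feedback(note_freqs, profile, tolerance_cents=10.0):
    """Measure of tuning as the fraction of notes within the instrument
    profile's tolerance. note_freqs is a list of (note_name, detected_hz);
    profile maps note_name -> reference_hz."""
    in_tune = sum(1 for note, f in note_freqs
                  if abs(cents_off(f, profile[note])) <= tolerance_cents)
    return in_tune / len(note_freqs)
```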
  • Patent number: 10115380
    Abstract: Providing feedback on a musical performance performed with a musical instrument. An instrument profile associated with the musical instrument used to perform the musical performance is identified. The instrument profile comprises information relating to one or more tuning characteristics of the instrument. The pitches of the notes of the musical performance are analyzed based on the instrument profile to determine a measure of tuning of the musical performance. A feedback signal is generated based on the determined measure of tuning.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: October 30, 2018
    Assignee: International Business Machines Corporation
    Inventors: Adrian D. Dick, Doina L. Klinger, David J. Nice, Rebecca Quaggin-Mitchell
  • Patent number: 10096308
    Abstract: Providing feedback on a musical performance performed with a musical instrument. An instrument profile associated with the musical instrument used to perform the musical performance is identified. The instrument profile comprises information relating to one or more tuning characteristics of the instrument. The pitches of the notes of the musical performance are analyzed based on the instrument profile to determine a measure of tuning of the musical performance. A feedback signal is generated based on the determined measure of tuning.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: October 9, 2018
    Assignee: International Business Machines Corporation
    Inventors: Adrian D. Dick, Doina L. Klinger, David J. Nice, Rebecca Quaggin-Mitchell
  • Patent number: 10079890
    Abstract: A system and method for dynamically establishing an ad hoc network among a plurality of communication devices in a beyond-audible frequency range is disclosed. The system comprises a first communication device to transmit a quantity of data to a second communication device. The first communication device comprises an input capturing module that receives the quantity of data from a broadcaster in a format and converts the quantity of data received into a quantity of modulated data, and an identity generating module that generates a temporary identity for a broadcasting user. The second communication device then receives the data broadcasted from the first communication device and determines a probabilistic confidence level of the quantity of modulated data. A transceiver implemented in the first communication device and second communication device transmits and receives the quantity of data in conjunction with the temporary identity within a predefined proximity of each device.
    Type: Grant
    Filed: December 4, 2012
    Date of Patent: September 18, 2018
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Aniruddha Sinha, Arpan Pal, Dhiman Chattopadhyay
  • Patent number: 10037756
    Abstract: Techniques for analyzing long-term audio recordings are provided. In one embodiment, a computing device can record audio captured from an environment of a user on a long-term basis (e.g., on the order of weeks, months, or years). The computing device can store the recorded audio on a local or remote storage device. The computing device can then analyze the recorded audio based on one or more predefined rules and can enable one or more actions based on that analysis.
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: July 31, 2018
    Assignee: Sensory, Incorporated
    Inventors: Bryan Pellom, Todd F. Mozer
  • Patent number: 10019995
    Abstract: A method for teaching a language, comprising: accessing, using a processor of a computer, an audio recording corresponding to a series of pitch patterns; accessing a cantillation representation of said series of pitch patterns, said cantillation representation comprising a plurality of cantillations; processing said audio recording to match the pitch patterns to the cantillations in said cantillation representation; calculating, using said processor, a start time and an end time for each of the series of cantillations as compared to said audio recording; outputting, using said processor, an aligned output representation comprising an identification of each of the cantillations, an identification of the start time for each of the cantillations, and an identification of the end time for each of the cantillations; receiving a request to play a requested pitch pattern; looking up said requested pitch pattern in said aligned output representation to retrieve one or more requested start times and one or more reques
    Type: Grant
    Filed: September 1, 2011
    Date of Patent: July 10, 2018
    Inventors: Norman Abramovitz, Jonathan Stiebel
  • Patent number: 9772816
    Abstract: Example systems and methods may facilitate processing of voice commands using a hybrid system with automated processing and human guide assistance. An example method includes receiving a speech segment, determining a textual representation of the speech segment, causing one or more guide computing devices to display one or more portions of the textual representation, receiving input data from the one or more guide computing devices that identifies a plurality of chunks of the textual representation, determining an association between the identified chunks of the textual representation and corresponding semantic labels, and determining a digital representation of a task based on the identified chunks of the textual representation and the corresponding semantic labels.
    Type: Grant
    Filed: December 22, 2014
    Date of Patent: September 26, 2017
    Assignee: Google Inc.
    Inventors: Jeffrey Bigham, Walter Lasecki, Thiago Teixeira, Adrien Treuille
  • Patent number: 9767790
    Abstract: A voice retrieval apparatus executes processes of: converting a retrieval string into a phoneme string; obtaining, from a time length memory, a continuous time length for each phoneme contained in the converted phoneme string; deriving a plurality of time lengths corresponding to a plurality of utterance rates as candidate utterance time lengths of voices corresponding to the retrieval string based on the obtained continuous time length; specifying, for each of the plurality of time lengths, a plurality of likelihood obtainment segments having the derived time length within a time length of a retrieval sound signal; obtaining a likelihood showing a plausibility that the specified likelihood obtainment segment is a segment where the voices are uttered; and identifying, based on the obtained likelihood, for each of the specified likelihood obtainment segments, an estimation segment where utterance of the voices is estimated in the retrieval sound signal.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: September 19, 2017
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Hiroki Tomita
  • Patent number: 9754024
    Abstract: A voice retrieval apparatus executes processes of: obtaining, from a time length memory, a continuous time length for each phoneme contained in a phoneme string of a retrieval string; obtaining user-specified information on an utterance rate; changing the continuous time length for each obtained phoneme in accordance with the obtained information; deriving, based on the changed continuous time length, an utterance time length of voices corresponding to the retrieval string; specifying a plurality of likelihood obtainment segments of the derived utterance time length in a time length of a retrieval sound signal; obtaining a likelihood showing a plausibility that the specified likelihood obtainment segment is a segment where the voices are uttered; and identifying, based on the obtained likelihood, an estimation segment where, within the retrieval sound signal, utterance of the voices is estimated, the estimation segment being identified for each specified likelihood obtainment segment.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: September 5, 2017
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Hiroki Tomita
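The core mechanism shared by these two CASIO patents can be approximated as a sliding-window search: per-phoneme continuous time lengths are scaled by an utterance rate, summed into a window length, and that window is slid over per-frame scores of the retrieval sound signal to find the highest-likelihood segment. This is an illustrative simplification, not the patented algorithm's exact math; the frame hop and scoring are assumptions.

```python
import numpy as np

FRAME_SEC = 0.01  # assumed frame hop of the retrieval sound signal

def best_segment(phoneme_lengths_sec, rate, frame_scores):
    """Derive the utterance time length from per-phoneme continuous time
    lengths scaled by the utterance rate, then slide a window of that
    length over per-frame likelihood scores and return the best segment."""
    window = max(1, int(round(sum(phoneme_lengths_sec) / rate / FRAME_SEC)))
    scores = np.convolve(frame_scores, np.ones(window), mode="valid")
    start = int(np.argmax(scores))
    return start * FRAME_SEC, (start + window) * FRAME_SEC

frame_scores = np.zeros(300)
frame_scores[100:130] = 1.0          # pretend the query is uttered here
seg = best_segment([0.08, 0.12, 0.10], rate=1.0, frame_scores=frame_scores)
```

Repeating the search over several `rate` values yields the "plurality of time lengths corresponding to a plurality of utterance rates" of the first patent; fixing `rate` to a user-specified value matches the second.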
  • Patent number: 9672825
    Abstract: The present invention relates to implementing new ways of automatically and robustly evaluating agent performance, customer satisfaction, and campaign and competitor analysis in a call center, and comprises: an analysis consumer server, a call pre-processing module, a speech-to-text module, an emotion recognition module, a gender identification module and a fraud detection module.
    Type: Grant
    Filed: January 3, 2013
    Date of Patent: June 6, 2017
    Assignee: SESTEK SES ILETISIM BILGISAYAR TEKNOLOJILERI SANAYI VE TICARET ANONIM SIRKETI
    Inventors: Mustafa Levent Arslan, Ali Haznedaroğlu
  • Patent number: 9519092
    Abstract: An apparatus includes an illumination module, an end reflector, and a beam splitter. The illumination module launches display light along a forward propagating path within an eyepiece. The end reflector is disposed at an opposite end of the eyepiece from the illumination module and reflects back the display light traveling along a reverse propagating path. The beam splitter is disposed in the forward propagating path between the end reflector and the illumination module. The beam splitter directs a first portion of the display light traveling along the forward propagating path out a first side of the eyepiece. The beam splitter directs a second portion of the display light traveling along the reverse propagation path out a second side of the eyepiece.
    Type: Grant
    Filed: March 21, 2012
    Date of Patent: December 13, 2016
    Assignee: Google Inc.
    Inventors: Xiaoyu Miao, Ehsan Saeedi
  • Patent number: 9501568
    Abstract: In an example context of identifying live audio, an audio processor machine accesses audio data that represents a query sound and creates a spectrogram from the audio data. Each segment of the spectrogram represents a different time slice in the query sound. For each time slice, the audio processor machine determines one or more dominant frequencies and an aggregate energy value that represents a combination of all the energy for that dominant frequency and its harmonics. The machine creates a harmonogram by representing these aggregate energy values at these dominant frequencies in each time slice. The harmonogram thus may represent the strongest harmonic components within the query sound. The machine can identify the query sound by comparing its harmonogram to other harmonograms of other sounds and may respond to a user's submission of the query sound by providing an identifier of the query sound to the user.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: November 22, 2016
    Assignee: Gracenote, Inc.
    Inventor: Zafar Rafii
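The harmonogram construction described here lends itself to a compact sketch. Assuming a magnitude spectrogram is already available, each time slice contributes one aggregate value: the energy at its dominant frequency bin plus the energy at that bin's integer multiples (harmonics). The function below is a rough illustration of that idea, not Gracenote's implementation.

```python
import numpy as np

def harmonogram(spec, n_harmonics=4):
    """spec: (freq_bins, time_slices) magnitude spectrogram.
    For each time slice, find the dominant frequency bin and store the
    aggregate energy of that bin and its harmonics at that bin's row."""
    n_bins, n_slices = spec.shape
    out = np.zeros_like(spec)
    for t in range(n_slices):
        f0 = int(np.argmax(spec[:, t]))          # dominant frequency bin
        if f0 == 0:
            continue                             # no usable fundamental
        energy = 0.0
        for k in range(1, n_harmonics + 1):      # bins f0, 2*f0, 3*f0, ...
            if k * f0 < n_bins:
                energy += spec[k * f0, t]
        out[f0, t] = energy                      # aggregate energy at f0
    return out
```

Matching a query sound against a database then reduces to comparing harmonograms rather than raw spectrograms, which emphasizes the strongest harmonic components.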
  • Patent number: 9495955
    Abstract: Features are disclosed for generating acoustic models from an existing corpus of data. Methods for generating the acoustic models can include receiving at least one characteristic of a desired acoustic model, selecting training utterances corresponding to the characteristic from a corpus comprising audio data and corresponding transcription data, and generating an acoustic model based on the selected training utterances.
    Type: Grant
    Filed: January 2, 2013
    Date of Patent: November 15, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Frederick Victor Weber, Jeffrey Penrod Adams
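The selection step in this abstract is essentially a metadata filter over a transcribed corpus. The sketch below shows the shape of that step under assumed field names (`audio`, `text`, `metadata`), which are not the patent's terminology; the generated acoustic model training itself is out of scope here.

```python
# Hypothetical corpus of (audio, transcription, metadata) records.

def select_utterances(corpus, characteristic):
    """Select training utterances whose metadata matches the desired
    characteristic, given as a (key, value) pair."""
    key, value = characteristic
    return [u for u in corpus if u["metadata"].get(key) == value]

corpus = [
    {"audio": "a.wav", "text": "turn on the light", "metadata": {"domain": "home"}},
    {"audio": "b.wav", "text": "play jazz", "metadata": {"domain": "music"}},
    {"audio": "c.wav", "text": "dim the lamp", "metadata": {"domain": "home"}},
]
selected = select_utterances(corpus, ("domain", "home"))
```

An acoustic-model trainer would then consume only `selected`, yielding a model specialized to the requested characteristic.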
  • Patent number: 9408572
    Abstract: Support structures for positioning sensors on a physiologic tunnel for measuring physical, chemical and biological parameters of the body and to produce an action according to the measured value of the parameters. The support structure includes a sensor fitted on the support structures using a special geometry for acquiring continuous and undisturbed data on the physiology of the body. Signals are transmitted to a remote station by wireless transmission such as by electromagnetic waves, radio waves, infrared, sound and the like or by being reported locally by audio or visual transmission. The physical and chemical parameters include brain function, metabolic function, hydrodynamic function, hydration status, levels of chemical compounds in the blood, and the like. The support structure includes patches, clips, eyeglasses, head mounted gear and the like, containing passive or active sensors positioned at the end of the tunnel with sensing systems positioned on and accessing a physiologic tunnel.
    Type: Grant
    Filed: April 15, 2015
    Date of Patent: August 9, 2016
    Assignee: GEELUX HOLDINGS, LTD.
    Inventor: Marcio Marc Abreu
  • Patent number: 9412363
    Abstract: A model-based approach for on-screen item selection and disambiguation is provided. An utterance may be received by a computing device in response to a display of a list of items for selection on a display screen. A disambiguation model may then be applied to the utterance. The disambiguation model may be utilized to determine whether the utterance is directed to at least one of the list of displayed items, extract referential features from the utterance, and identify an item from the list corresponding to the utterance, based on the extracted referential features. The computing device may then perform an action which includes selecting the identified item associated with the utterance.
    Type: Grant
    Filed: March 3, 2014
    Date of Patent: August 9, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ruhi Sarikaya, Fethiye Asli Celikyilmaz, Zhaleh Feizollahi, Larry Paul Heck, Dilek Z. Hakkani-Tur
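A toy version of the referential-feature idea can make the abstract concrete. The real disambiguation model is statistical; the rule-based stand-in below only illustrates what "referential features" (ordinal words) and a lexical fallback might look like, and every name in it is an assumption.

```python
# Minimal rule-based stand-in for a disambiguation model over a displayed list.
ORDINALS = {"first": 0, "second": 1, "third": 2, "last": -1}

def resolve(utterance, items):
    """Map an utterance to a displayed item via ordinal referential
    features, falling back to a lexical match; None if undirected."""
    lowered = utterance.lower()
    for word, idx in ORDINALS.items():
        if word in lowered:
            return items[idx]
    for item in items:               # fall back to matching the item name
        if item.lower() in lowered:
            return item
    return None

movies = ["Alien", "Gravity", "Interstellar"]
pick = resolve("play the second one", movies)
```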
  • Patent number: 9384736
    Abstract: Techniques disclosed herein include systems and methods for managing user interface responses to user input including spoken queries and commands. This includes providing incremental user interface (UI) response based on multiple recognition results about user input that are received with different delays. Such techniques include providing an initial response to a user at an early time, before remote recognition results are available. Systems herein can respond incrementally by initiating an initial UI response based on first recognition results, and then modify the initial UI response after receiving secondary recognition results. Since an initial response begins immediately, instead of waiting for results from all recognizers, it reduces the perceived delay by the user before complete results get rendered to the user.
    Type: Grant
    Filed: August 21, 2012
    Date of Patent: July 5, 2016
    Assignee: Nuance Communications, Inc.
    Inventors: Martin Labsky, Tomas Macek, Ladislav Kunc, Jan Kleindienst
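The incremental-response pattern above can be reduced to a small state machine: render something from the fast first recognizer immediately, then revise it in place when the slower remote recognizer answers. The class below is a minimal sketch of that pattern; the recognizer callbacks are stand-ins, not Nuance's API.

```python
class IncrementalUI:
    """Render an initial UI response from early recognition results,
    then modify it when secondary (remote) results arrive."""

    def __init__(self):
        self.rendered = []

    def show(self, text):
        self.rendered.append(text)

    def on_local_result(self, text):
        self.show(f"(partial) {text}")   # initial response, shown at once

    def on_remote_result(self, text):
        self.rendered[-1] = text         # modify the initial response

ui = IncrementalUI()
ui.on_local_result("call mom")    # rendered before remote results exist
ui.on_remote_result("call Tom")   # revision once full results arrive
```

Because the first callback fires without waiting for all recognizers, the user sees feedback immediately, which is the perceived-latency reduction the abstract claims.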
  • Patent number: 9330720
    Abstract: Methods, systems and computer readable media for altering an audio output are provided. In some embodiments, the system may change the original frequency content of an audio data file to a second frequency content so that a recorded audio track will sound as if a different person had recorded it when it is played back. In other embodiments, the system may receive an audio data file and a voice signature, and it may apply the voice signature to the audio data file to alter the audio output of the audio data file. In that instance, the audio data file may be a textual representation of a recorded audio data file.
    Type: Grant
    Filed: April 2, 2008
    Date of Patent: May 3, 2016
    Assignee: Apple Inc.
    Inventor: Michael M. Lee
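The simplest way to "change the original frequency content to a second frequency content" is resampling, which multiplies every frequency by a fixed factor (and changes duration as a side effect). Real voice-signature transfer would be far more involved; the sketch below only demonstrates the frequency-shift idea.

```python
import numpy as np

def shift_frequencies(samples, factor):
    """Naive resampling so every frequency is multiplied by `factor`."""
    n_out = int(len(samples) / factor)
    positions = np.arange(n_out) * factor
    return np.interp(positions, np.arange(len(samples)), samples)

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220.0 * t)       # one second of a 220 Hz tone
shifted = shift_frequencies(tone, 1.5)     # ~330 Hz, and a shorter clip
```

A pitch-shifter that preserves duration (e.g. phase-vocoder based) would be closer to making a track "sound as if a different person had recorded it", but the spectral effect is the same in kind.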
  • Patent number: 9301719
    Abstract: Support structures for positioning sensors on a physiologic tunnel for measuring physical, chemical and biological parameters of the body and to produce an action according to the measured value of the parameters. The support structure includes a sensor fitted on the support structures using a special geometry for acquiring continuous and undisturbed data on the physiology of the body. Signals are transmitted to a remote station by wireless transmission such as by electromagnetic waves, radio waves, infrared, sound and the like or by being reported locally by audio or visual transmission. The physical and chemical parameters include brain function, metabolic function, hydrodynamic function, hydration status, levels of chemical compounds in the blood, and the like. The support structure includes patches, clips, eyeglasses, head mounted gear and the like, containing passive or active sensors positioned at the end of the tunnel with sensing systems positioned on and accessing a physiologic tunnel.
    Type: Grant
    Filed: February 13, 2015
    Date of Patent: April 5, 2016
    Assignee: GEELUX HOLDING, LTD.
    Inventor: Marcio Marc Abreu
  • Patent number: 9286708
    Abstract: An information device includes an image receiving unit receiving an information terminal image having a specific region composed of pixels having a same feature value from an information terminal, the feature value being luminance or chromaticity; a specific region detecting unit detecting the specific region within the information terminal image, based on feature values of pixels composing the information terminal image received by the image receiving unit; an information device image creating unit creating an information device image related to a function provided to the information device; a composite image creating unit creating a composite image where the information device image created by the information device image creating unit is embedded in the specific region detected by the specific region detecting unit within the information terminal image; and a display control unit displaying the composite image created by the composite image creating unit on a display apparatus.
    Type: Grant
    Filed: February 10, 2015
    Date of Patent: March 15, 2016
    Assignee: JVC KENWOOD Corporation
    Inventor: Hiroaki Takanashi
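The compositing pipeline in this abstract (detect a uniform-feature-value region, embed the device image there) can be sketched with array operations. This simplification assumes the specific region is axis-aligned and exactly uniform, and uses luminance equality as the feature value; none of these names come from the patent.

```python
import numpy as np

def find_uniform_region(img, value):
    """Detect the bounding box of pixels sharing the given feature value."""
    ys, xs = np.where(img == value)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def embed(terminal_img, device_img, value):
    """Create a composite image with the device image embedded in the
    detected specific region of the terminal image."""
    y0, y1, x0, x1 = find_uniform_region(terminal_img, value)
    out = terminal_img.copy()
    out[y0:y1, x0:x1] = device_img[: y1 - y0, : x1 - x0]
    return out

terminal = np.full((8, 8), 50, dtype=np.uint8)
terminal[2:6, 3:7] = 0                      # uniform placeholder region
device = np.full((4, 4), 200, dtype=np.uint8)
composite = embed(terminal, device, 0)
```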
  • Patent number: 9159338
    Abstract: Systems and methods of rendering a textual animation are provided. The methods include receiving an audio sample of an audio signal that is being rendered by a media rendering source. The methods also include receiving one or more descriptors for the audio signal based on at least one of a semantic vector, an audio vector, and an emotion vector. Based on the one or more descriptors, a client device may render the textual transcriptions of vocal elements of the audio signal in an animated manner. The client device may further render the textual transcriptions of the vocal elements of the audio signal to be substantially in synchrony to the audio signal being rendered by the media rendering source. In addition, the client device may further receive an identification of a song corresponding to the audio sample, and may render lyrics of the song in an animated manner.
    Type: Grant
    Filed: December 3, 2010
    Date of Patent: October 13, 2015
    Assignee: Shazam Entertainment Ltd.
    Inventors: Rahul Powar, Avery Li-Chun Wang
  • Patent number: 9076347
    Abstract: A system and methods for analyzing pronunciation, detecting errors and providing automatic feedback to help non-native speakers improve pronunciation of a foreign language is provided that employs publicly available, high accuracy third-party automatic speech recognizers available via the Internet to analyze and identify mispronunciations.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: July 7, 2015
    Assignee: Better Accent, LLC
    Inventors: Julia Komissarchik, Edward Komissarchik
  • Patent number: RE48126
    Abstract: A technique for synchronizing a visual browser and a voice browser. A visual browser is used to navigate through visual content, such as WML pages. During the navigation, the visual browser creates a historical record of events that have occurred during the navigation. The voice browser uses this historical record to navigate the content in the same manner as occurred on the visual browser, thereby synchronizing to a state equivalent to that of the visual browser. The creation of the historical record may be performed by using a script to trap events, where the script contains code that records the trapped events. The synchronization technique may be used with a multi-modal application that permits the mode of input/output (I/O) to be changed between visual and voice browsers. When the mode is changed from visual to voice, the record of events captured by the visual browser is provided to the voice browser, thereby allowing the I/O mode to change seamlessly from visual to voice.
    Type: Grant
    Filed: September 1, 2011
    Date of Patent: July 28, 2020
    Assignee: GULA CONSULTING LIMITED LIABILITY COMPANY
    Inventors: Inderpal Singh Mumick, Sandeep Sibal
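The synchronization technique reissued here is event-record replay: the visual browser traps and records navigation events, and the voice browser replays that record to reach an equivalent state when the I/O mode switches. The sketch below compresses both browsers into one toy class whose state is simply the last page visited; all names are illustrative.

```python
class Browser:
    """Toy browser that traps navigation events into a historical record."""

    def __init__(self):
        self.state = "start"
        self.history = []

    def navigate(self, event):
        self.history.append(event)   # trap and record the event
        self.state = event           # simplified: state is the last page

    def replay(self, history):
        """Re-run a recorded event history to synchronize state."""
        for event in history:
            self.navigate(event)

visual = Browser()
visual.navigate("menu")
visual.navigate("orders")

voice = Browser()
voice.replay(visual.history)         # voice browser reaches the same state
```

In the patent's multi-modal setting, the trap would be a script attached to the visual content; handing its recorded events to the voice browser is what makes the visual-to-voice mode switch seamless.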