Pattern Display Patents (Class 704/276)
-
Patent number: 12124803
Abstract: A method of generating an image for use in a conversation taking place in a messaging application is disclosed. Conversation input text is received from a user of a portable device that includes a display. Model input text is generated from the conversation input text, which is processed with a text-to-image model to generate an image based on the model input text. The generated image is displayed on the portable device, and user input is received to transmit the image to a remote recipient.
Type: Grant
Filed: August 17, 2022
Date of Patent: October 22, 2024
Assignee: Snap Inc.
Inventors: Arnab Ghosh, Jian Ren, Pavel Savchenkov, Sergey Tulyakov
-
Patent number: 12108228
Abstract: A voice processing system includes: a plurality of microphone-speaker devices; a voice acquirer that acquires audio data from each of the microphone-speaker devices; a voice transmitter that transmits the audio data acquired by the voice acquirer to other microphone-speaker devices; a determination processor that determines whether or not a predetermined condition is met with respect to a factor that affects progress of a conference; a notification processor that causes, when the predetermined condition is met, a microphone-speaker device selected from among the plurality of microphone-speaker devices depending on the factor that affects the progress of the conference to provide specific information related to the predetermined condition.
Type: Grant
Filed: February 15, 2022
Date of Patent: October 1, 2024
Assignee: SHARP KABUSHIKI KAISHA
Inventors: Tatsuya Nishio, Maaki Shozu
-
Patent number: 12047766
Abstract: An apparatus, method and computer program product for: receiving spatial audio information captured by a plurality of microphones, receiving a captured audio object from an audio device wirelessly connected to the apparatus, determining an audio audibility value relating to the audio device, determining whether the audio audibility value fulfils at least one criterion, and activating, in response to determining that the audio audibility value fulfils the at least one criterion, inclusion of the audio object captured by the audio device in the spatial audio information captured by the plurality of microphones.
Type: Grant
Filed: January 21, 2021
Date of Patent: July 23, 2024
Assignee: Nokia Technologies Oy
Inventors: Lasse Juhani Laaksonen, Miikka Tapani Vilermo, Arto Juhani Lehtiniemi, Jussi Artturi Leppanen
-
Patent number: 11854156
Abstract: In one aspect, a computerized method for implementing a multi-pass iterative closest point (ICP) registration in an automated facial reconstruction process includes the step of providing an automated facial reconstruction process. The method includes the step of detecting that a face tracking application programming interface (API) misalignment has occurred during the automated facial reconstruction process. The method includes the step of implementing a multi-pass ICP process on the face tracking API misalignment by the following steps. One step includes obtaining a first point cloud (PC1) of the automated facial reconstruction process. The method includes the step of obtaining a second point cloud (PC2) of the automated facial reconstruction process. The method includes the step of obtaining a first face mask (mask1) of the automated facial reconstruction process. The method includes the step of obtaining a second face mask (mask2) of the automated facial reconstruction process.
Type: Grant
Filed: December 21, 2021
Date of Patent: December 26, 2023
Inventors: Mathew Powers, Xiao Xiao, William A. Kennedy
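To illustrate the ICP step underlying the abstract above, here is a minimal Python sketch of a single ICP pass using nearest-neighbour correspondences and an SVD (Kabsch) rigid alignment. The multi-pass scheme, the face-tracking API, and the patent's mask-handling logic are not reproduced; function names and parameters are illustrative, not the patented implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_pass(pc1, pc2, mask1=None, iterations=20):
    """One ICP pass: rigidly align pc2 onto (optionally masked) pc1.

    pc1, pc2: (N, 3) and (M, 3) arrays of 3D points.
    mask1: optional boolean mask restricting pc1 to face-region points.
    Returns the accumulated 3x3 rotation and 3-vector translation.
    """
    target = pc1[mask1] if mask1 is not None else pc1
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    src = pc2.copy()
    for _ in range(iterations):
        # Nearest-neighbour correspondences.
        _, idx = tree.query(src)
        matched = target[idx]
        # Best-fit rigid transform via SVD (Kabsch).
        src_c, dst_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:   # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = dst_c - R_step @ src_c
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

A multi-pass scheme in the spirit of the abstract could call `icp_pass` repeatedly with different masks (mask1, mask2) and accept the result whose residual alignment error is smallest.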
-
Patent number: 11730354
Abstract: A light source control device includes: a light source configured to intermittently emit pulse light; and a hardware processor configured to detect a vocal cord vibrational frequency of a subject based on a voice signal input from an external device, determine whether or not the vocal cord frequency is equal to or greater than a threshold value, and change a light emission frequency of the pulse light in one frame period based on image brightness based on an image signal when the vocal cord frequency is equal to or greater than the threshold value, and prohibit change in the light emission frequency and change the pulse width of the pulse light emitted by the light source based on the image brightness when the vocal cord frequency is not equal to or greater than the threshold value.
Type: Grant
Filed: June 19, 2020
Date of Patent: August 22, 2023
Assignee: SONY OLYMPUS MEDICAL SOLUTIONS INC.
Inventor: Hidetaro Kono
-
Patent number: 11636273
Abstract: One embodiment of the present disclosure sets forth a technique for generating translation suggestions. The technique includes receiving a sequence of source-language subtitle events associated with a content item, where each source-language subtitle event includes a different textual string representing a corresponding portion of the content item, generating a unit of translatable text based on a textual string included in at least one source-language subtitle event from the sequence, translating, via software executing on a machine, the unit of translatable text into target-language text, generating, based on the target-language text, at least one target-language subtitle event associated with a portion of the content item corresponding to the at least one source-language subtitle event, and generating, for display, a subtitle presentation template that includes the at least one target-language subtitle event.
Type: Grant
Filed: June 14, 2019
Date of Patent: April 25, 2023
Assignee: NETFLIX, INC.
Inventors: Ballav Bihani, Matthew James Rickard, Marianna Semeniakin, Ranjith Kumar Shetty, Allison Filemyr Smith, Patrick Brendon Pearson, Sameer Shah
-
Patent number: 11508107
Abstract: The disclosure provides methods and systems for automatically generating an animatable object, such as a 3D model. In particular, the present technology provides fast, easy, and automatic animatable solutions based on unique facial characteristics of user input. Various embodiments of the present technology include receiving user input, such as a two-dimensional image or three-dimensional scan of a user's face, and automatically detecting one or more features. The methods and systems may further include deforming a template geometry and a template control structure based on the one or more detected features to automatically generate a custom geometry and custom control structure, respectively. A texture of the received user input may also be transferred to the custom geometry. The animatable object therefore includes the custom geometry, the transferred texture, and the custom control structure, which follow a morphology of the face.
Type: Grant
Filed: September 29, 2020
Date of Patent: November 22, 2022
Assignee: Didimo, Inc.
Inventors: Verónica Costa Teixeira Pinto Orvalho, Eva Margarida Ferreira de Abreu Almeida, Hugo Miguel dos Reis Pereira, Thomas Iorns, José Carlos Guedes dos Prazeres Miranda, Alexis Paul Benoit Roche, Mariana Ribeiro Dias
-
Patent number: 11417086
Abstract: A system for personalizing augmented reality for individuals that is easy to use. There is a central server with instructions for selecting or creating a personally meaningful multimedia object. A sound wave and an image of the sound wave are generated from the object. The generated image is applied as a tattoo, either permanently or temporarily, to a person. A unique identifier is automatically generated, assigned, and stored on the central server from an uploaded image of the tattoo, the generated image, and the multimedia object. An image of the tattoo on the person is captured using a smart device, which has instructions to determine the unique identifier from the captured image. The stored multimedia is retrieved, downloaded, aligned, and overlaid where it is played back.
Type: Grant
Filed: April 14, 2018
Date of Patent: August 16, 2022
Inventor: Nathaniel Grant Siggard
-
Patent number: 11403849
Abstract: Methods and apparatus related to characterization of digital content, such as in a content delivery and/or service provider network. In one embodiment, a method is provided for identifying characteristics of digital content by a first-pass analysis of the content data, and subsequent adjustment of results of the first-pass data analysis based on a heuristic analysis. In one variant, the first-pass analysis is based on an extant (COTS) or off-the-shelf analytics framework which generates a result; artificial intelligence and/or machine learning techniques are utilized for analyzing the result based on a multi-source or multivariate analytical framework to enable convergence of a final result having suitable level of accuracy, yet with optimized temporal and processing overhead characteristics. In one implementation, the methods and apparatus are adapted for use in a content distribution network advertisement ingestion processing system.
Type: Grant
Filed: September 25, 2019
Date of Patent: August 2, 2022
Assignee: CHARTER COMMUNICATIONS OPERATING, LLC
Inventors: Srilal M. Weerasinghe, Vipul Patel, Basil Badawiyeh, Robbie N. Mills, III, Michael Terada
-
Patent number: 11403064
Abstract: Systems, methods, and software are disclosed herein for enhancing the content capture experience on computing devices. In an implementation, a combined user input comprises a voice signal and a touch gesture sustained at least partially coincident with the voice signal. An occurrence of the combined user input triggers the identification of an associated content object which may then be associated with a captured version of the voice signal. Such an advance provides users with a new framework for interacting with their devices, applications, and surroundings.
Type: Grant
Filed: November 14, 2019
Date of Patent: August 2, 2022
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Eddie Louis Mays, III, Hauke Antony Gentzkow, Austin Seungmin Lee, Alice Shirong Ferng, Aaron David Rogers, Lauren Diana Lo, Cadin Lee Batrack, Jonathan Reed Harris, Brian Scott Stucker, Becky Ann Brown
-
Patent number: 11394838
Abstract: An image forming apparatus that is capable of changing a setting content without putting an excessive burden on a user. An operation unit is operated by a user to change a setting content of the image forming apparatus. A memory device stores a set of instructions. At least one processor executes the set of instructions to obtain a command that is generated based on a natural language process for changing the setting content of the image forming apparatus, change the setting content of the image forming apparatus according to an operation to the operation unit and the command obtained, and prohibit the image forming apparatus from changing the setting content according to the operation to the operation unit in a case where the command based on the natural language process is obtained.
Type: Grant
Filed: July 26, 2019
Date of Patent: July 19, 2022
Assignee: CANON KABUSHIKI KAISHA
Inventor: Hiroto Tsujii
-
Patent number: 11343295
Abstract: An electronic device includes one or more processors and memory storing one or more programs. The one or more programs include instructions for presenting, on a first playback component coupled with the electronic device, a first media content item and receiving an input to preview a second media content item. The one or more programs include instructions for, in accordance with a determination that the electronic device is in a first mode of operation, while the first media content item is presented on the first playback component, presenting a preview of the second media content item on a second playback component. The one or more programs include instructions for, in accordance with a determination that the electronic device is not in the first mode of operation: ceasing presentation of the first media content item and presenting the preview of the second media content item.
Type: Grant
Filed: July 2, 2020
Date of Patent: May 24, 2022
Assignee: Spotify AB
Inventors: Sten Garmark, Quenton Cook, Gustav Söderström, Ivo Silva, Michelle Kadir, Peter Strömberg
-
Patent number: 11325045
Abstract: A method and apparatus for acquiring a merged map, a storage medium, a processor, and a terminal are provided. The method includes that: a configuration file and a thumbnail are acquired in an off-line state; and maps corresponding to model components contained in each game scene are loaded while the game runs, and the maps corresponding to the model components contained in each game scene and the thumbnail are merged according to the configuration file to obtain a merged map corresponding to at least one game scene. The present disclosure addresses technical problems in the related art, namely that map merging schemes for game scenes have low processing efficiency and occupy too much storage space.
Type: Grant
Filed: July 29, 2019
Date of Patent: May 10, 2022
Assignee: NETEASE (HANGZHOU) NETWORK CO., LTD.
Inventor: Kunyu Cai
-
Patent number: 11253193
Abstract: A body worn or implantable hearing prosthesis, including a device configured to capture an audio environment of a recipient and evoke a hearing percept based at least in part on the captured audio environment, wherein the hearing prosthesis is configured to identify, based on the captured audio environment, one or more biomarkers present in the audio environment indicative of the recipient's ability to hear.
Type: Grant
Filed: November 8, 2016
Date of Patent: February 22, 2022
Assignee: Cochlear Limited
Inventors: Kieran Reed, John Michael Heasman, Kerrie Plant, Alex Von Brasch, Stephen Fung
-
Patent number: 11205277
Abstract: A system includes sensors and a tracking subsystem. The subsystem receives a first image feed from a first sensor and a second image feed from a second sensor. The field of view of the second sensor at least partially overlaps with that of the first sensor. The subsystem detects, in a frame from the first feed, a first contour associated with an object. The subsystem determines, based on pixel coordinates of the first contour, a first pixel position of the object. The subsystem detects, in a frame from the second feed, a second contour associated with the same object. The subsystem determines, based on pixel coordinates of the second contour, a second pixel position of the object. Based on the first pixel position and the second pixel position, a global position for the object is determined in a space.
Type: Grant
Filed: May 27, 2020
Date of Patent: December 21, 2021
Assignee: 7-ELEVEN, INC.
Inventors: Shahmeer Ali Mirza, Sailesh Bharathwaaj Krishnamurthy
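A minimal sketch of the final fusion step described above, assuming each sensor has a precomputed pixel-to-floor-plane homography; the contour detection and tracking subsystem are not shown, and averaging the two projections is an illustrative choice rather than the patent's stated method.

```python
import numpy as np

def pixel_to_global(pixel_xy, homography):
    """Project a pixel coordinate to a global (floor-plane) coordinate
    using a precomputed 3x3 homography for that sensor."""
    u, v = pixel_xy
    x, y, w = homography @ np.array([u, v, 1.0])
    return np.array([x / w, y / w])

def fuse_global_position(pixel_pos_cam1, pixel_pos_cam2, H1, H2):
    """Combine detections of the same object from two sensors with
    overlapping fields of view into a single global position."""
    p1 = pixel_to_global(pixel_pos_cam1, H1)
    p2 = pixel_to_global(pixel_pos_cam2, H2)
    return (p1 + p2) / 2.0
```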
-
Patent number: 11205429
Abstract: An information processing apparatus includes a receiving part that receives processing information based on voice; and a controller that performs control so that the processing information indicated by the voice received by the receiving part is displayed on a display. The receiving part further receives modification of the processing information displayed on the display, and the controller further performs control so that the modification received by the receiving part is reflected in processing received by the receiving part.
Type: Grant
Filed: September 30, 2019
Date of Patent: December 21, 2021
Assignee: FUJIFILM Business Innovation Corp.
Inventor: Ryoto Yabusaki
-
Patent number: 11195525
Abstract: An operation terminal includes: an imaging part configured to image a space; a human detecting part configured to detect a user based on information on the space imaged; a voice inputting part configured to receive inputting of the spoken voice of the user; a coordinates detecting part configured to detect a first coordinate of a predetermined first part of an upper limb of the user and a second coordinate of a predetermined second part of an upper half body excluding the upper limb of the user based on information acquired by a predetermined unit when the user is detected by the human detecting part; and a condition determining part configured to compare a positional relationship between the first coordinate and the second coordinate, and configured to bring the voice inputting part into a voice inputting receivable state when the positional relationship satisfies a predetermined first condition at least one time.
Type: Grant
Filed: June 6, 2019
Date of Patent: December 7, 2021
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Kohei Tahara, Yusaku Ota, Hiroko Sugimoto
-
Patent number: 11195515
Abstract: The present application provides a method and device for voice acquisition to reduce the effect of individual differences by quantitatively inputting voice indicators, the method comprising: displaying a first prompt word and starting to receive a first input voice of a user; after the first input voice of the user is received, recognizing the received first input voice to be a first user word; comparing the first user word with the first prompt word; if the first user word is matched with the first prompt word, then displaying a second prompt word and starting to receive a second input voice of the user; after the second input voice of the user is received, recognizing the received second input voice to be a second user word; comparing the second user word with the second prompt word; and integrating the first input voice and the second input voice to be a digital voice file, and storing the digital voice file.
Type: Grant
Filed: October 23, 2019
Date of Patent: December 7, 2021
Inventor: Zhonghua Ci
-
Patent number: 11157728
Abstract: A method for person detection using overhead images includes receiving a depth image captured from an overhead viewpoint at a first location; detecting in the depth image for a target region indicative of a scene object within a height range; determining whether the detected target region has an area within a head size range; if within the head size range, determining whether the detected target region has a roundness value less than a maximum roundness value; if less than the maximum roundness value, classifying the detected target region as a head of a person and masking the classified target region in the depth image, where the masked region is excluded from detecting; and repeating the detecting to the masking to detect for and classify another target region in the depth image within the height range and outside of the masked region.
Type: Grant
Filed: April 2, 2020
Date of Patent: October 26, 2021
Assignee: Ricoh Co., Ltd.
Inventors: Manuel Martinello, Edward L. Schwartz
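A minimal sketch of the area and roundness tests described above, using OpenCV contours. The height and area thresholds are placeholders (the patent does not publish specific values), the roundness metric is assumed to be perimeter² / (4π·area), and the iterative mask-and-repeat loop is summarized in a comment.

```python
import cv2
import numpy as np

# Illustrative thresholds; the patent does not specify these values.
HEIGHT_MIN_MM, HEIGHT_MAX_MM = 1300, 2000   # head-height range above the floor
AREA_MIN_PX, AREA_MAX_PX = 1500, 15000      # plausible head area in pixels
MAX_ROUNDNESS = 1.6                         # 1.0 would be a perfect circle

def detect_heads(height_mm):
    """Classify round, head-sized regions in an overhead height image.

    height_mm: 2D array of per-pixel heights above the floor (depth image
    converted to height). Returns a list of contours classified as heads.
    """
    # Target regions: scene objects falling inside the head-height range.
    in_range = ((height_mm >= HEIGHT_MIN_MM) &
                (height_mm <= HEIGHT_MAX_MM)).astype(np.uint8)
    contours, _ = cv2.findContours(in_range, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    heads = []
    for c in contours:
        area = cv2.contourArea(c)
        if not (AREA_MIN_PX <= area <= AREA_MAX_PX):
            continue
        perimeter = cv2.arcLength(c, True)
        # Compactness-style roundness: small values are more circular.
        roundness = perimeter ** 2 / (4 * np.pi * area)
        if roundness < MAX_ROUNDNESS:
            # The patent masks this region in the depth image and repeats
            # the detection for further heads outside the masked region.
            heads.append(c)
    return heads
```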
-
Patent number: 11158320
Abstract: Methods and systems for processing user input to a computing system are disclosed. The computing system has access to an audio input and a visual input such as a camera. Face detection is performed on an image from the visual input, and if a face is detected, this triggers the recording of audio and makes the audio available to a speech processing function. Further verification steps can be combined with the face detection step for a multi-factor verification of user intent to interact with the system.
Type: Grant
Filed: April 17, 2020
Date of Patent: October 26, 2021
Assignee: Soapbox Labs Ltd.
Inventor: Patricia Scanlon
-
Patent number: 11100066
Abstract: Described herein are technologies that are configured to assist a user in recollecting information about people, places, and things. Computer-readable data is captured, and contextual data that temporally corresponds to the computer-readable data is also captured. In a database, the computer-readable data is indexed by the contextual data. Thus, when a query is received that references the contextual data, the computer-readable data is retrieved.
Type: Grant
Filed: May 6, 2019
Date of Patent: August 24, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Bo-June Hsu, Kuansan Wang, Jeremy Espenshade, Chiyuan Huang, Yu-ting Kuo
-
Patent number: 11045092
Abstract: Support structures for positioning sensors on a physiologic tunnel for measuring physical, chemical and biological parameters of the body and to produce an action according to the measured value of the parameters. The support structure includes a sensor fitted on the support structures using a special geometry for acquiring continuous and undisturbed data on the physiology of the body. Signals are transmitted to a remote station by wireless transmission such as by electromagnetic waves, radio waves, infrared, sound, and the like or by being reported locally by audio or visual transmission. The physical and chemical parameters include brain function, metabolic function, hydrodynamic function, hydration status, levels of chemical compounds in the blood, and the like. The support structure includes patches, clips, eyeglasses, head mounted gear and the like, containing passive or active sensors positioned at the end of the tunnel with sensing systems positioned on and accessing a physiologic tunnel.
Type: Grant
Filed: July 19, 2018
Date of Patent: June 29, 2021
Inventor: Marcio Marc Abreu
-
Patent number: 10963054
Abstract: In an information processing system including a controller device including at least one vibration body, and an information processing apparatus outputting a control signal for the vibration body to the controller device, during vibration of the vibration body, sounds from the periphery are collected, and it is decided whether or not an allophone is generated in the controller device by using a signal of the collected sounds. When it is decided that the allophone is generated in the controller device, the information processing apparatus executes correction processing for the control signal for the vibration body, and outputs the control signal corrected by the correction processing.
Type: Grant
Filed: December 7, 2017
Date of Patent: March 30, 2021
Assignee: Sony Interactive Entertainment Inc.
Inventor: Yusuke Nakagawa
-
Patent number: 10885809
Abstract: A computing device is adapted to construct a user-memory data structure for a user based on interactions with the user. The user-memory data structure may comprise a plurality of memory representations for concepts and items important for gaining proficiency in a subject matter. The memory representations are dynamic, and characterize how well each of the concepts and items are retained as a function of time by the user. The computing device uses the user-memory data structure to guide operation of the computing device.
Type: Grant
Filed: May 20, 2016
Date of Patent: January 5, 2021
Assignee: Gammakite, Inc.
Inventor: Emmanuel Roche
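A sketch of one possible dynamic memory representation, using an exponential forgetting curve and a simple stability update. The decay model, update factors, and scheduling threshold are assumptions for illustration, not the patent's actual user-memory data structure.

```python
import math
import time
from dataclasses import dataclass, field

@dataclass
class MemoryRepresentation:
    """Tracks how well one concept or item is retained as a function of time."""
    item: str
    stability_s: float = 24 * 3600.0           # time constant of forgetting
    last_review: float = field(default_factory=time.time)

    def retention(self, now=None):
        """Estimated probability that the user still recalls the item."""
        now = time.time() if now is None else now
        return math.exp(-(now - self.last_review) / self.stability_s)

    def record_review(self, success, now=None):
        """Update the representation after a practice interaction."""
        now = time.time() if now is None else now
        # Successful recall strengthens the memory; failure weakens it.
        self.stability_s *= 2.0 if success else 0.5
        self.last_review = now

def next_item(user_memory, threshold=0.6):
    """Guide device operation: pick the item whose retention has dropped most."""
    due = [m for m in user_memory if m.retention() < threshold]
    return min(due, key=lambda m: m.retention()) if due else None
```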
-
Patent number: 10847139
Abstract: A crowdsourcing based community platform includes a natural language configuration system that predicts a user's desired function call based on a natural language input (speech or text). The system provides a collaboration platform to quickly configure and optimize natural language systems to leverage the work and data of other developers, thus minimizing the time and data required to improve the quality and accuracy of one single system and providing a network effect to quickly reach a critical mass of data. An application developer can provide training data for training a model specific to the developer's application. The developer can also obtain training data by forking one or more other applications so that the training data provided for the forked applications is used to train the model for the developer's application.
Type: Grant
Filed: October 29, 2019
Date of Patent: November 24, 2020
Assignee: Facebook, Inc.
Inventor: Alexandre Lebrun
-
Patent number: 10691296
Abstract: An electronic device includes a display, a memory, and a processor, and the processor displays, on the display, a folder icon that includes execution icons of a plurality of applications and, in response to a first user input selecting the folder icon, displays a user interface for collectively controlling notifications for the plurality of applications.
Type: Grant
Filed: November 13, 2018
Date of Patent: June 23, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventors: Yong Gu Lee, Kyu Ok Choi, Ji Won Kim, Young Hak Oh, Sun Young Yi, Won Jun Lee
-
Patent number: 10636422
Abstract: There is provided a system in which empowerment is performed by outputting conversation information to the user, the system including: a computer including a processor, a memory, and an interface; and a measuring device that measures signals of a plurality of types, wherein the processor calculates values of conversation parameters of a plurality of attributes for evaluating a state of a user who performs the empowerment on the basis of a plurality of signals measured by the measuring device, the processor selects a selection parameter which is a conversation parameter of a change target on the basis of the values of the conversation parameters of the plurality of attributes, the processor decides conversation information for changing a value of the selection parameter, and the processor outputs the decided conversation information to the user.
Type: Grant
Filed: January 4, 2018
Date of Patent: April 28, 2020
Assignee: HITACHI, LTD.
Inventors: Takashi Numata, Toshinori Miyoshi, Hiroki Sato
-
Patent number: 10606954
Abstract: Embodiments for text segmentation for topic modelling by a processor. Real-time conversation data may be analyzed and time intervals (e.g., inter-arrival times) between messages of the conversation data may be recorded. Each of the messages may be defined (and/or segmented) as burst segments or reflection segments according to the analyzing and recording. One or more topic modelling operations may be enhanced for text segmentation using the burst segments or reflection segments.
Type: Grant
Filed: February 15, 2018
Date of Patent: March 31, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Andrew T. Penrose, Jonathan Dunne
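A minimal sketch of segmenting messages into burst and reflection segments by inter-arrival time. The 30-second gap threshold and the labelling rule (short gaps form bursts, long gaps start reflection segments) are illustrative assumptions, not the patent's definitions.

```python
BURST_GAP_S = 30.0   # illustrative threshold; the patent derives segmentation from data

def segment_messages(messages):
    """Split a conversation into burst and reflection segments.

    messages: list of (timestamp_seconds, text) tuples sorted by time.
    Messages arriving within BURST_GAP_S of the previous one are grouped
    into a 'burst' segment; a longer gap starts a 'reflection' segment.
    Returns a list of (label, [texts]) segments.
    """
    segments, current, label = [], [], "burst"
    for i, (ts, text) in enumerate(messages):
        gap = ts - messages[i - 1][0] if i else 0.0
        new_label = "burst" if gap <= BURST_GAP_S else "reflection"
        if i and new_label != label:
            segments.append((label, current))
            current = []
        current.append(text)
        label = new_label
    if current:
        segments.append((label, current))
    return segments
```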
-
Patent number: 10558475
Abstract: A method for dynamically localizing content of a graphical user interface widget executed on a widget runtime model of a computing platform on a user device includes configuring the graphical user interface widget to provide first location-responsive content in a presentation runtime model by defaulting to a static geographic location, wherein the graphical user interface widget provides the first location-responsive content based on the static geographic location, receiving a configuration setting to configure the graphical user interface widget for a localized mode, retrieving a geographic location for the user device, and providing the retrieved geographic location to the widget runtime model for the graphical user interface widget to select second location-responsive content, wherein the graphical user interface widget switches to provide the second location-responsive content based on the retrieved geographic location.
Type: Grant
Filed: June 22, 2017
Date of Patent: February 11, 2020
Assignee: QUALCOMM Incorporated
Inventors: Mark Leslie Caunter, Bruce Kelly Jackson, Steven Richard Geach
-
Patent number: 10529116
Abstract: A method, computer system, and computer program product for determining and displaying tones with messaging information are provided. The embodiment may include receiving a plurality of user-entered messaging information from a messaging application. The embodiment may also include determining a tone associated with the plurality of received user-entered messaging information. The embodiment may further include determining a color and an animation for the determined tone based on a preconfigured mapping of a plurality of colors and a plurality of animations with a plurality of tones. The embodiment may also include displaying the animation with the color on a display screen of a user device until the user submits the plurality of user-entered messaging information for transmission to one or more other users.
Type: Grant
Filed: May 22, 2018
Date of Patent: January 7, 2020
Assignee: International Business Machines Corporation
Inventors: Kelley M. Gordon, Michael Celedonia, Katelyn Applegate
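A sketch of the preconfigured tone-to-color/animation mapping described above. The tone labels, hex colors, animation names, and the classify_tone callable are placeholders, not the patent's actual mapping or tone analyzer.

```python
# Preconfigured mapping of tones to display colors and animations (placeholders).
TONE_DISPLAY_MAP = {
    "joy":        {"color": "#FFD700", "animation": "bounce"},
    "sadness":    {"color": "#4169E1", "animation": "fade"},
    "anger":      {"color": "#DC143C", "animation": "shake"},
    "analytical": {"color": "#708090", "animation": "pulse"},
}

def display_hint_for_message(message_text, classify_tone):
    """Return the color and animation to show while the user is composing.

    classify_tone: callable mapping text -> tone label (e.g., a tone-analysis
    service); it is assumed here rather than specified by the patent.
    """
    tone = classify_tone(message_text)
    return TONE_DISPLAY_MAP.get(tone, {"color": "#CCCCCC", "animation": "none"})
```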
-
Patent number: 10515076
Abstract: One or more servers receive a natural language query from a client device associated with a user. The one or more servers classify the natural language query as a query that seeks information previously accessed by the user. The one or more servers then obtain a response to the natural language query from one or more collections of documents, wherein each document in the one or more collections of documents was previously accessed by the user. The one or more servers generate search results based on the response. Then, the one or more servers communicate the search results to the client device.
Type: Grant
Filed: January 31, 2017
Date of Patent: December 24, 2019
Assignee: Google LLC
Inventors: Nathan Wiegand, Bryan C. Horling, Jason L. Smart
-
Patent number: 10490101
Abstract: A wearable device is provided that includes a microphone, a display, and a controller. The controller controls to identify a direction of emitted sound based on sound picked up by the microphone, and to display information corresponding to the sound at a position on the display corresponding to the identified direction of the emitted sound.
Type: Grant
Filed: May 8, 2017
Date of Patent: November 26, 2019
Assignee: FUJITSU LIMITED
Inventor: Mamiko Teshima
-
Patent number: 10438698
Abstract: An improved basal insulin management system and an improved user interface for use therewith are provided. User interfaces are provided that dynamically display basal rate information and corresponding time segment information for a basal insulin program in a graphical format. The graphical presentation of the basal insulin program as it is being built by a user and the graphical presentation of a completed basal insulin program provides insulin management information to the user in a more intuitive and useful format. User interfaces further enable a user to make temporary adjustments to a predefined basal insulin program with the adjustments presented graphically to improve the user's understanding of the changes. As a result of being provided with the user interfaces described herein, users are less likely to make mistakes and are more likely to adjust basal rates more frequently, thereby contributing to better blood glucose control and improved health outcomes.
Type: Grant
Filed: November 13, 2017
Date of Patent: October 8, 2019
Assignee: INSULET CORPORATION
Inventors: Sandhya Pillalamarri, Jorge Borges, Susan Mercer
-
Patent number: 10409552
Abstract: Systems and methods for displaying an audio indicator including a main portion having a width proportional to a volume of a particular phoneme of an utterance are described herein. In some embodiments, audio data representing an utterance may be received at a speech-processing system from a user device. The speech-processing system may determine a maximum volume amplitude for the utterance, and using the maximum volume amplitude, may determine a normalized amplitude value between 0 and 1 associated with the volume at which phonemes of the utterance are spoken. The speech-processing system may then map the normalized amplitude value(s) to widths for a main portion of an audio indicator, where larger normalized amplitude values may correspond to smaller main portion widths.
Type: Grant
Filed: September 19, 2016
Date of Patent: September 10, 2019
Assignee: Amazon Technologies, Inc.
Inventors: David Adrian Jara, Timothy Thomas Gray, Kwan Ting Lee, Jae Pum Park, Michael Hone, Grant Hinkson, Richard Leigh Mains, Shilpan Bhagat
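A sketch of the normalization and the inverse width mapping described above (larger normalized amplitude maps to a smaller main-portion width). The pixel bounds are placeholders.

```python
def indicator_widths(phoneme_volumes, min_width_px=10, max_width_px=120):
    """Map per-phoneme volumes to widths of an audio indicator's main portion.

    Volumes are normalized to [0, 1] by the utterance's maximum amplitude;
    larger normalized values map to smaller widths, per the abstract above.
    The pixel bounds are illustrative.
    """
    peak = max(phoneme_volumes, default=0.0) or 1.0
    widths = []
    for v in phoneme_volumes:
        normalized = v / peak                                   # 0..1
        widths.append(max_width_px - normalized * (max_width_px - min_width_px))
    return widths
```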
-
Patent number: 10311119
Abstract: Implementations generally relate to hashtags. In some implementations, a method includes providing one or more location-based contextual hashtags to a user by receiving, from a first user device associated with a first user, information indicative of a physical location of the first user device. The method further includes identifying, with one or more processors, a place of interest based on the information indicative of the physical location of the first user device. The method further includes determining a category associated with the place of interest. The method further includes retrieving one or more hashtags from one or more databases based on the place of interest or the category associated with the place of interest. The method further includes providing the one or more hashtags and information about the place of interest to the first user device.
Type: Grant
Filed: August 21, 2015
Date of Patent: June 4, 2019
Assignee: Google LLC
Inventors: Sreenivas Gollapudi, Alexander Fabrikant, Shanmugasundaram Ravikumar
-
Patent number: 10176817
Abstract: The invention provides an audio encoder including a combination of a linear predictive coding filter having a plurality of linear predictive coding coefficients and a time-frequency converter, wherein the combination is configured to filter and to convert a frame of the audio signal into a frequency domain in order to output a spectrum based on the frame and on the linear predictive coding coefficients; a low frequency emphasizer configured to calculate a processed spectrum based on the spectrum, wherein spectral lines of the processed spectrum representing a lower frequency than a reference spectral line are emphasized; and a control device configured to control the calculation of the processed spectrum by the low frequency emphasizer depending on the linear predictive coding coefficients of the linear predictive coding filter.
Type: Grant
Filed: July 28, 2015
Date of Patent: January 8, 2019
Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
Inventors: Stefan Doehla, Bernhard Grill, Christian Helmrich, Nikolaus Rettelbach
-
Patent number: 10170101
Abstract: A computer-implemented method includes determining, by a first device, a current emotional state of a user of the first device. The current emotional state is based, at least in part, on real-time information corresponding to the user and relates to a textual message from the user. The computer-implemented method further includes determining, by the first device, a set of phonetic data associated with a plurality of vocal samples corresponding to the user. The computer-implemented method further includes dynamically converting, by the first device, the textual message into an audio message. The textual message is converted into the audio message based, at least in part, on the current emotional state and a portion of the set of phonetic data that corresponds to the current emotional state. A corresponding computer system and computer program product are also disclosed.
Type: Grant
Filed: October 24, 2017
Date of Patent: January 1, 2019
Assignee: International Business Machines Corporation
Inventors: Kevin G. Carr, Thomas D. Fitzsimmons, Johnathon J. Hoste, Angel A. Merchan
-
Patent number: 10170100
Abstract: A computer-implemented method includes determining, by a first device, a current emotional state of a user of the first device. The current emotional state is based, at least in part, on real-time information corresponding to the user and relates to a textual message from the user. The computer-implemented method further includes determining, by the first device, a set of phonetic data associated with a plurality of vocal samples corresponding to the user. The computer-implemented method further includes dynamically converting, by the first device, the textual message into an audio message. The textual message is converted into the audio message based, at least in part, on the current emotional state and a portion of the set of phonetic data that corresponds to the current emotional state. A corresponding computer system and computer program product are also disclosed.
Type: Grant
Filed: March 24, 2017
Date of Patent: January 1, 2019
Assignee: International Business Machines Corporation
Inventors: Kevin G. Carr, Thomas D. Fitzsimmons, Johnathon J. Hoste, Angel A. Merchan
-
Patent number: 10164921
Abstract: A system and method for voice based social networking is disclosed. The system receives a voice message (and frequently an image) and ultimately delivers it to one or multiple users, placing it within an ongoing context of conversations. The voice and image may be recorded by various devices and the data transmitted in a variety of formats. An alternative implementation places some system functionality in a mobile device such as a smartphone or wearable device, with the remaining functionality resident in system servers attached to the internet. The system can apply rules to select and limit the voice data flowing to each user; rules prioritize the messages using context information such as user interest and user state. An image is fused to the voice message to form a comment. Additional image or voice annotation (or both) identifying the sender may be attached to the comment. Fused image(s) and voice annotation allow the user to quickly deduce the context of the comment.
Type: Grant
Filed: May 12, 2015
Date of Patent: December 25, 2018
Inventor: Stephen Davies
-
Patent number: 10127912
Abstract: An apparatus comprising: an input configured to receive from at least two microphones at least two audio signals; at least two processor instances configured to generate separate output audio signal tracks from the at least two audio signals from the at least two microphones; a file processor configured to link the at least two output audio signal tracks within a file structure.
Type: Grant
Filed: December 10, 2012
Date of Patent: November 13, 2018
Assignee: Nokia Technologies Oy
Inventors: Marko Tapani Yliaho, Ari Juhani Koski
-
Patent number: 10121461
Abstract: Providing feedback on a musical performance performed with a musical instrument. An instrument profile associated with the musical instrument used to perform the musical performance is identified. The instrument profile comprises information relating to one or more tuning characteristics of the instrument. The pitch of the notes of the musical performance is analyzed based on the instrument profile to determine a measure of tuning of the musical performance. A feedback signal is generated based on the determined measure of tuning.
Type: Grant
Filed: June 27, 2017
Date of Patent: November 6, 2018
Assignee: International Business Machines Corporation
Inventors: Adrian D. Dick, Doina L. Klinger, David J. Nice, Rebecca Quaggin-Mitchell
-
Patent number: 10115380
Abstract: Providing feedback on a musical performance performed with a musical instrument. An instrument profile associated with the musical instrument used to perform the musical performance is identified. The instrument profile comprises information relating to one or more tuning characteristics of the instrument. The pitch of the notes of the musical performance is analyzed based on the instrument profile to determine a measure of tuning of the musical performance. A feedback signal is generated based on the determined measure of tuning.
Type: Grant
Filed: December 15, 2017
Date of Patent: October 30, 2018
Assignee: International Business Machines Corporation
Inventors: Adrian D. Dick, Doina L. Klinger, David J. Nice, Rebecca Quaggin-Mitchell
-
Patent number: 10096308
Abstract: Providing feedback on a musical performance performed with a musical instrument. An instrument profile associated with the musical instrument used to perform the musical performance is identified. The instrument profile comprises information relating to one or more tuning characteristics of the instrument. The pitch of the notes of the musical performance is analyzed based on the instrument profile to determine a measure of tuning of the musical performance. A feedback signal is generated based on the determined measure of tuning.
Type: Grant
Filed: March 5, 2018
Date of Patent: October 9, 2018
Assignee: International Business Machines Corporation
Inventors: Adrian D. Dick, Doina L. Klinger, David J. Nice, Rebecca Quaggin-Mitchell
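The three preceding entries are related patents describing the same tuning-feedback technique. A minimal sketch follows, assuming the instrument profile reduces to a reference A4 frequency and that the measure of tuning is the mean absolute deviation, in cents, from the nearest equal-tempered note; the real profile and feedback signal are richer than this.

```python
import math

def cents_deviation(freq_hz, reference_hz):
    """Signed deviation of a performed pitch from its reference, in cents."""
    return 1200.0 * math.log2(freq_hz / reference_hz)

def measure_of_tuning(performed_hz, instrument_profile):
    """Score how in tune a performance is against an instrument profile.

    performed_hz: detected fundamental frequencies of the performed notes.
    instrument_profile: dict with 'a4_hz' (e.g. 440.0), a simplification of
    the patent's tuning characteristics.
    Returns the mean absolute deviation in cents (lower is more in tune).
    """
    a4 = instrument_profile.get("a4_hz", 440.0)
    deviations = []
    for f in performed_hz:
        # Nearest equal-tempered note relative to the profile's A4.
        semitones = round(12 * math.log2(f / a4))
        nearest = a4 * 2 ** (semitones / 12)
        deviations.append(abs(cents_deviation(f, nearest)))
    return sum(deviations) / len(deviations) if deviations else 0.0

def feedback_signal(tuning_cents, tolerance_cents=10.0):
    """Generate a simple feedback message from the measured tuning."""
    return "in tune" if tuning_cents <= tolerance_cents else "out of tune"
```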
-
Patent number: 10079890
Abstract: A system and method for dynamically establishing an adhoc network amongst plurality of communication devices in a beyond audible frequency range is disclosed. The system comprises a first communication device to transmit a quantity of data to a second communication device. The first communication device comprises an input capturing module that receives the quantity of data from a broadcaster in a format and converts the received data into a quantity of modulated data, and an identity generating module that generates a temporary identity for a broadcasting user. The second communication device then receives the data broadcasted from the first communication device and determines a probabilistic confidence level of the quantity of modulated data. A transceiver implemented in the first communication device and second communication device transmits and receives the quantity of data in conjugation with the temporary identity within a predefined proximity of each device.
Type: Grant
Filed: December 4, 2012
Date of Patent: September 18, 2018
Assignee: TATA CONSULTANCY SERVICES LIMITED
Inventors: Aniruddha Sinha, Arpan Pal, Dhiman Chattopadhyay
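A sketch of one way to carry data in a beyond-audible band, using binary frequency-shift keying near 19 kHz. The temporary-identity generation and the probabilistic confidence level from the abstract are omitted (the confidence could, for example, be derived from the energy ratio between the two tones), and the frequencies, bit rate, and sample rate are illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 48000               # Hz; high enough for near-ultrasonic tones
FREQ_0, FREQ_1 = 18500, 19500     # Hz, beyond most adults' hearing; illustrative
BIT_DURATION_S = 0.05

def modulate(payload: bytes) -> np.ndarray:
    """Binary FSK modulation of a payload into a near-ultrasonic waveform."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    t = np.arange(int(SAMPLE_RATE * BIT_DURATION_S)) / SAMPLE_RATE
    chunks = [np.sin(2 * np.pi * (FREQ_1 if b else FREQ_0) * t) for b in bits]
    return np.concatenate(chunks).astype(np.float32)

def demodulate(signal: np.ndarray) -> bytes:
    """Recover bits by comparing per-symbol energy at the two tone frequencies."""
    n = int(SAMPLE_RATE * BIT_DURATION_S)
    t = np.arange(n) / SAMPLE_RATE
    ref0 = np.exp(-2j * np.pi * FREQ_0 * t)
    ref1 = np.exp(-2j * np.pi * FREQ_1 * t)
    bits = []
    for i in range(0, len(signal) - n + 1, n):
        symbol = signal[i:i + n]
        bits.append(1 if abs(symbol @ ref1) > abs(symbol @ ref0) else 0)
    return np.packbits(np.array(bits, dtype=np.uint8)).tobytes()
```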
-
Patent number: 10037756
Abstract: Techniques for analyzing long-term audio recordings are provided. In one embodiment, a computing device can record audio captured from an environment of a user on a long-term basis (e.g., on the order of weeks, months, or years). The computing device can store the recorded audio on a local or remote storage device. The computing device can then analyze the recorded audio based on one or more predefined rules and can enable one or more actions based on that analysis.
Type: Grant
Filed: March 29, 2016
Date of Patent: July 31, 2018
Assignee: Sensory, Incorporated
Inventors: Bryan Pellom, Todd F. Mozer
-
Patent number: 10019995
Abstract: A method for teaching a language, comprising: accessing, using a processor of a computer, an audio recording corresponding to a series of pitch patterns; accessing a cantillation representation of said series of pitch patterns, said cantillation representation comprising a plurality of cantillations; processing said audio recording to match the pitch patterns to the cantillations in said cantillation representation; calculating, using said processor, a start time and an end time for each of the series of cantillations as compared to said audio recording; outputting, using said processor, an aligned output representation comprising an identification of each of the cantillations, an identification of the start time for each of the cantillations, and an identification of the end time for each of the cantillations; receiving a request to play a requested pitch pattern; looking up said requested pitch pattern in said aligned output representation to retrieve one or more requested start times and one or more requested end times …
Type: Grant
Filed: September 1, 2011
Date of Patent: July 10, 2018
Inventors: Norman Abramovitz, Jonathan Stiebel
-
Patent number: 9772816
Abstract: Example systems and methods may facilitate processing of voice commands using a hybrid system with automated processing and human guide assistance. An example method includes receiving a speech segment, determining a textual representation of the speech segment, causing one or more guide computing devices to display one or more portions of the textual representation, receiving input data from the one or more guide computing devices that identifies a plurality of chunks of the textual representation, determining an association between the identified chunks of the textual representation and corresponding semantic labels, and determining a digital representation of a task based on the identified chunks of the textual representation and the corresponding semantic labels.
Type: Grant
Filed: December 22, 2014
Date of Patent: September 26, 2017
Assignee: Google Inc.
Inventors: Jeffrey Bigham, Walter Lasecki, Thiago Teixeira, Adrien Treuille
-
Patent number: 9767790
Abstract: A voice retrieval apparatus executes processes of: converting a retrieval string into a phoneme string; obtaining, from a time length memory, a continuous time length for each phoneme contained in the converted phoneme string; deriving a plurality of time lengths corresponding to a plurality of utterance rates as candidate utterance time lengths of voices corresponding to the retrieval string based on the obtained continuous time length; specifying, for each of the plurality of time lengths, a plurality of likelihood obtainment segments having the derived time length within a time length of a retrieval sound signal; obtaining a likelihood showing a plausibility that the specified likelihood obtainment segment is a segment where the voices are uttered; and identifying, based on the obtained likelihood, for each of the specified likelihood obtainment segments, an estimation segment where utterance of the voices is estimated in the retrieval sound signal.
Type: Grant
Filed: November 30, 2015
Date of Patent: September 19, 2017
Assignee: CASIO COMPUTER CO., LTD.
Inventor: Hiroki Tomita
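A sketch of deriving candidate utterance time lengths from per-phoneme continuous time lengths at several utterance rates, then sliding likelihood-obtainment segments over the retrieval sound signal. The acoustic likelihood itself is passed in as a callable (score_segment) because the patent's scoring model is not reproduced here; the rate factors and hop size are illustrative assumptions.

```python
UTTERANCE_RATE_FACTORS = (0.8, 1.0, 1.25)   # illustrative slow/normal/fast rates

def candidate_durations(phonemes, phoneme_length_s):
    """Derive candidate utterance durations of the query at several rates.

    phonemes: phoneme string of the retrieval string, e.g. ['k', 'a', 't'].
    phoneme_length_s: mapping phoneme -> typical continuous time length (s).
    """
    base = sum(phoneme_length_s[p] for p in phonemes)
    return [base * rate for rate in UTTERANCE_RATE_FACTORS]

def likelihood_segments(signal_duration_s, segment_duration_s, hop_s=0.1):
    """Enumerate likelihood-obtainment segments of one candidate duration."""
    start = 0.0
    while start + segment_duration_s <= signal_duration_s:
        yield (start, start + segment_duration_s)
        start += hop_s

def best_estimation_segment(phonemes, phoneme_length_s, signal_duration_s,
                            score_segment):
    """Pick the segment where utterance of the query is most plausible.

    score_segment: callable (start_s, end_s) -> likelihood, standing in for
    the patent's acoustic scoring of each likelihood-obtainment segment.
    """
    best = None
    for duration in candidate_durations(phonemes, phoneme_length_s):
        for seg in likelihood_segments(signal_duration_s, duration):
            score = score_segment(*seg)
            if best is None or score > best[0]:
                best = (score, seg)
    return best[1] if best else None
```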
-
Patent number: 9754024
Abstract: A voice retrieval apparatus executes processes of: obtaining, from a time length memory, a continuous time length for each phoneme contained in a phoneme string of a retrieval string; obtaining user-specified information on an utterance rate; changing the continuous time length for each obtained phoneme in accordance with the obtained information; deriving, based on the changed continuous time length, an utterance time length of voices corresponding to the retrieval string; specifying a plurality of likelihood obtainment segments of the derived utterance time length in a time length of a retrieval sound signal; obtaining a likelihood showing a plausibility that the specified likelihood obtainment segment is a segment where the voices are uttered; and identifying, based on the obtained likelihood, an estimation segment where, within the retrieval sound signal, utterance of the voices is estimated, the estimation segment being identified for each specified likelihood obtainment segment.
Type: Grant
Filed: November 30, 2015
Date of Patent: September 5, 2017
Assignee: CASIO COMPUTER CO., LTD.
Inventor: Hiroki Tomita
-
Patent number: RE48126
Abstract: A technique for synchronizing a visual browser and a voice browser. A visual browser is used to navigate through visual content, such as WML pages. During the navigation, the visual browser creates a historical record of events that have occurred during the navigation. The voice browser uses this historical record to navigate the content in the same manner as occurred on the visual browser, thereby synchronizing to a state equivalent to that of the visual browser. The creation of the historical record may be performed by using a script to trap events, where the script contains code that records the trapped events. The synchronization technique may be used with a multi-modal application that permits the mode of input/output (I/O) to be changed between visual and voice browsers. When the mode is changed from visual to voice, the record of events captured by the visual browser is provided to the voice browser, thereby allowing the I/O mode to change seamlessly from visual to voice.
Type: Grant
Filed: September 1, 2011
Date of Patent: July 28, 2020
Assignee: GULA CONSULTING LIMITED LIABILITY COMPANY
Inventors: Inderpal Singh Mumick, Sandeep Sibal