Audio Input For On-screen Manipulation (e.g., Voice Controlled GUI) Patents (Class 715/728)
-
Patent number: 11688393
Abstract: A method including embedding, by a trained issue MLM (machine learning model), a new natural language issue statement into an issue vector. An inner product of the issue vector with an actions matrix is calculated. The actions matrix includes centroid-vectors calculated using a clustering method from a second output of a trained action MLM which embedded prior actions expressed in natural language action statements taken as a result of prior natural issue statements. Calculating the inner product results in probabilities associated with the prior actions. Each of the probabilities represents a corresponding estimate that a corresponding prior action is relevant to the issue vector. A list of proposed actions relevant to the issue vector is generated by comparing the probabilities to a threshold value and selecting a subset of the prior actions with corresponding probabilities above the threshold. The list of proposed actions is transmitted to a user device.
Type: Grant
Filed: December 30, 2021
Date of Patent: June 27, 2023
Assignee: INTUIT INC
Inventors: Shlomi Medalion, Alexander Zhicharevich, Yair Horesh, Oren Sar Shalom, Elik Sror, Adi Shalev
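The scoring step reads as a vector-matrix product followed by thresholding. A minimal sketch, assuming the issue vector and the centroid rows of the actions matrix come from already-trained models; the sigmoid used to turn scores into probabilities and all names here are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def propose_actions(issue_vec, action_centroids, action_labels, threshold=0.5):
    """Score prior-action centroids against an embedded issue statement.

    issue_vec:        (d,) embedding of the new issue statement
    action_centroids: (k, d) centroid vectors from clustering prior action embeddings
    action_labels:    k natural-language action statements, one per centroid
    """
    # Inner product of the issue vector with each centroid row.
    scores = action_centroids @ issue_vec
    # Squash scores to (0, 1) so they can be read as relevance probabilities
    # (an assumed choice; the abstract only requires comparable probabilities).
    probs = 1.0 / (1.0 + np.exp(-scores))
    # Keep actions whose probability clears the threshold, highest first.
    order = np.argsort(-probs)
    return [(action_labels[i], float(probs[i])) for i in order if probs[i] > threshold]

# Toy usage with random embeddings standing in for the trained MLM outputs.
rng = np.random.default_rng(0)
issue = rng.normal(size=8)
centroids = rng.normal(size=(3, 8))
print(propose_actions(issue, centroids, ["reset password", "refile form", "contact support"]))
```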
-
Patent number: 11687318
Abstract: Techniques include a method of providing a user interface on a device. The user interface has at least a first display portion, a second display portion and a third display portion, the second display portion including a link to the third display portion that, when activated by a user, causes the device to present the third display portion of the user interface, the first display portion not including the link to the third display portion. The method includes causing the first display portion to be displayed. The method further includes receiving audible input while the first display portion is being displayed. The method further includes determining that the third display portion corresponds to an utterance in the audible input based at least in part on labels determined to match the utterance. The method further includes causing the third display portion to be displayed.
Type: Grant
Filed: October 7, 2020
Date of Patent: June 27, 2023
Assignee: State Farm Mutual Automobile Insurance Company
Inventor: Duane Christiansen
-
Patent number: 11689692
Abstract: A highlight moment within a video, music to accompany a looping presentation of the video, and a looping effect for the video may be determined. A segment of the video to be used for the looping presentation of the video may be selected based on the highlight moment, the music, and the looping effect. The looping presentation of the video may be generated to have the segment edited based on a style of the looping effect and to include accompaniment of the music.
Type: Grant
Filed: November 29, 2021
Date of Patent: June 27, 2023
Assignee: GoPro, Inc.
Inventors: Guillaume Oules, Guillaume Abbe
-
Patent number: 11676595
Abstract: A reception apparatus, including processing circuitry that is configured to receive a voice command related to content from a user during presentation of the content to the user. The processing circuitry is configured to transmit the voice command to a server system for processing. The processing circuitry is configured to receive a response to the voice command from the server system. The response to the voice command is generated based on the voice command and content information for identifying the content related to the voice command.
Type: Grant
Filed: December 29, 2020
Date of Patent: June 13, 2023
Assignee: SATURN LICENSING LLC
Inventor: Tatsuya Igarashi
-
Patent number: 11657076
Abstract: At least some embodiments are directed to a system to compute uniform structured summarization of customer chats. In at least some embodiments, the system may operate a processor and receive a corpus of chats between customers and customer service representatives of an enterprise. The system groups the corpus of chats into subgroup task types and then extracts chat keywords and chat-related words for each subgroup task type, generates an expandable template data structure for each subgroup task type, processes at least one chat to extract chat utterances and chat snippets, ranks the chat utterances and chat snippets, and populates the expandable template data structure based on the rankings to generate a chat summary data structure.
Type: Grant
Filed: March 18, 2021
Date of Patent: May 23, 2023
Assignee: American Express Travel Related Services Company, INC.
Inventors: Priya Radhakrishnan, Shourya Roy
-
Patent number: 11614917
Abstract: Systems and methods to implement commands based on selection sequences to a user interface are disclosed. Exemplary implementations may: store, in electronic storage, a library of terms utterable by users that facilitate implementation of intended results; obtain audio information representing sounds captured by a client computing platform; detect the spoken terms uttered by the user present within the audio information; determine whether the spoken terms detected are included in the library of terms; responsive to a determination that the spoken terms are not included in the library of terms, effectuate presentation of an error message via the user interface; record a selection sequence that the user performs subsequent to the presentation of the error message that causes a result; correlate the selection sequence with the spoken terms based on the selection sequence recorded subsequent to the error message to generate a correlation; and store the correlation to the electronic storage.
Type: Grant
Filed: April 6, 2021
Date of Patent: March 28, 2023
Assignee: Suki AI, Inc.
Inventors: Jatin Chhugani, Ganesh Satish Mallya, Alan Diec, Vamsi Reddy Chagari, Sudheer Tumu, Nithyanand Kota, Maneesh Dewan
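The core idea, an error-triggered recording of the user's fallback selections and their correlation with the unrecognized spoken terms, can be sketched roughly as below; the class and method names are hypothetical, and matching is simplified to exact phrase lookup.

```python
class SelectionSequenceLearner:
    """Map unrecognized spoken terms to the UI selection sequence that followed."""

    def __init__(self, term_library):
        self.term_library = set(term_library)   # terms already understood
        self.correlations = {}                  # spoken phrase -> recorded sequence

    def handle_utterance(self, spoken_terms):
        phrase = " ".join(spoken_terms)
        if phrase in self.correlations:
            return ("replay", self.correlations[phrase])
        if all(t in self.term_library for t in spoken_terms):
            return ("execute", spoken_terms)
        # Unknown terms: present an error and start recording what the user does next.
        return ("error_and_record", phrase)

    def record_sequence(self, phrase, selection_sequence):
        # Correlate the recorded taps/clicks with the phrase for future reuse.
        self.correlations[phrase] = list(selection_sequence)

learner = SelectionSequenceLearner(["open", "chart", "close"])
print(learner.handle_utterance(["archive", "chart"]))        # error, start recording
learner.record_sequence("archive chart", ["menu", "archive", "confirm"])
print(learner.handle_utterance(["archive", "chart"]))        # replays the recorded sequence
```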
-
Patent number: 11532307
Abstract: The present disclosure discloses an image processing device including: a receiving module configured to receive a voice signal and an image to be processed; a conversion module configured to convert the voice signal into an image processing instruction and determine a target area according to a target voice instruction conversion model, in which the target area is a processing area of the image to be processed; and a processing module configured to process the target area according to the image processing instruction and a target image processing model. The examples may realize the functionality of using voice commands to control image processing, which may save users' time spent in learning image processing software prior to image processing, and improve user experience.
Type: Grant
Filed: September 29, 2018
Date of Patent: December 20, 2022
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Tianshi Chen, Shuai Hu, Xiaobing Chen
-
Patent number: 11513592
Abstract: An endpoint system including one or more computing devices presents an object in a virtual environment (e.g., a shared virtual environment); receives gaze input corresponding to a gaze of a user of the endpoint system; calculates a gaze vector based on the gaze input; receives motion input corresponding to an action of the user; determines a path adjustment (e.g., by changing motion parameters such as trajectory and velocity) for the object based at least in part on the gaze vector and the motion input; and simulates motion of the object within the virtual environment based at least in part on the path adjustment. The object may be presented as being thrown by an avatar, with a flight path based on the path adjustment. The gaze vector may be based on head orientation information, eye tracking information, or some combination of these or other gaze information.
Type: Grant
Filed: April 23, 2021
Date of Patent: November 29, 2022
Assignee: Rec Room Inc.
Inventors: Nicholas Fajt, Cameron Brown, Dan Kroymann, Omer Bilal Orhan, Johnathan Bevis, Joshua Wehrly
-
Patent number: 11507183
Abstract: The present disclosure relates to resolving natural language ambiguities with respect to a simulated reality setting. In an exemplary embodiment, a simulated reality setting having one or more virtual objects is displayed. A stream of gaze events is generated from the simulated reality setting and a stream of gaze data. A speech input is received within a time period and a domain is determined based on a text representation of the speech input. Based on the time period and a plurality of event times for the stream of gaze events, one or more gaze events are identified from the stream of gaze events. The identified one or more gaze events are used to determine a parameter value for an unresolved parameter of the domain. A set of tasks representing a user intent for the speech input is determined based on the parameter value, and the set of tasks is performed.
Type: Grant
Filed: September 16, 2020
Date of Patent: November 22, 2022
Assignee: Apple Inc.
Inventors: Niranjan Manjunath, Scott M. Andrus, Xinyuan Huang, William W. Luciw, Jonathan H. Russell
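The gaze-based disambiguation step amounts to filtering gaze events by the speech time window and picking a candidate object. A rough sketch under the assumption that gaze events are already reduced to (timestamp, object) pairs; the slack window and the "latest qualifying event wins" rule are illustrative choices, not the patent's method.

```python
def resolve_parameter(gaze_events, speech_start, speech_end, slack=0.5):
    """Pick the gazed-at virtual object that overlaps the speech time window.

    gaze_events: list of (timestamp_seconds, object_id) tuples
    Returns the object gazed at most recently within
    [speech_start - slack, speech_end + slack], or None if nothing qualifies.
    """
    candidates = [(t, obj) for t, obj in gaze_events
                  if speech_start - slack <= t <= speech_end + slack]
    if not candidates:
        return None
    return max(candidates)[1]   # latest qualifying gaze event wins

events = [(1.0, "lamp"), (2.4, "window"), (5.0, "door")]
print(resolve_parameter(events, speech_start=2.0, speech_end=3.0))  # -> "window"
```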
-
Patent number: 11501766
Abstract: Provided are a device and a method for providing a response message to a voice input of a user. The method, performed by a device, of providing a response message to a voice input of a user includes: receiving the voice input of the user; determining a destination of the user and an intention of the user, by analyzing the received voice input; obtaining association information related to the destination; generating the response message that recommends a substitute destination related to the intention of the user, based on the obtained association information; and displaying the generated response message.
Type: Grant
Filed: November 14, 2017
Date of Patent: November 15, 2022
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Ga-hee Lee, In-dong Lee, Se-chun Kang, Hyung-rai Oh
-
Patent number: 11501154
Abstract: A sensor transformation attention network (STAN) model including sensors, attention modules, a merge module and a task-specific module is provided. The attention modules calculate attention scores of feature vectors corresponding to the input signals collected by the sensors. The merge module calculates attention values of the attention scores, and generates a merged transformation vector based on the attention values and the feature vectors. The task-specific module classifies the merged transformation vector.
Type: Grant
Filed: March 5, 2018
Date of Patent: November 15, 2022
Assignees: SAMSUNG ELECTRONICS CO., LTD., UNIVERSITAET ZUERICH
Inventors: Stefan Braun, Daniel Neil, Enea Ceolini, Jithendar Anumula, Shih-Chii Liu
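In the simplest reading, the merge module computes an attention-weighted sum across sensors. A toy numpy sketch, assuming precomputed per-sensor feature vectors and a linear attention scorer; the real STAN modules are trained networks, so this only illustrates the shape of the computation.

```python
import numpy as np

def stan_merge(feature_vectors, attention_weights):
    """Merge per-sensor feature vectors with scalar attention, STAN-style.

    feature_vectors:   (n_sensors, d) one feature vector per sensor
    attention_weights: (n_sensors, d) parameters of a per-sensor linear scorer
                       (stand-in for the trained attention modules)
    """
    # Attention score per sensor: a scalar produced from its own feature vector.
    scores = np.sum(feature_vectors * attention_weights, axis=1)
    # Attention values: softmax across sensors, so noisy sensors are down-weighted.
    values = np.exp(scores - scores.max())
    values /= values.sum()
    # Merged transformation vector: attention-weighted sum of the feature vectors.
    return values @ feature_vectors

features = np.array([[0.2, 0.9, 0.1], [0.8, 0.1, 0.4]])   # two sensors, d=3
weights = np.ones_like(features)
print(stan_merge(features, weights))   # (3,) merged vector fed to the task module
```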
-
Patent number: 11487815
Abstract: An electronic device includes circuitry, firmware, and software that determines identification information associated with a first performer-of-interest at a live event and retrieves a first set of audio tracks from a plurality of audio tracks based on the determined identification information. The circuitry receives a first audio segment associated with the first performer-of-interest from an audio capturing device. The circuitry compares a first audio characteristic of the first audio segment with a second audio characteristic of a first audio portion of each of the first set of audio tracks. The circuitry determines a first audio track based on the comparison between the first audio characteristic and the second audio characteristic. The circuitry identifies a start position of the first audio track based on the first audio segment associated with the first audio track. The circuitry controls a display of the first lyrics information of the first audio track.
Type: Grant
Filed: June 6, 2019
Date of Patent: November 1, 2022
Assignee: SONY CORPORATION
Inventors: Peter Shintani, Mahyar Nejat, Brant Candelore, Robert Blanchard
-
Patent number: 11481109
Abstract: A method for performing multi-touch (MT) data fusion is disclosed in which multiple touch inputs occurring at about the same time are received to generate first touch data. Secondary sense data can then be combined with the first touch data to perform operations on an electronic device. The first touch data and the secondary sense data can be time-aligned and interpreted in a time-coherent manner. The first touch data can be refined in accordance with the secondary sense data, or alternatively, the secondary sense data can be interpreted in accordance with the first touch data. Additionally, the first touch data and the secondary sense data can be combined to create a new command.
Type: Grant
Filed: October 7, 2019
Date of Patent: October 25, 2022
Assignee: Apple Inc.
Inventors: Wayne Carl Westerman, John Greer Elias
-
Patent number: 11450314
Abstract: Methods, apparatus, systems, and computer-readable media are provided for using shortcut command phrases to operate an automated assistant. A user of the automated assistant can request that a shortcut command phrase be established for causing the automated assistant to perform a variety of different actions. In this way, the user does not necessarily have to provide an individual command for each action to be performed but, rather, can use a shortcut command phrase to cause the automated assistant to perform the actions. The shortcut command phrases can be used to control peripheral devices, IoT devices, applications, websites, and/or any other apparatuses or processes capable of being controlled through an automated assistant.
Type: Grant
Filed: October 16, 2017
Date of Patent: September 20, 2022
Assignee: GOOGLE LLC
Inventors: Yuzhao Ni, Lucas Palmer
-
Patent number: 11425064
Abstract: A message may be suggested to a user participating in a conversation using one or more neural networks where the suggested message is adapted to the preferences or communication style of the user. The suggested message may be adapted to the user with a user embedding vector that represents the preferences or communication style of the user in a vector space. To suggest a message to the user, a conversation feature vector may be computed by processing the text of the conversation with a neural network. A context score may be computed for one or more designated messages, where the context score is computed by processing the user embedding vector, the conversation feature vector, and a designated message feature vector with a neural network. A designated message may be selected as a suggested message for the user using the context scores. The suggestion may then be presented to the user.
Type: Grant
Filed: October 25, 2019
Date of Patent: August 23, 2022
Assignee: ASAPP, INC.
Inventors: Kelsey Taylor Ball, Tao Lei, Christopher David Fox, Joseph Ellsworth Hackman
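A hedged sketch of the scoring described above: the user embedding and conversation feature vector are combined and scored against designated-message feature vectors. Concatenation plus a dot product stands in for the neural scoring network, which the abstract leaves unspecified; the function and variable names are assumptions.

```python
import numpy as np

def rank_suggestions(user_emb, conv_vec, message_vecs, messages, top_k=3):
    """Rank designated messages for a user and conversation.

    user_emb:     (d_u,) user embedding vector (preferences / style)
    conv_vec:     (d_c,) conversation feature vector
    message_vecs: (n, d_u + d_c) feature vectors of designated messages
    messages:     n candidate message strings
    """
    query = np.concatenate([user_emb, conv_vec])   # combined context
    scores = message_vecs @ query                  # one context score per message
    order = np.argsort(-scores)[:top_k]
    return [(messages[i], float(scores[i])) for i in order]

rng = np.random.default_rng(1)
print(rank_suggestions(rng.normal(size=4), rng.normal(size=4),
                       rng.normal(size=(3, 8)),
                       ["Happy to help!", "Let me check.", "Anything else?"], top_k=2))
```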
-
Patent number: 11409497
Abstract: Aspects of the present invention allow hands-free navigation of touch-based operating systems. The system may automatically interface with a touch-based operating system and generate hands-free commands that are associated with touch-based commands. Embodiments may utilize motion and/or audio inputs to facilitate hands-free interaction with the touch-based operating system.
Type: Grant
Filed: March 24, 2020
Date of Patent: August 9, 2022
Assignee: RealWear, Inc.
Inventor: Christopher Iain Parkinson
-
Patent number: 11405514
Abstract: An image forming apparatus includes a display device that displays a plurality of setting items related to an operation of an electronic apparatus, an operation device and a touch panel to be operated by a user, and a controller that sets a value of each of the setting items on a screen of the display device according to an input made through the operation device and the touch panel, initializes the values of the respective setting items when a total reset mode is set through the operation device and the touch panel, and initializes, when an individual reset mode is set through the operation device and the touch panel and resetting of the setting items on the screen of the display device is instructed through the touch panel, the value of the setting item about which the resetting has been instructed.
Type: Grant
Filed: September 13, 2019
Date of Patent: August 2, 2022
Assignee: KYOCERA Document Solutions Inc.
Inventor: Shinichi Nakanishi
-
Patent number: 11379875
Abstract: Aspects of the subject disclosure may include, for example, storing, in a database, information associated with a first item purchased by a user, the information comprising an identification of the first item and a time of purchase of the first item; receiving web browsing data based upon monitoring, by another device, web browsing of the user; determining, based upon the web browsing data that is received, whether the user is currently browsing at a shopping website, resulting in a determination; responsive to the determination being that the user is currently browsing at the shopping website, querying the database to determine an elapsed time since the time of purchase of the first item; responsive to the elapsed time meeting a threshold, generating a message to send to the another device monitoring the web browsing, the message informing the user of a suggested second item for the buyer to purchase, the suggested second item being a replacement for the first item; and sending the message to the another device.
Type: Grant
Filed: April 30, 2020
Date of Patent: July 5, 2022
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Alexander MacDougall, Anna Lidzba, Nigel Bradley, James Carlton Bedingfield, Sr., Ari Craine, Robert Koch
-
Patent number: 11381609
Abstract: A system of multi-modal transmission of packetized data in a voice activated data packet based computer network environment is provided. A natural language processor component can parse an input audio signal to identify a request and a trigger keyword. Based on the input audio signal, a direct action application programming interface can generate a first action data structure, and a content selector component can select a content item. An interface management component can identify first and second candidate interfaces, and respective resource utilization values. The interface management component can select, based on the resource utilization values, the first candidate interface to present the content item. The interface management component can provide the first action data structure to the client computing device for rendering as audio output, and can transmit the content item converted for a first modality to deliver the content item for rendering from the selected interface.
Type: Grant
Filed: June 23, 2020
Date of Patent: July 5, 2022
Assignee: GOOGLE LLC
Inventors: Justin Lewis, Richard Rapp, Gaurav Bhaya, Robert Stets
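The interface selection step can be illustrated as a simple policy over candidate interfaces and their resource utilization values; choosing the lowest-utilization candidate is an assumed rule, since the abstract only says the selection is based on those values.

```python
def select_interface(candidates):
    """Pick the candidate interface with the lowest resource-utilization value.

    candidates: list of dicts like {"id": "smart_speaker", "utilization": 0.2}
    (field names are illustrative assumptions)
    """
    return min(candidates, key=lambda c: c["utilization"])["id"]

print(select_interface([{"id": "phone_screen", "utilization": 0.9},
                        {"id": "smart_speaker", "utilization": 0.2}]))  # -> smart_speaker
```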
-
Patent number: 11370125
Abstract: A method and apparatus for controlling a social robot includes operating an electronic output device based on social interactions between the social robot and a user. The social robot utilizes an algorithm or other logical solution process to infer a user mental state, for example a mood or desire, based on observation of the social interaction. Based on the inferred mental state, the social robot causes an action of the electronic output device to be selected. Actions may include, for example, playing a selected video clip, brewing a cup of coffee, or adjusting window blinds.
Type: Grant
Filed: May 9, 2019
Date of Patent: June 28, 2022
Assignee: Warner Bros. Entertainment Inc.
Inventors: Gregory I. Gewickey, Lewis S. Ostrover
-
Patent number: 11367439
Abstract: An electronic device capable of providing a voice-based intelligent assistance service may include: a housing; a microphone; at least one speaker; a communication circuit; a processor disposed inside the housing and operatively connected with the microphone, the speaker, and the communication circuit; and a memory operatively connected to the processor, and configured to store a plurality of application programs. The electronic device may be controlled to: collect voice data of a user based on a specified condition prior to receiving a wake-up utterance invoking a voice-based intelligent assistant service; transmit the collected voice data to an external server and request the external server to construct a prediction database configured to predict an intention of the user; and output, after receiving the wake-up utterance, a recommendation service related to the intention of the user based on at least one piece of information included in the prediction database.
Type: Grant
Filed: July 17, 2019
Date of Patent: June 21, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Seungnyun Kim, Geonsoo Kim, Hanjib Kim, Bokun Choi
-
Patent number: 11360791
Abstract: Various embodiments of the present invention relate to an electronic device and a screen control method for processing a user input by using the same, and according to the various embodiments of the present invention, the electronic device comprises: a housing; a touchscreen display located inside the housing and exposed through a first part of the housing; a microphone located inside the housing and exposed through a second part of the housing; at least one speaker located inside the housing and exposed through a third part of the housing; a wireless communication circuit located inside the housing; a processor located inside the housing and electrically connected to the touchscreen display, the microphone, the at least one speaker, and the wireless communication circuit; and a memory located inside the housing and electrically connected to the processor, wherein the memory stores a first application program including a first user interface and a second application program including a second user interface,
Type: Grant
Filed: March 27, 2018
Date of Patent: June 14, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jihyun Kim, Dongho Jang, Minkyung Hwang, Kyungtae Kim, Inwook Song, Yongjoon Jeon
-
Patent number: 11354085
Abstract: Example devices and methods are disclosed. An example device includes a memory configured to store a plurality of audio streams and an associated level of authorization for each of the plurality of audio streams. The device also includes one or more processors implemented in circuitry and communicatively coupled to the memory. The one or more processors are configured to select, based on the associated levels of authorization, a subset of the plurality of audio streams, the subset of the plurality of audio streams excluding at least one of the plurality of audio streams.
Type: Grant
Filed: July 1, 2020
Date of Patent: June 7, 2022
Assignee: Qualcomm Incorporated
Inventors: Siddhartha Goutham Swaminathan, Isaac Garcia Munoz, S M Akramus Salehin, Nils Günther Peters
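The selection logic reduces to filtering streams by the listener's authorization level. A minimal sketch with assumed integer levels and field names.

```python
def authorized_streams(streams, user_level):
    """Return only the audio streams whose required authorization the user meets.

    streams:    list of dicts like {"id": "commentary", "required_level": 2}
    user_level: integer authorization level held by the listener
    """
    return [s["id"] for s in streams if s["required_level"] <= user_level]

streams = [{"id": "ambience", "required_level": 0},
           {"id": "coach_channel", "required_level": 3}]
print(authorized_streams(streams, user_level=1))  # -> ['ambience'] (coach_channel excluded)
```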
-
Patent number: 11347301
Abstract: A method comprising causing display of information on a head mounted display that is worn by a user, receiving eye movement information associated with the user, receiving head movement information associated with the user, determining that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display, and decreasing prominence of the information on the head mounted display based, at least in part, on the determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display is disclosed.
Type: Grant
Filed: April 23, 2014
Date of Patent: May 31, 2022
Assignee: NOKIA TECHNOLOGIES OY
Inventor: Melodie Vidal
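The prominence adjustment can be sketched as a small state update driven by two consistency signals; the step size, floor, and boolean inputs are illustrative stand-ins for the eye- and head-tracking analysis.

```python
def update_prominence(prominence, eye_on_display, head_toward_display,
                      step=0.1, floor=0.2):
    """Fade HMD information when gaze and head pose suggest the user isn't viewing it.

    eye_on_display / head_toward_display: booleans derived from eye and head tracking.
    """
    if not (eye_on_display and head_toward_display):
        # Movements are inconsistent with viewing: decrease prominence, but keep a floor.
        return max(floor, prominence - step)
    # Viewing resumed: restore prominence gradually.
    return min(1.0, prominence + step)

p = 1.0
for eye, head in [(False, True), (False, False), (True, True)]:
    p = update_prominence(p, eye, head)
    print(round(p, 2))   # 0.9, 0.8, 0.9
```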
-
Patent number: 11335327
Abstract: Disclosed herein are system, method, and computer program product embodiments for a text-to-speech system. An embodiment operates by identifying a document including text, wherein the text includes both a structured portion of text and an unstructured portion of text. Both the structured and unstructured portions of the text are identified within the document, wherein the structured portion corresponds to a rich data portion that includes both a descriptor and content, and wherein the unstructured portion of the text includes alphanumeric text. A request to audibly output the document including the rich data portion is received from a user profile. A summary of the rich data portion is generated at a level of detail corresponding to the user profile. The audible version of the document, including both the alphanumeric text of the unstructured portion of the document and the generated summary, is audibly output.
Type: Grant
Filed: June 24, 2020
Date of Patent: May 17, 2022
Assignee: CAPITAL ONE SERVICES, LLC
Inventors: Galen Rafferty, Reza Farivar, Anh Truong, Jeremy Goodsitt, Vincent Pham, Austin Walters
-
Patent number: 11315378
Abstract: A method of verifying an authenticity of a printed item includes: photographing the printed item to obtain a photographic image of the printed item, retrieving reference data of the printed item, the reference data including a reference image of the printed item, determining a test noise parameter from the photographic image of the printed item, determining a reference noise parameter from the reference image, comparing the test noise parameter of the photographic image of the printed item to the reference noise parameter of the reference image, and determining an authenticity of the printed item from a result of the comparing. The determining the authenticity of the printed item from the result of the comparing may include establishing from the reference noise parameter of the reference image and the test noise parameter of the printed item.
Type: Grant
Filed: June 11, 2020
Date of Patent: April 26, 2022
Assignee: FiliGrade B.V.
Inventor: Johannes Bernardus Kerver
-
Patent number: 11312764
Abstract: VH domain, in which: (i) the amino acid residue at position 112 is one of K or Q; and/or (ii) the amino acid residue at position 89 is T; and/or (iii) the amino acid residue at position 89 is L and the amino acid residue at position 110 is one of K or Q; and (iv) in each of cases (i) to (iii), the amino acid at position 11 is preferably V; and in which said VH domain contains a C-terminal extension (X)n, in which n is 1 to 10, preferably 1 to 5, such as 1, 2, 3, 4 or 5 (and preferably 1 or 2, such as 1); and each X is an (preferably naturally occurring) amino acid residue that is independently chosen, and preferably independently chosen from the group consisting of alanine (A), glycine (G), valine (V), leucine (L) or isoleucine (I).
Type: Grant
Filed: November 26, 2019
Date of Patent: April 26, 2022
Assignee: Ablynx N.V.
Inventors: Marie-Ange Buyse, Carlo Boutton
-
Patent number: 11290590
Abstract: Disclosed are a method and apparatus for managing a distraction to a smart device based on a context-aware rule. A method of managing a distraction to a smart device based on a context-aware rule, in the present disclosure, includes the steps of setting a context-aware distraction management rule, collecting context information for applying a distraction management mode based on the set context-aware distraction management rule, determining whether to set a distraction management mode and setting the distraction management mode, collecting context information for releasing the set distraction management mode, and determining whether to release the setting of the distraction management mode and releasing the distraction management mode.
Type: Grant
Filed: November 27, 2020
Date of Patent: March 29, 2022
Assignee: Korea Advanced Institute of Science and Technology
Inventors: Uichin Lee, Inyeop Kim
-
Patent number: 11244118
Abstract: Disclosed is a dialogue management method and apparatus. The dialogue management method includes sequentially resolving an application domain to provide a service to a user, a function appropriate for an intent of a user from among functions of the application domain, and at least one slot to perform the function, adaptively determining an expression scheme of a dialogue management interface depending on a progress of the sequentially resolving, and displaying the dialogue management interface.
Type: Grant
Filed: January 28, 2020
Date of Patent: February 8, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Junhwi Choi, Sanghyun Yoo, Sangsoo Lee, Hoshik Lee
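The sequential domain, function, and slot resolution, together with an adaptive interface, might look roughly like the sketch below; the compact/detailed "expression" values are assumptions standing in for the patent's expression scheme.

```python
def dialogue_state(domain, function, slots, required_slots):
    """Report what to resolve next and how detailed the interface should be.

    Resolution proceeds domain -> function -> slots; 'expression' is an assumed
    stand-in for the adaptively determined expression scheme of the interface.
    """
    if domain is None:
        return {"resolve_next": "domain", "expression": "compact"}
    if function is None:
        return {"resolve_next": "function", "expression": "compact"}
    missing = [s for s in required_slots if s not in slots]
    if missing:
        return {"resolve_next": f"slot:{missing[0]}", "expression": "detailed"}
    return {"resolve_next": None, "expression": "detailed"}

print(dialogue_state("music", "play", {"artist": "Ibrahim Maalouf"}, ["artist", "track"]))
# -> {'resolve_next': 'slot:track', 'expression': 'detailed'}
```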
-
Patent number: 11217021
Abstract: A mixed reality system that includes a head-mounted display (HMD) that provides 3D virtual views of a user's environment augmented with virtual content. The HMD may include sensors that collect information about the user's environment (e.g., video, depth information, lighting information, etc.), and sensors that collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.). The sensors provide the information as inputs to a controller that renders frames including virtual content based at least in part on the inputs from the sensors. The HMD may display the frames generated by the controller to provide a 3D virtual view including the virtual content and a view of the user's environment for viewing by the user.
Type: Grant
Filed: March 21, 2019
Date of Patent: January 4, 2022
Assignee: Apple Inc.
Inventors: Ricardo J. Motta, Brett D. Miller, Tobias Rick, Manohar B. Srikanth
-
Patent number: 11216233
Abstract: An electronic device includes a physical user interface, a wireless communication device, and one or more processors. The one or more processors identify one or more external electronic devices operating within an environment of the electronic device. The one or more processors cause the wireless communication device to transmit content and one or more control commands causing an external electronic device to present a graphical user interface depicting the physical user interface of the electronic device. The wireless communication device then receives one or more other control commands identifying user inputs interacting with the graphical user interface at the external electronic device. The one or more processors perform one or more control operations in response to the one or more other control commands.
Type: Grant
Filed: August 6, 2019
Date of Patent: January 4, 2022
Assignee: Motorola Mobility LLC
Inventors: Rachid Alameh, Jarrett Simerson, John Gorsica
-
Patent number: 11216245
Abstract: An electronic device is provided. The electronic device includes a microphone, a touchscreen display, a processor, and a memory. The memory stores instructions, when executed, causing the processor to display a virtual keyboard including a first icon on the touchscreen display in response to a request associated with a text input to a first application which is running, execute a client module associated with a second application, based on an input to the first icon, identify a text entered through the virtual keyboard or a voice input received via the microphone, using the client module, determine an operation corresponding to the entered text and the voice input using the client module, and display a result image according to the operation on at least one of a first region of the touchscreen display or a second region of the touchscreen display.
Type: Grant
Filed: March 24, 2020
Date of Patent: January 4, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventor: Seungyup Lee
-
Patent number: 11176188
Abstract: A visualization framework based on document representation learning is described herein. The framework may first convert a free text document into word vectors using learned word embeddings. Document representations may then be determined in a fixed-dimensional semantic representation space by passing the word vectors through a trained machine learning model, wherein more related documents lie closer than less related documents in the representation space. A clustering algorithm may be applied to the document representations for a given patient to generate clusters. The framework then generates a visualization based on these clusters.
Type: Grant
Filed: January 9, 2018
Date of Patent: November 16, 2021
Assignee: Siemens Healthcare GmbH
Inventors: Halid Ziya Yerebakan, Yoshihisa Shinagawa, Parmeet Singh Bhatia, Yiqiang Zhan
-
Patent number: 11164567
Abstract: A computer system is provided. The computer system includes a memory and at least one processor configured to recognize one or more intent keywords in text provided by a user; identify an intent of the user based on the recognized intent keywords; select a workflow context based on the identified intent; determine an action request based on analysis of the text in association with the workflow context, wherein the action request comprises one or more action steps and the action steps comprise one or more data points; obtain a workspace context associated with the user; and evaluate the data points based on the workspace context.
Type: Grant
Filed: May 30, 2019
Date of Patent: November 2, 2021
Assignee: Citrix Systems, Inc.
Inventor: Chris Pavlou
-
Patent number: 11150870
Abstract: An embodiment of the present invention comprises a touch screen display, a communication circuit, a microphone, a speaker, a processor, and a memory, wherein: the memory stores a first application program that includes a first user interface, and an intelligent application program that includes a second user interface; and the memory can cause the processor to display the first user interface at the time of execution and to receive a first user input for displaying the second user interface while displaying the first user interface.
Type: Grant
Filed: September 19, 2018
Date of Patent: October 19, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventors: Yun Hee Lee, Eun A Jung, Sang Ho Chae, Ji Hyun Kim
-
Patent number: 11146857
Abstract: Methods and systems are described herein for providing streamlined access to media assets of interest to a user. The method includes determining that a supplemental viewing device, through which a user views a field of view, is directed at a first field of view. The method further involves detecting that the supplemental viewing device is now directed at a second field of view, and determining that a media consumption device is within the second field of view. A first media asset of interest to the user that is available for consumption via the media consumption device is identified, and the supplemental viewing device generates a visual indication in the second field of view. The visual indication indicates that the first media asset is available for consumption via the media consumption device, and the visual indication tracks a location of the media consumption device in the second field of view.
Type: Grant
Filed: June 3, 2020
Date of Patent: October 12, 2021
Assignee: Rovi Guides, Inc.
Inventors: Jonathan A. Logan, Alexander W. Liston, William L. Thomas, Gabriel C. Dalbec, Margret B. Schmidt, Mathew C. Burns, Ajay Kumar Gupta
-
Patent number: 11138971
Abstract: An embodiment provides a method, including: receiving, at an audio receiver of an information handling device, user voice input; identifying, using a processor, words included in the user voice input; determining, using the processor, that one of the identified words renders ambiguous a command included in the user voice input; accessing, using the processor, context data; disambiguating, using the processor, the command based on the context data; and committing, using the processor, a predetermined action according to the command. Other aspects are described and claimed.
Type: Grant
Filed: December 5, 2013
Date of Patent: October 5, 2021
Assignee: Lenovo (Singapore) Pte. Ltd.
Inventors: Peter Hamilton Wetsel, Jonathan Gaither Knox, Suzanne Marion Beaumont, Russell Speight VanBlon, Rod D. Waltermann
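Context-based disambiguation of a spoken command can be illustrated with a simple substitution table; the table and the pronoun examples are hypothetical, since the abstract only requires that context data resolve the ambiguous word before the action is committed.

```python
def disambiguate(command_words, ambiguous_word, context):
    """Replace an ambiguous word in a voice command using context data.

    context: dict of contextual facts, e.g. {"last_contact": "Alice", "location": "home"}
    Returns the resolved command, or None if the ambiguity cannot be resolved.
    """
    substitutions = {
        "her": context.get("last_contact"),
        "there": context.get("location"),
        "it": context.get("last_object"),
    }
    resolved = substitutions.get(ambiguous_word)
    if resolved is None:
        return None   # still ambiguous; ask the user instead of guessing
    return [resolved if w == ambiguous_word else w for w in command_words]

print(disambiguate(["call", "her"], "her", {"last_contact": "Alice"}))  # ['call', 'Alice']
```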
-
Patent number: 11127494
Abstract: Methods and systems for using contextual information to generate reports for image studies. One method includes determining contextual information associated with an image study, wherein at least one image included in the image study is loaded in a reporting application. The method also includes automatically selecting, with an electronic processor, a vocabulary for a natural language processing engine based on the contextual information. In addition, the method includes receiving, from a microphone, audio data and processing the audio data with the natural language processing engine using the vocabulary to generate data for a report for the image study generated using the reporting application.
Type: Grant
Filed: August 23, 2016
Date of Patent: September 21, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Marwan Sati
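Vocabulary selection keyed off study context can be sketched as a lookup from study metadata to a lexicon; the modality/body-part key and the vocabulary names are assumptions for illustration, not the patent's mapping.

```python
def select_vocabulary(context):
    """Choose a speech-recognition vocabulary from image-study context.

    context: e.g. {"modality": "CT", "body_part": "chest"} (hypothetical fields)
    Falls back to a general vocabulary when no specialized one matches.
    """
    vocabularies = {
        ("CT", "chest"): "ct_thoracic_terms",
        ("MR", "brain"): "mr_neuro_terms",
    }
    key = (context.get("modality"), context.get("body_part"))
    return vocabularies.get(key, "general_radiology_terms")

print(select_vocabulary({"modality": "CT", "body_part": "chest"}))  # -> ct_thoracic_terms
```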
-
Patent number: 11100927
Abstract: An information providing device includes circuitry configured to: acquire an uttered word which is uttered by a user and an utterance time at which the uttered word is uttered by the user; control output of offer information associated with the uttered word to the user; and restrict output of the offer information associated with the uttered word within a predetermined masking period from the utterance time of the uttered word.
Type: Grant
Filed: May 1, 2019
Date of Patent: August 24, 2021
Assignee: Toyota Jidosha Kabushiki Kaisha
Inventor: Chihiro Inaba
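The masking behaviour is essentially a per-word cooldown timer. A small sketch, assuming a 30-minute masking period and an in-memory map of utterance times; the class and method names are illustrative.

```python
from datetime import datetime, timedelta

class OfferGate:
    """Restrict offers tied to a word for a masking period after it was uttered."""

    def __init__(self, masking_period=timedelta(minutes=30)):
        self.masking_period = masking_period
        self.utterance_times = {}          # word -> last time the user said it

    def note_utterance(self, word, utterance_time):
        self.utterance_times[word] = utterance_time

    def may_output_offer(self, word, now):
        uttered_at = self.utterance_times.get(word)
        if uttered_at is not None and now - uttered_at < self.masking_period:
            return False                   # still inside the masking period
        return True

gate = OfferGate()
gate.note_utterance("ramen", datetime(2024, 1, 1, 12, 0))
print(gate.may_output_offer("ramen", datetime(2024, 1, 1, 12, 10)))  # False (masked)
print(gate.may_output_offer("ramen", datetime(2024, 1, 1, 13, 0)))   # True
```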
-
Patent number: 11086597
Abstract: The various implementations described herein include methods, devices, and systems for attending to a presenting user. In one aspect, a method is performed at an electronic device that includes an image sensor, microphones, a display, processor(s), and memory. The device (1) obtains audio signals by concurrently receiving audio data at each microphone; (2) determines based on the obtained audio signals that a person is speaking in a vicinity of the device; (3) obtains video data from the image sensor; (4) determines via the video data that the person is not within a field of view of the image sensor; (5) reorients the electronic device based on differences in the received audio data; (6) after reorienting the electronic device, obtains second video data from the image sensor and determines that the person is within the field of view; and (7) attends to the person by directing the display toward the person.
Type: Grant
Filed: August 14, 2018
Date of Patent: August 10, 2021
Assignee: GOOGLE LLC
Inventors: Yuan Yuan, Johan Schalkwyk, Kenneth Mixter
-
Patent number: 11081114
Abstract: The control apparatus includes: a calculation unit configured to control a voice interaction apparatus including a speech section detector, the speech section detector being configured to identify whether an acquired voice includes a speech made by a target person by a set identification level and perform speech section detection, in which the calculation unit instructs, when an estimation result indicating that it is highly likely that the speech made by the target person is included in the acquired voice has been acquired from a voice recognition server, the voice interaction apparatus to change a setting in such a way as to lower the identification level of the speech section detector, and to perform communication with the voice recognition server by speech section detection in accordance with the identification level after the change.
Type: Grant
Filed: December 17, 2019
Date of Patent: August 3, 2021
Assignee: Toyota Jidosha Kabushiki Kaisha
Inventor: Narimasa Watanabe
-
Patent number: 11068125
Abstract: On a computing device, an overview mode is provided to present overview windows of all applications currently running on the computing device. When one or more applications are running in a windowed mode, a first overview window is generated for each of the one or more applications running in the windowed mode; when one or more applications are running in a full-screen mode, a second overview window is generated for each of the one or more applications running in the full-screen mode. The one or more first overview windows in the first space can be arranged in one or more rows in a first overview space, and the one or more second overview windows in the second space in a stack in a second overview space. The arranged overview windows may then be displayed in the overview mode of the computing device.
Type: Grant
Filed: October 27, 2016
Date of Patent: July 20, 2021
Assignee: Google LLC
Inventors: Alexander Friedrich Kuscher, Jennifer Shien-Ming Chen, Sebastien Vincent Gabriel
-
Patent number: 10983673
Abstract: An operation screen display device includes: a display part; an operation part; and a processor that performs: making the display part display setting items for setting an operation condition of a job in an operation screen before starting the job; receiving a user manual input to specify one or more of the setting items from the operation part, the setting items being displayed in the operation screen; and receiving a user speech input to specify one or more of the setting items from a speech input device, the setting items being displayed in the operation screen. Upon receiving the user speech input, the processor makes the display part hide one or more of the setting items displayed in the operation screen, the one or more of the setting items being suitable for speech input.
Type: Grant
Filed: May 22, 2019
Date of Patent: April 20, 2021
Assignee: KONICA MINOLTA, INC.
Inventor: Masaki Nakata
-
Patent number: 10977484
Abstract: Described herein is a smart presentation system and method. Information identifying participant(s) associated with an electronic presentation document and information regarding a topic of the presentation is received. Participant profile information associated with the participant(s) is retrieved using the received identification information. An audience profile relative to the topic of the presentation is determined using the retrieved participant profile information and received information regarding the topic of the presentation. A suggestion for the presentation is identified using an algorithm employing stored historical data, the determined audience profile and received information regarding the topic of the presentation. Further described herein is a presentation adaptation system and method. While presenting a presentation to participant(s), a cognitive expression of participant(s) is detected.
Type: Grant
Filed: March 19, 2018
Date of Patent: April 13, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Varun Khanna, Chandra Sekhar Annambhotla
-
Patent number: 10976890
Abstract: In an augmented reality and/or a virtual reality system, detected commands may be intelligently batched to preserve the relative order of the batched commands while maintaining a fluid virtual experience for the user. Commands detected in the virtual environment may be assigned to a batch command, of a plurality of batch commands, based on a temporal window in which the command(s) are detected, based on an operational type associated with the command(s), or based on a spatial position at which the command is detected in the virtual environment. The commands included in a batched set of commands may be executed in response to an un-do command and/or a re-do command and/or a re-play command.
Type: Grant
Filed: May 19, 2020
Date of Patent: April 13, 2021
Assignee: GOOGLE LLC
Inventor: Ian MacGillivray
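One way to picture the batching is a greedy grouping of time-ordered commands by temporal gap and operation type; the concrete rule and the two-second window are illustrative, since the abstract also allows grouping by spatial position. An un-do would then pop and revert a whole batch rather than a single command.

```python
def batch_commands(commands, window=2.0):
    """Group detected commands into batches by temporal window and operation type.

    commands: list of (timestamp_seconds, op_type, payload) tuples, time-ordered.
    A new batch starts when the gap exceeds `window` or the operation type changes.
    """
    batches = []
    for t, op, payload in commands:
        if batches and op == batches[-1]["op"] and t - batches[-1]["end"] <= window:
            batches[-1]["items"].append(payload)   # extend the current batch
            batches[-1]["end"] = t
        else:
            batches.append({"op": op, "items": [payload], "end": t})
    return batches

cmds = [(0.1, "paint", "stroke1"), (0.8, "paint", "stroke2"), (5.0, "move", "cube")]
print(batch_commands(cmds))   # two batches: the paint strokes, then the move
```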
-
Patent number: 10964321
Abstract: The present disclosure involves systems, software, and computer implemented methods for providing voice-enabled human tasks in process modeling. One example method includes receiving a deployment request for a workflow that includes a human task. The workflow is deployed to a workflow engine in response to the deployment request. An instance of the workflow is created in response to a request from a client application. The instance of the workflow is processed, including execution of the human task. The human task is added to a task inbox of an assignee of the human task. A request is received from the assignee to access the task inbox from a telecommunications system. Voice guidance is provided, to the assignee, that requests assignee input. Voice input from the assignee is processed for completion of the human task. Workflow context for the human task is updated based on the received voice input.
Type: Grant
Filed: December 6, 2018
Date of Patent: March 30, 2021
Assignee: SAP SE
Inventors: Vikas Rohatgi, Abhinav Kumar
-
Patent number: 10962785
Abstract: An electronic device is disclosed. The electronic device comprises a first camera and a second camera, a microphone, a display, and a processor electrically connected to the first camera, the second camera, the microphone, and the display, wherein the processor can be set to display, on the display, a user interface (UI) including a plurality of objects, acquire user gaze information from the first camera, activate, among the plurality of objects, a first object corresponding to the gaze information, determine at least one method of input, corresponding to a type of the activated first object, between a gesture input acquired from the second camera and a voice input acquired by the microphone, and execute the function corresponding to the input for the first object while an activated state of the first object is maintained, if the input of the determined method is applicable to the first object. In addition, various embodiments identified through the specification are possible.
Type: Grant
Filed: December 18, 2017
Date of Patent: March 30, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sang Woong Hwang, Young Ah Seong, Say Jang, Seung Hwan Choi
-
Patent number: 10963047
Abstract: An augmented mirror for use with one or more user objects, the augmented mirror comprising: a partially silvered mirrored surface; a screen underneath the mirrored surface; a camera including a depth scanner; and a computer module for receiving data from the camera and for providing graphical images to the screen; wherein the sole means of communication with the computer module is via the one or more user objects.
Type: Grant
Filed: December 19, 2016
Date of Patent: March 30, 2021
Assignee: CONOPCO, INC.
Inventors: Steven David Benford, Brian Patrick Newby, Adam Thomas Russell, Katharine Jane Shaw, Paul Robert Tennent, Robert Lindsay Treloar, Michel François Valstar, Ruediger Zillmer
-
Patent number: 10831982
Abstract: One embodiment includes a portable reading device for reading a paginated e-book, with at least a page including a section including text linked to an illustration. The device can lay out the section by keeping the text with the illustration to be displayed in one screen, and maintaining the pagination of the e-book if the page is displayed in more than one screen. Another embodiment includes reading materials with a text sub file with texts, an illustration sub file with illustrations, and a logic sub file with rules on displaying the materials. Either the text or the illustration sub file includes position information linking at least an illustration to a corresponding piece of text. One embodiment includes reading materials with a logic sub file that can analyze an attribute of, and provide a response to, a reader. Another embodiment can be an eyewear presenting device, allowing for hands-free presenting.
Type: Grant
Filed: March 24, 2016
Date of Patent: November 10, 2020
Assignee: IPLContent, LLC
Inventors: Chi Fai Ho, Peter P. Tong
-
Patent number: 10810413
Abstract: A wakeup method based on lip reading is provided, the wakeup method including: acquiring a motion graph of a user's lips; determining whether the acquired motion graph matches a preset motion graph; and waking up a voice interaction function in response to the acquired motion graph matching the preset motion graph.
Type: Grant
Filed: October 19, 2018
Date of Patent: October 20, 2020
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventor: Liang Gao
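Matching an acquired lip-motion graph against a preset one could be as simple as a distance test between two landmark-displacement arrays; the array representation and threshold below are assumptions, not the patent's matcher.

```python
import numpy as np

def lips_match(motion_graph, preset_graph, tolerance=0.15):
    """Compare an acquired lip-motion trace against a preset wake-up trace.

    Both graphs are (n_frames, n_landmarks) arrays of normalized lip-landmark
    displacements; mean squared distance below `tolerance` counts as a match.
    """
    if motion_graph.shape != preset_graph.shape:
        return False
    return float(np.mean((motion_graph - preset_graph) ** 2)) < tolerance

def maybe_wake(motion_graph, preset_graph, wake_callback):
    # Wake the voice interaction function only when the motion graphs match.
    if lips_match(motion_graph, preset_graph):
        wake_callback()

preset = np.zeros((10, 4))
observed = preset + 0.05
maybe_wake(observed, preset, lambda: print("voice interaction activated"))
```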