Audio Input For On-screen Manipulation (e.g., Voice-Controlled GUI) Patents (Class 715/728)
  • Patent number: 11409497
    Abstract: Aspects of the present invention allow hands-free navigation of touch-based operating systems. The system may automatically interface with a touch-based operating system and generate hands-free commands that are associated with touch-based commands. Embodiments may utilize motion and/or audio inputs to facilitate hands-free interaction with the touch-based operating system.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: August 9, 2022
    Assignee: RealWear, Inc.
    Inventor: Christopher Iain Parkinson
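The hands-free navigation described above boils down to translating recognized speech into the touch commands the operating system already understands. A minimal sketch of that mapping, in Python — the phrase table, gesture names, and coordinates are invented for illustration and are not RealWear's implementation:

```python
# Hypothetical voice-phrase -> synthetic-touch-event table.
TOUCH_COMMANDS = {
    "select item one": ("tap", (120, 340)),
    "scroll down": ("swipe", (240, 600, 240, 200)),
    "go back": ("tap", (40, 40)),
}

def voice_to_touch(phrase):
    """Translate a recognized voice phrase into a synthetic touch event.

    Returns None when the phrase has no touch equivalent, so the caller
    can fall back to a help prompt instead of injecting a gesture.
    """
    return TOUCH_COMMANDS.get(phrase.strip().lower())
```

A real system would build this table automatically by interrogating the on-screen UI rather than hard-coding coordinates.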
  • Patent number: 11405514
    Abstract: An image forming apparatus includes a display device that displays a plurality of setting items related to an operation of an electronic apparatus, an operation device and a touch panel to be operated by a user, and a controller that sets a value of each of the setting items on a screen of the display device according to an input made through the operation device and the touch panel, initializes the values of the respective setting items, when a total reset mode is set through the operation device and the touch panel, and initializes, when an individual reset mode is set through the operation device and the touch panel, and resetting of the setting items on the screen of the display device is instructed through the touch panel, the value of the setting item about which the resetting has been instructed.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: August 2, 2022
    Assignee: KYOCERA Document Solutions Inc.
    Inventor: Shinichi Nakanishi
  • Patent number: 11381609
    Abstract: A system of multi-modal transmission of packetized data in a voice activated data packet based computer network environment is provided. A natural language processor component can parse an input audio signal to identify a request and a trigger keyword. Based on the input audio signal, a direct action application programming interface can generate a first action data structure, and a content selector component can select a content item. An interface management component can identify first and second candidate interfaces, and respective resource utilization values. The interface management component can select, based on the resource utilization values, the first candidate interface to present the content item. The interface management component can provide the first action data structure to the client computing device for rendering as audio output, and can transmit the content item converted for a first modality to deliver the content item for rendering from the selected interface.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: July 5, 2022
    Assignee: GOOGLE LLC
    Inventors: Justin Lewis, Richard Rapp, Gaurav Bhaya, Robert Stets
  • Patent number: 11379875
    Abstract: Aspects of the subject disclosure may include, for example, storing, in a database, information associated with a first item purchased by a user, the information comprising an identification of the first item and a time of purchase of the first item; receiving web browsing data based upon monitoring, by another device, web browsing of the user; determining, based upon the web browsing data that is received, whether the user is currently browsing at a shopping website, resulting in a determination; responsive to the determination being that the user is currently browsing at the shopping website, querying the database to determine an elapsed time since the time of purchase of the first item; responsive to the elapsed time meeting a threshold, generating a message to send to the another device monitoring the web browsing, the message informing the user of a suggested second item for the buyer to purchase, the suggested second item being a replacement for the first item; and sending the message to the another device.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: July 5, 2022
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Alexander MacDougall, Anna Lidzba, Nigel Bradley, James Carlton Bedingfield, Sr., Ari Craine, Robert Koch
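The core trigger in this abstract is a simple temporal test: once the elapsed time since purchase meets a threshold, a replacement suggestion is generated. A hedged sketch of that check (field names and message format are hypothetical, not AT&T's):

```python
from datetime import datetime, timedelta

def suggest_replacement(purchase_time, now, threshold, replacement):
    """Return a suggestion message once the elapsed time since purchase
    meets the threshold; return None otherwise, mirroring the abstract's
    "responsive to the elapsed time meeting a threshold" step."""
    if now - purchase_time >= threshold:
        return f"Time to re-order: {replacement}"
    return None
```

In the patented flow this check runs only after the browsing-context test (user is at a shopping site) has already passed.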
  • Patent number: 11370125
    Abstract: A method and apparatus for controlling a social robot includes operating an electronic output device based on social interactions between the social robot and a user. The social robot utilizes an algorithm or other logical solution process to infer a user mental state, for example a mood or desire, based on observation of the social interaction. Based on the inferred mental state, the social robot causes an action of the electronic output device to be selected. Actions may include, for example, playing a selected video clip, brewing a cup of coffee, or adjusting window blinds.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: June 28, 2022
    Assignee: Warner Bros. Entertainment Inc.
    Inventors: Gregory I. Gewickey, Lewis S. Ostrover
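The selection step in this abstract — inferred mental state in, output-device action out — can be pictured as a lookup once the inference is done. In this sketch the inference itself is stubbed out as a given label; the state names and actions are invented, not Warner Bros.' mapping:

```python
# Hypothetical mental-state -> output-device-action table.
ACTIONS = {
    "tired": "brew_coffee",
    "bored": "play_video_clip",
    "glare_annoyed": "adjust_window_blinds",
}

def select_action(inferred_state, default="do_nothing"):
    """Pick an electronic-output-device action for an inferred user
    mental state, falling back to a no-op for unknown states."""
    return ACTIONS.get(inferred_state, default)
```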
  • Patent number: 11367439
    Abstract: An electronic device capable of providing a voice-based intelligent assistance service may include: a housing; a microphone; at least one speaker; a communication circuit; a processor disposed inside the housing and operatively connected with the microphone, the speaker, and the communication circuit; and a memory operatively connected to the processor, and configured to store a plurality of application programs. The electronic device may be controlled to: collect voice data of a user based on a specified condition prior to receiving a wake-up utterance invoking a voice-based intelligent assistant service; transmit the collected voice data to an external server and request the external server to construct a prediction database configured to predict an intention of the user; and output, after receiving the wake-up utterance, a recommendation service related to the intention of the user based on at least one piece of information included in the prediction database.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: June 21, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seungnyun Kim, Geonsoo Kim, Hanjib Kim, Bokun Choi
  • Patent number: 11360791
    Abstract: Various embodiments of the present invention relate to an electronic device and a screen control method for processing a user input by using the same, and according to the various embodiments of the present invention, the electronic device comprises: a housing; a touchscreen display located inside the housing and exposed through a first part of the housing; a microphone located inside the housing and exposed through a second part of the housing; at least one speaker located inside the housing and exposed through a third part of the housing; a wireless communication circuit located inside the housing; a processor located inside the housing and electrically connected to the touchscreen display, the microphone, the at least one speaker, and the wireless communication circuit; and a memory located inside the housing and electrically connected to the processor, wherein the memory stores a first application program including a first user interface and a second application program including a second user interface,
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: June 14, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jihyun Kim, Dongho Jang, Minkyung Hwang, Kyungtae Kim, Inwook Song, Yongjoon Jeon
  • Patent number: 11354085
    Abstract: Example devices and methods are disclosed. An example device includes a memory configured to store a plurality of audio streams and an associated level of authorization for each of the plurality of audio streams. The device also includes one or more processors implemented in circuitry and communicatively coupled to the memory. The one or more processors are configured to select, based on the associated levels of authorization, a subset of the plurality of audio streams, the subset of the plurality of audio streams excluding at least one of the plurality of audio streams.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: June 7, 2022
    Assignee: Qualcomm Incorporated
    Inventors: Siddhartha Goutham Swaminathan, Isaac Garcia Munoz, S M Akramus Salehin, Nils Günther Peters
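The subset selection in this abstract is a straightforward filter: each stored audio stream carries an authorization level, and streams above the caller's level are excluded. A minimal illustration (the pair encoding is an assumption; the patent does not specify a data layout):

```python
def select_streams(streams, user_level):
    """Select the subset of audio streams whose required authorization
    level does not exceed the user's level, excluding the rest — the
    behavior the abstract describes.

    `streams` is a list of (stream_id, required_level) pairs.
    """
    return [sid for sid, required in streams if required <= user_level]
```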
  • Patent number: 11347301
    Abstract: A method comprising causing display of information on a head mounted display that is worn by a user, receiving eye movement information associated with the user, receiving head movement information associated with the user, determining that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display, and decreasing prominence of the information on the head mounted display based, at least in part, on the determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display is disclosed.
    Type: Grant
    Filed: April 23, 2014
    Date of Patent: May 31, 2022
    Assignee: NOKIA TECHNOLOGIES OY
    Inventor: Melodie Vidal
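The decision in this abstract reduces to: if eye movement and head movement together are inconsistent with the user viewing the display, lower the information's prominence. A simplified reading with the two signals collapsed to booleans (the real inputs are richer movement data, and the step size is invented):

```python
def adjust_prominence(prominence, eye_on_display, head_on_display, step=0.2):
    """Decrease HMD information prominence when the eye and head
    movement signals are inconsistent with the user viewing the
    display; clamp at zero so prominence never goes negative."""
    if not (eye_on_display and head_on_display):
        return max(0.0, prominence - step)
    return prominence
```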
  • Patent number: 11335327
    Abstract: Disclosed herein are system, method, and computer program product embodiments for a text-to-speech system. An embodiment operates by identifying a document including text, wherein the text includes both a structured portion of text, and an unstructured portion of text. Both the structured portion and unstructured portions of the text are identified within the document rich data, wherein the structured portion corresponds to a rich data portion that includes both a descriptor and content, and wherein an unstructured portion of the text includes alphanumeric text. A request to audibly output the document including the rich data portion is received from a user profile. A summary of the rich data portion is generated at a level of detail corresponding to the user profile. The audible version of the document including both the alphanumeric text of the unstructured portion of the document and the generated summary is audibly output.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: May 17, 2022
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: Galen Rafferty, Reza Farivar, Anh Truong, Jeremy Goodsitt, Vincent Pham, Austin Walters
  • Patent number: 11315378
    Abstract: A method of verifying an authenticity of a printed item includes: photographing the printed item to obtain a photographic image of the printed item, retrieving reference data of the printed item, the reference data including a reference image of the printed item, determining a test noise parameter from the photographic image of the printed item, determining a reference noise parameter from the reference image, comparing the test noise parameter of the photographic image of the printed item to the reference noise parameter of the reference image, and determining an authenticity of the printed item from a result of the comparing. The determining the authenticity of the printed item from the result of the comparing may include establishing from the reference noise parameter of the reference image and the test noise parameter of the printed item.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: April 26, 2022
    Assignee: FiliGrade B.V.
    Inventor: Johannes Bernardus Kerver
  • Patent number: 11312764
    Abstract: VH domain, in which: (i) the amino acid residue at position 112 is one of K or Q; and/or (ii) the amino acid residue at position 89 is T; and/or (iii) the amino acid residue at position 89 is L and the amino acid residue at position 110 is one of K or Q; and (iv) in each of cases (i) to (iii), the amino acid at position 11 is preferably V; and in which said VH domain contains a C-terminal extension (X)n, in which n is 1 to 10, preferably 1 to 5, such as 1, 2, 3, 4 or 5 (and preferably 1 or 2, such as 1); and each X is an (preferably naturally occurring) amino acid residue that is independently chosen, and preferably independently chosen from the group consisting of alanine (A), glycine (G), valine (V), leucine (L) or isoleucine (I).
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: April 26, 2022
    Assignee: Ablynx N.V.
    Inventors: Marie-Ange Buyse, Carlo Boutton
  • Patent number: 11290590
    Abstract: Disclosed are a method and apparatus for managing a distraction to a smart device based on a context-aware rule. A method of managing a distraction to a smart device based on a context-aware rule, in the present disclosure, includes the steps of setting a context-aware distraction management rule, collecting context information for applying a distraction management mode based on the set context-aware distraction management rule, determining whether to set a distraction management mode and setting the distraction management mode, collecting context information for releasing the set distraction management mode, and determining whether to release the setting of the distraction management mode and releasing the distraction management mode.
    Type: Grant
    Filed: November 27, 2020
    Date of Patent: March 29, 2022
    Assignee: Korea Advanced Institute of Science and Technology
    Inventors: Uichin Lee, Inyeop Kim
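Setting the distraction-management mode from a context-aware rule, as the abstract describes, amounts to matching collected context against the rule's conditions. A sketch under the assumption that a rule is a set of required key/value pairs (the field names are hypothetical):

```python
def should_enable_dnd(rule, context):
    """Return True when every condition in the context-aware rule
    matches the collected context, i.e. the distraction-management
    mode should be set; extra context keys are ignored."""
    return all(context.get(key) == value for key, value in rule.items())
```

Releasing the mode, per the abstract, is the symmetric check run against freshly collected context.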
  • Patent number: 11244118
    Abstract: Disclosed is a dialogue management method and apparatus. The dialogue management method includes sequentially resolving an application domain to provide a service to a user, a function appropriate for an intent of a user from among functions of the application domain, and at least one slot to perform the function, adaptively determining an expression scheme of a dialogue management interface depending on a progress of the sequentially resolving, and displaying the dialogue management interface.
    Type: Grant
    Filed: January 28, 2020
    Date of Patent: February 8, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Junhwi Choi, Sanghyun Yoo, Sangsoo Lee, Hoshik Lee
  • Patent number: 11216245
    Abstract: An electronic device is provided. The electronic device includes a microphone, a touchscreen display, a processor, and a memory. The memory stores instructions, when executed, causing the processor to display a virtual keyboard including a first icon on the touchscreen display in response to a request associated with a text input to a first application which is running, execute a client module associated with a second application, based on an input to the first icon, identify a text entered through the virtual keyboard or a voice input received via the microphone, using the client module, determine an operation corresponding to the entered text and the voice input using the client module, and display a result image according to the operation on at least one region between a first region of the touchscreen display or a second region of the touchscreen display.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: January 4, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Seungyup Lee
  • Patent number: 11216233
    Abstract: An electronic device includes a physical user interface, a wireless communication device, and one or more processors. The one or more processors identify one or more external electronic devices operating within an environment of the electronic device. The one or more processors cause the wireless communication device to transmit content and one or more control commands causing an external electronic device to present a graphical user interface depicting the physical user interface of the electronic device. The wireless communication device then receives one or more other control commands identifying user inputs interacting with the graphical user interface at the external electronic device. The one or more processors perform one or more control operations in response to the one or more other control commands.
    Type: Grant
    Filed: August 6, 2019
    Date of Patent: January 4, 2022
    Assignee: Motorola Mobility LLC
    Inventors: Rachid Alameh, Jarrett Simerson, John Gorsica
  • Patent number: 11217021
    Abstract: A mixed reality system that includes a head-mounted display (HMD) that provides 3D virtual views of a user's environment augmented with virtual content. The HMD may include sensors that collect information about the user's environment (e.g., video, depth information, lighting information, etc.), and sensors that collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.). The sensors provide the information as inputs to a controller that renders frames including virtual content based at least in part on the inputs from the sensors. The HMD may display the frames generated by the controller to provide a 3D virtual view including the virtual content and a view of the user's environment for viewing by the user.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: January 4, 2022
    Assignee: Apple Inc.
    Inventors: Ricardo J. Motta, Brett D. Miller, Tobias Rick, Manohar B. Srikanth
  • Patent number: 11176188
    Abstract: A visualization framework based on document representation learning is described herein. The framework may first convert a free text document into word vectors using learning word embeddings. Document representations may then be determined in a fixed-dimensional semantic representation space by passing the word vectors through a trained machine learning model, wherein more related documents lie closer than less related documents in the representation space. A clustering algorithm may be applied to the document representations for a given patient to generate clusters. The framework then generates a visualization based on these clusters.
    Type: Grant
    Filed: January 9, 2018
    Date of Patent: November 16, 2021
    Assignee: Siemens Healthcare GmbH
    Inventors: Halid Ziya Yerebakan, Yoshihisa Shinagawa, Parmeet Singh Bhatia, Yiqiang Zhan
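The pipeline in this abstract — word vectors in, per-document representations out, then clustering — can be sketched end to end. This toy version uses mean pooling and nearest-centroid assignment; the patented framework uses learned embeddings, a trained model for the representation space, and a proper clustering algorithm:

```python
def doc_vector(words, embeddings):
    """Mean-pool the word vectors of a document into one fixed-
    dimensional representation; unknown words are skipped."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    dim = len(next(iter(embeddings.values())))
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def assign_cluster(vec, centroids):
    """Assign a document representation to its nearest centroid by
    squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)), key=lambda i: dist2(vec, centroids[i]))
```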
  • Patent number: 11164567
    Abstract: A computer system is provided. The computer system includes a memory and at least one processor configured to recognize one or more intent keywords in text provided by a user; identify an intent of the user based on the recognized intent keywords; select a workflow context based on the identified intent; determine an action request based on analysis of the text in association with the workflow context, wherein the action request comprises one or more action steps and the action steps comprise one or more data points; obtaining a workspace context associated with the user; and evaluate the data points based on the workspace context.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: November 2, 2021
    Assignee: Citrix Systems, Inc.
    Inventor: Chris Pavlou
  • Patent number: 11150870
    Abstract: An embodiment of the present invention comprises a touch screen display, a communication circuit, a microphone, a speaker, a processor, and a memory. Wherein: the memory stores a first application program that includes a first user interface, and an intelligent application program that includes a second user interface; and the memory can cause the processor to display the first user interface at the time of execution and to receive a first user input for displaying the second user interface while displaying the first user interface.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: October 19, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yun Hee Lee, Eun A Jung, Sang Ho Chae, Ji Hyun Kim
  • Patent number: 11146857
    Abstract: Methods and systems are described herein for providing streamlined access to media assets of interest to a user. The method includes determining that a supplemental viewing device, through which a user views a field of view, is directed at a first field of view. The method further involves detecting that the supplemental viewing device is now directed at a second field of view, and determining that a media consumption device is within the second field of view. A first media asset of interest to the user that is available for consumption via the media consumption device is identified, and the supplemental viewing device generates a visual indication in the second field of view. The visual indication indicates that the first media asset is available for consumption via the media consumption device, and the visual indication tracks a location of the media consumption device in the second field of view.
    Type: Grant
    Filed: June 3, 2020
    Date of Patent: October 12, 2021
    Assignee: Rovi Guides, Inc.
    Inventors: Jonathan A. Logan, Alexander W. Liston, William L. Thomas, Gabriel C. Dalbec, Margret B. Schmidt, Mathew C. Burns, Ajay Kumar Gupta
  • Patent number: 11138971
    Abstract: An embodiment provides a method, including: receiving, at an audio receiver of an information handling device, user voice input; identifying, using a processor, words included in the user voice input; determining, using the processor, one of the identified words renders ambiguous a command included in the user voice input; accessing, using the processor, context data; disambiguating, using the processor, the command based on the context data; and committing, using the processor, a predetermined action according to the command. Other aspects are described and claimed.
    Type: Grant
    Filed: December 5, 2013
    Date of Patent: October 5, 2021
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: Peter Hamilton Wetsel, Jonathan Gaither Knox, Suzanne Marion Beaumont, Russell Speight VanBlon, Rod D. Waltermann
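The disambiguation step this abstract describes — an identified word renders the command ambiguous, so context data resolves it — can be illustrated with the classic pronoun case. The context shape here is an assumption for illustration, not Lenovo's design:

```python
def disambiguate(command_word, context):
    """Resolve an ambiguous word in a voice command using context data,
    e.g. 'play it' -> the most recently mentioned media item. Words
    that are not ambiguous pass through unchanged."""
    if command_word == "it":
        return context.get("last_mentioned")
    return command_word
```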
  • Patent number: 11127494
    Abstract: Methods and systems for using contextual information to generate reports for image studies. One method includes determining contextual information associated with an image study wherein at least one image included in the image study loaded in a reporting application. The method also includes automatically selecting, with an electronic processor, a vocabulary for a natural language processing engine based on the contextual information. In addition, the method includes receiving, from a microphone, audio data and processing the audio data with the natural language processing engine using the vocabulary to generate data for a report for the image study generated using the reporting application.
    Type: Grant
    Filed: August 23, 2016
    Date of Patent: September 21, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Marwan Sati
  • Patent number: 11100927
    Abstract: An information providing device includes circuitry configured to: acquire an uttered word which is uttered by a user and an utterance time at which the uttered word is uttered by the user; control output of offer information associated with the uttered word to the user; and restrict output of the offer information associated with the uttered word within a predetermined masking period from the utterance time of the uttered word.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: August 24, 2021
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventor: Chihiro Inaba
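The masking behavior in this abstract is a single temporal guard: offer information tied to an uttered word is suppressed until the masking period measured from the utterance time has elapsed. A sketch (times as seconds since epoch; the representation is an assumption):

```python
def should_offer(utterance_time, now, masking_period):
    """Return False while still inside the masking period from the
    utterance time — offer output is restricted — and True once the
    period has elapsed."""
    return (now - utterance_time) >= masking_period
```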
  • Patent number: 11086597
    Abstract: The various implementations described herein include methods, devices, and systems for attending to a presenting user. In one aspect, a method is performed at an electronic device that includes an image sensor, microphones, a display, processor(s), and memory. The device (1) obtains audio signals by concurrently receiving audio data at each microphone; (2) determines based on the obtained audio signals that a person is speaking in a vicinity of the device; (3) obtains video data from the image sensor; (4) determines via the video data that the person is not within a field of view of the image sensor; (5) reorients the electronic device based on differences in the received audio data; (6) after reorienting the electronic device, obtains second video data from the image sensor and determines that the person is within the field of view; and (7) attends to the person by directing the display toward the person.
    Type: Grant
    Filed: August 14, 2018
    Date of Patent: August 10, 2021
    Assignee: GOOGLE LLC
    Inventors: Yuan Yuan, Johan Schalkwyk, Kenneth Mixter
  • Patent number: 11081114
    Abstract: The control apparatus includes: a calculation unit configured to control a voice interaction apparatus including a speech section detector, the speech section detector being configured to identify whether an acquired voice includes a speech made by a target person by a set identification level and perform speech section detection, in which the calculation unit instructs, when an estimation result indicating that it is highly likely that the speech made by the target person is included in the acquired voice has been acquired from a voice recognition server, the voice interaction apparatus to change a setting in such a way as to lower the identification level of the speech section detector, and to perform communication with the voice recognition server by speech section detection in accordance with the identification level after the change.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: August 3, 2021
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventor: Narimasa Watanabe
  • Patent number: 11068125
    Abstract: On a computing device, an overview mode is provided to present overview windows of all applications currently running on the computing device. When one or more applications are running in a windowed mode, a first overview window is generated for each of the one or more applications running in the windowed mode; when one or more applications are running in a full-screen mode, a second overview window is generated for each of the one or more applications running in the full-screen mode. The one or more first overview windows in the first space can be arranged in one or more rows in a first overview space, and the one or more second overview windows in the second space in a stack in a second overview space. The arranged overview windows may then be displayed in the overview mode of the computing device.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: July 20, 2021
    Assignee: Google LLC
    Inventors: Alexander Friedrich Kuscher, Jennifer Shien-Ming Chen, Sebastien Vincent Gabriel
  • Patent number: 10983673
    Abstract: An operation screen display device includes: a display part; an operation part; and a processor that performs: making the display part display setting items for setting an operation condition of a job in an operation screen before starting the job; receiving a user manual input to specify one or more of the setting items from the operation part, the setting items being displayed in the operation screen; and receiving a user speech input to specify one or more of the setting items from a speech input device, the setting items being displayed in the operation screen. Upon receiving the user speech input, the processor makes the display part hide one or more of the setting items displayed in the operation screen, the one or more of the setting items being suitable for speech input.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: April 20, 2021
    Assignee: KONICA MINOLTA, INC.
    Inventor: Masaki Nakata
  • Patent number: 10976890
    Abstract: In an augmented reality and/or a virtual reality system, detected commands may be intelligently batched to preserve the relative order of the batched commands while maintaining a fluid virtual experience for the user. Commands detected in the virtual environment may be assigned to a batch command, of a plurality of batch commands, based on a temporal window in which the command(s) are detected, based on an operational type associated with the command(s), or based on a spatial position at which the command is detected in the virtual environment. The commands included in a batched set of commands may be executed in response to an un-do command and/or a re-do command and/or a re-play command.
    Type: Grant
    Filed: May 19, 2020
    Date of Patent: April 13, 2021
    Assignee: GOOGLE LLC
    Inventor: Ian MacGillivray
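Of the three batching criteria the abstract lists, the temporal-window one is the easiest to sketch: a command joins the current batch if it arrives within the window of that batch's first command, and relative order is preserved inside each batch. The window semantics here are one plausible reading, not necessarily Google's:

```python
def batch_by_time(commands, window):
    """Group (timestamp, command) pairs into ordered batches by a
    temporal window anchored at each batch's first command."""
    batches = []
    for ts, cmd in sorted(commands):
        if batches and ts - batches[-1][0][0] <= window:
            batches[-1].append((ts, cmd))
        else:
            batches.append([(ts, cmd)])
    return [[cmd for _, cmd in batch] for batch in batches]
```

An un-do or re-do then operates on a whole batch at once rather than on individual commands.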
  • Patent number: 10977484
    Abstract: Described herein is a smart presentation system and method. Information identifying participant(s) associated with an electronic presentation document and information regarding a topic of the presentation is received. Participant profile information associated with the participant(s) is retrieved using the received identification information. An audience profile relative to the topic of the presentation is determined using the retrieved participant profile information and received information regarding the topic of the presentation. A suggestion for the presentation is identified using an algorithm employing stored historical data, the determined audience profile and received information regarding the topic of the presentation. Further described herein is a presentation adaptation system and method. While presenting a presentation to participant(s), a cognitive expression of participant(s) is detected.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: April 13, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Varun Khanna, Chandra Sekhar Annambhotla
  • Patent number: 10962785
    Abstract: An electronic device is disclosed. The electronic device comprises a first camera and a second camera, a microphone, a display, and a processor electrically connected to the first camera, the second camera, the microphone, and the display, wherein the processor can be set to display, on the display, a user interface (UI) including a plurality of objects, acquire user gaze information from the first camera, activate, among the plurality of objects, a first object corresponding to the gaze information, determine at least one method of input, corresponding to a type of the activated first object, between a gesture input acquired from the second camera and a voice input acquired by the microphone, and execute the function corresponding to the input for the first object while an activated state of the first object is maintained, if the input of the determined method is applicable to the first object. In addition, various embodiments identified through the specification are possible.
    Type: Grant
    Filed: December 18, 2017
    Date of Patent: March 30, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sang Woong Hwang, Young Ah Seong, Say Jang, Seung Hwan Choi
  • Patent number: 10963047
    Abstract: An augmented mirror for use with one or more user objects, the augmented mirror comprising: a partially silvered mirrored surface; a screen underneath the mirrored surface; a camera including a depth scanner; and a computer module for receiving data from the camera and for providing graphical images to the screen; wherein the sole means of communication with the computer module is via the one or more user objects.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: March 30, 2021
    Assignee: CONOPCO, INC.
    Inventors: Steven David Benford, Brian Patrick Newby, Adam Thomas Russell, Katharine Jane Shaw, Paul Robert Tennent, Robert Lindsay Treloar, Michel François Valstar, Ruediger Zillmer
  • Patent number: 10964321
    Abstract: The present disclosure involves systems, software, and computer implemented methods for providing voice-enabled human tasks in process modeling. One example method includes receiving a deployment request for a workflow that includes a human task. The workflow is deployed to a workflow engine in response to the deployment request. An instance of the workflow is created in response to a request from a client application. The instance of the workflow is processed, including execution of the human task. The human task is added to a task inbox of an assignee of the human task. A request is received from the assignee to access the task inbox from a telecommunications system. Voice guidance is provided, to the assignee, that requests assignee input. Voice input from the assignee is processed for completion of the human task. Workflow context for the human task is updated based on the received voice input.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: March 30, 2021
    Assignee: SAP SE
    Inventors: Vikas Rohatgi, Abhinav Kumar
  • Patent number: 10831982
    Abstract: One embodiment includes a portable reading device for reading a paginated e-book, with at least a page including a section including text linked to an illustration. The device can layout the section by keeping the text with the illustration to be displayed in one screen, and maintaining the pagination of the e-book if the page is displayed in more than one screen. Another embodiment includes reading materials with a text sub file with texts, an illustration sub file with illustrations, and a logic sub file with rules on displaying the materials. Either the text or the illustration sub file includes position information linking at least an illustration to a corresponding piece of text. One embodiment includes reading materials with a logic sub file that can analyze an attribute of, and provide a response to, a reader. Another embodiment can be an eyewear presenting device, allowing for hands-free presenting.
    Type: Grant
    Filed: March 24, 2016
    Date of Patent: November 10, 2020
    Assignee: IPLContent, LLC
    Inventors: Chi Fai Ho, Peter P. Tong
  • Patent number: 10810413
    Abstract: A wakeup method based on lip reading is provided, the wakeup method including: acquiring a motion graph of a user's lips; determining whether the acquired motion graph matches a preset motion graph; and waking up a voice interaction function in response to the acquired motion graph matching the preset motion graph.
    Type: Grant
    Filed: October 19, 2018
    Date of Patent: October 20, 2020
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventor: Liang Gao
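    A minimal sketch of the lip-reading wakeup flow above, assuming a motion graph can be represented as a sequence of normalized mouth-opening values and that matching is a per-sample tolerance check (both assumptions are illustrative; the patent does not specify the representation).

    ```python
    def motion_matches(acquired, preset, tolerance=0.1):
        """Compare an acquired lip-motion trace against a preset trace.

        Each trace is a sequence of normalized mouth-opening values;
        the traces match if every sample is within the tolerance."""
        if len(acquired) != len(preset):
            return False
        return all(abs(a - p) <= tolerance for a, p in zip(acquired, preset))

    def maybe_wake(acquired, preset, wake):
        """Wake the voice interaction function when the traces match."""
        if motion_matches(acquired, preset):
            wake()
            return True
        return False
    ```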
  • Patent number: 10803875
    Abstract: A speaker recognition system includes a non-transitory computer readable medium configured to store instructions. The speaker recognition system further includes a processor connected to the non-transitory computer readable medium. The processor is configured to execute the instructions for extracting acoustic features from each frame of a plurality of frames in input speech data. The processor is configured to execute the instructions for calculating a saliency value for each frame of the plurality of frames using a first neural network (NN) based on the extracted acoustic features, wherein the first NN is a trained NN using speaker posteriors. The processor is configured to execute the instructions for extracting a speaker feature using the saliency value for each frame of the plurality of frames.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: October 13, 2020
    Assignee: NEC CORPORATION
    Inventors: Qiongqiong Wang, Koji Okabe, Takafumi Koshinaka
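    The final step of the abstract — extracting a speaker feature using per-frame saliency values — amounts to saliency-weighted pooling of frame-level features. A plain-Python sketch of that pooling step (the saliency values would come from the trained neural network; here they are just inputs):

    ```python
    def pool_speaker_feature(frame_features, saliencies):
        """Saliency-weighted average pooling over per-frame acoustic features.

        frame_features: list of equal-length feature vectors, one per frame.
        saliencies: one saliency weight per frame (e.g. from a trained NN).
        Returns a single utterance-level speaker feature vector."""
        total = sum(saliencies)
        dim = len(frame_features[0])
        pooled = [0.0] * dim
        for feats, weight in zip(frame_features, saliencies):
            for i, value in enumerate(feats):
                pooled[i] += weight * value
        return [x / total for x in pooled]
    ```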
  • Patent number: 10776080
    Abstract: A system and method are described for an IoT integrated development tool.
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: September 15, 2020
    Assignee: Afero, Inc.
    Inventor: Joe Britt
  • Patent number: 10747467
    Abstract: Some embodiments can load one or more applications into working memory from persistent storage when permitted by a memory pressure level of a mobile device. Loading the applications into working memory enables the applications to be launched into the foreground quickly when the user indicates the desire to launch. Some embodiments may identify a set of applications that are designated for providing snapshots to be displayed when the mobile device is in a dock mode. Certain embodiments may determine a current memory pressure level. Some embodiments may load an application in the set of applications into working memory from a persistent storage responsive to determining that the memory pressure level is below a threshold. Certain embodiments may continue to load additional applications responsive to determining that the memory pressure level is below the threshold. After determining that the memory pressure level is above the threshold, some embodiments may reclaim memory.
    Type: Grant
    Filed: June 10, 2016
    Date of Patent: August 18, 2020
    Assignee: Apple Inc.
    Inventors: Antony J. Dzeryn, Michael J. Lamb, Neil G. Crane, Brent W. Schorsch
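    The threshold logic described above — keep preloading dock-snapshot apps while memory pressure stays below a threshold, stop once it rises above — can be sketched as a simple loop. The callables and numeric pressure scale are assumptions for illustration.

    ```python
    def preload_apps(apps, memory_pressure, threshold, load):
        """Load dock-snapshot apps into working memory while pressure is low.

        memory_pressure: callable returning the current pressure level.
        load: callable that loads one app from persistent storage.
        Returns the list of apps actually loaded."""
        loaded = []
        for app in apps:
            if memory_pressure() >= threshold:
                break  # stop loading once pressure reaches the threshold
            load(app)
            loaded.append(app)
        return loaded
    ```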
  • Patent number: 10699379
    Abstract: Methods, systems and articles of manufacture for a portable electronic device to change an orientation in which content is displayed on a display device of the portable electronic device based on a facial image. Example portable electronic devices include a display device, an image sensor to capture a facial image of a user of the portable electronic device, an orientation determination tool to determine a device orientation relative to the user based on the facial image of the user, and an orientation adjustment tool. The orientation adjustment tool changes a content orientation in which the display device of the portable electronic device presents content based on the determination of the device orientation.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: June 30, 2020
    Assignee: Intel Corporation
    Inventor: Jeffrey M. Tripp
  • Patent number: 10694244
    Abstract: Novel techniques are described for automated transition classification for binge watching of content. For example, a number of frame images is extracted from a candidate segment time window of content. The frame images can automatically be classified by a trained machine learning model into segment and non-segment classifications, and the classification results can be represented by a two-dimensional (2D) image. The 2D image can be run through a multi-level convolutional conversion to output a set of output images, and a serialized representation of the output images can be run through a trained computational neural network to generate a transition array, from which a candidate transition time can be derived (indicating a precise time at which the content transitions to the classified segment).
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: June 23, 2020
    Assignee: DISH Network L.L.C.
    Inventors: Ilhyoung Kim, Pratik Divanji, Abhijit Y. Sharma, Swapnil Tilaye
  • Patent number: 10686972
    Abstract: A method for rotating a field of view represented by a displayed image is disclosed. The method may include displaying a first image representing a first field of view. The method may also include determining a gaze direction of a user toward the first image. The method may further include identifying a subject in the first image at which the gaze direction is directed, wherein the subject is in a first direction from a center of the first image. The method may further include receiving a directional input in a second direction. The method may additionally include, based at least in part on the second direction being substantially the same as the first direction, displaying a second image representing a second field of view, wherein the subject is centered in the second image.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: June 16, 2020
    Assignee: Tobii AB
    Inventor: Denny Rönngren
  • Patent number: 10678816
    Abstract: Provided are systems and methods for converting unlabeled data into structured, labeled data for answering one or more single-entity-single-relation questions. The systems and methods automate the labeling of data to generate training data for machine learning. They identify and import question and answer pairs from a user-generated discussion platform and access a knowledge base to extract questions by supervised extraction. The extracted questions are further filtered to remove mislabeled questions. When a question is posed, it is parsed for entity and relation, and an answer is identified by searching the knowledge base.
    Type: Grant
    Filed: August 23, 2017
    Date of Patent: June 9, 2020
    Assignee: RSVP TECHNOLOGIES INC.
    Inventors: Zhongyu Peng, Kun Xiong, Anqi Cui
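    The answering step — parse a question into an (entity, relation) pair and look it up in a knowledge base — can be sketched minimally. The regex template and the tuple-keyed knowledge base are illustrative assumptions; the patented system learns extraction from user-generated Q&A pairs rather than using a fixed pattern.

    ```python
    import re

    def parse_question(question):
        """Extract (entity, relation) from a 'what is the R of E?' question.

        This single template is a stand-in for learned question parsing."""
        m = re.match(r"what is the (\w+) of (\w+)\??", question.lower())
        return (m.group(2), m.group(1)) if m else None

    def answer(question, knowledge_base):
        """Look up the parsed (entity, relation) pair in the knowledge base."""
        parsed = parse_question(question)
        return knowledge_base.get(parsed) if parsed else None
    ```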
  • Patent number: 10664045
    Abstract: A system configured to generate and/or modify three-dimensional scenes comprising animated character(s) based on individual asynchronous motion capture recordings. The system may comprise sensor(s), display(s), and/or processor(s). The system may receive selection of a first character to virtually embody within the virtual space, receive a first request to capture the motion and/or the sound for the first character, and/or record first motion capture information characterizing the motion and/or the sound made by the first user as the first user virtually embodies the first character. The system may receive selection of a second character to virtually embody, receive a second request to capture the motion and/or the sound for the second character, and/or record second motion capture information. The system may generate a compiled virtual reality scene wherein the first character and the second character appear animated within the compiled virtual reality scene contemporaneously.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: May 26, 2020
    Assignee: VISIONARY VR, INC.
    Inventors: Jonathan Michael Ross, Gil Baron
  • Patent number: 10643607
    Abstract: Various arrangements for triggering transitions within a slide-based presentation are presented. An audio-based trigger system may receive a plurality of trigger words. A database may be created that maps trigger words to slide transitions. A voice-based request may be received to initiate audio control of the slide-based presentation being output by the presentation system. An audio stream may be monitored for trigger words. Based on accessing a database, a slide transition to be performed may be identified based on a recognized trigger word. A slide transition request may be transmitted to a presentation system that indicates a slide to which a transition should occur. The presentation system may then transition to the slide based on the received slide transition request.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: May 5, 2020
    Assignee: DISH Network L.L.C.
    Inventor: Shruti Meshram
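    The core of this arrangement — a database mapping trigger words to slide transitions, consulted as recognized words arrive — can be sketched with a dictionary. The trigger words and slide numbers below are made up for illustration.

    ```python
    class SlideTrigger:
        """Map spoken trigger words to slide transitions (illustrative)."""

        def __init__(self, trigger_map):
            # trigger_map: e.g. {"budget": 4, "roadmap": 7}
            self.trigger_map = {k.lower(): v for k, v in trigger_map.items()}

        def monitor(self, transcript_words):
            """Scan recognized words; return the slide to transition to,
            or None if no trigger word was recognized."""
            for word in transcript_words:
                slide = self.trigger_map.get(word.lower())
                if slide is not None:
                    return slide
            return None
    ```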
  • Patent number: 10645035
    Abstract: Techniques are described related to enabling automated assistants to enter into a “conference mode” in which they can “participate” in meetings between multiple human participants and perform various functions described herein. In various implementations, an automated assistant implemented at least in part on conference computing device(s) may be set to a conference mode in which the automated assistant performs speech-to-text processing on multiple distinct spoken utterances, provided by multiple meeting participants, without requiring explicit invocation prior to each utterance. The automated assistant may perform semantic processing on first text generated from the speech-to-text processing of one or more of the spoken utterances, and generate, based on the semantic processing, data that is pertinent to the first text. The data may be output to the participants at conference computing device(s).
    Type: Grant
    Filed: December 6, 2017
    Date of Patent: May 5, 2020
    Assignee: GOOGLE LLC
    Inventors: Marcin Nowak-Przygodzki, Jan Lamecki, Behshad Behzadi
  • Patent number: 10607148
    Abstract: In one embodiment, a method includes, by one or more computing devices of an online social network, receiving, from a client system of a first user of the online social network, a first audio input from an unknown user, identifying one or more candidate users, wherein each candidate user is a user of the online social network within a threshold degree of separation of a known user, calculating, for each candidate user, a probability score representing a probability that the unknown user is the candidate user, wherein the probability score is based on a comparison of the first audio input to a voiceprint of the candidate user stored by the online social network, wherein each voiceprint comprises audio data for auditory identification of the candidate user, and identifying one of the candidate users as being the unknown user based on the calculated probability scores of the candidate users.
    Type: Grant
    Filed: December 21, 2016
    Date of Patent: March 31, 2020
    Assignee: Facebook, Inc.
    Inventor: Mateusz Marek Niewczas
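    The candidate-scoring step — compare the audio input against each candidate's stored voiceprint and pick the best — can be sketched with embedding vectors and cosine similarity. A real system would calibrate these scores into probabilities and apply a rejection threshold; this sketch, under those assumptions, just ranks candidates.

    ```python
    import math

    def cosine(a, b):
        """Cosine similarity between two embedding vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def identify_speaker(audio_embedding, candidate_voiceprints):
        """Score each candidate by voiceprint similarity; return the best
        match and all scores. candidate_voiceprints maps user id to the
        stored voiceprint embedding."""
        scores = {uid: cosine(audio_embedding, vp)
                  for uid, vp in candidate_voiceprints.items()}
        return max(scores, key=scores.get), scores
    ```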
  • Patent number: 10599469
    Abstract: A method, a system, and a computer program product for indicating a dialogue status of a conversation thread between a user of an electronic device and a virtual assistant capable of maintaining conversational context of multiple threads at a time. The method includes receiving, at an electronic device providing functionality of a virtual assistant (VA), a user input that corresponds to a task to be performed by the VA. The method includes determining, from among a plurality of selectable threads being concurrently maintained by the VA and based on content of the user input, one target thread to which the user input is associated. The method includes performing the task within the target thread.
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: March 24, 2020
    Assignee: Motorola Mobility LLC
    Inventors: Jun-Ki Min, David A. Schwarz, Krishna C. Garikipati, John W. Nicholson, Mir Farooq Ali
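    Determining the target thread "based on content of the user input" could be sketched as scoring each concurrently maintained thread's context against the input. Keyword overlap is an illustrative stand-in for whatever content matching the patented method actually uses.

    ```python
    def target_thread(user_input, threads):
        """Pick the conversation thread whose context best matches the input.

        threads: mapping of thread id to a set of context keywords.
        Returns the id of the highest-overlap thread."""
        words = set(user_input.lower().split())
        best, best_score = None, -1
        for thread_id, context in threads.items():
            score = len(words & context)
            if score > best_score:
                best, best_score = thread_id, score
        return best
    ```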
  • Patent number: 10216011
    Abstract: Systems, methods and devices for measuring eyewear characteristics are provided. The eyewear measurement systems and devices comprise a plurality of measurement standard frames, each having lenses marked with visible gridlines specifically configured to the measurement standard frame to allow for the direct measurement of a PD and SH/FH with respect to each eye of the wearer.
    Type: Grant
    Filed: July 25, 2017
    Date of Patent: February 26, 2019
    Assignees: iCoat Company, LLC, Wiley X, Inc.
    Inventors: Thomas Pfeiffer, Michael Bumerts, Lawrence Wickline, Imtiaz Hasan, Timothy George Stephan, Dan Freeman, Arman Bernardi
  • Patent number: 10200854
    Abstract: Certain embodiments provide a method including obtaining data at a first time using at least one sensor associated with a mobile computing device, the at least one sensor arranged to gather data regarding at least one operating factor for the mobile computing device, the mobile computing device configured to receive market data and execute a trading application. The example method includes analyzing the data obtained from the at least one sensor to determine the at least one operating factor. The example method includes determining a first operating state of the mobile computing device based on the at least one operating factor. The example method includes altering a function of the mobile computing device with respect to the trading application based on the first operating state.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: February 5, 2019
    Assignee: Trading Technologies International, Inc.
    Inventor: Scott F. Singer
  • Patent number: 10178293
    Abstract: A method, a computer program product, and a computer system for controlling a camera using a voice command and image recognition. One or more processors on the camera captures the voice command that is from a user of the camera and declares a subject of interest. The one or more processors processes the voice command and sets the subject of interest. The one or more processors receives a camera image from an imaging system of the camera. The one or more processors identifies the subject of interest in the camera image. The one or more processors sets one or more camera parameters that are appropriate to the subject of interest.
    Type: Grant
    Filed: June 22, 2016
    Date of Patent: January 8, 2019
    Assignee: International Business Machines Corporation
    Inventors: Deborah J. Butts, Adrian P. Kyte, Timothy A. Moran, John D. Taylor