Patents Examined by Jordany Nunez
-
Patent number: 11762952
Abstract: Aspects of the present disclosure are directed to an artificial reality (XR) application system controlling applications in an artificial reality environment. In various cases, these controls include automatically suggesting XR applications by determining an XR context and identifying applications that match the XR context. These applications can be suggested to a user, who can authorize their execution, setting permissions for the application. In some cases, applications can be divided into components which can be progressively downloaded. By providing application suggestions relevant to the current context and progressively downloading application components, applications can appear ambient, rather than relying on users to constantly download, install, or activate applications. Permissions for applications may be revoked permanently or for certain situations, either through user permissions selections or automatically in response to determined user intents.
Type: Grant
Filed: June 28, 2021
Date of Patent: September 19, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Michal Hlavac, Jasper Stevens, Arthur Zwiegincew, Alexander Michael Louie
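As an illustration of the context-matching suggestion step this abstract describes, the sketch below scores candidate applications by tag overlap with the current XR context and suggests those above a threshold. The function name, tag scheme, and threshold are assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch: suggest XR applications whose declared context
# tags overlap the detected XR context. Not the patented implementation.

def suggest_applications(xr_context, catalog, threshold=0.5):
    """Return names of applications whose context tags overlap the
    current XR context strongly enough to suggest them to the user."""
    suggestions = []
    for app in catalog:
        overlap = len(xr_context & app["context_tags"])
        score = overlap / len(app["context_tags"]) if app["context_tags"] else 0.0
        if score >= threshold:
            suggestions.append((score, app["name"]))
    # Highest-scoring matches first.
    return [name for _, name in sorted(suggestions, reverse=True)]

catalog = [
    {"name": "chess_overlay", "context_tags": {"tabletop", "seated", "two_people"}},
    {"name": "recipe_helper", "context_tags": {"kitchen", "standing"}},
]
print(suggest_applications({"tabletop", "seated", "two_people", "indoors"}, catalog))
# → ['chess_overlay']
```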
-
Patent number: 11757950
Abstract: An example method and system for sharing an output device between multimedia devices to transmit and receive data is provided. The method includes operations of automatically discovering one or more second multimedia devices when a first multimedia device is positioned within communication range of the one or more second multimedia devices that transmit a low-power signal; and transmitting data of the first multimedia device to the one or more second multimedia devices when the one or more second multimedia devices are discovered.
Type: Grant
Filed: February 25, 2022
Date of Patent: September 12, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventor: Sagar Kumar Verma
-
Patent number: 11733956
Abstract: In one implementation, a method of providing display device sharing and interactivity in simulated reality is performed at a first electronic device including one or more processors and a non-transitory memory. The method includes obtaining a gesture input to a first display device in communication with the first electronic device from a first user, where the first display device includes a first display. The method further includes transmitting a representation of the first display to a second electronic device in response to obtaining the gesture input. The method additionally includes receiving an input message directed to the first display device from the second electronic device, where the input message includes an input directive obtained by the second electronic device from a second user. The method also includes transmitting the input message to the first display device for execution by the first display device.
Type: Grant
Filed: September 3, 2019
Date of Patent: August 22, 2023
Assignee: APPLE INC.
Inventors: Bruno M. Sommer, Alexandre Da Veiga, Ioana Negoita
-
Patent number: 11720231
Abstract: Embodiments of the present disclosure relate to a vehicle user interface. The vehicle user interface may receive user input from an input system. It may present user-selectable options or prompt user action via an output system. The vehicle user interface may transmit, via a communication interface, to a computing system a series of user inputs received from at least the first input system, wherein the computing system is configured to extract at least one feature from the series of user inputs and generate a prediction model based on the at least one feature. At least one predicted option may be identified based on the prediction model. The vehicle user interface may instruct the first output system to present the at least one predicted option.
Type: Grant
Filed: December 8, 2021
Date of Patent: August 8, 2023
Inventor: Robert Richard Noel Bielby
-
Patent number: 11720172
Abstract: One embodiment provides a method, including: receiving, at an information handling device, an indication to open a text object; identifying, based upon analysis of context data associated with the text object, at least one article of key information contained within the text object; ascertaining, using a camera sensor, an aspect of user gaze within the text object; determining, based on the identified at least one article of key information and the ascertained aspect of user gaze, a degree to which a user is apprised of the at least one article of key information contained within the text object; and providing, based on the determining, a visual indication of the degree to which the user is apprised of the at least one article of key information. Other aspects are described and claimed.
Type: Grant
Filed: December 21, 2021
Date of Patent: August 8, 2023
Assignee: LENOVO (SINGAPORE) PTE. LTD.
Inventors: James G McLean, David D Chudy, Kenneth J Born, Cuong Thai
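A minimal sketch of how the "degree to which a user is apprised" in this abstract might be computed from gaze data, assuming key information arrives as character-offset spans and each gaze fixation carries a text offset and dwell time. The function name, data layout, and dwell threshold are all hypothetical.

```python
# Hypothetical sketch of gaze-based reading-progress scoring.

def apprisal_degree(key_spans, gaze_fixations, min_dwell_ms=200):
    """Fraction of key-information spans on which the user's gaze
    dwelled long enough to count as read."""
    read = 0
    for start, end in key_spans:
        # Total fixation time that landed inside this span.
        dwell = sum(f["duration_ms"] for f in gaze_fixations
                    if start <= f["offset"] < end)
        if dwell >= min_dwell_ms:
            read += 1
    return read / len(key_spans) if key_spans else 1.0

# One of two key spans received a long-enough fixation:
print(apprisal_degree([(0, 10), (50, 60)],
                      [{"offset": 3, "duration_ms": 250}]))  # → 0.5
```

The returned fraction could then drive the visual indication the abstract mentions, e.g. highlighting unread key passages.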
-
Patent number: 11714496
Abstract: An apparatus, method and computer program, the apparatus comprising: means for enabling three-dimensional content to be rendered; means for determining movement of the apparatus; means for controlling scrolling of the three-dimensional content, wherein if the movement of the apparatus is within a first category the three-dimensional content is scrolled to correspond to the movement of the apparatus, and if the movement of the apparatus is within a second category the three-dimensional content is not scrolled.
Type: Grant
Filed: December 3, 2018
Date of Patent: August 1, 2023
Assignee: NOKIA TECHNOLOGIES OY
Inventors: Miikka Vilermo, Antti Eronen, Jussi Leppänen
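The two movement categories in this abstract can be pictured as a simple rate threshold: deliberate (slow) device movement scrolls the 3D content, while fast, incidental movement is ignored. The rate-based classification and the threshold value are assumptions for illustration only.

```python
# Hypothetical sketch of category-gated scrolling.

def scroll_offset(delta_angle_deg, dt_s, max_rate_deg_s=120.0):
    """Scroll 3D content only when device movement falls in the first
    category (below the rate threshold); ignore faster movement
    (second category)."""
    rate = abs(delta_angle_deg) / dt_s
    if rate <= max_rate_deg_s:
        return delta_angle_deg   # first category: scroll with the movement
    return 0.0                   # second category: do not scroll

print(scroll_offset(10, 0.5))  # slow turn, 20°/s  → 10
print(scroll_offset(90, 0.1))  # fast jerk, 900°/s → 0.0
```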
-
Patent number: 11704592
Abstract: The subject technology receives, from a first sensor of a device, first sensor output of a first type. The subject technology receives, from a second sensor of the device, second sensor output of a second type, the first and second sensors being non-touch sensors. The subject technology provides the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted touch-based gesture based on sensor output of the first type and sensor output of the second type. The subject technology provides a predicted touch-based gesture based on output from the machine learning model. Further, the subject technology adjusts an audio output level of the device based on the predicted gesture, where the device is an audio output device.
Type: Grant
Filed: July 23, 2020
Date of Patent: July 18, 2023
Assignee: Apple Inc.
Inventors: Keith P. Avery, Jamil Dhanani, Harveen Kaur, Varun Maudgalya, Timothy S. Paek, Dmytro Rudchenko, Brandt M. Westing, Minwoo Jeong
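The sensor-fusion flow this abstract describes can be sketched as: concatenate two non-touch sensor streams, feed them to a trained model, and map the predicted touch-style gesture to a volume adjustment. The stand-in model, feature layout, gesture labels, and volume step below are hypothetical.

```python
# Hypothetical sketch: two non-touch sensor streams → model → gesture → volume step.

def predict_gesture(accel_features, proximity_features, model):
    """Feed two non-touch sensor feature vectors to a trained model and
    map the predicted touch-style gesture to a volume adjustment."""
    features = accel_features + proximity_features
    gesture = model(features)  # e.g. "swipe_up", "swipe_down", "none"
    adjustment = {"swipe_up": +0.1, "swipe_down": -0.1}.get(gesture, 0.0)
    return gesture, adjustment

# Stand-in for a trained model: threshold on the mean feature value.
toy_model = lambda f: "swipe_up" if sum(f) / len(f) > 0.5 else "none"

print(predict_gesture([0.9, 0.8], [0.7, 0.6], toy_model))  # → ('swipe_up', 0.1)
```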
-
Patent number: 11698716
Abstract: A method for multitasking includes displaying a dock containing application icons corresponding to different applications concurrently with a first user interface of a first application; detecting a first input directed to an application icon corresponding to a second application in the dock; in accordance with a determination that the second application is associated with multiple windows, displaying a first representation of a first window for the second application and a second representation of a second window for the second application concurrently with the first user interface of the first application in a second region of the display area; and in accordance with a determination that the second application is associated with only a single window, displaying a second user interface of the second application concurrently with the first user interface of the first application.
Type: Grant
Filed: May 20, 2022
Date of Patent: July 11, 2023
Assignee: APPLE INC.
Inventors: Brandon M. Walkin, Patrick L. Coffman
-
Patent number: 11693554
Abstract: Aspects of the disclosure relate generally to effortlessly switching between user accounts. For example, a user may access an application on their computing device. Within the application the user may have multiple user accounts. The application may display a plurality of indicators that signify each user account associated with that application. In this regard, the user may perform a swiping or tapping motion to select a particular user account to switch to. A transitional stage may take place that changes a first background and details associated with a first user account to a second background and details associated with a second user account. When the transition is complete, the user is able to access and perform functions associated with the second user account. The user may switch to another user account using a similar swiping or tapping motion.
Type: Grant
Filed: September 16, 2021
Date of Patent: July 4, 2023
Assignee: Google LLC
Inventors: Erik Viktor Persson, Jonathan Lee, Jean-Marc Denis
-
Patent number: 11636233
Abstract: A communication terminal, a system, a method, and a control program stored in a non-transitory recording medium for controlling capturing of an image, each of which displays, on a display of the communication terminal, an image based on image data to be shared with a counterpart communication terminal; receives an instruction to prohibit capturing of a screen that includes the image based on the image data; and transmits, from the communication terminal to the counterpart communication terminal, information related to the instruction to prohibit capturing of the screen.
Type: Grant
Filed: July 27, 2021
Date of Patent: April 25, 2023
Assignee: Ricoh Company, Ltd.
Inventors: Takeshi Homma, Hiroshi Hinohara, Shigeru Nakamura, Yuichi Kawasaki
-
Patent number: 11630639
Abstract: An electronic device and method are disclosed. The electronic device may include a display, a sensor module, a processor, and a memory operatively connected to the processor.
Type: Grant
Filed: October 27, 2021
Date of Patent: April 18, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sunghee Shin, Hyungsok Yeo, Valeriy Prushinskiy
-
Patent number: 11630633
Abstract: A method for interactive collaboration between a streamer user and a remote collaborator. The method includes receiving, by a collaborator computing device, streaming images taken by a streamer computing device. A collaborator user interface is generated and provided for output by the collaborator device. The collaborator user interface includes the received images. Hand tracking data is received by the collaborator device. The collaborator user interface is updated to include a representation of the hand tracking data and the received images. The hand tracking data is transmitted by the collaborator device to the streamer device for inclusion on a streamer user interface that is generated and provided for output by the streamer device.
Type: Grant
Filed: June 29, 2022
Date of Patent: April 18, 2023
Assignee: PROMP, INC.
Inventor: Francis MacDougall
-
Patent number: 11625542
Abstract: A co-user list may be configured based on user interaction in a virtual world environment. A first user may be enabled to navigate the virtual world environment using an instant messenger application that includes the co-user list. A second user that is located proximate to the first user in the virtual world environment may be detected. An attribute associated with the second user may be determined. The co-user list may be configured based on the attribute associated with the second user.
Type: Grant
Filed: February 24, 2021
Date of Patent: April 11, 2023
Assignee: Verizon Patent and Licensing Inc.
Inventor: David S. Bill
-
Patent number: 11614797
Abstract: An apparatus having a computing device and a user interface, such as a user interface having a display that can provide a graphical user interface (GUI). The apparatus also includes a camera, and a processor in the computing device. The camera can be connected to the computing device and/or the user interface, and the camera can be configured to capture pupil location and/or eye movement of a user. The processor can be configured to: identify a visual focal point of the user relative to the user interface based on the captured pupil location, and/or identify a type of eye movement of the user (such as a saccade) based on the captured eye movement. The processor can also be configured to control parameters of the user interface based at least partially on the identified visual focal point and/or the identified type of eye movement.
Type: Grant
Filed: November 5, 2019
Date of Patent: March 28, 2023
Assignee: Micron Technology, Inc.
Inventors: Dmitri Yudanov, Samuel E. Bradshaw
-
Patent number: 11614794
Abstract: Adapting an automated assistant based on detecting: movement of a mouth of a user; and/or that a gaze of the user is directed at an assistant device that provides an automated assistant interface (graphical and/or audible) of the automated assistant. The detecting of the mouth movement and/or the directed gaze can be based on processing of vision data from one or more vision components associated with the assistant device, such as a camera incorporated in the assistant device. The mouth movement that is detected can be movement that is indicative of a user (to whom the mouth belongs) speaking.
Type: Grant
Filed: May 4, 2018
Date of Patent: March 28, 2023
Assignee: GOOGLE LLC
Inventors: Kenneth Mixter, Yuan Yuan, Tuan Nguyen
-
Patent number: 11614849
Abstract: A system including a display output device, a computer including instructions that when executed by the computer, cause the computer to generate a virtual environment, instantiate one or more virtual devices into the virtual environment, instantiate a user representation into the virtual environment, and display the virtual environment, the one or more virtual devices, and the user representation on the display output device, an input device to receive a movement input associated with movement by the user representation in the virtual environment, and the computer further configured with instructions to move or rotate the one or more virtual devices relative to a point of reference in the virtual environment in response to the movement input, while maintaining the user representation stationary.
Type: Grant
Filed: May 13, 2019
Date of Patent: March 28, 2023
Assignee: Thermo Fisher Scientific, Inc.
Inventors: Mark Field, Daniel Garden
-
Patent number: 11599332
Abstract: A multi-faceted graphical user interface with multiple shells or layers may be provided for interaction with a user to speech-enable interaction with applications and processes that do not necessarily have native support for speech input. The shells may be components of an operating system or of a parent application which supports such shells. Each shell has multiple facets for displaying applications and processes, and typically speech and other input is directed at the application or process in the facet which has focus within the active shell. These multiple shells lend themselves to grouping of input or grouping of related applications and processes. For example, input from a speech recognizer, a mouse, and a keyboard may each be directed at different shells; or a user may group related windows within various shells, such that all documents are displayed in one shell and all windows of an instant messaging application are displayed in another, thereby enabling better organization of work and work flow.
Type: Grant
Filed: February 5, 2021
Date of Patent: March 7, 2023
Assignee: Great Northern Research, LLC
Inventor: Paul J. Lagassey
-
Patent number: 11595510
Abstract: A mobile terminal and a control method therefor are disclosed. The mobile terminal includes a body, an input unit configured to receive a user input, a display coupled to the body to vary a display region viewed from a front of the body according to switching between an enlarged display mode and a reduced display mode, and a controller. The controller controls the display to be extended by a first region upon receiving a first signal, and controls the display to activate a touch function of the first region after activating an output function of the first region, based on extension of the display by the first region.
Type: Grant
Filed: May 19, 2021
Date of Patent: February 28, 2023
Assignee: LG ELECTRONICS INC.
Inventors: Kensin Noh, Dongwan Kang, Seungyong Lee
-
Patent number: 11592907
Abstract: A user may routinely wear or hold more than one computing device. One of the computing devices may be a head-mounted computing device configured for augmented reality. The head-mounted computing device may include a camera. While imaging, the camera can consume power and processing resources that drain a battery of the head-mounted computing device. To improve battery life and to enhance a user's privacy, imaging by the camera can be deactivated during periods when the user is not interacting with the head-mounted computing device and activated when the user wishes to interact with the head-mounted computing device. The activation of the camera can be triggered by gesture data collected by a computing device other than the head-mounted computing device.
Type: Grant
Filed: October 20, 2020
Date of Patent: February 28, 2023
Assignee: Google LLC
Inventors: Shengzhi Wu, Alexander James Faaborg
-
Patent number: 11586345
Abstract: The present disclosure provides a method and apparatus for interaction control of a display page. The method includes: obtaining a sliding operation performed by a user on the display page and a sliding parameter corresponding to the sliding operation, the display page including a plurality of pieces of display content; controlling the display page to move in accordance with the sliding operation, and predicting, based on the sliding parameter, a position of current display content on the display page when the display page stops; determining whether the position of the current display content meets a predetermined requirement; and correcting, when it is determined that the position of the current display content does not meet the predetermined requirement, the position of the current display content, such that the position of the current display content meets the predetermined requirement when the display page stops.
Type: Grant
Filed: January 15, 2020
Date of Patent: February 21, 2023
Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
Inventor: Yanyun Gong
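The predict-then-correct behavior this abstract describes can be sketched as predicting a fling's stopping point from the sliding parameter and snapping it to a content-item boundary. The geometric decay model, the snap rule, and all names below are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch of predicting a scroll stop position and
# correcting it to meet a "no item cut off" requirement.

def corrected_stop(start, velocity, friction=0.95, item_height=100):
    """Predict where momentum scrolling will stop, then correct the
    position so the page stops on an item boundary."""
    # Total travel of a geometrically decaying fling:
    # velocity * (1 + f + f^2 + ...) = velocity / (1 - f)
    predicted = start + velocity / (1.0 - friction)
    # Correct: snap to the nearest content-item boundary.
    return round(predicted / item_height) * item_height

print(corrected_stop(start=0, velocity=12))  # → 200
```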