Augmented Reality (real-time) Patents (Class 345/633)
  • Patent number: 11494960
    Abstract: A display assembly generates environmentally matched virtual content for an electronic display. The display assembly includes a display controller and a display. The display controller is configured to estimate environmental matching information for a target area within a local area based in part on light information received from a light sensor. The target area is a region for placement of a virtual object. The light information describes light values. The display controller generates display instructions for the target area based in part on a human vision model, the estimated environmental matching information, and rendering information associated with the virtual object. The display is configured to present the virtual object as part of artificial reality content in accordance with the display instructions. The color and brightness of the virtual object are environmentally matched to the portion of the local area surrounding the target area.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: November 8, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Jiangtao Kuang, Edward Buckley, Honghong Peng, Sapna Shroff, Romain Bachy
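    A minimal sketch (not the patented method) of the kind of environmental matching the abstract above describes: a virtual object's color is dimmed or boosted from a sensed illuminance and nudged toward the ambient chromaticity. The linear blend model, parameter names, and constants are all assumptions.

    ```python
    # Hypothetical brightness/chromaticity matching for a virtual object.
    # The linear blend model and all parameters are assumptions, not the
    # patented algorithm.

    def match_to_environment(object_rgb, ambient_rgb, ambient_lux,
                             reference_lux=250.0, chroma_weight=0.3):
        """Scale brightness by sensed illuminance and nudge hue toward ambient light.

        object_rgb   -- intended color of the virtual object, values in [0, 1]
        ambient_rgb  -- normalized color of light sensed around the target area
        ambient_lux  -- illuminance reported by the light sensor
        """
        gain = min(ambient_lux / reference_lux, 2.0)        # cap the brightness boost
        matched = []
        for obj_c, amb_c in zip(object_rgb, ambient_rgb):
            blended = (1.0 - chroma_weight) * obj_c + chroma_weight * amb_c
            matched.append(min(blended * gain, 1.0))        # clamp to displayable range
        return tuple(matched)


    if __name__ == "__main__":
        # Warm, dim room light: a white virtual object is dimmed and warmed.
        print(match_to_environment((1.0, 1.0, 1.0), (1.0, 0.85, 0.7), ambient_lux=120))
    ```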
  • Patent number: 11494988
    Abstract: Novel tools and techniques are provided for implementing augmented reality (AR)-based assistance within a work environment. In various embodiments, a computing system might receive, from a camera having a field of view of a work environment, first images of at least part of the work environment, the first images overlapping with a field of view of a user wearing an AR headset; might analyze the received first images to identify objects; might query a database(s) to determine a task associated with a first object(s) among the identified objects; might generate a first image overlay providing at least one of graphical icon-based, text-based, image-based, and/or highlighting-based instructions indicating to the user how to implement the task associated with the first object(s); and might display, to the user's eyes through the AR headset, the generated first image overlay that overlaps with the field of view of the user's eyes.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: November 8, 2022
    Assignee: AGILENT TECHNOLOGIES, INC.
    Inventors: Amir Ben-Dor, Elad Arbel, Richard Workman, Victor Lim
  • Patent number: 11493764
    Abstract: The invention provides a method for dynamically displaying real-world scene, an electronic device and a computer readable medium. The method includes: obtaining a virtual scene boundary of a virtual reality environment; monitoring a specific distance between a specific element in a virtual reality system and the virtual scene boundary; in response to determining that the specific distance ranges between a first threshold and a second threshold, displaying a see-through window in a virtual reality content of the virtual reality environment, wherein the see-through window displays a real-world scene.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: November 8, 2022
    Assignee: HTC Corporation
    Inventor: Yi-Hsin Chang
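    The abstract above describes a two-threshold rule for revealing a real-world view. A toy sketch of such a rule follows, with the threshold values, mode names, and the behavior below the second threshold all assumed for illustration.

    ```python
    # Minimal sketch of threshold-based see-through behavior: show a passthrough
    # window only when the tracked element is between two distances from the
    # virtual scene boundary. Values and names are illustrative assumptions.

    FIRST_THRESHOLD = 1.0   # meters: start showing the see-through window
    SECOND_THRESHOLD = 0.3  # meters: assumed point to switch to full passthrough

    def display_mode(distance_to_boundary: float) -> str:
        if SECOND_THRESHOLD <= distance_to_boundary <= FIRST_THRESHOLD:
            return "vr_with_see_through_window"
        if distance_to_boundary < SECOND_THRESHOLD:
            return "full_passthrough"          # assumed behavior once very close
        return "vr_only"

    if __name__ == "__main__":
        for d in (2.0, 0.8, 0.1):
            print(f"{d:.1f} m from boundary -> {display_mode(d)}")
    ```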
  • Patent number: 11494951
    Abstract: An example method performed by a processing system includes receiving a request from a first user to render an extended reality environment, wherein the request includes a definition of a first policy that governs user behavior within the extended reality environment, rendering the extended reality environment by presenting content contributed by at least one user in the extended reality environment, monitoring the extended reality environment to ensure that the rendering results in a compliance of the extended reality environment with the first policy, detecting that a portion of the content contributed by at least one other user of the extended reality environment results in the extended reality environment failing to comply with the first policy, and modifying a presentation of the portion of content in the extended reality environment in response to the detecting, wherein the modifying results in the compliance of the extended reality environment with the first policy.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: November 8, 2022
    Assignees: AT&T Intellectual Property I, L.P., AT&T Mobility II LLC
    Inventors: John Oetting, Eric Zavesky, James Pratt, Jason Decuir, Terrel Lecesne
  • Patent number: 11496691
    Abstract: A system for illuminating a character for a scene includes a computing platform communicatively coupled to a lighting source and a camera, the computing platform including a hardware processor and a system memory storing a software code. The hardware processor executes the software code to identify a background for the scene, generate, using the lighting source, a simulation of the background on a surface illuminated by the lighting source, and utilize the simulation of the background generated on the surface illuminated by the lighting source to illuminate the character for the scene. The hardware processor also executes the software code to track, using the camera, a plurality of parameters of the camera during recording of an image of the illuminated character and the simulation of the background, and to remove the simulation of the background from the image based on the plurality of parameters of the camera.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: November 8, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: Steven M. Chapman, Alice Taylor, Joseph Popp
  • Patent number: 11493999
    Abstract: Some embodiments of a method may include tracking a distance between a local VR device and a remote VR device; selecting an experience-sharing mode from a plurality of experience-sharing modes based on the tracked distance; and providing a different degree of immersive user experience to a user at the local VR device depending on the selected experience-sharing mode, wherein as the distance between the local VR device and the remote VR device decreases, the experience-sharing mode changes and the degree of immersive user experience increases. Some embodiments of another method may include detecting a gesture made by a user; selecting an immersiveness mode from a plurality of immersiveness modes based on the detected gesture; and providing a degree of immersive user experience to a local VR device user depending on the selected immersiveness mode.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: November 8, 2022
    Assignee: PMCS HOLDINGS, INC.
    Inventors: Tatu V. J. Harviainen, Marko Palviainen
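    A rough illustration of distance-driven experience-sharing modes like those described above, where immersion increases as the tracked distance between devices shrinks; the mode names and distance bands are assumptions.

    ```python
    # Illustrative mapping from tracked inter-device distance to an
    # experience-sharing mode with increasing immersion as distance shrinks.

    MODES = [          # (max_distance_m, mode_name, immersion_level) -- assumed values
        (2.0,  "shared_space",       3),
        (10.0, "shared_view",        2),
        (float("inf"), "notifications_only", 1),
    ]

    def select_sharing_mode(distance_m: float):
        for max_d, mode, level in MODES:
            if distance_m <= max_d:
                return mode, level
        raise ValueError("unreachable")

    if __name__ == "__main__":
        for d in (25.0, 6.0, 1.2):
            mode, level = select_sharing_mode(d)
            print(f"distance {d:5.1f} m -> mode={mode}, immersion={level}")
    ```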
  • Patent number: 11487353
    Abstract: In certain embodiments, a sensing and tracking system detects objects, such as user input devices or peripherals, and user interactions with them. A representation of the objects and user interactions are then injected into the virtual reality environment. The representation can be an actual reality, augmented reality, virtual representation or any combination. For example, an actual keyboard can be injected, but with the keys pressed being enlarged and lighted.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: November 1, 2022
    Assignee: Logitech Europe S.A.
    Inventors: Stephen Harvey, Denis O'Keeffe, Andreas Connellan, Damien O'Sullivan, Aidan Kehoe, Noirin Curran, Thomas Rouvinez, Mario Gutierrez, Olivier Riviere, Remy Zimmermann, Mathieu Meisser, Dennin Onorio, Ciaran Trotman, Pierce O'Bradaigh, Marcel Twohig, Padraig Murphy, Jerry Ahern
  • Patent number: 11487354
    Abstract: There is provided an information processing apparatus, an information processing method, and a program that make it possible to improve usability in information presentation. The information processing apparatus includes a recognition unit configured to recognize a shift in gaze and a movement of the head of a user; and a display controller configured to control display of information on a subject according to the movement of the head that is made after the gaze is turned to the subject.
    Type: Grant
    Filed: January 10, 2019
    Date of Patent: November 1, 2022
    Assignee: SONY CORPORATION
    Inventor: Kenji Sugihara
  • Patent number: 11481034
    Abstract: A low-power object tracking system is disclosed that includes an object tracking device that senses one or more magnetic field(s) to determine a position of the object tracking device. The object tracking device includes a magnetic field sensor including one or more receiving coils and position tracking circuitry in communication with the magnetic field sensor. The position tracking circuitry is configured to determine at least one field strength associated with at least one stationary magnetic field sensed at the one or more receiving coils, and to determine position information associated with the housing based at least in part on the at least one field strength. The object tracking device includes a communication interface configured to transmit the position information to at least one remote computing device.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: October 25, 2022
    Assignee: GOOGLE LLC
    Inventor: Alejandro Kauffmann
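    One plausible (assumed, not taken from the patent) way field strengths from stationary emitters could yield a position: invert a dipole falloff model to get per-emitter distances, then trilaterate. The falloff constant, 2D layout, and least-squares solver are illustrative only.

    ```python
    # Sketch of magnetic position tracking: convert sensed field strengths into
    # distance estimates with an inverse-cube falloff model, then trilaterate.

    import numpy as np

    K = 1.0  # assumed emitter constant so that field = K / distance**3

    def distance_from_field(field_strength: float) -> float:
        return (K / field_strength) ** (1.0 / 3.0)

    def trilaterate(emitters, field_strengths):
        """Linearized least-squares trilateration from >= 3 emitters."""
        d = [distance_from_field(f) for f in field_strengths]
        (x0, y0), r0 = emitters[0], d[0]
        rows, rhs = [], []
        for (xi, yi), ri in zip(emitters[1:], d[1:]):
            rows.append([2 * (xi - x0), 2 * (yi - y0)])
            rhs.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
        solution, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        return solution  # estimated (x, y)

    if __name__ == "__main__":
        emitters = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
        true_pos = np.array([0.7, 1.1])
        fields = [K / np.linalg.norm(true_pos - np.array(e)) ** 3 for e in emitters]
        print("estimated position:", trilaterate(emitters, fields))
    ```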
  • Patent number: 11481999
    Abstract: A maintenance work support system comprising: a database in which support information for supporting maintenance work is registered; a camera-image acquisition unit configured to acquire a camera-image imaged by a camera that is mounted on a terminal possessed by a worker performing the maintenance work; a position/attitude estimation unit configured to estimate a position and attitude of the terminal based on information that is obtained by at least one device mounted on the terminal; a target recognition unit configured to recognize a target of the maintenance work depicted in the camera-image; and a superimposed display unit configured to perform display processing in such a manner that the support information corresponding to the target acquired from the database is superimposed on at least part of an image of the target visually recognized by the worker.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: October 25, 2022
    Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA ENERGY SYSTEMS & SOLUTIONS CORPORATION
    Inventors: Tomomi Hishinuma, Kenji Osaki
  • Patent number: 11481965
    Abstract: An augmented reality (AR) device supporting an AR is provided. The AR device includes a display, a communication circuit, at least one processor operatively connected to the display and the communication circuit, and a memory operatively connected to the at least one processor. The memory stores instructions that, when executed, cause the at least one processor to establish a connection with a user device storing a contact application and a message application, through the communication circuit, detect that the message application is executed, and display a first graphic user interface (GUI) and at least one avatar, which is disposed at a location adjacent to the first GUI and corresponds to at least one contact associated with the contact application or the message application, through the display in the AR.
    Type: Grant
    Filed: April 5, 2021
    Date of Patent: October 25, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Stephanie Kim Ahn, Aaron Samuel Faucher, Andrew R McHugh, Edgar Charles Evangelista, Jaehyun Kim
  • Patent number: 11483253
    Abstract: A network resource pushing method, a device, and a storage medium. The method comprises: a server receiving information of a network resource accessed by a user, and information of the environment of the user when the user accessed the network resource (101); the server applying, according to the information of the environment of the user, a pre-established scenario determination model to determine a user scenario (102); the server establishing an association model between the user scenario and the type of network resource accessed by the user (103); and the server receiving information of a current environment of the user, applying the scenario determination model to determine a current scenario corresponding to the information of the current environment of the user, and pushing to the user, according to the association model, a network resource having a type corresponding to the determined current scenario (104).
    Type: Grant
    Filed: January 3, 2020
    Date of Patent: October 25, 2022
    Assignees: Beijing Jingdong Shangke Information Technology Co., Ltd., Beijing Jingdong Century Trading Co., Ltd.
    Inventor: Xinming Yao
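    A toy rendering of the two-model flow in the abstract above: a scenario-determination step maps environment information to a scenario, and an association model maps (user, scenario) to resource types to push. Both "models" are plain lookup tables here, and every rule and name is an assumption.

    ```python
    # Illustrative scenario determination plus association model for resource pushing.

    def determine_scenario(environment):
        if environment.get("in_transit"):
            return "commuting"
        if environment.get("hour", 12) >= 20:
            return "evening_at_home"
        return "daytime"

    def update_association(association, user, scenario, resource_type):
        association.setdefault((user, scenario), set()).add(resource_type)

    def push_resources(association, user, environment):
        scenario = determine_scenario(environment)
        return sorted(association.get((user, scenario), set()))

    if __name__ == "__main__":
        association = {}
        update_association(association, "u1", "commuting", "podcast")
        update_association(association, "u1", "evening_at_home", "video")
        print(push_resources(association, "u1", {"in_transit": True}))
    ```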
  • Patent number: 11483156
    Abstract: A method that includes receiving, via processing circuitry of a server, a unique identifier having encoded data included in a reference patch embedded in displayed data received by an electronic device, the electronic device being instructed to display the displayed data in a first layer of the electronic device, the server being inaccessible by the first layer of the electronic device; identifying an identity of a user based on the unique identifier of the reference patch; upon determining the user is authorized to receive the secondary digital content, transmitting the secondary digital content to the electronic device; and instructing the electronic device to display the secondary digital content in a second layer of the electronic device, the server being accessible by the second layer of the electronic device, the first layer of the electronic device being different from the second layer of the electronic device.
    Type: Grant
    Filed: February 18, 2022
    Date of Patent: October 25, 2022
    Assignee: Mobeus Industries, Inc.
    Inventors: Dharmendra Etwaru, David Casper
  • Patent number: 11481963
    Abstract: Systems, methods, and non-transitory computer readable media configured for enabling content sharing between users of wearable extended reality appliances are provided. In one implementation, the computer readable medium may be configured to contain instructions to cause at least one processor to establish a link between a first wearable extended reality appliance and a second wearable extended reality appliance. The first wearable extended reality appliance may display first virtual content. The second wearable extended reality appliance may obtain a command to display first virtual content via the second wearable extended reality appliance, and in response, this content may be transmitted and displayed via the second extended reality appliance. Additionally, the first wearable extended reality appliance may receive second virtual content from the second wearable extended reality appliance, and display said second virtual content via the first wearable extended reality appliance.
    Type: Grant
    Filed: April 5, 2022
    Date of Patent: October 25, 2022
    Assignee: MULTINARITY LTD
    Inventors: Tamir Berliner, Tomer Kahan, Orit Dolev
  • Patent number: 11483470
    Abstract: A control apparatus configured to control an image-capturing apparatus executes focus adjustment of the image-capturing apparatus in response to detection of a specific marker pattern from an image of an area to be focused by the image-capturing apparatus in an image acquired by image-capturing of the image-capturing apparatus.
    Type: Grant
    Filed: July 28, 2020
    Date of Patent: October 25, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Yasushi Shikata
  • Patent number: 11481980
    Abstract: A system and method for allowing a smooth transition between a public digital reality experience into a private digital reality experience. A cloud server delimitates a public digital zone where interactions with digital reality applications stored in the cloud server and corresponding digital content are viewable to all users in the public digital zone. When users access the public digital zone, the cloud server detects the users and provides digital content from the digital reality to the users via digital reality devices. Upon reaching predetermined levels of interaction, users may access a personal digital reality view, whereby a smooth transition from a public to a personal digital reality experience takes place, prompting a specific portion of computing power from the cloud server to be dedicated to the specific user to enable personal interactions with the digital content. Users may also invite other users to engage in a multi-user private streaming.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: October 25, 2022
    Assignee: THE CALANY HOLDING S.À R.L.
    Inventor: Cevat Yerli
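    A toy state machine for the public-to-personal transition the abstract describes: crossing an assumed interaction threshold switches the session to a personal view and nominally dedicates a share of compute. The threshold, fields, and compute-share value are assumptions.

    ```python
    # Illustrative public-to-personal session transition.

    from dataclasses import dataclass

    PERSONAL_THRESHOLD = 5          # assumed number of interactions
    PERSONAL_COMPUTE_SHARE = 0.10   # assumed fraction of server compute

    @dataclass
    class UserSession:
        user_id: str
        interactions: int = 0
        mode: str = "public"
        compute_share: float = 0.0

        def register_interaction(self):
            self.interactions += 1
            if self.mode == "public" and self.interactions >= PERSONAL_THRESHOLD:
                self.mode = "personal"
                self.compute_share = PERSONAL_COMPUTE_SHARE

    if __name__ == "__main__":
        session = UserSession("alice")
        for _ in range(6):
            session.register_interaction()
        print(session)
    ```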
  • Patent number: 11475639
    Abstract: The disclosed artificial reality system can provide a user self representation in an artificial reality environment based on a self portion from an image of the user. The artificial reality system can generate the self representation by applying a machine learning model to classify the self portion of the image. The machine learning model can be trained to identify self portions in images based on a set of training images, with portions tagged as either depicting a user from a self-perspective or not. The artificial reality system can display the self portion as a self representation in the artificial reality environment by positioning it in the artificial reality environment relative to the user's perspective in the artificial reality environment. The artificial reality system can also identify movements of the user and can adjust the self representation to match the user's movement, providing more accurate self representations.
    Type: Grant
    Filed: January 3, 2020
    Date of Patent: October 18, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: James Allen Booth, Mahdi Salmani Rahimi, Gioacchino Noris
  • Patent number: 11475636
    Abstract: Embodiments of the present disclosure relate to techniques for providing an augmented reality experience for virtual desktops. In particular, certain embodiments relate to acquiring, by a computing device, one or more images from a client device and determining, by the computing device, that the one or more images contain an artifact to be augmented. Further, certain embodiments involve acquiring, by the computing device, a screen buffer from a virtual desktop or application running on it and applying, by the computing device, a geometric transformation to the screen buffer. Further, certain embodiments relate to augmenting, by the computing device, the one or more images by inserting the screen buffer onto the artifact, resulting in one or more augmented images. Further, certain embodiments relate to providing, by the computing device, the one or more augmented images to the client device in order to provide a user of the client device with the augmented reality experience.
    Type: Grant
    Filed: February 5, 2018
    Date of Patent: October 18, 2022
    Assignee: VMWARE, INC.
    Inventor: Shubham Verma
  • Patent number: 11474610
    Abstract: A system includes one or more sensors configured to detect first motion data of a head mounted display (HMD) and second motion data of a hand device associated with the HMD. The system includes processing circuitry configured to identify one or more obstacles in an environment around the HMD. The processing circuitry is configured to provide, using the HMD, display data representing a warning regarding potential collision with the one or more obstacles in response to at least one of the first motion data or the second motion data and a distance between the HMD and the one or more obstacles or the hand device and the one or more obstacles.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: October 18, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventor: Eugene Lee
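    An illustrative warning rule in the spirit of the abstract above: warn when the headset or the hand device is within a distance threshold of an obstacle and moving toward it. The thresholds and the velocity test are assumptions.

    ```python
    # Toy collision-warning check combining proximity and motion toward an obstacle.

    import math

    WARN_DISTANCE = 0.5   # meters (assumed)
    MIN_SPEED = 0.2       # meters/second (assumed)

    def approaching(velocity, to_obstacle):
        """True if the velocity vector has a component toward the obstacle."""
        return sum(v * o for v, o in zip(velocity, to_obstacle)) > 0.0

    def should_warn(device_pos, device_vel, obstacle_pos):
        to_obstacle = [o - p for o, p in zip(obstacle_pos, device_pos)]
        distance = math.dist(device_pos, obstacle_pos)
        speed = math.sqrt(sum(v * v for v in device_vel))
        return distance < WARN_DISTANCE and speed > MIN_SPEED and approaching(device_vel, to_obstacle)

    if __name__ == "__main__":
        hmd = dict(device_pos=(0.0, 1.7, 0.0), device_vel=(0.0, 0.0, 0.4))
        hand = dict(device_pos=(0.2, 1.2, 0.3), device_vel=(0.0, 0.0, 0.05))
        wall = (0.0, 1.5, 0.3)
        print("warn:", should_warn(obstacle_pos=wall, **hmd) or should_warn(obstacle_pos=wall, **hand))
    ```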
  • Patent number: 11475650
    Abstract: A method for facilitating an environmentally adaptive extended reality display in a physical environment includes virtually displaying content via a wearable extended reality appliance operating in the physical environment, wherein displaying content via the wearable extended reality appliance is associated with at least one adjustable extended reality display parameter. Image data is obtained from the wearable extended reality appliance and a specific environmental change unrelated to the virtually displayed content is detected in the image data. A group of rules associating environmental changes with changes in the at least one adjustable extended reality display parameter is accessed and a specific rule of the group of rules is determined, the specific rule corresponding to the specific environmental change. The specific rule is implemented to adjust the at least one adjustable extended reality display parameter based on the specific environmental change.
    Type: Grant
    Filed: April 1, 2022
    Date of Patent: October 18, 2022
    Assignee: MULTINARITY LTD
    Inventors: Tamir Berliner, Tomer Kahan, Orit Dolev
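    A minimal sketch of a rule table that associates detected environmental changes with adjustments to an extended-reality display parameter, matching the flow described above at a high level; the specific rules and parameter names are assumptions.

    ```python
    # Hypothetical rule table: environmental change -> display parameter adjustment.

    RULES = {
        # detected change               -> (parameter, new value)
        "room_brightened":               ("display_brightness", 0.9),
        "room_darkened":                 ("display_brightness", 0.4),
        "person_entered_field_of_view":  ("content_opacity",    0.5),
    }

    def apply_rule(detected_change, display_params):
        rule = RULES.get(detected_change)
        if rule is None:
            return display_params                 # no matching rule: leave unchanged
        parameter, value = rule
        updated = dict(display_params)
        updated[parameter] = value
        return updated

    if __name__ == "__main__":
        params = {"display_brightness": 0.7, "content_opacity": 1.0}
        print(apply_rule("person_entered_field_of_view", params))
    ```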
  • Patent number: 11475582
    Abstract: The method performed at an electronic device including one or more processors, a non-transitory memory, and a depth sensor includes: obtaining a task associated with a physical object within a physical environment; obtaining depth information, via the depth sensor, associated with the physical environment; determining one or more measurements for the physical object based at least in part on the depth information; obtaining a graphical overlay based at least in part on the task and the one or more measurements for the physical object; and causing presentation of the graphical overlay adjacent to a representation of the physical object, wherein the representation is obtained using sensor readings of the physical object.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: October 18, 2022
    Assignee: APPLE INC.
    Inventors: Austin Caleb Germer, Vincent Paul Sparacino, Adam James Bolton, Tomas Alvarez Rodriguez, Ryan Steven Bullock, Lori Lenore Smallwood
  • Patent number: 11468606
    Abstract: A method for aligning displayed data in an augmented reality (AR) display includes determining a selected location context associated with a piece of equipment, determining a process element associated with the piece of equipment and according to a selected engineering process, determining, according to a digital representation of the equipment, a first location of the process element, receiving meta-sensor location data for one or more meta-sensors in the piece of equipment and indicating a second location for each of the meta-sensors with respect to the selected location context, determining a third location of the AR display with respect to the selected location context, determining overlay data for the process element, determining a display location according to the first location, the third location and the location data of each meta-sensor, and displaying the overlay data at the display location.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: October 11, 2022
    Assignee: TEXTRON INNOVATIONS INC.
    Inventors: Jeremy Robert Chavez, Daniel Wesley Rowe, Michael Eugene Moody, Brian Edward Tucker
  • Patent number: 11467247
    Abstract: An embodiment includes at least one computer readable storage medium comprising instructions that when executed enable a system to: receive (a)(i) first radio signal location data for a first object from a radio sensor; and (a)(ii) first visual signal location data for the first object from a camera sensor; perform feature extraction on (b)(i) the first radio signal location data to determine first extracted radio signal features; and (b)(ii) the first visual signal location data to determine first extracted visual signal features; solve a first association problem between the first extracted radio signal features and the first extracted visual signal features to determine first fused location data; and store the first fused location data in the at least one computer readable storage medium. Other embodiments are described herein.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: October 11, 2022
    Assignee: Intel Corporation
    Inventors: Mi S. Park, Lei Yang, Shao-Wen Yang, Myung Hwangbo, Shahrokh Shahidzadeh
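    The "association problem" between radio and visual detections can be illustrated as a linear assignment over pairwise distances, with matched pairs averaged into fused locations. The cost metric, fusion rule, and example coordinates below are assumptions, not the patented method.

    ```python
    # Sketch of radio-to-visual association via linear assignment, then fusion.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def fuse_locations(radio_xy, visual_xy):
        radio_xy = np.asarray(radio_xy, dtype=float)
        visual_xy = np.asarray(visual_xy, dtype=float)
        # Cost matrix: Euclidean distance between every radio/visual detection pair.
        cost = np.linalg.norm(radio_xy[:, None, :] - visual_xy[None, :, :], axis=-1)
        radio_idx, visual_idx = linear_sum_assignment(cost)
        # Fuse each matched pair; a weighted fusion could reflect sensor accuracy.
        return (radio_xy[radio_idx] + visual_xy[visual_idx]) / 2.0

    if __name__ == "__main__":
        radio = [(1.0, 2.1), (5.2, 0.9)]
        camera = [(5.0, 1.0), (1.1, 2.0)]
        print(fuse_locations(radio, camera))
    ```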
  • Patent number: 11467428
    Abstract: An apparatus to treat refractive error of an eye comprises an electroactive component configured to switch between a light scattering or optical power providing configuration to treat refractive error of the eye and a substantially transparent configuration to allow normal viewing. The electroactive component can be located on the lens away from a central axis of the lens to provide light to a peripheral region of the retina to decrease the progression of myopia. The electroactive component can be located on the lens away from the central axis of the lens in order for the wearer to view objects through an optical zone while the electroactive component scatters light. The electroactive component can be configured to switch to the substantially transparent configuration to allow light to pass through the electroactive component and to allow the lens to refract light to correct vision and allow normal viewing through the lens.
    Type: Grant
    Filed: March 29, 2022
    Date of Patent: October 11, 2022
    Assignee: ACUCELA INC.
    Inventors: Ryo Kubota, Amitava Gupta
  • Patent number: 11468605
    Abstract: A method is provided for rendering a mixed reality video. The method includes operations for capturing a head mounted display (HMD) game play by a user of a video game that is being executed on a computing system, where the HMD game play is being captured from a game play point of view (POV). The method further includes operations for identifying, by the computing system, a coordinate location of a camera that has a camera POV used to view the user during the HMD game play. In addition, the method further includes replaying the HMD game play to adjust the game play POV so that it substantially aligns with the camera POV. Moreover, the method includes rendering the mixed reality video by compositing video from the HMD game play after adjusting the game play POV and video from the camera POV. Rendering the mixed reality video also includes removing the background captured in the video from the camera POV so that the user appears partially within a scene of the video game when rendered in the mixed reality video.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: October 11, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Greg Corson
  • Patent number: 11467180
    Abstract: A distributed computing system for artificial intelligence in autonomously appreciating a circumstance context of a smart device. Raw context data is detected by sensors associated with the smart device. The raw context data is pre-processed by the smart device and then provided to a cloud based server for further processing. At the cloud based server, various sets of feature data are obtained from the pre-processed context data. The various sets of feature data are compared with corresponding classification parameters to determine a classification of a continuous event and/or a classification of transient event, if any, which occur in the context. The determined classification of the continuous event and the transient event will be used to autonomously configure the smart device or another related smart device to fit the context.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: October 11, 2022
    Assignees: STMICROELECTRONICS, INC., STMICROELECTRONICS INTERNATIONAL N.V.
    Inventors: Mahesh Chowdhary, Arun Kumar, Ghanapriya Singh, Rajendar Bahl
  • Patent number: 11468642
    Abstract: Aspects of using augmented reality for service projects are described. In some implementations, an augmented reality (AR) system receives one or more images of a real-world space, detects a physical object in the real-world space from the one or more images, and identifies the physical object for a service project. The AR system also generates a virtual object associated with the identified physical object and outputs the virtual object for display in an AR environment. The AR system further receives a selection of one or more portions of the virtual object associated with one or more portions of the physical object to be serviced and generates information for performing the service project based on the one or more portions of the physical object.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: October 11, 2022
    Assignee: Intuit Inc.
    Inventors: Yuhua Xie, Amanda Burgmeier, Jin Salil Barai, Athitya Kumar
  • Patent number: 11468644
    Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: October 11, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
  • Patent number: 11461983
    Abstract: An augmented reality surgical system includes a head mounted display (HMD) with a see-through display screen, a motion sensor, a camera, and computer equipment. The motion sensor outputs a head motion signal indicating measured movement of the HMD. The computer equipment computes the relative location and orientation of reference markers connected to the HMD and to the patient based on processing a video signal from the camera. The computer equipment generates a three dimensional anatomical model using patient data created by medical imaging equipment, and rotates and scales at least a portion of the three dimensional anatomical model based on the relative location and orientation of the reference markers, and further rotates at least a portion of the three dimensional anatomical model based on the head motion signal to track measured movement of the HMD. The rotated and scaled three dimensional anatomical model is displayed on the display screen.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: October 4, 2022
    Assignee: Globus Medical, Inc.
    Inventors: Kenneth Milton Jones, John Popoolapade, Thomas Calloway, Thierry Lemoine, Christian Jutteau, Christophe Bruzy, Yannick James, Joachim Laguarda, Dong-Mei Pei Xing, Sebastien Gorges, Paul Michael Yarin
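    An illustrative transform pipeline in the spirit of the abstract: scale the anatomical model, orient it from a marker-derived rotation, then apply a head-motion rotation to keep it registered to the HMD. The yaw-only parameterization and all values are assumptions.

    ```python
    # Toy scale-then-rotate registration of a 3D model against marker and head-motion data.

    import math
    import numpy as np

    def yaw_matrix(angle_rad):
        c, s = math.cos(angle_rad), math.sin(angle_rad)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def register_model(points, scale, marker_yaw_rad, head_yaw_rad):
        """points: (N, 3) array of model vertices in model space."""
        rotation = yaw_matrix(head_yaw_rad) @ yaw_matrix(marker_yaw_rad)
        return (scale * np.asarray(points)) @ rotation.T

    if __name__ == "__main__":
        model = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        registered = register_model(model, scale=0.01,
                                    marker_yaw_rad=math.radians(30),
                                    head_yaw_rad=math.radians(-5))
        print(registered)
    ```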
  • Patent number: 11455484
    Abstract: This disclosure describes architectures and techniques to provide information to a user about items with which the user interacts. In some instances, a user may utilize a wearable device that is configured to interact with one or more components of an information discovery system to obtain information about items in the user's environment.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: September 27, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, William Spencer Worley, III
  • Patent number: 11455486
    Abstract: An embodiment includes determining an experiential state of a first user participating in a mixed-reality experience. The embodiment also includes creating a first driver model that maps a relationship between the experiential state of the first user and a parameter of the mixed-reality experience. The embodiment also includes aggregating the first driver model with a plurality of driver models associated with experiential states and parameters of respective other users. The embodiment also includes creating a first cohort experience model using the aggregated driver models. The embodiment also includes deriving a first cohort experience parameter for the first cohort experience model. The embodiment also includes initiating an automated remedial action for participants in the mixed-reality system being associated with the first cohort experience model and the first cohort experience parameter.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: September 27, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Aaron K. Baughman, Hernan A. Cunico, Martin G. Keen, John M. Ganci, Jr.
  • Patent number: 11450105
    Abstract: A scalable sport data collecting, sharing and processing system includes: a hub device, and a plurality of client devices, the client devices being connected to the hub device via a data communication link. The client devices include a producer client device and a consumer client device; the producer client device includes one or more sensors selected from the group consisting of an image sensor, an audio sensor, a GPS/GNSS receiver, an accelerometer, a gyroscope, and a magnetometer, a processor, and a communication module; and the consumer client device includes a processor, a communication module, a display, and a data storage.
    Type: Grant
    Filed: April 21, 2021
    Date of Patent: September 20, 2022
    Inventor: Hanhui Zhang
  • Patent number: 11450074
    Abstract: A method for showing objects in an augmented reality environment includes the steps of an Augmented Reality (AR) device obtaining spatial information and object identification (ID) from an anchor device. The AR device obtains parameters of the objects from a cloud system. The parameters of an object include the image of the object, the audio of the object, and a first spatial relationship between the object and an anchor device. The AR device obtains a second spatial relationship between the AR device and the anchor device. A third spatial relationship between the object and the AR device can be obtained according to the first spatial relationship and the second spatial relationship. The image of the object and the audio of the object is displayed or projected in the AR device according to the third spatial relationship.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: September 20, 2022
    Assignee: AMBIT MICROSYSTEMS (SHANGHAI) LTD.
    Inventors: Yu-Hu Yan, Chien-Sheng Wu
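    Deriving a third spatial relationship by composing the first (object-to-anchor) and second (device-to-anchor) relationships can be sketched with rigid transforms; the 2D homogeneous-matrix representation and the example poses are assumptions.

    ```python
    # Composing anchor-relative transforms to get the object pose in the AR device frame.

    import math
    import numpy as np

    def make_transform(yaw_rad, tx, ty):
        """Homogeneous 2D transform expressing a pose in the anchor frame."""
        c, s = math.cos(yaw_rad), math.sin(yaw_rad)
        return np.array([[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]])

    # First relationship: object pose expressed in the anchor frame.
    anchor_T_object = make_transform(0.0, 2.0, 1.0)
    # Second relationship: AR device pose expressed in the anchor frame.
    anchor_T_device = make_transform(math.radians(90), 1.0, 0.0)

    # Third relationship: object pose in the AR device frame.
    device_T_object = np.linalg.inv(anchor_T_device) @ anchor_T_object
    print(device_T_object[:2, 2])   # where to render the object relative to the device
    ```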
  • Patent number: 11450033
    Abstract: Provided are an apparatus and method for experiencing an augmented reality (AR)-based screen sports match which enable even a child, an elderly person, and a person with a disability to easily and safely experience a ball sports match, such as tennis, badminton, or squash, as a screen sports match without using a wearable marker or sensor, a wearable display, an actual ball, and an actual tool.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: September 20, 2022
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Jong Sung Kim, Youn Hee Gil, Seong Min Baek, Hee Sook Shin, Seong Il Yang, Cho Rong Yu, Sung Jin Hong
  • Patent number: 11450034
    Abstract: In some embodiments, an augmented reality system is provided. The augmented reality system is configured to detect real-world objects, create software objects that represent the real-world objects, receive requests from applications to present virtual objects at locations associated with the real-world objects, and to present the virtual objects. In some embodiments, an operating system of the augmented reality system is configured to resolve conflicts between requests from multiple applications. In some embodiments, the operating system of the augmented reality system is configured to provide information to applications to allow the applications to avoid or resolve conflicts amongst themselves.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: September 20, 2022
    Assignee: UNIVERSITY OF WASHINGTON
    Inventors: Kiron Lebeck, Tadayoshi Kohno, Franziska Roesner
  • Patent number: 11450101
    Abstract: A head-mounted device can be operated to detect and respond to a user's behavior. The head-mounted device can be regularly and frequently worn while the user performs regular daily tasks, allowing the head-mounted device to collect a large volume of data across a long duration of time so that the cognitive status of the user can be evaluated with greater confidence than may be achieved with an evaluation of short duration and/or frequency. Evaluations performed by the head-mounted device can facilitate the determinations of which disorder is present, a severity of the disorder, trends, and/or forecasts. The head-mounted device can provide feedback that can guide and direct a user to correct actions despite a tendency to do otherwise due to cognitive decline.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: September 20, 2022
    Assignee: Apple Inc.
    Inventor: Paul X. Wang
  • Patent number: 11443495
    Abstract: Embodiments described herein provide a system for facilitating dynamic assistance to a user in an augmented reality (AR) environment of an AR device. During operation, the system detects a first element of an object using an object detector, wherein the object is associated with a task and the first element is associated with a step of the task. The system then determines an orientation and an alignment of the first element in the physical world of the user, and an overlay for the first element. The overlay can distinctly highlight one or more regions of the first element and indicate how the first element fits in the object. The system then applies the overlay to the one or more regions of the first element at the determined orientation in the AR environment.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: September 13, 2022
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Hsuan-Ting Wu, Robert R. Price
  • Patent number: 11442274
    Abstract: An electronic device according to various embodiments of the present disclosure includes one or more cameras having a designated field of view, a display, a communication circuitry, and a processor, wherein the processor is configured to identify an external electronic device among one or more external objects included in the designated field of view using the camera, display a graphic object corresponding to the external electronic device on the display based on first location information of the external electronic device identified based at least on the image information obtained using the camera, and when the external electronic device is out of the designated field of view, display the graphic object on the display based on second location information of the external electronic device identified using the camera before the external electronic device is out of the designated field of view and information related to the movement of the external electronic device received from the external electronic device via the communication circuitry.
    Type: Grant
    Filed: March 22, 2019
    Date of Patent: September 13, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyunji Song, Wootaek Song, Taekyung Lee
  • Patent number: 11443491
    Abstract: Systems and methods are provided for capturing by a camera of a user device, a first image depicting a first environment of the user device; overlaying a first virtual object on a portion of the first image depicting the first environment; modifying a surface of the first virtual object using content captured by the user device; storing a second virtual object comprising the first virtual object with the modified surface; and generating for display the second virtual object on a portion of a second image depicting a second environment.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: September 13, 2022
    Assignee: Snap Inc.
    Inventors: Samuel Edward Hare, Andrew James McPhee, Maxim Maximov Lazarov, Wentao Shang, Kyle Goodrich, Tony Mathew
  • Patent number: 11443718
    Abstract: A circuit device (160) includes an interface (161), a parameter computation unit (140), and a warp processing unit (130). A rendering image and tracking information are input to the interface (161). The parameter computation unit (140) computes a latency compensation parameter for compensating a latency including a rendering processing latency of a rendering image based on tracking information. The warp processing unit (130) performs latency compensation processing for compensating the position of a virtual object in a display region based on the latency compensation parameter.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: September 13, 2022
    Inventors: Wittmeir Manfred, Yosuke Tanikawa, Kumar anandabairavasamy Anand, Tetsuo Kawamoto
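    A toy version of latency compensation as outlined above: the angular velocity from tracking times the rendering latency gives a view change, which shifts the virtual object's position in the display region. The yaw-only small-angle model and the pixels-per-degree value are assumptions.

    ```python
    # Illustrative latency-compensated reposition of a virtual object in screen space.

    def compensated_position(x_px, y_px, yaw_rate_deg_s, latency_s,
                             pixels_per_degree=20.0):
        yaw_change_deg = yaw_rate_deg_s * latency_s
        # A positive yaw (head turning) shifts displayed content in the opposite direction.
        return x_px + yaw_change_deg * pixels_per_degree, y_px

    if __name__ == "__main__":
        # Object rendered at (640, 360); head yawing 30 deg/s; 15 ms total latency.
        print(compensated_position(640.0, 360.0, yaw_rate_deg_s=30.0, latency_s=0.015))
    ```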
  • Patent number: 11442278
    Abstract: A method for peripheral segmentation of an electronic contact lens includes determining a peripheral view for the electronic contact lens based on a user associated with the electronic contact lens and establishing a plurality of segmented areas in the peripheral view of the electronic contact lens, where each of the plurality of segmented areas in the peripheral view is associated with a software application from a plurality of software applications. The method also includes establishing a plurality of rules for accessing content of each software application from the plurality of software applications associated with the plurality of segmented areas. Responsive to displaying, in the electronic contact lens, the content of a first software application, an action is performed on the content of the first software application.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: September 13, 2022
    Assignee: International Business Machines Corporation
    Inventors: Subha Kiran Patnaikuni, Sarbajit K. Rakshit
  • Patent number: 11442534
    Abstract: A physician performing a physical examination can miss subtle abnormalities, such as alterations of respiratory rate or a dangerous skin lesion. A surgeon performing surgery can miss small areas of bleeding or tumor implants. This invention comprises a head display unit comprising sensors, including low light level TV (L3TV) cameras, that gather data, analyze the data in real time for potential hazards, and alert the user to a potentially hazardous scenario.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: September 13, 2022
    Assignee: RED PACS, LLC
    Inventors: Robert Edwin Douglas, Kathleen Mary Douglas, David Byron Douglas
  • Patent number: 11436808
    Abstract: Disclosed are various embodiments for selecting augmented reality (AR) objects based on contextual cues associated with an image captured by a camera associated with an electronic device. Contextual cues are obtained at an electronic device and AR objects are identified from a memory associated with the electronic device. The electronic device implements a processor employing image segmentation techniques to combine the identified AR objects with the captured image and render the combined image for display at a display associated with the electronic device.
    Type: Grant
    Filed: October 9, 2019
    Date of Patent: September 6, 2022
    Assignee: GOOGLE LLC
    Inventors: Diane Wang, Paulo Coelho, Tarik Hany Abdel-Gawad, Matthew Gilgenbach, Jackson Lango, Douglas Muir, Mark Dochtermann, Suddhasattwa Bose, Ashley Pinnick, Drew Skillman, Samantha Raja, Steven Toh, Brian Collins, Jay Steele
  • Patent number: 11436788
    Abstract: The present invention is a file generation apparatus that generates a file for generating a virtual viewpoint image. The image file generation apparatus in one aspect of the present invention has a material information accumulation unit 520 configured to acquire and accumulate material information used for generation of a virtual viewpoint image, a virtual advertisement information acquisition unit 530 configured to acquire advertisement information that is displayed on a virtual viewpoint image, and an image file generation unit 540 configured to generate an image file including material information and advertisement information.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: September 6, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yasufumi Takama
  • Patent number: 11438725
    Abstract: A system and a method are provided for selecting a site for displaying information to a user in a geographical area including a plurality of sites at which information may be displayed. Position, orientation and optionally gaze direction data are received from headsets worn by a plurality of users in the geographical area. The data is analyzed to determine for each site a probability of the site being viewed by users in said geographical area. A request is received for a selection of a site at which to display information to a particular user wearing a headset. The request identifies the headset and includes the position of said particular user. In response to the request, a site is selected based on the analysis and a signal identifying the selected site is output.
    Type: Grant
    Filed: November 22, 2018
    Date of Patent: September 6, 2022
    Assignee: EVERYSIGHT LTD.
    Inventors: Asaf Ashkenazi, Hanan Shamir, Ari Abramson, Shmuel Akerman
  • Patent number: 11435590
    Abstract: Techniques are described for providing stabilized display components in a wearable heads-up display (WHUD). A hinge is provided that includes at least two portions rotatable relative to each other, such that each portion may be directly coupled to an optical combiner or a light engine. A bias element may be provided to bias arms of a WHUD towards an unfolded configuration. The bias element and surrounding structure may inhibit movement of arms of the WHUD between the unfolded configuration and a folded configuration.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: September 6, 2022
    Assignee: Google LLC
    Inventors: Joshua Moore, Xiaofeng Li
  • Patent number: 11436800
    Abstract: A method for providing an immersive VR experience comprises defining, in computer memory, a three-dimensional model. The method further comprises producing field data based upon a simulation of the three-dimensional model. Additionally, the method comprises storing the field data within a data structure. The method also comprises extracting, for display, a surface of the three-dimensional model from a simulation model. The method additionally comprises creating a surface texture for the surface of the three-dimensional model from the field data. Further, the method comprises creating a query optimized grid from the calculated field data. Further still, the method comprises displaying a visualization of the calculated field data by means of the surface and the query optimized grid.
    Type: Grant
    Filed: November 7, 2019
    Date of Patent: September 6, 2022
    Assignee: IMMERSIVE VISUALIZATION, INC.
    Inventors: Robert Perry, Ernest Perry
  • Patent number: 11436790
    Abstract: In one embodiment, a method includes receiving image data corresponding to an external environment of a user. The image data is captured at a first time and comprises a body part of the user. The method also includes receiving a first tracking data generated based on measurements made at the first time by at least one motion sensor associated with the body part; generating, based at least on the image data, a model representation associated with the body part; receiving a second tracking data generated based on measurements made at a second time by the at least one motion sensor associated with the body part; determining a deformation of the model representation associated with the body part based on the first tracking data and the second tracking data; and displaying the deformation of the model representation associated with the body part of the user.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: September 6, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Gioacchino Noris, James Allan Booth, Alexander Sorkine Hornung
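    A rough sketch of updating a body-part representation from the change between two tracking samples, as the abstract outlines: vertices built from the first sample are rotated and translated by the tracking delta. The rigid 2D update is a simplification assumed for brevity.

    ```python
    # Updating a body-part model representation from two motion-sensor samples.

    import math
    import numpy as np

    def update_representation(vertices, tracking_t1, tracking_t2):
        """vertices: (N, 2) model points; tracking_*: dicts with 'pos' and 'yaw_rad'."""
        delta_pos = np.subtract(tracking_t2["pos"], tracking_t1["pos"])
        delta_yaw = tracking_t2["yaw_rad"] - tracking_t1["yaw_rad"]
        c, s = math.cos(delta_yaw), math.sin(delta_yaw)
        rotation = np.array([[c, -s], [s, c]])
        return np.asarray(vertices) @ rotation.T + delta_pos

    if __name__ == "__main__":
        hand_outline = np.array([[0.0, 0.0], [0.1, 0.0], [0.1, 0.2]])
        t1 = {"pos": (0.0, 0.0), "yaw_rad": 0.0}
        t2 = {"pos": (0.05, 0.02), "yaw_rad": math.radians(10)}
        print(update_representation(hand_outline, t1, t2))
    ```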
  • Patent number: 11430158
    Abstract: Described is an intelligent real-time multiplayer content management and data analytics system for AR-based social platforms. The data management system described herein manages AR content and multiple user interactions with the AR content. Additionally, a new multiplayer multiple-stage information augmentation design based on real-time data analysis and live AR interaction is described. In this design, AR content design is very flexible and may be organized into one or more stages containing pre-defined content, on-line searched content, user-generated content, other user-generated content, real-time user interactively generated content, or some combination. The flexible content structure allows for a highly customizable AR social experience to maximize the system performance and user experience.
    Type: Grant
    Filed: February 12, 2021
    Date of Patent: August 30, 2022
    Inventor: Eliza Y Du
  • Patent number: 11429333
    Abstract: A system and method for visualizing multiple datasets in a virtual 3-dimensional interactive environment. Multiple datasets may be related and virtually cast as 3-dimensional type structures. User interfaces, such as game controllers or headsets, may be used to present the dataset from differing perspectives including the appearance of moving through the data. Certain embodiments provide for mirror image views that allow for presentation of higher order datasets. Other embodiments provide for animation or motion indicia to show how the data is changing and the results on the display. The datasets may represent physical areas or virtual areas as well as demographic, sensors and financial information.
    Type: Grant
    Filed: September 14, 2019
    Date of Patent: August 30, 2022
    Assignee: BadVR, Inc.
    Inventors: Jad Meouchy, Suzanne Borders