Augmented Reality (real-time) Patents (Class 345/633)
  • Patent number: 11184562
    Abstract: A method of generating an augmented reality environment, the method comprising: transmitting information of a target content to an augmented reality device using light generated by one or more light emitters; responsive to a reception of the information of the target content by the augmented reality device, determining a portion of the target content for displaying on a physical display area; and displaying the portion of the target content as an overlay on the physical display area using the augmented reality device.
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: November 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Ben Z. Akselrod, Anthony DiLoreto, Steve McDuff, Kyle D. Robeson
  • Patent number: 11181379
    Abstract: A system and method for generating a tracking state for a device includes synchronizing measurement data from exteroceptive sensors and an inertial measurement unit (IMU). A processing unit is programmed to offset one of the measurement signals by a time offset that minimizes a total error between a change in rotation of the device predicted by the exteroceptive sensor data over a time interval defined by an exteroceptive sensor sampling rate and a change in rotation of the device predicted by the IMU sensor data over the time interval.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: November 23, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Benzun Pious Wisely Babu, Mao Ye, Liu Ren
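The time-offset search this abstract describes can be sketched as a grid search: shift the IMU timeline by each candidate offset and keep the offset that minimizes the total error between camera-predicted and IMU-predicted rotation changes over each camera sampling interval. This is a hypothetical illustration using 1-D rotation angles and NumPy interpolation; the patent does not specify this implementation.

```python
import numpy as np

def estimate_time_offset(cam_times, cam_rot, imu_times, imu_rot,
                         candidate_offsets):
    """Pick the time offset that best aligns IMU rotation deltas with
    camera-predicted rotation deltas (rotations here are 1-D angles)."""
    best_offset, best_error = None, np.inf
    for dt in candidate_offsets:
        # Shift the IMU timeline by the candidate offset, then sample the
        # IMU rotation at each camera timestamp by interpolation.
        imu_at_cam = np.interp(cam_times, imu_times + dt, imu_rot)
        # Compare rotation changes over each camera sampling interval.
        cam_delta = np.diff(cam_rot)
        imu_delta = np.diff(imu_at_cam)
        error = np.sum((cam_delta - imu_delta) ** 2)
        if error < best_error:
            best_offset, best_error = dt, error
    return best_offset
```

With synthetic data whose IMU timestamps lag the camera by a known amount, the search recovers that lag to within the grid resolution.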
  • Patent number: 11182965
    Abstract: In one example, a method for generating and displaying markers in XR environments to enhance social engagement among users includes: presenting, by a processing system, an extended reality environment to a first user, wherein the extended reality environment combines elements of a real world environment surrounding the first user with elements of a virtual world; inferring, by the processing system, a marker to be associated with a second user in the extended reality environment, wherein the marker indicates information about the second user; and modifying, by the processing system, the extended reality environment to incorporate the marker in a manner that is apparent to the first user.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: November 23, 2021
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Eric Zavesky, Nigel Bradley, Nikhil Marathe, James Pratt
  • Patent number: 11182976
    Abstract: The invention relates to devices for acting on virtual objects, namely to devices for acting on virtual objects of augmented reality. The device comprises one or more video cameras, a SONAR module, an infrared camera, and a display, connected to a computational module and a processing device. The device has a database storing actions of virtual objects, facial expressions and gestures of the user, recognized objects of the real world, and distances to such objects. All modules of the device are connected to the computing module, which has an electronic unit adapted to select commands stored in the database based upon the information received through the various modules of the device. As a result, certain actions of virtual objects of augmented reality are activated, and the resulting video stream is shown to the user. According to the invention, the command recognition block further comprises a module for determining the heart rate of the user.
    Type: Grant
    Filed: October 20, 2020
    Date of Patent: November 23, 2021
    Assignee: DEVAR ENTERTAINMENT LIMITED
    Inventors: Andrey Valeryevich Komissarov, Anna Igorevna Belova
  • Patent number: 11182629
    Abstract: A method for machine learning based driver assistance is provided. The method may include detecting, in one or more images of a driver operating an automobile, one or more facial landmarks. The detection of the one or more facial landmarks may include applying, to the one or more images, a first machine learning model. A gaze dynamics of the driver may be determined based at least on the one or more facial landmarks. The gaze dynamics of the driver may include a change in a gaze zone of the driver from a first gaze zone to a second gaze zone. A state of the driver may be determined based at least on the gaze dynamics of the driver. An operation of the automobile may be controlled based at least on the state of the driver. Related systems and articles of manufacture, including computer program products, are also provided.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: November 23, 2021
    Assignee: The Regents of the University of California
    Inventors: Sujitha Martin, Kevan Yuen, Mohan M. Trivedi
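The gaze-dynamics step above (a change from one gaze zone to another feeding a driver-state decision) can be illustrated with a toy rule. The zone labels, forward-zone name, and frame threshold are all hypothetical; the patent uses learned models rather than a fixed rule like this.

```python
def driver_state(gaze_zones, forward_zone="road", max_off_road=3):
    """Classify driver state from a per-frame sequence of gaze-zone labels.

    Flags a distracted state if the gaze stays out of the forward zone
    for max_off_road consecutive frames.
    """
    off_road = 0
    for zone in gaze_zones:
        # Reset the counter whenever the gaze returns to the forward zone.
        off_road = 0 if zone == forward_zone else off_road + 1
        if off_road >= max_off_road:
            return "distracted"
    return "attentive"
```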
  • Patent number: 11182944
    Abstract: An animation production method for taking animations in a virtual space, the method comprising: a step of placing a virtual camera in the virtual space; a step of placing one or more objects in the virtual space; a step of detecting an input of a user from at least one of a head mounted display and a controller mounted by the user; a step of accepting at least one choice of the object in response to the input; and a step of removing the object from the virtual space in response to the input.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: November 23, 2021
    Assignee: AniCast RM Inc.
    Inventors: Yoshihito Kondoh, Masato Murohashi
  • Patent number: 11182978
    Abstract: Various implementations disclosed herein render virtual content while accounting for airborne particles or lens-based artifacts to improve coherence or to otherwise better match the appearance of real content in the images with which the virtual content is combined.
    Type: Grant
    Filed: April 17, 2020
    Date of Patent: November 23, 2021
    Assignee: Apple Inc.
    Inventor: Daniel Kurz
  • Patent number: 11179633
    Abstract: A computer program comprising instructions which, when executed by a computer, cause the computer to carry out a sketching routine in a video game, wherein, in the sketching routine, a character controlled by a user produces a sketch of one or more features in the character's field of view. A computer-readable medium having such a computer program stored thereon is also provided.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: November 23, 2021
    Assignee: SQUARE ENIX LTD.
    Inventors: Raoul Barbet, Gaëlle Oliveau, Jeason Suarez, Martin Esquirol, Orson Favrel
  • Patent number: 11182124
    Abstract: A method including: receiving, by a computing device from an AR device worn by a user, a definition of a region of inclusion that includes included controllable devices and excludes excluded controllable devices, the included controllable devices being ones of a plurality of controllable devices that are inside the region of inclusion, and the excluded controllable devices being ones of the controllable devices that are outside of the region of inclusion; receiving, by the computing device from the AR device, an indication of the user to adjust the region of inclusion; adjusting, by the computing device, the region of inclusion based on the indication of the user; sending, by the computing device, a definition of the adjusted region of inclusion to the AR device; and instructing, by the computing device, the AR device to display to the user the adjusted region of inclusion projected over the included controllable devices.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: November 23, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Shubhadip Ray, Robert Huntington Grant, Zachary A. Silverstein, Sarbajit K. Rakshit
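The partitioning and adjustment described above can be sketched with a minimal geometric stand-in. Here the region of inclusion is modeled as a sphere in room coordinates; the shape, the device dictionary, and the adjustment interface are illustrative assumptions, not the patent's actual region definition.

```python
from dataclasses import dataclass
import math

@dataclass
class SphericalRegion:
    """Illustrative region of inclusion: a sphere in room coordinates."""
    center: tuple
    radius: float

    def partition(self, devices):
        """Split device positions into included and excluded device names."""
        included, excluded = [], []
        for name, pos in devices.items():
            d = math.dist(self.center, pos)
            (included if d <= self.radius else excluded).append(name)
        return included, excluded

    def adjust(self, delta):
        """Grow or shrink the region in response to a user indication."""
        self.radius = max(0.0, self.radius + delta)
```

After an adjustment, re-partitioning reflects the new boundary, mirroring the send-back-and-redisplay loop in the claim.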
  • Patent number: 11182465
    Abstract: An augmented reality (AR) authentication method and system provides an AR environment that is presented on a display. The AR environment combines real-world object images captured by a camera with computer generated virtual icons. The virtual icons may be “mapped” to the real-world objects, whereupon the real-world objects take on the characteristics of the mapped images. During an authentication process, a user interacts with the AR environment by a variety of techniques to enter their passcode. Such interaction may include physically moving objects in the real-world so that such movement is reflected in the AR environment; controlling the “mapped” real-world objects through user interaction with the AR environment via an input device; and controlling the virtual icons through user interaction with the AR environment via the input device. Accordingly, if the user interacts with the AR environment in a manner that matches their previously “set” passcode the user is authenticated.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: November 23, 2021
    Inventor: Ye Zhu
  • Patent number: 11176750
    Abstract: An augmented reality surgical system includes a head mounted display (HMD) with a see-through display screen, a motion sensor, a camera, and computer equipment. The motion sensor outputs a head motion signal indicating measured movement of the HMD. The computer equipment computes the relative location and orientation of reference markers connected to the HMD and to the patient based on processing a video signal from the camera. The computer equipment generates a three dimensional anatomical model using patient data created by medical imaging equipment, rotates and scales at least a portion of the three dimensional anatomical model based on the relative location and orientation of the reference markers, and further rotates at least a portion of the three dimensional anatomical model based on the head motion signal to track measured movement of the HMD. The rotated and scaled three dimensional anatomical model is displayed on the display screen.
    Type: Grant
    Filed: March 17, 2020
    Date of Patent: November 16, 2021
    Assignee: Globus Medical, Inc.
    Inventors: Kenneth Milton Jones, John Popoolapade, Thomas Calloway, Thierry Lemoine, Christian Jutteau, Christophe Bruzy, Yannick James, Joachim Laguarda, Dong-Mei Pei Xing, Sebastien Gorges, Paul Michael Yarin
  • Patent number: 11173837
    Abstract: A system for optically cloaking an object includes a photophoretic optical trap display including a trap light source and an illumination light source; and a visual sensor at a first visual sensor location. The visual sensor captures an image of a scene, the trap light source is configured to generate a trap beam to control one or more of a position and an orientation of a scattering particle, and the illumination light source is configured to generate an illumination beam to illuminate the scattering particle to generate a reproduction image based on the image of the scene captured by the visual sensor.
    Type: Grant
    Filed: February 7, 2019
    Date of Patent: November 16, 2021
    Assignee: TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
    Inventors: Paul Schmalenberg, Ercan M. Dede
  • Patent number: 11178345
    Abstract: A video processing circuit outputs a video signal from an imaging device, which images a scene outside a vehicle, to a display unit. A main processing circuit generates an image from vehicle information and outputs the image to the video processing circuit, the main processing circuit taking longer to start up than the video processing circuit. The video processing circuit superimposes the image from the main processing circuit on the video signal from the imaging device and outputs the resultant video to the display unit.
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: November 16, 2021
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventor: Kazuho Sakurai
  • Patent number: 11170221
    Abstract: Provided is a technology which facilitates a search for an object such as a falling object. This object search system is provided with: a detection device which detects position information on a falling object; and a scope which is worn by a user and provided with a display unit that displays an augmented reality space in a real space. The detection device detects an object by using a radar device. A radar control device calculates the detected position information on the object and transmits the calculated position information to the scope through a wireless transmission device. The scope displays the position information, which pertains to the falling object and is acquired from the detection device, on an augmented reality space with a specific indication (an arrow or a wave shape, etc.).
    Type: Grant
    Filed: August 28, 2018
    Date of Patent: November 9, 2021
    Assignee: HITACHI KOKUSAI ELECTRIC INC.
    Inventors: Yosuke Sato, Kenichi Kashima
  • Patent number: 11170222
    Abstract: The present disclosure relates to an extended reality (XR) device and a method for controlling the same. More particularly, the present disclosure is applicable to all of the technical fields of 5th generation (5G) communication, robots, self-driving, and artificial intelligence (AI). The XR device comprises a display; a wireless communication module performing communication with an external server that provides Augmented Reality (AR) information; a first camera receiving an image that includes at least one object of a real world; and a processor acquiring AR information on the object from the external server through the wireless communication module and displaying the acquired AR information on the display, wherein the processor displays an icon instead of the AR information based on situational characteristics of the XR device, and acquires and displays the AR information, in response to an input for selecting the icon.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: November 9, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Minwoo Kim, Hyunjoo Ryu, Minhyun Park, Seokyoung Oh
  • Patent number: 11170535
    Abstract: A virtual reality interface method for providing fusion with a real space according to the present disclosure includes the steps of: analyzing information on an object of the real space to be projected in a virtual space based on image information of the real space acquired from a camera; determining transparency information for the object of the real space based on the object information of the real space; and fusing an object image of the real space and a virtual space based on the transparency information.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: November 9, 2021
    Assignee: DEEPIXEL INC
    Inventors: Jehoon Lee, Honam Ahn, Wahseng Yap
  • Patent number: 11170576
    Abstract: A progressive display system can compute a virtual distance between a user and virtual objects. The virtual distance can be based on: a distance between the user and an object, a viewing angle of the object, and/or a footprint of the object in a field of view. The progressive display system can determine where the virtual distance falls in a sequence of distance ranges that correspond to levels of detail. Using a mapping between content sets for the object and levels of detail that correspond to distance ranges, the progressive display system can select content sets to display in relation to the object. As the user moves, the virtual distance will move across thresholds bounding the distance ranges. This causes the progressive display system to select and display other content sets for the distance range in which the current virtual distance falls.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: November 9, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Varga, Jasper Stevens, Robert Ellis, Jonah Jones
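The mapping described above (a scalar virtual distance checked against a sequence of distance ranges to select a content set) can be sketched as follows. The weighting of distance, viewing angle, and footprint, and the threshold values, are illustrative assumptions; the patent does not prescribe a specific formula.

```python
def virtual_distance(distance, viewing_angle, footprint,
                     weights=(1.0, 0.5, 2.0)):
    """Combine physical distance, off-axis viewing angle (radians), and the
    object's field-of-view footprint into one scalar. A smaller footprint
    raises the virtual distance. The weighting is purely illustrative."""
    w_d, w_a, w_f = weights
    return w_d * distance + w_a * viewing_angle + w_f / max(footprint, 1e-6)

def select_content_set(vd, thresholds, content_sets):
    """thresholds are ascending upper bounds of the distance ranges;
    content_sets holds one entry per range plus a farthest-level fallback."""
    for bound, content in zip(thresholds, content_sets):
        if vd < bound:
            return content
    return content_sets[-1]
```

As the user moves and the virtual distance crosses a threshold, the selected content set changes, which is the progressive-display behavior the abstract describes.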
  • Patent number: 11163989
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing action localization in images and videos. In one aspect, a system comprises a data processing apparatus; a memory in data communication with the data processing apparatus and storing instructions that cause the data processing apparatus to perform image processing and video processing operations comprising: receiving an input comprising an image depicting a person; identifying a plurality of context positions from the image; determining respective feature representations of each of the context positions; providing a feature representation of the person and the feature representations of each of the context positions to a context neural network to obtain relational features, wherein the relational features represent relationships between the person and the context positions; and determining an action performed by the person using the feature representation of the person and the relational features.
    Type: Grant
    Filed: August 6, 2019
    Date of Patent: November 2, 2021
    Assignee: Google LLC
    Inventors: Chen Sun, Abhinav Shrivastava, Cordelia Luise Schmid, Rahul Sukthankar, Kevin Patrick Murphy, Carl Martin Vondrick
  • Patent number: 11164000
    Abstract: Disclosed is a mobile terminal for providing information based on an image, wherein the mobile terminal executes at least one of an installed artificial intelligence (AI) algorithm and a machine learning algorithm, and is capable of communicating with other electronic devices and external servers in a 5G communication environment. The mobile terminal includes a camera, a display, and a processor. Accordingly, since an image capture target can be accurately recognized, various services for improving user convenience can be provided.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: November 2, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Eun Sang Lee, Hye Young Koo, In Suk Kim, Jin Seong Lee
  • Patent number: 11164546
    Abstract: An example head mounted display (HMD) device includes a display for displaying content; a detection unit for detecting movement of an object in front of the HMD device; and a processor for changing, on the basis of the location of the object, a screen state of the display to provide an image of the front of the HMD device when the movement of the object is detected.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: November 2, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Seong-won Han, Woo-jin Park, Dae-hyun Ban, Sangsoon Lim
  • Patent number: 11163997
    Abstract: In one general aspect, a method can include receiving a representation of a real-world scene captured by a user using a mobile device where the real-world scene is a portion of a real-world physical area. The method can include associating a location of the mobile device with an AR anchor based on a comparison of the representation of the real-world scene with a portion of a model of the real-world physical area. The method can include triggering display of an AR object associated with the model of the real-world physical area within the mobile device based on the location of the mobile device.
    Type: Grant
    Filed: May 4, 2020
    Date of Patent: November 2, 2021
    Assignee: Google LLC
    Inventors: Steven Soon Leong Toh, Brandon Hyman, Eric Lai-Ong-Teung, Brian Collins, Edgar Chung
  • Patent number: 11164390
    Abstract: A user wears a virtual reality mask (VR mask) (100) and then watches a virtual reality image. The VR mask (100) blocks light incident from the outside. When the user wears the VR mask (100), the user cannot discover an obstacle (500) positioned outside the VR mask (100). The user may collide with the obstacle (500) and may be injured by the obstacle (500). Accordingly, in order to prevent injury to the user who uses the VR mask (100), it is necessary to make the user recognize the obstacle (500) positioned outside the VR mask (100). In order to recognize an obstacle, a virtual reality image and the obstacle may be displayed together, or the amount of light incident from the outside may be adjusted.
    Type: Grant
    Filed: November 27, 2020
    Date of Patent: November 2, 2021
    Assignee: Korea Institute of Science and Technology
    Inventors: Min Chul Park, Ji Hoon Kang, Jun Yong Choi, Kyul Ko, Dae Hwan Ahn, Dae Yeon Kim, Hyun Woo Ko
  • Patent number: 11163367
    Abstract: The invention is directed at a method of obtaining gesture zone definition data for a control system based on user input, wherein said user input is obtained through a mobile communication device external to said control system, the method comprising: receiving, by an image capture device, images of a space, and determining from the images, by a controller, a location data of the mobile communication device; providing, through the mobile communication device, a feedback signal in response to said determining of the location data, the feedback signal providing feedback information on said location data; receiving, via an input unit of the mobile communication device, an input signal indicative of an instruction command, and determining, by the controller, based on said instruction command, the gesture zone definition data. The invention is further directed at a method of operating a mobile communication device, to a computer program product, and to a control system.
    Type: Grant
    Filed: July 16, 2015
    Date of Patent: November 2, 2021
    Assignee: SIGNIFY HOLDING B.V.
    Inventors: Jonathan David Mason, Dzmitry Viktorovich Aliakseyeu, Sanae Chraibi
  • Patent number: 11164377
    Abstract: Methods and systems of navigating within a virtual environment are described. In an example, a processor may generate a portal that includes a set of portal boundaries. The processor may display the portal within a first scene of the virtual environment being displayed on a device. The processor may display a second scene of the virtual environment within the portal boundaries. The processor may receive sensor data indicating a movement of a motion controller. The processor may reposition the portal and the second scene in the first scene based on the sensor data, wherein the first scene remains stationary on the device during the reposition of the portal and the second scene. The processor may translate a location of the portal within the first scene to move the portal towards a user of the device until the second scene replaces the first scene being displayed on the device.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: November 2, 2021
    Assignee: International Business Machines Corporation
    Inventors: Aldis Sipolins, Lawrence A. Clevenger, Benjamin D. Briggs, Michael Rizzolo, Christopher J. Penny, Patrick Watson
  • Patent number: 11163922
    Abstract: Systems and methods are provided for real-time interactive design and simulation of a physical system to generate comparisons of varying system configurations under different physical conditions. A display is generated on a graphical user interface that displays a part in a physical system according to characteristic data. An initial simulation of the physical system is executed to determine an initial value for a metric of the initial design. The initial value is displayed on the graphical user interface. A change of the characteristic data or the environment condition is received through a user interface. The simulation of the physical system is recalculated to determine a next value for the metric based on the change, the next value for the metric being displayed on the graphical user interface along with the initial value in real time relative to the received change.
    Type: Grant
    Filed: April 27, 2020
    Date of Patent: November 2, 2021
    Assignee: ANSYS, Inc.
    Inventors: Vincent M. Pajerski, Joseph S. Solecki
  • Patent number: 11164351
    Abstract: System, method, and media for an augmented reality interface for sensor applications. Machines making up a particular production or processing facility are instrumented with one or more sensors for monitoring their operation and status and labeled with machine-readable tags. When viewed by a technician through an augmented reality display, the machine-readable tags can be recognized using a computer-vision system and the associated machines can then be annotated with the relevant sensor and diagnostic data. The sensors may further form a mesh network in communication with a head-mounted display for an augmented reality system, eliminating the need for centralized networking connections.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: November 2, 2021
    Assignee: LP-Research Inc.
    Inventors: Zhuohua Lin, Klaus Petersen, Tobias Schlüter, Huei Ee Yap, Scean Monti Mitchell
  • Patent number: 11161029
    Abstract: A method, computer system, and computer program product for sport training on an augmented reality device or a virtual reality device is provided. The embodiment may include capturing a plurality of user movement data using one or more sensors. The embodiment may also include measuring a user body and eye gaze position based on the plurality of captured user movement data. The embodiment may further include calculating a body position difference by comparing the measured user body and eye gaze position with an expert-specified body position sequence. The embodiment may also include determining that a body position quality threshold is not satisfied based on the calculated body position difference. The embodiment may further include generating an instruction based on the calculated body position difference.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: November 2, 2021
    Assignee: International Business Machines Corporation
    Inventors: Enara C. Vijil, Dipyaman Banerjee, Kuntal Dey
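The compare-and-instruct loop above can be sketched with a toy joint-position metric. The joint names, the mean-distance metric, and the threshold value are illustrative assumptions; the patent does not fix a particular difference measure.

```python
import math

def body_position_difference(measured, expert):
    """Mean Euclidean distance between corresponding joint positions
    (joint names and coordinates are illustrative)."""
    dists = [math.dist(measured[j], expert[j]) for j in expert]
    return sum(dists) / len(dists)

def coaching_instruction(measured, expert, threshold=0.1):
    """Return None when the quality threshold is satisfied, otherwise an
    instruction naming the joint that deviates most from the expert pose."""
    diff = body_position_difference(measured, expert)
    if diff <= threshold:
        return None  # body position quality threshold satisfied
    worst = max(expert, key=lambda j: math.dist(measured[j], expert[j]))
    return f"adjust {worst}"
```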
  • Patent number: 11164391
    Abstract: In general, embodiments of the present invention provide methods, apparatuses, systems, computing devices, computing entities, and/or the like for performing mixed reality processing using at least one of depth-based partitioning of a point cloud capture data object, object-based partitioning of a point cloud capture data object, mapping a partitioned point cloud capture data object to detected objects of a three-dimensional scan data object, performing noise filtering on point cloud capture data objects based at least in part on geometric inferences from three-dimensional scan data objects, and performing geometrically-aware object detection using point cloud capture data objects based at least in part on geometric inferences from three-dimensional scan data objects.
    Type: Grant
    Filed: February 12, 2021
    Date of Patent: November 2, 2021
    Assignee: Optum Technology, Inc.
    Inventors: Yash Sharma, Vivek R. Dwivedi, Anshul Verma
  • Patent number: 11156471
    Abstract: A system is provided for hands-free handling of at least one asset by a user. The system includes a user device configured to be worn by a user and a control system remotely located relative to the user device and configured to exchange asset-related data with the user device, the asset-related data including at least one or more notifications related to the handling of the at least one asset by the user at the asset location. The user device contains a processor configured to determine location data associated with the at least one asset and including a location for the at least one asset, the determination being based, in part, upon the obtained asset identifier data; dynamically generate and display, at the user device, one or more navigational projections configured to guide the user to the asset location; and detect handling of the at least one asset by the user.
    Type: Grant
    Filed: August 14, 2018
    Date of Patent: October 26, 2021
    Assignee: UNITED PARCEL SERVICE OF AMERICA, INC.
    Inventor: Julio Gil
  • Patent number: 11157295
    Abstract: Systems and methods are provided for providing an intelligent personal assistant as a service. The method includes opening a digital application on an electronic device, connecting one or more third party applications with the digital application within or outside of given trusting levels or authorities, determining one or more tasks to be performed using the one or more third party applications, creating one or more operants/bubbles, wherein each operant/bubble corresponds to at least one of the one or more tasks in a specific supply chain sequence, selecting one or more operants/bubbles using the graphical user interface or smart projections, and running the selected one or more operants/bubbles in a specific supply chain sequence, using the graphical user interface or smart projections, enabling control of the one or more third party applications using the digital application on the user's best situational channel.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: October 26, 2021
    Inventor: Patrick Schur
  • Patent number: 11158289
    Abstract: In a wearable device, a master unit comprises a first display and a first processing circuitry, and a slave unit comprises a second display and a second processing circuitry. A method aims at outputting information in said displays. The method comprises acquiring a sensor signal from a sensor in the master unit; generating a synchronisation signal, based on the acquired sensor signal; transmitting the synchronisation signal from the first processing circuitry to the second processing circuitry; rendering information to be outputted in the first display, and in the second display based on the synchronisation signal; and outputting the information on the first display and the second display, respectively according to the synchronisation signal.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: October 26, 2021
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Marko Eromaki, Lauri Jääskelä, Harri Hakulinen
  • Patent number: 11151967
    Abstract: A method and system for generating attention pointers, including: displaying, in a display of a mobile device, objects within and outside a field of view (FOV) of a user, wherein the objects outside the FOV are real objects; monitoring, by a processor of the mobile device, for a change in an object within or outside the FOV; in response to a change, generating, by the processor, one or more attention pointers within the FOV of the user for directing user attention to the change in the object, which is either inside or outside the FOV; and displaying, by the processor, on a virtual screen within the FOV of the user, the one or more attention pointers, wherein the one or more attention pointers are dynamically configured to interact with the user in response to detections based on a movement of the user or the object within or outside the FOV of the user.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: October 19, 2021
    Assignee: Honeywell International Inc.
    Inventors: David Chrapek, Dominik Kadlcek, Michal Kosik, Sergij Cernicko, Marketa Szydlowska, Katerina Chmelarova
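The core geometric test above (is the changed object inside the FOV, and if not, which way should the pointer direct the user) can be sketched in 2-D. The flat-plane geometry, the heading convention, and the 90° FOV default are illustrative assumptions rather than the patent's implementation.

```python
import math

def attention_pointer_angle(user_heading, user_pos, object_pos, fov_deg=90.0):
    """Return None if the object lies inside the user's FOV, otherwise the
    signed angle (degrees) the pointer should indicate to turn toward it.

    user_heading is in degrees; positions are (x, y) in a shared 2-D frame.
    """
    dx, dy = object_pos[0] - user_pos[0], object_pos[1] - user_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # Wrap the relative angle into (-180, 180] degrees.
    rel = (bearing - user_heading + 180.0) % 360.0 - 180.0
    if abs(rel) <= fov_deg / 2.0:
        return None  # already within the field of view
    return rel
```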
  • Patent number: 11151775
    Abstract: An image processing apparatus includes a processor including hardware. The processor is configured to: generate a first virtual image when one wearer, of a first wearer wearing a first wearable device configured to communicate with the processor and a second wearer wearing a second wearable device configured to communicate with the processor, rides in a moving body and the other wearer virtually rides in the moving body, the first virtual image reflecting a behavior of the other wearer and representing a state where the other wearer virtually rides in the moving body as observed from a viewpoint of the one wearer; and output the generated first virtual image to the first or the second wearable device worn by the one wearer.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: October 19, 2021
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kazuhiro Itou, Hitoshi Kumon, Kotomi Teshima, Yoshie Mikami, Yuta Maniwa
  • Patent number: 11150102
    Abstract: The invention includes a turning determination unit 13 and an image reproducing unit 14. The turning determination unit 13 determines, based on road-route information regarding a traveling route and the roads on that route, the sections in which an automobile turns rather than travels straight. The image reproducing unit 14 displays a virtual space image selected in advance in the straight sections, and in each turning section displays a turning virtual space image whose view field turns in accordance with a turning pattern determined by the road shape and the traveling direction.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: October 19, 2021
    Assignee: ALPHA CODE INC.
    Inventor: Takuhiro Mizuno
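The turning-determination step above can be approximated by checking heading changes along the route polyline. A minimal sketch, assuming route geometry is available as 2-D points (the threshold and function names are illustrative assumptions, not from the patent):

```python
import math

def classify_sections(points, turn_threshold_deg=15.0):
    """Label each interior vertex of a route polyline as 'turn' or
    'straight' based on the heading change between adjacent segments.
    points: list of (x, y) coordinates along the travel route."""
    labels = []
    for i in range(1, len(points) - 1):
        h1 = math.atan2(points[i][1] - points[i - 1][1],
                        points[i][0] - points[i - 1][0])
        h2 = math.atan2(points[i + 1][1] - points[i][1],
                        points[i + 1][0] - points[i][0])
        # Absolute heading change, wrapped into [0, 180] degrees
        change = math.degrees(abs((h2 - h1 + math.pi) % (2 * math.pi) - math.pi))
        labels.append("turn" if change > turn_threshold_deg else "straight")
    return labels
```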
  • Patent number: 11153488
    Abstract: An imaging system comprises an image sensor configured to detect images, an inertial measurement unit configured to measure movement of the image sensor, a display unit configured to display the images detected by the image sensor, and a control unit. The control unit is configured to control display of the images by the display unit based on the movement measured by the inertial measurement unit. An exemplary variable latency and frame rate camera embodiment is disclosed.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: October 19, 2021
    Assignee: UNITED STATES OF AMERICA, AS REPRESENTED BY THE SECRETARY OF THE ARMY
    Inventors: Anthony Wayne Antesberger, William C. Cronk
  • Patent number: 11150310
    Abstract: A system for calibrating alignment of two or more sensors in a virtual reality (VR) or augmented reality (AR) display device, the two or more sensors in the display device including at least one magnetic sensor. The system can include a first pair of conductive loops oriented in parallel planes and spaced apart along a first axis. The system can also include a mount configured to attach to the display device and to support the display device in a first predetermined spatial relationship with respect to the first pair of conductive loops.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: October 19, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Michael Woods, Scott David Nortman
  • Patent number: 11151751
    Abstract: Systems and methods are described for augmenting visual content with a sponsored object instead of a selected object. An illustrative method receives an input selecting an object for augmenting visual content, determines whether a property of the selected object matches a property of a sponsored object included in a database of sponsored objects, and in response to determining that the property of the selected object matches the property of the sponsored object, augments the visual content with the sponsored object.
    Type: Grant
    Filed: November 8, 2018
    Date of Patent: October 19, 2021
    Assignee: Rovi Guides, Inc.
    Inventors: Christopher Franklin, Jennifer L. Holloway, Daniel P. Rowan, Nathalia S. Santos-Sheehan
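The matching-and-substitution logic described above can be sketched as a property lookup against a sponsored-object database. This is a minimal illustration assuming a single matching property (`category`); the field names are hypothetical, not from the patent:

```python
def find_sponsored_match(selected, sponsored_db):
    """Return the first sponsored object whose 'category' property
    matches the selected object's, or None if there is no match."""
    for candidate in sponsored_db:
        if candidate.get("category") == selected.get("category"):
            return candidate
    return None

def augment(visual_content, selected, sponsored_db):
    """Augment the visual content with the sponsored object when a
    match exists; otherwise fall back to the selected object."""
    match = find_sponsored_match(selected, sponsored_db)
    visual_content.append(match if match is not None else selected)
    return visual_content
```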
  • Patent number: 11150350
    Abstract: An apparatus for target location is disclosed. The apparatus includes a housing, which includes a range sensor to generate range data, an image sensor to generate image data, an inertial sensor to generate inertia data, and a processor. The processor is configured to receive the image data from the image sensor and determine a first orientation of the housing and receive the inertia data from the inertial sensor and modify the first orientation based on the inertia data to produce a modified orientation of the housing.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: October 19, 2021
    Assignee: ELBIT SYSTEMS OF AMERICA, LLC
    Inventors: Andrew Struckhoff, Tom Hardy, Jason R. Lane, James Sarette, Darius Coakley
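The orientation-modification step above (an image-derived orientation corrected with inertia data) resembles a complementary-filter update. A minimal single-axis sketch, offered as an illustrative simplification rather than the patented method:

```python
def fuse_orientation(image_yaw_deg, gyro_rate_dps, dt_s, alpha=0.98):
    """Propagate the image-derived yaw forward using the gyro rate,
    then blend back toward the image estimate (complementary filter).
    alpha weights the inertially propagated estimate."""
    propagated = image_yaw_deg + gyro_rate_dps * dt_s
    return alpha * propagated + (1.0 - alpha) * image_yaw_deg
```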
  • Patent number: 11150482
    Abstract: A system and method for generating virtual content associated with a physical object is described. A processor includes an augmented reality application. The augmented reality application creates virtual content at the head mounted device, and associates the virtual content with predefined conditions based on data from sensors embedded in the head mounted device at the time of creation of the virtual content. The virtual content is displayed in a display of the head mounted device in response to sensor data satisfying the predefined conditions.
    Type: Grant
    Filed: February 15, 2016
    Date of Patent: October 19, 2021
    Assignee: FACEBOOK TECHNOLOGIES, LLC
    Inventors: Brian Mullins, Matthew Kammerait
  • Patent number: 11151795
    Abstract: A computer-implemented virtual pop-up space comprises a traversable 3D representation of a real-world location, wherein the traversable 3D representation renders a first avatar of a first user. The inventive concept further provides a virtual interface configured to enable a user transaction through the first avatar within the traversable 3D representation of the real-world location.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: October 19, 2021
    Assignee: Wormhole Labs, Inc.
    Inventors: Curtis Hutten, Robert D. Fish, Brian Kim
  • Patent number: 11151791
    Abstract: Images and/or videos have associated therewith information such as the location and orientation of the camera used to capture them. The associated location and orientation (or pose) information facilitates subsequent processing for producing accurate and convincing augmented reality (AR) outputs. In addition, some embodiments associate user-specific information with images or videos to produce customized AR content on a user-to-user basis.
    Type: Grant
    Filed: April 17, 2019
    Date of Patent: October 19, 2021
    Assignee: EDX Technologies, Inc.
    Inventors: Roger Ray Skidmore, Blair Nelson Ahlquist, Dragomir Rosson
  • Patent number: 11145125
    Abstract: An immersive content presentation system can capture the motion or position of a performer in a real-world environment. A game engine can be modified to receive the position or motion of the performer and identify predetermined gestures or positions that can be used to trigger actions in a 3-D virtual environment, such as generating a digital effect, transitioning virtual assets through an animation graph, adding new objects, and so forth. Views of the 3-D environment can be rendered and composited views can be generated. Information for constructing the composited views can be streamed to numerous display devices in many different physical locations using a customized communication protocol. Multiple real-world performers can interact with virtual objects through the game engine in a shared mixed-reality experience.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: October 12, 2021
    Assignee: Lucasfilm Entertainment Company Ltd.
    Inventors: Roger Cordes, David Brickhill
  • Patent number: 11145130
    Abstract: One variation of a method for automatically capturing data from non-networked production equipment includes: detecting a location of a mobile device within a facility, the mobile device manipulated by an operator while performing a step of an augmented digital procedure at a machine in the facility; estimating a position of a display on the machine relative to a field of view of an optical sensor in the mobile device based on the location of the mobile device and a stored location of the machine within the facility; in response to the position of the display falling within the field of view of the optical sensor, selecting an image captured by the optical sensor; extracting a value, presented on the display, from a region of the image depicting the display; and storing the value in a procedure file for the augmented digital procedure completed at the machine.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: October 12, 2021
    Assignee: Apprentice FS, Inc.
    Inventors: Frank Maggiore, Angelo Stracquatanio, Milan Bradonjic, Nabil Chehade
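The "display falls within the field of view" test described above can be sketched geometrically from the device location, its heading, and the stored machine location. This is a 2-D floor-plan simplification with illustrative names and thresholds, not the patented implementation:

```python
import math

def display_in_fov(device_pos, device_heading_deg, machine_pos,
                   fov_deg=60.0, max_range_m=5.0):
    """Decide whether a machine's display plausibly falls inside the
    mobile device camera's field of view, given 2-D positions and the
    device heading in degrees."""
    dx = machine_pos[0] - device_pos[0]
    dy = machine_pos[1] - device_pos[1]
    if math.hypot(dx, dy) > max_range_m:
        return False  # too far away to read the display
    bearing = math.degrees(math.atan2(dy, dx))
    diff = (bearing - device_heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0
```

When this check passes, the system would select a camera frame and extract the displayed value from the image region depicting the display.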
  • Patent number: 11144091
    Abstract: One embodiment provides a method, including: identifying, using one or more sensors, an orientation of a wearable device; determining, using a processor, whether the orientation corresponds to an inactive orientation; and providing, responsive to determining that the orientation corresponds to the inactive orientation, a power off notification to a user of the wearable device. A method, including: detecting, using an audio capture device operatively coupled to a wearable device, environmental audio, wherein the wearable device is an augmented reality device; identifying, using a processor, augmented reality contents presented on a display of the wearable device; determining, using a processor, whether the environmental audio and the augmented reality contents correspond to a similar context; and presenting, responsive to determining that the environmental audio and the augmented reality contents do not correspond to the similar context, a power save option to a user. Other aspects are described and claimed.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: October 12, 2021
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: Song Wang, Mengnan Wang, Hong Xiong, Zhenyu Yang
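The inactive-orientation check in the first method above can be sketched as a threshold test on the wearable's attitude. The threshold and return value are illustrative assumptions (e.g., glasses set face-down on a table), not values from the patent:

```python
def check_power_notification(pitch_deg, roll_deg, inactive_threshold_deg=70.0):
    """Flag an inactive orientation when pitch or roll strays far from
    the normal worn position, returning a power-off notification
    identifier, or None when the device appears to be in use."""
    if abs(pitch_deg) > inactive_threshold_deg or abs(roll_deg) > inactive_threshold_deg:
        return "power_off_notification"
    return None
```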
  • Patent number: 11145117
    Abstract: Aspects of the subject disclosure may include, for example, identifying a physical domain, identifying a geometry of a digital entity adapted for presenting digital content within a rendered display of the physical domain according to the geometry and storing the geometry in association with the physical domain to obtain a stored geometry. A location of equipment of a user is determined and associated with the physical domain to obtain an association. Responsive to the association, the stored geometry is provided to the equipment of the user for presenting the digital content within the rendered display of the physical domain according to the geometry. Other embodiments are disclosed.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: October 12, 2021
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Barry A. Smith, Julian Volyn, Andrew Jonez
  • Patent number: 11145123
    Abstract: A mobile device that includes a camera and an extended reality software application program is employed by a user in an operating environment, such as an industrial environment. The user aims the camera within the mobile device at optical data markers, such as QR codes, that are associated with machines in the environment. The mobile device acquires an image from the camera and decodes the optical data markers included in the acquired image. The mobile device queries the data intake and query system for the values of metrics for the machines associated with the decoded optical data markers. Upon receiving the metric values from the data intake and query system, the mobile device generates AR overlays and superimposes the AR overlays onto the acquired image. The mobile device displays the image with superimposed AR overlays on a display device.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: October 12, 2021
    Assignee: SPLUNK INC.
    Inventors: Jesse Chor, Michael Emery, Christopher Chan, Glen Wong, Devin Bhushan
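The decode-query-overlay pipeline above can be sketched as follows, assuming the QR payloads have already been decoded and that a callable stands in for the query to the data intake and query system (all field names and the lookup interface are hypothetical):

```python
def build_overlays(decoded_markers, metric_lookup):
    """Given marker payloads decoded from the camera image and a
    callable that queries metric values for a machine ID, produce
    text overlays to superimpose at each marker's image position."""
    overlays = []
    for marker in decoded_markers:
        metrics = metric_lookup(marker["machine_id"])
        label = ", ".join(f"{k}={v}" for k, v in sorted(metrics.items()))
        overlays.append({"position": marker["position"], "text": label})
    return overlays
```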
  • Patent number: 11144117
    Abstract: Methods, systems, and devices for deep learning based head motion prediction for extended reality are described. The head pose prediction may involve training one or more layers of a machine learning network based on application data and an estimated head motion range associated with the extended reality system. The network may receive one or more bias corrected inertial measurement unit (IMU) measurements based on a sensor. The network may model a relative head pose of the user as a polynomial of time over a prediction interval based on the bias corrected IMU measurements and the trained one or more layers of the machine learning network. The network may determine a future relative head pose of the user based on the polynomial (e.g., which may be used for virtual object generation, display, etc. within an extended reality system).
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: October 12, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Chiranjib Choudhuri, Ajit Deepak Gupte, Pushkar Gorur Sheshagiri, Gerhard Reitmayr, Tom Edward Botterill
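Modeling relative head pose "as a polynomial of time over a prediction interval" can be illustrated with a simple extrapolation: fit the unique quadratic through the last three pose samples and evaluate it at a future instant. This sketch uses an exact Lagrange fit on one axis and stands in for the learned model described in the abstract:

```python
def predict_pose(samples, t_future):
    """Fit a quadratic polynomial of time through three (t, yaw)
    samples (Lagrange form) and extrapolate the yaw to t_future."""
    (t0, y0), (t1, y1), (t2, y2) = samples

    def basis(t, a, b, c):
        # Lagrange basis polynomial for node a with other nodes b, c
        return (t - b) * (t - c) / ((a - b) * (a - c))

    return (y0 * basis(t_future, t0, t1, t2)
            + y1 * basis(t_future, t1, t0, t2)
            + y2 * basis(t_future, t2, t0, t1))
```

In the patent's framing, the polynomial coefficients come from a machine learning network fed with bias-corrected IMU measurements rather than from an exact fit.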
  • Patent number: 11144760
    Abstract: A computer-implemented system and method provide for a tagging user (TU) device that determines a first location of the TU device and receives, in the first location, a selection of a real-world object from a TU who views the object through the TU device. The TU device receives, from a TU, tagging information to attach to the object, and captures descriptive attributes of the object. The descriptive attributes and the tagging information associated with the first location are stored in a tagged object database.
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: October 12, 2021
    Assignee: International Business Machines Corporation
    Inventors: Robert Huntington Grant, Zachary A. Silverstein, Vyacheslav Zheltonogov, Juan C. Lopez
  • Patent number: 11144585
    Abstract: Methods and systems for supplying content to a user, including: configuring content data for presentation on a display of a user computing device; transmitting the content data to the user computing device for display; collecting behavior data of the user sensed by a tracking module; generating engagement data based on the collected behavior data; determining supplemental content configured for presentation on the display of the user computing device; and displaying the supplemental content to the user on the user computing device.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: October 12, 2021
    Assignee: MASSACHUSETTS MUTUAL LIFE INSURANCE COMPANY
    Inventors: Michal Knas, Jiby John
  • Patent number: 11140376
    Abstract: A calibrating method for calibrating the position of pictures on display elements of a binocular displaying device, the binocular displaying device comprising a right display element and a left display element to display right and left pictures. The method comprises a virtual-marker displaying step, during which a right virtual marker and a left virtual marker are displayed respectively by the right display element and the left display element while the wearer uses the binocular displaying device, the right and left virtual markers being at least visually vertically alignable with a real-world target at an alignment condition; and an aligning step, during which each of the right and left virtual markers is aligned with the real-world target.
    Type: Grant
    Filed: June 14, 2016
    Date of Patent: October 5, 2021
    Assignee: Essilor International
    Inventor: Denis Rousseau