Augmented Reality (real-time) Patents (Class 345/633)
  • Patent number: 11899833
    Abstract: To interact with and manipulate a virtual object associated with an AR experience displayed on the screen of a user device (e.g., a smartphone), the user must first learn or discover the particular gestures that correspond with a given function, but such gestures may not always be intuitive. The user may find it difficult or bothersome to learn/remember the specific gesture(s) needed to achieve a specific result. Moreover, there are only a limited number of gestures that can be performed using the screen of the device, often leading to gestural conflict. In some embodiments, a dual-interface AR system may be implemented to provide an AR experience for a user. The dual-interface AR system utilizes two user interfaces and distributes the input gestures between them, allowing the user to dynamically switch between the two and to use the interface that may be most intuitive to the user.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: February 13, 2024
    Inventor: Brennan Letkeman
  • Patent number: 11900553
    Abstract: A method and apparatus for processing augmented reality (AR) are disclosed. The method includes determining a compensation parameter to compensate for light attenuation of visual information caused by a display area of an AR device as the visual information corresponding to a target scene is displayed through the display area, generating a background image without the light attenuation by capturing the target scene using a camera of the AR device, generating a compensation image by reducing brightness of the background image using the compensation parameter, generating a virtual object image to be overlaid on the target scene, generating a display image by synthesizing the compensation image and the virtual object image, and displaying the display image in the display area.
    Type: Grant
    Filed: June 9, 2022
    Date of Patent: February 13, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Inwoo Ha
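    The compensation described above amounts to per-pixel arithmetic: darken the captured background by the display's attenuation and combine it with the virtual object layer. A minimal sketch in Python, assuming grayscale pixel values in nested lists and a single uniform attenuation factor; every name here is illustrative, not the patented implementation.
```python
# Hedged sketch of the compensation idea in patent 11900553's abstract.
# The uniform attenuation factor and list-of-rows image format are
# illustrative assumptions, not the patented method.

def compensate_and_compose(background, virtual, attenuation=0.7):
    """Darken the background by the display's attenuation factor (the
    'compensation image'), then overlay the virtual object layer, where
    None marks a transparent virtual pixel."""
    display = []
    for bg_row, v_row in zip(background, virtual):
        row = []
        for bg, v in zip(bg_row, v_row):
            compensated = bg * attenuation
            row.append(v if v is not None else compensated)
        display.append(row)
    return display

# Tiny 2x2 example: captured brightness values and one virtual pixel.
background = [[200, 180], [160, 140]]
virtual = [[None, 255], [None, None]]
print(compensate_and_compose(background, virtual))
```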
  • Patent number: 11899908
    Abstract: Certain aspects of the present disclosure provide techniques for providing an augmented reality user interface, including: receiving, by an image sensor of an electronic device, an image of a physical document; determining a document type associated with the physical document by performing image recognition on the image of the physical document; determining an augmented reality template to display on a display of the electronic device; displaying the augmented reality template on the display of the electronic device, wherein the augmented reality template is aligned in three dimensions with the physical document; determining a distance between the physical document and the electronic device; and enabling one or more interactive user interface elements within the augmented reality template displayed on the display of the electronic device if the determined distance between the physical document and the electronic device is less than a threshold distance.
    Type: Grant
    Filed: September 26, 2022
    Date of Patent: February 13, 2024
    Assignee: Intuit, Inc.
    Inventors: Molly Beth Davis, Timothy Joseph Mueller, Mark Anders Holmberg, Jessica Jaiyeon Cho, Anoop Pratap Singh Tomar
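    The last step of this abstract, enabling interactive elements only when the document is close enough, reduces to a threshold check on the measured distance. A minimal sketch under assumed names and an assumed threshold value.
```python
# Illustrative sketch of the distance-gated interaction described in
# patent 11899908's abstract; the class, field names and threshold
# value are assumptions.

from dataclasses import dataclass

@dataclass
class ArTemplate:
    document_type: str
    interactive_elements_enabled: bool = False

def update_template(template: ArTemplate, distance_m: float,
                    threshold_m: float = 0.5) -> ArTemplate:
    """Enable interactive UI elements only when the physical document is
    closer to the device than the threshold distance."""
    template.interactive_elements_enabled = distance_m < threshold_m
    return template

template = ArTemplate(document_type="W-2")
print(update_template(template, distance_m=0.3))   # elements enabled
print(update_template(template, distance_m=1.2))   # elements disabled
```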
  • Patent number: 11897421
    Abstract: Examples provide pre-configuration of an emergency vehicle lockable device. A device manager obtains a unique identifier (UID) assigned to a lockable device, such as a multicolor lightbar or a siren device in an uninitialized state. The UID and pre-configuration parameters are received in a request to pre-configure a single device or a batch request to pre-configure a plurality of lockable devices. The device manager updates a lock status of a lockable function from a locked state in which the lockable function is inoperable to a pre-unlocked state in accordance with the pre-configuration parameters. The device manager generates and stores unlock code(s) to unlock the pre-configured lockable function. In response to receiving the UID of a lockable device during initialization, the unlock code(s) are automatically transmitted to a user device associated with the lockable device to unlock and enable the pre-unlocked function to operate in accordance with the pre-configuration parameters.
    Type: Grant
    Filed: May 22, 2023
    Date of Patent: February 13, 2024
    Assignee: Feniex Industries
    Inventors: Hamza Deyaf, Kyle Hale, Nicholas Cameron Marth, Aaron Brown, Geoffrey Salazar, Tom Duong
  • Patent number: 11893700
    Abstract: Spatial information that describes spatial locations of visual objects in a three-dimensional (3D) image space, as represented in one or more multi-view unlayered images, is accessed. Based on the spatial information, a cinema image layer and one or more device image layers are generated from the one or more multi-view unlayered images. A multi-layer multi-view video signal comprising the cinema image layer and the device image layers is sent to downstream devices for rendering.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: February 6, 2024
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Ajit Ninan, Neil Mammen, Tyrome Y. Brown
  • Patent number: 11893696
    Abstract: Methods, systems, and computer readable media for providing an extended reality (XR) user interface. A method for providing an XR user interface occurs at a user device executing an XR application.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: February 6, 2024
    Assignee: THE TRUSTEES OF THE UNIVERSITY OF PENNSYLVANIA
    Inventors: Stephen H. Lane, Matthew Anthony Boyd-Surka, Yaoyi Bai, Aline Sarah Normoyle
  • Patent number: 11888909
    Abstract: A method for conducting a three dimensional (3D) video conference with multiple participants, the method may include (i) receiving, from a first participant and by software used to conduct the 3D video conference call, a request to see a second avatar that represents a second participant, within a representation of a virtual 3D environment displayed to the first participant during the 3D video conference; (ii) displaying the second avatar within the representation following a reception of (a) a first approval from the software to display the avatar within the representation, and (b) a second participant approval to display the avatar within the representation; and (iii) refraining from displaying the second avatar within the representation until the first approval and the second participant approval are received.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: January 30, 2024
    Assignee: TRUE MEETING INC.
    Inventor: Ran Oz
  • Patent number: 11886954
    Abstract: Image analysis is used to map objects in an arrangement. For example, images of a retail shelf are used to map items for sale on the retail shelf. A first vector can be used to identify a relative position of a first item on a shelf with respect to a shelving diagram, and a second vector can be used to identify a relative position of a second item on the shelf with respect to the shelving diagram, using locations of optical codes (e.g., barcodes). Absolute positions can be calculated. In some configurations, multiple images having different fields of view are matched to an overview image.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: January 30, 2024
    Assignee: Scandit AG
    Inventors: Fabian Nater, Bernd Schoner, Matthias Bloch, Christian Floerkemeier
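    The vector-based mapping this abstract describes can be pictured as resolving item offsets against known barcode locations in the shelving diagram. A minimal sketch with assumed coordinates; the actual patent covers far more (overview-image matching, multiple fields of view).
```python
# Sketch of the relative-to-absolute position idea in patent 11886954's
# abstract: item positions are expressed as offsets from detected barcode
# locations. Coordinates and names are assumptions.

def absolute_position(barcode_xy, offset_vector):
    """Resolve an item's offset vector against the known barcode location."""
    return (barcode_xy[0] + offset_vector[0], barcode_xy[1] + offset_vector[1])

# Barcode detected at (120, 45) px in the overview image; the item sits
# 30 px to the right of and 10 px above the barcode.
print(absolute_position((120, 45), (30, -10)))   # -> (150, 35)
```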
  • Patent number: 11886560
    Abstract: Disclosed is a system and a method for verifying a user using reality applications. The system includes a database for storing a plurality of modules, a server coupled to the database for processing the stored plurality of modules, reality glasses having a camera to capture movements of the user, and a reality display coupled to the server to overlay virtual objects and the processed plurality of modules onto a field of view of the user, wherein the plurality of modules authenticates the user to access at least one of the virtual objects. The plurality of modules includes a biometric module that performs a first-level verification by performing biometric scans on the user using the reality camera, a signature module that performs a second-level verification by verifying a signature of the user drawn in the air and captured by the reality camera, and a signature motion flow module that performs a third-level verification by verifying the flow of the user's signature captured by the reality camera.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: January 30, 2024
    Assignee: Bottomline Technologies, Inc.
    Inventor: Shay Bhubhut
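    The three verification levels in this abstract (biometric scan, air-drawn signature, signature motion flow) form a sequential pipeline in which every level must pass. A minimal sketch with placeholder check functions; the real modules operate on reality-camera data.
```python
# Sketch of the three-level verification flow in patent 11886560's abstract.
# The check functions are hypothetical stand-ins for the patented modules.

def verify_user(checks):
    """Run ordered verification levels; all must pass to authenticate."""
    for name, check in checks:
        if not check():
            print(f"{name} verification failed")
            return False
        print(f"{name} verification passed")
    return True

checks = [
    ("biometric", lambda: True),              # level 1: biometric scan
    ("signature", lambda: True),              # level 2: signature drawn in air
    ("signature motion flow", lambda: True),  # level 3: motion-flow match
]
print("authenticated:", verify_user(checks))
```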
  • Patent number: 11885101
    Abstract: Provided is a system which can avoid instability in the operating state of a working machine when switching the remote operation actor of the working machine. In a case where the state of the working machine 40, a first operator, or both is a specified state while the working machine 40 is remotely operated through a first remote operation apparatus 10, a first notification corresponding to the specified state is transmitted to a remote operation server 30. At the remote operation server 30, a second remote operation apparatus 20 that is appropriate in view of the content of the first notification is selected as the remote operation actor of the working machine 40.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: January 30, 2024
    Assignee: Kobelco Construction Machinery Co., Ltd.
    Inventors: Masaki Otani, Hitoshi Sasaki, Seiji Saiki, Yoichiro Yamazaki
  • Patent number: 11880944
    Abstract: A method and apparatus for displaying information. The method comprises: determining a field of view of a user viewing an environment via a display of a display device; using the field of view, determining a focal region of interest of the user in the environment; providing a database comprising a list of objects and, for each object, a location of that object in the environment; using the list of objects and a location of the focal region in the environment, identifying one or more of the objects that are at least partially located in the environment in or proximate to the focal region; acquiring information related to the identified one or more objects; generating an image element comprising the acquired information; and displaying the generated image element on the display of the display device through which the user is viewing the environment.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: January 23, 2024
    Assignee: General Electric Technologies GmbH
    Inventors: Tigran Andjelic, Oliver Sims
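    The lookup step in this abstract, identifying objects located in or near the focal region, can be approximated as a radius query against the object-location database. A minimal sketch with assumed 2-D coordinates and radius.
```python
# Sketch of the focal-region lookup described in patent 11880944's abstract:
# objects from a location database are selected when they fall within (or
# near) the user's focal region. Distances and object names are assumptions.

import math

OBJECT_DB = {"turbine A": (2.0, 1.0), "valve 7": (9.0, 4.0)}  # object -> (x, y)

def objects_near_focus(focus_xy, radius=3.0):
    """Return objects whose stored location is within `radius` of the focal
    region centre, approximating 'in or proximate to the focal region'."""
    fx, fy = focus_xy
    return [name for name, (x, y) in OBJECT_DB.items()
            if math.hypot(x - fx, y - fy) <= radius]

print(objects_near_focus((1.5, 1.5)))   # -> ['turbine A']
```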
  • Patent number: 11880946
    Abstract: Systems and methods herein describe a system for context triggered augmented reality. The proposed systems and methods receive first user input indicative of a selection of a user interface element corresponding to a recipient user, generate an augmented reality content item based on second user input from the first computing device, generate a contextual trigger for the generated augmented reality content item, the contextual trigger defining a set of conditions for presenting the generated augmented reality content item on a second computing device, generate a multi-media message comprising audio data recorded at the first computing device, detect at least one condition of the set of conditions being satisfied, and in response to detecting at least one of the set of conditions being satisfied, causing presentation of the augmented reality content item and the multi-media message at the second computing device.
    Type: Grant
    Filed: September 15, 2021
    Date of Patent: January 23, 2024
    Assignee: Snap Inc.
    Inventors: Brian Anthony Smith, Yu Jiang Tham, Rajan Vaish
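    The contextual trigger in this abstract (shared verbatim with patent 11842451 further down this list) is a set of conditions, at least one of which must be satisfied before the AR content and multimedia message are presented. A minimal sketch with assumed condition predicates.
```python
# Sketch of the contextual-trigger check in patent 11880946's abstract.
# The condition predicates and context fields are illustrative assumptions.

def should_present(conditions, context):
    """The abstract requires 'at least one condition' to be satisfied."""
    return any(cond(context) for cond in conditions)

conditions = [
    lambda ctx: ctx.get("location") == "coffee shop",
    lambda ctx: ctx.get("hour") == 9,
]
context = {"location": "coffee shop", "hour": 14}
if should_present(conditions, context):
    print("present AR content item + multimedia message on recipient device")
```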
  • Patent number: 11880945
    Abstract: The present disclosure describes a software-based solution for rendering digital crowds in real-time. The system at large is a network of machines that processes and ingests broadcast camera feeds and tracking data and then leverages an augmented reality system for compositing and tracking platforms to render tens of thousands of crowd members on top of live footage.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: January 23, 2024
    Assignee: SILVER SPOON ANIMATION INC.
    Inventor: Rotem Shiffman
  • Patent number: 11875564
    Abstract: A method, computer system, and a computer program product for part identification is provided. The present invention may include observing a user interaction with a physical object using an IoT device. The present invention may include identifying the physical object within a knowledge corpus using one or more image analysis tools. The present invention may include leveraging an identity of the physical object to identify one or more similar interactions with the physical object stored in the knowledge corpus. The present invention may include generating one or more recommendations based on the one or more similar interactions with the physical object. The present invention may include transmitting a user selected recommendation.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: January 16, 2024
    Assignee: International Business Machines Corporation
    Inventors: Venkata Vara Prasad Karri, Sarbajit K. Rakshit, Girish Padmanabhan, Abhijeet Sange
  • Patent number: 11874469
    Abstract: A method of controlling an imaging system for a Head Mounted Display (HMD) device. The method comprises capturing an external scene, for example using a camera, and determining an attenuation pattern for rendering a filter area. The method also comprises determining, based on the captured external scene, a compensation pattern for compensating at least part of the filter area, attenuating the external scene using the attenuation pattern, and generating a holographic image of a virtual object, the holographic image including the compensation pattern.
    Type: Grant
    Filed: February 2, 2022
    Date of Patent: January 16, 2024
    Assignee: Arm Limited
    Inventors: Daren Croxford, Roberto Lopez Mendez
  • Patent number: 11874956
    Abstract: Methods, devices, and systems related to a computing device for displaying an AR responsive to an input are described. An input can include, but is not limited to, a timestamp, weather data, event data, a rating, a user preference, a user input, or a location. In an example, a method can include receiving an input at an AR platform of a computing device from a processing resource of the computing device, receiving an image at the AR platform from a camera of the computing device, comparing the image to a number of AR images included on the AR platform, determining at the AR platform that the image is an AR image of the number of AR images, receiving at a user interface an AR associated with the AR image from the AR platform, and displaying the AR on the user interface in response to receiving the AR.
    Type: Grant
    Filed: January 10, 2023
    Date of Patent: January 16, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Radhika Viswanathan, Bhumika Chhabra, Carla L. Christensen, Zahra Hosseinimakarem
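    The matching step in this abstract, deciding that the captured image is one of the AR images registered on the platform, can be sketched as a best-match search with a similarity threshold. The toy pixel-match similarity below is purely illustrative.
```python
# Sketch of the image-matching step in patent 11874956's abstract. The
# similarity function and threshold are hypothetical placeholders, not the
# patented comparison.

def find_ar_for_image(captured, ar_platform, similarity, threshold=0.9):
    """Return the AR associated with the best-matching registered AR image,
    or None when nothing matches well enough."""
    best = max(ar_platform,
               key=lambda entry: similarity(captured, entry["image"]),
               default=None)
    if best and similarity(captured, best["image"]) >= threshold:
        return best["ar"]
    return None

def similarity(a, b):
    """Toy similarity: fraction of matching pixels in equal-size images."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

ar_platform = [
    {"image": (1, 2, 3, 4), "ar": "store-front overlay"},
    {"image": (9, 9, 9, 9), "ar": "weather overlay"},
]
print(find_ar_for_image((1, 2, 3, 5), ar_platform, similarity, threshold=0.7))
```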
  • Patent number: 11875469
    Abstract: Methods and systems are disclosed for displaying an augmented reality virtual object on a multimedia device. One method comprises detecting, in an augmented reality environment displayed using a first device, a virtual object; detecting, within the augmented reality environment, a second device, the second device comprising a physical multimedia device; and generating, at the second device, a display comprising a representation of the virtual object.
    Type: Grant
    Filed: October 28, 2022
    Date of Patent: January 16, 2024
    Assignee: Worldpay Limited
    Inventors: Kevin Gordon, Charlotte Spender
  • Patent number: 11877086
    Abstract: A method of generating at least one image of a real environment comprises providing at least one environment property related to at least part of the real environment, providing at least one virtual object property related to a virtual object, determining at least one imaging parameter according to the at least one provided virtual object property and the at least one provided environment property, and generating at least one image of the real environment representing information about light leaving the real environment according to the determined at least one imaging parameter, wherein the light leaving the real environment is measured by at least one camera.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: January 16, 2024
    Assignee: Apple Inc.
    Inventors: Sebastian Knorr, Daniel Kurz
  • Patent number: 11877203
    Abstract: Systems, methods, and non-transitory computer readable media including instructions for enabling location-based virtual content. Enabling location-based virtual content includes receiving an initial location of a particular wearable extended reality appliance; performing a first lookup for a first rule associating the particular wearable extended reality appliance with the initial location, the first rule permitting display of a first type of content and preventing display of a second type of content in the initial location; implementing the first rule; receiving a subsequent location of the particular wearable extended reality appliance; performing a second lookup for a second rule associating the particular wearable extended reality appliance with the subsequent location, the second rule preventing display of the first type of content and permitting display of the second type of content in the subsequent location; and implementing the second rule.
    Type: Grant
    Filed: March 21, 2023
    Date of Patent: January 16, 2024
    Assignee: SIGHTFUL COMPUTERS LTD
    Inventors: Tamir Berliner, Tomer Kahan
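    The rule lookups in this abstract pair each location with a rule that permits one content type and prevents another. A minimal sketch with assumed locations and content types.
```python
# Sketch of the location-rule lookup in patent 11877203's abstract. The
# rule contents and location names are illustrative assumptions.

RULES = {
    "office": {"permit": "work content",    "prevent": "private content"},
    "home":   {"permit": "private content", "prevent": "work content"},
}

def implement_rule(appliance_id, location):
    """Look up and apply the rule associating the appliance with a location."""
    rule = RULES.get(location)
    if rule is None:
        return f"{appliance_id}: no rule found for {location}"
    return (f"{appliance_id}: permit {rule['permit']}, "
            f"prevent {rule['prevent']} in {location}")

print(implement_rule("wearable-01", "office"))   # initial location
print(implement_rule("wearable-01", "home"))     # subsequent location
```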
  • Patent number: 11874092
    Abstract: An electronic device determines target information about a target and recommends a target based on the target information.
    Type: Grant
    Filed: March 26, 2023
    Date of Patent: January 16, 2024
    Inventor: Philip Lyren
  • Patent number: 11875466
    Abstract: Systems and methods for matching content elements to surfaces in a spatially organized 3D environment. The method includes receiving content, identifying one or more elements in the content, determining one or more surfaces, matching the one or more elements to the one or more surfaces, and displaying the one or more elements as virtual content onto the one or more surfaces.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: January 16, 2024
    Inventors: Denys Bastov, Victor Ng-Thow-Hing, Benjamin Zaaron Reinhardt, Leonid Zolotarev, Yannick Pellet, Aleksei Marchenko, Brian Everett Meaney, Marc Coleman Shelton, Megan Ann Geiman, John A. Gotcher, Matthew Schon Bogue, Shivakumar Balasubramanyam, Jeffrey Edward Ruediger, David Charles Lundmark
  • Patent number: 11876799
    Abstract: Disclosed are systems and methods for registering and localizing a building server. A system comprises a building server communicatively coupled with a computing cloud, and configured to initiate a registration process that comprises transmitting data identifying the building server. The computing cloud comprises at least a device registration module that receives the data transmitted from the building server, authenticates the building server, and generates and transmits data such as a building server password and a digital certificate. The computing cloud also comprises an identity management module that receives a request to create a unique ID associated with the building server, and updates a memory to indicate an association between the building server and the computing cloud.
    Type: Grant
    Filed: September 15, 2021
    Date of Patent: January 16, 2024
    Assignee: SIGNIFY HOLDING B.V.
    Inventors: Marcin Gramza, Mark Henricus Verberkt, Marcin Klecha
  • Patent number: 11868528
    Abstract: A method for user activity recognition for data glasses. The data glasses include at least one integrated sensor unit. An activity of a user of the data glasses is recognized, via an evaluation of data detected by the integrated sensor unit, in at least one user activity recognition step carried out by a classifying unit. When the user activity recognition step is carried out, the classifying unit takes into account information that has been ascertained by training the classifying unit in a (preferably user-specific) training step chronologically preceding the user activity recognition step.
    Type: Grant
    Filed: February 1, 2023
    Date of Patent: January 9, 2024
    Assignee: ROBERT BOSCH GMBH
    Inventors: Johannes Meyer, Thomas Alexander Schlebusch
  • Patent number: 11869003
    Abstract: A computer-implemented system and method for managing a data process in a virtual reality setting are provided.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: January 9, 2024
    Inventors: Austen Thomas Stewart, Ankur Bagchi, Matthew Frederick Faller
  • Patent number: 11865916
    Abstract: During a journey in a motor vehicle, a virtual environment is displayed by use of a display device, and a virtual representation of the route layout located in front of the motor vehicle is additionally displayed within the virtual environment by use of the display device. The display device may be used in the motor vehicle.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: January 9, 2024
    Assignee: AUDI AG
    Inventor: Marcus Kuehne
  • Patent number: 11869395
    Abstract: A switchable display system for providing virtual reality or augmented reality using a color calibration display apparatus may comprise: an augmented reality data server configured to provide virtual reality and augmented reality information; and a color calibration display apparatus using a color calibration display module configured to perform color calibration according to ambient illuminance in a use environment.
    Type: Grant
    Filed: November 2, 2021
    Date of Patent: January 9, 2024
    Assignee: POSTECH Research and Business Development Foundation
    Inventors: Ji Hyung Kim, Young Seok Kim, Sungeun Park, Wook Sung Kim
  • Patent number: 11869195
    Abstract: A target object controlling method, an apparatus (2), an electronic device, and a storage medium. The method includes: in response to a movement control operation triggered for a target object in a real scene image, determining a control direction corresponding to the movement control operation (101); obtaining a photographing direction of the real scene image (102); and controlling the target object to move in the real scene image according to the control direction and the photographing direction (103). The target object controlling method can effectively solve the prior-art problem that a direction deviation occurs in controlling the target object to move in the real scene image when the photographing direction of the real scene image changes, and can also effectively improve the operation performance of the target object in the real scene image, bringing a better manipulation experience to the user.
    Type: Grant
    Filed: August 5, 2022
    Date of Patent: January 9, 2024
    Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
    Inventor: Jiayi Zhang
  • Patent number: 11860439
    Abstract: A head-mounted device may have a head-mounted housing. Optical components may be supported by the head-mounted housing. The optical components may include cameras such as front-facing cameras and/or movable optical modules that have displays for displaying images to eye boxes. Sensors may be provided in the head-mounted device to detect changes in orientation between respective optical modules, between respective portions of a chassis, display cover layer, or other head-mounted support structure in the housing, between optical components such as cameras, and/or between optical components and housing structures. Information from these sensors can be used to measure image misalignment such as image misalignment associated with misaligned cameras or misalignment between optical module images and corresponding eye boxes.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: January 2, 2024
    Assignee: Apple Inc.
    Inventors: Muhammad F. Hossain, Samuel A. Resnick
  • Patent number: 11857335
    Abstract: Systems, methods and apparatuses for rehabilitation and/or training of a subject. Rehabilitation may be performed after neurological damage has occurred, including without limitation acute or chronic damage. Training as referred to herein relates to any process performed in order to improve the physical and/or cognitive function of a subject.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: January 2, 2024
    Assignee: MINDMAZE GROUP SA
    Inventors: Tej Tadi, Nicolas Fremaux, Jose Rubio, Jonas Ostlund, Sebastien Lasserre, Léandre Bolomey
  • Patent number: 11863905
    Abstract: A mapping between environments and devices included therein may be maintained, such that a configuration of each environment is known. Upon detecting that a user is within an environment, and based on a current device state of devices within the environment, an application may be generated and presented to the user via a corresponding user device. The application may allow the user to activate and control the devices within the environment. In particular, the application may depict selectable controls that correspond to functions or operations associated with the different devices within the environment. The application may also be dynamically updated based on an updated current device state of the devices.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: January 2, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Milo Oostergo, Gary Zhong
  • Patent number: 11861901
    Abstract: This disclosure enables various technologies involving various actions based on an object detecting a defined area and the defined area detecting the object.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: January 2, 2024
    Assignee: TransRobotics, Inc.
    Inventor: Sayf Alalusi
  • Patent number: 11861055
    Abstract: A virtual reality system 1 includes a display unit 13 that displays a video 100, a control unit 21 that controls a change of the video 100 displayed on the display unit 13, an origin position setting unit 14 that sets an origin position X, a current position recognition unit 15 that detects and recognizes a current position Y of a user U, and a progress direction setting unit 16 that calculates the direction of the current position Y recognized by the current position recognition unit 15 with respect to the origin position X set by the origin position setting unit 14 and sets a progress direction in the displayed video 100 in accordance with that direction. The control unit 21 controls the change of the video 100 to cause the video 100 to progress in the progress direction set by the progress direction setting unit 16.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: January 2, 2024
    Assignee: Five for Co., Ltd.
    Inventor: Takayoshi Tanigawa
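    The progress direction setting unit described above derives a heading from the origin position X to the recognized current position Y and advances the video along it. A minimal sketch of that geometry, with an assumed angle convention.
```python
# Sketch of the progress-direction computation in patent 11861055's
# abstract. The 2-D coordinates and atan2 angle convention are assumptions.

import math

def progress_direction(origin_x, current_y):
    """Return the heading (radians) from the origin position X to the
    user's current position Y; the video progresses along this heading."""
    dx = current_y[0] - origin_x[0]
    dy = current_y[1] - origin_x[1]
    return math.atan2(dy, dx)

print(math.degrees(progress_direction((0.0, 0.0), (1.0, 1.0))))  # ~45.0
```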
  • Patent number: 11861757
    Abstract: The disclosed artificial reality system can provide a user self representation in an artificial reality environment based on a self portion from an image of the user. The artificial reality system can generate the self representation by applying a machine learning model to classify the self portion of the image. The machine learning model can be trained to identify self portions in images based on a set of training images, with portions tagged as either depicting a user from a self-perspective or not. The artificial reality system can display the self portion as a self representation in the artificial reality environment by positioning it relative to the user's perspective in the artificial reality environment. The artificial reality system can also identify movements of the user and can adjust the self representation to match the user's movement, providing more accurate self representations.
    Type: Grant
    Filed: September 7, 2022
    Date of Patent: January 2, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: James Allan Booth, Mahdi Salmani Rahimi, Gioacchino Noris
  • Patent number: 11861267
    Abstract: An interactive design tool may be configured for real-time architectural adaptation. This may include a user device including a hardware processor, physical memory and a user interface. The user device may provide operations to generate a virtual reality (VR) architectural session including a toolbelt with a virtual selection tool for adaptation of at least one of an environment, an object and an avatar. The operations may further include receiving or selecting a selection spot on the object by a projection between the virtual selection tool and the object, receiving or selecting an adaptation relative to at least one of the object, the environment and the avatar, and displaying the adaptation to the at least one of the object, the environment and the avatar in real-time during the VR architectural session.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: January 2, 2024
    Assignee: HALSEY, MCCORMACK & HELMER, INC.
    Inventors: Christian Daniel Giordano, Michael Scott Kipfer, Jeffrey Anderson, Ahmad Y Tabbakh
  • Patent number: 11852825
    Abstract: Eye data is captured with one or more sensors of the head mounted device. The one or more sensors are configured to sense an eyebox region. User notifications are controlled based on the eye data.
    Type: Grant
    Filed: March 8, 2022
    Date of Patent: December 26, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Sebastian Sztuk, Salvael Ortega Estrada
  • Patent number: 11855933
    Abstract: A computer-implemented method, a computer system and a computer program product enhance content submissions of support chats. The method includes acquiring a support request from a user, wherein the support request includes a content submission. The content submission is selected from a group consisting of audio data, image data and text data. The method also includes determining a classification of the support request based on a request type. The method further includes obtaining a set of requirements for the support request from a server based on the determined classification. In addition, the method includes determining that the content submission does not meet the set of requirements. Lastly, the method includes generating an augmented reality inspector to assist the user in adding content to meet the set of requirements in response to the set of requirements not being met by the content submission.
    Type: Grant
    Filed: August 20, 2021
    Date of Patent: December 26, 2023
    Assignee: KYNDRYL, INC.
    Inventors: Mauro Marzorati, Todd Russell Whitman, Jeremy R. Fox, Sarbajit K. Rakshit
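    The flow in this abstract classifies the support request, fetches the requirements for that class, and triggers the augmented reality inspector only when the submission falls short. A minimal sketch with assumed request types and requirement sets.
```python
# Sketch of the requirement check in patent 11855933's abstract. The
# classification labels and requirement sets are illustrative assumptions.

REQUIREMENTS = {
    "hardware fault": {"image", "text"},
    "billing":        {"text"},
}

def handle_support_request(request_type, submitted_content_types):
    """Compare the content submission against the requirements for the
    request's classification; launch the AR inspector if anything is missing."""
    required = REQUIREMENTS.get(request_type, set())
    missing = required - set(submitted_content_types)
    if missing:
        return f"launch AR inspector to help add: {sorted(missing)}"
    return "requirements met, route request to agent"

print(handle_support_request("hardware fault", ["text"]))
```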
  • Patent number: 11856284
    Abstract: A method (100) of controlling a portable device comprising a first camera and a second camera facing in the same direction. The method comprises: selecting (110) one of the first camera and the second camera as a visualization camera; initializing (120) a localization algorithm having as an input image data representing images captured by one of the first camera and the second camera; determining (130) a respective focus score for at least one of the first camera and the second camera, said focus score indicating a focus quality of features identified from images captured by the respective camera; selecting (140) one of the first camera and the second camera as an enabled camera based on the at least one focus score; and generating a control signal configured to cause the selected camera to be enabled such that the image data representing images captured by the enabled camera are provided as the input to the localization algorithm.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: December 26, 2023
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Diego Gonzalez Morin, José Araújo, AmirHossein Taher Kouhestani, Ioannis Karagiannis, Ananya Muddukrishna, Lars Andersson
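    The camera choice in this abstract comes down to comparing focus scores and routing the better-focused camera's frames into the localization algorithm. A minimal sketch with assumed score values.
```python
# Sketch of the focus-score camera selection in patent 11856284's abstract.
# Camera names and score values are illustrative assumptions.

def select_enabled_camera(focus_scores):
    """Pick the camera with the highest focus score as the input source
    for the localization algorithm."""
    return max(focus_scores, key=focus_scores.get)

focus_scores = {"wide camera": 0.42, "tele camera": 0.78}
enabled = select_enabled_camera(focus_scores)
print(f"enable {enabled}; route its frames to the localization algorithm")
```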
  • Patent number: 11854147
    Abstract: Augmented reality guidance for guiding a user through an environment using an eyewear device. The eyewear device includes a display system and a position detection system. A user is guided through an environment by monitoring a current position of the eyewear device within the environment, identifying marker positions within a threshold of the current position, the marker positions being defined with respect to the environment and associated with guidance markers, registering the marker positions, generating an overlay image including the guidance markers, and presenting the overlay image on a display of the eyewear device.
    Type: Grant
    Filed: February 28, 2022
    Date of Patent: December 26, 2023
    Assignee: Snap Inc.
    Inventors: Shin Hwun Kang, Dmytro Kucher, Dmytro Hovorov, Ilteris Canberk
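    The marker registration step in this abstract selects marker positions lying within a threshold of the eyewear's current position before building the guidance overlay. A minimal sketch with assumed marker coordinates and threshold.
```python
# Sketch of the marker-selection step in patent 11854147's abstract.
# Marker names, coordinates and the threshold are illustrative assumptions.

import math

MARKERS = {"exit sign": (4.0, 2.0), "stairwell": (20.0, 5.0)}  # name -> (x, y)

def markers_in_range(current_xy, threshold=10.0):
    """Return guidance markers whose positions fall within the threshold
    distance of the eyewear device's current position."""
    cx, cy = current_xy
    return {name: pos for name, pos in MARKERS.items()
            if math.hypot(pos[0] - cx, pos[1] - cy) <= threshold}

print(markers_in_range((3.0, 1.0)))   # -> {'exit sign': (4.0, 2.0)}
```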
  • Patent number: 11854068
    Abstract: Images capture an object that a person in a frictionless store is looking at. A user interface is automatically initiated on a device identified as the object. If the person is holding an item, the user interface is automatically placed in a state that displays the item details and pricing within the user interface and provides a link to all item descriptions in possession of the user within the frictionless store, along with a running price total of all the items. If the person is not holding any item, the user interface is automatically placed in a state that displays item details and pricing within the user interface for all item descriptions in possession of the user within the frictionless store, along with a running price of all items.
    Type: Grant
    Filed: November 19, 2021
    Date of Patent: December 26, 2023
    Assignee: NCR Voyix Corporation
    Inventors: Chario Bardoquillo Maxilom, Ferdinand Salarda Acedera, John White Go Mabute
  • Patent number: 11847794
    Abstract: The disclosed system may include a housing dimensioned to secure various components including at least one physical processor and various sensors. The system may also include a camera mounted to the housing, as well as physical memory with computer-executable instructions that, when executed by the physical processor, cause the physical processor to: acquire images of a surrounding environment using the camera mounted to the housing, identify features of the surrounding environment from the acquired images, generate a map using the features identified from the acquired images, access sensor data generated by the sensors, and determine a current pose of the system in the surrounding environment based on the features in the generated map and the accessed sensor data. Various other methods, apparatuses, and computer-readable media are also disclosed.
    Type: Grant
    Filed: December 21, 2022
    Date of Patent: December 19, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventor: Samuel Redmond D'Amico
  • Patent number: 11847753
    Abstract: Aspects of the present disclosure are directed to providing an artificial reality environment with augments and surfaces. An “augment” is a virtual container in 3D space that can include presentation data, context, and logic. An artificial reality system can use augments as the fundamental building block for displaying 2D and 3D models in the artificial reality environment. For example, augments can represent people, places, and things in an artificial reality environment and can respond to a context such as a current display mode, time of day, a type of surface the augment is on, a relationship to other augments, etc. Augments can be on a “surface” that has a layout and properties that cause augments on that surface to display in different ways. Augments and other objects (real or virtual) can also interact, where these interactions can be controlled by rules for the objects evaluated based on information from the shell.
    Type: Grant
    Filed: January 9, 2023
    Date of Patent: December 19, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: James Tichenor, Arthur Zwiegincew, Hayden Schoen, Alex Marcolina, Gregory Alt, Todd Harris, Merlyn Deng, Barrett Fox, Michal Hlavac
  • Patent number: 11847677
    Abstract: An augmented reality-based lighting design method includes displaying, by an augmented reality device, a real-time image of a target physical area on a display screen. The method further includes displaying, by the augmented reality device, a lighting fixture 3-D model on the display screen in response to a user input, where the lighting fixture 3-D model is overlaid on the real-time image of the target physical area. The method also includes displaying, by the augmented reality device, a lighting pattern on the display screen overlaid on the real-time image of the target physical area, wherein the lighting pattern is generated based on at least photometric data associated with the lighting fixture 3-D model.
    Type: Grant
    Filed: August 2, 2022
    Date of Patent: December 19, 2023
    Assignee: SIGNIFY HOLDING B.V.
    Inventors: Nam Chin Cho, Parth Joshi, William Thomas Cook
  • Patent number: 11847793
    Abstract: An imaging system can receive an image of a portion of an environment. The environment can include an object, such as a hand or a display. The imaging device can identify a data stream from an external device, for instance by detecting the data stream in the image or by receiving the data stream wirelessly from the external device. The imaging device can detect a condition based on the image and/or the data stream, for instance by detecting that the object is missing from the image, by detecting a low resource at the imaging device, and/or by detecting visual media content displayed by a display in the image. Upon detecting the condition, the imaging device automatically determines a location of the object (or a portion thereof) using the data stream and/or the image. The imaging device generates and/or outputs content that is based on the location of the object.
    Type: Grant
    Filed: June 18, 2021
    Date of Patent: December 19, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Bijan Forutanpour, Sebastien Mounier, Jonathan Kies
  • Patent number: 11842514
    Abstract: A system and method for detecting a pose of an object is described. An augmented reality display device accesses first sensor data from an image sensor and a depth sensor of the augmented reality display device. The first sensor data includes a first plurality of images of an object and corresponding depth data relative to the augmented reality display device and the object. The augmented reality display device detects first features corresponding to the object by applying a convolutional neural network to the first sensor data, forms a plurality of training clusters based on the first features, and stores the plurality of training clusters in a training database.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: December 12, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventor: William Hoff
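    The abstract above ends with features being grouped into training clusters and stored. A minimal sketch of one way such clusters could be formed (greedy radius-based grouping); the grouping rule is an assumption, not the patented procedure, and the real system feeds CNN features rather than toy 2-D points.
```python
# Hedged sketch of the cluster-building step in patent 11842514's abstract.
# The greedy radius-based grouping and toy feature vectors are assumptions.

def build_clusters(features, radius=1.0):
    """Greedily assign each feature vector to the first cluster whose
    representative is within `radius`; otherwise start a new cluster."""
    clusters = []          # each cluster: {"rep": vector, "members": [...]}
    for f in features:
        for c in clusters:
            dist = sum((a - b) ** 2 for a, b in zip(f, c["rep"])) ** 0.5
            if dist <= radius:
                c["members"].append(f)
                break
        else:
            clusters.append({"rep": f, "members": [f]})
    return clusters

training_db = build_clusters([(0.1, 0.2), (0.2, 0.1), (5.0, 5.0)])
print(len(training_db), "training clusters stored")   # -> 2
```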
  • Patent number: 11842451
    Abstract: Systems and methods herein describe a system for context triggered augmented reality. The proposed systems and methods receive first user input indicative of a selection of a user interface element corresponding to a recipient user, generate an augmented reality content item based on second user input from the first computing device, generate a contextual trigger for the generated augmented reality content item, the contextual trigger defining a set of conditions for presenting the generated augmented reality content item on a second computing device, generate a multi-media message comprising audio data recorded at the first computing device, detect at least one condition of the set of conditions being satisfied, and in response to detecting at least one of the set of conditions being satisfied, causing presentation of the augmented reality content item and the multi-media message at the second computing device.
    Type: Grant
    Filed: September 15, 2021
    Date of Patent: December 12, 2023
    Assignee: Snap Inc.
    Inventors: Brian Anthony Smith, Yu Jiang Tham, Rajan Vaish
  • Patent number: 11840226
    Abstract: A travel control device carries out a travel control for a vehicle based on detected lane boundary lines. A first prescribed position, an absolute vehicle position and an absolute vehicle azimuth angle are stored while changing from a state where the lane boundary lines can be detected to a state where the lane boundary lines cannot be detected. A second prescribed position is stored while changing from a state in which the lane boundary lines cannot be detected to a state in which the lane boundary lines can be detected. The host vehicle is controlled to travel along a travel path connecting the first prescribed position and the second prescribed position where a current absolute position and a current absolute azimuth angle of the host vehicle do not deviate by a prescribed value or more from the stored absolute position and the stored absolute azimuth angle.
    Type: Grant
    Filed: November 27, 2020
    Date of Patent: December 12, 2023
    Assignees: Nissan Motor Co., Ltd., Renault S.A.S.
    Inventors: Chikao Tsuchiya, Shoichi Takei
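    The control strategy in this abstract bridges the gap between the position stored when lane lines are lost (first prescribed position) and the position stored when they reappear (second prescribed position). A minimal sketch that simply interpolates waypoints between the two stored positions; the straight-line interpolation is an illustrative assumption.
```python
# Sketch of the stored-position bridging in patent 11840226's abstract.
# Linear interpolation between the two prescribed positions is an assumption.

def bridge_path(first_pos, second_pos, steps=5):
    """Generate waypoints from the first prescribed position (lane lines
    lost) to the second prescribed position (lane lines re-detected)."""
    (x1, y1), (x2, y2) = first_pos, second_pos
    return [(x1 + (x2 - x1) * t / steps, y1 + (y2 - y1) * t / steps)
            for t in range(steps + 1)]

print(bridge_path((0.0, 0.0), (10.0, 2.0), steps=2))
```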
  • Patent number: 11842452
    Abstract: A method for positioning a virtual object in an image of a scene including a real object is disclosed. The method comprises receiving a signal from a short-range wireless transmitter arranged at a predetermined position relative to the real object and determining a location and orientation of the portable device relative to the short-range wireless transmitter based on the received signal. The image of the scene is displayed with the virtual object, wherein the position of the virtual object in the image is based on the determined location and orientation of the portable device relative to the short-range wireless transmitter and on the predetermined position of the short-range wireless transmitter relative to the real object. A system, a portable device and a computer program product are also disclosed.
    Type: Grant
    Filed: December 16, 2021
    Date of Patent: December 12, 2023
    Assignee: Hitachi Energy Ltd
    Inventors: Bernhard Deck, Stephan Gerspach, Michele Luvisotto
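    The positioning in this abstract composes two known relationships: the device's pose relative to the short-range transmitter (from its signal) and the transmitter's predetermined position relative to the real object. A minimal 2-D translation-only sketch; the patent also involves orientation.
```python
# Sketch of the two-stage positioning in patent 11842452's abstract.
# The 2-D translation-only composition and example offsets are assumptions.

def virtual_object_position(device_to_tx, tx_to_object):
    """Compose device->transmitter and transmitter->object translations to
    obtain the real object's (and hence virtual object's) position relative
    to the device."""
    return (device_to_tx[0] + tx_to_object[0],
            device_to_tx[1] + tx_to_object[1])

# Transmitter is 1.5 m right / 0.5 m above the device; the beacon is
# mounted 0.2 m to the left of the real object's anchor point.
print(virtual_object_position((1.5, 0.5), (-0.2, 0.0)))   # -> (1.3, 0.5)
```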
  • Patent number: 11836826
    Abstract: The subject technology selects a set of augmented reality content generators from a plurality of augmented reality content generators. The subject technology causes display, at a client device, of a graphical interface comprising a plurality of selectable graphical items. The subject technology receives, at the client device, a selection of a first selectable graphical item from the plurality of selectable graphical items, the first selectable graphical item comprising a first augmented reality content generator corresponding to a particular geolocation. The subject technology causes display, at the client device, of at least one augmented reality content item generated by the first augmented reality content generator.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: December 5, 2023
    Assignee: Snap Inc.
    Inventors: Ilteris Kaan Canberk, Virginia Drummond, Jean Luo, Alek Matthiessen, Celia Nicole Mourkogiannis
  • Patent number: 11836999
    Abstract: A method implemented on an augmented reality (AR) device includes receiving an image of a document on the AR device. The image of the document includes one or more areas of obfuscated text. A marker on the document is identified. The marker is associated with an area of obfuscated text on the document. The marker is scanned using the AR device. When the user of the AR device is authenticated, a non-obfuscated image of the text associated with the marker is displayed on the AR device.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: December 5, 2023
    Assignee: Wells Fargo Bank, N.A.
    Inventors: Robert L. Carter, Jr., Kourtney Eidam, Andres J. Saenz
  • Patent number: 11836978
    Abstract: A related information output apparatus 1 includes an acquisition unit 10 that acquires a captured image, a recognition unit 12 that recognizes one or more objects included in the captured image acquired by the acquisition unit 10, and an output unit 13 that outputs related information related to the objects recognized by the recognition unit 12. The output unit 13 may output the related information based on a combination of a plurality of the objects recognized by the recognition unit 12.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: December 5, 2023
    Assignee: NTT DOCOMO, INC.
    Inventors: Yoonok Heo, Kevin Nguyen