Augmented Reality (real-time) Patents (Class 345/633)
-
Patent number: 12046036
Abstract: Methods, devices, and computer-readable storage media for the customized presentation of digital information in a physical space. The method formulates information from the physical space and accesses a predetermined first group of data sets of a virtual space based on the formulated information. The accessed virtual space is associated with the physical space. The method then generates a local data set associated with the first group of data sets and accesses a predetermined second group of data sets of the same virtual space. The method then transforms the second group of data sets using the local data set and provides the transformed data sets to be overlaid on the physical space.
Type: Grant
Filed: October 21, 2019
Date of Patent: July 23, 2024
Assignee: PHORIA PTY LTD.
Inventors: Trent Clews-de Castella, Fabian Ulpiano
-
Patent number: 12046004
Abstract: Systems and methods for determining pose using a trained neural network are described, whereby a user device receives image data of a 3-dimensional ("3D") marker affixed to a 3D object to be tracked, provides a set of input data derived from the image data to a neural network stored on the user device, and generates a pose descriptor indicative of estimated pose of the 3D marker based on output of the neural network produced in response to receiving the set of input data. The 3D marker comprises a first surface to convey radiation in a first direction, and a second surface to convey radiation in a second direction different to the first direction, whereby the image processing system determines object pose from captured image data of at least a portion of the radiation conveyed from the first and/or second surface of the 3D marker affixed to the 3D object.
Type: Grant
Filed: August 27, 2020
Date of Patent: July 23, 2024
Assignee: ZETA MOTION LTD.
Inventor: Wilhelm Eduard Jonathan Klein
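The inference step this abstract describes — image-derived features in, pose descriptor out — can be illustrated with a toy forward pass. This is a minimal sketch, not the patented method: it assumes the trained network is represented as a list of (weights, bias) layers, with ReLU on hidden layers and a linear output layer producing the descriptor.

```python
def relu(v):
    """Element-wise ReLU activation."""
    return [max(0.0, x) for x in v]

def dense(v, weights, bias):
    """One fully connected layer: weights is a list of rows, bias a list."""
    return [sum(w * x for w, x in zip(row, v)) + b for row, b in zip(weights, bias)]

def pose_descriptor(features, layers):
    """Forward pass over (weights, bias) layers; the final vector is the descriptor."""
    v = features
    for i, (w, b) in enumerate(layers):
        v = dense(v, w, b)
        if i < len(layers) - 1:  # hidden layers use ReLU; output stays linear
            v = relu(v)
    return v
```

In practice the features would come from preprocessing the camera image of the 3D marker, and the descriptor would encode estimated rotation and translation.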
-
Patent number: 12039141
Abstract: Many users access artificial reality (XR) experiences through their mobile phones. However, it is difficult to translate XR experiences to a two-dimensional (2D) screen in a way that feels intuitive and natural. Thus, the technology can map an interaction plane in a three-dimensional (3D) scene to the 2D screen with as many affordances as possible, even if the plane is not parallel to the mobile phone. The plane can be a fixed surface or a dynamically changeable surface through which the user sends inputs through the 2D screen. The mapping of the plane to the 2D screen can control interaction with a virtual object on the interaction plane in the XR environment, enabling parity between the same experience on XR and non-XR interfaces.
Type: Grant
Filed: September 29, 2022
Date of Patent: July 16, 2024
Assignee: Meta Platforms Technologies, LLC
Inventor: Alex Elsayad
-
Patent number: 12039899
Abstract: A head-up display (HUD) system within a vehicle includes a controller adapted to initialize HUD graphics to be displayed by the HUD system within the vehicle, a projector adapted to project HUD graphics within a field of view of the HUD system onto an inner surface of a windshield of the vehicle, and a monitoring system adapted to collect real-time data of a head position and gaze direction of an occupant within the vehicle, wherein the controller is further adapted to adapt the HUD graphics displayed by the HUD system based on the real-time data of the head position and gaze direction of the occupant.
Type: Grant
Filed: April 25, 2023
Date of Patent: July 16, 2024
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Akilesh Rajavenkatanarayanan, Joseph F. Szczerba, Manoj Sharma
-
Patent number: 12039221
Abstract: Systems, methods, and computer program product for displaying virtual content with a wearable display device that, in response to identifying a first change from a first field of view to a second field of view, determines, based at least in part upon attribute(s) or criterion (criteria), first virtual content element(s) and second virtual content element(s) from the set that match to first surface(s) within the first field of view; determines second surface(s) within the second field of view based at least in part upon attribute(s) or criterion (criteria), the second surface(s) matching to the second virtual content element(s); moves the first virtual content element(s) from the first surface(s) to the second surface(s); and maintains the second virtual content element(s) with respect to the first surface(s) while the first field of view has been changed into the second field of view.
Type: Grant
Filed: March 28, 2023
Date of Patent: July 16, 2024
Assignee: Magic Leap, Inc.
Inventor: Genevieve Mak
-
Patent number: 12040953
Abstract: Facilitating metaverse service orchestration and hybrid software-defined network control with policy-enabled multiple input multiple output in advanced networks is provided herein. Operations of a system include identifying physical resources associated with a physical network and virtual resources associated with a virtual network. The physical resources and the virtual resources are determined to be utilized for fulfillment of a service request. The operations can also include combining the physical resources and the virtual resources into a composition of mixed resources. The composition of mixed resources can include the physical resources and the virtual resources. Further, the operations can include rendering, in a perceivable format, the composition of mixed resources in response to the service request.
Type: Grant
Filed: August 3, 2022
Date of Patent: July 16, 2024
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Zhi Cui, Paul E. Smith, Jr.
-
Patent number: 12039776
Abstract: Systems and methods are provided for presenting supplemental content in an augmented reality environment, where an object within a field of view of an augmented reality device of a user is identified and processed to detect a reference related to a participant in an event. A user profile or user social network is searched to identify a message from the user about the participant. The message may be combined with the object in the augmented reality field of view.
Type: Grant
Filed: July 26, 2022
Date of Patent: July 16, 2024
Assignee: Rovi Guides, Inc.
Inventors: Adam Bates, Jesse F. Patterson, Mark K. Berner, Eric Dorsey, Jonathan A. Logan, David W. Chamberlin, Paul Stevens, Herbert A Waterman
-
Patent number: 12039231
Abstract: The method includes displaying an augmented reality design file, including receiving, at a mobile communication device, the augmented reality design file, the augmented reality design file including a geolocation parameter; activating a geolocation mode of an architecture software application; reading the geolocation parameter of the received augmented reality design file; calculating a distance value and a directional parameter of the mobile computing device with respect to a location of a structure corresponding to the augmented reality design file; displaying the distance value and the directional parameter on a display of the mobile communication device; and continuing to update the distance value and the directional parameter as the mobile computing device moves toward the structure location.
Type: Grant
Filed: March 31, 2021
Date of Patent: July 16, 2024
Assignee: AUGmentecture, Inc.
Inventors: Zarik Boghossian, Alen Malekian
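The "distance value and directional parameter" computation this abstract describes is conventionally done with the haversine formula and an initial-bearing calculation from the two geolocations. A minimal sketch, assuming WGS-84 latitude/longitude in degrees and a spherical-Earth approximation (not the patent's actual implementation):

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) and initial bearing (degrees clockwise from north)
    from point 1 (device) to point 2 (structure)."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    # Haversine distance
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    # Initial bearing
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing
```

The app would re-run this on each location update to keep the displayed distance and direction current as the device moves toward the structure.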
-
Patent number: 12037003
Abstract: While a motor vehicle is travelling, a selected offer of entertainment for a vehicle occupant is output by use of a display device arranged in the motor vehicle. Driving characteristics of the motor vehicle are adapted by a control device of the motor vehicle to the offer of entertainment output at a given moment, and the motor vehicle is autonomously controlled in a way corresponding to the adapted driving characteristics.
Type: Grant
Filed: September 7, 2018
Date of Patent: July 16, 2024
Assignee: AUDI AG
Inventors: Marcus Kühne, Daniel Profendiner, Nils Wollny
-
Patent number: 12039588
Abstract: The techniques, methods, systems, and other mechanisms described herein include processes for determining if customized content should be generated, what information to include in the customized content, and when to provide the customized content. In general, a computing system determines that a user intends to travel to a physical venue. The computing system can determine if an entity associated with the physical venue has a web page. The computing system can determine various aspects of a predicted route of travel from the user's present location to the physical venue. The computing system can use location information indicating the user's current location and determine one or more routes of travel to the physical venue. The computing system can compare one or more determined aspects of the predicted route to threshold values to determine if customized content should be generated and presented to the user.
Type: Grant
Filed: March 18, 2020
Date of Patent: July 16, 2024
Assignee: GOOGLE LLC
Inventors: Scott James Ogden, Cayden Meyer
-
Patent number: 12039218
Abstract: Systems and methods are described for improved sharing of an experience between users. A first image captured using a first user device is received by control circuitry. A second image captured using a second user device is received by the control circuitry. Control circuitry determines a first set of elements of the first image. Control circuitry determines a second set of elements of the second image. Control circuitry determines whether at least one element of the first set of elements corresponds to at least one element of the second set of elements. Control circuitry determines that a transient element is present in the first set of elements and is not present in the second set of elements. Control circuitry generates for display the transient element on the second user device.
Type: Grant
Filed: March 28, 2022
Date of Patent: July 16, 2024
Assignee: Rovi Guides, Inc.
Inventor: Mona Singh
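The core of this abstract — finding an element present in one device's detected set but absent from the other's — is essentially a set difference over recognized image elements. A minimal sketch under that assumption (element labels standing in for whatever detection output the control circuitry actually produces):

```python
def find_transient_elements(first_elements, second_elements):
    """Elements detected in the first device's image but missing from the second's.

    These are the candidates to render onto the second user's view.
    """
    return set(first_elements) - set(second_elements)
```

A real pipeline would first establish correspondence between the two views (the "at least one element ... corresponds" step) before treating the leftover elements as transient.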
-
Patent number: 12033245
Abstract: A mixed reality system, comprising: a sensor configured to acquire readings of real-world data, and display, on an output device, a real-world visualization of the real-world data based on the readings to a user, wherein the sensor has one or more parameters affecting the real-world visualization; and a processing circuitry configured to: obtain (a) information of one or more virtual entities located within an area from which the readings are acquired, the information defining, for each of the virtual entities, one or more simulated physical properties, and (b) values of one or more situational parameters indicative of a state of the sensor during acquisition of the readings, wherein the values of the one or more situational parameters are readings of one or more situational sensors, sensing the state of the sensor and its surroundings during acquisition of the readings; determine, for at least one given virtual entity of the virtual entities, a virtual entity visualization of the given virtual entity
Type: Grant
Filed: August 29, 2023
Date of Patent: July 9, 2024
Assignee: ELBIT SYSTEMS LTD.
Inventors: Amir Sheffer, Ayelet Mashiah, Ofer Livneh, Yoav Ophir
-
Patent number: 12033382
Abstract: Provided is an electronic device including a body part, a glass member disposed on the body part, a display disposed on the glass member, a support part rotatably connected to the body part, a sensor including an eye tracking camera and a front camera configured to capture an image of a front side of a user, and a processor operatively connected to the display and the sensor, wherein the processor is configured to output at least one content via the display, recognize at least one object by the front camera, obtain the user's gaze dwell time for the object by the eye tracking camera, based on the gaze dwell time being longer than or equal to a reference time, obtain an area occupied by the object in a field of view (FOV) of the front camera, based on the area, output the at least one content on a region having no overlap with the object, or reduce a size of the at least one content and output the at least one content on one side of the display, and based on the user's gaze dwell time for the object being less
Type: Grant
Filed: October 4, 2022
Date of Patent: July 9, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Taehun Ko, Jiyeon Shin, Hoon Han, Yunjeong Ji
-
Patent number: 12033291
Abstract: The Adaptive Control Driven System (ACDS 99) supports visual enhancement and mitigation of visual challenges with basic image modification algorithms and any known hardware, from contact lenses to IOLs to AR hardware glasses. It enables users to enhance vision via a user interface based on a series of adjustments that are applied to move, modify, or reshape image sets and components, taking full advantage of the remaining useful retinal area, thus addressing aspects of visual challenges heretofore inaccessible by devices which learn needed adjustments.
Type: Grant
Filed: May 1, 2023
Date of Patent: July 9, 2024
Assignee: Eyedaptic, Inc.
Inventors: David Watola, Jay E. Cormier, Brian Kim
-
Patent number: 12026826
Abstract: An information processing apparatus includes at least one memory storing instructions and at least one processor that, upon execution of the instructions, is configured to operate as an instruction acquisition unit configured to acquire an instruction with respect to a virtual object in a mixed reality space from a user who is one of a plurality of users in the mixed reality space, a mode determination unit configured to determine, based on the acquired instruction, a display mode of the virtual object for another user in the mixed reality space, a viewpoint determination unit configured to determine, based on the determined display mode, a viewpoint with respect to the virtual object in the mixed reality space, and an image generation unit configured to generate an image of the mixed reality space including the virtual object viewed from the determined viewpoint.
Type: Grant
Filed: August 12, 2022
Date of Patent: July 2, 2024
Assignee: Canon Kabushiki Kaisha
Inventor: Wenjing Li
-
Patent number: 12026433
Abstract: A data processing method and apparatus, and a readable medium and an electronic device are provided. The method includes: collecting a facial image of a user by means of a terminal, taking the collected facial image as a first facial image, and displaying the first facial image in a user display area of a terminal screen, where the terminal screen displays a target object; collecting a sound signal of the user, and taking the collected sound signal as a target sound signal; and, if the sound intensity of the target sound signal is within a first intensity range, switching a display state of the target object, and switching the first facial image in the user display area to a second facial image, wherein the second facial image is obtained on the basis of the first facial image.
Type: Grant
Filed: December 21, 2022
Date of Patent: July 2, 2024
Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
Inventors: Yu Meng, Hua-Ting Chung
-
Patent number: 12026831
Abstract: An electronic device and method are disclosed. The electronic device includes a first and second camera, display, memory and processor. The processor implements the method, including acquiring image data of an external environment via the first camera, detecting a plurality of objects included in the image data, identifying a first object corresponding to the detected gaze among the detected plurality of objects, configuring a first precision for spatial mapping of the identified first object and a second precision of at least one other object from among the detected plurality of objects, wherein the first precision is higher than the second precision, executing 3D spatial mapping on the image data using the first precision for the identified first object and the second precision for the at least one other object, and displaying a 3D space generated based on the image data and the spatial mapping.
Type: Grant
Filed: October 20, 2022
Date of Patent: July 2, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sanghun Lee, Sungoh Kim, Donghwan Seo, Byeongyong Ahn, Dasom Lee
-
Patent number: 12028605
Abstract: A method for image collection, a computer storage medium, and a vehicle are provided. The method includes the following. An initial photographing direction of a mobile device associated with a vehicle is obtained, in response to determining that a predetermined condition is satisfied. An in-vehicle photography apparatus of the vehicle is adjusted to an initial orientation, to match a photographing direction at the initial orientation of the in-vehicle photography apparatus with the initial photographing direction of the mobile device. A drive signal configured to adjust the photographing direction of the in-vehicle photography apparatus is generated, based on a detecting signal of a sensor of the mobile device, such that the photographing direction of the in-vehicle photography apparatus synchronously varies with a pose of the mobile device. A surrounding image collected via the in-vehicle photography apparatus is transmitted to the mobile device, to display the surrounding image at the mobile device.
Type: Grant
Filed: January 8, 2021
Date of Patent: July 2, 2024
Assignee: SHANGHAI QWIK SMART TECHNOLOGY CO., LTD.
Inventor: Hongren Shi
-
Patent number: 12020388
Abstract: An augmented reality application executing on a client device receives video data captured by a camera of the device, in which the video data includes a display area of the device. The application detects a set of items within the display area based on the video data, wherein the set of items is included among an inventory of a warehouse associated with a retailer, and accesses a set of attributes of each item. The application retrieves profile information including a set of preferences associated with a customer of the retailer, matches one or more of the set of preferences with one or more attributes of each item, and generates an augmented reality element based on the matches. The augmented reality element is then displayed, such that it is overlaid onto a portion of the display area based on a location within the display area at which the items are detected.
Type: Grant
Filed: July 29, 2022
Date of Patent: June 25, 2024
Assignee: Maplebear Inc.
Inventors: Dominic Cocchiarella, Aditya Godbole, Andrew Peters, Spencer Schack, Brandon Leonardo
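The preference-to-attribute matching step in this abstract reduces to intersecting a customer's preference tags with each detected item's attribute set and emitting an overlay element when the intersection is non-empty. A minimal sketch with hypothetical field names (`name`, `attributes`), not the patented implementation:

```python
def match_item(preferences, item_attributes):
    """Return the preference tags this item's attributes satisfy, sorted."""
    return sorted(set(preferences) & set(item_attributes))

def build_ar_elements(customer_prefs, detected_items):
    """One AR overlay element per detected item matching at least one preference."""
    elements = []
    for item in detected_items:
        matches = match_item(customer_prefs, item["attributes"])
        if matches:
            elements.append({"item": item["name"], "highlights": matches})
    return elements
```

Each resulting element would then be anchored to the screen location where the item was detected in the video feed.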
-
Patent number: 12020608
Abstract: Apparatus and methods for displaying an image by a rotating structure are provided. The rotating structure can comprise blades of a fan. The fan can be a cooling fan for an electronics device such as an augmented reality display. In some embodiments, the rotating structure comprises light sources that emit light to generate the image. The light sources can comprise light-field emitters. In other embodiments, the rotating structure is illuminated by an external (e.g., non-rotating) light source.
Type: Grant
Filed: September 22, 2022
Date of Patent: June 25, 2024
Assignee: MAGIC LEAP, INC.
Inventors: Guillermo Padin Rohena, Ralph Remsburg, Adrian Kaehler, Evan Francis Rynk
-
Patent number: 12020385
Abstract: An augmented reality (AR) processing method, a computer readable storage medium, and an electronic device, relating to the technical field of AR. The AR processing method includes: obtaining a current frame image, extracting an image parameter of the current frame image, receiving information of a virtual object and displaying the virtual object; and editing the virtual object in response to an editing operation for the virtual object. The information of the virtual object corresponds to the image parameter of the current frame image and is determined by using a pre-stored mapping result.
Type: Grant
Filed: June 23, 2022
Date of Patent: June 25, 2024
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventor: Fantao Zeng
-
Patent number: 12020384
Abstract: Methods and systems are disclosed for generating AR experiences. The methods and systems access a first component of a plurality of components implemented by the messaging application, the plurality of components comprising an AR experience, each of the plurality of components being configured to be separately launched by the messaging application. The methods and systems store a first state of the first component in a data structure that is shared across the plurality of components. The methods and systems launch, by the messaging application, a second component of the plurality of components in response to determining that an interaction has been performed using the first component, and configure a second state of the second component based on the interaction that has been performed using the first component.
Type: Grant
Filed: June 21, 2022
Date of Patent: June 25, 2024
Assignee: Snap Inc.
Inventors: Rastan Boroujerdi, Michael John Evans, Panayoti Haritatos
-
Patent number: 12013531
Abstract: A near-eye optical assembly includes a display waveguide and an optical structure. The display waveguide is configured to receive display light and to direct the display light to an eye of a user. The optical structure includes an input coupler, an optical path, and an output coupler. The input coupler is disposed to receive a portion of the display light that propagates through the waveguide. The optical path directs the portion of the display light from the input coupler to an output coupler that is configured to provide the received portion of the display light to a disparity sense circuit.
Type: Grant
Filed: December 6, 2022
Date of Patent: June 18, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Karol Constantine Hatzilias, Tamer Elazhary, Yu Shi, Guohua Wei, Michiel Koen Callens, Nicholas Mcgee
-
Patent number: 12014042
Abstract: In an example, a method involves displaying interface objects and avatar interface objects on a user interface. Each interface object represents one of a set of financial services available within a financial system. The method also involves detecting a selection of an avatar interface object from the plurality of avatar interface objects, followed by detecting a movement of the selected avatar interface object and moving, on the user interface, the selected avatar interface object according to the movement. The method further involves detecting a placement of the selected avatar interface object on the user interface and, in response to detecting the placement, initiating an application for the financial service corresponding to the selected avatar interface object and displaying, on the user interface, a result of the application.
Type: Grant
Filed: May 22, 2023
Date of Patent: June 18, 2024
Assignee: Truist Bank
Inventor: Sudhakar Swaminathan
-
Patent number: 12014324
Abstract: An information processing apparatus (2000) acquires a shelf rack image (12) in which a product shelf rack on which a product is displayed is imaged. The information processing apparatus (2000) performs image analysis on the shelf rack image (12), and generates information (actual display information) relevant to a display situation of the product on a product shelf rack (20). The information processing apparatus (2000) acquires reference display information representing a reference for display of the product on the product shelf rack (20). The information processing apparatus (2000) compares the actual display information generated by performing the image analysis on the shelf rack image (12) with the acquired reference display information, and generates comparison information representing a result.
Type: Grant
Filed: November 14, 2022
Date of Patent: June 18, 2024
Assignee: NEC CORPORATION
Inventors: Yaeko Yonezawa, Kaito Horita, Akira Yajima, Mizuto Sekine, Yoshinori Ehara
-
Patent number: 12008618
Abstract: A system and method for providing an augmented reality tag viewer are provided. The method includes receiving, by a device, a low-energy encrypted beacon from a tag attached to a product, where the beacon uniquely identifies the product; retrieving information about the product identified by the received beacon; determining if the product is present in a field of view of a camera of the device; upon determining that the identified product is present in the field of view of the camera, matching the product in the field of view with the retrieved information; and upon determining that the product in the field of view matches the retrieved information, overlaying an interface over the display to highlight the product.
Type: Grant
Filed: July 6, 2021
Date of Patent: June 11, 2024
Assignee: Wiliot, Ltd.
Inventors: Roberto Sandre, Stephen Statler, Tal Tamir, Yaroslav Ross
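The control flow this abstract walks through — beacon identifies product, product info is retrieved, the overlay appears only when the identified product is also visible in the camera view — can be sketched as a simple lookup-and-gate. All names here (`catalog`, `visible_products`) are illustrative assumptions, not the patented design:

```python
def resolve_beacon(beacon_id, catalog):
    """Look up product information for a uniquely identifying beacon, or None."""
    return catalog.get(beacon_id)

def overlay_if_visible(beacon_id, visible_products, catalog):
    """Return an overlay descriptor only when the beacon-identified product
    is also present in the camera's current field of view."""
    info = resolve_beacon(beacon_id, catalog)
    if info and info["name"] in visible_products:
        return {"highlight": info["name"], "details": info}
    return None
```

In the described system the beacon payload would also be decrypted and the visibility check would come from an object detector running on the camera frames.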
-
Patent number: 12008719
Abstract: Apparatus, methods and systems of providing AR content are disclosed. Embodiments of the inventive subject matter can obtain an initial map of an area, derive views of interest, obtain AR content objects associated with the views of interest, establish experience clusters and generate a tile map tessellated based on the experience clusters. A user device could be configured to obtain and instantiate at least some of the AR content objects based on at least one of a location and a recognition.
Type: Grant
Filed: January 28, 2022
Date of Patent: June 11, 2024
Assignee: Nant Holdings IP, LLC
Inventors: David McKinnon, Kamil Wnuk, Jeremi Sudol, Matheen Siddiqui, John Wiacek, Bing Song, Nicholas J. Witchey
-
Patent number: 12008152
Abstract: Systems, methods, and computer readable media that determine distances for mixed reality interaction, where the methods include determining a first position of a point of a surface and rendering a virtual reality (VR) interactive item comprising a VR interactive control. The methods further include tracking a control indicator controlled by the user by determining a first plurality of positions of the control indicator, and activating the VR interactive control in response to detecting the control indicator controlled by the user transgressing a first threshold distance from the VR interactive control. The methods further include determining a closest position of the first plurality of positions to the point based on the first position, determining the point of the surface to have a second position based on the determined closest position plus a constant for the control indicator, and associating a second threshold with the point of the surface.
Type: Grant
Filed: December 29, 2021
Date of Patent: June 11, 2024
Assignee: Snap Inc.
Inventor: Benjamin Lucas
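The activation condition described here — fire the interactive control when the tracked indicator comes within a threshold distance of it — is a straightforward nearest-point check over the indicator's tracked positions. A minimal sketch, assuming 3-D points as coordinate tuples (the patent's actual distance model may differ):

```python
import math

def should_activate(control_pos, indicator_positions, threshold):
    """True when any tracked indicator position crosses within the
    threshold distance of the interactive control."""
    return any(math.dist(control_pos, p) < threshold for p in indicator_positions)

def closest_position(point, indicator_positions):
    """The tracked indicator position nearest to a surface point."""
    return min(indicator_positions, key=lambda p: math.dist(point, p))
```

The abstract's second step — offsetting the surface point by the closest approach plus a constant and attaching a second threshold — would then reuse `closest_position` to recalibrate where the surface is assumed to be.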
-
Patent number: 12008697
Abstract: A digital element located within a region near a device is identified. The digital element is located at a dynamically updated location. It is determined that the digital element is to be rendered. A representation of the digital element is generated in a rendered view of at least a portion of the region. Content of the digital element is provided upon receiving an indication that the digital element has been selected.
Type: Grant
Filed: June 17, 2022
Date of Patent: June 11, 2024
Assignee: Ripple, Inc. of Delaware
Inventor: Ray Beau Lotto
-
Patent number: 12008155
Abstract: A method for improving the startup time of a six-degrees-of-freedom tracking system is described. An augmented reality system receives a device initialization request and activates a first set of sensors in response to the device initialization request. The augmented reality system receives first tracking data from the first set of sensors. The augmented reality system receives an augmented reality experience request and, in response to the augmented reality experience request, causes display of a set of augmented reality content items based on the first tracking data and simultaneously activates a second set of sensors. The augmented reality system receives second tracking data from the activated second set of sensors. The augmented reality system updates the display of the set of augmented reality content items based on the second tracking data.
Type: Grant
Filed: May 24, 2023
Date of Patent: June 11, 2024
Assignee: Snap Inc.
Inventors: Jeroen Diederik Hol, Matthias Kalkgruber, Erick Mendez Mendez, Niall Murphy, Gerald Nilles, Mathieu Emmanuel Vignau
-
Patent number: 12005573
Abstract: A mobility augmentation system monitors a user's motor intent data and augments the user's mobility based on the monitored motor intent data. A machine-learned model is trained to identify an intended movement based on the monitored motor intent data. The machine-learned model may be trained based on generalized or specific motor intent data (e.g., user-specific motor intent data). A machine-learned model initially trained on generalized motor intent data may be re-trained on user-specific motor intent data such that the machine-learned model is optimized to the movements of the user. The system uses the machine-learned model to identify a difference between the user's monitored movement and target movement signals. Based on the identified difference, the system determines actuation signals to augment the user's movement. The actuation signals determined can be an adjustment to a currently applied actuation such that the system optimizes the actuation strategy during application.
Type: Grant
Filed: December 6, 2020
Date of Patent: June 11, 2024
Assignee: Cionic, Inc.
Inventors: Jeremiah Robison, Michael Dean Achelis, Lina Avancini Colucci, Sidney Rafael Primas, Andrew James Weitz
-
Patent number: 12008451
Abstract: Technology embodied in a computer-implemented method for receiving a multi-modal input representing a query associated with a physical object, processing the multi-modal input to identify the physical object, and determining, based in part on an identification of the physical object and by accessing a language processing model, at least one response to the query associated with the physical object. The method also includes determining a sequence of actions associated with the at least one response, the sequence including at least one action that involves an interaction with at least one portion of the physical object. The method further includes generating a digital representation of the at least one action, and providing the digital representation to a user-device for presentation on a display. The digital representation includes a gesture-icon representing the action, the gesture-icon being overlaid on a digital twin of the physical object.
Type: Grant
Filed: December 21, 2023
Date of Patent: June 11, 2024
Inventor: Ishita Agrawal
-
Patent number: 12010157
Abstract: Systems and methods are described for enabling user-controlled extended reality (XR). The provided systems and methods may determine an XR portion should be included in a video feed that comprises a depiction of a first user, and the XR portion may be configured to occlude an entity in the video feed. An entity may be detected in the video feed, and user input may be received specifying that a detected entity in the video feed should not be occluded by the XR portion. Based on the user input, the video feed may be modified to exclude the XR portion from a portion of the video feed depicting the detected entity, wherein the portion is included at a different portion of the video feed, and the depiction of the first user is included in the video feed, and transmit the modified video feed to a second user device.
Type: Grant
Filed: March 29, 2022
Date of Patent: June 11, 2024
Assignee: Rovi Guides, Inc.
Inventor: Serhad Doken
-
Patent number: 12003845
Abstract: A sensor apparatus includes an array sensor in which a plurality of detection elements are arranged one-dimensionally or two-dimensionally, a signal processing unit which performs signal processing on a detected signal obtained by the array sensor, and an arithmetic operation unit which performs object detection from the detected signal obtained by the array sensor, performs operation control of the signal processing unit based on object detection, and performs switching processing for changing processing contents based on device information input from a sensor-equipped device on which the sensor apparatus is mounted.
Type: Grant
Filed: February 15, 2021
Date of Patent: June 4, 2024
Assignee: SONY GROUP CORPORATION
Inventors: Susumu Takatsuka, Hiroki Tetsukawa
-
Patent number: 11998798
Abstract: Example systems, devices, media, and methods are described for presenting a virtual guided fitness experience using the display of an eyewear device in augmented reality. A guided fitness application implements and controls the capturing of frames of motion data using an inertial measurement unit (IMU) and video data from one or more cameras. The method includes detecting exercise motions (with or without equipment) as well as detecting and counting repetitions. Relevant data about detected motions or equipment is retrieved and used to curate the guided fitness experience. A current rep count is presented on the display along with an avatar for playing messages, performing animated demonstrations, responding to commands and queries using speech recognition, and presenting guided fitness instructions through text, audio, and video.
Type: Grant
Filed: May 14, 2021
Date of Patent: June 4, 2024
Assignee: Snap Inc.
Inventor: Megan Hong
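Repetition counting from IMU motion data, as mentioned here, is commonly done by detecting peaks in a one-dimensional motion signal. The sketch below is a minimal stand-in (the signal, threshold, and peak rule are all assumptions, not the patent's method):

```python
# Hypothetical sketch: count exercise reps as above-threshold local
# maxima of a scalar motion signal (e.g. filtered IMU magnitude).
def count_reps(signal, threshold):
    """Count local peaks of `signal` that exceed `threshold`."""
    reps = 0
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]):
            reps += 1
    return reps
```

A real pipeline would low-pass filter the IMU stream first and debounce peaks that fall too close together.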
-
Patent number: 12003601
Abstract: A system and method for split rendering immersive media using a proxy edge cloud computing architecture. The system and method may include dynamically determining a task-split, or splitting a task for rendering a scene in an immersive media stream into two or more computational tasks, based on one or more processing delays, wherein the task-splitting splits the scene into a first part including one or more first tasks performed by an edge computing system and a second part including one or more second tasks performed by a cloud computing system. The system and method may include transmitting the first part of the scene and transmitting the second part of the scene.
Type: Grant
Filed: November 30, 2022
Date of Patent: June 4, 2024
Assignee: TENCENT AMERICA LLC
Inventors: Paul Spencer Dawkins, Rohit Abhishek, Arianne Hinds
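A delay-driven task split of the kind this abstract describes can be illustrated with a simple greedy assignment: tasks go to the edge until a processing-delay budget is exhausted, and the remainder go to the cloud. All names and the budget heuristic are invented for illustration:

```python
# Hypothetical sketch: assign rendering tasks to edge vs. cloud based
# on estimated per-task processing delays and an edge delay budget.
def split_tasks(tasks, edge_delay_budget_ms):
    """tasks: list of (name, estimated_delay_ms).
    Returns (edge_tasks, cloud_tasks)."""
    edge, cloud, used = [], [], 0.0
    for name, delay in tasks:
        if used + delay <= edge_delay_budget_ms:
            edge.append(name)
            used += delay
        else:
            cloud.append(name)
    return edge, cloud
```

For example, with a 10 ms edge budget, a 5 ms and a 4 ms task fit on the edge while an 8 ms task overflows to the cloud.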
-
Patent number: 12001549
Abstract: A system and method for providing cybersecurity incident response utilizing a large language model. The method includes: mapping a received incident input into a scenario of a plurality of scenarios, each scenario including a plurality of sub-scenarios; generating a query based on the received incident input and a selection of a sub-scenario of the plurality of sub-scenarios; executing the query on a security database, the security database including a representation of the computing environment; and initiating a mitigation action based on a result of the executed query.
Type: Grant
Filed: January 31, 2024
Date of Patent: June 4, 2024
Assignee: Wiz, Inc.
Inventors: Alon Schindel, Barak Sharoni, Amitai Cohen, Ami Luttwak, Roy Reznik, Yinon Costica
-
Patent number: 11995778
Abstract: A method for performing an augmented reality location operation. The augmented reality location operation includes: configuring an augmented reality device to perform a plurality of augmented reality tracking methods; identifying an augmented reality target via at least one of the plurality of augmented reality tracking methods; determining whether another of the plurality of augmented reality tracking methods provides a higher resolution of the augmented reality target; and handing off identification of the augmented reality target to that other tracking method.
Type: Grant
Filed: April 13, 2022
Date of Patent: May 28, 2024
Assignee: Dell Products L.P.
Inventors: Robert Alan Barrett, Richard W. Guzman
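The handoff logic described here, switching to whichever tracking method currently resolves the target best, reduces to a selection over reported resolutions. A minimal sketch, with all names and the dict shape invented:

```python
# Hypothetical sketch: hand target tracking off to the method that
# reports the highest resolution for the AR target; ties keep current.
def pick_tracker(candidates, current):
    """candidates/current: dicts like {"name": str, "resolution": float}.
    Returns the name of the tracking method to use."""
    best = current
    for tracker in candidates:
        if tracker["resolution"] > best["resolution"]:
            best = tracker  # strictly better resolution wins
    return best["name"]
```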
-
Patent number: 11995779
Abstract: Disclosed is a method, device, and system that determine, at a manager terminal, the concentration of a user of an XR device on which care content is playing, and that manage and control the XR device based on the determination result. According to an embodiment of the inventive concept, the intended effects of the care content may be maximized by managing and controlling the concentration of the user of a registered XR device that is playing the care content.
Type: Grant
Filed: September 22, 2022
Date of Patent: May 28, 2024
Assignee: SEVENPOINTONE INC.
Inventors: Hyeonjun Lee, Juyeong Yoo
-
Patent number: 11995301
Abstract: Methods for displaying user interfaces in a computer-generated environment provide for an efficient and intuitive user experience. In some embodiments, user interfaces can have different immersion levels. In some embodiments, a user interface can have a respective immersion level based on its location in the three-dimensional environment or distance from the user. In some embodiments, a user interface can have a respective immersion level based on the state of the user interface. In some embodiments, a user interface can switch from one immersion level to another in response to the user's interaction with the user interface.
Type: Grant
Filed: March 10, 2023
Date of Patent: May 28, 2024
Assignee: Apple Inc.
Inventors: Benjamin Hylak, Alexis H. Palangie, Jordan A. Cazamias, Nathan Gitter, Aaron M. Burns
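One of the claimed behaviors, an immersion level derived from the interface's distance from the user, can be illustrated as a simple threshold mapping. The thresholds and the convention that nearer interfaces get higher immersion are purely illustrative assumptions:

```python
# Hypothetical sketch: map a UI's distance from the user to a discrete
# immersion level (higher = more immersive); 0 means least immersive.
def immersion_level(distance_m, thresholds=(0.5, 2.0, 5.0)):
    """Return an immersion level based on distance in meters."""
    for i, limit in enumerate(thresholds):
        if distance_m <= limit:
            return len(thresholds) - i  # nearer -> higher level
    return 0  # beyond all thresholds: minimal immersion
```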
-
Patent number: 11995777
Abstract: A method for performing an augmented reality location operation. The augmented reality location operation includes: configuring an augmented reality device to perform a plurality of augmented reality tracking methods; identifying a unique augmented reality target via at least one of the plurality of augmented reality tracking methods; associating a position with the unique augmented reality target; and determining a relative position of another augmented reality target from the unique augmented reality target.
Type: Grant
Filed: April 13, 2022
Date of Patent: May 28, 2024
Assignee: Dell Products L.P.
Inventors: Robert Alan Barrett, Richard W. Guzman
-
Patent number: 11995741
Abstract: Disclosed in the present disclosure are a data generation method and apparatus, and an electronic device. The method includes: obtaining first image data, the first image data representing a real environment in which a user is located; obtaining category information and plane information of a target object, the target object being an object in the first image data, and the plane information including information of an outer surface of the target object; obtaining second image data, the second image data containing a virtual object; and generating target image data by mixing the first image data with the second image data based on the category information and the plane information, the target image data containing the target object and the virtual object.
Type: Grant
Filed: September 1, 2023
Date of Patent: May 28, 2024
Assignee: QINGDAO PICO TECHNOLOGY CO., LTD.
Inventor: Tao Wu
-
Patent number: 11995578
Abstract: Systems and methods are described for improved color rendering. In some embodiments, video is captured of a real-world scene, and a region of screen content is detected in the captured video. A processor selectively applies a screen-content color transformation on the region of screen content to generate processed video, and the processed video is displayed substantially in real time, for example on a video-see-through head-mounted display. The screen-content color processing may be different for different types of external displays. The screen-content color processing may also be determined based at least in part on illumination conditions. In some embodiments, the color of displayed virtual objects is adjusted based on visual parameters of the screen content.
Type: Grant
Filed: June 22, 2023
Date of Patent: May 28, 2024
Assignee: InterDigital VC Holdings, Inc.
Inventors: David Wyble, Louis Kerofsky, Ralph Neff
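Applying a color transformation only inside a detected screen-content region, as this abstract describes, amounts to a masked per-pixel mapping. The sketch below uses an invented list-of-lists frame and a rectangular region; a real system would operate on RGB camera frames with a calibrated transform per display type:

```python
# Hypothetical sketch: apply a color transform only to pixels inside
# the detected screen-content rectangle, leaving other pixels as-is.
def apply_screen_transform(frame, region, transform):
    """region: (x0, y0, x1, y1) half-open rectangle of screen content.
    transform: per-pixel color mapping."""
    x0, y0, x1, y1 = region
    out = [row[:] for row in frame]  # copy so the input is untouched
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = transform(out[y][x])
    return out
```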
-
Patent number: 11995774
Abstract: Augmented reality experiences with an eyewear device including a position detection system and a display system are provided. The eyewear device acquires text from a user input, a data input, or a speech recognition system. The eyewear device presents a visual text graphic on the display, by the display system, at a predefined location with respect to the eyewear. The eyewear device allows the user to manipulate the visual text graphic in a number of ways.
Type: Grant
Filed: December 18, 2020
Date of Patent: May 28, 2024
Assignee: Snap Inc.
Inventors: Ilteris Canberk, Shin Hwun Kang, Daniel Moreno
-
Patent number: 11994687
Abstract: A display system includes an optical device configured according to constructive interference for a plurality of wavelengths at a focal length. The display system includes a fiber. The display system includes a controller configured to scan the fiber using a Lissajous scanning method to generate a display. The display can be disposed within a focal plane of the optical device. The controller is configured to modulate light intensity from the fiber. The controller can be configured to form a display image that passes through the optical device. The display system can include an optical combiner configured to reflect the display image from the optical device and form a virtual image. The optical device can be configured to magnify a display image from the display and form a virtual image.
Type: Grant
Filed: May 5, 2021
Date of Patent: May 28, 2024
Assignees: PRESIDENT AND FELLOWS OF HARVARD COLLEGE, TRUSTEES OF BOSTON UNIVERSITY
Inventors: Zhaoyi Li, Yao-Wei Huang, Peng Lin, Ji-Xin Cheng, Federico Capasso
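A Lissajous scan, as used here to drive the fiber, traces the parametric curve x = A sin(2*pi*fx*t + phase), y = A sin(2*pi*fy*t), where the two frequencies and the phase offset determine the fill pattern. A short sketch sampling such a trajectory (the specific frequencies and amplitude are arbitrary illustration values):

```python
import math

# Sketch: sample n points of a Lissajous fiber-scan trajectory.
def lissajous_points(fx, fy, n, amplitude=1.0, phase=math.pi / 2):
    """x = A*sin(2*pi*fx*t + phase), y = A*sin(2*pi*fy*t), t in [0, 1)."""
    pts = []
    for k in range(n):
        t = k / n
        x = amplitude * math.sin(2 * math.pi * fx * t + phase)
        y = amplitude * math.sin(2 * math.pi * fy * t)
        pts.append((x, y))
    return pts
```

With phase pi/2 the scan starts at (A, 0); incommensurate fx and fy give a trajectory that densely fills the scan area over time.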
-
Patent number: 11994364
Abstract: The disclosure relates to a viewing optic. In one embodiment, the disclosure relates to a display system for a viewing optic. In one embodiment, the disclosure relates to a viewing optic having a display system with multiple active displays for generating images that are projected into a first focal plane of an optical system.
Type: Grant
Filed: August 7, 2019
Date of Patent: May 28, 2024
Assignee: SHELTERED WINGS, INC.
Inventors: Samuel J. Hamilton, Ian Klemm, Calen Havens, Tom Cody, Craig Schultz, Garrison Bollig, Andy Carlson, William Lowry, Alexander Lewis, Nicholas B. Laufenberg
-
Patent number: 11986319
Abstract: The present disclosure relates to a patch guide method, including at least: acquiring a matched model of a 3D scan model and a 3D brain MRI model; capturing an image of the head of the object by using a depth camera; matching one location of the captured image and one location on the matched model; and determining a patch location on the head of the object, by using a 3D brain map. In the method, physical characteristics of areas included in the brain MRI image are acquired and used to generate the 3D brain map of the object. In the method, a target stimulus point, to which an electrical stimulus is to be applied in a brain of the object, is acquired and used in a simulation of a delivery process of the electrical stimulus to the target stimulus point from candidate stimulus positions, to determine the patch location.
Type: Grant
Filed: May 4, 2021
Date of Patent: May 21, 2024
Assignee: NEUROPHET Inc.
Inventors: Dong Hyeon Kim, Jun Kil Been
-
Patent number: 11989339
Abstract: A method of performing localization of a handheld device with respect to a wearable device includes capturing, by a first imaging device mounted to the handheld device, a fiducial image containing a number of fiducials affixed to the wearable device and capturing, by a second imaging device mounted to the handheld device, a world image containing one or more features surrounding the handheld device. The method also includes obtaining, by a sensor mounted to the handheld device, handheld data indicative of movement of the handheld device, determining the number of fiducials contained in the fiducial image, and updating a position and an orientation of the handheld device using at least one of the fiducial image or the world image and the handheld data.
Type: Grant
Filed: February 22, 2023
Date of Patent: May 21, 2024
Assignee: Magic Leap, Inc.
Inventors: Zachary C. Nienstedt, Samuel A. Miller, Barak Freedman, Lionel Ernest Edwin, Eric C. Browy, William Hudson Welch, Ron Liraz Lidji
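The core branch in this method, choosing which data drives the pose update based on how many fiducials the fiducial image contains, can be sketched as a simple dispatch. The threshold and the three update paths are invented labels, not the patent's actual logic:

```python
# Hypothetical sketch: pick the pose-update data source from the
# number of fiducials visible in the fiducial image.
def choose_pose_update(num_fiducials, min_fiducials=3):
    """Return which inputs should drive the handheld pose update."""
    if num_fiducials >= min_fiducials:
        return "fiducial"        # enough fiducials: solve pose directly
    if num_fiducials > 0:
        return "fiducial+imu"    # partial view: fuse with IMU data
    return "world+imu"           # no fiducials: world image + IMU
```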
-
Patent number: 11987182
Abstract: A display image to be displayed on a display unit is obtained in accordance with motion of a viewpoint of a driver, on the basis of a captured image of the rear side captured from a vehicle, when a line-of-sight of the driver is in a certain region including the display unit. For example, the display image is obtained in accordance with a deviation of the viewpoint position of the driver from a reference viewpoint position. For example, the reference viewpoint position is updated on the basis of long-term fluctuation of the viewpoint position, and is not updated while the line-of-sight of the driver is in the certain region including the display unit.
Type: Grant
Filed: December 10, 2019
Date of Patent: May 21, 2024
Assignee: SONY GROUP CORPORATION
Inventor: Koji Nagata
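Tracking the long-term fluctuation of a viewpoint while freezing the reference when the driver looks at the display can be modeled as a gaze-gated exponential moving average. The smoothing factor and 1-D viewpoint are illustrative assumptions:

```python
# Hypothetical sketch: update the reference viewpoint position with a
# slow exponential moving average, frozen while gaze is on the display.
def update_reference(ref, viewpoint, gaze_on_display, alpha=0.01):
    """Return the new reference viewpoint position (1-D for brevity)."""
    if gaze_on_display:
        return ref  # do not adapt while the driver watches the mirror
    return ref + alpha * (viewpoint - ref)  # slow drift toward viewpoint
```

The small alpha makes the reference follow only long-term drift (e.g. posture changes) rather than momentary head motion, which instead drives the display image via the deviation from the reference.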
-
Patent number: 11989837
Abstract: A method of spawning a digital island in a three-dimensional environment is disclosed. Data describing a three-dimensional environment is accessed. The data is partitioned into a plurality of contexts based on properties identified in the data, the properties corresponding to surfaces or objects in the three-dimensional environment. One or more values of one or more traits corresponding to a context of the plurality of contexts are identified. A digital island is matched to the context. The matching includes analyzing one or more conditions associated with the digital island with respect to the one or more values of the one or more traits corresponding to the context. Based on the matching, the spawning of the digital island is performed in the three-dimensional environment for the context.
Type: Grant
Filed: June 1, 2021
Date of Patent: May 21, 2024
Assignee: Unity IPR ApS
Inventors: Stella Mamimi Cannefax, Andrew Peter Maneri, Amy Melody DiGiovanni
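The matching step, checking a digital island's conditions against a context's trait values, can be sketched as a dictionary comparison. The data shapes and trait names below are invented for illustration:

```python
# Hypothetical sketch: match digital islands to a context by checking
# each island's conditions against the context's trait values.
def match_islands(islands, context_traits):
    """islands: list of {"name": str, "conditions": {trait: value}}.
    Returns names of islands whose every condition is satisfied."""
    matched = []
    for island in islands:
        conditions = island["conditions"]
        if all(context_traits.get(trait) == value
               for trait, value in conditions.items()):
            matched.append(island["name"])
    return matched
```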