Augmented Reality (real-time) Patents (Class 345/633)
-
Patent number: 12293267
Abstract: Technology embodied in a computer-implemented method for receiving a multi-modal input representing a query associated with a physical object, processing the multi-modal input to identify the physical object, and determining, based in part on an identification of the physical object and by accessing a language processing model, at least one response to the query associated with the physical object. The method also includes determining a sequence of actions associated with the at least one response, the sequence including at least one action that involves an interaction with at least one portion of the physical object. The method further includes generating a digital representation of the at least one action, and providing the digital representation to a user-device for presentation on a display. The digital representation includes a gesture-icon representing the action, the gesture-icon being overlaid on a digital twin of the physical object.
Type: Grant
Filed: April 26, 2024
Date of Patent: May 6, 2025
Inventor: Ishita Agrawal
-
Patent number: 12291242
Abstract: A mobile body control device adjusts a stop position of a mobile body based on an instruction of a user. The device acquires instruction information for designating a predetermined target and acquires a captured image captured in the mobile body. The device determines a stop position, which is a relative position of the predetermined target measured from the mobile body, when a region of the predetermined target has been identified in the captured image, and controls traveling of the mobile body toward the determined stop position. When the predetermined target cannot be identified in the captured image while the mobile body is traveling toward the stop position, the device updates the stop position determined for the predetermined target in accordance with the movement of the mobile body, thereby controlling traveling of the mobile body toward the stop position.
Type: Grant
Filed: March 8, 2023
Date of Patent: May 6, 2025
Assignee: HONDA MOTOR CO., LTD.
Inventors: Nanami Tsukamoto, Kosuke Nakanishi
-
Patent number: 12293474
Abstract: A video pass-through computing system includes a head-mounted display device including a display, a camera configured to image a physical scene according to an exposure timing, and an augmented reality control circuit configured to receive a virtual image pixel stream and composite the camera image pixel stream with the virtual image pixel stream to generate a display image pixel stream output to the display, and, if a corresponding pixel of the camera image pixel stream is not in temporal synchronization with a pixel of the virtual image pixel stream, adjust the exposure timing of the camera.
Type: Grant
Filed: April 19, 2021
Date of Patent: May 6, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Robert Warren Gruen, Weige Chen, Michael George Boulton, Roberta Rene Moeur
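The general idea — checking whether camera pixels and virtual pixels line up in time and nudging the camera's exposure timing when they do not — can be illustrated with a minimal sketch. Everything below (block structure, tolerance and step constants, the alpha blend) is an assumption for illustration, not Microsoft's implementation.

```python
# Illustrative sketch only: compare the timestamp of each camera pixel block
# against the matching virtual-image block and nudge the exposure timing
# offset whenever the two streams drift out of temporal synchronization.

SYNC_TOLERANCE_US = 250   # hypothetical tolerance for "in sync"
ADJUST_STEP_US = 50       # hypothetical exposure-timing correction step

def composite_streams(camera_blocks, virtual_blocks, exposure_offset_us=0):
    """camera_blocks / virtual_blocks: lists of (timestamp_us, pixel_values)."""
    display_stream = []
    for (cam_ts, cam_px), (virt_ts, virt_px) in zip(camera_blocks, virtual_blocks):
        drift = (cam_ts + exposure_offset_us) - virt_ts
        if abs(drift) > SYNC_TOLERANCE_US:
            # out of temporal sync: move the camera exposure earlier or later
            exposure_offset_us -= ADJUST_STEP_US if drift > 0 else -ADJUST_STEP_US
        # a simple alpha blend stands in for the real compositing step
        display_stream.append([0.5 * c + 0.5 * v for c, v in zip(cam_px, virt_px)])
    return display_stream, exposure_offset_us

camera = [(0, [0.1, 0.2]), (300, [0.3, 0.4])]
virtual = [(0, [0.5, 0.5]), (10, [0.5, 0.5])]
frames, new_offset = composite_streams(camera, virtual)
print(new_offset)   # -50: exposure pulled earlier after the drifted block
```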
-
Patent number: 12294685
Abstract: It is an object of the present invention to reduce a possibility that a subject (specific object) of a virtual viewpoint image and a virtual object area overlap. To achieve the object, the present invention comprises a first identifying unit for identifying a three-dimensional position of the specific object captured respectively from different directions by a plurality of cameras; a second identifying unit for identifying a three-dimensional position of a virtual viewpoint related to generation of the virtual viewpoint image based on captured images obtained by the plurality of cameras; and a determining unit for determining a position of the virtual object area to which a virtual object to be displayed in the virtual viewpoint image is disposed, based on the three-dimensional position of the specific object identified by the first identifying unit and the three-dimensional position of the virtual viewpoint identified by the second identifying unit.
Type: Grant
Filed: July 6, 2021
Date of Patent: May 6, 2025
Assignee: Canon Kabushiki Kaisha
Inventors: Shogo Mizuno, Atsushi Date
-
Patent number: 12293607
Abstract: An eye tracking device for tracking an eye is described. The eye tracking device comprises: a first diffractive optical element, DOE, arranged in front of the eye, and an image module, wherein the image module is configured to capture an image of the eye via the first DOE. The first DOE is adapted to direct a first portion of incident light reflected from the eye towards the image module. The eye tracking device is characterized in that the first DOE is configured to provide a lens effect.
Type: Grant
Filed: February 10, 2023
Date of Patent: May 6, 2025
Assignee: Tobii AB
Inventor: Daniel Tornéus
-
Patent number: 12288501
Abstract: An information processing device includes a processor configured to: acquire a captured image of a real space; acquire schedule information that is information regarding a schedule of a user; determine, on the basis of the schedule information, from the captured image of the real space, an addition target that is a target to which a virtual screen is to be added; and perform control for displaying the virtual screen in association with the addition target.
Type: Grant
Filed: December 10, 2021
Date of Patent: April 29, 2025
Assignee: FUJIFILM Business Innovation Corp.
Inventor: Tatsuyuki Esaki
-
Patent number: 12287921
Abstract: In some embodiments, a computer system performs virtual object manipulation operations using respective portions of the user's body and/or input device(s). In some embodiments, a computer system manipulates a virtual object based on input from a hand of a user and/or a handheld device. In some embodiments, a computer system manipulates a virtual object directly or indirectly.
Type: Grant
Filed: September 22, 2023
Date of Patent: April 29, 2025
Assignee: Apple Inc.
Inventors: William D. Lindmeier, Tony Kobayashi, Alexis H. Palangie, Carmine Elvezio, Matthew J. Sundstrom
-
Patent number: 12288301
Abstract: A computer system displays a first view of a three-dimensional environment, including a first user interface object, that when activated by a user input meeting first criteria, causes performance of a first operation. While displaying the first view, the computer system detects first movement of a hand in a physical environment, and in response, changes an appearance of the first user interface object in the first view based on the first movement, including: in accordance with a determination that the first movement meets the first criteria requiring the hand move in a first manner, performing and indicating performance of the first operation; and in accordance with a determination that the first movement does not meet the first criteria, moving the first user interface object away from a position in the three-dimensional environment corresponding to a location of the hand in the physical environment without performing the first operation.
Type: Grant
Filed: February 22, 2024
Date of Patent: April 29, 2025
Assignee: APPLE INC.
Inventors: Philipp Rockel, Charles C. Hoyt
-
Patent number: 12288419Abstract: An augmented reality (AR) device and a method of predicting a pose in the AR device is provided. In the augmented reality (AR) device inertial measurement unit (IMU) values corresponding to the movement of the AR device are obtained at an IMU rate, intermediate 6-degrees of freedom (6D) poses of the AR device are estimated based on the IMU values and images around the AR device via a visual-inertial simultaneous localization and mapping (VI-SLAM) module, and a pose prediction model for predicting relative 6D poses of the AR device is generated by performing learning by using a deep neural network.Type: GrantFiled: December 29, 2022Date of Patent: April 29, 2025Assignees: SAMSUNG ELECTRONICS CO., LTD., Korea University Research and Business FoundationInventors: Yuntae Kim, Sanghoon Sull, Geeyoung Sung, Hongseok Lee, Myungjae Jeon
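A learned IMU-to-pose predictor of this general kind can be sketched as a small network that maps a window of raw IMU samples to a relative 6-DoF pose, supervised by intermediate poses from a VI-SLAM front end. The sketch below is a hypothetical illustration (window size, layer widths, and the axis-angle output parameterization are assumptions), not the patented model.

```python
# Hypothetical sketch: window of accel+gyro samples -> relative 6-DoF pose,
# trained against intermediate VI-SLAM poses (training loop omitted).
import torch
import torch.nn as nn

class ImuPosePredictor(nn.Module):
    def __init__(self, window=100, imu_channels=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                                   # (B, window, 6) -> (B, window*6)
            nn.Linear(window * imu_channels, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 6),                              # [dx, dy, dz, rx, ry, rz] axis-angle
        )

    def forward(self, imu_window):
        return self.net(imu_window)

model = ImuPosePredictor()
imu_window = torch.randn(1, 100, 6)                        # stand-in for real IMU samples
relative_pose = model(imu_window)                          # predicted relative 6D pose
print(relative_pose.shape)                                  # torch.Size([1, 6])
```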
-
Patent number: 12288370
Abstract: A method of operating a wearable electronic device includes: recognizing a first gesture of a hand of a user for setting a region of interest (ROI) corresponding to a view of the user in an image frame corresponding to a view of a camera; generating a virtual display for projecting the ROI based on whether the first gesture is recognized; extracting the ROI from the image frame; recognizing a second gesture of the hand for adjusting a size of the ROI; and adjusting the size of the ROI and projecting the ROI of the adjusted size onto the virtual display, based on whether the second gesture is recognized.
Type: Grant
Filed: May 3, 2022
Date of Patent: April 29, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventor: Paul Oh
-
Patent number: 12283012
Abstract: A cross reality system enables any of multiple devices to efficiently access previously stored maps. Both stored maps and tracking maps used by portable devices may have any of multiple types of location metadata associated with them. The location metadata may be used to select a set of candidate maps for operations, such as localization or map merge, that involve finding a match between a location defined by location information from a portable device and any of a number of previously stored maps. The types of location metadata may be prioritized for use in selecting the subset. To aid in selection of candidate maps, a universe of stored maps may be indexed based on geo-location information. A cross reality platform may update that index as it interacts with devices that supply geo-location information in connection with location information and may propagate that geo-location information to devices that do not supply it.
Type: Grant
Filed: November 15, 2023
Date of Patent: April 22, 2025
Assignee: Magic Leap, Inc.
Inventors: Xuan Zhao, Christian Ivan Robert Moore, Sen Lin, Ali Shahrokni, Ashwin Swaminathan
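Indexing stored maps by geo-location so a device's reported position narrows the candidate set can be illustrated with a minimal sketch. The coarse lat/lon grid cells, cell size, and class names below are assumptions for illustration, not Magic Leap's data structures.

```python
# Minimal sketch: bucket stored maps into coarse geo-cells and return the
# maps in the device's cell plus its neighbours as localization candidates.
from collections import defaultdict

CELL_DEG = 0.01   # hypothetical cell size (~1 km); a real system would tune this

def cell_of(lat, lon):
    return (round(lat / CELL_DEG), round(lon / CELL_DEG))

class MapIndex:
    def __init__(self):
        self.by_cell = defaultdict(list)

    def add_map(self, map_id, lat, lon):
        self.by_cell[cell_of(lat, lon)].append(map_id)

    def candidates(self, lat, lon):
        ci, cj = cell_of(lat, lon)
        found = []
        for di in (-1, 0, 1):            # include neighbouring cells
            for dj in (-1, 0, 1):
                found.extend(self.by_cell[(ci + di, cj + dj)])
        return found

index = MapIndex()
index.add_map("map_a", 47.6205, -122.3493)
index.add_map("map_b", 47.6210, -122.3490)
print(index.candidates(47.6204, -122.3494))   # -> ['map_a', 'map_b']
```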
-
Patent number: 12277635
Abstract: A multimodal search system is described. The system can receive image data from a user device. Additionally, the system can receive a prompt associated with the image data. Moreover, the system can determine, using a computer vision model, a first object in the image data that is associated with the prompt. Furthermore, the system can receive, from the user device, a user indication on whether the image data includes the first object. Subsequently, in response to receiving the user indication, the system can generate a response using a large language model.
Type: Grant
Filed: December 7, 2023
Date of Patent: April 15, 2025
Assignee: GOOGLE LLC
Inventors: Harshit Kharbanda, Louis Wang, Christopher James Kelley, Jessica Lee
-
Patent number: 12277663
Abstract: To accurately replace the position coordinate in a virtual space composed of three-dimensional data with latitude, longitude, and altitude information in a real space. The device for measurement processing images a real space by the camera unit; superimposes virtual space indicating the imaged real space by three-dimensional data on the imaged real space and displays the superimposed spaces; acquires a measurement point from a position-measuring device measuring a latitude, a longitude, and an altitude; images the position-measuring device, associates the position coordinate in the virtual space of the imaged position-measuring device with the position coordinate of the acquired measurement point in the real space; and transforms the coordinate of the three-dimensional data to the position coordinate in the real space by a predetermined transformation equation.
Type: Grant
Filed: July 30, 2021
Date of Patent: April 15, 2025
Assignee: OPTIM CORPORATION
Inventors: Tetsugo Matsuo, Nobuya Nishimoto, Hiroki Takada, Shunji Sugaya, Syunsuke Naganuma, Keisuke Murata, Yasuaki Sakata, Yoshio Okumura, Kenta Kubo
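The abstract's "predetermined transformation equation" is not disclosed here; one generic way to map virtual-space coordinates onto locally-projected real-world coordinates from a handful of measured correspondence points is a least-squares affine fit, sketched below as an assumed illustration.

```python
# Hedged illustration: fit a linear (affine) transform from matched points,
# then map any virtual-space coordinate into the real-space frame.
import numpy as np

def fit_transform(virtual_pts, real_pts):
    """virtual_pts, real_pts: (N, 3) arrays of matched points, N >= 4."""
    v = np.hstack([virtual_pts, np.ones((len(virtual_pts), 1))])   # homogeneous coords
    T, *_ = np.linalg.lstsq(v, real_pts, rcond=None)                # real ~= v @ T (4x3)
    return T

def to_real(T, point):
    return np.append(point, 1.0) @ T

virtual = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
real = virtual * 2.0 + np.array([100.0, 200.0, 5.0])                # synthetic correspondences
T = fit_transform(virtual, real)
print(to_real(T, np.array([0.5, 0.5, 0.0])))                        # ≈ [101. 201. 5.]
```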
-
Patent number: 12277729
Abstract: The embodiments of the disclosure provide a method for providing a visual content, a host, and a computer readable storage medium. The method includes: providing a reference object at a first location to aim at a first point and accordingly determining a first reference line related to the first point, wherein the first point is associated with an external camera; providing the reference object at a second location to aim at the first point and accordingly determining a second reference line related to the first point; determining a camera position of the external camera based on the first reference line and the second reference line; obtaining a specific image captured by the external camera; and generating a specific visual content via combining the specific image with a virtual scene based on the camera position.
Type: Grant
Filed: July 21, 2022
Date of Patent: April 15, 2025
Assignee: HTC Corporation
Inventor: Wei-Jen Chung
-
Patent number: 12277654
Abstract: A number of zones are defined within a location, each zone corresponding to a different physical area within the location. A weighting factor is then assigned to each zone. A number of content items are identified for display in the AR display based on user preferences. Using the weighting factors and content item priority data determined from user preference data, each content item is assigned to a zone. The AR display then renders the content items in each zone.
Type: Grant
Filed: August 30, 2022
Date of Patent: April 15, 2025
Assignee: Adeia Guides Inc.
Inventors: Christopher Phillips, Reda Harb
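One plausible assignment policy — highest-priority content items going to the highest-weighted zones — is sketched below. The round-robin policy, zone names, and numbers are assumptions for illustration, not the claimed algorithm.

```python
# Simple sketch: rank zones by weighting factor and items by priority,
# then assign items round-robin over the ranked zones.
def assign_items_to_zones(zones, items):
    """zones: {zone_name: weight}; items: {item_name: priority}."""
    ranked_zones = sorted(zones, key=zones.get, reverse=True)
    ranked_items = sorted(items, key=items.get, reverse=True)
    layout = {zone: [] for zone in zones}
    for i, item in enumerate(ranked_items):
        zone = ranked_zones[i % len(ranked_zones)]   # spread items over ranked zones
        layout[zone].append(item)
    return layout

zones = {"eye_level_wall": 0.9, "table_top": 0.6, "floor": 0.2}
items = {"calendar": 0.95, "news": 0.7, "weather": 0.5, "ambient_art": 0.1}
print(assign_items_to_zones(zones, items))
```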
-
Patent number: 12277359
Abstract: An information processing device comprises: a camera; a display screen for displaying a captured image; a line-of-sight sensor configured to detect a line of sight of a user and output line-of-sight information; a communication unit configured to provide communication with an image display device that is separate from the information processing device; and a processor, wherein the processor is configured to: determine whether a line-of-sight destination of the user is in the display screen based on the line-of-sight information while the camera is capturing an image; upon determining that the line-of-sight destination of the user is not in the display screen, transfer the captured image being displayed on the display screen to the image display device via the communication unit; and transmit, to the image display device, a display start signal for causing the image display device to display the captured image as transferred.
Type: Grant
Filed: July 1, 2021
Date of Patent: April 15, 2025
Assignee: MAXELL, LTD.
Inventors: Hitoshi Akiyama, Nobukazu Kondo, Yoshinori Okada
-
Patent number: 12268954
Abstract: A computer-readable medium is configured to translate movements performed by a person onto a user in augmented or virtual reality, using a headset display for wearing by the person, and a first controller and a second controller for holding by the person. Each of the first and second controllers is communicatively connectable to the headset display and includes a thumbstick, a first button, a second button, a grip button, and a trigger button. The computer-readable medium is further configured to execute the computer-readable program code to enable the user in the augmented or virtual reality to walk forward by the person forwardly actuating the thumbstick of the first controller and turn by the person tilting left or right the thumbstick of the second controller or by the person physically turning left or right.
Type: Grant
Filed: October 23, 2022
Date of Patent: April 8, 2025
Assignee: JoyWAY LTD
Inventor: Victor Sidorin
-
Patent number: 12272077
Abstract: Embodiments of the invention provide a tracking performance evaluation method and a host. The method includes: establishing a virtual environment; obtaining a tracking result of a virtual tracking device executing a tracking function in the virtual environment; and providing a tracking performance evaluation result for the virtual environment based on the tracking result.
Type: Grant
Filed: January 3, 2023
Date of Patent: April 8, 2025
Assignee: HTC Corporation
Inventors: Nien Hsin Chou, Wen Ting Lo
-
Patent number: 12273589
Abstract: A system for providing customized content includes a recognition device. A media transmission device is in communication with the recognition device and a media delivery device. The media transmission device is in communication with a processor and a memory storing computer instructions configured to instruct the processor to receive a primary media presentation including primary media content for a point of interest of a venue. The media transmission device generates a secondary media presentation including secondary media content for the user based on identified user preferences for the user. The media transmission device generates a custom media presentation including custom media content for the user based on the identified user preferences. The media transmission device transmits the primary media presentation, the secondary media presentation, or the custom media presentation and presents the primary media content, the secondary media content, or the custom media content to the user.
Type: Grant
Filed: October 26, 2023
Date of Patent: April 8, 2025
Inventor: Maris Jacob Ensing
-
Patent number: 12272012
Abstract: In one embodiment, a method includes capturing images of a first user wearing a VR display device in a real-world environment. The method includes receiving a VR rendering of a VR environment. The VR rendering is from the perspective of the mobile computing device with respect to the VR display device. The method includes generating a first MR rendering of the first user in the VR environment. The first MR rendering of the first user is based on a compositing of the images of the first user and the VR rendering. The method includes receiving an indication of a user interaction with one or more elements of the VR environment in the first MR rendering. The method includes generating, in real-time responsive to the indication of the user interaction with the one or more elements, a second MR rendering of the first user in the VR environment. The one or more elements are modified according to the interaction.
Type: Grant
Filed: May 22, 2023
Date of Patent: April 8, 2025
Assignee: Meta Platforms Technologies, LLC
Inventors: Sarah Tanner Simpson, Gregory Smith, Jeffrey Witthuhn, Ying-Chieh Huang, Shuang Li, Wenliang Zhao, Peter Koch, Meghana Reddy Guduru, Ioannis Pavlidis, Xiang Wei, Kevin Xiao, Kevin Joseph Sheridan, Bodhi Keanu Donselaar, Federico Adrian Camposeco Paulsen
-
Patent number: 12271658
Abstract: An example process includes: displaying, on a display of an electronic device, an extended reality (XR) environment corresponding to a copresence session including the electronic device and a second electronic device; while displaying the XR environment: sampling, with a microphone of the electronic device, a first audio input; determining whether the first audio input is intended for a first digital assistant operating on an external electronic device; and in accordance with a determination that the first audio input is intended for the first digital assistant: causing the first digital assistant to provide an audible response to the first audio input, where the audible response is not transmitted to the second electronic device over a shared communication channel for the copresence session.
Type: Grant
Filed: February 23, 2022
Date of Patent: April 8, 2025
Assignee: Apple Inc.
Inventors: Jessica J. Peck, James N. Jones, Ieyuki Kawashima, Lynn I. Streja, Stephen O. Lemay
-
Patent number: 12272096
Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
Type: Grant
Filed: June 15, 2023
Date of Patent: April 8, 2025
Assignee: GOOGLE LLC
Inventors: Jianing Wei, Matthias Grundmann
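A rough sketch of the rotation-plus-anchor-translation idea follows: take a per-frame device rotation estimate (for example, integrated from the gyroscope) and a tracked 2-D anchor position, back-project the anchor to an assumed depth, and combine both into a pose for rendering. The intrinsics, assumed depth, and function names are illustrative assumptions, not Google's implementation.

```python
# Sketch: combine a device rotation estimate with the tracked anchor pixel
# to form a camera-space pose at which to render the virtual content.
import numpy as np

def anchor_pose(rotation_3x3, anchor_px, intrinsics_3x3, assumed_depth=1.0):
    """Back-project the tracked anchor pixel to a 3-D point at an assumed depth
    and pair it with the current device rotation."""
    pixel_h = np.array([anchor_px[0], anchor_px[1], 1.0])
    ray = np.linalg.inv(intrinsics_3x3) @ pixel_h
    translation = assumed_depth * ray / ray[2]
    pose = np.eye(4)
    pose[:3, :3] = rotation_3x3        # orientation of the virtual content
    pose[:3, 3] = translation          # position of the anchor in camera space
    return pose

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R = np.eye(3)                          # stand-in for the per-frame rotation estimate
print(anchor_pose(R, (400, 260), K))
```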
-
Patent number: 12265647
Abstract: A first device coupled with a first display and an image sensor receives output data from a second device having a second display different from the first display. The output data represents content displayable by the second device on the second display. The first device determines, using the image sensor, a position of the second display relative to the first device and causes the first display to display content based on the output data received from the second device and the determined position of the second display relative to the first device.
Type: Grant
Filed: August 16, 2023
Date of Patent: April 1, 2025
Assignee: Apple Inc.
Inventors: Clément Pierre Nicolas Boissière, Timothy R. Oriol
-
Patent number: 12263585
Abstract: Exosuit systems and methods according to various embodiments are described herein. The exosuit system can be a suit that is worn by a wearer on the outside of his or her body. It may be worn under the wearer's normal clothing, over their clothing, between layers of clothing, or may be the wearer's primary clothing itself. The exosuit may be assistive, as it physically assists the wearer in performing particular activities, or can provide other functionality such as communication to the wearer through physical expressions to the body, engagement of the environment, or capturing of information from the wearer.
Type: Grant
Filed: February 19, 2021
Date of Patent: April 1, 2025
Assignee: Seismic Holdings, Inc.
Inventors: Melinda Cromie Lear, Katherine Goss Witherspoon, Megan Grant, Nicole Ida Kernbaum, Richard Mahoney, Mallory L. Tayson-Frederick, Louis Calvin Fielding, Violet Riggs, Erik Shahoian, Mary Elizabeth Hogue
-
Patent number: 12262959
Abstract: The present invention relates to a method for determining the spatial position of objects, in particular medical objects. First position data is acquired that describes a spatial position of an object in a first coordinate system. First transformation data is acquired that transforms the object's position from the first coordinate system to a second coordinate system. Based on the foregoing data, second position data is acquired that specifies the spatial position of the object in the second coordinate system. Second transformation data is acquired that transforms the object's position from the second coordinate system to an inertial coordinate system. Based on the second position data and the second transformation data, inertial position data is determined that specifies a position of the object in the inertial coordinate system.
Type: Grant
Filed: August 31, 2023
Date of Patent: April 1, 2025
Assignee: Brainlab AG
Inventors: Oliver Fleig, Timo Neubauer, Mario Schubert, Sabine Kling
-
Patent number: 12266062
Abstract: An augmented reality system to generate and cause display of a presentation of a space at a first client device, receive one or more selections of points within the presentation of the space at the first client device, and render graphical elements at the one or more points within the presentation of the space at the first client device. The augmented reality system is further configured to receive a display request to display the space at a second client device, and in response, may render a second presentation of the space at the second client device, wherein the second presentation of the space includes the graphical elements at the one or more points.
Type: Grant
Filed: April 11, 2023
Date of Patent: April 1, 2025
Assignee: Snap Inc.
Inventors: Piers George Cowburn, Qi Pan, Isac Andreas Müller Sandvik
-
Patent number: 12266091
Abstract: Provided are a defect detection method and apparatus, and a computer-readable storage medium. Specifically, the method includes: obtaining a to-be-detected image; obtaining a feature map of the to-be-detected image based on the to-be-detected image, where the feature map of the to-be-detected image includes a feature map of spatial position coordinate information; and performing defect detection on the to-be-detected image based on the feature map of the to-be-detected image. By modifying a neural network structure for defect detection and extracting the feature map of spatial position coordinate information during the detection, this application makes the neural network used for defect detection sensitive to spatial position, thereby enhancing sensitivity of the detection neural network to the spatial position, and in turn, increasing accuracy of detecting some specific defect types by the detection neural network, and increasing accuracy of defect detection.
Type: Grant
Filed: July 21, 2023
Date of Patent: April 1, 2025
Assignee: CONTEMPORARY AMPEREX TECHNOLOGY CO., LIMITED
Inventors: Zhiyu Wang, Xi Wang, Guannan Jiang
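One well-known way to give a detection network spatial-position sensitivity is to append normalised coordinate channels to its feature maps (CoordConv-style), sketched below. This illustrates the general idea only; the patented architecture may differ.

```python
# Generic sketch: concatenate normalised x/y position maps onto backbone
# features so the detection head can condition on spatial position.
import torch
import torch.nn as nn

class CoordFeature(nn.Module):
    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return torch.cat([x, xs, ys], dim=1)   # add coordinate information channels

backbone_features = torch.randn(2, 64, 32, 32)
with_coords = CoordFeature()(backbone_features)
head = nn.Conv2d(64 + 2, 16, kernel_size=3, padding=1)   # head now "sees" position
print(head(with_coords).shape)                            # torch.Size([2, 16, 32, 32])
```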
-
Patent number: 12260490
Abstract: An image processing apparatus includes an imaging unit that images a user and a real object, an analysis unit that analyzes an attitude of the real object based on imaging information captured by the imaging unit, and a control unit that controls display of an image related to the real object based on the attitude of the real object.
Type: Grant
Filed: December 1, 2020
Date of Patent: March 25, 2025
Assignee: SONY GROUP CORPORATION
Inventor: Masato Akao
-
Patent number: 12260497
Abstract: According to aspects herein, methods and systems for capacity management of virtual space are provided. The method begins with estimating a capacity size need of a virtual space in a metaverse. A first quantity of avatars is monitored as avatars enter, leave, and occupy the virtual space. The first quantity of avatars in the virtual space is compared with a predetermined fill threshold. Based on the comparing, a duplicate virtual space may be created if the first quantity of avatars in the virtual space exceeds the predetermined fill threshold. A user may transmit a request to join a virtual space in a metaverse to a network. The request to join the virtual space contains at least one avatar identification and may be a request for a group to join. The user or group receives a sequential identifier of the virtual space and then joins the virtual space.
Type: Grant
Filed: December 5, 2022
Date of Patent: March 25, 2025
Assignee: T-Mobile Innovations LLC
Inventor: Zhisheng Chen
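The fill-threshold-and-duplicate behaviour can be sketched with a small capacity manager: a joining avatar or group is routed into an existing space, and a duplicate space with a sequential identifier is created once occupancy would exceed the threshold. The data model and names below are assumptions, not the claimed implementation.

```python
# Minimal sketch: route joins into a virtual space, duplicating the space
# (with a sequential identifier) when the fill threshold would be exceeded.
class VirtualSpaceManager:
    def __init__(self, base_name, capacity, fill_threshold=0.9):
        self.base_name = base_name
        self.fill_limit = int(capacity * fill_threshold)
        self.spaces = {f"{base_name}-1": set()}       # sequential identifiers

    def join(self, avatar_ids):
        """Admit a single avatar or a whole group into one space together."""
        for name, occupants in self.spaces.items():
            if len(occupants) + len(avatar_ids) <= self.fill_limit:
                occupants.update(avatar_ids)
                return name
        new_name = f"{self.base_name}-{len(self.spaces) + 1}"
        self.spaces[new_name] = set(avatar_ids)        # duplicate virtual space
        return new_name

manager = VirtualSpaceManager("concert-hall", capacity=100)
print(manager.join({"avatar_42"}))                     # 'concert-hall-1'
print(manager.join({f"a{i}" for i in range(95)}))      # overflows into 'concert-hall-2'
```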
-
Patent number: 12260145
Abstract: In one implementation, a method of transferring the display of video content from a first location to a second location is performed at a first device including an image sensor, a first display, one or more processors, and non-transitory memory. The method includes detecting a second device in a field-of-view of the image sensor displaying video content at a first location in the physical environment on a second display. The method includes detecting a transfer trigger to display the video content at a second location in the physical environment. The method includes, in response to detecting the transfer trigger, displaying, on the first display, a transfer animation of the video content being moved from the first location to the second location.
Type: Grant
Filed: September 8, 2023
Date of Patent: March 25, 2025
Assignee: APPLE INC.
Inventor: Luis R. Deliz Centeno
-
Patent number: 12254564
Abstract: Aspects of the present disclosure are directed to an artificial intelligence ("AI") application running in conjunction with an artificial reality ("XR") space. The AI Builder responds to user commands, verbal or gestural, to build or edit spaces or objects in space. If the requested object is of a type recognized by the AI Builder, then the AI Builder builds the object from one or more stored templates. The new object's location is determined by the objects that already exist in the user's XR environment and on commands or gestures from the user. If the AI Builder does not recognize the requested object, the user can show an image to the AI Builder, and the AI builds a 3D object in the XR space according to that image. To ease collaboration among users, the AI Builder may present its user interface as a non-player character within the XR world.
Type: Grant
Filed: December 19, 2022
Date of Patent: March 18, 2025
Assignee: Meta Platforms, Inc.
Inventors: Vincent Charles Cheung, Jiemin Zhang, Bradley Duane Kowalk, Meng Wang
-
Patent number: 12254581
Abstract: Aspects of the present disclosure are directed to providing an artificial reality environment with augments and surfaces. An "augment" is a virtual container in 3D space that can include presentation data, context, and logic. An artificial reality system can use augments as the fundamental building block for displaying 2D and 3D models in the artificial reality environment. For example, augments can represent people, places, and things in an artificial reality environment and can respond to a context such as a current display mode, time of day, a type of surface the augment is on, a relationship to other augments, etc. Augments can be on a "surface" that has a layout and properties that cause augments on that surface to display in different ways. Augments and other objects (real or virtual) can also interact, where these interactions can be controlled by rules for the objects evaluated based on information from the shell.
Type: Grant
Filed: November 14, 2023
Date of Patent: March 18, 2025
Assignee: Meta Platforms Technologies, LLC
Inventors: James Tichenor, Arthur Zwiegincew, Hayden Schoen, Alex Marcolina, Gregory Alt, Todd Harris, Merlyn Deng, Barrett Fox, Michal Hlavac
-
Contact determination system, contact determination device, contact determination method and program
Patent number: 12246705
Abstract: A contact determination system includes a processor, and determines whether a contact event occurs between target objects in a vicinity of a host vehicle. The processor is configured to acquire, regarding each of the target objects including at least one of (i) a road user other than the host vehicle and (ii) a road-installed object, a type and a moving speed of the target object. The processor is also configured to perform an overlap determination regarding whether at least two target models respectively modeling the target objects overlap with each other. The processor is further configured to determine whether or not the contact event occurs for a pair of the target objects that have been determined to have an overlap based on the type and moving speed of the target object.
Type: Grant
Filed: July 22, 2022
Date of Patent: March 11, 2025
Assignee: DENSO CORPORATION
Inventor: Isao Okawa
-
Patent number: 12249033
Abstract: In some embodiments, the present disclosure includes techniques and user interfaces for interacting with virtual objects in an extended reality environment. In some embodiments, the techniques and user interfaces are for interacting with virtual objects in an extended reality environment, including repositioning virtual objects relative to the environment. In some embodiments, the techniques and user interfaces are for interacting with virtual objects, in an extended reality environment, including virtual objects that aid a user in navigating within the environment. In some embodiments, the techniques and user interfaces are for interacting with virtual objects, including objects displayed based on changes in a field-of-view of a user, in an extended reality environment, including repositioning virtual objects relative to the environment.
Type: Grant
Filed: September 6, 2023
Date of Patent: March 11, 2025
Assignee: Apple Inc.
Inventors: Yiqiang Nie, Giovanni Agnoli, Devin W. Chalmers, Allison W. Dryer, Thomas G. Salter, Giancarlo Yerkes
-
Patent number: 12249050
Abstract: A computer-implemented method includes obtaining a 3D model of a real-world environment; receiving an image of the real-world environment captured using a camera, and pose information indicative of the camera pose from which the image is captured; utilising the 3D model of the real-world environment to generate a reconstructed depth map from the perspective of the camera pose; and applying extended depth-of-field correction to image segment(s) of the image that is/are out of focus, by using a point spread function determined for the camera, based on optical depths in segment(s) of the reconstructed depth map corresponding to the image segment(s) of the image.
Type: Grant
Filed: November 21, 2022
Date of Patent: March 11, 2025
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Mikko Ollila
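One standard way to sharpen an out-of-focus segment given a point spread function is Wiener deconvolution, sketched below as an assumed illustration (not Varjo's pipeline); in practice the PSF would be selected per segment based on the optical depth read from the reconstructed depth map.

```python
# Sketch: Wiener deconvolution of an out-of-focus image segment with a known
# point spread function (PSF), padded to the segment size.
import numpy as np

def wiener_deblur(segment, psf, noise_power=0.01):
    """segment: 2-D image patch; psf: 2-D kernel padded to the same shape."""
    S = np.fft.fft2(segment)
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power)   # Wiener filter
    return np.real(np.fft.ifft2(G * S))

# toy usage: a flat defocus-like PSF; a real PSF would depend on segment depth
segment = np.random.rand(64, 64)
psf = np.zeros((64, 64))
psf[29:35, 29:35] = 1.0 / 36.0
print(wiener_deblur(segment, psf).shape)   # (64, 64)
```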
-
Patent number: 12243303
Abstract: A processing system including at least one processor may identify a user viewpoint of a user at a first venue, match a viewpoint at a second venue to the user viewpoint of the user at the first venue, detect a trigger condition to provide visual content of the second venue to the user at the first venue, obtain the visual content of the second venue, wherein the visual content of the second venue is obtained from the viewpoint at the second venue, and provide the visual content of the second venue to an augmented reality device of the user at the first venue, where the augmented reality device presents the visual content of the second venue as a visual overlay within a field of view of the user.
Type: Grant
Filed: April 17, 2023
Date of Patent: March 4, 2025
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Walter Cooper Chastain, Barrett Kreiner, James Pratt, Adrianne Binh Luu, Robert T. Moton, Jr., Ari Craine, Robert Koch
-
Patent number: 12243205
Abstract: A method for avoiding a field of view disturbance for an operator of an object. The field of view disturbance is caused by the overlay of virtual additional information via a display unit of a transportation vehicle or data glasses. The overlay of additional information supports the operator in the operation of an object. The method carries out an image analysis by which it is checked whether a field of view disturbance is caused by the overlay of the virtual additional information, and a measure for suppressing the field of view disturbance is carried out when a field of view disturbance is detected.
Type: Grant
Filed: October 9, 2019
Date of Patent: March 4, 2025
Assignee: VOLKSWAGEN AKTIENGESELLSCHAFT
Inventors: Dorit Foltin, Andreas Gäde, Christian Becker, Sandro Schwertner
-
Patent number: 12243316
Abstract: In an example, a prescreening method includes: capturing an image of one or more items for a user; processing the image of the one or more items by image recognition; analyzing the image processed by image recognition to determine whether any of the one or more items are listed on a list; if none of the one or more items in the image is listed on the list, informing the user that none of the one or more items in the image is listed on the list; and if any of the one or more items in the image is listed on the list, identifying each item in the image which is listed on the list to the user.
Type: Grant
Filed: March 19, 2024
Date of Patent: March 4, 2025
Assignee: The Government of the United States of America, as represented by the Secretary of Homeland Security
Inventor: John F. Jasinski, Jr.
-
Patent number: 12243443
Abstract: A system for treatment and/or psychology intervention of attention deficit disorders in a user in need thereof. The system uses a head mounted display equipped with an eye tracking device. The system is configured to display a scene that includes a principal object and at least one secondary object, then use the eye tracking device to determine whether the user's gaze is directed to one of the objects. If the user's gaze is directed to one of the secondary objects beyond a threshold criterion, that secondary object is removed from the scene. If the user's gaze is directed to the principal object beyond a threshold criterion, a previously removed secondary object is reintroduced into the scene. The principal object is a desired object of the user's attention, while the secondary object(s) may be potentially or known distracting objects. Corresponding software and methods are also disclosed.
Type: Grant
Filed: May 10, 2022
Date of Patent: March 4, 2025
Assignee: Kennesaw State University Research And Service Foundation, Inc.
Inventor: Chao Mei
-
Patent number: 12243010
Abstract: An information processing apparatus (2000) acquires a shelf rack image (12) in which a product shelf rack on which a product is displayed is imaged. The information processing apparatus (2000) performs image analysis on the shelf rack image (12), and generates information (actual display information) relevant to a display situation of the product on a product shelf rack (20). The information processing apparatus (2000) acquires reference display information representing a reference for display of the product on the product shelf rack (20). The information processing apparatus (2000) compares the actual display information generated by performing the image analysis on the shelf rack image (12) with the acquired reference display information, and generates comparison information representing a result.
Type: Grant
Filed: August 1, 2023
Date of Patent: March 4, 2025
Assignee: NEC CORPORATION
Inventors: Yaeko Yonezawa, Kaito Horita, Akira Yajima, Mizuto Sekine, Yoshinori Ehara
-
Patent number: 12242667
Abstract: Eye and hand tracking systems in head-mounted display (HMD) devices are arranged with lensless camera systems using optical masks as encoding elements that apply convolutions to optical images of body parts (e.g., eyes or hands) of HMD device users. The convolved body images are scrambled or coded representations that are captured by a sensor in the system, but are not human-recognizable. A machine learning system such as a neural network is configured to extract body features directly from the coded representation without performance of deconvolutions conventionally utilized to reconstruct the original body images in human-recognizable form. The extracted body features are utilized by the respective eye or hand tracking systems to output relevant tracking data for the user's eyes or hands which may be utilized by the HMD device to support various applications and user experiences. The lensless camera and machine learning system are jointly optimizable on an end-to-end basis.
Type: Grant
Filed: October 3, 2023
Date of Patent: March 4, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Curtis Alan Tesdahl, Benjamin Eliot Lundell, David Rohn, Dmitry Reshidko, Dmitriy Churin, Kevin James Matherson, Sayyed Jaffar Ali Raza
-
Patent number: 12236527
Abstract: A remote-control system, a remote-controller, and a remote-control method are provided. The remote-control method includes: obtaining environmental image data by an image capture device of a remote-controller; building a map according to the environmental image data based on simultaneous localization and mapping (SLAM) algorithm, and obtaining first location information of a first display in the map according to the environmental image data by the remote-controller; and receiving the first location information from the remote-controller and controlling the first display according to the first location information by a computing device.
Type: Grant
Filed: September 12, 2022
Date of Patent: February 25, 2025
Assignee: HTC Corporation
Inventors: Chun-Kai Huang, Jyun-Jhong Lin
-
Patent number: 12238403
Abstract: A method includes obtaining a photographed image from a photographing device carried by a movable object, determining a target object in the photographed image to track the target object, determining location estimation information of the target object based on shooting information of the photographing device, and in response to a disappearance of the target object in the photographed image, controlling the photographing device to search for the target object around a location associated with the location estimation information.
Type: Grant
Filed: December 19, 2023
Date of Patent: February 25, 2025
Assignee: SZ DJI TECHNOLOGY CO., LTD.
Inventors: Jie Qian, Haonan Li, Cong Zhao
-
Patent number: 12236535
Abstract: Augmented reality guidance for guiding a user through an environment using an eyewear device. The eyewear device includes a display system and a position detection system. A user is guided through an environment by monitoring a current position of the eyewear device within the environment, identifying marker positions within a threshold of the current position, the marker positions defined with respect to the environment and associated with guidance markers, registering the marker positions, generating an overlay image including the guidance markers, and presenting the overlay image on a display of the eyewear device.
Type: Grant
Filed: November 15, 2023
Date of Patent: February 25, 2025
Assignee: Snap Inc.
Inventors: Shin Hwun Kang, Dmytro Kucher, Dmytro Hovorov, Ilteris Canberk
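The "marker positions within a threshold of the current position" step can be illustrated with a small helper that filters environment-relative markers by distance from the tracked device position; the data structures and threshold below are assumptions, not Snap's implementation.

```python
# Illustrative sketch: keep only the guidance markers within a distance
# threshold of the eyewear device's current position, nearest first.
import math

def nearby_markers(current_pos, markers, threshold_m=5.0):
    """current_pos: (x, y, z); markers: {marker_id: (x, y, z)} in environment coords."""
    visible = []
    for marker_id, pos in markers.items():
        dist = math.dist(current_pos, pos)
        if dist <= threshold_m:
            visible.append((dist, marker_id))
    return [marker_id for _, marker_id in sorted(visible)]

markers = {"turn_left": (2.0, 0.0, 1.5), "exit": (30.0, 0.0, 2.0)}
print(nearby_markers((0.0, 0.0, 1.6), markers))   # ['turn_left']
```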
-
Patent number: 12236550
Abstract: Described are improved systems and methods for navigation and manipulation of interactable objects in a 3D mixed reality environment. Improved systems and methods are provided to implement physical manipulation for creation and placement of interactable objects, such as browser windows and wall hangings. A method includes receiving data indicating a selection of an interactable object contained within a first prism at the start of a user interaction. The method also includes receiving data indicating an end of the user interaction with the interactable object. The method further includes receiving data indicating a physical movement of the user corresponding to removing the interactable object from the first prism between the start and the end of the user interaction. Moreover, the method includes creating a second prism to contain the data associated with the interactable object at the end of the user interaction with the interactable object.
Type: Grant
Filed: March 13, 2023
Date of Patent: February 25, 2025
Assignee: Magic Leap, Inc.
Inventors: Tim Zurmoehle, Andrea Isabel Montoya, Robert John Cummings Macdonald, Sakina Groth, Genevieve Mak
-
Patent number: 12236819
Abstract: Various implementations disclosed herein include devices, systems, and methods for augmenting a physical writing surface. In various implementations, a device includes a display, a non-transitory memory and one or more processors coupled with the display and the non-transitory memory. In various implementations, a method includes presenting, via the display, a pass-through representation of a physical writing surface that corresponds to an application installed on the device. In some implementations, the method includes detecting a difference between the physical writing surface and an electronic record stored in association with the application. In some implementations, the method includes overlaying an element on the pass-through representation of the physical writing surface based on the difference between the physical writing surface and the electronic record.
Type: Grant
Filed: February 28, 2022
Date of Patent: February 25, 2025
Assignee: APPLE INC.
Inventors: Thomas G. Salter, Anshu Kameswar Chimalamarri
-
Patent number: 12236921
Abstract: Systems and methods described herein in accordance with some embodiments may display augmented reality (AR) information related to an object based on a predicted persistence of the object within view of the user. In some embodiments, display of AR information is prioritized based upon a prediction of how soon an object will disappear from view, perhaps if the predicted disappearance will occur sooner than some threshold time. Systems and methods described herein in accordance with some embodiments extend to mixed reality (MR) systems, and provide for displaying an image of an object that has become obstructed or has disappeared from view and/or for displaying AR information related to an object that has become obstructed or has disappeared from view.
Type: Grant
Filed: February 24, 2022
Date of Patent: February 25, 2025
Assignee: InterDigital VC Holdings, Inc.
Inventor: Mona Singh
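A very simple screen-space model of "how soon will this object leave the view" is sketched below: predict time-to-exit from position and velocity, and prioritise labels for objects below a threshold. The 2-D model, frame size, and threshold are assumptions for illustration, not the patented method.

```python
# Hedged sketch: predict seconds until each tracked object exits the frame
# and prioritise AR labels for objects that will disappear soonest.
def seconds_until_exit(pos, vel, frame_w=1920, frame_h=1080):
    """pos: (x, y) pixels; vel: (vx, vy) pixels/second."""
    times = []
    if vel[0] > 0: times.append((frame_w - pos[0]) / vel[0])
    if vel[0] < 0: times.append(pos[0] / -vel[0])
    if vel[1] > 0: times.append((frame_h - pos[1]) / vel[1])
    if vel[1] < 0: times.append(pos[1] / -vel[1])
    return min(times) if times else float("inf")   # a stationary object never exits

def prioritised_labels(tracked, threshold_s=2.0):
    scored = [(seconds_until_exit(p, v), obj) for obj, (p, v) in tracked.items()]
    return [obj for t, obj in sorted(scored) if t < threshold_s]

tracked = {"bus": ((1800, 500), (240, 0)), "shop_sign": ((900, 400), (5, 0))}
print(prioritised_labels(tracked))                  # ['bus'] - leaving view soon
```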
-
Patent number: 12236011
Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for selectively rendering augmented reality content based on predictions regarding a user's ability to visually process the augmented reality content. For instance, the disclosed systems can identify eye tracking information for a user at an initial time. Moreover, the disclosed systems can predict a change in an ability of the user to visually process an augmented reality element at a future time based on the eye tracking information. Additionally, the disclosed systems can selectively render the augmented reality element at the future time based on the predicted change in the ability of the user to visually process the augmented reality element.
Type: Grant
Filed: December 28, 2022
Date of Patent: February 25, 2025
Assignee: Meta Platforms, Inc.
Inventors: Mark Terrano, Ian Erkelens, Kevin James MacKenzie
-
Patent number: 12229907
Abstract: Systems and methods are provided for displaying supplemental content for print media using augmented reality. A user profile for a user of an augmented reality device is determined. Content of the print media is searched to identify a first portion of the print media that matches the user profile and a second portion of the print media that does not match the user profile. Supplemental content is obtained based on content of the first portion of the print media. A display of the supplemental content is positioned over the second portion of the print media.
Type: Grant
Filed: September 26, 2023
Date of Patent: February 18, 2025
Assignee: ADEIA GUIDES INC.
Inventors: Lucas Waye, Theresa Tokesky, Michael A. Montalto, Kanagasabai Sivanadian
-
Patent number: 12229897
Abstract: Systems, methods, and apparatus are provided for intelligent dynamic rendering of an augmented reality (AR) display. An AR device may capture an image of a physical environment. Feature extraction combined with deep learning may be used for object recognition and detection of changes in the environment. Contextual analysis of the environment based on the deep learning outputs may enable improved integration of AR rendering with the physical environment in real time. A BLE beacon feed may provide supplemental information regarding the physical environment. The beacon feed may be extracted, classified, and labeled based on user interests using machine learning algorithms. The beacon feed may be paired with the AR device to incorporate customized location-based graphics and text into the AR display.
Type: Grant
Filed: March 10, 2022
Date of Patent: February 18, 2025
Assignee: Bank of America Corporation
Inventor: Shailendra Singh