Patents Examined by Chong Wu
-
Patent number: 12272019
Abstract: The creation of synthetic scenes from a combination of synthetic data and real environment data to allow for testing of extended reality (XR) applications on an electronic device is disclosed. In order to efficiently test an XR application, a scene data configuration can be specified within a synthetic service, representing a combination of different synthetic data and real environment data. In addition, scene understanding and alignment metadata can be added to the scene data. When the XR application is initiated, a synthetic scene in accordance with the scene data configuration and the added metadata can be rendered and presented on a display of the electronic device. The XR application can then be tested within the presented synthetic scene, with the application interacting with both the synthetic data and the real environment data of the synthetic scene as though it were interacting with real objects in a real environment.
Type: Grant
Filed: March 17, 2023
Date of Patent: April 8, 2025
Assignee: Apple Inc.
Inventors: Przemyslaw M. Iwanowski, Joshua J. Taylor
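A scene data configuration of this kind could be captured in a small declarative structure that a synthetic service resolves into a renderable scene. The following is a minimal sketch; the SceneConfig and SceneEntry names, their fields, and the compose_scene helper are illustrative assumptions, not the patent's actual schema.

```python
from dataclasses import dataclass, field
from typing import Literal

# Hypothetical schema: names and fields are illustrative, not the patent's actual format.
@dataclass
class SceneEntry:
    source: Literal["synthetic", "real"]            # where the data comes from
    asset: str                                      # asset identifier or capture stream name
    alignment: dict = field(default_factory=dict)   # e.g. anchor transforms

@dataclass
class SceneConfig:
    entries: list[SceneEntry]
    scene_understanding: dict = field(default_factory=dict)  # e.g. plane/semantic labels

def compose_scene(config: SceneConfig) -> list[dict]:
    """Resolve a scene configuration into a flat list of renderable items."""
    scene = []
    for entry in config.entries:
        scene.append({
            "asset": entry.asset,
            "is_synthetic": entry.source == "synthetic",
            "alignment": entry.alignment,
            "metadata": config.scene_understanding,
        })
    return scene

if __name__ == "__main__":
    cfg = SceneConfig(
        entries=[
            SceneEntry("real", "passthrough_camera"),
            SceneEntry("synthetic", "table_model", {"anchor": "floor_plane"}),
        ],
        scene_understanding={"floor_plane": {"height_m": 0.0}},
    )
    for item in compose_scene(cfg):
        print(item)
```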
-
Patent number: 12260018
Abstract: Systems and methods are described for extended reality environment interaction. An extended reality environment including an object is generated for display, and an eye motion is detected. Based on the detecting, it is determined whether the object is in a field of view for at least a predetermined period of time, and in response to determining that the object is in the field of view for at least the predetermined period of time, one or more items related to the object are generated for display in the extended reality environment.
Type: Grant
Filed: September 8, 2023
Date of Patent: March 25, 2025
Assignee: Adeia Guides Inc.
Inventor: Sakura Saito
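The dwell logic described here, showing related items once an object has remained in the field of view for a threshold time, can be sketched as a small tracker. The class name, the monotonic-clock time source, and the 0.8-second threshold are assumptions; the abstract only specifies "a predetermined period of time".

```python
import time

DWELL_SECONDS = 0.8  # assumed threshold; the abstract only says "a predetermined period of time"

class DwellTracker:
    """Track how long an object has continuously been in the field of view."""
    def __init__(self, dwell_seconds: float = DWELL_SECONDS):
        self.dwell_seconds = dwell_seconds
        self._entered_at = None
        self._triggered = False

    def update(self, object_in_view: bool, now=None) -> bool:
        """Return True exactly once when the dwell threshold is reached."""
        now = time.monotonic() if now is None else now
        if not object_in_view:
            self._entered_at = None
            self._triggered = False
            return False
        if self._entered_at is None:
            self._entered_at = now
        if not self._triggered and now - self._entered_at >= self.dwell_seconds:
            self._triggered = True
            return True   # caller would now display items related to the object
        return False

if __name__ == "__main__":
    tracker = DwellTracker()
    for t in [0.0, 0.3, 0.6, 0.9]:
        if tracker.update(object_in_view=True, now=t):
            print(f"show related items at t={t}s")
```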
-
Patent number: 12254584
Abstract: Methods comprising generating, by a processor, a 3D model of an object; facilitating displaying, by the processor using the 3D model of the object, a 3D display of the object on an electronic device of a user; receiving, by the processor and from the electronic device of the user, a zoom selection on the 3D display of the object; in response to receiving the zoom selection, facilitating displaying, by the processor, a zoomed 3D display of the object on the electronic device of the user; receiving, by the processor and from the electronic device of the user, a zoom rotation selection of the object in the zoomed 3D display; and in response to receiving the zoom rotation selection, facilitating rotating, by the processor, the 3D display of the object in the zoomed 3D display on the electronic device of the user. Other embodiments are disclosed herein.
Type: Grant
Filed: November 29, 2022
Date of Patent: March 18, 2025
Assignee: Carvana, LLC
Inventors: Alan Richard Melling, Remy Tristan Cilia, Bruno Jean Francois
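One plausible way to handle the zoom selection and zoom-rotation selection is as updates to a small viewer state, sketched below. The field names, clamping ranges, and orbit-style rotation are assumptions for illustration, not the assignee's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ViewerState:
    """Minimal, illustrative 3D-viewer state; field names are assumptions."""
    zoom: float = 1.0          # 1.0 = full object in view
    yaw_deg: float = 0.0       # rotation about the vertical axis
    pitch_deg: float = 0.0

def apply_zoom_selection(state: ViewerState, zoom_factor: float) -> ViewerState:
    # Clamp so the user cannot zoom out past the full view or zoom in without bound.
    state.zoom = max(1.0, min(10.0, state.zoom * zoom_factor))
    return state

def apply_zoom_rotation(state: ViewerState, d_yaw: float, d_pitch: float) -> ViewerState:
    # Rotation while zoomed: the same orbit controls, applied to the zoomed display.
    state.yaw_deg = (state.yaw_deg + d_yaw) % 360.0
    state.pitch_deg = max(-89.0, min(89.0, state.pitch_deg + d_pitch))
    return state

if __name__ == "__main__":
    s = ViewerState()
    apply_zoom_selection(s, 2.5)         # user taps to zoom in
    apply_zoom_rotation(s, 30.0, -10.0)  # user drags to rotate the zoomed display
    print(s)
```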
-
Patent number: 12254581
Abstract: Aspects of the present disclosure are directed to providing an artificial reality environment with augments and surfaces. An “augment” is a virtual container in 3D space that can include presentation data, context, and logic. An artificial reality system can use augments as the fundamental building block for displaying 2D and 3D models in the artificial reality environment. For example, augments can represent people, places, and things in an artificial reality environment and can respond to a context such as a current display mode, time of day, a type of surface the augment is on, a relationship to other augments, etc. Augments can be on a “surface” that has a layout and properties that cause augments on that surface to display in different ways. Augments and other objects (real or virtual) can also interact, where these interactions can be controlled by rules for the objects evaluated based on information from the shell.
Type: Grant
Filed: November 14, 2023
Date of Patent: March 18, 2025
Assignee: Meta Platforms Technologies, LLC
Inventors: James Tichenor, Arthur Zwiegincew, Hayden Schoen, Alex Marcolina, Gregory Alt, Todd Harris, Merlyn Deng, Barrett Fox, Michal Hlavac
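An augment as described, a container holding presentation data, context, and logic, placed on a surface whose layout affects how it displays, might look roughly like the following sketch. All class names, fields, and the grid/freeform layout behavior are assumptions; the real schema is not public.

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative only: the actual augment/surface schema is not described beyond the abstract.
@dataclass
class Augment:
    presentation: dict                                   # e.g. {"model": "clock.glb"}
    context: dict = field(default_factory=dict)          # e.g. {"display_mode": "ar"}
    logic: Callable[["Augment", dict], dict] | None = None  # reacts to context from the shell

    def render_spec(self, shell_context: dict) -> dict:
        merged = {**self.context, **shell_context}
        if self.logic is not None:
            return self.logic(self, merged)
        return {**self.presentation, "context": merged}

@dataclass
class Surface:
    layout: str                                          # e.g. "grid", "freeform"
    properties: dict = field(default_factory=dict)
    augments: list[Augment] = field(default_factory=list)

    def layout_augments(self, shell_context: dict) -> list[dict]:
        specs = [a.render_spec(shell_context) for a in self.augments]
        for i, spec in enumerate(specs):
            spec["slot"] = i if self.layout == "grid" else None
        return specs

if __name__ == "__main__":
    def dim_at_night(augment, ctx):
        # Example of augment logic responding to a context value supplied by the shell.
        return {**augment.presentation,
                "brightness": 0.3 if ctx.get("time_of_day") == "night" else 1.0}

    wall = Surface(layout="grid",
                   augments=[Augment({"model": "clock.glb"}, logic=dim_at_night)])
    print(wall.layout_augments({"time_of_day": "night"}))
```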
-
Patent number: 12249024
Abstract: Examples of the disclosure describe systems and methods for presenting virtual content on a wearable head device. In some embodiments, a state of a wearable head device is determined by minimizing a total error based on a reduced weight associated with a reprojection error. A view reflecting the determined state of the wearable head device is presented via a display of the wearable head device. In some embodiments, a wearable head device calculates a preintegration term based on the image data received via a sensor of the wearable head device and the inertial data received via a first IMU and a second IMU of the wearable head device. The wearable head device estimates a position of the device based on the preintegration term, and the wearable head device presents the virtual content based on the position of the device.
Type: Grant
Filed: February 12, 2024
Date of Patent: March 11, 2025
Assignee: Magic Leap, Inc.
Inventors: Yu-Hsiang Huang, Evan Gregory Levine, Igor Napolskikh, Dominik Michael Kasper, Manel Quim Sanchez Nicuesa, Sergiu Sima, Benjamin Langmann, Ashwin Swaminathan, Martin Georg Zahnert, Blazej Marek Czuprynski, Joao Antonio Pereira Faro, Christoph Tobler, Omid Ghasemalizadeh
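The central idea of the first embodiment, combining visual and inertial terms in a total error while reducing the weight on the reprojection term, can be written schematically as a weighted cost. The weights, residual stand-ins, and toy one-dimensional state below are assumptions; a real system would minimize such a cost with a nonlinear least-squares solver rather than the grid evaluation shown.

```python
def total_error(state, visual_residuals, inertial_residuals, reproj_weight=0.25):
    """Weighted sum of squared residuals; reproj_weight < 1 reduces the influence
    of the reprojection term relative to the inertial (preintegration) term."""
    e_reproj = sum(r(state) ** 2 for r in visual_residuals)
    e_inertial = sum(r(state) ** 2 for r in inertial_residuals)
    return reproj_weight * e_reproj + e_inertial

if __name__ == "__main__":
    # Toy 1-D "state": a single position value. The residual functions are stand-ins
    # for reprojection and IMU-preintegration residuals.
    visual = [lambda x: x - 1.2]     # image evidence says x is about 1.2
    inertial = [lambda x: x - 1.0]   # preintegrated IMU evidence says x is about 1.0
    # Evaluate a few candidates; the minimum sits closer to the inertial estimate
    # because the reprojection term carries a reduced weight.
    for x in [1.0, 1.05, 1.1, 1.15, 1.2]:
        print(x, round(total_error(x, visual, inertial), 4))
```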
-
Patent number: 12249037
Abstract: A system aligns a 3D model of an environment with image frames of the environment and generates a visualization interface that displays a portion of the 3D model and a corresponding image frame. The system receives LIDAR data collected in the environment and generates a 3D model based on the LIDAR data. For each image frame, the system aligns the image frame with the 3D model. After aligning the image frames with the 3D model, when the system presents a portion of the 3D model in an interface, it also presents an image frame that corresponds to the portion of the 3D model.
Type: Grant
Filed: February 6, 2024
Date of Patent: March 11, 2025
Assignee: Open Space Labs, Inc.
Inventors: Michael Ben Fleischman, Jeevan James Kalanithi, Gabriel Hein, Elliott St. George Wilson Kember
-
Patent number: 12236546
Abstract: An extended reality environment includes one or more virtual objects. A virtual object can be selected for manipulation by orienting a pointing device towards the virtual object and performing a selection input. In some embodiments, if the selection input is a first type of selection gesture received at the pointing device, the virtual object is selected for movement operations. In some embodiments, if the selection input is a second type of selection gesture received at the pointing device, the virtual object is selected for rotation operations. While the virtual object is selected for movement or rotation operations, the virtual object moves or rotates in accordance with the movement or rotation of the user's hand, respectively. In some embodiments, the virtual object has a first visual characteristic when the pointing device is pointing towards the virtual object and has a second visual characteristic when the virtual object is selected.
Type: Grant
Filed: August 20, 2021
Date of Patent: February 25, 2025
Assignee: Apple Inc.
Inventor: David A. Lipton
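The gesture-dependent selection could be modeled as a simple mapping from gesture type to an operation mode that then governs how hand motion is applied. The gesture names (pinch vs. double pinch) and the dictionary-based object representation below are assumptions for illustration.

```python
from enum import Enum, auto

class Gesture(Enum):
    PINCH = auto()         # assumed "first type" of selection gesture
    DOUBLE_PINCH = auto()  # assumed "second type"

class Mode(Enum):
    MOVE = auto()
    ROTATE = auto()

def select_object(gesture: Gesture) -> Mode:
    """Map the selection gesture received at the pointing device to an operation mode."""
    return Mode.MOVE if gesture is Gesture.PINCH else Mode.ROTATE

def apply_hand_delta(mode: Mode, obj: dict, hand_translation, hand_rotation) -> dict:
    # While selected, the object follows the hand: translation in MOVE mode,
    # rotation in ROTATE mode.
    if mode is Mode.MOVE:
        obj["position"] = [p + d for p, d in zip(obj["position"], hand_translation)]
    else:
        obj["rotation_deg"] = [r + d for r, d in zip(obj["rotation_deg"], hand_rotation)]
    return obj

if __name__ == "__main__":
    obj = {"position": [0.0, 0.0, -1.0], "rotation_deg": [0.0, 0.0, 0.0]}
    mode = select_object(Gesture.PINCH)
    print(apply_hand_delta(mode, obj, (0.1, 0.0, 0.0), (0.0, 0.0, 0.0)))
```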
-
Patent number: 12236540
Abstract: A computer system concurrently displays a view of a physical environment; and a computer-generated user interface element overlaid on the view of the physical environment. An appearance of the computer-generated user interface element is based on an appearance of the view of the physical environment on which the computer-generated user interface element is overlaid. In response to an appearance of the physical environment changing, the appearance of the computer-generated user interface element is updated at a first time based on a graphical composition of the appearance of one or more portions of the physical environment at different times prior to the first time, including: an appearance of a first portion of the physical environment at a second time that is before the first time; and an appearance of a second portion of the physical environment at a third time that is before the second time.
Type: Grant
Filed: September 21, 2022
Date of Patent: February 25, 2025
Assignee: Apple Inc.
Inventors: Miquel Estany Rodriguez, Wan Si Wan, Gregory M. Apodaca, William A. Sorrentino, III, James J. Owen, Pol Pla I. Conesa, Alan C. Dye
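The graphical composition over portions of the environment sampled at different earlier times resembles a temporally weighted blend. The sketch below averages recent color samples with exponentially decaying weights; the decay constant and per-channel averaging are assumptions, not the actual rendering method.

```python
from collections import deque

class BackdropCompositor:
    """Illustrative temporal composition: blend recent samples of the environment
    behind a UI element with exponentially decaying weights."""
    def __init__(self, max_samples: int = 4, decay: float = 0.5):
        self.samples = deque(maxlen=max_samples)  # most recent sample is last
        self.decay = decay

    def add_sample(self, mean_rgb) -> None:
        self.samples.append(mean_rgb)

    def composed_rgb(self):
        weights, acc, w = 0.0, [0.0, 0.0, 0.0], 1.0
        for rgb in reversed(self.samples):   # newest sample gets the largest weight
            for c in range(3):
                acc[c] += w * rgb[c]
            weights += w
            w *= self.decay
        return tuple(c / weights for c in acc)

if __name__ == "__main__":
    comp = BackdropCompositor()
    comp.add_sample((0.2, 0.2, 0.2))   # earlier appearance of one portion
    comp.add_sample((0.8, 0.7, 0.6))   # later appearance of another portion
    print(comp.composed_rgb())         # UI element tint follows the composition
```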
-
Patent number: 12223596
Abstract: Embodiments disclosed herein mitigate technological barriers in preparing, sourcing, and exporting 3D assets with practical applications for journalism. According to one embodiment, a computer-implemented method for generating a three-dimensional map is provided. The method includes displaying a two-dimensional region of a map on a graphical user interface (GUI), wherein the map is displayed along a longitudinal axis and a latitudinal axis. The method includes obtaining, through the GUI, a first input from a user device, the first input comprising an indication of a selected sub-region of the two-dimensional region of the map, wherein the selected sub-region comprises a plurality of pixels having longitudinal coordinates bounded by a first longitude coordinate and a second longitude coordinate along the longitudinal axis and latitudinal coordinates bounded by a first latitude coordinate and a second latitude coordinate along the latitudinal axis.
Type: Grant
Filed: November 17, 2022
Date of Patent: February 11, 2025
Assignee: THE NEW YORK TIMES COMPANY
Inventors: Or Fleisher, Sukanya Aneja
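Selecting a sub-region bounded by two longitude and two latitude coordinates amounts to testing which map pixels fall inside those bounds. A minimal sketch, assuming a simple equirectangular mapping of the displayed map; the projection and image size are assumptions for illustration.

```python
def pixels_in_subregion(width, height, lon_min, lon_max, lat_min, lat_max,
                        lon_range=(-180.0, 180.0), lat_range=(-90.0, 90.0)):
    """Return pixel (x, y) coordinates whose longitude/latitude fall inside the
    selected bounds, assuming an equirectangular mapping of the map image."""
    selected = []
    for y in range(height):
        # y = 0 is the top row, which maps to the maximum latitude.
        lat = lat_range[1] - (y + 0.5) / height * (lat_range[1] - lat_range[0])
        for x in range(width):
            lon = lon_range[0] + (x + 0.5) / width * (lon_range[1] - lon_range[0])
            if lon_min <= lon <= lon_max and lat_min <= lat <= lat_max:
                selected.append((x, y))
    return selected

if __name__ == "__main__":
    # Tiny 8x4 "map"; select a sub-region bounded by two longitudes and two latitudes.
    pixels = pixels_in_subregion(8, 4, lon_min=-45, lon_max=45, lat_min=0, lat_max=45)
    print(len(pixels), "pixels selected:", pixels)
```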
-
Patent number: 12216971
Abstract: An example computing system is configured to (i) receive a request to generate a cross-sectional view of a three-dimensional drawing file, where the cross-sectional view is based on a location of a cross-section line within the three-dimensional drawing file and includes an intersection of two meshes within the three-dimensional drawing file; (ii) generate the cross-sectional view of the three-dimensional drawing file; (iii) add, to the generated cross-sectional view, dimensioning information involving at least one of the two meshes; (iv) generate one or more controls for adjusting a location of the cross-section line within the three-dimensional drawing file; and (v) based on an input indicating a selection of the one or more controls, adjust the location of the cross-section line within the three-dimensional drawing file, update the cross-sectional view based on the adjusted location of the cross-section line, and update the dimensioning information to correspond to the updated cross-sectional view.
Type: Grant
Filed: February 26, 2024
Date of Patent: February 4, 2025
Assignee: Procore Technologies, Inc.
Inventors: David McCool, Christopher Myers, Christopher Bindloss
-
Patent number: 12216970
Abstract: An example computing system is configured to (i) generate a cross-sectional view of a three-dimensional drawing file; (ii) receive a first user input indicating a selection of a first mesh, wherein the selection comprises a selection point that establishes a first end point; (iii) generate a first representation indicating an alignment of the first end point with at least one corresponding geometric feature of the first mesh and a second representation indicating a set of one or more directions; (iv) receive a second user input indicating a given direction; (v) based on receiving the second user input, generate a dynamic representation of the dimensioning information along the given direction; (vi) receive a third user input indicating that the second user input is complete; (vii) based on receiving the third user input, add the dimensioning information to the cross-sectional view between the first end point and the second end point.
Type: Grant
Filed: December 4, 2023
Date of Patent: February 4, 2025
Assignee: Procore Technologies, Inc.
Inventors: Ritu Parekh, David McCool, Christopher Myers, Christopher Bindloss
-
Patent number: 12210867
Abstract: Apparatuses, systems, and techniques for compiled shader program caches in a cloud computing environment.
Type: Grant
Filed: September 13, 2021
Date of Patent: January 28, 2025
Assignee: NVIDIA Corporation
Inventors: Michael Oxford, Patrick Neill, Franck Diard, Paul Albert Lalonde
-
Patent number: 12198288
Abstract: Systems and methods for generating an image of an automobile can include generating an artificial surface for a 3D model of the automobile, blending the artificial surface with a real world surface, and generating the image of the automobile using a blended surface. The image can have a number of different blended surfaces (e.g., a cleaner floor, shadows for the automobile, reflections for the automobile). The images can be used to create a 360 degree display of the automobile where the blended surface is displayed.
Type: Grant
Filed: August 14, 2023
Date of Patent: January 14, 2025
Assignee: Carvana, LLC
Inventors: Alan Richard Melling, Pedro Damian Velez Salas, Grant Evan Schindler, Bruno Jean Francois, Remy Tristan Cilia
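Blending an artificial surface (for example, a cleaner floor) with the captured real-world surface can be approximated with a per-pixel linear blend, as in the sketch below. The blend weight and single-channel rows are assumptions; a real pipeline would blend full images and composite rendered shadows and reflections in a similar way.

```python
def blend_surfaces(real_row, artificial_row, alpha=0.6):
    """Per-pixel linear blend of an artificial floor surface with the captured one.
    alpha controls how much of the artificial surface shows through."""
    return [alpha * a + (1.0 - alpha) * r for r, a in zip(real_row, artificial_row)]

if __name__ == "__main__":
    real_floor = [0.35, 0.40, 0.38, 0.90]    # captured floor row with a bright blemish
    clean_floor = [0.42, 0.42, 0.42, 0.42]   # generated, uniform artificial floor row
    print(blend_surfaces(real_floor, clean_floor))
```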
-
Patent number: 12198235
Abstract: The medical image processing device includes a processor that acquires a medical image obtained by imaging a subject with an endoscope, identifies a tumor region and a non-tumor region from the medical image, generates a demarcation line that is a boundary between the tumor region and the non-tumor region, generates a virtual incision line at a position separated from the demarcation line by a designated distance, and performs control for superimposing the demarcation line and the virtual incision line on the medical image to be displayed.
Type: Grant
Filed: June 27, 2022
Date of Patent: January 14, 2025
Assignee: FUJIFILM Corporation
Inventors: Minoru Iketani, Tetsuya Fujikura, Haruo Akiba, Manabu Miyamoto
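Generating a virtual incision line at a designated distance from the demarcation line is essentially a contour-offset operation. The sketch below pushes each boundary vertex outward from the contour centroid, a rough approximation that behaves reasonably for convex regions; the device's actual offsetting method is not described in this abstract.

```python
import math

def offset_contour(points, distance):
    """Offset a closed contour outward by `distance`, pushing each vertex away from
    the contour centroid. A production method would offset along local normals."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    offset = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        norm = math.hypot(dx, dy) or 1.0
        offset.append((x + distance * dx / norm, y + distance * dy / norm))
    return offset

if __name__ == "__main__":
    demarcation_line = [(10, 0), (0, 10), (-10, 0), (0, -10)]   # tumor/non-tumor boundary
    virtual_incision_line = offset_contour(demarcation_line, distance=3.0)
    print(virtual_incision_line)   # both polylines would be superimposed on the image
```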
-
Patent number: 12198283
Abstract: An augmented reality (“AR”) device applies smooth correction methods to correct the location of the virtual objects presented to a user. The AR device may apply an angular threshold to determine whether a virtual object can be moved from an original location to a target location. An angular threshold is a maximum angle by which a line from the AR device to the virtual object can change within a timestep. Similarly, the AR device may apply a motion threshold, which is a maximum on the distance that a virtual object's location can be corrected based on the motion of the virtual object. Furthermore, the AR device may apply a pixel threshold to the correction of the virtual object's location. A pixel threshold is a maximum on the distance that a pixel projection of the virtual object can change based on the virtual object's change in location.
Type: Grant
Filed: November 8, 2023
Date of Patent: January 14, 2025
Assignee: Niantic, Inc.
Inventors: Ben Benfold, Victor Adrian Prisacariu
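The thresholds described here cap how much a virtual object's location may be corrected in a single timestep. Below is a minimal sketch applying the motion and angular thresholds (the pixel threshold would be applied analogously after projecting to screen space); the threshold values and the coarse back-off search are assumptions, not the patented method.

```python
import math

def clamp_correction(device_pos, current_pos, target_pos,
                     max_angle_deg=2.0, max_move=0.05):
    """Move a virtual object from current_pos toward target_pos, limiting the
    per-timestep change by a motion threshold (straight-line distance) and an
    angular threshold (change in bearing from the device to the object)."""
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def norm(v): return math.sqrt(sum(x * x for x in v)) or 1e-9
    def angle(u, v):
        cos = sum(a * b for a, b in zip(u, v)) / (norm(u) * norm(v))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

    step = sub(target_pos, current_pos)
    # Motion threshold: cap the distance moved this timestep.
    scale = min(1.0, max_move / norm(step))
    candidate = [c + scale * s for c, s in zip(current_pos, step)]
    # Angular threshold: cap the change in the device-to-object direction.
    if angle(sub(current_pos, device_pos), sub(candidate, device_pos)) > max_angle_deg:
        # Shrink the step until the angular change is acceptable (coarse back-off).
        for f in (0.5, 0.25, 0.1, 0.0):
            candidate = [c + f * scale * s for c, s in zip(current_pos, step)]
            if angle(sub(current_pos, device_pos), sub(candidate, device_pos)) <= max_angle_deg:
                break
    return candidate

if __name__ == "__main__":
    print(clamp_correction(device_pos=[0, 0, 0],
                           current_pos=[0.0, 0.0, 2.0],
                           target_pos=[0.3, 0.0, 2.0]))
```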
-
Patent number: 12182952
Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, one or more environmental sensors, and a display. The method includes determining a first positional value associated with a physical agent based on environmental data from the one or more environmental sensors. The method includes determining that a portion of computer-generated content satisfies an occlusion criterion with respect to a corresponding portion of the physical agent, based on the first positional value. The method includes, in response to determining that the occlusion criterion is satisfied and determining that the physical agent satisfies a movement criterion or a pose criterion, generating a mesh associated with the physical agent based on the first positional value, and displaying the mesh on the display.
Type: Grant
Filed: November 1, 2023
Date of Patent: December 31, 2024
Assignee: Apple Inc.
Inventors: Pavel Veselinov Dudrenov, Edwin Iskandar
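The gating described, generate a mesh for the physical agent only when content would occlude it and the agent satisfies a movement or pose criterion, reduces to a small predicate. Threshold names and values in the sketch are assumptions for illustration.

```python
def should_generate_mesh(content_depth, agent_depth, overlap_fraction,
                         agent_speed, agent_pose_known,
                         occlusion_overlap_min=0.1, speed_min=0.05):
    """Illustrative gating logic: build a mesh for the physical agent only when
    (a) computer-generated content satisfies the occlusion criterion (content is
    nearer and the two overlap on screen) and (b) the agent is moving or has a
    recognized pose."""
    occludes = content_depth < agent_depth and overlap_fraction >= occlusion_overlap_min
    moving_or_posed = agent_speed >= speed_min or agent_pose_known
    return occludes and moving_or_posed

if __name__ == "__main__":
    # Content 1.5 m away overlaps a person 2.0 m away who is walking: mesh the person
    # so they can be composited against the content correctly.
    print(should_generate_mesh(content_depth=1.5, agent_depth=2.0,
                               overlap_fraction=0.3, agent_speed=0.4,
                               agent_pose_known=False))
```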
-
Patent number: 12165377
Abstract: Imaging devices, systems, and methods for analyzing an image taken by an imaging system are described herein. An example method includes: receiving a bank image from the imaging device pursuant to one of a plurality of banks of imaging parameters; rendering the bank image in an image display; receiving an indication of a region of interest (ROI) in the image display associated with a user-selected tool; determining whether the user-selected tool is set to use the bank image, an output of a parent tool, or an output of the user-selected tool; and generating, in the image display, at least a portion of a display image in the ROI representative of: (i) the bank image, (ii) the output of the parent tool, or (iii) the output of the user-selected tool; wherein the portion of the display image is associated with the user-selected region.
Type: Grant
Filed: October 31, 2022
Date of Patent: December 10, 2024
Assignee: Zebra Technologies Corporation
Inventors: Matthew M. Degen, Anthony P. DeLuca, Adam Danielsen
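Routing a user-selected tool's input among the bank image, the parent tool's output, and the tool's own prior output is a simple dispatch, sketched below. The "source" strings and dictionary lookup are assumptions, not the assignee's actual API.

```python
def tool_input_image(tool, bank_image, images_by_tool):
    """Pick the image a user-selected tool operates on and displays in its ROI:
    the raw bank image, the output of its parent tool, or its own prior output."""
    if tool["source"] == "bank":
        return bank_image
    if tool["source"] == "parent":
        return images_by_tool[tool["parent"]]
    return images_by_tool[tool["name"]]          # the tool's own output

if __name__ == "__main__":
    bank = "bank_image_0"
    outputs = {"threshold": "threshold_output", "blob_finder": "blob_output"}
    blob_tool = {"name": "blob_finder", "source": "parent", "parent": "threshold"}
    print(tool_input_image(blob_tool, bank, outputs))   # -> "threshold_output"
```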
-
Patent number: 12147591
Abstract: A computer system displays an alert at a first position relative to the three-dimensional environment, the alert at least partially overlapping a first object in a first view. The first position has a respective spatial relationship to the user. The computer system detects movement of the user from a first viewpoint to a second viewpoint. At the second viewpoint, the computer system, in accordance with a determination that the alert is a first type of alert, displays the alert at a second position in the three-dimensional environment, the second position having the respective spatial relationship to the user; and, in accordance with a determination that the alert is a second type of alert, displays the three-dimensional environment from the second viewpoint without displaying the alert with the respective spatial relationship to the user.
Type: Grant
Filed: September 20, 2022
Date of Patent: November 19, 2024
Assignee: Apple Inc.
Inventors: Jonathan R. Dascola, Lorena S. Pazmino
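The distinction between the two alert types amounts to whether the alert's position is re-derived from the user's new viewpoint or left at its original world-space anchor. A minimal sketch, with the type names and the one-meter offset as assumptions.

```python
def alert_position(alert_type, user_position, user_forward, world_anchor,
                   offset_distance=1.0):
    """Illustrative placement rule: a 'follows_user' alert keeps the same spatial
    relationship to the user after they move, while other alerts remain at the
    position where they were originally placed in the environment."""
    if alert_type == "follows_user":
        return [p + offset_distance * f for p, f in zip(user_position, user_forward)]
    return world_anchor

if __name__ == "__main__":
    # The user walked from the origin to (2, 0, 0) and now faces +z.
    print(alert_position("follows_user", [2, 0, 0], [0, 0, 1], world_anchor=[0, 0, 1]))
    print(alert_position("world_locked", [2, 0, 0], [0, 0, 1], world_anchor=[0, 0, 1]))
```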
-
Patent number: 12148116
Abstract: In one embodiment, a method of intermingling stereoscopic and conforming virtual content to a bounded surface is performed at a device that includes one or more processors, non-transitory memory, and one or more displays. The method includes displaying a bounded surface within a native user computer-generated reality (CGR) environment, wherein the bounded surface is displayed based on a first set of world coordinates characterizing the native user CGR environment. The method further includes displaying a first stereoscopic virtual object within a perimeter of a first side of the bounded surface, wherein the first stereoscopic virtual object is displayed in accordance with a second set of world coordinates that is different from the first set of world coordinates characterizing the native user CGR environment.
Type: Grant
Filed: July 11, 2023
Date of Patent: November 19, 2024
Assignee: Apple Inc.
Inventors: Clement P. Boissiere, Samuel L Iglesias, Timothy Robert Oriol, Adam Michael O'Hern
-
Patent number: 12148090
Abstract: In some implementations, a method of generating a third person view of a computer-generated reality (CGR) environment is performed at a device including non-transitory memory and one or more processors coupled with the non-transitory memory. The method includes: obtaining a first viewing vector associated with a first user within a CGR environment; determining a first viewing frustum for the first user within the CGR environment based on the first viewing vector associated with the first user and one or more depth attributes; generating a representation of the first viewing frustum; and displaying, via the display device, a third person view of the CGR environment including an avatar of the first user and the representation of the first viewing frustum adjacent to the avatar of the first user.
Type: Grant
Filed: August 14, 2023
Date of Patent: November 19, 2024
Inventors: Ian M. Richter, John Joon Park, David Michael Hobbins
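A representation of a viewing frustum can be derived from the viewing vector and depth attributes by computing the corner points of the near and far planes, as sketched below; the field-of-view angles, depths, and world-up convention are illustrative defaults rather than values from the patent.

```python
import math

def frustum_corners(origin, forward, h_fov_deg=60.0, v_fov_deg=45.0,
                    near=0.3, far=3.0, world_up=(0.0, 1.0, 0.0)):
    """Compute the eight corner points of a viewing frustum from a viewing vector
    and near/far depth attributes, suitable for rendering as a translucent wedge
    adjacent to the user's avatar."""
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return tuple(x / n for x in v)

    f = normalize(forward)
    right = normalize(cross(f, world_up))
    up = normalize(cross(right, f))
    corners = []
    for depth in (near, far):
        half_w = depth * math.tan(math.radians(h_fov_deg / 2))
        half_h = depth * math.tan(math.radians(v_fov_deg / 2))
        center = tuple(o + depth * fc for o, fc in zip(origin, f))
        for sx in (-1, 1):
            for sy in (-1, 1):
                corners.append(tuple(c + sx * half_w * r + sy * half_h * u
                                     for c, r, u in zip(center, right, up)))
    return corners

if __name__ == "__main__":
    for corner in frustum_corners(origin=(0.0, 1.6, 0.0), forward=(0.0, 0.0, -1.0)):
        print(tuple(round(c, 2) for c in corner))
```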