Patents Examined by Chong Wu
-
Patent number: 12288302
Abstract: A method for generating a virtual tablet at a position to the left or right of a user, based on a hand gesture for creating a virtual tablet in an extended reality environment.
Type: Grant
Filed: July 3, 2024
Date of Patent: April 29, 2025
Assignee: CURIOXR, INC.
Inventor: Ethan Fieldman
-
Patent number: 12288301
Abstract: A computer system displays a first view of a three-dimensional environment, including a first user interface object that, when activated by a user input meeting first criteria, causes performance of a first operation. While displaying the first view, the computer system detects first movement of a hand in a physical environment, and in response, changes an appearance of the first user interface object in the first view based on the first movement, including: in accordance with a determination that the first movement meets the first criteria requiring that the hand move in a first manner, performing and indicating performance of the first operation; and in accordance with a determination that the first movement does not meet the first criteria, moving the first user interface object away from a position in the three-dimensional environment corresponding to a location of the hand in the physical environment without performing the first operation.
Type: Grant
Filed: February 22, 2024
Date of Patent: April 29, 2025
Assignee: APPLE INC.
Inventors: Philipp Rockel, Charles C. Hoyt
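A minimal sketch of the kind of branching the abstract describes: if the hand's movement satisfies the activation criteria (here, moving toward the object by more than a threshold in one step), the operation is performed; otherwise the object is nudged away from the hand. All function names, thresholds, and units are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def handle_hand_movement(hand_pos, hand_delta, object_pos, activate,
                         toward_threshold=0.02, repel_distance=0.05):
    """Illustrative branch: activate the UI object if the hand moves toward it
    far enough in one step, otherwise move the object away from the hand."""
    to_object = object_pos - hand_pos
    distance = np.linalg.norm(to_object)
    if distance < 1e-6:
        return object_pos
    direction = to_object / distance
    # Component of the hand's motion along the direction toward the object.
    approach = float(np.dot(hand_delta, direction))
    if approach >= toward_threshold:
        activate()                      # criteria met: perform the operation
        return object_pos
    # Criteria not met: nudge the object away from the hand's position.
    return object_pos + direction * repel_distance

# Example with made-up coordinates (meters): the small motion fails the
# criteria, so the object is moved away instead of activated.
new_pos = handle_hand_movement(
    hand_pos=np.array([0.0, 0.0, 0.0]),
    hand_delta=np.array([0.0, 0.0, 0.005]),
    object_pos=np.array([0.0, 0.0, 0.5]),
    activate=lambda: print("operation performed"),
)
print(new_pos)
```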
-
Patent number: 12282984
Abstract: A computer-implemented method for augmenting a first image with image data from a second image, the method comprising: receiving (S02) the first image depicting a first scene; capturing (S04) the second image depicting a second scene using an image capturing device; identifying (S06) an object in the second image; receiving (S08) a 3D model from a database, the database comprising a plurality of 3D models, the 3D model corresponding to the identified object of the second image; aligning (S10) the 3D model with the identified object in the second image; extracting (S12) pixel data from the second image using a contour of the 3D model's projection onto the second image; and inserting (S14) the extracted pixel data of the second image into the first image, thereby rendering an augmented image.
Type: Grant
Filed: September 2, 2022
Date of Patent: April 22, 2025
Assignee: Inter IKEA Systems B.V.
Inventors: Jonas Gustavsson, Camila Dorin
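A small sketch of the last two steps (S12/S14) under the assumption that the aligned model's projected contour has already been rendered into a boolean mask: pixels of the second image inside the mask are copied into the first image. The function name and synthetic data are illustrative, not from the patent.

```python
import numpy as np

def transfer_masked_pixels(first_image, second_image, contour_mask):
    """Copy pixels of `second_image` that fall inside `contour_mask`
    (the projected 3D-model silhouette) into `first_image`."""
    augmented = first_image.copy()
    augmented[contour_mask] = second_image[contour_mask]
    return augmented

# Tiny synthetic example: a 4x4 RGB "scene", a 4x4 "object" image,
# and a mask marking where the aligned model projects.
first = np.zeros((4, 4, 3), dtype=np.uint8)
second = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
print(transfer_masked_pixels(first, second, mask)[..., 0])
```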
-
Patent number: 12283009
Abstract: An example augmented reality system includes: obtaining information about an instance of a device; recognizing the instance of the device based on the information; selecting a digital twin for the instance of the device, with the digital twin being unique to the instance of the device; and generating augmented reality content based on the digital twin and an actual graphic of the instance of the device.
Type: Grant
Filed: October 11, 2023
Date of Patent: April 22, 2025
Assignee: PTC Inc.
Inventors: Vladimir Parfenov, Kevin Elliott Jordan, Steven Thomas Dertien, Moshe Jacob Baum, Andre Gosselin, Stephen Prideaux-Ghee, James E. Heppelman
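A sketch of what "a digital twin unique to the instance" could look like in code, assuming the recognized instance is identified by something like a serial number; the registry, class names, and fields are hypothetical, not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    serial_number: str
    model: str
    telemetry: dict = field(default_factory=dict)

class TwinRegistry:
    """Illustrative registry mapping a recognized device instance
    (here, its serial number) to its unique digital twin."""
    def __init__(self):
        self._twins = {}

    def register(self, twin: DigitalTwin):
        self._twins[twin.serial_number] = twin

    def select(self, serial_number: str) -> DigitalTwin:
        return self._twins[serial_number]

registry = TwinRegistry()
registry.register(DigitalTwin("SN-0042", "PumpX", {"hours": 1310.5}))
twin = registry.select("SN-0042")          # instance recognized from sensor data
print(f"AR overlay for {twin.model}: {twin.telemetry}")
```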
-
Patent number: 12277887
Abstract: A method of detecting a stain of a display panel is disclosed that includes capturing a first color image of the display panel, capturing a second color image of the display panel, generating a merged image by merging a color coordinate map of the first color image and a color coordinate map of the second color image, generating a background image of the merged image by morphology filtering, generating a flattened image by operating on the merged image and the background image, and detecting an area of the flattened image in which a color coordinate value exceeds a threshold value as a stain area.
Type: Grant
Filed: December 20, 2022
Date of Patent: April 15, 2025
Assignee: Samsung Display Co., Ltd.
Inventors: Huisu Kim, Seyun Kim, Hyungwoo Yim, Hakmo Choi, Seungho Park, Kyung-Sik Joo
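A rough sketch of this background-flatten-threshold pipeline using OpenCV, with morphological opening standing in for the morphology filtering step; the patented method may use a different filter, and the kernel size and threshold here are arbitrary assumptions.

```python
import numpy as np
import cv2

def detect_stain(merged, kernel_size=15, threshold=12):
    """Flatten a merged color-coordinate map against a morphological
    background estimate and mark pixels that exceed the threshold."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    background = cv2.morphologyEx(merged, cv2.MORPH_OPEN, kernel)   # background image
    flattened = cv2.absdiff(merged, background)                     # flattened image
    return flattened > threshold                                    # stain area mask

# Synthetic 64x64 map with a small bright "stain".
merged = np.full((64, 64), 100, dtype=np.uint8)
merged[30:34, 30:34] = 130
stain_mask = detect_stain(merged)
print("stain pixels:", int(stain_mask.sum()))
```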
-
Patent number: 12277647
Abstract: This application discloses a method for processing three-dimensional data, which may be applied to the field of intelligent driving. The method includes: for a target convolution operator in a convolutional neural network, a corresponding intermediate result may first be calculated by using one or more background values of the three-dimensional data; then, when first data is processed, the corresponding intermediate result may be looked up based on a background identifier in a sparse distribution diagram; and further, a value of a voxel of second data is obtained based on the intermediate result, without repeating the convolution processing multiple times, thereby accelerating the convolution.
Type: Grant
Filed: March 22, 2023
Date of Patent: April 15, 2025
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Yunkai Du, Yilong Chen, Yang Yang
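A simplified illustration of the underlying idea, assuming a single background value: the convolution result for an all-background patch is computed once and reused wherever the occupancy map says the patch is pure background, so only patches touching occupied voxels are actually convolved. This naive NumPy loop is only a sketch of the caching principle, not the patented implementation.

```python
import numpy as np

def sparse_conv3d(volume, kernel, background=0.0):
    """Naive valid 3D convolution (cross-correlation form) that reuses a
    precomputed intermediate result for patches that are entirely background."""
    kd, kh, kw = kernel.shape
    bg_result = background * kernel.sum()          # intermediate result, computed once
    out_shape = tuple(v - k + 1 for v, k in zip(volume.shape, kernel.shape))
    out = np.empty(out_shape, dtype=float)
    is_bg = (volume == background)
    for z in range(out_shape[0]):
        for y in range(out_shape[1]):
            for x in range(out_shape[2]):
                if is_bg[z:z+kd, y:y+kh, x:x+kw].all():
                    out[z, y, x] = bg_result       # lookup instead of convolving
                else:
                    patch = volume[z:z+kd, y:y+kh, x:x+kw]
                    out[z, y, x] = float((patch * kernel).sum())
    return out

vol = np.zeros((6, 6, 6))
vol[2, 2, 2] = 1.0                       # a single occupied voxel
k = np.ones((3, 3, 3))
print(sparse_conv3d(vol, k).max())       # 1.0
```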
-
Patent number: 12272019
Abstract: The creation of synthetic scenes from a combination of synthetic data and real environment data to allow for testing of extended reality (XR) applications on an electronic device is disclosed. In order to efficiently test an XR application, a scene data configuration can be specified within a synthetic service, representing a combination of different synthetic data and real environment data. In addition, scene understanding and alignment metadata can be added to the scene data. When the XR application is initiated, a synthetic scene in accordance with the scene data configuration and the added metadata can be rendered and presented on a display of the electronic device. The XR application can then be tested within the presented synthetic scene, with the application interacting with both the synthetic data and the real environment data of the synthetic scene as though it were interacting with real objects in a real environment.
Type: Grant
Filed: March 17, 2023
Date of Patent: April 8, 2025
Assignee: Apple Inc.
Inventors: Przemyslaw M. Iwanowski, Joshua J. Taylor
-
Patent number: 12260018
Abstract: Systems and methods are described for extended reality environment interaction. An extended reality environment including an object is generated for display, and an eye motion is detected. Based on the detecting, it is determined whether the object is in a field of view for at least a predetermined period of time, and in response to determining that the object is in the field of view for at least the predetermined period of time, one or more items related to the object are generated for display in the extended reality environment.
Type: Grant
Filed: September 8, 2023
Date of Patent: March 25, 2025
Assignee: Adeia Guides Inc.
Inventor: Sakura Saito
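A minimal sketch of the dwell-time check this abstract describes: related items are triggered only after the object has stayed in the field of view for a minimum continuous period. The class, its parameters, and the timing values are assumptions for illustration.

```python
class DwellTrigger:
    """Fires once an object has stayed in the field of view
    for at least `dwell_seconds` of continuous gaze."""
    def __init__(self, dwell_seconds=1.5):
        self.dwell_seconds = dwell_seconds
        self._entered_at = None
        self._fired = False

    def update(self, object_in_view: bool, now: float) -> bool:
        if not object_in_view:
            self._entered_at = None       # object left the field of view: reset
            self._fired = False
            return False
        if self._entered_at is None:
            self._entered_at = now
        if not self._fired and now - self._entered_at >= self.dwell_seconds:
            self._fired = True
            return True                   # display items related to the object
        return False

trigger = DwellTrigger(dwell_seconds=0.1)
trigger.update(True, now=0.0)
print(trigger.update(True, now=0.2))      # True: dwell threshold reached
```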
-
Patent number: 12254584
Abstract: Methods comprising generating, by a processor, a 3D model of an object; facilitating displaying, by the processor using the 3D model of the object, a 3D display of the object on an electronic device of a user; receiving, by the processor and from the electronic device of the user, a zoom selection on the 3D display of the object; in response to receiving the zoom selection, facilitating displaying, by the processor, a zoomed 3D display of the object on the electronic device of the user; receiving, by the processor and from the electronic device of the user, a zoom rotation selection of the object in the zoomed 3D display; and in response to receiving the zoom rotation selection, facilitating rotating, by the processor, the 3D display of the object in the zoomed 3D display on the electronic device of the user. Other embodiments are disclosed herein.
Type: Grant
Filed: November 29, 2022
Date of Patent: March 18, 2025
Assignee: Carvana, LLC
Inventors: Alan Richard Melling, Remy Tristan Cilia, Bruno Jean Francois
-
Patent number: 12254581
Abstract: Aspects of the present disclosure are directed to providing an artificial reality environment with augments and surfaces. An “augment” is a virtual container in 3D space that can include presentation data, context, and logic. An artificial reality system can use augments as the fundamental building block for displaying 2D and 3D models in the artificial reality environment. For example, augments can represent people, places, and things in an artificial reality environment and can respond to a context such as a current display mode, time of day, a type of surface the augment is on, a relationship to other augments, etc. Augments can be on a “surface” that has a layout and properties that cause augments on that surface to display in different ways. Augments and other objects (real or virtual) can also interact, where these interactions can be controlled by rules for the objects evaluated based on information from the shell.
Type: Grant
Filed: November 14, 2023
Date of Patent: March 18, 2025
Assignee: Meta Platforms Technologies, LLC
Inventors: James Tichenor, Arthur Zwiegincew, Hayden Schoen, Alex Marcolina, Gregory Alt, Todd Harris, Merlyn Deng, Barrett Fox, Michal Hlavac
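One way to picture the augment/surface split is a small data model: an augment bundles presentation data, context, and logic, and a surface carries layout properties and pushes context changes to the augments placed on it. The class names, fields, and callback shape below are hypothetical, not the described system's API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Augment:
    """A virtual container in 3D space: presentation data, context, and logic."""
    name: str
    presentation: Dict[str, str]                       # e.g. {"model": "clock.glb"}
    context: Dict[str, str] = field(default_factory=dict)
    on_context_change: Callable[["Augment"], None] = lambda a: None

@dataclass
class Surface:
    """A surface whose layout and properties affect how attached augments display."""
    layout: str                                        # e.g. "grid" or "freeform"
    augments: List[Augment] = field(default_factory=list)

    def set_context(self, key: str, value: str):
        for augment in self.augments:
            augment.context[key] = value
            augment.on_context_change(augment)         # augment logic reacts

wall = Surface(layout="grid")
wall.augments.append(Augment(
    name="clock",
    presentation={"model": "clock.glb"},
    on_context_change=lambda a: print("clock restyled for", a.context["display_mode"]),
))
wall.set_context("display_mode", "night")
```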
-
Patent number: 12249024
Abstract: Examples of the disclosure describe systems and methods for presenting virtual content on a wearable head device. In some embodiments, a state of a wearable head device is determined by minimizing a total error based on a reduced weight associated with a reprojection error. A view reflecting the determined state of the wearable head device is presented via a display of the wearable head device. In some embodiments, a wearable head device calculates a preintegration term based on image data received via a sensor of the wearable head device and inertial data received via a first IMU and a second IMU of the wearable head device. The wearable head device estimates a position of the device based on the preintegration term, and the wearable head device presents the virtual content based on the position of the device.
Type: Grant
Filed: February 12, 2024
Date of Patent: March 11, 2025
Assignee: Magic Leap, Inc.
Inventors: Yu-Hsiang Huang, Evan Gregory Levine, Igor Napolskikh, Dominik Michael Kasper, Manel Quim Sanchez Nicuesa, Sergiu Sima, Benjamin Langmann, Ashwin Swaminathan, Martin Georg Zahnert, Blazej Marek Czuprynski, Joao Antonio Pereira Faro, Christoph Tobler, Omid Ghasemalizadeh
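For readers unfamiliar with IMU preintegration, the sketch below accumulates rotation, velocity, and position deltas from gyro and accelerometer samples between two frames. It uses a small-angle approximation and ignores biases, gravity compensation, and the dual-IMU fusion the abstract mentions, so it only gestures at the concept rather than the patented method.

```python
import numpy as np

def preintegrate(gyro, accel, dt):
    """Accumulate rotation, velocity, and position deltas from IMU samples
    between two camera frames (small-angle approximation, biases ignored)."""
    R = np.eye(3)
    delta_v = np.zeros(3)
    delta_p = np.zeros(3)
    for w, a in zip(gyro, accel):
        # Small-angle rotation update from angular velocity w (rad/s).
        wx, wy, wz = w * dt
        skew = np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]])
        R = R @ (np.eye(3) + skew)
        acc = R @ a                                   # rotate into the start frame
        delta_p += delta_v * dt + 0.5 * acc * dt**2
        delta_v += acc * dt
    return R, delta_v, delta_p

# 100 samples at 1 kHz with constant yaw rate and forward acceleration.
gyro = np.tile([0.0, 0.0, 0.5], (100, 1))
accel = np.tile([0.1, 0.0, 0.0], (100, 1))
R, dv, dp = preintegrate(gyro, accel, dt=1e-3)
print(np.round(dp, 6))
```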
-
Patent number: 12249037
Abstract: A system aligns a 3D model of an environment with image frames of the environment and generates a visualization interface that displays a portion of the 3D model and a corresponding image frame. The system receives LIDAR data collected in the environment and generates a 3D model based on the LIDAR data. For each image frame, the system aligns the image frame with the 3D model. After aligning the image frames with the 3D model, when the system presents a portion of the 3D model in an interface, it also presents an image frame that corresponds to the portion of the 3D model.
Type: Grant
Filed: February 6, 2024
Date of Patent: March 11, 2025
Assignee: Open Space Labs, Inc.
Inventors: Michael Ben Fleischman, Jeevan James Kalanithi, Gabriel Hein, Elliott St. George Wilson Kember
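A building block for this kind of model-to-frame alignment is projecting 3D points into an image with a camera pose, which lets you check whether a portion of the LIDAR model lands where the frame shows it. The pinhole intrinsics and pose below are placeholder values; the actual alignment pipeline is not described here.

```python
import numpy as np

def project_points(points_world, R, t, fx, fy, cx, cy):
    """Project 3D points (N x 3, world frame) into pixel coordinates
    using a pinhole camera with rotation R and translation t."""
    cam = (R @ points_world.T).T + t            # world -> camera frame
    cam = cam[cam[:, 2] > 1e-6]                 # keep points in front of the camera
    u = fx * cam[:, 0] / cam[:, 2] + cx
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)

points = np.array([[0.0, 0.0, 5.0], [1.0, 0.5, 4.0]])   # LIDAR points (meters)
pixels = project_points(points, R=np.eye(3), t=np.zeros(3),
                        fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(np.round(pixels, 1))
```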
-
Patent number: 12236546
Abstract: An extended reality environment includes one or more virtual objects. A virtual object can be selected for manipulation by orienting a pointing device towards the virtual object and performing a selection input. In some embodiments, if the selection input is a first type of selection gesture received at the pointing device, the virtual object is selected for movement operations. In some embodiments, if the selection input is a second type of selection gesture received at the pointing device, the virtual object is selected for rotation operations. While the virtual object is selected for movement or rotation operations, the virtual object moves or rotates in accordance with the movement or rotation of the user's hand, respectively. In some embodiments, the virtual object has a first visual characteristic when the pointing device is pointing towards the virtual object and has a second visual characteristic when the virtual object is selected.
Type: Grant
Filed: August 20, 2021
Date of Patent: February 25, 2025
Assignee: Apple Inc.
Inventor: David A. Lipton
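A sketch of the mode dispatch this describes: one gesture type puts the object into a movement mode, another into a rotation mode, and subsequent hand deltas are applied accordingly. The gesture names ("pinch", "double_pinch") and the object representation are invented for the example.

```python
from enum import Enum, auto

class Mode(Enum):
    NONE = auto()
    MOVE = auto()
    ROTATE = auto()

class ManipulationController:
    """Maps the type of selection gesture to a manipulation mode,
    then applies hand deltas while the object stays selected."""
    def __init__(self, obj):
        self.obj = obj            # a dict with "position" and "rotation" lists
        self.mode = Mode.NONE

    def on_selection(self, gesture: str):
        if gesture == "pinch":          # first type of selection gesture
            self.mode = Mode.MOVE
        elif gesture == "double_pinch": # second type of selection gesture
            self.mode = Mode.ROTATE

    def on_hand_delta(self, translation, rotation):
        if self.mode is Mode.MOVE:
            self.obj["position"] = [p + d for p, d in zip(self.obj["position"], translation)]
        elif self.mode is Mode.ROTATE:
            self.obj["rotation"] = [r + d for r, d in zip(self.obj["rotation"], rotation)]

cube = {"position": [0.0, 0.0, 0.5], "rotation": [0.0, 0.0, 0.0]}
controller = ManipulationController(cube)
controller.on_selection("pinch")
controller.on_hand_delta(translation=[0.1, 0.0, 0.0], rotation=[0.0, 0.0, 0.0])
print(cube["position"])
```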
-
Patent number: 12236540
Abstract: A computer system concurrently displays a view of a physical environment and a computer-generated user interface element overlaid on the view of the physical environment. An appearance of the computer-generated user interface element is based on an appearance of the view of the physical environment on which the computer-generated user interface element is overlaid. In response to an appearance of the physical environment changing, the appearance of the computer-generated user interface element is updated at a first time based on a graphical composition of the appearance of one or more portions of the physical environment at different times prior to the first time, including: an appearance of a first portion of the physical environment at a second time that is before the first time; and an appearance of a second portion of the physical environment at a third time that is before the second time.
Type: Grant
Filed: September 21, 2022
Date of Patent: February 25, 2025
Assignee: APPLE INC.
Inventors: Miquel Estany Rodriguez, Wan Si Wan, Gregory M. Apodaca, William A. Sorrentino, III, James J. Owen, Pol Pla I. Conesa, Alan C. Dye
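The sketch below illustrates one simple form of "graphical composition of the appearance at different times": blending environment patches sampled at earlier times into one backdrop, weighted toward the more recent sample. The weighting scheme is an assumption for illustration, not the composition the patent specifies.

```python
import numpy as np

def composite_backdrop(samples, weights):
    """Blend environment patches captured at different earlier times
    into one backdrop for a computer-generated UI element."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    stacked = np.stack([s.astype(float) for s in samples])
    return (stacked * weights[:, None, None, None]).sum(axis=0).astype(np.uint8)

# Two 8x8 RGB patches: one sampled recently, one slightly older.
recent = np.full((8, 8, 3), 180, dtype=np.uint8)
older = np.full((8, 8, 3), 60, dtype=np.uint8)
backdrop = composite_backdrop([recent, older], weights=[0.7, 0.3])
print(backdrop[0, 0])        # [144 144 144]
```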
-
Patent number: 12223596
Abstract: Embodiments disclosed herein mitigate technological barriers in preparing, sourcing, and exporting 3D assets with practical applications for journalism. According to one embodiment, a computer-implemented method for generating a three-dimensional map is provided. The method includes displaying a two-dimensional region of a map on a graphical user interface (GUI), wherein the map is displayed along a longitudinal axis and a latitudinal axis. The method includes obtaining, through the GUI, a first input from a user device, the first input comprising an indication of a selected sub-region of the two-dimensional region of the map, wherein the selected sub-region comprises a plurality of pixels having longitudinal coordinates bounded by a first longitude coordinate and a second longitude coordinate along the longitudinal axis and latitudinal coordinates bounded by a first latitude coordinate and a second latitude coordinate along the latitudinal axis.
Type: Grant
Filed: November 17, 2022
Date of Patent: February 11, 2025
Assignee: THE NEW YORK TIMES COMPANY
Inventors: Or Fleisher, Sukanya Aneja
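A minimal sketch of selecting the bounded sub-region: latitude/longitude bounds are mapped to pixel rows and columns with a simple linear (equirectangular-style) mapping and used to crop the map raster. Real map tooling typically works with tile projections; the mapping and values here are illustrative assumptions.

```python
import numpy as np

def crop_by_lat_lon(map_pixels, lon_min, lon_max, lat_min, lat_max,
                    west, east, south, north):
    """Return the pixels of the map bounded by the selected longitude and
    latitude coordinates, assuming a linear lon/lat-to-pixel mapping."""
    h, w = map_pixels.shape[:2]
    x0 = int((lon_min - west) / (east - west) * w)
    x1 = int((lon_max - west) / (east - west) * w)
    # Row 0 is the northern edge of the raster.
    y0 = int((north - lat_max) / (north - south) * h)
    y1 = int((north - lat_min) / (north - south) * h)
    return map_pixels[y0:y1, x0:x1]

world = np.zeros((180, 360, 3), dtype=np.uint8)          # 1 pixel per degree
sub = crop_by_lat_lon(world, lon_min=-74.5, lon_max=-73.5,
                      lat_min=40.0, lat_max=41.0,
                      west=-180, east=180, south=-90, north=90)
print(sub.shape)
```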
-
Patent number: 12216970
Abstract: An example computing system is configured to (i) generate a cross-sectional view of a three-dimensional drawing file; (ii) receive a first user input indicating a selection of a first mesh, wherein the selection comprises a selection point that establishes a first end point; (iii) generate a first representation indicating an alignment of the first end point with at least one corresponding geometric feature of the first mesh and a second representation indicating a set of one or more directions; (iv) receive a second user input indicating a given direction; (v) based on receiving the second user input, generate a dynamic representation of the dimensioning information along the given direction; (vi) receive a third user input indicating that the second user input is complete; and (vii) based on receiving the third user input, add the dimensioning information to the cross-sectional view between the first end point and the second end point.
Type: Grant
Filed: December 4, 2023
Date of Patent: February 4, 2025
Assignee: Procore Technologies, Inc.
Inventors: Ritu Parekh, David McCool, Christopher Myers, Christopher Bindloss
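The core measurement behind "dimensioning information along the given direction" can be sketched as the distance between two end points projected onto the chosen direction. The function, point values, and units below are assumptions for illustration.

```python
import numpy as np

def dimension_along(first_point, second_point, direction):
    """Distance between two cross-section end points measured along a
    chosen direction (e.g. horizontal or vertical in the section plane)."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    delta = np.asarray(second_point, dtype=float) - np.asarray(first_point, dtype=float)
    return abs(float(np.dot(delta, direction)))

# End points snapped to two mesh features in the section view (meters).
p1 = [1.20, 0.00]
p2 = [4.45, 2.70]
print(dimension_along(p1, p2, direction=[1.0, 0.0]))   # 3.25 along the horizontal
```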
-
Patent number: 12216971
Abstract: An example computing system is configured to (i) receive a request to generate a cross-sectional view of a three-dimensional drawing file, where the cross-sectional view is based on a location of a cross-section line within the three-dimensional drawing file and includes an intersection of two meshes within the three-dimensional drawing file; (ii) generate the cross-sectional view of the three-dimensional drawing file; (iii) add, to the generated cross-sectional view, dimensioning information involving at least one of the two meshes; (iv) generate one or more controls for adjusting a location of the cross-section line within the three-dimensional drawing file; and (v) based on an input indicating a selection of the one or more controls, adjust the location of the cross-section line within the three-dimensional drawing file, update the cross-sectional view based on the adjusted location of the cross-section line, and update the dimensioning information to correspond to the updated cross-sectional view.
Type: Grant
Filed: February 26, 2024
Date of Patent: February 4, 2025
Assignee: Procore Technologies, Inc.
Inventors: David McCool, Christopher Myers, Christopher Bindloss
-
Patent number: 12210867
Abstract: Apparatuses, systems, and techniques for compiled shader program caches in a cloud computing environment.
Type: Grant
Filed: September 13, 2021
Date of Patent: January 28, 2025
Assignee: NVIDIA Corporation
Inventors: Michael Oxford, Patrick Neill, Franck Diard, Paul Albert Lalonde
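The abstract gives no detail, but a common shape for a compiled-shader cache is a store keyed by a hash of the shader source plus compile options, so identical compile requests are served from the cache. The sketch below uses an in-memory dict as a stand-in for any cloud-side store; nothing here is taken from the patent or from NVIDIA's implementation.

```python
import hashlib

class ShaderCache:
    """Caches compiled shader programs keyed by a hash of the source
    and compile options, so identical requests skip recompilation."""
    def __init__(self, compile_fn):
        self._compile = compile_fn
        self._store = {}                       # stands in for a cloud object store
        self.hits = 0

    def get(self, source: str, options: str = "") -> bytes:
        key = hashlib.sha256((options + "\n" + source).encode()).hexdigest()
        if key in self._store:
            self.hits += 1
            return self._store[key]
        binary = self._compile(source, options)
        self._store[key] = binary
        return binary

cache = ShaderCache(compile_fn=lambda src, opts: b"BINARY:" + src.encode())
cache.get("void main() {}")
cache.get("void main() {}")       # served from the cache
print(cache.hits)                 # 1
```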
-
Patent number: 12198283
Abstract: An augmented reality (“AR”) device applies smooth correction methods to correct the location of the virtual objects presented to a user. The AR device may apply an angular threshold to determine whether a virtual object can be moved from an original location to a target location. An angular threshold is a maximum angle by which a line from the AR device to the virtual object can change within a timestep. Similarly, the AR device may apply a motion threshold, which is a maximum on the distance by which a virtual object's location can be corrected based on the motion of the virtual object. Furthermore, the AR device may apply a pixel threshold to the correction of the virtual object's location. A pixel threshold is a maximum on the distance by which a pixel projection of the virtual object can change based on the virtual object's change in location.
Type: Grant
Filed: November 8, 2023
Date of Patent: January 14, 2025
Assignee: NIANTIC, INC.
Inventors: Ben Benfold, Victor Adrian Prisacariu
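A sketch of the general idea behind a per-timestep correction cap: the object's location is moved toward its corrected target by at most a fixed step so the correction looks smooth. This shows only a distance-style clamp; the angular and pixel thresholds would gate the same correction in angle and screen-space units. The threshold value and function are assumptions, not the patented method.

```python
import numpy as np

def clamp_correction(current_pos, target_pos, max_step):
    """Move `current_pos` toward `target_pos` by at most `max_step`
    per timestep, so location corrections are applied smoothly."""
    current = np.asarray(current_pos, dtype=float)
    offset = np.asarray(target_pos, dtype=float) - current
    distance = np.linalg.norm(offset)
    if distance <= max_step:
        return np.asarray(target_pos, dtype=float)
    return current + offset / distance * max_step

pos = np.array([0.0, 0.0, 2.0])
target = np.array([0.3, 0.0, 2.0])
for _ in range(3):
    pos = clamp_correction(pos, target, max_step=0.1)
print(np.round(pos, 3))     # reaches [0.3, 0.0, 2.0] over three steps
```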
-
Patent number: 12198288
Abstract: Systems and methods for generating an image of an automobile can include generating an artificial surface for a 3D model of the automobile, blending the artificial surface with a real-world surface, and generating the image of the automobile using the blended surface. The image can have a number of different blended surfaces (e.g., a cleaner floor, shadows for the automobile, reflections for the automobile). The images can be used to create a 360-degree display of the automobile in which the blended surface is displayed.
Type: Grant
Filed: August 14, 2023
Date of Patent: January 14, 2025
Assignee: Carvana, LLC
Inventors: Alan Richard Melling, Pedro Damian Velez Salas, Grant Evan Schindler, Bruno Jean Francois, Remy Tristan Cilia
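A minimal sketch of blending an artificial surface with a real-world one, using a mask and a single alpha weight; the actual compositing (shadows, reflections, per-pixel weighting) is surely more involved, and the values below are illustrative.

```python
import numpy as np

def blend_surfaces(real_floor, artificial_floor, mask, alpha=0.6):
    """Blend an artificial floor into the real-world floor inside `mask`,
    e.g. to present a cleaner floor under the automobile."""
    out = real_floor.astype(float)
    blended = alpha * artificial_floor.astype(float) + (1 - alpha) * out
    out[mask] = blended[mask]
    return out.astype(np.uint8)

real = np.full((4, 4, 3), 90, dtype=np.uint8)         # scuffed concrete
clean = np.full((4, 4, 3), 230, dtype=np.uint8)       # rendered studio floor
mask = np.zeros((4, 4), dtype=bool)
mask[:, 2:] = True
print(blend_surfaces(real, clean, mask)[0])           # right half brightened
```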