Space Transformation Patents (Class 345/427)
-
Patent number: 12254576
Abstract: In an aspect, a computer-implemented method allows for navigation in a three-dimensional (3D) virtual environment. In the method, data specifying a three-dimensional virtual space is received. A position and direction in the three-dimensional virtual space is received. The position and direction are input by a first user and represent a first virtual camera used to render the three-dimensional virtual space to the first user. A video stream captured from a camera positioned to capture the first user is received. A second virtual camera is navigated according to an input of a second user.
Type: Grant
Filed: August 25, 2023
Date of Patent: March 18, 2025
Assignee: Katmai Tech Inc.
Inventor: Kristofor Bernard Swanson
-
Patent number: 12244940
Abstract: Systems, apparatus, and methods of rendering content based on a control point of a camera device are disclosed. In an example, a marker is attached to the camera device and its pose is tracked over time. Based on a camera model of the camera device, an offset between the marker and the control point is determined. The tracked pose of the marker can be translated and/or rotated according to the offset to estimate a pose of the control point. The rendering of the content is adjusted over time based on the estimated poses of the control point. Upon presentation of the content in a real-world space (e.g., on a display assembly located therein), the camera device can capture the content along with other objects of the real-world space and generate a video stream thereof.
Type: Grant
Filed: January 16, 2024
Date of Patent: March 4, 2025
Assignees: Nant Holdings IP, LLC, NantStudios, LLC
Inventors: Liudmila A. Beziaeva, Gary Marshall, Adolfo Sanchez, Juan Alfredo Nader Delgado
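The core transform in this abstract — composing a tracked marker pose with a fixed marker-to-control-point offset — can be sketched with homogeneous matrices. This is a minimal illustration, not the patented implementation; the function names and matrix convention are assumptions.

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def estimate_control_point_pose(marker_pose, marker_to_control_offset):
    """Compose the tracked marker pose with the fixed rigid offset derived
    from the camera model to estimate the control point's pose."""
    return marker_pose @ marker_to_control_offset
```

Re-running the composition each frame with the latest tracked marker pose yields the per-frame control-point poses that drive the rendering adjustment.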
-
Patent number: 12236500
Abstract: A method for processing an electronic image including receiving, by a viewer, the electronic image and a FOV (field of view), wherein the FOV includes at least one coordinate, at least one dimension, and a magnification factor, loading, by the viewer, a plurality of tiles within the FOV, determining, by the viewer, a state of the plurality of tiles in a cache, and in response to determining that the state of the plurality of tiles in the cache is a fully loaded state, rendering, by the viewer, the plurality of tiles to a display.
Type: Grant
Filed: April 9, 2024
Date of Patent: February 25, 2025
Assignee: Paige.AI, Inc.
Inventors: Alexandre Kirszenberg, Razik Yousfi, Thomas Fresneau, Peter Schueffler
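The tile-viewer flow described here — enumerate the tiles a FOV covers, check their cache state, render only when all are loaded — can be sketched as follows. The tile states and helper names are illustrative assumptions, not the claimed method.

```python
from enum import Enum

class TileState(Enum):
    MISSING = 0
    LOADING = 1
    LOADED = 2

def tiles_in_fov(x, y, width, height, tile_size):
    """Enumerate the (col, row) tile coordinates a field of view covers."""
    x0, y0 = x // tile_size, y // tile_size
    x1, y1 = (x + width - 1) // tile_size, (y + height - 1) // tile_size
    return [(tx, ty) for ty in range(y0, y1 + 1) for tx in range(x0, x1 + 1)]

def ready_to_render(cache, tiles):
    """The viewer renders only once every tile in the FOV is fully loaded."""
    return all(cache.get(t) == TileState.LOADED for t in tiles)
```

Gating the render on the fully-loaded state avoids drawing partially populated views while tiles stream in.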
-
Patent number: 12230171
Abstract: A control method for an imaging system is provided. The imaging system includes a display device and a dimming lens disposed on a display side of the display device. The dimming lens is capable of moving toward or away from the display device to adjust an object distance from the display device to the dimming lens. The control method includes: determining the object distance when the dimming lens moves to a set position; according to the object distance, determining an image distance from a virtual image, generated by a display image of the display device through the dimming lens, to the dimming lens; and according to the determined image distance and a correlation between the image distance and a resolution of the display image, determining a resolution corresponding to the determined image distance, and controlling the display device to display the display image at the determined resolution.
Type: Grant
Filed: May 14, 2021
Date of Patent: February 18, 2025
Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Xi Li, Jinbao Peng, Wenyu Li, Longhui Wang
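The object-distance-to-image-distance step can be illustrated with the standard thin-lens relation; the patent does not state its optical model, so this is an assumed approximation, and the resolution lookup table is hypothetical.

```python
def image_distance(object_distance, focal_length):
    """Thin-lens relation 1/f = 1/d_o + 1/d_i, solved for d_i.
    A negative d_i corresponds to a virtual image on the same side of the
    lens as the object, as in a magnifier-style configuration."""
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

def pick_resolution(image_dist, calibration):
    """Choose the display resolution calibrated for the image distance
    closest to the computed one (the correlation table is hypothetical)."""
    return min(calibration, key=lambda entry: abs(entry[0] - image_dist))[1]
```

For example, with a 30 mm focal length and the display 20 mm away (inside the focal length), the virtual image forms 60 mm behind the lens.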
-
Patent number: 12205473
Abstract: Techniques are disclosed for systems and methods to provide navigation control and/or docking assist for mobile structures. A navigation control system includes a logic device, one or more sensors, one or more actuators/controllers, and modules to interface with users, sensors, actuators, and/or other modules of a mobile structure. The logic device is configured to receive navigation control parameters from a user interface for the mobile structure and perimeter sensor data from a perimeter ranging system mounted to the mobile structure. The logic device determines navigation control signals based on the navigation control parameters and perimeter sensor data and provides the navigation control signals to a navigation control system for the mobile structure. Control signals may be displayed to a user and/or used to adjust a steering actuator, a propulsion system thrust, and/or other operational systems of the mobile structure.
Type: Grant
Filed: January 21, 2021
Date of Patent: January 21, 2025
Assignee: FLIR Belgium BVBA
Inventors: Jean-Luc Kersulec, Mark Johnson
-
Patent number: 12198256
Abstract: Techniques are disclosed for improving the throughput of ray intersection or visibility queries performed by a ray tracing hardware accelerator. Throughput is improved, for example, by releasing allocated resources before ray visibility query results are reported by the hardware accelerator. The allocated resources are released when the ray visibility query results can be stored in a compressed format outside of the allocated resources. When reporting the ray visibility query results, the results are reconstructed based on the results stored in the compressed format. The compressed format storage can be used for ray visibility queries that return no intersections or that terminate on any hit. One or more individual components of allocated resources can also be independently deallocated based on the type of data to be returned and/or results of the ray visibility query.
Type: Grant
Filed: November 14, 2023
Date of Patent: January 14, 2025
Assignee: NVIDIA Corporation
Inventors: Gregory Muthler, John Burgess, Ronald Charles Babich, Jr., William Parsons Newhall, Jr.
-
Patent number: 12198404
Abstract: This application relates to the field of pose detection technologies, and discloses a method and an apparatus for obtaining pose information, a method and an apparatus for determining symmetry of an object, and a storage medium. The method includes: obtaining a rotational symmetry degree of freedom of a target object (901), obtaining pose information of the target object (902), and adjusting the pose information of the target object based on the rotational symmetry degree of freedom to obtain adjusted pose information (903), where the adjusted pose information is used for displaying a virtual object, and the virtual object is an object associated with the target object.
Type: Grant
Filed: December 22, 2021
Date of Patent: January 14, 2025
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Er Li, Bo Zheng, Jianbin Liu, Jun Cao
-
Patent number: 12182957
Abstract: One embodiment of the present invention sets forth a technique for performing style transfer. The technique includes generating an input shape representation that includes a plurality of points near a surface of an input three-dimensional (3D) shape, where the input 3D shape includes content-based attributes associated with an object. The technique also includes determining a style code based on a difference between a first latent representation of a first 3D shape and a second latent representation of a second 3D shape, where the second 3D shape is generated by applying one or more augmentations to the first 3D shape. The technique further includes generating, based on the input shape representation and style code, an output 3D shape having the content-based attributes of the input 3D shape and style-based attributes associated with the style code, and generating a 3D model of the object based on the output 3D shape.
Type: Grant
Filed: January 3, 2023
Date of Patent: December 31, 2024
Assignee: AUTODESK, INC.
Inventors: Hooman Shayani, Marco Fumero, Aditya Sanghi
-
Patent number: 12175567
Abstract: An example computing device is configured to (i) generate a cross-sectional view of a three-dimensional drawing file, the cross-sectional view including an object corresponding to a given mesh of the three-dimensional drawing file, the object including a void contained within the object, (ii) determine a plurality of two-dimensional line segments that collectively define a boundary of the void, (iii) for each line segment, determine nearby line segments based on a distance between an end point of the line segment and an end point of the one or more nearby line segments being within a threshold distance, (iv) determine one or more fully-connected sub-objects by connecting respective sets of nearby line segments in series, (v) determine, from the fully-connected sub-objects, a final sub-object to be used as a new boundary of the void, and (vi) add the final sub-object to the cross-sectional view as the new boundary of the void.
Type: Grant
Filed: December 29, 2023
Date of Patent: December 24, 2024
Assignee: Procore Technologies, Inc.
Inventor: Christopher Myers
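Step (iii) above — grouping line segments whose endpoints fall within a threshold distance — can be sketched with a simple greedy pass. This is an illustrative simplification under assumed data shapes (segments as endpoint pairs), not the claimed algorithm.

```python
import math

def dist(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearby(seg_a, seg_b, threshold):
    """Two segments are 'nearby' when any endpoint of one lies within
    the threshold distance of an endpoint of the other."""
    return any(dist(p, q) <= threshold for p in seg_a for q in seg_b)

def connect_segments(segments, threshold):
    """Greedily group segments whose endpoints are within the threshold,
    yielding candidate sets from which connected sub-objects are built."""
    groups = []
    for seg in segments:
        for group in groups:
            if any(nearby(seg, other, threshold) for other in group):
                group.append(seg)
                break
        else:
            groups.append([seg])
    return groups
```

Each resulting group is a candidate set of segments to connect in series into a fully-connected sub-object; selecting the final boundary from those candidates is the patent's later step.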
-
Patent number: 12165263
Abstract: Briefly, embodiments are described, such as methods and/or systems for real-time visualization of web-browser-derived content within a gaming environment.
Type: Grant
Filed: February 2, 2024
Date of Patent: December 10, 2024
Assignee: AstroVirtual, Inc.
Inventors: Dennis M. Futryk, Charles H. House
-
Patent number: 12153737
Abstract: An augmented reality (AR) application authoring system is disclosed. The AR application authoring system enables the real-time creation of interactive AR applications with freehand inputs. The AR application authoring system enables intuitive authoring of customized freehand gesture inputs through embodied demonstration while using the surrounding environment as a contextual reference. A visual programming interface is provided with which users can define freehand interactions by matching the freehand gestures with reactions of virtual AR assets. Thus, users can create personalized freehand interactions through simple trigger-action programming logic. Further, with the support of a real-time hand gesture detection algorithm, users can seamlessly test and iterate on the authored AR experience.
Type: Grant
Filed: July 26, 2022
Date of Patent: November 26, 2024
Assignee: Purdue Research Foundation
Inventors: Karthik Ramani, Tianyi Wang, Xun Qian, Fengming He
-
Patent number: 12153724
Abstract: In one embodiment, a method includes capturing, using one or more cameras implemented in a wearable device worn by a user, a first image depicting at least a part of a hand of the user holding a controller in an environment, identifying one or more features from the first image to estimate a pose of the hand of the user, estimating a first pose of the controller based on the pose of the hand of the user and an estimated grip that defines a relative pose between the hand of the user and the controller, receiving IMU data of the controller, and estimating a second pose of the controller by updating the first pose of the controller using the IMU data of the controller. The method utilizes multiple data sources to track the controller under various conditions of the environment to provide an accurate controller tracking consistently.
Type: Grant
Filed: April 13, 2022
Date of Patent: November 26, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Tsz Ho Yu, Chengyuan Yan, Christian Forster
-
Patent number: 12141913
Abstract: In an aspect, a computer-implemented method allows for navigation in a three-dimensional (3D) virtual environment. In the method, data specifying a three-dimensional virtual space is received. A position and direction in the three-dimensional virtual space is received. The position and direction are input by a first user and represent a first virtual camera used to render the three-dimensional virtual space to the first user. A video stream captured from a camera positioned to capture the first user is received. A second virtual camera is navigated according to an input of a second user.
Type: Grant
Filed: July 12, 2023
Date of Patent: November 12, 2024
Assignee: Katmai Tech Inc.
Inventor: Kristofor Bernard Swanson
-
Patent number: 12141899
Abstract: An imaging system (702) includes a reconstructor (716) configured to reconstruct obtained cone beam projection data with a voxel-dependent redundancy weighting such that low frequency components of the cone beam projection data are reconstructed with more redundant data than high frequency components of the cone beam projection data to produce volumetric image data. A method includes reconstructing obtained cone beam projection data with a voxel-dependent redundancy weighting such that low frequency components are reconstructed with more redundant data than high frequency components to produce volumetric image data.
Type: Grant
Filed: March 24, 2020
Date of Patent: November 12, 2024
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Kevin Martin Brown, Thomas Koehler, Claas Bontus
-
Patent number: 12106044
Abstract: Disclosed herein is a method and system for determining quality of an input document during risk and compliance assessment. The method includes receiving the input document for risk and compliance assessment, identifying a document type and at least one sub-type of the input document using a Natural Language Processing (NLP) technique and a trained neural network model. Layout of content present in the input document is detected based on each of a plurality of segments extracted from content and structural parameters associated with respective segments. A document review model is identified from a plurality of document review models based on the type and at least one sub-type of the input document. Thereafter, the quality of the input document and a compliance score is determined by identifying one or more deviations of content of the input document from content of a predefined template for the input document.
Type: Grant
Filed: June 1, 2022
Date of Patent: October 1, 2024
Assignee: Wipro Limited
Inventors: Swapnil Dnyaneshwar Belhe, Zaheer Juzer Javi, Pravin Pawar
-
Patent number: 12086926
Abstract: In some implementations, a computing device can simulate a virtual parallax to create three dimensional effects. For example, the computing device can obtain an image captured at a particular location. The captured two-dimensional image can be applied as texture to a three-dimensional model of the capture location. To give the two-dimensional image a three-dimensional look and feel, the computing device can simulate moving the camera used to capture the two-dimensional image to different locations around the image capture location to generate different perspectives of the textured three-dimensional model as if captured by multiple different cameras. Thus, a virtual parallax can be introduced into the generated imagery for the capture location. When presented to the user on a display of the computing device, the generated imagery may have a three-dimensional look and feel even though generated from a single two-dimensional image.Type: Grant
Filed: February 21, 2023
Date of Patent: September 10, 2024
Assignee: Apple Inc.
Inventors: Gunnar Martin Byrod, Jan H. Bockert, Johan V. Hedberg, Ross W. Anderson
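One simple way to realize the "simulate moving the camera around the capture location" step is to place virtual cameras on a small circle centered on the capture point and render the textured model from each. This is an assumed concretization; the radius, count, and circular layout are illustrative choices, not details from the patent.

```python
import math

def parallax_camera_positions(capture_point, radius, count):
    """Virtual camera positions on a circle around the original capture
    location; rendering the textured 3D model from each position yields
    one parallax view of the scene."""
    cx, cy, cz = capture_point
    return [(cx + radius * math.cos(2 * math.pi * i / count),
             cy + radius * math.sin(2 * math.pi * i / count),
             cz)
            for i in range(count)]
```

Cycling through the rendered views (for example, driven by device tilt) produces the parallax motion that gives the single 2D image a 3D feel.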
-
Patent number: 12073601
Abstract: An image matching system for determining visual overlaps between images by using box embeddings is described herein. The system receives two images depicting a 3D surface with different camera poses. The system inputs the images (or a crop of each image) into a machine learning model that outputs a box encoding for the first image and a box encoding for the second image. A box encoding includes parameters defining a box in an embedding space. Then the system determines an asymmetric overlap factor that measures asymmetric surface overlaps between the first image and the second image based on the box encodings. The asymmetric overlap factor includes an enclosure factor indicating how much surface from the first image is visible in the second image and a concentration factor indicating how much surface from the second image is visible in the first image.
Type: Grant
Filed: October 13, 2023
Date of Patent: August 27, 2024
Assignee: NIANTIC, INC.
Inventors: Anita Rau, Guillermo Garcia-Hernando, Gabriel J. Brostow, Daniyar Turmukhambetov
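The asymmetric overlap factor maps naturally onto intersection volumes of axis-aligned boxes: the enclosure factor is the intersection volume normalized by the first box's volume, the concentration factor by the second's. A minimal sketch, assuming boxes are given as (lo, hi) corner tuples; the model that produces the box encodings is not shown.

```python
def box_volume(lo, hi):
    """Volume of an axis-aligned box given its opposite corners."""
    v = 1.0
    for l, h in zip(lo, hi):
        v *= max(0.0, h - l)
    return v

def asymmetric_overlap(box_a, box_b):
    """Return (enclosure, concentration): the fraction of box_a's volume
    inside box_b, and the fraction of box_b's volume inside box_a."""
    lo = tuple(max(a, b) for a, b in zip(box_a[0], box_b[0]))
    hi = tuple(min(a, b) for a, b in zip(box_a[1], box_b[1]))
    inter = box_volume(lo, hi)
    return inter / box_volume(*box_a), inter / box_volume(*box_b)
```

The asymmetry is the point: a close-up crop can be entirely enclosed by a wide shot's box (concentration near 1) while covering only a small fraction of it (enclosure near 0).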
-
Patent number: 12073506
Abstract: In accordance with some embodiments of the disclosed subject matter, methods, systems, and media for generating images of multiple sides of an object are provided. In some embodiments, a method comprises receiving information indicative of a 3D pose of a first object in a first coordinate space at a first time; receiving a group of images captured using at least one image sensor, each image associated with a field of view within the first coordinate space; mapping at least a portion of a surface of the first object to a 2D area with respect to the image based on the 3D pose of the first object; associating, for images including the surface, a portion of that image with the surface of the first object based on the 2D area; and generating a composite image of the surface using images associated with the surface.
Type: Grant
Filed: June 10, 2022
Date of Patent: August 27, 2024
Assignee: Cognex Corporation
Inventors: Ahmed El-Barkouky, James A. Negro, Xiangyun Ye
-
Patent number: 12067692
Abstract: Systems and methods in accordance with various embodiments of the invention can generate term (or PolyMap) coefficients that properly calibrate any camera lens using information obtained from a calibration pattern (e.g., chessboard patterns). Term coefficients in accordance with several embodiments of the invention can be used to transform a warped image to a dewarped image by mapping image information from the warped image to the dewarped coordinates. Determining calibration coefficients in accordance with certain embodiments of the invention can include novel and inventive processes for capturing calibration images, processing the calibration images, and/or deriving inputs needed for proper calibration as described in this disclosure. Processes described in this description can provide for improvements in the field of image processing, especially in increasing the speed of dewarping processes and in the accurate capture of calibration pattern images.
Type: Grant
Filed: April 30, 2021
Date of Patent: August 20, 2024
Assignee: Immersive Tech, Inc.
Inventors: Micheal Woodrow Burns, Jon Clagg, Kunal Bansal
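Coefficient-driven dewarping of the kind described here is commonly realized as a radial polynomial remapping of pixel coordinates. The odd-polynomial model below is a standard lens-distortion form used for illustration; the patent's actual PolyMap model may differ.

```python
import math

def dewarp_point(x, y, center, coeffs):
    """Map a warped pixel to dewarped coordinates with an odd radial
    polynomial r' = c0*r + c1*r**3 + c2*r**5 + ... about the center."""
    dx, dy = x - center[0], y - center[1]
    r = math.hypot(dx, dy)
    if r == 0.0:
        return (x, y)
    r_new = sum(c * r ** (2 * i + 1) for i, c in enumerate(coeffs))
    scale = r_new / r
    return (center[0] + dx * scale, center[1] + dy * scale)
```

In practice the mapping is evaluated once per output pixel (or baked into a lookup table), which is where the speed improvements the abstract mentions would matter.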
-
Patent number: 12041388
Abstract: The invention is directed towards a system where a head-mounted display (HMD), or other head mounted computing device, may annotate video data captured by the HMD. When a video is annotated on the computer processor, the data packet associated with that digital marker is encoded as an operation-encoded audio packet and sent over a secure audio link to the HMD. Sending the operation-encoded audio packet over the secure audio link requires the HMD to decode the packet into a data packet which may then be used to annotate a display on the HMD. The user of the HMD may be able to view his own recorded field of view on a display device which may then be annotated using the data in the data packet.
Type: Grant
Filed: June 8, 2022
Date of Patent: July 16, 2024
Assignee: REALWEAR, INC.
Inventor: Chris Parkinson
-
Patent number: 12032802
Abstract: This invention relates to panning in a three dimensional environment on a mobile device. In an embodiment, a computer-implemented method is provided for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen. A user input is received indicating that an object has touched a first point on a touch screen of the mobile device and the object has been dragged to a second point on the touch screen. A first target location in the three dimensional environment is determined based on the first point on the touch screen. A second target location in the three dimensional environment is determined based on the second point on the touch screen. Finally, a three dimensional model is moved in the three dimensional environment relative to the virtual camera according to the first and second target locations.
Type: Grant
Filed: July 2, 2021
Date of Patent: July 9, 2024
Assignee: GOOGLE LLC
Inventor: David Kornmann
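The final step — moving the model relative to the camera according to the two target locations — amounts to translating by the vector between them so the point first touched stays under the finger. A minimal sketch; unprojecting screen points into the 3D target locations (e.g., ray casting against terrain) is assumed to have happened already.

```python
def pan_model(model_origin, first_target, second_target):
    """Translate the model by the vector between the two target locations
    so the grabbed point tracks the drag gesture."""
    return tuple(o + (b - a) for o, a, b in
                 zip(model_origin, first_target, second_target))
```

Equivalently, the virtual camera can be translated by the opposite vector; moving the model keeps the camera frame fixed, which matches the abstract's phrasing.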
-
Patent number: 12025457
Abstract: Systems, methods, and non-transitory computer readable media configured to provide three-dimensional representations of routes. Location information for a planned movement may be obtained. The location information may include three-dimensional information of a location. Route information for the planned movement may be obtained. The route information may define a route of one or more entities within the location. A three-dimensional view of the route within the location may be determined based on the location information and the route information. An interface through which the three-dimensional view of the route within the location is accessible may be provided.
Type: Grant
Filed: February 21, 2023
Date of Patent: July 2, 2024
Assignee: Palantir Technologies Inc.
Inventors: Richard Dickson, Mason Cooper, Quentin Le Pape
-
Patent number: 12028644
Abstract: Various embodiments of an apparatus, method(s), system(s) and computer program product(s) described herein are directed to a Scaling Engine. The Scaling Engine identifies a background object portrayed in a background template for a video feed. The Scaling Engine determines a background template display position for concurrent display of the background object with video feed data. The Scaling Engine generates a scaled background template by modifying a current aspect ratio of the background template with the background object set at the background display position according to a video feed aspect ratio. The Scaling Engine generates a merged video feed by merging the scaled background template with live video feed data, the merged video feed data providing an unobstructed portrayal of the identified background object.
Type: Grant
Filed: May 19, 2022
Date of Patent: July 2, 2024
Assignee: Zoom Video Communications, Inc.
Inventor: Thanh Le Nguyen
-
Patent number: 12008230
Abstract: The present disclosure generally describes user interfaces related to time. In accordance with embodiments, user interfaces for displaying and enabling an adjustment of a displayed time zone are described. In accordance with embodiments, user interfaces for initiating a measurement of time are described. In accordance with embodiments, user interfaces for enabling and displaying a user interface using a character are described. In accordance with embodiments, user interfaces for enabling and displaying a user interface that includes an indication of a current time are described. In accordance with embodiments, user interfaces for enabling configuration of a background for a user interface are described. In accordance with embodiments, user interfaces for enabling configuration of displayed applications on a user interface are described.
Type: Grant
Filed: September 24, 2020
Date of Patent: June 11, 2024
Assignee: Apple Inc.
Inventors: Kevin Will Chen, Teofila Connor, Aurelio Guzman, Eileen Y. Lee, Christopher Wilson, Alan C. Dye
-
Patent number: 12002161
Abstract: Methods and apparatus for a map tool displaying a three-dimensional view of a map based on a three-dimensional model of the surrounding environment. The three-dimensional map view of a map may be based on a model constructed from multiple data sets, where the multiple data sets include mapping information for an overlapping area of the map displayed in the map view. For example, one data set may include two-dimensional data including object footprints, where the object footprints may be extruded into a three-dimensional object based on data from a data set composed of three-dimensional data. In this example, the three-dimensional data may include height information that corresponds to the two-dimensional object, where the height may be obtained by correlating the location of the two-dimensional object within the three-dimensional data.
Type: Grant
Filed: December 21, 2018
Date of Patent: June 4, 2024
Assignee: Apple Inc.
Inventors: James A. Howard, Christopher Blumenberg
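The footprint-extrusion step the example describes can be sketched directly: duplicate the 2D footprint ring at ground level and at the correlated height, then stitch one wall quad per edge. The mesh representation (vertex list plus quads as index tuples) is an assumed convention, not the patent's.

```python
def extrude_footprint(footprint, height):
    """Turn a 2D footprint polygon into a 3D prism: the ring at z=0 and
    z=height, plus one quad of vertex indices per footprint edge."""
    n = len(footprint)
    bottom = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    vertices = bottom + top
    walls = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return vertices, walls
```

Cap polygons for the roof and floor can be added by triangulating the two rings; they are omitted here for brevity.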
-
Patent number: 12001646
Abstract: According to one aspect, it becomes possible to easily modify a 3D object which is displayed in a virtual reality space. A method is performed by a computer configured to be communicable with a position detection device that includes a drawing surface and that, in operation, detects a position of an electronic pen on the drawing surface. The method includes rendering, in a virtual reality space, a first object that is a 3D object, rendering, near the first object, a display surface that is a 3D object, rendering, on the display surface, a 3D line that is a 3D object generated based on the position of the electronic pen on the drawing surface, wherein the position of the electronic pen is detected by the position detection device, and outputting the first object, the display surface, and the 3D line, which are the 3D objects, to a display.
Type: Grant
Filed: January 12, 2023
Date of Patent: June 4, 2024
Assignee: Wacom Co., Ltd.
Inventors: Hiroshi Fujioka, Naoya Nishizawa, Kenton J. Loftus, Milen Dimitrov Metodiev, Markus Weber, Anthony Ashton
-
Patent number: 11983796
Abstract: A method for processing an electronic image including receiving, by a viewer, the electronic image and a FOV (field of view), wherein the FOV includes at least one coordinate, at least one dimension, and a magnification factor, loading, by the viewer, a plurality of tiles within the FOV, determining, by the viewer, a state of the plurality of tiles in a cache, and in response to determining that the state of the plurality of tiles in the cache is a fully loaded state, rendering, by the viewer, the plurality of tiles to a display.
Type: Grant
Filed: August 4, 2022
Date of Patent: May 14, 2024
Assignee: Paige.AI, Inc.
Inventors: Alexandre Kirszenberg, Razik Yousfi, Thomas Fresneau, Peter Schueffler
-
Patent number: 11972529
Abstract: The disclosure concerns an augmented reality method in which visual information concerning a real-world object, structure or environment is gathered and a deformation operation is performed on that visual information to generate virtual content that may be displayed in place of, or additionally to, real-time captured image content of the real-world object, structure or environment. Some particular embodiments concern the sharing of visual environment data and/or information characterizing the deformation operation between client devices.
Type: Grant
Filed: February 1, 2019
Date of Patent: April 30, 2024
Assignee: SNAP INC.
Inventor: David Li
-
Patent number: 11971274
Abstract: There is provided a method for producing a high-definition (HD) map. The method includes detecting an object of a road area from an aerial image, extracting a two-dimensional (2D) coordinate value of the detected object, calculating a three-dimensional (3D) coordinate value corresponding to the 2D coordinate value by projecting the extracted 2D coordinate value onto point cloud data that configures the MMS data, and generating an HD map showing a road area of the aerial image in three dimensions based on the calculated 3D coordinate value.
Type: Grant
Filed: November 19, 2020
Date of Patent: April 30, 2024
Assignee: THINKWARE CORPORATION
Inventor: Suk Pil Ko
-
Patent number: 11948242
Abstract: Methods and apparatuses are described for intelligent smoothing of 3D alternative reality applications for secondary 2D viewing. A computing device receives a first data set corresponding to a first position of an alternative reality viewing device. The computing device generates a 3D virtual environment for display on the alternative reality viewing device using the first data set, and a 2D rendering of the virtual environment for display on a display device using the first data set. The computing device receives a second data set corresponding to a second position of the alternative reality viewing device after movement of the alternative reality viewing device. The computing device determines whether a difference between the first data set and the second data set is above a threshold. The computing device updates the 2D rendering of the virtual environment on the display device using the second data set, when the difference is above the threshold value.
Type: Grant
Filed: August 27, 2021
Date of Patent: April 2, 2024
Assignee: FMR LLC
Inventors: Adam Schouela, David Martin, Brian Lough, James Andersen, Cecelia Brooks
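The thresholded-update step above reduces to a distance test between the two tracked positions. A minimal sketch, treating the "data sets" as 3D positions (the patent's data sets may carry more state, such as orientation).

```python
import math

def should_update_2d(first_pos, second_pos, threshold):
    """Re-render the companion 2D view only when the headset has moved
    farther than the threshold, smoothing out jitter for 2D viewers."""
    return math.dist(first_pos, second_pos) > threshold
```

Small head movements below the threshold leave the 2D rendering untouched, which is the "intelligent smoothing" the abstract names.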
-
Patent number: 11928783
Abstract: Aspects of the present disclosure involve a system for presenting AR items. The system performs operations including receiving a video that includes a depiction of one or more real-world objects in a real-world environment and obtaining depth data related to the real-world environment. The operations include generating a three-dimensional (3D) model of the real-world environment based on the video and the depth data and adding an augmented reality (AR) item to the video based on the 3D model of the real-world environment. The operations include determining that the AR item has been placed on a vertical plane of the real-world environment and modifying an orientation of the AR item to correspond to an orientation of the vertical plane.
Type: Grant
Filed: December 30, 2021
Date of Patent: March 12, 2024
Assignee: Snap Inc.
Inventors: Avihay Assouline, Itamar Berger, Gal Dudovitch, Peleg Harel, Gal Sasson
-
Patent number: 11922632
Abstract: A human face data processing method according to an embodiment of the present disclosure includes acquiring a picture of a human face by means of a scanning apparatus, obtaining point cloud information by means of a structured light stripe, and further obtaining a three-dimensional model of the human face, and mapping the three-dimensional model onto a circular plane in an area-preserving manner so as to form a two-dimensional human face image. Three-dimensional data is converted into two-dimensional data, thereby facilitating data storage. In addition, because the mapping is area-preserving, the restoration quality is better when the two-dimensional data is restored to three-dimensional data, thereby facilitating the re-utilization of a three-dimensional image.
Type: Grant
Filed: November 4, 2020
Date of Patent: March 5, 2024
Assignee: BEIJING GMINE VISION TECHNOLOGIES LTD.
Inventors: Wei Chen, Boyang Wu
-
Patent number: 11921971
Abstract: A live broadcasting recording equipment, a live broadcasting recording system and a live broadcasting recording method are provided. The live broadcasting recording equipment includes a camera, a processing device, and a terminal device. The camera captures images to provide photographic data. The processing device executes background removal processing on the photographic data to generate a person image. The terminal device communicates with the processing device and has a display. The processing device executes multi-layer processing to fuse the person image, a three-dimensional virtual reality background image, an augmented reality object image, and a presentation image, and generate a composite image. After an application gateway of the processing device recognizes a login operation of the terminal device, the processing device outputs the composite image to the terminal device, so that the display of the terminal device displays the composite image.
Type: Grant
Filed: April 11, 2022
Date of Patent: March 5, 2024
Assignee: Optoma China Co., Ltd
Inventors: Kai-Ming Guo, Tian-Shen Wang, Zi-Xiang Xiao, Yi-Wei Lee
-
Patent number: 11915342
Abstract: Systems, methods, and non-transitory computer-readable media can obtain data associated with a computer-based experience. The computer-based experience can be based on interactive real-time technology. At least one virtual camera can be configured within the computer-based experience in a real-time engine. Data associated with an edit cut of the computer-based experience can be obtained based on content captured by the at least one virtual camera. A plurality of shots that correspond to two-dimensional content can be generated from the edit cut of the computer-based experience in the real-time engine. Data associated with a two-dimensional version of the computer-based experience can be generated with the real-time engine based on the plurality of shots. The two-dimensional version can be rendered based on the generated data.
Type: Grant
Filed: July 15, 2022
Date of Patent: February 27, 2024
Assignee: Baobab Studios Inc.
Inventors: Mikhail Stanislavovich Solovykh, Wei Wang, Nathaniel Christopher Dirksen, Lawrence David Cutler, Apostolos Lerios
-
Patent number: 11897394
Abstract: A head-up display for a vehicle including a display device configured to output light forming an image, an optical system configured to control a path of the light such that the image is output towards a light transmission region, and a controller configured to generate the image based on a first view and a second view such that a virtual image is produced on a ground surface in the light transmission region, the first view being towards the ground surface, the second view being towards a 3D space above the ground surface, the first view and the second view being based on an eye-box, the ground surface being in front of the vehicle, and the virtual image including a graphic object having a stereoscopic effect, and control the display device to output the image.
Type: Grant
Filed: July 20, 2021
Date of Patent: February 13, 2024
Assignee: NAVER LABS CORPORATION
Inventors: Jae Won Cha, Jeseon Lee, Kisung Kim, Jongjin Park, Eunyoung Jeong, Yongho Shin
-
Patent number: 11900528
Abstract: A method of rendering a view is disclosed. Three occlusion planes associated with an interior cavity of a three-dimensional object included in the view are created. The three occlusion planes are positioned based on a camera position and orientation. Any objects or parts of objects that are in a line of sight between the camera and any one of the three occlusion planes are culled. The view is rendered from the perspective of the camera.
Type: Grant
Filed: May 27, 2021
Date of Patent: February 13, 2024
Assignee: Unity IPR ApS
Inventors: Andrew Peter Maneri, Donnavon Troy Webb, Jonathan Randall Newberry
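The culling test in this abstract can be sketched as a ray-plane intersection check: an object is dropped when it lies closer to the camera than an occlusion plane along the camera's line of sight to it. A toy version treating objects as points (the point simplification and the point-plus-normal plane representation are assumptions for illustration):

```python
import numpy as np

def ray_plane_t(origin, direction, plane_point, plane_normal):
    """Parameter t at which origin + t*direction hits the plane, or None."""
    denom = float(np.dot(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None  # ray is parallel to the plane
    t = float(np.dot(plane_point - origin, plane_normal)) / denom
    return t if t > 0 else None

def is_culled(camera, obj_center, planes):
    """Cull an object that sits between the camera and any occlusion plane."""
    d = obj_center - camera
    dist = float(np.linalg.norm(d))
    d = d / dist
    for plane_point, plane_normal in planes:
        t = ray_plane_t(camera, d, plane_point, plane_normal)
        if t is not None and dist < t:
            return True  # object is in the line of sight before the plane
    return False

camera = np.array([0.0, 0.0, 0.0])
# one occlusion plane at z = 10, facing the camera
planes = [(np.array([0.0, 0.0, 10.0]), np.array([0.0, 0.0, -1.0]))]
near_obj = np.array([0.0, 0.0, 5.0])   # between camera and plane -> culled
far_obj = np.array([0.0, 0.0, 15.0])   # beyond the plane -> kept
```

A production renderer would run this per object bounds (not per center point) and against all three planes.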
-
Patent number: 11889222
Abstract: The present disclosure provides a system and method for creating a multilayer scene from multiple visual input data and injecting an image of an actor into the multilayer scene to produce an output video approximating a three-dimensional space, conveying depth by visualizing the actor in front of some layers and behind others. This is useful in many situations where the actor needs to appear on a display alongside other visual items without overlapping or occluding them. A user can interact with other virtual objects or items in a scene, or even with other users visualized in the scene.
Type: Grant
Filed: July 22, 2021
Date of Patent: January 30, 2024
Inventor: Malay Kundu
-
Patent number: 11887499
Abstract: The present invention relates to a virtual-scene-based language-learning system comprising at least a scheduling and managing module and a scene-editing module, and further comprising an association-analyzing module. The scheduling and managing module is connected to the scene-editing module and to the association-analyzing module in a wired or wireless manner. The association-analyzing module analyzes second-language information input by a user, provides at least one associated image and/or picture, and displays the associated image and/or picture selected by the user on a client, so that a teacher at the client is able to understand the language information expressed in the second language by the student based on the associated image and/or picture.
Type: Grant
Filed: July 13, 2021
Date of Patent: January 30, 2024
Inventor: Ailin Sha
-
Patent number: 11875583
Abstract: The present invention belongs to the technical field of 3D reconstruction in the field of computer vision, and provides a dataset generation method for self-supervised learning scene point cloud completion based on panoramas. Pairs of incomplete point cloud and target point cloud with RGB information and normal information can be generated by taking RGB panoramas, depth panoramas and normal panoramas in the same view as input for constructing a self-supervised learning dataset for training of the scene point cloud completion network. The key points of the present invention are occlusion prediction and equirectangular projection based on view conversion, and processing of the stripe problem and point-to-point occlusion problem during conversion. The method of the present invention includes simplification of the collection mode of the point cloud data in a real scene; occlusion prediction idea of view conversion; and design of view selection strategy.
Type: Grant
Filed: November 23, 2021
Date of Patent: January 16, 2024
Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
Inventors: Xin Yang, Tong Li, Baocai Yin, Zhaoxuan Zhang, Boyan Wei, Zhenjun Du
-
Patent number: 11875012
Abstract: The technology disclosed relates to positioning and revealing a control interface in a virtual or augmented reality that includes causing display of a plurality of interface projectiles at a first region of a virtual or augmented reality. Input is received that is interpreted as user interaction with an interface projectile. User interaction includes selecting and throwing the interface projectile in a first direction. An animation of the interface projectile is displayed along a trajectory in the first direction to a place where it lands. The control interface is then displayed blooming from the interface projectile at the place where it lands.
Type: Grant
Filed: May 21, 2019
Date of Patent: January 16, 2024
Assignee: Ultrahaptics IP Two Limited
Inventor: Nicholas James Benson
-
Patent number: 11854115
Abstract: A vectorized caricature avatar generator receives a user image from which face parameters are generated. Segments of the user image including certain facial features (e.g., hair, facial hair, eyeglasses) are also identified. Segment parameter values are also determined, the segment parameter values being those parameter values from a set of caricature avatars that correspond to the segments of the user image. The face parameter values and the segment parameter values are used to generate a caricature avatar of the user in the user image.
Type: Grant
Filed: November 4, 2021
Date of Patent: December 26, 2023
Assignee: Adobe Inc.
Inventors: Daichi Ito, Yijun Li, Yannick Hold-Geoffroy, Koki Madono, Jose Ignacio Echevarria Vallespi, Cameron Younger Smith
-
Patent number: 11856297
Abstract: A panoramic video camera comprises a plurality of image sensors which are configured to capture a plurality of frames at a time, and image processing circuitry configured to generate a frame read signal to read the plurality of frames generated by the plurality of image sensors, apply a cylindrical mapping function to map the plurality of frames to a cylindrical image plane, and stitch the cylindrically mapped plurality of frames together in the cylindrical image plane based on a plurality of projection parameters.
Type: Grant
Filed: April 1, 2019
Date of Patent: December 26, 2023
Assignee: GN AUDIO A/S
Inventor: Yashket Gupta
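The cylindrical mapping step can be illustrated with the standard cylindrical projection formulas, u = f*atan(x/f) and v = f*y/sqrt(x^2 + f^2), where (x, y) are image coordinates relative to the principal point and f is the focal length in pixels. This is a generic sketch of cylindrical warping, not the patent's specific mapping function, and the focal-length value is an arbitrary assumption:

```python
import numpy as np

def cylindrical_coords(x, y, f):
    """Map image-plane coordinates (relative to the principal point)
    onto a cylindrical image plane with focal length f (pixels)."""
    u = f * np.arctan2(x, f)            # angle around the cylinder, scaled by f
    v = f * y / np.sqrt(x**2 + f**2)    # height on the cylinder
    return u, v

f = 500.0
u0, v0 = cylindrical_coords(0.0, 0.0, f)   # the principal point maps to itself
u1, v1 = cylindrical_coords(f, 0.0, f)     # a point 45 degrees off-axis
```

Because equal horizontal angles map to equal u-intervals on the cylinder, frames from adjacent sensors can be stitched by simple translation in (u, v).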
-
Patent number: 11842444
Abstract: Embodiments include systems and methods for visualizing the position of a capturing device within a 3D mesh, generated from a video stream from the capturing device. A capturing device may provide a video stream along with point cloud data and camera pose data. This video stream, point cloud data, and camera pose data are then used to progressively generate a 3D mesh. The camera pose data and point cloud data can further be used, in conjunction with a SLAM algorithm, to indicate the position and orientation of the capturing device within the generated 3D mesh.
Type: Grant
Filed: June 2, 2021
Date of Patent: December 12, 2023
Assignee: STREEM, LLC
Inventors: Sean M. Adkinson, Teressa Chizeck, Ryan R. Fink
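Indicating the capturing device's position and orientation inside the mesh amounts to reading them off the tracked camera pose. A small sketch assuming a 4x4 camera-to-world pose matrix and an OpenGL-style convention where the camera looks down its local -Z axis (both conventions are assumptions, not stated in the patent):

```python
import numpy as np

def camera_marker(pose):
    """Extract the position and viewing direction of the capturing device
    from a 4x4 camera-to-world pose matrix, for drawing a marker in the mesh.
    Assumes the camera looks down its local -Z axis (OpenGL convention)."""
    position = pose[:3, 3]      # translation column = camera position in world
    forward = -pose[:3, 2]      # negated local Z axis = viewing direction
    return position, forward

# a camera translated to (1, 2, 3) with identity rotation
pose = np.eye(4)
pose[:3, 3] = [1.0, 2.0, 3.0]
pos, fwd = camera_marker(pose)
```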
-
Patent number: 11838486
Abstract: In one implementation, a method of performing perspective correction is performed at a head-mounted device including one or more processors, non-transitory memory, an image sensor, and a display. The method includes capturing, using the image sensor, a plurality of images of a scene from a respective plurality of perspectives. The method includes capturing, using the image sensor, a current image of the scene from a current perspective. The method includes obtaining a depth map of the current image of the scene. The method includes transforming, using the one or more processors, the current image of the scene based on the depth map, a difference between the current perspective of the image sensor and a current perspective of a user, and at least one of the plurality of images of the scene from the respective plurality of perspectives. The method includes displaying, on the display, the transformed image.
Type: Grant
Filed: July 13, 2020
Date of Patent: December 5, 2023
Assignee: APPLE INC.
Inventors: Samer Samir Barakat, Bertrand Nepveu, Vincent Chapdelaine-Couture
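The transformation step (warping the current image using the depth map and the sensor-to-eye perspective difference) can be sketched as classic depth-based reprojection: unproject each pixel with its depth, apply the relative pose, and reproject. The intrinsics, the single-pixel treatment, and the pure-translation offset below are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def reproject(u, v, depth, K, T_rel):
    """Warp one pixel from the sensor's perspective to the user's perspective:
    unproject with depth, apply the relative 4x4 transform, reproject."""
    p = np.linalg.inv(K) @ np.array([u, v, 1.0]) * depth   # 3D point, sensor frame
    p2 = (T_rel @ np.append(p, 1.0))[:3]                   # 3D point, eye frame
    q = K @ p2                                             # back to pixel coords
    return q[:2] / q[2]

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
T[0, 3] = 0.1   # assume a 0.1 m sideways offset between sensor and eye
u2, v2 = reproject(320.0, 240.0, 2.0, K, T)   # principal point, 2 m away
```

Note how nearer pixels shift more than farther ones under the same offset, which is exactly why the depth map is required.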
-
Patent number: 11830148
Abstract: A mixed reality (MR) simulation system includes a console and a head mounted device (HMD). The MR system captures stereoscopic images from a real-world environment using outward-facing stereoscopic cameras mounted to the HMD. The MR system preprocesses the stereoscopic images to maximize contrast and then extracts a set of features from those images, including edges or corners, among others. For each feature, the MR system generates one or more two-dimensional (2D) polylines. Then, the MR system triangulates between 2D polylines found in right side images and corresponding 2D polylines found in left side images to generate a set of 3D polylines. The MR system interpolates between 3D vertices included in the 3D polylines or extrapolates additional 3D vertices, thereby generating a geometric reconstruction of the real-world environment. The MR system may map textures derived from the real-world environment onto the geometric representation faster than the geometric reconstruction is updated.
Type: Grant
Filed: July 30, 2020
Date of Patent: November 28, 2023
Assignee: Meta Platforms, Inc.
Inventors: James Allan Booth, Gaurav Chaurasia, Alexandru-Eugen Ichim, Alex Locher, Gioacchino Noris, Alexander Sorkine Hornung, Manuel Werlberger
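Triangulating a matched vertex between left and right polylines reduces, for a rectified stereo pair, to the textbook disparity relation Z = f*B/d. This is a generic stereo sketch under assumed intrinsics and baseline, not the patent's specific polyline-matching pipeline:

```python
def triangulate_vertex(xl, xr, y, f, baseline):
    """3D position of a matched polyline vertex from a rectified stereo pair.
    xl, xr: horizontal pixel coordinates in left/right images (relative to
    the principal point); f: focal length in pixels; baseline in meters."""
    disparity = xl - xr
    Z = f * baseline / disparity   # depth from disparity
    X = xl * Z / f                 # back-project to a 3D vertex
    Y = y * Z / f
    return (X, Y, Z)

# f = 500 px, 0.1 m baseline, 10 px disparity -> 5 m depth
vertex = triangulate_vertex(50.0, 40.0, 0.0, 500.0, 0.1)
```

Chaining this over every matched vertex pair turns a 2D polyline correspondence into one 3D polyline.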
-
Patent number: 11816782
Abstract: Systems can identify visible surfaces for pixels in an image (portion) to be rendered. A sampling pattern of ray directions is applied to the pixels, so that the sampling pattern of ray directions repeats, and with respect to any pixel, the same ray direction can be found in the same relative position, with respect to that pixel, as for other pixels. Rays are emitted from visible surfaces in the respective ray direction supplied from the sampling pattern. Ray intersections can cause shaders to execute and contribute results to a sample buffer. With respect to shading of a given pixel, ray results from a selected subset of the pixels are used; the subset is selected by identifying a set of pixels, collectively from which rays were traced for the ray directions in the pattern, and requiring that surfaces from which rays were traced for those pixels satisfy a similarity criterion.
Type: Grant
Filed: March 2, 2022
Date of Patent: November 14, 2023
Assignee: Imagination Technologies Limited
Inventors: Gareth Morgan, Luke T. Peterson
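The repeating sampling pattern amounts to tiling a small table of ray directions over the image, so each pixel's direction is a modular lookup. A minimal sketch (the 2x2 pattern size and the string labels standing in for unit vectors are assumptions for illustration):

```python
def ray_direction(px, py, pattern):
    """Look up the ray direction for pixel (px, py) from a tiled sampling
    pattern: the pattern repeats across the image, so the same direction
    recurs in the same relative position within every tile of pixels."""
    h = len(pattern)
    w = len(pattern[0])
    return pattern[py % h][px % w]

# a toy 2x2 pattern of direction labels (a real pattern holds unit vectors)
pattern = [["d0", "d1"],
           ["d2", "d3"]]
```

Because the tiling is periodic, a shader gathering ray results for a pixel knows exactly which neighboring pixels traced each of the other directions in the pattern.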
-
Patent number: 11804011
Abstract: Disclosed is a method and apparatus for enabling interactive visualization of three-dimensional volumetric models. The method involves maintaining three-dimensional volumetric models represented by explicit surfaces. In accordance with an embodiment of the disclosure, the method also involves, for a current point of view, generating and displaying images of the volumetric models in a manner that clarifies internal structures by accounting for light attenuation inside the volumetric models as a function of spatial positions of the explicit surfaces. The method also involves, upon receiving user input that adjusts a display variable, repeating the generating and the displaying of the images in accordance with the display variable that has been adjusted, thereby enabling interactive visualization of the volumetric models while simultaneously clarifying the internal structures by accounting for the light attenuation inside the volumetric models.
Type: Grant
Filed: September 15, 2021
Date of Patent: October 31, 2023
Assignee: LlamaZOO Interactive Inc.
Inventors: Charles Lavigne, Li Jl
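Light attenuation inside a volume is commonly modeled with the Beer-Lambert law: the fraction of light surviving a path of length d through a medium with absorption coefficient sigma is exp(-sigma*d). This is the standard attenuation model, offered as background; the patent's exact formulation may differ:

```python
import math

def transmittance(sigma, distance):
    """Beer-Lambert attenuation: fraction of light surviving a path of the
    given length through a medium with absorption coefficient sigma."""
    return math.exp(-sigma * distance)

# a medium with sigma = ln(2) halves the light per unit distance
half = transmittance(math.log(2.0), 1.0)
```

In a renderer, the path length between the explicit surfaces bounding the volume (entry and exit points along the view ray) supplies the distance, which is how surface positions drive the perceived depth of internal structures.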
-
Patent number: 11777616
Abstract: A method and arrangement for testing wireless connections is provided. The method comprises obtaining (500) a three-dimensional model of a given environment; obtaining (502) ray tracing calculations describing propagation of radio frequency signals in the given environment; locating (504) one or more devices in the given environment; determining (506), utilising ray tracing calculations, the radio frequency signal properties of one or more devices communicating with the device under test; transmitting (508) control information to the radio frequency controller unit for updating the connections between one or more devices and a set of antenna elements to match the determined properties; obtaining (510) information on the location and propagation environment of the one or more devices; and updating (512) the radio frequency signal properties of the one or more devices if the location or propagation environment changes.
Type: Grant
Filed: December 13, 2022
Date of Patent: October 3, 2023
Assignee: Nokia Solutions and Networks Oy
Inventors: Juha Hannula, Marko Koskinen, Petri Koivukangas, Iikka Finning
-
Patent number: 11770495
Abstract: Systems and methods for generating a virtual view of a virtual camera based on an input image are described. A system for generating a virtual view of a virtual camera based on an input image can include a capturing device including a physical camera and a depth sensor. The system also includes a controller configured to determine an actual pose of the capturing device; determine a desired pose of the virtual camera for showing the virtual view; define an epipolar geometry between the actual pose of the capturing device and the desired pose of the virtual camera; and generate a virtual image depicting objects within the input image according to the desired pose of the virtual camera for the virtual camera based on an epipolar relation between the actual pose of the capturing device, the input image, and the desired pose of the virtual camera.
Type: Grant
Filed: August 13, 2021
Date of Patent: September 26, 2023
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Michael Slutsky, Albert Shalumov
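The epipolar geometry between the capturing device's pose and the virtual camera's pose is conventionally captured by the essential matrix E = [t]x R, which relates normalized image points in the two views via x2^T E x1 = 0. A sketch of that relation under an assumed pure-translation pose difference (the poses and the 3D test point are illustrative, not from the patent):

```python
import numpy as np

def essential(R, t):
    """Essential matrix E = [t]_x R relating two calibrated camera poses."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])   # cross-product (skew) matrix of t
    return tx @ R

# virtual camera translated along +x relative to the physical camera
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
E = essential(R, t)

# a 3D point and its normalized projections in both views
X = np.array([0.3, -0.2, 4.0])
x1 = X / X[2]            # physical camera at the origin
X2 = X - t               # same point expressed in the virtual camera's frame
x2 = X2 / X2[2]
residual = float(x2 @ E @ x1)   # epipolar constraint: should vanish
```

The constraint restricts each input pixel's match in the virtual view to a single epipolar line, which (together with depth) is what makes the virtual-image synthesis well-posed.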
-
Patent number: 11744652
Abstract: Various embodiments of an apparatus, methods, systems and computer program products described herein are directed to a Field Visualization Engine. The Field Visualization Engine tracks one or more collimator poses relative to one or more Augmented Reality (AR) headset device poses. Each respective collimator pose and each respective headset device pose corresponds to a three-dimensional (3D) unified coordinate space ("3D space"). The Field Visualization Engine generates an AR representation of a beam emanating from the collimator based at least on a current collimator pose and a current headset device pose. The Field Visualization Engine further generates an AR visualization of emanation of the beam throughout an AR display of medical data.
Type: Grant
Filed: July 22, 2022
Date of Patent: September 5, 2023
Assignee: Medivis, Inc.
Inventors: Long Qian, Christopher Morley, Osamah Choudhry