Patents Examined by Xiao M. Wu
-
Patent number: 11935197
Abstract: An AR system that leverages a pre-generated 3D model of the world to improve rendering of 3D graphics content for AR views of a scene, for example an AR view of the world in front of a moving vehicle. By leveraging the pre-generated 3D model, the AR system may use a variety of techniques to enhance the rendering capabilities of the system. The AR system may obtain pre-generated 3D data (e.g., 3D tiles) from a remote source (e.g., cloud-based storage), and may use this pre-generated 3D data (e.g., a combination of 3D mesh, textures, and other geometry information) to augment local data (e.g., a point cloud of data collected by vehicle sensors) to determine much more information about a scene, including information about occluded or distant regions of the scene, than is available from the local data.
Type: Grant
Filed: February 12, 2021
Date of Patent: March 19, 2024
Assignee: Apple Inc.
Inventors: Patrick S. Piemonte, Daniel De Rocha Rosario, Jason D. Gosnell, Peter Meier
-
Patent number: 11908107
Abstract: Provided are a method and apparatus for presenting an image for a VR device, a device, and a non-transitory computer-readable storage medium. The method for presenting an image for a VR device includes the steps of: acquiring input image data; establishing a correspondence between the input image data and output image data in a polar coordinate system, the polar coordinate system having a pole which is the intersection of the center of the field of view of the VR device and the display plane; obtaining image data to be presented according to the output image data; and presenting the image data to be presented on the basis of a convex lens.
Type: Grant
Filed: November 13, 2019
Date of Patent: February 20, 2024
Assignee: SANECHIPS TECHNOLOGY CO., LTD.
Inventors: Dehui Kong, Ke Xu, Wei Yang, Jing You, Xin Liu, Jisong Ai
-
Patent number: 11893692
Abstract: Disclosed is a graphics system and associated methodologies for selectively increasing the level-of-detail at specific parts of a mesh model based on a point cloud that provides a higher-detailed representation of the same or similar three-dimensional ("3D") object. The graphics system receives the mesh model and the point cloud of the 3D object. The graphics system determines a region-of-interest of the 3D object based in part on differences amongst points that represent part or all of the region-of-interest. The graphics system reconstructs the region-of-interest in the mesh model and generates a modified mesh model by modifying a first set of meshes representing the region-of-interest in the mesh model to a second set of meshes based on the positional elements of the point cloud points. The second set of meshes has more meshes and represents the region-of-interest at a higher level-of-detail than the first set of meshes.
Type: Grant
Filed: May 31, 2023
Date of Patent: February 6, 2024
Assignee: Illuscio, Inc.
Inventor: DonEliezer Baize
-
Patent number: 11861786
Abstract: A rendering system combines point sampling and volume sampling operations to produce rendering outputs. For example, to determine color information for a surface location in a 3-D scene, one or more point sampling operations are conducted in a volume around the surface location, and one or more sampling operations of volumetric light transport data are performed farther from the surface location. A transition zone between point sampling and volume sampling can be provided, in which both point and volume sampling operations are conducted. Data obtained from point and volume sampling operations can be blended in determining color information for the surface location. For example, point samples are obtained by tracing a ray for each point sample to identify an intersection, to be shaded, between another surface and the ray, and volume samples are obtained from nested 3-D grids of volume elements expressing light transport data at different levels of granularity.
Type: Grant
Filed: December 14, 2022
Date of Patent: January 2, 2024
Assignee: Imagination Technologies Limited
Inventors: Cuneyt Ozdas, Luke Tilman Peterson
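The transition zone described in this abstract can be illustrated with a minimal sketch: close to the shaded surface only point samples are used, far away only volume samples, and in between the two are blended. The linear blend and the `near`/`far` parameter names are illustrative assumptions, not the patented implementation:

```python
def blended_radiance(dist, point_sample, volume_sample, near, far):
    """Blend a point-sampled and a volume-sampled color value by distance
    from the shaded surface location. Inside `near`, only point sampling
    contributes; beyond `far`, only volume sampling; in the transition
    zone the two are linearly blended."""
    if dist <= near:
        return point_sample
    if dist >= far:
        return volume_sample
    t = (dist - near) / (far - near)  # 0 at the near edge, 1 at the far edge
    return (1.0 - t) * point_sample + t * volume_sample
```

A real renderer would blend full spectral or RGB radiance values per sample rather than scalars, but the falloff logic is the same.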
-
Patent number: 11861787
Abstract: Aspects relate to tracing rays in 3-D scenes that comprise objects that are defined by or with implicit geometry. In an example, a trapping element defines a portion of 3-D space in which implicit geometry exists. When a ray is found to intersect a trapping element, a trapping element procedure is executed. The trapping element procedure may comprise marching a ray through a 3-D volume and evaluating a function that defines the implicit geometry for each current 3-D position of the ray. An intersection detected with the implicit geometry may be found concurrently with intersections for the same ray with explicitly-defined geometry, and data describing these intersections may be stored with the ray and resolved.
Type: Grant
Filed: December 14, 2022
Date of Patent: January 2, 2024
Assignee: Imagination Technologies Limited
Inventors: Cuneyt Ozdas, Luke Tilman Peterson, Steven Blackmon, Steven John Clohset
-
Patent number: 11854140
Abstract: The present disclosure describes a system for fast generation of ray traced reflections of virtually augmented objects into a real-world image, specifically on reflective surfaces. The system utilizes a standard raster graphics pipeline.
Type: Grant
Filed: September 23, 2022
Date of Patent: December 26, 2023
Assignee: Snap Inc.
Inventors: Reuven Bakalash, Elad Haviv
-
Patent number: 11830121
Abstract: In some embodiments, the dynamic animation generation system can provide a deep learning framework to produce a large variety of martial arts movements in a controllable manner from unstructured motion capture data. The system can imitate animation layering using neural networks, with the aim of overcoming challenges when mixing, blending, and editing movements from unaligned motion sources. The system can synthesize movements from given reference motions and simple user controls, generating not only unseen sequences of locomotion but also reconstructing signature motions of different fighters. To achieve this task, the dynamic animation generation system can adopt a modular framework composed of a motion generator, which maps the trajectories of a number of key joints and the root trajectory to the full-body motion, and a set of different control modules that map the user inputs to such trajectories.
Type: Grant
Filed: July 1, 2021
Date of Patent: November 28, 2023
Assignee: ELECTRONIC ARTS INC.
Inventors: Wolfram Sebastian Starke, Yiwei Zhao, Mohsen Sardari, Harold Henry Chaput, Navid Aghdaie
-
Patent number: 11805236
Abstract: A computer system generates stereo image data from monocular images. The system generates depth maps for single images using a monocular depth estimation method. The system converts the depth maps to disparity maps and uses the disparity maps to generate additional images forming stereo pairs with the monocular images. The stereo pairs can be used to form a stereo image training data set for training various models, including depth estimation models or stereo matching models.
Type: Grant
Filed: May 11, 2021
Date of Patent: October 31, 2023
Assignee: NIANTIC, INC.
Inventors: James Watson, Oisin MacAodha, Daniyar Turmukhambetov, Gabriel J. Brostow, Michael David Firman
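The depth-to-disparity conversion in this abstract follows the standard pinhole-stereo relation, disparity = focal length × baseline / depth. The forward-warping sketch below is an assumed, simplified illustration of how the second view of a stereo pair could be fabricated from a single image and its disparity map; it is not the assignee's implementation, and real systems must also fill the occlusion holes the warp leaves behind:

```python
import numpy as np

def depth_to_disparity(depth, focal_px, baseline_m):
    """Pinhole-stereo relation: disparity (in pixels) = f * B / Z."""
    return focal_px * baseline_m / np.maximum(depth, 1e-6)

def synthesize_right_view(left, disparity):
    """Forward-warp each left-image pixel horizontally by its (rounded)
    disparity to fabricate the other half of a stereo pair. Pixels with
    no source remain zero (holes) and would be inpainted in practice."""
    h, w = left.shape[:2]
    right = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xr = x - int(round(disparity[y, x]))
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
    return right
```

For example, a fronto-parallel plane at depth 2 m seen by a 4-pixel focal length camera with a 1 m baseline shifts uniformly by 2 pixels.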
-
Patent number: 11803981
Abstract: Various systems and methods for modeling a scene. A device for modeling a scene includes a hardware interface to obtain a time-ordered sequence of images representative of a scene, the time-ordered sequence including a plurality of images, one of the sequence of images being a current image, the scene captured by a monocular imaging system; and processing circuitry to: provide a data set to an artificial neural network (ANN) to produce a three-dimensional structure of the scene, the data set including: a portion of the sequence of images, the portion of the sequence of images including the current image; and motion of a sensor that captured the sequence of images; and model the scene using the three-dimensional structure of the scene, wherein the three-dimensional structure is determined for both moving and fixed objects in the scene.
Type: Grant
Filed: May 29, 2020
Date of Patent: October 31, 2023
Assignee: Mobileye Vision Technologies Ltd.
Inventors: Gideon Stein, Itay Blumenthal, Nadav Shaag, Jeffrey Moskowitz, Natalie Carlebach
-
Patent number: 11798127
Abstract: One or more cameras capture objects at a higher resolution than the human eye can perceive. Objects are segmented from the background of the image and scaled to human-perceptible size. The scaled-up objects are superimposed over the unscaled background. This is presented to a user via a display, whereby the process selectively amplifies the size of the objects' spatially registered retinal projection while maintaining a natural (unmodified) view in the remainder of the visual field.
Type: Grant
Filed: June 29, 2022
Date of Patent: October 24, 2023
Assignee: University of Central Florida Research Foundation, Inc.
Inventors: Gerd Bruder, Gregory Welch, Kangsoo Kim, Zubin Choudhary
-
Patent number: 11776147
Abstract: Systems, methods, and apparatuses described herein may provide image processing, including displaying, by a mobile device, an image of an object located perpendicular to a reference object; calculating, based on at least one depth measurement determined using a depth sensor in the mobile device, the predicted height of the mobile device when the image was captured; calculating scale data for the image based on the predicted height; determining a reference line identifying the location of the object and the reference object in the image; segmenting pixels in the object in the image from pixels in the image outside the object; measuring the object based on the pixels in the object and the scale data; and generating model data comprising the object, the scale data, and the measurements.
Type: Grant
Filed: May 28, 2021
Date of Patent: October 3, 2023
Assignee: NIKE, Inc.
Inventors: David Alexander Bleicher, Dror Ben-Eliezer, Shachar Ilan, Tamir Lousky, Natanel Davidovits, Daniel Mashala, Eli Baram, Yossi Elkrief, Ofir Ron
-
Patent number: 11762199
Abstract: A method and system for image display with a head-mounted device, which use a see-through tunable diffractive mirror, such as a see-through tunable holographic mirror or see-through tunable LCD array mirror, which is useful in providing augmented reality.
Type: Grant
Filed: November 10, 2020
Date of Patent: September 19, 2023
Assignee: Essilor International
Inventors: Aude Bouchier, Jean-Paul Cano, Samuel Archambeau
-
Patent number: 11756169
Abstract: A method for error handling in a geometric correction engine (GCE) is provided that includes receiving configuration parameters by the GCE; generating, by the GCE in accordance with the configuration parameters, output blocks of an output frame based on corresponding blocks of an input frame; detecting, by the GCE, a run-time error during the generating; and reporting, by the GCE, an event corresponding to the run-time error.
Type: Grant
Filed: December 11, 2020
Date of Patent: September 12, 2023
Assignee: Texas Instruments Incorporated
Inventors: Gang Hua, Rajasekhar Reddy Allu, Niraj Nandan, Mihir Narendra Mody
-
Patent number: 11748930
Abstract: Disclosed is a rigging system for animating the detached and non-uniformly distributed data points of a point cloud. In response to a selection of a region of space in which a first set of data points are located, the system may identify commonality in the positional or non-positional elements of a first subset of the first set of data points, and may determine that a second subset of the first set of data points lacks the commonality. The system may refine the first set of data points to a second set of data points that includes the first subset of data points and excludes the second subset of data points. The system may link the second set of data points to a bone of a skeletal framework, and may animate the second set of data points based on an animation that is defined for the bone.
Type: Grant
Filed: January 16, 2023
Date of Patent: September 5, 2023
Assignee: Illuscio, Inc.
Inventor: Max Good
-
Patent number: 11734830
Abstract: A computer-implemented method of using augmented reality (AR) to detect a plane and a spatial configuration of an object, the method comprising the steps of: detecting one or more edges of the object; identifying one or more lines of the object; filtering the one or more lines of the object; iteratively approximating one or more distributions of the one or more lines; and determining boundaries of the object.
Type: Grant
Filed: July 4, 2021
Date of Patent: August 22, 2023
Assignee: SKETCHAR, VAB
Inventors: Andrey Drobitko, Aleksandr Danilin, Mikhail Kopeliovich, Mikhail Petrushan
-
Patent number: 11727648
Abstract: According to various implementations, a method is performed at a first electronic device with a non-transitory memory and one or more processors. The method includes determining a reference location in a three-dimensional space based on a feature. The feature is generated by a second electronic device. The method includes obtaining, for the reference location, first reference coordinates in an augmented reality coordinate system of the first electronic device and second reference coordinates in an augmented reality coordinate system of the second electronic device. The method includes determining a coordinate transformation based on a function of the first reference coordinates and the second reference coordinates. The method includes synchronizing an augmented reality coordinate system of the first electronic device with an augmented reality coordinate system of the second electronic device using the coordinate transformation.
Type: Grant
Filed: August 31, 2020
Date of Patent: August 15, 2023
Inventors: Michael E. Buerli, Andreas Moeller, Michael Kuhn
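The coordinate-synchronization step can be sketched in its simplest form. The sketch below assumes the two devices' AR frames differ only by a translation, so a single shared reference point fully determines the transform; handling a rotation between the frames would require additional reference data (e.g., a second reference point or a shared heading). The function names are hypothetical, not from the patent:

```python
import numpy as np

def coordinate_offset(ref_in_a, ref_in_b):
    """Given one shared reference location expressed in both devices'
    AR coordinate systems, return the translation that maps frame-B
    coordinates into frame A (axes assumed already aligned)."""
    return np.asarray(ref_in_a, dtype=float) - np.asarray(ref_in_b, dtype=float)

def to_frame_a(point_in_b, offset):
    """Map a point from device B's AR frame into device A's AR frame."""
    return np.asarray(point_in_b, dtype=float) + offset
```

With rotation included, the transform would instead be a full rigid motion (rotation matrix plus translation), estimated from several correspondences.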
-
Patent number: 11727628
Abstract: A method of rendering an object is provided. The method comprises: encoding a feature vector to each point in a point cloud for an object, wherein the feature vector comprises an alpha matte; projecting each point in the point cloud and the corresponding feature vector to a target view to compute a feature map; and using a neural rendering network to decode the feature map into an RGB image and the alpha matte and to update the feature vector.
Type: Grant
Filed: November 4, 2022
Date of Patent: August 15, 2023
Assignee: ShanghaiTech University
Inventors: Cen Wang, Jingyi Yu
-
Patent number: 11704842
Abstract: Certain embodiments involve a graphics manipulation application using brushstroke parameters that include a maximum alpha-deposition parameter and a fractional alpha-deposition parameter. For instance, the graphics manipulation application uses an alpha flow increment computed from the maximum alpha-deposition parameter and the fractional alpha-deposition parameter to compute an output canvas color. In some embodiments, if the current canvas opacity exceeds or equals the maximum alpha-deposition parameter, the current canvas opacity is selected as the output canvas opacity. Otherwise, the graphics manipulation application computes the output canvas opacity by increasing the current canvas opacity based on the alpha flow increment. The graphics manipulation application updates a canvas portion affected by a brushstroke input to include the output canvas opacity and the output canvas color.
Type: Grant
Filed: December 4, 2020
Date of Patent: July 18, 2023
Assignee: ADOBE INC.
Inventors: Byungmoon Kim, Jinyi Kwon
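A minimal sketch of the opacity-update rule the abstract describes: opacity already at or above the cap passes through unchanged; otherwise it grows by the alpha flow increment and is clamped at the cap. The abstract does not give the increment formula, so taking it as fraction × maximum is an assumption for illustration:

```python
def deposit_alpha(current_alpha, max_alpha, fraction):
    """One brushstroke deposition step. If the current canvas opacity
    exceeds or equals the maximum alpha-deposition parameter, it is kept
    as the output opacity; otherwise it is increased by the alpha flow
    increment and clamped to the maximum."""
    if current_alpha >= max_alpha:
        return current_alpha
    increment = fraction * max_alpha  # assumed form of the flow increment
    return min(current_alpha + increment, max_alpha)
```

This kind of capped deposition lets repeated strokes build up paint without ever pushing a region past the brush's configured maximum opacity.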
-
Patent number: 11699242
Abstract: A system and method for hologram synthesis and processing capable of synthesizing holographic 3D data and displaying (or reconstructing) a full 3D image at high speed using a deep learning engine. The system synthesizes or generates a digital hologram from a light field refocus image input using the deep learning engine. That is, RGB-depth map data is acquired at high speed using the deep learning engine, such as a convolutional neural network (CNN), from real 360° multi-view color image information, and the RGB-depth map data is used to produce hologram content. In addition, the system interlocks hologram data with user voice recognition and gesture recognition information to display the hologram data at a wide viewing angle and enables interaction with the user.
Type: Grant
Filed: February 19, 2021
Date of Patent: July 11, 2023
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Min Sung Yoon, Byung Gyu Chae
-
Patent number: 11699264
Abstract: A method, a system, and a computing device for reconstructing three-dimensional planes are provided. The method includes the following steps: obtaining a series of color information, depth information, and pose information of a dynamic scene by a sensing device; extracting a plurality of feature points according to the color information and the depth information, and marking part of the feature points as non-planar objects, including dynamic objects and fragmentary objects; computing a point cloud according to the unmarked feature points and the pose information, and instantly converting the point cloud to a three-dimensional mesh; and growing the three-dimensional mesh to fill the vacancies corresponding to the non-planar objects according to the information of the three-dimensional mesh surrounding or adjacent to the non-planar objects.
Type: Grant
Filed: May 22, 2020
Date of Patent: July 11, 2023
Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventor: Te-Mei Wang