Patents Examined by Kee M. Tung
-
Patent number: 12106443
Abstract: Techniques for responsive video canvas generation are described to impart three-dimensional effects based on scene geometry to two-dimensional digital objects in a two-dimensional design environment. A responsive video canvas, for instance, is generated from input data including a digital video and scene data. The scene data describes a three-dimensional representation of an environment and includes a plurality of planes. A visual transform is generated and associated with each plane to enable digital objects to interact with the underlying scene geometry. In the responsive video canvas, an edit positioning a two-dimensional digital object with respect to a particular plane of the responsive video canvas is received. A visual transform associated with the particular plane is applied to the digital object and is operable to align the digital object to the depth and orientation of the particular plane. Accordingly, the digital object includes visual features based on the three-dimensional representation.
Type: Grant
Filed: February 23, 2022
Date of Patent: October 1, 2024
Assignee: Adobe Inc.
Inventors: Cuong D. Nguyen, Valerie Lina Head, Talin Chris Wadsworth, Stephen Joseph DiVerdi, Paul John Asente
-
Patent number: 12100072
Abstract: A tinting material may be generated and applied to the backdrop of an application presented on a user interface. The user interface is presented on a display and includes a background. The background comprises a first color, which includes a luminosity value, a hue value, and a saturation value. A tint color, having a luminosity value, a hue value, a saturation value, and an opacity value, is received. The luminosity value of the first color is modified to generate a second color, and the hue value and saturation value of the generated second color are modified to generate a third color. A tinting material color is generated that includes the modified luminosity value of the second color, the modified hue value of the third color, and the modified saturation value of the third color, and an application is presented on the user interface that includes the tinting material color.
Type: Grant
Filed: May 28, 2022
Date of Patent: September 24, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Spencer Israel Antonin Nataraja Hurd, Chigusa Yasuda Sansen, Christopher Nathaniel Raubacher, Simone Magurno, Jeremy Scott Knudsen
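The two-stage color pipeline in this abstract (modify luminosity first, then hue and saturation) can be sketched in a few lines. This is a hypothetical illustration only: the HSL tuple representation, the opacity-weighted linear blends, and the `make_tint_material` name are assumptions, not Microsoft's implementation.

```python
def make_tint_material(background, tint, opacity=0.5):
    """background: (hue, saturation, luminosity) of the first color, in [0, 1].
    tint: (hue, saturation, luminosity) of the received tint color.
    opacity: the tint color's opacity value, used here as a blend weight."""
    bg_h, bg_s, bg_l = background
    t_h, t_s, t_l = tint
    # Step 1: modify the background's luminosity to generate a second color.
    l2 = bg_l + (t_l - bg_l) * opacity
    # Step 2: modify the hue and saturation of the second color to generate
    # a third color (hue taken from the tint, saturation blended).
    h3 = t_h
    s3 = bg_s + (t_s - bg_s) * opacity
    # Step 3: the tinting material color combines the modified luminosity of
    # the second color with the modified hue/saturation of the third color.
    return (h3, s3, l2)
```

The exact blend functions in the patent are not given in the abstract; any monotonic modification of luminosity, hue, and saturation would fit the same three-step structure.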
-
Patent number: 12094083
Abstract: A holographic aberration correction method and apparatus are provided. The holographic aberration correction method includes: generating a plurality of sub-holograms in a hologram, as a matrix; calculating a plurality of eigenmodes and an eigenvalue and a weight corresponding to each of the eigenmodes by performing singular value decomposition on the matrix; selecting a predefined number of eigenmodes in order of decreasing eigenvalue; calculating a plurality of first results by multiplying a plurality of identical images by the respective weights corresponding to the plurality of selected eigenmodes; calculating a plurality of second results by performing convolution of the plurality of first results with the plurality of selected eigenmodes, respectively; and generating an aberration-corrected hologram by adding the plurality of second results.
Type: Grant
Filed: December 30, 2021
Date of Patent: September 17, 2024
Assignee: SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
Inventors: Byoungho Lee, Seung-Woo Nam, Juhyun Lee, Siwoo Lee
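The SVD pipeline in this abstract — decompose a sub-hologram matrix, keep the dominant eigenmodes, weight copies of an image, convolve with each mode, and sum — can be sketched with NumPy. Everything concrete here is an assumption: the matrix layout (one flattened sub-hologram per row), using singular values as the weights, and FFT-based circular convolution; the patent's actual weights and convolution are not specified in the abstract.

```python
import numpy as np

def correct_aberration(subholograms, image, k=3):
    """subholograms: (n, h*w) matrix, one flattened sub-hologram per row.
    image: (h, w) array (the 'identical image' multiplied per mode).
    Returns a complex (h, w) aberration-corrected field."""
    # SVD of the sub-hologram matrix; rows of Vt are the eigenmodes,
    # already ordered by decreasing singular value.
    U, s, Vt = np.linalg.svd(subholograms, full_matrices=False)
    h, w = image.shape
    corrected = np.zeros((h, w), dtype=complex)
    for i in range(min(k, len(s))):      # keep the k largest eigenmodes
        mode = Vt[i].reshape(h, w)       # eigenmode reshaped to image space
        weighted = image * s[i]          # first result: image times weight
        # second result: circular convolution of weighted image and mode,
        # done in the frequency domain
        corrected += np.fft.ifft2(np.fft.fft2(weighted) * np.fft.fft2(mode))
    return corrected                     # sum over modes = corrected hologram
```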
-
Patent number: 12086955
Abstract: A platform, web application, system, and methods are disclosed for creating digital 3D scenes having digital 3D objects and for creating images of the digital 3D scenes. The platform can be communicatively coupled with the web application. The platform can apply a multi-stage method to convert high-fidelity digital 3D objects into low-fidelity digital 3D objects, store a mapping between them, and transmit low-fidelity digital 3D objects to the web application. The web application can include user interfaces for manipulating low-fidelity digital 3D objects to create low-fidelity digital 3D scenes. The platform can automatically convert low-fidelity digital 3D scenes received from the web application into high-fidelity digital 3D scenes and create high-fidelity 2D images.
Type: Grant
Filed: March 15, 2022
Date of Patent: September 10, 2024
Assignee: Target Brands, Inc.
Inventors: Godha Narayana, Madanmohan Atmakuri, Gopikrishna Chaganti, Anjali Unni, Balraj Govindasamy, Steve Samuel
-
Patent number: 12067675
Abstract: A computer-implemented method for autonomous reconstruction of vessels on computed tomography images, includes: providing a reconstruction convolutional neural network (CNN); receiving an input 3D model of a vessel to be reconstructed; defining a region of interest (ROI) and a movement step, wherein the ROI is a 3D volume that covers an area to be processed; defining a starting position and positioning the ROI at the starting position; reconstructing a shape of the input 3D model within the ROI by inputting the fragment of the input 3D model within the ROI to the reconstruction convolutional neural network (CNN) and receiving the reconstructed 3D model fragment; moving the ROI by the movement step along a scanning path; repeating the reconstruction and moving steps to reconstruct a desired portion of the input 3D model at consecutive ROI positions; and combining the reconstructed 3D model fragments.
Type: Grant
Filed: March 27, 2022
Date of Patent: August 20, 2024
Assignee: KARDIOLYTICS INC.
Inventors: Kris Siemionow, Paul Lewicki, Marek Kraft, Dominik Pieczynski, Michal Mikolajczak, Jacek Kania
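The sliding-ROI loop in this abstract (position ROI, reconstruct the fragment inside it, step along the scanning path, combine fragments) has a simple skeleton. In this sketch the "scanning path" is a straight pass along the z axis, `reconstruct_fn` stands in for the patent's CNN, and fragments are combined by element-wise maximum — all three are assumptions for illustration.

```python
import numpy as np

def reconstruct_vessel(volume, roi_size, step, reconstruct_fn):
    """volume: (z, y, x) array holding the input 3D model.
    roi_size: extent of the cubic ROI along z.  step: the movement step.
    reconstruct_fn: any (fragment)->(fragment) callable, standing in for
    the reconstruction CNN."""
    out = np.zeros_like(volume, dtype=float)
    z = 0  # starting position of the ROI
    while z + roi_size <= volume.shape[0]:
        frag = volume[z:z + roi_size]              # fragment inside the ROI
        # combine overlapping reconstructed fragments by element-wise max
        out[z:z + roi_size] = np.maximum(out[z:z + roi_size],
                                         reconstruct_fn(frag))
        z += step                                  # move ROI by the step
    return out
```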
-
Patent number: 12062138
Abstract: A target detection method and apparatus are provided. A first image of a target scenario collected by an image sensor is analyzed to obtain one or more first 2D detection boxes of the target scenario, and a three-dimensional point cloud of the target scenario collected by a laser sensor is analyzed to obtain one or more second 2D detection boxes of the target scenario in one or more views (for example, a BEV and/or a PV). Then, comprehensive analysis is performed on a matching degree and confidence of the one or more first 2D detection boxes, and a matching degree and confidence of the one or more second 2D detection boxes, to obtain a 2D detection box of a target. Finally, a 3D model of the target is obtained based on a three-dimensional point corresponding to the 2D detection box of the target.
Type: Grant
Filed: November 11, 2022
Date of Patent: August 13, 2024
Assignee: Huawei Technologies Co., Ltd.
Inventor: Hongmin Li
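A minimal way to picture "comprehensive analysis of matching degree and confidence" is IoU matching between the camera-derived and lidar-derived 2D boxes, keeping agreeing pairs and fusing their confidences. This is a generic sensor-fusion sketch, not Huawei's method: the IoU threshold, the mean-of-confidences score, and the box format are all assumptions.

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def fuse_detections(cam_boxes, lidar_boxes, iou_thresh=0.5):
    """cam_boxes / lidar_boxes: lists of (box, confidence).
    Keep a box when a camera and a lidar detection agree (IoU above the
    threshold), scoring it with the mean of the two confidences."""
    fused = []
    for cb, cconf in cam_boxes:
        for lb, lconf in lidar_boxes:
            if iou(cb, lb) >= iou_thresh:
                fused.append((cb, (cconf + lconf) / 2))
    return fused
```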
-
Patent number: 12062130
Abstract: Systems and techniques are provided for performing video-based activity recognition. For example, a process can include generating a three-dimensional (3D) model of a first portion of an object based on one or more frames depicting the object. The process can also include generating a mask for the one or more frames, the mask including an indication of one or more regions of the object. The process can further include generating a 3D base model based on the 3D model of the first portion of the object and the mask, the 3D base model representing the first portion of the object and a second portion of the object. The process can include generating, based on the mask and the 3D base model, a 3D model of the second portion of the object.
Type: Grant
Filed: August 16, 2021
Date of Patent: August 13, 2024
Assignee: QUALCOMM Incorporated
Inventors: Yan Deng, Michel Adib Sarkis, Ning Bi, Chieh-Ming Kuo
-
Patent number: 12062234
Abstract: A client device that includes a camera and an extended reality client application program is employed by a user in a physical space, such as an industrial or campus environment. The user aims the camera within the mobile device at a real-world asset, such as a computer system, classroom, or vehicle. The client device acquires a digital image via the camera and detects textual and/or pictorial content included in the acquired image that corresponds to one or more anchors. The client device queries a data intake and query system for asset content associated with the detected anchors. Upon receiving the asset content from the data intake and query system, the client device generates visualizations of the asset content and presents the visualizations via a display device.
Type: Grant
Filed: October 24, 2022
Date of Patent: August 13, 2024
Assignee: SPLUNK INC.
Inventors: Devin Bhushan, Seunghee Han, Caelin Thomas Jackson-King, Jamie Kuppel, Stanislav Yazhenskikh, Jim Jiaming Zhu
-
Patent number: 12051168
Abstract: Systems and methods are provided that include a processor executing an avatar generation program to obtain driving view(s), calculate a skeletal pose of the user, and generate a coarse human mesh based on a template mesh and the skeletal pose of the user. The program further constructs a texture map based on the driving view(s) and the coarse human mesh, extracts a plurality of image features from the texture map, the image features being aligned to a UV map, and constructs a UV positional map based on the coarse human mesh. The program further extracts a plurality of pose features from the UV positional map, the pose features being aligned to the UV map, generates a plurality of pose-image features based on the UV map-aligned image features and UV map-aligned pose features, and renders an avatar based on the plurality of pose-image features.
Type: Grant
Filed: September 15, 2022
Date of Patent: July 30, 2024
Assignees: LEMON INC., BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
Inventors: Hongyi Xu, Tao Hu, Linjie Luo
-
Patent number: 12039654
Abstract: Systems and methods for super sampling and viewport shifting of non-real time 3D applications are disclosed. In one embodiment, a graphics processing unit includes a processing resource to execute graphics commands to provide graphics for an application, a capture tool to capture the graphics commands, and a data generator to generate a dataset including at least one frame based on the captured graphics commands and to modify viewport settings for each frame of interest to generate a conditioned dataset.
Type: Grant
Filed: October 15, 2021
Date of Patent: July 16, 2024
Assignee: Intel Corporation
Inventors: Joanna Douglas, Michal Taryma, Mario Garcia, Carlos Dominguez
-
Patent number: 12039688
Abstract: Systems, methods, and computer readable media for augmented reality beauty product tutorials. The methods include determining effects from live images of an augmented reality (AR) tutorial, the effects indicating changes to the live images of a presenter of the AR tutorial as a beauty product is applied to a body part of the presenter. The methods further comprise determining motion from the live images, the motion indicating motion of the beauty product as it is applied to the body part of the presenter, and storing the effects and the motion.
Type: Grant
Filed: May 3, 2023
Date of Patent: July 16, 2024
Assignee: Snap Inc.
Inventors: Christine Barron, Virginia Drummond, Jean Luo, Alek Matthiessen, Celia Nicole Mourkogiannis, Jonathan Solichin, Olesia Voronova
-
Patent number: 12033076
Abstract: The disclosure relates to a system for evaluating movement of a body of a user. The system may include a video display, one or more digital cameras, and a processor. The processor may control the one or more cameras to generate images of at least the part of the body over a period of time. The processor may estimate a position of a plurality of joints of the body. The processor may receive a selection of a tracked pose, and determine, from the plurality of joints, a set of joints associated with the tracked pose. The processor may generate at least one joint vector connecting joints in the set of joints, and assign, based on changes in the joint vector over the period of time, a form score to a performance of the tracked pose. The processor may then generate a user interface that depicts the form score.
Type: Grant
Filed: April 24, 2023
Date of Patent: July 9, 2024
Assignee: MirrorAR LLC
Inventors: Hemant Virkar, Leah R. Kaplan, Stephen Furlani, Jacob Borgman, Anil Bhave, Mihir Thakkar, Sunkist Mehta
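The joint-vector idea above can be made concrete with a toy scorer: build a vector between two tracked joints in each frame, then map the angular drift of that vector over time to a 0-100 form score. The linear penalty, the reference-frame baseline, and the 2D joints are illustrative assumptions, not MirrorAR's scoring.

```python
import math

def joint_vector(a, b):
    """Vector from joint a to joint b, each an (x, y) position."""
    return (b[0] - a[0], b[1] - a[1])

def form_score(frames, max_dev=math.pi / 2):
    """frames: per-frame (joint_a, joint_b) positions for one tracked pose.
    Score is 100 when the joint vector's angle never deviates from the
    first frame, falling linearly to 0 at max_dev radians of mean drift."""
    angles = [math.atan2(v[1], v[0])
              for v in (joint_vector(a, b) for a, b in frames)]
    ref = angles[0]                       # first frame sets the baseline
    dev = sum(abs(t - ref) for t in angles) / len(angles)
    return max(0.0, 100.0 * (1 - dev / max_dev))
```

A real system would smooth the joint estimates and compare against a reference performance rather than the first frame; the structure (joint vector, change over time, scalar score) is the same.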
-
Patent number: 12016745
Abstract: Embodiments are provided for digital dental modeling. One method embodiment includes receiving a three-dimensional data set including a first jaw and a second jaw of a three-dimensional digital dental model and receiving a two-dimensional data set corresponding to at least a portion of the first jaw and the second jaw. The method includes mapping two-dimensional data of the two-dimensional data set to the three-dimensional digital dental model by transforming a coordinate system of the two-dimensional data to a coordinate system of the three-dimensional data set. The method includes positioning the first jaw with respect to the second jaw based on the two-dimensional data mapped to the three-dimensional data set. The method includes using at least a portion of the two-dimensional data mapped to the three-dimensional data set as a target of movement of the first jaw with respect to the second jaw in the three-dimensional digital dental model.
Type: Grant
Filed: July 26, 2023
Date of Patent: June 25, 2024
Assignee: Align Technology, Inc.
Inventors: Anatoliy Boltunov, Yury Brailov, Fedor Chelnokov, Roman Roschin, David Mason
-
Patent number: 12018442
Abstract: A ground information detection method according to the present invention easily detects ground information about the uneven state of the ground. The method includes: a point group data acquisition step for acquiring point group data generated as three-dimensional coordinates for each point on the ground with laser light emitted from a three-dimensional scanning device installed at a known point; and a ground information detection step for detecting ground information about an uneven state of the ground using the point group data acquired at the point group data acquisition step.
Type: Grant
Filed: April 7, 2020
Date of Patent: June 25, 2024
Assignee: MR Support Inc.
Inventors: Shigeo Kusaki, Takamitsu Mori
-
Patent number: 12014470
Abstract: An object is to provide a model generation apparatus capable of generating a model for implementing a more precise simulation. Firstly, an object to be reconstructed on a 3D model is extracted from 3D image information, and an object model having a highest shape conformity degree with the object is acquired from among a plurality of object models available on the 3D model, and is associated with size information and disposed-place information of the object. Next, for each of the acquired object models, the extracted object model is edited so as to conform with the size information of the object. Then, the edited object model is disposed on the 3D model so that the object model satisfies a physical constraint on the 3D model and conforms with the disposed-place information.
Type: Grant
Filed: May 22, 2019
Date of Patent: June 18, 2024
Assignee: NEC CORPORATION
Inventor: Hisaya Wakayama
-
Patent number: 11989837
Abstract: A method of spawning a digital island in a three-dimensional environment is disclosed. Data describing a three-dimensional environment is accessed. The data is partitioned into a plurality of contexts based on properties identified in the data, the properties corresponding to surfaces or objects in the three-dimensional environment. One or more values of one or more traits corresponding to a context of the plurality of contexts are identified. A digital island is matched to the context. The matching includes analyzing one or more conditions associated with the digital island with respect to the one or more values of the one or more traits corresponding to the context. Based on the matching, the spawning of the digital island is performed in the three-dimensional environment for the context.
Type: Grant
Filed: June 1, 2021
Date of Patent: May 21, 2024
Assignee: Unity IPR ApS
Inventors: Stella Mamimi Cannefax, Andrew Peter Maneri, Amy Melody DiGiovanni
-
Patent number: 11980428
Abstract: A computer-implemented method for checking the correct alignment of a hip prosthesis, includes: detecting a 3D model of the pelvic bone of a patient in a preoperative phase, detecting at least one 2D image of the pelvic bone in post-implant situation, selecting an image of the 3D model according to a first inclination thereof and detecting a reference element on said selected image of the 3D model, identifying a plurality of reference points on the 2D image, superimposing the 2D image on said selected image of the 3D model, checking the correct superimposition and correspondence of the reference points of the 2D image with the reference element on said image of the 3D model, and detecting possible differences in the positioning of the pelvic bone in the post-implant configuration with respect to the preoperative situation.
Type: Grant
Filed: April 6, 2020
Date of Patent: May 14, 2024
Assignee: Medacta International SA
Inventors: Francesco Siccardi, Massimiliano Bernardoni, Daniele Ascani
-
Patent number: 11972535
Abstract: A computer-implemented method and a system are provided for visualising colocalised fluorescence signals. The method accesses signal intensity data obtained from a first fluorescence channel and a second fluorescence channel, in which the signal intensity data is associated with voxels in an image. A regression factor on the signal intensity data is calculated to generate a regression parameter corresponding to a degree of correlation between the signal intensity data obtained from the first and second fluorescence channels. The signal intensity data is mapped to the regression parameter and colourmap values are assigned to each voxel based on the mapped signal intensity data, in which colourmap values of voxels embodying poorly correlated signal intensity data are reduced. The method renders the voxels in the image in colours according to their colourmap values to visualise colocalisation in the image.
Type: Grant
Filed: April 3, 2020
Date of Patent: April 30, 2024
Assignee: STELLENBOSCH UNIVERSITY
Inventors: Benjamin Loos, Thomas Richard Niesler, Rensu Petrus Theart
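A hedged sketch of the idea above: fit a least-squares regression of one channel's intensities on the other's, take the squared Pearson correlation as the "regression parameter", and dim voxels that sit far from the regression line. The specific weighting (r² divided by one plus the residual) and the geometric-mean intensity are assumptions for illustration, not the published method.

```python
import numpy as np

def colocalization_colormap(ch1, ch2):
    """ch1, ch2: per-voxel intensity arrays (same shape) from two
    fluorescence channels. Returns per-voxel colourmap values in which
    poorly correlated voxels are reduced."""
    x = ch1.ravel().astype(float)
    y = ch2.ravel().astype(float)
    # Linear regression of channel 2 on channel 1.
    slope, intercept = np.polyfit(x, y, 1)
    # Regression parameter: squared Pearson correlation of the channels.
    r = np.corrcoef(x, y)[0, 1]
    # Residual distance of each voxel from the regression line.
    resid = np.abs(y - (slope * x + intercept))
    # Colourmap value: correlation-weighted joint intensity, reduced for
    # voxels with large residuals (poorly correlated signal).
    weight = r**2 / (1.0 + resid)
    return (weight * np.sqrt(x * y)).reshape(ch1.shape)
```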
-
Patent number: 11968476
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, relate to a method for casting from a virtual environment to a video communications platform. The system may provide a video conference session in a video conference application. A connection may be established between the video conference application and a VR or AR device. The video conference application may receive 2D video content from the VR or AR device. The 2D video content may comprise a view of a virtual environment. The video conference application may stream the 2D video content in the video conference session.
Type: Grant
Filed: October 31, 2021
Date of Patent: April 23, 2024
Assignee: Zoom Video Communications, Inc.
Inventor: Jordan Thiel
-
Patent number: 11961175
Abstract: A method of performing anisotropic texture filtering includes generating one or more parameters describing an elliptical footprint in texture space; performing isotropic filtering at each of a plurality of sampling points along a major axis of the elliptical footprint, wherein a spacing between adjacent sampling points of the plurality of sampling points is proportional to √(1−ρ⁻²) units, wherein ρ is a ratio of a major radius of an ellipse to be sampled and a minor radius of the ellipse to be sampled, wherein the ellipse to be sampled is based on the elliptical footprint; and combining results of the isotropic filtering at the plurality of sampling points with a Gaussian filter to generate at least a portion of a filter result.
Type: Grant
Filed: July 26, 2022
Date of Patent: April 16, 2024
Assignee: Imagination Technologies Limited
Inventors: Rostam King, Kenneth Rovers
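Placing the sampling points is straightforward once the spacing formula is fixed. The sketch below reads the abstract's (mis-encoded) expression as √(1−ρ⁻²) with ρ the ratio of major to minor radius, and further assumes the unit of spacing is one minor diameter and that the points are centred on the ellipse centre; both assumptions are illustrative, not the patented scheme. Note the formula behaves sensibly: for a circle (ρ = 1) the spacing collapses to zero, i.e. a single isotropic sample suffices.

```python
import math

def sample_points(center, major_axis, major_r, minor_r, n):
    """center: (x, y) centre of the elliptical footprint.
    major_axis: unit vector along the major axis.
    major_r, minor_r: major and minor radii; n: number of sampling points.
    Returns n points along the major axis, centred on the ellipse centre."""
    rho = major_r / minor_r                      # anisotropy ratio, >= 1
    # Spacing proportional to sqrt(1 - rho**-2); one minor diameter is the
    # assumed unit here.
    spacing = math.sqrt(max(0.0, 1.0 - rho**-2)) * 2 * minor_r
    ux, uy = major_axis
    return [(center[0] + ux * spacing * (i - (n - 1) / 2),
             center[1] + uy * spacing * (i - (n - 1) / 2))
            for i in range(n)]
```

Isotropic filtering would then be run at each returned point and the results blended with Gaussian weights, per the abstract.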