Patents Examined by Kee M. Tung
-
Patent number: 12118664
Abstract: Disclosed herein are systems and methods that relate to wireless communication mesh network design and operation.
Type: Grant
Filed: September 24, 2021
Date of Patent: October 15, 2024
Assignee: L3VEL, LLC
Inventors: Kevin Ross, Muhammad Ahsan Naim
-
Patent number: 12112419
Abstract: The technique of this disclosure suppresses a reduction in visibility of a predetermined object in virtual viewpoint image data. An image processing apparatus includes: an image capturing information acquisition unit configured to acquire image capturing information indicating a position and orientation of each of a plurality of image capturing apparatuses; an object information acquisition unit configured to acquire object information indicating a position and orientation of an object to be captured by the image capturing apparatuses, the object having a specific viewing angle; and a determination unit configured to determine, based on the acquired image capturing information and the position and orientation of the object indicated by the acquired object information, an image to be used for generating a virtual viewpoint image according to a position and orientation of a virtual viewpoint among a plurality of images based on capturing by the image capturing apparatuses.
Type: Grant
Filed: March 22, 2022
Date of Patent: October 8, 2024
Assignee: CANON KABUSHIKI KAISHA
Inventor: Daichi Adachi
-
Patent number: 12112444
Abstract: An electronic device includes a camera, a display, and a processor, wherein the processor is configured to acquire an image using the camera, determine a 3D graphic object corresponding to an object included in the acquired image, and apply the determined 3D graphic object to a 3D avatar and display the same.
Type: Grant
Filed: January 22, 2020
Date of Patent: October 8, 2024
Assignee: Samsung Electronics Co., Ltd
Inventors: Yejin Kim, Yoonjeong Kang, Iseul Yu, Heeyul Kim, Sanghyun Park, Jongil Jeong
-
Patent number: 12112446
Abstract: Various implementations disclosed herein include devices, systems, and methods that use object relationships represented in a scene graph to adjust the positions of objects. For example, an example process may include obtaining a three-dimensional (3D) representation of a physical environment that was generated based on sensor data obtained during a scanning process, detecting positions of a set of objects in the physical environment based on the 3D representation, generating a scene graph for the 3D representation of the physical environment based on the detected positions of the set of objects, wherein the scene graph represents the set of objects and relationships between the objects, and determining a refined 3D representation of the physical environment by refining the position of at least one object in the set of objects based on the scene graph and an alignment rule associated with a relationship in the scene graph.
Type: Grant
Filed: August 24, 2023
Date of Patent: October 8, 2024
Assignee: Apple Inc.
Inventors: Angela Blechschmidt, Daniel Ulbricht, Alexander S. Polichroniadis
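The refinement step described in this abstract can be pictured as a small rule engine over scene-graph relations. The sketch below is illustrative only: the `on_top_of` rule, the dict-based object layout, and the `refine_positions` helper are assumptions for the example, not details taken from the patent.

```python
def refine_positions(objects, relations):
    """Sketch of scene-graph-driven position refinement.

    objects: {name: {"z": float, "height": float}} with positions from a scan.
    relations: (child, parent, rule) triples from the scene graph.
    'on_top_of' is an assumed example of an alignment rule: it snaps the
    child's base to the parent's top surface, correcting scan noise.
    """
    # Work on a copy so the raw scan positions are preserved.
    refined = {name: dict(pos) for name, pos in objects.items()}
    for child, parent, rule in relations:
        if rule == "on_top_of":
            # Align the child's z so it rests exactly on the parent's top.
            refined[child]["z"] = objects[parent]["z"] + objects[parent]["height"]
    return refined
```

A cup scanned as floating 6 cm above a table would, under the `("cup", "table", "on_top_of")` relation, be snapped down onto the tabletop while all other coordinates stay untouched.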
-
Patent number: 12113374
Abstract: According to various embodiments, an augmented reality device may comprise a display, at least one camera, a communication circuit, and at least one processor operatively connected with the display, the at least one camera, and the communication circuit. The at least one processor may be configured to obtain a first image through the at least one camera, identify that a wireless power receiver and a wireless power transmitter configured to transmit wireless power to the wireless power receiver are included in the first image, receive information regarding the wireless power from the wireless power receiver through the communication circuit, and display, on the display, an augmented reality object indicating an arrangement of at least one of the wireless power transmitter or the wireless power receiver, based on the information regarding the wireless power.
Type: Grant
Filed: August 24, 2022
Date of Patent: October 8, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jongmin Yoon, Seungnyun Kim, Yongsang Yun
-
Patent number: 12106443
Abstract: Techniques for responsive video canvas generation are described to impart three-dimensional effects based on scene geometry to two-dimensional digital objects in a two-dimensional design environment. A responsive video canvas, for instance, is generated from input data including a digital video and scene data. The scene data describes a three-dimensional representation of an environment and includes a plurality of planes. A visual transform is generated and associated with each plane to enable digital objects to interact with the underlying scene geometry. In the responsive video canvas, an edit positioning a two-dimensional digital object with respect to a particular plane of the responsive video canvas is received. A visual transform associated with the particular plane is applied to the digital object and is operable to align the digital object to the depth and orientation of the particular plane. Accordingly, the digital object includes visual features based on the three-dimensional representation.
Type: Grant
Filed: February 23, 2022
Date of Patent: October 1, 2024
Assignee: Adobe Inc.
Inventors: Cuong D. Nguyen, Valerie Lina Head, Talin Chris Wadsworth, Stephen Joseph DiVerdi, Paul John Asente
-
Patent number: 12100072
Abstract: A tinting material may be generated and applied to the backdrop of an application presented on a user interface. The user interface is presented on a display and includes a background. The background comprises a first color, which includes a luminosity value, a hue value, and a saturation value. A tint color, having a luminosity value, a hue value, a saturation value, and an opacity value, is received. The luminosity value of the first color is modified to generate a second color, and the hue value and saturation value of the generated second color are modified to generate a third color. A tinting material color is generated that includes the modified luminosity value of the second color, the modified hue value of the third color, and the modified saturation value of the third color, and an application is presented on the user interface that includes the tinting material color.
Type: Grant
Filed: May 28, 2022
Date of Patent: September 24, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Spencer Israel Antonin Nataraja Hurd, Chigusa Yasuda Sansen, Christopher Nathaniel Raubacher, Simone Magurno, Jeremy Scott Knudsen
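The color derivation in this abstract (first color → second color via luminosity, second → third via hue and saturation, then recombine) can be sketched in HLS color space. This is a minimal illustration: the use of the tint's opacity as the blend weight and the `make_tint_material` name are assumptions, since the abstract does not specify how each value is modified.

```python
def make_tint_material(bg_hls, tint_hls, opacity):
    """Sketch of the layered tint-color derivation.

    bg_hls, tint_hls: (hue, luminosity, saturation) tuples in [0, 1].
    opacity: tint opacity in [0, 1], used here as the blend weight
    (an assumption; the patent leaves the modification rule open).
    """
    bg_h, bg_l, bg_s = bg_hls
    t_h, t_l, t_s = tint_hls
    # Step 1: modify the background's luminosity toward the tint's,
    # producing the "second color".
    l2 = bg_l + (t_l - bg_l) * opacity
    # Step 2: modify hue and saturation of the second color toward the
    # tint, producing the "third color".
    h3 = (bg_h + (t_h - bg_h) * opacity) % 1.0
    s3 = bg_s + (t_s - bg_s) * opacity
    # The tinting material color combines the second color's luminosity
    # with the third color's hue and saturation.
    return (h3, l2, s3)
```

The result can be converted back to RGB for display with the standard-library `colorsys.hls_to_rgb(h, l, s)`.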
-
Patent number: 12094083
Abstract: A holographic aberration correction method and apparatus are provided. The holographic aberration correction method includes: generating a plurality of sub-holograms in a hologram, as a matrix; calculating a plurality of eigenmodes and an eigenvalue and a weight corresponding to each of the eigenmodes by performing singular value decomposition on the matrix; selecting a predefined number of eigenmodes in the order of largest eigenvalues; calculating a plurality of first results which are obtained by multiplying a plurality of identical images by respective weights corresponding to the plurality of selected eigenmodes; calculating a plurality of second results by performing convolution of the plurality of first results and the plurality of selected eigenmodes, respectively; and generating an aberration-corrected hologram by adding the plurality of second results.
Type: Grant
Filed: December 30, 2021
Date of Patent: September 17, 2024
Assignee: SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
Inventors: Byoungho Lee, Seung-Woo Nam, Juhyun Lee, Siwoo Lee
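The pipeline in this abstract (SVD of a sub-hologram matrix, keep the top modes, weight, convolve, sum) can be sketched with NumPy. This is only a structural illustration: the weight definition (column mean of U) and the FFT-based circular convolution are assumptions filled in for the example, not details from the patent.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def correct_aberration(image, sub_holograms, k=3):
    """Sketch of eigenmode-based aberration correction.

    sub_holograms: (n, h, w) stack of sub-hologram patches.
    Each patch is flattened into a matrix row; the SVD's right singular
    vectors serve as eigenmodes and the singular values as eigenvalues.
    """
    n, h, w = sub_holograms.shape
    matrix = sub_holograms.reshape(n, h * w)
    # np.linalg.svd returns singular values in descending order, so the
    # first k rows of vh are the modes with the largest eigenvalues.
    u, s, vh = np.linalg.svd(matrix, full_matrices=False)
    corrected = np.zeros((h, w), dtype=complex)
    for i in range(min(k, len(s))):
        mode = vh[i].reshape(h, w)
        weight = u[:, i].mean()          # assumed per-mode weight
        weighted = image * weight        # "first result"
        # Circular convolution via FFT ("second result"), accumulated
        # into the aberration-corrected hologram.
        corrected += ifft2(fft2(weighted) * fft2(mode))
    return corrected
```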
-
Patent number: 12086955
Abstract: A platform, web application, system, and methods are disclosed for creating digital 3D scenes having digital 3D objects and for creating images of the digital 3D scenes. The platform can be communicatively coupled with the web application. The platform can apply a multi-stage method to convert high-fidelity digital 3D objects into low-fidelity digital 3D objects, store a mapping between them, and transmit low-fidelity digital 3D objects to the web application. The web application can include user interfaces for manipulating low-fidelity digital 3D objects to create low-fidelity digital 3D scenes. The platform can automatically convert low-fidelity digital 3D scenes received from the web application into high-fidelity digital 3D scenes and create high-fidelity 2D images.
Type: Grant
Filed: March 15, 2022
Date of Patent: September 10, 2024
Assignee: Target Brands, Inc.
Inventors: Godha Narayana, Madanmohan Atmakuri, Gopikrishna Chaganti, Anjali Unni, Balraj Govindasamy, Steve Samuel
-
Patent number: 12067675
Abstract: A computer-implemented method for autonomous reconstruction of vessels on computed tomography images, includes: providing a reconstruction convolutional neural network (CNN); receiving an input 3D model of a vessel to be reconstructed; defining a region of interest (ROI) and a movement step, wherein the ROI is a 3D volume that covers an area to be processed; defining a starting position and positioning the ROI at the starting position; reconstructing a shape of the input 3D model within the ROI by inputting the fragment of the input 3D model within the ROI to the reconstruction convolutional neural network (CNN) and receiving the reconstructed 3D model fragment; moving the ROI by the movement step along a scanning path; repeating the reconstruction and moving steps to reconstruct a desired portion of the input 3D model at consecutive ROI positions; and combining the reconstructed 3D model fragments.
Type: Grant
Filed: March 27, 2022
Date of Patent: August 20, 2024
Assignee: KARDIOLYTICS INC.
Inventors: Kris Siemionow, Paul Lewicki, Marek Kraft, Dominik Pieczynski, Michal Mikolajczak, Jacek Kania
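The control flow here is a sliding-window loop along a scanning path. The sketch below shows just that loop structure; the 1-D `volume` sequence and the `reconstruct_fragment` callback (standing in for the patent's reconstruction CNN, which is not reproduced) are assumptions for the example.

```python
def reconstruct_vessel(volume, roi_size, step, reconstruct_fragment):
    """Sketch of the sliding-ROI reconstruction loop.

    volume: sequence of cross-sections along the scanning path
            (a 1-D stand-in for the 3D model).
    roi_size: extent of the region of interest along the path.
    step: movement step between consecutive ROI positions.
    reconstruct_fragment: callback standing in for the reconstruction CNN.
    """
    fragments = []
    pos = 0
    # Position the ROI at the start, reconstruct the fragment inside it,
    # then move the ROI by the step until the path is covered.
    while pos + roi_size <= len(volume):
        roi = volume[pos:pos + roi_size]
        fragments.append(reconstruct_fragment(roi))
        pos += step
    # Combining the fragments is left to the caller; here they are
    # simply returned in scanning order.
    return fragments
```

With `step < roi_size` consecutive ROIs overlap, which is typically how fragment boundaries are kept seamless when the pieces are merged.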
-
Patent number: 12062234
Abstract: A client device that includes a camera and an extended reality client application program is employed by a user in a physical space, such as an industrial or campus environment. The user aims the camera within the mobile device at a real-world asset, such as a computer system, classroom, or vehicle. The client device acquires a digital image via the camera and detects textual and/or pictorial content included in the acquired image that corresponds to one or more anchors. The client device queries a data intake and query system for asset content associated with the detected anchors. Upon receiving the asset content from the data intake and query system, the client device generates visualizations of the asset content and presents the visualizations via a display device.
Type: Grant
Filed: October 24, 2022
Date of Patent: August 13, 2024
Assignee: SPLUNK INC.
Inventors: Devin Bhushan, Seunghee Han, Caelin Thomas Jackson-King, Jamie Kuppel, Stanislav Yazhenskikh, Jim Jiaming Zhu
-
Patent number: 12062130
Abstract: Systems and techniques are provided for performing video-based activity recognition. For example, a process can include generating a three-dimensional (3D) model of a first portion of an object based on one or more frames depicting the object. The process can also include generating a mask for the one or more frames, the mask including an indication of one or more regions of the object. The process can further include generating a 3D base model based on the 3D model of the first portion of the object and the mask, the 3D base model representing the first portion of the object and a second portion of the object. The process can include generating, based on the mask and the 3D base model, a 3D model of the second portion of the object.
Type: Grant
Filed: August 16, 2021
Date of Patent: August 13, 2024
Assignee: QUALCOMM Incorporated
Inventors: Yan Deng, Michel Adib Sarkis, Ning Bi, Chieh-Ming Kuo
-
Patent number: 12062138
Abstract: A target detection method and apparatus are provided. A first image of a target scenario collected by an image sensor is analyzed to obtain one or more first 2D detection boxes of the target scenario, and a three-dimensional point cloud of the target scenario collected by a laser sensor is analyzed to obtain one or more second 2D detection boxes of the target scenario in one or more views (for example, a BEV and/or a PV). Then, comprehensive analysis is performed on a matching degree and confidence of the one or more first 2D detection boxes, and a matching degree and confidence of the one or more second 2D detection boxes, to obtain a 2D detection box of a target. Finally, a 3D model of the target is obtained based on a three-dimensional point corresponding to the 2D detection box of the target.
Type: Grant
Filed: November 11, 2022
Date of Patent: August 13, 2024
Assignee: Huawei Technologies Co., Ltd.
Inventor: Hongmin Li
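The fusion step this abstract describes (jointly weighing the matching degree and confidence of camera-derived and lidar-derived 2D boxes) is commonly built on intersection-over-union matching. The sketch below assumes IoU as the matching degree and the product of confidences as the fused score; both are illustrative choices, as the abstract does not define either.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def fuse_detections(camera_boxes, lidar_boxes, iou_thresh=0.5):
    """Keep camera boxes whose best lidar match clears the IoU threshold.

    camera_boxes, lidar_boxes: lists of (box, confidence) pairs.
    The fused confidence is the product of the two sensors' confidences
    (an assumed fusion rule for this sketch).
    """
    fused = []
    for cbox, cconf in camera_boxes:
        best = max(((iou(cbox, lbox), lconf) for lbox, lconf in lidar_boxes),
                   default=(0.0, 0.0))
        if best[0] >= iou_thresh:
            fused.append((cbox, cconf * best[1]))
    return fused
```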
-
Patent number: 12051168
Abstract: Systems and methods are provided that include a processor executing an avatar generation program to obtain driving view(s), calculate a skeletal pose of the user, and generate a coarse human mesh based on a template mesh and the skeletal pose of the user. The program further constructs a texture map based on the driving view(s) and the coarse human mesh, extracts a plurality of image features from the texture map, the image features being aligned to a UV map, and constructs a UV positional map based on the coarse human mesh. The program further extracts a plurality of pose features from the UV positional map, the pose features being aligned to the UV map, generates a plurality of pose-image features based on the UV map-aligned image features and UV map-aligned pose features, and renders an avatar based on the plurality of pose-image features.
Type: Grant
Filed: September 15, 2022
Date of Patent: July 30, 2024
Assignees: LEMON INC., BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
Inventors: Hongyi Xu, Tao Hu, Linjie Luo
-
Patent number: 12039688
Abstract: Systems, methods, and computer readable media for augmented reality beauty product tutorials. The methods include determining, from live images of an augmented reality (AR) tutorial, effects, the effects indicating changes to the live images of a presenter of the AR tutorial from a beauty product being applied to a body part of the presenter. The methods further comprise determining, from the live images, motion, the motion indicating motion of the beauty product from the beauty product being applied to the body part of the presenter, and storing the effects and the motion.
Type: Grant
Filed: May 3, 2023
Date of Patent: July 16, 2024
Assignee: Snap Inc.
Inventors: Christine Barron, Virginia Drummond, Jean Luo, Alek Matthiessen, Celia Nicole Mourkogiannis, Jonathan Solichin, Olesia Voronova
-
Patent number: 12039654
Abstract: Systems and methods for super sampling and viewport shifting of non-real time 3D applications are disclosed. In one embodiment, a graphics processing unit includes a processing resource to execute graphics commands to provide graphics for an application, a capture tool to capture the graphics commands, and a data generator to generate a dataset including at least one frame based on the captured graphics commands and to modify viewport settings for each frame of interest to generate a conditioned dataset.
Type: Grant
Filed: October 15, 2021
Date of Patent: July 16, 2024
Assignee: Intel Corporation
Inventors: Joanna Douglas, Michal Taryma, Mario Garcia, Carlos Dominguez
-
Patent number: 12033076
Abstract: The disclosure relates to a system for evaluating movement of a body of a user. The system may include a video display, one or more digital cameras, and a processor. The processor may control the one or more cameras to generate images of at least the part of the body over a period of time. The processor may estimate a position of a plurality of joints of the body. The processor may receive a selection of a tracked pose, and determine, from the plurality of joints, a set of joints associated with the tracked pose. The processor may generate at least one joint vector connecting joints in the set of joints, and assign, based on changes in the joint vector over the period of time, a form score to a performance of the tracked pose. The processor may then generate a user interface that depicts the form score.
Type: Grant
Filed: April 24, 2023
Date of Patent: July 9, 2024
Assignee: MirrorAR LLC
Inventors: Hemant Virkar, Leah R. Kaplan, Stephen Furlani, Jacob Borgman, Anil Bhave, Mihir Thakkar, Sunkist Mehta
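The joint-vector and form-score steps in this abstract can be sketched concretely. The abstract only says the score is "based on changes in the joint vector over the period of time"; the cosine-similarity scoring rule below, and both helper names, are assumptions made for illustration.

```python
import math

def joint_vector(joint_a, joint_b):
    """Vector connecting two estimated (x, y) joint positions."""
    return (joint_b[0] - joint_a[0], joint_b[1] - joint_a[1])

def form_score(vectors_over_time, target_vector):
    """Assumed scoring rule: average cosine similarity between each
    observed joint vector and the tracked pose's target vector,
    scaled to a 0-100 form score."""
    def cos_sim(u, v):
        dot = u[0] * v[0] + u[1] * v[1]
        norm = math.hypot(*u) * math.hypot(*v)
        return dot / norm if norm else 0.0
    sims = [cos_sim(v, target_vector) for v in vectors_over_time]
    return 100.0 * max(0.0, sum(sims) / len(sims))
```

A performance whose forearm vector tracks the target direction in every frame scores 100; frames that drift off-direction pull the average, and hence the score, down.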
-
Patent number: 12018442
Abstract: Ground information about the uneven state of the ground is easily detected. A ground information detection method according to the present invention includes: a point group data acquisition step for acquiring point group data generated as three-dimensional coordinates for each point on the ground with laser light emitted from a three-dimensional scanning device installed at a known point; and a ground information detection step for detecting ground information about an uneven state of the ground using the point group data acquired at the point group data acquisition step.
Type: Grant
Filed: April 7, 2020
Date of Patent: June 25, 2024
Assignee: MR Support Inc.
Inventors: Shigeo Kusaki, Takamitsu Mori
-
Patent number: 12016745
Abstract: Embodiments are provided for digital dental modeling. One method embodiment includes receiving a three-dimensional data set including a first jaw and a second jaw of a three-dimensional digital dental model and receiving a two-dimensional data set corresponding to at least a portion of the first jaw and the second jaw. The method includes mapping two-dimensional data of the two-dimensional data set to the three-dimensional digital dental model by transforming a coordinate system of the two-dimensional data to a coordinate system of the three-dimensional data set. The method includes positioning the first jaw with respect to the second jaw based on the two-dimensional data mapped to the three-dimensional data set. The method includes using at least a portion of the two-dimensional data mapped to the three-dimensional data set as a target of movement of the first jaw with respect to the second jaw in the three-dimensional digital dental model.
Type: Grant
Filed: July 26, 2023
Date of Patent: June 25, 2024
Assignee: Align Technology, Inc.
Inventors: Anatoliy Boltunov, Yury Brailov, Fedor Chelnokov, Roman Roschin, David Mason
-
Patent number: 12014470
Abstract: An object is to provide a model generation apparatus capable of generating a model for implementing a more precise simulation. Firstly, an object to be reconstructed on a 3D model is extracted from 3D image information, and an object model having the highest shape conformity degree with the object is acquired from among a plurality of object models available on the 3D model, and is associated with size information and disposed-place information of the object. Next, each acquired object model is edited so as to conform with the size information of the object. Then, the edited object model is disposed on the 3D model so that the object model satisfies a physical constraint on the 3D model and conforms with the disposed-place information.
Type: Grant
Filed: May 22, 2019
Date of Patent: June 18, 2024
Assignee: NEC CORPORATION
Inventor: Hisaya Wakayama