Patents Issued on August 20, 2024
-
Patent number: 12067644
Abstract: Computer vision systems and methods for object detection with reinforcement learning are provided. The system includes a reinforcement learning agent configured to detect an object pertaining to a target object class and a plurality of objects pertaining to different target object classes, such that the reinforcement learning agent determines a bounding box for each of the detected objects. The system first sets parameters of the reinforcement learning agent. The system then detects an object and/or objects in an image based on the set parameters. Finally, the system determines a bounding box and/or bounding boxes for each of the detected objects.
Type: Grant
Filed: December 16, 2020
Date of Patent: August 20, 2024
Assignee: Insurance Services Office, Inc.
Inventors: Maneesh Kumar Singh, Sina Ditzel
-
Patent number: 12067645
Abstract: There are provided computing devices and methods, etc. to controllably transform an image of a face, including a high resolution image, to simulate continuous aging. Ethnicity-specific aging information and weak spatial supervision are used to guide the aging process defined through training a model comprising a GANs based generator. Aging maps present the ethnicity-specific aging information as skin sign scores or apparent age values. The scores are located in the map in association with a respective location of the skin sign zone of the face associated with the skin sign. Patch-based training, particularly in association with location information to differentiate similar patches from different parts of the face, is used to train on high resolution images while minimizing resource usage.
Type: Grant
Filed: June 30, 2021
Date of Patent: August 20, 2024
Assignee: L'Oreal
Inventors: Julien Despois, Frederic Flament, Matthieu Perrot
-
Patent number: 12067646
Abstract: A computer-implemented method includes receiving, by a computing device, a particular textual description of a scene. The method also includes applying a neural network for text-to-image generation to generate an output image rendition of the scene, the neural network having been trained to cause two image renditions associated with a same textual description to attract each other and two image renditions associated with different textual descriptions to repel each other based on mutual information between a plurality of corresponding pairs, wherein the plurality of corresponding pairs comprise an image-to-image pair and a text-to-image pair. The method further includes predicting the output image rendition of the scene.
Type: Grant
Filed: September 7, 2021
Date of Patent: August 20, 2024
Assignee: Google LLC
Inventors: Han Zhang, Jing Yu Koh, Jason Michael Baldridge, Yinfei Yang, Honglak Lee
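The attract/repel training described above is, at a high level, a contrastive objective over image-image and text-image pairs. As a hedged illustration only (not Google's claimed training procedure), the sketch below computes a generic InfoNCE-style loss; the function name, embedding sizes, and temperature are assumptions.

```python
# Hypothetical sketch of the attract/repel idea: an InfoNCE-style contrastive
# loss over image-image and text-image pairs. Names and sizes are illustrative.
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Pull each anchor toward its matching positive, push it from the rest."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # matched pairs sit on the diagonal

rng = np.random.default_rng(0)
img_a = rng.normal(size=(8, 64))   # renditions of a batch of descriptions
img_b = rng.normal(size=(8, 64))   # second renditions of the same descriptions
txt = rng.normal(size=(8, 64))     # text embeddings of the descriptions

loss = info_nce(img_a, img_b) + info_nce(img_a, txt)  # image-image + text-image terms
print(f"contrastive loss: {loss:.3f}")
```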
-
Patent number: 12067647
Abstract: Vehicles and methods for augmenting rearview displays for vehicles are provided. An exemplary method is provided for augmenting a rearview display for a vehicle having a gate moveable between a closed configuration and an opened configuration. The method includes collecting dynamic pixel images of an area behind the vehicle using a camera connected to the vehicle. Further, the method includes, in response to a command, displaying graphical overlays to the collected dynamic pixel images on a display screen, via a processor, wherein the graphical overlays depict an outline of the gate and/or projected path of the gate in the opened configuration.
Type: Grant
Filed: July 21, 2022
Date of Patent: August 20, 2024
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Yun Qian Miao, Norman J Weigert, Ralph David Schlottke
-
Patent number: 12067648
Abstract: The present disclosure is directed to a software tool that engages in a pattern matching technique. In one implementation, the software tool retrieves a two-dimensional drawing and identifies walls as lines, rotates the drawing until a threshold number of lines are aligned with either the X or Y axes, discards lines that are not aligned with either the X or Y axis, identifies intersection points, identifies a subset of intersection points that have a maximum or minimum coordinate, constructs a data library indicative of the relative positions of the points in the identified subset, and compares the constructed data libraries for the two-dimensional drawing to data libraries constructed for another two-dimensional drawing.
Type: Grant
Filed: May 15, 2023
Date of Patent: August 20, 2024
Assignee: Procore Technologies, Inc.
Inventor: Winson Chu
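As a rough illustration of the axis-alignment step described above (not Procore's actual implementation), the sketch below rotates a set of wall segments until at least a threshold fraction are parallel to the X or Y axis; the tolerance, threshold, and function names are assumptions.

```python
# Illustrative sketch: search for a rotation that makes a threshold fraction of
# wall segments axis-aligned; non-aligned segments would then be discarded.
import numpy as np

def axis_aligned_fraction(segments, angle, tol_deg=2.0):
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    d = (segments[:, 1] - segments[:, 0]) @ rot.T          # rotated direction vectors
    ang = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 90.0  # offset from nearest axis
    return np.mean((ang < tol_deg) | (ang > 90.0 - tol_deg))

def align_drawing(segments, threshold=0.8, steps=360):
    for angle in np.linspace(0.0, np.pi / 2, steps):
        if axis_aligned_fraction(segments, angle) >= threshold:
            return angle
    return 0.0  # fall back to the original orientation

# two axis-aligned walls plus one diagonal line that would be discarded
walls = np.array([[[0, 0], [4, 0]], [[4, 0], [4, 3]], [[0, 0], [3, 3]]], float)
print(f"chosen rotation: {np.degrees(align_drawing(walls)):.1f} degrees")
```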
-
Patent number: 12067649
Abstract: A disclosed technique includes determining a plurality of per-pixel variable rate shading rates for a plurality of fragments; determining a coarse variable shading rate for a coarse variable rate shading area based on the plurality of per-pixel variable rate shading rates; and shading one or more fragments based on the plurality of fragments and based on the coarse variable shading rate.
Type: Grant
Filed: June 29, 2021
Date of Patent: August 20, 2024
Assignee: Advanced Micro Devices, Inc.
Inventor: Christopher J. Brennan
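One plausible way to read the coarse-rate step is as a per-tile reduction of the per-pixel rates. The sketch below is an assumption-laden illustration, not AMD's disclosed technique: the 8x8 tile size and the choice of taking the finest rate are both invented for demonstration.

```python
# Minimal sketch: derive one coarse shading rate per tile from per-pixel
# variable-rate-shading rates by taking the finest (smallest) rate in each tile.
import numpy as np

def coarse_vrs(per_pixel_rates, tile=8):
    h, w = per_pixel_rates.shape
    tiles = per_pixel_rates[:h - h % tile, :w - w % tile]
    tiles = tiles.reshape(h // tile, tile, w // tile, tile)
    return tiles.min(axis=(1, 3))   # finest rate wins so no detail is lost

rates = np.full((16, 16), 4)        # mostly 4x4 coarse shading
rates[0:8, 8:16] = 1                # one region needs per-pixel shading
print(coarse_vrs(rates))            # [[4 1] [4 4]]
```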
-
Patent number: 12067650
Abstract: An imaging apparatus according to an aspect of the present disclosure includes: a filter that includes a region through which a two-dimensional signal indicating an image passes, and includes a portion which blocks at least a part of the two-dimensional signal in the region; a detector that detects a power of the two-dimensional signal passing through the filter; at least one memory storing a set of instructions; and at least one processor configured to execute the set of instructions to reconstruct the image indicated by the two-dimensional signal based on the power detected in a plurality of conditions that differ in positional relationship between the two-dimensional signal imaged on the filter and a distribution of the portion.
Type: Grant
Filed: March 19, 2019
Date of Patent: August 20, 2024
Assignee: NEC CORPORATION
Inventor: Chenhui Huang
-
Patent number: 12067651
Abstract: Data acquired from a scan of an object can be decomposed into frequency components. The frequency components can be input into a trained model to obtain processed frequency components. These processed frequency components can be composed and used to generate a final image. The trained model can be trained, independently or dependently, using frequency components covering the same frequencies as the to-be-processed frequency components. In addition, organ specific processing can be enabled by training the trained model using image and/or projection datasets of the specific organ.
Type: Grant
Filed: August 31, 2021
Date of Patent: August 20, 2024
Assignee: CANON MEDICAL SYSTEMS CORPORATION
Inventors: Qiulin Tang, Ruoqiao Zhang, Jian Zhou, Zhou Yu
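A hedged sketch of the decompose-process-compose flow described above: split an image into low- and high-frequency components, run each through a stand-in "trained model" (here just a scaling), and recombine. The band split, cutoff, and placeholder processing are assumptions, not Canon's method.

```python
# Rough sketch of decompose -> process per band -> recompose. The stand-in
# "models" are simple scalings; the real processing would be learned networks.
import numpy as np

def split_bands(image, cutoff=0.25):
    spec = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    low_mask = np.hypot(yy / (h / 2), xx / (w / 2)) <= cutoff
    low = np.fft.ifft2(np.fft.ifftshift(spec * low_mask)).real
    return low, image - low   # low band, high band

image = np.random.default_rng(1).normal(size=(64, 64))
low, high = split_bands(image)

processed_low = low * 1.0               # stand-in for the low-frequency model
processed_high = high * 0.8             # stand-in for the high-frequency model
final = processed_low + processed_high  # compose the processed components
print(f"max change vs. input: {np.abs(final - image).max():.3f}")
```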
-
Patent number: 12067652
Abstract: Disclosed herein is a medical system (100, 300) comprising a memory (110) storing machine executable instructions (120) and an image generating neural network (122). The image generating neural network is configured for outputting synthetic magnetic resonance image data (128) in response to receiving reference magnetic resonance image data (126) as input. The synthetic magnetic resonance image data is a simulation of magnetic resonance image data acquired according to a first configuration of a magnetic resonance imaging system when the reference magnetic resonance image data is acquired according to a second configuration of the magnetic resonance imaging system.
Type: Grant
Filed: April 21, 2021
Date of Patent: August 20, 2024
Assignee: Koninklijke Philips N.V.
Inventors: Christophe Michael Jean Schuelke, Karsten Sommer, George Randall Duensing, Peter Boernert
-
Patent number: 12067653
Abstract: Systems, methods, and devices for generating a corrected image are provided. A first robotic arm may be configured to orient a source at a first pose and a second robotic arm may be configured to orient a detector at a plurality of second poses. An image dataset may be received from the detector at each of the plurality of second poses to yield a plurality of image datasets. The plurality of datasets may comprise an initial image having a scatter effect. The plurality of image datasets may be saved. A scatter correction may be determined and configured to correct the scatter effect. The correction may be applied to the initial image to correct the scatter effect.
Type: Grant
Filed: May 30, 2023
Date of Patent: August 20, 2024
Assignee: Mazor Robotics Ltd.
Inventors: Noam Weiss, Ben Yosef Hai Ezair
-
Patent number: 12067654
Abstract: The invention relates to methods and systems for reducing artefacts in image reconstruction employed in tomographic imaging including Positron Emission Tomography (PET) and Computer Assisted Tomography (CAT) or (CT). The method is carried out entirely or in part by a computer or computerised system communicatively coupled to a detector arrangement which comprises a plurality of detector elements, wherein the detector elements are configured to detect photons associated with an object during PET and CAT screening processes in at least medical and mining applications.
Type: Grant
Filed: June 24, 2020
Date of Patent: August 20, 2024
Assignee: University of Johannesburg
Inventors: Simon Henry Connell, Martin Nkululeko Hogan Cook
-
Patent number: 12067655
Abstract: An image processing method includes acquiring a three-dimensional image of a subject eye and a two-dimensional image of the subject eye, specifying a first reference point in the three-dimensional image, specifying a second reference point in the two-dimensional image, and generating a superposed image. In the superposed image, the first reference point in the three-dimensional image is coordinated with the second reference point in the two-dimensional image and the two-dimensional image is superposed with at least a portion of the three-dimensional image.
Type: Grant
Filed: January 14, 2022
Date of Patent: August 20, 2024
Assignee: NIKON CORPORATION
Inventors: Yasushi Tanabe, Daishi Tanaka, Makoto Ishida, Mariko Mukai
-
Patent number: 12067656
Abstract: Provided is an image editing method including: displaying a setting image including a first image and a second image; receiving a first operation of designating a first point on the first image from a user; receiving a second operation of designating a second point on the second image from the user; receiving a third operation of designating a third point on the first image from the user; receiving a fourth operation of designating a fourth point on the second image from the user; and deforming the first image into a third image by making the first point correspond to the second point and making the third point correspond to the fourth point.
Type: Grant
Filed: March 29, 2023
Date of Patent: August 20, 2024
Assignee: SEIKO EPSON CORPORATION
Inventor: Kohei Yanai
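One way to realize a deformation driven by two point correspondences is a similarity transform that maps the first point to the second and the third to the fourth. The sketch below is an illustrative assumption, not Seiko Epson's disclosed deformation; the helper name and the choice of a similarity (rather than a more general warp) are mine.

```python
# Hedged sketch: a similarity transform (uniform scale, rotation, translation)
# that sends the first point to the second and the third point to the fourth.
import numpy as np

def similarity_from_two_points(p1, p3, p2, p4):
    """Return a 2x3 affine matrix mapping p1 -> p2 and p3 -> p4."""
    src = np.asarray(p3, float) - np.asarray(p1, float)
    dst = np.asarray(p4, float) - np.asarray(p2, float)
    # complex division gives the scale+rotation that turns src into dst
    z = complex(*dst) / complex(*src)
    a, b = z.real, z.imag
    linear = np.array([[a, -b], [b, a]])
    translation = np.asarray(p2, float) - linear @ np.asarray(p1, float)
    return np.column_stack([linear, translation])

M = similarity_from_two_points((0, 0), (1, 0), (10, 10), (10, 12))
for p in [(0, 0), (1, 0), (0.5, 0.0)]:
    print(p, "->", M @ np.append(p, 1.0))
```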
-
Patent number: 12067657
Abstract: In a digital image annotation and retrieval system, a machine learning model identifies an image feature in an image and generates a plurality of question prompts for the feature. For a particular feature, a feature annotation is generated, which can include capturing a narrative, determining a plurality of narrative units, and mapping a particular narrative unit to the identified image feature. An enriched image is generated using the generated feature annotation. The enriched image includes searchable metadata comprising the feature annotation and the plurality of question prompts.
Type: Grant
Filed: February 28, 2023
Date of Patent: August 20, 2024
Assignee: Story File, Inc.
Inventors: Samuel Michael Gustman, Andrew Victor Jones, Michael Glenn Harless, Stephen David Smith, Heather Lynn Smith
-
Patent number: 12067658
Abstract: Embodiments of the present disclosure provide a method, apparatus and device for automatically making up portrait lips, a storage medium and a program product. The method includes: extracting lip key points from a portrait facial image and detecting a portrait facial orientation and a lip shape in the portrait facial image; adjusting positions of the lip key points based on the portrait facial orientation and the lip shape; detecting a skin hue and a skin color number of a facial area in the portrait facial image; selecting a target lipstick color from a lipstick color sample library based on the skin hue and the skin color number; and performing fusion coloring by using the target lipstick color according to the positions of the lip key points.
Type: Grant
Filed: August 25, 2022
Date of Patent: August 20, 2024
Assignees: MIGU VIDEO CO., LTD, Migu Co., Ltd., China Mobile Communications Group Co., Ltd.
Inventors: Xiao Ma, Wangdu Chen, Qi Wang, Xinghao Pan, Kangjing Li
-
Patent number: 12067659
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that utilize a character animation neural network informed by motion and pose signatures to generate a digital video through person-specific appearance modeling and motion retargeting. In particular embodiments, the disclosed systems implement a character animation neural network that includes a pose embedding model to encode a pose signature into spatial pose features. The character animation neural network further includes a motion embedding model to encode a motion signature into motion features. In some embodiments, the disclosed systems utilize the motion features to refine per-frame pose features and improve temporal coherency. In certain implementations, the disclosed systems also utilize the motion features to demodulate neural network weights used to generate an image frame of a character in motion based on the refined pose features.
Type: Grant
Filed: October 15, 2021
Date of Patent: August 20, 2024
Assignee: Adobe Inc.
Inventors: Yangtuanfeng Wang, Duygu Ceylan Aksit, Krishna Kumar Singh, Niloy J Mitra
-
Patent number: 12067660
Abstract: An avatar personalized for a subject and used in movement analysis and coaching uses a 3D polygon mesh representation of a human, authored for one human skeleton model, with a different size (scaled) skeleton model, without producing visual artifacts. Dimensions of the scaled skeleton model can be determined from three or eight measurements of a subject or from scanning or photographic methods. The avatar animated based on motion capture data may be animated alone or with one or more other avatars that are synchronized spatially, orientationally, and/or at multiple times to allow a user to easily compare differences between performances. An information presentation in an animation may use pixel energies from multiple animation frames to ensure that the information remains relatively stationary and does not obstruct important visual details.
Type: Grant
Filed: June 28, 2019
Date of Patent: August 20, 2024
Assignee: RLT IP LTD.
Inventors: Leslie Andrew, David M. Jessop, Peter Robins, Ferenc Visztra, Jonathan M. Dalzell
-
Patent number: 12067661
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing unsupervised learning of discrete human motions to generate digital human motion sequences. The disclosed system utilizes an encoder of a discretized motion model to extract a sequence of latent feature representations from a human motion sequence in an unlabeled digital scene. The disclosed system also determines sampling probabilities from the sequence of latent feature representations in connection with a codebook of discretized feature representations associated with human motions. The disclosed system converts the sequence of latent feature representations into a sequence of discretized feature representations by sampling from the codebook based on the sampling probabilities. Additionally, the disclosed system utilizes a decoder to reconstruct a human motion sequence from the sequence of discretized feature representations.
Type: Grant
Filed: February 16, 2022
Date of Patent: August 20, 2024
Assignee: Adobe Inc.
Inventors: Jun Saito, Nitin Saini, Ruben Villegas
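The codebook-sampling step described above can be illustrated with a small vector-quantization sketch: distances to codebook entries become sampling probabilities, and each latent is replaced by a sampled entry. Sizes, the softmax temperature, and variable names are assumptions, not Adobe's implementation.

```python
# Illustrative quantization step: latent features -> sampling probabilities
# over a codebook -> sampled discretized feature representations.
import numpy as np

rng = np.random.default_rng(2)
latents = rng.normal(size=(16, 32))     # sequence of latent feature vectors
codebook = rng.normal(size=(64, 32))    # discretized feature representations

# negative squared distances -> softmax -> per-step sampling probabilities
d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
logits = -d2
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

codes = np.array([rng.choice(len(codebook), p=p) for p in probs])
quantized = codebook[codes]             # discretized sequence fed to the decoder
print("sampled codebook indices:", codes[:8])
```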
-
Patent number: 12067662
Abstract: The disclosure provides methods and systems for automatically generating an animatable object, such as a 3D model. In particular, the present technology provides fast, easy, and automatic animatable solutions based on unique facial characteristics of user input. Various embodiments of the present technology include receiving user input, such as a two-dimensional image or three-dimensional scan of a user's face, and automatically detecting one or more features. The methods and systems may further include deforming a template geometry and a template control structure based on the one or more detected features to automatically generate a custom geometry and custom control structure, respectively. A texture of the received user input may also be transferred to the custom geometry. The animatable object therefore includes the custom geometry, the transferred texture, and the custom control structure, which follow a morphology of the face.
Type: Grant
Filed: November 18, 2022
Date of Patent: August 20, 2024
Assignee: Didimo, Inc.
Inventors: Verónica Costa Teixeira Pinto Orvalho, Hugo Miguel dos Reis Pereira, José Carlos Guedes dos Prazeres Miranda, Thomas Iorns, Alexis Paul Benoit Roche, Mariana Ribeiro Dias, Eva Margarida Ferreira de Abreu Almeida
-
Patent number: 12067663
Abstract: Method for generating media content items on demand starts with a processor receiving an animation file including a first metadata based on an animation input. The animation file is associated with a media content identification. The processor generates puppets associated with frames in the animation file using the first metadata. The processor causes a puppet matching interface to be displayed on a client device. The puppet matching interface includes one of the puppets in a first pose. The processor receives a puppet posing input associated with a second pose from the client device. The processor causes the one of the puppets to be displayed in the second pose in the puppet matching interface by the client device. The processor can also generate a second metadata based on the puppet posing input. Other embodiments are disclosed herein.
Type: Grant
Filed: November 22, 2022
Date of Patent: August 20, 2024
Assignee: Snap Inc.
Inventors: Bradley Kotsopoulos, Michael Kozakov, Yingying Wang, Nicholas Hendriks, Derek Spencer
-
Patent number: 12067664
Abstract: A pose data file may represent, for each frame of a reference frame sequence, a plurality of two-dimensional skeleton projections on a virtual spherical surface, each of which, for a particular frame, corresponds to a two-dimensional reference pose image of a three-dimensional skeleton of a first human from a viewing angle. A real-time two-dimensional skeleton detector module detects a two-dimensional skeleton of a second human in each received test frame of a test frame sequence. A pose matching module selects a particular two-dimensional skeleton projection of the first human with the minimum mathematical distance from the two-dimensional skeleton of the second human in the current test frame to match the current pose of the second human in the current test frame with a corresponding reference pose image of the pose data file. The particular two-dimensional skeleton projection represents the corresponding reference pose image at the viewing angle.
Type: Grant
Filed: March 1, 2022
Date of Patent: August 20, 2024
Assignee: Lindera GmbH
Inventors: Arash Azhand, Punjal Agarwal
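A minimal sketch of the matching step, assuming a simple normalization (root-centering plus scale) and a Euclidean joint-wise distance; the 17-joint layout and function names are invented for illustration and are not Lindera's exact distance measure.

```python
# Minimal sketch: normalize 2D skeletons and pick the reference projection with
# the smallest joint-wise distance to the detected skeleton.
import numpy as np

def normalize(skel):
    centered = skel - skel.mean(axis=0)
    return centered / (np.linalg.norm(centered) + 1e-8)

def best_match(detected, reference_projections):
    d = normalize(detected)
    dists = [np.linalg.norm(d - normalize(ref)) for ref in reference_projections]
    return int(np.argmin(dists)), min(dists)

rng = np.random.default_rng(3)
references = rng.normal(size=(100, 17, 2))   # projections from the pose data file
test_skeleton = references[42] * 1.7 + 5.0   # same pose, different scale/offset
index, dist = best_match(test_skeleton, references)
print(f"matched reference projection {index} (distance {dist:.4f})")
```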
-
Patent number: 12067665
Abstract: Described herein is a computer implemented method including: displaying a design including a first design element; detecting an initiate record animation event; receiving animation definition input creating an original path between an initial position and a final position, the animation definition input drawing the original path at original traversal speeds; generating animation data based on the animation definition input, the animation data including data that allows both the original path and the original traversal speeds to be reproduced; and associating the animation data with the first design element.
Type: Grant
Filed: June 16, 2023
Date of Patent: August 20, 2024
Assignee: Canva Pty Ltd
Inventors: Stephen Richard Mudra, Liam Brodie Rayner, Mathew James Paul Manning, Melanie Joy Perkins, Jane Maree Abernethy, Elijah Alexander Sheppard, Joan Dyquiangco Magno, Jessica Faccin
-
Patent number: 12067666
Abstract: Aspects presented herein relate to methods and devices for graphics processing including an apparatus, e.g., a GPU. The apparatus may receive a set of draw call instructions corresponding to a graphics workload, where the set of draw call instructions is associated with at least one run-time parameter. The apparatus may also obtain a first shader program associated with storing data in a system memory and at least one second shader program associated with storing data in a constant memory. Further, the apparatus may execute the first shader program or the at least one second shader program based on whether the at least one run-time parameter is less than or equal to a size of the constant memory. The apparatus may also update or maintain a configuration of a shader processor or a streaming processor based on executing the first shader program or the at least one second shader program.
Type: Grant
Filed: May 18, 2022
Date of Patent: August 20, 2024
Assignee: QUALCOMM Incorporated
Inventors: Yun Du, Eric Demers, Andrew Evan Gruber, Chun Yu, Chihong Zhang, Baoguang Yang, Yuehai Du, Gang Zhong, Avinash Seetharamaiah, Jonnala Gadda Nagendra Kumar
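The selection logic reads as a straightforward size test at dispatch time. The sketch below is a hedged paraphrase, not Qualcomm's driver code; the constant-memory capacity and shader names are invented.

```python
# Hedged sketch: run the shader variant that reads from constant memory when
# the run-time parameter fits, otherwise fall back to the system-memory variant.
CONSTANT_MEMORY_BYTES = 64 * 1024   # assumed constant-memory capacity

def choose_shader(runtime_param_bytes):
    if runtime_param_bytes <= CONSTANT_MEMORY_BYTES:
        return "constant_memory_shader"   # the at least one second shader program
    return "system_memory_shader"         # the first shader program

for size in (4 * 1024, 64 * 1024, 256 * 1024):
    print(f"{size:>7} bytes -> {choose_shader(size)}")
```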
-
Patent number: 12067667
Abstract: Disclosed approaches provide for interactions of light transport paths in a virtual environment to share directional radiance when rendering a scene. Directional radiance that may be shared includes outgoing directional radiance of interactions, incoming directional radiance of interactions, and/or information derived therefrom. The shared directional radiance may be used for various purposes, such as computing lighting contributions at one or more interactions of a light transport path, and/or for path guiding. Directional radiance of an interaction may be shared with another interaction when the interaction is sufficiently similar (e.g., in radiance direction) to serve as an approximation of a sample for the other interaction. Sharing directional radiance may provide for online learning of directional radiance, which may build finite element approximations of light fields at the interactions.
Type: Grant
Filed: May 14, 2021
Date of Patent: August 20, 2024
Assignee: NVIDIA Corporation
Inventor: Jacopo Pantaleoni
-
Patent number: 12067668
Abstract: There is provided an instruction, or instructions, that can be included in a program to perform a ray tracing operation, with individual execution threads in a group of execution threads executing the program performing the ray tracing operation for a respective ray in a corresponding group of rays, such that the group of rays performs the ray tracing operation together. The instruction(s), when executed by the execution threads, will cause one or more rays from the group of plural rays to be tested for intersection with a set of primitives. A result of the ray-primitive intersection testing can then be returned for the traversal operation.
Type: Grant
Filed: June 3, 2022
Date of Patent: August 20, 2024
Assignee: Arm Limited
Inventors: Richard Bruce, William Robert Stoye, Mathieu Jean Joseph Robart, Jørn Nystad
-
Patent number: 12067669
Abstract: A hardware-based traversal coprocessor provides acceleration of tree traversal operations searching for intersections between primitives represented in a tree data structure and a ray. The primitives may include triangles used in generating a virtual scene. The hardware-based traversal coprocessor is configured to properly handle numerically challenging computations at or near edges and/or vertices of primitives and/or ensure that a single intersection is reported when a ray intersects a surface formed by primitives at or near edges and/or vertices of the primitives.
Type: Grant
Filed: May 18, 2023
Date of Patent: August 20, 2024
Assignee: NVIDIA Corporation
Inventors: Samuli Laine, Tero Karras, Timo Aila, Robert Ohannessian, William Parsons Newhall, Jr., Greg Muthler, Ian Kwong, Peter Nelson, John Burgess
-
Patent number: 12067670
Abstract: An avatar output device includes an avatar storage unit that stores, for each of two or more avatars, avatar information including first model information for displaying the avatar; a determination unit that determines at least one first avatar displayed using the first model information and at least one second avatar displayed using second model information with a data size smaller than a data size of the first model information from among the two or more avatars; an avatar acquisition unit that acquires an avatar using the second model information for each of the at least one second avatar determined by the determination unit, and acquires an avatar using the first model information for each of the at least one first avatar; and an avatar output unit that outputs the two or more avatars acquired by the avatar acquisition unit.
Type: Grant
Filed: May 5, 2023
Date of Patent: August 20, 2024
Assignee: CLUSTER, INC.
Inventors: Daiki Handa, Hiroki Kurimoto
-
Patent number: 12067671
Abstract: According to some embodiments, a method performed by an extended reality (XR) system comprises: identifying a space within an XR spatial mapping to which virtual content can be overlaid; determining an amount of static occlusion associated with the identified space; and transmitting an indication of the amount of static occlusion associated with the identified space to a virtual content supply side platform. According to some embodiments, a method performed by an XR system comprises: triggering a digital content display opportunity associated with a space within an XR spatial mapping to which virtual content can be overlaid; determining an amount of dynamic occlusion associated with the space; and transmitting an indication of the amount of dynamic occlusion associated with the identified space to a digital content display opportunity bidding system.
Type: Grant
Filed: November 26, 2019
Date of Patent: August 20, 2024
Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors: Paul McLachlan, Héctor Caltenco
-
Patent number: 12067672
Abstract: An endoscopic image processing apparatus is configured to create a three-dimensional shape model of an object by performing processing on an endoscopic image group of an inside of the object, and includes a processor. The processor estimates a self-position of the image pickup device based on the endoscopic image group, calculates a first displacement amount corresponding to a displacement amount of the image pickup device based on an estimation result of the self-position of the image pickup device obtained by the estimation, calculates a second displacement amount corresponding to a displacement amount in a direction parallel to a longitudinal axis direction of the insertion portion, based on a detection signal outputted from an insertion/removal state detection device that detects an insertion/removal state of an insertion portion inserted into the object, and generates scale information in which the first displacement amount and the second displacement amount are associated with each other.
Type: Grant
Filed: October 28, 2021
Date of Patent: August 20, 2024
Assignee: Evident Corporation
Inventor: Hideaki Takahashi
-
Patent number: 12067673
Abstract: In one embodiment, a computing system determines one or more depth measurements associated with a first physical object. The system captures an image including image data associated with the first physical object. The system identifies and associates a plurality of first pixels with a first representative depth value based on the one or more depth measurements. The system determines, for an output pixel of an output image, that (1) a portion of a virtual object is visible from a viewpoint and (2) the output pixel overlaps with a portion of the first physical object. The system determines that the portion of the first physical object is associated with the plurality of first pixels and renders the output image from the viewpoint. Occlusion at the output pixel is determined based on a comparison between the first representative depth value and a depth value associated with the portion of the virtual object.
Type: Grant
Filed: October 13, 2022
Date of Patent: August 20, 2024
Assignee: META PLATFORMS TECHNOLOGIES, LLC
Inventors: Alberto Garcia Garcia, Gioacchino Noris, Gian Diego Tipaldi
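A small sketch of the per-pixel occlusion test implied above, assuming a single representative depth for the physical object's pixels and a straightforward closer-wins comparison; all values and names are synthetic.

```python
# Simple sketch: a virtual object's pixel is hidden wherever the physical
# object's representative depth is closer to the viewpoint.
import numpy as np

physical_depth = np.full((4, 4), 2.0)   # representative depth of the physical object
virtual_depth = np.full((4, 4), 3.0)    # virtual object behind the physical one
virtual_depth[:, :2] = 1.5              # left half pokes out in front

virtual_visible = virtual_depth < physical_depth
print(virtual_visible.astype(int))      # 1 where the virtual object is drawn
```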
-
Patent number: 12067674
Abstract: A device may include an electronic display to display an image frame based on blended image data and image processing circuitry to generate the blended image data by combining first image data and second image data via a blend operation. The blend operation may include receiving graphics alpha data indicative of a transparency factor to be applied to the first image data to generate a first layer of the blend operation. The blend operation may also include overlaying the first layer onto a second layer that is based on the second image data. Overlaying the first layer onto the second layer may include adding first pixel values of the first image data that include negative pixel values and are augmented by the transparency factor to second pixel values of the second image data to generate blended pixel values of the blended image data.
Type: Grant
Filed: September 21, 2022
Date of Patent: August 20, 2024
Assignee: Apple Inc.
Inventors: Yun Gong, Jim C Chou, Guy Cote
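The blend described above appears to add an alpha-scaled first layer, which may carry negative pixel values, onto the second layer. The sketch below illustrates that reading under stated assumptions (the additive form and the clipping range); it is not Apple's display pipeline.

```python
# Rough sketch: scale the signed first layer by the graphics alpha and add it
# to the second layer, then clip to the displayable range.
import numpy as np

second_layer = np.array([[120.0, 120.0], [120.0, 120.0]])   # base image data
first_layer = np.array([[30.0, -30.0], [0.0, -120.0]])      # signed adjustment layer
alpha = 0.5                                                  # transparency factor

blended = np.clip(second_layer + alpha * first_layer, 0.0, 255.0)
print(blended)   # negative values darken, positive values brighten
```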
-
Patent number: 12067675
Abstract: A computer-implemented method for autonomous reconstruction of vessels on computed tomography images includes: providing a reconstruction convolutional neural network (CNN); receiving an input 3D model of a vessel to be reconstructed; defining a region of interest (ROI) and a movement step, wherein the ROI is a 3D volume that covers an area to be processed; defining a starting position and positioning the ROI at the starting position; reconstructing a shape of the input 3D model within the ROI by inputting the fragment of the input 3D model within the ROI to the reconstruction convolutional neural network (CNN) and receiving the reconstructed 3D model fragment; moving the ROI by the movement step along a scanning path; repeating the reconstruction and moving steps to reconstruct a desired portion of the input 3D model at consecutive ROI positions; and combining the reconstructed 3D model fragments.
Type: Grant
Filed: March 27, 2022
Date of Patent: August 20, 2024
Assignee: KARDIOLYTICS INC.
Inventors: Kris Siemionow, Paul Lewicki, Marek Kraft, Dominik Pieczynski, Michal Mikolajczak, Jacek Kania
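A schematic sketch of the sliding-ROI loop, assuming a fixed ROI size, a straight scanning path, and a placeholder smoothing step standing in for the reconstruction CNN; none of these specifics come from the patent.

```python
# Schematic sketch: slide a fixed-size ROI along a path, run a reconstruction
# model on each fragment, and stitch the overlapping fragments back together.
import numpy as np

def reconstruct_vessel(volume, roi=16, step=8):
    out = np.zeros_like(volume)
    weight = np.zeros_like(volume)
    for z in range(0, volume.shape[0] - roi + 1, step):       # move ROI along the path
        fragment = volume[z:z + roi]
        restored = fragment.mean() * np.ones_like(fragment)   # stand-in for the CNN
        out[z:z + roi] += restored
        weight[z:z + roi] += 1.0
    return out / np.maximum(weight, 1.0)                      # combine overlapping fragments

vessel = np.random.default_rng(4).random((64, 8, 8))
print(reconstruct_vessel(vessel).shape)
```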
-
Patent number: 12067676
Abstract: The present disclosure discloses a cyberspace map model creation method, which includes: determining that a coordinate system of a cyberspace map model uses an IP address as a basic vector; mapping the IP address to a two-dimensional coordinate system; determining a three-dimensional coordinate system, by means of orthogonalizing a logical port, a region, and a topological structure to the IP address, to describe more fine-grained information of a cyberspace; constructing a scale standard, so as to lay a theoretical foundation for a multi-level and scalable representation of complex and diverse cyberspace resources in the cyberspace map model; and determining a mapping relationship between a cyberspace map and a geographic map to support screen segmentation, so as to perform a comparison drawing of a same cyberspace scenario in different cyberspace maps and geographic maps, and present cyberspace information in many aspects.
Type: Grant
Filed: June 2, 2021
Date of Patent: August 20, 2024
Assignee: TSINGHUA UNIVERSITY
Inventors: Jilong Wang, Congcong Miao, Shuying Zhuang
-
Patent number: 12067677
Abstract: An information processing apparatus, method, and non-transitory computer-readable storage medium that estimate a three-dimensional skeleton based on images captured from a plurality of viewpoints, and generate a skeleton model by modeling the three-dimensional skeleton. The information processing apparatus, method, and non-transitory computer-readable storage medium further model a joint in a spherical shape or a bone in a cylindrical shape in the three-dimensional skeleton, and set a radius of a sphere of the joint or a radius of a cylinder of the bone according to a distance between any one of the plurality of viewpoints or a virtual viewpoint and the joint or the bone.
Type: Grant
Filed: December 10, 2020
Date of Patent: August 20, 2024
Assignee: Sony Group Corporation
Inventor: Daisuke Tahara
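One simple reading of the radius rule is that the sphere or cylinder radius scales with the distance from the chosen viewpoint. The sketch below encodes that reading with an assumed linear rule and made-up constants; it is not Sony's disclosed formula.

```python
# Small sketch: size a joint's sphere radius from its distance to the viewpoint,
# using an assumed linear rule with invented constants.
import numpy as np

def joint_radius(joint_position, viewpoint, base_radius=0.02, per_metre=0.01):
    distance = np.linalg.norm(np.asarray(joint_position) - np.asarray(viewpoint))
    return base_radius + per_metre * distance

viewpoint = (0.0, 1.6, 0.0)
for joint in [(0.0, 1.6, 2.0), (0.0, 1.6, 10.0)]:
    print(f"joint at {joint}: radius {joint_radius(joint, viewpoint):.3f} m")
```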
-
Patent number: 12067678
Abstract: An exemplary processing system accesses a three-dimensional (3D) model of an anatomical structure of a patient and applies a detection process to the 3D model to detect a single-layer anatomical feature in the anatomical structure. The detection process includes generating, from the 3D model, a probability map of candidate points for the single-layer anatomical feature, and generating, based on the probability map of candidate points, a single-layer mesh representing the single-layer anatomical feature.
Type: Grant
Filed: June 9, 2020
Date of Patent: August 20, 2024
Assignee: Intuitive Surgical Operations, Inc.
Inventors: Hui Zhang, Junning Li, Bai Wang, Tao Zhao
-
Patent number: 12067679
Abstract: A method with three-dimensional (3D) modeling of a wearer of a wearable device includes generating a feature map for each of a plurality of images of the wearer obtained from a plurality of imaging devices provided in the wearable device, obtaining joint keypoint information corresponding to joint positions of the wearer and initial shape coefficient information associated with a shape of the wearer based on the feature map for each of the images, determining a target 3D joint angle for 3D modeling of the wearer based on the joint keypoint information and the initial shape coefficient information, determining target shape coefficient information for 3D modeling of the wearer based on the joint keypoint information and the initial shape coefficient information, and obtaining a 3D mesh of the wearer based on the target 3D joint angle and the target shape coefficient information.
Type: Grant
Filed: April 29, 2022
Date of Patent: August 20, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventor: Seunghoon Jee
-
Patent number: 12067680
Abstract: Systems and methods for mesh generation are described. One aspect of the systems and methods includes receiving an image depicting a visible portion of a body; generating an intermediate mesh representing the body based on the image; generating visibility features indicating whether parts of the body are visible based on the image; generating parameters for a morphable model of the body based on the intermediate mesh and the visibility features; and generating an output mesh representing the body based on the parameters for the morphable model, wherein the output mesh includes a non-visible portion of the body that is not depicted by the image.
Type: Grant
Filed: August 2, 2022
Date of Patent: August 20, 2024
Assignee: ADOBE INC.
Inventors: Jimei Yang, Chun-han Yao, Duygu Ceylan Aksit, Yi Zhou
-
Patent number: 12067681
Abstract: Methods and tessellation modules for tessellating a patch to generate tessellated geometry data representing the tessellated patch. Received geometry data representing a patch is processed to identify tessellation factors of the patch. Based on the identified tessellation factors of the patch, tessellation instances to be used in tessellating the patch are determined. The tessellation instances are allocated amongst a plurality of tessellation pipelines that operate in parallel, wherein a respective set of one or more of the tessellation instances is allocated to each of the tessellation pipelines, and wherein each of the tessellation pipelines generates tessellated geometry data associated with the respective allocated set of one or more of the tessellation instances.
Type: Grant
Filed: June 12, 2023
Date of Patent: August 20, 2024
Assignee: Imagination Technologies Limited
Inventor: John W. Howson
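A minimal sketch of the allocation step, assuming the number of tessellation instances is derived from the largest tessellation factor and dealt out round-robin; both choices are illustrative assumptions rather than Imagination Technologies' scheme.

```python
# Minimal sketch: derive an instance count from the tessellation factors and
# allocate the instances to parallel pipelines round-robin.
def allocate_instances(tessellation_factors, num_pipelines=4):
    num_instances = max(1, int(max(tessellation_factors)))    # assumed sizing rule
    pipelines = [[] for _ in range(num_pipelines)]
    for instance in range(num_instances):
        pipelines[instance % num_pipelines].append(instance)  # round-robin allocation
    return pipelines

for pipeline_id, instances in enumerate(allocate_instances([7.0, 3.0, 5.0, 2.0])):
    print(f"pipeline {pipeline_id}: instances {instances}")
```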
-
Patent number: 12067682
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that initiate communication between users of a networking system within an extended reality environment. For example, the disclosed systems can generate an extended-reality lobby window graphical user interface element for display on an extended-reality device of a user. The disclosed systems can further determine a connection between the user and a co-user and provide an animated visual representation of the co-user for display within the extended-reality lobby window graphical user interface element. In response to receiving user input targeting the animated visual representation of the co-user, the disclosed systems can generate and send, for display on an extended-reality device of the co-user, an invitation to join an extended-reality communication session with the user.
Type: Grant
Filed: September 13, 2022
Date of Patent: August 20, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Charlene Mary Atlas, Mark Terrano
-
Patent number: 12067683
Abstract: Methods for placement of location-persistent 3D objects or annotations in an augmented reality scene are disclosed. By capturing location data along with device spatial orientation and the placement of 3D objects or annotations, the augmented reality scene can be recreated and manipulated. Placed 3D objects or annotations can reappear in a subsequent capture by the same or a different device when brought back to the location of the initial capture and placement of objects. Still further, the placed 3D objects or annotations may be supplemented with additional objects or annotations, or the placed objects or annotations may be removed or modified.
Type: Grant
Filed: September 13, 2019
Date of Patent: August 20, 2024
Assignee: STREEM, LLC
Inventors: Ryan R. Fink, Sean M. Adkinson
-
Patent number: 12067684
Abstract: A computerized method comprising acquiring an image of a physical environment comprising one or more physical entities; generating a virtual view based on the acquired image, the virtual view being a 3D representation of the physical environment and comprising 3D data corresponding to the one or more physical entities of the physical environment; displaying the virtual view overlaid on the acquired image of the physical environment; obtaining bounding volumes for a plurality of 3D object models; merging said bounding volumes for the plurality of 3D object models into a virtual bounding volume, said merging occurring with respect to a particular 3D point within each one of the bounding volumes such that the particular 3D points coincide in the virtual bounding volume; and displaying the virtual bounding volume in the virtual view.
Type: Grant
Filed: April 28, 2022
Date of Patent: August 20, 2024
Assignee: Inter IKEA Systems B.V.
Inventors: Martin Enthed, Gustav Olsson
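The merge described above aligns a chosen 3D point inside each bounding volume and takes the union. The sketch below illustrates this with axis-aligned boxes and the box centre as the coinciding point; the centre choice and helper name are assumptions.

```python
# Hedged sketch: express each axis-aligned bounding box relative to a chosen
# anchor point inside it (here the centre) and merge into one virtual volume.
import numpy as np

def merge_bounding_volumes(boxes):
    """boxes: list of (min_corner, max_corner) pairs, each a length-3 sequence."""
    mins, maxs = [], []
    for lo, hi in boxes:
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        anchor = (lo + hi) / 2.0          # the coinciding 3D point (assumed: centre)
        mins.append(lo - anchor)
        maxs.append(hi - anchor)
    return np.min(mins, axis=0), np.max(maxs, axis=0)   # relative to the shared anchor

chair = ([0, 0, 0], [0.5, 0.5, 1.0])
table = ([0, 0, 0], [1.6, 0.9, 0.75])
print(merge_bounding_volumes([chair, table]))
```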
-
Patent number: 12067685
Abstract: An electronic apparatus (1) configured to provide a treatment for a psychological anxiety disorder, the electronic apparatus including: a memory storing at least one instruction; a user interface (13) including a display configured to display a plurality of images; and a processor (11).
Type: Grant
Filed: July 29, 2022
Date of Patent: August 20, 2024
Assignee: MIND SWITCH, AG
Inventors: Michaela Schenker, Andre Schenker
-
Patent number: 12067686
Abstract: A Virtual Reality (VR) computer system and method including a VR headset to be worn by at least one user; one or more pupil sensors located in the VR headset configured and operative to capture and track pupil movement of the at least one user wearing the VR headset; and at least one camera device operative to capture video image of at least one user wearing the VR headset. A computer processor is instructed to generate a two-dimensional (2D) image of the at least one user such that an image of the VR headset is virtually removed from the image of the at least one user wearing the VR headset.
Type: Grant
Filed: December 8, 2022
Date of Patent: August 20, 2024
Inventor: Ronald H. Winston
-
Patent number: 12067687
Abstract: A cross reality system that renders virtual content generated by executing native mode applications may be configured to render web-based content using components that render content from native applications. The system may include a Prism manager that provides Prisms in which content from executing native applications is rendered. For rendering web based content, a browser, accessing the web based content, may be associated with a Prism and may render content into its associated Prism, creating the same immersive experience for the user as when content is generated by a native application. The user may access the web application from the same program launcher menu as native applications. The system may have tools that enable a user to access these capabilities, including by creating for a web location an installable entity that, when processed by the system, results in an icon for the web content in a program launcher menu.
Type: Grant
Filed: July 17, 2023
Date of Patent: August 20, 2024
Assignee: Magic Leap, Inc.
Inventors: Haiyan Zhang, Robert John Cummings MacDonald
-
Patent number: 12067688
Abstract: A virtual object system can coordinate interactions between multiple virtual objects in an artificial reality environment. Embodiments receive a first virtual object, the first virtual object including first properties, where the artificial reality environment is set in a real-world environment. Embodiments register the first properties of the first virtual object to receive notifications of events from the artificial reality environment. Embodiments receive one or more queries from a second virtual object and in response to the one or more queries, respond to the second virtual object with identified features of the real-world environment in which the artificial reality environment is set and identifications of one or more other virtual objects (e.g., the first virtual object). The second virtual object can use the identification of the first virtual object to register for events related to the first virtual object and/or communicate with the first virtual object.
Type: Grant
Filed: February 14, 2022
Date of Patent: August 20, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Miguel Goncalves, Bret Hobbs, Lionel Laurent Reyero, Gabriel Barbosa Nunes, Benjamin Blonder Leizman, Neil Anthony Clifford
-
Patent number: 12067689
Abstract: Systems and methods are disclosed for generating a scaled reconstruction for a consumer product. One method includes receiving digital input comprising a calibration target and an object; defining a three-dimensional coordinate system; positioning the calibration target in the three-dimensional coordinate system; based on the digital input, aligning the object to the calibration target in the three-dimensional coordinate system; and generating a scaled reconstruction of the object based on the alignment of the object to the calibration target in the three-dimensional coordinate system.
Type: Grant
Filed: September 29, 2022
Date of Patent: August 20, 2024
Assignee: Bespoke, Inc.
Inventors: Eric J. Varady, Atul Kanaujia
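A simple sketch of how a calibration target of known physical size can fix the scale of a reconstruction, under the assumption that the target's endpoints have already been located in model coordinates; the helper and values are illustrative, not Bespoke's pipeline.

```python
# Simple sketch: the calibration target's known physical size yields the factor
# that converts model units into real-world units.
import numpy as np

def scale_factor(target_points_model, target_size_mm):
    """target_points_model: two endpoints of the calibration target in model units."""
    measured = np.linalg.norm(np.subtract(*target_points_model))
    return target_size_mm / measured

factor = scale_factor(([0.0, 0.0, 0.0], [2.5, 0.0, 0.0]), target_size_mm=100.0)
object_width_model_units = 1.2
print(f"object width: {object_width_model_units * factor:.1f} mm")   # 48.0 mm
```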
-
Patent number: 12067690
Abstract: An image processing method is provided. The method includes: encoding an input image based on an attention mechanism to obtain an encoding tensor set and an attention map set of the input image; obtaining an encoding result of the input image according to the encoding tensor set and the attention map set, the encoding result of the input image recording an identity feature of a human face in the input image; encoding an expression image to obtain an encoding result of the expression image, the encoding result of the expression image recording an expression feature of a human face in the expression image; and generating an output image according to the encoding result of the input image and the encoding result of the expression image, the output image having the identity feature of the input image and the expression feature of the expression image.
Type: Grant
Filed: October 8, 2021
Date of Patent: August 20, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Tianyu Sun, Haozhi Huang, Wei Liu
-
Patent number: 12067691
Abstract: An apparatus in which the vertical image position of a scene point may depend on the horizontal image position. A mapper generates a modified image having a modified projection by applying a mapping to the first wide angle image corresponding to a mapping from the first projection to a perspective projection, followed by a non-linear vertical mapping from the perspective projection to a modified vertical projection of the modified projection and a non-linear horizontal mapping from the perspective projection to a modified horizontal projection of the modified projection. A disparity estimator then generates disparities for the modified image relative to a second image representing a different viewpoint than the first wide angle image.
Type: Grant
Filed: November 4, 2019
Date of Patent: August 20, 2024
Assignee: Koninklijke Philips N.V.
Inventor: Christiaan Varekamp
-
Patent number: 12067692
Abstract: Systems and methods in accordance with various embodiments of the invention can generate term (or PolyMap) coefficients that properly calibrate any camera lens using information obtained from a calibration pattern (e.g., chessboard patterns). Term coefficients in accordance with several embodiments of the invention can be used to transform a warped image to a dewarped image by mapping image information from the warped image to the dewarped coordinates. Determining calibration coefficients in accordance with certain embodiments of the invention can include novel and inventive processes for capturing calibration images, processing the calibration images, and/or deriving inputs needed for proper calibration as described in this disclosure. Processes described in this description can provide for improvements in the field of image processing, especially in increasing the speed of dewarping processes and in the accurate capture of calibration pattern images.
Type: Grant
Filed: April 30, 2021
Date of Patent: August 20, 2024
Assignee: Immersive Tech, Inc.
Inventors: Micheal Woodrow Burns, Jon Clagg, Kunal Bansal
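As a hedged illustration of applying term (polynomial) coefficients to dewarp a point, the sketch below remaps the radial distance through a polynomial; the coefficient values and function name are made up and do not come from the patent.

```python
# Illustrative sketch: remap a pixel's radial distance from the image centre
# through a polynomial whose coefficients would come from calibration.
import numpy as np

def dewarp_point(x, y, centre, coeffs):
    cx, cy = centre
    dx, dy = x - cx, y - cy
    r = np.hypot(dx, dy)
    if r == 0.0:
        return x, y
    r_new = sum(c * r ** i for i, c in enumerate(coeffs))   # polynomial remap of the radius
    return cx + dx * r_new / r, cy + dy * r_new / r

coeffs = [0.0, 1.05, 0.0, -2.0e-7]   # hypothetical calibration output
print(dewarp_point(800.0, 600.0, centre=(640.0, 480.0), coeffs=coeffs))
```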
-
Patent number: 12067693
Abstract: A method for minimizing latency of moving objects in an augmented reality (AR) display device is described. In one aspect, the method includes determining an initial pose of a visual tracking device, identifying an initial location of an object in an image that is generated by an optical sensor of the visual tracking device, the image corresponding to the initial pose of the visual tracking device, rendering virtual content based on the initial pose and the initial location of the object, retrieving an updated pose of the visual tracking device, tracking an updated location of the object in an updated image that corresponds to the updated pose, and applying a time warp transformation to the rendered virtual content based on the updated pose and the updated location of the object to generate transformed virtual content.
Type: Grant
Filed: November 4, 2021
Date of Patent: August 20, 2024
Assignee: SNAP INC.
Inventors: Bernhard Jung, Daniel Wagner
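A hedged sketch of the final correction step, assuming the time warp reduces to shifting the rendered content by the object's on-screen motion between the render-time pose and the latest tracked pose; a real implementation would use the full pose rather than a 2D translation.

```python
# Hedged sketch: shift (time-warp) already-rendered virtual content by the
# object's on-screen motion between render time and the latest tracked frame.
import numpy as np

def time_warp(rendered_position, initial_object_px, updated_object_px):
    delta = np.asarray(updated_object_px) - np.asarray(initial_object_px)
    return np.asarray(rendered_position) + delta   # move content with the object

print(time_warp(rendered_position=(320, 240),
                initial_object_px=(300, 250),
                updated_object_px=(310, 245)))     # -> [330 235]
```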