Patents Examined by Ming Wu
-
Patent number: 12045924
Abstract: Graphics processing unit (GPU) performance and power efficiency are improved using machine learning to tune operating parameters based on performance monitor values and application information. Performance monitor values are processed using machine learning techniques to generate model parameters, which are used by a control unit within the GPU to provide real-time updates to the operating parameters. In one embodiment, a neural network processes the performance monitor values to generate operating parameters in real-time.
Type: Grant
Filed: September 15, 2022
Date of Patent: July 23, 2024
Assignee: NVIDIA Corporation
Inventors: Rouslan L. Dimitrov, Dale L. Kirkland, Emmett M. Kilgariff, Sachin Satish Idgunji, Siddharth Sharma
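The closed-loop tuning this abstract describes — performance monitor values in, updated operating parameters out — can be sketched as a small feedforward network. The layer sizes, weights, and ReLU choice below are illustrative assumptions, not details from the patent.

```python
import numpy as np

def tune_operating_params(perf_counters, W1, b1, W2, b2):
    """One step of a hypothetical control loop: a small MLP maps
    performance monitor values to updated operating parameters."""
    h = np.maximum(0.0, W1 @ perf_counters + b1)  # ReLU hidden layer
    return W2 @ h + b2                            # linear output layer
```

In a real controller the weights would come from the offline training the abstract mentions, and the outputs would be clamped to safe hardware ranges.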
-
Patent number: 12033261
Abstract: One example method involves a processing device that performs operations that include receiving a request to retarget a source motion into a target object. Operations further include providing the target object to a contact-aware motion retargeting neural network trained to retarget the source motion into the target object. The contact-aware motion retargeting neural network is trained by accessing training data that includes a source object performing the source motion. The contact-aware motion retargeting neural network generates retargeted motion for the target object, based on a self-contact having a pair of input vertices. The retargeted motion is subject to motion constraints that: (i) preserve a relative location of the self-contact and (ii) prevent self-penetration of the target object.
Type: Grant
Filed: July 26, 2021
Date of Patent: July 9, 2024
Assignee: ADOBE INC.
Inventors: Ruben Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit, Aaron Hertzmann
-
Patent number: 12033274
Abstract: The present disclosure provides new and innovative systems and methods for generating eye models with realistic color. In an example, a computer-implemented method includes obtaining refraction data, obtaining mesh data, generating aligned model data by aligning the refraction data and the mesh data, calculating refraction points in the aligned model data, and calculating an approximated iris color based on the refraction points and the aligned model data by calculating melanin information for the aligned model data based on the refraction points for iris pixels in the aligned model data.
Type: Grant
Filed: August 25, 2022
Date of Patent: July 9, 2024
Assignee: TRANSFOLIO, LLC
Inventor: Jeroen Snepvangers
-
Patent number: 12011647
Abstract: A virtual reality (VR) system comprising a head-mounted display (HMD) and handheld controller set is enhanced to provide a more realistic end user VR experience, e.g., for generalized surface interactions in the VR environment. The techniques herein leverage controller-less calibration with any type of surface, followed by surface interactions in the VR environment. An example use case is an interactive fitness training session.
Type: Grant
Filed: January 3, 2022
Date of Patent: June 18, 2024
Assignee: Liteboxer Technologies, Inc.
Inventors: Jeffrey W. Morin, Andrew J. Rollins, Rafael E. Alam, Gabriel LaForge
-
Patent number: 12014460
Abstract: Robust temporal gradients, representing differences in shading results, can be computed between current and previous frames in a temporal denoiser for ray-traced renderers. Backward projection can be used to locate matching surfaces, with the relevant parameters of those surfaces being carried forward and used for patching. Backward projection can be performed for each stratum in a current frame, a stratum representing a set of adjacent pixels. A pixel from each stratum is selected that has a matching surface in the previous frame, using motion vectors generated during the rendering process. A comparison of the depth, the normals, or the visibility buffer data can be used to determine whether a given surface is the same in the current frame and the previous frame; if so, the parameters of the surface from the previous frame's G-buffer are used to patch the G-buffer for the current frame.
Type: Grant
Filed: May 15, 2023
Date of Patent: June 18, 2024
Assignee: Nvidia Corporation
Inventor: Alexey Panteleev
-
Patent number: 12008776
Abstract: For multi-view video content represented in the MVD (Multi-view+Depth) format, the depth maps may be processed to improve the coherency therebetween. In one implementation, to process a target view based on an input view, pixels of the input view are first projected into the world coordinate system, then into the target view to form a projected view. The texture of the projected view and the texture of the target view are compared. If the difference at a pixel is small, then the depth of the target view at that pixel is adjusted, for example, replaced by the corresponding depth of the projected view. When the multi-view video content is encoded and decoded in a system, depth map processing may be applied in the pre-processing and post-processing modules to improve video compression efficiency and the rendering quality.
Type: Grant
Filed: February 13, 2020
Date of Patent: June 11, 2024
Assignee: InterDigital VC Holdings, Inc.
Inventors: Didier Doyen, Benoit Vandame, Guillaume Boisson
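The refinement step in this abstract — adopt the projected view's depth wherever its texture closely matches the target view's — might look like the sketch below. The grayscale H×W arrays and the threshold `tau` are assumptions for illustration, not values from the patent.

```python
import numpy as np

def refine_target_depth(proj_texture, proj_depth, tgt_texture, tgt_depth, tau=10.0):
    """Where the projected view's texture closely matches the target view's,
    adopt the projected depth; elsewhere keep the target's original depth."""
    diff = np.abs(proj_texture.astype(float) - tgt_texture.astype(float))
    return np.where(diff < tau, proj_depth, tgt_depth)
```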
-
Patent number: 11995764
Abstract: Provided herein is a method, apparatus, and computer program product for identifying locations along a road segment as a tunnel based on point cloud data. Methods may include: receiving point cloud data representative of an environment of a trajectory along a road segment; generating, from the point cloud data, one or more two-dimensional images in one or more corresponding planes orthogonal to the trajectory; determining, for the one or more two-dimensional images, a probability as to whether a respective two-dimensional image is captured within a tunnel along the road segment; and classifying a point along the road segment at a position corresponding to a respective one of the one or more two-dimensional images as a tunnel point in response to the probability as to whether the respective two-dimensional image is captured within a tunnel along the road segment satisfying a predetermined value.
Type: Grant
Filed: June 30, 2021
Date of Patent: May 28, 2024
Assignee: HERE GLOBAL B.V.
Inventor: Nicholas Armenoff
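The per-point classification step could be sketched as thresholding the per-slice tunnel probabilities, here with an added moving-average smoothing over neighboring slices; both the threshold and the window size are assumptions, not values specified by the patent.

```python
def classify_tunnel_points(slice_probs, threshold=0.5, window=3):
    """Label each trajectory point as a tunnel point where the windowed
    average of per-slice probabilities meets the threshold."""
    n = len(slice_probs)
    labels = []
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        avg = sum(slice_probs[lo:hi]) / (hi - lo)  # smooth out single noisy slices
        labels.append(avg >= threshold)
    return labels
```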
-
Patent number: 11995754
Abstract: Systems and methods are provided for enhanced animation generation based on using motion mapping with local bone phases. An example method includes accessing first animation control information generated for a first frame of an electronic game, including local bone phases representing phase information associated with contacts of a plurality of rigid bodies of an in-game character with an in-game environment. The method further includes executing a local motion matching process for each of the plurality of local bone phases and generating a second pose of the character model based on the plurality of matched local poses for a second frame of the electronic game.
Type: Grant
Filed: January 23, 2023
Date of Patent: May 28, 2024
Assignee: Electronic Arts Inc.
Inventors: Wolfram Sebastian Starke, Yiwei Zhao, Mohsen Sardari, Harold Henry Chaput, Navid Aghdaie
-
Patent number: 11978160
Abstract: A method of generating map images in a computing device includes loading source map data or vector data onto a host; loading an output tile system; determining which output tiles to process by cross-referencing the source map data or the vector data with the output tile system; loading source maps or vector data to a graphics processing unit memory; executing a graphics processing unit kernel to process data and return a map tile to the host; writing the map tile to a file on a database; determining if all tiles have been processed; and generating output map tiles. A map tile generation system includes computing devices; servers connected via a network; non-transitory computer-readable storage media storing instructions and machine-learning graphics processor units coupled to the servers via a kernel interface program.
Type: Grant
Filed: August 25, 2022
Date of Patent: May 7, 2024
Inventor: Daniel E. Curtis
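The "determining which output tiles to process" step amounts to intersecting the source data's bounding box with the tile grid. Below is a sketch using the common Web Mercator (slippy-map) tiling scheme, which the patent does not necessarily use; the scheme and the (west, south, east, north) box convention are assumptions.

```python
import math

def tiles_covering(bbox, zoom):
    """XYZ tiles intersecting a (west, south, east, north) lon/lat box.
    Assumes latitudes within Web Mercator bounds (about +/-85.05 degrees)."""
    def tile_xy(lon, lat, z):
        n = 2 ** z
        x = int((lon + 180.0) / 360.0 * n)
        lat_r = math.radians(lat)
        # standard Web Mercator y-tile formula
        y = int((1.0 - math.log(math.tan(lat_r) + 1.0 / math.cos(lat_r)) / math.pi) / 2.0 * n)
        return x, y
    x0, y0 = tile_xy(bbox[0], bbox[3], zoom)  # north-west corner
    x1, y1 = tile_xy(bbox[2], bbox[1], zoom)  # south-east corner
    return [(x, y) for x in range(x0, x1 + 1) for y in range(y0, y1 + 1)]
```

Each returned (x, y) pair would then be handed to the GPU kernel for rendering.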
-
Patent number: 11972743
Abstract: A processing system comprises a first integrated circuit (IC) and a second IC. The first IC comprises first image processing circuitry, first display panel driver circuitry, and first communication circuitry. The first image processing circuitry is configured to generate a first overlay image by overlaying a first partial input image with a first image element based on first partial input image data representing the first partial input image and first image element data representing the first image element. The first display panel driver circuitry is configured to drive a display panel based on the first overlay image. The first communication circuitry is configured to output second image element data representing a second image element to the second IC.
Type: Grant
Filed: March 8, 2023
Date of Patent: April 30, 2024
Assignee: Synaptics Incorporated
Inventors: Akihito Kumamoto, Keiichi Hirano
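The first IC's compositing step — overlaying a partial input image with an image element — reduces to writing the element's pixels into a region of the frame. This opaque-overlay sketch (no alpha blending) and the (x, y) placement parameters are assumptions for illustration.

```python
import numpy as np

def overlay(partial_image, element, x, y):
    """Composite an image element onto a partial input image at (x, y);
    a hypothetical opaque overlay with no blending."""
    out = partial_image.copy()
    h, w = element.shape[:2]
    out[y:y + h, x:x + w] = element  # element fully replaces the region
    return out
```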
-
Patent number: 11962892
Abstract: A method of dental treatment may include receiving photos of a person's dentition, identifying a stage of a treatment plan administered to the person's dentition, gathering a three-dimensional (3D) model of the person's dentition corresponding to the stage of the treatment plan, projecting attributes of the 3D model of the person's dentition onto an image plane to get a projected representation of the person's dentition at the stage of the treatment plan, comparing the photos to the projected representation to derive an error image representing the comparison, and analyzing the error image for discrepancies, wherein the discrepancies represent one or more deviations of the person's dentition from the stage of the treatment plan.
Type: Grant
Filed: July 22, 2021
Date of Patent: April 16, 2024
Assignee: ALIGN TECHNOLOGY, INC.
Inventors: Christopher E. Cramer, Rajiv Venkata, Leela Parvathaneni, Phillip Thomas Harris, Sravani Gurijala, Svetozar Hubenov, Sebastien Hareng, Guotu Li, Yun Gao, Chad Clayton Brown
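The comparison stage — photos versus the projected representation — can be sketched as a per-pixel error image plus a discrepancy mask. The grayscale inputs and the threshold `tau` are assumed for illustration; the patent does not specify the error metric.

```python
import numpy as np

def dentition_error_image(photo, projected, tau=20.0):
    """Per-pixel absolute error between a photo and the projected 3D model,
    plus a boolean mask flagging pixels whose error exceeds tau."""
    err = np.abs(photo.astype(float) - projected.astype(float))
    return err, err > tau
```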
-
Patent number: 11954813
Abstract: A three-dimensional scene constructing method, apparatus and system, and a storage medium. The three-dimensional scene constructing method includes: acquiring point cloud data of a key object and a background object in a target scene, wherein the point cloud data of the key object comprises three-dimensional information and corresponding feature information, and the point cloud data of the background object at least comprises three-dimensional information; establishing a feature database of the target scene, wherein the feature database at least comprises a key object feature library for recording three-dimensional information and feature information of the key object; performing registration and fusion on the point cloud data of the key object and the point cloud data of the background object, so as to obtain a three-dimensional model of the target scene; and when updating the three-dimensional model, reconstructing the three-dimensional model in a regional manner according to the feature database.
Type: Grant
Filed: September 2, 2021
Date of Patent: April 9, 2024
Assignee: BOE Technology Group Co., Ltd.
Inventors: Youxue Wang, Xiaohui Ma, Kai Geng, Mengjun Hou, Qian Ha
-
Patent number: 11949869
Abstract: A three-dimensional data encoding method includes: (i) in a first case where a layered structure is generated by classifying three-dimensional points into layers: encoding attribute information for the three-dimensional points based on the layered structure; and generating a bitstream including layer information utilized for the generation of the layered structure; and (ii) in a second case where the three-dimensional points are not classified: encoding attribute information for the three-dimensional points; and generating a bitstream not including the layer information.
Type: Grant
Filed: September 14, 2021
Date of Patent: April 2, 2024
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Toshiyasu Sugio, Noritaka Iguchi
-
Patent number: 11941751
Abstract: Techniques for aligning images generated by two cameras are disclosed. This alignment is performed by computing a relative 3D orientation between the two cameras. A first gravity vector for a first camera and a second gravity vector for a second camera are determined. A first camera image is obtained from the first camera, and a second camera image is obtained from the second camera. A first alignment process is performed to partially align the first camera's orientation with the second camera's orientation. This process is performed by aligning the gravity vectors, thereby resulting in two degrees of freedom of the relative 3D orientation being eliminated. Visual correspondences between the two images are identified. A second alignment process is performed to fully align the orientations. This process is performed by using the identified visual correspondences to identify and eliminate a third degree of freedom of the relative 3D orientation.
Type: Grant
Filed: March 30, 2023
Date of Patent: March 26, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Raymond Kirk Price, Michael Bleyer, Christopher Douglas Edmonds, Sudipta Narayan Sinha
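The first alignment step can be sketched with the standard Rodrigues construction: the minimal rotation taking one gravity vector onto the other pins two of the three rotational degrees of freedom, leaving only the rotation about gravity for the visual correspondences to resolve. A sketch, assuming the two vectors are not exactly antiparallel:

```python
import numpy as np

def align_gravity(g_a, g_b):
    """Rotation matrix taking unit vector g_a onto g_b (Rodrigues formula).
    Assumes g_a and g_b are not exactly antiparallel."""
    a = np.asarray(g_a, float); a /= np.linalg.norm(a)
    b = np.asarray(g_b, float); b /= np.linalg.norm(b)
    v = np.cross(a, b)               # rotation axis (unnormalized)
    c = float(np.dot(a, b))          # cosine of the rotation angle
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + (K @ K) / (1.0 + c)
```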
-
Patent number: 11941770
Abstract: A method and a system for garment try-on. The method includes: capturing a three-dimensional (3D) image of a customer; obtaining a first 3D pose of the customer in the 3D image; applying a machine learning model to the 3D pose to generate a first skinned multi-person linear model (SMPL) pose; calculating an angle of the whole-body rotation of the customer based on the first SMPL pose; when the angle is in a predefined range relative to a front direction of the customer: constructing an SMPL model using the first SMPL pose; and when the angle is out of the predefined range: generating a second SMPL pose using the two-dimensional (2D) component of the 3D image, and constructing the SMPL model using the second SMPL pose.
Type: Grant
Filed: December 30, 2020
Date of Patent: March 26, 2024
Assignees: BEIJING WODONG TIANJUN INFORMATION TECHNOLOGY CO., LTD., JD.COM AMERICAN TECHNOLOGIES CORPORATION
Inventors: Xiaochuan Fan, Dan Miao, Chumeng Lyu
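The branch on the whole-body rotation angle reduces to a simple selection; the 45-degree front-facing range below is an assumed value, since the abstract leaves the predefined range unspecified.

```python
def choose_smpl_pose(angle_deg, pose_from_3d, pose_from_2d, max_front_angle=45.0):
    """Use the 3D-derived SMPL pose while the body faces roughly front,
    otherwise fall back to the 2D-derived pose (threshold is hypothetical)."""
    if abs(angle_deg) <= max_front_angle:
        return pose_from_3d
    return pose_from_2d
```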
-
Patent number: 11943271
Abstract: A method, computer program, and computer system are provided for streaming immersive media. Content is ingested in a first two-dimensional format or a first three-dimensional format, whereby the format references a neural network. The ingested content is converted to a second two-dimensional or a second three-dimensional format based on the referenced neural network. The converted content is streamed to a client end-point, such as a television, a computer, a head-mounted display, a lenticular light field display, a holographic display, an augmented reality display, or a dense light field display.
Type: Grant
Filed: August 20, 2021
Date of Patent: March 26, 2024
Assignee: TENCENT AMERICA LLC
Inventors: Arianne Hinds, Stephan Wenger
-
Patent number: 11928830
Abstract: Disclosed are methods and systems for generating three-dimensional reconstructions of environments. A system, for example, may include a housing having an image sensor directed in a first direction and a distance sensor directed in a second direction and a control unit including a processor and a memory storing instructions. The processor may be configured to execute the instructions to: generate a first 3D model of an environment; generate a plurality of revolved 3D models by revolving the first 3D model relative to the image sensor to a plurality of positions within a predetermined angular range; match a set of distance values to one of the revolved 3D models; determine an angular position of the second direction relative to the first direction; and generate a 3D reconstruction of the environment.
Type: Grant
Filed: December 22, 2021
Date of Patent: March 12, 2024
Assignee: Honeywell International Inc.
Inventors: Zhiguo Ren, Alberto Speranzon, Carl Dins, Juan Hu, Zhiyong Dai, Vijay Venkataraman
-
Patent number: 11925519
Abstract: Method for evaluating a dental situation of a patient. The method has the following successive steps:
1) generating an initial model of at least one dental arch of the patient, preferably by means of a scanner;
2) splitting the initial model in order to define a tooth model for at least some of the teeth represented on the initial model and thereby to obtain a split model;
3) determining an initial support curve of the tooth models in the split model;
4) fixing each tooth model virtually on the initial support curve, preferably by computer;
5) modifying the split model by deformation of the initial support curve according to a deformed support curve, so as to obtain a first deformed model, in which the tooth models are aligned according to the deformed support curve;
6) presenting the first deformed model.
Type: Grant
Filed: July 6, 2020
Date of Patent: March 12, 2024
Assignee: DENTAL MONITORING
Inventors: Philippe Salah, Thomas Pellissard, Laurent Debraux, Louis-Charles Roisin
-
Patent number: 11910995
Abstract: The application relates to the problem of navigating a surgical instrument (at 301, 311) towards a region-of-interest (at 312) in endoscopic surgery when an image (300) provided by the endoscope is obscured at least partly by obscuring matter (at 303), wherein the obscuring matter is a leaking body fluid, debris or smoke caused by ablation. To address this problem, a computer-implemented method is proposed, wherein, upon detecting that the image from the endoscope is at least partly obscured, a second image is determined based on a sequence of historic images and based on the current position and orientation of the endoscope. Furthermore, a virtual image (310) is generated based on the determined second image.
Type: Grant
Filed: July 10, 2020
Date of Patent: February 27, 2024
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Bernardus Hendrikus Wilhelmus Hendriks, Caifeng Shan, Marco Lai, Robert Johannes Frederik Homan, Drazenko Babic
-
Patent number: 11900521
Abstract: An apparatus includes an electronic display configured to be positioned in a first location and one or more processors electronically coupled to the electronic display. The processors receive a video from a server. The video depicts a view of a second location and includes an image of a rectangular casing, a frame, and one or more muntins. The image is composited with the video by the server to provide an illusion of a window in the second location to a user viewing the video. The rectangular casing surrounds the window. The processors synchronize a time-of-view at the second location in the video with a time-of-day at the first location and synchronize a second length-of-day at the second location in the video with a first length-of-day at the first location. The processors transmit the video to the electronic display for viewing by the user.
Type: Grant
Filed: August 17, 2021
Date of Patent: February 13, 2024
Assignee: LiquidView Corp
Inventors: Mitchell Braff, Jan C. Hobbel, Paulina A. Perrault, Adam Sah, Kangil Cheon, Yeongkeun Jeong, Grishma Rao, Noah Michael Shibley, Hyerim Shin, Marcelle van Beusekom