Patents Assigned to NVIDIA
  • Patent number: 12121801
    Abstract: In various examples, a user may access or acquire an application to download to the user's local computing device. Upon accessing the application, a local instance of the application may begin downloading to the computing device, and the user may be given the option to play a cloud-hosted instance of the application. If the user selects to play a hosted instance of the application, the cloud-hosted instance of the application may begin streaming while the local instance of the application downloads to the user's computing device in the background. Application state data may be stored and associated with the user during gameplay such that, once the local instance of the application has downloaded, the user may switch from the hosted instance of the application to the local instance to begin playing locally, with the application state information accounted for.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: October 22, 2024
    Assignee: NVIDIA Corporation
    Inventor: Andrew Fear
  • Patent number: 12124832
    Abstract: Devices and methods to update semiconductor components are disclosed. In at least one embodiment, a device updates semiconductor components independently of the operational state of those components.
    Type: Grant
    Filed: January 21, 2021
    Date of Patent: October 22, 2024
    Assignee: NVIDIA Corporation
    Inventors: Ryan Albright, William Andrew Mecham, Michael Thompson, Aaron Richard Carkin, William Ryan Weese, Benjamin Goska
  • Patent number: 12124346
    Abstract: In various examples, one or more components or regions of a processing unit—such as a processing core and/or a component thereof—may be tested for faults during deployment in the field. To perform testing while in deployment, the state of a component subject to test may be retrieved and/or stored during the test to maintain state integrity, the component may be clamped to communicatively isolate the component from other components of the processing unit, a test vector may be applied to the component, and the output of the component may be compared against an expected output to determine whether any faults are present. The state of the component may be restored after testing, and the clamp removed, thereby returning the component to its operating state without a perceivable detriment to operation of the processing unit in deployment.
    Type: Grant
    Filed: December 20, 2022
    Date of Patent: October 22, 2024
    Assignee: NVIDIA Corporation
    Inventors: Jonah Alben, Sachin Idgunji, Jue Wu, Shantanu Sarangi
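As a reading aid, the test sequence described in the abstract above can be sketched as straight-line control flow. Every type and function below is an invented placeholder rather than a real driver or hardware API; the sketch only mirrors the save-state, clamp, test, compare, restore steps.

```cpp
#include <cstdint>

// Illustrative stand-ins only; none of these types exist in a real driver API.
struct TestVector     { uint32_t pattern; };
struct Signature      { uint32_t value; bool operator==(const Signature& o) const { return value == o.value; } };
struct ComponentState { uint32_t regs; };

struct Component {
    uint32_t regs = 0, response = 0;
    ComponentState save_state() const    { return {regs}; }
    void restore_state(ComponentState s) { regs = s.regs; }
    void clamp()                         {}  // communicatively isolate the component
    void unclamp()                       {}  // reconnect it to the rest of the chip
    void apply(const TestVector& v)      { response = v.pattern ^ 0xA5A5A5A5u; }  // stand-in for the real test logic
    Signature read_signature() const     { return {response}; }
};

// In-field test sequence: save state, clamp, apply the test vector,
// compare the response, then restore state and unclamp.
bool test_component_in_field(Component& comp, const TestVector& vec, const Signature& expected)
{
    ComponentState saved = comp.save_state();   // preserve state integrity
    comp.clamp();                               // isolate from neighboring logic
    comp.apply(vec);                            // drive the stored test pattern
    Signature observed = comp.read_signature(); // capture the test response
    comp.restore_state(saved);                  // return to the pre-test state
    comp.unclamp();                             // resume normal operation
    return observed == expected;                // false indicates a detected fault
}
```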
  • Patent number: 12124308
    Abstract: Apparatuses, systems, and techniques to optimize processor performance. In at least one embodiment, a method increases an operation voltage of one or more processors based, at least in part, on one or more error rates of the one or more processors.
    Type: Grant
    Filed: June 23, 2022
    Date of Patent: October 22, 2024
    Assignee: NVIDIA Corporation
    Inventors: Benjamin D. Faulkner, Padmanabhan Kannan, Srinivasan Raghuraman, Peng Cheng Shen, Divya Ramakrishnan, Swanand Santosh Bindoo, Sreedhar Narayanaswamy, Amey Y. Marathe
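A minimal sketch of the idea in the abstract above: raise the operating voltage when the observed error rate exceeds a target, subject to a ceiling. The function name, thresholds, and step size are all assumptions made for illustration.

```cpp
#include <algorithm>

// Raise the operating voltage when the observed error rate is too high,
// clamped to a safe maximum; all numbers here are placeholders.
double adjust_voltage(double voltage_v, double error_rate,
                      double target_rate = 1e-9,   // acceptable error rate
                      double step_v = 0.005,       // increment per adjustment
                      double max_v = 1.10)         // safety ceiling
{
    if (error_rate > target_rate)
        voltage_v = std::min(voltage_v + step_v, max_v);  // add voltage margin
    return voltage_v;
}
```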
  • Patent number: 12121823
    Abstract: In various examples, game session audio data—e.g., representing speech of users participating in the game—may be monitored and/or analyzed to determine whether inappropriate language is being used. Where inappropriate language is identified, the portions of the audio corresponding to the inappropriate language may be edited or modified such that other users do not hear the inappropriate language. As a result, toxic behavior or language within instances of gameplay may be censored—thereby enhancing the user experience and making online gaming environments safer for more vulnerable populations. In some embodiments, the inappropriate language may be reported—e.g., automatically—to the game developer or game application host in order to suspend, ban, or otherwise manage users of the system that have a proclivity for toxic behavior.
    Type: Grant
    Filed: October 3, 2022
    Date of Patent: October 22, 2024
    Assignee: NVIDIA Corporation
    Inventors: Jithin Thomas, Neilesh Chorakhalikar, Ambrish Dantrey, Revanth Reddy Nalla, Prakshep Mehta
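The editing step in the abstract above (modifying the portions of audio that correspond to flagged language) can be sketched as a simple post-processing pass. Detection is assumed to happen upstream; the time spans, buffer layout, and function name below are illustrative assumptions.

```cpp
#include <vector>
#include <utility>
#include <cstddef>

// Mute flagged spans in a mono PCM buffer. `flagged` holds (start, end) times in
// seconds for words that upstream analysis has identified as inappropriate.
void censor_audio(std::vector<float>& samples, int sample_rate,
                  const std::vector<std::pair<double, double>>& flagged)
{
    for (const auto& span : flagged) {
        size_t begin = static_cast<size_t>(span.first  * sample_rate);
        size_t end   = static_cast<size_t>(span.second * sample_rate);
        for (size_t i = begin; i < end && i < samples.size(); ++i)
            samples[i] = 0.0f;   // silence the span (a bleep tone would also work)
    }
}
```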
  • Patent number: 12118382
    Abstract: Apparatuses, systems, and techniques to parallelize operations in one or more programs with data copies from global memory to shared memory in each of the one or more programs. In at least one embodiment, a program performs operations on shared data, asynchronously copies that data to shared memory, and continues performing additional operations in parallel while the copy proceeds, until an indicator provided by an application programming interface that facilitates parallel computing, such as CUDA, informs the program that the shared data has been copied to shared memory.
    Type: Grant
    Filed: February 14, 2022
    Date of Patent: October 15, 2024
    Assignee: NVIDIA Corporation
    Inventor: Harold Carter Edwards
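The abstract above mentions an indicator provided by a parallel-computing API such as CUDA. A minimal sketch of that overlap pattern follows, using CUDA's public cooperative_groups::memcpy_async and cg::wait API as one way such an indicator is exposed; the kernel name, data layout, and launch configuration are assumptions for illustration, not the patent's implementation.

```cpp
#include <cooperative_groups.h>
#include <cooperative_groups/memcpy_async.h>
namespace cg = cooperative_groups;

// Hypothetical kernel illustrating the overlap described in the abstract above.
// Launch with one dynamic shared-memory tile per block, e.g.
//   overlap_copy<<<grid, block, block * sizeof(float)>>>(in, out);
// with a grid that exactly covers the input.
__global__ void overlap_copy(const float* __restrict__ in, float* __restrict__ out)
{
    extern __shared__ float tile[];                    // shared-memory staging buffer
    cg::thread_block block = cg::this_thread_block();
    int base = blockIdx.x * blockDim.x;

    // Start the asynchronous global->shared copy; threads are not blocked here.
    cg::memcpy_async(block, tile, in + base, sizeof(float) * blockDim.x);

    // Independent work proceeds in parallel while the copy is in flight.
    float unrelated = 2.0f * threadIdx.x;

    // The "indicator" from the abstract: wait until the copy has completed.
    cg::wait(block);

    out[base + threadIdx.x] = tile[threadIdx.x] + unrelated;
}
```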
  • Patent number: 12120122
    Abstract: Disclosed are apparatuses, systems, and techniques that improve efficiency and decrease latency of processing of authorization requests by cloud-based access servers that evaluate access rights to access various cloud-based services. The techniques include but are not limited to generating and processing advanced authorization requests that anticipate future authorization requests that may be generated by cloud-based services. The techniques further include processing of frequently accessed policies and policy data dependencies and preemptive generation and processing of authorization requests that are replicated from existing authorization requests.
    Type: Grant
    Filed: July 20, 2022
    Date of Patent: October 15, 2024
    Assignee: NVIDIA Corporation
    Inventor: Dhruva Lakshmana Rao Batni
  • Patent number: 12117298
    Abstract: According to an aspect of an embodiment, operations may comprise obtaining a pose graph that comprises a plurality of nodes. The operations may also comprise dividing the pose graph into a plurality of pose subgraphs, each pose subgraph comprising one or more respective pose subgraph interior nodes and one or more respective pose subgraph boundary nodes. The operations may also comprise generating one or more boundary subgraphs based on the plurality of pose subgraphs, each of the one or more boundary subgraphs comprising one or more respective boundary subgraph boundary nodes and comprising one or more respective boundary subgraph interior nodes. The operations may also comprise obtaining an optimized pose graph by performing a pose graph optimization. The pose graph optimization may comprise performing a pose subgraph optimization of the plurality of pose subgraphs and performing a boundary subgraph optimization of the plurality of boundary subgraphs.
    Type: Grant
    Filed: November 21, 2022
    Date of Patent: October 15, 2024
    Assignee: NVIDIA Corporation
    Inventor: Chen Chen
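The division step in the abstract above can be illustrated with a small helper that, given a partition of the pose graph into subgraphs, marks each node as a boundary node (it has an edge into another subgraph) or an interior node. The data layout is an assumption, and the subgraph and boundary-subgraph optimizations themselves are not shown.

```cpp
#include <vector>
#include <cstddef>

// Mark each pose-graph node as a boundary node of its subgraph if any of its
// edges crosses into a different subgraph; all remaining nodes are interior nodes.
std::vector<bool> find_boundary_nodes(const std::vector<std::vector<int>>& adjacency,  // neighbors per node
                                      const std::vector<int>& subgraph_of)             // subgraph index per node
{
    std::vector<bool> is_boundary(adjacency.size(), false);
    for (size_t v = 0; v < adjacency.size(); ++v)
        for (int u : adjacency[v])
            if (subgraph_of[u] != subgraph_of[v])
                is_boundary[v] = true;   // this edge crosses a subgraph border
    return is_boundary;
}
```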
  • Patent number: 12120152
    Abstract: Disclosed are apparatuses, systems, and techniques that improve efficiency and decrease latency of processing of authorization requests by a cloud service. The techniques include obtaining, from an access server, a snapshot associated with processing an authorization request to evaluate an access to a resource of the cloud service and generating, using the snapshot, preemptive authorization requests by modifying the authorization request with a new user identity or a new resource identity. The techniques further include receiving, from the cloud service, a subsequent authorization request to evaluate an authorization of a user to access a particular resource of the cloud service, determining that the subsequent authorization request corresponds to one of the preemptive authorization requests, and providing, to the cloud service, an authorization response for the user to access the resource, based on the evaluation of that preemptive authorization request.
    Type: Grant
    Filed: July 20, 2022
    Date of Patent: October 15, 2024
    Assignee: NVIDIA Corporation
    Inventor: Dhruva Lakshmana Rao Batni
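This entry and patent 12120122 above both anticipate authorization requests before they arrive. A minimal cache-style sketch of that idea follows; keys, types, and names are assumptions, and a real access server would also handle expiry and policy changes.

```cpp
#include <string>
#include <unordered_map>

// Cache of preemptively evaluated authorization decisions, keyed by user and
// resource. A later request that matches a preemptive entry is answered from
// the cache instead of triggering a full policy evaluation.
struct PreemptiveAuthCache {
    std::unordered_map<std::string, bool> decisions;   // key -> allow / deny

    static std::string key(const std::string& user, const std::string& resource) {
        return user + '\n' + resource;
    }
    void store(const std::string& user, const std::string& resource, bool allow) {
        decisions[key(user, resource)] = allow;
    }
    // Returns true and fills `allow` if this request was already evaluated preemptively.
    bool lookup(const std::string& user, const std::string& resource, bool& allow) const {
        auto it = decisions.find(key(user, resource));
        if (it == decisions.end()) return false;
        allow = it->second;
        return true;
    }
};
```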
  • Patent number: 12118643
    Abstract: Apparatuses, systems, and techniques to perform a K-nearest-neighbor query. In at least one embodiment, a set of bounding boxes corresponding to a set of primitives is generated that allows the query to be solved using light transport simulation acceleration features of a GPU.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: October 15, 2024
    Assignee: NVIDIA Corporation
    Inventor: Nathan Vollmer Morrical
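A CPU-side sketch of the core idea in the abstract above: surround each primitive with a fixed-radius bounding box so that a point query reduces to box containment tests, which is the kind of work a GPU's ray-tracing acceleration structure can perform. The brute-force loop below stands in for that hardware traversal; the radius and data layout are assumptions.

```cpp
#include <vector>
#include <algorithm>
#include <utility>

struct Point { float x, y, z; };
struct Box   { Point lo, hi; };   // axis-aligned bounding box around one primitive

// Build one box of half-width r around each point; on a GPU these boxes would be
// fed to the ray-tracing (BVH) hardware that this sketch stands in for.
std::vector<Box> build_boxes(const std::vector<Point>& pts, float r)
{
    std::vector<Box> boxes;
    for (const Point& p : pts)
        boxes.push_back({{p.x - r, p.y - r, p.z - r}, {p.x + r, p.y + r, p.z + r}});
    return boxes;
}

// Candidates are points whose box contains the query; the k nearest are then
// chosen by exact distance.
std::vector<int> knn(const std::vector<Point>& pts, const std::vector<Box>& boxes,
                     const Point& q, int k)
{
    std::vector<std::pair<float, int>> cand;   // (squared distance, index)
    for (size_t i = 0; i < boxes.size(); ++i) {
        const Box& b = boxes[i];
        if (q.x >= b.lo.x && q.x <= b.hi.x && q.y >= b.lo.y && q.y <= b.hi.y &&
            q.z >= b.lo.z && q.z <= b.hi.z) {
            float dx = q.x - pts[i].x, dy = q.y - pts[i].y, dz = q.z - pts[i].z;
            cand.push_back({dx * dx + dy * dy + dz * dz, static_cast<int>(i)});
        }
    }
    std::sort(cand.begin(), cand.end());
    std::vector<int> result;
    for (int i = 0; i < k && i < static_cast<int>(cand.size()); ++i)
        result.push_back(cand[i].second);
    return result;
}
```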
  • Patent number: 12118454
    Abstract: Neural networks, in many cases, include convolution layers that are configured to perform many convolution operations that require multiplication and addition operations. Compared with performing multiplication on integer, fixed-point, or floating-point format values, performing multiplication on logarithmic format values is straightforward and energy efficient as the exponents are simply added. However, performing addition on logarithmic format values is more complex. Conventionally, addition is performed by converting the logarithmic format values to integers, computing the sum, and then converting the sum back into the logarithmic format. Instead, logarithmic format values may be added by decomposing the exponents into separate quotient and remainder components, sorting the quotient components based on the remainder components, summing the sorted quotient components to produce partial sums, and multiplying the partial sums by the remainder components to produce a sum.
    Type: Grant
    Filed: December 12, 2023
    Date of Patent: October 15, 2024
    Assignee: NVIDIA Corporation
    Inventors: William James Dally, Rangharajan Venkatesan, Brucek Kurdo Khailany
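The addition scheme in the abstract above can be sketched numerically. The exponents are assumed to carry F fractional bits, so an encoded exponent e represents 2^(e / 2^F); the sketch buckets terms by remainder (a simple stand-in for the sorting step described in the abstract), sums the integer powers within each bucket, and then scales each partial sum by its remainder factor.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative only: sum values stored as log2 exponents with F fractional bits.
// Each exponent e encodes the value 2^(e / 2^F); split e into quotient q = e >> F
// and remainder r = e & (2^F - 1), so the value is 2^q * 2^(r / 2^F).
constexpr int F = 2;   // fractional bits in the exponent (assumed)

double log_sum(const int* exponents, int count)
{
    // One partial sum per remainder bucket: accumulate the integer powers 2^q.
    double partial[1 << F] = {};
    for (int i = 0; i < count; ++i) {
        int q = exponents[i] >> F;
        int r = exponents[i] & ((1 << F) - 1);
        partial[r] += std::ldexp(1.0, q);   // add 2^q to the bucket for remainder r
    }
    // Scale each bucket by its remainder factor 2^(r / 2^F) and accumulate.
    double sum = 0.0;
    for (int r = 0; r < (1 << F); ++r)
        sum += partial[r] * std::exp2(double(r) / (1 << F));
    return sum;
}

int main()
{
    int e[] = {5, 6, 9};                 // encodes 2^1.25, 2^1.5, 2^2.25 with F = 2
    std::printf("%f\n", log_sum(e, 3));  // ~2.378 + 2.828 + 4.757 = ~9.96
}
```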
  • Patent number: 12118353
    Abstract: In various examples, a VPU and associated components may be optimized to improve VPU performance and throughput. For example, the VPU may include a min/max collector, automatic store predication functionality, a SIMD data path organization that allows for inter-lane sharing, a transposed load/store with stride parameter functionality, a load with permute and zero insertion functionality, hardware, logic, and memory layout functionality to allow for two point and two by two point lookups, and per memory bank load caching capabilities. In addition, decoupled accelerators may be used to offload VPU processing tasks to increase throughput and performance, and a hardware sequencer may be included in a DMA system to reduce programming complexity of the VPU and the DMA system. The DMA and VPU may execute a VPU configuration mode that allows the VPU and DMA to operate without a processing controller for performing dynamic region based data movement operations.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: October 15, 2024
    Assignee: NVIDIA Corporation
    Inventors: Ching-Yu Hung, Ravi P Singh, Jagadeesh Sankaran, Yen-Te Shih, Ahmad Itani
  • Patent number: 12111179
    Abstract: In various examples, a method to manage map data includes storing a map of a geographic area using an immutable tree. The immutable tree comprises a plurality of nodes stored using a distributed hash table. The plurality of nodes include a plurality of map tiles. At least two map tiles of the plurality of map tiles cover different geographic subregions of the geographic area of the map. The method includes hosting one or more binary large objects (BLOBs) that correspond to the plurality of map tiles in an origin data plane. The method includes making the one or more BLOBs available for distribution to one or more client devices using a content delivery network (CDN).
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: October 8, 2024
    Assignee: NVIDIA Corporation
    Inventors: Galen Collins, Vladimir Shestak
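A loose, single-process sketch of the storage idea in the abstract above: tiles and tree nodes are keyed by a hash of their contents, which makes the tree immutable by construction. Here std::hash stands in for a real content hash and an in-memory map stands in for the distributed hash table; both are simplifying assumptions.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Content-addressed store standing in for the distributed hash table: every tile
// BLOB or tree node is keyed by a hash of its bytes, so nodes are immutable by
// construction (a changed tile gets a new key rather than overwriting an old one).
struct TileStore {
    std::unordered_map<size_t, std::string> blobs;

    size_t put(const std::string& bytes) {              // store a tile BLOB or serialized node
        size_t key = std::hash<std::string>{}(bytes);   // a real system would use a cryptographic hash
        blobs.emplace(key, bytes);
        return key;
    }
    // An interior node of the immutable tree only records its children's keys,
    // e.g. the tiles covering the sub-regions of this node's geographic region.
    size_t put_node(const std::vector<size_t>& child_keys) {
        std::string serialized;
        for (size_t k : child_keys) serialized += std::to_string(k) + ',';
        return put(serialized);
    }
};
```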
  • Patent number: 12112445
    Abstract: Generation of three-dimensional (3D) object models may be challenging for users without a sufficient skill set for content creation and may also be resource intensive. One or more style transfer networks may be used for part-aware style transformation of both geometric features and textural components of a source asset to a target asset. The source asset may be segmented into particular parts and then ellipsoid approximations may be warped according to correspondence of the particular parts to the target assets. Moreover, a texture associated with the target asset may be used to warp or adjust a source texture, where the new texture can be applied to the warped parts.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: October 8, 2024
    Assignee: NVIDIA Corporation
    Inventors: Kangxue Yin, Jun Gao, Masha Shugrina, Sameh Khamis, Sanja Fidler
  • Patent number: 12111842
    Abstract: An initiating node (C) in a storage platform (100) receives a modification request (312, 314) for changing an object (O). The initiating node (C), using system configuration information (127), identifies an owner node (A) and a backup node (B) for the object (O) and sends change data (324, 334) to the owner node (A) and the backup node (B). The owner node (A) modifies the object (O) with the data (324) from the initiating node (C) and sends an update request (352) that does not include the data (324) to the backup node (B). The backup node (B) modifies a backup object (O′) with data (334) from the initiating node (C).
    Type: Grant
    Filed: March 15, 2022
    Date of Patent: October 8, 2024
    Assignee: NVIDIA Corporation
    Inventors: Siamak Nazari, Jonathan A. McDowell, Nigel Kerr
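The message flow in the abstract above, reduced to a schematic sketch: the initiating node sends the change data to both the owner and the backup, and the owner's update request to the backup carries no data. All types and calls below are invented for illustration and are not a real storage API.

```cpp
#include <string>

// Illustrative stand-ins; not a real storage API.
struct ChangeData { std::string bytes; };
struct Object     { std::string contents; };

struct OwnerNode {                               // node A, owns object O
    Object object;
    void apply(const ChangeData& d) { object.contents = d.bytes; }
};

struct BackupNode {                              // node B, holds backup object O'
    Object backup;
    ChangeData staged;                           // change data received directly from C
    void stage(const ChangeData& d) { staged = d; }
    void on_update_request()        { backup.contents = staged.bytes; }  // the request itself carries no data
};

// Initiating node C: send the same change data to owner A and backup B;
// A then tells B to commit via a data-free update request.
void modify(OwnerNode& a, BackupNode& b, const ChangeData& d)
{
    a.apply(d);             // owner modifies O with the data from C
    b.stage(d);             // backup already holds the data from C
    b.on_update_request();  // owner's update request triggers the backup commit
}
```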
  • Patent number: 12112148
    Abstract: Embodiments of the present disclosure relate to applications and platforms for configuring machine learning models for training and deployment using graphical components in a development environment. For example, systems and methods are disclosed that relate to determining one or more machine learning models and one or more processing operations corresponding to the one or more machine learning models. Further, a model component may be generated using the one or more machine learning models, the one or more processing operations, and one or more extension libraries, where the one or more extension libraries indicate one or more deployment parameters related to the one or more machine learning models. The model component may accordingly provide data that may be used to deploy and use the one or more machine learning models.
    Type: Grant
    Filed: June 8, 2022
    Date of Patent: October 8, 2024
    Assignee: NVIDIA Corporation
    Inventors: Shaunak Gupte, Prashant Gaikwad, Chandrahas Jagadish Ramalad, Bhushan Rupde
  • Patent number: 12111381
    Abstract: One or more embodiments of the present disclosure may relate to communicating RADAR (RAdio Detection And Ranging) data to a distributed map system that is configured to generate map data based on the RADAR data. In these or other embodiments, certain compression operations may be performed on the RADAR data to reduce the amount of data that is communicated from the ego-machines to the map system.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: October 8, 2024
    Assignee: NVIDIA Corporation
    Inventor: Niharika Arora
  • Patent number: 12112422
    Abstract: A differentiable ray casting technique may be applied to a model of a three-dimensional (3D) scene (including its lighting configuration) or object to optimize one or more parameters of the model. The one or more parameters define geometry (topology and shape), materials, and lighting configuration (e.g., an environment map, a high-resolution texture that represents the light coming from all directions in a sphere) for the model. Visibility is computed in 3D space by casting at least two rays from each ray origin (where the two rays define a ray cone). The model is rendered to produce a model image that may be compared with a reference image (or photograph) of a reference 3D scene to compute image space differences. Visibility gradients in 3D space are computed and backpropagated through the computations to reduce differences between the model image and the reference image.
    Type: Grant
    Filed: June 15, 2022
    Date of Patent: October 8, 2024
    Assignee: NVIDIA Corporation
    Inventors: Jon Niklas Theodor Hasselgren, Carl Jacob Munkberg
  • Patent number: 12112428
    Abstract: In various examples, shader bindings may be recorded in a shader binding table that includes shader records. Geometry of a 3D scene may be instantiated using object instances, and each may be associated with a respective set of the shader records using a location identifier of the set of shader records in memory. The set of shader records may represent shader bindings for an object instance under various predefined conditions. One or more of these predefined conditions may be implicit in the way the shader records are arranged in memory (e.g., indexed by ray type, by sub-geometry, etc.). For example, a section selector value (e.g., a section index) may be computed to locate and select a shader record based at least in part on a result of a ray tracing query (e.g., what sub-geometry was hit, what ray type was traced, etc.).
    Type: Grant
    Filed: July 17, 2023
    Date of Patent: October 8, 2024
    Assignee: NVIDIA Corporation
    Inventors: Martin Stich, Ignacio Llamas, Steven Parker
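The section-selector computation in the abstract above resembles the shader-binding-table indexing commonly used in GPU ray tracing; the sketch below shows one plausible arrangement, with records laid out per sub-geometry and per ray type. The field names and layout are assumptions, not the patent's actual scheme.

```cpp
#include <cstdint>

// One possible in-memory arrangement of an instance's shader records:
// records_per_geom consecutive records (one per ray type) for each sub-geometry.
struct ShaderRecordSet {
    uint32_t base;              // location identifier of this instance's record set
    uint32_t records_per_geom;  // number of ray types, i.e. records per sub-geometry
};

// Section selector: pick the shader record for a ray tracing query result.
uint32_t select_record(const ShaderRecordSet& set,
                       uint32_t sub_geometry_index,  // which sub-geometry was hit
                       uint32_t ray_type)            // which ray type was traced
{
    return set.base + sub_geometry_index * set.records_per_geom + ray_type;
}
```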
  • Patent number: 12114469
    Abstract: Systems and methods for cooling a datacenter are disclosed. In at least one embodiment, a flow controller adapter of a cooling manifold is configured to interchangeably receive a flow controller of a plurality of flow controllers, where the flow controller adapter is associated with a rack-side flow controller and with a tube therebetween, and is movable within the cooling manifold to allow different positions for mating a flow controller with a server-side flow controller of a server tray or box.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: October 8, 2024
    Assignee: NVIDIA Corporation
    Inventor: Ali Heydari