Patents by Inventor Rev Lebaredian

Rev Lebaredian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11798514
    Abstract: Embodiments of the present invention provide a novel solution that uses subjective end-user input to generate optimal image quality settings for an application. Embodiments of the present invention enable end-users to rank and/or select various adjustable application parameter settings in a manner that allows them to specify which application parameters and/or settings are most desirable to them for a given application. Based on the feedback received from end-users, embodiments of the present invention may generate optimal settings for whatever performance level the end-user desires. Furthermore, embodiments of the present invention may generate optimal settings that may be benchmarked either on a server farm or on an end-user's client device.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: October 24, 2023
    Assignee: NVIDIA Corporation
    Inventors: John Spitzer, Rev Lebaredian, Tony Tamasi
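
The patent above describes collecting subjective end-user feedback on quality settings and turning it into recommended settings for a given performance level. A minimal sketch of that idea, assuming invented preset names, a simple Borda-count ranking scheme, and placeholder benchmark numbers (none of this reflects the patented method):

```python
from collections import defaultdict

# Hypothetical candidate presets: (name, {parameter: value}, benchmarked_fps).
# The benchmark numbers could come from a server farm or the user's own device.
PRESETS = [
    ("low",    {"texture_quality": 1, "shadows": 0, "antialiasing": 0}, 140.0),
    ("medium", {"texture_quality": 2, "shadows": 1, "antialiasing": 2}, 90.0),
    ("high",   {"texture_quality": 3, "shadows": 2, "antialiasing": 4}, 55.0),
]

def aggregate_rankings(user_rankings):
    """Turn per-user preference rankings into a single score per preset.

    user_rankings: list of lists, each ordering preset names from most to
    least preferred. A simple Borda count is used here.
    """
    scores = defaultdict(float)
    for ranking in user_rankings:
        for position, name in enumerate(ranking):
            scores[name] += len(ranking) - position
    return scores

def pick_optimal(user_rankings, target_fps):
    """Pick the most-preferred preset whose benchmarked FPS meets the target."""
    scores = aggregate_rankings(user_rankings)
    eligible = [p for p in PRESETS if p[2] >= target_fps]
    if not eligible:
        eligible = PRESETS  # fall back to everything if nothing meets the target
    return max(eligible, key=lambda p: scores[p[0]])

if __name__ == "__main__":
    feedback = [
        ["high", "medium", "low"],
        ["medium", "high", "low"],
        ["high", "medium", "low"],
    ]
    name, params, fps = pick_optimal(feedback, target_fps=60)
    print(f"selected '{name}' ({fps} fps): {params}")
```
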
  • Patent number: 11625894
    Abstract: Multiple snapshots of a scene are captured within an executing application (e.g., a video game). When each snapshot is captured, associated color values per pixel and a distance or depth value z per pixel are stored. The depth information from the snapshots is accessed, and a point cloud representing the depth information is constructed. A mesh structure is constructed from the point cloud. The light field(s) on the surface(s) of the mesh structure are calculated. A surface light field is represented as a texture. A renderer uses the surface light field with geometry information to reproduce the scene captured in the snapshots. The reproduced scene can be manipulated and viewed from different perspectives.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: April 11, 2023
    Assignee: NVIDIA Corporation
    Inventors: Dmitry Duka, Rev Lebaredian, Jonathan Small, Ivan Shutov
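
The pipeline in the abstract above starts by turning per-pixel depth from a captured snapshot into a point cloud. A rough sketch of that first step is below, assuming a simple pinhole camera model; the intrinsics, image size, and array layout are placeholders, and the later mesh and surface-light-field stages are not shown:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth image into camera-space 3D points.

    depth: (H, W) array of z values, one per pixel.
    fx, fy, cx, cy: pinhole intrinsics (focal lengths and principal point).
    Returns an (H*W, 3) array of XYZ points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

if __name__ == "__main__":
    # Fake 4x4 depth snapshot; a real capture would also carry per-pixel color.
    depth = np.full((4, 4), 2.0)
    points = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
    print(points.shape)  # (16, 3)
```
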
  • Publication number: 20230004801
    Abstract: In various examples, physical sensor data may be generated by a vehicle in a real-world environment. The physical sensor data may be used to train deep neural networks (DNNs). The DNNs may then be tested in a simulated environment—in some examples using hardware configured for installation in a vehicle to execute an autonomous driving software stack—to control a virtual vehicle in the simulated environment or to otherwise test, verify, or validate the outputs of the DNNs. Prior to use by the DNNs, virtual sensor data generated by virtual sensors within the simulated environment may be encoded to a format consistent with the format of the physical sensor data generated by the vehicle.
    Type: Application
    Filed: August 30, 2022
    Publication date: January 5, 2023
    Inventors: Clement Farabet, John Zedlewski, Zachary Taylor, Greg Heinrich, Claire Delaunay, Mark Daly, Matthew Campbell, Curtis Beeson, Gary Hicok, Michael Cox, Rev Lebaredian, Tony Tamasi, David Auld
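
One step the abstract above calls out is encoding virtual sensor output so it matches the format of data produced by the real vehicle before it reaches the DNNs. A toy illustration of that idea follows; the header layout, field names, and dtypes are invented for the example and do not reflect any actual sensor format:

```python
import struct
import numpy as np

def encode_virtual_camera_frame(rgb_float, timestamp_us, frame_id):
    """Convert a simulator's float RGB frame into a byte layout resembling
    what a physical camera driver might emit: a small header followed by
    8-bit pixel data. The header layout here is purely illustrative.
    """
    h, w, _ = rgb_float.shape
    pixels = np.clip(rgb_float * 255.0, 0, 255).astype(np.uint8)
    header = struct.pack("<IIQI", w, h, timestamp_us, frame_id)
    return header + pixels.tobytes()

if __name__ == "__main__":
    virtual_frame = np.random.rand(4, 6, 3).astype(np.float32)  # stand-in render
    packet = encode_virtual_camera_frame(virtual_frame, timestamp_us=123456, frame_id=7)
    print(len(packet), "bytes")  # 20-byte header + 4*6*3 pixel bytes
```
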
  • Patent number: 11436484
    Abstract: In various examples, physical sensor data may be generated by a vehicle in a real-world environment. The physical sensor data may be used to train deep neural networks (DNNs). The DNNs may then be tested in a simulated environment—in some examples using hardware configured for installation in a vehicle to execute an autonomous driving software stack—to control a virtual vehicle in the simulated environment or to otherwise test, verify, or validate the outputs of the DNNs. Prior to use by the DNNs, virtual sensor data generated by virtual sensors within the simulated environment may be encoded to a format consistent with the format of the physical sensor data generated by the vehicle.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: September 6, 2022
    Assignee: NVIDIA Corporation
    Inventors: Clement Farabet, John Zedlewski, Zachary Taylor, Greg Heinrich, Claire Delaunay, Mark Daly, Matthew Campbell, Curtis Beeson, Gary Hicok, Michael Cox, Rev Lebaredian, Tony Tamasi, David Auld
  • Publication number: 20220134222
    Abstract: A content management system may maintain a scene description that represents a 3D world using hierarchical relationships between elements in a scene graph. Clients may exchange delta information between versions of content being edited and/or shared amongst the clients. Each set of delta information may be assigned a value in a sequence of values which defines an order to apply the sets of delta information to produce synchronized versions of the scene graph. Clients may follow conflict resolution rules to consistently resolve conflicts between sets of delta information. Changes to structural elements of content may be represented procedurally to preserve structural consistency across clients while changes to non-structural elements may be represented declaratively to reduce data size. To store and manage the content, structural elements may be referenced using node identifiers, and non-structural elements may be assigned to the node identifiers as field-value pairs.
    Type: Application
    Filed: November 3, 2020
    Publication date: May 5, 2022
    Inventors: Rev Lebaredian, Michael Kass, Brian Harris, Andrey Shulzhenko
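
A compact sketch of the synchronization idea in the abstract above: each client applies sets of delta information in the order given by their assigned sequence values, structural changes are procedural operations keyed by node identifier, and non-structural changes are declarative field-value assignments. The data shapes and the "later sequence value wins, edits to removed nodes are dropped" conflict rule are assumptions made for the example:

```python
# Each delta set carries a sequence value that fixes the order of application.
# Structural edits are procedural ops ("add_node", "remove_node") on node IDs;
# non-structural edits are declarative field-value pairs assigned to a node ID.

def apply_deltas(scene, delta_sets):
    """scene: {node_id: {field: value}}. Applies deltas in sequence order so
    every client that applies the same sets ends up with the same scene."""
    for delta in sorted(delta_sets, key=lambda d: d["sequence"]):
        for op in delta.get("structural", []):
            if op["op"] == "add_node":
                scene.setdefault(op["node_id"], {})
            elif op["op"] == "remove_node":
                scene.pop(op["node_id"], None)
        for node_id, fields in delta.get("fields", {}).items():
            if node_id in scene:  # conflict rule: ignore edits to removed nodes
                scene[node_id].update(fields)  # later sequence values win
    return scene

if __name__ == "__main__":
    deltas = [
        {"sequence": 2, "fields": {"n1": {"color": "red"}}},
        {"sequence": 1, "structural": [{"op": "add_node", "node_id": "n1"}],
         "fields": {"n1": {"color": "blue"}}},
    ]
    print(apply_deltas({}, deltas))  # {'n1': {'color': 'red'}}
```
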
  • Publication number: 20220101619
    Abstract: A content management system may maintain a scene description that represents a 3D virtual environment and a publish/subscribe model in which clients subscribe to content items that correspond to respective portions of the shared scene description. When changes are made to content, the changes may be served to subscribing clients. Rather than transferring entire descriptions of assets to propagate changes, differences between versions of content may be exchanged, which may be used to construct updated versions of the content. Portions of the scene description may reference other content items, and clients may determine whether to request and load these content items for lazy loading. Content items may be identified by Uniform Resource Identifiers (URIs) used to reference the content items. The content management system may maintain states for client connections including for authentication, for the set of subscriptions in the publish/subscribe model, and for their corresponding version identifiers.
    Type: Application
    Filed: December 3, 2021
    Publication date: March 31, 2022
    Inventors: Rev Lebaredian, Michael Kass, Brian Harris, Andrey Shulzhenko, Dmitry Duka
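
A rough sketch of the publish/subscribe flow described above: clients subscribe to content items identified by URIs, and when an item changes the server sends only the difference between versions rather than the full asset. The per-field diff format, the in-memory store, and the URI scheme shown are simplifying assumptions, not the disclosed system:

```python
class ContentServer:
    """Toy pub/sub content store keyed by URI, serving per-field diffs."""

    def __init__(self):
        self.items = {}        # uri -> {field: value}
        self.versions = {}     # uri -> int
        self.subscribers = {}  # uri -> list of callbacks

    def subscribe(self, uri, callback):
        self.subscribers.setdefault(uri, []).append(callback)

    def publish(self, uri, new_fields):
        old = self.items.get(uri, {})
        diff = {k: v for k, v in new_fields.items() if old.get(k) != v}
        self.items[uri] = {**old, **new_fields}
        self.versions[uri] = self.versions.get(uri, 0) + 1
        for cb in self.subscribers.get(uri, []):
            cb(uri, self.versions[uri], diff)  # serve only the difference

if __name__ == "__main__":
    server = ContentServer()
    server.subscribe("content://scene/chair",
                     lambda uri, v, d: print(uri, "v", v, "diff", d))
    server.publish("content://scene/chair", {"material": "oak", "height": 0.9})
    server.publish("content://scene/chair", {"material": "oak", "height": 1.0})
```
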
  • Patent number: 11227448
    Abstract: A content management system may maintain a scene description that represents a 3D virtual environment and a publish/subscribe model in which clients subscribe to content items that correspond to respective portions of the shared scene description. When changes are made to content, the changes may be served to subscribing clients. Rather than transferring entire descriptions of assets to propagate changes, differences between versions of content may be exchanged, which may be used to construct updated versions of the content. Portions of the scene description may reference other content items, and clients may determine whether to request and load these content items for lazy loading. Content items may be identified by Uniform Resource Identifiers (URIs) used to reference the content items. The content management system may maintain states for client connections including for authentication, for the set of subscriptions in the publish/subscribe model, and for their corresponding version identifiers.
    Type: Grant
    Filed: March 22, 2020
    Date of Patent: January 18, 2022
    Assignee: NVIDIA Corporation
    Inventors: Rev Lebaredian, Michael Kass, Brian Harris, Andrey Shulzhenko, Dmitry Duka
  • Publication number: 20210358188
    Abstract: In various examples, a virtually animated and interactive agent may be rendered for visual and audible communication with one or more users with an application. For example, a conversational artificial intelligence (AI) assistant may be rendered and displayed for visual communication in addition to audible communication with end-users. As such, the AI assistant may leverage the visual domain—in addition to the audible domain—to more clearly communicate with users, including interacting with a virtual environment in which the AI assistant is rendered. Similarly, the AI assistant may leverage audio, video, and/or text inputs from a user to determine a request, mood, gesture, and/or posture of a user for more accurately responding to and interacting with the user.
    Type: Application
    Filed: May 12, 2021
    Publication date: November 18, 2021
    Inventors: Rev Lebaredian, Simon Yuen, Santanu Dutta, Jonathan Michael Cohen, Ratin Kumar
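
The abstract above describes combining audio, video, and text signals to infer what a user wants and how they feel before the rendered agent responds. A minimal, hypothetical routing of those signals is sketched below; the classifiers are stubbed out and nothing here reflects the actual system:

```python
from dataclasses import dataclass

@dataclass
class UserSignals:
    transcript: str          # from speech recognition or typed text
    detected_gesture: str    # from a vision model, e.g. "wave", "point"
    voice_sentiment: float   # -1.0 (negative) .. 1.0 (positive), from audio

def interpret(signals: UserSignals) -> dict:
    """Stub fusion of modalities into a request, mood, and response style."""
    request = signals.transcript.strip().lower()
    mood = "frustrated" if signals.voice_sentiment < -0.3 else "neutral"
    greeting = signals.detected_gesture == "wave"
    return {
        "request": request,
        "mood": mood,
        "respond_with_greeting_animation": greeting,
    }

if __name__ == "__main__":
    print(interpret(UserSignals("where is my order", "wave", -0.5)))
```
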
  • Publication number: 20210201576
    Abstract: Multiple snapshots of a scene are captured within an executing application (e.g., a video game). When each snapshot is captured, associated color values per pixel and a distance or depth value z per pixel are stored. The depth information from the snapshots is accessed, and a point cloud representing the depth information is constructed. A mesh structure is constructed from the point cloud. The light field(s) on the surface(s) of the mesh structure are calculated. A surface light field is represented as a texture. A renderer uses the surface light field with geometry information to reproduce the scene captured in the snapshots. The reproduced scene can be manipulated and viewed from different perspectives.
    Type: Application
    Filed: March 17, 2021
    Publication date: July 1, 2021
    Inventors: Dmitry Duka, Rev Lebaredian, Jonathan Small, Ivan Shutov
  • Patent number: 10984587
    Abstract: Multiple snapshots of a scene are captured within an executing application (e.g., a video game). When each snapshot is captured, associated color values per pixel and a distance or depth value z per pixel are stored. The depth information from the snapshots is accessed, and a point cloud representing the depth information is constructed. A mesh structure is constructed from the point cloud. The light field(s) on the surface(s) of the mesh structure are calculated. A surface light field is represented as a texture. A renderer uses the surface light field with geometry information to reproduce the scene captured in the snapshots. The reproduced scene can be manipulated and viewed from different perspectives.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: April 20, 2021
    Assignee: NVIDIA Corporation
    Inventors: Dmitry Duka, Rev Lebaredian, Jonathan Small, Ivan Shutov
  • Publication number: 20210074237
    Abstract: Embodiments of the present invention provide a novel solution that uses subjective end-user input to generate optimal image quality settings for an application. Embodiments of the present invention enable end-users to rank and/or select various adjustable application parameter settings in a manner that allows them to specify which application parameters and/or settings are most desirable to them for a given application. Based on the feedback received from end-users, embodiments of the present invention may generate optimal settings for whatever performance level the end-user desires. Furthermore, embodiments of the present invention may generate optimal settings that may be benchmarked either on a server farm or on an end-user's client device.
    Type: Application
    Filed: November 20, 2020
    Publication date: March 11, 2021
    Inventors: John Spitzer, Rev Lebaredian, Tony Tamasi
  • Publication number: 20210049827
    Abstract: A content management system may maintain a scene description that represents a 3D virtual environment and a publish/subscribe model in which clients subscribe to content items that correspond to respective portions of the shared scene description. When changes are made to content, the changes may be served to subscribing clients. Rather than transferring entire descriptions of assets to propagate changes, differences between versions of content may be exchanged, which may be used to construct updated versions of the content. Portions of the scene description may reference other content items, and clients may determine whether to request and load these content items for lazy loading. Content items may be identified by Uniform Resource Identifiers (URIs) used to reference the content items. The content management system may maintain states for client connections including for authentication, for the set of subscriptions in the publish/subscribe model, and for their corresponding version identifiers.
    Type: Application
    Filed: March 22, 2020
    Publication date: February 18, 2021
    Inventors: Rev Lebaredian, Michael Kass, Brian Harris, Andrey Shulzhenko, Dmitry Duka
  • Patent number: 10878770
    Abstract: Embodiments of the present invention provide a novel solution that uses subjective end-user input to generate optimal image quality settings for an application. Embodiments of the present invention enable end-users to rank and/or select various adjustable application parameter settings in a manner that allows them to specify which application parameters and/or settings are most desirable to them for a given application. Based on the feedback received from end-users, embodiments of the present invention may generate optimal settings for whatever performance level the end-user desires. Furthermore, embodiments of the present invention may generate optimal settings that may be benchmarked either on a server farm or on an end-user's client device.
    Type: Grant
    Filed: December 2, 2013
    Date of Patent: December 29, 2020
    Assignee: NVIDIA Corporation
    Inventors: John Spitzer, Rev Lebaredian, Tony Tamasi
  • Patent number: 10795691
    Abstract: A system, method, and computer program product are provided for simultaneously determining settings for a plurality of parameter variations. In use, a plurality of parameter variations associated with a device is identified. Additionally, settings for each of the plurality of parameter variations are determined simultaneously.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: October 6, 2020
    Assignee: NVIDIA Corporation
    Inventors: John F. Spitzer, Rev Lebaredian, Yury Uralsky
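
The abstract above is terse, but one plausible reading is that candidate settings are evaluated across many device parameter variations at once rather than one at a time. The sketch below fans a stand-in benchmark out over variations in parallel; the variation list and scoring are invented purely for illustration:

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def benchmark(variation):
    """Stand-in benchmark: score one (gpu_clock, resolution) variation."""
    gpu_clock, resolution = variation
    return variation, gpu_clock / (resolution[0] * resolution[1]) * 1e6

if __name__ == "__main__":
    variations = list(product([1500, 1800], [(1920, 1080), (2560, 1440)]))
    # Evaluate every variation simultaneously rather than sequentially.
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(benchmark, variations))
    best = max(results, key=results.get)
    print("best variation:", best, "score:", round(results[best], 1))
```
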
  • Publication number: 20200105047
    Abstract: The disclosure provides virtual view broadcasters and a broadcaster for a cloud-based computer application. In one embodiment, the virtual view broadcaster includes: (1) a cloud-based renderer configured to generate virtual view images from a virtual camera positioned in a computer application, wherein the cloud-based renderer includes multiple Graphics Processing Units (GPUs), and (2) an image processor configured to generate a virtual view stream for the virtual camera employing the virtual view images, wherein a single one of the GPUs is used to generate the virtual view images for the virtual camera.
    Type: Application
    Filed: December 3, 2019
    Publication date: April 2, 2020
    Inventors: Jen-Hsun Huang, Rev Lebaredian, Chad Vivoli
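
The claim above hinges on dedicating a single GPU to generating the virtual view images for each virtual camera. A skeletal version of that assignment is sketched below; the render call is a stub and the round-robin bookkeeping is an assumption, not the disclosed implementation:

```python
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    name: str
    position: tuple

def assign_cameras_to_gpus(cameras, gpu_count):
    """Map each virtual camera to exactly one GPU (round-robin here)."""
    return {cam.name: i % gpu_count for i, cam in enumerate(cameras)}

def render_frame(camera, gpu_id):
    """Stub: a real renderer would issue draw calls on the assigned GPU."""
    return f"frame from {camera.name} rendered on GPU {gpu_id}"

if __name__ == "__main__":
    cams = [VirtualCamera("spectator_1", (0, 5, -10)),
            VirtualCamera("spectator_2", (3, 2, 8))]
    assignment = assign_cameras_to_gpus(cams, gpu_count=4)
    for cam in cams:
        print(render_frame(cam, assignment[cam.name]))
```
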
  • Publication number: 20200081724
    Abstract: A system, method, and computer program product are provided for simultaneously determining settings for a plurality of parameter variations. In use, a plurality of parameter variations associated with a device is identified. Additionally, settings for each of the plurality of parameter variations are determined simultaneously.
    Type: Application
    Filed: November 12, 2019
    Publication date: March 12, 2020
    Inventors: John F. Spitzer, Rev Lebaredian, Yury Uralsky
  • Publication number: 20200051030
    Abstract: A cloud-centric platform is used for generating virtual three-dimensional (3D) content, that allows users to collaborate online and that can be connected to different software tools (applications). Using the platform, virtual environments (e.g., scenes, worlds, universes) can be created, accessed, and interacted with simultaneously by multiple collaborative content creators using varying content creation or development applications.
    Type: Application
    Filed: August 12, 2019
    Publication date: February 13, 2020
    Inventors: Rev Lebaredian, Michael Kass, Brian Harris, Andrey Shulzhenko, Dmitry Duka
  • Publication number: 20200020155
    Abstract: Multiple snapshots of a scene are captured within an executing application (e.g., a video game). When each snapshot is captured, associated color values per pixel and a distance or depth value z per pixel are stored. The depth information from the snapshots is accessed, and a point cloud representing the depth information is constructed. A mesh structure is constructed from the point cloud. The light field(s) on the surface(s) of the mesh structure are calculated. A surface light field is represented as a texture. A renderer uses the surface light field with geometry information to reproduce the scene captured in the snapshots. The reproduced scene can be manipulated and viewed from different perspectives.
    Type: Application
    Filed: June 7, 2019
    Publication date: January 16, 2020
    Inventors: Dmitry Duka, Rev Lebaredian, Jonathan Small, Ivan Shutov
  • Patent number: 10509658
    Abstract: A system, method, and computer program product are provided for simultaneously determining settings for a plurality of parameter variations. In use, a plurality of parameter variations associated with a device is identified. Additionally, settings for each of the plurality of parameter variations are determined simultaneously.
    Type: Grant
    Filed: July 6, 2012
    Date of Patent: December 17, 2019
    Assignee: NVIDIA Corporation
    Inventors: John F. Spitzer, Rev Lebaredian, Yury Uralsky
  • Patent number: 10497168
    Abstract: The disclosure provides a virtual view broadcaster, a virtual view broadcasting system, and a video gaming broadcaster. In one embodiment, the virtual view broadcaster includes: (1) a cloud-based renderer configured to generate virtual view images from a virtual camera positioned in a computer application, and (2) an image processor configured to generate a virtual view stream for the virtual camera employing the rendered virtual view images, wherein the virtual view images are from different viewing directions at the virtual camera.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: December 3, 2019
    Assignee: NVIDIA Corporation
    Inventors: Jen-Hsun Huang, Rev Lebaredian, Chad Vivoli
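
This last entry differs from the earlier broadcaster patent in that the virtual view images come from different viewing directions at the same virtual camera, which suggests something like a cube-map style capture that is later assembled into a single panoramic stream. The sketch below shows only the directional lookup a stitcher might use when mapping output pixels back to the six captures; the projection and resampling math is omitted, and the face naming is an assumption for illustration:

```python
CUBE_FACES = ["+x", "-x", "+y", "-y", "+z", "-z"]  # six viewing directions

def face_for_direction(d):
    """Return which of the six directional captures covers direction d.

    This is the core lookup used when stitching per-direction renders into
    a single panoramic (e.g. equirectangular) frame.
    """
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= ax and ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"

if __name__ == "__main__":
    # Directions sampled from an output panorama's pixel grid would be looked
    # up the same way; here we just spot-check a few.
    for d in [(1, 0.2, 0.1), (0, -1, 0.3), (0.1, 0.2, -1)]:
        print(d, "->", face_for_direction(d))
```
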