Patents by Inventor Rev Lebaredian
Rev Lebaredian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11798514
Abstract: Embodiments of the present invention provide a novel solution that uses subjective end-user input to generate optimal image quality settings for an application. Embodiments of the present invention enable end-users to rank and/or select various adjustable application parameter settings in a manner that allows them to specify which application parameters and/or settings are most desirable to them for a given application. Based on the feedback received from end-users, embodiments of the present invention may generate optimal settings for whatever performance level the end-user desires. Furthermore, embodiments of the present invention may generate optimal settings that may be benchmarked either on a server farm or on an end-user's client device.
Type: Grant
Filed: November 20, 2020
Date of Patent: October 24, 2023
Assignee: NVIDIA Corporation
Inventors: John Spitzer, Rev Lebaredian, Tony Tamasi
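The ranking-based feedback this abstract describes can be illustrated with a minimal sketch. The function name, the per-tier vote structure, and the Borda-count aggregation are all illustrative assumptions, not the patented method:

```python
# Hypothetical sketch: aggregate end-user rankings of settings presets into
# a recommended preset per performance tier. The Borda-count scheme is an
# assumption for illustration.
from collections import defaultdict

def recommend_presets(rankings):
    """rankings: {tier: [ordered preset lists, one per user]} -> {tier: best preset}."""
    best = {}
    for tier, user_orders in rankings.items():
        scores = defaultdict(int)
        for order in user_orders:
            n = len(order)
            for rank, preset in enumerate(order):
                scores[preset] += n - rank  # Borda count: higher rank, more points
        best[tier] = max(scores, key=scores.get)
    return best

votes = {
    "high": [["ultra", "high", "medium"],
             ["high", "ultra", "medium"],
             ["ultra", "medium", "high"]],
}
print(recommend_presets(votes))  # -> {'high': 'ultra'}
```

The winning preset per tier could then be benchmarked, as the abstract notes, on a server farm or on the end-user's own device.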
-
Patent number: 11625894
Abstract: Multiple snapshots of a scene are captured within an executing application (e.g., a video game). When each snapshot is captured, associated color values per pixel and a distance or depth value z per pixel are stored. The depth information from the snapshots is accessed, and a point cloud representing the depth information is constructed. A mesh structure is constructed from the point cloud. The light field(s) on the surface(s) of the mesh structure are calculated. A surface light field is represented as a texture. A renderer uses the surface light field with geometry information to reproduce the scene captured in the snapshots. The reproduced scene can be manipulated and viewed from different perspectives.
Type: Grant
Filed: March 17, 2021
Date of Patent: April 11, 2023
Assignee: NVIDIA Corporation
Inventors: Dmitry Duka, Rev Lebaredian, Jonathan Small, Ivan Shutov
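The first step of the pipeline above, turning per-pixel depth values into a point cloud, can be sketched as a standard pinhole-camera unprojection. The intrinsics (fx, fy, cx, cy) and the depth layout are assumptions for illustration, not details from the patent:

```python
# Illustrative sketch: unproject per-pixel depth z values from one snapshot
# into camera-space 3D points (the point-cloud step of the pipeline).
def depth_to_points(depth, fx, fy, cx, cy):
    """depth: 2D list of z values -> list of (x, y, z) camera-space points."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # skip pixels with no depth sample
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

depth = [[0.0, 2.0],
         [2.0, 4.0]]
cloud = depth_to_points(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(cloud)  # three points; the empty pixel is skipped
```

Points gathered from all snapshots would then feed the meshing and surface-light-field stages the abstract describes.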
-
Publication number: 20230004801
Abstract: In various examples, physical sensor data may be generated by a vehicle in a real-world environment. The physical sensor data may be used to train deep neural networks (DNNs). The DNNs may then be tested in a simulated environment—in some examples using hardware configured for installation in a vehicle to execute an autonomous driving software stack—to control a virtual vehicle in the simulated environment or to otherwise test, verify, or validate the outputs of the DNNs. Prior to use by the DNNs, virtual sensor data generated by virtual sensors within the simulated environment may be encoded to a format consistent with the format of the physical sensor data generated by the vehicle.
Type: Application
Filed: August 30, 2022
Publication date: January 5, 2023
Inventors: Clement Farabet, John Zedlewski, Zachary Taylor, Greg Heinrich, Claire Delaunay, Mark Daly, Matthew Campbell, Curtis Beeson, Gary Hicok, Michael Cox, Rev Lebaredian, Tony Tamasi, David Auld
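The final encoding step described above can be sketched as a small adapter: simulator output is converted so that a DNN trained on physical captures sees a consistent format. The field names and the float-to-uint8 quantization are assumptions for illustration:

```python
# Hedged sketch: encode a virtual sensor frame into a (hypothetical)
# physical-camera frame layout so the DNN input format matches real captures.
def encode_virtual_frame(sim_frame, timestamp_us, sensor_id):
    """sim_frame: 2D list of float intensities in [0, 1] from a virtual sensor.
    Returns a dict shaped like the assumed physical camera frame."""
    pixels = [[min(255, max(0, round(p * 255))) for p in row] for row in sim_frame]
    return {
        "sensor_id": sensor_id,        # matches the physical sensor's naming
        "timestamp_us": timestamp_us,  # same clock domain as real captures
        "encoding": "uint8",
        "data": pixels,
    }

frame = encode_virtual_frame([[0.0, 0.5], [1.0, 0.25]], 1000, "cam_front")
print(frame["data"])  # -> [[0, 128], [255, 64]]
```

With the formats aligned, the same DNN and driving stack can consume either real or simulated sensor streams unchanged.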
-
Patent number: 11436484
Abstract: In various examples, physical sensor data may be generated by a vehicle in a real-world environment. The physical sensor data may be used to train deep neural networks (DNNs). The DNNs may then be tested in a simulated environment—in some examples using hardware configured for installation in a vehicle to execute an autonomous driving software stack—to control a virtual vehicle in the simulated environment or to otherwise test, verify, or validate the outputs of the DNNs. Prior to use by the DNNs, virtual sensor data generated by virtual sensors within the simulated environment may be encoded to a format consistent with the format of the physical sensor data generated by the vehicle.
Type: Grant
Filed: March 27, 2019
Date of Patent: September 6, 2022
Assignee: NVIDIA Corporation
Inventors: Clement Farabet, John Zedlewski, Zachary Taylor, Greg Heinrich, Claire Delaunay, Mark Daly, Matthew Campbell, Curtis Beeson, Gary Hicok, Michael Cox, Rev Lebaredian, Tony Tamasi, David Auld
-
Publication number: 20220134222
Abstract: A content management system may maintain a scene description that represents a 3D world using hierarchical relationships between elements in a scene graph. Clients may exchange delta information between versions of content being edited and/or shared amongst the clients. Each set of delta information may be assigned a value in a sequence of values which defines an order to apply the sets of delta information to produce synchronized versions of the scene graph. Clients may follow conflict resolution rules to consistently resolve conflicts between sets of delta information. Changes to structural elements of content may be represented procedurally to preserve structural consistency across clients while changes to non-structural elements may be represented declaratively to reduce data size. To store and manage the content, structural elements may be referenced using node identifiers, and non-structural elements may be assigned to the node identifiers as field-value pairs.
Type: Application
Filed: November 3, 2020
Publication date: May 5, 2022
Inventors: Rev Lebaredian, Michael Kass, Brian Harris, Andrey Shulzhenko
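The delta-ordering scheme above can be illustrated with a minimal sketch: delta sets carry sequence values that define apply order, structural edits are procedural operations, and non-structural edits are declarative field-value assignments on node identifiers. The operation names and data shapes are assumptions, not the published design:

```python
# Illustrative sketch: apply ordered delta sets to a scene-graph dict.
# Structural edits (add/remove node) are procedural ops; non-structural
# edits are declarative field-value assignments keyed by node identifier.
def apply_deltas(scene, deltas):
    """scene: {node_id: {field: value}}; deltas: list of (seq, ops), any order."""
    for _, ops in sorted(deltas):              # sequence value defines apply order
        for op in ops:
            if op["kind"] == "add_node":       # procedural, structural
                scene.setdefault(op["node"], {})
            elif op["kind"] == "remove_node":  # procedural, structural
                scene.pop(op["node"], None)
            elif op["kind"] == "set_field":    # declarative; last writer wins
                scene[op["node"]][op["field"]] = op["value"]
    return scene

deltas = [
    (2, [{"kind": "set_field", "node": "cube", "field": "color", "value": "red"}]),
    (1, [{"kind": "add_node", "node": "cube"}]),
]
print(apply_deltas({}, deltas))  # -> {'cube': {'color': 'red'}}
```

Because every client sorts by the same sequence values and follows the same rules, all replicas converge on an identical scene graph.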
-
Publication number: 20220101619
Abstract: A content management system may maintain a scene description that represents a 3D virtual environment and a publish/subscribe model in which clients subscribe to content items that correspond to respective portions of the shared scene description. When changes are made to content, the changes may be served to subscribing clients. Rather than transferring entire descriptions of assets to propagate changes, differences between versions of content may be exchanged, which may be used to construct updated versions of the content. Portions of the scene description may reference other content items, and clients may determine whether to request and load these content items for lazy loading. Content items may be identified by Uniform Resource Identifiers (URIs) used to reference the content items. The content management system may maintain states for client connections, including for authentication, for the set of subscriptions in the publish/subscribe model, and for their corresponding version identifiers.
Type: Application
Filed: December 3, 2021
Publication date: March 31, 2022
Inventors: Rev Lebaredian, Michael Kass, Brian Harris, Andrey Shulzhenko, Dmitry Duka
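The subscriber side of this publish/subscribe flow can be sketched in a few lines: a client subscribes to a content URI and reconstructs each new version from served differences rather than a full asset description. The class, the URI scheme, and the diff shape are all illustrative assumptions:

```python
# Hedged sketch: a client keeps (version, content) per subscribed URI and
# updates its local copy from served diffs instead of full transfers.
class ContentClient:
    def __init__(self):
        self.subscriptions = {}  # uri -> (version, content dict)

    def subscribe(self, uri, version, content):
        self.subscriptions[uri] = (version, dict(content))

    def on_diff(self, uri, new_version, changed_fields):
        version, content = self.subscriptions[uri]
        content.update(changed_fields)  # reconstruct the updated version
        self.subscriptions[uri] = (new_version, content)

client = ContentClient()
client.subscribe("omni://scene/car", "v1", {"color": "blue", "wheels": 4})
client.on_diff("omni://scene/car", "v2", {"color": "green"})
print(client.subscriptions["omni://scene/car"])
# -> ('v2', {'color': 'green', 'wheels': 4})
```

Only the changed fields cross the wire, and the per-URI version identifier lets the server know which diff each client still needs.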
-
Patent number: 11227448
Abstract: A content management system may maintain a scene description that represents a 3D virtual environment and a publish/subscribe model in which clients subscribe to content items that correspond to respective portions of the shared scene description. When changes are made to content, the changes may be served to subscribing clients. Rather than transferring entire descriptions of assets to propagate changes, differences between versions of content may be exchanged, which may be used to construct updated versions of the content. Portions of the scene description may reference other content items, and clients may determine whether to request and load these content items for lazy loading. Content items may be identified by Uniform Resource Identifiers (URIs) used to reference the content items. The content management system may maintain states for client connections, including for authentication, for the set of subscriptions in the publish/subscribe model, and for their corresponding version identifiers.
Type: Grant
Filed: March 22, 2020
Date of Patent: January 18, 2022
Assignee: NVIDIA Corporation
Inventors: Rev Lebaredian, Michael Kass, Brian Harris, Andrey Shulzhenko, Dmitry Duka
-
Publication number: 20210358188
Abstract: In various examples, a virtually animated and interactive agent may be rendered for visual and audible communication with one or more users with an application. For example, a conversational artificial intelligence (AI) assistant may be rendered and displayed for visual communication in addition to audible communication with end-users. As such, the AI assistant may leverage the visual domain—in addition to the audible domain—to more clearly communicate with users, including interacting with a virtual environment in which the AI assistant is rendered. Similarly, the AI assistant may leverage audio, video, and/or text inputs from a user to determine a request, mood, gesture, and/or posture of a user for more accurately responding to and interacting with the user.
Type: Application
Filed: May 12, 2021
Publication date: November 18, 2021
Inventors: Rev Lebaredian, Simon Yuen, Santanu Dutta, Jonathan Michael Cohen, Ratin Kumar
-
Publication number: 20210201576
Abstract: Multiple snapshots of a scene are captured within an executing application (e.g., a video game). When each snapshot is captured, associated color values per pixel and a distance or depth value z per pixel are stored. The depth information from the snapshots is accessed, and a point cloud representing the depth information is constructed. A mesh structure is constructed from the point cloud. The light field(s) on the surface(s) of the mesh structure are calculated. A surface light field is represented as a texture. A renderer uses the surface light field with geometry information to reproduce the scene captured in the snapshots. The reproduced scene can be manipulated and viewed from different perspectives.
Type: Application
Filed: March 17, 2021
Publication date: July 1, 2021
Inventors: Dmitry Duka, Rev Lebaredian, Jonathan Small, Ivan Shutov
-
Patent number: 10984587
Abstract: Multiple snapshots of a scene are captured within an executing application (e.g., a video game). When each snapshot is captured, associated color values per pixel and a distance or depth value z per pixel are stored. The depth information from the snapshots is accessed, and a point cloud representing the depth information is constructed. A mesh structure is constructed from the point cloud. The light field(s) on the surface(s) of the mesh structure are calculated. A surface light field is represented as a texture. A renderer uses the surface light field with geometry information to reproduce the scene captured in the snapshots. The reproduced scene can be manipulated and viewed from different perspectives.
Type: Grant
Filed: June 7, 2019
Date of Patent: April 20, 2021
Assignee: NVIDIA Corporation
Inventors: Dmitry Duka, Rev Lebaredian, Jonathan Small, Ivan Shutov
-
Publication number: 20210074237
Abstract: Embodiments of the present invention provide a novel solution that uses subjective end-user input to generate optimal image quality settings for an application. Embodiments of the present invention enable end-users to rank and/or select various adjustable application parameter settings in a manner that allows them to specify which application parameters and/or settings are most desirable to them for a given application. Based on the feedback received from end-users, embodiments of the present invention may generate optimal settings for whatever performance level the end-user desires. Furthermore, embodiments of the present invention may generate optimal settings that may be benchmarked either on a server farm or on an end-user's client device.
Type: Application
Filed: November 20, 2020
Publication date: March 11, 2021
Inventors: John Spitzer, Rev Lebaredian, Tony Tamasi
-
Publication number: 20210049827
Abstract: A content management system may maintain a scene description that represents a 3D virtual environment and a publish/subscribe model in which clients subscribe to content items that correspond to respective portions of the shared scene description. When changes are made to content, the changes may be served to subscribing clients. Rather than transferring entire descriptions of assets to propagate changes, differences between versions of content may be exchanged, which may be used to construct updated versions of the content. Portions of the scene description may reference other content items, and clients may determine whether to request and load these content items for lazy loading. Content items may be identified by Uniform Resource Identifiers (URIs) used to reference the content items. The content management system may maintain states for client connections, including for authentication, for the set of subscriptions in the publish/subscribe model, and for their corresponding version identifiers.
Type: Application
Filed: March 22, 2020
Publication date: February 18, 2021
Inventors: Rev Lebaredian, Michael Kass, Brian Harris, Andrey Shulzhenko, Dmitry Duka
-
Patent number: 10878770
Abstract: Embodiments of the present invention provide a novel solution that uses subjective end-user input to generate optimal image quality settings for an application. Embodiments of the present invention enable end-users to rank and/or select various adjustable application parameter settings in a manner that allows them to specify which application parameters and/or settings are most desirable to them for a given application. Based on the feedback received from end-users, embodiments of the present invention may generate optimal settings for whatever performance level the end-user desires. Furthermore, embodiments of the present invention may generate optimal settings that may be benchmarked either on a server farm or on an end-user's client device.
Type: Grant
Filed: December 2, 2013
Date of Patent: December 29, 2020
Assignee: Nvidia Corporation
Inventors: John Spitzer, Rev Lebaredian, Tony Tamasi
-
Patent number: 10795691
Abstract: A system, method, and computer program product are provided for simultaneously determining settings for a plurality of parameter variations. In use, a plurality of parameter variations associated with a device is identified. Additionally, settings for each of the plurality of parameter variations are determined simultaneously.
Type: Grant
Filed: November 12, 2019
Date of Patent: October 6, 2020
Assignee: NVIDIA CORPORATION
Inventors: John F. Spitzer, Rev Lebaredian, Yury Uralsky
-
Publication number: 20200105047
Abstract: The disclosure provides virtual view broadcasters and a broadcaster for a cloud-based computer application. In one embodiment, the virtual view broadcaster includes: (1) a cloud-based renderer configured to generate virtual view images from a virtual camera positioned in a computer application, wherein the cloud-based renderer includes multiple Graphics Processing Units (GPUs), and (2) an image processor configured to generate a virtual view stream for the virtual camera employing the virtual view images, wherein a single one of the GPUs is used to generate the virtual view images for the virtual camera.
Type: Application
Filed: December 3, 2019
Publication date: April 2, 2020
Inventors: Jen-Hsun Huang, Rev Lebaredian, Chad Vivoli
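The camera-to-GPU relationship above, with each virtual camera's view images produced by a single GPU, can be sketched minimally. The round-robin policy and naming are assumptions for illustration, not the claimed mechanism:

```python
# Illustrative sketch: pin each virtual camera to exactly one GPU so that
# all view images for that camera come from a single device.
def assign_cameras(cameras, gpu_count):
    """Round-robin each virtual camera onto one GPU index (assumed policy)."""
    return {cam: i % gpu_count for i, cam in enumerate(cameras)}

print(assign_cameras(["stadium", "chase", "overhead"], 2))
# -> {'stadium': 0, 'chase': 1, 'overhead': 0}
```

Pinning a camera to one GPU keeps its view stream's frames consistent while still letting the multi-GPU renderer serve many cameras in parallel.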
-
Publication number: 20200081724
Abstract: A system, method, and computer program product are provided for simultaneously determining settings for a plurality of parameter variations. In use, a plurality of parameter variations associated with a device is identified. Additionally, settings for each of the plurality of parameter variations are determined simultaneously.
Type: Application
Filed: November 12, 2019
Publication date: March 12, 2020
Inventors: John F. Spitzer, Rev Lebaredian, Yury Uralsky
-
Publication number: 20200051030
Abstract: A cloud-centric platform is used for generating virtual three-dimensional (3D) content, that allows users to collaborate online and that can be connected to different software tools (applications). Using the platform, virtual environments (e.g., scenes, worlds, universes) can be created, accessed, and interacted with simultaneously by multiple collaborative content creators using varying content creation or development applications.
Type: Application
Filed: August 12, 2019
Publication date: February 13, 2020
Inventors: Rev Lebaredian, Michael Kass, Brian Harris, Andrey Shulzhenko, Dmitry Duka
-
Publication number: 20200020155
Abstract: Multiple snapshots of a scene are captured within an executing application (e.g., a video game). When each snapshot is captured, associated color values per pixel and a distance or depth value z per pixel are stored. The depth information from the snapshots is accessed, and a point cloud representing the depth information is constructed. A mesh structure is constructed from the point cloud. The light field(s) on the surface(s) of the mesh structure are calculated. A surface light field is represented as a texture. A renderer uses the surface light field with geometry information to reproduce the scene captured in the snapshots. The reproduced scene can be manipulated and viewed from different perspectives.
Type: Application
Filed: June 7, 2019
Publication date: January 16, 2020
Inventors: Dmitry Duka, Rev Lebaredian, Jonathan Small, Ivan Shutov
-
Patent number: 10509658
Abstract: A system, method, and computer program product are provided for simultaneously determining settings for a plurality of parameter variations. In use, a plurality of parameter variations associated with a device is identified. Additionally, settings for each of the plurality of parameter variations are determined simultaneously.
Type: Grant
Filed: July 6, 2012
Date of Patent: December 17, 2019
Assignee: NVIDIA CORPORATION
Inventors: John F. Spitzer, Rev Lebaredian, Yury Uralsky
-
Patent number: 10497168
Abstract: The disclosure provides a virtual view broadcaster, a virtual view broadcasting system, and a video gaming broadcaster. In one embodiment, the virtual view broadcaster includes: (1) a cloud-based renderer configured to generate virtual view images from a virtual camera positioned in a computer application, and (2) an image processor configured to generate a virtual view stream for the virtual camera employing the rendered virtual view images, wherein the virtual view images are from different viewing directions at the virtual camera.
Type: Grant
Filed: January 4, 2018
Date of Patent: December 3, 2019
Assignee: Nvidia Corporation
Inventors: Jen-Hsun Huang, Rev Lebaredian, Chad Vivoli
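The "different viewing directions at the virtual camera" above suggests renders gathered per direction, for example the six faces of a cube map, and combined into one stream frame. The face list, naming, and frame shape here are assumptions for illustration:

```python
# Hedged sketch: assemble one broadcast frame from renders taken in different
# viewing directions at the same virtual camera (assumed: six cube-map faces).
CUBE_FACES = ["+x", "-x", "+y", "-y", "+z", "-z"]

def assemble_frame(renders):
    """renders: {direction: image} -> ordered face list forming one stream frame."""
    missing = [f for f in CUBE_FACES if f not in renders]
    if missing:
        raise ValueError(f"missing faces: {missing}")
    return [renders[f] for f in CUBE_FACES]

frame = assemble_frame({f: f"img_{f}" for f in CUBE_FACES})
print(len(frame))  # -> 6
```

An image processor could then project such per-direction faces into a panoramic view for each frame of the virtual view stream.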