Patents by Inventor Sai Sindhur Malleni
Sai Sindhur Malleni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11934861
Abstract: Systems and methods for flow rule installation latency testing in software defined networks. In some examples, a hypervisor may deploy a virtual network switch configured to route data to virtualized computing environments executing on the hypervisor. A client process may be deployed in a first container executing on the hypervisor. A server process may be deployed on the hypervisor. The client process may receive a first request to deploy a virtual machine on the hypervisor. The client process may generate first instructions configured to cause the server process to generate a first namespace. The server process may generate the first namespace and may communicatively couple the first namespace to the virtual network switch.
Type: Grant
Filed: February 16, 2023
Date of Patent: March 19, 2024
Assignee: Red Hat, Inc.
Inventors: Sai Sindhur Malleni, Venkata Anil Kommaddi
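The patent text contains no code. As a minimal illustrative sketch of the core idea, measuring the latency between requesting a flow rule and its installation on a virtual switch, the following stand-in could be used; `FakeVirtualSwitch` and all names here are hypothetical, not from the patent:

```python
import time

class FakeVirtualSwitch:
    """Hypothetical stand-in for a virtual network switch on a hypervisor."""
    def __init__(self):
        self.flow_rules = []

    def install_flow_rule(self, rule):
        # Simulate the control-plane work of programming a flow rule.
        time.sleep(0.001)
        self.flow_rules.append(rule)

def measure_install_latency(switch, rule):
    """Return seconds elapsed between requesting a rule and its installation."""
    start = time.monotonic()
    switch.install_flow_rule(rule)
    return time.monotonic() - start

switch = FakeVirtualSwitch()
latency = measure_install_latency(
    switch, {"match": "ip,nw_dst=10.0.0.5", "action": "output:2"})
print(f"flow rule installed in {latency * 1000:.2f} ms")
```

In a real setup the namespace attached to the switch would observe when traffic starts matching the rule, rather than timing a local method call.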
-
Patent number: 11921604
Abstract: The technology disclosed herein can be used to evaluate system recovery using emulated production systems. In accordance with one example, the technology can involve accessing state data of a target computing device that is in a production environment, the state data including a performance measurement of the target computing device; updating a configuration of a computing device to adjust a performance of the computing device to correspond to the performance measurement of the target computing device; introducing, by the processing device, a disturbance to the computing device; determining, by the processing device, a performance of the computing device at a time after introducing the disturbance; and generating performance data indicating an effect the disturbance has on the computing device.
Type: Grant
Filed: October 28, 2021
Date of Patent: March 5, 2024
Assignee: Red Hat, Inc.
Inventors: Pradeep Kumar Surisetty, Sai Sindhur Malleni, Naga Ravi Chaitanya Elluri
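The measure-disturb-remeasure loop the abstract describes can be sketched in a few lines. This is a toy model (the benchmark and the 25% CPU disturbance are invented for illustration), not the patented method:

```python
def run_benchmark(cpu_fraction):
    """Toy performance measurement: throughput scales with available CPU."""
    return round(1000 * cpu_fraction)

def evaluate_disturbance(disturbance_fraction):
    """Measure the emulated device before and after a disturbance and
    report the disturbance's effect on performance."""
    baseline = run_benchmark(1.0)
    degraded = run_benchmark(1.0 - disturbance_fraction)
    impact_pct = 100 * (baseline - degraded) / baseline
    return {"baseline": baseline, "after": degraded, "impact_pct": impact_pct}

# Emulate losing a quarter of CPU capacity, as a fault injector might cause.
report = evaluate_disturbance(0.25)
print(report)
```

The real system would first tune the emulated device so its baseline matches a measurement taken from the production target.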
-
Patent number: 11894983
Abstract: Systems and methods for scale testing infrastructure as a service systems are disclosed. A processing device generates a container image including a plurality of processes for providing compute functions and modifying a fake virtual driver with network switch functionality. The plurality of processes includes a compute process to create fake virtual machines using the modified fake virtual driver. The processing device generates a plurality of simulated compute nodes using the container image and generates a plurality of fake virtual machines using the modified fake virtual driver on one or more simulated compute nodes, scheduled as pods using a container orchestration engine. In this way, network and messaging traffic on the control plane is effectively simulated at scale. The modified fake driver enables network switch functionality so that network configuration for each fake virtual machine may be simulated, thereby mimicking the network actions of a virtual machine on a compute node.
Type: Grant
Filed: March 16, 2020
Date of Patent: February 6, 2024
Assignee: Red Hat, Inc.
Inventors: Sai Sindhur Malleni, Venkata Anil Kumar Kommaddi
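The fake-driver idea can be sketched as follows: a driver that records VM state and simulates the port plug a real VM would trigger, so control-plane load is exercised without running guests. All class and field names here are hypothetical illustrations, not the patent's implementation:

```python
class FakeVirtDriver:
    """Hypothetical fake hypervisor driver: creates VM records and simulates
    the network port binding a real VM would trigger, without any guest."""
    def __init__(self):
        self.vms = {}
        self.ports = {}

    def spawn(self, vm_id):
        self.vms[vm_id] = "ACTIVE"
        # Simulated switch-port plug, so per-VM network configuration
        # traffic still flows through the control plane.
        self.ports[vm_id] = {"status": "plugged"}

def simulate_compute_nodes(node_count, vms_per_node):
    """Stand up many simulated compute nodes, each spawning fake VMs."""
    nodes = [FakeVirtDriver() for _ in range(node_count)]
    for i, node in enumerate(nodes):
        for j in range(vms_per_node):
            node.spawn(f"node{i}-vm{j}")
    return nodes

nodes = simulate_compute_nodes(node_count=100, vms_per_node=10)
total_vms = sum(len(n.vms) for n in nodes)
print(f"simulated {total_vms} fake VMs across {len(nodes)} nodes")
```

In the patented arrangement each simulated node runs as a pod under a container orchestration engine rather than as an in-process object.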
-
Publication number: 20230195501
Abstract: Systems and methods for flow rule installation latency testing in software defined networks. In some examples, a hypervisor may deploy a virtual network switch configured to route data to virtualized computing environments executing on the hypervisor. A client process may be deployed in a first container executing on the hypervisor. A server process may be deployed on the hypervisor. The client process may receive a first request to deploy a virtual machine on the hypervisor. The client process may generate first instructions configured to cause the server process to generate a first namespace. The server process may generate the first namespace and may communicatively couple the first namespace to the virtual network switch.
Type: Application
Filed: February 16, 2023
Publication date: June 22, 2023
Inventors: Sai Sindhur Malleni, Venkata Anil Kommaddi
-
Publication number: 20230135825
Abstract: The technology disclosed herein can be used to evaluate system recovery using emulated production systems. In accordance with one example, the technology can involve accessing state data of a target computing device that is in a production environment, the state data including a performance measurement of the target computing device; updating a configuration of a computing device to adjust a performance of the computing device to correspond to the performance measurement of the target computing device; introducing, by the processing device, a disturbance to the computing device; determining, by the processing device, a performance of the computing device at a time after introducing the disturbance; and generating performance data indicating an effect the disturbance has on the computing device.
Type: Application
Filed: October 28, 2021
Publication date: May 4, 2023
Inventors: Pradeep Kumar Surisetty, Sai Sindhur Malleni, Naga Ravi Chaitanya Elluri
-
Patent number: 11620151
Abstract: Systems and methods for flow rule installation latency testing in software defined networks. In some examples, a hypervisor may deploy a virtual network switch configured to route data to virtualized computing environments executing on the hypervisor. A client process may be deployed in a first container executing on the hypervisor. A server process may be deployed on the hypervisor. The client process may receive a first request to deploy a virtual machine on the hypervisor. The client process may generate first instructions configured to cause the server process to generate a first namespace. The server process may generate the first namespace and may communicatively couple the first namespace to the virtual network switch.
Type: Grant
Filed: September 22, 2020
Date of Patent: April 4, 2023
Assignee: RED HAT, INC.
Inventors: Sai Sindhur Malleni, Venkata Anil Kommaddi
-
Patent number: 11561843
Abstract: Workload profiling can be used in a distributed computing environment for automatic performance tuning. For example, a computing device can receive a performance profile for a workload in a distributed computing environment. The performance profile can indicate resource usage by the workload in the distributed computing environment. The computing device can determine a performance bottleneck associated with the workload based on the resource usage specified in the performance profile. A tuning profile can be selected to reduce the performance bottleneck associated with the workload. The computing device can output a command to adjust one or more properties of the workload in accordance with the tuning profile to reduce the performance bottleneck associated with the workload.
Type: Grant
Filed: June 11, 2020
Date of Patent: January 24, 2023
Assignee: RED HAT, INC.
Inventor: Sai Sindhur Malleni
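The profile-to-bottleneck-to-tuning-profile flow can be sketched with simple threshold comparison. The resource names, thresholds, and tuning actions below are invented for illustration; the patent does not specify them:

```python
def find_bottleneck(profile, thresholds):
    """Return the resource whose usage most exceeds its threshold, or None."""
    worst, worst_excess = None, 0.0
    for resource, usage in profile.items():
        excess = usage - thresholds.get(resource, 1.0)
        if excess > worst_excess:
            worst, worst_excess = resource, excess
    return worst

# Hypothetical mapping from bottleneck to a tuning profile.
TUNING_PROFILES = {
    "cpu": {"cpu_shares": "increase"},
    "memory": {"memory_limit": "increase"},
    "io": {"io_scheduler": "deadline"},
}

profile = {"cpu": 0.95, "memory": 0.40, "io": 0.10}   # from workload profiling
thresholds = {"cpu": 0.80, "memory": 0.80, "io": 0.80}
bottleneck = find_bottleneck(profile, thresholds)
print(f"bottleneck: {bottleneck} -> apply {TUNING_PROFILES[bottleneck]}")
```

The selected tuning profile would then drive a command that adjusts the workload's properties.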
-
Publication number: 20220269494
Abstract: Provisioning bare metal machines with a complex software product is disclosed. A request to install a software product is received. Based on information contained in the request, a subset of computing devices from a set of computing devices is identified. An operating system is caused to be installed on a first computing device of the subset of computing devices. Boot information on a second computing device and a third computing device of the subset of computing devices is modified to cause the second computing device and the third computing device to, upon being booted, request an operating system from the first computing device. A software product installer configured to install the software product is caused to be installed on the first computing device.
Type: Application
Filed: February 24, 2021
Publication date: August 25, 2022
Inventor: Sai Sindhur Malleni
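The provisioning plan described above can be sketched as a selection step plus role assignment: one machine becomes the installer host, the rest are marked to network-boot from it. Machine names and the plan structure are hypothetical:

```python
def plan_provisioning(machines, needed):
    """Select `needed` machines; the first becomes the installer host
    (full OS plus the product installer), and the remainder have their
    boot information rewritten to request an OS from that host on boot."""
    subset = machines[:needed]
    return {
        "installer_host": subset[0],
        "pxe_clients": subset[1:],
    }

plan = plan_provisioning(["bm-01", "bm-02", "bm-03", "bm-04"], needed=3)
print(plan)
```

A real implementation would rewrite each client's boot configuration (for example, via its management controller) rather than just recording the assignment.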
-
Patent number: 11341025
Abstract: A system includes a memory and at least one processor in communication with the memory. A processor is configured to receive a first log message denoting an event associated with a first application executing in the system. A machine learning model generates a predicted log message based at least in part on the first log message. The predicted log message represents a prediction of a subsequent log message to be received from the first application. First metric data associated with the predicted log message is determined. The first metric data describes system conditions of the system associated with the predicted log message. A tuning profile associated with the system conditions is determined and the current system configuration of the system is modified using the tuning profile.
Type: Grant
Filed: May 27, 2020
Date of Patent: May 24, 2022
Assignee: RED HAT INC.
Inventor: Sai Sindhur Malleni
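As a toy stand-in for the machine learning model, a frequency table of which log message most often follows each message can illustrate the predict-then-tune flow. The log messages and tuning mapping below are invented for the example:

```python
from collections import Counter, defaultdict

class LogPredictor:
    """Toy next-log-message predictor: learns which message most often
    follows each message (standing in for the ML model in the patent)."""
    def __init__(self):
        self.followers = defaultdict(Counter)

    def train(self, messages):
        for cur, nxt in zip(messages, messages[1:]):
            self.followers[cur][nxt] += 1

    def predict(self, message):
        counts = self.followers.get(message)
        return counts.most_common(1)[0][0] if counts else None

# Hypothetical mapping from a predicted condition to a tuning profile.
TUNING_BY_CONDITION = {"cache miss spike": {"readahead": "increase"}}

model = LogPredictor()
model.train(["request received", "cache miss spike", "request received",
             "cache miss spike", "request received", "slow response"])
predicted = model.predict("request received")
print(f"predicted next message: {predicted!r}")
```

The predicted message would then be mapped through its associated metric data to a tuning profile that is applied proactively, before the predicted condition occurs.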
-
Patent number: 11301363
Abstract: Systems and methods for correlating continuous integration compute jobs with log messages. In some examples, a computing testing component may cause a first compute job to be deployed by a system under test (SUT) including at least one compute node. First identifier data may be generated that identifies the first compute job from among other compute jobs. The SUT may receive configuration data including the first identifier data. The SUT may generate a log message during execution of the first compute job. The log message may include the first identifier data. The computing testing component may receive result data for the first compute job from the SUT. The result data may include the first identifier data. The log message may be stored in a data store in association with the first identifier data and the first identifier data may correlate the log message with the first compute job.
Type: Grant
Filed: August 27, 2020
Date of Patent: April 12, 2022
Assignee: RED HAT, INC.
Inventor: Sai Sindhur Malleni
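The correlation mechanism reduces to tagging every log message with a per-job identifier and filtering on it later. A minimal sketch (the `LogStore` class and message strings are hypothetical):

```python
import uuid

class LogStore:
    """Stores log messages keyed by the compute-job identifier embedded in
    them, so job results and logs can be correlated after the fact."""
    def __init__(self):
        self.records = []

    def store(self, job_id, message):
        self.records.append((job_id, message))

    def logs_for(self, job_id):
        return [msg for jid, msg in self.records if jid == job_id]

store = LogStore()
# Each CI compute job gets a unique identifier up front.
job_a, job_b = uuid.uuid4().hex, uuid.uuid4().hex
store.store(job_a, "deploying compute node")
store.store(job_b, "starting test suite")
store.store(job_a, "deployment complete")
print(store.logs_for(job_a))
```

In the patented flow the identifier travels in the configuration data handed to the system under test, so the SUT itself emits it in every log line and result record.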
-
Publication number: 20220091868
Abstract: Systems and methods for flow rule installation latency testing in software defined networks. In some examples, a hypervisor may deploy a virtual network switch configured to route data to virtualized computing environments executing on the hypervisor. A client process may be deployed in a first container executing on the hypervisor. A server process may be deployed on the hypervisor. The client process may receive a first request to deploy a virtual machine on the hypervisor. The client process may generate first instructions configured to cause the server process to generate a first namespace. The server process may generate the first namespace and may communicatively couple the first namespace to the virtual network switch.
Type: Application
Filed: September 22, 2020
Publication date: March 24, 2022
Inventors: Sai Sindhur Malleni, Venkata Anil Kommaddi
-
Publication number: 20220066910
Abstract: Systems and methods for correlating continuous integration compute jobs with log messages. In some examples, a computing testing component may cause a first compute job to be deployed by a system under test (SUT) including at least one compute node. First identifier data may be generated that identifies the first compute job from among other compute jobs. The SUT may receive configuration data including the first identifier data. The SUT may generate a log message during execution of the first compute job. The log message may include the first identifier data. The computing testing component may receive result data for the first compute job from the SUT. The result data may include the first identifier data. The log message may be stored in a data store in association with the first identifier data and the first identifier data may correlate the log message with the first compute job.
Type: Application
Filed: August 27, 2020
Publication date: March 3, 2022
Inventor: Sai Sindhur Malleni
-
Publication number: 20210389994
Abstract: Workload profiling can be used in a distributed computing environment for automatic performance tuning. For example, a computing device can receive a performance profile for a workload in a distributed computing environment. The performance profile can indicate resource usage by the workload in the distributed computing environment. The computing device can determine a performance bottleneck associated with the workload based on the resource usage specified in the performance profile. A tuning profile can be selected to reduce the performance bottleneck associated with the workload. The computing device can output a command to adjust one or more properties of the workload in accordance with the tuning profile to reduce the performance bottleneck associated with the workload.
Type: Application
Filed: June 11, 2020
Publication date: December 16, 2021
Inventor: SAI SINDHUR MALLENI
-
Publication number: 20210374034
Abstract: A system includes a memory and at least one processor in communication with the memory. A processor is configured to receive a first log message denoting an event associated with a first application executing in the system. A machine learning model generates a predicted log message based at least in part on the first log message. The predicted log message represents a prediction of a subsequent log message to be received from the first application. First metric data associated with the predicted log message is determined. The first metric data describes system conditions of the system associated with the predicted log message. A tuning profile associated with the system conditions is determined and the current system configuration of the system is modified using the tuning profile.
Type: Application
Filed: May 27, 2020
Publication date: December 2, 2021
Inventor: Sai Sindhur Malleni
-
Publication number: 20210288885
Abstract: Systems and methods for scale testing infrastructure as a service systems are disclosed. A processing device generates a container image including a plurality of processes for providing compute functions and modifying a fake virtual driver with network switch functionality. The plurality of processes includes a compute process to create fake virtual machines using the modified fake virtual driver. The processing device generates a plurality of simulated compute nodes using the container image and generates a plurality of fake virtual machines using the modified fake virtual driver on one or more simulated compute nodes, scheduled as pods using a container orchestration engine. In this way, network and messaging traffic on the control plane is effectively simulated at scale. The modified fake driver enables network switch functionality so that network configuration for each fake virtual machine may be simulated, thereby mimicking the network actions of a virtual machine on a compute node.
Type: Application
Filed: March 16, 2020
Publication date: September 16, 2021
Inventors: Sai Sindhur Malleni, Venkata Anil Kumar Kommaddi