Patents by Inventor Huamin Chen

Huamin Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230333942
    Abstract: A system and method for a tiered cloud storage for different availability and performance requirements includes a gateway, a block store configured to cache data, and an object store configured to persistently store data. The gateway, the block store, and the object store are in a compute zone. The gateway may receive from a user application a file access call and process the file access call. The gateway may also send the file access call to the block store. Then, the gateway may determine to store data in the object store and flush the data from the block store to the object store.
    Type: Application
    Filed: June 28, 2023
    Publication date: October 19, 2023
    Inventors: Huamin Chen, Jay Vyas
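The flow this abstract describes (a gateway serving file access calls from a block-store cache and later flushing data to a persistent object store; the same abstract appears under granted patent 11734125 below) can be pictured with a minimal Go sketch. All type names and the flush-after-N-entries policy are illustrative assumptions, not details taken from the patent.

```go
package main

import (
	"fmt"
	"sync"
)

// BlockStore caches recently written data; ObjectStore persists it.
// Both are in-memory stand-ins, not the patented implementation.
type BlockStore struct {
	mu    sync.Mutex
	cache map[string][]byte
}

type ObjectStore struct {
	mu      sync.Mutex
	objects map[string][]byte
}

// Gateway receives file access calls, serves them against the block store,
// and decides when to flush cached data to the object store.
type Gateway struct {
	block      *BlockStore
	object     *ObjectStore
	flushAfter int // assumed policy knob: flush once this many entries accumulate
}

func (g *Gateway) Write(path string, data []byte) {
	g.block.mu.Lock()
	g.block.cache[path] = data
	needFlush := len(g.block.cache) >= g.flushAfter
	g.block.mu.Unlock()
	if needFlush {
		g.flush()
	}
}

// flush moves cached entries from the block store into the object store.
func (g *Gateway) flush() {
	g.block.mu.Lock()
	defer g.block.mu.Unlock()
	g.object.mu.Lock()
	defer g.object.mu.Unlock()
	for path, data := range g.block.cache {
		g.object.objects[path] = data
		delete(g.block.cache, path)
	}
}

func main() {
	g := &Gateway{
		block:      &BlockStore{cache: map[string][]byte{}},
		object:     &ObjectStore{objects: map[string][]byte{}},
		flushAfter: 2,
	}
	g.Write("/data/a.txt", []byte("hello"))
	g.Write("/data/b.txt", []byte("world"))
	fmt.Println("persisted objects:", len(g.object.objects)) // 2 after the flush
}
```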
  • Patent number: 11789763
    Abstract: Methods and systems for storing and injecting bytecode are provided. In one embodiment, a method is provided that includes receiving, at a first time, a first function for execution within a serverless computing environment; generating, by an interpreter, a first bytecode based on the first function; storing the first bytecode in association with an identifier of the first function; receiving, at a second time after the first time, a second function for execution within the serverless computing environment; identifying the second function as corresponding to the first function; injecting the first bytecode into a container for execution of the second function; receiving performance metrics regarding execution of the second function; and determining, based on the performance metrics, whether to allow or prevent future injection of the first bytecode.
    Type: Grant
    Filed: July 29, 2022
    Date of Patent: October 17, 2023
    Assignee: Red Hat, Inc.
    Inventors: Huamin Chen, Michael Bursell
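A rough Go sketch of the bytecode reuse loop described above: generated bytecode is cached under the function's identifier, injected on later invocations, and injection is disabled when reported performance regresses. The cache structure, the latency metric, and the latencyBudget threshold are assumptions for illustration only.

```go
package main

import "fmt"

// BytecodeCache maps a function identifier to previously generated bytecode
// and remembers whether injection is still allowed for that function.
type BytecodeCache struct {
	bytecode map[string][]byte
	allowed  map[string]bool
}

// interpret is a stand-in for the interpreter that turns source into bytecode.
func interpret(source string) []byte { return []byte("bc:" + source) }

// Invoke executes a function, injecting cached bytecode when available, and
// uses a reported latency metric to decide whether future injection stays
// enabled. latencyBudget is an assumed policy knob.
func (c *BytecodeCache) Invoke(id, source string, latencyMs, latencyBudget float64) {
	bc, cached := c.bytecode[id]
	if cached && c.allowed[id] {
		fmt.Printf("%s: injecting %d cached bytecode bytes\n", id, len(bc))
	} else {
		bc = interpret(source)
		c.bytecode[id] = bc
		c.allowed[id] = true
		fmt.Printf("%s: generated bytecode\n", id)
	}
	// Performance feedback: prevent future injection if execution was too slow.
	if latencyMs > latencyBudget {
		c.allowed[id] = false
	}
}

func main() {
	c := &BytecodeCache{bytecode: map[string][]byte{}, allowed: map[string]bool{}}
	c.Invoke("resize-image", "func(){...}", 12, 50) // first call: generate and cache
	c.Invoke("resize-image", "func(){...}", 80, 50) // reuse, then flag the slow run
	c.Invoke("resize-image", "func(){...}", 12, 50) // injection now disabled
}
```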
  • Patent number: 11789767
    Abstract: A system includes a memory, a processor in communication with the memory, a hypervisor executing on the processor, a pool of hypervisor resources, and a cloud-sharing module (CSM). The CSM runs in a kernel to assign a first anonymous identity to a hypervisor resource from the pool of hypervisor resources. The CSM broadcasts a transaction for the hypervisor resource and determines which provider owns the hypervisor resource. A first provider is associated with a second anonymous identity and a second provider is associated with a third anonymous identity. Additionally, the CSM receives mining information that includes a block associated with the transaction, where the block is part of a blockchain. The CSM completes the transaction for the first anonymous identity associated with the hypervisor resource between the second anonymous identity and the third anonymous identity.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: October 17, 2023
    Assignee: Red Hat, Inc.
    Inventors: Jay Vyas, Huamin Chen
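The resource-trading flow above can be outlined in a few lines of Go, though the abstract leaves most details open. Here a bare SHA-256 hash stands in for an anonymous identity and a one-element slice stands in for the mined blockchain; both are simplifications, not the patented mechanism.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// anonID derives an opaque identity from a real name; a stand-in for the
// anonymous identities in the abstract (a real system would use something
// stronger than a bare hash).
func anonID(name string) string {
	sum := sha256.Sum256([]byte(name))
	return hex.EncodeToString(sum[:8])
}

// Transaction records the transfer of a hypervisor resource between two
// anonymous identities; Block is a toy ledger entry holding it.
type Transaction struct {
	Resource string
	Seller   string
	Buyer    string
}

type Block struct {
	Prev string
	Tx   Transaction
}

func main() {
	resource := anonID("hypervisor-cpu-pool-1") // first anonymous identity
	seller := anonID("provider-A")              // second anonymous identity
	buyer := anonID("provider-B")               // third anonymous identity

	tx := Transaction{Resource: resource, Seller: seller, Buyer: buyer}
	chain := []Block{{Prev: "genesis", Tx: tx}} // simplified "mining" result
	fmt.Printf("completed %s -> %s for resource %s (%d block)\n",
		tx.Seller, tx.Buyer, tx.Resource, len(chain))
}
```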
  • Patent number: 11782495
    Abstract: A power consumption optimization system includes a virtual machine (VM) provisioned on a host, a memory, a server, and a processor in communication with the memory. The processor causes the server to store a power consumption profile of the VM. The VM runs at a processor frequency state. Additionally, the processor causes the server to receive a request to lower a processor frequency for the VM from an original processor frequency state to a reduced processor frequency state. The request has request criteria indicating a time duration associated with the request. The server validates the request criteria and a requirement of another tenant on the host. Responsive to validating the request criteria and the requirement of the other tenant on the host, the server confirms the request to lower the processor frequency. Additionally, the server lowers the processor frequency to the reduced processor frequency state during the time duration.
    Type: Grant
    Filed: October 3, 2022
    Date of Patent: October 10, 2023
    Assignee: Red Hat, Inc.
    Inventors: Huamin Chen, Jay Vyas
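A minimal Go sketch of the validation step described above: the request criteria (a bounded duration and an actual frequency reduction) are checked together with the minimum frequency other tenants on the host say they require. The field names and the maxWindow limit are illustrative assumptions.

```go
package main

import (
	"fmt"
	"time"
)

// FrequencyRequest asks to lower a VM's processor frequency for a bounded
// time window; field names are illustrative, not from the patent.
type FrequencyRequest struct {
	VM       string
	FromMHz  int
	ToMHz    int
	Duration time.Duration
}

// Tenant captures the minimum frequency another tenant on the host needs.
type Tenant struct {
	Name   string
	MinMHz int
}

// validate confirms the request criteria (a sane window and a real
// reduction) and that no co-tenant requires more than the reduced frequency.
func validate(r FrequencyRequest, others []Tenant, maxWindow time.Duration) error {
	if r.Duration <= 0 || r.Duration > maxWindow {
		return fmt.Errorf("duration %v outside allowed window", r.Duration)
	}
	if r.ToMHz >= r.FromMHz {
		return fmt.Errorf("requested frequency is not a reduction")
	}
	for _, t := range others {
		if t.MinMHz > r.ToMHz {
			return fmt.Errorf("tenant %s requires at least %d MHz", t.Name, t.MinMHz)
		}
	}
	return nil
}

func main() {
	req := FrequencyRequest{VM: "vm-1", FromMHz: 2400, ToMHz: 1200, Duration: 30 * time.Minute}
	tenants := []Tenant{{Name: "vm-2", MinMHz: 1000}}
	if err := validate(req, tenants, time.Hour); err != nil {
		fmt.Println("rejected:", err)
		return
	}
	fmt.Printf("lowering %s to %d MHz for %v\n", req.VM, req.ToMHz, req.Duration)
}
```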
  • Patent number: 11782753
    Abstract: A system for scheduling remediation includes a memory, a processor in communication with the memory, a container scheduled on a first node, a scheduler executing on the processor, and a node-local-unscheduler (“NLU”). The scheduler has a watch module. The NLU executes on the processor to determine a status of the container as failing validation. The NLU has access to scheduling policies corresponding to the container and the first node. Responsive to determining the status of the container as failing validation, the NLU annotates the container and stops execution of the container. The watch module executes on the processor to detect the annotation associated with the container. Responsive to detecting the annotation, the container is rescheduled to a second node.
    Type: Grant
    Filed: July 2, 2021
    Date of Patent: October 10, 2023
    Assignee: Red Hat, Inc.
    Inventors: Jay Vyas, Huamin Chen
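The remediation loop above maps naturally onto an annotate-then-watch pattern; the Go sketch below assumes a made-up annotation key and a trivial in-memory container record rather than real Kubernetes objects.

```go
package main

import "fmt"

// Container models just enough state for the remediation flow: where it is
// scheduled, whether it passes validation, and its annotations.
type Container struct {
	Name        string
	Node        string
	Valid       bool
	Running     bool
	Annotations map[string]string
}

// nodeLocalUnschedule is the NLU step: on failed validation it annotates the
// container and stops it. The annotation key is an assumed name.
func nodeLocalUnschedule(c *Container) {
	if c.Valid {
		return
	}
	c.Annotations["scheduling.example.com/reschedule"] = "true"
	c.Running = false
}

// watchAndReschedule is the scheduler's watch step: any container carrying
// the annotation is moved to another node and restarted.
func watchAndReschedule(c *Container, fallbackNode string) {
	if c.Annotations["scheduling.example.com/reschedule"] == "true" {
		c.Node = fallbackNode
		delete(c.Annotations, "scheduling.example.com/reschedule")
		c.Running = true
	}
}

func main() {
	c := &Container{Name: "web", Node: "node-1", Valid: false, Running: true,
		Annotations: map[string]string{}}
	nodeLocalUnschedule(c)
	watchAndReschedule(c, "node-2")
	fmt.Printf("%s now on %s, running=%v\n", c.Name, c.Node, c.Running)
}
```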
  • Patent number: 11783070
    Abstract: Sensitive information can be managed using a trusted platform module. For example, a system can encrypt target information using a cryptographic key to generate encrypted data. The system can also receive an encrypted key from a trusted platform module, where the encrypted key is a version of the cryptographic key that is encrypted using a public key stored in the trusted platform module. The system can then transmit the encrypted data and the encrypted key to a remote computing system, for example to store the encrypted data and the encrypted key on the remote computing system. Using these techniques, the target information may be secured and stored in remote locations.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: October 10, 2023
    Assignee: Red Hat, Inc.
    Inventors: Ricardo Noriega De Soto, Michael Bursell, Huamin Chen
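The abstract describes envelope encryption with a key held behind a trusted platform module. The Go sketch below substitutes an ordinary RSA key pair for the TPM and uses AES-GCM for the data; a real implementation would keep the private key inside the TPM (for example via a TPM client library) rather than in process memory.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

// sealed bundles what would be shipped to the remote system: the data
// encrypted under a symmetric key, plus that key encrypted under a public
// key standing in for the TPM-resident key.
type sealed struct {
	ciphertext   []byte
	nonce        []byte
	encryptedKey []byte
}

func seal(target []byte, tpmPub *rsa.PublicKey) (sealed, error) {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		return sealed{}, err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return sealed{}, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return sealed{}, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return sealed{}, err
	}
	ct := gcm.Seal(nil, nonce, target, nil)
	// Wrap the symmetric key with the public key; in the patent this
	// wrapping is performed by the trusted platform module.
	ek, err := rsa.EncryptOAEP(sha256.New(), rand.Reader, tpmPub, key, nil)
	if err != nil {
		return sealed{}, err
	}
	return sealed{ciphertext: ct, nonce: nonce, encryptedKey: ek}, nil
}

func main() {
	tpmKey, _ := rsa.GenerateKey(rand.Reader, 2048) // stand-in for the TPM key pair
	s, err := seal([]byte("database password"), &tpmKey.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("send %d ciphertext bytes and %d wrapped-key bytes to remote storage\n",
		len(s.ciphertext), len(s.encryptedKey))
}
```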
  • Patent number: 11775354
    Abstract: A system for reducing overlay network overhead includes a memory, a processor in communication with the memory, a first container and a second container running on a first host, and a container scheduler executing on the processor. Each of the first container and the second container exposes one or more network service ports. The container scheduler executes on the processor to assign a network complexity weight to the first host. The network complexity weight is based on a quantity of network service ports that the first container and the second container expose. The container scheduler also filters hosts based on resource availability corresponding to each host and ranks the hosts based on a respective network complexity weight corresponding to each host. Additionally, the container scheduler dispatches a third container to a second host based on the resource availability and network complexity weight corresponding to the second host.
    Type: Grant
    Filed: January 21, 2022
    Date of Patent: October 3, 2023
    Assignee: Red Hat, Inc.
    Inventors: Huamin Chen, Jay Vyas
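A small Go sketch of the scheduling heuristic described above: hosts are filtered on availability and ranked by a network complexity weight derived from how many service ports their containers expose. The weight function and the single HasCapacity flag are simplifying assumptions.

```go
package main

import (
	"fmt"
	"sort"
)

// Host aggregates the containers it runs plus a simple availability signal.
type Host struct {
	Name         string
	ExposedPorts int  // total service ports exposed by containers on the host
	HasCapacity  bool // stand-in for richer resource-availability checks
}

// networkComplexityWeight grows with the number of exposed service ports.
func networkComplexityWeight(h Host) int { return h.ExposedPorts }

// pickHost filters out hosts without capacity and ranks the rest by
// ascending network complexity weight, dispatching to the lightest one.
func pickHost(hosts []Host) (Host, bool) {
	var candidates []Host
	for _, h := range hosts {
		if h.HasCapacity {
			candidates = append(candidates, h)
		}
	}
	if len(candidates) == 0 {
		return Host{}, false
	}
	sort.Slice(candidates, func(i, j int) bool {
		return networkComplexityWeight(candidates[i]) < networkComplexityWeight(candidates[j])
	})
	return candidates[0], true
}

func main() {
	hosts := []Host{
		{Name: "host-a", ExposedPorts: 7, HasCapacity: true},
		{Name: "host-b", ExposedPorts: 2, HasCapacity: true},
		{Name: "host-c", ExposedPorts: 0, HasCapacity: false},
	}
	if h, ok := pickHost(hosts); ok {
		fmt.Println("dispatch third container to", h.Name) // host-b
	}
}
```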
  • Publication number: 20230305905
    Abstract: Computer workloads can be assigned to nodes of a distributed computing environment based on energy consumption modes employed by the nodes. In one example, a system can determine a first tag assigned to a workload. The first tag can indicate a compatibility of the workload with one or more energy consumption modes employable by one or more nodes of a distributed computing environment. The system can also determine a second tag assigned to a node of the distributed computing environment. The second tag can indicate an energy consumption mode employed by the node. The system can then determine a correspondence between the first tag and the second tag indicating that the workload is compatible with the energy consumption mode employed by the node. Based on determining the correspondence between the first tag and the second tag, the system can assign the workload to the node.
    Type: Application
    Filed: March 22, 2022
    Publication date: September 28, 2023
    Inventors: Huamin Chen, Chen Wang
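The tag-matching step above reduces to comparing a workload's compatibility tag against each node's energy-mode tag; the Go sketch below uses invented mode names such as "low-power" purely for illustration.

```go
package main

import "fmt"

// Workload carries a tag naming the energy consumption modes it tolerates;
// Node carries a tag naming the mode it currently employs.
type Workload struct {
	Name           string
	CompatibleWith []string
}

type Node struct {
	Name string
	Mode string
}

// assign places the workload on the first node whose energy mode appears in
// the workload's compatibility tag.
func assign(w Workload, nodes []Node) (string, bool) {
	for _, n := range nodes {
		for _, mode := range w.CompatibleWith {
			if mode == n.Mode {
				return n.Name, true
			}
		}
	}
	return "", false
}

func main() {
	w := Workload{Name: "batch-report", CompatibleWith: []string{"low-power", "renewable-only"}}
	nodes := []Node{{Name: "node-1", Mode: "performance"}, {Name: "node-2", Mode: "low-power"}}
	if node, ok := assign(w, nodes); ok {
		fmt.Printf("assign %s to %s\n", w.Name, node)
	}
}
```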
  • Patent number: 11748364
    Abstract: Systems and methods for providing scalable object storage query capabilities in a distributed storage system are disclosed. In one implementation, a processing device may receive, by an object-based distributed storage system, a request from a client to execute a query with respect to data stored at the distributed storage system. The processing device may execute the query to produce a result object and may store the result object at the distributed storage system. The processing device may further transmit the result object to the client. The processing device may re-execute the query at a subsequent point in time to update the result object and transmit the updated result object to the client.
    Type: Grant
    Filed: May 25, 2021
    Date of Patent: September 5, 2023
    Assignee: Red Hat, Inc.
    Inventors: Huamin Chen, Yehuda Sadeh-Weinraub
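A compact Go sketch of the query flow described above: a query is executed against the stored objects, its result is persisted as a result object, and a later re-execution refreshes that result. The trivial substring query stands in for whatever query language the real system exposes.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// ObjectStore keeps plain data objects plus stored query results, so a
// result can be fetched again or refreshed later.
type ObjectStore struct {
	objects map[string]string
	results map[string]string
}

// ExecuteQuery runs a (deliberately trivial) substring query over all
// objects, stores the result under resultKey, and returns it to the client.
func (s *ObjectStore) ExecuteQuery(resultKey, needle string) string {
	var hits []string
	for name, data := range s.objects {
		if strings.Contains(data, needle) {
			hits = append(hits, name)
		}
	}
	sort.Strings(hits)
	result := strings.Join(hits, ",")
	s.results[resultKey] = result // persisted result object
	return result
}

func main() {
	s := &ObjectStore{
		objects: map[string]string{"log1": "error: disk full", "log2": "ok"},
		results: map[string]string{},
	}
	fmt.Println("initial:", s.ExecuteQuery("errors", "error"))

	// Later, new data arrives and the query is re-executed to update the
	// stored result object before sending it back to the client.
	s.objects["log3"] = "error: timeout"
	fmt.Println("refreshed:", s.ExecuteQuery("errors", "error"))
}
```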
  • Publication number: 20230273997
    Abstract: A first data object is received for storing in an object repository of a storage platform. An encryption value of the object repository is increased responsive to identifying that a current entropy level of the first data object exceeds a prior entropy level of the first data object by more than a first threshold value. Remediation is performed by a processing device on the object repository responsive to determining that the encryption value of the object repository exceeds a second threshold value.
    Type: Application
    Filed: February 25, 2022
    Publication date: August 31, 2023
    Inventors: Yuval Lifshitz, Huamin Chen
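One way to picture the entropy-based detection above is sketched in Go below: each write's Shannon entropy is compared with the object's previous entropy, large jumps increase an encryption value, and crossing a second threshold triggers remediation. Both thresholds and the remediation action are assumed policy knobs, not values from the patent.

```go
package main

import (
	"fmt"
	"math"
)

// entropy returns the Shannon entropy of data in bits per byte (0..8);
// encrypted or compressed content tends toward the high end of that range.
func entropy(data []byte) float64 {
	if len(data) == 0 {
		return 0
	}
	var counts [256]int
	for _, b := range data {
		counts[b]++
	}
	var h float64
	for _, c := range counts {
		if c == 0 {
			continue
		}
		p := float64(c) / float64(len(data))
		h -= p * math.Log2(p)
	}
	return h
}

// Repository tracks the entropy last seen per object and a running
// encryption value; entropyJump and remediateAt are assumed policy knobs.
type Repository struct {
	lastEntropy     map[string]float64
	encryptionValue int
}

func (r *Repository) Store(name string, data []byte, entropyJump float64, remediateAt int) {
	h := entropy(data)
	if prev, ok := r.lastEntropy[name]; ok && h-prev > entropyJump {
		r.encryptionValue++ // the object got much "noisier" than before
	}
	r.lastEntropy[name] = h
	if r.encryptionValue > remediateAt {
		fmt.Println("remediation: freeze repository writes and alert operators")
	}
}

func main() {
	r := &Repository{lastEntropy: map[string]float64{}}
	r.Store("report.txt", []byte("plain text, plain text, plain text"), 2.0, 0)

	// The same object rewritten with encrypted-looking (high-entropy) content.
	rewritten := make([]byte, 256)
	for i := range rewritten {
		rewritten[i] = byte(i)
	}
	r.Store("report.txt", rewritten, 2.0, 0)
	fmt.Println("encryption value:", r.encryptionValue)
}
```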
  • Publication number: 20230273995
    Abstract: Methods, systems, and computer program products provide a hybrid data scan pipeline or detector that reduces (as compared to conventional storage operations) response latency and increases scanning accuracy for encryption attacks such as ransomware attacks. For example, a frontend of a storage platform receiving an incoming data object may scan a portion of the data object for a change of an entropy level. The portion scanned may be insignificant relative to the overall size of the data object, so the frontend operations add only an insignificant delay to the overall storage processing. Other portions of the data object are processed at a backend of the storage platform. For example, subsequent to receiving the data object, the other portions are scanned for a change of entropy level to detect ransomware attacks.
    Type: Application
    Filed: February 25, 2022
    Publication date: August 31, 2023
    Inventors: Huamin Chen, Yuval Lifshitz
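The hybrid pipeline above splits the work between ingest and a background scanner; the Go sketch below scans only a fixed-size prefix on the write path and hands the remainder to a backend channel. The prefix length, the entropy threshold, and the channel-based hand-off are illustrative assumptions.

```go
package main

import (
	"fmt"
	"math"
)

// entropy returns the Shannon entropy of data in bits per byte (0..8).
func entropy(data []byte) float64 {
	if len(data) == 0 {
		return 0
	}
	var counts [256]int
	for _, b := range data {
		counts[b]++
	}
	var h float64
	for _, c := range counts {
		if c == 0 {
			continue
		}
		p := float64(c) / float64(len(data))
		h -= p * math.Log2(p)
	}
	return h
}

// frontendScan inspects only a small prefix of the incoming object so the
// write path is barely delayed; if the prefix already looks encrypted the
// object is flagged, otherwise the remainder is queued for the backend scan.
func frontendScan(object []byte, prefixLen int, threshold float64, backend chan<- []byte) bool {
	if prefixLen > len(object) {
		prefixLen = len(object)
	}
	if entropy(object[:prefixLen]) > threshold {
		return true // suspicious already at ingest
	}
	backend <- object[prefixLen:] // deeper scan happens off the write path
	return false
}

func main() {
	backend := make(chan []byte, 1)
	doc := []byte("quarterly report: revenue up, costs flat, outlook stable")
	if frontendScan(doc, 16, 7.0, backend) {
		fmt.Println("flagged at ingest")
		return
	}
	rest := <-backend
	fmt.Printf("accepted; %d bytes deferred to backend entropy scan\n", len(rest))
}
```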
  • Patent number: 11734125
    Abstract: A system and method for a tiered cloud storage for different availability and performance requirements includes a gateway, a block store configured to cache data, and an object store configured to persistently store data. The gateway, the block store, and the object store are in a compute zone. The gateway may receive from a user application a file access call and process the file access call. The gateway may also send the file access call to the block store. Then, the gateway may determine to store data in the object store and flush the data from the block store to the object store.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: August 22, 2023
    Assignee: Red Hat, Inc.
    Inventors: Huamin Chen, Jay Vyas
  • Patent number: 11720345
    Abstract: A method includes determining whether a code update for the service is available at a central repository of the computing environment and, in response to determining that the code update is available, retrieving the code update from the central repository. The method further includes performing a modification of the service in view of the code update.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: August 8, 2023
    Assignee: Red Hat, Inc.
    Inventors: Huamin Chen, Roland Ludwig Huss
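A minimal Go sketch of the update loop described above: the service compares its running version against the central repository and applies the newer code when one is available. The version-string comparison is a stand-in for however the real system identifies a code update.

```go
package main

import "fmt"

// Repository is a stand-in for the central repository: it maps a service
// name to the latest available code version.
type Repository map[string]string

// Service tracks what version it is currently running.
type Service struct {
	Name    string
	Version string
}

// reconcile checks the repository for a newer version and, if one exists,
// retrieves it and modifies the service in view of the update.
func (s *Service) reconcile(repo Repository) bool {
	latest, ok := repo[s.Name]
	if !ok || latest == s.Version {
		return false // nothing to do
	}
	s.Version = latest // apply the retrieved update
	return true
}

func main() {
	repo := Repository{"image-resizer": "v1.3.0"}
	svc := &Service{Name: "image-resizer", Version: "v1.2.4"}
	if svc.reconcile(repo) {
		fmt.Printf("%s updated to %s\n", svc.Name, svc.Version)
	}
}
```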
  • Publication number: 20230244544
    Abstract: Systems and methods for inter-cluster deployment of compute services using federated operator components are generally described. In some examples, a first request to deploy a compute service may be received by a federated operator component. In various examples, the federated operator component may send a second request to provision a first compute resource for the compute service to a first cluster of compute nodes. In various examples, the first cluster of compute nodes may be associated with a first hierarchical level of a computing network. In some examples, the federated operator component may send a third request to provision a second compute resource for the compute service to a second cluster of compute nodes. The second cluster of compute nodes may be associated with a second hierarchical level of the computing network that is different from the first hierarchical level.
    Type: Application
    Filed: April 10, 2023
    Publication date: August 3, 2023
    Inventor: Huamin Chen
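The fan-out described in this abstract (which also appears under granted patent 11650856 below) can be sketched in Go as an operator that turns one deployment request into per-cluster provisioning requests keyed by hierarchy level. The level names and the provisioning plan are invented for illustration.

```go
package main

import "fmt"

// Cluster represents a cluster of compute nodes at a given hierarchical
// level of the network (e.g. "edge" vs "regional"); the names are made up.
type Cluster struct {
	Name  string
	Level string
}

func (c Cluster) Provision(resource string) {
	fmt.Printf("cluster %s (%s level): provisioning %s\n", c.Name, c.Level, resource)
}

// FederatedOperator fans a single deployment request out into per-cluster
// provisioning requests, one per hierarchy level.
type FederatedOperator struct {
	clusters map[string]Cluster // keyed by hierarchy level
}

func (o FederatedOperator) Deploy(service string, plan map[string]string) {
	for level, resource := range plan {
		if c, ok := o.clusters[level]; ok {
			c.Provision(service + "/" + resource)
		}
	}
}

func main() {
	op := FederatedOperator{clusters: map[string]Cluster{
		"edge":     {Name: "edge-west-1", Level: "edge"},
		"regional": {Name: "region-west", Level: "regional"},
	}}
	// One request to deploy "video-analytics" becomes two provisioning
	// requests at different hierarchy levels.
	op.Deploy("video-analytics", map[string]string{
		"edge":     "inference-pods",
		"regional": "aggregation-db",
	})
}
```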
  • Patent number: 11715030
    Abstract: Automatic object optimization to accelerate machine learning training is disclosed. A request for a machine learning training dataset comprising a plurality of objects is received from a requestor. The plurality of objects includes data for training a machine learning model. A uniqueness characteristic for objects of the plurality of objects is determined, the uniqueness characteristic being indicative of how unique each object is relative to each other object. A group of objects from the plurality of objects is sent to the requestor, the group of objects being selected based at least partially on the uniqueness characteristic or sent in an order based at least partially on the uniqueness characteristic.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: August 1, 2023
    Assignee: Red Hat, Inc.
    Inventors: Huamin Chen, Dennis R. C. Keefe
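A rough Go sketch of one possible reading of the uniqueness characteristic above: objects whose content hash is rare score higher and are served first. The scoring rule is an assumption for illustration; the patent does not commit to this particular measure.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// object pairs training data with a uniqueness score; rarer content hashes
// score higher under this simple interpretation.
type object struct {
	name       string
	data       []byte
	uniqueness float64
}

// orderByUniqueness scores each object by how rare its content hash is in
// the dataset and returns the objects in descending order of uniqueness.
func orderByUniqueness(objs []object) []object {
	counts := map[[32]byte]int{}
	for _, o := range objs {
		counts[sha256.Sum256(o.data)]++
	}
	for i := range objs {
		objs[i].uniqueness = 1.0 / float64(counts[sha256.Sum256(objs[i].data)])
	}
	sort.Slice(objs, func(i, j int) bool { return objs[i].uniqueness > objs[j].uniqueness })
	return objs
}

func main() {
	dataset := []object{
		{name: "cat-001.jpg", data: []byte("cat")},
		{name: "cat-002.jpg", data: []byte("cat")}, // duplicate content
		{name: "dog-001.jpg", data: []byte("dog")},
	}
	for _, o := range orderByUniqueness(dataset) {
		fmt.Printf("%s uniqueness=%.2f\n", o.name, o.uniqueness)
	}
}
```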
  • Patent number: 11687490
    Abstract: A method performed by a computing system includes: executing an application using a data call to an Application Programming Interface, the data call requesting access to a file stored on a storage system associated with the computing system; determining, with a context extraction module, contextual information associated with the data call; causing, through use of a library, a kernel to access the file according to the data call; storing the contextual information on the storage system; and performing an analysis on the contextual information, the analysis including determining an average size of a call stack when the data was accessed.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: June 27, 2023
    Assignee: Red Hat, Inc.
    Inventor: Huamin Chen
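The context-extraction idea above can be sketched in Go with a thin wrapper around a file read that records the call-stack depth at access time and later averages it. The wrapper name and the use of runtime.Callers are illustrative; the patent describes a library/kernel path rather than this user-level shim.

```go
package main

import (
	"fmt"
	"os"
	"runtime"
)

// contextRecord is the contextual information captured around each data
// call; here only the call-stack depth is recorded.
type contextRecord struct {
	path       string
	stackDepth int
}

var records []contextRecord

// readFile is an illustrative API wrapper: it captures context, then asks
// the kernel (via the standard library) to access the file.
func readFile(path string) ([]byte, error) {
	pcs := make([]uintptr, 64)
	depth := runtime.Callers(0, pcs) // size of the call stack at access time
	records = append(records, contextRecord{path: path, stackDepth: depth})
	return os.ReadFile(path)
}

// averageStackDepth is the analysis step described in the abstract.
func averageStackDepth() float64 {
	if len(records) == 0 {
		return 0
	}
	total := 0
	for _, r := range records {
		total += r.stackDepth
	}
	return float64(total) / float64(len(records))
}

func main() {
	_, _ = readFile("/etc/hostname") // read errors ignored in this sketch
	_, _ = readFile("/etc/hosts")
	fmt.Printf("%d calls recorded, average call-stack depth %.1f\n",
		len(records), averageStackDepth())
}
```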
  • Patent number: 11689552
    Abstract: Methods and systems for security threat detection are disclosed. For example, a virtual machine with a network interface of a plurality of virtual machines includes a plurality of applications including first and second applications. The plurality of applications is associated with a respective plurality of application security modules, including first and second application security modules associated with the first and second applications, respectively. A security policy engine executes on a processor in communication with a network including a network controller. The first application security module detects an abnormality with a request to the first application, identifies a source and a mode of the abnormality, and reports the source and the mode to the security policy engine. The security policy engine prevents a further abnormality with the source and/or the mode from affecting the second application and commands the network controller to prevent the source from interacting with the network.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: June 27, 2023
    Assignee: Red Hat, Inc.
    Inventor: Huamin Chen
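A minimal Go sketch of the reporting path above: a security module reports the source and mode of an abnormality to a policy engine, which then blocks that source and mode for the other applications and (here only as a log line) tells the network controller to isolate the source. The structure and field names are assumptions.

```go
package main

import "fmt"

// Abnormality captures what an application security module observed: the
// source (e.g. a client address) and the mode (e.g. SQL injection).
type Abnormality struct {
	Source string
	Mode   string
}

// PolicyEngine records reported abnormalities and keeps block lists that
// the other applications and the network controller consult.
type PolicyEngine struct {
	blockedSources map[string]bool
	blockedModes   map[string]bool
}

func (p *PolicyEngine) Report(a Abnormality) {
	p.blockedSources[a.Source] = true
	p.blockedModes[a.Mode] = true
	// In the abstract, the engine also commands the network controller to
	// isolate the source; here that command is just a log line.
	fmt.Printf("network controller: drop traffic from %s\n", a.Source)
}

// Allow is what a second application's security module would ask before
// serving a request.
func (p *PolicyEngine) Allow(source, mode string) bool {
	return !p.blockedSources[source] && !p.blockedModes[mode]
}

func main() {
	engine := &PolicyEngine{blockedSources: map[string]bool{}, blockedModes: map[string]bool{}}
	engine.Report(Abnormality{Source: "10.0.0.9", Mode: "sql-injection"})
	fmt.Println("second app accepts request from 10.0.0.9?", engine.Allow("10.0.0.9", "read"))
}
```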
  • Publication number: 20230186143
    Abstract: Training nodes can be selected for use in training a machine-learning model according to some aspects described herein. In one example, a system can receive performance-metric values generated by training nodes, where the training nodes are configured to generate the performance-metric values by implementing an evaluation phase in which the training nodes partially train models using first training data. The system can select a subset of the training nodes based on the performance-metric values. The system can then transmit commands to the subset of training nodes for causing the subset of training nodes to implement a training phase in which the subset of training nodes further train the models using second training data.
    Type: Application
    Filed: December 9, 2021
    Publication date: June 15, 2023
    Inventors: Huamin Chen, Ricardo Noriega De Soto
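The two-phase selection above amounts to ranking nodes by their evaluation-phase metric and keeping the top k for full training; the Go sketch below assumes a single scalar metric where higher is better.

```go
package main

import (
	"fmt"
	"sort"
)

// nodeMetric pairs a training node with the performance metric it reported
// after partially training on the first dataset.
type nodeMetric struct {
	Node   string
	Metric float64
}

// selectTopK keeps the best-performing nodes for the full training phase.
func selectTopK(metrics []nodeMetric, k int) []string {
	sort.Slice(metrics, func(i, j int) bool { return metrics[i].Metric > metrics[j].Metric })
	if k > len(metrics) {
		k = len(metrics)
	}
	chosen := make([]string, 0, k)
	for _, m := range metrics[:k] {
		chosen = append(chosen, m.Node)
	}
	return chosen
}

func main() {
	// Evaluation phase: every node partially trains and reports a metric.
	reports := []nodeMetric{
		{Node: "gpu-node-1", Metric: 0.71},
		{Node: "gpu-node-2", Metric: 0.64},
		{Node: "gpu-node-3", Metric: 0.82},
	}
	// Training phase: only the selected subset continues on the second dataset.
	for _, node := range selectTopK(reports, 2) {
		fmt.Println("send train command to", node)
	}
}
```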
  • Publication number: 20230156004
    Abstract: A method includes initiating, by a device manager associated with a cluster manager proxy, a connection with a cluster of computing devices, wherein initiating the connection includes providing first credentials to the cluster of computing devices to access the cluster manager proxy. The method further includes receiving, at the cluster manager proxy, a first request to register the cluster of computing devices with a cluster manager, the first request including the first credentials to access the cluster manager proxy and sending, from the cluster manager proxy to the cluster manager, a second request to register the cluster of computing devices with the cluster manager, the second request including second credentials to access the cluster manager.
    Type: Application
    Filed: November 15, 2021
    Publication date: May 18, 2023
    Inventors: Jonathan Hal Cope, Huamin Chen, Ricardo Noriega De Soto, Frank Alexander Zdarsky
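A small Go sketch of the credential hand-off described above: the cluster registers with the proxy using the first credentials, and the proxy forwards a second registration to the cluster manager using its own, second credentials. The token values and the function-valued manager are illustrative stand-ins.

```go
package main

import "fmt"

// request carries whichever credentials the receiving component expects:
// the cluster presents proxy credentials, and the proxy presents its own
// manager credentials upstream.
type request struct {
	Cluster     string
	Credentials string
}

// Proxy sits between clusters and the cluster manager and swaps the
// credential set when forwarding a registration.
type Proxy struct {
	acceptedCreds      string // first credentials, handed to clusters
	managerCredentials string // second credentials, known only to the proxy
}

func (p Proxy) Register(r request, manager func(request) error) error {
	if r.Credentials != p.acceptedCreds {
		return fmt.Errorf("cluster %s presented invalid proxy credentials", r.Cluster)
	}
	// Forward a second request that uses the proxy's own credentials.
	return manager(request{Cluster: r.Cluster, Credentials: p.managerCredentials})
}

func main() {
	manager := func(r request) error {
		fmt.Printf("cluster manager: registered %s (creds %q)\n", r.Cluster, r.Credentials)
		return nil
	}
	p := Proxy{acceptedCreds: "proxy-token-123", managerCredentials: "manager-token-456"}
	if err := p.Register(request{Cluster: "edge-cluster-7", Credentials: "proxy-token-123"}, manager); err != nil {
		fmt.Println("registration failed:", err)
	}
}
```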
  • Patent number: 11650856
    Abstract: Systems and methods for inter-cluster deployment of compute services using federated operator components are generally described. In some examples, a first request to deploy a compute service may be received by a federated operator component. In various examples, the federated operator component may send a second request to provision a first compute resource for the compute service to a first cluster of compute nodes. In various examples, the first cluster of compute nodes may be associated with a first hierarchical level of a computing network. In some examples, the federated operator component may send a third request to provision a second compute resource for the compute service to a second cluster of compute nodes. The second cluster of compute nodes may be associated with a second hierarchical level of the computing network that is different from the first hierarchical level.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: May 16, 2023
    Assignee: Red Hat, Inc.
    Inventor: Huamin Chen