Patents by Inventor Chen Tessler

Chen Tessler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240007403
    Abstract: In various embodiments, a congestion control modelling application automatically controls congestion in data transmission networks. The congestion control modelling application executes a trained neural network in conjunction with a simulated data transmission network to generate a training dataset. The trained neural network has been trained to control congestion in the simulated data transmission network. The congestion control modelling application generates a first trained decision tree model based on an initial loss for an initial model relative to the training dataset. The congestion control modelling application generates a final tree-based model based on the first trained decision tree model and at least a second trained decision tree model. The congestion control modelling application executes the final tree-based model in conjunction with a data transmission network to control congestion within the data transmission network.
    Type: Application
    Filed: April 11, 2023
    Publication date: January 4, 2024
    Inventors: Gal Chechik, Gal Dalal, Benjamin Fuhrer, Doron Haritan Kazakov, Shie Mannor, Yuval Shpigelman, Chen Tessler
  • Publication number: 20230041242
    Abstract: A reinforcement learning agent learns a congestion control policy using a deep neural network and a distributed training component. The training component enables the agent to interact with a vast set of environments in parallel. These environments simulate real-world benchmarks and real hardware. During a learning process, the agent learns how to maximize an objective function. A simulator may enable parallel interaction with various scenarios. Because the trained agent encounters a diverse set of problems, it is more likely to generalize well to new and unseen environments. In addition, an operating point can be selected during training, which may enable configuration of the required behavior of the agent.
    Type: Application
    Filed: October 3, 2022
    Publication date: February 9, 2023
    Inventors: Shie Mannor, Chen Tessler, Yuval Shpigelman, Amit Mandelbaum, Gal Dalal, Doron Kazakov, Benjamin Fuhrer
  • Publication number: 20220231933
    Abstract: A reinforcement learning agent learns a congestion control policy using a deep neural network and a distributed training component. The training component enables the agent to interact with a vast set of environments in parallel. These environments simulate real-world benchmarks and real hardware. During a learning process, the agent learns how to maximize an objective function. A simulator may enable parallel interaction with various scenarios. Because the trained agent encounters a diverse set of problems, it is more likely to generalize well to new and unseen environments. In addition, an operating point can be selected during training, which may enable configuration of the required behavior of the agent.
    Type: Application
    Filed: June 7, 2021
    Publication date: July 21, 2022
    Inventors: Shie Mannor, Chen Tessler, Yuval Shpigelman, Amit Mandelbaum, Gal Dalal, Doron Kazakov, Benjamin Fuhrer
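
The distillation pipeline described in the first abstract (publication 20240007403) — roll a trained network through a simulated network to build a dataset, fit a first decision tree against the model's loss, then combine it with further trees into a final tree-based model — can be sketched as follows. Everything here is an illustrative assumption, not a detail from the filing: the teacher policy, the depth-1 "stump" trees, and the number of boosting rounds are all stand-ins.

```python
import random

# Hypothetical stand-in for the trained neural-network congestion controller:
# maps a queue-occupancy observation in [0, 1] to a rate adjustment.
def teacher_policy(obs):
    return 1.0 - 2.0 * obs  # shrink the rate as queues fill (illustrative only)

# 1) Roll the teacher through a simulated network to build a training dataset.
random.seed(0)
states = [random.random() for _ in range(200)]
actions = [teacher_policy(s) for s in states]

# 2) Fit a sequence of depth-1 regression trees (stumps) on the residuals, as
#    in gradient boosting: each new tree corrects the previous model's loss.
def fit_stump(xs, residuals):
    best = None
    for thr in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x < thr]
        right = [r for x, r in zip(xs, residuals) if x >= thr]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lmean, rmean)
    _, thr, lmean, rmean = best
    return lambda x: lmean if x < thr else rmean

stumps = []
preds = [0.0] * len(states)
for _ in range(20):
    residuals = [a - p for a, p in zip(actions, preds)]
    stump = fit_stump(states, residuals)
    stumps.append(stump)
    preds = [p + stump(x) for p, x in zip(preds, states)]

# 3) The final tree-based model is the sum of the fitted trees; it can stand
#    in for the network where tree inference is much cheaper to run.
def student_policy(obs):
    return sum(s(obs) for s in stumps)

err = max(abs(student_policy(s) - teacher_policy(s)) for s in states)
print(f"max distillation error on training states: {err:.3f}")
```

In a production setting the stumps would be replaced by a gradient-boosted tree library and the teacher by the actual trained network; the point of the sketch is the structure — teacher rollout, residual-fitted trees, additive final model.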
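
The operating-point idea in the two later abstracts (publications 20230041242 and 20220231933) — evaluating candidate behaviors across parallel simulated environments while a training-time knob configures the agent's required behavior — can be illustrated with a toy bottleneck-link model. All names and dynamics below are assumptions for illustration only: a single link of capacity 1.0, a throughput-minus-weighted-delay objective, and a grid of candidate sending rates standing in for the learned policy.

```python
import random

# Toy stand-in for a simulated data-transmission environment: one bottleneck
# link of capacity 1.0. Sending faster raises throughput up to the capacity,
# but queuing delay grows once the link saturates.
def rollout(send_rate, steps=50, capacity=1.0, noise=0.05, rng=None):
    rng = rng or random.Random(0)
    queue, throughput, delay = 0.0, 0.0, 0.0
    for _ in range(steps):
        arrivals = send_rate + rng.uniform(-noise, noise)
        served = min(queue + arrivals, capacity)   # link drains at most `capacity`
        queue = max(0.0, queue + arrivals - capacity)
        throughput += served
        delay += queue
    return throughput / steps, delay / steps

# The objective trades throughput against delay. The latency weight acts as
# the operating point selected during training: a larger weight configures a
# latency-conservative agent, a smaller one a throughput-greedy agent.
def best_rate(latency_weight, candidate_rates):
    # Each candidate rate is scored in its own environment instance, standing
    # in for the parallel environment interaction the abstract describes.
    def objective(rate):
        tput, dly = rollout(rate, rng=random.Random(42))
        return tput - latency_weight * dly
    return max(candidate_rates, key=objective)

rates = [0.1 * i for i in range(1, 16)]  # candidate sending rates 0.1 ... 1.5
aggressive = best_rate(latency_weight=0.1, candidate_rates=rates)
conservative = best_rate(latency_weight=5.0, candidate_rates=rates)
print(f"low latency weight  -> rate {aggressive:.1f}")
print(f"high latency weight -> rate {conservative:.1f}")
```

Running this, the low-weight objective tolerates queue buildup and favors a rate near the link capacity, while the high-weight objective backs off below saturation — the same policy family yields different required behaviors purely through the training-time operating point.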