Patents Examined by Hiren P Patel
  • Patent number: 11960913
    Abstract: A system for dynamically auto-scaling allocated capacity of a virtual desktop environment includes: base capacity resources and burst capacity resources and memory coupled to a controller; wherein, in response to executing program instructions, the controller is configured to: in response to receiving a log in request from a first user device, connect the first user device to a first host pool to which the first device user is assigned; execute a load-balancing module to determine a first session host virtual machine to which to connect the first user device; and execute an auto-scaling module comprising a user-selectable auto-scaling trigger and a user-selectable conditional auto-scaling action, wherein, in response to recognition of the conditional auto-scaling action, the controller powers on or powers off one or more base capacity resources or creates or destroys one or more burst capacity resources.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: April 16, 2024
    Assignee: Nerdio, Inc.
    Inventor: Vadim Vladimirskiy
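    A minimal Python sketch of the trigger/action pattern the abstract describes, assuming a single host pool tracked in memory; HostPool, sessions_per_host_trigger, and scale_out_action are illustrative names and rules, not the patented implementation:

      from dataclasses import dataclass

      @dataclass
      class HostPool:
          base_hosts_on: int = 2        # powered-on base capacity session hosts
          base_hosts_total: int = 4     # base capacity that exists but may be powered off
          burst_hosts: int = 0          # burst capacity created and destroyed on demand
          active_sessions: int = 0

      def sessions_per_host_trigger(pool, limit=8):
          """User-selectable trigger: fire when sessions per powered-on host exceed a limit."""
          powered_on = pool.base_hosts_on + pool.burst_hosts
          return powered_on == 0 or pool.active_sessions / powered_on > limit

      def scale_out_action(pool):
          """User-selectable conditional action: power on base capacity first, then create burst capacity."""
          if pool.base_hosts_on < pool.base_hosts_total:
              pool.base_hosts_on += 1   # power on an existing base capacity resource
          else:
              pool.burst_hosts += 1     # create a burst capacity resource

      def autoscale(pool, trigger, action):
          if trigger(pool):
              action(pool)

      pool = HostPool(active_sessions=40)
      autoscale(pool, sessions_per_host_trigger, scale_out_action)
      print(pool)                       # base_hosts_on becomes 3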
  • Patent number: 11960940
    Abstract: A FaaS system comprises a plurality of execution nodes. A software package is received in the system, the software package comprising a function that is to be executed in the FaaS system. Data location information related to data that the function is going to access during execution is obtained. Based on the data location information, a determination is then made of an execution node in which the function is to be executed. The function is loaded into the determined execution node and executed in the determined execution node.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: April 16, 2024
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Zoltán Turányi, Dániel Géhberger
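    A short Python sketch of data-locality-aware placement as described in the abstract above, assuming a simple in-memory catalog of where data items reside; the node names and "count of local items" scoring rule are assumptions for illustration:

      def choose_execution_node(data_locations, needed_data, nodes):
          """Pick the node that already holds the most data items the function will access."""
          def local_items(node):
              return sum(1 for item in needed_data if data_locations.get(item) == node)
          return max(nodes, key=local_items)

      data_locations = {"orders.db": "node-b", "users.db": "node-b", "logs": "node-a"}
      node = choose_execution_node(data_locations,
                                   needed_data=["orders.db", "users.db"],
                                   nodes=["node-a", "node-b", "node-c"])
      print("load and execute the function on", node)   # node-b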
  • Patent number: 11954517
    Abstract: The present disclosure relates to an Application Programming Interface (API) framework that discloses a computer-implemented method, polling service system, and non-transitory computer-readable medium for providing dynamic endpoints for performing data transactions with a corresponding candidate application server. The method has two phases: a polling phase and a transaction phase. In the polling phase, the polling service system receives a first API request from one or more source devices and provides a dynamic endpoint for the one or more source devices to interact with the corresponding candidate application server they require. In the transaction phase, the corresponding candidate application server receives a second API request from the one or more source devices through the dynamic endpoint generated during the polling phase, and performs data transactions.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: April 9, 2024
    Assignee: Visa International Service Association
    Inventor: Sai Nikhil Chennoor
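    A hedged Python sketch of the two-phase flow in the abstract above; the endpoint format, the in-memory endpoint table, and the candidate server registry are invented for illustration and are not Visa's actual design:

      import uuid

      CANDIDATE_SERVERS = {"payments": "https://payments.internal",
                           "refunds": "https://refunds.internal"}
      ENDPOINTS = {}                       # dynamic endpoint -> candidate application server

      def polling_phase(requested_service):
          """First API request: return a short-lived dynamic endpoint bound to the candidate server."""
          endpoint = f"/txn/{uuid.uuid4().hex}"
          ENDPOINTS[endpoint] = CANDIDATE_SERVERS[requested_service]
          return endpoint

      def transaction_phase(endpoint, payload):
          """Second API request: perform the data transaction through the dynamic endpoint."""
          server = ENDPOINTS.pop(endpoint)  # the endpoint is single-use in this sketch
          return f"sent {payload} to {server}"

      ep = polling_phase("payments")
      print(transaction_phase(ep, {"amount": 12.5}))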
  • Patent number: 11948010
    Abstract: An embodiment includes extracting, by a scheduler, function-tag data associated with a function identified by a deployment request. The embodiment also includes selecting, by the scheduler, a computing device within a server cluster to host the function based at least in part on a comparison of the function-tag data and host-tag data associated with the computing device. The embodiment also includes issuing, by the scheduler, an instruction to the computing device, wherein the issuing of the instruction causes an allocation of resources for hosting execution of the function.
    Type: Grant
    Filed: October 12, 2020
    Date of Patent: April 2, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Joseph W. Cropper, Duy Nguyen, Jeffrey W. Tenner
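    A minimal Python sketch of the tag comparison described above, assuming tags are plain string sets; the overlap-count scoring rule is an illustrative stand-in for the scheduler's actual comparison:

      def select_host(function_tags, hosts):
          """Pick the cluster host whose host-tag data overlaps most with the function-tag data."""
          return max(hosts, key=lambda h: len(function_tags & hosts[h]))

      hosts = {"node1": {"gpu", "ssd"},
               "node2": {"ssd", "high-mem"},
               "node3": {"arm"}}
      host = select_host({"ssd", "high-mem"}, hosts)
      print("issue instruction to allocate resources on", host)   # node2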
  • Patent number: 11928366
    Abstract: A cloud-based storage system within a cloud computing environment performs operations including: monitoring, for the cloud-based storage system, one or more storage system operations, wherein the cloud-based storage system includes a virtual instance storage layer and a cloud-based storage layer; determining, based at least upon the one or more storage system operations, one or more access patterns for the cloud-based storage system; and modifying, based at least upon the one or more access patterns for the cloud-based storage system, one or more cloud configurations for the cloud-based storage system.
    Type: Grant
    Filed: July 1, 2022
    Date of Patent: March 12, 2024
    Assignee: PURE STORAGE, INC.
    Inventors: Aswin Karumbunathan, John Colgrove, Constantine Sapuntzakis, Joshua Freilich, Naveen Neelakantam, Sergey Zhuravlev
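    A hedged Python sketch of mapping monitored operations to access patterns and then to configuration changes; the thresholds, configuration keys, and pattern labels are assumptions for illustration, not Pure Storage's method:

      from collections import Counter

      def access_pattern(operations):
          """Classify monitored storage operations as read-heavy, write-heavy, or mixed."""
          counts = Counter(operations)
          reads, writes = counts["read"], counts["write"]
          if reads > 2 * writes:
              return "read-heavy"
          if writes > 2 * reads:
              return "write-heavy"
          return "mixed"

      def modify_cloud_configuration(pattern):
          """Map the detected pattern onto settings for the virtual instance and cloud storage layers."""
          if pattern == "read-heavy":
              return {"instance_layer_cache_gb": 512, "object_store_class": "standard"}
          if pattern == "write-heavy":
              return {"instance_layer_cache_gb": 128, "object_store_class": "write-optimized"}
          return {"instance_layer_cache_gb": 256, "object_store_class": "standard"}

      ops = ["read"] * 90 + ["write"] * 10
      print(modify_cloud_configuration(access_pattern(ops)))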
  • Patent number: 11928504
    Abstract: A system and corresponding method queue work within a virtualized scheduler based on in-unit accounting (IUA) of in-unit entries (IUEs). The system comprises an IUA resource and arbiter. The IUA resource stores, in association with an IUA identifier, an IUA count and threshold. The IUA count represents a global count of work-queue entries (WQEs) that are associated with the IUA identifier and occupy respective IUEs of an IUE resource. The IUA threshold limits the global count. The arbiter retrieves the IUA count and threshold from the IUA resource based on the IUA identifier and controls, as a function of the IUA count and threshold, whether a given WQE from a given scheduling group, assigned to the IUA identifier, is moved into the IUE resource to be queued for scheduling. The IUA count and threshold prevent group(s) assigned to the IUA identifier from using more than an allocated amount of IUEs.
    Type: Grant
    Filed: March 8, 2023
    Date of Patent: March 12, 2024
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Jason D. Zebchuk, Wilson P. Snyder, II
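    A simplified software model of the in-unit accounting check described above; the field names follow the abstract, but the dictionary-based structures and admission logic are an illustration, not Marvell's hardware arbiter:

      from dataclasses import dataclass

      @dataclass
      class IuaEntry:
          count: int       # global count of WQEs from groups assigned to this IUA identifier
          threshold: int   # limit on how many in-unit entries those groups may occupy

      iua_resource = {7: IuaEntry(count=0, threshold=2)}
      iue_resource = []    # queued (iua_id, work-queue entry) pairs awaiting scheduling

      def admit_wqe(iua_id, wqe):
          """Move a WQE into the IUE resource only if the group's IUA count is below its threshold."""
          entry = iua_resource[iua_id]
          if entry.count >= entry.threshold:
              return False               # the group already uses its allocated amount of IUEs
          entry.count += 1
          iue_resource.append((iua_id, wqe))
          return True

      print([admit_wqe(7, f"wqe-{i}") for i in range(3)])   # [True, True, False]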
  • Patent number: 11928518
    Abstract: Noisy-neighbor detection and remediation is provided by performing real-time monitoring of workload processing and associated resource consumption of application components that use shared resource(s) of a computing environment; determining workload and shared resource consumption patterns for each of the application components; for each application, of a plurality of applications, that includes at least one of the application components, correlating the determined workload and shared resource consumption patterns of that application's component(s) and determining a correlated shared resource usage pattern for that application; performing impact analysis to determine the impact of the applications on each other; and identifying noisy-neighbor(s) that use the one or more shared resources and automatically raising an alert indicating those noisy-neighbor(s).
    Type: Grant
    Filed: August 10, 2021
    Date of Patent: March 12, 2024
    Assignee: Kyndryl, Inc.
    Inventors: Nadeem Malik, Anil Kumar Narigapalli, Rajeshwar Coimbatore Shankar, Anupama Ambe
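    A toy Python sketch of correlating per-application usage with shared-resource consumption to flag a noisy neighbor; the time series, the single shared resource, and the use of Pearson correlation are assumptions for illustration:

      from statistics import mean

      def pearson(xs, ys):
          """Plain Pearson correlation between two equal-length series."""
          mx, my = mean(xs), mean(ys)
          num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
          return num / den if den else 0.0

      shared_disk_util = [20, 25, 80, 85, 30, 90]   # monitored shared-resource consumption
      app_usage = {                                 # per-application workload patterns
          "app-a": [5, 6, 5, 6, 5, 6],
          "app-b": [10, 12, 70, 75, 15, 78],
      }

      noisy = max(app_usage, key=lambda app: pearson(app_usage[app], shared_disk_util))
      print(f"ALERT: {noisy} looks like a noisy neighbor on the shared disk")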
  • Patent number: 11922225
    Abstract: Provided is a cluster node recommendation system. A method of controlling the cluster node recommendation system includes: receiving user selection information from a user, the user selection information including at least one of a cloud vendor, an Information Technology (IT) resource size, and a free resource size; checking resource requirements of a designated application; outputting a node configuration by inputting the user selection information and the checked resource requirements of the application to an artificial intelligence module; verifying validity by placing a container in which the application is executed into the output node configuration; and providing the validated final node configuration to the user.
    Type: Grant
    Filed: November 1, 2023
    Date of Patent: March 5, 2024
    Assignee: STRATO CO., LTD.
    Inventors: Hyeong-Doo Kim, Ho-Chul Lee, Sun-Kyu Park, Nam-Kyu Park, Yong-Min Kwon
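    A hedged Python sketch of the recommend-then-validate flow; the simple sizing rule below stands in for the patent's artificial intelligence module, and all field names are invented for illustration:

      def recommend_nodes(selection, app_requirements):
          """Produce a node configuration from user selection information and application requirements."""
          per_node_cpu = selection["node_cpu"] - selection["free_cpu"]
          nodes = -(-app_requirements["cpu"] // per_node_cpu)   # ceiling division
          return {"vendor": selection["vendor"], "nodes": nodes, "node_cpu": selection["node_cpu"]}

      def validate(config, app_requirements):
          """Verify validity by checking that the application's container fits the recommended configuration."""
          usable = config["nodes"] * config["node_cpu"]
          return usable >= app_requirements["cpu"]

      selection = {"vendor": "aws", "node_cpu": 8, "free_cpu": 2}
      requirements = {"cpu": 20}
      config = recommend_nodes(selection, requirements)
      print(config, "valid:", validate(config, requirements))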
  • Patent number: 11922185
    Abstract: In an architecture of a virtualized computing system, plugins are less tightly integrated with a core user interface of a management server. Rather than being installed and executed at the management server as local plugins, the plugins are served as remote plugins from a plugin server, and may be accessed by a web client through a reverse proxy at the management server. Plugin operations may be executed at the plugin server and/or invoked from a user device where the web client resides. Furthermore, a plugin sandbox and other isolation configurations are provided at the user device, so as to further control the access capability and interaction of the plugins.
    Type: Grant
    Filed: May 2, 2022
    Date of Patent: March 5, 2024
    Assignee: VMware, Inc.
    Inventors: Tony Ganchev, Plamen Dimitrov, Aleksandar Marinov
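    An illustrative Python sketch of reverse-proxy routing from the management server's UI to remote plugin servers; the URL scheme, plugin names, and route table are assumptions, not VMware's actual API:

      PLUGIN_SERVERS = {
          "backup-plugin": "https://plugins.example.internal:8443",
          "network-plugin": "https://plugins.example.internal:9443",
      }

      def route(request_path):
          """Forward /ui/plugins/<name>/... requests to the remote plugin server; serve everything else locally."""
          parts = request_path.strip("/").split("/")
          if len(parts) >= 3 and parts[:2] == ["ui", "plugins"] and parts[2] in PLUGIN_SERVERS:
              return PLUGIN_SERVERS[parts[2]] + "/" + "/".join(parts[3:])
          return "serve locally from the core user interface"

      print(route("/ui/plugins/backup-plugin/views/summary"))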
  • Patent number: 11907763
    Abstract: A technique for dynamically determining a modification of an initial cloud computing deployment (CCD) of a serverless application with multiple application functions is described. The multiple application functions in the initial CCD are grouped into one or more deployment artifacts each comprising at least one application function, wherein each deployment artifact is associated with a dedicated cloud computing platform type selected from FaaS and CaaS. An apparatus of the present disclosure is configured to obtain at least one requirement for the serverless application or its deployment, and to obtain an application model of the serverless application.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: February 20, 2024
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Balázs Peter Gerö, András Kern, Dávid Jocha, Bence Formanek
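    A hedged Python sketch of grouping application functions into FaaS or CaaS deployment artifacts; the invocation-rate rule is an invented stand-in for the requirement and application model described in the abstract:

      functions = {
          "resize_image": {"calls_per_min": 3},
          "render_video": {"calls_per_min": 900},
          "notify_user": {"calls_per_min": 5},
      }

      def plan_deployment(app_model, faas_max_rate=60):
          """Assign bursty, low-rate functions to FaaS artifacts and sustained, high-rate ones to CaaS."""
          artifacts = {"faas": [], "caas": []}
          for name, attrs in app_model.items():
              target = "faas" if attrs["calls_per_min"] <= faas_max_rate else "caas"
              artifacts[target].append(name)
          return artifacts

      print(plan_deployment(functions))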
  • Patent number: 11900157
    Abstract: An embodiment of a semiconductor package apparatus may include technology to manage one or more virtual graphics processor units, and co-schedule the one or more virtual graphics processor units based on both general processor instructions and graphics processor instructions. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: February 13, 2024
    Assignee: Intel Corporation
    Inventors: Yan Zhao, Zhi Wang, Weinan Li
  • Patent number: 11900090
    Abstract: Disclosed are approaches for enforcement of updates for devices unassociated with a domain or directory service. An application executing on a client device can determine that the client device is to use a locator specified in a policy to receive and install updates to software installed on the client device. The application determines whether the client device complies with the policy based at least in part on a value of a registry key stored on the client device. The application then modifies a value of a registry key stored on the client device in an instance in which it is determined that the client device is to use the locator and that the client device does not comply with the policy.
    Type: Grant
    Filed: October 6, 2020
    Date of Patent: February 13, 2024
    Assignee: AirWatch LLC
    Inventors: Varun Murthy, Kalyan Regula, Shravan Shantharam, Jason Roszak
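    A minimal Python sketch of the compliance check and remediation; the registry is modeled as a dictionary, and the key path, policy fields, and locator URL are hypothetical, not AirWatch's actual policy schema:

      registry = {r"SOFTWARE\Policies\Updates\UpdateServiceUrl": ""}

      POLICY = {
          "use_locator": True,
          "key": r"SOFTWARE\Policies\Updates\UpdateServiceUrl",
          "value": "https://updates.example.internal",
      }

      def enforce(policy):
          """If the device must use the locator and the stored key disagrees, rewrite the registry key value."""
          if not policy["use_locator"]:
              return
          if registry.get(policy["key"]) != policy["value"]:   # device does not comply with the policy
              registry[policy["key"]] = policy["value"]        # modify the registry key value

      enforce(POLICY)
      print(registry)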
  • Patent number: 11893424
    Abstract: A system for training parameters of a neural network includes a processing node with a processor reconfigurable at a first level of configuration granularity and a controller reconfigurable at a finer level of configuration granularity. The processor is configured to execute a first dataflow segment of the neural network with training data to generate a predicted output value using a set of neural network parameters, calculate a first intermediate result for a parameter based on the predicted output value, a target output value, and a parameter gradient, and provide the first intermediate result to the controller. The controller is configured to receive a second intermediate result over a network, and execute a second dataflow segment, dependent upon the first intermediate result and the second intermediate result, to generate a third intermediate result indicative of an update of the parameter.
    Type: Grant
    Filed: January 24, 2022
    Date of Patent: February 6, 2024
    Assignee: SambaNova Systems, Inc.
    Inventors: Martin Russell Raumann, Qi Zheng, Bandish B. Shah, Ravinder Kumar, Kin Hing Leung, Sumti Jairath, Gregory Frederick Grohoski
  • Patent number: 11886917
    Abstract: Resources in an Infrastructure-as-a-Service (IaaS) system are upgraded in an iterative process. In response to an upgrade request indicating requested changes to a current configuration of the system, one or more graph representations of the current configuration and the requested changes are created. The graph representations include a control graph which has vertices representing resource groups, and edges representing dependencies among the resource groups. A batch of resource groups is identified to be upgraded in a current iteration based on the dependencies and Service Level Agreement (SLA) requirements including availability and elasticity of the system. Upgrade operations are executed on the identified batch using selected upgrade methods which handle potential incompatibilities during transition of system configurations. The graph representations are updated to include any new requested changes and recovery operations in response to feedback of failed upgrade operations.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: January 30, 2024
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Mina Nabi, Maria Toeroe, Ferhat Khendek
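    An illustrative Python sketch of batch selection over a control graph of resource groups; the dependency data and the small batch limit stand in for the SLA (availability/elasticity) constraints named in the abstract:

      control_graph = {            # resource group -> resource groups it depends on
          "web": {"app"},
          "app": {"db"},
          "db": set(),
          "cache": set(),
      }
      upgraded = set()

      def next_batch(graph, done, max_parallel=2):
          """Upgrade groups whose dependencies are already upgraded, a few at a time."""
          ready = [g for g in graph if g not in done and graph[g] <= done]
          return sorted(ready)[:max_parallel]

      while len(upgraded) < len(control_graph):
          batch = next_batch(control_graph, upgraded)
          print("upgrading batch:", batch)
          upgraded.update(batch)   # in the real system, failed operations would trigger recovery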
  • Patent number: 11886930
    Abstract: The technology disclosed relates to runtime execution of functions across reconfigurable processors. In particular, the technology disclosed relates to a runtime logic that is configured to execute a first set of functions in a plurality of functions and/or data therefor on a first reconfigurable processor, and a second set of functions in the plurality of functions and/or data therefor on additional reconfigurable processors. Functions in the second set of functions and/or the data therefor are transmitted to the additional reconfigurable processors using one or more of a first reconfigurable processor-to-additional reconfigurable processors buffers, and results of executing the functions and/or the data therefor on the additional reconfigurable processors are transmitted to the first reconfigurable processor using one or more of additional reconfigurable processors-to-first reconfigurable processor buffers.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: January 30, 2024
    Assignee: SambaNova Systems, Inc.
    Inventors: Ram Sivaramakrishnan, Sumti Jairath, Emre Ali Burhan, Manish K. Shah, Raghu Prabhakar, Ravinder Kumar, Arnav Goel, Ranen Chatterjee, Gregory Frederick Grohoski, Kin Hing Leung, Dawei Huang, Manoj Unnikrishnan, Martin Russell Raumann, Bandish B. Shah
  • Patent number: 11886931
    Abstract: The technology disclosed relates to inter-node execution of configuration files on reconfigurable processors using network interface controller (NIC) buffers. In particular, the technology disclosed relates to a runtime logic that is configured to execute configuration files that define applications and application data for applications using a first reconfigurable processor connected to a first host, and a second reconfigurable processor connected to a second host. The first reconfigurable processor is configured to push input data for the applications in a first plurality of buffers. The first host is configured to cause a first network interface controller (NIC) to stream the input data to a second plurality of buffers from the first plurality of buffers. The second host is configured to cause a second NIC to stream the input data to the second reconfigurable processor from the second plurality of buffers.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: January 30, 2024
    Assignee: SambaNova Systems, Inc.
    Inventors: Ram Sivaramakrishnan, Sumti Jairath, Emre Ali Burhan, Manish K. Shah, Raghu Prabhakar, Ravinder Kumar, Arnav Goel, Ranen Chatterjee, Gregory Frederick Grohoski, Kin Hing Leung, Dawei Huang, Manoj Unnikrishnan, Martin Russell Raumann, Bandish B. Shah
  • Patent number: 11874901
    Abstract: Provided are a method and device for processing a network flow, a storage medium, and a computer device. The method includes: acquiring a network flow and taking the acquired network flow as a discrete object; clustering the discrete object to obtain a clustering result; and outputting the clustering result. The disclosure solves the technical problem in the related art that a basis for formulating a network control policy cannot be derived from complex network information due to a complex network topology.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: January 16, 2024
    Assignee: Hillstone Networks Co., Ltd.
    Inventors: Shuyi Liu, Yingjie Cui, Haixia Qu
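    A toy Python sketch of clustering flows treated as discrete objects; the two-feature representation (packets, bytes), k=2, and plain k-means are assumptions for illustration, not Hillstone's clustering method:

      flows = [(10, 1_000), (12, 1_200), (900, 500_000), (950, 520_000), (11, 900)]

      def kmeans(points, k=2, iters=10):
          """Very small k-means: assign each flow to its nearest centroid, then recompute centroids."""
          centroids = list(points[:k])
          for _ in range(iters):
              clusters = [[] for _ in range(k)]
              for p in points:
                  i = min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
                  clusters[i].append(p)
              centroids = [tuple(sum(v) / len(c) for v in zip(*c)) if c else centroids[i]
                           for i, c in enumerate(clusters)]
          return clusters

      for i, cluster in enumerate(kmeans(flows)):
          print(f"cluster {i}: {cluster}")    # the clustering result could feed a network control policy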
  • Patent number: 11853801
    Abstract: The present disclosure relates to a plug-in for enhancing resource elastic scaling of a distributed data flow, and to a method for enhancing resource elastic scaling of a distributed data flow using the plug-in. The plug-in is connected to a scaling controller used for resource elastic scaling of a distributed data flow. The plug-in includes a decision maker, a decision model, and a scaling operation sample library. The scaling controller registers a data flow with the plug-in through a first interface. The scaling controller sends an optimal decision of resource scaling in each status to the plug-in through a second interface. The scaling operation sample library is configured to record the optimal decision of resource scaling in each status. The decision model is configured to make a prediction for a received data flow based on the optimal decisions recorded in the scaling operation sample library, to generate a prediction decision.
    Type: Grant
    Filed: October 20, 2021
    Date of Patent: December 26, 2023
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lijie Wen, Zan Zong
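    A minimal Python sketch of the sample library plus decision model; nearest-neighbour prediction over (input rate, backlog) statuses is an illustrative stand-in for the plug-in's learned decision model:

      sample_library = {}   # status (input rate, backlog) -> optimal parallelism decided by the controller

      def register_optimal_decision(status, parallelism):
          """Called via the second interface: record the controller's optimal scaling decision."""
          sample_library[status] = parallelism

      def predict_decision(status):
          """Predict a scaling decision for a new status from the closest recorded sample."""
          nearest = min(sample_library, key=lambda s: sum((a - b) ** 2 for a, b in zip(s, status)))
          return sample_library[nearest]

      register_optimal_decision((1_000, 50), parallelism=2)
      register_optimal_decision((10_000, 4_000), parallelism=8)
      print(predict_decision((9_500, 3_500)))   # -> 8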
  • Patent number: 11847395
    Abstract: A system for executing a graph partitioned across a plurality of reconfigurable computing units includes a processing node that has a first computing unit reconfigurable at a first level of configuration granularity and a second computing unit reconfigurable at a second, finer, level of configuration granularity. The first computing unit is configured by a host system to execute a first dataflow segment of the graph using one or more dataflow pipelines to generate a first intermediate result and to provide the first intermediate result to the second computing unit without passing through the host system. The second computing unit is configured by the host system to execute a second dataflow segment of the graph, dependent upon the first intermediate result, to generate a second intermediate result and to send the second intermediate result to a third computing unit, without passing through the host system, to continue execution of the graph.
    Type: Grant
    Filed: January 27, 2022
    Date of Patent: December 19, 2023
    Assignee: SambaNova Systems, Inc.
    Inventors: Martin Russell Raumann, Qi Zheng, Bandish B. Shah, Ravinder Kumar, Kin Hing Leung, Sumti Jairath, Gregory Frederick Grohoski
  • Patent number: 11824948
    Abstract: Disclosed are techniques for processing user profiles using data structures that are specialized for processing by a GPU. More particularly, the disclosed techniques relate to systems and methods for evaluating characteristics of user profiles to determine whether to offload certain user profiles to the GPU for processing or to process the user profiles locally on one or more central processing units (CPUs). Processing user profiles may include comparing the interest tags included in the user profiles with logic trees, for example, logic trees representing marketing campaigns, to identify user profiles that match the campaigns.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: November 21, 2023
    Assignee: Oracle International Corporation
    Inventor: David Lawrence Rager
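    A hedged Python sketch of the offload decision and tag matching described in the abstract above; the batch-size threshold and the campaign logic-tree format are invented for illustration:

      def should_offload(profiles, min_batch=10_000):
          """Offload large batches of profiles to the GPU; small batches stay on the CPU."""
          return len(profiles) >= min_batch

      def matches(profile_tags, campaign):
          """Evaluate a simple logic tree: the campaign matches if ANY group of ALL-required tags is present."""
          return any(required <= profile_tags for required in campaign["any_of_all"])

      campaign = {"any_of_all": [{"golf", "travel"}, {"cooking"}]}
      profiles = [{"golf", "travel", "news"}, {"cooking"}, {"music"}]

      target = "GPU" if should_offload(profiles) else "CPU"
      print(target, [matches(p, campaign) for p in profiles])   # CPU [True, True, False]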